In addition to his recent comments around AI tooling documentation, it turns out that Linus Torvalds has been doing some vibe coding himself. Over the holidays he has been working on a new open-source project called AudioNoise, which was started with the help of AI vibe coding.

Linus Torvalds routinely takes up new hobbies over the winter holidays. Last year he started building his own guitar pedals, or as he put it in the Linux 6.13-rc7 announcement, “LEGO for grown-ups with a soldering iron.”

  • CeeBee_Eh@lemmy.world · ↑14 ↓3 · 6 hours ago

    Stop calling it “vibe coding”. Vibe coding is literally yoloing the generated output and then repeatedly reprompting when it doesn’t work. The second you put in a modicum of effort to examine the code generated you are no longer “vibe” coding.

  • CoyoteFacts@piefed.ca · ↑93 ↓2 · 15 hours ago

    Given his flogging of LLMs with respect to the kernel, I’m guessing Linus is of the opinion that vibe coding is okay to play around with for yourself and for your personal tools, but that using it professionally, or forcing others to interact with your vibe-coded junk, is where the fault lies. This is a fairly mature take on the surface, but I’m also someone who really can’t get past the part where the very existence of LLMs is carving ruin through the world via content theft, resource depletion, and class warfare… so like… I hope he pulls a little harder on those threads sometime instead of judging it purely on its utility.

    • Pennomi@lemmy.world · ↑2 ↓1 · 6 hours ago

      Sure, and I also think it’s important to realize that an open-weight LLM pretty much negates the problem of content theft. If you’re ignoring copyright, it’s only fair that you share alike. I don’t think it’s wrong to ingest all of humanity’s knowledge so long as you give it to all of humanity.

      The big commercial LLMs are the issue here, because they hoard the data for profit.

    • Holytimes@sh.itjust.works · ↑32 ↓8 · 15 hours ago

      It’s also generally stupid for any self-respecting practitioner of a trade to just disregard a new tool because it’s new and different.

      You at least try it, learn about it, use it. If it doesn’t work, you set it down and keep an eye on it; someone else may find a use case for it.

      AI is in that spot right now, despite the corpos over-pushing it into everything. AI does have a number of use cases that aren’t bad, though not great either, to be fair. And the tool IS improving every few months.

      If you can pull your own dick out of your ass for long enough to be pragmatic, it’s pretty easy to tell that while AI won’t ever live up to its hype, it will find a good place in most people’s workflow at SOME level. It’s unlikely that AI will ever reach a level where it’s safe to deploy on a project as high-profile as the kernel, at least not AI as we currently think of it. But for a hobby project, to rough out some jank code and bootstrap a proof of concept? It’s honestly good enough for that already.

      (general you, not you in particular coyote)

      • Feyd@programming.dev · ↑40 ↓2 · 15 hours ago

        It’s also generally stupid for any self-respecting practitioner of a trade to just disregard a new tool because it’s new and different.

        But it’s perfectly reasonable to disregard a tool for ethical reasons, of which there are many

      • forrgott@lemmy.zip · ↑22 ↓3 · 14 hours ago

        If you can pull your own dick out of your ass for long enough to be pragmatic, it’s pretty easy to tell that while AI won’t ever live up to its hype, it will find a good place in most people’s workflow at SOME level. It’s unlikely that AI will ever reach a level where it’s safe to deploy on a project as high-profile as the kernel.

        Maybe take your own advice? If the best-case scenario is still a tool with a high enough failure rate that it can’t actually be trusted, then your tool might just be hallucinations from huffing your own farts.

    • otacon239@lemmy.world · ↑18 ↓2 · 14 hours ago

      I don’t remember how long ago it was, but there was an interview where he said exactly this. AI is not a bad tool. It’s just that lots of people use it as the apply-to-everything hammer.

    • stephen01king@piefed.zip · ↑7 ↓2 · 13 hours ago

      I feel like you’re blaming the technology for something the corporations are doing. Try to separate them. Even if you recognise that LLMs are what allowed corporations to overhype their technology’s potential and scam their way into getting more money, the technology itself is not inherently bad.

      • CoyoteFacts@piefed.ca · ↑15 ↓2 · 12 hours ago

        I don’t think they’re all that separable. In the worst case, using a corporation’s LLM, as Linus is doing, is in essence voicing support for any negative effects in the strongest way possible. LLMs as a technology are fueled by stolen and scraped content, which is in turn fueled by myriad other issues, like datamining and privacy erosion. LLMs as a technology are also extremely inefficient and resource intensive; by writing yourself off as “just one person” doing it, you’re ignoring the global effect of many “one persons” all consuming resources by using this technology.

        I guess my point is that by using and helping to normalize LLM usage, you’re playing right into the hands of all the previously mentioned consequences. Big tech doesn’t need you to use their specific brand of LLM; they just need you to become dependent on the idea of LLM assistance itself. Their end goal is total adoption and mindshare, and they’re spending vast amounts of money to reach it. By refusing to support the technology no matter how “useful” it might be, we can prevent many of the inherent problems from getting worse, and prevent big tech from gaining even more leverage over slightly important things like “is the news real”.

        • FauxLiving@lemmy.world · ↑2 ↓1 · 10 hours ago

          It looks like you’re blaming the technology and not the corporations.

          OpenAI didn’t invent machine learning, nor did they invent the Transformer model.

          AI is no more responsible for OpenAI’s poor decisions than electricity or the IP protocol are, despite those also being key technologies required for the growth of OpenAI and all of the other AI companies.

          If a person is driving a car recklessly, you go after that person… you don’t outlaw automobiles.

        • Luminous5481 [they/them]@anarchist.nexus · ↑2 ↓4 · 10 hours ago

          LLMs as a technology are also extremely inefficient and resource intensive; by writing yourself off as “just one person” doing it, you’re ignoring the global effect of many “one persons” all consuming resources by using this technology.

          The same can be said of gaming. Criticizing LLMs for being resource intensive even for individual use would be hypocritical if you’re not also criticizing gamers for using their PCs to their full potential while gaming.

            • Luminous5481 [they/them]@anarchist.nexus · ↑2 ↓1 · 10 hours ago

              It’s absolutely a fair comparison. An LLM can’t use any more than 100% of the system resources; neither can a video game. For an individual, there’s no practical difference between being an avid gamer and someone who uses LLMs if you’re comparing environmental impact.

              If you don’t agree, then perhaps you could explain to me how using 100% of my GPU for an LLM is different than using 100% of my GPU for Cyberpunk 2077. Both use cases draw the same amount of power, so how is one worse for the environment than the other? Especially since I might use an LLM for a few minutes of work, whereas I’ve had many, many days where I spend 8 hours or more gaming. Surely my gaming causes far more damage to the environment than my using LLMs does, but perhaps you’re more educated on the matter than I am and can show me otherwise.

  • Pennomi@lemmy.world · ↑28 · 15 hours ago

    If there’s anyone I trust to think critically about the output of an LLM, it’s Linus. The main problem with vibe coding is when people never review the output and just let the AI go wild without understanding what happened.

    • tangonov@lemmy.ca · ↑23 ↓1 · 15 hours ago

      This is why I’d just call it programming with AI tools and not “vibe coding”.

      • exu@feditown.com · ↑5 · 11 hours ago

        The distinction is important. Using AI tools implies you check and verify the output. Vibe coding is not doing that, or having no idea what is happening.

          • tangonov@lemmy.ca · ↑4 · 9 hours ago

            It’s doing the job of a code review alongside putting the program together. The parts the AI cannot get right need to be written yourself. AI really speeds up tedium for me, but every line of code has to be read carefully. If the “vibe” is having to do the work anyway, then it’s vibe coding. I do like the speed boost I get for my 7 cents per query. It’s like I’m in my 30s without a kid again.