In addition to Linus Torvalds’ recent comments around AI tooling documentation, it turns out that Torvalds has been doing some vibe coding himself. Over the holidays he has been working on a new open-source project called AudioNoise, which was started with the help of AI vibe coding.
Torvalds routinely picks up new hobbies over the winter holidays. Last year he started building his own guitar pedals, or as he put it in the Linux 6.13-rc7 announcement, “LEGO for grown-ups with a soldering iron.”


I feel like you’re blaming the technology for something the corporations are doing. Try to separate them. Even if you recognise that LLMs are what allowed corporations to overhype their technology’s potential and scam their way into getting more money, the technology itself is not inherently bad.
I don’t think they’re all that separable. In the worst case, using a corporation’s LLM, as Linus is doing, is in essence voicing support for any negative effects in the strongest way possible. LLMs as a technology are fueled by stolen and scraped content, which is in turn fueled by myriad other issues, like data mining and privacy erosion. They are also extremely inefficient and resource intensive; by writing yourself off as “just one person” doing it, you’re ignoring the global effect of many “one persons” all consuming resources through this technology.
I guess my point is that using LLMs and helping to normalize their use plays right into all of the consequences mentioned above. Big tech doesn’t need you to use their specific brand of LLM; they just need you to become dependent on the idea of LLM assistance itself. Their end goal is total adoption and mindshare, and they’re spending vast amounts of money to reach it. By refusing to support the technology no matter how “useful” it might be, we can prevent many of its inherent problems from getting worse, and prevent big tech from gaining even more leverage over slightly important things like “is the news real”.
It looks like you’re blaming the technology and not the corporations.
OpenAI didn’t invent machine learning, nor did they invent the Transformer model.
AI is no more responsible for OpenAI’s poor decisions than electricity or the IP protocol, even though those are also key technologies required for the growth of OpenAI and all of the other AI companies.
If a person is driving a car recklessly, you go after that person… you don’t outlaw automobiles.
Wreckless driving is a good thing, isn’t it?
Only when you’re drunk
The same can be said of gaming. Criticizing LLMs for being resource intensive even for individual use would be hypocritical if you’re not also criticizing gamers for pushing their PCs to their full potential while gaming.
You must be literally trolling if you think that is a fair comparison.
It’s absolutely a fair comparison. An LLM can’t use any more than 100% of the system’s resources, and neither can a video game. For an individual, there’s no practical difference between being an avid gamer and being someone who uses LLMs, if you’re comparing environmental impact.
If you don’t agree, then perhaps you could explain to me how using 100% of my GPU for an LLM is different from using 100% of my GPU for Cyberpunk 2077. Both use cases draw the same amount of power, so how is one worse for the environment than the other? Especially since I might use an LLM for a few minutes of work, whereas I’ve had many, many days where I spent 8 hours or more gaming. Surely my gaming causes far more damage to the environment than my using LLMs does, but perhaps you’re more educated on the matter than I am and can show me otherwise.
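For what it’s worth, anyone who wants to put numbers on this instead of arguing hypotheticals can just measure it. Here is a minimal sketch, assuming an NVIDIA card with nvidia-smi on the PATH (the function name and the 60-second sampling window are my own choices for the example): run it once while the LLM is generating and once while the game is running, then compare the averages it prints.

# Rough GPU power-draw sampler; assumes nvidia-smi is installed and on PATH.
# Run during an LLM session and again during a gaming session, compare watts.
import subprocess
import time

def average_power_draw(seconds=60, interval=1.0):
    samples = []
    for _ in range(int(seconds / interval)):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        samples.append(float(out.splitlines()[0]))  # first GPU only
        time.sleep(interval)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print(f"average GPU power draw: {average_power_draw():.1f} W")

Multiply the average watts by the hours spent on each activity and you get a rough energy comparison for the two workloads on the same machine.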
Are you talking about self hosted LLMs or commercial ones, like ChatGPT?
Self-hosted. I don’t use anything from a corporation that I don’t absolutely have to.