Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 4 Posts
  • 218 Comments
Joined 3 years ago
Cake day: June 23rd, 2023


  • The mistakes it makes depend on the model and the language. GPT-5 models can make horrific mistakes, though, where they randomly remove huge swaths of code for no reason. Every time it happens I’m like, “what the actual fuck?” Undoing the last change and trying again usually fixes it though 🤷

    They all make horrific security mistakes quite often. Though, that’s probably because they’re trained on human code that is *also* chock full of security mistakes (former security consultant, so I’m super biased on that front haha).



  • You want to see someone using, say, VS Code to write something with, say, Claude Code?

    There’s probably a thousand videos of that.

    More interesting: I watched someone who was super cheap try to use multiple AIs to code a project because he kept running out of free credits. Every now and again he’d switch accounts and use up those free credits.

    That was an amazing dance, let me tell ya! Glorious!

    I asked him which one he’d pay for if he had unlimited money and he said Claude Code. He has the $20/month plan but only uses it in special situations because he’ll run out of credits too fast. $20 really doesn’t get you much with Anthropic 🤷

    That inspired me to try out all the code assist AIs and their respective plugins/CLI tools. He’s right: Claude Code was the best by a HUGE margin.

    Gemini 3.0 is supposed to be nearly as good but I haven’t tried it yet so I dunno.

    Now that I’ve said all that: I am severely disappointed in this article because it doesn’t say which AI models were used. In fact, the study authors don’t even know what AI models were used. So it’s 430 pull requests of random origin, made at some point in 2025.

    For all we know, half of those could’ve been made with the Copilot gpt5-mini that everyone gets for free when they install the Copilot extension in VS Code.


  • Good games are orthogonal to AI usage. It’s possible to have a great game that was written with AI using AI-generated assets. Just as much as it’s possible to have a shitty one.

    If AI makes creating games easier, we’re likely to see 1000 shitty games for every good one. But at the same time we’re also likely to see successful games made by people who had great ideas but never had the capital or skills to bring them to life before.

    I can’t predict the future of AI but it’s easy to imagine a state where everyone has the power to make a game for basically no cost. Good or bad, that’s where we’re heading.

    If making great games doesn’t require a shitton of capital, the ones who are most likely to suffer are the rich AAA game studios. Basically, the capitalists. Because when capital isn’t necessary to get something done anymore, capital becomes less useful.

    Effort builds skill but it does not build quality. You could put in a ton of effort and still fail or just make something terrible. What breeds success is iteration (and luck). Because AI makes iteration faster and easier, it’s likely we’re going to see a lot of great things created using it.









  • I use gen AI every day and I find it extremely useful. But there are degrees to every model’s effectiveness. For example, I have a wide selection of AI models (for coding) at my disposal from OpenAI, Google, Anthropic, etc., and nearly every open source model that exists. If I want to do something simple like change a light theme (CSS) to a dark one, I can do that with gpt5-mini, gpt-oss:120b or any of the other fast/cheap models… Because it’s a simple task.
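    To make that “match the model to the task” idea concrete, here’s a rough sketch in Python. The model names and the keyword heuristic are just my own illustration of the routing idea, not anything official:

    ```python
    # Rough sketch: pick a cheap or strong model based on task complexity.
    # Model names and the heuristic below are illustrative only.

    CHEAP_MODELS = ["gpt5-mini", "gpt-oss:120b"]        # fast/cheap, fine for simple edits
    STRONG_MODELS = ["claude-sonnet", "gemini-3-pro"]   # slower/pricier, for planning-heavy work

    def pick_model(task: str) -> str:
        """Naive heuristic: long or architecture-flavored tasks get a strong model."""
        heavy = ("architecture", "refactor", "design", "plan", "migrate")
        is_complex = len(task) > 200 or any(word in task.lower() for word in heavy)
        return STRONG_MODELS[0] if is_complex else CHEAP_MODELS[0]

    print(pick_model("Change the light CSS theme to a dark one"))            # -> gpt5-mini
    print(pick_model("Plan the architecture for a new multiplayer backend")) # -> claude-sonnet
    ```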

    If I need to do something complicated that requires a lot of planning and architecture, I’m going to use the best model(s) available for that sort of thing (currently, Claude Sonnet/Opus or Gemini Pro… The new 3.0 version; old 2.5 sucked ass). Even then I will take a skeptical view of everything it generates and make sure my prompts are only telling it to do one little thing at a time, verifying everything at each step.

    What I’m saying is that AI is an effective tool depending on the use case/complexity. Do I trust the big game publishers to use AI effectively like this? FUCK NO. Huge negative response to that question.

    Here’s how I suspect that they’ll use generative AI:

    • Instead of using a gen AI model to interpolate steps between frames (which is most effective at 2D or 2.5D stuff), they will use a video model to generate the whole thing from scratch, 8-10 second clips at a time. Complete with all the inconsistencies and random bullshit that it creates. The person in charge will slap a “good enough” sticker on it and it’ll ship like that.
    • Instead of viewing the code generated by AI with a critical eye, they will merely rely on basic unit tests and similar. If it passes the test, it’ll ship. We can expect loads of “how did this even happen?” bugs from that in the near future (not just in games).
    • Instead of using image models to generate or improve things like textures (so they line up properly), they’ll have them generate whole scenes. Because that saves time and time is money! And that’s all that matters to them. Even though there will be absolutely insane and obvious inconsistencies that piss off gamers.
    • Instead of paying people to use AI to help them translate text, they’ll just throw the text at the AI and call it a day. With no verification or improvements by humans whatsoever.
    • They’ll pay third parties for things like “AI cheat checking,” which will ban people left and right who weren’t cheating while doing nothing to stop actual cheaters (just like every anti-cheat that ever existed).
    • They will use AI bots for astroturfing and ad campaigns.
    • They will use poorly-made AI chat bots for completely unhelpful, useless support. People will jailbreak these and use them for even more nefarious purposes inside of games (because security folks won’t be paying as much attention in that space).

    There’s a lot of room in gaming for fun and useful generative AI but folks like Tim Sweeney will absolutely be choosing the villain route.


  • Data centers typically use closed loop cooling systems but those do still lose a bit of water each day that needs to be replaced. It’s not much—compared to the size of the data center—but it’s still a non-trivial amount.

    A study recently came out (it was talked about extensively on the Science VS podcast) that said that a long conversation with an AI chat bot (e.g. ChatGPT) could use up to half a liter of water—in the worst case scenario.

    This statistic has been used in the news quite a lot recently but it’s a bad statistic: That water usage counts the water used by the power plant (for its own cooling). That’s typically water that would come from ponds and similar that would’ve been built right alongside the power plant (your classic “cooling pond”). So it’s not like the data centers are using 0.5L of fresh water that could be going to people’s homes.

    For reference, the actual data center water usage is 12% of that 0.5L: 0.06L of water (for a long chat). Also remember: This is the worst-case scenario with a very poorly-engineered data center.
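    Quick back-of-the-envelope with those figures (just the numbers from the study as reported, nothing more precise than that):

    ```python
    # Back-of-the-envelope using the figures above (worst-case long chat).
    total_water_l = 0.5        # study's worst-case estimate, including power plant cooling
    datacenter_share = 0.12    # ~12% is attributable to the data center itself
    print(f"{total_water_l * datacenter_share:.2f} L")  # -> 0.06 L
    ```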

    Another stat from the study that’s relevant: Generating images uses much less energy/water than chat. However, generating videos uses up an order of magnitude more than both (combined).

    So if you want the lowest possible energy usage of modern, generative AI: Use fast (low parameter count), open source models… To generate images 👍



  • The power use from AI is orthogonal to renewable energy. From the news, you’d think AI data centers had become the number one cause of global warming. Yet they’re not even in the top 100 causes. Even at the current pace of data center buildouts, they won’t make the top 100… ever.

    AI data center power utilization is a regional problem specific to certain localities. It’s a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It’s not a universal problem across the globe.

    Aside: I’d like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to “AI data centers”. If fusion power becomes a reality in the next 50 years it’ll have more than made up for any emissions from data centers. From all of them, ever.





  • It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have player-vs-bot style matches).

    That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣

    The AI hater extremists seem to be in two camps:

    • Data center haters
    • AI-is-killing-jobs folks

    The data center haters are the strangest to me, because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people’s PCs (and other personal hardware).

    Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.