Imagine a game like The Sims where you can adjust how autonomous the sims you control are. I could see AI being used for that.
Or having an Elder Scrolls game where you just respond however you want and the NPC adapts to it.
You would have to design the game around an LLM, not just drop one into existing games.
It might be cute for the guards in Skyrim to have unique dialogue, until one of them denies the Holocaust or says feminism is cancer.
There actually is a semi-working system for Skyrim/Fallout: https://art-from-the-machine.github.io/Mantella/
Also, not all LLMs are Nazi machines. I almost exclusively use abliterated models and I’ve never once had one go on a Nazi tirade.
Though I mostly use it for Linux/code or random Home Assistant projects, not for conversation.
What value would it add to the game?
- LLMs are computationally expensive
- Replacing voice actors with AI means making dialogue worse
- Replacing writers with AI means making the story worse
At the end of the day, AI is mostly a marketing term for LLMs, and LLMs just aren’t that useful in most games. They average out a dataset to autocomplete a response, and that autocompletion is worse than what a human would have written.
We saw with procedurally generated worlds that it takes a lot of effort to prune what is generated to make the game interesting.
There are particular subgenres of games and applications where LLMs might be useful though.
We do use AI in videogames, and have for multiple decades (with varying levels of sophistication).
You can’t cage an LLM. They will break out at some point; it’s been proven again and again.
(it can have its uses, but this idea will run rampant with time - except of course that is the point, it could be awesome)
Have you ever talked with an AI? It sucks.
I’ve talked to them often. So I don’t bore my family with my wild ideas lol
I heard about a Chinese rpg that did something similar. The conversations were wide open, and instead of clicking through limited dialog choices, you had to type your responses. You get some guidance on what the purpose of the conversation is, but that’s it. Like: “cheer this person up!”
I think it’s a cute idea but ultimately too unpredictable using the current generation of LLMs.
IMO AI is better used as a game design tool than something running live in game. I remember running around so many open world games where it was obvious you had left the area you were meant to be in. Suddenly there’s few monsters, no quests or NPCs, and the least thought given to foliage and landscape decisions. BORING. I feel like that’s a great use of AI - create a non-critical landscape players can continue to explore, even if they won’t make any progress on the main quest/story lines.
A game studio isn’t going to pay designers to create rich experiences in unnecessary parts of the world, but they should be willing to pay a designer to review a region like that and get it into the game.
I think Where Winds Meet tried this, right? The NPCs ended up saying anachronistic things and making travel itineraries for Beijing or something.
Do they? I’ve talked to several NPCs and it never happened to me. At most, they get completely confused about what you’re saying. E.g., one kid thought he was rich enough to buy a house; I tried to tell him he’s not, and he thought I took his money (and started crying, but also became friends?). In another, a guqin player wondered if anyone could tell how sad she was from her playing. Instead, we’re keeping secrets? (No idea how that came about.)
And before anyone points it out, I dropped the game due to quests requiring the MC to drink alcohol (can’t stand games like that; just a me issue). Sad, because I loved everything else too :(
one kid thought he was rich enough to buy a house; I tried to tell him he’s not, and he thought I took his money (and started crying, but also became friends?).
I don’t know about confused, have you ever talked to a toddler?
I haven’t actually played it (won’t play any game that used or uses LLM software), so I can only tell you what I’ve read.
Shame, it looked interesting
I’d love to see it being used by enemies so they’re challenging without cheating, though.
I’d love to see it being used by enemies so they’re challenging without cheating, though.
This is a different sort of problem that’s outside the scope of generative AI. Making a computer opponent that can kick a human player’s ass is technology we’ve had since Deep Blue beat Garry Kasparov in 1997.
The problem isn’t actually making a computer that’s challenging; that’s been solved. The problem is that it won’t be any fun for the human if the computer is actually allowed to go all out: if Kasparov couldn’t win in ’97, then you sure as hell aren’t winning today. But it also won’t be any fun if you nerf it too badly; low-level chess bots are weird. The sweet spot isn’t just a matter of difficulty either: the nearly unsolvable part is getting it to play in a way that feels like a realistic human opponent.
And that’s just from a turn-based game, kinda the closest thing to a level playing field humans were ever gonna get. For any game played in real time, the computer is able to treat it like it’s being played at 60 turns per second. Is it “cheating” for the computer to have perfect reflexes, but otherwise still be following the rules of the game perfectly? How would you even try to take this away from the computer to make it see games the way humans do?
Generative AI doesn’t have any kind of solution for any of this. ChatGPT famously can’t play chess, at all. It’s a different type of AI that really can’t have any useful application here.
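For the "perfect reflexes" part at least, one approach people have tried (this is just an illustrative sketch, with made-up tuning numbers, not any real game's implementation) is to wrap a strong policy in human-like limits: a reaction delay plus a cap on actions per second, so the bot still follows the rules but can't play at 60 turns per second.

```python
import random

class HumanizedBot:
    """Wraps a (possibly perfect-play) policy with human-like limits.
    All numbers here are invented tuning knobs for illustration."""

    def __init__(self, policy, reaction_ms=200, jitter_ms=80, max_aps=6):
        self.policy = policy              # function: game_state -> action
        self.reaction_ms = reaction_ms    # base reaction delay
        self.jitter_ms = jitter_ms        # random variation per decision
        self.min_gap_ms = 1000 / max_aps  # cap on actions per second
        self.next_ready_at = 0.0

    def act(self, game_state, now_ms):
        # Refuse to act while still "reacting" to the previous event,
        # so the bot can't issue frame-perfect inputs.
        if now_ms < self.next_ready_at:
            return None
        delay = self.reaction_ms + random.uniform(0, self.jitter_ms)
        self.next_ready_at = now_ms + max(delay, self.min_gap_ms)
        return self.policy(game_state)
```

Run it in the game loop once per tick; most ticks it returns `None`, and over a full second it only manages a handful of actions, like a person would.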
Like those teddy bears taken off the shelves. AI is not nearly as good as the marketing hype says. Eventually it will be, but that isn’t happening soon.
The curiosity is killing me, do you remember what the highly upvoted comment you replied to said?
Despite being free/cheap to use right now, AI is expensive to run in terms of things like water and electricity. The companies that own the datacenters that perform the AI operations are running at a loss because they want to capture public trust and market share. Hence, no one wants to power a game with AI, when the people playing the game would just see it as a seamless advancement in game mechanics.
Also, no one wants to appeal to gamers directly, because they aren’t a good demographic to have singing the praises of your product. Steve the Fortune 500 CEO, and Maria the director of the state DMV, will not be enthralled by Caleb the racist 14-year-old’s product endorsement.
Finally, we’ve found that it is really hard to put effective guardrails on LLMs. So any company that did this would be risking Caleb posting a video online where their game is used to display or discuss lewd sexual acts, leading to bad PR.
I saw someone play this game, Ai2U on a livestream recently.
You wake up in a cute girl’s home who’s eager to keep you by her side! Engage with thematic stories, puzzles, and unique AI NPCs who’ll go to any lengths to protect you—even when it means contradicting themselves. Bask in meet-cutes or brace for chaos if you try to escape.
Basically, you have to vocally speak with this AI anime girl who is obsessed with you. She has you basically trapped, and you somehow have to convince or trick her into letting you leave.
The whole game is unhinged but it’s a pretty fun example of an LLM being used in a game.
Because the kind of generative AIs which would be worth putting in a game (smaller ones) have two drawbacks for the hype train:
- you can’t promise an AGI which would justify the government putting billions in your company in order to stay “competitive”
- you can’t create a feedback loop of finance with Nvidia and the like, because your company wouldn’t need such computational power then
I’m pretty sure no one is going to think an in-game npc is real
Do it: modify Minecraft so a villager gives investment advice. No one could be dumb enough to expect that to be real, right?
No one could be dumb enough to expect that to be real, right?
Oh, my sweet summer child…
There might have been a misunderstanding.
What I meant is that, as an AI company, to serve so many clients at once you would have to downscale your AI models quite a lot.
In doing so, you limit your claims that you can “make an AGI, I swear bro, just one more server farm”. This means the government/investors are less likely to jump on the hype train.
All the while, downscaled models require way less training time and data scraping, meaning you won’t get to buy all of Nvidia for them to buy all of you for your market value to explode for your money to go stonks.
As far as I’m aware, this is the reason why you don’t see AIs as NPCs (at least yet; maybe when we get a little more reasonable, we can try to do it intelligently).
Please pardon me if I misunderstood your comment/post.
Are you willing to put in an API key and pay money for interactions with an LLM?
It’s not really a one time cost. And I don’t know if devs really want to take on that expense.
Is an API key necessary? Pretty sure there are local LLMs.
They would increase requirements significantly and be generally pretty bad and repetitive. It’s going to take some time before that happens.
Games already have pretty extensive requirements as it is; one model running for all NPCs wouldn’t be that bad, I don’t think. It would raise RAM requirements by maybe a gig or two and likely slow down initial loading time as the model loads in. I’m running a pretty decent model on my home server, which does the duties of a personified character, and the CT I’m running Ollama on only has 3 gigs allotted to it. And that’s not even using the GPU, which would speed it up tremendously.
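For the curious, a local setup like that is usually just HTTP calls to Ollama running on localhost. The sketch below only builds the request body (the endpoint and field names match Ollama’s `/api/chat` as I understand it, but treat the details as an assumption and check the docs for your version):

```python
def npc_chat_payload(model, persona, history, player_line):
    """Build the JSON body for a local Ollama /api/chat call.
    persona: the NPC's system prompt; history: prior (role, content) pairs."""
    messages = [{"role": "system", "content": persona}]
    messages += [{"role": r, "content": c} for r, c in history]
    messages.append({"role": "user", "content": player_line})
    return {
        "model": model,          # e.g. a small quantized model to keep RAM low
        "messages": messages,
        "stream": False,         # one blocking reply per NPC line
        "options": {"num_predict": 120},  # cap reply length for dialogue
    }
```

You’d POST that to `http://localhost:11434/api/chat` (Ollama’s default port) and read the reply text out of the response.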
I think the bigger problem would be testing; that would be a royal pain in the butt to manage, having to make a profile/backstory for every character you want running on the LLM. You would likely need a boilerplate ruleset, and then a few basic rules to model each character after. But the personality would never be the same player to player, nor would it be accurate; for example, I can definitely see the model trying to give advice that is impossible for the player to actually follow because it isn’t in the game’s code.
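The “boilerplate ruleset plus a per-character profile” idea could look something like this (all names, rules, and facts here are invented for illustration, not from any real game):

```python
BASE_RULES = """You are an NPC in a fantasy RPG.
- Stay in character at all times.
- Never mention the real world, real brands, or modern events.
- Never promise the player items, quests, or actions the game can't deliver.
- Keep replies under two sentences."""

def build_persona(name, backstory, knowledge):
    """Combine the shared ruleset with one character's profile.
    knowledge: facts this character may reference, to keep replies grounded."""
    facts = "\n".join(f"- {fact}" for fact in knowledge)
    return (f"{BASE_RULES}\n\n"
            f"Your name is {name}. {backstory}\n"
            f"Things you know:\n{facts}")
```

The shared rules go in once, and each character only needs a short profile on top, which at least keeps the per-character authoring (and testing) surface small.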
Would it? Game developers can run anything on their own servers.
That would be crazy expensive for the studios. LLM companies are selling their services at a loss at the moment.
crazy expensive
Citation missing, so unconvincing. We’re not talking about a general purpose LLM here. Are pretrained, domain-specific LLMs or SLMs “crazy expensive” to run?
I’d figure that small models could be run locally and even incorporated into the local game code without needing to use a big company’s API, if they wanted to.
There are models that can run on raspberry pi. Obviously not the latest and greatest but still useful
The training is much more expensive than the actual usage
Because it’s not good enough.
Some chap invested a lot of time into making the Skyrim experience nicer. I recommend you check out CHIM :)
Quite a lovely project, but you will have to spend some time to set things up. For example, if you have a good GPU available, you can set up TTS for NPCs, STT for yourself, and then a decent LLM to handle the world interactions. The NPCs then can listen to you talk, follow you, do stuff you tell them (like attack someone, or pick something off the floor), etc. It’s something quite revolutionary, if you can spend the time to get it to work. If you’re looking for some LLM provider on the cheap, nano-gpt has an 8 dollar per month tier that gives you “fair-use unlimited” access to open source models. Worth a shot!
Note: You won’t be able to run all the models and the game on the same computer. The CHIM wiki has some suggestions on the amount of compute needed, and alternatives for the services so that you don’t have to run everything locally.
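For anyone wondering how “do stuff you tell them” can work at all: a common pattern (not necessarily what CHIM actually does; this is a hedged sketch with a made-up action set) is to ask the LLM to emit structured output alongside its dialogue, then validate it before mapping it to engine commands.

```python
import json

# Hypothetical action vocabulary; a real mod would map these to script calls.
KNOWN_ACTIONS = {"follow", "attack", "pick_up", "wait", "none"}

def parse_npc_reply(raw):
    """Expect the LLM to answer with JSON like:
    {"say": "On my way.", "action": "follow", "target": "player"}
    Fall back to plain speech if it doesn't comply (LLMs often don't)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"say": raw.strip(), "action": "none", "target": None}
    action = data.get("action", "none")
    if action not in KNOWN_ACTIONS:
        action = "none"  # never pass unvetted strings to the engine
    return {"say": data.get("say", ""), "action": action,
            "target": data.get("target")}
```

The validation step is the important part: the model’s text goes to the TTS, but only whitelisted actions ever reach the game.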
Why would we?
Why anything? Sorry, but I don’t accept that as an answer. “Why do that?” Why get out of bed? That’s above my pay grade.
Why would “all this talk of AI not being profitable” be what triggers discussion about LLM use in games? I would think making games fun and interesting should be what triggers any discussion about using anything in games. Are you Satya Nadella trying to find some way to make LLMs profitable? All this ignores that people have actually been talking about exactly what you described for 2 years already.
You got me. I’m Satya Nadella. I would have gotten away with it too, if it wasn’t for you meddling kids and your dog.







