Imagine a game like The Sims where you can adjust how autonomous the sims you control are. I could see AI being used to control that.

Or an Elder Scrolls game where you respond however you want and the NPC adapts to it.

  • etchinghillside@reddthat.com · 6 hours ago

    Are you willing to put in an API key and pay money for interactions with an LLM?

    It’s not really a one-time cost, either. And I don’t know if devs really want to take on that expense.

      • SGforce@lemmy.ca · 6 hours ago

        Running models locally would increase system requirements significantly, and they’d be generally pretty bad and repetitive. It’s going to take some time before that happens.

        • Pika@sh.itjust.works · 3 hours ago

          Games already have pretty extensive system requirements as it is; one model running for all NPCs wouldn’t be that bad, I don’t think. It would raise RAM requirements by maybe a gig or two and likely slow down initial loading while the model loads in. I’m running a pretty decent model on my home server that does the duties of a personified character, and the CT I’m running Ollama on only has 3 GB allotted to it. And that’s not even using the GPU, which would speed it up tremendously.

          I think the bigger problem would be testing; it would be a royal pain in the butt to manage, having to make a profile/backstory for every character you want running on the LLM. You would likely need a boilerplate ruleset, plus a few basic rules to model each character after. But the personality would never be the same player to player, nor would it be accurate. For example, I can definitely see the model trying to give advice that is impossible for the player to actually follow because it isn’t in the game’s code.
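One possible guard against that failure mode is to treat the model's reply as a suggestion and validate it against the actions the engine actually implements before acting on it. A minimal sketch, where the action names and the `pick_action` helper are hypothetical, not from any actual game:

```python
# Sketch: keep an LLM-driven NPC inside the game's rules by checking
# its suggested action against the actions the engine really supports.
# Action names here are hypothetical.
VALID_ACTIONS = {"trade", "follow", "give_quest", "say"}

def pick_action(model_reply: str, fallback: str = "say") -> str:
    """Return the first valid action mentioned in the model's reply,
    or a safe fallback so the NPC never promises the impossible."""
    for raw in model_reply.lower().split():
        token = raw.strip(".,!?;:")
        if token in VALID_ACTIONS:
            return token
    return fallback

# The model rambles about abilities the engine doesn't implement:
print(pick_action("I shall teleport you! Or, um, trade?"))  # -> trade
print(pick_action("Let me enchant your sword."))            # -> say
```

The fallback matters: whatever the model hallucinates, the game only ever executes something it actually supports.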

          • hayvan@piefed.world · 5 hours ago

            That would be crazy expensive for the studios. LLM companies are selling their services at a loss at the moment.

            • lmmarsano@lemmynsfw.com · 3 hours ago

              crazy expensive

              Citation missing, so unconvincing. We’re not talking about a general purpose LLM here. Are pretrained, domain-specific LLMs or SLMs “crazy expensive” to run?

    • hesh@quokk.au · 6 hours ago

      I’d figure that small models could be run locally, and even incorporated into the game code itself without needing a big company’s API, if they wanted to.
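As a rough sketch of what per-NPC setup for such a local model could look like: a shared boilerplate ruleset plus a small per-character profile, assembled into a system prompt before it is handed to whatever local inference runtime the game ships. Every name, field, and rule below is made up for illustration; no particular inference library is assumed:

```python
# Sketch: build a system prompt for a locally run NPC model from a
# shared boilerplate ruleset plus a per-character profile.
# The ruleset text and all field names are hypothetical.
BOILERPLATE_RULES = (
    "You are an NPC in a fantasy RPG. Stay in character. "
    "Never offer actions the game cannot perform."
)

def npc_system_prompt(profile: dict) -> str:
    """Combine the shared ruleset with one character's profile."""
    lines = [BOILERPLATE_RULES]
    lines.append(f"Name: {profile['name']}")
    lines.append(f"Role: {profile['role']}")
    lines.append(f"Backstory: {profile['backstory']}")
    return "\n".join(lines)

prompt = npc_system_prompt({
    "name": "Maela",
    "role": "blacksmith",
    "backstory": "Lost her forge in the siege of Oldtown.",
})
print(prompt)
```

The boilerplate stays constant across the cast, so only the short profile varies per character — which is also what would need testing per NPC.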

      • AA5B@lemmy.world · 4 hours ago

        There are models that can run on a Raspberry Pi. Obviously not the latest and greatest, but still useful.

        The training is much more expensive than actually running the model.
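Back-of-envelope arithmetic supports that: a model's weight footprint is roughly parameter count times bytes per weight, which is why a small quantized model fits in a gig or two. The model sizes and quantization levels below are illustrative assumptions, and the figures ignore runtime overhead like the KV cache:

```python
# Back-of-envelope weight memory: params * bytes_per_weight.
# Model sizes and quantization levels are illustrative assumptions;
# real runtime usage adds overhead (KV cache, activations, etc.).
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB, weights only

print(weights_gb(3, 4))   # 3B model at 4-bit  -> 1.5
print(weights_gb(1, 8))   # 1B model at 8-bit  -> 1.0
print(weights_gb(7, 16))  # 7B model at fp16   -> 14.0
```

So a quantized model in the 1–3B range lands comfortably within a Pi's RAM, while unquantized larger models do not.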