• AnarchistArtificer@slrpnk.net · 7 hours ago

    Can someone help me understand the difference between Generative AI and procedural generation? (That isn’t something that’s relevant for Expedition 33, but I’m asking about it in general.)

    Like, I tend to use the term “machine learning” for the legit stuff that has existed for years in various forms, and “AI” for the hype-propelled slop machines. Most of the time the distinction between these two terms is pretty clean, but this area seems a bit blurry.

    I might be wrong, because I’ve only worked with machine learning in a biochemistry context, but it seems likely that modern procedural generation in games uses some amount of machine learning. In which case, would a developer need to declare that usage? That feels to me like it’s not what the spirit of the rule is calling for, but I’m not sure.

    • AdrianTheFrog@lemmy.world · 4 hours ago

      I don’t know of any games that use machine learning for procedural generation, and I’d be slightly surprised if there are any. But there is a bit of a distinction there, because procedural generation has to run at runtime, so it’s not something an artist could possibly be involved in.

      • AnarchistArtificer@slrpnk.net · 2 hours ago

        I’m not so much talking about machine learning being implemented in the final game, but rather used in the development process.

        For example, if I were to attempt a naive implementation of procedurally generated terrain, I imagine I’d use noise functions to create variety (which I wouldn’t consider to be machine learning). However, I’d expect that to end up producing predictable results, so to avoid that I could try chucking in a bunch of real-world terrain data, and that starts getting into machine learning.
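
        To make the first half of that concrete, here’s a rough sketch of what I mean by the noise approach: seeded value noise summed over a few octaves to build a heightmap. No machine learning involved, just deterministic maths on a seed (the names here are purely illustrative).

        ```python
        # Minimal value-noise heightmap: random values on a coarse grid,
        # bilinearly interpolated, summed over octaves of decreasing size.
        import numpy as np

        def value_noise(size, cell, rng):
            grid = rng.random((size // cell + 2, size // cell + 2))
            ys, xs = np.mgrid[0:size, 0:size] / cell
            y0, x0 = ys.astype(int), xs.astype(int)
            fy, fx = ys - y0, xs - x0
            top = grid[y0, x0] * (1 - fx) + grid[y0, x0 + 1] * fx
            bot = grid[y0 + 1, x0] * (1 - fx) + grid[y0 + 1, x0 + 1] * fx
            return top * (1 - fy) + bot * fy

        def heightmap(size=256, seed=0):
            rng = np.random.default_rng(seed)
            height = np.zeros((size, size))
            # Large features first, then progressively finer detail.
            for cell, amplitude in [(64, 1.0), (32, 0.5), (16, 0.25), (8, 0.125)]:
                height += amplitude * value_noise(size, cell, rng)
            return height / height.max()

        terrain = heightmap(seed=42)  # same seed -> exactly the same terrain
        ```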

        A different, less specific example I can imagine a workflow for is reinforcement learning. Like, if the developer writes code that effectively says “give me terrain that is [a variety of different parameters]”, and then, when the system produces that for them, they go “hmm, not quite. Needs more [thing]”. This iterative process could, of course, be done without any machine learning, if the dev were tuning the parameters themselves at each stage, but it seems plausible to me that it could use machine learning (in which case the dev would be tuning model hyperparameters rather than the parameters themselves).
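
        As a toy sketch of that loop (not reinforcement learning proper, just plain random search, with a scoring function standing in for the designer’s “needs more [thing]” feedback; every parameter name and number below is made up for illustration):

        ```python
        import random

        def generate_terrain(roughness, tree_density, water_level):
            # Placeholder for a real generator; here it just returns its inputs
            # so the scorer below has something to judge.
            return {"roughness": roughness, "tree_density": tree_density,
                    "water_level": water_level}

        def designer_score(terrain):
            # Stand-in for human feedback: prefer moderately rough, fairly
            # forested terrain with a low water level.
            return (-abs(terrain["roughness"] - 0.6)
                    - abs(terrain["tree_density"] - 0.8)
                    - terrain["water_level"])

        rng = random.Random(0)
        best_params, best_score = None, float("-inf")
        for _ in range(1000):
            params = (rng.random(), rng.random(), rng.random())
            score = designer_score(generate_terrain(*params))
            if score > best_score:
                best_params, best_score = params, score

        print("best roughness / tree density / water level:", best_params)
        ```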

        You make a good point about procedural generation at runtime, and I agree that this seems unlikely to be viable. However, I’d be surprised if machine learning wasn’t used in the development process in at least some cases. I’ll give a couple of hypothetical examples using real games, though I emphasise that I have no grounds to believe that either of these games actually used machine learning during development; this is just hypothetical pondering.

        For instance, in Valheim, maps are procedurally generated. In the meadows biome you can find raspberry bushes. Another feature of the meadows biome is that it occasionally has large clearings devoid of trees, and around the edges of these clearings there is usually a higher density of raspberry bushes. When I played, I wondered why this was the case: was it a deliberate design decision, or just an artifact of how the procedural generation works? With machine learning it could, in theory, be both of these things: the devs could tune the hyperparameters a particular way, notice that the output makes raspberry bushes more likely to cluster at the edges of clearings, and decide they like that. This kind of process wouldn’t require any machine learning to be running at runtime.

        Another example game is Deep Rock Galactic. I really like the level generation it uses. The biomes are diverse and interesting, and despite my having hundreds of hours in the game, there are very few instances I can remember of the level generation being broken in some way; the vast majority of environments appear plausible and natural, which is impressive given the large number of game objects and the complexity of the terrain. The level generation code that runs each time a new map is generated has a heckton of different parameters and constraints that enable these varied and non-broken levels. There’s certainly no machine learning being used at runtime here, but I can plausibly imagine machine learning being useful in the development process for figuring out which parameters and constraints are the most important (especially because too many will cause excessive load times for players, so paring them down would be useful).

        Machine learning certainly wouldn’t be necessary in either of these examples, but it’s something that could make certain parts of development easier.

        • AdrianTheFrog@lemmy.world · 2 hours ago

          Sure, I could definitely see situations where it would be useful, but I’m fairly confident that no current games are doing that. First of all, getting real-world data for that type of thing is a whole lot easier said than done. Even if you manage to find a dataset with positions of various features across various biomes and train an AI model on it, in 99% of cases it will still take a whole lot more development time, and probably be a whole lot less flexible, than manually setting up rulesets, blending different noise maps, having artists scatter objects in an area, etc. It will probably also have problems generating unusual terrain types, which is an issue if the game is set in a fantasy world with terrain unlike anything you’d find in the real world. So then you’d need artists to come up with a whole lot of data to train the model with, when they could just be making the terrain directly.

          I’m sure Google DeepMind or Meta AI or some team of university researchers could come up with a way to do AI terrain generation very well, but game studios are not typically connected to those sorts of people, even if they’re technically under the same parent company, like Microsoft or Meta.

          You can get very far with conventional procedural generation techniques: hydraulic erosion, climate simulation, maybe even a model of an ecosystem. And all of those things together would probably still be much more approachable for a game studio than some sort of machine-learning landscape prediction.
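
          For a sense of how simple the conventional route can be, here’s a heavily simplified, droplet-style erosion pass over a heightmap (illustrative only; real hydraulic erosion implementations also track sediment capacity, velocity, evaporation and so on):

          ```python
          import numpy as np

          def erode(height, drops=5000, strength=0.01, seed=0):
              rng = np.random.default_rng(seed)
              h = height.copy()
              size = h.shape[0]
              for _ in range(drops):
                  y, x = rng.integers(1, size - 1, size=2)
                  for _ in range(30):  # let each droplet take up to 30 steps
                      # Find the lowest of the 4 neighbours.
                      neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                      ny, nx = min(neighbours, key=lambda p: h[p])
                      if h[ny, nx] >= h[y, x]:
                          break  # local minimum: the droplet settles here
                      # Move a little material from the current cell downhill.
                      moved = strength * (h[y, x] - h[ny, nx])
                      h[y, x] -= moved
                      h[ny, nx] += moved
                      y, x = ny, nx
                      if not (0 < y < size - 1 and 0 < x < size - 1):
                          break  # droplet ran off the edge of the map
              return h

          # e.g. smoothed = erode(terrain)  # 'terrain' being any noise heightmap
          ```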

    • Jankatarch@lemmy.world · 6 hours ago (edited)

      You can use statistics to estimate a child’s final height from their current height and their parents’ heights.

      People “train” models by writing a program that randomly makes and modifies equations, keeping a change only when it improves accuracy.
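
      As a toy example of that idea applied to the height estimate above (the numbers are made up purely for illustration):

      ```python
      import random

      # (parent average height cm, child adult height cm) -- fabricated data
      data = [(165, 168), (170, 172), (175, 178), (180, 183), (185, 187)]

      def error(a, b):
          # Mean squared error of the guess: child = a * parent_avg + b
          return sum((a * p + b - c) ** 2 for p, c in data) / len(data)

      rng = random.Random(0)
      a, b = 1.0, 0.0                     # start with a bad guess
      best = error(a, b)
      for _ in range(20000):
          # Randomly nudge the equation's two numbers...
          na, nb = a + rng.uniform(-0.01, 0.01), b + rng.uniform(-0.5, 0.5)
          e = error(na, nb)
          if e < best:                    # ...and keep the nudge only if it helps
              a, b, best = na, nb, e

      print(f"child height ~ {a:.2f} * parent_avg + {b:.1f}  (error {best:.2f})")
      ```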

      Generative AI does the same kind of prediction: in the case of LLMs, it predicts what the first result on a Google search, or the first reply on WhatsApp, would look like.

      There are problems. Going from 94% to 95% accuracy takes exponentially more resources, because there isn’t some “code” you can just fix. Hallucinations will happen.

      On the other side, procedural generation in games just refers to handwritten algorithms.

      For example, a programmer may go “well, a maze is just multiple smaller mazes combined”, then write a program to generate mazes based on that concept.
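
      A small sketch of exactly that idea (recursive division: split the area with a wall that has one gap, then treat each half as its own smaller maze and repeat; generic illustrative code, not from any particular game):

      ```python
      import random

      WALL, OPEN = "#", " "

      def divide(grid, r1, c1, r2, c2, rng):
          # Split the open region whose cells span rows r1..r2 and columns
          # c1..c2 (odd coordinates) with a wall that has exactly one gap,
          # then recurse into the two halves.
          can_split_rows = r2 - r1 >= 2
          can_split_cols = c2 - c1 >= 2
          if not (can_split_rows or can_split_cols):
              return  # a single corridor: this sub-maze is finished
          if can_split_rows and (not can_split_cols or r2 - r1 >= c2 - c1):
              wall_row = r1 + 1 + 2 * rng.randrange((r2 - r1) // 2)
              gap_col = c1 + 2 * rng.randrange((c2 - c1) // 2 + 1)
              for c in range(c1 - 1, c2 + 2):
                  grid[wall_row][c] = WALL
              grid[wall_row][gap_col] = OPEN
              divide(grid, r1, c1, wall_row - 1, c2, rng)
              divide(grid, wall_row + 1, c1, r2, c2, rng)
          else:
              wall_col = c1 + 1 + 2 * rng.randrange((c2 - c1) // 2)
              gap_row = r1 + 2 * rng.randrange((r2 - r1) // 2 + 1)
              for r in range(r1 - 1, r2 + 2):
                  grid[r][wall_col] = WALL
              grid[gap_row][wall_col] = OPEN
              divide(grid, r1, c1, r2, wall_col - 1, rng)
              divide(grid, r1, wall_col + 1, r2, c2, rng)

      def maze(cells=8, seed=0):
          size = 2 * cells + 1  # odd grid: cells on odd indices, walls on even
          grid = [[WALL if r in (0, size - 1) or c in (0, size - 1) else OPEN
                   for c in range(size)]
                  for r in range(size)]
          divide(grid, 1, 1, size - 2, size - 2, random.Random(seed))
          return "\n".join("".join(row) for row in grid)

      print(maze())
      ```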

      It’s much cheaper: you don’t need a GPU or an internet connection to run the algorithm. And if it doesn’t work, people can debug it on the spot.

      Also it doesn’t require stealing from 100 million people to be usable

      (I kinda oversimplified generative AI; modern models may do something entirely different.)

    • lime!@feddit.nu · 7 hours ago (edited)

      generative ai is a subset of procedural generation algorithms. specifically, it’s a procedural algorithm with a massive number of weight parameters, on the order of hundreds of billions. you get the weights by training. for image generation (which i’m assuming is what was in use here), the term to look up is “latent diffusion”. basically you take all your training images and add noise to them step by step, then train the weights to undo one step of that noising at a time. then when you want an image you start from pure noise and run the model backwards.
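
      a toy numpy sketch of the forward half of that, plus the quantity a denoiser gets trained to predict (the network itself is left out, and the numbers are just typical illustrative values):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      steps = 1000
      betas = np.linspace(1e-4, 0.02, steps)   # how much noise each step adds
      alpha_bar = np.cumprod(1.0 - betas)      # cumulative "signal kept" per step

      def noisy_version(x0, t):
          # Jump straight to step t of the forward process: a weighted mix of
          # the original image x0 and fresh Gaussian noise.
          eps = rng.standard_normal(x0.shape)
          xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
          return xt, eps

      x0 = rng.random((8, 8))          # stand-in for a training image
      xt, eps = noisy_version(x0, t=500)
      # Training: a network is shown (xt, t) and asked to predict eps; the loss
      # is mean((predicted_eps - eps) ** 2). Sampling starts from pure noise at
      # t = steps - 1 and repeatedly subtracts the predicted noise.
      ```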

      • AnarchistArtificer@slrpnk.net · 2 hours ago

        Yeah, that was my understanding of things too. What I’m curious about is how the Indie Game awards define it, because if games that use ((Procedural Generation) AND NOT (Generative AI)) are permitted, then that would surely require a way of cleanly delineating Generative AI from the rest of procedural generation.

        • lime!@feddit.nu · 27 minutes ago

          most procedural algorithms don’t require training data, for one. they can just be given a seed and run. or rather, the number of weights is so minimal that you can set them by hand.

    • nlgranger@lemmy.world · 7 hours ago

      From my understanding, AI is the general field of automating logical (“intelligent”) tasks.

      Within it you will find Machine Learning algorithms, the ones that are trained on example data, but also other methods, for instance old text generators based on syntactic rules.

      Within Machine Learning, not all methods use Neural Networks. For instance, if you have seen cool brake calipers and rocket nozzles designed with AI, I believe those were made with genetic algorithms.

      For procedural generation, I assume there is a whole range of methods that can be used:

      • Unreal Engine Megaplants seems to contain configurable tree generation algorithms; that’s mostly handcrafted algorithms, with maybe some machine learning to find the parameter ranges.
      • Motion capture and 3D reconstruction models can be used to build the assets. I don’t believe these rely on stolen artist data.
      • Full-on image generation models (Sora, etc.) to produce assets and textures; these require training on stolen artist data AFAIK (some licensing arrangements were made between some companies, but I suspect that’s marginal).
      • AnarchistArtificer@slrpnk.net · 2 hours ago

        I agree with the ethical standpoint of banning Generative AI on the grounds that it’s trained on stolen artist data, but I’m not sure how tenable “trained on stolen artist data” is as a technical definition of what is not acceptable.

        For example, if a model were trained exclusively on licensed works and data, would this be permissible? Intuitively, I’d still consider that to be Generative AI (though this might be a moot point, because the one thing I agree with the tech giants on is that it’s impractical to train Generative AI systems on licensed data because of the gargantuan amounts of training data required)

        Perhaps it’s foolish of me to even attempt to pin down definitions in this way, but given how tech oligarchs often use terms in slippery and misleading ways, I’ve found it useful to try to pin terms down where possible.