Manor Lords and Terra Invicta publishers Hooded Horse are imposing a strict ban on generative AI assets in their games, with company co-founder Tim Bender describing it as an “ethics issue” and “a very frustrating thing to have to worry about”.

“I fucking hate gen AI art and it has made my life more difficult in many ways… suddenly it infests shit in a way it shouldn’t,” Bender told Kotaku in a recent interview. “It is now written into our contracts if we’re publishing the game, ‘no fucking AI assets.'” I assume that’s not a verbatim quote, but I’d love to be proven wrong.

The publishers also take a dim view of using generative AI for “placeholder” work, or indeed any ‘non-final’ aspect of game development. “We’ve gotten to the point where we also talk to developers and we recommend they don’t use any gen AI anywhere in the process because some of them might otherwise think, ‘Okay, well, maybe what I’ll do is for this place, I’ll put it as a placeholder,’ right?” Bender went on.

  • Katana314@lemmy.world · 2 days ago

    I need to admit that in the past day, I asked an AI to write unit tests for a feature I’d just added. I didn’t trust it to write the feature, and I had to fix the tests afterwards, but it did save time.
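As a sketch of that workflow: an assistant can draft tests like these quickly, but each assertion still needs a human pass. The `slugify` feature and its tests below are invented purely for illustration.

```python
# Hypothetical feature under test: a tiny slug helper.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# The sort of tests an assistant might draft; review each one before trusting it.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_collapses_whitespace():
    assert slugify("  Manor   Lords  ") == "manor-lords"

test_basic()
test_collapses_whitespace()
```

The drafting is fast; the fixing afterwards (wrong expected values, missed edge cases) is where the human time still goes.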

    I really don’t see any usefulness or good intent in the art world, though. So much of those models has been put together through copyright theft of people’s work. Disney made a pretty good case against them, before deciding to team up for a shitty service feature.

    It’s sad Clair Obscur lost that indie award, but hopefully the game dev world can take that as a bit of a lesson.

    • MountingSuspicion@reddthat.com · 1 day ago

      If you acknowledge the problem with theft from artists, do you not acknowledge there’s a problem with theft from coders? Code intended to be fully open source with licenses requiring derivatives to be open source is now being served up for closed source uses at the press of a button with no acknowledgement.

      For what it’s worth, I think AI would be much better in a post scarcity moneyless society, but so long as people need to be paid for their work I find it hard to use ethically. The time it might take individuals to do the things offloaded to AI might mean a company would need to hire an additional person if they were not using AI. If AI were not trained unethically then I’d view it as a productivity tool and so be it, but because it has stolen for its training data it’s hard for me to view it as a neutral tool.

      • Katana314@lemmy.world · 1 day ago

        If the models are in fact reading code that’s GPL licensed, I think that’s a fair concern. Lots of code on sites like Stack Overflow is shared with the default assumption that the authors’ rights are not protected (that varies for some coding sites). That’s helpful when the whole point is for people to copy-paste those solutions into large enterprise apps, especially if there’s no feasible way to write it a different way.

        The main reason I don’t pursue that issue is that with so much public documentation, it becomes very hard to prove what was generated from code theft. I’ve worked with AI models that were able to make fully functioning apps just off a project’s documentation, without even seeing examples.

        • MountingSuspicion@reddthat.com · 1 day ago

          I don’t think training on all public information is super ethical regardless, but to the extent that others may support it, I understand that SO may be seen as fair game. To my knowledge, though, all the big AIs have been trained on GitHub regardless of any individual project’s license.

          It’s not about proving individual code theft; it’s about recognizing the model itself is built from theft. Just because an AI image output might not resemble any preexisting piece of art doesn’t mean it isn’t based on theft. Can I ask what you used that was trained on just a project’s documentation? Considering the amount of data usually needed for coherent output, I would be surprised if it did not need some additional data.

          • Katana314@lemmy.world · 1 day ago

            The example I gave was more about “context” than “model”: data related to the question, not the model’s learning history. I would ask the AI to design a system that interacts with XYZ, and it would be thoroughly confused and have no idea what to do. Then I would ask again, linking it to the project’s documentation page and granting it explicit access to fetch relevant webpages, and it would give a detailed response. That suggests to me it’s working only off the documentation.
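The distinction being drawn here, context supplied at ask time versus data baked into the model, can be sketched as simple prompt assembly. The documentation snippet, endpoint, and `send_to_assistant` call below are all hypothetical:

```python
# Docs fetched or pasted at ask time become part of the prompt (context),
# not part of the model's training data.
docs = "POST /v1/widgets -- create a widget; fields: name (str), size (int)"
task = "Design a small client that creates a widget."

prompt = (
    "Answer using only this documentation:\n"
    f"{docs}\n\n"
    f"Task: {task}"
)

# send_to_assistant(prompt)  # hypothetical call; any LLM client would do
```

The model only "knows" the API because the documentation rides along in the prompt, which is why the same question without the docs falls flat.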

            That said, AIs are not strictly honest, so I think you have a point that the original model training may have grabbed data like that at some point regardless. If most AI models don’t track or cite the sources used for generation, be it artwork on DeviantArt or licensed GitHub repos, I think it’s fair to say those models should become legally liable; more so if there are ways of demonstrating “copying-like” actions from the original.

    • blaue_Fledermaus@olio.cafe · 2 days ago

      I recently used one “agentic ‘AI’” to help write unit tests. I was surprisingly productive with it, but also felt very dirty afterwards.

      • Scrubbles@poptalk.scrubbles.tech · 2 days ago

        Don’t. I think it honestly has a place. That place is vastly different from what business bros think it is, but it does have a place. Writing tests is a great use, and it’s a good double check. Writing documentation is good, and even writing some boilerplate code and models. The kicker is that you need to already be an engineer to use it, and to understand what it’s doing. I would not trust it blindly, and I feel confident enough to catch its mistakes.

        It’s another tool in our belt, it’s fine to use it that way. Management is insane though if they think you’ll 10x. Maybe 2x.

      • Holytimes@sh.itjust.works · 1 day ago

        The entire problem with AI is really a legal one. All the moral outrage over it traces back to legal arguments; even every philosophical argument being made all over the place comes down to the legalities of it.

        If you can find a single moral or philosophical argument that does not have a rooted bias in the law, then you might have a reason to feel dirty. But realistically you only feel dirty because you’re being told to feel dirty by idiots all around you.

        If you hold copyright in such high esteem that you feel disgraced and sullied for violating it even indirectly, then yeah, feel dirty. But I really doubt you hold the draconian laws of copyright to such a high moral standing as to let your self-worth be hurt by it.

        But even still, beyond AI, every tool you use in your workflow is almost guaranteed to be built off the back of abuse, slave labor, theft, and exploitation at some level. If we threw away tools and progress just because they were built by assholes, we would have no tools at all.

        Fight for better regulation and more care in the next step of advancement. But throwing away tools is just not realistic; we live in reality, unfortunately.

        If the tool is genuinely useless to you, then don’t use it. If it is genuinely useful, then use it. If you can find a better tool, use that instead.

        • blaue_Fledermaus@olio.cafe · 1 day ago

          The copyright thing doesn’t bother me much, but the absurdly inflated hype and pushiness from the companies does, and using it right now only feeds into that. Probably after the bubble bursts I won’t feel bad about using it.

    • ratel@mander.xyz · 2 days ago

      I often use it in programming to either lay out the unit tests or do something repetitive like creating entities or DTOs from schemas. These tasks I can do myself easily, but they’re boring and I will also make mistakes. I always have to check every single line and correct things, plus write one or two detailed prompts to make sure the correct pattern and style is followed. It saves me a lot of time, but it always tries to do more than it should: if it writes tests, it will try to run them, then try to fix them, then try to change my code, which is annoying, and I always cancel all of that.

      I find AI art and creative writing boring, and I only really see these things as tools to support being more efficient where applicable. You also have to know what you’re doing, just like with any other tool.

      • Corngood@lemmy.ml · 2 days ago

        create entities or DTOs from schemas

        Surely there are deterministic tools to do this?
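For simple cases, the deterministic route this comment asks about can indeed be done in a few lines. The schema shape, the `OrderDto` name, and its fields below are all invented for illustration:

```python
from dataclasses import make_dataclass

# Hypothetical schema-like dict; name and fields are illustrative only.
schema = {
    "title": "OrderDto",
    "properties": {"id": "int", "customer": "str", "total": "float"},
}

TYPE_MAP = {"int": int, "str": str, "float": float}

def dto_from_schema(spec):
    """Deterministically build a dataclass DTO from a schema-like dict."""
    fields = [(name, TYPE_MAP[kind]) for name, kind in spec["properties"].items()]
    return make_dataclass(spec["title"], fields)

OrderDto = dto_from_schema(schema)
order = OrderDto(id=1, customer="Ada", total=9.99)
```

Real generators handle full JSON Schema, nesting, and naming conventions, but the point stands: the mapping itself is mechanical, so the same input always yields the same output.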

        • ratel@mander.xyz · 2 days ago

          There are, and I used to use them, but they aren’t error-free either, nor do they follow the style guides I need to adhere to, so it’s essentially the same outcome.

    • NoiseColor @lemmy.world · 2 days ago

      I don’t know what you mean, but as a designer I can’t imagine my work without AI anymore. I get the same response from everybody I know in my line of work.

      I don’t get banning it. At most, the ethical prudes can limit themselves to models that were legally trained. But I have no problem admitting I am not one of those.

      • Katana314@lemmy.world · 2 days ago

        I still haven’t seen anything neat from any models certified as using only legally permitted content. That said, to my knowledge there are very few of that variety.

        Training off the work of current artists serves to starve them by negating the chance that companies will hire them, and results in circumstances where AI trains off other AIs, creating terrible work and a complete lack of innovation.

        People suggest a brilliant future where no one has to work and AI does everything, but current generations of executives are so cut-throat and greedy to maximize revenue at the top, that will never happen without extreme, rapid political and commercial reform.

        • NoiseColor @lemmy.world · 2 days ago

          Artists have always been starving. The future is such that if you can’t compete with AI, choose another profession where you can. That’s not something I want, but the world is changing and people have to change with it. That’s either by finding another profession or by voting in politicians who can redistribute the wealth back to them. There is no option where progress stops, where the clock stops ticking.

          • Katana314@lemmy.world · 2 days ago

            Many artists do starve, and many others succeed. Not sure what your point is, or why you want to shift the needle more in the former direction.

            AI can’t compete with artists if artists aren’t generating content to feed the model. Even if the models could achieve consistent art, it would mean we get no new themes or ideas. The people who would normally invent those new styles start out by repeating what exists, and get paid for that.

            Many nations provide grants for art because they recognize it’s a field that doesn’t always generate immediate, quantifiable monetary return but proves valuable in the long run. The base expectation is that companies recognize that value and uniqueness in fostered talent as well, rather than settling for the immediacy of AI prompts giving them “good enough” visuals.

            • NoiseColor @lemmy.world · 2 days ago

              That artists are always starving is just how it’s always been; I don’t think it can be an argument for or against anything.

              I’ve worked with AI image generation professionally, and I can say the results aren’t missing new ideas if the people using them aren’t. They are great for brainstorming new ideas. They can’t make a design, but they are a great tool for speeding up the process.

              I love art. I go to galleries often. I don’t think AI can do that, and it never will be able to. Not true art, like capturing a moment in time with the original style of the artist and their life experience. I don’t think AI is a threat to that.

      • LOGIC💣@lemmy.world · 2 days ago

        I saw an article about an artist who used AI just for overall composition, and who said that he couldn’t compete if he didn’t do this, because everyone in his field was doing it and it was significantly faster than what he used to do.

        I suspect that when people say things like “AI cannot possibly help field X be more efficient like it does in field Y,” what they often really mean is, “I work in field Y and not field X.”

        • NoiseColor @lemmy.world · 2 days ago

          He’s right. You have to use the tools at your disposal. It’s not only a matter of survival but also of streamlining your work process: focusing on the main design decisions and letting the machine do at least some of the legwork when possible. It’s more pleasant like that.

          I don’t mind people hating on AI. Everybody can not use it as much as they want.