I've seen a few people mention this being an issue on here, but now I'm starting to see it myself: blatant bots posting really crappy AI images. I don't want this to turn into Facebook with shrimp Jesus, so I'm just wondering what can be done to prevent bots from polluting the airwaves here. Any ideas, or is work being done on this front?

  • FaceDeer@fedia.io · 3 points · 11 hours ago

    How else would this “trusted” status be applied without some kind of central authority or authentication? If one instance declares “this guy’s a bot” and another one says “nah, he’s fine”, how is that resolved? If there’s no global resolution, then there isn’t any difference between this and the existing methods of banning accounts.

    • Ada@lemmy.blahaj.zone · 1 point · 12 minutes ago

      I mean, for approving users, you just let your regular, established users approve instance applications. All they need to do is stop the egregious bots from getting through. And if there are enough of them, the applications will be processed really quickly. If there is any doubt about an application, let it through, because the account can be caught afterwards. And historical applications are already visible, and easily checked if someone has a complaint.

      And if you don’t like the idea of trusted users being able to moderate new accounts, you can tinker with that idea. Let accounts start posting before their application has been approved, but stop their content from federating outwards until an instance staff member approves them. It would let people post right away without requiring approval, and still get some interaction, but it would mitigate the damage that bots can do, by containing them to a single instance.
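      The gating idea described there could be sketched roughly like this (a hypothetical Python sketch; `Account`, `visible_locally`, and `should_federate` are made-up names for illustration, not actual Lemmy internals):

      ```python
      # Sketch of "post locally right away, federate outward only after approval".
      # All names here are illustrative assumptions, not real Lemmy code.

      from dataclasses import dataclass

      @dataclass
      class Account:
          name: str
          application_approved: bool = False  # set True by instance staff

      def visible_locally(author: Account) -> bool:
          # Everyone can post and interact on their home instance immediately.
          return True

      def should_federate(author: Account) -> bool:
          # Only approved accounts have their content delivered to other instances.
          return author.application_approved

      new_user = Account("newbie")
      assert visible_locally(new_user)       # can post right away
      assert not should_federate(new_user)   # but content stays on-instance

      new_user.application_approved = True   # staff approves the application
      assert should_federate(new_user)       # now content federates outward
      ```

      The point of the split check is that a bot's damage is contained to one instance until a human has looked at the account.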

      My point is, there are options that could be implemented. The status quo of open sign-ups with a growing number of bots doesn’t have to be the unquestioned approach going forward.

      • FaceDeer@fedia.io · 1 point · 6 minutes ago

        This is just regular moderation, though. This is how the Fediverse already works. And it doesn’t resolve the question I raised about what happens when two instances disagree about whether an account is a bot.