I’m not OP, but that post resonated with me a lot.

  • thenoirwolfess@lemmynsfw.com · 2 days ago

    According to Kurzgesagt it’s higher than 1 in 2, and the Dead Internet Theory puts 50% as the point where it begins. The con is that the aggressive bots are almost always made by people for the purpose of attack, so it’s still people behind them; the pro is that it’s likely one person’s agenda for every thousand or so bots.

      • zbyte64@awful.systems · 2 days ago

        You don’t fight bullshit with more bullshit. The problem is people passing bot output off as authentic human responses. Doing more of that with a different ideological slant doesn’t make things more alive, just dead with different aesthetics.

          • zbyte64@awful.systems · 1 day ago (edited)

            Let’s think that through. For that to work, we only want the bot to respond to toxic AI slop, not to authentic humans trying to engage with other humans. If you have an accurate AI slop detector, you could integrate it into existing moderation workflows instead of having a bot fake a response to such mendacity. Edit: But there could be value in siloing such accounts and feeding them poisoned training data… that could be a fun mod tool. A rough sketch of what that detector-plus-mod-queue integration could look like is below.
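            Purely hypothetical sketch: slop_score stands in for whatever classifier you’d actually trust, the thresholds and the author_flag_rate signal are invented, and the "silo" action is just the quarantine idea from the edit above. Nothing here is a real Lemmy or moderation API.

                # Hypothetical sketch: route suspected AI slop into the existing mod
                # queue instead of auto-replying to it. All names and thresholds are
                # made up for illustration.
                from dataclasses import dataclass
                from typing import Callable


                @dataclass
                class ModAction:
                    comment_id: int
                    action: str   # "none", "flag_for_review", or "silo"
                    reason: str


                def triage_comment(
                    comment_id: int,
                    body: str,
                    author_flag_rate: float,              # fraction of the author's past posts already flagged (assumed signal)
                    slop_score: Callable[[str], float],   # stand-in for a real AI-slop detector, returns 0.0-1.0
                    flag_threshold: float = 0.8,
                    silo_threshold: float = 0.95,
                ) -> ModAction:
                    """Send probable slop to human moderators; silo near-certain slop accounts."""
                    score = slop_score(body)

                    # Near-certain slop from an account that is mostly slop: candidate for
                    # the "silo" treatment, i.e. quarantining it among other flagged accounts.
                    if score >= silo_threshold and author_flag_rate >= 0.9:
                        return ModAction(comment_id, "silo",
                                         f"slop_score={score:.2f}, flag_rate={author_flag_rate:.2f}")

                    # Probable slop: surface it in the existing mod queue rather than
                    # having a counter-bot fake a human reply.
                    if score >= flag_threshold:
                        return ModAction(comment_id, "flag_for_review", f"slop_score={score:.2f}")

                    # Below threshold: leave authentic humans alone.
                    return ModAction(comment_id, "none", "below threshold")


                if __name__ == "__main__":
                    # Dummy detector purely so the example runs.
                    fake_detector = lambda text: 0.97 if "as an ai language model" in text.lower() else 0.1
                    print(triage_comment(42, "As an AI language model, I think...", 0.95, fake_detector))
                    print(triage_comment(43, "just my two cents as a human", 0.0, fake_detector))

            The point of the sketch is that the detector only feeds the human mod queue and the quarantine tooling; nowhere does it generate a reply pretending to be a person.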