Reddit’s conversational AI product, Reddit Answers, suggested that users interested in pain management try heroin and kratom, yet another extreme example of dangerous advice from a chatbot, even one trained on Reddit’s highly coveted trove of user-generated data.

https://en.wikipedia.org/wiki/Bromism

In 2025, a man was poisoned after ChatGPT suggested he replace the sodium chloride in his diet with sodium bromide; sodium bromide is a safe substitute only for non-nutritional purposes, such as cleaning.[3][4][5]

  • RightHandOfIkaros@lemmy.world · 1 day ago

    To be fair, actual people probably suggest harmful stuff to people online way more often than LLMs do. The AI had to learn it from somewhere; it didn’t create that behavior on its own.

    • ToastedPlanet@lemmy.blahaj.zone (OP) · 1 day ago

      If the AI were a person, the mods could have banned it. Instead, the developers had to patch how the AI responded to stimuli to prevent this behavior.

      The problem isn’t only the bad behavior. It’s the automation of that behavior, which lets systems, and essentially tool-assisted people, mass-produce it in a way that can’t be managed without aggressive moderation.

      Also, it sucks that the filter got applied to the article. It wasn’t there when I read it initially.