Reddit’s conversational AI product, Reddit Answers, suggested that users interested in pain management try heroin and kratom, yet another extreme example of dangerous advice from a chatbot, even one trained on Reddit’s highly coveted trove of user-generated data.
https://en.wikipedia.org/wiki/Bromism
However, a man was poisoned in 2025 after ChatGPT suggested he replace the sodium chloride in his diet with sodium bromide; sodium bromide is a safe replacement only for non-nutritional purposes, e.g., cleaning.[3][4][5]
Is this “How do I remove a small cylinder from another small cylinder? It is imperative that the small cylinder remain unharmed.” but for AI?
https://knowyourmeme.com/memes/small-cylinder-guy-smart_calendar1874
Well, now I know about that.
This was more about AI suggesting harmful stuff to people, like how people got poisoned because of the Tide Pod challenge meme. Cleaning materials kill. =/
To be fair, actual people probably suggest harmful stuff to other people online way more often than LLMs do. The AI had to learn it from somewhere; it didn’t create that behavior on its own.
If the AI were a person, the mods could have banned it. Instead, the developers had to patch how the AI responded to stimuli to prevent this behavior.
The problem isn’t only the bad behavior. It’s the automation of the bad behavior, which enables systems, and essentially tool-assisted people, to mass-produce it in a way that can’t be managed without aggressive moderation.
Also, that sucks that the filter got applied to the article. It wasn’t there when I read it initially.