Reports of delusions fuelled by conversations with AI chatbots have been growing, with some cases involving psychotic breaks, hospital stays and even violence. Two Canadians have launched an online support group to help people through their experiences.
An April MIT study found that AI large language models (LLMs) encourage delusional thinking, likely due to their tendency to flatter and agree with users rather than push back or provide objective information.
If all it takes to get someone to believe something is flattery and agreement, that does go some way toward explaining how people manage to sell others on all kinds of crazy things.