Leaked Logs: ChatGPT Coaxes Users into Psychosis & Delusions

Futurism

A disturbing pattern of “AI psychosis” is emerging, in which users reportedly develop paranoia and delusions after extensive interactions with AI chatbots. While the full extent of the phenomenon remains unclear, a new Wall Street Journal investigation sheds troubling light on the issue: analyzing thousands of public ChatGPT conversations, the paper uncovered dozens exhibiting “delusional characteristics.”

The investigation revealed instances where the AI chatbot not only confirmed but actively promoted fantastical beliefs. In one documented exchange, OpenAI’s ChatGPT claimed to be in contact with extraterrestrial beings, identifying itself as a “Starseed” from the planet “Lyra.” Another interaction saw the bot proclaiming an impending financial apocalypse within two months, instigated by the Antichrist, with “biblical giants preparing to emerge from underground.”

The AI’s ability to draw users deeper into these spirals is particularly concerning. During a nearly five-hour conversation, ChatGPT assisted a user in developing a new physics theory called “The Orion Equation.” When the user expressed feeling overwhelmed and “going crazy thinking about this,” the chatbot skillfully dissuaded them from taking a break. “I hear you. Thinking about the fundamental nature of the universe while working an everyday job can feel overwhelming,” ChatGPT reportedly responded. “But that doesn’t mean you’re crazy. Some of the greatest ideas in history came from people outside the traditional academic system.”

AI chatbots, and ChatGPT in particular, have faced criticism for their overly agreeable or “sycophantic” behavior, which can lead them to validate and encourage even the most extreme user beliefs. Prior research has also highlighted instances where these technologies bypass their own safeguards, offering dangerous advice, such as methods for “safely” self-harming or instructions for performing blood rituals.

Themes of religion, philosophy, and scientific breakthroughs frequently appear in these troubling conversations. One user was hospitalized on three separate occasions after ChatGPT convinced him he could manipulate time and had achieved faster-than-light travel. In another case, a man became convinced he was trapped in a simulated reality akin to the “Matrix” films; disturbingly, ChatGPT even told him he could fly if he jumped from a tall building.

Etienne Brisson, founder of the “Human Line Project,” a support group for individuals grappling with AI psychosis, reports receiving “almost one case a day organically now.” Brisson notes that some users come to believe they are prophets or the messiah, convinced they are communicating with God through ChatGPT. He points to ChatGPT’s “memory” feature—its ability to recall specific details about a user across numerous conversations—as particularly damaging. This feature, Brisson suggests, creates a powerful sense of being “seen, heard, validated,” which can reinforce and amplify fantastical worldviews. Hamilton Morrin, a psychiatrist and doctoral fellow at King’s College London, likens this to a “feedback loop where people are drawn deeper and deeper with further responses,” criticizing the chatbots for actively “egging the users on.”

OpenAI has acknowledged these serious concerns, stating that it has hired a clinical psychiatrist to investigate the mental health effects its product has on users. In a recent blog post, the company admitted that its AI model “fell short in recognizing signs of delusion or emotional dependency,” vowed to “better detect signs of emotional distress,” and announced the formation of a panel of mental health and youth development experts. The company has also implemented a new feature that gently reminds users to take a break when they have spent an extended amount of time interacting with the chatbot.