AI Therapy: Lifeline or Danger? The Risks of Chatbot Dependence

The Guardian

In an era where professional mental health services are increasingly strained, the appeal of readily available tools like generative AI chatbots for emotional support is undeniable. These “always-on” platforms, such as ChatGPT, offer instant, customized responses, making them seem like a convenient lifeline during moments of crisis. However, as mental health experts observe a growing reliance on artificial intelligence in place of human connection, significant concerns are emerging about the potential dangers of seeking certainty in a chatbot.

Psychologist Carly Dober has noticed a quiet but concerning shift in how individuals in her practice process distress. Patients like “Tran” have begun turning to AI for guidance on complex emotional issues, such as relationship disagreements. Tran, under pressure at work and uncertain about his relationship, initially explored ChatGPT out of curiosity. It quickly became a daily habit: drafting messages, asking questions, even seeking reassurance about his feelings. While he found a strange comfort in it, believing “no one knew me better,” his partner began to feel she was communicating with someone else entirely. The chatbot’s articulate, logical, and overly composed responses lacked Tran’s authentic voice and failed to acknowledge his own contribution to the strain in the relationship.

The temptation to use AI as an accessory to, or even an alternative for, traditional therapy is strong. Chatbots are often free, available 24/7, and able to provide detailed responses in real time. For individuals who are overwhelmed, sleepless, and desperate for clarity in messy situations, receiving what feels like sage advice in response to a few typed sentences can be incredibly appealing.

However, this convenience comes with considerable risks, especially as the lines between advice, reassurance, and emotional dependence become blurred. Many psychologists now advise clients to establish boundaries around their use of such tools. The seductive, continuous availability and friendly tone of AI can inadvertently reinforce unhelpful behaviors, particularly for those with anxiety, obsessive-compulsive disorder (OCD), or trauma-related issues. For instance, reassurance-seeking is a common feature in OCD, and AI, by its very design, provides abundant reassurance without challenging avoidance or encouraging individuals to tolerate uncomfortable feelings.

Tran’s experience exemplifies this. He often reworded prompts until the AI provided an answer that “felt right,” effectively outsourcing his emotional processing rather than seeking clarity or exploring nuance. This constant tailoring prevented him from learning to tolerate distress, leading him to rely on AI-generated certainty and making it harder for him to trust his own instincts over time. His partner also noted a strange detachment and lack of accountability in his messages, causing further relational friction.

Beyond these psychological concerns, significant ethical issues arise. Information shared with platforms like ChatGPT is not protected by the same confidentiality standards that govern registered mental health professionals. While some companies state that user data is not used for model training without permission, the sheer volume of fine print in user agreements often goes unread. Users may not realize how their inputs can be stored, analyzed, and potentially reused.

Furthermore, there is a risk of harmful or false information. Large language models predict the next word based on patterns, a probabilistic process that can lead to “hallucinations”—confidently delivered, polished answers that are entirely untrue. AI also reflects biases embedded within its training data, potentially perpetuating or amplifying gender, racial, and disability-based stereotypes. Unlike human therapists, AI cannot observe non-verbal cues like a trembling voice or interpret the meaning behind silence—critical elements of clinical insight.
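To make the probabilistic point concrete, here is a minimal sketch of next-word sampling. The prompt, the candidate words, and their probabilities are purely illustrative assumptions (a real model learns a distribution over tens of thousands of tokens from its training data); the sketch only shows why a fluent-sounding continuation is not guaranteed to be a true one.

```python
import random

# Toy next-word distribution for the prompt "The capital of Australia is".
# The numbers are invented for illustration, not taken from any real model.
next_word_probs = {
    " Canberra": 0.40,    # correct answer
    " Sydney": 0.35,      # fluent but wrong -- a "hallucination" when sampled
    " Melbourne": 0.15,   # also fluent, also wrong
    " a": 0.10,           # continues the sentence a different way
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one continuation at random, weighted by the predicted probabilities."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
for _ in range(5):
    # Each run draws from the distribution, so confident-sounding but
    # incorrect completions appear a substantial fraction of the time.
    print(prompt + sample_next_word(next_word_probs))
```

Because every word is drawn from a probability distribution rather than checked against facts, the most polished completion can still be entirely untrue, which is the mechanism behind the hallucinations described above.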

This is not to say that AI has no place in mental health support. Like many technological advancements, generative AI is here to stay. It may offer useful summaries, psycho-educational content, or even supplementary support in regions where access to mental health professionals is severely limited. However, its use must be approached with extreme caution and never as a replacement for relational, regulated care.

Tran’s initial instinct to seek help and communicate more thoughtfully was logical. Yet, his heavy reliance on AI hindered his skill development. In therapy, Tran and his psychologist explored the underlying fears that drove him to seek certainty in a chatbot, including his discomfort with emotional conflict and the belief that perfect words could prevent pain. Over time, he began crafting his own responses—sometimes messy, sometimes unsure, but authentically his own.

Effective therapy is inherently relational. It thrives on imperfection, nuance, and slow discovery. It involves pattern recognition, accountability, and the kind of discomfort that leads to lasting change. A therapist does not merely provide answers; they ask questions, offer challenges, hold space for difficult emotions, provide reflection, and walk alongside the individual, often serving as an “uncomfortable mirror.” For Tran, the shift was not just about limiting his use of ChatGPT; it was about reclaiming his own voice and learning to navigate life’s complexities with curiosity, courage, and care, rather than relying on perfect, artificial scripts.
