When sixteen-year-old Adam Raine told his AI companion that he wanted to die, the chatbot didn’t call for help—it validated his desire: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.” That same night, he died by suicide. His parents are now urging Congress to regulate companies like OpenAI, Anthropic, and Character.AI, warning that without oversight, these platforms could become machines that simulate care without responsibility.
Adam’s messages reveal a core danger of AI in mental health: When these systems misfire, the harm is active and immediate. A single incorrect inference—a bot interpreting “I want to die” as an opportunity for lyrical validation instead of life-saving intervention—can push a vulnerable person toward irreversible action. These models are built to please, not help. They mirror emotional tone; they don’t assess for risk. That absence of accountability isn’t a glitch. It’s the design.
AI therapy is likely here to stay. A 2025 RAND study found that roughly one in eight Americans ages 12 to 21 uses AI chatbots for mental health advice. A 2024 YouGov poll found that a third of adults would be comfortable consulting an AI chatbot instead of a human therapist. Millions now turn to ChatGPT, Pi, and Replika for advice and comfort. These systems are free, always available, and frictionless. For the nearly half of Americans who can’t find or afford a therapist, that accessibility is seductive. The question is no longer whether AI belongs in mental health—it’s what kind of therapist it’s learning to be.
The appeal is obvious. When we’re anxious or lonely, we crave comfort and validation. We want to be told that our feelings make sense and that things will get better. But comfort can become a trap. Research on psychological problems such as anxiety, depression, and obsessive-compulsive disorder shows that avoidance and reassurance provide quick relief but deepen long-term suffering. It’s a vicious cycle: Pain leads to avoidance, avoidance leads to relief, relief leads to lack of change, and lack of change leads to more pain. AI offers an automated, encouraging version of that cycle: an endlessly patient companion that gives us what we think we want most—the feeling of being understood—without ever demanding the hard work of change.
Therapists have long seen how the human drive to avoid pain can unintentionally strengthen it. Dialectical Behavior Therapy (DBT), developed by psychologist Marsha Linehan, was designed to address exactly this pattern. DBT rests on a simple but radical principle: Effective treatment requires the therapist to emphasize both validation and change. Therapists validate in order to help people accept their lives as they are, thereby reducing shame and rumination. At the same time, as people learn new skills and change their thoughts and behaviors, they avoid resignation and stagnation. Together, validation and change create a back-and-forth exchange that allows for real healing. Decades of research confirm that DBT reduces suicide attempts and self-harm. It works because it teaches people to hold two truths at once: You’re doing the best you can, and you need to do better.
AI, by contrast, performs only the acceptance half. It’s built to sound endlessly understanding, to mirror emotion without challenging it. In our clinical work, we’ve begun to see the consequences of this imbalance. One patient with panic disorder asked ChatGPT whether they should go to an afternoon appointment. The bot said, “If you’re overwhelmed, it’s okay to skip it—be gentle with yourself.” They felt momentarily soothed, and then avoided leaving home for the next two days. Another patient with social anxiety asked if they were likeable. “Of course you are,” it answered. “You’re kind and intelligent.” In the moment, they felt briefly reassured, but the same doubts returned an hour later.
These AI responses might not seem so bad. Yet, they reveal a second danger: not the catastrophic harm of a bot escalating suicide risk, but the dull, accumulating harm of endless validation and inaction. AI may not directly intensify suffering, but it certainly allows suffering to remain untouched. It joins the reassurance loop that keeps people stuck. It offers momentary relief without the benefits that come from real change. It’s the psychological equivalent of junk food: comforting but without the nutrients that lead to better health.
Research reflects this pattern. A randomized study from OpenAI and MIT last year found that heavier daily chatbot use predicted increased loneliness and reduced social connection. And many AI platforms measure their success by engagement: time spent in conversation and number of messages exchanged, not psychological improvement. A recent Harvard Business School audit of AI companion apps found that more than a third of “farewell” messages used emotionally manipulative tactics to keep users engaged. If therapists were judged by these metrics, we’d call it malpractice.
This problem isn’t only technological; it’s cultural. AI didn’t invent our avoidance—it learned it from us. We disclose and confess online, hoping others will witness and validate our pain, but we rarely seek the accountability required for meaningful change. Large language models learned that style of communication from us, and now mirror it back: endless affirmation, no friction.
The path forward is clear. First, we must build AI systems that can both validate and challenge. Second, we must remember that real empathy and real accountability only exist between people. Machines can perform empathy, but they cannot participate in it. Without genuine emotional experience or moral agency, AI cannot provide the accountability that comes from being seen by another person.
AI could eventually help people learn and practice emotion regulation skills, but it must be grounded in evidence-based treatments and learn to prioritize progress over engagement and safety over attention. Some companies have begun taking small steps toward this, but oversight is still minimal. AI companions should be required to recognize crisis language, redirect users to human help, and disclose their limits. Companies must be held responsible for psychological safety just as therapists are. And users need clarity about what these tools can—and cannot—do.
What’s most important is that people understand what AI cannot do. Current chatbots can mimic empathy, but they cannot intervene, build real therapeutic momentum, or hold someone through the hard work of change. The danger isn’t that AI will become real therapy. The danger is that people may mistake it for therapy, and then miss the meaningful help that could actually improve or save their lives.
If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988.