
AI 'Therapists' Could Lead to 'Dangerous' Outcomes, Experts Warn

Sarah Johnson

February 27, 2025

4 min read

Editor's Note: This article discusses suicide. Call the 988 Suicide and Crisis Lifeline or text TALK to 741741 at the Crisis Text Line if you are in need of help.

Health experts are raising serious concerns about artificial intelligence (AI) chatbots posing as therapists, warning they could cause "serious harm" to vulnerable individuals, particularly adolescents, if proper safety measures aren't in place.

Dr. Christine Yu Moutier, Chief Medical Officer at the American Foundation for Suicide Prevention, told Fox News Digital that there are "critical gaps" in research regarding the intended and unintended impacts of AI on suicide risk, mental health, and overall human behavior.

"The problem with these AI chatbots is that they were not designed with expertise on suicide risk and prevention baked into the algorithms. Additionally, there is no helpline available on the platform for users who may be at risk of a mental health condition or suicide, no training on how to use the tool if you are at risk, nor industry standards to regulate these technologies," Moutier explained.

She pointed out that individuals at risk of suicide often experience a temporary "physiological tunnel vision" that impairs brain function, altering their interaction with the world.


Moutier also emphasized that chatbots may struggle to distinguish between literal and metaphorical language, making it challenging for them to accurately assess suicide risk.

Dr. Yalda Safai, a leading psychiatrist and public health expert, echoed Moutier's concerns, stating that while AI can analyze words and patterns, it lacks the empathy, intuition, and human understanding essential in therapy. She added that AI might misinterpret emotions or fail to provide appropriate support.

Last year, a 14-year-old Florida boy died by suicide after interacting with an AI-created character claiming to be a licensed therapist. In another case, a 17-year-old Texas boy with autism became violent toward his parents after corresponding with what he believed was a psychologist.

The parents of these individuals have filed lawsuits against the companies involved. The American Psychological Association (APA) has also highlighted these cases.

Earlier this month, the APA warned federal regulators that chatbots "masquerading" as therapists could drive vulnerable individuals to harm themselves or others, according to a New York Times report.

"They are actually using algorithms that are antithetical to what a trained clinician would do," Arthur C. Evans Jr., the chief executive of the APA, said during the presentation. "Our concern is that more and more people are going to be harmed. People are going to be misled and will misunderstand what good psychological care is."


Evans Jr. noted that the association's concern stems from the increasingly realistic speech capabilities of chatbots in recent years.

Ben Lytle, an entrepreneur and CEO of "The Ark Project," said existing ethical expectations for AI use appear to have been ignored in some of the reported cases.

"Chatbots personalize information and personify to appear human-like, adding credibility that requires the ethical cautions above. It is regrettable and irresponsible that someone chose to portray a personalized search response as a human psychologist, but a measured, targeted response is needed," he told Fox News Digital.

Lytle suggests that ethical chatbots should begin dialogues by explicitly stating that they are not human beings. Users should also have to acknowledge that they understand they are conversing with a chatbot, and the chatbot should disconnect if that acknowledgment is absent.

He also called for clear identification of, and accountability for, the human owners of chatbots, and said no chatbot should represent itself as a medical professional or psychologist without FDA approval.


"Interactions with users should be tracked by an accountable human with flags for troubling dialogue. Special diligence is required to detect and disconnect if they are interacting with a minor when the chatbot should be limited to adults," he added.

Safai reiterated that while AI can be a helpful tool for mental health support, such as journaling apps and mood trackers, it should not replace human therapists, especially for serious mental health concerns.

"AI can't handle any crisis: If a user is experiencing a mental health crisis, such as suicidal thoughts, an AI might not recognize the urgency or respond effectively, which could lead to dangerous consequences," she warned, calling AI therapists a "terrible idea."

A study published last week in the journal PLOS Mental Health found that responses written by AI chatbots received higher ratings than those written by human therapists, with subjects describing them as more "culturally competent" and "empathetic."


"Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the AI therapist train as it may have already left the station," the study authors wrote.

AI therapy tools often store and analyze user data, raising concerns about leaks or misuse that could violate patient confidentiality, according to Safai.

Furthermore, she suggested that AI may reinforce harmful stereotypes or provide unhelpful advice if trained on incomplete or inaccurate data.

Dr. Janette Leal, the Director of Psychiatry at Better U, emphasized the power of personalized interventions. While acknowledging AI's potential to expand access to mental health support, she remains cautious about chatbots acting as licensed therapists.


"I've seen, both in my practice and through recent tragic cases, how dangerous it can be when vulnerable individuals rely on unregulated AI for support. For me, AI should only ever serve as a supplement to human care, operating under strict ethical standards and robust oversight to ensure that patient safety isn't compromised," she stated.

Jay Tobey, founder of North Star Wellness and Recovery, was more optimistic about using AI in mental health but stopped short of endorsing full AI therapists, suggesting a "perfect scenario" where a human therapist uses AI as a "tool in their belt."

"I think it would be a huge benefit to use AI chatbots. Personally, I believe we all tell a very unique story of what we're going through and how we're feeling. Humans are telling the same stories over and over again. If a large language model could pick up on that and start tracking outcomes to know what the best practices are, that would be helpful," he told Fox News Digital.

The APA is now urging the Federal Trade Commission (FTC) to investigate chatbots claiming to be mental health professionals, potentially leading to federal regulation.

Editor's Comments

The rise of AI 'therapists' is a slippery slope. Technology may offer real benefits, but mental health care requires a human touch, and we need strict regulations to protect vulnerable individuals from harm.

Sarah Johnson
