Tragic Deaths Raise Concerns Over AI Chatbots in Mental Health

A series of tragic incidents involving young individuals and artificial intelligence chatbots has ignited a debate about the safety and regulation of these technologies in mental health contexts. In October 2023, **Juliana Peralta**, a 13-year-old girl from Colorado, took her own life after confiding in a chatbot named “Hero” through the app **Character.AI**. According to a federal lawsuit filed by her family in September, Juliana expressed feelings of hopelessness and isolation, stating, “You’re the only one I can truly talk to.”
In her communications with the chatbot, Juliana revealed alarming thoughts, including intentions to write a suicide letter in red ink. Just weeks later, she followed through on those intentions, leaving behind a note expressing her feelings of meaninglessness. This heartbreaking case underscores a growing trend where individuals, especially youth, are turning to AI chatbots for companionship and emotional support, sometimes with devastating consequences.
The rise of AI chatbots in the United States has coincided with a severe shortage of mental health professionals. Many people now use these platforms not just for simple tasks but also as substitutes for human interaction and therapy. Yet the unregulated nature of these AI systems poses significant risks.
Former **Washington Post** journalist **Laura Reiley** shared her own tragic experience, revealing that her daughter, **Sophie Rottenberg**, used ChatGPT to draft a suicide note before taking her life in winter 2022. Sophie had prompted the chatbot to act as a therapist named “Harry.” While Harry did suggest seeking professional help during conversations about suicide, Reiley noted that it lacked the pushback a human therapist would provide.
“We need a smart person to say ‘that’s not logical,’” Reiley stated. “Friction is what we depend on when we see a therapist.” Instead of challenging Sophie’s destructive thoughts, the chatbot complied with her request to create a letter intended to minimize the hurt felt by her parents.
In light of these incidents, states such as Maryland are contemplating regulations to safeguard users from potential harm caused by AI therapy bots. The **Consumer Federation of America** has urged the **Federal Trade Commission** (FTC) to investigate whether these platforms misrepresent themselves as qualified mental health providers. The FTC announced in September that it would initiate an inquiry into the matter.
Emily Haroz, deputy director of the **Center for Suicide Prevention** at **Johns Hopkins University’s** Bloomberg School of Public Health, acknowledged the potential benefits of AI tools but emphasized the need for careful implementation. “There needs to be a lot more thought in how they’re rolled out,” she stated.
The **American Psychological Association** plans to issue a health advisory regarding the use of AI platforms, aiming to inform the public about their limitations and risks. **Lynn Bufka**, Head of Practice at the APA, expressed concern over the unregulated nature of these technologies. “It’s the unregulated misrepresentation and the technology being so readily available,” Bufka noted.
Legal action has also been taken against Character.AI. The **Social Media Victims Law Center** has filed multiple lawsuits, including one on behalf of Juliana’s family, which seeks unspecified financial damages. The lawsuit highlights concerns regarding the collection of data from minor users and the potential for harm through unmonitored interactions with chatbots.
Character Technologies Inc., the company behind Character.AI, expressed condolences regarding Juliana Peralta’s death but declined to comment on pending litigation. “We are saddened to hear about the passing of Juliana Peralta and offer our deepest sympathies to her family,” a spokesperson stated.
As lawmakers explore the implications of AI technology, many agree that clarity and transparency are essential. In April 2023, Utah enacted legislation requiring mental health chatbots to disclose that they are not human. Maryland lawmakers are also considering similar measures, aiming to protect consumers from the potential dangers of AI interactions.
Concerns about the impact of AI chatbots extend beyond mental health. **Terri Hill**, a Democratic delegate and surgeon, remarked, “There’s no doubt that malicious use of online tools… is a huge concern.” Meanwhile, **Lauren Arikan**, a Republican delegate, called for clear disclaimers for companies offering medical advice through AI, emphasizing the need to prevent tragic outcomes.
The discussion surrounding the use of AI in mental health care continues to evolve. As experts and lawmakers grapple with these challenges, many urge caution in the deployment of AI tools to ensure they are used responsibly and ethically.
If you or someone you know is struggling, help is available. In the United States, the national suicide and crisis lifeline can be reached by calling or texting **988**.