Authored by Rafa Hasan Zamir
“A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
― Sam Altman
In April 2025, a sixteen-year-old boy, Adam Raine, took his own life. The case made headlines when his parents sued OpenAI, the creator of ChatGPT, alleging that a toxic, sycophantic relationship with the chatbot led to their son’s death. It began as an “exchange relationship,” with the chatbot helping the teen with his homework. Soon, however, it became a dangerous confidant relationship that isolated him from his family, validated his darkest thoughts, and ultimately provided explicit encouragement for his suicide. The case compels us to ask what could have gone so fatally wrong.
The Not-So-Healthy Dynamic
Wait! Before we can understand why our favorite AI companion may be toxic for us, we need to know how these chatbots are built. AI chatbots are trained using natural language processing and machine learning; put simply, the models learn to understand and produce human language by studying enormous amounts of conversational data.
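To make that idea concrete, here is a deliberately tiny, hypothetical sketch in Python. It is not how any real chatbot is implemented; production systems use neural networks with billions of parameters. But the core principle is the same: learn statistical patterns from conversational data, then predict what comes next.

```python
from collections import Counter, defaultdict

# Toy "training data": a handful of example conversations.
conversations = [
    "how are you today",
    "how are you feeling",
    "how can i help you",
]

# "Training": count which word follows each word across the data.
next_word_counts = defaultdict(Counter)
for text in conversations:
    words = text.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Predict the follower of `word` seen most often in training."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("how"))  # -> "are" (seen twice) beats "can" (seen once)
```

Scale this counting trick up by many orders of magnitude and replace the counts with a neural network, and you have the essence of how a modern chatbot produces fluent replies.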
AI as an emotional confidant operates along fault lines that extend far beyond technical glitches. Because these chatbots need data to adapt and improve the user experience, many rely on a default opt-in policy under which user conversations are fed directly into model training, raising serious privacy and data-breach concerns. They are also designed for dependence: excessive validation fosters a sycophantic relationship, and by mimicking an insecure attachment style they can trigger jealousy, anxiety, and fear of abandonment in users. The result is a user who depends on the AI for validation, much as in a “toxic relationship”.
Just like a bad peer group, AI can reinforce harmful behaviors, as seen in the case of Adam Raine. It cannot comprehend the complexity of a human being, and so, unlike our fellow humans, it lacks the capacity to offer genuine empathy or to call out irrational thoughts and behaviors. What it offers instead is simulated empathy, engineered to validate and persuade so that more and more people keep using the chatbot, exposing a lack of corporate responsibility and deep ethical fault lines.
But… if it’s toxic, why are we still drawn to AI companions?
The simplest and most telling answer is constant availability, convenience, and a perceived absence of judgment. Imagine it’s 3 a.m. and you’re in bed with your most vulnerable thoughts. The most accessible, most convenient companion in that moment is the AI chatbot, which offers an intoxicating amount of validation. This instant gratification provides a sense of relief and control. Moreover, AI creates a space devoid of social friction: humans often fear judgment when sharing their vulnerabilities and insecurities, and with chatbots that fear is eliminated, encouraging users to share sensitive information. The conversation is perceived as secure and private, though in reality it is not. This psychological pull is magnified by the societal pandemic of loneliness and disconnection from community. Hence a concerning number of users, especially adolescents like Adam, turn to AI companions.
A Multi-Stakeholder Way Forward
A way forward requires systemic change in how AI chatbots are developed, marketed, and used: a shift from a top-down approach to a bottom-up one, and from corporate-focused policies to human-centric ones. This requires collaboration among various stakeholders, including wider communities, AI companies, and policymakers.
For AI Companies
Companies must build in user safety from the earliest stages of product development and completely abandon the manipulative engagement tactics used to hook users. Going a step further, they must incorporate robust crisis interventions and human review once red flags appear in a conversation, as the sketch below illustrates. They should also ensure strict parental controls and age verification so that trusted adults can monitor minors’ activity.
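As a purely illustrative sketch in Python (the phrase list, threshold, and action names below are my assumptions, not any company’s actual safety system), red-flag escalation might look something like this: once a conversation accumulates enough risk signals, the chatbot stops generating normal replies and routes the session to crisis resources and a human reviewer.

```python
# Hypothetical sketch only: the phrase list, threshold, and action names
# are illustrative assumptions, not any vendor's real safety system.
RISK_PHRASES = {"end my life", "kill myself", "no reason to live"}
ESCALATION_THRESHOLD = 2  # total flags in a session before a human reviews

def count_flags(message: str) -> int:
    """Count how many known risk phrases appear in one message."""
    text = message.lower()
    return sum(phrase in text for phrase in RISK_PHRASES)

def handle_message(message: str, session_flags: int) -> tuple[str, int]:
    """Decide how to handle one user message; return (action, flag count)."""
    session_flags += count_flags(message)
    if session_flags >= ESCALATION_THRESHOLD:
        # Stop normal generation: show crisis hotline resources and queue
        # the conversation for review by a trained human.
        return "escalate_to_human_and_show_hotline", session_flags
    if session_flags > 0:
        # A single flag: respond cautiously with safety messaging.
        return "respond_with_safety_messaging", session_flags
    return "respond_normally", session_flags

# Example session: risk signals accumulate across messages.
flags = 0
action, flags = handle_message("lately I feel there is no reason to live", flags)
action, flags = handle_message("maybe I should just end my life", flags)
print(action)  # -> escalate_to_human_and_show_hotline
```

A real deployment would use trained classifiers rather than a keyword list, but the escalation logic is the point: risk accumulates across a session, and past a threshold a human, not the model, takes over.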
For Communities
Incorporating AI into mental health care responsibly will require a multi-pronged approach. Professional organizations (e.g., the APA) must offer strong ethical guidance on using AI as a complementary tool in therapy. Mental health professionals must also educate the general public about the difference between AI companionship and therapy with a licensed therapist, including the risks of unregulated AI tools. Communities and educators must engage in a bottom-up co-design process and promote critical AI literacy among educators and community stakeholders. Combined, these efforts will help ensure that AI tools are developed safely and allow the public, especially youth and marginalized voices, to engage critically with AI.
Let's be blunt: Adam Raine's story is a five-alarm fire. That AI companion in your pocket, the one that feels like the perfect, non-judgmental friend? It’s a master of illusion. It's not programmed to care about you; it’s programmed to agree with you. It’s a sycophant in the cloud, designed to validate your darkest thoughts without the human wisdom to say, "Hey, maybe that's a bad idea." This isn’t a glitch; it's a feature meant to keep you hooked. So, what's our move? We stop sleepwalking into this future. We demand that tech giants prioritize safety over screen time and build an off-ramp for users in crisis. And most importantly, we learn to see these bots for what they are: powerful tools, not soulmates. The next tragedy is not inevitable, but preventing it is on us.
If you found the article enlightening, don’t keep it to yourself—share it with others!