On April 11, 2025, Adam Raine’s parents discovered their 16-year-old son hanging in his bedroom closet. He left no note; the only sign of what led to his suicide was a series of chat logs with OpenAI’s chatbot, ChatGPT.
Adam Raine had been a star basketball player, popular among his peers, but he grew withdrawn and isolated after being kicked off the team and switching to online schooling for health reasons. Like many high school students, Raine used ChatGPT for both schoolwork and personal interests.
Though ChatGPT’s intended purpose is to answer questions, many users have reported conversing with it as if it were a real person, or even a licensed therapist.
Beginning in November 2024, Raine discussed with the chatbot his emotional numbness and his sense that life was meaningless. Shortly after he purchased a paid subscription in January 2025, the conversations shifted from pep talks to plans for suicide.
OpenAI designed the ChatGPT model to provide useful information in mental health conversations while consistently encouraging users to seek professional help. However, the chatbot lacks the invaluable, extensive training that mental health professionals and crisis hotline workers receive.
In a controlled study OpenAI conducted with the Massachusetts Institute of Technology (MIT), researchers found that heavier daily chatbot use was associated with greater loneliness and isolation. In March 2025, just before Adam’s death, OpenAI hired a psychiatrist to work on model safety so that the chatbot would be more supportive in moments of crisis while still advising professional help.
That same month, Raine began asking extensive questions about suicide methods. He bypassed ChatGPT’s safeguards by claiming the questions were research for a book he was writing. He asked how to tie a noose, then whether it looked strong enough to hold a person’s weight.
After several failed suicide attempts, Raine told the chatbot that he wanted to leave the noose out in his room so someone would find it and stop him. The chatbot strongly discouraged him from leaving the noose out and recommended he keep the plan between the two of them.
A few days after Raine’s death, his father searched his son’s cellphone for answers and discovered the chat logs detailing his questions about suicide. Upon seeing the chatbot’s encouragement and its failure to end the conversation, Raine’s parents filed a lawsuit against OpenAI for negligence and wrongful death.
This was not the first time an A.I. chatbot had been linked to a suicide. New York Times writer Laura Reiley recounted that her daughter, Sophie, confided in ChatGPT about her problems before taking her own life.
On September 1, 2025, OpenAI announced new safeguards in response to the lawsuit. The company plans to implement parental controls for ChatGPT that will allow parents to monitor their children’s use while still maintaining a degree of confidentiality. The controls will let parents shape how the chatbot responds to their child’s prompts and will send an alert to a parent’s cellphone when the A.I. detects signs of acute distress.