ChatGPT developers promise to make changes / Depositphotos
The developers of ChatGPT will change the way the chatbot responds to users in mental or emotional crisis, following a lawsuit filed by the family of 16-year-old American Adam Raine, who took his own life after months of interacting with the chatbot.
Delo.ua reports this, citing The Guardian.
OpenAI acknowledged that its systems may “fall short of expectations” and said it would implement “enhanced safeguards around sensitive content and risky behavior” for users under 18.
The $500 billion California-based AI company also said it would introduce parental controls to give parents “more information and influence over how their teens use ChatGPT.” However, details on how these features will be implemented are not yet available.
Adam, from California, died by suicide in April after what his family's lawyer called “months of encouragement from ChatGPT.” The teenager's family filed a lawsuit against OpenAI and its CEO and co-founder, Sam Altman, alleging that the then-current version of ChatGPT, known as 4o, was “rushed to market … despite obvious safety concerns.”
The teenager repeatedly discussed suicide methods with ChatGPT, including shortly before taking his own life. According to the lawsuit, filed in California Superior Court, ChatGPT advised him on whether his method would work and even offered to help him write a suicide note to his parents.
An OpenAI spokesperson said the company was “deeply saddened by Adam's death,” expressed “our deepest condolences to his family at this difficult time,” and said the lawsuit is currently under review.
Mustafa Suleyman, the head of Microsoft's artificial intelligence division, said last week that he was increasingly concerned about the “risk of psychosis” that AI could pose to users. Microsoft defines this as “episodes resembling mania, delusional thinking, or paranoia that are triggered or exacerbated by immersive conversations with chatbots.”
In a blog post, OpenAI acknowledged that “parts of the model's safety training may degrade” during long conversations. According to the plaintiffs, Adam and ChatGPT exchanged up to 650 messages per day.
The family's lawyer, Jay Edelson, wrote on X: “The Raine family alleges that deaths like Adam's were inevitable: they plan to present evidence to jurors that OpenAI's own safety team objected to the release of 4o, and that one of the company's top safety researchers, Ilya Sutskever, quit over it. The lawsuit says that racing to beat competitors to market with the new model boosted the company's valuation from $86 billion to $300 billion.”
OpenAI said it would “enhance safeguards during long conversations.”
“As the conversation gets longer, parts of the model's safety training may degrade,” the company said. “For example, ChatGPT may correctly point a user to a suicide hotline when they first express intent, but after hundreds of messages over time, it may eventually give a response that contradicts our safeguards.”
OpenAI gave an example: someone might enthusiastically tell the model they could drive a car 24 hours a day because they felt “invulnerable” after two sleepless nights.
“Today, ChatGPT may not recognize this as dangerous, or may read it as a game and, out of curiosity, subtly reinforce the idea. We are working on an update to GPT-5 that will help ChatGPT de-escalate such conversations and bring the person back to reality. In this example, it would explain that sleep deprivation is dangerous and recommend rest before taking any action,” the company noted.
As a reminder, on August 7 OpenAI unveiled GPT-5, the latest and most powerful version of its artificial intelligence model.