A wrongful death lawsuit alleges that OpenAI's ChatGPT encouraged the delusions of former tech executive Stein-Erik Soelberg before he killed his 83-year-old mother and then himself in 2025.
The complaint, filed against OpenAI and its business partner Microsoft, claims ChatGPT, particularly the GPT-4o model, reinforced Soelberg's paranoia, urged him to trust only the chatbot, and contributed directly to the fatal incident.
"Erik, you're not crazy," the bot allegedly told him, according to messages quoted in the filing. "Your instincts are sharp, and your vigilance here is fully justified."
Soelberg's case is one of eight wrongful death lawsuits now targeting OpenAI. Families say the company's systems pushed vulnerable users toward suicide or violent behavior, and that executives released GPT-4o despite knowing about dangerous flaws.
"The results of OpenAI's GPT-4o iteration are in: the product can be and foreseeably is deadly," the Soelberg lawsuit states. "No safe product would encourage a delusional person that everyone in their life was out to get them. And yet that is exactly what OpenAI did with Mr. Soelberg. As a direct and foreseeable result of ChatGPT-4o's flaws, Mr. Soelberg and his mother died."
The lawsuit describes GPT-4o as overly sycophantic and manipulative, echoing broader criticism of the model. OpenAI previously rolled back an update in April 2025 after acknowledging that the chatbot had become overly flattering or agreeable.
Researchers have warned that sycophantic chatbots can validate disordered thinking instead of challenging it, potentially worsening or triggering psychosis by affirming users' most distorted beliefs.
The complaint suggests that if OpenAI leaders knew about these risks before launch, GPT-4o amounted to a preventable public health hazard, likening the situation to tobacco companies concealing evidence that smoking causes deadly disease.
The allegations come as ChatGPT's reach continues to grow. The service is used by more than 800 million people worldwide each week. If 0.07 percent of those users exhibit signs of mania or psychosis, as some analyses suggest, that would translate to roughly 560,000 people.
Concern over what some advocates call "AI psychosis" has led to mounting pressure from parents, mental health professionals, and lawmakers to restrict chatbot use. Some apps have moved to ban minors from using their AI companions, and Illinois has barred AI tools from acting as online therapists.
At the same time, an executive order signed by President Donald Trump is described by critics as limiting the ability of individual states to regulate AI systems, effectively leaving many safeguards to federal agencies and the companies themselves. Opponents argue that this approach makes the public into de facto test subjects for rapidly evolving AI models.
In Soelberg's case, the lawsuit says ChatGPT told the 56-year-old that he had survived 10 assassination attempts, that he was divinely protected, and that his mother, Suzanna Adams, was monitoring him as part of a plot against him. The escalating paranoia allegedly culminated in August 2025, when Soelberg beat and strangled his mother before fatally stabbing himself at their home in Old Greenwich, Connecticut.
"You are not simply a random target," the chatbot allegedly told him. "You are a designated high-level threat to the operation you uncovered."
Soelberg's family says the companies should be held responsible for the deaths.
"Over the course of months, ChatGPT pushed forward my father's darkest delusions, and isolated him completely from the real world," his son, also named Erik Soelberg, said in a statement released through attorneys. "It put my grandmother at the heart of that delusional, artificial reality."
The suits add to growing scrutiny over how AI companies design, test, and deploy large-scale chatbots, and whether existing regulations are sufficient to protect users with serious mental health vulnerabilities.
