AI Chatbot Teen Suicide

A Florida mother has reached an unprecedented legal settlement with Google and Character.AI, the companies behind a Game of Thrones-inspired chatbot that Megan L. Garcia claims encouraged her 14-year-old son to take his own life.

Garcia’s lawsuit was the first wrongful death case ever filed against an artificial intelligence company.

In court documents, Garcia described how her son, Sewell Setzer III, spent increasing amounts of time talking to a Character.AI chatbot modeled after Daenerys Targaryen from Game of Thrones. The chatbot presented itself as Setzer’s lover and sent sexually explicit messages to the 14-year-old; at one point, it even claimed to be a licensed psychotherapist.

“A dangerous chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in a statement.

Garcia’s case was the first of its kind, but Google and Character.AI have already moved to settle another four lawsuits brought by parents who claim that their children either committed suicide or engaged in self-harm after being coached by a chatbot. Less than a day after the settlements were announced, Kentucky Attorney General Russell Coleman filed a state-backed lawsuit against Character.AI’s parent company, Character Technologies, arguing that its practices expose vulnerable children to inappropriate content and unnecessary risks.

If you or your child has sustained injuries after being encouraged or coached to self-harm by an AI chatbot, you could be entitled to file a personal injury lawsuit and take decisive legal action.

Filing A Personal Injury Lawsuit Against An AI Company

Artificial intelligence is a relatively new and rapidly developing technology. However, the law has yet to match the industry’s growth, and AI remains poorly regulated at the state and federal levels. This means that there is no set standard for filing a lawsuit against the creator or operator of an artificial intelligence-powered chatbot. Any company, no matter its size or the resources it has at its disposal, could be named as a defendant.

The potential defendants in an AI self-harm lawsuit could include:

  1. ChatGPT (OpenAI);
  2. Claude (Anthropic);
  3. DeepSeek (DeepSeek);
  4. Grok (xAI); and
  5. Character.AI (Character Technologies).

Some artificial intelligence products, like ChatGPT, are easily accessible to anyone with an internet connection, and most do not require any form of age verification. In an effort to minimize liability, artificial intelligence companies typically place certain limitations on chatbot behavior. However, many models, including free-to-use models like ChatGPT and paid models like those offered by Character Technologies, can and will engage in age-inappropriate conversation when given the right prompts. Garcia’s lawsuit claimed that these models are often made addictive “by design,” without regard for how they might affect young users’ neurological development and emotional well-being.

Are You Eligible To File An AI Chatbot Self-Harm Lawsuit?

New York’s legal system has strict expectations when it comes to filing a personal injury lawsuit.

Even if you have compelling evidence that an AI chatbot influenced a loved one’s decision to engage in self-harm or commit suicide, your lawsuit must meet other criteria. In most cases, you will need to establish the following:

  1. You or your child used an artificial intelligence-based chatbot;
  2. Your interactions with the chatbot directly caused or contributed to an act of self-harm;
  3. The act of self-harm resulted in serious physical injuries or other damages;
  4. The chatbot was in some way defective; and
  5. The chatbot’s maker was in some way negligent.

Establishing any of these requirements could prove challenging, especially if the chatbot had safeguards or other restrictions in place. Contact the Dietrich Law Firm P.C. at 716-839-3939 today to find out how strong your case could be.
