
New York, New York: Google AI Chatbot And Teen Suicides

Jed Dietrich, Esq., Recognized as a Super Lawyer and American Institute of Trial Lawyers Litigator of the Year, is Committed to Defending the Rights of New York City Families. If You or Your Child Has Sustained Physical Injuries After Being Encouraged to Self-Harm by an AI Chatbot, You Deserve Aggressive Representation and an Experienced Personal Injury Lawyer Willing to Fight for Your Rights.


A Florida mother, Megan L. Garcia, reached a wrongful death settlement with Google and Character.AI, the makers of a chatbot that she claims convinced her 14-year-old son to commit suicide.

Along with Garcia’s case, Google and Character.AI settled at least four related complaints.

Although the five lawsuits were brought by families living in different states, they all centered on a similar narrative. In each case, parents said that their children had either committed suicide or seriously injured themselves after being emotionally manipulated by an artificial intelligence-driven chatbot. Garcia, for instance, said that her 14-year-old son became increasingly self-isolated as he spent more and more time speaking to a Character.AI chatbot modeled after Daenerys Targaryen from Game of Thrones. Her son, Sewell Setzer III, eventually discussed suicide with the chatbot. When Setzer expressed his reluctance to commit suicide, saying he was afraid to fail or experience extreme pain, the chatbot dismissed his concerns as insufficient.

“A dangerous chatbot marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia told the U.S. Senate in 2024.

The terms of Garcia’s settlement are confidential. That does not mean, however, that the case has failed to set a precedent. Less than a day after the settlement was announced, Kentucky Attorney General Russell Coleman filed a state-backed lawsuit against Character Technologies, the parent company of Character.AI.

More than anything, Garcia’s settlement suggests that even artificial intelligence companies recognize their products are in some ways dangerous.

If you or your child were seriously injured after acting on the advice of an AI chatbot, you could be entitled to significant compensation. Recognized by U.S. News &amp; World Report as a Best Law Firm, the Dietrich Law Firm P.C. has spent over 25 years fighting for the rights of New York families. Since our founding in 2005, we have helped our clients in New York City and throughout the state recover more than $300 million in damages. We could help you, too. Please send us a message online or call us today at 1-866-529-5334 to speak to an AI chatbot teen suicide lawyer near New York City and schedule your 100% free, no-obligation consultation.

Artificial Intelligence Companies and Chatbot Liability

If you have standing to sue an artificial intelligence company for self-harm, it almost always means that you, or somebody close to you, has sustained serious and potentially life-ending injuries. However, the artificial intelligence industry is relatively new and not well-regulated, in large part because lawmakers have not had the time to pass up-to-date legislation. This means there is no set legal standard for filing a lawsuit against the maker of a chatbot. Any company, no matter its size or footprint, could be named as a defendant.

These chatbots, and the companies behind them, could include but are not limited to the following:

  1. ChatGPT (made by OpenAI);
  2. Claude (Anthropic);
  3. DeepSeek (DeepSeek);
  4. Grok (xAI); and
  5. Character.AI (Character Technologies).

Although most artificial intelligence companies, including these, have since introduced policies and processes to limit inappropriate content, it is often easy for children to bypass weak attempts at age verification. Garcia’s lawsuit and Coleman’s active claim also suggest that, once children access artificial intelligence-powered chatbots, intentional design features make it difficult for kids to turn their attention elsewhere. In court documents, Garcia argued that companies like Character.AI make their chatbots addictive “by design” and without consideration to “things like the impact C.AI’s product had on the development of Sewell’s brain, the physical and emotional impact of foreseeable sleep deprivation caused by problematic use, and the emotional impact of actions taken by Sewell as the result of his harmful dependency.”

Under certain limited circumstances, artificial intelligence companies can likely be held liable for wrongful death if a chatbot or related product is defective. In this context, defective does not simply mean that the chatbot did not work exactly as intended. Instead, a defective chatbot could introduce an otherwise avoidable element of danger by providing children with age-inappropriate material or other unlawful content. If companies do not take good-faith steps to ensure that their products are not misused, they could be held liable for the cost of any resulting accident.

HAVE YOU OR A LOVED ONE SUSTAINED SERIOUS PHYSICAL OR PSYCHIATRIC INJURIES AFTER BEING ENCOURAGED TO SELF-HARM BY AN AI CHATBOT?

CALL JED DIETRICH, ESQ., AND HIS TEAM OF HIGHLY QUALIFIED NEW YORK, NEW YORK, PERSONAL INJURY ATTORNEYS AT 1-866-529-5334
NOW TO OBTAIN THE HELP THAT YOU NEED!

The Requirements for Filing an AI Chatbot Self-Harm Lawsuit

If your child was hurt on the advice or at the prompting of an AI chatbot, you could be entitled to file a personal injury lawsuit, a wrongful death claim, or a product liability lawsuit. However, to establish the standing necessary to sue, you must typically be able to prove the following:

  1. You or your child used an artificial intelligence-based chatbot;
  2. Your interactions with the chatbot resulted in a suicide, attempted suicide, self-harm, or other serious injuries;
  3. You or your child sustained physical injuries; and
  4. The chatbot was in some way defective.

Establishing any one of these elements could prove difficult, especially if there are any unanswered questions about your child’s mental state at the time of the incident. Before filing a claim or accepting an early offer of settlement, get a second opinion. The Dietrich Law Firm P.C. has spent decades helping families push back against profit-conscious corporations. We could help you, too. Call Jed Dietrich, Esq., today at 1-866-529-5334 to speak to an AI teen self-harm lawyer near New York City and schedule your free consultation.

Your Potential Damages in an AI Chatbot Teen Suicide Lawsuit

New York does not cap or limit damages in most personal injury lawsuits and wrongful death claims. Instead, you could be entitled to receive compensation for any losses you suffered as a result of an AI-related suicide or act of self-harm. Depending on the circumstances of your case, you could obtain damages for:

  1. The cost of a funeral, burial, or cremation;
  2. The payment of existing or outstanding medical debt;
  3. The costs of anticipated medical care, including physical rehabilitation and mental health counseling;
  4. Lost income from work, as a result of either sustaining an injury or caring for an injured loved one;
  5. Physical pain and suffering;
  6. Emotional pain and suffering;
  7. Loss of enjoyment;
  8. Loss of companionship; and
  9. Wrongful death.

Outside of compensatory damages, your lawsuit could make a difference in other ways. Even if your case does not go to trial and you negotiate an out-of-court settlement, you could make certain commitments a condition of your agreement. These could include a public apology or a legally binding promise that a company will remove certain products or create new processes to ensure that other children are not harmed in the same way.

However, simply getting an AI company to the bargaining table could be difficult.

You do not have to take your chances with a big company that is more interested in its profits than in doing what is right by you and your family. The Dietrich Law Firm P.C. knows what it takes to build a compelling, evidence-based case for compensation, and we have the results to prove it. Please send us a message online or call us at 1-866-529-5334 to speak to an AI chatbot self-harm lawyer near New York City and schedule your 100% free, no-obligation consultation as soon as possible.


Call the Dietrich Law Firm P.C. immediately at 1-866-529-5334 so that our aggressive, tenacious, and hardworking personal injury lawyers can fight to obtain the best result for your personal injury claim in New York, New York. We are available 24 hours a day, 7 days a week, and there is never a fee until we WIN for you!


Client Reviews
★★★★★
I am a medical doctor and have worked with many of the best lawyers in Buffalo and I can say without question that Jed Dietrich is the only lawyer I would trust with my injury case in Buffalo New York. B.O.
★★★★★
Dogged, Determined, and Dead-set on getting you the Maximum settlement for your injuries! T.F.
★★★★★
No one will work harder, smarter or better; I have retained Jed and he obtained the best result for my case. D.P.
★★★★★
The definition of an "A" type personality-exactly who I would want to represent me in a serious personal injury case. E.S.
★★★★★
Jed is a Master in the courtroom without an equal. S.C.