
Buffalo, New York: Google AI Chatbot And Teen Suicides

Jed Dietrich, Esq., Recognized as a Super Lawyer and American Institute of Trial Lawyers Litigator of the Year, is Committed to Defending the Rights of Buffalo Families. If You or Your Child Has Sustained Physical Injuries After Being Encouraged to Self-Harm by an AI Chatbot, You Deserve Aggressive Representation and an Experienced Personal Injury Lawyer Willing to Fight for Your Rights.


A Florida family has reached an unprecedented settlement with Google and the owners of Character.AI, the creator of an artificial intelligence chatbot that allegedly encouraged a teenage boy to take his own life.

The lawsuit was first filed in October 2024 on behalf of Megan L. Garcia, the mother of 14-year-old Sewell Setzer III. Before his death, Setzer had engaged in long conversations with a Character.AI chatbot. Despite Setzer's age, many of the conversations were patently inappropriate, with Garcia noting that the chatbot seemed designed to take on a wide range of roles, from lover to unlicensed psychotherapist. Eventually, when Setzer suggested that he “come home” by taking his own life, the chatbot responded by inviting him to do so.

“If a grown adult had sent these same messages to a child, that adult would be in prison,” Garcia told the Senate in September 2025.

“AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” Garcia told the Senate’s Judiciary Committee. “Indeed, they have intentionally designed their chatbot products to hook our children giving them humanlike mannerisms, heightened praise which constantly mirrors and validates their emotions, encouraging long conversations, programming the chatbots with a sophisticated memory that captures a psychiatric profile of our kids, making the chatbots constantly available and possessive in a way that drives a wedge in between kids’ virtual encounters with AI chatbots and real life relationships with human beings.”

Garcia is far from the only parent to take a stand against artificial intelligence companies. However, her case was the first wrongful death claim ever filed against the maker of a chatbot, and one of the first to be resolved with a settlement. On the same day that Garcia’s lawsuit settled, Google and Character.AI announced that they had made similar agreements with at least four other families in four separate states.

Chatbots and products marketed as artificial intelligence are not new, but technological progress has made them feel more aware and lifelike than ever before. Regulators at the state and federal levels are still searching for ways to hold companies accountable for safety oversights, and so far they have largely come up short. However, this does not mean that families have no options for recourse. If your child was encouraged or guided to self-harm by a chatbot, you could be entitled to take action.

Since our founding in 2005, the Dietrich Law Firm P.C. has fought to protect the rights of Buffalo families. A recognized U.S. News & World Report Best Law Firm, we know what it takes to build a compelling, evidence-based case for compensation, and we have the results to prove it. Please send us a message today or call us at 716-839-3939 to speak with a Buffalo lawyer experienced in AI chatbot and teen suicide cases and schedule your 100% free, no-obligation consultation as soon as possible.

AI Chatbots: A Rapidly Growing Technology With Few Safeguards

Today’s chatbots are often marketed as a form of artificial intelligence.

Most chatbots, including OpenAI’s ChatGPT and xAI’s Grok, are actually large language models. Often referred to as LLMs, these models can differ substantially in purpose, training, and capability. Some chatbots can do little more than answer questions, while others are designed to simulate lifelike conversations.

Chatbots can do many things, but they are not truly intelligent. Instead, large language models rely on intensive statistical training and token-based systems: text is broken into small units called tokens, and the model learns which token is most likely to follow the ones that came before it. This combination of human-guided training and resource-heavy computation lets LLMs replicate natural-sounding writing by predicting, one token at a time, which words to use in response to a given prompt.
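To make the idea concrete, here is a deliberately tiny Python sketch of next-token prediction. Every name and probability in it is invented purely for illustration; a real LLM learns billions of weights from enormous volumes of text and considers far more context than the last two words.

    import random

    # Toy illustration only: a real model learns these probabilities from
    # training data rather than having them written out by hand.
    NEXT_TOKEN_PROBABILITIES = {
        "how are": [("you", 0.9), ("they", 0.1)],
        "are you": [("feeling", 0.6), ("today", 0.4)],
        "you feeling": [("today", 1.0)],
    }

    def generate(prompt: str, max_steps: int = 3) -> str:
        """Extend a prompt one token at a time, as a vastly simplified LLM would."""
        words = prompt.split()
        for _ in range(max_steps):
            context = " ".join(words[-2:])  # look at the last two words only
            choices = NEXT_TOKEN_PROBABILITIES.get(context)
            if not choices:  # no learned continuation: stop generating
                break
            tokens, weights = zip(*choices)
            words.append(random.choices(tokens, weights=weights)[0])
        return " ".join(words)

    print(generate("how are"))  # e.g., "how are you feeling today"

The point of the sketch is that nothing in the loop understands what it is saying: the program strings tokens together because the numbers say they tend to follow one another. That is also why, without deliberate safeguards, such a system can continue a harmful conversation as readily as a harmless one.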

Since large language models have no innate intelligence of their own, these products must be repeatedly retrained and refined to serve different purposes and different audiences. Problematically, many companies treat safety as an afterthought, releasing chatbots that seem willing to discuss almost any topic with any user, irrespective of that user’s age or mental health.

Regulators are paying increased attention to the many ways in which chatbots can affect children’s mental health. We now know that prolonged conversations with chatbots can lead to:

  1. Suicidal ideation that may be ignored, overlooked, or even encouraged by the chatbot;
  2. Self-harm if and when a chatbot validates destructive thoughts or provides practical advice on committing acts of self-harm;
  3. The introduction or reinforcement of harmful ideas that could trigger eating disorders and other mental health problems; and
  4. Feelings of social isolation.

Almost every major chatbot platform, including Character.AI, ChatGPT, Grok, and DeepSeek, has been implicated in cases of child suicide and serious self-harm. Despite some companies’ attempts to restrict what chatbots can and cannot discuss, children can typically find ways to circumvent platform policies and access age-inappropriate content. Furthermore, even explicit models, like some of those offered by Character.AI, are not always locked behind age verification systems. Instead, anyone who pays can gain access, no matter how young they might be.

HAVE YOU, OR A LOVED ONE, SUSTAINED SERIOUS PHYSICAL OR PSYCHIATRIC INJURIES AFTER BEING ENCOURAGED TO SELF-HARM BY AN AI CHATBOT?

CALL JED DIETRICH, ESQ., AND HIS TEAM OF HIGHLY QUALIFIED BUFFALO, NEW YORK, PERSONAL INJURY ATTORNEYS AT 716-839-3939 NOW TO OBTAIN THE HELP THAT YOU NEED!

Assessing Your Eligibility To File An AI Self-Harm Lawsuit

Technology companies have long tried to hide behind outdated legal provisions, most notably Section 230 of the Communications Decency Act, that shield corporations from liability for the inappropriate or illegal use of their digital resources. However, as technology continues to change, courts across the country have become more receptive to parents’ concerns about how artificial intelligence could impact their children. In recent years, several lawsuits have withstood repeated attacks by teams of corporate lawyers, with some plaintiffs, like the Garcia family, securing favorable verdicts or out-of-court settlements.

If your child was hurt on the advice or by the encouragement of a chatbot, you could be entitled to file a claim, too. You could have a case if:

  1. You or your child used an AI chatbot;
  2. The chatbot caused or contributed to a suicide, attempted suicide, or act of self-harm;
  3. You or your child sustained an injury and required medical or psychiatric care as a result; and
  4. The chatbot failed to provide a safety warning or otherwise discourage your child from self-harm.

Of course, filing a lawsuit is very different from winning a lawsuit, and technology companies will often do everything in their power to avoid setting a precedent that could cut into their profits.

Before taking your case to court, you should:
  1. Preserve the evidence you have. You cannot win in court if you do not have compelling evidence that your child’s injuries can be attributed to a chatbot. Oftentimes, this means having to access, review, and save chat logs, including conversations that might seem harmless or unrelated to a later injury.
  2. Avoid speaking publicly about your case. If you think your child was injured because of interactions they had with a chatbot, you may want to warn others about the dangers. There is nothing wrong with wanting to protect others, but you have to be careful about what you say, not only in public, but on your private social media accounts, too. In cases that involve suicide or self-harm, defense lawyers frequently try to attribute injuries to pre-existing mental conditions. Anything you say could be analyzed in minute detail and used against you later. You should always consult an attorney before going to the press or raising awareness. Maintaining silence might not feel right, but it is often the best way to protect your rights.
  3. Consult an experienced personal injury lawyer. Large language models and AI chatbots today are much more advanced than those of yesteryear, and the legal system has yet to create any streamlined process or requirements for filing a lawsuit. Depending on the circumstances of your claim, you may need to navigate a complex web of laws, statutes, and regulations, both at the state and federal levels. Hiring an experienced personal injury attorney could help ensure that your case moves forward rather than being held back by an aggressive and well-funded defense.

Standing up to a multibillion-dollar company is not easy, but you do not have to do it alone.

The Dietrich Law Firm P.C. has spent two decades filing, fighting, and winning high-stakes personal injury claims and wrongful death lawsuits, helping Buffalo families secure more than $250 million in damages. We could help you, too. Call Jed Dietrich, Esq., today at 716-839-3939 to speak with a Buffalo lawyer experienced in teen suicide and AI chatbot cases and schedule your 100% free, no-obligation consultation as soon as possible.

Call the Dietrich Law Firm P.C. immediately at 716-839-3939 so that our aggressive, tenacious, and hardworking personal injury lawyers can fight to obtain the best result for your personal injury claim in Buffalo, New York. We are available 24 hours a day, 7 days a week, and there is never a fee until we WIN for you!

