Jed Dietrich, Esq., Recognized as a Super Lawyer and American Institute of Trial Lawyers Litigator of the Year, is Committed to Defending the Rights of Buffalo Families. If You or Your Child Has Sustained Physical Injuries After Being Encouraged to Self-Harm by an AI Chatbot, You Deserve Aggressive Representation and an Experienced Personal Injury Lawyer Willing to Fight for Your Rights.
A Florida family has reached an unprecedented settlement with Google and the owners of Character.AI, the creator of an artificial intelligence chatbot that allegedly encouraged a teenage boy to take his own life.
The lawsuit was first filed in October 2024 on behalf of Megan L. Garcia, the mother of 14-year-old Sewell Setzer III. Before his death, Setzer had engaged in long conversations with a Character.AI chatbot. Even though Setzer was a minor, many of the conversations were patently inappropriate, with Garcia noting that the chatbot seemed designed to take on a wide range of roles, from lover to unlicensed psychotherapist. Eventually, when Setzer suggested that he “come home” by taking his own life, the chatbot responded by inviting him to do so.
“If a grown adult had sent these same messages to a child, that adult would be in prison,” Garcia told the Senate in September 2025.
“AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” Garcia told the Senate’s Judiciary Committee. “Indeed, they have intentionally designed their chatbot products to hook our children: giving them humanlike mannerisms, heightened praise which constantly mirrors and validates their emotions, encouraging long conversations, programming the chatbots with a sophisticated memory that captures a psychiatric profile of our kids, making the chatbots constantly available and possessive in a way that drives a wedge in between kids’ virtual encounters with AI chatbots and real life relationships with human beings.”
Garcia is far from the only parent to take a stand against artificial intelligence companies. However, her case was the first wrongful death claim ever filed against the maker of a chatbot, and one of the first to be resolved with a settlement. On the same day that Garcia’s lawsuit was resolved, Google and Character.AI announced that they had made similar agreements with at least four other families in four separate states.
Chatbots and products marketed as artificial intelligence are not new, but technological progress has made them feel more aware and lifelike than ever before. Regulators at the state and federal levels are searching for ways to hold companies accountable for safety oversights, but so far they have largely come up short. However, this does not mean that families have no options for recourse. If your child was encouraged or guided to self-harm by a chatbot, you could be entitled to take action.
Since our founding in 2005, the Dietrich Law Firm P.C. has fought to protect the rights of Buffalo families. A recognized U.S. News & World Report Best Law Firm, we know what it takes to build a compelling, evidence-based case for compensation, and we have the results to prove it. Please send us a message today or call us at 716-839-3939 to speak to an AI chatbot and teen suicide lawyer in Buffalo and schedule your 100% free, no-obligation consultation as soon as possible.
AI Chatbots: A Rapidly Growing Technology With Few Safeguards
Today’s chatbots are often marketed as a form of artificial intelligence.
Most chatbots, including OpenAI’s ChatGPT and xAI’s Grok, are actually large language models. Often referred to as LLMs, these models can differ substantially in purpose, training, and capability. Some chatbots can do little more than answer questions, while others are designed to simulate lifelike, free-flowing conversations.
Chatbots can do many things, but they are not truly intelligent. Instead, large language models rely on intensive mathematical training and token-based text processing. This combination of human-guided training and resource-heavy computation lets LLMs replicate natural writing by predicting, one token at a time, which words are most likely to follow a given prompt.
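The mechanics described above can be made concrete with a deliberately simplified sketch. The Python below is our own illustration, not code from any chatbot company: it swaps the neural network of a real LLM for a simple word-count table, but the core loop is the same idea, pick a statistically likely next word, append it, and repeat.

```python
import random
from collections import defaultdict

# Toy illustration only: real LLMs use neural networks trained on vast
# corpora, not simple word counts. This sketch shows the core idea the
# paragraph above describes: the model picks a statistically likely next
# word given the words so far, with no understanding of what they mean.

training_text = (
    "the model predicts the next word "
    "the model picks a likely word "
    "the chatbot predicts the next word"
)

# Count how often each word follows each other word (a bigram table).
follow_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def next_word(prompt_word: str) -> str:
    """Sample a next word in proportion to how often it followed
    prompt_word in the training text."""
    candidates = follow_counts.get(prompt_word)
    if not candidates:
        return "<end>"
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one token at a time.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    if word == "<end>":
        break
    output.append(word)

print(" ".join(output))
```

Scaled up to billions of parameters and trillions of words of training data, this same predict-and-append loop is what makes modern chatbots feel conversational, even though nothing in the loop understands what is being said.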
Since large language models have no innate intelligence of their own, these products must be repeatedly retrained and refined to serve different purposes and different audiences. Problematically, many companies treat safety as an afterthought, releasing chatbots that seem willing to discuss almost any topic with any user, regardless of the user’s age or mental health.
Regulators are now paying increased attention to the many ways in which chatbots can affect children’s mental health. We now know that prolonged conversations with chatbots can lead to:
- Emotional dependence on the chatbot;
- Withdrawal from real-life relationships with family and friends;
- Exposure to age-inappropriate content; and
- Encouragement of self-harm or suicide.
Almost every major chatbot, including Character.AI, ChatGPT, Grok, and DeepSeek, has been implicated in cases of child suicide and serious self-harm. Despite some companies’ attempts to restrict what chatbots can and cannot discuss, children can typically find ways to circumvent platform policies and access age-inappropriate content. Furthermore, even explicit models like those offered by Character.AI are not always locked behind age verification systems. Instead, anyone who pays can get access, no matter how young they might be.
HAVE YOU, OR A LOVED ONE, SUSTAINED SERIOUS PHYSICAL OR PSYCHIATRIC INJURIES AFTER BEING ENCOURAGED TO SELF-HARM BY AN AI CHATBOT?
CALL JED DIETRICH, ESQ., AND HIS TEAM OF HIGHLY QUALIFIED BUFFALO, NEW YORK, PERSONAL INJURY ATTORNEYS AT 716-839-3939 NOW TO OBTAIN THE HELP THAT YOU NEED!
Technology companies have long tried to hide behind outdated legal provisions that protect corporations from the inappropriate or illegal use of their digital resources. However, as technology continues to change, courts across the country have become more receptive to parents’ concerns about how artificial intelligence could impact their children. In recent years, several lawsuits have withstood repeated attacks by teams of corporate lawyers, with some plaintiffs, like the Garcia family, securing favorable verdicts or out-of-court settlements.
If your child was hurt after following a chatbot’s advice or encouragement, you could be entitled to file a claim, too. You could have a case if:
Of course, filing a lawsuit is very different from winning a lawsuit, and technology companies will often do everything in their power to avoid setting a precedent that could cut into their profits.
Before taking your case to court, you should consult an experienced attorney. The Dietrich Law Firm P.C. has spent decades filing, fighting, and winning high-stakes personal injury claims and wrongful death lawsuits. Over the past 25 years, we have helped Buffalo families secure more than $250 million in damages. We could help you, too. Call Jed Dietrich, Esq., today at 716-839-3939 to speak to a teen suicide and AI chatbot lawyer in Buffalo and schedule your 100% free, no-obligation consultation as soon as possible.
Call the Dietrich Law Firm P.C. immediately at 716-839-3939 so that our aggressive, tenacious, and hardworking personal injury lawyers can fight to obtain the best result for your personal injury claim in Buffalo, New York. We are available 24 hours a day, 7 days a week, and there is never a fee until we WIN for you!
I am a medical doctor and have worked with many of the best lawyers in Buffalo, and I can say without question that Jed Dietrich is the only lawyer I would trust with my injury case in Buffalo, New York.
Dogged, Determined, and Dead-set on getting you the Maximum settlement for your injuries!
No one will work harder, smarter or better; I have retained Jed and he obtained the best result for my case.
The definition of an "A" type personality: exactly who I would want to represent me in a serious personal injury case.
Jed is a Master in the courtroom without an equal.