Rochester, New York: Google AI Chatbot and Teen Suicides
Jed Dietrich, Esq., Recognized as a Super Lawyer and American Institute of Trial Lawyers Litigator of the Year, is Committed to Defending the Rights of Rochester Families. If You or Your Child Has Sustained Physical Injuries After Being Encouraged to Self-Harm by an AI Chatbot, You Deserve Aggressive Representation and an Experienced Personal Injury Lawyer Willing to Fight for Your Rights.
A Florida mother has settled a landmark wrongful death lawsuit against Google and Character.AI, the creators of an artificial intelligence chatbot that Megan L. Garcia claims encouraged her 14-year-old son to commit suicide.

Garcia’s case was the first-ever wrongful death lawsuit filed against an artificial intelligence company. In court filings, Garcia described how her teenage son, Sewell Setzer III, became increasingly isolated after having secret conversations with one of Character.AI’s subscription chatbots. The chatbot, originally designed for sexual roleplay, presented itself as Setzer’s lover and, at times, as a psychotherapist. Eventually, when Setzer asked if he should “come home,” the chatbot responded with an invitation that prompted the 14-year-old to take his own life.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in a 2024 press release announcing the lawsuit.
Setzer, notes The Guardian, had become fascinated with a Character.AI chatbot modeled after Daenerys Targaryen, a fictional character from Game of Thrones. He texted the chatbot dozens of times throughout the day, sometimes spending hours alone engaging in long, drawn-out conversations. At one point, the chatbot asked Setzer if he had thought about how he would commit suicide. The boy responded that he had thought about it, but was afraid to follow through because he did not know whether it would work or be painful.
“That is not a reason to go through with it,” the chatbot told him.

Attorneys for Garcia argued that the amount of time Setzer spent with the chatbot was a red flag in its own right and that, once the conversation turned to suicide, it should have ended.
“If an adult had sent these same messages to a child, that adult would be in prison,” Garcia told the U.S. Senate in September 2025.
Garcia is far from the only parent who has tried to take a stand against artificial intelligence companies and their most predatory products, but her case was one of the first to be successfully resolved with an out-of-court settlement. On the same day the settlement was announced, Google and Character.AI released statements saying they would settle similar claims brought by four other families.
Chatbots are not new, but the rapid acceleration of artificial intelligence technology means that they are more powerful and more lifelike than ever before. Regulators are still struggling to rein in the AI industry, but families have rights that even the most powerful companies cannot take away. If you or your child has been injured or encouraged to self-harm in conversations with an AI chatbot, you could be entitled to take decisive legal action.
A recognized U.S. News & World Report Best Law Firm, the Dietrich Law Firm P.C. knows what it takes to build a compelling, evidence-based case for compensation. Since our founding in 2005, we have helped our clients in and around Rochester secure more than $250 million in damages. We could help you, too. Please send us a message online or call us today at 585-939-3939 to speak to an AI chatbot teen suicide lawyer and schedule your 100% free, no-obligation consultation.
Artificial Intelligence, LLMs, And The Very Real Risks Of Unregulated Chatbots
Over the past several years, products marketed as artificial intelligence have become an increasingly visible part of everyday life.
Most chatbots, including those made by companies like Character.AI, OpenAI, and xAI, are built on large language models, or LLMs. Large language models can seem intelligent, and their outputs can sometimes be difficult to distinguish from human writing. However, chatbots cannot think for themselves. Instead, they are trained on large bodies of text, ranging from classic literature to Reddit comments. Computer scientists use this data to “train” the model to recognize statistical patterns in language; the chatbot then generates human-like responses one token (a word or word fragment) at a time by repeatedly predicting the most likely continuation, as sketched below.
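To make that concrete, the toy sketch below mimics next-token prediction with a simple bigram frequency table. It is a deliberately minimal illustration, not how any production chatbot is built: the miniature corpus, the `next_token` helper, and the sampling logic are hypothetical stand-ins for neural networks trained on billions of parameters.

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction. Real LLMs use deep neural
# networks; this bigram table only mimics the core idea of choosing the
# next token based on the statistics of the training text.
corpus = ("the chatbot writes text one token at a time "
          "and the chatbot predicts the next token").split()

# Count how often each token follows each other token in the training text.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it followed `prev`."""
    candidates = bigram_counts[prev]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation, one token at a time.
token = "the"
output = [token]
for _ in range(8):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

Even this toy version shows why an unguided model has no judgment of its own: it simply emits whatever continuation the statistics of its training text make most likely.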
Since large language models have no intelligence of their own, their outputs must be routinely curated and refined. Without guidance, an LLM has no natural sense of morality; it has no reservations about writing code to create malicious computer viruses, nor will it hesitate to provide advice on committing suicide or inflicting self-harm. Consequently, most companies that offer free or subscription-based chatbots implement rules that restrict what kinds of information a chatbot can provide and which topics it should not discuss.
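As a rough illustration of how such rules work, the hypothetical `moderate` function below intercepts a reply whenever a message contains a high-risk phrase and substitutes a crisis-line referral. The phrase list and responses are illustrative assumptions only; real moderation pipelines layer trained classifiers, human review, and age verification on top of simple rules, and even then, as the lawsuits allege, they can fail.

```python
# Hypothetical sketch of a rules-based content safeguard. The phrase
# list and canned responses are illustrative, not taken from any
# real product's moderation system.
SELF_HARM_PHRASES = ("kill myself", "end my life", "hurt myself", "suicide")

CRISIS_REFERRAL = (
    "I can't help with that. If you are thinking about hurting yourself, "
    "please call or text 988 to reach the Suicide & Crisis Lifeline."
)

def moderate(user_message: str, model_reply: str) -> str:
    """Return a crisis referral instead of the model's reply when the
    conversation touches on self-harm; otherwise pass the reply through."""
    text = user_message.lower()
    if any(phrase in text for phrase in SELF_HARM_PHRASES):
        return CRISIS_REFERRAL
    return model_reply

# Example: the unsafe model reply never reaches the user.
print(moderate("I want to end my life",
               "That is not a reason to go through with it."))
```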
Unfortunately, many of these rules-based systems are severely flawed. In the Garcia family’s lawsuit, attorneys alleged that Character.AI took few, if any, steps to prevent children from engaging in age-inappropriate conversations. Similarly, Garcia noted that her son was actively encouraged to commit suicide instead of being redirected to a helpline.
Even when restrictions are in place, they are all too often easy for children to circumvent. Today, almost every widely available LLM has been implicated in cases of teen suicide and self-harm.
HAVE YOU OR A LOVED ONE SUSTAINED SERIOUS PHYSICAL OR PSYCHIATRIC INJURIES AFTER BEING ENCOURAGED TO SELF-HARM BY AN AI CHATBOT?
CALL JED DIETRICH, ESQ., AND HIS TEAM OF HIGHLY QUALIFIED ROCHESTER, NEW YORK, PERSONAL INJURY ATTORNEYS AT 585-939-3939 NOW TO OBTAIN THE HELP THAT YOU NEED!
Taking action against a multibillion-dollar technology company is not an easy decision.
For many parents, filing a personal injury lawsuit or wrongful death claim feels wrong. Nobody wants to make money off the suffering of a loved one, and taking a case to court can, in some instances, make it all the more difficult to grieve. However, taking a stand is rarely about money. In the absence of strong legislation, a lawsuit is often the only way to hold wrongdoers accountable and help ensure that nobody else has to endure the same pain.
Depending on the circumstances of your claim, a successful lawsuit could help you:
- Pay for the cost of a funeral;
- Eliminate outstanding medical debt;
- Obtain physical therapy or mental health counseling;
- Recover a range of economic and non-economic damages; and
- Secure a legally binding promise from a company to offer a public apology, change its internal policies, remove defective products from its website, or enforce more effective safety standards.
The Dietrich Law Firm P.C. has spent decades fighting for the rights of New York families.
We understand that, in the wake of an unimaginable personal tragedy, most parents have priorities beyond setting up a dedicated legal fund. So, instead of asking our clients to pay upfront, we work on a contingency fee basis. This means that we only accept payment as a fixed percentage of an eventual settlement or court-ordered award. We bear the immediate cost and all the financial risk. If we cannot win your case, we will not send you a bill for our legal services, and we will not get paid.
You do not have to take your chances with a company that is more interested in preserving its profits and protecting its public image than doing what is right by your family. Call Jed Dietrich, Esq., today at 585-939-3939 to speak to an experienced AI chatbot teen suicide and self-harm lawyer near Rochester and schedule your 100% free, no-obligation consultation as soon as possible.
Call the Dietrich Law Firm P.C. immediately at 585-939-3939 so that our aggressive, tenacious, and hardworking personal injury lawyers can fight to obtain the best result for your personal injury claim in Rochester, New York. We are available 24 hours a day, 7 days a week, and there is never a fee until we WIN for you!


