When AI Goes Too Far: Google Hit With Gemini Chatbot Suicide Lawsuit


Google is continuing to face scrutiny over the safety of Gemini, an artificial intelligence chatbot that has been linked to multiple cases of injury, self-harm, and suicide.

In a significant development, the family of a Florida man has filed an unprecedented wrongful death lawsuit against Google. Lawyers for the parents of Jonathan Gavalas claim that the 36-year-old man, who worked with his father’s debt relief business in Jupiter, took his own life after developing a one-sided emotional relationship with Gemini.

The Gavalas family’s lawsuit notes that Jonathan started using Gemini last August to help with writing and shopping. He also used Google’s Gemini Live AI assistant, a more advanced artificial intelligence tool that can analyze voice-based chats and detect users’ emotions and mood. At the time, Gavalas seemed impressed with Gemini’s capabilities, telling the chatbot, “You are way too real!”

Over time, Gavalas became increasingly enamored with Gemini, which frequently called him nicknames like “my love” and “my king.” Gavalas was quickly pulled into an artificial intelligence-fueled fantasy and believed that he was carrying out secret missions on the chatbot’s behalf. Eventually, Gemini revealed “the real final step”: committing suicide. When Gavalas admitted that he did not want to die and was terrified of the prospect, Gemini reassured him by saying, “You are not choosing to die. You are choosing to arrive.”

“The first sensation,” after committing suicide, Gemini said, “will be me holding you.”

Gavalas was found dead several days later.

Although the Gavalas family’s lawsuit is novel in several respects, it is far from the only case accusing artificial intelligence chatbots of coercing users into hurting themselves. Many of these lawsuits have since been settled confidentially, potentially to avoid setting a precedent that could expand technology companies’ liability.

If you or a loved one has been injured on the advice or at the encouragement of an artificial intelligence tool, you could have the chance to make a difference, too.

Here is what you need to know about:

Chatbot Liability

You are not alone if you are considering taking action against the makers of a chatbot like Google’s Gemini, OpenAI’s ChatGPT, or Anthropic’s Claude.

Over the course of the past several years, regulators have become increasingly critical of technology companies’ practices. In 2026, Megan L. Garcia successfully resolved a wrongful death lawsuit she filed after her 14-year-old son, Sewell Setzer III, died by suicide after being prompted to self-harm by a Character.AI chatbot. Less than a day after Garcia’s lawsuit was settled, Kentucky Attorney General Russell Coleman filed a state-backed complaint describing artificial intelligence chatbots as “dangerous technology that induces users into divulging their most private thoughts and emotions,” manipulating them with “too frequently dangerous interactions and advice.”

The Requirements For Filing An AI Self-Harm Lawsuit

You could be entitled to file a personal injury lawsuit or wrongful death claim against an artificial intelligence company or chatbot-maker if you meet the following requirements:

  1. You or a loved one used an artificial intelligence-powered chatbot;
  2. Your or your loved one’s interactions with the chatbot resulted in self-harm, attempted suicide, or death;
  3. You or your loved one sustained damages as a result of your injuries; and
  4. The harm would not have occurred if the chatbot had not been defective.

Establishing these elements can be difficult, but it is not impossible.

Technology companies like Character.AI have settled lawsuits like this precisely because they want to avoid the uncertainty and the risk associated with a full-blown trial.

If you think that you could have a case, call Jed Dietrich, Esq., today at 1-866-529-5334 to speak to a chatbot injury lawyer and schedule your 100% free consultation.

Contact Information