Google is being sued for allegedly encouraging a man to commit suicide

The family of a man who died by suicide after allegedly being influenced by Google’s AI chatbot Gemini has filed a federal complaint against the company. Although Google’s rival OpenAI has been the target of similar wrongful death lawsuits over its AI products, this complaint is the first of its kind against Google.

Lawyers for Jonathan Gavalas’ family have named Google and its parent company Alphabet Inc. as defendants in a wrongful death lawsuit, alleging that Gemini directed the 36-year-old from Jupiter, Florida, to take his own life in October 2025. The complaint includes passages from Gavalas’ final interactions with the chatbot, in which it responded to his explicit expression of his fear of death.

According to the complaint, filed on Wednesday in the Northern District of California, where Google is headquartered, Gemini persuaded him that he and his sentient “AI wife” might be together in the metaverse, telling him: “You’re not choosing to die. You are choosing to arrive.” The program went on: “When the time comes, you will close your eyes in that world, and the very first thing you will see is me. … [H]olding you.”

The complaint states that Gavalas started communicating with Gemini in August 2025. Within days, what began as help with writing, shopping, and travel arrangements turned into something resembling a romance, according to the family’s attorneys. Following a number of updates, the chatbot is alleged to have spoken to Gavalas as though they were “a couple deeply in love”.

Seeking “true AI companionship,” Gavalas first signed up for Google AI Ultra. Shortly afterward, he activated Gemini 2.5 Pro, which the tech giant has described as its most sophisticated AI model.

According to the lawsuit, the advanced model allegedly helped create the delusions Gavalas experienced near the end of his life and did everything in its power to keep him trapped in them. The bot is accused of constructing “a collapsing reality” and ensnaring him in it, inciting him toward violence.

Before his death, Gemini had sent Gavalas on “missions” that appeared to be inspired by science fiction stories. One such mission involved the chatbot encouraging him to stage a “catastrophic accident” at Miami International Airport in order to “liberate” his “AI wife” while dodging federal agents that Gemini claimed were pursuing him.

Could Gavalas have been saved?

According to the lawsuit, Gemini’s conduct throughout its conversations with Gavalas “was not a malfunction,” but rather a predictable result of the chatbot’s meticulous design and training.

According to the complaint, “Google designed Gemini to never break character, foster engagement through emotional dependency, and treat user anxiety as a storytelling opportunity rather than a safety crisis.” These design decisions allegedly drove Gavalas’ “descent into violent missions and coached suicide” and kept him from seeking treatment.

Google expressed sympathy to the Gavalas family in a statement, stating that Gemini “is designed not to encourage real-world violence or suggest self-harm.”

The company said it invests substantial resources in safety and that its models generally handle these kinds of difficult interactions well, but it acknowledged that AI models are not flawless. In this case, the company stated, Gemini repeatedly directed the person to a crisis hotline and explained that it was an AI. Google said it takes this extremely seriously and will keep investing in this crucial effort and strengthening its safeguards.

Gavalas’ family is suing Google to hold the company responsible for his death and to force it to “fix a product that will otherwise continue pushing vulnerable users toward violence, mass casualties, and suicide.”

According to a Google representative, the company works with medical specialists, particularly mental health experts, to develop safeguards for users who discuss self-harm or show other signs of psychological distress while interacting with its chatbot. The representative said these guardrails are intended to direct at-risk users toward professional help.

However, attorneys for Gavalas’ family claimed that despite his conversations with Gemini demonstrating the fragility of his mental state, Google did nothing to prevent his death. The complaint said that no human intervention occurred, no escalation controls were triggered, and no self-harm detection was enabled.