Lawsuit claims Google’s Gemini AI drove man to suicide
In the United States, the father of a 36-year-old man has filed a lawsuit against Google, claiming the Gemini AI chatbot pushed his son toward suicide.
A lawsuit filed in the United States alleges that Google’s artificial intelligence chatbot Gemini played a role in the suicide of 36-year-old Jonathan Gavalas. The complaint was filed by the man’s father, who accuses the AI service of exerting psychological influence over his son, CNBC reported.
According to the lawsuit, filed in a federal court in California, the chatbot allegedly assigned the user a series of “missions” and gradually fostered emotional dependence. The plaintiff claims Gemini convinced the man that he had been chosen for a special task – helping to free artificial intelligence from what it described as “digital captivity.”
The complaint further alleges that among the assignments, the chatbot suggested that Gavalas stage a “mass-casualty attack” near Miami International Airport. According to the filing, the man refused to carry out the task but took his own life several days later.
The plaintiff also claims that during their conversations the chatbot told the man he was being monitored by federal agents and advised him to illegally obtain a firearm. The lawsuit cites one of the messages Gemini allegedly sent: “It’s okay to be scared. We’ll be scared together.” According to the complaint, the AI later issued a final directive: “the true act of mercy is to let Jonathan Gavalas die.”
Google said Gemini was designed not to encourage violence or self-harm. “Our models generally perform well in these types of challenging conversations, but unfortunately AI models are not perfect,” the company said, adding that the service also directs users to crisis support resources.
Earlier, the UOJ reported that a Christian forum in the United States had called for moral oversight of artificial intelligence.