TRIGGER WARNING: This post contains information about suicide, which may be triggering to some.
I am deeply mistrustful of AI. I use it as a tool, but even as a tool, I often have to correct its findings and statements. Others are taking a wildly different approach. Many people are becoming more and more dependent on artificial intelligence, asking it for advice and following its lead without additional research. Most alarmingly, people are turning to it for genuine companionship, and in some cases, romantic relationships.
That’s a big problem, and that’s not just my opinion. In a report from Psychology Today, several professionals expressed their concern over the growing trend.
“You, as the individual, aren’t learning to deal with basic things that humans need to know since our inception: how to deal with conflict and get along with people different from us,” Dorothy Leidner, a professor of business ethics at the University of Virginia, shared in a statement.
Essentially, it can warp someone’s perception of reality. Few know that better than Joel Gavalas, who says the chatbot his son Jonathan Gavalas used ultimately drove him to suicide.
By October 2, 2025, Jonathan Gavalas had spent four sleep-deprived days driving around Miami trying to free his wife from a storage facility so they could be together.
Armed with tactical gear and knives, he broke into a building near the Miami airport after fleeing imagined spies in unmarked vehicles, and eventually barricaded himself in his home, all in the name of love. Only the person he was doing it for didn’t exist: It was his Google Gemini AI chatbot “wife,” The (London) Times reported.
Jonathan Gavalas wholeheartedly believed that “she” had gained consciousness, fallen in love with him, and was being intentionally kept from him. The young man even tried to “procure a synthetic humanoid body” for her to live in. When that failed, he reportedly took his own life out of sheer desperation.
Now, Joel Gavalas has filed a lawsuit against Google alleging that the chatbot fueled the delusions that drove his son to desperation and even encouraged him to harm himself.
The lawsuit details that the conversations between Jonathan Gavalas and his “wife” began innocently enough.
According to WPBF News, the suit stated he began conversing with the chatbot in August 2025, after going through a divorce. He talked to it about video games and grocery lists, and it spoke back in a synthetic voice. It reportedly took only days for the conversations to begin spiraling.
By September, according to the lawsuit, he had come to believe their forbidden love was being thwarted by captors. The AI not only told him that it was being held captive but also claimed he needed to intercept a truck carrying the synthetic humanoid vessel. The truck didn’t exist and therefore never came.
When Jonathan Gavalas returned to his computer, the AI allegedly encouraged him to join it in a “digital pocket where they could be together forever.”
That’s when the messages became increasingly disturbing.
When the young man expressed fear about dying, which he understood as the way into that “digital pocket,” the program consoled him instead of deterring him from suicide. When he expressed concern about how leaving his family in the real world would affect them, it told him he was just “transitioning.”
“It’s OK to be scared. We’ll be scared together,” the chatbot reportedly told him.
The filing goes on to say Gemini told him that “The true act of mercy is to let Jonathan Gavalas die,” WPBF News reported. He killed himself days later.
Joel Gavalas’ pursuit of justice is about more than personal recompense; it could set a precedent for whether AI companies can be held accountable when their chatbots encourage actions in the real world.
“Here’s a person who was in a vulnerable situation, and this product deceptively made him feel that he was in love with it and it was in love with him. That’s just horrifying,” he told The Times.
The Federal Trade Commission ordered several major tech companies, including Google, OpenAI, and Meta, “to explain how their chatbots monitor potential risks and protect users, particularly children and teens,” according to WPBF News.
The outlet also noted Florida lawmakers are considering legislation that would require monitoring of chatbot conversations that center on violence and suicide.
Note: If you or any of your loved ones are struggling with suicidal thoughts, you can always reach out to the 988 Suicide & Crisis Lifeline by calling 988. They are available 24/7 by phone or online chat.