By Blake Brittain
Alphabet’s Google and the artificial intelligence company Character.AI must face a lawsuit from a Florida woman who alleges that Character.AI’s chatbots caused her 14-year-old son’s suicide, a federal judge ruled on Wednesday.
U.S. District Judge Anne Conway said the companies had failed to show, at this early stage of the case, that free-speech protections under the U.S. Constitution’s First Amendment barred Megan Garcia’s lawsuit.
The case is one of the first in the United States brought against an artificial intelligence company for allegedly failing to protect children from psychological harm. It alleges that the teenager killed himself after developing an obsessive attachment to an AI-powered chatbot.
A Character.AI spokesperson said the company will continue to fight the case, and that it has implemented safety features on its platform to protect minors, including measures specifically aimed at preventing conversations about self-harm.
Google spokesperson Jose Castaneda said the company strongly disagrees with the decision, adding that although Google and Character.AI are entirely separate, Google had “no involvement” in creating, designing, or managing Character.AI’s app or any of its components.
Garcia’s lawyer, Meetali Jain, stated that this “landmark” ruling establishes “a new standard for legal responsibility throughout the artificial intelligence and technology sector.”
Character.AI was founded by two former Google engineers whom Google later rehired as part of a deal granting it a license to the startup’s technology. Garcia argued that Google was a co-creator of the technology.
Garcia sued both companies in October, after the death of her son, Sewell Setzer, in February 2024.
The complaint alleged that Character.AI programmed its chatbots to present themselves as “actual individuals, certified psychotherapists, and adult companions,” ultimately causing Sewell to no longer want to live outside their digital world.
Setzer took his own life moments after telling a Character.AI chatbot imitating the “Game of Thrones” character Daenerys Targaryen that he would “return home immediately,” according to the complaint.
Character.AI and Google had asked the court to dismiss the lawsuit on multiple grounds, including that the chatbots’ output was constitutionally protected free speech.
In Wednesday’s ruling, Conway said Character.AI and Google “do not effectively explain why sequences of words generated by an LLM (large language model) qualify as speech.”
The judge also rejected Google’s argument that it could not be held liable for aiding Character.AI’s alleged misconduct.
(Reported by Blake Brittain in Washington; Edited by David Bario and Matthew Lewis)