Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide, allegedly after becoming hooked on the company’s technology.
In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son developed an emotional attachment to a chatbot on Character AI, “Dany,” which he texted constantly — to the point where he began to pull away from the real world.
Following Setzer’s death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.
In the motion to dismiss, counsel for Character AI asserts that the platform is protected against liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI's legal justifications may change as the case proceeds. But it likely hints at the early elements of Character AI's defense.
“The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,” the filing reads. “The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech — whether a conversation with an AI chatbot or an interaction with a video game character — does not change the First Amendment analysis.”
The motion doesn’t address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law’s authors have implied that Section 230 doesn’t protect output from AI like Character AI’s chatbots, but it’s far from a settled legal matter.
Counsel for Character AI also claims that Garcia’s real intention is to “shut down” Character AI and prompt legislation regulating technologies like it. Should the plaintiffs be successful, it would have a “chilling effect” on both Character AI and the entire nascent generative AI industry, counsel for the platform says.
“Apart from counsel’s stated intention to ‘shut down’ Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform,” the filing reads. “These changes would radically restrict the ability of Character AI’s millions of users to generate and participate in conversations with characters.”
The lawsuit, which also names Google parent company Alphabet as a defendant, is but one of several lawsuits Character AI is facing over how minors interact with the AI-generated content on its platform. Other suits allege that Character AI exposed a 9-year-old to “hypersexualized content” and promoted self-harm to a 17-year-old user.
In December, Texas Attorney General Ken Paxton announced he was launching an investigation into Character AI and 14 other tech firms over alleged violations of the state’s online privacy and safety laws for children. “These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” said Paxton in a press release.
Character AI is part of a booming industry of AI companionship apps — the mental health effects of which are largely unstudied. Some experts have expressed concerns that these apps could exacerbate feelings of loneliness and anxiety.
Character AI, which was founded in 2021 by former Google AI researchers Noam Shazeer and Daniel De Freitas, and which Google reportedly paid $2.7 billion to “reverse acquihire,” has claimed that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.
Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself occasionally, if mostly unsuccessfully.