Character AI, a platform that lets users role-play with AI chatbots, has filed a motion to dismiss a lawsuit brought by the parent of a teenager who died by suicide after allegedly becoming addicted to the company's technology.
In October, Megan Garcia filed the lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando, over the death of her son, Sewell Setzer III. Garcia said her 14-year-old son developed an emotional attachment to a chatbot on Character AI, "Dany," texting it constantly and withdrawing from the real world.
Following Setzer's death, Character AI announced that it would be rolling out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that could remove the ability of chatbots on Character AI to tell stories and personal anecdotes.
In their motion to dismiss, Character AI's lawyers argue that the platform, like computer code, is protected from liability by the First Amendment. The motion may not persuade a judge, and Character AI's legal arguments could shift as the case progresses, but it likely hints at early elements of the company's defense.
“The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,” the filing states. “The only difference between this case and those that have come before is that some of the speech here involves AI. But whether it is a conversation with an AI chatbot or an interaction with a video game character, the context of the expressive speech does not change the First Amendment analysis.”
To be clear, Character AI's attorneys are not asserting the company's own First Amendment rights. Rather, the motion argues that Character AI's users would have their First Amendment rights violated if the lawsuit against the platform were to succeed.
The motion does not address whether Character AI would be shielded from liability under Section 230 of the Communications Decency Act, the federal safe harbor law that protects social media and other online platforms from liability for third-party content. The law's authors have suggested that Section 230 does not protect output from AI, such as Character AI's chatbots, but that is far from a settled legal question.
Character AI's lawyers also claim that Garcia's real intention is to “shut down” Character AI and prompt legislation regulating similar technologies. If the plaintiffs succeed, the platform's lawyers say, it would have a “chilling effect” on both Character AI and the nascent generative AI industry as a whole.
“Apart from counsel's stated intention to ‘shut down’ Character AI, [their complaint] seeks sweeping changes that would substantially limit the nature and amount of speech on the platform,” the filing states. “These changes would fundamentally limit the ability of Character AI’s millions of users to generate and participate in conversations with characters.”
The lawsuit, which also names Character AI's corporate backer Alphabet as a defendant, is just one of several the company faces over how minors interact with AI-generated content on its platform. Other lawsuits allege that Character AI exposed a 9-year-old child to “hypersexualized content” and encouraged a 17-year-old user to self-harm.
In December, Texas Attorney General Ken Paxton announced the opening of an investigation into Character AI and 14 other technology companies for allegedly violating the state's online privacy and child safety laws. “These investigations are an important step in ensuring social media and AI companies comply with laws designed to protect children from exploitation and harm,” Paxton said in a press release.
Character AI is part of a burgeoning industry of AI companion apps, whose effects on mental health remain largely unstudied. Some experts have expressed concern that these apps could worsen feelings of loneliness and anxiety.
Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to “reverse acquihire,” says it continues to take steps to improve safety and moderation. In December, the company announced new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.
Character AI has undergone a number of personnel changes since Shazeer and the company's other co-founder, Daniel De Freitas, left for Google. The platform hired former YouTube executive Erin Teague as chief product officer and named Dominic Perella, its general counsel, interim CEO.
Character AI recently began testing games on the web in an effort to boost user engagement and retention.