Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over children's privacy and safety concerns. The investigation will assess whether Character.AI and other platforms popular with young people, such as Reddit, Instagram, and Discord, comply with Texas child privacy and safety laws.
Paxton, who often takes a tough stance on technology companies, says the investigation will examine whether these platforms comply with two Texas laws: the Securing Children Online through Parental Empowerment Act (SCOPE Act) and the Texas Data Privacy and Security Act (DPSA).
These laws require platforms to provide parents with tools to manage the privacy settings of their children's accounts, and they impose strict consent requirements on tech companies collecting data on minors. Paxton argues that both laws also apply to how minors interact with AI chatbots.
“These investigations are an important step in ensuring social media and AI companies comply with laws designed to protect children from exploitation and harm,” Paxton said in a press release.
Character.AI, which lets users create generative AI chatbot characters they can text and talk with, has recently been named in a number of child-safety lawsuits. The company's chatbots quickly became popular among young users, but several parents allege in lawsuits that Character.AI's chatbots made inappropriate and offensive comments to their children.
In one case in Florida, a 14-year-old boy formed a romantic attachment to a Character.AI chatbot and told it he was suicidal in the days leading up to his death. In another case in Texas, one of Character.AI's chatbots allegedly suggested to an autistic teenager that he poison his family. Another parent in the Texas case claims that one of Character.AI's chatbots exposed their 11-year-old daughter to sexual content for two years.
“We are currently reviewing the Attorney General's announcement. As a company, we take the safety of our users very seriously,” a Character.AI spokesperson said in a statement to TechCrunch. “We welcome cooperation with regulators and recently announced that we would be releasing some of the features mentioned in the release, including parental controls.”
Character.AI on Thursday announced new safety features aimed at protecting teens, saying the updates will restrict its chatbots from starting romantic conversations with minors. The company also began training a separate model specifically for teenage users last month. Eventually, it hopes to have adults use one model on the platform and minors another.
These are just the latest safety updates Character.AI has released. The same week the Florida lawsuit became public, the company announced it was expanding its trust and safety team and had recently hired a new head of that department.
As expected, problems with AI companionship platforms have emerged as their popularity has grown. Last year, Andreessen Horowitz (a16z) wrote in a blog post that it sees AI companionship as an undervalued part of the consumer internet in which it plans to invest more. A16z is an investor in Character.AI and continues to back other AI companionship startups, most recently a company whose founders want to recreate the technology from the movie “Her.”
Reddit, Meta, and Discord did not immediately respond to requests for comment.