Following a series of lawsuits and criticism alleging that it has failed to protect minor users from harm, Character.AI, a startup that lets users create different AI characters and talk to them via calls or text, said Tuesday that it is rolling out a new set of parental supervision tools aimed at improving safety for teenage users.
Character.AI will send guardians and parents a weekly email summarizing their teen's activity on the app. The emails will show the average amount of time the teen spent on the app and on the web, how long they spent talking with each character, and the top characters they interacted with that week.
The startup says the data is intended to give parents insight into their teens' engagement habits on the platform. The company specified that parents cannot directly access the chats themselves.
Following lawsuits last year, the startup introduced measures to remind users that they are chatting with AI-powered characters, including a dedicated model for users under the age of 18, time-spent notifications, and safety features such as disclaimers. The company also built new classifiers for teens to block sensitive content in both inputs and outputs.
Earlier this year, the startup filed a motion to dismiss a lawsuit alleging that it played a role in a teenager's suicide.