Under scrutiny from activists and parents, OpenAI has formed a new team to study how to prevent its AI tools from being misused and abused by children.
OpenAI revealed the existence of the child safety team in a new job listing on its careers page. The company says the team works with platform policy, legal, and investigations groups within OpenAI, as well as with external partners, to manage "processes, incidents, and reviews" relating to minor users.
The team is currently hiring a child safety enforcement specialist, who will be responsible for applying OpenAI's policies to AI-generated content and working on review processes related to "sensitive" (presumably child-related) content.
Technology vendors of a certain size dedicate significant resources to complying with laws such as the U.S. Children's Online Privacy Protection Act (COPPA), which mandates controls over what children can access on the web and what kinds of data companies can collect about them. So the fact that OpenAI is hiring child safety experts isn't a complete surprise, especially since the company expects its underage user base to grow significantly in the future. (OpenAI's current terms of use require parental consent for children ages 13 to 18 and prohibit use by children under 13.)
However, the formation of the new team, which comes weeks after OpenAI announced a partnership with Common Sense Media to collaborate on AI guidelines for kids and landed its first education customer, also suggests wariness on OpenAI's part about running afoul of policies pertaining to minors' use of AI, and about attracting negative publicity.
Children and teens are increasingly turning to GenAI tools not only for schoolwork but also for personal issues. According to a poll by the Center for Democracy and Technology, 29% of kids reported having used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends, and 16% for family conflicts.
Some see this as a growing risk.
Last summer, schools and universities rushed to ban ChatGPT over fears of plagiarism and misinformation. Some have since lifted their bans. But not everyone is convinced of GenAI's potential for good: surveys such as one from the UK Safer Internet Centre found that more than half (53%) of kids reported having seen peers use GenAI in a negative way, such as creating believable false information or images intended to upset someone.
In September, OpenAI published documentation for using ChatGPT in the classroom, with prompts and FAQs, to offer educators guidance on using GenAI as a teaching tool. In one of its support articles, OpenAI acknowledges that its tools, specifically ChatGPT, "may produce output that is not appropriate for all viewers or all age groups," and advises caution when exposing children to them, even children who meet the age requirements.
Calls for guidelines on children's use of GenAI are growing.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) called on governments to regulate the use of GenAI in education, including imposing age limits for users and guardrails around data protection and user privacy. "Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice," UNESCO Director-General Audrey Azoulay said in a press release. "It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments."