Google on Thursday announced new guidelines for developers building AI apps distributed on Google Play, aiming to reduce inappropriate and otherwise prohibited content. The company said apps offering AI features must prevent the generation of restricted content, including sexual content and violence, and must give users a way to flag objectionable content they find. Additionally, Google said developers must “rigorously test” their AI tools and models to ensure they respect user safety and privacy.
Google is also cracking down on apps whose marketing materials promote inappropriate use cases, such as apps that undress people in photos or create nonconsensual nude imagery. If an app's ad copy says it can do these things, it may be banned from Google Play, whether or not it can actually do so.
The guidelines arrive amid a rise in AI undressing apps being promoted on social media in recent months. For example, an April report by 404 Media found that Instagram was running ads for apps claiming to use AI to generate deepfake nudes. One app advertised itself using a photo of Kim Kardashian and the slogan “Strip any girl for free.” Apple and Google removed the apps from their respective app stores, but the problem remains widespread.
Schools across the U.S. have reported problems with students distributing AI deepfake nudes and other inappropriate AI content of fellow students (and sometimes teachers) for the purposes of bullying and harassment. Last month, a racist AI deepfake of a Baltimore high school principal led to the arrest of the school's athletic director, who allegedly created it. Worse, the problem is reaching middle school students in some cases.
Google says its policies will help keep apps featuring AI-generated content that may be inappropriate or harmful to users off Google Play. The company points to its existing AI-Generated Content policy, which spells out the requirements for app approval on Google Play. AI apps must not generate restricted content, and they must give users a way to report offensive or inappropriate content, with developers monitoring and prioritizing that feedback. The latter is especially important in apps where “content and experiences are shaped” by user interactions, such as apps where popular models are ranked higher or displayed more prominently, Google said.
Developers also can't promote their apps in ways that violate Google Play's rules, per Google's App Promotion requirements. Promoting inappropriate use cases may get an app removed from the app store.
In addition, developers are responsible for safeguarding their apps against prompts that could manipulate their AI features into creating harmful or offensive content. Google says developers can use its closed testing feature to share early versions of their apps with users and gather feedback. The company strongly encourages developers to not only test before launch, but to document those tests as well, since Google could ask to review them in the future.
The company also published other resources and best practices, including a “People + AI Guidebook” aimed at supporting developers building AI apps.