Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from the Oversight Board. Starting next month, the company will label a wider range of content, including applying a “Made with AI” badge to deepfakes. Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of misleading the public on important issues.
The move could lead to the social networking giant labeling more potentially misleading content, which matters in a year of many elections around the world. For deepfakes, however, Meta will only apply the label if the content in question carries “industry standard AI image indicators” or if the uploader discloses that the content is AI-generated.
AI-generated content that falls outside those bounds will likely slip through unlabeled.
The policy change is also likely to mean more AI-generated content and manipulated media remaining on Meta's platforms, since the company is shifting to favor an approach focused on “transparency and providing additional context” as the “better way to deal with this content” (rather than removing manipulated media), on the grounds that it poses fewer risks to free speech.
So for AI-generated or otherwise manipulated media on Meta platforms like Facebook and Instagram, the strategy appears to be: more labels, fewer removals.
Meta said that in July it will stop removing content solely on the basis of its current manipulated video policy, per a blog post published Friday.
This change in approach may be intended to respond to growing legal demands on Meta around content moderation and systemic risk, such as the European Union's Digital Services Act. Since August last year, the EU law has applied a set of rules to Meta's two major social networks, forcing the company to walk a fine line between removing illegal content, mitigating systemic risks, and protecting free speech. The EU is also applying extra pressure on platforms ahead of European Parliament elections this June, including asking tech giants to watermark deepfakes where technically feasible.
The US presidential election in November is also likely on Meta's mind.
Criticism from the Oversight Board
The Oversight Board, which the tech giant funds but allows to operate at arm's length, reviews a small subset of Meta's content moderation decisions and can also make policy recommendations. Meta is not obligated to accept the Board's recommendations, but in this case it has agreed to modify its approach.
Monika Bickert, Meta's vice president of content policy, said in a blog post published Friday that the company will revise its policies on AI-generated content and manipulated media based on the Board's feedback. “We agree with the Board's argument that our existing approach is too narrow, since it only covers videos that are created or altered by AI to make a person appear to say something they did not say,” she wrote.
Back in February, the Oversight Board urged Meta to rethink its approach to AI-generated content after reviewing a doctored video of President Biden, in which a platonic kiss with his granddaughter was edited to suggest a sexual motive.
The Board agreed with Meta's decision to leave that particular content up, but attacked the company's manipulated media policy as “incoherent,” pointing out, for example, that it applies only to video created with AI, letting other fake content (such as more basic doctored video or audio) off the hook.
Meta appears to have taken the critical feedback on board.
“Over the past four years, and especially in the last year, this technology has rapidly evolved alongside other types of realistic AI-generated content, such as audio and photos,” Bickert wrote. “As the Board noted, it is equally important to address manipulation that makes it appear that a person is doing something they did not do.”
“The Board also argued that removing manipulated media that does not violate our Community Standards risks unnecessarily restricting freedom of expression, and it recommended a ‘less restrictive’ approach.”
Earlier this year, Meta announced that it was working with others in the industry to develop common technical standards for identifying AI content, including video and audio. The company is now relying on this effort to expand labeling of synthetic media.
“Our ‘Made with AI’ labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI images, or on people self-disclosing that they are uploading AI-generated content,” Bickert said. Note that the company already applies an “Imagined with AI” label to photorealistic images created using its own Meta AI feature.
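Those “industry-shared signals” generally refer to provenance metadata embedded in media files, such as the IPTC DigitalSourceType field that C2PA-style tooling can write. As a rough, hypothetical illustration of the kind of check a labeling pipeline might run (this is not Meta's actual detection code, and real systems would also use invisible watermarks and self-disclosure flags), the sketch below uses the exiftool CLI to look for an AI-provenance marker:

```python
# Hypothetical sketch: check an image for an industry-standard AI-provenance
# marker (IPTC DigitalSourceType). Requires the exiftool CLI to be installed.
import json
import subprocess

# IPTC NewsCodes values commonly used to flag synthetic or AI-composited media
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file carries a DigitalSourceType value that
    indicates AI-generated or AI-composited media."""
    out = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", path],
        capture_output=True, text=True, check=True,
    )
    records = json.loads(out.stdout)  # exiftool emits one JSON object per file
    return records[0].get("DigitalSourceType", "") in AI_SOURCE_TYPES

if __name__ == "__main__":
    print(has_ai_provenance_marker("example.jpg"))  # hypothetical test file
```

Metadata checks like this are easy to strip, which is one reason self-disclosure and watermarking are part of the picture as well.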
Bickert said the expanded policy covers “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling.”
“If we determine that digitally created or altered images, video, or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” she wrote. “This overall approach gives people more information about the content so they can better assess it, and so they will have context if they see the same content elsewhere.”
Meta says it will not remove manipulated content, whether AI-generated or otherwise doctored, unless it violates other policies (such as those on voter interference, bullying and harassment, violence and incitement, or other Community Standards issues). Instead, as noted above, it may add “informational labels and context” in certain scenarios of high public interest.
Meta's blog post highlights the network of nearly 100 independent fact-checkers the company says it engages to help identify risks related to manipulated content.
Per Meta, these external entities will continue to review false and misleading AI-generated content. When content is rated as “False or Altered,” Meta says it will respond with algorithmic changes that reduce the content's reach, meaning it appears lower in the feed and fewer people see it, and Meta will also place an overlay label with additional information for the eyeballs that do land on it.
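Pulling the policy together: content is removed only if it violates other Community Standards, demoted and overlay-labeled if fact-checkers rate it “False or Altered,” and otherwise simply labeled when provenance signals or self-disclosure indicate AI. A minimal sketch of that decision flow, with entirely hypothetical field names rather than Meta's actual logic:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentItem:
    # All fields are hypothetical stand-ins for signals described above.
    violates_other_policies: bool        # e.g. voter interference, harassment
    fact_check_rating: Optional[str]     # e.g. "False or Altered"
    has_ai_provenance_signal: bool       # industry marker or self-disclosure
    high_risk_of_deception: bool         # the "particularly high risk" case

@dataclass
class Decision:
    remove: bool = False
    demote: bool = False
    labels: List[str] = field(default_factory=list)

def moderate(item: ContentItem) -> Decision:
    d = Decision()
    # Removal is reserved for violations of other Community Standards.
    if item.violates_other_policies:
        d.remove = True
        return d
    # Fact-checked "False or Altered" content is demoted and overlay-labeled.
    if item.fact_check_rating == "False or Altered":
        d.demote = True
        d.labels.append("overlay: rated False or Altered")
    # Detected or self-disclosed AI content gets the transparency label.
    if item.has_ai_provenance_signal:
        d.labels.append("Made with AI")
    # Especially deceptive media gets a more prominent contextual label.
    if item.high_risk_of_deception:
        d.labels.append("prominent: additional context")
    return d
```

The notable design choice the article describes is that, outside of other policy violations, every branch ends in labeling or demotion rather than removal.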
These third-party fact-checkers seem likely to face a growing workload as synthetic content proliferates, driven by the boom in generative AI tools, not least because this policy change means more of that content will remain on Meta's platforms.