Following an investigation into how Meta handles AI-generated explicit images, the company's semi-independent watchdog, the Oversight Board, is calling on it to refine its policies on such images. The Board wants Meta to change the term it uses from "derogatory" to "non-consensual," and to move its policy on such images from the "Bullying and Harassment" section to the "Sexual Exploitation Community Standards" section.
Currently, Meta's policy on AI-generated explicit images stems from the "derogatory sexualized Photoshop" rule in its Bullying and Harassment section. The Board also asked Meta to replace the word "Photoshop" with a more general term for manipulated media.
Additionally, Meta currently prohibits non-consensual imagery only if it is "non-commercial or produced in a private setting." The Board suggested that this clause should not be a requirement for removing or banning images that are AI-generated or manipulated without consent.
The recommendations follow two high-profile incidents in which explicit AI-generated images of celebrities posted on Instagram and Facebook landed Meta in hot water.
One of these cases involved AI-generated nude images of Indian celebrities posted on Instagram. Although several users reported the images, Meta did not remove them and closed the report within 48 hours with no further review. Users appealed the decision, but the report was closed again. The company only acted after the Oversight Board took up the case, removing the content and banning the accounts.
In the other case, an AI-generated image resembling a US celebrity was posted to Facebook. Meta already had the image in its Media Matching Service (MMS) repository (a bank of images that violate its terms of service, used to find similar images) thanks to media reports, and it quickly removed the picture when another user uploaded it to Facebook.
Notably, Meta added the images of the Indian celebrities to its MMS bank only after the Oversight Board's intervention, and the company apparently told the Board that the images were not in the repository earlier because there had been no media coverage of the issue.
"This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search out and report every instance," the Board said in its note.
Breakthrough Trust, an Indian organization that campaigns to reduce gender-based violence online, noted that these issues, and Meta's policies, have cultural implications. In comments submitted to the Oversight Board, Breakthrough said non-consensual imagery is often trivialized as identity theft rather than gender-based violence.
"Victims often face secondary victimization when reporting such cases to police stations or courts ('why did you put your photo up?'), even when it is not their own photo, as with deepfakes. Once on the internet, a photo spreads beyond the source platform very fast, so taking it down on the source platform alone is not enough, because it quickly proliferates to other platforms," Barsha Chakraborty, the organization's head of media, wrote to the Oversight Board.
Chakraborty told TechCrunch over a call that users often don't know their reports are automatically marked "resolved" within 48 hours, and that Meta shouldn't apply the same timeline to every case. She also suggested the company should work on raising awareness of these issues.
Devika Malik, a platform policy expert who previously worked on Meta's South Asia policy team, told TechCrunch earlier this year that platforms rely heavily on user reports to remove non-consensual imagery, which may not be a reliable approach to dealing with AI-generated media.
"This places an unfair onus on the affected user to prove their identity and the lack of consent (as is the case with Meta's policy). This can become more error-prone with synthetic media, and the time it takes to capture and verify these external signals allows the content to gain harmful traction," Malik said.
Aparajita Bharti, founding partner at Delhi-based think tank The Quantum Hub (TQH), said Meta should allow users to provide more context when reporting content as they may not be aware of the various categories of violations under Meta's policies.
"We expect Meta to go beyond the final ruling [of the Oversight Board] and enable flexible, user-centric channels for reporting this kind of content," she said.
"We acknowledge that users can't be expected to have a perfect understanding of the nuances between the various reporting categories, and we advocate for systems that prevent genuine issues from falling through the cracks because of technicalities in Meta's content moderation policies."