Meta's semi-independent policy council, the Oversight Board, is turning its attention to how the company's social platforms handle explicit AI-generated images. On Tuesday, the board announced investigations into two separate cases over how Instagram in India and Facebook in the US handled AI-generated images of public figures, after Meta's systems failed to detect and respond to the explicit content.
In both cases, the platforms have now removed the media. The board is not naming the individuals depicted in the AI images "to avoid gender-based harassment," according to an email sent to TechCrunch by Meta.
The board takes up cases concerning Meta's moderation decisions. Users must first appeal a moderation move to Meta itself before approaching the Oversight Board. The board plans to publish its full findings and conclusions in due course.
Case details
In the first case, the board said, a user reported an AI-generated nude of an Indian public figure on Instagram as pornography. The image was posted by an account that exclusively shares AI-generated images of Indian women, and the majority of users who interact with these images are based in India.
Meta failed to remove the image after the initial report, and the report ticket was closed automatically after 48 hours when the company did not review it further. When the user appealed that decision, the report was again closed automatically without any review by Meta. In other words, even after two reports, the explicit AI-generated image remained on Instagram. The user then appealed to the board, and only at that point did the company act, removing the objectionable content for violating its community standards on bullying and harassment.
The second case involved Facebook, where a user posted an explicit AI-generated image resembling a US public figure in a group focused on AI creations. In this instance, the social network had already removed the image when it was posted earlier by another user, and Meta had added it to a Media Matching Service bank under the category "derogatory sexual photoshops and drawings."
When TechCrunch asked why the board took up a case in which the company did successfully remove an explicit AI-generated image, the board said it selects cases that are "emblematic of broader issues across Meta's platforms." These cases, it added, help the advisory board assess the global effectiveness of Meta's policies and processes on a variety of topics.
"We know that Meta can moderate content more quickly and effectively in some markets and languages than in others. We want to examine whether Meta is protecting all women around the world in an equitable way," Oversight Board co-chair Helle Thorning-Schmidt said in a statement.
“The Board believes it is important to consider whether Meta's policies and enforcement practices are effective in addressing this issue.”
The issue of deepfake porn and online gender-based violence
In recent years, some, but not all, generative AI tools have expanded to allow users to generate pornography. As TechCrunch previously reported, groups like Unstable Diffusion are exploiting blurred ethical lines and biased data to monetize AI porn.
Deepfakes are also a concern in regions such as India. Last year, a BBC report noted that the number of deepfake videos of Indian actresses had recently surged. Data shows that women are more commonly targeted by deepfake videos.
Earlier this year, Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies' approach to combating deepfakes.
"If a platform thinks it can get away without taking down deepfake videos, or simply maintains a casual approach to them, we have the power to protect our citizens by blocking such platforms," Chandrasekhar said at a press conference at the time.
India is considering incorporating specific rules related to deepfakes into the law, but nothing has been decided yet.
The country has legal provisions for reporting online gender-based violence, but experts say the process can be tedious and that there is often little support. In a study published last year, Indian advocacy group IT for Change said that courts in India need robust processes to address online gender-based violence and should not trivialize these cases.
Aparajita Bharti, co-founder of India-based public policy consulting firm Quantum Hub, said limits need to be placed on AI models to stop them from creating explicit content that causes harm.
"The main risk of generative AI is that it increases the volume of such content, because generating it is easy and the results are sophisticated. So we need to first prevent the creation of such content by training AI models to restrict it. Default labeling should also be introduced for easy detection," Bharti told TechCrunch over email.
Currently, only a handful of laws around the world address the production and distribution of pornography generated with AI tools. Several US states have enacted laws against deepfakes, and this week the UK introduced legislation that would criminalize the use of AI to create sexually explicit images.
Meta's response and next steps
In response to the Oversight Board's cases, Meta said it removed both pieces of content. However, the social media company did not address the fact that it failed to remove the content on Instagram after users' initial reports, or how long the content remained live on the platform.
Meta said it uses a combination of artificial intelligence and human reviews to detect sexually suggestive content. The social media giant said it doesn't recommend this type of content in places like Instagram Explore and Reels Recommendations.
The Oversight Board is seeking public comments until April 30 on the harms caused by deepfake pornography, contextual information about the proliferation of such content in regions such as the US and India, and the possible pitfalls of Meta's approach to detecting explicit AI-generated images. The board will review the cases and public comments, and will post its decision on its site in the coming weeks.
These cases show that large platforms are still grappling with outdated moderation processes at a time when AI-powered tools let users create and distribute many kinds of content quickly and easily. Companies like Meta are experimenting with AI both for content generation and for detecting such imagery, but perpetrators keep finding ways to evade these detection systems and post problematic content on social platforms.