In early February, Meta announced that it would begin labeling photos created with AI tools on its social networks. Since May, Meta has routinely labeled some photos as “Made with AI” on Facebook, Instagram, and its Threads app.
But the company's approach has sparked outrage from users and photographers, because it applies the “Made with AI” label to photos that weren't created with AI tools at all.
There are many examples of Meta automatically labeling photos that weren't created with AI, like this photo of the Kolkata Knight Riders winning the Indian Premier League cricket tournament. Notably, the label appears only in the mobile apps, not on the web.
Kolkata Knight Riders Instagram photo labeled “Made with AI.” Image credit: Instagram (screenshot)
Many other photographers have also raised concerns that their photos were mistakenly labeled “Made with AI,” arguing that merely using an editing tool shouldn't qualify a photo for the label.
Former White House photographer Pete Souza said in an Instagram post that one of his photos had the new label. In an email to TechCrunch, Souza explained that Adobe changed how its cropping tool works, requiring photographers to “flatten the image” before saving it as a JPEG. He suspects that this action triggered Meta's algorithm to attach the label.
“I find it annoying that I'm forced to include the phrase 'made with AI' in my posts even though I unchecked it,” Souza told TechCrunch.
Photo taken by Pete Souza but labeled by Instagram as “Made with AI.” Image credit: Instagram (screenshot)
Meta did not publicly respond to TechCrunch's questions about Souza's experience or about other photographers who say their posts were mistakenly tagged.
In a February blog post, Meta said it relies on image metadata to decide when to apply the label.
“We're building industry-leading tools that can identify invisible markers at scale — specifically, the ‘AI generated' information in the C2PA and IPTC technical standards — so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools,” the company said at the time.
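To illustrate how this kind of metadata-based detection can work, here is a minimal sketch, not Meta's actual pipeline, that checks a JPEG for the IPTC “digital source type” values the standard defines for AI-generated and AI-edited media. The file path and function name are illustrative, and the string-matching approach is a deliberate simplification.

```python
# Illustrative sketch only; this is not Meta's implementation.
# IPTC's DigitalSourceType vocabulary includes values such as
# "trainedAlgorithmicMedia" (fully AI-generated) and
# "compositeWithTrainedAlgorithmicMedia" (AI used during editing).
# XMP metadata is stored as plain XML text inside the image file, so a
# crude check can simply search the raw bytes for those markers.

from pathlib import Path

# Assumed markers a platform might look for (IPTC NewsCodes URIs end
# with these path segments).
AI_SOURCE_TYPES = {
    b"digitalsourcetype/trainedAlgorithmicMedia": "generated with AI",
    b"digitalsourcetype/compositeWithTrainedAlgorithmicMedia": "edited with AI",
}


def guess_ai_label(image_path: str) -> str | None:
    """Return a rough label if the file's embedded metadata flags AI use."""
    data = Path(image_path).read_bytes()
    for marker, label in AI_SOURCE_TYPES.items():
        if marker in data:
            return label
    return None  # no AI-related metadata found, or it was stripped


if __name__ == "__main__":
    # "photo.jpg" is a placeholder path.
    print(guess_ai_label("photo.jpg"))
```

A production system would parse and verify the C2PA manifest rather than string-match, but the sketch shows why an edit made in a tool like Photoshop can carry the same kind of marker as a fully generated image.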
As PetaPixel reported last week, Meta appears to apply the “Made with AI” label when photographers use tools like Adobe Photoshop's Generative Fill to remove objects.
Meta hasn't clarified when it automatically applies the label, but some photographers back the company's approach, arguing that any use of AI tools should be disclosed.
For now, Meta doesn't offer separate labels to indicate whether a photographer used AI tools to clean up a photo or whether AI was used to generate it outright, so it can be hard for users to tell how much AI was actually involved. The label's explanation, “Generative AI may have been used to create or edit the content of this post,” appears only if you tap on the label.
At the same time, plenty of clearly AI-generated photos on Meta's platforms go unlabeled by its algorithms. With the US election just months away, social media companies are under more pressure than ever to handle AI-generated content correctly.