Google says it's making changes to Google Search to make it clearer which images in search results have been generated or edited with AI tools.
In the coming months, Google will begin flagging images generated and edited by AI in the “About this image” window in Search, Google Lens, and Android's Circle to Search feature. Similar disclosures may apply to other Google properties, such as YouTube, in the future. Google says it plans to provide more information on this later this year.
Importantly, only images that contain “C2PA metadata” will be flagged as AI-manipulated in search. C2PA stands for Coalition for Content Provenance and Authenticity, a group that is developing technical standards for tracking the history of an image, including the equipment and software used to take or create it.
C2PA is backed by companies including Google, Amazon, Microsoft, OpenAI, and Adobe. But the coalition's standards have yet to be widely adopted. As The Verge noted in a recent article, C2PA faces a number of challenges around adoption and interoperability, and only a handful of generative AI tools and cameras from Leica and Sony support the group's specifications.
Additionally, C2PA metadata, like any other metadata, can be stripped out or corrupted to the point where it's unreadable. And images from some of the more popular generative AI tools, such as Flux, which xAI's Grok chatbot uses to generate images, don't carry C2PA metadata at all, in part because their creators haven't agreed to adopt the standard.
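For the technically curious, a C2PA manifest is just data embedded in the image file itself. The short Python sketch below is a simplified, illustrative check for whether a JPEG appears to carry a C2PA manifest at all: it scans the APP11 marker segments where the standard embeds its manifest store and looks for the "c2pa" label. It does not verify the manifest's signatures (that requires a full C2PA toolkit), and the function name and the byte-scanning shortcut are this example's own simplifications, not anything Google or the coalition ships.

```python
# Heuristic check for the presence of C2PA provenance metadata in a JPEG.
# This only detects an embedded manifest store; it does NOT validate it.
# Assumption: C2PA embeds its JUMBF manifest store in JPEG APP11 (0xFFEB)
# segments, so the "c2pa" label should appear inside one of those segments.

import struct
import sys


def has_c2pa_manifest(path: str) -> bool:
    """Scan a JPEG's APP11 segments for the 'c2pa' manifest-store label."""
    with open(path, "rb") as f:
        data = f.read()

    # JPEG files start with the SOI marker 0xFFD8.
    if not data.startswith(b"\xff\xd8"):
        return False

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the marker stream; stop scanning
        marker = data[i + 1]
        if marker == 0xFF:  # fill byte before a marker; skip it
            i += 1
            continue
        if marker == 0xDA:  # start of scan: entropy-coded image data follows
            break
        # Standalone markers (TEM, RST0-7, EOI) have no length field.
        if marker == 0x01 or 0xD0 <= marker <= 0xD9:
            i += 2
            continue
        # Other segments carry a 2-byte big-endian length that includes itself.
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + seg_len]
        # APP11 (0xFFEB) is where the C2PA/JUMBF manifest store lives.
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + seg_len
    return False


if __name__ == "__main__":
    for p in sys.argv[1:]:
        found = has_c2pa_manifest(p)
        print(p, "->", "C2PA metadata found" if found else "no C2PA metadata")
```

Because the manifest is ordinary embedded data, re-encoding an image, screenshotting it, or running it through a tool that rewrites metadata can remove it entirely, which is exactly the weakness described above.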
With deepfakes spreading rapidly, any action is arguably better than nothing: one estimate found a 245% increase in fraud involving AI-generated content from 2023 to 2024, and Deloitte predicts deepfake-related losses will soar from $12.3 billion in 2023 to $40 billion by 2027.
Surveys have shown that a majority of people are concerned about being fooled by deepfakes and the potential for AI to help spread propaganda.