Starting next week, the Google Photos app will add new disclosures when a photo is edited with one of its AI features, including Magic Editor, Magic Eraser, and Zoom Enhance. When you open a photo in Google Photos and scroll to the bottom of the “Details” section, it will now note that the photo was “edited with Google AI.”
Google says it introduced this disclosure to “further increase transparency,” but it's still not obvious at a glance when a photo has been edited with AI. There's no visual watermark inside the photo frame itself to signal that the image was altered. When someone sees a photo edited with Google's AI on social media, in a text message, or even while scrolling through the Photos app, nothing immediately indicates that the photo is a composite.
Google's new AI disclosure (Image source: Google)
On Thursday, a little more than two months after Google announced its new Pixel 9 phones packed with AI photo editing features, the company announced the new disclosures in a blog post. The move appears to be a response to the backlash Google received for widely distributing these AI tools without any visual watermark that people can spot at a glance.
For Google's other new photo editing features that don't use generative AI, Best Take and Add Me, Google Photos will also note in the photo's metadata that they've been edited, but not in the Details tab. These features combine multiple photos into a single clean image.
These new tags don't exactly solve the main issue people have with Google's AI editing features. A visual watermark inside the photo frame, at least one that's noticeable at a glance, might help people feel less duped, but Google doesn't include one.
All photos edited with Google AI already disclose that fact in their metadata, and the disclosure now also appears in the easier-to-find Details tab in Google Photos. The problem is that most people don't look at the metadata or the Details tab of photos they encounter on the internet. They just look, scroll on, and move along without digging any deeper.
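For readers who want to inspect a file themselves, the sketch below shows one way to look for AI-related markers in an image's embedded XMP metadata. It assumes the disclosure uses the public IPTC digital source type vocabulary; the exact fields Google writes aren't confirmed here, so treat the values as illustrative rather than authoritative.

```python
# Minimal sketch: scan a JPEG's embedded XMP packet for IPTC
# "digital source type" values associated with AI involvement.
# The URIs below come from the public IPTC vocabulary; whether
# Google writes exactly these values is an assumption.
import sys

def check_ai_metadata(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    # XMP is stored as an XML packet inside the image file.
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        print("No XMP packet found.")
        return
    xmp = data[start:end]
    if b"compositeWithTrainedAlgorithmicMedia" in xmp:
        print("Metadata indicates an AI-edited composite.")
    elif b"trainedAlgorithmicMedia" in xmp:
        print("Metadata indicates AI-generated media.")
    else:
        print("No AI-related digital source type found in XMP metadata.")

if __name__ == "__main__":
    check_ai_metadata(sys.argv[1])
```

Even a simple check like this illustrates the problem: it requires downloading the file and running a tool, which is far more effort than most people will spend on a photo they scroll past.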
To be fair, a visual watermark inside the photo frame isn't a perfect solution either; it can easily be cropped out or edited away, and then you're back to square one. We contacted Google to ask whether it's working on anything to help people quickly identify photos edited with Google AI, but we didn't receive an immediate response.
The proliferation of Google's AI imaging tools could increase the amount of synthetic content people see on the internet, making it harder to tell what's real and what's fake. Google's metadata-based approach relies on each platform to tell users when they're viewing AI-edited content. Meta already does this on Facebook and Instagram, and Google says it plans to flag AI images in Search later this year. Other platforms, however, have been slow to catch up.