To address growing concerns about disinformation spread through images generated by artificial intelligence (AI), Google has begun trialling an advanced digital watermarking system. The tool, called SynthID, was developed by DeepMind, Google's AI division, and is designed to identify images created by machines rather than humans.
SynthID operates by subtly modifying individual pixels within images, incorporating changes that are invisible to the human eye but can be detected by computers. Unlike traditional watermarks, these modifications are virtually imperceptible, making them more challenging to manipulate or remove. This marks a critical advancement, especially considering the increasing complexity of distinguishing genuine images from those generated by AI, as demonstrated in quizzes like BBC Bitesize’s AI or Real.
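DeepMind has not published how SynthID actually embeds its signal, so the idea can only be illustrated with a much simpler, classic technique: least-significant-bit (LSB) watermarking, where each targeted pixel value changes by at most 1 out of 255. The bit pattern, function names, and image here are all hypothetical, and unlike SynthID this toy scheme would not survive cropping or re-encoding.

```python
import numpy as np

# Hypothetical 8-bit signature marking an image as machine-generated
WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_watermark(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> np.ndarray:
    """Hide the bit pattern in the least significant bits of the first pixels."""
    marked = image.copy()
    flat = marked.ravel()
    # Clear each pixel's lowest bit, then write in one watermark bit
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return marked

def detect_watermark(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> bool:
    """Check whether the expected bit pattern is present in the pixel LSBs."""
    flat = image.ravel()
    return bool(np.array_equal(flat[: len(bits)] & 1, bits))

# Toy 4x4 grayscale "image" with reproducible random pixel values
image = np.random.default_rng(0).integers(0, 256, size=(4, 4), dtype=np.uint8)
marked = embed_watermark(image)

# Each pixel changes by at most 1 out of 255, far below what the eye can see
assert int(np.abs(marked.astype(int) - image.astype(int)).max()) <= 1
assert detect_watermark(marked)
```

The key contrast with SynthID is robustness: an LSB mark like this is destroyed by cropping, resizing, or JPEG compression, whereas DeepMind's system is reported to survive exactly those edits.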
The prevalence of AI-powered image generators has surged, with tools such as Midjourney reporting more than 14.5 million users. These tools let users create images in seconds from simple text instructions, raising important questions about copyright and ownership on a global scale.
Google has its own AI image generator, Imagen, and the watermarking system will initially apply only to images produced with that tool. Whereas conventional watermarks are typically logos or text overlaid on an image, DeepMind's SynthID operates at the pixel level, making it far more resilient to tampering and manipulation.
Traditional watermarking techniques have long been employed to indicate ownership and deter unauthorized use of images. However, these methods are not suitable for identifying AI-generated images due to their susceptibility to cropping or editing. In contrast, Google’s system introduces an effectively invisible watermark that enables instantaneous verification of an image’s authenticity, whether it was created by a human or a machine.
Pushmeet Kohli, the head of research at DeepMind, emphasized that the subtlety of their watermarking system ensures that “to you and me, to a human, it does not change.” This nuanced modification remains detectable even after subsequent edits or cropping. Kohli explained that the watermark persists, even when alterations are made to factors like color, contrast, or size.
However, Google describes this launch as “experimental,” urging users to engage with the system to help assess its robustness. The move aligns with the voluntary agreement made in the US by seven leading AI companies, including Google, to prioritize the responsible development and use of AI, which encompasses implementing watermarks to help verify image authenticity.
While Google’s initiative is a step in the right direction, Claire Leibowicz from the Partnership on AI advocacy group emphasizes the need for more cohesive approaches across businesses. Standardization, she asserts, would greatly benefit the field by streamlining various methods and facilitating clearer reporting on their effectiveness.
This move toward watermarking AI-generated content isn't limited to Google. Microsoft and Amazon have also committed to applying watermarks to certain AI-generated materials. Beyond images, Meta (formerly Facebook) has said it will add watermarks to videos produced by its unreleased Make-A-Video generator. And in an effort to ensure transparency over AI-generated works, China has gone further, banning AI-generated images that do not carry watermarks.
Google's effort to tackle AI-generated disinformation through invisible watermarks represents a significant stride toward digital transparency and responsible AI use. As the technology continues to evolve, such advances will play a pivotal role in maintaining the integrity of visual content across the digital landscape.