Google is preparing to roll out new rules for political ads on its platforms, aiming to address the growing problem of synthetic content generated by artificial intelligence (AI). The changes are set to take effect in November, well ahead of the upcoming US presidential election. The move responds to the escalating use of AI tools to produce deceptive content, which has raised concerns that disinformation campaigns could gain traction.
Google’s existing advertising policies already prohibit the manipulation of digital media to deceive or mislead the public on matters related to politics, social issues, or public concerns. However, the upcoming update will require political ads linked to elections to conspicuously disclose the presence of “synthetic content” that portrays real or lifelike individuals and events. This disclosure could take the form of labels such as “this image does not depict real events” or “this video content was synthetically generated.”
Additionally, Google’s advertising policy explicitly prohibits demonstrably false claims that could erode trust in the electoral process. Political ads on Google are already required to disclose their funding sources, and the details about these messages are accessible through an online ads library. The disclosure of digitally altered content in election-related ads must be prominently displayed, ensuring that viewers are likely to notice it.
Examples of content that would warrant such labeling include synthetic images or audio depicting individuals saying or doing things they never did, or events that never took place. Google's move follows incidents such as the circulation of an AI-generated fake image purporting to show former US President Donald Trump being arrested. Similarly, a deepfake video that spread in March appeared to show Ukrainian President Volodymyr Zelensky discussing surrendering to Russia. In June, a campaign video for Ron DeSantis attacking the former president included images bearing hallmarks of AI manipulation, showing Mr. Trump embracing Anthony Fauci and kissing him on the cheek.
Experts in AI have emphasized that while manipulated imagery is not a new phenomenon, the rapid advance of generative AI technology, coupled with its potential for misuse, is cause for concern. Google's proactive stance signals its commitment to curbing the spread of AI-generated disinformation, particularly in political advertising. As the November deadline approaches, it remains to be seen how these measures will reshape political communication and digital advertising.