AI could make digital clones of politicians appear to say whatever their creators want, a risk that could cause all sorts of problems this election season.
That's why Google is taking steps to mitigate the political dangers of deepfakes (realistic, AI-generated depictions of real people) by requiring advertisers to disclose when they use them in election ad campaigns.
In a Monday update to its Political Content Policy, Google notified election advertisers that verification is now required to run election ads.
Digital face scan. Image credit: Getty Images
The policy applies to the United States. As of Monday, Google is requiring verification from all advertisers running US election ads for federal, state, and territory campaigns; local campaigns will also be affected.
Related: AI clones gain human emotions, Synthesia deepfakes look real
Google’s update requires advertisers to check the “altered or synthetic content” box if they want to run ads that contain deepfakes — one example would be an ad that alters existing video footage to make a political candidate appear to say something they didn’t actually say.
Under Google's requirements, any "altered or synthetic" media disclosure in an ad must be clear and prominent. This applies to images, video, and audio.
Certain alterations, including editing techniques such as cropping and color correction, are not subject to disclosure.
Deepfakes have shocked the public in recent years with realistic depictions of everyone from Selena Gomez to the Pope.
In May, hackers used a cloned voice and likeness of the CEO of WPP, the world’s largest advertising agency, in an attempt to extract money and personal information from the company.
Related: Scammers deepfake WPP CEO in attempt to impersonate him and steal money
According to the same update, Google will automatically generate a "Paid for by" disclosure for election ads.