Google has announced that it will require election advertisers to disclose when their ads contain synthetic or altered content. The new policy, which goes into effect in November, is designed to combat the spread of misinformation and disinformation in online political advertising.
Synthetic content is generated or altered with artificial intelligence (AI) tools, which can produce convincing fake images, video, and audio. Such content can be used to manipulate voters or to spread false information about candidates and issues.
The new policy requires election advertisers to prominently disclose when their ads contain synthetic content. The disclosure must be clear and conspicuous, placed where users are likely to see it.
The policy does not apply to ads whose use of synthetic content is inconsequential and not misleading. For example, an ad that uses AI tools only for routine edits such as image resizing, cropping, or color correction would not be required to disclose the use of synthetic content.
Google will enforce the new policy using a combination of human review and machine learning. Advertisers who violate the policy may have their ads disapproved or removed.
Sources for this piece include reporting from Axios.