Google to require election advertisers to disclose synthetic content


Google has announced that it will require election advertisers to disclose when their ads contain synthetic or altered content. The new policy, which goes into effect in November, is designed to combat the spread of misinformation and disinformation in online political advertising.

Synthetic content is generated or altered using artificial intelligence (AI) tools and can include fake images, videos, and audio. Such content can be used to manipulate voters or to spread false information about candidates or issues.

The new policy requires election advertisers to prominently disclose the use of synthetic content in their ads. The disclosure must be clear and conspicuous, and must be placed in a location where users are likely to see it.

The policy does not apply to ads that use synthetic content in ways that are inconsequential to the ad's claims. For example, an ad that uses AI tools only for routine edits, such as resizing, cropping, or color-correcting an image, would not be required to disclose the use of synthetic content. By contrast, a doctored photo that materially alters a candidate's appearance would trigger the disclosure requirement.

Google will enforce the new policy using a combination of human review and machine learning. Advertisers who violate the policy may have their ads disapproved or removed.

The sources for this piece include an article in Axios.
