YouTube will require disclosure of AI content

YouTube is set to implement new policy changes next year, requiring creators to disclose the use of generative AI in videos, especially for content depicting sensitive topics such as politics and health issues.

These changes are a response to the rapidly advancing capabilities of generative AI in creating realistic-looking videos.

Under the new policies, YouTube will:
– Require creators to disclose if generative AI has been used to create scenes that depict fictional events or show real people saying things they did not actually say.
– Allow individuals to request the removal of content that simulates an identifiable person, including their face or voice. This removal request, however, will not be automatically granted, with a higher threshold for moderation applied to satire, parody, or content involving public figures.
– Establish a separate process for music industry partners to seek the removal of content that imitates an artist’s unique singing or rapping voice.
– Ensure full disclosure of any generative AI tools used in YouTube’s own content production.

The disclosure requirement is mandatory for creators, and failure to comply could lead to content removal or other penalties. YouTube emphasizes that while AI can enable powerful storytelling, it also has the potential to mislead viewers, particularly if they are not aware that the content has been altered or synthetically created.

The manner in which AI usage is disclosed to viewers will depend on the sensitivity of the content. For most videos, the disclosure will appear on the video’s description screen. However, for videos addressing sensitive topics like politics, military conflicts, and health issues, YouTube plans to make these labels more prominent.

YouTube also noted that all its standard content guidelines, including those governing violence and hate speech, will apply to AI-generated videos. This move by YouTube reflects a growing awareness of the ethical implications and potential risks associated with AI-generated content, particularly in the context of misinformation and the integrity of online information.

Sources include: Axios
