YouTube will require disclosure of AI content


YouTube is set to implement new policy changes next year, requiring creators to disclose the use of generative AI in videos, especially for content depicting sensitive topics such as politics and health issues.

These changes are a response to the rapidly advancing capabilities of generative AI in creating realistic-looking videos.

Under the new policies, YouTube will:
– Require creators to disclose if generative AI has been used to create scenes that depict fictional events or show real people saying things they did not actually say.
– Allow individuals to request the removal of content that simulates an identifiable person, including their face or voice. This removal request, however, will not be automatically granted, with a higher threshold for moderation applied to satire, parody, or content involving public figures.
– Establish a separate process for music industry partners to seek the removal of content that imitates an artist’s unique singing or rapping voice.
– Ensure full disclosure of any generative AI tools used in YouTube’s own content production.

The disclosure requirement is mandatory for creators, and failure to comply could lead to content removal or other penalties. YouTube emphasizes that while AI can enable powerful storytelling, it also has the potential to mislead viewers, particularly if they are not aware that the content has been altered or synthetically created.

The manner in which AI usage is disclosed to viewers will depend on the sensitivity of the content. For most videos, the disclosure will appear on the video’s description screen. However, for videos addressing sensitive topics like politics, military conflicts, and health issues, YouTube plans to make these labels more prominent.

YouTube also noted that all its standard content guidelines, including those governing violence and hate speech, will apply to AI-generated videos. This move by YouTube reflects a growing awareness of the ethical implications and potential risks associated with AI-generated content, particularly in the context of misinformation and the integrity of online information.

Sources include: Axios

