YouTube said on Tuesday that it will begin applying more stringent policies to realistic AI-generated videos hosted on the platform, with the changes rolling out over the coming months and into next year. The company said in a statement, “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.”
The move is part of YouTube’s ongoing effort to address deepfakes, voice cloning, misinformation, and other problems raised by generative AI in content creation. YouTube will now give creators new options to indicate whether their content contains realistic AI-generated or AI-altered material. According to YouTube, this could include AI-created videos that realistically depict an event that never happened, or content showing someone saying or doing something they never did.
Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management at YouTube, stated in the announcement that the policy update aims to preserve a healthy ecosystem in the face of generative AI. They write that maintaining a healthy information ecosystem on YouTube is everyone’s responsibility, and that the platform already has policies prohibiting technically manipulated content that misleads viewers. But generative AI’s powerful new storytelling tools can also mislead viewers, especially if they are unaware that a video has been altered or is synthetically created.
Additionally, YouTube will implement a new labelling system to let viewers know what kind of content they are watching. For example, the company says a new label will be added to the description panel and video player for content that is altered or synthetic, especially when it touches on sensitive subjects such as elections, ongoing conflicts, public health crises, or public officials.
Videos produced with YouTube’s own generative AI tools, such as the AI-powered Dream Screen video creator, will automatically be labelled as altered or synthetic. The company released three mock-ups of what these labels could look like, though they may change in the future.
Creators who choose not to disclose AI usage may face consequences such as having their content removed or being suspended from the YouTube Partner Programme. YouTube also plans to introduce AI-driven content moderation tools to improve the speed and accuracy of detecting and handling content that violates the new guidelines.
Addressing concerns about deepfakes and artist imitation
YouTube also revealed that users will be able to use a privacy request process to ask for the removal of AI-generated content, such as deepfakes, that mimics an identifiable person’s face or voice. The company writes, “Not all content will be removed from YouTube, and we’ll evaluate these requests based on a variety of factors.” These factors could include whether the content is satire or parody, whether the requester can be uniquely identified, or whether it features a public official or well-known individual, in which case the bar for removal may be higher.
Similarly, YouTube will implement a policy allowing music publishers and artists to request the removal of AI-generated music that imitates a particular artist’s distinctive singing or rapping voice. As with the privacy requests, the company says removal decisions could take into account whether the content is part of news reporting, analysis, or criticism of the synthetic vocals.
YouTube says it is trying to strike a balance between its community safety initiatives and new AI applications, keeping in mind the needs of parody, fair use, and political commentary. The company writes, “We are just getting started on our journey to use generative AI to unlock new forms of innovation and creativity on YouTube,” adding, “To create a future that is beneficial to all of us, we will collaborate closely with creators, artists, and other members of the creative industries.”