YouTube Tightens Rules on AI-Generated Content, Requires Full Disclosure
03 July, 2024

On November 14, 2023, YouTube announced new regulations for content generated through artificial intelligence (AI), including a requirement for creators to disclose whether they’ve used generative AI to create realistic videos. This move is part of YouTube’s broader effort to ensure transparency and uphold community standards in the evolving AI landscape.

In a blog post detailing the AI-related policy updates, YouTube executives Jennifer Flannery O’Connor and Emily Moxley stressed the importance of balancing the potential of generative AI to inspire creativity with the need to protect the YouTube community. Creators who fail to disclose the use of AI tools in creating “altered or synthetic” videos could face penalties, including content removal or suspension from the platform’s revenue sharing program.

Generative AI, which includes AI video and image generators, can create highly realistic content that can blur the line between fact and fiction. This has raised concerns about the potential misuse of such technology, particularly in sensitive areas such as politics, public health crises, and ongoing conflicts.

In response to these concerns, YouTube’s new guidelines expand on rules introduced by its parent company, Google, in September. Those rules required political ads on YouTube and other Google platforms that use AI-generated images or videos to carry a clear warning label.

Under the updated regulations, YouTube creators will have new options to indicate if they’re posting AI-generated content that could realistically portray an event that never happened or depict someone saying or doing something they didn’t do. This is especially crucial when content discusses sensitive topics or involves public figures.

To help viewers identify altered content, YouTube will place labels on such videos, with more prominent labels for sensitive topics. The platform is also leveraging AI to identify content that violates its rules; the company said this technology has expedited the detection of “novel forms of abuse.”

YouTube’s privacy complaint process will also be updated to accommodate requests for the removal of AI-generated videos that simulate an identifiable person, including their face or voice. This is particularly relevant to the music industry, where AI tools can generate lyrics and mimic an artist’s unique singing or rapping voice. YouTube’s music partners, such as record labels and distributors, will be able to request the takedown of such AI-generated music content.

The announcement underscores YouTube’s commitment to maintaining a safe and transparent environment for its users. As AI tools continue to grow more sophisticated, platforms like YouTube must stay ahead of the curve in managing potential misuse while still fostering creativity and innovation.

The new regulations were set to take effect over the course of 2024, marking a significant step in YouTube’s efforts to manage the increasing use of AI in content creation. The move also sets a precedent for other platforms to follow, highlighting the need for clear guidelines and transparency in the use of AI technology.