YouTube Bids for Generative-AI Transparency
Published 17 November 2023
YouTube is rolling out a content warning system for videos that feature convincingly realistic material made with generative artificial intelligence (AI). With only 23% of Americans trusting how generative AI is used on social media (Insider Intelligence, 2023), this is the latest move from Big Tech to assuage growing concerns about an influx of misleading and harmful AI content.
The move builds on YouTube’s existing Community Guidelines by requiring creators to label “manipulated or synthetic content that is realistic”, including material generated by AI. YouTube VPs of product management Jennifer Flannery O’Connor and Emily Moxley defined “realistic” as content that convincingly “depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do”. YouTube – the biggest social platform globally, with 2.7 billion users – thus falls in line with TikTok, which earlier this year asked creators to “clearly disclose” when their content is AI-generated.
YouTube users will be alerted to this type of content either via a label in the video’s description panel or, for AI-manipulated videos on sensitive topics (such as political events or ongoing conflicts), via a more prominent label on the video player itself. The labels will be available for creators to use early next year. Creators who repeatedly fail to apply them risk having their content removed and being suspended from YouTube’s content monetisation programme.
In September, YouTube launched its own suite of generative AI content creation tools, including Dream Screen, a feature for creating AI-generated video backgrounds. Tech companies are under pressure to balance encouraging users to create content with generative AI against managing rising anxiety about a social internet awash with dangerous material, such as non-consensual sexual images and deepfakes.
With 66% of Americans concerned that generative AI on social media could damage privacy (Insider Intelligence, 2023), YouTube will also encourage users to help police harmful content: under the site’s privacy rules, they will be able to request the removal of AI simulations of an identifiable person’s face or voice, whether their own or someone else’s.