YouTube expands AI ‘Likeness Detection’ tool to journalists and government officials

The feature will help public figures identify and request the removal of AI-generated videos impersonating them, as platforms grapple with the growing threat of synthetic media

By e4m Staff
Published: Mar 13, 2026 12:32 PM  | 2 min read

YouTube has announced that it is expanding access to its AI-powered “likeness detection” technology to government officials, journalists and political candidates. The move, which extends a tool earlier made available to creators and artists, is aimed at tackling the growing threat of deepfake videos that impersonate public figures.

The system works by scanning uploaded videos for content that mimics a person’s face or likeness, enabling participants to identify AI-generated videos that appear to impersonate them. YouTube said that expanding access to journalists and civic leaders will allow more public figures to protect themselves from impersonation.

The expansion reflects a broader effort by YouTube to safeguard public discourse at a time when generative AI tools have made it easier than ever to create highly realistic synthetic media. Deepfakes, AI-generated or manipulated videos that make it appear as though someone said or did something they did not, have increasingly targeted public figures, including politicians and journalists, raising concerns about the integrity of online information.

In the context of ongoing geopolitical tensions and rapidly evolving news, the ability to distinguish authentic content from manipulated media has become increasingly critical. Fake videos featuring political leaders or journalists can spread quickly on social platforms, amplifying misinformation before fact-checking mechanisms can respond.

Participants who opt into the programme will be required to submit a short verification video along with a government-issued ID, enabling YouTube to accurately identify their likeness and match it with potentially manipulated content on the platform. The company said the verification data will only be used for the detection system and participants can opt out whenever they choose.

Once flagged, users can review the videos through YouTube Studio and request their removal through the platform’s existing privacy and moderation processes.

This balance between detection and free speech was also highlighted by YouTube Creator Liaison Rene Ritchie, who said the platform aims to protect individuals from harmful impersonation while still safeguarding speech that serves the public interest. The policy ensures that content such as satire, commentary or criticism of public figures may remain available.

The move is part of a wider industry response to the risks posed by AI-generated media. Platforms and policymakers across the world are exploring detection technologies, labelling systems and regulatory frameworks to help maintain trust in digital content as generative AI capabilities continue to evolve.