YouTube is stepping up its efforts to combat the rise of deepfake videos, particularly those involving celebrities. As AI technology continues to advance, so too does its potential for misuse, leading to an increase in AI-generated scams and deepfakes. These fake videos, often featuring well-known figures, have become more realistic, making it harder to distinguish them from genuine content.
YouTube Partners with CAA for AI Detection Tools
To address this issue, YouTube has partnered with the Creative Artists Agency (CAA), aiming to introduce tools capable of detecting AI-generated content. The collaboration will help identify videos that mimic a creator’s appearance, voice, or other defining traits. These tools will also make it easier for creators to request the removal of such content. The system is expected to launch next year, initially targeting celebrities and athletes. Afterwards, YouTube plans to expand it to top creators, influencers, and professionals on its platform.
This initiative is part of YouTube’s broader effort to address the growing concerns about impersonation and the unauthorised use of AI-generated depictions. The platform had already signalled its intentions to combat such misuse in September.
CAAVault
CAA, which represents many high-profile stars, has been proactive in preparing for such issues. Last year, the agency introduced CAAVault, a system designed to scan and store digital records of clients’ likenesses, including their faces, bodies, and voices. This technology will complement YouTube’s detection tools, allowing creators and celebrities to track and control the use of their digital identities across the platform.
Misuse of AI in Music
In addition to tackling deepfake videos, YouTube is addressing the misuse of AI in replicating creators’ singing voices. The company is developing “synthetic-singing identification technology,” which will detect AI-generated songs that mimic the voices of artists. Music labels have already begun requesting the removal of these unauthorised AI-created tracks, and YouTube’s new tools will further support these efforts.
Earlier this year, YouTube introduced a policy requiring creators to label videos containing AI-generated content. The move aims to ensure transparency, letting viewers know when the content they are watching has been artificially created.