By Taylor Herzlich
Publication Date: 2025-12-02 18:00:00
Experts are sounding the alarm over YouTube’s deepfake detection tool — a new safety feature that could allow Google to train its own AI bots with creators’ faces, according to a report.
The tool gives YouTube users the option to submit a video of their face so the platform can flag uploads that include unauthorized deepfakes of their likeness.
Creators can then request that the AI-generated doppelgangers be taken down.
But the safety policy would also allow Google, which owns YouTube, to train its own AI models using biometric data from creators, CNBC reported Tuesday.
Jack Malon, a spokesperson for YouTube, told The Post that the company has never used creators’ biometric data to train AI models, and that users’ likenesses are used only for identity verification…