Google launches AI video verification in Gemini App using SynthID watermarking
2025-12-19 09:16:00+08
Google has rolled out a new feature in its Gemini app that allows users to upload a video and instantly check whether it was created or edited by Google AI. This update marks a significant step forward in the company’s efforts to promote AI transparency and combat the growing threat of deepfakes.
How It Works: Advanced SynthID Detection
The feature leverages SynthID, Google’s proprietary digital watermarking technology introduced in 2023. To date, SynthID has been embedded in over 20 billion AI-generated assets across images, audio, text, and now video. The watermark is imperceptible to humans but detectable by specialized algorithms.
When a user uploads a video (up to 100MB and 90 seconds), Gemini analyzes both visual and audio tracks separately and provides a detailed, timestamped report—not just a binary “yes/no.” For example:
“SynthID watermark detected in audio between 10–20 seconds. No SynthID found in visual content.”
This granular insight helps users pinpoint exactly which parts of a video may involve AI generation or editing.
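The per-track, timestamped report described above can be modeled as a list of detection segments. The following is a minimal sketch of that idea; the `Segment` structure and `summarize` helper are hypothetical illustrations, not Google's actual API:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    track: str         # "audio" or "visual" (hypothetical labels)
    start_s: float     # segment start, in seconds
    end_s: float       # segment end, in seconds
    watermarked: bool  # whether a SynthID watermark was detected in this span

def summarize(segments):
    """Render a human-readable report in the style Gemini returns."""
    lines = []
    for track in ("audio", "visual"):
        hits = [s for s in segments if s.track == track and s.watermarked]
        if hits:
            spans = ", ".join(f"{s.start_s:.0f}-{s.end_s:.0f}s" for s in hits)
            lines.append(f"SynthID watermark detected in {track} between {spans}.")
        else:
            lines.append(f"No SynthID found in {track} content.")
    return " ".join(lines)

# Example mirroring the report quoted above:
report = summarize([
    Segment("audio", 10, 20, True),
    Segment("visual", 0, 90, False),
])
print(report)
# SynthID watermark detected in audio between 10-20s. No SynthID found in visual content.
```

Separating the audio and visual tracks in the data model is what makes the granular output possible: a video can carry an AI-generated soundtrack over unmodified footage, or vice versa.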
Global Access, Zero Friction
The tool is free, requires no subscription, and works in all languages and regions where the Gemini app is available. Users simply upload a video and ask, “Was this video made with Google AI?” to trigger the analysis.
Part of a Broader Trust Initiative
This launch follows November’s introduction of AI image verification in Gemini and expands Google’s content provenance ecosystem. By enabling rapid, reliable detection of AI-modified media, Google aims to bolster trust in digital content—especially in high-stakes contexts like news reporting, social media, and entertainment.
Looking ahead, Google plans to integrate support for the C2PA (Coalition for Content Provenance and Authenticity) standard, which would allow Gemini to verify media generated by non-Google AI tools as well, paving the way for a more universal, interoperable system of AI content attribution.