Google’s Gemini AI assistant can now watch your clips, not just read your words. The company has begun rolling out a video-upload feature across the Gemini app for Android and iPhone, as well as the Gemini web client, letting users attach short movies to their prompts for on-the-fly analysis and Q&A.
How It Works
After updating to the latest Gemini build on iOS, tap the “+” icon in a chat and choose Gallery or Files to pick a video. On desktop, you can simply drag a clip onto the prompt bar at gemini.google.com. Gemini embeds a full video player above the chat and immediately begins processing the footage so you can ask questions such as “What time is shown on the Nest Hub screen?” or “Describe the scene.”
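The app handles everything behind the scenes, but the same video-understanding flow is available to developers through the Gemini API. The sketch below, written against the google-generativeai Python SDK, shows roughly what uploading a clip and asking a question looks like; the file name, prompt, and the "gemini-2.5-flash" model id are placeholders for illustration, not details from Google's announcement.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Upload the clip via the Files API (accepts common formats such as MP4).
video = genai.upload_file(path="nest_hub_demo.mp4")  # hypothetical file name

# Video uploads are processed asynchronously; poll until the file is ready.
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

if video.state.name == "FAILED":
    raise RuntimeError("Video processing failed")

# Ask a question about the footage, mirroring the in-app experience.
model = genai.GenerativeModel("gemini-2.5-flash")  # assumed model id
response = model.generate_content(
    [video, "What time is shown on the Nest Hub screen?"]
)
print(response.text)
```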

Google says the capability is “for everyone,” meaning free users on Gemini 2.5 Flash and subscribers on the newly stable Gemini 2.5 Pro all get access. The feature first appeared in limited A/B testing earlier this week and is now rolling out more broadly; if video files appear grayed out in the picker, it hasn’t reached your device yet. While uploads work on mobile and the web, Gemini’s built-in camera still captures only still photos, and the web client may show a “File type unsupported” warning during the phased launch.
Why It Matters
Video input joins existing image and document uploads, YouTube link parsing, and the Scheduled Actions automation that began rolling out yesterday. Together, these underscore Google’s push to broaden Gemini’s multimodal capabilities after unveiling the new AI Pro and AI Ultra tiers at I/O 2025.
Letting an AI parse personal clips opens up fresh use cases, from troubleshooting gadgets to summarizing lecture recordings, while raising privacy questions about what metadata Google stores. Expect scrutiny to intensify as Gemini moves beyond text and images into moving pictures at scale.