From the moment AI video generation tools made it possible to create impressive clips from nothing but text prompts, it was clear the movie industry was heading for a major AI shift. That shift was never going to happen overnight, though. Early AI tools weren't sophisticated or affordable enough for filmmakers to use in new projects, and there's ongoing pushback against using AI in place of humans for the many creative roles involved in making a film.
Still, AI movies are inevitable, and Google just introduced a slew of AI innovations at I/O 2025 that make it even easier to include AI-generated video in future films. We might be years away from a fully AI-made movie, but creators are already starting to blend AI-generated scenes into their work, with Google ready to back them up.
This isn’t just theory anymore. Google’s AI tools are already being used in a film set to debut at the Tribeca Film Festival next month. It’s called Ancestra, and it includes sequences made with Google’s latest AI tech.
The film is from a new venture called Primordial Soup, led by director Darren Aronofsky, who partnered with Google DeepMind on the project.
“Filmmaking has always been driven by technology. After the Lumière Brothers and Edison’s groundbreaking invention, filmmakers unleashed the hidden storytelling power of cameras,” Aronofsky said in a statement. “Later technological breakthroughs – sound, color, vfx – allowed us to tell stories in ways that couldn’t be told before. Today is no different. Now is the moment to explore these new tools and shape them for the future of storytelling.”
Directed by Eliza McNitt, Ancestra blends live action with imagery generated using Google AI tools. In the clip below, which ends with the film’s trailer, McNitt explains that her team used AI to create footage that would have been impossible to shoot otherwise, pairing it with powerful live-action performances.
Ancestra tells the story of a mother whose baby is born with a hole in her heart. McNitt says it’s based on a true story: her own birth.
Watching Google’s video above, it’s easy to spot the scenes that were likely created with AI. There are sequences showing cell growth, and one where a newborn holds her mother’s finger.
Special effects have long relied on digital work, but the baby’s hand grasping her mother’s finger here is virtually indistinguishable from live footage.
That’s how far video generation has come, a point Google emphasized during its main I/O 2025 keynote.
Veo 3 can generate audio alongside its video, including dialogue, music, and background sound, and it produces higher-quality visuals than its predecessor. The model also supports camera movement controls and can add or remove objects from a scene.
But Flow is the standout tool from I/O 2025 for AI video generation. It brings together Veo, Imagen, and Gemini to create entire scenes based on text descriptions.
You can drop in your own characters, locations, and objects, and the AI will maintain consistency across scenes. Flow also gives users intuitive camera control through a simple interface.
Clips can be stitched into larger scenes, rearranged, and continued without friction. The video below shows exactly why Flow feels like a game-changer.
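Flow itself is a point-and-click app, but Google also exposes Veo through its Gemini API for anyone who wants to experiment with the underlying text-to-video model in code. Here’s a minimal sketch using the google-genai Python SDK; treat the model ID, config fields, and prompt as illustrative assumptions, since the Veo versions available through the API may differ from what Flow uses at any given time.

```python
# Minimal sketch: requesting a text-to-video clip from a Veo model via the
# google-genai SDK (pip install google-genai). The model ID and config values
# below are assumptions -- check Google's current Gemini API docs before use.
import time

from google import genai
from google.genai import types

client = genai.Client()  # reads your API key from the environment

# Kick off an asynchronous video-generation job from a plain-text prompt.
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # assumed model ID; swap in the Veo version you can access
    prompt="A newborn's hand slowly closing around her mother's finger, soft morning light",
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation runs as a long-running operation, so poll until it finishes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# Download each generated clip as a local MP4 file.
for i, generated in enumerate(operation.response.generated_videos):
    client.files.download(file=generated.video)
    generated.video.save(f"clip_{i}.mp4")
```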
If I’m this excited about what Flow can do, it makes sense that experienced filmmakers would want to use genAI tools too. And it’s no surprise that ventures like Primordial Soup are popping up. We’re just getting started, and AI-generated video is only going to become more common in movies and TV.