
Runway Gen-4 creates stunning AI videos that should scare Hollywood

Published Apr 1st, 2025 5:55PM EDT
Runway Gen-4: Still image from a short clip called The Herd.
Image: Runway

If you buy through a BGR link, we may earn an affiliate commission, helping support our expert product labs.

I don't appreciate how OpenAI handled the AI safety features of ChatGPT's 4o image generation model, but I certainly appreciate the massive advancement OpenAI achieved. Yes, the Studio Ghibli-esque images flooding social networks are annoying, as are ChatGPT deepfakes featuring celebrities. But, again, OpenAI has accomplished something remarkable here.

The ease of use of 4o image generation essentially puts a sophisticated Photoshop tool in your pocket. You don't even have to know how to use Photoshop; I mention the product only because of what the word means, or used to mean. You simply have to tell the AI what you want, and ChatGPT will deliver it.

I didn't think my mind could be blown twice in the span of a few weeks, but that's what Runway achieved with its new Gen-4 text/image-to-video model. The AI startup competes directly against OpenAI's Sora and similar AI tools that let you create videos just as easily as you create images.

However, Runway Gen-4 does something others have failed to achieve. The company came up with a model that brings consistency to AI video generation, one of the biggest problems to solve in this particular sub-field of genAI.

To make videos with AI that are worth something, AI needs to be able to support character and scene consistency. That’s something Runway Gen-4 can offer, and the results are mind-blowing.

Making movies has evolved greatly in recent years, but one thing hasn’t changed. Any story you might tell through this medium has the same characters. They appear in various scenes, wear different outfits, and perform different tasks while delivering all sorts of lines.

Think of it like this: If you have Robert Downey Jr. in a Marvel movie, you'll always want to recognize that it's him, no matter how many distinct roles he plays in the MCU, what costumes he wears, or what accent he uses.

That's not what's been happening with AI text-to-video generators. Or if it was, it wasn't easy to pull off. We did see plenty of exciting AI-generated video creations where the AI was largely able to preserve the same character from one scene to the next. But you could still tell it was an AI character, and character consistency wasn't perfect.

But Runway Gen-4 might make movie-grade character permanence easier than ever to achieve. At least, that seems to be the conclusion from what Runway demonstrated when announcing the Gen-4 model.

“With Gen-4, you are now able to precisely generate consistent characters, locations and objects across scenes,” the company wrote in a blog post.

“Simply set your look and feel and the model will maintain coherent world environments while preserving the distinctive style, mood and cinematographic elements of each frame. Then, regenerate those elements from multiple perspectives and positions within your scenes.”

Consistency is the key here, and that's what Runway is emphasizing in its blog post. "Gen-4 can utilize visual references, combined with instructions, to create new images and videos utilizing consistent styles, subjects, locations and more. Giving you unprecedented creative freedom to tell your story," Runway wrote. All of that happens without additional fine-tuning or training.

The results are stunning, both when it comes to live-action AI videos and animated clips. The live-action shots are particularly exciting, as they look just like real-life clips. You won’t be able to tell the difference, which is another highlight for Runway Gen-4.

A short Gen-4 film called The Herd (above) is a drama with two characters and lots of cows, in which one man hunts the other. The first threatens to kill as many cows as needed in that nighttime farm setting until the second character gives up. At one point, you see one of the characters reflected in a cow's eye. Meanwhile, the other burns the farm down in response. We have no idea what happened to the cows, but they're AI-made animals.

A different clip imagines the entire city of New York as a zoo (below), and it’s equally impressive. All sorts of life-like animals are taking over New York. Again, you might think that herd of elephants patrolling the streets is real, not AI-generated.

The AI model needs to be able to preserve the characters from one scene to the next. It also needs to be able to change scenes, lighting, and effects without impacting the identity of the main characters.

“Gen-4 excels in its ability to generate highly dynamic videos with realistic motion as well as subject, object and style consistency with superior prompt adherence and best in class world understanding,” Runway writes, and the examples it provided seem to prove that’s the case.

The best part is that you can introduce your own characters and locations. Just upload photos alongside your text prompts and tell Gen-4 what you want from them. That's the visual reference Runway mentions on its blog and in the short video explanation on X.

“To craft a scene, simply provide reference images of your subjects and describe the composition of your shot. Runway Gen-4 will do the rest,” the company writes.

There's a lot of hype here, sure. Like any AI startup, Runway is raising capital, a lot of it. Things might not work perfectly on the first try, and results might look better in those Runway-created demos than in whatever you come up with yourself. But it certainly seems like Runway is working with advanced AI video generation tech here.

Put differently, unlike the smarter Siri promised in Apple Intelligence, Runway Gen-4 is already available to test. Users can get their hands on the tech.

What’s also clear is that movie studios will have to pay increasing attention to genAI products like Runway Gen-4. I don’t expect them to suddenly make new flicks featuring AI-generated characters. But products like Gen-4 might make it a lot easier to complete shots and create complex video effects, all without breaking the bank.

Unfortunately, yes, this will lead to job losses. People in the entertainment business will not like what they see here, as the AI will take over jobs. The same goes for ChatGPT 4o image generation and other AI tools that can do the work of humans faster and cheaper.

I will also point out the other downsides that should be obvious. Tech like Runway Gen-4 can be abused to create fake stories and deepfakes.

Also, there's the question of how Runway trained its models to come up with Gen-4. Like OpenAI with Sora, Runway isn't saying. TechCrunch says the startup is facing a lawsuit in which artists accuse AI companies, including Runway, of training their AI on copyrighted content without permission.

This doesn't change the fact that Gen-4 is simply stunning. I hate to say it, but if it works as well as Runway says it does (and it will probably get better), Runway will be able to settle any lawsuits over alleged wrongdoing with the profits it's about to generate.

Gen-4 image-to-video features are rolling out to all paid plans and Enterprise customers. References will be available in the future, Runway said. You’ll find plenty of examples of Gen-4 in action at this link. Also, look for creators posting their Gen-4 creations on social media. You can try Runway for free, though paid plans offer better features.

Chris Smith Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2007. When he’s not writing about the most recent tech news for BGR, he closely follows the events in Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming new movies and TV shows, or training to run his next marathon.