After showing it off in a preview late last month, Runway ML has officially released Gen-3 Alpha Turbo, the latest version of the AI video generation model that it claims is seven times faster and half the cost of its predecessor, Gen-3 Alpha.
The goal? Make AI video production more accessible to a wider audience across all subscription plans, including free trials.
The New York City-based company announced the news on its X account, writing: “Gen-3 Alpha Turbo Image to Video is now available and can generate 7x faster for half the price of the original Gen-3 Alpha. All while still matching performance across many use cases. Turbo is available for all plans, including trial for free users. More improvements to the model, control mechanisms and possibilities for real-time interactivity to come.”
Gen-3 Alpha Turbo builds on the already impressive capabilities of Runway’s Gen-3 Alpha, which gained attention for its realistic video generation.
However, Runway has pushed the boundaries even further with this latest release, prioritizing speed without compromising on performance. According to Runway co-founder and CEO Cristóbal Valenzuela, the new Turbo model means “it now takes me longer to type a sentence than to generate a video.”
This leap in speed addresses a critical issue with AI video generation models—time lag—allowing for near real-time video production.
As a result, users can expect a more seamless and efficient workflow, particularly in industries where quick turnaround times are essential.
Broad accessibility and aggressively low pricing
Runway’s decision to lower the cost of using Gen-3 Alpha Turbo aligns with its strategy to encourage more widespread adoption of its technology.
While the standard Gen-3 Alpha is priced at 10 credits per second of generated video, Gen-3 Alpha Turbo should come in at 5 credits per second, per Runway's statement that it costs 50% less.
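To make the credit math concrete, here is a quick sketch assuming Runway's stated rates of 10 credits per second for Gen-3 Alpha and half that for Turbo (`clip_cost` is a hypothetical helper for illustration, not part of Runway's API):

```python
# Assumed rates from Runway's announcement: Gen-3 Alpha at 10 credits
# per second of generated video, Turbo at half that price.
ALPHA_RATE = 10               # credits per second of video
TURBO_RATE = ALPHA_RATE // 2  # Turbo is 50% cheaper: 5 credits per second

def clip_cost(seconds: int, rate: int) -> int:
    """Credits consumed to generate a clip of the given length."""
    return seconds * rate

# A 10-second clip under each model:
print(clip_cost(10, ALPHA_RATE))  # 100 credits on Gen-3 Alpha
print(clip_cost(10, TURBO_RATE))  # 50 credits on Turbo
```

At those rates, the same monthly credit allowance stretches twice as far on Turbo, which is the practical upshot of the price cut for heavy users.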
The model’s availability across all subscription plans, including free trials, ensures that a broad spectrum of users—from hobbyists to professional creators—can benefit from these enhancements.
By offering a faster and cheaper alternative, Runway is positioning itself to maintain a competitive edge in the rapidly evolving AI video generation market, where rivals including Pika Labs, Luma AI’s Dream Machine, Kuaishou’s Kling, and OpenAI’s Sora are also vying for dominance.
Yet despite showing off Sora earlier this year and releasing it to a select group of creators, OpenAI's video model remains out of reach for the public, and other video generation models tend to take much longer to generate from text prompts and images, often several minutes or more in my tests.
Promising initial results
Already, Runway subscribers are sharing videos made with the new Gen-3 Alpha Turbo model and finding themselves impressed with its combination of speed and quality. While generation is not always one-to-one, a second of processing for every second of video, users are nonetheless delighted with the overall experience and are showcasing a wide range of styles, from realistic to animation and anime.
Some users, such as @LouiErik8Irl on X, prefer the regular Gen-3 Alpha model for its higher quality, in their eyes. Yet they see value in being able to generate simple motion quickly through Gen-3 Alpha Turbo.
@runwayml Gen-3 Alpha Turbo model is out! It is insanely fast (7x) and very high quality too! Tho the base Alpha model still wins when you want more dynamic motions.
Here are 6 examples to test and compare the two models.
(1/6)
The left is the normal model, and the right is Turbo.
I think I will use Turbo for shots that just need some simple motion from now on. However, the Turbo model doesn't have the Last frame gen, so it's a trade-off.
It's pretty clear that the base model is far more dynamic. But getting 7X speed with Turbo is also a great trade-off.
Used the same prompt for both to test:
The camera flies inside the tornado
(2/6)
The base model is better at dynamic motion, but that also leads to more morphing. So if you want more stable and simple motion, Turbo is the way to go!
No prompt for this one to test the models raw.
The left is the normal model, and the right is Turbo.
(3/6)
But if you want more complex motions and changes, the base model is far better.
Same prompt for both:
The dragon breathes fire out of its mouth.
The left is the normal model, and the right is Turbo.
(4/6)
The turbo model also seems to stick to the original image more closely, while the base model is more creative.
No prompt for both to test raw motion.
The left is the normal model, and the right is Turbo.
(5/6)
Some shot types might also work better with Turbo due to the fact that it is more stable. You can see the fire is definitely better for the base model here, but the overall motion of the Turbo model is not bad either.
No prompt for both to test raw motion.
The left is the normal model, and the right is Turbo.
(6/6)
Again, the base model wins in terms of dynamics. But Turbo model is more consistent and stable. It also doesn't change the character's faces when moving, which was a big problem with the base model. Turbo sticks to the original image really well, tho it is not immune from morphing either.
No prompt for both to test raw motion.
The left is the normal model, and the right is Turbo.
Overall, the new Turbo model is a fantastic addition to Gen-3. I would use Turbo for shots that need simple motion, more stability, sticking closer to the original image, or faster iteration. And use the base model for more complex motion, more creative outputs, and the First and Last frame feature.
Btw this set of images was for the Discord daily challenge, which is themed Fire.
You can find Turbo at the model selection drop-down at the top left.
Future improvements and unresolved legal/ethical issues
Runway is not resting on its laurels with the release of Gen-3 Alpha Turbo. The company has indicated that more improvements are on the horizon, including enhancements to the model’s control mechanisms and possibilities for real-time interactivity.
Previously, on its older Gen-2 model, Runway enabled editing of selected objects and regions of a video with its Multi Motion Brush, allowing more granular direction of the AI algorithms and resulting clips.
As the debate over ethical AI practices unfolds, Runway and other generative AI companies may find themselves compelled to disclose more information about their training data and methods. The outcome of these discussions could have significant implications for the future of AI model development and deployment.
Meanwhile, other creators are sharing results from Runway's related Gen-3 Alpha Video to Video feature:

@CaptainHaHaa:
Testing Out @runwayml Gen 3 Video to Video before the big event and Wow it's awesome! Prompts and some extras below

Runway Gen-3 Alpha Video to Video
Prompt: A Viking with a Magic Crossbow with a Castle background
Prompt: A Military Man with a shotgun with a Jungle background
Prompt: A Female Valkyrie with a Laser Gun with a Thor background
Prompt: Shades on with a Shotgun on a ship
Thanks for checking it out!

@StevieMac03:
Great to see people using their own vids, I was fearing we'd see a ton of people lifting scenes right out of movies. Good on you! Now I'm gonna have to learn how to swing a sword in my back garden

@CaptainHaHaa:
Thank you mate, I've got so much footage that I can use now, so excited to share it all.

@RichKleinAI:
I feel like a lot more creators will be starring in their own Gen:48 films.

@CaptainHaHaa:
Oh yeah, my mind is racing now rethinking my whole approach

@david_vipernz:
This is really the future. I've been meaning to work on learning it but keep putting it off

@CaptainHaHaa:
It's pretty cool, lots of room for experimenting