OpenAI introduces Sora

1/11
Today we are releasing Gen-3 Alpha Image to Video. This update allows you to use any image as the first frame of your video generation, either on its own or with a text prompt for additional guidance.

Image to Video is a major update that greatly improves the artistic control and consistency of your generations. See more below.

(The rest of the embedded thread, tweets 2/11 through 11/11, consists of example videos.)


 



7 best OpenAI Sora alternatives for generating AI videos​

Round-up

By Ryan Morrison
last updated August 2, 2024

Here's what you can use now


Pika Labs lip sync video

(Image credit: Pika Labs)



OpenAI's Sora is one of the most impressive AI tools I've seen in years of covering the technology, but only a handful of professional creatives have been given access so far, and it doesn't look like it will be widely available anytime soon.

We’ve seen dozens of impressive videos from a documentary about an astronaut to a music video about watching the rain. We’ve even seen a short film about a man with a balloon head and a commercial for Toys 'R' Us.

Mira Murati, OpenAI's Chief Technology Officer, originally hinted we'd get access to Sora this year, but that seems to have slipped. The most recent update suggested OpenAI's developers were having issues making it easy to use.

The company says the focus is currently on both safety and usability, which likely includes ensuring guardrails prevent it from replicating real people or being used in misinformation campaigns.


Alternatives to Sora already available​


While you're waiting for Sora, there are several amazing AI video tools already available that can create a range of clips, styles and content to try. These Sora alternatives include Pika Labs, Runway and Luma Labs Dream Machine.

Sora's biggest selling points were more natural movement and longer initial clip generations, but with the arrival of Dream Machine and Runway Gen-3 some of those unique abilities have already been replicated.

There are now two categories of AI video models and I split them into first and second generation. The first are models like Runway Gen-2, Pika 1.0, Haiper, and any Stable Video Diffusion-based model including the one in Leonardo and NightCafe.

The main limitation of this early generation of AI video tools is duration. Most can't do more than 3-6 seconds of consistent motion; some struggle to go beyond 3 seconds. This, coupled with a smaller context window, makes consistency tough.

However, the second-gen models, including Runway's Gen-3 and Luma Labs Dream Machine (as well as unavailable models like Kling and Sora) have much longer initial generations, improved motion understanding and better realism.


Runway​


Runway AI video

(Image credit: Runway)

Runway is one of the biggest players in this space. Before OpenAI unveiled Sora, Runway had some of the most realistic and impressive generative video content, and it remains very impressive, with Gen-3 offering near Sora-level motion quality.

Runway was the first to launch a commercial synthetic video model and has been adding new features and improvements over the past year, including very accurate lip-syncing, a motion brush to control the animation, and voice-over.

With the launch of Gen-3 you can now create videos starting at ten seconds long. It is currently only in alpha, so some of the more advanced features, such as video-to-video and clip extension, aren't available yet, but they are coming soon. Image-to-video has already launched and it is very impressive.

In an increasingly crowded market Runway is still one of the best AI video platforms, and on top of generative content it has good collaboration tools and other image-based AI features such as upscaling and text-to-image.

Runway has a free plan with 125 credits. The standard plan is $15 per month.


Luma Labs Dream Machine​


Luma Labs Dream Machine

(Image credit: Luma Labs/Future AI)

One of the newest AI video platforms, Luma Labs released Dream Machine seemingly out of nowhere. It offers impressive levels of realism, prompt following and natural motion and has an initial 5-second video generation.

Unlike other platforms, Dream Machine charges one credit per generation, making it easier to keep track of what you're spending or when you're near the limit.

It automatically improves on your prompt, ensuring better output. One of its most innovative features is keyframes: you can give it two images, a start and a finish point, and tell it how to fill the gap between the two. This is perfect if you want to do a fun transition or have a character walk across the screen.
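
As a rough illustration of how a keyframe-style request might look programmatically, here is a minimal sketch. Luma had not published a public API at the time of writing, so the endpoint, field names and token below are purely hypothetical placeholders, not documented calls.

```python
import requests

# Hypothetical endpoint and payload shape for a keyframe generation request;
# every identifier here is a placeholder, not a documented Luma API.
API_URL = "https://api.example.com/dream-machine/generations"

payload = {
    "prompt": "the character walks from the doorway to the window",
    "keyframes": {
        "start": {"type": "image", "url": "https://example.com/start.png"},
        "end":   {"type": "image", "url": "https://example.com/end.png"},
    },
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer <your-token>"},  # placeholder credential
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # a real service would return a job ID or video URL here
```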

Being able to extend clips is also particularly powerful in Dream Machine as this allows for character following and fresh scenes. It continues from the final frame of your last video and you can change the motion description for each extension.

Dream Machine has a free plan with 30 generations per month. The standard tier is $30 per month and includes 150 generations.
 



Kling​


Kling AI

(Image credit: Kling AI Video)

Kling AI is a Chinese generative video product from video platform company Kuaishou. Its features include longer video generations, improved movement, better prompt following and multi-shot sequences.

Its interface is simple and easy to learn, offering image-to-video and text-to-video with automatic upscaling options and clips starting at 5 or 10 seconds.

I've used it to make multiple videos and in every case it captures human and animal motion in a way no other platform I've tried has achieved. It also seems to be able to depict emotion with both camera movement and within the characters it generates.

In the text-to-video mode you can start with a 1 second clip, use a very long description and set the orientation to widescreen, phone screen or square. It also allows you to define the camera movement and set a negative prompt.

Kling is free to use with 66 credits per day and between 5 and 10 credits per generation. There isn't a subscription plan as such but it offers memberships starting at $10 per month for 660 monthly credits.
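
To put those allowances in perspective, here is a back-of-envelope calculation using only the figures quoted above; the clip counts are indicative, since actual credit cost varies per generation.

```python
# Rough generation budget implied by the Kling pricing quoted above.
DAILY_FREE_CREDITS = 66      # free credits per day
MEMBERSHIP_CREDITS = 660     # credits on the $10/month membership
CREDITS_PER_CLIP = (5, 10)   # quoted range per generation

cheap, pricey = CREDITS_PER_CLIP
print(f"Free tier:      {DAILY_FREE_CREDITS // pricey}-{DAILY_FREE_CREDITS // cheap} clips per day")
print(f"$10 membership: {MEMBERSHIP_CREDITS // pricey}-{MEMBERSHIP_CREDITS // cheap} clips per month")
# Free tier:      6-13 clips per day
# $10 membership: 66-132 clips per month
```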


Pika Labs​


Pika Labs lip sync video

(Image credit: Pika Labs)

Pika Labs is one of the two major players in the generative AI video space alongside Runway. Its Pika 1.0 model can create video from images, text or other video, as well as extend a video to up to 12 seconds — although the more you extend it, the worse the motion becomes.

Pika launched last year to a lot of fanfare, sharing a cartoon version of Elon Musk and an impressive inpainting ability that allows you to replace or animate a specific region of a clip.

Pika Labs offers negative prompting and fine control over the motion in the video. It also features sound effects, generated either from a text prompt or aligned to the video, as well as lip sync.

The lip syncing from Pika Labs can be added to video content, so you can have it generate a video from, say, a Midjourney photo, then animate its lips and give it a voice. Or, as I did in an experiment, you can animate action figures.

I'm told Pika 2.0 is in development and they recently introduced significant upgrades to the image-to-video model, creating better overall motion and control.

Pika Labs has a free plan with 300 credits. The standard plan is $10 per month.

Leonardo and NightCafe​


Leonardo AI

(Image credit: Leonardo AI)

Stable Video Diffusion is an open model, which means it can be commercially licensed and adapted by other companies. Two of the best examples of this are from Leonardo and NightCafe, two AI image platforms that offer a range of models including Stable Diffusion itself.

Branded as Motion by Leonardo and Animate by NightCafe, the image platforms essentially do the same thing — take an image you’ve already made with the platform and make it move. You can set the degree of motion but there are minimal options for other controls.

NightCafe's base plan is $6 per month for 100 credits.

Leonardo has a free plan with 150 creations per day. The basic plan is $10 per month.


FinalFrame​


FinalFrame

(Image credit: FinalFrame AI generated)

This is a bit of a dark horse in the AI video space with some interesting features. A relatively small bootstrapped company, FinalFrame comfortably competes on quality and features with the likes of Pika Labs and Runway as it builds out a “total platform.”

The name stems from the fact that FinalFrame builds the next clip from the final frame of the previous video, improving consistency across longer video generations. You can generate or import a clip, then drop it onto the timeline to create a follow-on clip or build out a full production.
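
The same last-frame chaining trick can be reproduced locally with any generator that accepts an image as a starting frame; a minimal sketch using OpenCV (my own illustration, not FinalFrame's tooling) might look like this.

```python
import cv2

def extract_last_frame(video_path: str, image_path: str) -> None:
    """Save the final frame of a clip so it can seed the next generation."""
    cap = cv2.VideoCapture(video_path)
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, max(total_frames - 1, 0))  # jump to the last frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read the final frame of {video_path}")
    cv2.imwrite(image_path, frame)

# Feed clip_01's last frame into the next image-to-video generation.
extract_last_frame("clip_01.mp4", "clip_01_last.png")
```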

The startup recently added lip syncing and sound effects for certain users, including an audio track in the timeline view to add those sounds to your videos.

FinalFrame requires the purchase of credit packs which last a month. The basic plan is 20 credits for $2.99.


Haiper​


Haiper

(Image credit: Haiper AI video)

A relative newcomer with its own model, Haiper takes a slightly different approach from other AI video tools, building out an underlying model and training dataset that is better at following the prompt rather than offering fine-tuned control over the motion.

The default mode doesn't even allow you to change the motion level. It assumes the AI will understand the level of motion from the prompt, and for the most part, it works well. In a few tests, I found leaving the motion set to default worked better than any control I could set.

Haiper has now launched version 1.5 with improved realism and longer initial clips, starting at 8 seconds and extendable.

Haiper has a free plan with 10 creations per day, watermarks and no commercial use. If you want commercial use and to remove watermarks you need the $30-a-month Pro plan, which includes unlimited video creations.

LTX Studio​


AI video from LTX Studio

(Image credit: LTX Studio/AI Video)

Unlike the others, this is a full generative content platform, able to create a multishot, multiscene video from a text prompt. LTX Studio has images, video, voice-over, music and sound effects; it can generate all of the above at the same time.

The layout is more like a storyboard than the usual prompt box and video player of the other platforms. When you generate video, LTX Studio lets you go in and adapt any single element, including changing the camera angle or pulling in an image to animate from an external application.

I don’t find LTX Studio handles motion as well as Runway or Stable Video, often generating unsightly blurring or warping, but those are issues the others have started to resolve and something LTX Studio owner Lightricks will tackle over time. It also doesn’t have lip sync, but that is likely to come at some point in the future.

LTX Studio has a free plan with 1 hour of generation per month for personal use. For $5 a month you get three hours, but if you want commercial use it costs $175 per month and comes with 25 computing hours.
 



Runway’s Gen-3 Alpha Turbo is here and can make AI videos faster than you can type​

Carl Franzen@carlfranzen

August 15, 2024 9:04 AM

Robot director in red beret looks through camera monitor setup video village


Credit: VentureBeat made with Midjourney




After showing it off in a preview late last month, Runway ML has officially released Gen-3 Alpha Turbo, the latest version of the AI video generation model that it claims is seven times faster and half the cost of its predecessor, Gen-3 Alpha.

The goal? Make AI video production more accessible to a wider audience across all subscription plans, including free trials.

The New York City-based company announced the news on its X account, writing: “Gen-3 Alpha Turbo Image to Video is now available and can generate 7x faster for half the price of the original Gen-3 Alpha. All while still matching performance across many use cases. Turbo is available for all plans, including trial for free users. More improvements to the model, control mechanisms and possibilities for real-time interactivity to come.”





Gen-3 Alpha Turbo builds on the already impressive capabilities of Runway’s Gen-3 Alpha, which gained attention for its realistic video generation.

However, Runway has pushed the boundaries even further with this latest release, prioritizing speed without compromising on performance. According to Runway co-founder and CEO Cristóbal Valenzuela, the new Turbo model means “it now takes me longer to type a sentence than to generate a video.”




This leap in speed addresses a critical issue with AI video generation models—time lag—allowing for near real-time video production.

As a result, users can expect a more seamless and efficient workflow, particularly in industries where quick turnaround times are essential.


Broad accessibility and aggressively low pricing​


Runway’s decision to lower the cost of using Gen-3 Alpha Turbo aligns with its strategy to encourage more widespread adoption of its technology.

While the regular Gen-3 Alpha is priced at 10 credits per second of generated video, Gen-3 Alpha Turbo should be priced at 5 credits per second, based on Runway's statement that it costs 50% less.

Credits can be purchased in bundles starting at 1,000 credits on the Runway website or as part of monthly or annual subscription tiers. It costs $10 for 1,000 credits, or $0.01 per credit.
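
For a concrete sense of what that means per clip, here is a quick back-of-envelope calculation using the figures above: 10 credits per second for Gen-3 Alpha, an assumed 5 credits per second for Turbo, and $10 per 1,000 credits.

```python
# Cost per second of video, using the pricing quoted above.
CREDIT_PRICE_USD = 10 / 1_000          # $0.01 per credit ($10 per 1,000 credits)
ALPHA_CREDITS_PER_SEC = 10             # regular Gen-3 Alpha
TURBO_CREDITS_PER_SEC = 5              # assumed, per Runway's "half the cost" claim

clip_seconds = 10
alpha_cost = ALPHA_CREDITS_PER_SEC * CREDIT_PRICE_USD * clip_seconds
turbo_cost = TURBO_CREDITS_PER_SEC * CREDIT_PRICE_USD * clip_seconds
print(f"10-second clip on Gen-3 Alpha: ${alpha_cost:.2f}")  # $1.00
print(f"10-second clip on Turbo:       ${turbo_cost:.2f}")  # $0.50
```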



The model’s availability across all subscription plans, including free trials, ensures that a broad spectrum of users—from hobbyists to professional creators—can benefit from these enhancements.

By offering a faster and cheaper alternative, Runway is positioning itself to maintain a competitive edge in the rapidly evolving AI video generation market, where rivals including Pika Labs, Luma AI’s Dream Machine, Kuaishou’s Kling, and OpenAI’s Sora are also vying for dominance.

Yet despite showing off Sora in February of this year and releasing it to a select group of creators, OpenAI’s video model remains out of reach to the public, and other video generation models tend to take much longer to generate from text prompts and images — more than several minutes in my tests.


Promising initial results​


Already, Runway subscribers are sharing videos made with the new Gen-3 Alpha Turbo model and finding themselves impressed with its combination of speed and quality.

While generation time is not always one-to-one with the seconds of video produced, users are nonetheless delighted with the overall experience of using the new model, showcasing a wide range of styles from realistic to animation and anime.







Some users, such as @LouiErik8Irl on X, prefer the regular Gen-3 Alpha model for what they consider its higher quality. Yet they see value in being able to generate simple motion quickly through Gen-3 Alpha Turbo.











1/10
@runwayml Gen-3 Alpha Turbo model is out! It is insanely fast (7x) and very high quality too! Tho the base Alpha model still wins when you want more dynamic motions.

Here are 6 🔥examples to test and compare the two models.

(1/6)
The left is the normal model, and the right is Turbo.

I think I will use Turbo for shots that just need some simple motion from now on. However, the Turbo model doesn't have the Last frame gen, so it's a trade-off.

2/10
It's pretty clear that the base model is far more dynamic. But getting 7X speed with Turbo is also a great trade-off.

Used the same prompt for both to test:
The camera flies inside the tornado

3/10
(2/6)
The base model is better at dynamic motion, but that also leads to more morphing. So if you want more stable and simple motion, Turbo is the way to go!

No prompt for this one to test the models raw.

The left is the normal model, and the right is Turbo.

4/10
(3/6)
But if you want more complex motions and changes, the base model is far better.

Same prompt for both:
The dragon breathes fire out of its mouth.

The left is the normal model, and the right is Turbo.

5/10
(4/6)
The turbo model also seems to stick to the original image more closely, while the base model is more creative.

No prompt for both to test raw motion.

The left is the normal model, and the right is Turbo.

6/10
(5/6)
Some shot types might also work better with Turbo due to the fact that it is more stable. You can see the fire is definitely better for the base model here, but the overall motion of the Turbo model is not bad either.

No prompt for both to test raw motion.

The left is the normal model, and the right is Turbo.

7/10
(6/6)
Again, the base model wins in terms of dynamics. But Turbo model is more consistent and stable. It also doesn't change the character's faces when moving, which was a big problem with the base model. Turbo sticks to the original image really well, tho it is not immune from morphing either.

No prompt for both to test raw motion.

The left is the normal model, and the right is Turbo.

8/10
Overall, the new Turbo model is a fantastic addition to Gen-3. I would use Turbo for shots that need simple motion, more stability, sticking closer to the original image, or faster iteration. And use the base model for more complex motion, more creative outputs, and the First and Last frame feature.

9/10
Btw this set of images was for the Discord daily challenge. Which is themed Fire.

10/10
At the model selection drop-down button on the top left.




Future improvements and unresolved legal/ethical issues​


Runway is not resting on its laurels with the release of Gen-3 Alpha Turbo. The company has indicated that more improvements are on the horizon, including enhancements to the model’s control mechanisms and possibilities for real-time interactivity.

Previously, on its older Gen-2 model, Runway added the ability to edit selected objects and regions of a video with its Multi Motion Brush, giving users more granular direction over the AI algorithms and the resulting clips.

However, the company continues to navigate the ethical complexities of AI model training. Runway has faced scrutiny over the sources of its training data, particularly following a report from 404 Media that the company may have used copyrighted content from YouTube for training purposes without authorization.

Although Runway has not commented on these allegations, the broader industry is grappling with similar challenges, as legal battles over the use of copyrighted materials in AI training intensify.

As the debate over ethical AI practices unfolds, Runway and other generative AI companies may find themselves compelled to disclose more information about their training data and methods. The outcome of these discussions could have significant implications for the future of AI model development and deployment.
 