Kling
(Image credit: Kling AI Video)
Kling AI is a Chinese generative video product from video platform company Kuaishou. Its features include longer video generations, improved movement, better prompt following and multi-shot sequences.
Its interface is simple and easy to learn, offering image-to-video and text-to-video with automatic upscaling options and clips starting at 5 or 10 seconds.
I've used it to make multiple videos, and in every case it captured human and animal motion in a way no other platform I've tried has achieved. It also seems able to depict emotion, both through camera movement and within the characters it generates.
In the text-to-video mode you can start with a 5-second clip, use a very long description and set the orientation to widescreen, phone screen or square. It also allows you to define the camera movement and set a negative prompt.
Kling is free to use with 66 credits per day; generations cost between 5 and 10 credits each. There isn't a subscription plan as such, but it offers memberships starting at $10 per month for 660 monthly credits.
Pika Labs
(Image credit: Pika Labs)
Pika Labs is one of the two major players in the generative AI video space alongside Runway. Its Pika 1.0 model can create video from images, text or other video, as well as extend a video to up to 12 seconds — although the more you extend it, the worse the motion becomes.
Pika launched last year to a lot of fanfare, sharing a cartoon version of Elon Musk and an impressive inpainting ability that allows you to replace or animate a specific region of a clip.
Pika Labs offers negative prompting and fine control over the motion in the video. It also features sound effects, either generated from a text prompt or automatically aligned to the video, as well as lip sync.
Pika Labs' lip syncing can be added to existing video content. So you can have it generate a video from, say, a Midjourney photo, then animate its lips and give it a voice. Or, as I did in one experiment, you can animate action figures.
I'm told Pika 2.0 is in development, and the company recently introduced significant upgrades to the image-to-video model, improving overall motion and control.
Pika Labs has a free plan with 300 credits. The standard plan is $10 per month.
Leonardo and NightCafe
(Image credit: Leonardo AI)
Stable Video Diffusion is an open model, which means it can be commercially licensed and adapted by other companies. Two of the best examples of this come from Leonardo and NightCafe, two AI image platforms that offer a range of models including Stable Diffusion itself.
Branded as Motion by Leonardo and Animate by NightCafe, the two platforms essentially do the same thing: take an image you've already made on the platform and make it move. You can set the degree of motion, but there are minimal other controls.
NightCafe's base plan is $6 per month for 100 credits.
Leonardo has a free plan with 150 creations per day. The basic plan is $10 per month.
FinalFrame
(Image credit: FinalFrame AI generated)
This is a bit of a dark horse in the AI video space with some interesting features. A relatively small bootstrapped company, FinalFrame comfortably competes on quality and features with the likes of Pika Labs and Runway, and is building out toward a "total platform."
The name stems from the fact that FinalFrame builds the next clip from the final frame of the previous video, improving consistency across longer generations. You can generate or import a clip, then drop it onto the timeline to create a follow-on clip or build out a full production.
The startup recently added lip syncing and sound effects for certain users, including an audio track in the timeline view to add those sounds to your videos.
FinalFrame requires the purchase of credit packs, which last a month. The basic pack is 20 credits for $2.99.
Haiper
(Image credit: Haiper AI video)
A relative newcomer with its own model, Haiper takes a slightly different approach from other AI video tools, building an underlying model and training dataset that are better at following the prompt rather than offering fine-tuned control over the motion.
The default mode doesn't even allow you to change the motion level. It assumes the AI will understand the level of motion from the prompt, and for the most part, it works well. In a few tests, I found leaving the motion set to default worked better than any control I could set.
Haiper has now launched version 1.5 with improved realism and longer initial clips, starting at 8 seconds and extendable.
Haiper has a free plan with 10 creations per day, with watermarks and no commercial use. If you want commercial use and to remove watermarks, you need the $30-a-month Pro plan, which includes unlimited video creations.
LTX Studio
(Image credit: LTX Studio/AI Video)
Unlike the others, this is a full generative content platform, able to create a multi-shot, multi-scene video from a text prompt. LTX Studio handles images, video, voice-over, music and sound effects, and it can generate all of them at the same time.
The layout is more like a storyboard than the usual prompt box and video player of the other platforms. When you generate a video, LTX Studio lets you go in and adapt any single element, including changing the camera angle or pulling in an image from an external application to animate.
I don't find LTX Studio handles motion as well as Runway or Stable Video, often generating unsightly blurring or warping, but those are issues the others have started to resolve and something LTX Studio owner Lightricks will tackle over time. It also doesn't have lip sync, though that is likely to come in the future.
LTX Studio has a free plan with one hour of generation per month for personal use. For $5 a month you get three hours, but if you want commercial use it costs $175 per month and comes with 25 computing hours.