OpenAI's Sora has competition. With Kling, China has entered the chat

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,335
Reputation
8,496
Daps
160,004

1/1
Kling AI, which previously couldn't be used without a Chinese phone number, can finally be used with just an email address.

I've been using it for a while now, and its smoothness of video motion is top-level at the moment.

The upscaling feature doesn't seem to be available yet, but once it is, this will be quite powerful.

For now, 66 coins are being given away for free, so if you haven't tried it yet, I recommend doing so before it goes paid.

*The attachment is a "Japanese yōkai-style image" animated with Kling AI, for reference.


GTOiWOlb0AAtvJ7.jpg
 

bnew







1/11
Today we are releasing Loops in Dream Machine to keep your imagination going… and going… and going! Get started here: Luma Dream Machine
🧵1/6 #LumaDreamMachine

2/11
2. Looping made easy: simply check a box to create a loop from any text instruction, image, keyframes, or to extend a previous generation into a loop.
🔄 “spaceship flying in hyperspace portal”

3/11
3. 🔄 “capybara riding a bicycle in a park”

4/11
4. 🔄 “a spinning top on the table”

5/11
5. 🔄 "loop"

6/11
6. 🔄 by @KeziaBarnett

7/11
7. 🔄 “Fireworks Exploding in the Sky”

8/11
Here you go @LumaLabsAI: Keyframes used to create a big long loop.🔄 For the stock traders out there, this represents 2024 The Year of the Dragon morphing into 2025 The Year of the Super Unicorn Dragon! 🐉🦄 Big reverse merger #IGPK ➡️ #JFHE followed by Q2, Ticker/Name Change, Q3, 8 more company divisions coming into the ticker! NFA and do your own dd! 🔥🔥🔥

9/11
Is it the same as putting the init as end image?

10/11
i love paying for an account and then waiting an hour for a generation 😑

11/11
very useful thank you!!


 

bnew








1/15
@Ror_Fly
KLING MOTION BRUSH → THE DIFFERENTIATOR.

Super useful for tough shots + multiple subjects.

MJ PROMPT EX:
wide angle documentary photography, once in a lifetime photo, the human and animal co-dependence, exotic locations, vibrant award winning photography, epic scenery, photojournalism --ar 5:3 --quality 2 --p --stylize 125 --v 6.1

KLING PROMPT EX:
the camera subtly pushes in as a young girl kisses an elephant on the head in a river, natural dynamic fluid motion of the water, cinematic style

TIPS:
+ Use for hard-to-describe motion
+ Great for car turns (this is hard)
+ Text prompts play a role
+ Consider environmental movement (water/etc)

#kling_ai #midjourney #kling



https://video.twimg.com/ext_tw_video/1862579973430665216/pu/vid/avc1/1080x1294/SShrRAy241VCVH2a.mp4

2/15
@NimrodEshed
Amazing 👌



3/15
@Ror_Fly
thanks man 💪



4/15
@WuxiaRocks
Wow. I know it can be done well in close-up, but you did a whole bird's-eye top view, that's awesome. And it works just as directed. Nice showcase.



5/15
@99aico
Incredible!



6/15
@nanobeep
So amazing - can't believe we're at this level already



7/15
@guicastellanos1
This is incredible!!! Thanks Rory



8/15
@mingweihehehe
cool



9/15
@AntDX316
GenAI: 3D, XR, Video, Image, Audio, UI/UX, API

There is no way to do everything masterfully in 1 day unless you are an ASI. 😔



10/15
@_MB_pt
Awesome 😎👍



11/15
@CTamire
Is that real



12/15
@Raqeem01
What software is this?



13/15
@Kreefax
Would be much more useful if it worked with V1.5.



14/15
@mystickago
This feature is a must-have when MJ Video comes out, please. @DavidSHolz @midjourney



15/15
@salinaakter781


[Quoted tweet]
Wildlife: Beauty Drenched in Blood🩸

⚠️Content Warning: This thread is not for the faint-hearted. Brutal truths, bloody scenes, and raw survival of wildlife.

Still think you can handle it❓


Gdm27d5boAIkd_E.jpg








1/11
@HBCoop_
⚛️ Multi-dimensional motion effects

I used 3 prompts with these Midjourney images in Gen-3 and Kling AI.

Effective keywords:

• Core movement:
Mesmerizing spiral pattern, phase through walls, cascade-like waterfall

• Natural flow:
Ultra smooth particle physics, precise physics simulation

• Depth:
Dramatic rim lighting, volumetric atmosphere, refracts light

• Environmental interaction:
Ripples across surface, sharp edges

Study your images to describe interactions between elements to create interesting effects.



https://video.twimg.com/ext_tw_video/1863256325339099136/pu/vid/avc1/1280x720/Ibfu9932G-S92czK.mp4

2/11
@madartist23x
Thank you for sharing the prompts



3/11
@HBCoop_
I hope it's helpful😊



4/11
@KampalaSiobhan
🔥👏 Thanks for this creative hint.



5/11
@HBCoop_
Thanks for reading :smile:



6/11
@aymieelis
Nice! love the ripple across the surface.



7/11
@HBCoop_
Thanks Aymiee! I loved that, too

I like the cubes filling up the larger cube, too☺️



8/11
@mike_darrow
Very interesting



9/11
@HBCoop_
Thanks Mike



10/11
@DaveWBaldwin1
I hope you're fashioning this toward a buyer



11/11
@HBCoop_
No, it's just an example of several prompts I worked on to learn how to create motion through several different areas in images :smile:

I am working on a big project and I'm using what I learn for those visuals.



1/1
@AARSUS007
TriggerQueen in Motion!!!
QT TQ 🖤🎥🖤
@Kling_ai
#ai #art #aiart #aiartwork #aiartcommunity #midjourney #midjourneyart #nijijourney #anime #animeart #digitalart #animegirl #cyberpunk #scifi #scifiart #fantasy #Midjourney61 #Alart #AIgirl #kling_ai #hailuo_ai #kling #Hailuo #cyberpunk



https://video.twimg.com/ext_tw_video/1862503573818372096/pu/vid/avc1/704x1344/Pt9Q1U67U9vdAjCH.mp4





1/3
@tawleefai
◀︎ A cinematic AI shot: testing the Motion Brush

- Tools used:
▫️ Image generation: @midjourney
▫️ Video generation: @Kling_ai

Here's how it works, step by step ⬇️



https://video.twimg.com/ext_tw_video/1862061555086409728/pu/vid/avc1/1280x720/jV3uRm_G6OkXyqaR.mp4

2/3
@tawleefai
◀︎ Sharing the image prompt | Midjourney.
Note: a custom personalization code was used, so your results may differ without personalization codes.

A solitary Arabian oryx stands amidst swirling mist under a vast, star-filled night sky, exuding an ethereal and otherworldly atmosphere. --ar 16:9 --style raw



GddiS2ZWkAAW7we.jpg


3/3
@tawleefai
◀︎ Sharing the Brush paths | Kling

- With the Motion Brush feature:
▫️ Selected the first element, the animal, and directed it to the left
▫️ Selected the second element, the mist, and gave it a winding motion
▫️ Selected the third element, the sky, and marked it as a Static Area, i.e. fixed



Gddjq17XsAApsJe.jpg





1/1
@awva_design
Datacentre. A test of motion and shimmer light.
An image was generated by @midjourney and brought to life using @Kling_ai and @AdobeAe. #AIart #AIartists



https://video.twimg.com/ext_tw_video/1859699774317813760/pu/vid/avc1/1412x1080/m3ZN103puS6NenTO.mp4





1/1
@AARSUS007
TriggerQueen in Motion!!! QT TQ:smile:
@Hailuo_AI
#ai #art #aiart #aiartwork #aiartcommunity #midjourney #midjourneyart #nijijourney #anime #animeart #digitalart #animegirl #cyberpunk #scifi #scifiart #fantasy #Midjourney61 #Alart #AIgirl #kling_ai #hailuo_ai #kling #Hailuo #cyberpunk



https://video.twimg.com/ext_tw_video/1860634687342870528/pu/vid/avc1/1072x720/acGrBFsniAcYTHt0.mp4



1/1
@StoryboardTH
Kling AI has launched the Motion Brush and Camera Movement features for the KLING 1.5 video-generation model. In this clip, we generated a 2D cartoon image in Midjourney and tried out the new features.

#KlingAI #MotionBrush #CameraMovement #Animation #midjourney



https://video.twimg.com/ext_tw_video/1860227114504736768/pu/vid/avc1/720x1280/Bu9CYxHiRfnQ-hC8.mp4




1/4
@Diesol
It's been crazy this week! I'd love to see some new incredible Gen AI work from the community.

Please share your latest art from you or a friend in the comments!



https://video.twimg.com/ext_tw_video/1860003200918781952/pu/vid/avc1/1920x1080/pGYYcvjhMQFAa90V.mp4

2/4
@felvasaur
Trying the newly added motion brush for Kling 1.5

Image generated in Midjourney



GdAQ96EXkAAoo-N.jpg


3/4
@Rob101Ai
Niiiccceee!!!



4/4
@felvasaur
Thanks!






1/9
@guicastellanos1
4 Epic battles created with @Kling_ai
2 of them with the help of the motion brush, and 2 others just with the #prompt Epic Battle
all of them from a single image created with #midjourney in I2V
And sounds created with @elevenlabs_io
#ai #aigenerated #actionmovies #superheroes



https://video.twimg.com/ext_tw_video/1850978242972262400/pu/vid/avc1/1394x720/qJmrGwP0YPisfBGq.mp4

2/9
@guicastellanos1
I particularly like the style of this one



3/9
@guicastellanos1
Thanks for sharing 🙏



4/9
@MiShellK_King
I just used @vivago_ai to create this image😘
"Intellectual Allure" 🎨😊
#anime #DigitalArt #Alart #vivaai #vivago



https://video.twimg.com/ext_tw_video/1851150446322495488/pu/vid/avc1/720x1280/vW1gbEr_tU3hYQ5B.mp4

5/9
@zoey_aiart
I just used @vivago_ai to create this image😈
"Apocalyptic Nightmare"👽🎎
#FantasyArt #Surrealism #DigitalArt #aiart #vivago #vivaai



https://video.twimg.com/ext_tw_video/1851166930029215744/pu/vid/avc1/720x720/6YUQy1V-zWNDR758.mp4

6/9
@alex_aiart_
I just used @vivago_ai to create this image🥰
"Dreamscape Reflection"🖤😈
#DarkArt #Surrealism #Mystery #aiart #vivago #vivaai



https://video.twimg.com/ext_tw_video/1851170222465490945/pu/vid/avc1/720x720/PCFx3v4jd5X55aL8.mp4

7/9
@0B0YA
Try minimax



8/9
@Sarah_aiart
I just used @vivago_ai to create this video
🔥 The female warrior with a heavy sword ⚔️
#FantasyArt #Demon #DigitalArt #aiart #vivago #vivaai



https://video.twimg.com/ext_tw_video/1851185235641106432/pu/vid/avc1/1280x720/SFIXoETB9VeN8YzL.mp4

9/9
@guicastellanos1
Thanks. First, I love the consistency in the characters, and the movements are so realistic and organic that the generation often really looks like a movie scene
 

RageKage

All Star
Joined
May 24, 2022
Messages
2,927
Reputation
1,137
Daps
9,686
Reppin
Macragge
Film is about to go the way of music... it's over for talented creatives :francis:

Things are going to change, that's for sure

tired of these studios acting as gatekeepers and producing :trash: year after year for us to consume

True talent will still be valued and rewarded, but this will democratize the media being produced and our options
 

bnew


A new, uncensored AI video model may spark a new AI hobbyist movement​


Will Tencent's "open source" HunyuanVideo launch an at-home "Stable Diffusion" moment for uncensored AI video?


Benj Edwards – Dec 19, 2024 10:50 AM




Still images from three videos generated with Tencent's HunyuanVideo. Credit: Tencent


The AI-generated video scene has been hopping this year (or twirling wildly, as the case may be). This past week alone we've seen releases or announcements of OpenAI's Sora, Pika AI's Pika 2, Google's Veo 2, and Minimax's video-01-live. It's frankly hard to keep up, and even tougher to test them all. But recently, we put a new open-weights AI video synthesis model, Tencent's HunyuanVideo, to the test—and it's surprisingly capable for being a "free" model.

Unlike the aforementioned models, HunyuanVideo's neural network weights are openly distributed, which means the model can be run locally under the right circumstances (people have already demonstrated it on a consumer GPU with 24 GB of VRAM) and fine-tuned or used with LoRAs to teach it new concepts.
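A rough sanity check makes that 24 GB figure plausible. HunyuanVideo's reported size is about 13 billion parameters (an assumption here, taken from Tencent's announcement); the raw weight footprint then follows from the bytes per parameter, which is why quantization and CPU offloading matter on consumer cards:

```python
# Back-of-the-envelope VRAM estimate for an open-weights video model.
# Assumes a ~13B-parameter transformer; activations, the text encoder,
# and the VAE consume additional memory on top of the weights.

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Raw weight footprint in GiB (1 GiB = 2**30 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 2**30

bf16 = weights_gb(13, 2)  # 16-bit weights
fp8 = weights_gb(13, 1)   # 8-bit quantized weights

print(f"bf16 weights: {bf16:.1f} GiB")  # ~24.2 GiB: barely over a 24 GB card
print(f"fp8 weights:  {fp8:.1f} GiB")   # ~12.1 GiB: fits with headroom
```

The bf16 weights alone roughly saturate a 24 GB GPU, which matches the reports of hobbyists relying on quantization or offloading to run it at home.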

Notably, a few Chinese companies have been at the forefront of AI video for most of this year, and some experts speculate that the reason is less reticence about training on copyrighted materials, using images and names of famous celebrities, and incorporating some uncensored video sources. As we saw with Stable Diffusion 3's mangled release, including nudity or pornography in training data may allow these models to achieve better results by providing more information about human bodies. HunyuanVideo notably allows uncensored outputs, so unlike the commercial video models out there, it can generate videos of anatomically realistic, nude humans.

Putting HunyuanVideo to the test​

To evaluate HunyuanVideo, we provided it with an array of prompts that we used on Runway's Gen-3 Alpha and Minimax's video-01 earlier this year. That way, it's easy to revisit those earlier articles and compare the results.

We generated each of the five-second-long 864 × 480 videos seen below using a commercial cloud AI provider. Each video generation took about seven to nine minutes to complete. Since the generations weren't free (each cost about $0.70 to make), we went with the first result for each prompt, so there's no cherry-picking below. Everything you see was the first generation for the prompt listed above it.
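The per-generation figures above imply a modest budget for the whole batch. A quick tally, using the article's own numbers ($0.70 and 7 to 9 minutes per run) and the prompt list in this article (13 reruns plus one new prompt):

```python
# Rough cost/time tally for the test batch described in the article.
# $0.70 and 7-9 minutes per generation are the article's figures;
# 14 is the number of prompts tested (13 reruns plus one new one).

prompts = 14
cost_per_gen = 0.70             # dollars per generation
minutes_per_gen = (7 + 9) / 2   # midpoint of the 7-9 minute range

total_cost = prompts * cost_per_gen
total_minutes = prompts * minutes_per_gen

print(f"total cost: ${total_cost:.2f}")                 # $9.80
print(f"total time: ~{total_minutes:.0f} min, run sequentially")
```

Under $10 and under two hours for the full suite, which helps explain why the authors could afford a strict "first result only" policy.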



"A highly intelligent person reading 'Ars Technica' on their computer when the screen explodes"


"commercial for a new flaming cheeseburger from McDonald's"


"A cat in a car drinking a can of beer, beer commercial"


"Will Smith eating spaghetti"


"Robotic humanoid animals with vaudeville costumes roam the streets collecting protection money in tokens"


"A basketball player in a haunted passenger train car with a basketball court, and he is playing against a team of ghosts"


"A beautiful queen of the universe in a radiant dress smiling as a star field swirls around her"


"A herd of one million cats running on a hillside, aerial view"


"Video game footage of a dynamic 1990s third-person 3D platform game starring an anthropomorphic shark boy"




"A muscular barbarian breaking a CRT television set with a weapon, cinematic, 8K, studio lighting"


"A scared woman in a Victorian outfit running through a forest, dolly shot"


"Low angle static shot: A teddy bear sitting on a picnic blanket in a park, eating a slice of pizza. The teddy bear is brown and fluffy, with a red bowtie, and the pizza slice is gooey with cheese and pepperoni. The sun is setting, casting a golden glow over the scene"


"Aerial shot of a small American town getting deluged with liquid cheese after a massive cheese rainstorm where liquid cheese rained down and dripped all over the buildings"


Also, we added a new one: "A young woman doing a complex floor gymnastics routine at the Olympics, featuring running and flips."



Weighing the results​

Overall, the results shown above seem fairly comparable to Gen-3 Alpha and Minimax video-01, and that's notable because HunyuanVideo can be downloaded for free, fine-tuned, and run locally in an uncensored way (given the appropriate hardware).

There are some flaws, of course. The vaudeville robots are not animals, the cat is drinking from a weird transparent beer can, and the man eating spaghetti is obviously not Will Smith. There appears to be some celebrity censorship in the metadata/labeling of the training data, which differs from Kling and Minimax's AI video offerings. And yes, the gymnast has some anatomical issues.



Right now, HunyuanVideo's results are fairly rough, especially compared to the state-of-the-art video synthesis model to beat at the moment, the newly unveiled Google Veo 2. We ran a few of these prompts through Sora as well (more on that in a future article), and Sora created more coherent results than HunyuanVideo but didn't deliver on the prompts with much fidelity. These are still the early days of AI video, but quality is rapidly improving while models are getting smaller and more efficient.

Even with these limitations, judging from the history of Stable Diffusion and its offshoots, HunyuanVideo may still have significant impact: It could be fine-tuned at higher resolutions over time to eventually create higher-quality results for free that may be used in video productions, or it could lead to people making bespoke video pornography, which is already beginning to appear in trickles on Reddit.

As we've mentioned before in previous AI video overviews, text-to-video models work by combining concepts from their training data—existing video clips used to create the model. Every AI model on the market has some degree of trouble with new scenarios not found in their training data, and that limitation persists with HunyuanVideo.

Future versions of HunyuanVideo could improve with better prompt interpretation, different training data sets, increased computing power during training, or changes in the model design. Like all AI video synthesis models today, users still need to run multiple generations to get desired results. But it looks like "open weights" AI video models are already here to stay.
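In practice, "running multiple generations" usually means sweeping random seeds for the same prompt and keeping the best clip. A minimal sketch of that workflow, where `generate` is a hypothetical stand-in for any real model call (local inference or an API) and the final pick is made by eye or by a scoring model:

```python
import random

def generate(prompt: str, seed: int) -> str:
    """Stand-in for a real video-generation call (local model or API)."""
    return f"video({prompt!r}, seed={seed})"

def seed_sweep(prompt: str, n: int = 4) -> list[tuple[int, str]]:
    """Run the same prompt under n distinct random seeds.

    Returns (seed, result) pairs so a good result can be reproduced
    later by reusing its seed.
    """
    seeds = random.sample(range(2**31), n)  # distinct seeds, no repeats
    return [(seed, generate(prompt, seed)) for seed in seeds]

for seed, clip in seed_sweep("Will Smith eating spaghetti"):
    print(seed, clip)
```

Keeping the seed alongside each result is the important design choice: once a generation looks right, the same seed and prompt can regenerate it, or serve as the starting point for small prompt tweaks.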
 