1/11
@runwayml
Introducing Act-One: a new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and a character image. No motion capture or rigging required.
Learn more about Act-One below.
(1/7)
https://video-t-2.twimg.com/ext_tw_...248/pu/vid/avc1/1280x720/2EyYj6GjSpT_loQf.mp4
2/11
@runwayml
Act-One allows you to faithfully capture the essence of an actor's performance and transpose it to your generation. Where traditional pipelines for facial animation involve complex, multi-step workflows, Act-One works with a single driving video that can be shot on something as simple as a cell phone.
(2/7)
https://video-t-2.twimg.com/ext_tw_...282/pu/vid/avc1/1280x720/Qie29gOWU42zMaGo.mp4
3/11
@runwayml
Without the need for motion-capture or character rigging, Act-One is able to translate the performance from a single input video across countless different character designs and in many different styles.
(3/7)
https://video-t-2.twimg.com/ext_tw_...696/pu/vid/avc1/1280x720/TcWzpRl3kMfHM4ro.mp4
4/11
@runwayml
One of the model's strengths is producing cinematic and realistic outputs across a wide range of camera angles and focal lengths, allowing you to generate emotional performances with previously impossible character depth and opening new avenues for creative expression.
(4/7)
https://video-t-2.twimg.com/ext_tw_...424/pu/vid/avc1/1280x720/3fupOI32Ck6ITIiE.mp4
5/11
@runwayml
A single video of an actor is used to animate a generated character.
(5/7)
https://video-t-2.twimg.com/ext_tw_...016/pu/vid/avc1/1280x720/Fh7WanCgSTR_ffHF.mp4
6/11
@runwayml
With Act-One, eye-lines, micro-expressions, pacing, and delivery are all faithfully represented in the final generated output.
(6/7)
https://video-t-2.twimg.com/ext_tw_...528/pu/vid/avc1/1280x720/ywweYvzLHe-3GO2B.mp4
7/11
@runwayml
Access to Act-One will begin gradually rolling out to users today and will soon be available to everyone.
To learn more, visit Runway Research | Introducing Act-One
(7/7)
11/11
@flowersslop
motion capture industry is cooked