Paper Alert
Paper Title: Hierarchical World Models as Visual Whole-Body Humanoid Controllers
Few pointers from the paper
Whole-body control for humanoids is challenging due to the high-dimensional nature of the problem, coupled with the inherent instability of a bipedal morphology. Learning from visual observations further exacerbates this difficulty.
In this work, the authors explore highly data-driven approaches to visual whole-body humanoid control based on reinforcement learning, without any simplifying assumptions, manual reward design, or skill primitives.
Specifically, the authors propose a hierarchical world model in which a high-level agent generates commands, based on visual observations, for a low-level agent to execute; both agents are trained with rewards.
Their approach produces highly performant control policies in 8 tasks with a simulated 56-DoF humanoid, while synthesizing motions that are broadly preferred by humans.
Organization: @UCSanDiego, @nyuniversity, @AIatMeta
Paper Authors: @ncklashansen, @jyothir_s_v, @vlad_is_ai, @ylecun, @xiaolonw, @haosu_twitr
Read the Full Paper here: [2405.18418] Hierarchical World Models as Visual Whole-Body Humanoid Controllers
Project Page: Puppeteer
Code: GitHub - nicklashansen/puppeteer: Code for "Hierarchical World Models as Visual Whole-Body Humanoid Controllers"
Models: puppeteer (Google Drive)
Be sure to watch the attached Demo Video (sound on)
Music by Yevgeniy Sorokin from @pixabay
Find this valuable?
QT and teach your network something new
Follow me, @NaveenManwani17, for the latest updates on tech and AI-related news, insightful research papers, and exciting announcements.
To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
AI-Generated Explanation:
**Title:** Hierarchical World Models as Visual Whole-Body Humanoid Controllers
**Summary:** This paper is about creating a computer system that can control a humanoid robot (a robot that looks like a human) using only visual observations (like a camera). This is a challenging problem because the robot has many moving parts and can be unstable.
**Key Points:**
* The researchers used a type of artificial intelligence called reinforcement learning to teach the robot how to move.
* They didn't rely on simplifying assumptions, hand-designed task rewards, or pre-defined skill primitives, which makes their approach more broadly applicable.
* They created a hierarchical system, where one part of the system (the "high-level agent") tells another part (the "low-level agent") what to do based on what it sees (a rough code sketch of this loop follows the list below).
* They tested their system on a simulated robot with 56 moving parts and were able to get it to perform well on 8 different tasks.
* The motions the robot produced were also broadly preferred by human evaluators.
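For intuition, here is a minimal, hypothetical Python sketch of that two-level control loop. The class names, the command dimension, and the `env.observe()`/`env.step()` interface are illustrative assumptions for this post, not the authors' actual Puppeteer code.

```python
# Hypothetical sketch of a hierarchical controller: a high-level agent turns
# visual observations into commands, and a low-level agent turns commands
# into whole-body joint actions.
import numpy as np

class HighLevelAgent:
    """Maps visual observations (plus proprioception) to a command vector."""
    def act(self, image: np.ndarray, proprio: np.ndarray) -> np.ndarray:
        # In the paper this is a world-model-based RL agent; here we just
        # return a placeholder command vector (dimension chosen arbitrarily).
        return np.zeros(16)

class LowLevelAgent:
    """Tracks the commanded targets with whole-body joint actions."""
    def act(self, command: np.ndarray, proprio: np.ndarray) -> np.ndarray:
        # Placeholder: the simulated humanoid has 56 degrees of freedom,
        # so we emit a 56-dimensional action.
        return np.zeros(56)

def control_step(env, high: HighLevelAgent, low: LowLevelAgent):
    image, proprio = env.observe()        # hypothetical environment interface
    command = high.act(image, proprio)    # high level: vision -> command
    action = low.act(command, proprio)    # low level: command -> joint action
    return env.step(action)
```

The point of the hierarchy is that only the high-level agent has to reason about vision and the task, while the low-level agent only has to turn commands into stable whole-body motion.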
**Authors and Organizations:**
* The researchers are from the University of California, San Diego, New York University, and Meta AI.
* The authors are Nicklas Hansen, Jyothir S V, Vlad Sobal, Yann LeCun, Xiaolong Wang, and Hao Su.
**Resources:**
* You can read the full paper here: [2405.18418] Hierarchical World Models as Visual Whole-Body Humanoid Controllers
* You can visit the project page here: Puppeteer
* You can access the code here: GitHub - nicklashansen/puppeteer: Code for "Hierarchical World Models as Visual Whole-Body Humanoid Controllers"
* You can access the models here: puppeteer (Google Drive)