CVPR 2024 (Highlight) Paper Alert
Paper Title: Relightable and Animatable Neural Avatar from Sparse-View Video
A few pointers from the paper
This paper tackles the challenge of creating relightable and animatable neural avatars from sparse-view (or even monocular) videos of dynamic humans under unknown illumination.
Compared to studio environments, this setting is more practical and accessible but poses an extremely challenging ill-posed problem.
Previous neural human reconstruction methods are able to reconstruct animatable avatars from sparse views using deformed Signed Distance Fields (SDFs), but they cannot recover the material parameters needed for relighting.
While differentiable inverse rendering-based methods have succeeded in material recovery for static objects, extending them to dynamic humans is not straightforward: computing pixel-surface intersections and light visibility on deformed SDFs for inverse rendering is computationally intensive.
To solve this challenge, the authors propose a Hierarchical Distance Query (HDQ) algorithm that approximates world-space distances under arbitrary human poses.
Specifically, they estimate coarse distances based on a parametric human model and compute fine distances by exploiting the local deformation invariance of the SDF.
Building on the HDQ algorithm, they leverage sphere tracing to efficiently estimate surface intersections and light visibility. This allows them to develop the first system to recover animatable and relightable neural avatars from sparse-view (or monocular) inputs.
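To make the mechanics concrete, here is a minimal, self-contained Python sketch of sphere tracing driven by a hierarchical distance query. It is a toy illustration under stated assumptions, not the authors' implementation: a unit sphere stands in for both the parametric body model and the learned neural SDF, and all names and thresholds here (coarse_distance, fine_sdf, coarse_band) are hypothetical.

```python
import numpy as np

# Toy stand-in geometry: a unit sphere plays the role of both the posed
# parametric body model (coarse) and the learned neural SDF (fine).
def coarse_distance(x):
    """Cheap coarse distance; in the paper this would come from a posed
    parametric human model rather than an analytic sphere."""
    return np.linalg.norm(x) - 1.0

def fine_sdf(x):
    """Refined signed distance; in the paper this would be the neural SDF
    queried near the surface, not the same analytic formula."""
    return np.linalg.norm(x) - 1.0

def hierarchical_distance(x, coarse_band=0.05):
    """Hierarchical distance query: trust the cheap coarse distance far from
    the surface, and fall back to the fine SDF inside a narrow band."""
    d = coarse_distance(x)
    return d if d > coarse_band else fine_sdf(x)

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    """March a ray to its first surface hit by stepping forward the current
    distance-to-surface each iteration; returns the hit point or None."""
    t = 0.0
    for _ in range(max_steps):
        x = origin + t * direction
        d = hierarchical_distance(x)
        if d < eps:
            return x          # converged onto the surface
        t += d                # a sphere-tracing step can never overshoot
    return None               # ray missed (or ran out of steps)

# A camera ray from z = +3 looking down the -z axis hits the sphere at its
# north pole, approximately (0, 0, 1).
print(sphere_trace(np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0])))
```

The safety of each sphere-tracing step relies on the queried distance never exceeding the true distance to the surface; the paper's contribution is making such queries cheap and reliable for deforming human bodies, where evaluating the full deformed SDF at every step would be far too slow.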
Organization: @ZJU_China, @Stanford, @UofIllinois
Paper Authors: @realzhenxu, @pengsida, @gengchen01, @LinzhanMou, @yzihan_hci, @JiamingSuen, Hujun Bao, @XiaoweiZhou5
Read the Full Paper here: [2308.07903] Relightable and Animatable Neural Avatar from Sparse-View Video (https://arxiv.org/abs/2308.07903)
Project Page: Relightable and Animatable Neural Avatar from Sparse-View Video
Code: GitHub - zju3dv/RelightableAvatar (https://github.com/zju3dv/RelightableAvatar)
Be sure to watch the attached demo video (sound on).
Music by Yevgeniy Sorokin from @pixabay
Find this valuable?
QT and teach your network something new.
Follow me, @NaveenManwani17, for the latest updates on tech and AI-related news, insightful research papers, and exciting announcements.
#CVPR2024highlight
AI-Generated Explanation:
CVPR 2024 (Highlight) Paper Alert
Paper Title: Creating a Realistic Digital Human from a Few Videos
A few pointers from the paper
* The Challenge: This paper tries to solve a difficult problem: creating a realistic digital human that can be relit (have its lighting changed) and animated (made to move) from just a few videos of a person.
* The Problem: This is hard because only a few camera angles are available and the lighting in the videos is unknown, which makes it difficult to separate the person's shape and material properties from the lighting itself.
* Previous Methods: Other methods can build digital humans from a few videos and even animate them, but they can't recover the material properties needed to change the lighting.
* The Solution: The authors of this paper came up with a new algorithm called Hierarchical Distance Query (HDQ) that quickly estimates how far any point in space is from the person's body surface, even as the person moves into new poses.
* How it Works: The HDQ algorithm first gets a rough distance from a simple body model, then refines it with a detailed learned model near the surface. With fast distance estimates, rays can be traced to find where they hit the body and whether light can reach each point, which is exactly what is needed to relight the digital human (a toy sketch of this shadow-ray idea appears after this list).
* The Result: The authors built the first system that takes a few videos of a person (or even a single video) and produces a digital human that can be realistically relit and animated.
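As a companion to the earlier sketch, here is an equally hypothetical illustration of the light-visibility (shadow) test: relighting needs to know, for each surface point, which light directions are blocked by the body, and sphere tracing answers that by marching a ray from the point toward the light. This reuses the sphere_trace toy from above; the function name and offset value are assumptions for illustration only.

```python
import numpy as np

def light_visible(surface_point, normal, light_dir, trace_fn, offset=1e-3):
    """Return True if nothing blocks the ray from `surface_point` toward
    `light_dir`. The origin is nudged along the surface normal so the ray
    does not immediately re-hit the surface it starts on."""
    origin = surface_point + offset * np.asarray(normal)
    return trace_fn(origin, np.asarray(light_dir)) is None  # None = no occluder

# With the sphere stand-in from the earlier sketch, the top of the sphere
# sees a light straight overhead but not one on the far side of the body:
p = np.array([0.0, 0.0, 1.0])   # point on top of the unit sphere
n = np.array([0.0, 0.0, 1.0])   # outward surface normal there
print(light_visible(p, n, np.array([0.0, 0.0, 1.0]), sphere_trace))   # True
print(light_visible(p, n, np.array([0.0, 0.0, -1.0]), sphere_trace))  # False
```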
Organization: The research was done by a team from Zhejiang University, Stanford University, and the University of Illinois.
Paper Authors: The authors of the paper are a team of researchers from these universities.
Read More:
Full Paper: You can read the full paper here: [2308.07903] Relightable and Animatable Neural Avatar from Sparse-View Video (https://arxiv.org/abs/2308.07903)
Project Page: You can learn more about the project here: Relightable and Animatable Neural Avatar from Sparse-View Video
Code: You can access the code used in the project here: GitHub - zju3dv/RelightableAvatar (https://github.com/zju3dv/RelightableAvatar)