CVPR 2024 Best Paper Runners-Up Alert
Paper Title: EventPS: Real-Time Photometric Stereo Using an Event Camera
Few pointers from the paper
Photometric stereo is a well-established technique for estimating the surface normals of an object. However, the requirement of capturing multiple high-dynamic-range images under different illumination conditions limits its speed and rules out real-time applications.
In this paper, the authors introduce “EventPS”, a novel approach to real-time photometric stereo using an event camera. Capitalizing on the exceptional temporal resolution, dynamic range, and low bandwidth of event cameras, EventPS estimates surface normals solely from radiance changes, significantly enhancing data efficiency.
EventPS seamlessly integrates with both optimization-based and deep-learning-based photometric stereo techniques to offer a robust solution for non-Lambertian surfaces. Extensive experiments validate the effectiveness and efficiency of EventPS compared to frame-based counterparts.
Their algorithm runs at over 30 fps in real-world scenarios, unleashing the potential of EventPS in time-sensitive and high-speed downstream applications.
Organization: National Key Laboratory for Multimedia Information Processing, School of Computer Science, @PKU1898; National Engineering Research Center of Visual Technology, School of Computer Science, @PKU1898; School of Mechanical Engineering, @sjtu1896; Graduate School of Information Science and Technology, @UTokyo_News_en; @jouhouken
Paper Authors: Bohan Yu, Jieji Ren, Jin Han, Feishi Wang, Jinxiu Liang, Boxin Shi
Read the Full Paper here: https://www.ybh1998.space/wp-conten..._Photometric_Stereo_Using_an_Event_Camera.pdf
Project Page: EventPS: Real-Time Photometric Stereo Using an Event Camera – Bohan Yu's Homepage
Code & Data: EventPS
Heartfelt congratulations to all the talented authors!
Be sure to watch the attached demo video (sound on)
Music by Music Unlimited from @pixabay
Find this valuable?
QT and teach your network something new
Follow me, @NaveenManwani17, for the latest updates on Tech and AI-related news, insightful research papers, and exciting announcements.
#CVPR2024
To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
AI-generated explanation:
(llama 3 sonar 32k large chat)
**Big News in Computer Vision Research**
A research paper called "EventPS" has been recognized as one of the best papers at a top conference in computer vision (CVPR 2024). Here's what it's about:
**What's the problem?**
Imagine you want to take a picture of an object and figure out its shape and orientation. One way to do this is called "photometric stereo", which involves taking multiple pictures of the object under different lighting conditions. However, this method is slow and can't be used in real-time applications.
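To make "photometric stereo" concrete, here is a minimal sketch of the classic frame-based version under a Lambertian (matte) surface assumption. This is a textbook illustration in Python, not code from the paper:

```python
import numpy as np

# Minimal sketch of classic, frame-based photometric stereo under a
# Lambertian (matte-surface) assumption -- a textbook illustration,
# not code from the paper.
# L: (k, 3) known light directions; I: (k,) pixel intensities, one per image.
def lambertian_normal(L, I):
    # Solve L @ g = I by least squares, where g = albedo * normal.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return (g / albedo if albedo > 0 else g), albedo

# Toy example: a surface patch facing straight up with albedo 0.8,
# photographed under three known light directions.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866]])
true_n, rho = np.array([0.0, 0.0, 1.0]), 0.8
I = rho * (L @ true_n)
print(lambertian_normal(L, I))   # recovers the normal (0, 0, 1) and albedo 0.8
```

Because at least three well-exposed images are needed, and each one must be captured and read out, this frame-based approach is limited in speed; that is the bottleneck EventPS targets.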
**What's the solution?**
The researchers introduced a new approach called "EventPS", which uses a special type of camera called an "event camera". This camera captures changes in light very quickly and efficiently, which allows it to estimate the shape and orientation of an object in real time.
**How does it work?**
EventPS uses the event camera to detect changes in brightness as the lighting varies, and turns those changes into an estimate of the object's surface orientation (its normals). It can work with different types of algorithms, including ones that use machine learning, to get accurate results even on shiny, non-matte surfaces. The researchers tested their approach and found that it works well and is much faster than frame-based methods.
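To give a feel for how an event stream can replace full images, here is a hedged sketch (my own illustration assuming a Lambertian surface and an idealized event camera, not the authors' released code): an event fires when the log radiance changes by a contrast threshold C, and an event between light directions l_prev and l_now then implies n · (l_now − exp(±C) · l_prev) ≈ 0, so the surface normal can be recovered from the null space of the stacked constraint vectors.

```python
import numpy as np

# Hedged sketch (my own illustration, not the authors' released code):
# event-based normal estimation for one pixel under a Lambertian surface
# and an idealized event camera. An event fires when the log radiance
# changes by a contrast threshold C between light directions l_prev and
# l_now, which implies  n . (l_now - exp(p * C) * l_prev) = 0  with
# polarity p = +/-1, so each event yields a vector orthogonal to n.

C = 0.1                                    # contrast threshold (assumed value)
true_n = np.array([0.3, -0.2, 1.0])
true_n /= np.linalg.norm(true_n)

def light(theta, tilt=np.deg2rad(40)):
    # A point light rotating on a cone around the viewing axis.
    return np.array([np.sin(tilt) * np.cos(theta),
                     np.sin(tilt) * np.sin(theta),
                     np.cos(tilt)])

# Simulate the event stream for one pixel over a full light rotation.
rows, l_prev = [], light(0.0)
log_ref = np.log(true_n @ l_prev)
for theta in np.linspace(0.0, 2.0 * np.pi, 20000):
    l_now = light(theta)
    diff = np.log(true_n @ l_now) - log_ref
    if abs(diff) >= C:                     # an event fires at this instant
        p = np.sign(diff)
        rows.append(l_now - np.exp(p * C) * l_prev)
        l_prev, log_ref = l_now, np.log(true_n @ l_now)

# The normal spans the (approximate) null space of the stacked constraints:
# take the right singular vector with the smallest singular value.
_, _, vt = np.linalg.svd(np.array(rows))
n_est = vt[-1] if vt[-1][2] > 0 else -vt[-1]   # orient toward the camera
print(n_est, true_n)                       # the two should roughly agree
```

Each pixel only reports these sparse change events instead of full frames, which is where the paper's data-efficiency and speed gains come from; the paper also feeds event data into deep-learning-based photometric stereo to stay robust on non-Lambertian surfaces.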
**What are the benefits?**
EventPS can run at over 30 frames per second, which means it can be used in applications that require fast processing, such as robotics or self-driving cars. This technology has the potential to be used in many different areas, including computer vision, robotics, and more.
**Who did the research?**
The research was done by a team of scientists from several universities and research institutions, including Peking University, Shanghai Jiao Tong University, and the University of Tokyo.