Our SUNbot has evolved several generations to understand the world better.
Place-centric Scene Representation for Actionable Predictions.
Talk at ECCV 2012

Mission:

We study computer vision and robot perception: the computational principles underlying machine perception, robot vision, and human vision. We are interested in building robots that automatically understand visual scenes, both inferring their semantics and extracting their 3D structure.

We design end-to-end algorithms that learn deep 3D representations from big 3D data for visual scene understanding. We believe it is critical to consider the role of a machine as an active explorer in a 3D world, such as a robot, and to learn from rich 3D data close to the natural input of the human visual system.

Specifically, our group is at the frontier of 3D Deep Learning, RGB-D Recognition and Reconstruction, Place-centric 3D Context Representation, Graphics for Vision, Large-scale Crowd-sourcing, and Petascale Big Data. As a real-world test for our research, we also focus on several key applications, such as Robotics, Autonomous Driving, and Augmented Reality.


We gratefully acknowledge the generous support of Intel, the National Science Foundation, Google, MERL, Facebook, and NVIDIA for our research.