On your one-minute walk from the coffee machine to your desk each morning, you pass by dozens of scenes – a kitchen, an elevator, your office – and you effortlessly recognize them and perceive their 3D structure. Yet this one-minute scene-understanding problem has remained an open challenge in computer vision since the field was established 50 years ago. In this dissertation, we aim to rethink the path the field has taken over these years, challenge the standard practices and implicit assumptions of current research, and redefine several basic principles of scene understanding.
The key idea of this dissertation is that learning from rich data in natural settings is crucial for finding the right representation for scene understanding. First, to overcome the limitations of "object-centric" datasets, we built the Scene Understanding (SUN) Database, a large collection of real-world images that exhaustively spans all scene categories. This "scene-centric" dataset provides a more natural sample of the human visual world, and establishes a realistic benchmark for standard 2D recognition tasks. However, while an image is a 2D array, the world is 3D and our eyes see it from a particular viewpoint – properties that are not traditionally modeled. To obtain a high-level 3D understanding, we reintroduce geometric figures using modern machinery. To model scene viewpoint, we propose a panoramic place representation that goes beyond aperture computer vision and uses data close to the natural input of the human visual system. This paradigm shift toward richer representations also opens up new challenges that require a new kind of big data – data with extra descriptions, namely rich data. Specifically, we focus on a highly valuable kind of rich data – multiple viewpoints in 3D – and we build the SUN3D database to obtain an integrated "place-centric" representation of scenes. We argue that the computer's role as an active explorer in a 3D scene is of great importance, and we demonstrate the power of this place-centric scene representation.