Physically-Based Rendering for Indoor Scene Understanding
Using Convolutional Neural Networks

Abstract

Indoor scene understanding is central to applications such as robot navigation and human companion assistance. In recent years, data-driven deep neural networks have outperformed many traditional approaches thanks to their representation learning capabilities. One of the bottlenecks in training for better representations is the amount of available per-pixel ground truth data that is required for core scene understanding tasks such as semantic segmentation, normal prediction, and object edge detection. To address this problem, a number of works have proposed using synthetic data. However, a systematic study of how such synthetic data is generated is missing. In this work, we introduce a large-scale synthetic dataset with 400K physically-based rendered images from 45K realistic 3D indoor scenes. We study the effects of rendering methods and scene lighting on training for three computer vision tasks: surface normal prediction, semantic segmentation, and object boundary detection. This study provides insights into the best practices for training with synthetic data (more realistic rendering is worth it) and shows that pretraining with our new synthetic dataset can improve results beyond the current state of the art on all three tasks.



Paper



Dataset

Download links for our dataset, along with some snapshots, can be found below. Note that our rendering is consistent with SUNCG v0.




Rendering Code

We provide the code to fully reproduce our synthetic dataset. Please find the code on GitHub.
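To illustrate what "physically-based" shading entails at its simplest, here is a minimal sketch of the direct Lambertian lighting term for one surface point. This is an illustration only, not code from our pipeline: the actual renderer also handles area lights, shadows, and global illumination, which is precisely what distinguishes physically-based rendering from flat OpenGL-style shading.

```python
import numpy as np

def lambertian_shade(albedo, normal, light_dir, light_radiance):
    """Direct Lambertian shading of a single surface point:
    L_o = (albedo / pi) * radiance * max(0, n . l).
    This is only the simplest term of a physically-based renderer;
    a full pipeline adds shadows and indirect (bounced) light."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    # Back-facing light contributes nothing, hence the clamp to zero.
    cos_theta = max(0.0, float(np.dot(n, l)))
    return (np.asarray(albedo, dtype=float) / np.pi) * light_radiance * cos_theta
```

For example, a white surface facing a light of radiance pi head-on reflects exactly 1.0, while a light arriving from behind the surface contributes nothing.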

Rendering Pipeline



Vision Task 1: Surface Normal Estimation

We provide a Torch implementation of single-image surface normal estimation. Please find the code on GitHub.
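Surface normal predictions are commonly evaluated by the per-pixel angle between predicted and ground-truth normals. A minimal NumPy sketch of these standard metrics (mean/median angular error and the fraction of pixels within 11.25, 22.5, and 30 degrees) is shown below; this is an evaluation helper for illustration, not the Torch training code itself.

```python
import numpy as np

def normal_angle_error(pred, gt):
    """Per-pixel angular error in degrees between predicted and
    ground-truth normal maps of shape (H, W, 3)."""
    # Normalize both maps to unit length before comparing.
    p = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    g = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(p * g, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def normal_metrics(pred, gt):
    """Summary metrics commonly reported for normal estimation."""
    err = normal_angle_error(pred, gt).ravel()
    return {
        "mean": float(err.mean()),
        "median": float(np.median(err)),
        "11.25": float((err < 11.25).mean()),
        "22.5": float((err < 22.5).mean()),
        "30": float((err < 30.0).mean()),
    }
```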

Surface Normal Estimation

The model pre-trained on our dataset can be downloaded here:



Vision Task 2: Semantic Segmentation

Please check GitHub for the implementation of the dilated network.
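The key building block of a dilated network is the dilated (atrous) convolution, which enlarges the receptive field without adding parameters or reducing resolution. A minimal single-channel NumPy sketch of the operation, for illustration only (real implementations use optimized framework ops):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2-D cross-correlation with a dilation factor on a
    single-channel image. A dilation of d places the kernel taps
    d pixels apart, i.e. d-1 implicit zeros between taps."""
    kh, kw = kernel.shape
    # Effective kernel extent after dilation.
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slicing picks out the dilated kernel taps.
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With a 3x3 kernel, dilation 2 covers a 5x5 window, dilation 4 a 9x9 window, and so on, which is how stacked dilated layers aggregate context at full resolution for dense prediction.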

The model pre-trained on our dataset can be downloaded here:



Vision Task 3: Instance Boundary Estimation

Please check GitHub for the implementation of Holistically-Nested Edge Detection (HED).

The model pre-trained on our dataset can be downloaded here:



Contact

This webpage is hosted by Princeton Vision Group.

Please contact Yinda Zhang (yindazATcsDOTprincetonDOTedu) if you have any questions about the data, models, or code.