Introduction

3D content is becoming part of everyday life. With commodity depth sensors, anyone can easily scan 3D models of their surroundings, and with better modeling tools, designers can produce 3D models with less effort. The advent of virtual reality will drive the demand for high-quality 3D models even further. We are witnessing significant growth in 3D content. This increasing availability of 3D models requires scalable and efficient algorithms to manage and analyze them. One important problem is how to retrieve relevant 3D models, and it has been studied for more than a decade. However, existing algorithms are usually evaluated on repositories with only thousands of models, even though millions of 3D models are available on the Internet. Thanks to the efforts of the ShapeNet [1] team, we now have access to many more 3D models for developing and evaluating new algorithms. In this track, we aim to evaluate the performance of 3D shape retrieval methods on a dataset that is much larger than previous benchmarks.

Dataset

For this track, we use the models from ShapeNetCore. ShapeNetCore is a subset of the full ShapeNet dataset containing single, clean 3D models with manually verified category and alignment annotations. It covers 55 common object categories with about 51,300 unique 3D models.

The evaluation procedure follows Wu et al. [2]. Contest participants submit similarity scores between each pair of test shapes. Given a query from the test set, a ranked list of the remaining test models is produced according to the similarity measure. We evaluate retrieval algorithms with two metrics: (1) the mean area under the precision-recall curve (AUC) over all test queries; and (2) the mean average precision (MAP), where the average precision (AP) of a query is the mean of the precision values obtained each time a relevant model is retrieved. Submission details will be made available when the dataset is released.
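For concreteness, the sketch below shows how these two metrics could be computed from a pairwise similarity matrix. It is an illustrative implementation only, not the official evaluation code; the `similarity` matrix layout, the `labels` array, and all function names are assumptions made for this example.

```python
import numpy as np

def average_precision(relevant):
    """AP: mean of the precision values at each rank where a relevant
    (same-category) model is retrieved."""
    relevant = np.asarray(relevant, dtype=bool)
    if not relevant.any():
        return 0.0
    hits = np.cumsum(relevant)
    precision = hits / np.arange(1, len(relevant) + 1)
    return precision[relevant].mean()

def pr_auc(relevant):
    """Area under the precision-recall curve for one ranked list
    (trapezoidal integration of precision over recall)."""
    relevant = np.asarray(relevant, dtype=bool)
    n_rel = relevant.sum()
    if n_rel == 0:
        return 0.0
    hits = np.cumsum(relevant)
    precision = hits / np.arange(1, len(relevant) + 1)
    recall = hits / n_rel
    return np.trapz(precision, recall)

def evaluate(similarity, labels):
    """similarity: (N, N) array of pairwise scores (higher = more similar).
    labels: (N,) category labels of the test models.
    Returns mean PR-AUC and MAP over all queries."""
    labels = np.asarray(labels)
    aucs, aps = [], []
    for q in range(len(labels)):
        order = np.argsort(-similarity[q])   # rank all models by score
        order = order[order != q]            # drop the query itself
        relevant = labels[order] == labels[q]
        aucs.append(pr_auc(relevant))
        aps.append(average_precision(relevant))
    return np.mean(aucs), np.mean(aps)
```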

Schedule

Track

  • Feb. 1: Data distribution
  • Feb. 10: Registration deadline (please register before this date)
  • Feb. 29: Result submission
  • Mar. 4: Release of evaluation results

SHREC

  • Mar. 7: Results ready for a track report
  • Mar. 15: Submit track papers for review
  • Mar. 22: All reviews due; feedback and notifications
  • Apr. 1: Submission of camera-ready track papers
  • May 7/8: Workshop

Team

Organizers

  • Manolis Savva - Stanford University
  • Fisher Yu - Princeton University
  • Hao Su - Stanford University

Advisory Board

  • Leonidas Guibas - Stanford University
  • Pat Hanrahan - Stanford University
  • Silvio Savarese - Stanford University
  • Qixing Huang - Toyota Technological Institute at Chicago
  • Jianxiong Xiao - Princeton University
  • Thomas Funkhouser - Princeton University

References

[1] Chang et al., "ShapeNet: An Information-Rich 3D Model Repository," arXiv:1512.03012, 2015.
[2] Wu et al., "3D ShapeNets: A Deep Representation for Volumetric Shapes," CVPR 2015.
[3] Shilane et al., "The Princeton Shape Benchmark," Shape Modeling International, 2004.