3D content is becoming part of everyday life. With commodity depth sensors, anyone can easily scan 3D models of the surrounding environment; with better 3D modeling tools, designers can produce 3D models more easily; and the advent of virtual reality will drive the demand for high-quality 3D models even further. We are witnessing significant growth of 3D content. This increasing availability of 3D models requires scalable and efficient algorithms to manage and analyze them. One important problem is how to retrieve relevant 3D models, a question that has been studied for more than a decade. However, existing algorithms are usually evaluated on repositories with only thousands of models, even though millions of 3D models are available on the Internet. Thanks to the efforts of the ShapeNet [1] team, we now have access to many more 3D models for developing and evaluating new algorithms. In this track, we aim to evaluate the performance of 3D shape retrieval methods on a dataset that is much larger than previous ones.
In this context, we use the models from ShapeNetCore, a subset of the full ShapeNet dataset containing single, clean 3D models with manually verified category and alignment annotations. It covers 55 common object categories with about 51,300 unique 3D models.
The evaluation procedure follows Wu et al. [2]. Contest participants submit similarity scores between each pair of test samples. Given a query from the test set, a ranked list of the remaining test data is returned according to the similarity measure. We evaluate retrieval algorithms using two metrics: (1) the mean area under the precision-recall curve (AUC) over all test queries; and (2) mean average precision (MAP), where average precision (AP) is the mean of the precision values obtained each time a relevant (same-category) sample is retrieved. Submission details will be available when the dataset is released.
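For concreteness, the following is a minimal Python sketch of how these two per-query metrics could be computed from a ranked list of category labels. It is not the official evaluation script: the function names, the single-label-per-model assumption, and the trapezoidal approximation of the AUC are illustrative choices.

```python
# Illustrative sketch of the per-query retrieval metrics (not the official code).
# Assumes every test model has a single ground-truth category label.
import numpy as np

def average_precision(query_label, ranked_labels):
    """AP for one query: mean of the precision values at each rank
    where a relevant (same-category) model is retrieved."""
    relevant = np.asarray(ranked_labels) == query_label
    if relevant.sum() == 0:
        return 0.0
    hits = np.cumsum(relevant)                # relevant items seen so far
    ranks = np.arange(1, len(relevant) + 1)   # 1-based rank positions
    precision = hits / ranks
    return precision[relevant].mean()

def pr_auc(query_label, ranked_labels):
    """Area under the precision-recall curve for one query,
    approximated with the trapezoidal rule."""
    relevant = np.asarray(ranked_labels) == query_label
    n_rel = relevant.sum()
    if n_rel == 0:
        return 0.0
    hits = np.cumsum(relevant)
    ranks = np.arange(1, len(relevant) + 1)
    precision = hits / ranks
    recall = hits / n_rel
    # Prepend (recall=0, precision=1) as a simple boundary convention;
    # the exact interpolation used by the organizers may differ.
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([1.0], precision))
    return np.trapz(precision, recall)

# Usage: rank the remaining test models by similarity to the query,
# then average AP (-> MAP) and AUC over all queries.
sims = np.array([0.9, 0.2, 0.7, 0.4])                  # hypothetical similarity scores
labels = np.array(["chair", "table", "chair", "lamp"]) # hypothetical category labels
order = np.argsort(-sims)                              # most similar first
print(average_precision("chair", labels[order]))
print(pr_auc("chair", labels[order]))
```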
Date | Event
Feb. 1 | Data distribution
Feb. 10 | Registration deadline
Feb. 29 | Result submission
Mar. 4 | Release of evaluation results
Mar. 7 | Results ready for the track report
Mar. 15 | Track papers submitted for review
Mar. 22 | All reviews due; feedback and notifications
Apr. 1 | Submission of camera-ready track papers
May 7/8 | Workshop