Object-aware Guidance for Autonomous Scene Reconstruction

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2018)

Ligang Liu1               Xi Xia1               Han Sun1               Qi Shen1               Junzhan Xu2               Bin Chen2               Hui Huang2               Kai Xu2,3

1University of Science and Technology of China          2Shenzhen University          3National University of Defense Technology

Fig. 1. Autonomous scene scanning and reconstruction on a real office scene using our object-aware guidance approach. In each column (a)-(d), the object marked with the yellow rectangular frame is the object of interest (BOI). The upper row shows the navigation path (in dotted red) with previous scanning views (shown as purple dots) and the current position of the robot. The objects in different colors are the reconstructed objects in the scene. The bottom row shows the depth data (left) and the RGB image (right) from the current view of the robot. Our approach achieves both global path planning and local view planning on-the-fly within a single navigation pass and obtains the reconstructed scene with semantic objects (d).


Autonomous 3D scanning and reconstruction of unknown indoor scenes by mobile robots equipped with depth sensors has become an active research area in recent years. These tasks require balancing global exploration of the scene against local scanning of the objects within it. In this paper, we propose an object-aware guidance method for autoscanning that explores, reconstructs, and understands an unknown scene within a single navigation pass.

Our approach interleaves object analysis, which identifies the next best object (NBO) for global exploration, with object-aware information gain analysis, which plans the next best view (NBV) for local scanning. Based on a model-driven objectness measure, an objectness-based segmentation method is introduced to extract semantic object proposals from the current scene surface via a multi-class graph-cut minimization. We then propose objectness-based NBO and NBV strategies to plan both the global navigation path and the local scanning views. An object of interest (BOI) is identified using the NBO metric, which combines its objectness score and visual saliency. The robot navigates to the BOI and scans it from the views determined by the NBV strategy. Once the BOI is recognized as a complete object, the most similar 3D model in the dataset is inserted into the scene to replace it. The algorithm iterates until all objects in the scene are recognized and reconstructed. A variety of experiments and comparisons demonstrate the feasibility and efficiency of our approach.
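The interleaved NBO/NBV loop described above can be sketched as follows. This is a minimal, illustrative simulation, not the paper's implementation: the object proposals, the product-form NBO score, and the toy scanning routine are all simplifying assumptions made for clarity.

```python
from dataclasses import dataclass

@dataclass
class ObjectProposal:
    """A hypothetical stand-in for a segmented object proposal."""
    name: str
    objectness: float      # model-driven objectness score in [0, 1]
    saliency: float        # visual saliency in [0, 1]
    recognized: bool = False

def nbo_score(obj):
    # The NBO metric combines objectness and visual saliency; a simple
    # product is used here purely for illustration.
    return obj.objectness * obj.saliency

def scan_with_nbv(obj, max_views=3):
    # Stand-in for NBV-driven local scanning: each planned view improves
    # the objectness estimate until the object is recognized as complete.
    for _ in range(max_views):
        obj.objectness = min(1.0, obj.objectness + 0.3)
        if obj.objectness >= 0.95:
            # At this point the most similar 3D model from the dataset
            # would replace the scanned object in the scene.
            obj.recognized = True
            break

def autoscan(proposals):
    """Iterate NBO selection and NBV scanning until all objects are done."""
    visit_order = []
    while any(not o.recognized for o in proposals):
        # Global exploration: pick the next best object (NBO).
        boi = max((o for o in proposals if not o.recognized), key=nbo_score)
        visit_order.append(boi.name)
        scan_with_nbv(boi)  # local scanning via NBV views
    return visit_order

scene = [ObjectProposal("chair", 0.6, 0.9),
         ObjectProposal("desk", 0.8, 0.5),
         ObjectProposal("lamp", 0.4, 0.7)]
print(autoscan(scene))  # prints ['chair', 'desk', 'lamp']
```

The loop terminates in one pass because each BOI is scanned until recognized before the next one is selected, mirroring the single-navigation-pass property claimed above.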

Fig. 3. Pipeline of our object-aware autonomous scene scanning and reconstruction approach.

Fig. 15. Visual results of object-aware scanning for virtual simulation.

Fig. 16. Visual results of object-aware scanning on a real scene.


We thank the anonymous reviewers for their valuable comments. This work was supported in part by NSFC (61672482, 61672481, 61622212, 61572507, 61532003, 61522213), the One Hundred Talent Project of the Chinese Academy of Sciences, the 973 Program (2015CB352501), the Guangdong Science and Technology Program (2015A030312015), and the Shenzhen Science and Technology Program (KQJSCX20170727101233642, JCYJ20151015151249564).


@article{Liu2018ObjectAware,
  title   = {Object-Aware Guidance for Autonomous Scene Reconstruction},
  author  = {Ligang Liu and Xi Xia and Han Sun and Qi Shen and Junzhan Xu and Bin Chen and Hui Huang and Kai Xu},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  volume  = {37},
  number  = {4},
  pages   = {104:1--104:12},
  year    = {2018},
}
