Full 3D Reconstruction of Transparent Objects

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2018)

BOJIAN WU1,2           YANG ZHOU2,3           YIMING QIAN4            MINGLUN GONG5           HUI HUANG2*

1SIAT       2Shenzhen University           3Huazhong University of Science & Technology           4University of Alberta           5Memorial University of Newfoundland

Fig. 1. A transparent object refracts light from its environment (left), and hence its shape cannot be reconstructed using conventional techniques. We present a novel method that, for the first time, directly reconstructs the shapes of transparent objects from captured images. This allows photorealistic rendering of real transparent objects in virtual environments (right).


Numerous techniques have been proposed over the past decades for reconstructing 3D models of opaque objects. However, none of them can be directly applied to transparent objects. This paper presents a fully automatic approach for reconstructing complete 3D shapes of transparent objects. By positioning an object on a turntable, its silhouettes and light refraction paths under different viewing directions are captured. Then, starting from an initial rough model generated by space carving, our algorithm progressively optimizes the model under three constraints: surface and refraction normal consistency, surface projection and silhouette consistency, and surface smoothness. Experimental results on both synthetic and real objects demonstrate that our method can successfully recover the complex shapes of transparent objects and faithfully reproduce their light refraction properties.
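The initial rough model mentioned above comes from space carving against the captured silhouettes. As an illustration of the idea only (not the paper's implementation), the sketch below carves a voxel set against binary silhouette masks, keeping voxels whose projections fall inside every silhouette. The pinhole 3x4 projection matrices and mask format are assumptions:

```python
import numpy as np

def space_carve(voxels, cameras, silhouettes):
    """Keep voxels whose projections fall inside every silhouette.

    voxels:      (N, 3) array of voxel centers
    cameras:     list of 3x4 projection matrices (assumed pinhole model)
    silhouettes: list of binary masks, one per view (True = inside object)
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                # (N, 3) homogeneous image coords
        uv = proj[:, :2] / proj[:, 2:3]   # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[in_image] = mask[v[in_image], u[in_image]]
        keep &= hit   # carve away voxels outside any silhouette
    return voxels[keep]
```

The surviving voxels form the visual hull, which the subsequent refraction-based optimization then refines; concavities (like the Kitten's ear in Fig. 6) cannot be recovered by carving alone.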

Fig. 2. Our data acquisition setup. The object to be captured (the Monkey statue in this case) is placed on Turntable #1. An LCD monitor is placed on Turntable #2 and serves as the light source. Camera #1 faces the object and the monitor to capture silhouettes and ray-pixel correspondences. Camera #2 looks down at Turntable #1 to calibrate its rotation axis. The bottom-right monitor belongs to a PC that controls the data capture and is not used for illuminating the scene.

Fig. 6. Importance of adaptive neighborhood size selection for point consolidation. For the point set X (in blue) sampled from the initial rough model (a), our approach adaptively selects the neighborhood size h for each point x based on the average distance between the rough model and the noisy point cloud (in green) estimated using ray-ray correspondences (b). The neighborhood sizes computed for different areas are color coded in (c), which shows that larger neighborhoods are used for areas not well modeled by the rough model, such as the concave ear region, and smaller neighborhoods for convex parts. Projecting sample points using adaptive neighborhoods results in a consolidated point cloud (in red) that better captures the shape of the Kitten around its ear and back while remaining smooth; see the cross-section curves shown in (e). Using the consolidated point cloud shown in (d), we obtain a better reconstructed shape in these concave areas (f). In comparison, projection using a fixed neighborhood size either leads to a noisy model when the neighborhood size is small (g), or to an over-smoothed model when the neighborhood is large (h). In both cases, the surface in concave regions is not properly reconstructed.
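The adaptive-neighborhood projection this caption describes can be illustrated with a simplified sketch: each sample from the rough model is pulled toward a Gaussian-weighted mean of the noisy point cloud, with the bandwidth h grown where the rough model sits far from the cloud (concavities) and kept small where it already fits. The function name and the `h_min`/`h_scale` knobs are hypothetical; the paper's actual consolidation energy differs:

```python
import numpy as np

def consolidate(samples, noisy, h_min=0.05, h_scale=2.0):
    """Project rough-model samples toward a noisy point cloud with an
    adaptive neighborhood size (a simplified sketch, not the paper's
    exact formulation; h_min and h_scale are hypothetical tuning knobs).

    samples: (M, 3) points sampled from the rough model
    noisy:   (N, 3) noisy point cloud from ray-ray correspondences
    """
    out = np.empty_like(samples)
    for i, x in enumerate(samples):
        d = np.linalg.norm(noisy - x, axis=1)
        # Larger h where the rough model is far from the noisy cloud
        # (e.g. concave regions), smaller h where it already fits well.
        h = max(h_min, h_scale * d.min())
        w = np.exp(-(d / h) ** 2)
        out[i] = (w[:, None] * noisy).sum(axis=0) / w.sum()
    return out
```

With a fixed h, this reduces to the trade-off shown in (g) and (h): small h follows the noise, large h over-smooths; the adaptive bandwidth is what lets concave regions move while convex regions stay stable.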

Fig. 9. Results on synthetic Bunny with refractive index set to 1.15. The size of the bounding box of Bunny is 8.4 × 8.3 × 6.5 mm. The number below each error map is the average Hausdorff distance between the reconstructed shape and the ground truth.
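The error metric reported here, the average Hausdorff distance, can be computed between two point samplings of the surfaces as below. This is a minimal sketch (a brute-force symmetric mean nearest-neighbor distance); the paper's exact sampling density and measurement settings are not specified here:

```python
import numpy as np

def avg_hausdorff(a, b):
    """Symmetric average point-to-nearest-point distance between two
    point samplings a (M, 3) and b (N, 3): for each point in one set,
    take the distance to its nearest neighbor in the other set, average,
    and symmetrize. A common proxy for surface reconstruction error."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (M, N)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

For dense samplings a KD-tree nearest-neighbor query would replace the O(MN) distance matrix, but the metric itself is the same.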

Fig. 14. Progressive reconstruction of the Hand object shown in Fig. 15. Our approach gradually recovers surface details that are not available in the initial rough model (a). For quantitative evaluation, we also painted the transparent object (b) with DPT-5 developer (see (c)) and then carefully scanned it using a high-end Artec Space Spider scanner (d). The size of the bounding box of this scanned Hand model is 80 × 119 × 64 mm. Using this captured model as ground truth, the average reconstruction error after each iteration is plotted in (e). The resulting curve shows that our approach effectively reduces reconstruction error and converges after 20 iterations.

Fig. 18. Reconstruction results for Monkey and Dog under two view directions. For each object, the reconstruction (right column) successfully captures the featured concave parts (highlighted in green boxes). Nonetheless, we can still observe areas that are not well reconstructed (highlighted in red) due to violation of the two-refraction assumption. Since these reconstruction artifacts only show up in areas involving multiple refractions, they are hardly noticeable when comparing renderings of our reconstructed models (middle column) with photos of the real objects (left column).

Data & Code

Note that the DATA and CODE are free for Research and Education Use ONLY. 

Please cite our paper (using the BibTeX entry below) if you use any part of our ALGORITHM, CODE, DATA or RESULTS in any publication.



We thank the anonymous reviewers for their valuable comments. This work was supported in part by NSFC (61522213, 61761146002, 6171101466), 973 Program (2015CB352501), Guangdong Science and Technology Program (2015A030312015), Shenzhen Innovation Program (KQJSCX20170727101233642, JCYJ20151015151249564) and NSERC (293127). 


@article{Wu2018,
  title   = {Full 3D Reconstruction of Transparent Objects},
  author  = {Bojian Wu and Yang Zhou and Yiming Qian and Minglun Gong and Hui Huang},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  volume  = {37},
  number  = {4},
  pages   = {103:1--103:11},
  year    = {2018},
}

Downloads (mirror for users in China)

Downloads (mirror for users elsewhere)