Differentiable Refraction-Tracing for Mesh Reconstruction of Transparent Objects

ACM Transactions on Graphics (Proceedings of SIGGRAPH ASIA 2020)


Jiahui Lyu1   Bojian Wu2   Dani Lischinski3   Daniel Cohen-Or1,4   Hui Huang1*

1Shenzhen University   2Alibaba Group   3The Hebrew University of Jerusalem   4Tel-Aviv University



Fig. 1. Reconstructing a transparent Hand object. The five images, from left to right, show a sequence of ray-traced models, progressively optimized by our method. The ground-truth geometry, obtained by painting and scanning the object, and a real photograph of the original object are shown on the right.



Abstract

Capturing the 3D geometry of transparent objects is a challenging task, ill-suited for general-purpose scanning and reconstruction techniques, since these cannot handle specular light transport phenomena. Existing state-of-the-art methods, designed specifically for this task, either involve a complex setup to reconstruct complete refractive ray paths, or leverage a data-driven approach based on synthetic training data. In either case, the reconstructed 3D models suffer from over-smoothing and loss of fine detail. This paper introduces a novel, high-precision 3D acquisition and reconstruction method for solid transparent objects. Using a static background with a coded pattern, we establish a mapping between the camera view rays and locations on the background. Differentiable tracing of refractive ray paths is then used to directly optimize a 3D mesh approximation of the object, while simultaneously ensuring silhouette consistency and smoothness. Extensive experiments and comparisons demonstrate the superior accuracy of our method.
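To make the core operation concrete: tracing a refractive ray path means applying Snell's law at each surface intersection (entry and exit, for a solid object). The sketch below is our own illustration, not the authors' code; the function name and the refractive index of 1.5 are assumptions.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.

    eta is the ratio of refractive indices n_incident / n_transmitted.
    Returns the refracted unit direction, or None on total internal
    reflection. The paper traces two such refractions per camera ray,
    differentiably with respect to the mesh vertices.
    """
    cos_i = -np.dot(d, n)                  # cosine of the incidence angle
    k = 1.0 - eta**2 * (1.0 - cos_i**2)    # squared cosine of the refraction angle
    if k < 0.0:
        return None                        # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Example: a ray entering glass (air n=1.0 -> glass n=1.5) at 45 degrees.
d = np.array([np.sin(np.pi / 4), 0.0, -np.cos(np.pi / 4)])
n = np.array([0.0, 0.0, 1.0])
print(refract(d, n, 1.0 / 1.5))            # the ray bends toward the normal
```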





Fig. 2. Our transparent object capture setup. The object to be captured is placed on the turntable, which is rotated during acquisition to provide the static camera with multiple views of the object. A static LCD monitor is placed behind the object, displaying horizontal and vertical stripe patterns that form a Gray-coded background. The background is used for extracting the object’s silhouette and estimating the environment matte for each camera view.
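As background for readers unfamiliar with Gray-coded patterns: the column (and row) index of every monitor pixel is spread across a sequence of binary stripe images, one per bit, so each background location receives a unique black/white code over time. A minimal sketch of generating the vertical stripes, assuming an 11-bit code and a 1920×1080 monitor (our assumptions, not necessarily the paper's settings):

```python
import numpy as np

def gray_code_patterns(width=1920, height=1080, bits=11):
    """Vertical Gray-code stripe patterns for the background monitor.

    Pattern k is white wherever bit k of the Gray code of the pixel's
    column index is 1; horizontal stripes are built the same way from
    row indices. 2**bits must cover the monitor width.
    """
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)              # binary index -> Gray code
    patterns = []
    for k in range(bits - 1, -1, -1):      # most significant bit first
        stripe = ((gray >> k) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns
```

Gray codes are preferred over plain binary because adjacent columns differ in exactly one bit, which confines decoding errors at stripe boundaries to a single unit.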
Fig. 4. Refraction loss. The simulated refractive ray path (in red) through image pixel q should reach the observed background point Q, which corresponds to the intersection of the real ray path (in blue) with the background monitor. The pink mesh is the optimized virtual shape, initialized to the visual hull. The top-left and top-right insets show the triangles and vertices associated with a single simulated ray-pixel correspondence.
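In code, the refraction loss can be sketched as follows: for each ray-pixel correspondence, intersect the simulated exit ray with the monitor plane and penalize its deviation from the observed point Q. This PyTorch sketch is our own illustration; all tensor names are assumptions, and it omits how the exit points and directions are produced (by two differentiable Snell refractions through the mesh).

```python
import torch

def refraction_loss(exit_points, exit_dirs, Q, plane_point, plane_normal):
    """Mean squared distance on the monitor plane between simulated
    hit points and observed background points Q.

    exit_points, exit_dirs, Q: (N, 3) tensors; plane_point and
    plane_normal: (3,) tensors describing the monitor plane.
    """
    # Ray-plane intersection: t = ((p0 - o) . n) / (d . n), hit = o + t * d.
    denom = (exit_dirs * plane_normal).sum(-1, keepdim=True)
    t = ((plane_point - exit_points) * plane_normal).sum(-1, keepdim=True) / denom
    hits = exit_points + t * exit_dirs
    return ((hits - Q) ** 2).sum(-1).mean()
```

Because exit_points and exit_dirs are differentiable functions of the mesh vertices, calling backward() on this loss yields per-vertex gradients, which is what drives the progressive optimization shown in Fig. 1.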


Fig. 3. Coarse-to-fine reconstruction of a real mouse statue. Top: starting with the visual hull obtained by space carving, our method gradually recovers details ranging from large geometric displacements, such as the neck and tummy, to fine-level details, like the eyes. Middle: we alternately remesh and reconstruct geometric detail at progressively finer scales. Bottom: the error map is visualized using the shortest distance between each vertex of the reconstruction and the ground-truth mesh; the number below each error map is the average of the per-vertex distances in millimeters. The actual size of the statue's bounding box is 178mm × 101mm × 71mm.
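The bottom-row error metric can be reproduced with an off-the-shelf mesh library. A sketch assuming the trimesh package and two pre-aligned meshes in millimeter units; the file paths are placeholders:

```python
import numpy as np
import trimesh

def mean_vertex_error(recon_path, gt_path):
    """Shortest distance from each reconstructed vertex to the
    ground-truth surface, and its mean (the number reported below
    each error map, in mm)."""
    recon = trimesh.load(recon_path, force='mesh')
    gt = trimesh.load(gt_path, force='mesh')
    # Closest point on the GT surface for every reconstructed vertex.
    _, dist, _ = trimesh.proximity.closest_point(gt, recon.vertices)
    return dist, float(dist.mean())
```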




Fig. 10. Real transparent objects used in our experiments. All of these objects exhibit geometric detail at a variety of scales. 

Fig. 11. Four of the captured images of a real Horse object, each taken from a different view while the background monitor displays one of the horizontal or vertical stripe patterns. Acquiring each view with the full set of Gray-coded background patterns enables extracting the environment matte and the object silhouette.
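Decoding the captured stripe images then assigns each camera pixel the monitor column (and, from the horizontal stripes, row) it sees through the object, which is exactly the environment matte. A minimal sketch, assuming the images arrive most-significant-bit first and using a fixed threshold (a robust implementation would threshold against fully lit and fully dark reference frames and mask unreliable pixels):

```python
import numpy as np

def decode_gray(bit_images, threshold=128):
    """Decode a stack of captured stripe images into a per-pixel
    monitor column (or row) index."""
    n_bits = len(bit_images)
    gray = np.zeros(bit_images[0].shape, dtype=np.int64)
    for img in bit_images:                 # accumulate bits, MSB first
        gray = (gray << 1) | (img > threshold).astype(np.int64)
    binary = gray.copy()                   # Gray -> binary via XOR prefix
    shift = 1
    while shift < n_bits:
        binary ^= binary >> shift
        shift <<= 1
    return binary
```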



Fig. 13. Comparison with Wu et al. [2018] and Li et al. [2020] on real transparent objects: Mouse, Monkey, and Dog. Compared with the corresponding ground truth, our reconstructions better capture geometric details at various scales, while the results of Wu et al. and Li et al. are over-smoothed and many of these details are lost. Note that, due to the lack of silhouette constraints, the arm of the Monkey and the tail of the Mouse are falsely connected to the body in the reconstructions of Li et al., whereas our reconstruction of these thin structures is more precise.



Fig. 14. Comparison with Wu et al. [2018] using a real Hand object. Our result succeeds in capturing the fingernails and the creases between fingers, while these details are smoothed over by the method of Wu et al.



Data & Code

Note that the DATA and CODE are free for Research and Education Use ONLY. 

Please cite our paper (BibTeX below) if you use any part of our ALGORITHM, CODE, DATA or RESULTS in any publication.

Code: https://github.com/lvjiahui/DRT

Data: Our captured data is attached below. Horse, Rabbit, Tiger, and Pig are in Data_Redmi.7z; Hand, Mouse, Monkey, and Dog are in Data_Pointgray.7z.


Acknowledgement

We sincerely thank Zhengqin Li, Yu-ying Yeh and Manmohan Chandraker for providing us with their reconstructed results and their scanned ground truth of Mouse, Monkey, Dog, and Pig shown in [Li et al. 2020]. We also thank the anonymous reviewers for their valuable comments. This work was supported in part by NSFC (61761146002, 61861130365), GD Science & Technology Program (2020A0505100064, 2018KZDXM058, 2018A030310441, 2015A030312015), GD Talent Plan (2019JC05X328), LHTD (20170003), and the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ).


Bibtex

@article{DRT,
  title   = {Differentiable Refraction-Tracing for Mesh Reconstruction of Transparent Objects},
  author  = {Jiahui Lyu and Bojian Wu and Dani Lischinski and Daniel Cohen-Or and Hui Huang},
  journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH ASIA 2020)},
  volume  = {39},
  number  = {6},
  pages   = {195:1--195:13},
  year    = {2020},
}




Downloads (faster for people in China)

Downloads (faster for people in other places)