Patch-based Progressive 3D Point Set Upsampling

Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR 2019)


Wang Yifan1          Shihao Wu1         Hui Huang2*          Daniel Cohen-Or2,3          Olga Sorkine-Hornung1

1ETH Zurich           2Shenzhen University        3Tel Aviv University



Figure 1: We develop a deep neural network for 3D point set upsampling. Intuitively, our network learns different levels of detail in multiple steps, where each step focuses on a local patch from the output of the previous step. By progressively training our patch-based network end-to-end, we successfully upsample a sparse set of input points, step by step, to a dense point set with rich geometric details. Here, points are rendered as circular disks, color-coded by point normals.


Abstract

We present a detail-driven deep neural network for point set upsampling. A high-resolution point set is essential for point-based rendering and surface reconstruction. Inspired by the recent success of neural image super-resolution techniques, we progressively train a cascade of patch-based upsampling networks on different levels of detail end-to-end. We propose a series of architectural design contributions that lead to a substantial performance boost. The effect of each technical contribution is demonstrated in an ablation study. Qualitative and quantitative experiments show that our method significantly outperforms the state-of-the-art learning-based [58, 59] and optimization-based [23] approaches, both in terms of handling low-resolution inputs and revealing high-fidelity details. The full training and test data, the trained network, and the code are available at https://github.com/yifita/3pu.



Figure 2: Overview of our multi-step patch-based point set upsampling network with 3 levels of detail. Given a sparse point set as input, our network predicts a high-resolution set of points that agree with the ground truth. Instead of training a single 8x-upsampling network, we break it into three 2x steps. In each training step, our network randomly selects a local patch as input, upsamples the patch under the guidance of the ground truth, and passes the prediction to the next step. During testing, we upsample multiple patches in each step independently, then merge the upsampled results before passing them to the next step.
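The cascade in Figure 2 can be sketched as follows. This is a minimal NumPy mock of the data flow only: `upsample_2x` is a hypothetical stand-in for one learned 2x network unit (the real unit is a neural network; here we simply duplicate each point with a small offset to illustrate how three 2x steps compose into an 8x upsampling, e.g. 625 points to 5000 as in Figures 11 and 12):

```python
import numpy as np

def upsample_2x(points, rng):
    """Stand-in for one learned 2x upsampling unit (illustration only).

    Duplicates every point with a small random offset, so the output
    has twice as many points as the input.
    """
    jitter = rng.normal(scale=1e-2, size=points.shape)
    return np.concatenate([points, points + jitter], axis=0)

def progressive_upsample(points, steps=3, seed=0):
    """Apply `steps` successive 2x units: 3 steps give 8x in total."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        points = upsample_2x(points, rng)
    return points

sparse = np.random.default_rng(1).random((625, 3))  # 625 input points
dense = progressive_upsample(sparse, steps=3)
print(dense.shape)  # (5000, 3)
```

In the actual network, each step additionally works patch-by-patch and is supervised by the ground truth at its own level of detail.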




Figure 4: Illustration of one upsampling network unit.
Figure 5: Illustration of the feature extraction unit with dense connections.
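The dense connections in the feature extraction unit (Figure 5) follow the DenseNet idea: each layer receives the concatenation of all previous layers' outputs. The NumPy sketch below illustrates only this connectivity pattern; the layer widths and the plain matrix-multiply "layer" are assumptions for illustration (the paper's unit operates on point features with learned point-wise convolutions):

```python
import numpy as np

def dense_block(features, layer_widths, seed=0):
    """Sketch of dense connections: each layer's input is the
    concatenation of the block input and all earlier layer outputs.
    `features` has shape (num_points, num_channels)."""
    rng = np.random.default_rng(seed)
    for width in layer_widths:
        w = rng.normal(scale=0.1, size=(features.shape[1], width))
        out = np.maximum(features @ w, 0.0)        # point-wise layer + ReLU
        features = np.concatenate([features, out], axis=1)  # dense skip
    return features

feats = np.random.default_rng(2).random((100, 3))   # 100 points, 3 channels
print(dense_block(feats, [24, 24, 24]).shape)       # (100, 75)
```

Because every intermediate output is kept and concatenated, the final feature width is the input width plus the sum of all layer widths (3 + 24 + 24 + 24 = 75 here), which lets later layers reuse low-level features directly.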



Figure 11: Upsampling results from 625 input points (left) and reconstructed mesh (right).



Figure 12: Upsampling results from 5000 input points (left) and reconstructed mesh (right).


Data & Code
To reference our algorithm, code, data, or results in any publication, please include the BibTeX entry below.
Link: https://github.com/yifita/3PU


Acknowledgement

We thank the anonymous reviewers for their constructive comments. This work was supported in part by SNF grant 200021 162958, ISF grant 2366/16, NSFC (61761146002), LHTD (20170003), and the National Engineering Laboratory for Big Data System Computing Technology.


Bibtex
@article{MPU19,
  title = {Patch-based Progressive 3D Point Set Upsampling},
  author = {Wang Yifan and Shihao Wu and Hui Huang and Daniel Cohen-Or and Olga Sorkine-Hornung},
  journal = {Conference on Computer Vision and Pattern Recognition (Proceedings of CVPR 2019)},
  year = {2019},
}

Downloads (faster for people in China)

Downloads (faster for people in other places)