“Mind the Gap”: Tele-Registration for Structure-Driven Image Completion

 ACM Transactions on Graphics 2013 (Proceedings of SIGGRAPH Asia 2013)

Hui Huang1*         Kangxue Yin1         Minglun Gong2        Dani Lischinski3         Daniel Cohen-Or4        Uri Ascher5        Baoquan Chen1,6*

1VisuCA/SIAT         2Memorial University         3The Hebrew University of Jerusalem          4Tel Aviv University         5The University of British Columbia         6Shandong University


Figure 1: Given several pieces extracted from the original image and casually placed together (left), our method applies tele-registration to align them (middle), and then uses structure-driven image completion to fill the gaps (right).


Concocting a plausible composition from several non-overlapping image pieces, whose relative positions are not fixed in advance and without the benefit of priors, can be a daunting task. We propose a method for this task, starting from a set of sloppily pasted image pieces with gaps between them. We first extract salient curves that approach the gaps from non-tangential directions, and use likely correspondences between pairs of such curves to guide a novel tele-registration method that simultaneously aligns all the pieces. A structure-driven image completion technique then fills the gaps, allowing standard inpainting tools to finish the job.
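The method pairs salient curves across a gap and then bridges them with a smooth connection before inpainting. As a rough illustration only (this is not the authors' code, and the paper's actual bridging formulation may differ), a cubic Hermite segment is one simple way to connect two curve endpoints while matching their tangent directions:

```python
import numpy as np

def bridge_gap(p0, t0, p1, t1, n=20):
    """Sketch of a smooth bridge between two salient-curve endpoints.

    p0, p1: 2D endpoints of the two curves on either side of the gap.
    t0, t1: unit tangent directions of the curves at those endpoints.
    Returns n sample points of a cubic Hermite curve interpolating the
    endpoints and tangents. Hypothetical helper for illustration; the
    paper's structure-driven bridging is not necessarily Hermite-based.
    """
    p0, t0, p1, t1 = (np.asarray(v, dtype=float) for v in (p0, t0, p1, t1))
    s = np.linspace(0.0, 1.0, n)[:, None]
    # Standard cubic Hermite basis functions.
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    # Scale tangents by the gap length so the bridge shape is
    # invariant to how far apart the pieces are.
    scale = np.linalg.norm(p1 - p0)
    return h00 * p0 + h10 * scale * t0 + h01 * p1 + h11 * scale * t1
```

At `s = 0` the basis reduces to `p0` with direction `t0`, and at `s = 1` to `p1` with direction `t1`, so the bridge joins both curves with matched positions and tangents.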


Figure 2: Mending a broken plate. Attempting to complete the image in (b) using content-aware fill (e) or image melding (f) before tele-registration fails to produce a satisfactory result. Image melding after tele-registration produces a better result, but the plate outline is still not smooth (g). Our method applies content-aware fill after structure-driven gap bridging to obtain a more plausible result (h).

Figure 3: Reconstructing an occluded part of a statue. The lassoed part in (b) is sloppily pasted inside the hole. Image melding fails to produce a plausible result (c). Tele-registration (d), followed by gap bridging (e) yields a more plausible reconstruction (f).

Figure 4: Removing a butterfly from the scene. Labeling the area covered by the butterfly as unknown (black) and casually placing the lassoed parts inside (b) provides the input. Our algorithm then automatically aligns the parts (c) and completes the gaps (d). In comparison, directly inpainting the unknown area in (b) with a content-aware fill tool, without the lassoed parts, yields artifacts (e).

Figure 5: Aligning the large stones of the Temple of Amon, guided by the ambient vector field (c), and then completing their gaps yields a better result (d) than direct content-aware inpainting (b). Note that the pink curves in (c) are salient curves detected along the borders that were left unpaired; they are discarded when computing the ambient vector field.

Figure 6: Putting together an oil painting from its torn pieces. Areas worth closer inspection are highlighted using boxes.

Figure 7: Fixing a poorly stitched panorama (a) found through a Google image search for the Namib desert. Separating the two photos and feeding them to our method results in better alignment (b) and a more seamless panorama (c). Note that the right piece in (b) is scaled to allow for smoother alignment of salient curves. In comparison, Poleg and Peleg [2012] first extrapolate the two photos to create overlap, which yields the alignment in (d). Removing the extrapolated areas (e) reveals that the registration result does not fully respect the salient curves in the scene. Applying structure-driven completion over (e) does not fully overcome the problem.

Figure 8: Generating an unusual yet natural-looking mountain panorama from images of three different mountains. Note that in the tele-registration result (b), the two left pieces overlap. Compared to the direct inpainting result (c), the additional Poisson blending operation provides a much smoother transition among image pieces with large color differences (d).

Figure 9: Swapping the heads between two animal statues (a). Using the same input (b), the results obtained under both similarity transformation (c) and rigid transformation (d) are shown. In both cases, the contours of the resulting foreground objects are smooth.

Data & Code

Note that the DATA and CODE are free for Research and Education Use ONLY. 

Please cite our paper (BibTeX entry below) if you use any part of our ALGORITHM, CODE, DATA or RESULTS in any publication.



The authors would like to thank all the reviewers for their valuable comments. This work was supported in part by grants from NSFC (61379090, 61103166, 61232011), Guangdong Science and Technology Program (2011B050200007), Shenzhen Innovation Program (KQCX20120807104901791, JCYJ20130401170306810, CXB201104220029A, ZD201111080115A), NSERC (293127 and 84306), the Israel Science Foundation and the US-Israel Binational Science Foundation.


@article{Huang2013MindTheGap,
  title   = {``Mind the Gap'': Tele-Registration for Structure-Driven Image Completion},
  author  = {H. Huang and K. Yin and M. Gong and D. Lischinski and D. Cohen-Or and U. Ascher and B. Chen},
  journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2013)},
  volume  = {32},
  number  = {6},
  pages   = {174:1--174:10},
  year    = {2013},
}

Downloads (faster for people in China)

Downloads (faster for people in other places)