Article

Edge-guided Composition Network for Image Stitching

Journal

PATTERN RECOGNITION
Volume 118

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2021.108019

Keywords

Image stitching; Composition; Deep learning; Structure consistency; Edge guidance

Funding

  1. Key Project of the National Natural Science Foundation of China [61731009, 61961160734]
  2. National Natural Science Foundation of China [61871185]
  3. Science Foundation of Shanghai [20ZR1416200]

This paper introduces a new end-to-end deep learning framework named EGCNet for the composition stage in image stitching, which utilizes perceptual edges to guide the network in generating seamless stitched images. Extensive experiments show that EGCNet produces excellent results in handling parallax and object motions, outperforming traditional methods.
Panorama creation remains challenging in consumer-level photography because of the varying conditions under which images are captured. A long-standing problem is the presence of artifacts caused by structure-inconsistent image transitions. Since it is difficult to achieve perfect alignment in unconstrained shooting environments, especially with parallax and object movements, image composition becomes a crucial step in producing artifact-free stitching results. Current energy-based seam-cutting approaches to image composition are limited by hand-crafted features, which are not discriminative and adaptive enough to robustly create structure-consistent image transitions. In this paper, we present the first end-to-end deep learning framework, named Edge Guided Composition Network (EGCNet), for the composition stage in image stitching. We cast the whole composition stage as an image blending problem and aim to regress the blending weights that seamlessly produce the stitched image. To better preserve structure consistency, we exploit perceptual edges to guide the network with an additional geometric prior. Specifically, we introduce a perceptual edge branch to integrate edge features into the model and propose two edge-aware losses for edge guidance. Meanwhile, we gather a general-purpose dataset for image stitching training and evaluation (namely, RISD). Extensive experiments demonstrate that our EGCNet produces plausible results with shorter running time, and outperforms traditional methods especially under the circumstances of parallax and object motions. (c) 2021 Elsevier Ltd. All rights reserved.
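The blending formulation described in the abstract — composing two aligned images with a per-pixel weight map, S = w * A + (1 - w) * B, where EGCNet would regress w — can be sketched as follows. This is a minimal NumPy illustration, not the paper's code; the Sobel-style edge map and the single edge-consistency loss here are hypothetical stand-ins for the paper's perceptual edge branch and its two (unspecified) edge-aware losses.

```python
import numpy as np

def blend(img_a, img_b, w):
    """Compose two aligned images with a per-pixel weight map w in [0, 1].

    Implements S = w * A + (1 - w) * B, the blending formulation from the
    abstract. Here w is supplied directly; in EGCNet it is regressed by
    the network.
    """
    if w.ndim == img_a.ndim - 1:
        # Broadcast a 2-D weight map over the color channels of 3-D images.
        w = w[..., None]
    return w * img_a + (1.0 - w) * img_b

def edge_map(img):
    """Gradient-magnitude edge map (an illustrative stand-in for the
    paper's perceptual edges)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def edge_consistency_loss(stitched, reference):
    """One plausible form of 'edge-aware' loss: mean L1 distance between
    edge maps of the stitched result and a reference. The paper's actual
    two losses are not specified in this listing."""
    return float(np.abs(edge_map(stitched) - edge_map(reference)).mean())
```

For grayscale inputs, `blend(a, b, np.full(a.shape, 0.5))` gives an even mix, while a hard 0/1 weight map reproduces a seam cut; regressing soft weights is what lets the network hide structure-inconsistent transitions.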

