Proceedings Paper

Stripformer: Strip Transformer for Fast Image Deblurring

Journal

COMPUTER VISION - ECCV 2022, PT XIX
Volume 13679, Pages 146-162

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-19800-7_9

Keywords

-

Funding

  1. Ministry of Science and Technology (MOST) [109-2221-E-009-113-MY3, 111-2628-E-A49-025-MY3, 111-2634-F-007-002, 110-2634-F-002-050, 110-2634-F-006-022, 110-2622-E-004-001, 111-2221-E-004-010]
  2. Qualcomm through a Taiwan University Research Collaboration Project
  3. MediaTek


Abstract

This study introduces Stripformer, a transformer-based architecture for removing motion blur from images taken in dynamic scenes. It performs favorably against state-of-the-art models while requiring less memory and computation.

Images taken in dynamic scenes often contain unwanted motion blur, which significantly degrades visual quality. Such blur causes short- and long-range region-specific smoothing artifacts that are frequently directional and non-uniform, making them difficult to remove. Inspired by the recent success of transformers in computer vision and image processing tasks, we develop Stripformer, a transformer-based architecture that constructs intra- and inter-strip tokens to reweight image features in the horizontal and vertical directions, catching blurred patterns with different orientations. It stacks interlaced intra-strip and inter-strip attention layers to reveal blur magnitudes. In addition to detecting region-specific blurred patterns of various orientations and magnitudes, Stripformer is a token-efficient and parameter-efficient transformer model, demanding much less memory and computation than the vanilla transformer while performing better without relying on tremendous amounts of training data. Experimental results show that Stripformer performs favorably against state-of-the-art models in dynamic scene deblurring.
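The intra- and inter-strip attention idea can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the real Stripformer uses learned query/key/value projections, multi-head attention, and both horizontal and vertical strip directions, while here the projections are identity, only the horizontal direction is shown, and the mean-pooling of strips into tokens is a simplification chosen for brevity. The function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_strip_attention_h(feat):
    """Self-attention within each horizontal strip of a feature map.

    feat: (H, W, C) feature map. Each of the H rows is one strip of W
    tokens, so the attention cost is O(H * W^2) rather than the
    O((H*W)^2) of full self-attention over all pixels.
    """
    H, W, C = feat.shape
    out = np.empty_like(feat)
    scale = 1.0 / np.sqrt(C)
    for i in range(H):                       # one strip per row
        q = k = v = feat[i]                  # (W, C); identity projections for brevity
        attn = softmax(q @ k.T * scale)      # (W, W) attention within the strip
        out[i] = attn @ v
    return out

def inter_strip_attention_h(feat):
    """Attention across horizontal strips: pool each strip to a single
    token, attend over the H strip tokens, and broadcast back to (H, W, C)."""
    H, W, C = feat.shape
    tokens = feat.mean(axis=1)               # (H, C): one token per strip
    scale = 1.0 / np.sqrt(C)
    attn = softmax(tokens @ tokens.T * scale)    # (H, H) across strips
    mixed = attn @ tokens                    # (H, C) reweighted strip tokens
    return np.repeat(mixed[:, None, :], W, axis=1)
```

Stacking these two operations in alternation (and repeating them for vertical strips) mirrors the interlaced intra-/inter-strip layers described above: intra-strip attention captures blur patterns along a direction within a strip, while inter-strip attention relates strips to each other to estimate blur magnitude, all at a token count far below that of a vanilla per-pixel transformer.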

Authors


Reviews

Primary Rating

3.8
Not enough ratings
