Article

Model-Free Distortion Rectification Framework Bridged by Distortion Distribution Map

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 29, Issue -, Pages 3707-3718

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2020.2964523

Keywords

Distortion rectification; model-free framework; dual-stream feature learning; deep learning

Funding

  1. National Natural Science Foundation of China [61772066, 61972028]
  2. Open Project Program of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University [VRLAB2019B05]

Abstract

Recently, learning-based distortion rectification schemes have shown high efficiency. However, most of these methods focus on a specific camera model with fixed parameters and therefore cannot be extended to other models. To avoid this limitation, we propose a model-free distortion rectification framework for the single-shot case, bridged by the distortion distribution map (DDM). Our framework is based on the observation that the pixel-wise distortion information in a distorted image is explicitly regular, even though different camera models have different types and numbers of distortion parameters. Motivated by this observation, instead of estimating heterogeneous distortion parameters, we construct a distortion distribution map that intuitively indicates the global distortion features of a distorted image. In addition, we develop a dual-stream feature learning module that combines the advantages of traditional methods, which leverage local handcrafted features, and learning-based methods, which focus on global semantic feature perception. Because the handcrafted features are sparse, we discretize them into a 2D point map and learn their structure with a PointNet-inspired architecture. Finally, a multimodal attention fusion module is designed to attentively fuse the local structural and global semantic features, providing hybrid features for more plausible scene recovery. The experimental results demonstrate the strong generalization ability and superior performance of our method in both quantitative and qualitative evaluations, compared with state-of-the-art methods.
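The abstract does not give the exact definition of the distortion distribution map, only that it encodes pixel-wise distortion information independently of the camera model. The following minimal sketch illustrates one plausible interpretation: for an assumed polynomial radial distortion model, each pixel's DDM value is the magnitude of its radial displacement. The function name, the coefficients `k`, and the displacement-magnitude convention are all assumptions for illustration, not the paper's specification.

```python
import numpy as np

def distortion_distribution_map(h, w, k=(-0.3, 0.1), cx=None, cy=None):
    """Sketch of a per-pixel distortion map under an assumed radial model.

    Assumption: distorted radius r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4);
    the map stores |r_d - r_u| per pixel, so its value field is regular
    regardless of how many parameters the underlying camera model has.
    """
    cx = (w - 1) / 2 if cx is None else cx
    cy = (h - 1) / 2 if cy is None else cy
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Normalized radius of each pixel from the distortion center.
    r_u = np.hypot((xs - cx) / w, (ys - cy) / h)
    r_d = r_u * (1 + k[0] * r_u**2 + k[1] * r_u**4)
    return np.abs(r_d - r_u)  # shape (h, w): pixel-wise distortion magnitude

ddm = distortion_distribution_map(256, 256)
print(ddm.shape, float(ddm.max()))
```

Such a map could serve as a dense regression target for the rectification network, which is consistent with the abstract's claim that the DDM bridges heterogeneous camera models.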

