Proceedings Paper

EAI-Stereo: Error Aware Iterative Network for Stereo Matching

Source

COMPUTER VISION - ACCV 2022, PT I
Volume 13841, Issue -, Pages 3-19

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-26319-4_1

Keywords

-

Current state-of-the-art stereo algorithms often fail to fully exploit high-frequency information, resulting in blurry disparity maps. In this paper, a refinement module is proposed to incorporate high-frequency information, enabling the network to generate detailed disparity maps with sharp edges. Additionally, an Iterative Multiscale Wide-LSTM Network is introduced to enhance data transfer efficiency across iterations. The proposed method achieves outstanding performance on various benchmarks and outperforms existing methods in cross-domain scenarios.
Current state-of-the-art stereo algorithms use a 2D CNN to extract features and then form a cost volume, which is fed into a subsequent cost aggregation and regularization module composed of 2D or 3D CNNs. However, a large amount of high-frequency information such as texture, color variation, and sharp edges is not well exploited in this process, which leads to relatively blurry disparity maps that lack fine detail. In this paper, we aim to make full use of the high-frequency information in the original image. To this end, we propose an error-aware refinement module that incorporates high-frequency information from the original left image and allows the network to learn error-correction capabilities, producing subtle details and sharp edges. To improve data transfer efficiency between iterations, we propose the Iterative Multiscale Wide-LSTM Network, which carries more semantic information across iterations. We demonstrate the efficiency and effectiveness of our method on KITTI 2015, Middlebury, and ETH3D. At the time of writing, EAI-Stereo ranks 1st on the Middlebury leaderboard and 1st on the ETH3D Stereo benchmark for the 50% quantile metric, and 2nd for the 0.5px error rate, among all published methods. Our model performs well in cross-domain scenarios and outperforms current methods specifically designed for generalization. Code is available at https://github.com/David-Zhao-1997/EAI-Stereo.
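The abstract names two components: an error-aware refinement step driven by the current disparity estimate's photometric error against the left image, and a recurrent (Wide-LSTM) update that carries state across iterations. Below is a minimal sketch of that loop, not the authors' implementation (which is at the GitHub link above): warp_right_to_left, ErrorAwareRefiner, the channel sizes, and the ConvGRU-style gate standing in for the paper's Wide-LSTM cell are all illustrative assumptions.

```python
# Minimal sketch of error-aware iterative disparity refinement (illustrative;
# not the EAI-Stereo implementation). Warp the right image with the current
# disparity, measure the photometric error against the left image, and let a
# small recurrent cell turn that error plus the left image into a disparity
# residual.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_right_to_left(right, disparity):
    """Warp the right image into the left view using horizontal disparity."""
    b, _, h, w = right.shape
    # Sampling grid: a pixel at x in the left view maps to x - d in the right.
    xs = torch.arange(w, device=right.device).view(1, 1, w).expand(b, h, w)
    ys = torch.arange(h, device=right.device).view(1, h, 1).expand(b, h, w)
    x_src = xs.float() - disparity.squeeze(1)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        [2.0 * x_src / (w - 1) - 1.0, 2.0 * ys.float() / (h - 1) - 1.0], dim=-1
    )
    return F.grid_sample(right, grid, align_corners=True)


class ErrorAwareRefiner(nn.Module):
    """One iteration: photometric error -> recurrent state -> disparity residual."""

    def __init__(self, hidden=32):
        super().__init__()
        # Inputs: 3-channel error, 3-channel left image, 1-channel disparity.
        self.encode = nn.Conv2d(3 + 3 + 1, hidden, 3, padding=1)
        # A single ConvGRU-style gate stands in for the paper's Wide-LSTM cell.
        self.update = nn.Conv2d(2 * hidden, 2 * hidden, 3, padding=1)
        self.head = nn.Conv2d(hidden, 1, 3, padding=1)

    def forward(self, left, right, disparity, state):
        error = left - warp_right_to_left(right, disparity)  # photometric error
        feat = torch.relu(self.encode(torch.cat([error, left, disparity], dim=1)))
        gate, cand = self.update(torch.cat([state, feat], dim=1)).chunk(2, dim=1)
        g = torch.sigmoid(gate)
        state = g * state + (1 - g) * torch.tanh(cand)  # gated state update
        return disparity + self.head(state), state  # residual disparity update


# Usage: iterate a few refinement steps from a coarse initial disparity.
left = torch.rand(1, 3, 64, 128)
right = torch.rand(1, 3, 64, 128)
disp = torch.zeros(1, 1, 64, 128)
state = torch.zeros(1, 32, 64, 128)
refiner = ErrorAwareRefiner()
for _ in range(4):
    disp, state = refiner(left, right, disp, state)
print(disp.shape)  # torch.Size([1, 1, 64, 128])
```

In the full model the initial disparity would come from a feature-matching/cost-volume stage and the recurrent update would operate at multiple scales; this sketch only illustrates the error-feedback loop that the abstract describes.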
