Journal
IEEE ACCESS
Volume 8, Pages 113371-113382
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2020.3003375
Keywords
Feature extraction; Three-dimensional displays; Convolution; Data mining; Image edge detection; Convolutional neural networks; Neural network; stereo matching; multi-scale attention module; feature refinement module; 3D attention aggregation module
Funding
- Science and Technology Program of Shenzhen [JCYJ20180503182133411]
- Project of Science and Technology Department of Guizhou Province [QiankeheZhicheng [2019]239, QiankeheJichu [2019]1250]
- Guizhou Provincial Science and Technology Cooperation Project [QiankeheLHZi [2017]7072]
- Guizhou Provincial Department of Education Youth Science and Technology Talents Growth Project [QiaojiaoheKYZi [2017]251]
In recent years, convolutional neural network (CNN) algorithms have driven substantial progress in stereo matching, but mismatches still occur in textureless, occluded and reflective regions. In feature extraction and cost aggregation, CNNs can greatly improve the accuracy of stereo matching by exploiting global context information and high-quality feature representations. In this paper, we design a novel end-to-end stereo matching algorithm named Multi-Attention Network (MAN). To capture global context information in detail at the pixel level, we propose a Multi-Scale Attention Module (MSAM), which combines a spatial pyramid module with an attention mechanism during image feature extraction. In addition, we introduce a feature refinement module (FRM) and a 3D attention aggregation module (3D AAM) during cost aggregation so that the network can extract informative features with high representational ability and high-quality channel attention vectors. Finally, we obtain the final disparity through bilinear interpolation and disparity regression. We evaluate our method on the Scene Flow, KITTI 2012 and KITTI 2015 stereo datasets. The experimental results show that our method achieves state-of-the-art performance and that every component of our network is effective.
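The disparity regression step mentioned in the abstract is, in many end-to-end stereo networks, realized as a soft argmin over the aggregated cost volume: a softmax over the (negated) matching costs along the disparity axis, followed by the expectation of the disparity indices. The paper does not give its exact formulation here, so the following is a minimal NumPy sketch of that standard operation, assuming a cost volume of shape (D, H, W) where lower cost means a better match:

```python
import numpy as np

def disparity_regression(cost_volume):
    """Soft-argmin disparity regression over a cost volume of shape (D, H, W).

    Lower cost = better match, so we softmax over the negated costs
    and take the expectation of the disparity indices per pixel.
    """
    D = cost_volume.shape[0]
    # Softmax along the disparity axis, numerically stabilized.
    logits = -cost_volume
    logits = logits - logits.max(axis=0, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=0, keepdims=True)
    # Expected disparity: sum_d d * P(d) for each pixel.
    disparities = np.arange(D, dtype=cost_volume.dtype).reshape(D, 1, 1)
    return (probs * disparities).sum(axis=0)  # shape (H, W)
```

Because the output is an expectation rather than a hard argmin, it is differentiable and yields sub-pixel disparity estimates, which is why this form is common in networks trained end to end.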