Journal
ELECTRONICS
Volume 10, Issue 10
Publisher
MDPI
DOI: 10.3390/electronics10101187
Keywords
super-resolution; deep learning; convolutional neural networks; attention mechanisms
Funding
- National Natural Science Foundation of China [61901221, 52005265]
- National Key Research and Development Program of China [2019YFD1100404]
Abstract
With the advance of deep learning, the performance of single-image super-resolution (SR) has been notably improved by convolutional neural network (CNN)-based methods. However, the increasing depth of CNNs makes them more difficult to train, which hinders SR networks from achieving greater success. To overcome this, a wide range of related mechanisms has recently been introduced into SR networks with the aim of helping them converge more quickly and perform better. As a result, many research papers have incorporated a variety of attention mechanisms into SR baselines from different perspectives. This survey therefore focuses on the topic and reviews these recently published works by grouping them into three major categories: channel attention, spatial attention, and non-local attention. For each group in the taxonomy, the basic concepts are first explained, and we then delve into the detailed insights and contributions. Finally, we conclude the review by highlighting the bottlenecks of current SR attention mechanisms and propose a new perspective that may offer a potential path to a breakthrough.
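To make the first category concrete, the sketch below shows a minimal squeeze-and-excitation style channel-attention module in PyTorch, of the kind popularized for SR by networks such as RCAN. It is an illustrative implementation only, not code from the surveyed paper; the channel count (64) and reduction ratio (16) are assumed values chosen for the example.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative squeeze-and-excitation style channel attention.
    Channel count and reduction ratio are assumptions for this sketch."""
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: learn a per-channel gate in [0, 1]
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(self.pool(x))            # (N, C, 1, 1) channel weights
        return x * w                         # rescale each feature map channel-wise

# Usage: rescale a batch of 64-channel feature maps.
feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)      # torch.Size([2, 64, 32, 32])

Spatial attention differs in that the gate is computed per spatial location rather than per channel, and non-local attention computes pairwise responses between all positions; the same rescaling pattern applies.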