Journal
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Volume 32, Issue 12, Pages 8297-8311
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2022.3190553
Keywords
Depth map; dictionary learning; multidirectional total variation model; sparse representation; super-resolution
Funding
- National Natural Science Foundation of China [61906009]
- Scientific Research Common Program of Beijing Municipal Commission of Education [KM202010005018]
- International Research Cooperation Seed Fund of the Beijing University of Technology [2021B06]
This paper proposes a depth map super-resolution method with joint local gradient and nonlocal structural regularizations. By modeling the local gradient patterns of the depth map and imposing nonlocal structural constraints, the method effectively restores image details and suppresses noise.
Depth maps have been widely used in many real-world applications, such as human-computer interaction and virtual reality. However, due to the limitations of current depth-sensing technology, captured depth maps usually suffer from low resolution and insufficient quality. In this paper, we propose a depth map super-resolution method via joint local gradient and nonlocal structural regularizations. Depth maps consist mainly of smooth areas separated by textures that exhibit distinct geometric direction characteristics. Motivated by this, we classify depth map patches according to their geometric directions and learn a compact online dictionary for each class. We further introduce two regularization terms into the sparse representation framework. First, a multi-directional total variation model is proposed to characterize local patterns in the gradient domain. Second, a nonlocal autoregressive model is introduced to impose a nonlocal constraint on the local structures, which effectively restores image details and suppresses noise. Quantitative and qualitative evaluations against state-of-the-art methods demonstrate that the proposed method achieves superior performance across various magnification factors and datasets.
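The abstract does not give the exact form of the multi-directional total variation term, but the general idea of measuring a depth map's variation along several geometric directions can be sketched as follows. This is a minimal, hypothetical illustration (the function name, the default direction set, and the anisotropic L1 form are assumptions, not the authors' formulation): it sums absolute finite differences of the depth map along horizontal, vertical, and diagonal offsets.

```python
import numpy as np

def multidirectional_tv(depth, directions=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Illustrative multi-directional total variation: the sum of absolute
    finite differences of `depth` taken along several (row, col) offsets.
    The default directions cover horizontal, vertical, and both diagonals.
    This is a sketch of the general concept, not the paper's exact model."""
    tv = 0.0
    for dr, dc in directions:
        # Shift the image by (dr, dc) and difference it with the original.
        shifted = np.roll(np.roll(depth, dr, axis=0), dc, axis=1)
        diff = depth - shifted
        # np.roll wraps around, so crop the wrapped border before summing.
        r0, r1 = (dr, None) if dr >= 0 else (None, dr)
        c0, c1 = (dc, None) if dc >= 0 else (None, dc)
        tv += np.abs(diff[r0:r1, c0:c1]).sum()
    return tv
```

A perfectly smooth (constant) depth map yields zero variation, while a sharp depth discontinuity contributes along every direction that crosses it, which is why such a penalty favors piecewise-smooth reconstructions.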