Article

Monocular Depth Estimation Algorithm Integrating Parallel Transformer and Multi-Scale Features

Journal

ELECTRONICS
Volume 12, Issue 22

Publisher

MDPI
DOI: 10.3390/electronics12224669

Keywords

monocular depth estimation; Transformer; multi-scale features; self-supervised learning

Abstract
In environmental perception, traditional CNNs often fail to capture global context effectively because of their network structure, which leads to blurred edges of objects and scenes. To address this problem, a self-supervised monocular depth estimation algorithm incorporating a Transformer is proposed. First, an encoder-decoder architecture is adopted. During encoding, the input image is partitioned into patches of different sizes that yield feature maps of the same resolution. A multi-path Transformer network and a single-path CNN network extract global and local features, respectively, and these features are fused through interactive modules, improving the network's ability to acquire global information. Second, a multi-scale fusion structure for hierarchical features is designed to improve the utilization of features at different scales. The model was trained on the KITTI dataset. The results show that the proposed algorithm outperforms mainstream algorithms: compared with the latest CNN-Transformer algorithm, it reduces the absolute relative error by 3.7% and the squared relative error by 3.9%.
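The reported gains are measured with the absolute relative error (AbsRel) and squared relative error (SqRel), the standard evaluation metrics for depth estimation on KITTI. The sketch below is a minimal pure-Python illustration of how these two metrics are typically computed from predicted and ground-truth depth values; the sample numbers are hypothetical and not taken from the paper.

```python
def abs_rel(pred, gt):
    """Absolute relative error: mean(|pred - gt| / gt) over valid depth pixels."""
    return sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(gt)

def sq_rel(pred, gt):
    """Squared relative error: mean((pred - gt)^2 / gt) over valid depth pixels."""
    return sum((p - g) ** 2 / g for p, g in zip(pred, gt)) / len(gt)

# Hypothetical depths in meters (flattened valid pixels).
gt = [10.0, 20.0, 40.0]
pred = [11.0, 18.0, 44.0]

print(abs_rel(pred, gt))  # 0.1  (each pixel is off by 10% of its true depth)
print(sq_rel(pred, gt))   # ~0.2333
```

Both metrics normalize by ground-truth depth, so errors on nearby objects are penalized more heavily than equal absolute errors on distant ones, which matches how depth quality is perceived in driving scenes.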
