Journal
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 4, Issue 3, Pages 2576-2583
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2019.2904733
Keywords
Semantic Segmentation; Urban Scenes; Deep Neural Network; Thermal Images; Information Fusion
Funding
- Shenzhen Science, Technology, and Innovation Commission (SZSTI) project [JCYJ20160401100022706]
- National Natural Science Foundation of China [U1713211]
- Hong Kong University of Science and Technology Project [IGN16EG12]
- Hong Kong Research Grant Council (RGC) [11210017, 21202816]
Abstract
Semantic segmentation is a fundamental capability for autonomous vehicles. With the advancement of deep learning technologies, many effective semantic segmentation networks have been proposed in recent years. However, most of them are designed using RGB images from visible cameras. The quality of RGB images is prone to degrade under unsatisfactory lighting conditions, such as darkness and the glare of oncoming headlights, which poses critical challenges for networks that use only RGB images. Unlike visible cameras, thermal imaging cameras generate images from thermal radiation, so they are able to see under various lighting conditions. To enable robust and accurate semantic segmentation for autonomous vehicles, we take advantage of thermal images and fuse both the RGB and thermal information in a novel deep neural network. The main innovation of this letter is the architecture of the proposed network. We adopt the encoder-decoder design concept: ResNet is employed for feature extraction, and a new decoder is developed to restore the feature-map resolution. The experimental results show that our network outperforms the state of the art.
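The abstract describes an encoder-decoder design in which RGB and thermal features are extracted separately and fused before a decoder restores the spatial resolution. The following toy NumPy sketch illustrates that data flow only; the pooling encoder, nearest-neighbour decoder, the elementwise-summation fusion rule, and the channel count are all simplifying assumptions, not the paper's actual ResNet-based implementation.

```python
import numpy as np

def downsample2x(x):
    # 2x2 average pooling stands in for a strided ResNet encoder stage.
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def encode(x, stages=4):
    # Collect one feature map per stage; resolution halves each time.
    feats = []
    for _ in range(stages):
        x = downsample2x(x)
        feats.append(x)
    return feats

def decode(x, stages=4):
    # Nearest-neighbour upsampling stands in for the learned decoder
    # that restores the feature-map resolution.
    for _ in range(stages):
        x = x.repeat(2, axis=1).repeat(2, axis=2)
    return x

# Toy inputs: pretend both modalities were already projected to 8 channels.
rgb_feat = np.random.rand(8, 64, 64)
thermal_feat = np.random.rand(8, 64, 64)

# Hypothetical fusion rule: elementwise summation of the deepest features.
fused = encode(rgb_feat)[-1] + encode(thermal_feat)[-1]
out = decode(fused)
print(out.shape)  # (8, 64, 64)
```

After four halvings the 64x64 input becomes 4x4, and four 2x upsamplings restore the original 64x64 resolution, mirroring the encode-fuse-decode pipeline the abstract outlines.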