Article

SFNet-N: An Improved SFNet Algorithm for Semantic Segmentation of Low-Light Autonomous Driving Road Scenes

Journal

IEEE Transactions on Intelligent Transportation Systems

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TITS.2022.3177615

Keywords

Image segmentation; Semantics; Lighting; Meteorology; Roads; Autonomous vehicles; Annotations; Semantic segmentation; deep learning; low visible light; autonomous driving

Funding

  1. National Natural Science Foundation of China [U20A20333, 52072160, 51875255]
  2. Key Research and Development Program of Jiangsu Province [BE2019010-2, BE2020083-3]
  3. Jiangsu Province's Six Talent Peaks [TD-GDZB-022]
  4. Norwegian Financial Mechanism
  5. Narodowego Centrum Nauki [2020/37/K/ST8/02748]
  6. Natural Science Foundation of Jiangsu Province [BK20190853]
  7. China Postdoctoral Science Foundation [2020T130258]

Abstract

In recent years, considerable progress has been made in the semantic segmentation of images captured in favorable environments. However, environmental perception for autonomous driving under adverse weather conditions remains challenging; in particular, low visibility at nighttime greatly affects driving safety. In this paper, we explore image segmentation in low-light scenarios, thereby expanding the application range of autonomous vehicles. Deep-learning-based segmentation algorithms for road scenes depend heavily on large volumes of images with pixel-level annotations. Given the scarcity of large-scale labeled nighttime data, we performed synthetic data collection and data style transfer from daytime images using an autonomous driving simulation platform and a generative adversarial network, respectively. In addition, we propose a novel nighttime segmentation framework (SFNet-N) to effectively recognize objects in dark environments, targeting the boundary blurring caused by low semantic contrast in low-illumination images. Specifically, the framework comprises a light enhancement network that, for the first time, incorporates semantic information, and a segmentation network with strong feature extraction capability. Extensive experiments on the Dark Zurich-test and Nighttime Driving-test datasets show the effectiveness of our method compared with existing state-of-the-art approaches, achieving 56.9% and 57.4% mIoU (mean of category-wise intersection-over-union), respectively. Finally, we performed real-vehicle verification of the proposed models in poorly lit road scenes in Zhenjiang city. The datasets are available at https://github.com/pupu-chenyanyan/semantic-segmentation-on-nightime.
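The abstract reports results as mIoU, the mean of the per-class intersection-over-union between predicted and ground-truth label maps. The sketch below is a minimal illustration of how this metric is commonly computed; it is not taken from the paper's code, and the function and variable names are hypothetical.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Compute mean intersection-over-union (mIoU) over classes.

    pred, gt: integer label maps of identical shape (H, W).
    Pixels labeled `ignore_index` in the ground truth are excluded.
    Classes absent from both prediction and ground truth are skipped.
    """
    valid = gt != ignore_index
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c) & valid
        gt_c = (gt == c) & valid
        intersection = np.logical_and(pred_c, gt_c).sum()
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:  # class not present in this image pair
            continue
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy example: 2x2 label maps with 3 classes
pred = np.array([[0, 1], [1, 2]])
gt   = np.array([[0, 1], [2, 2]])
print(f"mIoU = {mean_iou(pred, gt, num_classes=3):.3f}")  # 0.667
```

Note that benchmark scores such as those on Dark Zurich-test are normally obtained by accumulating intersections and unions over the entire test set before taking the ratio; the per-image version above is kept minimal for illustration.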
