Article

TSingNet: Scale-aware and context-rich feature learning for traffic sign detection and recognition in the wild

Journal

NEUROCOMPUTING
Volume 447, Pages 10-22

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.03.049

Keywords

Traffic sign detection and recognition; Adaptive receptive field; Scale variation and occlusion; Scale-aware and context-rich feature learning; Attention-driven bilateral feature pyramid network

Funding

  1. Shenzhen Fundamental Research grant [JCYJ20180508162406177, ZLZBCXLJZI20160805020016]
  2. National Natural Science Foundation of China [62076227, U1613216, 61702208]
  3. Wuhan Applied Fundamental Frontier Project Grant [2020010601012166]
  4. Shenzhen Institute of Artificial Intelligence and Robotics for Society [AC01202005024]

Abstract
Traffic sign detection and recognition in the wild is a challenging task. Existing techniques are often incapable of detecting small or occluded traffic signs because of scale variation and context loss, which cause semantic gaps between multiple scales. We propose a new traffic sign detection network (TSingNet), which learns scale-aware and context-rich features to effectively detect and recognize small and occluded traffic signs in the wild. Specifically, TSingNet first constructs an attention-driven bilateral feature pyramid network, which draws on both bottom-up and top-down subnets to dually circulate low-, mid-, and high-level foreground semantics in scale self-attention learning. This learns scale-aware foreground features and thus narrows the semantic gaps between multiple scales. An adaptive receptive field fusion block with variable dilation rates is then introduced to exploit context-rich representations and suppress the influence of occlusion at each scale. TSingNet is end-to-end trainable by joint minimization of the scale-aware loss and multi-branch fusion losses, which adds few parameters but significantly improves detection performance. In extensive experiments on three challenging traffic sign datasets (TT100K, STSD and DFG), TSingNet outperformed state-of-the-art methods for traffic sign detection and recognition in the wild. (c) 2021 Published by Elsevier B.V.
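The abstract describes the adaptive receptive field fusion block only at a high level. A minimal PyTorch sketch of the general idea, parallel dilated convolutions whose outputs are fused by a learned gate so the block can favor larger receptive fields for occluded signs, might look as follows. The class name, dilation rates, and gating design are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveReceptiveFieldFusion(nn.Module):
    """Illustrative sketch: parallel 3x3 convolutions with variable
    dilation rates, fused by learned per-branch weights (assumed design)."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        # One branch per dilation rate; padding=dilation keeps spatial size.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # Gating branch: global context -> one weight per dilation branch.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), kernel_size=1),
        )

    def forward(self, x):
        # Stack branch outputs: (B, K, C, H, W) for K dilation rates.
        feats = torch.stack([b(x) for b in self.branches], dim=1)
        # Softmax over branches yields adaptive fusion weights: (B, K, 1, 1).
        weights = F.softmax(self.gate(x), dim=1).unsqueeze(2)
        # Weighted sum over branches, plus a residual connection.
        fused = (feats * weights).sum(dim=1)
        return x + fused
```

Because the fusion weights are computed from the input itself, the effective receptive field adapts per image, which is one plausible way to realize the "variable dilation rates" the abstract mentions.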

