Article

FRNet: Factorized and Regular Blocks Network for Semantic Segmentation in Road Scene

Journal

IEEE Transactions on Intelligent Transportation Systems

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TITS.2020.3037727

Keywords

Convolution; Semantics; Training; Logic gates; Image segmentation; Real-time systems; Fuses; Semantic segmentation; convolutional neural network (CNN); real-time; scene perception

Funding

  1. National Key Research and Development Program of China [2019YFB1311001, 2018YFB1307403]
  2. National Natural Science Foundation of China [61876099]
  3. Scientific and Technological Development Project of Shandong Province [2019GSF111002]
  4. Shenzhen Science and Technology Research and Development Funds [JCYJ20180305164401921]
  5. Foundation of Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education [2018ICIP03]
  6. Foundation of State Key Laboratory of Integrated Services Networks [ISN20-06]

Abstract

This paper proposes FRNet, a real-time semantic segmentation network that balances accuracy and inference speed by combining Factorized and Regular (FR) blocks in an asymmetric encoder-decoder architecture. Experimental results on multiple road-scene datasets demonstrate that the network outperforms other state-of-the-art networks.
Semantic segmentation methods for road-scene systems are in high demand. Most existing methods pursue high accuracy at the cost of low inference speed, while others emphasize speed and significantly sacrifice accuracy. To strike a trade-off between accuracy and inference speed, we propose a real-time semantic segmentation network called the Factorized and Regular Network (FRNet), which employs an asymmetric encoder-decoder architecture built from Factorized and Regular (FR) blocks. Our method achieves 70.4% mIoU on the Cityscapes test set with 1 million parameters, running at 127 frames per second (FPS) on a single Titan Xp at a resolution of 512 x 1024. We evaluate FRNet on the Cityscapes, CamVid, KITTI, and GaTech datasets and show that it stands out from other state-of-the-art networks.
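The abstract names Factorized and Regular (FR) blocks as the core building unit but does not spell out their internal structure. As a rough, assumption-based sketch only (not the authors' implementation), the PyTorch snippet below shows one plausible way to pair a factorized (3x1 followed by 1x3) convolution branch with a regular 3x3 convolution branch and fuse them through a residual connection; the class name FRBlock and all hyperparameters are hypothetical.

```python
# Hypothetical sketch of a "Factorized and Regular" style block (not the
# published FRNet design): a factorized 3x1 + 1x3 branch and a regular 3x3
# branch, fused with the identity shortcut.
import torch
import torch.nn as nn


class FRBlock(nn.Module):
    def __init__(self, channels, dilation=1):
        super().__init__()
        # Factorized branch: a 3x3 convolution decomposed into 3x1 and 1x3,
        # reducing its parameter count from 9*C^2 to 6*C^2.
        self.factorized = nn.Sequential(
            nn.Conv2d(channels, channels, (3, 1),
                      padding=(dilation, 0), dilation=(dilation, 1), bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3),
                      padding=(0, dilation), dilation=(1, dilation), bias=False),
            nn.BatchNorm2d(channels),
        )
        # Regular branch: a standard 3x3 convolution keeps a full 2-D kernel.
        self.regular = nn.Sequential(
            nn.Conv2d(channels, channels, 3,
                      padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Fuse both branches with the residual shortcut.
        return self.act(x + self.factorized(x) + self.regular(x))


if __name__ == "__main__":
    block = FRBlock(channels=64, dilation=2)
    y = block(torch.randn(1, 64, 128, 256))
    print(y.shape)  # torch.Size([1, 64, 128, 256])
```

Factorizing a 3x3 convolution in this way is the usual motivation for lightweight segmentation backbones, and a stack of such blocks is one way a network of this kind could stay within a parameter budget on the order of 1 million.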
