Article

Depth Embedded Recurrent Predictive Parsing Network for Video Scenes

Journal

IEEE Transactions on Intelligent Transportation Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TITS.2019.2909053

Keywords

Recurrent predictive parsing network (RPPNet); spatial-temporal continuity; video scene parsing; depth embedded; long short term memory (LSTM)

Funding

  1. National Natural Science Foundation of China [61872187, 61773215, 61871444]
  2. National Defense Pre-Research Foundation [41412010302, 41412010101]
  3. Medical Research Council (MRC) Innovation Fellowship [MR/S003916/1]
  4. MRC [MR/S003916/1, MR/S003916/2] Funding Source: UKRI

Abstract

Semantic segmentation-based scene parsing plays an important role in automatic driving and autonomous navigation. However, most previous models consider only static images and fail to parse sequential images, because they do not take into account the spatial-temporal continuity between consecutive frames of a video. In this paper, we propose a depth embedded recurrent predictive parsing network (RPPNet), which analyzes preceding consecutive stereo pairs to produce the parsing result. In this way, RPPNet effectively learns the dynamic information from historical stereo pairs and can thus correctly predict the representations of the next frame. The other contribution of this paper is a systematic study of the video scene parsing (VSP) task, in which we use RPPNet to augment conventional image parsing features with spatial-temporal information. The experimental results show that the proposed RPPNet achieves accurate predictive parsing results on Cityscapes, and that the predictive features of RPPNet significantly improve conventional image parsing networks on the VSP task.
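The recurrent predictive core described in the abstract can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: a plain fully connected LSTM cell consumes concatenated appearance and depth features from a sequence of historical stereo pairs, and its final hidden state stands in for the predicted representation of the next frame. All names, feature dimensions, and the use of a non-convolutional LSTM are assumptions made for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleLSTMCell:
    """Minimal LSTM cell (hypothetical stand-in for RPPNet's recurrent core)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the four gates (input, forget, output, candidate).
        self.W = rng.standard_normal((4 * hidden_dim, input_dim + hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        # Standard LSTM recurrence on the concatenated [input, hidden] vector.
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def predict_next_representation(frames, depths, cell):
    """Run the recurrence over historical (appearance, depth) feature pairs;
    the final hidden state serves as the predicted next-frame representation."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for f, d in zip(frames, depths):
        x = np.concatenate([f.ravel(), d.ravel()])  # depth embedded in the input
        h, c = cell.step(x, h, c)
    return h

# Toy sequence: 4 historical frames, each with an 8-dim appearance feature
# and a 4-dim depth feature (dimensions are arbitrary for illustration).
frames = [np.random.rand(8) for _ in range(4)]
depths = [np.random.rand(4) for _ in range(4)]
cell = SimpleLSTMCell(input_dim=12, hidden_dim=16)
pred = predict_next_representation(frames, depths, cell)
print(pred.shape)  # (16,)
```

In the paper's setting, the per-frame inputs would be learned feature maps from the stereo pairs and the recurrence would be convolutional; the sketch only shows how historical pairs are folded into a single predictive state.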
