Article

Self-Training for Class-Incremental Semantic Segmentation

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TNNLS.2022.3155746

Keywords

Semantics; Task analysis; Data models; Image segmentation; Training; Adaptation models; Computational modeling; Class-incremental learning; self-training; semantic segmentation


This paper addresses catastrophic forgetting in deep neural networks during class-incremental semantic segmentation. A self-training approach is proposed that leverages unlabeled data to rehearse previous knowledge. Experiments show that maximizing self-entropy and using diverse auxiliary data significantly improve performance, yielding state-of-the-art results on the Pascal-VOC 2012 and ADE20K datasets.
In class-incremental semantic segmentation, we have no access to the labeled data of previous tasks. Therefore, when incrementally learning new classes, deep neural networks suffer from catastrophic forgetting of previously learned knowledge. To address this problem, we propose a self-training approach that leverages unlabeled data for rehearsal of previous knowledge. Specifically, we first learn a temporary model for the current task, and then pseudo labels for the unlabeled data are computed by fusing information from the old model of the previous task and the current temporary model. In addition, conflict reduction is proposed to resolve the conflicts between pseudo labels generated by the old and temporary models. We show that maximizing self-entropy can further improve results by smoothing overconfident predictions. Interestingly, the experiments show that the auxiliary data can differ from the training data and that even general-purpose, but diverse, auxiliary data can lead to large performance gains. The experiments demonstrate state-of-the-art results: a relative gain of up to 114% on Pascal-VOC 2012 and of 8.5% on the more challenging ADE20K compared with previous state-of-the-art methods.
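The pseudo-label fusion, conflict reduction, and self-entropy steps described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' exact procedure: the confidence threshold `tau`, the ignore-on-conflict rule, and the function names are assumptions introduced for this sketch.

```python
import numpy as np

IGNORE = 255  # pixels excluded from the rehearsal loss (assumed ignore index)

def fuse_pseudo_labels(old_probs, new_probs, tau=0.7):
    """Fuse per-pixel class probabilities from the old model (previous
    classes, shape (H, W, C_old)) and the temporary model (current
    classes, shape (H, W, C_new)) into pseudo labels.

    A simple stand-in for the paper's conflict reduction: pixels where
    both models are confident but disagree are marked IGNORE."""
    old_conf = old_probs.max(axis=-1)                # (H, W) confidence
    old_cls = old_probs.argmax(axis=-1)              # (H, W) class ids
    new_conf = new_probs.max(axis=-1)
    # offset new-class ids so they follow the old classes
    new_cls = new_probs.argmax(axis=-1) + old_probs.shape[-1]

    labels = np.full(old_conf.shape, IGNORE, dtype=np.int64)
    labels[old_conf >= tau] = old_cls[old_conf >= tau]
    take_new = new_conf >= tau
    conflict = take_new & (old_conf >= tau)          # both models confident
    labels[take_new] = new_cls[take_new]
    labels[conflict] = IGNORE                        # drop conflicting pixels
    return labels

def self_entropy(probs, eps=1e-8):
    """Mean per-pixel self-entropy of the predicted distributions.
    Maximizing this term smooths overconfident predictions."""
    return float(-(probs * np.log(probs + eps)).sum(axis=-1).mean())
```

In training, `fuse_pseudo_labels` would supply rehearsal targets on the auxiliary unlabeled images, while `self_entropy` would enter the objective with a positive weight so that maximizing it discourages overconfident outputs.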
