Article

Self-Supervised Feature Representation for SAR Image Target Classification Using Contrastive Learning

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/JSTARS.2023.3321769

Keywords

Contrastive learning (CL); convolutional neural network (CNN); self-supervised representation (SSR) learning; synthetic aperture radar (SAR) image; target classification


This study proposes a two-stage algorithm based on contrastive learning for SAR image target classification. In the pretraining stage, high-level semantic features are extracted from an unlabeled train set using self-supervised representations. In the fine-tuning stage, a few labeled samples are used to train the classifier. Numerical experiments demonstrate that the proposed algorithm performs better than traditional supervised methods under labeled data constraints.
Deep neural networks (DNNs) are now widely applied to synthetic aperture radar (SAR) image interpretation tasks such as target classification and recognition, since they can automatically learn high-level semantic features in data-driven and task-driven manners. Supervised learning methods, however, require abundant labeled samples to avoid overfitting of the designed networks, and such samples are usually difficult to obtain for SAR image applications. To address this issue, a novel two-stage algorithm based on contrastive learning (CL) is proposed for SAR image target classification. In the pretraining stage, a convolutional neural network (CNN)-based encoder is pretrained with a contrastive strategy to extract self-supervised representations (SSRs) from an unlabeled training set; this encoder maps SAR images into a discriminative embedding space. The optimal encoder is selected using a linear evaluation protocol, which indirectly confirms the transferability of the prelearned SSRs to downstream tasks. In the fine-tuning stage, a SAR target classifier is then adequately trained in a supervised manner using a few labeled SSRs, benefiting from the powerful pretrained encoder. Numerical experiments on the shared MSTAR dataset demonstrate that the model based on the proposed self-supervised feature learning algorithm is superior to conventional supervised methods under labeled-data constraints. Knowledge-transfer experiments on the openSARship dataset further show that the encoder pretrained on the MSTAR dataset supports classifier training with high efficiency and precision. These results demonstrate the excellent training convergence and classification performance of the proposed algorithm.
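The contrastive pretraining described above can be illustrated with a minimal NumPy sketch of an NT-Xent-style loss (the form popularized by SimCLR). This is an assumption for illustration only: the paper's exact contrastive strategy, encoder architecture, and hyperparameters are not given here, so the function name, temperature value, and batch layout below are all hypothetical.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each pair (z1[i], z2[i]) is a positive; all other samples in the
    2N-sized batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, D)
    sim = z @ z.T / temperature                   # scaled similarity matrix
    n = z1.shape[0]
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)
    # The positive for row i is the other view of the same image.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy: -log softmax probability at the positive index.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos_idx] - logsumexp)
    return loss.mean()
```

In a pretraining loop, the loss would be minimized over encoder parameters so that two augmented views of the same SAR chip land close together in the embedding space; the classifier in the fine-tuning stage then operates on these embeddings.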

