4.7 Article

Supervised contrastive learning over prototype-label embeddings for network intrusion detection

Journal

INFORMATION FUSION
Volume 79, Pages 200-228

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.09.014

Keywords

Label embedding; Contrastive learning; Max margin loss; Deep learning; Embeddings fusion; Network intrusion detection

Funding

  1. Spanish Ministry for Science, Innovation and Universities [RTI2018-098958-B-I00]
  2. Agencia Estatal de Investigacion (AEI) [RTI2018-098958-B-I00]
  3. Fondo Europeo de Desarrollo Regional (FEDER) [RTI2018-098958-B-I00]

Abstract

Contrastive learning makes it possible to establish similarities between samples by comparing their distances in an intermediate representation space (the embedding space) and using loss functions designed to attract similar samples and repel dissimilar ones. The distance comparison is based exclusively on the sample features. We propose a novel contrastive learning scheme that includes the labels in the same embedding space as the features and performs the distance comparison between features and labels in this shared embedding space. Following this idea, the sample features should lie close to their ground-truth (positive) label and far from the other (negative) labels. This scheme makes it possible to implement supervised classification based on contrastive learning. Each embedded label assumes the role of a class prototype in the embedding space, with the sample features that share its label gathering around it. The aim is to separate the label prototypes while minimizing the distance between each prototype and its same-class samples. A novel set of loss functions is proposed with this objective; minimizing them drives the allocation of sample features and labels in the embedding space. The loss functions and their associated training and prediction architectures are analyzed in detail, along with different strategies for label separation. The proposed scheme drastically reduces the number of pairwise comparisons, thus improving model performance. To reduce the number of pairwise comparisons even further, this initial scheme is extended by replacing the set of negative labels with its best single representative: either the negative label nearest to the sample features or the centroid of the cluster of negative labels. This idea creates a new subset of models, which are analyzed in detail. The outputs of the proposed models are the distances (in embedding space) between each sample and the label prototypes. These distances can be used for classification (minimum-distance label), feature dimensionality reduction (using the distances and the embeddings instead of the original features) and data visualization (with 2D or 3D embeddings). Although the proposed models are generic, their application and performance evaluation are carried out here for network intrusion detection, which is characterized by noisy and unbalanced labels and a challenging classification of the various types of attacks. Empirical results of the model applied to intrusion detection are presented in detail for two well-known intrusion detection datasets, together with a thorough set of classification and clustering performance evaluation metrics.
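
To make the prototype-label idea concrete, the sketch below shows one possible reading of the scheme in PyTorch: class labels are given learnable embeddings in the same space as the encoded sample features, and a max-margin loss pulls each sample toward its own label prototype while pushing it away from the negative prototypes, either all of them or only the nearest one (in the spirit of the "best single representative" variant described above). This is an illustrative sketch under stated assumptions, not the authors' implementation; the names PrototypeLabelModel and max_margin_proto_loss, the encoder architecture, and the margin value are assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeLabelModel(nn.Module):
    """Embeds samples and class labels in a shared space (illustrative sketch)."""

    def __init__(self, in_dim: int, emb_dim: int, num_classes: int):
        super().__init__()
        # Feature encoder: maps raw flow features to the embedding space.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )
        # One learnable label embedding (class prototype) per class, same space.
        self.label_emb = nn.Embedding(num_classes, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                 # (B, emb_dim) sample embeddings
        protos = self.label_emb.weight      # (C, emb_dim) label prototypes
        return torch.cdist(z, protos)       # (B, C) sample-to-prototype distances


def max_margin_proto_loss(dists: torch.Tensor, y: torch.Tensor,
                          margin: float = 1.0,
                          hardest_only: bool = False) -> torch.Tensor:
    """Pull each sample toward its positive prototype, push negatives past a margin.

    hardest_only=False: penalize every negative prototype (full pairwise scheme).
    hardest_only=True : penalize only the nearest negative prototype, mirroring the
                        replacement of the negative set by a single representative.
    """
    _, C = dists.shape
    pos = dists.gather(1, y.unsqueeze(1)).squeeze(1)                   # (B,)
    neg_mask = torch.arange(C, device=dists.device).unsqueeze(0) != y.unsqueeze(1)
    if hardest_only:
        nearest_neg = dists.masked_fill(~neg_mask, float("inf")).min(dim=1).values
        neg_term = F.relu(margin + pos - nearest_neg)
    else:
        hinge = F.relu(margin + pos.unsqueeze(1) - dists) * neg_mask   # zero positive column
        neg_term = hinge.sum(dim=1) / (C - 1)
    return (pos + neg_term).mean()


@torch.no_grad()
def predict(model: PrototypeLabelModel, x: torch.Tensor) -> torch.Tensor:
    # Minimum-distance classification: the nearest label prototype wins.
    return model(x).argmin(dim=1)
```

In a training loop one would compute dists = model(batch_x) and minimize max_margin_proto_loss(dists, batch_y). The same distance matrix provides a C-dimensional representation of each sample that can stand in for the original features, and with a 2D or 3D embedding space the embeddings can be plotted directly for visualization, as described in the abstract.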
