Journal
KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING
Pages 2174-2182
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3447548.3467320
Keywords
Anomaly Detection; Polluted Training Data; Validation Loss
Funding
- National Science Foundation [IIS-1910880, CSSI-2103832, CNS-1852498]
- U.S. Dept. of Education [P200A180088]
The paper introduces a novel deep learning approach ELITE for anomaly detection that utilizes a small number of labeled anomalies to infer hidden anomalies in the training data and improve anomaly detection performance. Unlike traditional methods, ELITE uses labeled examples as a validation set and leverages the gradient of validation loss to predict anomalies.
Deep learning techniques have been widely used to detect anomalies in complex data. Most of these techniques are either unsupervised or semi-supervised because large numbers of labeled anomalies are rarely available. However, they typically rely on clean training data, not polluted by anomalies, to learn the distribution of the normal data. Otherwise, the learned distribution tends to be distorted and hence ineffective in distinguishing between normal and abnormal data. To solve this problem, we propose a novel approach called ELITE that uses a small number of labeled examples to infer the anomalies hidden in the training samples. It then turns these anomalies into useful signals that help to better detect anomalies in user data. Unlike the classical semi-supervised classification strategy, which uses labeled examples as training data, ELITE uses them as a validation set. It leverages the gradient of the validation loss to predict whether a training sample is abnormal. The intuition is that correctly identifying the hidden anomalies could produce a better deep anomaly model with reduced validation loss. Our experiments on public benchmark datasets show that ELITE achieves up to 30% improvement in ROC AUC compared to the state-of-the-art, while remaining robust to polluted training data.
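The gradient-of-validation-loss intuition can be illustrated with a minimal NumPy sketch. This is not ELITE's actual algorithm (the paper uses deep models); all names, the toy center-based model, and the hinge margin below are illustrative assumptions. The model is a single center `mu`; upweighting training sample i moves `mu` by `-eta * g_i`, so to first order the validation loss changes by `-eta * (g_i . g_val)`. A negative alignment `g_i . g_val` means the sample hurts the validation loss, marking it as a suspected hidden anomaly.

```python
import numpy as np

# Illustrative sketch only, not the paper's implementation.
# Model: a center mu; normal samples should sit close to mu.
# Per-sample training loss: l_i(mu) = ||x_i - mu||^2.
# Validation loss: labeled normals pull mu toward them; labeled
# anomalies push it away via a hinge max(0, margin - ||x - mu||^2).

def train_grad(x, mu):
    # d/dmu ||x - mu||^2 = -2 (x - mu)
    return -2.0 * (x - mu)

def val_grad(val_normal, val_anom, mu, margin=36.0):
    g = np.zeros_like(mu)
    for x in val_normal:                    # normal: minimize ||x - mu||^2
        g += -2.0 * (x - mu)
    for x in val_anom:                      # anomaly: hinge active when close
        if np.sum((x - mu) ** 2) < margin:
            g += 2.0 * (x - mu)
    return g

def hidden_anomaly_scores(train_x, val_normal, val_anom, mu):
    # Alignment g_i . g_val: negative means upweighting sample i would
    # increase the validation loss, i.e. a suspected hidden anomaly.
    g_val = val_grad(val_normal, val_anom, mu)
    return np.array([train_grad(x, mu) @ g_val for x in train_x])

# Toy 2-d data: normals near the origin, anomalies near (5, 5).
train_x = np.array([[0.1, -0.2], [0.0, 0.3], [5.1, 4.9], [-0.3, 0.1]])
val_normal = np.array([[0.2, 0.0], [-0.1, 0.1]])
val_anom = np.array([[4.8, 5.2]])           # the few labeled anomalies
mu = train_x.mean(axis=0)                   # distorted by the hidden anomaly

scores = hidden_anomaly_scores(train_x, val_normal, val_anom, mu)
suspect = int(np.argmin(scores))            # most negative alignment
```

Here the hidden anomaly at index 2 receives the most negative alignment score, mirroring the paper's intuition that flagging it (rather than fitting it) would reduce the validation loss.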