4.6 Article

Feature Selection and Ensemble Learning Techniques in One-Class Classifiers: An Empirical Study of Two-Class Imbalanced Datasets

Journal

IEEE ACCESS
Volume 9, Pages 13717-13726

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3051969

Keywords

Classification algorithms; Testing; Feature extraction; Training data; Training; Anomaly detection; Vegetation; Data mining; one-class classifiers; class imbalance; machine learning; ensemble learning

Funding

  1. Ministry of Science and Technology of Taiwan [MOST 109-2410-H-182-012]
  2. Chang Gung Memorial Hospital, Linkou [BMRPH13]


The study treats class imbalance as an anomaly detection problem, investigating the performance of one-class classification (OCC) classifiers both individually and in ensembles. Results show that OCC classifiers perform well on datasets with high class imbalance ratios, that feature selection does not usually improve their performance, and that combining multiple OCC classifiers can outperform individual classifiers.
Class imbalance learning is an important research problem in data mining and machine learning. Most solutions, including data-level, algorithm-level, and cost-sensitive approaches, are derived using multi-class classifiers, depending on the number of classes to be classified. One-class classification (OCC) techniques, in contrast, have been widely used for anomaly or outlier detection, where only normal or positive class training data are available. In this study, we treat every two-class imbalanced dataset as an anomaly detection problem, which contains a large number of data in the majority class, i.e. the normal or positive class, and a very small number of data in the minority class. The research objectives of this paper are to understand the performance of OCC classifiers and to examine the level of performance improvement when feature selection is considered for pre-processing the training data in the majority class and ensemble learning is employed to combine multiple OCC classifiers. Based on 55 datasets with different ranges of class imbalance ratios and one-class support vector machine, isolation forest, and local outlier factor as the representative OCC classifiers, we found that the OCC classifiers are good at high imbalance ratio datasets, outperforming the C4.5 baseline. In most cases, though, performing feature selection does not improve the performance of the OCC classifiers. However, many homogeneous and heterogeneous OCC classifier ensembles do outperform the single OCC classifiers, with some specific combinations of multiple OCC classifiers, both with and without feature selection, performing similarly to or better than the baseline combination of SMOTE and C4.5.
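The general recipe the abstract describes, i.e. fitting OCC classifiers on majority-class data only and combining their outlier predictions into a heterogeneous ensemble, can be sketched as follows. This is a minimal illustration using scikit-learn's implementations of the three representative OCC classifiers; the synthetic dataset and all parameter values are assumptions for demonstration, not the paper's 55 benchmark datasets or tuned settings.

```python
# Hedged sketch: treating a two-class imbalanced problem as anomaly detection.
# The synthetic data, hyperparameters, and majority-vote rule are illustrative
# assumptions, not the paper's exact experimental setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

# Synthetic imbalanced dataset: class 0 is the majority ("normal") class.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Train each OCC classifier on majority-class data only.
X_majority = X_train[y_train == 0]
models = [
    OneClassSVM(nu=0.05),
    IsolationForest(random_state=0),
    LocalOutlierFactor(novelty=True),  # novelty=True enables predict() on new data
]
for m in models:
    m.fit(X_majority)

# Heterogeneous ensemble: majority vote over the three OCC predictions.
# Each predict() returns +1 (inlier / majority class) or -1 (outlier / minority).
votes = np.stack([m.predict(X_test) for m in models])
ensemble = np.sign(votes.sum(axis=0))

# Map ensemble output back to class labels: -1 (outlier) -> minority class 1.
y_pred = (ensemble == -1).astype(int)
```

With three voters each emitting +1 or -1, the vote sum is always odd, so the sign is never ambiguous; an even-sized ensemble would need a tie-breaking rule.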
