Review

Conceptual and empirical comparison of dimensionality reduction algorithms (PCA, KPCA, LDA, MDS, SVD, LLE, ISOMAP, LE, ICA, t-SNE)

Journal

COMPUTER SCIENCE REVIEW
Volume 40, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.cosrev.2021.100378

Keywords

Dimension reduction; Optimal set of features; Data quality; High-dimensional datasets; Correlation metrics; Classification accuracy; Run-time


Abstract
Feature Extraction Algorithms (FEAs) aim to address the curse of dimensionality that makes machine learning algorithms incompetent. Our study conceptually and empirically explores the most representative FEAs. First, we review the theoretical background of many FEAs from different categories (linear vs. nonlinear, supervised vs. unsupervised, random projection-based vs. manifold-based), present their algorithms, and conduct a conceptual comparison of these methods. Second, for three challenging binary and multi-class datasets, we determine the optimal sets of new features and assess the quality of the various transformed feature spaces in terms of statistical significance and power analysis, and the FEA efficacy in terms of classification accuracy and speed. (C) 2021 Elsevier Inc. All rights reserved.
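The empirical protocol described above (transform the data with each FEA, then score the new feature space by downstream classification accuracy and runtime) can be sketched as follows. This is a minimal illustration using scikit-learn; the dataset (`load_digits`), the kNN classifier, and the target dimension of 9 are assumptions for the example, not the authors' exact experimental setup. t-SNE is omitted here because scikit-learn's `TSNE` offers no `transform` for held-out data.

```python
# Hedged sketch: compare several feature-extraction algorithms (FEAs)
# by downstream classification accuracy and wall-clock time.
import time

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, KernelPCA, FastICA, TruncatedSVD
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import Isomap, LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)  # 10-class dataset, 64 features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Target dimensionality is an illustrative choice (LDA allows at most
# n_classes - 1 = 9 components here).
k = 9
feas = {
    "PCA": PCA(n_components=k),
    "KPCA": KernelPCA(n_components=k, kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(n_components=k),  # supervised
    "SVD": TruncatedSVD(n_components=k),
    "ICA": FastICA(n_components=k, random_state=0, max_iter=1000),
    "Isomap": Isomap(n_components=k),
    "LLE": LocallyLinearEmbedding(n_components=k, random_state=0),
}

results = {}
for name, fea in feas.items():
    start = time.perf_counter()
    # LDA uses the labels; the unsupervised FEAs simply ignore y.
    Z_train = fea.fit_transform(X_train, y_train)
    Z_test = fea.transform(X_test)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
    acc = clf.score(Z_test, y_test)
    results[name] = (acc, time.perf_counter() - start)

for name, (acc, secs) in sorted(results.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:7s} accuracy={acc:.3f} time={secs:.2f}s")
```

A real replication would add statistical significance and power analysis over repeated runs, as the paper does, rather than a single train/test split.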

