Journal
EXPERT SYSTEMS WITH APPLICATIONS
Volume: 175, Issue: -, Pages: -
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2021.114741
Keywords
Dimensionality reduction; Embedding; Visualization; Machine learning; Big data
Funding
- Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
- Fundação Amazônia de Amparo a Estudos e Pesquisas (Fapespa)
- Instituto Tecnológico Vale (ITV)
Dimensionality Reduction (DR) is important in understanding high-dimensional data, and the Polygonal Coordinate System (PCS) presented in this work offers an efficient geometric approach for this purpose. The study also introduces a new version of the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm using a PCS-based deterministic strategy, showcasing the efficiency of PCS in data embedding compared to other DR algorithms.
Dimensionality Reduction (DR) is useful for understanding high-dimensional data. It attracts wide attention from industry and academia and is employed in areas such as machine learning, data mining, and pattern recognition. This work presents a geometric approach to DR termed Polygonal Coordinate System (PCS), capable of representing multidimensional data in two or three dimensions while preserving their inherent overall structure by taking advantage of a polygonal interface bridging high- and low-dimensional spaces. PCS can handle Big Data by adopting an incremental, geometric DR with linear-time complexity. A new version of t-Distributed Stochastic Neighbor Embedding (t-SNE), a state-of-the-art algorithm for DR, is also provided. It employs a PCS-based deterministic strategy and is named t-Distributed Deterministic Neighbor Embedding (t-DNE). Several synthetic and real data sets, chosen as archetypes of well-known real-world problems, were used in our benchmark to evaluate PCS and t-DNE against four embedding-based DR algorithms: two linear-transformation ones (Principal Component Analysis and Non-negative Matrix Factorization) and two nonlinear ones (t-SNE and Sammon's Mapping). Statistical comparisons of the execution times of these algorithms, using Friedman's significance test, highlight the efficiency of PCS in data embedding. PCS tends to surpass its counterparts in several aspects explored in this work, including asymptotic time and space complexity, preservation of global data-inherent structures, number of hyperparameters, and applicability to unobserved data.
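The benchmark design described above — running several DR algorithms on multiple data sets, recording execution times, and comparing them with Friedman's significance test — can be sketched with standard tooling. PCS and t-DNE are not publicly available in common libraries, so this hedged sketch uses only the baseline methods named in the abstract that scikit-learn provides (PCA, NMF, t-SNE); data sets, sizes, and hyperparameters here are illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch of a runtime benchmark over several DR algorithms, followed by
# Friedman's significance test on the paired runtimes across data sets.
import time
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA, NMF
from sklearn.manifold import TSNE

def time_embedding(model, X):
    """Return the wall-clock time taken to embed X into 2-D."""
    start = time.perf_counter()
    model.fit_transform(X)
    return time.perf_counter() - start

runtimes = {"PCA": [], "NMF": [], "t-SNE": []}
for seed in range(3):  # three synthetic data sets (illustrative only)
    X, _ = make_blobs(n_samples=150, n_features=20, random_state=seed)
    X -= X.min()  # NMF requires non-negative input
    runtimes["PCA"].append(time_embedding(PCA(n_components=2), X))
    runtimes["NMF"].append(
        time_embedding(NMF(n_components=2, init="random", max_iter=500,
                           random_state=seed), X))
    runtimes["t-SNE"].append(
        time_embedding(TSNE(n_components=2, perplexity=30,
                            random_state=seed), X))

# Friedman's test treats each data set as a block and ranks the
# algorithms within it; a small p-value indicates runtime differences.
stat, p = friedmanchisquare(runtimes["PCA"], runtimes["NMF"],
                            runtimes["t-SNE"])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```

Friedman's test is the appropriate nonparametric choice here because the same data sets are reused across all algorithms, making the runtime samples paired rather than independent.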