Article

Parametric nonlinear dimensionality reduction using kernel t-SNE

Journal

Neurocomputing
Volume 147, Pages 71-82

Publisher

Elsevier
DOI: 10.1016/j.neucom.2013.11.045

Keywords

t-SNE; Dimensionality reduction; Visualization; Fisher information; Out-of-sample extension

Funding

  1. DFG [HA 2719/7-1]
  2. CITEC center of excellence
  3. German Federal Ministry of Education and Research (BMBF)

Abstract

Novel non-parametric dimensionality reduction techniques such as t-distributed stochastic neighbor embedding (t-SNE) lead to a powerful and flexible visualization of high-dimensional data. One drawback of non-parametric techniques is their lack of an explicit out-of-sample extension. In this contribution, we propose an efficient extension of t-SNE to a parametric framework, kernel t-SNE, which preserves the flexibility of basic t-SNE but enables explicit out-of-sample extensions. We compare kernel t-SNE to standard t-SNE on benchmark data sets, in particular addressing how well the mapping generalizes to novel data. In the context of large data sets, this procedure enables us to train the mapping on a fixed-size subset only and to map all remaining data afterwards in linear time. We demonstrate that this technique yields satisfactory results also for large data sets, provided that information missing due to the small size of the subset is accounted for by auxiliary information such as class labels, which can be integrated into kernel t-SNE based on the Fisher information. © 2014 Elsevier B.V. All rights reserved.
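The sketch below illustrates the general kernel t-SNE idea summarized in the abstract; it is not the authors' reference implementation. It assumes an RBF kernel with a single hand-chosen bandwidth gamma (the paper instead derives local bandwidths from neighbor distances), uses scikit-learn's TSNE for the subset embedding, and fits the mapping coefficients by plain least squares via the pseudo-inverse. The helper names fit_kernel_tsne and transform_kernel_tsne are hypothetical, and the Fisher-information variant that incorporates class labels is not shown.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import rbf_kernel


def fit_kernel_tsne(X_train, gamma=1e-3, random_state=0):
    """Train the parametric map on a fixed-size training subset X_train."""
    # 1) Ordinary (non-parametric) t-SNE embedding of the subset.
    Y_train = TSNE(n_components=2, random_state=random_state).fit_transform(X_train)
    # 2) Row-normalized Gram matrix: K_ij = k(x_i, x_j) / sum_l k(x_i, x_l).
    K = rbf_kernel(X_train, X_train, gamma=gamma)
    K /= K.sum(axis=1, keepdims=True)
    # 3) Coefficients alpha of the map y(x) = sum_j alpha_j * normalized k(x, x_j),
    #    obtained by least squares via the pseudo-inverse of K.
    alpha = np.linalg.pinv(K) @ Y_train
    return alpha


def transform_kernel_tsne(X_new, X_train, alpha, gamma=1e-3):
    """Explicit out-of-sample extension: map new points, linear in len(X_new)."""
    K_new = rbf_kernel(X_new, X_train, gamma=gamma)
    K_new /= K_new.sum(axis=1, keepdims=True)
    return K_new @ alpha
```

Under these assumptions, a large data set would be handled by fitting on a small random subset and then calling transform_kernel_tsne on the remainder, so that only the subset incurs the quadratic cost of t-SNE itself.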
