Journal: Pattern Recognition
Volume 39, Issue 6, Pages 1053-1065
Publisher: Elsevier Sci Ltd
DOI: 10.1016/j.patcog.2005.07.011
Keywords: nonlinear dimensionality reduction; manifold learning; locally linear embedding; principal component analysis; outlier; robust statistics; M-estimation; handwritten digit; wood texture
Abstract: In the past few years, some nonlinear dimensionality reduction (NLDR) or nonlinear manifold learning methods have aroused a great deal of interest in the machine learning community. These methods are promising in that they can automatically discover the low-dimensional nonlinear manifold in a high-dimensional data space and then embed the data points into a low-dimensional embedding space, using tractable linear algebraic techniques that are easy to implement and are not prone to local minima. Despite their appealing properties, these NLDR methods are not robust against outliers in the data, yet so far very little has been done to address the robustness problem. In this paper, we address this problem in the context of an NLDR method called locally linear embedding (LLE). Based on robust estimation techniques, we propose an approach to make LLE more robust. We refer to this approach as robust locally linear embedding (RLLE). We also present several specific methods for realizing this general RLLE approach. Experimental results on both synthetic and real-world data show that RLLE is very robust against outliers. (c) 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
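The outlier sensitivity the abstract describes can be illustrated with a minimal sketch. Note this is not the paper's RLLE algorithm: it uses scikit-learn's standard (non-robust) LLE, preceded by a simple, hypothetical outlier filter that down-weights points whose mean distance to their k nearest neighbours is unusually large — a crude stand-in for the reliability weights that robust estimation (e.g. M-estimation) would produce.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Synthetic 1-D manifold (a helix) embedded in 3-D, plus uniform outliers.
t = rng.uniform(0, 4 * np.pi, 400)
X = np.column_stack([np.cos(t), np.sin(t), 0.2 * t])
outliers = rng.uniform(-3, 3, size=(20, 3))
X_all = np.vstack([X, outliers])

# Illustrative (not the paper's) outlier screening: score each point by its
# mean distance to its k nearest neighbours; points with abnormally large
# scores are treated as outliers and excluded before embedding.
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X_all)
dists, _ = nn.kneighbors(X_all)          # column 0 is the point itself
score = dists[:, 1:].mean(axis=1)        # mean k-NN distance per point
keep = score < np.median(score) + 3 * score.std()

# Standard LLE on the retained (presumed-clean) points only.
lle = LocallyLinearEmbedding(n_neighbors=k, n_components=1)
Y = lle.fit_transform(X_all[keep])
print(f"kept {keep.sum()} of {len(X_all)} points, embedding shape {Y.shape}")
```

Running LLE on `X_all` directly instead of `X_all[keep]` typically distorts the recovered coordinate, since the locally linear reconstruction weights are contaminated by the outliers' neighbourhoods; the paper's RLLE replaces this hard filtering with principled robust estimation.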