Journal
PATTERN RECOGNITION
Volume 42, Issue 1, Pages 93-104
Publisher
ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2008.07.010
Keywords
Classifier design; Discriminative information; Manifold learning; Pattern recognition
Funding
- National Natural Science Foundation of China [60773061]
- Natural Science Foundation of Jiangsu [BK2008XXX]
Over the past decades, regularization theory has been widely applied in various areas of machine learning to derive a large family of novel algorithms. Traditionally, regularization focuses on smoothing only and does not fully utilize the underlying discriminative knowledge that is vital for classification. In this paper, we propose a novel regularization algorithm in the least-squares sense, called the discriminatively regularized least-squares classification (DRLSC) method, which is specifically designed for classification. Inspired by several recent geometrically motivated methods, DRLSC directly embeds the discriminative information, as well as the local geometry of the samples, into the regularization term, so that it can exploit as much underlying knowledge inside the samples as possible and aims to maximize the margins between samples of different classes in each local area. Furthermore, by embedding equality-type constraints in the formulation, the solutions of DRLSC follow from solving a set of linear equations, and the framework naturally handles multi-class problems. Experiments on both toy and real-world problems demonstrate that DRLSC is often superior in classification performance to classical regularization algorithms, including regularization networks, support vector machines, and some recently studied manifold regularization techniques. (C) 2008 Elsevier Ltd. All rights reserved.
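The abstract's key ideas — a least-squares loss, a regularizer built from the samples' local geometry that pulls same-class neighbors together and pushes different-class neighbors apart, and a closed-form solution via a linear system — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact formulation: the function names, the choice of k-NN graphs, the Laplacian-difference regularizer `L_s - L_d`, and the one-vs-rest label encoding are all illustrative assumptions.

```python
import numpy as np

def drlsc_sketch(X, y, gamma_A=1e-2, gamma_I=1e-1, k=5):
    """Hypothetical sketch of a discriminatively regularized LS classifier.

    Builds two k-NN graphs over the training samples: a same-class graph W_s
    (local smoothness) and a different-class graph W_d (local margin), combines
    their Laplacians into one discriminative regularizer, and solves a single
    linear system for a linear decision function. One-vs-rest target encoding
    makes the framework multi-class, as the abstract suggests.
    """
    n = X.shape[0]
    Xb = np.hstack([X, np.ones((n, 1))])          # append a bias column
    d = Xb.shape[1]
    classes = np.unique(y)
    # one-vs-rest targets: +1 for the sample's class, -1 otherwise
    Y = np.where(y[:, None] == classes[None, :], 1.0, -1.0)

    # k nearest neighbours by squared Euclidean distance (excluding self)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    order = np.argsort(D2, axis=1)[:, 1:k + 1]

    W_s = np.zeros((n, n))                        # same-class neighbours
    W_d = np.zeros((n, n))                        # different-class neighbours
    for i in range(n):
        for j in order[i]:
            if y[i] == y[j]:
                W_s[i, j] = W_s[j, i] = 1.0
            else:
                W_d[i, j] = W_d[j, i] = 1.0

    L_s = np.diag(W_s.sum(1)) - W_s               # graph Laplacians
    L_d = np.diag(W_d.sum(1)) - W_d
    R = L_s - L_d                                 # discriminative regularizer

    # least squares + Tikhonov + discriminative term: one linear system
    A = Xb.T @ Xb + gamma_A * n * np.eye(d) + gamma_I * (Xb.T @ R @ Xb)
    W = np.linalg.solve(A, Xb.T @ Y)
    return classes, W

def drlsc_predict(classes, W, X):
    """Assign each sample to the class whose one-vs-rest score is largest."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return classes[np.argmax(Xb @ W, axis=1)]
```

Note that because `L_s - L_d` is generally indefinite, the system matrix `A` is only guaranteed well-conditioned for sufficiently small `gamma_I`; the Tikhonov term `gamma_A * n * I` is what keeps the sketch solvable in practice.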