Article

Adaptive sparseness for supervised learning

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2003.1227989

Keywords

supervised learning; classification; regression; sparseness; feature selection; kernel methods; expectation-maximization algorithm

Abstract

The goal of supervised learning is to infer a functional mapping based on a set of training examples. To achieve good generalization, it is necessary to control the complexity of the learned function. In Bayesian approaches, this is done by adopting a prior for the parameters of the function being learned. We propose a Bayesian approach to supervised learning which leads to sparse solutions; that is, in which irrelevant parameters are automatically set exactly to zero. Other ways to obtain sparse classifiers (such as Laplacian priors or support vector machines) involve (hyper)parameters which control the degree of sparseness of the resulting classifiers; these parameters have to be somehow adjusted or estimated from the training data. In contrast, our approach involves no (hyper)parameters to be adjusted or estimated. This is achieved by a hierarchical-Bayes interpretation of the Laplacian prior, which is then modified by the adoption of a Jeffreys' noninformative hyperprior. Implementation is carried out by an expectation-maximization (EM) algorithm. Experiments with several benchmark data sets show that the proposed approach yields state-of-the-art performance. In particular, our method outperforms SVMs and performs competitively with the best alternative techniques, although it involves no tuning or adjustment of sparseness-controlling hyperparameters.
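
For a concrete feel for the method, the sketch below implements the regression variant of the EM iteration in plain NumPy. It is a minimal illustration under our own assumptions, not the paper's full implementation (which also covers classification): we assume a linear-Gaussian observation model y = Hw + noise with known noise variance sigma2, and all function and variable names (adaptive_sparse_em, H, y, sigma2) are ours. The iteration uses the numerically stable multiplicative form of the update, w <- U (sigma2 I + U H'H U)^{-1} U H'y with U = diag(|w|), under which a weight that reaches zero remains exactly zero.

    import numpy as np

    def adaptive_sparse_em(H, y, sigma2, n_iters=200, tol=1e-8):
        # EM for sparse linear regression with a Jeffreys hyperprior,
        # in the spirit of the approach described in the abstract.
        # Model: y = H w + noise, noise ~ N(0, sigma2 * I).
        # Update: w <- U (sigma2 I + U H^T H U)^{-1} U H^T y, U = diag(|w|).
        n, k = H.shape
        w = np.linalg.lstsq(H, y, rcond=None)[0]   # least-squares initialization
        HtH = H.T @ H
        Hty = H.T @ y
        for _ in range(n_iters):
            U = np.diag(np.abs(w))                 # weights at zero stay at zero
            A = sigma2 * np.eye(k) + U @ HtH @ U   # always positive definite
            w_new = U @ np.linalg.solve(A, U @ Hty)
            if np.max(np.abs(w_new - w)) < tol:
                w = w_new
                break
            w = w_new
        return w

    # Toy usage: 3 relevant features out of 20; irrelevant weights shrink to ~0.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((100, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    y = H @ w_true + 0.1 * rng.standard_normal(100)
    print(np.round(adaptive_sparse_em(H, y, sigma2=0.01), 3))

In this sketch irrelevant weights decay geometrically across iterations, becoming numerically negligible rather than hitting exact zero in finite time; a small threshold can be applied afterward if hard zeros are wanted. Note also that the noise variance is treated as known here, whereas in practice it may itself need to be estimated.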

Authors

M. A. T. Figueiredo
