Journal
EXPERT SYSTEMS WITH APPLICATIONS
Volume 36, Issue 2, Pages 3982-3989
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2008.02.055
Keywords
Relevance vector machine (RVM); Gaussian kernel function; Regression; Gradient descent algorithm; Bayesian evidence
Funding
- National Natural Science Foundation of China (NSFC) [50375090]
Kernel-based machine learning techniques have been widely used to tackle problems of function approximation and regression estimation. The relevance vector machine (RVM) has state-of-the-art performance in sparse regression. As a popular and competent kernel function in machine learning, the conventional Gaussian kernel uses a single, unified width for all basis functions, which implicitly makes a basic assumption: the response is represented below a certain frequency and the noise above it. However, in many cases, this assumption does not hold. To overcome this limitation, a novel adaptive spherical Gaussian kernel is utilized for nonlinear regression, and a stagewise optimization algorithm for maximizing the Bayesian evidence within the sparse Bayesian learning framework is proposed for model selection. An extensive empirical study on two artificial datasets and two real-world benchmark datasets shows the effectiveness and flexibility of the model in representing regression problems with higher levels of sparsity and better performance than the classical RVM. An attractive ability of this approach is that it automatically chooses the right kernel widths locally to fit the relevance vectors (RVs) from the training dataset, which keeps the right level of smoothing at each scale of the signal. (C) 2008 Elsevier Ltd. All rights reserved.