Journal
INFORMATION SCIENCES
Volume 598, Pages 37-56
Publisher
ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.03.067
Keywords
Multi-task learning; Hypersphere classifier; Pattern recognition; TWSVM; Kernel method
Funding
- National Natural Science Foundation of China [12071475, 11671010]
- Beijing Natural Science Foundation [4172035]
In this paper, a novel method called MTTHSVM is proposed to address multi-task classification problems. It generates two hyperspheres for each task, enabling better representation of the distribution information of training samples. The method outperforms RMTL and DMTSVM in terms of computational efficiency. Experimental results validate the effectiveness of MTTHSVM.
Both regularized multi-task learning (RMTL) and the direct multi-task twin support vector machine (DMTSVM) have shown good performance on multi-task problems. They both use hyperplanes to realize classification. However, a hyperplane cannot reflect the distribution of the data well. Therefore, in this paper, we propose a novel multi-task twin hypersphere support vector machine (MTTHSVM) to solve multi-task classification problems. It generates two hyperspheres rather than hyperplanes for each task, so the proposed method can describe the distribution information of the training samples better than the existing RMTL and DMTSVM. Based on hierarchical Bayes theory, MTTHSVM divides the center of each hypersphere into task-specific and task-common parts to better measure the commonality and individuality of tasks. The shared information contained in multiple related tasks can then be adaptively mined, improving prediction accuracy to some extent. Besides, MTTHSVM is superior to RMTL and DMTSVM in terms of computational efficiency, because it solves only two smaller-sized quadratic programming problems without any matrix inverse operations. Experimental results on one artificial data set, thirty-five benchmark data sets and the real image data set CIFAR-100 verify the effectiveness of our method. (c) 2022 Elsevier Inc. All rights reserved.
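The abstract's core idea, one hypersphere per class, with each task's sphere center split into a task-common part plus a task-specific offset, can be illustrated with a heavily simplified sketch. The code below is not the authors' MTTHSVM: instead of solving the paper's two quadratic programming problems, it estimates the shared center as the class mean over all tasks and shrinks each task's offset toward it with a hypothetical blending weight `lam`, then classifies a point by which sphere surface is relatively nearer. The class name, `lam`, and all estimation choices are illustrative assumptions.

```python
import numpy as np

class ToyMultiTaskHypersphere:
    """Illustrative stand-in for a multi-task twin hypersphere classifier.

    Center of class c in task t: mu_c (task-common) + lam * v_{t,c} (task-specific).
    Both parts are estimated from sample means here, NOT from the paper's QPPs;
    lam in [0, 1] trades off commonality (lam=0) against individuality (lam=1).
    """

    def __init__(self, lam=0.5):
        self.lam = lam

    def fit(self, tasks):
        # tasks: list of (X, y) pairs with labels y in {+1, -1}
        self.models = []  # one entry per class: list of (center, radius) per task
        for cls in (+1, -1):
            all_pts = np.vstack([X[y == cls] for X, y in tasks])
            shared = all_pts.mean(axis=0)          # task-common center part
            per_task = []
            for X, y in tasks:
                pts = X[y == cls]
                offset = pts.mean(axis=0) - shared  # task-specific part
                center = shared + self.lam * offset
                radius = np.linalg.norm(pts - center, axis=1).max()
                per_task.append((center, radius))
            self.models.append(per_task)
        return self

    def predict(self, t, X):
        # assign each point to the class whose sphere surface is relatively closer
        scores = []
        for per_task in self.models:
            center, radius = per_task[t]
            dist = np.linalg.norm(X - center, axis=1)
            scores.append(dist / (radius + 1e-12))
        return np.where(scores[0] <= scores[1], 1, -1)
```

Unlike the mean-based centers above, the actual method obtains the sphere parameters from two small dual QPPs, which is also the source of its efficiency advantage noted in the abstract.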