4.8 Article

Fast Support Vector Classification for Large-Scale Problems

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3085969

Keywords

Classification; large-scale datasets; support vector machine; closed-form training; model selection

Funding

  1. Xunta de Galicia [2019-2022 ED431G-2019/04]
  2. European Regional Development Fund (ERDF)

Abstract

The support vector machine (SVM) is a very important machine learning algorithm with state-of-the-art performance on many classification problems. However, on large datasets it is very slow and requires much memory. To address this deficiency, we propose the fast support vector classifier (FSVC), which includes: 1) an efficient closed-form training procedure free of any numerical iterative method; 2) a small collection of class prototypes that avoids storing an excessive number of support vectors in memory; and 3) a fast method that selects the spread of the radial basis function kernel directly from the data, without executing the classifier or iteratively tuning hyper-parameters. The memory requirements of FSVC are very low: it spends on average only 6·10^-7 seconds per pattern, input and class, and processes datasets of up to 31 million patterns, 30,000 inputs and 131 classes in less than 1.5 hours (less than 3 hours with only 2 GB of RAM). On average, FSVC is 10 times faster, requires 12 times less memory and achieves 4.7 percent higher performance than Liblinear, which fails on the 4 largest datasets for lack of memory, and it is 100 times faster than Libsvm while achieving only 6.7 percent lower performance. The time spent by FSVC depends only on the dataset size and can therefore be accurately estimated for new datasets, whereas Libsvm and Liblinear are much slower on difficult datasets, even when they are small. FSVC adjusts its requirements to the available memory, so it can classify large datasets on computers with limited memory. Code for the proposed algorithm in the Octave scientific programming language is provided.
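
To make the three ingredients above concrete, the following is a minimal, self-contained Octave sketch of the same general idea: a small set of per-class prototypes, an RBF spread taken directly from the prototype distances, and a closed-form ridge solution over the kernel activations. It is not the authors' FSVC code; the function names (fsvc_sketch_train, fsvc_sketch_predict, sq_dist), the random prototype selection, the median-distance spread heuristic and the ridge parameter lambda are all illustrative assumptions, not details taken from the paper.

1;  % script file marker so function definitions below are allowed

function D2 = sq_dist(A, B)
  % Pairwise squared Euclidean distances between the rows of A and B.
  D2 = max(sum(A.^2, 2) + sum(B.^2, 2)' - 2*A*B', 0);
endfunction

function model = fsvc_sketch_train(X, y, protos_per_class, lambda)
  % X: N x D patterns, y: N x 1 integer class labels.
  classes = unique(y);
  P = [];
  for c = classes'                          % a few prototypes per class
    Xc  = X(y == c, :);
    idx = randperm(rows(Xc), min(protos_per_class, rows(Xc)));
    P   = [P; Xc(idx, :)];
  endfor
  D2     = sq_dist(P, P);
  sigma2 = median(D2(D2 > 0));              % kernel spread taken from the data
  K = exp(-sq_dist(X, P) / (2*sigma2));     % N x M kernel activations
  T = 2*(y(:) == classes') - 1;             % one-vs-rest +/-1 targets
  W = (K'*K + lambda*eye(columns(K))) \ (K'*T);   % closed-form ridge solve
  model = struct('P', P, 'sigma2', sigma2, 'W', W, 'classes', classes);
endfunction

function yhat = fsvc_sketch_predict(model, X)
  K = exp(-sq_dist(X, model.P) / (2*model.sigma2));
  [~, j] = max(K*model.W, [], 2);           % class with the largest score
  yhat = model.classes(j);
endfunction

% Toy usage on synthetic two-class data.
X = [randn(200, 2) + 2; randn(200, 2) - 2];
y = [ones(200, 1); 2*ones(200, 1)];
model = fsvc_sketch_train(X, y, 20, 1e-2);
accuracy = mean(fsvc_sketch_predict(model, X) == y)

In a scheme of this kind, training reduces to a single linear solve over the prototype activations, which is what makes it free of iterative optimization; the memory footprint is governed by the number of prototypes rather than by the number of support vectors.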
