Journal
ACM COMPUTING SURVEYS
Volume 51, Issue 6
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3280989
Keywords
Dual optimization; primal optimization; decomposition; CPU parallelism; GPU parallelism; speedup; data movement
Funding
- Universities of Borås, Skövde, and Gothenburg in Sweden
The immense amount of data created by digitalization requires parallel computing for machine-learning methods. While there are many parallel implementations of support vector machines (SVMs), there is no clear recommendation for every application scenario. Many factors, including the optimization algorithm, problem size and dimension, kernel function, parallel programming stack, and hardware architecture, affect the efficiency of an implementation. It is up to the user to balance these trade-offs, particularly between computation time and classification accuracy. In this survey, we review state-of-the-art parallel implementations of SVMs, discuss their pros and cons, and suggest possible avenues for future research.
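To make the primal-optimization keyword concrete, the following is a minimal sketch of a linear SVM trained by stochastic subgradient descent on the primal objective (in the style of Pegasos). The function name, hyperparameters, and toy data are illustrative assumptions, not taken from any implementation covered by the survey.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Minimize the primal objective (lam/2)*||w||^2 + mean hinge loss.

    Illustrative sketch only; real parallel solvers decompose the problem
    or batch these updates across CPU/GPU workers.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)        # decaying step size
            margin = y[i] * (X[i] @ w)
            w *= (1.0 - eta * lam)       # subgradient of the L2 regularizer
            if margin < 1:               # hinge loss is active for this point
                w += eta * y[i] * X[i]
    return w

# Toy linearly separable data: the label is the sign of the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

w = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
print(f"training accuracy: {acc:.2f}")
```

Each update touches a single data point, which is why parallel variants of such solvers must manage data movement carefully, one of the trade-offs the survey examines.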