Article

Applying Intel's oneAPI to a machine learning case study

Journal

Concurrency and Computation: Practice and Experience

Publisher

WILEY

DOI: 10.1002/cpe.6917

Keywords

heterogeneous computing; high performance computing; performance portability; machine learning

Funding

  1. European Regional Development Fund [MCIN/AEI/10.13039/501100011033, RTI2018-098156-B-C53]

This article discusses different technologies and approaches for addressing the performance portability problem, with a focus on Intel's oneAPI. It uses the machine learning framework Caffe as a case study to explore the feasibility and advantages of developing with oneAPI.
Different technologies and approaches exist to work around the performance portability problem. Companies and academia are working together to find a way to preserve performance across heterogeneous hardware using a unified language, one language to rule them all. Intel's oneAPI appears with this idea in mind. In this article, we try Intel's new approach to heterogeneous programming, choosing machine learning as our case study. More precisely, we choose Caffe, a machine learning framework created six years ago. What would it take, though, to build Caffe again from scratch using a fresh technology like oneAPI? We consider not only ease of programming, since a single source code would suffice to deploy Caffe to CPUs, GPUs, FPGAs, and accelerators (the platforms oneAPI currently supports), but also performance, where oneAPI may be able to take advantage of specific hardware automatically. Is Intel's oneAPI ready to take the leap?
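
The single-source model the abstract refers to can be made concrete with a minimal DPC++/SYCL sketch. The code below is not from the paper or from the authors' Caffe port; it is an illustrative vector-add kernel (the VecAdd name and all variables are our own assumptions) showing how one C++ source can target a CPU, GPU, or FPGA simply by changing which device the queue selects.

// Minimal DPC++/SYCL sketch (illustrative, not from the paper).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // default_selector_v picks the most capable available device (e.g., a GPU);
  // swapping in cpu_selector_v or an FPGA selector retargets the same source.
  sycl::queue q{sycl::default_selector_v};

  {
    // Buffers manage host/device data movement automatically.
    sycl::buffer<float> buf_a(a.data(), sycl::range<1>(N));
    sycl::buffer<float> buf_b(b.data(), sycl::range<1>(N));
    sycl::buffer<float> buf_c(c.data(), sycl::range<1>(N));

    q.submit([&](sycl::handler& h) {
      sycl::accessor in_a(buf_a, h, sycl::read_only);
      sycl::accessor in_b(buf_b, h, sycl::read_only);
      sycl::accessor out_c(buf_c, h, sycl::write_only, sycl::no_init);
      // Element-wise vector addition: the same kernel body is compiled for
      // whichever device the queue selected.
      h.parallel_for<class VecAdd>(sycl::range<1>(N), [=](sycl::id<1> i) {
        out_c[i] = in_a[i] + in_b[i];
      });
    });
  } // Buffers are destroyed here, copying results back to the host vectors.

  std::cout << "c[0] = " << c[0] << std::endl;  // expected: 3
  return 0;
}

Compiled with a SYCL compiler such as Intel's icpx with -fsycl, the same source can be dispatched to different device backends, which is the property the article evaluates at the scale of a full framework like Caffe.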
