Article

Tensorox: Accelerating GPU Applications via Neural Approximation on Unused Tensor Cores

Journal

IEEE Transactions on Parallel and Distributed Systems

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2021.3093239

Keywords

Hardware; Tensors; Neural networks; Deep learning; Graphics processing units; Task analysis; Training; parallel programming; approximate computing; tensor processing unit; GPGPU

Funding

  1. Singapore Ministry of Education [T1-251RES1818, MOE2016-T2-2-150]

Abstract

Driven by the demands of deep learning, many hardware accelerators, including GPUs, have begun to include specialized tensor processing units to accelerate matrix operations. However, general-purpose GPU applications with few or no large dense matrix operations cannot benefit from these tensor units. This article proposes Tensorox, a framework that exploits the half-precision tensor cores available on recent GPUs for approximable, non-deep-learning applications. In essence, a shallow neural network is trained on the input-output mapping of the function to be approximated. The key innovation in our implementation is the use of the small, dimension-restricted tensor operations in Nvidia GPUs to run multiple instances of the approximation neural network in parallel. With the proper scaling and training methods, our approximation yielded an overall accuracy higher than naively running the original programs in half precision. Furthermore, Tensorox allows the degree of approximation to be adjusted at runtime. For the ten benchmarks we tested, we achieved speedups from 2x to 112x over the original single-precision floating-point implementations, while keeping the error introduced by the approximation below 10 percent in most applications.
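The mechanism the abstract describes, evaluating many independent instances of a small approximation network as a single half-precision matrix multiply, maps naturally onto CUDA's WMMA tensor-core API. The sketch below is an illustration under assumptions, not Tensorox's actual code: the layer is padded to the 16x16x16 tile size the hardware requires, each instance occupies one column of the input matrix, and the kernel name shallow_layer and the omission of bias and activation are choices made here for brevity.

```cuda
// Minimal sketch: one dense layer of a shallow approximation network,
// evaluated for n instances at once on tensor cores. Assumed layout:
// weights and per-instance inputs are padded to 16x16 tiles, column-major.
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

constexpr int TILE = 16;  // WMMA supports m = n = k = 16 for half inputs

// Y = W * X.  W: 16x16 padded weights; X: 16 x n, one column per parallel
// instance; Y: 16 x n, accumulated in single precision by the tensor cores.
// Bias addition and the activation function are omitted for brevity.
// Launch with blockDim.x a multiple of 32, e.g. one warp per 16 instances.
__global__ void shallow_layer(const half *W, const half *X, float *Y, int n) {
    int tile = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    if (tile * TILE >= n) return;  // each warp handles 16 instances

    wmma::fragment<wmma::matrix_a, TILE, TILE, TILE, half, wmma::col_major> a;
    wmma::fragment<wmma::matrix_b, TILE, TILE, TILE, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, TILE, TILE, TILE, float> acc;

    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(a, W, TILE);                       // whole weight tile
    wmma::load_matrix_sync(b, X + tile * TILE * TILE, TILE);  // 16 input columns
    wmma::mma_sync(acc, a, b, acc);
    wmma::store_matrix_sync(Y + tile * TILE * TILE, acc, TILE,
                            wmma::mem_col_major);
}
```

Packing one instance per column is what keeps the dimension-restricted tensor operations fully occupied even though each network is tiny, and the single-precision accumulator helps explain how a well-scaled approximation can end up more accurate than naively running the original program in half precision.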
