Proceedings Paper

Accelerating Deep Learning Frameworks with Micro-batches

Publisher

IEEE
DOI: 10.1109/CLUSTER.2018.00058

Keywords

Deep learning; convolutional neural networks; performance tuning; micro-batch

Funding

  1. ETH Postdoctoral Fellowship, Japan
  2. ETH Student Summer Research Fellowship, Japan
  3. JST CREST Grant, Japan [JPMJCR1303, JPMJCR1687]
  4. JSPS KAKENHI, Japan [JP18J22858]

Abstract

cuDNN is a low-level library that provides GPU kernels frequently used in deep learning. In particular, cuDNN implements several equivalent convolution algorithms whose performance and memory footprint can vary considerably depending on the layer dimensions. When cuDNN selects an algorithm automatically, the decision is made on a per-layer basis, so it often falls back to slower algorithms that fit within the workspace size constraints. We present mu-cuDNN, a thin wrapper library for cuDNN that transparently divides a layer's mini-batch computation into multiple micro-batches, both on a single GPU and on a heterogeneous set of GPUs. Using dynamic programming and integer linear programming (ILP), mu-cuDNN enables faster algorithms by reducing the workspace requirements. At the same time, mu-cuDNN does not reduce the accuracy of the results, effectively decoupling statistical efficiency from hardware efficiency. We demonstrate the effectiveness of mu-cuDNN for the Caffe and TensorFlow frameworks, achieving speedups of 1.63x for AlexNet and 1.21x for ResNet-18 on the P100-SXM2 GPU. We also show that mu-cuDNN achieves speedups of up to 4.54x (1.60x on average) for DeepBench's convolutional layers on the V100-SXM2 GPU. In a distributed setting, mu-cuDNN attains a 2.20x speedup when training ResNet-18 on a heterogeneous GPU cluster, relative to a single GPU. These results indicate that micro-batching can seamlessly increase the performance of deep learning while maintaining the same overall memory footprint.
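The dynamic-programming step described in the abstract can be illustrated with a short sketch: pick micro-batch sizes that sum to the mini-batch size while minimizing total runtime, given a per-size cost that reflects the fastest algorithm fitting the workspace limit. This is a minimal illustration only; `best_split` and `time_for` are hypothetical names introduced here, not part of cuDNN's or mu-cuDNN's actual API, and the cost values in the usage example are invented.

    from functools import lru_cache
    from typing import Callable, Tuple

    def best_split(mini_batch: int,
                   time_for: Callable[[int], float]) -> Tuple[float, Tuple[int, ...]]:
        """Find micro-batch sizes summing to `mini_batch` that minimize total runtime.

        `time_for(b)` is assumed to return the runtime of the fastest convolution
        algorithm that fits the workspace limit at micro-batch size b (in practice,
        this would be measured by benchmarking cuDNN's algorithms)."""

        @lru_cache(maxsize=None)
        def dp(n: int) -> Tuple[float, Tuple[int, ...]]:
            # dp(n): cheapest way to process the n remaining samples.
            if n == 0:
                return 0.0, ()
            best_time, best_sizes = float("inf"), ()
            for b in range(1, n + 1):  # candidate size of the next micro-batch
                rest_time, rest_sizes = dp(n - b)
                total = time_for(b) + rest_time
                if total < best_time:
                    best_time, best_sizes = total, (b,) + rest_sizes
            return best_time, best_sizes

        return dp(mini_batch)

    # Toy usage with invented timings: sizes 64 and 128 happen to hit fast algorithms.
    if __name__ == "__main__":
        toy_times = {64: 1.0, 128: 1.8}
        time_for = lambda b: toy_times.get(b, 0.05 * b)
        print(best_split(256, time_for))  # -> (3.6, (128, 128))

In practice, the per-size costs would come from benchmarking cuDNN's convolution algorithms under the workspace constraint; the ILP formulation mentioned in the abstract is not covered by this sketch.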

