Proceedings Paper

Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge

Journal

ACM SIGPLAN NOTICES
卷 52, 期 4, 页码 615-629

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3093336.3037698

Keywords

mobile computing; cloud computing; deep neural networks; intelligent applications

Funding

  1. ARM
  2. Intel
  3. National Science Foundation [IIS-VEC1539011, CCF-SHF-1302682, CNS-CSR-1321047]
  4. NSF CAREER [SHF-1553485]

Abstract

The computation for today's intelligent personal assistants such as Apple Siri, Google Now, and Microsoft Cortana, is performed in the cloud. This cloud-only approach requires significant amounts of data to be sent to the cloud over the wireless network and puts significant computational pressure on the datacenter. However, as the computational resources in mobile devices become more powerful and energy efficient, questions arise as to whether this cloud-only processing is desirable moving forward, and what are the implications of pushing some or all of this compute to the mobile devices on the edge. In this paper, we examine the status quo approach of cloud-only processing and investigate computation partitioning strategies that effectively leverage both the cycles in the cloud and on the mobile device to achieve low latency, low energy consumption, and high datacenter throughput for this class of intelligent applications. Our study uses 8 intelligent applications spanning computer vision, speech, and natural language domains, all employing state-of-the-art Deep Neural Networks (DNNs) as the core machine learning technique. We find that given the characteristics of DNN algorithms, a fine-grained, layer-level computation partitioning strategy based on the data and computation variations of each layer within a DNN has significant latency and energy advantages over the status quo approach. Using this insight, we design Neurosurgeon, a lightweight scheduler to automatically partition DNN computation between mobile devices and datacenters at the granularity of neural network layers. Neurosurgeon does not require per-application profiling. It adapts to various DNN architectures, hardware platforms, wireless networks, and server load levels, intelligently partitioning computation for best latency or best mobile energy. 
We evaluate Neurosurgeon on a state-of-the-art mobile development platform and show that it improves end-to-end latency by 3.1 x on average and up to 40.7 x, reduces mobile energy consumption by 59.5% on average and up to 94.7%, and improves datacenter throughput by 1.5 x on average and up to 6.7 x.
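The core idea in the abstract — pick a single layer boundary so that early layers run on the mobile device and the rest in the datacenter, trading per-layer compute against the cost of shipping that layer's output over the wireless link — can be sketched as a simple minimization. This is a hypothetical illustration, not Neurosurgeon's implementation: the real system predicts per-layer latencies with regression models and also optimizes for mobile energy, whereas here the per-layer costs (`mobile_ms`, `server_ms`, `output_bytes`) are assumed to be given.

```python
def best_partition(mobile_ms, server_ms, output_bytes, input_bytes, bandwidth_bps):
    """Return (split_index, latency_ms) minimizing end-to-end latency.

    split_index = k means layers [0, k) run on the mobile device and
    layers [k, N) run on the server. k = 0 is the cloud-only status quo
    (upload the raw input); k = N is mobile-only execution.
    """
    n = len(mobile_ms)
    best_k, best_lat = 0, float("inf")
    for k in range(n + 1):
        # Bytes sent over the wireless link: the raw input for k == 0,
        # otherwise the output of the last layer computed on the mobile.
        sent = input_bytes if k == 0 else output_bytes[k - 1]
        transfer_ms = sent * 8 / bandwidth_bps * 1000
        lat = sum(mobile_ms[:k]) + transfer_ms + sum(server_ms[k:])
        if lat < best_lat:
            best_k, best_lat = k, lat
    return best_k, best_lat


# Toy numbers: a layer with a large output early on makes a mid-network
# split attractive, mirroring the paper's observation that data size and
# compute vary sharply across DNN layers.
split, latency = best_partition(
    mobile_ms=[5, 5, 5],
    server_ms=[1, 1, 1],
    output_bytes=[1_000_000, 1_000, 100],
    input_bytes=2_000_000,
    bandwidth_bps=8_000_000,  # 1 MB/s wireless link
)
```

With these toy inputs the minimizer picks `split == 2`: uploading the raw input or the first layer's large activation dominates latency, while the second layer's small output makes a mid-network split cheapest.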
