3.8 Proceedings Paper

Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3037697.3037698

Keywords

mobile computing; cloud computing; deep neural networks; intelligent applications

Funding

  1. ARM
  2. Intel
  3. National Science Foundation [IIS-VEC1539011, CCF-SHF-1302682, CNS-CSR-1321047]
  4. NSF CAREER [SHF-1553485]
  5. Directorate for Computer & Information Science & Engineering [1539011] Funding Source: National Science Foundation
  6. Division of Information & Intelligent Systems [1539011] Funding Source: National Science Foundation

Abstract

The computation for today's intelligent personal assistants such as Apple Siri, Google Now, and Microsoft Cortana is performed in the cloud. This cloud-only approach requires significant amounts of data to be sent to the cloud over the wireless network and puts significant computational pressure on the datacenter. However, as the computational resources in mobile devices become more powerful and energy efficient, questions arise as to whether cloud-only processing is desirable moving forward, and what the implications are of pushing some or all of this compute to the mobile devices on the edge. In this paper, we examine the status quo approach of cloud-only processing and investigate computation partitioning strategies that effectively leverage both the cycles in the cloud and on the mobile device to achieve low latency, low energy consumption, and high datacenter throughput for this class of intelligent applications. Our study uses 8 intelligent applications spanning the computer vision, speech, and natural language domains, all employing state-of-the-art Deep Neural Networks (DNNs) as the core machine learning technique. We find that, given the characteristics of DNN algorithms, a fine-grained, layer-level computation partitioning strategy based on the data and computation variations of each layer within a DNN has significant latency and energy advantages over the status quo approach. Using this insight, we design Neurosurgeon, a lightweight scheduler that automatically partitions DNN computation between mobile devices and datacenters at the granularity of neural network layers. Neurosurgeon does not require per-application profiling. It adapts to various DNN architectures, hardware platforms, wireless networks, and server load levels, intelligently partitioning computation for best latency or best mobile energy. We evaluate Neurosurgeon on a state-of-the-art mobile development platform and show that it improves end-to-end latency by 3.1x on average and up to 40.7x, reduces mobile energy consumption by 59.5% on average and up to 94.7%, and improves datacenter throughput by 1.5x on average and up to 6.7x.
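
For intuition, the sketch below illustrates the layer-level partitioning decision the abstract describes: choose the layer boundary that minimizes mobile compute time, plus the time to upload the data crossing the boundary, plus cloud compute time. The names (Layer, best_partition, uplink_bytes_per_ms) and the use of pre-measured per-layer costs are illustrative assumptions, not the authors' implementation; Neurosurgeon itself predicts per-layer latency and energy with regression models and can optimize for mobile energy rather than latency.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Layer:
        # Hypothetical per-layer estimates; the real system predicts these
        # from models over layer type and configuration.
        mobile_latency_ms: float   # time to run this layer on the mobile device
        cloud_latency_ms: float    # time to run this layer in the datacenter
        output_bytes: int          # size of this layer's output activations

    def best_partition(layers: List[Layer], input_bytes: int,
                       uplink_bytes_per_ms: float) -> int:
        """Return k such that layers[:k] run on the mobile device and layers[k:]
        run in the cloud; k == 0 is cloud-only, k == len(layers) is mobile-only."""
        best_k, best_total = 0, float("inf")
        for k in range(len(layers) + 1):
            mobile_ms = sum(l.mobile_latency_ms for l in layers[:k])
            cloud_ms = sum(l.cloud_latency_ms for l in layers[k:])
            if k == len(layers):
                transfer_ms = 0.0  # nothing to upload: fully on-device
            else:
                # Data crossing the split: the raw input for cloud-only execution,
                # otherwise the activations produced by the last mobile layer.
                crossing = input_bytes if k == 0 else layers[k - 1].output_bytes
                transfer_ms = crossing / uplink_bytes_per_ms
            total = mobile_ms + cloud_ms + transfer_ms
            if total < best_total:
                best_k, best_total = k, total
        return best_k

Under this simple model, a DNN whose early layers shrink the data favors a split after those layers, while a slow uplink pushes the split toward mobile-only execution and a fast uplink toward cloud-only execution.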
