3.8 Proceedings Paper

Training on the Edge: The why and the how

Publisher

IEEE
DOI: 10.1109/IPDPSW.2019.00148

Funding

  1. Intel Parallel Computing Centre at Imperial College London
  2. EPSRC [EP/R029423/1]
  3. HPC-BigData INRIA Project LAB (IPL)
  4. U.S. Department of Energy, Office of Science [DE-AC02-06CH1135]


Edge computing is a natural progression from cloud computing: instead of collecting all data and processing it centrally, as in a cloud environment, computing power is distributed so that as much processing as possible happens close to the source of the data. This model is being adopted quickly for several reasons, including privacy and reduced power and bandwidth requirements on the edge nodes. While inference is commonly performed on edge nodes today, training on the edge is much less common. The reasons range from the computational limitations of edge hardware to the fact that training on the edge does not always reduce communication between the edge nodes. In this paper, we explore scenarios in which training on the edge is advantageous, and the use of checkpointing strategies to save memory.
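The checkpointing strategy mentioned in the abstract trades compute for memory: instead of retaining every intermediate activation for the backward pass, only a subset is stored and the rest are recomputed on demand. The sketch below is a minimal, illustrative rendition of that idea (not the authors' implementation); the function names and the `stride` parameter are assumptions made for the example.

```python
def forward_with_checkpoints(layers, x, stride):
    """Run the layer chain, keeping only every `stride`-th activation."""
    checkpoints = {0: x}  # layer index -> activation entering that layer
    for i, layer in enumerate(layers):
        x = layer(x)
        if (i + 1) % stride == 0:
            checkpoints[i + 1] = x
    return x, checkpoints

def recompute_activation(layers, checkpoints, i):
    """Recover the input to layer `i` from the nearest earlier checkpoint."""
    start = max(k for k in checkpoints if k <= i)
    x = checkpoints[start]
    for j in range(start, i):
        x = layers[j](x)
    return x

# Toy chain of 8 "layers" (layer a adds the constant a). With stride=4
# we store 3 activations instead of the 9 a naive forward pass would keep.
layers = [lambda v, a=a: v + a for a in range(8)]
out, ckpts = forward_with_checkpoints(layers, 0, stride=4)
assert out == sum(range(8))        # full forward result: 28
assert sorted(ckpts) == [0, 4, 8]  # only every 4th activation kept
# Backward pass would recompute the missing activation for layer 6:
assert recompute_activation(layers, ckpts, 6) == sum(range(6))  # 15
```

With checkpoints every `stride` layers, peak activation memory drops from O(n) to roughly O(n/stride + stride), at the cost of one extra partial forward pass per segment during backpropagation, a trade-off that suits memory-constrained edge devices.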

