Article

Distributed and Collaborative High-Speed Inference Deep Learning for Mobile Edge with Topological Dependencies

Journal

IEEE TRANSACTIONS ON CLOUD COMPUTING
Volume 10, Issue 2, Pages 821-834

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TCC.2020.2978846

Keywords

Deep learning in edge computing; deep learning in cloud computing; edge inference; edge with topological dependencies; intelligent edge; intelligent cloud computing

Funding

  1. Science Foundation of Ireland under Technology Innovation Development Award (TIDA) [P2038 SFI 17/TIDA/5130]

Abstract

Ubiquitous computing has the potential to harness the flexibility of distributed computing systems, including cloud, edge, and Internet of Things devices. Mobile edge computing (MEC) benefits time-critical applications by providing low-latency connections. However, most resource-constrained edge devices cannot feasibly host deep learning (DL) solutions. Furthermore, dense deployments of these edge devices give rise to topological dependencies which, if not taken into account, adversely affect MEC performance. To bring more intelligence to the edge under topological dependencies, this article proposes a novel collaborative distributed DL approach, in contrast to optimization heuristics. The proposed approach exploits the topological dependencies of the edge using a resource-optimized graph neural network (GNN) with accelerated inference. By exploiting collaborative edge learning based on stochastic gradient descent (SGD), the proposed approach, called CGNN-edge, ensures fast convergence and high accuracy. Collaborative learning across the deployed CGNN-edge incurs extra communication overhead and latency. To cope with this, the article also proposes a compressed collaborative learning scheme based on momentum correction, called cCGNN-edge, which scales better while preserving accuracy. Performance evaluation under an IEEE 802.11ax high-density wireless LAN deployment demonstrates that both schemes outperform cloud-based GNN inference in response time, satisfaction of latency requirements, and communication overhead.
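The momentum-corrected compression that the abstract attributes to cCGNN-edge is in the family of momentum-corrected top-k gradient sparsification (as popularized by deep gradient compression). The sketch below is an illustrative NumPy implementation of that general technique for one worker's update step, not the paper's actual algorithm; the function name, parameters, and top-k selection strategy are assumptions.

```python
import numpy as np

def compress_step(grad, velocity, residual, momentum=0.9, k_ratio=0.01):
    """One momentum-corrected top-k compression step for a single worker.

    `velocity` accumulates momentum locally, and `residual` keeps the
    not-yet-transmitted mass, so unsent gradient information is delayed
    rather than lost. Returns the sparse update to transmit plus the
    updated local state.
    """
    velocity = momentum * velocity + grad      # local momentum correction
    residual = residual + velocity             # accumulate unsent updates
    k = max(1, int(k_ratio * residual.size))
    # Pick the k largest-magnitude entries of the residual to transmit.
    idx = np.argpartition(np.abs(residual), -k)[-k:]
    sparse_update = np.zeros_like(residual)
    sparse_update[idx] = residual[idx]
    residual = residual.copy()
    velocity = velocity.copy()
    residual[idx] = 0.0                        # clear transmitted entries
    velocity[idx] = 0.0                        # and their pending momentum
    return sparse_update, velocity, residual
```

In a collaborative setting, each edge node would compute its local gradient, call a step like this, and exchange only the sparse update with its peers or an aggregator, which is how such schemes cut the communication overhead the abstract highlights.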

