Article

Joint Optimization of Service Caching Placement and Computation Offloading in Mobile Edge Computing Systems

Journal

IEEE Transactions on Wireless Communications
Volume 19, Issue 7, Pages 4947-4963

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TWC.2020.2988386

Keywords

Task analysis; Servers; Delays; Wireless communication; Resource management; Energy consumption; Optimization; Mobile edge computing; service caching; computation offloading; resource allocation

Funding

  1. National Natural Science Foundation of China [61871271]
  2. Guangdong Province Pearl River Scholar Funding Scheme 2018 [308/00003704]
  3. Foundation of Shenzhen City [JCYJ20170818101824392, JCYJ20190808120415286]
  4. Science and Technology Innovation Commission of Shenzhen [827/000212]
  5. Zhejiang Provincial Natural Science Foundation of China [LY19F020033]
  6. Hong Kong Research Grant Council [14208017]

Abstract

In mobile edge computing (MEC) systems, edge service caching refers to pre-storing at MEC servers the programs needed to execute computation tasks. Service caching effectively reduces the real-time delay and bandwidth cost of acquiring and initializing service applications when computation tasks are offloaded to the MEC servers. The limited caching space at resource-constrained edge servers calls for careful design of the caching placement, i.e., which programs to cache over time. This is in general a complicated problem that is tightly coupled with the computation offloading decisions, i.e., whether or not to offload each task for edge execution. In this paper, we consider a single edge server that assists a mobile user (MU) in executing a sequence of computation tasks. In particular, the MU can upload and run its customized programs at the edge server, while the server can selectively cache previously generated programs for future reuse. To minimize the computation delay and energy consumption of the MU, we formulate a mixed integer non-linear programming (MINLP) problem that jointly optimizes the service caching placement, the computation offloading decisions, and the system resource allocation (e.g., the CPU processing frequency and transmit power of the MU). To tackle the problem, we first derive closed-form expressions for the optimal resource allocation and then transform the MINLP into an equivalent pure 0-1 integer linear programming (ILP) problem that is much simpler to solve. To further reduce the complexity of solving the ILP, we exploit the underlying structures of the caching causality and task dependency models and devise a reduced-complexity alternating minimization technique that updates the caching placement and offloading decisions alternately. Extensive simulations show that the proposed joint optimization achieves substantial resource savings for the MU compared with representative benchmark methods.
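The alternating-minimization idea summarized above can be illustrated with a small toy sketch, shown below. This is not the paper's algorithm: the task and service-type numbers, the cost arrays (local_cost, offload_cost, upload_cost), the cache size, and the exhaustive inner searches are illustrative assumptions that merely stand in for the paper's closed-form resource allocation and reduced-complexity updates. The sketch only shows the alternating structure of fixing one block of binary variables (caching or offloading) while optimizing the other, subject to a simple caching-causality rule and a cache-capacity limit.

import itertools

# Toy setup: N sequential tasks, each requiring the program of one service type.
N = 6
CACHE_SLOTS = 2                       # edge cache can hold at most 2 programs
service_type = [0, 1, 0, 2, 1, 0]     # service type required by each task
local_cost   = [5.0, 4.0, 6.0, 5.5, 4.5, 6.5]   # weighted delay/energy of local execution
offload_cost = [2.0, 2.5, 2.0, 2.2, 2.1, 2.3]   # weighted cost of edge execution (program present)
upload_cost  = {0: 3.0, 1: 2.5, 2: 3.5}         # extra cost of uploading a program type

def total_cost(a, c):
    """Cost of offloading decisions a[n] and caching decisions c[n] (both 0/1)."""
    cost, cached = 0.0, set()
    for n in range(N):
        t = service_type[n]
        if a[n] == 0:
            cost += local_cost[n]
            available = t in cached            # program at the server only if cached earlier
        else:
            cost += offload_cost[n]
            if t not in cached:
                cost += upload_cost[t]         # program must be uploaded before edge execution
            available = True
        # Caching causality: only a program currently at the server may be kept in the cache.
        if c[n] == 1 and available:
            cached.add(t)
        elif c[n] == 0:
            cached.discard(t)
        if len(cached) > CACHE_SLOTS:
            return float("inf")                # cache capacity violated
    return cost

def best_vector(fixed, which):
    """Exhaustively optimize one block of binary variables with the other block fixed."""
    best, best_cost = None, float("inf")
    for bits in itertools.product([0, 1], repeat=N):
        a, c = (list(bits), fixed) if which == "offloading" else (fixed, list(bits))
        cost = total_cost(a, c)
        if cost < best_cost:
            best, best_cost = list(bits), cost
    return best, best_cost

# Alternating minimization: update offloading with caching fixed, then caching with
# offloading fixed, until the objective stops decreasing.
c, prev = [0] * N, float("inf")
while True:
    a, _ = best_vector(c, "offloading")
    c, cost = best_vector(a, "caching")
    if cost >= prev - 1e-9:
        break
    prev = cost

print("offloading decisions:", a)
print("caching decisions:   ", c)
print("total cost:          ", cost)

In the paper itself, the exhaustive inner searches used here are replaced by reduced-complexity updates derived from the caching causality and task dependency structures, and the continuous resource allocation (CPU frequency and transmit power) is handled in closed form before the binary variables are optimized.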

