Article

Realizing dynamic resource orchestration on cloud systems in the cloud-to-edge continuum

Journal

JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
Volume 160, Pages 100-109

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jpdc.2021.10.006

Keywords

Cloud computing; Edge computing; Hadoop; HDFS

The convergence of cloud computing and edge computing aims to address the challenges cloud systems face in processing data and tasks originating from edge devices. Dynamic resource allocation and adjustment can improve task execution efficiency, accelerate execution, reduce the load on edge devices, and increase the success rate of edge tasks completed on cloud systems.
Cloud computing has been widely utilized to handle the huge volumes of data produced by cutting-edge research areas such as Big Data and the Internet of Things (IoT). The rapid growth of edge devices makes it difficult for cloud systems to process all data and jobs originating from them, which has led to the development of edge computing, in which jobs are completed on edge devices instead of on clouds. Unfortunately, edge devices generally possess only limited computing power. Therefore, jobs demanding heavy computation under strict time constraints may have more difficulty completing their work on edge devices than on clouds in the Cloud-to-Edge continuum. If cloud systems could dynamically orchestrate cloud resources to expedite the execution of those jobs, not only could their timely execution be assured, but the load on edge devices could also be reduced. Apache Hadoop is considered one of the most popular cloud systems in industry and academia; however, it does not support dynamic resource allocation. We previously proposed and implemented a model that can dynamically adjust the computing resources assigned to given jobs in the Hadoop cloud system to speed up their execution. Like other software, cloud systems rely entirely on their underlying operating systems to access hardware components such as CPUs and hard drives. In this paper, we report our efforts to improve our model so that it collaborates with the Linux operating system to accelerate the execution of high-priority jobs to a greater extent. Experiments show that, compared with our original model, the improved model can further speed up the execution of prioritized jobs in Hadoop by up to around 21%. As a result, jobs from the edge that promptly require substantial computing resources have a better chance of being completed on cloud systems. (C) 2021 Elsevier Inc. All rights reserved.
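
The abstract does not describe which Linux mechanisms the improved model relies on, so the following is only a minimal Python sketch of one plausible form of such OS-level collaboration: raising the CPU scheduling priority (nice value) and best-effort disk I/O priority of the processes belonging to a prioritized Hadoop job. The application-ID format, the function names (pids_of_application, boost_job), and the chosen priority values are illustrative assumptions, not the authors' implementation.

#!/usr/bin/env python3
# Illustrative sketch only (not the paper's implementation): one way a cloud
# system could ask Linux to favor a high-priority Hadoop job by raising the
# CPU and disk-I/O priority of its container processes. Assumes the YARN
# container processes embed the application ID in their command line and that
# the script has sufficient privileges (negative nice values require root).

import os
import subprocess

def pids_of_application(app_id: str) -> list[int]:
    """Find PIDs whose command line mentions the given YARN application ID."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                cmdline = f.read().decode(errors="replace")
        except OSError:
            continue  # process exited or is inaccessible
        if app_id in cmdline:
            pids.append(int(entry))
    return pids

def boost_job(app_id: str, nice: int = -5, io_level: int = 0) -> None:
    """Raise CPU and disk-I/O priority of every process belonging to app_id."""
    for pid in pids_of_application(app_id):
        try:
            # CPU: a lower nice value means a higher scheduling priority.
            os.setpriority(os.PRIO_PROCESS, pid, nice)
            # Disk: best-effort class (-c 2), highest level (-n 0), via ionice(1).
            subprocess.run(["ionice", "-c", "2", "-n", str(io_level),
                            "-p", str(pid)], check=True)
        except (OSError, subprocess.CalledProcessError) as exc:
            print(f"could not reprioritize pid {pid}: {exc}")

if __name__ == "__main__":
    # Hypothetical application ID of the job to be prioritized.
    boost_job("application_1639999999999_0042")

In a real deployment this kind of adjustment would presumably be triggered by the orchestration model itself when a high-priority job is admitted, rather than run by hand; cgroup-based CPU and block-I/O controls would be an alternative to nice/ionice under the same general idea.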
