Article

Digital twin-driven deep reinforcement learning for adaptive task allocation in robotic construction

Journal

ADVANCED ENGINEERING INFORMATICS
Volume 53

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.aei.2022.101710

Keywords

Digital Twin; Proximal Policy Optimization (PPO); Deep Reinforcement Learning (DRL); Autonomous Robot; Adaptive Task Allocation

Funding

  1. MCubed, United States [8505]
  2. Jiangsu Industrial Technology Research Institute [12494320]
  3. National Research Foundation of Korea (NRF) - Korea government Ministry of Science and ICT (MSIT), South Korea [NRF2020R1A4A4078916, 2022R1G1A1012897]


To accomplish diverse tasks successfully in a dynamic (i.e., changing over time) construction environment, robots should be able to prioritize assigned tasks to optimize their performance in a given state. Recently, deep reinforcement learning (DRL) has shown potential for addressing such adaptive task allocation. It remains unanswered, however, whether DRL can address adaptive task allocation problems in dynamic robotic construction environments. In this paper, we developed and tested a digital twin-driven DRL learning method to explore the potential of DRL for adaptive task allocation in robotic construction environments. Specifically, the digital twin synthesizes sensory data from physical assets and is used to simulate a variety of dynamic robotic construction site conditions with which a DRL agent can interact. As a result, the agent can learn an adaptive task allocation strategy that increases project performance. We tested this method on a case project in which a virtual robotic construction project (i.e., interlocking concrete bricks delivered and assembled by robots) was digitally twinned for DRL training and testing. Results indicated that the DRL model's task allocation approach reduced construction time by 36% across three dynamic testing environments when compared to a rule-based imperative model. The proposed DRL learning method promises to be an effective tool for adaptive task allocation in dynamic robotic construction environments. Such an adaptive task allocation method can help construction robots cope with uncertainties and can ultimately improve construction project performance by efficiently prioritizing assigned tasks.
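The task-allocation setting the abstract describes can be framed as a Markov decision process: the state is the set of pending tasks under current site conditions, the action is which task to execute next, and the reward penalizes elapsed time so that maximizing return minimizes construction makespan. The sketch below illustrates this framing with a toy environment and the kind of rule-based imperative baseline the paper compares against. It is a minimal illustration only: the environment dynamics, task attributes, and policy here are assumptions for exposition, not the authors' digital-twin model or their PPO agent.

```python
import random

class ConstructionTaskEnv:
    """Toy stand-in for a dynamic task-allocation environment.

    State: the pending tasks, each with a base duration and a weight.
    Action: index of the next task to execute.
    Reward: negative task duration, so return = -makespan.
    All dynamics here are illustrative assumptions, not the paper's model.
    """

    def __init__(self, n_tasks=5, seed=0):
        self.rng = random.Random(seed)
        self.n_tasks = n_tasks
        self.reset()

    def reset(self):
        # Each task: (base_duration, weight). Values are arbitrary for the demo.
        self.tasks = [(self.rng.uniform(1.0, 5.0), self.rng.uniform(0.5, 2.0))
                      for _ in range(self.n_tasks)]
        self.remaining = set(range(self.n_tasks))
        self.t = 0.0
        return self._obs()

    def _obs(self):
        # Observation: (task index, base duration, weight) for each pending task.
        return [(i, *self.tasks[i]) for i in sorted(self.remaining)]

    def step(self, action):
        assert action in self.remaining
        base, weight = self.tasks[action]
        # Dynamic environment: a task's effective duration grows the longer
        # it waits, so the order of execution matters.
        duration = base * weight * (1.0 + 0.05 * self.t)
        self.t += duration
        self.remaining.discard(action)
        done = not self.remaining
        return self._obs(), -duration, done

def rule_based_policy(obs):
    # Imperative baseline: always execute the lowest-indexed pending task,
    # ignoring current site conditions.
    return obs[0][0]

env = ConstructionTaskEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(rule_based_policy(obs))
print(f"rule-based makespan: {env.t:.2f}")
```

A DRL agent such as PPO would replace `rule_based_policy` with a learned state-conditioned policy, trained over many simulated episodes of the digital twin; the 36% reduction reported above reflects that an adaptive ordering outperforms a fixed rule when conditions change over time.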

