Article

Deep Reinforcement Learning Control of Fully-Constrained Cable-Driven Parallel Robots

Journal

IEEE Transactions on Industrial Electronics
Volume 70, Issue 7, Pages 7194-7204

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TIE.2022.3203763

Keywords

Uncertainty; Reinforcement learning; Markov processes; Adaptation models; Parallel robots; Heuristic algorithms; End effectors; Cable-driven parallel robots (CDPRs); deep reinforcement learning; parameter uncertainties


Cable-driven parallel robots (CDPRs) exhibit complex cable dynamics and operate under working-environment uncertainties, which makes precise control challenging. This article introduces reinforcement learning to offset the negative effect of these uncertainties on the control performance of CDPRs. Controller design for CDPRs is investigated within a deep reinforcement learning framework, and a learning-based control algorithm is proposed to compensate for uncertainties arising from cable elasticity, mechanical friction, and related effects. A basic control law is given for the nominal model, and a Lyapunov-based deep reinforcement learning control law is designed on top of it. Moreover, the stability of the closed-loop tracking system under the reinforcement learning algorithm is proved. Both simulations and experiments validate the effectiveness and advantages of the proposed control algorithm.
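The control structure described in the abstract combines a model-based law for the nominal plant with a learned correction for unmodeled effects. As a rough illustration only, the Python sketch below shows one way such a composite command could be organized. The function names, gains, state definition, and policy interface are assumptions made here for illustration; they are not taken from the paper and do not reproduce its Lyapunov-based design.

```python
import numpy as np

# Illustrative sketch only: a nominal PD-style feedback law for a CDPR,
# augmented by a learned compensation term. All gains, signals, and the
# policy interface are hypothetical placeholders, not the authors' algorithm.

def nominal_control(q, q_dot, q_ref, q_ref_dot, Kp, Kd):
    """Basic control law for the nominal (uncertainty-free) model."""
    e = q_ref - q              # pose tracking error
    e_dot = q_ref_dot - q_dot  # velocity tracking error
    return Kp @ e + Kd @ e_dot

def learned_compensation(policy, state):
    """Learned term intended to offset unmodeled effects (e.g., cable elasticity, friction)."""
    return policy(state)       # e.g., a trained actor network mapping state -> correction

def control_input(q, q_dot, q_ref, q_ref_dot, Kp, Kd, policy):
    """Total command: nominal feedback law plus learned correction."""
    state = np.concatenate([q_ref - q, q_ref_dot - q_dot])
    return nominal_control(q, q_dot, q_ref, q_ref_dot, Kp, Kd) + learned_compensation(policy, state)

if __name__ == "__main__":
    # Hypothetical 6-DoF example; a zero "policy" stands in for a trained network.
    n = 6
    Kp, Kd = 50.0 * np.eye(n), 10.0 * np.eye(n)
    policy = lambda s: np.zeros(n)
    u = control_input(np.zeros(n), np.zeros(n), np.ones(n), np.zeros(n), Kp, Kd, policy)
    print(u)
```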

