Article

Multi-agent reinforcement learning for redundant robot control in task-space

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s13042-020-01167-7

Keywords

Multi-agent; Reinforcement learning; Redundant robot

This paper proposes a fully cooperative multi-agent reinforcement learning (MARL) method to solve the kinematic problem of redundant robots. Each joint of the robot is regarded as one agent, which avoids function approximators and a large learning space. Experimental results show that the proposed MARL outperforms classic methods such as Jacobian-based methods and neural networks.

Task-space control needs the inverse kinematics solution or the Jacobian matrix to transform from task space to joint space. However, these are not always available for redundant robots, because they have more joint degrees-of-freedom than Cartesian degrees-of-freedom. Intelligent learning methods, such as neural networks (NN) and reinforcement learning (RL), can learn the inverse kinematics solution. However, NN requires large amounts of data, and classical RL is not suitable for multi-link robots controlled in task space. In this paper, we propose a fully cooperative multi-agent reinforcement learning (MARL) method to solve the kinematic problem of redundant robots. Each joint of the robot is regarded as one agent. The fully cooperative MARL uses kinematic learning to avoid function approximators and a large learning space. The convergence property of the proposed MARL is analyzed. The experimental results show that our MARL performs much better than classic methods such as Jacobian-based methods and neural networks.
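The core idea in the abstract, one agent per joint with a shared cooperative reward and a discretized learning space instead of function approximators, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it is a hypothetical example using independent tabular Q-learning agents on a planar 3-link arm (2-D task space, so the arm is redundant), with assumed link lengths, action increments, and discretization:

```python
import numpy as np

# Hypothetical sketch: each joint of a redundant planar 3-link arm is a
# separate tabular Q-learning agent; all agents receive one shared
# cooperative reward (the reduction of the task-space error).

LINKS = np.array([1.0, 0.8, 0.6])       # link lengths (assumed values)
ACTIONS = np.array([-0.05, 0.0, 0.05])  # joint-angle increments in rad (assumed)

def fk(q):
    """Forward kinematics of the planar arm: joint angles -> (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def bin_state(err, n_bins=9, lim=1.0):
    """Discretize the 2-D task-space error into a single state index."""
    idx = np.clip(((err + lim) / (2 * lim) * n_bins).astype(int), 0, n_bins - 1)
    return idx[0] * n_bins + idx[1]

def train(target, episodes=2000, alpha=0.3, gamma=0.9, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n_states = 81                        # n_bins ** 2 for the default binning
    # One independent Q-table per joint agent: no function approximator,
    # and each table is small because each agent only chooses its own action.
    Q = [np.zeros((n_states, len(ACTIONS))) for _ in LINKS]
    for _ in range(episodes):
        q = rng.uniform(-np.pi, np.pi, len(LINKS))
        for _ in range(50):
            s = bin_state(target - fk(q))
            # Epsilon-greedy action selection, independently per agent.
            acts = [rng.integers(len(ACTIONS)) if rng.random() < eps
                    else int(np.argmax(Qi[s])) for Qi in Q]
            d0 = np.linalg.norm(target - fk(q))
            q = q + ACTIONS[acts]
            d1 = np.linalg.norm(target - fk(q))
            r = d0 - d1                  # shared cooperative reward
            s2 = bin_state(target - fk(q))
            for Qi, a in zip(Q, acts):
                Qi[s, a] += alpha * (r + gamma * Qi[s2].max() - Qi[s, a])
            if d1 < 0.05:                # close enough to the target
                break
    return Q
```

Because all agents optimize the same reward, the setting is fully cooperative, and the per-joint Q-tables stay tiny compared with a single table over the joint action space (3 actions per agent versus 27 joint actions here), which is the learning-space reduction the abstract refers to.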
