Journal
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS
Volume 53, Issue 4, Pages 2456-2468
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSMC.2022.3214221
Keywords
Consensus models; decision making; decision support systems; deep learning; reinforcement learning; Z-numbers
This study proposes novel reinforcement learning-based adjustment mechanisms to address the tradeoff between the number of discussion rounds and the harmony degree of decision makers in group decision-making. By converting the decision environment into a Markov decision process, two independent reinforcement learning agents are trained to adjust feedback parameters and weights of decision makers, aiming to reduce discussion rounds and improve harmony degree.
The number of discussion rounds and harmony degree of decision makers are two crucial efficiency measures to be considered in the design of the consensus-reaching process for the group decision-making problems. Adjusting the feedback parameter and importance weights of the decision makers in the recommendation mechanism has a great impact on these efficiency measures. This work aims to propose novel and efficient reinforcement learning-based adjustment mechanisms to address the tradeoff between the aforementioned measures. To employ these adjustment mechanisms, we propose to extract the dynamics of state transition from consensus models based on the distributed trust functions and Z-Numbers in order to convert the decision environment into a Markov decision process. Two independent reinforcement learning agents are then trained via a deep deterministic policy gradient algorithm to adjust the feedback parameter and importance weights of decision makers. The first agent is trained toward reducing the number of discussion rounds while ensuring the highest possible level of harmony degree among the decision makers. The second agent merely speeds up the consensus reaching process by adjusting the importance weights of the decision makers. Various experiments are designed to verify the applicability and scalability of the proposed feedback and weight-adjustment mechanisms in different decision environments.
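The abstract frames the consensus-reaching process as a Markov decision process in which an agent's action (the feedback parameter) trades off the number of discussion rounds against the harmony degree. The paper's actual dynamics are built on distributed trust functions and Z-numbers and trained with DDPG; the following is only a minimal illustrative sketch of that tradeoff, assuming a simple weighted-averaging feedback rule and a deviation-based harmony measure (`harmony_degree`, `consensus_step`, and `run_episode` are hypothetical names, not the paper's).

```python
def harmony_degree(opinions):
    # Illustrative harmony measure: 1 minus the mean absolute
    # deviation of individual opinions from the group mean.
    mean = sum(opinions) / len(opinions)
    return 1.0 - sum(abs(o - mean) for o in opinions) / len(opinions)

def consensus_step(opinions, weights, feedback):
    # One discussion round: each decision maker moves toward the
    # importance-weighted collective opinion by the feedback parameter.
    total = sum(weights)
    collective = sum(w * o for w, o in zip(weights, opinions)) / total
    return [(1 - feedback) * o + feedback * collective for o in opinions]

def run_episode(opinions, weights, feedback, threshold=0.95, max_rounds=50):
    # Iterate rounds until the harmony threshold is met; an RL agent
    # would choose `feedback` (and adjust `weights`) to minimize rounds
    # while keeping the final harmony degree high.
    rounds = 0
    while harmony_degree(opinions) < threshold and rounds < max_rounds:
        opinions = consensus_step(opinions, weights, feedback)
        rounds += 1
    return rounds, harmony_degree(opinions)
```

Under this toy rule, a larger feedback parameter contracts disagreement faster, so fewer rounds are needed to reach the same harmony threshold; the paper's agents learn such adjustments from the extracted state-transition dynamics rather than using a fixed rule.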