Journal
KNOWLEDGE-BASED SYSTEMS
Volume 217, Issue -, Pages -
Publisher: ELSEVIER
DOI: 10.1016/j.knosys.2021.106854
Keywords
Safe reinforcement learning; Human-in-the-loop reinforcement learning; Markov decision processes; Supervised learning
Using simulators is a cost-effective way to meet human needs. Nevertheless, inevitable errors arising from the gap between simulation and the real world can cause great losses and must be taken seriously. This paper focuses on one cause of that gap, the incomplete state representation in simulation, and proposes a supervised learning approach that corrects human-unacceptable policies calculated by simulators based on human feedback. The approach first detects the related blind spots with classifiers trained on data aggregated from noisy human feedback. It then corrects the human-unacceptable policies through a complementary model built on linear function approximation (LFA) and FRU-SADPP, a policy iteration algorithm that uses radial basis functions (RBFs). We evaluate the approach on two simulated domains and demonstrate that it yields more accurate policies than two baselines, across three typical kinds of human suboptimality and human error and three types of human feedback. Experiments also show the scalability of the approach. (C) 2021 Elsevier B.V. All rights reserved.
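The abstract's combination of linear function approximation with radial basis functions can be illustrated with a minimal sketch. This is not the paper's FRU-SADPP algorithm; the centers, width, learning rate, and the scalar state space here are hypothetical choices used only to show how a value estimate that is linear in RBF features can be nudged toward a feedback-derived target.

```python
import numpy as np

def rbf_features(state, centers, sigma=0.5):
    """Map a scalar state to one Gaussian RBF activation per center."""
    return np.exp(-((state - centers) ** 2) / (2 * sigma ** 2))

centers = np.linspace(0.0, 1.0, 5)   # 5 RBF centers over a unit interval
weights = np.zeros(len(centers))     # linear weights, updated iteratively

def value(state):
    """V(s) ~ w . phi(s): linear in the RBF features."""
    return weights @ rbf_features(state, centers)

def supervised_update(state, target, alpha=0.1):
    """One gradient step pulling V(state) toward a supervised target
    (e.g., a value implied by human feedback)."""
    global weights
    phi = rbf_features(state, centers)
    weights += alpha * (target - value(state)) * phi

# Repeated updates on one state drive the estimate toward the target.
for _ in range(200):
    supervised_update(0.5, 1.0)
```

Because the features are smooth Gaussians, correcting the value at one state also generalizes to nearby states, which is one common motivation for RBF features in policy iteration.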