Journal
Machine Learning
Volume 111, Issue 4, Pages 1523-1549
Publisher
Springer
DOI: 10.1007/s10994-022-06142-7
Keywords
Neuro-symbolic; Hierarchical reinforcement learning; Deep reinforcement learning; Inductive logic programming; Answer set programming
In this paper, we introduce Detect, Understand, Act (DUA), a neuro-symbolic reinforcement learning framework. The Detect component is composed of a traditional computer vision object detector and tracker. The Act component houses a set of options: high-level actions enacted by pre-trained deep reinforcement learning (DRL) policies. The Understand component provides a novel answer set programming (ASP) paradigm for symbolically implementing a meta-policy over options and effectively learning it using inductive logic programming (ILP). We evaluate our framework on the Animal-AI (AAI) competition testbed, a set of physical cognitive reasoning problems. Given a set of pre-trained DRL policies, DUA requires only a few examples to learn a meta-policy that allows it to improve on the state of the art in several of the most challenging categories from the testbed. DUA constitutes the first holistic hybrid integration of computer vision, ILP and DRL applied to an AAI-like environment and sets the foundations for further use of ILP in complex DRL challenges.
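The three-stage loop the abstract describes can be illustrated with a minimal sketch. All names and interfaces below are hypothetical stand-ins, not the authors' implementation: `detect` abstracts the vision detector/tracker into symbolic facts, `understand` plays the role of the ASP/ILP-learned meta-policy selecting an option, and `act` stands in for a pre-trained DRL option policy emitting a primitive action.

```python
# Hypothetical sketch of the DUA Detect -> Understand -> Act loop.
# Component names, the symbolic fact format, and the option set are
# illustrative assumptions, not the paper's actual interfaces.

def detect(frame: dict) -> set:
    """Stand-in for the object detector/tracker: raw observation -> symbolic facts."""
    return {("green_goal", "visible")} if frame.get("goal_in_view") else set()

def understand(facts: set) -> str:
    """Stand-in for the ILP-learned meta-policy over options (ASP-style rules
    reduced here to a simple conditional)."""
    if ("green_goal", "visible") in facts:
        return "go_to_goal"
    return "explore"

def act(option: str) -> str:
    """Stand-in for a pre-trained DRL option policy returning a primitive action."""
    return {"go_to_goal": "forward", "explore": "rotate"}[option]

def dua_step(frame: dict) -> str:
    """One pass through the full pipeline for a single observation."""
    facts = detect(frame)
    option = understand(facts)
    return act(option)
```

For example, an observation with the goal in view would route through the `go_to_goal` option (`dua_step({"goal_in_view": True})` yields `"forward"`), while one without it falls back to exploration. In the actual framework, the meta-policy in `understand` is learned from a few examples via ILP rather than hand-written.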