Article

Automatic discovery of interpretable planning strategies

Journal

MACHINE LEARNING
Volume 110, Issue 9, Pages 2641-2683

Publisher

SPRINGER
DOI: 10.1007/s10994-021-05963-2

Keywords

Interpretability; Automatic strategy discovery; Decision support; Imitation learning; Program induction; Reinforcement learning; Rationality enhancement

Funding

  1. German Federal Ministry of Education and Research (BMBF): Tübingen AI Center [FKZ: 01IS18039B]

The study introduces AI-Interpret, a new algorithm that transforms idiosyncratic policies into simple, interpretable descriptions, helping human experts design effective decision aids. Experimental results show that presenting the decision rules generated by AI-Interpret as flowcharts significantly improves people's planning strategies and decisions across three different classes of sequential decision problems. These findings suggest that automatic strategy discovery can be leveraged to effectively enhance human decision-making.
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people's planning strategies and decisions across three different classes of sequential decision problems. Moreover, our fourth experiment revealed that this approach is significantly more effective at improving human decision-making than training people by giving them performance feedback. Finally, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making. The code for our algorithm and the experiments is publicly available.
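The abstract outlines the key computational idea: start from demonstrations of an opaque learned policy and search for the largest subset that a simple, human-readable decision rule can imitate accurately. Below is a minimal Python sketch of that prune-and-induce loop under strong simplifying assumptions: the "program" space is reduced to single-feature threshold rules, and `induce_rule`, `ai_interpret_sketch`, and all thresholds are hypothetical illustrations, not the authors' implementation (which combines imitation learning, program induction over logical formulas, and a dedicated clustering method).

```python
# Minimal sketch of the idea behind AI-Interpret as described in the
# abstract: find a large subset of demonstrations of an opaque policy
# that a simple decision rule imitates accurately. The rule space,
# helper names, and thresholds are illustrative assumptions, NOT the
# authors' implementation.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

State = Tuple[float, ...]
Action = int


@dataclass
class Demonstration:
    state: State    # observation shown to the learned policy
    action: Action  # action the opaque policy took in that state


def induce_rule(demos: List[Demonstration]) -> Callable[[State], Action]:
    """Stand-in for program induction: exhaustively fit the best rule in
    a toy language of single-feature threshold rules. The paper searches
    a much richer space of logical formulas."""
    actions = {d.action for d in demos}
    best, best_acc = None, -1.0
    for feature in range(len(demos[0].state)):
        for d in demos:
            threshold = d.state[feature]
            for a_true in actions:
                a_false = next((a for a in actions if a != a_true), a_true)
                rule = (lambda s, f=feature, t=threshold, hi=a_true, lo=a_false:
                        hi if s[f] >= t else lo)
                acc = sum(rule(x.state) == x.action for x in demos) / len(demos)
                if acc > best_acc:
                    best, best_acc = rule, acc
    return best


def ai_interpret_sketch(
    demos: List[Demonstration],
    min_coverage: float = 0.7,
    target_accuracy: float = 0.95,
) -> Tuple[Optional[Callable[[State], Action]], List[Demonstration]]:
    """Shrink the demonstration set until a simple rule explains it well.
    The paper removes hard-to-explain demonstrations with a dedicated
    clustering method; here we naively drop whatever the rule gets wrong."""
    kept = list(demos)
    while len(kept) >= min_coverage * len(demos):
        rule = induce_rule(kept)
        mistakes = [d for d in kept if rule(d.state) != d.action]
        if len(mistakes) <= (1.0 - target_accuracy) * len(kept):
            return rule, kept  # a simple rule covers a large subset
        kept = [d for d in kept if rule(d.state) == d.action]
    return None, kept  # no simple rule covers enough demonstrations


# Toy usage: the opaque policy happens to threshold a single feature.
demos = [Demonstration((x,), int(x >= 0.5))
         for x in (0.1, 0.2, 0.3, 0.6, 0.7, 0.9)]
rule, covered = ai_interpret_sketch(demos)
print(rule((0.8,)))  # -> 1, and the rule covers all six demonstrations
```

In the paper's experiments, the induced rules were rendered as flowcharts for participants; that presentation step is outside the scope of this sketch.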
