Article

Interpretable machine learning: Fundamental principles and 10 grand challenges

Journal

Statistics Surveys
Volume 16, Pages 1-85

Publisher

American Statistical Association
DOI: 10.1214/21-SS133

Keywords

Interpretable machine learning; explainable machine learning

Funding

  1. DOE [DE-SC0021358]
  2. NSF [DGE-2022040, CCF-1934964]
  3. NIDA [DA054994-01]

Abstract

Interpretability in machine learning (ML) is crucial for high-stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some are recent problems that have arisen in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimizing scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the Rashomon set of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.

