Article

Understanding from Machine Learning Models

Journal

The British Journal for the Philosophy of Science
Volume 73, Issue 1, Pages 109-133

Publisher

University of Chicago Press
DOI: 10.1093/bjps/axz035

Keywords

-

Funding

  1. Free University of Amsterdam
  2. University of Connecticut [58942]
  3. John Templeton Foundation

Abstract

Scientists are increasingly using opaque machine learning models instead of simple idealized models, suggesting a willingness to sacrifice understanding for other benefits. Using deep neural networks as an example, this article argues that it is not opacity itself but the lack of scientific and empirical evidence linking a model to its target phenomenon that primarily limits the model's capacity to provide understanding.
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this article, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
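
The contrast the abstract draws can be made concrete with a small illustrative sketch (not from the paper): the same prediction task is handled by a simple idealized model, whose fitted coefficients can be read off directly, and by an opaque neural network that predicts comparably well but exposes no similarly interpretable parameters. The synthetic data, the scikit-learn estimators (LinearRegression, MLPRegressor), and all settings below are illustrative assumptions, not anything reported in the article.

# Illustrative sketch (assumed, not from the paper): the same prediction task
# given to a simple idealized model and to an opaque neural network.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a phenomenon of interest: a noisy, roughly linear
# relationship between two features and an outcome.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=500)

# Simple idealized model: its fitted coefficients can be inspected and related
# directly to assumptions about the target.
idealized = LinearRegression().fit(X, y)
print("idealized coefficients:", idealized.coef_)
print("idealized R^2:", idealized.score(X, y))

# Opaque model: a small multilayer perceptron that predicts comparably well
# but offers no readable parameters of this kind.
opaque = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                      random_state=0).fit(X, y)
print("opaque R^2:", opaque.score(X, y))

Both models can score well on the same data; the point of the sketch is only that the second yields no coefficients to inspect, which is the kind of opacity the article argues is not, by itself, what blocks understanding.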

Authors

Emily Sullivan
