Article

Benchmarking explanation methods for mental state decoding with deep learning models

Journal

NeuroImage
Volume 273

Publisher

Academic Press Inc Elsevier Science
DOI: 10.1016/j.neuroimage.2023.120109

Keywords

Neuroimaging; Mental state decoding; Deep learning; Explainable AI; Benchmark


Deep learning models are widely used in mental state decoding to accurately identify the mapping between mental states and brain activity. Researchers often use explainable AI methods to understand the mappings that a trained model has learned. This study benchmarks prominent explanation methods in mental state decoding and provides guidance for neuroimaging researchers on choosing an appropriate explanation method.
Deep learning (DL) models find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (e.g., experiencing anger or joy) and brain activity by identifying those spatial and temporal features of brain activity that allow these states to be accurately identified (i.e., decoded). Once a DL model has been trained to accurately decode a set of mental states, neuroimaging researchers often make use of methods from explainable artificial intelligence research to understand the model's learned mappings between mental states and brain activity. Here, we benchmark prominent explanation methods in a mental state decoding analysis of multiple functional Magnetic Resonance Imaging (fMRI) datasets. Our findings demonstrate a gradient between two key characteristics of an explanation in mental state decoding, namely, its faithfulness and its alignment with other empirical evidence on the mapping between brain activity and decoded mental state: explanation methods with high explanation faithfulness, which capture the model's decision process well, generally provide explanations that align less well with other empirical evidence than the explanations of methods with less faithfulness. Based on our findings, we provide guidance for neuroimaging researchers on how to choose an explanation method to gain insight into the mental state decoding decisions of DL models.
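To make the two characteristics concrete, here is a minimal, hypothetical sketch of the kind of analysis the abstract describes: computing an attribution map for a trained decoding model with one prominent explanation method (Integrated Gradients, via the captum library) and probing the explanation's faithfulness with a simple occlusion test. The decoder architecture, input shape, and 5% occlusion threshold are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: explain a decoding decision, then probe faithfulness.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Illustrative decoder: maps a flattened whole-brain volume to mental-state logits.
n_voxels, n_states = 50_000, 8
model = nn.Sequential(nn.Linear(n_voxels, 256), nn.ReLU(), nn.Linear(256, n_states))
model.eval()

# One fMRI sample (batch of 1) and the mental state the model decodes for it.
x = torch.randn(1, n_voxels)
decoded_state = model(x).argmax(dim=1).item()

# 1) Explanation: attribute the decoding decision to input voxels.
ig = IntegratedGradients(model)
attributions = ig.attribute(x, baselines=torch.zeros_like(x), target=decoded_state)

# 2) Faithfulness probe: occlude the top-attributed voxels and measure how much
#    the model's confidence in the decoded state drops. A faithful explanation
#    identifies voxels whose removal strongly degrades the decision.
k = int(0.05 * n_voxels)  # occlude the top 5% of voxels (arbitrary choice)
top_voxels = attributions.abs().squeeze().topk(k).indices
x_occluded = x.clone()
x_occluded[0, top_voxels] = 0.0

with torch.no_grad():
    p_orig = torch.softmax(model(x), dim=1)[0, decoded_state]
    p_occl = torch.softmax(model(x_occluded), dim=1)[0, decoded_state]

print(f"confidence drop after occlusion: {p_orig - p_occl:.3f}")
```

Deletion-style tests of this kind are one way to operationalize faithfulness: if zeroing the most-attributed voxels barely changes the model's decision, the explanation does not capture the features the model actually relies on. Alignment with empirical evidence would instead be assessed by comparing the attribution map against independent findings, such as meta-analytic activation maps for the decoded state.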


