Review

Interpretable machine learning for dementia: A systematic review

Journal

ALZHEIMERS & DEMENTIA
Volume -, Issue -, Pages -

Publisher

WILEY
DOI: 10.1002/alz.12948

Keywords

dementia; diagnosis; explainable artificial intelligence; interpretability; machine learning; mild cognitive impairment

Abstract

INTRODUCTION: Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently interpretable, whereas post hoc explainability methods can be used for other models.

METHODS: Here we sought to summarize the state of the art of interpretable machine learning for dementia.

RESULTS: We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards and rely heavily on popular data sets.

DISCUSSION: Future work should incorporate clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of the interpretability method itself. Patient-specific explanations are also required to demonstrate the benefit of interpretable machine learning in clinical practice.
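The distinction the abstract draws between inherently interpretable models and post hoc explainability methods can be illustrated with a minimal sketch. This is not a method from any of the reviewed studies; it assumes a scikit-learn workflow on synthetic tabular data, with permutation importance standing in as one example of a model-agnostic post hoc technique, and all feature values are hypothetical.

# Minimal sketch (assumed scikit-learn workflow, synthetic data): contrasts an
# inherently interpretable model with a post hoc, model-agnostic explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical tabular features standing in for, e.g., cognitive test scores.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: the fitted coefficients are themselves the explanation.
interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression coefficients:", interpretable.coef_.round(2))

# Less transparent model, explained post hoc via permutation importance.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean.round(3))

Permutation importance is used here only because it is a widely available model-agnostic method; the review itself surveys a broader range of interpretability and explainability techniques.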
