Article

Mutual Explanations for Cooperative Decision Making in Medicine

Journal

KÜNSTLICHE INTELLIGENZ
Volume 34, Issue 2, Pages 227-233

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s13218-020-00633-2

Keywords

Human-AI partnership; Inductive Logic Programming; Explanations as constraints

Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine deep learning black-box approaches with interpretable machine learning for the classification of different types of medical images, uniting the predictive accuracy of deep learning with the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph that allows for interactive learning. Medical experts can ask for verbal explanations; they can correct classification decisions and, in addition, correct the explanations themselves. Thereby, expert knowledge is taken into account in the form of constraints for model adaptation.
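The abstract describes an interaction pattern: the system presents a classification together with a verbal explanation, and the expert may correct either the decision or the explanation, with each correction stored as a constraint on subsequent model adaptation. The following is a minimal Python sketch of that loop under stated assumptions; the actual system extends the Prolog-based ILP system Aleph, and all names here (Rule, ConstraintStore, interactive_round, the feature names) are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of mutual-explanation interaction, assuming a rule-based
# classifier whose explanations are the rule conditions themselves.
# Hypothetical names throughout; the paper's system is built on Aleph/Prolog.

from dataclasses import dataclass, field


@dataclass
class Rule:
    """A learned rule: the class it predicts and the conditions it requires."""
    label: str
    conditions: frozenset

    def covers(self, example: set) -> bool:
        return self.conditions <= example

    def explain(self) -> str:
        return f"'{self.label}' because: {', '.join(sorted(self.conditions))}"


@dataclass
class ConstraintStore:
    """Expert corrections kept as constraints for the next learning round."""
    forbidden_conditions: set = field(default_factory=set)  # rejected reasons
    relabeled: dict = field(default_factory=dict)           # example id -> label

    def admissible(self, rule: Rule) -> bool:
        # A rule violating any explanation correction must be relearned.
        return not (rule.conditions & self.forbidden_conditions)


def interactive_round(rule: Rule, example_id: str, example: set,
                      store: ConstraintStore, feedback: dict) -> None:
    """One round: show prediction plus explanation, record expert corrections."""
    if rule.covers(example):
        print(example_id, "->", rule.explain())
    if "correct_label" in feedback:        # expert corrects the decision
        store.relabeled[example_id] = feedback["correct_label"]
    if "wrong_reason" in feedback:         # expert corrects the explanation
        store.forbidden_conditions.add(feedback["wrong_reason"])


if __name__ == "__main__":
    rule = Rule("malignant", frozenset({"irregular_border", "artifact_shadow"}))
    store = ConstraintStore()
    # Expert rejects 'artifact_shadow' as a valid reason for the decision.
    interactive_round(rule, "scan_01", {"irregular_border", "artifact_shadow"},
                      store, {"wrong_reason": "artifact_shadow"})
    print("rule still admissible?", store.admissible(rule))  # False -> relearn
```

In the sketch, a rejected explanation condition plays the role of an integrity constraint: any rule relying on it is marked inadmissible and triggers relearning, mirroring the paper's idea of explanations as constraints for model adaptation.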
