Article

Explaining Multiclass Compound Activity Predictions Using Counterfactuals and Shapley Values

Journal

MOLECULES
Volume 28, Issue 14

Publisher

MDPI
DOI: 10.3390/molecules28145601

Keywords

machine learning; multiclass activity prediction models; dual-target compounds; single-target compounds; explainable artificial intelligence; counterfactuals; SHAP values


Abstract

Most machine learning (ML) models produce black box predictions that are difficult, if not impossible, to understand. In pharmaceutical research, black box predictions work against the acceptance of ML models for guiding experimental work. Hence, there is increasing interest in approaches for explainable ML, a part of explainable artificial intelligence (XAI), to better understand prediction outcomes. Herein, we have devised a test system for the rationalization of multiclass compound activity prediction models that combines two XAI approaches for feature relevance or importance analysis: counterfactuals (CFs) and Shapley additive explanations (SHAP). For compounds with different single- and dual-target activities, we identified small compound modifications that induce feature changes inverting class label predictions. In combination with feature mapping, CF and SHAP value calculations provide chemically intuitive explanations for model decisions.
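The SHAP analysis referenced in the abstract attributes a model's prediction to individual input features via Shapley values. A minimal sketch of the exact Shapley computation on a toy linear surrogate model (the weights, feature values, and baseline below are hypothetical illustrations, not data from the paper):

```python
from itertools import combinations
from math import factorial

# Hypothetical linear "activity model": f(x) = w . x
w = [2.0, -1.0, 0.5]        # assumed feature weights
x = [1.0, 3.0, 4.0]         # compound's feature values (hypothetical)
baseline = [0.0, 0.0, 0.0]  # reference ("absent feature") values

def f(features):
    return sum(wi * xi for wi, xi in zip(w, features))

def value(subset):
    # Features in `subset` take the compound's values; the rest use the baseline.
    z = [x[i] if i in subset else baseline[i] for i in range(len(x))]
    return f(z)

def shapley(i, n):
    # Exact Shapley value: weighted average of feature i's marginal
    # contribution over all subsets of the remaining features.
    total = 0.0
    others = [j for j in range(n) if j != i]
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(set(S) | {i}) - value(set(S)))
    return total

phi = [shapley(i, len(x)) for i in range(len(x))]
```

For a linear model the Shapley value of feature i reduces to w[i] * (x[i] - baseline[i]), and the values sum to f(x) - f(baseline) (the efficiency property); libraries such as SHAP approximate this computation efficiently for nonlinear models.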

