4.3 Article

Interpretability in symbolic regression: a benchmark of explanatory methods using the Feynman data set

Journal

GENETIC PROGRAMMING AND EVOLVABLE MACHINES
Volume 23, Issue 3, Pages 309-349

Publisher

SPRINGER
DOI: 10.1007/s10710-022-09435-x

Keywords

Symbolic regression; Explanatory methods; Feature importance attribution; Benchmark

Funding

  1. Federal University of ABC (UFABC)
  2. Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
  3. Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) [2018/14173-8]

Abstract

The interpretability of machine learning models is crucial in many situations, and this paper proposes a benchmark scheme for evaluating explanatory methods for regression models, with a focus on symbolic regression. The experiments show that symbolic regression models can be a compelling alternative to white-box and black-box models, yielding accurate models with appropriate explanations. Partial Effects and SHAP proved to be the most robust explanation methods, while Integrated Gradients was unstable only with tree-based models.
In some situations, the interpretability of a machine learning model plays a role as important as its accuracy. Interpretability stems from the need to trust the prediction model, verify some of its properties, or even enforce them to improve fairness. Many model-agnostic explanatory methods exist to provide explanations for black-box models. In the regression task, the practitioner can use white-box or gray-box models to achieve more interpretable results, as is the case with symbolic regression. When using an explanatory method, and since interpretability lacks a rigorous definition, there is a need to evaluate and compare the quality of different explainers. This paper proposes a benchmark scheme to evaluate explanatory methods for regression models, mainly symbolic regression models. Experiments were performed using 100 physics equations with different interpretable and non-interpretable regression methods and popular explanation methods, evaluating the explainers' performance with several explanation measures. In addition, we further analyzed four benchmarks from the GP community. The results show that symbolic regression models can be an interesting alternative to white-box and black-box models, capable of returning accurate models with appropriate explanations. Regarding the explainers, we observed that Partial Effects and SHAP were the most robust explanation models, with Integrated Gradients being unstable only with tree-based models. This benchmark is publicly available for further experiments.
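The abstract names Partial Effects and SHAP as the most robust explainers. As an illustration only (not the paper's published benchmark code), the following Python sketch computes both kinds of attribution on a hypothetical Feynman-style equation. The choice of equation, the RandomForestRegressor surrogate, and the mean-absolute aggregation into a global importance score are all assumptions made for this example.

import numpy as np
import shap
import sympy as sp
from sklearn.ensemble import RandomForestRegressor

# Hypothetical Feynman-style equation: Newtonian gravitation F = G*m1*m2 / r^2.
G = 6.674e-11
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 5.0, size=(500, 3))  # columns: m1, m2, r
y = G * X[:, 0] * X[:, 1] / X[:, 2] ** 2

# SHAP attributions for a black-box (tree-based) regressor fit to the data.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X[:100])
shap_importance = np.abs(shap_values).mean(axis=0)  # global feature importance

# Partial Effects for a symbolic model: the partial derivative of the
# expression with respect to each feature, evaluated at the observed data.
m1, m2, r = sp.symbols("m1 m2 r")
expr = G * m1 * m2 / r**2  # expression a symbolic regressor might return
partials = [sp.lambdify((m1, m2, r), sp.diff(expr, v), "numpy") for v in (m1, m2, r)]
effects = np.column_stack([p(X[:100, 0], X[:100, 1], X[:100, 2]) for p in partials])
pe_importance = np.abs(effects).mean(axis=0)

print("SHAP importance:           ", shap_importance)
print("Partial Effects importance:", pe_importance)

Under this setup, both methods should rank the distance r as the dominant feature; comparing such rankings across equations and regressors is the kind of evaluation the benchmark described above performs with several explanation measures.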
