Article

Explainability of Predictive Process Monitoring Results: Can You See My Data Issues?

Journal

APPLIED SCIENCES-BASEL
Volume 12, Issue 16, Pages -

Publisher

MDPI
DOI: 10.3390/app12168192

Keywords

predictive process monitoring; machine learning eXplainability; XAI; outcome prediction; process mining; machine learning

Abstract

Predictive process monitoring (PPM) is an important application of process mining that utilizes machine learning to predict the future of ongoing business processes. However, the need for explainable artificial intelligence (XAI) to gain user trust in the predictions remains a challenge. This study systematically investigates the effects of different choices in PPM settings on the explainability of generated predictions, highlighting inconsistencies between data characteristics, ML model predictors, and prediction explanations.
Predictive process monitoring (PPM) has been discussed as a use case of process mining for several years. PPM enables foreseeing the future of an ongoing business process by predicting, for example, relevant information on the way in which running processes terminate or on related process performance indicators. A large share of PPM approaches adopt Machine Learning (ML), taking advantage of the accuracy and precision of ML models. Consequently, PPM inherits the challenges of traditional ML approaches. One of these challenges concerns the need to gain user trust in the generated predictions. This issue is addressed by explainable artificial intelligence (XAI). However, in addition to ML characteristics, the choices made and the techniques applied in the context of PPM influence the resulting explanations. This calls for a study of the effects that different choices made in the context of a PPM task have on the explainability of the generated predictions. To address this gap, we systematically investigate the effects of different PPM settings on the data fed into an ML model and subsequently into the employed XAI method. We study how differences between the resulting explanations indicate several issues in the underlying data; examples of such issues include collinearity and high dimensionality of the input data. We construct a framework for performing a series of experiments to examine different choices of PPM dimensions (i.e., event logs, preprocessing configurations, and ML models), integrating XAI as a fundamental component. In addition to agreements, the experiments highlight several inconsistencies between data characteristics and the important predictors used by the ML model on the one hand, and explanations of the predictions of the investigated ML model on the other.
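
One of the data issues the abstract names, collinearity among input features, directly affects post-hoc explanations. Below is a minimal, hypothetical sketch (not the authors' code; the event-log feature names activity_count, elapsed_time, and resource_load are invented for illustration) showing how two nearly collinear features split credit under permutation importance, so an explanation can understate predictors that the data characteristics say are important:

```python
# Minimal sketch: collinear event-log features distorting an explanation.
# Feature names and data generation are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
activity_count = rng.poisson(5.0, n)                          # events seen so far in a trace
elapsed_time = activity_count * 3.0 + rng.normal(0, 0.1, n)   # nearly collinear with activity_count
resource_load = rng.normal(0, 1, n)                           # independent predictor

X = np.column_stack([activity_count, elapsed_time, resource_load])
y = (0.8 * activity_count + 0.5 * resource_load + rng.normal(0, 1, n) > 4.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance splits credit arbitrarily between the two
# collinear features, so each appears less important than it really is.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean in zip(["activity_count", "elapsed_time", "resource_load"],
                      imp.importances_mean):
    print(f"{name}: {mean:.3f}")
```

Because the model can recover activity_count from elapsed_time, permuting either feature alone barely hurts accuracy. This is one mechanism behind the kind of disagreement between data characteristics, model predictors, and prediction explanations that the study investigates.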
