4.6 Article

Explainability of Predictive Process Monitoring Results: Can You See My Data Issues?

Journal

APPLIED SCIENCES-BASEL
Volume 12, Issue 16, Article 8192

Publisher

MDPI
DOI: 10.3390/app12168192

Keywords

predictive process monitoring; machine learning eXplainability; XAI; outcome prediction; process mining; machine learning

Abstract

Predictive process monitoring (PPM) has been discussed as a use case of process mining for several years. PPM makes it possible to foresee the future of an ongoing business process by predicting, for example, how a running process instance will terminate or how related process performance indicators will develop. A large share of PPM approaches adopts machine learning (ML), taking advantage of the accuracy and precision of ML models. Consequently, PPM inherits the challenges of traditional ML approaches. One of these challenges concerns the need to gain user trust in the generated predictions, an issue addressed by explainable artificial intelligence (XAI). However, in addition to the characteristics of the ML model, the choices made and the techniques applied in the context of PPM influence the resulting explanations. This calls for a study of the effects that the different choices made in a PPM task have on the explainability of the generated predictions. To address this gap, we systematically investigate the effects of different PPM settings on the data fed into an ML model and, subsequently, into the employed XAI method. We study how differences between the resulting explanations reveal several issues in the underlying data, such as collinearity and high dimensionality of the input. We construct a framework for performing a series of experiments that examine different choices along the PPM dimensions (i.e., event logs, preprocessing configurations, and ML models), integrating XAI as a fundamental component. In addition to agreements, the experiments highlight several inconsistencies between the data characteristics and the important predictors used by the ML model on the one hand, and the explanations of the predictions of the investigated ML model on the other.
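To make the kind of data-vs-explanation mismatch described above concrete, the following is a minimal, hypothetical sketch, not the paper's actual pipeline: all library choices (scikit-learn, shap, pandas), feature names, and data are illustrative assumptions. It trains an outcome classifier on invented per-case features of the sort an aggregation-based trace encoding might produce, flags a deliberately collinear feature pair with a data-level correlation check, and then computes a SHAP explanation in which the two collinear columns split the credit.

# Hypothetical illustration only: a collinearity issue in PPM input data
# and its footprint in a SHAP explanation. Data and features are invented.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Invented per-case features, mimicking an aggregation-based trace encoding.
X = pd.DataFrame({
    "num_events": rng.integers(1, 50, n),
    "case_duration_h": rng.uniform(1.0, 200.0, n),
    "count_activity_A": rng.integers(0, 10, n),
})
# Deliberately collinear column: a near-copy of count_activity_A.
X["count_activity_A_dup"] = X["count_activity_A"] + rng.normal(0.0, 0.01, n)
# Synthetic binary outcome driven mainly by count_activity_A.
y = (X["count_activity_A"] + 0.01 * X["case_duration_h"]
     + rng.normal(0.0, 1.0, n) > 5.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Data-level check: pairwise correlation exposes the collinear pair.
corr = X_tr.corr().abs()
print("corr(count_activity_A, count_activity_A_dup) =",
      round(corr.loc["count_activity_A", "count_activity_A_dup"], 3))

# Explanation-level view: mean |SHAP| importance per feature.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_te)  # (n_samples, n_features) for a binary GBM
mean_abs = np.abs(sv).mean(axis=0)
for name, v in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {v:.4f}")

Because the two collinear columns share the credit, neither appears as important in the explanation as the underlying signal actually is in the data. Detecting this kind of disagreement between data characteristics and explanations is, in spirit, what the framework proposed in the paper systematizes across event logs, preprocessing configurations, and ML models.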
