Proceedings Paper

Interactive Personalization of Classifiers for Explainability using Multi-Objective Bayesian Optimization

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3565472.3592956

Keywords

Personalization; Explainable AI; Interactive AI; Bayesian Optimization; Multi-objective Optimization

Summary

This article discusses the personalization of opaque-box image classifiers via interactive hyperparameter tuning. The user iteratively rates the quality of explanations for a selected set of query images, and a multi-objective Bayesian optimization algorithm then optimizes both the classifier's accuracy and the perceived explainability ratings. The study found that adjusting hyperparameters can significantly improve the explainability ratings of queried images while minimally impacting classifier accuracy. The method can in principle jointly optimize any machine learning objective with any human-centric objective.

Abstract
Explainability is a crucial aspect of models, one that ensures their reliable use by both engineers and end-users. However, explainability depends on the user and the model's usage context, making it an important dimension for user personalization. In this article, we explore the personalization of opaque-box image classifiers using an interactive hyperparameter tuning approach, in which the user iteratively rates the quality of explanations for a selected set of query images. Using a multi-objective Bayesian optimization (MOBO) algorithm, we optimize for both the classifier's accuracy and the perceived explainability ratings. In our user study, we found Pareto-optimal parameters for each participant that could significantly improve explainability ratings of queried images while minimally impacting classifier accuracy. Furthermore, this improved explainability with tuned hyperparameters generalized to held-out validation images, with the extent of generalization depending on the variance within the queried images and the similarity between the query and validation images. This MOBO-based method has the potential to be used in general to jointly optimize any machine learning objective along with any human-centric objective. The Pareto front produced after the interactive hyperparameter tuning can be useful during deployment, allowing desired trade-offs between the objectives (if any) to be chosen by selecting the appropriate parameters. Additionally, user studies like ours can assess whether commonly assumed trade-offs, such as accuracy versus explainability, exist in a given context.
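
As a rough illustration of the MOBO loop described in the abstract, the sketch below fits one Gaussian-process surrogate per objective and picks the next hyperparameter setting by expected improvement on a randomly weighted scalarization (in the spirit of ParEGO). This is not the authors' implementation: both objective functions here are hypothetical toy stand-ins for (1) classifier accuracy and (2) a participant's explainability rating, and all names (`classifier_accuracy`, `user_explainability_rating`, `DIM`, etc.) are illustrative.

```python
# Minimal MOBO sketch: jointly optimize two objectives over hyperparameters.
# Assumptions: toy objectives, ParEGO-style random scalarization, unit-cube
# hyperparameter space. Not the paper's actual algorithm or code.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
DIM = 2  # number of hyperparameters being tuned (illustrative)

def classifier_accuracy(theta):
    # Stand-in for evaluating the classifier's accuracy at hyperparameters theta.
    return float(1.0 - np.sum((theta - 0.3) ** 2))

def user_explainability_rating(theta):
    # Stand-in for the participant rating explanations of the query images.
    return float(1.0 - np.sum((theta - 0.7) ** 2))

def evaluate(theta):
    return [classifier_accuracy(theta), user_explainability_rating(theta)]

# Initial design: a few random hyperparameter settings.
X = rng.uniform(0.0, 1.0, size=(5, DIM))
Y = np.array([evaluate(x) for x in X])

for _ in range(20):
    # Fit one GP surrogate per objective (both objectives are maximized).
    gps = [
        GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, Y[:, j])
        for j in range(2)
    ]
    # Draw random weights and maximize expected improvement of the
    # weighted-sum scalarization over a random candidate set.
    w = rng.dirichlet(np.ones(2))
    cand = rng.uniform(0.0, 1.0, size=(256, DIM))
    preds = [gp.predict(cand, return_std=True) for gp in gps]
    mu = w[0] * preds[0][0] + w[1] * preds[1][0]
    sd = np.sqrt((w[0] * preds[0][1]) ** 2 + (w[1] * preds[1][1]) ** 2)
    best = np.max(Y @ w)
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, evaluate(x_next)])

# Empirical Pareto front: keep points not strictly dominated in both objectives.
dominated = np.array([np.any(np.all(Y > y, axis=1)) for y in Y])
pareto_front = Y[~dominated]
print(pareto_front)
```

In the study's setting, each call to the rating objective would correspond to the participant interactively scoring explanations for the query images; the resulting Pareto front then lets a deployer choose hyperparameters at a desired accuracy/explainability trade-off, as the abstract describes.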
