Article

From Clustering to Cluster Explanations via Neural Networks

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TNNLS.2022.3185901

Keywords

Explainable machine learning; k-means clustering; neural networks; neuralization; unsupervised learning

Funding

  1. German Ministry for Education and Research [01IS14013A-E, 01GQ1115, 01GQ0850, 01IS18025A, 031L0207D, 01IS18037A]
  2. German Research Foundation (DFG) [EXC 2046/1, 390685689]
  3. Institute of Information and Communications Technology Planning and Evaluation (IITP) Grants, Korea Government (MSIT) [2019-0-00079]
  4. Artificial Intelligence Graduate School Program, Korea University [2022-000984]
  5. Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation

Abstract

A recent trend in machine learning has been to enrich learned models with the ability to explain their own predictions. The emerging field of explainable AI (XAI) has so far mainly focused on supervised learning, in particular, deep neural network classifiers. In many practical problems, however, the label information is not given and the goal is instead to discover the underlying structure of the data, for example, its clusters. While powerful methods exist for extracting the cluster structure in data, they typically do not answer the question why a certain data point has been assigned to a given cluster. We propose a new framework that can, for the first time, explain cluster assignments in terms of input features in an efficient and reliable manner. It is based on the novel insight that clustering models can be rewritten as neural networks, or "neuralized." Cluster predictions of the obtained networks can then be quickly and accurately attributed to the input features. Several showcases demonstrate the ability of our method to assess the quality of learned clusters and to extract novel insights from the analyzed data and representations.
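To make the abstract's central idea concrete, below is a minimal sketch of how a fitted k-means model can be rewritten in neural-network form: the evidence for cluster c can be expressed as a min-pooling over linear units, h_c(x) = min_{k≠c} (w_ck·x + b_ck) with w_ck = μ_c − μ_k and b_ck = (‖μ_k‖² − ‖μ_c‖²)/2, which recovers the nearest-centroid assignment via argmax_c h_c(x). The centroids, the toy input, and the simple gradient×input attribution at the end are illustrative assumptions; the paper itself derives more refined attribution rules for the neuralized network.

```python
import numpy as np

# Assumed fitted k-means centroids (3 clusters in 2-D) and a toy input.
mu = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
x = np.array([3.5, 0.5])

def neuralized_logits(x, mu):
    """Cluster evidence h_c(x) = min_{k != c} (w_ck . x + b_ck).

    Each term equals 0.5 * (||x - mu_k||^2 - ||x - mu_c||^2), so
    h_c(x) > 0 exactly when x is closer to centroid c than to every
    other centroid, and argmax_c h_c(x) reproduces the k-means rule.
    """
    K = len(mu)
    h = np.empty(K)
    for c in range(K):
        vals = [(mu[c] - mu[k]) @ x + (mu[k] @ mu[k] - mu[c] @ mu[c]) / 2.0
                for k in range(K) if k != c]
        h[c] = min(vals)
    return h

h = neuralized_logits(x, mu)
c = int(np.argmax(h))
# Same assignment as the ordinary nearest-centroid rule.
assert c == int(np.argmin(((mu - x) ** 2).sum(axis=1)))

# Illustrative gradient-x-input attribution of the winning logit:
# the active linear piece of h_c belongs to the runner-up centroid k*.
others = [k for k in range(len(mu)) if k != c]
vals = [(mu[c] - mu[k]) @ x + (mu[k] @ mu[k] - mu[c] @ mu[c]) / 2.0
        for k in others]
k_star = others[int(np.argmin(vals))]
grad = mu[c] - mu[k_star]
relevance = grad * x  # per-feature contribution to the cluster evidence
```

For x = (3.5, 0.5) the assignment is cluster 1 (centroid (4, 0)) and the relevance concentrates on the first feature, since only that coordinate separates the winning centroid from its closest competitor.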

