Article

Understanding neural network through neuron level visualization

Journal

NEURAL NETWORKS
Volume 168, Issue -, Pages 484-495

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.09.030

Keywords

Interpretability; Neural network; Visualization


Neurons are the fundamental units of neural networks. In this paper, we propose a method for explaining neural networks by visualizing the learning process of neurons. For a trained neural network, the proposed method obtains the features learned by each neuron and displays the features in a human-understandable form. The features learned by different neurons are combined to analyze the working mechanism of different neural network models. The method is applicable to neural networks without requiring any changes to the architectures of the models. In this study, we apply the proposed method to both Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs) trained using the backpropagation learning algorithm. We conduct experiments on models for image classification tasks to demonstrate the effectiveness of the method. Through these experiments, we gain insights into the working mechanisms of various neural network architectures and evaluate neural network interpretability from diverse perspectives.
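The abstract does not specify how the per-neuron features are extracted, but for a fully connected first layer a common baseline is to render each neuron's incoming weight vector as an image, since the weights live in the input (pixel) space. The sketch below assumes this baseline, not the paper's exact method; the function name and the use of a random stand-in weight matrix are illustrative only.

```python
import numpy as np

def neuron_feature_images(W, img_shape=(28, 28)):
    """Render each first-layer neuron's weight vector as an image.

    W: array of shape (n_neurons, n_inputs), the incoming weights of a
    trained fully connected layer (a stand-in here; the paper's actual
    feature-extraction step is not described in the abstract).
    Returns an array of shape (n_neurons, *img_shape), with each map
    normalized to [0, 1] so it can be displayed as a grayscale image.
    """
    n_neurons, n_inputs = W.shape
    assert n_inputs == img_shape[0] * img_shape[1]
    maps = W.reshape(n_neurons, *img_shape)
    # Normalize each neuron's map independently so its internal
    # structure stays visible regardless of weight magnitude.
    mins = maps.min(axis=(1, 2), keepdims=True)
    maxs = maps.max(axis=(1, 2), keepdims=True)
    return (maps - mins) / (maxs - mins + 1e-8)

# Example: 16 hidden neurons over flattened 28x28 inputs,
# using random weights in place of a trained model.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 784))
imgs = neuron_feature_images(W)
print(imgs.shape)  # (16, 28, 28)
```

For a CNN the analogue would be visualizing each convolutional filter directly, or finding inputs that maximally activate a given neuron; either fits the abstract's claim that the method needs no changes to the model architecture.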
