4.7 Article

Understanding neural network through neuron level visualization

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Artificial Intelligence

Interpretable learning based Dynamic Graph Convolutional Networks for Alzheimer's Disease analysis

Yonghua Zhu et al.

Summary: This paper proposes a GCN architecture that combines interpretable feature learning with dynamic graph learning for personalized early Alzheimer's disease diagnosis, achieving competitive diagnostic performance while providing interpretability.

INFORMATION FUSION (2022)

Article Computer Science, Artificial Intelligence

Explaining the black-box model: A survey of local interpretation methods for deep neural networks

Yu Liang et al.

Summary: This research examines recent developments in interpreting deep neural networks, specifically focusing on local interpretation methods with in-depth analysis of representative works and newly proposed approaches. The study categorizes local interpretation methods into model-driven and data-driven categories, highlighting new ideas and principles. Results of various interpretation methods are reproduced using open source software plugins, demonstrating their effectiveness, and future research directions are suggested.

NEUROCOMPUTING (2021)

Article Computer Science, Artificial Intelligence

Extraction of an Explanatory Graph to Interpret a CNN

Quanshi Zhang et al.

Summary: This paper introduces an explanatory graph representation to reveal object parts encoded in convolutional layers of a CNN. By learning the explanatory graph, different object parts are automatically disentangled from each filter, boosting the transferability of CNN features.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2021)

Article Computer Science, Software Engineering

CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization

Zijie J. Wang et al.

Summary: CNN Explainer is an interactive visualization tool designed for non-experts to learn and examine convolutional neural networks. It helps users understand the underlying components of CNNs through a model overview and dynamic visual explanation views. A user study shows that CNN Explainer helps users more easily understand the inner workings of CNNs, and is engaging and enjoyable to use.

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS (2021)

Review Engineering, Electrical & Electronic

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

Wojciech Samek et al.

Summary: Motivated by the increasing demand for explainable artificial intelligence (XAI) driven by the success of machine learning, particularly deep neural networks, this work provides an overview of the field, tests interpretability algorithms, and demonstrates their successful use in application scenarios.

PROCEEDINGS OF THE IEEE (2021)

Article Computer Science, Artificial Intelligence

A Survey on Neural Network Interpretability

Yu Zhang et al.

Summary: This study provides a comprehensive review of the interpretability of neural networks, clarifies its definition, and proposes a new taxonomy. Trust in deep learning systems depends on interpretability, which is also tied to ethical concerns, and interpretability is a desired property if deep networks are to become powerful tools in other research fields.

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE (2021)

Article Business

Trustworthy artificial intelligence

Scott Thiebes et al.

Summary: Artificial intelligence presents both opportunities and challenges, and Trustworthy AI emphasizes the importance of trust in its development and deployment. Its five foundational principles are beneficence, non-maleficence, autonomy, justice, and explicability. A data-driven research framework can help delineate fruitful avenues for future research toward the realization of Trustworthy AI.

ELECTRONIC MARKETS (2021)

Article Humanities, Multidisciplinary

Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward

Samuele Lo Piano

HUMANITIES & SOCIAL SCIENCES COMMUNICATIONS (2020)

Article Computer Science, Artificial Intelligence

Visualizing deep neural network by alternately image blurring and deblurring

Feng Wang et al.

NEURAL NETWORKS (2018)

Proceedings Paper Computer Science, Artificial Intelligence

Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention

Jinkyu Kim et al.

2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) (2017)

Proceedings Paper Computer Science, Artificial Intelligence

MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network

Zizhao Zhang et al.

30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) (2017)

Proceedings Paper Computer Science, Artificial Intelligence

Network Dissection: Quantifying Interpretability of Deep Visual Representations

David Bau et al.

30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) (2017)