4.6 Article

An XAI method for convolutional neural networks in self-driving cars

Journal

PLOS ONE
Volume 17, Issue 8, Pages -

Publisher

PUBLIC LIBRARY SCIENCE
DOI: 10.1371/journal.pone.0267282

Keywords

-

Funding

  1. Institute for Information & communications Technology Promotion (IITP) - Korea government (MSIP) [2020-0-00107]


Explainable Artificial Intelligence (XAI), a growing trend in machine learning, aims to explain the outputs of machine learning models, which is especially important in reliability-critical applications such as self-driving cars. In this paper, the authors propose an XAI method based on computing and explaining the differences in the output values of the last hidden layer of convolutional neural networks. The experimental results demonstrate that the method accurately identifies the image parts needed to distinguish the category of images in self-driving cars.
eXplainable Artificial Intelligence (XAI) is a new trend in machine learning. Machine learning models are used to predict or decide something, and they derive their outputs from large volumes of data. The problem is that it is hard to know why a given prediction was derived, especially when deep learning models are used. This makes the models unreliable in reliability-critical applications, so it is necessary to explain how they derived their outputs. Self-driving is a reliability-critical application because mistakes made by the computers inside the cars can lead to serious accidents, so it is necessary to adopt XAI models in this field. In this paper, we propose an XAI method based on computing and explaining the differences in the output values of the neurons in the last hidden layer of convolutional neural networks. First, we input the original image and several modified versions of it. Then we derive the output values for each image and compare these values. We then introduce the Sensitivity Analysis technique to explain which parts of the original image are needed to distinguish the category. In detail, we divide the image into several parts and fill these parts with shades. For each part, we compute its influence value on the vector representing the last hidden layer of the model. Then we draw shades whose darkness is proportional to the influence values. The experimental results show that our XAI approach for self-driving cars accurately finds the parts needed to distinguish the category of the images.
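
As a concrete illustration of the occlusion-style sensitivity analysis the abstract describes, the sketch below computes, for each image patch, how much a CNN's last-hidden-layer vector changes when that patch is filled with a uniform shade, and scales the drawn shade by that influence value. The torchvision ResNet-18 backbone, the patch size, and the L2 distance between feature vectors are assumptions made for this sketch, not the authors' exact implementation.

import torch
import torchvision.models as models

# Pretrained CNN; dropping the final classifier leaves the pooled feature
# vector (the last hidden layer) as the output. ResNet-18 is an assumption.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def influence_map(image, patch=16, fill=0.5):
    """image: (3, H, W) tensor in [0, 1]. Returns an (H//patch, W//patch) grid
    of influence values: the change in the last-hidden-layer vector caused by
    filling each patch with a uniform shade."""
    _, height, width = image.shape
    rows, cols = height // patch, width // patch
    influence = torch.zeros(rows, cols)
    with torch.no_grad():
        base = feature_extractor(image.unsqueeze(0)).flatten(1)  # reference vector
        for r in range(rows):
            for c in range(cols):
                occluded = image.clone()
                occluded[:, r*patch:(r+1)*patch, c*patch:(c+1)*patch] = fill
                feat = feature_extractor(occluded.unsqueeze(0)).flatten(1)
                # Influence value: distance between original and occluded vectors.
                influence[r, c] = torch.norm(base - feat)
    return influence

# Shade darkness drawn in proportion to influence (hypothetical 224x224 input).
img = torch.rand(3, 224, 224)
infl = influence_map(img)
shading = infl / infl.max()  # 1.0 = darkest shade over the most influential patch

In practice the shading grid would be upsampled to the input resolution and overlaid on the original image so that the darkest regions mark the parts the network relies on to distinguish the category.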


