Article

Evaluating the Learning Procedure of CNNs through a Sequence of Prognostic Tests Utilising Information Theoretical Measures

Journal

ENTROPY
Volume 24, Issue 1, Pages -

Publisher

MDPI
DOI: 10.3390/e24010067

Keywords

convolutional neural networks; learning procedure; information theory; mutual information

Funding

  1. Ministry of Education of Turkey
  2. Engineering and Physical Sciences Research Council (EPSRC) [EP/T000783/1]

Abstract

This paper reveals the typical learning patterns of convolutional neural networks using information theoretical measures, showing that adding convolutional layers improves learning only up to a point, beyond which further layers bring no additional benefit. It also shows that the kernel size of the convolutional layers affects only the learning speed, and that the effect of dropout-layer placement depends on the dropout rate.

Deep learning has proven to be an important element of modern data processing technology and has found application in many areas, such as multimodal sensor data processing and understanding, data generation, and anomaly detection. While the use of deep learning is booming in many real-world tasks, the internal processes by which it arrives at its results remain uncertain. Understanding the data processing pathways within a deep neural network is important for transparency and better resource utilisation. In this paper, a method utilising information theoretic measures is used to reveal the typical learning patterns of convolutional neural networks, which are commonly used for image processing tasks. For this purpose, training samples, true labels and estimated labels are considered to be random variables, and the mutual information and conditional entropy between these variables are studied. This paper shows that adding convolutional layers to the network improves its learning, but that unnecessarily large numbers of convolutional layers do not improve the learning any further. The number of convolutional layers that needs to be added to a neural network to reach the desired learning level can be determined with the help of information theoretic quantities, including entropy, inequality and mutual information among the inputs to the network. The kernel size of the convolutional layers affects only the learning speed of the network. The study also shows that where the dropout layer is placed has no significant effect on the learning of networks with a lower dropout rate, while with higher dropout rates it is best placed immediately after the last convolutional layer.
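To make the type of measurement concrete, the sketch below is a minimal, hypothetical Python example (not the authors' exact estimator): it treats the true labels Y and the network's estimated labels Y_hat as discrete random variables and computes the entropy H(Y), the conditional entropy H(Y|Y_hat) and the mutual information I(Y;Y_hat) from their empirical joint distribution over a held-out set. The function name and the synthetic data are illustrative assumptions.

    import numpy as np

    def label_information_measures(y_true, y_pred, num_classes):
        # Empirical joint distribution p(y, y_hat) built from label counts.
        joint = np.zeros((num_classes, num_classes))
        for t, p in zip(y_true, y_pred):
            joint[t, p] += 1
        joint /= joint.sum()

        p_y = joint.sum(axis=1)      # marginal p(y)
        p_yhat = joint.sum(axis=0)   # marginal p(y_hat)

        def entropy(p):
            p = p[p > 0]             # ignore zero-probability cells
            return -np.sum(p * np.log2(p))

        h_y = entropy(p_y)                                          # H(Y)
        h_y_given_yhat = entropy(joint.ravel()) - entropy(p_yhat)   # H(Y|Y_hat) = H(Y,Y_hat) - H(Y_hat)
        mutual_info = h_y - h_y_given_yhat                          # I(Y;Y_hat) = H(Y) - H(Y|Y_hat)
        return h_y, h_y_given_yhat, mutual_info

    # Illustrative use: a partially trained classifier whose predictions match
    # the true label 20% of the time by construction, plus chance agreement.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 10, size=1000)
    y_pred = np.where(rng.random(1000) < 0.2, y_true, rng.integers(0, 10, size=1000))
    print(label_information_measures(y_true, y_pred, 10))

Tracking I(Y;Y_hat) across training epochs in this way shows how much label information the network has captured, which is the kind of prognostic signal the paper uses to compare architectural choices.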
