Article

Learning in Convolutional Neural Networks Accelerated by Transfer Entropy

Journal

ENTROPY
Volume 23, Issue 9, Pages -

Publisher

MDPI
DOI: 10.3390/e23091218

Keywords

transfer entropy; causality; Convolutional Neural Network; deep learning

The article discusses the integration of Transfer Entropy (TE) feedback connections into a Convolutional Neural Network (CNN) architecture to accelerate the training process and improve stability by considering the TE between neuron pairs in the last two fully connected layers.

Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, the TE can be used to quantify the relationships between neuron output pairs located in different layers. Our focus is on how to include the TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures that integrates TE feedback connections. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed; on the other hand, it adds computational overhead to each epoch. According to our experiments on CNN classifiers, a reasonable trade-off between computational overhead and accuracy is achieved by considering only the inter-neural information transfer of the neuron pairs between the last two fully connected layers. The TE acts as a smoothing factor, generating stability and becoming active only periodically, not after processing each input sample. Therefore, the TE in our model can be considered a slowly changing meta-parameter.
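The abstract only outlines the mechanism, so the following is a minimal sketch of the idea rather than the authors' implementation. It assumes a Schreiber-style TE estimate (history length 1) over binarized activations of the last two fully connected layers, and a simple multiplicative feedback factor on the gradient of the final layer's weights; names such as te_feedback_scale and the 0.5 binarization threshold are illustrative assumptions, not values from the paper.

```python
# Sketch only: TE feedback between the last two fully connected layers,
# applied periodically during training (assumptions noted above).
import numpy as np

def transfer_entropy(source, target, eps=1e-12):
    """Schreiber-style TE (history length 1) for two binarized activation series.

    TE_{source->target} = sum p(x_{t+1}, x_t, y_t) *
                          log2( p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) )
    """
    x_next, x_prev, y_prev = target[1:], target[:-1], source[:-1]
    te = 0.0
    for xn in (0, 1):
        for xp in (0, 1):
            for yp in (0, 1):
                p_xyz = np.mean((x_next == xn) & (x_prev == xp) & (y_prev == yp))
                if p_xyz == 0:
                    continue
                p_xy = np.mean((x_prev == xp) & (y_prev == yp))
                p_x = np.mean(x_prev == xp)
                p_cond_joint = p_xyz / (p_xy + eps)                  # p(x_{t+1} | x_t, y_t)
                p_cond_marg = np.mean((x_next == xn) & (x_prev == xp)) / (p_x + eps)  # p(x_{t+1} | x_t)
                te += p_xyz * np.log2((p_cond_joint + eps) / (p_cond_marg + eps))
    return te

def te_feedback_matrix(prev_acts, last_acts, threshold=0.5):
    """TE from each neuron of the penultimate FC layer to each output neuron.

    prev_acts: (T, n_prev) activations recorded over the last T training steps.
    last_acts: (T, n_last) activations of the final FC layer over the same steps.
    """
    src = (prev_acts > threshold).astype(int)
    dst = (last_acts > threshold).astype(int)
    te = np.zeros((src.shape[1], dst.shape[1]))
    for i in range(src.shape[1]):
        for j in range(dst.shape[1]):
            te[i, j] = transfer_entropy(src[:, i], dst[:, j])
    return te

def apply_te_feedback(grad_w_last, te_matrix, te_feedback_scale=0.1):
    """Scale the last FC layer's weight gradient by (1 + scale * TE).

    Assumes the common (n_last, n_prev) weight-gradient layout, so connections
    carrying more directed information receive slightly larger updates.
    """
    return grad_w_last * (1.0 + te_feedback_scale * te_matrix.T)
```

Consistent with the abstract's remark that the TE becomes active only periodically, such a feedback matrix would be recomputed every few batches from a sliding window of recorded activations rather than after every input sample.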
