Article

Evolving artificial neural networks with feedback

Journal

NEURAL NETWORKS
Volume 123, Pages 153-162

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2019.12.004

Keywords

Deep learning; Feedback; Transfer entropy; Convolutional neural network

Funding

  1. European Community's Horizon 2020 Programme [732266]

Abstract

In the brain, neural networks are dominated by feedback connections, which can account for more than 60% of all connections and most often carry small synaptic weights. By contrast, little is known about how to introduce feedback into artificial neural networks. Here we use transfer entropy along the feed-forward paths of deep networks to identify feedback candidates between the convolutional layers and determine their final synaptic weights using genetic programming. This adds about 70% more connections to these layers, all with very small weights. Nonetheless, performance improves substantially on different standard benchmark tasks and in different networks. To verify that this effect is generic, we use 36,000 configurations of small (2-10 hidden layers) conventional neural networks in a non-linear classification task and select the best-performing feed-forward nets. We then show that feedback reduces the total entropy in these networks, always leading to a performance increase. This method may thus supplement standard techniques (e.g., error backpropagation), adding a new quality to network learning. (C) 2019 The Author(s). Published by Elsevier Ltd.
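As a rough illustration of the first step the abstract describes, the sketch below estimates transfer entropy between two activation time series, as one might use to score a candidate feedback connection between units of successive layers. This is not the authors' code; the binning scheme, the one-step history length, and the toy data are assumptions made purely for this example.

```python
# Illustrative sketch only (not the paper's implementation): a simple
# binned estimator of transfer entropy TE(source -> target) with a
# one-step history.
import numpy as np

def transfer_entropy(source, target, bins=8):
    """TE(source -> target) in bits, from two 1-D activation time series."""
    s = np.digitize(source, np.histogram_bin_edges(source, bins=bins))
    t = np.digitize(target, np.histogram_bin_edges(target, bins=bins))
    x_next, x_prev, y_prev = t[1:], t[:-1], s[:-1]
    te = 0.0
    for xn in np.unique(x_next):
        for xp in np.unique(x_prev):
            for yp in np.unique(y_prev):
                # Joint and marginal probabilities estimated by counting.
                p_xyz = np.mean((x_next == xn) & (x_prev == xp) & (y_prev == yp))
                if p_xyz == 0.0:
                    continue
                p_xz = np.mean((x_prev == xp) & (y_prev == yp))
                p_xx = np.mean((x_next == xn) & (x_prev == xp))
                p_x = np.mean(x_prev == xp)
                # Compares p(x_next | x_prev, y_prev) with p(x_next | x_prev).
                te += p_xyz * np.log2((p_xyz / p_xz) / (p_xx / p_x))
    return te

# Toy check: tgt is driven by the previous value of src, so the forward
# direction should show clearly higher transfer entropy than the reverse.
rng = np.random.default_rng(0)
src = rng.normal(size=2000)
tgt = 0.8 * np.roll(src, 1) + rng.normal(scale=0.5, size=2000)
print(transfer_entropy(src, tgt))  # relatively large
print(transfer_entropy(tgt, src))  # near zero
```

In the pipeline the abstract outlines, connections scored this way would become feedback candidates whose final (small) synaptic weights are then tuned by genetic programming; that second stage is omitted from this sketch.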
