Article

Residual D2NN: training diffractive deep neural networks via learnable light shortcuts

Journal

OPTICS LETTERS
Volume 45, Issue 10, Pages 2688-2691

Publisher

Optica Publishing Group
DOI: 10.1364/OL.389696

Funding

  1. Beijing Municipal Commission of Science and Technology [Z181100003118014]
  2. National Natural Science Foundation of China [61971020]
  3. Tsinghua University Initiative Scientific Research Program

Abstract

The diffractive deep neural network (D²NN) has demonstrated its importance in performing various all-optical machine learning tasks, e.g., classification, segmentation, etc. However, deeper D²NNs that provide higher inference complexity are more difficult to train due to the problem of gradient vanishing. We introduce the residual D²NN (Res-D²NN), which enables us to train substantially deeper diffractive networks by constructing diffractive residual learning blocks to learn the residual mapping functions. Unlike the existing plain D²NNs, Res-D²NNs contribute to the design of a learnable light shortcut to directly connect the input and output between optical layers. Such a shortcut offers a direct path for gradient backpropagation in training, which is an effective way to alleviate the gradient vanishing issue in very deep diffractive neural networks. Experimental results on image classification and pixel super-resolution demonstrate the superiority of Res-D²NNs over the existing plain D²NN architectures. © 2020 Optical Society of America
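The residual structure described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `phase_layer`, `residual_block`, and the `shortcut_weight` parameter are hypothetical names, each optical layer is reduced to a learnable phase mask, and free-space propagation between layers is omitted for brevity. The point is only the shape of the computation, y = F(x) + α·x, where the weighted shortcut passes the input field directly to the block's output.

```python
import numpy as np

def phase_layer(field, phase_mask):
    """Modulate a complex optical field with a learnable phase mask.

    A real diffractive layer would also include free-space propagation
    between layers, omitted here for brevity.
    """
    return field * np.exp(1j * phase_mask)

def residual_block(field, phase_masks, shortcut_weight=1.0):
    """Stack of phase layers plus a learnable light shortcut.

    The shortcut adds the block's input field directly to its output,
    giving gradients a direct path during backpropagation.
    """
    out = field
    for mask in phase_masks:
        out = phase_layer(out, mask)
    return out + shortcut_weight * field

# Toy usage: a 4x4 complex input field through a 2-layer residual block.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
masks = [rng.uniform(0.0, 2.0 * np.pi, (4, 4)) for _ in range(2)]
y = residual_block(x, masks, shortcut_weight=0.5)
print(y.shape)  # (4, 4)
```

Because the shortcut enters additively, the block's output differs from the plain stacked layers only by the weighted input term, which is what keeps the gradient path short in deep stacks.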
