4.8 Article

11 TOPS photonic convolutional accelerator for optical neural networks

Journal

NATURE
Volume 589, Issue 7840, Pages 44-51

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41586-020-03063-0

Funding

  1. Australian Research Council [DP150104327, DP190102773, DP190101576, FT104101104]
  2. Natural Sciences and Engineering Research Council of Canada (NSERC)
  3. MESI PSR-SIIRI Initiative in Quebec
  4. Canada Research Chair Program
  5. Strategic Priority Research Program of the Chinese Academy of Sciences [XDB24030000]

Abstract

Convolutional neural networks, inspired by biological visual cortex systems, are a powerful category of artificial neural networks that can extract the hierarchical features of raw data to provide greatly reduced parametric complexity and to enhance the accuracy of prediction. They are of great interest for machine learning tasks such as computer vision, speech recognition, playing board games and medical diagnosis (refs. 1-7). Optical neural networks offer the promise of dramatically accelerating computing speed using the broad optical bandwidths available. Here we demonstrate a universal optical vector convolutional accelerator operating at more than ten TOPS (trillions, or 10^12, of operations per second, also known as tera-ops per second), generating convolutions of images with 250,000 pixels, which is sufficiently large for facial image recognition. We use the same hardware to sequentially form an optical convolutional neural network with ten output neurons, achieving successful recognition of handwritten digit images at 88 per cent accuracy. Our results are based on simultaneously interleaving temporal, wavelength and spatial dimensions, enabled by an integrated microcomb source. This approach is scalable and trainable to much more complex networks for demanding applications such as autonomous vehicles and real-time video recognition.
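The core operation the hardware accelerates is a convolution computed as a sliding vector dot product between a flattened kernel and successive image patches. The following is a minimal NumPy sketch of the equivalent digital computation and its operation count, not the authors' code; the 500 x 500 image matches the ~250,000-pixel scale quoted in the abstract, while the 3 x 3 kernel size and random weights are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) of the digital equivalent of a
# vector convolutional accelerator: each output pixel is a dot product
# between a flattened kernel and the corresponding image patch.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((500, 500))   # ~250,000 pixels, as quoted in the abstract
kernel = rng.random((3, 3))      # illustrative kernel size and weights

kh, kw = kernel.shape
out_h = image.shape[0] - kh + 1
out_w = image.shape[1] - kw + 1

w = kernel.ravel()               # flatten the kernel once into a weight vector
output = np.empty((out_h, out_w))
for i in range(out_h):
    for j in range(out_w):
        patch = image[i:i + kh, j:j + kw].ravel()
        output[i, j] = w @ patch # sliding vector dot product

# Counting one multiply and one add per kernel weight per output pixel:
ops_per_frame = 2 * kh * kw * out_h * out_w
print(f"{ops_per_frame:,} operations per frame")
# Throughput in ops/s follows from multiplying by the frame (or pixel) rate.
```

The photonic accelerator evaluates many of these dot products in parallel by spreading the computation across wavelength channels and time slots, which is how the interleaving of temporal, wavelength and spatial dimensions described above translates into TOPS-scale throughput.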
