4.6 Article

AERO: A 1.28 MOP/s/LUT Reconfigurable Inference Processor for Recurrent Neural Networks in a Resource-Limited FPGA

Related references

Note: Only a subset of the references is listed.
Article · Engineering, Electrical & Electronic

A Resource-Efficient Inference Accelerator for Binary Convolutional Neural Networks

Tae-Hwan Kim et al.

Summary: This brief introduces a novel architecture for a resource-efficient binary convolutional neural network (BCNN) inference accelerator, which achieves high resource efficiency on the CIFAR-10 classification task. The accelerator has been implemented on an FPGA, and its functionality has been verified in a fully integrated BCNN inference system.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS (2021)

Article · Engineering, Electrical & Electronic

EdgeDRNN: Recurrent Neural Network Accelerator for Edge Inference

Chang Gao et al.

IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS (2020)

Article · Computer Science, Artificial Intelligence

Sequence classification for credit-card fraud detection

Johannes Jurgovsky et al.

EXPERT SYSTEMS WITH APPLICATIONS (2018)

Review · Multidisciplinary Sciences

State-of-the-art in artificial neural network applications: A survey

Oludare Isaac Abiodun et al.

HELIYON (2018)

Article · Computer Science, Artificial Intelligence

A Novel Connectionist System for Unconstrained Handwriting Recognition

Alex Graves et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2009)