Related References
Note: only a partial list of references is shown; download the original article for the complete citation information.

Fair and Comprehensive Benchmarking of Machine Learning Processing Chips
Geoffrey W. Burr et al.
IEEE Design & Test (2022)
Optimised weight programming for analogue memory-based deep neural networks
Charles Mackin et al.
Nature Communications (2022)
Ohm's Law + Kirchhoff's Current Law = Better AI: Neural-Network Processing Done in Memory with Analog Circuits will Save Energy
Geoffrey W. Burr et al.
IEEE Spectrum (2021)
Fully On-Chip MAC at 14 nm Enabled by Accurate Row-Wise Programming of PCM-Based Weights and Parallel Vector-Transport in Duration-Format
P. Narayanan et al.
IEEE Transactions on Electron Devices (2021)
Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices
Katie Spoon et al.
Frontiers in Computational Neuroscience (2021)
A Flexible and Fast PyTorch Toolkit for Simulating Training and Inference on Analog Crossbar Arrays
Malte J. Rasch et al.
2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS) (2021)
A Programmable Neural-Network Inference Accelerator Based on Scalable In-Memory Computing
Hongyang Jia et al.
2021 IEEE International Solid-State Circuits Conference (ISSCC) (2021)
A Survey of Accelerator Architectures for Deep Neural Networks
Yiran Chen et al.
Engineering (2020)
Accurate deep neural network inference using computational phase-change memory
Vinay Joshi et al.
Nature Communications (2020)
Memory devices and applications for in-memory computing
Abu Sebastian et al.
Nature Nanotechnology (2020)
TiM-DNN: Ternary In-Memory Accelerator for Deep Neural Networks
Shubham Jain et al.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2020)
Deep In-Memory Architectures in SRAM: An Analog Approach to Approximate Computing
Mingu Kang et al.
Proceedings of the IEEE (2020)
TIMELY: Pushing Data Movements and Interfaces in PIM Accelerators Towards Local and in Time Domain
Weitao Li et al.
2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA) (2020)
SparCE: Sparsity Aware General-Purpose Core Extensions to Accelerate Deep Neural Networks
Sanchari Sen et al.
IEEE Transactions on Computers (2019)
RAPA-ConvNets: Modified Convolutional Networks for Accelerated Training on Architectures With Analog Arrays
Malte J. Rasch et al.
Frontiers in Neuroscience (2019)
Neural network accelerator design with resistive crossbars: Opportunities and challenges
S. Jain et al.
IBM Journal of Research and Development (2019)
Reducing the Impact of Phase-Change Memory Conductance Drift on the Inference of Large-Scale Hardware Neural Networks
S. Ambrogio et al.
2019 IEEE International Electron Devices Meeting (IEDM) (2019)
An 8.93-TOPS/W LSTM Recurrent Neural Network Accelerator Featuring Hierarchical Coarse-Grain Sparsity With All Parameters Stored On-Chip
Deepak Kadetotad et al.
IEEE Solid-State Circuits Letters (2019)
Recent progress in analog memory-based accelerators for deep learning
Hsinyu Tsai et al.
Journal of Physics D: Applied Physics (2018)
Equivalent-accuracy accelerated neural-network training using analogue memory
Stefano Ambrogio et al.
Nature (2018)
Multiscale Co-Design Analysis of Energy, Latency, Area, and Accuracy of a ReRAM Analog Neural Training Accelerator
Matthew J. Marinella et al.
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2018)
Newton: Gravitating Towards the Physical Limits of Crossbar Acceleration
Anirban Nag et al.
IEEE Micro (2018)
PROMISE: An End-to-End Design of a Programmable Mixed-Signal Accelerator for Machine-Learning Algorithms
Prakalp Srivastava et al.
2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA) (2018)
Toward on-chip acceleration of the backpropagation algorithm using nonvolatile memory
P. Narayanan et al.
IBM Journal of Research and Development (2017)
Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
Yu-Hsin Chen et al.
IEEE Journal of Solid-State Circuits (2017)
In-Datacenter Performance Analysis of a Tensor Processing Unit
Norman P. Jouppi et al.
44th Annual International Symposium on Computer Architecture (ISCA) (2017)
TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory
Mingyu Gao et al.
Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS XXII) (2017)
Metal-Oxide RRAM
H. -S. Philip Wong et al.
Proceedings of the IEEE (2012)
Phase change memory technology
Geoffrey W. Burr et al.
Journal of Vacuum Science & Technology B (2010)