
Gibbon: An Efficient Co-Exploration Framework of NN Model and Processing-In-Memory Architecture

Related References

Note: only a subset of the references is listed here; download the original article for the complete reference information.
Article Computer Science, Artificial Intelligence

A Survey on Evolutionary Neural Architecture Search

Yuqiao Liu et al.

Summary: Deep neural networks have achieved great success in many applications, but designing their architectures is a labor-intensive process that requires expert knowledge. Neural architecture search (NAS) enables automatic architecture design, and evolutionary computation (EC) based methods have gained particular attention and success. However, no comprehensive summary of EC-based NAS algorithms has been available.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)

Article Computer Science, Hardware & Architecture

DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-Chip Training

Xiaochen Peng et al.

Summary: DNN+NeuroSim is an integrated framework for benchmarking compute-in-memory (CIM) accelerators for deep neural networks. It automatically maps algorithms to hardware, evaluates chip-level metrics, and investigates the impact of analog emerging nonvolatile memory (eNVM) device properties on on-chip training. The framework provides insights into synaptic devices for on-chip training; both inference and training versions are available on GitHub.

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2021)

Article Computer Science, Hardware & Architecture

Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators

Weiwen Jiang et al.

Summary: The article discusses the co-exploration of neural architectures and hardware design, proposing the NACIM framework, which finds robust neural networks and achieves high energy efficiency while accounting for device variation.

IEEE TRANSACTIONS ON COMPUTERS (2021)

Article Computer Science, Information Systems

NAS4RRAM: neural network architecture search for inference on RRAM-based accelerators

Zhihang Yuan et al.

Summary: The article outlines the requirements for deploying neural networks on RRAM-based accelerators and proposes a framework that uses NAS to design networks with high prediction accuracy under these requirements.

SCIENCE CHINA-INFORMATION SCIENCES (2021)

Article Computer Science, Theory & Methods

A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions

Pengzhen Ren et al.

Summary: Neural Architecture Search (NAS) aims to reduce human intervention by allowing algorithms to design neural architectures automatically. The related research is complex and extensive, calling for a comprehensive and systematic survey.

ACM COMPUTING SURVEYS (2021)

Proceedings Paper Automation & Control Systems

NAAS: Neural Accelerator Architecture Search

Yujun Lin et al.

Summary: NAAS introduces a comprehensive approach that jointly searches the neural network architecture, accelerator architecture, and compiler mapping, reducing energy-delay product (EDP) compared with human designs while also improving accuracy. The approach demonstrates the potential of data-driven methods for architecture exploration.

2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC) (2021)

Proceedings Paper Computer Science, Hardware & Architecture

Uncertainty Modeling of Emerging Device based Computing-in-Memory Neural Accelerators with Application to Neural Architecture Search

Zheyu Yan et al.

Summary: Emerging device-based computing-in-memory (CiM) is a promising candidate for energy-efficient deep neural network (DNN) computation, but device uncertainties can cause an accuracy drop when trained models are deployed on hardware. To mitigate this impact, the authors propose UAE, an uncertainty-aware neural architecture search scheme that identifies DNN models that are both accurate and robust against device uncertainties.

2021 26TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC) (2021)

Article Computer Science, Hardware & Architecture

Low Bit-Width Convolutional Neural Network on RRAM

Yi Cai et al.

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2020)

Article Computer Science, Hardware & Architecture

TIME: A Training-in-Memory Architecture for RRAM-Based Deep Neural Networks

Ming Cheng et al.

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2019)

Article Computer Science, Hardware & Architecture

Computing in Memory With Spin-Transfer Torque Magnetic RAM

Shubham Jain et al.

IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS (2018)

Article Computer Science, Hardware & Architecture

ImageNet Classification with Deep Convolutional Neural Networks

Alex Krizhevsky et al.

COMMUNICATIONS OF THE ACM (2017)

Article Computer Science, Hardware & Architecture

A Survey of Phase Change Memory Systems

Fei Xia et al.

JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY (2015)

Article Engineering, Electrical & Electronic

Resistive Random Access Memory (ReRAM) Based on Metal Oxides

Hiroyuki Akinaga et al.

PROCEEDINGS OF THE IEEE (2010)

Article Computer Science, Artificial Intelligence

A fast and elitist multiobjective genetic algorithm: NSGA-II

K. Deb et al.

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION (2002)