4.8 Article

Diverse Sample Generation: Pushing the Limit of Generative Data-Free Quantization

Related references

Note: Only some of the references are listed.
Article Computer Science, Artificial Intelligence

Learning Efficient Binarized Object Detectors With Information Compression

Ziwei Wang et al.

Summary: In this paper, a binarized neural network learning method (BiDet) is proposed for efficient object detection. BiDet fully exploits the representational capacity of binary neural networks by removing redundancy, which improves detection precision and reduces false positives.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Article Computer Science, Artificial Intelligence

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond

Fanghui Liu et al.

Summary: This survey reviews the research on random features from the past ten years, summarizing the motivations, characteristics, and contributions of representative algorithms. It discusses the theoretical results on the key question of how many random features are needed for high approximation quality, evaluates popular algorithms on large-scale benchmark datasets, and explores the relationship between random features and modern deep neural networks. The survey serves as an introduction to the topic and a guide for practitioners interested in applying these algorithms and understanding the theoretical results.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Article Computer Science, Artificial Intelligence

Effective Training of Convolutional Neural Networks With Low-Bitwidth Weights and Activations

Bohan Zhuang et al.

Summary: This paper addresses the training problem of a deep convolutional neural network with both low-bitwidth weights and activations. Three practical approaches, including progressive quantization, stochastic precision, and joint knowledge distillation, are proposed to improve the network training. The effectiveness of the proposed methods is demonstrated through extensive experiments on various datasets.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Article Computer Science, Artificial Intelligence

Learning Channel-Wise Interactions for Binary Convolutional Neural Networks

Ziwei Wang et al.

Summary: This paper proposes a channel-wise interaction-based binary convolutional neural network (CI-BCNN) approach for efficient inference. Reinforcement learning is used to mine channel-wise interactions, which correct inconsistent signs and alleviate noise in channel-wise priors, thereby improving inference efficiency.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization

Yoojin Choi et al.

Summary: ZS-CGAN is a novel method for training a conditional generative adversarial network without training data, using a pre-trained discriminative model to generate synthetic samples that mimic the characteristics of the original data. This approach has been shown to achieve state-of-the-art data-free network quantization with minimal loss in accuracy compared to conventional data-dependent quantization.

2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Zero-shot Adversarial Quantization

Yuang Liu et al.

Summary: This paper proposes a zero-shot adversarial quantization (ZAQ) framework that achieves model quantization without access to training data, through effective discrepancy estimation and knowledge transfer. Experiments demonstrate that ZAQ outperforms strong zero-shot baselines, validating its effectiveness.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Generative Zero-shot Network Quantization

Xiangyu He et al.

Summary: Convolutional neural networks can learn realistic image priors from training samples and, by leveraging intrinsic Batch Normalization statistics, can reconstruct images suitable for high-level image recognition tasks. The method consistently outperforms existing data-free quantization methods on benchmark datasets.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021 (2021)

Article Computer Science, Artificial Intelligence

Hierarchical Binary CNNs for Landmark Localization with Limited Resources

Adrian Bulat et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2020)

Article Computer Science, Artificial Intelligence

A survey on semi-supervised learning

Jesper E. van Engelen et al.

MACHINE LEARNING (2020)

Article Computer Science, Artificial Intelligence

Deep Neural Network Compression by In-Parallel Pruning-Quantization

Frederick Tung et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2020)

Article Computer Science, Artificial Intelligence

Binary neural networks: A survey

Haotong Qin et al.

PATTERN RECOGNITION (2020)

Article Computer Science, Artificial Intelligence

Towards Efficient U-Nets: A Coupled and Quantized Approach

Zhiqiang Tang et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2020)

Article Computer Science, Artificial Intelligence

Learning Deep Binary Descriptor with Multi-Quantization

Yueqi Duan et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2019)

Article Computer Science, Hardware & Architecture

ImageNet Classification with Deep Convolutional Neural Networks

Alex Krizhevsky et al.

COMMUNICATIONS OF THE ACM (2017)

Proceedings Paper Computer Science, Artificial Intelligence

Fast R-CNN

Ross Girshick

2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) (2015)