4.5 Article

Convolutional neural network pruning based on misclassification cost

Related references

Note: only some of the references are listed here; download the original article for the complete reference information.
Article Geochemistry & Geophysics

Instance-Aware Distillation for Efficient Object Detection in Remote Sensing Images

Cong Li et al.

Summary: In this article, an instance-aware distillation method (InsDist) is proposed to obtain efficient object detectors for complex remote sensing images. InsDist combines feature-based and relation-based knowledge distillation, allowing the student model to imitate both the features and the relationships of the teacher model. Experiments on two large-scale remote sensing object detection datasets show that InsDist achieves noticeable gains over other distillation methods for various types of detectors. (A toy feature- and relation-distillation loss is sketched after this entry.)

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2023)
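As a rough illustration of combining feature-based and relation-based distillation, here is a minimal PyTorch sketch; it is not the authors' InsDist implementation, and the MSE feature term and cosine-similarity relation term are generic stand-ins chosen for brevity.

    import torch
    import torch.nn.functional as F

    def feature_kd_loss(f_s, f_t):
        # Feature-based term: match student and teacher features directly.
        return F.mse_loss(f_s, f_t)

    def relation_kd_loss(f_s, f_t):
        # Relation-based term: match pairwise cosine similarities between
        # instances in the batch (a common relational-KD formulation).
        def pairwise(f):
            f = F.normalize(f.flatten(1), dim=1)    # (N, D) unit vectors
            return f @ f.t()                        # (N, N) similarity matrix
        return F.mse_loss(pairwise(f_s), pairwise(f_t))

    # Toy features: a batch of 6 instances with 64-dimensional embeddings.
    f_student = torch.randn(6, 64, requires_grad=True)
    f_teacher = torch.randn(6, 64)
    loss = feature_kd_loss(f_student, f_teacher) + relation_kd_loss(f_student, f_teacher)
    loss.backward()
    print(f"combined distillation loss: {loss.item():.4f}")

In practice the two terms would be weighted and applied to detector-specific, instance-aware features rather than raw embeddings.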

Article Chemistry, Analytical

A Hardware-Friendly High-Precision CNN Pruning Method and Its FPGA Implementation

Xuefu Sui et al.

Summary: To address the large storage requirements, heavy computational load, untimely data supply, and low computational efficiency caused by the huge number of parameters when convolutional neural networks (CNNs) are deployed on hardware, an innovative hardware-friendly CNN pruning method called KRP was developed. KRP prunes convolutional kernels at the row scale and uses a new retraining method based on LR tracking to achieve high pruning rates while preserving accuracy. In addition, a high-performance convolutional computation module was designed on the FPGA platform to support deployment of KRP-pruned models. Comparative experiments on CNNs such as VGG and ResNet showed that KRP achieves higher accuracy than most pruning methods, and combining KRP with the GSNQ quantization method yields a high-precision, hardware-friendly network compression framework capable of lossless CNN compression with a 27x reduction in network model storage. FPGA experiments further showed that KRP not only requires less storage space but also significantly reduces on-chip hardware resource consumption and improves model parallelism on FPGAs. (The row-scale kernel pruning idea is sketched after this entry.)

SENSORS (2023)
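A minimal PyTorch sketch of pruning convolutional kernels at the row scale, assuming an L1-norm row score and a fixed number of rows kept per kernel; both are illustrative choices, not the published KRP criterion.

    import torch
    import torch.nn as nn

    def row_scale_prune_(conv: nn.Conv2d, rows_to_keep: int = 2) -> torch.Tensor:
        """Zero out the least important kernel rows of a Conv2d layer in place.

        For each (out, in) kernel of shape (kH, kW), keep the `rows_to_keep`
        rows with the largest L1 norm and zero the rest."""
        w = conv.weight.data                        # (out_ch, in_ch, kH, kW)
        row_score = w.abs().sum(dim=3)              # L1 norm per row: (out_ch, in_ch, kH)
        keep = row_score.topk(rows_to_keep, dim=2).indices
        mask = torch.zeros_like(row_score)
        mask.scatter_(2, keep, 1.0)
        mask = mask.unsqueeze(3).expand_as(w)       # broadcast over kernel width
        w.mul_(mask)
        return mask                                 # reuse during retraining to keep rows pruned

    conv = nn.Conv2d(16, 32, kernel_size=3, bias=False)
    mask = row_scale_prune_(conv, rows_to_keep=2)
    print(f"fraction of weights kept: {mask.mean().item():.2f}")

The returned mask would be re-applied during retraining so that pruned rows stay at zero, which is where a retraining scheme such as the LR tracking mentioned above comes into play.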

Article Computer Science, Hardware & Architecture

Ejection Fraction estimation using deep semantic segmentation neural network

Md Golam Rabiul Alam et al.

Summary: This paper proposes an automated ejection fraction estimation system from 2D echocardiography images using deep semantic segmentation neural networks. Two parallel pipelines of deep semantic segmentation models are proposed for efficient left ventricle segmentation, implemented with three different networks: UNet, ResUNet, and Deep ResUNet. The most accurate model achieved high Dice scores for left ventricle segmentation in both systolic and diastolic states. The proposed system can replace the eyeball estimation practice and reduce inter-observer variability. (The Dice overlap metric is sketched after this entry.)

JOURNAL OF SUPERCOMPUTING (2023)
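For reference, the Dice score reported for left-ventricle segmentation is the standard overlap measure 2|A∩B| / (|A| + |B|); a minimal NumPy sketch with a toy example follows.

    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Toy example: two half-overlapping 2x2 squares inside a 4x4 image.
    a = np.zeros((4, 4)); a[1:3, 1:3] = 1
    b = np.zeros((4, 4)); b[1:3, 2:4] = 1
    print(f"Dice = {dice_score(a, b):.3f}")   # 0.500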

Article Computer Science, Artificial Intelligence

Pruning Networks With Cross-Layer Ranking & k-Reciprocal Nearest Filters

Mingbao Lin et al.

Summary: This article introduces a novel filter-level network pruning method called CLR-RNF, which addresses the "long-tail" pruning problem in magnitude-based weight pruning methods. It proposes a computation-aware measurement of weight importance, a cross-layer ranking technique for weight selection, and a recommendation-based, k-reciprocal nearest filter selection scheme. Experimental results show that CLR-RNF outperforms existing methods on image classification tasks. (A generic cross-layer filter ranking is sketched after this entry.)

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)
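A generic sketch of cross-layer (global) filter ranking, using a per-layer-normalized L1 norm as a stand-in importance score; CLR-RNF's actual computation-aware measurement and k-reciprocal nearest-filter selection are more elaborate than this, so treat the code as an illustration of the ranking idea only.

    import torch
    import torch.nn as nn

    def global_filter_ranking(model: nn.Module, prune_ratio: float = 0.3):
        """Rank filters across all Conv2d layers by a per-layer-normalized L1 norm
        and mark the globally weakest `prune_ratio` fraction for removal."""
        scores = []   # (layer_name, filter_index, normalized score)
        for name, m in model.named_modules():
            if isinstance(m, nn.Conv2d):
                l1 = m.weight.detach().abs().sum(dim=(1, 2, 3))   # per-filter L1 norm
                l1 = l1 / (l1.max() + 1e-12)                      # make layers comparable
                scores += [(name, i, s.item()) for i, s in enumerate(l1)]
        scores.sort(key=lambda t: t[2])
        n_prune = int(len(scores) * prune_ratio)
        return scores[:n_prune]                                   # candidates to prune

    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3))
    print(len(global_filter_ranking(model, 0.3)), "filters marked for pruning")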

Article Computer Science, Information Systems

Universal Consistency of Deep Convolutional Neural Networks

Shao-Bo Lin et al.

Summary: Compared with practical research activity, the study of the theoretical behavior of deep convolutional neural networks (DCNNs) is lagging significantly behind. This paper proves that implementing empirical risk minimization on DCNNs with expansive convolution can be strongly universally consistent. A series of experiments further shows that DCNNs with expansive convolution, even without fully connected layers, perform as well as deep neural networks with contracting convolutional layers and fully connected layers.

IEEE TRANSACTIONS ON INFORMATION THEORY (2022)

Article Computer Science, Hardware & Architecture

Automatic lane marking prediction using convolutional neural network and S-Shaped Binary Butterfly Optimization

Abrar Mohammed Alajlan et al.

Summary: Lane detection using deep learning techniques with a parameter optimization algorithm outperforms traditional methods in accuracy and performance, demonstrating improved robustness and stability.

JOURNAL OF SUPERCOMPUTING (2022)

Article Computer Science, Hardware & Architecture

Deep learning convolutional neural network in diagnosis of serous effusion in patients with malignant tumor by tomography

Jiawen Zhang et al.

Summary: PET-CT images analyzed with a deep learning CNN had important diagnostic value for pleural effusion, ascites, pericardial effusion, and serous cavity effusion of unknown cause. Lung infection in tumor patients was an important factor aggravating serous effusion.

JOURNAL OF SUPERCOMPUTING (2022)

Review Computer Science, Hardware & Architecture

Deep learning techniques for tumor segmentation: a review

Huiyan Jiang et al.

Summary: Recently, deep learning has achieved remarkable results in the field of medical image segmentation, especially in tumor segmentation. Automatic tumor segmentation is crucial for radiotherapy and clinical practice, and this paper reviews the tumor segmentation methods based on deep learning in recent years.

JOURNAL OF SUPERCOMPUTING (2022)

Article Computer Science, Artificial Intelligence

Artifact- and content-specific quality assessment for MRI with image rulers

Ke Lei et al.

Summary: In clinical practice, the quality of MRI images is crucial for accurate diagnosis. Existing automatic image quality assessment methods are limited in accurately identifying the specific problems of low-quality images and the appropriate remedies. This study proposes a framework based on a multi-task CNN model, calibrated labels, and image rulers to effectively assess artifacts such as noise and motion in MRI images and to improve diagnostic accuracy and generalizability.

MEDICAL IMAGE ANALYSIS (2022)

Article Computer Science, Artificial Intelligence

Sparse CapsNet with explicit regularizer

Ruiyang Shi et al.

Summary: This paper proposes a sparse CapsNet with an explicit regularizer and sparse optimization method to improve efficiency by reducing the number of parameters and computational cost.

PATTERN RECOGNITION (2022)

Article Computer Science, Information Systems

Neural Network Pruning by Recurrent Weights for Finance Market

Songwen Pei et al.

Summary: Convolutional neural networks (CNNs) and deep learning technology are widely used in the financial market, promoting the development of finance and the Internet economy. Traditional channel pruning methods may mistakenly remove layers and destroy the structure of neural networks. This study therefore proposes a novel method, Neural Network Pruning by Recurrent Weights (NPRW), to compress neural networks while keeping the accuracy loss acceptable.

ACM TRANSACTIONS ON INTERNET TECHNOLOGY (2022)

Review Mathematical & Computational Biology

Harris Hawk Optimization: A Survey on Variants and Applications

B. K. Tripathy et al.

Summary: In this review, an overview of the Harris Hawk optimizer (HHO) is given, including the logic of its equations and its mathematical model. Different variants of HHO from the well-established literature are reviewed, with a focus on state-of-the-art improvements such as fuzzy HHO and a new intuitionistic fuzzy HHO algorithm. The applications of HHO in enhancing machine learning operations and tackling engineering optimization problems are also discussed. This survey provides a basis for future research on the development of swarm intelligence and the use of HHO for real-world problems.

COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Exploring Structural Sparsity in Neural Image Compression

Shanzhi Yin et al.

Summary: This paper proposes a method to achieve real-time acceleration in neural image compression networks through structural sparsity, without the need for specialized hardware design or algorithms. The experiments show up to 7x computation reduction and 3x acceleration with negligible performance drop.

2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP (2022)

Article Computer Science, Artificial Intelligence

Discrimination-Aware Network Pruning for Deep Model Compression

Jing Liu et al.

Summary: In this paper, a method called discrimination-aware channel pruning (DCP) is proposed to enhance the discriminative power of deep networks by introducing discrimination-aware losses and simultaneously considering the discrimination-aware loss and the reconstruction error for selecting the most discriminative channels. Additionally, the paper also introduces discrimination-aware kernel pruning (DKP) to further compress deep networks by removing redundant kernels. The proposed methods achieve effective network pruning with improved performance, as demonstrated in experiments on image classification and face recognition tasks.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)

Proceedings Paper Acoustics

Model Compression by Iterative Pruning with Knowledge Distillation and Its Application to Speech Enhancement

Zeyuan Wei et al.

Summary: In this paper, a compression strategy based on iterative pruning and knowledge distillation is proposed. By iteratively dropping the least impactful weights and fine-tuning the pruned model with the original model as a teacher, the proposed method dramatically reduces model size without significant performance degradation. Experimental results on speech enhancement tasks validate the effectiveness of the proposed compression strategy. (A minimal prune-then-distill step is sketched after this entry.)

INTERSPEECH 2022 (2022)
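A toy sketch of one prune-then-distill iteration, assuming simple magnitude pruning and a Hinton-style soft-target loss with the original model as teacher; the paper's exact pruning criterion and iteration schedule may differ, and the toy model and pruning ratio below are assumptions for illustration.

    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def magnitude_prune_(model: nn.Module, ratio: float = 0.2) -> None:
        """Zero the smallest-magnitude weights of every Linear/Conv2d layer in place."""
        for m in model.modules():
            if isinstance(m, (nn.Linear, nn.Conv2d)):
                w = m.weight.data
                k = int(w.numel() * ratio)
                if k > 0:
                    thresh = w.abs().flatten().kthvalue(k).values
                    w.mul_((w.abs() > thresh).float())

    def distill_loss(student_logits, teacher_logits, T: float = 4.0) -> torch.Tensor:
        """Soft-target distillation loss used while fine-tuning the pruned model."""
        return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * T * T

    # One hypothetical prune/fine-tune iteration on a toy model and random data.
    teacher = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    student = copy.deepcopy(teacher)
    magnitude_prune_(student, ratio=0.2)

    x = torch.randn(8, 16)
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distill_loss(student(x), teacher_logits)
    loss.backward()   # an optimizer step would follow in real fine-tuning
    print(f"distillation loss after pruning: {loss.item():.4f}")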

Proceedings Paper Acoustics

ACP: Adaptive Channel Pruning for Efficient Neural Networks

Yuan Zhang et al.

Summary: In recent years, deep convolutional neural networks have achieved impressive results on multiple tasks, but they require significant computation resources and energy costs, making it difficult to deploy them on power-constrained devices. This paper proposes an adaptive channel pruning module (ACPM) that can automatically adjust the pruning rate, effectively removing redundant parameters, and achieves state-of-the-art results on various networks and benchmarks.

2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) (2022)

Proceedings Paper Acoustics

An Efficient Method for Model Pruning Using Knowledge Distillation with Few Samples

ZhaoJing Zhou et al.

Summary: This paper introduces a distillation method called PFDD, which uses a progressive training strategy to match the feature distributions between compressed and original networks, thus improving the performance of compressed networks. Compared to the FSKD method, this approach does not require modifying the network structure and achieves significant results even with few samples.

2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Batch Normalization Tells You Which Filter is Important

Junghun Oh et al.

Summary: Filter pruning aims to improve the efficiency of convolutional neural networks (CNNs) without sacrificing performance by removing unimportant filters. This study proposes a method to evaluate the importance of each filter from its batch normalization parameters, achieving an outstanding trade-off between accuracy drop, computational complexity, and reduction in parameter count. (Ranking channels by BN scales is sketched after this entry.)

2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022) (2022)
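A minimal sketch of scoring channels by the magnitude of their batch-normalization scale (gamma); the paper's criterion may also use the shift parameter (beta), so this is illustrative only, and the keep ratio is an assumed hyperparameter.

    import torch
    import torch.nn as nn

    def bn_filter_importance(bn: nn.BatchNorm2d, keep_ratio: float = 0.5):
        """Score each channel by the magnitude of its BN scale (gamma) and
        return the indices of channels to keep, in ascending order."""
        gamma = bn.weight.detach().abs()                  # one scale per channel
        n_keep = max(1, int(gamma.numel() * keep_ratio))
        return gamma.topk(n_keep).indices.sort().values

    bn = nn.BatchNorm2d(16)
    with torch.no_grad():
        bn.weight.copy_(torch.randn(16))                  # pretend these were learned
    print(bn_filter_importance(bn, keep_ratio=0.5))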

Article Computer Science, Information Systems

Methods for Pruning Deep Neural Networks

Sunil Vadera et al.

Summary: This paper presents a survey of methods for pruning deep neural networks, categorizing them into magnitude-based pruning, clustering-based methods, and sensitivity analysis. It highlights key studies within these categories and serves as a resource for comparing results across different methods and architectures.

IEEE ACCESS (2022)

Article Computer Science, Interdisciplinary Applications

Wasserstein GANs for MR Imaging: From Paired to Unpaired Training

Ke Lei et al.

Summary: This article uses unpaired adversarial training for reconstruction networks, where a generator suppresses input image artifacts and a discriminator scores reconstruction quality based on the Wasserstein distance. Experimental results demonstrate that unpaired training enables diagnostic-quality reconstruction even in the absence of high-quality image labels or with limited label availability. (A minimal Wasserstein critic objective is sketched after this entry.)

IEEE TRANSACTIONS ON MEDICAL IMAGING (2021)
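The Wasserstein-distance scoring can be illustrated with a bare-bones critic objective; gradient penalty or weight clipping and the generator loss are omitted, and the toy critic network below is an assumption for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    def critic_loss(critic: nn.Module, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
        # The critic maximizes D(real) - D(fake); we minimize the negative.
        return -(critic(real).mean() - critic(fake).mean())

    critic = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))   # toy critic
    real = torch.randn(8, 1, 8, 8)      # stand-ins for high-quality images
    fake = torch.randn(8, 1, 8, 8)      # stand-ins for generator outputs
    print(f"critic loss: {critic_loss(critic, real, fake).item():.4f}")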

Article Computer Science, Artificial Intelligence

Beyond softmax loss: Intra-concentration and inter-separability loss for classification

Hanyang Peng et al.

Summary: Research on classification losses has been comparatively underdeveloped; this work proposes a new loss function that is independent of the softmax loss, together with an efficient optimization algorithm. It shows competitive results on benchmark datasets and robustness in class-imbalanced and outlier-contaminated settings.

NEUROCOMPUTING (2021)

Article Computer Science, Artificial Intelligence

Improving the accuracy of pruned network using knowledge distillation

Setya Widyawan Prakosa et al.

Summary: This study explores integrating knowledge distillation with pruning filters to enhance the accuracy of convolutional neural network models. Results show that this method can improve accuracy without increasing inference time.

PATTERN ANALYSIS AND APPLICATIONS (2021)

Article Computer Science, Theory & Methods

Object Detection Using Deep Learning Methods in Traffic Scenarios

Azzedine Boukerche et al.

Summary: The recent boom in autonomous driving has brought object detection in traffic scenes into focus, with challenges such as real-time detection, changeable weather, and complex lighting conditions. Deep learning has expanded into this field and achieved breakthroughs; this survey covers key frameworks, categorized object detection applications, evaluation metrics, and datasets across more than 100 research papers, and outlines open research directions for further exploration.

ACM COMPUTING SURVEYS (2021)

Article Computer Science, Hardware & Architecture

Understanding Deep Learning (Still) Requires Rethinking Generalization

Chiyuan Zhang et al.

Summary: Although traditional explanations fall short of justifying the excellent generalization of large neural networks, experiments show that state-of-the-art convolutional networks can easily fit random labels during training, indicating that a different mechanism must account for their strong performance in practice.

COMMUNICATIONS OF THE ACM (2021)

Article Computer Science, Artificial Intelligence

Filter pruning with a feature map entropy importance criterion for convolution neural networks compressing

Jielei Wang et al.

Summary: Recent studies have shown that model pruning is an effective way to reduce the computing and storage costs of deep neural networks. In this work, a novel structured pruning method for convolutional neural networks is proposed that prunes filter-level redundant weights according to a feature-map entropy importance criterion (FPEI). (An entropy-based filter score is sketched after this entry.)

NEUROCOMPUTING (2021)
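A rough sketch of an entropy-based filter importance score computed from feature-map activation histograms; the exact FPEI definition in the paper may differ, and the histogram binning here is an illustrative choice.

    import torch
    import torch.nn as nn

    def feature_map_entropy(conv: nn.Conv2d, x: torch.Tensor, bins: int = 32) -> torch.Tensor:
        """Estimate an entropy score per output filter from its activation histogram.
        Filters whose feature maps carry little information (low entropy) are
        candidates for pruning."""
        with torch.no_grad():
            fmap = conv(x)                                  # (N, C, H, W)
        scores = []
        for c in range(fmap.size(1)):
            hist = torch.histc(fmap[:, c], bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            scores.append(-(p * p.log()).sum())
        return torch.stack(scores)                          # higher = more informative

    conv = nn.Conv2d(3, 8, kernel_size=3)
    x = torch.randn(4, 3, 32, 32)
    print(feature_map_entropy(conv, x))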

Article Chemistry, Analytical

Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks

Tao Wu et al.

Summary: This paper proposes a differential-evolution-based layer-wise weight pruning method, which analyzes the pruning sensitivity of each layer and builds an optimization model to find the optimal pruning sensitivity for each layer. Experimental results demonstrate the effectiveness of this method in significantly reducing the number of weight parameters in four different deep models.

SENSORS (2021)

Article Computer Science, Artificial Intelligence

Dynamical Channel Pruning by Conditional Accuracy Change for Deep Neural Networks

Zhiqiang Chen et al.

Summary: This article introduces a dynamic channel pruning method that optimizes the pruning process, reducing the parameters and computations of neural networks effectively while maintaining higher accuracy. By shaping a more desirable network structure, significant performance improvements can be achieved.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021)

Article Computer Science, Artificial Intelligence

EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression

Xiaofeng Ruan et al.

Summary: In this article, an efficient decomposition and pruning (EDP) scheme is proposed that automatically achieves low-rank decomposition and channel pruning by constructing a compressed-aware block, with the aim of compressing deep neural networks. The network architecture is further compressed and optimized by a novel Pruning & Merging (PM) module, yielding a high compression ratio with acceptable accuracy degradation and outperforming state-of-the-art methods on various metrics. (A truncated-SVD layer decomposition is sketched after this entry.)

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021)
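The low-rank half of such a decomposition-and-pruning scheme can be sketched with a truncated SVD that splits one layer into two thinner ones; it is shown for a Linear layer for brevity, and EDP's compressed-aware block and channel pruning are not reproduced here.

    import torch
    import torch.nn as nn

    def low_rank_decompose(fc: nn.Linear, rank: int) -> nn.Sequential:
        """Replace one Linear layer by two thinner Linear layers via truncated SVD."""
        W = fc.weight.data                                   # (out, in)
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        U_r = U[:, :rank] * S[:rank]                         # (out, rank)
        V_r = Vh[:rank, :]                                   # (rank, in)
        first = nn.Linear(fc.in_features, rank, bias=False)
        second = nn.Linear(rank, fc.out_features, bias=fc.bias is not None)
        first.weight.data.copy_(V_r)
        second.weight.data.copy_(U_r)
        if fc.bias is not None:
            second.bias.data.copy_(fc.bias.data)
        return nn.Sequential(first, second)

    fc = nn.Linear(256, 128)
    approx = low_rank_decompose(fc, rank=32)
    x = torch.randn(4, 256)
    print(f"max approximation error: {(fc(x) - approx(x)).abs().max().item():.4f}")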

Article Engineering, Electrical & Electronic

SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration

Jun Shi et al.

Summary: This paper proposes a saliency-adaptive sparsity learning (SASL) approach that estimates the saliency of filters and adjusts the regularization strength accordingly, so that prediction performance is better preserved. During the pruning phase, a hard-sample mining strategy is used to optimize the data-dependent criterion, showing higher effectiveness and efficiency.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (2021)

Article Computer Science, Hardware & Architecture

The improvement in obstacle detection in autonomous vehicles using YOLO non-maximum suppression fuzzy algorithm

Nayereh Zaghari et al.

Summary: This research presents a method to improve the behavioral cloning of self-driving cars by enhancing obstacle detection with a proposed YOLO non-maximum-suppression fuzzy algorithm. The hybrid of fuzzy logic and NMS improves the accuracy of the detection network, with evaluation criteria showing higher speed, lower FPR, and lower FNR than the baseline YOLOv3 model; the network reaches an accuracy of 95%, indicating successful results. (Plain non-maximum suppression is sketched after this entry.)

JOURNAL OF SUPERCOMPUTING (2021)
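For context, plain (hard-threshold) non-maximum suppression looks as follows; the cited work replaces the fixed IoU threshold with a fuzzy decision rule, which is not reproduced in this sketch.

    import numpy as np

    def iou(a: np.ndarray, b: np.ndarray) -> float:
        """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def nms(boxes: np.ndarray, scores: np.ndarray, thresh: float = 0.5):
        """Plain non-maximum suppression with a hard IoU threshold."""
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) <= thresh])
        return keep

    boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
    scores = np.array([0.9, 0.8, 0.7])
    print(nms(boxes, scores))   # [0, 2]: the near-duplicate box 1 is suppressed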

Article Radiology, Nuclear Medicine & Medical Imaging

On Interpretability of Artificial Neural Networks: A Survey

Feng-Lei Fan et al.

Summary: Deep learning by artificial deep neural networks has achieved great success in various fields, but their black-box nature hinders their adoption in critical applications like medicine. The interpretability of neural networks has become increasingly important, with wide applications in medicine and various future research directions.

IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES (2021)

Article Computer Science, Hardware & Architecture

CURATING: A multi-objective based pruning technique for CNNs

Santanu Pattanayak et al.

Summary: In this paper, a novel pruning technique called CURATING is proposed to address the issue of increased model size and computational overheads in convolutional neural networks. By retaining filters that are diverse, significant, and likely to produce higher activations, CURATING achieves a better tradeoff between model size, accuracy, and inference latency compared to existing techniques, as demonstrated on various CNNs and datasets.

JOURNAL OF SYSTEMS ARCHITECTURE (2021)

Article Engineering, Electrical & Electronic

An efficient pruning scheme of deep neural networks for Internet of Things applications

Chen Qi et al.

Summary: The paper proposes a novel pruning-based paradigm to reduce the computational cost of DNNs while maintaining their expressive capability. The approach is evaluated on various benchmark datasets and compared with typical advanced CNN architectures, demonstrating superior performance and effectiveness.

EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Architecture Disentanglement for Deep Neural Networks

Jie Hu et al.

Summary: Understanding the inner workings of deep neural networks (DNNs) is important for trustworthy artificial intelligence. The study introduces neural architecture disentanglement (NAD) to disentangle DNNs into sub-architectures for independent tasks, revealing insights into the inference processes. Experimental results show that DNNs can be divided into sub-architectures, deeper layers do not always correspond to higher semantics, and the connection type in a DNN affects information flow and disentanglement behaviors.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Comparison Analysis for Pruning Algorithms of Neural Networks

Xi Chen et al.

Summary: This paper studies pruning algorithms for optimizing deep learning models, classifies and summarizes the algorithms' structures, scheduling, and scoring, and compares the compression effects of mainstream algorithms on deep neural networks. Future pruning algorithms can be designed based on network sparsity structure, scheduling, and parameter ranking criteria, while also considering the factors of datasets and models.

2021 2ND INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND INTELLIGENT CONTROL (ICCEIC 2021) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Convolutional Neural Network Pruning with Structural Redundancy Reduction

Zi Wang et al.

Summary: According to the study, identifying structural redundancy is more important in network pruning than finding unimportant filters. Pruning in layers with the most structural redundancy outperforms pruning the least important filters across all layers, leading to significant performance improvements.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 (2021)

Proceedings Paper Acoustics

Network Pruning Using Linear Dependency Analysis on Feature Maps

Hao Pan et al.

Summary: The paper introduces a method for network pruning based on linear dependency analysis, which improves network accuracy while maintaining similar parameters and computational complexity. Experimental results demonstrate that this method achieves state-of-the-art results on several different benchmarks and networks.

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

A Discriminant Information Approach to Deep Neural Network Pruning

Zejiang Hou et al.

Summary: Network pruning is essential for accelerating deep neural networks for mobile and edge applications. The proposed channel pruning method based on feature-map discriminant introduces a Discriminant Information (DI) criterion to accurately quantify channel importance. By utilizing a greedy pruning algorithm and structure distillation technique, the method can automatically select pruned structure meeting resource constraints. Extensive experiments demonstrate significant reduction in FLOPs with no loss in accuracy on ImageNet dataset.

2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Combining Weight Pruning and Knowledge Distillation For CNN Compression

Nima Aghli et al.

Summary: Model compression is crucial for deep neural networks, but popular weight pruning methods are not directly suitable for complex architectures such as ResNets; this work therefore combines weight pruning with knowledge distillation for CNN compression.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021 (2021)

Article Engineering, Electrical & Electronic

Icing-EdgeNet: A Pruning Lightweight Edge Intelligent Method of Discriminative Driving Channel for Ice Thickness of Transmission Lines

Bo Wang et al.

Summary: An ice monitoring system based on edge intelligence is proposed in this article, along with a lightweight visual method for ice thickness identification suitable for terminals with limited computing resources. Experimental results showed that the proposed method achieves high recognition accuracy under extreme meteorological conditions with a model size of only 11 MB, meeting the deployment requirements of ice monitoring terminals.

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT (2021)

Article Computer Science, Hardware & Architecture

Classification of the tree for aerial image using a deep convolution neural network and visual feature clustering

Chuen Horng Lin et al.

JOURNAL OF SUPERCOMPUTING (2020)

Article Computer Science, Hardware & Architecture

Deep convolutional network for breast cancer classification: enhanced loss function (ELF)

Smarika Acharya et al.

JOURNAL OF SUPERCOMPUTING (2020)

Article Automation & Control Systems

Learning Optimized Structure of Neural Networks by Hidden Node Pruning With L1 Regularization

Xuetao Xie et al.

IEEE TRANSACTIONS ON CYBERNETICS (2020)

Article Engineering, Electrical & Electronic

Acceleration of Deep Convolutional Neural Networks Using Adaptive Filter Pruning

Pravendra Singh et al.

IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING (2020)

Article Automation & Control Systems

Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks

Yang He et al.

IEEE TRANSACTIONS ON CYBERNETICS (2020)

Proceedings Paper Computer Science, Artificial Intelligence

Class-dependent Pruning of Deep Neural Networks

Rahim Entezari et al.

2020 IEEE SECOND WORKSHOP ON MACHINE LEARNING ON EDGE IN SENSOR SYSTEMS (SENSYS-ML 2020) (2020)

Article Computer Science, Information Systems

Global Biased Pruning Considering Layer Contribution

Zheng Huang et al.

IEEE ACCESS (2020)

Article Computer Science, Artificial Intelligence

Evolving Unsupervised Deep Neural Networks for Learning Meaningful Representations

Yanan Sun et al.

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION (2019)

Article Computer Science, Artificial Intelligence

ThiNet: Pruning CNN Filters for a Thinner Net

Jian-Hao Luo et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2019)

Article Computer Science, Artificial Intelligence

Deep Supervision with Intermediate Concepts

Chi Li et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2019)

Article Computer Science, Artificial Intelligence

Interpreting Deep Visual Representations via Network Dissection

Bolei Zhou et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2019)

Article Engineering, Electrical & Electronic

Sparse Artificial Neural Networks Using a Novel Smoothed LASSO Penalization

Basava Naga Girish Koneru et al.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS (2019)

Article Computer Science, Artificial Intelligence

Generalization and Expressivity for Deep Nets

Shao-Bo Lin

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2019)

Article Computer Science, Artificial Intelligence

Transfer channel pruning for compressing deep domain adaptation models

Chaohui Yu et al.

INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Variational Convolutional Neural Network Pruning

Chenglong Zhao et al.

2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Information theory based pruning for CNN compression and its application to image classification and action recognition

Hai-Hong Phan et al.

2019 16TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS) (2019)

Article Computer Science, Hardware & Architecture

ImageNet Classification with Deep Convolutional Neural Networks

Alex Krizhevsky et al.

COMMUNICATIONS OF THE ACM (2017)

Proceedings Paper Computer Science, Artificial Intelligence

Channel Pruning for Accelerating Very Deep Neural Networks

Yihui He et al.

2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) (2017)

Article Computer Science, Artificial Intelligence

Accelerating Very Deep Convolutional Networks for Classification and Detection

Xiangyu Zhang et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2016)

Article Mathematical & Computational Biology

Applying Cost-Sensitive Extreme Learning Machine and Dissimilarity Integration to Gene Expression Data Classification

Yanqiu Liu et al.

COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE (2016)