4.6 Article

3D-KCPNet: Efficient 3DCNNs based on tensor mapping theory

Related references

Note: only a subset of the references is listed here; download the original article for the complete reference information.
Article Computer Science, Artificial Intelligence

Kronecker CP Decomposition With Fast Multiplication for Compressing RNNs

Dingheng Wang et al.

Summary: This article introduces a method for compressing recurrent neural networks (RNNs) based on Kronecker CANDECOMP/PARAFAC (KCP) decomposition. Experimental results demonstrate that KCP-RNNs achieve accuracy comparable to other tensor decomposition methods while offering high compression ratios and efficiency in both space and computational complexity.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)
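
The core idea named in the summary can be illustrated with a small sketch: a large weight matrix is approximated as a sum (over a CP rank) of Kronecker products of small factor matrices. The NumPy snippet below is a minimal illustration with made-up shapes and rank, not the paper's actual configuration or its fast multiplication algorithm.

# Minimal sketch of the Kronecker CP (KCP) idea: W ~= sum_r kron(A_r^1, A_r^2, A_r^3).
# Shapes and rank are illustrative assumptions only.
import numpy as np
from functools import reduce

rank = 4                                   # CP rank (illustrative)
factor_shapes = [(4, 8), (8, 4), (8, 8)]   # small Kronecker factors
factors = [[np.random.randn(*s) for s in factor_shapes] for _ in range(rank)]

def kcp_to_full(factors):
    # reconstruct the full weight matrix from the KCP factors
    return sum(reduce(np.kron, term) for term in factors)

W = kcp_to_full(factors)
print(W.shape)                             # (256, 256)
full_params = W.size                       # 65536
kcp_params = sum(a.size for term in factors for a in term)
print(full_params, kcp_params)             # 65536 vs. 512 stored parameters

In practice the full matrix need not be materialized at all, since a matrix-vector product can be computed factor by factor, which is where the "fast multiplication" in the title comes in.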

Article Computer Science, Artificial Intelligence

Compressing convolutional neural networks with hierarchical Tucker-2 decomposition

Mateusz Gabor et al.

Summary: This study proposes a novel CNN compression technique based on the hierarchical Tucker-2 (HT-2) tensor decomposition, which achieves a significant reduction in parameters and FLOPs with only a minor drop in classification accuracy. HT-2 outperforms most other compression methods.

APPLIED SOFT COMPUTING (2023)
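
As a rough illustration of the Tucker-2 idea behind this entry (plain Tucker-2 here, not the hierarchical variant studied in the paper), a convolution kernel with layout (out_channels, in_channels, kh, kw) is factored into two channel-mode factor matrices and a small core. Ranks and sizes below are illustrative assumptions.

# Minimal NumPy sketch of a Tucker-2 factorization of a conv kernel.
import numpy as np

T, S, d = 64, 32, 3          # output channels, input channels, kernel size
rT, rS = 16, 8               # Tucker-2 ranks along the two channel modes

U = np.random.randn(T, rT)          # output-channel factor
V = np.random.randn(S, rS)          # input-channel factor
G = np.random.randn(rT, rS, d, d)   # core tensor with reduced channel modes

# Reconstruct the full kernel: W[t,s,i,j] = sum_{a,b} U[t,a] V[s,b] G[a,b,i,j]
W = np.einsum('ta,sb,abij->tsij', U, V, G)
print(W.shape)               # (64, 32, 3, 3)

# In a network this corresponds to a 1x1 conv (V), a d x d conv (G), and
# another 1x1 conv (U), which is where the parameter and FLOP savings come from.
print(T * S * d * d, U.size + V.size + G.size)   # 18432 vs. 2432 in this toy case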

Article Computer Science, Artificial Intelligence

Realistic acceleration of neural networks with fine-grained tensor decomposition

Rui Lv et al.

Summary: Tensor decomposition methods show promise and cost advantages for both compressing and accelerating DNNs. This paper proposes a transposition-free algorithm for computing DNNs in KCP format that is more efficient than existing algorithms. Experiments demonstrate significant advantages in accuracy, space complexity, computation complexity, and running time for both KCP-DNNs and KCP-RNNs.

NEUROCOMPUTING (2022)

Article Automation & Control Systems

Efficient Visual Recognition: A Survey on Recent Advances and Brain-inspired Methodologies

Yang Wu et al.

Summary: Visual recognition is a key research area in computer vision, pattern recognition, and artificial intelligence. While accuracy is important, efficiency is also crucial for both academic research and industrial applications. This survey reviews recent advances and proposes new directions for improving the efficiency of visual recognition approaches.

MACHINE INTELLIGENCE RESEARCH (2022)

Article Computer Science, Artificial Intelligence

Nonlinear tensor train format for deep neural network compression

Dingheng Wang et al.

Summary: This research introduces a novel nonlinear tensor train (NTT) format, developed from a study of various tensor decomposition methods, which compensates for the accuracy loss that the standard tensor train (TT) format cannot recover by embedding additional nonlinear activation functions between the sequenced contractions and convolutions. Experimental results demonstrate that DNNs compressed in the NTT format maintain accuracy on multiple datasets.

NEURAL NETWORKS (2021)
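
The mechanism mentioned in the summary, activations inserted between the sequenced contractions of a TT-format layer, can be sketched as follows. Core shapes, ranks, and the choice of ReLU are illustrative assumptions, not the paper's exact design.

# Rough NumPy sketch: a TT-format "matrix" applied core by core, with an
# activation applied in the sequenced contractions (the NTT idea).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Core k has shape (r_{k-1}, m_k, n_k, r_k); TT ranks (1, 2, 2, 1) map an
# input of shape (4, 4, 4) to an output of shape (3, 3, 3).
ms, ns, ranks = [4, 4, 4], [3, 3, 3], [1, 2, 2, 1]
cores = [np.random.randn(ranks[k], ms[k], ns[k], ranks[k + 1]) * 0.1
         for k in range(3)]

x = np.random.randn(*ms)     # input vector reshaped into a tensor
t = x[None, ...]             # prepend the dummy rank index r_0 = 1

for G in cores:
    # contract the current rank index and one input mode with the next core
    t = np.einsum('rmns,rm...->s...n', G, t)
    t = relu(t)              # nonlinearity inserted between contractions (illustrative placement)

y = t[0]                     # drop the final dummy rank index
print(y.shape)               # (3, 3, 3)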

Article Computer Science, Artificial Intelligence

QTTNet: Quantized tensor train neural networks for 3D object and video recognition

Donghyun Lee et al.

Summary: This article introduces QTTNet, a training framework for three-dimensional convolutional neural networks that combines tensor train decomposition and data quantization to further shrink model size and reduce memory and time costs. Experimental results demonstrate the effectiveness and competitiveness of this method in compressing 3D object and video recognition models.

NEURAL NETWORKS (2021)
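
The combination described above can be pictured as storing tensor-train cores in a quantized form and dequantizing them on use. The toy snippet below uses a simple uniform 8-bit quantizer as a stand-in; the bit width and quantization scheme are illustrative assumptions, not QTTNet's actual design.

# Toy NumPy sketch: quantize one TT core to int8 with a per-core scale.
import numpy as np

def quantize(core, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(core).max() / qmax
    q = np.clip(np.round(core / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

core = np.random.randn(2, 4, 3, 2).astype(np.float32)   # one TT core
q, s = quantize(core)
approx = dequantize(q, s)
print(np.abs(core - approx).max())    # small quantization error
print(core.nbytes, q.nbytes)          # 4x memory reduction for this core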

Proceedings Paper Computer Science, Artificial Intelligence

Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework

Miao Yin et al.

Summary: This paper proposes a systematic framework for tensor decomposition-based model compression using the Alternating Direction Method of Multipliers (ADMM), which works for both CNNs and RNNs and can be easily modified to fit other tensor decomposition approaches.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 (2021)
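
A highly simplified sketch of the kind of ADMM splitting used for decomposition-constrained training: the weight W follows the task loss plus a proximal term, an auxiliary variable Z is projected onto the low-rank set (truncated SVD below, as a stand-in for a tensor decomposition), and a dual variable U accumulates the gap. The toy loss, rank, and step sizes are illustrative assumptions, not the paper's algorithm.

# Toy NumPy sketch of an ADMM loop with a low-rank constraint.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32))
Z, U = W.copy(), np.zeros_like(W)
rho, lr, rank = 1.0, 0.1, 4

def project_low_rank(M, r):
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r]

for _ in range(50):
    # W-step: one gradient step on a toy loss 0.5*||W||^2 plus (rho/2)*||W - Z + U||^2
    grad_loss = W
    W -= lr * (grad_loss + rho * (W - Z + U))
    # Z-step: projection onto the rank-constrained set
    Z = project_low_rank(W + U, rank)
    # dual update
    U += W - Z

print(np.linalg.matrix_rank(Z), np.linalg.norm(W - Z))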

Proceedings Paper Computer Science, Artificial Intelligence

Towards Extremely Compact RNNs for Video Recognition with Fully Decomposed Hierarchical Tucker Structure

Miao Yin et al.

Summary: This paper proposes a method for building highly efficient and compact RNN models using a fully decomposed hierarchical Tucker structure, which reduces storage cost, improves accuracy, and, unlike existing tensor decomposition-based methods, allows comprehensive compression of the entire RNN model.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 (2021)

Article Computer Science, Artificial Intelligence

Training high-performance and large-scale deep neural networks with full 8-bit integers

Yukuan Yang et al.

NEURAL NETWORKS (2020)

Article Computer Science, Artificial Intelligence

Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning

Shaohui Lin et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2020)

Article Engineering, Electrical & Electronic

Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey

Lei Deng et al.

PROCEEDINGS OF THE IEEE (2020)

Article Computer Science, Artificial Intelligence

Compressing 3DCNNs based on tensor train decomposition

Dingheng Wang et al.

NEURAL NETWORKS (2020)

Article Computer Science, Artificial Intelligence

Tensor Networks for Latent Variable Analysis: Higher Order Canonical Polyadic Decomposition

Anh-Huy Phan et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2020)

Article Computer Science, Artificial Intelligence

Hybrid tensor decomposition in neural network compression

Bijiao Wu et al.

NEURAL NETWORKS (2020)

Proceedings Paper Computer Science, Artificial Intelligence

Factorized Higher-Order CNNs with an Application to Spatio-Temporal Emotion Estimation

Jean Kossaifi et al.

2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2020)

Article Computer Science, Artificial Intelligence

Semisupervised Discriminant Multimanifold Analysis for Action Recognition

Zengmin Xu et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2019)

Article Computer Science, Theory & Methods

Fundamental tensor operations for large-scale data analysis using tensor network formats

Namgil Lee et al.

MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING (2018)

Article Computer Science, Artificial Intelligence

Long-Term Temporal Convolutions for Action Recognition

Gul Varol et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2018)

Proceedings Paper Computer Science, Artificial Intelligence

MiCT: Mixed 3D/2D Convolutional Tube for Human Action Recognition

Yizhou Zhou et al.

2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2018)

Article Engineering, Civil

Hand Gesture Recognition in Real Time for Automotive Interfaces: A Multimodal Vision-Based Approach and Evaluations

Eshed Ohn-Bar et al.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2014)

Article Mathematics, Applied

Tensor-train decomposition

I. V. Oseledets

SIAM JOURNAL ON SCIENTIFIC COMPUTING (2011)

Article Mathematics, Applied

Hierarchical singular value decomposition of tensors

Lars Grasedyck

SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS (2010)