Article

GhostNets on Heterogeneous Devices via Cheap Operations

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 130, Issue 4, Pages 1050-1069

Publisher

SPRINGER
DOI: 10.1007/s11263-022-01575-y

Keywords

Convolutional neural networks; Efficient inference; Visual recognition

Funding

  1. NSFC [62072449, 61872241, 61632003]
  2. Macao FDCT Grant [0018/2019/AKP]
  3. Australian Research Council [DP210101859]
  4. University of Sydney SOAR Prize
  5. CANN

Abstract

This paper proposes efficient neural network designs for heterogeneous devices. For CPU devices, a C-Ghost module is introduced to generate more feature maps from cheap operations, while for GPU devices, a G-Ghost stage structure is formulated to exploit stage-wise feature redundancy. Experimental results demonstrate the effectiveness of the proposed methods.

Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to their limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of cheap linear transformations to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can serve as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, from which the lightweight C-GhostNet is easily built. We further consider efficient networks for GPU devices. Without introducing many GPU-inefficient operations (e.g., depth-wise convolution) into a building stage, we exploit stage-wise feature redundancy to formulate the GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts: the first part is processed by the original blocks with fewer output channels to generate intrinsic features, and the other part is generated with cheap operations that exploit stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and the G-Ghost stage. C-GhostNet and G-GhostNet achieve an optimal trade-off between accuracy and latency on CPU and GPU, respectively.
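
For readers who want a concrete picture of the two building blocks described in the abstract, a minimal PyTorch sketch follows. It assumes depthwise convolution as the C-Ghost "cheap operation" and a 1x1 convolution as the G-Ghost cheap path, and it omits the intermediate-feature aggregation used in the full G-Ghost stage; all names (CGhostModule, GGhostStage, ratio, cheap_ratio, conv_block) are illustrative rather than the authors' released code.

import torch
import torch.nn as nn


class CGhostModule(nn.Module):
    """C-Ghost sketch: a few intrinsic maps from a regular convolution,
    the remaining "ghost" maps from a cheap depthwise convolution."""

    def __init__(self, in_channels, out_channels, kernel_size=1,
                 ratio=2, dw_size=3, stride=1):
        super().__init__()
        self.out_channels = out_channels
        init_channels = -(-out_channels // ratio)      # ceil(out / ratio)
        new_channels = init_channels * (ratio - 1)

        # Primary convolution: produces the intrinsic feature maps.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: each intrinsic map spawns (ratio - 1) ghost maps.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1,
                      dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary_conv(x)
        ghosts = self.cheap_operation(intrinsic)
        return torch.cat([intrinsic, ghosts], dim=1)[:, :self.out_channels]


def conv_block(cin, cout):
    # Stand-in for an "original block" (e.g. a residual block) in a stage.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )


class GGhostStage(nn.Module):
    """Simplified G-Ghost stage: a thinner "complicated" path of regular
    blocks plus a cheap 1x1-conv path reusing the first block's output."""

    def __init__(self, block, in_channels, out_channels, num_blocks,
                 cheap_ratio=0.5):
        super().__init__()
        cheap_channels = int(out_channels * cheap_ratio)
        main_channels = out_channels - cheap_channels

        blocks = [block(in_channels, main_channels)]
        blocks += [block(main_channels, main_channels)
                   for _ in range(num_blocks - 1)]
        self.blocks = nn.ModuleList(blocks)

        # Cheap path: a plain 1x1 convolution, avoiding GPU-inefficient
        # operations such as depthwise convolution.
        self.cheap = nn.Sequential(
            nn.Conv2d(main_channels, cheap_channels, 1, bias=False),
            nn.BatchNorm2d(cheap_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.blocks[0](x)
        first = x                         # reused by the cheap path
        for blk in self.blocks[1:]:
            x = blk(x)
        return torch.cat([x, self.cheap(first)], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 16, 56, 56)
    print(CGhostModule(16, 32)(x).shape)                           # (1, 32, 56, 56)
    print(GGhostStage(conv_block, 16, 64, num_blocks=3)(x).shape)  # (1, 64, 56, 56)

In this sketch the C-Ghost module replaces a full convolution producing out_channels maps with a convolution producing roughly out_channels / ratio intrinsic maps plus cheap depthwise transformations, while the G-Ghost stage narrows every block in the stage and recovers the remaining channels from the first block's output at low cost.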

