Article

Active Subspace of Neural Networks: Structural Analysis and Universal Attacks

Journal

SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE
Volume 2, Issue 4, Pages 1096-1122

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/19M1296070

Keywords

active subspace; deep neural network; network reduction; universal adversarial perturbation

Funding

  1. UCSB start-up grant
  2. Ministry of Education and Science of the Russian Federation [14.756.31.0001]

Abstract

Active subspace is a model reduction method widely used in the uncertainty quantification community. In this paper, we propose analyzing the internal structure and vulnerability of deep neural networks using active subspaces. First, we employ the active subspace to measure the number of active neurons at each intermediate layer, which indicates that the number of neurons can be reduced from several thousand to several dozen. This motivates us to change the network structure and to develop a new, more compact network, referred to as ASNet, that has significantly fewer model parameters. Second, we propose analyzing the vulnerability of a neural network using the active subspace by finding an additive universal adversarial attack vector that can misclassify a dataset with high probability. Our experiments on CIFAR-10 show that ASNet achieves a 23.98x reduction in parameters and a 7.30x reduction in FLOPs. In our numerical experiments, the universal active-subspace attack vector achieves an attack success ratio roughly 20% higher than existing approaches. The PyTorch code for this paper is available online.
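
To make the construction in the abstract concrete, the following is a minimal sketch, assuming a trained classifier split into a head (the layers up to a chosen intermediate layer) and a tail (the remaining layers). It estimates the uncentered gradient covariance C = E[g g^T], where g is the gradient of the loss with respect to the intermediate activation z = head(x), and returns its eigendecomposition; the number of dominant eigenvalues is then read as the number of active neurons at that layer. The names head, tail, and loader, as well as the 95% energy threshold in the usage note, are illustrative assumptions; this is not the authors' released PyTorch code.

import torch

def estimate_active_subspace(head, tail, loss_fn, loader, n_batches=10, device="cpu"):
    # Monte Carlo estimate of C = E[g g^T], where g = d(loss)/dz and
    # z = head(x) is the activation at the chosen intermediate layer.
    # loss_fn is assumed to use reduction="sum" so that each row of g
    # is an individual sample's gradient.
    C, n = None, 0
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        x, y = x.to(device), y.to(device)
        z = head(x).detach().requires_grad_(True)
        loss = loss_fn(tail(z), y)
        (g,) = torch.autograd.grad(loss, z)
        g = g.flatten(1)                                   # (batch, layer width)
        C = g.t() @ g if C is None else C + g.t() @ g      # sum of outer products
        n += g.shape[0]
    C = C / n
    eigvals, eigvecs = torch.linalg.eigh(C)                # ascending order
    return eigvals.flip(0), eigvecs.flip(1)                # largest first

# Usage sketch: count "active neurons" as the eigenvalues that capture 95%
# of the spectrum's mass (the threshold is illustrative, not the paper's):
#   eigvals, _ = estimate_active_subspace(head, tail,
#       torch.nn.CrossEntropyLoss(reduction="sum"), test_loader)
#   k = int((eigvals.cumsum(0) / eigvals.sum() < 0.95).sum()) + 1

Applying the same construction to gradients with respect to the input yields a dominant direction that, after scaling to the perturbation budget, serves roughly as the starting point for the universal active-subspace attack described above; the paper refines this direction further.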
