Article

Abstract Layer for LeakyReLU for Neural Network Verification Based on Abstract Interpretation

Journal

IEEE Access
Volume 11, Pages 33401-33413

Publisher

Institute of Electrical and Electronics Engineers (IEEE), Inc.
DOI: 10.1109/ACCESS.2023.3263145

Keywords

Neural networks; Robustness; Perturbation methods; Transformers; Task analysis; Optimization; Deep learning; Neural network verification; robustness; abstract interpretation; abstract transformer; LeakyReLU

Abstract

Deep neural networks have been widely used in complex tasks such as robotics, self-driving cars, and medicine. However, they have recently been shown to be vulnerable in uncertain environments where inputs are noisy. As a consequence, the robustness of neural networks has become an essential property for their application in critical systems. Robustness is the capacity to reach the same decision even when inputs are disturbed by different types of perturbations, including adversarial attacks. The main difficulty today is providing a formal guarantee of robustness, which is the focus of this paper. To this end, abstract interpretation, a popular state-of-the-art method that converts the layers of a neural network into abstract layers, has recently been proposed. An abstract layer acts on a geometric abstract object, or shape, that implicitly comprises an infinite number of inputs rather than a single input. In this paper, we propose a new mathematical formulation of an abstract transformer that converts a LeakyReLU activation layer into an abstract layer. Moreover, we implement our transformer and integrate it into the ERAN tool. For validation, we assess the performance of our transformer as a function of the LeakyReLU hyperparameter (the negative slope), and we study the robustness of the neural network as a function of the input perturbation intensity. Our approach is evaluated on three datasets: MNIST, Fashion-MNIST, and a robotic dataset. The results demonstrate the efficacy of our abstract transformer in terms of both mathematical formulation and implementation.
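
To make the notion of an abstract transformer concrete, the following is a minimal sketch in the simplest abstract domain, intervals. This is not the paper's transformer, which is formulated for the more expressive relational domains used in ERAN; the names Interval and leaky_relu_transformer are ours, introduced only for illustration. Since LeakyReLU with a negative slope 0 < alpha < 1 is monotonically increasing, its interval transformer is exact: it suffices to apply the activation to the two interval bounds.

    # Illustrative sketch only: an interval-domain abstract transformer
    # for LeakyReLU, not the paper's ERAN implementation.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float  # lower bound of the concrete set of values
        hi: float  # upper bound of the concrete set of values

    def leaky_relu_transformer(x: Interval, alpha: float = 0.01) -> Interval:
        """Interval transformer for y = x if x >= 0 else alpha * x.

        For 0 < alpha < 1 the function is monotonically increasing,
        so the image of [lo, hi] is exactly [f(lo), f(hi)].
        """
        def f(v: float) -> float:
            return v if v >= 0.0 else alpha * v
        return Interval(f(x.lo), f(x.hi))

    # Usage: over-approximate the outputs for an input x0 = 0.3 under an
    # L-infinity perturbation of intensity eps = 0.5.
    x0, eps = 0.3, 0.5
    print(leaky_relu_transformer(Interval(x0 - eps, x0 + eps), alpha=0.1))
    # Interval(lo=-0.02, hi=0.8) (up to floating-point rounding)

In relational domains such as DeepPoly, the transformer is more involved: when the input bounds straddle zero, it must choose sound linear lower and upper bounds for the activation rather than simply mapping endpoints, which is where a dedicated LeakyReLU formulation such as the paper's comes in.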
