4.7 Review

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

Journal

COMPUTER SCIENCE REVIEW
Volume 37, Article 100270

Publisher

ELSEVIER
DOI: 10.1016/j.cosrev.2020.100270

Funding

  1. UK EPSRC [EP/R026173/1, EP/T026995/1]
  2. ORCA Partnership Resource Fund (PRF) on Towards the Accountable and Explainable Learning-enabled Autonomous Robotic Systems
  3. UK Dstl projects on Test Coverage Metrics for Artificial Intelligence

In the past few years, significant progress has been made on deep neural networks (DNNs), which have achieved human-level performance on several long-standing tasks. With the broader deployment of DNNs in various applications, public concerns over their safety and trustworthiness have grown, especially after widely reported fatal incidents involving self-driving cars. Research addressing these concerns is particularly active, with a significant number of papers released in the past few years. This survey reviews the current research effort towards making DNNs safe and trustworthy, focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability. In total, we survey 202 papers, most of which were published after 2017. (c) 2020 Elsevier Inc. All rights reserved.

Authors

Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi

Reviews

Primary Rating: 4.7 (not enough ratings)