Article

Visual privacy attacks and defenses in deep learning: a survey

Journal

ARTIFICIAL INTELLIGENCE REVIEW
Volume 55, Issue 6, Pages 4347-4401

Publisher

SPRINGER
DOI: 10.1007/s10462-021-10123-y

Keywords

Visual privacy; Attack and defense; Deep learning; Privacy preservation

Funding

  1. Australian Research Council, Australia [LP180101150]


This survey reviews visual privacy attack algorithms and the corresponding defense mechanisms in deep learning, analyzes privacy issues in both visual data and visual deep learning systems, shows that deep learning can serve both as an attack tool and as a privacy-preservation technique, and outlines directions and suggestions for future work.
Concerns about visual privacy have grown alongside the dramatic increase in image and video capture and sharing. Meanwhile, recent breakthroughs in deep learning make it easy to gather and process visual data to infer sensitive information. Visual privacy in the context of deep learning is therefore an important and challenging topic, yet there has been no systematic study of it to date. In this survey, we discuss visual privacy attack algorithms and the corresponding defense mechanisms in deep learning. We analyze the privacy issues in both visual data and visual deep learning systems. We show that deep learning can serve both as a powerful privacy attack tool and as a promising basis for privacy-preservation techniques. We also point out possible directions and suggestions for future work. By thoroughly investigating the relationship between visual privacy and deep learning, this article offers insights into incorporating privacy requirements in the deep learning era.
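
As an illustrative sketch only (not drawn from the survey itself), one classic visual privacy defense is obfuscating sensitive image regions, such as pixelating detected faces, before an image is shared or fed to a downstream vision model. The snippet below assumes OpenCV with its bundled Haar-cascade face detector and a hypothetical input file photo.jpg; deep-learning-based detectors or learned obfuscation methods of the kind the survey covers would slot into the same pipeline.

import cv2

def pixelate_faces(image_path, blocks=8):
    """Return the loaded image with detected face regions pixelated."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    # Detect candidate face regions with OpenCV's pretrained Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Obfuscate each region: downsample to a coarse grid, then upsample back,
    # which removes the high-frequency detail that reveals identity.
    for (x, y, w, h) in faces:
        roi = img[y:y + h, x:x + w]
        small = cv2.resize(roi, (blocks, blocks),
                           interpolation=cv2.INTER_LINEAR)
        img[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                           interpolation=cv2.INTER_NEAREST)
    return img

if __name__ == "__main__":
    # "photo.jpg" is a hypothetical input; substitute any local image.
    cv2.imwrite("photo_pixelated.jpg", pixelate_faces("photo.jpg"))

The coarse-grid parameter (blocks) trades utility for privacy: fewer blocks destroy more identifying detail but also more scene information, which is exactly the kind of trade-off such defenses must balance.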


