Journal
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING
Volume 14, Issue 5, Pages 910-932
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSTSP.2020.3002101
Keywords
Media; Forensics; Face; Tools; Information integrity; Videos; Deep learning; deepfakes; digital image forensics; video forensics
Funding
- Google Faculty Research Award
- Air Force Research Laboratory
- Defense Advanced Research Projects Agency [FA8750-16-2-0204]
- PREMIER project - Italian Ministry of Education, University, and Research within the PRIN 2017 program
Abstract
With the rapid progress of recent years, techniques for generating and manipulating multimedia content have reached a very advanced level of realism, and the boundary between real and synthetic media has become very thin. On the one hand, this opens the door to exciting applications in fields such as creative arts, advertising, film production, and video games. On the other hand, it poses serious security threats. Software packages freely available on the web allow any individual, without special skills, to create highly realistic fake images and videos. These can be used to manipulate public opinion during elections, commit fraud, or discredit and blackmail people. There is therefore an urgent need for automated tools capable of detecting fake multimedia content and preventing the spread of dangerous disinformation. This review paper presents an analysis of methods for visual media integrity verification, that is, the detection of manipulated images and videos. Special emphasis is placed on the emerging phenomenon of deepfakes, fake media created with deep learning tools, and on modern data-driven forensic methods to counter them. The analysis highlights the limits of current forensic tools, the most relevant open issues, and the upcoming challenges, and suggests future directions for research.