Article

Approaches for Fake Content Detection: Strengths and Weaknesses to Adversarial Attacks

Journal

IEEE INTERNET COMPUTING
Volume 25, Issue 2, Pages 73-82

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/MIC.2020.3032323

Keywords

Videos; Deep learning; Faces; Feature extraction; Internet; Social networking (online); fake content detection; adversarial attacks


In recent years, there has been a rapid increase in fake content on the Internet, threatening the veracity of information. Advanced technologies like machine learning are being used to detect this fake content, but they still have weaknesses. The article describes methods for detecting fake content and potential adversarial attacks, while also discussing future research challenges in this area.
In the last few years, we have witnessed an explosive growth of fake content on the Internet, which has significantly affected the veracity of information on many social platforms. Much of this disruption has been caused by the proliferation of advanced machine and deep learning methods. In turn, social platforms have been using the same technological methods in order to detect fake content. However, there is limited understanding of the strengths and weaknesses of these detection methods. In this article, we describe examples of machine and deep learning approaches that can be used to detect different types of fake content. We also discuss the characteristics of potential adversarial attacks on these methods that could reduce the accuracy of fake content detection. Finally, we identify and discuss some future research challenges in this area.
