Article

Humans can decipher adversarial images

Journal

NATURE COMMUNICATIONS
Volume 10, Issue -, Pages -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41467-019-08931-6

Keywords

-

Funding

  1. JHU Office of Undergraduate Research
  2. JHU Science of Learning Institute

Abstract

Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are fooled by adversarial examples: nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine's classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: in 8 experiments on 5 prominent and diverse adversarial image sets, human subjects correctly anticipated the machine's preferred label over relevant foils, even for images described as totally unrecognizable to human eyes. Human intuition may be a surprisingly reliable guide to machine (mis)classification, with consequences for minds and machines alike.
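The "seemingly irrelevant image perturbations" the abstract refers to can be sketched with the fast gradient sign method (FGSM), a standard technique for crafting adversarial perturbations. This is an illustrative assumption, not the generation procedure used by the paper's image sets; the two-class linear "classifier" below is a toy stand-in for a CNN.

```python
import numpy as np

# Toy linear "classifier" on a 16-pixel input: class scores = W @ x.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))   # one weight row per class
x = rng.normal(size=16)        # the clean input

def predict(v):
    """Return the index of the highest-scoring class."""
    return int(np.argmax(W @ v))

clean = predict(x)
other = 1 - clean

# For a linear model, the gradient of (score_other - score_clean)
# with respect to x is simply the difference of the weight rows.
grad = W[other] - W[clean]

# Pick epsilon just large enough to cross the decision boundary, then take
# one FGSM step: shift every pixel by +/- epsilon along sign(grad).
margin = (W[clean] - W[other]) @ x        # >= 0, since `clean` is the argmax
eps = 1.1 * margin / np.abs(grad).sum()
x_adv = x + eps * np.sign(grad)

# The predicted label flips even though no pixel moved by more than eps.
assert predict(x_adv) != clean
```

Against a deep network the same idea applies with the gradient obtained by backpropagation; the perturbation stays small per pixel yet reliably changes the model's preferred label.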

