3.8 Proceedings Paper

Cats Are Not Fish: Deep Learning Testing Calls for Out-Of-Distribution Awareness

Publisher

IEEE COMPUTER SOC
DOI: 10.1145/3324884.3416609

Keywords

Deep learning testing; quality assurance; out-of-distribution

Funding

  1. Singapore Ministry of Education Academic Research Fund [2018-T1-002-069]
  2. National Research Foundation, Prime Minister's Office, Singapore under its National Cybersecurity R&D Program [NRF2018 NCR-NCR005-0001]
  3. Singapore National Research Foundation under NCR [NSOE003-0001]
  4. NRF Investigatorship [NRFI06-2020-0022]
  5. JSPS KAKENHI [20H04168, 19K24348, 19H04086]
  6. NVIDIA AI Tech Center (NVAITC)
  7. JST-Mirai Program, Japan [JPMJMI18BB]
  8. Grants-in-Aid for Scientific Research [20H04168] Funding Source: KAKEN

Abstract

As Deep Learning (DL) is continuously adopted in many industrial applications, its quality and reliability raise increasing concerns. As in the traditional software development process, testing DL software to uncover its defects at an early stage is an effective way to reduce risks after deployment. Under the fundamental assumption of deep learning, DL software provides no statistical guarantee and has limited capability in handling data that falls outside of its learned distribution, i.e., out-of-distribution (OOD) data. Although recent progress has been made in designing novel testing techniques for DL software that can detect thousands of errors, the current state-of-the-art DL testing techniques usually do not take the distribution of generated test data into consideration. It is therefore hard to judge whether the identified errors are indeed meaningful errors to the DL application (i.e., due to quality issues of the model) or outliers that the current model cannot be expected to handle (i.e., due to the lack of training data). To fill this gap, we take the first step and conduct a large-scale empirical study, with a total of 451 experiment configurations, 42 deep neural networks (DNNs) and 1.2 million test data instances, to investigate and characterize the impact of OOD-awareness on DL testing. We further analyze the consequences when DL systems go into production by evaluating the effectiveness of adversarial retraining with distribution-aware errors. The results confirm that introducing data distribution awareness in both the testing and enhancement phases outperforms distribution-unaware retraining by up to 21.5%.
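The abstract hinges on separating in-distribution test errors from OOD outliers before judging a model. The paper does not specify its detector here, but a common baseline for this kind of distribution-awareness is maximum softmax probability (MSP) thresholding: inputs whose top predicted probability is low are flagged as likely OOD. A minimal sketch (the function name and `threshold` value are illustrative assumptions, not the paper's method):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def split_by_distribution(logits, threshold=0.5):
    """Flag inputs whose maximum softmax probability (MSP) falls
    below `threshold` as likely out-of-distribution (OOD).

    Returns a boolean mask (True = treat as in-distribution) and
    the per-input MSP scores. The threshold is a hypothetical
    value; in practice it is tuned on held-out validation data.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    msp = probs.max(axis=-1)
    return msp >= threshold, msp

# A confident prediction vs. a near-uniform (uncertain) one:
logits = [[6.0, 0.5, 0.2],   # peaked logits -> kept as in-distribution
          [1.0, 1.1, 0.9]]   # flat logits   -> flagged as likely OOD
mask, scores = split_by_distribution(logits, threshold=0.5)
```

Under this scheme, only errors on inputs with `mask == True` would count as model-quality defects worth fixing by retraining; flagged inputs point instead to missing training-data coverage.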

