Article

Grammar Based Directed Testing of Machine Learning Systems

Journal

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
Volume 47, Issue 11, Pages 2487-2503

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TSE.2019.2953066

Keywords

Machine learning; Grammar; Robustness; Systematics; Test pattern generators; Natural language processing; Software testing

Funding

  1. Ministry of Education, Singapore


The paper introduces a systematic test framework for machine-learning systems, called the Ogma approach, which can discover erroneous behaviors and improve model performance. Testing on three natural language processing classifiers shows that Ogma is more effective than random testing methods.
The massive progress of machine learning has seen it applied across a variety of domains over the past decade. But how do we develop a systematic, scalable and modular strategy to validate machine-learning systems? We present, to the best of our knowledge, the first systematic test framework for machine-learning systems that accept grammar-based inputs. Our Ogma approach automatically discovers erroneous behaviours in classifiers and leverages these erroneous behaviours to improve the respective models. Ogma exploits the robustness properties inherent in any well-trained machine-learning model to direct test generation, and thus implements a scalable test-generation methodology. To evaluate Ogma, we tested it on three real-world natural language processing (NLP) classifiers and found thousands of erroneous behaviours in these systems. We also compared Ogma with a random test-generation approach and observed that Ogma is more effective than random test generation by up to 489 percent.
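The core idea in the abstract, generating grammar-valid inputs and flagging cases where a trained model is not robust to small, grammar-preserving perturbations, can be sketched in a few lines of Python. The toy grammar, the `mutate` helper, and the stub `toy_classifier` below are all illustrative assumptions, not the paper's actual Ogma implementation.

```python
import random

# A toy context-free grammar (hypothetical; not the grammars used in the paper).
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N": [["model"], ["test"], ["input"]],
    "V": [["accepts"], ["rejects"]],
}

def generate(symbol="S", rng=random):
    """Expand a grammar symbol into a list of terminal tokens."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    tokens = []
    for sym in production:
        tokens.extend(generate(sym, rng))
    return tokens

def mutate(tokens, rng=random):
    """Replace one terminal with another terminal derived from the same
    nonterminal: a small, grammar-preserving perturbation."""
    # Map each single-terminal production back to its parent nonterminal.
    parent = {prod[0]: nt for nt, prods in GRAMMAR.items()
              for prod in prods if len(prod) == 1}
    candidates = [i for i, t in enumerate(tokens) if t in parent]
    i = rng.choice(candidates)
    alternatives = [p[0] for p in GRAMMAR[parent[tokens[i]]]
                    if p[0] != tokens[i]]
    mutated = list(tokens)
    mutated[i] = rng.choice(alternatives)
    return mutated

def toy_classifier(tokens):
    """Stand-in classifier: labels a sentence by its verb."""
    return "positive" if "accepts" in tokens else "negative"

def directed_test(n_tests=100, seed=0):
    """Generate grammar-valid inputs, perturb each one, and collect pairs
    on which the classifier's output changes (candidate erroneous
    behaviours, in the sense a robustness check would flag them)."""
    rng = random.Random(seed)
    flips = []
    for _ in range(n_tests):
        original = generate("S", rng)
        perturbed = mutate(original, rng)
        if toy_classifier(original) != toy_classifier(perturbed):
            flips.append((original, perturbed))
    return flips
```

In this sketch the grammar guarantees every generated and mutated input is syntactically valid, so any label flip points at model behaviour rather than malformed input; the paper's contribution is directing such generation with the model's robustness properties rather than sampling purely at random.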

