Journal
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
Volume 47, Issue 11, Pages 2487-2503
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TSE.2019.2953066
Keywords
Machine learning; Grammar; Robustness; Systematics; Test pattern generators; Natural language processing; Software testing
Funding
- Ministry of Education, Singapore
The paper introduces Ogma, a systematic test framework for machine-learning systems that discovers erroneous behaviours and uses them to improve model performance. Evaluation on three natural language processing classifiers shows that Ogma is substantially more effective than random testing.
The massive progress of machine learning has driven its application across a variety of domains over the past decade. But how do we develop a systematic, scalable and modular strategy to validate machine-learning systems? We present, to the best of our knowledge, the first systematic test framework for machine-learning systems that accept grammar-based inputs. Our Ogma approach automatically discovers erroneous behaviours in classifiers and leverages these erroneous behaviours to improve the respective models. Ogma exploits the robustness properties inherent in any well-trained machine-learning model to direct test generation, and thus implements a scalable test generation methodology. To evaluate Ogma, we have tested it on three real-world natural language processing (NLP) classifiers and found thousands of erroneous behaviours in these systems. We also compare Ogma with a random test generation approach and observe that Ogma is more effective than such random test generation by up to 489 percent.
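The abstract describes testing classifiers that accept grammar-based inputs. As an illustrative sketch only (the toy grammar and sampling strategy below are assumptions for exposition, not the paper's actual Ogma algorithm), grammar-based test inputs can be produced by recursively expanding a context-free grammar:

```python
import random

# Toy context-free grammar for generating test sentences.
# Nonterminals are wrapped in angle brackets; everything else is a terminal.
GRAMMAR = {
    "<sent>": [["<np>", "<vp>"]],
    "<np>":   [["the", "<noun>"], ["a", "<noun>"]],
    "<vp>":   [["<verb>", "<np>"]],
    "<noun>": [["parser"], ["model"], ["test"]],
    "<verb>": [["accepts"], ["rejects"]],
}

def generate(symbol="<sent>", rng=random):
    """Expand a symbol by recursively sampling one production per nonterminal."""
    if symbol not in GRAMMAR:
        return [symbol]                      # terminal: emit as-is
    production = rng.choice(GRAMMAR[symbol])
    words = []
    for sym in production:
        words.extend(generate(sym, rng))
    return words

def sample_inputs(n, seed=0):
    """Produce n grammar-conforming sentences to feed a classifier under test."""
    rng = random.Random(seed)
    return [" ".join(generate(rng=rng)) for _ in range(n)]
```

Each sentence produced this way is valid by construction, so the classifier under test is exercised only on well-formed inputs; a directed approach such as Ogma's would additionally bias which productions are chosen based on the model's observed robustness, rather than sampling uniformly as here.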