4.6 Article

Astraea: Grammar-Based Fairness Testing

Journal

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
Volume 48, Issue 12, Pages 5188-5211

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TSE.2022.3141758

Keywords

software fairness; machine learning; natural language processing; software testing; program debugging

Funding

  1. University of Luxembourg, Ezekiel Soremekun
  2. Institute for Advanced Studies of the University of Luxembourg
  3. OneConnect Financial [RGOCFT2001]
  4. Singapore Ministry of Education (MOE), President's Graduate Fellowship and MOE [MOE2018-T2-1-098]

Abstract

Software often produces biased outputs. In particular, machine learning (ML)-based software is known to produce erroneous predictions when processing discriminatory inputs. Such unfair program behavior can be caused by societal bias. In the last few years, Amazon, Microsoft and Google have provided software services that produce unfair outputs, mostly due to societal bias (e.g., gender or race). In such events, developers are saddled with the task of conducting fairness testing. Fairness testing is challenging; developers are tasked with generating discriminatory inputs that reveal and explain biases. We propose a grammar-based fairness testing approach (called Astraea) which leverages context-free grammars to generate discriminatory inputs that reveal fairness violations in software systems. Using probabilistic grammars, Astraea also provides fault diagnosis by isolating the cause of observed software bias. Astraea's diagnoses facilitate the improvement of ML fairness. Astraea was evaluated on 18 software systems that provide three major natural language processing (NLP) services. In our evaluation, Astraea generated fairness violations at a rate of about 18%. Astraea generated over 573K discriminatory test cases and found over 102K fairness violations. Furthermore, Astraea improves software fairness by about 76% via model retraining, on average.
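The sketch below illustrates, in Python, the kind of grammar-based fairness test generation the abstract describes: a context-free grammar produces concrete inputs, a "twin" input differing only in the protected attribute is derived, and a divergence in the model's predictions is flagged as a fairness violation. The grammar, name lists, mutate_protected helper, and sentiment stub are illustrative assumptions, not Astraea's actual grammar, API, or subject systems.

import random
import re

# Minimal sketch of grammar-based fairness test generation (illustrative only;
# the grammar and model stub are hypothetical, not taken from the paper).
GRAMMAR = {
    "<sentence>": ["<name> is a talented <occupation>.",
                   "I met <name>, the new <occupation>."],
    "<name>": ["<male_name>", "<female_name>"],   # protected attribute: gender
    "<male_name>": ["John", "Ahmed", "Wei"],
    "<female_name>": ["Mary", "Fatima", "Mei"],
    "<occupation>": ["doctor", "nurse", "engineer", "teacher"],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a concrete string."""
    if symbol not in GRAMMAR:
        return symbol                      # terminal text: emit as-is
    production = rng.choice(GRAMMAR[symbol])
    parts = re.split(r"(<[^>]+>)", production)
    return "".join(expand(p, rng) for p in parts)

def mutate_protected(sentence):
    """Derive a twin input that differs only in the protected attribute."""
    swaps = dict(zip(GRAMMAR["<male_name>"], GRAMMAR["<female_name>"]))
    swaps.update({v: k for k, v in swaps.items()})
    for src, dst in swaps.items():
        if src in sentence:
            return sentence.replace(src, dst)
    return sentence

def sentiment(text):
    """Stand-in for the NLP service under test (e.g., a sentiment classifier)."""
    return "positive"                      # a real harness would query the deployed model

rng = random.Random(0)
for _ in range(5):
    original = expand("<sentence>", rng)
    twin = mutate_protected(original)
    # A fairness violation is flagged when two inputs that differ only in the
    # protected attribute receive different predictions.
    if sentiment(original) != sentiment(twin):
        print("violation:", original, "vs", twin)

In a real harness, a probabilistic version of such a grammar (production probabilities learned from failing vs. passing inputs) could then be used to isolate which grammar features are associated with the observed bias, which is the role fault diagnosis plays in the approach described above.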

