Article

Fairness in Deep Learning: A Computational Perspective

Journal

IEEE INTELLIGENT SYSTEMS
Volume 36, Issue 4, Pages 25-34

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/MIS.2020.3000681

Keywords

-

Funding

  1. National Science Foundation [IIS-1657196, IIS-1718840, IIS-1939716]
  2. DARPA [N66001-17-2-4031]

Abstract

Fairness in deep learning has attracted tremendous attention recently, as deep learning is increasingly being used in high-stakes decision-making applications that affect individual lives. We provide a review covering recent progress in tackling the algorithmic fairness problems of deep learning from a computational perspective. Specifically, we show that interpretability can serve as a useful ingredient for diagnosing the reasons that lead to algorithmic discrimination. We also discuss fairness mitigation approaches categorized according to the three stages of the deep learning life-cycle, aiming to push forward the area of fairness in deep learning and build genuinely fair and reliable deep learning systems.
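
As a concrete illustration of the kind of group-level algorithmic discrimination such a review addresses (this example is not taken from the article itself; the function and variable names are illustrative assumptions), the following minimal sketch computes the demographic parity difference, a common group fairness metric, for a classifier's binary predictions across a protected attribute:

# Minimal, illustrative sketch (not from the article): demographic parity
# difference. A value near 0 means the model predicts the positive outcome
# at similar rates for both groups; larger values indicate disparate impact.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """y_pred: binary predictions (0/1); sensitive: binary group labels (0/1)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group_1 = y_pred[sensitive == 1].mean()  # P(y_hat = 1 | s = 1)
    rate_group_0 = y_pred[sensitive == 0].mean()  # P(y_hat = 1 | s = 0)
    return abs(rate_group_1 - rate_group_0)

# Example: a model that favors group 1 shows a large gap.
y_pred    = np.array([1, 1, 1, 0, 0, 0, 0, 0])
sensitive = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(demographic_parity_difference(y_pred, sensitive))  # 0.75

Mitigation approaches of the kind the article surveys would aim to reduce such a gap at the pre-processing, in-processing, or post-processing stage of the deep learning life-cycle.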
