Article

Legal requirements on explainability in machine learning

Journal

ARTIFICIAL INTELLIGENCE AND LAW
Volume 29, Issue 2, Pages 149-169

Publisher

SPRINGER
DOI: 10.1007/s10506-020-09270-4

Keywords

Interpretability; Explainability; Machine learning; Law


Abstract

Deep learning and other black-box models are becoming increasingly popular. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the growing number of legal requirements on the interpretability and explainability of machine learning models in the context of private and public decision making. It then explains how those legal requirements can be implemented in machine learning models and concludes with a call for more interdisciplinary research on explainability.

