Journal
ARTIFICIAL INTELLIGENCE AND LAW
Volume 29, Issue 2, Pages 149-169
Publisher
SPRINGER
DOI: 10.1007/s10506-020-09270-4
Keywords
Interpretability; Explainability; Machine learning; Law
Abstract
Deep learning and other black-box models are becoming more and more popular today. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the increasing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented into machine-learning models and concludes with a call for more inter-disciplinary research on explainability.