Article

Black is the new orange: how to determine AI liability

Journal

Artificial Intelligence and Law
Volume 31, Issue 1, Pages 133–167

Publisher

Springer
DOI: 10.1007/s10506-022-09308-9

Keywords

Explainable artificial intelligence (XAI); Explainability; Liability

Abstract

This article explores the use of Explainable Artificial Intelligence (XAI) to address liability issues arising from autonomous AI systems. It analyzes existing legal frameworks and argues that XAI can supply courts with clear technical explanations, helping to resolve the legal concerns that artificial intelligence raises.

Autonomous artificial intelligence (AI) systems can behave unpredictably and cause loss or damage to individuals, and intricate questions must be resolved before courts can determine liability. Until recently, understanding the inner workings of these black boxes has been exceedingly difficult; Explainable Artificial Intelligence (XAI), however, can help untangle the complex problems that autonomous AI systems create. In this context, this article examines the technical explanations that XAI can provide and shows how explanations suitable for establishing liability can be presented in court. It analyzes whether existing liability frameworks, in both civil law and common law tort systems, can address legal concerns related to AI with the support of XAI. Lastly, it argues that the further development and adoption of XAI should allow AI liability cases to be decided under current legal and regulatory rules until new liability regimes for AI are enacted.
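As an illustration of the kind of technical explanation the abstract has in mind, the following minimal Python sketch applies one model-agnostic XAI technique, permutation feature importance, to an opaque classifier. The model, data, and feature names are purely hypothetical stand-ins, not drawn from the article; the point is only to show how a post-hoc explanation can rank the inputs that drove an automated decision, the sort of output a court might be given.

    # Hypothetical sketch: post-hoc explanation of a "black box" model.
    # Data, model, and feature names are illustrative, not from the article.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for decision-relevant case data.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque model standing in for an autonomous AI system.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Model-agnostic XAI step: measure how much shuffling each feature
    # degrades accuracy on held-out data, i.e. how much the model relied on it.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=20, random_state=0
    )

    # A human-readable ranking that could support a technical explanation
    # of which inputs most influenced the system's behavior.
    for idx in result.importances_mean.argsort()[::-1]:
        print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f} "
              f"+/- {result.importances_std[idx]:.3f}")

Permutation importance is only one of many XAI techniques (alongside, e.g., SHAP or LIME); it is used here because it is model-agnostic and its output, a ranked list of influential inputs, is easy to communicate to a non-technical audience such as a court.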
