Article

How Could We Know When a Robot was a Moral Patient?

Journal

CAMBRIDGE QUARTERLY OF HEALTHCARE ETHICS
Volume 30, Issue 3, Pages 459-471

Publisher

CAMBRIDGE UNIV PRESS
DOI: 10.1017/S0963180120001012

Keywords

AI; moral patiency; artificial suffering; robot rights


This paper addresses the question of whether artificial intelligence should be regarded as having moral status and introduces the cognitive equivalence strategy as a way to determine psychological moral patiency. The author suggests that an artificial system should be considered a psychological moral patient if it possesses cognitive mechanisms shared with other beings whom we also regard as psychological moral patients.
There is growing interest in machine ethics in the question of whether, and under what circumstances, an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us to reasonably conclude that a given artificial system qualified as having this status. The paper surveys five possible criteria that might be applied: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that, despite its limitations, the latter approach offers the best way forward, and defends a variant of it, termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.

