Article

Moral Judgments in the Age of Artificial Intelligence

Journal

JOURNAL OF BUSINESS ETHICS
Volume 178, Issue 4, Pages 917-943

Publisher

SPRINGER
DOI: 10.1007/s10551-022-05053-w

Keywords

Artificial intelligence; Moral judgments; Mind perception; Perceived agency; Perceived experience; Perceived intentional harm

Abstract

This research explores who is held responsible for harm caused by artificial intelligence (AI) systems. Drawing on the literature on moral judgments, the study finds that people tend to hold the company, the developer team, and even the AI system itself accountable when they perceive harm caused by AI to be intentional. Applying the theory of mind perception, the study shows that perceived experience, rather than perceived agency, mediates the relationship between perceived intentional harm and blame judgments toward AI. Furthermore, attributions of mind to AI are stronger when harm is directed toward humans than toward non-humans.

The current research aims to answer the following question: who will be held responsible for harm involving an artificial intelligence (AI) system? Drawing upon the literature on moral judgments, we assert that when people perceive an AI system's action as causing harm to others, they will assign blame to the different entity groups involved in an AI's life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the theory of mind perception, we hypothesized that two dimensions of mind, perceived agency (attributing intention, reasoning, goal pursuit, and communication to AI) and perceived experience (attributing emotional states, such as the capacity to feel pain and pleasure, as well as personality and consciousness, to AI), mediate the relationship between perceived intentional harm and blame judgments toward AI. We also predicted that people attribute greater mind to AI when harm is perceived to be directed at humans than when it is directed at non-humans. We tested our research model in three experiments. In all experiments, we found that perceived intentional harm led to blame judgments toward AI. In two experiments, we found that perceived experience, not agency, mediated the relationship between perceived intentional harm and blame judgments. We also found that companies and developers were held responsible for moral violations involving AI, with developers receiving the most blame among the entities involved. Our third experiment reconciles these findings by showing that perceived intentional harm directed at a non-human entity did not lead to increased attributions of mind to AI. These findings have implications for theory and practice concerning unethical outcomes and behavior associated with AI use.
