Article

Enhancing trust in AI through industry self-governance

Journal

Journal of the American Medical Informatics Association (JAMIA)

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/jamia/ocab065

Keywords

artificial intelligence/ethics; artificial intelligence/organization and administration; certification; accreditation; policy making

Artificial intelligence plays a critical role in deriving value from health and healthcare data, but decreasing trust in AI solutions risks triggering another AI Winter. Promoting industry self-governance and defining standards that mitigate risks can yield a more comprehensive approach to governing AI solutions, filling gaps in existing legislation and regulations. Adherence to these standards, verified through certification/accreditation, could help maintain trust in AI and prevent another AI Winter.

Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high for AI solutions to effectively address current health challenges. However, prior periods of enthusiasm for AI have been followed by periods of disillusionment, reduced investment, and slowed progress, known as AI Winters. We are now at risk of another AI Winter in health/healthcare due to increasing publicity around AI solutions that fail to deliver on touted breakthroughs, thereby decreasing users' trust in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies relevant to groups considering designing, implementing, and promoting self-governance. We then describe a process by which a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices, through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation. Governments could encourage self-governance to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap, yielding a more comprehensive approach to the governance of AI solutions than US legislation/regulations currently encompass. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play in advancing practices that maintain trust in AI and prevent another AI Winter.
