Article

Observe, inspect, modify: Three conditions for generative AI governance

Journal

NEW MEDIA & SOCIETY
Volume: -, Issue: -, Pages: -

Publisher

SAGE PUBLICATIONS LTD
DOI: 10.1177/14614448231214811

Keywords

AI governance; AI regulation; generative AI; generative models; inspectability; large language models; modifiability; observability; regulatory objects

Abstract

In a world increasingly shaped by generative AI systems like ChatGPT, the absence of benchmarks to examine the efficacy of oversight mechanisms is a problem for research and policy. What are the structural conditions for governing generative AI systems? To answer this question, it is crucial to situate generative AI systems as regulatory objects: material items that can be governed. On this conceptual basis, we introduce three high-level conditions to structure research and policy agendas on generative AI governance: industrial observability, public inspectability, and technical modifiability. Empirically, we explicate those conditions with a focus on the EU's AI Act, grounding the analysis of oversight mechanisms for generative AI systems in their granular material properties as observable, inspectable, and modifiable objects. Those three conditions represent an action plan to help us perceive generative AI systems as negotiable objects, rather than seeing them as mysterious forces that pose existential risks for humanity.

