Article

European artificial intelligence trusted throughout the world: Risk-based regulation and the fashioning of a competitive common AI market

Journal

REGULATION & GOVERNANCE
Volume -, Issue -, Pages -

Publisher

WILEY
DOI: 10.1111/rego.12563

Keywords

artificial intelligence; competition state; critical policy analysis; cultural political economy; EU; risk-based regulation


This article examines the European Commission's use of risk-based regulation in AI governance. It finds that the Commission wields risk analysis as an epistemic tool for defining and differentiating the regulatory terrain in pursuit of a future common European AI market. Qualitative analysis shows that the Commission treats certain AI applications as matters of deep value conflict, proposing to ban them outright, while tightly controlling so-called high-risk AI systems.
The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal to ban some applications altogether on moral grounds. Core to its regulatory strategy is a nominally risk-based approach with interventions that are proportionate to risk levels. Yet neither standard accounts of risk-based regulation as a rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in Regulation & Governance, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enriches risk-based regulation scholarship, beyond AI, with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a Cultural Political Economy framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration) which the Commission wields in its pursuit of a future common European AI market. Third, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. This finds that the Commission's use of risk analysis, outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European cutting-edge AI … trusted throughout the world in the first place.
