Resolving value conflicts in public AI governance: A procedural justice framework
Journal article, 2025

This paper addresses the challenge of resolving value conflicts in the public governance of artificial intelligence (AI). While existing AI ethics and regulatory frameworks emphasize a range of normative criteria—such as accuracy, transparency, fairness, and accountability—many of these values are in tension and, in some cases, incommensurable. I propose a procedural justice framework that distinguishes between conflicts among derivative trustworthiness criteria and those involving fundamental democratic values. For the former, I apply analytical tools such as the Dominance Principle, Supervaluationism, and Maximality to eliminate clearly inferior alternatives. For the latter, I argue that justifiable decision-making requires procedurally fair deliberation grounded in widely endorsed principles such as publicity, inclusion, relevance, and appeal. I demonstrate the applicability of this framework through an in-depth analysis of an AI-based decision support system used by the Swedish Public Employment Service (PES), showing how institutional decision-makers can navigate complex trade-offs between efficiency, explainability, and legality. The framework provides public institutions with a structured method for addressing normative conflicts in AI implementation, moving beyond technical optimization toward democratically legitimate governance.

Public Decision Making

Trustworthy AI

Artificial intelligence

Value Conflicts

Author

Karl de Fine Licht

Chalmers, Technology Management and Economics, Science, Technology and Society

Government Information Quarterly

0740-624X (ISSN)

Vol. 42, Issue 2

Driving forces

Sustainable development

Subject categories (SSIF 2025)

Philosophy

Ethics

Political science

Foundations

Basic sciences

DOI

10.1016/j.giq.2025.102033

More information

Last updated

2025-05-20