AI in Public Decision-Making: A Philosophical and Practical Framework for Assessing and Weighing Harm and Benefit
Journal article, 2025
Artificial intelligence (AI) is increasingly used in public decision-making; yet existing governance tools often lack clear definitions of harm and benefit, practical methods for weighing competing values, and guidance for resolving value conflicts. This paper presents a five-step framework that integrates moral philosophy, trustworthy AI principles, and procedural justice into a coherent decision process for public administrators. The framework operationalizes harm and benefit through multidimensional well-being measures, applies normative principles such as harm–benefit asymmetry, incorporates technical assessment criteria, and offers structured methods for resolving both derivative and fundamental value conflicts. A worked example, based on the Dutch childcare benefits scandal, illustrates its application under real-world constraints. Comparative analysis positions the framework alongside established tools, highlighting its added value in combining normative reasoning with procedural legitimacy. The paper also discusses implementation challenges, including cognitive biases, institutional inertia, and political trade-offs, and suggests empirical approaches for validation. By linking philosophical depth with practical usability, the framework supports transparent, context-sensitive governance of AI in the public sector.
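As a purely illustrative aside on the weighing step summarized above, the sketch below shows one toy way to aggregate impacts across well-being dimensions while weighting harms more heavily than equally sized benefits, in the spirit of the harm–benefit asymmetry principle. The dimensions, scores, asymmetry factor, and function names are all hypothetical assumptions for illustration, not part of the paper's framework, which is procedural and does not prescribe a formula.

```python
from dataclasses import dataclass

@dataclass
class Impact:
    """Hypothetical impact of an AI-assisted decision on one well-being dimension."""
    dimension: str   # e.g. "income security", "mental health" (illustrative labels)
    benefit: float   # expected gain on a 0..1 scale (assumed scoring)
    harm: float      # expected loss on a 0..1 scale (assumed scoring)

def weighted_score(impacts, asymmetry=2.0):
    """Aggregate impacts, counting harms `asymmetry` times as heavily as benefits.

    A toy operationalization of harm-benefit asymmetry; the asymmetry factor
    of 2.0 is an arbitrary placeholder, not a value endorsed by the paper.
    """
    return sum(i.benefit - asymmetry * i.harm for i in impacts)

if __name__ == "__main__":
    # Illustrative comparison of two hypothetical policy options.
    option_a = [Impact("income security", benefit=0.6, harm=0.1),
                Impact("mental health",   benefit=0.2, harm=0.3)]
    option_b = [Impact("income security", benefit=0.4, harm=0.05),
                Impact("mental health",   benefit=0.3, harm=0.1)]
    print("Option A score:", weighted_score(option_a))  # 0.0
    print("Option B score:", weighted_score(option_b))  # 0.4
```

Under these assumed numbers, option B scores higher once harms are penalized asymmetrically, even though option A promises the larger single benefit; the framework's remaining steps (procedural justice, conflict resolution) are not captured by such a calculation.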
Keywords: Public Decision-Making, Harm, Artificial Intelligence, Benefit, Framework