Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy
Journal article, 2020

The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient grounds for perceived legitimacy without producing the harms that full transparency would bring.

Keywords: Artificial intelligence, Transparency, Public decision-making, Perceived legitimacy, Explainability, Framework

Authors

Karl de Fine Licht

Chalmers University of Technology, Technology Management and Economics, Science, Technology and Society

Jenny de Fine Licht

University of Gothenburg

AI & Society

0951-5666 (ISSN) 1435-5655 (eISSN)

Vol. 35, No. 4, pp. 917–926

Subject categories

Philosophy

Human-computer interaction (interaction design)

Systems science, information systems and informatics with a social science orientation

DOI

10.1007/s00146-020-00960-w

More information

Last updated

2021-01-26