Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy
Journal article, 2020

The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objective decisions to fears of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and produces a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient grounds for perceived legitimacy without producing the harms that full transparency would bring.

Keywords: Artificial intelligence, Transparency, Public decision-making, Perceived legitimacy, Explainability, Framework

Authors

Karl de Fine Licht

Chalmers, Technology Management and Economics, Science, Technology and Society

Jenny de Fine Licht

University of Gothenburg

AI and Society

0951-5666 (ISSN) 1435-5655 (eISSN)

Vol. 35, Issue 4, pp. 917–926

Subject Categories

Philosophy

Human Computer Interaction

Information Systems, Social aspects

DOI

10.1007/s00146-020-00960-w

More information

Latest update

1/26/2021