Risk-Sensitive Bayesian Games for Multi-Agent Reinforcement Learning under Policy Uncertainty
Preprint, 2022

In stochastic games with incomplete information, uncertainty arises both from the players' lack of knowledge about their own and the other players' types, i.e. their utility functions and policy spaces, and from the inherent stochasticity of the players' interactions. The existing literature studies risk in stochastic games in terms of the inherent uncertainty induced by the variability of transitions and actions. In this work, we instead focus on the risk associated with the uncertainty over types. We contrast this with the multi-agent reinforcement learning framework, where the other agents follow fixed stationary policies, and investigate risk sensitivity arising from uncertainty about the other agents' adaptive policies. We propose risk-sensitive versions of algorithms designed for risk-neutral stochastic games, namely Iterated Best Response (IBR), Fictitious Play (FP), and a general multi-objective gradient approach using dual ascent (DAPG). Our experimental analysis shows that risk-sensitive DAPG outperforms the competing algorithms in both social-welfare and general-sum stochastic games.
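To make the idea of a risk-sensitive best response concrete, the following is a minimal sketch of fictitious play in a two-player matrix game where each player best-responds under the entropic risk measure rather than the plain expectation. This is an illustrative toy, not the paper's algorithm: the payoff matrices, the choice of the entropic risk measure, and the risk parameter `beta` are all assumptions made for the example.

```python
import numpy as np

def entropic_risk(payoffs, probs, beta):
    """Entropic risk of a payoff vector under distribution `probs`:
    rho(X) = -(1/beta) * log E[exp(-beta * X)].
    For beta > 0 this penalizes payoff variability (risk aversion);
    as beta -> 0 it recovers the risk-neutral expected payoff."""
    return -np.log(probs @ np.exp(-beta * payoffs)) / beta

def risk_sensitive_fp(A, B, beta=0.5, iters=2000):
    """Fictitious play where each player best-responds to the empirical
    distribution of the opponent's past actions, scoring each pure
    action by its entropic risk instead of its expected payoff.
    A, B: payoff matrices of players 1 and 2 (player 1 picks rows)."""
    n, m = A.shape
    counts1, counts2 = np.ones(n), np.ones(m)  # Laplace-smoothed action counts
    for _ in range(iters):
        p1 = counts1 / counts1.sum()  # empirical policy of player 1
        p2 = counts2 / counts2.sum()  # empirical policy of player 2
        # Risk-adjusted value of each pure action against the opponent.
        v1 = [entropic_risk(A[i, :], p2, beta) for i in range(n)]
        v2 = [entropic_risk(B[:, j], p1, beta) for j in range(m)]
        counts1[int(np.argmax(v1))] += 1
        counts2[int(np.argmax(v2))] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

# Matching pennies: the unique equilibrium mixes uniformly for both players,
# so the empirical frequencies should approach (0.5, 0.5).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p1, p2 = risk_sensitive_fp(A, -A, beta=0.5)
```

Here the uncertainty being penalized is the variability of a player's payoff induced by the opponent's (empirically estimated) mixed policy, which is the flavor of policy uncertainty the abstract contrasts with transition stochasticity.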

Bayesian Games

Machine Learning

Multi-Agent Reinforcement Learning

Authors

Hannes Eriksson

Zenseact AB

Debabrota Basu

Institut National de Recherche en Informatique et en Automatique (INRIA)

Mina Alibeigi

Zenseact AB

Christos Dimitrakakis

University of Oslo

Subject Categories

Transport Systems and Logistics

Computer Science

DOI

10.48550/arXiv.2203.10045

More information

Latest update

9/25/2023