Text Representations and Explainability for Political Science Applications
Licentiate thesis, 2023
Due to the complexity of current NLP techniques, interactions between the model and the political scientist are limited, which can reduce the utility of such modeling. We therefore turn to explainability and develop a novel approach for explaining a text classifier. Our method extracts the features relevant to an entire prediction class and can rank them by their relevance to the political domain.
Overall, we find that current NLP methods can capture some politically relevant signals from text, but more work is needed to align the two fields. We conclude that the next step in this work should focus on investigating frameworks such as hybrid models and causality, which can improve both the representational capabilities of the models and the interaction between model and social scientist.
Political Science
Explainability
NLP
Representation
Author
Denitsa Saynova
Chalmers, Computer Science and Engineering, Data Science and AI
Annika Fredén, Moa Johansson, Denitsa Saynova, Word embeddings on ideology and issues from Swedish parliamentarians' motions over time: A comparative approach
Project: Bias and methods of AI technology studying political behaviour
Funder: Marianne och Marcus Wallenberg Stiftelse, 2020-01-01 to 2023-12-31.
Subject categories
Language Technology (Computational Linguistics)
Political Science (excluding public administration and globalisation studies)
Infrastructure
C3SE (Chalmers Centre for Computational Science and Engineering)
Publisher
Chalmers
Location: EDIT Room Analysen
Opponent: Prof. Dr. Simone Paolo Ponzetto, University of Mannheim, Germany