Representations, Retrieval, and Evaluation in Knowledge-Intensive Natural Language Processing
Doctoral thesis, 2025
Keywords
Context Utilisation
Knowledge-intensive Tasks
Vision-and-Language Models
Retrieval-Augmented Generation
Mechanistic Interpretability
Language Models
Evaluation
Natural Language Processing
Author
Lovisa Hagström
Data Science and AI 2
Publications
Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
BlackboxNLP 2021 - Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (2021), p. 149-162
Paper in proceeding
The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models
EMNLP 2023 - Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), p. 5457-5476
Paper in proceeding
A Reality Check on Context Utilisation for Retrieval-Augmented Generation
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2025), p. 19691-19730
Paper in proceeding
Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of Language Models for Fact Completion
Findings of the Association for Computational Linguistics: ACL 2025 (2025), p. 18322-18349
Paper in proceeding
L. Hagström, Y. Kim, H. Yu, S. Lee, R. Johansson, H. Cho, I. Augenstein. CUB: Benchmarking Context Utilisation Techniques for Language Models.
Subject Categories (SSIF 2025)
Natural Language Processing
Computer Sciences
ISBN
978-91-8103-250-5
Series
Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5708
Publisher
Chalmers
Public defence: MC-salen, Hörsalsvägen 5.
Opponent: Prof. Ivan Vulić, Language Technology Lab, University of Cambridge, United Kingdom.