Representations, Retrieval, and Evaluation in Knowledge-Intensive Natural Language Processing
Doctoral thesis, 2025
Context Utilisation
Knowledge-intensive Tasks
Vision-and-Language Models
Retrieval-Augmented Generation
Mechanistic Interpretability
Language Models
Evaluation
Natural Language Processing
Author
Lovisa Hagström
Data Science and AI 2
Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
BlackboxNLP 2021 - Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (2021), p. 149-162
Paper in proceedings
The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models
EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (2023), p. 5457-5476
Paper in proceedings
A Reality Check on Context Utilisation for Retrieval-Augmented Generation
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2025), p. 19691-19730
Paper in proceedings
Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of Language Models for Fact Completion
Findings of the Association for Computational Linguistics: ACL 2025 (2025), p. 18322-18349
Paper in proceedings
L. Hagström, Y. Kim, H. Yu, S. Lee, R. Johansson, H. Cho, I. Augenstein. CUB: Benchmarking Context Utilisation Techniques for Language Models.
Subject categories (SSIF 2025)
Natural language processing and computational linguistics
Computer science
ISBN
978-91-8103-250-5
Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5708
Publisher
Chalmers
MC-salen, Hörsalsvägen 5.
Opponent: Prof. Ivan Vulić, Language Technology Lab, University of Cambridge, United Kingdom.