On the Generalization Ability of Retrieval-Enhanced Transformers
Paper in proceedings, 2023

Recent work on the Retrieval-Enhanced Transformer (RETRO) model has shown that off-loading memory from trainable weights to a retrieval database can significantly improve language modeling and match the performance of non-retrieval models that are an order of magnitude larger in size. It has been suggested that at least some of this performance gain is due to non-trivial generalization based on both model weights and retrieval. In this paper, we try to better understand the relative contributions of these two components. We find that the performance gains from retrieval largely originate from overlapping tokens between the database and the test data, suggesting less non-trivial generalization than previously assumed. More generally, our results point to the challenges of evaluating the generalization of retrieval-augmented language models such as RETRO, as even limited token overlap may significantly decrease test-time loss. We release our code and model at https://github.com/TobiasNorlund/retro
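The abstract attributes much of the retrieval gain to overlapping tokens between the retrieval database and the test data. As a rough illustration of what such an overlap measure can look like, the sketch below computes the fraction of n-grams in a test continuation that also occur in a retrieved passage. The function name, whitespace tokenization, and n-gram length are illustrative assumptions for this sketch, not the paper's actual evaluation code (see the linked repository for that).

# Hypothetical sketch: fraction of continuation n-grams that also occur in the
# retrieved neighbour, a rough proxy for the "token overlap" discussed above.

def token_overlap(retrieved_tokens, continuation_tokens, n=8):
    """Share of n-grams in the continuation that also appear in the retrieved text."""
    if len(continuation_tokens) < n:
        return 0.0
    retrieved_ngrams = {
        tuple(retrieved_tokens[i:i + n])
        for i in range(len(retrieved_tokens) - n + 1)
    }
    continuation_ngrams = [
        tuple(continuation_tokens[i:i + n])
        for i in range(len(continuation_tokens) - n + 1)
    ]
    hits = sum(1 for ng in continuation_ngrams if ng in retrieved_ngrams)
    return hits / len(continuation_ngrams)

# Example with whitespace tokens; RETRO itself operates on subword tokens.
retrieved = "the quick brown fox jumps over the lazy dog".split()
continuation = "a quick brown fox jumps over a sleeping dog".split()
print(token_overlap(retrieved, continuation, n=3))  # ~0.43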

Authors

Tobias Norlund

Chalmers University of Technology, Computer Science and Engineering, Data Science and AI

Recorded Future

Ehsan Doostmohammadi

Linköping University

Richard Johansson

University of Gothenburg

Marco Kuhlmann

Linköping University

EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2023

1455-1463 (pages)
9781959429470 (ISBN)

Findings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Dubrovnik, Croatia

Subject categories

Other Computer and Information Science

Probability Theory and Statistics

Computer Sciences

More information

Last updated

2024-05-03