What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge
Paper in proceedings, 2022

There are limitations to learning language from text alone. Therefore, recent work has focused on developing multimodal models. However, few benchmarks exist that can measure what language models learn about language from multimodal training. We hypothesize that training on a visual modality should improve the visual commonsense knowledge of language models. Therefore, we introduce two evaluation tasks for measuring visual commonsense knowledge in language models and use them to evaluate different multimodal models and unimodal baselines. Primarily, we find that the visual commonsense knowledge does not differ significantly between the multimodal models and unimodal baseline models trained on visual text data.
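The paper's two evaluation tasks are not reproduced here, but as a rough illustration of what probing a language model for visual commonsense knowledge can look like, the sketch below runs cloze-style queries about visual attributes (color, shape, relative size) through a text-only masked language model. The prompts, the choice of bert-base-uncased, and the use of the Hugging Face fill-mask pipeline are illustrative assumptions, not the benchmark introduced in the paper.

```python
# Hypothetical illustration (not the paper's benchmark): probing a text-only
# masked language model for visual commonsense with cloze-style queries.
from transformers import pipeline

# Any masked LM could be substituted here; bert-base-uncased is just an example.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Prompts about visually grounded properties (color, shape, relative size).
prompts = [
    "The color of a ripe banana is [MASK].",
    "An egg has a [MASK] shape.",
    "An elephant is [MASK] than a mouse.",
]

for prompt in prompts:
    # Top predictions for the masked slot; a model with visual commonsense
    # knowledge should rank plausible visual attributes highly.
    for candidate in fill_mask(prompt, top_k=3):
        print(f"{prompt}  ->  {candidate['token_str']!r} (score={candidate['score']:.3f})")
```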

Authors

Lovisa Hagström

Chalmers, Computer Science and Engineering, Data Science and AI

Richard Johansson

University of Gothenburg

Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022): Student Research Workshop

252-261
978-1-955917-23-0 (ISBN)

60th Annual Meeting of the Association for Computational Linguistics (ACL)
Dublin, Ireland

Subject categories

Other Computer and Information Science

Language Technology (Computational Linguistics)

Comparative Language Studies and General Linguistics

Computer Science

DOI

10.18653/v1/2022.acl-srw.19

More information

Last updated

2024-11-06