Cross-modal Transfer Between Vision and Language for Protest Detection
Paper in proceedings, 2022

Most of today’s systems for socio-political event detection are text-based, while an increasing amount of information published on the web is multimodal. We seek to bridge this gap by proposing a method that leverages existing annotated unimodal data to perform event detection in another modality, zero-shot. Specifically, we focus on protest detection in text and images, and show that a pretrained vision-and-language alignment model (CLIP) can be leveraged towards this end. In particular, our results suggest that annotated protest text data can supplement the detection of protests in images, and that significant transfer occurs in the opposite direction as well.
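The zero-shot setup described in the abstract can be sketched as follows. In CLIP-style classification, an image embedding is compared against embeddings of text prompts (one per class) and the most similar prompt wins. The embeddings below are random placeholders standing in for CLIP's image and text encoders, and `zero_shot_classify` is a hypothetical helper for illustration, not the paper's implementation.

```python
import numpy as np

def zero_shot_classify(image_emb, prompt_embs, labels):
    """Pick the label whose prompt embedding has the highest
    cosine similarity to the image embedding."""
    def normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = normalize(prompt_embs) @ normalize(image_emb)
    return labels[int(np.argmax(sims))]

# Placeholder vectors standing in for CLIP encoder outputs
rng = np.random.default_rng(0)
protest_prompt = rng.normal(size=512)   # e.g. "a photo of a protest"
other_prompt = rng.normal(size=512)     # e.g. "a photo of a street"
# An "image" embedding deliberately close to the protest prompt
image = protest_prompt + 0.1 * rng.normal(size=512)

pred = zero_shot_classify(image, np.stack([protest_prompt, other_prompt]),
                          ["protest", "not protest"])
print(pred)  # → protest
```

Because CLIP aligns text and image embeddings in a shared space, the same comparison also works in the opposite direction, which is what enables the cross-modal transfer the paper studies.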

Authors

Ria Dass Raj

Recorded Future

Student at Chalmers

Kajsa Andreasson

Recorded Future

Student at Chalmers

Tobias Norlund

Chalmers, Computer Science and Engineering, Data Science and AI

Recorded Future

Richard Johansson

University of Gothenburg

Aron Lagerberg

Recorded Future

CASE 2022 - 5th Workshop on Challenges and Applications of Automated Extraction of Socio-Political Events from Text, Proceedings of the Workshop

56-60
978-1-959429-05-0 (ISBN)

5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE)
Abu Dhabi, United Arab Emirates

Subject categories

Other Computer and Information Science

Language Technology (Computational Linguistics)

Computer Vision and Robotics (Autonomous Systems)

DOI

10.18653/v1/2022.case-1.8

More information

Last updated

2024-11-08