LLMs Can Check Their Own Results to Mitigate Hallucinations in Traffic Understanding Tasks
Paper in proceedings, 2025
multi-modal data
hallucination detection
perception systems
safety-critical systems
large language models
automotive
Authors
Malsha Ashani Mahawatta Dona
University of Gothenburg
Chalmers, Computer Science and Engineering (Chalmers), Interaction Design and Software Engineering
Beatriz Cabrero-Daniel
Chalmers, Computer Science and Engineering (Chalmers), Interaction Design and Software Engineering
University of Gothenburg
Yinan Yu
Chalmers, Computer Science and Engineering (Chalmers), Functional Programming
Christian Berger
University of Gothenburg
Chalmers, Computer Science and Engineering (Chalmers), Interaction Design and Software Engineering
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
0302-9743 (ISSN), 1611-3349 (eISSN)
Vol. 15383 LNCS, p. 114-130
9783031808883 (ISBN)
London, United Kingdom
SAICOM
Swedish Foundation for Strategic Research (SSF) (FUS21-0004), 2022-06-01 -- 2027-05-31.
Subject Categories (SSIF 2025)
Natural Language Processing
DOI
10.1007/978-3-031-80889-0_8