Deep MultiModal Learning for Automotive Applications
Research Project, 2023–2027

Purpose and goal
This project aims to develop multimodal sensor fusion methods for advanced and robust automotive perception systems. The project will focus on three key areas: (1) Develop multimodal fusion architectures and representations for both dynamic and static objects. (2) Investigate self-supervised learning techniques for multimodal data in an automotive setting. (3) Improve the perception system’s ability to robustly handle rare events, objects, and road users. An illustrative sketch of the kind of fusion architecture meant in (1) follows below.
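As a concrete illustration of the first focus area, the sketch below shows a minimal late-fusion module that combines camera and lidar feature vectors before classification. This is a toy example under stated assumptions: the choice of modalities, feature dimensions, layer sizes, and the use of PyTorch are all illustrative and do not reflect the project's actual architectures.

# Minimal late-fusion sketch (illustrative only; dimensions, layers,
# and modalities are assumptions, not the project's actual design).
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Fuses per-modality feature vectors by concatenation plus an MLP."""

    def __init__(self, cam_dim=256, lidar_dim=128, num_classes=10):
        super().__init__()
        # Per-modality encoders project each sensor's features
        # into a shared embedding space.
        self.cam_encoder = nn.Sequential(nn.Linear(cam_dim, 128), nn.ReLU())
        self.lidar_encoder = nn.Sequential(nn.Linear(lidar_dim, 128), nn.ReLU())
        # Fusion: concatenate the two embeddings, then classify.
        self.classifier = nn.Linear(128 + 128, num_classes)

    def forward(self, cam_feat, lidar_feat):
        fused = torch.cat(
            [self.cam_encoder(cam_feat), self.lidar_encoder(lidar_feat)],
            dim=-1,
        )
        return self.classifier(fused)

# Usage with a dummy batch of 4 samples:
model = LateFusionHead()
logits = model(torch.randn(4, 256), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 10])

In practice, fusing earlier (e.g., at the feature-map level) or via attention can exploit cross-modal correlations better than this simple concatenation, which is part of what designing fusion architectures and representations involves.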

Expected results and effects
In this project we focus on techniques that can improve the accuracy and robustness of perception systems for Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). We therefore expect our techniques to contribute to enhanced safety of ADAS/AD-equipped vehicles, which in turn can accelerate the public adoption of AD systems. Through this increased adoption, we hope to contribute to considerably safer transportation for all road users.

Participants

Selpi (contact)

Chalmers, Computer Science and Engineering, Data Science and AI

Lars Hammarstrand

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Lennart Svensson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Collaborations

Volvo Cars

Göteborg, Sweden

Zenseact AB

Göteborg, Sweden

Funding

VINNOVA

Project ID: 2023-00763
Funding Chalmers' participation during 2023–2027

Latest update

2024-02-20