Lidar–camera semi-supervised learning for semantic segmentation
Journal article, 2021

In this work, we investigated two issues: (1) how the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) how fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. We carried out a comparative study, experimentally evaluating networks trained in different setups on scenarios ranging from sunny days to rainy night scenes. The networks were tested on challenging and less common scenarios in which neither cameras nor lidars alone provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios while requiring fewer data annotations.
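
For illustration, below is a minimal sketch of one common lidar–camera fusion strategy for semantic segmentation: early fusion by channel concatenation of the RGB image with a lidar depth map projected into the image plane, followed by a pseudo-labelling step often used in semi-supervised pipelines. The network, layer sizes, and pseudo-label step are illustrative assumptions only, not the architecture or training scheme from the paper.

import torch
import torch.nn as nn

class FusionSegNet(nn.Module):
    """Toy encoder-decoder that fuses camera RGB (3 channels) with a
    lidar depth map projected into the image plane (1 channel) via
    early channel concatenation. Illustrative sketch only."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1),  # 3 RGB + 1 depth
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor, lidar_depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, lidar_depth], dim=1)  # early fusion along channel axis
        return self.decoder(self.encoder(x))      # per-pixel class logits

# Example: one forward pass on dummy data.
net = FusionSegNet(num_classes=5)
rgb = torch.randn(2, 3, 128, 256)    # camera images
depth = torch.randn(2, 1, 128, 256)  # lidar depth projected to the image plane
logits = net(rgb, depth)             # -> shape (2, 5, 128, 256)

# A typical semi-supervised step (not necessarily the paper's scheme):
# treat the argmax predictions on unlabelled frames as pseudo-labels.
pseudo_labels = logits.argmax(dim=1)  # -> shape (2, 128, 256)
print(logits.shape, pseudo_labels.shape)

Early fusion is only one option; intermediate or late fusion of per-modality feature maps is equally common, and the choice interacts with how robust each modality is in adverse conditions such as rain or darkness.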

Keywords

Sensor fusion

Semantic segmentation

Deep learning

Semi-supervised learning

Authors

Luca Caltagirone

Chalmers, Mechanics and Maritime Sciences (M2), Vehicle Engineering and Autonomous Systems

Mauro Bellone

Tallinn University of Technology (TalTech)

Lennart Svensson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Mattias Wahde

Chalmers, Mechanics and Maritime Sciences (M2), Vehicle Engineering and Autonomous Systems

Raivo Sell

Tallinn University of Technology (TalTech)

Sensors

1424-8220 (eISSN)

Vol. 21, Issue 14, Article 4813

Subject Categories

Communication Systems

Computer Science

Computer Systems

DOI

10.3390/s21144813

PubMed

34300551

Latest update

7/28/2021