Lidar–camera semi-supervised learning for semantic segmentation
Journal article, 2021

In this work, we investigated two issues: (1) how the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) how fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. We carried out a comparative study, experimentally evaluating networks trained in different setups on scenarios ranging from sunny days to rainy night scenes. The networks were tested in challenging, less common scenarios where cameras or lidars alone would not provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios while requiring fewer data annotations.
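The two ideas in the abstract can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's actual architecture: it assumes early fusion (stacking a lidar-derived depth channel onto the camera image) and a simple confidence-thresholded pseudo-labelling scheme for the semi-supervised step; the function names and threshold are illustrative.

```python
import numpy as np

def fuse_inputs(rgb, lidar_depth):
    """Early fusion (assumed scheme): stack camera RGB (H, W, 3) with a
    projected lidar depth map (H, W) into one (H, W, 4) input tensor."""
    return np.concatenate([rgb, lidar_depth[..., None]], axis=-1)

def pseudo_labels(probs, threshold=0.9):
    """Semi-supervised step (assumed scheme): on unlabelled frames, keep
    the per-pixel argmax class where the softmax confidence exceeds the
    threshold, and mark the remaining pixels as ignore (-1)."""
    labels = probs.argmax(axis=-1)
    labels[probs.max(axis=-1) < threshold] = -1
    return labels

rng = np.random.default_rng(0)
rgb = rng.random((4, 4, 3))                  # mock camera image
depth = rng.random((4, 4))                   # mock projected lidar depth
fused = fuse_inputs(rgb, depth)              # shape (4, 4, 4)

probs = rng.dirichlet([1.0, 1.0, 1.0], size=(4, 4))  # mock softmax output
labels = pseudo_labels(probs, threshold=0.5)          # classes or -1 (ignore)
```

In such a setup, the fused tensor would feed a segmentation network, and the pseudo-labelled pixels would supply a training signal on unlabelled target-domain frames.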

Sensor fusion

Semantic segmentation

Deep learning

Semi-supervised learning

Authors

Luca Caltagirone

Chalmers, Mechanics and Maritime Sciences, Vehicle Engineering and Autonomous Systems

Mauro Bellone

Tallinn University of Technology (TalTech)

Lennart Svensson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Signal Processing

Mattias Wahde

Chalmers, Mechanics and Maritime Sciences, Vehicle Engineering and Autonomous Systems

Raivo Sell

Tallinn University of Technology (TalTech)

Sensors

1424-8220 (ISSN) 1424-3210 (eISSN)

Vol. 21, Issue 14, Article 4813

Subject categories

Communication Systems

Computer Science

Computer Systems

DOI

10.3390/s21144813

PubMed

34300551

More information

Last updated

2021-07-28