Long-Term Localization for Self-Driving Cars
Doctoral thesis, 2020
This thesis presents solutions and insights for both long-term sequential visual localization and localization using global navigation satellite systems (GNSS), which push us closer to the goal of accurate and reliable localization for self-driving cars. It addresses the question: how can accurate and robust, yet cost-effective, long-term localization for self-driving cars be achieved?
Starting from this question, the thesis explores how existing sensor suites for advanced driver-assistance systems (ADAS) can be used most efficiently, and how landmarks in maps can be recognized and used for localization even after severe changes in appearance. The findings show that:
* State-of-the-art ADAS sensors are insufficient to meet the requirements for localization of a self-driving car in less than ideal conditions; GNSS and visual localization are identified as the areas to improve.
* Highly accurate relative localization with no convergence delay is possible using time-relative GNSS observations with a single-band receiver and no base stations.
* Sequential semantic localization is identified as a promising direction for further research, based on a benchmark study comparing state-of-the-art visual localization methods in challenging autonomous driving scenarios, including day-to-night and seasonal changes.
* A novel sequential semantic localization algorithm improves accuracy while significantly reducing map size compared to traditional methods based on matching of local image features.
* Improvements for semantic segmentation in challenging conditions can be made efficiently by automatically generating pixel correspondences between images captured under a wide range of conditions and enforcing a consistency constraint during training (see the sketch after this list).
* A segmentation algorithm with automatically defined and more fine-grained classes improves localization performance.
* The performance advantage that modern local image features show over traditional ones in single-image localization is all but erased when considering sequential data with odometry, encouraging future research to focus more on sequential localization rather than pure single-image localization.
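As a rough illustration of the consistency-constraint idea referenced in the list above, the snippet below is a minimal sketch, assuming a PyTorch-based segmentation network that outputs logits of shape (C, H, W) and a set of automatically generated pixel correspondences between two images of the same place taken under different conditions. The function name and tensor layout are chosen for this example and are not the thesis implementation.

```python
import torch.nn.functional as F

def correspondence_consistency_loss(logits_a, logits_b, corr_a, corr_b):
    """logits_a, logits_b: (C, H, W) segmentation logits for two images of
    the same place under different conditions.
    corr_a, corr_b: (N, 2) integer pixel coordinates (row, col) of N
    automatically generated correspondences between the two images."""
    # Class score vectors at the corresponding pixel locations, shape (N, C).
    pa = logits_a[:, corr_a[:, 0], corr_a[:, 1]].T
    pb = logits_b[:, corr_b[:, 0], corr_b[:, 1]].T
    # Treat the prediction in one image (e.g. the reference condition) as a
    # soft target and penalize divergence of the prediction in the other.
    target = F.softmax(pa, dim=1).detach()
    return F.kl_div(F.log_softmax(pb, dim=1), target, reduction="batchmean")
```

In training, a term like this would typically be added to the ordinary supervised cross-entropy loss computed on annotated images.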
Sequential semantic localization
self-driving
localization
Author
Erik Stenborg
Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering
Vehicle self-localization using off-the-shelf sensors and a detailed map
IEEE Intelligent Vehicles Symposium, Proceedings (2014), pp. 522-528
Paper in proceedings
Using a single band GNSS receiver to improve relative positioning in autonomous cars
IEEE Intelligent Vehicles Symposium, Proceedings, Vol. 2016-August (2016), Art. no. 7535498, pp. 921-926
Paper in proceedings
Long-Term Visual Localization Revisited
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44 (2022), pp. 2074-2088
Article in scientific journal
Long-term Visual Localization using Semantically Segmented Images
Proceedings - IEEE International Conference on Robotics and Automation (2018), pp. 6484-6490
Paper in proceedings
A cross-season correspondence dataset for robust semantic segmentation
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2019-June (2019), pp. 9524-9534
Paper in proceedings
Fine-Grained Segmentation Networks: Self-Supervised Segmentation for Improved Long-Term Visual Localization
Proceedings of the IEEE International Conference on Computer Vision (2019), pp. 31-41
Paper in proceedings
Using Image Sequences for Long-Term Visual Localization
Proceedings - 2020 International Conference on 3D Vision, 3DV 2020 (2020), pp. 938-948
Paper in proceedings
The research in this thesis aims at developing algorithms that self-driving cars can use to find their location in the world. The thesis presents solutions and insights both for long-term sequential visual localization and for localization using GPS, pushing us closer to the goal of accurate and reliable localization for self-driving cars. Visual localization, that is, figuring out where you are from images, is proposed to be based on semantic segmentation: a computer assigns each pixel in an image a class label such as "building", "road", or "traffic sign", and these labels are then used to infer the location. A localization solution for the case when the camera on the car delivers images in a sequence is presented and compared to alternative ways of localization. Two methods for improved semantic segmentation are also presented, in turn leading to better localization performance. Additionally, a method for localization using common car sensors (including radars, cameras, and GPS receivers), and a method for improving the relative localization performance of GPS, are presented.
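As a rough, self-contained illustration of this idea (a sketch under simplifying assumptions, not the algorithm from the thesis), the snippet below scores a candidate camera pose by checking how often semantically labeled 3D map points, projected into the camera, land on image pixels that were segmented with the same class. The data layout and the function name are invented for the example.

```python
import numpy as np

def semantic_pose_score(map_points, map_labels, segmentation, K, R, t):
    """map_points: (N, 3) 3D map points in world coordinates.
    map_labels:   (N,)   class id of each map point (e.g. pole, sign, building).
    segmentation: (H, W) per-pixel class ids from the segmentation network.
    K: (3, 3) camera intrinsics; R, t: world-to-camera rotation and translation."""
    H, W = segmentation.shape
    cam = R @ map_points.T + t.reshape(3, 1)      # points in the camera frame, (3, N)
    keep = cam[2] > 1e-6                          # only points in front of the camera
    cam, labels = cam[:, keep], map_labels[keep]
    pix = K @ cam                                 # homogeneous pixel coordinates
    u = np.round(pix[0] / pix[2]).astype(int)
    v = np.round(pix[1] / pix[2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    if not inside.any():
        return 0.0
    # Fraction of visible map points whose projected pixel has the same class.
    return float(np.mean(segmentation[v[inside], u[inside]] == labels[inside]))
```

In the sequential setting, scores like this for many candidate poses can be fused over time with odometry, for example in a particle filter, in the spirit of the sequential localization discussed above.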
Areas of Advance
Transport
Infrastructure
C3SE (Chalmers Centre for Computational Science and Engineering)
Subject Categories
Computer Vision and Robotics (Autonomous Systems)
ISBN
978-91-7905-377-2
Doctoral theses at Chalmers University of Technology. New series: 4844
Publisher
Chalmers
Opponent: Jan-Michael Frahm, University of North Carolina at Chapel Hill, USA