Using Image Sequences for Long-Term Visual Localization
Paper in proceedings, 2020

Estimating the pose of a camera in a known scene, i.e., visual localization, is a core task for applications such as self-driving cars. In many scenarios, image sequences are available, and existing work on combining single-image localization with odometry offers a way to unlock their potential for improving localization performance. Still, most of the literature focuses on single-image localization and ignores the availability of sequence data. The goal of this paper is to demonstrate the potential of image sequences in challenging scenarios, e.g., under day-night or seasonal changes. Drawing on ideas from the literature, we describe a sequence-based localization pipeline that combines odometry with both a coarse and a fine localization module. Experiments on long-term localization datasets show that combining single-image global localization against a prebuilt map with a visual odometry / SLAM pipeline improves performance to a level where the extended CMU Seasons dataset can be considered solved. We show that SIFT features can perform on par with modern state-of-the-art features in our framework, despite being much weaker and an order of magnitude faster to compute. Our code is publicly available at github.com/rulllars.
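To illustrate the general idea of fusing dead-reckoned odometry with occasional map-based global localizations, the sketch below shows a minimal, hypothetical 2D example. It is not the authors' pipeline: the pose representation, the `compose`/`fuse` helpers, the blending weight `alpha`, and all numbers are illustrative assumptions chosen only to show how absolute fixes can correct odometry drift along a sequence.

```python
import math
import numpy as np

def compose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a relative motion given in the body frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return np.array([
        x + math.cos(th) * dx - math.sin(th) * dy,
        y + math.sin(th) * dx + math.cos(th) * dy,
        (th + dth + math.pi) % (2 * math.pi) - math.pi,  # wrap heading to [-pi, pi)
    ])

def fuse(pred, meas, alpha=0.3):
    """Blend a dead-reckoned pose toward an absolute localization (complementary-filter style)."""
    dth = (meas[2] - pred[2] + math.pi) % (2 * math.pi) - math.pi
    return np.array([
        pred[0] + alpha * (meas[0] - pred[0]),
        pred[1] + alpha * (meas[1] - pred[1]),
        pred[2] + alpha * dth,
    ])

# Toy sequence: odometry at every frame, absolute (map-based) fixes only at a few frames.
pose = np.array([0.0, 0.0, 0.0])
odometry = [(1.0, 0.0, 0.05)] * 10                        # per-frame relative motions
global_fixes = {4: (4.9, 0.4, 0.25), 9: (9.6, 1.5, 0.5)}  # frame index -> absolute pose

for i, delta in enumerate(odometry):
    pose = compose(pose, delta)              # dead-reckoning prediction
    if i in global_fixes:                     # correct accumulated drift when a fix arrives
        pose = fuse(pose, np.array(global_fixes[i]))
    print(f"frame {i}: x={pose[0]:.2f} y={pose[1]:.2f} theta={pose[2]:.2f}")
```

In the paper's setting, the relative motions would come from a visual odometry / SLAM front end and the absolute fixes from single-image localization against a prebuilt map; a real system would weight the fusion by pose uncertainty rather than a fixed `alpha`.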

Authors

Erik Stenborg

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Torsten Sattler

Czech Technical University in Prague

Lars Hammarstrand

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Proceedings - 2020 International Conference on 3D Vision, 3DV 2020

Pages 938-948, Article no. 9320360

8th International Conference on 3D Vision, 3DV 2020
Virtual, Fukuoka, Japan

Subject Categories

Robotics

Computer Vision and Robotics (Autonomous Systems)

Medical Image Processing

DOI

10.1109/3DV50981.2020.00104

More information

Latest update

1/3/2024