Long-Term Visual Localization Revisited
Journal article, 2020

Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link the virtual to the real world. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes as well as weather and seasonal variations, while providing highly accurate six degree-of-freedom (6DOF) camera pose estimates. In this paper, we extend three publicly available datasets, which contain images captured under a wide variety of viewing conditions but lack camera pose information, with ground truth poses, making it possible to evaluate the impact of various factors on 6DOF camera pose estimation accuracy. We also discuss the performance of state-of-the-art localization approaches on these datasets. Additionally, we release around half of the poses for all conditions and keep the remaining half private as a test set, in the hope that this will stimulate research on long-term visual localization, learned local image features, and related areas. Our datasets are available at visuallocalization.net, where we also host a benchmarking server for automatic evaluation of results on the test set. The state-of-the-art results presented here are to a large degree based on submissions to our server.
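The benchmark reports localization accuracy as the fraction of query images whose estimated pose falls within given position and orientation error thresholds of the ground truth. The snippet below is a minimal sketch of such an evaluation, not the benchmark's reference implementation: it assumes poses are given as 3x3 rotation matrices together with camera centres in world coordinates, and the threshold pairs (0.25 m / 2°, 0.5 m / 5°, 5 m / 10°) are commonly used defaults shown here purely for illustration.

```python
import numpy as np

def pose_errors(R_est, c_est, R_gt, c_gt):
    # Position error in metres: distance between estimated and ground-truth camera centres.
    t_err = float(np.linalg.norm(np.asarray(c_est) - np.asarray(c_gt)))
    # Orientation error in degrees: angle of the relative rotation R_est * R_gt^T.
    cos_angle = np.clip((np.trace(np.asarray(R_est) @ np.asarray(R_gt).T) - 1.0) / 2.0, -1.0, 1.0)
    r_err = float(np.degrees(np.arccos(cos_angle)))
    return t_err, r_err

def recall_at_thresholds(errors, thresholds=((0.25, 2.0), (0.5, 5.0), (5.0, 10.0))):
    # errors: iterable of (position_error_m, orientation_error_deg) pairs, one per query image.
    errors = np.asarray(list(errors), dtype=float)
    return [float(np.mean((errors[:, 0] <= t) & (errors[:, 1] <= r))) for t, r in thresholds]

# Example: a single query whose estimated camera centre is 0.3 m from the ground truth
# with identical orientation yields errors of (0.3 m, 0.0 deg).
R = np.eye(3)
print(pose_errors(R, np.array([0.3, 0.0, 0.0]), R, np.zeros(3)))
```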

Keywords

benchmark, visualization, three-dimensional analysis, long-term localization, cameras, benchmark testing, visual localization, 6DOF pose estimation, solid modelling, trajectory, robots, relocalization

Authors

Carl Toft: Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Imaging and Image Analysis

Will Maddern: University of Oxford

Akihiko Torii: Tokyo Institute of Technology

Lars Hammarstrand: Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Signal Processing

Erik Stenborg: Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Signal Processing

Daniel Safari: Technical University of Denmark (DTU); Tokyo Institute of Technology

Masatoshi Okutomi: Tokyo Institute of Technology

Marc Pollefeys: Swiss Federal Institute of Technology in Zürich (ETH); Microsoft Corporation

Josef Sivic: Institut National de Recherche en Informatique et en Automatique (INRIA); Czech Technical University in Prague

Tomas Pajdla: Czech Technical University in Prague

Fredrik Kahl: Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Imaging and Image Analysis

Torsten Sattler: Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Imaging and Image Analysis; Czech Technical University in Prague

IEEE Transactions on Pattern Analysis and Machine Intelligence (ISSN 0162-8828), Vol. in press

Semantic Mapping and Visual Navigation for Smart Robots

Swedish Foundation for Strategic Research (SSF), 2016-05-01 -- 2021-06-30.

Integration of Geometry and Semantics in Computer Vision (Integrering av geometri och semantik i datorseende)

Swedish Research Council (VR), 2017-01-01 -- 2020-12-31.

Infrastructure

C3SE (Chalmers Centre for Computational Science and Engineering)

Subject Categories

Signal Processing

Computer Science

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1109/TPAMI.2020.3032010

More information

Latest update: 2021-02-17