Single-Image Depth Prediction Makes Feature Matching Easier
Paper in proceedings, 2020

Good local features improve the robustness of many 3D re-localization and multi-view reconstruction pipelines. The problem is that viewing angle and distance severely impact the recognizability of a local feature. Attempts to improve appearance invariance, whether by choosing better local feature points or by leveraging outside information, have come with prerequisites that made some of them impractical. In this paper, we propose a surprisingly effective enhancement to local feature extraction, which improves matching. We show that CNN-based depths inferred from single RGB images are quite helpful, despite their flaws. They allow us to pre-warp images and rectify perspective distortions, significantly enhancing SIFT and BRISK features and enabling more good matches, even when cameras are looking at the same scene but in opposite directions.
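The core idea of the abstract, rectifying perspective distortion before extracting features, can be illustrated with a generic planar-rectification sketch. This is not the paper's exact pipeline; it only shows the standard geometry: given a surface normal estimated from a depth map, a homography H = K R K⁻¹ virtually rotates the camera so the plane becomes fronto-parallel, after which a detector such as SIFT sees a less distorted patch. The function names and the choice of intrinsics below are illustrative assumptions.

```python
import numpy as np

def rotation_aligning(n, target=np.array([0.0, 0.0, 1.0])):
    """Rodrigues rotation taking unit vector n onto target."""
    n = n / np.linalg.norm(n)
    v = np.cross(n, target)          # rotation axis (unnormalized)
    c = float(np.dot(n, target))     # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-12:    # already aligned (or opposite)
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # R = I + [v]x + [v]x^2 * (1 - c) / s^2, with (1 - c)/s^2 = 1/(1 + c)
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def rectifying_homography(K, normal):
    """Homography that virtually rotates the camera so the scene plane
    with the given normal (e.g. estimated from a CNN depth map) appears
    fronto-parallel; the warped image is then fed to the feature extractor."""
    R = rotation_aligning(normal)
    return K @ R @ np.linalg.inv(K)

# Hypothetical intrinsics and a slanted plane normal from a depth map.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.3, 0.1, 0.95])
H = rectifying_homography(K, n)
```

In a full pipeline, `H` would be applied with an image-warping routine (e.g. `cv2.warpPerspective`) before running SIFT or BRISK on the rectified image.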

Image matching

Local feature matching

Authors

Carl Toft

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Daniyar Turmukhambetov

Niantic

Torsten Sattler

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Fredrik Kahl

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Gabriel J. Brostow

University College London (UCL)

Niantic

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN) 1611-3349 (eISSN)

Vol. 12361 LNCS, pp. 473-492

16th European Conference on Computer Vision, ECCV 2020
Glasgow, United Kingdom

Subject Categories

Media Engineering

Computer Vision and Robotics (Autonomous Systems)

Medical Image Processing

DOI

10.1007/978-3-030-58517-4_28

More information

Latest update

12/3/2020