Single-Image Depth Prediction Makes Feature Matching Easier
Paper in proceedings, 2020

Good local features improve the robustness of many 3D re-localization and multi-view reconstruction pipelines. The problem is that viewing angle and distance severely impact the recognizability of a local feature. Attempts to improve appearance invariance by choosing better local feature points or by leveraging outside information have come with prerequisites that made some of them impractical. In this paper, we propose a surprisingly effective enhancement to local feature extraction, which improves matching. We show that CNN-based depths inferred from single RGB images are quite helpful, despite their flaws. They allow us to pre-warp images and rectify perspective distortions, significantly enhancing SIFT and BRISK features and yielding more good matches, even when cameras are looking at the same scene but in opposite directions.
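For illustration, the sketch below shows one way such depth-based pre-warping can be realized with off-the-shelf tools: a monocular depth map is back-projected through the camera intrinsics, a dominant scene plane is fitted, and the image is warped to a fronto-parallel view of that plane before SIFT keypoints are extracted and mapped back to the original image. This is only a minimal sketch of the idea under simplifying assumptions (a single global plane, known intrinsics), not the authors' implementation, and all function and variable names (rectifying_homography, sift_on_rectified, image, depth, K) are illustrative.

import cv2
import numpy as np

def rectifying_homography(depth, K):
    """Fit a dominant plane to the back-projected depth map and return a
    homography that warps the image to a fronto-parallel view of that plane."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T
    xyz = rays * depth.reshape(-1, 1)            # back-projected 3D points

    # Least-squares plane fit via the 3x3 scatter matrix
    centered = xyz - xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered.T @ centered)
    n = vt[-1]
    if n[2] > 0:                                  # make the normal face the camera
        n = -n

    # Rotation aligning the plane normal with the (negative) optical axis
    target = np.array([0.0, 0.0, -1.0])
    axis = np.cross(n, target)
    s, c = np.linalg.norm(axis), float(n @ target)
    if s < 1e-8:
        R = np.eye(3)                             # plane is already fronto-parallel
    else:
        ax = np.array([[0, -axis[2], axis[1]],
                       [axis[2], 0, -axis[0]],
                       [-axis[1], axis[0], 0]])
        R = np.eye(3) + ax + ax @ ax * ((1.0 - c) / s**2)

    return K @ R @ np.linalg.inv(K)               # pure-rotation homography

def sift_on_rectified(image, depth, K):
    """Detect SIFT features on the rectified image and map them back."""
    H = rectifying_homography(depth, K)
    h, w = image.shape[:2]
    warped = cv2.warpPerspective(image, H, (w, h))
    kps, descs = cv2.SIFT_create().detectAndCompute(warped, None)
    if kps:
        pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
        back = cv2.perspectiveTransform(pts, np.linalg.inv(H)).reshape(-1, 2)
        for kp, p in zip(kps, back):
            kp.pt = (float(p[0]), float(p[1]))    # keypoint location in original image coords
    return kps, descs

The descriptors returned this way are computed on the perspective-rectified view, so two images of the same scene taken from very different angles can be matched with an ordinary descriptor matcher (e.g., brute-force matching with a ratio test) while the keypoint coordinates still refer to the unwarped images.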

Image matching

Local feature matching

Authors

Carl Toft

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Daniyar Turmukhambetov

Niantic

Torsten Sattler

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Fredrik Kahl

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Gabriel J. Brostow

University College London (UCL)

Niantic

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN) 1611-3349 (eISSN)

Vol. 12361 LNCS, p. 473-492

16th European Conference on Computer Vision, ECCV 2020
Glasgow, United Kingdom

Subject categories

Media Engineering

Computer Vision and Robotics (Autonomous Systems)

Medical Image Processing

DOI

10.1007/978-3-030-58517-4_28

More information

Last updated

2020-12-03