D2-Net: A Trainable CNN for Joint Description and Detection of Local Features
Paper in proceedings, 2019

In this work we address the problem of finding reliable pixel-level correspondences under difficult imaging conditions. We propose an approach where a single convolutional neural network plays a dual role: It is simultaneously a dense feature descriptor and a feature detector. By postponing the detection to a later stage, the obtained keypoints are more stable than their traditional counterparts based on early detection of low-level structures. We show that this model can be trained using pixel correspondences extracted from readily available large-scale SfM reconstructions, without any further annotations. The proposed method obtains state-of-the-art performance on both the difficult Aachen Day-Night localization dataset and the InLoc indoor localization benchmark, as well as competitive performance on other benchmarks for image matching and 3D reconstruction.
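The core idea of describe-and-detect is that keypoints are read off a dense CNN feature map after description, rather than detected first on low-level image structures. A minimal sketch of this detection criterion, under the assumption of a hard selection rule (a pixel is kept when its strongest channel response is also a strict spatial local maximum of that channel; `hard_detect` and plain NumPy loops are illustrative choices, not the paper's implementation):

```python
import numpy as np

def hard_detect(feats):
    """Hard keypoint detection on a dense feature map, in the
    describe-and-detect spirit: pixel (i, j) is a keypoint if, for
    its strongest channel k, the response is also a strict spatial
    local maximum in a 3x3 neighbourhood of channel k.

    feats: array of shape (C, H, W), e.g. a CNN feature map.
    Returns a boolean (H, W) keypoint mask (image border excluded).
    """
    C, H, W = feats.shape
    best = feats.argmax(axis=0)  # channel-wise argmax per pixel
    mask = np.zeros((H, W), dtype=bool)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            k = best[i, j]
            patch = feats[k, i - 1:i + 2, j - 1:j + 2]
            center = feats[k, i, j]
            # strict maximum: center attains the patch max uniquely
            if center >= patch.max() and (patch == center).sum() == 1:
                mask[i, j] = True
    return mask
```

In practice the paper relaxes this hard rule into soft detection scores so the criterion is differentiable and trainable end-to-end; the sketch above only illustrates the test-time selection logic.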

deep learning

local features

3D reconstruction

machine learning

visual localization

Authors

Mihai Dusmanu

École Normale Supérieure (ENS)

INRIA Paris

ETH Zurich

Ignacio Rocco

École Normale Supérieure (ENS)

INRIA Paris

Tomas Pajdla

Czech Technical University in Prague

Marc Pollefeys

ETH Zurich

Microsoft Mixed Reality & Artificial Intelligence Lab

Josef Sivic

École Normale Supérieure (ENS)

INRIA Paris

Czech Technical University in Prague

Akihiko Torii

Tokyo Institute of Technology

Torsten Sattler

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Imaging and Image Analysis

2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Pages 8084–8093

Long Beach, USA

Subject Categories

Robotics

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1109/CVPR.2019.00828

More information

Latest update

11/13/2020