SurfelMeshing: Online Surfel-Based Mesh Reconstruction
Journal article, 2019

We address the problem of mesh reconstruction from live RGB-D video, assuming a calibrated camera and poses provided externally (e.g., by a SLAM system). In contrast to most existing approaches, we do not fuse depth measurements in a volume but in a dense surfel cloud. We asynchronously (re)triangulate the smoothed surfels to reconstruct a surface mesh. This novel approach makes it possible to maintain a dense surface representation of the scene during SLAM that can quickly adapt to loop closures. This is achieved by deforming the surfel cloud and asynchronously remeshing the surface where necessary. The surfel-based representation also naturally supports strongly varying scan resolution; in particular, it reconstructs colors at the input camera's resolution. Moreover, in contrast to many volumetric approaches, ours can reconstruct thin objects, since objects do not need to enclose a volume. We demonstrate our approach in a number of experiments, showing that it produces reconstructions that are competitive with the state of the art, and we discuss its advantages and limitations. The algorithm (excluding loop closure functionality) is available as open source at https://github.com/puzzlepaint/surfelmeshing.
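The linked repository contains the authors' implementation. As a rough illustration only, the C++ sketch below shows one plausible way to organize the data flow described in the abstract: depth frames are fused into a surfel cloud, loop closures deform the cloud, and a separate thread asynchronously re-triangulates surfels whose neighborhood changed. All class and function names here (SurfelMap, IntegrateFrame, ApplyLoopClosureDeformation, RemeshChangedRegions) are assumptions for illustration, not the actual SurfelMeshing API.

```cpp
// Illustrative sketch only; not the authors' implementation.
#include <atomic>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

struct Vec3 { float x, y, z; };

// One surfel: an oriented disk with color and a fusion confidence.
struct Surfel {
  Vec3 position{};
  Vec3 normal{};
  float radius = 0.f;          // follows the scan resolution at this point
  uint8_t rgb[3] = {0, 0, 0};  // color at the input camera's resolution
  float confidence = 0.f;      // accumulated measurement weight
  bool needs_remeshing = true; // set whenever the surfel is created or moves
};

struct Mesh {
  std::vector<Vec3> vertices;
  std::vector<uint32_t> triangle_indices;
};

class SurfelMap {
 public:
  // Fuse a new RGB-D frame: update existing surfels, add new ones, and flag
  // affected surfels so the meshing thread re-triangulates them later.
  void IntegrateFrame(/* depth image, color image, camera pose */) {
    std::lock_guard<std::mutex> lock(mutex_);
    // ... data association and surfel fusion would go here ...
    for (Surfel& s : surfels_) s.needs_remeshing = true;  // placeholder
  }

  // On a loop closure, deform the surfel cloud (e.g., following corrected
  // keyframe poses) and flag moved surfels; the mesh adapts asynchronously.
  void ApplyLoopClosureDeformation(/* pose correction */) {
    std::lock_guard<std::mutex> lock(mutex_);
    for (Surfel& s : surfels_) {
      // ... move s.position / s.normal according to the deformation ...
      s.needs_remeshing = true;
    }
  }

  // Called from the meshing thread: locally re-triangulate flagged surfels.
  void RemeshChangedRegions(Mesh* mesh) {
    std::lock_guard<std::mutex> lock(mutex_);
    for (Surfel& s : surfels_) {
      if (!s.needs_remeshing) continue;
      // ... local (re)triangulation of s and its neighbors into *mesh* ...
      s.needs_remeshing = false;
    }
  }

 private:
  std::mutex mutex_;
  std::vector<Surfel> surfels_;
};

int main() {
  SurfelMap map;
  Mesh mesh;
  std::atomic<bool> running{true};

  // Asynchronous meshing thread, decoupled from frame integration.
  std::thread mesher([&] {
    while (running) map.RemeshChangedRegions(&mesh);
  });

  // Main loop: integrate a few (here: empty) frames, then shut down.
  for (int frame = 0; frame < 10; ++frame) map.IntegrateFrame();
  running = false;
  mesher.join();
  return 0;
}
```

The key design point sketched here is the decoupling: the surfel cloud can be updated or deformed at frame rate, while triangulation happens opportunistically and only where surfels were flagged as changed.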

Real-Time Dense Mapping

Applications of RGB-D Vision

Depth Fusion

3D Modeling and Scene Reconstruction

Loop Closure

Surfels

RGB-D SLAM

Authors

Thomas Schöps

Swiss Federal Institute of Technology in Zürich (ETH)

Torsten Sattler

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering, Imaging and Image Analysis

Marc Pollefeys

Swiss Federal Institute of Technology in Zürich (ETH)

IEEE Transactions on Pattern Analysis and Machine Intelligence

0162-8828 (ISSN)

Vol. In Press

Subject Categories

Media Engineering

Computer Science

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1109/TPAMI.2019.2947048

More information

Latest update

12/2/2019