Torsten Sattler
Torsten Sattler is an associate professor in the Computer Vision and Medical Image Analysis research group. His main research interests center on developing robust and reliable 3D computer vision algorithms for applications such as mixed reality, self-driving cars, and robotics. To this end, he works on integrating higher-level scene understanding into techniques such as visual localization and mapping. He is also interested in real-time computer vision algorithms and machine learning for computer vision tasks.

Publications
InLoc: Indoor Visual Localization with Dense Matching and View Synthesis
Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization?
To Learn or Not to Learn: Visual Localization from Essential Matrices
Making Affine Correspondences Work in Camera Geometry Computation
Large-scale, real-time visual–inertial localization revisited
Single-Image Depth Prediction Makes Feature Matching Easier
Long-Term Visual Localization Revisited
Using Image Sequences for Long-Term Visual Localization
SurfelMeshing: Online Surfel-Based Mesh Reconstruction
Infrastructure-Based Multi-camera Calibration Using Radial Projections
Why Having 10,000 Parameters in Your Camera Model Is Better Than Twelve
Handcrafted Outlier Detection Revisited
Self-Supervised Linear Motion Deblurring
Beyond Controlled Environments: 3D Camera Re-localization in Changing Indoor Scenes
Deep LiDAR localization using optical flow sensor-map correspondences
Understanding the Limitations of CNN-based Absolute Camera Pose Regression
Image-to-Image Translation for Enhanced Feature Matching, Image Retrieval and Visual Localization
Efficient 2D-3D Matching for Multi-Camera Visual Localization
A cross-season correspondence dataset for robust semantic segmentation
Is This the Right Place? Geometric-Semantic Pose Verification for Indoor Visual Localization
Night-to-day image translation for retrieval-based localization
Hybrid Scene Compression for Visual Localization
BAD SLAM: Bundle Adjusted Direct RGB-D SLAM
D2-Net: A Trainable CNN for Joint Description and Detection of Local Features
Revisiting Radial Distortion Absolute Pose
Incremental visual-inertial 3D mesh generation with structural regularities
Real-Time Dense Mapping for Self-Driving Vehicles using Fisheye Cameras
Research projects
Vision and machine learning for collaborative robotics