NeuRAD: Neural Rendering for Autonomous Driving
Paper in proceedings, 2024

Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community. Recent methods show NeRFs' potential for closed-loop simulation, enabling testing of AD systems, and as an advanced training data augmentation technique. However, existing methods often require long training times, dense semantic supervision, or lack generalizability. This, in turn, hinders the application of NeRFs for AD at scale. In this paper, we propose NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our method features simple network design, extensive sensor modeling for both camera and lidar - including rolling shutter, beam divergence and ray dropping - and is applicable to multiple datasets out of the box. We verify its performance on five popular AD datasets, achieving state-of-the-art performance across the board. To encourage further development, we openly release the NeuRAD source code.
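To illustrate one of the sensor effects mentioned in the abstract, the sketch below shows a generic rolling-shutter model: each image row is assigned its own capture time, and the sensor pose is interpolated to that time before a ray is cast. This is a minimal, illustrative assumption-based example, not the NeuRAD implementation; all function names and parameters are hypothetical.

```python
# Minimal sketch (not the NeuRAD code): per-row capture times for a
# rolling-shutter camera, plus pose interpolation so each ray can be
# rendered from the sensor pose at its own timestamp.
import numpy as np


def rolling_shutter_times(height, width, frame_start_time, readout_time):
    """Return an (H, W) array of per-pixel capture times.

    Assumes rows are read out sequentially over `readout_time` seconds,
    starting at `frame_start_time` (a common rolling-shutter model).
    """
    rows = np.arange(height, dtype=np.float64)
    # Time offset of each row relative to the start of the frame readout.
    row_offsets = rows / max(height - 1, 1) * readout_time
    times = frame_start_time + row_offsets[:, None]            # (H, 1)
    return np.broadcast_to(times, (height, width)).copy()      # (H, W)


def interpolate_position(t, t0, pos0, t1, pos1):
    """Linearly interpolate the sensor position between two timestamps.

    A full treatment would also interpolate rotation (e.g. SLERP);
    this sketch keeps only translation for brevity.
    """
    alpha = (t - t0) / (t1 - t0)
    return (1.0 - alpha) * pos0 + alpha * pos1


if __name__ == "__main__":
    H, W = 4, 6
    times = rolling_shutter_times(H, W, frame_start_time=0.0, readout_time=0.03)
    # With per-ray timestamps, fast ego-motion no longer smears geometry:
    # each ray originates from where the sensor actually was at readout.
    p0 = np.array([0.0, 0.0, 0.0])
    p1 = np.array([1.0, 0.0, 0.0])   # 1 m of ego-motion over 0.1 s
    ray_origin = interpolate_position(times[2, 3], 0.0, p0, 0.1, p1)
    print(times[2, 3], ray_origin)
```

The same per-ray timestamping idea applies to a spinning lidar, where points within one sweep are captured at different times as the sensor rotates.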

Autonomous Driving

NeRF

Neural Rendering

Authors

Adam Tonderski

Zenseact AB

Lund University

Carl Lindström

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Zenseact AB

Georg Hess

Zenseact AB

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

William Ljungbergh

Zenseact AB

Linköping University

Lennart Svensson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Christoffer Petersson

Chalmers, Mathematical Sciences, Algebra and Geometry

Zenseact AB

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

1063-6919 (ISSN)

14895-14904

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Seattle, USA

Subject Categories (SSIF 2025)

Computer graphics and computer vision

Computer Sciences

DOI

10.1109/CVPR52733.2024.01411
