FlowIBR: Leveraging Pre-Training for Efficient Neural Image-Based Rendering of Dynamic Scenes
Paper in proceedings, 2024

We introduce FlowIBR, a novel approach for efficient monocular novel view synthesis of dynamic scenes. Existing techniques already show impressive rendering quality but tend to focus on optimization within a single scene without leveraging prior knowledge, resulting in long optimization times per scene. FlowIBR circumvents this limitation by integrating a neural image-based rendering method, pretrained on a large corpus of widely available static scenes, with a per-scene optimized scene flow field. Utilizing this flow field, we bend the camera rays to counteract the scene dynamics, thereby presenting the dynamic scene to the rendering network as if it were static. The proposed method reduces per-scene optimization time by an order of magnitude while achieving rendering quality comparable to existing methods, all on a single consumer-grade GPU.
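The ray-bending idea from the abstract can be pictured with the minimal sketch below: sample points along a camera ray are displaced by the per-scene scene flow field toward a canonical time, so the pretrained static image-based renderer only ever sees static geometry. All names here (bend_ray_samples, flow_mlp, the displacement-by-velocity parameterization, canonical_time) are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def bend_ray_samples(sample_points, sample_times, flow_mlp, canonical_time=0.0):
    """Hypothetical sketch: warp ray sample points with a scene flow field.

    sample_points: (N, 3) 3D sample positions along camera rays at observation time.
    sample_times:  (N, 1) observation time of each sample.
    flow_mlp:      per-scene optimized network mapping (x, y, z, t) -> 3D scene flow.
    """
    # Query the scene flow field at each sample point and time.
    flow = flow_mlp(torch.cat([sample_points, sample_times], dim=-1))  # (N, 3)

    # Displace each sample from its observation time toward the canonical time,
    # i.e. "bend" the ray so dynamic content appears static to the renderer.
    dt = canonical_time - sample_times  # (N, 1), broadcasts over the 3 coordinates
    return sample_points + dt * flow
```

The warped sample points would then be passed unchanged to the image-based rendering network that was pretrained on static scenes.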

Dynamic scenes

3D from multi-view and sensors

Neural rendering

Authors

Marcel Büsching

Royal Institute of Technology (KTH)

Josef Bengtson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

David Nilsson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Mårten Björkman

Royal Institute of Technology (KTH)

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

2160-7508 (ISSN), 2160-7516 (eISSN)

pp. 8016-8026
979-8-3503-6547-4 (ISBN)

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024
Seattle, USA

Subject Categories

Computer Science

DOI

10.1109/CVPRW63382.2024.00800

Latest update

11/6/2024