LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion
Paper in proceedings, 2024

As an emerging technology and a relatively affordable device, the 4D imaging radar has already been confirmed effective in performing 3D object detection in autonomous driving [1]. Nevertheless, the sparsity and noisiness of 4D radar point clouds hinder further performance improvement, and in-depth studies of its fusion with other modalities are lacking. On the other hand, as a new image view transformation strategy, sampling has been applied in a few image-based detectors and shown to outperform the widely applied depth-based splatting proposed in Lift-Splat-Shoot (LSS) [2], even without image depth prediction [3]. However, the potential of sampling is not fully unleashed. As a result, this paper investigates the sampling strategy for camera and 4D imaging radar fusion-based 3D object detection. In the proposed LiDAR Excluded Lean (LXL) model, predicted image depth distribution maps and radar 3D occupancy grids are generated from image perspective view (PV) features and radar bird's eye view (BEV) features, respectively. They are sent to the core of LXL, called radar occupancy-assisted depth-based sampling, to aid image view transformation.
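The occupancy-assisted weighting idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: all shapes, names, and the use of a simple element-wise product are illustrative assumptions; the sketch only shows how a predicted depth distribution and a radar occupancy grid (projected into the image frustum) could jointly re-weight PV features when lifting them into a 3D frustum volume.

```python
import numpy as np

# Hypothetical shapes: C feature channels, D depth bins, H x W image plane.
C, D, H, W = 8, 16, 4, 6
rng = np.random.default_rng(0)

pv_feat = rng.random((C, H, W))            # image perspective-view (PV) features
depth_dist = rng.random((D, H, W))         # predicted per-pixel depth distribution
depth_dist /= depth_dist.sum(axis=0, keepdims=True)  # normalize over depth bins
radar_occ = rng.random((D, H, W))          # radar occupancy mapped to the image frustum, in [0, 1]

# Occupancy-assisted weighting (assumed form): the depth distribution is
# re-weighted by radar occupancy, then PV features are lifted into a
# frustum volume of shape (C, D, H, W) by broadcasting.
weight = depth_dist * radar_occ
frustum = pv_feat[:, None, :, :] * weight[None, :, :, :]

print(frustum.shape)  # (8, 16, 4, 6)
```

In an actual detector, the frustum volume would then be sampled into BEV space using the camera geometry; this sketch stops at the weighting step the abstract highlights.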

Authors

Weiyi Xiong

Beihang University

Jianan Liu

Vitalent Consulting

Tao Huang

James Cook University

Qing-Long Han

Swinburne University of Technology

Yuxuan Xia

Chalmers, Electrical Engineering, Signal Processing and Medical Engineering

Bing Zhu

Beihang University

IEEE Intelligent Vehicles Symposium, Proceedings

1931-0587 (ISSN), 2642-7214 (eISSN)

3142-
9798350348811 (ISBN)

35th IEEE Intelligent Vehicles Symposium, IV 2024, Jeju Island, South Korea

Subject categories

Atomic and Molecular Physics and Optics

Computer Vision and Robotics (Autonomous Systems)

Medical Image Processing

DOI

10.1109/IV55156.2024.10588781

More information

Last updated

2024-08-05