SAFDNet: A Simple and Effective Network for Fully Sparse 3D Object Detection
Gang Zhang · Chen Junnan · Guohuan Gao · Jianmin Li · Si Liu · Xiaolin Hu
Arch 4A-E Poster #22
Oral presentation: Orals 4A Autonomous navigation and egocentric vision
Thu 20 Jun 1 p.m. PDT — 2:30 p.m. PDT
Poster: Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT
Abstract:
LiDAR-based 3D object detection plays an essential role in autonomous driving. Existing high-performing 3D object detectors usually build dense feature maps in the backbone network and prediction head. However, the computational costs introduced by the dense feature maps grow quadratically as the perception range increases, making these models hard to scale up to long-range detection. Some recent works have attempted to construct fully sparse detectors to solve this issue; nevertheless, the resulting models either rely on a complex multi-stage pipeline or exhibit inferior performance. In this work, we propose SAFDNet, a straightforward yet highly effective architecture, tailored for fully sparse 3D object detection. In SAFDNet, an adaptive feature diffusion strategy is designed to address the center feature missing problem. We conducted extensive experiments on the Waymo Open, nuScenes, and Argoverse2 datasets. SAFDNet performed slightly better than the previous SOTA on the first two datasets but much better on the last, which features long-range detection, verifying the efficacy of SAFDNet in scenarios where long-range detection is required. Notably, on Argoverse2, SAFDNet surpassed the previous best hybrid detector HEDNet by 2.6% mAP while being 2.1× faster, and yielded 2.1% mAP gains over the previous best sparse detector FSDv2 while being 1.3× faster. The code will be available at https://github.com/zhanggang001/HEDNet.
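The center feature missing problem arises because, in a fully sparse detector, voxels at an object's center are often empty (LiDAR points hit object surfaces, not centers), so center-based heads have no feature to regress from. Feature diffusion counters this by spreading features from occupied voxels into nearby empty ones. The sketch below is only a rough illustration of that idea on a 2D BEV grid, not the paper's implementation: the function name, the fixed `radius`, and the `decay` factor are illustrative assumptions, whereas SAFDNet adapts the diffusion range per voxel.

```python
def adaptive_feature_diffusion(voxels, grid_shape, radius=1, decay=0.5):
    """Toy sketch of feature diffusion on a sparse 2D (BEV) voxel grid.

    voxels: dict mapping (x, y) voxel coordinates to a feature list.
    Each occupied voxel's feature, scaled by `decay`, is copied into
    empty neighbors within `radius`, so object centers that fall in
    empty voxels still receive features for the detection head.
    NOTE: illustrative only; the paper's strategy chooses the
    diffusion range adaptively rather than using a fixed radius.
    """
    diffused = dict(voxels)  # occupied voxels keep their own features
    for (x, y), feat in voxels.items():
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                nx, ny = x + dx, y + dy
                # Skip the source voxel and voxels that were already occupied.
                if (dx, dy) == (0, 0) or (nx, ny) in voxels:
                    continue
                # Stay inside the grid.
                if not (0 <= nx < grid_shape[0] and 0 <= ny < grid_shape[1]):
                    continue
                contrib = [decay * f for f in feat]
                prev = diffused.get((nx, ny))
                # If two occupied voxels diffuse into the same empty voxel,
                # keep the element-wise maximum.
                if prev is None:
                    diffused[(nx, ny)] = contrib
                else:
                    diffused[(nx, ny)] = [max(a, b) for a, b in zip(prev, contrib)]
    return diffused
```

For example, a single occupied voxel at (2, 2) on a 5×5 grid fills its eight neighbors with a decayed copy of its feature, leaving the rest of the grid sparse.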