

Poster

GenNBV: Generalizable Next-Best-View Policy for Active 3D Reconstruction

Xiao Chen · Quanyi Li · Tai Wang · Tianfan Xue · Jiangmiao Pang

Arch 4A-E Poster #174
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

While recent advances in neural radiance fields enable realistic digitization of large-scale scenes, the image-capturing process is still time-consuming and labor-intensive. Previous works attempt to automate this process using a Next-Best-View (NBV) policy for active 3D reconstruction. However, existing NBV policies rely heavily on hand-crafted criteria, limited action spaces, or per-scene optimized representations, which limits their zero-shot generalizability. To overcome these constraints, we propose GenNBV, an end-to-end generalizable NBV policy. Our policy adopts a reinforcement learning (RL)-based framework and extends the typical limited action space to 5D free space, which allows our agent drone to scan from any viewpoint and even interact with unseen geometries during training. To boost zero-shot generalizability, we also propose a novel multi-source state embedding comprising geometric, semantic, and action representations. We establish a benchmark using the Isaac Gym simulator with the Houses3K and OmniObject3D datasets to evaluate this NBV policy. Experiments demonstrate that our policy achieves coverage ratios of 98.26% and 97.12% on unseen building-scale objects from these two datasets, respectively, outperforming prior solutions.
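The abstract does not give implementation details, but its core idea — fusing geometric, semantic, and action representations into a single state embedding that drives a policy over a 5D viewpoint action — can be sketched as below. This is a minimal, hypothetical PyTorch example: the encoder widths, concatenation-based fusion, Gaussian policy head, and the interpretation of the 5D action as (x, y, z, pitch, yaw) are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class MultiSourceStateEmbedding(nn.Module):
    """Hypothetical fusion of geometric, semantic, and action features."""
    def __init__(self, geo_dim=256, sem_dim=256, act_dim=5, embed_dim=256):
        super().__init__()
        self.geo_enc = nn.Sequential(nn.Linear(geo_dim, embed_dim), nn.ReLU())
        self.sem_enc = nn.Sequential(nn.Linear(sem_dim, embed_dim), nn.ReLU())
        self.act_enc = nn.Sequential(nn.Linear(act_dim, embed_dim), nn.ReLU())
        self.fuse = nn.Linear(3 * embed_dim, embed_dim)

    def forward(self, geo_feat, sem_feat, prev_action):
        # Concatenate per-source embeddings and project to a joint state vector.
        z = torch.cat([self.geo_enc(geo_feat),
                       self.sem_enc(sem_feat),
                       self.act_enc(prev_action)], dim=-1)
        return torch.relu(self.fuse(z))

class NBVPolicy(nn.Module):
    """Gaussian policy over a 5D viewpoint action (assumed: x, y, z, pitch, yaw)."""
    def __init__(self, embed_dim=256, action_dim=5):
        super().__init__()
        self.embed = MultiSourceStateEmbedding(embed_dim=embed_dim)
        self.mean = nn.Linear(embed_dim, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, geo_feat, sem_feat, prev_action):
        state = self.embed(geo_feat, sem_feat, prev_action)
        # Sampling from this distribution proposes the next 5D viewpoint.
        return torch.distributions.Normal(self.mean(state), self.log_std.exp())

# Usage: one batched forward pass with random placeholder features.
policy = NBVPolicy()
geo = torch.randn(4, 256)   # e.g. flattened occupancy-grid features (placeholder)
sem = torch.randn(4, 256)   # e.g. pooled image/semantic features (placeholder)
prev = torch.randn(4, 5)    # previous 5D viewpoint
next_view = policy(geo, sem, prev).sample()
print(next_view.shape)      # torch.Size([4, 5])
```

In an RL loop such as the one described, the sampled action would move the drone to a new viewpoint, the reconstruction state (and hence the geometric/semantic features) would be updated, and the policy would be trained against a coverage-based reward.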
