Poster

Geometrically-driven Aggregation for Zero-shot 3D Point Cloud Understanding

Guofeng Mei · Luigi Riz · Yiming Wang · Fabio Poiesi

Arch 4A-E Poster #360
Highlight
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Zero-shot 3D point cloud understanding can be achieved via 2D Vision-Language Models (VLMs). Existing strategies directly map VLM representations from the 2D pixels of rendered or captured views to 3D points, overlooking the inherent and expressible geometric structure of the point cloud. Geometrically similar or nearby regions can be exploited to bolster point cloud understanding, as they are likely to share semantic information. To this end, we introduce the first training-free aggregation technique that leverages the point cloud's 3D geometric structure to improve the quality of the transferred VLM representations. Our approach operates iteratively, performing local-to-global aggregation based on geometric and semantic point-level reasoning. We benchmark our approach on three downstream tasks, including classification, part segmentation, and semantic segmentation, with a variety of datasets spanning synthetic and real-world, indoor and outdoor scenarios. Our approach achieves new state-of-the-art results in all benchmarks. We will release the source code publicly.
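The abstract describes the aggregation only at a high level. Below is a minimal, illustrative sketch (not the authors' released code) of how per-point VLM features could be smoothed over geometric neighbours in a training-free, local-to-global fashion. The function name, the hyper-parameters (k, iters, tau), and the use of a k-d tree for neighbour search are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def aggregate_point_features(points, feats, k=16, iters=3, tau=0.1):
    """Illustrative sketch: iteratively smooth per-point VLM features
    over geometric neighbours (hyper-parameters are assumptions).

    points: (N, 3) xyz coordinates of the point cloud.
    feats:  (N, D) per-point features transferred from a 2D VLM.
    """
    tree = cKDTree(points)
    _, nn_idx = tree.query(points, k=k)          # (N, k) geometric neighbours
    for _ in range(iters):
        # L2-normalise so dot products act as cosine (semantic) similarity
        f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
        nbr = f[nn_idx]                          # (N, k, D) neighbour features
        # semantic affinity between each point and its geometric neighbours
        sim = np.einsum('nd,nkd->nk', f, nbr) / tau
        w = np.exp(sim - sim.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # weighted local aggregation; repeating the loop propagates
        # information from local neighbourhoods toward larger regions
        feats = np.einsum('nk,nkd->nd', w, nbr)
    return feats
```

Under these assumptions, repeated weighted averaging over geometric neighbours propagates information from small local neighbourhoods toward larger, geometrically coherent regions; the aggregated features can then be compared against the VLM's text embeddings for zero-shot classification or segmentation.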
