

Poster

DisCoScene: Spatially Disentangled Generative Radiance Fields for Controllable 3D-Aware Scene Synthesis

Yinghao Xu · Menglei Chai · Zifan Shi · Sida Peng · Ivan Skorokhodov · Aliaksandr Siarohin · Ceyuan Yang · Yujun Shen · Hsin-Ying Lee · Bolei Zhou · Sergey Tulyakov

West Building Exhibit Halls ABC 026
Highlight
[ Project Page ] [ Paper PDF ] [ Slides ] [ Poster ]

Abstract:

Existing 3D-aware image synthesis approaches mainly focus on generating a single canonical object and show limited capacity in composing a complex scene containing a variety of objects. This work presents DisCoScene: a 3D-aware generative model for high-quality and controllable scene synthesis. The key ingredient of our method is an abstract object-level representation (i.e., 3D bounding boxes without semantic annotation) as the scene layout prior, which is easy to obtain, general enough to describe various scene contents, and yet informative enough to disentangle objects and background. Moreover, it serves as an intuitive user control for scene editing. Based on such a prior, the proposed model spatially disentangles the whole scene into object-centric generative radiance fields by learning on only 2D images with global-local discrimination. Our model achieves the generation fidelity and editing flexibility of individual objects while efficiently composing objects and the background into a complete scene. We demonstrate state-of-the-art performance on many scene datasets, including the challenging Waymo outdoor dataset. Our code will be made publicly available.
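The core mechanism the abstract describes, querying each object's radiance field in a canonical frame defined by its 3D bounding box and compositing the per-object outputs with a background field, can be sketched as follows. This is a minimal illustration under assumed conventions: the function names (world_to_box, compose_fields), the box parameterization (center, rotation, per-axis half-extents), and the density-weighted compositing rule are assumptions for exposition, not the paper's released code.

```python
import numpy as np

def world_to_box(points, box_center, box_rotation, box_scale):
    """Map world-space points into an object's canonical box frame.

    box_rotation maps local -> world, so applying its transpose
    (right-multiplication by an orthonormal R) inverts it. Points
    inside the box land in [-1, 1]^3, so every object field can be
    learned in a shared canonical space regardless of pose or size.
    """
    local = (points - box_center) @ box_rotation  # apply R^T
    return local / box_scale                      # normalize by half-extents

def compose_fields(points, boxes, object_fields, background_field):
    """Composite per-object radiance fields with a background field.

    Each field is a callable mapping (N, 3) points to a density
    vector (N,) and colors (N, 3). Densities are summed and colors
    density-weighted; this is a common compositing rule for
    multi-object radiance fields (an assumption here, not necessarily
    the paper's exact formulation).
    """
    density = np.zeros(len(points))
    rgb = np.zeros((len(points), 3))
    for box, field in zip(boxes, object_fields):
        local = world_to_box(points, box["center"], box["rotation"], box["scale"])
        inside = np.all(np.abs(local) <= 1.0, axis=1)  # clip to box support
        sigma, color = field(local[inside])
        density[inside] += sigma
        rgb[inside] += sigma[:, None] * color
    sigma_bg, color_bg = background_field(points)       # background fills the rest
    density += sigma_bg
    rgb += sigma_bg[:, None] * color_bg
    rgb /= np.maximum(density[:, None], 1e-8)           # density-weighted average
    return density, rgb
```

Clipping each field to its box support is what makes the representation spatially disentangled: at inference time, moving, rotating, or rescaling a bounding box changes only that object's contribution, which is how the layout prior doubles as an intuitive editing control.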
