

Poster

GauHuman: Articulated Gaussian Splatting from Monocular Human Videos

Shoukang Hu · Tao Hu · Ziwei Liu

Arch 4A-E Poster #85
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract:

We present GauHuman, a 3D human model with Gaussian Splatting that enables both fast training (1~2 minutes) and real-time rendering (up to 189 FPS), in contrast to existing NeRF-based implicit representation frameworks that demand hours of training and seconds of rendering per frame. Specifically, GauHuman encodes Gaussian Splatting in the canonical space and transforms 3D Gaussians from canonical space to posed space with linear blend skinning (LBS), in which effective pose and LBS refinement modules are designed to learn fine details of 3D humans at negligible computational cost. Moreover, to enable fast optimization of GauHuman, we initialize and prune 3D Gaussians with a 3D human prior, split and clone them under KL divergence guidance, and introduce a novel merge operation for further speed-up. Extensive experiments on the ZJU_Mocap and MonoCap datasets demonstrate that GauHuman achieves state-of-the-art performance quantitatively and qualitatively with fast training and real-time rendering. Notably, without sacrificing rendering quality, GauHuman models a 3D human performer with only ~13k 3D Gaussians. Our code is available at https://github.com/skhu101/GauHuman.
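The core step described above, posing canonical 3D Gaussians with linear blend skinning, can be sketched roughly as follows. This is a minimal illustration under assumptions, not the repository's actual API: it assumes SMPL-style per-bone rigid transforms and per-Gaussian skinning weights, and all function and variable names here are hypothetical. It also omits the pose and LBS refinement modules that the paper adds on top of plain LBS.

```python
# Minimal sketch (assumption-based, not the authors' implementation) of posing
# canonical 3D Gaussians with linear blend skinning (LBS).
import torch

def lbs_transform_gaussians(means_c, rots_c, skin_weights, bone_transforms):
    """Transform canonical Gaussian means/rotations into posed space via LBS.

    means_c:          (N, 3)   canonical Gaussian centers
    rots_c:           (N, 3, 3) canonical Gaussian rotation matrices
    skin_weights:     (N, B)   per-Gaussian blend weights over B bones (rows sum to 1)
    bone_transforms:  (B, 4, 4) rigid bone transforms (canonical -> posed)
    """
    # Blend per-bone rigid transforms into one 4x4 transform per Gaussian.
    T = torch.einsum("nb,bij->nij", skin_weights, bone_transforms)  # (N, 4, 4)

    # Apply the blended transform to the Gaussian centers (homogeneous coords).
    means_h = torch.cat([means_c, torch.ones_like(means_c[:, :1])], dim=-1)  # (N, 4)
    means_p = torch.einsum("nij,nj->ni", T, means_h)[:, :3]

    # Rotate the Gaussian orientations; since the covariance is R S S^T R^T,
    # only the rotation part changes. The blended 3x3 block is not exactly a
    # rotation matrix, so in practice it may be re-orthonormalized.
    rots_p = torch.einsum("nij,njk->nik", T[:, :3, :3], rots_c)
    return means_p, rots_p
```

After this transformation, the posed Gaussians can be rasterized with a standard Gaussian Splatting renderer; the adaptive density control steps mentioned in the abstract (prior-based initialization and pruning, KL-guided split/clone, and merging) operate on the canonical Gaussians during optimization.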
