

Poster

Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-Training via Differentiable Rendering of Line Segments

Yusuke Takimoto · Hikari Takehara · Hiroyuki Sato · Zihao Zhu · Bo Zheng

Arch 4A-E Poster #102
Highlight
[ Project Page ] [ Paper PDF ] [ Poster ]
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract:

In the film and gaming industries, achieving a realistic hair appearance typically involves the use of strands originating from the scalp. However, reconstructing these strands from observed surface images of hair presents significant challenges. The difficulty of acquiring Ground Truth (GT) data has led state-of-the-art learning-based methods to rely on pre-training with manually prepared synthetic CG data. This process is not only labor-intensive and costly but also introduces complications due to the domain gap with real-world data. In this study, we propose an optimization-based approach that eliminates the need for pre-training. Our method represents hair strands as line segments growing from the scalp and optimizes them using a novel differentiable rendering algorithm. To robustly optimize a substantial number of slender explicit geometries, we introduce 3D orientation estimation utilizing global optimization, strand initialization based on Laplace's equation, and reparameterization that leverages geometric connectivity and spatial proximity. Unlike existing optimization-based methods, our method is capable of reconstructing internal hair flow in an absolute direction. Our method exhibits robust and accurate inverse rendering, surpassing the quality of existing methods and significantly improving processing speed.
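Of the components named in the abstract, the strand initialization based on Laplace's equation is the most self-contained to illustrate. The sketch below is a rough illustration of the general idea, not the authors' implementation: it relaxes a Laplace problem on a voxelized hair volume with the scalp and outer hair surface as Dirichlet boundaries, then grows polyline strands from scalp roots by following the gradient of the resulting potential. The function names, grid resolution, boundary masks, and stopping criteria are all illustrative assumptions.

```python
import numpy as np

def solve_laplace(interior, scalp, surface, n_iters=500):
    """Jacobi relaxation of Laplace's equation on a voxel grid.

    `interior`, `scalp`, and `surface` are boolean masks over the same
    grid: the hair volume to fill, the scalp boundary (potential 0), and
    the outer hair surface (potential 1). These masks are illustrative
    stand-ins, not the paper's actual volumetric representation.
    """
    phi = np.zeros(interior.shape, dtype=np.float64)
    phi[surface] = 1.0
    for _ in range(n_iters):
        # Jacobi update: average of the six face neighbours.
        avg = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
               np.roll(phi, 1, 2) + np.roll(phi, -1, 2)) / 6.0
        phi = np.where(interior, avg, phi)
        phi[scalp], phi[surface] = 0.0, 1.0   # re-impose Dirichlet boundaries
    return phi

def trace_strand(phi, root, step=0.5, max_steps=400):
    """Grow one polyline strand from a scalp root by ascending the potential."""
    gx, gy, gz = np.gradient(phi)
    pts = [np.asarray(root, dtype=np.float64)]
    hi = np.array(phi.shape) - 1
    for _ in range(max_steps):
        i, j, k = np.clip(np.round(pts[-1]).astype(int), 0, hi)
        if phi[i, j, k] > 0.999:              # reached the outer hair surface
            break
        g = np.array([gx[i, j, k], gy[i, j, k], gz[i, j, k]])
        norm = np.linalg.norm(g)
        if norm < 1e-8:                       # flat region: stop growing
            break
        pts.append(pts[-1] + step * g / norm)
    return np.stack(pts)                      # (num_points, 3) polyline
```

A harmonic potential has no interior extrema, so its gradient streamlines run monotonically from the scalp to the outer surface; this makes such a field a natural source of non-degenerate, scalp-connected initial strands, which in the paper are then refined by optimizing the line segments through the differentiable renderer.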
