

Poster

Hearing Anything Anywhere

Mason Wang · Ryosuke Sawata · Samuel Clarke · Ruohan Gao · Shangzhe Wu · Jiajun Wu

Arch 4A-E Poster #208
[ Project Page ] [ Paper PDF ] [ Slides ] [ Poster ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Recent years have seen immense progress in 3D computer vision and computer graphics, with emerging tools that can virtualize real-world 3D environments for numerous extended reality (XR) applications. However, alongside immersive visual experiences, immersive auditory experiences are equally vital to our holistic perception of an environment. In this paper, we aim to reconstruct the spatial acoustic characteristics of an arbitrary environment given only a sparse set of (roughly 12) room impulse response (RIR) recordings and a planar reconstruction of the scene, a setup that is easily achievable by ordinary users. To this end, we introduce DiffRIR, a differentiable RIR rendering framework with interpretable parametric models of salient acoustic features of the scene, including sound source directivity and surface reflectivity. This allows us to synthesize novel auditory experiences throughout the space with any source audio. To evaluate our method, we collect a dataset of RIR recordings and music in four diverse, real environments. We show that our model outperforms state-of-the-art baselines on rendering monaural and binaural RIRs and music at unseen locations, and learns physically interpretable parameters characterizing acoustic properties of the sound source and surfaces in the scene.
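The final step the abstract describes, synthesizing audio at an unseen location from any dry source signal, amounts to convolving that signal with the RIR the model predicts there. Below is a minimal sketch of this step, not the authors' code: `predict_rir` is a hypothetical stand-in for the learned DiffRIR renderer, and the toy RIR it fabricates (a direct-path impulse plus a decaying noise tail) exists only so the example runs end to end.

```python
# Minimal sketch (assumptions labeled): render audio at a new listener
# position by convolving a dry source signal with a predicted RIR.
import numpy as np
from scipy.signal import fftconvolve

def predict_rir(listener_pos: np.ndarray, sample_rate: int) -> np.ndarray:
    """Hypothetical placeholder for a learned RIR renderer (not DiffRIR).

    Fabricates a toy RIR: a direct-sound impulse followed by an
    exponentially decaying noise tail, standing in for early reflections
    and reverberation.
    """
    rng = np.random.default_rng(0)
    length = sample_rate // 2                              # 0.5 s RIR
    t = np.arange(length) / sample_rate
    rir = rng.standard_normal(length) * np.exp(-8.0 * t)  # reverberant tail
    rir[0] = 1.0                                           # direct sound
    return rir

sample_rate = 16_000
# Dry source: a 1 s, 440 Hz sine tone.
dry_audio = np.sin(2 * np.pi * 440.0 * np.arange(sample_rate) / sample_rate)

# Predict the RIR at a (hypothetical) listener position and convolve.
rir = predict_rir(np.array([1.0, 2.0, 1.5]), sample_rate)
wet_audio = fftconvolve(dry_audio, rir)                    # audio at new spot
wet_audio /= np.max(np.abs(wet_audio))                     # avoid clipping
```

For binaural rendering, the same convolution would be applied per ear with a two-channel RIR; the sketch above shows only the monaural case.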
