

Poster

Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language

Mark Hamilton · Andrew Zisserman · John Hershey · William Freeman

Arch 4A-E Poster #336
[ Project Page ] [ Paper PDF ] [ Slides ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

We present DenseAV, a novel dual encoder grounding architecture that learns high-resolution, semantically meaningful, and audio-visually aligned features solely through watching videos. We show that DenseAV can discover the "meaning" of words and the "location" of sounds without explicit localization supervision. Furthermore, it automatically discovers and distinguishes between these two types of associations without discriminative supervision. We show that our high-quality localization abilities arise from a new multi-head feature aggregation operator that directly compares dense image and audio representations for contrastive learning. In contrast, many other systems that learn "global" audio and video representations do not show high-quality localization of words and sounds. Finally, we contribute two new datasets to improve the evaluation of AV representations through speech- and sound-prompted semantic segmentation. On these and other datasets, we show that DenseAV dramatically outperforms the prior art on speech- and sound-prompted semantic segmentation. DenseAV also outperforms the current state of the art, ImageBind, on cross-modal retrieval while using fewer than half of the parameters.
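The abstract's core mechanism, a multi-head aggregation operator that compares dense (per-time-step) audio features against dense (per-location) visual features before pooling to a clip-level contrastive score, can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the feature shapes, the channel-wise head split, the max-over-space / mean-over-time pooling, and the temperature value are all assumptions made for clarity.

```python
import torch
import torch.nn.functional as F


def dense_multihead_scores(audio_feats: torch.Tensor,
                           visual_feats: torch.Tensor,
                           num_heads: int) -> torch.Tensor:
    """Pairwise clip-level scores from dense audio and visual features.

    audio_feats:  (B, T, C)    one feature per audio time step
    visual_feats: (B, H, W, C) one feature per image location
    The channel dimension is split into `num_heads` heads so that different
    heads can specialize (e.g. spoken words vs. ambient sounds).
    """
    B, T, C = audio_feats.shape
    _, H, W, _ = visual_feats.shape
    d = C // num_heads

    a = audio_feats.view(B, T, num_heads, d)        # (B, T, heads, d)
    v = visual_feats.view(B, H * W, num_heads, d)   # (B, HW, heads, d)

    # Dense similarity volume between every audio clip and every image:
    # shape (B_audio, B_visual, heads, T, HW).
    sim = torch.einsum("athd,bphd->abhtp", a, v)

    # Illustrative aggregation: max over image locations ("where does this
    # sound appear?"), then mean over time steps and heads, giving one
    # scalar score per audio-visual pair.
    return sim.max(dim=-1).values.mean(dim=(-1, -2))  # (B_audio, B_visual)


def contrastive_loss(scores: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss; matching audio and visual clips share an index."""
    labels = torch.arange(scores.size(0), device=scores.device)
    loss_av = F.cross_entropy(scores / temperature, labels)
    loss_va = F.cross_entropy(scores.t() / temperature, labels)
    return 0.5 * (loss_av + loss_va)
```

Because the score is pooled from a full per-location similarity volume, the intermediate `sim` tensor is what permits localizing words and sounds at inference time even though only clip-level pairings supervise training.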
