

Poster

Don’t Drop Your Samples! Coherence-Aware Training Benefits Conditional Diffusion

Nicolas Dufour · Victor Besnier · Vicky Kalogeiton · David Picard

Arch 4A-E Poster #142
Highlight
[ Project Page ] [ Paper PDF ]

Poster Session: Wed 19 Jun 5 p.m. PDT – 6:30 p.m. PDT

Abstract:

Conditional diffusion models are powerful generative models that can leverage various types of conditional information, such as class labels, segmentation masks, or text captions. However, in many real-world scenarios, conditional information may be noisy or unreliable due to human annotation errors or weak alignment. In this paper, we propose Coherence-Aware Diffusion (CAD), a novel method that integrates confidence in the conditional information into diffusion models, allowing them to learn from noisy annotations without discarding data. We assume that each data point has an associated confidence score reflecting the quality of its conditional information, and we condition the diffusion model on both the conditional information and this confidence score. In this way, the model learns to ignore or discount the conditioning when the confidence is low. We show that our method is theoretically sound and empirically effective on various conditional generation tasks. Moreover, we show that leveraging confidence yields realistic and diverse samples that respect the conditional information better than those of models trained on cleaned datasets where low-confidence samples have been discarded.
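To make the conditioning mechanism concrete, here is a minimal PyTorch sketch of the idea described in the abstract. This is not the authors' implementation: the module names, embedding sizes, and the choice to embed the scalar confidence like a timestep are all illustrative assumptions. The denoiser receives the noisy sample, the timestep, the label, and a per-sample confidence score, and is trained with a standard denoising objective on all samples, including low-confidence ones.

```python
# Illustrative sketch only (not the authors' code): a toy denoiser conditioned
# on both a label embedding and a scalar confidence score, so that it can
# learn to discount the label when the confidence is low.
import torch
import torch.nn as nn


class CoherenceAwareDenoiser(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 10):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, dim)
        # Assumption: confidence in [0, 1] is embedded like a timestep scalar.
        self.conf_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.time_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.net = nn.Sequential(nn.Linear(dim * 4, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x_t, t, label, confidence):
        # x_t: noisy sample (B, dim); t and confidence: (B, 1) scalars in [0, 1].
        h = torch.cat(
            [x_t, self.time_emb(t), self.label_emb(label), self.conf_emb(confidence)],
            dim=-1,
        )
        return self.net(h)  # predicted noise


# Standard denoising loss over *all* samples, low-confidence ones included,
# instead of filtering them out of the dataset.
model = CoherenceAwareDenoiser()
x_t = torch.randn(8, 256)
t = torch.rand(8, 1)
label = torch.randint(0, 10, (8,))
confidence = torch.rand(8, 1)  # e.g. a per-sample annotation-quality score
noise = torch.randn(8, 256)
loss = ((model(x_t, t, label, confidence) - noise) ** 2).mean()
```

Under this setup, the network can only exploit the label when the confidence input indicates it is trustworthy; at sampling time, one could then feed a confidence of 1 to request fully coherent generations (an assumption about usage, not a detail stated in the abstract).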
