

Poster

Residual Learning in Diffusion Models

Junyu Zhang · Daochang Liu · Eunbyung Park · Shichao Zhang · Chang Xu

Arch 4A-E Poster #242
Highlight
[ Paper PDF ] [ Poster ]
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

The past few years have witnessed great success in the use of diffusion models (DMs) to generate high-fidelity images with the help of stochastic differential equations (SDEs). Nevertheless, a gap emerges in the sampling trajectory constructed by the reverse-SDE due to the accumulation of score estimation and discretization errors. This gap leaves a residual in the generated images, adversely impacting image quality. To remedy this, we propose a novel residual learning framework built upon a correction function. The optimized function improves image quality by effectively rectifying the sampling trajectory. Importantly, our framework exhibits transferable residual correction ability, i.e., a correction function optimized for one pre-trained DM can also enhance the sampling trajectories constructed by other DMs on the same dataset. Experimental results on four widely used datasets demonstrate the effectiveness and transferability of our framework.
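The abstract describes a correction function that rectifies the reverse-SDE sampling trajectory. As a rough illustration of where such a correction would act, the sketch below adds a hypothetical additive correction term to each step of an Euler-Maruyama sampler for a VP-SDE. The names `score_model` and `correction_fn`, the linear noise schedule, and the additive form of the correction are illustrative assumptions, not the authors' implementation.

```python
import math
import torch

def reverse_sde_sample(score_model, correction_fn, shape, n_steps=1000,
                       beta_min=0.1, beta_max=20.0, device="cpu"):
    """Euler-Maruyama sampler for a VP-SDE, with a learned additive
    correction applied after each step (illustrative sketch only)."""
    x = torch.randn(shape, device=device)        # start from the prior N(0, I)
    dt = -1.0 / n_steps                          # integrate from t=1 down to t=0
    for i in range(n_steps):
        t = torch.full((shape[0],), 1.0 - i / n_steps, device=device)
        beta_t = beta_min + t * (beta_max - beta_min)   # linear noise schedule
        drift = -0.5 * beta_t[:, None] * x              # forward drift f(x, t)
        diffusion = torch.sqrt(beta_t)[:, None]         # diffusion coefficient g(t)
        score = score_model(x, t)                       # estimated score of p_t(x)
        # reverse-SDE drift: f(x, t) - g(t)^2 * score
        x = x + (drift - diffusion**2 * score) * dt
        x = x + diffusion * math.sqrt(-dt) * torch.randn_like(x)
        # hypothetical correction nudging the state back toward the true trajectory
        x = x + correction_fn(x, t)
    return x

if __name__ == "__main__":
    # toy stand-ins: the exact score of N(0, I) and a no-op correction baseline
    toy_score = lambda x, t: -x
    no_correction = lambda x, t: torch.zeros_like(x)
    samples = reverse_sde_sample(toy_score, no_correction, shape=(4, 8), n_steps=200)
    print(samples.shape)  # torch.Size([4, 8])
```

Under this reading, the transferability claim means `correction_fn`, once optimized against one pre-trained `score_model`, can be reused with a different score model trained on the same dataset; see the paper for the actual formulation.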
