

Poster

De-Diffusion Makes Text a Strong Cross-Modal Interface

Chen Wei · Chenxi Liu · Siyuan Qiao · Zhishuai Zhang · Alan L. Yuille · Jiahui Yu

Arch 4A-E Poster #371
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

We demonstrate text as a strong cross-modal interface. Rather than relying on deep embeddings to connect image and language as the interface representation, our approach represents an image as text, which offers the interpretability and flexibility inherent to natural language. We employ an autoencoder that uses a pre-trained text-to-image diffusion model for decoding. The encoder is trained to transform an input image into text, which is then fed into the fixed text-to-image diffusion decoder to reconstruct the original input, a process we term De-Diffusion. Experiments validate both the precision and comprehensiveness of De-Diffusion text in representing images: it can be readily ingested by off-the-shelf text-to-image tools and LLMs for diverse multi-modal tasks. For example, a single De-Diffusion model generalizes to provide transferable prompts for different text-to-image tools, and it achieves a new state of the art on open-ended vision-language tasks simply by prompting LLMs with few-shot examples. Project page: https://dediffusion.github.io/
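The abstract outlines the core training recipe: an image-to-text encoder is optimized through a frozen, pre-trained text-to-image diffusion decoder so that the generated text reconstructs the input image. Below is a minimal PyTorch-style sketch of that setup, not the authors' implementation: the module names (ImageToTextEncoder, FrozenDiffusionDecoder), the toy noise-prediction decoder, and the Gumbel-softmax relaxation of the discrete text bottleneck are all hypothetical stand-ins chosen for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageToTextEncoder(nn.Module):
    """Hypothetical encoder mapping an image to discrete text tokens."""
    def __init__(self, vocab_size=32000, dim=256):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, images):
        feats = self.patchify(images).flatten(2).transpose(1, 2)  # (B, L, dim)
        logits = self.head(feats)                                 # (B, L, V)
        # Straight-through Gumbel-softmax keeps the discrete token choice
        # differentiable (one common trick; an assumption in this sketch).
        return F.gumbel_softmax(logits, hard=True, dim=-1)        # one-hot tokens

class FrozenDiffusionDecoder(nn.Module):
    """Toy stand-in for a pre-trained text-to-image diffusion model;
    its weights stay fixed, so gradients pass through to the encoder only."""
    def __init__(self, vocab_size=32000, dim=256):
        super().__init__()
        self.text_embed = nn.Linear(vocab_size, dim)  # consumes one-hot tokens
        self.denoiser = nn.Conv2d(4, 3, kernel_size=3, padding=1)
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, noisy_images, text_onehot):
        ctx = self.text_embed(text_onehot).mean(dim=1)            # (B, dim)
        cond = ctx[:, :1, None, None].expand(
            -1, 1, *noisy_images.shape[2:])                       # broadcast cond map
        return self.denoiser(torch.cat([noisy_images, cond], dim=1))

# One De-Diffusion training step on a dummy batch.
encoder, decoder = ImageToTextEncoder(), FrozenDiffusionDecoder()
opt = torch.optim.AdamW(encoder.parameters(), lr=1e-4)

images = torch.randn(4, 3, 64, 64)
text = encoder(images)                  # image -> "text" interface
noise = torch.randn_like(images)
noisy = images + noise                  # toy forward-diffusion corruption
pred = decoder(noisy, text)             # frozen decoder predicts the noise
loss = F.mse_loss(pred, noise)          # standard denoising objective
loss.backward()                         # gradients reach only the encoder
opt.step()

In the real system, the frozen decoder would be a full pre-trained text-to-image diffusion model with its own noise schedule, and the encoder's output would be actual vocabulary tokens usable directly as prompts; the sketch only preserves the key property that the reconstruction loss trains the encoder while the decoder stays fixed.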
