

Poster

Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following

Yutong Feng · Biao Gong · Di Chen · Yujun Shen · Yu Liu · Jingren Zhou

Arch 4A-E Poster #171
[ Paper PDF ] [ Poster ]

Poster session: Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT
Oral presentation: Orals 2A Image & Video Synthesis
Wed 19 Jun 1 p.m. PDT — 2:30 p.m. PDT

Abstract:

Existing text-to-image (T2I) diffusion models often struggle to interpret complex prompts, especially those involving quantity, object-attribute binding, and multi-subject descriptions. In this work, we introduce a semantic panel as the middleware for decoding text into images, helping the generator follow instructions more faithfully. The panel is obtained by arranging the visual concepts parsed from the input text with the aid of large language models, and is then injected into the denoising network as a detailed control signal that complements the text condition. To facilitate text-to-panel learning, we design a semantic formatting protocol together with a fully automatic data-preparation pipeline. Thanks to this design, our approach, which we call Ranni, enhances the textual controllability of a pre-trained T2I generator. More importantly, the generative middleware enables a more convenient form of interaction (i.e., directly adjusting the elements in the panel or using language instructions) and lets users finely customize their generation. Building on this, we develop a practical system and showcase its potential in continuous generation and chat-based editing.
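To make the role of the semantic panel concrete, the sketch below shows one plausible form it could take: a list of objects with bounding boxes and attributes, produced from the prompt and editable before image generation. The data structure, field names, and the hard-coded parsing step are illustrative assumptions, not the paper's exact protocol; in Ranni the parsing is performed by a large language model and the panel is encoded into the denoising network rather than serialized as JSON.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import json

# Hypothetical structure for one element of the semantic panel: an object
# name, a bounding box in normalized [0, 1] image coordinates, and free-form
# attributes (color, material, etc.). Field names are illustrative.
@dataclass
class PanelElement:
    name: str
    bbox: Tuple[float, float, float, float]  # (x0, y0, x1, y1), normalized
    attributes: List[str] = field(default_factory=list)

def text_to_panel(prompt: str) -> List[PanelElement]:
    """Stand-in for the LLM-driven text-to-panel step.

    In Ranni this step is handled by a large language model that parses the
    prompt into visual concepts and arranges them spatially; here we return
    a hard-coded layout for 'two red apples on a wooden table'.
    """
    return [
        PanelElement("apple", (0.15, 0.45, 0.40, 0.70), ["red"]),
        PanelElement("apple", (0.55, 0.45, 0.80, 0.70), ["red"]),
        PanelElement("table", (0.00, 0.60, 1.00, 1.00), ["wooden"]),
    ]

def panel_to_condition(panel: List[PanelElement]) -> str:
    """Serialize the panel into the control signal handed to the generator.

    The real system injects encoded panel features into the denoising
    network alongside the text condition; JSON is used here only to make
    the intermediate representation explicit and editable.
    """
    return json.dumps([e.__dict__ for e in panel], indent=2)

if __name__ == "__main__":
    panel = text_to_panel("two red apples on a wooden table")
    # A user can edit the panel directly (move a box, change an attribute)
    # before generation, which is the interaction mode the abstract describes.
    panel[1].attributes = ["green"]
    print(panel_to_condition(panel))
```

The point of the intermediate representation is that instruction following becomes a structured prediction problem (text to panel) followed by a spatially grounded generation problem (panel to image), and the panel itself doubles as the handle for chat-based editing.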
