

Poster

Knowledge-Enhanced Dual-stream Zero-shot Composed Image Retrieval

Yucheng Suo · Fan Ma · Linchao Zhu · Yi Yang

Arch 4A-E Poster #270
[ Paper PDF ] [ Poster ]
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

We study the zero-shot Composed Image Retrieval (ZS-CIR) task, which aims to retrieve the target image given a reference image and a description, without training on triplet datasets. Previous works learn a pseudo-word token by projecting the reference image features into the text embedding space via image-only contrastive learning. However, they focus on the global visual representation and ignore the representation of detailed attributes, e.g., color, object number, and layout. To address this challenge, we propose a Knowledge-Enhanced Dual-stream zero-shot composed image retrieval framework (KEDs). KEDs implicitly models the attributes of the reference image by incorporating a database. The database enriches the pseudo-word tokens by providing relevant images and captions, emphasizing shared attribute information from various aspects. In this way, KEDs recognizes the reference image from diverse perspectives. Moreover, KEDs adopts an extra stream that aligns pseudo-word tokens with textual concepts, leveraging pseudo-triplets mined from image-text pairs. The pseudo-word tokens generated in this stream are explicitly aligned with fine-grained attribute semantics in the text embedding space. Extensive experiments on widely used benchmarks, i.e., ImageNet-R, COCO object, Fashion-IQ, and CIRR, show that KEDs outperforms previous zero-shot composed image retrieval methods.
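The sketch below illustrates the dual-stream idea described in the abstract; it is not the authors' implementation. It assumes CLIP-style encoders that map images and text into a shared d-dimensional space, a database from which K neighbouring images and captions have already been retrieved as features, and hypothetical names (DualStreamMapper, compose_query, the fusion weight alpha) introduced only for illustration.

```python
# Minimal sketch (not the authors' code) of a knowledge-enhanced dual-stream
# pseudo-word mapper for ZS-CIR, assuming CLIP-style features of dimension `dim`.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualStreamMapper(nn.Module):
    """Maps a reference-image feature to two pseudo-word tokens:
    (1) a knowledge stream that attends over features of retrieved database
        images/captions to capture shared attribute information, and
    (2) a textual-concept stream projected directly from the image feature,
        intended to align with fine-grained attribute semantics."""

    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        # Stream 1: enrich the reference feature with retrieved knowledge.
        self.knowledge_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.proj_knowledge = nn.Linear(dim, dim)
        # Stream 2: project the reference feature toward textual concepts.
        self.proj_textual = nn.Linear(dim, dim)

    def forward(self, ref_feat, db_img_feats, db_cap_feats):
        # ref_feat:     (B, dim)    feature of the reference image
        # db_img_feats: (B, K, dim) features of K retrieved neighbour images
        # db_cap_feats: (B, K, dim) features of their captions
        knowledge = torch.cat([db_img_feats, db_cap_feats], dim=1)    # (B, 2K, dim)
        query = ref_feat.unsqueeze(1)                                 # (B, 1, dim)
        enriched, _ = self.knowledge_attn(query, knowledge, knowledge)
        token_knowledge = self.proj_knowledge(enriched.squeeze(1))    # stream 1 token
        token_textual = self.proj_textual(ref_feat)                   # stream 2 token
        return token_knowledge, token_textual


def compose_query(token_knowledge, token_textual, text_feat, alpha=0.5):
    """Fuse the two pseudo-word tokens (alpha is an assumed fusion weight) with
    the description feature and L2-normalise, so the composed query can be
    matched against target-image features by cosine similarity."""
    pseudo = alpha * token_knowledge + (1 - alpha) * token_textual
    return F.normalize(pseudo + text_feat, dim=-1)


if __name__ == "__main__":
    B, K, dim = 2, 4, 512
    mapper = DualStreamMapper(dim)
    ref = torch.randn(B, dim)
    db_imgs, db_caps = torch.randn(B, K, dim), torch.randn(B, K, dim)
    text = torch.randn(B, dim)                  # feature of the modification text
    tk, tt = mapper(ref, db_imgs, db_caps)
    print(compose_query(tk, tt, text).shape)    # torch.Size([2, 512])
```

In this sketch the retrieval database only contributes pre-computed neighbour features; how neighbours are retrieved, how pseudo-triplets are mined for training the textual stream, and how the two tokens are inserted into a prompt are all left out and would follow the paper's own procedure.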
