

Poster

Learning Transferable Negative Prompts for Out-of-Distribution Detection

Tianqi Li · Guansong Pang · Wenjun Miao · Xiao Bai · Jin Zheng

Arch 4A-E Poster #289
[ Paper PDF ] [ Poster ]
Thu 20 Jun 5 p.m. PDT – 6:30 p.m. PDT

Abstract:

Existing prompt learning methods have demonstrated certain capabilities in Out-of-Distribution (OOD) detection, but because they lack any perception of OOD images in the target dataset, OOD images can be mismatched to In-Distribution (ID) categories, resulting in a high false positive rate. To address this issue, we introduce a novel OOD detection method, named NegPrompt, which learns a set of negative prompts, each representing a negative connotation of a given class label, to delineate the boundaries between ID and OOD images. It learns such negative prompts with ID data only, eliminating any reliance on external data. Further, current methods assume the availability of samples of all ID classes, rendering them ineffective in open-vocabulary learning scenarios where inference can involve novel ID classes not present in the training data. In contrast, our learned negative prompts are transferable to novel class labels. Experiments on various ImageNet-based benchmarks demonstrate that NegPrompt surpasses state-of-the-art prompt-learning-based OOD detection methods and maintains a consistent lead in hard OOD detection in both closed- and open-vocabulary classification scenarios.
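The sketch below illustrates the general negative-prompt scoring idea described in the abstract; it is a minimal, hypothetical example, not the authors' released code. It assumes a CLIP-like model that produces L2-normalized image and text embeddings, and the function name `ood_score`, the temperature value, and the number of negative prompts per class `K` are illustrative assumptions.

```python
# Hypothetical sketch of negative-prompt OOD scoring (not the paper's implementation).
# Assumes a CLIP-like encoder has already produced L2-normalized embeddings:
# one learned positive prompt embedding per ID class, plus K learned negative
# prompt embeddings per class.
import torch
import torch.nn.functional as F

def ood_score(image_feat, pos_text_feats, neg_text_feats, temperature=0.01):
    """
    image_feat:     (d,)     normalized image embedding
    pos_text_feats: (C, d)   one positive prompt embedding per ID class
    neg_text_feats: (C*K, d) K negative prompt embeddings per ID class
    Returns a float where higher values indicate a more likely OOD input.
    """
    # Cosine-similarity logits of the image against all prompts, positive first.
    logits = torch.cat([pos_text_feats, neg_text_feats]) @ image_feat / temperature
    probs = logits.softmax(dim=0)
    # Probability mass captured by the negative prompts serves as the OOD score:
    # ID images align with some positive prompt, OOD images drift toward negatives.
    return probs[pos_text_feats.shape[0]:].sum().item()

# Illustrative usage with random features (d, C, K are assumed values).
d, C, K = 512, 10, 3
torch.manual_seed(0)
img = F.normalize(torch.randn(d), dim=0)
pos = F.normalize(torch.randn(C, d), dim=-1)
neg = F.normalize(torch.randn(C * K, d), dim=-1)
print(ood_score(img, pos, neg))  # higher -> more likely OOD
```

Summing the softmax mass over the negative prompts is one simple way to turn "closeness to a negative connotation of a class" into a detection score; because the negative prompts are tied to class labels rather than to specific training samples, the same scoring rule can, in principle, be reused with novel class labels at inference time.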
