

Poster

Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection

Jongha Kim · Jihwan Park · Jinyoung Park · Jinyoung Kim · Sehyung Kim · Hyunwoo J. Kim

Arch 4A-E Poster #385
[ Project Page ] [ Poster ]
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Visual Relationship Detection (VRD) has recently seen significant advancements with Transformer-based architectures. However, we identify two key limitations in the conventional label assignment used to train Transformer-based VRD models, i.e., the process of mapping a ground truth (GT) to a prediction. Under the conventional assignment, queries are trained to be ‘unspecialized’: each query is expected to detect every relation, which makes it difficult for a query to specialize in specific relations. Furthermore, a query is insufficiently trained since a GT is assigned to only a single prediction, so near-correct or even correct predictions are suppressed by being assigned ‘no relation (∅)’ as a GT. To address these issues, we propose Groupwise Query Specialization and Quality-Aware Multi-Assignment (SpeaQ). Groupwise Query Specialization trains ‘specialized’ queries by dividing queries and relations into disjoint groups and directing each query in a query group solely toward the relations in the corresponding relation group. Quality-Aware Multi-Assignment further facilitates training by assigning a GT to multiple predictions that are sufficiently close to it in terms of the subject, the object, and the relation in between. Experimental results and analyses show that SpeaQ effectively trains ‘specialized’ queries that better utilize the capacity of the model, resulting in consistent performance gains with ‘zero’ additional inference cost across multiple VRD models and benchmarks.
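To make the two mechanisms concrete, below is a minimal sketch (not the authors' implementation) of how groupwise masking and quality-aware multi-assignment could plug into a DETR-style Hungarian matcher. The group partitioning, the `quality` proxy, and the threshold `QUALITY_THRESH` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of groupwise query specialization + quality-aware
# multi-assignment on top of Hungarian matching. Purely illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

NUM_QUERIES, NUM_GTS, NUM_GROUPS = 12, 4, 3
rng = np.random.default_rng(0)

# Base matching cost between every (query, GT) pair, e.g. a weighted sum
# of subject/object box costs and relation classification cost.
cost = rng.random((NUM_QUERIES, NUM_GTS))

# --- Groupwise Query Specialization ---------------------------------
# Queries are split into disjoint groups; each GT relation is mapped to
# a relation group (random here, for illustration). A query may only be
# matched to GTs from its own group, so the cost is masked elsewhere.
query_group = np.arange(NUM_QUERIES) % NUM_GROUPS
gt_group = rng.integers(0, NUM_GROUPS, size=NUM_GTS)
INF = 1e6
masked_cost = np.where(query_group[:, None] == gt_group[None, :], cost, INF)

# Standard one-to-one Hungarian matching on the masked cost.
q_idx, gt_idx = linear_sum_assignment(masked_cost)
targets = {q: g for q, g in zip(q_idx, gt_idx)}

# --- Quality-Aware Multi-Assignment ----------------------------------
# A GT is additionally assigned to still-unmatched predictions in the
# same group whose quality exceeds a threshold, instead of suppressing
# them with a 'no relation' target.
quality = rng.random((NUM_QUERIES, NUM_GTS))  # stand-in for a real score
QUALITY_THRESH = 0.8
for q in range(NUM_QUERIES):
    for g in range(NUM_GTS):
        same_group = query_group[q] == gt_group[g]
        if q not in targets and same_group and quality[q, g] > QUALITY_THRESH:
            targets[q] = g  # extra positive instead of 'no relation'

print(targets)  # query index -> assigned GT index
```

In a real matcher, the quality score would presumably combine subject/object box overlaps with the relation classification score, so the extra positives are exactly the near-correct predictions that the abstract notes would otherwise be assigned ‘no relation (∅)’.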
