

Poster

Super-CLEVR: A Virtual Benchmark To Diagnose Domain Robustness in Visual Reasoning

Zhuowan Li · Xingrui Wang · Elias Stengel-Eskin · Adam Kortylewski · Wufei Ma · Benjamin Van Durme · Alan L. Yuille

West Building Exhibit Halls ABC 249
Highlight

Abstract:

Visual Question Answering (VQA) models often perform poorly on out-of-distribution data and struggle with domain generalization. Due to the multi-modal nature of this task, multiple factors of variation are intertwined, making generalization difficult to analyze. This motivates us to introduce a virtual benchmark, Super-CLEVR, where different factors in VQA domain shifts can be isolated so that their effects can be studied independently. Four factors are considered: visual complexity, question redundancy, concept distribution, and concept compositionality. With controllably generated data, Super-CLEVR enables us to test VQA methods in situations where the test data differs from the training data along each of these axes. We study four existing methods, including two neural-symbolic methods, NSCL and NSVQA, and two non-symbolic methods, FiLM and mDETR, as well as our proposed method, probabilistic NSVQA (P-NSVQA), which extends NSVQA with uncertainty reasoning. P-NSVQA outperforms the other methods on three of the four domain shift factors. Our results suggest that disentangling reasoning from perception, combined with probabilistic uncertainty, yields a strong VQA model that is more robust to domain shifts. The dataset and code are released at https://github.com/Lizw14/Super-CLEVR.
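To illustrate the idea of uncertainty reasoning in a neural-symbolic pipeline, here is a minimal, hypothetical sketch (not the authors' implementation): rather than committing to the perception module's hard argmax labels, a symbolic operator can weight each object by the probability that it satisfies the queried concept, so an answer such as a count becomes an expectation over the perception distributions. The scene structure, function names, and toy probabilities below are all illustrative assumptions.

```python
# Hypothetical sketch of probabilistic symbolic execution over uncertain
# perception outputs; not the actual P-NSVQA code.

def soft_filter(objects, attribute, value):
    """Per-object probability of matching `value` for `attribute`.

    Each object is a dict mapping attribute names to probability
    distributions over attribute values (a perception module's output).
    """
    return [obj[attribute].get(value, 0.0) for obj in objects]

def soft_count(match_probs):
    """Expected count, treating per-object matches as independent."""
    return sum(match_probs)

# Toy scene: two objects with uncertain color predictions.
scene = [
    {"color": {"red": 0.9, "blue": 0.1}},
    {"color": {"red": 0.4, "blue": 0.6}},
]

probs = soft_filter(scene, "color", "red")
print(soft_count(probs))  # expected number of red objects, ~1.3
```

A hard-argmax executor would answer "1" here (only the first object is most likely red); the probabilistic version keeps the second object's 0.4 chance of being red in play, which is the kind of soft evidence that can make reasoning more robust when perception degrades under domain shift.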
