Abstract:
Active domain adaptation (ADA) aims to maximally boost model adaptation on a new target domain by actively selecting a limited budget of target data to annotate. This setting neglects the more practical scenario where training data are collected from multiple sources, which motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains, termed Multi-source Active Domain Adaptation (MADA). Not surprisingly, we find that most traditional ADA methods cannot work directly in such a setting, suffering significant performance degradation compared with the single-source setting. This is mainly because of the excessive domain gap introduced by all the source domains, and because their uncertainty-aware sample selection is easily miscalibrated under multi-domain shifts. Considering this, we propose a $\underline{\textbf{D}}$ynamic int$\underline{\textbf{e}}$gra$\underline{\textbf{te}}$d un$\underline{\textbf{c}}$er$\underline{\textbf{t}}$a$\underline{\textbf{i}}$nty $\underline{\textbf{v}}$aluation fram$\underline{\textbf{e}}$work~($\textbf{Detective}$) that comprehensively considers the domain shift between the multi-source domains and the target domain to detect informative target samples. Specifically, Detective leverages a dynamic DA model that learns how to adapt the model's parameters to fit the union of the multi-source domains, which enables approximate single-source-domain modeling by the dynamic model. We then comprehensively measure both domain uncertainty and predictive uncertainty on the target domain via evidential deep learning to detect informative target samples, mitigating uncertainty miscalibration. We further employ a contextual diversity-aware calculator to enhance the diversity of the selected samples. Experiments demonstrate that our solution outperforms existing methods by a considerable margin on three domain adaptation benchmarks.
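As a minimal illustration of the evidential-deep-learning uncertainty measure mentioned above, the sketch below uses the standard Dirichlet-based formulation (evidence from non-negative logits, total uncertainty mass $u = K / \sum_k \alpha_k$). This is an assumed, simplified formulation for intuition only; the function name `edl_uncertainty` is hypothetical and not the paper's code.

```python
import numpy as np

def edl_uncertainty(logits):
    """Evidential uncertainty from a Dirichlet over class probabilities.

    Standard EDL formulation (assumed here, not the paper's exact code):
    evidence is the non-negative part of the logits, the Dirichlet
    concentration is alpha = evidence + 1, and the total uncertainty
    mass u = K / sum(alpha) is high when evidence is scarce.
    """
    evidence = np.maximum(logits, 0.0)                  # e_k >= 0
    alpha = evidence + 1.0                              # Dirichlet parameters
    K = alpha.shape[-1]                                 # number of classes
    u = K / alpha.sum(axis=-1)                          # uncertainty mass
    probs = alpha / alpha.sum(axis=-1, keepdims=True)   # expected probabilities
    return u, probs

# A sample with strong evidence for one class vs. an ambiguous one:
# the ambiguous sample receives higher uncertainty and would be
# preferred by an uncertainty-based active-selection criterion.
u_confident, _ = edl_uncertainty(np.array([9.0, 0.0, 0.0]))
u_ambiguous, _ = edl_uncertainty(np.array([0.1, 0.1, 0.1]))
```

Under this formulation, samples whose predictions carry little total evidence (high $u$) are the informative ones worth spending the annotation budget on.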