Paper Title

Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation

Authors

Luyu Yang, Yan Wang, Mingfei Gao, Abhinav Shrivastava, Kilian Q. Weinberger, Wei-Lun Chao, Ser-Nam Lim

Abstract

Semi-supervised domain adaptation (SSDA) aims to adapt models trained on a labeled source domain to a different but related target domain, from which unlabeled data and a small set of labeled data are provided. Current methods that treat source and target supervision without distinction overlook their inherent discrepancy, resulting in a source-dominated model that has not effectively used the target supervision. In this paper, we argue that the labeled target data needs to be distinguished for effective SSDA, and propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains. By doing so, the two sub-tasks can better leverage the corresponding supervision and thus yield very different classifiers. To integrate the strengths of the two classifiers, we apply the well-established co-training framework, in which the two classifiers exchange their high-confidence predictions to iteratively "teach each other" so that both classifiers can excel in the target domain. We call our approach Deep Co-training with Task decomposition (DeCoTa). DeCoTa requires no adversarial training and is easy to implement. Moreover, DeCoTa is well-founded on the theoretical condition of when co-training would succeed. As a result, DeCoTa achieves state-of-the-art results on several SSDA datasets, outperforming the prior art by a notable 4% margin on DomainNet. Code is available at https://github.com/LoyoYang/DeCoTa.
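The co-training exchange described in the abstract can be sketched as follows. This is a minimal illustration of one pseudo-labeling round, not the authors' implementation: the function name, the `(label, confidence)` representation, and the 0.9 threshold are all assumptions for the sake of the example.

```python
# Illustrative sketch of one co-training pseudo-label exchange round.
# Names, data layout, and threshold are assumptions, not DeCoTa's actual code.

def cotrain_round(preds_f, preds_g, threshold=0.9):
    """Exchange high-confidence pseudo-labels between two classifiers.

    preds_f, preds_g: lists of (label, confidence) pairs over the same
    unlabeled pool, one entry per example.
    Returns (labels_for_g, labels_for_f): index -> label mappings, where
    each classifier receives the other's confident predictions as
    pseudo-labels for its next training step.
    """
    labels_for_g = {i: y for i, (y, c) in enumerate(preds_f) if c >= threshold}
    labels_for_f = {i: y for i, (y, c) in enumerate(preds_g) if c >= threshold}
    return labels_for_g, labels_for_f

# Example: f is confident on examples 0 and 2; g is confident on example 1.
preds_f = [(3, 0.95), (1, 0.60), (7, 0.92)]
preds_g = [(3, 0.50), (2, 0.97), (7, 0.40)]
to_g, to_f = cotrain_round(preds_f, preds_g)
# to_g == {0: 3, 2: 7}; to_f == {1: 2}
```

In the full method, the two classifiers come from the decomposed SSL and UDA sub-tasks, so their errors are unlikely to coincide, which is the condition under which such an exchange helps both models.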
