Paper Title
1st Place Solution to NeurIPS 2022 Challenge on Visual Domain Adaptation
Paper Authors
Paper Abstract
The Visual Domain Adaptation (VisDA) 2022 Challenge calls for an unsupervised domain adaptation model for semantic segmentation of industrial waste sorting. In this paper, we introduce the SIA_Adapt method, which combines several techniques for domain-adaptive models. The core of our method lies in the transferable representation obtained from large-scale pre-training. In this process, we choose a network architecture that differs from the state of the art in domain adaptation. After that, self-training with pseudo-labels helps the initial adaptation model fit the target domain better. Finally, a model soup scheme improves generalization performance in the target domain. Our method, SIA_Adapt, achieved 1st place in the VisDA 2022 challenge. The code is available at https://github.com/DaehanKim-Korea/VisDA2022_Winner_Solution.
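The self-training step mentioned in the abstract can be pictured with a minimal sketch: a source-trained segmentation model predicts per-pixel classes on unlabeled target images, and only confident pixels are kept as pseudo-labels for a further round of training. This is an illustration of the general pseudo-labeling idea, not the authors' released code; the loader format, confidence threshold, and ignore index 255 are assumptions.

```python
import torch

@torch.no_grad()
def generate_pseudo_labels(model, target_loader, device, threshold=0.9):
    """Run a source-trained segmentation model on unlabeled target images and
    keep only confident per-pixel predictions; uncertain pixels get the ignore
    index 255 so they do not contribute to the self-training loss."""
    model.eval()
    pseudo_labels = []
    for images in target_loader:                              # loader yields image batches
        probs = torch.softmax(model(images.to(device)), dim=1)  # logits -> (B, C, H, W) probabilities
        confidence, labels = probs.max(dim=1)                 # per-pixel class and its confidence
        labels[confidence < threshold] = 255                  # mask low-confidence pixels
        pseudo_labels.append(labels.cpu())
    return pseudo_labels
```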
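The model soup step can likewise be sketched as uniform weight averaging of several fine-tuned checkpoints of the same architecture. The function name, checkpoint paths, and the choice of a uniform (equal-weight) soup are assumptions for illustration, not the authors' exact recipe.

```python
import torch

def uniform_model_soup(checkpoint_paths):
    """Average the parameters of several fine-tuned checkpoints (uniform soup).

    Assumes each file stores a state_dict for the same architecture; integer
    buffers (e.g. BatchNorm counters) are taken from the first checkpoint."""
    states = [torch.load(p, map_location="cpu") for p in checkpoint_paths]
    soup = {}
    for key, first in states[0].items():
        if torch.is_floating_point(first):
            soup[key] = sum(s[key] for s in states) / len(states)  # element-wise mean
        else:
            soup[key] = first
    return soup

# Usage (hypothetical paths): load the averaged weights before evaluation.
# model.load_state_dict(uniform_model_soup(["ckpt_a.pth", "ckpt_b.pth", "ckpt_c.pth"]))
```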