Paper Title

On Fine-Tuned Deep Features for Unsupervised Domain Adaptation

Paper Authors

Qian Wang, Toby P. Breckon

Paper Abstract

Prior feature transformation based approaches to Unsupervised Domain Adaptation (UDA) employ deep features extracted by pre-trained deep models without fine-tuning them on the source or target domain data of the specific domain adaptation task. In contrast, end-to-end learning based approaches optimise the pre-trained backbones and the customised adaptation modules simultaneously to learn domain-invariant features for UDA. In this work, we explore the potential of combining fine-tuned features with feature transformation based UDA methods for improved domain adaptation performance. Specifically, we integrate the prevalent progressive pseudo-labelling technique into the fine-tuning framework to extract fine-tuned features, which are subsequently used in SPL (Selective Pseudo-Labeling), a state-of-the-art feature transformation based domain adaptation method. Thorough experiments with multiple deep models, including ResNet-50/101 and DeiT-small/base, demonstrate that the combination of fine-tuned features and SPL achieves state-of-the-art performance on several benchmark datasets.
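
The abstract only sketches the two-stage pipeline, so below is a minimal, hedged sketch of what it describes: progressive pseudo-labelling during fine-tuning, followed by extracting the fine-tuned features for a downstream feature transformation method such as SPL. All function names, hyper-parameters (number of rounds, learning rate, batch size), and the confidence-based selection rule are illustrative assumptions, not the authors' released implementation.

```python
# A hedged sketch of progressive pseudo-labelling fine-tuning, assuming a
# PyTorch classifier `model` and DataLoaders over source (labelled) and
# target (unlabelled) domains. Hyper-parameters are illustrative only.
import torch
import torch.nn.functional as F

def progressive_pseudo_label_finetune(model, src_loader, tgt_loader,
                                      num_rounds=5, epochs_per_round=1,
                                      device="cpu"):
    """Fine-tune `model` on labelled source data plus a progressively
    growing set of confidently pseudo-labelled target samples."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for r in range(1, num_rounds + 1):
        # Pseudo-label all target samples and keep the top r/num_rounds
        # fraction, ranked by softmax confidence (an assumed selection rule).
        model.eval()
        xs, confs, pseudos = [], [], []
        with torch.no_grad():
            for x_t, _ in tgt_loader:  # target labels are never used
                probs = model(x_t.to(device)).softmax(dim=1)
                conf, pseudo = probs.max(dim=1)
                xs.append(x_t)
                confs.append(conf.cpu())
                pseudos.append(pseudo.cpu())
        x_all, conf_all, y_all = torch.cat(xs), torch.cat(confs), torch.cat(pseudos)
        k = max(1, int(len(x_all) * r / num_rounds))
        keep = conf_all.topk(k).indices
        pseudo_loader = torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(x_all[keep], y_all[keep]),
            batch_size=32, shuffle=True)

        # Supervised fine-tuning on source labels plus the selected
        # pseudo-labelled target samples.
        model.train()
        for _ in range(epochs_per_round):
            for loader in (src_loader, pseudo_loader):
                for x, y in loader:
                    loss = F.cross_entropy(model(x.to(device)), y.to(device))
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
    return model

def extract_features(backbone, loader, device="cpu"):
    """Extract features from the fine-tuned backbone (classifier head
    removed) to feed into a feature transformation UDA method such as SPL."""
    backbone.eval()
    with torch.no_grad():
        return torch.cat([backbone(x.to(device)).cpu() for x, _ in loader])
```

In practice the selection is often class-wise (keeping the most confident samples per class) in progressive pseudo-labelling schemes; the abstract does not specify this detail, so the global top-k rule above is only one plausible instantiation.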
