Paper Title

Source Free Domain Adaptation with Image Translation

Authors

Yunzhong Hou, Liang Zheng

Abstract

Efforts to release large-scale datasets may be compromised by privacy and intellectual property considerations. A feasible alternative is to release pre-trained models instead. While these models are strong on their original task (source domain), their performance might degrade significantly when deployed directly in a new environment (target domain), which might not contain labels for training under realistic settings. Domain adaptation (DA) is a known solution to the domain gap problem, but it usually requires labeled source data. In this paper, we study the problem of source free domain adaptation (SFDA), whose distinctive feature is that the source domain provides only a pre-trained model, but no source data. Being source free adds significant challenges to DA, especially considering that the target dataset is unlabeled. To solve the SFDA problem, we propose an image translation approach that transfers the style of target images to that of the unseen source images. To this end, we align the batch-wise feature statistics of the generated images with those stored in the batch normalization layers of the pre-trained model. Compared with directly classifying target images, higher accuracy is obtained with these style-transferred images using the pre-trained model. On several image classification datasets, we show that the above-mentioned improvements are consistent and statistically significant.
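The core objective described above, matching the batch statistics of generated images to the running statistics stored in the pre-trained model's batch normalization layers, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function name, the squared-distance form of the loss, and the flattened `(N, C)` feature layout are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def bn_stat_alignment_loss(features, stored_mean, stored_var):
    """Illustrative alignment loss between the batch-wise statistics of
    generated-image features and the channel-wise mean/variance stored
    in one batch normalization layer of a pre-trained model.

    features:    (N, C) array of activations at that layer.
    stored_mean: (C,) running mean saved in the BN layer.
    stored_var:  (C,) running variance saved in the BN layer.
    """
    batch_mean = features.mean(axis=0)
    batch_var = features.var(axis=0)
    # Squared distance between batch statistics and stored statistics;
    # minimizing this over the translated images pulls their feature
    # distribution toward the (unseen) source distribution.
    return float(np.sum((batch_mean - stored_mean) ** 2)
                 + np.sum((batch_var - stored_var) ** 2))

# Toy check: a batch whose statistics already match the stored ones
# incurs zero loss; a mismatched batch incurs a positive loss.
rng = np.random.default_rng(0)
feats = rng.normal(loc=1.0, scale=2.0, size=(1000, 4))
loss_matched = bn_stat_alignment_loss(feats, feats.mean(axis=0), feats.var(axis=0))
loss_mismatched = bn_stat_alignment_loss(feats, np.zeros(4), np.ones(4))
```

In the actual method this quantity would be summed over all BN layers and back-propagated into the image translation network, with the pre-trained classifier kept frozen.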
