Paper Title

Deep Domain-Adversarial Image Generation for Domain Generalisation

Authors

Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, Tao Xiang

Abstract

Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components, namely a label classifier, a domain classifier and a domain transformation network (DoTNet). The goal for DoTNet is to map the source training data to unseen domains. This is achieved by having a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen domain data, we can make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.
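The adversarial objective described in the abstract can be made concrete. Below is a minimal PyTorch sketch of one DDAIG-style training step, assuming DoTNet produces an additive perturbation scaled by a weight lam; the module names (label_clf, domain_clf, dotnet), the perturbation form, and the hyperparameter value are illustrative assumptions based on the abstract, not the authors' exact implementation.

```python
import torch.nn.functional as F

def ddaig_step(x, y, d, label_clf, domain_clf, dotnet, lam=0.3):
    """Sketch of one DDAIG-style training step (hypothetical interface).

    x: batch of source images; y: class labels; d: domain labels.
    label_clf, domain_clf, dotnet: nn.Module instances defined elsewhere.
    Returns the three losses, one per component.
    """
    # DoTNet maps source data towards an "unseen" domain; modeled here as
    # an additive perturbation of the input (an assumption on our part).
    x_new = x + lam * dotnet(x)

    # DoTNet objective: generated data should keep its class label
    # (correctly classified by the label classifier) while fooling the
    # domain classifier, hence the negated domain loss.
    loss_dotnet = (F.cross_entropy(label_clf(x_new), y)
                   - F.cross_entropy(domain_clf(x_new), d))

    # Label classifier objective: train on the source data augmented with
    # the generated unseen-domain data (detached so gradients stay out of
    # DoTNet on this term).
    x_aug = x + lam * dotnet(x).detach()
    loss_label = (F.cross_entropy(label_clf(x), y)
                  + F.cross_entropy(label_clf(x_aug), y))

    # Domain classifier objective: recognise the known source domains.
    loss_domain = F.cross_entropy(domain_clf(x), d)

    return loss_dotnet, loss_label, loss_domain
```

In this reading, the three losses would be minimised by three separate optimisers in alternation, which is a common pattern for adversarial training; the key design point from the abstract is that DoTNet is rewarded for leaving class identity intact while pushing the data off the source domains.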
