Title
Trade-off between reconstruction loss and feature alignment for domain generalization
Authors
Abstract
Domain generalization (DG) is a branch of transfer learning that aims to train learning models on several seen domains and subsequently apply these pre-trained models to other unseen (unknown but related) domains. To deal with the challenging setting in DG where neither the data nor the labels of the unseen domains are available at training time, the most common approach is to design classifiers based on domain-invariant representation features, i.e., latent representations that are unchanged and transferable between domains. Contrary to popular belief, we show that designing classifiers based on invariant representation features alone is necessary but not sufficient in DG. Our analysis indicates the necessity of imposing a constraint on the reconstruction loss induced by the representation function, so as to preserve most of the relevant information about the label in the latent space. More importantly, we point out a trade-off between minimizing the reconstruction loss and achieving domain alignment in DG. Our theoretical results motivate a new DG framework that jointly optimizes the reconstruction loss and the domain discrepancy. Both theoretical and numerical results are provided to justify our approach.
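The joint objective described in the abstract (classification loss plus a reconstruction term plus a domain-discrepancy term) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the weights `lambda_rec` and `lambda_align`, the linear-kernel MMD discrepancy, and all function names are assumptions chosen for concreteness.

```python
import numpy as np

def mmd_linear(z_src, z_tgt):
    """Linear-kernel MMD between two batches of latent features:
    squared distance between the batch means. A stand-in for any
    domain-discrepancy measure; the paper's choice may differ."""
    return float(np.sum((z_src.mean(axis=0) - z_tgt.mean(axis=0)) ** 2))

def joint_objective(cls_loss, x, x_hat, z_src, z_tgt,
                    lambda_rec=0.1, lambda_align=1.0):
    """Toy joint objective: classification loss + weighted reconstruction
    loss + weighted domain discrepancy (hypothetical weights)."""
    recon_loss = float(np.mean((x - x_hat) ** 2))  # reconstruction term
    align_loss = mmd_linear(z_src, z_tgt)          # domain-alignment term
    return cls_loss + lambda_rec * recon_loss + lambda_align * align_loss
```

The trade-off the abstract points out appears here directly: increasing `lambda_align` pushes the latent features of the domains together, while increasing `lambda_rec` preserves label-relevant information in the latent space; the two terms generally cannot be driven to zero simultaneously.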