Paper Title
Subsidiary Prototype Alignment for Universal Domain Adaptation
Paper Authors
Paper Abstract
Universal Domain Adaptation (UniDA) deals with the problem of knowledge transfer between two datasets with domain-shift as well as category-shift. The goal is to categorize unlabeled target samples either into one of the "known" categories or into a single "unknown" category. A major problem in UniDA is negative transfer, i.e., the misalignment of "known" and "unknown" classes. To this end, we first uncover an intriguing tradeoff between negative-transfer-risk and domain-invariance exhibited at different layers of a deep network. It turns out that we can strike a balance between these two metrics at a mid-level layer. Towards designing an effective framework based on this insight, we draw motivation from Bag-of-Visual-Words (BoW). Word-prototypes in a BoW-like representation of a mid-level layer would represent lower-level visual primitives that are likely to be unaffected by the category-shift in the high-level features. We develop modifications that encourage the learning of word-prototypes, followed by word-histogram-based classification. Following this, subsidiary prototype-space alignment (SPA) can be seen as a closed-set alignment problem, thereby avoiding negative transfer. We realize this with a novel word-histogram-related pretext task that enables closed-set SPA, operating in conjunction with the UniDA goal task. We demonstrate the efficacy of our approach on top of existing UniDA techniques, yielding state-of-the-art performance across three standard UniDA and Open-Set DA object recognition benchmarks.
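To make the BoW-like mid-level representation concrete, below is a minimal sketch of soft word-assignment and word-histogram pooling over a mid-level feature map. This is an illustration under stated assumptions, not the authors' implementation: the module name `WordHistogram`, the parameters `num_words` and `temperature`, and the use of cosine similarity with a softmax assignment are all hypothetical choices for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordHistogram(nn.Module):
    """Illustrative BoW-style pooling: softly assign mid-level spatial
    features to learnable word-prototypes, then average the assignments
    into a word-histogram. (Hypothetical sketch, not the paper's code.)"""

    def __init__(self, feat_dim: int, num_words: int, temperature: float = 0.1):
        super().__init__()
        # Learnable word-prototypes living in the mid-level feature space.
        self.prototypes = nn.Parameter(torch.randn(num_words, feat_dim))
        self.temperature = temperature

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) mid-level feature map from a backbone layer.
        B, C, H, W = feats.shape
        x = feats.permute(0, 2, 3, 1).reshape(B, H * W, C)    # (B, HW, C)
        x = F.normalize(x, dim=-1)
        protos = F.normalize(self.prototypes, dim=-1)         # (K, C)
        # Cosine similarity of each spatial location to each prototype.
        sim = x @ protos.t()                                  # (B, HW, K)
        # Soft assignment of each location to the word-prototypes.
        assign = F.softmax(sim / self.temperature, dim=-1)
        # Word-histogram: average the soft counts over spatial locations.
        return assign.mean(dim=1)                             # (B, K)

# Usage sketch: the histogram could feed a small classifier for the
# closed-set word-histogram pretext task described in the abstract.
hist_layer = WordHistogram(feat_dim=512, num_words=64)
feats = torch.randn(8, 512, 14, 14)   # e.g., a ResNet mid-level feature map
histograms = hist_layer(feats)        # (8, 64); each row sums to 1
```

Because every spatial location's assignment is a probability distribution over prototypes, the pooled histogram is itself normalized, which is what allows the subsidiary alignment to be treated as a closed-set problem over a fixed prototype vocabulary shared by both domains.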