Paper Title
Improving Transformation Invariance in Contrastive Representation Learning
Paper Authors
Abstract
We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a training objective for contrastive learning that uses a novel regularizer to control how the representation changes under transformation. We show that representations trained with this objective perform better on downstream tasks and are more robust to the introduction of nuisance transformations at test time. Second, we propose a change to how test time representations are generated by introducing a feature averaging approach that combines encodings from multiple transformations of the original input, finding that this leads to across the board performance gains. Finally, we introduce the novel Spirograph dataset to explore our ideas in the context of a differentiable generative process with multiple downstream tasks, showing that our techniques for learning invariance are highly beneficial.
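The test-time feature averaging described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder here is a hypothetical stand-in (a fixed random linear map), and the nuisance transformations are illustrative additive shifts. The core idea it shows is encoding several transformed views of one input and averaging the resulting representations.

```python
import numpy as np

def encode(x):
    # Hypothetical encoder standing in for a trained contrastive encoder:
    # a fixed random linear map followed by a nonlinearity.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((x.shape[-1], 8))
    return np.tanh(x @ W)

def feature_average(x, transforms, encoder=encode):
    """Test-time feature averaging: encode multiple transformed views
    of the original input and average their representations."""
    views = [t(x) for t in transforms]
    return np.mean([encoder(v) for v in views], axis=0)

# Illustrative nuisance transformations (assumed, not from the paper):
# small additive perturbations of the input.
transforms = [lambda x, s=s: x + 0.01 * s for s in range(4)]

x = np.ones(16)
z = feature_average(x, transforms)  # averaged 8-dim representation
```

The averaged representation `z` replaces the single-view encoding when evaluating downstream tasks; the abstract reports that this substitution yields consistent performance gains.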