Paper Title
Addressing Feature Suppression in Unsupervised Visual Representations
Paper Authors
Paper Abstract
Contrastive learning is one of the fastest-growing research areas in machine learning due to its ability to learn useful representations without labeled data. However, contrastive learning is susceptible to feature suppression, i.e., it may discard important information relevant to the task of interest and learn irrelevant features instead. Past work has addressed this limitation via handcrafted data augmentations that eliminate irrelevant information. This approach, however, does not work across all datasets and tasks. Further, data augmentations fail to address feature suppression in multi-attribute classification, where one attribute can suppress features relevant to other attributes. In this paper, we analyze the objective function of contrastive learning and formally prove that it is vulnerable to feature suppression. We then present predictive contrastive learning (PCL), a framework for learning unsupervised representations that are robust to feature suppression. The key idea is to force the learned representation to predict the input, thereby preventing it from discarding important information. Extensive experiments verify that PCL is robust to feature suppression and outperforms state-of-the-art contrastive learning methods on a variety of datasets and tasks.
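The key idea described in the abstract, pairing a contrastive objective with a term that forces the representation to predict the input, can be illustrated with a short sketch. The following is a minimal sketch only, assuming a simplified SimCLR-style InfoNCE loss and an MSE reconstruction term; the `encoder`, `decoder`, and weighting coefficient `lambda_rec` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Simplified InfoNCE loss between two batches of augmented views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (B, B) pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)

def pcl_style_loss(encoder, decoder, x1, x2, lambda_rec=1.0):
    """Contrastive loss plus an input-prediction (reconstruction) term,
    discouraging the encoder from discarding input information."""
    z1, z2 = encoder(x1), encoder(x2)                     # representations of two augmented views
    contrastive = info_nce(z1, z2)
    # Force the representation to predict the input; `decoder` is a
    # hypothetical module mapping representations back to input space.
    reconstruction = F.mse_loss(decoder(z1), x1) + F.mse_loss(decoder(z2), x2)
    return contrastive + lambda_rec * reconstruction
```

In this sketch, a larger `lambda_rec` weights information retention more heavily relative to the contrastive term; the paper's actual mechanism for predicting the input may differ.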