Paper Title
Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive Positive or Negative Data Augmentation
Paper Authors
Paper Abstract
Rotation is frequently listed as a candidate for data augmentation in contrastive learning but seldom provides satisfactory improvements. We argue that this is because the rotated image is always treated as either positive or negative. The semantics of an image can be rotation-invariant or rotation-variant, so whether the rotated image is treated as positive or negative should be determined based on the content of the image. Therefore, we propose a novel augmentation strategy, adaptive Positive or Negative Data Augmentation (PNDA), in which an original and its rotated image are a positive pair if they are semantically close and a negative pair if they are semantically different. To achieve PNDA, we first determine whether rotation is positive or negative on an image-by-image basis in an unsupervised way. Then, we apply PNDA to contrastive learning frameworks. Our experiments showed that PNDA improves the performance of contrastive learning. The code is available at \url{https://github.com/AtsuMiyai/rethinking_rotation}.
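The abstract does not spell out the training objective, but the core idea, routing each image's rotated view into either the positive or the negative set of a contrastive loss according to a per-image flag, can be sketched as a multi-positive InfoNCE variant. The sketch below is illustrative only: the function name, arguments, and SimCLR-style loss form are assumptions, not the authors' implementation, and the flag `rot_positive` stands in for the paper's unsupervised positive/negative decision.

```python
import torch
import torch.nn.functional as F

def pnda_info_nce(z1, z2, z_rot, rot_positive, tau=0.5):
    """Sketch of a PNDA-style InfoNCE loss.

    z1:           anchor embeddings, shape (n, d)
    z2:           embeddings of a standard augmented view, shape (n, d)
    z_rot:        embeddings of the rotated view, shape (n, d)
    rot_positive: bool tensor (n,) -- True if the rotated view of image i
                  should be treated as a positive (assumed precomputed
                  in an unsupervised way, per the abstract)
    """
    # Project embeddings onto the unit sphere, as in SimCLR.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z_rot = F.normalize(z_rot, dim=1)
    n = z1.size(0)

    sim_aug = z1 @ z2.t() / tau     # anchor vs. standard views, (n, n)
    sim_rot = z1 @ z_rot.t() / tau  # anchor vs. rotated views,  (n, n)

    losses = []
    for i in range(n):
        # The matching standard view is always a positive.
        pos = [sim_aug[i, i]]
        # Views of other images are always negatives.
        neg = [sim_aug[i, j] for j in range(n) if j != i]
        neg += [sim_rot[i, j] for j in range(n) if j != i]
        # The image's own rotated view is positive or negative per flag.
        if rot_positive[i]:
            pos.append(sim_rot[i, i])
        else:
            neg.append(sim_rot[i, i])
        # Multi-positive InfoNCE: average -log p over the positive terms.
        logits = torch.stack(pos + neg)
        log_denom = torch.logsumexp(logits, dim=0)
        losses.append(-(torch.stack(pos) - log_denom).mean())
    return torch.stack(losses).mean()
```

When `rot_positive` is all False, this reduces to treating rotation purely as a negative augmentation; when all True, rotation acts as an ordinary positive view, so the flag interpolates between the two fixed strategies the abstract argues against.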