Paper Title

Selective-Supervised Contrastive Learning with Noisy Labels

Paper Authors

Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu

Paper Abstract

Deep networks have strong capacities for embedding data into latent representations and completing downstream tasks. However, these capacities largely come from high-quality annotated labels, which are expensive to collect. Noisy labels are more affordable, but they result in corrupted representations and poor generalization performance. To learn robust representations and handle noisy labels, we propose selective-supervised contrastive learning (Sel-CL) in this paper. Specifically, Sel-CL extends supervised contrastive learning (Sup-CL), which is powerful in representation learning but degrades when noisy labels are present. Sel-CL tackles the direct cause of this problem in Sup-CL: because Sup-CL works in a \textit{pair-wise} manner, noisy pairs built from noisy labels mislead representation learning. To alleviate this issue, we select confident pairs out of noisy ones for Sup-CL without knowing the noise rates. In the selection process, we first identify confident examples by measuring the agreement between learned representations and given labels; these examples are then exploited to build confident pairs. Next, the representation similarity distribution over the built confident pairs is exploited to identify additional confident pairs among the noisy ones. All obtained confident pairs are finally used in Sup-CL to enhance representations. Experiments on multiple noisy datasets demonstrate the robustness of the representations learned by our method, along with state-of-the-art performance. Source code is available at https://github.com/ShikunLi/Sel-CL
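The two-stage selection described in the abstract can be sketched as follows. This is a minimal, illustrative reconstruction (not the authors' implementation): it assumes L2-normalized features, uses simple k-nearest-neighbor label agreement for the confident-example step, and a quantile of the confident-pair similarity distribution as the threshold for admitting further pairs; `select_confident_pairs`, `knn_k`, and `sim_quantile` are hypothetical names chosen here for illustration.

```python
import numpy as np

def select_confident_pairs(feats, labels, knn_k=2, sim_quantile=0.5):
    """Illustrative sketch of Sel-CL-style confident-pair selection.

    feats:  (N, D) L2-normalized representations
    labels: (N,) possibly noisy integer labels
    Returns a list of (i, j) index pairs deemed confident.
    """
    n = len(labels)
    sim = feats @ feats.T                    # cosine similarities
    np.fill_diagonal(sim, -np.inf)           # exclude self-matches

    # Step 1: confident examples -- the given label agrees with the
    # majority of the k nearest neighbors in representation space.
    nn_idx = np.argsort(-sim, axis=1)[:, :knn_k]
    agree = (labels[nn_idx] == labels[:, None]).mean(axis=1)
    confident = agree >= 0.5

    # Step 2: confident pairs -- confident examples sharing a label.
    conf_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
                  if confident[i] and confident[j] and labels[i] == labels[j]]

    # Step 3: use the similarity distribution of the confident pairs to
    # admit additional same-label pairs above a quantile threshold.
    extra = []
    if conf_pairs:
        thr = np.quantile([sim[i, j] for i, j in conf_pairs], sim_quantile)
        conf_set = set(conf_pairs)
        extra = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if (i, j) not in conf_set
                 and labels[i] == labels[j] and sim[i, j] > thr]
    return conf_pairs + extra
```

In the actual method, the selected pairs would then feed the Sup-CL loss in place of all label-induced pairs, so mislabeled examples contribute fewer misleading positives.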
