Paper Title

Learning with Partial Labels from Semi-supervised Perspective

Authors

Ximing Li, Yuanzhi Jiang, Changchun Li, Yiyuan Wang, Jihong Ouyang

Abstract

Partial Label (PL) learning refers to the task of learning from partially labeled data, where each training instance is ambiguously equipped with a set of candidate labels but only one is valid. Recent advances in the deep PL learning literature have shown that deep learning paradigms, e.g., self-training, contrastive learning, or class activation values, can achieve promising performance. Inspired by the impressive success of deep Semi-Supervised (SS) learning, we transform the PL learning problem into an SS learning problem and propose a novel PL learning method, namely Partial Label learning with Semi-supervised Perspective (PLSP). Specifically, we first form a pseudo-labeled dataset by selecting a small number of reliable pseudo-labeled instances with high-confidence prediction scores, and treat the remaining instances as pseudo-unlabeled ones. We then design an SS learning objective, consisting of a supervised loss for pseudo-labeled instances and a semantic consistency regularization for pseudo-unlabeled instances. We further introduce a complementary regularization for the non-candidate labels that constrains the model's predictions on them to be as small as possible. Empirical results demonstrate that PLSP significantly outperforms existing PL baseline methods, especially at high ambiguity levels. Code is available at https://github.com/changchunli/PLSP.
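
The abstract describes an objective built from three terms: a supervised loss on high-confidence pseudo-labeled instances, a semantic consistency regularization on the remaining pseudo-unlabeled instances, and a complementary regularization that suppresses predictions on non-candidate labels. The snippet below is a minimal PyTorch-style sketch of that composition, not the authors' reference implementation (see the repository linked above). The function name plsp_style_loss, the confidence threshold tau, the weak/strong augmentation pairing for the consistency term, and the specific form of the complementary term are illustrative assumptions.

```python
# Hedged sketch of an SS-style objective for partial-label data; all names and
# loss forms are assumptions for illustration, not the official PLSP code.
import torch
import torch.nn.functional as F

def plsp_style_loss(model, x_weak, x_strong, candidate_mask, tau=0.95,
                    lam_consist=1.0, lam_comp=1.0):
    """x_weak, x_strong: two augmented views of the same batch, shape (B, ...).
    candidate_mask: float tensor (B, C); 1 for candidate labels, 0 otherwise.
    tau: confidence threshold for selecting pseudo-labeled instances."""
    logits_w = model(x_weak)                     # predictions on the weak view
    logits_s = model(x_strong)                   # predictions on the strong view
    probs_w = F.softmax(logits_w, dim=1)

    # Restrict confidence to candidate labels, then pick high-confidence rows
    # as pseudo-labeled; the rest are treated as pseudo-unlabeled.
    cand_probs = probs_w * candidate_mask
    conf, pseudo_labels = cand_probs.max(dim=1)
    labeled = conf.ge(tau)

    # (1) Supervised loss on pseudo-labeled instances.
    if labeled.any():
        sup = F.cross_entropy(logits_s[labeled], pseudo_labels[labeled])
    else:
        sup = logits_s.new_zeros(())

    # (2) Semantic consistency regularization on pseudo-unlabeled instances:
    #     strong-view predictions should match detached weak-view predictions.
    unlabeled = ~labeled
    if unlabeled.any():
        log_probs_s = F.log_softmax(logits_s[unlabeled], dim=1)
        consist = F.kl_div(log_probs_s, probs_w[unlabeled].detach(),
                           reduction="batchmean")
    else:
        consist = logits_s.new_zeros(())

    # (3) Complementary regularization: push the predicted probability mass on
    #     non-candidate labels toward zero for every instance.
    probs_s = F.softmax(logits_s, dim=1)
    non_candidate_mass = (probs_s * (1.0 - candidate_mask)).sum(dim=1)
    comp = -torch.log(1.0 - non_candidate_mass + 1e-8).mean()

    return sup + lam_consist * consist + lam_comp * comp
```

The weak/strong pairing here follows a common FixMatch-style convention for consistency regularization; the paper's exact consistency and complementary terms may differ from this sketch.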
