Paper Title
ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization
Paper Authors
Paper Abstract
We present a novel semi-supervised learning framework, dubbed ConMatch, that intelligently leverages consistency regularization between the model's predictions from two strongly-augmented views of an image, weighted by the confidence of the pseudo-label. While the latest semi-supervised learning methods use weakly- and strongly-augmented views of an image to define a directional consistency loss, how to define such a direction for the consistency regularization between two strongly-augmented views remains unexplored. To account for this, we present novel confidence measures for pseudo-labels from strongly-augmented views, using the weakly-augmented view as an anchor, in both non-parametric and parametric approaches. In particular, in the parametric approach, we present, for the first time, a way to learn the confidence of a pseudo-label within the network, trained jointly with the backbone model in an end-to-end manner. In addition, we also present a stage-wise training scheme to boost the convergence of training. When incorporated into existing semi-supervised learners, ConMatch consistently boosts their performance. We conduct experiments to demonstrate the effectiveness of ConMatch over the latest methods and provide extensive ablation studies. Code has been made publicly available at https://github.com/JiwonCocoder/ConMatch.
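To make the mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch of a confidence-guided strong-to-strong consistency loss. It assumes a FixMatch-style setup in which `model` returns class logits and `weak`, `strong1`, `strong2` are differently augmented views of the same unlabeled batch; the agreement-based confidence measure, the threshold `tau`, and all names are illustrative assumptions rather than the authors' implementation (see the repository linked above for the actual code).

```python
# A minimal sketch of the idea in the abstract, NOT the authors' implementation:
# all names (model, weak, strong1, strong2, tau) and the particular confidence
# measure are illustrative assumptions.
import torch
import torch.nn.functional as F

def confidence_guided_consistency(model, weak, strong1, strong2, tau=0.95):
    """Confidence-weighted consistency between two strongly-augmented views,
    anchored on the weakly-augmented view (FixMatch-style pseudo-labeling)."""
    # 1) Pseudo-label and confidence from the weakly-augmented view.
    with torch.no_grad():
        probs_w = F.softmax(model(weak), dim=-1)
        conf_w, pseudo = probs_w.max(dim=-1)
        mask = (conf_w >= tau).float()          # keep only confident samples

    logits_s1, logits_s2 = model(strong1), model(strong2)
    logp_s1 = F.log_softmax(logits_s1, dim=-1)
    logp_s2 = F.log_softmax(logits_s2, dim=-1)
    probs_s1, probs_s2 = logp_s1.exp(), logp_s2.exp()

    # 2) A non-parametric confidence for each strong view's prediction:
    #    agreement with the weak-view anchor (one possible choice).
    conf_s1 = (probs_s1 * probs_w).sum(dim=-1).detach()
    conf_s2 = (probs_s2 * probs_w).sum(dim=-1).detach()

    # 3) Usual weak->strong pseudo-label losses.
    ce = F.cross_entropy(logits_s1, pseudo, reduction="none") \
       + F.cross_entropy(logits_s2, pseudo, reduction="none")
    loss_ws = (mask * ce).mean()

    # 4) Strong<->strong consistency: direct each view toward the other only
    #    when the other view's pseudo-label is more confident, weighting the
    #    term by that confidence.
    kl_12 = F.kl_div(logp_s1, probs_s2.detach(), reduction="none").sum(dim=-1)
    kl_21 = F.kl_div(logp_s2, probs_s1.detach(), reduction="none").sum(dim=-1)
    toward_2 = (conf_s2 >= conf_s1).float()
    loss_ss = (mask * (toward_2 * conf_s2 * kl_12
                       + (1.0 - toward_2) * conf_s1 * kl_21)).mean()

    return loss_ws + loss_ss
```

The parametric variant described in the abstract would replace the agreement-based scores `conf_s1` / `conf_s2` with the output of a small confidence head trained end-to-end with the backbone; the weighting scheme above stays the same.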