Paper Title
Self-supervised Co-training for Video Representation Learning
Paper Authors
Paper Abstract
The objective of this paper is visual-only self-supervised video representation learning. We make the following contributions: (i) we investigate the benefit of adding semantic-class positives to instance-based Information Noise Contrastive Estimation (InfoNCE) training, showing that this form of supervised contrastive learning leads to a clear improvement in performance; (ii) we propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss, exploiting the complementary information from different views, RGB streams and optical flow, of the same data source by using one view to obtain positive class samples for the other; (iii) we thoroughly evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval. In both cases, the proposed approach demonstrates state-of-the-art or comparable performance with other self-supervised approaches, whilst being significantly more efficient to train, i.e. requiring far less training data to achieve similar performance.
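To make contribution (ii) concrete, the following is a minimal sketch, assuming PyTorch, of an InfoNCE loss extended to multiple positives, where the positives for an RGB anchor are mined as its nearest neighbours in the optical-flow embedding space. This is not the authors' released implementation; the function name and all parameters (multi_positive_infonce, queue_rgb, queue_flow, topk, tau) are illustrative assumptions.

```python
# Minimal sketch of cross-view positive mining + multi-positive InfoNCE.
# Assumes two encoders have already produced embeddings for the same clips
# in two views (RGB and optical flow) plus memory banks of past embeddings.
import torch
import torch.nn.functional as F


def multi_positive_infonce(rgb_emb, flow_emb, queue_rgb, queue_flow, topk=5, tau=0.07):
    """Multi-positive InfoNCE where class-level positives for the RGB anchor
    are selected by similarity in the complementary (flow) view.

    rgb_emb:    (B, D) RGB embeddings of the current batch
    flow_emb:   (B, D) flow embeddings of the same clips
    queue_rgb:  (N, D) memory bank of RGB embeddings (candidate positives / negatives)
    queue_flow: (N, D) memory bank of flow embeddings aligned with queue_rgb
    """
    rgb_emb, flow_emb = F.normalize(rgb_emb, dim=1), F.normalize(flow_emb, dim=1)
    queue_rgb, queue_flow = F.normalize(queue_rgb, dim=1), F.normalize(queue_flow, dim=1)

    # Similarities against the memory bank: `logits` enters the loss,
    # `flow_sim` is used only to mine positives from the other view.
    logits = rgb_emb @ queue_rgb.t() / tau        # (B, N)
    flow_sim = flow_emb @ queue_flow.t()          # (B, N)

    # Treat the top-k most similar clips in the flow view as semantic positives.
    pos_idx = flow_sim.topk(topk, dim=1).indices  # (B, k)
    pos_mask = torch.zeros_like(logits).scatter_(1, pos_idx, 1.0).bool()

    # Multi-positive InfoNCE: -log( sum_{p in positives} exp(s_p) / sum_j exp(s_j) )
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_log_prob = torch.logsumexp(log_prob.masked_fill(~pos_mask, float("-inf")), dim=1)
    return -pos_log_prob.mean()
```

In a co-training setup along the lines described in the abstract, the roles of the two views would alternate: flow similarities mine positives for training the RGB representation, and RGB similarities mine positives for training the flow representation.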