Paper Title
3D-CSL: self-supervised 3D context similarity learning for Near-Duplicate Video Retrieval
Paper Authors
Paper Abstract
In this paper, we introduce 3D-CSL, a compact pipeline for Near-Duplicate Video Retrieval (NDVR), and explore a novel self-supervised learning strategy for video similarity learning. Most previous methods only extract spatial features from frames separately and then design various complex mechanisms to learn the temporal correlations among frame features. However, at that point part of the spatiotemporal dependencies has already been lost. To address this, our 3D-CSL extracts global spatiotemporal dependencies in videos end-to-end with a 3D transformer and finds a good balance between efficiency and effectiveness by matching at the clip level. Furthermore, we propose a two-stage self-supervised similarity learning strategy to optimize the entire network. First, we propose PredMAE to pretrain the 3D transformer with a video prediction task; second, ShotMix, a novel video-specific augmentation, and the FCS loss, a novel triplet loss, are proposed to further promote the similarity learning results. Experiments on FIVR-200K and CC_WEB_VIDEO demonstrate the superiority and reliability of our method, which achieves state-of-the-art performance on clip-level NDVR.
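The similarity learning objective described above is a triplet-style loss over clip embeddings. The abstract does not give the formulation of the FCS loss, so the following is only a minimal, hypothetical sketch using a standard triplet margin loss on cosine similarity between clip embeddings; the embedding dimension, batch size, and function names are illustrative assumptions, not the paper's implementation (in the actual pipeline the embeddings would come from the 3D transformer encoder over sampled clips).

    # Minimal sketch: clip-level similarity matching trained with a
    # standard triplet margin loss (a stand-in for the unspecified FCS loss).
    import torch
    import torch.nn.functional as F

    def clip_similarity(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        """Cosine similarity between two batches of clip embeddings [B, D]."""
        return F.cosine_similarity(a, b, dim=-1)

    def triplet_similarity_loss(anchor: torch.Tensor,
                                positive: torch.Tensor,
                                negative: torch.Tensor,
                                margin: float = 0.2) -> torch.Tensor:
        """Push sim(anchor, positive) above sim(anchor, negative) by a margin."""
        sim_pos = clip_similarity(anchor, positive)
        sim_neg = clip_similarity(anchor, negative)
        return F.relu(sim_neg - sim_pos + margin).mean()

    # Usage with random stand-in embeddings (placeholders for encoder output):
    anchor, positive, negative = (torch.randn(8, 768) for _ in range(3))
    loss = triplet_similarity_loss(anchor, positive, negative)

At retrieval time, the same clip_similarity score would rank candidate clips against a query clip, which is what makes clip-level matching cheaper than exhaustive frame-to-frame comparison.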