Paper Title
VLCap: Vision-Language with Contrastive Learning for Coherent Video Paragraph Captioning
Paper Authors
Paper Abstract
In this paper, we leverage the human perceiving process, which involves vision and language interaction, to generate a coherent paragraph description of untrimmed videos. We propose vision-language (VL) features consisting of two modalities, i.e., (i) a vision modality to capture the global visual content of the entire scene and (ii) a language modality to extract descriptions of scene elements, covering both human and non-human objects (e.g., animals, vehicles, etc.) as well as visual and non-visual elements (e.g., relations, activities, etc.). Furthermore, we propose to train VLCap under a contrastive learning VL loss. Experiments and ablation studies on the ActivityNet Captions and YouCookII datasets show that our VLCap outperforms existing SOTA methods on both accuracy and diversity metrics.
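The abstract does not spell out the exact form of the contrastive learning VL loss. As an illustration only, the sketch below shows a symmetric InfoNCE-style contrastive loss between per-clip vision and language features; the function name `contrastive_vl_loss`, the `temperature` value, and the assumption that matching vision-language pairs share the same batch index are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the paper's exact formulation): a symmetric InfoNCE-style
# contrastive loss that pulls together vision and language features of the same
# clip and pushes apart features from different clips, assuming both modalities
# are already projected to a shared embedding dimension.
import torch
import torch.nn.functional as F

def contrastive_vl_loss(vision_feats, language_feats, temperature=0.07):
    """vision_feats, language_feats: (batch, dim) per-clip features.
    Matching (vision, language) pairs share the same batch index."""
    v = F.normalize(vision_feats, dim=-1)
    l = F.normalize(language_feats, dim=-1)
    logits = v @ l.t() / temperature              # (batch, batch) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Symmetric cross-entropy over vision->language and language->vision directions.
    loss_v2l = F.cross_entropy(logits, targets)
    loss_l2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2l + loss_l2v)

# Example usage with random features for a batch of 8 clips.
if __name__ == "__main__":
    v = torch.randn(8, 512)
    t = torch.randn(8, 512)
    print(contrastive_vl_loss(v, t).item())
```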