Paper Title
Learning Fine-Grained Visual Understanding for Video Question Answering via Decoupling Spatial-Temporal Modeling
Paper Authors
Paper Abstract
While recent large-scale video-language pre-training made great progress in video question answering, the design of spatial modeling of video-language models is less fine-grained than that of image-language models; existing practices of temporal modeling also suffer from weak and noisy alignment between modalities. To learn fine-grained visual understanding, we decouple spatial-temporal modeling and propose a hybrid pipeline, Decoupled Spatial-Temporal Encoders, integrating an image- and a video-language encoder. The former encodes spatial semantics from larger but sparsely sampled frames independently of time, while the latter models temporal dynamics at lower spatial but higher temporal resolution. To help the video-language model learn temporal relations for video QA, we propose a novel pre-training objective, Temporal Referring Modeling, which requires the model to identify temporal positions of events in video sequences. Extensive experiments demonstrate that our model outperforms previous work pre-trained on orders of magnitude larger datasets.
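The abstract describes a hybrid pipeline in which an image-language encoder processes a few high-resolution frames for spatial semantics while a video-language encoder processes many low-resolution frames for temporal dynamics. The following is a minimal illustrative sketch of that decoupling, not the authors' implementation: the encoder internals, feature dimensions, frame counts, and the simple fusion layer are all placeholder assumptions.

```python
import torch
import torch.nn as nn

class DecoupledSpatialTemporalEncoders(nn.Module):
    """Hypothetical sketch: spatial and temporal branches are decoupled,
    then their features are fused for downstream video QA."""

    def __init__(self, dim=256):
        super().__init__()
        # Image-language branch (assumed): spatial semantics from sparse, high-res frames.
        self.image_language_encoder = nn.Linear(3 * 224 * 224, dim)
        # Video-language branch (assumed): temporal dynamics from dense, low-res frames.
        self.video_language_encoder = nn.GRU(3 * 64 * 64, dim, batch_first=True)
        # Placeholder fusion of the two branches.
        self.fusion = nn.Linear(2 * dim, dim)

    def forward(self, sparse_hires_frames, dense_lores_frames):
        # sparse_hires_frames: (B, T_sparse, 3*224*224), e.g. T_sparse = 4
        # dense_lores_frames:  (B, T_dense,  3*64*64),   e.g. T_dense = 32
        spatial = self.image_language_encoder(sparse_hires_frames).mean(dim=1)
        _, hidden = self.video_language_encoder(dense_lores_frames)
        temporal = hidden[-1]
        return self.fusion(torch.cat([spatial, temporal], dim=-1))

model = DecoupledSpatialTemporalEncoders()
out = model(torch.randn(2, 4, 3 * 224 * 224), torch.randn(2, 32, 3 * 64 * 64))
print(out.shape)  # torch.Size([2, 256])
```

Under this reading, the Temporal Referring Modeling objective would train the video-language branch by asking the model to identify when a described event occurs in the frame sequence; its exact formulation is given in the paper itself.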