Paper Title
Bypass Network for Semantics Driven Image Paragraph Captioning
Paper Authors
Abstract
Image paragraph captioning aims to describe a given image with a sequence of coherent sentences. Most existing methods model coherence through topic transition, dynamically inferring a topic vector from the preceding sentences. However, these methods still suffer from immediate or delayed repetition in generated paragraphs because (i) the entanglement of syntax and semantics distracts the topic vector from attending to pertinent visual regions, and (ii) there are few constraints or rewards for learning long-range transitions. In this paper, we propose a bypass network that separately models the semantics and the linguistic syntax of preceding sentences. Specifically, the proposed model consists of two main modules, i.e., a topic transition module and a sentence generation module. The former takes previous semantic vectors as queries and applies an attention mechanism to regional features to acquire the next topic vector, which reduces immediate repetition by eliminating linguistic interference. The latter decodes the topic vector and the preceding syntax state to produce the following sentence. To further reduce delayed repetition in generated paragraphs, we devise a replacement-based reward for REINFORCE training. Comprehensive experiments on the widely used benchmark demonstrate the superiority of the proposed model over the state of the art in coherence while maintaining high accuracy.
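The topic transition step described above (a semantic query attending over regional features to produce the next topic vector) can be sketched as follows. This is a minimal illustration assuming plain dot-product attention and arbitrary dimensions; the function name, feature sizes, and attention form are assumptions for exposition, not the paper's actual parameterization.

```python
import numpy as np

def topic_transition(prev_semantic, region_feats):
    """Hypothetical sketch: use the previous sentence's semantic vector
    as a query and attend over regional image features to obtain the
    next topic vector (a weighted sum of regions)."""
    # prev_semantic: (d,) query; region_feats: (n_regions, d)
    scores = region_feats @ prev_semantic            # (n_regions,) similarity
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    topic = weights @ region_feats                   # (d,) next topic vector
    return topic, weights

# Illustrative usage with random features (36 regions, 512-d, both assumed).
rng = np.random.default_rng(0)
regions = rng.normal(size=(36, 512))   # e.g. detected region features
semantic = rng.normal(size=512)        # semantic state of previous sentence
topic, attn = topic_transition(semantic, regions)
```

Because the query carries only semantics (syntax is routed to the sentence generator), the attention weights are not biased by surface wording of earlier sentences, which is the mechanism the abstract credits for reducing immediate repetition.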