Paper Title

Language-Driven Region Pointer Advancement for Controllable Image Captioning

Paper Authors

Lindh, Annika, Ross, Robert J., Kelleher, John D.

Paper Abstract

Controllable Image Captioning is a recent sub-field in the multi-modal task of Image Captioning wherein constraints are placed on which regions in an image should be described in the generated natural language caption. This puts a stronger focus on producing more detailed descriptions, and opens the door for more end-user control over results. A vital component of the Controllable Image Captioning architecture is the mechanism that decides the timing of attending to each region through the advancement of a region pointer. In this paper, we propose a novel method for predicting the timing of region pointer advancement by treating the advancement step as a natural part of the language structure via a NEXT-token, motivated by a strong correlation to the sentence structure in the training data. We find that our timing agrees with the ground-truth timing in the Flickr30k Entities test data with a precision of 86.55% and a recall of 97.92%. Our model implementing this technique improves the state-of-the-art on standard captioning metrics while additionally demonstrating a considerably larger effective vocabulary size.
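
To make the NEXT-token idea concrete, below is a minimal Python sketch (not the authors' implementation) of a greedy decoding loop in which emitting a special NEXT token advances the region pointer. The names NEXT, EOS, decode_caption, toy_step, and the region phrase dictionaries are all hypothetical stand-ins for illustration only.

# Minimal sketch: region pointer advancement driven by a NEXT token
# emitted as an ordinary vocabulary item during decoding.

NEXT = "<NEXT>"   # special token: "advance to the next region"
EOS = "<EOS>"     # end-of-sequence token

def decode_caption(step_fn, regions, max_len=30):
    """Greedy decoding loop: step_fn(region, prefix) -> next token.

    The region pointer starts at the first region and advances by one
    every time the model emits NEXT, so the timing of attending to each
    region is learned as part of the language structure rather than by
    a separate gating mechanism.
    """
    pointer = 0
    caption = []
    for _ in range(max_len):
        token = step_fn(regions[pointer], caption)
        if token == EOS:
            break
        if token == NEXT:
            pointer += 1
            if pointer >= len(regions):
                break   # all requested regions have been described
            continue
        caption.append(token)
    return caption

# Usage with two hypothetical regions and a trivial stand-in "model"
# that replays each region's phrase and then emits NEXT.
regions = [
    {"phrase": ["a", "man", "in", "a", "red", "shirt"]},
    {"phrase": ["holding", "a", "guitar"]},
]

state = {"i": 0}
def toy_step(region, prefix):
    phrase = region["phrase"]
    if state["i"] < len(phrase):
        token = phrase[state["i"]]
        state["i"] += 1
        return token
    state["i"] = 0          # reset per-region progress after emitting NEXT
    return NEXT

print(" ".join(decode_caption(toy_step, regions)))
# -> a man in a red shirt holding a guitar

In a trained captioner, step_fn would be a learned decoder conditioned on the currently pointed-to region's features; the control flow above only illustrates how a single special token can carry the advancement signal.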
