Paper Title

Multimodal Across Domains Gaze Target Detection

Paper Authors

Francesco Tonini, Cigdem Beyan, Elisa Ricci

Paper Abstract

This paper addresses the gaze target detection problem in single images captured from the third-person perspective. We present a multimodal deep architecture to infer where a person in a scene is looking. This spatial model is trained on the head image of the person-of-interest, the scene image, and a depth map representing rich contextual information. Unlike several prior works, our model does not require supervision of the gaze angles and does not rely on head orientation information or on the location of the eyes of the person-of-interest. Extensive experiments demonstrate the stronger performance of our method on multiple benchmark datasets. We also investigate several variations of our method obtained by altering the joint learning of the multimodal data; some of these variations outperform a few prior works as well. For the first time, we examine domain adaptation for gaze target detection, and we empower our multimodal network to effectively handle the domain gap across datasets. The code of the proposed method is available at https://github.com/francescotonini/multimodal-across-domains-gaze-target-detection.
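The abstract names the three input modalities (head crop, scene image, depth map) but not the network details, which live in the paper and the linked repository. As a loose illustration only, the following PyTorch sketch shows one way a three-branch multimodal model could map those inputs to a gaze-target heatmap. The ResNet-18 backbones, the concatenation-based fusion, and every name below are assumptions made for this sketch, not the authors' actual architecture.

```python
# A minimal sketch, NOT the authors' model: three per-modality encoders,
# channel-wise concatenation, and a small decoder producing a gaze heatmap.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def make_backbone(in_channels: int) -> nn.Sequential:
    """ResNet-18 trunk without avgpool/fc: 512-channel features at stride 32."""
    net = resnet18(weights=None)
    if in_channels != 3:
        # Depth maps are single-channel, so swap the stem conv accordingly.
        net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                              padding=3, bias=False)
    return nn.Sequential(*list(net.children())[:-2])


class MultimodalGazeNet(nn.Module):
    """Illustrative three-branch model: scene RGB, depth map, and head crop
    are encoded separately, fused, and decoded into a gaze-target heatmap."""

    def __init__(self):
        super().__init__()
        self.scene_enc = make_backbone(3)  # full scene image
        self.depth_enc = make_backbone(1)  # monocular depth map
        self.head_enc = make_backbone(3)   # crop of the person-of-interest's head
        self.fuse = nn.Conv2d(3 * 512, 256, kernel_size=1)
        # Upsample 7x7 features (for 224-px inputs) to a 56x56 heatmap.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=1),  # per-pixel gaze-target score
        )

    def forward(self, scene, depth, head):
        feats = torch.cat(
            [self.scene_enc(scene), self.depth_enc(depth), self.head_enc(head)],
            dim=1,
        )
        return self.decoder(self.fuse(feats))


if __name__ == "__main__":
    model = MultimodalGazeNet()
    scene = torch.randn(2, 3, 224, 224)  # scene RGB
    depth = torch.randn(2, 1, 224, 224)  # depth map, e.g. from a monocular estimator
    head = torch.randn(2, 3, 224, 224)   # head crop, resized to the scene input size
    print(model(scene, depth, head).shape)  # torch.Size([2, 1, 56, 56])
```

In this kind of setup, the argmax of the predicted heatmap gives the estimated gaze target; the paper's actual fusion strategy, supervision, and domain-adaptation components differ and can be found in the repository above.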
