Title
RAZE: Region Guided Self-Supervised Gaze Representation Learning
Authors
Abstract
Automatic eye gaze estimation is an important problem in vision-based assistive technology, with use cases in emerging areas such as augmented reality, virtual reality and human-computer interaction. Over the past few years, there has been increasing interest in unsupervised and self-supervised learning paradigms, as they overcome the requirement for large-scale annotated data. In this paper, we propose RAZE, a Region guided self-supervised gAZE representation learning framework which leverages non-annotated facial image data. RAZE learns gaze representations via auxiliary supervision, i.e. pseudo-gaze zone classification, where the objective is to classify the visual field into different gaze zones (i.e. left, right and center) by leveraging the relative position of the pupil centers. To this end, we automatically annotate pseudo gaze zone labels for 154K web-crawled images and learn feature representations via the `Ize-Net' framework. `Ize-Net' is a capsule-layer-based CNN architecture which can efficiently capture rich eye representations. The discriminative behaviour of the feature representation is evaluated on four benchmark datasets: CAVE, TabletGaze, MPII and RT-GENE. Additionally, we evaluate the generalizability of the proposed network on two other downstream tasks (i.e. driver gaze estimation and visual attention estimation), which demonstrates the effectiveness of the learnt eye gaze representation.
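To illustrate the pseudo-labeling idea described above, the sketch below assigns a gaze zone (left / right / center) from the pupil center's relative horizontal position within the eye region. This is a minimal illustration, not the paper's implementation: the landmark source, the `margin` threshold, and the zone naming convention are all assumptions introduced here.

```python
def pseudo_gaze_zone(pupil_x: float, eye_left_x: float, eye_right_x: float,
                     margin: float = 0.35) -> str:
    """Assign a coarse gaze-zone label from the relative pupil-center position.

    pupil_x, eye_left_x, eye_right_x are horizontal coordinates (e.g. from a
    facial-landmark detector); `margin` is an illustrative threshold, not a
    value taken from the paper.
    """
    # Normalize the pupil position within the eye region to the range [0, 1].
    ratio = (pupil_x - eye_left_x) / (eye_right_x - eye_left_x)
    if ratio < margin:
        return "left"
    if ratio > 1.0 - margin:
        return "right"
    return "center"

# Example: pupil near the inner corner, midway, and near the outer corner.
print(pseudo_gaze_zone(0.2, 0.0, 1.0))  # left
print(pseudo_gaze_zone(0.5, 0.0, 1.0))  # center
print(pseudo_gaze_zone(0.8, 0.0, 1.0))  # right
```

Labels produced this way are noisy by construction, which is why they serve only as auxiliary supervision for representation learning rather than as ground-truth gaze annotations.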