Paper Title

Quantified Facial Temporal-Expressiveness Dynamics for Affect Analysis

Paper Authors

Md Taufeeq Uddin, Shaun Canavan

Paper Abstract

The quantification of visual affect data (e.g. face images) is essential to build and monitor automated affect modeling systems efficiently. Considering this, this work proposes quantified facial Temporal-expressiveness Dynamics (TED) to quantify the expressiveness of human faces. The proposed algorithm leverages multimodal facial features by incorporating static and dynamic information to enable accurate measurements of facial expressiveness. We show that TED can be used for high-level tasks such as summarization of unstructured visual data, and expectation from and interpretation of automated affect recognition models. To evaluate the positive impact of using TED, a case study was conducted on spontaneous pain using the UNBC-McMaster spontaneous shoulder pain dataset. Experimental results show the efficacy of using TED for quantified affect analysis.
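
The abstract describes TED as combining static and dynamic facial information into a single expressiveness measure, but does not give the algorithm itself. The minimal Python sketch below is one illustrative reading of that idea, not the paper's method: it scores each frame of a facial feature sequence by blending per-frame deviation from a neutral baseline (static) with frame-to-frame change (dynamic). The function name ted_score, the weight alpha, and the median baseline are all assumptions for illustration.

```python
# A minimal, hypothetical sketch of the idea behind TED: blend static and
# dynamic facial-feature cues into one per-frame expressiveness score.
# Names (ted_score, alpha) and the median baseline are illustrative
# assumptions; this is not the paper's actual algorithm.
import numpy as np

def ted_score(features: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Score expressiveness for a (T, D) sequence of facial features,
    e.g. per-frame action-unit intensities or landmark displacements.
    alpha weights the static cue against the dynamic one."""
    # Static cue: each frame's deviation from a neutral baseline,
    # approximated here by the per-dimension median over the sequence.
    baseline = np.median(features, axis=0)
    static = np.linalg.norm(features - baseline, axis=1)

    # Dynamic cue: magnitude of frame-to-frame feature change,
    # zero-padded so the output length matches the input.
    deltas = np.linalg.norm(np.diff(features, axis=0), axis=1)
    dynamic = np.concatenate([[0.0], deltas])

    return alpha * static + (1.0 - alpha) * dynamic

# Usage: score 100 frames of 17 synthetic feature channels and keep the
# 5 most expressive frames as a crude summary of the sequence.
rng = np.random.default_rng(0)
scores = ted_score(rng.random((100, 17)))
summary_frames = np.argsort(scores)[-5:]
```

Ranking frames this way would serve the summarization use case the abstract mentions: the highest-scoring frames of an unstructured video could stand in for the full sequence.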
