Paper Title
Uncovering the Bias in Facial Expressions
Paper Authors
Paper Abstract
Over the past decades, the machine and deep learning community has celebrated great achievements in challenging tasks such as image classification. The deep architecture of artificial neural networks, together with the abundance of available data, makes it possible to describe highly complex relations. Yet it is still impossible to fully capture what a deep learning model has learned and to verify that it operates fairly and without creating bias, especially in critical tasks, for instance those arising in the medical field. One example of such a task is the detection of distinct facial expressions, called Action Units, in facial images. For this specific task, our research aims to provide transparency regarding bias, specifically in relation to gender and skin color. We train a neural network for Action Unit classification and analyze its performance quantitatively, based on its accuracy, and qualitatively, based on heatmaps. A structured review of our results indicates that we are able to detect bias. Even though we cannot conclude from our results that the lower classification performance stems solely from gender and skin color bias, these biases must be addressed, which is why we conclude with suggestions on how the detected bias can be avoided.
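As a rough illustration of the quantitative part of such a bias analysis (not the authors' code), the following minimal Python sketch compares Action Unit classification accuracy across demographic subgroups; the group labels, data layout, and function name are illustrative assumptions.

```python
# Minimal sketch: per-subgroup accuracy for a binary Action Unit classifier.
# Group names, data shapes, and the simulated classifier are assumptions for
# illustration only; they do not reproduce the paper's experiments.
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Return classification accuracy separately for each subgroup.

    y_true, y_pred: 1-D arrays of 0/1 Action Unit labels and predictions.
    groups: 1-D array of subgroup identifiers (e.g. gender or skin-color
            categories) aligned with the labels.
    """
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return results

if __name__ == "__main__":
    # Toy example: simulate a classifier that is less accurate on one
    # hypothetical subgroup, then report the resulting accuracy gap.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    groups = np.where(rng.random(1000) < 0.5, "group_a", "group_b")
    flip = (groups == "group_b") & (rng.random(1000) < 0.2)
    y_pred = np.where(flip, 1 - y_true, y_true)
    print(subgroup_accuracy(y_true, y_pred, groups))
```

A gap between the per-group accuracies, as in the toy output above, is the kind of quantitative signal the abstract refers to; the qualitative heatmap analysis would complement it by showing which image regions the model attends to for each group.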