Paper Title
AFNet-M: Adaptive Fusion Network with Masks for 2D+3D Facial Expression Recognition
Paper Authors
Paper Abstract
2D+3D facial expression recognition (FER) can effectively cope with illumination changes and pose variations by simultaneously merging 2D texture and more robust 3D depth information. Most deep learning-based approaches employ a simple fusion strategy that directly concatenates the multimodal features after the fully-connected layers, without considering the different degrees of significance of each modality. Meanwhile, focusing on both 2D and 3D local features in salient regions remains a great challenge. In this letter, we propose the adaptive fusion network with masks (AFNet-M) for 2D+3D FER. To enhance 2D and 3D local features, we take the masks annotating salient regions of the face as prior knowledge and design the mask attention module (MA), which can automatically learn two modulation vectors to adjust the feature maps. Moreover, we introduce a novel fusion strategy that performs adaptive fusion at the convolutional layers through the designed importance weights computing module (IWC). Experimental results demonstrate that our AFNet-M achieves state-of-the-art performance on the BU-3DFE and Bosphorus datasets while requiring fewer parameters than other models.
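The abstract does not spell out the internals of the MA and IWC modules, so the following is a minimal PyTorch sketch of the two ideas it describes: mask-driven modulation vectors adjusting feature maps, and importance-weighted fusion of 2D and 3D features at a convolutional layer. All names and design choices here (MaskAttention, ImportanceWeightsFusion, the scale/shift parameterization, the softmax-normalized scores) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MaskAttention(nn.Module):
    # Hypothetical sketch of the MA module: a mask of annotated salient
    # facial regions drives two learned modulation vectors (a per-channel
    # scale and a per-channel shift) that adjust the feature maps.
    def __init__(self, channels: int):
        super().__init__()
        self.fc_scale = nn.Linear(channels, channels)  # yields modulation vector 1
        self.fc_shift = nn.Linear(channels, channels)  # yields modulation vector 2

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); mask: (B, 1, H, W), 1 inside salient regions.
        pooled = (feat * mask).flatten(2).mean(dim=-1)   # (B, C) pooled salient features
        scale = torch.sigmoid(self.fc_scale(pooled))     # modulation vector 1
        shift = torch.tanh(self.fc_shift(pooled))        # modulation vector 2
        # Broadcast both vectors over the spatial dimensions.
        return feat * scale[..., None, None] + shift[..., None, None]


class ImportanceWeightsFusion(nn.Module):
    # Hypothetical sketch of the IWC idea: score each modality's feature
    # map and fuse the two with softmax-normalized importance weights.
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, feat_2d: torch.Tensor, feat_3d: torch.Tensor) -> torch.Tensor:
        s2 = self.score(feat_2d.flatten(2).mean(dim=-1))      # (B, 1) 2D score
        s3 = self.score(feat_3d.flatten(2).mean(dim=-1))      # (B, 1) 3D score
        w = torch.softmax(torch.cat([s2, s3], dim=1), dim=1)  # (B, 2) importance weights
        return w[:, 0, None, None, None] * feat_2d + w[:, 1, None, None, None] * feat_3d


# Usage on dummy 2D (texture) and 3D (depth) feature maps from one conv stage.
feat_2d, feat_3d = torch.randn(4, 64, 28, 28), torch.randn(4, 64, 28, 28)
mask = (torch.rand(4, 1, 28, 28) > 0.5).float()
ma_2d, ma_3d, iwc = MaskAttention(64), MaskAttention(64), ImportanceWeightsFusion(64)
fused = iwc(ma_2d(feat_2d, mask), ma_3d(feat_3d, mask))
print(fused.shape)  # torch.Size([4, 64, 28, 28])
```

Because the fusion happens on convolutional feature maps rather than after the fully-connected layers, a module like this could be applied at each conv stage, which is consistent with the adaptive per-layer fusion the abstract claims.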