Paper Title

Embedded Encoder-Decoder in Convolutional Networks Towards Explainable AI

Authors

Tavanaei, Amirhossein

Abstract

Understanding the intermediate layers of a deep learning model and discovering the driving features of stimuli have recently attracted much interest. Explainable artificial intelligence (XAI) provides a new way to open the AI black box and enable transparent, interpretable decisions. This paper proposes a new explainable convolutional neural network (XCNN) that represents the important, driving visual features of stimuli in an end-to-end model architecture. The network embeds an encoder-decoder neural network in a CNN architecture to represent the regions of interest in an image based on its category. The proposed model is trained without localization labels and generates a heatmap as part of the network architecture, without extra post-processing steps. Experimental results on the CIFAR-10, Tiny ImageNet, and MNIST datasets demonstrate that our algorithm (XCNN) succeeds in making CNNs explainable. Based on visual assessment, the proposed model outperforms current algorithms in class-specific feature representation and interpretable heatmap generation while providing a simple and flexible network architecture. The initial success of this approach warrants further study to enhance weakly supervised localization and semantic segmentation in explainable frameworks.
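The abstract describes a forward pass in which an embedded encoder-decoder produces a heatmap that gates the input before classification, so the explanation is generated inside the architecture rather than by post-processing. The sketch below illustrates only that wiring with toy numpy stand-ins (average-pool "encoder", nearest-neighbor "decoder", linear classifier); the real XCNN uses learned convolutional layers, and all shapes and functions here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder_decoder(x):
    # Toy encoder-decoder: 2x average pooling (encoder) followed by
    # nearest-neighbor upsampling (decoder), squashed to [0, 1] by a
    # sigmoid so it can act as a soft attention heatmap.
    h, w = x.shape
    pooled = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    upsampled = pooled.repeat(2, axis=0).repeat(2, axis=1)
    return 1.0 / (1.0 + np.exp(-upsampled))

def classifier(x, weights):
    # Stand-in for the CNN classification head: flatten + linear + softmax.
    logits = x.reshape(-1) @ weights
    e = np.exp(logits - logits.max())
    return e / e.sum()

image = rng.standard_normal((8, 8))
heatmap = encoder_decoder(image)   # produced inside the forward pass
masked = image * heatmap           # only highlighted regions reach the head
weights = rng.standard_normal((64, 10))  # hypothetical 10-class head
probs = classifier(masked, weights)
```

The key architectural point is that `heatmap` is a byproduct of an ordinary forward pass trained end-to-end with only class labels, which is what makes the explanation "free" at inference time.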
