Paper Title

Improving Explainability of Image Classification in Scenarios with Class Overlap: Application to COVID-19 and Pneumonia

Paper Authors

Edward Verenich, Alvaro Velasquez, Nazar Khan, Faraz Hussain

Abstract

Trust in predictions made by machine learning models is increased if the model generalizes well on previously unseen samples and when inference is accompanied by cogent explanations of the reasoning behind predictions. In the image classification domain, generalization can be assessed through accuracy, sensitivity, and specificity. Explainability can be assessed by how well the model localizes the object of interest within an image. However, both generalization and explainability through localization are degraded in scenarios with significant overlap between classes. We propose a method based on binary expert networks that enhances the explainability of image classifications through better localization by mitigating the model uncertainty induced by class overlap. Our technique performs discriminative localization on images that contain features with significant class overlap, without explicitly training for localization. Our method is particularly promising in real-world class overlap scenarios, such as COVID-19 and pneumonia, where expertly labeled data for localization is not readily available. This can be useful for early, rapid, and trustworthy screening for COVID-19.
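"Discriminative localization without explicitly training for localization" is commonly done by weighting a network's final convolutional feature maps with the classifier weights of the predicted class, as in class activation mapping (CAM). The abstract does not give the authors' implementation, so the sketch below is only an illustration of that general idea, with made-up names, shapes, and random data standing in for a trained binary expert network:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM-style heatmap: weighted sum of the final conv feature maps.

    feature_maps: (C, H, W) activations from the last conv layer.
    fc_weights:   (num_classes, C) weights of the final linear classifier.
    Returns an (H, W) map normalized to [0, 1], highlighting the image
    regions most responsible for the class_idx prediction.
    """
    # Sum feature maps weighted by the chosen class's classifier weights.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    # Min-max normalize for visualization as a heatmap overlay.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy stand-in values (not real model outputs): 4 channels on an 8x8 grid,
# and a 2-way head such as a binary "COVID-19 vs. pneumonia" expert.
rng = np.random.default_rng(0)
features = rng.random((4, 8, 8))
weights = rng.random((2, 4))
heatmap = class_activation_map(features, weights, class_idx=1)
```

In practice the resulting heatmap would be upsampled to the input image size and overlaid on the radiograph, so a reader can check that the model attends to clinically plausible regions rather than background artifacts.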
