Paper Title


Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading

Authors

Yupeng Cheng, Qing Guo, Felix Juefei-Xu, Huazhu Fu, Shang-Wei Lin, Weisi Lin

Abstract


Diabetic Retinopathy (DR) is a leading cause of vision loss around the world. To help diagnose it, numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically grade DR via retinal fundus images (RFIs). However, RFIs are commonly affected by camera exposure issues that may lead to incorrect grades. The mis-graded results can potentially pose high risks to an aggravation of the condition. In this paper, we study this problem from the viewpoint of adversarial attacks. We identify and introduce a novel solution to an entirely new task, termed adversarial exposure attack, which is able to produce natural-looking exposure-shifted images and mislead state-of-the-art DNNs. We validate our proposed method on a real-world public DR dataset with three DNNs, i.e., ResNet50, MobileNet, and EfficientNet, demonstrating that our method achieves high image quality and high transfer-attack success rates. Our method reveals the potential threats to DNN-based automatic DR grading and would benefit the development of exposure-robust DR grading methods in the future.
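The core idea of an exposure attack is that, instead of adding an unconstrained pixel-wise perturbation, the attacker only adjusts the image's exposure so the result still looks like a naturally captured photo. The sketch below is a heavily simplified toy illustration of this idea, not the paper's actual method: it optimizes a single global exposure factor (scaling the image by `2**e`) via finite-difference gradient descent to reduce a toy linear classifier's confidence in its predicted class. All names (`toy_classifier`, `exposure_attack`) and the choice of a linear model are assumptions made for this sketch.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def toy_classifier(img, W, b):
    """Stand-in for a DR-grading DNN: a linear model over flattened pixels."""
    return softmax(W @ img.ravel() + b)

def exposure_attack(img, W, b, target_label, steps=200, lr=0.05):
    """Optimize one global exposure parameter e (image scaled by 2**e)
    to lower the classifier's confidence in target_label, while keeping
    e inside a natural-looking range of [-2, 2] stops."""
    e = 0.0
    eps = 1e-3  # finite-difference step

    def conf(ev):
        adjusted = np.clip(img * (2.0 ** ev), 0.0, 1.0)
        return toy_classifier(adjusted, W, b)[target_label]

    for _ in range(steps):
        # central-difference estimate of d conf / d e
        g = (conf(e + eps) - conf(e - eps)) / (2.0 * eps)
        e -= lr * g                       # descend on target-class confidence
        e = float(np.clip(e, -2.0, 2.0))  # constrain to a plausible exposure
    return e, np.clip(img * (2.0 ** e), 0.0, 1.0)
```

Because the only degree of freedom is an exposure shift, the adversarial image is a brighter or darker version of the original rather than a noisy one; the real attack in the paper operates in a richer exposure space and transfers across architectures, which this one-parameter toy does not attempt to show.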
