Paper Title
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems
Paper Authors
Paper Abstract
The increasing availability of healthcare data requires accurate analysis of disease diagnosis, progression, and real-time monitoring to provide improved treatments to patients. In this context, Machine Learning (ML) models are used to extract valuable features and insights from high-dimensional and heterogeneous healthcare data to detect different diseases and patient activities in a Smart Healthcare System (SHS). However, recent research shows that ML models used in different application domains are vulnerable to adversarial attacks. In this paper, we introduce a new type of adversarial attack that exploits the ML classifiers used in an SHS. We consider an adversary who has partial knowledge of the data distribution, the SHS model, and the ML algorithm, and who performs both targeted and untargeted attacks. Employing these adversarial capabilities, we manipulate medical device readings to alter the patient status (disease-affected, normal condition, activities, etc.) reported by the SHS. Our attack utilizes five different adversarial ML algorithms (HopSkipJump, Fast Gradient Method, Crafting Decision Tree, Carlini & Wagner, Zeroth Order Optimization) to perform different malicious activities (e.g., data poisoning, output misclassification) on an SHS. Moreover, based on the adversary's capabilities in the training and testing phases, we perform both white-box and black-box attacks on an SHS. We evaluate the performance of our approach in different SHS settings and with different medical devices. Our extensive evaluation shows that the proposed adversarial attacks can significantly degrade the performance of an ML-based SHS in correctly detecting patients' diseases and normal activities, which ultimately leads to erroneous treatment.
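To illustrate one of the attack primitives named in the abstract, the Fast Gradient Method can be sketched against a toy logistic-regression classifier of sensor readings. This is a minimal, self-contained illustration only, not the paper's actual SHS models: the weights, the "medical device reading," and the perturbation budget below are all hypothetical values chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgm_attack(x, y, w, b, eps):
    """Untargeted Fast Gradient Method against a logistic-regression
    classifier: step x in the direction that increases the loss.
    For binary cross-entropy, d(loss)/dx = (sigmoid(w.x + b) - y) * w."""
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad)

# Hypothetical classifier of a "medical device reading" (illustrative only)
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.4, 0.1])   # benign reading, classified as class 1
y = 1.0                          # true label

x_adv = fgm_attack(x, y, w, b, eps=0.6)
# The perturbed reading x_adv crosses the decision boundary, so the
# classifier now reports the wrong patient status for this sample.
```

The same sign-of-gradient step generalizes to deeper models; libraries such as the Adversarial Robustness Toolbox implement FGM, HopSkipJump, Carlini & Wagner, and Zeroth Order Optimization behind a common attack interface, differing mainly in how much access to the model (gradients vs. query-only) each attack assumes.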