Paper Title
New data poison attacks on machine learning classifiers for mobile exfiltration
Paper Authors
Paper Abstract
Recent studies have revealed several attack vectors with the potential to jeopardize the integrity of machine learning models, opening a new window of opportunity in terms of cyber-security. The main interest of this paper is directed towards data poisoning attacks involving label-flipping. These attacks occur during the training phase, the aim of the attacker being to compromise the integrity of the targeted machine learning model by drastically reducing its overall accuracy and/or achieving the misclassification of determined samples. This paper proposes two new kinds of data poisoning attacks based on label-flipping; the target of the attacks is a variety of machine learning classifiers dedicated to malware detection using mobile exfiltration data. The proposed attacks are shown to be model-agnostic, having successfully corrupted a wide variety of machine learning models; Logistic Regression, Decision Tree, Random Forest and KNN are some examples. The first attack performs label-flipping actions randomly, while the second attack flips the labels of only one of the two classes. The effects of each attack are analyzed in further detail, with special emphasis on the accuracy drop and the misclassification rate. Finally, this paper suggests further research directions by proposing the development of a defense technique that could provide feasible detection and/or mitigation mechanisms; such a technique should be capable of conferring a certain level of robustness to a target model against potential attackers.
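To make the two attacks described in the abstract concrete, the sketch below shows how random and single-class label-flipping could be applied to a binary-labeled training set before fitting any of the classifiers mentioned (e.g. Logistic Regression or KNN). This is a minimal illustration under assumptions, not the paper's implementation: binary 0/1 labels are assumed, and the function names (`random_label_flip`, `targeted_label_flip`) and the `flip_rate` parameter are hypothetical.

```python
import numpy as np

def random_label_flip(y, flip_rate, seed=None):
    """Attack 1 (assumed form): invert the labels of a random fraction
    of training samples, regardless of their class."""
    rng = np.random.default_rng(seed)
    y_poisoned = np.asarray(y).copy()
    n_flips = int(flip_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # 0 -> 1, 1 -> 0
    return y_poisoned

def targeted_label_flip(y, source_class, flip_rate, seed=None):
    """Attack 2 (assumed form): invert labels only for samples belonging
    to one chosen class (e.g. flip a fraction of malware samples to benign)."""
    rng = np.random.default_rng(seed)
    y_poisoned = np.asarray(y).copy()
    candidates = np.flatnonzero(y_poisoned == source_class)
    n_flips = int(flip_rate * len(candidates))
    idx = rng.choice(candidates, size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned
```

As a usage sketch, one could train the same scikit-learn classifier once on the clean labels and once on `random_label_flip(y_train, 0.2)` (or `targeted_label_flip(y_train, source_class=1, flip_rate=0.2)`), then compare test accuracy and the per-class misclassification rate to quantify the accuracy drop the abstract refers to; the 0.2 flip rate here is purely illustrative.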