Paper Title
StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection
Paper Authors
Paper Abstract
Over the years, most research towards defenses against adversarial attacks on machine learning models has been in the image recognition domain. The ML-based malware detection domain has received less attention despite its importance. Moreover, most work exploring these defenses has focused on several methods but with no strategy when applying them. In this paper, we introduce StratDef, which is a strategic defense system based on a moving target defense approach. We overcome challenges related to the systematic construction, selection, and strategic use of models to maximize adversarial robustness. StratDef dynamically and strategically chooses the best models to increase the uncertainty for the attacker while minimizing critical aspects in the adversarial ML domain, like attack transferability. We provide the first comprehensive evaluation of defenses against adversarial attacks on machine learning for malware detection, where our threat model explores different levels of threat, attacker knowledge, capabilities, and attack intensities. We show that StratDef performs better than other defenses even when facing the peak adversarial threat. We also show that, of the existing defenses, only a few adversarially-trained models provide substantially better protection than just using vanilla models but are still outperformed by StratDef.
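To make the moving-target idea described in the abstract concrete, the sketch below shows one way a defender could keep a pool of diverse malware classifiers and strategically sample which one answers each query, so an attacker cannot reliably craft adversarial examples against a single fixed model. This is a minimal illustrative sketch under assumed details: the model choices, strategy weights, and synthetic data are hypothetical and are not the authors' implementation of StratDef.

```python
# Minimal sketch of a moving-target style defense: strategically sample
# which model from a heterogeneous pool answers each prediction query.
# All concrete choices below (models, weights, data) are illustrative
# assumptions, not the StratDef paper's actual system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for a binary (benign vs. malware) feature dataset.
X, y = make_classification(n_samples=2000, n_features=50, random_state=0)

# A heterogeneous pool of candidate models; diversity is what limits
# attack transferability across the pool.
pool = [
    RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y),
    LogisticRegression(max_iter=1000).fit(X, y),
    DecisionTreeClassifier(max_depth=10, random_state=0).fit(X, y),
]

# Hypothetical strategy: a probability distribution over the pool,
# e.g. weighted toward models judged most robust in offline evaluation.
strategy = np.array([0.5, 0.3, 0.2])

def predict(x):
    """Answer a single query with a strategically sampled model."""
    model = pool[rng.choice(len(pool), p=strategy)]
    return model.predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```

In this toy setup the per-query sampling is what increases the attacker's uncertainty: an adversarial example optimized against any single model in the pool is not guaranteed to fool the model that actually handles the query.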