Paper Title

Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs

Paper Authors

Jiarong Xu, Yizhou Sun, Xin Jiang, Yanhao Wang, Yang Yang, Chunping Wang, Jiangang Lu

Paper Abstract

Adversarial attacks on graphs have attracted considerable research interest. Existing works assume the attacker is either (partly) aware of the victim model or able to send queries to it. These assumptions are, however, unrealistic. To bridge the gap between theoretical graph attacks and real-world scenarios, in this work we propose a novel and more realistic setting: the strict black-box graph attack, in which the attacker has no knowledge about the victim model at all and is not allowed to send any queries. To design such an attack strategy, we first propose a generic graph filter to unify different families of graph-based models. The strength of an attack can then be quantified by the change in the graph filter before and after the attack. By maximizing this change, we are able to find an effective attack strategy regardless of the underlying model. To solve this optimization problem, we also propose a relaxation technique and approximation theories to reduce the difficulty as well as the computational expense. Experiments demonstrate that, even with no exposure to the model, the Macro-F1 drops by 6.4% in node classification and 29.5% in graph classification, which is a significant result compared with existing works.
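To make the abstract's core idea concrete, the sketch below illustrates what "maximizing the change in the graph filter" could look like in the simplest case. It is a minimal illustration under our own assumptions, not the authors' formulation: the symmetrically normalized adjacency matrix stands in for the generic graph filter, the spectral norm of the filter difference stands in for the attack strength, and a brute-force greedy edge flip replaces the paper's relaxation and approximation techniques.

```python
import numpy as np

def normalized_filter(A):
    """Symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2};
    used here only as a stand-in for a generic graph filter (assumption)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def filter_change(A, A_pert):
    """Attack-strength proxy: spectral norm of the filter difference."""
    return np.linalg.norm(normalized_filter(A_pert) - normalized_filter(A), ord=2)

def greedy_flip_attack(A, budget):
    """Greedily toggle the edge whose flip maximizes the filter change.
    Brute-force illustration only; the paper instead relaxes and
    approximates this combinatorial problem to keep it tractable."""
    A_pert = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best, best_gain = None, -1.0
        for i in range(n):
            for j in range(i + 1, n):
                cand = A_pert.copy()
                cand[i, j] = cand[j, i] = 1 - cand[i, j]  # toggle edge (i, j)
                gain = filter_change(A, cand)
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        i, j = best
        A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]
    return A_pert

# Tiny usage example: a 5-node path graph, flipping 2 edges.
A = np.zeros((5, 5))
for k in range(4):
    A[k, k + 1] = A[k + 1, k] = 1
A_attacked = greedy_flip_attack(A, budget=2)
print("filter change:", filter_change(A, A_attacked))
```

Because the objective depends only on the graph structure, no victim model, labels, or queries appear anywhere in the sketch, which is what makes the setting "strict black-box".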
