Paper Title

An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks

Paper Authors

Yadav, Anirudh; Upadhyay, Ashutosh; Sharanya, S.

Paper Abstract

According to recent studies, state-of-the-art neural networks have become drastically more vulnerable to adversarial input samples. A neural network is an intermediate path or technique by which a computer learns to perform tasks using machine-learning algorithms. Machine-learning and artificial-intelligence models have become a fundamental aspect of life, powering systems such as self-driving cars [1] and smart home devices, so any vulnerability is a significant concern. The smallest input deviations can fool these extremely literal systems and deceive their users as well as administrators into precarious situations. This article proposes a defense algorithm that combines an auto-encoder [3] with a block-switching architecture. The auto-encoder is intended to remove any perturbations found in input images, while the block-switching method makes the model more robust against white-box attacks. The attack is mounted using the FGSM [9] method, and the proposed architecture's subsequent counter-defense demonstrates the feasibility and security delivered by the algorithm.
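The FGSM attack referenced in the abstract perturbs an input in the direction of the sign of the loss gradient: x_adv = x + ε · sign(∇ₓ L(x, y)). As a minimal illustration, the sketch below applies FGSM to a hand-rolled logistic-regression "model" (the weights, inputs, and the `fgsm_perturb` helper are illustrative assumptions, not code from the paper, which attacks a neural network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """One-step FGSM on a logistic-regression model (illustrative stand-in).

    For binary cross-entropy loss, the gradient of the loss w.r.t. the
    input is (sigmoid(w.x + b) - y) * w; FGSM adds epsilon times the
    sign of that gradient, then clips back to the valid pixel range [0, 1].
    """
    p = sigmoid(np.dot(w, x) + b)          # model's probability for class 1
    grad_x = (p - y) * w                   # dL/dx for cross-entropy loss
    x_adv = x + epsilon * np.sign(grad_x)  # step that *increases* the loss
    return np.clip(x_adv, 0.0, 1.0)

# Usage: the adversarial input lowers the model's confidence in the true label.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.5, 0.2, 0.7])   # clean "image" with pixels in [0, 1]
y = 1.0                         # true label
x_adv = fgsm_perturb(x, y, w, b, epsilon=0.1)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
```

Even this tiny example shows the attack's character: each feature moves by only ±ε, yet the model's confidence in the correct label drops, which is exactly the kind of perturbation the paper's auto-encoder stage is meant to strip out before classification.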
