Paper Title
Evaluating adversarial robustness in simulated cerebellum
Paper Authors
Paper Abstract
It is well known that artificial neural networks are vulnerable to adversarial examples, and great efforts have been made to improve their robustness. However, such examples are usually imperceptible to humans, so their effect on biological neural circuits is largely unknown. This paper investigates adversarial robustness in a simulated cerebellum, a well-studied supervised learning system in computational neuroscience. Specifically, we propose to study three characteristics unique to the cerebellum: (i) network width; (ii) long-term depression (LTD) at the parallel fiber-Purkinje cell synapses; and (iii) sparse connectivity in the granule layer, and we hypothesize that each will be beneficial for improving robustness. To the best of our knowledge, this is the first attempt to examine adversarial robustness in simulated cerebellum models. The experimental results are negative: none of the three proposed mechanisms yields a significant improvement in robustness. Consequently, the cerebellum is expected to be as vulnerable to adversarial examples as deep neural networks are under batch training. We encourage neuroscientists to attempt to fool the biological system in experiments with adversarial attacks.
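To make the setup concrete, below is a minimal, illustrative sketch (not the paper's actual code) of a Marr-Albus-style cerebellum model that exhibits the three characteristics named in the abstract: a wide granule layer, sparse mossy fiber-to-granule connectivity, and an LTD-like error-driven update on the parallel fiber-Purkinje weights, probed with an FGSM-style adversarial perturbation. All layer sizes, the learning rate, the synthetic task, and the attack budget are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT = 100     # mossy fiber inputs
N_GRANULE = 4000  # (i) network width: granule cells vastly outnumber inputs
FAN_IN = 4        # (iii) sparsity: each granule cell samples ~4 mossy fibers

# Fixed random sparse expansion from mossy fibers to granule cells.
W_mf = np.zeros((N_GRANULE, N_INPUT))
for g in range(N_GRANULE):
    idx = rng.choice(N_INPUT, size=FAN_IN, replace=False)
    W_mf[g, idx] = rng.normal(size=FAN_IN)

def granule(x):
    """Sparse random projection followed by a threshold nonlinearity."""
    return np.maximum(W_mf @ x - 1.0, 0.0)

# (ii) Parallel fiber -> Purkinje weights, adapted by a climbing-fiber error
# signal; realized here as stochastic gradient descent on squared error, so
# parallel fibers coactive with an error signal are depressed (LTD).
w_pf = np.zeros(N_GRANULE)
LR = 1e-4

# Synthetic binary task (an assumption; the paper uses its own benchmarks).
X = rng.normal(size=(500, N_INPUT))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

for epoch in range(20):
    for x, t in zip(X, y):
        pf = granule(x)              # parallel fiber activity
        err = pf @ w_pf - t          # climbing fiber teaching signal
        w_pf -= LR * err * pf        # LTD/LTP-like weight change

def fgsm(x, t, eps):
    """FGSM-style attack: step along the sign of the input gradient.

    For this model, d(loss)/dx = err * W_mf^T (w_pf * 1[pf > 0])."""
    pf = granule(x)
    err = pf @ w_pf - t
    grad_x = err * (W_mf.T @ (w_pf * (pf > 0)))
    return x + eps * np.sign(grad_x)

pred = lambda x: np.sign(granule(x) @ w_pf)
clean_acc = np.mean([pred(x) == t for x, t in zip(X, y)])
adv_acc = np.mean([pred(fgsm(x, t, eps=0.1)) == t for x, t in zip(X, y)])
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

In this framing, the paper's three hypotheses correspond to varying the analogues of N_GRANULE, FAN_IN, and the LTD update rule; its negative result is that such variations do not significantly close the gap between clean and adversarial accuracy.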