Paper Title

A noise based novel strategy for faster SNN training

Paper Authors

Chunming Jiang, Yilei Zhang

Paper Abstract


Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility. Optimization of SNNs is a challenging task. The two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have advantages and limitations. ANN-to-SNN conversion requires a long inference time to approximate the accuracy of the ANN, which diminishes the benefits of SNNs. With spike-based BP, training high-precision SNNs typically consumes dozens of times more computational resources and time than their ANN counterparts. In this paper, we propose a novel SNN training approach that combines the benefits of the two methods. We first train a single-step SNN (T=1) by approximating the neural potential distribution with random noise, then losslessly convert the single-step SNN (T=1) to a multi-step SNN (T=N). The introduction of Gaussian-distributed noise leads to a significant gain in accuracy after conversion. The results show that our method considerably reduces the training and inference times of SNNs while maintaining high accuracy. Compared to the previous two methods, ours reduces training time by 65%-75% and achieves an inference speed more than 100 times faster. We also argue that augmenting the neuron model with noise makes it more bio-plausible.
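The two-stage recipe in the abstract — inject Gaussian noise into the membrane potential while training a single-step (T=1) network, then run the trained weights for T=N steps at inference — can be sketched roughly as below. This is a minimal illustration, not the paper's exact formulation: the noise scale `sigma`, the threshold value, and the soft-reset rule in the multi-step pass are all assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_if_forward(x, w, threshold=1.0, sigma=0.1, training=True):
    """Single-step (T=1) integrate-and-fire layer.

    During training, Gaussian noise is added to the membrane potential so
    that the T=1 potential distribution resembles the spread a neuron would
    see over multiple time steps (the core idea hedged above).
    """
    u = x @ w  # membrane potential from weighted input
    if training:
        u = u + rng.normal(0.0, sigma, size=u.shape)  # Gaussian perturbation
    return (u >= threshold).astype(np.float32)  # binary spike output

def multi_step_forward(x, w, T=4, threshold=1.0):
    """Multi-step (T=N) inference with the same weights.

    Accumulates potential over T steps and uses a soft reset (subtract the
    threshold on each spike), so the output spike rate approximates the
    underlying real-valued activation.
    """
    u = np.zeros(w.shape[1], dtype=np.float64)
    spikes = np.zeros(w.shape[1], dtype=np.float64)
    for _ in range(T):
        u += x @ w               # integrate the input current
        fired = u >= threshold
        spikes += fired          # count spikes per neuron
        u -= threshold * fired   # soft reset keeps residual potential
    return spikes / T            # firing rate over T steps
```

With noise disabled (`training=False`), the single-step output for an input driving the potential above threshold is a single spike, while the multi-step pass reports the firing rate of the same neuron over T steps.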
