Paper Title
A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding
Paper Authors
Paper Abstract
Spiking neural networks (SNNs) have recently gained momentum due to their low-power, multiplication-free computing and their closer resemblance to biological processes in the human nervous system. However, SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models, which offsets efficiency and inhibits their application to low-power systems for real-world use cases. To alleviate this problem, emerging neural encoding schemes have been proposed to shorten the spike train while maintaining high accuracy. However, current SNN accelerators do not support these emerging encoding schemes well. In this work, we present a novel hardware architecture that can efficiently support SNNs with emerging neural encoding. Our implementation features energy- and area-efficient processing units with increased parallelism and reduced memory accesses. We verify the accelerator on an FPGA and achieve improvements of 25% in power consumption and 90% in latency over previous work. At the same time, the high area efficiency allows us to scale to large neural network models. To the best of our knowledge, this is the first work to deploy a large neural network model, VGG, on physical FPGA-based neuromorphic hardware.
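As context for the multiplication-free computing and spike-train length mentioned in the abstract, the following is a minimal, illustrative Python sketch (not the paper's accelerator architecture; all names and parameters here are hypothetical) of an integrate-and-fire neuron layer: because inputs are binary spikes, each time step only accumulates the weights of neurons that fired, so no multiplications are needed, and running more time steps corresponds to a longer spike train.

```python
import numpy as np

def if_layer_step(spikes_in, weights, v_mem, v_th=1.0):
    """One time step of a (hypothetical) integrate-and-fire layer.

    spikes_in : (n_in,) binary spike vector for this time step
    weights   : (n_out, n_in) synaptic weight matrix
    v_mem     : (n_out,) membrane potentials carried across time steps
    """
    # Multiplication-free accumulation: sum only the weight columns
    # whose presynaptic neuron spiked (spike values are 0 or 1).
    active = np.flatnonzero(spikes_in)
    v_mem = v_mem + weights[:, active].sum(axis=1)

    # Neurons whose potential crosses the threshold emit a spike and reset.
    spikes_out = (v_mem >= v_th).astype(np.uint8)
    v_mem = np.where(spikes_out == 1, v_mem - v_th, v_mem)
    return spikes_out, v_mem

# Toy usage: more time steps mean a longer spike train, which is the
# latency/accuracy trade-off that shorter neural encodings aim to reduce.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
v = np.zeros(4)
for t in range(16):  # spike train of length 16
    s_in = (rng.random(8) < 0.3).astype(np.uint8)  # rate-coded input spikes
    s_out, v = if_layer_step(s_in, W, v)
```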