Paper Title
Spiking Neural Networks Through the Lens of Streaming Algorithms
Paper Authors
Paper Abstract
We initiate the study of biological neural networks from the perspective of streaming algorithms. Like computers, human brains suffer from memory limitations, which pose a significant obstacle when processing large-scale and dynamically changing data. In computer science, these challenges are captured by the well-known streaming model, which can be traced back to Munro and Paterson '78 and has had significant impact in theory and beyond. In the classical streaming setting, one must compute some function $f$ of a stream of updates $\mathcal{S} = \{u_1,\ldots,u_m\}$, given restricted single-pass access to the stream. The primary complexity measure is the space used by the algorithm. We take the first steps towards understanding the connection between streaming and neural algorithms. On the upper bound side, we design neural algorithms based on known streaming algorithms for fundamental tasks, including distinct elements, approximate median, heavy hitters, and more. The number of neurons in our neural solutions almost matches the space bounds of the corresponding streaming algorithms. As a general algorithmic primitive, we show how to implement the important streaming technique of linear sketching efficiently in spiking neural networks. On the lower bound side, we give a generic reduction showing that any space-efficient spiking neural network can be simulated by a space-efficient streaming algorithm. This reduction lets us translate streaming-space lower bounds into nearly matching neural-space lower bounds, establishing a close connection between these two models.
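To give a flavor of the linear sketching primitive and the heavy hitters task mentioned above, here is a minimal Python sketch of a standard Count-Min sketch. This is an illustrative classical streaming baseline, not the paper's spiking neural network construction; the class name, parameters (width, depth), and the salted-hash stand-in for a pairwise-independent hash family are our own assumptions.

```python
import random

class CountMinSketch:
    """Count-Min sketch: a linear sketch for approximate frequency queries.

    Maintains a depth x width counter table. Each update (item, delta) adds
    delta to one counter per row, chosen by a per-row hash function. Since
    updates are additive, the sketch is a linear function of the frequency
    vector -- the "linear sketching" property the abstract refers to.
    """

    def __init__(self, width=256, depth=4, seed=0):
        self.width = width
        self.depth = depth
        rng = random.Random(seed)
        # One random salt per row (a stand-in for a pairwise-independent
        # hash family).
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, row, item):
        return hash((self.salts[row], item)) % self.width

    def update(self, item, delta=1):
        for row in range(self.depth):
            self.table[row][self._bucket(row, item)] += delta

    def estimate(self, item):
        # With nonnegative updates, every row overestimates the true
        # frequency due to collisions; the minimum is the tightest bound.
        return min(self.table[row][self._bucket(row, item)]
                   for row in range(self.depth))


# Usage: estimate the frequency of a heavy hitter in a single pass,
# using space far smaller than the stream itself.
stream = ["a"] * 1000 + ["b"] * 10 + ["c"] * 5
random.shuffle(stream)
cms = CountMinSketch()
for u in stream:
    cms.update(u)
print(cms.estimate("a"))  # close to 1000
```

The space used is only depth * width counters, independent of the stream length, which is exactly the kind of space bound the paper's neural solutions aim to match with their neuron counts.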