Paper Title
Reservoir Computing meets Recurrent Kernels and Structured Transforms
Paper Authors
Paper Abstract
Reservoir Computing is a class of simple yet efficient Recurrent Neural Networks where internal weights are fixed at random and only a linear output layer is trained. In the large size limit, such random neural networks have a deep connection with kernel methods. Our contributions are threefold: a) We rigorously establish the recurrent kernel limit of Reservoir Computing and prove its convergence. b) We test our models on chaotic time series prediction, a classic but challenging benchmark in Reservoir Computing, and show how the Recurrent Kernel is competitive and computationally efficient when the number of data points remains moderate. c) When the number of samples is too large, we leverage the success of structured Random Features for kernel approximation by introducing Structured Reservoir Computing. The two proposed methods, Recurrent Kernel and Structured Reservoir Computing, turn out to be much faster and more memory-efficient than conventional Reservoir Computing.
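Below is a minimal sketch of the conventional Reservoir Computing setup the abstract describes: an Echo State Network-style tanh recurrence with i.i.d. Gaussian weights that are fixed at random, followed by a ridge-regression readout. All names (`n_res`, `reservoir_states`, the toy sine series) and the particular scaling and nonlinearity are illustrative assumptions; the abstract does not fix these details.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 500, 1  # reservoir size and input dimension (illustrative)

# Internal and input weights are drawn once at random and never trained.
# The 0.9 / sqrt(n_res) scaling keeps the spectral radius below 1, a
# common heuristic for a stable (echo state) recurrence.
W_res = 0.9 * rng.standard_normal((n_res, n_res)) / np.sqrt(n_res)
W_in = rng.standard_normal((n_res, n_in))

def reservoir_states(inputs):
    """Run the fixed random recurrence x_{t+1} = tanh(W_res x_t + W_in u_t)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_res @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.asarray(states)

# Only the linear output layer is trained, here by ridge regression on
# one-step-ahead prediction of a toy series standing in for a chaotic one.
series = np.sin(np.linspace(0, 20, 200))
X, y = reservoir_states(series[:-1]), series[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
```

Structured Reservoir Computing replaces the dense random reservoir matrix with a structured transform whose matrix-vector product is cheap. The sketch below uses one common structured Random Features construction, products of Hadamard and random sign-diagonal matrices; whether this matches the paper's exact transform cannot be read off the abstract. `scipy.linalg.hadamard` builds the matrix densely here for simplicity; a fast Walsh-Hadamard transform would realize the claimed speed and memory savings.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
n = 512  # Hadamard matrices require a power-of-two size
H = hadamard(n) / np.sqrt(n)  # orthonormal Hadamard matrix
D1, D2, D3 = (rng.choice([-1.0, 1.0], size=n) for _ in range(3))

def structured_matvec(x):
    # H D3 H D2 H D1 x mimics a dense Gaussian matvec; with a fast
    # Walsh-Hadamard transform it costs O(n log n) time and O(n) memory,
    # versus O(n^2) for a dense reservoir matrix.
    for D in (D1, D2, D3):
        x = H @ (D * x)
    return x
```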