Paper Title

Superlinear Precision and Memory in Simple Population Codes

Paper Authors

Jimmy H. J. Kim, Ila Fiete, David J. Schwab

Paper Abstract

The brain constructs population codes to represent stimuli through widely distributed patterns of activity across neurons. An important figure of merit of population codes is how much information about the original stimulus can be decoded from them. Fisher information is widely used to quantify coding precision and specify optimal codes, because of its relationship to mean squared error (MSE) under certain assumptions. When neural firing is sparse, however, optimizing Fisher information can result in codes that are highly sub-optimal in terms of MSE. We find that this discrepancy arises from the non-local component of error not accounted for by the Fisher information. Using this insight, we construct optimal population codes by directly minimizing the MSE. We study the scaling properties of MSE with coding parameters, focusing on the tuning curve width. We find that the optimal tuning curve width for coding no longer scales as the inverse population size, and the quadratic scaling of precision with system size predicted by Fisher information alone no longer holds. However, superlinearity is still preserved with only a logarithmic slowdown. We derive analogous results for networks storing the memory of a stimulus through continuous attractor dynamics, and show that similar scaling properties optimize memory and representation.
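The discrepancy the abstract describes is straightforward to reproduce numerically. The sketch below is an illustrative toy model, not the paper's exact setup: it assumes N Gaussian tuning curves evenly tiling a periodic stimulus space [0, 1), independent Poisson spike counts with a low peak rate A (to mimic sparse firing), and a brute-force maximum-likelihood decoder; the parameter values are arbitrary choices for demonstration. For independent Poisson neurons the Fisher information is I(s) = sum_i f_i'(s)^2 / f_i(s), and the Cramér-Rao bound predicts MSE >= 1/I(s).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                      # neurons tiling the periodic stimulus space [0, 1)
A = 2.0                     # peak expected spike count per trial (sparse regime)
centers = np.arange(N) / N  # evenly spaced tuning-curve centers (assumed layout)

def rates(s, w):
    """Mean spike counts: Gaussian tuning curves of width w on a ring."""
    d = (centers - s + 0.5) % 1.0 - 0.5       # wrapped distance in [-0.5, 0.5)
    return A * np.exp(-d**2 / (2 * w**2)) + 1e-12   # tiny floor avoids log(0)

def fisher_info(s, w):
    """Fisher information for independent Poisson neurons: sum_i f_i'^2 / f_i."""
    d = (centers - s + 0.5) % 1.0 - 0.5
    f = rates(s, w)
    return np.sum((f * d / w**2) ** 2 / f)    # f' = f * d / w^2 for a Gaussian

def empirical_mse(w, trials=2000, s_true=0.5):
    """MSE of the maximum-likelihood decoder, estimated by Monte Carlo."""
    grid = np.linspace(0, 1, 400, endpoint=False)
    R = np.stack([rates(s, w) for s in grid])       # (grid, N) rate table
    counts = rng.poisson(rates(s_true, w), size=(trials, N))
    logL = counts @ np.log(R).T - R.sum(axis=1)     # Poisson log-likelihood
    s_hat = grid[np.argmax(logL, axis=1)]
    err = (s_hat - s_true + 0.5) % 1.0 - 0.5        # wrapped decoding error
    return np.mean(err**2)

for w in (0.5 / N, 2.0 / N, 8.0 / N):
    print(f"w={w:.3f}  1/I (Cramer-Rao) = {1 / fisher_info(0.5, w):.2e}  "
          f"empirical MSE = {empirical_mse(w):.2e}")
```

In this assumed setup, the bound and the empirical MSE agree at wide tuning widths but diverge sharply at narrow widths: shrinking w keeps increasing I(s), yet in the sparse regime trials in which no neuron near the stimulus fires yield essentially random estimates. These rare, large non-local errors dominate the MSE while leaving the Fisher information unchanged, which is the gap the abstract refers to.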
