Paper Title
A theory of learning with constrained weight-distribution
Paper Authors
Paper Abstract
A central question in computational neuroscience is how structure determines function in neural networks. The emerging high-quality, large-scale connectomic datasets raise the question of what general functional principles can be gleaned from structural information such as the distribution of excitatory/inhibitory synapse types and the distribution of synaptic weights. Motivated by this question, we developed a statistical mechanical theory of learning in neural networks that incorporates structural information as constraints. We derived an analytical solution for the memory capacity of the perceptron, a basic feedforward model of supervised learning, with a constraint on the distribution of its weights. Our theory predicts that the reduction in capacity due to the constrained weight-distribution is related to the Wasserstein distance between the imposed distribution and the standard normal distribution. To test the theoretical predictions, we used optimal transport theory and information geometry to develop an SGD-based algorithm that finds weights which simultaneously learn the input-output task and satisfy the distribution constraint. We show that training in our algorithm can be interpreted as geodesic flows in the Wasserstein space of probability distributions. We further developed a statistical mechanical theory for teacher-student perceptron rule learning and asked for the best way for the student to incorporate prior knowledge of the rule. Our theory shows that it is beneficial for the learner to adopt different prior weight distributions during learning, and that distribution-constrained learning outperforms unconstrained and sign-constrained learning. Our theory and algorithm provide novel strategies for incorporating prior knowledge about weights into learning, and reveal a powerful connection between structure and function in neural networks.
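The sketch below is a minimal illustration, not the authors' exact algorithm: it trains a perceptron on random patterns with SGD while enforcing a weight-distribution constraint by projecting the weights onto a target sample after each update. It relies only on the standard fact that in one dimension the optimal-transport map onto an empirical target is given by sorting; the lognormal target, the function `project_to_target`, and all hyperparameters are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 400              # input dimension, number of patterns
lr, epochs = 0.05, 200

X = rng.choice([-1.0, 1.0], size=(P, N))   # random binary input patterns
y = rng.choice([-1.0, 1.0], size=P)        # random target labels

# Hypothetical target weight distribution: a standardized lognormal sample,
# as an example of a non-Gaussian structural constraint.
target = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=N))
target = (target - target.mean()) / target.std()

def project_to_target(w, target_sorted):
    """Map w onto the target distribution via the 1D optimal-transport (sorting) map."""
    order = np.argsort(w)
    w_proj = np.empty_like(w)
    w_proj[order] = target_sorted   # i-th smallest weight gets i-th smallest target value
    return w_proj

w = project_to_target(rng.standard_normal(N), target)
for _ in range(epochs):
    margins = (X @ w) * y / np.sqrt(N)
    wrong = margins < 0
    if not wrong.any():
        break
    # Perceptron-style gradient step on misclassified patterns,
    # followed by projection back onto the constrained weight distribution.
    grad = -(X[wrong] * y[wrong, None]).mean(axis=0)
    w = project_to_target(w - lr * grad, target)

stored = np.mean((X @ w) * y > 0)
print(f"fraction of patterns correctly stored: {stored:.3f}")
```

Under these assumptions, sweeping the load P/N and recording where the stored fraction drops below 1 gives an empirical capacity estimate that can be compared against an unconstrained (Gaussian-weight) baseline.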