Paper Title

Brain-like combination of feedforward and recurrent network components achieves prototype extraction and robust pattern recognition

Paper Authors

Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman

Paper Abstract

Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks. Attractor networks implementing associative memory have offered mechanistic explanation for many cognitive phenomena. However, attractor memory models are typically trained using orthogonal or random patterns to avoid interference between memories, which makes them unfeasible for naturally occurring complex correlated stimuli like images. We approach this problem by combining a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule. The resulting network model incorporates many known biological properties: unsupervised learning, Hebbian plasticity, sparse distributed activations, sparse connectivity, columnar and laminar cortical architecture, etc. We evaluate the synergistic effects of the feedforward and recurrent network components in complex pattern recognition tasks on the MNIST handwritten digits dataset. We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations. The associative memory is also shown to perform prototype extraction from the training data and make the representations robust to severely distorted input. We argue that several aspects of the proposed integration of feedforward and recurrent computations are particularly attractive from a machine learning perspective.
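To make the abstract's core idea concrete, below is a minimal NumPy sketch of a Hebbian-Bayesian (BCPNN-style) attractor memory operating on sparse binary codes, standing in for the feedforward-driven hidden representations the paper describes. All names and parameters here are illustrative assumptions, not the authors' implementation: the actual model additionally includes the feedforward BCPNN layer, structural plasticity, and columnar/laminar modularity, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_bayesian_weights(patterns, eps=1e-4):
    """Estimate BCPNN-style weights from binary patterns:
    w_ij = log(p_ij / (p_i * p_j)), bias_j = log(p_j),
    where p_i and p_ij are activation and co-activation
    frequencies over the training patterns."""
    p = patterns.mean(axis=0) + eps                      # unit activation probabilities
    pij = (patterns.T @ patterns) / len(patterns) + eps  # pairwise co-activation probabilities
    w = np.log(pij / np.outer(p, p))
    np.fill_diagonal(w, 0.0)                             # no self-excitation
    return w, np.log(p)

def attractor_recall(w, bias, x, k, steps=50):
    """Iterate the recurrent dynamics to a fixed point, keeping the
    k most supported units active each step (k-winners-take-all)."""
    for _ in range(steps):
        support = x @ w + bias
        x_new = np.zeros_like(x)
        x_new[np.argsort(support)[-k:]] = 1.0
        if np.array_equal(x_new, x):                     # converged to an attractor
            break
        x = x_new
    return x

# Toy run: store a few sparse binary codes (stand-ins for hidden
# representations) and recall one from a severely distorted cue.
n_units, k, n_patterns = 200, 10, 5
patterns = np.zeros((n_patterns, n_units))
for row in patterns:
    row[rng.choice(n_units, size=k, replace=False)] = 1.0

w, bias = hebbian_bayesian_weights(patterns)
cue = patterns[0].copy()
flipped = rng.choice(n_units, size=8, replace=False)     # distort the cue
cue[flipped] = 1.0 - cue[flipped]
recalled = attractor_recall(w, bias, cue, k)
print("overlap with stored pattern:", recalled @ patterns[0] / k)
```

Because the weights are built from co-activation statistics rather than from individual exemplars, training such a network on many noisy variants of a class pulls the fixed points toward the class average, which is one intuition for the prototype extraction and robustness to distorted input reported in the paper.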
