Paper Title
Exploring Flip Flop memories and beyond: training recurrent neural networks with key insights
Paper Authors
Paper Abstract
Training neural networks to perform different tasks is relevant across various disciplines. In particular, Recurrent Neural Networks (RNNs) are of great interest in Computational Neuroscience. Open-source frameworks dedicated to Machine Learning, such as TensorFlow and Keras, have produced significant changes in the development of the technologies we currently use. This work aims to make a significant contribution by comprehensively investigating and describing the implementation of a temporal processing task, specifically a 3-bit Flip Flop memory. We delve into the entire modelling process, encompassing equations, task parametrization, and software development. The obtained networks are meticulously analyzed to elucidate their dynamics, aided by an array of visualization and analysis tools. Moreover, the provided code is versatile enough to facilitate the modelling of diverse tasks and systems. Furthermore, we show how memory states can be efficiently stored at the vertices of a cube in the dimensionally reduced space, supplementing previous results with a distinct approach.
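To make the task described in the abstract concrete, here is a minimal sketch of a 3-bit Flip Flop setup in TensorFlow/Keras. It assumes the usual task conventions (sparse ±1 pulses on each input channel; each target channel holds the sign of its last pulse); the function name, network size, and hyperparameters are illustrative, not the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf

def make_flipflop_batch(n_trials=64, n_steps=200, n_bits=3, p_pulse=0.05, seed=0):
    """Inputs: sparse +/-1 pulses per bit. Targets: hold the last pulse's sign."""
    rng = np.random.default_rng(seed)
    pulses = rng.random((n_trials, n_steps, n_bits)) < p_pulse
    signs = rng.choice([-1.0, 1.0], size=(n_trials, n_steps, n_bits))
    x = np.where(pulses, signs, 0.0).astype("float32")
    y = np.zeros_like(x)
    state = np.ones((n_trials, n_bits), dtype="float32")  # assumed initial memory
    for t in range(n_steps):
        state = np.where(x[:, t, :] != 0.0, x[:, t, :], state)
        y[:, t, :] = state
    return x, y

x_train, y_train = make_flipflop_batch()

# A small vanilla RNN with a linear readout, trained on mean squared error.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 3)),
    tf.keras.layers.SimpleRNN(64, activation="tanh", return_sequences=True),
    tf.keras.layers.Dense(3),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=20, batch_size=16, verbose=0)
```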
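The cube-vertex result mentioned at the end of the abstract can be probed with a standard dimensionality-reduction pass over the trained network's hidden states; the sketch below uses PCA (one common choice for this analysis, not necessarily the paper's exact method) and reuses `model` and `make_flipflop_batch` from the training sketch. The eight memory states (the 2^3 bit patterns) are expected to appear as the vertices of a cube in the space of the first three principal components.

```python
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import tensorflow as tf

# Probe the recurrent layer's state sequence on fresh test trials.
x_test, _ = make_flipflop_batch(n_trials=16, seed=1)
probe = tf.keras.Model(inputs=model.inputs, outputs=model.layers[0].output)
states = probe.predict(x_test, verbose=0)   # shape: (trials, steps, units)
h = states.reshape(-1, states.shape[-1])

# Project hidden states onto their first three principal components.
z = PCA(n_components=3).fit_transform(h)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(z[:, 0], z[:, 1], z[:, 2], s=1, alpha=0.3)
ax.set_xlabel("PC1"); ax.set_ylabel("PC2"); ax.set_zlabel("PC3")
plt.show()
```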