Paper Title
Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling
Paper Authors
Paper Abstract
Learning new tasks and skills in succession without losing prior learning (i.e., catastrophic forgetting) is a computational challenge for both artificial and biological neural networks, yet artificial systems struggle to achieve parity with their biological analogues. Mammalian brains employ numerous neural operations in support of continual learning during sleep, and these are ripe for artificial adaptation. Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks: (1) a veridical memory replay process observed during non-rapid eye movement (NREM) sleep; (2) a generative memory replay process linked to REM sleep; and (3) a synaptic downscaling process that has been proposed to tune signal-to-noise ratios and support neural upkeep. We find benefits from the inclusion of all three sleep components when evaluating performance on a continual learning CIFAR-100 image classification benchmark: maximum accuracy during training improved, and catastrophic forgetting during later tasks was reduced. While some catastrophic forgetting persisted over the course of network training, higher levels of synaptic downscaling led to better retention of early tasks and further facilitated the recovery of early-task accuracy during subsequent training. One key takeaway is that there is a trade-off in choosing the level of synaptic downscaling: more aggressive downscaling better protects early tasks, whereas less downscaling enhances the ability to learn new tasks. Intermediate levels strike a balance, yielding the highest overall accuracies during training. Overall, our results both provide insight into how sleep components can be adapted to enhance artificial continual learning systems and highlight areas for future neuroscientific sleep research to further such systems.
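To make the downscaling trade-off concrete, below is a minimal, hypothetical sketch (Python/NumPy, not taken from the paper) of one common way synaptic downscaling is modeled: all weights are scaled toward zero by a constant factor after a sleep phase. The function name and the factor `alpha` are assumptions for illustration only; smaller `alpha` corresponds to the more aggressive downscaling described above.

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: multiplicative
# synaptic downscaling applied to a toy weight matrix after a "sleep" phase.
# `alpha` is a hypothetical downscaling factor chosen for this example.

def downscale(weights: np.ndarray, alpha: float) -> np.ndarray:
    """Scale every synaptic weight toward zero by a constant factor."""
    return alpha * weights

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))              # toy weights learned during "wake" training

W_gentle = downscale(W, alpha=0.95)      # gentler downscaling: favors learning new tasks
W_aggressive = downscale(W, alpha=0.60)  # aggressive downscaling: favors retaining early tasks
```

Under this reading, choosing `alpha` embodies the retention-versus-plasticity trade-off the abstract describes, with intermediate values balancing the two.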