Paper Title
Practical Recommendations for Replay-based Continual Learning Methods
Paper Authors
Paper Abstract
Continual Learning requires a model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge. Several approaches have been developed in the literature to tackle the Continual Learning challenge. Among them, Replay approaches have empirically proven to be among the most effective. Replay operates by saving some samples in memory, which are then used to rehearse knowledge during training on subsequent tasks. However, an extensive comparison and deeper understanding of the subtleties of different replay implementations is still missing in the literature. The aim of this work is to compare and analyze existing replay-based strategies and to provide practical recommendations for developing efficient, effective, and generally applicable replay-based strategies. In particular, we investigate the role of the memory size and of different weighting policies, and we discuss the impact of data augmentation, which allows better performance to be reached with lower memory sizes.
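As a rough illustration of the replay mechanism the abstract describes (samples saved in a bounded memory and rehearsed during later tasks), the sketch below shows a minimal replay buffer using reservoir sampling. The class and method names are hypothetical, not from the paper; real replay strategies differ in how they select, weight, and augment stored samples, which is precisely what the paper compares.

```python
import random

class ReplayBuffer:
    """Minimal sketch of a replay memory for continual learning.

    Stores a bounded number of past samples and mixes them into
    training batches on subsequent tasks. Reservoir sampling keeps
    the buffer an (approximately) unbiased sample of the stream.
    Names here are illustrative, not the paper's API.
    """

    def __init__(self, memory_size):
        self.memory_size = memory_size
        self.buffer = []   # stored (x, y) pairs from earlier tasks
        self.seen = 0      # total number of samples observed so far

    def add(self, sample):
        # Reservoir sampling: after n samples, each one has
        # probability memory_size / n of being in the buffer.
        self.seen += 1
        if len(self.buffer) < self.memory_size:
            self.buffer.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.memory_size:
                self.buffer[idx] = sample

    def sample(self, batch_size):
        # Draw stored samples to rehearse alongside the current
        # task's batch (e.g. concatenated before the forward pass).
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k)
```

During training, each incoming sample would be offered to `add`, and each optimization step would combine the current batch with `sample(batch_size)` replay examples; the memory size and the ratio of replay to new samples are exactly the kinds of knobs whose effect the paper studies.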