Paper Title
Class-incremental Learning with Pre-allocated Fixed Classifiers
Paper Authors
Paper Abstract
In class-incremental learning, a learning agent faces a stream of data with the goal of learning new classes while not forgetting previous ones. Neural networks are known to suffer in this setting, as they forget previously acquired knowledge. To address this problem, effective methods exploit past data stored in an episodic memory while expanding the final classifier nodes to accommodate the new classes. In this work, we substitute the expanding classifier with a novel fixed classifier in which a number of pre-allocated output nodes are subject to the classification loss right from the beginning of the learning phase. In contrast to the standard expanding classifier, this allows: (a) the output nodes of future unseen classes to see negative samples from the very beginning of learning, together with the positive samples that arrive incrementally; (b) features to be learned whose geometric configuration does not change as novel classes are incorporated into the model. Experiments with public datasets show that the proposed approach is as effective as the expanding classifier while exhibiting intriguing properties of the internal feature representation that are otherwise absent. Our ablation study on pre-allocating a large number of classes further validates the approach.
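To make the idea concrete, below is a minimal PyTorch sketch of a pre-allocated fixed classifier: a linear output layer with nodes for all current and future classes, frozen so that the feature geometry stays stable while only the backbone is trained, and with the classification loss computed over all pre-allocated nodes from the start. The toy backbone, dimensions, class counts, and names here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreAllocatedFixedClassifier(nn.Module):
    """Feature extractor followed by a fixed, pre-allocated classifier.

    All output nodes (including those for classes not yet seen) are created
    up front; the classifier weights are frozen so the feature geometry does
    not drift as new classes arrive.
    """
    def __init__(self, feature_dim: int, num_preallocated_classes: int):
        super().__init__()
        # Hypothetical backbone; in practice this would be e.g. a ResNet.
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 3, feature_dim),
            nn.ReLU(),
        )
        # Pre-allocated classifier: one node per current or future class.
        self.classifier = nn.Linear(feature_dim, num_preallocated_classes,
                                    bias=False)
        # Freeze the classifier: features adapt to a fixed weight geometry.
        self.classifier.weight.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.backbone(x))

# Toy incremental step: the loss runs over ALL pre-allocated nodes, so the
# nodes of future classes receive negative signal from the very beginning.
model = PreAllocatedFixedClassifier(feature_dim=64,
                                    num_preallocated_classes=100)
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01
)

x = torch.randn(8, 3, 32, 32)      # batch of images
y = torch.randint(0, 10, (8,))     # labels from the classes seen so far
logits = model(x)                  # shape (8, 100): all pre-allocated nodes
loss = F.cross_entropy(logits, y)  # softmax over all 100 nodes
loss.backward()
optimizer.step()
```

In an actual class-incremental run, each new task would simply start using more of the already-existing output nodes, optionally replaying samples from an episodic memory; no new classifier weights are added, which is the property the abstract contrasts with the standard expanding classifier.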