Title

Simple Lifelong Learning Machines

Authors

Jayanta Dey, Joshua T. Vogelstein, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Tyler M. Tomita, Haoyin Xu, Ali Geisa, Qingyang Wang, Gido M. van de Ven, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe

Abstract

In lifelong learning, data are used to improve performance not only on the present task, but also on past and future (unencountered) tasks. While typical transfer learning algorithms can improve performance on future tasks, their performance on prior tasks degrades upon learning new tasks (called forgetting). Many recent approaches for continual or lifelong learning have attempted to maintain performance on old tasks given new tasks. But striving to avoid forgetting sets the goal unnecessarily low. The goal of lifelong learning should be to use data to improve performance on both future tasks (forward transfer) and past tasks (backward transfer). In this paper, we show that a simple approach -- representation ensembling -- demonstrates both forward and backward transfer in a variety of simulated and benchmark data scenarios, including tabular, vision (CIFAR-100, 5-dataset, Split Mini-Imagenet, and Food1k), and speech (spoken digit), in contrast to various reference algorithms, which typically failed to transfer either forward or backward, or both. Moreover, our proposed approach can flexibly operate with or without a computational budget.
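
To make the abstract's central idea concrete, below is a minimal sketch of representation ensembling, under our own assumptions: each task learns its own encoder (here, an unsupervised forest embedding), each task keeps one voter per encoder, and stored task data are used to refit voters whenever a new encoder arrives, so old encoders serve new tasks (forward transfer) and new encoders can improve old-task predictions (backward transfer). All class and method names are hypothetical illustrations, not the authors' implementation.

```python
# Toy sketch of representation ensembling for lifelong learning.
# Names (LifelongEnsemble, add_task, predict) are hypothetical.
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.linear_model import LogisticRegression


class LifelongEnsemble:
    def __init__(self):
        self.encoders = []  # one learned representation per task seen so far
        self.tasks = {}     # stored (X, y) per task, used to refit voters
        self.voters = {}    # voters[task_id][i] pairs with self.encoders[i]

    def add_task(self, task_id, X, y):
        # Learn a new encoder from the new task's data: here, a forest
        # embedding whose one-hot leaf membership is the representation.
        enc = RandomTreesEmbedding(n_estimators=50, random_state=0).fit(X)
        self.encoders.append(enc)
        self.tasks[task_id] = (X, y)
        # Refit every task's voters across ALL encoders:
        #  - the new task's voters use old encoders  -> forward transfer
        #  - old tasks' voters gain the new encoder  -> backward transfer
        for tid, (Xt, yt) in self.tasks.items():
            self.voters[tid] = [
                LogisticRegression(max_iter=1000).fit(e.transform(Xt), yt)
                for e in self.encoders
            ]

    def predict(self, task_id, X):
        # Ensemble: average class posteriors from this task's voters
        # applied to every encoder's representation of X.
        voters = self.voters[task_id]
        posteriors = [
            v.predict_proba(e.transform(X))
            for e, v in zip(self.encoders, voters)
        ]
        avg = np.mean(posteriors, axis=0)
        return voters[0].classes_[avg.argmax(axis=1)]
```

In this sketch, adding a second task refits the first task's voters over both encoders, which is where backward transfer can appear; the per-encoder voter lists also make it straightforward to cap ensemble size, consistent with the abstract's note that the approach can operate with or without a computational budget.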
