Paper Title
Extended Few-Shot Learning: Exploiting Existing Resources for Novel Tasks
Paper Authors
Paper Abstract
In many practical few-shot learning problems, even though labeled examples are scarce, there are abundant auxiliary datasets that potentially contain useful information. We propose the problem of extended few-shot learning to study these scenarios. We then introduce a framework to address the challenges of efficiently selecting and effectively using auxiliary data in few-shot image classification. Given a large auxiliary dataset and a notion of semantic similarity among classes, we automatically select pseudo shots, which are labeled examples from other classes related to the target task. We show that naive approaches, such as (1) modeling these additional examples the same as the target task examples or (2) using them to learn features via transfer learning, increase accuracy only by a modest amount. Instead, we propose a masking module that adjusts the features of auxiliary data to be more similar to those of the target classes. This masking module outperforms naive modeling of the support examples and transfer learning by 4.68 and 6.03 percentage points, respectively.
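The core idea of the masking module described above can be illustrated with a minimal sketch: compute a per-dimension gate from the target-class prototype and apply it element-wise to the auxiliary (pseudo-shot) features, shrinking dimensions that are less relevant to the target class. This is a simplified illustration, not the paper's exact architecture; the function names, the linear parameterization `W`, `b`, and the use of a sigmoid gate are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_auxiliary_features(aux_feats, class_prototype, W, b):
    """Gate each feature dimension of pseudo-shot examples.

    A per-dimension mask in (0, 1) is computed from the target-class
    prototype, then applied element-wise so the auxiliary features are
    pulled toward the target class's feature statistics.
    (Hypothetical parameterization: in practice W and b would be learned.)
    """
    mask = sigmoid(W @ class_prototype + b)  # shape: (d,)
    return aux_feats * mask                  # broadcasts over examples

# Toy usage: d = 4 feature dims, 3 pseudo shots
rng = np.random.default_rng(0)
d = 4
aux_feats = rng.normal(size=(3, d))
prototype = rng.normal(size=d)       # e.g. mean feature of the support set
W = rng.normal(size=(d, d)) * 0.1    # hypothetical learned parameters
b = np.zeros(d)
masked = mask_auxiliary_features(aux_feats, prototype, W, b)
```

Because each mask value lies strictly in (0, 1), every masked feature has smaller magnitude than the original, so the module can only attenuate (never amplify) auxiliary feature dimensions in this simplified form.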