Paper Title
Complementing Representation Deficiency in Few-shot Image Classification: A Meta-Learning Approach
Paper Authors
Paper Abstract
Few-shot learning is a challenging problem that has recently attracted increasing attention, since abundant training samples are difficult to obtain in practical applications. Meta-learning has been proposed to address this issue; it focuses on quickly adapting a predictor, as a base-learner, to new tasks given limited labeled samples. However, a critical challenge for meta-learning is representation deficiency: it is hard to discover common information from a small number of training samples, or even a single one, let alone represent key features from such limited information. As a result, a meta-learner cannot be trained well in a high-dimensional parameter space to generalize to new tasks. Existing methods mostly resort to extracting less expressive features so as to avoid the representation deficiency. Aiming at learning better representations, we propose a meta-learning approach with a complemented representations network (MCRNet) for few-shot image classification. In particular, we embed a latent space in which latent codes are reconstructed with extra representation information to complement the representation deficiency. Furthermore, the latent space is established with variational inference, collaborates well with different base-learners, and can be extended to other models. Finally, our end-to-end framework achieves state-of-the-art image classification performance on three standard few-shot learning datasets.
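As an informal illustration only (the abstract does not specify the architecture), the PyTorch sketch below shows one way a variational latent module could sample a latent code, reconstruct complementary representation information, and fuse it with backbone features before a base-learner classifies them. All names (LatentComplementNet, feat_dim, latent_dim, n_way) and the KL weight are hypothetical assumptions, not details from the paper.

# Minimal sketch of the idea described in the abstract: a latent space trained with
# variational inference, whose sampled codes are decoded into extra representation
# information that complements the (possibly deficient) few-shot features.
# Assumption: PyTorch; all module and variable names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentComplementNet(nn.Module):
    def __init__(self, feat_dim=640, latent_dim=64, n_way=5):
        super().__init__()
        # Variational encoder: maps backbone features to mean / log-variance of a latent code.
        self.to_mu = nn.Linear(feat_dim, latent_dim)
        self.to_logvar = nn.Linear(feat_dim, latent_dim)
        # Decoder: reconstructs complementary representation information from the latent code.
        self.decoder = nn.Linear(latent_dim, feat_dim)
        # A plain linear classifier stands in for the paper's base-learner.
        self.classifier = nn.Linear(feat_dim, n_way)

    def forward(self, features):
        mu = self.to_mu(features)
        logvar = self.to_logvar(features)
        # Reparameterization trick, so the latent space can be learned with variational inference.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        complement = self.decoder(z)
        # Fuse the reconstructed information with the original features.
        enriched = features + complement
        logits = self.classifier(enriched)
        # KL term regularizing the latent codes toward a standard normal prior.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl

if __name__ == "__main__":
    model = LatentComplementNet()
    feats = torch.randn(25, 640)           # e.g. 5-way 5-shot support features from a backbone
    labels = torch.randint(0, 5, (25,))
    logits, kl = model(feats)
    loss = F.cross_entropy(logits, labels) + 0.1 * kl  # KL weight is an arbitrary choice here
    loss.backward()
    print(loss.item())

In this toy version the complementary information is simply added to the backbone features; the actual MCRNet design, losses, and base-learners should be taken from the paper itself.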