Paper Title

Meta-Learning with Network Pruning

Paper Authors

Hongduan Tian, Bo Liu, Xiao-Tong Yuan, Qingshan Liu

Paper Abstract

Meta-learning is a powerful paradigm for few-shot learning. Despite remarkable success in many applications, existing optimization-based meta-learning models with over-parameterized neural networks have been shown to overfit on training tasks. To remedy this deficiency, we propose a network-pruning-based meta-learning approach that reduces overfitting by explicitly controlling the capacity of the network. A uniform concentration analysis reveals the benefit of the network capacity constraint for reducing the generalization gap of the proposed meta-learner. We have implemented our approach on top of Reptile, combined with two network pruning routines: Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT). Extensive experimental results on benchmark datasets with different over-parameterized deep networks demonstrate that our method not only effectively alleviates meta-overfitting but also, in many cases, improves overall generalization performance when applied to few-shot classification tasks.
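
For readers unfamiliar with the two pruning routines, the sketch below illustrates the IHT side in plain NumPy: a hard-thresholding projection that keeps only the largest-magnitude weights, interleaved with Reptile's outer-loop interpolation. This is a minimal sketch, not the authors' implementation; the function names (`hard_threshold`, `reptile_outer_step`, `adapt_to_task`), the flat-array weight representation, and the prune-after-every-meta-step schedule are all assumptions made for illustration.

```python
import numpy as np

def hard_threshold(weights, sparsity):
    """Keep the largest-magnitude entries of `weights`; zero the rest.

    `sparsity` is the fraction of entries forced to zero (e.g. 0.5).
    This projection is the core step of IHT-style pruning. Ties at the
    threshold may keep slightly more than the target number of entries.
    """
    k = int(round(weights.size * (1.0 - sparsity)))  # entries to keep
    if k <= 0:
        return np.zeros_like(weights)
    if k >= weights.size:
        return weights
    kth_largest = np.partition(np.abs(weights).ravel(), -k)[-k]
    return weights * (np.abs(weights) >= kth_largest)

def reptile_outer_step(theta, adapt_to_task, meta_step=0.1, sparsity=0.5):
    """One Reptile meta-update followed by a hard-thresholding projection.

    `adapt_to_task` (assumed, supplied by the caller) runs a few inner
    SGD steps on a sampled task and returns adapted weights phi. Reptile
    moves theta toward phi; the projection then enforces the capacity
    constraint between meta-updates.
    """
    phi = adapt_to_task(theta)
    theta = theta + meta_step * (phi - theta)  # Reptile interpolation
    return hard_threshold(theta, sparsity)
```

DSD follows a related prune-then-retrain pattern but, unlike IHT, restores the pruned weights for a final dense retraining phase rather than keeping the network sparse.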
