Paper Title
Structured Pruning Learns Compact and Accurate Models
Paper Authors
Paper Abstract
The growing size of neural language models has led to increased attention to model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but hardly achieve speedups as large as distillation does. Distillation methods, however, require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific, structured pruning method, CoFi (Coarse- and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, controlling the pruning decision for each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from the unpruned model to the pruned one during optimization. Our experiments on the GLUE and SQuAD datasets show that CoFi yields models with over 10x speedups and only a small accuracy drop, demonstrating its effectiveness and efficiency compared to previous pruning and distillation approaches.
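
To make the multi-granularity masking concrete, below is a minimal PyTorch sketch of a toy transformer layer gated by a coarse mask per sub-layer and fine masks per attention head and per FFN hidden unit. The class and variable names (MaskedTransformerLayer, z_mha, z_ffn, z_head, z_int) are illustrative assumptions, not identifiers from the CoFi codebase, and the masks are shown as plain parameters rather than the learned relaxed masks used in practice.

```python
# A minimal sketch, not the authors' implementation: masks of different
# granularity jointly gate a toy transformer layer. In CoFi the masks are
# learned (e.g., with an L0-style relaxation); here they are plain parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedTransformerLayer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.attn_out = nn.Linear(d_model, d_model)
        self.ffn_in = nn.Linear(d_model, d_ff)
        self.ffn_out = nn.Linear(d_ff, d_model)
        # Coarse-grained masks: one gate per sub-layer (whole MHA / whole FFN).
        self.z_mha = nn.Parameter(torch.ones(1))
        self.z_ffn = nn.Parameter(torch.ones(1))
        # Fine-grained masks: one gate per attention head / per FFN hidden unit.
        self.z_head = nn.Parameter(torch.ones(n_heads))
        self.z_int = nn.Parameter(torch.ones(d_ff))

    def forward(self, x):                       # x: (batch, seq, d_model)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(m):                           # -> (batch, heads, seq, d_head)
            return m.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        att = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = (att @ v) * self.z_head.view(1, -1, 1, 1)    # fine: drop pruned heads
        attn = self.attn_out(heads.transpose(1, 2).reshape(b, t, d))
        x = x + self.z_mha * attn                            # coarse: drop whole MHA
        h = F.gelu(self.ffn_in(x)) * self.z_int              # fine: drop hidden units
        return x + self.z_ffn * self.ffn_out(h)              # coarse: drop whole FFN


# The effective pruning decision for a weight is the product of the masks
# covering it, e.g. a head survives only if both its z_head entry and z_mha
# stay nonzero; zeroed heads, units, or sub-layers can be removed outright.
layer = MaskedTransformerLayer()
print(layer(torch.randn(2, 8, 64)).shape)  # torch.Size([2, 8, 64])
```

The layerwise distillation mentioned in the abstract would, in a similar sketch, add a loss term matching hidden states of the unpruned teacher to those of the remaining pruned layers during optimization; it is omitted here for brevity.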