Paper Title
Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
Paper Authors
Paper Abstract
This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity, which reduces the overhead otherwise incurred by regularization. The proposed training methodology, Campfire, explores pruning at granularities within convolutional kernels and filters. We study various tradeoffs with respect to pruning duration, level of sparsity, and learning rate configuration. We show that our method creates sparse versions of ResNet-50 and ResNet-50 v1.5 on full ImageNet while remaining within a <1% margin of accuracy loss. To ensure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with a 70% target sparsity, over 75% top-1 accuracy is achievable.
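For intuition, the following is a minimal sketch of gradual, structured magnitude pruning at kernel granularity, the general technique the abstract describes. The cubic ramp schedule, the L1 kernel scoring, the epoch boundaries, and all function names are illustrative assumptions, not Campfire's exact algorithm.

```python
# Sketch of gradual structured pruning for a conv weight tensor of
# shape (out_channels, in_channels, kH, kW). Hypothetical schedule
# and scoring; not the paper's actual implementation.
import numpy as np

def sparsity_at_epoch(epoch, final_sparsity=0.70, start=10, end=40):
    """Cubic ramp from 0 to final_sparsity between `start` and `end`
    epochs; beyond `end` the sparsity (and mask) is held fixed."""
    if epoch < start:
        return 0.0
    if epoch >= end:
        return final_sparsity
    frac = (epoch - start) / (end - start)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def kernel_mask(weights, sparsity):
    """Zero out whole kH x kW kernels with the smallest L1 norms, so
    the surviving structure maps onto accelerator-friendly blocks."""
    out_c, in_c, kh, kw = weights.shape
    scores = np.abs(weights).sum(axis=(2, 3)).reshape(-1)  # one score per kernel
    k = int(sparsity * scores.size)
    mask = np.ones_like(scores)
    if k > 0:
        mask[np.argsort(scores)[:k]] = 0.0                 # prune weakest kernels
    return mask.reshape(out_c, in_c, 1, 1)

# Usage: re-derive the mask while the ramp is active, then freeze it,
# yielding the fixed sparse weight matrices the abstract refers to.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64, 3, 3))
mask = np.ones((64, 64, 1, 1))
for epoch in range(50):
    if epoch <= 40:                        # ramp phase: mask may still change
        mask = kernel_mask(w, sparsity_at_epoch(epoch))
    w *= mask                              # pruned weights stay zero afterwards
    # ... forward/backward pass and optimizer step on w would go here ...
```

Pruning whole kernels (rather than individual weights) is what makes the sparsity "structured": the zero pattern is coarse and regular enough for a hardware accelerator to skip entire blocks of computation without per-element bookkeeping.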