Paper Title

LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks

Authors

Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto

Abstract

LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural networks with a sparse topology. Let the sensitivity of a network parameter be the variation of the loss function with respect to the variation of the parameter. Parameters with low sensitivity, i.e. having little impact on the loss when perturbed, are shrunk and then pruned to sparsify the network. Our method allows training a network from scratch, i.e. without preliminary learning or rewinding. Experiments on multiple architectures and datasets show competitive compression ratios with minimal computational overhead.
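The abstract describes ranking parameters by their sensitivity to the loss and pruning the insensitive ones. The NumPy sketch below illustrates that general idea on a toy linear model; it is an illustrative assumption, not the paper's actual regularizer, and it uses a simple "loss change when a weight is zeroed" proxy for sensitivity:

```python
import numpy as np

# Hypothetical toy sketch of sensitivity-driven pruning (NOT the paper's
# exact LOBSTER scheme): the "sensitivity" of a weight is estimated here
# as the increase in loss observed when that weight is zeroed out.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 0.5]   # only the first 3 weights matter
y = X @ true_w                  # noiseless linear targets

def loss(w):
    """Mean-squared-error loss of a linear model."""
    r = X @ w - y
    return 0.5 * float(np.mean(r ** 2))

# Train from scratch with plain gradient descent (no pre-training, no rewinding)
w = rng.normal(scale=0.1, size=10)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad

# Sensitivity proxy: loss change when each weight is perturbed to zero
base = loss(w)
sensitivity = np.empty_like(w)
for i in range(w.size):
    w_zeroed = w.copy()
    w_zeroed[i] = 0.0
    sensitivity[i] = loss(w_zeroed) - base

# Prune (zero out) the low-sensitivity weights to sparsify the model
mask = sensitivity > 1e-6
w_pruned = w * mask
print("surviving weights:", int(mask.sum()))
```

After training, the seven weights whose true value is zero barely affect the loss, so their sensitivity proxy is negligible and they are pruned, while the three useful weights survive.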
