Paper Title
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks
Paper Authors
Paper Abstract
Fully quantized training (FQT), which uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model, is a promising approach to accelerate the training of deep neural networks. One major challenge with FQT is the lack of theoretical understanding, in particular of how gradient quantization impacts convergence properties. In this paper, we address this problem by presenting a statistical framework for analyzing FQT algorithms. We view the quantized gradient of FQT as a stochastic estimator of its full precision counterpart, a procedure known as quantization-aware training (QAT). We show that the FQT gradient is an unbiased estimator of the QAT gradient, and we discuss the impact of gradient quantization on its variance. Inspired by these theoretical results, we develop two novel gradient quantizers, and we show that these have smaller variance than the existing per-tensor quantizer. For training ResNet-50 on ImageNet, our 5-bit block Householder quantizer achieves only 0.5% validation accuracy loss relative to QAT, comparable to the existing INT8 baseline.
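To make the "unbiased estimator" claim concrete, below is a minimal Python/NumPy sketch of a per-tensor gradient quantizer based on stochastic rounding. It is not the authors' implementation; the function name, scale computation, and bit-width choice are illustrative assumptions. The point it demonstrates is that, with stochastic rounding, the expectation of the quantized tensor equals the full-precision input, so the quantized gradient is an unbiased (but higher-variance) estimator of the QAT gradient.

# Illustrative sketch only: a per-tensor stochastic-rounding quantizer.
# Names and scale choice are assumptions, not the paper's exact method.
import numpy as np

def per_tensor_quantize(grad: np.ndarray, bits: int = 5, rng=None) -> np.ndarray:
    """Quantize a gradient tensor to 2**bits levels with stochastic rounding.

    Stochastic rounding makes the output an unbiased estimator of the input:
    E[per_tensor_quantize(grad)] == grad.
    """
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** bits - 1
    lo, hi = grad.min(), grad.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    # Map onto the integer grid [0, levels].
    x = (grad - lo) / scale
    floor = np.floor(x)
    # Round up with probability equal to the fractional part.
    q = floor + (rng.random(x.shape) < (x - floor))
    # Map back to the original range.
    return q * scale + lo

# Empirical check of unbiasedness: averaging many independent quantizations
# of the same gradient recovers the full-precision values.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.normal(size=(4, 4))
    est = np.mean([per_tensor_quantize(g, bits=5, rng=rng) for _ in range(10000)], axis=0)
    print(np.max(np.abs(est - g)))  # small, consistent with E[quantized grad] = grad

The paper's contribution, as summarized in the abstract, is to analyze the variance introduced by such quantizers and to design alternatives (including a block Householder quantizer) whose variance is smaller than this per-tensor scheme.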