Title
Improving the Robustness and Accuracy of Deep Neural Networks with $L_{2,\infty}$ Normalization
Authors
Abstract
In this paper, the robustness and accuracy of deep neural networks (DNNs) are enhanced by introducing the $L_{2,\infty}$ normalization of the weight matrices of a DNN with ReLU as the activation function. It is proved that the $L_{2,\infty}$ normalization leads to larger dihedral angles between adjacent faces of the polyhedron graph of the DNN function, and hence to smoother DNN functions, which reduces over-fitting. A measure of the robustness of a classification DNN is proposed: the average radius of the maximal robust spheres centered at the sample data. A lower bound for this robustness measure is given in terms of the $L_{2,\infty}$ norm. Furthermore, an upper bound for the Rademacher complexity of DNNs with $L_{2,\infty}$ normalization is given. An algorithm for training a DNN with $L_{2,\infty}$ normalization is presented, and experimental results show that the $L_{2,\infty}$ normalization is effective in improving both robustness and accuracy.
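The abstract does not spell out the normalization step itself. A minimal sketch, assuming the common convention that the $L_{2,\infty}$ norm of a weight matrix is the maximum $L_2$ norm over its rows, and that normalization means projecting the matrix so this norm is at most a bound $c$ (the function names and the bound $c$ are illustrative, not from the paper):

```python
import numpy as np

def l2_inf_norm(W):
    # L_{2,infty} norm of a matrix: the maximum L2 norm over its rows.
    return np.max(np.linalg.norm(W, axis=1))

def normalize_l2_inf(W, c=1.0):
    # Project W so that its L_{2,infty} norm is at most c:
    # rescale only those rows whose L2 norm exceeds c,
    # leaving rows already within the bound unchanged.
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(row_norms, 1e-12))
    return W * scale
```

In training, such a projection would typically be applied to each layer's weight matrix after every gradient step, analogous to other norm-constrained training schemes.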