Paper Title

DL-Reg: A Deep Learning Regularization Technique using Linear Regression

Paper Authors

Maryam Dialameh, Ali Hamzeh, Hossein Rahmani

Paper Abstract

Regularization plays a vital role in deep learning by protecting deep neural networks from overfitting. This paper proposes a novel deep learning regularization method named DL-Reg, which carefully reduces the nonlinearity of deep networks to a certain extent by explicitly encouraging the network to behave as linearly as possible. The key idea is to add a linear constraint to the objective function of the deep neural network, which is simply the error of a linear mapping from the inputs to the outputs of the model. More precisely, the proposed DL-Reg carefully forces the network to behave in a linear manner. This linear constraint, which is further adjusted by a regularization factor, prevents the network from overfitting. The performance of DL-Reg is evaluated by training state-of-the-art deep network models on several benchmark datasets. The experimental results show that the proposed regularization method: 1) yields major improvements over existing regularization techniques, and 2) significantly improves the performance of deep neural networks, especially on small training datasets.
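
To make the key idea concrete, below is a minimal PyTorch sketch of how such a penalty could be implemented: on each batch, fit a least-squares linear (here affine) map from the flattened inputs to the network outputs and penalize its residual. The function name dl_reg_penalty, the bias column, the detach on the fit, and the weight 0.01 are illustrative assumptions under this reading of the abstract, not the authors' reference implementation.

```python
import torch

def dl_reg_penalty(x, y_hat):
    """Sketch of a DL-Reg-style penalty: the squared residual of the best
    least-squares affine map from the batch inputs to the network outputs.
    Driving this term toward zero encourages the network to behave linearly."""
    x_flat = x.flatten(start_dim=1)                       # (batch, d_in)
    ones = torch.ones(x_flat.size(0), 1,
                      device=x.device, dtype=x.dtype)     # bias column for an affine fit
    A = torch.cat([x_flat, ones], dim=1)                  # design matrix
    # Closed-form least-squares fit y_hat ≈ A @ W; the fit itself is treated
    # as a constant here (a simplifying assumption), so gradients flow only
    # through the residual below.
    W = torch.linalg.lstsq(A, y_hat.detach()).solution
    return (y_hat - A @ W).pow(2).mean()

# Usage in a training step (0.01 stands in for the regularization factor
# mentioned in the abstract; its actual value is a tuning choice):
# outputs = model(inputs)
# loss = criterion(outputs, targets) + 0.01 * dl_reg_penalty(inputs, outputs)
```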
