Paper title
A novel nonconvex, smooth-at-origin penalty for statistical learning
Paper authors
Abstract
Nonconvex penalties are utilized for regularization in high-dimensional statistical learning algorithms primarily because they yield unbiased or nearly unbiased estimators of the model parameters. Nonconvex penalties in the literature, such as SCAD, MCP, Laplace, and arctan, have a singularity at the origin, which also makes them useful for variable selection. However, in several high-dimensional frameworks such as deep learning, variable selection is less of a concern. In this paper, we present a nonconvex penalty that is smooth at the origin. The paper includes asymptotic results for ordinary least squares estimators regularized with the new penalty function, showing an asymptotic bias that vanishes exponentially fast. We also conduct an empirical study employing a deep neural network architecture on three datasets and a convolutional neural network on four datasets. The empirical study shows better performance for the new regularization approach on five of the seven datasets.
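The distinction between a penalty with a singularity at the origin and one that is smooth there can be checked numerically via one-sided difference quotients at zero. The abstract does not specify the new penalty's form, so the sketch below uses the Laplace penalty λ(1 − e^(−|θ|/γ)) from the cited literature as the singular example and a hypothetical smooth variant λ(1 − e^(−θ²/γ)) (an illustrative stand-in, not the paper's penalty) as the smooth one:

```python
import math

lam, gamma, h = 1.0, 0.5, 1e-6  # penalty scale, shape, and step size

# Laplace penalty (from the literature): the |t| term creates a kink
# (singularity) at the origin, which is what enables variable selection.
def laplace_pen(t):
    return lam * (1.0 - math.exp(-abs(t) / gamma))

# Hypothetical smooth-at-origin nonconvex penalty (illustrative only, NOT
# the paper's penalty): replacing |t| with t^2 removes the kink at zero.
def smooth_pen(t):
    return lam * (1.0 - math.exp(-t * t / gamma))

def one_sided_slopes(pen):
    """Left and right difference quotients of pen at the origin."""
    right = (pen(h) - pen(0.0)) / h
    left = (pen(0.0) - pen(-h)) / h
    return left, right

l_left, l_right = one_sided_slopes(laplace_pen)  # ~ -lam/gamma and +lam/gamma
s_left, s_right = one_sided_slopes(smooth_pen)   # both ~ 0: differentiable at 0
```

With λ = 1 and γ = 0.5, the Laplace penalty's one-sided slopes at zero are roughly −2 and +2 (a kink), while the smooth variant's are both near 0, mirroring the singular-versus-smooth contrast the abstract draws.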