Paper Title

The Effect of Data Dimensionality on Neural Network Prunability

Authors

Zachary Ankner, Alex Renda, Gintare Karolina Dziugaite, Jonathan Frankle, Tian Jin

Abstract

Practitioners prune neural networks for efficiency gains and generalization improvements, but few scrutinize the factors determining the prunability of a neural network: the maximum fraction of weights that pruning can remove without compromising the model's test accuracy. In this work, we study the properties of input data that may contribute to the prunability of a neural network. For high-dimensional input data such as images, text, and audio, the manifold hypothesis suggests that these inputs approximately lie on or near a significantly lower-dimensional manifold. Prior work demonstrates that the underlying low-dimensional structure of the input data may affect the sample efficiency of learning. In this paper, we investigate whether the low-dimensional structure of the input data affects the prunability of a neural network.
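To make the notion of prunability concrete, the sketch below illustrates magnitude pruning, one common pruning criterion: remove a given fraction of the smallest-magnitude weights. This is a minimal illustrative example, not necessarily the pruning procedure used in the paper; the function name and flat-list representation of weights are assumptions for illustration. Prunability would then be the largest `fraction` for which the pruned model still matches the original's test accuracy.

```python
def magnitude_prune(weights, fraction):
    """Zero out the `fraction` of entries with smallest magnitude.

    `weights` is a flat list of floats standing in for a network's
    parameters; returns a new list with pruned entries set to 0.0.
    """
    k = int(fraction * len(weights))
    if k == 0:
        return list(weights)
    # Indices of the k smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.1, -2.0, 0.03, 1.5, -0.2, 0.9]
pruned = magnitude_prune(w, 0.5)
print(pruned)  # [0.0, -2.0, 0.0, 1.5, 0.0, 0.9]
```

In practice one would sweep `fraction`, retrain or fine-tune after pruning, and measure test accuracy at each level to locate the largest fraction that leaves accuracy intact.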
