Paper Title

Learning from Randomly Initialized Neural Network Features

Paper Authors

Ehsan Amid, Rohan Anil, Wojciech Kotłowski, Manfred K. Warmuth

Paper Abstract

We present the surprising result that randomly initialized neural networks are good feature extractors in expectation. These random features correspond to finite-sample realizations of what we call Neural Network Prior Kernel (NNPK), which is inherently infinite-dimensional. We conduct ablations across multiple architectures of varying sizes as well as initializations and activation functions. Our analysis suggests that certain structures that manifest in a trained model are already present at initialization. Therefore, NNPK may provide further insight into why neural networks are so effective in learning such structures.
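As a reading aid (not from the paper): the "finite-sample realizations" in the abstract can be pictured as freezing one random draw of the weights θ and using the network's representation map φ_θ as a fixed feature extractor; the infinite-dimensional NNPK then plausibly corresponds to the expected feature inner product E_θ[⟨φ_θ(x), φ_θ(x′)⟩] over the initialization distribution. Below is a minimal PyTorch sketch of this random-feature setup. The architecture, layer sizes, toy data, and the linear probe are illustrative assumptions, not the authors' experimental configuration.

import torch
import torch.nn as nn

torch.manual_seed(0)  # fixes one random draw of the initialization

# Randomly initialized feature extractor; its weights are never trained.
feature_net = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
)
for p in feature_net.parameters():
    p.requires_grad_(False)

# Stand-in data (shapes and label count are arbitrary assumptions).
x = torch.randn(256, 784)
y = torch.randint(0, 10, (256,))

# One finite-sample realization of the random features.
with torch.no_grad():
    feats = feature_net(x)

# Linear probe trained on top of the frozen random features.
probe = nn.Linear(256, 10)
opt = torch.optim.SGD(probe.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(feats), y)
    loss.backward()
    opt.step()
print(f"probe loss on random features: {loss.item():.3f}")

Averaging the implied feature inner products over many independent draws of the initialization would approximate the infinite-dimensional kernel the abstract refers to; a single draw, as above, is one finite-sample realization.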
