Paper Title

Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms

Paper Authors

Surbhi Goel, Sham Kakade, Adam Tauman Kalai, Cyril Zhang

Paper Abstract

Neural networks (NNs) struggle to efficiently solve certain problems, such as learning parities, even when there are simple learning algorithms for those problems. Can NNs discover learning algorithms on their own? We exhibit a NN architecture that, in polynomial time, learns as well as any efficient learning algorithm describable by a constant-sized program. For example, on parity problems, the NN learns as well as Gaussian elimination, an efficient algorithm that can be succinctly described. Our architecture combines both recurrent weight sharing between layers and convolutional weight sharing to reduce the number of parameters down to a constant, even though the network itself may have trillions of nodes. While in practice the constants in our analysis are too large to be directly meaningful, our work suggests that the synergy of Recurrent and Convolutional NNs (RCNNs) may be more natural and powerful than either alone, particularly for concisely parameterizing discrete algorithms.
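
The central architectural idea in the abstract is weight sharing in two directions at once: the same convolution is reused at every position (convolutional sharing) and the same layer is reused at every depth step (recurrent sharing), so the parameter count stays constant no matter how wide or deep the unrolled network is. The following is a minimal sketch of that idea in PyTorch; the class name `RCNN` and the hyperparameters `channels`, `kernel_size`, and `steps` are illustrative assumptions, not the paper's actual construction.

```python
import torch
import torch.nn as nn


class RCNN(nn.Module):
    """Minimal sketch of recurrent + convolutional weight sharing.

    One 1-D convolution is reused at every depth step (recurrent sharing),
    and the convolution itself reuses its kernel across input positions
    (convolutional sharing), so the number of parameters is a constant
    independent of both the number of steps and the input length.
    Hypothetical hyperparameters; not the construction from the paper.
    """

    def __init__(self, channels: int = 8, kernel_size: int = 3, steps: int = 16):
        super().__init__()
        self.steps = steps
        pad = kernel_size // 2
        self.embed = nn.Conv1d(1, channels, kernel_size, padding=pad)
        self.step = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.readout = nn.Conv1d(channels, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, length)
        h = torch.relu(self.embed(x))
        for _ in range(self.steps):           # same conv weights reused at every layer
            h = torch.relu(self.step(h))
        return self.readout(h).mean(dim=-1)   # (batch, 1) prediction


if __name__ == "__main__":
    model = RCNN()
    n_params = sum(p.numel() for p in model.parameters())
    x = torch.randn(4, 1, 32)                 # batch of 4 sequences of length 32
    print(model(x).shape, "parameters:", n_params)
```

Note that increasing `steps` or the input length changes the size of the unrolled computation graph but not `n_params`, which is the constant-parameter property the abstract highlights.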
