Paper Title

Universal Approximation Property of Fully Convolutional Neural Networks with Zero Padding

Paper Authors

Geonho Hwang, Myungjoo Kang

Abstract

The Convolutional Neural Network (CNN) is one of the most prominent neural network architectures in deep learning. Despite its widespread adoption, our understanding of its universal approximation properties has been limited due to its intricate nature. CNNs inherently function as tensor-to-tensor mappings, preserving the spatial structure of input data. However, limited research has explored the universal approximation properties of fully convolutional neural networks as arbitrary continuous tensor-to-tensor functions. In this study, we demonstrate that CNNs, when utilizing zero padding, can approximate arbitrary continuous functions in cases where both the input and output values exhibit the same spatial shape. Additionally, we determine the minimum depth of the neural network required for approximation and substantiate its optimality. We also verify that deep, narrow CNNs possess the universal approximation property (UAP) as tensor-to-tensor functions. The results encompass a wide range of activation functions, and our research covers CNNs of all dimensions.
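The abstract's setting hinges on zero padding making a convolution a shape-preserving tensor-to-tensor map. The sketch below (not from the paper; the function name and numpy implementation are illustrative assumptions) shows a 1-D convolution with "same" zero padding, so the output has the same spatial length as the input:

```python
import numpy as np

def conv1d_zero_pad(x, kernel):
    """Illustrative 1-D convolution with 'same' zero padding:
    the output keeps the input's spatial length, as in the
    tensor-to-tensor setting the paper studies."""
    k = len(kernel)
    pad = k // 2  # assumes an odd kernel size
    xp = np.pad(x, pad)  # zeros appended on both ends
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(len(x))])

x = np.arange(5.0)                    # input with spatial length 5
kernel = np.array([1.0, 0.0, -1.0])   # arbitrary filter of size 3
y = conv1d_zero_pad(x, kernel)
print(y.shape)  # (5,): spatial shape is preserved
```

Without the padding, each size-3 convolution would shrink the spatial length by 2, so stacking many layers (the deep, narrow regime in the abstract) would eventually destroy the spatial structure; zero padding is what lets depth grow while input and output keep the same spatial shape.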
