Paper Title

Compositional Sparsity, Approximation Classes, and Parametric Transport Equations

Paper Authors

Dahmen, Wolfgang

Paper Abstract

Approximating functions of a large number of variables poses particular challenges often subsumed under the term ``Curse of Dimensionality'' (CoD). Unless the approximated function exhibits a very high level of smoothness, the CoD can be avoided only by exploiting some typically hidden {\em structural sparsity}. In this paper we propose a general framework for new model classes of functions in high dimensions. They are based on suitable notions of {\em compositional dimension-sparsity} quantifying, on a continuous level, approximability by compositions with certain structural properties. In particular, this describes scenarios where deep neural networks can avoid the CoD. The relevance of these concepts is demonstrated for {\em solution manifolds} of parametric transport equations. For such PDEs, parameter-to-solution maps do not enjoy the type of high-order regularity that helps to avoid the CoD by more conventional methods in other model scenarios. Compositional sparsity is shown to serve as the key mechanism for proving that sparsity of problem data is inherited in a quantifiable way by the solution manifold. In particular, one obtains convergence rates for deep neural network realizations showing that the CoD is indeed avoided.
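
To fix ideas, here is a minimal sketch of the setting; the generic form below is an illustrative assumption, and the precise formulation treated in the paper may differ. A parametric transport equation asks for $u = u(x;y)$ satisfying
\[
  b(x;y) \cdot \nabla_x u(x;y) = f(x) \quad \text{in } \Omega, \qquad u(\cdot;y) = g \ \text{on the inflow boundary } \Gamma_-(y),
\]
for each parameter $y \in \mathcal{Y}$, where $b(\cdot;y)$ is a parameter-dependent convection field. The object of interest is the parameter-to-solution map $y \mapsto u(\cdot;y)$ and its image, the {\em solution manifold} $\{u(\cdot;y) : y \in \mathcal{Y}\}$. The CoD refers to the fact that, for classical smoothness classes of order $s$, approximation with $n$ degrees of freedom yields rates of order $n^{-s/d}$ in dimension $d$, so the cost of reaching a fixed target accuracy grows exponentially in $d$ unless additional structure, such as compositional sparsity, is exploited.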
