Paper Title
Neural Control of Discrete Weak Formulations: Galerkin, Least-Squares and Minimal-Residual Methods with Quasi-Optimal Weights
Paper Authors
Paper Abstract
There is tremendous potential in using neural networks to optimize numerical methods. In this paper, we introduce and analyse a framework for the neural optimization of discrete weak formulations, suitable for finite element methods. The main idea of the framework is to include a neural-network function acting as a control variable in the weak form. Finding the neural control that (quasi-)minimizes a suitable cost (or loss) functional then yields a numerical approximation with desirable attributes. In particular, the framework naturally allows the incorporation of known data about the exact solution, or of stabilization mechanisms (e.g., to remove spurious oscillations). The main result of our analysis pertains to the well-posedness and convergence of the associated constrained-optimization problem. In particular, we prove, under certain conditions, that the discrete weak forms are stable and that quasi-minimizing neural controls exist, which converge quasi-optimally. We specialize the analysis to Galerkin, least-squares and minimal-residual formulations, in which the neural-network dependence appears in the form of suitable weights. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
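To fix ideas, the constrained-optimization problem described in the abstract can be sketched schematically as follows; the notation here is illustrative and not necessarily the paper's own. Given a neural-network control with parameters \(\theta\) producing a weight function \(w_\theta\), one seeks

\[
\min_{\theta}\; J\bigl(u_h(\theta)\bigr)
\quad \text{subject to} \quad
b_{w_\theta}\bigl(u_h(\theta),\, v_h\bigr) = \ell(v_h)
\quad \forall\, v_h \in V_h,
\]

where \(V_h\) is a finite element space, \(b_{w_\theta}\) is a discrete weak form in which the network output enters as a weight (e.g., in a weighted least-squares setting, \(b_{w_\theta}(u,v) = \sum_K w_\theta\big|_K \,(Ru,\,Rv)_K\) for some residual operator \(R\)), and \(J\) is a cost functional encoding, for instance, misfit to known solution data or a stabilization criterion.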