Paper Title

Data-Driven Finite Elements Methods: Machine Learning Acceleration of Goal-Oriented Computations

Authors

Ignacio Brevis, Ignacio Muga, Kristoffer G. van der Zee

Abstract

We introduce the concept of data-driven finite element methods. These are finite-element discretizations of partial differential equations (PDEs) that resolve quantities of interest with striking accuracy, regardless of the underlying mesh size. The methods are obtained within a machine-learning framework during which the parameters defining the method are tuned against available training data. In particular, we use a stable parametric Petrov-Galerkin method that is equivalent to a minimal-residual formulation using a weighted norm. While the trial space is a standard finite element space, the test space has parameters that are tuned in an off-line stage. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored towards the quantity of interest. As is natural in deep learning, we use an artificial neural network to define the parametric family of test spaces. Using numerical examples for the Laplacian and advection equation in one and two dimensions, we demonstrate that the data-driven finite element method has superior approximation of quantities of interest even on very coarse meshes.
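To make the idea concrete, here is a minimal sketch, under our own assumptions, of the mechanism the abstract describes: a coarse minimal-residual discretization whose residual norm carries learnable weights, with the weights tuned in an offline stage so that a quantity of interest (QoI) matches training data. The model problem, the diagonal weighting, and the plain gradient-descent loop are all illustrative stand-ins (the paper uses a Petrov-Galerkin method with a neural-network-parameterized test space); none of this is the authors' code.

```python
import numpy as np

# Illustrative model problem (our choice, not the paper's): 1D advection
# u'(x) = f(x) on (0,1), u(0) = 0, with f(x) = 2x, so u(x) = x^2 and the
# exact QoI is u(1) = 1.  The trial space is coarse piecewise linears; the
# residual is sampled on a finer upwind grid and measured in a weighted norm.
n_f, n_c = 16, 4                      # fine residual grid / coarse trial space
h_f, H = 1.0 / n_f, 1.0 / n_c
x_f = h_f * np.arange(1, n_f + 1)     # fine nodes (excluding x = 0)
x_c = H * np.arange(1, n_c + 1)       # coarse nodes (excluding x = 0)

# Prolongation P: coarse nodal values -> piecewise-linear values at fine nodes.
P = np.array([[np.interp(x, np.hstack([0.0, x_c]), np.hstack([0.0, e]))
               for e in np.eye(n_c)] for x in x_f])

# Fine-grid upwind difference operator (the condition u(0) = 0 is built in).
Dif = (np.eye(n_f) - np.eye(n_f, k=-1)) / h_f
B = Dif @ P                           # residual operator: r(c) = B c - b
b = 2.0 * x_f                         # f sampled on the fine grid

def solve(theta):
    """Coarse solution minimizing the weighted residual ||e^theta * (B c - b)||."""
    D = np.diag(np.exp(2.0 * theta))  # positive weights by construction
    return np.linalg.solve(B.T @ D @ B, B.T @ D @ b)

qoi = lambda c: c[-1]                 # QoI: the approximation of u(1)
q_target = 1.0                        # "training data" (here the exact value)
loss = lambda theta: (qoi(solve(theta)) - q_target) ** 2

# Offline stage: gradient descent on the weight parameters, with
# finite-difference gradients and a backtracking line search.
theta, step = np.zeros(n_f), 1.0
for _ in range(300):
    L0 = loss(theta)
    if L0 < 1e-16:
        break
    g = np.array([(loss(theta + 1e-6 * e) - loss(theta - 1e-6 * e)) / 2e-6
                  for e in np.eye(n_f)])
    step *= 4.0                       # let the accepted step grow again
    while loss(theta - step * g) >= L0 and step > 1e-12:
        step *= 0.5
    theta -= step * g

print("QoI, unweighted norm:", qoi(solve(np.zeros(n_f))))  # 1.0625 here
print("QoI, trained weights:", qoi(solve(theta)))          # approaches 1.0
```

The point mirrors the abstract: the coarse mesh is never refined; only the norm in which the residual is minimized is adapted, which is enough to drive the QoI error far below what the unweighted coarse discretization achieves. The paper replaces the per-node weights above with a neural-network parameterization, so the learned test space generalizes across problem data rather than being fit to a single instance.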
