Paper Title

Survey of machine learning wall models for large eddy simulation

Authors

Aurélien Vadrot, Xiang I. A. Yang, Mahdi Abkar

Abstract

This survey investigates wall modeling in large eddy simulation (LES) using data-driven machine learning (ML) techniques. To this end, we implement three ML wall models in an open-source code and compare their performance with the equilibrium wall model in LES of half-channel flow at eleven friction Reynolds numbers between $180$ and $10^{10}$. The three models have "seen" flows at only a few Reynolds numbers. We test whether these ML wall models can extrapolate to unseen Reynolds numbers. Among the three models, two are supervised ML models and one is a reinforcement learning ML model. The two supervised ML models are trained against direct numerical simulation (DNS) data, whereas the reinforcement learning ML model is trained in the context of a wall-modeled LES with no access to high-fidelity data. The two supervised ML models capture the law of the wall at both seen and unseen Reynolds numbers, although one model requires re-training and predicts a smaller von Kármán constant. The reinforcement learning model captures the law of the wall reasonably well but has errors at both low ($Re_\tau < 10^3$) and high ($Re_\tau > 10^6$) Reynolds numbers. In addition to documenting the results, we try to "understand" why the ML models behave the way they do. Analysis shows that the errors of the supervised ML model are a result of the network design, and that the errors in the reinforcement learning model arise from the present choice of "states" and the mismatch between the neutral line and the line separating the action map. In all, we see promise in data-driven machine learning wall models.
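For context, the equilibrium wall model used as the baseline in this comparison is, in its simplest algebraic form, an inversion of the logarithmic law of the wall, $U/u_\tau = \kappa^{-1}\ln(y u_\tau/\nu) + B$. The sketch below is a minimal illustration, not the paper's implementation; the constants $\kappa = 0.41$ and $B = 5.2$, the function name, and the example inputs are all assumptions. It solves the implicit log-law relation for the friction velocity $u_\tau$ with Newton iteration, which is how such a model typically returns the wall shear stress to the LES solver.

```python
import math

# Minimal algebraic equilibrium wall model (illustrative sketch, not the
# paper's code): given the LES velocity magnitude U sampled at a matching
# height y above the wall, solve the log law
#   U / u_tau = (1/kappa) * ln(y * u_tau / nu) + B
# for the friction velocity u_tau by Newton iteration. The wall shear stress
# per unit density, tau_w/rho = u_tau**2, is then fed back to the LES.
# kappa and B are standard assumed values, not taken from the paper.

KAPPA = 0.41  # von Karman constant (assumed)
B = 5.2       # log-law intercept (assumed)

def equilibrium_wall_model(U, y, nu, tol=1e-10, max_iter=50):
    """Friction velocity u_tau satisfying the log law, via Newton iteration."""
    u_tau = 0.05 * U  # crude initial guess
    for _ in range(max_iter):
        log_term = math.log(y * u_tau / nu) / KAPPA + B
        f = U - u_tau * log_term      # residual of the log law
        df = -log_term - 1.0 / KAPPA  # derivative d f / d u_tau
        du = -f / df
        u_tau += du
        if abs(du) < tol * u_tau:
            break
    return u_tau

# Example inputs loosely representative of a near-wall LES cell (assumed)
nu = 1e-4   # kinematic viscosity
y = 0.1     # matching height, e.g. the first off-wall grid point
U = 2.0     # LES velocity magnitude sampled at y
u_tau = equilibrium_wall_model(U, y, nu)
print(f"u_tau = {u_tau:.4f}, tau_w/rho = {u_tau**2:.6f}")
```

The ML wall models surveyed replace this fixed functional form: the supervised models learn the mapping from the sampled flow state to $u_\tau$ from DNS data, while the reinforcement learning model learns it online from the wall-modeled LES itself, with no access to high-fidelity data.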
