Title
Deep Reinforcement Learning for Active Flow Control around a Circular Cylinder Using Unsteady-mode Plasma Actuators
Authors
Abstract
Deep reinforcement learning (DRL) algorithms are rapidly making inroads into fluid mechanics, following the remarkable achievements of these techniques in a wide range of science and engineering applications. In this paper, a DRL agent is employed to train an artificial neural network (ANN) on computational fluid dynamics (CFD) data to perform active flow control (AFC) around a two-dimensional circular cylinder. Flow control strategies are investigated at a diameter-based Reynolds number Re_D = 100 using the advantage actor-critic (A2C) algorithm, by means of two symmetric plasma actuators located on the surface of the cylinder near the separation point. The DRL agent interacts with the CFD environment by manipulating the non-dimensional burst frequency (f^+) of the two plasma actuators, and the time-averaged surface pressure is used as the feedback observation for the deep neural networks (DNNs). The results show that regular actuation at a constant non-dimensional burst frequency gives a maximum drag reduction of 21.8%, whereas the DRL agent learns a control strategy that achieves a drag reduction of 22.6%. Analysis of the flow field shows that the drag reduction is accompanied by strong flow reattachment and a significant reduction in the mean velocity magnitude and velocity fluctuations in the wake region. These outcomes demonstrate the capabilities of the DRL paradigm for performing AFC and pave the way toward developing robust flow control strategies for real-life applications.
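The agent-environment interaction described in the abstract can be sketched as a simple actor-critic loop. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: the CFD solver is replaced by a toy surrogate (`ToyCylinderEnv`, with an assumed bell-shaped drag response to the burst frequency f^+ and an assumed ~22% maximum reduction loosely echoing the abstract's figures), and the actor-critic is reduced to a state-independent Gaussian policy with a scalar value baseline, whereas the paper uses deep neural networks fed by time-averaged surface pressure.

```python
import numpy as np

# Toy surrogate for the CFD environment (hypothetical stand-in; the paper
# couples the agent to a 2-D circular-cylinder simulation at Re_D = 100).
class ToyCylinderEnv:
    def __init__(self, n_sensors=8, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_sensors = n_sensors
        self.baseline_drag = 1.0  # drag without actuation (assumed value)

    def step(self, f_plus):
        # Assumed drag response: minimum near f+ = 1.0, up to ~22% reduction.
        drag = self.baseline_drag - 0.22 * np.exp(-(f_plus - 1.0) ** 2)
        drag += self.rng.normal(0.0, 0.005)          # solver/sensor noise
        # Synthetic "time-averaged surface pressure" observation; unused by
        # the simplified policy below, but mirrors what the paper feeds to
        # its deep networks.
        obs = self.rng.normal(-drag, 0.1, self.n_sensors)
        reward = self.baseline_drag - drag           # reward = drag reduction
        return obs, reward


def train(env, steps=3000, lr=0.02, sigma=0.2, seed=1):
    """Minimal actor-critic: Gaussian policy over f+ with a value baseline."""
    rng = np.random.default_rng(seed)
    mu = 1.6        # actor parameter: mean burst frequency (initial guess)
    baseline = 0.0  # critic: running estimate of the expected reward
    for _ in range(steps):
        a = mu + sigma * rng.standard_normal()       # sample an action f+
        _, r = env.step(a)
        adv = r - baseline                           # advantage estimate
        baseline += 0.05 * adv                       # critic (value) update
        mu += lr * adv * (a - mu) / sigma ** 2       # policy-gradient update
    return mu


if __name__ == "__main__":
    f_opt = train(ToyCylinderEnv())
    print(f"learned burst frequency f+ = {f_opt:.2f}")
```

In the actual study the policy and value function are deep networks and each environment step is a full CFD evaluation, so every `step` call is far more expensive; the structure of the interaction loop, however, is the same.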