Implementation of deep reinforcement learning in OpenFOAM for active flow control
Other conference contribution, 2023

Recent advancements in artificial intelligence and machine learning have enabled tackling high-dimensional control and decision-making problems. Deep Reinforcement Learning (DRL), a combination of deep learning and reinforcement learning, can perform immensely complicated cognitive tasks at a superhuman level.
DRL can be utilized in fluid mechanics for various purposes, for instance, training an autonomous glider [1], exploring swimming strategies of multiple fish [2, 3], controlling a fluid-directed rigid body [4], and performing shape optimization [5, 6]. DRL can also be utilized for Active Flow Control (AFC) [7], which is of crucial importance for mitigating damaging effects or enhancing favourable consequences of fluid flows. Optimizing an AFC strategy with classical optimization methods is usually a highly non-linear problem that involves tuning numerous parameters, whereas DRL can learn sophisticated AFC strategies and fully exploit the capabilities of the actuator. It builds on the reinforcement learning concept of exploring the state-action-reward sequence and offers a powerful tool for conducting closed-loop feedback control.
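As a purely illustrative sketch of this state-action-reward loop (not part of the presented framework), the self-contained C++ snippet below replaces the CFD solver with a toy scalar dynamic and the deep-neural-network policy with a simple proportional rule; all names (Environment, Agent, and so on) are hypothetical:

    // Minimal sketch of the state-action-reward loop underlying DRL-based
    // closed-loop control. The toy environment and proportional "policy"
    // are hypothetical placeholders for a CFD solver and a trained DNN.
    #include <cmath>
    #include <cstdio>

    struct State  { double sensor; };   // e.g. a pressure-probe reading
    struct Action { double jet;    };   // e.g. a synthetic-jet flow rate

    // Toy environment: a decaying scalar the action tries to suppress.
    struct Environment
    {
        double x = 1.0;
        State reset() { x = 1.0; return {x}; }
        State step(const Action& a, double& reward)
        {
            x = 0.95*x - 0.1*a.jet;     // crude stand-in for flow dynamics
            reward = -std::fabs(x);     // smaller fluctuation, larger reward
            return {x};
        }
    };

    // Placeholder policy; a DRL agent would evaluate a neural network here.
    struct Agent
    {
        Action act(const State& s) { return {2.0*s.sensor}; }
    };

    int main()
    {
        Environment env;
        Agent agent;
        State s = env.reset();
        for (int t = 0; t < 50; ++t)    // one control episode
        {
            Action a = agent.act(s);    // state -> action
            double r = 0.0;
            s = env.step(a, r);         // action -> next state and reward
            std::printf("t = %d, reward = %.4f\n", t, r);
        }
        return 0;
    }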

In the present work, a coupled DRL-CFD framework was developed within OpenFOAM, as opposed to previous attempts in the literature in which the CFD solver was treated as a black box. Here, the DRL agent is implemented as a boundary condition that senses the environment state, performs an action, and records the corresponding reward. Figure 1 displays a simple flowchart of the developed DRL framework, in which a deep neural network (DNN) serves as the decision maker (i.e., the policy function).
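The abstract does not include source code, so the heavily trimmed C++ skeleton below only suggests what such a boundary-condition-based agent could look like in OpenFOAM. The class name drlJetFvPatchVectorField and the policy placeholder are hypothetical, and constructors, the TypeName/registration macros, and the actual network evaluation and training logic are omitted:

    // Hypothetical skeleton of a DRL actuator implemented as an OpenFOAM
    // boundary condition; names are invented for illustration only.
    #include "fixedValueFvPatchFields.H"
    #include "volFields.H"

    namespace Foam
    {

    class drlJetFvPatchVectorField
    :
        public fixedValueFvPatchVectorField
    {
        // A wrapper around the policy network (DNN) would live here.

    public:

        //- Sense the state, query the policy, and apply the action
        virtual void updateCoeffs()
        {
            if (updated())
            {
                return;
            }

            // 1. Sense: read flow fields from the object registry,
            //    e.g. pressure at probe cells (cell 0 is illustrative)
            const volScalarField& p =
                db().lookupObject<volScalarField>("p");
            const scalar sensedState = p[0];

            // 2. Act: evaluate the policy on the sensed state
            //    (placeholder; a real implementation would query the DNN)
            const scalar jetVelocity = 0.0*sensedState;

            // 3. Apply the action as a face-normal jet velocity
            operator==(jetVelocity*patch().nf());

            // 4. Record the reward (e.g. from the drag and lift
            //    coefficients) so the transition can be used for training.

            fixedValueFvPatchVectorField::updateCoeffs();
        }
    };

    } // namespace Foam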

To test and verify the performance of the developed DRL-CFD software, the simple test case of vortex shedding behind a 2D cylinder is investigated. The actuator is a pair of synthetic jets on the top and bottom of the cylinder. The reward function is defined in terms of the reduction of drag and the absolute value of lift, so the DRL agent (here, a deep neural network) learns to minimize the drag and lift coefficients by applying the optimum jet flow at each time step; one common form of such a reward is sketched below. The DRL agent was trained through a total of 1000 CFD simulations.
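The abstract does not state the exact reward expression. A form that is common in the DRL-AFC literature and consistent with the description above, with the weighting factor $\alpha$ being an assumption, is

    r_t = \left( C_{D,\mathrm{base}} - C_D(t) \right) - \alpha \, \lvert C_L(t) \rvert,

where $C_D$ and $C_L$ are the instantaneous drag and lift coefficients, $C_{D,\mathrm{base}}$ is the uncontrolled (baseline) drag coefficient, and $\alpha > 0$ penalizes lift fluctuations.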
Figure 2 presents the variation of the drag and lift coefficients of the cylinder for both cases. The control starts at t = 40 s, and it can be seen that both forces decrease significantly. The contours of vorticity behind the cylinder for the uncontrolled (baseline) and controlled cases, after reaching a quasi-stationary condition (t = 200 s), are presented in Figure 3. Vortex shedding is considerably weakened in the controlled case.

Authors

Saeed Salehi

Chalmers, Mechanics and Maritime Sciences (M2), Fluid Dynamics

Håkan Nilsson

Chalmers, Mechanics and Maritime Sciences (M2), Fluid Dynamics

18th OpenFOAM Workshop 2023 – Book of unedited abstracts. 18th OpenFOAM Workshop, Genoa, Italy, July 11-14, 2023. Editors: Joel Guerrero, Jan Pralits. figshare. Conference proceedings.

289-291

18th OpenFOAM Workshop
Genoa, Italy

Artificial intelligence for enhanced hydraulic turbine lifetime

Swedish Energy Agency (VKU33020), 2023-01-01 -- 2027-06-30.

Energiforsk AB (VKU33020), 2023-01-01 -- 2027-06-30.

Areas of Advance

Energy

Infrastructure

C3SE (Chalmers Centre for Computational Science and Engineering)

Subject Categories

Fluid Mechanics and Acoustics

Building Technologies

Computer Science

DOI

10.6084/m9.figshare.24081426

More information

Latest update

7/9/2024