Active flow control is a high-dimensional optimization problem. Therefore, in the generic example of the flow around a cylinder, deep reinforcement learning (DRL) is used to achieve optimal flow control by leveraging its power of approximation in high-dimensional spaces. In this study, the flow control is achieved by open-loop control and by closed-loop control. For the flow around a 2D cylinder, the von Kármán vortices impose fluctuating drag and lift forces. Hence, the objective of the flow control is to reduce the drag as well as the fluctuations of drag and lift for the stability of the cylinder. To this end, the cylinder is rotated to control the flow. For the open-loop control, the optimal strategy is determined by a parametric study in which the rotation of the cylinder follows a wave function that counters the natural vortex shedding. For the closed-loop control, the flow control is achieved by deep reinforcement learning. The proximal policy optimization (PPO) algorithm is used to implement the DRL setup: the cylinder is rotated according to the policy network, and pressure sensors are placed on the surface of the cylinder. In each PPO iteration, the start of the controlled trajectory is chosen randomly between t=0s and t=4s.
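As an illustration, here is a minimal sketch of the open-loop control law, assuming a sinusoidal rotation; the amplitude and frequency values are hypothetical (in the study they are sampled, see the LHS section below), as is the function name:

import numpy as np

def cylinder_rotation(t, amplitude=1.0, frequency=3.0):
    # open-loop control law: oscillatory rotation intended to counter
    # the natural vortex shedding (amplitude/frequency are hypothetical)
    return amplitude * np.sin(2.0 * np.pi * frequency * t)

# closed-loop (PPO) trajectories start at a random time between 0 s and 4 s
t_start = np.random.uniform(0.0, 4.0)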
(Video: surface_pressure_desh.mp4)
- Python libraries, Singularity, Docker, ParaView (for visualisation)
For the simulation setup in OpenFOAM, the base case may be found in ./test_cases/cylinder2D_base. For more information, see here.
To build the Singularity image, follow the instructions given here. The Singularity image file (.sif) should be placed in the parent directory.
This base case is executable with the Singularity image as,
singularity run of2006-py1.6-cpu.sif ./Allrun ./test_cases/cylinder2D_base/
For the mesh dependency study, execute the shell script as,
$ ./mesh_study
The mesh is set to the refinement levels 100, 200, and 400. For more refinement levels, change the array mesh_size=( 100 200 400 ) in the shell script. The simulations for the different meshes are generated in ./test_cases/run/mesh_convergence_study/.
The parameters amplitude and frequency for the rotation of the cylinder are sampled with the Latin hypercube sampling (LHS) method (a minimal sketch of the sampling is given after this list). For the LHS sampling, use one of the following options:
Option 1 (with shell script):
$ ./bash_LHS_sampling
Option 2 (with Python script):
$ python3 py_LHS_sampling.py
For the Python script, the Python libraries numpy and matplotlib are required.
The simulations for the LHS samples are found in ./test_cases/run/oscillatory_parameter_study/cases.
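The following is a minimal sketch of LHS sampling with numpy, not the script from the repository; the parameter bounds for amplitude and frequency are hypothetical:

import numpy as np

def lhs_sample(n_samples, bounds, seed=0):
    # Latin hypercube sampling: one random point per equal-probability
    # stratum of each parameter, with the strata randomly permuted
    rng = np.random.default_rng(seed)
    samples = np.empty((n_samples, len(bounds)))
    edges = np.linspace(0.0, 1.0, n_samples + 1)
    for j, (low, high) in enumerate(bounds):
        points = rng.uniform(edges[:-1], edges[1:])  # one point per stratum
        samples[:, j] = low + rng.permutation(points) * (high - low)
    return samples

# hypothetical bounds: amplitude in [0, 5], frequency in [0, 1]
params = lhs_sample(20, [(0.0, 5.0), (0.0, 1.0)])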
The Python libraries that are used in the DRL setup can be separately installed in a virtual environment by,
pip install -r ./DRL_py/docker/requirements.txt
For the PPO iteration, the simulations in OpenFOAM (the environment) are handled by ./DRL_py/env_local.py.
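Conceptually, the environment exposes a reset/step interface to the PPO agent. The following is an illustrative sketch, not the actual interface of env_local.py; the class, method, and variable names as well as the reward weighting are assumptions:

import numpy as np

class CylinderEnv:
    """Illustrative OpenFOAM environment wrapper; all names are hypothetical."""

    def __init__(self, n_sensors=12):
        self.n_sensors = n_sensors

    def reset(self):
        # the controlled trajectory starts at a random time in [0 s, 4 s]
        self.t_start = np.random.uniform(0.0, 4.0)
        # placeholder for the pressure probes on the cylinder surface
        return np.zeros(self.n_sensors)

    def step(self, omega):
        # in the real setup, the environment script runs OpenFOAM with the
        # rotation rate omega and reads back pressures, drag, and lift
        pressures = np.zeros(self.n_sensors)
        drag, lift = 3.0, 0.5  # dummy force coefficients
        # reward penalizes drag and lift fluctuations (weight is assumed)
        reward = -drag - 0.1 * abs(lift)
        done = False
        return pressures, reward, done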
To set up the training on a local machine, change the machine variable in ./DRL_py/reply_buffer.py to machine = 'local'. See here.
To start training,
$ python3 main.py
On the cluster, the Python libraries are installed by creating a virtual environment as,
module load python/3.7
python3 -m pip install --user virtualenv
python3 -m virtualenv venv
To activate the virtual environment:
source venv/bin/activate
To deactivate:
deactivate
To install the Python libraries in the venv virtual environment,
pip install -r ./DRL_py/docker/requirements.txt
For the PPO iteration on the cluster, the simulations in OpenFOAM (the environment) are handled by ./DRL_py/env_cluster.py.
To set up the training on the cluster, change the machine variable in ./DRL_py/reply_buffer.py to machine = 'cluster'. See here.
To submit the training job on the cluster,
$ cd DRL_py
$ sbatch python_job.sh
The report for this study: https://doi.org/10.5281/zenodo.4897961
BibTeX citation:
@misc{darshan_thummar_2021_4897961,
author = {Darshan Thummar},
title = {{Active flow control in simulations of fluid flows
based on deep reinforcement learning}},
month = may,
year = 2021,
publisher = {Zenodo},
doi = {10.5281/zenodo.4897961},
url = {https://doi.org/10.5281/zenodo.4897961}
}
The PPO implementation is based on chapter 12 of Miguel Morales' excellent book Grokking Deep Reinforcement Learning. For more information, refer to the Notebook.
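For reference, here is a minimal sketch of the clipped surrogate objective at the heart of PPO, written with PyTorch; the tensor names are assumptions, and this is not the repository's implementation:

import torch

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    # probability ratio between the current and the behavior policy
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    # clipping keeps the policy update close to the behavior policy
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # maximize the clipped surrogate, i.e. minimize its negative
    return -torch.min(unclipped, clipped).mean()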
For more information about the base simulation setup and the open-loop control, refer to Schaefer et al. and Tokumaru et al. The robust active flow control is inspired by Rabault et al. and Tokarev et al.