RDE.jl provides a solver for the rotating detonation engine (RDE) model equations presented in Koch et al. (2020). Key features include:
- Multiple Discretization Methods: Supports both finite difference and pseudospectral methods for spatial discretization (see the sketch after this list)
- Reinforcement Learning Interface: Integration with CommonRLInterface.jl
  - Various observation strategies (direct state, Fourier-based)
  - Flexible action spaces (scalar pressure, stepwise control)
  - Customizable reward functions
  - Interactive control capabilities
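The discretization method is chosen when the problem is set up. The sketch below is only illustrative: the `method` keyword and the `FiniteDifferenceMethod`/`PseudospectralMethod` types are assumptions about the API, so consult the `RDEProblem` docstring for the actual option names.

using RDE

# Illustrative only: the `method` keyword and the two method types below are
# assumptions, not the confirmed RDE.jl API; check the RDEProblem docstring.
params = RDEParam()
prob_fd = RDEProblem(params; method=FiniteDifferenceMethod())  # finite difference
prob_ps = RDEProblem(params; method=PseudospectralMethod())    # pseudospectral
solve_pde!(prob_ps)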
You can install RDE.jl using Julia's built-in package manager. From the Julia REPL, type ]
to enter the Pkg REPL mode and run:
pkg> add https://github.com/KristianHolme/RDE.jl
Or, you can use the Pkg API from the Julia REPL:
using Pkg
Pkg.add(url="https://github.com/KristianHolme/RDE.jl")
using RDE
using GLMakie
# Create and solve a basic RDE problem
params = RDEParam()
rde_prob = RDEProblem(params)
solve_pde!(rde_prob)
plot_solution(rde_prob)
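RDEParam accepts simulation settings as keyword arguments (for example `tmax`, the final simulation time, which also appears in the RL examples below). The following sketch runs a shorter simulation and saves the resulting plot; it assumes `plot_solution` returns a Makie Figure that can be passed to GLMakie's `save`:

using RDE, GLMakie

# tmax sets the final simulation time (this keyword also appears in the RL
# examples below); other RDEParam keywords are documented in the package.
params = RDEParam(tmax=50.0)
prob = RDEProblem(params)
solve_pde!(prob)

# Assumption: plot_solution returns a Makie Figure, so it can be written to disk.
fig = plot_solution(prob)
save("rde_solution.png", fig)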
# Initialize environment with parameters
env = RDEEnv(RDEParam(tmax=500.0), dt=20.0f0)
# Create stepwise policy
π = StepwiseRDEPolicy(env,
    [20.0f0, 100.0f0, 200.0f0, 350.0f0],  # Time points
    [[3.5f0, 0.64f0],                     # Control values
     [3.5f0, 0.86f0],
     [3.5f0, 0.64f0],
     [3.5f0, 0.94f0]])
# Run simulation
data = run_policy(π, env)
fig = plot_policy_data(env, data)
# Create animation
animate_policy_data(data, env; fname="stepwise_control", fps=60)
The resulting animation is saved as stepwise_control.mp4.
using RDE, GLMakie
# Launch interactive control interface
env, fig = interactive_control(params=RDEParam())
The package provides extensive support for Deep Reinforcement Learning (DRL) through integration with multiple frameworks:
using RDE
using RLBridge
using PyCall
# Create environment with specific parameters
env = RDEEnv(;
dt=0.1,
τ_smooth=0.01,
params=RDEParam(tmax=100.0),
observation_strategy=FourierObservation(16), # Fourier-based observations
action_type=ScalarPressureAction(), # Control chamber pressure
reward_type=ShockPreservingReward(target_shock_count=3) # Maintain 3 shocks
)
# Convert to Gym environment for SB3
gym_env = convert_to_gym(env)
# Train with PPO
sb = pyimport("sbx")  # SBX: Stable-Baselines3-style algorithms implemented in JAX
model = sb.PPO("MlpPolicy", gym_env, device="cpu")
model.learn(total_timesteps=1_000_000)
# Evaluate trained policy
policy = SBPolicy(env, model.policy)
data = run_policy(policy, env)
plot_policy_data(env, data)
For faster training, the package supports parallel environment execution:
# Create multiple environments
envs = [RDEEnv(dt=0.1, τ_smooth=0.01) for _ in 1:8]
vec_env = RDEVecEnv(envs)
# Convert to SB3 VecEnv
sb_vec_env = convert_to_vec_env(vec_env)
# Train with vectorized environments
model = sb.PPO("MlpPolicy", sb_vec_env)
model.learn(total_timesteps=1_000_000)
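After training, the model can be persisted and reloaded through the usual Stable-Baselines-style interface; the sketch below assumes the SBX `PPO` object provides the standard `save`/`load` methods:

# Save the trained model and reload it later (assumes the SBX PPO object
# exposes the standard Stable-Baselines save/load interface).
model.save("ppo_rde")
model = sb.PPO.load("ppo_rde", env=sb_vec_env)

# Wrap the reloaded policy for evaluation in a single RDEEnv, as above.
policy = SBPolicy(env, model.policy)
data = run_policy(policy, env)
plot_policy_data(env, data)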
Observation strategies:
- StateObservation: Direct state observations
- FourierObservation: Fourier coefficients of the state
- ExperimentalObservation: Custom observation space

Action types:
- ScalarPressureAction: Control chamber pressure
- ScalarAreaScalarPressureAction: Control both pressure and injection area

Reward types:
- ShockSpanReward: Maximize shock wave spacing
- ShockPreservingReward: Maintain a specific number of shocks
- ExperimentalReward: Customizable reward components
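These components plug into the `RDEEnv` keywords shown earlier (`observation_strategy`, `action_type`, `reward_type`). Below is a sketch of an alternative configuration; the zero-argument constructors for `StateObservation`, `ScalarAreaScalarPressureAction`, and `ShockSpanReward` are assumptions, so check their docstrings for required arguments:

using RDE

# Alternative environment configuration built from the components listed above.
# The zero-argument constructors below are assumptions; see the docstrings for
# the actual required arguments.
env = RDEEnv(;
    dt=0.1,
    params=RDEParam(tmax=100.0),
    observation_strategy=StateObservation(),       # raw state observations
    action_type=ScalarAreaScalarPressureAction(),  # control pressure and injection area
    reward_type=ShockSpanReward()                  # maximize shock wave spacing
)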
@article{PhysRevE.101.013106,
  title = {Mode-locked rotating detonation waves: Experiments and a model equation},
  author = {Koch, James and Kurosaka, Mitsuru and Knowlen, Carl and Kutz, J. Nathan},
  journal = {Physical Review E},
  volume = {101},
  issue = {1},
  pages = {013106},
  numpages = {11},
  year = {2020},
  month = jan,
  publisher = {American Physical Society},
  doi = {10.1103/PhysRevE.101.013106},
  url = {https://link.aps.org/doi/10.1103/PhysRevE.101.013106}
}

@article{Koch_2021,
  title = {Multiscale physics of rotating detonation waves: Autosolitons and modulational instabilities},
  author = {Koch, James and Kurosaka, Mitsuru and Knowlen, Carl and Kutz, J. Nathan},
  journal = {Physical Review E},
  volume = {104},
  number = {2},
  issn = {2470-0053},
  year = {2021},
  month = aug,
  publisher = {American Physical Society},
  doi = {10.1103/PhysRevE.104.024210},
  url = {http://dx.doi.org/10.1103/PhysRevE.104.024210}
}