PPO on Airfoil2D-hard-v0 (FluidGym)

This repository is part of the FluidGym benchmark results. It contains Stable Baselines3 PPO agents trained on the Airfoil2D-hard-v0 environment.

Evaluation Results

Global Performance (Aggregated across 5 seeds)

Mean Reward: 1.31 ± 0.24

Per-Seed Statistics

Run     | Mean Reward | Std Dev
Seed 0  | 0.96        | 0.69
Seed 1  | 1.59        | 0.53
Seed 2  | 1.34        | 0.55
Seed 3  | 1.13        | 0.63
Seed 4  | 1.53        | 0.56
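
The aggregate figure above can be reproduced by taking the mean and the population standard deviation of the five per-seed means. That aggregation rule is an assumption, but it matches the reported 1.31 ± 0.24:

import numpy as np

# Per-seed mean rewards from the table above
seed_means = np.array([0.96, 1.59, 1.34, 1.13, 1.53])

# Mean and population std over seed means (assumed aggregation convention)
print(f"{seed_means.mean():.2f} +/- {seed_means.std(ddof=0):.2f}")  # 1.31 +/- 0.24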

About FluidGym

FluidGym is a benchmark for reinforcement learning in active flow control.

Usage

Each seed is contained in its own subdirectory. You can load a model using:

from stable_baselines3 import PPO
model = PPO.load("0/ckpt_latest.zip")
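
To load all five runs at once, a minimal sketch (assuming the seed subdirectories are simply named 0 through 4, matching the table above) is:

from stable_baselines3 import PPO

# One checkpoint per seed subdirectory (0 ... 4)
models = {seed: PPO.load(f"{seed}/ckpt_latest.zip") for seed in range(5)}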

Important: These models were trained with fluidgym==0.0.2. To use them with newer versions of FluidGym, wrap the environment in a FlattenObservation wrapper, as shown below:

import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import PPO

# Build the environment and flatten its observations so they match the
# observation format the checkpoints were trained with
env = fluidgym.make("Airfoil2D-hard-v0")
env = FlattenObservation(env)
model = PPO.load("path_to_model/ckpt_latest.zip")

obs, info = env.reset(seed=42)

# Run a single control step with the deterministic policy
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
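
The snippet above performs a single control step. To get an episode-level number in the spirit of the evaluation results, roll out complete episodes and average the accumulated reward. The sketch below is only illustrative: the episode count and seeding are assumptions, not the protocol behind the reported figures.

import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import PPO

env = FlattenObservation(fluidgym.make("Airfoil2D-hard-v0"))
model = PPO.load("path_to_model/ckpt_latest.zip")

returns = []
for episode in range(5):  # illustrative episode count
    obs, info = env.reset(seed=episode)
    done, total = False, 0.0
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, info = env.step(action)
        total += float(reward)
        done = terminated or truncated
    returns.append(total)

print(sum(returns) / len(returns))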
