Merge pull request #31 from SciML/benchmarking_harness
Added benchmarking function
nicholaskl97 authored Oct 15, 2024
2 parents 6e24f18 + 52b2d8c commit 2e04bf3
Showing 10 changed files with 926 additions and 12 deletions.
10 changes: 6 additions & 4 deletions Project.toml
@@ -4,22 +4,24 @@ authors = ["Nicholas Klugman <[email protected]>"]
version = "0.1.0"

[deps]
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
EvalMetrics = "251d5f9e-10c1-4699-ba24-e0ad168fa3e4"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
NeuralPDE = "315f7962-48a3-4962-8226-d0f33b1235f0"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462"

[compat]
ForwardDiff = "0.10"
JuMP = "1"
Lux = "0.5.45"
Lux = "0.5"
ModelingToolkit = "8, 9"
NLopt = "1"
NeuralPDE = "5.10"
Optimization = "3"
OptimizationOptimJL = "0.2"
OptimizationOptimisers = "0.2"
Optimization = "3, 4"
SciMLBase = "2"
julia = "1.10"

3 changes: 2 additions & 1 deletion docs/make.jl
@@ -17,7 +17,8 @@ MANUAL_PAGES = [
DEMONSTRATION_PAGES = [
"demos/damped_SHO.md",
"demos/roa_estimation.md",
"demos/policy_search.md"
"demos/policy_search.md",
"demos/benchmarking.md"
]

makedocs(;
239 changes: 239 additions & 0 deletions docs/src/demos/benchmarking.md
@@ -0,0 +1,239 @@
# Benchmarking a neural Lyapunov method

In this demonstration, we'll benchmark the neural Lyapunov method used in the [policy search demo](policy_search.md).
In that demonstration, we searched for a neural network policy to stabilize the upright equilibrium of the inverted pendulum.
Here, we will use the [`benchmark`](@ref) function to run approximately the same training, then check the performance of the resulting controller and neural Lyapunov function by simulating the closed-loop system to see (1) how well the controller drives the pendulum to the upright equilibrium, and (2) how well the neural Lyapunov function performs as a classifier of whether a state is in the region of attraction or not.
These results will be represented by a confusion matrix using the simulation results as ground truth.
(Keep in mind that training does no simulation.)
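
For intuition, the confusion matrix simply tallies agreements and disagreements between the simulation-derived labels (the ground truth) and the labels assigned by the neural Lyapunov classifier.
A minimal sketch of that bookkeeping is below; the toy label vectors are hypothetical, and [`benchmark`](@ref) computes the matrix for you (via EvalMetrics.jl) rather than by hand.

```julia
# Toy example: five initial states labeled by simulation (actual) and by the
# neural Lyapunov classifier (predicted); `true` means "in the region of attraction"
actual    = Bool[1, 1, 0, 1, 0]
predicted = Bool[1, 0, 0, 1, 1]

tp = count(  actual .&   predicted)   # correctly labeled as in the RoA
fn = count(  actual .& .!predicted)   # in the RoA, but the classifier says otherwise
fp = count(.!actual .&   predicted)   # outside the RoA, but the classifier says inside
tn = count(.!actual .& .!predicted)   # correctly labeled as outside the RoA
```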

## Copy-Pastable Code

```julia
using NeuralPDE, NeuralLyapunov, Lux
import Boltz.Layers: PeriodicEmbedding
using OptimizationOptimisers
using Random

Random.seed!(200)

# Define dynamics and domain
function open_loop_pendulum_dynamics(x, u, p, t)
θ, ω = x
ζ, ω_0 = p
τ = u[]
return [ω
        -2ζ * ω_0 * ω - ω_0^2 * sin(θ) + τ]
end

lb = [0.0, -10.0];
ub = [2π, 10.0];
upright_equilibrium = [π, 0.0]
p = [0.5, 1.0]
state_syms = [:θ, :ω]
parameter_syms = [:ζ, :ω_0]

# Define neural network discretization
# We use an input layer that is periodic with period 2π with respect to θ
dim_state = length(lb)
dim_hidden = 15
dim_phi = 2
dim_u = 1
dim_output = dim_phi + dim_u
chain = [Lux.Chain(
PeriodicEmbedding([1], [2π]),
Dense(3, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1, use_bias = false)
) for _ in 1:dim_output]

# Define training strategy
strategy = QuasiRandomTraining(1250)

# Define neural Lyapunov structure
structure = PositiveSemiDefiniteStructure(
dim_phi;
pos_def = function (state, fixed_point)
θ, ω = state
θ_eq, ω_eq = fixed_point
log(1.0 + (sin(θ) - sin(θ_eq))^2 + (cos(θ) - cos(θ_eq))^2 + (ω - ω_eq)^2)
end
)
structure = add_policy_search(
structure,
dim_u
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = false)

# Define Lyapunov decrease condition
decrease_condition = AsymptoticStability()

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
structure,
minimization_condition,
decrease_condition
)

# Define optimization parameters
opt = OptimizationOptimisers.Adam()
optimization_args = [:maxiters => 1000]

# Run benchmark
cm, time = benchmark(
open_loop_pendulum_dynamics,
lb,
ub,
spec,
chain,
strategy,
opt;
simulation_time = 200,
n_grid = 20,
fixed_point = upright_equilibrium,
p = p,
optimization_args = optimization_args,
state_syms = state_syms,
parameter_syms = parameter_syms,
policy_search = true,
endpoint_check = (x) -> ≈([sin(x[1]), cos(x[1]), x[2]], [0, -1, 0], atol=5e-3),
)
```

## Detailed Description

Much of the setup is the same as in the [policy search demo](policy_search.md), so see that page for details.


```@example benchmarking
using NeuralPDE, NeuralLyapunov, Lux
import Boltz.Layers: PeriodicEmbedding
using Random
Random.seed!(200)
# Define dynamics and domain
function open_loop_pendulum_dynamics(x, u, p, t)
θ, ω = x
ζ, ω_0 = p
τ = u[]
return [ω
-2ζ * ω_0 * ω - ω_0^2 * sin(θ) + τ]
end
lb = [0.0, -10.0];
ub = [2π, 10.0];
upright_equilibrium = [π, 0.0]
p = [0.5, 1.0]
state_syms = [:θ, :ω]
parameter_syms = [:ζ, :ω_0]
# Define neural network discretization
# We use an input layer that is periodic with period 2π with respect to θ
dim_state = length(lb)
dim_hidden = 15
dim_phi = 2
dim_u = 1
dim_output = dim_phi + dim_u
chain = [Lux.Chain(
PeriodicEmbedding([1], [2π]),
Dense(3, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1, use_bias = false)
) for _ in 1:dim_output]
# Define training strategy
strategy = QuasiRandomTraining(1250)
# Define neural Lyapunov structure
structure = PositiveSemiDefiniteStructure(
dim_phi;
pos_def = function (state, fixed_point)
θ, ω = state
θ_eq, ω_eq = fixed_point
log(1.0 + (sin(θ) - sin(θ_eq))^2 + (cos(θ) - cos(θ_eq))^2 + (ω - ω_eq)^2)
end
)
structure = add_policy_search(
structure,
dim_u
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = false)
# Define Lyapunov decrease condition
decrease_condition = AsymptoticStability()
# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
structure,
minimization_condition,
decrease_condition
)
```

At this point in the [policy search demo](policy_search.md), we constructed the `PDESystem`, discretized it using NeuralPDE.jl, and solved the resulting `OptimizationProblem` using Optimization.jl.
All of that occurs in the [`benchmark`](@ref) function, so we instead provide that function with the optimizer and optimization arguments to use.

```@example benchmarking
using OptimizationOptimisers
# Define optimization parameters
opt = OptimizationOptimisers.Adam()
optimization_args = [:maxiters => 1000]
```

Finally, we can run the [`benchmark`](@ref) function.

```@example benchmarking
endpoint_check = (x) -> ≈([sin(x[1]), cos(x[1]), x[2]], [0, -1, 0], atol=5e-3)
(confusion_matrix, training_time), (states, endpoints, actual, predicted, V_samples, V̇_samples) = benchmark(
open_loop_pendulum_dynamics,
lb,
ub,
spec,
chain,
strategy,
opt;
simulation_time = 200,
n_grid = 20,
fixed_point = upright_equilibrium,
p = p,
optimization_args = optimization_args,
state_syms = state_syms,
parameter_syms = parameter_syms,
policy_search = true,
endpoint_check = endpoint_check,
classifier = (V, V̇, x) -> V̇ < zero(V̇) || endpoint_check(x),
verbose = true
);
nothing # hide
```

In this case, we used the `verbose = true` option to demonstrate the additional outputs it provides, but if you only want the confusion matrix and training time, leave that option off (`verbose` defaults to `false`) and change the first line to:

```julia
confusion_matrix, training_time = benchmark(
```
We can observe the confusion matrix and training time:
```@example benchmarking
confusion_matrix
```
```@example benchmarking
training_time
```
The returned `actual` labels are just `endpoint_check` applied to `endpoints`, which are the results of simulating from each element of `states`.
```@example benchmarking
all(endpoint_check.(endpoints) .== actual)
```
Similarly, the `predicted` labels are the results of the neural Lyapunov classifier.
In this case, we passed in a classifier that checks for negative values of ``\dot{V}`` (or whether the state already passes `endpoint_check`).
```@example benchmarking
classifier = (V, V̇, x) -> V̇ < zero(V̇) || endpoint_check(x)
all(classifier.(V_samples, V̇_samples, states) .== predicted)
```
7 changes: 4 additions & 3 deletions docs/src/demos/policy_search.md
@@ -172,14 +172,15 @@ upright_equilibrium = [π, 0.0]
```

We'll use an architecture that's ``2\pi``-periodic in ``\theta`` so that we can train on just one period of ``\theta`` and don't need to add any periodic boundary conditions.
To achieve that, we use `Lux.PeriodicEmbedding([1], [2pi])`, which enforces `2pi`-periodicity in input number `1`.
To achieve that, we use `Boltz.Layers.PeriodicEmbedding([1], [2pi])`, which enforces `2pi`-periodicity in input number `1`.
Additionally, we include output dimensions for both the neural Lyapunov function and the neural controller.

Other than that, setting up the neural network using Lux and the NeuralPDE training strategy is no different from any other physics-informed neural network problem.
For more on that aspect, see the [NeuralPDE documentation](https://docs.sciml.ai/NeuralPDE/stable/).

```@example policy_search
using Lux, Boltz
using Lux
import Boltz.Layers: PeriodicEmbedding
# Define neural network discretization
# We use an input layer that is periodic with period 2π with respect to θ
@@ -189,7 +190,7 @@ dim_phi = 3
dim_u = 1
dim_output = dim_phi + dim_u
chain = [Lux.Chain(
Boltz.Layers.PeriodicEmbedding([1], [2π]),
PeriodicEmbedding([1], [2π]),
Dense(3, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1)
11 changes: 11 additions & 0 deletions docs/src/man/benchmarking.md
@@ -0,0 +1,11 @@
# Benchmarking neural Lyapunov methods

To facilitate comparison of different neural Lyapunov specifications, optimizers, hyperparameters, etc., we provide the [`benchmark`](@ref) function.

Through its arguments, users may specify how a neural Lyapunov problem is set up: the neural network structure, the physics-informed neural network discretization strategy, and the optimization strategy used to solve the problem.
After solving the problem in the specified manner, the dynamical system is simulated (users can specify an ODE solver in the arguments, as well) and classification by the neural Lyapunov function is compared to the simulation results.
By default, the [`benchmark`](@ref) function returns a confusion matrix for the resultant neural Lyapunov classifier and the training time, so that users can compare accuracy and computation speed of various methods.
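
As a rough, condensed sketch of such a call (adapted from the benchmarking demonstration, which additionally supplies a periodicity-aware `pos_def` and `endpoint_check`; the full argument list is documented below):

```julia
using NeuralPDE, NeuralLyapunov, Lux, OptimizationOptimisers
import Boltz.Layers: PeriodicEmbedding

# Inverted pendulum with a torque input to be found via policy search
function open_loop_pendulum_dynamics(x, u, p, t)
    θ, ω = x
    ζ, ω_0 = p
    τ = u[]
    return [ω
            -2ζ * ω_0 * ω - ω_0^2 * sin(θ) + τ]
end

dim_phi = 2; dim_u = 1
chain = [Lux.Chain(
             PeriodicEmbedding([1], [2π]),   # θ is 2π-periodic
             Dense(3, 15, tanh),
             Dense(15, 15, tanh),
             Dense(15, 1, use_bias = false)
         ) for _ in 1:(dim_phi + dim_u)]

spec = NeuralLyapunovSpecification(
    add_policy_search(PositiveSemiDefiniteStructure(dim_phi), dim_u),
    DontCheckNonnegativity(check_fixed_point = false),
    AsymptoticStability()
)

confusion_matrix, training_time = benchmark(
    open_loop_pendulum_dynamics, [0.0, -10.0], [2π, 10.0], spec, chain,
    QuasiRandomTraining(1250), OptimizationOptimisers.Adam();
    simulation_time = 200,
    fixed_point = [π, 0.0],
    p = [0.5, 1.0],
    optimization_args = [:maxiters => 1000],
    state_syms = [:θ, :ω],
    parameter_syms = [:ζ, :ω_0],
    policy_search = true
)
```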

```@docs
benchmark
```
7 changes: 7 additions & 0 deletions src/NeuralLyapunov.jl
@@ -5,6 +5,9 @@ import JuMP
using LinearAlgebra
using ModelingToolkit
import SciMLBase
using NeuralPDE
import DifferentialEquations: Tsit5
import EvalMetrics: ConfusionMatrix

include("conditions_specification.jl")
include("structure_specification.jl")
@@ -15,6 +18,7 @@ include("NeuralLyapunovPDESystem.jl")
include("numerical_lyapunov_functions.jl")
include("local_lyapunov.jl")
include("policy_search.jl")
include("benchmark_harness.jl")

# Lyapunov function structures
export NeuralLyapunovStructure, UnstructuredNeuralLyapunov, NonnegativeNeuralLyapunov,
@@ -40,4 +44,7 @@ export add_policy_search, get_policy
# Local Lyapunov analysis
export local_lyapunov

# Benchmarking tool
export benchmark

end