From 46b05977f133109a1ba852869e4c1a1cd9e356d9 Mon Sep 17 00:00:00 2001
From: Chris Rackauckas
Date: Thu, 15 Dec 2022 08:36:12 -0500
Subject: [PATCH] Complete the showcase

Fixes https://github.com/SciML/SciMLDocs/issues/93. This is ready to tag, though
there's a shame note in the missing physics one.
---
 docs/src/showcase/gpu_spde.md               | 10 +++--
 docs/src/showcase/massively_parallel_gpu.md | 49 +++++++++++++++++----
 docs/src/showcase/missing_physics.md        |  4 +-
 docs/src/showcase/pinngpu.md                |  2 +-
 docs/src/showcase/showcase.md               |  2 +-
 docs/src/showcase/symbolic_analysis.md      | 27 +++++++++++-
 6 files changed, 77 insertions(+), 17 deletions(-)

diff --git a/docs/src/showcase/gpu_spde.md b/docs/src/showcase/gpu_spde.md
index eba6be919c4..403c5fcf4fb 100644
--- a/docs/src/showcase/gpu_spde.md
+++ b/docs/src/showcase/gpu_spde.md
@@ -48,9 +48,11 @@ mass-action kinetics. If this all is meaningless to you, just understand that it
system of PDEs:

```math
-A_t = D \Delta A + \alpha_A(x) - \beta_A A - r_1 A B + r_2 C
-B_t = \alpha_B - \beta_B B - r_1 A B + r_2 C
-C_t = \alpha_C - \beta_C C + r_1 A B - r_2 C
+\begin{align}
+A_t &= D \Delta A + \alpha_A(x) - \beta_A A - r_1 A B + r_2 C\\
+B_t &= \alpha_B - \beta_B B - r_1 A B + r_2 C\\
+C_t &= \alpha_C - \beta_C C + r_1 A B - r_2 C
+\end{align}
```

One addition that was made to the model is that we let ``\alpha_A(x)`` be the production of
@@ -143,7 +145,7 @@ We can represent our problem with a 3-dimensional tensor, taking each 2-dimensional slice
as our (A,B,C). This means that we can define:

```@example spde
-u0 = zeros(N,N,3)
+u0 = zeros(N,N,3);
```

Now we can decompose it like:
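
That decomposition can be done with views so that no copies are made. A minimal sketch, assuming the `u0` defined above (the showcase's actual code may differ):

```julia
# Each 2-dimensional slice of the tensor holds one chemical species.
# Views alias the parent array: no allocation, and writes go into u0.
A = @view u0[:, :, 1]
B = @view u0[:, :, 2]
C = @view u0[:, :, 3]
```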

diff --git a/docs/src/showcase/massively_parallel_gpu.md b/docs/src/showcase/massively_parallel_gpu.md
index dec5df9c0d2..b3f879ea592 100644
--- a/docs/src/showcase/massively_parallel_gpu.md
+++ b/docs/src/showcase/massively_parallel_gpu.md
@@ -19,8 +19,28 @@ This showcase will focus on the latter case. For the former, see the
## Problem Setup

+Let's say we wanted to quantify the uncertainty in the solution of a differential equation.
+One simple way to do this would be to run a Monte Carlo simulation of the same ODE, randomly
+jiggling around some parameters according to an uncertainty distribution. We could do
+that on a CPU, but that's not hip. What's hip are GPUs! GPUs have thousands of cores, so
+could we make each core of our GPU solve the same ODE but with different parameters?
+The [ensembling tools of DiffEqGPU.jl](https://docs.sciml.ai/DiffEqGPU/stable/) solve
+exactly this issue, and today you will learn how to master the GPUniverse.
+
+Let's dive right in.
+
## Defining the Ensemble Problem for CPU

+DifferentialEquations.jl
+[has an ensemble interface for solving many ODEs](https://docs.sciml.ai/DiffEqDocs/stable/features/ensemble/).
+DiffEqGPU conveniently uses exactly the same interface, so changing a few characters
+is all that's required to turn a CPU-parallelized code into a GPU-parallelized code.
+Given that, let's start with the CPU-parallelized code.
+
+Let's implement the Lorenz equation out-of-place. If you don't know what that means,
+see the [getting started with DifferentialEquations.jl](https://docs.sciml.ai/DiffEqDocs/stable/getting_started/) tutorial.
+
```@example diffeqgpu
using DiffEqGPU, OrdinaryDiffEq, StaticArrays
function lorenz(u, p, t)
@@ -37,25 +57,36 @@ u0 = @SVector [1.0f0; 0.0f0; 0.0f0]
tspan = (0.0f0, 10.0f0)
p = @SVector [10.0f0, 28.0f0, 8 / 3.0f0]
prob = ODEProblem{false}(lorenz, u0, tspan, p)
-prob_func = (prob, i, repeat) -> remake(prob, p = (@SVector rand(Float32, 3)).*p)
-monteprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)
-@time sol = solve(monteprob,Tsit5(),EnsembleThreads(),trajectories=10_000,saveat=1.0f0)
```

+Notice that we use `SVector`s, i.e. StaticArrays, to define our arrays. This is
+important for later, since the GPU will want fully non-allocating code in order to
+build a kernel.
+
+Now, from this problem, we build an `EnsembleProblem` as per the DifferentialEquations.jl
+specification. A `prob_func` jiggles the parameters, and we solve 10_000 trajectories:
+
```@example diffeqgpu
-@time sol = solve(monteprob,Tsit5(),EnsembleThreads(),trajectories=10_000,saveat=1.0f0)
+prob_func = (prob, i, repeat) -> remake(prob, p = (@SVector rand(Float32, 3)).*p)
+monteprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)
+sol = solve(monteprob,Tsit5(),EnsembleThreads(),trajectories=10_000,saveat=1.0f0)
```
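
Since quantifying the uncertainty was the goal, the ensemble can then be summarized. Here is an illustrative sketch, assuming the standard `EnsembleSolution` indexing where `sol[i]` is the `i`-th trajectory:

```julia
using Statistics
# Assumes `sol` is the EnsembleSolution from the EnsembleThreads solve above.
# Collect the final state of each trajectory into a 3 × 10_000 matrix.
finals = [sol[i].u[end][j] for j in 1:3, i in 1:10_000]
μ = vec(mean(finals, dims = 2))  # ensemble mean of (x, y, z) at t = 10
σ = vec(std(finals, dims = 2))   # ensemble spread: a crude uncertainty estimate
```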

## Taking the Ensemble to the GPU

-```@example diffeqgpu
-@time sol = solve(monteprob,Tsit5(),EnsembleGPUArray(),trajectories=10_000,saveat=1.0f0)
-```
+Now, uhh, we just change `EnsembleThreads()` to `EnsembleGPUArray()`:

```@example diffeqgpu
-@time sol = solve(monteprob,Tsit5(),EnsembleGPUArray(),trajectories=10_000,saveat=1.0f0)
+sol = solve(monteprob,Tsit5(),EnsembleGPUArray(),trajectories=10_000,saveat=1.0f0)
```

+Or, for a more efficient version, use `EnsembleGPUKernel()`. But that requires specialized
+solvers, so we also change `Tsit5()` to `GPUTsit5()`:
+
```@example diffeqgpu
-@time sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(), trajectories = 10_000)
+sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(), trajectories = 10_000)
```
+
+Okay, so that was anticlimactic, but that's the point: if it were harder than that, then it
+wouldn't be automatic! Now go check out the [DiffEqGPU.jl documentation](https://docs.sciml.ai/DiffEqGPU/stable/)
+for more details. That's the end of our show.
diff --git a/docs/src/showcase/missing_physics.md b/docs/src/showcase/missing_physics.md
index 08d09c0a48c..d9a7537859b 100644
--- a/docs/src/showcase/missing_physics.md
+++ b/docs/src/showcase/missing_physics.md
@@ -322,7 +322,9 @@ But...
Can we also make it print out the LaTeX for what the missing equations were? Find out
more after the break!

-## Symbolic regression via sparse regression ( SINDy based )
+## Symbolic regression via sparse regression (SINDy based)
+
+#### This part of the showcase is still a work in progress... shame on us. Check back in a jiffy and we'll have it done.

Okay, that was a quick break, and that's good because this next part is pretty cool.
Let's use [DataDrivenDiffEq.jl](https://docs.sciml.ai/DataDrivenDiffEq/stable/) to transform our
diff --git a/docs/src/showcase/pinngpu.md b/docs/src/showcase/pinngpu.md
index 45779cb1f7e..a0225dbc196 100644
--- a/docs/src/showcase/pinngpu.md
+++ b/docs/src/showcase/pinngpu.md
@@ -147,7 +147,7 @@ res = Optimization.solve(prob,Adam(0.01);callback = callback,maxiters=2500);

We then use the `remake` function to rebuild the PDE problem, restarting the
optimization at the optimized parameters, and continuing with a lower learning rate:

-```julia
+```@example pinn
prob = remake(prob,u0=res.u)
res = Optimization.solve(prob,Adam(0.001);callback = callback,maxiters=2500);
```
diff --git a/docs/src/showcase/showcase.md b/docs/src/showcase/showcase.md
index c5c7cccaedc..e475102d33b 100644
--- a/docs/src/showcase/showcase.md
+++ b/docs/src/showcase/showcase.md
@@ -20,4 +20,4 @@ Want to see some cool things that you can do with SciML? Check out the following
* [GPU-Accelerated Stochastic Partial Differential Equations](@ref gpuspde)
* Useful cool wonky things that are hard to find anywhere else
  * [Automatic Uncertainty Quantification, Arbitrary Precision, and Unit Checking in ODE Solutions using Julia's Type System](@ref ode_types)
-  * [Symbolic Analysis of Parameter Identifiability and Model Stability](@ref symbolic_analysis)
+  * [Symbolic-Numeric Analysis of Parameter Identifiability and Model Stability](@ref symbolic_analysis)
diff --git a/docs/src/showcase/symbolic_analysis.md b/docs/src/showcase/symbolic_analysis.md
index 2f325ff5941..88d320250e0 100644
--- a/docs/src/showcase/symbolic_analysis.md
+++ b/docs/src/showcase/symbolic_analysis.md
@@ -1,4 +1,29 @@
-# [Symbolic Analysis of Parameter Identifiability and Model Stability](@id symbolic_analysis)
+# [Symbolic-Numeric Analysis of Parameter Identifiability and Model Stability](@id symbolic_analysis)
+
+The mixture of symbolic computing with numeric computing, which we call symbolic-numeric
+programming, is one of the central features of the SciML ecosystem. With core aspects
+like the [Symbolics.jl Computer Algebra System](https://symbolics.juliasymbolics.org/stable/)
+and its integration via [ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/),
+the SciML ecosystem gracefully mixes analytical symbolic computations with numerical
+solver processes to accelerate solvers, give additional information
+(sparsity, identifiability), automatically fix numerical stability issues, and more.
+
+In this showcase, we will highlight two aspects of symbolic-numeric programming.
+
+1. Automated index reduction of DAEs. While arbitrary [differential-algebraic equation
+   systems can be written in DifferentialEquations.jl](https://docs.sciml.ai/DiffEqDocs/stable/tutorials/dae_example/),
+   not all mathematical formulations of a system are equivalent. Some are numerically
+   difficult to solve, or even require special solvers. Some are easy. Can we recognize
+   which formulations are hard and automatically transform them into the easy ones? Yes.
+2. Structural parameter identifiability. When fitting parameters to data, there are always
+   assumptions about whether there is a unique parameter set that achieves such a data
+   fit. But is this actually the case? The structural identifiability tooling allows one
+   to analytically determine whether, in the limit of infinite data on a subset of
+   observables, one could in theory uniquely identify the parameters (global identifiability),
+   identify the parameters up to a discrete set (local identifiability), or whether
+   there's an infinite manifold of solutions to the inverse problem (nonidentifiable). A
+   quick sketch of such an analysis follows right after this list.
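
For a taste of the second point, here is a minimal sketch using [StructuralIdentifiability.jl](https://docs.sciml.ai/StructuralIdentifiability/stable/) on a Lotka-Volterra model where only the prey `x` is observed (an illustrative model, not necessarily the one the showcase itself uses):

```julia
using StructuralIdentifiability

# Lotka-Volterra dynamics with a single measured output y1(t) = x(t),
# i.e. we pretend we can only collect data on the prey population.
ode = @ODEmodel(
    x'(t) = a * x(t) - b * x(t) * y(t),
    y'(t) = -c * y(t) + d * x(t) * y(t),
    y1(t) = x(t)
)

# Reports, for each parameter, whether it is globally identifiable,
# locally identifiable, or nonidentifiable from y1 alone.
assess_identifiability(ode)
```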
+
+Let's dig into these two cases!

# Automated Index Reduction of DAEs
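
As a preview of where this section goes, a minimal sketch of automated index reduction in ModelingToolkit.jl on the classic Cartesian pendulum, an index-3 DAE (variable names here are illustrative):

```julia
using ModelingToolkit

@parameters g L
@variables t x(t) y(t) vx(t) vy(t) T(t)
D = Differential(t)

# Cartesian pendulum: Newton's laws in x and y, plus the rigid-length constraint.
eqs = [D(x) ~ vx,
       D(y) ~ vy,
       D(vx) ~ T * x,
       D(vy) ~ T * y - g,
       0 ~ x^2 + y^2 - L^2]
@named pendulum = ODESystem(eqs, t)

# Index reduction rewrites the index-3 formulation into one that
# standard DAE solvers can handle.
pendulum_sys = structural_simplify(dae_index_lowering(pendulum))
```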