Automatic, one-line single-parameter sensitivity analysis (a.k.a. univariate or "one-factor-at-a-time" (OFAT) sensitivity analysis) would be a great addition for exploratory model testing and model validation. The idea is to add a function to the EMAworkbench that can run a full sensitivity sweep on a model with a single line of code. The function would work like this:
Request all variables and default values from model
Request all reporters (KPIs) from model
For each variable, run the model with that variable at -X% and +X% of its default, for the specified number of replications
Log the results
Normalize if specified
Return graphs and DataFrame as specified
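The steps above can be sketched as a plain OFAT loop. This is a minimal sketch with a toy stand-in model; `run_model`, `ofat`, and the dict-based interface are illustrative assumptions, not the EMAworkbench API:

```python
import statistics

def run_model(params):
    """Toy stand-in for a single model run; returns the KPIs (reporters)."""
    return {"output": params["a"] * 2 + params["b"]}

def ofat(defaults, variation=0.1, replications=3):
    """One-factor-at-a-time sweep: vary each variable by +/-variation
    around its default while holding all other variables at their defaults."""
    base = run_model(defaults)
    results = {}
    for name in defaults:
        for direction in (-1, 1):
            params = dict(defaults)
            params[name] = defaults[name] * (1 + direction * variation)
            # Average the KPI over the replications (trivial for this
            # deterministic toy model, but needed for stochastic models).
            runs = [run_model(params)["output"] for _ in range(replications)]
            results[(name, direction * variation)] = statistics.mean(runs)
    return base, results

base, results = ofat({"a": 10.0, "b": 5.0}, variation=0.1)
# base["output"] is the default-scenario KPI; results maps
# (variable, delta) pairs to the mean KPI for that perturbed run.
```

Normalization "relative to base" would then simply divide each entry of `results` by the corresponding base value.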
The user can specify:
The number of replications
The number of time steps (= the measurement time)
The amount to vary (for example 5%, 10% or 20%)
The default scenario (values for each variable)
The variables to test for sensitivity (default=all)
The KPIs (reporters) to test against (default=all)
Normalization (none, normalize to base, normalize to base and calculate ratio)
The desired output data (DataFrame, graph, or both)
The output graph could look like this:
No normalization (absolute)
Normalization (relative to base case)
I think the function could be called univariate_sensitivity() and a function call could look like this:
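For illustration, a call might look like this. The signature below is purely hypothetical; every parameter name is an assumption derived from the user-specifiable options listed above, and the body is only a stub:

```python
def univariate_sensitivity(model, replications=10, time_steps=100,
                           variation=0.10, defaults=None,
                           variables="all", kpis="all",
                           normalization="none", output="dataframe"):
    """Hypothetical signature only; a real implementation would run the
    OFAT sweep described above and return a DataFrame and/or graphs."""
    return {"replications": replications, "time_steps": time_steps,
            "variation": variation, "normalization": normalization,
            "output": output}

# A call with 10% variation, ratio normalization, and both outputs:
results = univariate_sensitivity(model=None, variation=0.10,
                                 normalization="ratio", output="both")
```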
Edit: Maybe we can make it a class with built-in functions to normalize and graph. That way you only have to run the experiments once, and can then use the class methods to get data and graphs from them. That would be a two-line solution but is more scalable and robust.
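A minimal sketch of that class-based design (all names here are assumptions): the constructor runs the experiments once, and methods such as `normalized()` reuse the stored results instead of re-running the model:

```python
class UnivariateSensitivity:
    """Runs the OFAT experiments once in the constructor; accessor
    methods reuse the stored results (no re-runs needed)."""

    def __init__(self, model, defaults, variation=0.1):
        self.base = model(defaults)
        self.results = {}
        for name, value in defaults.items():
            for direction in (-1, 1):
                params = dict(defaults)
                params[name] = value * (1 + direction * variation)
                self.results[(name, direction * variation)] = model(params)

    def normalized(self):
        """Each result expressed as a ratio to the base-case KPI."""
        return {k: v / self.base for k, v in self.results.items()}

def toy_model(params):
    # Toy stand-in for a real model run, returning a single KPI.
    return params["a"] * 2 + params["b"]

sa = UnivariateSensitivity(toy_model, {"a": 10.0, "b": 5.0})
ratios = sa.normalized()  # e.g. ratios[("a", 0.1)] is roughly 1.08
```

A graphing method could be added the same way, drawing from `self.results` so that plotting never triggers extra runs.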
I have a lot of code already from a recent agent-based modelling course, mainly run_experiments.py and process_experiments.ipynb.
Open to feedback on how to improve this!