What is the problem
Parameters (i.e., levers, constants, and uncertainties) currently must be scalars in the workbench. This means that if you have 100 identical parameters (e.g., as in the intertemporal version of the lake problem), you need to create 100 parameters, all with the same lower and upper bounds but each with a slightly different name. It also means that, in your model, you need to collect these 100 parameters and put them back into the appropriate container.
So, from the lake problem, the levers are currently created one scalar at a time, along the lines of the sketch below (modeled on the workbench's lake problem example; exact names may differ):
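```python
from ema_workbench import RealParameter

# 100 scalar levers, identical except for their names ('0' .. '99'),
# to be assigned to the model via lake_model.levers
levers = [RealParameter(str(i), 0, 0.1) for i in range(100)]
```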
and, inside the model function, something like the sketch below to gather the sampled scalars back into a single decision vector (the parameter defaults shown follow the usual lake problem formulation):
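```python
import numpy as np


def lake_problem(b=0.42, q=2.0, mean=0.02, stdev=0.001, delta=0.98, **kwargs):
    # collect the 100 scalar levers back into one decision vector
    decisions = np.array([kwargs[str(i)] for i in range(100)])
    # ... the rest of the model then uses `decisions` as the release policy
```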
Envisioned solution
What if, instead of having to create all parameters yourself and collect their sampled values back into the appropriate container, all of this could be offloaded to the workbench? In fact, rhodium already supports this for Levers, which take an optional length keyword argument. I suggest generalizing this idea by having an optional shape keyword argument. So, building on the lake problem example, imagine that the following code just works:
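```python
from ema_workbench import RealParameter

# proposed: a single shaped parameter replacing the 100 scalar ones;
# the shape keyword does not exist yet, this is the suggested API
levers = [RealParameter("decisions", 0, 0.1, shape=(100,))]
```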
and we can now just use `decisions` as a keyword argument on the lake model function, removing the need for the second code block.
Implementation details
To make something like this work, there needs to be a distinction between the public-facing parameters and their internal scalar representation. Executing the above code would trigger the generation of a hundred implicit scalar parameters, so they can be sampled or optimized as before. After the experiments have been created, we would call the 'parent' parameters to process these samples and perform any transformations; the resulting modified experiments would then be run. This processing would most likely be a generalization of what already happens with categorical parameters, where integers are mapped back to the corresponding category.
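As a rough illustration, here is a minimal sketch of this expand-and-regroup step, assuming hypothetical helper names (`expand`, `collapse`) and a plain dict standing in for an experiment:

```python
import numpy as np


def expand(name, shape):
    """Generate the implicit scalar names behind a shaped parameter."""
    return [f"{name}_{i}" for i in range(int(np.prod(shape)))]


def collapse(experiment, name, shape):
    """Pop the sampled scalars from an experiment and regroup them
    into one array under the public-facing parameter name."""
    flat = [experiment.pop(scalar) for scalar in expand(name, shape)]
    experiment[name] = np.asarray(flat).reshape(shape)
    return experiment


# a sampled experiment with 100 implicit scalars: decisions_0 .. decisions_99
experiment = {name: 0.05 for name in expand("decisions", (100,))}
experiment = collapse(experiment, "decisions", (100,))
assert experiment["decisions"].shape == (100,)
```

In the workbench itself, this regrouping would presumably sit alongside the existing categorical handling, since both map internal sample values back to their public-facing form.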