Add profiling & benchmarks for Indexset and Parameter (#148)
In order to evaluate the benefit of #143, @meksor asked me to write some profiling and benchmark tests for `parameters.add(data)`. This PR does exactly that and a bit more, so there are a few things to keep in mind:

- The tests also cover `Indexset`, because `Indexset` already had its DB model normalized (Normalize `IndexSet.data` DB storage, #122). This might be helpful because with this PR alone we can already see how the normalized `Indexset` model compares to converting data to `dict`s and storing them as JSON. This comparison is not accurate, though, so my next task will be to duplicate Normalize `optimization.Table` DB storage (#143) for `Parameter`. (We are interested in `Parameter` rather than `Table` because `Parameter` needs the `UPSERT` functionality, the benchmark of which triggered this whole procedure.) On top of that branch, I can then include exactly the same tests we have here for a proper comparison.
- This PR is not meant to be merged into `main`, or at least requires more cleanup before being committed.
- The tests currently don't run on the `big` data (because I only ensured that the tests run locally, and I didn't want to wait that long for it). For proper benchmark runs, we may want to adapt this and add some warmup runs and iterations (see the pedantic-mode sketch below this list).
- The `big` data includes `tests/fixtures/optimization/big/parameterdata.csv`, which is too large for GitHub's liking; when I pushed the commit adding the file here, I received a warning about its size. So we might want to use `git-lfs` or create the test data dynamically instead (see the data-generation sketch below this list).
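
For orientation, here is a minimal sketch of what such a benchmark test for `parameters.add(data)` could look like. Note that the use of `pytest-benchmark`, the `parameter` fixture, and the data shape are assumptions for illustration, not the actual code in this PR:

```python
import pytest


@pytest.fixture
def parameter_data() -> dict[str, list]:
    # Hypothetical small stand-in for the CSV fixtures; the real tests would
    # load data from tests/fixtures/optimization/ instead.
    n = 1_000
    return {
        "foo": [f"element {i}" for i in range(n)],  # assumed indexset column
        "values": list(range(n)),
        "units": ["Unit"] * n,
    }


def test_parameter_add_data(benchmark, parameter, parameter_data):
    # `parameter` is assumed to come from a fixture that creates an empty
    # Parameter; pytest-benchmark then times parameter.add(parameter_data)
    # over several rounds and reports min/max/mean/stddev.
    benchmark(parameter.add, parameter_data)
```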
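
If we do use `pytest-benchmark`, warmup runs and iterations are available directly through its pedantic mode, roughly like this (reusing the hypothetical fixtures from the sketch above; the numbers are arbitrary):

```python
def test_parameter_add_data_pedantic(benchmark, parameter, parameter_data):
    # Explicit control over rounds, iterations per round, and warmup rounds,
    # so cold-start effects don't skew the statistics.
    benchmark.pedantic(
        parameter.add,
        args=(parameter_data,),
        rounds=10,
        iterations=1,
        warmup_rounds=2,
    )
```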
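
And if we decide to create the test data dynamically rather than committing the large CSV (or tracking it with `git-lfs`), a session-scoped fixture along these lines could stand in for `tests/fixtures/optimization/big/parameterdata.csv`; the size and column names here are made up for illustration:

```python
import numpy as np
import pandas as pd
import pytest


@pytest.fixture(scope="session")
def big_parameter_data(tmp_path_factory) -> pd.DataFrame:
    # Generate a large, reproducible dataset at test time instead of keeping
    # a big CSV in the repository.
    rng = np.random.default_rng(seed=42)
    n = 1_000_000
    df = pd.DataFrame(
        {
            "foo": [f"element {i}" for i in range(n)],  # assumed indexset column
            "values": rng.random(n),
            "units": ["Unit"] * n,
        }
    )
    # Also persist to a temporary CSV in case the tests expect a file path.
    df.to_csv(tmp_path_factory.mktemp("big") / "parameterdata.csv", index=False)
    return df
```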