Add wasm_of_ocaml benchmarks with current-bench output #1842
base: master
Conversation
Note: this is ready for review, but I still need to verify some details (namely, how current-bench deals with multiple opam files, and what the name of the results file should be), so we should hold off on merging for now.
Thanks to @punchagan’s advice, this should now hopefully work with current-bench.
Force-pushed from 458a34c to df5a387.
I think I would put all the results together, grouping them:

```json
{
  "name": "Wasm_of_ocaml",
  "results": [
    {
      "name": "Microbenchmarks",
      "metrics": [
        {
          "name": "microbenchmark/almabench",
          "value": 1.6087265,
          "units": "s"
        },
        {
          "name": "microbenchmark/bdd",
          "value": 0.31883700000000004,
          "units": "s"
        },
        ...
      ]
    }
  ]
}
```
This will cause all the run times to appear on a single graph. I’m not sure we want that.
Maybe not, actually, since they have very different running times. This is done for OCaml, but it is not very readable. We should still put them in the same list of metrics; that will be more readable once we add other benchmarks. Maybe we could report the geometric mean of all the running times as well?
Actually, I disagree. When you look at the last line of https://github.com/ocurrent/current-bench/blob/b0528d005b34b8ea1ff4cd7f893165ce782ef5db/pipeline/lib/custom_dockerfile.ml#L78-L96, it also installs regular dependencies. I have rearranged the metrics list as requested.
A geometric mean of absolute values? Isn’t the geometric mean usually applied to relative numbers, such as speedup ratios?
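For what it’s worth, the geometric mean is well defined on raw times too, but it is most commonly applied to normalized results (e.g., per-benchmark speedup ratios), since changing the baseline then rescales the mean uniformly. A minimal sketch of such a helper (hypothetical, not part of this PR):

```ocaml
(* Hypothetical helper: geometric mean of positive values, e.g. raw
   running times or per-benchmark speedup ratios. Computed in log
   space to avoid overflow/underflow on long products. *)
let geometric_mean xs =
  let n = float_of_int (List.length xs) in
  exp (List.fold_left (fun acc x -> acc +. log x) 0. xs /. n)

let () =
  (* Example over the two timings quoted above. *)
  Printf.printf "%.7f\n" (geometric_mean [ 1.6087265; 0.318837 ])
```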
This reverts commit 8556727.
@vouillon and I (so far, mostly @vouillon) have started to look at ways of improving the performance of Wasm code. To keep track of our progress, we need to set up a benchmarking suite that gets run regularly.
This PR makes the repository compatible with monitoring by https://github.com/ocurrent/current-bench/: the benchmark suite is extended with the current-bench output format, and I modified the scripts so that Wasm benchmarks are first-class citizens. The existing benchmarking utilities should still work; I ran a few checks to verify that.
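As an illustration of that output format, here is a minimal OCaml sketch of an emitter, assuming the timings have already been collected as (name, seconds) pairs; the `emit` function and the sample data are hypothetical, not the scripts added by this PR:

```ocaml
(* Minimal sketch: print benchmark timings in the current-bench JSON
   format discussed above. [emit] and the sample data below are
   hypothetical illustrations, not code from this PR. *)
let emit ~suite ~group timings =
  let metric (name, seconds) =
    Printf.sprintf {|        { "name": %S, "value": %.7f, "units": "s" }|}
      name seconds
  in
  Printf.printf
    {|{
  "name": %S,
  "results": [
    {
      "name": %S,
      "metrics": [
%s
      ]
    }
  ]
}
|}
    suite group
    (String.concat ",\n" (List.map metric timings))

let () =
  emit ~suite:"Wasm_of_ocaml" ~group:"Microbenchmarks"
    [ ("microbenchmark/almabench", 1.6087265)
    ; ("microbenchmark/bdd", 0.318837) ]
```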
This is a first step; we will soon want to add macrobenchmarks as well, presumably by having a `benchmarks/sources/macro` directory where each subdirectory is a macrobenchmark with its own dune file.
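A sketch of what that layout could look like (the benchmark names below are hypothetical, chosen only to illustrate the idea):

```
benchmarks/sources/macro/
├── nbody/
│   ├── dune
│   └── nbody.ml
└── binarytrees/
    ├── dune
    └── binarytrees.ml
```

with each subdirectory’s dune file along the lines of:

```
; Hypothetical dune file for one macrobenchmark; the wasm executable
; mode assumes a dune version with wasm_of_ocaml support.
(executable
 (name nbody)
 (modes js wasm))
```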