It is quite common to have a couple of different implementations that we want to test on small, medium, large, and super-large inputs.
Often, we use some kind of 'simple' structure as a baseline, and want to benchmark a more clever 'complex' structure against it.
As a simple example, consider comparing element access on a list using `Enum.fetch(list, index)` with element access on an Erlang array using `:array.get(index, array)`.
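A minimal Benchee setup for this comparison might look like the following sketch. The input sizes, scenario names, and the choice of fetching the middle element are illustrative, not prescriptive:

```elixir
# Sketch: compare list access vs. :array access across growing inputs.
# `before_scenario` builds the data structures once per scenario so that
# construction cost is not measured as part of each run.
Benchee.run(
  %{
    "Enum.fetch/2 (list)" => fn {list, _array, index} ->
      Enum.fetch(list, index)
    end,
    ":array.get/2" => fn {_list, array, index} ->
      :array.get(index, array)
    end
  },
  inputs: %{
    "small" => 1_000,
    "medium" => 100_000,
    "large" => 10_000_000
  },
  before_scenario: fn size ->
    list = Enum.to_list(1..size)
    {list, :array.from_list(list), div(size, 2)}
  end
)
```

With inputs like these, the list-based scenario is exactly the kind that degrades as the input grows, which is the problem described below.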
For small inputs, the simple structure might run fast enough (and, because of its simplicity, it might actually be faster than the complex structure). However, for larger inputs it slows down dramatically. While this is nice to show in a graph, at some point we hit a problem:
If we are lucky, benchmarking the larger inputs merely takes much longer than the 'estimated time'.
If we are unlucky, benchmarking the larger inputs takes so much time and/or memory that the computer freezes, or the Elixir process is killed.
The only way to resolve this issue right now is to:
Run the benchmark while keeping a close eye on it.
If it seems to take rather long, interrupt it manually and remove the input case where the benchmark started hanging.
Repeat from (1) until the benchmark actually completes.
An additional drawback is that we have now removed the large inputs for all implementations. This means that if we have multiple 'clever' implementations, we can no longer see how they compare to one another for large inputs, unless we create another separate benchmark that leaves out the simple implementation.
Proposal: add an option to stop a benchmark scenario if its time to complete is disproportionately long with respect to the estimated running time. For instance, with `time: 5`, we might stop any benchmark scenario where a single run already takes longer than 5s to complete.
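One hypothetical shape for such an option is sketched below. The `max_single_run_time` key is invented here purely for illustration; it is not an existing Benchee option, and the proposal does not prescribe a name:

```elixir
# Hypothetical configuration sketch, not a real Benchee API.
Benchee.run(
  %{
    "simple baseline" => fn input -> SimpleImpl.run(input) end,
    "clever structure" => fn input -> CleverImpl.run(input) end
  },
  time: 5,
  # Invented option: abort a scenario once a single run
  # exceeds this many seconds, instead of freezing the machine.
  max_single_run_time: 5
)
```

The point of the sketch is only that the cutoff would be configured per benchmark run, alongside the existing `time` option, so disproportionate scenarios are skipped while the remaining scenarios still get measured on the large inputs.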
This sounds like adding a timeout configuration to a scenario, which does seem like a reasonable thing to do to me. It's also super easy to implement. If you want to give that a try I'd be more than happy to review a PR and help in that way. I could even put together something myself at one point, but I won't have time for that for a bit.
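Such a timeout could be prototyped outside Benchee by running the measured function in a `Task` and shutting it down when it exceeds a deadline. This is a sketch under the assumption that brutally killing the task process is an acceptable way to abort a run:

```elixir
defmodule ScenarioTimeout do
  # Run `fun` in a separate process; return {:ok, result} if it
  # finishes within `timeout_ms`, :timeout if it had to be killed,
  # or {:exit, reason} if the function itself crashed.
  # Uses only Task.async/1, Task.yield/2 and Task.shutdown/2.
  def run(fun, timeout_ms) do
    task = Task.async(fun)

    case Task.yield(task, timeout_ms) || Task.shutdown(task, :brutal_kill) do
      {:ok, result} -> {:ok, result}
      {:exit, reason} -> {:exit, reason}
      nil -> :timeout
    end
  end
end

# Example usage:
# ScenarioTimeout.run(fn -> Enum.sum(1..10_000_000) end, 5_000)
```

A real implementation inside Benchee would need more care (e.g. not distorting measurements with the extra process hop), but the `Task.yield/2` then `Task.shutdown/2` pattern is the standard way to race a computation against a deadline in Elixir.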
We can also support trapping signals on recent Elixir versions. So if someone presses Cmd+\\ or similar, we abort it. But I think that particular command also shuts down the VM. :D