support performance testing #15
Comments
So far we have introduced a level of visibility for performance. I think the next steps would be
These are quite advanced features and I don't know of any precedent right now. As such, we are not going to try to solve this in the next release or so; an attempt will be made in a dedicated branch.
Actually, the performance measurement has been hidden, as it was confusing.
Done some experiments with sorting. Observations include:
Trying to detail the model a bit more. If we want to measure function performance as a function of platform performance (how fast the test machine is), and want to characterize this over multiple dimensions, we need multiple observables or the system is underdetermined.
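To fix notation for what follows — this is one possible reading, not necessarily the exact formulation intended; the symbols T, A and C are chosen to match the notation used later in the thread:

$$T = A\,C$$

where $T$ is the matrix of observed test timings (one row per input size, with user, system and elapsed columns), $A$ counts how many of each elementary operation the function under test performs at each input size, and $C$ is the platform-dependent cost of each elementary operation along each timing dimension.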
We don't have C, so we run a number of benchmark expressions and collect their timings (again user, system and elapsed). We model these similarly, with a matrix M describing how many of the elementary operations are needed for each of the benchmarks and the same platform-dependent C.
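Spelled out (writing $B$ for the benchmark timings, to match the $T B^{-1}$ notation used below):

$$B = M\,C$$

with $B$ holding one row of (user, system, elapsed) timings per benchmark, $M$ of size benchmarks × elementary operations, and $C$ the same unobservable cost matrix as above.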
If we now combine the test timings with the (pseudo-)inverse of the benchmark timings, the unobservable, platform-dependent matrix C cancels out. That is, if the above reasoning is correct, we can make the timings portable by inverting (pseudo-inverse) the benchmark matrix, which is measurable, and multiplying it with the test timings. This is a linear transformation, so we can model it with a poly, polylog or other user-defined model.
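In symbols, the cancellation described above would be something like the following (my reconstruction, using the Moore-Penrose pseudo-inverse $B^{+}$ and assuming it factors as $(MC)^{+} = C^{+}M^{+}$ with $C\,C^{+} = I$):

$$T\,B^{+} = A\,C\,(M\,C)^{+} = A\,C\,C^{+}M^{+} = A\,M^{+}$$

so the portable quantity $T\,B^{+}$ depends only on the operation counts $A$ and $M$, not on the platform-dependent costs $C$.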
To clarify, quickcheck's goal is not to model performance data. The goal here is to create portable, performance-related assertions, which ultimately end up in tests. But we may need to provide some modeling tools to help people write such assertions, because of portability.
The plan is as follows: users won't model their algorithm's performance T directly, but T B^-1 or some other portable transformation. They will provide their model coefficients as a vector in their tests. The test will use the model to predict T B^-1 for specific input sizes and will compute B on the specific test machine. It will then compare predicted and actual T and, when the prediction is exceeded by a TBD amount, the test will fail.
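A minimal sketch of that workflow in plain R (not the quickcheck API; the benchmark set, helper names, model form and reference coefficients are all made up for illustration, and `MASS::ginv` stands in for the pseudo-inverse):

```r
library(MASS)  # ginv(): Moore-Penrose pseudo-inverse

# Collect user, system and elapsed seconds for an expression.
time3 <- function(expr) {
  t <- system.time(expr)
  c(user = t[["user.self"]], system = t[["sys.self"]], elapsed = t[["elapsed"]])
}

# B: timings of a few benchmark expressions on the current test machine.
benchmarks <- list(
  arithmetic = quote(sum(sqrt(seq_len(1e6)))),
  matmul     = quote(crossprod(matrix(rnorm(250000), 500, 500))),
  sorting    = quote(sort(runif(1e5))))
B <- t(sapply(benchmarks, function(e) time3(eval(e))))

# T: timings of the function under test (sort, here) at a few input sizes.
sizes <- c(1e4, 1e5, 1e6)
T_obs <- t(sapply(sizes, function(n) { x <- runif(n); time3(sort(x)) }))

# Portable representation T %*% B^+, in which the platform-dependent costs
# are expected to (approximately) cancel.
portable <- T_obs %*% ginv(B)

# The test author ships model coefficients obtained on a reference machine;
# the numbers and the n*log(n) model form are placeholders.
predicted <- outer(sizes * log(sizes), rep(1e-7, ncol(portable)))

# Fail when the observed portable timings exceed the prediction by more
# than a tolerance (the "TBD amount" above).
tolerance <- 2
stopifnot(all(portable <= tolerance * predicted))
```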
It may be worth considering whether performance is only a function of input size or of the input value itself. For textbook algorithms such as sort, input size is all that matters in most cases. But for an RNG, for instance, the input size is generally constant while run time depends on the actual sample size (an argument). So the general approach would be to have the performance model depend on the actual arguments of a test, and then let the modeler decide whether to consider the length of an input or some other function of it.
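For instance (a toy sketch, with `rnorm` standing in for the RNG and a simple linear model as the modeler's choice of how time depends on the argument):

```r
# "Size" here is the value of the argument n, not the length of an input.
elapsed <- function(expr) system.time(expr)[["elapsed"]]

n_values <- c(1e5, 1e6, 5e6)
timings  <- sapply(n_values, function(n) elapsed(rnorm(n)))

# The modeler picks which function of the arguments to regress on; here n itself.
fit <- lm(timings ~ n_values)
coef(fit)
```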
Trivial example to bring this back to earth: to test the performance of an implementation of quicksort, qsort(), I can write a test along these lines.
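One possible shape for such a test, sketched in plain R rather than the quickcheck API (`qsort` is just a stand-in for the implementation under test, and the coefficient and tolerance are placeholders a test author would calibrate):

```r
qsort <- function(x) sort(x)  # stand-in for the quicksort implementation under test

test_qsort_performance <- function(sizes = c(1e4, 1e5, 1e6),
                                   coef_nlogn = 5e-8,   # placeholder model coefficient
                                   tolerance  = 2) {    # the "TBD amount"
  elapsed <- sapply(sizes, function(n) {
    x <- runif(n)
    system.time(qsort(x))[["elapsed"]]
  })
  bound <- tolerance * coef_nlogn * sizes * log(sizes)
  # fail when the measured times exceed the modeled n*log(n) bound
  stopifnot(all(elapsed <= bound))
}

test_qsort_performance()
```

A portable version would compare against the predicted T B^+ from the sketch above instead of a raw time bound.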