
build out metrics for the profile tests #20

Open · notbenh opened this issue May 29, 2015 · 2 comments

notbenh commented May 29, 2015

I love that there is a profiling dir of tests... but they seem to be used by something else rather than as a way to mark regressions? Is there any documentation on what the expected data points are, or how to do a before/after comparison? I only even thought of this because I ended up writing a stupid (really stupid) but simple cross-branch testing script that might be worth using as a starting point to add a little data to the testing. NOTE: it's REALLY stupid: https://github.com/notbenh/kelp/blob/benchmarking/bench_all_branches
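
To spell out the pattern: check out each branch, run the same tests under a timer, print the numbers. A minimal Perl sketch of that idea (illustrative only, not the actual script; the branch names and test command are placeholders):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Hypothetical sketch only: branch names and the test command are
# placeholders, not taken from the real bench_all_branches script.
my @branches = qw(master benchmarking);

for my $branch (@branches) {
    system('git', 'checkout', '--quiet', $branch) == 0
        or die "checkout of $branch failed";

    my $start = [gettimeofday];             # wall-clock timer
    system('prove', '-lr', 't/');           # same test run on every branch
    printf "%-15s %.2fs\n", $branch, tv_interval($start);
}
```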

exodist (Member) commented May 29, 2015

The profiling tests are mainly there for me to make sure I am not making Test::Stream too slow compared with Test::More.

Currently Test::Stream can do 100k OKs in ~12 seconds; Test::More takes ~20. These numbers are specific to my machine, with nytprof active. I also look at the nytprof profiling data to see where things are slow (when they are slow).
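
For anyone wanting to reproduce that kind of run, here is a sketch of the shape of it (the actual profile tests in the repo may be structured differently):

```perl
#!/usr/bin/env perl
# Hypothetical sketch of a 100k-ok profiling script; the real
# profile tests may differ.
#
# Profile it with:       perl -d:NYTProf profile_ok.t
# Generate the report:   nytprofhtml
use strict;
use warnings;
use Test::More;   # swap the test module to compare Test::Stream vs Test::More

ok(1, "ok $_") for 1 .. 100_000;

done_testing;
```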

notbenh (Author) commented May 29, 2015

I agree that these are super useful, and I am trying to encourage their use, though it seems there would be some value in codifying how these metrics are obtained. That said, as you point out, they are SUPER platform dependent, so building out a way to compare the before and after metrics for a change seems like a good idea. Though I am not really clear on how to reliably do a performance diff across commits, or even staged commits. So knowing that you are using nytprof and have some numbers is cool, but that does not help anyone else if they wanted to, say, build out a change for #22 and see if using cmp_ok for these tests is really that much slower.
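
For example, even the core Benchmark module would give a rough ratio for the cmp_ok question (a sketch, not anything currently in the repo; absolute numbers will vary per machine):

```perl
#!/usr/bin/env perl
# Hypothetical sketch: compare ok() against cmp_ok() (re #22).
# Absolute numbers are machine-specific; only the ratio is useful.
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Test::More;

# Silence the TAP stream so we time the assertions, not the I/O.
Test::More->builder->output('/dev/null');
Test::More->builder->failure_output('/dev/null');

cmpthese(100_000, {
    ok     => sub { ok(1 == 1, 'plain ok') },
    cmp_ok => sub { cmp_ok(1, '==', 1, 'cmp_ok') },
});

done_testing;
```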

Also, if it works here then possibly this could become a new pattern for :toolchain:, and we could start to see testing around performance regressions that is just as systematic as, say, testing for failures in the code you are about to commit.
