
feature: expand benchmarking #1989

Open
wpbonelli opened this issue Oct 18, 2023 · 1 comment

wpbonelli (Member) commented Oct 18, 2023

Is your feature request related to a problem? Please describe.

FloPy currently has a very small set of benchmarks using pytest-benchmark.

It might be worthwhile to a) benchmark a broader set of models/utils, and b) minimize ad hoc code needed to achieve this.

Describe the solution you'd like

Maybe benchmark load/write for all test models provided by a models API as proposed in #1872, as well as any widely used pre/post-processing utils. Could also try ASV — it has been adopted by other projects like numpy, shapely, and pywatershed.
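
To make this concrete, here is a rough sketch of what an ASV suite for load/write could look like. Everything below is illustrative: the `benchmarks/` layout, the example-model directory, and the model names are placeholders, and the exact `MFSimulation` keyword arguments are from memory rather than a worked-out design.

```python
# benchmarks/benchmarks.py (hypothetical) -- ASV discovers classes and methods by
# naming convention: time_* methods are timed, setup() runs before each measurement,
# and params/param_names parametrize the suite.
from pathlib import Path

import flopy

# Placeholder location for example model workspaces; a models API (#1872) could
# supply these instead of a hard-coded path.
EXAMPLES = Path(__file__).parent / "examples"


class Mf6LoadWriteSuite:
    params = ["ex-gwf-twri01", "ex-gwf-bcf2ss"]  # placeholder model names
    param_names = ["model"]

    def setup(self, model):
        self.ws = str(EXAMPLES / model)
        self.sim = flopy.mf6.MFSimulation.load(sim_ws=self.ws, verbosity_level=0)

    def time_load(self, model):
        flopy.mf6.MFSimulation.load(sim_ws=self.ws, verbosity_level=0)

    def time_write(self, model):
        self.sim.write_simulation(silent=True)
```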

Describe alternatives you've considered

We could just stick with pytest-benchmark and a bit of scripting instead of moving to ASV.
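
In that case the scripting might be little more than a parametrized test that globs whatever example model workspaces are available. A minimal sketch, assuming pytest-benchmark's `benchmark` fixture and a placeholder `examples/` directory:

```python
# test_benchmarks.py (hypothetical) -- the pytest-benchmark `benchmark` fixture
# calls the target repeatedly and records timing statistics per parametrized case.
from pathlib import Path

import pytest

import flopy

# Placeholder: one subdirectory per MF6 example model workspace.
EXAMPLES = sorted(p for p in (Path(__file__).parent / "examples").glob("*") if p.is_dir())


@pytest.mark.parametrize("ws", EXAMPLES, ids=lambda p: p.name)
def test_load_mf6(benchmark, ws):
    sim = benchmark(flopy.mf6.MFSimulation.load, sim_ws=str(ws))
    assert sim is not None
```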

Additional context

This would help quantify performance improvements from the ongoing effort to use pandas for file IO.

@wpbonelli added this to the 3.7.0 milestone Mar 2, 2024
@wpbonelli modified the milestones: 3.7.0, 3.8.0 May 23, 2024
@wpbonelli modified the milestones: 3.8.0, 3.9.0 Aug 5, 2024
@wpbonelli modified the milestones: 3.9, 4.0 Sep 4, 2024
@wpbonelli changed the title from "feature: expand benchmarking, try ASV" to "feature: expand benchmarking" Jan 9, 2025
@wpbonelli modified the milestones: 4.0, 3.10 Jan 9, 2025

wpbonelli (Member, Author) commented Jan 9, 2025

ASV seems unmaintained now, and some projects have begun to switch away from it. We could either stick with pytest-benchmark or move to something like Codspeed, with which the former would work out of the box. But in any case, probably good to set up a more complete benchmarking system in the near term so we can see the difference when we start reimplementing IO routines.
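
As I understand it, Codspeed's pytest plugin (pytest-codspeed) accepts the same `benchmark` fixture as pytest-benchmark, so an existing test body should run under either plugin without changes; which plugin is installed (pytest-benchmark locally, pytest-codspeed in CI) just determines how the measurement is taken. Roughly, with a placeholder path:

```python
# Sketch: the same test body should work under pytest-benchmark or, as I understand
# it, pytest-codspeed; only the installed plugin changes how timing is collected.
import flopy


def test_load_twri(benchmark):
    benchmark(flopy.mf6.MFSimulation.load, sim_ws="examples/ex-gwf-twri01")  # placeholder path
```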

@wpbonelli self-assigned this Jan 9, 2025