Performance overhead #26
> Now that the entire code base uses black this is practical - the flake8 configuration gave a convenient way to manage the gradual adoption. However, right now my flake8-black plugin has more overhead than it should and slows this down. peterjc/flake8-black#26
I didn't get the expected speedup when trying this on Travis CI; I now suspect the black cache is the reason why.
So, while we may not be able to dramatically speed up flake8-black on a first run, can we tap into the black cache?
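For reference, a minimal sketch of what "tapping into the black cache" could look like, assuming black 19.x-era internals. `FileMode`, `read_cache` and `filter_cached` are internal helpers, not a stable public API, and their names and signatures vary between black releases:

```python
# Sketch only - relies on black's *internal* cache helpers as they existed
# around black 19.x (FileMode, read_cache, filter_cached); these have moved
# and changed in later releases.
from pathlib import Path

import black

mode = black.FileMode()          # the cache is keyed per mode (and per black version)
cache = black.read_cache(mode)   # maps resolved Path -> (mtime, size) from the last run

# Hypothetical file(s) that flake8 asked the plugin to check.
sources = {Path("example.py").resolve()}
todo, done = black.filter_cached(cache, sources)

# "done" files still match the cached mtime/size, so black already considers
# them formatted and we could skip re-checking them entirely.
for path in done:
    print(f"{path}: cached as already black-formatted, skip")
for path in todo:
    print(f"{path}: not cached (or modified since), needs a real black run")
```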
Or could flake8 include a similar caching mechanism? Black does this with one cache per mode (covering the relatively short list of black options) per version of black. Doing something similar for flake8 would also have to cover the combination of all installed plugins and their versions. This seems like it would be too complicated.
We can probably take advantage of the black cache by imitating what the black command line does and letting black open and parse the file itself (rather than letting flake8 parse the file and passing the data to an internal black function). This will be a performance trade-off - I suspect it will be faster for the local use case (e.g. a git pre-commit hook), but could be slower for continuous integration (with no black cache present).
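A rough sketch of that idea, again assuming black 19.x-era internals. `format_file_in_place` and `WriteBack` are not a stable public API, and this call on its own does not consult the cache - the cache lookup sketched above would still have to happen around it:

```python
# Sketch of the "imitate the command line" idea: give black a Path and let it
# read and parse the file itself, rather than flake8 handing source text to an
# internal black function. A real plugin would map the boolean result onto a
# flake8 error code; the function name here is hypothetical.
from pathlib import Path

import black

def would_black_reformat(path: Path) -> bool:
    """Return True if black would change this file (nothing is written back)."""
    return black.format_file_in_place(
        path,
        fast=False,
        mode=black.FileMode(),
        write_back=black.WriteBack.NO,
    )

if __name__ == "__main__":
    print(would_black_reformat(Path("example.py")))
```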
Testing with Biopython (which recently finished applying black to the entire code base), numbers on a multi-core Mac:
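The real/user figures below read like output from the shell's `time` command run over the whole source tree. A rough, self-contained way to reproduce the same comparison is sketched here; the `Bio` target directory and the exact set of installed flake8 plugins are assumptions, not part of the measurements below:

```python
# Hedged sketch for reproducing the real/user time comparison below.
# The target directory ("Bio") and the commands are assumptions about the
# Biopython setup; adjust to taste. Unix-only (uses the resource module).
import resource
import subprocess
import time

def timed(cmd):
    """Run cmd, returning (real seconds, user CPU seconds of the children)."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime
    start = time.perf_counter()
    subprocess.run(cmd)  # non-zero exit just means issues were found
    real = time.perf_counter() - start
    user = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime - before
    return real, user

for cmd in (["black", "--check", "Bio"], ["flake8", "Bio"]):
    real, user = timed(cmd)
    print(f"{' '.join(cmd)}: {real:.1f}s real, {user:.1f}s user")
```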
Black alone (best of three)
Using flake8 with assorted plugins, but without flake8-black (best of three)
Adding the above together gives us an expected run time for running black via flake8.
So, same setup, but with flake8-black (best of three)
That's an overhead of about 40s real and well over 2m user time - not good!