[ENH] - Update tests with plots to not display / block #214
Responds to #202
Currently, running the tests locally pops up a bunch of figures, and some of them are blocking (they wait for input / closing before proceeding), which is pretty annoying. Digging into this a bit, most plots could be explicitly closed with `plt.close`, but this didn't work for the blocking plots, which would still wait for input. To address this, this PR sets pytest to run matplotlib in "interactive mode"; the most salient outcome is that in this mode plot creation defaults to `block=False`, so blocking plots no longer get created. Tests also now proceed without displaying the other non-blocking plots (a minimal sketch of this setup is shown below).
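For reference, here is a minimal sketch of how interactive mode could be enabled for the whole test session, e.g. via a session-scoped fixture in `conftest.py`. The fixture name and the use of `plt.ion()` are illustrative assumptions, not necessarily the exact mechanism this PR uses:

```python
# conftest.py -- illustrative sketch only; the fixture name and the use of
# plt.ion() are assumptions about how "interactive mode" could be enabled,
# not necessarily what this PR implements.
import matplotlib.pyplot as plt
import pytest


@pytest.fixture(autouse=True, scope="session")
def _matplotlib_interactive():
    # In interactive mode, plt.show() defaults to block=False, so tests no
    # longer stall waiting for a figure window to be closed.
    plt.ion()
    yield
    # Clean up any figures that tests left open.
    plt.close("all")
```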
Note: for me at least, this addresses all the plots except those in `test_dynamictablesummary`. On my laptop, these plots open in the browser regardless of the interactive mode. I think this stems from some other issue / difference in how these plots are created. My quick-check guess is that this perhaps relates to `DynamicTableSummaryWidget` not defining a `set_out_fig` method (which other widgets seem to do), which appears to direct where the widget output gets created; without it, these plots may get managed differently (see the illustrative sketch below).

In terms of checking this change: I don't think anything should change in the automated tests, but it would be useful to run the tests locally, to double-check that this leads to consistent behaviour across systems.
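For illustration only, this is the kind of `set_out_fig`-style pattern being referred to, where a widget creates and owns the figure it draws into rather than letting plotting calls open their own window. The class and attribute names below are hypothetical and are not taken from the actual `DynamicTableSummaryWidget` code:

```python
# Hypothetical illustration of a widget that defines a set_out_fig-style
# method to own the figure it draws into; names here are made up and do not
# reflect the actual DynamicTableSummaryWidget implementation.
import matplotlib.pyplot as plt
import ipywidgets as widgets


class ExampleTableWidget(widgets.VBox):
    def __init__(self):
        super().__init__()
        self.set_out_fig()

    def set_out_fig(self):
        # Create the output figure up front so later plotting calls draw into
        # it, instead of opening a new window / browser tab of their own.
        self.fig, self.axs = plt.subplots()
        self.out_fig = self.fig
```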