Fix velocity advection stencil test and references #632
Conversation
cscs-ci run default
cscs-ci run benchmark
Resolved review threads on the following files:
- ...tmosphere/dycore/tests/dycore_stencil_tests/test_fused_velocity_advection_stencil_8_to_13.py
- ...atmosphere/dycore/tests/dycore_stencil_tests/test_fused_velocity_advection_stencil_1_to_7.py
- ...ore/src/icon4py/model/atmosphere/dycore/stencils/fused_velocity_advection_stencil_8_to_13.py
- ...atmosphere/dycore/tests/dycore_stencil_tests/test_correct_contravariant_vertical_velocity.py
- model/atmosphere/dycore/tests/dycore_stencil_tests/test_copy_cell_kdim_field_to_vp.py (outdated)
…ell_kdim_field_to_vp.py Co-authored-by: Nicoletta Farabullini <[email protected]>
cscs-ci run default
launch jenkins spack
cscs-ci run benchmark
cscs-ci run default
cscs-ci run benchmark
cscs-ci run default
def _horizontal_range(grid):
    if isinstance(grid, icon_grid.IconGrid):
        # For the ICON grid we use the proper domain bounds (otherwise we will run into non-protected skip values)
        edge_domain = h_grid.domain(dims.EdgeDim)
        return grid.start_index(edge_domain(h_grid.Zone.LATERAL_BOUNDARY_LEVEL_7)), grid.end_index(
            edge_domain(h_grid.Zone.HALO)
        )
    else:
        return 0, gtx.int32(grid.num_edges)
This is a new pattern and breaks the grid abstraction. Maybe all grids should provide markers, but for simple grid we set them to 0...num_x everywhere?
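A minimal sketch of what that suggestion could look like, assuming a simple grid type whose markers all map to the full index range (the `SimpleGrid` name, constructor, and method signatures here are illustrative assumptions, not the actual icon4py API):

```python
import gt4py.next as gtx


class SimpleGrid:
    """Hypothetical simple grid: every domain marker maps to the full 0..num_edges range."""

    def __init__(self, num_edges: int):
        self.num_edges = num_edges

    def start_index(self, domain) -> gtx.int32:
        # A simple grid has no lateral-boundary or halo zones, so every marker starts at 0.
        return gtx.int32(0)

    def end_index(self, domain) -> gtx.int32:
        # ...and every marker ends at the total number of edges.
        return gtx.int32(self.num_edges)
```

With markers available on every grid, the test could call `grid.start_index(...)` / `grid.end_index(...)` unconditionally and the `isinstance` branch above would disappear.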
        if hasattr(grid, "end_index")
        else gtx.int32(grid.num_edges)
    )
    return start, end
can you take this code out of the function to make it analogous to other tests?
It's used in 2 places, therefore I extracted it into a function.
inlined
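For illustration, the inlined form at a call site might look roughly like this (a sketch reconstructed from the helper above; the `horizontal_start`/`horizontal_end` names are assumptions):

```python
# Sketch of the bounds computation inlined into the test, mirroring the former helper.
if isinstance(grid, icon_grid.IconGrid):
    # For the ICON grid use the proper domain bounds (otherwise we run into non-protected skip values).
    edge_domain = h_grid.domain(dims.EdgeDim)
    horizontal_start = grid.start_index(edge_domain(h_grid.Zone.LATERAL_BOUNDARY_LEVEL_7))
    horizontal_end = grid.end_index(edge_domain(h_grid.Zone.HALO))
else:
    horizontal_start, horizontal_end = 0, gtx.int32(grid.num_edges)
```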
cscs-ci run default
Mandatory Tests: please make sure you run these tests via comment before you merge!
Optional Tests:
- To run benchmarks you can use:
- To run tests and benchmarks with the DaCe backend you can use:
- In case your change might affect downstream icon-exclaim, please consider running:
For more detailed information please look at CI in the EXCLAIM universe.
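For reference, the trigger comments used elsewhere in this conversation are `cscs-ci run default`, `cscs-ci run benchmark`, and `launch jenkins spack`.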
cscs-ci run default
@@ -4,8 +4,7 @@ include:
 .benchmark_model_stencils:
   stage: benchmark
   script:
-    # force execution of tests where validation is expected to fail, because the reason for failure is wrong numpy reference
-    - nox -s benchmark_model-3.10 -- --backend=$BACKEND --grid=$GRID --runxfail
+    - nox -s benchmark_model-3.10 -- --backend=$BACKEND --grid=$GRID
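For context: `--runxfail` is a pytest option (forwarded here through the arguments after `--`, assuming the nox session passes them on to pytest) that reports xfail-marked tests as if they were not marked, so a test with a known-bad reference fails the job instead of counting as an expected failure.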
I will apply the same change in the dace CI pipeline:
Line 34 in 77a7a68:
# force execution of tests where validation is expected to fail, because the reason for failure is wrong numpy reference
Fixes the failure in the benchmark test that is run after merge.
The failure was not detected because we only require the verification tests to run on PRs, not the benchmark tests. The failing test had been disabled (marked xfail) because of a wrong numpy reference, but the benchmark tests are run with xfails ignored.
This PR fixes the numpy references for the xfailed tests and removes `--runxfail` from the benchmark CI plan.
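For context, a minimal standalone sketch of the xfail mechanics involved (plain pytest, not the actual icon4py test; the values are invented for illustration):

```python
import pytest


@pytest.mark.xfail(reason="wrong numpy reference")
def test_stencil_against_numpy_reference():
    computed = 1.0   # stand-in for the stencil result
    reference = 2.0  # stand-in for the (incorrect) numpy reference value
    assert computed == reference
```

Run normally, pytest reports this test as xfailed and the job stays green; run with `pytest --runxfail`, the marker is ignored and the assertion failure fails the job, which is what happened in the post-merge benchmark pipeline. Fixing the reference and dropping `--runxfail` addresses both sides of the problem.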