
Fix velocity advection stencil test and references #632

Merged: 11 commits merged from fix_benchmark_test into main on Jan 10, 2025

Conversation

havogt (Contributor) commented Jan 6, 2025

Fixes the failure in the benchmark test that is run after merge.

The failure was not detected on the PR because only the verification tests are required there, not the benchmark tests. The failing test had been marked xfail because of a wrong numpy reference, but the benchmark tests are run with the xfail markers ignored, so the failure only surfaced after merge.

This PR fixes the numpy references for the xfailed tests and removes --runxfail from the benchmark CI plan.
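
For background: --runxfail is a standard pytest option that ignores xfail markers, so tests that are expected to fail are executed and reported like ordinary tests. A minimal sketch of the mechanism (the test below is hypothetical, not from this repository):

import pytest

@pytest.mark.xfail(reason="wrong numpy reference")
def test_velocity_advection_reference():
    # Under a plain `pytest` run this failure is reported as XFAIL and
    # does not break the suite; under `pytest --runxfail` the marker is
    # ignored and the assertion error fails the run.
    assert False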

havogt commented Jan 6, 2025

cscs-ci run default

havogt commented Jan 6, 2025

cscs-ci run benchmark

havogt requested a review from nfarabullini, January 6, 2025 12:22
havogt commented Jan 6, 2025

cscs-ci run default

havogt commented Jan 6, 2025

launch jenkins spack

havogt commented Jan 6, 2025

cscs-ci run benchmark

havogt mentioned this pull request Jan 7, 2025
havogt commented Jan 7, 2025

cscs-ci run default

havogt commented Jan 7, 2025

cscs-ci run benchmark

havogt commented Jan 8, 2025

cscs-ci run default

Comment on lines 22 to 30
def _horizontal_range(grid):
    if isinstance(grid, icon_grid.IconGrid):
        # For the ICON grid we use the proper domain bounds
        # (otherwise we will run into non-protected skip values).
        edge_domain = h_grid.domain(dims.EdgeDim)
        return grid.start_index(edge_domain(h_grid.Zone.LATERAL_BOUNDARY_LEVEL_7)), grid.end_index(
            edge_domain(h_grid.Zone.HALO)
        )
    else:
        # Simple grids have no boundary/halo zones; use the full edge range.
        return 0, gtx.int32(grid.num_edges)
havogt (Contributor Author) commented:
This is a new pattern and breaks the grid abstraction. Maybe all grids should provide markers, but for simple grids we set them to 0...num_x everywhere?
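
A hedged sketch of that idea (the SimpleGrid class and the start_index/end_index signatures below are assumptions for illustration, not the repository's actual API): every grid exposes the marker methods, and a simple grid implements them trivially as the full range, so callers never need an isinstance check.

# Hypothetical illustration of the proposed pattern.
class SimpleGrid:
    def __init__(self, num_edges: int):
        self.num_edges = num_edges

    def start_index(self, domain) -> int:
        # A simple grid has no lateral-boundary or halo zones,
        # so every domain starts at 0.
        return 0

    def end_index(self, domain) -> int:
        # ...and every domain ends at the total number of edges.
        return self.num_edges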

if hasattr(grid, "end_index")
else gtx.int32(grid.num_edges)
)
return start, end
nfarabullini (Contributor) commented Jan 9, 2025:

can you take this code out of the function to make it analogous to other tests?

havogt (Contributor Author) replied:

It's used in 2 places, therefore I extracted it into a function.

havogt (Contributor Author) replied:

inlined

havogt requested a review from nfarabullini, January 9, 2025 10:56
havogt commented Jan 9, 2025

cscs-ci run default


Mandatory Tests

Please make sure you run these tests via comment before you merge!

  • cscs-ci run default
  • launch jenkins spack

Optional Tests

To run benchmarks you can use:

  • cscs-ci run benchmark

To run tests and benchmarks with the DaCe backend you can use:

  • cscs-ci run dace

In case your change might affect the downstream icon-exclaim, please consider running:

  • launch jenkins icon

For more detailed information please look at CI in the EXCLAIM universe.

havogt commented Jan 10, 2025

cscs-ci run default

@@ -4,8 +4,7 @@ include:
 .benchmark_model_stencils:
   stage: benchmark
   script:
-    # force execution of tests where validation is expected to fail, because the reason for failure is wrong numpy reference
-    - nox -s benchmark_model-3.10 -- --backend=$BACKEND --grid=$GRID --runxfail
+    - nox -s benchmark_model-3.10 -- --backend=$BACKEND --grid=$GRID
nfarabullini (Contributor) commented:

I will apply the same change in the dace CI pipeline:

# force execution of tests where validation is expected to fail, because the reason for failure is wrong numpy reference

havogt merged commit d5de356 into main on Jan 10, 2025
3 checks passed
havogt deleted the fix_benchmark_test branch on January 10, 2025 at 15:00