Decomposed mesh tallies when domain decomposition is active #212

Merged: 5 commits merged into CEMeNT-PSAAP:better_tally on Aug 14, 2024
Conversation

alexandermote
Contributor

No description provided.

@alexandermote
Contributor Author

Revised mesh tally decomposition after a meeting with @ilhamv:

  • Reduced N_bin to match decomposed tally values in main.py
  • Reduced tally sizes in the make_type_mesh_tally function in type_.py (a rough sketch of the sizing follows)
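
The intent of the sizing change, as a hedged sketch; the function and shape names below are illustrative, not the actual main.py / type_.py code:

```python
import numpy as np

# Hypothetical sketch: size the tally for one subdomain's slice of the mesh
# (assumes an even decomposition along the x-axis; names are illustrative).
def local_tally_shape(global_shape, n_subdomains):
    Nx, Ny, Nz, N_score = global_shape
    return (Nx // n_subdomains, Ny, Nz, N_score)

shape = local_tally_shape((40, 40, 40, 2), n_subdomains=4)
N_bin = int(np.prod(shape))  # flattened size allocated per rank
```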

@ilhamv
Collaborator

ilhamv commented Jul 23, 2024

Thanks, @alexandermote!

The MPI DD tests crashed, but all others passed, which is a good sign. My recommendation:
(1) Identify the issue/bug in the non-Numba MPI test by manually running it with several ranks (the test uses 4 ranks).
(2) Then move on to the Numba MPI test.
And don't forget to run Black (e.g., run black *.py on MCDC/mcdc).

@alexandermote
Contributor Author

alexandermote commented Aug 12, 2024

I made several changes with this update:

  • Fixed an indexing issue that was causing DD mesh bounds to be set improperly
  • Added an MPI.Gather call before the HDF5 output is created to re-assemble the mesh tally
  • Added an if not DD guard to the MPI.Reduce calls in the tally closeout functions, since each processor is working on a different section of the tally (a sketch of this guard follows the list)
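
A minimal sketch of what that closeout guard might look like with mpi4py; the variable names are illustrative, not MC/DC's actual ones:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def tally_closeout(local_tally, dd_active):
    # With domain decomposition, each rank owns a distinct slice of the
    # tally, so a global sum would mix unrelated bins; only reduce when
    # the tally is replicated across all ranks.
    if not dd_active:
        total = np.zeros_like(local_tally)
        comm.Reduce(local_tally, total, op=MPI.SUM, root=0)
        return total
    return local_tally
```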

This should allow at least the non-Numba tests to pass. However, it currently works only when a single processor is assigned to each subdomain. To support multiple processors per subdomain, I will need to add an MPI.Reduce restricted to the processors within the same subdomain (a possible approach is sketched below).
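
One possible way to do that per-subdomain reduction is to split MPI.COMM_WORLD by subdomain index and reduce within each sub-communicator. A hedged sketch; the rank-to-subdomain mapping here is made up:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
dd_index = comm.rank // 2  # hypothetical: two ranks per subdomain

# Ranks sharing a subdomain end up in the same sub-communicator
subdomain_comm = comm.Split(color=dd_index, key=comm.rank)

local_tally = np.zeros(100)
summed = np.empty_like(local_tally)
# Sum only over the ranks that worked on this subdomain's slice
subdomain_comm.Reduce(local_tally, summed, op=MPI.SUM, root=0)
```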
Also, if there is a better spot for the mesh tally reassembly to take place, I'm happy to move it. Because of the way the tally data is packed, I can't replace the decomposed tally with the reassembled one, since it is a different size. The only solution I could think of was to reassemble the tally inside the generate_hdf5 function and use it there.
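
For reference, the reassembly inside generate_hdf5 could look roughly like this. This is a sketch assuming h5py, an even split along one axis, and illustrative dataset and argument names, not the actual MC/DC output code:

```python
import h5py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def generate_hdf5(local_tally, filename="output.h5"):
    # Gather every rank's decomposed slice on root...
    pieces = None
    if comm.rank == 0:
        pieces = np.empty((comm.size,) + local_tally.shape, dtype=local_tally.dtype)
    comm.Gather(local_tally, pieces, root=0)
    if comm.rank == 0:
        # ...stitch the subdomain slices back along the decomposed axis and
        # write the full mesh, leaving the packed per-rank tally untouched.
        full_tally = np.concatenate(pieces, axis=0)
        with h5py.File(filename, "w") as f:
            f.create_dataset("tally/mesh", data=full_tally)
```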

ilhamv merged commit 0c65022 into CEMeNT-PSAAP:better_tally on Aug 14, 2024
6 checks passed