
MPI shared memory #14

Open · wants to merge 4 commits into base: unstable

Conversation

@hmenke (Member) commented Sep 26, 2023

All tests were run with multiple nodes and a different number of slots on each node:

$ cat hostfile 
pcscqm04 slots=4
pcscqm05 slots=3
pcscqm06 slots=5
pcscqm07 slots=2
$ mpirun -hostfile ./hostfile build/test/c++/mpi_window

Some ideas:

  • MPI allocator

    Similar to std::allocator, implement a shared_allocator and a distributed_shared_allocator, so that one can write e.g. std::vector<double, mpi::shared_allocator<double>>.

    Questions:

    • Should this be in mpi or in nda?
    • For shared memory, a best guess is probably fine: use split_shared() on the default communicator. For distributed shared memory, however, internode communication is required, and the right communicator cannot easily be guessed from the default one. Maybe a global hash table with allocation information is needed?
    • For shared memory, race conditions must be prevented somehow.
    • On top of that, for distributed shared memory, accesses must be fenced and broadcast between nodes. That is not so easy to abstract away.

@hmenke hmenke force-pushed the shm branch 3 times, most recently from 92e2e2b to 1738e29 Compare October 4, 2023 07:41
@hmenke hmenke changed the title WIP: MPI shared memory MPI shared memory Oct 9, 2023
@hmenke hmenke marked this pull request as ready for review October 9, 2023 10:00
test/c++/mpi_allocator.cpp (outdated review thread, resolved)
Comment on lines +150 to +154
// Expose some commonly used attributes
BaseType* base() const noexcept { return static_cast<BaseType*>(get_attr(MPI_WIN_BASE)); }
MPI_Aint size() const noexcept { return *static_cast<MPI_Aint*>(get_attr(MPI_WIN_SIZE)); }
int disp_unit() const noexcept { return *static_cast<int*>(get_attr(MPI_WIN_DISP_UNIT)); }
};
Member Author:

Can these function calls be turned into members that are initialized at construction?


/// Load data from a remote memory window.
template <typename TargetType = BaseType, typename OriginType>
std::enable_if_t<has_mpi_type<OriginType> && has_mpi_type<TargetType>, void>
Member Author:
Replace std::enable_if with requires.

@hmenke (Member, Author) left a comment:

Can we make this compatible with non-MPI builds?


void free() noexcept {
  if (win != MPI_WIN_NULL) {
    MPI_Win_free(&win);
  }
}
Member Author:
Check that MPI_Win_free is indeed blocking on all ranks.

@hmenke hmenke mentioned this pull request Jan 22, 2025