Benchmarking OpenMPI with OSU micro-benchmarks #45
mpi/openmpi/4.1.1/gnu-4.9.2

Point to point (2 processes, one on each node):

- osu_latency - Latency Test
- osu_bw - Bandwidth Test
There are also 2D graphs at each size, e.g. osu-bw-16, which has a nice repeating pattern.
Example script for osu_latency:
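Roughly along these lines (a sketch: the module name is the one used here, while the job name, wallclock, PE size and benchmark install path are assumptions):

```bash
#!/bin/bash -l
# Sketch of an SGE job script for the point-to-point tests on Young.
# Job name, wallclock, PE size and benchmark path are assumptions.
#$ -N osu-pt2pt
#$ -l h_rt=0:15:0
#$ -pe mpi 80
#$ -cwd

module load mpi/openmpi/4.1.1/gnu-4.9.2

OSU_DIR=$HOME/osu-micro-benchmarks/mpi/pt2pt   # assumed install location

# One rank on each of the two nodes.
mpirun -np 2 --map-by node $OSU_DIR/osu_latency
mpirun -np 2 --map-by node $OSU_DIR/osu_bw
```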
These two only took 1-2 mins to run.
osu_mbw_mr - Multiple Bandwidth / Message Rate Test
This test can use all 80 cores across the two nodes. It requires block (sequential) rather than round-robin rank assignment. This one segfaulted; need to check whether I'm running it correctly.
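A sketch of what block placement looks like with OpenMPI's mpirun (80 ranks over two 40-core nodes and the install path are assumptions):

```bash
# osu_mbw_mr pairs rank i with rank i + n/2, so the first half of the ranks
# should fill node 1 and the second half node 2 (block placement).
# Rank count and install path are assumptions.
OSU_DIR=$HOME/osu-micro-benchmarks/mpi/pt2pt

mpirun -np 80 --map-by core --bind-to core $OSU_DIR/osu_mbw_mr

# Round-robin (alternating-node) placement, for comparison:
# mpirun -np 80 --map-by node $OSU_DIR/osu_mbw_mr
```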
James has reminded me of -pe wss on Young, which keeps a job within a single switch; we should use it for benchmarking.
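In job script terms that would be something like the following (the slot count is an assumption):

```bash
# Request the wss parallel environment so the job stays within one switch;
# the 80-slot request is an assumption for two 40-core nodes.
#$ -pe wss 80
```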
mpi/openmpi/4.1.1/gnu-4.9.2

Collective non-blocking (two nodes), ran with defaults atm:

- osu_bcast - MPI_Bcast Latency Test
- osu_allgather - MPI_Allgather Latency Test
- osu_alltoall - MPI_Alltoall Latency Test
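Presumably the runs looked something like this (a sketch; the rank count of 40 per node on two nodes and the install path are assumptions):

```bash
# Collective tests across the two nodes, using the benchmarks' default options.
COLL_DIR=$HOME/osu-micro-benchmarks/mpi/collective   # assumed install path

mpirun -np 80 $COLL_DIR/osu_bcast
mpirun -np 80 $COLL_DIR/osu_allgather
mpirun -np 80 $COLL_DIR/osu_alltoall
```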
Have built a set with mpi/openmpi/3.1.6/gnu-4.9.2 as well.
mpi/openmpi/3.1.6/gnu-4.9.2

Point to point (2 processes, one on each node):

- osu_latency - Latency Test
- osu_bw - Bandwidth Test

Collective non-blocking (two nodes):

- osu_bcast - MPI_Bcast Latency Test
- osu_allgather - MPI_Allgather Latency Test
- osu_alltoall - MPI_Alltoall Latency Test
With the exception of bcast, which is rather different for 3.1.6 at the larger message sizes in both graphs, they'rethesamepicture.gif (± jitter). The average latency reported on the osu_bcast graph also doesn't seem to match the curve pictured for OpenMPI 3.1.6; going to try including the full min/max output.
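If I'm remembering the option correctly, the OSU benchmarks take -f to print the full listing with min/max latency and iteration counts, so the rerun would be something like this (flag and path assumed, not confirmed here):

```bash
# Rerun osu_bcast with the full listing (min/max latency and iterations);
# the -f option and the install path are assumptions.
COLL_DIR=$HOME/osu-micro-benchmarks/mpi/collective
mpirun -np 80 $COLL_DIR/osu_bcast -f
```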
Pretty close.
Results from rerunning bcast for OpenMPI 3.1.6 with the full min/max output:

Big difference in max and min latency on the last two sizes, and the graph it draws doesn't show the max points. (It also doesn't make sense with the minimums, since those go below 275.34...)

Small sizes
From #44 we want to know what Spack variants to build our main OpenMPI with. We are going to use the C MPI benchmarks from https://mvapich.cse.ohio-state.edu/benchmarks/ to compare how well the different builds perform on our OmniPath clusters.
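As a strawman for the comparison, the kind of spec in question would be something like the following; the variant names and values are assumptions to check against `spack info openmpi`:

```bash
# Candidate OmniPath-aware OpenMPI spec to compare against the vader-only
# build -- a sketch; variant names/values are assumptions.
spack install openmpi@4.1.1 %gcc@4.9.2 fabrics=psm2 schedulers=sge
```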
Our existing mpi/openmpi/4.1.1/gnu-4.9.2 should be below acceptable performance (we assume!), using only vader.

Compiling the OSU microbenchmarks on Young
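The build itself is roughly the stock autotools dance pointed at the MPI compiler wrappers; the tarball version and install prefix below are assumptions:

```bash
# Build the OSU micro-benchmarks against the loaded OpenMPI -- a sketch;
# the tarball version and install prefix are assumptions.
module load mpi/openmpi/4.1.1/gnu-4.9.2

tar xf osu-micro-benchmarks-5.8.tar.gz
cd osu-micro-benchmarks-5.8
./configure CC=mpicc CXX=mpicxx --prefix=$HOME/osu-openmpi-4.1.1
make && make install
```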
Now got directories full of benchmarks:
Going to start with point-to-point, then look at some collectives.