(0.88.0) MPI communication and computation overlap in the HydrostaticFreeSurfaceModel and NonhydrostaticModel #3125

Merged · 576 commits · Sep 19, 2023

Changes from 1 commit

Commits (576)
eb6f5e6
comment
simone-silvestri Jun 12, 2023
80fdc83
fixed tag problems
simone-silvestri Jun 12, 2023
7cd4b44
bugfix
simone-silvestri Jun 12, 2023
28c26bc
Merge branch 'main' into ss/load-balance-and-corners
simone-silvestri Jun 12, 2023
9f3b273
resolve conflicts
navidcy Jun 13, 2023
d2ad49c
Update scalar_biharmonic_diffusivity.jl
simone-silvestri Jun 13, 2023
165e15e
Update src/Distributed/multi_architectures.jl
simone-silvestri Jun 14, 2023
88569a1
Update src/Distributed/partition_assemble.jl
simone-silvestri Jun 14, 2023
89603f2
Update src/ImmersedBoundaries/ImmersedBoundaries.jl
simone-silvestri Jun 14, 2023
10b37da
Update src/ImmersedBoundaries/active_cells_map.jl
simone-silvestri Jun 14, 2023
a24ef06
Merge branch 'main' into glw/catke-parameter-refactor
glwagner Jun 14, 2023
3c68709
Update src/Distributed/interleave_comm_and_comp.jl
simone-silvestri Jun 14, 2023
0206dd2
Merge branch 'glw/catke-parameter-refactor' of https://github.com/Cli…
glwagner Jun 14, 2023
99d7f98
Clean up batched tridiagonal solver and vertically implicit solver
glwagner Jun 14, 2023
287ac42
Fix bug in batched tridiagonal solver
glwagner Jun 14, 2023
814cd43
bugfix
simone-silvestri Jun 20, 2023
690d61c
Merge branch 'ss/load-balance-and-corners' of github.com:CliMA/Oceana…
simone-silvestri Jun 20, 2023
4b6d8d6
Merge branch 'main' into ss/load-balance-and-corners
simone-silvestri Jun 21, 2023
17e6fc0
Merge remote-tracking branch 'origin/main' into glw/catke-parameter-r…
glwagner Jun 22, 2023
df01667
Try to fix multi region immersed boundary issue
glwagner Jun 22, 2023
3113880
Hopefully fix immersed boundary grid constructor
glwagner Jun 23, 2023
389e243
Another fix
glwagner Jun 23, 2023
5d6fac3
Merge branch 'main' into ss/load-balance-and-corners
navidcy Jun 24, 2023
6492041
fixed project and manifest
simone-silvestri Jun 24, 2023
5633d14
convert instead of FT
simone-silvestri Jun 28, 2023
9f895bb
export KernelParameters
simone-silvestri Jun 28, 2023
9842f6e
remove FT
simone-silvestri Jun 28, 2023
126829c
removed useless where FT
simone-silvestri Jun 28, 2023
c5dc1ec
Merge remote-tracking branch 'origin/main' into ss/load-balance-and-c…
simone-silvestri Jun 28, 2023
9569ac9
small bugfix
simone-silvestri Jun 28, 2023
2917114
update manifest
simone-silvestri Jun 28, 2023
fa38abc
remove unbuffered communication
simone-silvestri Jun 28, 2023
2cac349
little bit of a cleanup
simone-silvestri Jun 28, 2023
8564df2
removed `views` comment
simone-silvestri Jun 28, 2023
a8c29e1
couple of bugfixes
simone-silvestri Jun 28, 2023
db8d996
fixed tests
simone-silvestri Jun 28, 2023
6681636
probably done
simone-silvestri Jun 28, 2023
d1eb3ba
same thing for nonhydrostatic model
simone-silvestri Jun 28, 2023
f2406fb
include file
simone-silvestri Jun 28, 2023
03ff7da
bugfix
simone-silvestri Jun 28, 2023
23a0040
prepare for nonhydrostatic multiregion
simone-silvestri Jun 28, 2023
f2f5de3
also here
simone-silvestri Jun 28, 2023
4e2b04b
bugfix
simone-silvestri Jun 28, 2023
b29a798
other bugfix
simone-silvestri Jun 28, 2023
f0b93e5
fix closures
simone-silvestri Jun 28, 2023
80f07c7
bugfix
simone-silvestri Jun 28, 2023
2f28cb0
simplify
simone-silvestri Jun 28, 2023
4c8136b
2D leith requires 2 halos!
simone-silvestri Jun 28, 2023
b222f57
AMD and Smag require 1 halo!
simone-silvestri Jun 28, 2023
752e6f0
wrong order
simone-silvestri Jun 28, 2023
e36931a
correct halo handling for diffusivities
simone-silvestri Jun 28, 2023
527b240
correct Leith formulation + fixes
simone-silvestri Jun 28, 2023
0f3a06a
`only_local_halos` kwarg in `fill_halo_regions!`
simone-silvestri Jun 28, 2023
718e0f8
bugfix
simone-silvestri Jun 28, 2023
2e33069
FT on GPU
simone-silvestri Jun 28, 2023
4be413e
bugfix
simone-silvestri Jun 28, 2023
ce0628a
bugfix
simone-silvestri Jun 28, 2023
a8285af
last bugfix?
simone-silvestri Jun 28, 2023
07f8d1d
removed all offsets from kernels + fixed all tests
simone-silvestri Jun 28, 2023
e5975db
fix `_compute!`
simone-silvestri Jun 28, 2023
d82d908
finished
simone-silvestri Jun 28, 2023
bd26e8c
fixed broken tests
simone-silvestri Jun 28, 2023
04bd76a
fixed docs
simone-silvestri Jun 28, 2023
e640e2a
miscellaneous changes
simone-silvestri Jun 29, 2023
5333609
bugfix
simone-silvestri Jun 29, 2023
aaf6f25
removed tests for vertical subdivision
simone-silvestri Jun 29, 2023
c6fcc90
test corner passing
simone-silvestri Jun 29, 2023
66e7ef3
correction
simone-silvestri Jun 29, 2023
d53cba6
retry
simone-silvestri Jun 29, 2023
59c7cd5
fixed all problems
simone-silvestri Jun 29, 2023
9b1412d
Added a validation example
simone-silvestri Jun 29, 2023
28f052e
fixed tests
simone-silvestri Jun 29, 2023
4b6743a
try new test
simone-silvestri Jun 29, 2023
b167ad9
fill send buffers in the correct place
simone-silvestri Jun 30, 2023
a85999d
fixed comments
simone-silvestri Jun 30, 2023
2e3fb94
define async
simone-silvestri Jun 30, 2023
1b0f2a8
pass the grid
simone-silvestri Jun 30, 2023
306655a
bugfix
simone-silvestri Jun 30, 2023
4c737f3
fix show method
simone-silvestri Jun 30, 2023
fb0505d
RefValue for mpi_tag
simone-silvestri Jun 30, 2023
d37a781
comment
simone-silvestri Jun 30, 2023
f5d203b
Merge branch 'main' into ss/load-balance-and-corners
simone-silvestri Jul 1, 2023
bcd4d02
add catke preprint
navidcy Jul 2, 2023
80d46de
remove warning; add ref to catke preprint
navidcy Jul 2, 2023
00a5eba
some code cleanup
navidcy Jul 2, 2023
04e603c
correct the example
simone-silvestri Jul 3, 2023
5f96fdc
Merge branch 'ss/load-balance-and-corners' of github.com:CliMA/Oceana…
simone-silvestri Jul 3, 2023
59ae073
Merge branch 'main' into glw/catke-parameter-refactor
navidcy Jul 3, 2023
c8944b7
Update src/TurbulenceClosures/vertically_implicit_diffusion_solver.jl
glwagner Jul 5, 2023
603f50e
bugfix
simone-silvestri Jul 6, 2023
2e06209
Refactor unit tests
glwagner Jul 6, 2023
86c89fd
Merge branch 'glw/catke-parameter-refactor' of https://github.com/Cli…
glwagner Jul 6, 2023
c724537
Generalize regridding for lat-lon
glwagner Jul 7, 2023
9b62341
Merge branch 'glw/catke-parameter-refactor' of https://github.com/Cli…
glwagner Jul 7, 2023
9069bf4
bugfix
simone-silvestri Jul 10, 2023
40e87b5
Add newline
glwagner Jul 10, 2023
19bc3dd
small correction
simone-silvestri Jul 12, 2023
54f273c
new tests
simone-silvestri Jul 12, 2023
9e520be
bugfix
simone-silvestri Jul 12, 2023
5755a3a
Merge remote-tracking branch 'origin/main' into glw/catke-parameter-r…
glwagner Jul 12, 2023
85d44f7
bugfix
simone-silvestri Jul 13, 2023
fdc0aea
back for testing
simone-silvestri Jul 13, 2023
f8c73ff
back for testing
simone-silvestri Jul 13, 2023
6885c88
update manifest
simone-silvestri Jul 13, 2023
6955b92
more options
simone-silvestri Jul 14, 2023
876e4e3
more
simone-silvestri Jul 14, 2023
0105179
finished
simone-silvestri Jul 14, 2023
3b79b9a
test hypothesis
simone-silvestri Jul 14, 2023
d6520aa
fixed bug - correct speed now
simone-silvestri Jul 14, 2023
5dbf9aa
add space
simone-silvestri Jul 14, 2023
70ac393
bugfix
simone-silvestri Jul 15, 2023
7d03b63
test
simone-silvestri Jul 15, 2023
056ff34
more info
simone-silvestri Jul 15, 2023
2514130
removed left-right connected computation
simone-silvestri Jul 15, 2023
cea3240
bugfix
simone-silvestri Jul 15, 2023
c1b2049
remove info
simone-silvestri Jul 15, 2023
abea7ef
improve
simone-silvestri Jul 15, 2023
c6deb5e
typo
simone-silvestri Jul 15, 2023
66965ff
bugfix
simone-silvestri Jul 15, 2023
2e7354e
bugfix
simone-silvestri Jul 16, 2023
403e74f
correct comments
simone-silvestri Jul 16, 2023
6580a12
bugfix
simone-silvestri Jul 16, 2023
923d1b2
bugfix prescribed velocities
simone-silvestri Jul 17, 2023
511352d
fixes
simone-silvestri Jul 17, 2023
30acce8
ok on mac
simone-silvestri Jul 17, 2023
0e211d7
bugfix
simone-silvestri Jul 17, 2023
242d590
bug fixed
simone-silvestri Jul 17, 2023
6ea2af3
bugfixxed
simone-silvestri Jul 17, 2023
67d27ca
new default
simone-silvestri Jul 17, 2023
19618b1
bugfix
simone-silvestri Jul 17, 2023
93593f8
Merge remote-tracking branch 'origin/ss/fix_split_explicit' into ss/l…
simone-silvestri Jul 17, 2023
3bb5844
remove <<<<HEAD
simone-silvestri Jul 17, 2023
972730a
bugfix PrescribedVelocityFields
simone-silvestri Jul 17, 2023
cc5af47
default in another PR
simone-silvestri Jul 17, 2023
3644e30
bugfix
simone-silvestri Jul 17, 2023
2f60434
flat grids only in Grids
simone-silvestri Jul 17, 2023
a50ebb8
last bugfix
simone-silvestri Jul 17, 2023
ebdbc22
bugfix
simone-silvestri Jul 17, 2023
18eae2d
try partial cells
simone-silvestri Jul 20, 2023
3b8f2d7
bugfix
simone-silvestri Jul 20, 2023
7d97dec
bugfix
simone-silvestri Jul 21, 2023
d5b3978
Merge branch 'main' into glw/catke-parameter-refactor
glwagner Jul 22, 2023
dad1301
Update test_turbulence_closures.jl
glwagner Jul 22, 2023
c57d2c7
small fixes
simone-silvestri Jul 25, 2023
14a32a1
rework IBG and MRG
simone-silvestri Jul 25, 2023
43c83ea
Update src/TurbulenceClosures/vertically_implicit_diffusion_solver.jl
simone-silvestri Jul 25, 2023
45bdebc
small bugfix
simone-silvestri Jul 25, 2023
efa1029
Merge branch 'glw/catke-parameter-refactor' of github.com:CliMA/Ocean…
simone-silvestri Jul 26, 2023
7ff28da
remove multiregion ibg with arrays for the moment
simone-silvestri Jul 26, 2023
9582465
bugfix
simone-silvestri Jul 26, 2023
040c1bd
little cleaner
simone-silvestri Jul 26, 2023
fe5e413
fixed tests
simone-silvestri Jul 26, 2023
2530c9e
Merge remote-tracking branch 'origin/main' into glw/catke-parameter-r…
simone-silvestri Jul 27, 2023
19164ed
Merge remote-tracking branch 'origin/main' into ss/load-balance-and-c…
simone-silvestri Jul 27, 2023
cd66ed3
see what the error is
simone-silvestri Jul 28, 2023
cd93563
allow changing halos from checkpointer
simone-silvestri Jul 28, 2023
2c7a633
test it
simone-silvestri Jul 28, 2023
5310c55
finally fixed it
simone-silvestri Jul 28, 2023
ac408b5
better naming
simone-silvestri Jul 28, 2023
746a014
bugfix
simone-silvestri Jul 30, 2023
a4aa696
bugfix
simone-silvestri Jul 30, 2023
9f8c1bb
bugfix
simone-silvestri Jul 31, 2023
11f01d8
bugfix
simone-silvestri Jul 31, 2023
2c0a170
removed useless tendency
simone-silvestri Jul 31, 2023
24c6815
small fix
simone-silvestri Jul 31, 2023
d19ab3c
dummy commit
simone-silvestri Jul 31, 2023
0cbfff9
merge
simone-silvestri Aug 4, 2023
7688452
fix active cell map
simone-silvestri Aug 4, 2023
0e81f12
comment
simone-silvestri Aug 4, 2023
7e4bf9a
bugfix
simone-silvestri Aug 6, 2023
347367e
bugfix
simone-silvestri Aug 6, 2023
2068462
removed useless tendency
simone-silvestri Aug 6, 2023
c972d07
maybe just keep it does not harm too much
simone-silvestri Aug 6, 2023
e01c38c
should have fixed it?
simone-silvestri Aug 6, 2023
45fb9d5
let's go now
simone-silvestri Aug 6, 2023
170dc90
done
simone-silvestri Aug 6, 2023
f0ac1da
bugfix
simone-silvestri Aug 6, 2023
c9a4ae6
no need for this
simone-silvestri Aug 6, 2023
bae6127
Merge remote-tracking branch 'origin/main' into ss/load-balance-and-c…
simone-silvestri Aug 8, 2023
6fc688e
convert Δt in time stepping
simone-silvestri Aug 15, 2023
234bd8e
maximum
simone-silvestri Aug 15, 2023
be4d885
Merge branch 'ss/load-balance-and-corners' of github.com:CliMA/Oceana…
simone-silvestri Aug 15, 2023
d6e338d
minimum substeps
simone-silvestri Aug 15, 2023
2eae774
more flexibility
simone-silvestri Aug 16, 2023
d6455c1
Merge branch 'ss/load-balance-and-corners' of github.com:CliMA/Oceana…
simone-silvestri Aug 16, 2023
04242b4
bugfix
simone-silvestri Aug 16, 2023
2b00958
mutlidimensional
simone-silvestri Aug 18, 2023
ea8b2ba
Merge branch 'main' into ss/load-balance-and-corners
simone-silvestri Aug 21, 2023
3202adb
fallback methods
simone-silvestri Aug 21, 2023
34eae95
Merge branch 'ss/load-balance-and-corners' of github.com:CliMA/Oceana…
simone-silvestri Aug 21, 2023
086b21e
test a thing
simone-silvestri Aug 22, 2023
9e1728f
change
simone-silvestri Aug 22, 2023
f6f0f3e
chnage
simone-silvestri Aug 22, 2023
7e61e0b
change
simone-silvestri Aug 22, 2023
ecb5664
change
simone-silvestri Aug 22, 2023
38bc808
update
simone-silvestri Aug 22, 2023
836d629
update
simone-silvestri Aug 22, 2023
636abdb
new offsets + return to previous KA
simone-silvestri Aug 28, 2023
dad5ad9
bugfix
simone-silvestri Aug 28, 2023
10b2e97
bugfixxed
simone-silvestri Aug 28, 2023
25316a6
remove debugging
simone-silvestri Aug 28, 2023
6d21230
going back
simone-silvestri Sep 6, 2023
1dc301f
Merge remote-tracking branch 'origin/ss/mpi-with-catke' into ss/load-…
simone-silvestri Sep 6, 2023
bf5d06b
Merge remote-tracking branch 'origin/main' into ss/load-balance-and-c…
simone-silvestri Sep 6, 2023
f8de976
more robus partitioning
simone-silvestri Sep 6, 2023
4824add
quite new
simone-silvestri Sep 6, 2023
4416f37
bugfix
simone-silvestri Sep 6, 2023
74ef9eb
updated Manifest
simone-silvestri Sep 6, 2023
20b470c
build with 1.9.3
simone-silvestri Sep 6, 2023
943458a
switch boundary_buffer to required_halo_size
simone-silvestri Sep 8, 2023
13982e3
bugfix
simone-silvestri Sep 8, 2023
a5ff1bc
Update src/Models/HydrostaticFreeSurfaceModels/single_column_model_mo…
simone-silvestri Sep 8, 2023
44cff40
Update src/Models/HydrostaticFreeSurfaceModels/update_hydrostatic_fre…
simone-silvestri Sep 8, 2023
df6967b
bugfix
simone-silvestri Sep 8, 2023
d62fa07
Merge branch 'ss/load-balance-and-corners' of github.com:CliMA/Oceana…
simone-silvestri Sep 8, 2023
229e4aa
biharmonic requires 2 halos
simone-silvestri Sep 8, 2023
d354418
buggfix
simone-silvestri Sep 9, 2023
30aefe5
compute_auxiliaries!
simone-silvestri Sep 10, 2023
b8e913f
bugfix
simone-silvestri Sep 10, 2023
8c4ed66
fixed it
simone-silvestri Sep 10, 2023
271aa86
little change
simone-silvestri Sep 12, 2023
1db41bb
some changes
simone-silvestri Sep 12, 2023
f2bc008
bugfix
simone-silvestri Sep 12, 2023
0911063
bugfix
simone-silvestri Sep 12, 2023
e6608a6
bugfixxed
simone-silvestri Sep 18, 2023
38f2b87
another bugfix
simone-silvestri Sep 18, 2023
4ed8333
Merge branch 'main' into ss/load-balance-and-corners
simone-silvestri Sep 18, 2023
7da9b59
compute_diffusivities!
simone-silvestri Sep 18, 2023
dafa13c
required halo size
simone-silvestri Sep 18, 2023
56892eb
all fixed
simone-silvestri Sep 18, 2023
bf927ae
shorten line
simone-silvestri Sep 19, 2023
bac7f4e
fix comment
simone-silvestri Sep 19, 2023
d48d1c9
remove abbreviation
simone-silvestri Sep 19, 2023
3679421
remove unused functions
simone-silvestri Sep 19, 2023
92739b0
better explanation of the MPI tag
simone-silvestri Sep 19, 2023
eab6dde
Update src/ImmersedBoundaries/active_cells_map.jl
simone-silvestri Sep 19, 2023
3bbcdcd
Update src/Solvers/batched_tridiagonal_solver.jl
simone-silvestri Sep 19, 2023
4259130
change name
simone-silvestri Sep 19, 2023
c118bf0
Merge branch 'ss/load-balance-and-corners' of github.com:CliMA/Oceana…
simone-silvestri Sep 19, 2023
d5e75a3
docstring
simone-silvestri Sep 19, 2023
256de76
name change on rank
simone-silvestri Sep 19, 2023
0bfeb97
interior active cells
simone-silvestri Sep 19, 2023
1b96804
calculate -> compute
simone-silvestri Sep 19, 2023
8f6fc68
fixed tests
simone-silvestri Sep 19, 2023
de64e92
do not compute momentum in prescribed velocities
simone-silvestri Sep 19, 2023
58d92ec
DistributedComputations
simone-silvestri Sep 19, 2023
cab51e5
DistributedComputations part #2
simone-silvestri Sep 19, 2023
dfbc048
bugfix
simone-silvestri Sep 19, 2023
55b9299
fixed the docs
simone-silvestri Sep 19, 2023
b51e681
Merge branch 'main' into ss/load-balance-and-corners
navidcy Sep 19, 2023
DistributedComputations part #2
simone-silvestri committed Sep 19, 2023

commit cab51e539640339dc28594194679c0d241731b5f
2 changes: 1 addition & 1 deletion benchmark/distributed_nonhydrostatic_model_mpi.jl
@@ -28,7 +28,7 @@ local_rank = MPI.Comm_rank(comm)
@info "Setting up distributed nonhydrostatic model with N=($Nx, $Ny, $Nz) grid points and ranks=($Rx, $Ry, $Rz) on rank $local_rank..."

topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), topology=topo, ranks=(Rx, Ry, Rz), communicator=MPI.COMM_WORLD)
arch = Distributed(CPU(), topology=topo, ranks=(Rx, Ry, Rz), communicator=MPI.COMM_WORLD)
distributed_grid = RectilinearGrid(arch, topology=topo, size=(Nx, Ny, Nz), extent=(1, 1, 1))
model = NonhydrostaticModel(grid=distributed_grid)

2 changes: 1 addition & 1 deletion benchmark/distributed_shallow_water_model_mpi.jl
@@ -30,7 +30,7 @@ Ry = parse(Int, ARGS[4])
@info "Setting up distributed shallow water model with N=($Nx, $Ny) grid points and ranks=($Rx, $Ry) on rank $local_rank..."

topo = (Periodic, Periodic, Flat)
arch = MultiProcess(CPU(), topology=topo, ranks=(Rx, Ry, 1), communicator=MPI.COMM_WORLD)
arch = Distributed(CPU(), topology=topo, ranks=(Rx, Ry, 1), communicator=MPI.COMM_WORLD)
distributed_grid = RectilinearGrid(arch, topology=topo, size=(Nx, Ny), extent=(1, 1))
model = ShallowWaterModel(grid=distributed_grid, gravitational_acceleration=1.0)
set!(model, h=1)
2 changes: 1 addition & 1 deletion src/DistributedComputations/DistributedComputations.jl
@@ -1,7 +1,7 @@
module DistributedComputations

export
MultiProcess, child_architecture, reconstruct_global_grid,
Distributed, child_architecture, reconstruct_global_grid,
inject_halo_communication_boundary_conditions,
DistributedFFTBasedPoissonSolver

34 changes: 17 additions & 17 deletions src/DistributedComputations/distributed_architectures.jl
@@ -6,7 +6,7 @@ import Oceananigans.Architectures: device, arch_array, array_type, child_archite
import Oceananigans.Grids: zeros
import Oceananigans.Utils: sync_device!

struct MultiProcess{A, M, R, I, ρ, C, γ, T} <: AbstractArchitecture
struct Distributed{A, M, R, I, ρ, C, γ, T} <: AbstractArchitecture
child_architecture :: A
local_rank :: R
local_index :: I
@@ -22,7 +22,7 @@ end
#####

"""
MultiProcess(child_architecture = CPU();
Distributed(child_architecture = CPU();
topology,
ranks,
devices = nothing,
@@ -57,7 +57,7 @@ Keyword arguments
- `communicator`: the MPI communicator, `MPI.COMM_WORLD`. This keyword argument should not be tampered with
if not for testing or developing. Change at your own risk!
"""
function MultiProcess(child_architecture = CPU();
function Distributed(child_architecture = CPU();
topology,
ranks,
devices = nothing,
@@ -101,28 +101,28 @@ function MultiProcess(child_architecture = CPU();
M = typeof(mpi_requests)
T = typeof(Ref(0))

return MultiProcess{A, M, R, I, ρ, C, γ, T}(child_architecture, local_rank, local_index, ranks, local_connectivity, communicator, mpi_requests, Ref(0))
return Distributed{A, M, R, I, ρ, C, γ, T}(child_architecture, local_rank, local_index, ranks, local_connectivity, communicator, mpi_requests, Ref(0))
end

const MultiCPUProcess = MultiProcess{CPU}
const MultiGPUProcess = MultiProcess{GPU}
const DistributedCPU = Distributed{CPU}
const DistributedGPU = Distributed{GPU}

const BlockingMultiProcess = MultiProcess{<:Any, <:Nothing}
const BlockingDistributed = Distributed{<:Any, <:Nothing}

#####
##### All the architectures
#####

child_architecture(arch::MultiProcess) = arch.child_architecture
device(arch::MultiProcess) = device(child_architecture(arch))
arch_array(arch::MultiProcess, A) = arch_array(child_architecture(arch), A)
zeros(FT, arch::MultiProcess, N...) = zeros(FT, child_architecture(arch), N...)
array_type(arch::MultiProcess) = array_type(child_architecture(arch))
sync_device!(arch::MultiProcess) = sync_device!(arch.child_architecture)
child_architecture(arch::Distributed) = arch.child_architecture
device(arch::Distributed) = device(child_architecture(arch))
arch_array(arch::Distributed, A) = arch_array(child_architecture(arch), A)
zeros(FT, arch::Distributed, N...) = zeros(FT, child_architecture(arch), N...)
array_type(arch::Distributed) = array_type(child_architecture(arch))
sync_device!(arch::Distributed) = sync_device!(arch.child_architecture)

cpu_architecture(arch::MultiCPUProcess) = arch
cpu_architecture(arch::MultiGPUProcess) =
MultiProcess(CPU(), arch.local_rank, arch.local_index, arch.ranks,
cpu_architecture(arch::DistributedCPU) = arch
cpu_architecture(arch::DistributedGPU) =
Distributed(CPU(), arch.local_rank, arch.local_index, arch.ranks,
arch.connectivity, arch.communicator, arch.mpi_requests, arch.mpi_tag)

#####
@@ -223,7 +223,7 @@ end
##### Pretty printing
#####

function Base.show(io::IO, arch::MultiProcess)
function Base.show(io::IO, arch::Distributed)
c = arch.connectivity
print(io, "Distributed architecture (rank $(arch.local_rank)/$(prod(arch.ranks)-1)) [index $(arch.local_index) / $(arch.ranks)]\n",
"└── child architecture: $(typeof(child_architecture(arch))) \n",
(Changes in another file; path not shown in this view)
@@ -33,7 +33,7 @@ Return a FFT-based solver for the Poisson equation,
∇²φ = b
```
for `MultiProcess`itectures.
for `Distributed`itectures.
Supported configurations
========================
@@ -80,7 +80,7 @@ Restrictions
============
The algorithm for two-dimensional decompositions requires that `Nz = size(global_grid, 3)` is larger
than either `Rx = ranks[1]` or `Ry = ranks[2]`, where `ranks` are configured when building `MultiProcess`.
than either `Rx = ranks[1]` or `Ry = ranks[2]`, where `ranks` are configured when building `Distributed`.
If `Nz` does not satisfy this condition, we can only support a one-dimensional decomposition.
Algorithm for one-dimensional decompositions
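As a concrete reading of the restriction above (an illustrative check, not code from the solver):

Rx, Ry, Rz = 4, 2, 1     # ranks used to build the Distributed architecture
Nz = 16                  # size(global_grid, 3)
supports_2d_decomposition = Nz > Rx || Nz > Ry   # true here; with Nz = 2 only a one-dimensional decomposition is possible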
22 changes: 11 additions & 11 deletions src/DistributedComputations/distributed_grids.jl
@@ -13,20 +13,20 @@ using Oceananigans.ImmersedBoundaries

import Oceananigans.Grids: RectilinearGrid, LatitudeLongitudeGrid, with_halo

const DistributedGrid{FT, TX, TY, TZ} = AbstractGrid{FT, TX, TY, TZ, <:MultiProcess}
const DistributedGrid{FT, TX, TY, TZ} = AbstractGrid{FT, TX, TY, TZ, <:Distributed}
const DistributedRectilinearGrid{FT, TX, TY, TZ, FX, FY, FZ, VX, VY, VZ} =
RectilinearGrid{FT, TX, TY, TZ, FX, FY, FZ, VX, VY, VZ, <:MultiProcess} where {FT, TX, TY, TZ, FX, FY, FZ, VX, VY, VZ}
RectilinearGrid{FT, TX, TY, TZ, FX, FY, FZ, VX, VY, VZ, <:Distributed} where {FT, TX, TY, TZ, FX, FY, FZ, VX, VY, VZ}
const DistributedLatitudeLongitudeGrid{FT, TX, TY, TZ, M, MY, FX, FY, FZ, VX, VY, VZ} =
LatitudeLongitudeGrid{FT, TX, TY, TZ, M, MY, FX, FY, FZ, VX, VY, VZ, <:MultiProcess} where {FT, TX, TY, TZ, M, MY, FX, FY, FZ, VX, VY, VZ}
LatitudeLongitudeGrid{FT, TX, TY, TZ, M, MY, FX, FY, FZ, VX, VY, VZ, <:Distributed} where {FT, TX, TY, TZ, M, MY, FX, FY, FZ, VX, VY, VZ}

const DistributedImmersedBoundaryGrid = ImmersedBoundaryGrid{FT, TX, TY, TZ, <:DistributedGrid, I, M, <:MultiProcess} where {FT, TX, TY, TZ, I, M}
const DistributedImmersedBoundaryGrid = ImmersedBoundaryGrid{FT, TX, TY, TZ, <:DistributedGrid, I, M, <:Distributed} where {FT, TX, TY, TZ, I, M}

"""
RectilinearGrid(arch::MultiProcess, FT=Float64; kw...)
RectilinearGrid(arch::Distributed, FT=Float64; kw...)
Return the rank-local portion of `RectilinearGrid` on `arch`itecture.
"""
function RectilinearGrid(arch::MultiProcess,
function RectilinearGrid(arch::Distributed,
FT::DataType = Float64;
size,
x = nothing,
@@ -69,11 +69,11 @@ function RectilinearGrid(arch::MultiProcess,
end

"""
LatitudeLongitudeGrid(arch::MultiProcess, FT=Float64; kw...)
LatitudeLongitudeGrid(arch::Distributed, FT=Float64; kw...)
Return the rank-local portion of `LatitudeLongitudeGrid` on `arch`itecture.
"""
function LatitudeLongitudeGrid(arch::MultiProcess,
function LatitudeLongitudeGrid(arch::Distributed,
FT::DataType = Float64;
precompute_metrics = true,
size,
@@ -321,17 +321,17 @@ function scatter_grid_properties(global_grid)
return x, y, z, topo, halo
end

function scatter_local_grids(arch::MultiProcess, global_grid::RectilinearGrid, local_size)
function scatter_local_grids(arch::Distributed, global_grid::RectilinearGrid, local_size)
x, y, z, topo, halo = scatter_grid_properties(global_grid)
return RectilinearGrid(arch, eltype(global_grid); size=local_size, x=x, y=y, z=z, halo=halo, topology=topo)
end

function scatter_local_grids(arch::MultiProcess, global_grid::LatitudeLongitudeGrid, local_size)
function scatter_local_grids(arch::Distributed, global_grid::LatitudeLongitudeGrid, local_size)
x, y, z, topo, halo = scatter_grid_properties(global_grid)
return LatitudeLongitudeGrid(arch, eltype(global_grid); size=local_size, longitude=x, latitude=y, z=z, halo=halo, topology=topo)
end

function scatter_local_grids(arch::MultiProcess, global_grid::ImmersedBoundaryGrid, local_size)
function scatter_local_grids(arch::Distributed, global_grid::ImmersedBoundaryGrid, local_size)
ib = global_grid.immersed_boundary
ug = global_grid.underlying_grid

(Changes in another file; path not shown in this view)
@@ -1,6 +1,6 @@
import Oceananigans.Utils: launch!

function launch!(arch::MultiProcess, args...; kwargs...)
function launch!(arch::Distributed, args...; kwargs...)
child_arch = child_architecture(arch)
return launch!(child_arch, args...; kwargs...)
end
8 changes: 4 additions & 4 deletions src/DistributedComputations/halo_communication.jl
@@ -123,7 +123,7 @@ end

# Overlapping communication and computation, store requests in a `MPI.Request`
# pool to be waited upon after tendency calculation
if async && !(arch isa BlockingMultiProcess)
if async && !(arch isa BlockingDistributed)
push!(arch.mpi_requests, requests...)
return nothing
end
@@ -238,7 +238,7 @@ for (side, opposite_side) in zip([:west, :south], [:east, :north])
fill_opposite_side_send_buffers! = Symbol("fill_$(opposite_side)_send_buffers!")

@eval begin
function $fill_both_halo!(c, bc_side::DCBCT, bc_opposite_side::DCBCT, size, offset, loc, arch::MultiProcess,
function $fill_both_halo!(c, bc_side::DCBCT, bc_opposite_side::DCBCT, size, offset, loc, arch::Distributed,
grid::DistributedGrid, buffers, args...; only_local_halos = false, kwargs...)

only_local_halos && return nothing
@@ -255,7 +255,7 @@ for (side, opposite_side) in zip([:west, :south], [:east, :north])
return [send_req1, send_req2, recv_req1, recv_req2]
end

function $fill_both_halo!(c, bc_side::DCBCT, bc_opposite_side, size, offset, loc, arch::MultiProcess,
function $fill_both_halo!(c, bc_side::DCBCT, bc_opposite_side, size, offset, loc, arch::Distributed,
grid::DistributedGrid, buffers, args...; only_local_halos = false, kwargs...)

$fill_opposite_side_halo!(c, bc_opposite_side, size, offset, loc, arch, grid, buffers, args...; kwargs...)
@@ -271,7 +271,7 @@ for (side, opposite_side) in zip([:west, :south], [:east, :north])
return [send_req, recv_req]
end

function $fill_both_halo!(c, bc_side, bc_opposite_side::DCBCT, size, offset, loc, arch::MultiProcess,
function $fill_both_halo!(c, bc_side, bc_opposite_side::DCBCT, size, offset, loc, arch::Distributed,
grid::DistributedGrid, buffers, args...; only_local_halos = false, kwargs...)

$fill_side_halo!(c, bc_side, size, offset, loc, arch, grid, buffers, args...; kwargs...)
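The hunks above only show where requests are pooled; the sketch below illustrates the underlying non-blocking pattern in plain MPI.jl, assuming the MPI.jl ≥ 0.20 keyword API. The neighbor names and buffers are placeholders for illustration, not Oceananigans internals.

using MPI
MPI.Init()

comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
east   = mod(rank + 1, nprocs)
west   = mod(rank - 1, nprocs)

send_buffer = fill(Float64(rank), 4)
recv_buffer = zeros(Float64, 4)

requests = MPI.Request[]                                       # the "request pool"
push!(requests, MPI.Irecv!(recv_buffer, comm; source=west, tag=0))
push!(requests, MPI.Isend(send_buffer, comm; dest=east, tag=0))

# ... compute interior tendencies here, overlapping with communication ...

MPI.Waitall(requests)    # analogous to waiting on arch.mpi_requests before the boundary pass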
(Changes in another file; path not shown in this view)
@@ -16,7 +16,7 @@ function complete_communication_and_compute_boundary!(model, ::DistributedGrid,
end

# Fallback
complete_communication_and_compute_boundary!(model, ::DistributedGrid, ::BlockingMultiProcess) = nothing
complete_communication_and_compute_boundary!(model, ::DistributedGrid, ::BlockingDistributed) = nothing
complete_communication_and_compute_boundary!(model, grid, arch) = nothing

compute_boundary_tendencies!(model) = nothing
@@ -26,7 +26,7 @@ interior_tendency_kernel_parameters(grid) = :xyz
interior_tendency_kernel_parameters(grid::DistributedGrid) =
interior_tendency_kernel_parameters(grid, architecture(grid))

interior_tendency_kernel_parameters(grid, ::BlockingMultiProcess) = :xyz
interior_tendency_kernel_parameters(grid, ::BlockingDistributed) = :xyz

function interior_tendency_kernel_parameters(grid, arch)
Rx, Ry, _ = arch.ranks
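The hunk above is truncated, but the idea is that on a distributed grid the interior kernel skips the rank boundaries so they can be computed after communication completes. A rough sketch of that splitting follows; the sizes, offsets, and this particular use of KernelParameters are assumptions for illustration, not the PR's exact kernel parameters.

using Oceananigans.Utils: KernelParameters

Nx, Ny, Nz = 64, 32, 16    # local grid size
Hx, Hy     = 3, 3          # halo widths in the partitioned directions

# interior region: exclude Hx (Hy) cells next to each communicating boundary
interior_params = KernelParameters((Nx - 2Hx, Ny - 2Hy, Nz), (Hx, Hy, 0))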
12 changes: 6 additions & 6 deletions src/DistributedComputations/partition_assemble.jl
@@ -1,20 +1,20 @@
using Oceananigans.Architectures: arch_array

all_reduce(val, arch::MultiProcess; op = +) =
all_reduce(val, arch::Distributed; op = +) =
MPI.Allreduce(val, op, arch.communicator)

all_reduce(val, arch; kwargs...) = val

"""
concatenate_local_sizes(n, arch::MultiProcess)
concatenate_local_sizes(n, arch::Distributed)
Return a 3-Tuple containing a vector of `size(grid, idx)` for each rank in
all 3 directions.
"""
concatenate_local_sizes(n, arch::MultiProcess) =
concatenate_local_sizes(n, arch::Distributed) =
Tuple(concatenate_local_sizes(n, arch, i) for i in 1:length(n))

function concatenate_local_sizes(n, arch::MultiProcess, idx)
function concatenate_local_sizes(n, arch::Distributed, idx)
R = arch.ranks[idx]
r = arch.local_index[idx]
n = n isa Number ? n : n[idx]
@@ -106,7 +106,7 @@ partition_global_array(arch, c_global::AbstractArray, n) = c_global
partition_global_array(arch, c_global::Function, n) = c_global

# Here we assume that we cannot partition in z (we should remove support for that)
function partition_global_array(arch::MultiProcess, c_global::AbstractArray, n)
function partition_global_array(arch::Distributed, c_global::AbstractArray, n)
c_global = arch_array(CPU(), c_global)

ri, rj, rk = arch.local_index
@@ -141,7 +141,7 @@ construct_global_array(arch, c_local::AbstractArray, n) = c_local
construct_global_array(arch, c_local::Function, N) = c_local

# TODO: This does not work for 3D parallelizations!!!
function construct_global_array(arch::MultiProcess, c_local::AbstractArray, n)
function construct_global_array(arch::Distributed, c_local::AbstractArray, n)
c_local = arch_array(CPU(), c_local)

ri, rj, rk = arch.local_index
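A small sketch of the `all_reduce` helper above in isolation, using plain MPI.jl; the rank-dependent local size is a made-up example of a load-balanced partition.

using MPI
MPI.Init()

comm      = MPI.COMM_WORLD
local_nx  = 16 + MPI.Comm_rank(comm)            # e.g. a rank-dependent, load-balanced size
global_nx = MPI.Allreduce(local_nx, +, comm)    # what all_reduce(local_nx, arch) returns on a Distributed architecture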
(Changes in another file; path not shown in this view)
@@ -1,6 +1,6 @@
using Oceananigans.AbstractOperations: GridMetricOperation, Δz
using Oceananigans.DistributedComputations: DistributedGrid, DistributedField
using Oceananigans.DistributedComputations: BlockingMultiProcess, complete_halo_communication!
using Oceananigans.DistributedComputations: BlockingDistributed, complete_halo_communication!
using Oceananigans.Models.HydrostaticFreeSurfaceModels: SplitExplicitState, SplitExplicitFreeSurface

import Oceananigans.Models.HydrostaticFreeSurfaceModels: FreeSurface, SplitExplicitAuxiliaryFields
@@ -93,7 +93,7 @@ end

const DistributedSplitExplicit = SplitExplicitFreeSurface{<:DistributedField}

wait_free_surface_communication!(::DistributedSplitExplicit, ::BlockingMultiProcess) = nothing
wait_free_surface_communication!(::DistributedSplitExplicit, ::BlockingDistributed) = nothing

function wait_free_surface_communication!(free_surface::DistributedSplitExplicit, arch)

(Changes in another file; path not shown in this view)
@@ -206,8 +206,8 @@ function validate_vertical_velocity_boundary_conditions(w)
return nothing
end

validate_free_surface(::MultiProcess, free_surface::SplitExplicitFreeSurface) = free_surface
validate_free_surface(arch::MultiProcess, free_surface) = error("$(typeof(free_surface)) is not supported with $(typeof(arch))")
validate_free_surface(::Distributed, free_surface::SplitExplicitFreeSurface) = free_surface
validate_free_surface(arch::Distributed, free_surface) = error("$(typeof(free_surface)) is not supported with $(typeof(arch))")
validate_free_surface(arch, free_surface) = free_surface

validate_momentum_advection(momentum_advection, ibg::ImmersedBoundaryGrid) = validate_momentum_advection(momentum_advection, ibg.underlying_grid)
4 changes: 2 additions & 2 deletions src/Models/NonhydrostaticModels/NonhydrostaticModels.jl
@@ -11,15 +11,15 @@ using Oceananigans.Utils
using Oceananigans.Grids
using Oceananigans.Grids: XYRegRectilinearGrid, XZRegRectilinearGrid, YZRegRectilinearGrid
using Oceananigans.Solvers
using Oceananigans.DistributedComputations: MultiProcess, DistributedFFTBasedPoissonSolver, reconstruct_global_grid
using Oceananigans.DistributedComputations: Distributed, DistributedFFTBasedPoissonSolver, reconstruct_global_grid
using Oceananigans.ImmersedBoundaries: ImmersedBoundaryGrid
using Oceananigans.Utils: SumOfArrays

import Oceananigans: fields, prognostic_fields
import Oceananigans.Advection: cell_advection_timescale
import Oceananigans.TimeSteppers: step_lagrangian_particles!

function PressureSolver(arch::MultiProcess, local_grid::RegRectilinearGrid)
function PressureSolver(arch::Distributed, local_grid::RegRectilinearGrid)
global_grid = reconstruct_global_grid(local_grid)
return DistributedFFTBasedPoissonSolver(global_grid, local_grid)
end
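In practice this dispatch is exercised simply by building a `NonhydrostaticModel` on a distributed, regular rectilinear grid, as the benchmark script earlier in this diff does:

model = NonhydrostaticModel(grid = distributed_grid)   # internally builds a DistributedFFTBasedPoissonSolver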
2 changes: 1 addition & 1 deletion src/Models/NonhydrostaticModels/nonhydrostatic_model.jl
@@ -2,7 +2,7 @@ using CUDA: has_cuda
using OrderedCollections: OrderedDict

using Oceananigans.Architectures: AbstractArchitecture
using Oceananigans.DistributedComputations: MultiProcess
using Oceananigans.DistributedComputations: Distributed
using Oceananigans.Advection: CenteredSecondOrder
using Oceananigans.BuoyancyModels: validate_buoyancy, regularize_buoyancy, SeawaterBuoyancy
using Oceananigans.Biogeochemistry: validate_biogeochemistry, AbstractBiogeochemistry, biogeochemical_auxiliary_fields
4 changes: 2 additions & 2 deletions src/OutputWriters/output_writer_utils.jl
@@ -44,7 +44,7 @@ saveproperty!(file, address, grid::AbstractGrid) = _saveproperty!(file, add

function saveproperty!(file, address, grid::DistributedGrid)
arch = architecture(grid)
cpu_arch = MultiProcess(CPU(); topology = topology(grid),
cpu_arch = Distributed(CPU(); topology = topology(grid),
ranks = arch.ranks)
_saveproperty!(file, address, on_architecture(cpu_arch, grid))
end
@@ -86,7 +86,7 @@ serializeproperty!(file, address, grid::AbstractGrid) = file[address] = on_archi

function serializeproperty!(file, address, grid::DistributedGrid)
arch = architecture(grid)
cpu_arch = MultiProcess(CPU(); topology = topology(grid),
cpu_arch = Distributed(CPU(); topology = topology(grid),
ranks = arch.ranks)
file[address] = on_architecture(cpu_arch, grid)
end
30 changes: 15 additions & 15 deletions test/test_distributed_models.jl
@@ -26,7 +26,7 @@ MPI.Init()
# to initialize MPI.

using Oceananigans.BoundaryConditions: fill_halo_regions!, DCBC
using Oceananigans.DistributedComputations: MultiProcess, index2rank
using Oceananigans.DistributedComputations: Distributed, index2rank
using Oceananigans.Fields: AbstractField
using Oceananigans.Grids:
halo_size,
@@ -113,7 +113,7 @@ mpi_ranks = MPI.Comm_size(comm)

function test_triply_periodic_rank_connectivity_with_411_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), ranks=(4, 1, 1), topology = topo)
arch = Distributed(CPU(), ranks=(4, 1, 1), topology = topo)

local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@test local_rank == index2rank(arch.local_index..., arch.ranks...)
@@ -147,7 +147,7 @@ end

function test_triply_periodic_rank_connectivity_with_141_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), ranks=(1, 4, 1), topology = topo)
arch = Distributed(CPU(), ranks=(1, 4, 1), topology = topo)

local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@test local_rank == index2rank(arch.local_index..., arch.ranks...)
@@ -187,7 +187,7 @@ end

function test_triply_periodic_rank_connectivity_with_221_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), ranks=(2, 2, 1), topology = topo)
arch = Distributed(CPU(), ranks=(2, 2, 1), topology = topo)

local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@test local_rank == index2rank(arch.local_index..., arch.ranks...)
@@ -231,7 +231,7 @@ end

function test_triply_periodic_local_grid_with_411_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), ranks=(4, 1, 1), topology = topo)
arch = Distributed(CPU(), ranks=(4, 1, 1), topology = topo)
local_grid = RectilinearGrid(arch, topology=topo, size=(2, 8, 8), extent=(1, 2, 3))

local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@@ -249,7 +249,7 @@ end

function test_triply_periodic_local_grid_with_141_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), ranks=(1, 4, 1), topology = topo)
arch = Distributed(CPU(), ranks=(1, 4, 1), topology = topo)
local_grid = RectilinearGrid(arch, topology=topo, size=(8, 2, 8), extent=(1, 2, 3))

local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@@ -267,7 +267,7 @@ end

function test_triply_periodic_local_grid_with_221_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), ranks=(2, 2, 1), topology = topo)
arch = Distributed(CPU(), ranks=(2, 2, 1), topology = topo)
local_grid = RectilinearGrid(arch, topology=topo, size=(4, 4, 8), extent=(1, 2, 3))

i, j, k = arch.local_index
@@ -291,7 +291,7 @@ end

function test_triply_periodic_bc_injection_with_411_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(ranks=(4, 1, 1), topology=topo)
arch = Distributed(ranks=(4, 1, 1), topology=topo)
grid = RectilinearGrid(arch, topology=topo, size=(2, 8, 8), extent=(1, 2, 3))
model = NonhydrostaticModel(grid=grid)

@@ -308,7 +308,7 @@ end

function test_triply_periodic_bc_injection_with_141_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(ranks=(1, 4, 1), topology=topo)
arch = Distributed(ranks=(1, 4, 1), topology=topo)
grid = RectilinearGrid(arch, topology=topo, size=(8, 2, 8), extent=(1, 2, 3))
model = NonhydrostaticModel(grid=grid)

@@ -325,7 +325,7 @@ end

function test_triply_periodic_bc_injection_with_221_ranks()
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(ranks=(2, 2, 1), topology=topo)
arch = Distributed(ranks=(2, 2, 1), topology=topo)
grid = RectilinearGrid(arch, topology=topo, size=(4, 4, 8), extent=(1, 2, 3))
model = NonhydrostaticModel(grid=grid)

@@ -346,7 +346,7 @@ end

function test_triply_periodic_halo_communication_with_411_ranks(halo, child_arch)
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(child_arch; ranks=(4, 1, 1), topology=topo, devices = (0, 0, 0, 0))
arch = Distributed(child_arch; ranks=(4, 1, 1), topology=topo, devices = (0, 0, 0, 0))
grid = RectilinearGrid(arch, topology=topo, size=(4, 4, 4), extent=(1, 2, 3), halo=halo)
model = NonhydrostaticModel(grid=grid)

@@ -370,7 +370,7 @@ end

function test_triply_periodic_halo_communication_with_141_ranks(halo, child_arch)
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(child_arch; ranks=(1, 4, 1), topology=topo, devices = (0, 0, 0, 0))
arch = Distributed(child_arch; ranks=(1, 4, 1), topology=topo, devices = (0, 0, 0, 0))
grid = RectilinearGrid(arch, topology=topo, size=(4, 4, 4), extent=(1, 2, 3), halo=halo)
model = NonhydrostaticModel(grid=grid)

@@ -392,7 +392,7 @@ end

function test_triply_periodic_halo_communication_with_221_ranks(halo, child_arch)
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(child_arch; ranks=(2, 2, 1), topology=topo, devices = (0, 0, 0, 0))
arch = Distributed(child_arch; ranks=(2, 2, 1), topology=topo, devices = (0, 0, 0, 0))
grid = RectilinearGrid(arch, topology=topo, size=(4, 4, 3), extent=(1, 2, 3), halo=halo)
model = NonhydrostaticModel(grid=grid)

@@ -464,7 +464,7 @@ end
for ranks in [(1, 4, 1), (2, 2, 1), (4, 1, 1)]
@info "Time-stepping a distributed NonhydrostaticModel with ranks $ranks..."
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(; ranks, topology=topo)
arch = Distributed(; ranks, topology=topo)
grid = RectilinearGrid(arch, topology=topo, size=(8, 2, 8), extent=(1, 2, 3))
model = NonhydrostaticModel(; grid)

@@ -483,7 +483,7 @@ end
@testset "Time stepping ShallowWaterModel" begin
for child_arch in archs
topo = (Periodic, Periodic, Flat)
arch = MultiProcess(child_arch; ranks=(1, 4, 1), topology = topo, devices = (0, 0, 0, 0))
arch = Distributed(child_arch; ranks=(1, 4, 1), topology = topo, devices = (0, 0, 0, 0))
grid = RectilinearGrid(arch, topology=topo, size=(8, 2), extent=(1, 2), halo=(3, 3))
model = ShallowWaterModel(; momentum_advection=nothing, mass_advection=nothing, tracer_advection=nothing, grid, gravitational_acceleration=1)
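These tests assume four MPI ranks, matching the (4, 1, 1), (1, 4, 1), and (2, 2, 1) layouts above. A typical local invocation, assuming `mpiexec` is on the PATH and the command is run from the package root, might be:

run(`mpiexec -n 4 julia --project test/test_distributed_models.jl`)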

2 changes: 1 addition & 1 deletion test/test_distributed_poisson_solvers.jl
@@ -65,7 +65,7 @@ end

function divergence_free_poisson_solution_triply_periodic(grid_points, ranks)
topo = (Periodic, Periodic, Periodic)
arch = MultiProcess(CPU(), ranks=ranks, topology=topo)
arch = Distributed(CPU(), ranks=ranks, topology=topo)
local_grid = RectilinearGrid(arch, topology=topo, size=grid_points, extent=(1, 2, 3))

bcs = FieldBoundaryConditions(local_grid, (Center, Center, Center))
(Changes in another file; path not shown in this view)
@@ -23,7 +23,7 @@ rank = MPI.Comm_rank(comm)
Nranks = MPI.Comm_size(comm)

topo = (Bounded, Periodic, Bounded)
arch = MultiProcess(CPU(); topology = topo,
arch = Distributed(CPU(); topology = topo,
ranks=(Nranks, 1, 1),
use_buffers = true)

(Changes in another file; path not shown in this view)
@@ -75,7 +75,7 @@ Ry = 1
@assert Nranks == 4

# Enable overlapped communication!
arch = MultiProcess(CPU(), ranks = (Rx, Ry, 1),
arch = Distributed(CPU(), ranks = (Rx, Ry, 1),
topology=topo,
enable_overlapped_computation = true)

(Changes in another file; path not shown in this view)
@@ -28,7 +28,7 @@ Nranks = MPI.Comm_size(comm)
Nx = Ny = 256
Lx = Ly = 2π
topology = (Periodic, Periodic, Flat)
arch = MultiProcess(CPU(); topology, ranks=(1, Nranks, 1))
arch = Distributed(CPU(); topology, ranks=(1, Nranks, 1))
grid = RectilinearGrid(arch; topology, size=(Nx ÷ Nranks, Ny), halo=(3, 3), x=(0, 2π), y=(0, 2π))

@info "Built $Nranks grids:"
2 changes: 1 addition & 1 deletion validation/distributed_simulations/mpi_output_writing.jl
@@ -9,7 +9,7 @@ rank = MPI.Comm_rank(comm)
Nranks = MPI.Comm_size(comm)

topology = (Periodic, Periodic, Flat)
arch = MultiProcess(CPU(); topology, ranks=(Nranks, 1, 1))
arch = Distributed(CPU(); topology, ranks=(Nranks, 1, 1))
grid = RectilinearGrid(arch; topology, size=(16 ÷ Nranks, 16), halo=(3, 3), extent=(2π, 2π))

model = NonhydrostaticModel(; grid)
2 changes: 1 addition & 1 deletion validation/distributed_simulations/mpi_set.jl
@@ -10,7 +10,7 @@ Nranks = MPI.Comm_size(MPI.COMM_WORLD)

# Setup model
topology = (Periodic, Periodic, Flat)
arch = MultiProcess(CPU(); topology, ranks=(1, Nranks, 1))
arch = Distributed(CPU(); topology, ranks=(1, Nranks, 1))
grid = RectilinearGrid(arch; topology, size=(16 ÷ Nranks, 16), extent=(2π, 2π))
c = CenterField(grid)

(Changes in another file; path not shown in this view)
@@ -13,7 +13,7 @@ using Oceananigans.DistributedComputations

ranks = (2, 2, 1)
topo = (Periodic, Periodic, Flat)
arch = MultiProcess(CPU(), ranks=ranks, topology=topo)
arch = Distributed(CPU(), ranks=ranks, topology=topo)
grid = RectilinearGrid(arch, topology=topo, size=(128 ÷ ranks[1], 128 ÷ ranks[2]), extent=(4π, 4π), halo=(3, 3))
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
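With ranks = (2, 2, 1) each of the four ranks therefore owns a 64 × 64 local tile of the 128 × 128 domain; a quick check of that arithmetic:

ranks = (2, 2, 1)
local_size = (128 ÷ ranks[1], 128 ÷ ranks[2])   # (64, 64), the size passed to RectilinearGrid above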