Fix slurm MPI submission bug #214

Open

wants to merge 10 commits into main

Conversation

dominic-chang
Contributor

On SLURM machines, mpiexec typically only launches jobs on the local node. The scheduler uses srun for distributed launches across nodes, so I changed the exec string to use srun when the scheduler is set to slurm.
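
In rough terms, the change just switches the launcher based on the submission system. Here is an illustrative Julia sketch of the idea (not the exact diff; the helper name is made up):

# Illustrative only: pick the MPI launcher from the submission system,
# falling back to mpiexec on non-SLURM systems.
mpi_launcher(submission_system::Symbol) =
    submission_system === :slurm ? "srun" : "mpiexec"

mpi_launcher(:slurm)  # "srun"
mpi_launcher(:pbs)    # "mpiexec"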

@alexandrebouchard
Member

Thank you so much! From a quick look at the CI, I think it fails for an orthogonal reason that I should be able to fix later this week. Then I'll be able to look at this PR. Let me create a separate issue so I can reference it...

@alexandrebouchard
Member

The upcoming fix for #215 should fix those CI builds.

@alexandrebouchard
Member

Sorry for being slow on this!

That comes just in time, as our local cluster at UBC also migrated to SLURM recently. Interestingly, their instructions still recommend using mpiexec to distribute MPI jobs across several machines.

A quick Google search turned up this page, which suggests that mpiexec is more performant and portable: https://users.open-mpi.narkive.com/a97KsQwJ/ompi-openmpi-slurm-mpiexec-mpirun-vs-srun

I was wondering if you have a link to a page supporting the srun route? Maybe it is a more up-to-date approach (the above link is 6 years old)? Or it could be a particularity of the cluster you are using.

Even if it is a particularity of a specific cluster, I want to make sure there is enough flexibility to configure it correctly; in that case, though, I would not make it the default route.

Thanks again!

@dominic-chang
Contributor Author

Sure thing. I've been using two compute clusters, and both recommend srun. Here is the documentation from the Purdue Anvil cluster, and here is the documentation for the Harvard Cannon cluster. It might just be an oddity of these two clusters.

@dominic-chang
Contributor Author

dominic-chang commented Apr 3, 2024

Here's additional information from the SchedMD SLURM guide that also suggests using srun. It does seem like some clusters are configured to use mpiexec, but srun also seems to be common. Maybe the API should be changed to allow the user to specify a submission command in the rosetta?
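
For illustration, the kind of flexibility I have in mind would look like this hypothetical sketch (made-up names, not the current Pigeons API), where the launcher is just a configurable piece of the submission string:

# Hypothetical sketch: the launcher comes from the rosetta, so a cluster can
# use srun, mpiexec, or whatever its documentation recommends.
build_launch_cmd(launcher::String, n_tasks::Int, cmd::String) =
    "$launcher -n $n_tasks $cmd"

build_launch_cmd("srun", 20, "julia mpi_run.jl")     # "srun -n 20 julia mpi_run.jl"
build_launch_cmd("mpiexec", 20, "julia mpi_run.jl")  # "mpiexec -n 20 julia mpi_run.jl"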

@codecov-commenter

codecov-commenter commented Apr 3, 2024

Codecov Report

Attention: Patch coverage is 0%, with 9 lines in your changes missing coverage. Please review.

Project coverage is 86.56%. Comparing base (ab190b8) to head (ae555f7).
Report is 8 commits behind head on main.

Files                            Patch %   Lines
src/submission/MPIProcesses.jl   0.00%     9 Missing ⚠️

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #214      +/-   ##
==========================================
- Coverage   86.82%   86.56%   -0.27%     
==========================================
  Files          95       95              
  Lines        2429     2419      -10     
==========================================
- Hits         2109     2094      -15     
- Misses        320      325       +5     

@dominic-chang
Contributor Author

I tried adding an additional concept to the rosetta and defined a helper function for adding custom concepts, along with docs for the function. An example workflow for the clusters I am using would be:

using Pigeons

# Rosetta describing how to talk to this cluster's scheduler; the key entry is
# exec = "srun", so MPI ranks are launched with srun rather than mpiexec.
params = (
    exec = "srun",
    submit = `sbatch`,
    del = `scancel`,
    directive = "#SBATCH",
    job_name = "--job-name=",
    output_file = "-o ",
    error_file = "-e ",
    submit_dir = "\$SLURM_SUBMIT_DIR",
    job_status = `squeue --job`,
    job_status_all = `squeue -u`,
    ncpu_info = `sinfo`
)

add_custom_submission_system(params)

# Resource directives written at the top of the generated batch script.
function Pigeons.resource_string(m::MPIProcesses, ::Val{:custom})
    return """
    #SBATCH -t $(m.walltime)
    #SBATCH --ntasks=$(m.n_mpi_processes)
    #SBATCH --cpus-per-task=$(m.n_threads)
    #SBATCH --mem-per-cpu=$(m.memory)
    """
end

# One-time MPI setup: use the custom submission system and load the modules
# this cluster needs.
settings = Pigeons.MPISettings(;
    submission_system = :custom,
    environment_modules = ["gcc/11.2.0", "openmpi/4.0.6"]
)
Pigeons.setup_mpi(settings)

# Small test run: 20 chains, one MPI process per chain.
nchains = 20
pt = Pigeons.pigeons(
    target = toy_mvn_target(1),
    n_chains = nchains,
    on = Pigeons.MPIProcesses(
        n_mpi_processes = nchains,
        walltime = "0-00:30:00",
        n_threads = 1,
        mpiexec_args = `--mpi=pmi2`
    ),
    n_rounds = 2
)
