# Initial software set #44
FSL in Spack patches out the job submission capability.
We were considering compilers, but Spack is about to change how it deals with them, so it is probably worth waiting.
We want to test that the OpenMPI we get from Spack performs reasonably on the OmniPath clusters. So far we have not specified any fabrics, and in that case Spack lets OpenMPI set those options to auto. We should check the performance of that version against one specifying some likely options (and can test against the vader-using …). Benchmarks: https://mvapich.cse.ohio-state.edu/benchmarks/. Available variants can be seen with `spack info openmpi`.
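A minimal sketch of that comparison, assuming `psm2` and `ofi` as the likely OmniPath fabric choices (the fabric values and the version pin are illustrative, not decisions from this thread):

```sh
# Inspect the variants the Spack openmpi package exposes, including fabrics=
spack info openmpi

# Baseline: no fabrics specified, so OpenMPI auto-detects at build time
spack install openmpi@4.1.2

# Candidate: pin likely OmniPath fabrics explicitly, then benchmark both
spack install openmpi@4.1.2 fabrics=psm2,ofi
```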
Everything is finally working and has been going fine for the last 2 days. Currently building …. The complete provenance is: …
Looks like the gromacs variants we would have, based on our current ones, are these. Or for 2022.5 we could just do the bolded plumed variants, as we can't build those yet for 2023. We normally include the double-precision versions with the non-CUDA builds. cuda_arch=80 is the A100s. Ignore the cuda variants for the first go-round. (They would only be added on Myriad and Young, and on Young they'd need to be built on a GPU node as those are AMD.)

gromacs@2023 %… +double
gromacs@2022.5 %… +double
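A hedged sketch of adding those to a Spack environment (the versions and the `+double`/`+plumed`/`+cuda` variant names come from this discussion and the Spack gromacs package; the compiler is left unspecified and the exact variant sets are assumptions):

```sh
# Non-CUDA builds, including the double-precision versions
spack add gromacs@2023 +double
spack add gromacs@2022.5 +double +plumed   # plumed not yet buildable for 2023

# CUDA build for the A100s, deferred for the first go-round
# spack add gromacs@2023 +cuda cuda_arch=80
```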
Ok. Happy to do just the bolded ones for now.
Not just the bolded ones, …
I'm pretty sure it says this somewhere else in here, but for a site called ….

There's a ….
@heatherkellyucl Done. Finally got it working.
Do we have any standardised way of testing the installed gromacs?
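One possible approach, sketched under assumptions (whether the Spack gromacs package ships stand-alone smoke tests is not confirmed here; the `gmx` invocation is just a minimal sanity check):

```sh
# Run any smoke tests the Spack gromacs package provides
spack test run gromacs

# Or load the install and exercise the binary directly
spack load gromacs
gmx --version
```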
For future record: as discussed in #49 (comment) we decided to use …. We will try to use ….

Update (2024-02-28): we ran into too many problems with ….

Update (2024-03-04): …
LAMMPS (GNU, CPU) is going to look like this, as they specify ALL the packages individually where we'd been using presets/most.cmake for the recent releases. The only thing missing that we had mentioned in the last install is quip, which isn't an option. For CUDA, build with GNU, with the same packages. For Intel, we do one version with …
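A hedged sketch of the GNU/CUDA spec shape described above (the variant names below do exist in the Spack lammps package, but this is an illustrative subset, not the full list that would mirror presets/most.cmake):

```sh
# GNU, CPU: LAMMPS packages are enabled individually as variants
spack install lammps %gcc +molecule +kspace +manybody +rigid +replica

# CUDA: same packages, still built with GNU; cuda_arch=80 for the A100s
spack install lammps %gcc +cuda cuda_arch=80 +molecule +kspace +manybody +rigid +replica
```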
CASTEP, ORCA, and NAMD are all manual-download packages. You can set a mirror or source_cache to be checked and they should be found in it, or you can specify the exact location of each package (which can go in the environment's .yaml), as in https://spack.readthedocs.io/en/latest/packages_yaml.html#assigning-package-attributes
We have the pkg-store, so setting that as a source mirror should work.
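A sketch of both options (the paths and the ORCA tarball name are placeholders; `spack mirror add` and the `package_attributes` key in packages.yaml follow the linked docs):

```sh
# Option 1: register the pkg-store as a source mirror
spack mirror add pkg-store file:///path/to/pkg-store

# Option 2: point one package directly at its tarball via packages.yaml
# (or the equivalent section of the environment's spack.yaml)
cat >> packages.yaml <<'EOF'
packages:
  orca:
    package_attributes:
      url: file:///path/to/pkg-store/orca/orca-5.0.4.tar.xz  # placeholder
EOF
```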
ORCA
Will build openmpi 4.1.2. No hurry, since we already have this version of ORCA - will add it in after the other MPI stuff. I think we should be fine using the shared-library builds, which are the ones Spack has checksums for and are much smaller. The static builds come in three large tar files, which would need alterations to the recipe. It has to have an external openmpi in either case.
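A sketch of declaring that external openmpi (the prefix is a placeholder; `externals` and `buildable` are standard packages.yaml keys):

```sh
cat >> packages.yaml <<'EOF'
packages:
  openmpi:
    externals:
    - spec: openmpi@4.1.2
      prefix: /path/to/openmpi/4.1.2  # placeholder install prefix
    buildable: false                  # always use the external build
EOF
```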
Got package.py files in our repos. NAMD and ORCA are just for new versions from spack develop. For CASTEP we want to build 23.1.1, not 21.11, so that needs testing first.
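A minimal sketch of pulling those recipes in (the repo path is a placeholder; `spack repo add` and `spack spec` are standard commands):

```sh
# Make the package.py recipes in our repo visible to this Spack instance
spack repo add /path/to/our-spack-repo

# Dry-run concretisation of the CASTEP version we actually want
spack spec castep@23.1.1
```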
Latest GROMACS available in develop: …

Added from develop: …
See #59 for CASTEP.
Added the manylinux binary cache and our pkgstore to …. Might be an issue if it tries reusing Perl from the manylinux cache, as there's a missing/circular libcrypt dependency - need to see if that happens. (To add to an already-existing site, edit ….)
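A hedged sketch of registering a binary cache like that (the URL is a placeholder; `spack mirror add` and `spack buildcache keys` are standard commands):

```sh
# Register the binary cache alongside the existing pkgstore mirror
spack mirror add manylinux https://example.org/manylinux-buildcache  # placeholder URL

# Trust the signing keys the binary cache publishes
spack buildcache keys --install --trust
```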
## Large usage MD codes

## Myriad-flavoured packages

[Initial stack done #49]

- Samtools, BCFTools, HTSlib: UCL-RITS/rcps-buildscripts#532
- Samtools, Bwa, Bedtools, Vcftools, picard, GATK (requested by Biosciences): https://ucldata.atlassian.net/browse/AHS-139
- Alphafold (would be nice to not have to tell people to go and track down the working containers): UCL-RITS/rcps-buildscripts#529, UCL-RITS/rcps-buildscripts#463 [Not doing until Spack 0.22]
- HDF5
- Netcdf
- BEAST 2.7: UCL-RITS/rcps-buildscripts#498
The list of user-requested software builds is at https://github.com/UCL-RITS/rcps-buildscripts/issues
## Second set

- Amber
- Cellranger: UCL-RITS/rcps-buildscripts#499
- Have a look at the dependencies of SRAtoolkit: UCL-RITS/rcps-buildscripts#543
- Hammock
- AFNI
## Out of scope for initial set

- Python
- R
## GPU builds
Only Myriad and Young have GPUs. Young's GPU nodes have AMD CPUs, while Myriad's are Intel (the GPUs themselves are Nvidia). Young only has A100s; Myriad has A100s and some older GPUs, of which the oldest will be retired soonish.
GPU builds depend on the version of the CUDA drivers being updated on the GPU nodes - it is not worth putting any GPU builds in the buildcache until after that has happened. See UCL-RITS/rcps-buildscripts#528, UCL-RITS/rcps-buildscripts#517 and https://ucldata.atlassian.net/browse/ARI-254. Done.
Work cannot begin on Kathleen and Michael until https://ucldata.atlassian.net/browse/ARI-37 is complete (including devtoolset-11 and rh-python38 in the image). Done.