How Spack meets requirements
Note that this document is a draft, and the command sequences in it still need to be validated.
- An experiment will have instructions on how to set up a Spack runtime environment. These instructions will include:
  - How to set up the environment in a grid or interactive context.
  - How to use UPS products as Spack packages within a Spack environment.
- A user will be able to invoke a command that installs a set of Spack packages in a Spack package area. This will include:
  - Instructions for how to install pre-built packages from a buildcache.
  - The ability to maintain version/binary compatibility with other installations of a particular release of a software distribution.
  - The ability to install packages into a user-writable Spack package area while continuing to rely on already-installed packages in one or more centrally installed areas.
- A release manager will be able to assemble and deploy the Spack equivalent of a UPS "distribution", assured that users installing the same distribution on other machines will be able to install consistent and binary-compatible environments.
- A developer of individual packages (e.g. art, larcoreobj, icarusalg, etc.) or groups thereof will be able to develop their packages in an appropriately set up Spack development environment, relying on dependencies installed in their own or other Spack package areas while maintaining a consistent collection of binaries.
Let us consider the hypothetical "Hypot" experiment. If an experiment Spack package area has been created in CVMFS, on a locally mounted experiment network-attached disk, or in a CEPH area, such as:
- /cvmfs/hypot.opensciencegrid.org/packages
- /hypot/app/packages
- /cephfs/hypot/packages
then you would simply source the setup-env.sh (or setup-env.csh) file in that package area, use spack env list to find a suitable Spack environment, and spack env activate to access it. So, for example, to use the CVMFS area for their experiment:
$ source /cvmfs/hypot.opensciencegrid.org/packages/setup-env.sh
$ spack env list
hypotcode_current hypotcode_calibration hypotcode_nightly hypotcode_v1_2_3 ...
$ spack env activate hypotcode_current
$ hypotcode -c analyze.fcl file1.root
Those Spack environments may have multiple packages installed, including packages converted from UPS with ups_to_spack. Or, alternatively, packages can be accessed via spack load outside of a full environment, for testing or other purposes.
$ source /cvmfs/hypot.opensciencegrid.org/packages/setup-env.sh
$ spack find hypotcode
-- linux-scientific7-x86_64_v2 / [email protected] ---------------------
[email protected] [email protected] [email protected] [email protected]
==> 4 installed packages
$ spack load [email protected]
$ hypotcode -c analyze.fcl file1.root
Again, these can be either native Spack-built packages or ones converted with ups_to_spack. (You can tell which by using the -N argument to spack find to list the recipe namespace; converted UPS packages have a ups_to_spack namespace.)
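For example, an illustrative spack find -N listing (assuming, hypothetically, that [email protected] was built natively while [email protected] was converted from UPS) might look like:
$ spack find -N hypotcode
-- linux-scientific7-x86_64_v2 / [email protected] ---------------------
[email protected]  [email protected]
==> 2 installed packages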
There are two kinds of package installs: installs from source code (especially builds you expect to then redistribute as binary packages), and installs of pre-built binary packages from a build cache.
We recommend below that package areas be configured differently for these two tasks, and that the package areas most users use (e.g. /cvmfs/hypot.opensciencegrid.org/packages/) be set up for the latter.
If you have a package area created, installing pre-built packages and their dependencies is easy. You use spack buildcache list to find what packages are available on your build cache(s), and spack buildcache install to install and relocate them. Finally, we want to take several of those packages and group them into a Spack "environment" for our experimenters to use. Because Spack really prefers microarchitecture-specific packages, while at Fermilab we tend to build and distribute generic x86_64_v2 binaries so they are compatible with a wide variety of grid worker nodes, we have to specify a few extra flags on some of those command lines.
Note that we will specify the packages by their unique hashes on the install.
Continuing with our hypothetical "Hypot" experiment, this looks like:
$ source /cvmfs/hypot.opensciencegrid.org/packages/setup-env.sh
$ cvmfs_server transaction hypot.opensciencegrid.org
$ spack buildcache list -al gcc
==> 3 cached builds.
-- linux-scientific7-x86_64 / [email protected] -------------------------
3qoagel [email protected] rds64a3 [email protected] thfxg6n [email protected]
$ spack buildcache install -oa /thfxg6n
...
$ spack buildcache list -al hypotcode
==> 3 cached builds.
-- linux-scientific7-x86_64 / [email protected] -------------------------
cf52dd6 [email protected] twb2xie [email protected] xnifo6i [email protected]
$ spack buildcache install -oa /xnifo6i
...
$ spack env create hypot_3_11_07
$ spack env activate hypot_3_11_07
$ spack add [email protected]
$ spack add hypotcode/xnifo6i
$ spack concretize --reuse
...
$ spack install
...
$ cvmfs_server publish hypot.opensciencegrid.org
And we have a new Spack environment for the v3_11_07 hypotcode. One could of course add other packages to that environment as well, such as text editors (emacs) or texlive. For areas not in CVMFS, you would not need the cvmfs_server commands.
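As a sketch, the same installation into the hypothetical /hypot/app/packages area from above would simply omit the transaction and publish steps:
$ source /hypot/app/packages/setup-env.sh
$ spack buildcache install -oa /thfxg6n
$ spack buildcache install -oa /xnifo6i
$ spack env create hypot_3_11_07
$ spack env activate hypot_3_11_07
$ spack add [email protected]
$ spack add hypotcode/xnifo6i
$ spack concretize --reuse
$ spack install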
Of course, before packages can be extracted from a build cache, someone has to build them and put them into the build cache. Because of the details of how Spack relocates binary packages, you want to use a Spack instance with a standard padding_depth (we've chosen 255 at Fermilab), so that you get binaries that can be relocated anywhere you are likely to put them, and that can also be installed as dependencies in a build Spack instance. Then you can build packages suitable for pushing into the build caches. So let's say you have such a build instance at /local/build1, and you want to build the new hypotcode release. You can create an environment, add the specific package dependency versions you want to it, install the packages, then copy the lot over to the build cache and reindex it. This looks something like:
$ . /local/build1/setup-env.sh
$ spack env create hypot_v3_12_02
$ spack env activate hypot_v3_12_02
$ spack add [email protected]
$ spack add [email protected]
...
$ spack add [email protected]
$ spack add [email protected]
$ spack concretize --reuse
$ spack install
...
$ spack buildcache create -d /scratch/wherever [email protected]
...
$ cd /scratch/wherever
$ scp -r build_cache products@fifeutilgpvm01:/spack_cache/
$ ssh [email protected] sh /spack_cache/.mkindex.html
Note that rather than doing the long sequence of spack add commands above, one could also copy the spack.yaml file from a previous such environment, edit the file to change versions, etc., and then do the concretize and install steps.
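For example, assuming a previous environment named hypot_v3_12_01 exists (the environment and file names here are hypothetical), that alternative would look something like:
$ spack cd --env hypot_v3_12_01
$ cp spack.yaml /tmp/hypot_env.yaml
$ vi /tmp/hypot_env.yaml # bump package versions in the specs: list
$ spack env create hypot_v3_12_02 /tmp/hypot_env.yaml
$ spack env activate hypot_v3_12_02
$ spack concretize --reuse
$ spack install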
If you have access to a UPS package area, you can use our spack-infrastructure package's ups_to_spack tool to convert UPS products to Spack packages. If you then get the recipe thus generated added to the general ups_to_spack repository (if it isn't already there), you can distribute that package into the buildcache for installation elsewhere. Note that if you want to install an old UPS package in a single Spack area and not redistribute it, the later steps of this example are not required; just running ups_to_spack should be sufficient.
$ . /local/build1/setup-env.sh
$ spack load spack-infrastructure
$ . /local/ups/wherever/setups
$ ups list -aK+ hypotcode
"hypotcode" "v1_1" "NULL" "" ""
"hypotcode" "v1_2" "NULL" "" "current"
$ ups_to_spack hypotcode v1_1
...
$ spack cd --package-dir ups_to_spack.hypotcode
$ git checkout -b "add_hypotcode"
$ git add package.py
$ git commit -am "adding hypotcode to ups_to_spack repo"
$ git push
$ # make pull request on ups_to_spack
$ spack buildcache create -d /scratch/wherever [email protected]
...
$ cd /scratch/wherever
$ scp -r build_cache products@fifeutilgpvm01:/spack_cache/
$ ssh [email protected] sh /spack_cache/.mkindex.html
To initially set up the Spack instance for an end-user package area (e.g. a CVMFS or application area):
mkdir -p /hypot/app/users/$USER/my_packages
cd /hypot/app/users/$USER/my_packages
wget https://raw.githubusercontent.com/FNALssi/spack-infrastructure/v2.19.0_release/bin/bootstrap
sh bootstrap
. /hypot/app/users/$USER/my_packages/setup-env.sh
Creating a Spack area with directory padding enabled, for building packages you want to distribute via the binary cache, is basically the same procedure, except that you pass a --with-padding flag to the bootstrap script.
mkdir -p /hypot/app/users/$USER/my_packages
cd /hypot/app/users/$USER/my_packages
wget https://raw.githubusercontent.com/FNALssi/spack-infrastructure/v2.19.0_release/bin/bootstrap
sh bootstrap --with-padding
. /hypot/app/users/$USER/my_packages/setup-env.sh
As previously mentioned, to make sure we can install binary packages, we are standardizing on a padding_depth of 255 in Spack instances used to build binary packages for distribution on the build caches. If you don't do this, you may get a package that won't install in, say, /cvmfs, where the install directory paths may be longer than where you built your package. Using a particular standard padding_depth also means that you can install other binary packages into your Spack instance, because the install path length is (exactly) the same as where the other binary package was built. If instead you set your padding_depth to, say, 1000, packages you built could be installed elsewhere, but the binary packages already in the build cache couldn't be installed in your instance, because the paths in those binaries would be too short.
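For reference, the padding is recorded in the instance's configuration as an install_tree padded_length; a quick way to check it (output abridged and illustrative) is:
$ spack config get config
config:
  install_tree:
    padded_length: 255
  ...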
If you want to reproduce a given build, the best way to go about it, assuming the original build was done in a Spack environment, is to use the spack.lock file from the original environment and build it in a new environment. Let's say you want to recompile the exact versions of software in an environment in CVMFS:
$ . /cvmfs/hypot.opensciencegrid.org/packages/setup-env.sh
$ spack env list
hypotcode_current hypotcode_calibration hypotcode_nightly hypotcode_v1_2_3 ...
$ spack cd --env hypotcode_calibration
$ cp spack.lock /tmp/myspack.lock
$ . /hypot/app/users/$USER/packages/setup-env.sh
$ spack env create my_calibration /tmp/myspack.lock
$ spack env activate my_calibration
$ spack install --no-cache
Or, if you want to take as many exact packages as possible from the build cache, you would do the same as above without the --no-cache option.
What if you want to start from the regular experiment package area base, but have a few different packages of your own? You can make a "chained" spack instance, or a sub_spack, and install new or different software in your chained instance.
$ . /cvmfs/hypot.opensciencegrid.org/packages/setup-env.sh
$ spack load spack-infrastructure
$ make_subspack --with-padding /cvmfs/hypot.opensciencegrid.org/packages /hypot/app/users/$USER/my_packages
...
$ . /hypot/app/users/$USER/my_packages/setup-env.sh
$ spack env list
hypotcode_current hypotcode_calibration hypotcode_nightly hypotcode_v1_2_3 ...
$ spack env create my_hypotcode
$ spack cd --env hypotcode_current
$ cp spack.yaml /tmp/myspec.yaml
$ spack cd --env my_hypotcode
$ cp /tmp/myspec.yaml spack.yaml
$ vi spack.yaml # edit versions of one or two packages
$ spack env activate my_hypotcode
$ spack concretize --reuse
$ spack install
Spack will now install any needed new versions in your /hypot/app/users/$USER/my_packages area, but use the existing packages from CVMFS wherever possible.
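One way to see which packages come from your chained area and which from the upstream CVMFS instance is spack find --paths (the output shown here is illustrative):
$ spack find --paths hypotcode
-- linux-scientific7-x86_64_v2 / [email protected] ---------------------
[email protected]  /hypot/app/users/$USER/my_packages/...
[email protected]  /cvmfs/hypot.opensciencegrid.org/packages/...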
Using Spack environments, release managers can build a distribution of software, add all the packages to a build cache, and collect the spack.lock file from the environment. Then that spack.lock file can be used to recreate that environment in other spack instances.
$ . /cvmfs/hypot.opensciencegrid.org/packages/setup-env.sh
$ spack env list
hypotcode_current hypotcode_calibration hypotcode_nightly hypotcode_v1_2_3 ...
$ spack cd --env hypotcode_calibration
$ scp spack.lock remotehost.some.where:/tmp/myspack.lock
$ ssh remotehost.some.where
$ . /local/spack/packages/setup-env.sh
$ spack env create hypotcode_calibration /tmp/myspack.lock
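Then, as a sketch, activating the recreated environment and running a plain spack install pulls matching binaries from the build cache where available and rebuilds the rest from source:
$ spack env activate hypotcode_calibration
$ spack install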
You can use spack develop to have locally checked-out copies of packages' sources in a Spack environment, in order to develop the software. Running spack install will rebuild those packages from the checked-out source code and install them. So if you set up a chained Spack instance, you can build the packages you're developing, and the intervening dependencies, in your chained instance.
$ . /cvmfs/hypot.opensciencegrid.org/packages/setup-env.sh
$ spack load spack-infrastructure
$ make_subspack --with-padding /cvmfs/hypot.opensciencegrid.org/packages /hypot/app/users/$USER/my_packages
...
$ . /hypot/app/users/$USER/my_packages/setup-env.sh
$ spack env list
hypotcode_current hypotcode_calibration hypotcode_nightly hypotcode_v1_2_3 ...
$ spack env create my_hypotcode
$ spack cd --env hypotcode_current
$ cp spack.yaml /tmp/myspec.yaml
$ spack cd --env my_hypotcode
$ cp /tmp/myspec.yaml spack.yaml
$ vi spack.yaml # edit versions of hypotcode, art to be @develop
$ spack env activate my_hypotcode
$ spack develop hypotcode@develop
$ spack develop art@develop
$ spack concretize --reuse
$ spack install
$ spack cd --env my_hypotcode
$ ls
art hypotcode spack.lock spack.yaml
$ vi art/art/Framework/Art/artapp.cc
$ vi hypotcode/hypotcode/foo.cc
$ spack install