Merge branch 'main' of https://github.com/EPCCed/cirrus-docs
clairbarrass committed May 1, 2024
2 parents 88b7bfe + 6e8f1c0 commit f6cabb6
Showing 3 changed files with 21 additions and 12 deletions.
3 changes: 2 additions & 1 deletion docs/e1000-migration/index.md
@@ -421,8 +421,9 @@ Please [contact the service desk](mailto:[email protected]) if you have conce
<p>tensorflow/2.11.0-gpu</p></td>
<td>
<p>Please use one of the following</p>
<p>tensorflow/2.13.0</p>
<p>tensorflow/2.15.0</p>
<p>tensorflow/2.15.0-gpu</p></td>
<p>tensorflow/2.13.0-gpu</p></td>
</tr>
<tr>
<td>
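
For anyone following this migration advice, a minimal sketch of switching to one of the replacement modules (module names taken from the table above; check what is currently installed before loading):

```bash
# List the TensorFlow modules currently installed on Cirrus
module avail tensorflow

# CPU nodes: load one of the recommended replacements
module load tensorflow/2.15.0

# GPU nodes: load the GPU build instead
# module load tensorflow/2.13.0-gpu
```
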
20 changes: 14 additions & 6 deletions docs/software-packages/starccm+.md
Expand Up @@ -11,6 +11,16 @@ and the ability to combine and account for the interaction between the
various physics and motion models in a single simulation to cover your
specific application.

!!! Note


STAR-CCM+ is not centrally available as a module on Cirrus. All users
must build the software in their own user space.

Below we provide some guidance for using STAR-CCM+ on Cirrus with the
PoD license.


## Useful Links

> - [Information about STAR-CCM+ by
@@ -52,10 +62,9 @@ following script starts the server:
#SBATCH --partition=standard
#SBATCH --qos=standard

# Load the default HPE MPI environment
module load mpt
module load starccm+
# Add starccm+ installation to PATH and LD_LIBRARY_PATH

# License information:
export SGI_MPI_HOME=$MPI_ROOT
export PATH=$STARCCM_EXE:$PATH
export [email protected]
@@ -93,10 +102,9 @@ previous examples is the "starccm+" line)
#SBATCH --partition=standard
#SBATCH --qos=standard

# Load the default HPE MPI environment
module load mpt
module load starccm+
# Add starccm+ installation to PATH and LD_LIBRARY_PATH

# License information:
export SGI_MPI_HOME=$MPI_ROOT
export PATH=$STARCCM_EXE:$PATH
export [email protected]
10 changes: 5 additions & 5 deletions docs/user-guide/python.md
Expand Up @@ -11,7 +11,7 @@ see [mpi4py for CPU](#mpi4py-for-cpu) or [mpi4py for GPU](#mpi4py-for-gpu).

You can list the Miniconda modules by running `module avail python` on a
login node. Those module versions that have the `gpu` suffix are
suitable for use on the [Cirrus GPU nodes](../gpu). There are also
suitable for use on the [Cirrus GPU nodes](gpu.md). There are also
modules that extend these Python environments, e.g., `pyfr`, `tensorflow`
and `pytorch` - simply run `module help <module name>` for further info.
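
As a quick illustration of the commands mentioned above (the exact versions listed by `module avail` may differ from those shown here):

```bash
# List the centrally-installed Miniconda-based Python modules
module avail python

# Inspect and load one of the extended environments, e.g. the TensorFlow GPU build
module help tensorflow/2.13.0-gpu
module load tensorflow/2.13.0-gpu
```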

@@ -202,7 +202,7 @@ below for further details.

There are several more Python-based modules that also target the Cirrus
GPU nodes. These include two machine learning frameworks,
`pytorch/1.13.1-gpu` and `tensorflow/2.15.0-gpu`. Both modules are Python
`pytorch/1.13.1-gpu` and `tensorflow/2.13.0-gpu`. Both modules are Python
virtual environments that extend `python/3.10.8-gpu`. The MPI comms is
handled by the [Horovod](https://horovod.readthedocs.io/en/stable/)
0.28.1 package along with the [NVIDIA Collective Communications
@@ -325,7 +325,7 @@ the centrally-installed `python` modules. You could just as easily
create a local virtual environment based on one of the Machine Learning
(ML) modules, e.g., `tensorflow` or `pytorch`. This means you would avoid
having to install ML packages within your local area. Each of those ML
modules is based on a `python` module. For example, `tensorflow/2.15.0-gpu`
modules is based on a `python` module. For example, `tensorflow/2.13.0-gpu`
is itself an extension of `python/3.10.8-gpu`.

## Installing your own Python packages (with conda)
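
A minimal sketch of basing a local virtual environment on one of the ML modules, assuming the usual `venv` workflow (the path and the `--system-site-packages` flag are illustrative; see the full python.md guide for the recommended steps):

```bash
# Load the ML module (itself an extension of python/3.10.8-gpu)
module load tensorflow/2.13.0-gpu

# Create a local venv that can still see the module's packages
python -m venv --system-site-packages /work/x01/x01/username/myvenv
source /work/x01/x01/username/myvenv/bin/activate

# Add your own packages on top
pip install <some-extra-package>
```
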
@@ -458,11 +458,11 @@ but please don’t attempt to run any computationally intensive work (such
jobs will be killed should they reach the login node CPU limit).

If you want to run your JupyterLab on a compute node, you will need to
enter an [interactive session](../batch/#interactive-jobs); otherwise
enter an [interactive session](batch.md#interactive-jobs); otherwise
you can start from a login node prompt.

1. As described above, load the Anaconda module on Cirrus using
`module load anaconda/python3`.
`module load anaconda3/2023.9`.

2. Run `export JUPYTER_RUNTIME_DIR=$(pwd)`.
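
The two steps visible in this hunk, gathered into a single snippet (the remaining JupyterLab steps fall outside the displayed diff):

```bash
# From a login node, or an interactive session on a compute node
module load anaconda3/2023.9
export JUPYTER_RUNTIME_DIR=$(pwd)
```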

