Merge development into HarrisSheetinX 2022-10-03 #116

Open
wants to merge 43 commits into base: HarrisSheetinX

Commits (43)
03c7ee0
Updated name for Elisa Rheaume in zenodo & Field Probe files (#3379)
ElisaRheaume Sep 9, 2022
1634f63
Frontier/Crusher: rocFFT Cache Control (#3366)
ax3l Sep 9, 2022
193180a
`SyncCurrent`: Split Filter and Sum over Guard Cells (#3222)
EZoni Sep 9, 2022
c50bb93
Docs: Update Crusher (OLCF) (#3380)
ax3l Sep 12, 2022
936fadd
Docs: Crusher (OLCF) for PSATD+RZ (#3386)
ax3l Sep 12, 2022
70f9a86
AMReX: Weekly Update (#3387)
ax3l Sep 12, 2022
425d22a
ABLASTR: Move Used Inputs Helper (#3376)
ax3l Sep 12, 2022
1c7b2b6
use getWithParser (#3394)
RevathiJambunathan Sep 14, 2022
0f344d6
ABLASTR: Fix Stray Include in DepositCharge (#3393)
ax3l Sep 14, 2022
04b6f67
Frontier/Crusher: Less Invasive libFabric Work-Around (#3396)
ax3l Sep 14, 2022
ac2521a
Use blaspp::gemm on GPU for Hankel transform (#3383)
RemiLehe Sep 15, 2022
47eef0b
add species name to a couple of error messages (#3381)
lucafedeli88 Sep 15, 2022
68eb515
Update highlights with new PRX paper (#3408)
RemiLehe Sep 19, 2022
3bf1a33
ABLASTR: Constants (#3405)
ax3l Sep 19, 2022
276e574
ABLASTR: Fix Stray Include in ChargeDeposition (#3406)
ax3l Sep 19, 2022
6e75516
Display an ASCII art logo on standard output (#3382)
lucafedeli88 Sep 19, 2022
adeebe8
Improve docstrings for some physical constants (#3410)
lucafedeli88 Sep 19, 2022
5deed41
Fixes to Physics_applications/capacitive_discharge/PICMI* (#3413)
dpgrote Sep 19, 2022
f841e67
Zenodo: Add Marco Garten (#3414)
ax3l Sep 19, 2022
a57fe96
Fix value of particle container m_do_back_transformed_particles when …
NeilZaim Sep 19, 2022
33fd1fe
Add beta function to BeamRelevant (#3372)
n01r Sep 19, 2022
5e17906
More fixes for capacitive_discharge PICMI tests (#3416)
dpgrote Sep 20, 2022
2fed282
Correct and test fusion module in RZ geometry (#3255)
RemiLehe Sep 20, 2022
5761b4b
PSATD: More Options for Time Dependency of J, Rho (#3242)
EZoni Sep 20, 2022
71ca756
Add option to deposit laser on main grid (#3235)
RemiLehe Sep 20, 2022
9aca2a6
Fix compilation of RZ version on GPU (#3418)
RemiLehe Sep 21, 2022
797e78d
Fix update of particles flushed already in BTD (#3419)
RemiLehe Sep 21, 2022
90b72e8
CI: Test New v. Legacy BTD in `BTD_ReducedSliceDiag` (#3371)
EZoni Sep 21, 2022
6febc63
fix labels in inputfiles (#3422)
lucafedeli88 Sep 23, 2022
cf74a5b
BTD diagnostics specified by intervals (#3367)
RTSandberg Sep 24, 2022
3fe406c
enforce 3 components for some laser parameters (#3423)
lucafedeli88 Sep 26, 2022
1b9ba80
AMReX/PICSAR: Weekly Update (#3412)
ax3l Sep 28, 2022
7953200
Add 1d support to `_libwarpx.py` functions `get_particle_X` (#3421)
roelof-groenewald Sep 29, 2022
6b46702
Add quiet option to Summit post-proc. docs (#3434)
n01r Sep 29, 2022
0ce3dfa
Implement tridiag solver for 1D (#3431)
dpgrote Sep 30, 2022
e7c33be
Docs: BELLA MVA PoP & Ion PRAB Published (#3435)
ax3l Oct 2, 2022
87d4851
Lassen (LLNL): HDF5 1.12.2 (#3378)
ax3l Oct 2, 2022
3d0f943
Sphinx Extension: Sphinx-Design (#3361)
ax3l Oct 2, 2022
45ec9e3
Major update of the Python/picmi documentation (#3329)
dpgrote Oct 2, 2022
a1ade2b
Use parser for input parameters of type long (#2506)
NeilZaim Oct 2, 2022
7d3dab4
Release 22.10 (#3444)
ax3l Oct 3, 2022
a0eea6d
Doc: Dev FAQ Pinned Memory (#3437)
ax3l Oct 3, 2022
7894cd5
Merge branch 'development' into HSdevmerge_221003
hklion Oct 3, 2022
2 changes: 1 addition & 1 deletion .github/workflows/cuda.yml
@@ -110,7 +110,7 @@ jobs:
which nvcc || echo "nvcc not in PATH!"

git clone https://github.com/AMReX-Codes/amrex.git ../amrex
cd amrex && git checkout --detach 22.09 && cd -
cd amrex && git checkout --detach 13aa4df0f5a4af40270963ad5b42ac7ce662e045 && cd -
make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_PSATD=TRUE USE_CCACHE=TRUE -j 2

build_nvhpc21-11-nvcc:
7 changes: 6 additions & 1 deletion .zenodo.json
@@ -38,6 +38,11 @@
"name": "Fedeli, Luca",
"orcid": "0000-0002-7215-4178"
},
{
"affiliation": "Lawrence Berkeley National Laboratory",
"name": "Garten, Marco",
"orcid": "0000-0001-6994-2475"
},
{
"affiliation": "SLAC National Accelerator Laboratory",
"name": "Ge, Lixin",
@@ -130,7 +135,7 @@
},
{
"affiliation": "Lawrence Berkeley National Laboratory",
"name": "Rheaume, Tiberius",
"name": "Rheaume, Elisa",
"orcid": "0000-0002-6710-0650"
},
{
8 changes: 7 additions & 1 deletion CMakeLists.txt
@@ -1,7 +1,7 @@
# Preamble ####################################################################
#
cmake_minimum_required(VERSION 3.20.0)
project(WarpX VERSION 22.09)
project(WarpX VERSION 22.10)

include(${WarpX_SOURCE_DIR}/cmake/WarpXFunctions.cmake)

@@ -262,6 +262,12 @@ if(WarpX_PSATD)
if(WarpX_DIMS STREQUAL RZ)
target_link_libraries(ablastr PUBLIC blaspp)
target_link_libraries(ablastr PUBLIC lapackpp)

# BLAS++ forgets to declare cuBLAS and cudaRT dependencies
if(WarpX_COMPUTE STREQUAL CUDA)
find_package(CUDAToolkit REQUIRED)
target_link_libraries(ablastr PUBLIC CUDA::cudart CUDA::cublas)
endif()
endif()
endif()

9 changes: 7 additions & 2 deletions Docs/requirements.txt
@@ -4,17 +4,22 @@
#
# License: BSD-3-Clause-LBNL

# WarpX PICMI bindings w/o C++ component (used for autoclass docs)
-e ../Python
breathe
# docutils 0.17 breaks HTML tags & RTD theme
# https://github.com/sphinx-doc/sphinx/issues/9001
docutils<=0.16

# PICMI API docs
# note: keep in sync with version in ../requirements.txt
picmistandard==0.0.19
picmistandard==0.0.20
# for development against an unreleased PICMI version, use:
#picmistandard @ git+https://github.com/picmi-standard/picmi.git#subdirectory=PICMI_Python
# picmistandard @ git+https://github.com/picmi-standard/picmi.git#subdirectory=PICMI_Python

pygments
recommonmark
sphinx>=2.0
sphinx-design
sphinx_rtd_theme>=0.3.1
sphinxcontrib-napoleon
8 changes: 5 additions & 3 deletions Docs/source/conf.py
@@ -45,7 +45,9 @@
# ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.mathjax',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
'sphinx_design',
'breathe'
]

@@ -71,16 +73,16 @@
# built documents.
#
# The short X.Y version.
version = u'22.09'
version = u'22.10'
# The full version, including alpha/beta/rc tags.
release = u'22.09'
release = u'22.10'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
language = 'en'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
4 changes: 3 additions & 1 deletion Docs/source/developers/documentation.rst
@@ -41,9 +41,11 @@ Breathe documentation
---------------------

Your Doxygen documentation is not only useful for people looking into the code, it is also part of the `WarpX online documentation <https://ecp-warpx.github.io>`_ based on `Sphinx <http://www.sphinx-doc.org>`_!
This is done using the Python module `Breathe <http://breathe.readthedocs.org>`_, that allows you to read Doxygen documentation dorectly in the source and include it in your Sphinx documentation, by calling Breathe functions.
This is done using the Python module `Breathe <http://breathe.readthedocs.org>`_, which allows you to write Doxygen documentation directly in the source and have it included in your Sphinx documentation, by calling Breathe functions.
For instance, the following line will get the Doxygen documentation for ``WarpXParticleContainer`` in ``Source/Particles/WarpXParticleContainer.H`` and include it in the HTML page generated by Sphinx:

.. code-block:: rst

.. doxygenclass:: WarpXParticleContainer

Building the documentation
17 changes: 17 additions & 0 deletions Docs/source/developers/faq.rst
@@ -70,3 +70,20 @@ What does const int ``/*i_buffer*/`` mean in argument list?
This is often seen in a derived class, overwriting an interface method.
It means we do not name the parameter because we do not use it when we overwrite the interface.
But we add the name as a comment ``/* ... */`` so that we know what we ignored when looking at the definition of the overwritten method.


What is Pinned Memory?
----------------------

We need pinned, aka "page-locked", host memory when we:

- do asynchronous copies between the host and device
- want to write to CPU memory from a GPU kernel

A typical use case is initialization of our (filtered/processed) output routines.
AMReX provides pinned memory via the ``amrex::PinnedArenaAllocator``, which is the last argument passed to constructors of ``ParticleContainer`` and ``MultiFab``.

Read more on this here: `How to Optimize Data Transfers in CUDA C/C++ <https://developer.nvidia.com/blog/how-optimize-data-transfers-cuda-cc/>`__ (note that pinned memory is a host memory feature and works with all GPU vendors we support)

Bonus: under the hood, asynchronous MPI communications also pin and unpin memory.
One of the benefits of GPU-aware MPI implementations is, besides the possibility to use direct device-device transfers, that MPI and GPU API calls `are aware of each others' pinning ambitions <https://www.open-mpi.org/community/lists/users/2012/11/20659.php>`__ and do not create `data races to unpin the same memory <https://github.com/ComputationalRadiationPhysics/picongpu/pull/438>`__.
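The arena selection described in the FAQ entry above can be sketched as follows (a fragment that assumes AMReX headers and is not standalone-buildable here; ``amrex::MFInfo::SetArena`` and ``amrex::The_Pinned_Arena`` are AMReX APIs, the surrounding function is hypothetical):

```cpp
#include <AMReX_MultiFab.H>

// Hypothetical helper: allocate a MultiFab in pinned (page-locked) host
// memory, suitable for asynchronous device<->host copies.
void make_pinned_mf (amrex::BoxArray const& ba,
                     amrex::DistributionMapping const& dm)
{
    // The MFInfo argument selects the arena; The_Pinned_Arena() hands out
    // page-locked host memory instead of the default (device) arena.
    amrex::MultiFab pinned_mf(ba, dm, /*ncomp=*/1, /*ngrow=*/0,
                              amrex::MFInfo().SetArena(amrex::The_Pinned_Arena()));
}
```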
4 changes: 1 addition & 3 deletions Docs/source/developers/particles.rst
@@ -85,11 +85,9 @@ On a loop over particles it can be useful to access the fields on the box we are
Main functions
--------------

.. doxygenfunction:: PhysicalParticleContainer::FieldGather

.. doxygenfunction:: PhysicalParticleContainer::PushPX

.. doxygenfunction:: WarpXParticleContainer::DepositCurrent
.. doxygenfunction:: WarpXParticleContainer::DepositCurrent(amrex::Vector<std::array<std::unique_ptr<amrex::MultiFab>, 3>> &J, const amrex::Real dt, const amrex::Real relative_time)

.. note::
The current deposition is used both by ``PhysicalParticleContainer`` and ``LaserParticleContainer``, so it is in the parent class ``WarpXParticleContainer``.
2 changes: 1 addition & 1 deletion Docs/source/developers/profiling.rst
@@ -66,7 +66,7 @@ behavior of *each* individual MPI rank. The workflow for doing so is the followi
cmake -S . -B build -DAMReX_BASE_PROFILE=OFF -DAMReX_TINY_PROFILE=ON

- Run the simulation to be profiled. Note that the WarpX executable will create
and new folder `bl_prof`, which contains the profiling data.
a new folder `bl_prof`, which contains the profiling data.

.. note::

2 changes: 1 addition & 1 deletion Docs/source/developers/testing.rst
@@ -25,7 +25,7 @@ For example, if you like to change the compiler to compilation to build on Nvidi
branch = development
cmakeSetupOpts = -DAMReX_ASSERTIONS=ON -DAMReX_TESTING=ON -DWarpX_COMPUTE=CUDA

We also support changing compilation options :ref:`via the usual build enviroment variables <building-cmake-envvars:>`__.
We also support changing compilation options via the usual :ref:`build environment variables <building-cmake-envvars>`.
For instance, compiling with ``clang++ -Werror`` would be:

.. code-block:: sh
17 changes: 9 additions & 8 deletions Docs/source/highlights.rst
@@ -5,7 +5,7 @@ Science Highlights

WarpX can be used in many domains of laser-plasma science, plasma physics, accelerator physics and beyond.
Below, we collect a series of scientific publications that used WarpX.
Please :ref:`acknowledge WarpX in your works`, so we can find your works.
Please :ref:`acknowledge WarpX in your works <acknowledge_warpx>`, so we can find your works.

Is your publication missing? :ref:`Contact us <contact>` or edit this page via a pull request.

@@ -21,12 +21,12 @@ Scientific works in laser-plasma and beam-plasma acceleration.

#. Miao B, Shrock JE, Feder L, Hollinger RC, Morrison J, Nedbailo R, Picksley A, Song H, Wang S, Rocca JJ, Milchberg HM.
**Multi-GeV electron bunches from an all-optical laser wakefield accelerator**.
*preprint*. under review, 2021.
`arXiv:2112.03489 <https://arxiv.org/abs/2112.03489>`__
Physical Review X **12**, 031038, 2022.
`DOI:10.1103/PhysRevX.12.031038 <https://doi.org/10.1103/PhysRevX.12.031038>`__

#. Mirani F, Calzolari D, Formenti A, Passoni M.
**Superintense laser-driven photon activation analysis**.
Nature Communications Physics volume **4**.185, 2021
Nature Communications Physics volume **4**.185, 2021.
`DOI:10.1038/s42005-021-00685-2 <https://doi.org/10.1038/s42005-021-00685-2>`__


@@ -37,12 +37,13 @@ Scientific works in laser-ion acceleration and laser-matter interaction.

#. Hakimi S, Obst-Huebl L, Huebl A, Nakamura K, Bulanov SS, Steinke S, Leemans WP, Kober Z, Ostermayr TM, Schenkel T, Gonsalves AJ, Vay J-L, Tilborg Jv, Toth C, Schroeder CB, Esarey E, Geddes CGR.
**Laser-solid interaction studies enabled by the new capabilities of the iP2 BELLA PW beamline**.
under review, 2022
Physics of Plasmas **29**, 083102, 2022.
`DOI:10.1063/5.0089331 <https://doi.org/10.1063/5.0089331>`__

#. Levy D, Andriyash IA, Haessler S, Ouille M, Kaur J, Flacco A, Kroupp E, Malka V, Lopez-Martens R.
#. Levy D, Andriyash IA, Haessler S, Kaur J, Ouille M, Flacco A, Kroupp E, Malka V, Lopez-Martens R.
**Low-divergence MeV-class proton beams from kHz-driven laser-solid interactions**.
*preprint*. under review, 2021.
`arXiv:2112.12581 <https://arxiv.org/abs/2112.12581>`__
Phys. Rev. Accel. Beams **25**, 093402, 2022.
`DOI:10.1103/PhysRevAccelBeams.25.093402 <https://doi.org/10.1103/PhysRevAccelBeams.25.093402>`__


Particle Accelerator & Beam Physics
33 changes: 31 additions & 2 deletions Docs/source/install/hpc/crusher.rst
@@ -18,7 +18,7 @@ If you are new to this system, **please see the following resources**:
* `Production directories <https://docs.olcf.ornl.gov/data/index.html#data-storage-and-transfers>`_:

* ``$PROJWORK/$proj/``: shared with all members of a project, purged every 90 days (recommended)
* ``$MEMBERWORK/$proj/``: single user, purged every 90 days(usually smaller quota)
* ``$MEMBERWORK/$proj/``: single user, purged every 90 days (usually smaller quota)
* ``$WORLDWORK/$proj/``: shared with all users, purged every 90 days
* Note that the ``$HOME`` directory is mounted as read-only on compute nodes.
That means you cannot run in your ``$HOME``.
@@ -45,6 +45,21 @@ We recommend to store the above lines in a file, such as ``$HOME/crusher_warpx.p

source $HOME/crusher_warpx.profile

And since Crusher does not yet provide a module for them, install BLAS++ and LAPACK++:

.. code-block:: bash

# BLAS++ (for PSATD+RZ)
git clone https://bitbucket.org/icl/blaspp.git src/blaspp
rm -rf src/blaspp-crusher-build
cmake -S src/blaspp -B src/blaspp-crusher-build -Duse_openmp=OFF -Dgpu_backend=hip -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/crusher/blaspp-master
cmake --build src/blaspp-crusher-build --target install --parallel 10

# LAPACK++ (for PSATD+RZ)
git clone https://bitbucket.org/icl/lapackpp.git src/lapackpp
rm -rf src/lapackpp-crusher-build
cmake -S src/lapackpp -B src/lapackpp-crusher-build -DCMAKE_CXX_STANDARD=17 -Dbuild_tests=OFF -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON -DCMAKE_INSTALL_PREFIX=$HOME/sw/crusher/lapackpp-master
cmake --build src/lapackpp-crusher-build --target install --parallel 10
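So that a later CMake configure step can find these installs, one option (a sketch; the ``$HOME/crusher_warpx.profile`` file above may already handle this) is to prepend the install prefixes to ``CMAKE_PREFIX_PATH``:

```shell
# Hypothetical addition to $HOME/crusher_warpx.profile: lets CMake's
# find_package() locate the BLAS++ and LAPACK++ installs built above.
export CMAKE_PREFIX_PATH=$HOME/sw/crusher/blaspp-master:$HOME/sw/crusher/lapackpp-master:$CMAKE_PREFIX_PATH
```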

Then, ``cd`` into the directory ``$HOME/src/warpx`` and use the following commands to compile:

Expand All @@ -69,6 +84,8 @@ Running
MI250X GPUs (2x64 GB)
^^^^^^^^^^^^^^^^^^^^^

ECP WarpX project members, use the ``aph114`` project ID.

After requesting an interactive node with the ``getNode`` alias above, run a simulation like this, here using 8 MPI ranks and a single node:

.. code-block:: bash
@@ -105,4 +122,16 @@ Known System Issues

.. code-block:: bash

export FI_MR_CACHE_MAX_COUNT=0 # libfabric disable caching
#export FI_MR_CACHE_MAX_COUNT=0 # libfabric disable caching
# or, less invasive:
export FI_MR_CACHE_MONITOR=memhooks # alternative cache monitor

.. warning::

Sep 2nd, 2022 (OLCFDEV-1079):
rocFFT in ROCm 5.1+ tries to `write to a cache <https://rocfft.readthedocs.io/en/latest/library.html#runtime-compilation>`__ in the home area by default.
This does not scale; disable it via:

.. code-block:: bash

export ROCFFT_RTC_CACHE_PATH=/dev/null
14 changes: 13 additions & 1 deletion Docs/source/install/hpc/frontier.rst
@@ -116,4 +116,16 @@ Known System Issues

.. code-block:: bash

export FI_MR_CACHE_MAX_COUNT=0 # libfabric disable caching
#export FI_MR_CACHE_MAX_COUNT=0 # libfabric disable caching
# or, less invasive:
export FI_MR_CACHE_MONITOR=memhooks # alternative cache monitor

.. warning::

Sep 2nd, 2022 (OLCFDEV-1079):
rocFFT in ROCm 5.1+ tries to `write to a cache <https://rocfft.readthedocs.io/en/latest/library.html#runtime-compilation>`__ in the home area by default.
This does not scale; disable it via:

.. code-block:: bash

export ROCFFT_RTC_CACHE_PATH=/dev/null
4 changes: 2 additions & 2 deletions Docs/source/install/hpc/perlmutter.rst
@@ -66,13 +66,13 @@ And since Perlmutter does not yet provide a module for them, install ADIOS2, BLA
# BLAS++ (for PSATD+RZ)
git clone https://bitbucket.org/icl/blaspp.git src/blaspp
rm -rf src/blaspp-pm-build
CXX=$(which CC) cmake -S src/blaspp -B src/blaspp-pm-build -Duse_openmp=ON -Dgpu_backend=CUDA -Duse_cmake_find_blas=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/perlmutter/blaspp-master
CXX=$(which CC) cmake -S src/blaspp -B src/blaspp-pm-build -Duse_openmp=OFF -Dgpu_backend=cuda -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/perlmutter/blaspp-master
cmake --build src/blaspp-pm-build --target install --parallel 16

# LAPACK++ (for PSATD+RZ)
git clone https://bitbucket.org/icl/lapackpp.git src/lapackpp
rm -rf src/lapackpp-pm-build
CXX=$(which CC) CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-pm-build -Duse_cmake_find_lapack=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DLAPACK_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -Dbuild_tests=OFF -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON -DCMAKE_INSTALL_PREFIX=$HOME/sw/perlmutter/lapackpp-master
CXX=$(which CC) CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-pm-build -DCMAKE_CXX_STANDARD=17 -Dbuild_tests=OFF -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON -DCMAKE_INSTALL_PREFIX=$HOME/sw/perlmutter/lapackpp-master
cmake --build src/lapackpp-pm-build --target install --parallel 16

Optionally, download and install Python packages for :ref:`PICMI <usage-picmi>` or dynamic ensemble optimizations (:ref:`libEnsemble <libensemble>`):
22 changes: 21 additions & 1 deletion Docs/source/install/hpc/summit.rst
@@ -45,6 +45,22 @@ We recommend to store the above lines in a file, such as ``$HOME/summit_warpx.pr

source $HOME/summit_warpx.profile

For PSATD+RZ simulations, you will need to build BLAS++ and LAPACK++:

.. code-block:: bash

# BLAS++ (for PSATD+RZ)
git clone https://bitbucket.org/icl/blaspp.git src/blaspp
rm -rf src/blaspp-summit-build
cmake -S src/blaspp -B src/blaspp-summit-build -Duse_openmp=OFF -Dgpu_backend=cuda -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/summit/blaspp-master
cmake --build src/blaspp-summit-build --target install --parallel 10

# LAPACK++ (for PSATD+RZ)
git clone https://bitbucket.org/icl/lapackpp.git src/lapackpp
rm -rf src/lapackpp-summit-build
cmake -S src/lapackpp -B src/lapackpp-summit-build -DCMAKE_CXX_STANDARD=17 -Dbuild_tests=OFF -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON -DCMAKE_INSTALL_PREFIX=$HOME/sw/summit/lapackpp-master
cmake --build src/lapackpp-summit-build --target install --parallel 10

Optionally, download and install Python packages for :ref:`PICMI <usage-picmi>` or dynamic ensemble optimizations (:ref:`libEnsemble <libensemble>`):

.. code-block:: bash
@@ -319,6 +335,10 @@ For post-processing, most users use Python via OLCFs's `Jupyter service <https:/
We usually just install our software on-the-fly on Summit.
When starting up a post-processing session, run this in your first cells:

.. note::

The following software packages are installed only into a temporary directory.

.. code-block:: bash

# work-around for OLCFHELP-4242
@@ -328,6 +348,6 @@ When starting up a post-processing session, run this in your first cells:
!conda install -c conda-forge -y mamba

# next cell: the software you want
!mamba install -c conda-forge -y openpmd-api openpmd-viewer ipympl ipywidgets fast-histogram yt
!mamba install --quiet -c conda-forge -y openpmd-api openpmd-viewer ipympl ipywidgets fast-histogram yt

# restart notebook