double check to make sure Gillian's comments and suggestions are in
weihuang-jedi committed Nov 5, 2024
1 parent 634a6f0 commit f6d0b47
Showing 1 changed file with 18 additions and 18 deletions.
36 changes: 18 additions & 18 deletions docs/source/noaa_csp.rst
@@ -53,14 +53,14 @@
Log in to the `NOAA CSP <http://noaa.parallel.works/login>`_ and into
the resources configuration. The user should arrive at the following
screen. Click the "blue" box indicated by the red arrow to log in.

-.. image:: _static/noaacsp_login_1.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_login_1.png

Fill the ``Username / Email`` box with your username or NOAA email (usually in "FirstName.LastName" format).
Note that the ``Username or email`` query field is case-sensitive.
Then enter the respective ``Pin + RSA`` combination using the same RSA token application used
for access to other RDHPCS machines (e.g., Hera, Gaea).

-.. image:: _static/noaacsp_login_2.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_login_2.png

*******************************
Configure the NOAA CSP Instance
@@ -72,18 +72,18 @@
Scroll down to select "AWS EPIC Wei CentOS", circled in red.
Note that the current global-workflow still uses a CentOS-built spack-stack,
but it will be updated to Rocky 8 soon.

-.. image:: _static/noaacsp_instance_1.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_inistance_1.png

Next, click "Fork latest", as shown in the red circle.

-.. image:: _static/noaacsp_instance_2.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_inistance_2.png

Please provide a unique name in the "New compute node" field for the instance
(see the box pointed to by the red arrow).
Best practices suggest a name that is clear, concise, and relevant to the application.
Click ``Fork`` (in the red circle) to fork the instance.

-.. image:: _static/noaacsp_instance_3.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_inistance_3.png

Now, an instance is forked, and it is time to configure the cluster.

@@ -95,7 +95,7 @@

Click "Save Changes" at top-right as shown in red circle.

-.. image:: _static/noaacsp_instance_4.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_inistance_4.png

The NOAA ParallelWorks (PW) currently provides 3 CSPs:
**AWS** (Amazon Web Services), **Azure** (Microsoft Azure),
and **GCP** (Google Cloud Platform).
@@ -115,7 +115,7 @@
To do so, navigate to the middle of the NOAA PW website left side panel and select
at the top right, as shown in the red circle.

.. _noaacsp_lustre_1:
-.. image:: _static/noaacsp_lustre_1.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_1.png

Select `FSx` for the AWS FSx ``/lustre`` filesystem as shown in the red circle.
Define ``/lustre`` with:
@@ -127,15 +127,15 @@
Click "Add Storage" as in red-box at top-right corner.
This will create a "lustre" filesystem template.

-.. image:: _static/noaacsp_lustre_2.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_2.png

After creating the template, we need to fill in the information for this lustre filesystem.
To do so, go to the NOAA PW website and click "Lustre" on the left side panel, as
indicated by red arrow 1. Then select the filesystem defined above by `Resource name`,
as shown in the red box. The user can delete this resource, if it is no longer needed,
by clicking the trash can (indicated by red arrow 2).

-.. image:: _static/noaacsp_lustre_3.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_3.png

By clicking the filesystem in the red box of the image above,
users will be led to the lustre definition page.
@@ -153,7 +153,7 @@
Then follow the steps illustrated in the :ref:`image below <noaacsp_lustre_4>`:

.. _noaacsp_lustre_4:

-.. image:: _static/noaacsp_lustre_4.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_4.png

For the storage to be allocated for the global-workflow application,
it is suggested that the ``Mount Point`` be ``/lustre``. Once the storage
@@ -170,17 +170,17 @@
as shown in the figure below, then select the cluster "AWS EPIC Wei CentOS example".
Note that one can remove/delete this cluster, if it is no longer needed, by
clicking the trash can shown in the red circle at right.

-.. image:: _static/noaacsp_filesystem_1.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_1.png

Once on the cluster page, click `Definition` in the top menu, as shown
in the red box. When finished, remember to click `Save Changes` to save
the changes.

-.. image:: _static/noaacsp_filesystem_2.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_2.png

Scroll down to the bottom and click `Add Attached Filesystems`, as in the red circle.

-.. image:: _static/noaacsp_filesystem_3.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_3.png

After clicking `Add Attached Filesystems`, `Attached Filesystems settings` will appear.

@@ -194,7 +194,7 @@
If you have an `S3 bucket`, it can be attached as follows:
#. In the `Storage` box, select the bucket you want to use, as indicated by red arrow 3.
#. In the `Mount Point` box, name it `/bucket` (the common and default choice), as pointed to by red arrow 4.

-.. image:: _static/noaacsp_filesystem_4.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_4.png

Always remember to click `Save Changes` after making any changes to the cluster.
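
Once the cluster has been restarted with these settings, a quick check from a
shell on the cluster confirms that the filesystems are attached. This is a
minimal sketch, assuming the ``/lustre`` and ``/bucket`` mount points chosen
above; the exact output will vary with your configuration:

.. code-block:: console

   # Verify that the Lustre filesystem is mounted at the expected mount point.
   $ df -h /lustre

   # List both attached filesystems; adjust the paths if you chose
   # different mount points when configuring the cluster.
   $ ls /lustre /bucket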

@@ -208,7 +208,7 @@
button in the red circle. The cluster status is denoted by the color-coded button
on the right: red means stopped; orange means requested; green means active. The amount of time required to start
the cluster varies and is not immediate; it may take several minutes (often 10-20) for the cluster to become active.

-.. image:: _static/noaacsp_using_1.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_1.png

When the cluster is active, users will see:

#. A green dot, meaning the cluster is active, pointed to by red arrow 1.
@@ -225,13 +225,13 @@
As this cluster is exclusively yours, AWS keeps charging you as long as the cluster is on.
To run the global-workflow, one needs to keep the cluster active whenever any rocoto jobs are
running, because rocoto relies on `crontab`, which requires the cluster to stay up at all times,
or the crontab jobs will be terminated.

-.. image:: _static/noaacsp_using_2.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_2.png
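
As a concrete illustration of the crontab dependence noted above, a
rocoto-driven experiment typically appears in the user's crontab as a periodic
``rocotorun`` entry. The sketch below is hypothetical: the workflow XML and
database paths depend on where your experiment directory lives (here assumed
to be under ``/lustre``):

.. code-block:: console

   # Show the current crontab; rocoto only advances the workflow while the
   # cluster (and hence cron) is running.
   $ crontab -l
   */5 * * * * /path/to/rocotorun -w /lustre/EXPDIR/mytest/mytest.xml -d /lustre/EXPDIR/mytest/mytest.db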

After finishing your work on the AWS cluster, you should stop or terminate the cluster, unless you have a reason to keep it active.
To stop/terminate the cluster, go to the cluster session and click the `green` power button. A window will pop up; click the
red `Turn Off` button to switch off the cluster.

-.. image:: _static/noaacsp_using_3.png
+.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_3.png
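
Since cron entries only run while the cluster is up, it is good practice to
confirm that no rocoto jobs still need the cluster before switching it off. A
minimal sketch, assuming the hypothetical crontab entry shown earlier:

.. code-block:: console

   # Review the scheduled jobs, then remove or comment out the rocotorun
   # entry if the experiment is finished.
   $ crontab -l
   $ crontab -e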

***************************
Running the Global Workflow
