From 8efc54f27ffdb78ae6690f6a2bd8e9cf207f0842 Mon Sep 17 00:00:00 2001 From: gspetro-NOAA Date: Thu, 21 Nov 2024 15:46:59 -0500 Subject: [PATCH 1/6] properly configure sphinx-bibtex --- docs/source/conf.py | 2 +- docs/source/references.bib | 0 2 files changed, 1 insertion(+), 1 deletion(-) create mode 100644 docs/source/references.bib diff --git a/docs/source/conf.py b/docs/source/conf.py index 2389dc59d6..2c462a2675 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -43,7 +43,7 @@ 'sphinx.ext.viewcode', 'sphinx.ext.githubpages', 'sphinx.ext.napoleon', - 'sphinxcontrib.bibtex' + 'sphinxcontrib.bibtex', ] bibtex_bibfiles = ['references.bib'] diff --git a/docs/source/references.bib b/docs/source/references.bib new file mode 100644 index 0000000000..e69de29bb2 From 9a28136c865abad65b6b7df407ccdd93046bf37b Mon Sep 17 00:00:00 2001 From: gspetro-NOAA Date: Thu, 21 Nov 2024 18:13:14 -0500 Subject: [PATCH 2/6] add crosslinks; edit syntax --- docs/source/noaa_csp.rst | 210 ++++++++++++++++++++++----------------- 1 file changed, 120 insertions(+), 90 deletions(-) diff --git a/docs/source/noaa_csp.rst b/docs/source/noaa_csp.rst index bfdeac162f..9fc2961b57 100644 --- a/docs/source/noaa_csp.rst +++ b/docs/source/noaa_csp.rst @@ -4,11 +4,11 @@ Configuring NOAA Cloud Service Providers ######################################## -The NOAA Cloud Service Providers (CSP) support the forecast-only, -coupled, and GEFS configurations for the global workflow. +The NOAA Cloud Service Providers (CSPs) support the forecast-only, +coupled, and GEFS configurations for global-workflow. Once a suitable CSP instance and cluster is defined/created, -the global-workflow may be executed similarly to the on-prem machines. -Currently the global-workflow supports the following +the global-workflow may be executed similarly to the on-premises (on-prem) machines. +Currently, the global-workflow supports the following instance and storage types as a function of CSP and forecast resolution. @@ -23,27 +23,27 @@ resolution. - **Instance Type** - **Partition** - **File System** - * - Amazon Web Services Parallel Works + * - Amazon Web Services ParallelWorks - C48, C96, C192, C384 - ``ATM``, ``GEFS`` - ``c5.18xlarge (72 vCPUs, 144 GB Memory, amd64)`` - ``compute`` - ``/lustre``, ``/bucket`` - * - Azure Parallel Works + * - Azure ParallelWorks - C48, C96, C192, C384 - ``ATM``, ``GEFS`` - ``Standard_F48s_v2 (48 vCPUs, 96 GB Memory, amd64)`` - ``compute`` - ``/lustre``, ``/bucket`` - * - GCP Parallel Works + * - GCP ParallelWorks - C48, C96, C192, C384 - ``ATM``, ``GEFS`` - ``c3d-standard-60-lssd (60 vCPUs, 240 GB Memory, amd64)`` - ``compute`` - ``/lustre``, ``/bucket`` -Instructions regarding configuring the respective CSP instance and -cluster follows. +Instructions regarding configuring the respective CSP instances and +clusters follow. ********************* Login to the NOAA CSP @@ -51,83 +51,95 @@ Login to the NOAA CSP Log in to the `NOAA CSP `_ and into the resources configuration. The user should arrive at the following -screen as in Figure 1. Click the "blue" box indicated by the red arrow to login. +screen as in :numref:`Figure %s `. Click the blue box indicated by the red arrow to login. + +.. _pw-home: .. 
figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_login_1.png :name: noaacsp_login_1 :class: with-border :align: center - Figure 1 NOAA-PARALLElWORKS Home Page + NOAA-PARALLELWORKS Home Page -As shown in Figure 2, Fill the ``Username / Email`` box with your username or NOAA email (usually in "FirstName.LastName" format). +As shown in :numref:`Figure %s `, fill the ``Username / Email`` box with your username or NOAA email (usually in "FirstName.LastName" format). Note that the ``Username or email`` query field is case-sensitive. Then enter the respective ``Pin + RSA`` combination using the same RSA token application used for access to other RDHPCS machines (e.g., Hera, Gaea). +.. _login2: + .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_login_2.png :name: noaacsp_login_2 :class: with-border :align: center - Figure 2 NOASS-PARALLELWORKS Login Page + NOAA-SSO-PARALLELWORKS Login Page ******************************* Configure the NOAA CSP Instance ******************************* Once logged into the NOAA CSP, navigate to the ``Marketplace`` section -in the left sidebar as indicated by the red arrow in Figure 3, and click. -Scroll down to selecet "AWS EPIC Wei CentOS" circled in red. -Note that the current global-workflow is still using CentOS built spack-stack, +in the left sidebar as indicated by the red arrow in :numref:`Figure %s `, and click. +Scroll down to select "AWS EPIC Wei CentOS," circled in red. +Note that the current global-workflow is still using CentOS-built spack-stack, but it will be updated to Rocky 8 soon. +.. _pw-marketplace: + .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_1.png :name: noaacsp_instance_1 :class: with-border :align: center - Figure 3 ParallWork Marketplace + ParallWorks Marketplace + +Next, click "Fork latest" as shown in the red-circle in :numref:`Figure %s`. -Next, click "Fork latest" as shown in the red-circle in Figure 4. +.. _fork-latest: .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_2.png :name: noaacsp_instance_2 :class: with-border :align: center - Figure 4 Fork Instance From Marketplace + Fork Instance From Marketplace Please provide a unique name in the "New compute node" field for the instance -(see the box pointer by the red arrow in Figure 5). +(see the box pointer by the red arrow in :numref:`Figure %s `). Best practices suggest one that is clear, concise, and relevant to the application. Click ``Fork`` (in the red-circle) to fork an instance. +.. _create-fork: + .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_3.png :name: noaacsp_instance_3 :class: with-border :align: center - Figure 5 Create the Fork + Create the Fork + +Now, an instance is forked, and it is time to configure the cluster. Follow these steps as shown in :numref:`Figure %s `: -Now, an instance is forked, and it is time to configure the cluster. Fellow these steps as shown in Figure 6: +#. Select a *Resource Account*; usually it is *NOAA AWS Commercial Vault*. +#. Select a *Group*, which will be something like: ``ca-epic``, ``ca-sfs-emc``, etc. +#. Copy and paste your public key (e.g., ``.ssh/id_rsa.pub``, ``.ssh/id_dsa.pub`` from your laptop). +#. Modify *User Bootstrap*. If you are not using the ``ca-epic`` group, please UNCOMMENT line 2. +#. Keep *Health Check* as it is. -#. 
Select a "Resource Account"; usually it is *NOAA AWS Commercial Vault*. -#. Select a "Group", which will be something like: ca-epic, ca-sfs-emc, etc. -#. Copy and paste your public key (e.g., *.ssh/id_rsa.pub*, *.ssh/id_dsa.pu* from your laptop). -#. Modify "User Bootstrap". If you are not using the "ca-epic" group, please UNCOMMENT line 2. -#. Keep "Health Check" as it is. +Click *Save Changes* at top-right as shown in red circle. -Click "Save Changes" at top-right as shown in red circle. +.. _config-cluster: .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_4.png :name: noaacsp_instance_4 :class: with-border :align: center - Figure 6 Save the Instance + Configure & Save the Instance -The NOAA ParallelWorks (PW) currently provides 3 CSPs: +NOAA ParallelWorks (PW) currently provides 3 CSPs: **AWS** (Amazon Web Services), **Azure** (Microsoft Azure), and **GCP** (Google Cloud Platform). Existing clusters may also be modified. @@ -138,147 +150,163 @@ However, it is best practice to fork from Marketplace with something similar to Add CSP Lustre Filesystem ****************************** -To run global-workflow on CSPs, we need to attach the ``/lustre`` filesystem as run directory. +To run global-workflow on CSPs, we need to attach the ``/lustre`` filesystem as a run directory. First, we need to add/define our ``/lustre`` filesystem. -To do so, navigate to the middle of the NOAA PW website left side panel and select "Lustre" -(see the red arrow in Figure 7), and then click "Add Storage" -at the top right as shown in the red-circle. +To do so, navigate to the middle of the NOAA PW website left side panel and select *Lustre* +(see the red arrow in :numref:`Figure %s `), and then click *Add Storage* +at the top right, as shown in the red circle. + +.. _select-lustre: .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_1.png :name: noaacsp_lustre_1 :class: with-border :align: center - Figure 7 Add Lustre Storage + Add Lustre Storage Select `FSx` for the AWS FSx ``/lustre`` filesystem as shown in the red circle. -Define ``/lustre`` with steps in Figure 8: +Define ``/lustre`` with steps in :numref:`Figure %s `: + +#. Provide a clear and meaningful *Resource name*, as shown by the first red arrow +#. Provide a short sentence for *Description*, as shown in the second red arrow +#. Choose **linux** for *Tag*, as shown by red arrow #3 -#. A clear and meaningful `Resource name` as shown by the first red arrow -#. A short sentence for `Description`, as shown in the second red arrow -#. Choose **linux** for `Tag` as shown by red arrow #3 +Click *Add Storage* as in the red box at the top right corner. -Click "Add Storage" as in red-box at top-right corner. +This will create a ``/lustre`` filesystem template after clicking on the red square shown in :numref:`Figure %s `. -This will create a "lustre" filesystem template as in red-squre as in Figure 8. +.. _define-lustre: .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_2.png :name: noaacsp_lustre_2 :class: with-border :align: center - Figure 8 Define Lustre Attributes + Define Lustre Attributes -After creating the template, we need to fill information for this lustre filesystem. -To do so, go to the NOAA PW website, and click "Lustre" on the left side panel as -indicated by red arrow 1 as in Figure 8. 
Then select the filesystem defined above by `Resource name`, +After creating the template, we need to fill in information for this ``/lustre`` filesystem. +To do so, go to the NOAA PW website, and click *Lustre* on the left side panel, as +indicated by red arrow 1 in :numref:`Figure %s `. Then select the filesystem defined by *Resource name* in :numref:`Figure %s above `, as shown in the red box. Here, the user can delete this resource if not needed by -clicking the trash can (indicated by red-arrow 2). +clicking the trash can (indicated by red arrow 2 in :numref:`Figure %s `). + +.. _check-lustre: .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_3.png :name: noaacsp_lustre_3 :class: with-border :align: center - Figure 9 Show the Lustre in PW page + Show Lustre on the PW page By clicking the filesystem in the red box of the image above, -users will be led to the lustre definition page. +users will be led to the ``/lustre`` definition page. -Then follow the steps illustrated in Figure 9 as below: +Then follow the steps illustrated in :numref:`Figure %s ` below: -#. Choose a size in the `Storage Capacity(GB)` box as pointed by red-arrow 1. - There is a minium of 1200 by AWS. For C48 ATM/GEFS case this will be enough. - For SFS-C96 case, or C768 ATM/S2S case it should probably increase to 12000. -#. For `File System Deployment`, choose "SCRATCH_2" for now as by red-arrow 2. - Do not use SCRATCH_1, as it is used for test by PW. -#. Choose **NONE** for `File System Compression` as pointed by red-arrow 3. +#. Choose a size in the *Storage Capacity (GB)* box, as indicated by red arrow 1. + There is a minimum of 1200 for AWS. For the C48 ATM/GEFS case this will be enough. + For SFS-C96 case or C768 ATM/S2S case, it should probably be increased to 12000. +#. For *File System Deployment*, choose "SCRATCH_2" for now as indicated by red arrow 2. + Do not use SCRATCH_1, as it is used for testing by PW. +#. Choose **NONE** for *File System Compression* as pointed by red arrow 3. Only choose LZ4 if you understand what it means. -#. Leave "S3 Import Path" and "S3 Export Path" black for now. -#. Click **Save Changes** in red-circle to save the definition/(changes made). +#. Leave *S3 Import Path* and *S3 Export Path* blank for now. +#. Click **Save Changes** in the red circle to save the definition/changes made. + +.. _config-lustre: .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_4.png :name: noaacsp_lustre_4 :class: with-border :align: center - Figure 10 Defining the Lustre Filesystem Capacity + Defining the Lustre Filesystem Capacity For the storage to be allocated for the global-workflow application, it is suggested that the ``Mount Point`` be ``/lustre``. Once the storage -has been configured, following the steps below to attach the Lustre Filesystem. +has been configured, follow the steps below to attach the ``/lustre`` Filesystem. ****************************** Attach CSP Lustre Filesystem ****************************** Now we need to attach the defined filesystem to our cluster. -Go back to our noaa.parallel.works web-site, and click `Cluster` -as shown in Figuer 11 below, then select the cluster "AWS EPIC Wei CentOS example" -(it should be your own cluster) cluster as show in red-box. 
+Go back to the NOAA PW website (https://noaa.parallel.works), and click *Cluster*
+as shown in :numref:`Figure %s <select-cluster>` below, then select the cluster you made (e.g., the "AWS EPIC Wei CentOS example" cluster, as shown in the red box below).
 Note, one can remove/delete this cluster if no longer needed by
-click the trash-can shown in the red-circle at right.
+clicking the trash can shown in the red circle at right.
+
+.. _select-cluster:

 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_1.png
    :name: noaacsp_filesystem_1
    :class: with-border
    :align: center

-   Figure 11 Add Attached Filesystems
+   Add Attached Filesystems
+
+When we get into the cluster page, click *Definition* in the top menu, as shown
+in the red box in :numref:`Figure %s <add-filesystem>`. Then we can attach the defined filesystems.
+When finished, remember to click *Save Changes* to save the changes.

-When get into the cluster page we will see things as in Figure 12, click the `Definition` in the top menu as
-in the red-box. Then we can attached the defined filesystems.
-When finished, remeber to click `Save Changes` to save the changes.
+.. _add-filesystem:

 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_2.png
    :name: noaacsp_filesystem_2
    :class: with-border
    :align: center

-   Figure 12 Add Attached /lustre and/or /bucket Filesystems
+   Add Attached ``/lustre`` and/or ``/bucket`` Filesystems

-Scroll down to the bottom as show in Figure 13, and click `Add Attached Filesystems` as in the red-circle.
+Scroll down to the bottom as shown in :numref:`Figure %s <click-add-fs>`, and click *Add Attached Filesystems* as in the red circle.
+
+.. _click-add-fs:

 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_3.png
    :name: noaacsp_filesystem_3
    :class: with-border
    :align: center

-   Figure 13 Add Attached /lustre and/or /bucket Filesystems
+   Attach ``/lustre`` and/or ``/bucket`` Filesystems
+
+After clicking *Add Attached Filesystems*, go to *Attached Filesystems settings* and follow the steps listed here,
+which are also shown in :numref:`Figure %s <change-settings>`.

-After clicking `Add Attached Filesystems`, `Attached Filesystems settings`, following steps listed here
-which also shown in Figure 14.
+#. In the *Storage* box, select the Lustre filesystem defined above, as indicated by red arrow 1.
+#. In the *Mount Point* box, name it ``/lustre`` (the common and default choice), as indicated by red arrow 2.
+   If you choose a different name, make sure that the name chosen here matches the name used in the global-workflow setup step.

-#. In the `Storage` box, select the lustre filesystem defined above, as in red-arrow 1.
-#. In the `Mount Point` box, name it `/lustre` (the common and default choice) as pointed by red-arrow 2.
-   If you choose a different name, make sure to make the Global-Workflow setup step
-   use the name chosen here.
+If you have an S3 bucket, you can attach it as follows:

-If you have a `S3 bucket`, one can attached as:
+#. In the *Storage* box, select the bucket you want to use, as indicated by red arrow 3.
+#. In the *Mount Point* box, name it ``/bucket`` (the common and default choice), as indicated by red arrow 4.

-#. In the `Storage` box, select the bucket you want to use, as in red-arrow 3.
-#. In the `Mount Point` box, name it `/bucket` (the common and default choice) as pointed by red-arrow 4.
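A quick way to confirm that the attachments took effect is to check the mounts from a terminal on
the running cluster. This is an illustrative check only, assuming the default ``/lustre`` and
``/bucket`` mount points chosen above:

.. code-block:: console

   # Verify that the attached filesystems are mounted at the expected paths
   df -h /lustre /bucket

   # Confirm that the Lustre run directory is writable
   touch /lustre/write_test && rm /lustre/write_test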
+
+.. _change-settings:

 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_4.png
    :name: noaacsp_filesystem_4
    :class: with-border
    :align: center

-   Figure 14 Add Attached /lustre and/or /bucket Filesystems
+   Adjust Attached ``/lustre`` and/or ``/bucket`` Filesystem Settings

-Always remember to click `Save Changes` after making any changes to the cluster.
+Always remember to click *Save Changes* after making any changes to the cluster.

 **************************
 Using the NOAA CSP Cluster
 **************************

-To activate the cluster, click `Clusters` on the left panel of the NOAA PW website shown in Figure 15,
-as indicated by the red arrow. Then click the `Sessions` button in the red square, and click the power
+To activate the cluster, click *Clusters* on the left panel of the NOAA PW website shown in :numref:`Figure %s <activate-cluster>`,
+as indicated by the red arrow. Then click the *Sessions* button in the red square, and click the power
 button in the red circle. The cluster status is denoted by the color-coded button on the right: red
 means stopped; orange means requested; green means active. The amount of time required to start
 the cluster varies and is not immediate; it may take several minutes (often 10-20) for the cluster to become active.

+.. _activate-cluster:
+
 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_1.png
    :name: noaacsp_using_1
    :class: with-border
    :align: center

    Figure 15 Activate the Cluster

-when the cluster is activate, user will see things numbered in Figure 16 and also listed below:
+When the cluster is activated, users will see the following indicators of success listed below in seen in :numref:`Figure %s <cluster-success>`:

-#. Green dot means the cluster is active, pointed by red-arrow 1.
-#. Green dot means the cluster is active, pointed by red-arrow 2.
-#. Green button means the cluster is active, pointed by red-arrow 3.
-#. Click the blue-square with arrow inside pointed by red-arrow 4 will copy the cluster's IP into clipboard,
-   which you can open a laptop xterm/window, and do `ssh username@the-ip-address` in the xterm window will connect you
-   to the AWS cluster, and you can do you work there.
+#. A green dot means the cluster is active, indicated by red arrow 1.
+#. A green dot means the cluster is active, indicated by red arrow 2.
+#. A green button means the cluster is active, indicated by red arrow 3.
+#. Clicking the clipboard icon (blue square with arrow inside), indicated by red arrow 4 will copy the cluster's IP address into the clipboard. Then,
+   you can open a laptop terminal window (such as xterm), and do ``ssh username@the-ip-address``. This will connect you
+   to the AWS cluster, and you can do your work there.
 #. Which is the `username@the-ip-address`, or your AWS PW cluster. Click it, will have a PW web terminal appear in the
    bottom of the web-site, which you can work on this terminal to use your AWS cluster.

 Please note, as soon as the cluster is activated, AWS/PW starts charging you for use the cluster.
 As this cluster is exclusive for yourself, AWS keep charging you as long as the cluster is active.
 For running global-workflow, one need to keep the cluster active if there is any rocoto jobs running,
 as rocoto is using `crontab`, which needs the cluster active all the time, or the crontab job will be terminated.
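For orientation, the crontab entry that Rocoto relies on generally has the shape sketched below.
The schedule and paths here are placeholders only; the actual entry is generated when the
experiment is set up, so it normally does not need to be written by hand:

.. code-block:: console

   # List the crontab entries currently installed for your user on the cluster
   crontab -l

   # A Rocoto-style entry typically reruns the workflow every few minutes (illustrative paths)
   */5 * * * * /path/to/rocotorun -w /lustre/<user>/run/EXPDIR/<experiment>/<experiment>.xml -d /lustre/<user>/run/EXPDIR/<experiment>/<experiment>.db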
+.. _cluster-success:
+
 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_2.png
    :name: noaacsp_using_2
    :class: with-border
    :align: center

From 4f2470af939824fc38740d0b29bc8b9c8cc8473c Mon Sep 17 00:00:00 2001
From: gspetro-NOAA
Date: Thu, 21 Nov 2024 23:01:53 -0500
Subject: [PATCH 3/6] add more crosslinks; edit syntax

---
 docs/source/noaa_csp.rst | 58 +++++++++++++++++++++------------------
 1 file changed, 30 insertions(+), 28 deletions(-)

diff --git a/docs/source/noaa_csp.rst b/docs/source/noaa_csp.rst
index 9fc2961b57..bb26941820 100644
--- a/docs/source/noaa_csp.rst
+++ b/docs/source/noaa_csp.rst
@@ -312,23 +312,23 @@ the cluster varies and is not immediate; it may take several minutes (often 10-2
    :class: with-border
    :align: center

-   Figure 15 Activate the Cluster
+   Activate the Cluster

-When the cluster is activated, users will see the following indicators of success listed below in seen in :numref:`Figure %s <cluster-success>`:
+When the cluster is activated, users will see the following indicators of success listed below as seen in :numref:`Figure %s <cluster-success>`:

-#. A green dot means the cluster is active, indicated by red arrow 1.
-#. A green dot means the cluster is active, indicated by red arrow 2.
-#. A green button means the cluster is active, indicated by red arrow 3.
+#. A green dot on the left beside the AWS logo means that the cluster is active (indicated by red arrow 1).
+#. A green dot on the right labeled "active" means that the cluster is active (indicated by red arrow 2).
+#. A green power button means the cluster is active (indicated by red arrow 3).
 #. Clicking the clipboard icon (blue square with arrow inside), indicated by red arrow 4 will copy the cluster's IP address into the clipboard. Then,
-   you can open a laptop terminal window (such as xterm), and do ``ssh username@the-ip-address``. This will connect you
+   you can open a laptop terminal window (such as xterm), and run ``ssh username@the-ip-address``. This will connect you
    to the AWS cluster, and you can do your work there.
-#. Which is the `username@the-ip-address`, or your AWS PW cluster. Click it, will have a PW web terminal appear in the
-   bottom of the web-site, which you can work on this terminal to use your AWS cluster.
+#. Alternatively, users can click directly on the ``username@the-ip-address``, and a PW web terminal will appear at the
+   bottom of the website. Users can work through this terminal to use their AWS cluster.

-Please note, as soon as the cluster is activated, AWS/PW starts charging you for use the cluster.
-As this cluster is exclusive for yourself, AWS keep charging you as long as the cluster is active.
-For running global-workflow, one need to keep the cluster active if there is any rocoto jobs running,
-as rocoto is using `crontab`, which needs the cluster active all the time, or the crontab job will be terminated.
+Please note, as soon as the cluster is activated, AWS/PW starts charging you for use of the cluster.
+As this cluster is exclusive for yourself, AWS keeps charging you as long as the cluster is active.
+For running global-workflow, one needs to keep the cluster active if there are any Rocoto jobs running
+because Rocoto uses `crontab`, which needs the cluster active all the time, or the crontab job will be terminated.
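Before turning the cluster off, it can help to confirm that nothing managed by Rocoto is still in
progress. A minimal check might look like the following, where the workflow XML and database paths
are placeholders for the files created when the experiment is set up:

.. code-block:: console

   # Report the state of all cycles and tasks that Rocoto knows about for this experiment
   rocotostat -w /lustre/<user>/run/EXPDIR/<experiment>/<experiment>.xml -d /lustre/<user>/run/EXPDIR/<experiment>/<experiment>.db

   # List any Slurm jobs the workflow still has queued or running
   squeue -u $USER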
 .. _cluster-success:

 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_2.png
    :name: noaacsp_using_2
    :class: with-border
    :align: center

-   Figure 16 Knowing the Cluster
+   Knowing the Cluster

 After finishing your work on the AWS cluster, you should terminate/stop the cluster, unless you have reasons to keep it active.
-To stop/terminate the cluster, go to the cluster session, and click the `green` power button as show in Figure 17.
-A window pop up, and click the red `Turn Off` button to switch off the cluster.
+To stop/terminate the cluster, go to the cluster session, and click the green power button as shown in :numref:`Figure %s <stop-cluster>`.
+A window will pop up; click the red *Turn Off* button to switch off the cluster.
+
+.. _stop-cluster:

 .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_3.png
    :name: noaacsp_using_3
    :class: with-border
    :align: center

-   Figure 17 Terminating the Cluster
+   Terminating the Cluster

 ***************************
 Running the Global Workflow
 ***************************

-Assume you have a AWS cluster running, after login to the cluster through `ssh` from your laptop terminal,
-or access the cluster from your web terminal, one can start clone, compile, and run global-workflow.
+Assuming you have an AWS cluster running, after logging in to the cluster through ``ssh`` from your laptop terminal
+or accessing the cluster from your web terminal, you can start to clone, compile, and run global-workflow.

-#. clone global-workflow(assume you have setup access to githup):
+#. Clone global-workflow (assumes you have set up access to GitHub):

    .. code-block:: console

-      cd /contrib/$USER #you should have a username, and have a directory at /contrib where we save our permanent files.
+      cd /contrib/$USER #you should have a username and have a directory at /contrib, where we save our permanent files.
       git clone --recursive git@github.com:NOAA-EMC/global-workflow.git global-workflow
       #or the develop fork at EPIC:
       git clone --recursive git@github.com:NOAA-EPIC/global-workflow-cloud.git global-workflow-cloud

-#. compile global-workflow:
+#. Compile global-workflow:

    .. code-block:: console

-      cd /contrib/$USER/global-workflow
+      cd /contrib/$USER/global-workflow #or cd /contrib/$USER/global-workflow-cloud depending on which one you cloned
       cd sorc
-      build_all.sh # or similar command to compile for gefs, or others.
+      build_all.sh     # or similar command to compile for gefs, or others.
       link_workflow.sh # after build_all.sh finished successfully

 #. As users may define a very small cluster as controller, one may use the script below to compile in compute node.
-   Save the this script in a file, say, com.slurm, and submit this job with command "sbatch com.slurm":
+   Save this script in a file, say, ``com.slurm``, and submit this job with the command ``sbatch com.slurm``:

    .. code-block:: console

       #!/usr/bin/env bash
       #SBATCH --job-name=compile
       #SBATCH --account=$USER
       #SBATCH --qos=batch
       #SBATCH --partition=compute
       #SBATCH -t 01:15:00
       #SBATCH --nodes=1
       #SBATCH -o compile.%J.log
       #SBATCH --exclusive

-      gwhome=/contrib/Wei.Huang/src/global-workflow-cloud # Change this to your own "global-workflow" source dir
+      gwhome=/contrib/Wei.Huang/src/global-workflow-cloud  # Change this to your own "global-workflow" source directory
       cd ${gwhome}/sorc
       source ${gwhome}/workflow/gw_setup.sh
       #build_all.sh
       build_all.sh -w
       link_workflow.sh

-#. run global-workflow C48 ATM test case (assume user has /lustre filesystem attached):
+#. 
Run global-workflow C48 ATM test case (assumes user has ``/lustre`` filesystem attached): .. code-block:: console @@ -410,7 +412,7 @@ or access the cluster from your web terminal, one can start clone, compile, and cd /lustre/$USER/run/EXPDIR/c48atm crontab c48atm -EPIC has copied the C48 and C96 ATM, GEFS and some other data to AWS, and the current code has setup to use those data. -If user wants to run own case, user needs to make changes to the IC path and others to make it work. +EPIC has copied the C48 and C96 ATM, GEFS, and some other data to AWS, and the current code has been set up to use those data. +If users want to run their own case, they need to make changes to the IC path and others to make it work. The execution of the global-workflow should now follow the same steps as those for the RDHPCS on-premises hosts. From a1d044e379d0298a0c03d96d69a4e09f0a81dd8d Mon Sep 17 00:00:00 2001 From: Wei Huang Date: Fri, 22 Nov 2024 07:44:38 -0700 Subject: [PATCH 4/6] Update docs/source/noaa_csp.rst Co-authored-by: Gillian Petro <96886803+gspetro-NOAA@users.noreply.github.com> --- docs/source/noaa_csp.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/source/noaa_csp.rst b/docs/source/noaa_csp.rst index bfdeac162f..d5e48e11bf 100644 --- a/docs/source/noaa_csp.rst +++ b/docs/source/noaa_csp.rst @@ -297,10 +297,10 @@ when the cluster is activate, user will see things numbered in Figure 16 and als #. Which is the `username@the-ip-address`, or your AWS PW cluster. Click it, will have a PW web terminal appear in the bottom of the web-site, which you can work on this terminal to use your AWS cluster. -Please note, as soon as the cluster is activated, AWS/PW starts charging you for use the cluster. -As this cluster is exclusive for yourself, AWS keep charging you as long as the cluster is active. -For running global-workflow, one need to keep the cluster active if there is any rocoto jobs running, -as rocoto is using `crontab`, which needs the cluster active all the time, or the crontab job will be terminated. +Please note, as soon as the cluster is activated, AWS/PW starts charging you for use of the cluster. +As this cluster is exclusive for yourself, AWS will keep charging you as long as the cluster is active. +For running global-workflow, one needs to keep the cluster active if there are any Rocoto jobs running, +as Rocoto is using `crontab`, which needs the cluster active all the times, or the crontab job(s) will be terminated. .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_2.png :name: noaacsp_using_2 From 18f915e893f209d3c037a51f333f9da9cadc603b Mon Sep 17 00:00:00 2001 From: gspetro-NOAA Date: Fri, 22 Nov 2024 11:47:16 -0500 Subject: [PATCH 5/6] fix crosslink --- docs/source/noaa_csp.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/source/noaa_csp.rst b/docs/source/noaa_csp.rst index bb26941820..731902d366 100644 --- a/docs/source/noaa_csp.rst +++ b/docs/source/noaa_csp.rst @@ -314,7 +314,7 @@ the cluster varies and is not immediate; it may take several minutes (often 10-2 Activate the Cluster -When the cluster is activated, users will see the following indicators of success listed below as seen in :numref:`Figure %s `: +When the cluster is activated, users will see the following indicators of success listed below as seen in :numref:`Figure %s `: #. A green dot on the left beside the AWS logo means that the cluster is active (indicated by red arrow 1). #. 
A green dot on the right labeled "active" means that the cluster is active (indicated by red arrow 2). @@ -330,7 +330,7 @@ As this cluster is exclusive for yourself, AWS keeps charging you as long as the For running global-workflow, one needs to keep the cluster active if there are any Rocoto jobs running because Rocoto uses `crontab`, which needs the cluster active all the time, or the crontab job will be terminated. -.. _cluster-success: +.. _cluster-on: .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_2.png :name: noaacsp_using_2 From c763d6436e9ebff18f53dea4f805c553eddf6ac7 Mon Sep 17 00:00:00 2001 From: gspetro-NOAA Date: Fri, 22 Nov 2024 12:52:54 -0500 Subject: [PATCH 6/6] fix duplicate crosslink --- docs/source/noaa_csp.rst | 2 -- 1 file changed, 2 deletions(-) diff --git a/docs/source/noaa_csp.rst b/docs/source/noaa_csp.rst index eacfa309c1..10af0895e5 100644 --- a/docs/source/noaa_csp.rst +++ b/docs/source/noaa_csp.rst @@ -158,8 +158,6 @@ at the top right, as shown in the red circle. .. _select-lustre: -.. _select-lustre: - .. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_1.png :name: noaacsp_lustre_1 :class: with-border