From 0c153050f499ac84303987f6dd82943edbe8e6c1 Mon Sep 17 00:00:00 2001
From: Annika Lauber
Date: Wed, 16 Oct 2024 10:43:23 +0200
Subject: [PATCH] Update local testing instructions

---
 docs/models/icon/large_use_cases.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/models/icon/large_use_cases.md b/docs/models/icon/large_use_cases.md
index 23a34c32..a5bd4813 100644
--- a/docs/models/icon/large_use_cases.md
+++ b/docs/models/icon/large_use_cases.md
@@ -48,10 +48,10 @@ and a few time steps (about 6). Existing use cases like the [Aquaplanet :materia
 
 ### 1.2 Local testing
 Follow the step-by-step guide in [How to add experiments to a buildbot list :material-open-in-new:](https://gitlab.dkrz.de/icon/wiki/-/wikis/How-to-setup-new-test-experiments-for-buildbot#how-to-add-experiments-to-a-buildbot-list){:target="_blank"} to add you experiment test case. Start with the `checksuite_modes` for the mpi and nproma test (`'nm'`) for the machine you are testing on.
-We recommend you to do out-of-source builds for CPU and GPU so that you can have two compiled versions of ICON in the same repository.
+We recommend doing out-of-source builds for CPU and GPU so that you can have two compiled versions of ICON in the same repository. Please follow the instructions in [Configure and compile :material-open-in-new:](https://c2sm.github.io/models/icon/usage/#configure-and-compile){:target="_blank"} to compile ICON on CPU and on GPU.
 
 #### Test on CPU
-To ensure that there are no basic issues with the namelist, we recommend to start testing on CPU before going over to GPU testing. First, compile icon-nwp on CPU following the instructions in [Configure and compile :material-open-in-new:](https://c2sm.github.io/models/icon/#configure-and-compile){:target="_blank"} (*TODO*: fix link before merging). Then create the check file and run the test locally (`EXP=`):
+To ensure that there are no basic issues with the namelist, we recommend starting with CPU testing before moving on to GPU testing. Create the check file and run the test locally in the folder in which you built the CPU version (set `EXP=` to your experiment name):
 
 ```bash
 ./make_runscripts ${EXP}
@@ -66,7 +66,7 @@ sbatch --partition debug --time 00:30:00 check.${EXP}.run
 ```
 Check in the LOG file if all tests passed.
 
 #### Test on GPU
-If all tests are validating on CPU, the next step is to test on GPU. First, compile icon-nwp on GPU. Then create the check file and run the mpi and nproma test locally as above. If those tests also validate on GPU, you can continue with the tolerance test to ensure that running on GPU gives basically the same results as running on CPU. Therefore, change the `checksuite_mode` to `'t'` for the tolerance test and follow the instructions in [Validating with probtest without buildbot references (Generating tolerances for non standard tests) :material-open-in-new:](https://gitlab.dkrz.de/icon/wiki/-/wikis/GPU-development/Validating-with-probtest-without-buildbot-references-(Generating-tolerances-for-non-standard-tests){:target="_blank"}).
+If all tests validate on CPU, the next step is to test on GPU. First, compile icon-nwp on GPU. Then create the check file and run the mpi and nproma tests locally as above. If those tests also validate on GPU, you can continue with the tolerance test to ensure that running on GPU gives essentially the same results as running on CPU. To do so, please follow the instructions in [Validating with probtest without buildbot references (Generating tolerances for non standard tests) :material-open-in-new:](https://gitlab.dkrz.de/icon/wiki/-/wikis/GPU-development/Validating-with-probtest-without-buildbot-references-(Generating-tolerances-for-non-standard-tests)){:target="_blank"}.
 
 ### 1.3 Activate Test in a CI Pipeline
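
For reference, the local workflow described in the patched section can be sketched as a short shell session. This is only a minimal sketch, assuming out-of-source build directories named `build_cpu` and `build_gpu` and an experiment name stored in `EXP`; these names are placeholders, not names taken from the patch.

```bash
# Minimal sketch of the local CPU/GPU check described above; directory names
# and the experiment name are placeholders, adapt them to your setup.
EXP=my_experiment                        # placeholder: name of your test experiment

for build_dir in build_cpu build_gpu; do # out-of-source CPU and GPU builds in the same clone
    (
        cd "${build_dir}"
        ./make_runscripts "${EXP}"       # creates the check runscript for the experiment
        # Submit the generated check runscript (cd into its directory first if
        # make_runscripts places it in a subdirectory such as run/).
        sbatch --partition debug --time 00:30:00 "check.${EXP}.run"
    )
done
# Afterwards, inspect the LOG file of each job to verify that all tests passed.
```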