From dae42e06671dbfe13195e4608931d2f458223735 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 16:28:10 +0200 Subject: [PATCH 01/10] runtime_tips.md last formatting? --- .../running_jobs/runtime_tips.md | 84 +++++++++---------- 1 file changed, 42 insertions(+), 42 deletions(-) diff --git a/docs/cluster_guides/running_jobs/runtime_tips.md b/docs/cluster_guides/running_jobs/runtime_tips.md index d48cdd568..314a8898d 100644 --- a/docs/cluster_guides/running_jobs/runtime_tips.md +++ b/docs/cluster_guides/running_jobs/runtime_tips.md @@ -169,75 +169,75 @@ ???- question "How can I see my job's memory usage?" -???- info "For UPPMAX staff" + - Historical information can first of all be found by issuing the command ``finishedjobinfo -j``. That will print out the maximum memory used by your job. - TODO: InfoGlue link: `https://www.uppmax.uu.se/support/faq/running-jobs-faq/how-can-i-see-my-job-s-memory-usage/` + - If you want more details then we also save some memory information each 5 minute interval for the job in a file under ``/sw/share/slurm/[cluster-name]/uppmax_jobstats/``. Notice that this is only stored for 30 days. -Historical information can first of all be found by issuing the command "finishedjobinfo -j". That will print out the maximum memory used by your job. + - You can also ask for an e-mail containing the log, when you submit your job with sbatch or start an "interactive" session, by adding a "-C usage_mail" flag to your command. Two examples: -If you want more details then we also save some memory information each 5 minute interval for the job in a file under /sw/share/slurm/[cluster-name]/uppmax_jobstats//. Notice that this is only stored for 30 days. + ``` + sbatch -A testproj -p core -n 5 -C usage_mail batchscript1 + ``` -You can also ask for an e-mail containing the log, when you submit your job with sbatch or start an "interactive" session, by adding a "-C usage_mail" flag to your command. Two examples: + or, if interactive -``` -sbatch -A testproj -p core -n 5 -C usage_mail batchscript1 + ``` + interactive -A testproj -p node -n 1 -C "fat&usage_mail" + ``` -interactive -A testproj -p node -n 1 -C "fat&usage_mail" -``` + - As you see, you have to be careful with the syntax when asking for two features, like "fat" and "usage_mail", at the same time. The logical AND operator "&" combines the flags. -As you see, you have to be careful with the syntax when asking for two features, like "fat" and "usage_mail", at the same time. The logical AND operator "&" combines the flags. + - If you overdraft the RAM that you asked for, you will probably get an automatic e-mail anyway. -If you overdraft the RAM that you asked for, you will probably get an automatic e-mail anyway. + - If, on the other hand, you want to view your memory consumption in real time then you will have to login to the node in question in another SSH session. (You will probably find a more recently updated memory information file there, named /var/spool/uppmax_jobstats/.) -If, on the other hand, you want to view your memory consumption in real time then you will have to login to the node in question in another SSH session. (You will probably find a more recently updated memory information file there, named /var/spool/uppmax_jobstats/.) 
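+
+    - For example, a minimal sketch of checking this by hand (the node name and job ID below are placeholders, and the exact file name under that directory may differ, so list it first):
+
+    ```
+    # log in to the node where your job is running (placeholder node name)
+    ssh r123
+    # list the per-job files and view the one that matches your job ID (placeholder)
+    ls /var/spool/uppmax_jobstats/
+    cat /var/spool/uppmax_jobstats/12345678
+    ```
+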
+ - By naively looking at the memory consumption with tools like "ps" and "top" you as a user can easily get the wrong impression of the system, as the Linux kernel uses free memory for lots of buffers and caches to speed up other processes (but releases this as soon as applications requests it). -By naively looking at the memory consumption with tools like "ps" and "top" you as a user can easily get the wrong impression of the system, as the Linux kernel uses free memory for lots of buffers and caches to speed up other processes (but releases this as soon as applications requests it). + - If you know that you are the only user running on the node (from requesting a node job for example), then you could issue the command "free -g" instead. That will show you how much memory is used/free by the whole system, exclusive to these caches. Look for the row called "-/+ buffers/cache". -If you know that you are the only user running on the node (from requesting a node job for example), then you could issue the command "free -g" instead. That will show you how much memory is used/free by the whole system, exclusive to these caches. Look for the row called "-/+ buffers/cache". + - If you require more detailed live information, then it would probably be best if the tool called "smem" is used. Download the latest version from http://www.selenic.com/smem/download/ and unpack it in your home directory. Inside you will find an executable Python script, and by executing the command "smem -utk" you will see your user's memory usage reported in three different ways. -If you require more detailed live information, then it would probably be best if the tool called "smem" is used. Download the latest version from http://www.selenic.com/smem/download/ and unpack it in your home directory. Inside you will find an executable Python script, and by executing the command "smem -utk" you will see your user's memory usage reported in three different ways. + - USS is the total memory used by the user without shared buffers or caches. + - RSS is the number reported in "top" and "ps"; i.e. including ALL shared buffered/cached memory. + - And then there's also the PSS figure which tries to calculate a proportional memory usage per user for all shared memory buffers and caches (i.e. the figure will fall between USS and RSS). -USS is the total memory used by the user without shared buffers or caches. RSS is the number reported in "top" and "ps"; i.e. including ALL shared buffered/cached memory. And then there's also the PSS figure which tries to calculate a proportional memory usage per user for all shared memory buffers and caches (i.e. the figure will fall between USS and RSS). ???- question "My job has very low priority! What can be wrong?" -???- info "For UPPMAX staff" - TODO: InfoGlue link: `https://www.uppmax.uu.se/support/faq/running-jobs-faq/why-does-my-job-have-very-low-priority/` + - One reason could be that your project has consumed its allocated hours. -One reason could be that your project has consumed its allocated hours. + - Background: Every job is associated with a project. Suppose that that you are working for a SNIC project s00101-01 that's been granted 10000 core hours per 30-days running. At the start of the project, s00101-01 is credited with 10000 hours and jobs that runs in that project are given a high priority. All the jobs that are finished or are running during the last 30 days is compared with this granted time. If enough jobs have run to consume this amount of hours the priority is lowered. 
The more you have overdrafted your granted time, the lower the priority. -Background: Every job is associated with a project. Suppose that that you are working for a SNIC project s00101-01 that's been granted 10000 core hours per 30-days running. At the start of the project, s00101-01 is credited with 10000 hours and jobs that runs in that project are given a high priority. All the jobs that are finished or are running during the last 30 days is compared with this granted time. If enough jobs have run to consume this amount of hours the priority is lowered. The more you have overdrafted your granted time, the lower the priority. + - If you have overdrafted your granted time it's still possible to run jobs. You will probably wait for a longer time in the queue. -If you have overdrafted your granted time it's still possible to run jobs. You will probably wait for a longer time in the queue. + - To check status for your projects, run -To check status for your projects, run + ``` + $ projinfo + (Counting the number of core hours used since 2010-05-12/00:00:00 until now.) -``` -$ projinfo -(Counting the number of core hours used since 2010-05-12/00:00:00 until now.) - -Project Used[h] Current allocation [h/month] -User ------------------------------------------------------ -s00101-01 72779.48 50000 -some-user 72779.48 -``` + Project Used[h] Current allocation [h/month] + User + ----------------------------------------------------- + s00101-01 72779.48 50000 + some-user 72779.48 + ``` -If there are enough jobs left in projects that have not gone over their allocation, jobs associated with this project are therefore stuck wating at the bottom of the jobinfo list until the usage for the last 30 days drops down under its allocated budget again. + - If there are enough jobs left in projects that have not gone over their allocation, jobs associated with this project are therefore stuck wating at the bottom of the jobinfo list until the usage for the last 30 days drops down under its allocated budget again. -On the other side they may be lucky to get some free nodes, so it could happen that they run as a bonus job before this happens. + - On the other side they may be lucky to get some free nodes, so it could happen that they run as a bonus job before this happens. -The job queue, that you can see with the jobinfo command, is ordered on job priority. Jobs with a high priority will run first, if they can (depending on number of free nodes and any special demands on e.g. memory). + - The job queue, that you can see with the jobinfo command, is ordered on job priority. Jobs with a high priority will run first, if they can (depending on number of free nodes and any special demands on e.g. memory). -Job priority is the sum of the following numbers (you may use the sprio command to get exact numbers for individual jobs): + - Job priority is the sum of the following numbers (you may use the sprio command to get exact numbers for individual jobs): -A high number (100000 or 130000) if your project is within its allocation and a lower number otherwise. There are different grades of lower numbers, depending on how many times your project is overdrafted. As an example, a 2000 core hour project gets priority 70000 when it has used more than 2000 core hours, gets priority 60000 when it has used more than 4000 core hours, gets priority 50000 when it has used more than 6000 core hours, and so on. The lowest grade gives priority 10000 and does not go down from there. 
-The number of minutes the job has been waiting in queue (for a maximum of 20160 after fourteen days). -A job size number, higher for more nodes allocated to your job, for a maximum of 104. -A very, very high number for "short" jobs, i.e. very short jobs that is not wider than four nodes. -If your job priority is zero or one, there are more serious problems, for example that you asked for more resources than the batch system finds on the system. + - A high number (100000 or 130000) if your project is within its allocation and a lower number otherwise. There are different grades of lower numbers, depending on how many times your project is overdrafted. As an example, a 2000 core hour project gets priority 70000 when it has used more than 2000 core hours, gets priority 60000 when it has used more than 4000 core hours, gets priority 50000 when it has used more than 6000 core hours, and so on. The lowest grade gives priority 10000 and does not go down from there. + - The number of minutes the job has been waiting in queue (for a maximum of 20160 after fourteen days). + - A job size number, higher for more nodes allocated to your job, for a maximum of 104. + - A very, very high number for "short" jobs, i.e. very short jobs that is not wider than four nodes. + - If your job priority is zero or one, there are more serious problems, for example that you asked for more resources than the batch system finds on the system. -If you ask for a longer run time (TimeLimit) than the maximum on the system, your job will not run. The maximum is currently ten days. If you must run a longer job, submit it with a ten-day runtime and contact UPPMAX support. + - If you ask for a longer run time (TimeLimit) than the maximum on the system, your job will not run. The maximum is currently ten days. If you must run a longer job, submit it with a ten-day runtime and contact UPPMAX support. From 741fb109cd9e60741ef7c430b04a09e8cfb6f446 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 16:31:59 +0200 Subject: [PATCH 02/10] Update runtime_tips.md --- docs/cluster_guides/running_jobs/runtime_tips.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/cluster_guides/running_jobs/runtime_tips.md b/docs/cluster_guides/running_jobs/runtime_tips.md index 314a8898d..87f4eb667 100644 --- a/docs/cluster_guides/running_jobs/runtime_tips.md +++ b/docs/cluster_guides/running_jobs/runtime_tips.md @@ -191,7 +191,7 @@ - If, on the other hand, you want to view your memory consumption in real time then you will have to login to the node in question in another SSH session. (You will probably find a more recently updated memory information file there, named /var/spool/uppmax_jobstats/.) - - By naively looking at the memory consumption with tools like "ps" and "top" you as a user can easily get the wrong impression of the system, as the Linux kernel uses free memory for lots of buffers and caches to speed up other processes (but releases this as soon as applications requests it). + - By naively looking at the memory consumption with tools like ``ps`` and ``top`` you as a user can easily get the wrong impression of the system, as the Linux kernel uses free memory for lots of buffers and caches to speed up other processes (but releases this as soon as applications requests it). - If you know that you are the only user running on the node (from requesting a node job for example), then you could issue the command "free -g" instead. 
That will show you how much memory is used/free by the whole system, exclusive to these caches. Look for the row called "-/+ buffers/cache".
 
@@ -208,7 +208,12 @@
 
     - One reason could be that your project has consumed its allocated hours.
 
-    - Background: Every job is associated with a project. Suppose that that you are working for a SNIC project s00101-01 that's been granted 10000 core hours per 30-days running. At the start of the project, s00101-01 is credited with 10000 hours and jobs that runs in that project are given a high priority. All the jobs that are finished or are running during the last 30 days is compared with this granted time. If enough jobs have run to consume this amount of hours the priority is lowered. The more you have overdrafted your granted time, the lower the priority.
+    - Background: Every job is associated with a project.
+    - Suppose that you are working for a SNIC project s00101-01 that's been granted 10000 core hours per running 30-day period.
+    - At the start of the project, s00101-01 is credited with 10000 hours and jobs that run in that project are given a high priority.
+    - All the jobs that have finished or are running during the last 30 days are compared with this granted time.
+    - If enough jobs have run to consume this amount of hours, the priority is lowered.
+    - The more you have overdrafted your granted time, the lower the priority.
 
     - If you have overdrafted your granted time it's still possible to run jobs. You will probably wait for a longer time in the queue.
 

From f917400e7770320c5553c8f12b738eeff3a65b76 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com>
Date: Tue, 7 May 2024 16:48:53 +0200
Subject: [PATCH 03/10] extra.css links underlined test #11

---
 docs/stylesheets/extra.css | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/docs/stylesheets/extra.css b/docs/stylesheets/extra.css
index 8dc023b4f..dcbd6bfb8 100644
--- a/docs/stylesheets/extra.css
+++ b/docs/stylesheets/extra.css
@@ -19,6 +19,7 @@
 
   --md-footer-bg-color: #E6E6E6;
   --md-footer-fg-color: #000000;
+
 
 }
 
@@ -37,6 +38,10 @@
   text-align: initial;
 }
 
+  .md-typeset a {
+  text-decoration: underline;
+}
+
 /* Markdown Header */
 /* https://github.com/squidfunk/mkdocs-material/blob/dcab57dd1cced4b77875c1aa1b53467c62709d31/src/assets/stylesheets/main/_typeset.scss */
 

From f4f4c751b5d0b412a2c1fd8b73d0d92c8f703216 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com>
Date: Tue, 7 May 2024 16:54:38 +0200
Subject: [PATCH 04/10] Update mkdocs.yml

---
 mkdocs.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mkdocs.yml b/mkdocs.yml
index d24692fee..844e8d66c 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -51,7 +51,6 @@ nav:
     - Software table: software/software-table.md
     - Text editors: software/text_editors.md
   - Software-specific documentation:
-    - Whisper: software/whisper.md
     - Chemistry/physics:
       - GAMESS_US: software/games_us.md
      - Gaussian: software/gaussian.md
@@ -69,9 +68,10 @@ nav:
       - IGV: software/igv.md
       - MetONTIIME: software/metontiime.md
       - Tracer: software/tracer.md
-    - Machine Learning:
+    - Machine Learning and AI:
       - NVIDIA DLF: software/nvidia-deep-learning-frameworks.md
       - TensorFlow: software/tensorflow.md
+      - Whisper: software/whisper.md
     - Programming languages:
       - Julia: software/julia.md
       - MATLAB: software/matlab.md

From 6a2564e400e6b26c323e141c0dcbf3c5706d4701 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= 
<70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 16:57:02 +0200 Subject: [PATCH 05/10] mkdocs.yml sftp and wrf in doc tree --- mkdocs.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mkdocs.yml b/mkdocs.yml index 844e8d66c..3ac575f37 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -56,6 +56,7 @@ nav: - Gaussian: software/gaussian.md - GROMACS: software/gromacs.md - Molcas: software/openmolcas.md + - WRF: software/wrf.md - Integrated development environments (IDE): - Jupyter: software/jupyter.md - RStudio: software/rstudio.md @@ -84,6 +85,7 @@ nav: - projplot: software/projplot.md - Rclone: software/rclone.md - SSH: software/ssh.md + - SFTP: software/sftp.md - Databases: - Overview: databases/overview.md From 258f3f076f8f658527f8c71b51bbab9f2f709d53 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 17:07:28 +0200 Subject: [PATCH 06/10] wrf.md content and first formatting --- docs/software/wrf.md | 95 ++++++++++++++++++++++++++++++++------------ 1 file changed, 70 insertions(+), 25 deletions(-) diff --git a/docs/software/wrf.md b/docs/software/wrf.md index 9fd924122..0faf2068f 100644 --- a/docs/software/wrf.md +++ b/docs/software/wrf.md @@ -1,6 +1,6 @@ # WRF user guide -Introduction +# Introduction The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. Model home page @@ -27,7 +27,7 @@ It may not work for a large domain. If so, either modify TBL file or use in inne To analyse the WRF output on the cluster you can use Vapor, NCL (module called as NCL-graphics) or wrf-python (module called as wrf-python). For details on how, please confer the Vapor, NCL or wrf-python web pages. -Get started +# Get started This section assumes that you are already familiar in running WRF. If not, please check the tutorial, where you can at least omit the first 5 buttons and go directly to the last button, or depending on your needs, also check the “Static geography data” and “Real-time data”. When running WRF/WPS you would like your own settings for the model to run and not to interfere with other users. Therefore, you need to set up a local or project directory (e.g. 'WRF') and work from there like for a local installation. You also need some of the content from the central installation. Follow these steps: @@ -46,6 +46,8 @@ You can remove *.exe files because the module files shall be used. When WRF or WPS modules are loaded you can run with “ungrib.exe” or for instance “wrf.exe”, i.e. without the “./”. Normally you can run ungrib.exe, geogrid.exe and real.exe and, if not too long period, metgrid.exe, in the command line or in interactive mode. wrf.exe has to be run on the compute nodes. Make a batch script, see template below: + +```bash #!/bin/bash #SBATCH -J #SBATCH --mail-user @@ -61,15 +63,22 @@ module load WRF/4.1.3-dmpar export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so export I_MPI_PMI2=yes srun -n 40 --mpi=pmi2 wrf.exe -Running smpar+dmpar -Wrf compiled for Hybrid Shared + Distributed memory (OpenMP+MPI) can be more efficient than dmpar only. With good settings it runs approximately 30% faster and similarly less resources. +``` + +# Running smpar+dmpar + +WRF compiled for Hybrid Shared + Distributed memory (OpenMP+MPI) can be more efficient than dmpar only. With good settings it runs approximately 30% faster and similarly less resources. 
To load this module type: +``` module load WRF/4.1.3-dm+sm +``` + The submit script can look like this: -#!/bin/bash +```bash +#!/bin/bash -l #SBATCH -J #SBATCH --mail-user #SBATCH --mail-type=ALL @@ -95,20 +104,32 @@ export I_MPI_PIN_DOMAIN=omp export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so export I_MPI_PMI2=yes srun -n 8 --mpi=pmi2 wrf.exe -Local installation with module dependencies + +``` + +# Local installation with module dependencies + If you would like to change in the FORTRAN code for physics or just want the latest version you can install locally but with the dependencies from the modules -Step 1: WRF Source Code Registration and Download -Register and download -Identify download URLs you need (on Github for v4 and higher) -WRF -WPS -Other? -In folder of your choice at UPPMAX: -'wget ' -'tar zxvf ' . -Step 2: Configure and compile -Create and set the environment in a SOURCEME file, see example below for a intel-dmpar build. Loading module WRF sets most of the environment but some variables have different names in configure file. Examples below assumes dmpar, but can be interchanged to dm+sm for hybrid build. +### Step 1: WRF Source Code Registration and Download + +1. Register and download +1. Identify download URLs you need (on Github for v4 and higher) + + 1. WRF + 1. WPS + 1. Other? + +1. In folder of your choice at UPPMAX: + 1. ``wget `` +1. ``tar zxvf `` + +### Step 2: Configure and compile +- Create and set the environment in a ``SOURCEME`` file, see example below for a intel-dmpar build. +- Loading module WRF sets most of the environment but some variables have different names in configure file. +- Examples below assumes dmpar, but can be interchanged to dm+sm for hybrid build. + +```bash #!/bin/bash module load WRF/4.1.3-dmpar @@ -123,14 +144,21 @@ export NETCDFPATH=$NETCDF export HDF5PATH=$HDF5_DIR -export HDF5=$HDF5_DIR -Then -source SOURCEME +export HDF5=$HDF5_DIR +``` +- Then +``` +source SOURCEME ./configure -Choose intel and dmpar (15) or other, depending on WRF version and parallelization. -When finished it may complain about not finding netcdf.inc file. This is solved below as you have to modify the configure.wrf file. -•Intelmpi settings (for dmpar) +``` + +- Choose intel and dmpar (15) or other, depending on WRF version and parallelization. +- When finished it may complain about not finding netcdf.inc file. This is solved below as you have to modify the configure.wrf file. 
+ +- Intelmpi settings (for dmpar) + +``` DM_FC = mpiifort DM_CC = mpiicc -DMPI2_SUPPORT @@ -138,19 +166,35 @@ DM_CC = mpiicc -DMPI2_SUPPORT #DM_FC = mpif90 -f90=$(SFC) #DM_CC = mpicc -cc=$(SCC) -•Netcdf-fortran paths +``` + +- Netcdf-fortran paths + +``` LIB_EXTERNAL = add flags "-$(NETCDFFPATH)/lib -lnetcdff -lnetcdf" (let line end with "\") INCLUDE_MODULES = add flag "-I$(NETCDFFPATH)/include" (let line end with "\") Add the line below close to NETCDFPATH: NETCDFFPATH = $(NETCDFF) +``` + Then: +``` ./compile em_real +``` + When you have made modification of the code and once configure.wrf is created, just + +``` source SOURCEME +``` and run: +``` ./compile em_real -Running +``` +### Running Batch script should include: + +```bash module load WRF/4.1.3-dmpar export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so @@ -158,3 +202,4 @@ export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so export I_MPI_PMI2=yes srun -n 40 --mpi=pmi2 ./wrf.exe #Note ”./”, otherwise ”module version of wrf.exe” is used +``` From ee73363fa48b94832e46d5ade51e0583c91d3460 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 17:13:26 +0200 Subject: [PATCH 07/10] wrf.md formatting --- docs/software/wrf.md | 70 +++++++++++++++++++++++--------------------- 1 file changed, 36 insertions(+), 34 deletions(-) diff --git a/docs/software/wrf.md b/docs/software/wrf.md index 0faf2068f..f2861158d 100644 --- a/docs/software/wrf.md +++ b/docs/software/wrf.md @@ -1,51 +1,53 @@ # WRF user guide # Introduction -The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. -Model home page +- The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. -ARW branch page +- Model home page -WRF Preprocessing System (WPS). The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. +- ARW branch page -WRF is installed as modules for version 4.1.3 and compiled with INTEL and parallelized for distributed memory (dmpar) or hybrid shared and distributed memory (sm+dm). These are available as: +- WRF Preprocessing System (WPS). The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. -WRF/4.1.3-dmpar default as WRF/4.1.3 -WRF/4.1.3-dm+sm -WPS is installed as version 4.1 and available as: +- WRF is installed as modules for version 4.1.3 and compiled with INTEL and parallelized for distributed memory (dmpar) or hybrid shared and distributed memory (sm+dm). These are available as: -WPS/4.1 -There are WPS_GEOG data available. -Set the path in namelist.wps to: + - WRF/4.1.3-dmpar default as WRF/4.1.3 + - WRF/4.1.3-dm+sm +- WPS is installed as version 4.1 and available as: -'geog_data_path = '/sw/data/WPS-geog/4/rackham/WPS_GEOG'' + - WPS/4.1 + +- There are WPS_GEOG data available. +- Set the path in namelist.wps to: -Corine and metria data are included in the WPS_GEOG directory. -In /sw/data/WPS-geog/4/rackham you'll find GEOGRID.TBL.ARW.corine_metria that hopefully works. Copy to your WPS/GEOGRID directory and then link to GEOGRID.TBL file. 
-It may not work for a large domain. If so, either modify TBL file or use in inner domains only. +``geog_data_path = '/sw/data/WPS-geog/4/rackham/WPS_GEOG'`` -To analyse the WRF output on the cluster you can use Vapor, NCL (module called as NCL-graphics) or wrf-python (module called as wrf-python). For details on how, please confer the Vapor, NCL or wrf-python web pages. +- Corine and metria data are included in the WPS_GEOG directory. +- In /sw/data/WPS-geog/4/rackham you'll find GEOGRID.TBL.ARW.corine_metria that hopefully works. Copy to your WPS/GEOGRID directory and then link to GEOGRID.TBL file. +- It may not work for a large domain. If so, either modify TBL file or use in inner domains only. + +- To analyse the WRF output on the cluster you can use Vapor, NCL (module called as NCL-graphics) or wrf-python (module called as wrf-python). For details on how, please confer the Vapor, NCL or wrf-python web pages. # Get started -This section assumes that you are already familiar in running WRF. If not, please check the tutorial, where you can at least omit the first 5 buttons and go directly to the last button, or depending on your needs, also check the “Static geography data” and “Real-time data”. - -When running WRF/WPS you would like your own settings for the model to run and not to interfere with other users. Therefore, you need to set up a local or project directory (e.g. 'WRF') and work from there like for a local installation. You also need some of the content from the central installation. Follow these steps: - -Create a directory where you plan to have your input and result files. -Standing in this directory copy the all or some of the following directories from the central installation. -Run directory for real runs -cp -r /sw/EasyBuild/rackham/software/WRF/4.1.3-intel-2019b-dmpar/WRF-4.1.3/run . -You can remove *.exe files in this run directory because the module files shall be used. -WPS directory if input data has to be prepared -cp -r /sw/EasyBuild/rackham/software/WPS/4.1-intel-2019b-dmpar/WPS-4.1 . -You can remove *.exe files in the new directory because the module files shall be used. -Test directory for ideal runs -cp -r /sw/EasyBuild/rackham/software/WRF/4.1.3-intel-2019b-dmpar/WRF-4.1.3/test . -You can remove *.exe files because the module files shall be used. -When WRF or WPS modules are loaded you can run with “ungrib.exe” or for instance “wrf.exe”, i.e. without the “./”. -Normally you can run ungrib.exe, geogrid.exe and real.exe and, if not too long period, metgrid.exe, in the command line or in interactive mode. -wrf.exe has to be run on the compute nodes. Make a batch script, see template below: +- This section assumes that you are already familiar in running WRF. If not, please check the tutorial, where you can at least omit the first 5 buttons and go directly to the last button, or depending on your needs, also check the “Static geography data” and “Real-time data”. + +- When running WRF/WPS you would like your own settings for the model to run and not to interfere with other users. Therefore, you need to set up a local or project directory (e.g. 'WRF') and work from there like for a local installation. You also need some of the content from the central installation. Follow these steps: + +1. Create a directory where you plan to have your input and result files. +1. Standing in this directory copy the all or some of the following directories from the central installation. + 1. 
Run directory for real runs + - ``cp -r /sw/EasyBuild/rackham/software/WRF/4.1.3-intel-2019b-dmpar/WRF-4.1.3/run .`` + - You can remove *.exe files in this run directory because the module files shall be used. + 1. WPS directory if input data has to be prepared + - ``cp -r /sw/EasyBuild/rackham/software/WPS/4.1-intel-2019b-dmpar/WPS-4.1 .`` + - You can remove *.exe files in the new directory because the module files shall be used. + 1. Test directory for ideal runs + - ``cp -r /sw/EasyBuild/rackham/software/WRF/4.1.3-intel-2019b-dmpar/WRF-4.1.3/test .`` + - You can remove *.exe files because the module files shall be used. +1. When WRF or WPS modules are loaded you can run with “ungrib.exe” or for instance “wrf.exe”, i.e. without the “./”. +1. Normally you can run ungrib.exe, geogrid.exe and real.exe and, if not too long period, metgrid.exe, in the command line or in interactive mode. +1. wrf.exe has to be run on the compute nodes. Make a batch script, see template below: ```bash #!/bin/bash From 71e0467cc83c31441e4ab1f3887bed32e206f96c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 17:14:50 +0200 Subject: [PATCH 08/10] mkdocs.yml add atmosphere to chem/phys section --- mkdocs.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mkdocs.yml b/mkdocs.yml index 3ac575f37..c905f7b9b 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -51,7 +51,7 @@ nav: - Software table: software/software-table.md - Text editors: software/text_editors.md - Software-specific documentation: - - Chemistry/physics: + - Chemistry/physics/atmosphere: - GAMESS_US: software/games_us.md - Gaussian: software/gaussian.md - GROMACS: software/gromacs.md From ccea6fdec5996cfa57901518b084bd4707d5dde8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 17:27:46 +0200 Subject: [PATCH 09/10] wrf.md links --- docs/software/wrf.md | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/docs/software/wrf.md b/docs/software/wrf.md index f2861158d..0b13f75ec 100644 --- a/docs/software/wrf.md +++ b/docs/software/wrf.md @@ -4,9 +4,9 @@ - The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. -- Model home page +- [Model home page](https://www.mmm.ucar.edu/models/wrf) -- ARW branch page +- [ARW branch page](https://www2.mmm.ucar.edu/wrf/users/) - WRF Preprocessing System (WPS). The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. @@ -27,10 +27,14 @@ - In /sw/data/WPS-geog/4/rackham you'll find GEOGRID.TBL.ARW.corine_metria that hopefully works. Copy to your WPS/GEOGRID directory and then link to GEOGRID.TBL file. - It may not work for a large domain. If so, either modify TBL file or use in inner domains only. -- To analyse the WRF output on the cluster you can use Vapor, NCL (module called as NCL-graphics) or wrf-python (module called as wrf-python). For details on how, please confer the Vapor, NCL or wrf-python web pages. +- To analyse the WRF output on the cluster you can use Vapor, NCL (module called as NCL-graphics) or wrf-python (module called as wrf-python). 
For details on how, please confer the web pages below: + - [wrf-python](https://wrf-python.readthedocs.io/en/latest/), + - [Vapor](https://www.vapor.ucar.edu/) or + - [NCL](https://www.ncl.ucar.edu/Document/Pivot_to_Python/september_2019_update.shtml) + - is not updated anymore and the developers recommend [GeoCAT](https://geocat.ucar.edu/) which serves as an umbrella over wrf-python, among others. # Get started -- This section assumes that you are already familiar in running WRF. If not, please check the tutorial, where you can at least omit the first 5 buttons and go directly to the last button, or depending on your needs, also check the “Static geography data” and “Real-time data”. +- This section assumes that you are already familiar in running WRF. If not, please check the [tutorial](https://www2.mmm.ucar.edu/wrf/OnLineTutorial/index.php), where you can at least omit the first 5 buttons and go directly to the last button, or depending on your needs, also check the “Static geography data” and “Real-time data”. - When running WRF/WPS you would like your own settings for the model to run and not to interfere with other users. Therefore, you need to set up a local or project directory (e.g. 'WRF') and work from there like for a local installation. You also need some of the content from the central installation. Follow these steps: @@ -115,7 +119,7 @@ If you would like to change in the FORTRAN code for physics or just want the lat ### Step 1: WRF Source Code Registration and Download -1. Register and download +1. [Register and download](https://www2.mmm.ucar.edu/wrf/users/download/get_source.html) 1. Identify download URLs you need (on Github for v4 and higher) 1. WRF From f8884cf3735780ae77090c6495a26f4218f10b4f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Bj=C3=B6rn=20Claremar?= <70746791+bclaremar@users.noreply.github.com> Date: Tue, 7 May 2024 21:33:16 +0200 Subject: [PATCH 10/10] Update .wordlist.txt --- .wordlist.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/.wordlist.txt b/.wordlist.txt index 6312add64..c81f7287a 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -3322,3 +3322,4 @@ xls th sFTP SettlementReport +GeoCAT