# Gaussian 09 user guide

> A short guide on how to run g09 on UPPMAX.

https://www.uppmax.uu.se/support/user-guides/gaussian-09-user-guide/

## Access to Gaussian 09

Gaussian 09 is available at UPPMAX. Uppsala University has a university license for all employees. If you want to be able to run g09, email [[email protected]](mailto:[email protected]) and ask to be added to the g09 group.
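
Once you have been added, you can check the group membership from a login node. This is a small sketch using standard POSIX commands; it is not part of the original guide.

```bash
# Prints "g09" once your account is in the group; prints nothing otherwise.
id -nG | tr ' ' '\n' | grep -x g09
```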

## Running g09

In order to run g09 you must first set up the correct environment. You do this with:

`module load gaussian/g09.d01`
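
If you are unsure which versions are installed, the standard module commands can be used to check before loading; this is a sketch, not part of the original guide.

```bash
module avail gaussian          # list the Gaussian modules installed on the cluster
module load gaussian/g09.d01   # set up the g09 environment
which g09                      # confirm that g09 is now on your PATH
```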

### Running single core jobs in SLURM

Here is an example of a submit script for SLURM:

```slurm
#!/bin/bash -l
#SBATCH -J g09test
#SBATCH -p core
# ...

module load gaussian/g09.d01
g09 mp2.inp mp2.out
```
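
Assuming the script above is saved as, say, `g09test.sh` (the file name is arbitrary and not from the original guide), it is submitted and monitored with the usual SLURM commands:

```bash
sbatch g09test.sh     # submit the job to the queue
squeue -u $USER       # check whether it is pending or running
```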

If you run a single core job on Rackham you can't use more than 6.4GB of memory.

When specifying the memory requirements, make sure that you ask for some more memory in the submit-script than in g09 to allow for some memory overhead for the program. As a general rule you should ask for 200MB more than you need in the calculation.

The `mp2.inp` input file in the example above:

```
%Mem=800MB
#P MP2 aug-cc-pVTZ OPT

test

0 1
Li
F 1 1.0
```
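
To connect the headroom rule to a concrete allocation: `%Mem=800MB` plus roughly 200MB of overhead fits easily within the ~6.4GB a single Rackham core provides. For a larger calculation the same arithmetic sets the core count. The sketch below is illustrative and not from the original guide; the job name, time limit, `%Mem` value, and file names are placeholders.

```slurm
#!/bin/bash -l
# Sketch: a g09 job whose input sets %Mem=12000MB.
# Each core on the Rackham core partition provides about 6.4GB,
# so two cores (~12.8GB) cover the calculation plus headroom.
#SBATCH -J g09bigmem
#SBATCH -p core
#SBATCH -n 2
#SBATCH -t 10:00:00

module load gaussian/g09.d01
g09 bigmem.inp bigmem.out
```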

## Scratch space

The g09 module sets the environment variable `GAUSS_SCRDIR` to `/scratch/$SLURM_JOBID` in slurm. These directories are removed after the job is finished.

If you want to set `GAUSS_SCRDIR` yourself, you must do it after `module load gaussian/g09.a02` in your script.

If you set `GAUSS_SCRDIR` to something else in your submit script, remember to remove all unwanted files after your job has finished.
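
A minimal sketch of doing this is shown below; the project path is hypothetical, only `GAUSS_SCRDIR` itself comes from the guide.

```bash
module load gaussian/g09.d01                      # load first, then override the scratch dir
export GAUSS_SCRDIR=/proj/myproject/nobackup/gaussian.$SLURM_JOBID
mkdir -p "$GAUSS_SCRDIR"

g09 mp2.inp mp2.out

rm -rf "$GAUSS_SCRDIR"                            # clean up the scratch files yourself
```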

If you think you will use a large amount of scratch space, you might want to set **maxdisk** in your input file. You can either set **maxdisk** directly on the command line in your input file:

```
#P MP2 aug-cc-pVTZ SCF=Tight maxdisk=170GB
```

or you can put something like:

```bash
# Available space (in 1K blocks) on the node's /scratch disk, suffixed with KB
MAXDISK=$( df | awk '/scratch/ { print $4 }' )KB
# Strip any maxdisk=...KB keyword already present on the route (#) line
sed -i '/^#/ s/ maxdisk=[[:digit:]]*KB//' inputfile
# Append the freshly computed maxdisk value to the route line
sed -i '/^#/ s/$/ maxdisk='$MAXDISK'/' inputfile
```

in your scriptfile. This will set **maxdisk** to the currently available size of the /scratch disk on the node you will run on. Read more on **maxdisk** in the [online manual](https://gaussian.com/maxdisk/).
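
As a self-contained illustration of what those two `sed` commands do (the file name and the pre-existing **maxdisk** value are made up, and a `/scratch` mount is assumed), run them on a throwaway copy of an input:

```bash
# Create a demo input whose route line already carries a maxdisk keyword.
cat > demo.inp <<'EOF'
%Mem=800MB
#P MP2 aug-cc-pVTZ SCF=Tight maxdisk=100000000KB
EOF

# Same logic as above: replace the old value with the current free space.
MAXDISK=$( df | awk '/scratch/ { print $4 }' )KB
sed -i '/^#/ s/ maxdisk=[[:digit:]]*KB//' demo.inp
sed -i '/^#/ s/$/ maxdisk='$MAXDISK'/' demo.inp

grep '^#' demo.inp   # the route line now ends with the recomputed maxdisk value
```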

## Running g09 in parallel

Gaussian can be run in parallel on a single node using shared memory. This is the input file for the slurm example below.

The `dimer4.inp` input:

```
%Mem=3800MB
%NProcShared=4
#P MP2 aug-cc-pVTZ SCF=Tight

methanol dimer MP2
...
1 2.062618 4.333044 1.344537
8 2.372298 2.640544 0.197416
1 2.702458 3.161614 -0.539550
```

### Running g09 in parallel in slurm

This can be done by asking for CPUs on the same **node** using the parallel node environments and telling Gaussian to use several CPUs with the `NProcShared` link 0 command.

An example submit-script:

```slurm
#!/bin/bash -l
#SBATCH -J g09_4
#SBATCH -p node -n 8
# ...

module load gaussian/g09.d01
export OMP_NUM_THREADS=1
ulimit -s $STACKLIMIT
g09 dimer4.inp dimer4.out
```

Notice that 8 cores are requested from the queue-system using the line `#SBATCH -p node -n 8` and that Gaussian is told to use 4 cores with the link 0 command `%NProcShared=4`. The example above runs about 1.7 times as fast on eight cores as on four; to use eight cores, just change the input file to `%NProcShared=8`. Please benchmark your own inputs, as the speedup depends heavily on the method and the size of the system. In some cases Gaussian cannot use all the CPUs you ask for. This is indicated in the output with lines looking like this:

_PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda._

The reason for specifying `OMP_NUM_THREADS=1` is to not use the OpenMP parts of the Gaussian code, but to use Gaussian's own threads.
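
If you change the core count often, one way to keep `-n` and `%NProcShared` consistent is to rewrite the link 0 line from the values SLURM exports to the job. This is a sketch, not part of the original guide; `SLURM_NTASKS` is the variable `sbatch` sets when `-n` is used.

```bash
# Derive %NProcShared from the allocation instead of hard-coding it.
sed -i "s/^%NProcShared=.*/%NProcShared=${SLURM_NTASKS}/" dimer4.inp
g09 dimer4.inp dimer4.out
```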

## Running g09 in parallel with linda

In order to run g09 in parallel over several nodes, we have acquired Linda TCP.

### Running g09 in parallel with linda in slurm

This can be done by asking for CPUs on several **nodes** using the parallel node environments and telling Gaussian to use several CPUs with the `NProcLinda` and `NProcShared` link 0 commands.

An example submit-script:

```slurm
#!/bin/bash -l
#SBATCH -J g09-linda
#
# ...

time g09 dimer20-2.inp dimer20-2.out
rm tsnet.nodes.$SLURM_JOBID
```
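
The part of the script elided here exports `GAUSS_LFLAGS` with `-nodefile tsnet.nodes.$SLURM_JOBID` and creates that node file. One common way to build such a node list is sketched below; this is an assumption, not the guide's exact recipe.

```bash
# Write one hostname per allocated node for Linda to read.
srun hostname -s | sort -u > tsnet.nodes.$SLURM_JOBID
cat tsnet.nodes.$SLURM_JOBID   # inspect which nodes the job received
```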

Here is the input file:

```
%NProcLinda=2
%NProcShared=20
%Mem=2800MB
...
methanol dimer MP2
...
1 2.062618 4.333044 1.344537
8 2.372298 2.640544 0.197416
1 2.702458 3.161614 -0.539550
```

Notice that 40 cores are requested from the queue-system using the line `#SBATCH -p node -n 40` and that g09 is told to use 2 nodes via linda with the `%NProcLinda=2` link 0 command and 20 cores on each node with the link 0 command `%NProcShared=20`.

Please benchmark your own inputs, as the speedup depends heavily on the method and the size of the system.

In some cases Gaussian cannot use all the CPUs you ask for. This is indicated in the output with lines looking like this:

_PrsmSu: requested number of processors reduced to: 1 ShMem 1 Linda._

## Number of CPUs on the shared memory nodes

Use the information below as a guide to how many CPUs to request for your calculation:

### On Rackham:

- 272 nodes with two 10-core CPUs and 128GB memory
- 32 nodes with two 10-core CPUs and 256GB memory

### On Milou:

- 174 nodes with two 8-core CPUs and 128GB memory
- 17 nodes with two 8-core CPUs and 256GB memory
- 17 nodes with two 8-core CPUs and 512GB memory
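
For example, a shared-memory job that should occupy one full Rackham node (two 10-core CPUs) would request 20 cores and set `%NProcShared=20` in its input. The sketch below is illustrative and not from the original guide; the job name, time limit, and file names are placeholders.

```slurm
#!/bin/bash -l
# Sketch: one full Rackham node for a shared-memory g09 job.
#SBATCH -J g09_fullnode
#SBATCH -p node -n 20
#SBATCH -t 10:00:00

module load gaussian/g09.d01
export OMP_NUM_THREADS=1
# The matching Gaussian input should contain %NProcShared=20.
g09 job.inp job.out
```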

### Note on chk-files:

You may experience difficulties if you mix different versions (g09 and g03) or revisions of Gaussian. If you use a checkpoint file (.chk file) from an older revision (say g03 e.01) in a new calculation with revision a.02, g09 may not run properly.

We recommend using the same revision if you want to restart a calculation or reuse an older checkpoint file.