From 36c78bebeda0aa97916cd0d2be43b0ab1167bb9c Mon Sep 17 00:00:00 2001 From: richelbilderbeek Date: Wed, 15 May 2024 12:04:05 +0200 Subject: [PATCH] Fix markdown style issues --- docs/software/matlab.md | 40 +++++++++++-------- docs/software/metontiime.md | 4 +- .../nvidia-deep-learning-frameworks.md | 10 ++++- docs/software/openmolcas.md | 22 ++++++++-- 4 files changed, 53 insertions(+), 23 deletions(-) diff --git a/docs/software/matlab.md b/docs/software/matlab.md index c43c52c06..c0e14c3a1 100644 --- a/docs/software/matlab.md +++ b/docs/software/matlab.md @@ -30,11 +30,11 @@ That will start a matlab session with the common GUI. Use "&" to have MATLAB in A good and important suggestion is that you always specify a certain version. This is to be able to reproduce your work, a very important key in research! -### First time, since May 13 2024 +## First time, since May 13 2024 - If you use MATLAB after May 13 2024, of any version, you have to do the following step to be able to use the full features of running parallel jobs. - - only needs to be called once per version of MATLAB. - - Note, however, that on Bianca this has to be done separately. + - only needs to be called once per version of MATLAB. + - Note, however, that on Bianca this has to be done separately. - After logging into the cluster, configure MATLAB to run parallel jobs on the cluster by calling the shell script configCluster.sh. 
@@ -52,17 +52,17 @@ $ configCluster.sh # Note: no '-A' Using MATLAB on the cluster enables you to utilize high performance facilities like: - [Parallel computing](https://se.mathworks.com/help/parallel-computing/getting-started-with-parallel-computing-toolbox.html?s_tid=CRUX_lftnav) - - Parallel for-loops - - Evaluate functions in the background + - Parallel for-loops + - Evaluate functions in the background - [Big data processing](https://se.mathworks.com/help/parallel-computing/big-data-processing.html?s_tid=CRUX_lftnav) - - Analyze big data sets in parallel + - Analyze big data sets in parallel - [Batch Processing](https://se.mathworks.com/help/parallel-computing/batch-processing.html?s_tid=CRUX_lftnav) - - Offload execution of functions to run in the background + - Offload execution of functions to run in the background - [GPU computing](https://se.mathworks.com/help/parallel-computing/gpu-computing.html?s_tid=CRUX_lftnav) (Available on Bianca and Snowy) - - Accelerate your code by running it on a GPU + - Accelerate your code by running it on a GPU - Machine & Deep learning - - [Statistics and Machine Learning](https://se.mathworks.com/help/stats/index.html) - - [Deep Learning](https://se.mathworks.com/help/deeplearning/index.html) + - [Statistics and Machine Learning](https://se.mathworks.com/help/stats/index.html) + - [Deep Learning](https://se.mathworks.com/help/deeplearning/index.html) [See MathWork's complete user guide](https://se.mathworks.com/help/parallel-computing/index.html?s_tid=CRUX_lftnav) @@ -70,13 +70,14 @@ Some online tutorials and courses: - [Parallel computing](https://se.mathworks.com/solutions/parallel-computing.html) - Machine Learning - - [Machine learning article](https://se.mathworks.com/solutions/machine-learning.html) - - [Machine learning tutorial](https://matlabacademy.mathworks.com/details/machine-learning-onramp/machinelearning) + - [Machine learning article](https://se.mathworks.com/solutions/machine-learning.html) + - [Machine 
learning tutorial](https://matlabacademy.mathworks.com/details/machine-learning-onramp/machinelearning) - Deep Learning - - [Deep learning article](https://se.mathworks.com/solutions/deep-learning.html) - - [Deep learning tutorial](https://matlabacademy.mathworks.com/details/deep-learning-onramp/deeplearning) + - [Deep learning article](https://se.mathworks.com/solutions/deep-learning.html) + - [Deep learning tutorial](https://matlabacademy.mathworks.com/details/deep-learning-onramp/deeplearning) ## Running MATLAB + ### Graphical user interface To start MATLAB with its usual graphical interface, start it with: @@ -129,11 +130,11 @@ You may want to confer our UPPMAX [ThinLinc user guide](http://docs.uppmax.uu.se ## How to run parallel jobs -### First time, since May 13 2024 +### How to run parallel jobs for the first time, since May 13 2024 - If you use MATLAB after May 13 2024, of any version, you have to do the following step to be able to use the full features of running parallel jobs. - - only needs to be called once per version of MATLAB. - - Note, however, that on Bianca this has to be done separately. + - only needs to be called once per version of MATLAB. + - Note, however, that on Bianca this has to be done separately. - After logging into the cluster, configure MATLAB to run parallel jobs on the cluster by calling the shell script configCluster.sh. ```console @@ -208,6 +209,7 @@ With MATLAB you can e.g. submit jobs directly to our job queue scheduler, withou end t = toc(t0); ``` + and the second, a little longer, saved in ``parallel_example_hvy.m``: ```matlab @@ -223,6 +225,7 @@ and the second, little longer, saved in ``parallel_example_hvy.m``: end end ``` + Begin by running the command ```matlab @@ -245,6 +248,7 @@ in Matlab Command Window to choose a cluster configuration. Matlab will set up a >> job.wait >> job.fetchOutputs{:}" ``` + Follow these instructions; they tell you what is needed in your script or on the command line to run in parallel on the cluster.
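For reference, a function of the kind submitted here could look like the following minimal sketch (the function and argument names are inferred from the call `c.batch(@parallel_example, 1, {90, 5}, 'pool', 19)`; the actual `parallel_example.m` in this guide may differ):

```matlab
function t = parallel_example(iter, pause_s)
    % Time a parfor loop; each iteration just sleeps to simulate work.
    t0 = tic;
    parfor i = 1:iter
        pause(pause_s);
    end
    t = toc(t0);
end
```

With a pool of 19 workers, 90 iterations of 5 seconds each finish in roughly 90*5/19, about 25 seconds of wall time, instead of 450 seconds serially.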
The line "c.batch(@parallel_example, 1, {90, 5}, 'pool', 19)" can be understood as: submit the function "parallel_example" to the batch queue. The arguments to batch are: ```matlab @@ -344,6 +348,7 @@ Batch script example with 2 nodes (Rackham), matlab_submit.sh. module load matlab/R2020b &> /dev/null srun -N 2 -n 40 matlab -batch "run('')" ``` + Run with ```console @@ -365,6 +370,7 @@ If the graphics is slow, try: ```console $ vglrun matlab -nosoftwareopengl ``` + Unfortunately this only works from login nodes. You may want to run MATLAB on a single thread. This makes it work: diff --git a/docs/software/metontiime.md b/docs/software/metontiime.md index 8c36bb43b..25df0b743 100644 --- a/docs/software/metontiime.md +++ b/docs/software/metontiime.md @@ -7,8 +7,8 @@ It is not installed as a module. ???- tip "User tickets (for UPPMAX staff)" - - [ticket_287014](https://github.com/richelbilderbeek/ticket_287014) + [ticket_287014](https://github.com/richelbilderbeek/ticket_287014) ## Links - * [MetONTIIME GitHub repository](https://github.com/MaestSi/MetONTIIME) +- [MetONTIIME GitHub repository](https://github.com/MaestSi/MetONTIIME) diff --git a/docs/software/nvidia-deep-learning-frameworks.md b/docs/software/nvidia-deep-learning-frameworks.md index 60efd8747..912b22d56 100644 --- a/docs/software/nvidia-deep-learning-frameworks.md +++ b/docs/software/nvidia-deep-learning-frameworks.md @@ -5,10 +5,13 @@ Here is how easy one can use an NVIDIA [environment](https://docs.nvidia.com/dee ![web screenshot](./img/pytorch-nvidia.png) First, pull the container (6.5 GB). + ```bash singularity pull docker://nvcr.io/nvidia/pytorch:22.03-py3 ``` + Get an interactive shell. + ```bash singularity shell --nv ~/external_1TB/tmp/pytorch_22.03-py3.sif @@ -34,7 +37,9 @@ True >>> torch.zeros(1).to('cuda') tensor([0.], device='cuda:0') ``` + From the container shell, check what else is available... 
+ ```bash Singularity> nvcc -V nvcc: NVIDIA (R) Cuda compiler driver @@ -60,7 +65,9 @@ Singularity> jupyter-lab [I 13:35:46.616 LabApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). ... ``` + You can use this container to add more packages. + ```singularity Bootstrap: docker From: nvcr.io/nvidia/pytorch:22.03-py3 @@ -68,6 +75,7 @@ From: nvcr.io/nvidia/pytorch:22.03-py3 ``` Just keep in mind that "upgrading" the built-in torch package might install a package that is compatible with fewer GPU architectures and may no longer work on your hardware. + ```bash Singularity> python3 -c "import torch; print(torch.__version__); print(torch.cuda.is_available()); print(torch.cuda.get_arch_list()); torch.zeros(1).to('cuda')" @@ -76,4 +84,4 @@ True ['sm_37', 'sm_50', 'sm_60', 'sm_70'] NVIDIA A100-PCIE-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70. -``` \ No newline at end of file +``` diff --git a/docs/software/openmolcas.md b/docs/software/openmolcas.md index 762663242..aae8ad07c 100644 --- a/docs/software/openmolcas.md +++ b/docs/software/openmolcas.md @@ -1,46 +1,58 @@ # MOLCAS user guide -> How to run the program MOLCAS on UPPMAX +How to run the program MOLCAS on UPPMAX ## Information + MOLCAS is an ab initio computational chemistry program. It focuses on methods for calculating general electronic structures in molecular systems in both ground and excited states. MOLCAS is, in particular, designed to study the potential surfaces of excited states. This guide will help you get started running MOLCAS on UPPMAX. More detailed information on how to use MOLCAS can be found on the [official website](https://molcas.org/). ## Licensing + A valid license key is required to run MOLCAS on UPPMAX. The license key should be kept in a directory named .Molcas under the home directory. 
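As a concrete sketch of that step (the license file name below is an assumption; use the name of the key file you actually received):

```shell
# Create the hidden directory where MOLCAS looks for the license key
mkdir -p "$HOME/.Molcas"
# Then copy your key file into it, for example (file name is hypothetical):
# cp molcas.license "$HOME/.Molcas/"
```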
MOLCAS is currently free of charge for academic researchers active in the Nordic countries. You can obtain a license by following [these instructions](https://www.molcas.org/order.html). ## Versions installed at UPPMAX + At UPPMAX the following versions are installed: - 8.0 (serial) - 7.8 (serial) + ## Modules needed to run MOLCAS + In order to run MOLCAS you must first load the MOLCAS module. You can see all available versions of MOLCAS installed at UPPMAX with: ```bash module avail molcas ``` + Load a MOLCAS module with, e.g.: ```bash module load molcas/7.8.082 ``` + ## How to run MOLCAS interactively + If you would like to do tests or short runs, we recommend using the interactive command: + ```bash interactive -A your_project_name ``` + This will reserve a node for you to do your test on. Note that you must provide the name of an active project in order to run on UPPMAX resources. After a short wait you will get access to the node. Then you can run MOLCAS by: + ```bash module load molcas/7.8.082 molcas -f test000.input ``` + The `test000.input` looks like: -``` +```text *$Revision: 7.7 $ ************************************************************************ * Molecule: H2 @@ -82,9 +94,11 @@ Ras2 &CASPT2 ``` + See the [SLURM user guide](../cluster_guides/slurm.md) for more information on the interactive command. Don't forget to exit your interactive job when you have finished your calculation. Exiting will free the resource for others to use. -## Batch scripts for slurm +## Batch scripts for Slurm + It's possible to run MOLCAS in the batch queue. Here is an example running MOLCAS on one core: ```sbatch @@ -102,9 +116,11 @@ export MOLCASMEM=2000 molcas -f test000.input ``` + Again you'll have to provide your project name. If the script is called `test000.job` you can submit it to the batch queue with: + ```bash sbatch test000.job ```
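Submission and follow-up might look like the following transcript (the job ID shown is illustrative; yours will differ):

```console
$ sbatch test000.job
Submitted batch job 1234567
$ squeue -u $USER
```

`squeue` shows whether the job is still queued or already running; once it starts, its output appears in the submission directory.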