diff --git a/high_performance_computing/hpc_mpi/06_non_blocking_communication.md b/high_performance_computing/hpc_mpi/06_non_blocking_communication.md
index 307810d5..bf8efdc4 100644
--- a/high_performance_computing/hpc_mpi/06_non_blocking_communication.md
+++ b/high_performance_computing/hpc_mpi/06_non_blocking_communication.md
@@ -102,6 +102,7 @@ The "I" stands for "immediate", indicating that the function returns immediately
 | `MPI_Bsend()` | `MPI_Ibsend()` |
 | `MPI_Barrier()` | `MPI_Ibarrier()` |
 | `MPI_Reduce()` | `MPI_Ireduce()` |
+
 ::::
 
 When we use non-blocking communication, we have to follow it up with `MPI_Wait()` to synchronise the program and make sure `*buf` is ready to be re-used.
diff --git a/high_performance_computing/hpc_parallel_intro/01_introduction.md b/high_performance_computing/hpc_parallel_intro/01_introduction.md
index ffd4ab4d..1b104c33 100644
--- a/high_performance_computing/hpc_parallel_intro/01_introduction.md
+++ b/high_performance_computing/hpc_parallel_intro/01_introduction.md
@@ -192,6 +192,7 @@ For the different dispensers case for your workers, however, think of the memory
 |Suitable for both distributed memory and shared memory (e.g., SMP) systems, allowing for parallelization across multiple nodes.|Designed for shared memory systems and cannot be used for parallelization across multiple computers.|
 |Enables parallelism through both processes and threads, providing flexibility for different parallel programming approaches.|Focuses solely on thread-based parallelism, limiting its scope to shared memory environments.|
 |Creation of process/thread instances and communication can result in higher costs and overhead.|Offers lower overhead, as inter-process communication is handled through shared memory, reducing the need for expensive process/thread creation.|
+
 ::::
 
 ## Parallel Paradigms
diff --git a/high_performance_computing/hpc_scalability_profiling/02_scalability_profiling.md b/high_performance_computing/hpc_scalability_profiling/02_scalability_profiling.md
index 24d6ce20..28beddfd 100644
--- a/high_performance_computing/hpc_scalability_profiling/02_scalability_profiling.md
+++ b/high_performance_computing/hpc_scalability_profiling/02_scalability_profiling.md
@@ -187,6 +187,7 @@ Using the `3.9967` T~1~ starting value, we get the following estimations:
 | 1024 | 0.142149 | 28.115998 | 0.726001 |
 | 2048 | 0.140265 | 28.493625 | 0.377627 |
 | 4096 | 0.139323 | 28.686268 | 0.192643 |
+
 ::::
 
 :::::
diff --git a/software_architecture_and_design/procedural/arrays_python.md b/software_architecture_and_design/procedural/arrays_python.md
index 3ff41eb7..8dfeaa01 100644
--- a/software_architecture_and_design/procedural/arrays_python.md
+++ b/software_architecture_and_design/procedural/arrays_python.md
@@ -192,13 +192,13 @@ cd inflammation
 If on WSL or Linux (e.g. Ubuntu or the Ubuntu VM), then do:
 
 ```bash
-wget https://www.uhpc-training.co.uk/material/software_architecture_and_design/procedural/inflammation/inflammation.zip
+wget https://www.uhpc-training.co.uk/material/HPCu/software_architecture_and_design/procedural/inflammation/inflammation.zip
 ```
 
 Or, if on a Mac, do:
 
 ```bash
-curl -O https://www.uhpc-training.co.uk/material/software_architecture_and_design/procedural/inflammation/inflammation.zip
+curl -O https://www.uhpc-training.co.uk/material/HPCu/software_architecture_and_design/procedural/inflammation/inflammation.zip
 ```
 
 Once done, you can unzip this file using the `unzip` command in Bash, which will unpack all the files