From 0991c4dd8a761a4026e9fe9f542bf38dbf29fc8c Mon Sep 17 00:00:00 2001 From: Milan Malfait Date: Fri, 24 Nov 2023 12:38:55 +0000 Subject: [PATCH] Run [`typos`](https://github.com/crate-ci/typos) checker --- 01projects/sec01Git.md | 4 ++-- 02cpp1/sec01Types.md | 2 +- 02cpp1/sec02PassByValueOrReference.md | 2 +- 02cpp1/sec03ObjectOrientedProgramming.md | 10 +++++----- 02cpp1/sec04StandardLibrary.md | 2 +- 03cpp2/sec01Exceptions.md | 6 +++--- 04cpp3/sec01Pointers.md | 4 ++-- 04cpp3/sec03Templates.md | 8 ++++---- 05libraries/ProgrammingParadigms.md | 4 ++-- 05libraries/sec03CppCodeDesign.md | 4 ++-- 07performance/sec01Complexity.md | 4 ++-- 07performance/sec02Memory.md | 2 +- .../sec01DistributedMemoryModels.md | 10 +++++----- 09distributed_computing/sec02ProgrammingWithMPI.md | 2 +- 10parallel_algorithms/WorkDepth.md | 6 +++--- index.md | 4 ++-- 16 files changed, 37 insertions(+), 37 deletions(-) diff --git a/01projects/sec01Git.md b/01projects/sec01Git.md index 4ee341dad..f8105216a 100644 --- a/01projects/sec01Git.md +++ b/01projects/sec01Git.md @@ -132,7 +132,7 @@ Changes to be committed: new file: hello.cpp ``` -Now, all three files are ready to be committed (i.e. made a permanently referencable entity of the project), and we employ the `git commit` command for this. +Now, all three files are ready to be committed (i.e. made a permanently referenceable entity of the project), and we employ the `git commit` command for this. ### `git commit` @@ -276,6 +276,6 @@ In fact, this is how we shall proceed with the devcontainer setup for our upcomi ## Further resources -We have only covered a very basic overview of the Git version control system that shall enable us to get started with the in-class exercises and course projects. An excellent resource that provides an expanded introduction is the Software Carpentry's [lessons on Git](https://swcarpentry.github.io/git-novice/) which covers some additional topics such as ignoring certain kind of files from being tracked, referencing previous commits in git commands etc. The sofware carpentry lesson material has been taught as a video playlist with live coding/demonstrator by your course instructor and is available [here](https://www.youtube.com/playlist?list=PLn8I4rGvUPf6qxv2KRN_wK7inXHJH6AIJ). +We have only covered a very basic overview of the Git version control system that shall enable us to get started with the in-class exercises and course projects. An excellent resource that provides an expanded introduction is the Software Carpentry's [lessons on Git](https://swcarpentry.github.io/git-novice/) which covers some additional topics such as ignoring certain kind of files from being tracked, referencing previous commits in git commands etc. The software carpentry lesson material has been taught as a video playlist with live coding/demonstrator by your course instructor and is available [here](https://www.youtube.com/playlist?list=PLn8I4rGvUPf6qxv2KRN_wK7inXHJH6AIJ). In professional software development, one usually encounters further advanced topics such as branching, rebasing, cherry-picking commits etc for which specialised git resources exist both online and in print. All UCL students have free access to content from LinkedIn Learning, and it is worthwhile to look into some of the [top rated Git courses](https://www.linkedin.com/learning/search?keywords=git&upsellOrderOrigin=default_guest_learning&sortBy=RELEVANCE&entityType=COURSE&softwareNames=Git) there. 
diff --git a/02cpp1/sec01Types.md b/02cpp1/sec01Types.md index 23b8306a2..8674cffe8 100644 --- a/02cpp1/sec01Types.md +++ b/02cpp1/sec01Types.md @@ -73,7 +73,7 @@ We will focus overwhelmingly on classes as our means of defining custom types, b - `enum class Colour {red, green, blue};`. This kind of enum (called an `enum class`) cannot be used interchangeably with `int`, and therefore `Colour` can only be used in places that are explicitly expecting a `Colour` type. **We usually want to use an `enum class` so that we don't accidentally mix it up with integer types!** - This cannot be used to index arrays (because it is not an int), but it can be used as a key in `map` types. `map` and `unordered_map` provide C++ equivalents to Python's dictionary type. - In order to use these values we have to also include the class name, so we have to write `Colour::red`, `Colour::green`, or `Colour::blue`. -- `union`: Union types are types which represent a value which is one of a finite set of types. A `union` is declared with a list of members of different types, for example `union IntOrString { int i; string s; };` can store an `int` or a `string`. When a variable of type `IntOrString` is declared, it is only allocated enough memory to store _one_ of its members at a time, so it cannot store both `i` and `s` at the same time. The programmer needs to manually keep track of which type is present, often using an auxilliary variable, in order to safely use union types. Given this additional difficulty, **I wouldn't recommend using union types without a very strong reason.** +- `union`: Union types are types which represent a value which is one of a finite set of types. A `union` is declared with a list of members of different types, for example `union IntOrString { int i; string s; };` can store an `int` or a `string`. When a variable of type `IntOrString` is declared, it is only allocated enough memory to store _one_ of its members at a time, so it cannot store both `i` and `s` at the same time. The programmer needs to manually keep track of which type is present, often using an auxiliary variable, in order to safely use union types. Given this additional difficulty, **I wouldn't recommend using union types without a very strong reason.** Microsoft has excellent, and accessible, resources on [`enum`](https://learn.microsoft.com/en-us/cpp/cpp/enumerations-cpp?view=msvc-170) and [`union`](https://learn.microsoft.com/en-us/cpp/cpp/unions?view=msvc-170) types if you are interested in learning more about them. diff --git a/02cpp1/sec02PassByValueOrReference.md b/02cpp1/sec02PassByValueOrReference.md index d1229eda5..1da826948 100644 --- a/02cpp1/sec02PassByValueOrReference.md +++ b/02cpp1/sec02PassByValueOrReference.md @@ -222,7 +222,7 @@ When we use a `return` statement in a function, we are also passing by value, al - Objects are copied using their _copy constructor_, a special function in their class definition which defines how to create a new object and copy the current object's data. (In many cases this can be automatically created by the compiler.) - Some objects also have a _move constructor_ defined, in which the data is not explicitly copied, but a new object takes control of the data. We'll return to this idea when we talk about pointers later in the course. (The move constructor may also be automatically created by the compiler.) - Normally when a variable goes out of scope its memory is freed and can be reallocated to new variables. 
If we have a _local variable_ in the function scope that we want to return, we can't just give the address of the data (return by reference) because when the function returns the variable will go out of scope and that memory is freed. - - Although return types can be refences, e.g. `int& someFunction()`, you have to be absolutely certain that the memory you are referencing will remain in scope. This could be e.g. a global variable, or a member of a class for an object which continues to exist. It should _never_ be a variable created locally in that function scope. Don't use reference return types unless you are really confident that you know what you are doing! + - Although return types can be references, e.g. `int& someFunction()`, you have to be absolutely certain that the memory you are referencing will remain in scope. This could be e.g. a global variable, or a member of a class for an object which continues to exist. It should _never_ be a variable created locally in that function scope. Don't use reference return types unless you are really confident that you know what you are doing! - For classes with a move constructor a local object can be returned without making a copy, since the compiler knows that the object is about to be destroyed as soon as the function returns, and can therefore have its data transferred instead. (This is why this optimisation can be used when returning a value but _not_ when passing an object to a function by value: when passing an object to a function the original object will continue to exist.) - **The compiler will use a move constructor when available if the object is deemed large enough for the move to be more efficient than a copy, and a copy constructor when not.** Therefore, you may find that returning values is more performant than you expect from the size of the data-structure. diff --git a/02cpp1/sec03ObjectOrientedProgramming.md b/02cpp1/sec03ObjectOrientedProgramming.md index 58ea81ade..6e19b7e1e 100644 --- a/02cpp1/sec03ObjectOrientedProgramming.md +++ b/02cpp1/sec03ObjectOrientedProgramming.md @@ -6,7 +6,7 @@ Estimated Reading Time: 60 minutes # Custom Types and Object Oriented Programming (OOP) in C++ -As a programming lanaguage, C++ supports multiple styles of programming, but it is generally known for _object oriented programming_, often abbreviated as _OOP_. This is handled in C++, as in many languages, through the use of classes: special datastructures which have both member data (variables that each object of that class contains and which are usually different for each object) and member functions, which are functions which can be called through an object and which have access to both the arguments passed to it _and_ the member variables of that object. +As a programming language, C++ supports multiple styles of programming, but it is generally known for _object oriented programming_, often abbreviated as _OOP_. This is handled in C++, as in many languages, through the use of classes: special datastructures which have both member data (variables that each object of that class contains and which are usually different for each object) and member functions, which are functions which can be called through an object and which have access to both the arguments passed to it _and_ the member variables of that object. We have already been making extensive use of classes when working with C++. Indeed, it is difficult not to! 
The addition of classes was the main paradigm shift between C, a procedural programming language with no native support for OOP, and C++. @@ -98,7 +98,7 @@ int main() } ``` -- The count is incremented in the constuctor (`countedClass()`), and so increased every time an instance of this type is created. +- The count is incremented in the constructor (`countedClass()`), and so increased every time an instance of this type is created. - The count is decremented in the destructor (`~countedClass()`), and so decreased every time an instance of this type is destroyed. - `count` is a static variable, so belongs to the class as a whole. There is one variable `count` for the whole class, regardless of how many instances there are. The class still accesses it as a normal member variable. - `count` also needs to be declared outside of the class definition. (This is where you should initialise the value.) @@ -250,7 +250,7 @@ class Ball ``` We now have a ball class that can be instantiated with any mass and radius, and can have its mass or radius changed, but **always satisfies the property that the density field is correct for the given radius and mass of the object**. Being able to guarantee properties of objects of a given type makes the type system far more powerful and gives users the opportunity to use objects in more efficient ways without having to check for conditions that are already guaranteed by the object's design. -### Maintaining Desireable Properties +### Maintaining Desirable Properties Consider another example where we have a catalogue for a library. To keep things simple, we'll say that we just store the title of each book. Very simply, we could define this as a vector: ```cpp @@ -395,9 +395,9 @@ Function overriding is fundamental to this polymorphic style of programming beca ## Polymorphism -Polymorphism is the ability to use multiple types in the same context in our program; in order to achieve this we must only access the common properties of those types through some shared interface. The most common way to do this is to define a base class which defines the necessary common properties, and then have sub-classes which inherit from the base class which represent different kinds of objects which can implement this interface. This is caled *sub-type polymorphism*, and is one of the most common forms of polymorphism. +Polymorphism is the ability to use multiple types in the same context in our program; in order to achieve this we must only access the common properties of those types through some shared interface. The most common way to do this is to define a base class which defines the necessary common properties, and then have sub-classes which inherit from the base class which represent different kinds of objects which can implement this interface. This is called *sub-type polymorphism*, and is one of the most common forms of polymorphism. -By exploring polymorphism we can also understand the behaviour, and some of the limitations, of the straightforward model of inheritence that we have used so far. +By exploring polymorphism we can also understand the behaviour, and some of the limitations, of the straightforward model of inheritance that we have used so far. Let's assume that we have some class `Shape`, and derived classes `Circle` and `Square`. 
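As an illustrative aside to the polymorphism discussion above: a minimal sketch of sub-type polymorphism with a `Shape` base class and `Circle`/`Square` sub-classes, as described in the notes. The `area()` interface, the member names, and the use of `std::unique_ptr` are assumptions made for illustration; the actual classes in the course material may differ.

```cpp
#include <iostream>
#include <memory>
#include <vector>

// Base class defining the shared interface used polymorphically.
class Shape
{
public:
    virtual ~Shape() = default;       // virtual destructor so deletion via a Shape pointer is safe
    virtual double area() const = 0;  // pure virtual: each concrete shape must override this
};

class Circle : public Shape
{
public:
    explicit Circle(double radius) : m_radius(radius) {}
    double area() const override { return 3.141592653589793 * m_radius * m_radius; }
private:
    double m_radius;
};

class Square : public Shape
{
public:
    explicit Square(double side) : m_side(side) {}
    double area() const override { return m_side * m_side; }
private:
    double m_side;
};

int main()
{
    // Both kinds of object are stored and used through the common Shape interface.
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));

    for (const auto &shape : shapes)
    {
        std::cout << shape->area() << "\n";  // dispatches to Circle::area or Square::area at run time
    }
    return 0;
}
```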
diff --git a/02cpp1/sec04StandardLibrary.md b/02cpp1/sec04StandardLibrary.md index 9c1f82d82..1f56ef374 100755 --- a/02cpp1/sec04StandardLibrary.md +++ b/02cpp1/sec04StandardLibrary.md @@ -350,7 +350,7 @@ We can see from our previous example the use of the `()` and `{}` brackets to de You will often find when programming, especially in a language with such an expansive standard library, that there are things that you need to look up. There are a large number of classes and functions available to C++ programmers, many of which may be new to you or require refreshing at various points. -Two common sites for C++ refernce are: +Two common sites for C++ reference are: - - diff --git a/03cpp2/sec01Exceptions.md b/03cpp2/sec01Exceptions.md index b94634a9c..9ae33c438 100644 --- a/03cpp2/sec01Exceptions.md +++ b/03cpp2/sec01Exceptions.md @@ -40,7 +40,7 @@ We'll take a look now at how to do this in practice, starting with catching exce ## Catching Exceptions -We'll start by looking at how to handle an error thrown by an existing function, such as a range error thrown by a vector. When such a function encounters an erorr and _throws_ an exception, it needs to be _caught_. +We'll start by looking at how to handle an error thrown by an existing function, such as a range error thrown by a vector. When such a function encounters an error and _throws_ an exception, it needs to be _caught_. - We first need to identify the code that could throw the exception. We do this with the `try{...}` keyword. - This tells our compiler that we want to monitor the execution of this code block (inside the `{}`) for exceptions. @@ -90,7 +90,7 @@ int main() - `catch` clauses will be evaluated in order, so you should always list your `catch` statements from most specific to most general i.e. list _derived classes_ before the _base classes_ from which they inherit. For example, `std::out_of_range` is a sub-type of `std::exception` since the `out_of_range` class inherits from `exception`. This means that: - if `catch(std::exception e)` comes before `catch(std::out_of_range e)` then all `out_of_range` errors will be caught by the more general `exception` clause, and the specialised `out_of_range` error handling code will never run. - if `catch(std::out_of_range)` is placed first, then the `catch(std::exception e)` code will only run for exceptions which are not `out_of_range`. -- `cerr` is a special output stream for errors; we can use this if we want the error to be written to a different place than standard output (e.g. standard ouput to file and errors to terminal, or vice versa). We can also output exception information to `cout` though. +- `cerr` is a special output stream for errors; we can use this if we want the error to be written to a different place than standard output (e.g. standard output to file and errors to terminal, or vice versa). We can also output exception information to `cout` though. We can see in this example that using `try` and `catch` blocks have significant advantages for someone reading our code: @@ -215,7 +215,7 @@ int main() ## Defining Our Own Exceptions -We've mentioned above that we can differentiate between different kinds of exceptions by checking for different expception classes, and then execute different error handling code accordingly. This is a very powerful feature of exceptions that we can extend further by defining our own exception classes to represent cases specific to our own applications. 
When we define our own exceptions, they should inherit from the `std::exception` class, or from another class which derives from `std::exception` like the standard library exceptions listed above. You should be aware though that if you inherit from a class, for example `runtime_error`, then your exception will be caught by any `catch` statements that catch exceptions of the base classes (`runtime_error` or `exception`). +We've mentioned above that we can differentiate between different kinds of exceptions by checking for different exception classes, and then execute different error handling code accordingly. This is a very powerful feature of exceptions that we can extend further by defining our own exception classes to represent cases specific to our own applications. When we define our own exceptions, they should inherit from the `std::exception` class, or from another class which derives from `std::exception` like the standard library exceptions listed above. You should be aware though that if you inherit from a class, for example `runtime_error`, then your exception will be caught by any `catch` statements that catch exceptions of the base classes (`runtime_error` or `exception`). Exceptions that we define should be indicative of the kind of error that occurs. Rather than trying to create a different exception for each function that can go wrong, create exception classes that represent kinds of problems, and these exceptions may be thrown by many functions. When creating new exception classes it is a good idea to think about what is useful for you to be able to differentiate between. diff --git a/04cpp3/sec01Pointers.md b/04cpp3/sec01Pointers.md index c5666b9d4..d43fdc99e 100644 --- a/04cpp3/sec01Pointers.md +++ b/04cpp3/sec01Pointers.md @@ -35,7 +35,7 @@ Data will end up on the stack or the heap depending on how it is declared, and t ## What Are Smart Pointers? -Smart pointers are a special kind of pointer, introduced in C++11. Since then, they are typically used as the default pointers for most applications, as they automatically handle some memory management which would previously have to be done manually. The reason we have three different kinds of smart pointers is because they embody three different possible ideas about *memory ownership*. Understanding ownership is key to understanding the useage of smart pointers. +Smart pointers are a special kind of pointer, introduced in C++11. Since then, they are typically used as the default pointers for most applications, as they automatically handle some memory management which would previously have to be done manually. The reason we have three different kinds of smart pointers is because they embody three different possible ideas about *memory ownership*. Understanding ownership is key to understanding the usage of smart pointers. When we talk about ownership of some memory or data, the question we are asking is what should have control over the lifetime of the data i.e. when the data should be allocated and freed. Smart pointers in C++ address three cases: @@ -346,7 +346,7 @@ which is exactly equivalent, but the `int const` form is preferred because it is Using `const` with pointers allows us to declare one of two things (or both): - **The pointer points to a `const` type**: we declare the data pointed to constant, and so this pointer cannot be used to update the value held in the memory location to which it points. 
In other words, the memory pointed to is declared read-only, and we can dereference the pointer to retrieve the data at that location, but we can't update it. We can however change the memory address that the pointer points to, since the pointer itself is not constant (remember the pointer is actually a variable storing a memory address). - - To do this with a smart pointer we need to place the `const` in the angle brackets, e.g. `shared_ptr<int const> readOnlySPtr` or `shared_ptr<const int> readOnlySPtr` which declares a shared pointer to a constant int. The `const` keywork here applies to the type of the data, `int`, so it is the data pointer to, not the pointer itself, which is being declared const. + - To do this with a smart pointer we need to place the `const` in the angle brackets, e.g. `shared_ptr<int const> readOnlySPtr` or `shared_ptr<const int> readOnlySPtr` which declares a shared pointer to a constant int. The `const` keyword here applies to the type of the data, `int`, so it is the data pointed to, not the pointer itself, which is being declared const. - To do this with a raw pointer use the `const` keyword _before_ the `*` operator, e.g. `int const * readOnlyPtr` or `const int * readOnlyPtr`. This declares a (raw) pointer to a constant int. - A pointer to const data only prohibits the value in memory being changed _through that pointer_, but if the value can be changed another way (e.g. it is a stack variable or there is another pointer to it) then it could still be changed. - **The pointer itself is const**: the memory location pointed to is a constant. In this case, the value held in the memory can change, but the pointer must always point to the same place and we can't redirect the pointer to look at another place in memory. diff --git a/04cpp3/sec03Templates.md b/04cpp3/sec03Templates.md index 988af4542..5aea6546b 100644 --- a/04cpp3/sec03Templates.md +++ b/04cpp3/sec03Templates.md @@ -41,7 +41,7 @@ class myClassTemplate }; ``` - `T` is the template parameter, and the `typename` keyword tells us that `T` must denote a type. (You can equivalently use the `class` keyword.) - - Do note that you don't need to call your template parameter `T`; like function parameters or other variables, it can have any name. It's good to give it a more meaningful name if the type should represent something in particular, for example `matrixType` could be the name if your templated code deals with arbitrary types represnting matrices. This is especially useful when using templates with multiple template parameters! + - Do note that you don't need to call your template parameter `T`; like function parameters or other variables, it can have any name. It's good to give it a more meaningful name if the type should represent something in particular, for example `matrixType` could be the name if your templated code deals with arbitrary types representing matrices. This is especially useful when using templates with multiple template parameters! - We can then use `T` like any other type inside the body of the class definition. - Additional template parameters can appear in the angle brackets in a comma separated list e.g. `template<typename T1, typename T2>`. This is how e.g. `std::map` works. @@ -231,7 +231,7 @@ We can use `getTheBiggerOne` with our `Country` class just as well as our `Shape - Templates provide static polymorphism. I can define one function template that generates separate functions for each class. If I want to use my function with both `Shape` and `Country`, the compiler needs to know this at compile time. 
- I can't declare a single function or class (such as a container), which can take both `Shape` and `Country`. For example, I can't put a `Shape` object in the same vector as a `Country` object, since it either needs to be a `vector<Shape>` or `vector<Country>`. - If I use the function with `Shape` and with `Country` in the same program, I will actually generate two functions: `Shape& getTheBiggerOne(Shape&, Shape&)` and `Country& getTheBiggerOne(Country&, Country&)`. These functions are separate because they have different signatures (parameter and return types). -- These two can be combined. For example, `getTheBiggerOne` is a template which could be instantiated with the type `Shape`. The resulting fucntion, which takes and returns references to `Shape`, could be used with objects of type `Shape`, `Circle` or `Square` (run time polymorphism based on their inheritance tree) but not `Country` (this is not part of the same inheritance tree). +- These two can be combined. For example, `getTheBiggerOne` is a template which could be instantiated with the type `Shape`. The resulting function, which takes and returns references to `Shape`, could be used with objects of type `Shape`, `Circle` or `Square` (run time polymorphism based on their inheritance tree) but not `Country` (this is not part of the same inheritance tree). ## Organising and Compiling Code with Templates @@ -336,7 +336,7 @@ undefined reference to `int utilFunctions::add<int>(int, int)' - The compiler has been unable to implement a definition of the `add` function for the type `int`, so this definition does not exist for us to use. - This error shows up during linking. You can compile both object files like before, because both match the template declaration and therefore are valid, but neither one can define the specific implementation that we want so when linking it finds that the function isn't defined anywhere. -- `implemenation.cpp` cannot define the implementation when compiled down to an object because it has the function template but not the intended type, so it can't come up with any concrete implementation. +- `implementation.cpp` cannot define the implementation when compiled down to an object because it has the function template but not the intended type, so it can't come up with any concrete implementation. - `usage.cpp` cannot define the implementation when compiled down to an object because it knows what type it should be used for, but it doesn't have the templated implementation (this is in `implementation.cpp`, and we have only included `declaration.hpp`). There are two possible ways to approach this problem. @@ -364,7 +364,7 @@ namespace utilFunctions { ``` 2. We can keep our header file with just the declaration, and tell the compiler which types to implement the function for in the source file (`implementation.cpp`). - - In this case, `usage.cpp` will only be able to use `add` for the types which are explicitly instantiated in `implemenation.cpp`. + - In this case, `usage.cpp` will only be able to use `add` for the types which are explicitly instantiated in `implementation.cpp`. - This is less flexible as you need to anticipate any combination of template arguments that the function will be used with, but keeps the declaration and the implementation separate. - Separate function implementations will be created for each set of types given, even if they are never used. - It can also be useful if you want the function to restrict usage to a sub-set of possible types. 
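To make option 2 above concrete, here is a single-file sketch of explicit template instantiation. The split across `declaration.hpp`, `implementation.cpp`, and `usage.cpp` is only indicated by comments, and the body of `add` is an assumption; the actual files in the course material may differ.

```cpp
#include <iostream>

namespace utilFunctions
{
    // declaration.hpp: declaration only, visible to users of the library.
    template <typename T>
    T add(T a, T b);

    // implementation.cpp: the templated definition, not visible from usage.cpp.
    template <typename T>
    T add(T a, T b)
    {
        return a + b;
    }

    // implementation.cpp: explicit instantiations. These force the compiler to emit
    // concrete definitions for the listed types, so the linker can find them later.
    template int add<int>(int, int);
    template double add<double>(double, double);
}

int main()
{
    // usage.cpp: works because add<int> was explicitly instantiated above.
    std::cout << utilFunctions::add(2, 3) << "\n";
    // A call with a type that was not instantiated (e.g. std::string) would fail at link time.
    return 0;
}
```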
diff --git a/05libraries/ProgrammingParadigms.md b/05libraries/ProgrammingParadigms.md index 75f113375..47653b5d8 100644 --- a/05libraries/ProgrammingParadigms.md +++ b/05libraries/ProgrammingParadigms.md @@ -77,7 +77,7 @@ Take a binary tree as an example: ![image](images/TreeInheritanceComposition.png) -## Influences from Functional Progamming +## Influences from Functional Programming Functional programming is an alternative approach to imperative programming. Although C++ is not a functional language in the sense that Haskell or ML are, it has taken some influence from functional programming in the last decade or so, and we can try to enforce some of the functional style in C++ by applying some conventions. @@ -131,7 +131,7 @@ You can declare a `const` member function by placing the keywords `const` after > }; > ``` -A pure function can be declared as a `constexpr`. `constexpr` stands for "constant expression", and it is an expression which can (in principle) be evaluated at compile-time. The simplest usages for this are to intialise constant variables with simple expressions, such as: +A pure function can be declared as a `constexpr`. `constexpr` stands for "constant expression", and it is an expression which can (in principle) be evaluated at compile-time. The simplest usages for this are to initialise constant variables with simple expressions, such as: > ```cpp > double y = 1.0/7.0; diff --git a/05libraries/sec03CppCodeDesign.md b/05libraries/sec03CppCodeDesign.md index 196b1593c..d5f662cb7 100644 --- a/05libraries/sec03CppCodeDesign.md +++ b/05libraries/sec03CppCodeDesign.md @@ -51,7 +51,7 @@ Inheritance is sometimes misused by C++ programmers to share functionality betwe ## Undefined Behaviour -A quirk of the C++ programming language is that not all source code that compiles is actually a valid C++ program. **Undefined behaviour** refers to situtaions in C++ where the standard offers no guidance and a compiler can more or less do what it likes; as a result we as programmers may have little idea what will happen if such a program is run, and the results will vary from compiler to compiler, and system to system. This means if our program has undefined behaviour then even if we have thoroughly tested it on our own system, it may not be portable to anyone else's. +A quirk of the C++ programming language is that not all source code that compiles is actually a valid C++ program. **Undefined behaviour** refers to situations in C++ where the standard offers no guidance and a compiler can more or less do what it likes; as a result we as programmers may have little idea what will happen if such a program is run, and the results will vary from compiler to compiler, and system to system. This means if our program has undefined behaviour then even if we have thoroughly tested it on our own system, it may not be portable to anyone else's. You can read more about undefined behaviour on e.g. [cppreference](https://en.cppreference.com/w/cpp/language/ub). @@ -127,4 +127,4 @@ The book [Effective Modern C++](https://www.oreilly.com/library/view/effective-m ### Design Patterns -The book [Design Patterns](https://www.oreilly.com/library/view/design-patterns-elements/0201633612/) provides many examples of frequently occuring design solutions in object oriented programming that we have not covered in these notes. If you're comfortable with the ideas we've covered in C++ and want to improve your object-oriented software engineering skills, this book may be helpful. 
\ No newline at end of file +The book [Design Patterns](https://www.oreilly.com/library/view/design-patterns-elements/0201633612/) provides many examples of frequently occurring design solutions in object oriented programming that we have not covered in these notes. If you're comfortable with the ideas we've covered in C++ and want to improve your object-oriented software engineering skills, this book may be helpful. \ No newline at end of file diff --git a/07performance/sec01Complexity.md b/07performance/sec01Complexity.md index 19f68f245..7d4727821 100644 --- a/07performance/sec01Complexity.md +++ b/07performance/sec01Complexity.md @@ -40,7 +40,7 @@ When talking about complexity, we only want to capture information about how the We can also understand algorithms made of smaller parts, for example: -- If an algorithm calculates $f(n)$ which is $O(n^3)$ then $g(n)$ which is $O(n^2)$, then the complexity of the algorithm is $O(n^3)$ since caculating $g(n)$ will become subdominant. +- If an algorithm calculates $f(n)$ which is $O(n^3)$ then $g(n)$ which is $O(n^2)$, then the complexity of the algorithm is $O(n^3)$ since calculating $g(n)$ will become subdominant. - If we make $n$ calls to a function $f(n)$, and $f(n)$ is $O(g(n))$, then the complexity is $O(n g(n))$. For example, making $n$ calls to a quadratic-scaling function would lead to a cubic, i.e. $O(n^3)$, algorithm. - Nested loops and recursions are key areas of your program to look at to see if complexity is piling up! - Recursions or other kinds of branching logic can lead to recurrence relations: the time to calculate a problem can be expressed in terms of the time to calculate a smaller problem. This recurrence relation is directly linked to the complexity: @@ -134,7 +134,7 @@ Each round of merging takes $O(n)$ operations, so we need to know how many round ## The Complexity of a Problem: Matrix Multiplication - As well as analysing the performance of a specific algorithm, one can look at the inherent complexity of a problem itself: with what asymptotic behaviour is it _possible_ to solve a problem? When discussing the instrinsic complexity of a problem, the complexity of best solution we have provides an upper bound since we know we can do it _at least that well_, although we don't know if we could do better. Getting more precise knowledge of the inherent complexity of many problems is an active area of research. (And if you can solve the $P=NP$ problem [you get $1,000,000!](https://en.wikipedia.org/wiki/Millennium_Prize_Problems)) + As well as analysing the performance of a specific algorithm, one can look at the inherent complexity of a problem itself: with what asymptotic behaviour is it _possible_ to solve a problem? When discussing the intrinsic complexity of a problem, the complexity of the best solution we have provides an upper bound since we know we can do it _at least that well_, although we don't know if we could do better. Getting more precise knowledge of the inherent complexity of many problems is an active area of research. (And if you can solve the $P=NP$ problem [you get $1,000,000!](https://en.wikipedia.org/wiki/Millennium_Prize_Problems)) Let's take as an example the problem of matrix multiplication, an extremely common operation in scientific computing. What is the complexity of matrix multiplication? What algorithms are available to us and how do they get used in practice? 
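To ground the question above, an illustrative sketch (not drawn from the notes): the schoolbook algorithm below does $n$ multiply-adds for each of the $n^2$ entries of the output, so it is $O(n^3)$, which already gives an upper bound on the inherent complexity of matrix multiplication. The function name and the use of nested `std::vector`s are assumptions made for illustration.

```cpp
#include <cstddef>
#include <vector>

// Schoolbook multiplication of two n x n matrices: C = A * B.
// Three nested loops of length n => O(n^3) operations in total.
std::vector<std::vector<double>> MatMul(const std::vector<std::vector<double>> &A,
                                        const std::vector<std::vector<double>> &B)
{
    const std::size_t n = A.size();
    std::vector<std::vector<double>> C(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
    {
        for (std::size_t j = 0; j < n; ++j)
        {
            for (std::size_t k = 0; k < n; ++k)
            {
                C[i][j] += A[i][k] * B[k][j];  // n multiply-adds per output entry
            }
        }
    }
    return C;
}
```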
diff --git a/07performance/sec02Memory.md b/07performance/sec02Memory.md index e879acdd5..1a5b99c20 100644 --- a/07performance/sec02Memory.md +++ b/07performance/sec02Memory.md @@ -106,7 +106,7 @@ void Transpose(vector> &A, vector> &B) } } ``` -We'll assume that our matrices are in row major order, so rows in each matrix are contiguous in memory, and we will be focusing just on reading the data from the source matrix, and ignoring writing the operations to the output matrix, since the output matrix will be filled in order so that part of the algorithm is already cache efficient. (If they were in column major order the logic would be the same except exchanging write for read: reading the source matrix would be cache efficient, but writing the output matrix woudl be inefficient.) +We'll assume that our matrices are in row major order, so rows in each matrix are contiguous in memory, and we will be focusing just on reading the data from the source matrix, and ignoring writing the operations to the output matrix, since the output matrix will be filled in order so that part of the algorithm is already cache efficient. (If they were in column major order the logic would be the same except exchanging write for read: reading the source matrix would be cache efficient, but writing the output matrix would be inefficient.) This is an illustrative example using a single cache of very small capacity; we won't concern ourselves with the exact cache-mapping strategy since this varies, but will just fill in our cache in order. In the diagrams _red_ blocks will be blocks in system memory but not in the cache, and _blue_ blocks are data which are also stored in the cache. diff --git a/09distributed_computing/sec01DistributedMemoryModels.md b/09distributed_computing/sec01DistributedMemoryModels.md index bf5d28c51..5068940db 100644 --- a/09distributed_computing/sec01DistributedMemoryModels.md +++ b/09distributed_computing/sec01DistributedMemoryModels.md @@ -17,7 +17,7 @@ In the distributed memory model, we take the parallelisable part of our program - Keeping processes synchronised where necessary (similar to how we used `barrier` in OpenMP). - Aggregating results from multiple processes into a complete solution. -Distributed memory programming is incredibly broad and flexible, as we've only specified that there are processes with private memory and some kind of message passing. We've said nothing about what each of the processes _does_ (they can all do entirely different things; not just different tasks but even entirely different programs), what those processes run _on_ (you could have many nodes in a cluster or a series of completely different devices), or what medium they use to communicate (they can all be directly linked up or they could be communicated over channels like the internet). The distributed memory model can apply to anything from running a simple program with different initial conditions on a handful of nodes in a cluster to running a client-server application with many users on computers and mobile devices to a world-wide payment system involving many different potential individuals, institutions, devices, and softwares. It can even apply to separate processes running on the _same core_ or on cores with shared memory, as long as the memory is partitioned in such a way that the processes cannot _access_ the same memory. 
(Remember when you write programs you use _virtual memory addresses_ which are mapped to a limited subset of memory as allocated by your OS; you generally have many processes running on the same core or set of cores with access to non-overlapping subsets of RAM.) +Distributed memory programming is incredibly broad and flexible, as we've only specified that there are processes with private memory and some kind of message passing. We've said nothing about what each of the processes _does_ (they can all do entirely different things; not just different tasks but even entirely different programs), what those processes run _on_ (you could have many nodes in a cluster or a series of completely different devices), or what medium they use to communicate (they can all be directly linked up or they could communicate over channels like the internet). The distributed memory model can apply to anything from running a simple program with different initial conditions on a handful of nodes in a cluster to running a client-server application with many users on computers and mobile devices to a world-wide payment system involving many different potential individuals, institutions, devices, and software. It can even apply to separate processes running on the _same core_ or on cores with shared memory, as long as the memory is partitioned in such a way that the processes cannot _access_ the same memory. (Remember when you write programs you use _virtual memory addresses_ which are mapped to a limited subset of memory as allocated by your OS; you generally have many processes running on the same core or set of cores with access to non-overlapping subsets of RAM.) For our purposes, we will focus on code written for a multi-node HPC cluster, such as [UCL's Myriad cluster](https://www.rc.ucl.ac.uk/docs/Clusters/Myriad/), using the [MPI (Message Passing Interface) standard](https://www.mpi-forum.org/). We will, naturally, do our programming with C++, but it is worth noting that the MPI standard has been implemented for many languages including C, C#, Fortran, and Python. We will use the [Open MPI](https://www.open-mpi.org/) implementation. We won't be covering much programming in these notes, but focussing on the models that we use and their implications. @@ -72,7 +72,7 @@ This ordering of events is only partial, because it does not necessarily allow u - $p_0$ comes before $q_0$, and therefore before $q_1$, $q_2$ etc. - We cannot say whether $p_1$ comes before $q_1$ or vice versa. Likewise for $r_1$ and $q_1$ and various other pairings. -This is a key property of distributed systems: in general we can't say in what order _all_ things occurr across independent processes. Different processes all run independently and can run a different speeds, or have different amounts of work to do. (Lamport's paper goes on to describe the limitations of synchronised physical clocks, and an algorithm for establishing a total ordering across events. This total ordering is non-unique, and the partial time ordering is the only ordering enforced by the actual mechanics of the sytem under study.) +This is a key property of distributed systems: in general we can't say in what order _all_ things occur across independent processes. Different processes all run independently and can run at different speeds, or have different amounts of work to do. (Lamport's paper goes on to describe the limitations of synchronised physical clocks, and an algorithm for establishing a total ordering across events. 
This total ordering is non-unique, and the partial time ordering is the only ordering enforced by the actual mechanics of the system under study.) All this is perhaps a lengthy way of saying: **if your processes need to be synchronised for some reason, you need to send messages to do it!** @@ -91,7 +91,7 @@ Let's illustrate this game of life example using just two processes, $P$ and $Q$ ## Performance and Message Passing -Message passing naturall incurs a performance overhead. Data communication channels betweeen processes are generally speaking much slower than straight-forward reads to RAM. As such, when designing distributed systems we should bear in mind: +Message passing naturally incurs a performance overhead. Data communication channels between processes are generally speaking much slower than straight-forward reads to RAM. As such, when designing distributed systems we should bear in mind: - The frequency of message passing should be kept down where possible. - The size of messages should be kept down where possible. - In general, a smaller number of large messages is better than a large number of small messages _for a given amount of data_. @@ -125,7 +125,7 @@ In this case instead of one thread merging all sub-lists, we can parallelise ove In these flow charts we have described what needs to be done but not necessarily which processes do it. We want our Parent process to divide the list up and broadcast it, and we want our parent process to end up with the sorted list at the end, but if we want to make the most of our resources we should probably have the parent process do some of the sorting work as well in this case. If we have 4 processes $P_{0...3}$, we could arrange our processes like so: -![iamge](images/Merge_Sort_Processes.jpg) +![image](images/Merge_Sort_Processes.jpg) This kind of pattern of distributing work and aggregating results often happens in a loop, so that we have the division of a task followed by a central synchronisation, followed by a divided task again and so on. @@ -154,7 +154,7 @@ We can divide up this region to reduce the number of cells on the boundary by di - Doing it this way would result in sending 36 cells across 12 messages, so fewer cells but more messages. - We can do this using fewer messages however if we introduce some blocking i.e. we make some processes wait to receive data before sending data so that they can forward on shared data. This tends to lead to time idling though! -Which solution and message passing pattern is most efficient may depend on your system and the message passing latency and bandwidth properties! If your message passing time is dominated by bandwidth, you should try to minimise the amount of data communicated (i.e. smallest number of boundary cells); if your message passing time is dominated by latency, you shoudl try to minimise the number of messages that you send. For problems which have to communicate large amounts of data, the message passing time will likely be bandwidth dominated and so a smaller boundary is the preferable solution. +Which solution and message passing pattern is most efficient may depend on your system and the message passing latency and bandwidth properties! If your message passing time is dominated by bandwidth, you should try to minimise the amount of data communicated (i.e. smallest number of boundary cells); if your message passing time is dominated by latency, you should try to minimise the number of messages that you send. 
For problems which have to communicate large amounts of data, the message passing time will likely be bandwidth dominated and so a smaller boundary is the preferable solution. ## Putting Things Together: Performance at Every Scale diff --git a/09distributed_computing/sec02ProgrammingWithMPI.md b/09distributed_computing/sec02ProgrammingWithMPI.md index 5439f6275..a7d710381 100644 --- a/09distributed_computing/sec02ProgrammingWithMPI.md +++ b/09distributed_computing/sec02ProgrammingWithMPI.md @@ -511,4 +511,4 @@ Using this handful of calls we can create highly complex systems, although there When we have processes performing different jobs we should refactor this behind function calls so as not to have a large, confusing branching `main` where it is difficult to tell what process you are in! -MPI involves sending buffers of contiguous memory as messages, and we have used traditional C arrays to align with this interpretation. But we can send C++ datastructures if we need to. We can send an `std::vector v` by using the pointer to the first element `&v[0]` as the buffer pointer. Be wary of sending vectors of objects though; sending generally store these as vectors of pointers and those pointers will not be valid in the memory space of another process! Likewise any objects that you send which contain pointers will not be valid any more either. In general, try to keep you messsages short, and composed of a single, simple data type like `char`, `int`, or `double`. \ No newline at end of file +MPI involves sending buffers of contiguous memory as messages, and we have used traditional C arrays to align with this interpretation. But we can send C++ datastructures if we need to. We can send an `std::vector v` by using the pointer to the first element `&v[0]` as the buffer pointer. Be wary of sending vectors of objects though; these are generally stored as vectors of pointers and those pointers will not be valid in the memory space of another process! Likewise any objects that you send which contain pointers will not be valid any more either. In general, try to keep your messages short, and composed of a single, simple data type like `char`, `int`, or `double`. \ No newline at end of file diff --git a/10parallel_algorithms/WorkDepth.md b/10parallel_algorithms/WorkDepth.md index 15e01f2b8..716e04d31 100644 --- a/10parallel_algorithms/WorkDepth.md +++ b/10parallel_algorithms/WorkDepth.md @@ -53,7 +53,7 @@ We can see that every element of the output is independent of every other, since - $D \in O(1)$ - The depth does not scale with the size of the input because all elements can be processed in parallel. -Constant depth is a feature of so called "embarassingly parallel" problems, where all computations are independent and you can just throw more computing power at them to speed them up. (With come caveats, but this is an idealised algorithm analysis!) +Constant depth is a feature of so called "embarrassingly parallel" problems, where all computations are independent and you can just throw more computing power at them to speed them up. (With some caveats, but this is an idealised algorithm analysis!) ## Reduce @@ -89,7 +89,7 @@ The loop dependency is a consequence of the way that the code was written, not t - $D \in O(\log n)$ - The depth is $\log n$ because the number of operators to be applied halves at each level of the diagram. -This kind of tree diagram is a common data dependency pattern, and places some limitations on the speed-up of our algorithm compared to our embarassingly parallel problem. 
Even with infinite processors, we still can't do better than $O(\log n)$ serial computations! +This kind of tree diagram is a common data dependency pattern, and places some limitations on the speed-up of our algorithm compared to our embarrassingly parallel problem. Even with infinite processors, we still can't do better than $O(\log n)$ serial computations! We can also see as we move down the tree that we have fewer operations to do in parallel at each stage. Depending on the size of this tree and the number of processors that you have, this means processing power may end up sitting idle which could be reallocated to other tasks elsewhere while this computation is still going on. (This isn't really going to be the case for something as rapid as an addition, but for workflows with similar tree like structures where computations take a long time, you can end up with resources sitting idle for significant amounts of time as you move down the tree.) @@ -147,7 +147,7 @@ As a result this algorithm has: So we can substantially improve the depth (and therefore time) over a serial algorithm with sufficient processing power, but we do approximately _double the total work_ of the serial approach. - As a result of the extra work done, having 2 processors tackle this job (using this approach) is unlikely to be very effective: the time you save doing things in parallel would be roughly cancelled out by all the duplicate work you're doing. -- With four processors we might expect to get a benefit of roughly a factor of 2 on a large list (so most of our time is spent doing paralell computations), because we'll be doing twice as much work with four times the processing power. +- With four processors we might expect to get a benefit of roughly a factor of 2 on a large list (so most of our time is spent doing parallel computations), because we'll be doing twice as much work with four times the processing power. # Approaches to Parallel Algorithms diff --git a/index.md b/index.md index c43c61316..9071713a2 100644 --- a/index.md +++ b/index.md @@ -19,7 +19,7 @@ We have found in previous years that C++ is no longer commonly taught at undergr * Arrays and structures * Basic object oriented design (classes, inheritance, polymorphism) -This could be obtained through online resources such as the the C++ Essential Training course by Bill Weinman on [LinkedIn Learning](https://www.ucl.ac.uk/isd/linkedin-learning) (accessable using your UCL single sign-on) or via a variety of C++ courses in college, such as [MPHYGB24](https://moodle.ucl.ac.uk). +This could be obtained through online resources such as the C++ Essential Training course by Bill Weinman on [LinkedIn Learning](https://www.ucl.ac.uk/isd/linkedin-learning) (accessible using your UCL single sign-on) or via a variety of C++ courses in college, such as [MPHYGB24](https://moodle.ucl.ac.uk). * Eligibility: This course is designed for UCL post-graduate students but with agreement of their course tutor a limited number of undergraduate students can also take it. @@ -27,4 +27,4 @@ This could be obtained through online resources such as the the C++ Essential Tr Members of doctoral training schools, or Masters courses who offer this module as part of their programme should register through their course organisers. -This course may not be audited without the prior permission of the course organiser Dr. Jamie Quinn as due to the practical nature of the lectures there is a cap on the total number of students who can enrol. 
+This course may not be audited without the prior permission of the course organiser Dr. Jamie Quinn as due to the practical nature of the lectures there is a cap on the total number of students who can enrol.