Algorithms are the sets of steps necessary to complete a computation - they are at the heart of what our devices do. And this isn’t a new concept. Since the development of math itself, algorithms have been needed to help us complete tasks more efficiently, but today we’re going to look at a couple of modern computing problems, like sorting and graph search, and show how we’ve made them more efficient so you can more easily find cheap airfare or map directions to Winterfell... or like a restaurant or something.
Basically, an algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to produce the desired output.
The time complexity of an algorithm is an estimate of how long it will take to run for a given input. In other words, it describes how the run time of an algorithm grows as the input size grows. By calculating the time complexity, we can find out whether the algorithm is fast enough without implementing it. It is normally written in Big-O notation, but Ω (Omega) and Θ (Theta) notation are also used. An algorithm's time complexity can range from constant to factorial.
The space complexity of an algorithm refers to the total amount of memory the algorithm uses: the space taken by the input values plus any extra space used while it executes. It is normally written in Big-O notation, but Ω (Omega) and Θ (Theta) notation are also used. An algorithm's space complexity can range from constant to factorial, but it is normally close to the input size.
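As a rough illustration of the time/space trade-off (the duplicate-checking functions here are made up for this example, not from the text), consider two ways to check a list for duplicates:

```python
def has_duplicates_quadratic(items):
    """O(n^2) time, O(1) extra space: compare every pair."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_linear(items):
    """O(n) time, O(n) extra space: trade memory for speed."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both return the same answer; the second is much faster on large inputs because it remembers what it has seen instead of re-scanning the list.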
When creating algorithms, there are a few techniques that can be used to reduce the time complexity of an algorithm, allowing larger inputs to be processed in less time.
Sorting is the process of arranging a list of items in a particular order. For example, if you had a list of names, you might want to sort them alphabetically. Or if you had a list of numbers, you might want to put them in order from smallest to largest. Sorting is a common task, and it’s one that we can do in many different ways.
- Bubble Sort
- Insertion Sort
- Selection Sort
- Merge Sort
- Quick Sort
- Heap Sort
- Swap Sort
- Radix Sort
- Shell Sort
- Counting Sort
- Tim Sort
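Merge Sort, from the list above, is a good illustration of how sorting can be sped up; a minimal sketch in Python:

```python
def merge_sort(values):
    """Recursively split the list, sort the halves, and merge them.

    Time: O(n log n); space: O(n) for the merged lists.
    """
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Because each level of recursion does O(n) merging work across O(log n) levels, this beats the O(n²) of simpler sorts like Bubble Sort on large inputs.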
Searching is the process of finding a certain target element inside a container. Searching Algorithms are designed to check for an element or retrieve an element from any data structure where it is stored.
- Sequential (Linear) Search
- Binary Search
- Hashing
- Jump Search
- Interpolation Search
- Exponential Search
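Binary Search, from the list above, shows why search algorithm choice matters; a minimal sketch, which assumes the input list is already sorted:

```python
def binary_search(sorted_values, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each comparison halves the remaining range: O(log n) time.
    """
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            low = mid + 1       # target can only be in the upper half
        else:
            high = mid - 1      # target can only be in the lower half
    return -1
```

A linear search of a million items may take a million comparisons; a binary search of the same sorted list takes at most about twenty.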
A graph is a data structure that consists of a finite (and possibly changeable) set of vertices, which are also called nodes or points, e.g. V = {A, B, C, ...}. To represent relations between vertices we have a set of unordered pairs called edges, e.g. E = {(A, B), (A, C), ...}. Edges are also known as arcs, lines, or arrows. More formally, a graph is defined as a set of vertices (V) and a set of edges (E), and is denoted G(V, E). Graphs are used to model pairwise relations between objects and are among the most useful data structures for real-world applications. For example, an airline route network is a graph in which the cities are the vertices and the flight routes are the edges. Graphs are also used to represent networks: the Internet can be modeled as a graph in which the computers are the vertices and the links between computers are the edges. Graphs also underlie social networks like LinkedIn and Facebook, and are used in circuit design, aeronautical scheduling, and many other areas.
- Edges - An edge is a connection between two nodes.
- Weight - A weight is a value associated with an edge.
- Vertices - A vertex is a node of the graph. Each vertex has a degree, the number of edges connected to it. A vertex’s in-degree is the number of edges that point to it, and its out-degree is the number of edges that point from it to other vertices. There are several special kinds of vertices, such as:
- Isolated vertex - Has no edges pointing to it and no outgoing edges. Its in-degree and out-degree are both zero.
- Source vertex - Has no edges pointing to it; its in-degree is zero.
- Sink vertex - Has no outgoing edges; its out-degree is zero.
- Articulation Points - A vertex in an undirected graph that would create a disconnected graph if removed.
- Undirected Graph - A graph where edges of the graph are two-way paths or relations.
- Directed Graph - A graph where edges of the graph go only one way, usually marked with arrows.
- Weighted Graph - A graph where edges of the graph have costs or weights associated with them.
- Tree Graphs - A graph with n vertices and n-1 edges in which there is exactly one path between any two vertices.
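One common way to store the graph types above is an adjacency list, where each vertex maps to its outgoing edges. A minimal sketch of a weighted, directed graph (the vertex names are made up for illustration):

```python
# Adjacency-list representation of a small weighted, directed graph.
# Each vertex maps to a list of (neighbor, weight) pairs.
graph = {
    "A": [("B", 3), ("C", 1)],
    "B": [("C", 7)],
    "C": [],            # sink vertex: its out-degree is zero
}


def out_degree(g, vertex):
    """Number of edges leaving a vertex."""
    return len(g[vertex])


def in_degree(g, vertex):
    """Number of edges pointing at a vertex."""
    return sum(1 for edges in g.values()
               for neighbor, _ in edges if neighbor == vertex)
```

An adjacency matrix is the usual alternative; adjacency lists use less memory when the graph is sparse, which most real-world graphs are.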
- Graph Traversal
- Topological Sorts
- Cycle Detection
- Undirected Graph
- Directed Graph
- Shortest Path
- Spanning Tree Algorithm
- Strongly Connected Components
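Graph traversal, the first topic above, is the building block for most of the others; a minimal breadth-first search sketch, assuming the graph is stored as a dict mapping each vertex to a list of neighbors:

```python
from collections import deque


def bfs(graph, start):
    """Breadth-first traversal; returns vertices in visit order.

    Visits all vertices at distance 1 from the start, then distance 2,
    and so on, using a queue of discovered-but-unexplored vertices.
    """
    visited = [start]
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)
    return visited


# Example: a small directed graph (vertex names are illustrative).
example = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Swapping the queue for a stack (or recursion) turns this into depth-first search, which underlies cycle detection and topological sorting.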
Greedy algorithms are a simple, intuitive class of algorithms that can be used to find the optimal solution to some optimization problems. An optimization problem can be solved using Greedy if the problem has the following property: at every step, we can make a choice that looks best at the moment, and we get the optimal solution of the complete problem. They are called greedy because at each step they make the choice that seems best at that moment. This means that greedy algorithms do not guarantee to return the globally optimal solution, but instead make locally optimal choices in the hope of finding a global optimum.
- Activity Selection
- Huffman Coding
- Prim's Algorithm
- Kruskal's Algorithm
- Dijkstra's Algorithm
- Job Sequencing
- Fractional Knapsack
- Ford-Fulkerson Algorithm
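Activity Selection, the first problem above, shows the greedy idea clearly: always take the activity that finishes earliest, since it leaves the most room for the rest. A minimal sketch:

```python
def select_activities(activities):
    """Greedy activity selection.

    activities: list of (start, finish) pairs. Repeatedly picks the
    compatible activity that finishes earliest - the locally best
    choice - which here happens to yield a globally optimal schedule.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # does not overlap the last pick
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

Activity selection is one of the problems where the greedy choice provably leads to a global optimum; for many other problems a greedy strategy gives only an approximation.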
String-based algorithms operate on strings. They are used for operations like searching for a specific substring, pattern matching, and other text processing.
- Knuth-Morris-Pratt(KMP)
- Rabin Karp
- Suffix Trie
- Boyer-Moore Algorithm
- String Hashing
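Rabin-Karp, from the list above, combines substring search with string hashing; a minimal sketch using a rolling hash:

```python
def rabin_karp(text, pattern, base=256, mod=101):
    """Return all start indices of pattern in text.

    A rolling hash lets each window shift update in O(1); candidate
    matches are confirmed character-by-character to rule out hash
    collisions.
    """
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)          # weight of the leading character
    p_hash = w_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        w_hash = (w_hash * base + ord(text[i])) % mod
    matches = []
    for i in range(n - m + 1):
        if p_hash == w_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:                     # roll the window one character
            w_hash = ((w_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return matches
```

The average case is O(n + m); KMP and Boyer-Moore achieve guaranteed linear or sublinear behavior by precomputing pattern structure instead of hashing.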
Dynamic programming is both a mathematical optimization method and a computer programming method. It was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure. Dynamic programming is one way to solve problems with these properties. Because the sub-problems overlap, dynamic programming stores (memoizes) each sub-problem's solution so it is computed only once.
- Fibonacci Sequence
- Nth Fibonacci
- Nth Catalan Number/Sequence
- Longest Common Subsequence
- Longest Increasing Subsequence
- Longest Common Substring
- Longest Palindromic Substring
- Knapsack Problem
- Edit Distance
- Coin Change
- Matrix Chain Multiplication
- Balanced Tree Count
- Counting Hops
- Floyd Warshall Algorithm
- Gold Mine Problem
- Least Common Multiple (LCM)
- Painting Fence Algorithm
- Staircase
- Subset Sum Problem
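Coin Change, from the list above, is a compact example of both properties: the optimal answer for an amount is built from optimal answers for smaller amounts (optimal substructure), and those smaller amounts recur (overlapping sub-problems). A minimal bottom-up sketch:

```python
def min_coins(coins, amount):
    """Fewest coins needed to make `amount`, or -1 if impossible.

    dp[a] holds the optimal answer for the sub-problem "make amount a";
    each entry is computed once and reused, instead of being recomputed
    by a naive recursion.
    """
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else -1
```

The naive recursion re-solves the same amounts exponentially many times; the table brings the cost down to O(amount × number of coins).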
Divide and conquer is an algorithmic paradigm in which the problem is recursively solved using the Divide, Conquer, and Combine strategy. A problem is broken down into two or more similar sub-problems until they can be easily solved. Those solutions are then combined to solve the larger sub-problems until the original problem is solved. Divide and Conquer algorithms differ from Dynamic Programming algorithms in that their sub-problems do not overlap, so there is no benefit in storing sub-problem results the way Dynamic Programming does.
- Convex Hull Problem
- The Inversion Problem
- Maximum and minimum of an array
- Strassen's Matrix Multiplication
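Finding the maximum and minimum of an array, from the list above, shows the Divide, Conquer, and Combine steps directly; a minimal sketch:

```python
def min_max(values, lo=0, hi=None):
    """Return (min, max) of a non-empty list by divide and conquer.

    Divide: split the range in half. Conquer: solve each half
    recursively. Combine: compare the two halves' results. The two
    sub-problems never overlap.
    """
    if hi is None:
        hi = len(values) - 1
    if lo == hi:                  # one element: it is both min and max
        return values[lo], values[lo]
    mid = (lo + hi) // 2
    left_min, left_max = min_max(values, lo, mid)
    right_min, right_max = min_max(values, mid + 1, hi)
    return min(left_min, right_min), max(left_max, right_max)
```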
Branch and bound is an algorithmic technique for solving optimization problems. The problem is broken down by exploring potential solutions, and those which cannot be the optimal solution are eliminated. When the entire tree of potential solutions has been explored, the optimal solution has been found. The branching over solutions and the bounding away of non-optimal ones is where the name branch and bound comes from.
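A common textbook instance of branch and bound is the 0/1 knapsack problem; a minimal sketch (this particular bounding function, a fractional relaxation, is one standard choice, not the only one):

```python
def knapsack_bb(weights, values, capacity):
    """0/1 knapsack by branch and bound.

    Each node branches on taking or skipping the next item; a branch
    is bounded (pruned) when an optimistic estimate of its best
    possible value cannot beat the best complete solution so far.
    """
    # Sort items by value density so the fractional bound is tight.
    items = sorted(zip(weights, values), key=lambda it: it[1] / it[0],
                   reverse=True)

    def bound(i, weight, value):
        """Optimistic estimate: fill remaining capacity fractionally."""
        for w, v in items[i:]:
            if weight + w <= capacity:
                weight += w
                value += v
            else:
                return value + v * (capacity - weight) / w
        return value

    best = 0

    def branch(i, weight, value):
        nonlocal best
        if weight <= capacity:
            best = max(best, value)
        if i == len(items) or weight > capacity:
            return
        if bound(i, weight, value) <= best:
            return                        # prune: cannot beat current best
        w, v = items[i]
        branch(i + 1, weight + w, value + v)   # take item i
        branch(i + 1, weight, value)           # skip item i

    branch(0, 0, 0)
    return best
```

In the worst case this still explores the full tree, but good bounds typically prune the vast majority of branches.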
Backtracking is defined as searching every possible combination in order to solve a computational problem. This technique solves problems by building up solution candidates, and when the algorithm determines that a candidate cannot possibly be part of a valid solution, it abandons the candidate, or “backtracks”, to try another candidate.
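The classic backtracking example is N-Queens; a minimal sketch that counts the ways to place n non-attacking queens:

```python
def n_queens(n):
    """Count placements of n non-attacking queens on an n x n board.

    Queens are placed one row at a time; as soon as a partial placement
    conflicts, the algorithm backtracks instead of extending it.
    """
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:                  # every row filled: a valid solution
            count += 1
            return
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue              # attacked square: abandon this candidate
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)
            # Undo the move (backtrack) and try the next column.
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return count
```

Pruning conflicting partial placements is what separates backtracking from brute force: most of the n^n candidate boards are never generated.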