Table of Contents
- Introduction
- Reductions
- Clique Problem
- Vertex Cover Problem

Introduction

Most of the algorithms discussed in a typical algorithms course run in polynomial time. This focus is reasonable, since algorithms that run in worse than polynomial time have little practical use. To simplify this notion: a problem for which a polynomial-time algorithm exists is “easy”, and a problem for which no polynomial-time algorithm exists is “hard”. Knowing how to determine whether a problem is easy or hard is extremely useful. If one can identify a hard problem, then an approximate solution may be the best that can be achieved.
Table of Contents
- Problem Representation
- A Naive Solution
- The Floyd-Warshall Algorithm

TychoLink is a telecommunications company looking to optimize its network for the fastest and most efficient data transfer possible. The network consists of multiple routers connected by various types of links that differ in latency and bandwidth. The company wants to ensure that data packets can travel from any router to any other router along the path that offers the best balance between low latency and high bandwidth. There are three objectives in total:
Table of Contents
- Objective
- Questions
- Maximum Flow
- A polynomial time solution

A flow network is a directed graph in which each edge has a capacity: flow is produced at a source node, travels along edges toward a sink node, and no edge may carry more flow than its capacity allows. Flow in this context could take on many meanings, such as the amount of water that can flow through a pipe, the amount of data that can be sent through a network, or the amount of traffic that can be sent through a road network. The goal is to maximize the flow from the source to the sink.
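As a preview of the polynomial time solution, here is a sketch of the Edmonds-Karp variant of the Ford-Fulkerson method, which repeatedly augments flow along a shortest path found by BFS. The dict-of-dicts capacity representation is an assumption of this example, not the notes' own convention.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along a shortest path found by BFS.

    `capacity` is a dict of dicts: capacity[u][v] is the capacity of edge (u, v).
    """
    # Build residual capacities, adding reverse edges initialized to 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS from source to sink over edges with remaining capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path remains; flow is maximum

        # Find the bottleneck capacity along the augmenting path.
        bottleneck = float("inf")
        v = sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]

        # Push the bottleneck amount along the path, updating residuals.
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

Each augmentation strictly increases the total flow, and BFS guarantees the algorithm terminates in \(O(VE^2)\) time.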
Table of Contents
- Topological Sort
- Strongly Connected Components
- Application: Recommender Graphs

Topological Sort

A topological sort of a directed acyclic graph \(G = (V, E)\) is a linear ordering of all its vertices such that for every directed edge \((u, v) \in E\), vertex \(u\) comes before vertex \(v\) in the ordering.
The process itself can be described simply:
1. Call \(\text{DFS}(G)\) to compute the finishing times for each vertex \(v\).
2. As each vertex is finished, insert it onto the front of a linked list.
3. Return the linked list of vertices.

The entire call takes \(\Theta(V + E)\) since \(\text{DFS}(G)\) takes \(\Theta(V + E)\) time, and inserting each vertex onto the front of the list can be done in constant time.
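The steps above can be sketched in Python; the adjacency-list (dict of successor lists) representation is an assumption of this example. A Python list stands in for the linked list, so inserting at the front is \(O(n)\) here rather than \(O(1)\), which is fine for a sketch.

```python
def topological_sort(graph):
    """Order the vertices of a DAG so every edge (u, v) has u before v.

    `graph` maps each vertex to a list of its successors.
    """
    order = []           # plays the role of the linked list in the text
    finished = set()
    in_progress = set()  # "gray" vertices, used to detect cycles

    def dfs(u):
        in_progress.add(u)
        for v in graph.get(u, []):
            if v in in_progress:
                raise ValueError("graph has a cycle; no topological order exists")
            if v not in finished:
                dfs(v)
        in_progress.discard(u)
        finished.add(u)
        order.insert(0, u)  # "insert onto the front" as each vertex finishes

    for u in graph:
        if u not in finished:
            dfs(u)
    return order
```

For example, `topological_sort({'shirt': ['tie'], 'tie': ['jacket'], 'jacket': []})` returns `['shirt', 'tie', 'jacket']`.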
Table of Contents
- Example 4.13 from CLRS

Visualizing the work done by an algorithm as a recursion tree is a great way to build intuition about its runtime. Although a recursion tree can be used to prove a bound on a recurrence directly, it is more often a good jumping-off point for the substitution method.
Example 4.13 from CLRS

Consider the recurrence \(T(n) = 3T(n/4) + \Theta(n^2)\). We start by writing the \(\Theta(n^2)\) term as \(cn^2\), where the constant \(c > 0\) serves as an upper-bound constant. It reflects the amount of work done at each level of the recursion tree. The tree is shown below.
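Summing the tree level by level: at depth \(i\) there are \(3^i\) nodes, each costing \(c(n/4^i)^2\), so each level contributes \(\left(\tfrac{3}{16}\right)^i cn^2\). The subproblem size reaches 1 at depth \(\log_4 n\), giving \(3^{\log_4 n} = n^{\log_4 3}\) leaves. Bounding the finite sum by the infinite geometric series:

\[
T(n) = \sum_{i=0}^{\log_4 n - 1} \left(\frac{3}{16}\right)^i cn^2 + \Theta\!\left(n^{\log_4 3}\right)
< \sum_{i=0}^{\infty} \left(\frac{3}{16}\right)^i cn^2 + \Theta\!\left(n^{\log_4 3}\right)
= \frac{16}{13}\,cn^2 + \Theta\!\left(n^{\log_4 3}\right) = O(n^2),
\]

where the leaf term is dominated because \(\log_4 3 < 2\).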
Table of Contents
- Activity Selection
- Properties of Greedy Solutions
- Huffman Codes

Greedy algorithms are a class of algorithms that build a solution by making the locally optimal choice at each step. In cases where the local optimum leads to the global optimum, greedy algorithms are ideal. Even in cases where the global optimum is more elusive, a locally optimal solution may be sufficient.
Activity Selection

Given a set of activities that need to be scheduled on a common resource, the activity selection problem is to find the maximum number of activities that can be scheduled without overlapping.
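The classic greedy strategy sorts activities by finish time and repeatedly takes the earliest-finishing activity compatible with those already chosen. A minimal sketch, assuming activities are given as `(start, finish)` pairs:

```python
def select_activities(activities):
    """Greedy activity selection: take the compatible activity that
    finishes earliest, repeatedly.

    `activities` is a list of (start, finish) pairs.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

On the example set from CLRS, `[(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]`, this selects the four activities `(1, 4), (5, 7), (8, 11), (12, 16)`.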
Table of Contents
- Rod Cutting
- Matrix-chain Multiplication
- Applying Dynamic Programming
- Longest Common Subsequence
- Exercises

Dynamic programming is a technique for solving problems by breaking them down into simpler subproblems, very much like divide-and-conquer algorithms. One primary difference is that the subproblems are designed in such a way that their solutions do not need to be recomputed.
Many common problems have efficient dynamic programming solutions, and we will investigate several of them in these notes. In general, a dynamic programming solution can be applied if the problem has two key features: optimal substructure and overlapping subproblems.
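As a preview of the rod cutting problem named in the table of contents, here is a minimal top-down sketch with memoization, so no subproblem is ever recomputed. The convention that `prices[i - 1]` is the price of a piece of length \(i\) is an assumption of this example.

```python
from functools import lru_cache

def rod_cutting(prices, n):
    """Maximum revenue obtainable by cutting a rod of length n,
    where prices[i - 1] is the price of a piece of length i.
    """
    @lru_cache(maxsize=None)  # memoize: each length is solved once
    def best(length):
        if length == 0:
            return 0
        # Try every length for the first piece; recurse on the remainder.
        return max(prices[i - 1] + best(length - i)
                   for i in range(1, length + 1))

    return best(n)
```

With the CLRS price table `[1, 5, 8, 9, 10, 17, 17, 20, 24, 30]`, a rod of length 4 yields revenue 10 (two pieces of length 2).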
Table of Contents
- Order Statistics
- Minimum and Maximum
- Selection in expected linear time
- Problems and Exercises

We briefly touched on a median-finding algorithm when discussing Divide and Conquer Algorithms. This section will be a bit of a review, but the goal is to treat the topic of order statistics more generally.
Order Statistics

The \(i^{\text{th}}\) order statistic is the \(i^{\text{th}}\) smallest element in a set of \(n\) elements. The minimum and maximum are the \(1^{\text{st}}\) and \(n^{\text{th}}\) order statistics, respectively. When \(n\) is odd, the median is the \(\frac{n+1}{2}^{\text{th}}\) order statistic. When \(n\) is even, there are two medians: the lower median, the \(\frac{n}{2}^{\text{th}}\) order statistic, and the upper median, the \(\left(\frac{n}{2}+1\right)^{\text{th}}\) order statistic.
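Selection in expected linear time, covered later in this section, can be sketched with a randomized quickselect. This out-of-place, list-comprehension form is an illustrative simplification of the in-place partitioning version in CLRS:

```python
import random

def select(items, i):
    """Return the i-th order statistic (1-indexed) of `items`,
    in expected linear time via a random pivot (quickselect)."""
    assert 1 <= i <= len(items)
    pivot = random.choice(items)
    # Partition around the pivot; duplicates go in their own bucket.
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    if i <= len(smaller):
        return select(smaller, i)            # answer is left of the pivot
    if i <= len(smaller) + len(equal):
        return pivot                         # the pivot is the answer
    return select(larger, i - len(smaller) - len(equal))
```

For example, `select([7, 1, 9, 3, 5], 3)` returns the median, 5. Only one side of the partition is recursed into, which is why the expected running time is linear rather than \(O(n \log n)\).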
Table of Contents
- Introduction
- Establishing a Lower Bound on Comparison Sorts
- Counting Sort
- Radix Sort
- Bucket Sort
- Questions and Exercises

These are my personal notes for Chapter 8 of Introduction to Algorithms (Cormen et al. 2022). Readers should reference the book for more details when necessary.
Introduction

All sorting algorithms discussed up to this point are comparison based. You may have thought, as I did, that sorting cannot be done without a comparison. If you have no way to evaluate the relative ordering of two different objects, how can you possibly arrange them in any order?
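Counting sort, covered below, shows one answer: when the keys are integers in a known range \([0, k]\), their values can index an array directly, so no comparisons are needed. A minimal sketch:

```python
def counting_sort(items, k):
    """Sort nonnegative integers in the range [0, k] without any
    comparisons, by counting occurrences of each value."""
    counts = [0] * (k + 1)
    for x in items:
        counts[x] += 1
    # Prefix sums: counts[v] becomes the number of elements <= v.
    for v in range(1, k + 1):
        counts[v] += counts[v - 1]
    output = [None] * len(items)
    # Place elements back to front so equal keys keep their relative
    # order (stability, which radix sort relies on).
    for x in reversed(items):
        counts[x] -= 1
        output[counts[x]] = x
    return output
```

For example, `counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5)` returns `[0, 0, 2, 2, 3, 3, 3, 5]` in \(\Theta(n + k)\) time, beating the \(\Omega(n \log n)\) lower bound that applies only to comparison sorts.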
Table of Contents
- Example from CLRS
- Making the Wrong Guess

The substitution method is a technique for solving recurrences. It works in two steps:
1. Guess the form of the solution.
2. Use mathematical induction to verify the guess.

This can work very well, especially if we already have some intuition about the problem. Let’s start with a simple example: the Tower of Hanoi. In this classic puzzle, we have three pegs and a number of disks of different sizes which can slide onto any peg. The puzzle starts with the disks in a neat stack in ascending order of size on one peg, with the smallest disk on top. The objective is to move the entire stack to another peg, obeying the following rules:

1. Only one disk may be moved at a time.
2. Each move takes the top disk from one stack and places it on top of another stack or an empty peg.
3. No disk may be placed on top of a smaller disk.
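Under these rules, moving \(n\) disks means moving the top \(n - 1\) disks aside, moving the largest disk, and moving the \(n - 1\) disks back on top, giving the recurrence \(T(n) = 2T(n-1) + 1\) with \(T(1) = 1\). Guessing \(T(n) = 2^n - 1\) and verifying by induction:

\[
T(n) = 2T(n-1) + 1 = 2\left(2^{n-1} - 1\right) + 1 = 2^n - 2 + 1 = 2^n - 1,
\]

and the base case holds since \(T(1) = 2^1 - 1 = 1\), confirming the guess.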
Table of Contents
- Basic Quicksort
- Performance
- Randomized Quicksort
- Paranoid Quicksort

Quicksort is a popular sorting algorithm, implemented in many language libraries, that has a worst-case running time of \(\Theta(n^2)\). Why would anyone choose this as the default sorting algorithm when an algorithm like mergesort has better worst-case performance? As you will see, the devil is in the details. Quicksort is often faster in practice, and it also has a small memory footprint and is easy to implement.
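A minimal sketch of randomized quicksort, which is covered later in this section. This out-of-place, list-comprehension form is chosen for clarity; library implementations use the in-place partition for its small memory footprint.

```python
import random

def quicksort(items):
    """Randomized quicksort: expected O(n log n) time, worst case O(n^2).

    Returns a new sorted list. Choosing the pivot uniformly at random
    makes the worst case unlikely on every input, including sorted ones.
    """
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    # Partition around the pivot; duplicates go in their own bucket so
    # they are never recursed into again.
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```

For example, `quicksort([3, 1, 4, 1, 5, 9, 2, 6])` returns `[1, 1, 2, 3, 4, 5, 6, 9]`.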