*Content is adapted from lecture notes of Sedgewick’s ‘Algorithms (4th Ed.)’*

### Introduction

As a computer executes each instruction in a program, every action inside the computer takes up resources in space and time: space in the computer’s memory, however it is implemented, and time in the many successive changes of state. Many algorithms can solve a given problem, but each one has a characteristic use of time and space. This is the key difference among algorithms.

The algorithm and the computer don’t particularly care how long a computation takes, but the world is made of impatient computer users who have practical constraints on their time. If you’ve agonized over a computer loading screen, you’ll understand the enormous interest in computing faster. Intuitively, a more powerful computer will run an algorithm faster. Given the geometric growth in computer power, it is not unreasonable to think that all practical algorithms will eventually fall within a typical laptop user’s time and space budget.

What’s interesting is that the savior of the impatient is not a better computer. The key to faster computing is design of the algorithm.

In this post, we will explore the process of designing an algorithm and refining its performance. Starting from a naive first approach, we will gradually address problems in our algorithm in successive attempts. We will explore the problem of **Dynamic Connectivity**, the model for Union-Find and see how a clever algorithm can drastically shorten the time to a computational solution.

### Algorithm design

Mathematical problem solving always starts with a model. A good model suppresses irrelevant information and lets us understand the problem as simply as possible, and no simpler than that.

Next, we will pick an algorithm that solves the problem. For a first approach we need only something that can reach a solution, even if only for a limited set of inputs, and we can ignore practical time and space constraints in this first draft.

Then we will analyze this approach. We will see how the algorithm consumes time and space and see how that consumption changes for larger and more complicated input. Many algorithms perform well for small inputs but become intractable for large inputs. For some problems, large inputs are not necessary to consider. Whether the algorithm is “good enough” at this point depends on the requirements of the designer.

If it is not good enough, we need to see where we are constrained. Is the algorithm fast enough? Will it fit in the memory of my computer? Depending on the issue we will find a way to improve the algorithm by altering its performance with respect to time and space.

We can iterate on this process until all our needs are met. This is a scientific approach to algorithm design. Problems in our early attempts will inform our later attempts.

### Union Find

A union-find data structure maintains a set of N objects and supports two operations.

- **Union** command — Connect two objects
- **Find** or Connected query — Is there a path connecting these two objects?

Can we write an algorithm that can tell us whether there is a path between two given objects?

This problem model might seem toyish but it is a suitable abstraction for many important systems where determining connectivity is critical.

- Pixels in a digital photo
- Friends in a social network
- Transistors in a computer chip
- Elements in a mathematical set
- Variable names in a program
- Computers in a network
- Metallic sites in a composite system

Determining connectivity is important but also consider that models for these systems might require billions and billions of elements. A poorly designed algorithm might require a modern supercomputer to run for as long as the age of the universe. Since we are not endowed with so much time, let’s start designing.

### Foundations

First, let’s build some models of the problem that suppress irrelevant details. Modeling the connections, we should ask what abstract properties they will have to satisfy.

Let’s first posit that “is connected to” is an equivalence relation, meaning these connections are:

- Reflexive: *p* is connected to *p*
- Symmetric: if *p* is connected to *q*, then *q* is connected to *p*
- Transitive: if *p* is connected to *q* and *q* is connected to *r*, then *p* is connected to *r*

These are intuitive, but it’s worthwhile to state them explicitly.

As we connect elements, groups of elements within the larger set will emerge and grow. Every element within this group is connected to every other by the transitive property laid out above. We will call these groups “connected components”, defined as the maximal set of objects that are mutually connected.

Starting from some set of elements, we will run a series of Union and Find commands. Given the above set, running `union(2,5)` will connect two components.

Our two operations are so defined:

- Find query = Check if two objects are in the same component
- Union command = Replace components containing two objects with their union

To satisfy all this, we’ll need to create a Union-find data type with an API that supports the problem while considering:

- Number of objects *N* can be huge
- Number of operations *M* can be huge
- Find queries and Union commands can be intermixed

Method | Description |
---|---|
`UF(int N)` | Initialize UF data structure with N objects |
`void union(int p, int q)` | Add connection between p and q |
`boolean connected(int p, int q)` | Are p and q in the same component? |
`int find(int p)` | Component identifier for p |
`int count()` | Number of components |

To check our API design, we’ll need to build a client that will use the data type that we develop to ensure we’re on the right track.

- Read in integers from standard input
- If they are not yet connected, connect them and print out pair
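
Such a client might be sketched in Java as below. The class name is illustrative, and since the UF data type is only developed later in the post, a bare-bones quick-find stand-in is nested here so the sketch is self-contained; any implementation of the API above would slot in.

```
import java.util.Scanner;

public class ConnectivityClient {

    // Minimal stand-in for the UF data type; any implementation
    // of the API above could be substituted.
    static class UF {
        private int[] id;
        UF(int N) {
            id = new int[N];
            for (int i = 0; i < N; i++) id[i] = i;
        }
        boolean connected(int p, int q) { return id[p] == id[q]; }
        void union(int p, int q) {
            int pid = id[p], qid = id[q];
            for (int i = 0; i < id.length; i++)
                if (id[i] == pid) id[i] = qid;
        }
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int N = in.nextInt();          // first number: how many objects
        UF uf = new UF(N);
        while (in.hasNextInt()) {      // then pairs of integers
            int p = in.nextInt();
            int q = in.nextInt();
            if (!uf.connected(p, q)) { // print only pairs not yet connected
                uf.union(p, q);
                System.out.println(p + " " + q);
            }
        }
    }
}
```

For the input `10 4 3 3 8 6 5 9 4 8 9`, this client would print each pair except `8 9`, which is already connected by the time it is read.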

### First attempt: Quick-find

Here’s our first attempt, an eager approach.

The data structure we’ll use to support the algorithm is an integer array, indexed by object. The interpretation here is that two objects *p* and *q* are connected if and only if their entries in the array are the same.

Here’s an example of the structure where each entry is an id.

```
idx 0 1 2 3 4 5 6 7 8 9
id[] {0, 1, 1, 8, 8, 0, 0, 1, 8, 8}
```

This array represents a graph with three connected components: {0, 5, 6}, {1, 2, 7}, and {3, 4, 8, 9}.

In this case, the Find operation is simple: for two elements *p* and *q*, simply check whether they have the same id.

Union is more difficult. To merge components, we’ll need to change every entry whose id equals `id[p]` to `id[q]`; that is, all elements sharing the first argument’s id take on the second argument’s id.

##### Java implementation

The code for this might look something like this:

```
public class QuickFindUF {
    private int[] id;

    public QuickFindUF(int N) {
        // set id of each object to itself
        // (N array accesses)
        id = new int[N];
        for (int i = 0; i < N; i++)
            id[i] = i;
    }

    public boolean connected(int p, int q) {
        // check whether p and q are in the same component
        // (2 array accesses)
        return id[p] == id[q];
    }

    public void union(int p, int q) {
        // change all entries with id[p] to id[q]
        // (at most 2N + 2 array accesses)
        int pid = id[p];
        int qid = id[q];
        for (int i = 0; i < id.length; i++)
            if (id[i] == pid) id[i] = qid;
    }
}
```

But we quickly see that Quick-find is too slow. For our cost model, we understand the “cost” of the algorithm as the number of array accesses (for read or write). The defect in this model is that the union operation is too expensive.

In pathological cases, a single union operation could involve changing $N - 1$ entries. Further, this could happen as many as $N$ times.

The find operation here touches only a constant number of entries. However, $N$ union commands over $N$ objects require on the order of $N^2$ array accesses. As we’ll see, quadratic-time algorithms are unacceptable for large problems: double the problem size and the running time quadruples.

As an interesting truism in computer science, it takes roughly one second for a computer to touch everything in its memory. This has been true since the 1950s, when memory was only a few thousand bytes. As computers’ memories got larger, the computers also got faster.

Relating that back to this problem, $10^9$ union commands over $10^9$ objects for our Quick-find would require $10^{18}$ operations. At roughly $10^9$ operations per second, that is $10^9$ seconds, which shakes out to roughly 30 years of computer time.

Let this be a lesson: quadratic algorithms do not scale with technology.

### Second attempt: Quick-union

Here’s our next attempt, a lazy approach. This means we will avoid doing work until we have to.

The underlying data structure (an array of N ids) will be the same, but we will interpret it differently. We will think of the array as representing a set of trees (called a forest), where each entry in the array contains a reference to its parent.

The root of i is `id[id[id[...id[i]...]]]`

Our operations will look like this:

- Find: Check if *p* and *q* have the same root
- Union: To merge components containing *p* and *q*, set the id of *p*’s root to the id of *q*’s root

Each union operation involves only changing one element.

##### Java implementation

The Java implementation might look something like

```
public class QuickUnionUF {
    private int[] id;

    public QuickUnionUF(int N) {
        // set id of each object to itself
        // (N array accesses)
        id = new int[N];
        for (int i = 0; i < N; i++) id[i] = i;
    }

    private int root(int i) {
        // chase parent pointers until reach root
        // (depth of i array accesses)
        while (i != id[i]) i = id[i];
        return i;
    }

    public boolean connected(int p, int q) {
        // check if p and q have same root
        // (depth of p and q array accesses)
        return root(p) == root(q);
    }

    public void union(int p, int q) {
        // change root of p to point to root of q
        // (depth of p and q array accesses)
        int i = root(p);
        int j = root(q);
        id[i] = j;
    }
}
```

Sadly, Quick-union is also too slow. Trees can get tall which causes find operations to become too expensive with as many as N array accesses.

### Quick-union improvements

We’ve seen some problems with the naive approaches. Let’s discuss some improvements. The first is called **weighting**.

The weighted quick-union has features which provide clear advantages:

- Modify quick union to avoid tall trees
- Keep track of size of each tree (number of objects)
- We maintain balance by always linking root of smaller tree to larger tree

##### Java implementation

The data structure will be similar to that of quick-union, but we will maintain an extra array, `sz[i]`, to count the number of objects in the tree rooted at `i`.

Find operation will be identical to quick-union:

`return root(p) == root(q)`

Union operation will be modified to:

- Link root of smaller tree to root of larger tree
- Update the `sz[]` array

```
int i = root(p);
int j = root(q);
if (i == j) return;
if (sz[i] < sz[j]) { id[i] = j; sz[j] += sz[i]; }
else { id[j] = i; sz[i] += sz[j]; }
```
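
Putting the pieces together, a complete weighted quick-union class might look like the following sketch. The `sz[]` array is initialized alongside `id[]`; `root()` and `connected()` are carried over from quick-union unchanged.

```
public class WeightedQuickUnionUF {
    private int[] id;   // id[i] = parent of i
    private int[] sz;   // sz[i] = number of objects in the tree rooted at i

    public WeightedQuickUnionUF(int N) {
        id = new int[N];
        sz = new int[N];
        for (int i = 0; i < N; i++) { id[i] = i; sz[i] = 1; }
    }

    private int root(int i) {
        while (i != id[i]) i = id[i];
        return i;
    }

    public boolean connected(int p, int q) {
        return root(p) == root(q);
    }

    public void union(int p, int q) {
        int i = root(p);
        int j = root(q);
        if (i == j) return;
        // link root of the smaller tree to the root of the larger one
        if (sz[i] < sz[j]) { id[i] = j; sz[j] += sz[i]; }
        else               { id[j] = i; sz[i] += sz[j]; }
    }
}
```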

Running time:

- Find: proportional to the depth of *p* and *q*
- Union: takes constant time, given the roots

An interesting proposition: the depth of any node $x$ is at most $\log_2 N$. The insight is to think about a given node $x$ and ask when its depth increases. $x$’s depth increases by one exactly when the tree $T_1$ containing $x$ is merged into another tree, $T_2$.

- When that happens, the size of the tree containing $x$ at least doubles, since weighting guarantees $|T_2| \ge |T_1|$
- But the size of the tree containing $x$ can double at most $\log_2 N$ times, because it can never exceed $N$
- Another way to see the previous point: start from a tree of size 1 and double it $\log_2 N$ times, and you reach size $N$
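
Writing $d(x)$ for the depth of node $x$ and $T_x$ for the tree containing it, the argument above can be written compactly:

```
|T_1 \cup T_2| = |T_1| + |T_2| \ge 2|T_1| \qquad (\text{since } |T_2| \ge |T_1|)

2^{d(x)} \le |T_x| \le N \;\Longrightarrow\; d(x) \le \log_2 N
```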

### Path compression

Could we improve this further? Yes! Easily.

The next technique is called **path-compression**.

Just after computing the root of p, set the id of each examined node to point to that root. We can achieve this with one line of code, at the cost of only a constant factor in time.

##### Java implementation

There are two approaches. The first is a two-pass implementation: add a second loop to `root()` that sets the `id[]` of each examined node to the root.

Simpler still is a one-pass variant: make every other node in the path point to its grandparent (thereby halving the path length) with a single added line of code.

```
private int root(int i) {
while (i != id[i]) {
id[i] = id[id[i]];
i = id[i];
}
return i;
}
```

In practice there’s virtually no reason not to do this. A single line modification keeps the tree almost completely flat.
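
Combining weighting with the one-pass compressing `root()` above yields weighted quick-union with path compression. A sketch of the complete class (the class name is illustrative):

```
public class WeightedQuickUnionPathCompressionUF {
    private int[] id;   // id[i] = parent of i
    private int[] sz;   // sz[i] = number of objects in tree rooted at i

    public WeightedQuickUnionPathCompressionUF(int N) {
        id = new int[N];
        sz = new int[N];
        for (int i = 0; i < N; i++) { id[i] = i; sz[i] = 1; }
    }

    private int root(int i) {
        while (i != id[i]) {
            id[i] = id[id[i]];  // path compression: point i at its grandparent
            i = id[i];
        }
        return i;
    }

    public boolean connected(int p, int q) {
        return root(p) == root(q);
    }

    public void union(int p, int q) {
        int i = root(p);
        int j = root(q);
        if (i == j) return;
        // weighting: link root of the smaller tree to the larger
        if (sz[i] < sz[j]) { id[i] = j; sz[j] += sz[i]; }
        else               { id[j] = i; sz[i] += sz[j]; }
    }
}
```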

It has been proven [Hopcroft-Ullman, Tarjan], though the proof is beyond the scope of these notes, that starting from an empty data structure, any sequence of $M$ union-find operations on $N$ objects makes $\leq c(N + M \log^* N)$ array accesses, where $\log^*$ is the iterated logarithm: the number of times the logarithm must be applied to $N$ before the result is at most 1.

To see how powerful this function is, notice how slowly it increases for large $N$

N | $\log^* N$ |
---|---|
1 | 0 |
2 | 1 |
4 | 2 |
16 | 3 |
65536 | 4 |
$2^{65536}$ | 5 |

In fact, the analysis can be improved to $N + M\,\alpha(M, N)$, where $\alpha$ is the inverse Ackermann function, an even slower-growing function.

In either case, it turns out you can rely on the weighted quick-union with path compression algorithm to solve the Dynamic Connectivity problem roughly as fast as you can read in the data.

- In theory, WQUPC is not quite linear
- In practice, WQUPC is linear

As an interesting side note, it has been proven [Fredman-Saks] that no linear-time algorithm exists.

### Conclusion

We have approached a famous problem, Dynamic Connectivity, using a meta-algorithm for developing an algorithm suited to our problem. In each successive attempt, we refined our approach and optimized as needed.

For $M$ union-find operations on a set of $N$ objects we have seen

Algorithm | Worst-case time |
---|---|
quick-find | $M N$ |
quick-union | $M N$ |
weighted QU | $N + M \log N$ |
QU + path compression | $N + M \log N$ |
weighted QU + path compression | $N + M \log^* N$ |

Earlier, we stated:

> Relating that back to this problem, $10^9$ union commands over $10^9$ objects for our Quick-find would require $10^{18}$ operations. That shakes out to roughly 30 years of computer time.

With weighted QU + path compression, $10^9$ union commands over $10^9$ objects reduces to about 6 seconds.

A supercomputer will not help you nearly as much as a well designed algorithm.