id: string (6–26 chars)
chapter: string (36 classes)
section: string (3–5 chars)
title: string (3–27 chars)
source_file: string (13–29 chars)
question_markdown: string (17–6.29k chars)
answer_markdown: string (3–6.76k chars)
code_blocks: list (0–9 items)
has_images: bool (2 classes)
image_refs: list (0–7 items)
21-21.2-4
21
21.2
21.2-4
docs/Chap21/21.2.md
Give a tight asymptotic bound on the running time of the sequence of operations in Figure 21.3 assuming the linked-list representation and the weighted-union heuristic.
We call $\text{MAKE-SET}$ $n$ times, which contributes $\Theta(n)$. In each union, the smaller set is of size $1$, so each of these takes $\Theta(1)$ time. Since we union $n - 1$ times, the runtime is $\Theta(n)$.
[]
false
[]
21-21.2-5
21
21.2
21.2-5
docs/Chap21/21.2.md
Professor Gompers suspects that it might be possible to keep just one pointer in each set object, rather than two ($head$ and $tail$), while keeping the number of pointers in each list element at two. Show that the professor's suspicion is well founded by describing how to represent each set by a linked list such that each operation has the same running time as the operations described in this section. Describe also how the operations work. Your scheme should allow for the weighted-union heuristic, with the same effect as described in this section. ($\textit{Hint:}$ Use the tail of a linked list as its set's representative.)
For each member of the set, we make its first field, which used to point back to the set object, point instead to the last element of the linked list. Then, given any set, we can find its last element by going to the head and following the pointer that the head element maintains to the last element of the list. This requires following exactly two pointers, so it takes a constant amount of time. Some care must be taken when unioning these modified sets. Since the set representative is the last element in the list, when we combine two linked lists we place the smaller of the two sets before the larger, since we need to update the representative pointers of the smaller set's elements; this is the reverse of the original situation, where we update the representatives of the objects that are appended onto the end of the linked list.
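The scheme above can be written out in runnable form. The following is a minimal Python sketch (Python rather than the chapter's pseudocode; names like `rep` and `size` are illustrative, not from the text): the set object keeps only a `head` pointer, each element keeps `next` and a pointer to the tail, and the tail serves as the representative.

```python
class SetObject:
    """Set object keeping only ONE pointer (head), per the professor's scheme."""
    def __init__(self, head):
        self.head = head

class Node:
    """List element with two pointers: next, and rep (points to the tail,
    which serves as the set's representative)."""
    def __init__(self, key):
        self.key = key
        self.next = None
        self.rep = self
        self.size = 1          # maintained only on the tail, for weighted union

def make_set(key):
    x = Node(key)
    return SetObject(x), x

def find_set(x):
    return x.rep               # one pointer-follow: the tail is the representative

def union(s1, s2):
    """Weighted union: splice the smaller list in front of the larger one,
    so only the smaller set's rep pointers are rewritten."""
    t1, t2 = s1.head.rep, s2.head.rep      # the two tails, via head then rep
    if t1.size < t2.size:
        s1, s2, t1, t2 = s2, s1, t2, t1
    x = s2.head
    while True:                            # rewrite reps of the smaller set only
        x.rep = t1
        if x.next is None:
            break
        x = x.next
    x.next = s1.head                       # smaller set's tail -> larger set's head
    s1.head = s2.head
    t1.size += t2.size
    return s1
```

As in the text, `find_set` follows one pointer, and `union` touches only the smaller set's elements, so the weighted-union analysis carries over unchanged.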
[]
false
[]
21-21.2-6
21
21.2
21.2-6
docs/Chap21/21.2.md
Suggest a simple change to the $\text{UNION}$ procedure for the linked-list representation that removes the need to keep the $tail$ pointer to the last object in each list. Whether or not the weighted-union heuristic is used, your change should not change the asymptotic running time of the $\text{UNION}$ procedure. ($\textit{Hint:}$ Rather than appending one list to another, splice them together.)
Instead of appending the second list to the end of the first, we can imagine splicing it into the first list, in between the head and the elements. Store a pointer to the first element of $S_1$. Then for each element $x$ in $S_2$, set $x.head = S_1.head$. When the last element of $S_2$ is reached, set its next pointer to the first element of $S_1$. If we always let $S_2$ play the role of the smaller set, this works well with the weighted-union heuristic and doesn't affect the asymptotic running time of $\text{UNION}$.
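A minimal Python sketch of this splice (illustrative names; the element's back-pointer to the set object is called `set` here): $S_2$'s elements are re-pointed and spliced in front of $S_1$'s elements, and no tail pointer is ever consulted.

```python
class SetObject:
    def __init__(self, head, size):
        self.head = head
        self.size = size       # for the weighted-union heuristic

class Node:
    def __init__(self, key):
        self.key = key
        self.set = None        # pointer back to the set object (representative)
        self.next = None

def make_set(key):
    x = Node(key)
    s = SetObject(x, 1)
    x.set = s
    return s

def find_set(x):
    return x.set

def union(S1, S2):
    """Splice S2's elements between S1's head pointer and S1's elements;
    no tail pointer is needed."""
    if S2.size > S1.size:      # keep S2 the smaller set
        S1, S2 = S2, S1
    first = S1.head            # remember S1's first element
    x = S2.head
    while True:
        x.set = S1             # re-point only the smaller set's elements
        if x.next is None:
            break
        x = x.next
    x.next = first             # last element of S2 -> first element of S1
    S1.head = S2.head
    S1.size += S2.size
    return S1
```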
[]
false
[]
21-21.3-1
21
21.3
21.3-1
docs/Chap21/21.3.md
Redo Exercise 21.2-2 using a disjoint-set forest with union by rank and path compression.
``` 1 / / \ \ 2 3 5 9 | / \ / \ \ 4 6 7 10 11 13 | | / \ 8 12 14 15 | 16 ```
[ { "lang": "", "code": " 1\n / / \\ \\\n2 3 5 9\n | / \\ / \\ \\\n 4 6 7 10 11 13\n | | / \\\n 8 12 14 15\n |\n 16" } ]
false
[]
21-21.3-2
21
21.3
21.3-2
docs/Chap21/21.3.md
Write a nonrecursive version of $\text{FIND-SET}$ with path compression.
To implement $\text{FIND-SET}$ nonrecursively, let $x$ be the element we call the function on. Create a linked list $A$ which contains a pointer to $x$. Each time we move one element up the tree, insert a pointer to that element into $A$. Once the root $r$ has been found, use the linked list to find each node on the path from $x$ to the root and update its parent to $r$.
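This can be written as a short, runnable routine. A Python sketch, with the linked list $A$ played by an ordinary list and the forest stored as a parent map (our representation, not CLRS's):

```python
def find_set(parent, x):
    """Nonrecursive FIND-SET with path compression.
    parent maps each element to its parent; parent[r] == r at a root."""
    path = [x]                          # the list A from the text
    while parent[path[-1]] != path[-1]:
        path.append(parent[path[-1]])   # walk one element up the tree
    root = path.pop()
    for node in path:                   # second pass: compress the path
        parent[node] = root
    return root
```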
[]
false
[]
21-21.3-3
21
21.3
21.3-3
docs/Chap21/21.3.md
Give a sequence of $m$ $\text{MAKE-SET}$, $\text{UNION}$, and $\text{FIND-SET}$ operations, $n$ of which are $\text{MAKE-SET}$ operations, that takes $\Omega(m\lg n)$ time when we use union by rank only.
Suppose that $n' = 2^k$ is the largest power of two that is at most $n$. To see that this sequence of operations takes the required amount of time, we'll first note that after each iteration of the **for** loop indexed by $i$, the elements $x_1, \dots, x_{n'}$ lie in trees of depth $i$. So, after we finish the outer **for** loop, $x_1, \dots, x_{n'}$ all lie in the same set, represented by a tree of depth $k \in \Omega(\lg n)$. Then, since we repeatedly call $\text{FIND-SET}$ on an item that is $\lg n$ away from its set representative, each of these calls takes time $\Omega(\lg n)$. So, the last **for** loop altogether takes time $\Omega(m \lg n)$. ```cpp for i = 1 to n MAKE-SET(x[i]) for i = 1 to k for j = 1 to n' - 2^{i - 1} by 2^i UNION(x[j], x[j + 2^{i - 1}]) for i = 1 to m FIND-SET(x[1]) ```
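As a sanity check, the construction can be simulated. A Python sketch with union by rank only (no path compression), using $n = n' = 16$ and $k = 4$; after the unions, $x_1$ sits exactly $k = \lg n'$ pointer-follows below its representative, so each subsequent $\text{FIND-SET}(x_1)$ costs $\Theta(\lg n)$.

```python
import math

def make_set(p, r, x):
    p[x] = x
    r[x] = 0

def link(p, r, a, b):
    # union by rank only; a and b are roots
    if r[a] > r[b]:
        p[b] = a
    else:
        p[a] = b
        if r[a] == r[b]:
            r[b] += 1

def find_root(p, x):
    # NO path compression: count the pointer-follows
    steps = 0
    while p[x] != x:
        x = p[x]
        steps += 1
    return x, steps

n = 16
k = int(math.log2(n))          # n' = 2^k = 16 here
p, r = {}, {}
for i in range(1, n + 1):
    make_set(p, r, i)
for i in range(1, k + 1):      # the j-indexed pairing from the pseudocode
    for j in range(1, n - 2 ** (i - 1) + 1, 2 ** i):
        link(p, r, find_root(p, j)[0], find_root(p, j + 2 ** (i - 1))[0])
root, depth = find_root(p, 1)  # x_1 is lg n' levels below the representative
```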
[ { "lang": "cpp", "code": " for i = 1 to n\n MAKE-SET(x[i])\n for i = 1 to k\n for j = 1 to n' - 2^{i - 1} by 2^i\n UNION(x[j], x[j + 2^{i - 1}])\n for i = 1 to m\n FIND-SET(x[1])" } ]
false
[]
21-21.3-4
21
21.3
21.3-4
docs/Chap21/21.3.md
Suppose that we wish to add the operation $\text{PRINT-SET}(x)$, which is given a node $x$ and prints all the members of $x$'s set, in any order. Show how we can add just a single attribute to each node in a disjoint-set forest so that $\text{PRINT-SET}(x)$ takes time linear in the number of members of $x$'s set and the asymptotic running times of the other operations are unchanged. Assume that we can print each member of the set in $O(1)$ time.
In addition to each tree, we'll store a linked list of the labels of all elements in the tree, which the root keeps a pointer to. The only additional information we'll store in each node is a pointer $x.l$ to that element's position in the list. - When we call $\text{MAKE-SET}(x)$, we'll also create a new linked list containing only the label of $x$, and set $x.l$ to point to that label. This is all done in $O(1)$. - $\text{FIND-SET}$ will remain unchanged. - $\text{UNION}(x, y)$ will work as usual, with the additional requirement that we splice together the linked lists of the two roots. If each list is maintained with both head and tail pointers (or is kept circular, so that a single tail pointer suffices), the two lists can be linked up in constant time, thus preserving the runtime of $\text{UNION}$. - Finally, $\text{PRINT-SET}(x)$ works as follows: first, set $s = \text{FIND-SET}(x)$. Then print the elements of the linked list stored at $s$, starting from its head. Since the list contains exactly the members of $x$'s set and printing each takes $O(1)$, this operation takes time linear in the number of set members.
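A runnable Python sketch of the whole design (our representation: the member list is kept circular so a single tail pointer per root suffices, and two circular lists splice in $O(1)$ by swapping their tails' successors):

```python
class DSU:
    """Disjoint-set forest in which every root also carries a circular
    linked list of its set's members (the extra attribute is nxt[x], the
    member list's successor pointer), enabling PRINT-SET in linear time."""
    def __init__(self):
        self.parent, self.rank = {}, {}
        self.nxt, self.tail = {}, {}

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0
        self.nxt[x] = x            # one-element circular list
        self.tail[x] = x           # only meaningful while x is a root

    def find_set(self, x):
        while self.parent[x] != x:     # compression omitted for brevity
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        tx, ty = self.tail[rx], self.tail[ry]
        # splice the two circular lists in O(1): swap the tails' successors
        self.nxt[tx], self.nxt[ty] = self.nxt[ty], self.nxt[tx]
        self.tail[rx] = ty

    def print_set(self, x):
        """Collect (rather than print) the members of x's set."""
        r = self.find_set(x)
        out, cur = [], self.nxt[self.tail[r]]   # head = tail's successor
        while True:
            out.append(cur)
            if cur == self.tail[r]:
                break
            cur = self.nxt[cur]
        return out
```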
[]
false
[]
21-21.3-5
21
21.3
21.3-5 $\star$
docs/Chap21/21.3.md
Show that any sequence of $m$ $\text{MAKE-SET}$, $\text{FIND-SET}$, and $\text{LINK}$ operations, where all the $\text{LINK}$ operations appear before any of the $\text{FIND-SET}$ operations, takes only $O(m)$ time if we use both path compression and union by rank. What happens in the same situation if we use only the path-compression heuristic?
Clearly each $\text{MAKE-SET}$ and $\text{LINK}$ operation only takes time $O(1)$, so, supposing that $n$ is the number of $\text{FIND-SET}$ operations occurring after the making and linking, we need to show that all the $\text{FIND-SET}$ operations only take time $O(n)$. To do this, we will amortize some of the cost of the $\text{FIND-SET}$ operations into the cost of the $\text{MAKE-SET}$ operations. Imagine paying some constant amount extra for each $\text{MAKE-SET}$ operation. Then, when doing a $\text{FIND-SET}(x)$ operation, we have three possibilities: - First, we could have that $x$ is the representative of its own set. In this case, it clearly only takes constant time to run. - Second, we could have that the path from $x$ to its set's representative is already compressed, so it only takes a single step to find the set representative. In this case also, the time required is constant. - Third, we could have that $x$ is not the representative and its path has not been compressed. Then, suppose that there are $k$ nodes between $x$ and its representative. The time of this $\text{FIND-SET}$ operation is $O(k)$, but it also ends up compressing the paths of $k$ nodes, so we use the extra amount that we paid during the $\text{MAKE-SET}$ operations for these $k$ nodes whose paths were compressed. Any subsequent call of $\text{FIND-SET}$ on these nodes will take only a constant amount of time, so we never use the amortized credit of a given node twice.
[]
false
[]
21-21.4-1-1
21
21.4
21.4-1
docs/Chap21/21.4.md
Prove Lemma 21.4.
The lemma states: > For all nodes $x$, we have $x.rank \le x.p.rank$, with strict inequality if $x \ne x.p$. The value of $x.rank$ is initially $0$ and increases through time until $x \ne x.p$; from then on, $x.rank$ does not change. The value of $x.p.rank$ monotonically increases over time. The initial value of $x.rank$ is $0$, as it is initialized in line 2 of the $\text{MAKE-SET}(x)$ procedure. When we run $\text{LINK}(x, y)$, whichever of the two roots has the larger rank is made the parent of the other, and if there is a tie, the parent's rank is incremented. This means that after any $\text{LINK}(x, y)$, the two nodes being linked satisfy this strict inequality of ranks. Also, if we have that $x \ne x.p$, then $x$ is not its own set representative, so any later linking together of sets will not involve $x$; since linking is the only way for a rank to increase, $x.rank$ must remain constant after that point.
[]
false
[]
21-21.4-1-2
21
21.4
21.4-1
docs/Chap21/21.4.md
For all nodes $x$, we have $x.rank \le x.p.rank$, with strict inequality if $x \ne x.p$. The value of $x.rank$ is initially $0$ and increases through time until $x \ne x.p$; from then on, $x.rank$ does not change. The value of $x.p.rank$ monotonically increases over time.
The lemma states: > For all nodes $x$, we have $x.rank \le x.p.rank$, with strict inequality if $x \ne x.p$. The value of $x.rank$ is initially $0$ and increases through time until $x \ne x.p$; from then on, $x.rank$ does not change. The value of $x.p.rank$ monotonically increases over time. The initial value of $x.rank$ is $0$, as it is initialized in line 2 of the $\text{MAKE-SET}(x)$ procedure. When we run $\text{LINK}(x, y)$, whichever of the two roots has the larger rank is made the parent of the other, and if there is a tie, the parent's rank is incremented. This means that after any $\text{LINK}(x, y)$, the two nodes being linked satisfy this strict inequality of ranks. Also, if we have that $x \ne x.p$, then $x$ is not its own set representative, so any later linking together of sets will not involve $x$; since linking is the only way for a rank to increase, $x.rank$ must remain constant after that point.
[]
false
[]
21-21.4-2
21
21.4
21.4-2
docs/Chap21/21.4.md
Prove that every node has rank at most $\lfloor \lg n \rfloor$.
We'll prove the claim by strong induction on the number of nodes. If $n = 1$, then that node has rank equal to $0 = \lfloor \lg 1 \rfloor$. Now suppose that the claim holds for $1, 2, \ldots, n$ nodes. Given $n + 1$ nodes, suppose we perform a $\text{UNION}$ operation on two disjoint sets with $a$ and $b$ nodes respectively, where $a, b \le n$ and, without loss of generality, $a \le b$, so that $a \le (n + 1) / 2$. Then the root of the first set has rank at most $\lfloor \lg a \rfloor$ and the root of the second set has rank at most $\lfloor \lg b\rfloor$. If the ranks are unequal, then the $\text{UNION}$ operation preserves the larger rank and we are done, so suppose the ranks are equal. Then the rank of the union increases by $1$, and the resulting set has rank at most $\lfloor\lg a\rfloor + 1 \le\lfloor\lg((n + 1) / 2)\rfloor + 1 = \lfloor\lg(n + 1)\rfloor$.
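The bound can be spot-checked by simulation. A Python sketch that performs random unions by rank and verifies that no rank exceeds $\lfloor \lg n \rfloor$ (the assertion is guaranteed by the claim just proved, so this is a sanity check rather than evidence):

```python
import math
import random

def make_set(p, r, x):
    p[x] = x
    r[x] = 0

def find(p, x):
    while p[x] != x:
        x = p[x]
    return x

def union(p, r, a, b):
    # union by rank
    a, b = find(p, a), find(p, b)
    if a == b:
        return
    if r[a] < r[b]:
        a, b = b, a
    p[b] = a
    if r[a] == r[b]:
        r[a] += 1

random.seed(1)
n = 64
p, r = {}, {}
for x in range(n):
    make_set(p, r, x)
for _ in range(200):
    union(p, r, random.randrange(n), random.randrange(n))
max_rank = max(r.values())   # never exceeds floor(lg n) = 6
```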
[]
false
[]
21-21.4-3
21
21.4
21.4-3
docs/Chap21/21.4.md
In light of Exercise 21.4-2, how many bits are necessary to store $x.rank$ for each node $x$?
Since a rank is at most $\lfloor \lg n \rfloor$, we can represent it using $\Theta(\lg\lg n)$ bits, and we may need that many bits to represent a quantity that can take that many distinct values.
[]
false
[]
21-21.4-4
21
21.4
21.4-4
docs/Chap21/21.4.md
Using Exercise 21.4-2, give a simple proof that operations on a disjoint-set forest with union by rank but without path compression run in $O(m\lg n)$ time.
$\text{MAKE-SET}$ takes constant time, and both $\text{FIND-SET}$ and $\text{UNION}$ are bounded by the largest rank among all the sets. Exercise 21.4-2 bounds this from above by $\lfloor \lg n \rfloor$, so the actual cost of each operation is $O(\lg n)$. Therefore the actual cost of $m$ operations is $O(m\lg n)$.
[]
false
[]
21-21.4-5
21
21.4
21.4-5
docs/Chap21/21.4.md
Professor Dante reasons that because node ranks increase strictly along a simple path to the root, node levels must monotonically increase along the path. In other words, if $x.rank > 0$ and $x.p$ is not a root, then $\text{level}(x) \le \text{level}(x.p)$. Is the professor correct?
Professor Dante is not correct. Suppose that we had that $x.p.rank > A_2(x.rank)$ but that $x.p.p.rank = 1 + x.p.rank$, then we would have that $\text{level}(x.p) = 0$, but $\text{level}(x) \ge 2$. So, we don't have that $\text{level}(x) \le \text{level}(x.p)$ even though we have that the ranks are monotonically increasing as we go up in the tree. Put another way, even though the ranks are monotonically increasing, the rate at which they are increasing (roughly captured by the level values) doesn't have to be increasing.
[]
false
[]
21-21.4-6
21
21.4
21.4-6 $\star$
docs/Chap21/21.4.md
Consider the function $\alpha'(n) = \min \\{k: A_k(1) \ge \lg(n + 1)\\}$. Show that $\alpha'(n) \le 3$ for all practical values of $n$ and, using Exercise 21.4-2, show how to modify the potential-function argument to prove that we can perform a sequence of $m$ $\text{MAKE-SET}$, $\text{UNION}$, and $\text{FIND-SET}$ operations, $n$ of which are $\text{MAKE-SET}$ operations, on a disjoint-set forest with union by rank and path compression in worst-case time $O(m \alpha'(n))$.
First, observe that by a change of variables, $\alpha'(2^n − 1) = \alpha(n)$. Earlier in the section we saw that $\alpha(n) \le 3$ for $0 \le n \le 2047$. This means that $\alpha'(n) \le 3$ for $0 \le n \le 2^{2047} - 1$, which far exceeds the estimated number of atoms in the observable universe. To prove the improved bound $O(m\alpha'(n))$ on the operations, the general structure will be essentially the same as that given in the section. First, modify bound (21.2) by observing that $A_{\alpha'(n)}(x.rank) \ge A_{\alpha'(n)}(1) \ge \lg(n + 1) > x.p.rank$, which implies $\text{level}(x) \le \alpha'(n)$. Next, redefine the potential, replacing $\alpha(n)$ by $\alpha'(n)$. Lemma 21.8 now goes through just as before. All subsequent lemmas rely on these previous observations, and their proofs go through exactly as in the section, yielding the bound.
[]
false
[]
21-21-1
21
21-1
21-1
docs/Chap21/Problems/21-1.md
The **_off-line minimum problem_** asks us to maintain a dynamic set $T$ of elements from the domain $\\{1, 2, \ldots, n\\}$ under the operations $\text{INSERT}$ and $\text{EXTRACT-MIN}$. We are given a sequence $S$ of $n$ $\text{INSERT}$ and $m$ $\text{EXTRACT-MIN}$ calls, where each key in $\\{1, 2, \ldots, n\\}$ is inserted exactly once. We wish to determine which key is returned by each $\text{EXTRACT-MIN}$ call. Specifically, we wish to fill in an array $extracted[1..m]$, where for $i = 1, 2, \ldots, m$, $extracted[i]$ is the key returned by the $i$th $\text{EXTRACT-MIN}$ call. The problem is "off-line" in the sense that we are allowed to process the entire sequence $S$ before determining any of the returned keys. **a.** In the following instance of the off-line minimum problem, each operation $\text{INSERT}(i)$ is represented by the value of $i$ and each $\text{EXTRACT-MIN}$ is represented by the letter $\text E$: $$4, 8, \text E, 3, \text E, 9, 2, 6, \text E, \text E, \text E, 1, 7, \text E, 5.$$ Fill in the correct values in the _extracted_ array. To develop an algorithm for this problem, we break the sequence $S$ into homogeneous subsequences. That is, we represent $S$ by $$\text I_1, \text E, \text I_2, \text E, \text I_3, \ldots, \text I_m,\text E, \text I_{m + 1},$$ where each $\text E$ represents a single $\text{EXTRACT-MIN}$ call and each $\text{I}_j$ represents a (possibly empty) sequence of $\text{INSERT}$ calls. For each subsequence $\text{I}_j$ , we initially place the keys inserted by these operations into a set $K_j$, which is empty if $\text{I}_j$ is empty. We then do the following: ```cpp OFF-LINE-MINIMUM(m, n) for i = 1 to n determine j such that i ∈ K[j] if j != m + 1 extracted[j] = i let l be the smallest value greater than j for which set K[l] exists K[l] = K[j] ∪ K[l], destroying K[j] return extracted ``` **b.** Argue that the array _extracted_ returned by $\text{OFF-LINE-MINIMUM}$ is correct. 
**c.** Describe how to implement $\text{OFF-LINE-MINIMUM}$ efficiently with a disjoint-set data structure. Give a tight bound on the worst-case running time of your implementation.
**a.** $$ \begin{array}{|c|c|} \hline index & value \\\\ \hline 1 & 4 \\\\ 2 & 3 \\\\ 3 & 2 \\\\ 4 & 6 \\\\ 5 & 8 \\\\ 6 & 1 \\\\ \hline \end{array} $$ **b.** As we run the **for** loop, we are picking off the smallest of the possible elements to be removed, knowing for sure that it will be removed by the next unused $\text{EXTRACT-MIN}$ operation. Then, since that $\text{EXTRACT-MIN}$ operation is used up, we can pretend that it no longer exists and combine the set of things that were inserted by that segment with those inserted by the next, since we know that the $\text{EXTRACT-MIN}$ operation that had separated the two is now used up. Since we proceed to figure out what the various extract operations do one at a time, by the time we are done, we have figured them all out. **c.** We let each of the sets be represented by a disjoint-set data structure. To union them (as on line 6) just call $\text{UNION}$. Checking that they exist is just a matter of keeping a linked list of which ones exist (needed for line 5), initially containing all of them; when destroying the set on line 6, we delete it from this linked list. The only other interaction with the sets that we have to worry about is on line 2, which just amounts to a call of $\text{FIND-SET}(i)$. Since line 2 takes amortized time $\alpha(n)$ and we call it exactly $n$ times, and the rest of the **for** loop only takes constant time, the total runtime is $O(n\alpha(n))$.
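A runnable Python sketch of parts (b) and (c) (our representation: the "destroy $K_j$ and union it with the next existing set" step is realized by a pointer-jumping disjoint-set structure over segment numbers, where `find(j)` returns the smallest still-existing segment $\ge j$):

```python
def off_line_minimum(seq, n):
    """seq mixes keys and 'E' markers, e.g. [4, 8, 'E', 3, ...].
    Returns extracted[1..m] as a Python list."""
    m = seq.count('E')
    seg_of = {}                   # seg_of[key] = j such that key ∈ K_j
    j = 1
    for op in seq:
        if op == 'E':
            j += 1
        else:
            seg_of[op] = j
    parent = list(range(m + 3))   # parent[j] == j while K_j still exists

    def find(j):                  # smallest existing segment >= j
        root = j
        while parent[root] != root:
            root = parent[root]
        while parent[j] != root:  # path compression
            parent[j], j = root, parent[j]
        return root

    extracted = [None] * (m + 1)  # 1-indexed
    for i in range(1, n + 1):     # keys in increasing order, as in the loop
        j = find(seg_of[i])
        if j != m + 1:
            extracted[j] = i      # i answers the j-th EXTRACT-MIN
            parent[j] = j + 1     # destroy K_j by merging it rightward
    return extracted[1:]
```

Running it on the instance from part (a) reproduces the table above.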
[ { "lang": "cpp", "code": "OFF-LINE-MINIMUM(m, n)\n for i = 1 to n\n determine j such that i ∈ K[j]\n if j != m + 1\n extracted[j] = i\n let l be the smallest value greater than j for which set K[l] exists\n K[l] = K[j] ∪ K[l], destroying K[j]\n return extracted" } ]
false
[]
21-21-2
21
21-2
21-2
docs/Chap21/Problems/21-2.md
In the **_depth-determination problem_**, we maintain a forest $\mathcal F = \\{T_i\\}$ of rooted trees under three operations: $\text{MAKE-TREE}(v)$ creates a tree whose only node is $v$. $\text{FIND-DEPTH}(v)$ returns the depth of node $v$ within its tree. $\text{GRAFT}(r, v)$ makes node $r$, which is assumed to be the root of a tree, become the child of node $v$, which is assumed to be in a different tree than $r$ but may or may not itself be a root. **a.** Suppose that we use a tree representation similar to a disjoint-set forest: $v.p$ is the parent of node $v$, except that $v.p = v$ if $v$ is a root. Suppose further that we implement $\text{GRAFT}(r, v)$ by setting $r.p = v$ and $\text{FIND-DEPTH}(v)$ by following the find path up to the root, returning a count of all nodes other than $v$ encountered. Show that the worst-case running time of a sequence of $m$ $\text{MAKE-TREE}$, $\text{FIND-DEPTH}$, and $\text{GRAFT}$ operations is $\Theta(m^2)$. By using the union-by-rank and path-compression heuristics, we can reduce the worst-case running time. We use the disjoint-set forest $\mathcal S = \\{S_i\\}$, where each set $S_i$ (which is itself a tree) corresponds to a tree $T_i$ in the forest $\mathcal F$. The tree structure within a set $S_i$, however, does not necessarily correspond to that of $T_i$. In fact, the implementation of $S_i$ does not record the exact parent-child relationships but nevertheless allows us to determine any node's depth in $T_i$. The key idea is to maintain in each node $v$ a "pseudodistance" $v.d$, which is defined so that the sum of the pseudodistances along the simple path from $v$ to the root of its set $S_i$ equals the depth of $v$ in $T_i$. That is, if the simple path from $v$ to its root in $S_i$ is $v_0, v_1, \ldots, v_k$, where $v_0 = v$ and $v_k$ is $S_i$'s root, then the depth of $v$ in $T_i$ is $\sum_{j = 0}^k v_j.d$. **b.** Give an implementation of $\text{MAKE-TREE}$. 
**c.** Show how to modify $\text{FIND-SET}$ to implement $\text{FIND-DEPTH}$. Your implementation should perform path compression, and its running time should be linear in the length of the find path. Make sure that your implementation updates pseudodistances correctly. **d.** Show how to implement $\text{GRAFT}(r, v)$, which combines the sets containing $r$ and $v$, by modifying the $\text{UNION}$ and $\text{LINK}$ procedures. Make sure that your implementation updates pseudodistances correctly. Note that the root of a set $S_i$ is not necessarily the root of the corresponding tree $T_i$. **e.** Give a tight bound on the worst-case running time of a sequence of $m$ $\text{MAKE-TREE}$, $\text{FIND-DEPTH}$, and $\text{GRAFT}$ operations, $n$ of which are $\text{MAKE-TREE}$ operations.
**a.** $\text{MAKE-TREE}$ and $\text{GRAFT}$ are both constant-time operations. $\text{FIND-DEPTH}$ is linear in the depth of the node. In a sequence of $m$ operations the maximal depth which can be achieved is $m / 2$, so $\text{FIND-DEPTH}$ takes at most $O(m)$. Thus, $m$ operations take at most $O(m^2)$. This is achieved as follows: create $m / 3$ new trees. Graft them together into a chain using $m / 3$ calls to $\text{GRAFT}$. Now call $\text{FIND-DEPTH}$ on the deepest node $m / 3$ times. Each call takes time at least $m / 3$, so the total runtime is $\Omega((m / 3)^2) = \Omega(m^2)$. Thus the worst-case runtime of the $m$ operations is $\Theta(m^2)$. **b.** Since the new set will contain only a single node, its depth must be zero and its parent is itself. In this case, the set and its corresponding tree are indistinguishable. ```cpp MAKE-TREE(v) v = ALLOCATE-NODE() v.d = 0 v.p = v return v ``` **c.** In addition to returning the set object, modify $\text{FIND-SET}$ to also return the pseudodistance accumulated along the find path. Update the pseudodistance of the current node $v$ to be $v.d$ plus the returned pseudodistance. Since this is done recursively, the running time is unchanged: it is still linear in the length of the find path. To implement $\text{FIND-DEPTH}(v)$, call $\text{FIND-SET}(v)$ and add the returned pseudodistance to the pseudodistance of the returned root. ```cpp FIND-SET(v) if v != v.p (v.p, d) = FIND-SET(v.p) v.d = v.d + d return (v.p, v.d) return (v, 0) ``` **d.** To implement $\text{GRAFT}$ we need to find $v$'s actual depth, add one to it (since $r$ becomes a child of $v$), add the result to the pseudodistance of the root of the set $S_i$ which contains $r$, and adjust the other root's pseudodistance so that depths in $v$'s set are unchanged. 
```cpp GRAFT(r, v) (x, d_1) = FIND-SET(r) (y, d_2) = FIND-SET(v) if x.rank > y.rank x.d = x.d + d_2 + y.d + 1 y.d = y.d - x.d y.p = x else x.d = x.d + d_2 + 1 x.p = y if x.rank == y.rank y.rank = y.rank + 1 ``` **e.** The three implemented operations have the same asymptotic running times as $\text{MAKE-SET}$, $\text{FIND-SET}$, and $\text{UNION}$ for disjoint sets, so the worst-case runtime of $m$ such operations, $n$ of which are $\text{MAKE-TREE}$ operations, is $O(m\alpha(n))$.
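The scheme in parts (b)-(d) can be written out in runnable form. A Python sketch (our invariant: after path compression, a non-root's `d` holds the pseudodistance sum from it up to, but not including, its set's root, so $\text{FIND-DEPTH}(v)$ returns $v.d$ plus the root's own `d`; the `+ 1` in `graft` accounts for $r$ becoming a child of $v$):

```python
class Node:
    """A node in both the forest F and the disjoint-set forest S."""
    def __init__(self):
        self.parent = self   # set-forest parent (NOT the T-tree parent)
        self.rank = 0
        self.d = 0           # pseudodistance

def make_tree():
    return Node()

def find_set(v):
    """Path compression. Afterwards, for a non-root v, v.d equals the
    pseudodistance sum from v up to (but not including) the set root."""
    if v.parent is v:
        return v
    root = find_set(v.parent)
    if v.parent is not root:
        v.d += v.parent.d    # fold the old parent's offset into v
        v.parent = root
    return root

def find_depth(v):
    root = find_set(v)
    return v.d if v is root else v.d + root.d

def graft(r, v):
    """Make T-root r a child of v (v in a different tree)."""
    delta = find_depth(v) + 1        # every node of r's tree gets this deeper
    x, y = find_set(r), find_set(v)
    if x.rank > y.rank:
        x.d += delta                 # x stays root of r's set, shifted by delta
        y.d -= x.d                   # keep depths in v's old set unchanged
        y.parent = x
    else:
        x.d += delta - y.d           # x becomes a child of root y
        x.parent = y
        if x.rank == y.rank:
            y.rank += 1
```

For example, grafting a chain of four nodes and then grafting that chain's root under a fresh node shifts every depth in the chain by one, which the pseudodistance bookkeeping tracks without touching the chain's nodes individually.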
[ { "lang": "cpp", "code": "MAKE-TREE(v)\n v = ALLOCATE-NODE()\n v.d = 0\n v.p = v\n return v" }, { "lang": "cpp", "code": "FIND-SET(v)\n if v != v.p\n (v.p, d) = FIND-SET(v.p)\n v.d = v.d + d\n return (v.p, v.d)\n return (v, 0)" }, { "lang": "cpp", "code": "GRAFT(r, v)\n (x, d_1) = FIND-SET(r)\n (y, d_2) = FIND-SET(v)\n if x.rank > y.rank\n x.d = x.d + d_2 + y.d + 1\n y.d = y.d - x.d\n y.p = x\n else\n x.d = x.d + d_2 + 1\n x.p = y\n if x.rank == y.rank\n y.rank = y.rank + 1" } ]
false
[]
21-21-3
21
21-3
21-3
docs/Chap21/Problems/21-3.md
The **_least common ancestor_** of two nodes $u$ and $v$ in a rooted tree $T$ is the node $w$ that is an ancestor of both $u$ and $v$ and that has the greatest depth in $T$. In the **_off-line least-common-ancestors problem_**, we are given a rooted tree $T$ and an arbitrary set $P = \\{\\{u, v\\}\\}$ of unordered pairs of nodes in $T$, and we wish to determine the least common ancestor of each pair in $P$. To solve the off-line least-common-ancestors problem, the following procedure performs a tree walk of $T$ with the initial call $\text{LCA}(T.root)$. We assume that each node is colored $\text{WHITE}$ prior to the walk. ```cpp LCA(u) MAKE-SET(u) FIND-SET(u).ancestor = u for each child v of u in T LCA(v) UNION(u, v) FIND-SET(u).ancestor = u u.color = BLACK for each node v such that {u, v} ∈ P if v.color == BLACK print "The least common ancestor of" u "and" v "is" FIND-SET(v).ancestor ``` **a.** Argue that line 10 executes exactly once for each pair $\\{u, v\\} \in P$. **b.** Argue that at the time of the call $\text{LCA}(u)$, the number of sets in the disjoint-set data structure equals the depth of $u$ in $T$. **c.** Prove that $\text{LCA}$ correctly prints the least common ancestor of $u$ and $v$ for each pair $\\{u, v\\} \in P$. **d.** Analyze the running time of $\text{LCA}$, assuming that we use the implementation of the disjoint-set data structure in Section 21.3.
**a.** Suppose that we let $\le_{LCA}$ be an ordering on the vertices so that $u \le_{LCA} v$ if we run line 7 of $\text{LCA}(u)$ before line 7 of $\text{LCA}(v)$. Then, when we are running line 7 of $\text{LCA}(u)$, we immediately go on to the **for** loop on line 8. So, while we are doing this **for** loop, we still haven't called line 7 of $\text{LCA}(v)$. This means that $v.color$ is white, and so the pair $\\{u, v\\}$ is not considered during the run of $\text{LCA}(u)$. However, during the **for** loop of $\text{LCA}(v)$, since line 7 of $\text{LCA}(u)$ has already run, $u.color = \text{BLACK}$. This means that the pair $\\{u, v\\}$ is considered exactly once, during the running of $\text{LCA}(v)$. It is not obvious what the ordering $\le_{LCA}$ is, as it is implementation dependent: it depends on the order in which child vertices are iterated in the **for** loop on line 3, not just on the graph structure. **b.** We suppose that the claim is true prior to a given call of $\text{LCA}$, and show that the property is preserved throughout a run of the procedure, with the number of disjoint sets increasing by exactly one by the end. So, supposing that $u$ has depth $d$ and there are $d$ disjoint sets in the data structure before it runs, the count increases to $d + 1$ disjoint sets on line 1. So, by the time we get to line 4 and call $\text{LCA}$ on a child of $u$, there are $d + 1$ disjoint sets, which is exactly the depth of the child. After line 4, there are $d + 2$ disjoint sets, so line 5 brings it back down to $d + 1$ disjoint sets for the subsequent times through the loop. After the loop, there are no more changes to the number of disjoint sets, so the procedure terminates with $d + 1$ disjoint sets, as desired. Since this holds for any arbitrary run of $\text{LCA}$, it holds for all runs of $\text{LCA}$. **c.** Suppose that the pair $u$ and $v$ have the least common ancestor $w$. 
Then, when running $\text{LCA}(w)$, $u$ will be in the subtree rooted at one of $w$'s children, and $v$ will be in another. WLOG, suppose that the subtree containing $u$ runs first. So, when we are done with running that subtree, the ancestor values of all its nodes will point to $w$, their colors will be black, and their ancestor values will not change until $\text{LCA}(w)$ returns. However, we run $\text{LCA}(v)$ before $\text{LCA}(w)$ returns, so in the **for** loop on line 8 of $\text{LCA}(v)$, we will be considering the pair $\\{u, v\\}$, since $u.color = \text{BLACK}$. Since the ancestor attribute of $u$'s set is still $w$, that is what will be output, which is the correct answer for their $\text{LCA}$. **d.** The time complexity of lines 1 and 2 is constant. Then, for each child, we have a call to the same procedure, a $\text{UNION}$ operation which only takes constant time, and a $\text{FIND-SET}$ operation which takes at most amortized inverse Ackermann time. Since we check every vertex paired with $u$ in $P$ for being black, we consider each pair in $P$ at most twice in lines 8-10 among all the runs of $\text{LCA}$. This means that the total runtime is $O(|T|\alpha(|T|) + |P|)$.
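The LCA procedure can be transcribed almost directly into runnable Python (our representation: the tree as a child dict, colors in a dict, queries indexed by endpoint; union by rank is omitted for brevity, which does not affect correctness):

```python
def offline_lca(children, root, pairs):
    """Tarjan's off-line LCA, following the LCA pseudocode above.
    children: dict node -> list of children; pairs: iterable of (u, v).
    Returns a dict mapping frozenset({u, v}) to the least common ancestor."""
    parent, ancestor, color = {}, {}, {}
    queries = {}
    for u, v in pairs:                    # index the pairs by endpoint (line 8)
        queries.setdefault(u, []).append(v)
        queries.setdefault(v, []).append(u)
    answer = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def lca(u):
        parent[u] = u                      # MAKE-SET(u)
        ancestor[u] = u
        for v in children.get(u, []):
            lca(v)
            parent[find(v)] = find(u)      # UNION(u, v)
            ancestor[find(u)] = u
        color[u] = 'black'
        for v in queries.get(u, []):
            if color.get(v) == 'black':
                answer[frozenset((u, v))] = ancestor[find(v)]

    lca(root)
    return answer
```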
[ { "lang": "cpp", "code": "LCA(u)\n MAKE-SET(u)\n FIND-SET(u).ancestor = u\n for each child v of u in T\n LCA(v)\n UNION(u, v)\n FIND-SET(u).ancestor = u\n u.color = BLACK\n for each node v such that {u, v} ∈ P\n if v.color == BLACK\n print \"The least common ancestor of\" u \"and\" v \"is\" FIND-SET(v).ancestor" } ]
false
[]
22-22.1-1
22
22.1
22.1-1
docs/Chap22/22.1.md
Given an adjacency-list representation of a directed graph, how long does it take to compute the $\text{out-degree}$ of every vertex? How long does it take to compute the $\text{in-degree}$s?
- The time to compute the $\text{out-degree}$ of every vertex is $$\sum_{v \in V}O(\text{out-degree}(v)) = O(|E| + |V|),$$ which is straightforward. - As for the $\text{in-degree}$, we have to scan through all adjacency lists and keep counters for how many times each vertex has been pointed to. Thus, the time complexity is also $O(|E| + |V|)$ because we'll visit all nodes and edges.
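Both computations can be sketched in a few lines. A Python illustration over a dict-of-lists adjacency representation (our choice of representation):

```python
def degrees(adj):
    """Out-degree and in-degree of every vertex from an adjacency list,
    in O(V + E): one pass over all the lists."""
    out_deg = {u: len(vs) for u, vs in adj.items()}
    in_deg = {u: 0 for u in adj}
    for u, vs in adj.items():
        for v in vs:
            in_deg[v] += 1   # v is pointed to once more
    return out_deg, in_deg
```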
[]
false
[]
22-22.1-2
22
22.1
22.1-2
docs/Chap22/22.1.md
Give an adjacency-list representation for a complete binary tree on $7$ vertices. Give an equivalent adjacency-matrix representation. Assume that vertices are numbered from $1$ to $7$ as in a binary heap.
- **Adjacency-list representation** $$ \begin{aligned} 1 & \to 2 \to 3 \\\\ 2 & \to 1 \to 4 \to 5 \\\\ 3 & \to 1 \to 6 \to 7 \\\\ 4 & \to 2 \\\\ 5 & \to 2 \\\\ 6 & \to 3 \\\\ 7 & \to 3 \end{aligned} $$ - **Adjacency-matrix representation** $$ \begin{array}{c|ccccccc|} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \hline 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\\\ 2 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\\\ 3 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\\\ 4 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\ 5 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\ 6 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 7 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ \hline \end{array} $$
[]
false
[]
22-22.1-3
22
22.1
22.1-3
docs/Chap22/22.1.md
The **_transpose_** of a directed graph $G = (V, E)$ is the graph $G^\text T = (V, E^\text T)$, where $E^\text T = \\{(v, u) \in V \times V: (u, v) \in E \\}$. Thus, $G^\text T$ is $G$ with all its edges reversed. Describe efficient algorithms for computing $G^\text T$ from $G$, for both the adjacency-list and adjacency-matrix representations of $G$. Analyze the running times of your algorithms.
- **Adjacency-list representation** Assume the original adjacency list is $Adj$. ```cpp let Adj'[1..|V|] be a new adjacency list of the transposed G^T for each vertex u ∈ G.V for each vertex v ∈ Adj[u] INSERT(Adj'[v], u) ``` Time complexity: $O(|E| + |V|)$. - **Adjacency-matrix representation** Transpose the original matrix by looking along every entry above the diagonal, and swapping it with the entry that occurs below the diagonal. Time complexity: $O(|V|^2)$.
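The adjacency-list algorithm, transcribed into runnable Python (dict-of-lists representation, our choice):

```python
def transpose(adj):
    """Adjacency list of G^T in O(V + E): reverse every edge."""
    adj_t = {u: [] for u in adj}
    for u, vs in adj.items():
        for v in vs:
            adj_t[v].append(u)   # edge (u, v) becomes (v, u)
    return adj_t
```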
[ { "lang": "cpp", "code": " let Adj'[1..|V|] be a new adjacency list of the transposed G^T\n for each vertex u ∈ G.V\n for each vertex v ∈ Adj[u]\n INSERT(Adj'[v], u)" } ]
false
[]
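As a concrete companion to the adjacency-list transpose in 22.1-3 above, here is a minimal runnable C++ sketch. The 0-indexed vector-of-vectors representation and the function name are our assumptions, not part of the exercise; it mirrors the pseudocode's single pass over all edges, so the work is $O(|V| + |E|)$.

```cpp
#include <cstddef>
#include <vector>

// Transpose a directed graph given as adjacency lists (0-indexed vertices).
// Every edge (u, v) in adj becomes (v, u) in the result.
std::vector<std::vector<int>> transpose(const std::vector<std::vector<int>>& adj) {
    std::vector<std::vector<int>> adjT(adj.size());
    for (std::size_t u = 0; u < adj.size(); ++u)
        for (int v : adj[u])                          // edge (u, v) in G
            adjT[v].push_back(static_cast<int>(u));   // becomes (v, u) in G^T
    return adjT;
}
```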
22-22.1-4
22
22.1
22.1-4
docs/Chap22/22.1.md
Given an adjacency-list representation of a multigraph $G = (V, E)$, describe an $O(V + E)$-time algorithm to compute the adjacency-list representation of the "equivalent" undirected graph $G' = (V, E')$, where $E'$ consists of the edges in $E$ with all multiple edges between two vertices replaced by a single edge and with all self-loops removed.
```cpp EQUIVALENT-UNDIRECTED-GRAPH let Adj'[1..|V|] be a new adjacency list let A be a 0-initialized array of size |V| for each vertex u ∈ G.V for each v ∈ Adj[u] if v != u && A[v] != u A[v] = u INSERT(Adj'[u], v) ``` Note that $A$ does not contain any element with value $u$ before each iteration of the inner for-loop. That's why we use $A[v] = u$ to mark the existence of an edge $(u, v)$ in the inner for-loop. Since we lookup in the adjacency-list $Adj$ for $|V| + |E|$ times, the time complexity is $O(|V| + |E|)$.
[ { "lang": "cpp", "code": "EQUIVALENT-UNDIRECTED-GRAPH\n let Adj'[1..|V|] be a new adjacency list\n let A be a 0-initialized array of size |V|\n for each vertex u ∈ G.V\n for each v ∈ Adj[u]\n if v != u && A[v] != u\n A[v] = u\n INSERT(Adj'[u], v)" } ]
false
[]
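The marking trick in the 22.1-4 pseudocode above can be sketched in runnable C++ as follows (representation and names are ours): `last[v] == u` records that edge $(u, v)$ was already emitted during $u$'s scan, so duplicates and self-loops are skipped in $O(|V| + |E|)$ total.

```cpp
#include <vector>

// Collapse a multigraph's adjacency lists into the equivalent simple graph:
// drop self-loops and keep only the first copy of each multi-edge.
std::vector<std::vector<int>> simplify(const std::vector<std::vector<int>>& adj) {
    int n = static_cast<int>(adj.size());
    std::vector<std::vector<int>> out(n);
    std::vector<int> last(n, -1);   // last[v] = most recent u that kept (u, v)
    for (int u = 0; u < n; ++u)
        for (int v : adj[u])
            if (v != u && last[v] != u) {   // not a self-loop, not yet kept for this u
                last[v] = u;
                out[u].push_back(v);
            }
    return out;
}
```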
22-22.1-5
22
22.1
22.1-5
docs/Chap22/22.1.md
The **_square_** of a directed graph $G = (V, E)$ is the graph $G^2 = (V, E^2)$ such that $(u, v) \in E^2$ if and only if $G$ contains a path with at most two edges between $u$ and $v$. Describe efficient algorithms for computing $G^2$ from $G$ for both the adjacency-list and adjacency-matrix representations of $G$. Analyze the running times of your algorithms.
- **Adjacency-list representation** To compute $G^2$ from the adjacency-list representation $Adj$ of $G$, we perform the following for each $Adj[u]$: ```cpp for each v ∈ Adj[u] INSERT(Adj2[u], v) for each w ∈ Adj[v] // edge(u, w) ∈ E^2 INSERT(Adj2[u], w) ``` where $Adj2$ is the adjacency-list representation of $G^2$. Since for every edge in $Adj$ we scan at most $|V|$ vertices, we compute $Adj2$ in time $O(|V||E|)$. After we have computed $Adj2$, we have to remove duplicate edges from the lists. Removing duplicate edges is done in $O(V + E')$ where $E' = O(VE)$ is the number of edges in $Adj2$ as shown in exercise 22.1-4. Thus the total running time is $$O(VE) + O(V + VE) = O(VE).$$ However, if the original graph $G$ contains self-loops, we should modify the algorithm so that self-loops are not removed. - **Adjacency-matrix representation** Let $A$ denote the adjacency-matrix representation of $G$. The adjacency-matrix representation of $G^2$ is the square of $A$. Computing $A^2$ can be done in time $O(V^3)$ (and even faster, theoretically; Strassen's algorithm for example will compute $A^2$ in $O(V^{\lg 7})$).
[ { "lang": "cpp", "code": " for each v ∈ Adj[u]\n INSERT(Adj2[u], v)\n for each w ∈ Adj[v]\n // edge(u, w) ∈ E^2\n INSERT(Adj2[u], w)" } ]
false
[]
22-22.1-6
22
22.1
22.1-6
docs/Chap22/22.1.md
Most graph algorithms that take an adjacency-matrix representation as input require time $\Omega(V^2)$, but there are some exceptions. Show how to determine whether a directed graph $G$ contains a **_universal sink_** $-$ a vertex with $\text{in-degree}$ $|V| - 1$ and $\text{out-degree}$ $0$ $-$ in time $O(V)$, given an adjacency matrix for $G$.
Start by examining position $(1, 1)$ in the adjacency matrix. When examining position $(i, j)$, - if a $1$ is encountered, examine position $(i + 1, j)$, and - if a $0$ is encountered, examine position $(i, j + 1)$. Once either $i$ or $j$ is equal to $|V|$, terminate. ```cpp IS-CONTAIN-UNIVERSAL-SINK(M) i = j = 1 while i < |V| and j < |V| // There's an out-going edge, so examine the next row if M[i, j] == 1 i = i + 1 // There's no out-going edge, so see if we could reach the last column of current row else if M[i, j] == 0 j = j + 1 check if vertex i is a universal sink ``` If a graph contains a universal sink, then it must be at vertex $i$. To see this, suppose that vertex $k$ is a universal sink. Since $k$ is a universal sink, row $k$ will be filled with $0$'s, and column $k$ will be filled with $1$'s except for $M[k, k]$, which is filled with a $0$. Eventually, once row $k$ is hit, the algorithm will continue to increment column $j$ until $j = |V|$. To be sure that row $k$ is eventually hit, note that once column $k$ is reached, the algorithm will continue to increment $i$ until it reaches $k$. This algorithm runs in $O(V)$ and checking if vertex $i$ is a universal sink is done in $O(V)$. Therefore, the total running time is $O(V) + O(V) = O(V)$.
[ { "lang": "cpp", "code": "IS-CONTAIN-UNIVERSAL-SINK(M)\n i = j = 1\n while i < |V| and j < |V|\n // There's an out-going edge, so examine the next row\n if M[i, j] == 1\n i = i + 1\n // There's no out-going edge, so see if we could reach the last column of current row\n else if M[i, j] == 0\n j = j + 1\n check if vertex i is a universal sink" } ]
false
[]
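The staircase walk from 22.1-6 above translates directly into runnable C++ (0-indexed; the return convention of `-1` for "no sink" is our choice): a $1$ eliminates the current row as a sink candidate, a $0$ advances the column, and the surviving row index is the only candidate, which the final $O(V)$ scan verifies.

```cpp
#include <vector>

// Check for a universal sink in O(|V|) given the adjacency matrix M.
// Returns the sink's index, or -1 if none exists.
int universalSink(const std::vector<std::vector<int>>& M) {
    int n = static_cast<int>(M.size());
    int i = 0, j = 0;
    while (i < n && j < n)
        if (M[i][j] == 1) ++i;   // out-going edge: row i cannot be a sink
        else ++j;                // no edge: move along the current row
    if (i >= n) return -1;       // walked off the bottom: no candidate
    for (int k = 0; k < n; ++k) {
        if (M[i][k] == 1) return -1;             // out-going edge: not a sink
        if (k != i && M[k][i] == 0) return -1;   // missing in-coming edge
    }
    return i;
}
```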
22-22.1-7
22
22.1
22.1-7
docs/Chap22/22.1.md
The **_incidence matrix_** of a directed graph $G = (V, E)$ with no self-loops is a $|V| \times |E|$ matrix $B = (b_{ij})$ such that $$ b_{ij} = \begin{cases} -1 & \text{if edge $j$ leaves vertex $i$}, \\\\ 1 & \text{if edge $j$ enters vertex $i$}, \\\\ 0 & \text{otherwise}. \end{cases} $$ Describe what the entries of the matrix product $BB^\text T$ represent, where $B^\text T$ is the transpose of $B$.
$$BB^\text T(i, j) = \sum\limits_{e \in E}b_{ie} b_{ej}^\text T = \sum\limits_{e \in E} b_{ie}b_{je}.$$ - If $i = j$, then $b_{ie} b_{je} = 1$ (it is $1 \cdot 1$ or $(-1) \cdot (-1)$) whenever $e$ enters or leaves vertex $i$, and $0$ otherwise. - If $i \ne j$, then $b_{ie} b_{je} = -1$ when $e = (i, j)$ or $e = (j, i)$, and $0$ otherwise. Thus, $$ BB^\text T(i, j) = \begin{cases} \text{degree of $i$ = in-degree + out-degree} & \text{if $i = j$}, \\\\ \text{$-$(\\# of edges connecting $i$ and $j$)} & \text{if $i \ne j$}. \end{cases} $$
[]
false
[]
22-22.1-8
22
22.1
22.1-8
docs/Chap22/22.1.md
Suppose that instead of a linked list, each array entry $Adj[u]$ is a hash table containing the vertices $v$ for which $(u, v) \in E$. If all edge lookups are equally likely, what is the expected time to determine whether an edge is in the graph? What disadvantages does this scheme have? Suggest an alternate data structure for each edge list that solves these problems. Does your alternative have disadvantages compared to the hash table?
The expected lookup time is $O(1)$, but in the worst case it could take $O(|V|)$. If we first sorted vertices in each adjacency list then we could perform a binary search so that the worst case lookup time is $O(\lg |V|)$, but this has the disadvantage of having a much worse expected lookup time.
[]
false
[]
22-22.2-1
22
22.2
22.2-1
docs/Chap22/22.2.md
Show the $d$ and $\pi$ values that result from running breadth-first search on the directed graph of Figure 22.2(a), using vertex $3$ as the source.
$$ \begin{array}{c|cccccc} \text{vertex} & 1 & 2 & 3 & 4 & 5 & 6 \\\\ \hline d & \infty & 3 & 0 & 2 & 1 & 1 \\\\ \pi & \text{NIL} & 4 & \text{NIL} & 5 & 3 & 3 \end{array} $$
[]
false
[]
22-22.2-2
22
22.2
22.2-2
docs/Chap22/22.2.md
Show the $d$ and $\pi$ values that result from running breadth-first search on the undirected graph of Figure 22.3, using vertex $u$ as the source.
$$ \begin{array}{c|cccccccc} \text{vertex} & r & s & t & u & v & w & x & y \\\\ \hline d & 4 & 3 & 1 & 0 & 5 & 2 & 1 & 1 \\\\ \pi & s & w & u & \text{NIL} & r & t & u & u \end{array} $$
[]
false
[]
22-22.2-3
22
22.2
22.2-3
docs/Chap22/22.2.md
Show that using a single bit to store each vertex color suffices by arguing that the $\text{BFS}$ procedure would produce the same result if lines 5 and 14 were removed.
The textbook introduces the $\text{GRAY}$ color for the pedagogical purpose to distinguish between the $\text{GRAY}$ nodes (which are enqueued) and the $\text{BLACK}$ nodes (which are dequeued). Therefore, it suffices to use a single bit to store each vertex color.
[]
false
[]
22-22.2-4
22
22.2
22.2-4
docs/Chap22/22.2.md
What is the running time of $\text{BFS}$ if we represent its input graph by an adjacency matrix and modify the algorithm to handle this form of input?
The time of iterating all edges becomes $O(V^2)$ from $O(E)$. Therefore, the running time is $O(V + V^2) = O(V^2)$.
[]
false
[]
22-22.2-5
22
22.2
22.2-5
docs/Chap22/22.2.md
Argue that in a breadth-first search, the value $u.d$ assigned to a vertex $u$ is independent of the order in which the vertices appear in each adjacency list. Using Figure 22.3 as an example, show that the breadth-first tree computed by $\text{BFS}$ can depend on the ordering within adjacency lists.
First, we will show that the value $d$ assigned to a vertex is independent of the order in which entries appear in adjacency lists. To show this, we rely on Theorem 22.5, which proves correctness of $\text{BFS}$. In particular, the theorem states that $v.d = \delta(s, v)$ at the termination of $\text{BFS}$. Since $\delta(s, v)$ is a property of the underlying graph, for any adjacency list representation of the graph (including any reordering of the adjacency lists), $\delta(s, v)$ will not change. Since the $d$ values are equal to $\delta(s, v)$ and $\delta(s, v)$ is invariant for any ordering of the adjacency list, $d$ is also not dependent on the ordering of the adjacency list. Now, to show that $\pi$ does depend on the ordering of the adjacency lists, we will be using Figure 22.3 as a guide. First, we note that in the given worked out procedure, we have that in the adjacency list for $w$, $t$ precedes $x$. Also, in the worked out procedure, we have that $u.\pi = t$. Now, suppose instead that we had $x$ preceding $t$ in the adjacency list of $w$. Then, it would get added to the queue before $t$, which means that it would acquire $u$ as its child before we have a chance to process the children of $t$. This will mean that $u.\pi = x$ in this different ordering of the adjacency list for $w$.
[]
false
[]
22-22.2-6
22
22.2
22.2-6
docs/Chap22/22.2.md
Give an example of a directed graph $G = (V, E)$, a source vertex $s \in V$, and a set of tree edges $E_\pi \subseteq E$ such that for each vertex $v \in V$, the unique simple path in the graph $(V, E_\pi)$ from $s$ to $v$ is a shortest path in $G$, yet the set of edges $E_\pi$ cannot be produced by running $\text{BFS}$ on $G$, no matter how the vertices are ordered in each adjacency list.
Let $G$ be the graph shown in the first picture, $G_\pi = (V, E_\pi)$ be the graph shown in the second picture, and $s$ be the source vertex. We can see that $E_\pi$ will never be produced by running $\text{BFS}$ on $G$. <center> ![](../img/22.2-6-2.png) ![](../img/22.2-6-1.png) </center> - If $y$ precedes $v$ in $Adj[s]$, we'll dequeue $y$ before $v$, so $u.\pi$ and $x.\pi$ are both $y$. However, this is not the case. - If $v$ precedes $y$ in $Adj[s]$, we'll dequeue $v$ before $y$, so $u.\pi$ and $x.\pi$ are both $v$, which again isn't true. Nonetheless, the unique simple path in $G_\pi$ from $s$ to any vertex is a shortest path in $G$.
[]
true
[ "../img/22.2-6-2.png", "../img/22.2-6-1.png" ]
22-22.2-7
22
22.2
22.2-7
docs/Chap22/22.2.md
There are two types of professional wrestlers: "babyfaces" ("good guys") and "heels" ("bad guys"). Between any pair of professional wrestlers, there may or may not be a rivalry. Suppose we have $n$ professional wrestlers and we have a list of $r$ pairs of wrestlers for which there are rivalries. Give an $O(n + r)$-time algorithm that determines whether it is possible to designate some of the wrestlers as babyfaces and the remainder as heels such that each rivalry is between a babyface and a heel. If it is possible to perform such a designation, your algorithm should produce it.
This problem is basically just an obfuscated version of two-coloring. We will try to color the vertices of this graph of rivalries with two colors, "babyface" and "heel". Requiring that no two babyfaces and no two heels have a rivalry is the same as saying that the coloring is proper. To two-color, we perform a breadth-first search of each connected component to get the $d$ values for each vertex. Then, we give all vertices with odd $d$ values one color, say "heel", and all vertices with even $d$ values the other color. We know that no other coloring will succeed where this one fails, since in any other coloring some vertex $v$ would have the same color as $v.\pi$, because $v$ and $v.\pi$ must have different parities for their $d$ values. Since we know that there is no better coloring, we just need to check each edge to see if this coloring is valid. If every edge works, it is possible to find a designation; if a single edge fails, then it is not possible. Since the BFS takes time $O(n + r)$ and the checking takes time $O(r)$, the total runtime is $O(n + r)$.
[]
false
[]
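The two-coloring idea in 22.2-7 above can be sketched as runnable C++ (vertex numbering, the 0/1 color encoding, and the function name are our assumptions): BFS assigns alternating colors level by level in every connected component, and any rivalry joining equal colors proves no designation exists. Total work is $O(n + r)$.

```cpp
#include <queue>
#include <utility>
#include <vector>

// Two-color the rivalry graph by BFS. Returns the coloring
// (0 = babyface, 1 = heel), or an empty vector if no valid designation exists.
std::vector<int> designate(int n, const std::vector<std::pair<int, int>>& rivalries) {
    std::vector<std::vector<int>> adj(n);
    for (auto [u, v] : rivalries) {
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    std::vector<int> color(n, -1);
    for (int s = 0; s < n; ++s) {        // handle every connected component
        if (color[s] != -1) continue;
        color[s] = 0;
        std::queue<int> q;
        q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (color[v] == -1) {
                    color[v] = 1 - color[u];   // opposite parity, opposite color
                    q.push(v);
                } else if (color[v] == color[u]) {
                    return {};                 // rivalry inside one color class
                }
            }
        }
    }
    return color;
}
```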
22-22.2-8
22
22.2
22.2-8 $\star$
docs/Chap22/22.2.md
The **_diameter_** of a tree $T = (V, E)$ is defined as $\max_{u,v \in V} \delta(u, v)$, that is, the largest of all shortest-path distances in the tree. Give an efficient algorithm to compute the diameter of a tree, and analyze the running time of your algorithm.
Suppose that $a$ and $b$ are the endpoints of the path in the tree which achieves the diameter, and without loss of generality assume that $a$ and $b$ are the unique pair which do so. Let $s$ be any vertex in $T$. We claim that the result of a single $\text{BFS}$ will return either $a$ or $b$ (or both) as the vertex whose distance from $s$ is greatest. To see this, suppose to the contrary that some other vertex $x$ is shown to be furthest from $s$. (Note that $x$ cannot be on the path from $a$ to $b$, otherwise we could extend). Then we have $$d(s, a) < d(s, x)$$ and $$d(s, b) < d(s, x).$$ Let $c$ denote the vertex on the path from $a$ to $b$ which minimizes $d(s, c)$. Since the graph is in fact a tree, we must have $$d(s, a) = d(s, c) + d(c, a)$$ and $$d(s, b) = d(s, c) + d(c, b).$$ (If there were another path, we could form a cycle). Using the triangle inequality together with the inequalities and equalities mentioned above, we must have $$ \begin{aligned} d(a, b) + 2d(s, c) & = d(s, c) + d(c, b) + d(s, c) + d(c, a) \\\\ & < d(s, x) + d(s, c) + d(c, b). \end{aligned} $$ We claim that $d(x, b) = d(s, x) + d(s, b)$. If not, then by the triangle inequality we must have a strict less-than. In other words, there is some path from $x$ to $b$ which does not go through $c$. This gives a contradiction, because it implies there is a cycle formed by concatenating these paths. Then we have $$d(a, b) < d(a, b) + 2d(s, c) < d(x, b).$$ Since it is assumed that $d(a, b)$ is maximal among all pairs, we have a contradiction. Therefore, since trees have $|V| - 1$ edges, we can run $\text{BFS}$ a single time in $O(V)$ to obtain one of the vertices which is the endpoint of the longest simple path contained in the graph. Running $\text{BFS}$ again will show us where the other one is, so we can solve the diameter problem for trees in $O(V)$.
[]
false
[]
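The double-BFS argument of 22.2-8 above can be sketched in runnable C++ (tree given as 0-indexed adjacency lists; names are ours): a farthest vertex from any start is an endpoint of a diameter, so one BFS finds an endpoint and a second BFS from it returns the diameter, all in $O(V)$ since a tree has $|V| - 1$ edges.

```cpp
#include <queue>
#include <utility>
#include <vector>

// BFS from src; return (farthest vertex, its distance from src).
static std::pair<int, int> farthest(const std::vector<std::vector<int>>& adj, int src) {
    std::vector<int> d(adj.size(), -1);
    d[src] = 0;
    std::queue<int> q;
    q.push(src);
    int best = src;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (d[u] > d[best]) best = u;
        for (int v : adj[u])
            if (d[v] == -1) { d[v] = d[u] + 1; q.push(v); }
    }
    return {best, d[best]};
}

int treeDiameter(const std::vector<std::vector<int>>& adj) {
    int a = farthest(adj, 0).first;    // first pass: one diameter endpoint
    return farthest(adj, a).second;    // second pass: its eccentricity
}
```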
22-22.2-9
22
22.2
22.2-9
docs/Chap22/22.2.md
Let $G = (V, E)$ be a connected, undirected graph. Give an $O(V + E)$-time algorithm to compute a path in $G$ that traverses each edge in $E$ exactly once in each direction. Describe how you can find your way out of a maze if you are given a large supply of pennies.
First, the algorithm computes a minimum spanning tree of the graph. Note that this can be done using the procedures of Chapter 23. It can also be done by performing a breadth-first search, and restricting to the edges between $v$ and $v.\pi$ for every $v$. To aid in not double-counting edges, fix any ordering $\le$ on the vertices beforehand. Then, we will construct the sequence of steps by calling $\text{MAKE-PATH}(s)$, where $s$ was the root used for the $\text{BFS}$. ```cpp MAKE-PATH(u) for each v ∈ Adj[u] but not in the tree such that u ≤ v go to v and back to u for each v ∈ Adj[u] but not equal to u.π go to v perform the path prescribed by MAKE-PATH(v) go to u.π ```
[ { "lang": "cpp", "code": "MAKE-PATH(u)\n for each v ∈ Adj[u] but not in the tree such that u ≤ v\n go to v and back to u\n for each v ∈ Adj[u] but not equal to u.π\n go to v\n perform the path prescribed by MAKE-PATH(v)\n go to u.π" } ]
false
[]
22-22.3-1
22
22.3
22.3-1
docs/Chap22/22.3.md
Make a $3$-by-$3$ chart with row and column labels $\text{WHITE}$, $\text{GRAY}$, and $\text{BLACK}$. In each cell $(i, j)$, indicate whether, at any point during a depth-first search of a directed graph, there can be an edge from a vertex of color $i$ to a vertex of color $j$. For each possible edge, indicate what edge types it can be. Make a second such chart for depth-first search of an undirected graph.
According to Theorem 22.7 (Parenthesis theorem), there are 3 cases of relationship between intervals of vertex $u$ and $v$: - $[u.d, u.f]$ and $[v.d, v.f]$ are entirely disjointed, - $[u.d, u.f] \subset [v.d, v.f]$, and - $[v.d, v.f] \subset [u.d, u.f]$. We judge the possibility according to this Theorem. - For **directed graph**, we can use the edge classification given by exercise 22.3-5 to simplify the problem. $$ \begin{array}{c|ccc} from \diagdown to & \text{WHITE} & \text{GRAY} & \text{BLACK} \\\\ \hline \text{WHITE} & \text{All kinds} & \text{Cross, Back} & \text{Cross} \\\\ \text{GRAY} & \text{Tree, Forward} & \text{Tree, Forward, Back} & \text{Tree, Forward, Cross} \\\\ \text{BLACK} & - & \text{Back} & \text{All kinds} \end{array} $$ - For **undirected graph**, starting from directed chart, we remove the forward edge and the cross edge, and - when a back edge exist, we add a tree edge; - when a tree edge exist, we add a back edge. This is correct for the following reasons: 1. Theorem 22.10: In a depth-first search of an undirected graph $G$, every edge of $G$ is either a tree or back edge. So tree and back edge only. 2. If $(u, v)$ is a tree edge from $u$'s perspective, $(u, v)$ is also a back edge from $v$'s perspective. $$ \begin{array}{c|ccc} from \diagdown to & \text{WHITE} & \text{GRAY} & \text{BLACK} \\\\ \hline \text{WHITE} & - & \text{Tree, Back} & \text{Tree, Back} \\\\ \text{GRAY} & \text{Tree, Back} & \text{Tree, Back} & \text{Tree, Back} \\\\ \text{BLACK} & \text{Tree, Back} & \text{Tree, Back} & - \end{array} $$
[]
false
[]
22-22.3-2
22
22.3
22.3-2
docs/Chap22/22.3.md
Show how depth-first search works on the graph of Figure 22.6. Assume that the **for** loop of lines 5–7 of the $\text{DFS}$ procedure considers the vertices in alphabetical order, and assume that each adjacency list is ordered alphabetically. Show the discovery and finishing times for each vertex, and show the classification of each edge.
The following table gives the discovery time and finish time for each vertex in the graph. See the [C++ demo](https://github.com/walkccc/CLRS-cpp/blob/master/Chap22/22.3.cpp). $$ \begin{array}{ccc} \text{Vertex} & \text{Discovered} & \text{Finished} \\\\ \hline q & 1 & 16 \\\\ r & 17 & 20 \\\\ s & 2 & 7 \\\\ t & 8 & 15 \\\\ u & 18 & 19 \\\\ v & 3 & 6 \\\\ w & 4 & 5 \\\\ x & 9 & 12 \\\\ y & 13 & 14 \\\\ z & 10 & 11 \end{array} $$ - **Tree edges:** $(q, s)$, $(s, v)$, $(v, w)$, $(q, t)$, $(t, x)$, $(x, z)$, $(t, y)$, $(r, u)$. - **Back edges:** $(w, s)$, $(z, x)$, $(y, q)$. - **Forward edges:** $(q, w)$. - **Cross edges:** $(r, y)$, $(u, y)$.
[]
false
[]
22-22.3-3
22
22.3
22.3-3
docs/Chap22/22.3.md
Show the parenthesis structure of the depth-first search of Figure 22.4.
The parentheses structure of the depth-first search of Figure 22.4 is $(u(v(y(xx)y)v)u)(w(zz)w)$.
[]
false
[]
22-22.3-4
22
22.3
22.3-4
docs/Chap22/22.3.md
Show that using a single bit to store each vertex color suffices by arguing that the $\text{DFS}$ procedure would produce the same result if line 3 of $\text{DFS-VISIT}$ was removed.
Change line 3 to `color = BLACK` and remove line 8. Then, the algorithm would produce the same result.
[]
false
[]
22-22.3-5
22
22.3
22.3-5
docs/Chap22/22.3.md
Show that edge $(u, v)$ is **a.** a tree edge or forward edge if and only if $u.d < v.d < v.f < u.f$, **b.** a back edge if and only if $v.d \le u.d < u.f \le v.f$, and **c.** a cross edge if and only if $v.d < v.f < u.d < u.f$.
**a.** $u$ is an ancestor of $v$. **b.** $u$ is a descendant of $v$. **c.** $v$ is visited before $u$.
[]
false
[]
22-22.3-6
22
22.3
22.3-6
docs/Chap22/22.3.md
Show that in an undirected graph, classifying an edge $(u, v)$ as a tree edge or a back edge according to whether $(u, v)$ or $(v, u)$ is encountered first during the depth-first search is equivalent to classifying it according to the ordering of the four types in the classification scheme.
By Theorem 22.10, every edge of an undirected graph is either a tree edge or a back edge. First suppose that $v$ is first discovered by exploring edge $(u, v)$. Then by definition, $(u, v)$ is a tree edge. Moreover, $(u, v)$ must have been discovered before $(v, u)$ because once $(v, u)$ is explored, $v$ is necessarily discovered. Now suppose that $v$ isn't first discovered by $(u, v)$. Then it must be discovered by $(r, v)$ for some $r\ne u$. If $u$ hasn't yet been discovered then if $(u, v)$ is explored first, it must be a back edge since $v$ is an ancestor of $u$. If $u$ has been discovered then $u$ is an ancestor of $v$, so $(v, u)$ is a back edge.
[]
false
[]
22-22.3-7
22
22.3
22.3-7
docs/Chap22/22.3.md
Rewrite the procedure $\text{DFS}$, using a stack to eliminate recursion.
See the [C++ demo](https://github.com/walkccc/CLRS-cpp/blob/master/Chap22/22.3-7/22.3-7.cpp). Also, see this [issue](https://github.com/walkccc/CLRS/issues/329) for [@i-to](https://github.com/i-to)'s discussion. ```cpp DFS-STACK(G) for each vertex u ∈ G.V u.color = WHITE u.π = NIL time = 0 for each vertex u ∈ G.V if u.color == WHITE DFS-VISIT-STACK(G, u) ``` ```cpp DFS-VISIT-STACK(G, u) S = Ø PUSH(S, u) time = time + 1 // white vertex u has just been discovered u.d = time u.color = GRAY while !STACK-EMPTY(S) u = TOP(S) v = FIRST-WHITE-NEIGHBOR(G, u) if v == NIL // u's adjacency list has been fully explored POP(S) time = time + 1 u.f = time u.color = BLACK // blackened u; it is finished else // u's adjacency list hasn't been fully explored v.π = u time = time + 1 v.d = time v.color = GRAY PUSH(S, v) ``` ```cpp FIRST-WHITE-NEIGHBOR(G, u) for each vertex v ∈ G.Adj[u] if v.color == WHITE return v return NIL ```
[ { "lang": "cpp", "code": "DFS-STACK(G)\n for each vertex u ∈ G.V\n u.color = WHITE\n u.π = NIL\n time = 0\n for each vertex u ∈ G.V\n if u.color == WHITE\n DFS-VISIT-STACK(G, u)" }, { "lang": "cpp", "code": "DFS-VISIT-STACK(G, u)\n S = Ø\n PUSH(S, u)\n time = time + 1 // white vertex u has just been discovered\n u.d = time\n u.color = GRAY\n while !STACK-EMPTY(S)\n u = TOP(S)\n v = FIRST-WHITE-NEIGHBOR(G, u)\n if v == NIL\n // u's adjacency list has been fully explored\n POP(S)\n time = time + 1\n u.f = time\n u.color = BLACK // blackened u; it is finished\n else\n // u's adjacency list hasn't been fully explored\n v.π = u\n time = time + 1\n v.d = time\n v.color = GRAY\n PUSH(S, v)" }, { "lang": "cpp", "code": "FIRST-WHITE-NEIGHBOR(G, u)\n for each vertex v ∈ G.Adj[u]\n if v.color == WHITE\n return v\n return NIL" } ]
false
[]
22-22.3-8
22
22.3
22.3-8
docs/Chap22/22.3.md
Give a counterexample to the conjecture that if a directed graph $G$ contains a path from $u$ to $v$, and if $u.d < v.d$ in a depth-first search of $G$, then $v$ is a descendant of $u$ in the depth-first forest produced.
Consider a graph with $3$ vertices $u$, $v$, and $w$, and with edges $(w, u)$, $(u, w)$, and $(w, v)$. Suppose that $\text{DFS}$ first explores $w$, and that $w$'s adjacency list has $u$ before $v$. We next discover $u$. The only adjacent vertex is $w$, but $w$ is already grey, so $u$ finishes. Since $v$ is not yet a descendant of $u$ and $u$ is finished, $v$ can never be a descendant of $u$.
[]
false
[]
22-22.3-9
22
22.3
22.3-9
docs/Chap22/22.3.md
Give a counterexample to the conjecture that if a directed graph $G$ contains a path from $u$ to $v$, then any depth-first search must result in $v.d \le u.f$.
Consider the directed graph on the vertices $\\{1, 2, 3\\}$, and having the edges $(1, 2)$, $(1, 3)$, $(2, 1)$ then there is a path from $2$ to $3$. However, if we start a $\text{DFS}$ at $1$ and process $2$ before $3$, we will have $2.f = 3 < 4 = 3.d$ which provides a counterexample to the given conjecture.
[]
false
[]
22-22.3-10
22
22.3
22.3-10
docs/Chap22/22.3.md
Modify the pseudocode for depth-first search so that it prints out every edge in the directed graph $G$, together with its type. Show what modifications, if any, you need to make if $G$ is undirected.
If $G$ is undirected we don't need to make any modifications. See the [C++ demo](https://github.com/walkccc/CLRS-cpp/blob/master/Chap22/22.3-10/22.3-10.cpp). ```cpp DFS-VISIT-PRINT(G, u) time = time + 1 u.d = time u.color = GRAY for each vertex v ∈ G.Adj[u] if v.color == WHITE print "(u, v) is a tree edge." v.π = u DFS-VISIT-PRINT(G, v) else if v.color == GRAY print "(u, v) is a back edge." else if v.d > u.d print "(u, v) is a forward edge." else print "(u, v) is a cross edge." u.color = BLACK time = time + 1 u.f = time ```
[ { "lang": "cpp", "code": "DFS-VISIT-PRINT(G, u)\n time = time + 1\n u.d = time\n u.color = GRAY\n for each vertex v ∈ G.Adj[u]\n if v.color == WHITE\n print \"(u, v) is a tree edge.\"\n v.π = u\n DFS-VISIT-PRINT(G, v)\n else if v.color == GRAY\n print \"(u, v) is a back edge.\"\n else if v.d > u.d\n print \"(u, v) is a forward edge.\"\n else\n print \"(u, v) is a cross edge.\"\n u.color = BLACK\n time = time + 1\n u.f = time" } ]
false
[]
22-22.3-11
22
22.3
22.3-11
docs/Chap22/22.3.md
Explain how a vertex $u$ of a directed graph can end up in a depth-first tree containing only $u$, even though $u$ has both incoming and outgoing edges in $G$.
Suppose that we have a directed graph on the vertices $\\{1, 2, 3\\}$ with edges $(1, 2)$ and $(2, 3)$. Then, $2$ has both incoming and outgoing edges. If we pick our first root to be $3$, that will be in its own $\text{DFS}$ tree. Then, we pick our second root to be $2$; since the only thing it points to has already been marked $\text{BLACK}$, we won't be exploring it. Then, picking the last root to be $1$, we don't disturb the fact that $2$ is alone in a $\text{DFS}$ tree even though it has both an incoming and outgoing edge in $G$.
[]
false
[]
22-22.3-12
22
22.3
22.3-12
docs/Chap22/22.3.md
Show that we can use a depth-first search of an undirected graph $G$ to identify the connected components of $G$, and that the depth-first forest contains as many trees as $G$ has connected components. More precisely, show how to modify depth-first search so that it assigns to each vertex $v$ an integer label $v.cc$ between $1$ and $k$, where $k$ is the number of connected components of $G$, such that $u.cc = v.cc$ if and only if $u$ and $v$ are in the same connected component.
The modifications work as follows: each time the **if**-condition of line 8 is satisfied in $\text{DFS-CC}$, we have a new root of a tree in the forest, so we update its $cc$ label to be a new value of $k$. In the recursive calls to $\text{DFS-VISIT-CC}$, we always update a descendant's connected component to agree with its ancestor's. See the [C++ demo](https://github.com/walkccc/CLRS-cpp/blob/master/Chap22/22.3-12/22.3-12.cpp). ```cpp DFS-CC(G) for each vertex u ∈ G.V u.color = WHITE u.π = NIL time = 0 cc = 1 for each vertex u ∈ G.V if u.color == WHITE u.cc = cc cc = cc + 1 DFS-VISIT-CC(G, u) ``` ```cpp DFS-VISIT-CC(G, u) time = time + 1 u.d = time u.color = GRAY for each vertex v ∈ G.Adj[u] if v.color == WHITE v.cc = u.cc v.π = u DFS-VISIT-CC(G, v) u.color = BLACK time = time + 1 u.f = time ```
[ { "lang": "cpp", "code": "DFS-CC(G)\n for each vertex u ∈ G.V\n u.color = WHITE\n u.π = NIL\n time = 0\n cc = 1\n for each vertex u ∈ G.V\n if u.color == WHITE\n u.cc = cc\n cc = cc + 1\n DFS-VISIT-CC(G, u)" }, { "lang": "cpp", "code": "DFS-VISIT-CC(G, u)\n time = time + 1\n u.d = time\n u.color = GRAY\n for each vertex v ∈ G.Adj[u]\n if v.color == WHITE\n v.cc = u.cc\n v.π = u\n DFS-VISIT-CC(G, v)\n u.color = BLACK\n time = time + 1\n u.f = time" } ]
false
[]
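The labeling scheme of 22.3-12 above can be sketched as runnable C++ (we use an explicit stack instead of recursion, and a label of `0` for "unvisited"; both are our choices): each root found in the outer loop starts a new component number, and every vertex reached from it inherits that number.

```cpp
#include <vector>

// Label the connected components of an undirected graph with 1..k.
std::vector<int> connectedComponents(const std::vector<std::vector<int>>& adj) {
    std::vector<int> cc(adj.size(), 0);   // 0 means "not yet visited"
    int k = 0;
    for (int s = 0; s < static_cast<int>(adj.size()); ++s) {
        if (cc[s] != 0) continue;
        cc[s] = ++k;                      // new DFS tree, new component label
        std::vector<int> stack{s};
        while (!stack.empty()) {
            int u = stack.back(); stack.pop_back();
            for (int v : adj[u])
                if (cc[v] == 0) { cc[v] = k; stack.push_back(v); }
        }
    }
    return cc;
}
```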
22-22.3-13
22
22.3
22.3-13 $\star$
docs/Chap22/22.3.md
A directed graph $G = (V, E)$ is **_singly connected_** if $u \leadsto v$ implies that $G$ contains at most one simple path from $u$ to $v$ for all vertices $u, v \in V$. Give an efficient algorithm to determine whether or not a directed graph is singly connected.
This can be done in time $O(|V||E|)$. To do this, first perform a topological sort of the vertices. Then, we will maintain for each vertex a list of its ancestors with $\text{in-degree}$ $0$. We compute these lists for each vertex in topological order, starting from the earliest. Then, if we ever have a vertex with the same $\text{in-degree}$-$0$ vertex appearing in the lists of two of its immediate parents, we know that the graph is not singly connected. However, if at each step all of the parents have disjoint sets of $\text{in-degree}$-$0$ vertices as ancestors, the graph is singly connected. Since, for each vertex, the amount of time required is bounded by the number of vertices times the $\text{in-degree}$ of the particular vertex, the total runtime is bounded by $O(|V||E|)$.
[]
false
[]
22-22.4-1
22
22.4
22.4-1
docs/Chap22/22.4.md
Show the ordering of vertices produced by $\text{TOPOLOGICAL-SORT}$ when it is run on the dag of Figure 22.8, under the assumption of Exercise 22.3-2.
Our start and finish times from performing the $\text{DFS}$ are $$ \begin{array}{ccc} \text{label} & d & f \\\\ \hline m & 1 & 20 \\\\ q & 2 & 5 \\\\ t & 3 & 4 \\\\ r & 6 & 19 \\\\ u & 7 & 8 \\\\ y & 9 & 18 \\\\ v & 10 & 17 \\\\ w & 11 & 14 \\\\ z & 12 & 13 \\\\ x & 15 & 16 \\\\ n & 21 & 26 \\\\ o & 22 & 25 \\\\ s & 23 & 24 \\\\ p & 27 & 28 \end{array} $$ And so, by reading off the entries in decreasing order of finish time, we have the sequence $p, n, o, s, m, r, y, v, x, w, z, u, q, t$.
[]
false
[]
22-22.4-2
22
22.4
22.4-2
docs/Chap22/22.4.md
Give a linear-time algorithm that takes as input a directed acyclic graph $G = (V, E)$ and two vertices $s$ and $t$, and returns the number of simple paths from $s$ to $t$ in $G$. For example, the directed acyclic graph of Figure 22.8 contains exactly four simple paths from vertex $p$ to vertex $v: pov$, $poryv$, $posryv$, and $psryv$. (Your algorithm needs only to count the simple paths, not list them.)
The algorithm works as follows. The attribute $u.paths$ of node $u$ gives the number of simple paths from $u$ to $v$, where we assume that $v$ is fixed throughout the entire process. First, conduct a topological sort and list the vertices between $u$ and $v$ as $\\{v[1], v[2], \dots, v[k - 1]\\}$. To count the number of paths, we construct the solution from $v$ back to $u$. Calling $u$ as $v[0]$ and $v$ as $v[k]$, to avoid recomputing overlapping subproblems, the number of paths from each $v[m]$ to $v$ is memoized and reused as the index decreases to $0$. Only in this way can we solve the problem in $\Theta(V + E)$. A bottom-up iterative version is possible only if the graph uses an adjacency matrix, so that whether $v$ is adjacent to $u$ can be determined in $O(1)$ time; but building an adjacency matrix would cost $\Theta(|V|^2)$, so we do not pursue it. ```cpp SIMPLE-PATHS(G, u, v) TOPO-SORT(G) let {v[1], v[2]..v[k - 1]} be the vertex between u and v v[0] = u v[k] = v for j = 0 to k - 1 DP[j] = ∞ DP[k] = 1 return SIMPLE-PATHS-AID(G, DP, 0) ``` ```cpp SIMPLE-PATHS-AID(G, DP, i) if i > k return 0 else if DP[i] != ∞ return DP[i] else DP[i] = 0 for v[m] in G.adj[v[i]] and 0 < m ≤ k DP[i] += SIMPLE-PATHS-AID(G, DP, m) return DP[i] ```
[ { "lang": "cpp", "code": "SIMPLE-PATHS(G, u, v)\n TOPO-SORT(G)\n let {v[1], v[2]..v[k - 1]} be the vertex between u and v\n v[0] = u\n v[k] = v\n for j = 0 to k - 1\n DP[j] = ∞\n DP[k] = 1\n return SIMPLE-PATHS-AID(G, DP, 0)" }, { "lang": "cpp", "code": "SIMPLE-PATHS-AID(G, DP, i)\n if i > k\n return 0\n else if DP[i] != ∞\n return DP[i]\n else\n DP[i] = 0\n for v[m] in G.adj[v[i]] and 0 < m ≤ k\n DP[i] += SIMPLE-PATHS-AID(G, DP, m)\n return DP[i]" } ]
false
[]
22-22.4-3
22
22.4
22.4-3
docs/Chap22/22.4.md
Give an algorithm that determines whether or not a given undirected graph $G = (V, E)$ contains a cycle. Your algorithm should run in $O(V)$ time, independent of $|E|$.
(Removed)
[]
false
[]
22-22.4-4
22
22.4
22.4-4
docs/Chap22/22.4.md
Prove or disprove: If a directed graph $G$ contains cycles, then $\text{TOPOLOGICAL-SORT}(G)$ produces a vertex ordering that minimizes the number of "bad" edges that are inconsistent with the ordering produced.
This is not true. Consider the graph $G$ consisting of vertices $a, b, c$, and $d$. Let the edges be $(a, b)$, $(b, c)$, $(a, d)$, $(d, c)$, and $(c, a)$. Suppose that we start the $\text{DFS}$ of $\text{TOPOLOGICAL-SORT}$ at vertex $c$. Assuming that $b$ appears before $d$ in the adjacency list of $a$, the order, from latest to earliest, of finish times is $c, a, d, b$. The "bad" edges in this case are $(b, c)$ and $(d, c)$. However, if we had instead ordered them by $a, b, d, c$, then the only bad edge would be $(c, a)$. Thus $\text{TOPOLOGICAL-SORT}$ doesn't always minimize the number of "bad" edges.
[]
false
[]
22-22.4-5
22
22.4
22.4-5
docs/Chap22/22.4.md
Another way to perform topological sorting on a directed acyclic graph $G = (V, E)$ is to repeatedly find a vertex of $\text{in-degree}$ $0$, output it, and remove it and all of its outgoing edges from the graph. Explain how to implement this idea so that it runs in time $O(V + E)$. What happens to this algorithm if $G$ has cycles?
(Removed)
[]
false
[]
22-22.5-1
22
22.5
22.5-1
docs/Chap22/22.5.md
How can the number of strongly connected components of a graph change if a new edge is added?
It can either stay the same or decrease. To see that it is possible to stay the same, just suppose you add some edge to a cycle. To see that it is possible to decrease, suppose that your original graph is on three vertices and is just a path passing through all of them, and the edge added completes this path to a cycle. To see that it cannot increase, notice that adding an edge cannot remove any path that existed before. So, if $u$ and $v$ are in the same strongly connected component in the original graph, then there is a path from one to the other, in both directions. Adding an edge won't disturb these two paths, so we know that $u$ and $v$ will still be in the same $\text{SCC}$ in the graph after adding the edge. Since no components can be split apart, the number of them cannot increase, since they form a partition of the set of vertices.
[]
false
[]
22-22.5-2
22
22.5
22.5-2
docs/Chap22/22.5.md
Show how the procedure $\text{STRONGLY-CONNECTED-COMPONENTS}$ works on the graph of Figure 22.6. Specifically, show the finishing times computed in line 1 and the forest produced in line 3. Assume that the loop of lines 5–7 of $\text{DFS}$ considers vertices in alphabetical order and that the adjacency lists are in alphabetical order.
The finishing times of each vertex were computed in exercise 22.3-2. The forest consists of 5 trees, each of which is a chain. We'll list the vertices of each tree in order from root to leaf: $r$, $u$, $q - y - t$, $x - z$, and $s - w - v$.
[]
false
[]
22-22.5-3
22
22.5
22.5-3
docs/Chap22/22.5.md
Professor Bacon claims that the algorithm for strongly connected components would be simpler if it used the original (instead of the transpose) graph in the second depth-first search and scanned the vertices in order of _increasing_ finishing times. Does this simpler algorithm always produce correct results?
Professor Bacon's suggestion doesn't work out. As an example, suppose that our graph is on the three vertices $\\{1, 2, 3\\}$ and consists of the edges $(2, 1), (2, 3), (3, 2)$. Then, we should end up with $\\{2, 3\\}$ and $\\{1\\}$ as our $\text{SCC}$'s. However, a possible $\text{DFS}$ starting at $2$ could explore $3$ before $1$; this would mean that the finish time of $3$ is lower than that of $1$ and $2$, so the second depth-first search would start at $3$, the vertex with the lowest finish time. But a $\text{DFS}$ starting at $3$ can reach all other vertices. This means that the algorithm would return that the entire graph is a single $\text{SCC}$, even though this is clearly not the case, since there is neither a path from $1$ to $2$ nor from $1$ to $3$.
[]
false
[]
22-22.5-4
22
22.5
22.5-4
docs/Chap22/22.5.md
Prove that for any directed graph $G$, we have $((G^\text T)^{\text{SCC}})^\text T = G^{\text{SCC}}$. That is, the transpose of the component graph of $G^\text T$ is the same as the component graph of $G$.
First observe that $C$ is a strongly connected component of $G$ if and only if it is a strongly connected component of $G^\text T$. Thus the vertex sets of $G^{\text{SCC}}$ and $(G^\text T)^{\text{SCC}}$ are the same, which implies the vertex sets of $((G^\text T)^\text{SCC})^\text T$ and $G^{\text{SCC}}$ are the same. It suffices to show that their edge sets are the same. Suppose $(v_i, v_j)$ is an edge in $((G^\text T)^{\text{SCC}})^\text T$. Then $(v_j, v_i)$ is an edge in $(G^\text T)^{\text{SCC}}$. Thus there exist $x \in C_j$ and $y \in C_i$ such that $(x, y)$ is an edge of $G^\text T$, which implies $(y, x)$ is an edge of $G$. Since components are preserved, this means that $(v_i, v_j)$ is an edge in $G^{\text{SCC}}$. For the opposite implication we simply note that for any graph $G$ we have $(G^\text T)^{\text T} = G$.
[]
false
[]
22-22.5-5
22
22.5
22.5-5
docs/Chap22/22.5.md
Give an $O(V + E)$-time algorithm to compute the component graph of a directed graph $G = (V, E)$. Make sure that there is at most one edge between two vertices in the component graph your algorithm produces.
(Removed)
[]
false
[]
22-22.5-6
22
22.5
22.5-6
docs/Chap22/22.5.md
Given a directed graph $G = (V, E)$, explain how to create another graph $G' = (V, E')$ such that (a) $G'$ has the same strongly connected components as $G$, (b) $G'$ has the same component graph as $G$, and \(c\) $E'$ is as small as possible. Describe a fast algorithm to compute $G'$.
(Removed)
[]
false
[]
22-22.5-7
22
22.5
22.5-7
docs/Chap22/22.5.md
A directed graph $G = (V, E)$ is **_semiconnected_** if, for all pairs of vertices $u, v \in V$, we have $u \leadsto v$ or $v \leadsto u$. Give an efficient algorithm to determine whether or not $G$ is semiconnected. Prove that your algorithm is correct, and analyze its running time.
Algorithm:

1. Run $\text{STRONGLY-CONNECTED-COMPONENTS}(G)$.
2. Take each strongly connected component as a virtual vertex and create a new virtual graph $G'$.
3. Run $\text{TOPOLOGICAL-SORT}(G')$.
4. Check whether, for all consecutive vertices $(v_i, v_{i + 1})$ in the topological sort of $G'$, there is an edge $(v_i, v_{i + 1})$ in graph $G'$. If so, the original graph is semiconnected. Otherwise, it isn't.

Proof: It is easy to show that $G'$ is a DAG. Consider consecutive vertices $v_i$ and $v_{i + 1}$ in the topological sort of $G'$. If there is no edge from $v_i$ to $v_{i + 1}$, then there is no path from $v_i$ to $v_{i + 1}$ at all, since any such path would have to pass through a vertex lying strictly between them in the topological order, and no such vertex exists for consecutive vertices; there is also no path from $v_{i + 1}$ to $v_i$, since $G'$ is a DAG and $v_i$ precedes $v_{i + 1}$. From the definition of $G'$, we conclude that there is no path, in either direction, between any vertex of $G$ represented by $v_i$ and any vertex represented by $v_{i + 1}$. Thus, $G$ is not semiconnected. If there is an edge between every pair of consecutive vertices, then there is a path between any two vertices of $G'$, so $G$ is semiconnected.

Running time: $O(V + E)$.
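A runnable sketch of the four steps, assuming vertices are labeled $0, \dots, n - 1$. It uses Kosaraju's two-pass SCC algorithm, which has the convenient property that components are discovered already in topological order of the component graph, so step 3 comes for free. The names `is_semiconnected`, `dfs1`, and `dfs2` are mine, not from the text.

```python
from collections import defaultdict

def is_semiconnected(n, edges):
    """Steps 1-4: find SCCs (Kosaraju), use the fact that the SCCs are
    produced in topological order of the component graph, then check
    that every pair of consecutive components is joined by an edge."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # Pass 1: record vertices in order of increasing finishing time.
    seen, order = set(), []
    def dfs1(u):
        seen.add(u)
        for w in adj[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in range(n):
        if u not in seen:
            dfs1(u)

    # Pass 2 on the transpose, by decreasing finishing time; component
    # ids 0, 1, 2, ... come out in topological order.
    comp, c = {}, 0
    def dfs2(u):
        comp[u] = c
        for w in radj[u]:
            if w not in comp:
                dfs2(w)
    for u in reversed(order):
        if u not in comp:
            dfs2(u)
            c += 1

    # Step 4: every pair of consecutive components needs a joining edge.
    joined = {(comp[u], comp[v]) for u, v in edges}
    return all((i, i + 1) in joined for i in range(c - 1))
```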
[]
false
[]
22-22-1
22
22-1
22-1
docs/Chap22/Problems/22-1.md
A depth-first forest classifies the edges of a graph into tree, back, forward, and cross edges. A breadth-first tree can also be used to classify the edges reachable from the source of the search into the same four categories. **a.** Prove that in a breadth-first search of an undirected graph, the following properties hold: 1. There are no back edges and no forward edges. 2. For each tree edge $(u, v)$, we have $v.d = u.d + 1$. 3. For each cross edge $(u, v)$, we have $v.d = u.d$ or $v.d = u.d + 1$. **b.** Prove that in a breadth-first search of a directed graph, the following properties hold: 1. There are no forward edges. 2. For each tree edge $(u, v)$, we have $v.d = u.d + 1$. 3. For each cross edge $(u, v)$, we have $v.d \le u.d + 1$. 4. For each back edge $(u, v)$, we have $0 \le v.d \le u.d$.
**a.**

1. If we found a back edge, this means that there are two vertices, one a descendant of the other, but there is already a path from the ancestor to the child that doesn't involve moving up the tree. This is a contradiction, since the only children in the BFS tree are those that are a single edge away, which means there cannot be any other paths to that child, because that would make it more than a single edge away. To see that there are no forward edges, we use a similar argument. A forward edge would mean that from a given vertex we notice it has a child that has already been processed, but this cannot happen because all children are only one edge away, and for it to have already been processed, it would need to have gone through some other vertex first.

2. An edge is placed on the list to be processed if it goes to a vertex that has not yet been considered. This means that the path from that vertex to the root must be at least the distance from the current vertex plus $1$. It is also at most that, since we can just take the path that consists of going to the current vertex and taking its path to the root.

3. We know that a cross edge cannot be going to a depth more than one less, otherwise it would be used as a tree edge when we were processing that earlier element. It also cannot be going to a vertex of depth more than one more, because we wouldn't have already processed a vertex that was that much further away from the root. Since the depths of the vertices in the cross edge cannot be more than one apart, the conclusion follows by possibly interchanging the roles of $u$ and $v$, which we can do because the edges are unordered.

**b.**

1. To have a forward edge, we would need to have already processed a vertex using more than one edge, even though there is a path to it using a single edge. Since breadth-first search always considers shorter paths first, this is not possible.

2. Suppose that $(u, v)$ is a tree edge. 
Then, this means that there is a path from the root to $v$ of length $u.d + 1$, by just appending $(u, v)$ on to the path from the root to $u$. To see that there is no shorter path, we just note that we would have processed $v$ sooner, and so wouldn't currently have a tree edge, if there were.

3. To see this, all we need to do is note that there is some path from the root to $v$ of length $u.d + 1$, obtained by appending $(u, v)$ to a shortest path from the root to $u$. Since there is a path of that length, it serves as an upper bound on the minimum length of all such paths from the root to $v$.

4. It is trivial that $0 \le v.d$, since it is impossible to have a path from the root to $v$ of negative length. The more interesting inequality is $v.d \le u.d$. We know that there is some path from $v$ to $u$ consisting of tree edges; this is the defining property of $(u, v)$ being a back edge. Let $v, v_1, v_2, \dots, v_k, u$ be this path (it is unique because the tree edges form a tree). Then, we have that $u.d = v_k.d + 1 = v_{k - 1}.d + 2 = \cdots = v_1.d + k = v.d + k + 1$. So, we have that $u.d > v.d$. In fact, we just showed the stronger conclusion that $0 \le v.d < u.d$.
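The part (a) properties can be checked mechanically. Below is a small runnable sketch (my own example graph, not one from the text): it runs BFS on an undirected graph, records the tree edges, and asserts that every tree edge satisfies $v.d = u.d + 1$, while every remaining edge — necessarily a cross edge, since part (a) rules out back and forward edges — joins vertices whose depths differ by at most $1$.

```python
from collections import defaultdict, deque

def bfs_check(edges, s):
    """BFS from s on an undirected graph; verify the part (a)
    properties and return the computed depths d."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    d = {s: 0}
    tree = set()          # tree edges, stored as unordered pairs
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                tree.add(frozenset((u, w)))
                q.append(w)

    for u, v in edges:
        if frozenset((u, v)) in tree:
            assert abs(d[u] - d[v]) == 1   # property 2: v.d = u.d + 1
        else:
            assert abs(d[u] - d[v]) <= 1   # property 3: cross edges
    return d

d = bfs_check([(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)], 0)
```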
[]
false
[]
22-22-2
22
22-2
22-2
docs/Chap22/Problems/22-2.md
Let $G = (V, E)$ be a connected, undirected graph. An **_articulation point_** of $G$ is a vertex whose removal disconnects $G$. A **_bridge_** of $G$ is an edge whose removal disconnects $G$. A **_biconnected component_** of $G$ is a maximal set of edges such that any two edges in the set lie on a common simple cycle. Figure 22.10 illustrates these definitions. We can determine articulation points, bridges, and biconnected components using depth-first search. Let $G_\pi = (V, E_\pi)$ be a depth-first tree of $G$. **a.** Prove that the root of $G_\pi$ is an articulation point of $G$ if and only if it has at least two children in $G_\pi$. **b.** Let $v$ be a nonroot vertex of $G_\pi$. Prove that $v$ is an articulation point of $G$ if and only if $v$ has a child $s$ such that there is no back edge from $s$ or any descendant of $s$ to a proper ancestor of $v$. **c.** Let $$ v.low = \min \begin{cases} v.d, \\\\ w.d:(u,w) \text{ is a back edge for some descendant } u \text{ of } v. \end{cases} $$ Show how to computer $v.low$ for all vertices $v \in V$ in $O(E)$ time. **d.** Show how to compute all articulation points in $O(E)$ time. **e.** Prove that an edge of $G$ is a bridge if and only if it does not lie on any simple cycle of $G$. **f.** Show how to compute all the bridges of $G$ in $O(E)$ time. **g.** Prove that the biconnected components of $G$ partition the nonbridge edges of $G$. **h.** Give an $O(E)$-time algorithm to label each edge $e$ of $G$ with a positive integer $e.bcc$ such that $e.bcc = e'.bcc$ if and only if $e$ and $e'$ are in the same biconnected component.
**a.** First suppose the root $r$ of $G_\pi$ is an articulation point. Then the removal of $r$ from $G$ would cause the graph to disconnect, so $r$ has at least $2$ children in $G$. If $r$ has only one child $v$ in $G_\pi$, then it must be the case that there is a path from $v$ to each of $r$'s other children. Since removing $r$ disconnects the graph, there must exist vertices $u$ and $w$ such that the only paths from $u$ to $w$ contain $r$. To reach $r$ from $u$, the path must first reach one of $r$'s children. This child is connected to $v$ via a path which doesn't contain $r$. To reach $w$, the path must also leave $r$ through one of its children, which is also reachable from $v$. This implies that there is a path from $u$ to $w$ which doesn't contain $r$, a contradiction.

Now suppose $r$ has at least two children $u$ and $v$ in $G_\pi$. Then there is no path from $u$ to $v$ in $G$ which doesn't go through $r$, since otherwise $u$ would be an ancestor of $v$. Thus, removing $r$ disconnects the component containing $u$ and the component containing $v$, so $r$ is an articulation point.

**b.** Suppose that $v$ is a nonroot vertex of $G_\pi$ and that $v$ has a child $s$ such that neither $s$ nor any of $s$'s descendants have back edges to a proper ancestor of $v$. Let $r$ be an ancestor of $v$, and remove $v$ from $G$. Since we are in the undirected case, the only edges in the graph are tree edges or back edges, which means that every edge incident with $s$ takes us to a descendant of $s$, and no descendants have back edges, so at no point can we move up the tree by taking edges. Therefore $r$ is unreachable from $s$, so the graph is disconnected and $v$ is an articulation point.

Now suppose that for every child of $v$ there exists a descendant of that child which has a back edge to a proper ancestor of $v$. Remove $v$ from $G$. Every subtree of $v$ is a connected component. Within a given subtree, find the vertex which has a back edge to a proper ancestor of $v$. 
Since the set $T$ of vertices which aren't descendants of $v$ forms a connected component, we have that every subtree of $v$ is connected to $T$. Thus, the graph remains connected after the deletion of $v$, so $v$ is not an articulation point.

**c.** Since $v$ is discovered before all of its descendants, the only back edges which could affect $v.low$ are ones which go from a descendant of $v$ to a proper ancestor of $v$. If we know $u.low$ for every child $u$ of $v$, then we can compute $v.low$ easily since all the information is coded in its descendants. Thus, we can write the algorithm recursively: if $v$ is a leaf in $G_\pi$ then $v.low$ is the minimum of $v.d$ and $w.d$, where $(v, w)$ is a back edge. If $v$ is not a leaf, $v.low$ is the minimum of $v.d$, $w.d$ where $(v, w)$ is a back edge, and $u.low$, where $u$ is a child of $v$. Computing $v.low$ for a vertex is linear in its degree. The sum of the vertices' degrees gives twice the number of edges, so the total runtime is $O(E)$.

**d.** First apply the algorithm of part (c) in $O(E)$ to compute $v.low$ for all $v \in V$. By part (b), a nonroot vertex $v$ is an articulation point if and only if it has a child $s$ in $G_\pi$ with $s.low \ge v.d$, since $s.low \ge v.d$ holds exactly when neither $s$ nor any descendant of $s$ has a back edge to a proper ancestor of $v$. The root is handled separately by counting its children, using part (a). Thus, we need only compare $s.low$ against $v.d$ for each tree edge $(v, s)$, which takes constant time per edge, so the runtime is $O(E)$.

**e.** An edge $(u, v)$ lies on a simple cycle if and only if there exists at least one path from $u$ to $v$ which doesn't contain the edge $(u, v)$, if and only if removing $(u, v)$ doesn't disconnect the graph, if and only if $(u, v)$ is not a bridge.

**f.** By part (e), an edge is a bridge if and only if it lies on no simple cycle. A back edge always lies on a simple cycle (together with the tree path between its endpoints), so every bridge is a tree edge. A tree edge $(u, v)$, with $u$ the parent, is a bridge if and only if $v.low > u.d$: in that case no descendant of $v$ has a back edge to $u$ or to a proper ancestor of $u$, so removing $(u, v)$ separates $v$'s subtree from the rest of the graph, and conversely. Having computed $v.low$ for all $v$ in $O(E)$ time by part (c), we can decide whether each edge is a bridge in constant time, so we can find all bridges in $O(E)$ time.

**g.** It is clear that every nonbridge edge is in some biconnected component, so we need to show that if $C_1$ and $C_2$ are distinct biconnected components, then they contain no common edges. Suppose to the contrary that $(u, v)$ is in both $C_1$ and $C_2$. Let $(a, b)$ be any edge in $C_1$ and $(c, d)$ be any edge in $C_2$. Then $(a, b)$ lies on a simple cycle with $(u, v)$, consisting of the path

$$a, b, p_1, \ldots, p_k, u, v, p_{k + 1}, \ldots, p_n, a.$$

Similarly, $(c, d)$ lies on a simple cycle with $(u, v)$ consisting of the path

$$c, d, q_1, \ldots, q_m, u, v, q_{m + 1}, \ldots, q_l, c.$$

This means

$$a, b, p_1, \ldots, p_k, u, q_m, \ldots, q_1, d, c, q_l , \ldots, q_{m + 1}, v, p_{k + 1}, \ldots, p_n,$$

is a simple cycle containing $(a, b)$ and $(c, d)$, a contradiction. Thus, the biconnected components form a partition.

**h.** Locate all bridge edges in $O(E)$ time using the algorithm described in part (f). Remove each bridge from $E$. The biconnected components are now simply the edges in the connected components. Assuming this has been done, run the following algorithm, which clearly runs in $O(|E|)$ where $|E|$ is the number of edges originally in $G$.

```cpp
VISIT-BCC(G, u, k)
    u.color = GRAY
    for each v ∈ G.Adj[u]
        (u, v).bcc = k
        if v.color == WHITE
            VISIT-BCC(G, v, k)
```
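Parts (c) and (d) can be sketched in runnable form. This is the standard single-DFS formulation (my own code, not from the text): $v.low$ is computed bottom-up exactly as in part (c), the root is an articulation point iff it has at least two DFS children (part (a)), and a nonroot $v$ is one iff some child $s$ has $s.low \ge v.d$ (part (b)).

```python
def articulation_points(n, edges):
    """Return the set of articulation points of an undirected simple
    graph on vertices 0..n-1, via one DFS computing d and low."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    d = [None] * n
    low = [None] * n
    time = [0]
    cut = set()

    def dfs(u, parent):
        d[u] = low[u] = time[0]
        time[0] += 1
        children = 0
        for w in adj[u]:
            if d[w] is None:               # tree edge (u, w)
                children += 1
                dfs(w, u)
                low[u] = min(low[u], low[w])
                if parent is not None and low[w] >= d[u]:
                    cut.add(u)             # part (b) criterion
            elif w != parent:              # back edge (u, w)
                low[u] = min(low[u], d[w])
        if parent is None and children >= 2:
            cut.add(u)                     # part (a) criterion

    for u in range(n):
        if d[u] is None:
            dfs(u, None)
    return cut
```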
[ { "lang": "cpp", "code": "VISIT-BCC(G, u, k)\n u.color = GRAY\n for each v ∈ G.Adj[u]\n (u, v).bcc = k\n if v.color == WHITE\n VISIT-BCC(G, v, k)" } ]
false
[]
22-22-3
22
22-3
22-3
docs/Chap22/Problems/22-3.md
An **_Euler tour_** of a strongly connected, directed graph $G = (V, E)$ is a cycle that traverses each edge of $G$ exactly once, although it may visit a vertex more than once. **a.** Show that $G$ has an Euler tour if and only if $\text{in-degree}(v) = \text{out-degree}(v)$ for each vertex $v \in V$. **b.** Describe an $O(E)$-time algorithm to find an Euler tour of $G$ if one exists. ($\textit{Hint:}$ Merge edge-disjoint cycles.)
**a.** First, we'll show that it is necessary to have in-degree equal to out-degree for each vertex. Suppose that there were some vertex $v$ for which the two were not equal, say $\text{in-degree}(v) > \text{out-degree}(v)$. Note that we may assume the in-degree is greater, because otherwise we would just look at the transpose graph, in which we traverse the cycle backwards. If $v$ is the start of the cycle as it is listed, just shift the starting and ending vertex to any other one on the cycle. Then, in whatever cycle we take going through $v$, we must pass through $v$ some number of times; in particular, after we pass through it $\text{out-degree}(v)$ times, the number of unused edges coming out of $v$ is zero, but there are still unused edges going in that we need to use. This means that there is no hope of using those edges while still being a tour, because we would never be able to escape $v$ and get back to the vertex where the tour started.

Now, we show that it is sufficient to have the in-degree and out-degree equal for every vertex. To do this, we will generalize the problem slightly so that it is more amenable to an inductive approach. That is, we will show that for every graph $G$ that has two vertices $v$ and $u$ such that all the vertices have the same in- and out-degree, except that the in-degree is one greater for $u$ and the out-degree is one greater for $v$, there is an Euler path from $v$ to $u$. This clearly lines up with the original statement if we pick $u = v$ to be any vertex in the graph. We now perform induction on the number of edges. If there is only a single edge, then taking just that edge is an Euler path. Otherwise, suppose that we start at $v$ and take any edge coming out of it. Consider the graph obtained by removing that edge: it inductively contains an Euler path that we can append after the edge that we took to get out of $v$. 
**b.** To actually get the Euler circuit, we can just walk arbitrarily, so long as we don't repeat an edge; we will necessarily end up with a valid Euler tour. This is implemented in the following algorithm, $\text{EULER-TOUR}(G)$, which takes time $O(|E|)$. It has this runtime because the while loop runs once per edge and each iteration takes a constant amount of time. Also, the process of initializing each edge's color takes time proportional to the number of edges.

```cpp
EULER-TOUR(G)
    color all edges WHITE
    let (v, u) be any edge
    let L be a list containing v
    while there is some WHITE edge (v, w) coming out of v
        color (v, w) BLACK
        v = w
        append v to L
```
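The hint's merging of edge-disjoint cycles can also be implemented iteratively with a stack (Hierholzer's algorithm), which splices detours along still-unused edges into the main walk. The sketch below is my own code; the input format — a directed edge list assumed to satisfy part (a)'s degree condition and to be connected — is an assumption. Each edge is pushed and popped once, giving $O(E)$ time.

```python
from collections import defaultdict

def euler_tour(edges):
    """Return an Euler tour of a directed graph given as an edge list,
    assuming in-degree(v) == out-degree(v) for all v and the graph is
    connected. Iterative Hierholzer: walk until stuck (which, by the
    degree condition, can only happen back where a walk began), then
    backtrack, splicing in detours along still-unused edges."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    stack, tour = [edges[0][0]], []
    while stack:
        u = stack[-1]
        if adj[u]:                 # unused edge out of u: keep walking
            stack.append(adj[u].pop())
        else:                      # stuck: u is finished, emit it
            tour.append(stack.pop())
    tour.reverse()
    return tour

print(euler_tour([(0, 1), (1, 2), (2, 0)]))  # -> [0, 1, 2, 0]
```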
[ { "lang": "cpp", "code": "EULER-TOUR(G)\n color all edges WHITE\n let (v, u) be any edge\n let L be a list containing v\n while there is some WHITE edge (v, w) coming out of v\n color (v, w) BLACK\n v = w\n append v to L" } ]
false
[]
22-22-4
22
22-4
22-4
docs/Chap22/Problems/22-4.md
Let $G = (V, E)$ be a directed graph in which each vertex $u \in V$ is labeled with a unique integer $L(u)$ from the set $\\{1, 2, \ldots, |V|\\}$. For each vertex $u \in V$, let $R(u) = \\{v \in V: u \leadsto v \\}$ be the set of vertices that are reachable from $u$. Define $\min(u)$ to be the vertex in $R(u)$ whose label is minimum, i.e., $\min(u)$ is the vertex $v$ such that $L(v) = \min \\{L(w): w \in R(u) \\}$. Give an $O(V + E)$-time algorithm that computes $\min(u)$ for all vertices $u \in V$.
**1.** Compute the component graph $G^{\text{SCC}}$ (in order to remove cycles from graph $G$), and label each vertex in $G^{\text{SCC}}$ with the smallest label of any vertex in that strongly connected component. Following Section 22.5, the time complexity of this procedure is $O(V + E)$.

**2.** On $G^{\text{SCC}}$, execute the algorithm below. Notice that if we memoize this function, it will be invoked at most $V + E$ times, so its time complexity is also $O(V + E)$.

```cpp
REACHABILITY(u)
    u.min = u.label
    for each v ∈ Adj[u]
        u.min = min(u.min, REACHABILITY(v))
    return u.min
```

**3.** Back in graph $G$, the value of $\min(u)$ on graph $G$ is the value of $\min(u.scc)$ on graph $G^{\text{SCC}}$.

**Alternate solution:** Transpose the graph. Call $\text{DFS}$, but in the main loop of $\text{DFS}$, consider the vertices in order of their labels. In the $\text{DFS-VISIT}$ subroutine, upon discovering a new node, we set its $\text{min}$ to be the label of its root.
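A memoized Python sketch of step 2's $\text{REACHABILITY}$ (names are my own), run directly on a DAG; for a general graph one would first condense it to $G^{\text{SCC}}$ as step 1 describes. Each vertex's answer is computed once, so the total work is $O(V + E)$.

```python
from collections import defaultdict

def min_reachable(labels, edges):
    """labels maps each vertex to its unique label; edges are the arcs
    of a DAG. Returns the minimum reachable label for every vertex."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    memo = {}
    def reach(u):
        if u not in memo:
            # smallest label among u itself and everything reachable
            memo[u] = min([labels[u]] + [reach(w) for w in adj[u]])
        return memo[u]

    return {u: reach(u) for u in labels}

labels = {'a': 3, 'b': 1, 'c': 2}
print(min_reachable(labels, [('a', 'c'), ('c', 'b')]))
```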
[ { "lang": "cpp", "code": "REACHABILITY(u)\n u.min = u.label\n for each v ∈ Adj[u]\n u.min = min(u.min, REACHABILITY(v))\n return u.min" } ]
false
[]
23-23.1-1
23
23.1
23.1-1
docs/Chap23/23.1.md
Let $(u, v)$ be a minimum-weight edge in a connected graph $G$. Show that $(u, v)$ belongs to some minimum spanning tree of $G$.
(Removed)
[]
false
[]
23-23.1-2
23
23.1
23.1-2
docs/Chap23/23.1.md
Professor Sabatier conjectures the following converse of Theorem 23.1. Let $G = (V, E)$ be a connected, undirected graph with a real-valued weight function $w$ defined on $E$. Let $A$ be a subset of $E$ that is included in some minimum spanning tree for $G$, let $(S, V - S)$ be any cut of $G$ that respects $A$, and let $(u, v)$ be a safe edge for $A$ crossing $(S, V - S)$. Then, $(u, v)$ is a light edge for the cut. Show that the professor's conjecture is incorrect by giving a counterexample.
Let $G$ be the graph with $4$ vertices: $u, v, w, z$. Let the edges of the graph be $(u, v), (u, w), (w, z)$ with weights $3$, $1$, and $2$ respectively. Suppose $A$ is the set $\\{(u, w)\\}$. Let $S = A$. Then $S$ clearly respects $A$. Since $G$ is a tree, its minimum spanning tree is itself, so $A$ is trivially a subset of a minimum spanning tree. Moreover, every edge is safe. In particular, $(u, v)$ is safe but not a light edge for the cut. Therefore Professor Sabatier's conjecture is false.
[]
false
[]
23-23.1-3
23
23.1
23.1-3
docs/Chap23/23.1.md
Show that if an edge $(u, v)$ is contained in some minimum spanning tree, then it is a light edge crossing some cut of the graph.
Let $T_0$ and $T_1$ be the two trees that are obtained by removing edge $(u, v)$ from a $\text{MST}$. Suppose that $V_0$ and $V_1$ are the vertices of $T_0$ and $T_1$ respectively. Consider the cut which separates $V_0$ from $V_1$. Suppose, toward a contradiction, that there is some edge crossing this cut that has weight less than that of $(u, v)$. Then, we could construct a spanning tree of the whole graph by adding that edge to $T_0 \cup T_1$. This spanning tree would have weight less than the original minimum spanning tree that contained $(u, v)$, a contradiction. Hence $(u, v)$ is a light edge crossing the cut $(V_0, V_1)$.
[]
false
[]
23-23.1-4
23
23.1
23.1-4
docs/Chap23/23.1.md
Give a simple example of a connected graph such that the set of edges $\\{(u, v):$ there exists a cut $(S, V - S)$ such that $(u, v)$ is a light edge crossing $(S, V - S)\\}$ does not form a minimum spanning tree.
(Removed)
[]
false
[]
23-23.1-5
23
23.1
23.1-5
docs/Chap23/23.1.md
Let $e$ be a maximum-weight edge on some cycle of connected graph $G = (V, E)$. Prove that there is a minimum spanning tree of $G' = (V, E - \\{e\\})$ that is also a minimum spanning tree of $G$. That is, there is a minimum spanning tree of $G$ that does not include $e$.
Consider any cut that places some vertices of the cycle on one side of the cut and some on the other. For any such cut, we know that the edge $e$ is not a light edge for the cut: the cycle must cross the cut on at least one other edge, and that edge has weight no greater than that of $e$, since $e$ is a maximum-weight edge of the cycle. Since all the other cuts won't have the edge $e$ crossing them, the edge isn't light for any of those cuts either. This means that $e$ is not safe.
[]
false
[]
23-23.1-6
23
23.1
23.1-6
docs/Chap23/23.1.md
Show that a graph has a unique minimum spanning tree if, for every cut of the graph, there is a unique light edge crossing the cut. Show that the converse is not true by giving a counterexample.
**Remark:** Do not assume that all edge weights are distinct. **Part 1: Proving the Forward Direction** **Goal:** Show that if every cut in the graph has a unique light edge crossing it, then the graph has exactly one MST. **Proof:** Assume, for contradiction, that the graph has **two distinct MSTs**, $T$ and $T'$. 1. **Identifying an Edge in $T$ but not in $T'$:** - Since $T$ and $T'$ are distinct, there exists at least one edge that is in $T$ but not in $T'$. - Let $(u, v)$ be such an edge. 2. **Creating a Cut by Removing $(u, v)$:** - Removing $(u, v)$ from $T$ divides it into two connected components (since trees are acyclic and connected). - Let $T_u$ be the set of vertices reachable from $u$ without using $(u, v)$. - Let $T_v$ be the set of vertices reachable from $v$ without using $(u, v)$. - The sets $T_u$ and $T_v$ form a **cut** $(T_u, T_v)$ in the graph. 3. **Unique Light Edge Crossing the Cut:** - By assumption, the cut $(T_u, T_v)$ has a **unique light edge** crossing it. - Let $(x, y)$ be this unique light edge. - Note that $(u, v)$ crosses this cut because $u \in T_u$ and $v \in T_v$. 4. **Analyzing the Unique Light Edge:** - **Case 1:** If $(x, y) \ne (u, v)$, then: - Since $(x, y)$ is the unique light edge and not $(u, v)$, it must have a weight **less than** $w(u, v)$. - **Constructing a New Spanning Tree:** - Replace $(u, v)$ in $T$ with $(x, y)$ to get $T'' = T - \{ (u, v) \} \cup \{(x, y)\}$. - $T''$ is connected (since $(x, y)$ connects $T_u$ and $T_v$) and spans all vertices. - The total weight of $T''$ is less than that of $T$ because $w(x, y) < w(u, v)$. - **Contradiction:** - $T$ was assumed to be an MST, but $T''$ has a lower total weight. - This contradicts the minimality of $T$. - **Case 2:** If $(x, y) = (u, v)$, then: - The unique light edge crossing the cut is $(u, v)$. - **Observing $T'$:** - Since $(u, v) \notin T'$, there must be a path from $u$ to $v$ in $T'$ (since $T'$ is connected). 
- This path must cross the cut $(T_u, T_v)$ at least once. - Let $e$ be an edge on this path that crosses the cut. - **Comparing Edge Weights:** - Since $(u, v)$ is the unique light edge crossing the cut, and $e \ne (u, v)$, it follows that $w(u, v) < w(e)$. - **Constructing a New Spanning Tree:** - Add $(u, v)$ to $T'$, creating a cycle. - Remove $e$ from this cycle to get $T'' = T' + \{(u, v)\} - \{e\}$. - $T''$ is connected and spans all vertices. - The total weight of $T''$ is less than that of $T'$ because $w(u, v) < w(e)$. - **Contradiction:** - $T'$ was assumed to be an MST, but $T''$ has a lower total weight. - This contradicts the minimality of $T'$. 5. **Conclusion:** - In both cases, assuming the existence of two distinct MSTs leads to a contradiction. - Therefore, the initial assumption that there are two distinct MSTs is false. - **Hence, the graph must have a unique MST.** --- **Part 2: Showing the Converse is Not True** **Goal:** Provide a counterexample to show that a graph can have a unique MST even if not every cut has a unique light edge crossing it. **Counterexample:** **Graph Description:** - **Vertices:** $a$, $b$, $c$. - **Edges and Weights:** - Edge $(a, b)$ with weight $1$. - Edge $(a, c)$ with weight $1$. - Edge $(b, c)$ with weight $2$. **Visualization:** ``` (1) a ------- b \ / \ / \ / (1) \ / (2) c ``` **Analysis:** 1. **Possible Spanning Trees:** - **Tree 1:** Edges $(a, b)$ and $(a, c)$; total weight $1 + 1 = 2$. - **Tree 2:** Edges $(a, b)$ and $(b, c)$; total weight $1 + 2 = 3$. - **Tree 3:** Edges $(a, c)$ and $(b, c)$; total weight $1 + 2 = 3$. 2. **Identifying the Unique MST:** - The minimum total weight is $2$, achieved by Tree 1. - **Therefore, the graph has a unique MST** comprising edges $(a, b)$ and $(a, c)$. 3. **Examining the Cuts:** - **Cut between $\{a\}$ and $\{b, c\}$:** - Edges crossing this cut are $(a, b)$ and $(a, c)$. - Both edges have the same weight $1$. 
   - **Therefore, this cut does not have a unique light edge**; it has two edges with the minimum weight. 4. **Conclusion:** - The graph has a unique MST even though at least one cut does not have a unique light edge crossing it. - **This demonstrates that the converse is not true.**
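The counterexample can be verified by brute force. The snippet below (using the edge weights from the triangle above) enumerates all spanning trees, confirms the MST is unique, and confirms that the cut $(\\{a\\}, \\{b, c\\})$ is crossed by two light edges.

```python
from itertools import combinations

# Edge weights of the counterexample: w(a,b) = w(a,c) = 1, w(b,c) = 2.
w = {('a', 'b'): 1, ('a', 'c'): 1, ('b', 'c'): 2}

# On 3 vertices, any 2 edges that together touch all 3 vertices form
# a spanning tree.
trees = [pair for pair in combinations(w, 2)
         if len({v for e in pair for v in e}) == 3]
weights = {t: sum(w[e] for e in t) for t in trees}
best = min(weights.values())
msts = [t for t in trees if weights[t] == best]
assert len(msts) == 1          # the MST is unique ...

# ... yet the cut ({a}, {b, c}) has two light edges crossing it.
crossing = [e for e in w if 'a' in e]
light = [e for e in crossing if w[e] == min(w[x] for x in crossing)]
assert len(light) == 2
```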
[ { "lang": "", "code": " (1)\na ------- b\n \\ /\n \\ /\n \\ /\n(1) \\ / (2)\n c" } ]
false
[]
23-23.1-7
23
23.1
23.1-7
docs/Chap23/23.1.md
Argue that if all edge weights of a graph are positive, then any subset of edges that connects all vertices and has minimum total weight must be a tree. Give an example to show that the same conclusion does not follow if we allow some weights to be nonpositive.
First, we show that the subset of edges of minimum total weight that connects all the vertices is a tree. To see this, suppose not, that it had a cycle. Removing any one of the edges in this cycle would leave the remaining edges still connecting all the vertices, but with total weight less by the (positive) weight of the removed edge. This would contradict the minimality of the total weight of the subset of edges. Since the subset of edges forms a tree and has minimal total weight, it must also be a minimum spanning tree.

To see that this conclusion is not true if we allow negative edge weights, we provide a construction. Consider the graph $K_3$ with all edge weights equal to $-1$. The only minimum-weight set of edges that connects the graph has total weight $-3$ and consists of all the edges. This is clearly not a $\text{MST}$, because it is not a tree, which can easily be seen because it has one more edge than a tree on three vertices should have. Any $\text{MST}$ of this weighted graph must have weight that is at least $-2$.
[]
false
[]
23-23.1-8
23
23.1
23.1-8
docs/Chap23/23.1.md
Let $T$ be a minimum spanning tree of a graph $G$, and let $L$ be the sorted list of the edge weights of $T$. Show that for any other minimum spanning tree $T'$ of $G$, the list $L$ is also the sorted list of edge weights of $T'$.
Suppose that $L'$ is another sorted list of edge weights of a minimum spanning tree. If $L' \ne L$, there must be a first edge $(u, v)$ in $T$ or $T'$ which is of smaller weight than the corresponding edge $(x, y)$ in the other set. Without loss of generality, assume $(u, v)$ is in $T$. Let $C$ be the graph obtained by adding $(u, v)$ to $T'$. Then we must have introduced a cycle. If there exists an edge on that cycle which is of larger weight than $(u, v)$, we can remove it to obtain a tree $C'$ of weight strictly smaller than the weight of $T'$, contradicting the fact that $T'$ is a minimum spanning tree. Thus, every edge on the cycle must be of lesser or equal weight than $(u, v)$. Suppose that every edge is of strictly smaller weight. Remove $(u, v)$ from $T$ to disconnect it into two components. There must exist some edge besides $(u, v)$ on the cycle which would reconnect these, and since it has smaller weight we can use that edge instead to create a spanning tree with less weight than $T$, a contradiction. Thus, some edge on the cycle has the same weight as $(u, v)$. Replace that edge by $(u, v)$. The corresponding lists $L$ and $L'$ remain unchanged since we have swapped out an edge of equal weight, but the number of edges which $T$ and $T'$ have in common has increased by $1$. If we continue in this way, eventually they must have every edge in common, contradicting the fact that their edge weights differ somewhere. Therefore all minimum spanning trees have the same sorted list of edge weights.
[]
false
[]
23-23.1-9
23
23.1
23.1-9
docs/Chap23/23.1.md
Let $T$ be a minimum spanning tree of a graph $G = (V, E)$, and let $V'$ be a subset of $V$. Let $T'$ be the subgraph of $T$ induced by $V'$, and let $G'$ be the subgraph of $G$ induced by $V'$. Show that if $T'$ is connected, then $T'$ is a minimum spanning tree of $G'$.
Suppose that there were some spanning tree of $G'$ cheaper than $T'$, that is, some $T''$ with $w(T'') < w(T')$. Then, let $S$ be the edges in $T$ but not in $T'$. We can then construct a spanning tree of $G$ cheaper than $T$ by considering $S \cup T''$. This is a spanning tree since $S \cup T'$ is, and $T''$ makes all the vertices in $V'$ connected just like $T'$ does. However, we have that $$w(S \cup T'') = w(S) + w(T'') < w(S) + w(T') = w(S \cup T') = w(T).$$ This means that we just found a spanning tree with lower total weight than the minimum spanning tree $T$, which is a contradiction. So our assumption that there was a spanning tree of $G'$ cheaper than $T'$ must be false.
[]
false
[]
23-23.1-10
23
23.1
23.1-10
docs/Chap23/23.1.md
Given a graph $G$ and a minimum spanning tree $T$, suppose that we decrease the weight of one of the edges in $T$. Show that $T$ is still a minimum spanning tree for $G$. More formally, let $T$ be a minimum spanning tree for $G$ with edge weights given by weight function $w$. Choose one edge $(x, y) \in T$ and a positive number $k$, and define the weight function $w'$ by $$ w'(u, v) = \begin{cases} w(u, v) & \text{ if }(u, v) \ne (x, y), \\\\ w(x, y) - k & \text{ if }(u, v) = (x, y). \end{cases} $$ Show that $T$ is a minimum spanning tree for $G$ with edge weights given by $w'$.
(Removed)
[]
false
[]
23-23.1-11
23
23.1
23.1-11 $\star$
docs/Chap23/23.1.md
Given a graph $G$ and a minimum spanning tree $T$, suppose that we decrease the weight of one of the edges not in $T$. Give an algorithm for finding the minimum spanning tree in the modified graph.
If we were to add in this newly decreased edge to the given tree, we would be creating a cycle. Then, if we were to remove any one of the edges along this cycle, we would still have a spanning tree. This means that we look at all the weights along this cycle formed by adding in the decreased edge, and remove the edge in the cycle of maximum weight. This does exactly what we want since we could only possibly want to add in the single decreased edge, and then, from there we change the graph back to a tree in the way that makes its total weight minimized.
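As a quick concrete sketch (not part of the original solution, and in Python rather than the book's pseudocode; all names are made up for illustration), the swap described above can be implemented by finding the unique tree path between the endpoints of the decreased edge and exchanging the heaviest edge on it whenever the swap is profitable:

```python
def update_mst_after_decrease(tree, weights, u, v, new_w):
    """tree: list of (a, b) edges of the current MST; weights: dict mapping
    frozenset({a, b}) -> weight for tree edges; (u, v) is the non-tree edge
    whose weight has decreased to new_w. Returns the new MST edge list."""
    # Build adjacency lists for the tree and find the unique u -> v path by DFS.
    adj = {}
    for a, b in tree:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    parent = {u: None}
    stack = [u]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    # Walk back from v to u, tracking the maximum-weight edge on the cycle
    # that adding (u, v) would create.
    max_e, max_w = None, float('-inf')
    x = v
    while parent[x] is not None:
        e = (parent[x], x)
        w = weights[frozenset(e)]
        if w > max_w:
            max_e, max_w = e, w
        x = parent[x]
    if new_w >= max_w:          # decreased edge is still too heavy: MST unchanged
        return list(tree)
    new_tree = [e for e in tree if frozenset(e) != frozenset(max_e)]
    new_tree.append((u, v))
    return new_tree
```

Finding the path and its maximum edge takes $O(V)$ time on a tree, matching the analysis above.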
[]
false
[]
23-23.2-1
23
23.2
23.2-1
docs/Chap23/23.2.md
Kruskal's algorithm can return different spanning trees for the same input graph $G$, depending on how it breaks ties when the edges are sorted into order. Show that for each minimum spanning tree $T$ of $G$, there is a way to sort the edges of $G$ in Kruskal's algorithm so that the algorithm returns $T$.
Suppose that we wanted to pick $T$ as our minimum spanning tree. Then, to obtain this tree with Kruskal's algorithm, we order the edges by weight as usual, but resolve ties in edge weights by listing an edge first if it is contained in $T$, treating the tied edges that aren't in $T$ as if they were slightly larger, even though they have the same actual weight. With this ordering, we still find a tree of the same weight as all the minimum spanning trees, $w(T)$. However, since we prioritize the edges in $T$, we pick them over any equal-weight edges that may be in other minimum spanning trees, so the algorithm returns exactly $T$.
[]
false
[]
23-23.2-2
23
23.2
23.2-2
docs/Chap23/23.2.md
Suppose that we represent the graph $G = (V, E)$ as an adjacency matrix. Give a simple implementation of Prim's algorithm for this case that runs in $O(V^2)$ time.
At each step of the algorithm we will add an edge from a vertex in the tree created so far to a vertex not in the tree, such that this edge has minimum weight. Thus, it will be useful to know, for each vertex not in the tree, the edge from that vertex to some vertex in the tree of minimal weight. We will store this information in an array $A$, where $A[u] = (v, w)$ if $w$ is the weight of $(u, v)$ and is minimal among the weights of edges from $u$ to some vertex $v$ in the tree built so far. We'll use $A[u].1$ to access $v$ and $A[u].2$ to access $w$. ```cpp PRIM-ADJ(G, w, r) initialize A with every entry = (NIL, ∞) T = {r} for i = 1 to V if Adj[r, i] != 0 A[i] = (r, w(r, i)) while T != V min = ∞ for each v in V - T if A[v].2 < min min = A[v].2 k = v T = T ∪ {k} k.π = A[k].1 for i = 1 to V if Adj[k, i] != 0 and i ∉ T and w(k, i) < A[i].2 A[i] = (k, w(k, i)) ```
[ { "lang": "cpp", "code": "PRIM-ADJ(G, w, r)\n initialize A with every entry = (NIL, ∞)\n T = {r}\n for i = 1 to V\n if Adj[r, i] != 0\n A[i] = (r, w(r, i))\n while T != V\n min = ∞\n for each v in V - T\n if A[v].2 < min\n min = A[v].2\n k = v\n T = T ∪ {k}\n k.π = A[k].1\n for i = 1 to V\n if Adj[k, i] != 0 and i ∉ T and w(k, i) < A[i].2\n A[i] = (k, w(k, i))" } ]
false
[]
23-23.2-3
23
23.2
23.2-3
docs/Chap23/23.2.md
For a sparse graph $G = (V, E)$, where $|E| = \Theta(V)$, is the implementation of Prim's algorithm with a Fibonacci heap asymptotically faster than the binary-heap implementation? What about for a dense graph, where $|E| = \Theta(V^2)$? How must the sizes $|E|$ and $|V|$ be related for the Fibonacci-heap implementation to be asymptotically faster than the binary-heap implementation?
Prim's algorithm implemented with a Binary heap has runtime $O((V + E)\lg V)$, which in the sparse case, is just $O(V\lg V)$. The implementation with Fibonacci heaps is $$O(E + V\lg V) = O(V + V\lg V) = O(V \lg V).$$ - In the sparse case, the two algorithms have the same asymptotic runtimes. - In the dense case. - The binary heap implementation has a runtime of $$O((V + E)\lg V) = O((V + V^2)\lg V) = O(V^2\lg V).$$ - The Fibonacci heap implementation has a runtime of $$O(E + V\lg V) = O(V^2 + V\lg V) = O(V^2).$$ So, in the dense case, we have that the Fibonacci heap implementation is asymptotically faster. - The Fibonacci heap implementation will be asymptotically faster so long as $E = \omega(V)$. Suppose that we have some function that grows more quickly than linear, say $f$, and $E = f(V)$. - The binary heap implementation will have runtime of $$O((V + E)\lg V) = O((V + f(V))\lg V) = O(f(V)\lg V).$$ However, we have that the runtime of the Fibonacci heap implementation will have runtime of $$O(E + V\lg V) = O(f(V) + V\lg V).$$ This runtime is either $O(f(V))$ or $O(V\lg V)$ depending on if $f(V)$ grows more or less quickly than $V\lg V$ respectively. In either case, we have that the runtime is faster than $O(f(V)\lg V)$.
[]
false
[]
23-23.2-4
23
23.2
23.2-4
docs/Chap23/23.2.md
Suppose that all edge weights in a graph are integers in the range from $1$ to $|V|$. How fast can you make Kruskal's algorithm run? What if the edge weights are integers in the range from $1$ to $W$ for some constant $W$?
(Removed)
[]
false
[]
23-23.2-5
23
23.2
23.2-5
docs/Chap23/23.2.md
Suppose that all edge weights in a graph are integers in the range from $1$ to $|V|$. How fast can you make Prim's algorithm run? What if the edge weights are integers in the range from $1$ to $W$ for some constant $W$?
For the first case, we can use a van Emde Boas tree to improve the time bound to $O(E \lg \lg V)$. Comparing to the Fibonacci heap implementation, this improves the asymptotic running time only for sparse graphs, and it cannot improve the running time polynomially. An advantage of this implementation is that it may have a lower overhead. For the second case, we can use a collection of doubly linked lists, each corresponding to an edge weight. This improves the bound to $O(E)$.
[]
false
[]
23-23.2-6
23
23.2
23.2-6 $\star$
docs/Chap23/23.2.md
Suppose that the edge weights in a graph are uniformly distributed over the half-open interval $[0, 1)$. Which algorithm, Kruskal's or Prim's, can you make run faster?
For input drawn from a uniform distribution I would use bucket sort with Kruskal's algorithm, for expected linear time sorting of edges by weight. This would achieve expected runtime $O(E\alpha(V))$.
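As an illustrative sketch (not from the text, and in Python rather than the book's pseudocode), here is Kruskal's algorithm with the comparison sort replaced by a bucket sort, which runs in expected linear time when the weights are uniform on $[0, 1)$:

```python
def bucket_sort_edges(edges):
    """edges: list of (w, u, v) with w uniform in [0, 1).
    Expected O(E) time for uniformly distributed weights."""
    n = max(len(edges), 1)
    buckets = [[] for _ in range(n)]
    for e in edges:
        buckets[int(e[0] * n)].append(e)
    out = []
    for b in buckets:
        out.extend(sorted(b))   # each bucket holds O(1) edges in expectation
    return out

def kruskal(n, edges):
    """Kruskal's algorithm with union-find (path halving + union by rank)."""
    parent = list(range(n))
    rank = [0] * n
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in bucket_sort_edges(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            if rank[ru] < rank[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            if rank[ru] == rank[rv]:
                rank[ru] += 1
            mst.append((u, v, w))
    return mst
```

With the sort reduced to expected $O(E)$, the union-find operations dominate, giving the stated expected $O(E\alpha(V))$ bound.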
[]
false
[]
23-23.2-7
23
23.2
23.2-7 $\star$
docs/Chap23/23.2.md
Suppose that a graph $G$ has a minimum spanning tree already computed. How quickly can we update the minimum spanning tree if we add a new vertex and incident edges to $G$?
(Removed)
[]
false
[]
23-23.2-8
23
23.2
23.2-8
docs/Chap23/23.2.md
Professor Borden proposes a new divide-and-conquer algorithm for computing minimum spanning trees, which goes as follows. Given a graph $G = (V, E)$, partition the set $V$ of vertices into two sets $V_1$ and $V_2$ such that $|V_1|$ and $|V_2|$ differ by at most $1$. Let $E_1$ be the set of edges that are incident only on vertices in $V_1$, and let $E_2$ be the set of edges that are incident only on vertices in $V_2$. Recursively solve a minimum-spanning-tree problem on each of the two subgraphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$. Finally, select the minimum-weight edge in $E$ that crosses the cut $(V_1, V_2)$, and use this edge to unite the resulting two minimum spanning trees into a single spanning tree. Either argue that the algorithm correctly computes a minimum spanning tree of $G$, or provide an example for which the algorithm fails.
The algorithm fails. Suppose $E = \\{(u, v), (u, w), (v, w)\\}$, where edges $(u, v)$ and $(u, w)$ have weight $1$ and edge $(v, w)$ has weight $1000$, and partition the vertex set into $V_1 = \\{u\\}$ and $V_2 = \\{v, w\\}$. The recursive call on $G_2$ must use the heavy edge $(v, w)$, and the minimum-weight edge crossing the cut has weight $1$, so the algorithm returns a spanning tree of weight $1001$. The actual minimum spanning tree, $\\{(u, v), (u, w)\\}$, has weight $2$.
[]
false
[]
23-23-1
23
23-1
23-1
docs/Chap23/Problems/23-1.md
Let $G = (V, E)$ be an undirected, connected graph whose weight function is $w: E \rightarrow \mathbb R$, and suppose that $|E| \ge |V|$ and all edge weights are distinct. We define a second-best minimum spanning tree as follows. Let $\mathcal T$ be the set of all spanning trees of $G$, and let $T'$ be a minimum spanning tree of $G$. Then a **_second-best minimum spanning tree_** is a spanning tree $T$ such that $W(T) = \min_{T'' \in \mathcal T - \\{T'\\}} \\{w(T'')\\}$. **a.** Show that the minimum spanning tree is unique, but that the second-best minimum spanning tree need not be unique. **b.** Let $T$ be the minimum spanning tree of $G$. Prove that $G$ contains edges $(u, v) \in T$ and $(x, y) \notin T$ such that $T - \\{(u, v)\\} \cup \\{(x, y)\\}$ is a second-best minimum spanning tree of $G$. **c.** Let $T$ be a spanning tree of $G$ and, for any two vertices $u, v \in V$, let $max[u, v]$ denote an edge of maximum weight on the unique simple path between $u$ and $v$ in $T$. Describe an $O(V^2)$-time algorithm that, given $T$, computes $max[u, v]$ for all $u, v \in V$. **d.** Give an efficient algorithm to compute the second-best minimum spanning tree of $G$.
**a.** To see that the second best minimum spanning tree need not be unique, we consider the following example graph on four vertices. Suppose the vertices are $\\{a, b, c, d\\}$, and the edge weights are as follows: $$ \begin{array}{c|c|c|c|c|} & a & b & c & d \\\\ \hline a & - & 1 & 4 & 3 \\\\ \hline b & 1 & - & 5 & 2 \\\\ \hline c & 4 & 5 & - & 6 \\\\ \hline d & 3 & 2 & 6 & - \\\\ \hline \end{array} $$ Then, the minimum spanning tree has weight $7$, but there are two spanning trees of the second best weight, $8$. **b.** We are trying to show that there is a single edge swap that can demote our minimum spanning tree to a second best minimum spanning tree. In obtaining the second best minimum spanning tree, there must be some cut of a single vertex away from the rest for which the edge that is added is not light, otherwise, we would find the minimum spanning tree, not the second best minimum spanning tree. Call the edge that is selected for that cut for the second best minimum spanning tree $(x, y)$. Now, consider the same cut, except look at the edge that was selected when obtaining $T$, call it $(u, v)$. Then, if we consider $T - \\{(u, v)\\} \cup \\{(x, y)\\}$, it will be a second best minimum spanning tree. This is because if the second best minimum spanning tree also selected a non-light edge for another cut, it would end up more expensive than all the minimum spanning trees. This means that for every cut other than that one, the edge selected must be light, so those choices all align with what the minimum spanning tree was. **c.** We give here a dynamic programming solution. Suppose that we want to find it for $(u, v)$. First, we will identify the vertex $x$ that occurs immediately after $u$ on the simple path from $u$ to $v$. We will then make $\max[u, v]$ equal to the max of $w((u, x))$ and $\max[x, v]$.
Lastly, we just consider the case that $u$ and $v$ are adjacent, in which case the maximum weight edge is just the single edge between the two. If we can find $x$ in constant time, then we will have the whole dynamic program running in time $O(V^2)$, since that's the size of the table that's being built up. To find $x$ in constant time, we preprocess the tree. We first pick an arbitrary root. Then, we do the preprocessing for Tarjan's off-line least common ancestors algorithm (see problem 21-3). This takes time just a little more than linear, $O(|V|\alpha(|V|))$. Once we've computed all the least common ancestors, we can just look up that result at some point later in constant time. Then, to find the $x$ that we should pick, we first check whether $u = \text{LCA}(u, v)$. If it does not, then we just pick the parent of $u$ in the tree. If it does, then we flip the question on its head and try to compute $\max[v, u]$ instead; we are guaranteed not to have the situation $v = \text{LCA}(v, u)$ because we know that $u$ is an ancestor of $v$. **d.** We provide here an algorithm that takes time $O(V^2)$ and leave open whether there exists a linear time solution, that is, an $O(E + V)$ time solution. First, we find a minimum spanning tree in time $O(E + V \lg(V))$, which is in $O(V^2)$. Then, using the algorithm from part c, we find the double array max. Then, we take a running minimum over all pairs of vertices $u$, $v$ with $(u, v) \in E - T$ of the value $w(u, v) - \max[u, v]$. If there is no edge between $u$ and $v$, we think of the weight as being infinite, so such pairs never achieve the minimum. Then, for the pair that resulted in the minimum value of this difference, we add in that edge and remove from the minimum spanning tree an edge that is in the path from $u$ to $v$ that has weight $\max[u, v]$.
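The $\max[u, v]$ table from part (c) can also be filled by one linear-time tree traversal per source vertex, which avoids the LCA machinery entirely; here is a Python sketch of that alternative (illustrative only, not the solution's DP, with made-up names):

```python
def all_pairs_max_edge(n, tree_edges):
    """tree_edges: list of (u, v, w) forming a spanning tree on vertices 0..n-1.
    Returns M with M[u][v] = maximum edge weight on the tree path from u to v.
    One O(V) traversal per source gives O(V^2) total time."""
    adj = [[] for _ in range(n)]
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    M = [[0] * n for _ in range(n)]
    for s in range(n):
        # Iterative DFS from s; the max edge to y extends the max edge to
        # its parent x by the edge (x, y).
        stack = [s]
        seen = [False] * n
        seen[s] = True
        while stack:
            x = stack.pop()
            for y, w in adj[x]:
                if not seen[y]:
                    seen[y] = True
                    M[s][y] = max(M[s][x], w)
                    stack.append(y)
    return M
```

Each traversal touches every tree edge once, so the total work is $\Theta(V^2)$ for the table plus $O(V)$ per source.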
[]
false
[]
23-23-2
23
23-2
23-2
docs/Chap23/Problems/23-2.md
For a very sparse connected graph $G = (V, E)$, we can further improve upon the $O(E + V\lg V)$ running time of Prim's algorithm with Fibonacci heaps by preprocessing $G$ to decrease the number of vertices before running Prim's algorithm. In particular, we choose, for each vertex $u$, the minimum-weight edge $(u, v)$ incident on $u$, and we put $(u, v)$ into the minimum spanning tree under construction. We then contract all chosen edges (see Section B.4). Rather than contracting these edges one at a time, we first identify sets of vertices that are united into the same new vertex. Then we create the graph that would have resulted from contracting these edges one at a time, but we do so by "renaming" edges according to the sets into which their endpoints were placed. Several edges from the original graph may be renamed the same as each other. In such a case, only one edge results, and its weight is the minimum of the weights of the corresponding original edges. Initially, we set the minimum spanning tree $T$ being constructed to be empty, and for each edge $(u, v) \in E$, we initialize the attributes $(u, v).orig = (u, v)$ and $(u, v).c = w(u, v)$. We use the $orig$ attribute to reference the edge from the initial graph that is associated with an edge in the contracted graph. The $c$ attribute holds the weight of an edge, and as edges are contracted, we update it according to the above scheme for choosing edge weights. The procedure $\text{MST-REDUCE}$ takes inputs $G$ and $T$, and it returns a contracted graph $G'$ with updated attributes $orig'$ and $c'$. The procedure also accumulates edges of $G$ into the minimum spanning tree $T$. 
```cpp MST-REDUCE(G, T) for each v ∈ G.V v.mark = false MAKE-SET(v) for each u ∈ G.V if u.mark == false choose v ∈ G.Adj[u] such that (u, v).c is minimized UNION(u, v) T = T ∪ {(u, v).orig} u.mark = v.mark = true G'.V = {FIND-SET(v): v ∈ G.V} G'.E = Ø for each (x, y) ∈ G.E u = FIND-SET(x) v = FIND-SET(y) if (u, v) ∉ G'.E G'.E = G'.E ∪ {(u, v)} (u, v).orig' = (x, y).orig (u, v).c' = (x, y).c else if (x, y).c < (u, v).c' (u, v).orig' = (x, y).orig (u, v).c' = (x, y).c construct adjacency lists G'.Adj for G' return G' and T ``` **a.** Let $T$ be the set of edges returned by $\text{MST-REDUCE}$, and let $A$ be the minimum spanning tree of the graph $G'$ formed by the call $\text{MST-PRIM}(G', c', r)$, where $c'$ is the weight attribute on the edges of $G'.E$ and $r$ is any vertex in $G'.V$. Prove that $T \cup \\{(x,y).orig': (x, y) \in A\\}$ is a minimum spanning tree of $G$. **b.** Argue that $|G'.V| \le |V| / 2$. **c.** Show how to implement $\text{MST-REDUCE}$ so that it runs in $O(E)$ time. ($\textit{Hint:}$ Use simple data structures.) **d.** Suppose that we run $k$ phases of $\text{MST-REDUCE}$, using the output $G'$ produced by one phase as the input $G$ to the next phase and accumulating edges in $T$. Argue that the overall running time of the $k$ phases is $O(kE)$. **e.** Suppose that after running $k$ phases of $\text{MST-REDUCE}$, as in part (d), we run Prim's algorithm by calling $\text{MST-PRIM}(G', c', r)$, where $G'$, with weight attribute $c'$, is returned by the last phase and $r$ is any vertex in $G'.V$. Show how to pick $k$ so that the overall running time is $O(E\lg\lg V)$. Argue that your choice of $k$ minimizes the overall asymptotic running time. **f.** For what values of $|E|$ (in terms of $|V|$) does Prim's algorithm with preprocessing asymptotically beat Prim's algorithm without preprocessing?
**a.** We'll show that the edges added at each step are safe. Consider an unmarked vertex $u$. Set $S = \\{u\\}$ and let $A$ be the set of edges in the tree so far. Then the cut respects $A$, and the next edge we add is a light edge, so it is safe for $A$. Thus, every edge in $T$ before we run Prim's algorithm is safe for $T$. Any edge that Prim's would normally add at this point would have to connect two of the trees already created, and it would be chosen as minimal. Moreover, we choose exactly one between any two trees. Thus, the fact that we only have the smallest edges available to us is not a problem. The resulting tree must be minimal. **b.** We argue by induction on the number of vertices in $G$. We'll assume that $|V| > 1$, since otherwise $\text{MST-REDUCE}$ will encounter an error on line 6 because there is no way to choose $v$. Let $|V| = 2$. Since $G$ is connected, there must be an edge between $u$ and $v$, and it is trivially of minimum weight. They are joined, and $|G'.V| = 1 = |V| / 2$. Suppose the claim holds for $|V| = n$. Let $G$ be a connected graph on $n + 1$ vertices. Then $G'.V \le n / 2$ prior to the final vertex $v$ being examined in the for-loop of line 4. If $v$ is marked then we're done, and if $v$ isn't marked then we'll connect it to some other vertex, which must be marked since $v$ is the last to be processed. Either way, $v$ can't contribute an additional vertex to $G'.V$. so $$|G'.V| \le n / 2 \le (n + 1) / 2.$$ **c.** Rather than using the disjoint set structures of chapter 21, we can simply use an array to keep track of which component a vertex is in. Let $A$ be an array of length $|V|$ such that $A[u] = v$ if $v = \text{FIND-SET}(u)$. Then $\text{FIND-SET}(u)$ can now be replaced with $A[u]$ and $\text{UNION}(u, v)$ can be replaced by $A[v] = A[u]$. Since these operations run in constant time, the runtime is $O(E)$. **d.** The number of edges in the output is monotonically decreasing, so each call is $O(E)$. 
Thus, $k$ calls take $O(kE)$ time. **e.** The runtime of Prim's algorithm is $O(E + V\lg V)$. Each time we run $\text{MST-REDUCE}$, we cut the number of vertices at least in half. Thus, after $k$ calls, the number of vertices is at most $|V| / 2^k$. We need to minimize $$E + V / 2^k\lg(V / 2^k) + kE = E + \frac{V\lg V}{2^k} - \frac{Vk}{2^k} + kE$$ with respect to $k$. If we choose $k = \lg\lg V$ then we achieve the overall running time of $O(E\lg\lg V)$ as desired. To see that this value of $k$ minimizes, note that the $\frac{Vk}{2^k}$ term is always less than the $kE$ term since $E \ge V$. As $k$ decreases, the contribution of $kE$ decreases, and the contribution of $\frac{V\lg V}{2^k}$ increases. Thus, we need to find the value of $k$ which makes them approximately equal in the worst case, when $E = V$. To do this, we set $\frac{\lg V}{2^k} = k$. Solving this exactly would involve the Lambert W function, but the nicest elementary function which gets close is $k = \lg\lg V$. **f.** We simply set up the inequality $$E\lg\lg V < E + V\lg V$$ to find that we need $$E < \frac{V\lg V}{\lg\lg V-1} = O(\frac{V\lg V}{\lg\lg V}).$$
[ { "lang": "cpp", "code": "> MST-REDUCE(G, T)\n> for each v ∈ G.V\n> v.mark = false\n> MAKE-SET(v)\n> for each u ∈ G.V\n> if u.mark == false\n> choose v ∈ G.Adj[u] such that (u, v).c is minimized\n> UNION(u, v)\n> T = T ∪ {(u, v).orig}\n> u.mark = v.mark = true\n> G'.V = {FIND-SET(v): v ∈ G.V}\n> G'.E = Ø\n> for each (x, y) ∈ G.E\n> u = FIND-SET(x)\n> v = FIND-SET(y)\n> if (u, v) ∉ G'.E\n> G'.E = G'.E ∪ {(u, v)}\n> (u, v).orig' = (x, y).orig\n> (u, v).c' = (x, y).c\n> else if (x, y).c < (u, v).c'\n> (u, v).orig' = (x, y).orig\n> (u, v).c' = (x, y).c\n> construct adjacency lists G'.Adj for G'\n> return G' and T\n>" } ]
false
[]
23-23-3
23
23-3
23-3
docs/Chap23/Problems/23-3.md
A **_bottleneck spanning tree_** $T$ of an undirected graph $G$ is a spanning tree of $G$ whose largest edge weight is minimum over all spanning trees of $G$. We say that the value of the bottleneck spanning tree is the weight of the maximum-weight edge in $T$. **a.** Argue that a minimum spanning tree is a bottleneck spanning tree. Part (a) shows that finding a bottleneck spanning tree is no harder than finding a minimum spanning tree. In the remaining parts, we will show how to find a bottleneck spanning tree in linear time. **b.** Give a linear-time algorithm that given a graph $G$ and an integer $b$, determines whether the value of the bottleneck spanning tree is at most $b$. **c.** Use your algorithm for part (b) as a subroutine in a linear-time algorithm for the bottleneck-spanning-tree problem. ($\textit{Hint:}$ You may want to use a subroutine that contracts sets of edges, as in the $\text{MST-REDUCE}$ procedure described in Problem 23-2.)
**a.** To see that every minimum spanning tree is also a bottleneck spanning tree, suppose that $T$ is a minimum spanning tree and that some edge $(u, v)$ in it has weight greater than the value of the bottleneck spanning tree. Then, let $V_1$ be the subset of vertices of $V$ that are reachable from $u$ in $T$ without going through $v$, and define $V_2$ symmetrically. Consider the cut that separates $V_1$ from $V_2$. Since $T$ is a minimum spanning tree, $(u, v)$ must be a minimum-weight edge crossing this cut; otherwise exchanging it for a lighter crossing edge would yield a cheaper spanning tree. So there is no edge across this cut of weight less than $w(u, v)$. However, there is a bottleneck spanning tree whose every edge weighs less than $w(u, v)$, and since it is a spanning tree it must contain an edge across this cut. This is a contradiction. **b.** To do this, we first process the entire graph, and remove any edges that have weight greater than $b$. If the remaining graph is connected, we can just arbitrarily select any spanning tree of it, and it will be a bottleneck spanning tree of value at most $b$. Testing connectivity of a graph can be done in linear time by running a breadth first search and then making sure that no vertices remain white at the end. **c.** Write down all of the edge weights. Use the algorithm from section 9.3 to find the median of this list of numbers in time $O(E)$. Then, run the procedure from part b with this median value as input. There are two cases. First, there may be a bottleneck spanning tree with value at most this median; then just throw away the edges with weight more than the median and repeat the procedure on this new graph, which has half the edges. Second, there may be no bottleneck spanning tree with value at most the median; then we run a procedure similar to problem 23-2 to contract all of the edges that have weight at most the median.
This takes time $O(E)$ and leaves us solving the problem on a graph that now has half the edges. Observe that both cases take $O(E)$ and each recursion cuts the problem size in half, so the total running time satisfies $T(E) = T(E / 2) + O(E)$, whose solution is linear.
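Part (b)'s linear-time test is easy to make concrete; this Python sketch (illustrative, not from the text) drops the heavy edges and checks connectivity with a single BFS:

```python
from collections import deque

def bottleneck_at_most(n, edges, b):
    """True iff the graph on vertices 0..n-1 has a spanning tree whose
    maximum edge weight is at most b: keep only edges of weight <= b and
    test connectivity with one BFS, which takes O(V + E) time."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        if w <= b:
            adj[u].append(v)
            adj[v].append(u)
    seen = [False] * n
    seen[0] = True
    q = deque([0])
    count = 1                      # number of vertices reached so far
    while q:
        x = q.popleft()
        for y in adj[x]:
            if not seen[y]:
                seen[y] = True
                count += 1
                q.append(y)
    return count == n
```

On a triangle with weights $1$, $3$, $5$, the bottleneck value is $3$: the test succeeds at $b = 3$ and fails at $b = 2$.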
[]
false
[]
23-23-4
23
23-4
23-4
docs/Chap23/Problems/23-4.md
In this problem, we give pseudocode for three different algorithms. Each one takes a connected graph and a weight function as input and returns a set of edges $T$. For each algorithm, either prove that $T$ is a minimum spanning tree or prove that $T$ is not a minimum spanning tree. Also describe the most efficient implementation of each algorithm, whether or not it computes a minimum spanning tree. **a.** ```cpp MAYBE-MST-A(G, w) sort the edges into nonincreasing order of edge weights w T = E for each edge e, taken in nonincreasing order by weight if T - {e} is a connected graph T = T - {e} return T ``` **b.** ```cpp MAYBE-MST-B(G, w) T = Ø for each edge e, taken in arbitrary order if T ∪ {e} has no cycles T = T ∪ {e} return T ``` **c.** ```cpp MAYBE-MST-C(G, w) T = Ø for each edge e, taken in arbitrary order T = T ∪ {e} if T has a cycle c let e' be a maximum-weight edge on c T = T - {e} return T ```
**a.** This does return an $\text{MST}$. To see this, we'll show that we never remove an edge which must be part of a minimum spanning tree. If we remove $e$, then $e$ cannot be a bridge, which means that $e$ lies on a simple cycle of the graph. Since we remove edges in nonincreasing order, the weight of every edge on the cycle must be less than or equal to that of $e$. By exercise 23.1-5, there is a minimum spanning tree on $G$ with edge $e$ removed. To implement this, we begin by sorting the edges in $O(E \lg E)$ time. For each edge we need to check whether or not $T - \\{e\\}$ is connected, so we'll need to run a $\text{DFS}$. Each one takes $O(V + E)$, so doing this for all edges takes $O(E(V + E))$. This dominates the running time, so the total time is $O(E^2)$. **b.** This doesn't return an $\text{MST}$. To see this, let $G$ be the graph on 3 vertices $a$, $b$, and $c$. Let the edges be $(a, b)$, $(b, c)$, and $(c, a)$ with weights $3$, $2$, and $1$ respectively. If the algorithm examines the edges in the order listed, it will take the two heaviest edges instead of the two lightest. An efficient implementation will use disjoint sets to keep track of connected components, as in $\text{MST-REDUCE}$ in problem 23-2. Trying to union within the same component will create a cycle. Since we make $|V|$ calls to $\text{MAKE-SET}$ and at most $3|E|$ calls to $\text{FIND-SET}$ and $\text{UNION}$, the runtime is $O(E\alpha(V))$. **c.** This does return an $\text{MST}$. To see this, we simply quote the result from exercise 23.1-5. The only edges we remove are the edges of maximum weight on some cycle, and there always exists a minimum spanning tree which doesn't include these edges. Moreover, if we remove an edge from every cycle then the resulting graph cannot have any cycles, so it must be a tree. To implement this, we use the approach taken in part (b), except now we also need to find the maximum weight edge on a cycle.
For each edge which introduces a cycle we can perform a $\text{DFS}$ to find the cycle and max weight edge. Since the tree at that time has at most one cycle, it has at most $|V|$ edges, so we can run $\text{DFS}$ in $O(V)$. The runtime is thus $O(EV)$.
[ { "lang": "cpp", "code": "> MAYBE-MST-A(G, w)\n> sort the edges into nonincreasing order of edge weights w\n> T = E\n> for each edge e, taken in nonincreasing order by weight\n> if T - {e} is a connected graph\n> T = T - {e}\n> return T\n>" }, { "lang": "cpp", "code": "> MAYBE-MST-B(G, w)\n> T = Ø\n> for each edge e, taken in arbitrary order\n> if T ∪ {e} has no cycles\n> T = T ∪ {e}\n> return T\n>" }, { "lang": "cpp", "code": "> MAYBE-MST-C(G, w)\n> T = Ø\n> for each edge e, taken in arbitrary order\n> T = T ∪ {e}\n> if T has a cycle c\n> let e' be a maximum-weight edge on c\n> T = T - {e}\n> return T\n>" } ]
false
[]
24-24.1-1
24
24.1
24.1-1
docs/Chap24/24.1.md
Run the Bellman-Ford algorithm on the directed graph of Figure 24.4, using vertex $z$ as the source. In each pass, relax edges in the same order as in the figure, and show the $d$ and $\pi$ values after each pass. Now, change the weight of edge $(z, x)$ to $4$ and run the algorithm again, using $s$ as the source.
- Using vertex $z$ as the source: - $d$ values: $$ \begin{array}{cccccc} s & t & x & y & z \\\\ \hline \infty & \infty & \infty & \infty & 0 \\\\ 2 & \infty & 7 & \infty & 0 \\\\ 2 & 5 & 7 & 9 & 0 \\\\ 2 & 5 & 6 & 9 & 0 \\\\ 2 & 4 & 6 & 9 & 0 \end{array} $$ - $\pi$ values: $$ \begin{array}{cccccc} s & t & x & y & z \\\\ \hline \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} \\\\ z & \text{NIL} & z & \text{NIL} & \text{NIL} \\\\ z & x & z & s & \text{NIL} \\\\ z & x & y & s & \text{NIL} \\\\ z & x & y & s & \text{NIL} \end{array} $$ - Changing the weight of edge $(z, x)$ to $4$: - $d$ values: $$ \begin{array}{cccccc} s & t & x & y & z \\\\ \hline 0 & \infty & \infty & \infty & \infty \\\\ 0 & 6 & \infty & 7 & \infty \\\\ 0 & 6 & 4 & 7 & 2 \\\\ 0 & 2 & 4 & 7 & 2 \\\\ 0 & 2 & 4 & 7 & -2 \end{array} $$ - $\pi$ values: $$ \begin{array}{cccccc} s & t & x & y & z \\\\ \hline \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} \\\\ \text{NIL} & s & \text{NIL} & s & \text{NIL} \\\\ \text{NIL} & s & y & s & t \\\\ \text{NIL} & x & y & s & t \\\\ \text{NIL} & x & y & s & t \end{array} $$ Consider edge $(z, x)$, it'll return $\text{FALSE}$ since $x.d = 4 > z.d + w(z, x) = -2 + 4$.
[]
false
[]
24-24.1-2
24
24.1
24.1-2
docs/Chap24/24.1.md
Prove Corollary 24.3.
Suppose there is a path from $s$ to $v$. Then there must be a shortest such path of length $\delta(s, v)$. It must have finite length since it contains at most $|V| - 1$ edges and each edge has finite length. By Lemma 24.2, $v.d = \delta(s, v) < \infty$ upon termination. On the other hand, suppose $v.d < \infty$ when $\text{BELLMAN-FORD}$ terminates. Recall that $v.d$ is monotonically decreasing throughout the algorithm, and $\text{RELAX}$ will update $v.d$ only if $u.d + w(u, v) < v.d$ for some $u$ adjacent to $v$. Moreover, we update $v.\pi = u$ at this point, so $v$ has an ancestor in the predecessor subgraph. Since this is a tree rooted at $s$, there must be a path from $s$ to $v$ in this tree. Every edge in the tree is also an edge in $G$, so there is also a path in $G$ from $s$ to $v$.
[]
false
[]
24-24.1-3
24
24.1
24.1-3
docs/Chap24/24.1.md
Given a weighted, directed graph $G = (V, E)$ with no negative-weight cycles, let $m$ be the maximum over all vertices $v \in V$ of the minimum number of edges in a shortest path from the source $s$ to $v$. (Here, the shortest path is by weight, not the number of edges.) Suggest a simple change to the Bellman-Ford algorithm that allows it to terminate in $m + 1$ passes, even if $m$ is not known in advance.
After $m$ passes, every $v.d$ equals $\delta(s, v)$: by the path-relaxation property, each vertex $v$ has a shortest path from $s$ with at most $m$ edges, and the first $m$ passes relax every edge of that path in order. Hence no $d$ value changes in the $(m + 1)$-st pass or in any later pass. Since $m$ is not known in advance, we cannot simply run exactly $m$ passes; instead, modify $\text{BELLMAN-FORD}$ to record whether any $d$ value changes during a pass and to terminate as soon as a pass completes with no change. The modified algorithm then stops after $m + 1$ passes.
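The modification can be sketched in Python as follows; the change-tracking flag and the path graph at the bottom are illustrative assumptions, and the code presumes no negative-weight cycle is reachable (otherwise some $d$ value changes forever).

```python
INF = float("inf")

def bellman_ford_early(vertices, edges, s):
    """Bellman-Ford that stops after the first pass with no change;
    with m as defined in the exercise, this is pass m + 1."""
    d = {v: INF for v in vertices}
    d[s] = 0
    passes = 0
    while True:
        changed = False
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                changed = True
        passes += 1
        if not changed:          # the (m + 1)-st pass relaxes nothing
            break
    return d, passes

# path a -> b -> c -> d with the edges listed in worst-case order,
# so each pass settles exactly one more vertex: m = 3, 4 passes total
d, passes = bellman_ford_early(
    "abcd", [("c", "d", 1), ("b", "c", 1), ("a", "b", 1)], "a")
```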
[]
false
[]
24-24.1-4
24
24.1
24.1-4
docs/Chap24/24.1.md
Modify the Bellman-Ford algorithm so that it sets $v.d$ to $-\infty$ for all vertices $v$ for which there is a negative-weight cycle on some path from the source to $v$.
```cpp BELLMAN-FORD'(G, w, s) INITIALIZE-SINGLE-SOURCE(G, s) for i = 1 to |G.V| - 1 for each edge (u, v) ∈ G.E RELAX(u, v, w) for each edge(u, v) ∈ G.E if v.d > u.d + w(u, v) mark v for each vertex u ∈ marked vertices DFS-MARK(u) ``` ```cpp DFS-MARK(u) if u != NIL and u.d != -∞ u.d = -∞ for each v in G.Adj[u] DFS-MARK(v) ``` After running $\text{BELLMAN-FORD}'$, run $\text{DFS}$ with all vertices on negative-weight cycles as source vertices. All the vertices that can be reached from these vertices should have their $d$ attributes set to $-\infty$.
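A runnable sketch of the procedure above, with $\text{DFS-MARK}$ written iteratively; the five-vertex graph at the bottom (a negative cycle between $a$ and $b$ feeding into $c$) is a made-up example.

```python
INF = float("inf")

def bellman_ford_neg_inf(vertices, edges, s):
    """After |V| - 1 passes, any edge that can still be relaxed ends in
    a vertex affected by a negative-weight cycle; DFS from those
    endpoints sets d = -inf for everything they can reach."""
    d = {v: INF for v in vertices}
    d[s] = 0
    adj = {v: [] for v in vertices}
    for u, v, _ in edges:
        adj[u].append(v)
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    marked = [v for u, v, w in edges if d[u] + w < d[v]]
    stack = list(marked)                     # iterative DFS-MARK
    while stack:
        u = stack.pop()
        if d[u] == -INF:                     # already visited
            continue
        d[u] = -INF
        stack.extend(adj[u])
    return d

E = [("s", "a", 1), ("a", "b", -1), ("b", "a", -1),
     ("b", "c", 2), ("s", "e", 5)]
d = bellman_ford_neg_inf("sabce", E, "s")    # a, b, c become -inf
```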
[ { "lang": "cpp", "code": "BELLMAN-FORD'(G, w, s)\n INITIALIZE-SINGLE-SOURCE(G, s)\n for i = 1 to |G.V| - 1\n for each edge (u, v) ∈ G.E\n RELAX(u, v, w)\n for each edge(u, v) ∈ G.E\n if v.d > u.d + w(u, v)\n mark v\n for each vertex u ∈ marked vertices\n DFS-MARK(u)" }, { "lang": "cpp", "code": "DFS-MARK(u)\n if u != NIL and u.d != -∞\n u.d = -∞\n for each v in G.Adj[u]\n DFS-MARK(v)" } ]
false
[]
24-24.1-5
24
24.1
24.1-5 $\star$
docs/Chap24/24.1.md
Let $G = (V, E)$ be a weighted, directed graph with weight function $w : E \rightarrow \mathbb R$. Give an $O(VE)$-time algorithm to find, for each vertex $v \in V$, the value $\delta^*(v) = \min_{u \in V} \\{\delta(u, v)\\}$.
```cpp
RELAX(u, v, w)
    if v.d > min(w(u, v), w(u, v) + u.d)
        v.d = min(w(u, v), w(u, v) + u.d)
        v.π = u.π
```

Run $\text{BELLMAN-FORD}$ with this modified $\text{RELAX}$ and with every $v.d$ initialized to $\infty$, since there is no single source. The term $w(u, v)$ by itself is the weight of the path that starts at $u$ and immediately crosses to $v$, so the modified relaxation allows a shortest path to begin at any vertex. The running time remains $O(VE)$.
[ { "lang": "cpp", "code": "RELAX(u, v, w)\n if v.d > min(w(u, v), w(u, v) + u.d)\n v.d = min(w(u, v), w(u, v) + u.d)\n v.π = u.π" } ]
false
[]
24-24.1-6
24
24.1
24.1-6 $\star$
docs/Chap24/24.1.md
Suppose that a weighted, directed graph $G = (V, E)$ has a negative-weight cycle. Give an efficient algorithm to list the vertices of one such cycle. Prove that your algorithm is correct.
Based on Exercise 24.1-4, run $\text{DFS}$ from a vertex $u$ with $u.d = -\infty$, maintaining the weight sum along the current search path. When an edge leads to a $\text{GRAY}$ vertex $v$ (a vertex on the current search path), the portion of the search path from $v$ back to $v$ is a cycle; if its weight sum is negative, output it as a negative-weight cycle.
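A standard alternative to the DFS approach is the predecessor-pointer walk, sketched below in Python (the four-vertex graph is a made-up example): if the $|V|$-th relaxation pass still updates some vertex, then walking $\pi$ back $|V|$ times from it must land inside a cycle of the predecessor graph, and that cycle has negative weight.

```python
INF = float("inf")

def find_negative_cycle(vertices, edges, s):
    """Return the vertices of one negative-weight cycle, or None."""
    d = {v: INF for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    x = None
    for _ in range(len(vertices)):           # one extra pass: |V| total
        x = None
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
                x = v                        # last vertex relaxed
    if x is None:                            # no negative-weight cycle
        return None
    for _ in range(len(vertices)):           # walk back into the cycle
        x = pi[x]
    cycle, v = [x], pi[x]                    # read the cycle off pi
    while v != x:
        cycle.append(v)
        v = pi[v]
    cycle.reverse()
    return cycle

E = [("s", "a", 1), ("a", "b", -1), ("b", "c", -1), ("c", "a", -1)]
cycle = find_negative_cycle("sabc", E, "s")  # the cycle through a, b, c
```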
[]
false
[]
24-24.2-1
24
24.2
24.2-1
docs/Chap24/24.2.md
Run $\text{DAG-SHORTEST-PATHS}$ on the directed graph of Figure 24.5, using vertex $r$ as the source.
- $d$ values: $$ \begin{array}{cccccc} r & s & t & x & y & z \\\\ \hline 0 & \infty & \infty & \infty & \infty & \infty \\\\ 0 & 5 & 3 & \infty & \infty & \infty \\\\ 0 & 5 & 3 & 11 & \infty & \infty \\\\ 0 & 5 & 3 & 10 & 7 & 5 \\\\ 0 & 5 & 3 & 10 & 7 & 5 \\\\ 0 & 5 & 3 & 10 & 7 & 5 \end{array} $$ - $\pi$ values: $$ \begin{array}{cccccc} r & s & t & x & y & z \\\\ \hline \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} \\\\ \text{NIL} & r & r & \text{NIL} & \text{NIL} & \text{NIL} \\\\ \text{NIL} & r & r & s & \text{NIL} & \text{NIL} \\\\ \text{NIL} & r & r & t & t & t \\\\ \text{NIL} & r & r & t & t & t \\\\ \text{NIL} & r & r & t & t & t \end{array} $$
[]
false
[]
24-24.2-2
24
24.2
24.2-2
docs/Chap24/24.2.md
Suppose we change line 3 of $\text{DAG-SHORTEST-PATHS}$ to read ```cpp 3 for the first |V| - 1 vertices, taken in topologically sorted order ``` Show that the procedure would remain correct.
When we reach vertex $v$, the last vertex in the topological sort, it must have out-degree $0$; otherwise there would be an edge pointing from a later vertex to an earlier vertex in the ordering, a contradiction. Thus, the body of the for-loop of line 4 is never entered for this final vertex, so we may as well not consider it.
[ { "lang": "cpp", "code": "> 3 for the first |V| - 1 vertices, taken in topologically sorted order\n>" } ]
false
[]
24-24.2-3
24
24.2
24.2-3
docs/Chap24/24.2.md
The PERT chart formulation given above is somewhat unnatural. In a more natural structure, vertices would represent jobs and edges would represent sequencing constraints; that is, edge $(u, v)$ would indicate that job $u$ must be performed before job $v$. We would then assign weights to vertices, not edges. Modify the $\text{DAG-SHORTEST-PATHS}$ procedure so that it finds a longest path in a directed acyclic graph with weighted vertices in linear time.
There are two ways to transform a PERT chart $G = (V, E)$ with weights on the vertices into a PERT chart $G' = (V', E')$ with weights on the edges. Both satisfy $|V'| \le 2|V|$ and $|E'| \le |V| + |E|$, so we can run the same linear-time longest-path algorithm on $G'$.

In the first way, we transform each vertex $v \in V$ into two vertices $v'$ and $v''$ in $V'$. Every edge of $E$ that enters $v$ in $G$ enters $v'$ in $G'$, and every edge that leaves $v$ in $G$ leaves $v''$ in $G'$; thus, if $(u, v) \in E$, then $(u'', v') \in E'$. All such edges have weight $0$. We also put an edge $(v', v'')$ into $E'$ for every vertex $v \in V$, with weight equal to the weight of $v$ in $G$. Thus $|V'| = 2|V|$ and $|E'| = |V| + |E|$, and the edge weight of each path in $G'$ equals the vertex weight of the corresponding path in $G$.

In the second way, we keep the vertices of $V$ but add one new source vertex $s$, so that $V' = V \cup \\{s\\}$. All edges of $E$ are in $E'$, and $E'$ also includes an edge $(s, v)$ for every vertex $v \in V$ that has in-degree $0$ in $G$; thus, the only vertex with in-degree $0$ in $G'$ is the new source $s$. The weight of edge $(u, v) \in E'$ is the weight of vertex $v$ in $G$, so each entering edge in $G'$ carries the weight of the vertex it enters in $G$.
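The first transformation can be sketched directly. The Python code below (the four-job chart at the bottom is made up for illustration) splits each vertex $v$ into $(v, \text{in})$ and $(v, \text{out})$ joined by an edge of weight $w(v)$, then runs the usual topological-order relaxation, maximizing instead of minimizing.

```python
from collections import defaultdict

def longest_path_vertex_weights(vw, prec):
    """vw: job -> weight; prec: (u, v) pairs meaning u before v.
    Returns the weight of a heaviest path in the vertex-weighted DAG."""
    adj = defaultdict(list)                      # the edge-weighted DAG G'
    for v, w in vw.items():
        adj[(v, "in")].append(((v, "out"), w))   # internal edge, weight w(v)
    for u, v in prec:
        adj[(u, "out")].append(((v, "in"), 0))   # precedence edge, weight 0
    order, seen = [], set()                      # topological sort by DFS
    def dfs(u):
        seen.add(u)
        for v, _ in adj[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for v in vw:
        for tag in ("in", "out"):
            if (v, tag) not in seen:
                dfs((v, tag))
    order.reverse()
    best = {u: 0 for u in order}                 # heaviest path ending at u
    for u in order:
        for v, w in adj[u]:
            best[v] = max(best[v], best[u] + w)
    return max(best.values())

# jobs a..d with weights 3, 2, 1, 4; heaviest chain is a, b, d = 9
longest = longest_path_vertex_weights(
    {"a": 3, "b": 2, "c": 1, "d": 4},
    [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
```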
[]
false
[]
24-24.2-4
24
24.2
24.2-4
docs/Chap24/24.2.md
Give an efficient algorithm to count the total number of paths in a directed acyclic graph. Analyze your algorithm.
We compute, for each vertex $v$, the number of paths with at least one edge that end at $v$, stored in an attribute $v.paths$. Assume that initially $v.paths = 0$ for all $v \in V$. Because every edge $(u, v)$ goes from a vertex earlier in the topological order to one later in it, $u.paths$ already has its final value when the edges leaving $u$ are processed: each path ending at $u$ extends through $(u, v)$ to a path ending at $v$, and the edge $(u, v)$ is itself one more such path, which is why the update adds $u.paths + 1$. Topological sort takes $O(V + E)$ and the nested for-loops take $O(V + E)$, so the total runtime is $O(V + E)$.

```cpp
PATHS(G)
    topologically sort the vertices of G
    for each vertex u, taken in topologically sorted order
        for each v ∈ G.Adj[u]
            v.paths = u.paths + 1 + v.paths
    return the sum of all paths attributes
```
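A runnable version of $\text{PATHS}$ in Python, using the standard library's topological sorter; the three-vertex DAG at the bottom is a made-up example whose four nonempty paths are listed in the comment.

```python
from graphlib import TopologicalSorter

def count_paths(vertices, edges):
    """Total number of paths with at least one edge in a DAG."""
    adj = {v: [] for v in vertices}
    preds = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        preds[v].add(u)
    order = TopologicalSorter(preds).static_order()
    paths = {v: 0 for v in vertices}         # paths ending at v
    for u in order:                          # paths[u] is final here
        for v in adj[u]:
            paths[v] += paths[u] + 1
    return sum(paths.values())

# paths: a->b, a->c, b->c, a->b->c
total = count_paths("abc", [("a", "b"), ("a", "c"), ("b", "c")])
```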
[ { "lang": "cpp", "code": "PATHS(G)\n topologically sort the vertices of G\n for each vertex u, taken in topologically sorted order\n for each v ∈ G.Adj[u]\n v.paths = u.paths + 1 + v.paths\n return the sum of all paths attributes" } ]
false
[]
24-24.3-1
24
24.3
24.3-1
docs/Chap24/24.3.md
Run Dijkstra's algorithm on the directed graph of Figure 24.2, first using vertex $s$ as the source and then using vertex $z$ as the source. In the style of Figure 24.6, show the $d$ and $\pi$ values and the vertices in set $S$ after each iteration of the **while** loop.
- $s$ as the source: - $d$ values: $$ \begin{array}{ccccc} s & t & x & y & z \\\\ \hline 0 & 3 & \infty & 5 & \infty \\\\ 0 & 3 & 9 & 5 & \infty \\\\ 0 & 3 & 9 & 5 & 11 \\\\ 0 & 3 & 9 & 5 & 11 \\\\ 0 & 3 & 9 & 5 & 11 \end{array} $$ - $\pi$ values: $$ \begin{array}{ccccc} s & t & x & y & z \\\\ \hline \text{NIL} & s & \text{NIL} & \text{NIL} & \text{NIL} \\\\ \text{NIL} & s & t & s & \text{NIL} \\\\ \text{NIL} & s & t & s & y \\\\ \text{NIL} & s & t & s & y \\\\ \text{NIL} & s & t & s & y \end{array} $$ - $z$ as the source: - $d$ values: $$ \begin{array}{ccccc} s & t & x & y & z \\\\ \hline 3 & \infty & 7 & \infty & 0 \\\\ 3 & 6 & 7 & 8 & 0 \\\\ 3 & 6 & 7 & 8 & 0 \\\\ 3 & 6 & 7 & 8 & 0 \\\\ 3 & 6 & 7 & 8 & 0 \end{array} $$ - $\pi$ values: $$ \begin{array}{ccccc} s & t & x & y & z \\\\ \hline z & \text{NIL} & z & \text{NIL} & \text{NIL} \\\\ z & s & z & s & \text{NIL} \\\\ z & s & z & s & \text{NIL} \\\\ z & s & z & s & \text{NIL} \\\\ z & s & z & s & \text{NIL} \end{array} $$
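Both final rows can be checked with a straightforward Dijkstra implementation (linear-scan $\text{EXTRACT-MIN}$); the adjacency list below is our transcription of Figure 24.2 and should be treated as an assumption.

```python
INF = float("inf")

def dijkstra(vertices, adj, s):
    d = {v: INF for v in vertices}
    d[s] = 0
    q = set(vertices)
    while q:
        u = min(q, key=lambda v: d[v])       # EXTRACT-MIN by linear scan
        q.remove(u)
        for v, w in adj.get(u, []):          # RELAX each edge leaving u
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

# Figure 24.2 (our transcription)
adj = {"s": [("t", 3), ("y", 5)],
       "t": [("x", 6), ("y", 2)],
       "y": [("t", 1), ("x", 4), ("z", 6)],
       "x": [("z", 2)],
       "z": [("x", 7), ("s", 3)]}
from_s = dijkstra("stxyz", adj, "s")
from_z = dijkstra("stxyz", adj, "z")
```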
[]
false
[]
24-24.3-2
24
24.3
24.3-2
docs/Chap24/24.3.md
Give a simple example of a directed graph with negative-weight edges for which Dijkstra's algorithm produces incorrect answers. Why doesn't the proof of Theorem 24.6 go through when negative-weight edges are allowed?
Consider any graph with a negative-weight cycle reachable from the source. $\text{RELAX}$ is called only a finite number of times, yet the shortest-path weight of every vertex on the cycle is $-\infty$, so Dijkstra's algorithm must report an incorrect (finite) value for at least one of them. The proof of Theorem 24.6 does not go through because, with negative-weight edges, we can no longer guarantee that

$$\delta(s, y) \le \delta(s, u).$$
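In fact a failure can be exhibited even without a negative-weight cycle, as soon as one negative edge invalidates the greedy choice; the four-vertex graph below is a made-up example, compared against Bellman-Ford.

```python
INF = float("inf")

def dijkstra(vertices, adj, s):
    d = {v: INF for v in vertices}
    d[s] = 0
    q = set(vertices)
    while q:
        u = min(q, key=lambda v: d[v])
        q.remove(u)
        for v, w in adj.get(u, []):
            if v in q and d[u] + w < d[v]:   # never revisit finalized vertices
                d[v] = d[u] + w
    return d

def bellman_ford(vertices, edges, s):
    d = {v: INF for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

# a is finalized at distance 2 before the cheaper route s -> b -> a
# (weight 1) is seen, so the estimate for c is never repaired
adj = {"s": [("a", 2), ("b", 3)], "a": [("c", 1)], "b": [("a", -2)]}
edges = [(u, v, w) for u, nbrs in adj.items() for v, w in nbrs]
wrong = dijkstra("sabc", adj, "s")           # reports c at distance 3
right = bellman_ford("sabc", edges, "s")     # true distance to c is 2
```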
[]
false
[]
24-24.3-3
24
24.3
24.3-3
docs/Chap24/24.3.md
Suppose we change line 4 of Dijkstra's algorithm to the following. ```cpp 4 while |Q| > 1 ``` This change causes the **while** loop to execute $|V| - 1$ times instead of $|V|$ times. Is this proposed algorithm correct?
Yes, the algorithm is correct. Let $u$ be the leftover vertex that does not get extracted from the priority queue $Q$. If $u$ is not reachable from $s$, then

$$u.d = \delta(s, u) = \infty.$$

If $u$ is reachable from $s$, then there is a shortest path

$$p = s \leadsto x \rightarrow u,$$

where $x$ is the predecessor of $u$ on $p$. Since $x \ne u$, the vertex $x$ was extracted from $Q$; at that moment

$$x.d = \delta(s, x),$$

and the edge $(x, u)$ was then relaxed; thus,

$$u.d = \delta(s, u).$$
[ { "lang": "cpp", "code": "> 4 while |Q| > 1\n>" } ]
false
[]