Fields per record: id, chapter, section, title, source_file, question_markdown, answer_markdown, code_blocks, has_images, image_refs.
05-5.2-2
05
5.2
5.2-2
docs/Chap05/5.2.md
In $\text{HIRE-ASSISTANT}$, assuming that the candidates are presented in a random order, what is the probability that you hire exactly twice?
Note that - Candidate $1$ is always hired. - The best candidate (the one whose rank is $n$) is always hired. - If the best candidate is candidate $1$, then that's the only candidate hired. In order for $\text{HIRE-ASSISTANT}$ to hire exactly twice, candidate $1$ must have rank $i$, where $1 \le i \le n - 1$, and all candidates whose ranks are $i + 1, i + 2, \dots, n - 1$ must be interviewed after the candidate whose rank is $n$ (the best candidate). Let $E_i$ be the event that candidate $1$ has rank $i$; we have $P(E_i) = 1 / n$ for $1 \le i \le n$. Given that $E_i$ occurs for some $1 \le i \le n - 1$, exactly two hirings happen precisely when the candidate whose rank is $n$ (the best candidate) is the first one interviewed out of the $n - i$ candidates whose ranks are $i + 1, i + 2, \dots, n$, which has probability $\frac{1}{n - i}$. So the desired probability is $$\sum_{i = 1}^{n - 1} P(E_i) \cdot \frac{1}{n - i} = \sum_{i = 1}^{n - 1} \frac{1}{n} \cdot \frac{1}{n - i}.$$
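As a sanity check, here is a minimal C++ Monte Carlo sketch (names and constants are illustrative, not from the text) that simulates $\text{HIRE-ASSISTANT}$ on uniformly random rank orders and compares the frequency of exactly two hires against the sum above:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Simulate HIRE-ASSISTANT on random rank orders and count runs that
// hire exactly twice; compare with sum_{i=1}^{n-1} 1/(n(n-i)).
int main() {
    const int n = 10, trials = 1000000;
    std::mt19937 gen(42);
    std::vector<int> rank(n);
    std::iota(rank.begin(), rank.end(), 1);
    int exactlyTwo = 0;
    for (int t = 0; t < trials; t++) {
        std::shuffle(rank.begin(), rank.end(), gen);
        int best = 0, hires = 0;
        for (int r : rank)
            if (r > best) { best = r; hires++; }   // a better candidate is hired
        if (hires == 2) exactlyTwo++;
    }
    double exact = 0;
    for (int i = 1; i <= n - 1; i++) exact += 1.0 / (n * (n - i));
    std::cout << "simulated: " << double(exactlyTwo) / trials
              << "  formula: " << exact << '\n';
}
```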
[]
false
[]
05-5.2-3
05
5.2
5.2-3
docs/Chap05/5.2.md
Use indicator random variables to compute the expected value of the sum of $n$ dice.
The expectation of a single die $X_k$ is $$ \begin{aligned} \text E[X_k] & = \sum_{i = 1}^6 i \Pr\\{X_k = i\\} \\\\ & = \frac{1 + 2 + 3 + 4 + 5 + 6}{6} \\\\ & = \frac{21}{6} \\\\ & = 3.5. \end{aligned} $$ For the sum $X = \sum_{i = 1}^n X_i$ of $n$ dice, linearity of expectation gives $$ \begin{aligned} \text E[X] & = \text E\Bigg[\sum_{i = 1}^n X_i \Bigg] \\\\ & = \sum_{i = 1}^n \text E[X_i] \\\\ & = \sum_{i = 1}^n 3.5 \\\\ & = 3.5 \cdot n. \end{aligned} $$
[]
false
[]
05-5.2-4
05
5.2
5.2-4
docs/Chap05/5.2.md
Use indicator random variables to solve the following problem, which is known as the **_hat-check problem_**. Each of $n$ customers gives a hat to a hat-check person at a restaurant. The hat-check person gives the hats back to the customers in a random order. What is the expected number of customers who get back their hat?
Let $X$ be the number of customers who get back their own hat and $X_i$ be the indicator random variable that customer $i$ gets his hat back. The probability that an individual gets his hat back is $\frac{1}{n}$. Thus we have $$E[X] = E\Bigg[\sum_{i = 1}^n X_i\Bigg] = \sum_{i = 1}^n E[X_i] = \sum_{i = 1}^n \frac{1}{n} = 1.$$
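A minimal simulation sketch (illustrative, not from the text): return the hats in a uniformly random order and average the number of fixed points; the average should hover around $1$ for any $n$:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Shuffle the hats uniformly and average the number of customers
// who get their own hat back; the mean should be close to 1.
int main() {
    const int n = 50, trials = 200000;
    std::mt19937 gen(1);
    std::vector<int> hat(n);
    std::iota(hat.begin(), hat.end(), 0);
    long long matches = 0;
    for (int t = 0; t < trials; t++) {
        std::shuffle(hat.begin(), hat.end(), gen);
        for (int i = 0; i < n; i++)
            if (hat[i] == i) matches++;   // customer i got hat i back
    }
    std::cout << "average matches: " << double(matches) / trials << '\n';
}
```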
[]
false
[]
05-5.2-5
05
5.2
5.2-5
docs/Chap05/5.2.md
Let $A[1..n]$ be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i, j)$ is called an **_inversion_** of $A$. (See Problem 2-4 for more on inversions.) Suppose that the elements of $A$ form a uniform random permutation of $\langle 1, 2, \ldots, n \rangle$. Use indicator random variables to compute the expected number of inversions.
Let $X_{i, j}$ for $i < j$ be the indicator of $A[i] > A[j]$. By symmetry of the uniform random permutation, $\Pr\\{A[i] > A[j]\\} = 1 / 2$. The expected number of inversions is $$ \begin{aligned} \text E\Bigg[\sum_{i < j} X_{i, j}\Bigg] & = \sum_{i < j} \text E[X_{i, j}] \\\\ & = \sum_{i = 1}^{n - 1}\sum_{j = i + 1}^n \Pr\\{A[i] > A[j]\\} \\\\ & = \frac{1}{2} \sum_{i = 1}^{n - 1} (n - i) \\\\ & = \frac{n(n - 1)}{2} - \frac{n(n - 1)}{4} \\\\ & = \frac{n(n - 1)}{4}. \end{aligned} $$
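A small exhaustive check (illustrative): average the inversion count over all $n!$ permutations for a small $n$ and compare with $n(n - 1) / 4$:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// Average the inversion count over all n! permutations and compare
// with n(n-1)/4 (exhaustive, so only feasible for small n).
int main() {
    const int n = 7;
    std::vector<int> a(n);
    std::iota(a.begin(), a.end(), 1);
    long long total = 0, perms = 0;
    do {
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (a[i] > a[j]) total++;   // (i, j) is an inversion
        perms++;
    } while (std::next_permutation(a.begin(), a.end()));
    std::cout << "average: " << double(total) / perms
              << "  n(n-1)/4 = " << n * (n - 1) / 4.0 << '\n';
}
```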
[]
false
[]
05-5.3-1
05
5.3
5.3-1
docs/Chap05/5.3.md
Professor Marceau objects to the loop invariant used in the proof of Lemma 5.5. He questions whether it is true prior to the first iteration. He reasons that we could just as easily declare that an empty subarray contains no $0$-permutations. Therefore, the probability that an empty subarray contains a $0$-permutation should be $0$, thus invalidating the loop invariant prior to the first iteration. Rewrite the procedure $\text{RANDOMIZE-IN-PLACE}$ so that its associated loop invariant applies to a nonempty subarray prior to the first iteration, and modify the proof of Lemma 5.5 for your procedure.
Modify the algorithm by unrolling the $i = 1$ case. ```cpp swap(A[1], A[RANDOM(1, n)]) for i = 2 to n swap(A[i], A[RANDOM(i, n)]) ``` Modify the proof of Lemma 5.5 by starting with $i = 2$ instead of $i = 1$: prior to the first iteration, the subarray $A[1..1]$ contains each $1$-permutation with probability $1 / n$, since the unrolled swap places a uniformly random element of $A$ in $A[1]$. This resolves the issue of $0$-permutations.
[ { "lang": "cpp", "code": "swap(A[1], A[RANDOM(1, n)])\nfor i = 2 to n\n swap(A[i], A[RANDOM(i, n)])" } ]
false
[]
05-5.3-2
05
5.3
5.3-2
docs/Chap05/5.3.md
Professor Kelp decides to write a procedure that produces at random any permutation besides the identity permutation. He proposes the following procedure: ```cpp PERMUTE-WITHOUT-IDENTITY(A) n = A.length for i = 1 to n - 1 swap A[i] with A[RANDOM(i + 1, n)] ``` Does this code do what Professor Kelp intends?
The code does not do what he intends. Suppose $A = [1, 2, 3]$. If the algorithm worked as proposed, then with nonzero probability the algorithm should output $[3, 2, 1]$. On the first iteration we swap $A[1]$ with either $A[2]$ or $A[3]$. Since we want $[3, 2, 1]$ and will never again alter $A[1]$, we must necessarily swap with $A[3]$. Now the current array is $[3, 2, 1]$. On the second (and final) iteration, we have no choice but to swap $A[2]$ with $A[3]$, so the resulting array is $[3, 1, 2]$. Thus, the procedure cannot possibly be producing random non-identity permutations.
[ { "lang": "cpp", "code": "> PERMUTE-WITHOUT-IDENTITY(A)\n> n = A.length\n> for i = 1 to n - 1\n> swap A[i] with A[RANDOM(i + 1, n)]\n>" } ]
false
[]
05-5.3-3
05
5.3
5.3-3
docs/Chap05/5.3.md
Suppose that instead of swapping element $A[i]$ with a random element from the subarray $A[i..n]$, we swapped it with a random element from anywhere in the array: ```cpp PERMUTE-WITH-ALL(A) n = A.length for i = 1 to n swap A[i] with A[RANDOM(1, n)] ``` Does this code produce a uniform random permutation? Why or why not?
Consider the case $n = 3$. The algorithm makes three independent uniform choices, so there are $3^3 = 27$ equally likely execution paths. There are $3! = 6$ possible orderings, and for the output to be uniform each ordering would have to be produced by the same number of paths; this is impossible because $6$ does not divide $27$. A short enumeration (below) confirms the counts are unequal.
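The enumeration mentioned above, as a short sketch (illustrative): tally all $27$ equally likely runs of $\text{PERMUTE-WITH-ALL}$ on $\langle 1, 2, 3 \rangle$:

```cpp
#include <iostream>
#include <map>
#include <vector>

// Enumerate all 3^3 = 27 equally likely runs of PERMUTE-WITH-ALL on
// A = [1,2,3] and tally the resulting permutations: the counts cannot
// all be equal, since 6 does not divide 27.
int main() {
    std::map<std::vector<int>, int> freq;
    for (int r1 = 0; r1 < 3; r1++)
        for (int r2 = 0; r2 < 3; r2++)
            for (int r3 = 0; r3 < 3; r3++) {
                std::vector<int> a = {1, 2, 3};
                std::swap(a[0], a[r1]);   // iteration i = 1
                std::swap(a[1], a[r2]);   // iteration i = 2
                std::swap(a[2], a[r3]);   // iteration i = 3
                freq[a]++;
            }
    for (auto& [perm, count] : freq) {
        for (int x : perm) std::cout << x << ' ';
        std::cout << ": " << count << "/27\n";
    }
}
```

The tallies come out as $4$ or $5$ out of $27$ per permutation, so the distribution is not uniform.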
[ { "lang": "cpp", "code": "> PERMUTE-WITH-ALL(A)\n> n = A.length\n> for i = 1 to n\n> swap A[i] with A[RANDOM(1, n)]\n>" } ]
false
[]
05-5.3-4
05
5.3
5.3-4
docs/Chap05/5.3.md
Professor Armstrong suggests the following procedure for generating a uniform random permutation: ```cpp PERMUTE-BY-CYCLIC(A) n = A.length let B[1..n] be a new array offset = RANDOM(1, n) for i = 1 to n dest = i + offset if dest > n dest = dest - n B[dest] = A[i] return B ``` Show that each element $A[i]$ has a $1 / n$ probability of winding up in any particular position in $B$. Then show that Professor Armstrong is mistaken by showing that the resulting permutation is not uniformly random.
Fix a position $j$ and an index $i$. We'll show that the probability that $A[i]$ winds up in position $j$ is $1 / n$. The probability $B[j] = A[i]$ is the probability that $dest = j$, which is the probability that $i + offset$ or $i + offset − n$ is equal to $j$, which is $1 / n$. This algorithm can't possibly return a uniformly random permutation because it doesn't change the relative positions of the elements; it merely cyclically permutes the whole permutation. For instance, suppose $A = [1, 2, 3]$, - if $offset = 1$, $B = [3, 1, 2]$, - if $offset = 2$, $B = [2, 3, 1]$, - if $offset = 3$, $B = [1, 2, 3]$. Thus, the algorithm will never produce $B = [1, 3, 2]$, so the resulting permutation cannot be uniformly random.
[ { "lang": "cpp", "code": "> PERMUTE-BY-CYCLIC(A)\n> n = A.length\n> let B[1..n] be a new array\n> offset = RANDOM(1, n)\n> for i = 1 to n\n> dest = i + offset\n> if dest > n\n> dest = dest - n\n> B[dest] = A[i]\n> return B\n>" } ]
false
[]
05-5.3-5
05
5.3
5.3-5 $\star$
docs/Chap05/5.3.md
Prove that in the array $P$ in procedure $\text{PERMUTE-BY-SORTING}$, the probability that all elements are unique is at least $1 - 1 / n$.
The priorities are drawn independently and uniformly from $\\{1, 2, \ldots, n^3\\}$. Let event $j$ be that the $j$th priority differs from each of the previous $j - 1$ priorities; given that the previous ones are distinct, $\Pr\\{j\\} = 1 - \frac{j - 1}{n^3}$. Hence $$ \begin{aligned} \Pr\\{1 \cap 2 \cap 3 \cap \ldots\\} & = \Pr\\{1\\} \cdot \Pr\\{2 \mid 1\\} \cdot \Pr\\{3 \mid 1 \cap 2\\} \cdots \\\\ & = 1 (1 - \frac{1}{n^3})(1 - \frac{2}{n^3})(1 - \frac{3}{n^3}) \cdots \\\\ & \ge 1 (1 - \frac{n}{n^3}) (1 - \frac{n}{n^3})(1 - \frac{n}{n^3}) \cdots \\\\ & = (1 - \frac{1}{n^2})^n \\\\ & \ge 1 - \frac{1}{n}, \\\\ \end{aligned} $$ where the last step follows from Bernoulli's inequality, $(1 - x)^n \ge 1 - nx$ for $0 \le x \le 1$. A numeric check follows.
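The numeric check mentioned above (a minimal sketch, assuming priorities drawn from $\\{1, \ldots, n^3\\}$):

```cpp
#include <iostream>

// Exactly evaluate Pr{all n priorities distinct} =
// prod_{j=1}^{n} (1 - (j-1)/n^3) and compare with the 1 - 1/n bound.
int main() {
    for (int n : {10, 100, 1000}) {
        double cube = double(n) * n * n, p = 1.0;
        for (int j = 1; j <= n; j++)
            p *= 1.0 - (j - 1) / cube;
        std::cout << "n = " << n << ": Pr = " << p
                  << "  bound 1 - 1/n = " << 1.0 - 1.0 / n << '\n';
    }
}
```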
[]
false
[]
05-5.3-6
05
5.3
5.3-6
docs/Chap05/5.3.md
Explain how to implement the algorithm $\text{PERMUTE-BY-SORTING}$ to handle the case in which two or more priorities are identical. That is, your algorithm should produce a uniform random permutation, even if two or more priorities are identical.
One way is to sidestep the priorities (and hence the possibility of ties) entirely and permute in place; the following is exactly $\text{RANDOMIZE-IN-PLACE}$ applied to the permutation array $P$. ```cpp PERMUTE-BY-SORTING(A) let P[1..n] be a new array for i = 1 to n P[i] = i for i = 1 to n swap P[i] with P[RANDOM(i, n)] ```
[ { "lang": "cpp", "code": "PERMUTE-BY-SORTING(A)\n let P[1..n] be a new array\n for i = 1 to n\n P[i] = i\n for i = 1 to n\n swap P[i] with P[RANDOM(i, n)]" } ]
false
[]
05-5.3-7
05
5.3
5.3-7
docs/Chap05/5.3.md
Suppose we want to create a **_random sample_** of the set $\\{1, 2, 3, \ldots, n\\}$, that is, an $m$-element subset $S$, where $0 \le m \le n$, such that each $m$-subset is equally likely to be created. One way would be to set $A[i] = i$ for $i = 1, 2, 3, \ldots, n$, call $\text{RANDOMIZE-IN-PLACE}(A)$, and then take just the first $m$ array elements. This method would make $n$ calls to the $\text{RANDOM}$ procedure. If $n$ is much larger than $m$, we can create a random sample with fewer calls to $\text{RANDOM}$. Show that the following recursive procedure returns a random $m$-subset $S$ of $\\{1, 2, 3, \ldots, n\\}$, in which each $m$-subset is equally likely, while making only $m$ calls to $\text{RANDOM}$: ```cpp RANDOM-SAMPLE(m, n) if m == 0 return Ø else S = RANDOM-SAMPLE(m - 1, n - 1) i = RANDOM(1, n) if i ∈ S S = S ∪ {n} else S = S ∪ {i} return S ```
We prove that it produces a uniformly random $m$-subset by induction on $m$. It is obviously true if $m = 0$, as there is only one size-$0$ subset of $[n]$. Suppose $S$ is a uniform $(m − 1)$-subset of $[n − 1]$, that is, $\forall j \in [n - 1]$, $\Pr[j \in S] = \frac{m - 1}{n - 1}$. Let $S'$ denote the returned set, and suppose first $j \in [n − 1]$. Then $$ \begin{aligned} \Pr[j \in S'] & = \Pr[j \in S] + \Pr[j \notin S \wedge i = j] \\\\ & = \frac{m - 1}{n - 1} + \Pr[j \notin S]\Pr[i = j] \\\\ & = \frac{m - 1}{n - 1} + \left(1 - \frac{m - 1}{n - 1}\right) \frac{1}{n} \\\\ & = \frac{n(m - 1) + n - m}{(n - 1)n} \\\\ & = \frac{nm - m}{(n - 1)n} = \frac{m}{n}. \end{aligned} $$ Since the constructed subset contains each element of $[n − 1]$ with the correct probability, it must also contain $n$ with the correct probability, because the $n$ membership probabilities sum to $\text E[|S'|] = m$. A sketch with an empirical uniformity check appears below.
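A direct C++ translation of $\text{RANDOM-SAMPLE}$ plus an empirical uniformity check (helper names are mine, not from the text):

```cpp
#include <iostream>
#include <map>
#include <random>
#include <set>

std::mt19937 gen(7);

// Direct translation of RANDOM-SAMPLE: exactly m calls to RANDOM in total.
std::set<int> randomSample(int m, int n) {
    if (m == 0) return {};
    std::set<int> s = randomSample(m - 1, n - 1);
    int i = std::uniform_int_distribution<int>(1, n)(gen);
    s.insert(s.count(i) ? n : i);   // if i already in S, add n instead
    return s;
}

int main() {
    // Tally 2-subsets of {1..4}; all C(4,2) = 6 should be ~equally likely.
    std::map<std::set<int>, int> freq;
    const int trials = 600000;
    for (int t = 0; t < trials; t++) freq[randomSample(2, 4)]++;
    for (auto& [s, c] : freq) {
        for (int x : s) std::cout << x << ' ';
        std::cout << ": " << double(c) / trials << '\n';
    }
}
```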
[ { "lang": "cpp", "code": "> RANDOM-SAMPLE(m, n)\n> if m == 0\n> return Ø\n> else S = RANDOM-SAMPLE(m - 1, n - 1)\n> i = RANDOM(1, n)\n> if i ∈ S\n> S = S ∪ {n}\n> else S = S ∪ {i}\n> return S\n>" } ]
false
[]
05-5.4-1
05
5.4
5.4-1
docs/Chap05/5.4.md
How many people must there be in a room before the probability that someone has the same birthday as you do is at least $1 / 2$? How many people must there be before the probability that at least two people have a birthday on July 4 is greater than $1 / 2$?
The probability that one person does not have the same birthday as me is $(n - 1) / n$, so the probability that none of $k$ people has the same birthday as me is that quantity raised to the $k$th power. We apply the same approach as the text - we take the complementary event and solve it for $k$: $$ \begin{aligned} 1 - \big(\frac{n - 1}{n}\big)^k & \ge \frac{1}{2} \\\\ \big(\frac{n - 1}{n}\big)^k & \le \frac{1}{2} \\\\ k\lg\big(\frac{n - 1}{n}\big) & \le \lg\frac{1}{2} \\\\ k & \ge \frac{\lg(1 / 2)}{\lg(364 / 365)} \approx 252.65, \end{aligned} $$ where the inequality flips on division because $\lg\frac{n - 1}{n}$ is negative; hence $k = 253$ people suffice. As for the other question, $$ \begin{aligned} \Pr\\{\text{at least 2 born on Jul 4}\\} & = 1 - \Pr\\{\text{exactly 1 born on Jul 4}\\} - \Pr\\{\text{0 born on Jul 4}\\} \\\\ & = 1 - \frac{k}{n}\big(\frac{n - 1}{n}\big)^{k - 1} - \big(\frac{n - 1}{n}\big)^k \\\\ & = 1 - \big(\frac{n - 1}{n}\big)^{k - 1}\big(\frac{n + k - 1}{n}\big). \end{aligned} $$ Solving numerically for the smallest $k$ that pushes this above $1 / 2$ (see the sketch below) gives $k = 613$.
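The sketch referenced above (assuming $n = 365$; constants and names are illustrative):

```cpp
#include <cmath>
#include <iostream>

// Smallest k with 1 - (364/365)^k >= 1/2, and smallest k with
// Pr{at least two of k people born on July 4} > 1/2.
int main() {
    int k = 1;
    while (1.0 - std::pow(364.0 / 365.0, k) < 0.5) k++;
    std::cout << "same birthday as me: k = " << k << '\n';   // 253

    k = 1;
    while (true) {
        double p0 = std::pow(364.0 / 365.0, k);               // nobody on Jul 4
        double p1 = k * std::pow(364.0 / 365.0, k - 1) / 365; // exactly one
        if (1.0 - p0 - p1 > 0.5) break;
        k++;
    }
    std::cout << "two born on July 4: k = " << k << '\n';     // 613
}
```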
[]
false
[]
05-5.4-2
05
5.4
5.4-2
docs/Chap05/5.4.md
Suppose that we toss balls into $b$ bins until some bin contains two balls. Each toss is independent, and each ball is equally likely to end up in any bin. What is the expected number of ball tosses?
This is the birthday problem in disguise: a toss that lands in an occupied bin corresponds to two people sharing a birthday, with $b$ playing the role of the number of days. Letting $X$ be the number of tosses until some bin contains two balls, $\Pr\\{X > k\\} = \prod_{i = 1}^{k - 1}\big(1 - \frac{i}{b}\big)$ (the first $k$ tosses all land in distinct bins), so $\text E[X] = \sum_{k \ge 0} \Pr\\{X > k\\}$, which by the birthday-paradox analysis is $\Theta(\sqrt b)$.
[]
false
[]
05-5.4-3
05
5.4
5.4-3 $\star$
docs/Chap05/5.4.md
For the analysis of the birthday paradox, is it important that the birthdays be mutually independent, or is pairwise independence sufficient? Justify your answer.
Pairwise independence is sufficient. The derivation following equation $\text{(5.6)}$ only uses the probability that two given people have the same birthday, and that probability is determined by the pairwise joint distributions alone.
[]
false
[]
05-5.4-4
05
5.4
5.4-4 $\star$
docs/Chap05/5.4.md
How many people should be invited to a party in order to make it likely that there are $three$ people with the same birthday?
The answer is $88$; it can be found by computing the exact probability that no birthday is shared by three people (see the sketch below). But let's approximate it with indicator random variables. Let $n = 365$ be the number of days and $k$ the number of people, and let $X_{ijl}$ be the indicator random variable for the event that people $i$, $j$ and $l$ share a birthday, which happens with probability $1 / n^2$. Then, $$ \begin{aligned} \text E[X] & = \text E\Bigg[\sum_{i = 1}^k\sum_{j = i + 1}^k \sum_{l = j + 1}^k X_{ijl}\Bigg] \\\\ & = \sum_{i = 1}^k\sum_{j = i + 1}^k \sum_{l = j + 1}^k \frac{1}{n^2} \\\\ & = \binom{k}{3}\frac{1}{n^2} \\\\ & = \frac{k(k - 1)(k - 2)}{6n^2}. \end{aligned} $$ Setting this expectation to $1$ and solving yields $k \approx 94$ - a bit more than $88$, since making the expected number of triples equal to $1$ is only a heuristic for the median.
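The sketch referenced above. The no-three-share formula used here is my own derivation, not from the text: partition the $k$ people into $j$ disjoint birthday-pairs and $k - 2j$ singletons, all on distinct days, and evaluate in log-space to avoid overflow:

```cpp
#include <cmath>
#include <iostream>

// Pr{no birthday is shared by 3 or more of k people}: sum over j of
//   k!/(2^j j! (k-2j)!) * n(n-1)...(n-(k-j)+1) / n^k
// (assumed formula: j disjoint pairs plus singletons on distinct days).
double noTriple(int k, int n = 365) {
    double p = 0;
    for (int j = 0; 2 * j <= k; j++) {
        double logTerm = std::lgamma(k + 1) - j * std::log(2.0)
                       - std::lgamma(j + 1) - std::lgamma(k - 2 * j + 1)
                       + std::lgamma(n + 1) - std::lgamma(n - (k - j) + 1)
                       - k * std::log((double)n);
        p += std::exp(logTerm);
    }
    return p;
}

int main() {
    int k = 1;
    while (1.0 - noTriple(k) < 0.5) k++;
    std::cout << "smallest k: " << k << '\n';   // 88
}
```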
[]
false
[]
05-5.4-5
05
5.4
5.4-5 $\star$
docs/Chap05/5.4.md
What is the probability that a $k$-string over a set of size $n$ forms a $k$-permutation? How does this question relate to the birthday paradox?
$$ \begin{aligned} \Pr\\{k\text{-perm in }n\\} & = 1 \cdot \frac{n - 1}{n} \cdot \frac{n - 2}{n} \cdots \frac{n - k + 1}{n} \\\\ & = \frac{n!}{n^k(n - k)!}. \end{aligned} $$ This is the complementary event to the birthday problem, that is, the probability that $k$ people all have distinct birthdays.
[]
false
[]
05-5.4-6
05
5.4
5.4-6 $\star$
docs/Chap05/5.4.md
Suppose that $n$ balls are tossed into $n$ bins, where each toss is independent and the ball is equally likely to end up in any bin. What is the expected number of empty bins? What is the expected number of bins with exactly one ball?
Let $X_i$ be the indicator variable that bin $i$ is empty after all balls are tossed and $X$ be the random variable that gives the number of empty bins. Thus we have $$E[X] = \sum_{i = 1}^n E[X_i] = \sum_{i = 1}^n \bigg(\frac{n - 1}{n}\bigg)^n = n\bigg(\frac{n - 1}{n}\bigg)^n.$$ Let $X_i$ be the indicator variable that bin $i$ contains exactly $1$ ball after all balls are tossed and $X$ be the random variable that gives the number of bins containing exactly $1$ ball. Thus we have $$E[X] = \sum_{i = 1}^n E[X_i] = \sum_{i = 1}^n \binom{n}{1}\bigg(\frac{n - 1}{n}\bigg)^{n - 1} \frac{1}{n} = n\bigg(\frac{n - 1}{n}\bigg)^{n - 1},$$ because we need to choose which toss will go into bin $i$, then multiply by the probability that that toss goes into that bin and the remaining $n − 1$ tosses avoid it.
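A quick simulation sketch (illustrative) comparing both empirical counts with the two formulas:

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Toss n balls into n bins; average the number of empty bins and of
// bins with exactly one ball, and compare with n((n-1)/n)^n and
// n((n-1)/n)^(n-1) respectively.
int main() {
    const int n = 100, trials = 20000;
    std::mt19937 gen(3);
    std::uniform_int_distribution<int> bin(0, n - 1);
    double empty = 0, single = 0;
    std::vector<int> count(n);
    for (int t = 0; t < trials; t++) {
        std::fill(count.begin(), count.end(), 0);
        for (int b = 0; b < n; b++) count[bin(gen)]++;
        for (int c : count) { empty += (c == 0); single += (c == 1); }
    }
    std::cout << "empty: " << empty / trials << " vs "
              << n * std::pow((n - 1.0) / n, n) << '\n'
              << "single: " << single / trials << " vs "
              << n * std::pow((n - 1.0) / n, n - 1) << '\n';
}
```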
[]
false
[]
05-5.4-7
05
5.4
5.4-7 $\star$
docs/Chap05/5.4.md
Sharpen the lower bound on streak length by showing that in $n$ flips of a fair coin, the probability is less than $1 / n$ that no streak longer than $\lg n - 2\lg\lg n$ consecutive heads occurs.
We split the $n$ flips into $n / s$ disjoint groups of $s = \lg n - 2\lg\lg n$ consecutive flips each and show that, with probability at least $1 - 1 / n$, at least one group comes up all heads. The probability that the group starting in position $i$ comes up all heads is $$\Pr(A_{i, s}) = \frac{1}{2^{\lg n - 2\lg\lg n}} = \frac{\lg^2 n}{n}.$$ Since the groups are based on disjoint sets of independent coin flips, these events are independent, so $$ \begin{aligned} \Pr\Big(\bigwedge_i\neg A_{i, s}\Big) & = \prod_i\Pr(\neg A_{i, s}) \\\\ & = \Big(1-\frac{\lg^2 n}{n}\Big)^{\frac{n}{\lg n - 2\lg\lg n}} \\\\ & \le e^{-\frac{\lg^2 n}{\lg n - 2\lg\lg n}} \\\\ & \le 2^{-\frac{\lg^2 n}{\lg n - 2\lg\lg n}} \\\\ & = n^{-\frac{\lg n}{\lg n - 2\lg\lg n}} \\\\ & = n^{-1-\frac{2\lg\lg n}{\lg n - 2\lg\lg n}} \\\\ & < n^{-1}, \end{aligned} $$ using $1 - x \le e^{-x}$ and $e^{-x} \le 2^{-x}$ for $x \ge 0$. Thus the probability that no streak of at least $\lg n - 2\lg\lg n$ consecutive heads occurs is less than $1 / n$.
[]
false
[]
05-5-1
05
5-1
5-1
docs/Chap05/Problems/5-1.md
With a $b$-bit counter, we can ordinarily only count up to $2^b - 1$. With R. Morris's **_probabilistic counting_**, we can count up to a much larger value at the expense of some loss of precision. We let a counter value of $i$ represent a count of $n_i$ for $i = 0, 1, \ldots, 2^b - 1$, where the $n_i$ form an increasing sequence of nonnegative values. We assume that the initial value of the counter is $0$, representing a count of $n_0 = 0$. The $\text{INCREMENT}$ operation works on a counter containing the value $i$ in a probabilistic manner. If $i = 2^b - 1$, then the operation reports an overflow error. Otherwise, the $\text{INCREMENT}$ operation increases the counter by $1$ with probability $1 / (n_{i + 1} - n_i)$, and it leaves the counter unchanged with probability $1 - 1 / (n_{i + 1} - n_i)$. If we select $n_i = i$ for all $i \ge 0$, then the counter is an ordinary one. More interesting situations arise if we select, say, $n_i = 2^{i - 1}$ for $i > 0$ or $n_i = F_i$ (the $i$th Fibonacci number—see Section 3.2). For this problem, assume that $n_{2^b - 1}$ is large enough that the probability of an overflow error is negligible. **a.** Show that the expected value represented by the counter after $n$ $\text{INCREMENT}$ operations have been performed is exactly $n$. **b.** The analysis of the variance of the count represented by the counter depends on the sequence of the $n_i$. Let us consider a simple case: $n_i = 100i$ for all $i \ge 0$. Estimate the variance in the value represented by the register after $n$ $\text{INCREMENT}$ operations have been performed.
**a.** To show that the expected value represented by the counter after $n$ $\text{INCREMENT}$ operations is exactly $n$, it suffices to show that each $\text{INCREMENT}$ increases the expected represented value by exactly $1$. If the counter currently holds $i$, an $\text{INCREMENT}$ raises the represented value from $n_i$ to $n_{i + 1}$ with probability $\frac{1}{n_{i + 1} - n_i}$ and leaves it unchanged otherwise, so the expected increase is $$ \frac{1}{n_{i + 1} - n_i}(n_{i + 1} - n_i) + \Big(1 - \frac{1}{n_{i + 1} - n_i}\Big) \cdot 0 = 1. $$ **b.** For this choice of $n_i$, at each $\text{INCREMENT}$ operation the probability that we change the value of the counter is $\frac{1}{100}$. Since this is constant with respect to the current counter value $i$, the final counter value is binomially distributed with $p = 0.01$, and the represented value is $100$ times the counter. Since the variance of a binomial distribution is $np(1 - p)$ and scaling a random variable by $100$ scales its variance by $100^2$, the variance of the represented value is $100^2 \cdot n \cdot 0.01 \cdot 0.99 = 99n$. A simulation sketch follows.
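A simulation sketch for part (b) (constants are illustrative): estimate the mean and variance of the represented value with $n_i = 100i$:

```cpp
#include <iostream>
#include <random>

// Probabilistic counter with n_i = 100i: each INCREMENT bumps the
// counter with probability 1/100. The estimated mean should be ~n
// and the variance ~99n.
int main() {
    const int n = 1000, trials = 50000;
    std::mt19937 gen(5);
    std::bernoulli_distribution bump(0.01);
    double sum = 0, sumSq = 0;
    for (int t = 0; t < trials; t++) {
        long long counter = 0;
        for (int i = 0; i < n; i++) counter += bump(gen);
        double value = 100.0 * counter;   // value represented by the counter
        sum += value; sumSq += value * value;
    }
    double mean = sum / trials;
    double var = sumSq / trials - mean * mean;
    std::cout << "mean: " << mean << " (expect " << n << ")\n"
              << "variance: " << var << " (expect " << 99 * n << ")\n";
}
```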
[]
false
[]
05-5-2
05
5-2
5-2
docs/Chap05/Problems/5-2.md
This problem examines three algorithms for searching for a value $x$ in an unsorted array $A$ consisting of $n$ elements. Consider the following randomized strategy: pick a random index $i$ into $A$. If $A[i] = x$, then we terminate; otherwise, we continue the search by picking a new random index into $A$. We continue picking random indices into $A$ until we find an index $j$ such that $A[j] = x$ or until we have checked every element of $A$. Note that we pick from the whole set of indices each time, so that we may examine a given element more than once. **a.** Write pseudocode for a procedure $\text{RANDOM-SEARCH}$ to implement the strategy above. Be sure that your algorithm terminates when all indices into $A$ have been picked. **b.** Suppose that there is exactly one index $i$ such that $A[i] = x$. What is the expected number of indices into $A$ that we must pick before we find $x$ and $\text{RANDOM-SEARCH}$ terminates? **c.** Generalizing your solution to part (b), suppose that there are $k \ge 1$ indices $i$ such that $A[i] = x$. What is the expected number of indices into $A$ that we must pick before we find $x$ and $\text{RANDOM-SEARCH}$ terminates? Your answer should be a function of $n$ and $k$. **d.** Suppose that there are no indices $i$ such that $A[i] = x$. What is the expected number of indices into $A$ that we must pick before we have checked all elements of $A$ and $\text{RANDOM-SEARCH}$ terminates? Now consider a deterministic linear search algorithm, which we refer to as $\text{DETERMINISTIC-SEARCH}$. Specifically, the algorithm searches $A$ for $x$ in order, considering $A[1], A[2], A[3], \ldots, A[n]$ until either it finds $A[i] = x$ or it reaches the end of the array. Assume that all permutations of the input array are equally likely. **e.** Suppose that there is exactly one index $i$ such that $A[i] = x$. What is the average-case running time of $\text{DETERMINISTIC-SEARCH}$? What is the worst-case running time of $\text{DETERMINISTIC-SEARCH}$? **f.** Generalizing your solution to part (e), suppose that there are $k \ge 1$ indices $i$ such that $A[i] = x$. What is the average-case running time of $\text{DETERMINISTIC-SEARCH}$? What is the worst-case running time of $\text{DETERMINISTIC-SEARCH}$? Your answer should be a function of $n$ and $k$. **g.** Suppose that there are no indices $i$ such that $A[i] = x$. What is the average-case running time of $\text{DETERMINISTIC-SEARCH}$? What is the worst-case running time of $\text{DETERMINISTIC-SEARCH}$? Finally, consider a randomized algorithm $\text{SCRAMBLE-SEARCH}$ that works by first randomly permuting the input array and then running the deterministic linear search given above on the resulting permuted array. **h.** Letting $k$ be the number of indices $i$ such that $A[i] = x$, give the worst-case and expected running time of $\text{SCRAMBLE-SEARCH}$ for the cases in which $k = 0$ and $k = 1$. Generalize your solution to handle the case in which $k \ge 1$. **i.** Which of the three searching algorithms would you use? Explain your answer.
**a.** ```cpp RANDOM-SEARCH(x, A, n) v = Ø // a set (or bitmap, etc.) of visited indices while |v| < n: i = RANDOM(1, n) if i ∉ v: // only use i if it hasn't been picked before if A[i] = x: return i else: v = v ∪ {i} return NIL ``` $v$ can be implemented in multiple ways: a hash table, a tree or a bitmap. The last one would probably perform best and consume the least space. **b.** $\text{RANDOM-SEARCH}$ is well-modelled by Bernoulli trials: each pick succeeds with probability $1 / n$, so the number of picks is geometric and its expectation is $n$. **c.** In similar fashion, the expected number of picks is $n / k$. **d.** This is modelled by the balls and bins problem, explored in section 5.4.2. The answer is $n(\ln n + O(1))$. **e.** The worst-case running time is $n$. The average-case is $(n + 1) / 2$. **f.** The worst-case running time is $n - k + 1$. The average-case running time is $(n + 1) / (k + 1)$. Let $X_i$ be an indicator random variable that the $i$th element is a match. $\Pr\\{X_i\\} = 1 / (k + 1)$. Let $Y$ be an indicator random variable that we have found a match after the first $n - k + 1$ elements ($\Pr\\{Y\\} = 1$). Thus, $$ \begin{aligned} \text E[X] & = \text E[X_1 + X_2 + \ldots + X_{n - k} + Y] \\\\ & = 1 + \sum_{i = 1}^{n - k}\text E[X_i] = 1 + \frac{n - k}{k + 1} \\\\ & = \frac{n + 1}{k + 1}. \end{aligned} $$ **g.** Both the worst-case and average-case running times are $n$. **h.** It's the same as $\text{DETERMINISTIC-SEARCH}$, only we replace "average-case" with "expected". **i.** Definitely $\text{DETERMINISTIC-SEARCH}$. $\text{SCRAMBLE-SEARCH}$ gives better expected results, but at the cost of randomly permuting the array, which is a linear operation. In that time we could have scanned the full array and reported a result.
[ { "lang": "cpp", "code": "RANDOM-SEARCH(x, A, n)\n v = Ø // a set (or bitmap, etc.) of visited indices\n while |v| < n:\n i = RANDOM(1, n)\n if i ∉ v: // only use i if it hasn't been picked before\n if A[i] = x:\n return i\n else:\n v = v ∪ {i}\n return NIL" } ]
false
[]
06-6.1-1
06
6.1
6.1-1
docs/Chap06/6.1.md
What are the minimum and maximum numbers of elements in a heap of height $h$?
At least $2^h$ and at most $2^{h + 1} − 1$. This can be seen as follows: a complete binary tree of height $h − 1$ has $\sum_{i = 0}^{h - 1} 2^i = 2^h - 1$ elements, and the number of elements in a heap of height $h$ lies strictly above the count for a complete binary tree of height $h − 1$ and at most the count for a complete binary tree of height $h$.
[]
false
[]
06-6.1-2
06
6.1
6.1-2
docs/Chap06/6.1.md
Show that an $n$-element heap has height $\lfloor \lg n \rfloor$.
Write $n = 2^m − 1 + k$ where $m$ is as large as possible. Then the heap consists of a complete binary tree of height $m − 1$, along with $k$ additional leaves along the bottom. The height of the root is the length of the longest simple path to one of these $k$ leaves, which must have length $m$. It is clear from the way we defined $m$ that $m = \lfloor \lg n\rfloor$.
[]
false
[]
06-6.1-3
06
6.1
6.1-3
docs/Chap06/6.1.md
Show that in any subtree of a max-heap, the root of the subtree contains the largest value occurring anywhere in the subtree.
If the largest element in the subtree were somewhere other than the root, it would have a parent within the subtree and would be larger than that parent, so the heap property would be violated at the parent of the maximum element in the subtree.
[]
false
[]
06-6.1-4
06
6.1
6.1-4
docs/Chap06/6.1.md
Where in a max-heap might the smallest element reside, assuming that all elements are distinct?
In any of the leaves, that is, elements with index $\lfloor n / 2 \rfloor + k$, where $k \geq 1$ (see exercise 6.1-7), that is, in the second half of the heap array.
[]
false
[]
06-6.1-5
06
6.1
6.1-5
docs/Chap06/6.1.md
Is an array that is in sorted order a min-heap?
Yes. For any index $i$, both $\text{LEFT}(i)$ and $\text{RIGHT}(i)$ are larger and thus the elements indexed by them are greater or equal to $A[i]$ (because the array is sorted.)
[]
false
[]
06-6.1-6
06
6.1
6.1-6
docs/Chap06/6.1.md
Is the array with values $\langle 23, 17, 14, 6, 13, 10, 1, 5, 7, 12 \rangle$ a max-heap?
No. The element $7$ at index $9$ is larger than its parent $A[4] = 6$ ($\text{PARENT}(9) = 4$), which violates the max-heap property.
[]
false
[]
06-6.1-7
06
6.1
6.1-7
docs/Chap06/6.1.md
Show that, with the array representation for storing an $n$-element heap, the leaves are the nodes indexed by $\lfloor n / 2 \rfloor + 1, \lfloor n / 2 \rfloor + 2, \ldots, n$.
Let's take the left child of the node indexed by $\lfloor n / 2 \rfloor + 1$. $$ \begin{aligned} \text{LEFT}(\lfloor n / 2 \rfloor + 1) & = 2(\lfloor n / 2 \rfloor + 1) \\\\ & > 2(n / 2 - 1) + 2 \\\\ & = n - 2 + 2 \\\\ & = n. \end{aligned} $$ Since the index of the left child is larger than the number of elements in the heap, the node doesn't have children and thus is a leaf. The same goes for all nodes with larger indices. Note that the element indexed by $\lfloor n / 2 \rfloor$ is not a leaf: for an even number of nodes it has a left child with index $n$, and for an odd number of nodes it has a left child with index $n - 1$ and a right child with index $n$. This makes the number of leaves in a heap of size $n$ equal to $\lceil n / 2 \rceil$.
[]
false
[]
06-6.2-1
06
6.2
6.2-1
docs/Chap06/6.2.md
Using figure 6.2 as a model, illustrate the operation of $\text{MAX-HEAPIFY}(A, 3)$ on the array $A = \langle 27, 17, 3, 16, 13, 10, 1, 5, 7, 12, 4, 8, 9, 0 \rangle$.
$$ \begin{aligned} \langle 27, 17, 3, 16, 13, 10,1, 5, 7, 12, 4, 8, 9, 0 \rangle \\\\ \langle 27, 17, 10, 16, 13, 3, 1, 5, 7, 12, 4, 8, 9, 0 \rangle \\\\ \langle 27, 17, 10, 16, 13, 9, 1, 5, 7, 12, 4, 8, 3, 0 \rangle \\\\ \end{aligned} $$
[]
false
[]
06-6.2-2
06
6.2
6.2-2
docs/Chap06/6.2.md
Starting with the procedure $\text{MAX-HEAPIFY}$, write pseudocode for the procedure $\text{MIN-HEAPIFY}(A, i)$, which performs the corresponding manipulation on a min-heap. How does the running time of $\text{MIN-HEAPIFY}$ compare to that of $\text{MAX-HEAPIFY}$?
```cpp MIN-HEAPIFY(A, i) l = LEFT(i) r = RIGHT(i) if l ≤ A.heap-size and A[l] < A[i] smallest = l else smallest = i if r ≤ A.heap-size and A[r] < A[smallest] smallest = r if smallest != i exchange A[i] with A[smallest] MIN-HEAPIFY(A, smallest) ``` The running time is the same; the algorithm is identical except for two flipped comparisons and renamed variables.
[ { "lang": "cpp", "code": "MIN-HEAPIFY(A, i)\n l = LEFT(i)\n r = RIGHT(i)\n if l ≤ A.heap-size and A[l] < A[i]\n smallest = l\n else smallest = i\n if r ≤ A.heap-size and A[r] < A[smallest]\n smallest = r\n if smallest != i\n exchange A[i] with A[smallest]\n MIN-HEAPIFY(A, smallest)" } ]
false
[]
06-6.2-3
06
6.2
6.2-3
docs/Chap06/6.2.md
What is the effect of calling $\text{MAX-HEAPIFY}(A, i)$ when the element $A[i]$ is larger than its children?
No effect. The comparisons are carried out, $A[i]$ is found to be largest and the procedure just returns.
[]
false
[]
06-6.2-4
06
6.2
6.2-4
docs/Chap06/6.2.md
What is the effect of calling $\text{MAX-HEAPIFY}(A, i)$ for $i > A.heap\text-size / 2$?
No effect. In that case, it is a leaf. Both $\text{LEFT}$ and $\text{RIGHT}$ return values that fail the comparison with the heap size and $i$ is stored in largest. Afterwards the procedure just returns.
[]
false
[]
06-6.2-5
06
6.2
6.2-5
docs/Chap06/6.2.md
The code for $\text{MAX-HEAPIFY}$ is quite efficient in terms of constant factors, except possibly for the recursive call in line 10, which might cause some compilers to produce inefficient code. Write an efficient $\text{MAX-HEAPIFY}$ that uses an iterative control construct (a loop) instead of recursion.
```cpp MAX-HEAPIFY(A, i) while true l = LEFT(i) r = RIGHT(i) if l ≤ A.heap-size and A[l] > A[i] largest = l else largest = i if r ≤ A.heap-size and A[r] > A[largest] largest = r if largest == i return exchange A[i] with A[largest] i = largest ```
[ { "lang": "cpp", "code": "MAX-HEAPIFY(A, i)\n while true\n l = LEFT(i)\n r = RIGHT(i)\n if l ≤ A.heap-size and A[l] > A[i]\n largest = l\n else largest = i\n if r ≤ A.heap-size and A[r] > A[largest]\n largest = r\n if largest == i\n return\n exchange A[i] with A[largest]\n i = largest" } ]
false
[]
06-6.2-6
06
6.2
6.2-6
docs/Chap06/6.2.md
Show that the worst-case running time of $\text{MAX-HEAPIFY}$ on a heap of size $n$ is $\Omega(\lg n)$. ($\textit{Hint:}$ For a heap with $n$ nodes, give node values that cause $\text{MAX-HEAPIFY}$ to be called recursively at every node on a simple path from the root down to a leaf.)
Consider the heap resulting from $A$ where $A[1] = 1$ and $A[i] = 2$ for $2 \le i \le n$. Since $1$ is the smallest element of the heap, it must be swapped through each level of the heap until it is a leaf node. Since the heap has height $\lfloor \lg n\rfloor$, $\text{MAX-HEAPIFY}$ has worst-case time $\Omega(\lg n)$.
[]
false
[]
06-6.3-1
06
6.3
6.3-1
docs/Chap06/6.3.md
Using figure 6.3 as a model, illustrate the operation of $\text{BUILD-MAX-HEAP}$ on the array $A = \langle 5, 3, 17, 10, 84, 19, 6, 22, 9 \rangle$.
$$ \begin{aligned} \langle 5, 3, 17, 10, 84, 19, 6, 22, 9 \rangle \\\\ \langle 5, 3, 17, 22, 84, 19, 6, 10, 9 \rangle \\\\ \langle 5, 3, 19, 22, 84, 17, 6, 10, 9 \rangle \\\\ \langle 5, 84, 19, 22, 3, 17, 6, 10, 9 \rangle \\\\ \langle 84, 5, 19, 22, 3, 17, 6, 10, 9 \rangle \\\\ \langle 84, 22, 19, 5, 3, 17, 6, 10, 9 \rangle \\\\ \langle 84, 22, 19, 10, 3, 17, 6, 5, 9 \rangle \\\\ \end{aligned} $$
[]
false
[]
06-6.3-2
06
6.3
6.3-2
docs/Chap06/6.3.md
Why do we want the loop index $i$ in line 2 of $\text{BUILD-MAX-HEAP}$ to decrease from $\lfloor A.length / 2 \rfloor$ to $1$ rather than increase from $1$ to $\lfloor A.length/2 \rfloor$?
Otherwise we won't be allowed to call $\text{MAX-HEAPIFY}$, since it will fail the condition of having the subtrees be max-heaps. That is, if we start with $1$, there is no guarantee that $A[2]$ and $A[3]$ are roots of max-heaps.
[]
false
[]
06-6.3-3
06
6.3
6.3-3
docs/Chap06/6.3.md
Show that there are at most $\lceil n / 2^{h + 1} \rceil$ nodes of height $h$ in any $n$-element heap.
From 6.1-7, we know that the leaves of a heap are the nodes indexed by $$\left\lfloor n / 2 \right\rfloor + 1, \left\lfloor n / 2 \right\rfloor + 2, \dots, n.$$ Note that those elements correspond to the second half of the heap array (plus the middle element if $n$ is odd). Thus, the number of leaves in any heap of size $n$ is $\left\lceil n / 2 \right\rceil$. We prove the bound by induction on $h$. Let $n_h$ denote the number of nodes at height $h$. The upper bound holds for the base since $n_0 = \left\lceil n / 2 \right\rceil$ is exactly the number of leaves in a heap of size $n$. Now assume it holds for $h − 1$; we prove that it also holds for $h$. Note that if $n_{h - 1}$ is even, each node at height $h$ has exactly two children, which implies $n_h = n_{h - 1} / 2 = \left\lfloor n_{h - 1} / 2 \right\rfloor$. If $n_{h - 1}$ is odd, one node at height $h$ has one child and the remaining have two children, which also implies $n_h = \left\lfloor n_{h - 1} / 2 \right\rfloor + 1 = \left\lceil n_{h - 1} / 2 \right\rceil$. Thus, we have $$ \begin{aligned} n_h & = \left\lceil \frac{n_{h - 1}}{2} \right\rceil \\\\ & \le \left\lceil \frac{1}{2} \cdot \left\lceil \frac{n}{2^{(h - 1) + 1}} \right\rceil \right\rceil \\\\ & = \left\lceil \frac{1}{2} \cdot \left\lceil \frac{n}{2^h} \right\rceil \right\rceil \\\\ & = \left\lceil \frac{n}{2^{h + 1}} \right\rceil, \end{aligned} $$ which implies that it holds for $h$.
[]
false
[]
06-6.4-1
06
6.4
6.4-1
docs/Chap06/6.4.md
Using figure 6.4 as a model, illustrate the operation of $\text{HEAPSORT}$ on the array $A = \langle 5, 13, 2, 25, 7, 17, 20, 8, 4 \rangle$.
$$ \begin{aligned} \langle 5, 13, 2, 25, 7, 17, 20, 8, 4 \rangle \\\\ \langle 5, 13, 20, 25, 7, 17, 2, 8, 4 \rangle \\\\ \langle 5, 25, 20, 13, 7, 17, 2, 8, 4 \rangle \\\\ \langle 25, 5, 20, 13, 7, 17, 2, 8, 4 \rangle \\\\ \langle 25, 13, 20, 5, 7, 17, 2, 8, 4 \rangle \\\\ \langle 25, 13, 20, 8, 7, 17, 2, 5, 4 \rangle \\\\ \langle 4, 13, 20, 8, 7, 17, 2, 5, 25 \rangle \\\\ \langle 20, 13, 4, 8, 7, 17, 2, 5, 25 \rangle \\\\ \langle 20, 13, 17, 8, 7, 4, 2, 5, 25 \rangle \\\\ \langle 5, 13, 17, 8, 7, 4, 2, 20, 25 \rangle \\\\ \langle 17, 13, 5, 8, 7, 4, 2, 20, 25 \rangle \\\\ \langle 2, 13, 5, 8, 7, 4, 17, 20, 25 \rangle \\\\ \langle 13, 2, 5, 8, 7, 4, 17, 20, 25 \rangle \\\\ \langle 13, 8, 5, 2, 7, 4, 17, 20, 25 \rangle \\\\ \langle 4, 8, 5, 2, 7, 13, 17, 20, 25 \rangle \\\\ \langle 8, 4, 5, 2, 7, 13, 17, 20, 25 \rangle \\\\ \langle 8, 7, 5, 2, 4, 13, 17, 20, 25 \rangle \\\\ \langle 4, 7, 5, 2, 8, 13, 17, 20, 25 \rangle \\\\ \langle 7, 4, 5, 2, 8, 13, 17, 20, 25 \rangle \\\\ \langle 2, 4, 5, 7, 8, 13, 17, 20, 25 \rangle \\\\ \langle 5, 4, 2, 7, 8, 13, 17, 20, 25 \rangle \\\\ \langle 2, 4, 5, 7, 8, 13, 17, 20, 25 \rangle \\\\ \langle 4, 2, 5, 7, 8, 13, 17, 20, 25 \rangle \\\\ \langle 2, 4, 5, 7, 8, 13, 17, 20, 25 \rangle \end{aligned} $$
[]
false
[]
06-6.4-2
06
6.4
6.4-2
docs/Chap06/6.4.md
Argue the correctness of $\text{HEAPSORT}$ using the following loop invariant: At the start of each iteration of the **for** loop of lines 2-5, the subarray $A[1..i]$ is a max-heap containing the $i$ smallest elements of $A[1..n]$, and the subarray $A[i + 1..n]$ contains the $n - i$ largest elements of $A[1..n]$, sorted.
**Initialization:** Prior to the first iteration, $i = n$: the subarray $A[i + 1..n]$ is empty and $A[1..i]$ is a max-heap produced by $\text{BUILD-MAX-HEAP}$, thus the invariant holds. **Maintenance:** $A[1]$ is the largest element in $A[1..i]$ and it is smaller than the elements in $A[i + 1..n]$. When we put it in the $i$th position, then $A[i..n]$ contains the largest elements, sorted. Decreasing the heap size and calling $\text{MAX-HEAPIFY}$ turns $A[1..i - 1]$ into a max-heap. Decrementing $i$ sets up the invariant for the next iteration. **Termination:** After the loop $i = 1$. This means that $A[2..n]$ is sorted and $A[1]$ is the smallest element in the array, which makes the array sorted.
[]
false
[]
06-6.4-3
06
6.4
6.4-3
docs/Chap06/6.4.md
What is the running time of $\text{HEAPSORT}$ on an array $A$ of length $n$ that is already sorted in increasing order? What about decreasing order?
Both of them are $\Theta(n\lg n)$. If the array is sorted in increasing order, the algorithm will need to convert it to a heap, which takes $O(n)$. Afterwards, however, there are $n - 1$ calls to $\text{MAX-HEAPIFY}$, and each call on a heap of $k$ remaining elements performs the full $\lg k$ work; summing, $$\sum_{k = 1}^{n - 1}\lg k = \lg((n - 1)!) = \Theta(n\lg n).$$ The same goes for decreasing order. $\text{BUILD-MAX-HEAP}$ will be faster (by a constant factor), but the computation time will be dominated by the loop in $\text{HEAPSORT}$, which is $\Theta(n\lg n)$.
[]
false
[]
06-6.4-4
06
6.4
6.4-4
docs/Chap06/6.4.md
Show that the worst-case running time of $\text{HEAPSORT}$ is $\Omega(n\lg n)$.
This is essentially the first part of exercise 6.4-3. Whenever we have an array that is already sorted, we take linear time to convert it to a max-heap and then $n\lg n$ time to sort it.
[]
false
[]
06-6.4-5
06
6.4
6.4-5 $\star$
docs/Chap06/6.4.md
Show that when all elements are distinct, the best-case running time of $\text{HEAPSORT}$ is $\Omega(n\lg n)$.
This proved to be quite tricky. My initial solution was wrong. Also, heapsort appeared in 1964, but the lower bound was proved by Schaffer and Sedgewick in 1992. It's evil to put this as an exercise. Let's assume that the heap is a full binary tree with $n = 2^k - 1$. There are $2^{k - 1}$ leaves and $2^{k - 1} - 1$ inner nodes. Let's look at sorting the first $2^{k - 1}$ elements of the heap. Let's consider their arrangement in the heap and color the leaves to be red and the inner nodes to be blue. The colored nodes are a subtree of the heap (otherwise there would be a contradiction). Since there are $2^{k - 1}$ colored nodes, at most $2^{k - 2}$ are red, which means that at least $2^{k - 2} - 1$ are blue. While the red nodes can jump directly to the root, the blue nodes need to travel up before they get removed. Let's count the number of swaps to move the blue nodes to the root. The number of swaps is minimized when (1) there are only $2^{k - 2} - 1$ blue nodes and (2) they are arranged as a complete binary tree near the root. If there are $d$ such blue nodes, they occupy roughly $\lg d$ levels, with level $i$ containing $2^i$ nodes at depth $i$. Thus the number of swaps is $$\sum_{i = 0}^{\lg d}i2^i = (\lg d - 1)2^{\lg d + 1} + 2 = \Omega(d\lg d).$$ And now for a lazy (but cute) trick. We've figured out a tight bound on sorting half of the heap. We have the following recurrence: $$T(n) = T(n / 2) + \Omega(n\lg n).$$ Applying the master method, we get that $T(n) = \Omega(n\lg n)$.
[]
false
[]
06-6.5-1
06
6.5
6.5-1
docs/Chap06/6.5.md
Illustrate the operation $\text{HEAP-EXTRACT-MAX}$ on the heap $A = \langle 15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1 \rangle$.
1. Original heap. ![](../img/6.5-1-1.png) 2. Extract the max node $15$, then move $1$ to the top of the heap. ![](../img/6.5-1-2.png) 3. Since $13 > 9 > 1$, swap $1$ and $13$. ![](../img/6.5-1-3.png) 4. Since $12 > 5 > 1$, swap $1$ and $12$. ![](../img/6.5-1-4.png) 5. Since $6 > 2 > 1$, swap $1$ and $6$. ![](../img/6.5-1-5.png)
[]
true
[ "../img/6.5-1-1.png", "../img/6.5-1-2.png", "../img/6.5-1-3.png", "../img/6.5-1-4.png", "../img/6.5-1-5.png" ]
06-6.5-2
06
6.5
6.5-2
docs/Chap06/6.5.md
Illustrate the operation of $\text{MAX-HEAP-INSERT}(A, 10)$ on the heap $A = \langle 15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1 \rangle$.
1. Original heap. ![](../img/6.5-2-1.png) 2. Since $\text{MAX-HEAP-INSERT}(A, 10)$ is called, we append a node assigned value $-\infty$. ![](../img/6.5-2-2.png) 3. Update the $key$ value of the new node. ![](../img/6.5-2-3.png) 4. Since the parent $key$ is smaller than $10$, the nodes are swapped. ![](../img/6.5-2-4.png) 5. Since the parent $key$ is smaller than $10$, the nodes are swapped. ![](../img/6.5-2-5.png)
[]
true
[ "../img/6.5-2-1.png", "../img/6.5-2-2.png", "../img/6.5-2-3.png", "../img/6.5-2-4.png", "../img/6.5-2-5.png" ]
06-6.5-3
06
6.5
6.5-3
docs/Chap06/6.5.md
Write pseudocode for the procedures $\text{HEAP-MINIMUM}$, $\text{HEAP-EXTRACT-MIN}$, $\text{HEAP-DECREASE-KEY}$, and $\text{MIN-HEAP-INSERT}$ that implement a min-priority queue with a min-heap.
```cpp HEAP-MINIMUM(A) return A[1] ``` ```cpp HEAP-EXTRACT-MIN(A) if A.heap-size < 1 error "heap underflow" min = A[1] A[1] = A[A.heap-size] A.heap-size = A.heap-size - 1 MIN-HEAPIFY(A, 1) return min ``` ```cpp HEAP-DECREASE-KEY(A, i, key) if key > A[i] error "new key is larger than current key" A[i] = key while i > 1 and A[PARENT(i)] > A[i] exchange A[i] with A[PARENT(i)] i = PARENT(i) ``` ```cpp MIN-HEAP-INSERT(A, key) A.heap-size = A.heap-size + 1 A[A.heap-size] = ∞ HEAP-DECREASE-KEY(A, A.heap-size, key) ```
[ { "lang": "cpp", "code": "HEAP-MINIMUM(A)\n return A[1]" }, { "lang": "cpp", "code": "HEAP-EXTRACT-MIN(A)\n if A.heap-size < 1\n error \"heap underflow\"\n min = A[1]\n A[1] = A[A.heap-size]\n A.heap-size = A.heap-size - 1\n MIN-HEAPIFY(A, 1)\n return min" }, { "lang": "cpp", "code": "HEAP-DECREASE-KEY(A, i, key)\n if key > A[i]\n error \"new key is larger than current key\"\n A[i] = key\n while i > 1 and A[PARENT(i)] > A[i]\n exchange A[i] with A[PARENT(i)]\n i = PARENT(i)" }, { "lang": "cpp", "code": "MIN-HEAP-INSERT(A, key)\n A.heap-size = A.heap-size + 1\n A[A.heap-size] = ∞\n HEAP-DECREASE-KEY(A, A.heap-size, key)" } ]
false
[]
06-6.5-4
06
6.5
6.5-4
docs/Chap06/6.5.md
Why do we bother setting the key of the inserted node to $-\infty$ in line 2 of $\text{MAX-HEAP-INSERT}$ when the next thing we do is increase its key to the desired value?
So that the guard clause of $\text{HEAP-INCREASE-KEY}$ passes: with $A[i] = -\infty$, the new key cannot be smaller than the current key. Otherwise we would have to drop that error check.
[]
false
[]
06-6.5-5
06
6.5
6.5-5
docs/Chap06/6.5.md
Argue the correctness of $\text{HEAP-INCREASE-KEY}$ using the following loop invariant: At the start of each iteration of the **while** loop of lines 4-6, the subarray $A[1 ..A.heap\text-size]$ satisfies the max-heap property, except that there may be one violation: $A[i]$ may be larger than $A[\text{PARENT}(i)]$. You may assume that the subarray $A[1..A.heap\text-size]$ satisfies the max-heap property at the time $\text{HEAP-INCREASE-KEY}$ is called.
**Initialization:** $A$ is a heap except that $A[i]$ might be larger than its parent, because it has been modified. $A[i]$ is larger than its children, because otherwise the guard clause would fail and the loop will not be entered (the new value is larger than the old value and the old value is larger than the children). **Maintenance:** When we exchange $A[i]$ with its parent, the max-heap property is satisfied except that now $A[\text{PARENT}(i)]$ might be larger than its parent. Changing $i$ to its parent maintains the invariant. **Termination:** The loop terminates whenever the heap is exhausted or the max-heap property for $A[i]$ and its parent is preserved. At the loop termination, $A$ is a max-heap.
[]
false
[]
06-6.5-6
06
6.5
6.5-6
docs/Chap06/6.5.md
Each exchange operation on line 5 of $\text{HEAP-INCREASE-KEY}$ typically requires three assignments. Show how to use the idea of the inner loop of $\text{INSERTION-SORT}$ to reduce the three assignments down to just one assignment.
```cpp HEAP-INCREASE-KEY(A, i, key) if key < A[i] error "new key is smaller than current key" while i > 1 and A[PARENT(i)] < key A[i] = A[PARENT(i)] i = PARENT(i) A[i] = key ```
[ { "lang": "cpp", "code": "HEAP-INCREASE-KEY(A, i, key)\n if key < A[i]\n error \"new key is smaller than current key\"\n while i > 1 and A[PARENT(i)] < key\n A[i] = A[PARENT(i)]\n i = PARENT(i)\n A[i] = key" } ]
false
[]
06-6.5-7
06
6.5
6.5-7
docs/Chap06/6.5.md
Show how to implement a first-in, first-out queue with a priority queue. Show how to implement a stack with a priority queue. (Queues and stacks are defined in section 10.1).
Both are simple. For a stack, we insert each new element with a higher priority than all elements currently in the queue; for a FIFO queue, with a lower priority. For the stack we can set the new priority to $\text{HEAP-MAXIMUM}(A) + 1$. For the queue we need to keep track of a running counter and decrease it on every insertion. Neither is very efficient, and if the priority counter can overflow or underflow, we will eventually need to reassign priorities. A sketch follows.
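A minimal C++ sketch of both reductions (illustrative, using the standard library's binary max-heap std::priority_queue and a running counter as the priority):

```cpp
#include <iostream>
#include <queue>
#include <utility>

// Stack and FIFO queue on top of a max-priority queue of
// (priority, value) pairs, with a running counter as the priority.
int main() {
    std::priority_queue<std::pair<long long, int>> stack, queue;
    long long next = 0;
    // Stack: newer elements get higher priority, so they pop first.
    for (int v : {1, 2, 3}) stack.push({next++, v});
    // Queue: newer elements get lower priority, so older ones pop first.
    next = 0;
    for (int v : {1, 2, 3}) queue.push({-(next++), v});
    std::cout << "stack pops: ";
    while (!stack.empty()) { std::cout << stack.top().second << ' '; stack.pop(); }
    std::cout << "\nqueue pops: ";
    while (!queue.empty()) { std::cout << queue.top().second << ' '; queue.pop(); }
    std::cout << '\n';   // stack pops: 3 2 1, queue pops: 1 2 3
}
```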
[]
false
[]
06-6.5-8
06
6.5
6.5-8
docs/Chap06/6.5.md
The operation $\text{HEAP-DELETE}(A, i)$ deletes the item in node $i$ from heap $A$. Give an implementation of $\text{HEAP-DELETE}$ that runs in $O(\lg n)$ time for an $n$-element max-heap.
```cpp HEAP-DELETE(A, i) if A[i] > A[A.heap-size] A[i] = A[A.heap-size] MAX-HEAPIFY(A, i) else HEAP-INCREASE-KEY(A, i, A[A.heap-size]) A.heap-size = A.heap-size - 1 ``` **Note:** The following algorithm is wrong. For example, given an array $A = [15, 7, 9, 1, 2, 3, 8]$ which is a max-heap, and if we delete $A[5] = 2$, then it will fail. ```cpp HEAP-DELETE(A, i) A[i] = A[A.heap-size] A.heap-size = A.heap-size - 1 MAX-HEAPIFY(A, i) ``` - before: ``` 15 / \ 7 9 / \ / \ 1 2 3 8 ``` - after (which is wrong since $8 > 7$ violates the max-heap property): ``` 15 / \ 7 9 / \ / 1 8 3 ```
[ { "lang": "cpp", "code": "HEAP-DELETE(A, i)\n if A[i] > A[A.heap-size]\n A[i] = A[A.heap-size]\n MAX-HEAPIFY(A, i)\n else\n HEAP-INCREASE-KEY(A, i, A[A.heap-size])\n A.heap-size = A.heap-size - 1" }, { "lang": "cpp", "code": "HEAP-DELETE(A, i)\n A[i] = A[A.heap-size]\n A.heap-size = A.heap-size - 1\n MAX-HEAPIFY(A, i)" }, { "lang": "", "code": " 15\n / \\\n 7 9\n / \\ / \\\n 1 2 3 8" }, { "lang": "", "code": " 15\n / \\\n 7 9\n / \\ /\n 1 8 3" } ]
false
[]
06-6.5-9
06
6.5
6.5-9
docs/Chap06/6.5.md
Give an $O(n\lg k)$-time algorithm to merge $k$ sorted lists into one sorted list, where $n$ is the total number of elements in all the input lists. ($\textit{Hint:}$ Use a min-heap for $k$-way merging.)
We take one element of each list and put it in a min-heap. Along with each element we have to track which list we took it from. When merging, we take the minimum element from the heap and insert another element off the list it came from (unless the list is empty). We continue until we empty the heap. We have $n$ steps and at each step we're doing an insertion into the heap, which takes $O(\lg k)$ time. Supposing that the input lists are all nonempty, we have the following pseudocode; a runnable C++ sketch follows. ```cpp MERGE-SORTED-LISTS(lists) n = lists.length // Take the lowest element from each of lists together with an index of the list and make list of such pairs. // Pairs are of "type" (element-value, index-of-list) let lowest-from-each be an empty array for i = 1 to n add (lists[i][0], i) to lowest-from-each delete lists[i][0] // This makes min-heap from list lowest-from-each. // We are assuming that pairs of "type" (element-value, index-of-list) are compared according to the values of elements. A = MIN-HEAP(lowest-from-each) let merged-lists be an empty array while not A.EMPTY() element-value, index-of-list = HEAP-EXTRACT-MIN(A) add element-value to merged-lists if lists[index-of-list].length > 0 MIN-HEAP-INSERT(A, (lists[index-of-list][0], index-of-list)) delete lists[index-of-list][0] return merged-lists ```
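A runnable C++ sketch of the same idea (function name is mine, not from the text), using std::priority_queue as the min-heap of (value, list index, position) triples:

```cpp
#include <functional>
#include <iostream>
#include <queue>
#include <tuple>
#include <vector>

// O(n lg k) k-way merge: the min-heap holds at most one
// (value, list index, position) triple per nonempty list.
std::vector<int> mergeSortedLists(const std::vector<std::vector<int>>& lists) {
    using Item = std::tuple<int, size_t, size_t>;   // (value, list, pos)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
    for (size_t i = 0; i < lists.size(); i++)
        if (!lists[i].empty()) heap.push({lists[i][0], i, 0});
    std::vector<int> merged;
    while (!heap.empty()) {
        auto [value, list, pos] = heap.top();
        heap.pop();
        merged.push_back(value);
        if (pos + 1 < lists[list].size())           // refill from the same list
            heap.push({lists[list][pos + 1], list, pos + 1});
    }
    return merged;
}

int main() {
    for (int x : mergeSortedLists({{1, 4, 7}, {2, 5}, {0, 3, 6, 8}}))
        std::cout << x << ' ';                      // 0 1 2 3 4 5 6 7 8
    std::cout << '\n';
}
```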
[ { "lang": "cpp", "code": "def MERGE-SORTED-LISTS(lists)\n n = lists.length\n // Take the lowest element from each of lists together with an index of the list and make list of such pairs.\n // Pairs are of \"type\" (element-value, index-of-list)\n let lowest-from-each be an empty array\n for i = 1 to n\n add (lists[i][0], i) to lowest-from-each\n delete lists[i][0]\n // This makes min-heap from list lowest-from-each.\n // We are sssuming that pairs of \"type\" (element-value, index-of-list) are compared according to the values of elements.\n A = MIN-HEAP(lowest-from-each)\n let merged-lists be an empty array\n while not A.EMPTY()\n element-value, index-of-list = HEAP-EXTRACT-MIN(A)\n add element-value to merged-lists\n if lists[index-of-list].length > 0\n MIN-HEAP-INSERT(A, (lists[index-of-list][0], index-of-list))\n delete lists[index-of-list][0]\n return merged-lists" } ]
false
[]
06-6-1
06
6-1
6-1
docs/Chap06/Problems/6-1.md
We can build a heap by repeatedly calling $\text{MAX-HEAP-INSERT}$ to insert the elements into the heap. Consider the following variation of the $\text{BUILD-MAX-HEAP}$ procedure: ```cpp BUILD-MAX-HEAP'(A) A.heap-size = 1 for i = 2 to A.length MAX-HEAP-INSERT(A, A[i]) ``` **a.** Do the procedures $\text{BUILD-MAX-HEAP}$ and $\text{BUILD-MAX-HEAP}'$ always create the same heap when run on the same input array? Prove that they do, or provide a counterexample. **b.** Show that in the worst case, $\text{BUILD-MAX-HEAP}'$ requires $\Theta(n\lg n)$ time to build a $n$-element heap.
**a.** Consider the following counterexample. - Input array $A = \langle 1, 2, 3 \rangle$: - $\text{BUILD-MAX-HEAP}(A)$: $A = \langle 3, 2, 1 \rangle$. - $\text{BUILD-MAX-HEAP}'(A)$: $A = \langle 3, 1, 2 \rangle$. **b.** In the worst case (e.g., when the input is increasing), the $i$th $\text{MAX-HEAP-INSERT}$ takes $\Theta(\log i)$ time, so the total is the sum of the individual costs for $i$ from $2$ to $n$: $\Theta(\log 2) + \Theta(\log 3) + \dots + \Theta(\log n) = \Theta(\log n!)$. By Stirling's approximation, $\log n! = \Theta(n\log n)$, so the overall complexity is $\Theta(n\log n)$.
[ { "lang": "cpp", "code": "> BUILD-MAX-HEAP'(A)\n> A.heap-size = 1\n> for i = 2 to A.length\n> MAX-HEAP-INSERT(A, A[i])\n>" } ]
false
[]
06-6-2
06
6-2
6-2
docs/Chap06/Problems/6-2.md
A **_$d$-ary heap_** is like a binary heap, but (with one possible exception) non-leaf nodes have $d$ children instead of $2$ children. **a.** How would you represent a $d$-ary heap in an array? **b.** What is the height of a $d$-ary heap of $n$ elements in terms of $n$ and $d$? **c.** Give an efficient implementation of $\text{EXTRACT-MAX}$ in a $d$-ary max-heap. Analyze its running time in terms of $d$ and $n$. **d.** Give an efficient implementation of $\text{INSERT}$ in a $d$-ary max-heap. Analyze its running time in terms of $d$ and $n$. **e.** Give an efficient implementation of $\text{INCREASE-KEY}(A, i, k)$, which flags an error if $k < A[i]$, but otherwise sets $A[i] = k$ and then updates the $d$-ary max-heap structure appropriately. Analyze its running time in terms of $d$ and $n$.
**a.** We can use the two following functions to retrieve the parent of the $i$th element and the $j$th child of the $i$th element. ```cpp d-ARY-PARENT(i) return floor((i - 2) / d + 1) ``` ```cpp d-ARY-CHILD(i, j) return d(i − 1) + j + 1 ``` Obviously $1 \le j \le d$. You can verify those functions by checking that $$d\text{-ARY-PARENT}(d\text{-ARY-CHILD}(i, j)) = i.$$ It is also easy to see that a binary heap is a special case of a $d$-ary heap with $d = 2$: substituting $d = 2$ recovers the functions $\text{PARENT}$, $\text{LEFT}$ and $\text{RIGHT}$ mentioned in the book. **b.** Since each node has $d$ children, the height of a $d$-ary heap with $n$ nodes is $\Theta(\log_d n)$. **c.** $d\text{-ARY-HEAP-EXTRACT-MAX}(A)$ consists of constant time operations, followed by a call to $d\text{-ARY-MAX-HEAPIFY}(A, i)$. The number of times this recursively calls itself is bounded by the height of the $d$-ary heap, so the running time is $O(d\log_d n)$. ```cpp d-ARY-HEAP-EXTRACT-MAX(A) if A.heap-size < 1 error "heap under flow" max = A[1] A[1] = A[A.heap-size] A.heap-size = A.heap-size - 1 d-ARY-MAX-HEAPIFY(A, 1) return max ``` ```cpp d-ARY-MAX-HEAPIFY(A, i) largest = i for k = 1 to d if d-ARY-CHILD(i, k) ≤ A.heap-size and A[d-ARY-CHILD(i, k)] > A[largest] largest = d-ARY-CHILD(i, k) if largest != i exchange A[i] with A[largest] d-ARY-MAX-HEAPIFY(A, largest) ``` **d.** The runtime is $O(\log_d n)$ since the **while** loop runs at most as many times as the height of the $d$-ary heap. ```cpp d-ARY-MAX-HEAP-INSERT(A, key) A.heap-size = A.heap-size + 1 A[A.heap-size] = key i = A.heap-size while i > 1 and A[d-ARY-PARENT(i)] < A[i] exchange A[i] with A[d-ARY-PARENT(i)] i = d-ARY-PARENT(i) ``` **e.** The runtime is $O(\log_d n)$ since the **while** loop runs at most as many times as the height of the $d$-ary heap. ```cpp d-ARY-INCREASE-KEY(A, i, key) if key < A[i] error "new key is smaller than current key" A[i] = key while i > 1 and A[d-ARY-PARENT(i)] < A[i] exchange A[i] with A[d-ARY-PARENT(i)] i = d-ARY-PARENT(i) ```
[ { "lang": "cpp", "code": "d-ARY-PARENT(i)\n return floor((i - 2) / d + 1)" }, { "lang": "cpp", "code": "d-ARY-CHILD(i, j)\n return d(i − 1) + j + 1" }, { "lang": "cpp", "code": "d-ARY-HEAP-EXTRACT-MAX(A)\n if A.heap-size < 1\n error \"heap under flow\"\n max = A[1]\n A[1] = A[A.heap-size]\n A.heap-size = A.heap-size - 1\n d-ARY-MAX-HEAPIFY(A, 1)\n return max" }, { "lang": "cpp", "code": "d-ARY-MAX-HEAPIFY(A, i)\n largest = i\n for k = 1 to d\n if d-ARY-CHILD(i, k) ≤ A.heap-size and A[d-ARY-CHILD(i, k)] > A[largest]\n largest = d-ARY-CHILD(i, k)\n if largest != i\n exchange A[i] with A[largest]\n d-ARY-MAX-HEAPIFY(A, largest)" }, { "lang": "cpp", "code": "d-ARY-MAX-HEAP-INSERT(A, key)\n A.heap-size = A.heap-size + 1\n A[A.heap-size] = key\n i = A.heap-size\n while i > 1 and A[d-ARY-PARENT(i)] < A[i]\n exchange A[i] with A[d-ARY-PARENT(i)]\n i = d-ARY-PARENT(i)" }, { "lang": "cpp", "code": "d-ARY-INCREASE-KEY(A, i, key)\n if key < A[i]\n error \"new key is smaller than current key\"\n A[i] = key\n while i > 1 and A[d-ARY-PARENT(i)] < A[i]\n exchange A[i] with A[d-ARY-PARENT(i)]\n i = d-ARY-PARENT(i)" } ]
false
[]
06-6-3
06
6-3
6-3
docs/Chap06/Problems/6-3.md
An $m \times n$ Young tableau is an $m \times n$ matrix such that the entries of each row are in sorted order from left to right and the entries of each column are in sorted order from top to bottom. Some of the entries of a Young tableau may be $\infty$, which we treat as nonexistent elements. Thus, a Young tableau can be used to hold $r \le mn$ finite numbers. **a.** Draw $4 \times 4$ tableau containing the elements $\\{9, 16, 3, 2, 4, 8, 5, 14, 12\\}$. **b.** Argue that an $m \times n$ Young tableau $Y$ is empty if $Y[1, 1] = \infty$. Argue that $Y$ is full (contains $mn$ elements) if $Y[m, n] < \infty$. **c.** Give an algorithm to implement $\text{EXTRACT-MIN}$ on a nonempty $m \times n$ Young tableau that runs in $O(m + n)$ time. Your algorithm should use a recursive subroutine that solves an $m \times n$ problem by recursively solving either an $(m - 1) \times n$ or an $m \times (n - 1)$ subproblem. ($\textit{Hint:}$ Think about $\text{MAX-HEAPIFY}$.) Define $T(p)$ where $p = m + n$, to be the maximum running time of $\text{EXTRACT-MIN}$ on any $m \times n$ Young tableau. Give and solve a recurrence relation for $T(p)$ that yields the $O(m + n)$ time bound. **d.** Show how to insert a new element into a nonfull $m \times n$ Young tableau in $O(m + n)$ time. **e.** Using no other sorting method as a subroutine, show how to use an $n \times n$ Young tableau to sort $n^2$ numbers in $O(n^3)$ time. **f.** Give an $O(m + n)$-time algorithm to determine whether a given number is stored in a given $m \times n$ Young tableau.
**a.** $$ \begin{matrix} 2 & 3 & 12 & 14 \\\\ 4 & 8 & 16 & \infty \\\\ 5 & 9 & \infty & \infty \\\\ \infty & \infty & \infty & \infty \end{matrix} $$ **b.** If the top left element is $\infty$, then all the elements on the first row need to be $\infty$. But if this is the case, all other elements need to be $\infty$ because they are larger than the first element on their column. If the bottom right element is smaller than $\infty$, all the elements on the bottom row need to be smaller than $\infty$. But so are the other elements in the tableau, because each is smaller than the bottom element of its column. **c.** $A[1, 1]$ is the smallest element. We store it, so we can return it later, and then replace it with $\infty$. This breaks the Young tableau property and we need to perform a procedure, similar to $\text{MAX-HEAPIFY}$, to restore it. We compare $A[i, j]$ with each of its neighbours and exchange it with the smallest. This restores the property for $A[i, j]$ but reduces the problem to either $A[i, j + 1]$ or $A[i + 1, j]$. We terminate when $A[i, j]$ is smaller than its neighbours. The relation in question is $$T(p) = T(p - 1) + O(1) = T(p - 2) + O(1) + O(1) = \cdots = O(p).$$ **d.** The algorithm is very similar to the previous, except that we start with the bottom right element of the tableau and move it upwards and leftwards to the correct position. The asymptotic analysis is the same. **e.** We can sort by starting with an empty tableau and inserting all the $n^2$ elements in it. Each insertion is $O(n + n) = O(n)$. The complexity is $n^2O(n) = O(n^3)$. Afterwards we can take them one by one and put them back in the original array which has the same complexity. In total, it's $O(n^3)$. We can also do it in place if we allow for "partial" tableaus where only a portion of the top rows (and a portion of the last of them) is in the tableau. Then we can build the tableau in place and then start putting each minimal element to the end. This would be asymptotically equal, but use constant memory. It would also sort the array in reverse. **f.** We start from the lower-left corner. We compare the current element $current$ with the one we're looking for, $key$, and move up if $current > key$ and right if $current < key$. We declare success if $current = key$, and we terminate with failure if we walk off the tableau (a sketch follows).
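The part (f) walk as a minimal C++ sketch (youngSearch is an illustrative name; $\infty$ is modeled by a large sentinel):

```cpp
#include <iostream>
#include <vector>

// O(m + n) search in a Young tableau: start at the bottom-left corner,
// move up when the current entry is too large (its whole row can be
// discarded), move right when it is too small (its column above is smaller).
bool youngSearch(const std::vector<std::vector<int>>& Y, int key) {
    int m = Y.size(), n = Y[0].size();
    int i = m - 1, j = 0;                 // bottom-left corner
    while (i >= 0 && j < n) {
        if (Y[i][j] == key) return true;
        if (Y[i][j] > key) i--;           // row i holds only values > key
        else j++;                          // column j above holds only values < key
    }
    return false;
}

int main() {
    const int INF = 1 << 30;              // stands in for the tableau's infinity
    std::vector<std::vector<int>> Y = {
        {2, 3, 12, 14}, {4, 8, 16, INF}, {5, 9, INF, INF}, {INF, INF, INF, INF}};
    std::cout << youngSearch(Y, 9) << ' ' << youngSearch(Y, 10) << '\n';   // 1 0
}
```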
[]
false
[]
07-7.1-1
07
7.1
7.1-1
docs/Chap07/7.1.md
Using Figure 7.1 as a model, illustrate the operation of $\text{PARTITION}$ on the array $A = \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle$.
$$ \begin{aligned} \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 19, 13, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 13, 19, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 13, 19, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 8, 19, 12, 13, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 8, 7, 12, 13, 19, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 8, 7, 4, 13, 19, 12, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 8, 7, 4, 13, 19, 12, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 8, 7, 4, 2, 19, 12, 21, 13, 6, 11 \rangle \\\\ \langle 9, 5, 8, 7, 4, 2, 6, 12, 21, 13, 19, 11 \rangle \\\\ \langle 9, 5, 8, 7, 4, 2, 6, 11, 21, 13, 19, 12 \rangle \end{aligned} $$
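For reference, a direct C++ transcription of the chapter's $\text{PARTITION}$ (a sketch; 0-based inclusive indices and the function name are ours) reproduces the trace above:

```cpp
#include <utility>
#include <vector>

// Lomuto-style PARTITION with pivot A[r]; p and r are inclusive indices.
// Returns the final position of the pivot.
int partition(std::vector<int>& A, int p, int r) {
    int x = A[r];                        // pivot value
    int i = p - 1;                       // end of the "<= pivot" region
    for (int j = p; j < r; ++j)
        if (A[j] <= x)
            std::swap(A[++i], A[j]);     // grow the small region
    std::swap(A[i + 1], A[r]);           // place the pivot
    return i + 1;
}
```

Calling it on the array above with `p = 0`, `r = 11` returns index $7$, matching the pivot's position in the final line of the trace.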
[]
false
[]
07-7.1-2
07
7.1
7.1-2
docs/Chap07/7.1.md
What value of $q$ does $\text{PARTITION}$ return when all elements in the array $A[p..r]$ have the same value? Modify $\text{PARTITION}$ so that $q = \lfloor (p + r) / 2 \rfloor$ when all elements in the array $A[p..r]$ have the same value.
It returns $r$: every comparison $A[j] \le x$ succeeds, so $i$ ends at $r - 1$ and $\text{PARTITION}$ returns $i + 1 = r$. We can modify $\text{PARTITION}$ by counting the number of comparisons in which $A[j] = A[r]$ and, when that count shows the whole subarray is one value, returning the midpoint $\lfloor (p + r) / 2 \rfloor$ instead.
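One conservative way to realize this (our own sketch; it special-cases the all-equal subarray, which is the only case the exercise constrains, and otherwise matches the standard $\text{PARTITION}$):

```cpp
#include <utility>
#include <vector>

// PARTITION variant for Exercise 7.1-2: count keys equal to the pivot.
// If every key equals the pivot, any split point is valid, so we return
// floor((p + r) / 2); otherwise behave exactly like PARTITION.
int partitionEqual(std::vector<int>& A, int p, int r) {
    int x = A[r];
    int i = p - 1;
    int equal = 1;                       // the pivot itself
    for (int j = p; j < r; ++j) {
        if (A[j] == x) ++equal;
        if (A[j] <= x) std::swap(A[++i], A[j]);
    }
    std::swap(A[i + 1], A[r]);
    if (equal == r - p + 1)              // all keys equal: midpoint is a valid pivot spot
        return (p + r) / 2;
    return i + 1;
}
```

Guarding on the all-equal case keeps the procedure a valid partition for mixed inputs, while giving $q = \lfloor (p + r) / 2 \rfloor$ whenever all elements of $A[p..r]$ have the same value.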
[]
false
[]
07-7.1-3
07
7.1
7.1-3
docs/Chap07/7.1.md
Give a brief argument that the running time of $\text{PARTITION}$ on a subarray of size $n$ is $\Theta(n)$.
There is a **for** statement whose body executes $r - p = \Theta(n)$ times. In the worst case the body of the **if** executes in every iteration, but it takes constant time, and so does the code outside the loop. Thus the running time is $\Theta(n)$.
[]
false
[]
07-7.1-4
07
7.1
7.1-4
docs/Chap07/7.1.md
How would you modify $\text{QUICKSORT}$ to sort into nonincreasing order?
We only need to flip the condition on line 4 of $\text{PARTITION}$, changing the test $A[j] \le x$ to $A[j] \ge x$, as sketched below.
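A minimal sketch of the flipped comparison (using the Lomuto partition of Section 7.1; the names are ours):

```cpp
#include <utility>
#include <vector>

// Same as PARTITION except that line 4's test A[j] <= x becomes A[j] >= x,
// so large elements gather on the left; quicksort built on top of this
// produces nonincreasing order.
int partitionDesc(std::vector<int>& A, int p, int r) {
    int x = A[r];
    int i = p - 1;
    for (int j = p; j < r; ++j)
        if (A[j] >= x)                   // flipped comparison
            std::swap(A[++i], A[j]);
    std::swap(A[i + 1], A[r]);
    return i + 1;
}
```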
[]
false
[]
07-7.2-1
07
7.2
7.2-1
docs/Chap07/7.2.md
Use the substitution method to prove that the recurrence $T(n) = T(n - 1) + \Theta(n)$ has the solution $T(n) = \Theta(n^2)$, as claimed at the beginning of section 7.2.
**Proof** We are given the recurrence $$ T(n) = T(n - 1) + \Theta(n), $$ and we aim to prove $T(n) = \Theta(n^2)$. By the definition of $\Theta(n)$, there exist constants $c_1$, $c_2$, and $n_0$ such that for all $n \ge n_0$, $$ c_1 n \le f(n) \le c_2 n, $$ where $f(n)$ denotes the $\Theta(n)$ term. Thus, the recurrence becomes $$ T(n) = T(n - 1) + f(n), \quad \text{with} \quad c_1 n \le f(n) \le c_2 n. $$ **Upper Bound** We conjecture $T(n) \le c_3 n^2$ for some constant $c_3$. Assume this holds for $T(k)$, i.e., $T(k) \le c_3 k^2$. Then, $$ T(k+1) = T(k) + f(k+1) \le c_3 k^2 + c_2 (k+1). $$ We want to show $$ c_3 k^2 + c_2 (k+1) \le c_3 (k+1)^2 = c_3 k^2 + 2c_3 k + c_3, $$ which reduces to $$ c_2 k + c_2 \le 2c_3 k + c_3. $$ This holds term by term whenever $c_3 \ge c_2$, which we are free to arrange. **Lower Bound** Similarly, we conjecture $T(n) \ge c_4 n^2$ for some constant $c_4 > 0$. Assuming $T(k) \ge c_4 k^2$, $$ T(k+1) = T(k) + f(k+1) \ge c_4 k^2 + c_1 (k+1), $$ and we need $$ c_1 k + c_1 \ge 2c_4 k + c_4, $$ which holds whenever $c_4 \le c_1 / 2$. Thus, we can conclude that $$ T(n) = \Theta(n^2). $$ This completes the proof.
[]
false
[]
07-7.2-2
07
7.2
7.2-2
docs/Chap07/7.2.md
What is the running time of $\text{QUICKSORT}$ when all elements of the array $A$ have the same value?
It is $\Theta(n^2)$, since one of the partitions is always empty (see Exercise 7.1-2).
[]
false
[]
07-7.2-3
07
7.2
7.2-3
docs/Chap07/7.2.md
Show that the running time of $\text{QUICKSORT}$ is $\Theta(n^2)$ when the array $A$ contains distinct elements and is sorted in decreasing order.
If the array is sorted in decreasing order, the pivot element is less than all the other elements, so the partition step takes $\Theta(n)$ time and leaves a subproblem of size $n - 1$ and a subproblem of size $0$; the splits stay maximally unbalanced in the subsequent calls as well. This gives the recurrence considered in Exercise 7.2-1, $T(n) = T(n - 1) + \Theta(n)$, which we showed has the solution $\Theta(n^2)$.
[]
false
[]
07-7.2-4
07
7.2
7.2-4
docs/Chap07/7.2.md
Banks often record transactions on an account in order of the times of the transactions, but many people like to receive their bank statements with checks listed in order by check numbers. People usually write checks in order by check number, and merchants usually cash them with reasonable dispatch. The problem of converting time-of-transaction ordering to check-number ordering is therefore the problem of sorting almost-sorted input. Argue that the procedure $\text{INSERTION-SORT}$ would tend to beat the procedure $\text{QUICKSORT}$ on this problem.
The more sorted the array is, the less work insertion sort does: $\text{INSERTION-SORT}$ runs in $\Theta(n + d)$ time, where $d$ is the number of inversions in the array. In the scenario above the number of inversions tends to be small, so insertion sort runs in close to linear time. $\text{QUICKSORT}$, on the other hand, fares poorly on almost-sorted input: the pivot $A[r]$ is likely to be among the largest elements of its subarray, so $\text{PARTITION}$ tends to produce nearly empty partitions, pushing the running time toward its $\Theta(n^2)$ worst case.
[]
false
[]
07-7.2-5
07
7.2
7.2-5
docs/Chap07/7.2.md
Suppose that the splits at every level of quicksort are in proportion $1 - \alpha$ to $\alpha$, where $0 < \alpha \le 1 / 2$ is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately $-\lg n / \lg\alpha$ and the maximum depth is approximately $-\lg n / \lg(1 - \alpha)$. (Don't worry about integer round-off.)
The minimum depth corresponds to repeatedly taking the smaller subproblem, that is, the branch whose size is proportional to $\alpha$. The subproblem size falls to $1$ after $k$ steps, where $1 \approx \alpha^k n$. Solving gives $k \approx \log_\alpha(1 / n) = -\frac{\lg n}{\lg\alpha}$. The maximum depth corresponds to repeatedly taking the larger subproblem; we get an identical expression with $\alpha$ replaced by $1 - \alpha$.
[]
false
[]
07-7.2-6
07
7.2
7.2-6 $\star$
docs/Chap07/7.2.md
Argue that for any constant $0 < \alpha \le 1 / 2$, the probability is approximately $1 - 2\alpha$ that on a random input array, $\text{PARTITION}$ produces a split more balanced than $1 - \alpha$ to $\alpha$.
In order to produce a split worse than $1 - \alpha$ to $\alpha$, $\text{PARTITION}$ must pick a pivot that lies either within the smallest $\alpha n$ elements or within the largest $\alpha n$ elements. The probability of each is (approximately) $\alpha n / n = \alpha$, and since the two events are disjoint, the probability of one or the other is $2\alpha$. Thus, the probability of a more balanced split is the complement, $1 - 2\alpha$.
[]
false
[]
07-7.3-1
07
7.3
7.3-1
docs/Chap07/7.3.md
Why do we analyze the expected running time of a randomized algorithm and not its worst-case running time?
We analyze the expected running time because it represents the typical cost of running the algorithm. Moreover, we take the expectation over the randomness used during the computation rather than over the inputs: the random bits cannot be chosen adversarially, so the expected-time guarantee holds for *every* input, whereas an expectation over all possible inputs says nothing about a particular (possibly adversarial) input.
[]
false
[]
07-7.3-2
07
7.3
7.3-2
docs/Chap07/7.3.md
When $\text{RANDOMIZED-QUICKSORT}$ runs, how many calls are made to the random number generator $\text{RANDOM}$ in the worst case? How about in the best case? Give your answer in terms of $\Theta$-notation.
$\text{RANDOM}$ is called once per call to $\text{RANDOMIZED-PARTITION}$. In the worst case (maximally unbalanced splits), the number of calls satisfies $$T(n) = T(n - 1) + 1 = \Theta(n).$$ As for the best case, $$T(n) = 2T(n / 2) + 1 = \Theta(n).$$ This is not too surprising: each element serves as a pivot at most once, so any execution makes $O(n)$ calls to $\text{RANDOM}$.
[]
false
[]
07-7.4-1
07
7.4
7.4-1
docs/Chap07/7.4.md
Show that in the recurrence $$T(n) = \max\limits_{0 \le q \le n - 1} (T(q) + T(n - q - 1)) + \Theta(n),$$ $T(n) = \Omega(n^2)$.
Taking $q = n - 1$ inside the max gives $$T(n) \ge T(n - 1) + T(0) + \Theta(n) \ge T(n - 1) + dn$$ for some constant $d > 0$ and all sufficiently large $n$. We guess $T(n) \ge cn^2$ and substitute: $$ \begin{aligned} T(n) & \ge T(n - 1) + dn \\\\ & \ge c(n - 1)^2 + dn \\\\ & = cn^2 - 2cn + c + dn \\\\ & \ge cn^2, \end{aligned} $$ where the last step holds for any $c \le d / 2$, since then $dn \ge 2cn \ge 2cn - c$. Hence $T(n) = \Omega(n^2)$.
[]
false
[]
07-7.4-2
07
7.4
7.4-2
docs/Chap07/7.4.md
Show that quicksort's best-case running time is $\Omega(n\lg n)$.
We'll use the substitution method to show that the best-case running time is $\Omega(n\lg n)$. Let $T(n)$ be the best-case time for the procedure $\text{QUICKSORT}$ on an input of size $n$. We have $$T(n) = \min _{1 \le q \le n - 1} (T(q) + T(n - q - 1)) + \Theta(n).$$ We guess $T(n) \ge cn\lg n$ for some constant $c > 0$, and let $d > 0$ be a constant with $\Theta(n) \ge dn$. Substituting the guess into the recurrence gives $$T(n) \ge \min_{1 \le q \le n - 1} (cq\lg q + c(n - q - 1)\lg(n - q - 1)) + dn.$$ The function $f(q) = q\lg q + (n - q - 1)\lg(n - q - 1)$ is convex, and setting its derivative $f'(q) = \lg q - \lg(n - q - 1)$ to zero shows that the minimum is attained at $q = (n - 1) / 2$. Therefore, $$ \begin{aligned} T(n) & \ge c(n - 1)\lg\frac{n - 1}{2} + dn \\\\ & = c(n - 1)\lg(n - 1) - c(n - 1) + dn \\\\ & \ge c(n - 1)(\lg n - 1) - c(n - 1) + dn & \text{(since $n - 1 \ge n / 2$ implies $\lg(n - 1) \ge \lg n - 1$)} \\\\ & = cn\lg n - c\lg n - 2c(n - 1) + dn \\\\ & \ge cn\lg n + (d - 2c)n - c\lg n \\\\ & \ge cn\lg n, \end{aligned} $$ where the last step holds once we choose $c \le d / 3$, since then $(d - 2c)n \ge cn \ge c\lg n$. This proves the bound.
[]
false
[]
07-7.4-3
07
7.4
7.4-3
docs/Chap07/7.4.md
Show that the expression $q^2 + (n - q - 1)^2$ achieves a maximum over $q = 0, 1, \ldots, n - 1$ when $q = 0$ and $q = n - 1$.
$$ \begin{aligned} f(q) & = q^2 + (n - q - 1)^2 \\\\ f'(q) & = 2q - 2(n - q - 1) = 4q - 2n + 2 \\\\ f''(q) & = 4. \\\\ \end{aligned} $$ $f'(q) = 0$ when $q = \frac{1}{2}n - \frac{1}{2}$, and $f'(q)$ is continuous. Since $f''(q) > 0$ for all $q$, $f'(q)$ is negative to the left of its zero and positive to the right, so this critical point is the unique minimum. Consequently, $f(q)$ is decreasing at the beginning of the interval and increasing at the end, which means the only candidates for a maximum over $q = 0, 1, \ldots, n - 1$ are the two endpoints: $$ \begin{aligned} f(0) & = (n - 1)^2 \\\\ f(n - 1) & = (n - 1)^2 + 0^2. \end{aligned} $$ Both endpoints attain the value $(n - 1)^2$, so the maximum is achieved exactly at $q = 0$ and $q = n - 1$.
[]
false
[]
07-7.4-4
07
7.4
7.4-4
docs/Chap07/7.4.md
Show that $\text{RANDOMIZED-QUICKSORT}$'s expected running time is $\Omega(n\lg n)$.
We use the same reasoning as for the expected number of comparisons, just in the other direction: $$ \begin{aligned} \text E[X] & = \sum_{i = 1}^{n - 1} \sum_{j = i + 1}^n \frac{2}{j - i + 1} \\\\ & = \sum_{i = 1}^{n - 1} \sum_{k = 1}^{n - i} \frac{2}{k + 1} \\\\ & \ge \sum_{i = 1}^{n / 2} \sum_{k = 1}^{n / 2} \frac{2}{k + 1} & \text{(for $i \le n / 2$ we have $n - i \ge n / 2$)} \\\\ & = \sum_{i = 1}^{n / 2} \Omega(\lg n) \\\\ & = \Omega(n\lg n). \end{aligned} $$ Since the expected running time of $\text{RANDOMIZED-QUICKSORT}$ is proportional to the expected number of comparisons, it is $\Omega(n\lg n)$.
[]
false
[]
07-7.4-5
07
7.4
7.4-5
docs/Chap07/7.4.md
We can improve the running time of quicksort in practice by taking advantage of the fast running time of insertion sort when its input is "nearly" sorted. Upon calling quicksort on a subarray with fewer than $k$ elements, let it simply return without sorting the subarray. After the top-level call to quicksort returns, run insertion sort on the entire array to finish the sorting process. Argue that this sorting algorithm runs in $O(nk + n\lg(n / k))$ expected time. How should we pick $k$, both in theory and practice?
In the quicksort part of the proposed algorithm, recursion stops once a subarray has fewer than $k$ elements, i.e., at level $\lg(n / k)$ in expectation, which makes the expected running time of that part $O(n\lg(n / k))$. This leaves $n / k$ unsorted, non-intersecting subarrays of (maximum) length $k$. Because of the way insertion sort works on such an input, it effectively sorts each of these subarrays one after the other, so its cost is the same as sorting each of them separately: $\frac{n}{k} \cdot O(k^2) = O(nk)$. In theory, if we ignore the constant factors, for the hybrid to win we would need $$ \begin{aligned} & n\lg n \ge nk + n\lg(n / k) \\\\ \Rightarrow & \lg n \ge k + \lg n - \lg k \\\\ \Rightarrow & \lg k \ge k, \end{aligned} $$ which is not possible for any $k \ge 1$. If we take the constant factors into account, we get $$ \begin{aligned} & c_qn\lg n \ge c_ink + c_qn\lg(n / k) \\\\ \Rightarrow & c_q\lg n \ge c_ik + c_q\lg n - c_q\lg k \\\\ \Rightarrow & \lg k \ge \frac{c_i}{c_q}k, \end{aligned} $$ which does admit solutions when $c_i < c_q$, i.e., when insertion sort's per-element constant is smaller than quicksort's. The lower-order terms should be taken into consideration too, so in practice $k$ should be chosen by experiment.
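A compact sketch of the scheme (all names ours; the cutoff test and the single final insertion-sort pass follow the description above):

```cpp
#include <utility>
#include <vector>

// Lomuto PARTITION from Section 7.1, pivot A[r], inclusive indices.
static int partition(std::vector<int>& A, int p, int r) {
    int x = A[r], i = p - 1;
    for (int j = p; j < r; ++j)
        if (A[j] <= x) std::swap(A[++i], A[j]);
    std::swap(A[i + 1], A[r]);
    return i + 1;
}

// Quicksort that leaves subarrays shorter than k unsorted.
static void quicksortCutoff(std::vector<int>& A, int p, int r, int k) {
    if (r - p + 1 < k) return;           // small subarray: skip
    int q = partition(A, p, r);
    quicksortCutoff(A, p, q - 1, k);
    quicksortCutoff(A, q + 1, r, k);
}

void hybridSort(std::vector<int>& A, int k) {
    if (!A.empty()) quicksortCutoff(A, 0, (int)A.size() - 1, k);
    for (size_t j = 1; j < A.size(); ++j) {   // one insertion-sort pass:
        int key = A[j];                       // O(nk), since every element
        size_t i = j;                         // sits within a length-<k block
        while (i > 0 && A[i - 1] > key) { A[i] = A[i - 1]; --i; }
        A[i] = key;
    }
}
```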
[]
false
[]
07-7.4-6
07
7.4
7.4-6 $\star$
docs/Chap07/7.4.md
Consider modifying the $\text{PARTITION}$ procedure by randomly picking three elements from array $A$ and partitioning about their median (the middle value of the three elements). Approximate the probability of getting at worst an $\alpha$-to-$(1 - \alpha)$ split, as a function of $\alpha$ in the range $0 < \alpha < 1$.
First, for simplicity's sake, let's assume that we can pick the same element twice, and let $0 < \alpha \le 1 / 2$ (for larger $\alpha$, exchange the roles of $\alpha$ and $1 - \alpha$). The split is worse than $\alpha$-to-$(1 - \alpha)$ exactly when the median lands among the smallest $\alpha n$ elements or among the largest $\alpha n$ elements, and it lands among the smallest $\alpha n$ precisely when at least two of the three picked elements do. The probability that a single pick lands there is $\alpha n / n = \alpha$, so the probability that two specific picks land there while the third does not is $\alpha^2(1 - \alpha) = \alpha^2 - \alpha^3$. There are three ways to choose which two picks those are, and one way for all three picks to land there, so the probability that the median is among the smallest $\alpha n$ elements is $3(\alpha^2 - \alpha^3) + \alpha^3 = 3\alpha^2 - 2\alpha^3$. By symmetry the same holds for the largest $\alpha n$ elements, and the two events are mutually exclusive, so $$\Pr\\{\text{split worse than } \alpha\text{-to-}(1 - \alpha)\\} = 6\alpha^2 - 4\alpha^3 = 2\alpha^2(3 - 2\alpha),$$ and the probability of getting at worst an $\alpha$-to-$(1 - \alpha)$ split is the complement, $1 - 6\alpha^2 + 4\alpha^3$.
[]
false
[]
07-7-1
07
7-1
7-1
docs/Chap07/Problems/7-1.md
The version of $\text{PARTITION}$ given in this chapter is not the original partitioning algorithm. Here is the original partition algorithm, which is due to C.A.R. Hoare: ```cpp HOARE-PARTITION(A, p, r) x = A[p] i = p - 1 j = r + 1 while true repeat j = j - 1 until A[j] ≤ x repeat i = i + 1 until A[i] ≥ x if i < j exchange A[i] with A[j] else return j ``` **a.** Demonstrate the operation of $\text{HOARE-PARTITION}$ on the array $A = \langle 13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21 \rangle$, showing the values of the array and auxiliary values after each iteration of the **while** loop in lines 4-13. The next three questions ask you to give a careful argument that the procedure $\text{HOARE-PARTITION}$ is correct. Assuming that the subarray $A[p..r]$ contains at least two elements, prove the following: **b.** The indices $i$ and $j$ are such that we never access an element of $A$ outside the subarray $A[p..r]$. **c.** When $\text{HOARE-PARTITION}$ terminates, it returns a value $j$ such that $p \le j < r$. **d.** Every element of $A[p..j]$ is less than or equal to every element of $A[j + 1..r]$ when $\text{HOARE-PARTITION}$ terminates. The $\text{PARTITION}$ procedure in section 7.1 separates the pivot value (originally in $A[r]$) from the two partitions it forms. The $\text{HOARE-PARTITION}$ procedure, on the other hand, always places the pivot value (originally in $A[p]$) into one of the two partitions $A[p..j]$ and $A[j + 1..r]$. Since $p \le j < r$, this split is always nontrivial. **e.** Rewrite the $\text{QUICKSORT}$ procedure to use $\text{HOARE-PARTITION}$.
**a.** With the pivot $x = A[p] = 13$, the **while** loop of lines 4-13 behaves as follows:

- Iteration 1: $j$ scans down to $11$ (where $A[11] = 6 \le 13$) and $i$ scans up to $1$ (where $A[1] = 13 \ge 13$); since $i < j$, we exchange, giving $A = \langle 6, 19, 9, 5, 12, 8, 7, 4, 11, 2, 13, 21 \rangle$.
- Iteration 2: $j$ stops at $10$ ($A[10] = 2$) and $i$ stops at $2$ ($A[2] = 19$); we exchange, giving $A = \langle 6, 2, 9, 5, 12, 8, 7, 4, 11, 19, 13, 21 \rangle$.
- Iteration 3: $j$ stops at $9$ ($A[9] = 11$) and $i$ stops at $10$ ($A[10] = 19$); now $i \ge j$, so the procedure returns $j = 9$.

At termination, the variables have the values $x = 13$, $i = 10$ and $j = 9$.

**b.** On the first iteration, the downward scan of $j$ stops at an index $\ge p$ because $A[p] = x$ passes the test $A[j] \le x$, and the upward scan of $i$ stops at $p$ because $A[p] \ge x$. On every later iteration, the preceding exchange guarantees that the element at the old position of $i$ is $\le x$ and the element at the old position of $j$ is $\ge x$, so the new scan of $j$ stops at an index $\ge$ the old $i$, and the new scan of $i$ stops at an index $\le$ the old $j$. Hence $i$ and $j$ never leave $[p, r]$, and we never access an element of $A$ outside $A[p..r]$.

**c.** The procedure returns only when $i \ge j$, and by part (b) we have $j \ge p$ at that point. If the first iteration already ends with $i \ge j$, then $i = p$ (the first upward scan always stops at $p$), which forces $j = p < r$. Otherwise an exchange occurs and the loop runs again, so $j$ is decremented at least twice in total from $r + 1$, giving $j \le r - 1 < r$. In both cases $p \le j < r$.

**d.** After the scans of each iteration stop (and the exchange, if any, is done), every element at or above the current $j$ is $\ge x$ and every element at or below the current $i$ is $\le x$. When $\text{HOARE-PARTITION}$ terminates with $i \ge j$, this means every element of $A[p..j]$ is $\le x$ and every element of $A[j + 1..r]$ is $\ge x$, so each element of $A[p..j]$ is less than or equal to each element of $A[j + 1..r]$.

**e.**

```cpp
HOARE-QUICKSORT(A, p, r)
    if p < r
        q = HOARE-PARTITION(A, p, r)
        HOARE-QUICKSORT(A, p, q)
        HOARE-QUICKSORT(A, q + 1, r)
```
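For concreteness, a C++ transcription of $\text{HOARE-PARTITION}$ and the quicksort from part (e) (a sketch; 0-based inclusive indices and the function names are ours):

```cpp
#include <utility>
#include <vector>

// Hoare partition with pivot A[p]. Returns j with p <= j < r such that
// A[p..j] <= pivot <= A[j+1..r].
int hoarePartition(std::vector<int>& A, int p, int r) {
    int x = A[p];
    int i = p - 1, j = r + 1;
    while (true) {
        do { --j; } while (A[j] > x);    // repeat ... until A[j] <= x
        do { ++i; } while (A[i] < x);    // repeat ... until A[i] >= x
        if (i < j) std::swap(A[i], A[j]);
        else return j;
    }
}

// Note the first recursive call includes index q, since the pivot value
// may sit anywhere in A[p..q].
void hoareQuicksort(std::vector<int>& A, int p, int r) {
    if (p < r) {
        int q = hoarePartition(A, p, r);
        hoareQuicksort(A, p, q);
        hoareQuicksort(A, q + 1, r);
    }
}
```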
[ { "lang": "cpp", "code": "> HOARE-PARTITION(A, p, r)\n> x = A[p]\n> i = p - 1\n> j = r + 1\n> while true\n> repeat\n> j = j - 1\n> until A[j] ≤ x\n> repeat\n> i = i + 1\n> until A[i] ≥ x\n> if i < j\n> exchange A[i] with A[j]\n> else return j\n>" }, { "lang": "cpp", "code": "HOARE-QUICKSORT(A, p, r)\n if p < r\n q = HOARE-PARTITION(A, p, r)\n HOARE-QUICKSORT(A, p, q)\n HOARE-QUICKSORT(A, q + 1, r)" } ]
false
[]
07-7-2
07
7-2
7-2
docs/Chap07/Problems/7-2.md
The analysis of the expected running time of randomized quicksort in section 7.4.2 assumes that all element values are distinct. In this problem, we examine what happens when they are not. **a.** Suppose that all element values are equal. What would be randomized quicksort's running time in this case? **b.** The $\text{PARTITION}$ procedure returns an index $q$ such that each element of $A[p..q - 1]$ is less than or equal to $A[q]$ and each element of $A[q + 1..r]$ is greater than $A[q]$. Modify the $\text{PARTITION}$ procedure to produce a procedure $\text{PARTITION}'(A, p, r)$ which permutes the elements of $A[p..r]$ and returns two indices $q$ and $t$, where $p \le q \le t \le r$, such that - all elements of $A[q..t]$ are equal, - each element of $A[p..q - 1]$ is less than $A[q]$, and - each element of $A[t + 1..r]$ is greater than $A[q]$. Like $\text{PARTITION}$, your $\text{PARTITION}'$ procedure should take $\Theta(r - p)$ time. **c.** Modify the $\text{RANDOMIZED-QUICKSORT}$ procedure to call $\text{PARTITION}'$, and name the new procedure $\text{RANDOMIZED-QUICKSORT}'$. Then modify the $\text{QUICKSORT}$ procedure to produce a procedure $\text{QUICKSORT}'(A, p, r)$ that calls $\text{RANDOMIZED-PARTITION}'$ and recurses only on partitions of elements not known to be equal to each other. **d.** Using $\text{QUICKSORT}'$, how would you adjust the analysis of section 7.4.2 to avoid the assumption that all elements are distinct?
**a.** Since all elements are equal, $\text{RANDOMIZED-PARTITION}$ always returns $q = r$. We have recurrence $T(n) = T(n - 1) + \Theta(n) = \Theta(n^2)$.

**b.**

```cpp
PARTITION'(A, p, r)
    x = A[p]
    low = p
    high = p
    for j = p + 1 to r
        if A[j] < x
            y = A[j]
            A[j] = A[high + 1]
            A[high + 1] = A[low]
            A[low] = y
            low = low + 1
            high = high + 1
        else if A[j] == x
            exchange A[high + 1] with A[j]
            high = high + 1
    return (low, high)
```

**c.**

```cpp
QUICKSORT'(A, p, r)
    if p < r
        (low, high) = RANDOMIZED-PARTITION'(A, p, r)
        QUICKSORT'(A, p, low - 1)
        QUICKSORT'(A, high + 1, r)
```

**d.** Since we don't recurse on elements equal to the pivot, the subproblem sizes with $\text{QUICKSORT}'$ are no larger than the subproblem sizes with $\text{QUICKSORT}$ when all elements are distinct.
[ { "lang": "cpp", "code": "PARTITION'(A, p, r)\n x = A[p]\n low = p\n high = p\n for j = p + 1 to r\n if A[j] < x\n y = A[j]\n A[j] = A[high + 1]\n A[high + 1] = A[low]\n A[low] = y\n low = low + 1\n high = high + 1\n else if A[j] == x\n exchange A[high + 1] with A[j]\n high = high + 1\n return (low, high)" }, { "lang": "cpp", "code": "QUICKSORT'(A, p, r)\n if p < r\n (low, high) = RANDOMIZED-PARTITION'(A, p, r)\n QUICKSORT'(A, p, low - 1)\n QUICKSORT'(A, high + 1, r)" } ]
false
[]
07-7-3
07
7-3
7-3
docs/Chap07/Problems/7-3.md
An alternative analysis of the running time of randomized quicksort focuses on the expected running time of each individual recursive call to $\text{RANDOMIZED-QUICKSORT}$, rather than on the number of comparisons performed. **a.** Argue that, given an array of size $n$, the probability that any particular element is chosen as the pivot is $1 / n$. Use this to define indicator random variables $$X_i = I\\{i\text{th smallest element is chosen as the pivot}\\}.$$ What is $\text E[X_i]$? **b.** Let $T(n)$ be a random variable denoting the running time of quicksort on an array of size $n$. Argue that $$\text E[T(n)] = \text E\bigg[\sum_{q = 1}^n X_q(T(q - 1) + T(n - q) + \Theta(n))\bigg]. \tag{7.5}$$ **c.** Show that we can rewrite equation $\text{(7.5)}$ as $$\text E[T(n)] = \frac{2}{n}\sum_{q = 2}^{n - 1}\text E[T(q)] + \Theta(n). \tag{7.6}$$ **d.** Show that $$\sum_{k = 2}^{n - 1}k\lg k \le \frac{1}{2}n^2\lg n - \frac{1}{8}n^2. \tag{7.7}$$ ($\textit{Hint:}$ Split the summation into two parts, one for $k = 2, 3, \ldots, \lceil n / 2 \rceil - 1$ and one for $k = \lceil n / 2 \rceil, \ldots, n - 1$.) **e.** Using the bound from equation $\text{(7.7)}$, show that the recurrence in equation $\text{(7.6)}$ has the solution $\text E[T(n)] = \Theta(n\lg n)$. ($\textit{Hint:}$ Show, by substitution, that $\text E[T(n)] \le an\lg n$ for sufficiently large $n$ and for some positive constant $a$.)
**a.** Since the pivot is selected as a random element of the array, which has size $n$, the probabilities of any particular element being selected are all equal and sum to one, so each is $\frac{1}{n}$. As such, $\text E[X_i] = \Pr\\{i\text{th smallest element is picked}\\} = \frac{1}{n}$.

**b.** We can apply linearity of expectation over all of the events $X_q$. Suppose a particular $X_q$ is true; then one of the subarrays has length $q - 1$, the other has length $n - q$, and we still need linear time to run the partition procedure. This corresponds exactly to the summand in equation $\text{(7.5)}$.

**c.**

$$
\begin{aligned}
& \text E\Bigg[\sum_{q = 1}^n X_q(T(q - 1) + T(n - q) + \Theta(n)) \Bigg] \\\\
& = \sum_{q = 1}^n \text E[X_q(T(q - 1) + T(n - q) + \Theta(n))] \\\\
& = \sum_{q = 1}^n(T(q - 1) + T(n - q) + \Theta(n)) / n \\\\
& = \Theta(n) + \frac{1}{n} \sum_{q = 1}^n(T(q - 1) + T(n - q)) \\\\
& = \Theta(n) + \frac{1}{n} \Big(\sum_{q = 1}^n T(q - 1) + \sum_{q = 1}^n T(n - q) \Big) \\\\
& = \Theta(n) + \frac{1}{n} \Big(\sum_{q = 1}^n T(q - 1) + \sum_{q = 1}^n T(q - 1) \Big) \\\\
& = \Theta(n) + \frac{2}{n} \sum_{q = 1}^n T(q - 1) \\\\
& = \Theta(n) + \frac{2}{n} \sum_{q = 0}^{n - 1} T(q) \\\\
& = \Theta(n) + \frac{2}{n} \sum_{q = 2}^{n - 1} T(q),
\end{aligned}
$$

where the last step absorbs the constant terms $\frac{2}{n}(T(0) + T(1))$ into the $\Theta(n)$.

**d.** We will prove this inequality in a different way than suggested by the hint. If we let $f(k) = k\lg k$, treated as a continuous function, then $f'(k) = \lg k + \frac{1}{\ln 2} > 0$, so $f$ is increasing. Note now that the summation written out is the left-hand approximation of the integral of $f(k)$ from $2$ to $n$ with step size $1$. By integration by parts, an antiderivative of $k\lg k$ is

$$\frac{1}{\ln 2}\Big(\frac{k^2}{2}\ln k - \frac{k^2}{4}\Big) = \frac{k^2}{2}\lg k - \frac{k^2}{4\ln 2}.$$

Since $f$ is increasing, the left-hand rule underapproximates the integral, and the value of the antiderivative at the lower limit $k = 2$ is positive, so

$$
\begin{aligned}
\sum_{k = 2}^{n - 1} k\lg k & \le \int_2^n k\lg k\,dk \le \frac{n^2\lg n}{2} - \frac{n^2}{4\ln 2} \\\\
& \le \frac{1}{2}n^2\lg n - \frac{1}{8}n^2,
\end{aligned}
$$

where the last inequality uses the fact that $\frac{1}{4\ln 2} \ge \frac{1}{8}$ (equivalently, $\ln 2 \le 2$).

**e.** Assume inductively that $\text E[T(q)] \le aq\lg q$ for all $2 \le q < n$, for a constant $a > 0$ to be chosen. Combining $\text{(7.6)}$ and $\text{(7.7)}$, we have

$$
\begin{aligned}
\text E[T(n)] & = \frac{2}{n} \sum_{q = 2}^{n - 1} \text E[T(q)] + \Theta(n) \\\\
& \le \frac{2}{n} \sum_{q = 2}^{n - 1} aq\lg q + \Theta(n) \\\\
& \le \frac{2a}{n}\Big(\frac{1}{2}n^2\lg n - \frac{1}{8}n^2\Big) + \Theta(n) \\\\
& = an\lg n - \frac{a}{4}n + \Theta(n) \\\\
& \le an\lg n,
\end{aligned}
$$

where the last step holds once $a$ is chosen large enough that $\frac{a}{4}n$ dominates the $\Theta(n)$ term. Together with the $\Omega(n\lg n)$ lower bound of Exercise 7.4-2, this gives $\text E[T(n)] = \Theta(n\lg n)$.
[]
false
[]
07-7-4
07
7-4
7-4
docs/Chap07/Problems/7-4.md
The $\text{QUICKSORT}$ algorithm of Section 7.1 contains two recursive calls to itself. After $\text{QUICKSORT}$ calls $\text{PARTITION}$, it recursively sorts the left subarray and then it recursively sorts the right subarray. The second recursive call in $\text{QUICKSORT}$ is not really necessary; we can avoid it by using an iterative control structure. This technique, called **_tail recursion_**, is provided automatically by good compilers. Consider the following version of quicksort, which simulates tail recursion: ```cpp TAIL-RECURSIVE-QUICKSORT(A, p, r) while p < r // Partition and sort left subarray. q = PARTITION(A, p, r) TAIL-RECURSIVE-QUICKSORT(A, p, q - 1) p = q + 1 ``` **a.** Argue that $\text{TAIL-RECURSIVE-QUICKSORT}(A, 1, A.length)$ correctly sorts the array $A$. Compilers usually execute recursive procedures by using a **_stack_** that contains pertinent information, including the parameter values, for each recursive call. The information for the most recent call is at the top of the stack, and the information for the initial call is at the bottom. Upon calling a procedure, its information is **_pushed_** onto the stack; when it terminates, its information is **_popped_**. Since we assume that array parameters are represented by pointers, the information for each procedure call on the stack requires $O(1)$ stack space. The **_stack depth_** is the maximum amount of stack space used at any time during a computation. **b.** Describe a scenario in which $\text{TAIL-RECURSIVE-QUICKSORT}$'s stack depth is $\Theta(n)$ on an $n$-element input array. **c.** Modify the code for $\text{TAIL-RECURSIVE-QUICKSORT}$ so that the worst-case stack depth is $\Theta(\lg n)$. Maintain the $O(n\lg n)$ expected running time of the algorithm.
**a.** The book proved that $\text{QUICKSORT}$ correctly sorts the array $A$. $\text{TAIL-RECURSIVE-QUICKSORT}$ differs from $\text{QUICKSORT}$ in only the last line of the loop. It is clear that the conditions starting the second iteration of the **while** loop in $\text{TAIL-RECURSIVE-QUICKSORT}$ are identical to the conditions starting the second recursive call in $\text{QUICKSORT}$. Therefore, $\text{TAIL-RECURSIVE-QUICKSORT}$ effectively performs the sort in the same manner as $\text{QUICKSORT}$. Therefore, $\text{TAIL-RECURSIVE-QUICKSORT}$ must correctly sort the array $A$. **b.** The stack depth will be $\Theta(n)$ if the input array is already sorted. The right subarray will always have size $0$ so there will be $n − 1$ recursive calls before the **while**-condition $p < r$ is violated. **c.** ```cpp MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, r) while p < r q = PARTITION(A, p, r) if q < floor((p + r) / 2) MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, q - 1) p = q + 1 else MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, q + 1, r) r = q - 1 ```
[ { "lang": "cpp", "code": "> TAIL-RECURSIVE-QUICKSORT(A, p, r)\n> while p < r\n> // Partition and sort left subarray.\n> q = PARTITION(A, p, r)\n> TAIL-RECURSIVE-QUICKSORT(A, p, q - 1)\n> p = q + 1\n>" }, { "lang": "cpp", "code": "MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, r)\n while p < r\n q = PARTITION(A, p, r)\n if q < floor((p + r) / 2)\n MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, q - 1)\n p = q + 1\n else\n MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, q + 1, r)\n r = q - 1" } ]
false
[]
07-7-5
07
7-5
7-5
docs/Chap07/Problems/7-5.md
One way to improve the $\text{RANDOMIZED-QUICKSORT}$ procedure is to partition around a pivot that is chosen more carefully than by picking a random element from the subarray. One common approach is the **_median-of-3_** method: choose the pivot as the median (middle element) of a set of 3 elements randomly selected from the subarray. (See exercise 7.4-6.) For this problem, let us assume that the elements of the input array $A[1..n]$ are distinct and that $n \ge 3$. We denote the sorted output array by $A'[1..n]$. Using the median-of-3 method to choose the pivot element $x$, define $p_i = \Pr\\{x = A'[i]\\}$. **a.** Give an exact formula for $p_i$ as a function of $n$ and $i$ for $i = 2, 3, \ldots, n - 1$. (Note that $p_1 = p_n = 0$.) **b.** By what amount have we increased the likelihood of choosing the pivot as $x = A'[\lfloor (n + 1) / 2 \rfloor]$, the median of $A[1..n]$, compared with the ordinary implementation? Assume that $n \to \infty$, and give the limiting ratio of these probabilities. **c.** If we define a "good" split to mean choosing the pivot as $x = A'[i]$, where $n / 3 \le i \le 2n / 3$, by what amount have we increased the likelihood of getting a good split compared with the ordinary implementation? ($\textit{Hint:}$ Approximate the sum by an integral.) **d.** Argue that in the $\Omega(n\lg n)$ running time of quicksort, the median-of-3 method affects only the constant factor.
**a.** $p_i$ is the probability that a randomly selected $3$-element subset has $A'[i]$ as its middle element. For this to happen, we must pick $A'[i]$ itself, one element among the $i - 1$ smaller elements, and one element among the $n - i$ larger elements, so $(i - 1)(n - i)$ of the $\binom{n}{3} = \frac{n(n - 1)(n - 2)}{6}$ possible $3$-sets qualify. Hence

$$p_i = \frac{6(n - i)(i - 1)}{n(n - 1)(n - 2)}.$$

**b.** If we let $i = \lfloor \frac{n + 1}{2} \rfloor$, the previous result gives an increase of

$$\frac{6(\lfloor\frac{n - 1}{2}\rfloor)(n - \lfloor\frac{n + 1}{2}\rfloor)}{n(n - 1)(n - 2)} - \frac{1}{n}.$$

In the limit as $n$ goes to infinity, the ratio of the two probabilities is

$$\lim_{n \to \infty} \frac{\frac{6(\lfloor \frac{n - 1}{2} \rfloor)(n - \lfloor \frac{n + 1}{2} \rfloor)}{n(n - 1)(n - 2)}}{\frac{1}{n}} = \frac{3}{2}.$$

**c.** To avoid messiness, suppose $n$ is a multiple of $3$. We approximate the sum by an integral:

$$
\begin{aligned}
\sum_{i = n / 3}^{2n / 3} p_i & \approx \int_{n / 3}^{2n / 3} \frac{6(-x^2 + nx + x - n)}{n(n - 1)(n - 2)}dx \\\\
& = \frac{6(-7n^3 / 81 + 3n^3 / 18 + 3n^2 / 18 - n^2 / 3)}{n(n - 1)(n - 2)},
\end{aligned}
$$

which, in the limit as $n$ goes to infinity, is $\frac{13}{27}$ — a constant greater than the $\frac{1}{3}$ achieved by the original randomized quicksort implementation.

**d.** Even if we always choose the middle element as the pivot (which is the best case), the height of the recursion tree is still $\Theta(\lg n)$, and each level costs $\Theta(n)$. Therefore, the running time remains $\Omega(n\lg n)$, and the median-of-3 method affects only the constant factor.
[]
false
[]
07-7-6
07
7-6
7-6
docs/Chap07/Problems/7-6.md
Consider the problem in which we do not know the numbers exactly. Instead, for each number, we know an interval on the real line to which it belongs. That is, we are given $n$ closed intervals of the form $[a_i, b_i]$, where $a_i \le b_i$. We wish to **_fuzzy-sort_** these intervals, i.e., to produce a permutation $\langle i_1, i_2, \ldots, i_n \rangle$ of the intervals such that for $j = 1, 2, \ldots, n$, there exists $c_j \in [a_{i_j}, b_{i_j}]$ satisfying $c_1 \le c_2 \le \cdots \le c_n$. **a.** Design a randomized algorithm for fuzzy-sorting $n$ intervals. Your algorithm should have the general structure of an algorithm that quicksorts the left endpoints (the $a_i$ values), but it should take advantage of overlapping intervals to improve the running time. (As the intervals overlap more and more, the problem of fuzzy-sorting the intervals becomes progressively easier. Your algorithm should take advantage of such overlapping, to the extent that it exists.) **b.** Argue that your algorithm runs in expected time $\Theta(n\lg n)$ in general, but runs in expected time $\Theta(n)$ when all of the intervals overlap (i.e., when there exists a value $x$ such that $x \in [a_i, b_i]$ for all $i$). Your algorithm should not be checking for this case explicitly; rather, its performance should naturally improve as the amount of overlap increases.
**a.** With a randomly selected left endpoint as the pivot, we could trivially perform fuzzy sorting by quicksorting the left endpoints, the $a_i$'s. This would achieve a worst-case expected running time of $\Theta(n\lg n)$. We can do better by exploiting the fact that we don't have to sort overlapping intervals: for two overlapping intervals $[a_i, b_i]$ and $[a_j, b_j]$, we can always choose $c_i$ and $c_j$ within the intersection so that either $c_i \le c_j$ or $c_j \le c_i$ holds. Since overlapping intervals do not require sorting, we can improve the expected running time by modifying quicksort to identify overlaps:

```cpp
FIND-INTERSECTION(A, p, r)
    rand = RANDOM(p, r)
    exchange A[rand] with A[r]
    a = A[r].a
    b = A[r].b
    for i = p to r - 1
        if A[i].a ≤ b and A[i].b ≥ a
            if A[i].a > a
                a = A[i].a
            if A[i].b < b
                b = A[i].b
    return (a, b)
```

On lines 2 through 3 of $\text{FIND-INTERSECTION}$, we select a random _pivot interval_ as the initial region of overlap $[a, b]$. There are two situations:

- If the intervals are all disjoint, then the estimated region of overlap will be this randomly-selected interval;
- otherwise, on lines 6 through 11, we loop through all intervals in array $A$ (except the pivot interval itself). At each iteration, we determine whether the current interval overlaps the current estimated region of overlap. If it does, we update the estimated region of overlap as $[a, b] = [a_i, b_i] \cap [a, b]$.

$\text{FIND-INTERSECTION}$ has a worst-case running time of $\Theta(n)$, since it examines each interval of $A[p..r]$ once.

We can extend $\text{QUICKSORT}$ to allow fuzzy sorting using $\text{FIND-INTERSECTION}$. First, partition the input array into "left", "middle", and "right" subarrays. The "middle" subarray elements overlap the interval $[a, b]$ found by $\text{FIND-INTERSECTION}$. As a result, they can appear in any order in the output. We recursively call $\text{FUZZY-SORT}$ on the "left" and "right" subarrays to produce a fuzzy sorted array in place. The following pseudocode implements these basic operations. One can run $\text{FUZZY-SORT}(A, 1, A.length)$ to fuzzy-sort an array. The first and last elements in a subarray are indexed by $p$ and $r$, respectively. The indices of the first and last intervals in the "middle" region are $q$ and $t$, respectively.

```cpp
FUZZY-SORT(A, p, r)
    if p < r
        (a, b) = FIND-INTERSECTION(A, p, r)
        t = PARTITION-RIGHT(A, a, p, r)
        q = PARTITION-LEFT(A, b, p, t)
        FUZZY-SORT(A, p, q - 1)
        FUZZY-SORT(A, t + 1, r)
```

We need to determine how to partition the input array into "left", "middle", and "right" subarrays in place. First, we $\text{PARTITION-RIGHT}$ the entire array from $p$ to $r$ using a pivot value equal to the left endpoint $a$ found by $\text{FIND-INTERSECTION}$, such that $a_i \le a$. Then, we $\text{PARTITION-LEFT}$ the subarray from $p$ to $t$ using a pivot value equal to the right endpoint $b$ found by $\text{FIND-INTERSECTION}$, such that $b_i < b$.

```cpp
PARTITION-RIGHT(A, a, p, r)
    i = p - 1
    for j = p to r - 1
        if A[j].a ≤ a
            i = i + 1
            exchange A[i] with A[j]
    exchange A[i + 1] with A[r]
    return i + 1
```

```cpp
PARTITION-LEFT(A, b, p, t)
    i = p - 1
    for j = p to t - 1
        if A[j].b < b
            i = i + 1
            exchange A[i] with A[j]
    exchange A[i + 1] with A[t]
    return i + 1
```

The $\text{FUZZY-SORT}$ is similar to the randomized quicksort presented in the textbook.
In fact, $\text{PARTITION-RIGHT}$ and $\text{PARTITION-LEFT}$ are nearly identical to the $\text{PARTITION}$ procedure on page 171; the primary difference is the value of the pivot used to sort the intervals.

**b.** We expect $\text{FUZZY-SORT}$ to have a worst-case running time of $\Theta(n\lg n)$ when the input intervals are pairwise disjoint. First, notice that lines 2 through 3 of $\text{FIND-INTERSECTION}$ select a _random interval_ as the initial pivot interval. Recall that if the intervals are disjoint, then $[a, b]$ will simply be this initial interval. Since for this case there are no overlaps, the "middle" region created by lines 4 and 5 of $\text{FUZZY-SORT}$ will contain only the initially-selected interval. In general, line 3 is $\Theta(n)$. Fortunately, since the pivot interval $[a, b]$ is randomly selected, the expected sizes of the "left" and "right" subarrays are both $\left\lfloor \frac{n}{2} \right\rfloor$. In conclusion, the recurrence for the running time is

$$
\begin{aligned}
T(n) & \le 2T(n / 2) + \Theta(n) \\\\
& = \Theta(n\lg n).
\end{aligned}
$$

If the intervals all overlap at some point $x$, then $\text{FIND-INTERSECTION}$ will always return a non-empty region of overlap $[a, b]$ containing $x$. In this situation, every interval falls within the "middle" region, the "left" and "right" subarrays are empty, and lines 6 and 7 of $\text{FUZZY-SORT}$ take $\Theta(1)$ time. As a result, there is no recursion, and the running time of $\text{FUZZY-SORT}$ is determined by the $\Theta(n)$ time required to find the region of overlap. Therefore, if the input intervals all overlap at a point, then the expected worst-case running time is $\Theta(n)$.
[ { "lang": "cpp", "code": "FIND-INTERSECTION(A, p, r)\n rand = RANDOM(p, r)\n exchange A[rand] with A[r]\n a = A[r].a\n b = A[r].b\n for i = p to r - 1\n if A[i].a ≤ b and A[i].b ≥ a\n if A[i].a > a\n a = A[i].a\n if A[i].b < b\n b = A[i].b\n return (a, b)" }, { "lang": "cpp", "code": "FUZZY-SORT(A, p, r)\n if p < r\n (a, b) = FIND-INTERSECTION(A, p, r)\n t = PARTITION-RIGHT(A, a, p, r)\n q = PARTITION-LEFT(A, b, p, t)\n FUZZY-SORT(A, p, q - 1)\n FUZZY-SORT(A, t + 1, r)" }, { "lang": "cpp", "code": "PARTITION-RIGHT(A, a, p, r)\n i = p - 1\n for j = p to r - 1\n if A[j].a ≤ a\n i = i + 1\n exchange A[i] with A[j]\n exchange A[i + 1] with A[r]\n return i + 1" }, { "lang": "cpp", "code": "PARTITION-LEFT(A, b, p, t)\n i = p - 1\n for j = p to t - 1\n if A[j].b < b\n i = i + 1\n exchange A[i] with A[j]\n exchange A[i + 1] with A[t]\n return i + 1" } ]
false
[]
08-8.1-1
08
8.1
8.1-1
docs/Chap08/8.1.md
What is the smallest possible depth of a leaf in a decision tree for a comparison sort?
The smallest possible depth of a leaf is $n - 1$. A leaf at depth $d$ is reached after $d$ comparisons, and if $d < n - 1$, then the "has been compared with" graph on the $n$ elements has fewer than $n - 1$ edges and is therefore disconnected, so some element's relation to the rest is undetermined and two different input permutations would reach the same leaf — contradicting correctness. Depth $n - 1$ is achievable: for the sorted permutation $a_1 \le a_2 \le \cdots \le a_n$, comparing each of the $n - 1$ adjacent pairs suffices, which is exactly what insertion sort does on sorted input.
[]
false
[]
08-8.1-2
08
8.1
8.1-2
docs/Chap08/8.1.md
Obtain asymptotically tight bounds on $\lg(n!)$ without using Stirling's approximation. Instead, evaluate the summation $\sum_{k = 1}^n\lg k$ using techniques from Section A.2.
$$
\begin{aligned}
\sum_{k = 1}^n\lg k & \le \sum_{k = 1}^n\lg n \\\\
& = n\lg n.
\end{aligned}
$$

$$
\begin{aligned}
\sum_{k = 1}^n\lg k & \ge \sum_{k = \lceil n / 2 \rceil}^n \lg k \\\\
& \ge \sum_{k = \lceil n / 2 \rceil}^n \lg\frac{n}{2} \\\\
& \ge \frac{n}{2}(\lg n - 1) \\\\
& = \Omega(n\lg n).
\end{aligned}
$$

Therefore $\lg(n!) = \sum_{k = 1}^n \lg k = \Theta(n\lg n)$.
[]
false
[]
08-8.1-3
08
8.1
8.1-3
docs/Chap08/8.1.md
Show that there is no comparison sort whose running time is linear for at least half of the $n!$ inputs of length $n$. What about a fraction of $1 / n$ of the inputs of length $n$? What about a fraction $1 / 2^n$?
Consider a decision tree of height $h$ for a comparison sort on $n$ distinct elements; by **Theorem 8.1**, it has at least $n!$ reachable leaves. Suppose the sort ran in linear time for at least half of the $n!$ inputs. Those $n! / 2$ inputs reach $n! / 2$ distinct leaves, and any binary tree containing $m$ leaves has some leaf at depth at least $\lg m$. Hence one of these leaves is at depth at least $$\lg(n! / 2) = \lg(n!) - 1 = \Theta(n\lg n),$$ so the running time on the corresponding input is $\Theta(n\lg n)$, not linear — a contradiction. The same argument applies to a fraction $1 / n$ of the inputs of length $n$, since $$\lg(n! / n) = \lg(n!) - \lg n = \Theta(n\lg n),$$ and to a fraction $1 / 2^n$, since $$\lg(n! / 2^n) = \lg(n!) - n = \Theta(n\lg n).$$ Thus no comparison sort can run in linear time even for these fractions of the inputs.
[]
false
[]
08-8.1-4
08
8.1
8.1-4
docs/Chap08/8.1.md
Suppose that you are given a sequence of $n$ elements to sort. The input sequence consists of $n / k$ subsequences, each containing $k$ elements. The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the elements in the preceding subsequence. Thus, all that is needed to sort the whole sequence of length $n$ is to sort the $k$ elements in each of the $n / k$ subsequences. Show an $\Omega(n\lg k)$ lower bound on the number of comparisons needed to solve this variant of the sorting problem. ($\textit{Hint:}$ It is not rigorous to simply combine the lower bounds for the individual subsequences.)
Assume that we need to construct a binary decision tree to represent comparisons. Since the length of each subsequence is $k$, there are $(k!)^{n / k}$ possible output permutations. To compute the height $h$ of the decision tree, we must have $(k!)^{n / k} \le 2^h$. Taking logs on both sides and using the Stirling-type bound $\lg(k!) \ge (k\ln k - k) / \ln 2$, we know that $$h \ge \frac{n}{k} \cdot \lg (k!) \ge \frac{n}{k} \cdot \frac{k\ln k - k}{\ln 2} = \frac{n\ln k - n}{\ln 2} = \Omega (n\lg k).$$
[]
false
[]
08-8.2-1
08
8.2
8.2-1
docs/Chap08/8.2.md
Using Figure 8.2 as a model, illustrate the operation of $\text{COUNTING-SORT}$ on the array $A = \langle 6, 0, 2, 0, 1, 3, 4, 6, 1, 3, 2 \rangle$.
We have that $C = \langle 2, 4, 6, 8, 9, 9, 11 \rangle$. Then, after the first three iterations of the loop on lines 10-12 (which place $A[11] = 2$, $A[10] = 3$, and $A[9] = 1$), we have $$ \begin{aligned} B & = \langle , , , , , 2, , , , , \rangle, \\\\ B & = \langle , , , , , 2, , 3, , , \rangle, \\\\ B & = \langle , , , 1, , 2, , 3, , , \rangle, \end{aligned} $$ and at the end, $$B = \langle 0, 0, 1, 1, 2, 2, 3, 3, 4, 6, 6 \rangle.$$
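The same computation in code (a sketch, 0-based, and the function name is ours); the right-to-left placement loop corresponds to lines 10-12 of $\text{COUNTING-SORT}$:

```cpp
#include <vector>

// COUNTING-SORT for keys in 0..k, in the chapter's three phases:
// count, prefix-sum, then place from right to left (which is what
// makes the sort stable).
std::vector<int> countingSort(const std::vector<int>& A, int k) {
    std::vector<int> C(k + 1, 0), B(A.size());
    for (int key : A) ++C[key];                     // C[i] = #elements equal to i
    for (int i = 1; i <= k; ++i) C[i] += C[i - 1];  // C[i] = #elements <= i
    for (int j = (int)A.size() - 1; j >= 0; --j)    // right to left for stability
        B[--C[A[j]]] = A[j];                        // 0-based: decrement, then place
    return B;
}
```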
[]
false
[]
08-8.2-2
08
8.2
8.2-2
docs/Chap08/8.2.md
Prove that $\text{COUNTING-SORT}$ is stable.
Suppose positions $i$ and $j$ with $i < j$ both contain some element $k$. We consider lines 10 through 12 of $\text{COUNTING-SORT}$, where we construct the output array. Since $j > i$, the loop will examine $A[j]$ before examining $A[i]$. When it does so, the algorithm correctly places $A[j]$ in position $m = C[k]$ of $B$. Since $C[k]$ is decremented in line 12, and is never again incremented, we are guaranteed that when the **for** loop examines $A[i]$ we will have $C[k] < m$. Therefore $A[i]$ will be placed in an earlier position of the output array, proving stability.
[]
false
[]
08-8.2-3
08
8.2
8.2-3
docs/Chap08/8.2.md
Suppose that we were to rewrite the **for** loop header in line 10 of the $\text{COUNTING-SORT}$ as ```cpp 10 for j = 1 to A.length ``` Show that the algorithm still works properly. Is the modified algorithm stable?
The algorithm still works correctly: no matter in which order the elements are taken out of $C$ and placed into $B$, the elements with key $k$ still fill the interval $(C[k - 1], C[k]]$ of the output array. Strictly speaking, stability only makes sense when the items carry information besides the key (the sort as written is just for integers, which don't), so consider extending the algorithm to records with integer keys. Because the modified loop scans $A$ from left to right while each key's interval in $B$ is filled from its right end ($C[A[j]]$ is decremented after every placement), records with equal keys appear in the output in reverse order of their appearance in $A$ — the modified algorithm is anti-stable. (Equivalently, if we kept a collection of records per key value, a FIFO queue would make the modified loop stable, while a LIFO stack would make it anti-stable.)
[ { "lang": "cpp", "code": "> 10 for j = 1 to A.length\n>" } ]
false
[]
08-8.2-4
08
8.2
8.2-4
docs/Chap08/8.2.md
Describe an algorithm that, given $n$ integers in the range $0$ to $k$, preprocesses its input and then answers any query about how many of the $n$ integers fall into a range $[a..b]$ in $O(1)$ time. Your algorithm should use $\Theta(n + k)$ preprocessing time.
The algorithm will begin by preprocessing exactly as $\text{COUNTING-SORT}$ does in lines 1 through 9, so that $C[i]$ contains the number of elements less than or equal to $i$ in the array. When queried about how many integers fall into a range $[a..b]$, simply compute $C[b] - C[a - 1]$, taking $C[a - 1]$ to be $0$ when $a = 0$. This takes $O(1)$ time and yields the desired output.
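A sketch of this preprocessing/query pair (names ours; keys are assumed to lie in $0..k$):

```cpp
#include <vector>

// Theta(n + k) preprocessing, O(1) per query.
struct RangeCounter {
    std::vector<long long> C;                   // C[i] = #elements <= i
    RangeCounter(const std::vector<int>& A, int k) : C(k + 1, 0) {
        for (int x : A) ++C[x];                 // count occurrences
        for (int i = 1; i <= k; ++i) C[i] += C[i - 1];  // prefix sums
    }
    long long query(int a, int b) const {       // #elements in [a..b]
        return C[b] - (a > 0 ? C[a - 1] : 0);   // guard the a == 0 case
    }
};
```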
[]
false
[]
08-8.3-1
08
8.3
8.3-1
docs/Chap08/8.3.md
Using Figure 8.3 as a model, illustrate the operation of $\text{RADIX-SORT}$ on the following list of English words: COW, DOG, SEA, RUG, ROW, MOB, BOX, TAB, BAR, EAR, TAR, DIG, BIG, TEA, NOW, FOX.
$$ \begin{array}{cccc} 0 & 1 & 2 & 3 \\\\ \hline \text{COW} & \text{SE$\textbf{A}$} & \text{T$\textbf{A}$B} & \text{$\textbf{B}$AR} \\\\ \text{DOG} & \text{TE$\textbf{A}$} & \text{B$\textbf{A}$R} & \text{$\textbf{B}$IG} \\\\ \text{SEA} & \text{MO$\textbf{B}$} & \text{E$\textbf{A}$R} & \text{$\textbf{B}$OX} \\\\ \text{RUG} & \text{TA$\textbf{B}$} & \text{T$\textbf{A}$R} & \text{$\textbf{C}$OW} \\\\ \text{ROW} & \text{DO$\textbf{G}$} & \text{S$\textbf{E}$A} & \text{$\textbf{D}$IG} \\\\ \text{MOB} & \text{RU$\textbf{G}$} & \text{T$\textbf{E}$A} & \text{$\textbf{D}$OG} \\\\ \text{BOX} & \text{DI$\textbf{G}$} & \text{D$\textbf{I}$G} & \text{$\textbf{E}$AR} \\\\ \text{TAB} & \text{BI$\textbf{G}$} & \text{B$\textbf{I}$G} & \text{$\textbf{F}$OX} \\\\ \text{BAR} & \text{BA$\textbf{R}$} & \text{M$\textbf{O}$B} & \text{$\textbf{M}$OB} \\\\ \text{EAR} & \text{EA$\textbf{R}$} & \text{D$\textbf{O}$G} & \text{$\textbf{N}$OW} \\\\ \text{TAR} & \text{TA$\textbf{R}$} & \text{C$\textbf{O}$W} & \text{$\textbf{R}$OW} \\\\ \text{DIG} & \text{CO$\textbf{W}$} & \text{R$\textbf{O}$W} & \text{$\textbf{R}$UG} \\\\ \text{BIG} & \text{RO$\textbf{W}$} & \text{N$\textbf{O}$W} & \text{$\textbf{S}$EA} \\\\ \text{TEA} & \text{NO$\textbf{W}$} & \text{B$\textbf{O}$X} & \text{$\textbf{T}$AB} \\\\ \text{NOW} & \text{BO$\textbf{X}$} & \text{F$\textbf{O}$X} & \text{$\textbf{T}$AR} \\\\ \text{FOX} & \text{FO$\textbf{X}$} & \text{R$\textbf{U}$G} & \text{$\textbf{T}$EA} \\\\ \end{array} $$
[]
false
[]
08-8.3-2
08
8.3
8.3-2
docs/Chap08/8.3.md
Which of the following sorting algorithms are stable: insertion sort, merge sort, heapsort, and quicksort? Give a simple scheme that makes any sorting algorithm stable. How much additional time and space does your scheme entail?
Insertion sort and merge sort are stable. Heapsort and quicksort are not. To make any sorting algorithm stable we can preprocess, replacing each element of an array with an ordered pair. The first entry will be the value of the element, and the second value will be the index of the element. For example, the array $[2, 1, 1, 3, 4, 4, 4]$ would become $[(2, 1), (1, 2), (1, 3), (3, 4), (4, 5), (4, 6), (4, 7)]$. We now interpret $(i, j) < (k, m)$ if $i < k$ or $i = k$ and $j < m$. Under this definition of less-than, the algorithm is guaranteed to be stable because each of our new elements is distinct and the index comparison ensures that if a repeat element appeared later in the original array, it must appear later in the sorted array. This doubles the space requirement, but the running time will be asymptotically unchanged.
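The scheme in code (a sketch; `std::sort` stands in for an arbitrary, possibly unstable comparison sort, and lexicographic comparison of `std::pair`s gives exactly the value-then-index order described above):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Make any sort stable: pair each element with its original index,
// sort the pairs (ties break on the index), then copy the values back.
// Doubles the space; asymptotic running time is unchanged.
void stableSort(std::vector<int>& A) {
    std::vector<std::pair<int, int>> P(A.size());
    for (size_t i = 0; i < A.size(); ++i) P[i] = {A[i], (int)i};
    std::sort(P.begin(), P.end());      // lexicographic: value, then index
    for (size_t i = 0; i < A.size(); ++i) A[i] = P[i].first;
}
```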
[]
false
[]
08-8.3-3
08
8.3
8.3-3
docs/Chap08/8.3.md
Use induction to prove that radix sort works. Where does your proof need the assumption that the intermediate sort is stable?
**Loop invariant:** At the beginning of the **for** loop, the array is sorted on the last $i − 1$ digits. **Initialization:** The array is trivially sorted on the last $0$ digits. **Maintenance:** Let's assume that the array is sorted on the last $i − 1$ digits. After we sort on the $i$th digit, the array will be sorted on the last $i$ digits. It is obvious that elements with different digit in the $i$th position are ordered accordingly; in the case of the same $i$th digit, we still get a correct order, because we're using a stable sort and the elements were already sorted on the last $i − 1$ digits. **Termination:** The loop terminates when $i = d + 1$. Since the invariant holds, we have the numbers sorted on $d$ digits.
[]
false
[]
08-8.3-4
08
8.3
8.3-4
docs/Chap08/8.3.md
Show how to sort $n$ integers in the range $0$ to $n^3 - 1$ in $O(n)$ time.
First run through the list of integers and convert each one to base $n$, then radix sort them. Each number will have at most $\log_n n^3 = 3$ digits so there will only need to be $3$ passes. For each pass, there are $n$ possible values which can be taken on, so we can use counting sort to sort each digit in $O(n)$ time.
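A sketch of the whole procedure (names ours; assumes the values, which are below $n^3$, fit in a 64-bit integer):

```cpp
#include <vector>

// Sort n values in 0..n^3-1 in O(n): treat each value as a 3-digit
// base-n number and radix sort with a stable counting sort per digit.
void sortCubed(std::vector<long long>& A) {
    long long n = (long long)A.size();
    if (n <= 1) return;
    std::vector<long long> B(A.size());
    long long div = 1;
    for (int pass = 0; pass < 3; ++pass, div *= n) {   // 3 digits -> 3 passes
        std::vector<long long> C(n, 0);
        for (long long x : A) ++C[(x / div) % n];      // count digit values
        for (long long i = 1; i < n; ++i) C[i] += C[i - 1];
        for (long long j = (long long)A.size() - 1; j >= 0; --j)
            B[--C[(A[j] / div) % n]] = A[j];           // stable placement
        A.swap(B);
    }
}
```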
[]
false
[]
08-8.3-5
08
8.3
8.3-5 $\star$
docs/Chap08/8.3.md
In the first card-sorting algorithm in this section, exactly how many sorting passes are needed to sort $d$-digit decimal numbers in the worst case? How many piles of cards would an operator need to keep track of in the worst case?
Sorting on the most significant digit first, each of the resulting piles must then be sorted recursively on the remaining digits. In the worst case (with enough cards that every pile is nonempty at every level), sorting $d$-digit decimal numbers takes $$1 + 10 + 10^2 + \cdots + 10^{d - 1} = \frac{10^d - 1}{9}$$ sorting passes. As for piles: while one pile is being sorted, the operator must set aside up to $9$ sibling piles at each of the $d - 1$ suspended levels of the recursion, in addition to the (up to) $10$ piles of the current pass and the pile of already-combined output, so in the worst case the operator keeps track of $9(d - 1) + 11 = \Theta(d)$ piles.
[]
false
[]
08-8.4-1
08
8.4
8.4-1
docs/Chap08/8.4.md
Using Figure 8.4 as a model, illustrate the operation of $\text{BUCKET-SORT}$ on the array $A = \langle .79, .13, .16, .64, .39, .20, .89, .53, .71, .42 \rangle$.
$$ \begin{array}{cl} i & B[i] \\\\ \hline 0 & \\\\ 1 & .13, .16 \\\\ 2 & .20 \\\\ 3 & .39 \\\\ 4 & .42 \\\\ 5 & .53 \\\\ 6 & .64 \\\\ 7 & .79, .71 \\\\ 8 & .89 \\\\ 9 & \\\\ \end{array} $$ After sorting each bucket and concatenating them in order, $$A = \langle.13, .16, .20, .39, .42, .53, .64, .71, .79, .89 \rangle.$$
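In code, the same algorithm looks like this (a sketch; `std::list::sort` — a merge sort — stands in for the per-bucket insertion sort, which does not change the asymptotics, and inputs are assumed to lie in $[0, 1)$):

```cpp
#include <list>
#include <vector>

// BUCKET-SORT for inputs uniform on [0, 1): distribute into n buckets
// by floor(n * A[i]), sort each bucket, then concatenate.
void bucketSort(std::vector<double>& A) {
    size_t n = A.size();
    std::vector<std::list<double>> B(n);
    for (double x : A) B[(size_t)(n * x)].push_back(x);
    size_t i = 0;
    for (auto& bucket : B) {
        bucket.sort();                   // each bucket holds O(1) items on average
        for (double x : bucket) A[i++] = x;
    }
}
```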
[]
false
[]
08-8.4-2
08
8.4
8.4-2
docs/Chap08/8.4.md
Explain why the worst-case running time for bucket sort is $\Theta(n^2)$. What simple change to the algorithm preserves its linear average-case running time and makes its worst-case running time $O(n\lg n)$?
If all the keys fall in the same bucket and they happen to be in reverse order, we have to insertion-sort a single bucket of $n$ items in reversed order, which takes $\Theta(n^2)$ time. We can use merge sort or heapsort within the buckets instead to make the worst case $O(n\lg n)$. Insertion sort was originally chosen because it works well on linked lists and needs only constant extra space; on the short lists that arise on average, it is fast in practice. If we use another sorting algorithm, we may have to convert each list to an array first, which can slow down the algorithm in practice, though the average case remains linear.
[]
false
[]
08-8.4-3
08
8.4
8.4-3
docs/Chap08/8.4.md
Let $X$ be a random variable that is equal to the number of heads in two flips of a fair coin. What is $\text E[X^2]$? What is $\text E^2[X]$?
$$ \begin{aligned} \text E[X] & = 2 \cdot \frac{1}{4} + 1 \cdot \frac{1}{2} + 0 \cdot \frac{1}{4} = 1 \\\\ \text E[X^2] & = 4 \cdot \frac{1}{4} + 1 \cdot \frac{1}{2} + 0 \cdot \frac{1}{4} = 1.5 \\\\ \text E^2[X] & = \text E[X] \cdot \text E[X] = 1 \cdot 1 = 1. \end{aligned} $$
[]
false
[]
08-8.4-4
08
8.4
8.4-4 $\star$
docs/Chap08/8.4.md
We are given $n$ points in the unit circle, $p_i = (x_i, y_i)$, such that $0 < x_i^2 + y_i^2 \le 1$ for $i = 1, 2, \ldots, n$. Suppose that the points are uniformly distributed; that is, the probability of finding a point in any region of the circle is proportional to the area of that region. Design an algorithm with an average-case running time of $\Theta(n)$ to sort the $n$ points by their distances $d_i = \sqrt{x_i^2 + y_i^2}$ from the origin. ($\textit{Hint:}$ Design the bucket sizes in $\text{BUCKET-SORT}$ to reflect the uniform distribution of the points in the unit circle.)
Bucket sort by radius: to make the $n$ buckets equally likely, choose bucket boundaries $r_0 = 0 < r_1 < \cdots < r_n = 1$ so that the annulus between $r_{i - 1}$ and $r_i$ has area $\pi / n$: $$ \begin{aligned} \pi r_i^2 & = \frac{i}{n} \cdot \pi 1^2 \\\\ r_i & = \sqrt{\frac{i}{n}}. \end{aligned} $$ Since the points are uniform over the circle, each point lands in each bucket with probability $1 / n$, so bucket sort runs in $\Theta(n)$ average-case time.
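A sketch of the bucket computation (names ours). Note that $\Pr\\{d \le r\\} = r^2$, so $d^2 = x^2 + y^2$ is itself uniform on $(0, 1]$ and no square root is needed:

```cpp
#include <cstddef>

// Bucket index for a point (x, y) with 0 < x^2 + y^2 <= 1. Bucket i
// collects points with sqrt(i / n) < d <= sqrt((i + 1) / n), which is
// exactly floor(n * d^2) up to the boundary clamp; all n buckets have
// equal probability 1/n under the uniform distribution on the circle.
std::size_t bucketIndex(double x, double y, std::size_t n) {
    double d2 = x * x + y * y;           // d^2 is monotone in the distance d
    std::size_t i = (std::size_t)(n * d2);
    return i < n ? i : n - 1;            // clamp d2 == 1 into the last bucket
}
```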
[]
false
[]
08-8.4-5
08
8.4
8.4-5 $\star$
docs/Chap08/8.4.md
A **_probability distribution function_** $P(x)$ for a random variable $X$ is defined by $P(x) = \Pr\\{X \le x\\}$. Suppose that we draw a list of $n$ random variables $X_1, X_2, \ldots, X_n$ from a continuous probability distribution function $P$ that is computable in $O(1)$ time. Give an algorithm that sorts these numbers in linear average-case time.
Bucket by the quantiles of $P$: use $n$ buckets $[p_0, p_1), [p_1, p_2), \dots, [p_{n - 1}, p_n)$, where $p_i$ is defined by $$P(p_i) = \frac{i}{n}.$$ Note that the buckets are not all the same size, which is fine: what matters for the linear average-case running time is that the inputs are uniformly distributed among the buckets, and indeed $\Pr\\{p_{i - 1} \le X < p_i\\} = P(p_i) - P(p_{i - 1}) = 1 / n$ for every bucket. Equivalently, since $P$ is computable in $O(1)$ time, we can map each $X_j$ to $P(X_j)$, which is uniformly distributed on $[0, 1]$, and run the standard $\text{BUCKET-SORT}$.
[]
false
[]
08-8-1
08
8-1
8-1
docs/Chap08/Problems/8-1.md
In this problem, we prove a probabilistic $\Omega(n\lg n)$ lower bound on the running time of any deterministic or randomized comparison sort on $n$ distinct input elements. We begin by examining a deterministic comparison sort $A$ with decision tree $T_A$. We assume that every permutation of $A$'s inputs is equally likely. **a.** Suppose that each leaf of $T_A$ is labeled with the probability that it is reached given a random input. Prove that exactly $n!$ leaves are labeled $1 / n!$ and that the rest are labeled $0$. **b.** Let $D(T)$ denote the external path length of a decision tree $T$; that is, $D(T)$ is the sum of the depths of all the leaves of $T$. Let $T$ be a decision tree with $k > 1$ leaves, and let $LT$ and $RT$ be the left and right subtrees of $T$. Show that $D(T) = D(LT) + D(RT)+k$. **c.** Let $d(k)$ be the minimum value of $D(T)$ over all decision trees $T$ with $k > 1$ leaves. Show that $d(k) = \min _{1 \le i \le k - 1} \\{ d(i) + d(k - i) + k \\}$. ($\textit{Hint:}$ Consider a decision tree $T$ with $k$ leaves that achieves the minimum. Let $i_0$ be the number of leaves in $LT$ and $k - i_0$ the number of leaves in $RT$.) **d.** Prove that for a given value of $k > 1$ and $i$ in the range $1 \le i \le k - 1$, the function $i\lg i + (k - i) \lg(k - i)$ is minimized at $i = k / 2$. Conclude that $d(k) = \Omega(k\lg k)$. **e.** Prove that $D(T_A) = \Omega(n!\lg(n!))$, and conclude that the average-case time to sort $n$ elements is $\Omega(n\lg n)$. Now, consider a _randomized_ comparison sort $B$. We can extend the decision-tree model to handle randomization by incorporating two kinds of nodes: ordinary comparison nodes and "randomization" nodes. A randomization node models a random choice of the form $\text{RANDOM}(1, r)$ made by algorithm $B$; the node has $r$ children, each of which is equally likely to be chosen during an execution of the algorithm. **f.** Show that for any randomized comparison sort $B$, there exists a deterministic comparison sort $A$ whose expected number of comparisons is no more than those made by $B$.
**a.** There are $n!$ possible permutations of the input array because the input elements are all distinct. Since each is equally likely, the distribution is uniformly supported on this set. So, each occurs with probability $\frac{1}{n!}$ and corresponds to a different leaf, because the program needs to be able to distinguish between them. These $n!$ leaves are each labeled $1 / n!$, and every remaining leaf is reached by no input, so it is labeled $0$.

**b.** The depths of particular elements of $LT$ or $RT$ are all one less than their depths when considered elements of $T$. In particular, this is true for the leaves of the two subtrees. Also, $\\{LT, RT\\}$ form a partition of all the leaves of $T$. Therefore, if we let $L(T)$ denote the leaves of $T$,

$$
\begin{aligned}
D(T) & = \sum_{\ell \in L(T)} D_T(\ell) \\\\
& = \sum_{\ell \in L(LT)} D_T(\ell) + \sum_{\ell \in L(RT)} D_T(\ell) \\\\
& = \sum_{\ell \in L(LT)} (D_{LT}(\ell) + 1) + \sum_{\ell \in L(RT)} (D_{RT}(\ell) + 1) \\\\
& = \sum_{\ell \in L(LT)} D_{LT}(\ell) + \sum_{\ell \in L(RT)} D_{RT}(\ell) + k \\\\
& = D(LT) + D(RT) + k.
\end{aligned}
$$

**c.** Suppose we have a tree $T$ with $k$ leaves achieving $D(T) = d(k)$, and let $i_0$ be the number of leaves in $LT$ (so $k - i_0$ in $RT$). By part (b), $d(k) = D(T) = D(LT) + D(RT) + k \ge d(i_0) + d(k - i_0) + k$. Conversely, for any $i$ with $1 \le i \le k - 1$, joining a minimal $i$-leaf tree and a minimal $(k - i)$-leaf tree under a new root yields a $k$-leaf tree with external path length $d(i) + d(k - i) + k$, so $d(k) \le d(i) + d(k - i) + k$. Hence $d(k) = \min _{1 \le i \le k - 1} \\{d(i) + d(k - i) + k\\}$.

**d.** We treat $i$ as a continuous variable and take a derivative to find critical points. The derivative of $i\lg i + (k - i)\lg(k - i)$ with respect to $i$ is

$$\lg i + \frac{1}{\ln 2} - \lg(k - i) - \frac{1}{\ln 2} = \lg\frac{i}{k - i},$$

which is $0$ when $i = k / 2$; since the second derivative is positive, this critical point is the unique minimum. To conclude that $d(k) = \Omega(k\lg k)$, we use the substitution method with the guess $d(k) \ge ck\lg k$ for a constant $0 < c \le 1$:

$$d(k) = \min_{1 \le i \le k - 1}\\{d(i) + d(k - i) + k\\} \ge \min_{1 \le i \le k - 1}\\{ci\lg i + c(k - i)\lg(k - i)\\} + k \ge ck\lg\frac{k}{2} + k = ck\lg k + (1 - c)k \ge ck\lg k.$$

**e.** Since a tree with $k$ leaves has external path length at least $k\lg k$, and a sorting decision tree must have at least $n!$ leaves, we have $D(T_A) = \Omega(n!\lg(n!))$. The average-case time is the depth of a leaf weighted by the probability of that leaf being reached, and exactly $n!$ leaves are reached, each with probability $1 / n!$, so the average-case time is at least $\frac{n!\lg(n!)}{n!} = \lg(n!) = \Omega(n\lg n)$.

**f.** Fixing the random bits of $B$ turns it into a deterministic comparison sort, and the expected number of comparisons made by $B$ is the average, over all settings of the random bits, of the expected numbers of comparisons of the resulting deterministic sorts. At least one setting achieves a value no greater than this average; fixing the random bits to that setting gives the desired deterministic comparison sort $A$.
[]
false
[]
08-8-2
08
8-2
8-2
docs/Chap08/Problems/8-2.md
Suppose that we have an array of $n$ data records to sort and that the key of each record has the value $0$ or $1$. An algorithm for sorting such a set of records might possess some subset of the following three desirable characteristics: 1. The algorithm runs in $O(n)$ time. 2. The algorithm is stable. 3. The algorithm sorts in place, using no more than a constant amount of storage space in addition to the original array. **a.** Give an algorithm that satisfies criteria 1 and 2 above. **b.** Give an algorithm that satisfies criteria 1 and 3 above. **c.** Give an algorithm that satisfies criteria 2 and 3 above. **d.** Can you use any of your sorting algorithms from parts (a)–(c) as the sorting method used in line 2 of $\text{RADIX-SORT}$, so that $\text{RADIX-SORT}$ sorts $n$ records with $b$-bit keys in $O(bn)$ time? Explain how or why not. **e.** Suppose that the $n$ records have keys in the range from $1$ to $k$. Show how to modify counting sort so that it sorts the records in place in $O(n + k)$ time. You may use $O(k)$ storage outside the input array. Is your algorithm stable? ($\textit{Hint:}$ How would you do it for $k = 3$?)
**a.** Counting sort: with keys limited to $0$ and $1$ it runs in $O(n)$ time and is stable, but it needs an auxiliary output array, so it is not in place. **b.** A single partitioning pass as in quicksort's $\text{PARTITION}$, with the value $1$ playing the role of the pivot: one linear scan with constant extra space puts all the $0$s before all the $1$s, but the long-distance exchanges destroy stability. **c.** Insertion sort: it is stable and in place, but takes $\Theta(n^2)$ time in the worst case. **d.** The algorithm from part (a) works: line 2 of $\text{RADIX-SORT}$ needs a stable sort that runs in $O(n)$ time per digit, which counting sort provides, giving $O(bn)$ overall. The algorithm from part (b) cannot be used because it is not stable, so later passes would scramble the work of earlier ones; the algorithm from part (c) is stable but would take $O(bn^2)$ time. **e.** Thanks [@Gutdub](https://github.com/Gutdub) for providing the solution in this [issue](https://github.com/walkccc/CLRS/issues/150). ```cpp MODIFIED-COUNTING-SORT(A, k) let C[0..k] be a new array for i = 0 to k C[i] = 0 for j = 1 to A.length C[A[j]] = C[A[j]] + 1 for i = 2 to k C[i] = C[i] + C[i - 1] insert sentinel element NIL at the start of A B = C[0..k - 1] insert number 1 at the start of B // B now contains the "endpoints" for C for i = 2 to A.length while C[A[i]] != B[A[i]] key = A[i] exchange A[C[A[i]]] with A[i] while A[C[key]] == key // make sure that elements with the same keys will not be swapped C[key] = C[key] - 1 remove the sentinel element return A ``` In place (storage space is $\Theta(k)$) but not stable. A runnable C++ sketch of the same idea follows.
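For concreteness, here is a runnable C++ sketch of the same in-place $O(n + k)$ idea, written as a cycle-chasing ("American flag sort") pass rather than a line-by-line translation of the pseudocode above; the function and variable names are illustrative. Like the pseudocode, it uses $\Theta(k)$ extra storage and is not stable.

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// In-place counting sort sketch for keys in 1..k: O(n + k) time, O(k) extra
// storage, not stable.  Each swap moves one element into its final bucket.
void inplaceCountingSort(std::vector<int>& a, int k) {
    int n = a.size();
    std::vector<int> next(k + 1, 0), end(k + 1, 0);
    for (int x : a) ++end[x];                           // end[v] = count of key v (for now)
    for (int v = 2; v <= k; ++v) end[v] += end[v - 1];  // end[v] = one past bucket v
    for (int v = 1; v <= k; ++v) next[v] = (v == 1) ? 0 : end[v - 1];  // bucket starts
    for (int v = 1; v <= k; ++v) {
        int i = next[v];
        while (i < end[v]) {
            int key = a[i];
            if (key == v) { ++i; ++next[v]; }           // already in its own bucket
            else { std::swap(a[i], a[next[key]]); ++next[key]; }  // send it home
        }
    }
}

int main() {
    std::vector<int> a = {3, 1, 2, 3, 1, 2, 2};
    inplaceCountingSort(a, 3);
    for (int x : a) std::printf("%d ", x);              // prints: 1 1 2 2 2 3 3
    std::printf("\n");
}
```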
[ { "lang": "cpp", "code": "MODIFIED-COUNTING-SORT(A, k)\n let C[0..k] be a new array\n for i = 1 to k\n C[i] = 0\n for j = 1 to A.length\n C[A[j]] = C[A[j]] + 1\n for i = 2 to k\n C[i] = C[i] + C[i - 1]\n insert sentinel element NIL at the start of A\n B = C[0..k - 1]\n insert number 1 at the start of B\n // B now contains the \"endpoints\" for C\n for i = 2 to A.length\n while C[A[i]] != B[A[i]]\n key = A[i]\n exchange A[C[A[i]]] with A[i]\n while A[C[key]] == key // make sure that elements with the same keys will not be swapped\n C[key] = C[key] - 1\n remove the sentinel element\n return A" } ]
false
[]
08-8-3
08
8-3
8-3
docs/Chap08/Problems/8-3.md
**a.** You are given an array of integers, where different integers may have different numbers of digits, but the total number of digits over _all_ the integers in the array is $n$. Show how to sort the array in $O(n)$ time. **b.** You are given an array of strings, where different strings may have different numbers of characters, but the total number of characters over all the strings is $n$. Show how to sort the strings in $O(n)$ time. (Note that the desired order here is the standard alphabetical order; for example, $\text a < \text{ab} < \text b$.)
**a.** First, sort the integers according to their number of digits, using bucket sort with one bucket for each possible length; distributing the integers takes time linear in how many there are, which is at most $n$. Then sort each of these uniform-length groups with radix sort: a group of $m_i$ integers of $d_i$ digits each takes $O(d_i m_i)$ time, and $\sum_i d_i m_i = n$, so all the groups together take $O(n)$ time. Finally, concatenate the sorted groups in increasing order of length, since an integer with fewer digits is always smaller than one with more digits. **b.** Make a bucket for every letter of the alphabet, each containing the words that start with that letter. Within each bucket, drop the first letter of each word and recurse on the shortened words, prepending the empty word (if some word in the bucket consisted of that single letter) to the recursive result. Since each word is processed a number of times equal to its length, the running time is linear in the total number of characters. A sketch of this appears below.
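A minimal C++ sketch of the recursion in part (b), assuming lowercase words over a 26-letter alphabet; the names are illustrative. Each word is examined once per character, and each recursive call pays only a constant (alphabet-sized) overhead, so the total work is $O(n)$.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Recursive bucketing from part (b): bucket words by their letter at `depth`,
// emit the word that ends at this depth first (the "empty word" case), then
// recurse on each non-empty bucket in alphabetical order.
void msdSort(const std::vector<std::string>& words, size_t depth,
             std::vector<std::string>& out) {
    std::vector<std::string> bucket[26];
    for (const auto& w : words) {
        if (w.size() == depth) out.push_back(w);     // exhausted: alphabetically first
        else bucket[w[depth] - 'a'].push_back(w);
    }
    for (const auto& b : bucket)
        if (!b.empty()) msdSort(b, depth + 1, out);
}

int main() {
    std::vector<std::string> words = {"b", "ab", "a", "ba", "abc"};
    std::vector<std::string> out;
    msdSort(words, 0, out);
    for (const auto& w : out) std::printf("%s ", w.c_str());  // a ab abc b ba
    std::printf("\n");
}
```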
[]
false
[]
08-8-4
08
8-4
8-4
docs/Chap08/Problems/8-4.md
Suppose that you are given $n$ red and $n$ blue water jugs, all of different shapes and sizes. All red jugs hold different amounts of water, as do the blue ones. Moreover, for every red jug, there is a blue jug that holds the same amount of water, and vice versa. Your task is to find a grouping of the jugs into pairs of red and blue jugs that hold the same amount of water. To do so, you may perform the following operation: pick a pair of jugs in which one is red and one is blue, fill the red jug with water, and then pour the water into the blue jug. This operation will tell you whether the red or the blue jug can hold more water, or that they have the same volume. Assume that such a comparison takes one time unit. Your goal is to find an algorithm that makes a minimum number of comparisons to determine the grouping. Remember that you may not directly compare two red jugs or two blue jugs. **a.** Describe a deterministic algorithm that uses $\Theta(n^2)$ comparisons to group the jugs into pairs. **b.** Prove a lower bound of $\Omega(n\lg n)$ for the number of comparisons that an algorithm solving this problem must make. **c.** Give a randomized algorithm whose expected number of comparisons is $O(n\lg n)$, and prove that this bound is correct. What is the worst-case number of comparisons for your algorithm?
**a.** Select a red jug. Compare it to blue jugs until you find the one that matches, then set that pair aside and repeat with the next red jug. For the $i$th red jug there are $n - i + 1$ candidate blue jugs, so at most $n - i$ comparisons are needed (if the first $n - i$ candidates all fail, the last remaining blue jug must be the match). This uses at most $\sum_{i = 1}^n (n - i) = n(n - 1) / 2 = \Theta(n^2)$ comparisons. **b.** We can imagine first lining up the red jugs in some order. A solution to this problem then becomes a permutation of the blue jugs such that the $i$th blue jug is the same size as the $i$th red jug. As in Section 8.1, we can build a decision tree representing the comparisons made between blue jugs and red jugs: an internal node represents a comparison between a specific pair of red and blue jugs, and a leaf node represents a permutation of the blue jugs determined by the results of the comparisons. Since a comparison tells us whether one jug is greater than, less than, or equal in size to the other, each node has at most $3$ children. The tree must have at least $n!$ leaves, so its height is at least $\log_3(n!) = \Omega(n\lg n)$. Since an execution of an algorithm corresponds to a simple path from the root to a leaf, any algorithm must make $\Omega(n\lg n)$ comparisons in the worst case. **c.** We use an algorithm analogous to randomized quicksort (a C++ sketch follows). Select a blue jug uniformly at random and partition the red jugs into those smaller and those larger than it; at some point during these comparisons we find the red jug of equal size. Once the red jugs have been divided, use the matching red jug to partition the blue jugs into those smaller and those larger. If the chosen jug is the $k$th smallest, we then solve the original problem on the two subproblems of sizes $k - 1$ and $n - k$ in the same manner. A subproblem of size $1$ is trivially solved, because a lone red jug and a lone blue jug must be the same size. The analysis of the expected number of comparisons is exactly the same as that of $\text{RANDOMIZED-QUICKSORT}$ given on pages 181–184; we run the partitioning twice per chosen jug, which doubles the expected number of comparisons, but the factor of $2$ is absorbed by the big-$O$ notation, giving $O(n\lg n)$ expected comparisons. In the worst case, we pick the largest jug each time, which results in $\sum_{i = 2}^n (2i - 1) = n^2 - 1 = \Theta(n^2)$ comparisons.
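A minimal C++ sketch of the randomized pairing in part (c). Jugs are modelled as plain integers (equal value means equal volume), and an ordinary integer comparison stands in for the single pour-and-compare operation the problem allows; all names are illustrative.

```cpp
#include <cstdio>
#include <cstdlib>
#include <utility>
#include <vector>

// Quicksort-like randomized pairing: pick a random blue jug, partition the
// red jugs around it (finding its red partner along the way), then use that
// red jug to partition the remaining blue jugs, and recurse on both sides.
void pairJugs(std::vector<int> red, std::vector<int> blue,
              std::vector<std::pair<int, int>>& pairs) {
    if (red.empty()) return;
    int pivot = blue[std::rand() % blue.size()];     // random blue jug
    std::vector<int> rSmall, rLarge;
    int rMatch = pivot;                              // its red partner (found below)
    for (int r : red) {
        if (r < pivot) rSmall.push_back(r);
        else if (r > pivot) rLarge.push_back(r);
        else rMatch = r;                             // equal volume: the match
    }
    std::vector<int> bSmall, bLarge;
    for (int b : blue) {
        if (b < rMatch) bSmall.push_back(b);
        else if (b > rMatch) bLarge.push_back(b);    // b == rMatch is the pivot itself
    }
    pairs.push_back({rMatch, pivot});
    pairJugs(rSmall, bSmall, pairs);                 // smaller jugs pair among themselves
    pairJugs(rLarge, bLarge, pairs);
}

int main() {
    std::vector<int> red = {5, 2, 9, 4}, blue = {4, 9, 2, 5};
    std::vector<std::pair<int, int>> pairs;
    pairJugs(red, blue, pairs);
    for (auto& p : pairs) std::printf("(%d, %d) ", p.first, p.second);
    std::printf("\n");
}
```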
[]
false
[]
08-8-5
08
8-5
8-5
docs/Chap08/Problems/8-5.md
Suppose that, instead of sorting an array, we just require that the elements increase on average. More precisely, we call an $n$-element array $A$ **_k-sorted_** if, for all $i = 1, 2, \ldots, n - k$, the following holds: $$\frac{\sum_{j = i}^{i + k - 1} A[j]}{k} \le \frac{\sum_{j = i + 1}^{i + k} A[j]}{k}.$$ **a.** What does it mean for an array to be $1$-sorted? **b.** Give a permutation of the numbers $1, 2, \ldots, 10$ that is $2$-sorted, but not sorted. **c.** Prove that an $n$-element array is $k$-sorted if and only if $A[i] \le A[i + k]$ for all $i = 1, 2, \ldots, n - k$. **d.** Give an algorithm that $k$-sorts an $n$-element array in $O(n\lg (n / k))$ time. We can also show a lower bound on the time to produce a $k$-sorted array, when $k$ is a constant. **e.** Show that we can sort a $k$-sorted array of length $n$ in $O(n\lg k)$ time. ($\textit{Hint:}$ Use the solution to Exercise 6.5-9.) **f.** Show that when $k$ is a constant, $k$-sorting an $n$-element array requires $\Omega(n\lg n)$ time. ($\textit{Hint:}$ Use the solution to the previous part along with the lower bound on comparison sorts.)
**a.** Being $1$-sorted is the same as being sorted in the ordinary sense: for $k = 1$ the condition reduces to $A[i] \le A[i + 1]$ for all $i = 1, 2, \ldots, n - 1$. **b.** $2, 1, 4, 3, 6, 5, 8, 7, 10, 9$. **c.** Multiplying both sides by $k$ and cancelling the common terms $A[i + 1], A[i + 2], \ldots, A[i + k - 1]$: $$ \begin{aligned} \frac{\sum_{j = i}^{i + k - 1} A[j]}{k} & \le \frac{\sum_{j = i + 1}^{i + k} A[j]}{k} \\\\ \sum_{j = i}^{i + k - 1} A[j] & \le \sum_{j = i + 1}^{i + k} A[j] \\\\ A[i] & \le A[i + k]. \end{aligned} $$ **d.** As in Shell sort, split the array into $k$ interleaved subsequences, where the $j$th subsequence is $A[j], A[j + k], A[j + 2k], \ldots$; by part (c), the array is $k$-sorted exactly when each of these subsequences is sorted. Sort each of the $k$ subsequences, of length about $n / k$, with merge sort (or quicksort, in expectation) in $O((n / k)\lg(n / k))$ time. The total running time is $k \cdot O((n / k)\lg(n / k)) = O(n\lg(n / k))$. **e.** By part (c), a $k$-sorted array is the interleaving of $k$ sorted subsequences. Merge these $k$ sorted lists using a min-heap of size $k$, as in the solution to Exercise 6.5-9: extracting the minimum and inserting the next element of its list cost $O(\lg k)$ per element, so the $n$ elements take $O(n\lg k)$ time in total. A sketch appears below. **f.** Suppose that $k$-sorting an $n$-element array could be done in $o(n\lg n)$ time for constant $k$. By part (e), we could then finish sorting the $k$-sorted array in an additional $O(n\lg k) = O(n)$ time, fully sorting the array in $o(n\lg n)$ time and contradicting the $\Omega(n\lg n)$ lower bound for comparison sorts. Hence $k$-sorting requires $\Omega(n\lg n)$ time when $k$ is a constant.
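A minimal C++ sketch of part (e), assuming the $k$-sorted input is viewed as $k$ interleaved sorted subsequences; names are illustrative.

```cpp
#include <cstdio>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// A k-sorted array interleaves k sorted subsequences (indices j, j+k, j+2k, ...),
// so merge them with a size-k min-heap as in Exercise 6.5-9.  Each element
// costs O(lg k) heap work, giving O(n lg k) overall.
std::vector<int> sortKSorted(const std::vector<int>& a, int k) {
    using Item = std::pair<int, int>;                // (value, index in a)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
    int n = a.size();
    for (int j = 0; j < k && j < n; ++j) heap.push({a[j], j});  // heads of the k lists
    std::vector<int> out;
    out.reserve(n);
    while (!heap.empty()) {
        auto [v, i] = heap.top();
        heap.pop();
        out.push_back(v);
        if (i + k < n) heap.push({a[i + k], i + k}); // next element of the same list
    }
    return out;
}

int main() {
    std::vector<int> a = {2, 1, 4, 3, 6, 5, 8, 7, 10, 9};   // the 2-sorted array from (b)
    for (int x : sortKSorted(a, 2)) std::printf("%d ", x);  // 1 2 3 4 5 6 7 8 9 10
    std::printf("\n");
}
```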
[]
false
[]