29-29-5
29
29-5
29-5
docs/Chap29/Problems/29-5.md
In this problem, we consider a variant of the minimum-cost-flow problem from Section 29.2 in which we are not given a demand, a source, or a sink. Instead, we are given, as before, a flow network and edge costs $a(u, v)$. A flow is feasible if it satisfies the capacity constraint on every edge and flow conservation at _every_ vertex. The goal is to find, among all feasible flows, the one of minimum cost. We call this problem the **_minimum-cost-circulation problem_**. **a.** Formulate the minimum-cost-circulation problem as a linear program. **b.** Suppose that for all edges $(u, v) \in E$, we have $a(u, v) > 0$. Characterize an optimal solution to the minimum-cost-circulation problem. **c.** Formulate the maximum-flow problem as a minimum-cost-circulation problem linear program. That is, given a maximum-flow problem instance $G = (V, E)$ with source $s$, sink $t$, and edge capacities $c$, create a minimum-cost-circulation problem by giving a (possibly different) network $G' = (V', E')$ with edge capacities $c'$ and edge costs $a'$ such that you can discern a solution to the maximum-flow problem from a solution to the minimum-cost-circulation problem. **d.** Formulate the single-source shortest-path problem as a minimum-cost-circulation problem linear program.
**a.** This is exactly the linear program given in equations $\text{(29.51)}$–$\text{(29.52)}$, except that the third line of the constraints should be removed, and in the second line of the constraints $u$ should range over all of $V$ instead of over $V \backslash \\{s, t\\}$. **b.** If $a(u, v) > 0$ for every edge, then there is no point in sending any flow at all, so an optimal solution is the zero flow. It satisfies the capacity constraints trivially, and it satisfies the conservation constraints because the flow into and out of each vertex is zero. **c.** We assume that the edge $(t, s)$ is not in $E$; if it is, removing it does not decrease the maximum flow from $s$ to $t$. Let $V' = V$ and $E' = E \cup \\{(t, s)\\}$. For the edges of $E'$ that are in $E$, let the capacity be as it is in $E$ and let the cost be $0$. For the new edge, set $c'(t, s) = \infty$ and $a'(t, s) = -1$. A minimum-cost circulation in $G'$ then sends as much flow as possible across the edge $(t, s)$ in order to drive the objective function lower; the other edges have no effect on the objective function. By flow conservation, the amount crossing the edge $(t, s)$ equals the total flow from $s$ to $t$ through the rest of the network, so maximizing the flow across $(t, s)$ also maximizes the flow from $s$ to $t$. To recover a maximum flow for the original network, keep the same flow values and discard the edge $(t, s)$. **d.** Suppose that $s$ is the vertex from which we are computing shortest distances. Build the circulation network by starting with the original graph, keeping each edge's cost and giving it infinite capacity. Then add an edge from every vertex other than $s$ back to $s$ with capacity $1$ and cost equal to $-|E|$ times the largest cost appearing among the edge costs already in the graph. Such a large negative cost ensures that routing flow through the network in order to push a unit of flow across each of these added edges decreases the total cost. To recover the shortest path to any vertex, start at that vertex and repeatedly move to a vertex that is sending it a unit of flow, until $s$ is reached.
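For part (a), a minimal sketch of the linear program in LaTeX, assuming $f_{uv}$ denotes the flow on edge $(u, v)$ (the notation follows $\text{(29.51)}$–$\text{(29.52)}$ but this block is not quoted from the text):

```latex
\begin{aligned}
\text{minimize}   \quad & \sum_{(u, v) \in E} a(u, v)\, f_{uv} & & \\
\text{subject to} \quad & f_{uv} \le c(u, v) & & \text{for each } (u, v) \in E, \\
                        & \sum_{v \in V} f_{vu} - \sum_{v \in V} f_{uv} = 0 & & \text{for each } u \in V, \\
                        & f_{uv} \ge 0 & & \text{for each } (u, v) \in E.
\end{aligned}
```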
[]
false
[]
30-30.1-1
30
30.1
30.1-1
docs/Chap30/30.1.md
Multiply the polynomials $A(x) = 7x^3 - x^2 + x - 10$ and $B(x) = 8x^3 - 6x + 3$ using equations $\text{(30.1)}$ and $\text{(30.2)}$.
$$ \begin{array}{rl} & 56x^6 - 8x^5 + (8 - 42)x^4 + (-80 + 6 + 21)x^3 + (-3 - 6)x^2 + (60 + 3)x - 30 \\\\ = & 56x^6 - 8x^5 - 34x^4 - 53x^3 - 9x^2 + 63x - 30. \end{array} $$
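A quick way to double-check the arithmetic above is to convolve the coefficient vectors directly; a small Python sketch (not part of the original solution):

```python
# Coefficients are listed from x^0 upward.
A = [-10, 1, -1, 7]   # A(x) = 7x^3 - x^2 + x - 10
B = [3, -6, 0, 8]     # B(x) = 8x^3 - 6x + 3

# Naive convolution, i.e. equations (30.1)-(30.2).
C = [0] * (len(A) + len(B) - 1)
for j, a in enumerate(A):
    for k, b in enumerate(B):
        C[j + k] += a * b

print(C)  # [-30, 63, -9, -53, -34, -8, 56]
```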
[]
false
[]
30-30.1-2
30
30.1
30.1-2
docs/Chap30/30.1.md
Another way to evaluate a polynomial $A(x)$ of degree-bound $n$ at a given point $x_0$ is to divide $A(x)$ by the polynomial $(x - x_0)$, obtaining a quotient polynomial $q(x)$ of degree-bound $n - 1$ and a remainder $r$, such that $$A(x) = q(x)(x - x_0) + r.$$ Clearly, $A(x_0) = r$. Show how to compute the remainder $r$ and the coefficients of $q(x)$ in time $\Theta(n)$ from $x_0$ and the coefficients of $A$.
Let $A$ be the matrix with $1$'s on the diagonal, $-x_0$'s on the superdiagonal, and $0$'s everywhere else. Let $q$ be the vector $(r, q_0, q_1, \dots, q_{n - 2})$. If $a = (a_0, a_1, \dots, a_{n - 1})$, then we need to solve the matrix equation $Aq = a$ to compute the remainder and the coefficients. Since $A$ is tridiagonal (in fact upper bidiagonal), Problem 28-1 (e) tells us how to solve this equation in linear time.
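Equivalently, the system can be solved by back substitution, which is just synthetic division. A small Python sketch, assuming coefficients are given lowest degree first (the function name is my own):

```python
def divide_by_linear(a, x0):
    """Given coefficients a[0..n-1] of A(x) (lowest degree first), return
    (q, r) with A(x) = q(x) * (x - x0) + r, in Theta(n) time."""
    n = len(a)
    q = [0] * (n - 1)
    carry = a[n - 1]               # q_{n-2} = a_{n-1}
    for i in range(n - 2, -1, -1):
        q[i] = carry
        carry = a[i] + x0 * carry  # Horner-style accumulation
    return q, carry                # carry is r = A(x0)

# Example: A(x) = x^3 - 1 divided by (x - 1) gives q(x) = x^2 + x + 1, r = 0.
print(divide_by_linear([-1, 0, 0, 1], 1))  # ([1, 1, 1], 0)
```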
[]
false
[]
30-30.1-3
30
30.1
30.1-3
docs/Chap30/30.1.md
Derive a point-value representation for $A^\text{rev}(x) = \sum_{j = 0}^{n - 1} a_{n - 1 - j}x^j$ from a point-value representation for $A(x) = \sum_{j = 0}^{n - 1} a_jx^j$, assuming that none of the points is $0$.
For each pair of points $(p, A(p))$, we can compute the pair $(\frac{1}{p}, A^{rev}(\frac{1}{p}))$. To do this, we note that $$ \begin{aligned} A^{rev}(\frac{1}{p}) & = \sum_{j = 0}^{n - 1} a_{n - 1 - j} (\frac{1}{p})^j \\\\ & = \sum_{j = 0}^{n - 1} a_j(\frac{1}{p})^{n - 1 - j} \\\\ & = p^{1 - n} \sum_{j = 0}^{n - 1} a_jp^j \\\\ & = p^{1 - n}A(p). \end{aligned} $$ Since we know what $A(p)$ is, we can compute $A^{rev}(\frac{1}{p})$; of course, we are using the fact that $p \ne 0$ because we divide by it. Also, the new points are distinct, because $\frac{1}{p} = \frac{1}{p'}$ implies $p = p'$ by cross multiplication. So, since all the $x$ values were distinct in the point-value representation of $A$, they are distinct in this point-value representation of $A^{rev}$.
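A small Python sketch of this transformation (the function name and the tiny example are my own):

```python
def reverse_poly_points(points, n):
    """points is a list of (p, A(p)) pairs with p != 0; n is the degree-bound."""
    return [(1 / p, p ** (1 - n) * y) for p, y in points]

# Example with A(x) = 1 + 2x + 3x^2 (n = 3), so A_rev(x) = 3 + 2x + x^2.
pts = [(1, 6), (2, 17), (3, 34)]       # pairs (p, A(p))
print(reverse_poly_points(pts, 3))
# [(1.0, 6.0), (0.5, 4.25), (0.333..., 3.777...)]; indeed A_rev(1/2) = 4.25.
```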
[]
false
[]
30-30.1-4
30
30.1
30.1-4
docs/Chap30/30.1.md
Prove that $n$ distinct point-value pairs are necessary to uniquely specify a polynomial of degree-bound $n$, that is, if fewer than $n$ distinct point-value pairs are given, they fail to specify a unique polynomial of degree-bound $n$. ($\textit{Hint:}$ Using Theorem 30.1, what can you say about a set of $n - 1$ point-value pairs to which you add one more arbitrarily chosen point-value pair?)
Suppose that just $n - 1$ point-value pairs uniquely determine a polynomial $P$ which satisfies them. Append the point-value pair $(x_{n - 1}, y_{n - 1})$ to them, and let $P'$ be the unique polynomial which agrees with the $n$ pairs, given by Theorem 30.1. Now append instead $(x\_{n - 1}, y'\_{n - 1})$ where $y_{n - 1} \ne y'_{n - 1}$, and let $P''$ be the polynomial obtained from these points via Theorem 30.1. Since polynomials coming from $n$ distinct pairs are unique, $P' \ne P''$. However, $P'$ and $P''$ both agree on the original $n - 1$ point-value pairs, contradicting the assumption that those pairs determined a polynomial uniquely.
[]
false
[]
30-30.1-5
30
30.1
30.1-5
docs/Chap30/30.1.md
Show how to use equation $\text{(30.5)}$ to interpolate in time $\Theta(n^2)$. ($\textit{Hint:}$ First compute the coefficient representation of the polynomial $\prod_j (x - x_j)$ and then divide by $(x - x_k)$ as necessary for the numerator of each term; see Exercise 30.1-2. You can compute each of the $n$ denominators in time $O(n)$.)
First, we show that we can compute the coefficient representation of $\prod_j (x - x_j)$ in time $\Theta(n^2)$. We do it incrementally, showing that multiplying $\prod_{j < k} (x - x_j)$ by $(x - x_k)$ only takes time $O(n)$; since this needs to be done $n$ times, the total runtime is $O(n^2)$. Suppose that $\sum_{i = 0}^{k - 1} c_ix^i$ is a coefficient representation of $\prod_{j < k} (x - x_j)$. To multiply it by $(x - x_k)$, we set the new coefficients $c_i' = c_{i - 1} - x_kc_i$ for $i = 1, \dots, k$ and $c_0' = -x_k \cdot c_0$. Each of these coefficients can be computed in constant time, and there are only linearly many of them, so the time to compute the next partial product is just $O(n)$. Now that we have a coefficient representation of $\prod_j (x - x_j)$, we need to compute, for each $k$, the quotient $\prod_{j \ne k} (x - x_j)$, each of which can be computed in time $\Theta(n)$ by Exercise 30.1-2. Since the full product contains the factor we are dividing by, the remainder in each case is $0$. Let's call these quotient polynomials $f_k$. Then, we need only compute the sum $\sum_k y_k \frac{f_k(x)}{f_k(x_k)}$. We compute each $f_k(x_k)$ in time $\Theta(n)$, so all told only $\Theta(n^2)$ time is spent computing all the $f_k(x_k)$ values. For each of the terms in the sum, dividing the polynomial $f_k(x)$ by the number $f_k(x_k)$ and multiplying by $y_k$ only takes time $\Theta(n)$, so in total that is $\Theta(n^2)$. Lastly, we are adding up $n$ polynomials, each of degree at most $n - 1$, so the total time taken there is also $\Theta(n^2)$.
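A compact Python sketch of this $\Theta(n^2)$ interpolation (the helper structure and the small example are my own; coefficients are stored lowest degree first):

```python
def interpolate(xs, ys):
    n = len(xs)
    # Coefficients of prod_j (x - x_j), built one factor at a time: O(n^2).
    full = [1]
    for xj in xs:
        nxt = [0] * (len(full) + 1)
        for i, c in enumerate(full):
            nxt[i + 1] += c        # contribution of x * (c x^i)
            nxt[i] -= xj * c       # contribution of -x_j * (c x^i)
        full = nxt
    result = [0] * n
    for k in range(n):
        # f_k(x) = full(x) / (x - x_k) by synthetic division: O(n).
        fk = [0] * n
        carry = full[n]
        for i in range(n - 1, -1, -1):
            fk[i] = carry
            carry = full[i] + xs[k] * carry
        denom = sum(c * xs[k] ** i for i, c in enumerate(fk))   # f_k(x_k)
        scale = ys[k] / denom
        for i in range(n):
            result[i] += scale * fk[i]
    return result

print(interpolate([0, 1, 2], [1, 2, 5]))   # ~[1.0, 0.0, 1.0], i.e. 1 + x^2
```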
[]
false
[]
30-30.1-6
30
30.1
30.1-6
docs/Chap30/30.1.md
Explain what is wrong with the "obvious" approach to polynomial division using a point-value representation, i.e., dividing the corresponding $y$ values. Discuss separately the case in which the division comes out exactly and the case in which it doesn't.
If we try to compute $P / Q$ by dividing the corresponding $y$ values, then at any point where $Q$ takes on the value zero we cannot carry out the division at all. If the division comes out exactly and all the point-value pairs $(x_i, y_i)$ we choose for $Q$ have $y_i \ne 0$, then the approach works: the pairs $(x_i, P(x_i) / Q(x_i))$ form a point-value representation of the quotient polynomial. If the division does not come out exactly, however, then $P / Q$ is not a polynomial, so the pairs we obtain are not the point-value representation of any polynomial of the appropriate degree-bound, and interpolating them yields a polynomial that does not agree with $P / Q$.
[]
false
[]
30-30.1-7
30
30.1
30.1-7
docs/Chap30/30.1.md
Consider two sets $A$ and $B$, each having $n$ integers in the range from $0$ to $10n$. We wish to compute the **_Cartesian sum_** of $A$ and $B$, defined by $$C = \\{x + y: x \in A \text{ and } y \in B\\}.$$ Note that the integers in $C$ are in the range from $0$ to $20n$. We want to find the elements of $C$ and the number of times each element of $C$ is realized as a sum of elements in $A$ and $B$. Show how to solve the problem in $O(n\lg n)$ time. ($\textit{Hint:}$ Represent $A$ and $B$ as polynomials of degree at most $10n$.)
For the set $A$, we define the polynomial $f_A$ to have a coefficient representation with $a_i$ equal to zero if $i \notin A$ and equal to $1$ if $i \in A$. Similarly define $f_B$. Then, we claim that looking at $f_C := f_A \cdot f_B$ in coefficient form, the $i$th coefficient $c_i$ is exactly the number of times that $i$ is realized as a sum of an element of $A$ and an element of $B$. Since we can perform the polynomial multiplication in time $O(n \lg n)$ by the methods of this chapter, we can get the final answer in time $O(n \lg n)$. To see that $f_C$ has the property described, consider the ways a term $x^i$ can arise. Each contribution to that coefficient comes from some $k$ such that $a_k \ne 0$ and $b_{i - k} \ne 0$, because the powers of $x$ add when we multiply. Since each such contribution is exactly $1$, the final coefficient counts the total number of such contributions, that is, the number of $k \in A$ such that $i - k \in B$, which is exactly what we claimed $f_C$ was counting.
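A short Python sketch of this approach, using NumPy's FFT for the polynomial product (NumPy and the helper name are assumptions of this sketch, not part of the original solution):

```python
import numpy as np

def cartesian_sum_counts(A, B, limit):
    """counts[s] = number of pairs (x, y) in A x B with x + y = s,
    for 0 <= s <= 2 * limit, assuming all elements lie in [0, limit]."""
    fa = np.zeros(limit + 1)
    fb = np.zeros(limit + 1)
    fa[list(A)] = 1                     # indicator polynomial f_A
    fb[list(B)] = 1                     # indicator polynomial f_B
    size = 2 * limit + 1                # enough room for the full product
    prod = np.fft.irfft(np.fft.rfft(fa, size) * np.fft.rfft(fb, size), size)
    return np.rint(prod).astype(int)

counts = cartesian_sum_counts({1, 2, 4}, {0, 3}, limit=10)
print(counts[4])   # 2, since 4 = 1 + 3 = 4 + 0
```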
[]
false
[]
30-30.2-1
30
30.2
30.2-1
docs/Chap30/30.2.md
Prove Corollary 30.4.
(Omit!)
[]
false
[]
30-30.2-2
30
30.2
30.2-2
docs/Chap30/30.2.md
Compute the $\text{DFT}$ of the vector $(0, 1, 2, 3)$.
(Omit!)
[]
false
[]
30-30.2-3
30
30.2
30.2-3
docs/Chap30/30.2.md
Do Exercise 30.1-1 by using the $\Theta(n\lg n)$-time scheme.
(Omit!)
[]
false
[]
30-30.2-4
30
30.2
30.2-4
docs/Chap30/30.2.md
Write pseudocode to compute $\text{DFT}_n^{-1}$ in $\Theta(n\lg n)$ time.
(Omit!)
[]
false
[]
30-30.2-5
30
30.2
30.2-5
docs/Chap30/30.2.md
Describe the generalization of the $\text{FFT}$ procedure to the case in which $n$ is a power of $3$. Give a recurrence for the running time, and solve the recurrence.
(Omit!)
[]
false
[]
30-30.2-6
30
30.2
30.2-6 $\star$
docs/Chap30/30.2.md
Suppose that instead of performing an $n$-element $\text{FFT}$ over the field of complex numbers (where $n$ is even), we use the ring $\mathbb Z_m$ of integers modulo $m$, where $m = 2^{tn / 2} + 1$ and $t$ is an arbitrary positive integer. Use $\omega = 2^t$ instead of $\omega_n$ as a principal nth root of unity, modulo $m$. Prove that the $\text{DFT}$ and the inverse $\text{DFT}$ are well defined in this system.
(Omit!)
[]
false
[]
30-30.2-7
30
30.2
30.2-7
docs/Chap30/30.2.md
Given a list of values $z_0, z_1, \dots, z_{n - 1}$ (possibly with repetitions), show how to find the coefficients of a polynomial $P(x)$ of degree-bound $n + 1$ that has zeros only at $z_0, z_1, \dots, z_{n - 1}$ (possibly with repetitions). Your procedure should run in time $O(n\lg^2 n)$. ($\textit{Hint:}$ The polynomial $P(x)$ has a zero at $z_j$ if and only if $P(x)$ is a multiple of $(x - z_j)$.)
(Omit!)
[]
false
[]
30-30.2-8
30
30.2
30.2-8 $\star$
docs/Chap30/30.2.md
The **_chirp transform_** of a vector $a = (a_0, a_1, \dots, a_{n - 1})$ is the vector $y = (y_0, y_1, \dots, y_{n - 1})$, where $y_k = \sum_{j = 0}^{n - 1} a_jz^{kj}$ and $z$ is any complex number. The $\text{DFT}$ is therefore a special case of the chirp transform, obtained by taking $z = \omega_n$. Show how to evaluate the chirp transform in time $O(n\lg n)$ for any complex number $z$. ($\textit{Hint:}$ Use the equation $$y_k = z^{k^2 / 2} \sum_{j = 0}^{n - 1} \Big(a_jz^{j^2 / 2}\Big) \Big(z^{-(k - j)^2 / 2}\Big)$$ to view the chirp transform as a convolution.)
(Omit!)
[]
false
[]
30-30.3-1
30
30.3
30.3-1
docs/Chap30/30.3.md
Show how $\text{ITERATIVE-FFT}$ computes the $\text{DFT}$ of the input vector $(0, 2, 3, -1, 4, 5, 7, 9)$.
(Omit!)
[]
false
[]
30-30.3-2
30
30.3
30.3-2
docs/Chap30/30.3.md
Show how to implement an $\text{FFT}$ algorithm with the bit-reversal permutation occurring at the end, rather than at the beginning, of the computation. ($\textit{Hint:}$ Consider the inverse $\text{DFT}$.)
(Omit!)
[]
false
[]
30-30.3-3
30
30.3
30.3-3
docs/Chap30/30.3.md
How many times does $\text{ITERATIVE-FFT}$ compute twiddle factors in each stage? Rewrite $\text{ITERATIVE-FFT}$ to compute twiddle factors only $2^{s - 1}$ times in stage $s$.
(Omit!)
[]
false
[]
30-30.3-4
30
30.3
30.3-4 $\star$
docs/Chap30/30.3.md
Suppose that the adders within the butterfly operations of the $\text{FFT}$ circuit sometimes fail in such a manner that they always produce a zero output, independent of their inputs. Suppose that exactly one adder has failed, but that you don't know which one. Describe how you can identify the failed adder by supplying inputs to the overall $\text{FFT}$ circuit and observing the outputs. How efficient is your method?
(Omit!)
[]
false
[]
30-30-1
30
30-1
30-1
docs/Chap30/Problems/30-1.md
**a.** Show how to multiply two linear polynomials $ax + b$ and $cx + d$ using only three multiplications. ($\textit{Hint:}$ One of the multiplications is $(a + b) \cdot (c + d)$.) **b.** Give two divide-and-conquer algorithms for multiplying two polynomials of degree-bound $n$ in $\Theta(n^{\lg 3})$ time. The first algorithm should divide the input polynomial coefficients into a high half and a low half, and the second algorithm should divide them according to whether their index is odd or even. **c.** Show how to multiply two $n$-bit integers in $O(n^{\lg 3})$ steps, where each step operates on at most a constant number of $1$-bit values.
(Omit!)
[]
false
[]
30-30-2
30
30-2
30-2
docs/Chap30/Problems/30-2.md
A **_Toeplitz matrix_** is an $n \times n$ matrix $A = (a_{ij})$ such that $a_{ij} = a_{i - 1, j - 1}$ for $i = 2, 3, \dots, n$ and $j = 2, 3, \dots, n$. **a.** Is the sum of two Toeplitz matrices necessarily Toeplitz? What about the product? **b.** Describe how to represent a Toeplitz matrix so that you can add two $n \times n$ Toeplitz matrices in $O(n)$ time. **c.** Give an $O(n\lg n)$-time algorithm for multiplying an $n \times n$ Toeplitz matrix by a vector of length $n$. Use your representation from part (b). **d.** Give an efficient algorithm for multiplying two $n \times n$ Toeplitz matrices. Analyze its running time.
(Omit!)
[]
false
[]
30-30-3
30
30-3
30-3
docs/Chap30/Problems/30-3.md
We can generalize the $1$-dimensional discrete Fourier transform defined by equation $\text{(30.8)}$ to $d$ dimensions. The input is a $d$-dimensional array $A = (a_{j_1, j_2, \dots, j_d})$ whose dimensions are $n_1, n_2, \dots, n_d$, where $n_1n_2 \cdots n_d = n$. We define the $d$-dimensional discrete Fourier transform by the equation $$y_{k_1, k_2, \dots, k_d} = \sum_{j_1 = 0}^{n_1 - 1} \sum_{j_2 = 0}^{n_2 - 1} \cdots \sum_{j_d = 0}^{n_d - 1} a_{j_1, j_2, \cdots, j_d} \omega_{n_1}^{j_1k_1}\omega_{n_2}^{j_2k_2} \cdots \omega_{n_d}^{j_dk_d}$$ for $0 \le k_1 < n_1, 0 \le k_2 < n_2, \dots, 0 \le k_d < n_d$. **a.** Show that we can compute a $d$-dimensional $\text{DFT}$ by computing $1$-dimensional $\text{DFT}$s on each dimension in turn. That is, we first compute $n / n_1$ separate $1$-dimensional $\text{DFT}$s along dimension $1$. Then, using the result of the $\text{DFT}$s along dimension $1$ as the input, we compute $n / n_2$ separate $1$-dimensional $\text{DFT}$s along dimension $2$. Using this result as the input, we compute $n / n_3$ separate $1$-dimensional $\text{DFT}$s along dimension $3$, and so on, through dimension $d$. **b.** Show that the ordering of dimensions does not matter, so that we can compute a $d$-dimensional $\text{DFT}$ by computing the $1$-dimensional $\text{DFT}$s in any order of the $d$ dimensions. **c.** Show that if we compute each $1$-dimensional $\text{DFT}$ by computing the fast Fourier transform, the total time to compute a $d$-dimensional $\text{DFT}$ is $O(n\lg n)$, independent of $d$.
(Omit!)
[]
false
[]
30-30-4
30
30-4
30-4
docs/Chap30/Problems/30-4.md
Given a polynomial $A(x)$ of degree-bound $n$, we define its $t$th derivative by $$ A^{(t)}(x) = \begin{cases} A(x) & \text{ if } t = 0, \\\\ \frac{d}{dx} A^{(t - 1)}(x) & \text{ if } 1 \le t \le n - 1, \\\\ 0 & \text{ if } t \ge n. \end{cases} $$ From the coefficient representation $(a_0, a_1, \dots, a_{n - 1})$ of $A(x)$ and a given point $x_0$, we wish to determine $A^{(t)}(x_0)$ for $t = 0, 1, \dots, n- 1$. **a.** Given coefficients $b_0, b_1, \dots, b_{n - 1}$ such that $$A(x) = \sum_{j = 0}^{n - 1} b_j(x - x_0)^j,$$ show how to compute $A^{(t)}(x_0)$ for $t = 0, 1, \dots, n - 1$, in $O(n)$ time. **b.** Explain how to find $b_0, b_1, \dots, b_{n - 1}$ in $O(n\lg n)$ time, given $A(x_0 + \omega_n^k)$ for $k = 0, 1, \dots, n - 1$. **c.** Prove that $$A(x_0 + \omega_n^k) = \sum_{r = 0}^{n - 1} \Bigg(\frac{\omega_n^{kr}}{r!} \sum_{j = 0}^{n - 1} f(j)g(r - j)\Bigg),$$ where $f(j) = a_j \cdot j!$ and $$ g(l) = \begin{cases} x_0^{-l} / (-l)! & \text{ if } -(n - 1) \le l \le 0, \\\\ 0 & \text{ if } 1 \le l \le n - 1. \end{cases} $$ **d.** Explain how to evaluate $A(x_0 + \omega_n^k)$ for $k = 0, 1, \dots, n - 1$ in $O(n\lg n)$ time. Conclude that we can evaluate all nontrivial derivatives of $A(x)$ at $x_0$ in $O(n\lg n)$ time.
(Omit!)
[]
false
[]
30-30-5
30
30-5
30-5
docs/Chap30/Problems/30-5.md
We have seen how to evaluate a polynomial of degree-bound $n$ at a single point in $O(n)$ time using Horner's rule. We have also discovered how to evaluate such a polynomial at all $n$ complex roots of unity in $O(n\lg n)$ time using the $\text{FFT}$. We shall now show how to evaluate a polynomial of degree-bound $n$ at $n$ arbitrary points in $O(n\lg^2 n)$ time. To do so, we shall assume that we can compute the polynomial remainder when one such polynomial is divided by another in $O(n\lg n)$ time, a result that we state without proof. For example, the remainder of $3x^3 + x^2 - 3x + 1$ when divided by $x^2 + x + 2$ is $$(3x^3 + x^2 - 3x + 1) \mod (x^2 + x + 2) = -7x + 5.$$ Given the coefficient representation of a polynomial $A(x) = \sum_{k = 0}^{n - 1} a_kx^k$ and $n$ points $x_0, x_1, \dots, x_{n - 1}$, we wish to compute the $n$ values $A(x_0), A(x_1), \dots, A(x_{n - 1})$. For $0 \le i \le j \le n - 1$, define the polynomials $P_{ij}(x) = \prod_{k = i}^j (x - x_k)$ and $Q_{ij}(x) = A(x) \mod P_{ij}(x)$. Note that $Q_{ij}(x)$ has degree at most $j - i$. **a.** Prove that $A(x) \mod (x - z) = A(z)$ for any point $z$. **b.** Prove that $Q_{kk}(x) = A(x_k)$ and that $Q_{0, n - 1}(x) = A(x)$. **c.** Prove that for $i \le k \le j$, we have $Q_{ik}(x) = Q_{ij}(x) \mod P_{ik}(x)$ and $Q_{kj}(x) = Q_{ij}(x) \mod P_{kj}(x)$. **d.** Give an $O(n\lg^2 n)$-time algorithm to evaluate $A(x_0), A(x_1), \dots, A(x_{n - 1})$.
(Omit!)
[]
false
[]
30-30-6
30
30-6
30-6
docs/Chap30/Problems/30-6.md
As defined, the discrete Fourier transform requires us to compute with complex numbers, which can result in a loss of precision due to round-off errors. For some problems, the answer is known to contain only integers, and by using a variant of the $\text{FFT}$ based on modular arithmetic, we can guarantee that the answer is calculated exactly. An example of such a problem is that of multiplying two polynomials with integer coefficients. Exercise 30.2-6 gives one approach, using a modulus of length $\Omega(n)$ bits to handle a $\text{DFT}$ on $n$ points. This problem gives another approach, which uses a modulus of the more reasonable length $O(\lg n)$; it requires that you understand the material of Chapter 31. Let $n$ be a power of $2$. **a.** Suppose that we search for the smallest $k$ such that $p = kn + 1$ is prime. Give a simple heuristic argument why we might expect $k$ to be approximately $\ln n$. (The value of $k$ might be much larger or smaller, but we can reasonably expect to examine $O(\lg n)$ candidate values of $k$ on average.) How does the expected length of $p$ compare to the length of $n$? Let $g$ be a generator of $\mathbb Z_p^\*$, and let $w = g^k \mod p$. **b.** Argue that the $\text{DFT}$ and the inverse $\text{DFT}$ are well-defined inverse operations modulo $p$, where $w$ is used as a principal $n$th root of unity. **c.** Show how to make the $\text{FFT}$ and its inverse work modulo $p$ in time $O(n\lg n)$, where operations on words of $O(\lg n)$ bits take unit time. Assume that the algorithm is given $p$ and $w$. **d.** Compute the $\text{DFT}$ modulo $p = 17$ of the vector $(0, 5, 3, 7, 7, 2, 1, 6)$. Note that $g = 3$ is a generator of $\mathbb Z_{17}^\*$.
(Omit!)
[]
false
[]
31-31.1-1
31
31.1
31.1-1
docs/Chap31/31.1.md
Prove that if $a > b > 0$ and $c = a + b$, then $c \mod a = b$.
Since $a > b > 0$, we have $0 \le b < a$, so writing $c = a + b = 1 \cdot a + b$ exhibits $b$ as the remainder of $c$ divided by $a$. Equivalently, $$ \begin{aligned} c \mod a & = (a + b) \mod a \\\\ & = b \mod a \\\\ & = b, \end{aligned} $$ where the last step uses $0 \le b < a$.
[]
false
[]
31-31.1-2
31
31.1
31.1-2
docs/Chap31/31.1.md
Prove that there are infinitely many primes.
Suppose, for contradiction, that there are only finitely many primes $p_1, p_2, \ldots, p_k$, and consider $N = p_1 p_2 \cdots p_k + 1$. For each $i$, $$N \mod p_i = (p_1 p_2 \cdots p_k + 1) \mod p_i = 1,$$ so no $p_i$ divides $N$. But $N > 1$ has some prime divisor, and that prime is not among $p_1, \ldots, p_k$, a contradiction. Hence there are infinitely many primes.
[]
false
[]
31-31.1-3
31
31.1
31.1-3
docs/Chap31/31.1.md
Prove that if $a \mid b$ and $b \mid c$, then $a \mid c$.
- If $a \mid b$, then $b = a \cdot k_1$. - If $b \mid c$, then $c = b \cdot k_2 = a \cdot (k_1 \cdot k_2) = a \cdot k_3$, then $a \mid c$.
[]
false
[]
31-31.1-4
31
31.1
31.1-4
docs/Chap31/31.1.md
Prove that if $p$ is prime and $0 < k < p$, then $\gcd(k, p) = 1$.
Let $d = \gcd(k, p)$. Since $d \mid p$ and $p$ is prime, either $d = 1$ or $d = p$. But $d \mid k$ and $0 < k < p$ imply $d \le k < p$, so $d \ne p$. Therefore $\gcd(k, p) = 1$.
[]
false
[]
31-31.1-5
31
31.1
31.1-5
docs/Chap31/31.1.md
Prove Corollary 31.5. For all positive integers $n$, $a$, and $b$, if $n \mid ab$ and $\gcd(a, n) = 1$, then $n \mid b$.
Since $\gcd(a, n) = 1$, there exist integers $x$ and $y$ such that $ax + ny = 1$. Multiplying both sides by $b$ gives $$b = abx + nby.$$ Since $n \mid ab$, $n$ divides the first term, and $n$ obviously divides the second term, so $n \mid b$.
[]
false
[]
31-31.1-6
31
31.1
31.1-6
docs/Chap31/31.1.md
Prove that if $p$ is prime and $0 < k < p$, then $p \mid \binom{p}{k}$. Conclude that for all integers $a$ and $b$ and all primes $p$, $(a + b)^p \equiv a^p + b^p \pmod p$.
For $0 < k < p$, $$\binom{p}{k} = \frac{p!}{k!(p - k)!},$$ and $p$ divides the numerator. Since $p$ is prime and every factor of $k!$ and $(p - k)!$ is smaller than $p$, the prime $p$ does not divide the denominator, so $p \mid \binom{p}{k}$. Therefore, by the binomial theorem, $$ \begin{array}{rlll} (a + b) ^ p & \equiv & a^p + \binom{p}{1} a^{p - 1}b^{1} + \cdots + \binom{p}{p - 1} a^{1}b^{p - 1} + b^p & \pmod p \\\\ & \equiv & a^p + 0 + \cdots + 0 + b^p & \pmod p \\\\ & \equiv & a^p + b^p & \pmod p. \end{array} $$
[]
false
[]
31-31.1-7
31
31.1
31.1-7
docs/Chap31/31.1.md
Prove that if $a$ and $b$ are any positive integers such that $a \mid b$, then $$(x \mod b) \mod a = x \mod a$$ for any $x$. Prove, under the same assumptions, that $x \equiv y \pmod b$ implies $x \equiv y \pmod a$ for any integers $x$ and $y$.
Write $x = kb + c$ with $0 \le c < b$. Then $$(x \mod b) \mod a = c \mod a.$$ On the other hand, since $a \mid b$ we have $a \mid kb$, so $$x \mod a = (kb + c) \mod a = c \mod a,$$ and the two are equal. For the second part, if $x \equiv y \pmod b$, then $b \mid (x - y)$; since $a \mid b$, transitivity of divisibility gives $a \mid (x - y)$, that is, $x \equiv y \pmod a$.
[]
false
[]
31-31.1-8
31
31.1
31.1-8
docs/Chap31/31.1.md
For any integer $k > 0$, an integer $n$ is a **_$k$th power_** if there exists an integer $a$ such that $a^k = n$. Furthermore, $n > 1$ is a **_nontrivial power_** if it is a $k$th power for some integer $k > 1$. Show how to determine whether a given $\beta$-bit integer $n$ is a nontrivial power in time polynomial in $\beta$.
Because $2^\beta > n$ and the smallest possible base is $2$, if $n = a^k$ is a nontrivial power then $2 \le k < \beta$, so we only need to test $O(\beta)$ values of $k$. For each such $k$, do a binary search for $a$ over $[2, \sqrt n]$, which takes $$O(\log \sqrt n) = O(\log \sqrt{2^\beta}) = O\Big(\frac{1}{2}\log 2^\beta\Big) = O(\beta)$$ iterations; each iteration computes $a^k$ by repeated squaring and compares it with $n$, which takes time polynomial in $\beta$. Thus, the total number of iterations is $$O(\beta) \times O(\beta) = O(\beta^2),$$ and the whole procedure runs in time polynomial in $\beta$.
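A small Python sketch of this test, treating the arithmetic as unit-cost (the function name and bounds are my own):

```python
def is_nontrivial_power(n):
    beta = n.bit_length()
    for k in range(2, beta + 1):
        lo, hi = 2, 1 << (beta // k + 1)   # a^k = n forces a <= 2^(beta/k + 1)
        while lo <= hi:                    # binary search for the base a
            mid = (lo + hi) // 2
            p = mid ** k
            if p == n:
                return True
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return False

print([m for m in range(2, 40) if is_nontrivial_power(m)])
# [4, 8, 9, 16, 25, 27, 32, 36]
```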
[]
false
[]
31-31.1-9
31
31.1
31.1-9
docs/Chap31/31.1.md
Prove equations $\text{(31.6)}$–$\text{(31.10)}$.
(Omit!)
[]
false
[]
31-31.1-10
31
31.1
31.1-10
docs/Chap31/31.1.md
Show that the gcd operator is associative. That is, prove that for all integers $a$, $b$, and $c$, $\gcd(a, \gcd(b, c)) = \gcd(\gcd(a, b), c)$.
_[The following proof is provided by my friend, Tony Xiao.]_ Let $d = \gcd(a, b, c)$, $a = dp$, $b = dq$ and $c = dr$. **_Claim_** $\gcd(a, \gcd(b, c)) = d.$ Let $e = \gcd(b, c)$, thus $$ \begin{aligned} b = es, \\\\ c = et. \end{aligned} $$ Since $d \mid b$ and $d \mid c$, thus $d \mid e$. Let $e = dm$, thus $$ \begin{aligned} b = (dm)s & = dq, \\\\ c = (dm)t & = dr. \end{aligned} $$ Suppose $k = \gcd(p, m)$, $$ \begin{aligned} & k \mid p, k \mid m, \\\\ \Rightarrow & dk \mid dp, dk \mid dm, \\\\ \Rightarrow & dk \mid dp, dk \mid (dm)s, dk \mid (dm)t, \\\\ \Rightarrow & dk \mid a, dk \mid b, dk \mid c. \end{aligned} $$ Since $d = \gcd(a, b, c)$, thus $k = 1$. $$ \begin{aligned} \gcd(a, \gcd(b, c)) & = \gcd(a, e) \\\\ & = \gcd(dp, dm) \\\\ & = d \cdot \gcd(p, m) \\\\ & = d \cdot k \\\\ & = d. \end{aligned} $$ By the claim, $$\gcd(a, \gcd(b, c)) = d = \gcd(\gcd(a, b), c).$$
[]
false
[]
31-31.1-11
31
31.1
31.1-11 $\star$
docs/Chap31/31.1.md
Prove Theorem 31.8.
(Omit!)
[]
false
[]
31-31.1-12
31
31.1
31.1-12
docs/Chap31/31.1.md
Give efficient algorithms for the operations of dividing a $\beta$-bit integer by a shorter integer and of taking the remainder of a $\beta$-bit integer when divided by a shorter integer. Your algorithms should run in time $\Theta(\beta^2)$.
Align the shorter divisor with the leading bits of the $\beta$-bit dividend by shifting it left, then repeatedly compare, subtract when possible (recording a $1$ bit of the quotient), and shift the divisor right by one. There are $O(\beta)$ iterations, each taking $O(\beta)$ time, for a total of $\Theta(\beta^2)$; the value left over at the end is the remainder.
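A small Python sketch of this shift-and-subtract division (the function name is my own):

```python
def divide(a, b):
    """Return (quotient, remainder) of a divided by b, using only shifts,
    comparisons and subtractions; Theta(beta^2) bit operations."""
    assert b > 0
    shift = a.bit_length() - b.bit_length()
    if shift < 0:
        return 0, a
    q = 0
    b <<= shift                     # align b under the leading bits of a
    for _ in range(shift + 1):      # O(beta) iterations, each O(beta)
        q <<= 1
        if a >= b:
            a -= b
            q |= 1
        b >>= 1
    return q, a

print(divide(1000, 7))   # (142, 6)
```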
[]
false
[]
31-31.1-13
31
31.1
31.1-13
docs/Chap31/31.1.md
Give an efficient algorithm to convert a given $\beta$-bit (binary) integer to a decimal representation. Argue that if multiplication or division of integers whose length is at most $\beta$ takes time $M(\beta)$, then we can convert binary to decimal in time $\Theta(M(\beta) \lg\beta)$.
(Omit!)
[]
false
[]
31-31.2-1
31
31.2
31.2-1
docs/Chap31/31.2.md
Prove that equations $\text{(31.11)}$ and $\text{(31.12)}$ imply equation $\text{(31.13)}$.
(Omit!)
[]
false
[]
31-31.2-2
31
31.2
31.2-2
docs/Chap31/31.2.md
Compute the values $(d, x, y)$ that the call $\text{EXTENDED-EUCLID}(899, 493)$ returns.
$(29, -6, 11)$.
[]
false
[]
31-31.2-3
31
31.2
31.2-3
docs/Chap31/31.2.md
Prove that for all integers $a$, $k$, and $n$, $\gcd(a, n) = \gcd(a + kn, n)$.
- $\gcd(a, n) \mid \gcd(a + kn, n)$. Let $d = \gcd(a, n)$, then $d \mid a$ and $d \mid n$. Since $$(a + kn) \mod d = a \mod d + k \cdot (n \mod d) = 0$$ and $d \mid n$, we have $$d \mid \gcd(a + kn, n)$$ and $$\gcd(a, n) \mid \gcd(a + kn, n).$$ - $\gcd(a + kn, n) \mid \gcd(a, n)$. Suppose $d = \gcd(a + kn, n)$, we have $d \mid n$ and $d \mid (a + kn)$. Since $$(a + kn) \mod d = a \mod d + k \cdot (n \mod d) = a \mod d = 0,$$ we have $d \mid a$. Since $d \mid a$ and $d \mid n$, we have $$d \mid \gcd(a, n)$$ and $$\gcd(a + kn, n) \mid \gcd(a, n).$$ Since $$\gcd(a, n) \mid \gcd(a + kn, n)$$ and $$\gcd(a + kn, n) \mid \gcd(a, n),$$ we have $$\gcd(a, n) = \gcd(a + kn, n).$$
[]
false
[]
31-31.2-4
31
31.2
31.2-4
docs/Chap31/31.2.md
Rewrite $\text{EUCLID}$ in an iterative form that uses only a constant amount of memory (that is, stores only a constant number of integer values).
```cpp EUCLID(a, b) while b != 0 t = a a = b b = t % b return a ```
[ { "lang": "cpp", "code": "EUCLID(a, b)\n while b != 0\n t = a\n a = b\n b = t % b\n return a" } ]
false
[]
31-31.2-5-1
31
31.2
31.2-5
docs/Chap31/31.2.md
If $a > b \ge 0$, show that the call EUCLID$(a, b)$ makes at most $1 + \log_\phi b$ recursive calls. Improve this bound to $1 + \log_\phi(b / \gcd(a, b))$.
If the call makes $k \ge 1$ recursive calls, then $$b \ge F_{k + 1} \approx \phi^{k + 1} / \sqrt{5},$$ so $$k + 1 < \log_\phi \sqrt{5} + \log_\phi b \approx 1.67 + \log_\phi b,$$ hence $$k < 0.67 + \log_\phi b < 1 + \log_\phi b.$$ For the improved bound: since $d \cdot a \mod d \cdot b = d \cdot (a \mod b)$, the call $\text{EUCLID}(d \cdot a, d \cdot b)$ makes the same number of recursive calls as $\text{EUCLID}(a, b)$. Therefore, letting $d = \gcd(a, b)$ and $b' = b / d$, applying the bound above to $\text{EUCLID}(a / d, b')$ gives $$k < 1 + \log_\phi(b') = 1 + \log_\phi(b / \gcd(a, b)).$$
[]
false
[]
31-31.2-5-2
31
31.2
31.2-5
docs/Chap31/31.2.md
What does $\text{EXTENDED-EUCLID}(F_{k + 1}, F_k)$ return? Prove your answer correct.
$\text{EXTENDED-EUCLID}(F_{k + 1}, F_k)$ returns $$(1, F_{k - 2}, -F_{k - 1}) \text{ if } k \text{ is odd}, \qquad (1, -F_{k - 2}, F_{k - 1}) \text{ if } k \text{ is even} \quad (k \ge 2).$$ Correctness: $\gcd(F_{k + 1}, F_k) = 1$ since consecutive Fibonacci numbers are relatively prime, and the identity $$F_{k + 1}F_{k - 2} - F_kF_{k - 1} = (-1)^{k + 1}$$ (easily proved by induction on $k$) shows that the returned coefficients $x$ and $y$ satisfy $F_{k + 1}x + F_ky = 1$; a straightforward induction on the recursion of $\text{EXTENDED-EUCLID}$ shows that these are exactly the coefficients it computes. For example, $\text{EXTENDED-EUCLID}(F_5, F_4) = \text{EXTENDED-EUCLID}(5, 3) = (1, -1, 2) = (1, -F_2, F_3)$.
[]
false
[]
31-31.2-7
31
31.2
31.2-7
docs/Chap31/31.2.md
Define the $\gcd$ function for more than two arguments by the recursive equation $\gcd(a_0, a_1, \cdots, a_n) = \gcd(a_0, \gcd(a_1, a_2, \cdots, a_n))$. Show that the $\gcd$ function returns the same answer independent of the order in which its arguments are specified. Also show how to find integers $x_0, x_1, \cdots, x_n$ such that $\gcd(a_0, a_1, \ldots, a_n) = a_0 x_0 + a_1 x_1 + \cdots + a_n x_n$. Show that the number of divisions performed by your algorithm is $O(n + \lg (max \\{a_0, a_1, \cdots, a_n \\}))$.
Suppose $$\gcd(a_0, \gcd(a_1, a_2, \cdots, a_n)) = a_0 \cdot x + \gcd(a_1, a_2, \cdots, a_n) \cdot y$$ and $$\gcd(a_1, \gcd(a_2, a_3, \cdots, a_n)) = a_1 \cdot x' + \gcd(a_2, a_3, \cdots, a_n) \cdot y',$$ then the coefficient of $a_1$ is $y \cdot x'$; in general, the coefficient of $a_i$ is its own $x$-coefficient times the product of the $y$-coefficients of all the enclosing calls, which is what the second loop of $\text{EXTENDED-EUCLID-MULTIPLE}$ accumulates. The answer is independent of the order of the arguments because $\gcd$ is commutative and associative (Exercise 31.1-10). ```cpp EXTENDED-EUCLID(a, b) if b == 0 return (a, 1, 0) (d, x, y) = EXTENDED-EUCLID(b, a % b) return (d, y, x - (a / b) * y) ``` ```cpp EXTENDED-EUCLID-MULTIPLE(a) if a.length == 1 return (a[0], 1) g = a[a.length - 1] xs = [1] * a.length ys = [0] * a.length for i = a.length - 2 downto 0 (g, xs[i], ys[i + 1]) = EXTENDED-EUCLID(a[i], g) m = 1 for i = 1 to a.length - 1 m *= ys[i] xs[i] *= m return (g, xs) ``` For the number of divisions: by Exercise 31.2-5, the call $\text{EXTENDED-EUCLID}(a_i, g_{i + 1})$, where $g_{i + 1} = \gcd(a_{i + 1}, \ldots, a_n)$ and $g_i = \gcd(a_i, g_{i + 1})$, performs $O(1 + \log_\phi(g_{i + 1} / g_i))$ divisions. Summing over $i$ telescopes to $O(n + \log_\phi(g_n / g_0)) = O(n + \lg(\max \\{a_0, a_1, \cdots, a_n\\}))$.
[ { "lang": "cpp", "code": "EXTENDED-EUCLID(a, b)\n if b == 0\n return (a, 1, 0)\n (d, x, y) = EXTENDED-EUCLID(b, a % b)\n return (d, y, x - (a / b) * y)" }, { "lang": "cpp", "code": "EXTENDED-EUCLID-MULTIPLE(a)\n if a.length == 1\n return (a[0], 1)\n g = a[a.length - 1]\n xs = [1] * a.length\n ys = [0] * a.length\n for i = a.length - 2 downto 0\n (g, xs[i], ys[i + 1]) = EXTENDED-EUCLID(a[i], g)\n m = 1\n for i = 1 to a.length\n m *= ys[i]\n xs[i] *= m\n return (g, xs)" } ]
false
[]
31-31.2-8
31
31.2
31.2-8
docs/Chap31/31.2.md
Define $\text{lcm}(a_1, a_2, \ldots, a_n)$ to be the **_least common multiple_** of the $n$ integers $a_1, a_2, \ldots, a_n$, that is, the smallest nonnegative integer that is a multiple of each $a_i$. Show how to compute $\text{lcm}(a_1, a_2, \ldots, a_n)$ efficiently using the (two-argument) $\gcd$ operation as a subroutine.
```cpp GCD(a, b) if b == 0 return a return GCD(b, a % b) ``` ```cpp LCM(a, b) return a / GCD(a, b) * b ``` ```cpp LCM-MULTIPLE(a) l = a[0] for i = 1 to a.length - 1 l = LCM(l, a[i]) return l ```
[ { "lang": "cpp", "code": "GCD(a, b)\n if b == 0\n return a\n return GCD(b, a % b)" }, { "lang": "cpp", "code": "LCM(a, b)\n return a / GCD(a, b) * b" }, { "lang": "cpp", "code": "LCM-MULTIPLE(a)\n l = a[0]\n for i = 1 to a.length\n l = LCM(l, a[i])\n return l" } ]
false
[]
31-31.2-9
31
31.2
31.2-9
docs/Chap31/31.2.md
Prove that $n_1$, $n_2$, $n_3$, and $n_4$ are pairwise relatively prime if and only if $\gcd(n_1n_2,n_3n_4) = \gcd(n_1n_3, n_2n_4) = 1.$ More generally, show that $n_1, n_2, \ldots, n_k$ are pairwise relatively prime if and only if a set of $\lceil \lg k \rceil$ pairs of numbers derived from the $n_i$ are relatively prime.
Suppose $\gcd(n_1n_2, n_3n_4) = 1$, so that $n_1n_2 x + n_3n_4 y = 1$ for some integers $x, y$. Writing this as $n_1(n_2 x) + n_3(n_4 y) = 1$ shows that $n_1$ and $n_3$ are relatively prime, and similarly $n_1$ and $n_4$, $n_2$ and $n_3$, and $n_2$ and $n_4$ are all relatively prime. Together with $\gcd(n_1n_3, n_2n_4) = 1$, which in the same way gives the pairs $(n_1, n_2)$, $(n_1, n_4)$, $(n_3, n_2)$, and $(n_3, n_4)$, all six pairs are relatively prime. Conversely, if the four numbers are pairwise relatively prime, then no prime can divide both $n_1n_2$ and $n_3n_4$ (it would have to divide one of $n_1, n_2$ and one of $n_3, n_4$), and likewise for $n_1n_3$ and $n_2n_4$, so both gcds equal $1$. In general: in round $b$ (for $b = 0, 1, \ldots, \lceil \lg k \rceil - 1$), divide the numbers into two sets according to the value of the $b$th bit of their index, compute the product of each set, and check that the two products are relatively prime. If the two products are relatively prime, then every element of one set is relatively prime to every element of the other. Since any two distinct indices differ in at least one of the $\lceil \lg k \rceil$ bit positions, every pair of elements is separated in some round, so the $\lceil \lg k \rceil$ pairs of products are relatively prime if and only if the $n_i$ are pairwise relatively prime.
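A small Python sketch of the bit-splitting test (names are my own):

```python
from math import gcd

def pairwise_coprime(nums):
    k = len(nums)
    rounds = max(1, (k - 1).bit_length())   # ceil(lg k) for k >= 2
    for b in range(rounds):
        prod0 = prod1 = 1
        for i, x in enumerate(nums):
            if (i >> b) & 1:                 # split by bit b of the index
                prod1 *= x
            else:
                prod0 *= x
        if gcd(prod0, prod1) != 1:
            return False
    return True

print(pairwise_coprime([3, 5, 7, 11]))   # True
print(pairwise_coprime([3, 5, 7, 9]))    # False (gcd(3, 9) = 3)
```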
[]
false
[]
31-31.3-1
31
31.3
31.3-1
docs/Chap31/31.3.md
Draw the group operation tables for the groups $(\mathbb Z_4, +_4)$ and $(\mathbb Z_5^\*, \cdot_5)$. Show that these groups are isomorphic by exhibiting a one-to-one correspondence $\alpha$ between their elements such that $a + b \equiv c \pmod 4$ if and only if $\alpha(a) \cdot \alpha(b) \equiv \alpha(c) \pmod 5$.
$\mathbb Z_4$:

| | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 0 | 1 | 2 | 3 |
| 1 | 1 | 2 | 3 | 0 |
| 2 | 2 | 3 | 0 | 1 |
| 3 | 3 | 0 | 1 | 2 |

$\mathbb Z_5^*$:

| | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 1 | 2 | 3 | 4 |
| 2 | 2 | 4 | 1 | 3 |
| 3 | 3 | 1 | 4 | 2 |
| 4 | 4 | 3 | 2 | 1 |

Define the isomorphism $\alpha : \mathbb Z_4 \to \mathbb Z_5^*$ by $\alpha(x) = 2^{x}$. Then $a + b \equiv c \pmod 4 \iff 2^{a} \cdot 2^{b} \equiv 2^{c} \pmod 5$.
[]
false
[]
31-31.3-2
31
31.3
31.3-2
docs/Chap31/31.3.md
List all subgroups of $\mathbb Z_9$ and of $\mathbb Z_{13}^\*$.
By Lagrange's Theorem, any subgroup of $\mathbb Z_9$ must have order $1$, $3$, or $9$. These are - $\\{ 0 \\} \cong \mathbb Z_1$ (the trivial subgroup) - $\\{ 0, 3, 6 \\} \cong \mathbb Z_3$ - $\mathbb Z_9$ itself (not isomorphic to $\mathbb Z_3^2$) Observe $\mathbb Z_{13}^\*$ consists of the units $\\{1, \dots, 12\\}$ and is generated by $2$, i.e. $\mathbb Z_{13}^\* = \langle 2 \rangle$. The subgroups are thus $\langle 2^n \rangle$ for $n \mid 12$ - $\langle 2 \rangle$ - $\langle 2^2 \rangle = \langle 4 \rangle$ - $\langle 2^3 \rangle = \langle 8 \rangle$ - $\langle 2^4 \rangle = \langle 3 \rangle$ - $\langle 2^6 \rangle = \langle -1 \rangle$ - $\langle 2^{12} \rangle = \langle 1 \rangle$ [Math SE source](https://math.stackexchange.com/a/1352349)
[]
false
[]
31-31.3-3
31
31.3
31.3-3
docs/Chap31/31.3.md
Prove Theorem 31.14.
A nonempty closed subset $S'$ of a finite group is a subgroup. - **Closure:** given. - **Identity:** suppose $a \in S'$; then every positive power $a^{(k)}$ is in $S'$ by closure. Since the group is finite, there exist $m > n \ge 1$ with $a^{(m)} = a^{(n)}$; cancelling $a^{(n)}$ (which is possible in the parent group) gives $a^{(m - n)} = 1$, and $a^{(m - n)} \in S'$, so $S'$ contains the identity. - **Associativity:** inherited from the parent group. - **Inverses:** with $a^{(k)} = 1$ as above, the inverse of $a$ is $a^{(k - 1)} \in S'$, since $aa^{(k - 1)} = a^{(k)} = 1$ (and if $k = 1$, then $a = 1$ is its own inverse).
[]
false
[]
31-31.3-4
31
31.3
31.3-4
docs/Chap31/31.3.md
Show that if $p$ is prime and $e$ is a positive integer, then $\phi(p^e) = p^{e - 1}(p - 1)$.
$\phi(p^e) = p^e \cdot \left ( 1 - \frac{1}{p} \right ) = p^{e - 1}(p - 1)$.
[]
false
[]
31-31.3-5
31
31.3
31.3-5
docs/Chap31/31.3.md
Show that for any integer $n > 1$ and for any $a \in \mathbb Z_n^\*$, the function $f_a : \mathbb Z_n^\* \rightarrow \mathbb Z_n^\*$ defined by $f_a(x) = ax \mod n$ is a permutation of $\mathbb Z_n^\*$.
To prove that $f_a$ is a permutation, we need to show that - for each $x \in \mathbb Z_n^\*$, $f_a(x) \in \mathbb Z_n^\*$, and - the values $f_a(x)$ are distinct for distinct $x$. Since $a \in \mathbb Z_n^\*$ and $x \in \mathbb Z_n^\*$, we have $f_a(x) = ax \mod n \in \mathbb Z_n^\*$, because $\mathbb Z_n^\*$ is closed under multiplication modulo $n$. Now suppose there were distinct $x, y \in \mathbb Z_n^\*$ with $f_a(x) = f_a(y)$, that is, $ax \equiv ay \pmod n$. Since $\gcd(a, n) = 1$, $a$ has a multiplicative inverse modulo $n$; multiplying both sides by $a^{-1}$ gives $x \equiv y \pmod n$, contradicting the assumption that $x$ and $y$ are distinct elements of $\mathbb Z_n^\*$. Hence the values of $f_a$ are distinct, and $f_a$ is a permutation of $\mathbb Z_n^\*$.
[]
false
[]
31-31.4-1
31
31.4
31.4-1
docs/Chap31/31.4.md
Find all solutions to the equation $35x \equiv 10 \pmod{50}$.
$\\{6, 16, 26, 36, 46\\}$.
[]
false
[]
31-31.4-2
31
31.4
31.4-2
docs/Chap31/31.4.md
Prove that the equation $ax \equiv ay \pmod n$ implies $x \equiv y \pmod n$ whenever $\gcd(a, n) = 1$. Show that the condition $\gcd(a, n) = 1$ is necessary by supplying a counterexample with $\gcd(a, n) > 1$.
If $ax \equiv ay \pmod n$, then $n \mid a(x - y)$. Since $\gcd(a, n) = 1$, Corollary 31.5 gives $n \mid (x - y)$, that is, $x \equiv y \pmod n$. The condition $\gcd(a, n) = 1$ is necessary: for a counterexample take $a = 2$, $n = 4$, $x = 1$, $y = 3$; then $2 \cdot 1 \equiv 2 \cdot 3 \pmod 4$ but $1 \not\equiv 3 \pmod 4$.
[]
false
[]
31-31.4-3
31
31.4
31.4-3
docs/Chap31/31.4.md
Consider the following change to line 3 of the procedure $\text{MODULAR-LINEAR-EQUATION-SOLVER}$: ```cpp 3 x0 = x'(b / d) mod (n / d) ``` Will this work? Explain why or why not.
Yes, it will work. The original $x_0 = x'(b / d) \mod n$ is a solution, and replacing it by $x'(b / d) \mod (n / d)$ changes it by a multiple of $n / d$; since $d \mid a$, we have $a \cdot k(n / d) = (ak / d) \cdot n \equiv 0 \pmod n$, so the new value is still a solution, and it is the smallest nonnegative one. Moreover, the $d$ values printed by the loop remain distinct and less than $n$: with $x_0 < n / d$, the largest value produced is $x_0 + (d - 1)(n / d) < d \cdot (n / d) = n$.
[ { "lang": "cpp", "code": "> 3 x0 = x'(b / d) mod (n / d)\n>" } ]
false
[]
31-31.4-4
31
31.4
31.4-4 $\star$
docs/Chap31/31.4.md
Let $p$ be prime and $f(x) \equiv f_0 + f_1 x + \cdots + f_tx^t \pmod p$ be a polynomial of degree $t$, with coefficients $f_i$ drawn from $\mathbb Z_p$. We say that $a \in \mathbb Z_p$ is a **_zero_** of $f$ if $f(a) \equiv 0 \pmod p$. Prove that if $a$ is a zero of $f$, then $f(x) \equiv (x - a) g(x) \pmod p$ for some polynomial $g(x)$ of degree $t - 1$. Prove by induction on $t$ that if $p$ is prime, then a polynomial $f(x)$ of degree $t$ can have at most $t$ distinct zeros modulo $p$.
(Omit!)
[]
false
[]
31-31.5-1
31
31.5
31.5-1
docs/Chap31/31.5.md
Find all solutions to the equations $x \equiv 4 \pmod 5$ and $x \equiv 5 \pmod{11}$.
$$ \begin{aligned} m_1 & = 11, m_2 = 5. \\\\ m_1^{-1} & = 1, m_2^{-1} = 9. \\\\ c_1 & = 11, c_2 = 45. \\\\ a & = (c_1 \cdot a_1 + c_2 \cdot a_2) \mod (n_1 \cdot n_2) \\\\ & = (11 \cdot 4 + 45 \cdot 5) \mod 55 = 49. \end{aligned} $$ Thus all solutions are $x \equiv 49 \pmod{55}$, i.e., $x = 49 + 55k$ for $k \in \mathbb Z$.
[]
false
[]
31-31.5-2
31
31.5
31.5-2
docs/Chap31/31.5.md
Find all integers $x$ that leave remainders $1$, $2$, $3$ when divided by $9$, $8$, $7$ respectively.
$10 + 504i$, $i \in \mathbb Z$.
[]
false
[]
31-31.5-3
31
31.5
31.5-3
docs/Chap31/31.5.md
Argue that, under the definitions of Theorem 31.27, if $\gcd(a, n) = 1$, then $$(a^{-1} \mod n) \leftrightarrow ((a_1^{-1} \mod n_1), (a_2^{-1} \mod n_2), \ldots, (a_k^{-1} \mod n_k)).$$
Since $\gcd(a, n) = 1$ implies $\gcd(a, n_i) = 1$ for each $i$, the inverse $a^{-1} \mod n$ and each $a_i^{-1} \mod n_i$ exist. Because the correspondence of Theorem 31.27 preserves multiplication, $$a \cdot (a^{-1} \mod n) \leftrightarrow (a_1 (a_1^{-1} \mod n_1), a_2(a_2^{-1} \mod n_2), \ldots, a_k(a_k^{-1} \mod n_k)),$$ and the left-hand side is $1$, which corresponds to $(1, 1, \ldots, 1)$. By the uniqueness of the correspondence, $(a^{-1} \mod n) \leftrightarrow ((a_1^{-1} \mod n_1), (a_2^{-1} \mod n_2), \ldots, (a_k^{-1} \mod n_k))$.
[]
false
[]
31-31.5-4
31
31.5
31.5-4
docs/Chap31/31.5.md
Under the definitions of Theorem 31.27, prove that for any polynomial $f$, the number of roots of the equation $f(x) \equiv 0 (\mod n)$ equals the product of the number of roots of each of the equations $$f(x) \equiv 0 \pmod{n_1}, f(x) \equiv 0 \pmod{n_2}, \ldots, f(x) \equiv 0 \pmod{n_k}.$$
Based on $\text{31.28}$–$\text{31.30}$.
[]
false
[]
31-31.6-1
31
31.6
31.6-1
docs/Chap31/31.6.md
Draw a table showing the order of every element in $\mathbb Z_{11}^*$. Pick the smallest primitive root $g$ and compute a table giving $\text{ind}\_{11, g}(x)$ for all $x \in \mathbb Z_{11}^\*$.
The smallest primitive root is $g = 2$, whose successive powers $2^0, 2^1, \ldots, 2^9$ modulo $11$ are $$\\{1, 2, 4, 8, 5, 10, 9, 7, 3, 6\\},$$ so $\text{ind}\_{11, 2}(x)$ is the position (counting from $0$) of $x$ in this list. The order of $x = 2^k$ is $10 / \gcd(k, 10)$; for example, $\text{ord}(1) = 1$, $\text{ord}(10) = 2$, the elements $3, 4, 5, 9$ have order $5$, and the elements $2, 6, 7, 8$ have order $10$.
[]
false
[]
31-31.6-2
31
31.6
31.6-2
docs/Chap31/31.6.md
Give a modular exponentiation algorithm that examines the bits of $b$ from right to left instead of left to right.
```cpp MODULAR-EXPONENTIATION(a, b, n) i = 0 d = 1 while (1 << i) ≤ b if (b & (1 << i)) > 0 d = (d * a) % n a = (a * a) % n i = i + 1 return d ```
[ { "lang": "cpp", "code": "MODULAR-EXPONENTIATION(a, b, n)\n i = 0\n d = 1\n while (1 << i) ≤ b\n if (b & (1 << i)) > 0\n d = (d * a) % n\n a = (a * a) % n\n i = i + 1\n return d" } ]
false
[]
31-31.6-3
31
31.6
31.6-3
docs/Chap31/31.6.md
Assuming that you know $\phi(n)$, explain how to compute $a^{-1} \mod n$ for any $a \in \mathbb Z_n^\*$ using the procedure $\text{MODULAR-EXPONENTIATION}$.
$$ \begin{array}{rlll} a^{\phi(n)} & \equiv & 1 & \pmod n, \\\\ a\cdot a^{\phi(n) - 1} & \equiv & 1 & \pmod n, \\\\ a^{-1} & \equiv & a^{\phi(n)-1} & \pmod n. \end{array} $$
[]
false
[]
31-31.7-1
31
31.7
31.7-1
docs/Chap31/31.7.md
Consider an RSA key set with $p = 11$, $q = 29$, $n = 319$, and $e = 3$. What value of $d$ should be used in the secret key? What is the encryption of the message $M = 100$?
$\phi(n) = (p - 1) \cdot (q - 1) = 280$. $d = e^{-1} \mod \phi(n) = 187$. $P(M) = M^e \mod n = 254$. $S(C) = C^d \mod n = 254^{187} \mod n = 100$.
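A quick check of these numbers in Python (using `pow` with exponent `-1` for the modular inverse, which requires Python 3.8+):

```python
p, q, e = 11, 29, 3
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)       # modular inverse of e: 187
C = pow(100, e, n)        # encrypt M = 100: 254
M = pow(C, d, n)          # decrypt: back to 100
print(d, C, M)            # 187 254 100
```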
[]
false
[]
31-31.7-2
31
31.7
31.7-2
docs/Chap31/31.7.md
Prove that if Alice's public exponent $e$ is $3$ and an adversary obtains Alice's secret exponent $d$, where $0 < d < \phi(n)$, then the adversary can factor Alice's modulus $n$ in time polynomial in the number of bits in $n$. (Although you are not asked to prove it, you may be interested to know that this result remains true even if the condition $e = 3$ is removed. See Miller [255].)
Since $ed \equiv 1 \pmod{\phi(n)}$ and $0 < d < \phi(n)$, we have $ed - 1 = 3d - 1 = k\phi(n)$ for some integer $k$ with $0 < k\phi(n) < 3\phi(n)$, so $k \in \\{1, 2\\}$. For each candidate $k$, compute $\phi(n) = (3d - 1) / k$ (discarding the candidate if the division is not exact). Since $\phi(n) = n - (p + q) + 1$, we then know $p + q = n - \phi(n) + 1$ as well as $pq = n$, so $p$ and $q$ are the roots of $$x^2 - (n - \phi(n) + 1)x + n = 0,$$ which the quadratic formula yields in time polynomial in the number of bits of $n$.
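A small Python sketch of this procedure (the helper name is my own):

```python
from math import isqrt

def factor_from_d(n, d, e=3):
    """Recover (p, q) from the RSA modulus n and exponents e, d."""
    for k in (1, 2):
        if (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = n - phi + 1                  # candidate value of p + q
        disc = s * s - 4 * n
        if disc < 0:
            continue
        r = isqrt(disc)
        if r * r == disc and (s + r) % 2 == 0:
            p, q = (s + r) // 2, (s - r) // 2
            if q > 1 and p * q == n:
                return p, q
    return None

print(factor_from_d(319, 187))   # (29, 11), the key from Exercise 31.7-1
```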
[]
false
[]
31-31.7-3
31
31.7
31.7-3 $\star$
docs/Chap31/31.7.md
Prove that RSA is multiplicative in the sense that $P_A(M_1) P_A(M_2) \equiv P_A(M_1 M_2) \pmod n$. Use this fact to prove that if an adversary had a procedure that could efficiently decrypt $1$ percent of messages from $\mathbb Z_n$ encrypted with $P_A$, then he could employ a probabilistic algorithm to decrypt every message encrypted with $P_A$ with high probability.
Multiplicative: $P_A(M_1)P_A(M_2) = M_1^e M_2^e = (M_1M_2)^e \equiv P_A(M_1M_2) \pmod n$. Decryption: given a ciphertext $C = P_A(M)$, repeatedly pick a random $r \in \mathbb Z_n^\*$ and ask the procedure to decrypt $P_A(r) \cdot C \mod n$, which by multiplicativity equals $P_A(rM)$. Since $r$ is uniform over $\mathbb Z_n^\*$, so is $rM$, so each attempt succeeds with probability about $1/100$; after an expected $100$ attempts (and with high probability after a few hundred) the procedure returns $rM$, and we recover $M = r^{-1} \cdot (rM) \mod n$, where $r^{-1}$ is computed with the extended Euclidean algorithm.
[]
false
[]
31-31.8-1
31
31.8
31.8-1
docs/Chap31/31.8.md
Prove that if an odd integer $n > 1$ is not a prime or a prime power, then there exists a nontrivial square root of $1$ modulo $n$.
(Omit!)
[]
false
[]
31-31.8-2
31
31.8
31.8-2 $\star$
docs/Chap31/31.8.md
It is possible to strengthen Euler's theorem slightly to the form $a^{\lambda(n)} \equiv 1 \pmod n$ for all $a \in \mathbb Z_n^\*$, where $n = p_1^{e_1} \cdots p_r^{e_r}$ and $\lambda(n)$ is defined by $$\lambda(n) = \text{lcm}(\phi(p_1^{e_1}), \ldots, \phi(p_r^{e_r})). \tag{31.42}$$ Prove that $\lambda(n) \mid \phi(n)$. A composite number $n$ is a Carmichael number if $\lambda(n) \mid n - 1$. The smallest Carmichael number is $561 = 3 \cdot 11 \cdot 17$; here, $\lambda(n) = \text{lcm}(2, 10, 16) = 80$, which divides $560$. Prove that Carmichael numbers must be both "square-free" (not divisible by the square of any prime) and the product of at least three primes. (For this reason, they are not very common.)
1. Prove that $\lambda(n) \mid \phi(n)$. We have $$ \begin{aligned} n & = p_1^{e_1} \cdots p_r^{e_r}, \\\\ \phi(n) & = \phi(p_1^{e_1}) \times \dots \times \phi(p_r^{e_r}). \end{aligned} $$ The product $\phi(p_1^{e_1}) \times \dots \times \phi(p_r^{e_r})$ is a common multiple of $\phi(p_1^{e_1}), \ldots, \phi(p_r^{e_r})$, and the least common multiple divides every common multiple, so $$\lambda(n) = \text{lcm}(\phi(p_1^{e_1}), \ldots, \phi(p_r^{e_r})) \mid \phi(p_1^{e_1}) \times \dots \times \phi(p_r^{e_r}) = \phi(n).$$ 2. Prove that Carmichael numbers must be "square-free" (not divisible by the square of any prime). Assume $n = p^\alpha m$ is a Carmichael number, where $\alpha \ge 2$ and $p \nmid m$. By the Chinese Remainder Theorem, the system of congruences $$ \begin{aligned} x & \equiv 1 + p \pmod {p^\alpha}, \\\\ x & \equiv 1 \pmod m \end{aligned} $$ has a solution $a$. Note that $\gcd(a, n) = 1$. Since $n$ is a Carmichael number, $\lambda(n) \mid n - 1$, so $a^{n - 1} \equiv 1 \pmod n$. In particular, $a^{n - 1} \equiv 1 \pmod {p^\alpha}$, and since $\alpha \ge 2$ this also holds modulo $p^2$; therefore, $a^n \equiv a \pmod {p^2}$. Because $a \equiv 1 + p \pmod {p^2}$, this gives $(1 + p)^n \equiv 1 + p \pmod {p^2}$. Now expand $(1 + p)^n$ modulo $p^2$ using the binomial theorem: the first two terms are $1$ and $np$, the remaining terms are divisible by $p^2$, and $np \equiv 0 \pmod {p^2}$ because $p \mid n$. Hence $(1 + p)^n \equiv 1 \pmod {p^2}$, so $1 \equiv 1 + p \pmod {p^2}$. This is impossible. [Stack Exchange Reference](https://math.stackexchange.com/questions/1764812/carmichael-number-square-free) 3. Prove that Carmichael numbers must be the product of at least three primes. A Carmichael number is composite, and by part 2 it is square-free, so if it had fewer than three prime factors it would be $n = pq$ with $p$ and $q$ distinct primes, $p < q$. Then we have $$ \begin{aligned} & q \equiv 1 \pmod{q - 1} \\\\ \Rightarrow & n \equiv pq \equiv p \pmod{q - 1} \\\\ \Rightarrow & n - 1 \equiv p - 1 \pmod{q - 1}. \end{aligned} $$ Here $0 < p - 1 < q - 1$, so $n - 1$ is not divisible by $q - 1$. But $(q - 1) = \phi(q)$ divides $\lambda(n)$, and $\lambda(n) \mid n - 1$ for a Carmichael number, which would force $(q - 1) \mid (n - 1)$, a contradiction. [Stack Exchange Reference](https://math.stackexchange.com/questions/432162/carmichael-proof-of-at-least-3-factors)
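A quick Python check of the stated example $561 = 3 \cdot 11 \cdot 17$ (not part of the original solution):

```python
from math import gcd

n = 561
# Carmichael property: a^(n-1) = 1 (mod n) for every a coprime to n.
print(all(pow(a, n - 1, n) == 1 for a in range(1, n) if gcd(a, n) == 1))  # True
# lambda(561) = lcm(2, 10, 16) = 80, which indeed divides 560.
```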
[]
false
[]
31-31.8-3
31
31.8
31.8-3
docs/Chap31/31.8.md
Prove that if $x$ is a nontrivial square root of $1$, modulo $n$, then $\gcd(x - 1, n)$ and $\gcd(x + 1, n)$ are both nontrivial divisors of $n$.
$$ \begin{array}{rlll} x^2 & \equiv & 1 & \pmod n, \\\\ x^2 - 1 & \equiv & 0 & \pmod n, \\\\ (x + 1)(x - 1) & \equiv & 0 & \pmod n. \end{array} $$ So $n \mid (x + 1)(x - 1)$. If $\gcd(x - 1, n) = 1$, then by Corollary 31.5 we get $n \mid (x + 1)$, so $x \equiv -1 \pmod n$, contradicting the fact that $x$ is a nontrivial square root; and if $\gcd(x - 1, n) = n$, then $n \mid (x - 1)$ and $x \equiv 1 \pmod n$, again a contradiction. Hence $\gcd(x - 1, n)$ is a nontrivial divisor of $n$, and the symmetric argument shows the same for $\gcd(x + 1, n)$.
[]
false
[]
31-31.9-1
31
31.9
31.9-1
docs/Chap31/31.9.md
Referring to the execution history shown in Figure 31.7(a), when does \text{POLLARDRHO} print the factor $73$ of $1387$?
When $x = 84$ and $y = 814$: at that point $\gcd(y - x, n) = \gcd(814 - 84, 1387) = \gcd(730, 1387) = 73$, so $\text{POLLARD-RHO}$ prints the factor $73$.
[]
false
[]
31-31.9-2
31
31.9
31.9-2
docs/Chap31/31.9.md
Suppose that we are given a function $f : \mathbb Z_n \rightarrow \mathbb Z_n$ and an initial value $x_0 \in \mathbb Z_n$. Define $x_i = f(x_{i - 1})$ for $i = 1, 2, \ldots$. Let $t$ and $u > 0$ be the smallest values such that $x_{t + i} = x_{t + u + i}$ for $i = 0, 1, \ldots$. In the terminology of Pollard's rho algorithm, $t$ is the length of the tail and $u$ is the length of the cycle of the rho. Give an efficient algorithm to determine $t$ and $u$ exactly, and analyze its running time.
(Omit!)
[]
false
[]
31-31.9-3
31
31.9
31.9-3
docs/Chap31/31.9.md
How many steps would you expect $\text{POLLARD-RHO}$ to require to discover a factor of the form $p^e$, where $p$ is prime and $e > 1$?
$\Theta(\sqrt p)$.
[]
false
[]
31-31.9-4
31
31.9
31.9-4 $\star$
docs/Chap31/31.9.md
One disadvantage of $\text{POLLARD-RHO}$ as written is that it requires one gcd computation for each step of the recurrence. Instead, we could batch the gcd computations by accumulating the product of several $x_i$ values in a row and then using this product instead of $x_i$ in the gcd computation. Describe carefully how you would implement this idea, why it works, and what batch size you would pick as the most effective when working on a $\beta$-bit number $n$.
(Omit!)
[]
false
[]
31-31-1
31
31-1
31-1
docs/Chap31/Problems/31-1.md
Most computers can perform the operations of subtraction, testing the parity (odd or even) of a binary integer, and halving more quickly than computing remainders. This problem investigates the **_binary gcd algorithm_**, which avoids the remainder computations used in Euclid's algorithm. **a.** Prove that if $a$ and $b$ are both even, then $\gcd(a, b) = 2 \cdot \gcd(a / 2, b / 2)$. **b.** Prove that if $a$ is odd and $b$ is even, then $\gcd(a, b) = \gcd(a, b / 2)$. **c.** Prove that if $a$ and $b$ are both odd, then $\gcd(a, b) = \gcd((a - b) / 2, b)$. **d.** Design an efficient binary gcd algorithm for input integers $a$ and $b$, where $a \ge b$, that runs in $O(\lg a)$ time. Assume that each subtraction, parity test, and halving takes unit time.
(Omit!) **d.** ```cpp BINARY-GCD(a, b) if a < b return BINARY-GCD(b, a) if b == 0 return a if (a & 1 == 1) and (b & 1 == 1) return BINARY-GCD((a - b) >> 1, b) if (a & 1 == 0) and (b & 1 == 0) return BINARY-GCD(a >> 1, b >> 1) << 1 if a & 1 == 1 return BINARY-GCD(a, b >> 1) return BINARY-GCD(a >> 1, b) ```
[ { "lang": "cpp", "code": "BINARY-GCD(a, b)\n if a < b\n return BINARY-GCD(b, a)\n if b == 0\n return a\n if (a & 1 == 1) and (b & 1 == 1)\n return BINARY-GCD((a - b) >> 1, b)\n if (a & 1 == 0) and (b & 1 == 0)\n return BINARY-GCD(a >> 1, b >> 1) << 1\n if a & 1 == 1\n return BINARY-GCD(a, b >> 1)\n return BINARY-GCD(a >> 1, b)" } ]
false
[]
31-31-2
31
31-2
31-2
docs/Chap31/Problems/31-2.md
**a.** Consider the ordinary "paper and pencil" algorithm for long division: dividing $a$ by $b$, which yields a quotient $q$ and remainder $r$. Show that this method requires $O((1 + \lg q) \lg b)$ bit operations. **b.** Define $\mu(a, b) = (1 + \lg a)(1 + \lg b)$. Show that the number of bit operations performed by $\text{EUCLID}$ in reducing the problem of computing $\gcd(a, b)$ to that of computing $\gcd(b, a \mod b)$ is at most $c(\mu(a, b) - \mu(b, a \mod b))$ for some sufficiently large constant $c > 0$. **c.** Show that $\text{EUCLID}(a, b)$ requires $O(\mu(a, b))$ bit operations in general and $O(\beta^2)$ bit operations when applied to two $\beta$-bit inputs.
**a.** - Number of comparisons and subtractions: $\lceil \lg a \rceil - \lceil \lg b \rceil + 1 = \lceil \lg q \rceil$. - Length of subtraction: $\lceil \lg b \rceil$. - Total: $O((1 + \lg q) \lg b)$. **b.** $$ \begin{array}{rlll} & \mu(a, b) - \mu(b, a \mod b) \\\\ = & \mu(a, b) - \mu(b, r) \\\\ = & (1 + \lg a)(1 + \lg b) - (1 + \lg b)(1 + \lg r) \\\\ = & (1 + \lg b)(\lg a - \lg r) & (\lg r \le \lg b) \\\\ \ge & (1 + \lg b)(\lg a - \lg b) \\\\ = & (1 + \lg b)(\lg q + 1) \\\\ \ge & (1 + \lg q) \lg b \end{array} $$ **c.** $\mu(a, b) = (1 + \lg a)(1 + \lg b) \approx \beta^2$
[]
false
[]
31-31-3
31
31-3
31-3
docs/Chap31/Problems/31-3.md
This problem compares the efficiency of three methods for computing the $n$th Fibonacci number $F_n$, given $n$. Assume that the cost of adding, subtracting, or multiplying two numbers is $O(1)$, independent of the size of the numbers. **a.** Show that the running time of the straightforward recursive method for computing $F_n$ based on recurrence $\text{(3.22)}$ is exponential in $n$. (See, for example, the FIB procedure on page 775.) **b.** Show how to compute $F_n$ in $O(n)$ time using memoization. **c.** Show how to compute $F_n$ in $O(\lg n)$ time using only integer addition and multiplication. ($\textit{Hint:}$ Consider the matrix $$ \begin{pmatrix} 0 & 1 \\\\ 1 & 1 \end{pmatrix} $$ and its powers.) **d.** Assume now that adding two $\beta$-bit numbers takes $\Theta(\beta)$ time and that multiplying two $\beta$-bit numbers takes $\Theta(\beta^2)$ time. What is the running time of these three methods under this more reasonable cost measure for the elementary arithmetic operations?
**a.** In order to solve $\text{FIB}(n)$, we need to compute $\text{FIB}(n - 1)$ and $\text{FIB}(n - 2)$. Therefore we have the recurrence $$T(n) = T(n - 1) + T(n - 2) + \Theta(1).$$ We can get an upper bound of $O(2^n)$, but this is not tight. The Fibonacci recurrence is defined as $$F(n) = F(n - 1) + F(n - 2).$$ The characteristic equation for this recurrence is $$ \begin{aligned} x^2 & = x + 1 \\\\ x^2 - x - 1 & = 0, \end{aligned} $$ whose roots, by the quadratic formula, are $x = \frac{1 \pm \sqrt 5}{2}$. The general solution of such a linear recurrence is $$F(n) = c_1\alpha_1^n + c_2\alpha_2^n = c_1\bigg(\frac{1 + \sqrt 5}{2}\bigg)^n + c_2\bigg(\frac{1 - \sqrt 5}{2}\bigg)^n,$$ where $\alpha_1$ and $\alpha_2$ are the roots of the characteristic equation and $c_1, c_2$ are constants fixed by the initial values. Since $T(n)$ satisfies essentially the same recurrence, it grows at the same rate. Hence, $$T(n) = \Theta\Bigg(\bigg(\frac{1 + \sqrt 5}{2}\bigg)^n\Bigg) \approx O(1.618^n),$$ which is exponential in $n$. **b.** This is the same as [15.1-5](../../../Chap15/15.1/#151-5). **c.** Assume that all integer multiplications and additions can be done in $O(1)$. First, we want to show that $$ \begin{pmatrix} 0 & 1 \\\\ 1 & 1 \end{pmatrix}^k = \begin{pmatrix} F_{k - 1} & F_k \\\\ F_k & F_{k + 1} \end{pmatrix} . $$ By induction, $$ \begin{aligned} \begin{pmatrix} 0 & 1 \\\\ 1 & 1 \end{pmatrix}^{k + 1} & = \begin{pmatrix} 0 & 1 \\\\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\\\ 1 & 1 \end{pmatrix}^k \\\\ & = \begin{pmatrix} 0 & 1 \\\\ 1 & 1 \end{pmatrix} \begin{pmatrix} F_{k - 1} & F_k \\\\ F_k & F_{k + 1} \end{pmatrix} \\\\ & = \begin{pmatrix} F_k & F_{k + 1} \\\\ F_{k - 1} + F_k & F_k + F_{k + 1} \end{pmatrix} \\\\ & = \begin{pmatrix} F_k & F_{k + 1} \\\\ F_{k + 1} & F_{k + 2} \end{pmatrix} . \end{aligned} $$ We show that we can compute the given matrix to the power $n - 1$ in time $O(\lg n)$; the bottom right entry is then $F_n$. We should note that with $8$ multiplications and $4$ additions we can multiply any two $2 \times 2$ matrices, so each matrix multiplication takes constant time, and we only need to bound the number of multiplications. It takes $O(\lg n)$ time to run $\text{MATRIX-POW}(A, n - 1)$ because we halve the value of $n$ in each step and perform a constant amount of work per step. The recurrence is $$T(n) = T(n / 2) + \Theta(1).$$ ```cpp MATRIX-POW(A, n) if n == 1 return A if n % 2 == 1 return A * MATRIX-POW(A^2, (n - 1) / 2) return MATRIX-POW(A^2, n / 2) ``` **d.** Now the cost of an operation depends on the size of its operands, and $F_i$ has $\Theta(i)$ bits. - For part (a), each addition operates on numbers of $O(n)$ bits, so the recurrence becomes $$T(n) = T(n - 1) + T(n - 2) + \Theta(n),$$ whose solution is still $\Theta\big(\big(\frac{1 + \sqrt 5}{2}\big)^n\big)$; the running time remains exponential. - For part (b), computing $F_i$ from $F_{i - 1}$ and $F_{i - 2}$ is an addition of $\Theta(i)$-bit numbers and takes $\Theta(i)$ time, giving a total of $$\sum_{i = 2}^n \Theta(i) = \Theta(n^2).$$ - For part (c), the repeated squaring performs $\Theta(\lg n)$ matrix multiplications, but the entries grow: the multiplication that produces the matrix for exponent $m$ operates on $\Theta(m)$-bit entries and costs $\Theta(m^2)$. Summing over the steps, $$\sum_{j \ge 0} \Theta\big((n / 2^j)^2\big) = \Theta(n^2),$$ so this method also takes $\Theta(n^2)$ time under this cost measure (the final multiplication dominates).
[ { "lang": "cpp", "code": "MATRIX-POW(A, n)\n if n % 2 == 1\n return A * MATRIX-POW(A^2, (n - 1) / 2)\n return MATRIX-POW(A^2, n / 2)" } ]
false
[]
31-31-4
31
31-4
31-4
docs/Chap31/Problems/31-4.md
Let $p$ be an odd prime. A number $a \in \mathbb Z_p^\*$ is a **_quadratic residue_** if the equation $x^2 = a \pmod p$ has a solution for the unknown $x$. **a.** Show that there are exactly $(p - 1) / 2$ quadratic residues, modulo $p$. **b.** If $p$ is prime, we define the **_Legendre symbol_** $(\frac{a}{p})$, for $a \in \mathbb Z_p^\*$, to be $1$ if $a$ is a quadratic residue modulo $p$ and $-1$ otherwise. Prove that if $a \in \mathbb Z_p^\*$, then $$\left(\frac{a}{p}\right) \equiv a^{(p - 1) / 2} \pmod p.$$ Give an efficient algorithm that determines whether a given number $a$ is a quadratic residue modulo $p$. Analyze the efficiency of your algorithm. **c.** Prove that if $p$ is a prime of the form $4k + 3$ and $a$ is a quadratic residue in $\mathbb Z_p^\*$, then $a^{k + 1} \mod p$ is a square root of $a$, modulo $p$. How much time is required to find the square root of a quadratic residue $a$ modulo $p$? **d.** Describe an efficient randomized algorithm for finding a nonquadratic residue, modulo an arbitrary prime $p$, that is, a member of $\mathbb Z_p^\*$ that is not a quadratic residue. How many arithmetic operations does your algorithm require on average?
(Omit!)
[]
false
[]
32-32.1-1
32
32.1
32.1-1
docs/Chap32/32.1.md
Show the comparisons the naive string matcher makes for the pattern $P = 0001$ in the text $T = 000010001010001$.
The subroutine below checks a single shift, comparing pattern characters left to right against the text starting at position $i$ and stopping at the first mismatch:

```cpp
STRING-MATCHER(P, T, i)
    for j = i to i + P.length - 1
        if P[j - i + 1] != T[j]
            return false
    return true
```

Running the naive matcher over all shifts $s = 0, 1, \ldots, 11$ for $P = 0001$ and $T = 000010001010001$ makes $4, 4, 3, 2, 1, 4, 3, 2, 1, 2, 1, 4$ character comparisons respectively ($31$ in total) and finds occurrences of $P$ with shifts $1$, $5$, and $11$.
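The following self-contained C++ sketch (my own, purely illustrative) runs the naive matcher on the given $P$ and $T$ and prints, for each shift, how many character comparisons are made before a mismatch or a full match.

```cpp
#include <iostream>
#include <string>

int main() {
    const std::string P = "0001";
    const std::string T = "000010001010001";
    const int m = static_cast<int>(P.size());
    const int n = static_cast<int>(T.size());
    int total = 0;

    for (int s = 0; s <= n - m; ++s) {          // every possible shift
        int compared = 0;
        bool match = true;
        for (int j = 0; j < m; ++j) {           // stop at the first mismatch
            ++compared;
            if (P[j] != T[s + j]) { match = false; break; }
        }
        total += compared;
        std::cout << "shift " << s << ": " << compared << " comparison(s)"
                  << (match ? " -- occurrence" : "") << '\n';
    }
    std::cout << "total comparisons: " << total << '\n';   // occurrences at shifts 1, 5, 11
}
```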
[ { "lang": "cpp", "code": "STRING-MATCHER(P, T, i)\n for j = i to i + P.length\n if P[j - i + 1] != T[j]\n return false\n return true" } ]
false
[]
32-32.1-2
32
32.1
32.1-2
docs/Chap32/32.1.md
Suppose that all characters in the pattern $P$ are different. Show how to accelerate $\text{NAIVE-STRING-MATCHER}$ to run in time $O(n)$ on an $n$-character text $T$.
Suppose a mismatch occurs at $T[i] \ne P[j]$ with $j > 1$, so that $T[i - k] = P[j - k]$ for $k = 1, 2, \ldots, j - 1$. Because the characters of $P$ are pairwise distinct, $P[j - k] \ne P[1]$ for $k = 1, 2, \ldots, j - 2$, so aligning $P[1]$ with any of $T[i - j + 2], \ldots, T[i - 1]$ fails immediately; all of those shifts are invalid and can be skipped. The next comparison can therefore pit $T[i]$ against $P[1]$ directly, and the text pointer never moves backwards. Each text character is compared at most twice, so the total matching time is $O(n)$. (After a complete match we may likewise restart at $P[1]$, since occurrences of a pattern with distinct characters cannot overlap.) A sketch follows.
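A minimal C++ sketch of this idea, assuming the pattern's characters really are pairwise distinct (the names are mine, not from the text):

```cpp
#include <iostream>
#include <string>
#include <vector>

// Naive matching accelerated for a pattern whose characters are all distinct:
// the text pointer i never moves backwards, so the total work is O(n).
std::vector<int> matchDistinct(const std::string& P, const std::string& T) {
    std::vector<int> shifts;
    const int m = static_cast<int>(P.size());
    const int n = static_cast<int>(T.size());
    int q = 0;                              // number of pattern characters matched so far
    for (int i = 0; i < n; ) {
        if (T[i] == P[q]) {                 // extend the current match
            ++i;
            if (++q == m) {                 // full match found
                shifts.push_back(i - m);
                q = 0;                      // distinct characters: occurrences cannot overlap
            }
        } else if (q == 0) {
            ++i;                            // mismatch with nothing matched: just advance
        } else {
            q = 0;                          // mismatch after a partial match: since all pattern
                                            // characters differ, no intermediate shift can work,
                                            // so retry T[i] against P[0] without moving i back
        }
    }
    return shifts;
}

int main() {
    for (int s : matchDistinct("abc", "ababcabcab"))
        std::cout << "occurrence with shift " << s << '\n';   // shifts 2 and 5
}
```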
[]
false
[]
32-32.1-3
32
32.1
32.1-3
docs/Chap32/32.1.md
Suppose that pattern $P$ and text $T$ are randomly chosen strings of length $m$ and $n$, respectively, from the $d$-ary alphabet $\Sigma_d = \\{ 0, 1, \ldots, d - 1 \\}$, where $d \ge 2$. Show that the expected number of character-to-character comparisons made by the implicit loop in line 4 of the naive algorithm is $$(n - m + 1) \frac{1 - d^{-m}}{1 - d^{-1}} \le 2(n - m + 1)$$ over all executions of this loop. (Assume that the naive algorithm stops comparing characters for a given shift once it finds a mismatch or matches the entire pattern.) Thus, for randomly chosen strings, the naive algorithm is quite efficient.
Suppose that for a given shift the number of characters compared is $L$: the first $l - 1$ characters match with probability $(\frac{1}{d})^{l - 1}$ and the $l$th comparison mismatches with probability $\frac{d - 1}{d}$, while all $m$ characters match with probability $(\frac{1}{d})^m$. Hence

$$
\begin{aligned}
\text E[L] & = 1 \cdot \frac{d - 1}{d} + 2 \cdot (\frac{1}{d})^1 \frac{d - 1}{d} + \cdots + m \cdot (\frac{1}{d})^{m - 1} \frac{d - 1}{d} + m \cdot (\frac{1}{d})^{m} \\\\
& = (1 + 2 \cdot (\frac{1}{d})^1 + \cdots + m \cdot (\frac{1}{d})^{m - 1}) \frac{d - 1}{d} + m \cdot (\frac{1}{d})^{m}.
\end{aligned}
$$

Let

$$
\begin{aligned}
S & = 1 + 2 \cdot (\frac{1}{d})^1 + \cdots + m \cdot (\frac{1}{d})^{m - 1} \\\\
\frac{1}{d}S & = 1 \cdot (\frac{1}{d})^1 + \cdots + (m - 1) \cdot (\frac{1}{d})^{m - 1} + m \cdot (\frac{1}{d})^{m} \\\\
\frac{d - 1}{d}S & = 1 + (\frac{1}{d})^1 + \cdots + (\frac{1}{d})^{m - 1} - m \cdot (\frac{1}{d})^{m} \\\\
\frac{d - 1}{d}S & = \frac{1 - d^{-m}}{1 - d^{-1}} - m \cdot (\frac{1}{d})^{m}.
\end{aligned}
$$

Therefore

$$
\begin{aligned}
\text E[L] & = (1 + 2 \cdot (\frac{1}{d})^1 + \cdots + m \cdot (\frac{1}{d})^{m - 1}) \frac{d - 1}{d} + m \cdot (\frac{1}{d})^{m} \\\\
& = \frac{1 - d^{-m}}{1 - d^{-1}} - m \cdot (\frac{1}{d})^{m} + m \cdot (\frac{1}{d})^{m} \\\\
& = \frac{1 - d^{-m}}{1 - d^{-1}}.
\end{aligned}
$$

There are $n - m + 1$ shifts, so the expected total number of comparisons is

$$(n - m + 1) \cdot \text E[L] = (n - m + 1) \frac{1 - d^{-m}}{1 - d^{-1}}.$$

Since $d \ge 2$ gives $1 - d^{-1} \ge 0.5$ and $1 - d^{-m} < 1$, we have $\frac{1 - d^{-m}}{1 - d^{-1}} \le 2$, and therefore

$$(n - m + 1) \frac{1 - d^{-m}}{1 - d^{-1}} \le 2 (n - m + 1).$$
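A quick Monte Carlo check of the per-shift formula (my own sketch; the parameters $d$, $m$, $n$ and the number of trials are arbitrary choices, and a simulation is of course not a proof):

```cpp
#include <cmath>
#include <iostream>
#include <random>
#include <string>

// Empirically estimate E[L] = (1 - d^{-m}) / (1 - d^{-1}), the expected number of
// comparisons per shift made by the naive matcher on random d-ary strings.
int main() {
    const int d = 3, m = 5, n = 50, trials = 20000;
    std::mt19937 gen(1);
    std::uniform_int_distribution<int> pick(0, d - 1);

    double sumPerShift = 0.0;
    for (int t = 0; t < trials; ++t) {
        std::string P(m, '0'), T(n, '0');
        for (char& c : P) c = static_cast<char>('0' + pick(gen));
        for (char& c : T) c = static_cast<char>('0' + pick(gen));
        long comparisons = 0;
        for (int s = 0; s + m <= n; ++s)
            for (int j = 0; j < m; ++j) {
                ++comparisons;                        // count, then stop at first mismatch
                if (P[j] != T[s + j]) break;
            }
        sumPerShift += static_cast<double>(comparisons) / (n - m + 1);
    }

    std::cout << "simulated E[L] = " << sumPerShift / trials << '\n'
              << "predicted E[L] = "
              << (1.0 - std::pow(1.0 / d, m)) / (1.0 - 1.0 / d) << '\n';
}
```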
[]
false
[]
32-32.1-4
32
32.1
32.1-4
docs/Chap32/32.1.md
Suppose we allow the pattern $P$ to contain occurrences of a **_gap character_** $\diamond$ that can match an _arbitrary_ string of characters (even one of zero length). For example, the pattern $ab\diamond ba\diamond c$ occurs in the text $cabccbacbacab$ as $$c \underbrace{ab}\_{ab} \underbrace{cc}\_{\diamond} \underbrace{ba}\_{ba} \underbrace{cba}\_{\diamond} \underbrace{c}\_{c} ab$$ and as $$c \underbrace{ab}\_{ab} \underbrace{ccbac}\_{\diamond} \underbrace{ba}\_{ba} \underbrace{\text{ }}\_{\diamond} \underbrace{c}\_{c} ab$$ Note that the gap character may occur an arbitrary number of times in the pattern but not at all in the text. Give a polynomial-time algorithm to determine whether such a pattern $P$ occurs in a given text $T$, and analyze the running time of your algorithm.
Use dynamic programming: with $n$ the length of the text $T$ and $m$ the length of the pattern $P$, keep a table whose entry records whether the first $i$ characters of $P$ can be matched against the text read so far; each entry depends on $O(1)$ neighboring entries (a gap character may absorb zero or more additional text characters), so the table is filled in $O(mn)$ time and $O(mn)$ space. This problem is essentially LeetCode [44. Wildcard Matching](https://leetcode.com/problems/wildcard-matching/) without the question-mark (`?`) case. You can see my naive DP implementation [here](https://github.com/walkccc/LeetCode/blob/master/solutions/cpp/0044.cpp); a self-contained sketch follows.
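A self-contained C++ sketch of the DP (my own illustration, not the linked implementation). It uses `*` as a stand-in for the gap character $\diamond$, and wraps the pattern in two extra gaps so that "occurs somewhere in $T$" becomes an exact wildcard match against all of $T$.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Returns true if pattern P (where '*' stands for the gap character, matching any
// string, possibly empty) occurs somewhere in text T.  O(|P| * |T|) time and space.
bool gapOccurs(const std::string& P, const std::string& T) {
    const std::string Q = "*" + P + "*";     // free prefix and suffix: "occurs in", not "equals"
    const int m = static_cast<int>(Q.size());
    const int n = static_cast<int>(T.size());
    // dp[i][j]: Q[0..i-1] matches T[0..j-1] exactly.
    std::vector<std::vector<char>> dp(m + 1, std::vector<char>(n + 1, 0));
    dp[0][0] = 1;
    for (int i = 1; i <= m; ++i) {
        dp[i][0] = (Q[i - 1] == '*') && dp[i - 1][0];
        for (int j = 1; j <= n; ++j) {
            if (Q[i - 1] == '*')
                dp[i][j] = dp[i - 1][j] || dp[i][j - 1];   // gap matches empty / one more char
            else
                dp[i][j] = dp[i - 1][j - 1] && (Q[i - 1] == T[j - 1]);
        }
    }
    return dp[m][n];
}

int main() {
    std::cout << std::boolalpha
              << gapOccurs("ab*ba*c", "cabccbacbacab") << '\n'   // true (the book's example)
              << gapOccurs("ab*ba*c", "abba") << '\n';           // false
}
```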
[]
false
[]
32-32.2-1
32
32.2
32.2-1
docs/Chap32/32.2.md
Working modulo $q = 11$, how many spurious hits does the Rabin-Karp matcher encounter in the text $T = 3141592653589793$ when looking for the pattern $P = 26$?
Working modulo $q = 11$, the pattern $P = 26$ has value $26 \bmod 11 = 4$. Among the length-$2$ windows of $T$, the windows $15$, $59$, and $92$ are also congruent to $4 \pmod{11}$ without being occurrences of $26$ (the window $26$ itself is a genuine match), so the matcher encounters $|\\{15, 59, 92\\}| = 3$ spurious hits.
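A few lines of C++ (purely illustrative) that enumerate the length-2 windows of $T$ modulo 11 and report the spurious hits:

```cpp
#include <iostream>
#include <string>

int main() {
    const std::string T = "3141592653589793";
    const int q = 11, patternValue = 26 % q;             // 26 is congruent to 4 (mod 11)
    int spurious = 0;
    for (std::size_t i = 0; i + 2 <= T.size(); ++i) {
        const int window = (T[i] - '0') * 10 + (T[i + 1] - '0');
        if (window % q == patternValue && window != 26) { // same fingerprint, not a real match
            std::cout << "spurious hit " << window << " at position " << i + 1 << '\n';
            ++spurious;
        }
    }
    std::cout << spurious << " spurious hits\n";          // expect 15, 59, 92 -> 3
}
```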
[]
false
[]
32-32.2-2
32
32.2
32.2-2
docs/Chap32/32.2.md
How would you extend the Rabin-Karp method to the problem of searching a text string for an occurrence of any one of a given set of $k$ patterns? Start by assuming that all $k$ patterns have the same length. Then generalize your solution to allow the patterns to have different lengths.
For $k$ patterns that all have the same length, compute the fingerprint of each pattern once and store the fingerprints in a hash table keyed by value. Then scan $T$ with a single rolling window of that length; for each window, look its fingerprint up in the table and, on a hit, verify the candidate pattern(s) character by character, exactly as Rabin-Karp does for one pattern. For patterns of different lengths, truncate every pattern to the length of the shortest one for fingerprinting purposes and, whenever a truncated pattern hits, verify the remaining characters of the full pattern explicitly (alternatively, maintain one rolling hash per distinct pattern length). A sketch of the equal-length case follows.
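A hedged C++ sketch of the equal-length case (my own; the polynomial rolling hash, its modulus, and all names are arbitrary choices, not anything prescribed by the text):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Rabin-Karp for several patterns of the same length L: hash all patterns once,
// then slide one window of length L over T and look each window hash up in a map.
std::vector<std::pair<int, int>> multiMatch(const std::vector<std::string>& patterns,
                                            const std::string& T) {
    const long long q = 1000000007LL, d = 256;
    const int L = static_cast<int>(patterns[0].size());   // all patterns assumed length L
    const int n = static_cast<int>(T.size());
    std::vector<std::pair<int, int>> hits;                 // (shift, pattern index)
    if (n < L) return hits;

    auto hashOf = [&](const std::string& s, int from) {
        long long h = 0;
        for (int i = 0; i < L; ++i) h = (h * d + s[from + i]) % q;
        return h;
    };

    std::unordered_map<long long, std::vector<int>> byHash;
    for (int k = 0; k < static_cast<int>(patterns.size()); ++k)
        byHash[hashOf(patterns[k], 0)].push_back(k);

    long long pw = 1;                                      // d^(L-1) mod q, for rolling
    for (int i = 1; i < L; ++i) pw = pw * d % q;

    long long h = hashOf(T, 0);
    for (int s = 0; s + L <= n; ++s) {
        auto it = byHash.find(h);
        if (it != byHash.end())
            for (int k : it->second)                       // verify to rule out spurious hits
                if (T.compare(s, L, patterns[k]) == 0) hits.push_back({s, k});
        if (s + L < n)                                     // roll the window one character right
            h = ((h + q - T[s] * pw % q) % q * d + T[s + L]) % q;
    }
    return hits;
}

int main() {
    for (auto [s, k] : multiMatch({"abab", "baba", "abba"}, "abababba"))
        std::cout << "pattern " << k << " at shift " << s << '\n';
}
```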
[]
false
[]
32-32.2-3
32
32.2
32.2-3
docs/Chap32/32.2.md
Show how to extend the Rabin-Karp method to handle the problem of looking for a given $m \times m$ pattern in an $n \times n$ array of characters. (The pattern may be shifted vertically and horizontally, but it may not be rotated.)
For every column of the text array, compute the Rabin-Karp fingerprints of all vertical windows of height $m$, rolling the hash down the column exactly as in the one-dimensional case. Then, in each row of these column fingerprints, treat the fingerprints themselves as characters and run one-dimensional Rabin-Karp again with window length $m$; a hit identifies a candidate $m \times m$ block, which is verified character by character to rule out spurious hits.
[]
false
[]
32-32.2-4
32
32.2
32.2-4
docs/Chap32/32.2.md
Alice has a copy of a long $n$-bit file $A = \langle a_{n - 1}, a_{n - 2}, \ldots, a_0 \rangle$, and Bob similarly has an $n$-bit file $B = \langle b_{n - 1}, b_{n - 2}, \ldots, b_0 \rangle$. Alice and Bob wish to know if their files are identical. To avoid transmitting all of $A$ or $B$, they use the following fast probabilistic check. Together, they select a prime $q > 1000n$ and randomly select an integer $x$ from $\\{ 0, 1, \ldots, q - 1 \\}$. Then, Alice evaluates $$A(x) = (\sum_{i = 0}^{n - 1} a_i x^i) \mod q$$ and Bob similarly evaluates $B(x)$. Prove that if $A \ne B$, there is at most one chance in $1000$ that $A(x) = B(x)$, whereas if the two files are the same, $A(x)$ is necessarily the same as $B(x)$. ($\textit{Hint:}$ See Exercise 31.4-4.)
(Omit!)
[]
false
[]
32-32.3-1
32
32.3
32.3-1
docs/Chap32/32.3.md
Construct the string-matching automaton for the pattern $P = aabab$ and illustrate its operation on the text string $T = \text{aaababaabaababaab}$.
$$0 \rightarrow 1 \rightarrow 2 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 5 \rightarrow 1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 5 \rightarrow 1 \rightarrow 2 \rightarrow 3.$$

The accepting state $5$ is reached after reading $T[6]$ and $T[14]$, so $P$ occurs with shifts $1$ and $9$.
[]
false
[]
32-32.3-2
32
32.3
32.3-2
docs/Chap32/32.3.md
Draw a state-transition diagram for a string-matching automaton for the pattern $ababbabbababbababbabb$ over the alphabet $\Sigma = \\{a, b\\}$.
$$
\begin{array}{c|c|c}
q & \text{a} & \text{b} \\\\
\hline
0 & 1 & 0 \\\\
1 & 1 & 2 \\\\
2 & 3 & 0 \\\\
3 & 1 & 4 \\\\
4 & 3 & 5 \\\\
5 & 6 & 0 \\\\
6 & 1 & 7 \\\\
7 & 3 & 8 \\\\
8 & 9 & 0 \\\\
9 & 1 & 10 \\\\
10 & 11 & 0 \\\\
11 & 1 & 12 \\\\
12 & 3 & 13 \\\\
13 & 14 & 0 \\\\
14 & 1 & 15 \\\\
15 & 16 & 8 \\\\
16 & 1 & 17 \\\\
17 & 3 & 18 \\\\
18 & 19 & 0 \\\\
19 & 1 & 20 \\\\
20 & 3 & 21 \\\\
21 & 9 & 0
\end{array}
$$
[]
false
[]
32-32.3-3
32
32.3
32.3-3
docs/Chap32/32.3.md
We call a pattern $P$ **_nonoverlappable_** if $P_k \sqsupset P_q$ implies $k = 0$ or $k = q$. Describe the state-transition diagram of the string-matching automaton for a nonoverlappable pattern.
$\delta(q, a) \in \\{0, 1, q + 1\\}$. If the next character matches ($q < m$ and $a = P[q + 1]$), then $\delta(q, a) = q + 1$. Otherwise let $k = \delta(q, a) \le q$; if $k \ge 2$, then $P_k \sqsupset P_q a$ would imply $P_{k - 1} \sqsupset P_q$ with $1 \le k - 1 \le q - 1$, contradicting the assumption that $P$ is nonoverlappable. Hence every mismatch transition, and every transition out of the accepting state $m$, goes to state $0$ or to state $1$ (the latter exactly when $a = P[1]$).
[]
false
[]
32-32.3-4
32
32.3
32.3-4 $\star$
docs/Chap32/32.3.md
Given two patterns $P$ and $P'$, describe how to construct a finite automaton that determines all occurrences of either pattern. Try to minimize the number of states in your automaton.
Merge the two patterns along their common prefix: build the automaton on the trie of $P$ and $P'$, so that each state records the longest prefix of either pattern matched so far (shared for as long as the two patterns agree), with one accepting state per pattern. Mismatch transitions fall back, as usual, to the longest prefix of either pattern that is a suffix of the characters read so far. Sharing the common prefix keeps the number of states at most $|P| + |P'| + 1$, and fewer whenever the patterns overlap.
[]
false
[]
32-32.3-5
32
32.3
32.3-5
docs/Chap32/32.3.md
Given a pattern $P$ containing gap characters (see Exercise 32.1-4), show how to build a finite automaton that can find an occurrence of $P$ in a text $T$ in $O(n)$ matching time, where $n = |T|$.
Split $P$ at the gap characters into gap-free fragments and build a string-matching automaton for each fragment. Run the automaton for the first fragment on $T$; as soon as it reaches its accepting state, switch to the automaton for the next fragment starting at the following text character, and so on. Since a gap may match any string, taking the earliest occurrence of each fragment is always safe, so $P$ occurs in $T$ if and only if the last automaton accepts. Each text character is fed to exactly one automaton, so the matching time is $O(n)$.
[]
false
[]
32-32.4-1
32
32.4
32.4-1
docs/Chap32/32.4.md
Compute the prefix function $\pi$ for the pattern $\text{ababbabbabbababbabb}$.
$$\pi = \\{ 0, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 3, 4, 5, 6, 7, 8 \\}.$$
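For reference, a small C++ version of $\text{COMPUTE-PREFIX-FUNCTION}$ (0-indexed; my own transcription) that reproduces the values above:

```cpp
#include <iostream>
#include <string>
#include <vector>

// pi[q] is the length of the longest proper prefix of P that is also a suffix of P[0..q].
std::vector<int> prefixFunction(const std::string& P) {
    const int m = static_cast<int>(P.size());
    std::vector<int> pi(m, 0);
    int k = 0;
    for (int q = 1; q < m; ++q) {
        while (k > 0 && P[k] != P[q]) k = pi[k - 1];   // fall back to shorter borders
        if (P[k] == P[q]) ++k;
        pi[q] = k;
    }
    return pi;
}

int main() {
    for (int v : prefixFunction("ababbabbabbababbabb"))
        std::cout << v << ' ';
    std::cout << '\n';   // 0 0 1 2 0 1 2 0 1 2 0 1 2 3 4 5 6 7 8
}
```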
[]
false
[]
32-32.4-2
32
32.4
32.4-2
docs/Chap32/32.4.md
Give an upper bound on the size of $\pi^\*[q]$ as a function of $q$. Give an example to show that your bound is tight.
$|\pi^\*[q]| \le q$, since the iterates $\pi[q] > \pi^{(2)}[q] > \cdots$ are strictly decreasing values in $\\{0, 1, \ldots, q - 1\\}$. The bound is tight: for $P = \text a^m$ (all characters equal) we have $\pi[i] = i - 1$, so $\pi^\*[q] = \\{q - 1, q - 2, \ldots, 1, 0\\}$, which has exactly $q$ elements.
[]
false
[]
32-32.4-3
32
32.4
32.4-3
docs/Chap32/32.4.md
Explain how to determine the occurrences of pattern $P$ in the text $T$ by examining the $\pi$ function for the string $PT$ (the string of length $m + n$ that is the concatenation of $P$ and $T$).
Compute $\pi$ for the string $PT$. For a position $q$ of $PT$ with $2m \le q \le m + n$, the pattern $P = (PT)_m$ is a suffix of $(PT)_q$, i.e., $P$ occurs with shift $q - 2m$ in $T$, if and only if $m \in \pi^\*[q]$: some iterate $\pi^{(k)}[q]$ equals $m$. (Testing only $\pi[q] = m$ is not sufficient, because $\pi[q]$ can exceed $m$; with $P = \text{aa}$ and $T = \text{aaa}$, for instance, $\pi[4] = 3$ and $\pi[5] = 4$ even though $P$ occurs at both positions.) The condition $q \ge 2m$ simply ensures that the occurrence lies entirely inside $T$. A sketch follows.
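A C++ sketch of this test (my own illustration, assuming a nonempty pattern): it computes $\pi$ for $PT$ and, for each position $q \ge 2m$, walks the chain $\pi^\*[q]$ until the value drops to $m$ or below, reporting a shift whenever it hits exactly $m$.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Standard prefix-function computation, 0-indexed (same as the earlier sketch).
std::vector<int> prefixFunction(const std::string& S) {
    const int m = static_cast<int>(S.size());
    std::vector<int> pi(m, 0);
    for (int q = 1, k = 0; q < m; ++q) {
        while (k > 0 && S[k] != S[q]) k = pi[k - 1];
        if (S[k] == S[q]) ++k;
        pi[q] = k;
    }
    return pi;
}

// Report every shift s such that P occurs in T, using only the prefix function of PT.
std::vector<int> occurrencesViaPT(const std::string& P, const std::string& T) {
    const int m = static_cast<int>(P.size());          // assumed >= 1
    const std::string S = P + T;
    const std::vector<int> pi = prefixFunction(S);
    std::vector<int> shifts;
    for (int q = 2 * m; q <= static_cast<int>(S.size()); ++q) {
        int k = pi[q - 1];                             // walk the chain pi*(q)
        while (k > m) k = pi[k - 1];
        if (k == m) shifts.push_back(q - 2 * m);       // occurrence inside T with this shift
    }
    return shifts;
}

int main() {
    for (int s : occurrencesViaPT("aa", "aaa")) std::cout << s << ' ';   // 0 1
    std::cout << '\n';
}
```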
[]
false
[]
32-32.4-4
32
32.4
32.4-4
docs/Chap32/32.4.md
Use an aggregate analysis to show that the running time of $\text{KMP-MATCHER}$ is $\Theta(n)$.
The line $q = q + 1$ executes at most once per character of $T$, so $q$ is increased at most $n$ times in total. Each iteration of the while loop that sets $q = \pi[q]$ strictly decreases $q$, and $q$ never drops below $0$, so over the whole run the while loop iterates at most as many times as $q$ was increased, i.e., at most $n$ times. All other work is $O(1)$ per text character, so the total running time is $\Theta(n)$.
[]
false
[]
32-32.4-5
32
32.4
32.4-5
docs/Chap32/32.4.md
Use a potential function to show that the running time of $\text{KMP-MATCHER}$ is $\Theta(n)$.
Use the potential function $\Phi = q$, the current state of the matcher. Each of the $n$ iterations of the for loop increases $q$, and hence $\Phi$, by at most $1$, while every iteration of the inner while loop has actual cost $\Theta(1)$ and decreases $\Phi$ by at least $1$. Thus the amortized cost of one for-loop iteration is $O(1)$; since $\Phi$ starts at $0$ and is never negative, the total running time is $\Theta(n)$.
[]
false
[]
32-32.4-6
32
32.4
32.4-6
docs/Chap32/32.4.md
Show how to improve $\text{KMP-MATCHER}$ by replacing the occurrence of $\pi$ in line 7 (but not line 12) by $\pi'$, where $\pi'$ is defined recursively for $q = 1, 2, \ldots, m - 1$ by the equation $$ \pi'[q] = \begin{cases} 0 & \text{ if } \pi[q] = 0, \\\\ \pi'[\pi[q]] & \text{ if } \pi[q] \ne 0 \text{ and } P[\pi[q] + 1] = P[q + 1] \\\\ \pi[q] & \text{ if } \pi[q] \ne 0 \text{ and } P[\pi[q] + 1] \ne P[q + 1]. \end{cases} $$ Explain why the modified algorithm is correct, and explain in what sense this change constitutes an improvement.
Suppose $P[q + 1] \ne T[i]$, so the matcher must fall back. If in addition $P[\pi[q] + 1] = P[q + 1]$, then $P[\pi[q] + 1] \ne T[i]$ as well, so comparing $T[i]$ with $P[\pi[q] + 1]$ is guaranteed to fail and can be skipped: $\pi'[q]$ jumps directly to the first fallback state whose next pattern character differs from $P[q + 1]$ (or to $0$). The modified algorithm is correct because it skips only comparisons that are known in advance to be mismatches, and it is an improvement because a single mismatched text character no longer triggers a chain of futile comparisons against equal pattern characters.
[]
false
[]
32-32.4-7
32
32.4
32.4-7
docs/Chap32/32.4.md
Give a linear-time algorithm to determine whether a text $T$ is a cyclic rotation of another string $T'$. For example, $\text{arc}$ and $\text{car}$ are cyclic rotations of each other.
First check that $|T| = |T'|$; if the lengths differ, the strings cannot be rotations of each other. Otherwise, use a linear-time matcher such as KMP to search for $T'$ in the text $TT$ (the concatenation of $T$ with itself): $T$ is a cyclic rotation of $T'$ if and only if $T'$ occurs in $TT$. Both the prefix-function computation and the matching take linear time, so the whole test runs in $O(n)$. A sketch follows.
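A tiny C++ sketch (mine); `std::string::find` stands in for a linear-time matcher such as KMP only to keep the code short (its worst case is not linear).

```cpp
#include <iostream>
#include <string>

// T is a cyclic rotation of T2 iff the strings have the same length and T2 occurs in T + T.
bool isCyclicRotation(const std::string& T, const std::string& T2) {
    return T.size() == T2.size() && (T + T).find(T2) != std::string::npos;
}

int main() {
    std::cout << std::boolalpha
              << isCyclicRotation("arc", "car") << ' '    // true
              << isCyclicRotation("arc", "rca") << ' '    // true
              << isCyclicRotation("arc", "abc") << '\n';  // false
}
```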
[]
false
[]
32-32.4-8
32
32.4
32.4-8 $\star$
docs/Chap32/32.4.md
Give an $O(m|\Sigma|)$-time algorithm for computing the transition function $\delta$ for the string-matching automaton corresponding to a given pattern $P$. (Hint: Prove that $\delta(q, a) = \delta(\pi[q], a)$ if $q = m$ or $P[q + 1] \ne a$.)
Compute the prefix function $\pi$ of $P$ in $\Theta(m)$ time, then fill in $\delta$ state by state in increasing order of $q$: for each $a \in \Sigma$, set $\delta(0, a) = 1$ if $a = P[1]$ and $0$ otherwise; for $q \ge 1$, set $\delta(q, a) = q + 1$ if $q < m$ and $a = P[q + 1]$, and $\delta(q, a) = \delta(\pi[q], a)$ otherwise, a value that is already available because $\pi[q] < q$. Correctness follows from the hint, and each of the $(m + 1)|\Sigma|$ entries is filled in $O(1)$ time, for $O(m|\Sigma|)$ overall. A sketch follows.
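A C++ sketch of this construction (my own; it assumes the alphabet is a contiguous range of characters starting at `'a'`). Run on the pattern of Exercise 32.3-2, it reproduces that transition table.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Build the string-matching automaton's transition function in O(m * sigma) time:
// delta(q, a) = q + 1 when q < m and a == P[q]; otherwise delta(q, a) = delta(pi[q], a),
// which is already known because pi[q] < q.
std::vector<std::vector<int>> buildDelta(const std::string& P, int sigma) {
    const int m = static_cast<int>(P.size());
    std::vector<int> pi(m, 0);
    for (int q = 1, k = 0; q < m; ++q) {           // ordinary prefix-function computation, O(m)
        while (k > 0 && P[k] != P[q]) k = pi[k - 1];
        if (P[k] == P[q]) ++k;
        pi[q] = k;
    }
    std::vector<std::vector<int>> delta(m + 1, std::vector<int>(sigma, 0));
    for (int q = 0; q <= m; ++q)
        for (int a = 0; a < sigma; ++a) {
            if (q < m && P[q] - 'a' == a)
                delta[q][a] = q + 1;               // advance on a match
            else if (q == 0)
                delta[q][a] = 0;                   // nothing matched, stay in state 0
            else
                delta[q][a] = delta[pi[q - 1]][a]; // copy the entry of the fallback state
        }
    return delta;
}

int main() {
    auto delta = buildDelta("ababbabbababbababbabb", 2);  // the pattern from Exercise 32.3-2
    for (int q = 0; q < static_cast<int>(delta.size()); ++q)
        std::cout << q << ": a->" << delta[q][0] << " b->" << delta[q][1] << '\n';
}
```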
[]
false
[]
32-32-1
32
32-1
32-1
docs/Chap32/Problems/32-1.md
Let $y^i$ denote the concatenation of string $y$ with itself $i$ times. For example, $(\text{ab})^3 = \text{ababab}$. We say that a string $x \in \Sigma^\*$ has **_repetition factor_** $r$ if $x = y ^ r$ for some string $y \in \Sigma^\*$ and some $r > 0$. Let $\rho(x)$ denote the largest $r$ such that $x$ has repetition factor $r$. **a.** Give an efficient algorithm that takes as input a pattern $P[1 \ldots m]$ and computes the value $\rho(P_i)$ for $i = 1, 2, \ldots, m$. What is the running time of your algorithm? **b.** For any pattern $P[1 \ldots m]$, let $\rho^\*(P)$ be defined as $\max_{1 \le i \le m} \rho(P_i)$. Prove that if the pattern $P$ is chosen randomly from the set of all binary strings of length $m$, then the expected value of $\rho^\*(P)$ is $O(1)$. **c.** Argue that the following string-matching algorithm correctly finds all occurrences of pattern $P$ in a text $T[1 \ldots n]$ in time $O(\rho^\*(P)n + m)$: ```cpp REPETITION_MATCHER(P, T) m = P.length n = T.length k = 1 + ρ*(P) q = 0 s = 0 while s ≤ n - m if T[s + q + 1] == P[q + 1] q = q + 1 if q == m print "Pattern occurs with shift" s if q == m or T[s + q + 1] != P[q + 1] s = s + max(1, ceil(q / k)) q = 0 ``` This algorithm is due to Galil and Seiferas. By extending these ideas greatly, they obtained a linear-time string-matching algorithm that uses only $O(1)$ storage beyond what is required for $P$ and $T$.
**a.** Compute the prefix function $\pi$ of $P$ in $\Theta(m)$ time. For each $i$, let $l = i - \pi[i]$, the length of the shortest period of $P_i$. If $l$ divides $i$, then $P_i = y^{i / l}$ for $y = P[1 \ldots l]$ and no higher power is possible, so $\rho(P_i) = i / l$; otherwise $P_i$ is not a nontrivial power and $\rho(P_i) = 1$. The total running time is $\Theta(m)$. A sketch follows.

**b.** Write $\text E[\rho^\*(P)] = \sum_{r \ge 1} \Pr\\{\rho^\*(P) \ge r\\}$. For $r \ge 2$, the event $\rho^\*(P) \ge r$ requires some prefix of length $rj$ (for some $j \ge 1$) to equal $y^r$ with $|y| = j$; for a fixed $j$ this happens with probability $2^{-(r - 1)j}$, since the last $(r - 1)j$ bits must repeat the first $j$ bits. By the union bound over $j$,

$$\Pr\\{\rho^\*(P) \ge r\\} \le \sum_{j \ge 1} 2^{-(r - 1)j} = \frac{1}{2^{r - 1} - 1} \le \frac{2}{2^{r - 1}}.$$

Therefore

$$\text E[\rho^\*(P)] \le 1 + \sum_{r \ge 2} \frac{2}{2^{r - 1}} = 1 + 2 = 3,$$

so $\text E[\rho^\*(P)] = O(1)$.

**c.** (Omit!)
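A C++ sketch of part (a) (the names and the sample pattern are my own): it computes $\pi$ and then reads off $\rho(P_i)$ for every prefix.

```cpp
#include <iostream>
#include <string>
#include <vector>

// rho(P_i) for every prefix P_i, from the prefix function: P_i is a power y^r (r >= 2)
// exactly when l = i - pi[i] divides i, and then rho(P_i) = i / l; otherwise rho(P_i) = 1.
std::vector<int> repetitionFactors(const std::string& P) {
    const int m = static_cast<int>(P.size());
    std::vector<int> pi(m, 0);
    for (int q = 1, k = 0; q < m; ++q) {      // ordinary prefix-function computation
        while (k > 0 && P[k] != P[q]) k = pi[k - 1];
        if (P[k] == P[q]) ++k;
        pi[q] = k;
    }
    std::vector<int> rho(m + 1, 1);           // rho[i] for the prefix of length i
    for (int i = 1; i <= m; ++i) {
        const int l = i - pi[i - 1];          // length of the shortest period of P_i
        if (i % l == 0) rho[i] = i / l;
    }
    return rho;
}

int main() {
    const std::string P = "abaababaab";
    const std::vector<int> rho = repetitionFactors(P);
    for (int i = 1; i <= static_cast<int>(P.size()); ++i)
        std::cout << "rho(P_" << i << ") = " << rho[i] << '\n';   // e.g. rho(P_10) = 2
}
```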
[ { "lang": "cpp", "code": "> REPETITION_MATCHER(P, T)\n> m = P.length\n> n = T.length\n> k = 1 + ρ*(P)\n> q = 0\n> s = 0\n> while s ≤ n - m\n> if T[s + q + 1] == P[q + 1]\n> q = q + 1\n> if q == m\n> print \"Pattern occurs with shift\" s\n> if q == m or T[s + q + 1] != P[q + 1]\n> s = s + max(1, ceil(q / k))\n> q = 0\n>" } ]
false
[]