Zhangir Azerbayev committed on
Commit
c91a5cf
1 Parent(s): 3d7992e

converted books and formal to jsonl gz

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. books/cam/IA_L/analysis_i.tex +0 -0
  2. books/cam/IA_L/dynamics_and_relativity.tex +0 -0
  3. books/cam/IA_L/probability.tex +0 -0
  4. books/cam/IA_L/vector_calculus.tex +0 -0
  5. books/cam/IA_M/differential_equations.tex +0 -0
  6. books/cam/IA_M/groups.tex +0 -0
  7. books/cam/IA_M/numbers_and_sets.tex +0 -0
  8. books/cam/IA_M/vectors_and_matrices.tex +0 -0
  9. books/cam/IB_E/metric_and_topological_spaces.tex +0 -0
  10. books/cam/IB_E/optimisation.tex +0 -1892
  11. books/cam/IB_E/variational_principles.tex +0 -1652
  12. books/cam/IB_L/complex_analysis.tex +0 -0
  13. books/cam/IB_L/complex_methods.tex +0 -0
  14. books/cam/IB_L/electromagnetism.tex +0 -0
  15. books/cam/IB_L/fluid_dynamics.tex +0 -0
  16. books/cam/IB_L/geometry.tex +0 -0
  17. books/cam/IB_L/groups_rings_and_modules.tex +0 -0
  18. books/cam/IB_L/numerical_analysis.tex +0 -0
  19. books/cam/IB_L/statistics.tex +0 -0
  20. books/cam/IB_M/analysis_ii.tex +0 -0
  21. books/cam/IB_M/linear_algebra.tex +0 -0
  22. books/cam/IB_M/markov_chains.tex +0 -1665
  23. books/cam/IB_M/methods.tex +0 -0
  24. books/cam/IB_M/quantum_mechanics.tex +0 -0
  25. books/cam/III_E/classical_and_quantum_solitons.tex +0 -0
  26. books/cam/III_L/advanced_quantum_field_theory.tex +0 -0
  27. books/cam/III_L/algebras.tex +0 -0
  28. books/cam/III_L/logic.tex +0 -0
  29. books/cam/III_L/modular_forms_and_l_functions.tex +0 -0
  30. books/cam/III_L/positivity_in_algebraic_geometry.tex +0 -0
  31. books/cam/III_L/ramsey_theory.tex +0 -0
  32. books/cam/III_L/riemannian_geometry.tex +0 -0
  33. books/cam/III_L/schramm-loewner_evolutions.tex +0 -0
  34. books/cam/III_L/stochastic_calculus_and_applications.tex +0 -0
  35. books/cam/III_L/symplectic_geometry.tex +0 -0
  36. books/cam/III_L/the_standard_model.tex +0 -0
  37. books/cam/III_L/theoretical_physics_of_soft_condensed_matter.tex +0 -0
  38. books/cam/III_M/advanced_probability.tex +0 -0
  39. books/cam/III_M/algebraic_topology_iii.tex +0 -0
  40. books/cam/III_M/analysis_of_partial_differential_equations.tex +0 -0
  41. books/cam/III_M/combinatorics.tex +0 -1782
  42. books/cam/III_M/differential_geometry.tex +0 -0
  43. books/cam/III_M/extremal_graph_theory.tex +0 -1529
  44. books/cam/III_M/hydrodynamic_stability.tex +0 -0
  45. books/cam/III_M/local_fields.tex +0 -0
  46. books/cam/III_M/modern_statistical_methods.tex +0 -0
  47. books/cam/III_M/percolation_and_random_walks_on_graphs.tex +0 -0
  48. books/cam/III_M/quantum_computation.tex +0 -0
  49. books/cam/III_M/quantum_field_theory.tex +0 -0
  50. books/cam/III_M/symmetries_fields_and_particles.tex +0 -0
books/cam/IA_L/analysis_i.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IA_L/dynamics_and_relativity.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IA_L/probability.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IA_L/vector_calculus.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IA_M/differential_equations.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IA_M/groups.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IA_M/numbers_and_sets.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IA_M/vectors_and_matrices.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_E/metric_and_topological_spaces.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_E/optimisation.tex DELETED
@@ -1,1892 +0,0 @@
1
- \documentclass[a4paper]{article}
2
-
3
- \def\npart {IB}
4
- \def\nterm {Easter}
5
- \def\nyear {2015}
6
- \def\nlecturer {F.\ A.\ Fischer}
7
- \def\ncourse {Optimisation}
8
- \def\nofficial {http://www.maths.qmul.ac.uk/~ffischer/teaching/opt/}
9
-
10
- \input{header}
11
-
12
- \begin{document}
13
- \maketitle
14
- {\small
15
- \noindent\textbf{Lagrangian methods}\\
16
- General formulation of constrained problems; the Lagrangian sufficiency theorem. Interpretation of Lagrange multipliers as shadow prices. Examples.\hspace*{\fill} [2]
17
-
18
- \vspace{10pt}
19
- \noindent\textbf{Linear programming in the nondegenerate case}\\
20
- Convexity of feasible region; sufficiency of extreme points. Standardization of problems, slack variables, equivalence of extreme points and basic solutions. The primal simplex algorithm, artificial variables, the two-phase method. Practical use of the algorithm; the tableau. Examples. The dual linear problem, duality theorem in a standardized case, complementary slackness, dual variables and their interpretation as shadow prices. Relationship of the primal simplex algorithm to dual problem. Two person zero-sum games.\hspace*{\fill} [6]
21
-
22
- \vspace{10pt}
23
- \noindent\textbf{Network problems}\\
24
- The Ford-Fulkerson algorithm and the max-flow min-cut theorems in the rational case. Network flows with costs, the transportation algorithm, relationship of dual variables with nodes. Examples. Conditions for optimality in more general networks; *the simplex-on-a-graph algorithm*.\hspace*{\fill} [3]
25
-
26
- \vspace{10pt}
27
- \noindent\textbf{Practice and applications}\\
28
- *Efficiency of algorithms*. The formulation of simple practical and combinatorial problems as linear programming or network problems.\hspace*{\fill} [1]}
29
-
30
- \tableofcontents
31
-
32
- \section{Introduction and preliminaries}
33
- \subsection{Constrained optimization}
34
- In optimization, the objective is to maximize or minimize some function. For example, if we are a factory, we want to minimize our cost of production. Often, our optimization is not unconstrained; otherwise, the way to minimize costs would be to produce nothing at all. Instead, there are some constraints we have to obey. This is known as \emph{constrained optimization}.
35
-
36
- \begin{defi}[Constrained optimization]
37
- The general problem of \emph{constrained optimization} is
38
- \begin{center}
39
- minimize $f(x)$ subject to $h(x) = b$, $x\in X$
40
- \end{center}
41
- where $x\in \R^n$ is the \emph{vector of decision variables}, $f: \R^n \to \R$ is the \emph{objective function}, $h: \R^n \to \R^m$ and $b\in \R^m$ are the \emph{functional constraints}, and $X\subseteq \R^n$ is the \emph{regional constraint}.
42
- \end{defi}
43
- Note that everything above is a vector, but we do not bold our vectors. This is since almost everything we work with is going to be a vector, and there isn't much point in bolding them.
44
-
45
- This is indeed the most general form of the problem. If we want to maximize $f$ instead of minimize, we can minimize $-f$. If we want our constraints to be an inequality in the form $h(x) \geq b$, we can introduce a \emph{slack variable} $z$, make the functional constraint $h(x) - z = b$, and add the regional constraint $z \geq 0$. So all is good, and this is in fact the most general form.
46
-
47
- Linear programming is, unsurprisingly, the case where everything is linear. We can write our problem as:
48
- \begin{center}
49
- minimize $c^Tx$ subject to
50
- \begin{align*}
51
- a_i^Tx &\geq b_i \text{ for all }i \in M_1\\
52
- a_i^Tx &\leq b_i \text{ for all }i \in M_2\\
53
- a_i^Tx &= b_i \text{ for all }i \in M_3\\
54
- x_i &\geq 0 \text{ for all }i \in N_1\\
55
- x_j &\leq 0 \text{ for all }j \in N_2
56
- \end{align*}
57
- \end{center}
58
- where we've explicitly written out the different forms the constraints can take.
59
-
60
- This is too clumsy. Instead, we can perform some tricks and turn them into a nicer form:
61
- \begin{defi}[General and standard form]
62
- The \emph{general form} of a linear program is
63
- \begin{center}
64
- minimize $c^T x$ subject to $Ax \geq b$, $x \geq 0$
65
- \end{center}
66
- The \emph{standard form} is
67
- \begin{center}
68
- minimize $c^T x$ subject to $Ax = b$, $x \geq 0$.
69
- \end{center}
70
- \end{defi}
71
- It takes some work to show that these are indeed the most general forms. The equivalence between the two forms can be done via slack variables, as described above. We still have to check some more cases. For example, this form says that $x \geq 0$, i.e.\ all decision variables have to be non-negative. What if we want $x$ to be unconstrained, i.e.\ able to take any value we like? We can split $x$ into two parts, $x = x^+ - x^-$, where each part has to be non-negative. Then $x$ can take any positive or negative value.
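As a quick illustration of these transformations (an added worked example), suppose we have the single constraint $x_1 + x_2 \geq 4$ with $x_2 \geq 0$ and $x_1$ unconstrained in sign. Writing $x_1 = x_1^+ - x_1^-$ and introducing a slack variable $z$, the constraint becomes
\[
% added illustrative example, not from the original notes
x_1^+ - x_1^- + x_2 - z = 4,\qquad x_1^+,\, x_1^-,\, x_2,\, z \geq 0,
\]
which is in standard form.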
72
-
73
- Note that when I said ``nicer'', I don't mean that turning a problem into this form necessarily makes it easier to solve \emph{in practice}. However, it will be much easier to work with when developing general theory about linear programs.
74
-
75
- \begin{eg}
76
- We want to minimize $-(x_1 + x_2)$ subject to
77
- \begin{align*}
78
- x_1 + 2x_2 &\leq 6\\
79
- x_1 - x_2 &\leq 3\\
80
- x_1, x_2 &\geq 0
81
- \end{align*}
82
- Since we are lucky to have a 2D problem, we can draw this out.
83
- \begin{center}
84
- \begin{tikzpicture}
85
- \path [fill=gray!50!white] (0, 0) -- (3, 0) -- (4, 1) -- (0, 3) -- cycle;
86
-
87
- \draw [->] (-1, 0) -- (6, 0) node [right] {$x_1$};
88
- \draw [->] (0, -1) -- (0, 4) node [above] {$x_2$};
89
-
90
- \draw (2, -1) -- (5, 2) node [right] {$x_1 - x_2 = 3$};
91
- \draw (-1, 3.5) -- (5, 0.5) node [right] {$x_1 + 2x_2 = 6$};
92
- \draw [->] (0, 0) -- (-0.5, -0.5) node [anchor=north east] {$c$};
93
-
94
- \draw [dashed] (-1, 1) -- (1, -1) node [below] {\tiny $-(x_1 + x_2) = 0$};
95
- \draw [dashed] (-1, 3) -- (3, -1) node [below] {\tiny $-(x_1 + x_2) = -2$};
96
- \draw [dashed] (2, 3) -- (6, -1) node [below] {\tiny $-(x_1 + x_2) = -5$};
97
- \end{tikzpicture}
98
- \end{center}
99
- The shaded region is the feasible region, and $c$ is our \emph{cost vector}. The dashed lines, which are orthogonal to $c$, are lines on which the objective function is constant. To minimize our objective function, we want to push this line as far in the direction opposite to $c$ (i.e.\ up and to the right) as possible, which is clearly achieved at the intersection of the two boundary lines.
100
- \end{eg}
101
- Now we have a problem. In the general case, we have absolutely \emph{no idea} how to solve it. What we \emph{do} know, is how to do \emph{un}constrained optimization.
102
-
103
- \subsection{Review of unconstrained optimization}
104
- Let $f: \R^n \to \R$, $x^*\in \R^n$. A necessary condition for $x^*$ to minimize $f$ over $\R^n$ is $\nabla f(x^*) = 0$, where
105
- \[
106
- \nabla f = \left(\frac{\partial f}{\partial x_1}, \cdots, \frac{\partial f}{\partial x_n}\right)^T
107
- \]
108
- is the gradient of $f$.
109
-
110
- However, this is obviously not a sufficient condition. Any such point can be a maximum, minimum or a saddle. Here we need a notion of convexity:
111
- \begin{defi}[Convex region]
112
- A region $S\subseteq \R^n$ is \emph{convex} iff for all $\delta\in [0, 1]$, $x, y\in S$, we have $\delta x + (1 - \delta) y \in S$. Alternatively, if we take any two points in the region, the line segment joining them lies completely within the region.
113
- \begin{center}
114
- \begin{tikzpicture}
115
- \begin{scope}[shift={(-2, 0)}]
116
- \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0, 0) (0.6, -1) (1, 0) (0.7, 0.7)};
117
- \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {};
118
-
119
- \node at (0, -1.5) {non-convex};
120
- \end{scope}
121
-
122
- \begin{scope}[shift={(2, 0)}]
123
- \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
124
- \node at (0, -1.5) {convex};
125
- \end{scope}
126
- \end{tikzpicture}
127
- \end{center}
128
- \end{defi}
129
-
130
- \begin{defi}[Convex function]
131
- A function $f: S\to \R$ is \emph{convex} if $S$ is convex, and for all $x, y\in S$, $\delta\in [0, 1]$, we have $\delta f(x) + (1 - \delta)f(y) \geq f(\delta x + (1 - \delta)y)$.
132
- \begin{center}
133
- \begin{tikzpicture}
134
- \draw(-2, 4) -- (-2, 0) -- (2, 0) -- (2, 4);
135
- \draw (-1.3, 0.1) -- (-1.3, -0.1) node [below] {$x$};
136
- \draw (1.3, 0.1) -- (1.3, -0.1) node [below] {$y$};
137
- \draw (-1.7, 2) parabola bend (-.2, 1) (1.7, 3.3);
138
- \draw [dashed] (-1.3, 0) -- (-1.3, 1.53) node [circ] {};
139
- \draw [dashed] (1.3, 0) -- (1.3, 2.42) node [circ] {};
140
- \draw (-1.3, 1.53) -- (1.3, 2.42);
141
- \draw [dashed] (0, 0) node [below] {\tiny $\delta x + (1 - \delta)y$} -- (0, 1.975) node [above] {\tiny$\delta f(x) + (1 - \delta) f(y)\quad\quad\quad\quad\quad\quad$} node [circ] {};
142
- \end{tikzpicture}
143
- \end{center}
144
- A function is \emph{concave} if $-f$ is convex. Note that a function can be neither concave nor convex.
145
- \end{defi}
146
-
147
- We have the following lemma:
148
- \begin{lemma}
149
- Let $f$ be twice differentiable. Then $f$ is convex on a convex set $S$ if the Hessian matrix
150
- \[
151
- Hf_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}
152
- \]
153
- is positive semidefinite for all $x\in S$, where this fancy term means:
154
- \end{lemma}
155
-
156
- \begin{defi}[Positive-semidefinite]
157
- A matrix $H$ is \emph{positive semi-definite} if $v^T Hv \geq 0$ for all $v\in \R^n$.
158
- \end{defi}
159
-
160
- This leads to the following theorem:
161
- \begin{thm}
162
- Let $X\subseteq \R^n$ be convex, $f: \R^n \to \R$ be twice differentiable on $X$. If $x^* \in X$ satisfies $\nabla f(x^*) = 0$ and $Hf(x)$ is positive semidefinite for all $x\in X$, then $x^*$ minimizes $f$ on $X$.
163
- \end{thm}
164
- We will not prove these.
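As a sanity check of this theorem (an added illustrative example), take $X = \R^2$ and $f(x) = x_1^2 + x_2^2$. Then $\nabla f(x) = (2x_1, 2x_2)^T$ vanishes only at $x^* = 0$, and
\[
% added illustrative example
Hf(x) =
\begin{pmatrix}
2 & 0\\
0 & 2
\end{pmatrix}
\]
is positive semidefinite everywhere, since $v^T (Hf) v = 2\|v\|^2 \geq 0$ for all $v$. So $x^* = 0$ minimizes $f$ on $\R^2$, as expected.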
165
-
166
- Note that this is helpful, since linear functions are convex (and concave). The problem is that our problems are constrained, not unconstrained. So we will have to convert constrained problems to unconstrained problems.
167
-
168
- \section{The method of Lagrange multipliers}
169
- So how do we solve the problem of constrained optimization? The trick here is to incorporate the constraints into the objective function, so that points violating the constraints will not be mistaken for minima.
170
-
171
- Suppose the original problem is
172
- \begin{center}
173
- minimize $f(x)$ subject to $h(x) = b$, $x\in X$.
174
- \end{center}
175
- Call this problem $(P)$.
176
- \begin{defi}[Lagrangian]
177
- The \emph{Lagrangian} of a problem $(P)$ is defined as
178
- \[
179
- L(x, \lambda) = f(x) - \lambda^T(h(x) - b).
180
- \]
181
- for $\lambda\in \R^m$. $\lambda$ is known as the \emph{Lagrange multiplier}.
182
- \end{defi}
183
- Note that when the constraint is satisfied, $h(x) - b = 0$, and $L(x, \lambda) = f(x)$.
184
-
185
- We could as well have used
186
- \[
187
- L(x, \lambda) = f(x) + \lambda^T(h(x) - b).
188
- \]
189
- since we just have to switch the sign of $\lambda$. So we don't have to worry about getting the sign of $\lambda$ wrong when defining the Lagrangian.
190
-
191
- If we minimize $L$ over both $x$ and $\lambda$, then we will magically find the minimal solution subject to the constraints. Sometimes.
192
-
193
- \begin{thm}[Lagrangian sufficiency]
194
- Let $x^*\in X$ and $\lambda^*\in \R^m$ be such that
195
- \[
196
- L(x^* ,\lambda^*) = \inf_{x\in X}L(x, \lambda^*)\quad\text{and}\quad h(x^*) = b.
197
- \]
198
- Then $x^*$ is optimal for ($P$).
199
-
200
- In words, if $x^*$ minimizes $L$ for a fixed $\lambda^*$, and $x^*$ satisfies the constraints, then $x^*$ minimizes $f$.
201
- \end{thm}
202
- This looks like a pretty powerful result, but it turns out that it is quite easy to prove.
203
-
204
- \begin{proof}
205
- We first define the ``feasible set'': let $X(b) = \{x\in X: h(x) = b\}$, i.e.\ the set of all $x$ that satisfies the constraints. Then
206
- \begin{align*}
207
- \min_{x\in X(b)} f(x) &= \min_{x\in X(b)} (f(x) - \lambda^{*T}(h(x) - b))\quad\text{ since $h(x) - b = 0$}\\
208
- &\geq \min_{x\in X} (f(x) - \lambda^{*T}(h(x) - b))\\
209
- &= f(x^*) - \lambda^{*T}(h(x^*) - b)\\
210
- &= f(x^*).\qedhere
211
- \end{align*}
212
- \end{proof}
213
- How can we interpret this result? To find these values of $\lambda^*$ and $x^*$, we have to solve
214
- \begin{align*}
215
- \nabla L &= 0\\
216
- h(x) &= b.
217
- \end{align*}
218
- Alternatively, we can write this as
219
- \begin{align*}
220
- \nabla f &= \lambda \nabla h\\
221
- h(x) &= b.
222
- \end{align*}
223
- What does this mean? For better visualization, we take the special case where $f$ and $h$ are functions $\R^2 \to \R$. Usually, if we want to minimize $f$ without restriction, then for small changes in $x$, there should be no (first-order) change in $f$, i.e.\ $\d f = \nabla f\cdot \d x = 0$. This has to be true for all possible directions of $\d x$.
224
-
225
- However, if we are constrained by $h(x) = b$, this corresponds to forcing $x$ to lie along a particular path. Hence the restriction $\d f = 0$ only has to hold for displacements $\d x$ along the path. Since we need $\nabla f\cdot \d x = 0$ for all such $\d x$, this means that $\nabla f$ has to be perpendicular to the path. Alternatively, $\nabla f$ has to be parallel to the normal to the path. Since the normal to the path is given by $\nabla h$, we obtain the requirement $\nabla f = \lambda \nabla h$.
226
-
227
- This is how we should interpret the condition $\nabla f = \lambda \nabla h$. Instead of requiring that $\nabla f = 0$ as in usual minimization problems, we only require $\nabla f$ to point at directions perpendicular to the allowed space.
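As a minimal illustration of this condition (an added example), consider minimizing $f(x) = x_1^2 + x_2^2$ subject to $h(x) = x_1 + x_2 = 2$. Here $\nabla f = (2x_1, 2x_2)^T$ and $\nabla h = (1, 1)^T$, so $\nabla f = \lambda \nabla h$ forces $x_1 = x_2 = \lambda/2$, and the constraint then gives
\[
% added illustrative example
x_1 = x_2 = 1,\quad \lambda = 2,
\]
which is indeed the point of the line closest to the origin.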
228
-
229
- \begin{eg}
230
- Minimize $x_1 - x_2 - 2x_3$ subject to
231
- \begin{align*}
232
- x_1 + x_2 + x_3 &= 5\\
233
- x_1^2 + x_2^2 &= 4
234
- \end{align*}
235
- The Lagrangian is
236
- \begin{align*}
237
- L(x, \lambda) ={}& x_1 - x_2 - 2x_3 - \lambda_1(x_1 + x_2 + x_3 - 5) - \lambda_2 (x_1^2 + x_2^2 - 4)\\
238
- ={}& ((1 - \lambda_1)x_1 - \lambda_2 x_1^2) + ((-1 - \lambda_1)x_2 - \lambda_2 x_2^2) \\
239
- &+ (-2 - \lambda_1)x_3 + 5\lambda_1 + 4\lambda_2
240
- \end{align*}
241
- We want to pick a $\lambda^*$ and $x^*$ such that $L(x^*, \lambda^*)$ is minimal. Then in particular, for our $\lambda^*$, $L(x, \lambda^*)$ must have a finite minimum.
242
-
243
- We note that $(-2 - \lambda_1)x_3$ does not have a finite minimum unless $\lambda_1 = -2$, since $x_3$ can take any value. Also, the terms in $x_1$ and $x_2$ do not have a finite minimum unless $\lambda_2 < 0$.
244
-
245
- With these in mind, we find a minimum by setting all first derivatives to be $0$:
246
- \begin{align*}
247
- \frac{\partial L}{\partial x_1} &= 1 - \lambda_1 - 2\lambda_2 x_1 = 3 - 2\lambda_2x_1\\
248
- \frac{\partial L}{\partial x_2} &= -1 - \lambda_1 - 2\lambda_2 x_2 = 1 - 2\lambda_2 x_2
249
- \end{align*}
250
- Since these must both be $0$, we must have
251
- \[
252
- x_1 = \frac{3}{2\lambda_2}, \quad x_2 = \frac{1}{2\lambda_2}.
253
- \]
254
- To show that this is indeed a minimum, we look at the Hessian matrix:
255
- \[
256
- HL =
257
- \begin{pmatrix}
258
- -2\lambda_2 & 0\\
259
- 0 & -2\lambda_2
260
- \end{pmatrix}
261
- \]
262
- which is positive semidefinite when $\lambda_2 < 0$, which is the condition we came up with at the beginning.
263
-
264
- Let $Y = \{\lambda\in \R^2: \lambda_1 = -2, \lambda_2 < 0\}$ be our helpful values of $\lambda$.
265
-
266
- So we have shown above that for every $\lambda \in Y$, $L(x, \lambda)$ has a unique minimum at $x(\lambda) = (\frac{3}{2\lambda_2}, \frac{1}{2\lambda_2}, x_3)^T$.
267
-
268
- Now all we have to do is find $\lambda$ and $x$ such that $x(\lambda)$ satisfies the functional constraints. The second constraint gives
269
- \[
270
- x_1^2 + x_2^2 = \frac{9}{4\lambda_2^2} + \frac{1}{4\lambda_2^2} = \frac{10}{4\lambda_2^2} = 4 \Leftrightarrow \lambda_2 = -\sqrt{\frac{5}{8}}.
271
- \]
272
- The first constraint gives
273
- \[
274
- x_3 = 5 - x_1 - x_2.
275
- \]
276
- So the theorem implies that
277
- \[
278
- x_1 = -3\sqrt{\frac{2}{5}},\quad x_2 = -\sqrt{\frac{2}{5}},\quad x_3 = 5 + 4\sqrt{\frac{2}{5}}.
279
- \]
280
- \end{eg}
281
- So far so good. But what if our functional constraint is an inequality? We will need slack variables.
282
-
283
- To minimize $f(x)$ subject to $h(x) \leq b$, $x\in X$, we proceed as follows:
284
- \begin{enumerate}
285
- \item Introduce slack variables to obtain the equivalent problem, to minimize $f(x)$ subject to $h(x) + z = b$, $x \in X$, $z \geq 0$.
286
- \item Compute the Lagrangian
287
- \[
288
- L(x, z, \lambda) = f(x) - \lambda^T(h(x) + z - b).
289
- \]
290
- \item Find
291
- \[
292
- Y = \left\{\lambda: \inf_{x\in X, z\geq 0}L(x, z, \lambda) > -\infty\right\}.
293
- \]
294
- \item For each $\lambda\in Y$, minimize $L(x, z, \lambda)$, i.e.\ find
295
- \[
296
- x^*(\lambda)\in X,\quad z^*(\lambda) \geq 0
297
- \]
298
- such that
299
- \[
300
- L(x^*(\lambda), z^*(\lambda), \lambda) = \inf_{x\in X, z\geq 0} L(x, z, \lambda)
301
- \]
302
- \item Find $\lambda^*\in Y$ such that
303
- \[
304
- h(x^*(\lambda^*)) + z^*(\lambda^*) = b.
305
- \]
306
- \end{enumerate}
307
- Then by the Lagrangian sufficiency condition, $x^*(\lambda^*)$ is optimal for the constrained problem.
308
-
309
- \subsection{Complementary Slackness}
310
- If we introduce a slack variable $z$, we note that $z_j$ does not appear in the objective function, and enters the Lagrangian only through the term $-\lambda_j z_j$, where $\lambda_j$ is the corresponding Lagrange multiplier. For the infimum over $z_j \geq 0$ to be finite we need $\lambda_j \leq 0$, and the minimizing $z^*(\lambda)_j$ then makes this term zero: if $\lambda_j \not= 0$ we must take $z_j = 0$, and otherwise the term vanishes anyway. Hence we always have $(z^*(\lambda))_j \lambda_j = 0$.
311
-
312
- This makes our life easier since our search space is smaller.
313
-
314
- \begin{eg}
315
- Consider the following problem:
316
- \begin{center}
317
- minimize $x_1 - 3x_2$ subject to
318
- \begin{align*}
319
- x_1^2 + x_2^2 + z_1 &= 4\\
320
- x_1 + x_2 + z_2 &= 2\\
321
- z_1, z_2 &\geq 0.
322
- \end{align*}
323
- where $z_1, z_2$ are slack variables.
324
- \end{center}
325
- The Lagrangian is
326
- \[
327
- L(x, z, \lambda) = ((1 - \lambda_2)x_1 - \lambda_1 x_1^2) + ((-3 - \lambda_2)x_2 - \lambda_1 x_2^2) - \lambda_1 z_1 - \lambda_2 z_2 + 4\lambda_1 + 2\lambda_2.
328
- \]
329
- To ensure finite minimum, we need $\lambda_1, \lambda_2 \leq 0$.
330
-
331
- By complementary slackness, $\lambda_1 z_1 = \lambda_2 z_2 = 0$. We can then consider the cases $\lambda_1 = 0$ and $z_1 = 0$ separately, and save a lot of algebra.
332
- \end{eg}
333
-
334
- \subsection{Shadow prices}
335
- We have previously described how we can understand the requirement $\nabla f = \lambda \nabla h$. But what does the multiplier $\lambda$ represent?
336
- \begin{thm}
337
- Consider the problem
338
- \begin{center}
339
- minimize $f(x)$ subject to $h(x) = b$.
340
- \end{center}
341
- Here we assume all functions are continuously differentiable. Suppose that for each $b\in \R^m$, $\phi(b)$ is the optimal value of $f$ and $\lambda^*$ is the corresponding Lagrange multiplier. Then
342
- \[
343
- \frac{\partial \phi}{\partial b_i} = \lambda_i^*.
344
- \]
345
- \end{thm}
346
- Proof is omitted, as it is just a tedious application of the chain rule etc.
347
-
348
- This can be interpreted as follows: suppose we are a factory which is capable of producing $m$ different kinds of goods. Since we have finitely many resources, and producing stuff requires resources, $h(x) = b$ limits the amount of goods we can produce. Now of course, if we have more resources, i.e.\ we change the value of $b$, we will be able to produce more/less stuff, and thus generate more profit. The change in profit per change in $b$ is given by $\frac{\partial \phi}{\partial b_i}$, which is the value of $\lambda$.
349
-
350
- The result also holds when the functional constraints are inequality constraints. If the $i$th constraint holds with equality at the optimal solution, then the above reasoning holds. Otherwise, if it does not hold with equality, the Lagrange multiplier is $0$ by complementary slackness, and the partial derivative of $\phi$ with respect to $b_i$ is also $0$, since changing the upper bound doesn't affect us if we are not at the limit. So they are equal.
351
- \subsection{Lagrange duality}
352
- Consider the problem
353
- \begin{center}
354
- minimize $f(x)$ subject to $h(x) = b$, $x\in X$.
355
- \end{center}
356
- Denote this as $P$.
357
-
358
- The Lagrangian is
359
- \[
360
- L(x, \lambda) = f(x) - \lambda^T (h(x) - b).
361
- \]
362
- Define the dual function $g: \R^m \to \R$ as
363
- \[
364
- g(\lambda) = \inf_{x\in X}L(x, \lambda).
365
- \]
366
- i.e.\ we fix $\lambda$, and see how small we can get $L$ to be. As before, let
367
- \[
368
- Y = \{\lambda\in \R^m: g(\lambda) > -\infty\}.
369
- \]
370
- Then we have
371
- \begin{thm}[Weak duality]
372
- If $x\in X(b)$ (i.e.\ $x$ satisfies both the functional and regional constraints) and $\lambda \in Y$, then
373
- \[
374
- g(\lambda) \leq f(x).
375
- \]
376
- In particular,
377
- \[
378
- \sup_{\lambda\in Y}g(\lambda) \leq \inf_{x\in X(b)}f(x).
379
- \]
380
- \end{thm}
381
-
382
- \begin{proof}
383
- \begin{align*}
384
- g(\lambda) &= \inf_{x'\in X}L(x', \lambda)\\
385
- &\leq L(x, \lambda)\\
386
- &= f(x) - \lambda^T (h(x) - b)\\
387
- &= f(x).\qedhere
388
- \end{align*}
389
- \end{proof}
390
-
391
- This suggests that we can solve a dual problem: instead of minimizing $f$, we can maximize $g$ subject to $\lambda\in Y$. Denote this problem as $(D)$. The original problem $(P)$ is called the \emph{primal}.
392
-
393
- \begin{defi}[Strong duality]
394
- $(P)$ and $(D)$ are said to satisfy \emph{strong duality} if
395
- \[
396
- \sup_{\lambda\in Y}g(\lambda) = \inf_{x\in X(b)}f(x).
397
- \]
398
- \end{defi}
399
- It turns out that problems satisfying strong duality are exactly those for which the method of Lagrange multipliers works.
400
-
401
- \begin{eg}
402
- Again consider the problem to minimize $x_1 - x_2 - 2x_3$ subject to
403
- \begin{align*}
404
- x_1 + x_2 + x_3 &= 5\\
405
- x_1^2 + x_2^2 &= 4
406
- \end{align*}
407
- We saw that
408
- \[
409
- Y = \{\lambda\in \R^2: \lambda_1 = -2, \lambda_2 < 0\}
410
- \]
411
- and
412
- \[
413
- x^*(\lambda) = \left(\frac{3}{2\lambda_2}, \frac{1}{2\lambda_2}, 5 - \frac{4}{2\lambda_2}\right).
414
- \]
415
- The dual function is
416
- \[
417
- g(\lambda) = \inf_{x\in X} L(x, \lambda) = L(x^*(\lambda), \lambda) = \frac{10}{4\lambda_2} + 4\lambda_2 - 10.
418
- \]
419
- The dual is the problem to
420
- \begin{center}
421
- maximize $\frac{10}{4\lambda_2} + 4\lambda_2 - 10$ subject to $\lambda_2 < 0$.
422
- \end{center}
423
- The maximum is attained for
424
- \[
425
- \lambda_2 = -\sqrt{\frac{5}{8}}
426
- \]
427
- After calculating the values of $g$ and $f$, we can see that the primal and dual do have the same optimal value.
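For the record, the arithmetic (worked out here as a check) goes as follows: at the primal optimum $x_1 = -3\sqrt{2/5}$, $x_2 = -\sqrt{2/5}$, $x_3 = 5 + 4\sqrt{2/5}$ we get
\[
% added verification of the claim above
f(x^*) = x_1 - x_2 - 2x_3 = -10\sqrt{\tfrac{2}{5}} - 10 = -(10 + 2\sqrt{10}),
\]
while at $\lambda_2 = -\sqrt{5/8}$,
\[
g(\lambda^*) = \frac{10}{4\lambda_2} + 4\lambda_2 - 10 = -\sqrt{10} - \sqrt{10} - 10 = -(10 + 2\sqrt{10}),
\]
so the two optimal values indeed agree.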
428
- \end{eg}
429
-
430
- Right now, what we've got isn't helpful, because we won't know if our problem satisfies strong duality!
431
-
432
- \subsection{Supporting hyperplanes and convexity}
433
- We use the fancy term ``hyperplane'' to denote planes in higher dimensions (in an $n$-dimensional space, a hyperplane has $n - 1$ dimensions).
434
-
435
- \begin{defi}[Supporting hyperplane]
436
- A hyperplane $\alpha: \R^m \to \R$ is \emph{supporting} to $\phi$ at $b$ if $\alpha$ intersects $\phi$ at $b$ and $\phi(c) \geq \alpha(c)$ for all $c$.
437
- \begin{center}
438
- \begin{tikzpicture}
439
- \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
440
- \draw (-1, 2) parabola bend (0, 0.5) (1, 2) node [above] {$\phi$};
441
- \node [circ] at (0.4, 0.75) {};
442
- \node [right] at (0.4, 0.75) {$\phi(b)$};
443
- \draw (-0.7, -0.5) -- (1.5, 2) node [right] {$\alpha$};
444
- \end{tikzpicture}
445
- \end{center}
446
- \end{defi}
447
-
448
- \begin{thm}
449
- $(P)$ satisfies strong duality iff $\phi(c) = \inf\limits_{x \in X(c)}f(x)$ has a supporting hyperplane at $b$.
450
- \end{thm}
451
- Note that here we fix a $b$, and let $\phi$ be a function of $c$.
452
-
453
- \begin{proof}
454
- $(\Leftarrow)$ Suppose there is a supporting hyperplane. Then since the plane passes through $\phi(b)$, it must be of the form
455
- \[
456
- \alpha(c) = \phi(b) + \lambda^T(c - b).
457
- \]
458
- Since this is supporting, for all $c\in \R^m$,
459
- \[
460
- \phi(b) + \lambda^T(c - b) \leq \phi(c),
461
- \]
462
- or
463
- \[
464
- \phi(b) \leq \phi(c) - \lambda^T(c - b).
465
- \]
466
- This implies that
467
- \begin{align*}
468
- \phi(b) &\leq \inf_{c\in \R^m}(\phi(c) - \lambda^T(c - b))\\
469
- &= \inf_{c\in \R^m}\inf_{x\in X(c)}(f(x) - \lambda^T(h(x) - b))\\
470
- \intertext{(since $\phi(c) = \inf\limits_{x\in X(c)} f(x)$ and $h(x) = c$ for $x\in X(c)$)}
471
- &= \inf_{x\in X}L(x, \lambda).\\
472
- \intertext{(since $\bigcup\limits_{c\in \R^m}X(c) = X$, which is true since for any $x\in X$, we have $x\in X(h(x))$)}
473
- &= g(\lambda)
474
- \end{align*}
475
- By weak duality, $g(\lambda) \leq \phi(b)$. So $\phi(b) = g(\lambda)$. So strong duality holds.
476
-
477
- $(\Rightarrow)$ Assume now that we have strong duality. Then there exists $\lambda$ such that for all $c\in \R^m$,
478
- \begin{align*}
479
- \phi(b) &= g(\lambda)\\
480
- &= \inf_{x\in X}L(x, \lambda)\\
481
- &\leq \inf_{x\in X(c)} L(x, \lambda)\\
482
- &= \inf_{x\in X(c)} (f(x) - \lambda^T(h(x) - b))\\
483
- &= \phi(c) - \lambda^T(c - b)
484
- \end{align*}
485
- So $\phi(b) + \lambda^T(c - b) \leq \phi(c)$. So this defines a supporting hyperplane.
486
- \end{proof}
487
-
488
- We are making some progress now. To show that Lagrange multipliers work, we need to show that $(P)$ satisfies strong duality. To show that $(P)$ satisfies strong duality, we need to show that it has a supporting hyperplane at $b$. How can we show that there is a supporting hyperplane? A sufficient condition is convexity.
489
-
490
- \begin{thm}[Supporting hyperplane theorem]
491
- Suppose that $\phi: \R^m \to \R$ is convex and $b\in\R^m$ lies in the interior of the set of points where $\phi$ is finite. Then there exists a supporting hyperplane to $\phi$ at $b$.
492
- \end{thm}
493
- Proof follows rather straightforwardly from the definition of convexity, and is omitted.
494
-
495
- This is some even better progress. However, the definition of $\phi$ is rather convoluted. How can we show that it is convex? We have the following helpful theorem:
496
-
497
- \begin{thm}
498
- Let
499
- \[
500
- \phi(b) = \inf_{x\in X} \{f(x): h(x) \leq b\}
501
- \]
502
- If $X, f, h$ are convex, then so is $\phi$ (assuming feasibility and boundedness).
503
- \end{thm}
504
-
505
- \begin{proof}
506
- Consider $b_1, b_2\in \R^m$ such that $\phi(b_1)$ and $\phi(b_2)$ are defined. Let $\delta \in [0, 1]$ and define $b = \delta b_1 + (1 - \delta)b_2$. We want to show that $\phi(b) \leq \delta \phi(b_1) + (1 - \delta)\phi(b_2)$.
507
-
508
- Consider $x_1 \in X(b_1)$, $x_2 \in X(b_2)$, and let $x = \delta x_1 + (1 - \delta)x_2$. By convexity of $X$, $x\in X$.
509
-
510
- By convexity of $h$,
511
- \begin{align*}
512
- h(x) &= h(\delta x_1 + (1 - \delta) x_2)\\
513
- &\leq \delta h(x_1) + (1 - \delta)h(x_2)\\
514
- &\leq \delta b_1 + (1 - \delta)b_2\\
515
- &= b
516
- \end{align*}
517
- So $x\in X(b)$. Since $\phi(b)$ is the optimal value over $X(b)$ and $x\in X(b)$, by convexity of $f$,
518
- \begin{align*}
519
- \phi(b) &\leq f(x)\\
520
- &= f(\delta x_1 + (1 - \delta) x_2)\\
521
- &\leq \delta f(x_1) + (1 - \delta)f(x_2)
522
- \end{align*}
523
- This holds for any $x_1\in X(b_1)$ and $x_2 \in X(b_2)$. So by taking infimum of the right hand side,
524
- \[
525
- \phi(b) \leq \delta \phi(b_1) + (1 - \delta) \phi(b_2).
526
- \]
527
- So $\phi$ is convex.
528
- \end{proof}
529
- $h(x) = b$ is equivalent to $h(x) \leq b$ and $-h(x) \leq -b$. So the result holds for problems with equality constraints if both $h$ and $-h$ are convex, i.e.\ if $h(x)$ is linear.
530
-
531
- So
532
- \begin{thm}
533
- If a linear program is feasible and bounded, then it satisfies strong duality.
534
- \end{thm}
535
-
536
- \section{Solutions of linear programs}
537
- \subsection{Linear programs}
538
- We'll come up with an algorithm to solve linear programs efficiently. We first illustrate the general idea with the case of a 2D linear program. Consider the problem
539
- \begin{center}
540
- maximize $x_1 + x_2$ subject to
541
- \begin{align*}
542
- x_1 + 2x_2 &\leq 6\\
543
- x_1 - x_2 &\leq 3\\
544
- x_1, x_2 &\geq 0
545
- \end{align*}
546
- \end{center}
547
- We can plot the solution space out
548
- \begin{center}
549
- \begin{tikzpicture}
550
- \path [fill=gray!50!white] (0, 0) -- (3, 0) -- (4, 1) -- (0, 3) -- cycle;
551
- \draw [->] (-1, 0) -- (8, 0) node [right] {$x_1$};
552
- \draw [->] (0, -1) -- (0, 4) node [above] {$x_2$};
553
-
554
- \draw (2, -1) -- +(4, 4) node [right] {$x_1 - x_2 = 3$};
555
- \draw (-1, 3.5) -- +(8, -4) node [right] {$x_1 + 2x_2 = 6$};
556
- \draw [->] (0, 0) -- (0.5, 0.5) node [anchor=south west] {$c$};
557
- \end{tikzpicture}
558
- \end{center}
559
- To maximize $x_1 + x_2$, we want to go as far in the $c$ direction as possible. It should be clear that the optimal point will lie on a corner of the polygon of the feasible region, no matter what its shape might be.
560
-
561
- Even if we have cases where $c$ is orthogonal to one of the lines, e.g.
562
- \begin{center}
563
- \begin{tikzpicture}
564
- \path [fill=gray!50!white] (0, 0) -- (3, 0) -- (3.25, 0.25) -- (0, 3.5) -- cycle;
565
- \draw [->] (-1, 0) -- (6, 0) node [right] {$x_1$};
566
- \draw [->] (0, -1) -- (0, 4) node [above] {$x_2$};
567
-
568
- \draw (2, -1) -- +(3, 3) node [right] {$x_1 - x_2 = 3$};
569
- \draw (-0.5, 4) -- +(5, -5) node [right] {$x_1 + x_2 = 3.5$};
570
- \draw [->] (0, 0) -- (0.5, 0.5) node [anchor=south west] {$c$};
571
- \node [circ] at (1.75, 1.75) {};
572
- \node at (1.75, 1.75) [anchor = south west] {$A$};
573
- \end{tikzpicture}
574
- \end{center}
575
- An optimal point might be $A$. However, if we know that $A$ is an optimal point, we can slide it across the $x_1 + x_2 = 3.5$ line until it meets one of the corners. Hence we know that one of the corners must be an optimal point.
576
-
577
- This already allows us to solve linear programs, since we can just try all corners and see which gives the optimal value. However, this can be made more efficient, especially when we have a large number of dimensions and hence corners.
578
-
579
- \subsection{Basic solutions}
580
- Here we will assume that the rows of $A$ are linearly independent, and that any set of $m$ columns is linearly independent. Otherwise, we can just throw away the redundant rows or columns.
581
-
582
- In general, if both the constraints and the objective function are linear, then the optimal point always lies on a ``corner'', or an \emph{extreme point}.
583
-
584
- \begin{defi}[Extreme point]
585
- An \emph{extreme point} $x\in S$ of a convex set $S$ is a point that cannot be written as a convex combination of two distinct points in $S$, i.e.\ if $y, z\in S$ and $\delta \in (0, 1)$ satisfy
586
- \[
587
- x = \delta y + (1 - \delta) z,
588
- \]
589
- then $x = y = z$.
590
- \end{defi}
591
-
592
- Consider again the linear program in standard form, i.e.
593
- \begin{center}
594
- maximize $c^T x$ subject to $Ax = b, x \geq 0$, where $A \in \R^{m\times n}$ and $b\in \R^m$.
595
- \end{center}
596
- Note that now we are talking about maximization instead of minimization.
597
-
598
- \begin{defi}[Basic solution and basis]
599
- A solution $x\in \R^n$ is \emph{basic} if it has at most $m$ non-zero entries (out of $n$), i.e.\ if there exists a set $B\subseteq \{1, \cdots, n\}$ with $|B| = m$ such that $x_i = 0$ if $i\not\in B$. In this case, $B$ is called the \emph{basis}, and $x_i$ are the \emph{basic variables} if $i\in B$.
600
- \end{defi}
601
- We will later see (via an example) that basic solutions correspond to solutions at the ``corners'' of the solution space.
602
-
603
- \begin{defi}[Non-degenerate solutions]
604
- A basic solution is \emph{non-degenerate} if it has exactly $m$ non-zero entries.
605
- \end{defi}
606
-
607
- Note that by ``solution'', we do not mean a solution to the whole maximization problem. Instead we are referring to a solution to the constraint $Ax = b$. Being a solution does \emph{not} require that $x \geq 0$. Those that satisfy this regional constraint are known as \emph{feasible}.
608
-
609
- \begin{defi}[Basic feasible solution]
610
- A basic solution $x$ is \emph{feasible} if it satisfies $x \geq 0$.
611
- \end{defi}
612
-
613
- \begin{eg}
614
- Consider the linear program
615
- \begin{center}
616
- maximize $f(x) = x_1 + x_2$ subject to
617
- \begin{align*}
618
- x_1 + 2x_2 + z_1&= 6\\
619
- x_1 - x_2 + z_2 &= 3\\
620
- x_1, x_2, z_1, z_2 &\geq 0
621
- \end{align*}
622
- \end{center}
623
- where we have included the slack variables.
624
-
625
- Since we have 2 constraints, a basic solution can have at most 2 non-zero entries, and thus at least 2 zero entries. The possible basic solutions are
626
- \begin{center}
627
- \begin{tabular}{cccccc}
628
- \toprule
629
- & $x_1$ & $x_2$ & $z_1$ & $z_2$ & $f(x)$\\
630
- \midrule
631
- $A$ & $0$ & $0$ & $6$ & $3$ & $0$\\
632
- $B$ & $0$ & $3$ & $0$ & $6$ & $3$\\
633
- $C$ & $4$ & $1$ & $0$ & $0$ & $5$\\
634
- $D$ & $3$ & $0$ & $3$ & $0$ & $3$\\
635
- $E$ & $6$ & $0$ & $0$ & $-4$ & $6$\\
636
- $F$ & $0$ & $-3$ & $12$ & $0$ & $-3$\\
637
- \bottomrule
638
- \end{tabular}
639
- \end{center}
640
- Among all 6, $E$ and $F$ are \emph{not} feasible solutions since they have negative entries. So the basic feasible solutions are $A, B, C, D$.
641
- \begin{center}
642
- \begin{tikzpicture}
643
- \fill [fill=gray!50!white] (0, 0) node [anchor = south west] {$A$} --
644
- (3, 0) node [above] {$B$} --
645
- (4, 1) node [left] {$C$} --
646
- (0, 3) node [anchor = north east] {$D$} -- cycle;
647
-
648
- \draw [->] (-1, 0) -- (8, 0) node [right] {$x_1$};
649
- \draw [->] (0, -4) -- (0, 4) node [above] {$x_2$};
650
-
651
- \draw (-1, -4) -- +(7, 7) node [right] {$x_1 - x_2 = 3$};
652
- \draw (-1, 3.5) -- +(8, -4) node [right] {$x_1 + 2x_2 = 6$};
653
- \node [above] at (6, 0) {$E$};
654
- \node [left] at (0, -3) {$F$};
655
- \end{tikzpicture}
656
- \end{center}
657
- \end{eg}
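To see where a row of this table comes from (an added spelled-out step): for solution $C$ we set $z_1 = z_2 = 0$, so the two constraints become
\[
% added worked step for row C of the table above
x_1 + 2x_2 = 6, \qquad x_1 - x_2 = 3,
\]
and subtracting gives $3x_2 = 3$, i.e.\ $x_2 = 1$, $x_1 = 4$ and $f(x) = x_1 + x_2 = 5$, exactly the entries in row $C$.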
658
-
659
- In the previous example, we saw that the extreme points are exactly the basic feasible solutions. This is true in general.
660
- \begin{thm}
661
- A vector $x$ is a basic feasible solution of $Ax = b$ if and only if it is an extreme point of the set $X(b) = \{x': Ax' = b, x' \geq 0\}$.
662
- \end{thm}
663
- We will not prove this.
664
-
665
- \subsection{Extreme points and optimal solutions}
666
- Recall that we previously showed in our 2D example that the optimal solution lies on an extreme point, i.e.\ is a basic feasible solution. This is also true in general.
667
-
668
- \begin{thm}
669
- If $(P)$ is feasible and bounded, then there exists an optimal solution that is a basic feasible solution.
670
- \end{thm}
671
-
672
- \begin{proof}
673
- Let $x$ be optimal for $(P)$. If $x$ has at most $m$ non-zero entries, it is a basic feasible solution, and we are done.
674
-
675
- Now suppose $x$ has $r > m$ non-zero entries. Then $x$ is not a basic solution, so by the previous theorem it is not an extreme point, and we have $y\not= z\in X(b)$, $\delta \in (0, 1)$ such that
676
- \[
677
- x = \delta y + (1 - \delta) z.
678
- \]
679
- We will show there exists an optimal solution with strictly fewer than $r$ non-zero entries. Then the result follows by induction.
680
-
681
- By optimality of $x$, we have $c^T x \geq c^T y$ and $c^T x \geq c^T z$.
682
-
683
- Since $c^T x = \delta c^T y + (1 - \delta)c^Tz$, we must have that $c^T x = c^T y = c^T z$, i.e.\ $y$ and $z$ are also optimal.
684
-
685
- Since $y \geq 0$ and $z \geq 0$, $x = \delta y + (1 - \delta) z$ implies that $y_i = z_i = 0$ whenever $x_i = 0$.
686
-
687
- So the non-zero entries of $y$ and $z$ form a subset of the non-zero entries of $x$. So $y$ and $z$ have at most $r$ non-zero entries, which must occur in rows where $x$ is also non-zero.
688
-
689
- If $y$ or $z$ has strictly fewer than $r$ non-zero entries, then we are done. Otherwise, for any $\hat{\delta}$ (not necessarily in $(0, 1)$), let
690
- \[
691
- x_{\hat{\delta}} = \hat{\delta} y + (1 - \hat{\delta}) z = z + \hat{\delta}(y - z).
692
- \]
693
- Observe that $x_{\hat{\delta}}$ is optimal for every $\hat\delta\in \R$, since $c^Ty = c^Tz$ implies $c^T x_{\hat{\delta}} = c^Tz$ for all $\hat\delta$.
694
-
695
- Moreover, $y - z \not= 0$, and all non-zero entries of $y - z$ occur in rows where $x$ is non-zero as well. We can thus choose $\hat\delta\in \R$ such that $x_{\hat{\delta}} \geq 0$ and $x_{\hat{\delta}}$ has strictly fewer than $r$ non-zero entries.
696
- \end{proof}
697
- Intuitively, this is what we do when we ``slide along the line'' if $c$ is orthogonal to one of the boundary lines.
698
-
699
- This result in fact holds more generally for the maximum of a convex function $f$ over a compact (i.e.\ closed and bounded) convex set $X$.
700
-
701
- In that case, we can write any point $x\in X$ as a convex combination
702
- \[
703
- x = \sum_{i = 1}^k \delta_i x^i
704
- \]
705
- of extreme points $x^i\in X$, where $\delta \in \R_{\geq 0}^k$ and $\sum_{i=1}^k \delta_i = 1$.
706
-
707
- Then, by convexity of $f$,
708
- \[
709
- f(x) \leq \sum_{i = 1}^k \delta_i f(x^i) \leq \max_i f(x^i)
710
- \]
711
- So any point in the interior cannot be better than the extreme points.
712
- \subsection{Linear programming duality}
713
- Consider the linear program in general form with slack variables,
714
- \begin{center}
715
- minimize $c^Tx$ subject to $Ax - z = b$, $x, z\geq 0$
716
- \end{center}
717
- We have $X = \{(x, z): x, z\geq 0\}\subseteq \R^{m + n}$.
718
-
719
- The Lagrangian is
720
- \[
721
- L(x, z, \lambda) = c^Tx - \lambda^T(A x - z - b) = (c^T - \lambda^TA)x + \lambda^T z + \lambda^T b.
722
- \]
723
- Since $x, z$ can be arbitrarily positive, this has a finite minimum if and only if
724
- \[
725
- c^T - \lambda^TA \geq 0,\quad \lambda^T \geq 0.
726
- \]
727
- Call the feasible set $Y$. Then for fixed $\lambda\in Y$, the minimum of $L(x, z, \lambda)$ is attained when $(c^T - \lambda^T A)x = 0$ and $\lambda^T z = 0$, by complementary slackness. So
728
- \[
729
- g(\lambda) = \inf_{(x, z) \in X} L(x, z, \lambda) = \lambda^T b.
730
- \]
731
- The dual is thus
732
- \begin{center}
733
- maximize $\lambda^T b$ subject to $A^T\lambda \leq c$, $\lambda \geq 0$
734
- \end{center}
735
-
736
- \begin{thm}
737
- The dual of the dual of a linear program is the primal.
738
- \end{thm}
739
-
740
- \begin{proof}
741
- It suffices to show this for the linear program in general form. We have shown above that the dual problem is
742
- \begin{center}
743
- minimize $-b^T\lambda$ subject to $-A^T \lambda \geq -c$, $\lambda \geq 0$.
744
- \end{center}
745
- This problem has the same form as the primal, with $-b$ taking the role of $c$, $-c$ taking the role of $b$, $-A^T$ taking the role of $A$. So doing it again, we get back to the original problem.
746
- \end{proof}
747
-
748
- \begin{eg}
749
- Let the primal problem be
750
- \begin{center}
751
- maximize $3x_1 + 2x_2$ subject to
752
- \begin{align*}
753
- 2x_1 + x_2 + z_1 &= 4\\
754
- 2x_1 + 3x_2 + z_2 &= 6\\
755
- x_1, x_2, z_1, z_2 \geq 0.
756
- \end{align*}
757
- \end{center}
758
- Then the dual problem is
759
- \begin{center}
760
- minimize $4\lambda_1 + 6\lambda_2$ such that
761
- \begin{align*}
762
- 2\lambda_1 + 2\lambda_2 - \mu_1 &= 3\\
763
- \lambda_1 + 3\lambda_2 - \mu_2 &= 2\\
764
- \lambda_1, \lambda_2, \mu_1, \mu_2 \geq 0.
765
- \end{align*}
766
- \end{center}
767
- We can compute all basic solutions of the primal and the dual by setting $n - m = 2$ variables to be zero in turn.
768
-
769
- Given a particular basic solution of the primal, the corresponding solution of the dual can be found by using the complementary slackness conditions:
770
- \[
771
- \lambda_1 z_1 = \lambda_2 z_2 = 0,\quad \mu_1 x_1 = \mu_2 x_2 = 0.
772
- \]
773
- \begin{center}
774
- \begin{tabular}{cccccccccccc}
775
- \toprule
776
- & $x_1$ & $x_2$ & $z_1$ & $z_2$ & $f(x)$ &\;& $\lambda_1$ & $\lambda_2$ & $\mu_1$ & $\mu_2$ & $g(\lambda)$\\
777
- \midrule
778
- A & 0 & 0 & 4 & 6 & 0 && 0 & 0 & -3 & -2 & 0\\
779
- B & 2 & 0 & 0 & 2 & 6 && $\frac{3}{2}$ & 0 & 0 & $-\frac{1}{2}$ & 6\\
780
- C & 3 & 0 & -2 & 0 & 9 && 0 & $\frac{3}{2}$ & 0 & $\frac{5}{2}$ & 9\\
781
- D & $\frac{3}{2}$ & 1 & 0 & 0 & $\frac{13}{2}$ && $\frac{5}{4}$ & $\frac{1}{4}$ & 0 & 0 & $\frac{13}{2}$\\
782
- E & 0 & 2 & 2 & 0 & 4 && 0 & $\frac{2}{3}$ & $-\frac{5}{3}$ & 0 & 4\\
783
- F & 0 & 4 & 0 & -6 & 8 && 2 & 0 & 1 & 0 & 8\\
784
- \bottomrule
785
- \end{tabular}
786
- \end{center}
787
- \begin{center}
788
- \begin{tikzpicture}
789
- \begin{scope}[yscale=0.5]
790
- \path [fill=gray!50!white] (0, 0) node [anchor = north east] {$A$} --
791
- (2, 0) node [below] {$B$} --
792
- (1.5, 1) node [anchor = south west] {$D$} --
793
- (0, 2) node [anchor = north east] {$E$} -- cycle;
794
- \node [above] at (3, 0) {$C$};
795
- \node [left] at (0, 4) {$F$};
796
-
797
- \draw [->] (-0.5, 0) -- (4.5, 0) node [right] {$x_1$};
798
- \draw [->] (0, -3) -- (0, 6) node [above] {$x_2$};
799
- \draw (-0.5, 5) -- +(3.5, -7) node [below] {\small $2x_1 + x_2 = 4$};
800
- \draw (-0.5, 2.333) -- +(4.5, -3) node [below] {\small $2x_1 + 3x_2 = 6$};
801
- \end{scope}
802
-
803
- \begin{scope}[shift={(6, 0)}, xscale=1.5]
804
- \path [fill=gray!50!white] (0, 3) -- (0, 1.5) node [left] {$C$} --
805
- (1.25, 0.25) node [anchor = south west] {$D$} --
806
- (2, 0) node [above] {$F$} --
807
- (3, 0) -- (3, 3) -- cycle;
808
- \node at (0, 0) [anchor = north east] {$A$};
809
- \node at (0, 0.667) [anchor = north east] {$B$};
810
- \node at (1.5, 0) [below] {$E$};
811
-
812
- \draw [->] (-0.5, 0) -- (3, 0) node [right] {$\lambda_1$};
813
- \draw [->] (0, -1.5) -- (0, 3) node [above] {$\lambda_2$};
814
-
815
- \draw (-0.5, 2) -- +(3, -3) node [below] {\small $2\lambda_1 + 2\lambda_2 = 3$};
816
- \draw (-0.5, 0.833) -- +(3, -1) node [below] {\small $\lambda_1 + 3\lambda_2 = 2$};
817
- \end{scope}
818
-
819
- \end{tikzpicture}
820
- \end{center}
821
- We see that $D$ is the only solution such that both the primal and dual solutions are feasible. So we know it is optimal without even having to calculate $f(x)$. It turns out this is always the case.
822
- \end{eg}
823
-
824
- \begin{thm}
825
- Let $x$ and $\lambda$ be feasible for the primal and the dual of the linear program in general form. Then $x$ and $\lambda$ are optimal if and only if they satisfy complementary slackness, i.e.\ if
826
- \[
827
- (c^T - \lambda^T A)x = 0\text{ and }\lambda^T(Ax - b) = 0.
828
- \]
829
- \end{thm}
830
-
831
- \begin{proof}
832
- If $x$ and $\lambda$ are optimal, then
833
- \[
834
- c^Tx = \lambda^T b
835
- \]
836
- since every linear program satisfies strong duality. So
837
- \begin{align*}
838
- c^Tx &= \lambda^T b\\
839
- &= \inf_{x'\in X} (c^T x' - \lambda^T(Ax' - b))\\
840
- &\leq c^T x - \lambda^T (Ax - b)\\
841
- &\leq c^T x.
842
- \end{align*}
843
- The last line is since $Ax \geq b$ and $\lambda\geq 0$.
844
-
845
- The first and last term are the same. So the inequalities hold with equality. Therefore
846
- \[
847
- \lambda^T b = c^Tx - \lambda^T (Ax - b) = (c^T - \lambda^TA)x + \lambda^Tb.
848
- \]
849
- So
850
- \[
851
- (c^T - \lambda^TA)x = 0.
852
- \]
853
- Also,
854
- \[
855
- c^Tx - \lambda^T(Ax - b) = c^Tx
856
- \]
857
- implies
858
- \[
859
- \lambda^T(Ax - b) = 0.
860
- \]
861
- On the other hand, suppose we have complementary slackness, i.e.
862
- \[
863
- (c^T - \lambda^T A)x = 0\text{ and }\lambda^T(Ax - b) = 0,
864
- \]
865
- then
866
- \[
867
- c^Tx = c^Tx - \lambda^T(Ax - b) = (c^T - \lambda^T A)x + \lambda^T b = \lambda^Tb.
868
- \]
869
- Hence by weak duality, $x$ and $\lambda$ are optimal.
870
- \end{proof}
871
- \subsection{Simplex method}
872
- The simplex method is an algorithm that makes use of the result we just had. To find the optimal solution to a linear program, we start with a basic feasible solution of the primal, and then modify the variables step by step until the dual is also feasible.
873
-
874
- We start with an example showing what we do, then explain the logic behind it, then do a more proper example.
875
-
876
- \begin{eg}
877
- Consider the following problem:
878
- \begin{center}
879
- maximize $x_1 + x_2$ subject to
880
- \begin{align*}
881
- x_1 + 2x_2 + z_1 &= 6\\
882
- x_1 - x_2 + z_2 &= 3\\
883
- x_1, x_2, z_1, z_2 \geq 0.
884
- \end{align*}
885
- \end{center}
886
- We write everything in the \emph{simplex tableau}, by noting down the coefficients:
887
- \begin{center}
888
- \begin{tabular}{cccccc}
889
- \toprule
890
- &$x_1$ & $x_2$ & $z_1$ & $z_2$ \\
891
- \midrule
892
- Constraint 1 & 1 & 2 & 1 & 0 & 6 \\
893
- Constraint 2 & 1 & -1 & 0 & 1 & 3 \\
894
- Objective & 1 & 1 & 0 & 0 & 0 \\
895
- \bottomrule
896
- \end{tabular}
897
- \end{center}
898
- We see an identity matrix $\begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}$ in the $z_1$ and $z_2$ columns, and these correspond to the basic feasible solution $z_1 = 6, z_2 = 3, x_1 = x_2 = 0$. It's pretty clear that this basic feasible solution is not optimal, since our objective function is $0$. This is since something in the last row is positive, and we can increase the objective by, say, increasing $x_1$.
899
-
900
- The simplex method says that we can find the optimal solution if we make the bottom row all non-positive while keeping the right column non-negative, by doing row operations.
901
-
902
- We multiply the first row by $\frac{1}{2}$ and subtract/add it to the other rows to obtain
903
- \begin{center}
904
- \begin{tabular}{cccccc}
905
- \toprule
906
- &$x_1$ & $x_2$ & $z_1$ & $z_2$ & \\
907
- \midrule
908
- Constraint 1 & $\frac{1}{2}$ & 1 & $\frac{1}{2}$ & 0 & 3 \\
909
- Constraint 2 & $\frac{3}{2}$ & 0 & $\frac{1}{2}$ & 1 & 6 \\
910
- Objective & $\frac{1}{2}$ & 0 & $-\frac{1}{2}$ & 0 & -3\\
911
- \bottomrule
912
- \end{tabular}
913
- \end{center}
914
- Our new basic feasible solution is $x_2 = 3, z_2 = 6, x_1 = z_1 = 0$. We see that the number in the bottom-right corner is $-f(x)$. We can continue this process to finally obtain a solution.
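For the record, here is the remaining iteration, worked out here as a continuation: the only positive entry left in the bottom row is the $\frac{1}{2}$ in the $x_1$ column, and the ratios are $3/\frac{1}{2} = 6$ for the first row and $6/\frac{3}{2} = 4$ for the second, so we pivot on the second row and obtain
\begin{center}
% added worked continuation of the example
\begin{tabular}{cccccc}
\toprule
&$x_1$ & $x_2$ & $z_1$ & $z_2$ & \\
\midrule
Constraint 1 & 0 & 1 & $\frac{1}{3}$ & $-\frac{1}{3}$ & 1 \\
Constraint 2 & 1 & 0 & $\frac{1}{3}$ & $\frac{2}{3}$ & 4 \\
Objective & 0 & 0 & $-\frac{2}{3}$ & $-\frac{1}{3}$ & $-5$\\
\bottomrule
\end{tabular}
\end{center}
The bottom row is now non-positive, so the basic feasible solution $x_1 = 4$, $x_2 = 1$, with objective value $5$, is optimal, matching the basic feasible solution $C$ found in the earlier example.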
915
- \end{eg}
916
- Here we adopt the following notation: let $A\in \R^{m\times n}$ and $b\in \R^m$. Assume that $A$ has full rank. Let $B\subseteq \{1, 2, \cdots, n\}$ with $|B| = m$ be a basis, corresponding to a basic solution with at most $m$ non-zero entries.
917
-
918
- We rearrange the columns so that all basis columns are on the left. Then we can write our matrices as
919
- \begin{align*}
920
- A_{m\times n} &=
921
- \begin{pmatrix}
922
- (A_B)_{m\times m} & (A_N)_{m\times (n - m)}
923
- \end{pmatrix}\\
924
- x_{n\times 1} &=
925
- \begin{pmatrix}
926
- (x_B)_{m\times 1} & (x_N)_{(n - m)\times 1}
927
- \end{pmatrix}^T\\
928
- c_{n\times 1} &=
929
- \begin{pmatrix}
930
- (c_B)_{m\times 1} & (c_N)_{(n - m)\times 1}
931
- \end{pmatrix}^T.
932
- \end{align*}
933
- Then the functional constraints
934
- \[
935
- Ax = b
936
- \]
937
- can be decomposed as
938
- \[
939
- A_Bx_B + A_Nx_N = b.
940
- \]
941
- We can rearrange this to obtain
942
- \[
943
- x_B = A_B^{-1}(b - A_N x_N).
944
- \]
945
- In particular, when $x_N = 0$, then
946
- \[
947
- x_B = A_B^{-1}b.
948
- \]
949
- The general tableau is then
950
- \begin{center}
951
- \begin{tabular}{ccc}
952
- \toprule
953
- Basis components & Other components\\
954
- \midrule\\
955
- $A_B^{-1} A_B = I$ & $A_B^{-1}A_N$ & $A_B^{-1}b$\\\\
956
- \midrule
957
- \quad$c^T_B - c^T_BA_B^{-1}A_B = 0$\quad & \quad$c_N^T - c_B^TA_B^{-1}A_N$\quad & $-c_B^T A_B^{-1}b$\\
958
- \bottomrule
959
- \end{tabular}
960
- \end{center}
961
- This might look really scary, and it is! Without caring too much about where the formulas for the cells come from, we see the identity matrix on the left, which is where we find our basic feasible solution. Below that is the row for the objective function. The values of this row must be $0$ for the basis columns.
962
-
963
- On the right-most column, we have $A_B^{-1}b$, which is our $x_B$. Below that is $-c_B^TA_B^{-1}b$, which is the negative of our objective function $c_B^Tx_B$.
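To connect this with the numerical example at the start of the section (an added cross-check): there the initial basis is $B = \{z_1, z_2\}$, so $A_B = I$, $A_B^{-1}b = (6, 3)^T$ and $c_B = 0$. Hence
\[
% added cross-check against the earlier tableau
c_N^T - c_B^T A_B^{-1} A_N = c_N^T = (1, 1), \qquad -c_B^T A_B^{-1} b = 0,
\]
which are exactly the bottom row and the bottom-right entry of the first tableau.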
964
- \subsubsection{The simplex tableau}
965
- We have
966
- \begin{align*}
967
- f(x) &= c^T x \\
968
- &= c_B^T x_B + c_N^T x_N\\
969
- &= c_B^T A_B^{-1}(b - A_N x_N) + c_N^T x_N\\
970
- &= c_B^T A_B^{-1}b + (c_N^T - c_B^TA_B^{-1}A_N)x_N.
971
- \end{align*}
972
- We will maximize $c^T x$ by choosing a basis such that $c_N^T - c_B^T A_B^{-1}A_N \leq 0$, i.e.\ non-positive everywhere and $A_B^{-1}b \geq 0$.
973
-
974
- If this is true, then for any feasible solution $x\in \R^n$, we must have $x_N \geq 0$. So $(c_N^T - c_B^TA_B^{-1}A_N)x_N \leq 0$ and
975
- \[
976
- f(x) \leq c_B^T A_B^{-1}b.
977
- \]
978
- So if we choose $x_B = A_B^{-1}b$, $x_N = 0$, then we have an optimal solution.
979
-
980
- Hence our objective is to pick a basis that makes $c_N^T - c_B^T A_B^{-1}A_N \leq 0$ while keeping $A_B^{-1}b \geq 0$. To do this, suppose this is not attained. Say $(c_N^T - c_B^T A_B^{-1}A_N)_i > 0$.
981
-
982
- We can increase the value of the objective function by increasing $(x_N)_i$. As we increase $(x_N)_i$, we have to satisfy the functional constraints. So the value of other variables will change as well. We can keep increasing $(x_N)_i$ until another variable hits $0$, say $(x_B)_j$. Then we will have to stop.
983
-
984
- (However, if it so happens that we can increase $(x_N)_i$ indefinitely without other things hitting $0$, our problem is unbounded)
985
-
986
- The effect of this is that we have switched basis by removing $(x_B)_j$ and adding $(x_N)_i$. We can continue from here. If $(c_N^T - c_B^T A_B^{-1}A_N)$ has no positive entries, we are done. Otherwise, we continue the above procedure.
987
-
988
- The simplex method is a systematic way of doing the above procedure.
989
-
990
- \subsubsection{Using the Tableau}
991
- Consider a tableau of the form
992
- \begin{center}
993
- \begin{tabular}{cc}
994
- \toprule\\
995
- \quad\quad $a_{ij}$\quad\quad\quad & $a_{i0}$\\\\
996
- \midrule
997
- \quad\quad $a_{0j}$\quad\quad\quad & $a_{00}$\\
998
- \bottomrule
999
- \end{tabular}
1000
- \end{center}
1001
- where $a_{i0}$ is $b$, $a_{0j}$ corresponds to the objective function, and $a_{00}$ is initially $0$.
1002
-
1003
- The simplex method proceeds as follows:
1004
- \begin{enumerate}
1005
- \item Find an initial basic feasible solution.
1006
- \item Check whether $a_{0j} \leq 0$ for every $j$. If so, the current solution is optimal. Stop.
1007
- \item If not, choose a \emph{pivot column} $j$ such that $a_{0j} > 0$. Choose a \emph{pivot row} $i\in \{i: a_{ij} > 0\}$ that minimizes $a_{i0}/a_{ij}$. If multiple rows minimize $a_{i0}/a_{ij}$, then the problem is degenerate, and things \emph{might} go wrong. If $a_{ij} \leq 0$ for all $i$, i.e.\ we cannot choose a pivot row, the problem is unbounded, and we stop.
1008
- \item We update the tableau by multiplying row $i$ by $1/a_{ij}$ (such that the new $a_{ij} = 1$), and add a $(-a_{kj}/a_{ij})$ multiple of row $i$ to each row $k \not= i$, including $k = 0$ (so that $a_{kj} = 0$ for all $k \not= i$)
1009
-
1010
- We still have a basic feasible solution, since our choice of pivot row keeps the right-hand column non-negative after these row operations (apart from $a_{00}$).
1011
- \item \texttt{GOTO} (ii).
1012
- \end{enumerate}
1013
-
1014
- Now visit the example at the beginning of the section to see how this is done in practice. Then read the next section for a more complicated example.
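As a concrete instance of step (iii), spelling out what was done implicitly in that example: in its first tableau we may take the $x_2$ column as the pivot column, since $a_{0j} = 1 > 0$ there. Only the first row has a positive entry in that column (the second row has $-1$), so the ratio test leaves only $a_{10}/a_{1j} = 6/2 = 3$, making the first row the pivot row; the row operations of step (iv) then reproduce the second tableau shown earlier.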
1015
- \subsection{The two-phase simplex method}
1016
- Sometimes we don't have a nice identity matrix to start with. In this case, we need to use the \emph{two-phase simplex method}: first find an initial basic feasible solution, then do the actual optimization.
1017
-
1018
- This method is illustrated by example.
1019
- \begin{eg}
1020
- Consider the problem
1021
- \begin{center}
1022
- minimize $6x_1 + 3x_2$ subject to
1023
- \begin{align*}
1024
- x_1 + x_2 &\geq 1\\
1025
- 2x_1 - x_2 &\geq 1\\
1026
- 3x_2 &\leq 2\\
1027
- x_1, x_2 &\geq 0
1028
- \end{align*}
1029
- \end{center}
1030
- This is a minimization problem. To avoid being confused, we maximize $-6x_1 - 3x_2$ instead. We add slack variables to obtain
1031
- \begin{center}
1032
- maximize $-6x_1 - 3x_2$ subject to
1033
- \begin{align*}
1034
- x_1 + x_2 - z_1 &= 1\\
1035
- 2x_1 - x_2 - z_2 &= 1\\
1036
- 3x_2 + z_3 &= 2\\
1037
- x_1, x_2, z_1, z_2, z_3 &\geq 0
1038
- \end{align*}
1039
- \end{center}
1040
- Now we don't have a basic feasible solution, since we would need $z_1 = z_2 = -1, z_3 = 2$, which is not feasible. So we add \emph{more} variables, called the artificial variables.
1041
- \begin{center}
1042
- maximize $-6x_1 - 3x_2$ subject to
1043
- \begin{align*}
1044
- x_1 + x_2 - z_1 + y_1&= 1\\
1045
- 2x_1 - x_2 - z_2 +y_2 &= 1\\
1046
- 3x_2 + z_3 &= 2\\
1047
- x_1, x_2, z_1, z_2, z_3, y_1, y_2 &\geq 0
1048
- \end{align*}
1049
- \end{center}
1050
- Note that adding $y_1$ and $y_2$ might create new solutions, which is bad. We solve this problem by first trying to make $y_1$ and $y_2$ both $0$ and finding a basic feasible solution there. Then we can throw away $y_1$ and $y_2$ and get a basic feasible solution for our original problem. So momentarily, we want to solve
1051
- \begin{center}
1052
- minimize $y_1 + y_2$ subject to
1053
- \begin{align*}
1054
- x_1 + x_2 - z_1 + y_1&= 1\\
1055
- 2x_1 - x_2 - z_2 +y_2&= 1\\
1056
- 3x_2 + z_3 &= 2\\
1057
- x_1, x_2, z_1, z_2, z_3, y_1, y_2 &\geq 0
1058
- \end{align*}
1059
- \end{center}
1060
- By minimizing $y_1$ and $y_2$, we will make them zero.
1061
-
1062
- Our simplex tableau is
1063
- \begin{center}
1064
- \begin{tabular}{cccccccc}
1065
- \toprule
1066
- $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
1067
- \midrule
1068
- 1 & 1 & -1 & 0 & 0 & 1 & 0 & 1\\
1069
- 2 & -1 & 0 & -1 & 0 & 0 & 1 & 1\\
1070
- 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
1071
- \midrule
1072
- -6 & -3 & 0 & 0 & 0 & 0 & 0 & 0\\
1073
- 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0\\
1074
- \bottomrule
1075
- \end{tabular}
1076
- \end{center}
1077
- Note that we keep both our original and ``kill-$y_i$'' objectives, but now we only care about the second one. We will keep track of the original objective so that we can use it in the second phase.
1078
-
1079
- We see an initial feasible solution $y_1 = y_2 = 1, z_3 = 2$. However, this is not a proper simplex tableau, as the basis columns should not have non-zero entries (apart from the identity matrix itself). But we have the two $-1$s at the bottom! So we add the first two rows to the last to obtain
1080
- \begin{center}
1081
- \begin{tabular}{cccccccc}
1082
- \toprule
1083
- $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
1084
- \midrule
1085
- 1 & 1 & -1 & 0 & 0 & 1 & 0 & 1\\
1086
- 2 & -1 & 0 & -1 & 0 & 0 & 1 & 1\\
1087
- 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
1088
- \midrule
1089
- -6 & -3 & 0 & 0 & 0 & 0 & 0 & 0\\
1090
- 3 & 0 & -1 & -1 & 0 & 0 & 0 & 2\\
1091
- \bottomrule
1092
- \end{tabular}
1093
- \end{center}
1094
- Our pivot column is $x_1$, and our pivot row is the second row. We divide it by $2$ and add/subtract suitable multiples of it from the other rows.
1095
- \begin{center}
1096
- \begin{tabular}{cccccccc}
1097
- \toprule
1098
- $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
1099
- \midrule
1100
- 0 & $\frac{3}{2}$ & -1 & $\frac{1}{2}$ & 0 & 1 & $-\frac{1}{2}$ & $\frac{1}{2}$\\
1101
- 1 & $-\frac{1}{2}$ & 0 & $-\frac{1}{2}$ & 0 & 0 & $\frac{1}{2}$ & $\frac{1}{2}$\\
1102
- 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
1103
- \midrule
1104
- 0 & -6 & 0 & -3 & 0 & 0 & 3 & 3\\
1105
- 0 & $\frac{3}{2}$ & $-1$ & $\frac{1}{2}$ & 0 & 0 & $-\frac{3}{2}$ & $\frac{1}{2}$\\
1106
- \bottomrule
1107
- \end{tabular}
1108
- \end{center}
1109
- There are two possible pivot columns. We pick $z_2$ and use the first row as the pivot row.
1110
- \begin{center}
1111
- \begin{tabular}{cccccccc}
1112
- \toprule
1113
- $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
1114
- \midrule
1115
- 0 & 3 & -2 & 1 & 0 & 2 & -1 & 1\\
1116
- 1 & 1 & -1 & 0 & 0 & 1 & 0 & 1\\
1117
- 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
1118
- \midrule
1119
- 0 & 3 & -6 & 0 & 0 & 6 & 0 & 6\\
1120
- 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0\\
1121
- \bottomrule
1122
- \end{tabular}
1123
- \end{center}
1124
- We see that $y_1$ and $y_2$ are no longer in the basis, and hence take value $0$. So we drop all the phase I stuff, and are left with
1125
- \begin{center}
1126
- \begin{tabular}{cccccccc}
1127
- \toprule
1128
- $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$\\
1129
- \midrule
1130
- 0 & 3 & -2 & 1 & 0 & 1\\
1131
- 1 & 1 & -1 & 0 & 0 & 1\\
1132
- 0 & 3 & 0 & 0 & 1 & 2\\
1133
- \midrule
1134
- 0 & 3 & -6 & 0 & 0 & 6\\
1135
- \bottomrule
1136
- \end{tabular}
1137
- \end{center}
1138
- We see a basic feasible solution $z_2 = x_1 = 1, z_3 = 2$.
1139
-
1140
- We pick $x_2$ as the pivot column, and the first row as the pivot row. Then we have
1141
- \begin{center}
1142
- \begin{tabular}{cccccccc}
1143
- \toprule
1144
- $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$\\
1145
- \midrule
1146
- 0 & 1 & $-\frac{2}{3}$ & $\frac{1}{3}$ & 0 & $\frac{1}{3}$\\
1147
- 1 & 0 & $-\frac{1}{3}$ & $-\frac{1}{3}$ & 0 & $\frac{2}{3}$\\
1148
- 0 & 0 & 2 & -1 & 1 & 1\\
1149
- \midrule
1150
- 0 & 0 & -4 & -1 & 0 & 5\\
1151
- \bottomrule
1152
- \end{tabular}
1153
- \end{center}
1154
- Since the entries in the last row are all non-positive, the current solution is optimal. So $x_1 = \frac{2}{3}, x_2 = \frac{1}{3}, z_3 = 1$ is an optimal solution, and our optimal value is $5$.
1155
-
1156
- Note that we previously said that the bottom right entry is the negative of the optimal value, not the optimal value itself! This is correct, since in the tableau, we are maximizing $-6x_1 - 3x_2$, whose maximum value is $-5$. So the minimum value of $6x_1 + 3x_2$ is $5$.
1157
- \end{eg}
1158
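- 
- As a sanity check, the original minimization problem can also be handed to an off-the-shelf LP solver. A small sketch using SciPy (assuming it is available; \texttt{linprog} minimizes by default, and the two $\geq$ constraints are negated into $\leq$ form):
- \begin{verbatim}
- from scipy.optimize import linprog
- 
- # minimize 6 x1 + 3 x2  s.t.  x1 + x2 >= 1, 2 x1 - x2 >= 1, 3 x2 <= 2, x >= 0
- res = linprog(c=[6, 3],
-               A_ub=[[-1, -1], [-2, 1], [0, 3]],
-               b_ub=[-1, -1, 2],
-               bounds=[(0, None), (0, None)])
- print(res.x, res.fun)   # approximately [0.667, 0.333] and 5.0
- \end{verbatim}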
- \section{Non-cooperative games}
1159
- Here we have a short digression to game theory. We mostly focus on games with two players.
1160
-
1161
- \subsection{Games and Solutions}
1162
- \begin{defi}[Bimatrix game]
1163
- A two-player game, or \emph{bimatrix game}, is given by two matrices $P, Q\in \R^{m\times n}$. Player 1, or the \emph{row player}, chooses a row $i\in \{1, \cdots, m\}$, while player 2, the \emph{column player}, chooses a column $j\in \{1, \cdots, n\}$. These are selected without knowledge of the other player's decisions. The two players then get payoffs $P_{ij}$ and $Q_{ij}$ respectively.
1164
- \end{defi}
1165
-
1166
- \begin{eg}
1167
- A game of rock-paper-scissors can have payoff matrices
1168
- \[
1169
- P_{ij} =
1170
- \begin{pmatrix}
1171
- 0 & -1 & 1\\
1172
- 1 & 0 & -1\\
1173
- -1 & 1 & 0
1174
- \end{pmatrix},\quad
1175
- Q_{ij} =
1176
- \begin{pmatrix}
1177
- 0 & 1 & -1\\
1178
- -1 & 0 & 1\\
1179
- 1 & -1 & 0
1180
- \end{pmatrix}.
1181
- \]
1182
- Here a victory gives you a payoff of $1$, a loss gives a payoff of $-1$, and a draw gives a payoff of $0$. The first row/column corresponds to playing rock, the second to paper and the third to scissors.
1183
-
1184
- Usually, this is not the best way to display the payoff matrices. First of all, we need to write out two matrices, and there isn't an easy way to indicate what row corresponds to what decision. Instead, we usually write this as a table.
1185
- \begin{center}
1186
- \begin{tabular}{cccc}
1187
- \toprule
1188
- & R & P & S\\
1189
- \midrule
1190
- R & $(0, 0)$ & $(-1, 1)$ & $(1, -1)$\\
1191
- P & $(1, -1)$ & $(0, 0)$ & $(-1, 1)$\\
1192
- S & $(-1, 1)$ & $(1, -1)$ & $(0, 0)$\\
1193
- \bottomrule
1194
- \end{tabular}
1195
- \end{center}
1196
- By convention, the first item in the tuple $(-1, 1)$ indicates the payoff of the row player, and the second item indicates the payoff of the column player.
1197
- \end{eg}
1198
- \begin{defi}[Strategy]
1199
- Players are allowed to play randomly. The set of \emph{strategies} the row player can have is
1200
- \[
1201
- X = \{x\in \R^m: x \geq 0, \sum x_i = 1\}
1202
- \]
1203
- and the column player has strategies
1204
- \[
1205
- Y = \{y\in \R^n: y \geq 0, \sum y_i = 1\}
1206
- \]
1207
- Each vector corresponds to the probabilities of selecting each row or column.
1208
-
1209
- A strategy profile $(x, y)\in X\times Y$ induces a lottery, and we write $p(x, y) = x^T Py$ for the expected payoff of the row player.
1210
-
1211
- If $x_i = 1$ for some $i$, i.e.\ we always pick $i$, we call $x$ a \emph{pure strategy}.
1212
- \end{defi}
1213
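- 
- As a quick illustration of the expected payoff $p(x, y) = x^T P y$, here is a short Python sketch (the function name is ours), applied to the rock-paper-scissors matrix from the earlier example:
- \begin{verbatim}
- def expected_payoff(P, x, y):
-     # x^T P y: weight each entry of P by x_i * y_j and sum.
-     return sum(x[i] * P[i][j] * y[j]
-                for i in range(len(x)) for j in range(len(y)))
- 
- P = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]                 # rock-paper-scissors
- print(expected_payoff(P, [1/3, 1/3, 1/3], [1, 0, 0]))    # 0.0
- print(expected_payoff(P, [0, 1, 0], [1, 0, 0]))          # 1 (paper beats rock)
- \end{verbatim}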
-
1214
- \begin{eg}[Prisoner's dilemma]
1215
- Suppose Alice and Bob commit a crime together, and are caught by the police. They can choose to remain silent ($S$) or testify ($T$). Different options will lead to different outcomes:
1216
- \begin{itemize}
1217
- \item Both keep silent: the police have little evidence and they go to jail for 2 years.
1218
- \item One testifies and one remains silent: the one who testifies is rewarded and freed, while the other is stuck in jail for 10 years.
1219
- \item Both testify: they both go to jail for 5 years.
1220
- \end{itemize}
1221
-
1222
- We can represent this by a payoff table:
1223
- \begin{center}
1224
- \begin{tabular}{ccc}
1225
- \toprule
1226
- & $S$ & $T$\\
1227
- \midrule
1228
- $S$ & $(2, 2)$ & $(0, 3)$\\
1229
- $T$ & $(3, 0)$ & $(1, 1)$\\
1230
- \bottomrule
1231
- \end{tabular}
1232
- \end{center}
1233
- Note that a higher payoff is desired, so a longer serving time corresponds to a lower payoff. Also, only the relative ordering of the payoffs matters for the discussion below, so replacing $(0, 3)$ with $(0, 100)$ (and $(3, 0)$ with $(100, 0)$) in the payoff table would make no difference.
1234
-
1235
- Here we see that regardless of what the other person does, it is always strictly better to testify than not (unless you want to be nice). We say $T$ is a \emph{dominant strategy}, and $(1, 1)$ is \emph{Pareto dominated} by $(2, 2)$.
1236
- \end{eg}
1237
-
1238
- \begin{eg}[Chicken]
1239
- The game of \emph{Chicken} is as follows: two people drive their cars towards each other at high speed. If they collide, they will die. Hence they can decide to chicken out ($C$) or continue driving ($D$). If neither chickens out, they die, which is bad. If one chickens out and the other doesn't, the one who chickened out looks silly (but doesn't die). If both chicken out, they both look slightly silly. This can be represented by the following table:
1240
- \begin{center}
1241
- \begin{tabular}{ccc}
1242
- \toprule
1243
- & $C$ & $D$ \\
1244
- \midrule
1245
- $C$ & $(2, 2)$ & $(1, 3)$ \\
1246
- $D$ & $(3, 1)$ & $(0, 0)$\\
1247
- \bottomrule
1248
- \end{tabular}
1249
- \end{center}
1250
- Here there is no dominating strategy, so we need a different way of deciding what to do.
1251
-
1252
- Instead, we define the \emph{security level} of the row player to be
1253
- \[
1254
- \max_{x\in X}\min_{y\in Y} p(x, y) = \max_{x\in X}\min_{j\in \{1, \ldots, n\}} \sum_{i = 1}^m x_i p_{ij}.
1255
- \]
1256
- Such an $x$ is the strategy the row player can employ to guarantee the best possible worst-case payoff. This is called the \emph{maximin strategy}.
1257
-
1258
- We can formulate this as a linear program:
1259
- \begin{center}
1260
- maximize $v$ such that
1261
- \begin{align*}
1262
- \sum_{i = 1}^m x_i p_{ij} &\geq v\quad\text{for all }j = 1, \cdots, n\\
1263
- \sum_{i = 1}^m x_i &= 1\\
1264
- x &\geq 0
1265
- \end{align*}
1266
- \end{center}
1267
- Here the maximin strategy is to chicken out. However, this isn't really what we are looking for, since if both players employ this maximin strategy, it would be better for you to not chicken out.
1268
- \end{eg}
1269
-
1270
- \begin{defi}[Best response and equilibrium]
1271
- A strategy $x\in X$ is a \emph{best response} to $y\in Y$ if for all $x'\in X$
1272
- \[
1273
- p(x, y) \geq p(x', y)
1274
- \]
1275
- A pair $(x, y)$ is an \emph{equilibrium} if $x$ is a best response against $y$ and $y$ is a best response against $x$, where best responses for the column player are defined analogously in terms of his own payoff $x^T Q y$.
1276
- \end{defi}
1277
-
1278
- \begin{eg}
1279
- In the chicken game, there are two pure equilibria, $(3, 1)$ and $(1, 3)$, and there is a mixed equilibrium in which the players pick the two options with equal probability.
1280
- \end{eg}
1281
-
1282
- \begin{thm}[Nash, 1951]
1283
- Every bimatrix game has an equilibrium.
1284
- \end{thm}
1285
- We are not proving this since it is too hard.
1286
-
1287
- \subsection{The minimax theorem}
1288
- There is a special type of game known as a \emph{zero sum game}.
1289
- \begin{defi}[Zero-sum game]
1290
- A bimatrix game is a \emph{zero-sum game}, or matrix game, if $q_{ij} = -p_{ij}$ for all $i, j$, i.e.\ the total payoff is always 0.
1291
- \end{defi}
1292
- To specify a matrix game, we only need one matrix, not two, since the matrix of the other player is simply the negative of the matrix of the first.
1293
-
1294
- \begin{eg}
1295
- The rock-paper-scissors games as specified in the beginning example is a zero-sum game.
1296
- \end{eg}
1297
-
1298
- \begin{thm}[von Neumann, 1928]
1299
- Let $P\in \R^{m\times n}$ be the payoff matrix of a matrix game. Then
1300
- \[
1301
- \max_{x\in X}\min_{y\in Y} p(x, y) = \min_{y\in Y}\max_{x\in X} p(x, y).
1302
- \]
1303
- Note that this is equivalent to
1304
- \[
1305
- \max_{x\in X}\min_{y\in Y} p(x, y) = -\max_{y\in Y}\min_{x\in X} -p(x, y).
1306
- \]
1307
- The left hand side is the payoff the row player can guarantee himself by playing his maximin strategy. The right hand side is the amount to which the column player can limit the row player's payoff by playing his own maximin strategy (recall his payoff is the negative of the row player's).
1308
-
1309
- The theorem then says that if both players employ the minimax strategy, then this is an equilibrium.
1310
- \end{thm}
1311
-
1312
- \begin{proof}
1313
- Recall that the optimal value of $\max\min p(x, y)$ is a solution to the linear program
1314
- \begin{center}
1315
- maximize $v$ such that
1316
- \begin{align*}
1317
- \sum_{i = 1}^m x_i p_{ij} &\geq v\quad\text{for all }j = 1, \cdots, n\\
1318
- \sum_{i = 1}^m x_i &= 1\\
1319
- x &\geq 0
1320
- \end{align*}
1321
- \end{center}
1322
- Adding slack variable $z\in \R^n$ with $z \geq 0$, we obtain the Lagrangian
1323
- \[
1324
- L(v, x, z, w, y) = v + \sum_{j = 1}^n y_j\left(\sum_{i = 1}^m x_ip_{ij} - z_j - v\right) - w\left(\sum_{i = 1}^m x_i - 1\right),
1325
- \]
1326
- where $w\in \R$ and $y\in \R^n$ are Lagrange multipliers. This is equal to
1327
- \[
1328
- \left(1 - \sum_{j = 1}^n y_j\right)v + \sum_{i = 1}^m \left(\sum_{j = 1}^n p_{ij}y_j - w\right)x_i- \sum_{j = 1}^n y_j z_j + w.
1329
- \]
1330
- This has a finite maximum over all $v\in \R$, $x \geq 0$ and $z \geq 0$ iff $\sum y_j = 1$, $\sum_j p_{ij}y_j \leq w$ for all $i$, and $y \geq 0$. The dual is therefore
1331
- \begin{center}
1332
- minimize $w$ subject to
1333
- \begin{align*}
1334
- \sum_{j = 1}^n p_{ij}y_j &\leq w\quad\text{ for all }i\\
1335
- \sum_{j = 1}^n {y_j} &= 1\\
1336
- y &\geq 0
1337
- \end{align*}
1338
- \end{center}
1339
- This corresponds to the column player choosing a strategy $(y_i)$ such that the expected payoff is bounded above by $w$.
1340
-
1341
- The optimum value of the dual is $\displaystyle\min_{y\in Y}\max_{x\in X}p(x, y)$. So the result follows from strong duality.
1342
- \end{proof}
1343
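- 
- The linear program appearing in the proof can be handed to a generic LP solver to compute the value and a maximin strategy of a matrix game. A sketch using SciPy (assuming it is available), with decision variables $(x_1, \ldots, x_m, v)$ and $v$ unrestricted in sign; here it is applied to rock-paper-scissors, whose value is $0$:
- \begin{verbatim}
- from scipy.optimize import linprog
- 
- P = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]    # rock-paper-scissors
- m, n = len(P), len(P[0])
- 
- # maximize v  <=>  minimize -v, subject to
- #   v - sum_i x_i p_ij <= 0  for every column j,   sum_i x_i = 1,   x >= 0.
- c = [0] * m + [-1]
- A_ub = [[-P[i][j] for i in range(m)] + [1] for j in range(n)]
- b_ub = [0] * n
- A_eq = [[1] * m + [0]]
- b_eq = [1]
- bounds = [(0, None)] * m + [(None, None)]
- 
- res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
- print(res.x[:m], -res.fun)   # roughly [1/3, 1/3, 1/3] and value 0.0
- \end{verbatim}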
-
1344
- \begin{defi}[Value]
1345
- The \emph{value} of the matrix game with payoff matrix $P$ is
1346
- \[
1347
- v = \max_{x\in X}\min_{y\in Y} p(x, y) = \min_{y\in Y}\max_{x\in X} p(x, y).
1348
- \]
1349
- \end{defi}
1350
- In general, the equilibria are characterized by the following theorem:
1351
- \begin{thm}
1352
- $(x, y)\in X\times Y$ is an equilibrium of the matrix game with payoff matrix $P$ if and only if
1353
- \begin{align*}
1354
- \min_{y'\in Y} p(x, y') &= \max_{x' \in X}\min_{y'\in Y} p(x', y')\\
1355
- \max_{x'\in X} p(x', y) &= \min_{y' \in Y}\max_{x'\in X} p(x', y')
1356
- \end{align*}
1357
- i.e.\ the $x, y$ are optimizers for the $\max\min$ and $\min\max$ functions.
1358
- \end{thm}
1359
-
1360
- Proof is in the second example sheet.
1361
-
1362
- \section{Network problems}
1363
- \subsection{Definitions}
1364
- We are going to look into several problems that involve graphs. Unsurprisingly, we will need some definitions from graph theory.
1365
- \begin{defi}[Directed graph/network]
1366
- A \emph{directed graph} or \emph{network} is a pair $G = (V, E)$, where $V$ is the set of vertices and $E\subseteq V\times V$ is the set of edges. If $(u, v)\in E$, we say there is an edge from $u$ to $v$.
1367
- \end{defi}
1368
-
1369
- \begin{defi}[Degree]
1370
- The degree of a vertex $u \in V$ is the number of $v\in V$ such that $(u, v)\in E$ or $(v, u)\in E$.
1371
- \end{defi}
1372
-
1373
- \begin{defi}[Walk]
1374
- A \emph{walk} from $u\in V$ to $v\in V$ is a sequence of vertices $u = v_1, \cdots, v_k = v$ such that $(v_i, v_{i + 1})\in E$ for all $i$. An \emph{undirected walk} allows $(v_i, v_{i + 1})\in E$ or $(v_{i + 1}, v_i)\in E$, i.e.\ we are allowed to walk backwards.
1375
- \end{defi}
1376
-
1377
- \begin{defi}[Path]
1378
- A path is a walk where $v_1, \cdots, v_k$ are pairwise distinct.
1379
- \end{defi}
1380
-
1381
- \begin{defi}[Cycle]
1382
- A cycle is a walk where $v_1, \cdots, v_{k - 1}$ are pairwise distinct and $v_1 = v_k$.
1383
- \end{defi}
1384
-
1385
- \begin{defi}[Connected graph]
1386
- A graph is \emph{connected} if for any pair of vertices, there is an undirected path between them.
1387
- \end{defi}
1388
-
1389
- \begin{defi}[Tree]
1390
- A \emph{tree} is a connected graph without (undirected) cycles.
1391
- \end{defi}
1392
-
1393
- \begin{defi}[Spanning tree]
1394
- A \emph{spanning tree} of a graph $G = (V, E)$ is a tree $(V', E')$ with $V' = V$ and $E'\subseteq E$.
1395
- \end{defi}
1396
-
1397
- \subsection{Minimum-cost flow problem}
1398
- Let $G = (V, E)$ be a directed graph. Let the number of vertices be $|V| = n$ and let $b\in \R^n$. For each edge, we assign three numbers: a cost, a lower bound and an upper bound. We collect these into matrices $C, \underline{M}, \overline{M}\in \R^{n\times n}$.
1399
-
1400
- Each component $b_i$ of the vector $b$ denotes the amount of flow entering or leaving the network at vertex $i\in V$. If $b_i > 0$, we call $i\in V$ a source. For example, if we have a factory at vertex $i$ that produces stuff, $b_i$ will be positive. This is only the amount of stuff produced or consumed at the vertex, and not how much flows through the vertex.
1401
-
1402
- $c_{ij}$ is the cost of transferring one unit of stuff from vertex $i$ to vertex $j$ (fill entries with $0$ if there is no edge between the vertices), and $\underline{m}_{ij}$ and $\overline{m}_{ij}$ denote the lower and upper bounds on the amount of flow along $(i, j)\in E$ respectively.
1403
-
1404
- $x\in \R^{n\times n}$ is a minimum-cost flow if it minimizes the cost of transferring stuff, while satisfying the constraints, i.e.\ it is an optimal solution to the problem
1405
- \begin{center}
1406
- minimize $\displaystyle\sum_{(i, j)\in E}c_{ij}x_{ij}$ subject to
1407
- \[
1408
- b_i + \sum_{j: (j, i)\in E} x_{ji} = \sum_{j: (i, j)\in E}x_{ij}\quad\text{ for each }i\in V
1409
- \]
1410
- \[
1411
- \underline{m}_{ij} \leq x_{ij}\leq \overline{m}_{ij}\quad\text{ for all }(i, j) \in E.
1412
- \]
1413
- \end{center}
1414
- This problem is a linear program. In theory, we can write it into the general form $Ax = b$, where $A$ is a huge matrix given by
1415
- \[
1416
- a_{ik} =
1417
- \begin{cases}
1418
- 1 & \text{if the $k$th edge starts at vertex }i\\
1419
- -1 & \text{if the $k$th edge ends at vertex }i\\
1420
- 0 & \text{otherwise}
1421
- \end{cases}
1422
- \]
1423
- However, using this huge matrix to solve this problem by the simplex method is not very efficient. So we will look for better solutions.
1424
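- 
- For concreteness, this vertex-edge incidence matrix (rows indexed by vertices, columns by edges) can be assembled as in the following Python sketch; the three-vertex graph at the end is a made-up example.
- \begin{verbatim}
- def incidence_matrix(n, edges):
-     # A[i][k] = +1 if edge k starts at vertex i, -1 if it ends there, else 0.
-     A = [[0] * len(edges) for _ in range(n)]
-     for k, (u, v) in enumerate(edges):
-         A[u][k] = 1
-         A[v][k] = -1
-     return A
- 
- # vertices 0, 1, 2 and edges (0,1), (1,2), (0,2)
- print(incidence_matrix(3, [(0, 1), (1, 2), (0, 2)]))
- # [[1, 0, 1], [-1, 1, 0], [0, -1, -1]]
- \end{verbatim}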
-
1425
- Note that for the system to make sense, we must have
1426
- \[
1427
- \sum_{i\in V}b_i = 0,
1428
- \]
1429
- i.e.\ the total supply is equal to the total consumption.
1430
-
1431
- To simplify the problem, we can convert it into an equivalent \emph{circulation problem}, where $b_i = 0$ for all $i$. We do this by adding an additional vertex where we send all the extra $b_i$ to. For example, if a vertex has $b_i = -1$, then it takes in more stuff than it gives out. So we can mandate it to send out one extra unit to the additional vertex. Then $b_i = 0$.
1432
-
1433
- An \emph{uncapacitated problem} is the case where $\underline{m}_{ij} = 0$ and $\overline{m}_{ij} = \infty$ for all $(i, j) \in E$. An uncapacitated problem is either unbounded or bounded. If it is bounded, then it is equivalent to a problem with finite capacities, since we can add a bound greater than what the optimal solution wants.
1434
-
1435
- We are going to show that this can be reduced to a simpler problem:
1436
-
1437
- \subsection{The transportation problem}
1438
- The transportation problem is a special case of the minimum-flow problem, where the graph is a \emph{bipartite graph}. In other words, we can split the vertices into two halves $A, B$, where all edges flow from a vertex in $A$ to a vertex in $B$. We call the vertices of $A$ the \emph{suppliers} and the vertices of $B$ the \emph{consumers}.
1439
-
1440
- In this case, we can write the problem as
1441
- \begin{center}
1442
- minimize $\displaystyle\sum_{i = 1}^n\sum_{j = 1}^m c_{ij}x_{ij}$ subject to
1443
- \[
1444
- \sum_{j = 1}^m x_{ij} = s_i\text{ for }i = 1, \cdots, n
1445
- \]
1446
- \[
1447
- \sum_{i = 1}^n x_{ij} = d_j\text{ for }j = 1, \cdots, m
1448
- \]
1449
- \[
1450
- x\geq 0
1451
- \]
1452
- \end{center}
1453
- Here $s_i$ is the supply of each supplier, and $d_j$ is the demand of each consumer. We have $s\in \R^n, d\in \R^m$ satisfying $s, d\geq 0$, $\sum s_i = \sum d_j$.
1454
-
1455
- Finally, we have $c\in \R^{n\times m}$ representing the cost of transferal.
1456
-
1457
- We now show that every (bounded) minimum cost-flow problem can be reduced to the transportation problem.
1458
-
1459
- \begin{thm}
1460
- Every minimum cost-flow problem with finite capacities or non-negative costs has an equivalent transportation problem.
1461
- \end{thm}
1462
-
1463
- \begin{proof}
1464
- Consider a minimum-cost flow problem on network $(V, E)$. It is wlog to assume that $\underline{m}_{ij} = 0$ for all $(i, j) \in E$. Otherwise, set $\underline{m}_{ij}$ to $0$, $\overline{m}_{ij}$ to $\overline{m}_{ij} - \underline{m}_{ij}$, $b_i$ to $b_i - \underline{m}_{ij}$, $b_j$ to $b_j + \underline{m}_{ij}$, $x_{ij}$ to $x_{ij} - \underline{m}_{ij}$. Intuitively, we just secretly ship the minimum amount without letting the network know.
1465
-
1466
- Moreover, we can assume that all capacities are finite: if some edge has infinite capacity but non-negative cost, then setting the capacity to a large enough number, for example $\sum_{i \in V}|b_i|$ does not affect the optimal solutions. This is since cost is non-negative, and the optimal solution will not want shipping loops. So we will have at most $\sum |b_i|$ shipments.
1467
-
1468
- We will construct an instance of the transportation problem as follows:
1469
-
1470
- For every $i\in V$, add a consumer with demand $\left(\sum_{k: (i, k)\in E}\overline{m}_{ik}\right) - b_i$.
1471
-
1472
- For every $(i, j)\in E$, add a supplier with supply $\overline{m}_{ij}$, an edge to consumer $i$ with cost $c_{(ij, i)} = 0$ and an edge to consumer $j$ with cost $c_{(ij, j)} = c_{ij}$.
1473
- \begin{center}
1474
- \begin{tikzpicture}
1475
- \node [draw, circle, inner sep=0, minimum width=0.6cm] (ij) at (0, 0) {$ij$};
1476
- \node [draw, circle, inner sep=0, minimum width=0.6cm] (i) at (2, 1) {$i$};
1477
- \node [draw, circle, inner sep=0, minimum width=0.6cm] (j) at (2, -1) {$j$};
1478
- \draw [->] (ij) -- (i) node [pos=0.5, above] {$0$};
1479
- \draw [->] (ij) -- (j) node [pos=0.5, below] {$c_{ij}$};
1480
- \draw [->] (-1.5, 0) node [left] {$\overline{m}_{ij}$} -- (ij);
1481
- \draw [->] (i) -- +(1.5, 0) node [right] {$\sum_{k: (i, k)\in E}\overline{m}_{ik} - b_i$};
1482
- \draw [->] (j) -- +(1.5, 0) node [right] {$\sum_{k: (j, k)\in E}\overline{m}_{jk} - b_j$};
1483
- \end{tikzpicture}
1484
- \end{center}
1485
- The idea is that if the capacity of the edge $(i, j)$ is, say, 5, in the original network, and we want to transport $3$ along this edge, then in the new network, we send $3$ units from $ij$ to $j$, and $2$ units to $i$.
1486
-
1487
- The tricky part of the proof is to show that we have the same constraints in both graphs.
1488
-
1489
- For any flow $x$ in the original network, the corresponding flow on $(ij, j)$ is $x_{ij}$ and the flow on $(ij, i)$ is $\overline{m}_{ij} - x_{ij}$. The total flow into $i$ is then
1490
- \[
1491
- \sum_{k: (i, k)\in E}(\overline{m}_{ik} - x_{ik}) + \sum_{k: (k, i)\in E}x_{ki}
1492
- \]
1493
- This satisfies the constraints of the new network if and only if
1494
- \[
1495
- \sum_{k: (i, k)\in E}(\overline{m}_{ik} - x_{ik}) + \sum_{k: (k, i)\in E}x_{ki} = \sum_{k: (i, k)\in E}\overline{m}_{ik} - b_i,
1496
- \]
1497
- which is true if and only if
1498
- \[
1499
- b_i + \sum_{k:(k, i)\in E}x_{ki} - \sum_{k: (i, k)\in E}x_{ik} = 0,
1500
- \]
1501
- which is exactly the constraint for the node $i$ in the original minimal-cost flow problem. So done.
1502
- \end{proof}
1503
- To solve the transportation problem, it is convenient to have two sets of Lagrange multipliers, one for the supplier constraints and one for the consumer constraint. Then the Lagrangian of the transportation problem can be written as
1504
- \[
1505
- L(x, \lambda, \mu) = \sum_{i = 1}^n\sum_{j = 1}^m c_{ij}x_{ij} + \sum_{i = 1}^n \lambda_i\left(s_i - \sum_{j = 1}^m x_{ij}\right) - \sum_{j = 1}^m \mu_j\left(d_j - \sum_{i = 1}^n x_{ij}\right).
1506
- \]
1507
- Note that we use different signs for the Lagrange multipliers for the suppliers and the consumers, so that our ultimate optimality condition will look nicer.
1508
-
1509
- This is equivalent to
1510
- \[
1511
- L(x, \lambda, \mu) = \sum_{i = 1}^n \sum_{j = 1}^m (c_{ij} - \lambda_i + \mu_j)x_{ij} + \sum_{i = 1}^n \lambda_i s_i - \sum_{j = 1}^m \mu_j d_j.
1512
- \]
1513
- Since $x \geq 0$, the Lagrangian has a finite minimum iff $c_{ij} - \lambda_i + \mu_j \geq 0$ for all $i, j$. So this is our dual feasibility condition.
1514
-
1515
- At an optimum, complementary slackness entails that
1516
- \[
1517
- (c_{ij} - \lambda_i + \mu_j)x_{ij} = 0
1518
- \]
1519
- for all $i, j$.
1520
-
1521
- In this case, we have a tableau as follows:
1522
- \newcommand\bb[1]{\multicolumn{1}{|c|}{#1}}
1523
- \newcommand\bbb[1]{\multicolumn{2}{c|}{#1}}
1524
- \newcommand\bbbb[1]{\multicolumn{2}{c}{#1}}
1525
- \begin{center}
1526
- \begin{tabular}{c|cc|cc|cc|cc|c}
1527
- \multicolumn{1}{c}{ } & \bbbb{$\mu_1$} & \bbbb{$\mu_2$} & \bbbb{$\mu_3$} & \bbbb{$\mu_4$}\\\cline{2-9}
1528
- & \bbb{$\lambda_1 - \mu_1$} & \bbb{$\lambda_1 - \mu_2$} & \bbb{$\lambda_1 - \mu_3$} & \bbb{$\lambda_1 - \mu_4$}\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1529
- $\lambda_1$ & $x_{11}$ & \bb{$c_{11}$} & $x_{12}$ & \bb{$c_{12}$} & $x_{13}$ & \bb{$c_{13}$} & $x_{14}$ & \bb{$c_{14}$} & $s_1$\\\cline{2-9}
1530
- & \bbb{$\lambda_2 - \mu_1$} & \bbb{$\lambda_2 - \mu_2$} & \bbb{$\lambda_2 - \mu_3$} & \bbb{$\lambda_2 - \mu_4$}\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1531
- $\lambda_2$ & $x_{21}$ & \bb{$c_{21}$} & $x_{22}$ & \bb{$c_{22}$} & $x_{23}$ & \bb{$c_{23}$} & $x_{24}$ & \bb{$c_{24}$} & $s_2$\\\cline{2-9}
1532
- & \bbb{$\lambda_3 - \mu_1$} & \bbb{$\lambda_3 - \mu_2$} & \bbb{$\lambda_3 - \mu_3$} & \bbb{$\lambda_3 - \mu_4$}\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1533
- $\lambda_3$ & $x_{31}$ & \bb{$c_{31}$} & $x_{32}$ & \bb{$c_{32}$} & $x_{33}$ & \bb{$c_{33}$} & $x_{34}$ & \bb{$c_{34}$} & $s_3$\\\cline{2-9}
1534
- \multicolumn{1}{c}{ }& \bbbb{$d_1$} & \bbbb{$d_2$} & \bbbb{$d_3$} & \bbbb{$d_4$}\\
1535
- \end{tabular}
1536
- \end{center}
1537
- We have a row for each supplier and a column for each consumer.
1538
- \begin{eg}
1539
- Suppose we have three suppliers with supplies $8, 10$ and $9$; and four consumers with demands $6, 5, 8, 8$.
1540
-
1541
- It is easy to create an initial feasible solution - we just start from the first consumer and first supplier, and supply as much as we can until one side runs out of stuff.
1542
-
1543
- We first fill our tableau with our feasible solution.
1544
- \begin{center}
1545
- \begin{tabular}{c|cc|cc|cc|cc|c}
1546
- \cline{2-9}
1547
- & & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1548
- & 6 & \bb{5} & 2 & \bb{3} & & \bb{4} & & \bb{6} & 8\\\cline{2-9}
1549
- & & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1550
- & & \bb{2} & 3 & \bb{7} & 7 & \bb{4} & & \bb{1} & 10\\\cline{2-9}
1551
- & & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1552
- & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4} & 9\\\cline{2-9}
1553
- \multicolumn{1}{c}{ }& \bbbb{6} & \bbbb{5} &\bbbb{8} & \bbbb{8} &
1554
- \end{tabular}
1555
- \end{center}
1556
- \begin{center}
1557
- \begin{tikzpicture}
1558
- \node (s1) at (0, 0) [circ] {};
1559
- \node at (0, 0) [left] {$8 = s_1$};
1560
- \node (s2) at (0, -1) [circ] {};
1561
- \node at (0, -1) [left] {$10 = s_2$};
1562
- \node (s3) at (0, -2) [circ] {};
1563
- \node at (0, -2) [left] {$9 = s_3$};
1564
-
1565
- \node (d1) at (2, 0) [circ] {};
1566
- \node at (2, 0) [right] {$d_1 = 6$};
1567
- \node (d2) at (2, -1) [circ] {};
1568
- \node at (2, -1) [right] {$d_2 = 5$};
1569
- \node (d3) at (2, -2) [circ] {};
1570
- \node at (2, -2) [right] {$d_3 = 8$};
1571
- \node (d4) at (2, -3) [circ] {};
1572
- \node at (2, -3) [right] {$d_4 = 8$};
1573
-
1574
- \draw [->] (s1) -- (d1) node [pos=0.5, above] {\tiny 6};
1575
- \draw [->] (s1) -- (d2) node [pos=0.5, above] {\tiny 2};
1576
- \draw [->] (s2) -- (d2) node [pos=0.5, above] {\tiny 3};
1577
- \draw [->] (s2) -- (d3) node [pos=0.5, above] {\tiny 7};
1578
- \draw [->] (s3) -- (d3) node [pos=0.5, above] {\tiny 1};
1579
- \draw [->] (s3) -- (d4) node [pos=0.5, above] {\tiny 8};
1580
- \end{tikzpicture}
1581
- \end{center}
1582
- We see that our basic feasible solution corresponds to a spanning tree. In general, if we have $n$ suppliers and $m$ consumers, then we have $n + m$ vertices, and hence $n + m - 1$ edges. So we have $n + m - 1$ dual constraints. So we can arbitrarily choose one Lagrange multiplier, and the other Lagrange multipliers will follow. We choose $\lambda_1 = 0$. Since we require
1583
- \[
1584
- (c_{ij} - \lambda_i + \mu_j)x_{ij} = 0,
1585
- \]
1586
- for edges in the spanning tree, $x_{ij} \not= 0$. So $c_{ij} - \lambda_i + \mu_j = 0$. Hence we must have $\mu_1 = -5$. We can fill in the values of the other Lagrange multipliers as follows, and obtain
1587
- \begin{center}
1588
- \begin{tabular}{c|cc|cc|cc|cc|}
1589
- \multicolumn{1}{c}{ }& \bbbb{-5} & \bbbb{-3} & \bbbb{0} & \bbbb{-2}\\\cline{2-9}
1590
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1591
- 0 & 6 & \bb{5} & 2 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1592
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1593
- 4 & & \bb{2} & 3 & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
1594
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1595
- 2 & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
1596
- \end{tabular}
1597
- \end{center}
1598
- We can fill in the values of $\lambda_i - \mu_j$:
1599
- \begin{center}
1600
- \begin{tabular}{c|cc|cc|cc|cc|}
1601
- \multicolumn{1}{c}{ }& \bbbb{-5} & \bbbb{-3} & \bbbb{0} & \bbbb{-2}\\\cline{2-9}
1602
- & & & & & \bbb{0} & \bbb{2} \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1603
- 0 & 6 & \bb{5} & 2 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1604
- & \bbb9 & & & & & \bbb{6} \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1605
- 4 & & \bb{2} & 3 & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
1606
- & \bbb{7} & \bbb{5} & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1607
- 2 & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
1608
- \end{tabular}
1609
- \end{center}
1610
- The dual feasibility condition is
1611
- \[
1612
- \lambda_i - \mu_i \leq c_{ij}
1613
- \]
1614
- If it is satisfied everywhere, we have optimality. Otherwise, we will have to do something.
1615
-
1616
- What we do is we add an edge, say from the second supplier to the first consumer. Then we have created a cycle. We keep increasing the flow on the new edge. This causes the values on other edges to change by flow conservation. So we keep doing this until some other edge reaches zero.
1617
-
1618
- If we increase flow by, say, $\delta$, we have
1619
- \begin{center}
1620
- \begin{tabular}{c|cc|cc|cc|cc|}
1621
- \cline{2-9}
1622
- & & & & &&& & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1623
- & $6 - \delta$ & \bb{5} & $2 + \delta$ & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1624
- & & & & & && & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1625
- & $\delta$ & \bb{2} & $3 - \delta$ & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
1626
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1627
- & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
1628
- \end{tabular}
1629
- \end{center}
1630
- \begin{center}
1631
- \begin{tikzpicture}
1632
- \node (s1) at (0, 0) [circ] {};
1633
- \node at (0, 0) [left] {$8 = s_1$};
1634
- \node (s2) at (0, -1) [circ] {};
1635
- \node at (0, -1) [left] {$10 = s_2$};
1636
- \node (s3) at (0, -2) [circ] {};
1637
- \node at (0, -2) [left] {$9 = s_3$};
1638
-
1639
- \node (d1) at (2, 0) [circ] {};
1640
- \node at (2, 0) [right] {$d_1 = 6$};
1641
- \node (d2) at (2, -1) [circ] {};
1642
- \node at (2, -1) [right] {$d_2 = 5$};
1643
- \node (d3) at (2, -2) [circ] {};
1644
- \node at (2, -2) [right] {$d_3 = 8$};
1645
- \node (d4) at (2, -3) [circ] {};
1646
- \node at (2, -3) [right] {$d_4 = 8$};
1647
-
1648
- \draw [->] (s1) -- (d1) node [pos=0.5, above] {\tiny $6 - \delta$};
1649
- \draw [->] (s1) -- (d2) node [pos=0.5, above] {\tiny $2 + \delta$};
1650
- \draw [->] (s2) -- (d2) node [pos=0.5, above] {\tiny $3 - \delta$};
1651
- \draw [->] (s2) -- (d3) node [pos=0.5, above] {\tiny 7};
1652
- \draw [->] (s3) -- (d3) node [pos=0.5, above] {\tiny 1};
1653
- \draw [->] (s3) -- (d4) node [pos=0.5, above] {\tiny 8};
1654
- \draw [mred, dashed, ->] (s2) -- (d1) node [pos=0.3, above] {\tiny $\delta$};
1655
- \end{tikzpicture}
1656
- \end{center}
1657
- The maximum value of $\delta$ we can take is $3$. So we end up with
1658
- \begin{center}
1659
- \begin{tabular}{c|cc|cc|cc|cc|}
1660
- \cline{2-9}
1661
- & & & & & & && \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1662
- & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1663
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1664
- & 3 & \bb{2} & & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
1665
- & & & & & & & &\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1666
- & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
1667
- \end{tabular}
1668
- \end{center}
1669
- We re-compute the Lagrange multipliers to obtain
1670
- \begin{center}
1671
- \begin{tabular}{c|cc|cc|cc|cc|}
1672
- \multicolumn{1}{c}{ }& \bbbb{-5} & \bbbb{-3} & \bbbb{-7} & \bbbb{-9}\\\cline{2-9}
1673
- & & & & & \bbb7 & \bbb9 \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1674
- 0 & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1675
- & & & \bbb0 & & & \bbb6 \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1676
- -3 & 3 & \bb{2} & & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
1677
- & \bbb0 & \bbb{-2} & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1678
- -5 & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
1679
- \end{tabular}
1680
- \end{center}
1681
- We see a violation at the bottom right. So we do it again:
1682
- \begin{center}
1683
- \begin{tabular}{c|cc|cc|cc|cc|}
1684
- \cline{2-9}
1685
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1686
- & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1687
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1688
- & 3 & \bb{2} & & \bb{7} & $7 - \delta$ & \bb{4} & $\delta$ & \bb{1}\\\cline{2-9}
1689
- & & & && & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1690
- & & \bb{5} & & \bb{6} & $1 + \delta$ & \bb{2} & $8 - \delta$ & \bb{4}\\\cline{2-9}
1691
- \end{tabular}
1692
- \end{center}
1693
- The maximum possible value of $\delta$ is 7. So we have
1694
- \begin{center}
1695
- \begin{tabular}{c|cc|cc|cc|cc|}
1696
- \cline{2-9}
1697
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1698
- & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1699
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1700
- & 3 & \bb{2} & & \bb{7} & & \bb{4} & 7 & \bb{1}\\\cline{2-9}
1701
- & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1702
- & & \bb{5} & & \bb{6} & 8 & \bb{2} & 1 & \bb{4}\\\cline{2-9}
1703
- \end{tabular}
1704
- \end{center}
1705
- Calculating the Lagrange multipliers gives
1706
- \begin{center}
1707
- \begin{tabular}{c|cc|cc|cc|cc|}
1708
- \multicolumn{1}{c}{}& \bbbb{-5} & \bbbb{-3} & \bbbb{-2} & \bbbb{-4} \\\cline{2-9}
1709
- & & & & & \bbb2 & \bbb4 \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1710
- 0 & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
1711
- & & & \bbb0 & \bbb{-1} & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1712
- -3 & 3 & \bb{2} & & \bb{7} & & \bb{4} & 7 & \bb{1}\\\cline{2-9}
1713
- & \bbb{5} & \bbb{3} & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
1714
- 0 & & \bb{5} & & \bb{6} & 8 & \bb{2} & 1 & \bb{4}\\\cline{2-9}
1715
- \end{tabular}
1716
- \end{center}
1717
- No more violations. Finally. So this is the optimal solution.
1718
- \end{eg}
1719
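- 
- The initial solution used at the start of this example (serve the suppliers and consumers in order, shipping as much as possible each time) is often called the north-west corner rule. A small Python sketch, which reproduces the first tableau above:
- \begin{verbatim}
- def initial_solution(supply, demand):
-     # Greedily ship min(remaining supply, remaining demand), moving on to the
-     # next supplier or consumer whenever one of them is exhausted.
-     s, d = list(supply), list(demand)
-     x = [[0] * len(d) for _ in s]
-     i = j = 0
-     while i < len(s) and j < len(d):
-         q = min(s[i], d[j])
-         x[i][j] = q
-         s[i] -= q
-         d[j] -= q
-         if s[i] == 0:
-             i += 1
-         else:
-             j += 1
-     return x
- 
- print(initial_solution([8, 10, 9], [6, 5, 8, 8]))
- # [[6, 2, 0, 0], [0, 3, 7, 0], [0, 0, 1, 8]]
- \end{verbatim}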
-
1720
- \subsection{The maximum flow problem}
1721
- Suppose we have a network $(V, E)$ with a single source $1$ and a single sink $n$. There are no transportation costs, but each edge has a capacity. We want to transport as much stuff from $1$ to $n$ as possible.
1722
-
1723
- We can turn this into a minimum-cost flow problem. We add an edge from $n$ to $1$ with cost $-1$ and infinite capacity. Then the minimum-cost flow will send as much flow as possible from $n$ to $1$ along this new edge. So the same amount of stuff will have to flow from $1$ to $n$ through the network.
1724
-
1725
- We can write this problem as
1726
- \begin{center}
1727
- maximize $\delta$ subject to
1728
- \[
1729
- \sum_{j: (i, j) \in E}x_{ij} - \sum_{j: (j, i)\in E}x_{ji} =
1730
- \begin{cases}
1731
- \delta & i = 1\\
1732
- -\delta & i = n\\
1733
- 0&\text{otherwise}
1734
- \end{cases}\quad\text{ for each }i
1735
- \]
1736
- \[
1737
- 0 \leq x_{ij}\leq C_{ij}\quad \text{ for each }(i, j)\in E.
1738
- \]
1739
- \end{center}
1740
- Here $\delta$ is the total flow from $1$ to $n$.
1741
-
1742
- While we can apply our results from the minimum-cost flow problem, we don't have to do so. There are easier ways to solve the problem, using the \emph{max-flow min-cut theorem}.
1743
-
1744
- First we need to define a cut.
1745
- \begin{defi}[Cut]
1746
- Suppose $G = (V, E)$ with capacities $C_{ij}$ for $(i, j)\in E$. A \emph{cut} of $G$ is a partition of $V$ into two sets.
1747
-
1748
- For $S\subseteq V$, the \emph{capacity} of the cut $(S, V\setminus S)$ is
1749
- \[
1750
- C(S) = \sum_{(i, j)\in (S\times (V\setminus S))\cap E}C_{ij},
1751
- \]
1752
- All this clumsy notation says is that we add up the capacities of all edges from $S$ to $V\setminus S$.
1753
- \end{defi}
1754
-
1755
- Assume $x$ is a feasible flow vector that sends $\delta$ units from $1$ to $n$. For $X, Y\subseteq V$, we define
1756
- \[
1757
- f_x(X, Y) = \sum_{(i, j)\in (X\times Y)\cap E}x_{ij},
1758
- \]
1759
- i.e.\ the overall amount of flow from $X$ to $Y$.
1760
-
1761
- For any solution $x_{ij}$ and cut $S\subseteq V$ with $1\in S, n\in V\setminus S$, the total flow from $1$ to $n$ can be written as
1762
- \[
1763
- \delta = \sum_{i\in S}\left(\sum_{j: (i, j)\in E}x_{ij} - \sum_{j: (j, i)\in E}x_{ji}\right).
1764
- \]
1765
- This is true since by flow conservation, for any $i \not= 1, n$, we have $\sum\limits_{j: (i, j) \in E}x_{ij} - \sum\limits_{j: (j, i)\in E}x_{ji} = 0$, while for $i = 1$ it is $\delta$; since $n\not\in S$, the sum over $i\in S$ is $\delta$. Hence
1766
- \begin{align*}
1767
- \delta &= f_x(S, V) - f_x(V, S)\\
1768
- &= f_x(S, S) + f_x(S, V\setminus S) - f_x(V\setminus S, S) - f_x(S, S)\\
1769
- &= f_x(S, V\setminus S) - f_x(V\setminus S, S)\\
1770
- &\leq f_x(S, V\setminus S)\\
1771
- &\leq C(S)
1772
- \end{align*}
1773
- This says that the flow through the cut is less than the capacity of the cut, which is obviously true. The less obvious result is that this bound is tight, i.e.\ there is always a cut $S$ such that $\delta = C(S)$.
1774
-
1775
- \begin{thm}[Max-flow min-cut theorem]
1776
- Let $\delta$ be an optimal solution. Then
1777
- \[
1778
- \delta = \min\{C(S): S\subseteq V, 1\in S, n \in V\setminus S\}
1779
- \]
1780
- \end{thm}
1781
-
1782
- \begin{proof}
1783
- Consider any feasible flow vector $x$. Call a path $v_0, \cdots, v_k$ an \emph{augmenting path} if the flow along the path can be increased. Formally, it is a path that satisfies
1784
- \[
1785
- x_{v_{i - 1}v_i} < C_{v_{i - 1}v_i}\text{ or }x_{v_iv_{i - 1}} > 0
1786
- \]
1787
- for $i = 1,\cdots, k$. The first condition says that we have a forward edge where we have not hit the capacity, while the second condition says that we have a backwards edge with positive flow. If these conditions are satisfied, we can increase the flow of each edge (or decrease the backwards flow for backwards edge), and the total flow increases.
1788
-
1789
- Now assume that $x$ is optimal and let
1790
- \[
1791
- S = \{1\}\cup \{i\in V: \text{ there exists an augmenting path from $1$ to $i$}\}.
1792
- \]
1793
- If $n$ were in $S$, there would be an augmenting path from $1$ to $n$, so we could increase the total flow, contradicting the optimality of $x$. So $n\in V\setminus S$.
1794
-
1795
- We have previously shown that
1796
- \[
1797
- \delta = f_x(S, V\setminus S) - f_x(V\setminus S, S).
1798
- \]
1799
- We now claim that $f_x(V\setminus S, S) = 0$. If it is not $0$, it means that there is a node $v\in V\setminus S$ such that there is flow from $v$ to a vertex $u\in S$. Then we can extend the augmenting path to $u$ by this backwards edge to obtain an augmenting path to $v$, contradicting $v\in V\setminus S$.
1800
-
1801
- Also, we must have $f_x(S, V\setminus S) = C(S)$, for otherwise some edge from $S$ to $V\setminus S$ is below capacity, which again gives an augmenting path to a vertex outside $S$, a contradiction. So we have
1802
- \[
1803
- \delta = C(S).\qedhere
1804
- \]
1805
- \end{proof}
1806
-
1807
- The max-flow min-cut theorem does not tell us \emph{how} to find an optimal flow. Instead, it provides a quick way to confirm that a flow we have found is optimal.
1808
-
1809
- It turns out that it isn't difficult to find an optimal solution. We simply keep adding flow along augmenting paths until we cannot do so. This is known as the \emph{Ford-Fulkerson algorithm}.
1810
-
1811
- \begin{enumerate}
1812
- \item Start from a feasible flow $x$, e.g.\ $x = \mathbf{0}$.
1813
- \item If there is no augmenting path for $x$ from $1$ to $n$, then $x$ is optimal.
1814
- \item Find an augmenting path for $x$ from $1$ to $n$, and send a maximum amount of flow along it.
1815
- \item \texttt{GOTO} (ii).
1816
- \end{enumerate}
1817
-
1818
- \begin{eg}
1819
- Consider the diagram
1820
- \begin{center}
1821
- \begin{tikzpicture}[xscale=2]
1822
- \node at (0, 0) [circ] {-};
1823
- \node at (3, 0) [circ] {-};
1824
- \node at (0, 0) [above] {1};
1825
- \node at (3, 0) [above] {$n$};
1826
- \draw [->] (0, 0) -- (1, 1) node [pos = 0.5, above] {\small 5};
1827
- \draw [->] (1, 1) -- (2, 1) node [pos = 0.5, above] {\small 1};
1828
- \draw [->] (2, 1) -- (3, 0) node [pos = 0.5, above] {\small 1};
1829
- \draw [->] (0, 0) -- (1, -1) node [pos = 0.5, above] {\small 5};
1830
- \draw [->] (1, -1) -- (2, -1) node [pos = 0.5, above] {\small 2};
1831
- \draw [->] (2, -1) -- (3, 0) node [pos = 0.5, above] {\small 5};
1832
- \draw [->] (1, 1) -- (2, -1) node [pos = 0.5, above] {\small 4};
1833
-
1834
- \node at (1, 1) [circ] {};
1835
- \node at (2, 1) [circ] {};
1836
- \node at (1, -1) [circ] {};
1837
- \node at (2, -1) [circ] {};
1838
- \end{tikzpicture}
1839
- \end{center}
1840
- We can keep adding flow until we reach
1841
- \begin{center}
1842
- \begin{tikzpicture}[xscale=2]
1843
- \node at (0, 0) [circ] {-};
1844
- \node at (3, 0) [circ] {-};
1845
- \node at (0, 0) [above] {1};
1846
- \node at (3, 0) [above] {$n$};
1847
- \draw [->] (0, 0) -- (1, 1) node [pos = 0.5, above] {\small 5};
1848
- \draw [->] (1, 1) -- (2, 1) node [pos = 0.5, above] {\small 1};
1849
- \draw [->] (2, 1) -- (3, 0) node [pos = 0.5, above] {\small 1};
1850
- \draw [->] (0, 0) -- (1, -1) node [pos = 0.5, above] {\small 5};
1851
- \draw [->] (1, -1) -- (2, -1) node [pos = 0.5, above] {\small 2};
1852
- \draw [->] (2, -1) -- (3, 0) node [pos = 0.5, above] {\small 5};
1853
- \draw [->] (1, 1) -- (2, -1) node [pos = 0.5, above] {\small 4};
1854
-
1855
- \node [red] at (0.5, 0.5) [below] {\small 4};
1856
- \node [red] at (1.5, 1) [below] {\small 1};
1857
- \node [red] at (2.5, 0.5) [below] {\small 1};
1858
- \node [red] at (0.5, -0.5) [below] {\small 2};
1859
- \node [red] at (1.5, -1) [below] {\small 2};
1860
- \node [red] at (2.5, -0.5) [below] {\small 5};
1861
- \node [red] at (1.5, 0) [below] {\small 3};
1862
-
1863
- \node at (1, 1) [circ] {};
1864
- \node at (2, 1) [circ] {};
1865
- \node at (1, -1) [circ] {};
1866
- \node at (2, -1) [circ] {};
1867
- \end{tikzpicture}
1868
- \end{center}
1869
- (red is flow, black is capacity). We know this is an optimum, since our total flow is $6$, and we can draw a cut with capacity 6:
1870
- \begin{center}
1871
- \begin{tikzpicture}[xscale=2]
1872
- \node at (0, 0) [circ] {-};
1873
- \node at (3, 0) [circ] {-};
1874
- \node at (0, 0) [above] {1};
1875
- \node at (3, 0) [above] {$n$};
1876
- \draw [->] (0, 0) -- (1, 1) node [pos = 0.5, above] {\small 5};
1877
- \draw [->] (1, 1) -- (2, 1) node [pos = 0.5, above] {\small 1};
1878
- \draw [->] (2, 1) -- (3, 0) node [pos = 0.5, above] {\small 1};
1879
- \draw [->] (0, 0) -- (1, -1) node [pos = 0.5, above] {\small 5};
1880
- \draw [->] (1, -1) -- (2, -1) node [pos = 0.5, above] {\small 2};
1881
- \draw [->] (2, -1) -- (3, 0) node [pos = 0.5, above] {\small 5};
1882
- \draw [->] (1, 1) -- (2, -1) node [pos = 0.5, above] {\small 4};
1883
- \draw [dashed, red] (2.3, 1.1) -- (2.3, -1.1);
1884
-
1885
- \node at (1, 1) [circ] {};
1886
- \node at (2, 1) [circ] {};
1887
- \node at (1, -1) [circ] {};
1888
- \node at (2, -1) [circ] {};
1889
- \end{tikzpicture}
1890
- \end{center}
1891
- \end{eg}
1892
- \end{document}
books/cam/IB_E/variational_principles.tex DELETED
@@ -1,1652 +0,0 @@
1
- \documentclass[a4paper]{article}
2
-
3
- \def\npart {IB}
4
- \def\nterm {Easter}
5
- \def\nyear {2015}
6
- \def\nlecturer {P.\ K.\ Townsend}
7
- \def\ncourse {Variational Principles}
8
- \def\nofficial {http://www.damtp.cam.ac.uk/user/examples/B6La.pdf}
9
-
10
- \input{header}
11
-
12
- \begin{document}
13
- \maketitle
14
- {\small
15
- \noindent Stationary points for functions on $\R^n$. Necessary and sufficient conditions for minima and maxima. Importance of convexity. Variational problems with constraints; method of Lagrange multipliers. The Legendre Transform; need for convexity to ensure invertibility; illustrations from thermodynamics.\hspace*{\fill} [4]
16
-
17
- \vspace{5pt}
18
- \noindent The idea of a functional and a functional derivative. First variation for functionals, Euler-Lagrange equations, for both ordinary and partial differential equations. Use of Lagrange multipliers and multiplier functions.\hspace*{\fill} [3]
19
-
20
- \vspace{5pt}
21
- \noindent Fermat's principle; geodesics; least action principles, Lagrange's and Hamilton's equations for particles and fields. Noether theorems and first integrals, including two forms of Noether's theorem for ordinary differential equations (energy and momentum, for example). Interpretation in terms of conservation laws.\hspace*{\fill} [3]
22
-
23
- \vspace{5pt}
24
- \noindent Second variation for functionals; associated eigenvalue problem.\hspace*{\fill} [2]}
25
-
26
- \tableofcontents
27
- \setcounter{section}{-1}
28
- \section{Introduction}
29
- Consider a light ray travelling towards a mirror and being reflected.
30
- \begin{center}
31
- \begin{tikzpicture}
32
- \draw (1, 1) -- (1, -1);
33
- \draw (0.9, 0) -- (1.1, 0) node [right] {$z$};
34
- \draw [->] (0, 0.7) -- (1, 0) -- (0, -0.7);
35
- \end{tikzpicture}
36
- \end{center}
37
- We see that the light ray travels towards the mirror, gets reflected at $z$, and hits the (invisible) eye. What determines the path taken? The usual answer would be that the reflected angle shall be the same as the incident angle. However, ancient Greek mathematician Hero of Alexandria provided a different answer: the path of the light minimizes the total distance travelled.
38
-
39
- We can assume that light travels in a straight line except when reflected. Then we can characterize the path by a single variable $z$, the point where the light ray hits the mirror. Then we let $L(z)$ to be the length of the path, and we can solve for $z$ by setting $L'(z) = 0$.
40
-
41
- This principle sounds reasonable - in the absence of mirrors, light travels in a straight line - which is the shortest path between two points. But is it always true that the \emph{shortest} path is taken? No! We only considered a plane mirror, and this doesn't hold if we have, say, a spherical mirror. However, it turns out that in all cases, the path is a stationary point of the length function, i.e.\ $L'(z) = 0$.
42
-
43
- Fermat put this principle further. Assuming that light travels at a finite speed, the shortest path is the path that takes the minimum time. Fermat's principle thus states that
44
- \begin{center}
45
- Light travels on the path that takes the shortest time.
46
- \end{center}
47
- This alternative formulation has the advantage that it applies to refraction as well. Light travels at different speeds in different mediums. Hence when they travel between mediums, they will change direction such that the total time taken is minimized.
48
-
49
- We usually define the refractive index $n$ of a medium to be $n = 1/v$, where $v$ is the velocity of light in the medium (in units where the speed of light in vacuum is $1$). Then we can write the variational principle as
50
- \begin{center}
51
- minimize $\displaystyle \int_{\text{path}} n\;\d s$,
52
- \end{center}
53
- where $\d s$ is the path length element. This is easy to solve if we have two distinct mediums. Since light travels in a straight line in each medium, we can simply characterize the path by the point where the light crosses the boundary. However, in the general case, we should be considering \emph{any} possible path between two points. In this case, we can no longer use ordinary calculus, and need new tools: calculus of variations.
54
-
55
- In calculus of variations, the main objective is to find a function $x(t)$ that minimizes an integral $\int f(x)\;\d t$ for some function $f$. For example, we might want to minimize $\int (x^2 + x)\;\d t$. This differs greatly from ordinary minimization problems. In ordinary calculus, we minimize a function $f(x)$ for all possible values of $x\in \R$. However, in calculus of variations, we will be minimizing the integral $\int f(x)\;\d t$ over all possible \emph{functions} $x(t)$.
56
-
57
- \section{Multivariate calculus}
58
- Before we start calculus of variations, we will first have a brief overview of minimization problems in ordinary calculus. Unless otherwise specified, $f$ will be a function $\R^n \to \R$. For convenience, we write the argument of $f$ as $\mathbf{x} = (x_1, \cdots, x_n)$ and $x = |\mathbf{x}|$. We will also assume that $f$ is sufficiently smooth for our purposes.
59
-
60
- \subsection{Stationary points}
61
- The quest of minimization starts with finding stationary points.
62
- \begin{defi}[Stationary points]
63
- \emph{Stationary points} are points in $\R^n$ for which $\nabla f = \mathbf{0}$, i.e.
64
- \[
65
- \frac{\partial f}{\partial x_1} = \frac{\partial f}{\partial x_2} = \cdots = \frac{\partial f}{\partial x_n} = 0
66
- \]
67
- \end{defi}
68
- All minima and maxima are stationary points, but knowing that a point is stationary is not sufficient to determine which type it is. To know more about the nature of a stationary point, we Taylor expand $f$ about such a point, which we assume is $\mathbf{0}$ for notational convenience.
69
- \begin{align*}
70
- f(\mathbf{x}) &= f(\mathbf{0}) + \mathbf{x}\cdot \nabla f + \frac{1}{2}\sum_{i, j}x_ix_j\frac{\partial^2 f}{\partial x_i \partial x_j} + O(x^3).\\
71
- &= f(\mathbf{0}) + \frac{1}{2}\sum_{i, j}x_ix_j\frac{\partial^2 f}{\partial x_i \partial x_j} + O(x^3).
72
- \end{align*}
73
- The second term is so important that we have a name for it:
74
- \begin{defi}[Hessian matrix]
75
- The \emph{Hessian matrix} is
76
- \[
77
- H_{ij}(\mathbf{x}) = \frac{\partial^2 f}{\partial x_i \partial x_j}
78
- \]
79
- \end{defi}
80
- Using summation notation, we can write our result as
81
- \[
82
- f(\mathbf{x}) - f(\mathbf{0}) = \frac{1}{2}x_i H_{ij}x_j + O(x^3).
83
- \]
84
- Since $H$ is symmetric, it is diagonalizable. Thus after rotating our axes to a suitable coordinate system, we have
85
- \[
86
- H_{ij}' =
87
- \begin{pmatrix}
88
- \lambda_1 & 0 & \cdots & 0\\
89
- 0 & \lambda_2 & \cdots & 0\\
90
- \vdots & \vdots & \ddots & \vdots \\
91
- 0 & 0 & \cdots & \lambda_n
92
- \end{pmatrix},
93
- \]
94
- where $\lambda_i$ are the eigenvalues of $H$. Since $H$ is real symmetric, these are all real. In our new coordinate system, we have
95
- \[
96
- f(\mathbf{x}) - f(\mathbf{0}) = \frac{1}{2}\sum_{i = 1}^n \lambda_i (x_i')^2
97
- \]
98
- This is useful information. If all eigenvalues $\lambda_i$ are positive, then $f(\mathbf{x}) - f(\mathbf{0})$ must be positive (for small $\mathbf{x}$). Hence our stationary point is a local minimum. Similarly, if all eigenvalues are negative, then it is a local maximum.
99
-
100
- If there are mixed signs, say $\lambda_1 > 0$ and $\lambda_2 < 0$, then $f$ increases in the $x_1$ direction and decreases in the $x_2$ direction. In this case we say we have a saddle point.
101
-
102
- If some $\lambda = 0$, then we have a \emph{degenerate stationary point}. To identify the nature of this point, we must look at even higher derivatives.
103
-
104
- In the special case where $n = 2$, we do not need to explicitly find the eigenvalues. We know that $\det H$ is the product of the two eigenvalues. Hence if $\det H$ is negative, the eigenvalues have different signs, and we have a saddle. If $\det H$ is positive, then the eigenvalues are of the same sign.
105
-
106
- To determine if it is a maximum or minimum, we can look at the trace of $H$, which is the sum of eigenvalues. If $\tr H$ is positive, then we have a local minimum. Otherwise, it is a local maximum.
107
-
108
- \begin{eg}
109
- Let $f(x, y) = x^3 + y^3 - 3xy$. Then
110
- \[
111
- \nabla f = 3(x^2 - y, y^2 - x).
112
- \]
113
- This is zero iff $x^2 = y$ and $y^2 = x$. This is satisfied iff $y^4 = y$. So either $y = 0$, or $y = 1$. So there are two stationary points: $(0, 0)$ and $(1, 1)$.
114
-
115
- The Hessian matrix is
116
- \[
117
- H =
118
- \begin{pmatrix}
119
- 6x & -3\\
120
- -3 & 6y
121
- \end{pmatrix},
122
- \]
123
- and we have
124
- \begin{align*}
125
- \det H &= 9(4xy - 1)\\
126
- \tr H &= 6(x + y).
127
- \end{align*}
128
- At $(0, 0)$, $\det H < 0$. So this is a saddle point. At $(1, 1)$, $\det H > 0$, $\tr H > 0$. So this is a local minimum.
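- As a quick cross-check, we can classify the same points directly from the eigenvalues instead of the determinant and trace:
- \begin{align*}
- H(0, 0) = \begin{pmatrix} 0 & -3\\ -3 & 0 \end{pmatrix} &\quad\Rightarrow\quad \lambda = \pm 3 \quad \text{(mixed signs, saddle)},\\
- H(1, 1) = \begin{pmatrix} 6 & -3\\ -3 & 6 \end{pmatrix} &\quad\Rightarrow\quad \lambda = 3, 9 \quad \text{(both positive, local minimum)}.
- \end{align*}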
129
- \end{eg}
130
- \subsection{Convex functions}
131
- \subsubsection{Convexity}
132
- \emph{Convex functions} form an important class of functions with many nice properties. For example, stationary points of convex functions are all minima, and a convex function can have at most one minimum value. To define convex functions, we need to first define a \emph{convex set}.
133
-
134
- \begin{defi}[Convex set]
135
- A set $S\subseteq \R^n$ is \emph{convex} if for any distinct $\mathbf{x}, \mathbf{y}\in S, t\in (0, 1)$, we have $(1 - t)\mathbf{x} + t\mathbf{y} \in S$. Alternatively, the line segment joining any two points in $S$ lies completely within $S$.
136
- \begin{center}
137
- \begin{tikzpicture}
138
- \begin{scope}[shift={(-2, 0)}]
139
- \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0, 0) (0.6, -1) (1, 0) (0.7, 0.7)};
140
- \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {};
141
-
142
- \node at (0, -1.5) {non-convex};
143
- \end{scope}
144
-
145
- \begin{scope}[shift={(2, 0)}]
146
- \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
147
- \node at (0, -1.5) {convex};
148
- \end{scope}
149
- \end{tikzpicture}
150
- \end{center}
151
- \end{defi}
152
-
153
- \begin{defi}[Convex function]
154
- A function $f: \R^n \to \R$ is \emph{convex} if
155
- \begin{enumerate}
156
- \item The domain $D(f)$ is convex
157
- \item The function $f$ lies below (or on) all its chords, i.e.
158
- \[
159
- f((1 - t)\mathbf{x} + t\mathbf{y}) \leq (1 - t)f(\mathbf{x}) + tf(\mathbf{y}) \tag{$*$}
160
- \]
161
- for all $\mathbf{x}, \mathbf{y}\in D(f), t\in (0, 1)$.
162
- \end{enumerate}
163
- A function is \emph{strictly convex} if the inequality is strict, i.e.
164
- \[
165
- f((1 - t)\mathbf{x} + t\mathbf{y}) < (1 - t)f(\mathbf{x}) + tf(\mathbf{y}).
166
- \]
167
- \begin{center}
168
- \begin{tikzpicture}
169
- \draw(-2, 4) -- (-2, 0) -- (2, 0) -- (2, 4);
170
- \draw (-1.3, 0.1) -- (-1.3, -0.1) node [below] {$x$};
171
- \draw (1.3, 0.1) -- (1.3, -0.1) node [below] {$y$};
172
- \draw (-1.7, 2) parabola bend (-.2, 1) (1.7, 3.3);
173
- \draw [dashed] (-1.3, 0) -- (-1.3, 1.53) node [circ] {};
174
- \draw [dashed] (1.3, 0) -- (1.3, 2.42) node [circ] {};
175
- \draw (-1.3, 1.53) -- (1.3, 2.42);
176
- \draw [dashed] (0, 0) node [below] {\tiny $(1 - t)x + ty$} -- (0, 1.975) node [above] {\tiny$(1 - t)f(x) + t f(y)\quad\quad\quad\quad\quad\quad$} node [circ] {};
177
- \end{tikzpicture}
178
- \end{center}
179
- A function $f$ is (strictly) concave iff $-f$ is (strictly) convex.
180
- \end{defi}
181
-
182
- \begin{eg}\leavevmode
183
- \begin{enumerate}
184
- \item $f(x) = x^2$ is strictly convex.
185
- \item $f(x) = |x|$ is convex, but not strictly.
186
- \item $f(x) = \frac{1}{x}$ defined on $x > 0$ is strictly convex.
187
- \item $f(x) = \frac{1}{x}$ defined on $\R^* = \R \setminus \{0\}$ is \emph{not} convex. For one thing, $\R^*$ is not a convex domain; but even if we extended the definition by setting, say, $f(0) = 0$, the function would still not be convex: consider the chord joining $(-1, -1)$ and $(1, 1)$. (In fact $f(x) = \frac{1}{x}$ restricted to $x < 0$ is concave.)
188
- \end{enumerate}
189
- \end{eg}
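- For instance, we can verify directly from the definition that $f(x) = x^2$ is strictly convex: for any $x \neq y$ and $t \in (0, 1)$,
- \[
- (1 - t)x^2 + ty^2 - \big((1 - t)x + ty\big)^2 = t(1 - t)(x - y)^2 > 0,
- \]
- so the defining inequality holds strictly.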
190
- \subsubsection{First-order convexity condition}
191
- While the definition of a convex function seems a bit difficult to work with, if our function is differentiable, it is easy to check if it is convex.
192
-
193
- First assume that our function is once differentiable, and we attempt to find a first-order condition for convexity. Suppose that $f$ is convex. For fixed $\mathbf{x}, \mathbf{y}$, we define the function
194
- \[
195
- h(t) = (1 - t)f(\mathbf{x}) + tf(\mathbf{y}) - f((1 - t)\mathbf{x} + t \mathbf{y}).
196
- \]
197
- By the definition of convexity of $f$, we must have $h(t) \geq 0$. Also, trivially $h(0) = 0$. So
198
- \[
199
- \frac{h(t) - h(0)}{t} \geq 0
200
- \]
201
- for any $t\in (0, 1)$. So
202
- \[
203
- h'(0) \geq 0.
204
- \]
205
- On the other hand, we can also differentiate $h$ directly and evaluate at $0$:
206
- \[
207
- h'(0) = f(\mathbf{y}) - f(\mathbf{x}) - (\mathbf{y} - \mathbf{x})\cdot \nabla f (\mathbf{x}).
208
- \]
209
- Combining our two results, we know that
210
- \[
211
- f(\mathbf{y}) \geq f(\mathbf{x}) + (\mathbf{y} - \mathbf{x})\cdot \nabla f(\mathbf{x}) \tag{$\dagger$}
212
- \]
213
- It is also true that this condition implies convexity, which is an easy result.
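- Indeed, suppose $(\dagger)$ holds for all $\mathbf{x}, \mathbf{y}$ in a convex domain. Write $\mathbf{z} = (1 - t)\mathbf{x} + t\mathbf{y}$ and apply $(\dagger)$ at $\mathbf{z}$ twice:
- \begin{align*}
- f(\mathbf{x}) &\geq f(\mathbf{z}) + (\mathbf{x} - \mathbf{z})\cdot \nabla f(\mathbf{z}),\\
- f(\mathbf{y}) &\geq f(\mathbf{z}) + (\mathbf{y} - \mathbf{z})\cdot \nabla f(\mathbf{z}).
- \end{align*}
- Multiplying the first inequality by $(1 - t)$ and the second by $t$ and adding, the gradient terms cancel, since $(1 - t)(\mathbf{x} - \mathbf{z}) + t(\mathbf{y} - \mathbf{z}) = \mathbf{0}$. We are left with $(1 - t)f(\mathbf{x}) + tf(\mathbf{y}) \geq f(\mathbf{z})$, which is exactly $(*)$.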
214
-
215
- How can we interpret this result? The graph of $\mathbf{y} \mapsto f(\mathbf{x}) + (\mathbf{y} - \mathbf{x}) \cdot \nabla f(\mathbf{x})$ is the tangent plane of $f$ at $\mathbf{x}$. Hence this condition is saying that a convex differentiable function lies above all its tangent planes.
216
-
217
- We immediately get the corollary
218
- \begin{cor}
219
- A stationary point of a convex function is a global minimum. There can be more than one global minimum (e.g.\ a constant function), but there is at most one if the function is strictly convex.
220
- \end{cor}
221
-
222
- \begin{proof}
223
- Given $\mathbf{x}_0$ such that $\nabla f(\mathbf{x}_0) = \mathbf{0}$, $(\dagger)$ implies that for any $\mathbf{y}$,
224
- \[
225
- f(\mathbf{y}) \geq f(\mathbf{x}_0) + (\mathbf{y} - \mathbf{x}_0)\cdot \nabla f(\mathbf{x}_0) = f(\mathbf{x}_0). \qedhere
226
- \]
227
- \end{proof}
228
- We can write our first-order convexity condition in a different way. We can rewrite $(\dagger)$ into the form
229
- \[
230
- (\mathbf{y} - \mathbf{x}) \cdot [\nabla f(\mathbf{y}) - \nabla f(\mathbf{x})] \geq f(\mathbf{x}) - f(\mathbf{y}) - (\mathbf{x} - \mathbf{y}) \cdot \nabla f(\mathbf{y}).
231
- \]
232
- By applying $(\dagger)$ to the right hand side (with $\mathbf{x}$ and $\mathbf{y}$ swapped), we know that the right hand side is $\geq 0$. So we have another first-order condition:
233
- \[
234
- (\mathbf{y} - \mathbf{x})\cdot [\nabla f(\mathbf{y}) - \nabla f(\mathbf{x})] \geq 0,
235
- \]
236
- It can be shown that this is equivalent to the other conditions.
237
-
238
- This condition might seem a bit weird to comprehend, but all it says is that $\nabla f(\mathbf{x})$ is a non-decreasing function. For example, when $n = 1$, the equation states that $(y - x)(f'(y) - f'(x)) \geq 0$, which is the same as saying $f'(y) \geq f'(x)$ whenever $y > x$.
239
-
240
- \subsubsection{Second-order convexity condition}
241
- We have an even nicer condition when the function is twice differentiable. We start with the equation we just obtained:
242
- \[
243
- (\mathbf{y} - \mathbf{x})\cdot [\nabla f(\mathbf{y}) - \nabla f(\mathbf{x})] \geq 0,
244
- \]
245
- Write $\mathbf{y} = \mathbf{x} + \mathbf{h}$. Then
246
- \[
247
- \mathbf{h} \cdot (\nabla f(\mathbf{x} + \mathbf{h}) - \nabla f(\mathbf{x})) \geq 0.
248
- \]
249
- Expand the left in Taylor series. Using suffix notation, this becomes
250
- \[
251
- h_i [h_j \nabla_j \nabla_i f + O(h^2)] \geq 0.
252
- \]
253
- But $\nabla_j \nabla_i f = H_{ij}$. So we have
254
- \[
255
- h_i H_{ij}h_j + O(h^3) \geq 0
256
- \]
257
- This is true for all $h$ if the Hessian $H$ is positive semi-definite (or simply positive), i.e.\ the eigenvalues are non-negative. If they are in fact all positive, then we say $H$ is positive definite.
258
-
259
- Hence convexity implies that the Hessian matrix is positive (semi-definite) for all $\mathbf{x}\in D(f)$. If the Hessian is positive definite everywhere, then $f$ is strictly convex, although strict convexity does not quite force positive definiteness everywhere (consider $f(x) = x^4$, which is strictly convex but has $f''(0) = 0$).
259
-
260
- The converse is also true --- if the Hessian is positive everywhere, then the function is convex.
262
-
263
- \begin{eg}
264
- Let $f(x, y) = \frac{1}{xy}$ for $x, y > 0$. Then the Hessian is
265
- \[
266
- H = \frac{1}{xy}
267
- \begin{pmatrix}
268
- \frac{2}{x^2} & \frac{1}{xy}\\
269
- \frac{1}{xy} & \frac{2}{y^2}
270
- \end{pmatrix}
271
- \]
272
- The determinant is
273
- \[
274
- \det H = \frac{3}{x^4y^4} > 0
275
- \]
276
- and the trace is
277
- \[
278
- \tr H = \frac{2}{xy}\left(\frac{1}{x^2} + \frac{1}{y^2}\right) > 0.
279
- \]
280
- So $f$ is convex.
281
-
282
- To conclude that $f$ is convex, we only used the fact that $xy$ is positive, instead of $x$ and $y$ being individually positive. Could we then relax the domain condition to $xy > 0$ instead? The answer is no: the set $\{xy > 0\}$ is not a convex set (it consists of two disjoint quadrants), so $f$ would not even have a convex domain, and it fails to be a convex function there.
283
- \end{eg}
284
-
285
- \subsection{Legendre transform}
286
- The Legendre transform is an important tool in classical dynamics and thermodynamics. In classical dynamics, it is used to transform between the Lagrangian and the Hamiltonian. In thermodynamics, it is used to transform between the energy, Helmholtz free energy and enthalpy. Despite its importance, the definition is slightly awkward.
287
-
288
- Suppose that we have a function $f(x)$, which we'll assume is differentiable. For some reason, we want to transform it into a function of the conjugate variable $p = \frac{\d f}{\d x}$ instead. In most applications to physics, this quantity has a particular physical significance. For example, in classical dynamics, if $L$ is the Lagrangian, then $p = \frac{\partial L}{\partial \dot{x}}$ is the (conjugate) momentum. $p$ also has a context-independent geometric interpretation, which we will explore later. For now, we will assume that $p$ is more interesting than $x$.
289
-
290
- Unfortunately, the obvious option $f^*(p) = f(x(p))$ is not the transform we want. There are various reasons for this, but the major reason is that it is \emph{ugly}. It lacks any mathematical elegance, and has almost no nice properties at all.
291
-
292
- In particular, we want our $f^*(p)$ to satisfy the property
293
- \[
294
- \frac{\d f^*}{\d p} = x.
295
- \]
296
- This says that if $p$ is the conjugate of $x$, then $x$ is the conjugate of $p$. We will soon see how this is useful in the context of thermodynamics.
297
-
298
- The symmetry is better revealed if we write in terms of differentials. The differential of the function $f$ is
299
- \[
300
- \d f = \frac{\d f}{\d x}\;\d x = p\;\d x.
301
- \]
302
- So we want our $f^*$ to satisfy
303
- \[
304
- \d f^* = x\;\d p.
305
- \]
306
- How can we obtain this? From the product rule, we know that
307
- \[
308
- \d (xp) = x\;\d p + p\;\d x.
309
- \]
310
- So if we define $f^* = xp - f$ (more explicitly written as $f^*(p) = x(p)p - f(x(p))$), then we obtain the desired relation $\d f^* = x\;\d p$. Alternatively, we can say $\frac{\d f^*}{\d p} = x$.
311
-
312
- The actual definition we give will not be exactly this. Instead, we define it in a way that does not assume differentiability. We'll also assume that the function takes the more general form $\R^n \to \R$.
313
- \begin{defi}[Legendre transform]
314
- Given a function $f: \R^n \to \R$, its \emph{Legendre transform} $f^*$ (the ``conjugate'' function) is defined by
315
- \[
316
- f^*(\mathbf{p}) = \sup_{\mathbf{x}}(\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})),
317
- \]
318
- The domain of $f^*$ is the set of $\mathbf{p}\in \R^n$ such that the supremum is finite. $\mathbf{p}$ is known as the conjugate variable.
319
- \end{defi}
320
- This relation can also be written as $f^*(p) + f(x) = px$, where $x = x(p)$ is the value of $x$ at which the supremum of $px - f(x)$ is attained.
321
-
322
- To show that this is the same as what we were just talking about, note that the supremum of $\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})$ is obtained when its derivative is zero, i.e.\ $\mathbf{p} = \nabla f(\mathbf{x})$. In particular, in the 1D case, $f^*(p) = px - f(x)$, where $x$ satisfies $f'(x) = p$. So $\mathbf{p}$ is indeed the derivative of $f$ with respect to $\mathbf{x}$.
323
-
324
- From the definition, we can immediately conclude that
325
- \begin{lemma}
326
- $f^*$ is always convex.
327
- \end{lemma}
328
-
329
- \begin{proof}
330
- \begin{align*}
331
- f^*((1 - t)\mathbf{p} + t\mathbf{q}) &= \sup_\mathbf{x} \big[((1 - t)\mathbf{p} + t\mathbf{q})\cdot \mathbf{x} - f(\mathbf{x})\big]\\
332
- &= \sup_\mathbf{x} \big[(1 - t)(\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})) + t(\mathbf{q}\cdot \mathbf{x} - f(\mathbf{x}))\big]\\
333
- &\leq (1 - t)\sup_\mathbf{x} [\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})] + t\sup_\mathbf{x}[\mathbf{q}\cdot \mathbf{x} - f(\mathbf{x})]\\
334
- &= (1 - t)f^*(\mathbf{p}) + tf^*(\mathbf{q})
335
- \end{align*}
336
- Note that we cannot immediately say that $f^*$ is convex, since we have to show that the domain is convex. But by the above bounds, $f^*((1 - t)\mathbf{p} + t\mathbf{q})$ is bounded by the sum of two finite terms, which is finite. So $(1 - t)\mathbf{p} + t\mathbf{q}$ is also in the domain of $f^*$.
337
- \end{proof}
338
-
339
- This transformation can be given a geometric interpretation. We will only consider the 1D case, because drawing higher-dimensional graphs is hard. For any fixed $x$, we draw the tangent line of $f$ at the point $x$. Then $f^*(p)$ is the intersection between the tangent line and the $y$ axis:
340
- \begin{center}
341
- \begin{tikzpicture}
342
- \draw [->] (-1, 0) -- (4, 0) node [right] {$x$};
343
- \draw [->] (0, -2) -- (0, 3) node [above] {$y$};
344
- \draw (-1, 0) parabola (3.5, 3);
345
- \draw (-0.4, -1.3) -- +(4, 4.24) node [right] {slope $=p$};
346
- \draw [dashed] (-1, -0.9) -- +(5, 0);
347
- \node at (0, -0.9) [anchor = north west] {$-f^*(p)$};
348
- \draw [black, arrows={latex'-latex'}](2.6, -0.9) -- +(0, 2.76) node [pos=0.5, left] {$px$};
349
- \draw [blue, arrows={latex'-latex'}] (2.8, 0) -- +(0, -0.9) node [right, pos=0.5] {$f^*(p) = px - f(x)$};
350
- \draw [red, arrows={latex'-latex'}] (2.8, 0) -- +(0, 1.86) node [pos=0.4, right] {$f(x)$};
351
- \end{tikzpicture}
352
- \end{center}
353
-
354
- \begin{eg}\leavevmode
355
- \begin{enumerate}
356
- \item Let $f(x) = \frac{1}{2}ax^2$ for $a > 0$. Then $p = ax$ at the maximum of $px - f(x)$. So
357
- \[
358
- f^*(p) = px - f(x) = p\cdot \frac{p}{a} - \frac{1}{2}a\left(\frac{p}{a}\right)^2 = \frac{1}{2a}p^2.
359
- \]
360
- So the Legendre transform maps a parabola to a parabola.
361
- \item $f(v) = -\sqrt{1 - v^2}$ for $|v| < 1$ is a lower semi-circle. We have
362
- \[
363
- p = f'(v) = \frac{v}{\sqrt{1 - v^2}}
364
- \]
365
- So
366
- \[
367
- v = \frac{p}{\sqrt{1 + p^2}}
368
- \]
369
- and exists for all $p\in \R$. So
370
- \[
371
- f^*(p) = pv - f(v) = \frac{p^2}{\sqrt{1 + p^2}} + \frac{1}{\sqrt{1 + p^2}} = \sqrt{1 + p^2}.
372
- \]
373
- A circle gets mapped to a hyperbola.
374
- \item Let $f = cx$ for $c > 0$. This is convex but not strictly convex. Then $px - f(x) = (p - c)x$. This has no maximum unless $p = c$. So the domain of $f^*$ is the single point $\{c\}$, and $f^*(c) = 0$. So a line goes to a point.
375
- \end{enumerate}
376
- \end{eg}
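- For one more example, take $f(x) = e^x$. Then $p = f'(x) = e^x$, so $x = \ln p$ for $p > 0$, and
- \[
- f^*(p) = px - f(x) = p\ln p - p \quad\text{for } p > 0,
- \]
- with $f^*(0) = \sup_x(-e^x) = 0$ as a limiting case. For $p < 0$ the supremum is infinite, so such $p$ lie outside the domain of $f^*$.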
377
-
378
- Finally, we prove that applying the Legendre transform twice gives the original function.
379
- \begin{thm}
380
- If $f$ is convex, differentiable with Legendre transform $f^*$, then $f^{**} = f$.
381
- \end{thm}
382
-
383
- \begin{proof}
384
- We have $f^*(\mathbf{p}) = \mathbf{p}\cdot\mathbf{x}(\mathbf{p}) - f(\mathbf{x}(\mathbf{p}))$ where $\mathbf{x}(\mathbf{p})$ satisfies $\mathbf{p} = \nabla f(\mathbf{x}(\mathbf{p}))$.
385
-
386
- Differentiating with respect to $\mathbf{p}$, we have
387
- \begin{align*}
388
- \nabla_i f^*(\mathbf{p}) &= x_i + p_j \nabla_i x_j (\mathbf{p}) - \nabla_i x_j(\mathbf{p}) \nabla_j f(\mathbf{x})\\
389
- &= x_i + p_j \nabla_i x_j(\mathbf{p}) - \nabla_i x_j(\mathbf{p}) p_j\\
390
- &= x_i.
391
- \end{align*}
392
- So
393
- \[
394
- \nabla f^*(\mathbf{p}) = \mathbf{x}.
395
- \]
396
- This means that the conjugate variable of $\mathbf{p}$ is our original $\mathbf{x}$. So
397
- \begin{align*}
398
- f^{**}(\mathbf{x}) &= (\mathbf{x} \cdot \mathbf{p} - f^*(\mathbf{p}))|_{\mathbf{p} = \mathbf{p}(\mathbf{x})}\\
399
- &= \mathbf{x}\cdot \mathbf{p} - (\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x}))\\
400
- &= f(\mathbf{x}). \qedhere
401
- \end{align*}
402
- \end{proof}
403
- Note that strict convexity is \emph{not} required. For example, in our last example above with the straight line, $f^*(p) = 0$ for $p = c$. So $f^{**}(x) = (xp - f^*(p))|_{p = c} = cx = f(x)$.
404
-
405
- However, convexity \emph{is} required. If $f^{**} = f$ is true, then $f$ must be convex, since it is a Legendre transform. Hence $f^{**} = f$ cannot be true for non-convex functions.
406
-
407
- \subsubsection*{Application to thermodynamics}
408
- Given a system of a fixed number of particles, the energy of a system is usually given as a function of entropy and volume:
409
- \[
410
- E = E(S, V).
411
- \]
412
- We can think of this as a gas inside a piston with variable volume.
413
-
414
- There are two things that can affect the energy: we can push in the piston and modify the volume. This corresponds to a work done of $-p\;\d V$, where $p$ is the pressure. Alternatively, we can simply heat it up and create a heat change of $T\;\d S$, where $T$ is the temperature. Then we have
415
- \[
416
- \d E = T\;\d S - p\;\d V.
417
- \]
418
- Comparing with the chain rule, we have
419
- \[
420
- \frac{\partial E}{\partial S} = T,\quad -\frac{\partial E}{\partial V} = p
421
- \]
422
- However, the entropy is a mysterious quantity no one understands. Instead, we like temperature, defined as $T = \frac{\partial E}{\partial S}$. Hence we use the (negative) Legendre transform to obtain the conjugate function \emph{Helmholtz free energy}.
423
- \[
424
- F(T, V) = \inf_S [E(S, V) - TS] = E(S, V) - S\frac{\partial E}{\partial S} = E - ST.
425
- \]
426
- Note that the Helmholtz free energy satisfies
427
- \[
428
- \d F = - S \;\d T - p \;\d V.
429
- \]
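- This follows from $F = E - TS$ together with the expression for $\d E$:
- \[
- \d F = \d E - T\;\d S - S\;\d T = (T\;\d S - p\;\d V) - T\;\d S - S\;\d T = -S\;\d T - p\;\d V.
- \]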
430
- Just as we could recover $T$ and $p$ from $E$ via taking partial derivatives with respect to $S$ and $V$, we are able to recover $S$ and $p$ from $F$ by taking partial derivatives with respect to $T$ and $V$. This would not be the case if we simply defined $F(T, V) = E(S(T, V), V)$.
431
-
432
- If we take the Legendre transform with respect to $V$, we get the enthalpy instead, and if we take the Legendre transform with respect to both, we get the Gibbs free energy.
433
-
434
- \subsection{Lagrange multipliers}
435
- At the beginning, we considered the problem of \emph{unconstrained maximization}. We wanted to maximize $f(x, y)$ where $x, y$ can be any real value. However, sometimes we want to restrict to certain values of $(x, y)$. For example, we might want $x$ and $y$ to satisfy $x + y = 10$.
436
-
437
- We take a simple example of a hill. We model it using the function $f(x, y)$ given by the height above the ground. The hilltop would be given by the maximum of $f$, which satisfies
438
- \[
439
- 0 = \d f = \nabla f\cdot \d \mathbf{x}
440
- \]
441
- for any (infinitesimal) displacement $\d \mathbf{x}$. So we need
442
- \[
443
- \nabla f = \mathbf{0}.
444
- \]
445
- This would be a case of \emph{unconstrained maximization}, since we are considering all possible values of $x$ and $y$.
446
-
447
- A problem of \emph{constrained maximization} would be as follows: we have a path $p$ defined by $p(x, y) = 0$. What is the highest point along the path $p$?
448
-
449
- We still need $\nabla f \cdot \d \mathbf{x} = 0$, but now $\d \mathbf{x}$ is \emph{not} arbitrary. We only consider the $\d \mathbf{x}$ parallel to the path. Alternatively, $\nabla f$ has to be entirely perpendicular to the path. Since we know that the normal to the path is $\nabla p$, our condition becomes
450
- \[
451
- \nabla f = \lambda \nabla p
452
- \]
453
- for some $\lambda$, known as the \emph{Lagrange multiplier}. Of course, we still have the constraint $p(x, y) = 0$. So what we have to solve is
454
- \begin{align*}
455
- \nabla f &= \lambda \nabla p\\
456
- p &= 0
457
- \end{align*}
458
- for the three variables $x, y, \lambda$.
459
-
460
- Alternatively, we can change this into a single problem of \emph{unconstrained} extremization. We ask for the stationary points of the function $\phi(x, y, \lambda)$ given by
461
- \[
462
- \phi(x, y, \lambda) = f(x, y) - \lambda p(x, y)
463
- \]
464
- Setting the derivatives with respect to $x$ and $y$ to zero gives the $\nabla f = \lambda \nabla p$ condition, and setting the derivative with respect to $\lambda$ to zero gives the condition $p = 0$.
465
-
466
- \begin{eg}
467
- Find the radius of the smallest circle centered on the origin that intersects $y = x^2 - 1$.
468
-
469
- \begin{enumerate}
470
- \item First do it the easy way: for a circle of radius $R$ to work, $x^2 + y^2 = R^2$ and $y = x^2 - 1$ must have a solution. So
471
- \[
472
- (x^2)^2 - x^2 + 1 - R^2 = 0
473
- \]
474
- and
475
- \[
476
- x^2 = \frac{1}{2}\pm \sqrt{R^2 - \frac{3}{4}}
477
- \]
478
- So $R_{\min} = \sqrt{3}/2$.
479
-
480
- \item We can also view this as a variational problem. We want to minimize $f(x, y) = x^2 + y^2$ subject to the constraint $p(x, y) = 0$ for $p(x, y) = y - x^2 + 1$.
481
-
482
- We can solve this directly. We can solve the constraint to obtain $y = x^2 - 1$. Then
483
- \[
484
- R^2(x) = f(x, y(x)) = (x^2)^2 - x^2 + 1
485
- \]
486
- We look for stationary points of $R^2$:
487
- \[
488
- (R^2(x))' = 0 \Rightarrow x\left(x^2 - \frac{1}{2}\right)= 0
489
- \]
490
- So $x = 0$ and $R = 1$; or $x = \pm \frac{1}{\sqrt{2}}$ and $R = \frac{\sqrt{3}}{2}$. Since $\frac{\sqrt{3}}{2}$ is smaller, this is our minimum.
491
-
492
- \item Finally, we can use Lagrange multipliers. We find stationary points of the function
493
- \[
494
- \phi(x, y, \lambda) = f(x, y) - \lambda p(x, y) = x^2 + y^2 - \lambda (y - x^2 + 1)
495
- \]
496
- The partial derivatives give
497
- \begin{align*}
498
- \frac{\partial \phi}{\partial x} = 0 &\Rightarrow 2x(1 + \lambda) = 0\\
499
- \frac{\partial \phi}{\partial y} = 0 &\Rightarrow 2y - \lambda = 0\\
500
- \frac{\partial \phi}{\partial \lambda} = 0 &\Rightarrow y - x^2 + 1 = 0
501
- \end{align*}
502
- The first equation gives us two choices
503
- \begin{itemize}
504
- \item $x = 0$. Then the third equation gives $y = -1$. So $R = \sqrt{x^2 + y^2} = 1$.
505
- \item $\lambda = -1$. So the second equation gives $y = -\frac{1}{2}$ and the third gives $x = \pm \frac{1}{\sqrt{2}}$. Hence $R = \frac{\sqrt{3}}{2}$ is the minimum.
506
- \end{itemize}
507
- \end{enumerate}
508
- \end{eg}
509
- This can be generalized to problems with functions $\R^n \to \R$ using the same logic.
510
-
511
- \begin{eg}
512
- For $\mathbf{x}\in \R^n$, find the minimum of the quadratic form
513
- \[
514
- f(\mathbf{x}) = x_i A_{ij}x_j
515
- \]
516
- on the surface $|\mathbf{x}|^2 = 1$.
517
-
518
- \begin{enumerate}
519
- \item The constraint imposes a normalization condition on $\mathbf{x}$. But if we scale up $\mathbf{x}$, $f(\mathbf{x})$ scales accordingly. So if we define
520
- \[
521
- \Lambda(\mathbf{x}) = \frac{f(\mathbf{x})}{g(\mathbf{x})},\quad g(\mathbf{x}) = |\mathbf{x}|^2,
522
- \]
523
- the problem is equivalent to minimization of $\Lambda (\mathbf{x})$ without constraint. Then
524
- \[
525
- \nabla_i \Lambda(\mathbf{x}) = \frac{2}{g}\left[A_{ij} x_j - \frac{f}{g} x_i\right]
526
- \]
527
- So we need
528
- \[
529
- A\mathbf{x} = \Lambda \mathbf{x}
530
- \]
531
- So the extremal values of $\Lambda (\mathbf{x})$ are the eigenvalues of $A$. So $\Lambda_{\min}$ is the lowest eigenvalue.
532
-
533
- This answer is intuitively obvious if we diagonalize $A$.
534
-
535
- \item We can also do it with Lagrange multipliers. We want to find stationary values of
536
- \[
537
- \phi(\mathbf{x}, \lambda) = f(\mathbf{x}) - \lambda(|\mathbf{x}|^2 - 1).
538
- \]
539
- So
540
- \[
541
- \mathbf{0} = \nabla \phi \Rightarrow A_{ij} x_j = \lambda x_i
542
- \]
543
- Differentiating with respect to $\lambda$ gives
544
- \[
545
- \frac{\partial \phi}{\partial \lambda} = 0 \Rightarrow |\mathbf{x}|^2 = 1.
546
- \]
547
- So we get the same set of equations.
548
- \end{enumerate}
549
- \end{eg}
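- For a concrete illustration, take
- \[
- A =
- \begin{pmatrix}
- 2 & 1\\
- 1 & 2
- \end{pmatrix},
- \]
- which has eigenvalues $1$ and $3$ with eigenvectors $(1, -1)/\sqrt{2}$ and $(1, 1)/\sqrt{2}$ respectively. So the minimum of $x_iA_{ij}x_j$ on $|\mathbf{x}|^2 = 1$ is $1$, attained at $\mathbf{x} = \pm(1, -1)/\sqrt{2}$, and the maximum is $3$, attained at $\mathbf{x} = \pm(1, 1)/\sqrt{2}$.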
550
-
551
- \begin{eg}
552
- Find the probability distribution $\{p_1, \cdots, p_n\}$ satisfying $\sum_i p_i = 1$ that maximizes the information entropy
553
- \[
554
- S = - \sum_{i = 1}^n p_i \ln p_i.
555
- \]
556
- We look for stationary points of
557
- \[
558
- \phi(\mathbf{p}, \lambda) = -\sum_{i = 1}^n p_i \ln p_i - \lambda\sum_{i = 1}^n p_i + \lambda.
559
- \]
560
- We have
561
- \[
562
- \frac{\partial \phi}{\partial p_i}= - \ln p_i - (1 + \lambda) = 0.
563
- \]
564
- So
565
- \[
566
- p_i = e^{-(1 + \lambda)}.
567
- \]
568
- It is the same for all $i$. So we must have $p_i = \frac{1}{n}$.
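- With $p_i = \frac{1}{n}$, the maximum value of the entropy is
- \[
- S = -\sum_{i = 1}^n \frac{1}{n}\ln \frac{1}{n} = \ln n,
- \]
- which grows with the number of outcomes $n$, as we would expect of the most spread-out distribution.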
569
- \end{eg}
570
-
571
- \section{Euler-Lagrange equation}
572
- \subsection{Functional derivatives}
573
- \begin{defi}[Functional]
574
- A \emph{functional} is a real-valued function whose argument is itself a function. We usually write it as $F[x]$ (square brackets), where $x = x(t): \R \to \R$ is a real function. We say that $F[x]$ is a functional of the function $x(t)$.
575
- \end{defi}
576
- Of course, we can also have functionals of many functions, e.g.\ $F[x, y]\in \R$ for $x, y: \R \to \R$. We can also have functionals of a function of many variables.
577
-
578
- \begin{eg}
579
- Given a medium with refractive index $n(\mathbf{x})$, the time taken by light to travel along a path $\mathbf{x}(t)$ from $\mathbf{x}_0$ to $\mathbf{x}_1$ is given by the functional
580
- \[
581
- T[\mathbf{x}] = \frac{1}{c}\int_{\mathbf{x}_0}^{\mathbf{x}_1} n(\mathbf{x}) \;\d \ell.
582
- \]
583
- \end{eg}
584
-
585
- While this is a very general definition, in reality, there is just one particular class of functionals we care about. Given a function $x(t)$ defined for $\alpha \leq t \leq \beta$, we study functionals of the form
586
- \[
587
- F[x] = \int_\alpha^\beta f(x, \dot{x}, t)\;\d t
588
- \]
589
- for some function $f$.
590
-
591
- Our objective is to find a stationary point of the functional $F[x]$. To do so, suppose we vary $x(t)$ by a small amount $\delta x(t)$. Then the corresponding change $\delta F[x]$ of $F[x]$ is
592
- \begin{align*}
593
- \delta F[x] &= F[x + \delta x] - F[x]\\
594
- &= \int_\alpha ^\beta \big(f(x + \delta x, \dot{x} + \delta \dot{x}, t) - f(x, \dot{x}, t)\big)\;\d t\\
595
- \intertext{Taylor expand to obtain}
596
- &= \int_\alpha^\beta \left(\delta x\frac{\partial f}{\partial x} + \delta \dot{x} \frac{\partial f}{\partial \dot{x}}\right)\;\d t + O(\delta x^2)\\
597
- \intertext{Integrate the second term by parts to obtain}
598
- \delta F[x] &= \int_\alpha^\beta\delta x\left[\frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)\right]\;\d t + \left[ \delta x\frac{\partial f}{\partial \dot{x}}\right]_\alpha^\beta.
599
- \end{align*}
600
- This doesn't seem like a very helpful equation. Life would be much easier if the last term (known as the \emph{boundary term}) $\left[ \delta x\frac{\partial f}{\partial \dot{x}}\right]_\alpha^\beta$ vanishes. Fortunately, for most of the cases we care about, the boundary conditions mandate that the boundary term does indeed vanish. Most of the time, we are told that $x$ is fixed at $t = \alpha, \beta$. So $\delta x(\alpha) = \delta x(\beta) = 0$. But regardless of what we do, we always choose boundary conditions such that the boundary term is 0. Then
601
- \[
602
- \delta F[x] = \int_\alpha ^\beta \left(\delta x \frac{\delta F[x]}{\delta x(t)}\right)\;\d t
603
- \]
604
- where
605
- \begin{defi}[Functional derivative]
606
- \[
607
- \frac{\delta F[x]}{\delta x} = \frac{\partial f}{\partial x} - \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)
608
- \]
609
- is the \emph{functional derivative} of $F[x]$.
610
- \end{defi}
611
-
612
- If we want to find a stationary point of $F$, then we need $\frac{\delta F[x]}{\delta x} = 0$. So
613
- \begin{defi}[Euler-Lagrange equation]
614
- The \emph{Euler-Lagrange} equation is
615
- \[
616
- \frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) = 0
617
- \]
618
- for $\alpha \leq t \leq \beta$.
619
- \end{defi}
620
- There is an obvious generalization to functionals $F[\mathbf{x}]$ for $\mathbf{x}(t) \in \R^n$:
621
- \[
622
- \frac{\partial f}{\partial x_i} - \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot{x}_i}\right) = 0 \quad\text{ for all }i.
623
- \]
624
- \begin{eg}[Geodesics of a plane]
625
- What is the curve $C$ of minimal length between two points $A, B$ in the Euclidean plane? The length is
626
- \[
627
- L = \int_C \;\d \ell
628
- \]
629
- where $\;\d \ell = \sqrt{\d x^2 + \d y^2}$.
630
-
631
- There are two ways we can do this:
632
- \begin{enumerate}
633
- \item We restrict to curves for which $x$ (or $y$) is a good parameter, i.e.\ $y$ can be made a function of $x$. Then
634
- \[
635
- \d \ell = \sqrt{1 + (y')^2}\;\d x.
636
- \]
637
- Then
638
- \[
639
- L[y] = \int_\alpha^\beta \sqrt{1 + (y')^2}\;\d x.
640
- \]
641
- Since there is no explicit dependence on $y$, we know that
642
- \[
643
- \frac{\partial f}{\partial y} = 0
644
- \]
645
- So the Euler-Lagrange equation says that
646
- \[
647
- \frac{\d}{\d x} \left(\frac{\partial f}{\partial y'}\right) = 0
648
- \]
649
- We can integrate once to obtain
650
- \[
651
- \frac{\partial f}{\partial y'} = \text{constant}
652
- \]
653
- This is known as a \emph{first integral}, which will be studied in more detail later.
654
-
655
- Plugging in our value of $f$, we obtain
656
- \[
657
- \frac{y'}{\sqrt{1 + (y')^2}} = \text{constant}
658
- \]
659
- This shows that $y'$ must be constant. So $y$ must be a straight line.
660
- \item We can get around the restriction to ``good'' curves by choosing an arbitrary parameterization $\mathbf{r} = (x(t), y(t))$ for $t\in [0, 1]$ such that $\mathbf{r}(0) = A$, $\mathbf{r}(1) = B$. So
661
- \[
662
- \d \ell = \sqrt{\dot x^2 + \dot y^2}\;\d t.
663
- \]
664
- Then
665
- \[
666
- L[x, y] = \int_0^1 \sqrt{\dot x^2 + \dot y^2} \;\d t.
667
- \]
668
- We have, again
669
- \[
670
- \frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = 0.
671
- \]
672
- So we are left to solve
673
- \[
674
- \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot x}\right) = \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot y}\right) = 0.
675
- \]
676
- So we obtain
677
- \[
678
- \frac{\dot x}{\sqrt{\dot x^2 + \dot y^2}} = c,\quad \frac{\dot y}{\sqrt{\dot x^2 + \dot y^2}} = s
679
- \]
680
- where $c$ and $s$ are constants. While we have two constants, they are not independent. We must have $c^2 + s^2 = 1$. So we let $c = \cos \theta$, $s = \sin \theta$. Then the two conditions are both equivalent to
681
- \[
682
- (\dot x \sin \theta)^2 = (\dot y\cos \theta)^2.
683
- \]
684
- Hence
685
- \[
686
- \dot x \sin \theta = \pm\dot y \cos \theta.
687
- \]
688
- We can choose a $\theta$ such that we have a positive sign. So
689
- \[
690
- y\cos \theta = x\sin \theta + A
691
- \]
692
- for a constant $A$. This is a straight line with slope $\tan \theta$.
693
- \end{enumerate}
694
- \end{eg}
695
- \subsection{First integrals}
696
- In our example above, $f$ did not depend on $x$, and hence $\frac{\partial f}{\partial x} = 0$. Then the Euler-Lagrange equations entail
697
- \[
698
- \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) = 0.
699
- \]
700
- We can integrate this to obtain
701
- \[
702
- \frac{\partial f}{\partial \dot{x}} = \text{constant}.
703
- \]
704
- We call this the \emph{first integral}. First integrals are important in several ways. The most immediate advantage is that it simplifies the problem a lot. We only have to solve a first-order differential equation, instead of a second-order one. Not needing to differentiate $\frac{\partial f}{\partial \dot{x}}$ also prevents a lot of mess arising from the product and quotient rules.
705
-
706
- This has an additional significance when applied to problems in physics. If we have a first integral, then we get $\frac{\partial f}{\partial \dot{x}} =$ constant. This corresponds to a \emph{conserved quantity} of the system. When formulating physics problems as variational problems (as we will do in Chapter~\ref{sec:hamilton}), the conservation of energy and momentum will arise as constants of integration from first integrals.
707
-
708
- There is also a more complicated first integral appearing when $f$ does not (explicitly) depend on $t$. To find this out, we have to first consider the total derivative $\frac{\d f}{\d t}$. By the chain rule, we have
709
- \begin{align*}
710
- \frac{\d f}{\d t} &= \frac{\partial f}{\partial t} + \frac{\d x}{\d t}\frac{\partial f}{\partial x} + \frac{\d \dot{x}}{\d t}\frac{\partial f}{\partial \dot{x}} \\
711
- &= \frac{\partial f}{\partial t} + \dot{x}\frac{\partial f}{\partial x} + \ddot{x} \frac{\partial f}{\partial \dot{x}}.
712
- \end{align*}
713
- On the other hand, the Euler-Lagrange equation says that
714
- \[
715
- \frac{\partial f}{\partial x} = \frac{\d}{\d t} \left(\frac{\partial f}{\partial \dot{x}}\right).
716
- \]
717
- Substituting this into our equation for the total derivative gives
718
- \begin{align*}
719
- \frac{\d f}{\d t} &= \frac{\partial f}{\partial t} + \dot{x} \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) + \ddot{x}\frac{\partial f}{\partial \dot{x}}\\
720
- &= \frac{\partial f}{\partial t} + \frac{\d}{\d t}\left(\dot{x}\frac{\partial f}{\partial \dot{x}}\right).
721
- \end{align*}
722
- Then
723
- \[
724
- \frac{\d}{\d t}\left(f - \dot{x}\frac{\partial f}{\partial \dot{x}}\right) = \frac{\partial f}{\partial t}.
725
- \]
726
- So if $\frac{\partial f}{\partial t} = 0$, then we have the first integral
727
- \[
728
- f - \dot{x}\frac{\partial f}{\partial \dot{x}} = \text{constant}.
729
- \]
730
-
731
- \begin{eg}
732
- Consider a light ray travelling in the vertical $xz$ plane inside a medium with refractive index $n(z) = \sqrt{a - bz}$ for positive constants $a, b$. The phase velocity of light is $v = \frac{c}{n}$.
733
-
734
- According to Fermat's principle, the path minimizes
735
- \[
736
- T = \int_A^B \frac{\d \ell}{v}.
737
- \]
738
- This is equivalent to minimizing the optical path length
739
- \[
740
- cT = P = \int_A^B n\;\d \ell.
741
- \]
742
- We specify our path by the function $z(x)$. Then the path element is given by
743
- \[
744
- \d \ell = \sqrt{\d x^2 + \d z^2} = \sqrt{1 + z'(x)^2}\;\d x,
745
- \]
746
- Then
747
- \[
748
- P[z] = \int_{x_A}^{x_B}n(z)\sqrt{1 + (z')^2}\;\d x.
749
- \]
750
- Since this does not depend on $x$, we have the first integral
751
- \[
752
- k = f - z'\frac{\partial f}{\partial z'} = \frac{n(z)}{\sqrt{1 + (z')^2}}.
753
- \]
754
- for an integration constant $k$. Squaring and putting in the value of $n$ gives
755
- \[
756
- (z')^2 = \frac{b}{k^2}(z_0 - z),
757
- \]
758
- where $z_0 = (a - k^2)/b$. This is integrable and we obtain
759
- \[
760
- \frac{\d z}{\sqrt{z_0 - z}} = \pm\frac{\sqrt{b}}{k}\;\d x.
761
- \]
762
- So
763
- \[
764
- \sqrt{z_0 - z} = \pm \frac{\sqrt{b}}{2k}(x - x_0),
765
- \]
766
- where $x_0$ is our second integration constant. Square it to obtain
767
- \[
768
- z = z_0 - \frac{b}{4k^2}(x - x_0)^2,
769
- \]
770
- which is a parabola.
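- As a side remark, this first integral has a familiar interpretation. If $\theta$ is the angle between the ray and the $z$-axis (the direction along which $n$ varies), then $\sin\theta = 1/\sqrt{1 + (z')^2}$, so the conserved quantity is
- \[
- n(z)\sin \theta = k,
- \]
- which is just Snell's law applied continuously through the layers of constant refractive index.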
771
- \end{eg}
772
-
773
- \begin{eg}[Principle of least action]
774
- Mechanics as laid out by Newton was expressed in terms of forces and acceleration. While this is able to describe a lot of phenomena, it is rather unsatisfactory. For one, it is messy and difficult to scale to large systems involving many particles. It is also ugly.
775
-
776
- As a result, mechanics is later reformulated in terms of a variational principle. A quantity known as the \emph{action} is defined for each possible path taken by the particle, and the actual path taken is the one that minimizes the action (technically, it is the path that is a stationary point of the action functional).
777
-
778
- The version we will present here is an old version proposed by Maupertuis and Euler. While it sort-of works, it is still cumbersome to work with. The modern version is from Hamilton who again reformulated the action principle to something that is more powerful and general. This modern version will be discussed more in detail in Chapter~\ref{sec:hamilton}, and for now we will work with the old version first.
779
-
780
- The original definition for the action, as proposed by Maupertuis, was mass $\times$ velocity $\times$ distance. This was given a more precise mathematical definition by Euler. For a particle with constant energy,
781
- \[
782
- E = \frac{1}{2} mv^2 + U(\mathbf{x}),
783
- \]
784
- where $v = |\dot{\mathbf{x}}|$. So we have
785
- \[
786
- mv = \sqrt{2m(E - U(\mathbf{x}))}.
787
- \]
788
- Hence we can define the action to be
789
- \[
790
- A = \int_A^B \sqrt{2m(E - U(\mathbf{x}))}\;\d \ell,
791
- \]
792
- where $\d\ell$ is the path length element. We minimize this to find the trajectory.
793
-
794
- For a particle near the surface of the Earth, under the influence of gravity, $U = mgz$. So we have
795
- \[
796
- A[z] = \int_A^B \sqrt{2mE - 2m^2gz}\sqrt{1 + (z')^2}\;\d x,
797
- \]
798
- which is of exactly the same form as the optics problem we just solved. So the result is again a parabola, as expected.
799
- \end{eg}
800
-
801
- \begin{eg}[Brachistochrone]
802
- The Brachistochrone problem was one of the earliest problems in the calculus of variations. The name comes from the Greek words \emph{br\'akhistos} (``shortest'') and \emph{khr\'onos} (``time'').
803
-
804
- The question is as follows: suppose we have a bead sliding along a frictionless wire, starting from rest at the origin $A$. What shape of wire minimizes the time for the bead to travel to $B$?
805
- \begin{center}
806
- \begin{tikzpicture}
807
- \draw [->] (-0.5, 0) -- (4, 0) node [right] {$x$};
808
- \draw [->] (0, 0.5) -- (0, -3) node [below] {$y$};
809
-
810
- \draw [red] (0.86, -1) circle [radius=0.1];
811
-
812
- \node [circ] {};
813
- \node [anchor = south east] {$A$};
814
-
815
- \node at (3, -1) [circ] {};
816
- \node at (3, -1) [right] {$B$};
817
- \draw (0, 0) parabola bend (2, -1.5) (3, -1);
818
- \end{tikzpicture}
819
- \end{center}
820
- The conservation of energy implies that
821
- \[
822
- \frac{1}{2}mv^2 = mgy.
823
- \]
824
- So
825
- \[
826
- v = \sqrt{2gy}
827
- \]
828
- We want to minimize
829
- \[
830
- T = \int \frac{\d \ell}{v}.
831
- \]
832
- So
833
- \[
834
- T = \frac{1}{\sqrt{2g}}\int \frac{\sqrt{\d x^2 + \d y^2}}{\sqrt{y}} = \frac{1}{\sqrt{2g}}\int \sqrt{\frac{1 + (y')^2}{y}}\;\d x
835
- \]
836
- Since there is no explicit dependence on $x$, we have the first integral
837
- \[
838
- f - y'\frac{\partial f}{\partial y'} = \frac{1}{\sqrt{y(1 + (y')^2)}} = \text{constant}
839
- \]
840
- So the solution is
841
- \[
842
- y(1 + (y')^2) = c
843
- \]
844
- for some positive constant $c$.
845
-
846
- The solution of this ODE is, in parametric form,
847
- \begin{align*}
848
- x &= c(\theta - \sin \theta)\\
849
- y &= c(1 - \cos \theta).
850
- \end{align*}
851
- Note that this has $x = y = 0$ at $\theta = 0$. This describes a cycloid.
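- We can check that this parametrization satisfies the first integral: since $\frac{\d y}{\d \theta} = \frac{c}{2}\sin\theta$ and $\frac{\d x}{\d \theta} = \frac{c}{2}(1 - \cos\theta)$, we have
- \[
- y' = \frac{\sin \theta}{1 - \cos\theta},\quad 1 + (y')^2 = \frac{2}{1 - \cos \theta},\quad y(1 + (y')^2) = c,
- \]
- as required.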
852
- \end{eg}
853
-
854
- \subsection{Constrained variation of functionals}
855
- So far, we've considered the problem of finding stationary values of $F[x]$ without any restriction on what $x$ could be. However, sometimes there might be some restrictions on the possible values of $x$. For example, we might have a surface in $\R^3$ defined by $g(\mathbf{x}) = 0$. If we want to find the path of shortest length on the surface (i.e.\ geodesics), then we want to minimize $F[\mathbf{x}]$ subject to the constraint $g(\mathbf{x}(t)) = 0$.
856
-
857
- We can again use Lagrange multipliers. Suppose first that the constraint is that some other functional takes a fixed value, say $P[x] = c$ (for example, a curve of fixed total length). Then the problem is equivalent to finding stationary values (without constraints) of
858
- \[
859
- \Phi_\lambda [x] = F[x] - \lambda(P[x] - c).
860
- \]
861
- with respect to the function $x(t)$ and the variable $\lambda$.
862
-
863
- \begin{eg}[Isoperimetric problem]
864
- If we have a string of fixed length, what is the maximum area we can enclose with it?
865
-
866
- We first argue that the region enclosed by the curve is convex. If it is not, we can ``push out'' the curve to increase the area enclosed without changing the length. Assuming this, we can split the curve into two parts:
867
- \begin{center}
868
- \begin{tikzpicture}
869
- \draw [red] plot [smooth, tension=1.2] coordinates {(1, 1.5) (1.6, 2.3) (2.6, 2.3) (3.5, 2) (4, 1.5)};
870
- \node [red, anchor = south east] at (1.2, 1.75) {$y_2$};
871
- \draw [blue] plot [smooth, tension=1.2] coordinates {(1, 1.4) (1.4, 0.8) (2.6, 0.7) (3.6, 0.9) (4, 1.5)};
872
- \node [blue, anchor = north west] at (3, 0.8) {$y_1$};
873
-
874
- \draw [dashed] (1, 1.5) -- (1, 0) node [below] {$\alpha$};
875
- \draw [dashed] (4, 1.5) -- (4, 0) node [below] {$\beta$};
876
-
877
- \draw [->] (0, 0) -- (5, 0) node [right] {$x$};
878
- \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
879
- \end{tikzpicture}
880
- \end{center}
881
- We have $\d A = [y_2(x) - y_1(x)]\;\d x$. So
882
- \[
883
- A = \int_\alpha^\beta [y_2(x) - y_1(x)]\;\d x.
884
- \]
885
- Alternatively,
886
- \[
887
- A[y] = \oint y(x)\;\d x.
888
- \]
889
- and the length is
890
- \[
891
- L[y] = \oint \d\ell = \oint \sqrt{1 + (y')^2}\;\d x.
892
- \]
893
- So we look for stationary points of
894
- \[
895
- \Phi_\lambda [y] = \oint [y(x) - \lambda\sqrt{1 + (y')^2}]\;\d x + \lambda L.
896
- \]
897
- In this case, we can be sure that our boundary terms vanish since there is no boundary.
898
-
899
- Since there is no explicit dependence on $x$, we obtain the first integral
900
- \[
901
- f - y'\frac{\partial f}{\partial y'} = \text{constant} = y_0.
902
- \]
903
- So
904
- \[
905
- y_0 = y - \lambda\sqrt{1 + (y')^2} + \frac{\lambda (y')^2}{\sqrt{1 + (y')^2}} = y - \frac{\lambda}{\sqrt{1 + (y')^2}}.
906
- \]
907
- So
908
- \begin{align*}
909
- (y - y_0)^2 &= \frac{\lambda^2}{1 + (y')^2}\\
910
- (y')^2 &= \frac{\lambda^2}{(y - y_0)^2} - 1\\
911
- \frac{(y - y_0)y'}{\sqrt{\lambda^2 - (y - y_0)^2}} &= \pm 1.\\
912
- \d\left[\sqrt{\lambda^2 - (y - y_0)^2}\pm x\right] &= 0.
913
- \end{align*}
914
- So we have
915
- \[
916
- \lambda^2 - (y - y_0)^2 = (x - x_0)^2,
917
- \]
918
- or
919
- \[
920
- (x - x_0)^2 + (y - y_0)^2 = \lambda^2.
921
- \]
922
- This is a circle of radius $\lambda$. Since the perimeter of this circle will be $2\pi \lambda$, we must have $\lambda = L/(2\pi)$. So the maximum area is $\pi\lambda^2 = L^2/(4\pi)$.
923
- \end{eg}
924
-
925
- \begin{eg}[Sturm-Liouville problem]
926
- The \emph{Sturm-Liouville problem} is a very general class of problems. We will develop some very general theory about these problems without going into specific examples. It can be formulated as follows: let $\rho(x)$, $\sigma(x)$ and $w(x)$ be real functions of $x$ defined on $\alpha \leq x \leq \beta$. We will consider the special case where $\rho$ and $w$ are positive on $\alpha < x < \beta$. Our objective is to find stationary points of the functional
927
- \[
928
- F[y] = \int_\alpha^\beta (\rho(x)(y')^2 + \sigma(x)y^2)\;\d x
929
- \]
930
- subject to the condition
931
- \[
932
- G[y] = \int_\alpha^\beta w(x)y^2\;\d x = 1.
933
- \]
934
- Using the Euler-Lagrange equation, the functional derivatives of $F$ and $G$ are
935
- \begin{align*}
936
- \frac{\delta F[y]}{\delta y} &= 2\big(-(\rho y')' + \sigma y\big)\\
937
- \frac{\delta G[y]}{\delta y} &= 2 (wy).
938
- \end{align*}
939
- So the Euler-Lagrange equation of $\Phi_\lambda [y] = F[y] - \lambda(G[y] - 1)$ is
940
- \[
941
- -(\rho y')' + \sigma y - \lambda wy = 0.
942
- \]
943
- We can write this as the eigenvalue problem
944
- \[
945
- \mathcal{L}y = \lambda wy.
946
- \]
947
- where
948
- \[
949
- \mathcal{L} = -\frac{\d}{\d x}\left(\rho\frac{\d}{\d x}\right) + \sigma
950
- \]
951
- is the \emph{Sturm-Liouville operator}. We call this a \emph{Sturm-Liouville eigenvalue} problem. $w$ is called the \emph{weight function}.
952
-
953
- We can view this problem in a different way. Notice that $\mathcal{L} y = \lambda wy$ is linear in $y$. Hence if $y$ is a solution, then so is $Ay$. But if $G[y] = 1$, then $G[Ay] = A^2$. Hence the condition $G[y] = 1$ is simply a normalization condition. We can get around this problem by asking for the minimum of the functional
954
- \[
955
- \Lambda [y] = \frac{F[y]}{G[y]}
956
- \]
957
- instead. It turns out that this $\Lambda$ has some significance. To minimize $\Lambda$, we cannot apply the Euler-Lagrange equations, since $\Lambda$ is not of the form of an integral. However, we can try to vary it directly:
958
- \[
959
- \delta\Lambda = \frac{1}{G}\delta F - \frac{F}{G^2} \delta G = \frac{1}{G}(\delta F - \Lambda \delta G).
960
- \]
961
- When $\Lambda$ is minimized, we have
962
- \[
963
- \delta \Lambda = 0 \quad\Leftrightarrow\quad \frac{\delta F}{\delta y} = \Lambda \frac{\delta G}{\delta y}\quad \Leftrightarrow\quad \mathcal{L} y = \Lambda wy.
964
- \]
965
- So at stationary values of $\Lambda[y]$, $\Lambda$ is the associated Sturm-Liouville eigenvalue.
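- For a concrete instance, take $\rho = w = 1$ and $\sigma = 0$ on $[0, \pi]$, with the Dirichlet boundary conditions $y(0) = y(\pi) = 0$ (a standard choice that makes the boundary terms vanish). The eigenvalue problem becomes
- \[
- -y'' = \lambda y,\quad y(0) = y(\pi) = 0,
- \]
- whose eigenvalues are $\lambda_n = n^2$ with eigenfunctions $y_n \propto \sin nx$ for $n = 1, 2, \ldots$. Correspondingly, the minimum of $\Lambda[y] = \int_0^\pi (y')^2\;\d x \big/ \int_0^\pi y^2 \;\d x$ over such functions is the lowest eigenvalue $\lambda_1 = 1$.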
966
- \end{eg}
967
-
968
- \begin{eg}[Geodesics]
969
- Suppose that we have a surface in $\R^3$ defined by $g(\mathbf{x}) = 0$, and we want to find the path of shortest distance between two points on the surface. These paths are known as \emph{geodesics}.
970
-
971
- One possible approach is to solve $g(\mathbf{x}) = 0$ directly. For example, if we have a unit sphere, a possible solution is $x = \sin\theta \cos\phi$, $y = \sin\theta\sin\phi$, $z = \cos\theta$. Then the total length of a path would be given by
972
- \[
973
- D[\theta, \phi] = \int_A^B \sqrt{\d \theta^2 + \sin^2 \theta \d \phi^2}.
974
- \]
975
- We then vary $\theta$ and $\phi$ to minimize $D$ and obtain a geodesic.
976
-
977
- Alternatively, we can impose the condition $g(\mathbf{x}(t)) = 0$ with a Lagrange multiplier. However, since we want the constraint to be satisfied for \emph{all} $t$, we need a Lagrange multiplier \emph{function} $\lambda(t)$. Then our problem would be to find stationary values of
978
- \[
979
- \Phi[\mathbf{x}, \lambda] = \int_0^1 \big(|\dot{\mathbf{x}}| - \lambda(t) g(\mathbf{x}(t))\big)\;\d t
980
- \]
981
- \end{eg}
982
- \section{Hamilton's principle}
983
- \label{sec:hamilton}
984
- As mentioned before, Lagrange and Hamilton reformulated Newtonian dynamics into a much more robust system based on an action principle.
985
-
986
- The first important concept is the idea of a \emph{configuration space}. This configuration space is a space of \emph{generalized coordinates} $\xi(t)$ that specify the configuration of the system. The idea is to capture \emph{all} information about the system in one single vector.
987
-
988
- In the simplest case of a single free particle, these generalized coordinates would simply be the coordinates of the position of the particle. If we have two particles given by positions $\mathbf{x}(t) = (x_1, x_2, x_3)$ and $\mathbf{y}(t) = (y_1, y_2, y_3)$, our generalized coordinates might be $\xi(t) = (x_1, x_2, x_3, y_1, y_2, y_3)$. In general, if we have $N$ different free particles, the configuration space has $3N$ dimensions.
989
-
990
- The important thing is that the \emph{generalized} coordinates need not be just the usual Cartesian coordinates. If we are describing a pendulum in a plane, we do not need to specify the $x$ and $y$ coordinates of the mass. Instead, the system can be described by just the angle $\theta$ between the mass and the vertical. So the generalized coordinate is just $\xi(t) = \theta(t)$. This is much more natural to work with and avoids the hassle of imposing constraints on $x$ and $y$.
991
-
992
- \subsection{The Lagrangian}
993
- The concept of generalized coordinates was first introduced by Lagrange in 1788. He then showed that $\xi(t)$ obeys certain complicated ODEs which are determined by the kinetic energy and the potential energy.
994
-
995
- In the 1830s, Hamilton made Lagrange's mechanics much more pleasant. He showed that the solutions of these ODEs are extremal points of a new ``action'',
996
- \[
997
- S[\xi] = \int L\;\d t
998
- \]
999
- where
1000
- \[
1001
- L = T - V
1002
- \]
1003
- is the \emph{Lagrangian}, with $T$ the kinetic energy and $V$ the potential energy.
1004
-
1005
- \begin{law}[Hamilton's principle]
1006
- The actual path $\xi(t)$ taken by a particle is the path that makes the action $S$ stationary.
1007
- \end{law}
1008
-
1009
- Note that $S$ has dimensions $ML^2T^{-1}$, which is the same as the 18th century action (and Planck's constant).
1010
-
1011
- \begin{eg}
1012
- Suppose we have 1 particle in Euclidean 3-space. The configuration space is simply the coordinates of the particle in space. We can choose Cartesian coordinates $\mathbf{x}$. Then
1013
- \[
1014
- T = \frac{1}{2}m|\dot{\mathbf{x}}|^2,\quad V = V(\mathbf{x}, t)
1015
- \]
1016
- and
1017
- \[
1018
- S[\mathbf{x}] = \int_{t_A}^{t_B}\left(\frac{1}{2}m|\dot{\mathbf{x}}|^2 - V(\mathbf{x}, t)\right)\;\d t.
1019
- \]
1020
- Then the Lagrangian is
1021
- \[
1022
- L(\mathbf{x}, \dot{\mathbf{x}}, t) = \frac{1}{2}m|\dot{\mathbf{x}}|^2 - V(\mathbf{x}, t)
1023
- \]
1024
- We apply the Euler-Lagrange equations to obtain
1025
- \[
1026
- 0 = \frac{\d}{\d t}\left(\frac{\partial L}{\partial \dot{\mathbf{x}}}\right) - \frac{\partial L}{\partial \mathbf{x}} = m\ddot{\mathbf{x}} + \nabla V.
1027
- \]
1028
- So
1029
- \[
1030
- m\ddot{\mathbf{x}} = -\nabla V
1031
- \]
1032
- This is Newton's law $\mathbf{F} = m\mathbf{a}$ with $\mathbf{F} = -\nabla V$. This shows that Lagrangian mechanics is ``the same'' as Newton's law. However, Lagrangian mechanics has the advantage that it does not care what coordinates you use, while Newton's law requires an inertial frame of reference.
1033
- \end{eg}
1034
- Lagrangian mechanics applies even when $V$ is time-dependent. However, if $V$ is independent of time, then so is $L$. Then we can obtain a first integral.
1035
-
1036
- As before, the chain rule gives
1037
- \begin{align*}
1038
- \frac{\d L}{\d t} &= \frac{\partial L}{\partial t} + \dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \mathbf{x}} + \ddot{\mathbf{x}} \cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}\\
1039
- &= \frac{\partial L}{\partial t} + \dot{\mathbf{x}}\cdot \underbrace{\left(\frac{\partial L}{\partial \mathbf{x}} - \frac{\d}{\d t}\left(\frac{\partial L}{\partial \dot{\mathbf{x}}}\right)\right)}_{\displaystyle\frac{\delta S}{\delta x} = 0} +
1040
- \underbrace{\dot{\mathbf{x}} \cdot \frac{\d}{\d t}\left(\frac{\partial L}{\partial \dot{\mathbf{x}}}\right) + \ddot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}}_{\displaystyle\frac{\d}{\d t}\left(\dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}\right)}
1041
- \end{align*}
1042
- So we have
1043
- \[
1044
- \frac{\d}{\d t}\left(L - \dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}\right) = \frac{\partial L}{\partial t}.
1045
- \]
1046
- If $\frac{\partial L}{\partial t} = 0$, then
1047
- \[
1048
- \dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot {\mathbf{x}}} - L = E
1049
- \]
1050
- for some constant $E$. For example, for one particle,
1051
- \[
1052
- E = m|\dot{\mathbf{x}}|^2 - \frac{1}{2}m|\dot{\mathbf{x}}|^2 + V = T + V = \text{total energy}.
1053
- \]
1054
-
1055
- \begin{eg}
1056
- Consider a central force field $\mathbf{F} = -\nabla V$, where $V = V(r)$ is independent of time. We use spherical polar coordinates $(r, \theta, \phi)$, where
1057
- \begin{align*}
1058
- x &= r\sin \theta \cos \phi\\
1059
- y &= r\sin \theta \sin \phi\\
1060
- z &= r\cos \theta.
1061
- \end{align*}
1062
- So
1063
- \[
1064
- T = \frac{1}{2}m|\dot{\mathbf{x}}|^2 = \frac{1}{2}m\left(\dot{r}^2 + r^2(\dot{\theta}^2 + \sin^2 \theta \dot{\phi}^2)\right)
1065
- \]
1066
- So
1067
- \[
1068
- L = \frac{1}{2}m\dot{r}^2 + \frac{1}{2}mr^2\big(\dot{\theta}^2 + \sin^2\theta\dot{\phi}^2\big) - V(r).
1069
- \]
1070
- We'll use the fact that motion is planar (a consequence of angular momentum conservation). So wlog $\theta = \frac{\pi}{2}$. Then
1071
- \[
1072
- L = \frac{1}{2}m\dot{r}^2 + \frac{1}{2}mr^2 \dot{\phi}^2 - V(r).
1073
- \]
1074
- Then the Euler Lagrange equations give
1075
- \begin{align*}
1076
- m\ddot{r} - mr\dot{\phi}^2 + V'(r) &= 0\\
1077
- \frac{\d}{\d t}\left(mr^2 \dot{\phi}\right) &= 0.
1078
- \end{align*}
1079
- From the second equation, we see that $r^2 \dot\phi = h$ is a constant (angular momentum per unit mass). Then $\dot{\phi} = h/r^2$. So
1080
- \[
1081
- m\ddot{r} - \frac{mh^2}{r^3} + V'(r) = 0.
1082
- \]
1083
- If we let
1084
- \[
1085
- V_{\mathrm{eff}} = V(r) + \frac{mh^2}{2r^2}
1086
- \]
1087
- be the \emph{effective potential}, then we have
1088
- \[
1089
- m\ddot{r} = -V_{\mathrm{eff}}'(r).
1090
- \]
1091
- For example, in a gravitational field, $V(r) = -\frac{GMm}{r}$. Then
1092
- \[
1093
- V_{\mathrm{eff}} = m\left(-\frac{GM}{r} + \frac{h^2}{2r^2}\right).
1094
- \]
1095
- \end{eg}
1096
-
1097
- \subsection{The Hamiltonian}
1098
- In 1833, Hamilton took Lagrangian mechanics further and formulated \emph{Hamiltonian mechanics}. The idea is to abandon $\dot{\mathbf{x}}$ and use the conjugate momentum $\mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{x}}}$ instead. Of course, this involves taking the Legendre transform of the Lagrangian to obtain the Hamiltonian.
1099
-
1100
- \begin{defi}[Hamiltonian]
1101
- The \emph{Hamiltonian} of a system is the Legendre transform of the Lagrangian:
1102
- \[
1103
- H(\mathbf{x}, \mathbf{p}) = \mathbf{p}\cdot \dot{\mathbf{x}} - L(\mathbf{x}, \dot{\mathbf{x}}),
1104
- \]
1105
- where $\dot{\mathbf{x}}$ is a function of $\mathbf{p}$ that is the solution to $\mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{x}}}$.
1106
-
1107
- $\mathbf{p}$ is the \emph{conjugate momentum} of $\mathbf{x}$. The space containing the variables $\mathbf{x}, \mathbf{p}$ is known as the \emph{phase space}.
1108
- \end{defi}
1109
-
1110
- Since the Legendre transform is its own inverse, the Lagrangian is the Legendre transform of the Hamiltonian with respect to $\mathbf{p}$. So
1111
- \[
1112
- L = \mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{x}, \mathbf{p})
1113
- \]
1114
- with
1115
- \[
1116
- \dot{\mathbf{x}} = \frac{\partial H}{\partial \mathbf{p}}.
1117
- \]
1118
- Hence we can write the action using the Hamiltonian as
1119
- \[
1120
- S[\mathbf{x}, \mathbf{p}] = \int (\mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{x}, \mathbf{p}))\;\d t.
1121
- \]
1122
- This is the \emph{phase-space form} of the action. The Euler-Lagrange equations for these are
1123
- \[
1124
- \dot{\mathbf{x}} = \frac{\partial H}{\partial \mathbf{p}}, \quad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{x}}
1125
- \]
1126
- Using the Hamiltonian, the Euler-Lagrange equations put $\mathbf{x}$ and $\mathbf{p}$ on a much more equal footing, and the equations are more symmetric. There are also many useful concepts arising from the Hamiltonian, which are explored in much more depth in the II Classical Dynamics course.
1127
-
1128
- So what does the Hamiltonian look like? Consider the case of a single particle. The Lagrangian is given by
1129
- \[
1130
- L = \frac{1}{2}m|\dot{\mathbf{x}}|^2 - V(\mathbf{x}, t).
1131
- \]
1132
- Then the conjugate momentum is
1133
- \[
1134
- \mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{x}}} = m\dot{\mathbf{x}},
1135
- \]
1136
- which happens to coincide with the usual definition of the momentum. However, the conjugate momentum is often something more interesting when we use generalized coordinates. For example, in polar coordinates, the conjugate momentum of the angle is the angular momentum.
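- For instance, for a particle moving in a plane described by polar coordinates $(r, \phi)$, the Lagrangian is
- \[
- L = \frac{1}{2}m(\dot{r}^2 + r^2\dot{\phi}^2) - V(r),
- \]
- and the momentum conjugate to $\phi$ is $p_\phi = \frac{\partial L}{\partial \dot{\phi}} = mr^2\dot{\phi}$, i.e.\ the angular momentum (this is $mh$ in the notation of the central force example above).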
1137
-
1138
- Substituting $\mathbf{p} = m\dot{\mathbf{x}}$ into the Hamiltonian, we obtain
1139
- \begin{align*}
1140
- H(\mathbf{x}, \mathbf{p}) &= \mathbf{p}\cdot \frac{\mathbf{p}}{m} - \frac{1}{2}m\left(\frac{\mathbf{p}}{m}\right)^2 + V(\mathbf{x}, t)\\
1141
- &= \frac{1}{2m}|\mathbf{p}|^2 + V.
1142
- \end{align*}
1143
- So $H$ is the total energy, but expressed in terms of $\mathbf{x}, \mathbf{p}$, not $\mathbf{x}, \dot{\mathbf{x}}$.
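- With this $H$, Hamilton's equations reproduce Newton's law:
- \[
- \dot{\mathbf{x}} = \frac{\partial H}{\partial \mathbf{p}} = \frac{\mathbf{p}}{m},\quad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{x}} = -\nabla V,
- \]
- so eliminating $\mathbf{p}$ gives $m\ddot{\mathbf{x}} = -\nabla V$ as before.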
1144
- \subsection{Symmetries and Noether's theorem}
1145
- Given
1146
- \[
1147
- F[x]= \int_\alpha^\beta f(x, \dot{x}, t)\;\d t,
1148
- \]
1149
- suppose we change variables by the transformation $t \mapsto t^*(t)$ and $x\mapsto x^*(t^*)$. Then we have a new independent variable and a new function. This gives
1150
- \[
1151
- F[x] \mapsto F^* [x^*] = \int_{\alpha^*}^{\beta^*} f(x^*, \dot{x}^*, t^*)\;\d t^*
1152
- \]
1153
- with $\alpha^* = t^*(\alpha)$ and $\beta^* = t^*(\beta)$.
1154
-
1155
- There are some transformations that are particularly interesting:
1156
- \begin{defi}[Symmetry]
1157
- If $F^*[x^*] = F[x]$ for all $x$, $\alpha$ and $\beta$, then the transformation $*$ is a \emph{symmetry}.
1158
- \end{defi}
1159
-
1160
- This transformation could be a translation in time or space, a rotation, or something fancier. The exact symmetries $F$ has depend on the form of $f$. For example, if $f$ only depends on the magnitudes of $x$, $\dot{x}$ and $t$, then rotation of space will be a symmetry.
1161
-
1162
- \begin{eg}\leavevmode
1163
- \begin{enumerate}
1164
- \item Consider the transformation $t \mapsto t$ and $x \mapsto x + \varepsilon$ for some small $\varepsilon$. Then
1165
- \[
1166
- F^*[x^*] = \int_\alpha^\beta f(x + \varepsilon, \dot{x}, t)\;\d t = \int_\alpha^\beta \left(f(x, \dot{x}, t) + \varepsilon \frac{\partial f}{\partial x}\right)\;\d t
1167
- \]
1168
- by Taylor expansion. Hence this transformation is a symmetry if $\frac{\partial f}{\partial x} = 0$.
1169
-
1170
- However, we also know that if $\frac{\partial f}{\partial x} = 0$, then we have the first integral
1171
- \[
1172
- \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) = 0.
1173
- \]
1174
- So $\frac{\partial f}{\partial \dot{x}}$ is a conserved quantity.
1175
- \item Consider the transformation $t \mapsto t - \varepsilon$. For the sake of sanity, we will also transform $x\mapsto x^*$ such that $x^*(t^*) = x(t)$. Then
1176
- \[
1177
- F^*[x^*] = \int_\alpha^\beta f(x, \dot{x}, t - \varepsilon)\;\d t = \int_\alpha^\beta \left(f(x, \dot{x}, t) - \varepsilon \frac{\partial f}{\partial t}\right)\;\d t.
1178
- \]
1179
- Hence this is a symmetry if $\frac{\partial f}{\partial t} = 0$.
1180
-
1181
- We also know that if $\frac{\partial f}{\partial t} = 0$ is true, then we obtain the first integral
1182
- \[
1183
- \frac{\d}{\d t}\left(f - \dot{x}\frac{\partial f}{\partial \dot{x}}\right) = 0
1184
- \]
1185
- So we have a conserved quantity $f - \dot{x}\frac{\partial f}{\partial \dot{x}}$.
1186
- \end{enumerate}
1187
- \end{eg}
1188
- We see that for each simple symmetry we have above, we can obtain a first integral, which then gives a constant of motion. Noether's theorem is a powerful generalization of this.
1189
-
1190
- \begin{thm}[Noether's theorem]
1191
- For every continuous symmetry of $F[x]$, the solutions (i.e.\ the stationary points of $F[x]$) will have a corresponding conserved quantity.
1192
- \end{thm}
1193
- What does ``continuous symmetry'' mean? Intuitively, it is a symmetry we can do ``a bit of''. For example, rotation is a continuous symmetry, since we can do a bit of rotation. However, reflection is not, since we cannot reflect by ``a bit''. We either reflect or we do not.
1194
-
1195
- Note that continuity is essential. For example, if $f$ is quadratic in $x$ and $\dot{x}$, then $x\mapsto -x$ will be a symmetry. But since it is not continuous, Noether's theorem does not give us a corresponding conserved quantity.
1196
-
1197
- Since the theorem requires a continuous symmetry, we can just consider infinitesimally small symmetries and ignore second-order terms. Almost every equation will have some $O(\varepsilon^2)$ that we will not write out.
1198
-
1199
- We will consider symmetries that involve only the $x$ variable. Up to first order, we can write the symmetry as
1200
- \[
1201
- t \mapsto t,\quad x(t)\mapsto x(t) + \varepsilon h(t),
1202
- \]
1203
- for some $h(t)$ representing the symmetry transformation (and $\varepsilon$ a small number).
1204
-
1205
- Saying that this transformation is a symmetry means that when we pick $\varepsilon$ to be any (small) constant, the functional $F[x]$ does not change, i.e.\ $\delta F = 0$.
1206
-
1207
- On the other hand, since $x(t)$ is a stationary point of $F[x]$, we know that if $\varepsilon$ is non-constant but vanishes at the end-points, then $\delta F = 0$ as well. We will combine these two pieces of information to find a conserved quantity of the system.
1208
-
1209
- For the moment, we do not assume anything about $\varepsilon$ and see what happens to $F[x]$. Under the transformation, the change in $F[x]$ is given by
1210
- \begin{align*}
1211
- \delta F &= \int \big(f(x + \varepsilon h, \dot{x} + \varepsilon \dot{h} + \dot{\varepsilon} h, t) - f(x, \dot{x}, t)\big)\;\d t\\
1212
- &= \int\left(\frac{\partial f}{\partial x}\varepsilon h + \frac{\partial f}{\partial \dot{x}}\varepsilon \dot{h} + \frac{\partial f}{\partial \dot{x}}\dot{\varepsilon} h\right)\;\d t\\
1213
- &= \int \varepsilon\left(\frac{\partial f}{\partial x}h + \frac{\partial f}{\partial \dot{x}}\dot{h}\right)\;\d t + \int \dot{\varepsilon} \left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t.
1214
- \end{align*}
1215
- First consider the case where $\varepsilon$ is a constant. Then the second integral vanishes. So we obtain
1216
- \[
1217
- \varepsilon \int \left(\frac{\partial f}{\partial x}h + \frac{\partial f}{\partial\dot{x}}\dot{h}\right)\;\d t = 0
1218
- \]
1219
- Since this holds for arbitrary $\alpha$ and $\beta$, the integrand itself must vanish:
1220
- \[
1221
- \frac{\partial f}{\partial x}h + \frac{\partial f}{\partial\dot{x}}\dot{h} = 0
1222
- \]
1223
- Hence we know that
1224
- \[
1225
- \delta F = \int\dot{\varepsilon}\left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t.
1226
- \]
1227
- Now consider an $\varepsilon$ that varies with time but vanishes at the end-points. Since $x$ is a solution, we must have $\delta F = 0$. So we get
1228
- \[
1229
- \int\dot{\varepsilon}\left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t = 0.
1230
- \]
1231
- We can integrate by parts to obtain
1232
- \[
1233
- \int \varepsilon \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t = 0.
1234
- \]
1235
- for \emph{any} $\varepsilon$ that vanishes at end-points. Hence we must have
1236
- \[
1237
- \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}h\right) = 0.
1238
- \]
1239
- So $\frac{\partial f}{\partial \dot{x}}h$ is a conserved quantity.
1240
-
1241
- Obviously, not all symmetries just involve the $x$ variable. For example, we might have a time translation $t \mapsto t - \varepsilon$. However, we can encode this as a transformation of the $x$ variable only, as $x(t) \mapsto x(t - \varepsilon)$.
1242
-
1243
- In general, to find the conserved quantity associated with the symmetry $x(t)\mapsto x(t) + \varepsilon h(t)$, we find the change $\delta F$ assuming that $\varepsilon$ is a function of time as opposed to a constant. Then the coefficient of $\dot{\varepsilon}$ is the conserved quantity.
1244
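- As a minimal illustration of this recipe (a standard check, added here for concreteness), take $f = \frac{1}{2}m\dot{x}^2$ with the translation symmetry $x(t) \mapsto x(t) + \varepsilon$, i.e.\ $h(t) = 1$. Allowing $\varepsilon$ to depend on time, the change in the integrand is
- \[
- \frac{1}{2}m(\dot{x} + \dot{\varepsilon})^2 - \frac{1}{2}m\dot{x}^2 = m\dot{x}\dot{\varepsilon} + O(\dot{\varepsilon}^2),
- \]
- so the coefficient of $\dot{\varepsilon}$ is $m\dot{x}$, and we recover conservation of momentum.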
-
1245
- \begin{eg}
1246
- We can apply this to Hamiltonian mechanics. The motion of the particle is the stationary point of
1247
- \[
1248
- S[\mathbf{x}, \mathbf{p}] = \int \left(\mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{x}, \mathbf{p})\right)\;\d t,
1249
- \]
1250
- where
1251
- \[
1252
- H = \frac{1}{2m} |\mathbf{p}|^2 + V(\mathbf{x}).
1253
- \]
1254
- \begin{enumerate}
1255
- \item First consider the case where there is no potential. Since the action depends only on $\dot{\mathbf{x}}$ (or $\mathbf{p}$) and not $\mathbf{x}$ itself, it is invariant under the translation
1256
- \[
1257
- \mathbf{x}\mapsto \mathbf{x} + \boldsymbol\varepsilon,\quad \mathbf{p}\mapsto \mathbf{p}.
1258
- \]
1259
- For general $\boldsymbol\varepsilon$ that can vary with time, we have
1260
- \begin{align*}
1261
- \delta S &= \int \Big[\big(\mathbf{p}\cdot (\dot{\mathbf{x}} + \dot{\boldsymbol\varepsilon}) - H(\mathbf{p})\big) - \big(\mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{p})\big)\Big]\;\d t\\
1262
- &= \int \mathbf{p}\cdot \dot{\boldsymbol\varepsilon}\;\d t.
1263
- \end{align*}
1264
- Hence $\mathbf{p}$ (the momentum) is a constant of motion.
1265
- \item If the potential has no time-dependence, then the system is invariant under time translation. We'll skip the tedious computations and just state that time translation invariance implies conservation of $H$ itself, which is the energy.
1266
-
1267
- \item The above two results can also be obtained directly from first integrals of the Euler-Lagrange equation. However, we can do something cooler. Suppose that we have a potential $V(|\mathbf{x}|)$ that only depends on radius. Then this has a rotational symmetry.
1268
-
1269
- Choose any favorite axis of rotational symmetry $\boldsymbol\omega$, and make the rotation
1270
- \begin{align*}
1271
- \mathbf{x}&\mapsto \mathbf{x} + \varepsilon\boldsymbol\omega\times \mathbf{x}\\
1272
- \mathbf{p}&\mapsto \mathbf{p} + \varepsilon\boldsymbol\omega\times \mathbf{p},
1273
- \end{align*}
1274
- Then our rotation does not affect the radius $|\mathbf{x}|$ or the momentum $|\mathbf{p}|$. So the Hamiltonian $H(\mathbf{x}, \mathbf{p})$ is unaffected. Noting that $(\boldsymbol\omega\times \mathbf{p})\cdot \dot{\mathbf{x}} = 0$ (since $\mathbf{p}$ is parallel to $\dot{\mathbf{x}}$), we have
1275
- \begin{align*}
1276
- \delta S &= \int \left(\mathbf{p}\cdot \frac{\d}{\d t}\left(\mathbf{x} + \varepsilon\boldsymbol\omega \times \mathbf{x}\right) - \mathbf{p}\cdot \dot{\mathbf{x}}\right)\;\d t\\
1277
- &= \int \left(\mathbf{p}\cdot \frac{\d}{\d t}(\varepsilon\boldsymbol\omega\times \mathbf{x})\right)\;\d t\\
1278
- &= \int \left(\mathbf{p}\cdot \left[\boldsymbol\omega\times \frac{\d}{\d t}(\varepsilon\mathbf{x})\right]\right)\;\d t\\
1279
- &= \int \left(\mathbf{p}\cdot \left[\boldsymbol\omega\times (\dot{\varepsilon} \mathbf{x} + \varepsilon \dot{\mathbf{x}})\right]\right)\;\d t\\
1280
- &= \int \left(\dot{\varepsilon}\mathbf{p}\cdot (\boldsymbol\omega\times \mathbf{x}) + \varepsilon \mathbf{p}\cdot (\boldsymbol\omega \times \dot{\mathbf{x}})\right)\;\d t\\
1281
- \intertext{Since $\mathbf{p}$ is parallel to $\dot{\mathbf{x}}$, we are left with}
1282
- &= \int \left(\dot{\varepsilon}\mathbf{p}\cdot (\boldsymbol\omega\times \mathbf{x})\right)\;\d t\\
1283
- &= \int \dot{\varepsilon}\boldsymbol\omega\cdot (\mathbf{x}\times \mathbf{p})\;\d t.
1284
- \end{align*}
1285
- So $\boldsymbol\omega\cdot (\mathbf{x}\times \mathbf{p})$ is a constant of motion. Since this is true for all $\boldsymbol\omega$, $\mathbf{L} = \mathbf{x}\times \mathbf{p}$ must be a constant of motion, and this is the angular momentum.
1286
- \end{enumerate}
1287
- \end{eg}
1288
- \section{Multivariate calculus of variations}
1289
- So far, the function $x(t)$ we are varying is just a function of a single variable $t$. What if we have a more complicated function to consider?
1290
-
1291
- We will consider the most general case $\mathbf{y}(x_1, \cdots, x_m) \in \R^n$ that maps $\R^m \to \R^n$ (we can also view this as $n$ different functions that map $\R^m \to \R$). The functional will be a multiple integral of the form
1292
- \[
1293
- F[\mathbf{y}] = \int\cdots \int \; f(\mathbf{y}, \nabla \mathbf{y}, x_1, \cdots, x_m)\;\d x_1\cdots \d x_m,
1294
- \]
1295
- where $\nabla \mathbf{y}$ is the second-rank tensor defined as
1296
- \[
1297
- \nabla \mathbf{y} = \left(\frac{\partial \mathbf{y}}{\partial x_1}, \cdots, \frac{\partial \mathbf{y}}{\partial x_m}\right).
1298
- \]
1299
- In this case, instead of attempting to come up with some complicated generalized Euler-Lagrange equation, it is often a better idea to directly consider variations $\delta \mathbf{y}$ of $\mathbf{y}$. This is best illustrated by example.
1300
-
1301
- \begin{eg}[Minimal surfaces in $\E^3$]
1302
- This is a natural generalization of geodesics. A minimal surface is a surface of least area subject to some boundary conditions. Suppose that $(x, y)$ are good coordinates for a surface $S$, where $(x, y)$ takes values in the domain $D \subseteq \R^2$. Then the surface is defined by $z = h(x, y)$, where $h$ is the \emph{height function}.
1303
-
1304
- When possible, we will denote partial differentiation by suffices, i.e.\ $h_x = \frac{\partial h}{\partial x}$. Then the area is given by
1305
- \[
1306
- A[h] = \int_D \sqrt{1 + h_x^2 + h_y^2}\;\d A.
1307
- \]
1308
- Consider a variation of $h(x, y)$: $h\mapsto h + \delta h(x, y)$. Then
1309
- \begin{align*}
1310
- A[h + \delta h] &= \int_D \sqrt{1 + (h_x + (\delta h)_x)^2 + (h_y + (\delta h)_y)^2}\;\d A\\
1311
- &= A[h] + \int_D \left(\frac{h_x (\delta h)_x + h_y (\delta h)_y}{\sqrt{1 + h_x^2 + h_y^2}} + O(\delta h^2)\right)\;\d A
1312
- \end{align*}
1313
- We integrate by parts to obtain
1314
- \[
1315
- \delta A = -\int_D \delta h\left(\frac{\partial}{\partial x} \left(\frac{h_x}{\sqrt{1 + h_x^2 + h_y^2}}\right) + \frac{\partial}{\partial y} \left(\frac{h_y}{\sqrt{1 + h_x^2 + h_y^2}}\right)\right)\;\d A + O(\delta h^2)
1316
- \]
1317
- plus some boundary terms. So our minimal surface will satisfy
1318
- \[
1319
- \frac{\partial}{\partial x} \left(\frac{h_x}{\sqrt{1 + h_x^2 + h_y^2}}\right) + \frac{\partial}{\partial y} \left(\frac{h_y}{\sqrt{1 + h_x^2 + h_y^2}}\right) = 0
1320
- \]
1321
- Simplifying, we have
1322
- \[
1323
- (1 + h_y^2)h_{xx} + (1 + h_x^2) h_{yy} - 2h_xh_y h_{xy} = 0.
1324
- \]
1325
- This is a non-linear 2nd-order PDE, the minimal-surface equation. While it is difficult to come up with a fully general solution to this PDE, we can consider some special cases.
1326
- \begin{itemize}
1327
- \item There is an obvious solution
1328
- \[
1329
- h(x, y) = Ax + By + C,
1330
- \]
1331
- since the equation involves second-derivatives and this function is linear. This represents a plane.
1332
-
1333
- \item If $|\nabla h|^2 \ll 1$, then $h_x^2$ and $h_y^2$ are small. So we have
1334
- \[
1335
- h_{xx} + h_{yy} = 0,
1336
- \]
1337
- or
1338
- \[
1339
- \nabla^2 h = 0.
1340
- \]
1341
- So we end up with the Laplace equation. Hence harmonic functions are (approximately) minimal-area.
1342
- \item We might want a cylindrically-symmetric solution, i.e.\ $h(x, y) = z(r)$, where $r = \sqrt{x^2 + y^2}$. Then we are left with an ordinary differential equation
1343
- \[
1344
- rz'' + z' + z'^3 = 0.
1345
- \]
1346
- The general solution is
1347
- \[
1348
- z = A^{-1}\cosh^{-1} (Ar) + B,
1349
- \]
1350
- which describes a \emph{catenoid}, $r = A^{-1}\cosh(A(z - B))$.
1351
-
1352
- Alternatively, to obtain this, we can substitute $h(x, y) = z(r)$ into $A[h]$ to get
1353
- \[
1354
- A[z] = 2\pi \int r\sqrt{1 + (z'(r))^2}\;\d r,
1355
- \]
1356
- and we can apply the Euler-Lagrange equation; a derivation along these lines is sketched just after this example.
1357
- \end{itemize}
1358
- \end{eg}
1359
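- As promised in the last item above, here is a sketch of how the catenoid arises. Since $f = r\sqrt{1 + (z')^2}$ has no explicit $z$-dependence, the Euler-Lagrange equation gives the first integral
- \[
- \frac{\partial f}{\partial z'} = \frac{rz'}{\sqrt{1 + (z')^2}} = \text{const} = \frac{1}{A}.
- \]
- Solving for $z'$ gives $z' = (A^2 r^2 - 1)^{-1/2}$, and integrating yields $z = A^{-1}\cosh^{-1}(Ar) + B$, i.e.\ $r = A^{-1}\cosh(A(z - B))$.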
-
1360
- \begin{eg}[Small amplitude oscillations of uniform string]
1361
- Suppose we have a string with uniform constant mass density $\rho$ with uniform tension $T$.
1362
- \begin{center}
1363
- \begin{tikzpicture}
1364
- \draw [domain=0:2] plot ({3*\x}, {sin(\x*180)/1.5});
1365
- \draw [dashed] (0, 0) -- (6, 0);
1366
- \draw [->] (1.5, 0) -- (1.5, 0.666) node [pos = 0.5, right] {$y$};
1367
- \end{tikzpicture}
1368
- \end{center}
1369
- Suppose we pull the string taut between $x = 0$ and $x = a$ with tension $T$. Then we set it into motion such that the amplitude is given by $y(x; t)$. Then the kinetic energy (which we write as $K$, since $T$ already denotes the tension) is
1370
- \[
1371
- K = \frac{1}{2}\int_0^a \rho v^2\;\d x = \frac{\rho}{2}\int_0^a \dot{y}^2 \;\d x.
1372
- \]
1373
- The potential energy is the tension times the length. So
1374
- \[
1375
- V = T\int \;\d \ell = T\int_0^a\sqrt{1 + (y')^2}\;\d x \approx Ta + \int_0^a \frac{1}{2}T(y')^2\;\d x.
1376
- \]
1377
- The last step uses $\sqrt{1 + (y')^2} \approx 1 + \frac{1}{2}(y')^2$, which is valid since the oscillations, and hence $y'$, are small. Note that $y'$ is the derivative wrt $x$ while $\dot{y}$ is the derivative wrt time.
1378
-
1379
- The $Ta$ term can be seen as the \emph{ground-state energy}. It is the energy initially stored if there is no oscillation. Since this constant term doesn't affect where the stationary points lie, we will ignore it. Then the action is given by
1380
- \[
1381
- S[y] = \iint_0^a \left(\frac{1}{2}\rho \dot{y}^2 - \frac{1}{2}T(y')^2\right)\;\d x\;\d t
1382
- \]
1383
- We apply Hamilton's principle which says that we need
1384
- \[
1385
- \delta S[y] = 0.
1386
- \]
1387
- We have
1388
- \[
1389
- \delta S[y] = \iint_0^a \left(\rho \dot{y} \frac{\partial}{\partial t}\delta y - Ty' \frac{\partial}{\partial x}\delta y\right)\;\d x\;\d t.
1390
- \]
1391
- Integrate by parts to obtain
1392
- \[
1393
- \delta S[y] = \iint_0^a \delta y(\rho \ddot{y} - Ty'')\;\d x\;\d t + \text{boundary term}.
1394
- \]
1395
- Assuming that the boundary term vanishes, we will need
1396
- \[
1397
- \ddot{y} - v^2 y'' = 0,
1398
- \]
1399
- where $v^2 = T/\rho$. This is the wave equation in one spatial dimension. Note that this is a linear PDE, a simplification resulting from our assumption that the oscillations are small.
1400
-
1401
- The general solution to the wave equation is
1402
- \[
1403
- y(x, t) = f_+(x - vt) + f_-(x + vt),
1404
- \]
1405
- which is a superposition of a wave travelling rightwards and a wave travelling leftwards.
1406
- \end{eg}
1407
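- As a quick check of the wave equation just derived (a standard verification, included as an aside), the standing waves
- \[
- y_n(x, t) = \sin\left(\frac{n\pi x}{a}\right)\cos\left(\frac{n\pi v t}{a}\right),\quad n = 1, 2, \ldots
- \]
- satisfy $\ddot{y}_n = v^2 y_n''$ together with the fixed-end conditions $y_n(0, t) = y_n(a, t) = 0$; these are the normal modes of the string.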
-
1408
- \begin{eg}[Maxwell's equations]
1409
- It is possible to obtain Maxwell's equations from an action principle, where we define a Lagrangian for the electromagnetic field. Note that this is the Lagrangian for the \emph{field} itself, and there is a separate Lagrangian for particles moving in a field.
1410
-
1411
- We have to first define several quantities. First we have the charges: $\rho$ represents the electric charge density and $\mathbf{J}$ represents the electric current density.
1412
-
1413
- Then we have the potentials: $\phi$ is the electric scalar potential and $\mathbf{A}$ is the magnetic vector potential.
1414
-
1415
- Finally the fields: $\mathbf{E} = -\nabla \phi - \dot{\mathbf{A}}$ is the electric field, and $\mathbf{B} = \nabla\times \mathbf{A}$ is the magnetic field.
1416
-
1417
- We pick convenient units where $c = \varepsilon_0 = \mu_0 = 1$. With these concepts in mind, the action is given by
1418
- \[
1419
- S[\mathbf{A}, \phi] = \int \left(\frac{1}{2}(|\mathbf{E}|^2 - |\mathbf{B}|^2) + \mathbf{A}\cdot \mathbf{J} - \phi \rho\right)\;\d V\;\d t
1420
- \]
1421
- Varying $\mathbf{A}$ and $\phi$ by $\delta \mathbf{A}$ and $\delta \phi$ respectively, we have
1422
- \[
1423
- \delta S = \int \left(-\mathbf{E}\cdot \left(\nabla \delta\phi + \frac{\partial}{\partial t} \delta \mathbf{A}\right) - \mathbf{B}\cdot \nabla \times \delta \mathbf{A} + \delta \mathbf{A}\cdot \mathbf{J} - \rho\delta\phi\right)\;\d V\;\d t.
1424
- \]
1425
- Integrate by parts to obtain
1426
- \[
1427
- \delta S = \int\left(\delta\mathbf{A}\cdot (\dot{\mathbf{E}} - \nabla\times \mathbf{B} + \mathbf{J}) + \delta \phi(\nabla\cdot \mathbf{E} - \rho)\right)\;\d V\;\d t.
1428
- \]
1429
- Since the coefficients have to be $0$, we must have
1430
- \[
1431
- \nabla \times \mathbf{B} = \mathbf{J} + \dot{\mathbf{E}},\quad \nabla \cdot \mathbf{E} = \rho.
1432
- \]
1433
- Also, the definitions of $\mathbf{E}$ and $\mathbf{B}$ immediately give
1434
- \[
1435
- \nabla\cdot \mathbf{B} = 0,\quad \nabla \times \mathbf{E} = - \dot{\mathbf{B}}.
1436
- \]
1437
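- To spell this out (a one-line check using standard vector identities), taking the divergence of $\mathbf{B} = \nabla\times \mathbf{A}$ and the curl of $\mathbf{E} = -\nabla\phi - \dot{\mathbf{A}}$ gives
- \[
- \nabla\cdot \mathbf{B} = \nabla\cdot(\nabla\times \mathbf{A}) = 0,\quad \nabla\times \mathbf{E} = -\nabla\times\nabla\phi - \frac{\partial}{\partial t}(\nabla\times \mathbf{A}) = -\dot{\mathbf{B}}.
- \]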
- These four equations are Maxwell's equations.
1438
- \end{eg}
1439
- \section{The second variation}
1440
- \subsection{The second variation}
1441
- So far, we have only looked at the ``first derivatives'' of functionals. We can identify stationary points, but we don't know if it is a maximum, minimum or a saddle. To distinguish between these, we have to look at the ``second derivatives'', or the \emph{second variation}.
1442
-
1443
- Suppose $x(t) = x_0(t)$ is a solution of
1444
- \[
1445
- \frac{\delta F[x]}{\delta x(t)} = 0,
1446
- \]
1447
- i.e.\ $F[x]$ is stationary at $x = x_0$.
1448
-
1449
- To determine what type of stationary point it is, we need to expand $F[x + \delta x]$ to second order in $\delta x$. For convenience, let $\delta x(t) = \varepsilon \xi(t)$ with constant $\varepsilon \ll 1$. We will also only consider functionals of the form
1450
- \[
1451
- F[x] = \int_\alpha^\beta f(x, \dot{x}, t)\;\d t
1452
- \]
1453
- with fixed-end boundary conditions, i.e.\ $\xi(\alpha) = \xi(\beta) = 0$. We will use both dots ($\dot{x}$) and dashes ($x'$) to denote derivatives.
1454
-
1455
- We consider a variation $x \mapsto x + \delta x$ and expand the integrand to obtain
1456
- \begin{align*}
1457
- &f(x + \varepsilon \xi, \dot{x} + \varepsilon \dot{\xi}, t) - f(x, \dot{x}, t)\\
1458
- &= \varepsilon \left(\xi \frac{\partial f}{\partial x} + \dot{\xi}\frac{\partial f}{\partial \dot{x}}\right) + \frac{\varepsilon^2}{2}\left(\xi^2 \frac{\partial^2 f}{\partial x^2} + 2\xi\dot{\xi} \frac{\partial^2 f}{\partial x \partial \dot{x}} + \dot{\xi}^2 \frac{\partial^2 f}{\partial \dot{x}^2}\right) + O(\varepsilon^3)\\
1459
- \intertext{Noting that $2\xi \dot{\xi} = (\xi^2)'$ and integrating by parts, we obtain}
1460
- &= \varepsilon \xi\left[\frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)\right] + \frac{\varepsilon^2}{2}\left\{\xi^2\left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x\partial \dot{x}}\right)\right] + \dot{\xi}^2 \frac{\partial^2 f}{\partial \dot{x}^2}\right\}.
1461
- \end{align*}
1462
- plus some boundary terms which vanish. So
1463
- \[
1464
- F[x + \varepsilon \xi] - F[x] = \int_\alpha^\beta\varepsilon\xi\left[\frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)\right]\;\d t + \frac{\varepsilon^2}{2}\delta^2 F[x, \xi] + O(\varepsilon^3),
1465
- \]
1466
- where
1467
- \[
1468
- \delta^2 F[x, \xi] = \int_\alpha^\beta \left\{\xi^2 \left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x \partial \dot{x}}\right)\right] + \dot{\xi}^2 \frac{\partial^2 f}{\partial \dot{x}^2}\right\}\;\d t
1469
- \]
1470
- is a functional of both $x(t)$ and $\xi(t)$. This is analogous to the term
1471
- \[
1472
- \delta \mathbf{x}^T H(\mathbf{x})\delta \mathbf{x}
1473
- \]
1474
- appearing in the expansion of a regular function $f(\mathbf{x})$. For ordinary functions, if the Hessian $H(\mathbf{x})$ is positive definite for all $\mathbf{x}$, then $f(\mathbf{x})$ is convex, and the stationary point is hence a global minimum. A similar result holds for functionals.
1475
-
1476
- In this case, if $\delta^2 F[x, \xi] > 0$ for all non-zero $\xi$ and all allowed $x$, then a solution $x_0(t)$ of $\frac{\delta F}{\delta x} = 0$ is an absolute minimum.
1477
-
1478
- \begin{eg}[Geodesics in the plane]
1479
- We previously showed that a straight line is a stationary point of the curve-length functional, but we didn't show it is in fact the shortest distance! Maybe it is a maximum, and we can get the shortest distance by routing to the moon and back.
1480
-
1481
- Recall that $f = \sqrt{1 + (y')^2}$. Then
1482
- \[
1483
- \frac{\partial f}{\partial y} = 0,\quad \frac{\partial f}{\partial y'} = \frac{y'}{\sqrt{1 + (y')^2}},\quad \frac{\partial^2 f}{\partial y'^2} = \frac{1}{(1 + (y')^2)^{3/2}},
1484
- \]
1485
- with the other second derivatives zero. So we have
1486
- \[
1487
- \delta^2 F[y, \xi] = \int_\alpha^\beta \frac{(\xi')^2}{(1 + (y')^2)^{3/2}}\;\d x > 0
1488
- \]
1489
- So if we have a stationary function satisfying the boundary conditions, it is an absolute minimum. Since the straight line is a stationary function, it is indeed the minimum.
1490
- \end{eg}
1491
- However, not all functionals are convex\textsuperscript{[\textcolor{blue}{citation needed}]}. We can still ask whether a solution $x_0(t)$ of the Euler-Lagrange equation is a local minimum. For this, we need to consider
1492
- \[
1493
- \delta^2 F[x_0, \xi] = \int_\alpha^\beta (\rho(t)\dot{\xi}^2 + \sigma(t) \xi^2)\;\d t,
1494
- \]
1495
- where
1496
- \[
1497
- \rho(t) = \left.\frac{\partial^2 f}{\partial \dot{x}^2}\right|_{x = x_0},\quad
1498
- \sigma(t) = \left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x \partial \dot{x}}\right)\right]_{x = x_0}.
1499
- \]
1500
- This is of the same form as the Sturm-Liouville problem. For $x_0$ to minimize $F[x]$ locally, we need $\delta^2 F[x_0, \xi] > 0$. A necessary condition for this is
1501
- \[
1502
- \rho(t) \geq 0,
1503
- \]
1504
- which is the \emph{Legendre condition}.
1505
-
1506
- The intuition behind this necessary condition is as follows: suppose that $\rho(t)$ is negative in some interval $I\subseteq [\alpha, \beta]$. Then we can find a $\xi(t)$ that makes $\delta^2 F[x_0, \xi]$ negative. We simply have to make $\xi$ zero outside $I$, and small but crazily oscillating inside $I$. Then inside $I$, $\dot{\xi}^2$ will be very large while $\xi^2$ is kept tiny. So we can make $\delta^2 F[x_0, \xi]$ arbitrarily negative.
1507
-
1508
- Turning the intuition into a formal proof is not difficult but is tedious and will be omitted.
1509
-
1510
- However, this is not a sufficient condition. Even if we had a strict inequality $\rho (t) > 0$ for all $\alpha < t < \beta$, it is still not sufficient.
1511
-
1512
- Of course, a sufficient (but not necessary) condition is $\rho(t) > 0, \sigma(t) \geq 0$, but this is not too interesting.
1513
-
1514
- \begin{eg}
1515
- In the Branchistochrone problem, we have
1516
- \[
1517
- T[x] \propto \int_\alpha^\beta \sqrt{\frac{1 + \dot{x}^2}{x}}\;\d t.
1518
- \]
1519
- Then
1520
- \begin{align*}
1521
- \rho(t) &= \left.\frac{\partial^2 f}{\partial \dot{x}^2}\right|_{x_0} > 0\\
1522
- \sigma(t) &= \frac{1}{2x^2\sqrt{x(1 + \dot{x}^2)}} > 0.
1523
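- For completeness (a step not spelled out above), the claim about $\rho$ can be verified explicitly: with $f = \sqrt{(1 + \dot{x}^2)/x}$,
- \[
- \frac{\partial f}{\partial \dot{x}} = \frac{\dot{x}}{\sqrt{x(1 + \dot{x}^2)}},\quad \frac{\partial^2 f}{\partial \dot{x}^2} = \frac{1}{\sqrt{x}\,(1 + \dot{x}^2)^{3/2}},
- \]
- which is indeed positive, since $x > 0$ along the curve.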
- \end{align*}
1524
- So the cycloid does minimize the time $T$.
1525
- \end{eg}
1526
- \subsection{Jacobi condition for local minima of \texorpdfstring{$F[x]$}{F[x]}}
1527
- Legendre tried to prove that $\rho > 0$ is a sufficient condition for $\delta^2 F > 0$. This is known as the \emph{strong Legendre condition}. However, he obviously failed, since it is indeed not a sufficient condition. Yet, it turns out that he was close.
1528
-
1529
- Before we get to the actual sufficient condition, we first try to understand why thinking $\rho > 0$ is sufficient isn't as crazy as it first sounds.
1530
-
1531
- If $\rho > 0$ and $\sigma < 0$, we would want to create a negative $\delta^2 F[x_0, \xi]$ by choosing $\xi$ to be large but slowly varying. Then we would have a very negative $\sigma(t)\xi^2$ and only a small positive $\rho(t) \dot{\xi}^2$.
1532
-
1533
- The problem is that $\xi$ has to be $0$ at the end points $\alpha$ and $\beta$. For $\xi$ to take a large value, it must reach the value from $0$, and this requires some variation of $\xi$, thereby inducing some $\dot{\xi}$. This is not a problem if $\alpha$ and $\beta$ are far apart - we simply slowly climb up to a large value of $\xi$ and then slowly rappel back down, maintaining a low $\dot{\xi}$ throughout the process. However, it is not unreasonable to assume that as we make the distance $\beta - \alpha$ smaller and smaller, eventually all $\xi$ will lead to a positive $\delta^2 F[x_0, \xi]$, since we cannot reach large values of $\xi$ without having large $\dot{\xi}$.
1534
-
1535
- It turns out that the intuition is correct. As long as $\alpha$ and $\beta$ are sufficiently close, $\delta^2 F[x_0, \xi]$ will be positive. The derivation of this result is, however, rather roundabout, involving a number of algebraic tricks.
1536
-
1537
- For a solution $x_0$ to the Euler-Lagrange equation, we have
1538
- \[
1539
- \delta^2 F[x_0, \xi] = \int_\alpha^\beta \big(\rho(t) \dot{\xi}^2 + \sigma(t) \xi^2\big)\;\d t,
1540
- \]
1541
- where
1542
- \[
1543
- \rho(t) = \left.\frac{\partial^2 f}{\partial \dot{x}^2}\right|_{x = x_0},\quad
1544
- \sigma(t) = \left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x \partial \dot{x}}\right)\right]_{x = x_0}.
1545
- \]
1546
- Assume $\rho(t) > 0$ for $\alpha < t < \beta$ (the strong Legendre condition) and assume boundary conditions $\xi(\alpha) = \xi(\beta) = 0$. When is this sufficient for $\delta^2 F > 0$?
1547
-
1548
- First of all, notice that for any smooth function $w(t)$, we have
1549
- \[
1550
- 0 = \int_\alpha^\beta (w\xi^2)' \;\d t
1551
- \]
1552
- since this is a total derivative, and it evaluates to $w(\beta)\xi(\beta)^2 - w(\alpha)\xi(\alpha)^2 = 0$, as $\xi$ vanishes at the end-points. So we have
1553
- \[
1554
- 0 = \int_\alpha^\beta (2w\xi \dot{\xi} + \dot{w}\xi^2)\;\d t.
1555
- \]
1556
- This allows us to rewrite $\delta^2 F$ as
1557
- \[
1558
- \delta^2 F = \int_\alpha^\beta \big(\rho \dot{\xi}^2 + 2w\xi \dot{\xi} + (\sigma + \dot{w})\xi^2\big)\;\d t.
1559
- \]
1560
- Now complete the square in $\xi$ and $\dot{\xi}$. So
1561
- \[
1562
- \delta^2 F = \int_\alpha^\beta \left[\rho\left(\dot{\xi} + \frac{w}{\rho} \xi\right)^2 +\left(\sigma + \dot{w} - \frac{w^2}{\rho}\right)\xi^2 \right]\;\d t
1563
- \]
1564
- This is non-negative if
1565
- \[
1566
- w^2 = \rho(\sigma + \dot{w}).\tag{$*$}
1567
- \]
1568
- So as long as we can find a solution to this equation, we know that $\delta^2 F$ is non-negative. Could it be that $\delta^2 F = 0$? Turns out not. If it were, then $\dot{\xi} = -\frac{w}{\rho}\xi$. We can solve this to obtain
1569
- \[
1570
- \xi(t) = C\exp\left(-\int_\alpha^t \frac{w(s)}{\rho(s)}\;\d s\right).
1571
- \]
1572
- We know that $\xi(\alpha) = 0$. But $\xi(\alpha) = C e^0$. So $C = 0$. Hence equality holds only for $\xi = 0$.
1573
-
1574
- So all we need to do is to find a solution to $(*)$, and we are sure that $\delta^2 F > 0$.
1575
-
1576
- Note that this is non-linear in $w$. We can convert this into a linear equation by defining $w$ in terms of a new function $u$ by $w = -\rho \dot{u}/u$. Then $(*)$ becomes
1577
- \[
1578
- \rho\left(\frac{\dot{u}}{u}\right)^2 = \sigma - \left(\frac{\rho \dot{u}}{u}\right)' = \sigma - \frac{(\rho \dot{u})'}{u} + \rho \left(\frac{\dot{u}}{u}\right)^2.
1579
- \]
1580
- We see that the left and right terms cancel. So we have
1581
- \[
1582
- -(\rho \dot{u})' + \sigma u = 0.
1583
- \]
1584
- This is the \emph{Jacobi accessory equation}, a second-order linear ODE.
1585
-
1586
- There is a caveat here. Not every solution $u$ will do. Recall that $u$ is used to produce $w$ via $w = -\rho\dot{u}/u$. Hence within $[\alpha, \beta]$, we cannot have $u = 0$ since we cannot divide by zero. If we can find a non-zero $u(t)$ satisfying the Jacobi accessory equation, then $\delta^2 F > 0$ for $\xi \not= 0$, and hence $x_0$ is a local minimum of $F$.
1587
-
1588
- A suitable solution will always exist for sufficiently small $\beta - \alpha$, but may not exist if $\beta - \alpha$ is too large, as suggested at the beginning of this section.
1589
-
1590
- \begin{eg}[Geodesics on unit sphere]
1591
- For any curve $C$ on the sphere, we have
1592
- \[
1593
- L = \int_C \sqrt{\d \theta^2 + \sin^2 \theta \;\d \phi^2}.
1594
- \]
1595
- If $\theta$ is a good parameter of the curve, then
1596
- \[
1597
- L[\phi] = \int_{\theta _1}^{\theta_2} \sqrt{1 + \sin^2 \theta (\phi')^2}\;\d \theta.
1598
- \]
1599
- Alternatively, if $\phi$ is a good parameter, we have
1600
- \[
1601
- L[\theta] = \int_{\phi_1}^{\phi_2}\sqrt{(\theta')^2 + \sin^2 \theta}\;\d \phi.
1602
- \]
1603
- We will look at the second case.
1604
-
1605
- We have
1606
- \[
1607
- f(\theta, \theta') = \sqrt{(\theta')^2 + \sin^2 \theta}.
1608
- \]
1609
- So
1610
- \[
1611
- \frac{\partial f}{\partial \theta} = \frac{\sin \theta\cos \theta}{\sqrt{(\theta')^2 + \sin^2 \theta}},\quad \frac{\partial f}{\partial \theta'} = \frac{\theta'}{\sqrt{(\theta')^2 + \sin^2 \theta}}.
1612
- \]
1613
- Since $\frac{\partial f}{\partial \phi} = 0$, we have the first integral
1614
- \[
1615
- \text{const} = f - \theta' \frac{\partial f}{\partial \theta'} = \frac{\sin^2 \theta}{\sqrt{(\theta')^2 + \sin^2 \theta}}
1616
- \]
1617
- So a solution is
1618
- \[
1619
- c\sin^2 \theta = \sqrt{(\theta')^2 + \sin^2 \theta}.
1620
- \]
1621
- Here we need $c \geq 1$ for the equation to make sense.
1622
-
1623
- We will consider the case where $c = 1$ (in fact, we can show that we can always orient our axes such that $c = 1$). This occurs when $\theta' = 0$, i.e.\ $\theta$ is a constant. Then our first integral gives $\sin^2 \theta = \sin \theta$. So $\sin \theta = 1$ and $\theta = \pi/2$. This corresponds to a curve on the equator. (We ignore the case $\sin \theta = 0$, which gives $\theta = 0$; this is a rather silly solution.)
1624
-
1625
- There are two equatorial solutions to the Euler-Lagrange equations. Which, if any, minimizes $L[\theta]$?
1626
- \begin{center}
1627
- \begin{tikzpicture}
1628
- \draw circle [radius=2];
1629
- \draw [red] (2, 0) arc (0:240:2 and 0.5);
1630
- \draw [red] (2, 0) arc (0:-60:2 and 0.5);
1631
- \draw [blue] (0, -0.5) arc (-90:-60:2 and 0.5) node [circ] {};
1632
- \draw [blue] (0, -0.5) arc (270:240:2 and 0.5) node [circ] {};
1633
- \end{tikzpicture}
1634
- \end{center}
1635
- We have
1636
- \[
1637
- \left.\frac{\partial^2 f}{\partial (\theta')^2}\right|_{\theta = \pi/2} = 1
1638
- \]
1639
- and
1640
- \[
1641
- \left.\frac{\partial^2 f}{\partial \theta^2}\right|_{\theta = \pi/2} = -1,\quad \left.\frac{\partial^2 f}{\partial \theta\partial \theta'}\right|_{\theta = \pi/2} = 0.
1642
- \]
1643
- So $\rho(\phi) = 1$ and $\sigma(\phi) = -1$. So
1644
- \[
1645
- \delta^2 F = \int_{\phi_1}^{\phi_2} ((\xi')^2 - \xi^2)\;\d \phi.
1646
- \]
1647
- The Jacobi accessory equation is $u'' + u = 0$. So the general solution is $u \propto \sin \phi - \gamma \cos\phi$. This is equal to zero if $\tan \phi = \gamma$.
1648
-
1649
- Looking at the graph of $\tan \phi$, we see that $\tan$ is periodic with period $\pi$ and takes every real value within each period. Hence if the domain $\phi_2 - \phi_1$ is greater than $\pi$ (i.e.\ we go the long way round from the first point to the second), it will always contain a solution of $\tan \phi = \gamma$, whatever $\gamma$ is, so every such $u$ vanishes somewhere in the domain. So we cannot conclude that the longer path is a local minimum (it is certainly not a global minimum, by definition of longer); nor can we conclude that it is \emph{not} a local minimum, since we have only tested a sufficient, not a necessary, condition. On the other hand, if $\phi_2 - \phi_1$ is less than $\pi$, then we can pick a $\gamma$ such that $u$ is non-zero in the whole domain, so the shorter path is a local minimum.
1650
- \end{eg}
1651
- \end{document}
1652
-
books/cam/IB_L/complex_analysis.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_L/complex_methods.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_L/electromagnetism.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_L/fluid_dynamics.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_L/geometry.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_L/groups_rings_and_modules.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_L/numerical_analysis.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_L/statistics.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_M/analysis_ii.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_M/linear_algebra.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_M/markov_chains.tex DELETED
@@ -1,1665 +0,0 @@
1
- \documentclass[a4paper]{article}
2
-
3
- \def\npart {IB}
4
- \def\nterm {Michaelmas}
5
- \def\nyear {2015}
6
- \def\nlecturer {G.\ R.\ Grimmett}
7
- \def\ncourse {Markov Chains}
8
-
9
- \input{header}
10
-
11
- \begin{document}
12
- \maketitle
13
- {\small
14
- \noindent\textbf{Discrete-time chains}\\
15
- Definition and basic properties, the transition matrix. Calculation of $n$-step transition probabilities. Communicating classes, closed classes, absorption, irreducibility. Calculation of hitting probabilities and mean hitting times; survival probability for birth and death chains. Stopping times and statement of the strong Markov property.\hspace*{\fill} [5]
16
-
17
- \vspace{5pt}
18
- \noindent Recurrence and transience; equivalence of transience and summability of $n$-step transition probabilities; equivalence of recurrence and certainty of return. Recurrence as a class property, relation with closed classes. Simple random walks in dimensions one, two and three.\hspace*{\fill} [3]
19
-
20
- \vspace{5pt}
21
- \noindent Invariant distributions, statement of existence and uniqueness up to constant multiples. Mean return time, positive recurrence; equivalence of positive recurrence and the existence of an invariant distribution. Convergence to equilibrium for irreducible, positive recurrent, aperiodic chains *and proof by coupling*. Long-run proportion of time spent in a given state.\hspace*{\fill} [3]
22
-
23
- \vspace{5pt}
24
- \noindent Time reversal, detailed balance, reversibility, random walk on a graph.\hspace*{\fill} [1]}
25
-
26
- \tableofcontents
27
- \setcounter{section}{-1}
28
- \section{Introduction}
29
- So far, in IA Probability, we have always dealt with one random variable, or numerous independent variables, and we were able to handle them. However, in real life, things often \emph{are} dependent, and things become much more difficult.
30
-
31
- In general, there are many ways in which variables can be dependent. Their dependence can be very complicated, or very simple. If we are just told two variables are dependent, we have no idea what we can do with them.
32
-
33
- This is similar to our study of functions. We can develop theories about continuous functions, increasing functions, or differentiable functions, but if we are just given a random function without assuming anything about it, there really isn't much we can do.
34
-
35
- Hence, in this course, we are just going to study a particular kind of dependent variables, known as \emph{Markov chains}. In fact, in IA Probability, we have already encountered some of these. One prominent example is the random walk, in which the next position depends on the previous position. This gives us some dependent random variables, but they are dependent in a very simple way.
36
-
37
- In reality, a random walk is too simple a model to describe the world. We need something more general, and these are Markov chains. These, by definition, are sequences of random variables satisfying the \emph{Markov assumption}. This assumption, intuitively, says that the future depends only upon the current state, and not on how we got to the current state. It turns out that just given this assumption, we can prove a lot about these chains.
38
-
39
- \section{Markov chains}
40
- \subsection{The Markov property}
41
- We start with the definition of a Markov chain.
42
- \begin{defi}[Markov chain]
43
- Let $X = (X_0, X_1, \cdots)$ be a sequence of random variables taking values in some set $S$, the \emph{state space}. We assume that $S$ is countable (which could be finite).
44
-
45
- We say $X$ has the \emph{Markov property} if for all $n\geq 0, i_0,\cdots, i_{n + 1}\in S$, we have
46
- \[
47
- \P(X_{n + 1} = i_{n + 1} \mid X_0 = i_0, \cdots, X_n = i_n) = \P(X_{n + 1} = i_{n + 1} \mid X_n = i_n).
48
- \]
49
- If $X$ has the Markov property, we call it a \emph{Markov chain}.
50
-
51
- We say that a Markov chain $X$ is \emph{homogeneous} if the conditional probabilities $\P(X_{n + 1} = j \mid X_n = i)$ do not depend on $n$.
52
- \end{defi}
53
- All our chains $X$ will be Markov and homogeneous unless otherwise specified.
54
-
55
- Since the state space $S$ is countable, we usually label the states by integers $i \in \N$.
56
-
57
- \begin{eg}\leavevmode
58
- \begin{enumerate}
59
- \item A random walk is a Markov chain.
60
- \item The branching process is a Markov chain.
61
- \end{enumerate}
62
- \end{eg}
63
-
64
- In general, to fully specify a (homogeneous) Markov chain, we will need two items:
65
- \begin{enumerate}
66
- \item The initial distribution $\lambda_i = \P(X_0 = i)$. We can write this as a vector $\lambda = (\lambda_i: i \in S)$.
67
- \item The transition probabilities $p_{i, j} = \P(X_{n + 1} = j \mid X_n = i)$. We can write this as a matrix $P = (p_{i, j})_{i, j\in S}$.
68
- \end{enumerate}
69
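- As a small illustrative example (a made-up chain, purely for concreteness), we could take $S = \{1, 2\}$ with
- \[
- \lambda = (1, 0),\quad P =
- \begin{pmatrix}
- 0.9 & 0.1\\
- 0.5 & 0.5
- \end{pmatrix},
- \]
- describing a chain started in state $1$ that moves to state $2$ with probability $0.1$ at each step, and returns from $2$ to $1$ with probability $0.5$.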
-
70
- We will start by proving a few properties of $\lambda$ and $P$. These let us know whether an arbitrary pair of vector and matrix $(\lambda, P)$ actually specifies a Markov chain.
71
- \begin{prop}\leavevmode
72
- \begin{enumerate}
73
- \item $\lambda$ is a \emph{distribution}, i.e.\ $\lambda_i \geq 0, \sum_i \lambda_i = 1$.
74
- \item $P$ is a \emph{stochastic matrix}, i.e.\ $p_{i, j} \geq 0$ and $\sum_j p_{i, j} = 1$ for all $i$.
75
- \end{enumerate}
76
- \end{prop}
77
-
78
- \begin{proof}\leavevmode
79
- \begin{enumerate}
80
- \item Obvious since $\lambda$ is a probability distribution.
81
- \item $p_{i, j} \geq 0$ since $p_{ij}$ is a probability. We also have
82
- \[
83
- \sum_j p_{i,j} = \sum_j \P(X_{1} = j \mid X_0 = i) = 1
84
- \]
85
- since $\P(X_1 = \ph \mid X_0 = i)$ is a probability distribution function.\qedhere
86
- \end{enumerate}
87
- \end{proof}
88
- Note that we only require the row sum to be $1$, and the column sum need not be.
89
-
90
- We will prove another seemingly obvious fact.
91
- \begin{thm}
92
- Let $\lambda$ be a distribution (on $S$) and $P$ a stochastic matrix. The sequence $X = (X_0, X_1, \cdots)$ is a Markov chain with initial distribution $\lambda$ and transition matrix $P$ iff
93
- \[
94
- \P(X_0 = i_0, X_1 = i_1, \cdots, X_n = i_n) = \lambda_{i_0}p_{i_0, i_1}p_{i_1, i_2}\cdots p_{i_{n - 1}, i_n}\tag{$*$}
95
- \]
96
- for all $n, i_0, \cdots, i_n$.
97
- \end{thm}
98
- \begin{proof}
99
- Let $A_k$ be the event $X_k = i_k$. Then we can write $(*)$ as
100
- \[
101
- \P(A_0\cap A_1\cap\cdots \cap A_n) = \lambda_{i_0}p_{i_0, i_1}p_{i_1, i_2}\cdots p_{i_{n - 1}, i_n}. \tag{$*$}
102
- \]
103
- We first assume that $X$ is a Markov chain. We prove $(*)$ by induction on $n$.
104
-
105
- When $n = 0$, $(*)$ says $\P(A_0) = \lambda_{i_0}$. This is true by definition of $\lambda$.
106
-
107
- Assume that it is true for all $n < N$. Then
108
- \begin{align*}
109
- \P(A_0 \cap A_1 \cap \cdots \cap A_N) &= \P(A_0,\cdots, A_{N - 1})\P(A_N \mid A_0, \cdots, A_{N - 1})\\
110
- &= \lambda_{i_0} p_{i_0, i_1}\cdots p_{i_{N - 2}, i_{N - 1}} \P(A_{N} \mid A_0,\cdots, A_{N - 1})\\
111
- &= \lambda_{i_0} p_{i_0, i_1}\cdots p_{i_{N - 2}, i_{N - 1}} \P(A_{N} \mid A_{N - 1})\\
112
- &= \lambda_{i_0} p_{i_0, i_1}\cdots p_{i_{N - 2}, i_{N - 1}} p_{i_{N - 1}, i_N}.
113
- \end{align*}
114
- So it is true for $N$ as well. Hence we are done by induction.
115
-
116
- Conversely, suppose that ($*$) holds. Then for $n = 0$, we have $\P(X_0 = i_0) = \lambda_{i_0}$. Otherwise,
117
- \begin{align*}
118
- \P(X_n = i_n\mid X_0 = i_0, \cdots, X_{n - 1} = i_{n - 1}) &= \P(A_n \mid A_0 \cap \cdots\cap A_{n - 1})\\
119
- &= \frac{\P(A_0\cap \cdots \cap A_n)}{\P(A_0\cap \cdots \cap A_{n - 1})}\\
120
- &= p_{i_{n - 1}, i_n},
121
- \end{align*}
122
- which is independent of $i_0, \cdots, i_{n - 2}$. So this is Markov.
123
- \end{proof}
124
-
125
- Often, we do not use the Markov property directly. Instead, we use the following:
126
- \begin{thm}[Extended Markov property]
127
- Let $X$ be a Markov chain. For $n \geq 0$, any $H$ given in terms of the past $\{X_i: i < n\}$, and any $F$ given in terms of the future $\{X_i: i > n\}$, we have
128
- \[
129
- \P(F\mid X_n = i, H) = \P(F\mid X_n = i).
130
- \]
131
- \end{thm}
132
- To prove this, we need to stitch together many instances of the Markov property. The actual proof is omitted.
133
-
134
- \subsection{Transition probability}
135
- Recall that we can specify the dynamics of a Markov chain by the \emph{one-step} transition probability,
136
- \[
137
- p_{i, j} = \P(X_{n + 1} = j\mid X_n = i).
138
- \]
139
- However, we don't always want to take 1 step. We might want to take 2 steps, 3 steps, or, in general, $n$ steps. Hence, we define
140
- \begin{defi}[$n$-step transition probability]
141
- The $n$-step transition probability from $i$ to $j$ is
142
- \[
143
- p_{i, j}(n) = \P(X_n = j\mid X_0 = i).
144
- \]
145
- \end{defi}
146
- How do we compute these probabilities? The idea is to break this down into smaller parts. We want to express $p_{i, j}(m + n)$ in terms of $n$-step and $m$-step transition probabilities. Then we can iteratively break down an arbitrary $p_{i, j}(n)$ into expressions involving the one-step transition probabilities only.
147
-
148
- To compute $p_{i, j}(m + n)$, we can think of this as a two-step process. We first go from $i$ to some unknown point $k$ in $m$ steps, and then travel from $k$ to $j$ in $n$ more steps. To find the probability of getting from $i$ to $j$, we consider all possible intermediate states $k$, and sum up the probabilities of the corresponding paths. We have
149
- \begin{align*}
150
- p_{i, j}(m + n) &= \P(X_{m + n} = j\mid X_0 = i)\\
151
- &= \sum_k \P(X_{m + n} = j\mid X_m = k, X_0 = i)\P(X_m = k\mid X_0 = i)\\
152
- &= \sum_k \P(X_{m + n} = j\mid X_m = k)\P(X_m = k \mid X_0 = i)\\
153
- &= \sum_k p_{i, k}(m)p_{k, j}(n).
154
- \end{align*}
155
- Thus we get
156
- \begin{thm}[Chapman-Kolmogorov equation]
157
- \[
158
- p_{i, j}(m + n) = \sum_{k\in S} p_{i, k}(m) p_{k, j}(n).
159
- \]
160
- \end{thm}
161
- This formula is suspiciously familiar. It is just matrix multiplication!
162
-
163
- \begin{notation}
164
- Write $P(m) = (p_{i, j}(m))_{i, j\in S}$.
165
- \end{notation}
166
- Then we have
167
- \[
168
- P(m + n) = P(m)P(n)
169
- \]
170
- In particular, we have
171
- \[
172
- P(n) = P(1)P(n - 1) = \cdots = P(1)^n = P^n.
173
- \]
174
- This allows us to easily compute the $n$-step transition probability by matrix multiplication.
175
-
176
- \begin{eg}
177
- Let $S = \{1, 2\}$, with
178
- \[
179
- P =
180
- \begin{pmatrix}
181
- 1 - \alpha & \alpha\\
182
- \beta & 1 - \beta
183
- \end{pmatrix}
184
- \]
185
- We assume $0 < \alpha, \beta < 1$. We want to find the $n$-step transition probability.
186
-
187
- We can achieve this via diagonalization. We can write $P$ as
188
- \[
189
- P = U^{-1}
190
- \begin{pmatrix}
191
- \kappa_1 & 0\\
192
- 0 & \kappa_2
193
- \end{pmatrix}U,
194
- \]
195
- where the $\kappa_i$ are eigenvalues of $P$, and $U$ is composed of the eigenvectors.
196
-
197
- To find the eigenvalues, we calculate
198
- \[
199
- \det (P - \lambda I) = (1 - \alpha - \lambda)(1 - \beta - \lambda) - \alpha\beta = 0.
200
- \]
201
- We solve this to obtain
202
- \[
203
- \kappa_1 = 1,\quad \kappa_2 = 1 - \alpha - \beta.
204
- \]
205
- Usually, the next thing to do would be to find the eigenvectors to obtain $U$. However, here we can cheat a bit and not do that. Using the diagonalization of $P$, we have
206
- \[
207
- P^n = U^{-1}
208
- \begin{pmatrix}
209
- \kappa_1^n & 0\\
210
- 0 & \kappa_2^n
211
- \end{pmatrix}U.
212
- \]
213
- We can now attempt to compute $p_{1, 2}$. We know that it must be of the form
214
- \[
215
- p_{1, 2}(n) = A\kappa_1^n + B\kappa_2^n = A + B(1 - \alpha - \beta)^n
216
- \]
217
- where $A$ and $B$ are constants coming from $U$ and $U^{-1}$. However, we know well that
218
- \[
219
- p_{1, 2}(0) = 0,\quad p_{1, 2}(1) = \alpha.
220
- \]
221
- So we obtain
222
- \begin{align*}
223
- A + B &= 0\\
224
- A + B(1 - \alpha - \beta) &= \alpha.
225
- \end{align*}
226
- This is something we can solve, and obtain
227
- \[
228
- p_{1, 2}(n) = \frac{\alpha}{\alpha + \beta}(1 - (1 - \alpha - \beta)^n) = 1 - p_{1, 1}(n).
229
- \]
230
- How about $p_{2, 1}$ and $p_{2, 2}$? Well we don't need additional work. We can obtain these simply by interchanging $\alpha$ and $\beta$. So we obtain
231
- \[
232
- P^n = \frac{1}{\alpha + \beta}
233
- \begin{pmatrix}
234
- \beta + \alpha(1 - \alpha - \beta)^n & \alpha - \alpha(1 - \alpha - \beta)^n \\
235
- \beta - \beta(1 - \beta - \alpha)^n & \alpha + \beta(1 - \beta - \alpha)^n \\
236
- \end{pmatrix}
237
- \]
238
- What happens as $n\to \infty$? We can take the limit and obtain
239
- \[
240
- P^n \to \frac{1}{\alpha + \beta}
241
- \begin{pmatrix}
242
- \beta & \alpha\\
243
- \beta & \alpha
244
- \end{pmatrix}
245
- \]
246
- We see that the two rows are the same. This means that as time goes on, where we end up does not depend on where we started. We will later (near the end of the course) see that this is generally true for most Markov chains.
247
-
248
- Alternatively, we can solve this by a difference equation. The recurrence relation is given by
249
- \[
250
- p_{1, 1}(n + 1) = p_{1, 1}(n)(1 - \alpha) + p_{1, 2}(n)\beta.
251
- \]
252
- Writing in terms of $p_{11}$ only, we have
253
- \[
254
- p_{1, 1}(n + 1) = p_{1, 1}(n)(1 - \alpha) + (1 - p_{1, 1}(n))\beta.
255
- \]
256
- We can solve this as we have done in IA Differential Equations.
257
- \end{eg}
258
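- The closed form above is easy to check numerically. The following short sketch (in Python with \texttt{numpy}, assuming that is available; it is an aside, not part of the course) compares $P^n$, computed by matrix powers, against the formula for $p_{1, 2}(n)$:
- \begin{verbatim}
- import numpy as np
-
- alpha, beta = 0.3, 0.6  # any values in (0, 1) will do
- P = np.array([[1 - alpha, alpha],
-               [beta, 1 - beta]])
-
- for n in [1, 2, 5, 50]:
-     Pn = np.linalg.matrix_power(P, n)  # the n-step transition matrix
-     formula = alpha / (alpha + beta) * (1 - (1 - alpha - beta) ** n)
-     print(n, Pn[0, 1], formula)  # the last two numbers should agree
- \end{verbatim}
- For large $n$, both numbers approach $\alpha/(\alpha + \beta)$, matching the limiting matrix above.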
- We saw that the Chapman-Kolmogorov equation can be concisely stated as a rule about matrix multiplication. In general, many statements about Markov chains can be formulated in the language of linear algebra naturally.
259
-
260
- For example, let $X_0$ have distribution $\lambda$. What is the distribution of $X_1$? By definition, it is
261
- \[
262
- \P(X_1 = j) = \sum_i \P(X_1 = j\mid X_0 = i)\P(X_0 = i) = \sum_i \lambda_i p_{i, j}.
263
- \]
264
- Hence this has a distribution $\lambda P$, where $\lambda$ is treated as a row vector. Similarly, $X_n$ has the distribution $\lambda P^n$.
265
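- For instance (a quick consistency check using the two-state example above), if $\lambda = (1, 0)$, so that the chain starts in state $1$ with certainty, then
- \[
- \lambda P^n = (p_{1, 1}(n), p_{1, 2}(n)),
- \]
- the first row of $P^n$, so $\P(X_n = 2) = p_{1, 2}(n)$, as expected.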
-
266
- In fact, historically, the theory of Markov chains was initially developed as a branch of linear algebra, and a lot of the proofs were just linear algebra manipulations. However, nowadays, we often look at it as a branch of probability theory instead, and this is what we will do in this course. So don't be scared if you hate linear algebra.
267
-
268
- \section{Classification of chains and states}
269
- \subsection{Communicating classes}
270
- Suppose we have a Markov chain $X$ over some state space $S$. While we would usually expect the different states in $S$ to be mutually interacting, it is possible that we have a state $i \in S$ that can never be reached, or we might get stuck in some state $j \in S$ and can never leave. These are usually less interesting. Hence we would like to rule out these scenarios, and focus on what we call \emph{irreducible chains}, where we can freely move between different states.
271
-
272
- We start with some elementary definitions.
273
- \begin{defi}[Leading to and communicate]
274
- Suppose we have two states $i, j\in S$. We write $i \to j$ ($i$ \emph{leads to} $j$) if there is some $n \geq 0$ such that $p_{i, j}(n) > 0$, i.e.\ it is possible for us to get from $i$ to $j$ (in multiple steps). Note that we allow $n = 0$. So we always have $i \to i$.
275
-
276
- We write $i \leftrightarrow j$ if $i \to j$ and $j \to i$. If $i \leftrightarrow j$, we say $i$ and $j$ \emph{communicate}.
277
- \end{defi}
278
-
279
- \begin{prop}
280
- $\leftrightarrow$ is an equivalence relation.
281
- \end{prop}
282
-
283
- \begin{proof}\leavevmode
284
- \begin{enumerate}
285
- \item Reflexive: we have $i \leftrightarrow i$ since $p_{i, i}(0) = 1$.
286
- \item Symmetric: trivial by definition.
287
- \item Transitive: suppose $i \to j$ and $j \to k$. Since $i \to j$, there is some $m > 0$ such that $p_{i, j}(m) > 0$. Since $j \to k$, there is some $n$ such that $p_{j, k}(n) > 0$. Then $p_{i, k}(m + n) = \sum_{r} p_{i, r}(m)p_{r, k}(n) \geq p_{i, j}(m)p_{j, k}(n) > 0$. So $i \to k$.
288
-
289
- Similarly, if $j \to i$ and $k \to j$, then $k \to i$. So $i \leftrightarrow j$ and $j \leftrightarrow k$ implies that $i \leftrightarrow k$.\qedhere
290
- \end{enumerate}
291
- \end{proof}
292
- So we have an equivalence relation, and we know what to do with equivalence relations. We form equivalence classes!
293
- \begin{defi}[Communicating classes]
294
- The equivalence classes of $\leftrightarrow$ are \emph{communicating classes}.
295
- \end{defi}
296
- We have to be careful with these communicating classes. Different communicating classes are not completely isolated. Within a communicating class $A$, of course we can move between any two states. However, it is also possible that we can escape from a class $A$ to a different class $B$. It is just that after going to $B$, we cannot return to class $A$. From $B$, we might be able to get to yet another class $C$. We can keep moving to new classes, but since we can never return to a class we have left, if there are finitely many communicating classes we can only change class finitely many times. Eventually we get stuck in some class, and we are bound to stay in that class.
297
-
298
- Since we are eventually going to be stuck in that class anyway, often, we can just consider this final communicating class and ignore the others. So wlog we can assume that the chain only has one communicating class.
299
-
300
- \begin{defi}[Irreducible chain]
301
- A Markov chain is \emph{irreducible} if there is a unique communicating class.
302
- \end{defi}
303
- From now on, we will mostly care about irreducible chains only.
304
-
305
- More generally, we call a subset \emph{closed} if we cannot escape from it.
306
- \begin{defi}[Closed]
307
- A subset $C\subseteq S$ is \emph{closed} if $p_{i, j} = 0$ for all $i \in C, j\not\in C$.
308
- \end{defi}
309
-
310
- \begin{prop}
311
- A subset $C$ is closed iff ``$i\in C, i\to j$ implies $j \in C$''.
312
- \end{prop}
313
-
314
- \begin{proof}
315
- Assume $C$ is closed. Let $i \in C, i \to j$. Since $i \to j$, there is some $m$ such that $p_{i, j}(m) > 0$. Expanding the Chapman-Kolmogorov equation, we have
316
- \[
317
- p_{i, j}(m) = \sum_{i_1, \cdots, i_{m - 1}} p_{i, i_1}p_{i_1, i_2}\cdots p_{i_{m - 1},j} > 0.
318
- \]
319
- So there is some route $i, i_1, \cdots, i_{m - 1}, j$ such that $p_{i, i_1}, p_{i_1, i_2}, \cdots, p_{i_{m - 1}, j} > 0$. Since $p_{i, i_1} > 0$, we have $i_1\in C$. Since $p_{i_1,i_2} > 0$, we have $i_2\in C$. By induction, we get that $j \in C$.
320
-
321
- To prove the other direction, assume that ``$i\in C, i \to j$ implies $j\in C$''. Then for any $i\in C$ and $j\not\in C$, we must have $i\not\to j$. So in particular $p_{i, j} = 0$.
322
- \end{proof}
323
-
324
- \begin{eg}
325
- Consider $S = \{1, 2, 3, 4, 5, 6\}$ with transition matrix
326
- \[
327
- P =
328
- \begin{pmatrix}
329
- \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0\\
330
- 0 & 0 & 1 & 0 & 0 & 0\\
331
- \frac{1}{3} & 0 & 0 & \frac{1}{3} & \frac{1}{3} & 0\\
332
- 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0\\
333
- 0 & 0 & 0 & 0 & 0 & 1\\
334
- 0 & 0 & 0 & 0 & 1 & 0\\
335
- \end{pmatrix}
336
- \]
337
- \begin{center}
338
- \begin{tikzpicture}
339
- \node [mstate] (1) at (0, 0) {$1$};
340
- \node [mstate] (2) at (1, 1.4) {$2$};
341
- \node [mstate] (3) at (2, 0) {$3$};
342
- \node [mstate] (4) at (3, -1.4) {$4$};
343
- \node [mstate] (5) at (4, 0) {$5$};
344
- \node [mstate] (6) at (6, 0) {$6$};
345
- \draw (1) edge [loop left, ->] (1);
346
- \draw (1) edge [->] (2);
347
- \draw (2) edge [->] (3);
348
- \draw (3) edge [->] (1);
349
- \draw (3) edge [->] (4);
350
- \draw (3) edge [->] (5);
351
- \draw (4) edge [->] (5);
352
- \draw (4) edge [loop below, ->] (4);
353
- \draw (5) edge [bend left, ->] (6);
354
- \draw (6) edge [bend left, ->] (5);
355
- \end{tikzpicture}
356
- \end{center}
357
- We see that the communicating classes are $\{1, 2, 3\}$, $\{4\}$, $\{5, 6\}$, where $\{5, 6\}$ is closed.
358
- \end{eg}
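This decomposition is easy to check mechanically. The following short Python sketch (an illustration added here, not part of the original notes) finds the communicating classes of the chain above by testing mutual reachability, and reports which classes are closed.
\begin{verbatim}
from fractions import Fraction as F

P = [[F(1,2), F(1,2), 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0],
     [F(1,3), 0, 0, F(1,3), F(1,3), 0],
     [0, 0, 0, F(1,2), F(1,2), 0],
     [0, 0, 0, 0, 0, 1],
     [0, 0, 0, 0, 1, 0]]
n = len(P)

def reachable(i):
    # states j with i -> j (including i itself, via a path of length >= 0)
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in range(n):
            if P[u][v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

reach = [reachable(i) for i in range(n)]
classes = {frozenset(j for j in reach[i] if i in reach[j]) for i in range(n)}
for C in classes:
    closed = all(P[i][j] == 0 for i in C for j in range(n) if j not in C)
    print(sorted(s + 1 for s in C), "closed" if closed else "not closed")
\end{verbatim}
This prints the classes $\{1, 2, 3\}$, $\{4\}$ and $\{5, 6\}$, with only the last one closed.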
359
-
360
- \subsection{Recurrence or transience}
361
- The major focus of this chapter is recurrence and transience. This was something that came up when we discussed random walks in IA Probability --- given a random walk, say starting at $0$, what is the probability that we will return to $0$ later on? Recurrence and transience is a qualitative way of answering this question. As we mentioned, we will mostly focus on irreducible chains. So by definition there is always a non-zero probability of returning to $0$. Hence the question we want to ask is whether we are going to return to $0$ with certainty, i.e.\ with probability $1$. If we are bound to return, then we say the state is recurrent. Otherwise, we say it is transient.
362
-
363
- It should be clear that this notion is usually interesting only for an infinite state space. If we have an infinite state space, we might get transience because we are very likely to drift away to a place far, far away and never return. However, in a finite state space, this can't happen. There, transience can occur only if we can escape into some other part of the state space and never come back, i.e.\ the chain is not irreducible. These cases are not interesting.
364
-
365
- \begin{notation}
366
- For convenience, we will introduce some notations. We write
367
- \[
368
- \P_i(A) = \P(A\mid X_0 = i),
369
- \]
370
- and
371
- \[
372
- \E_i(Z) = \E(Z\mid X_0 = i).
373
- \]
374
- \end{notation}
375
-
376
- Suppose we start from $i$, and randomly wander around. Eventually, we may or may not get to $j$. If we do, there is a time at which we first reach $j$. We call this the \emph{first passage time}.
377
-
378
- \begin{defi}[First passage time and probability]
379
- The \emph{first passage time} of $j \in S$ starting from $i$ is
380
- \[
381
- T_j = \min\{n \geq 1: X_n = j\}.
382
- \]
383
- Note that this implicitly depends on $i$. Here we require $n \geq 1$. Otherwise $T_i$ would always be $0$.
384
-
385
- The \emph{first passage probability} is
386
- \[
387
- f_{ij}(n) = \P_i(T_j = n).
388
- \]
389
- \end{defi}
390
-
391
- \begin{defi}[Recurrent state]
392
- A state $i\in S$ is \emph{recurrent} (or \emph{persistent}) if
393
- \[
394
- \P_i (T_i < \infty) = 1,
395
- \]
396
- i.e.\ we will eventually get back to the state. Otherwise, we call the state \emph{transient}.
397
- \end{defi}
398
- Note that transient does \emph{not} mean we don't get back. It's just that we are not sure that we will get back. We can show that if a state is recurrent, then the probability that we return to $i$ infinitely many times is also $1$.
399
-
400
- Our current objective is to show the following characterization of recurrence.
401
- \begin{thm}
402
- $i$ is recurrent iff $\sum_n p_{i, i}(n) = \infty$.
403
- \end{thm}
404
- The technique to prove this would be to use generating functions. We need to first decide what sequence to work with. For any fixed $i, j$, consider the sequence $p_{ij}(n)$ as a sequence in $n$. Then we define
405
- \[
406
- P_{i, j}(s) = \sum_{n = 0}^\infty p_{i, j}(n) s^n.
407
- \]
408
- We also define
409
- \[
410
- F_{i, j}(s) = \sum_{n = 0}^\infty f_{i, j}(n) s^n,
411
- \]
412
- where $f_{i, j}$ is our first passage probability. For the sake of clarity, we make it explicit that $p_{i, j}(0) = \delta_{i, j}$, and $f_{i, j}(0) = 0$.
413
-
414
- Our proof would be heavily based on the result below:
415
- \begin{thm}
416
- \[
417
- P_{i, j}(s) = \delta_{i, j} + F_{i, j}(s)P_{j, j}(s),
418
- \]
419
- for $-1 < s \leq 1$.
420
- \end{thm}
421
-
422
- \begin{proof}
423
- Using the law of total probability
424
- \begin{align*}
425
- p_{i, j}(n) &= \sum_{m = 1}^n \P_i(X_n = j\mid T_j = m) \P_i(T_j = m) \\
426
- \intertext{Using the Markov property, we can write this as}
427
- &= \sum_{m = 1}^n \P(X_n = j\mid X_m = j) \P_i(T_j = m)\\
428
- &= \sum_{m = 1}^n p_{j, j}(n - m) f_{i, j}(m).
429
- \end{align*}
430
- We can multiply through by $s^n$ and sum over all $n$ to obtain
431
- \[
432
- \sum_{n = 1}^\infty p_{i, j}(n) s^n = \sum_{n = 1}^\infty \sum_{m = 1}^n p_{j, j}(n - m)s^{n - m} f_{i, j}(m)s^m.
433
- \]
434
- The left hand side is \emph{almost} the generating function $P_{i, j}(s)$, except that we are missing an $n = 0$ term, which is $p_{i, j}(0) = \delta_{i, j}$. The right hand side is the ``convolution'' of the power series $P_{j, j}(s)$ and $F_{i, j}(s)$, which we can write as the product $P_{j, j}(s) F_{i, j}(s)$. So
435
- \[
436
- P_{i, j}(s) - \delta_{i, j} = P_{j, j}(s) F_{i, j}(s).\qedhere
437
- \]
438
- \end{proof}
439
-
440
- Before we actually prove our theorem, we need one helpful result from Analysis that allows us to deal with power series nicely.
441
- \begin{lemma}[Abel's lemma]
442
- Let $u_1, u_2, \cdots$ be non-negative real numbers such that $U(s) = \sum_{n} u_n s^n$ converges for $0 < s < 1$. Then
443
- \[
444
- \lim_{s\to 1^-} U(s) = \sum_n u_n.
445
- \]
446
- \end{lemma}
447
- Proof is an exercise in analysis, which happens to be on the first example sheet of Analysis II.
448
-
449
- We now prove the theorem we initially wanted to show
450
- \begin{thm}
451
- $i$ is recurrent iff $\sum_n p_{ii}(n) = \infty$.
452
- \end{thm}
453
-
454
- \begin{proof}
455
- Using $j = i$ in the above formula, for $0 < s < 1$, we have
456
- \[
457
- P_{i, i}(s) = \frac{1}{1 - F_{i, i} (s)}.
458
- \]
459
- Here we need to be careful that we are not dividing by $0$. This would be a problem if $F_{ii}(s) = 1$. By definition, we have
460
- \[
461
- F_{i, i}(s) = \sum_{n = 1}^\infty f_{i, i}(n) s^n.
462
- \]
463
- Also, by definition of $f_{ii}$, we have
464
- \[
465
- F_{i, i}(1) = \sum_n f_{i, i}(n) = \P(\text{ever returning to }i) \leq 1.
466
- \]
467
- So for $|s| < 1$, $F_{i, i}(s) < 1$. So we are not dividing by zero. Now we use our original equation
468
- \[
469
- P_{i, i}(s) = \frac{1}{1 - F_{i, i} (s)},
470
- \]
471
- and take the limit as $s \to 1$. By Abel's lemma, we know that the left hand side is
472
- \[
473
- \lim_{s \to 1}P_{i, i}(s) = P_{i, i}(1) = \sum_n p_{i, i}(n).
474
- \]
475
- The other side is
476
- \[
477
- \lim_{s \to 1}\frac{1}{1 - F_{i, i}(s)} = \frac{1}{1 - \sum f_{i, i}(n)}.
478
- \]
479
- Hence we have
480
- \[
481
- \sum_n p_{i, i}(n) = \frac{1}{1 - \sum f_{i, i}(n)}.
482
- \]
483
- Since $\sum f_{i, i}(n)$ is the probability of ever returning, the probability of ever returning is 1 if and only if $\sum_n p_{i, i}(n) = \infty$.
484
- \end{proof}
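To see the criterion in action, we can compute partial sums of $p_{0, 0}(n)$ for the simple random walk on $\Z$ that moves right with probability $p$ and left with probability $q = 1 - p$, using $p_{0, 0}(2n) = \binom{2n}{n}(pq)^n$ (choose which $n$ of the $2n$ steps go right). The following Python sketch is purely illustrative and not part of the notes.
\begin{verbatim}
def partial_sum(p, N):
    # sum of p_{0,0}(2n) for n = 1..N, for the simple random walk on Z
    q, s, t = 1 - p, 0.0, 1.0      # t = p_{0,0}(2n), starting at n = 0
    for n in range(1, N + 1):
        t *= (2*n) * (2*n - 1) / (n * n) * p * q
        s += t
    return s

for N in (10, 100, 1000, 10000):
    print(N, round(partial_sum(0.5, N), 2), round(partial_sum(0.7, N), 4))
\end{verbatim}
For $p = \frac{1}{2}$ the partial sums keep growing, while for $p = 0.7$ they settle down to $1/\sqrt{1 - 4pq} - 1 = 1.5$, matching the dichotomy between recurrence and transience.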
485
-
486
- Using this result, we can check if a state is recurrent. However, a Markov chain has many states, and it would be tedious to check every state individually. Thus we have the following helpful result.
487
- \begin{thm}
488
- Let $C$ be a communicating class. Then
489
- \begin{enumerate}
490
- \item Either every state in $C$ is recurrent, or every state is transient.
491
- \item If $C$ contains a recurrent state, then $C$ is closed.
492
- \end{enumerate}
493
- \end{thm}
494
-
495
- \begin{proof}\leavevmode
496
- \begin{enumerate}
497
- \item Let $i \leftrightarrow j$ and $i \not =j$. Then by definition of communicating, there is some $m$ such that $p_{i, j}(m) = \alpha > 0$, and some $n$ such that $p_{j, i}(n) = \beta > 0$. So for each $k$, we have
498
- \[
499
- p_{i, i}(m + k + n) \geq p_{i, j}(m) p_{j, j}(k) p_{j, i}(n) = \alpha\beta p_{j, j}(k).
500
- \]
501
- So if $\sum_k p_{j, j}(k) = \infty$, then $\sum_r p_{i, i}(r) = \infty$. So $j$ recurrent implies $i$ recurrent. Similarly, $i$ recurrent implies $j$ recurrent.
502
- \item If $C$ is not closed, then there is a non-zero probability that we leave the class and never get back. So the states are not recurrent.\qedhere
503
- \end{enumerate}
504
- \end{proof}
505
-
506
- Note that there is a profound difference between a finite state space and an infinite state space. A finite state space can be represented by a finite matrix, and we are all very familiar with finite matrices. We can use everything we know about finite matrices from IA Vectors and Matrices. However, infinite matrices are weirder.
507
-
508
- For example, any finite transition matrix $P$ has an eigenvalue of $1$. This is since the row sums of a transition matrix are always $1$. So if we multiply $P$ by $\mathbf{e} = (1, 1, \cdots, 1)$, then we get $\mathbf{e}$ again. However, this is not true for infinite matrices, since we don't usually allow arbitrary infinite vectors. To avoid getting infinitely large numbers when multiplying vectors and matrices, we usually restrict our focus to vectors $\mathbf{x}$ such that $\sum x_i^2$ is finite. In this case the vector $\mathbf{e}$ is not allowed, and the transition matrix need not have eigenvalue $1$.
509
-
510
- Another thing about a finite state space is that probability ``cannot escape''. Each step of a Markov chain gives a probability distribution on the state space, and we can imagine the progression of the chain as a flow of probabilities around the state space. If we have a finite state space, then all the probability flow must be contained within our finite state space. However, if we have an infinite state space, then probabilities can just drift away to infinity.
511
-
512
- More concretely, we get the following result about finite state spaces.
513
- \begin{thm}
514
- In a finite state space,
515
- \begin{enumerate}
516
- \item There exists at least one recurrent state.
517
- \item If the chain is irreducible, every state is recurrent.
518
- \end{enumerate}
519
- \end{thm}
520
-
521
- \begin{proof}
522
- (ii) follows immediately from (i) since if a chain is irreducible, either all states are transient or all states are recurrent. So we just have to prove (i).
523
-
524
- We first fix an arbitrary $i$. Recall that
525
- \[
526
- P_{i, j}(s) = \delta_{i, j} + P_{j, j}(s) F_{i, j}(s).
527
- \]
528
- If $j$ is transient, then $\sum_n p_{j, j}(n) = P_{j, j}(1) < \infty$. Also, $F_{i, j}(1)$ is the probability of ever reaching $j$ from $i$, and is hence finite as well. So we have $P_{i, j}(1) < \infty$. By Abel's lemma, $P_{i, j}(1)$ is given by
529
- \[
530
- P_{i, j}(1) = \sum_n p_{i, j}(n).
531
- \]
532
- Since this is finite, we must have $p_{i, j}(n)\to 0$.
533
-
534
- Since we know that
535
- \[
536
- \sum_{j\in S}p_{i, j}(n) = 1,
537
- \]
538
- if every state is transient, then since the sum is finite, we know $\sum p_{i, j}(n) \to 0$ as $n\to \infty$. This is a contradiction. So we must have a recurrent state.
539
- \end{proof}
540
-
541
- \begin{thm}[P\'olya's theorem]
542
- Consider $\Z^d = \{(x_1, x_2, \cdots, x_d): x_i \in \Z\}$. This generates a graph with $x$ adjacent to $y$ if $|x - y| = 1$, where $|\ph |$ is the Euclidean norm.
543
- \begin{center}
544
- \begin{tikzpicture}[scale=0.75]
545
- \node at (-4, 0) {$d = 1$};
546
- \draw (-2.5, 0) -- (2.5, 0);
547
- \foreach \x in {-2,-1,...,2} {
548
- \node [circ] at (\x, 0) {};
549
- }
550
- \begin{scope}[shift={(0, -3.5)}]
551
- \node at (-4, 2) {$d = 2$};
552
- \foreach \x in {-2, -1,...,2} {
553
- \foreach \y in {-2,-1,...,2} {
554
- \node [circ] at (\x, \y) {};
555
- }
556
- }
557
- \foreach \x in {-2, -1,...,2} {
558
- \draw (\x, -2.5) -- (\x, 2.5);
559
- \draw (-2.5, \x) -- (2.5, \x);
560
- }
561
- \end{scope}
562
- \end{tikzpicture}
563
- \end{center}
564
- Consider a random walk in $\Z^d$. At each step, it moves to a neighbour, each chosen with equal probability, i.e.
565
- \[
566
- \P(X_{n + 1} = j\mid X_n = i) =
567
- \begin{cases}
568
- \frac{1}{2d} & |j - i| = 1\\
569
- 0 & \text{otherwise}
570
- \end{cases}
571
- \]
572
- This is an irreducible chain, since it is possible to get from one point to any other point. So the points are either all recurrent or all transient.
573
-
574
- The theorem says this is recurrent iff $d = 1$ or $2$.
575
- \end{thm}
576
- Intuitively, this makes sense that we get recurrence only for low dimensions, since if we have more dimensions, then it is easier to get lost.
577
-
578
- \begin{proof}
579
- We will start with the case $d = 1$. We want to show that $\sum p_{0, 0}(n) = \infty$. Then we know the origin is recurrent. However, we can simplify this a bit. It is impossible to get back to the origin in an odd number of steps. So we can instead consider $\sum p_{0, 0}(2n)$. However, we can write down this expression immediately. To return to the origin after $2n$ steps, we need to have made $n$ steps to the left, and $n$ steps to the right, in any order. So we have
580
- \[
581
- p_{0, 0}(2n) = \P(n\text{ steps left}, n\text{ steps right}) = \binom{2n}{n} \left(\frac{1}{2}\right)^{2n}.
582
- \]
583
- To show if this converges, it is not helpful to work with these binomial coefficients and factorials. So we use Stirling's formula $n! \simeq \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$. If we plug this in, we get
584
- \[
585
- p_{0, 0}(2n) \sim \frac{1}{\sqrt{\pi n}}.
586
- \]
587
- This tends to $0$, but very slowly: it decays even more slowly than the terms of the (divergent) harmonic series. So we have $\sum p_{0, 0}(2n) = \infty$.
588
-
589
- In the $d = 2$ case, suppose after $2n$ steps, I have taken $r$ steps right, $\ell$ steps left, $u$ steps up and $d$ steps down. We must have $r + \ell + u + d = 2n$, and we need $r = \ell, u = d$ to return to the origin. So we let $r = \ell = m, u = d = n - m$. So we get
590
- \begin{align*}
591
- p_{0, 0}(2n) &= \left(\frac{1}{4}\right)^{2n} \sum_{m = 0}^n \binom{2n}{m, m, n - m, n - m} \\
592
- &= \left(\frac{1}{4}\right)^{2n} \sum_{m = 0}^n \frac{(2n)!}{(m!)^2 ((n - m)!)^2} \\
593
- &= \left(\frac{1}{4}\right)^{2n} \binom{2n}{n}\sum_{m = 0}^n \left(\frac{n!}{m!(n - m)!}\right)^2\\
594
- &= \left(\frac{1}{4}\right)^{2n} \binom{2n}{n}\sum_{m = 0}^n \binom{n}{m}\binom{n}{n - m}\\
595
- \intertext{We now use a well-known identity (proved in IA Numbers and Sets) to obtain}
596
- &= \left(\frac{1}{4}\right)^{2n} \binom{2n}{n} \binom{2n}{n}\\
597
- &= \left[\binom{2n}{n} \left(\frac{1}{2}\right)^{2n}\right]^2\\
598
- &\sim \frac{1}{\pi n}.
599
- \end{align*}
600
- So the sum diverges. So this is recurrent. Note that the two-dimensional probability turns out to be the square of the one-dimensional probability. This is not a coincidence, and we will explain this after the proof. However, this does not extend to higher dimensions.
601
-
602
- In the $d = 3$ case, we have
603
- \[
604
- p_{0, 0}(2n) = \left(\frac{1}{6}\right)^{2n}\sum_{i + j + k = n}\frac{(2n)!}{(i!j!k!)^2}.
605
- \]
606
- This time, there is no neat combinatorial formula. Since we want to show this is summable, we can try to bound this from above. We have
607
- \begin{align*}
608
- p_{0, 0}(2n) &= \left(\frac{1}{6}\right)^{2n} \binom{2n}{n} \sum \left(\frac{n!}{i!j!k!}\right)^2\\
609
- &= \left(\frac{1}{2}\right)^{2n} \binom{2n}{n} \sum \left(\frac{n!}{3^n i!j!k!}\right)^2\\
610
- \intertext{Why do we write it like this? We are going to use the identity $\displaystyle \sum_{i + j + k = n} \frac{n!}{3^n i!j!k!} = 1$. Where does this come from? Suppose we have three urns, and throw $n$ balls into it. Then the probability of getting $i$ balls in the first, $j$ in the second and $k$ in the third is exactly $\frac{n!}{3^n i!j!k!}$. Summing over all possible combinations of $i$, $j$ and $k$ gives the total probability of getting in any configuration, which is $1$. So we can bound this by}
611
- &\leq \left(\frac{1}{2}\right)^{2n}\binom{2n}{n} \max\left(\frac{n!}{3^n i!j!k!}\right)\sum \frac{n!}{3^n i!j!k!}\\
612
- &= \left(\frac{1}{2}\right)^{2n}\binom{2n}{n} \max\left(\frac{n!}{3^n i!j!k!}\right)
613
- \intertext{To find the maximum, we can replace the factorial by the gamma function and use Lagrange multipliers. However, we would just argue that the maximum can be achieved when $i$, $j$ and $k$ are as close to each other as possible. So we get}
614
- &\leq \left(\frac{1}{2}\right)^{2n}\binom{2n}{n} \frac{n!}{3^n}\left(\frac{1}{\lfloor n/3\rfloor!}\right)^3\\
615
- &\leq Cn^{-3/2}
616
- \end{align*}
617
- for some constant $C$ using Stirling's formula. So $\sum p_{0,0}(2n) < \infty$ and the chain is transient. We can prove similarly for higher dimensions.
618
- \end{proof}
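We can also see the dichotomy experimentally. The following crude Monte Carlo Python sketch (illustrative only; the horizon of $5000$ steps and the trial count are arbitrary choices) estimates the probability that the walk returns to the origin within the horizon, for $d = 1, 2, 3$.
\begin{verbatim}
import random

def returns(d, max_steps=5000):
    # one walk: does it return to the origin within max_steps steps?
    pos = [0] * d
    for _ in range(max_steps):
        axis = random.randrange(d)
        pos[axis] += random.choice((-1, 1))
        if all(x == 0 for x in pos):
            return True
    return False

random.seed(0)
for d in (1, 2, 3):
    trials = 1000
    est = sum(returns(d) for _ in range(trials)) / trials
    print(d, est)
\end{verbatim}
The $d = 1$ estimate is close to $1$, the $d = 3$ estimate stays at roughly $0.34$ (the walk either returns early or escapes for good), and the $d = 2$ estimate sits in between only because the convergence to $1$ in two dimensions is extremely slow: the probability of no return by time $n$ decays like $1/\log n$.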
619
-
620
- Let's get back to why the two dimensional probability is the square of the one-dimensional probability. This square might remind us of independence. However, it is obviously not true that horizontal movement and vertical movement are independent --- if we go sideways in one step, then we cannot move vertically. So we need something more sophisticated.
621
-
622
- We write $X_n = (A_n, B_n)$. What we do is that we try to rotate our space. We are going to record our coordinates in a pair of axis that is rotated by $45^\circ$.
623
- \begin{center}
624
- \begin{tikzpicture}
625
- \draw [->] (-2, 0) -- (4, 0) node [right] {$A$};
626
- \draw [->] (0, -3) -- (0, 3) node [above] {$B$};
627
-
628
- \node [circ] at (3, 2) {};
629
- \node [right] at (3, 2) {$(A_n, B_n)$};
630
-
631
- \draw [mred, ->] (-1, -1) -- (3, 3) node [anchor = south west] {$V$};
632
- \draw [mred, ->] (-1, 1) -- (3, -3) node [anchor = north west] {$U$};
633
-
634
- \draw [mred, dashed] (3, 2) -- (0.5, -0.5) node [anchor = north east] {$U_n/\sqrt{2}$};
635
- \draw [mred, dashed] (3, 2) -- (2.5, 2.5) node [anchor = south east] {$V_n/\sqrt{2}$};
636
- \end{tikzpicture}
637
- \end{center}
638
- We can define the new coordinates as
639
- \begin{align*}
640
- U_n &= A_n - B_n\\
641
- V_n &= A_n + B_n
642
- \end{align*}
643
- In each step, either $A_n$ or $B_n$ change by one step. So $U_n$ and $V_n$ \emph{both} change by $1$. Moreover, they are independent. So we have
644
- \begin{align*}
645
- p_{0, 0}(2n) &= \P(A_{2n} = B_{2n} = 0)\\
646
- &= \P(U_{2n} = V_{2n} = 0)\\
647
- &= \P(U_{2n} = 0)\P(V_{2n} = 0)\\
648
- &= \left[\binom{2n}{n}\left(\frac{1}{2}\right)^{2n}\right]^2.
649
- \end{align*}
650
- \subsection{Hitting probabilities}
651
- Recurrence and transience tell us whether we are going to return to the original state with (almost) certainty. Often, we would like to know something more quantitative. What is the actual probability of returning to the state $i$? If we do return, how long do we expect the return to take?
652
-
653
- We can formulate this in a more general setting. Let $S$ be our state space, and let $A \subseteq S$. We want to know how likely and how long it takes for us to reach $A$. For example, if we are in a casino, we want to win, say, a million, and don't want to go bankrupt. So we would like to know the probability of reaching $A = \{1\text{ million}\}$ and $A = \{0\}$.
654
-
655
- \begin{defi}[Hitting time]
656
- The \emph{hitting time} of $A \subseteq S$ is the random variable $H^A = \min\{n \geq 0: X_n \in A\}$. In particular, if we start in $A$, then $H^A = 0$. We also have
657
- \[
658
- h_i^A = \P_i(H^A < \infty) = \P_i(\text{ever reach }A).
659
- \]
660
- \end{defi}
661
-
662
- To determine hitting times, we mostly rely on the following result:
663
- \begin{thm}
664
- The vector $(h_i^A: i \in S)$ satisfies
665
- \[
666
- h_i^A =
667
- \begin{cases}
668
- 1 & i \in A\\
669
- \sum_{j \in S}p_{i, j}h_j^A & i \not \in A
670
- \end{cases},
671
- \]
672
- and is \emph{minimal} in that for any non-negative solution $(x_i: i \in S)$ to these equations, we have $h_i^A \leq x_i$ for all $i$.
673
- \end{thm}
674
- It is easy to show that $h_i^A$ satisfies the formula given, but it takes some more work to show that $h_i^A$ is the minimal solution. Recall, however, that we have proved a similar result for random walks in IA Probability, and the proof is more-or-less the same.
675
-
676
- \begin{proof}
677
- By definition, $h_i^A = 1$ if $i \in A$. Otherwise, we have
678
- \[
679
- h_i^A = \P_i(H^A < \infty) = \sum_{j\in S} \P_i(H^A < \infty \mid X_1 = j)p_{i, j} = \sum_{j\in S}h_{j}^A p_{i, j}.
680
- \]
681
- So $h_i^A$ is indeed a solution to the equations.
682
-
683
- To show that $h_i^A$ is the minimal solution, suppose $x = (x_i: i \in S)$ is a non-negative solution, i.e.
684
- \[
685
- x_i =
686
- \begin{cases}
687
- 1 & i \in A\\
688
- \sum_{j \in S}p_{i, j}x_j & i \not \in A
689
- \end{cases},
690
- \]
691
- If $i \in A$, we have $h_i^A = x_i = 1$. Otherwise, we can write
692
- \begin{align*}
693
- x_i &= \sum_j p_{i, j}x_j \\
694
- &= \sum_{j \in A}p_{i, j}x_j + \sum_{j \not\in A}p_{i, j}x_j \\
695
- &= \sum_{j \in A}p_{i, j} + \sum_{j \not\in A}p_{i, j}x_j \\
696
- &\geq \sum_{j \in A}p_{i, j}\\
697
- &= \P_i(H^A = 1).
698
- \end{align*}
699
- By iterating this process, we can write
700
- \begin{align*}
701
- x_i &= \sum_{j \in A} p_{i, j} + \sum_{j \not\in A} p_{i, j}\left(\sum_k p_{j, k} x_k\right) \\
702
- &= \sum_{j \in A} p_{i, j} + \sum_{j \not\in A} p_{i, j}\left(\sum_{k\in A} p_{j, k} x_k + \sum_{k \not\in A} p_{j, k}x_k\right)\\
703
- &\geq \P_i (H^A = 1) + \sum_{j \not \in A, k \in A}p_{i, j}p_{j, k}\\
704
- &= \P_i(H^A = 1) + \P_i(H^A = 2)\\
705
- &= \P_i (H^A \leq 2).
706
- \end{align*}
707
- By induction, we obtain
708
- \[
709
- x_i \geq \P_i(H^A \leq n)
710
- \]
711
- for all $n$. Taking the limit as $n \to \infty$, we get
712
- \[
713
- x_i \geq \P_i(H^A < \infty) = h_i^A.
714
- \]
715
- So $h_i^A$ is minimal.
716
- \end{proof}
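The minimality also gives a practical way to compute $h^A$: if we set $h^{(0)}$ to be the indicator of $A$ and repeatedly apply the defining equations, then $h^{(n)}_i = \P_i(H^A \leq n)$, which increases to the minimal solution. Here is a small Python sketch (not part of the notes) doing this for the six-state example from earlier, with target set $A = \{4\}$.
\begin{verbatim}
import numpy as np

P = np.array([[1/2, 1/2, 0,   0,   0,   0],
              [0,   0,   1,   0,   0,   0],
              [1/3, 0,   0,   1/3, 1/3, 0],
              [0,   0,   0,   1/2, 1/2, 0],
              [0,   0,   0,   0,   0,   1],
              [0,   0,   0,   0,   1,   0]])
A = [3]                # the target set {4}, in 0-indexed coordinates

h = np.zeros(6)
h[A] = 1.0             # h^(0)_i = P_i(H^A <= 0)
for _ in range(200):   # h^(n)_i = P_i(H^A <= n), increasing in n
    h = P @ h
    h[A] = 1.0
print(np.round(h, 4))  # approximately (0.5, 0.5, 0.5, 1, 0, 0)
\end{verbatim}
The iterates converge to $(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, 1, 0, 0)$: from the closed class $\{5, 6\}$ we can never reach $4$, while from $\{1, 2, 3\}$ we reach it with probability $\frac{1}{2}$.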
717
- The next question we want to ask is how long it will take for us to hit $A$. We want to find $\E_i(H^A) = k_i^A$. Note that we have to be careful --- if there is a chance that we never hit $A$, then $H^A$ could be infinite, and $\E_i(H^A) = \infty$. This occurs if $h_i^A < 1$. So often we are only interested in the case where $h_i^A = 1$ (note that $h_i^A = 1$ does \emph{not} imply that $k_i^A < \infty$. It is merely a necessary condition).
718
-
719
- We get a similar result characterizing the expected hitting time.
720
- \begin{thm}
721
- $(k_i^A: i \in S)$ is the minimal non-negative solution to
722
- \[
723
- k_i^A =
724
- \begin{cases}
725
- 0 & i \in A\\
726
- 1 + \sum_j p_{i, j}k_j^A & i \not\in A.
727
- \end{cases}
728
- \]
729
- \end{thm}
730
- Note that we have this ``$1 +$'' since when we move from $i$ to $j$, one step has already passed.
731
-
732
- The proof is almost the same as the proof we had above.
733
- \begin{proof}
734
- The proof that $(k_i^A)$ satisfies the equations is the same as before.
735
-
736
- Now let $(y_i : i\in S)$ be a non-negative solution. We show that $y_i \geq k_i^A$.
737
-
738
- If $i \in A$, we get $y_i = k_i^A = 0$. Otherwise, suppose $i\not\in A$. Then we have
739
- \begin{align*}
740
- y_i &= 1 + \sum_j p_{i, j}y_j\\
741
- &= 1 + \sum_{j \in A}p_{i, j}y_j + \sum_{j\not\in A}p_{i, j}y_j\\
742
- &= 1 + \sum_{j\not\in A}p_{i, j}y_j\\
743
- &= 1 + \sum_{j \not\in A}p_{i, j}\left(1 + \sum_{k\not\in A} p_{j, k}y_k\right)\\
744
- &\geq 1 + \sum_{j \not\in A} p_{i, j}\\
745
- &= \P_i(H^A \geq 1) + \P_i(H^A \geq 2).
746
- \end{align*}
747
- By induction, we know that
748
- \[
749
- y_i \geq \P_i (H^A \geq 1) + \cdots + \P_i(H^A \geq n)
750
- \]
751
- for all $n$. Let $n\to \infty$. Then we get
752
- \[
753
- y_i \geq \sum_{m \geq 1}\P_i(H^A \geq m) = \sum_{m \geq 1} m\P_i(H^A = m)= k_i^A.\qedhere
754
- \]
755
- \end{proof}
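The same monotone iteration works for $k^A$, starting from the zero vector. As a quick illustration (again an example of ours, not from the notes), take the symmetric random walk on $\{0, 1, \cdots, 10\}$ stopped at both ends, where the expected absorption time from $i$ is known to be $i(10 - i)$.
\begin{verbatim}
import numpy as np

# Symmetric random walk on {0, ..., 10} absorbed at A = {0, 10}.
N = 10
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

A = [0, N]
k = np.zeros(N + 1)
for _ in range(2000):
    k = 1 + P @ k      # k^(n+1)_i = 1 + sum_j p_{ij} k^(n)_j off A
    k[A] = 0.0
print(np.round(k, 2))  # matches the known answer k_i = i(N - i)
\end{verbatim}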
756
-
757
- \begin{eg}[Gambler's ruin]
758
- This time, we will consider a random walk on $\N$. In each step, we either move to the right with probability $p$, or to the left with probability $q = 1 - p$. What is the probability of ever hitting $0$ from a given initial point? In other words, we want to find $h_i = h_i^{\{0\}}$.
759
-
760
- We know $h_i$ is the minimal solution to
761
- \[
762
- h_i =
763
- \begin{cases}
764
- 1 & i = 0\\
765
- qh_{i - 1} + ph_{i + 1} & i \not= 0.
766
- \end{cases}
767
- \]
768
- What are the solutions to these equations? We can view this as a difference equation
769
- \[
770
- ph_{i + 1} - h_i + qh_{i - 1} = 0,\quad i \geq 1.
771
- \]
772
- with the boundary condition that $h_0 = 1$. We all know how to solve difference equations, so let's just jump to the solution.
773
-
774
- If $p \not= q$, i.e.\ $p \not= \frac{1}{2}$, then the solution has the form
775
- \[
776
- h_i = A + B\left(\frac{q}{p}\right)^i
777
- \]
778
- for $i \geq 0$. If $p < q$, then for large $i$, $\left(\frac{q}{p}\right)^i$ is very large and blows up. However, since $h_i$ is a probability, it can never blow up. So we must have $B = 0$. So $h_i$ is constant. Since $h_0 = 1$, we have $h_i = 1$ for all $i$. So we always get to $0$.
779
-
780
- If $p > q$, since $h_0 = 1$, we have $A + B = 1$. So
781
- \[
782
- h_i = \left(\frac{q}{p}\right)^i + A\left(1 - \left(\frac{q}{p}\right)^i\right).
783
- \]
784
- This is in fact a solution for all $A$. So we want to find the smallest solution.
785
-
786
- As $i \to\infty$, we get $h_i \to A$. Since $h_i \geq 0$, we know that $A \geq 0$. Subject to this constraint, the minimum is attained when $A = 0$ (since $(q/p)^i$ and $(1 - (q/p)^i)$ are both positive). So we have
787
- \[
788
- h_i = \left(\frac{q}{p}\right)^i.
789
- \]
790
- There is another way to solve this. We can give ourselves a ceiling, i.e.\ we also stop when we hit some $k > 0$, and impose $h_k = 0$ there, since the walk stopped at $k$ never reaches $0$. We now have two boundary conditions and can find a unique solution. Then we take the limit as $k \to \infty$. This is the approach taken in IA Probability.
791
-
792
- Here if $p = q$, then by the same arguments, we get $h_i = 1$ for all $i$.
793
- \end{eg}
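We can check the answer $h_i = (q/p)^i$ numerically. The Python sketch below (illustrative; the ceiling of $60$ is an arbitrary cut-off beyond which a return to $0$ is negligibly likely) estimates $h_3$ for $p = 0.6$ by simulation and compares it with $(q/p)^3 = 8/27 \approx 0.296$.
\begin{verbatim}
import random

def hits_zero(start, p, ceiling=60):
    # one run: does the walk hit 0 before drifting up past the ceiling?
    x = start
    while 0 < x < ceiling:
        x += 1 if random.random() < p else -1
    return x == 0

random.seed(1)
p, i, trials = 0.6, 3, 50_000
est = sum(hits_zero(i, p) for _ in range(trials)) / trials
print(est, ((1 - p) / p) ** i)   # both close to 0.296
\end{verbatim}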
794
-
795
- \begin{eg}[Birth-death chain]
796
- Let $(p_i: i \geq 1)$ be an arbitrary sequence such that $p_i \in (0, 1)$. We let $q_i = 1 - p_i$. We let $\N$ be our state space and define the transition probabilities to be
797
- \[
798
- p_{i, i + 1} = p_i,\quad p_{i, i - 1} = q_i.
799
- \]
800
- This is a more general case of the random walk --- in the random walk we have a constant $p_i$ sequence.
801
-
802
- This is a general model for population growth, where the change in population depends on what the current population is. Here each ``step'' does not correspond to some unit time, since births and deaths occur rather randomly. Instead, we just make a ``step'' whenever some birth or death occurs, regardless of what time they occur.
803
-
804
- Here, if we have no people left, then it is impossible for us to reproduce and get more population. So we have
805
- \[
806
- p_{0, 0} = 1.
807
- \]
808
- We say $0$ is \emph{absorbing} in that $\{0\}$ is closed. We let $h_i = h_i^{\{0\}}$. We know that
809
- \[
810
- h_0 = 1,\quad p_i h_{i + 1} - h_i + q_i h_{i - 1} = 0,\quad i \geq 1.
811
- \]
812
- This is no longer a difference equation, since the coefficients depends on the index $i$. To solve this, we need magic. We rewrite this as
813
- \begin{align*}
814
- p_i h_{i + 1} - h_i + q_i h_{i - 1} &= p_i h_{i + 1} - (p_i + q_i) h_i + q_i h_{i - 1} \\
815
- &= p_i (h_{i + 1} - h_i) - q_i(h_i - h_{i - 1}).
816
- \end{align*}
817
- We let $u_i = h_{i - 1} - h_i$ (picking $h_i - h_{i - 1}$ might seem more natural, but this definition makes $u_i$ positive). Then our equation becomes
818
- \[
819
- u_{i + 1} = \frac{q_i}{p_i} u_i.
820
- \]
821
- We can iterate this to become
822
- \[
823
- u_{i + 1} = \left(\frac{q_i}{p_i}\right)\left(\frac{q_{i - 1}}{p_{i - 1}}\right) \cdots \left(\frac{q_1}{p_1}\right) u_1.
824
- \]
825
- We let
826
- \[
827
- \gamma_i = \frac{q_1q_2\cdots q_i}{p_1p_2\cdots p_i}.
828
- \]
829
- Then we get $u_{i + 1} = \gamma_i u_1$. For convenience, we let $\gamma_0 = 1$. Now we want to retrieve our $h_i$. We can do this by summing the equation $u_i = h_{i - 1} - h_i$. So we get
830
- \[
831
- h_0 - h_i = u_1 + u_2 + \cdots + u_i.
832
- \]
833
- Using the fact that $h_0 = 1$, we get
834
- \[
835
- h_i = 1 - u_1(\gamma_0 + \gamma_1 + \cdots + \gamma_{i - 1}).
836
- \]
837
- Here we have a parameter $u_1$, and we need to find out what this is. Our theorem tells us the value of $u_1$ minimizes $h_i$. This all depends on the value of
838
- \[
839
- S = \sum_{i = 0}^\infty \gamma_i.
840
- \]
841
- By the law of excluded middle, $S$ either diverges or converges. If $S = \infty$, then we must have $u_1 = 0$. Otherwise, $h_i$ blows up for large $i$, but we know that $0 \leq h_i \leq 1$. If $S$ is finite, then $u_1$ can be non-zero. We know that the $\gamma_i$ are all positive. So to minimize $h_i$, we need to maximize $u_1$. We cannot make $u_1$ arbitrarily large, since this will make $h_i$ negative. To find the maximum possible value of $u_1$, we take the limit as $i \to\infty$. Then we know that the maximum value of $u_1$ satisfies
842
- \[
843
- 0 = 1 - u_1 S.
844
- \]
845
- In other words, $u_1 = 1/S$. So we have
846
- \[
847
- h_i = \frac{\sum_{k = i}^\infty \gamma_k}{\sum_{k = 0}^\infty \gamma_k}.
848
- \]
849
- \end{eg}
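For a concrete instance of the formula (the choice of $p_i$ here is ours, not from the notes), take $p_i = \frac{i}{i + 1}$, so that $q_i/p_i = 1/i$, $\gamma_i = 1/i!$ and $S = e$. The Python sketch below compares $h_i = \big(\sum_{k \geq i} 1/k!\big)/e$ with a crude Monte Carlo estimate, again using an arbitrary ceiling beyond which a return to $0$ is negligibly likely.
\begin{verbatim}
import math, random

def h_formula(i, terms=60):
    S = sum(1 / math.factorial(k) for k in range(terms))        # ~ e
    return sum(1 / math.factorial(k) for k in range(i, terms)) / S

def h_mc(i, trials=50_000, ceiling=30):
    hits = 0
    for _ in range(trials):
        x = i
        while 0 < x < ceiling:
            x = x + 1 if random.random() < x / (x + 1) else x - 1
        hits += (x == 0)
    return hits / trials

random.seed(2)
for i in (1, 2, 3):
    print(i, round(h_formula(i), 4), round(h_mc(i), 4))
\end{verbatim}
The two columns agree to within Monte Carlo error, e.g.\ $h_1 = (e - 1)/e \approx 0.632$.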
850
-
851
- \subsection{The strong Markov property and applications}
852
- We are going to state the \emph{strong} Markov property and see applications of it. Before this, we should know what the \emph{weak} Markov property is. We have, in fact, already seen the weak Markov property. It's just that we called it the ``Markov property'' instead.
853
-
854
- In probability, we often have ``strong'' and ``weak'' versions of things. For example, we have the strong and weak law of large numbers. The difference is that the weak versions are expressed in terms of probabilities, while the strong versions are expressed in terms of random variables.
855
-
856
- Initially, when people first started developing probability theory, they just talked about probability distributions like the Poisson distribution or the normal distribution. However, later it turned out it is often nicer to talk about random variables instead. After messing with random variables, we can just take expectations or evaluate probabilities to get the corresponding statement about probability distributions. Hence usually the ``strong'' versions imply the ``weak'' version, but not the other way round.
857
-
858
- In this case, recall that we defined the Markov property in terms of the probabilities at some fixed time. We have some fixed time $t$, and we want to know the probabilities of events after $t$ in terms of what happened before $t$. In the strong Markov property, we will allow the time to be a random variable $T$, and say something slightly more involved. However, we will not allow $T$ to be an arbitrary random variable, but just some nice ones.
859
-
860
- \begin{defi}[Stopping time]
861
- Let $X$ be a Markov chain. A random variable $T$ (which is a function $\Omega \to \N\cup \{\infty\}$) is a \emph{stopping time} for the chain $X = (X_n)$ if for $n \geq 0$, the event $\{T = n\}$ is given in terms of $X_0, \cdots, X_n$.
862
- \end{defi}
863
- For example, suppose we are in a casino and gambling. We let $X_n$ be the amount of money we have at time $n$. Then we can set our stopping time as ``the time when we have $\$10$ left''. This is a stopping time, in the sense that we can use this as a guide to when to stop --- it is certainly possible to set yourself a guide that you should leave the casino when you have $\$10$ left. However, it does not make sense to say ``I will leave if the next game will make me bankrupt'', since there is no way to tell if the next game will make you bankrupt (it certainly will not if you win the game!). Hence this is not a stopping time.
864
-
865
- \begin{eg}
866
- The hitting time $H^A$ is a stopping time. This is since $\{H^A = n\} = \{X_i \not\in A\text{ for }i < n\} \cap \{X_n \in A\}$. We also know that $H^A + 1$ is a stopping time, since $\{H^A + 1 = n\} = \{H^A = n - 1\}$ depends only on $X_i$ for $i \leq n - 1$. However, $H^A - 1$ is not a stopping time since $\{H^A - 1 = n\}$ depends on $X_{n + 1}$.
867
- \end{eg}
868
-
869
- We can now state the strong Markov property, which is expressed in a rather complicated manner but can be useful at times.
870
- \begin{thm}[Strong Markov property]
871
- Let $X$ be a Markov chain with transition matrix $P$, and let $T$ be a stopping time for $X$. Given $T < \infty$ and $X_T = i$, the chain $(X_{T + k}: k \geq 0)$ is a Markov chain with transition matrix $P$ with initial distribution $X_{T + 0} = i$, and this Markov chain is independent of $X_0, \cdots, X_T$.
872
- \end{thm}
873
- Proof is omitted.
874
-
875
- \begin{eg}[Gambler's ruin]
876
- Again, this is the Markov chain taking values on the non-negative integers, moving to the right with probability $p$ and left with probability $q = 1 - p$. $0$ is an absorbing state, since we have no money left to bet if we are broke.
877
-
878
- Instead of computing the probability of hitting zero, we want to find the time it takes to get to $0$, i.e.
879
- \[
880
- H = \inf\{n \geq 0: X_n = 0\}.
881
- \]
882
- Note that the infimum of the empty set is $+\infty$, i.e.\ if we never hit zero, we say it takes infinite time. What is the distribution of $H$? We define the generating function
883
- \[
884
- G_i(s) = \E_i(s^H) = \sum_{n = 0}^\infty s^n \P_i(H = n),\quad |s| < 1.
885
- \]
886
- Note that we need the requirement that $|s| < 1$, since it is possible that $H$ is infinite, and we would not want to think whether the sum converges when $s = 1$. However, we know that it does for $|s| < 1$.
887
-
888
- We have
889
- \[
890
- G_1(s) = \E_1(s^H) = p\E_1(s^H\mid X_1 = 2) + q\E_1(s^H\mid X_1 = 0).
891
- \]
892
- How can we simplify this? The second term is easy, since if $X_1 = 0$, then we must have $H = 1$. So $\E_1(s^H\mid X_1 = 0) = s$. The first term is more tricky. We are now at $2$. To get to $0$, we need to pass through $1$. So the time needed to get to $0$ is the time to get from $2$ to $1$ (say $H'$), plus the time to get from $1$ to $0$ (say $H''$). We know that $H'$ and $H''$ have the same distribution as $H$, and by the strong Markov property, they are independent. So
893
- \[
894
- G_1 = p\E_1(s^{H' + H'' + 1}) + qs = psG_1^2 + qs.\tag{$*$}
895
- \]
896
- Solving this, we get two solutions
897
- \[
898
- G_1 (s) = \frac{1 \pm \sqrt{1 - 4pqs^2}}{2ps}.
899
- \]
900
- We have to be careful here. This result says that for each value of $s$, $G_1(s)$ is \emph{either} $\frac{1 + \sqrt{1 - 4pqs^2}}{2ps}$ \emph{or} $\frac{1 - \sqrt{1 - 4pqs^2}}{2ps}$. It does \emph{not} say that there is some consistent choice of $+$ or $-$ that works for everything.
901
-
902
- However, we know that if we suddenly change the sign, then $G_1(s)$ will be discontinuous at that point, but $G_1$, being a power series, has to be continuous. So the solution must be either $+$ for all $s$, or $-$ for all $s$.
903
-
904
- To determine the sign, we can look at what happens when $s = 0$. We see that the numerator becomes $1 \pm 1$, while the denominator is $0$. We know that $G$ converges at $s = 0$. Hence the numerator must be $0$. So we must pick $-$, i.e.
905
- \[
906
- G_1 (s) = \frac{1 - \sqrt{1 - 4pqs^2}}{2ps}.
907
- \]
908
- We can find $\P_1(H = k)$ by expanding the Taylor series.
909
-
910
- What is the probability of ever hitting $0$? This is
911
- \[
912
- \P_1(H < \infty) = \sum_{n = 1}^\infty \P_1(H = n) = \lim_{s\to 1} G_1(s) = \frac{1 - \sqrt{1 - 4pq}}{2p}.
913
- \]
914
- We can rewrite this using the fact that $q = 1 - p$. So $1 - 4pq = 1 - 4p(1 - p) = (1 - 2p)^2 = |q - p|^2$. So we can write
915
- \[
916
- \P_1(H < \infty) = \frac{1 - |p - q|}{2p} =
917
- \begin{cases}
918
- 1 & p \leq q\\
919
- \frac{q}{p} & p > q
920
- \end{cases}.
921
- \]
922
- Using this, we can also find $\mu = \E_1(H)$. Firstly, if $p > q$, then it is possible that $H = \infty$. So $\mu = \infty$. If $p \leq q$, we can find $\mu$ by differentiating $G_1(s)$ and evaluating at $s = 1$. Doing this directly would result in horrible and messy algebra, which we want to avoid. Instead, we can differentiate $(*)$ and obtain
923
- \[
924
- G_1' = pG_1^2 + 2ps G_1G_1' + q.
925
- \]
926
- We can rewrite this as
927
- \[
928
- G_1'(s) = \frac{pG_1(s)^2 + q}{1 - 2ps G_1(s)}.
929
- \]
930
- By Abel's lemma, we have
931
- \[
932
- \mu = \lim_{s \to 1}G_1'(s) =
933
- \begin{cases}
934
- \infty & p = \frac{1}{2}\\
935
- \frac{1}{q - p} & p < \frac{1}{2}
936
- \end{cases}.
937
- \]
938
- \end{eg}
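As a sanity check on the last formula, the following Python sketch (illustrative only, not part of the notes) simulates the walk with $p = 0.3$ and compares the empirical mean of $H$ with $\frac{1}{q - p} = 2.5$.
\begin{verbatim}
import random

p, q = 0.3, 0.7
print("theory:", 1 / (q - p))       # expected hitting time of 0 from 1

random.seed(3)
trials, total = 100_000, 0
for _ in range(trials):
    x, steps = 1, 0
    while x > 0:
        x += 1 if random.random() < p else -1
        steps += 1
    total += steps
print("simulation:", total / trials)
\end{verbatim}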
939
-
940
- \subsection{Further classification of states}
941
- So far, we have classified chains in say, irreducible and reducible chains. We have also seen that states can be recurrent or transient. By definition, a state is recurrent if the probability of returning to it is 1. However, we can further classify recurrent states. Even if a state is recurrent, it is possible that the expected time of returning is infinite. So while we will eventually return to the original state, this is likely to take a long, long time. The opposite behaviour is also possible --- the original state might be very attracting, and we are likely to return quickly. It turns out this distinction can affect the long-term behaviour of the chain.
942
-
943
- First we have the following proposition, which tells us that if a state is recurrent, then we are expected to return to it infinitely many times.
944
- \begin{thm}
945
- Suppose $X_0 = i$. Let $V_i = |\{n \geq 1: X_n = i\}|$. Let $f_{i, i} = \P_i(T_i < \infty)$. Then
946
- \[
947
- \P_i (V_i = r) = f_{i, i}^r (1 - f_{i, i}),
948
- \]
949
- since we have to return $r$ times, each with probability $f_{i, i}$, and then never return.
950
-
951
- Hence, if $f_{i, i} = 1$, then $\P_i(V_i = r) = 0$ for all $r$. So $\P_i(V_i = \infty) = 1$. Otherwise, $\P_i(V_i = r)$ is a genuine geometric distribution, and we get $\P_i(V_i < \infty) = 1$.
952
- \end{thm}
953
-
954
- \begin{proof}
955
- Exercise, using the strong Markov property.
956
- \end{proof}
957
-
958
- \begin{defi}[Mean recurrence time]
959
- Let $T_i$ be the returning time to a state $i$. Then the \emph{mean recurrence time} of $i$ is
960
- \[
961
- \mu_i = \E_i(T_i) =
962
- \begin{cases}
963
- \infty & i\text{ transient}\\
964
- \sum_{n = 1}^\infty n f_{i, i}(n) & i\text{ recurrent}
965
- \end{cases}
966
- \]
967
- \end{defi}
968
-
969
- \begin{defi}[Null and positive state]
970
- If $i$ is recurrent, we call $i$ a \emph{null state} if $\mu_i = \infty$. Otherwise $i$ is \emph{non-null} or \emph{positive}.
971
- \end{defi}
972
-
973
- This is mostly all we care about. However, there is still one more technical consideration. Recall that in the random walk starting from the origin, we can only return to the origin after an even number of steps. This causes problems for a lot of our future results. For example, we will later look at the ``convergence'' of Markov chains. If we are very likely to return to $0$ after an even number of steps, but it is impossible to return after an odd number of steps, we don't get convergence. Hence we would like to prevent this from happening.
974
-
975
- \begin{defi}[Period]
976
- The \emph{period} of a state $i$ is $d_i = \gcd\{n \geq 1: p_{i, i}(n)> 0\}$.
977
-
978
- A state is \emph{aperiodic} if $d_i = 1$.
979
- \end{defi}
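Computationally, the period of a state can be read off from the powers of $P$: take the gcd of all return times up to some horizon. A finite horizon can only give a multiple of the true period, but it is exact for small examples. A minimal Python sketch (an illustration of ours), using a $3$-cycle where every state clearly has period $3$:
\begin{verbatim}
from math import gcd
import numpy as np

P = np.array([[0, 1, 0],      # a 3-cycle: every state has period 3
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

def period(P, i, horizon=50):
    # gcd of the return times n <= horizon with p_{ii}(n) > 0
    d, Q = 0, np.eye(len(P))
    for n in range(1, horizon + 1):
        Q = Q @ P
        if Q[i, i] > 1e-12:
            d = gcd(d, n)
    return d

print([period(P, i) for i in range(3)])   # [3, 3, 3]
\end{verbatim}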
980
- In general, we like aperiodic states. This is not a very severe restriction. For example, in the random walk, we can get rid of periodicity by saying there is a very small chance of staying at the same spot in each step. We can make this chance so small that we can ignore it for most practical purposes, while it still gets rid of the technical problem of periodicity.
981
-
982
- \begin{defi}[Ergodic state]
983
- A state $i$ is \emph{ergodic} if it is aperiodic and positive recurrent.
984
- \end{defi}
985
- These are the really nice states.
986
-
987
- Recall that recurrence is a class property --- if two states are in the same communicating class, then they are either both recurrent, or both transient. Is this true for the properties above as well? The answer is yes.
988
-
989
- \begin{thm}
990
- If $i \leftrightarrow j$ are communicating, then
991
- \begin{enumerate}
992
- \item $d_i = d_j$.
993
- \item $i$ is recurrent iff $j$ is recurrent.
994
- \item $i$ is positive recurrent iff $j$ is positive recurrent.
995
- \item $i$ is ergodic iff $j$ is ergodic.
996
- \end{enumerate}
997
- \end{thm}
998
-
999
- \begin{proof}\leavevmode
1000
- \begin{enumerate}
1001
- \item Assume $i \leftrightarrow j$. Then there are $m, n \geq 1$ with $p_{i, j}(m), p_{j, i}(n) > 0$. By the Chapman-Kolmogorov equation, we know that
1002
- \[
1003
- p_{i, i}(m + r + n) \geq p_{i, j}(m)p_{j, j}(r)p_{j, i}(n) \geq \alpha p_{j, j}(r),
1004
- \]
1005
- where $\alpha = p_{i, j}(m) p_{j, i}(n) > 0$. Now let $D_j = \{r \geq 1: p_{j, j}(r) > 0\}$. Then by definition, $d_j = \gcd D_j$.
1006
-
1007
- We have just shown that if $r \in D_j$, then we have $m + r + n \in D_i$. We also know that $n + m \in D_i$, since $p_{i, i}(n + m) \geq p_{i, j}(n)p_{ji}(m) > 0$. Hence for any $r \in D_j$, we know that $d_i \mid m + r + n$, and also $d_i \mid m + n$. So $d_i \mid r$. Hence $\gcd D_i \mid \gcd D_j$. By symmetry, $\gcd D_j \mid \gcd D_i$ as well. So $\gcd D_i = \gcd D_j$.
1008
- \item Proved before.
1009
- \item This is deferred to a later time.
1010
- \item Follows directly from (i), (ii) and (iii) by definition.\qedhere
1011
- \end{enumerate}
1012
- \end{proof}
1013
-
1014
- We also have the following proposition we will use later on:
1015
- \begin{prop}
1016
- If the chain is irreducible and $j \in S$ is recurrent, then
1017
- \[
1018
- \P(X_n = j\text{ for some }n \geq 1) = 1,
1019
- \]
1020
- regardless of the distribution of $X_0$.
1021
- \end{prop}
1022
- Note that this is not just the definition of recurrence, since recurrence says that if we start at $j$, then we will return to $j$. Here we are saying that wherever we start, we will eventually visit $j$.
1023
-
1024
- \begin{proof}
1025
- Let
1026
- \[
1027
- f_{i, j} = \P_i(X_n = j\text{ for some }n \geq 1).
1028
- \]
1029
- Since $j \to i$, there exists a \emph{least} integer $m \geq 1$ with $p_{j, i}(m) > 0$. Since $m$ is least, we know that
1030
- \[
1031
- p_{j, i}(m) = \P_j(X_m = i, X_r \not= j\text{ for }r < m).
1032
- \]
1033
- This is since we cannot visit $j$ in the path, or else we can truncate our path and get a shorter path from $j$ to $i$. Then
1034
- \[
1035
- p_{j, i}(m)(1 - f_{i, j}) \leq 1 - f_{j, j}.
1036
- \]
1037
- This is since the left hand side is the probability that we first go from $j$ to $i$ in $m$ steps, and then never go from $i$ to $j$ again; while the right is just the probability of never returning to $j$ starting from $j$; and we know that it is easier to just not get back to $j$ than to go to $i$ in exactly $m$ steps and never returning to $j$. Hence if $f_{j, j} = 1$, then $f_{i, j} = 1$.
1038
-
1039
- Now let $\lambda_k = \P(X_0 = k)$ be our initial distribution. Then
1040
- \[
1041
- \P(X_n = j\text{ for some }n \geq 1) = \sum_i \lambda_i \P_i (X_n =j\text{ for some }n \geq 1) = 1.\qedhere
1042
- \]
1043
- \end{proof}
1044
-
1045
- \section{Long-run behaviour}
1046
- \subsection{Invariant distributions}
1047
- We want to look at what happens in the long run. Recall that at the very beginning of the course, we calculated the transition probabilities of the two-state Markov chain, and saw that in the long run, as $n \to \infty$, the probability distribution of the $X_n$ will ``converge'' to some particular distribution. Moreover, this limit does not depend on where we started. We'd like to see if this is true for all Markov chains.
1048
-
1049
- First of all, we want to make it clear what we mean by the chain ``converging'' to something. When we are dealing with real sequences $x_n$, we have a precise definition of what it means for $x_n \to x$. How can we define the convergence of a sequence of random variables $X_n$? These are not proper numbers, so we cannot just apply our usual definitions.
1050
-
1051
- For the purposes of our investigation of Markov chains here, it turns out that the right way to think about convergence is to look at the probability mass function. For each $k \in S$, we ask if $\P(X_n = k)$ converges to anything.
1052
-
1053
- In most cases, $\P(X_n = k)$ converges to \emph{something}. Hence, this is not an interesting question to ask. What we would \emph{really} want to know is whether the limit is a probability mass function. It is, for example, possible that $\P(X_n = k) \to 0$ for all $k$, and the result is not a distribution.
1054
-
1055
- From Analysis, we know there are, in general, two ways to prove something converges --- we either ``guess'' a limit and then prove it does indeed converge to that limit, or we prove the sequence is Cauchy. It would be rather silly to prove that these probabilities form a Cauchy sequence, so we will attempt to guess the limit. The limit will be known as an \emph{invariant distribution}, for reasons that will become obvious shortly.
1056
-
1057
- The main focus of this section is to study the existence and properties of invariant distributions, and we will provide sufficient conditions for convergence to occur in the next.
1058
-
1059
- Recall that if we have a starting distribution $\lambda$, then the distribution of the $n$th step is given by $\lambda P^n$. We have the following trivial identity:
1060
- \[
1061
- \lambda P^{n + 1} = (\lambda P^n) P.
1062
- \]
1063
- If the distribution converges, then we have $\lambda P^n \to \pi$ for some $\pi$, and also $\lambda P^{n + 1} \to \pi$. Hence the limit $\pi$ satisfies
1064
- \[
1065
- \pi P = \pi.
1066
- \]
1067
- We call these \emph{invariant distributions}.
1068
-
1069
- \begin{defi}[Invariant distribution]
1070
- Let $X = (X_n)$ be a Markov chain with transition matrix $P$. The distribution $\pi = (\pi_k: k \in S)$ is an \emph{invariant distribution} if
1071
- \begin{enumerate}
1072
- \item $\pi_k \geq 0$, $\sum_k \pi_k = 1$.
1073
- \item $\pi = \pi P$.
1074
- \end{enumerate}
1075
- The first condition just ensures that this is a genuine distribution.
1076
-
1077
- An invariant distribution is also known as an invariant measure, equilibrium distribution or steady-state distribution.
1078
- \end{defi}
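In the finite case, an invariant distribution can be computed directly by solving the linear system $\pi P = \pi$, $\sum_k \pi_k = 1$. A small Python sketch (the $3$-state matrix is an arbitrary example of ours, not from the notes):
\begin{verbatim}
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Solve pi (P - I) = 0 together with sum(pi) = 1: take the transpose
# system and replace one equation by the normalisation condition.
M = P.T - np.eye(3)
M[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(M, b)
print(pi, pi @ P)        # pi and pi P agree
\end{verbatim}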
1079
-
1080
- \begin{thm}
1081
- Consider an irreducible Markov chain. Then
1082
- \begin{enumerate}
1083
- \item There exists an invariant distribution if some state is positive recurrent.
1084
- \item If there is an invariant distribution $\pi$, then every state is positive recurrent, and
1085
- \[
1086
- \pi_i = \frac{1}{\mu_i}
1087
- \]
1088
- for $i \in S$, where $\mu_i$ is the mean recurrence time of $i$. In particular, $\pi$ is unique.
1089
- \end{enumerate}
1090
- \end{thm}
1091
- Note how we worded the first statement. Recall that we once stated that if one state is positive recurrent, then all states are positive recurrent, and then said we would defer the proof for later on. This is where we actually prove it. In (i), we show that if some state is positive recurrent, then it has an invariant distribution. Then (ii) tells us if there is an invariant distribution, then \emph{all} states are positive recurrent. Hence one state being positive recurrent implies all states being positive recurrent.
1092
-
1093
- Now where did the formula for $\pi$ come from? We can first think about what $\pi_i$ should be. If the chain converges, then for large $m$ we expect $\P(X_m = i) \sim \pi_i$. This means that if we run the chain for a very long time, we would expect to be in state $i$ a proportion $\pi_i$ of the time. On the other hand, the mean recurrence time tells us that we are expected to (re)-visit $i$ every $\mu_i$ steps. So it makes sense that $\pi_i = 1/\mu_i$.
1094
-
1095
- To put this on a more solid ground and actually prove it, we would like to look at some time intervals. For example, we might ask how many times we will hit $i$ in 100 steps. This is not a good thing to do, because we are not given where we are starting, and this probability can depend a lot on where the starting point is.
1096
-
1097
- It turns out the natural thing to do is not to use a fixed time interval, but use a \emph{random} time interval. In particular, we fix a state $k$, and look at the time interval between two consecutive visits of $k$.
1098
-
1099
- We start by letting $X_0 = k$. Let $W_i$ denote the number of visits to $i$ before the next visit to $k$. Formally, we have
1100
- \[
1101
- W_i = \sum_{m = 1}^\infty 1(X_m = i, m \leq T_k),
1102
- \]
1103
- where $T_k$ is the first return time to $k$ and $1$ is the indicator function. In particular, $W_i = 1$ for $i = k$ (if $T_k$ is finite). We can also write this as
1104
- \[
1105
- W_i = \sum_{m = 1}^{T_k} 1(X_m = i).
1106
- \]
1107
- This is a random variable. So we can look at its expectation. We define
1108
- \[
1109
- \rho_i = \E_k(W_i).
1110
- \]
1111
- We will show that this $\rho$ is \emph{almost} our $\pi_i$, up to a constant.
1112
- \begin{prop}
1113
- For an irreducible recurrent chain and $k \in S$, $\rho = (\rho_i: i \in S)$ defined as above by
1114
- \[
1115
- \rho_i = \E_k(W_i),\quad W_i = \sum_{m = 1}^\infty 1(X_m = i, T_k \geq m),
1116
- \]
1117
- we have
1118
- \begin{enumerate}
1119
- \item $\rho_k = 1$
1120
- \item $\sum_i \rho_i = \mu_k$
1121
- \item $\rho = \rho P$
1122
- \item $0 < \rho_i < \infty$ for all $i \in S$.
1123
- \end{enumerate}
1124
- \end{prop}
1125
-
1126
- \begin{proof}\leavevmode
1127
- \begin{enumerate}
1128
- \item This follows from definition of $\rho_i$, since for $m < T_k$, $X_m \not= k$.
1129
- \item Note that $\sum_i W_i = T_k$, since in each step we hit exactly one thing. We have
1130
- \begin{align*}
1131
- \sum_i \rho_i &= \sum_i \E_k(W_i)\\
1132
- &= \E_k\left(\sum_i W_i\right)\\
1133
- &= \E_k(T_k) \\
1134
- &= \mu_k.
1135
- \end{align*}
1136
- Note that we secretly swapped the sum and expectation, which is in general bad because both are potentially infinite sums. However, there is a theorem (monotone convergence, or Tonelli's theorem) that tells us this is okay whenever the summands are non-negative, which is left as an Analysis exercise.
1137
- \item We have
1138
- \begin{align*}
1139
- \rho_j &= \E_k(W_j)\\
1140
- &= \E_k\left(\sum_{m \geq 1}1(X_m = j, T_k \geq m)\right) \\
1141
- &= \sum_{m\geq 1}\P_k(X_m = j, T_k \geq m)\\
1142
- &= \sum_{m \geq 1}\sum_{i \in S} \P_k(X_m = j \mid X_{m - 1} = i, T_k \geq m) \P_k(X_{m - 1} = i, T_k \geq m)\\
1143
- \intertext{We now use the Markov property. Note that $T_k \geq m$ means $X_1, \cdots, X_{m - 1}$ are all not $k$. The Markov property thus tells us the condition $T_k \geq m$ is useless. So we are left with}
1144
- &= \sum_{m \geq 1}\sum_{i \in S} \P_k(X_m = j \mid X_{m - 1} = i) \P_k(X_{m - 1} = i, T_k \geq m)\\
1145
- &= \sum_{m \geq 1}\sum_{i \in S} p_{i, j} \P_k(X_{m - 1} = i, T_k \geq m)\\
1146
- &= \sum_{i \in S} p_{i, j} \sum_{m \geq 1} \P_k(X_{m - 1} = i, T_k \geq m)
1147
- \end{align*}
1148
- The last term looks really $\rho_i$, but the indices are slightly off. We shall have faith in ourselves, and show that this is indeed equal to $\rho_i$.
1149
-
1150
- First we let $r = m - 1$, and get
1151
- \[
1152
- \sum_{m \geq 1} \P_k (X_{m - 1} = i, T_k \geq m) = \sum_{r = 0}^\infty \P_k(X_r = i, T_k \geq r + 1).
1153
- \]
1154
- Of course this does not fix the problem. We will look at the different possible cases. First, if $i = k$, then the $r = 0$ term is $1$ since $T_k \geq 1$ is always true by definition and $X_0 = k$, also by construction. On the other hand, the other terms are all zero since it is impossible for the return time to be greater or equal to $r + 1$ if we are at $k$ at time $r$. So the sum is $1$, which is $\rho_k$.
1155
-
1156
- In the case where $i \not= k$, first note that when $r = 0$ we know that $X_0 = k \not= i$. So the term is zero. For $r \geq 1$, we know that if $X_r = i$ and $T_k \geq r$, then we must also have $T_k \geq r + 1$, since it is impossible for the return time to $k$ to be exactly $r$ if we are not at $k$ at time $r$. So $\P_k(X_r = i , T_k \geq r + 1) = \P_k(X_r = i, T_k \geq r)$. So indeed we have
1157
- \[
1158
- \sum_{m \geq 1} \P_k (X_{m - 1} = i, T_k \geq m) = \rho_i.
1159
- \]
1160
- Hence we get
1161
- \[
1162
- \rho_j = \sum_{i \in S} p_{ij} \rho_i.
1163
- \]
1164
- So done.
1165
-
1166
- \item To show that $0 < \rho_i < \infty$, first fix our $i$, and note that $\rho_k = 1$. We know that $\rho = \rho P = \rho P^n$ for $n \geq 1$. So by expanding the matrix sum, we know that for any $m, n$,
1167
- \begin{align*}
1168
- \rho_i &\geq \rho_k p_{k, i}(n)\\
1169
- \rho_k &\geq \rho_i p_{i, k}(m)
1170
- \end{align*}
1171
- By irreducibility, we now choose $m, n$ such that $p_{i, k}(m), p_{k, i}(n) > 0$. So we have
1172
- \[
1173
- \rho_k p_{k, i}(n) \leq \rho_i \leq \frac{\rho_k}{p_{i, k}(m)}
1174
- \]
1175
- Since $\rho_k = 1$, the result follows.\qedhere
1176
- \end{enumerate}
1177
- \end{proof}
1178
-
1179
- Now we can prove our initial theorem.
1180
- \begin{thm}
1181
- Consider an irreducible Markov chain. Then
1182
- \begin{enumerate}
1183
- \item There exists an invariant distribution if and only if some state is positive recurrent.
1184
- \item If there is an invariant distribution $\pi$, then every state is positive recurrent, and
1185
- \[
1186
- \pi_i = \frac{1}{\mu_i}
1187
- \]
1188
- for $i \in S$, where $\mu_i$ is the mean recurrence time of $i$. In particular, $\pi$ is unique.
1189
- \end{enumerate}
1190
- \end{thm}
1191
-
1192
- \begin{proof}\leavevmode
1193
- \begin{enumerate}
1194
- \item Let $k$ be a positive recurrent state. Then
1195
- \[
1196
- \pi_i = \frac{\rho_i}{\mu_k}
1197
- \]
1198
- satisfies $\pi_i \geq 0$ with $\sum_i \pi_i = 1$, and is an invariant distribution.
1199
- \item Let $\pi$ be an invariant distribution. We first show that all entries are non-zero. For all $n$, we have
1200
- \[
1201
- \pi = \pi P^n.
1202
- \]
1203
- Hence for all $i, j \in S$, $n \in \N$, we have
1204
- \[
1205
- \pi_i \geq \pi_j p_{j,i}(n).\tag{$*$}
1206
- \]
1207
- Since $\sum_i \pi_i = 1$, there is some $k$ such that $\pi_k > 0$.
1208
-
1209
- By $(*)$ with $j = k$, we know that
1210
- \[
1211
- \pi_i \geq \pi_k p_{k,i}(n) > 0
1212
- \]
1213
- for some $n$, by irreducibility. So $\pi_i > 0$ for all $i$.
1214
-
1215
- Now we show that all states are positive recurrent. So we need to rule out the cases of transience and null recurrence.
1216
-
1217
- So assume all states are transient. So $p_{j, i}(n) \to 0$ as $n \to \infty$ for all $i, j\in S$. However, we know that
1218
- \[
1219
- \pi_i = \sum_j \pi_j p_{j, i}(n).
1220
- \]
1221
- If our state space is finite, then since $p_{j, i}(n) \to 0$, the sum tends to $0$, and we reach a contradiction, since $\pi_i$ is non-zero. If we have a countably infinite set, we have to be more careful. We have a huge state space $S$, and we don't know how to work with it. So we approximate it by a finite $F$, and split $S$ into $F$ and $S\setminus F$. So we get
1222
- \begin{align*}
1223
- 0 &\leq \sum_j \pi_j p_{j, i}(n)\\
1224
- &= \sum_{j \in F} \pi_j p_{j, i}(n) + \sum_{j \not\in F} \pi_j p_{j, i}(n) \\
1225
- &\leq \sum_{j \in F}p_{j, i}(n) + \sum_{j \not\in F}\pi_j\\
1226
- &\to \sum_{j \not\in F}\pi_j
1227
- \end{align*}
1228
- as we take the limit $n \to \infty$. We now want to take the limit as $F \to S$. We know that $\sum_{j \in S} \pi_j = 1$. So as we put more and more things into $F$, $\sum_{j \not\in F} \pi_j \to 0$. So $\sum \pi_j p_{j, i}(n) \to 0$. So we get the desired contradiction. Hence we know that all states are recurrent.
1229
-
1230
- To rule out the case of null recurrence, recall that in the previous discussion, we said that we ``should'' have $\pi_i \mu_i =1 $. So we attempt to prove this. Then this would imply that $\mu_i$ is finite since $\pi_i > 0$.
1231
-
1232
- By definition $\mu_i = \E_i(T_i)$, and we have the general formula
1233
- \[
1234
- \E(N) = \sum_n \P(N \geq n).
1235
- \]
1236
- So we get
1237
- \[
1238
- \pi_i \mu_i = \sum_{n = 1}^\infty \pi_i\P_i (T_i \geq n).
1239
- \]
1240
- Note that $\P_i$ is a probability conditional on starting at $i$. So to work with the expression $\pi_i \P_i (T_i \geq n)$, it is helpful to let $\pi_i$ be the probability of starting at $i$. So suppose $X_0$ has distribution $\pi$. Then
1241
- \[
1242
- \pi_i \mu_i = \sum_{n = 1}^\infty \P(T_i \geq n, X_0 = i).
1243
- \]
1244
- Let's work out what the terms are. What is the first term? It is
1245
- \[
1246
- \P(T_i \geq 1, X_0 = i) = \P(X_0 = i) = \pi_i,
1247
- \]
1248
- since we know that we always have $T_i \geq 1$ by definition.
1249
-
1250
- For other $n \geq 2$, we want to compute $\P(T_i \geq n, X_0 = i)$. This is the probability of starting at $i$, and then not return to $i$ in the next $n - 1$ steps. So we have
1251
- \[
1252
- \P(T_i \geq n, X_0 = i) = \P(X_0 = i, X_m \not= i\text{ for } 1 \leq m \leq n - 1)
1253
- \]
1254
- Note that all the expressions on the right look rather similar, except that the first term is $=i$ while the others are $\not= i$. We can make them look more similar by writing
1255
- \begin{align*}
1256
- \P(T_i \geq n, X_0 = i) &= \P(X_m \not= i\text{ for } 1\leq m \leq n - 1) \\
1257
- &\quad\quad- \P(X_m \not= i\text{ for } 0\leq m \leq n - 1)
1258
- \end{align*}
1259
- What can we do now? The trick here is to use invariance. Since we started with an invariant distribution, we always live in an invariant distribution. Looking at the time interval $1 \leq m \leq n - 1$ is the same as looking at $0 \leq m \leq n - 2$. In other words, the sequence $(X_0, \cdots, X_{n - 2})$ has the same distribution as $(X_1, \cdots, X_{n - 1})$. So we can write the expression as
1260
- \[
1261
- \P(T_i \geq n, X_0 = i) = a_{n - 2} - a_{n - 1},
1262
- \]
1263
- where
1264
- \[
1265
- a_r = \P(X_m \not= i \text{ for }0 \leq m \leq r).
1266
- \]
1267
- Now we are summing differences, and when we sum differences everything cancels term by term. Then we have
1268
- \[
1269
- \pi_i \mu_i = \pi_i + (a_0 - a_1) + (a_1 - a_2) + \cdots
1270
- \]
1271
- Note that we cannot do the cancellation directly, since this is an infinite sum, and infinity behaves weirdly. We have to look at a finite truncation, do the cancellation, and take the limit. So we have
1272
- \begin{align*}
1273
- \pi_i \mu_i &= \lim_{N \to \infty} [\pi_i + (a_0 - a_1) + (a_1 - a_2) + \cdots + (a_{N - 2} - a_{N - 1})]\\
1274
- &= \lim_{N\to \infty} [\pi_i + a_0 - a_{N - 1}]\\
1275
- &= \pi_i + a_0 - \lim_{N \to \infty} a_N.
1276
- \end{align*}
1277
- What is each term? $\pi_i$ is the probability that $X_0 = i$, and $a_0$ is the probability that $X_0 \not= i$. So we know that $\pi_i + a_0 = 1$. What about $\lim a_N$? We know that
1278
- \[
1279
- \lim_{N\to \infty} a_N = \P(X_m \not= i \text{ for all }m).
1280
- \]
1281
- Since the state is recurrent, the probability of never visiting $i$ is $0$. So we get
1282
- \[
1283
- \pi_i \mu_i = 1.
1284
- \]
1285
- Since $\pi_i > 0$, we get $\mu_i = \frac{1}{\pi_i} < \infty$ for all $i$. Hence we have positive recurrence. We have also proved the formula we wanted.\qedhere
1286
- \end{enumerate}
1287
- \end{proof}
1288
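As a quick numerical sanity check of the relation $\pi_i \mu_i = 1$, one can compute $\pi$ for a small chain and recover each $\mu_i$ from the first-passage equations. The following is a minimal illustrative Python sketch, assuming NumPy is available; the $3 \times 3$ transition matrix is just a made-up example.
\begin{verbatim}
import numpy as np

# An arbitrary 3-state transition matrix (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n = P.shape[0]

# Invariant distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
pi, *_ = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)

# Mean return time mu_i: with k_j the expected time to hit i from j != i,
# k = (I - Q)^{-1} 1 where Q is P restricted to states != i, and
# mu_i = 1 + sum_{j != i} p_{i,j} k_j.
for i in range(n):
    others = [j for j in range(n) if j != i]
    Q = P[np.ix_(others, others)]
    k = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    mu_i = 1 + P[i, others] @ k
    print(i, pi[i] * mu_i)        # each line should print 1.0
\end{verbatim}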
- \subsection{Convergence to equilibrium}
1289
- So far, we have discussed that if a chain converged, then it must converge to an invariant distribution. We then proved that the chain has a (unique) invariant distribution if and only if it is positive recurrent.
1290
-
1291
- Now, we want to understand when convergence actually occurs.
1292
- \begin{thm}
1293
- Consider a Markov chain that is irreducible, positive recurrent and aperiodic. Then
1294
- \[
1295
- p_{i,k}(n) \to \pi_k
1296
- \]
1297
- as $n \to \infty$, where $\pi$ is the unique invariant distribution.
1298
- \end{thm}
1299
- We will prove this by ``coupling''. The idea of coupling is that here we have two sets of probabilities, and we want to prove some relations between them. The first step is to move our attention to random variables, by considering random variables that give rise to these probability distributions. In other words, we look at the Markov chains themselves instead of the probabilities. In general, random variables are nicer to work with, since they are functions, not discrete, unrelated numbers.
1300
-
1301
- However, we have a problem since we get two random variables, but they are completely unrelated. This is bad. So we will need to do some ``coupling'' to correlate the two random variables together.
1302
-
1303
- \begin{proof}(non-examinable)
1304
- The idea of the proof is to show that for any $i, j, k \in S$, we have $p_{i, k}(n) \to p_{j, k}(n)$ as $n \to \infty$. Then we can argue that no matter where we start, we will tend to the same distribution, and hence any distribution tends to the same distribution as $\pi$, since $\pi$ doesn't change.
1305
-
1306
- As mentioned, instead of working with probability distributions, we will work with the chains themselves. In particular, we have \emph{two} Markov chains, and we imagine one starts at $i$ and the other starts at $j$. To do so, we define the pair $Z = (X, Y)$ of \emph{two} independent chains, with $X = (X_n)$ and $Y = (Y_n)$ each having the state space $S$ and transition matrix $P$.
1307
-
1308
- We can let $Z = (Z_n)$, where $Z_n = (X_n, Y_n)$ is a Markov chain on state space $S^2$. This has transition probabilities
1309
- \[
1310
- p_{ij, k\ell} = p_{i, k} p_{j, \ell}
1311
- \]
1312
- by independence of the chains. We would like to apply theorems to $Z$, so we need to make sure it has nice properties. First, we want to check that $Z$ is irreducible. We have
1313
- \[
1314
- p_{ij, k\ell}(n) = p_{i,k} (n) p_{j,\ell}(n).
1315
- \]
1316
- We want this to be strictly positive for some $n$. We know that there is $m$ such that $p_{i,k}(m) > 0$, and some $r$ such that $p_{j,\ell}(r) > 0$. However, what we need is an $n$ that makes them \emph{simultaneously} positive. We can indeed find such an $n$, because the chains are aperiodic: a standard number-theoretic argument shows that $\{n: p_{i,k}(n) > 0\}$ contains all sufficiently large integers, and similarly for $\{n: p_{j,\ell}(n) > 0\}$, so both are positive for all large enough $n$.
1317
-
1318
- Now we want to show positive recurrence. We know that $X$, and hence $Y$ is positive recurrent. By our previous theorem, there is a unique invariant distribution $\pi$ for $P$. It is then easy to check that $Z$ has invariant distribution
1319
- \[
1320
- \nu = (\nu_{ij}: ij \in S^2)
1321
- \]
1322
- given by
1323
- \[
1324
- \nu_{i, j} = \pi_i \pi_j.
1325
- \]
1326
- This works because $X$ and $Y$ are independent. So $Z$ is also positive recurrent.
1327
-
1328
- So $Z$ is nice.
1329
-
1330
- The next step is to couple the two chains together. The idea is to fix some state $s \in S$, and let $T$ be the earliest time at which $X_n = Y_n = s$. Because of recurrence, we can always find such a $T$. After this time $T$, $X$ and $Y$ behave under the exact same distribution.
1331
-
1332
- We define
1333
- \[
1334
- T = \inf\{n: Z_n = (X_n, Y_n) = (s, s)\}.
1335
- \]
1336
- We have
1337
- \begin{align*}
1338
- p_{i, k}(n) &= \P_i(X_n = k)\\
1339
- &= \P_{ij}(X_n = k)\\
1340
- &= \P_{ij} (X_n = k, T \leq n) + \P_{ij} (X_n = k, T > n)\\
1341
- \intertext{Note that if $T \leq n$, then at time $T$, $X_T = Y_T$. Thus the evolution of $X$ and $Y$ after time $T$ is equal. So this is equal to}
1342
- &= \P_{ij}(Y_n = k, T \leq n) + \P_{ij} (X_n = k, T > n)\\
1343
- &\leq \P_{ij}(Y_n = k) + \P_{ij}(T > n)\\
1344
- &= p_{j,k}(n) + \P_{ij}(T > n).
1345
- \end{align*}
1346
- Hence we know that
1347
- \[
1348
- |p_{i, k}(n) - p_{j, k}(n)| \leq \P_{ij} (T > n).
1349
- \]
1350
- As $n \to \infty$, we know that $\P_{ij}(T > n) \to 0$ since $Z$ is recurrent. So
1351
- \[
1352
- |p_{i, k}(n) - p_{j, k}(n)| \to 0
1353
- \]
1354
- With this result, we can prove what we want. First, by the invariance of $\pi$, we have
1355
- \[
1356
- \pi = \pi P^n
1357
- \]
1358
- for all $n$. So we can write
1359
- \[
1360
- \pi_k = \sum_j \pi_j p_{j, k}(n).
1361
- \]
1362
- Hence we have
1363
- \[
1364
- |\pi_k - p_{i, k}(n)| = \left|\sum_j \pi_j (p_{j, k}(n) - p_{i, k}(n))\right| \leq \sum_j \pi_j |p_{j, k}(n) - p_{i, k}(n)|.
1365
- \]
1366
- We know that each individual $|p_{j, k}(n) - p_{i, k}(n)|$ tends to zero. So by bounded convergence, we know
1367
- \[
1368
- \pi_k - p_{i, k}(n) \to 0.
1369
- \]
1370
- So done.
1371
- \end{proof}
1372
- What happens when we have a \emph{null} recurrent case? We would still be able to prove the result about $p_{i, k}(n) \to p_{j, k}(n)$, since $T$ is finite by recurrence. However, we do not have a $\pi$ to make the last step.
1373
-
1374
- Recall that we motivated our definition of $\pi_i$ as the proportion of time we spend in state $i$. Can we prove that this is indeed the case?
1375
-
1376
- More concretely, we let
1377
- \[
1378
- V_i(n) = |\{m \leq n: X_m = i\}|.
1379
- \]
1380
- We thus want to know what happens to $V_i(n)/n$ as $n \to \infty$. We think this should tend to $\pi_i$.
1381
-
1382
- Note that technically, this is not a well-formed question, since we don't exactly know how convergence of random variables should be defined. Nevertheless, we can give an informal proof of this result.
1383
-
1384
- The idea is to look at the average time between successive visits. We assume $X_0 = i$. We let $T_m$ be the time of the $m$th return to $i$. In particular, $T_0 = 0$. We define $U_m = T_m - T_{m - 1}$. All of these are iid by the strong Markov property, and have mean $\mu_i$ by definition of $\mu_i$.
1385
-
1386
- Hence, by the law of large numbers,
1387
- \[
1388
- \frac{1}{m} T_m = \frac{1}{m} \sum_{r = 1}^m U_r \sim \E [U_1] = \mu_i. \tag{$*$}
1389
- \]
1390
- We now want to look at $V_i$. If we stare at them hard enough, we see that $V_i(n) \geq k$ if and only if $T_k \leq n$. We can write an equivalent statement by letting $k$ be a real number. We denote by $\lceil x\rceil$ the least integer greater than or equal to $x$. Then we have
1391
- \[
1392
- V_i(n) \geq x \Leftrightarrow T_{\lceil x\rceil} \leq n.
1393
- \]
1394
- Putting a funny value of $x$ in, we get
1395
- \[
1396
- \frac{V_i(n)}{n} \geq \frac{A}{\mu_i} \Leftrightarrow \frac{1}{n} T_{\lceil An/\mu_i\rceil} \leq 1.
1397
- \]
1398
- However, using $(*)$, we know that
1399
- \[
1400
- \frac{T_{An/\mu_i}}{An/\mu_i} \to \mu_i.
1401
- \]
1402
- Multiply both sides by $A/\mu_i$ to get
1403
- \[
1404
- \frac{A}{\mu_i} \frac{T_{An/\mu_i}}{An/\mu_i} = \frac{T_{An/\mu_i}}{n} \to \frac{A \mu_i}{\mu_i} = A.
1405
- \]
1406
- So if $A < 1$, then the event $\frac{1}{n}T_{\lceil An/\mu_i\rceil} \leq 1$ occurs with probability tending to $1$, while if $A > 1$, it occurs with probability tending to $0$. So in some sense,
1407
- \[
1408
- \frac{V_i(n)}{n} \to \frac{1}{\mu_i} = \pi_i.
1409
- \]
1410
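The informal claim $V_i(n)/n \to \pi_i$ is also easy to test by simulation. The sketch below is purely illustrative (it assumes NumPy, and the transition matrix is an arbitrary example): it runs the chain for many steps and compares the occupation fractions with the invariant distribution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n_steps = 100_000

# Run the chain started from state 0 and count visits to each state.
visits = np.zeros(3)
x = 0
for _ in range(n_steps):
    visits[x] += 1
    x = rng.choice(3, p=P[x])

# Invariant distribution, for comparison with V_i(n)/n.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.append(np.zeros(3), 1.0), rcond=None)
print(visits / n_steps)   # empirical occupation fractions
print(pi)                 # invariant distribution; should be close
\end{verbatim}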
- \section{Time reversal}
1411
- Physicists have a hard time dealing with time reversal. They cannot find a decent explanation for why we can move in all directions in space, but can only move forward in time. Some physicists mumble something about entropy and pretend they understand the problem of time reversal, but they don't. However, in the world of Markov chain, time reversal is something we understand well.
1412
-
1413
- Suppose we have a Markov chain $X = (X_0, \cdots, X_N)$. We define a new Markov chain by $Y_k = X_{N - k}$. Then $Y = (X_N, X_{N - 1}, \cdots, X_0)$. When is this a Markov chain? It turns out this is the case if $X_0$ has the invariant distribution.
1414
-
1415
- \begin{thm}
1416
- Let $X$ be positive recurrent, irreducible with invariant distribution $\pi$. Suppose that $X_0$ has distribution $\pi$. Then $Y$ defined by
1417
- \[
1418
- Y_{k} = X_{N - k}
1419
- \]
1420
- is a Markov chain with transition matrix $\hat{P} = (\hat{p}_{i, j}: i, j \in S)$, where
1421
- \[
1422
- \hat{p}_{i, j} = \left(\frac{\pi_j}{\pi_i}\right) p_{j, i}.
1423
- \]
1424
- Also $\pi$ is invariant for $\hat{P}$.
1425
- \end{thm}
1426
- Most of the results here should not be surprising, apart from the fact that $Y$ is Markov. Since $Y$ is just $X$ reversed, the transition matrix of $Y$ is just the transpose of the transition matrix of $X$, with some factors to get the normalization right. Also, it is not surprising that $\pi$ is invariant for $\hat{P}$, since each $X_i$, and hence $Y_i$ has distribution $\pi$ by assumption.
1427
-
1428
- \begin{proof}
1429
- First we show that $\hat{p}$ is a stochastic matrix. We clearly have $\hat{p}_{i,j} \geq 0$. We also have that for each $i$, we have
1430
- \[
1431
- \sum_j \hat{p}_{i, j} = \frac{1}{\pi_i} \sum_j \pi_j p_{j, i} = \frac{1}{\pi_i} \pi_i = 1,
1432
- \]
1433
- using the fact that $\pi = \pi P$.
1434
-
1435
- We now show $\pi$ is invariant for $\hat{P}$: We have
1436
- \[
1437
- \sum_i \pi_i \hat{p}_{i, j} = \sum_i \pi_j p_{j, i} = \pi_j
1438
- \]
1439
- since $P$ is a stochastic matrix and $\sum_i p_{ji} = 1$.
1440
-
1441
- Note that our formula for $\hat{p}_{i,j}$ gives
1442
- \[
1443
- \pi_i \hat{p}_{i, j} = p_{j, i} \pi_j.
1444
- \]
1445
- Now we have to show that $Y$ is a Markov chain. We have
1446
- \begin{align*}
1447
- \P(Y_0 = i_0, \cdots, Y_k = i_k) &= \P(X_{N - k} = i_k, X_{N - k + 1} = i_{k - 1}, \cdots, X_N = i_0)\\
1448
- &= \pi_{i_k} p_{i_k, i_{k - 1}} p_{i_{k - 1}, i_{k - 2}} \cdots p_{i_1, i_0}\\
1449
- &= (\pi_{i_k} p_{i_k, i_{k - 1}}) p_{i_{k - 1}, i_{k - 2}} \cdots p_{i_1, i_0}\\
1450
- &= \hat{p}_{i_{k - 1}, i_k} (\pi_{i_{k - 1}} p_{i_{k - 1}, i_{k - 2}}) \cdots p_{i_1, i_0}\\
1451
- &= \cdots\\
1452
- &= \pi_{i_0} \hat{p}_{i_0, i_1} \hat{p}_{i_1, i_2} \cdots \hat{p}_{i_{k - 1}, i_k}.
1453
- \end{align*}
1454
- So $Y$ is a Markov chain.
1455
- \end{proof}
1456
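The formula $\hat{p}_{i, j} = (\pi_j/\pi_i) p_{j, i}$ is easy to check numerically as well. The sketch below is an illustrative aside, assuming NumPy; the matrix is an arbitrary example. It builds $\hat{P}$ from $P$ and $\pi$, and confirms that $\hat{P}$ is stochastic and that $\pi$ is invariant for it.
\begin{verbatim}
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Invariant distribution of P.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.append(np.zeros(3), 1.0), rcond=None)

# Reversed chain: P_hat[i, j] = (pi[j] / pi[i]) * P[j, i].
P_hat = (P.T * pi) / pi[:, None]
print(np.allclose(P_hat.sum(axis=1), 1))   # P_hat is a stochastic matrix
print(np.allclose(pi @ P_hat, pi))         # pi is invariant for P_hat
\end{verbatim}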
-
1457
- Just because the reverse of a chain is a Markov chain does not mean it is ``reversible''. In physics, we say a process is reversible only if the dynamics of moving forwards in time is the same as the dynamics of moving backwards in time. Hence we have the following definition.
1458
-
1459
- \begin{defi}[Reversible chain]
1460
- An irreducible Markov chain $X = (X_0, \cdots, X_N)$ in its invariant distribution $\pi$ is \emph{reversible} if its reversal has the same transition probabilities as does $X$, ie
1461
- \[
1462
- \pi_i p_{i, j} = \pi_j p_{j, i}
1463
- \]
1464
- for all $i, j \in S$.
1465
-
1466
- This equation is known as the \emph{detailed balance equation}. In general, if $\lambda$ is a distribution that satisfies
1467
- \[
1468
- \lambda_i p_{i, j} = \lambda_j p_{j, i},
1469
- \]
1470
- we say $(P, \lambda)$ is in \emph{detailed balance}.
1471
- \end{defi}
1472
- Note that most chains are \emph{not} reversible, just like most functions are not continuous. However, if we know reversibility, then we have one powerful piece of information. The nice thing about this is that it is very easy to check if the above holds --- we just have to compute $\pi$, and check the equation directly.
1473
-
1474
- In fact, there is an even easier check. We don't have to start by finding $\pi$, but just some $\lambda$ that is in detailed balance.
1475
-
1476
- \begin{prop}
1477
- Let $P$ be the transition matrix of an irreducible Markov chain $X$. Suppose $(P, \lambda)$ is in detailed balance. Then $\lambda$ is the \emph{unique} invariant distribution and the chain is reversible (when $X_0$ has distribution $\lambda$).
1478
- \end{prop}
1479
-
1480
- This is a much better criterion. To find $\pi$, we need to solve
1481
- \[
1482
- \pi_i = \sum_j \pi_j p_{j, i},
1483
- \]
1484
- and this has a big scary sum on the right. However, to find the $\lambda$, we just need to solve
1485
- \[
1486
- \lambda_i p_{i, j} = \lambda_j p_{j, i},
1487
- \]
1488
- and there is no sum involved. So this is indeed a helpful result.
1489
-
1490
- \begin{proof}
1491
- It suffices to show that $\lambda$ is invariant. Then it is automatically unique and the chain is by definition reversible. This is easy to check. We have
1492
- \[
1493
- \sum_j \lambda_j p_{j, i} = \sum_j \lambda_i p_{i, j} = \lambda_i \sum_j p_{i, j} = \lambda_i.
1494
- \]
1495
- So $\lambda$ is invariant.
1496
- \end{proof}
1497
- This gives a really quick route to computing invariant distributions.
1498
-
1499
- \begin{eg}[Birth-death chain with immigration]
1500
- Recall our birth-death chain, where at each state $i > 0$, we move to $i + 1$ with probability $p_i$ and to $i - 1$ with probability $q_i = 1 - p_i$. When we are at $0$, we are dead and no longer change.
1501
-
1502
- We wouldn't be able to apply our previous result to this scenario, since $0$ is an absorbing state, and this chain is obviously not irreducible, let alone positive recurrent. Hence we make a slight modification to our scenario --- if we have population $0$, we allow ourselves to have a probability $p_0$ of having an immigrant and getting to $1$, or probability $q_0 = 1 - p_0$ that we stay at $0$.
1503
-
1504
- This is sort of a ``physical'' process, so it would not be surprising if this is reversible. So we can try to find a solution to the detailed balance equation. If it works, we would have solved it quickly. If not, we have just wasted a minute or two. We need to solve
1505
- \[
1506
- \lambda_i p_{i, j} = \lambda_j p_{j, i}.
1507
- \]
1508
- Note that this is automatically satisfied if $j$ and $i$ differ by at least $2$, since both sides are zero. So we only look at the case where $j = i + 1$ (the case $j = i - 1$ is the same thing with the roles of $i$ and $j$ swapped). So the only equation we have to satisfy is
1509
- \[
1510
- \lambda_i p_i = \lambda_{i + 1}q_{i + 1}
1511
- \]
1512
- for all $i$. This is just a recursive formula for $\lambda_i$, and we can solve to get
1513
- \[
1514
- \lambda_i = \frac{p_{i - 1}}{q_i} \lambda_{i - 1} = \cdots = \left(\frac{p_{i - 1}}{q_i} \frac{p_{i - 2}}{q_{i - 1}} \cdots \frac{p_0}{q_1}\right)\lambda_0.
1515
- \]
1516
- We can call the term in the brackets
1517
- \[
1518
- \rho_i = \left(\frac{p_{i - 1}}{q_i} \frac{p_{i - 2}}{q_{i - 1}} \cdots \frac{p_0}{q_1}\right).
1519
- \]
1520
- For $\lambda_i$ to be a distribution, we need
1521
- \[
1522
- 1 = \sum_i \lambda_i = \lambda_0 \sum_i \rho_i.
1523
- \]
1524
- Thus if
1525
- \[
1526
- \sum \rho_i < \infty,
1527
- \]
1528
- then we can pick
1529
- \[
1530
- \lambda_0 = \frac{1}{\sum \rho_i}
1531
- \]
1532
- and $\lambda$ is a distribution. Hence this is the unique invariant distribution.
1533
-
1534
- If it diverges, the method fails, and we need to use our more traditional methods to check recurrence and transience.
1535
- \end{eg}
1536
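For a concrete instance of this recursion, take $p_i = \frac{1}{3}$ for every $i$, so that $\rho_i = (1/2)^i$ and $\sum_i \rho_i$ converges. The Python sketch below is purely illustrative: it assumes NumPy and truncates the state space at an arbitrary level, just to exhibit the detailed balance equations numerically.
\begin{verbatim}
import numpy as np

p, q = 1/3, 2/3           # p_i = 1/3 and q_i = 2/3 for every i
N = 50                    # arbitrary truncation level for the sketch

# rho_i = (p/q)^i, and lambda_i = rho_i / sum_j rho_j.
rho = (p / q) ** np.arange(N)
lam = rho / rho.sum()

# Detailed balance: lambda_i p = lambda_{i+1} q for consecutive states.
print(np.allclose(lam[:-1] * p, lam[1:] * q))   # True
\end{verbatim}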
-
1537
- \begin{eg}[Random walk on a finite graph]
1538
- A graph is a collection of points with edges between them. For example, the following is a graph:
1539
- \begin{center}
1540
- \begin{tikzpicture}
1541
- \node [circ] at (0, 0) {};
1542
- \node [circ] at (2, 0) {};
1543
- \node [circ] at (1, -1.5) {};
1544
- \node [circ] at (-1, -0.5) {};
1545
- \draw (0, 0) -- (2, 0) -- (1, -1.5) -- (0, 0);
1546
-
1547
- \draw (1, -1.5) -- (-1, -0.5);
1548
-
1549
- \draw (0, 0) -- (-1, -0.5);
1550
- \draw (-3, -0.5) -- (-1, -0.5) -- (-2, -1.6) -- cycle;
1551
- \node [circ] at (-3, -0.5) {};
1552
- \node [circ] at (-1, -0.5) {};
1553
- \node [circ] at (-2, -1.6) {};
1554
-
1555
- \draw (-1.5, -0.2) node [circ] {} -- (-1.2, 1) node [circ] {};
1556
- \end{tikzpicture}
1557
- \end{center}
1558
- More precisely, a graph is a pair $G = (V, E)$, where $E$ contains distinct unordered pairs of distinct vertices $(u, v)$, drawn as edges from $u$ to $v$.
1559
-
1560
- Note that the restrictions to distinct pairs and distinct vertices are there to prevent parallel edges and loops, and the fact that the pairs are unordered means our edges don't have orientations.
1561
-
1562
- A graph $G$ is connected if for all $u, v \in V$, there exists a path along the edges from $u$ to $v$.
1563
-
1564
- Let $G = (V, E)$ be a connected graph with $|V| < \infty$. Let $X = (X_n)$ be a random walk on $G$. Here we live on the vertices, and on each step, we move to an adjacent vertex. More precisely, if $X_n = x$, then $X_{n + 1}$ is chosen uniformly at random from the set of neighbours of $x$, i.e.\ the set $\{y \in V: (x, y) \in E\}$, independently of the past. This is a Markov chain.
1565
-
1566
- For example, our previous simple symmetric random walks on $\Z$ or $\Z^d$ are random walks on graphs (despite the graphs not being finite). Our transition probabilities are
1567
- \[
1568
- p_{i, j} =
1569
- \begin{cases}
1570
- 0 & j\text{ is not a neighbour of }i\\
1571
- \frac{1}{d_i} & j\text{ is a neighbour of }i
1572
- \end{cases},
1573
- \]
1574
- where $d_i$ is the number of neighbours of $i$, commonly known as the \emph{degree} of $i$.
1575
-
1576
- By connectivity, the Markov chain is irreducible. Since it is finite, it is recurrent, and in fact positive recurrent.
1577
-
1578
- This process is a rather ``physical'' process, and again we would expect it to be reversible. So let's try to solve the detailed balance equation
1579
- \[
1580
- \lambda_i p_{i, j} = \lambda_j p_{j, i}.
1581
- \]
1582
- If $j$ is not a neighbour of $i$, then both sides are zero, and it is trivially balanced. Otherwise, the equation becomes
1583
- \[
1584
- \lambda_i \frac{1}{d_i} = \lambda_j \frac{1}{d_j}.
1585
- \]
1586
- The solution is \emph{obvious}. Take $\lambda_i = d_i$. In fact we can multiply by any constant $c$, and $\lambda_i = c d_i$ for any $c$. So we pick our $c$ such that this is a distribution, i.e.
1587
- \[
1588
- 1 = \sum_i \lambda_i = c \sum_i d_i.
1589
- \]
1590
- We now note that since each edge adds $1$ to the degrees of each vertex on the two ends, $\sum d_i$ is just twice the number of edges. So the equation gives
1591
- \[
1592
- 1 = 2c |E|.
1593
- \]
1594
- Hence we get
1595
- \[
1596
- c = \frac{1}{2|E|}.
1597
- \]
1598
- Hence, our invariant distribution is
1599
- \[
1600
- \lambda_i = \frac{d_i}{2|E|}.
1601
- \]
1602
- Let's look at a specific scenario.
1603
-
1604
- Suppose we have a knight on the chessboard. In each step, the allowed moves are:
1605
- \begin{itemize}
1606
- \item Move two steps horizontally, then one step vertically;
1607
- \item Move two steps vertically, then one step horizontally.
1608
- \end{itemize}
1609
- For example, if the knight is in the center of the board (red dot), then the possible moves are indicated with blue crosses:
1610
- \begin{center}
1611
- \begin{tikzpicture}[scale=0.5]
1612
- \foreach \x in {0,1,...,8} {
1613
- \draw (0, \x) -- (8, \x);
1614
- \draw (\x, 0) -- (\x, 8);
1615
- }
1616
- \node [mred, circ] at (3.5, 4.5) {};
1617
-
1618
- \node [mblue] at (5.5, 5.5) {$\times$};
1619
- \node [mblue] at (5.5, 3.5) {$\times$};
1620
- \node [mblue] at (1.5, 5.5) {$\times$};
1621
- \node [mblue] at (1.5, 3.5) {$\times$};
1622
- \node [mblue] at (4.5, 6.5) {$\times$};
1623
- \node [mblue] at (4.5, 2.5) {$\times$};
1624
- \node [mblue] at (2.5, 6.5) {$\times$};
1625
- \node [mblue] at (2.5, 2.5) {$\times$};
1626
- \end{tikzpicture}
1627
- \end{center}
1628
- At each epoch of time, our erratic knight follows a legal move chosen uniformly from the set of possible moves. Hence we have a Markov chain derived from the chessboard. What is his invariant distribution? We can compute the number of possible moves from each position:
1629
- \begin{center}
1630
- \begin{tikzpicture}[scale=0.5]
1631
- \foreach \x in {0,1,...,8} {
1632
- \draw (0, \x) -- (8, \x);
1633
- \draw (\x, 0) -- (\x, 8);
1634
- }
1635
- \node at (7.5, 7.5) {$2$};
1636
- \node at (7.5, 6.5) {$3$};
1637
- \node at (6.5, 7.5) {$3$};
1638
- \node at (6.5, 6.5) {$4$};
1639
-
1640
- \node at (7.5, 5.5) {$4$};
1641
- \node at (7.5, 4.5) {$4$};
1642
- \node at (5.5, 7.5) {$4$};
1643
- \node at (4.5, 7.5) {$4$};
1644
-
1645
- \node at (6.5, 5.5) {$6$};
1646
- \node at (6.5, 4.5) {$6$};
1647
- \node at (4.5, 6.5) {$6$};
1648
- \node at (5.5, 6.5) {$6$};
1649
-
1650
- \node at (4.5, 4.5) {$8$};
1651
- \node at (4.5, 5.5) {$8$};
1652
- \node at (5.5, 4.5) {$8$};
1653
- \node at (5.5, 5.5) {$8$};
1654
- \end{tikzpicture}
1655
- \end{center}
1656
- The sum of degrees is
1657
- \[
1658
- \sum_i d_i = 336.
1659
- \]
1660
- So the invariant distribution at, say, the corner is
1661
- \[
1662
- \pi_{\mathrm{corner}} = \frac{2}{336}.
1663
- \]
1664
- \end{eg}
1665
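As a small computational footnote, the degree count and the value $\pi_{\mathrm{corner}} = 2/336$ can be reproduced in a few lines of Python (an illustrative sketch only):
\begin{verbatim}
# Degrees of the knight's-move graph on the 8x8 board, and the invariant
# distribution pi_i = d_i / sum_j d_j = d_i / (2|E|).
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(r, c):
    return sum(0 <= r + dr < 8 and 0 <= c + dc < 8 for dr, dc in moves)

degrees = {(r, c): degree(r, c) for r in range(8) for c in range(8)}
print(sum(degrees.values()))                        # 336
print(degrees[(0, 0)] / sum(degrees.values()))      # 2/336
\end{verbatim}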
- \end{document}
 
books/cam/IB_M/methods.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/IB_M/quantum_mechanics.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_E/classical_and_quantum_solitons.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/advanced_quantum_field_theory.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/algebras.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/logic.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/modular_forms_and_l_functions.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/positivity_in_algebraic_geometry.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/ramsey_theory.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/riemannian_geometry.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/schramm-loewner_evolutions.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/stochastic_calculus_and_applications.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/symplectic_geometry.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/the_standard_model.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_L/theoretical_physics_of_soft_condensed_matter.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/advanced_probability.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/algebraic_topology_iii.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/analysis_of_partial_differential_equations.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/combinatorics.tex DELETED
@@ -1,1782 +0,0 @@
1
- \documentclass[a4paper]{article}
2
-
3
- \def\npart {III}
4
- \def\nterm {Michaelmas}
5
- \def\nyear {2017}
6
- \def\nlecturer {B. Bollobas}
7
- \def\ncourse {Combinatorics}
8
-
9
- \input{header}
10
-
11
- \begin{document}
12
- \maketitle
13
- {\small
14
- \setlength{\parindent}{0em}
15
- \setlength{\parskip}{1em}
16
-
17
- What can one say about a collection of subsets of a finite set satisfying certain conditions in terms of containment, intersection and union? In the past fifty years or so, a good many fundamental results have been proved about such questions: in the course we shall present a selection of these results and their applications, with emphasis on the use of algebraic and probabilistic arguments.
18
-
19
- The topics to be covered are likely to include the following:
20
- \begin{itemize}
21
- \item The de Bruijn--Erd\"os theorem and its extensions.
22
- \item The Graham--Pollak theorem and its extensions.
23
- \item The theorems of Sperner, EKR, LYMB, Katona, Frankl and F\"uredi. % check Furedi
24
- \item Isoperimetric inequalities: Kruskal--Katona, Harper, Bernstein, BTBT, and their applications.
25
- \item Correlation inequalities, including those of Harris, van den Berg and Kesten, and the Four Functions Inequality.
26
- \item Alon's Combinatorial Nullstellensatz and its applications.
27
- \item LLLL and its applications.
28
- \end{itemize}
29
-
30
- \subsubsection*{Pre-requisites}
31
- The main requirement is mathematical maturity, but familiarity with the basic graph theory course in Part II would be helpful.
32
- }
33
- \tableofcontents
34
-
35
- \section{Hall's theorem}
36
- We shall begin with a discussion of Hall's theorem. Ideally, you've already met it in IID Graph Theory, but we shall nevertheless go through it again.
37
-
38
- \begin{defi}[Bipartite graph]\index{bipartite graph}
39
- We say $G = (X, Y; E)$ is a \emph{bipartite graph} with bipartition $X$ and $Y$ if $(X \sqcup Y, E)$ is a graph such that every edge is between a vertex in $X$ and a vertex in $Y$.
40
-
41
- We say such a bipartite graph is \term{$(k, \ell)$-regular} if every vertex in $X$ has degree $k$ and every vertex in $Y$ has degree $\ell$. A bipartite graph that is $(k, \ell)$-regular for some $k, \ell \geq 1$ is said to be \emph{biregular}\index{biregular graph}.
42
- \end{defi}
43
-
44
- \begin{defi}[Complete matching]\index{complete matching}
45
- Let $G = (X, Y; E)$ be a bipartite graph with bipartition $X$ and $Y$. A \emph{complete matching} from $X$ to $Y$ is an injection $f: X \to Y$ such that $x\, f(x)$ is an edge for every $x \in X$.
46
- \end{defi}
47
-
48
- Hall's theorem gives us a necessary and sufficient condition for the existence of a complete matching. Let's try to first come up with a necessary condition. If there is a complete matching, then for any subset $S \subseteq X$, we certainly have $|\Gamma(S)| \geq |S|$, where \term{$\Gamma(S)$} is the set of neighbours of $S$. Hall's theorem says this is also sufficient.
49
-
50
- \begin{thm}[Hall, 1935]\index{Hall's theorem}
51
- A bipartite graph $G = (X, Y; E)$ has a complete matching from $X$ to $Y$ if and only if $|\Gamma(S)| \geq |S|$ for all $S \subseteq X$.
52
- \end{thm}
53
- This condition is known as \term{Hall's condition}.
54
-
55
- \begin{proof}
56
- We may assume $G$ is edge-minimal satisfying Hall's condition. We show that the edges of $G$ form a complete matching from $X$ to $Y$. For this, we need the following two properties:
57
- \begin{enumerate}
58
- \item Every vertex in $X$ has degree $1$
59
- \item Every vertex in $Y$ has degree $0$ or $1$.
60
- \end{enumerate}
61
-
62
- We first examine the second condition. Suppose $y \in Y$ is such that there exist edges $x_1 y, x_2 y \in E$. Then the minimality of $G$ implies that for each $i$ there is a set $X_i \subseteq X$ with $x_i \in X_i$, such that $|\Gamma(X_i)| = |X_i|$ and $x_i$ is the only neighbour of $y$ in $X_i$ (otherwise we could delete the edge $x_i y$ without violating Hall's condition).
63
-
64
- Now consider the set $X_1 \cap X_2$. We know $\Gamma(X_1 \cap X_2) \subseteq \Gamma(X_1) \cap \Gamma(X_2)$. Moreover, this is strict, as $y$ is in the RHS but not the LHS. So we have
65
- \[
66
- |\Gamma(X_1 \cap X_2)| \leq |\Gamma(X_1) \cap \Gamma(X_2)| - 1.
67
- \]
68
- But also
69
- \begin{align*}
70
- |X_1 \cap X_2| &\leq |\Gamma(X_1 \cap X_2)|\\
71
- &\leq |\Gamma(X_1) \cap \Gamma(X_2)| - 1 \\
72
- &= |\Gamma(X_1)| + |\Gamma(X_2)| - |\Gamma(X_1) \cup \Gamma(X_2)| - 1\\
73
- &= |X_1| + |X_2| - |\Gamma(X_1\cup X_2)| - 1\\
74
- &\leq |X_1| + |X_2| - |X_1 \cup X_2| - 1\\
75
- &= |X_1 \cap X_2| - 1,
76
- \end{align*}
77
- which contradicts Hall's condition.
78
-
79
- One then sees that the first condition is also satisfied --- if $x \in X$ is a vertex, then the degree of $x$ certainly cannot be $0$, or else $|\Gamma(\{x\})| < |\{x\}|$, and we see that $d(x)$ cannot be $>1$ or else we can just remove an edge from $x$ without violating Hall's condition.
80
- \end{proof}
81
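For small graphs, both Hall's condition and the existence of a complete matching can be checked by brute force. The Python sketch below is purely illustrative: the graph is a made-up example, and the search is exponential, so this is only sensible for tiny instances.
\begin{verbatim}
from itertools import combinations, permutations

# Neighbourhoods of the X-vertices in a small bipartite graph G = (X, Y; E).
X = [0, 1, 2]
nbrs = {0: {'a', 'b'}, 1: {'b'}, 2: {'b', 'c'}}

def halls_condition(X, nbrs):
    # |Gamma(S)| >= |S| for every non-empty S subseteq X.
    return all(len(set().union(*(nbrs[x] for x in S))) >= len(S)
               for r in range(1, len(X) + 1) for S in combinations(X, r))

def has_complete_matching(X, nbrs):
    # Brute force over injections X -> Y.
    Y = sorted(set().union(*nbrs.values()))
    return any(all(f[i] in nbrs[x] for i, x in enumerate(X))
               for f in permutations(Y, len(X)))

print(halls_condition(X, nbrs), has_complete_matching(X, nbrs))  # True True
\end{verbatim}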
-
82
- We shall now describe some consequences of Hall's theorem. They will be rather straightforward applications, but we shall later see they have some interesting consequences.
83
-
84
- Let $\mathcal{A} = \{A_1, \ldots, A_m\}$ be a set system. All sets are finite. A set of \term{distinct representatives} of $\mathcal{A}$ is a set $\{a_1, \ldots a_m\}$ of distinct elements $a_i \in A_i$.
85
-
86
- Under what condition do we have a set of distinct representatives? If we have one, then for any $I \subseteq [m] = \{1, 2, \ldots, m\}$, we have
87
- \[
88
- \left|\bigcup_{i \in I} A_i \right| \geq |I|.
89
- \]
90
- We might hope this is sufficient.
91
- \begin{thm}
92
- $\mathcal{A}$ has a set of distinct representatives iff for all $\mathcal{B} \subseteq \mathcal{A}$, we have
93
- \[
94
- \left|\bigcup_{B \in \mathcal{B}} B\right| \geq |\mathcal{B}|.
95
- \]
96
- \end{thm}
97
-
98
- This is an immediate consequence of Hall's theorem.
99
-
100
- \begin{proof}
101
- Define a bipartite graph as follows --- we let $X = \mathcal{A}$, and $Y = \bigcup_{i \in [m]} A_i$. Then draw an edge from $A_i$ to $x$ if $x \in A_i$. Then there is a complete matching from $X$ to $Y$ in this graph iff $\mathcal{A}$ has a set of distinct representatives, and the condition in the theorem is exactly Hall's condition. So we are done by Hall's theorem.
102
- \end{proof}
103
-
104
- \begin{thm}
105
- Let $G = (X, Y; E)$ be a bipartite graph such that $d(x) \geq d(y)$ for all $x \in X$ and $y \in Y$. Then there is a complete matching from $X$ to $Y$.
106
- \end{thm}
107
-
108
- \begin{proof}
109
- Let $d$ be such that $d(x) \geq d \geq d(y)$ for all $x \in X$ and $y \in Y$. For $S \subseteq X$ and $T \subseteq Y$, we let $e(S, T)$ be the number of edges between $S$ and $T$. Let $S \subseteq X$, and $T = \Gamma(S)$. Then we have
110
- \[
111
- e(S, T) = \sum_{x \in S} d(x) \geq d |S|,
112
- \]
113
- but on the other hand, we have
114
- \[
115
- e(S, T) \leq \sum_{y \in T} d(y) \leq d |T|.
116
- \]
117
- So we find that $|T| \geq |S|$. So Hall's condition is satisfied.
118
- \end{proof}
119
-
120
- \begin{cor}
121
- If $G = (X, Y; E)$ is a $(k, \ell)$-regular bipartite graph with $1 \leq \ell \leq k$, then there is a complete matching from $X$ to $Y$.
122
- \end{cor}
123
-
124
- \begin{thm}
125
- Let $G = (X, Y; E)$ be biregular and $A \subseteq X$. Then
126
- \[
127
- \frac{|\Gamma(A)|}{|Y|}\geq \frac{|A|}{|X|}.
128
- \]
129
- \end{thm}
130
-
131
- \begin{proof}
132
- Suppose $G$ is $(k, \ell)$-regular. Then
133
- \[
134
- k|A| = e(A, \Gamma(A)) \leq \ell |\Gamma(A)|.
135
- \]
136
- Thus we have
137
- \[
138
- \frac{|\Gamma(A)|}{|Y|} \geq \frac{k|A|}{\ell |Y|}.% = \frac{|A|}{|X|}.
139
- \]
140
- On the other hand, we can count that
141
- \[
142
- |E| = |X| k = |Y| \ell,
143
- \]
144
- and so
145
- \[
146
- \frac{k}{\ell} = \frac{|Y|}{|X|}.
147
- \]
148
- So we are done.
149
- \end{proof}
150
- Briefly, this says biregular graphs ``expand''.
151
-
152
- \begin{cor}
153
- Let $G = (X, Y; E)$ be biregular and let $|X| \leq |Y|$. Then there is a complete matching of $X$ into $Y$.
154
- \end{cor}
155
- In particular, for \emph{any} biregular graph, there is always a complete matching from one side of the graph to the other.
156
-
157
- \begin{notation}\index{$X^{(r)}$}\index{$X^{(\leq r)}$}\index{$X^{\geq r}$}
158
- Given a set $X$, we write $X^{(r)}$ for the set of all subsets of $X$ with $r$ elements, and similarly for $X^{(\geq r)}$ and $X^{(\leq r)}$.
159
- \end{notation}
160
-
161
- If $|X| = n$, then $|X^{(r)}| = \binom{n}{r}$.
162
-
163
- Now given a set $X$ and two numbers $r < s$, we can construct a biregular graph $(X^{(r)}, X^{(s)}; E)$, where $A \in X^{(r)}$ is joined to $B \in X^{(s)}$ if $A \subseteq B$.
164
-
165
- \begin{cor}
166
- Let $1 \leq r < s \leq |X| = n$. Suppose $|\frac{n}{2} -r | \geq |\frac{n}{2} - s|$. Then there exists an injection $f: X^{(r)} \to X^{(s)}$ such that $A \subseteq f(A)$ for all $A \in X^{(r)}$.
167
-
168
- If $|\frac{n}{2} - r| \leq |\frac{n}{2} - s|$, then there exists an injection $g: X^{(s)} \to X^{(r)}$ such that $A \supseteq g(A)$ for all $A \in X^{(s)}$.
169
- \end{cor}
170
-
171
- \begin{proof}
172
- Note that $|\frac{n}{2} - r| \leq |\frac{n}{2} - s|$ iff $\binom{n}{r} \geq \binom{n}{s}$, so in each case the result follows by applying the previous corollary to the biregular graph between $X^{(r)}$ and $X^{(s)}$.
173
- \end{proof}
174
-
175
- \section{Sperner systems}
176
- In the next few chapters, we are going to try to understand the power set $\mathcal{P}(X)$ of a set. One particularly important structure of $\mathcal{P}(X)$ is that it is a \emph{graded poset}. A lot of the questions we ask can be formulated for arbitrary (graded) posets, but often we will only answer them for power sets, since that is what we are interested in.
177
-
178
- \begin{defi}[Chain]\index{chain}
179
- A subset $C \subseteq S$ of a poset is a \emph{chain} if any two of its elements are comparable.
180
- \end{defi}
181
-
182
- \begin{defi}[Anti-chain]\index{anti-chain}
183
- A subset $A \subseteq S$ is an \emph{anti-chain} if no two of its elements are comparable.
184
- \end{defi}
185
-
186
- Given a set $X$, the \term{power set} $\mathcal{P}(X)$\index{$\mathcal{P}(X)$} of $X$ can be viewed as a Boolean lattice. This is a poset by saying $A < B$ if $A \subsetneq B$.
187
-
188
- In general, there are many questions we can ask about a poset $\mathcal{P}$. For example, we may ask what is the largest possible of an anti-chain in $\mathcal{P}$. While this is quite hard in general, we may be able to produce answers if we impose some extra structure on our posets. One particularly useful notion is that of a \emph{graded poset}.
189
-
190
- \begin{defi}[Graded poset]\index{graded poset}
191
- We say $\mathcal{P} = (S, <)$ is a \emph{graded poset} if we can write $S$ as a disjoint union
192
- \[
193
- S = \coprod_{i = 0}^n S_i % union with dot on top
194
- \]
195
- such that
196
- \begin{itemize}
197
- \item $S_i$ is an anti-chain; and
198
- \item $x < y$ iff there exist elements $x = z_i < z_{i + 1} < \cdots < z_j = y$ such that $z_h \in S_h$.
199
- \end{itemize}
200
- \end{defi}
201
-
202
- \begin{eg}
203
- If $X$ is a set with $|X| = n$, then $\mathcal{P}(X)$ is a graded poset with $S_i = X^{(i)}$.
204
- \end{eg}
205
- If we want to obtain as large an anti-chain as possible, then we might try $X^{(i)}$ with $i = \lfloor \frac{n}{2}\rfloor$. But is this actually the largest possible? Or can we construct some funny-looking anti-chain that is even larger? Sperner says no.
206
-
207
- \begin{thm}[Sperner, 1928]
208
- For $|X| = n$, the maximal size of an antichain in $\mathcal{P}(X)$ is $\binom{n}{\lfloor n/2\rfloor}$, witnessed by $X^{(\lfloor n/2\rfloor)}$.
209
- \end{thm}
210
-
211
- \begin{proof}
212
- If $\mathcal{C}$ is a chain and $\mathcal{A}$ is an antichain, then $|\mathcal{A} \cap \mathcal{C}| \leq 1$. So it suffices to partition $\mathcal{P}(X)$ into
213
- \[
214
- m = \max_{k} \binom{n}{k} = \binom{n}{\lfloor n/2\rfloor} = \binom{n}{\lceil n/2 \rceil}
215
- \]
216
- many chains.
217
-
218
- We can do so using the injections constructed at the end of the previous section. For $i > \lfloor \frac{n}{2}\rfloor$, we can construct injections $f_i: X^{(i)} \to X^{(i - 1)}$ such that $f_i(A) \subseteq A$ for all $A$. By chaining these together, we get $m$ chains covering $X^{(\geq \lfloor n/2\rfloor)}$, each ending in $X^{(\lfloor \frac{n}{2}\rfloor)}$.
219
- \begin{center}
220
- \begin{tikzpicture}[yscale=0.5]
221
- \node [circ] at (0, 3) {};
222
-
223
- \fill (1.2, 0) ellipse (0.025 and 0.05); % because yscale
224
- \fill (1.1, -0.05) ellipse (0.025 and 0.05);
225
- \fill (0.93, 0.05) ellipse (0.025 and 0.05);
226
-
227
- \fill (-1.2, 0) ellipse (0.025 and 0.05);
228
- \fill (-1.1, -0.05) ellipse (0.025 and 0.05);
229
- \fill (-0.93, 0.05) ellipse (0.025 and 0.05);
230
-
231
- \draw (0, 0) ellipse (1.5 and 0.25);
232
-
233
- \draw [-latex', mred] (0, 1) -- +(0, -1);
234
- \draw [-latex', mblue] (0.2, 1.05) -- +(0, -1);
235
- \draw [-latex', mblue] (0.3, 0.95) -- +(0, -1);
236
- \draw [-latex', mblue] (-0.3, 1.05) -- +(0, -1);
237
- \draw [-latex', mblue] (-0.2, 0.95) -- +(0, -1);
238
-
239
- \draw [-latex'] (0.5, 1.02) -- +(0, -1);
240
- \draw [-latex'] (0.8, 0.97) -- +(0, -1);
241
- \draw [-latex'] (0.6, 1) -- +(0, -1);
242
- \draw [-latex'] (-0.6, 1) -- +(0, -1);
243
- \draw [-latex'] (-0.5, 1.04) -- +(0, -1);
244
- \draw [-latex'] (-0.8, 0.97) -- +(0, -1);
245
-
246
- \draw [fill=white, opacity=0.7] (0, 1) ellipse (1 and 0.2);
247
-
248
- \draw [-latex', mred] (0, 2) -- +(0, -1);
249
- \draw [-latex', mblue] (0.2, 2.05) -- +(0, -1);
250
- \draw [-latex', mblue] (0.3, 1.95) -- +(0, -1);
251
- \draw [-latex', mblue] (-0.3, 2.05) -- +(0, -1);
252
- \draw [-latex', mblue] (-0.2, 1.95) -- +(0, -1);
253
-
254
- \draw [fill=white, opacity=0.7] (0, 2) ellipse (0.5 and 0.15);
255
- \draw [-latex', mred] (0, 3) -- +(0, -1);
256
- \end{tikzpicture}
257
- \end{center}
258
-
259
- Similarly, we can partition $X^{(\leq \lfloor n/2\rfloor)}$ into $m$ chains with each chain ending in $X^{(\lfloor n/2\rfloor)}$. Then glue them together.
260
- \end{proof}
261
-
262
- Another way to prove this result is to provide an alternative measure on how large an antichain can be, and this gives a stronger result.
263
- \begin{thm}[LYM inequality]\index{LYM inequality}
264
- Let $\mathcal{A}$ be an antichain in $\mathcal{P}(X)$ with $|X| = n$. Then
265
- \[
266
- \sum_{r = 0}^n \frac{|\mathcal{A} \cap X^{(r)}|}{\binom{n}{r}} \leq 1.
267
- \]
268
- In particular, $|\mathcal{A}| \leq \max_r \binom{n}{r} = \binom{n}{\lfloor n/2\rfloor}$, as we already know.
269
- \end{thm}
270
-
271
- \begin{proof}
272
- A chain $C_0 \subseteq C_1 \subseteq \cdots \subseteq C_m$ is maximal if it has $n + 1$ elements. Moreover, there are $n!$ maximal chains, since we start with the empty set and then, given $C_i$, we produce $C_{i + 1}$ by picking one unused element and adding it to $C_i$.
273
-
274
- For every maximal chain $\mathcal{C}$, we have $|\mathcal{C} \cap \mathcal{A}| \leq 1$. Moreover, every set of $k$ elements appears in $k! (n - k)!$ maximal chains, by a similar counting argument as above. So
275
- \[
276
- \sum_{A \in \mathcal{A}} |A|! (n - |A|)! \leq n!.
277
- \]
278
- Then the result follows.
279
- \end{proof}
280
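The LYM sum is easy to evaluate for any explicit antichain. The sketch below is an illustrative aside; the antichain is an arbitrary example.
\begin{verbatim}
from math import comb

n = 6
# An antichain in P([6]): no member is contained in another.
antichain = [frozenset(s) for s in ({1, 2, 3}, {1, 4, 5}, {2, 4, 6}, {1, 6})]
assert not any(a < b for a in antichain for b in antichain)

# LYM sum: each A contributes 1 / binom(n, |A|).
lym = sum(1 / comb(n, len(A)) for A in antichain)
print(lym)   # about 0.2167, and certainly <= 1 as the theorem guarantees
\end{verbatim}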
-
281
- There are analogous results for posets more general than just $\mathcal{P}(X)$. To formulate these results, we must introduce the following new terminology.
282
- \begin{defi}[Shadow]\index{shadow}
283
- Given $A \subseteq S_i$, the \emph{shadow} at level $i - 1$ is
284
- \[
285
- \partial A = \{x \in S_{i - 1}: x < y\text{ for some }y \in A\}.
286
- \]
287
- \end{defi}
288
-
289
- \begin{defi}[Downward-expanding poset]\index{downward-expanding poset}
290
- A graded poset $P = (S, <)$ is said to be \emph{downward-expanding} if
291
- \[
292
- \frac{|\partial A|}{|S_{i - 1}|} \geq \frac{|A|}{|S_i|}
293
- \]
294
- for all $A \subseteq S_i$.
295
-
296
- We similarly define \term{upward-expanding}, and say a poset is \term{expanding} if it is upward or downward expanding.
297
- \end{defi}
298
-
299
- \begin{defi}[Weight]\index{weight}
300
- The \emph{weight} of a set $A \subseteq S$ is
301
- \[
302
- w(A) = \sum_{i = 0}^n \frac{|A \cap S_i|}{|S_i|}.
303
- \]
304
- \end{defi}
305
-
306
- The theorem is that the LYM inequality holds in general for any downward expanding posets.
307
- \begin{thm}
308
- If $P$ is downward expanding and $A$ is an anti-chain, then $w(A) \leq 1$. In particular, $|A| \leq \max_i |S_i|$.
309
-
310
- Since each $S_i$ is an anti-chain, the largest anti-chain has size $\max_i |S_i|$.
311
- \end{thm}
312
-
313
- \begin{proof}
314
- We write $A_j = A \cap S_j$, and define the \emph{span} of $A$ to be
315
- \[
316
- \spn A = \max_{A_j \not= \emptyset} j - \min_{A_i \not= \emptyset} i.
317
- \]
318
- We do induction on $\spn A$.
319
-
320
- If $\spn A = 0$, then we are done. Otherwise, let $h = \max_{A_j \not= \emptyset} j$, and set $B_{h - 1} = \partial A_h$. Then since $A$ is an anti-chain, we know $A_{h - 1} \cap B_{h - 1} = \emptyset$.
321
-
322
- We set $A' = A \setminus A_h \cup B_{h - 1}$. This is then another anti-chain, by the transitivity of $<$. We then have
323
- \[
324
- w(A) = w(A') + w(A_h) - w(B_{h - 1}) \leq w(A') \leq 1,
325
- \]
326
- where the first inequality uses the downward-expanding hypothesis and the second is the induction hypothesis.
327
- \end{proof}
328
-
329
- We may want to mimic our other proof of the fact that the largest size of an antichain in $\mathcal{P}(X)$ is $\binom{n}{\lfloor n/2\rfloor}$. This requires the notion of a \emph{regular poset}.
330
-
331
- \begin{defi}[Regular poset]\index{regular poset}\index{poset!regular}
332
- We say a graded poset $(S, <)$ is \emph{regular} if for each $i$, there exist $r_i, s_i$ such that if $x \in S_i$, then $x$ dominates $r_i$ elements at level $i - 1$, and is dominated by $s_i$ elements at level $i + 1$.
333
- \end{defi}
334
- %We observe that being regular implies being expanding.
335
- %Note that being regular implies being expanding. So we know
336
-
337
- \begin{prop}
338
- An anti-chain in a regular poset has weight $\leq 1$.
339
- \end{prop}
340
-
341
- \begin{proof}
342
- Let $M$ be the number of maximal chains of length $(n + 1)$, and for each $x \in S_k$, let $m(x)$ be the number of maximal chains through $x$. Then
343
- \[
344
- m(x) = \prod_{i = 1}^k r_i \prod_{i = k}^{n - 1} s_i.
345
- \]
346
- So if $x, y \in S_i$, then $m(x) = m(y)$.
347
-
348
- Now since every maximal chain passes through a unique element in $S_i$, for each $x \in S_i$, we have
349
- \[
350
- M = \sum_{x \in S_i} m(x) = |S_i| m(x).
351
- \]
352
- This gives the formula
353
- \[
354
- m(x) = \frac{M}{|S_i|}.
355
- \]
356
- Now let $A$ be an anti-chain. Then $A$ meets each chain in at most one element. So we have
357
- \[
358
- M = \sum_{\text{maximal chains}} 1 \geq \sum_{x \in A} m(x) = \sum_{i = 0}^n |A \cap S_i| \cdot \frac{M}{|S_i|}.
359
- \]
360
- So it follows that
361
- \[
362
- \sum \frac{|A \cap S_i|}{|S_i|} \leq 1.\qedhere
363
- \]
364
- \end{proof}
365
-
366
- %\subsection{Littlewood--Offord problem}
367
- %In the 1930s and 1940s, people were studying roots of random polynomials of the form
368
- %\[
369
- % \sum_{k = 0}^n \varepsilon_k x^k,
370
- %\]
371
- %where $\varepsilon_k = 0, 1$.
372
-
373
- Let's now turn to a different problem. Suppose $x_1, \ldots, x_n \in \C$, with each $|x_i| \geq 1$. Given $A \subseteq [n]$, we let
374
- \[
375
- x_A = \sum_{i \in A} x_i.
376
- \]
377
- We now seek the largest size of $\mathcal{A}$ such that $|x_A - x_B| < 1$ for all $A, B \in \mathcal{A}$. More precisely, we want to find the best choice of $x_1, \ldots, x_n$ and $\mathcal{A}$ so that $|\mathcal{A}|$ is as large as possible while satisfying the above condition.
378
-
379
- If we are really lazy, then we might just choose $x_i = 1$ for all $i$. By taking $\mathcal{A} = [n]^{(\lfloor n/2\rfloor)}$, we can obtain $|\mathcal{A}| = \binom{n}{\lfloor n/2\rfloor}$.
380
-
381
- Erd\"os noticed this is the best bound if we require the $x_i$ to be real.
382
-
383
- \begin{thm}[Erd\"os, 1945]
384
- Let $x_i$ be all real, $|x_i| \geq 1$. For $A \subseteq [n]$, let
385
- \[
386
- x_A = \sum_{i \in A} x_i.
387
- \]
388
- Let $\mathcal{A} \subseteq \mathcal{P}(n)$ be such that $|x_A - x_B| < 1$ for all $A, B \in \mathcal{A}$. Then $|\mathcal{A}| \leq \binom{n}{\lfloor n/2\rfloor}$.
389
- \end{thm}
390
-
391
- \begin{proof}
392
- We claim that we may assume $x_i \geq 1$ for all $i$. To see this, suppose we instead had $x_i = -2$, say. Replace $x_i$ with $2$, and replace each $A \in \mathcal{A}$ with $A \Delta \{i\}$. Then for every $A$, the new sum over $A \Delta \{i\}$ is exactly the old $x_A$ plus $2$, so all the differences are unchanged, and the new family has the same size as $\mathcal{A}$.
393
-
394
- But if we assume that $x_i \geq 1$ for all $i$, then we are done, since $\mathcal{A}$ must be an anti-chain, for if $A, B \in \mathcal{A}$ and $A \subsetneq B$, then $x_B - x_A = x_{B\setminus A} \geq 1$.
395
- \end{proof}
396
-
397
- Doing it for complex numbers is considerably harder. In 1970, Kleitman found a gorgeous proof for \emph{every} normed space. This involves the notion of a \emph{symmetric decomposition}. To motivate this, we first consider the notion of a symmetric chain.
398
-
399
- \begin{defi}[Symmetric chain]\index{symmetric chain}
400
- We say a chain $\mathcal{C} = \{C_i, C_{i + 1}, \ldots, C_{n - i}\}$ is \emph{symmetric} if $|C_j| = j$ for all $j$.
401
- \end{defi}
402
-
403
- \begin{thm}
404
- $\mathcal{P}(n)$ has a decomposition into symmetric chains.
405
- \end{thm}
406
-
407
- \begin{proof}
408
- We prove by induction. In the case $n = 1$, we simply have to take $\{\emptyset, \{1\}\}$.
409
-
410
- Now suppose $\mathcal{P}(n- 1)$ has a symmetric chain decomposition $\mathcal{C}_1 \cup \cdots \cup \mathcal{C}_t$. Given a symmetric chain
411
- \[
412
- \mathcal{C}_j = \{C_i, C_{i + 1}, \ldots, C_{n - 1 - i}\},
413
- \]
414
- we obtain two chains $\mathcal{C}_j^{(0)}, \mathcal{C}_j^{(1)}$ in $\mathcal{P}(n)$ by
415
- \begin{align*}
416
- \mathcal{C}_j^{(0)} &= \{C_i, C_{i + 1}, \ldots, C_{n - 1 - i}, C_{n - 1 - i} \cup \{n\}\}\\
417
- \mathcal{C}_j^{(1)} &= \{C_i \cup\{n\}, C_{i + 1} \cup \{n\}, \ldots, C_{n - 2 - i} \cup \{n\}\}.
418
- \end{align*}
419
- Note that if $|\mathcal{C}_j| = 1$, then $\mathcal{C}_j^{(1)} = \emptyset$, and we drop this. Under this convention, we note that every $A \in \mathcal{P}(n)$ appears in exactly one $\mathcal{C}_j^{(\varepsilon)}$, and so we are done.
420
- \end{proof}
421
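The recursive construction in the proof translates directly into code. The following sketch is purely illustrative: it builds a symmetric chain decomposition of $\mathcal{P}(n)$ for a small $n$ and checks that the number of chains with $n + 1 - 2i$ sets is $\binom{n}{i} - \binom{n}{i - 1}$, the profile used in the next paragraph.
\begin{verbatim}
from math import comb

def symmetric_chains(n):
    # Each chain is a list of sets increasing by one element at a time.
    if n == 1:
        return [[frozenset(), frozenset({1})]]
    chains = []
    for C in symmetric_chains(n - 1):
        chains.append(C + [C[-1] | {n}])               # the chain C^(0)
        if len(C) > 1:
            chains.append([A | {n} for A in C[:-1]])   # the chain C^(1)
    return chains

n = 5
chains = symmetric_chains(n)
assert sum(len(c) for c in chains) == 2 ** n   # total number of sets is 2^n
for i in range(n // 2 + 1):
    count = sum(1 for c in chains if len(c) == n + 1 - 2 * i)
    target = comb(n, i) - (comb(n, i - 1) if i > 0 else 0)
    print(i, count, target)                    # the two counts agree
\end{verbatim}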
-
422
- We are not going to actually need the notion of symmetric chains in our proof. What we need is the ``profile'' of a symmetric chain decomposition. By a simple counting argument, we see that for $0 \leq i \leq \frac{n}{2}$, the number of chains with $n + 1 - 2i$ sets is
423
- \[
424
- \ell(n, i) \equiv \binom{n}{i} - \binom{n}{i - 1}.
425
- \]
426
- \begin{thm}[Kleitman, 1970]
427
- Let $x_1, x_2, \ldots, x_n$ be vectors in a normed space with norm $\|x_i\| \geq 1$ for all $i$. For $A \in \mathcal{P}(n)$, we set
428
- \[
429
- x_A = \sum_{i \in A} x_i.
430
- \]
431
- Let $\mathcal{A} \subseteq \mathcal{P}(n)$ be such that $\|x_A - x_B\|< 1$ for all $A, B \in \mathcal{A}$. Then $|\mathcal{A}| \leq \binom{n}{\lfloor n/2\rfloor}$.
432
- \end{thm}
433
- This bound is indeed the best, since we can pick $x_i = x$ for some $\|x\| \geq 1$, and then we can pick $\mathcal{A} = [n]^{(\lfloor n/2\rfloor)}$.
434
-
435
- \begin{proof}
436
- Call $\mathcal{F} \subseteq \mathcal{P}(n)$ \emph{sparse} if $\|x_E - x_F\| \geq 1$ for all $E, F \in \mathcal{F}$, $E \not= F$. Note that if $\mathcal{F}$ is sparse, then $|\mathcal{F} \cap \mathcal{A}| \leq 1$. So if we can find a decomposition of $\mathcal{P}(n)$ into $\binom{n}{\lfloor n/2\rfloor}$ sparse sets, then we are done.
437
-
438
- We call a partition $\mathcal{P}(n) = \mathcal{F}_1 \cup \cdots \cup \mathcal{F}_t$ \emph{symmetric} if the number of families with $n + 1 - 2i$ sets is $\ell(n, i)$, i.e.\ the ``profile'' is that of a symmetric chain decomposition.
439
-
440
- \begin{claim}
441
- $\mathcal{P}(n)$ has a symmetric decomposition into sparse families.
442
- \end{claim}
443
-
444
- We again induct on $n$. When $n = 1$, we can take $\{\emptyset, \{1\}\}$. Now suppose $\Delta_{n - 1}$ is a symmetric decomposition of $\mathcal{P}(n - 1)$ as $\mathcal{F}_1 \cup \cdots \cup \mathcal{F}_t$.
445
-
446
- Given $\mathcal{F}_j$, we construct $\mathcal{F}_j^{(0)}$ and $\mathcal{F}_j^{(1)}$ ``as before''. We pick some $D \in \mathcal{F}_j$, to be decided later, and we take
447
- \begin{align*}
448
- \mathcal{F}_j^{(0)} &= \mathcal{F}_j \cup\{ D \cup \{n\}\}\\
449
- \mathcal{F}_j^{(1)} &= \{ E \cup \{n\}: E \in \mathcal{F}_j \setminus \{D\}\}.
450
- \end{align*}
451
- The resulting set is certainly still symmetric. The question is whether it is sparse, and this is where the choice of $D$ comes in. The collection $\mathcal{F}_j^{(1)}$ is certainly still sparse, and we must pick a $D$ such that $\mathcal{F}_j^{(0)}$ is sparse.
452
-
453
- To do so, we use Hahn--Banach to obtain a linear functional $f$ such that $\|f\| = 1$ and $f(x_n) = \|x_n\| \geq 1$. We can then pick $D$ to maximize $f(x_D)$. Then we check that if $E \in \mathcal{F}_j$, then
454
- \[
455
- f(x_{D \cup \{n\}} - x_E) = f(x_D) - f(x_E) + f(x_n).
456
- \]
457
- By assumption, $f(x_n) \geq 1$ and $f(x_D) \geq f(x_E)$. So this is $\geq 1$. Since $\|f\| = 1$, it follows that $\|x_{D \cup \{n\}} - x_E\| \geq 1$.
458
- \end{proof}
459
-
460
- \section{The Kruskal--Katona theorem}
461
- For $\mathcal{A} \subseteq X^{(r)}$, recall we defined the lower shadow to be
462
- \[
463
- \partial \mathcal{A} = \{B \in X^{(r - 1)} : B \subseteq A \text{ for some } A \in \mathcal{A}\}.
464
- \]
465
- The question we wish to understand is how small we can make $\partial \mathcal{A}$, relative to $\mathcal{A}$. Crudely, we can bound the size by
466
- \[
467
- |\partial \mathcal{A}| \geq |\mathcal{A}| \frac{\binom{n}{r - 1}}{\binom{n}{r}} = \frac{r}{n - r + 1} |\mathcal{A}|.
468
- \]
469
- But surely we can do better than this. To do so, one reasonable strategy is to first produce some choice of $\mathcal{A}$ we think is optimal, and see how we can prove that it is indeed optimal.
470
-
471
- To do so, let's look at some examples.
472
- \begin{eg}
473
- Take $n = 6$ and $r = 3$. We pick
474
- \[
475
- \mathcal{A} = \{123, 456, 124, 256\}.
476
- \]
477
- Then we have
478
- \[
479
- \partial \mathcal{A} = \{12, 13, 23, 45, 46, 56, 14, 24, 25, 26\},
480
- \]
481
- and this has $10$ elements.
482
-
483
- But if we instead had
484
- \[
485
- \mathcal{A} = \{123, 124, 134, 234\},
486
- \]
487
- then
488
- \[
489
- \partial \mathcal{A} = \{12, 13, 14, 23, 24, 34\},
490
- \]
491
- and this only has $6$ elements, and this is much better.
492
- \end{eg}
493
- Intuitively, the second choice of $\mathcal{A}$ is better because the terms are ``bunched'' together.
494
-
495
- More generally, we would expect that if we have $|\mathcal{A}| = \binom{k}{r}$, then the best choice should be $\mathcal{A} = [k]^{(r)}$, with $|\partial A| = \binom{k}{r - 1}$. For other choices of $\mathcal{A}$, perhaps a reasonable strategy is to find the largest $k$ such that $\binom{k}{r} < |\mathcal{A}|$, and then take $\mathcal{A}$ to be $[k]^{(r)}$ plus some elements. To give a concrete description of which extra elements to pick, our strategy is to define a total order on $[n]^{(r)}$, and say we should pick the initial segment of length $|\mathcal{A}|$.
496
-
497
- This suggests the following proof strategy:
498
- \begin{enumerate}
499
- \item Come up with a total order on $[n]^{(r)}$, or even $\N^{(r)}$ such that $[k]^{(r)}$ are initial segments for all $k$.
500
- \item Construct some ``compression'' operators\index{compression operator} $\mathcal{P}(\N^{(r)}) \to \mathcal{P}(\N^{(r)})$ that pushes each element down the ordering without increasing the $|\partial \mathcal{A}|$.
501
- \item Show that the only subsets of $\N^{(r)}$ that are fixed by the compression operators are the initial segments.
502
- \end{enumerate}
503
- There are two natural orders one can put on $[n]^{(r)}$:
504
- \begin{itemize}
505
- \item lex\index{lex order}: We say $A < B$ if $\min A \Delta B \in A$.
506
- \item colex\index{colex order}: We say $A < B$ if $\max A \Delta B \in B$.
507
- \end{itemize}
508
- \begin{eg}
509
- For $r = 3$, the elements of $X^{(3)}$ in colex order are
510
- \[
511
- 123, 124, 134, 234, 125, 135, 235, 145, 245, 345, 126,\ldots
512
- \]
513
- \end{eg}
514
-
515
- In fact, colex is an order on $\N^{(r)}$, and we see that the initial segment with $\binom{n}{r}$ elements is exactly $[n]^{(r)}$. So this is a good start.
516
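Concretely, colex on $r$-sets is the order induced by the key $\sum_{i \in A} 2^i$ (the same quantity that reappears in a compression argument later in this section), so initial segments are easy to list by machine. A small illustrative Python sketch:
\begin{verbatim}
from itertools import combinations

def colex_key(A):
    # A < B in colex iff the sum of 2^i over A is smaller than over B.
    return sum(2 ** i for i in A)

X3 = sorted(combinations(range(1, 7), 3), key=colex_key)
print([''.join(map(str, A)) for A in X3][:11])
# ['123', '124', '134', '234', '125', '135', '235', '145', '245', '345', '126']
\end{verbatim}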
-
517
- If we believe that colex is indeed the right order to use, then we ought to construct some compression operators. For $i \not= j$, we define the $(i, j)$-compression as follows: for a set $A \in X^{(r)}$, we define
518
- \[
519
- C_{ij}(A) =
520
- \begin{cases}
521
- (A \setminus \{j\}) \cup \{i\} & j \in A, i \not\in A\\
522
- A & \text{otherwise}
523
- \end{cases}
524
- \]
525
- For a set system, we define
526
- \[
527
- C_{ij}(\mathcal{A}) = \{C_{ij}(A): A \in \mathcal{A}\} \cup \{A \in \mathcal{A}:C_{ij}(A) \in \mathcal{A}\}
528
- \]
529
- We can picture our universe of sets as follows:
530
- \begin{center}
531
- \begin{tikzpicture}
532
- \foreach \x in {0,1,2} {
533
- \node [circ] at (\x, 0) {};
534
- \node [circ] at (\x, -1) {};
535
- \draw [->] (\x, -0.2) -- (\x, -0.8) {};
536
- }
537
- \node [left] at (0, 0) {\small$B \cup \{j\}$};
538
- \node [left] at (0, -1) {\small$B \cup \{i\}$};
539
- \foreach \y in {3, 4, 5} {
540
- \node [circ] at (\y, -0.5) {};
541
- }
542
- \end{tikzpicture}
543
- \end{center}
544
- The set system $\mathcal{A}$ is some subset of all these points, and what we are doing is pushing everything down whenever possible.
545
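- For example, if $i = 1$ and $j = 2$, then
- \[
-   C_{12}(\{23, 24\}) = \{13, 14\},
- \]
- since both sets get pushed down, while
- \[
-   C_{12}(\{13, 23\}) = \{13, 23\},
- \]
- since $23$ wants to move to $13$, which is already present, so it stays where it is.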
-
546
- It is clear that we have $|C_{ij}(\mathcal{A})| = |\mathcal{A}|$. We further observe that
547
-
548
- \begin{lemma}
549
- We have
550
- \[
551
- \partial C_{ij}(\mathcal{A}) \subseteq C_{ij}(\partial \mathcal{A}).
552
- \]
553
- In particular, $|\partial C_{ij}(\mathcal{A})| \leq |\partial \mathcal{A}|$.\fakeqed
554
- \end{lemma}
555
-
556
- Given $\mathcal{A} \subseteq X^{(r)}$, we say $\mathcal{A}$ is left-compressed if $C_{ij}(\mathcal{A}) = \mathcal{A}$ for all $i < j$. Is this good enough?
557
-
558
- Of course initial segments are left-compressed. However, it turns out the converse is not true.
559
-
560
- \begin{eg}
561
- $\{123, 124, 125, 126\}$ is left-compressed, but not an initial segment.
562
- \end{eg}
563
-
564
- So we want to come up with ``more powerful'' compressions. For $U, V \in X^{(s)}$ with $U \cap V = \emptyset$, we define a $(U, V)$-compression as follows: for $A \subseteq X$, we define
565
- \[
566
- C_{UV}(A) =
567
- \begin{cases}
568
- (A \setminus V) \cup U & A \cap (U \cup V) = V\\
569
- A & \text{otherwise}
570
- \end{cases}
571
- \]
572
- Again, for $\mathcal{A} \subseteq X^{(r)}$, we can define
573
- \[
574
- C_{UV}(\mathcal{A}) = \{C_{UV}(A) : A \in \mathcal{A}\} \cup \{A \in \mathcal{A}: C_{UV}(A) \in \mathcal{A}\}.
575
- \]
576
- Again, $\mathcal{A}$ is $(U, V)$-compressed if $C_{UV}(\mathcal{A}) = \mathcal{A}$.
577
-
578
- This time the behaviour of the compression is more delicate.
579
- \begin{lemma}
580
- Let $\mathcal{A} \subseteq X^{(r)}$ and $U, V \in X^{(s)}$, $U \cap V = \emptyset$. Suppose for all $u \in U$, there exists $v \in V$ such that $\mathcal{A}$ is $(U \setminus \{u\}, V \setminus \{v\})$-compressed. Then
581
- \[
582
- \partial C_{UV} (\mathcal{A}) \subseteq C_{UV}(\partial \mathcal{A}).\tag*{$\square$}
583
- \]
584
- \end{lemma}
585
-
586
- \begin{lemma}
587
- $\mathcal{A} \subseteq X^{(r)}$ is an initial segment of $X^{(r)}$ in colex if and only if it is $(U, V)$-compressed for all $U, V$ disjoint with $|U| = |V|$ and $\max V > \max U$.
588
- \end{lemma}
589
-
590
- \begin{proof}
591
- $\Rightarrow$ is clear. Suppose $\mathcal{A}$ is $(U, V)$ compressed for all such $U, V$. If $\mathcal{A}$ is not an initial segment, then there exists $B \in \mathcal{A}$ and $C \not \in \mathcal{A}$ such that $C < B$. Then $\mathcal{A}$ is not $(C \setminus B, B \setminus C)$-compressed. A contradiction.
592
- \end{proof}
593
-
594
- \begin{lemma}
595
- Given $\mathcal{A} \subseteq X^{(r)}$, there exists $\mathcal{B} \subseteq X^{(r)}$ such that $\mathcal{B}$ is $(U, V)$-compressed for all $|U| = |V|$, $U \cap V= \emptyset$, $\max V > \max U$, and moreover
596
- \[
597
- |\mathcal{B}| = |\mathcal{A}|, |\partial \mathcal{B}| \leq |\partial \mathcal{A}|.\tag{$*$}
598
- \]
599
- \end{lemma}
600
-
601
- \begin{proof}
602
- Let $\mathcal{B}$ be such that
603
- \[
604
- \sum_{B \in \mathcal{B}} \sum_{i \in B} 2^i
605
- \]
606
- is minimal among those $\mathcal{B}$'s that satisfy $(*)$. We claim that this $\mathcal{B}$ will do. Indeed, if there exists $(U, V)$ such that $|U| = |V|$, $\max V > \max U$ and $C_{UV}(\mathcal{B}) \not= \mathcal{B}$, then pick such a pair with $|U|$ minimal. Then apply a $(U, V)$-compression, which is valid since given any $u \in U$ we can pick any $v \in V$ that is not $\max V$ to satisfy the requirements of the previous lemma. This decreases the sum, which is a contradiction.
607
- \end{proof}
608
-
609
- From these, we conclude that
610
- \begin{thm}[Kruskal 1963, Katona 1968]
611
- Let $\mathcal{A} \subseteq X^{(r)}$, and let $\mathcal{C} \subseteq X^{(r)}$ be the initial segment with $|\mathcal{C}| = |\mathcal{A}|$. Then
612
- \[
613
- |\partial \mathcal{A}| \geq |\partial \mathcal{C}|.
614
- \]
615
- \end{thm}
616
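- For instance, in the opening example with $n = 6$ and $r = 3$, we have $|\mathcal{A}| = 4 = \binom{4}{3}$, so $\mathcal{C} = [4]^{(3)}$ and the theorem says
- \[
-   |\partial \mathcal{A}| \geq |\partial \mathcal{C}| = \binom{4}{2} = 6,
- \]
- which is exactly what the ``bunched'' choice $\{123, 124, 134, 234\}$ achieved.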
-
617
- We can now define the \term{shadow function}
618
- \[
619
- \partial^{(r)}(m) = \min\{|\partial \mathcal{A}| : \mathcal{A} \subseteq X^{(r)}, |\mathcal{A}| = m\}.
620
- \]
621
- This does not depend on the size of $X$ as long as $X$ is large enough to accommodate $m$ sets, i.e.\ $\binom{n}{r} \geq m$. It would be nice if we can concretely understand this function. So let's try to produce some initial segments.
622
-
623
- Essentially by definition, an initial segment is uniquely determined by the last element. So let's look at some examples.
624
- \begin{eg}
625
- Take $r = 4$. What is the size of the initial segment ending in $3479$? We note that anything whose largest element is less than $9$ is less than $3479$, and there are $\binom{8}{4}$ such elements. If the largest element is $9$, then we are still fine as long as the second largest element is less than $7$, and there are $\binom{6}{3}$ such elements. Continuing, we find that there are
626
- \[
627
- \binom{8}{4} + \binom{6}{3} + \binom{4}{2}
628
- \]
629
- such elements.
630
- \end{eg}
631
-
632
- Given $m_r > m_{r - 1} > \cdots > m_s \geq s$, we let $\mathcal{B}^{(r)}(m_r, m_{r - 1}, \ldots, m_s)$ be the initial segment ending in the element
633
- \[
634
- m_r + 1, m_{r - 1} + 1, \ldots, m_{s + 1} + 1, m_s, m_s - 1, m_s - 2, \ldots, m_s - (s - 1).
635
- \]
636
- This consists of the sets $\{a_1 < a_2 < \cdots < a_r\}$ such that there exists $j \in [s, r]$ with $a_i = m_i + 1$ for $i > j$, and $a_j \leq m_j$.
637
-
638
- To construct an element in $\mathcal{B}^{(r)}(m_r, \ldots, m_s)$, we need to first pick a $j$, and then select $j$ elements that are $\leq m_j$. Thus, we find that
639
- \[
640
- |\mathcal{B}^{(r)}(m_r, \ldots, m_s)| = \sum_{j = s}^r \binom{m_j}{j} = b^{(r)} (m_r, \ldots, m_s).
641
- \]
642
- We see that this $\mathcal{B}^{(r)}$ is indeed the initial segment in the colex order ending in that element. So we know that for all $m \in \N$, there is a unique sequence $m_r > m_{r - 1} > \cdots > m_s \geq s$ such that $m = \sum_{j = s}^r \binom{m_j}{j}$.
643
-
644
- It is also not difficult to find the shadow of this set. After a bit of thinking, we see that it is given by
645
- \[
646
- \mathcal{B}^{(r - 1)} (m_r, \ldots, m_s).
647
- \]
648
- Thus, we find that
649
- \[
650
- \partial^{(r)}\left( \sum_{i = s}^r \binom{m_i}{i}\right) = \sum_{i = s}^r \binom{m_i}{i - 1},
651
- \]
652
- and moreover every $m$ can be expressed in the form $\sum_{i = s}^r \binom{m_i}{i}$ for some unique choices of $m_i$.
653
-
654
- In particular, we have
655
- \[
656
- \partial^{(r)}\left(\binom{n}{r}\right) = \binom{n}{r - 1}.
657
- \]
658
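- To see the formula in action on the earlier example: $m = \binom{8}{4} + \binom{6}{3} + \binom{4}{2} = 96$, and so
- \[
-   \partial^{(4)}(96) = \binom{8}{3} + \binom{6}{2} + \binom{4}{1} = 56 + 15 + 4 = 75.
- \]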
- Since it might be slightly annoying to write $m$ in the form $\sum_{i = s}^r \binom{m_i}{i}$, Lov\'asz provided another slightly more convenient bound.
659
- \begin{thm}[Lov\'asz, 1979]
660
- If $\mathcal{A} \subseteq X^{(r)}$ with $|\mathcal{A}| = \binom{x}{r}$ for $x \geq r, x \in \R$, then
661
- \[
662
- |\partial \mathcal{A}| \geq \binom{x}{r - 1}.
663
- \]
664
- This is best possible if $x$ is an integer.
665
- \end{thm}
666
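- To get a feel for the statement, take $r = 2$, so that $\mathcal{A}$ is a graph with $\binom{x}{2} = \frac{x(x - 1)}{2}$ edges. The theorem then says that the graph has at least
- \[
-   \binom{x}{1} = x
- \]
- non-isolated vertices, which is sharp for a complete graph $K_x$ (when $x$ is an integer).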
-
667
- \begin{proof}
668
- Let
669
- \begin{align*}
670
- \mathcal{A}_0 &= \{A \in \mathcal{A}: 1 \not \in A\}\\
671
- \mathcal{A}_1 &= \{A \in \mathcal{A}: 1 \in A\}.
672
- \end{align*}
673
- For convenience, we write
674
- \[
675
- \mathcal{A}_1 - 1 = \{A \setminus \{1\}: A \in \mathcal{A}_1\}.
676
- \]
677
- We may assume $\mathcal{A}$ is $(i, j)$-compressed for all $i < j$. We induct on $r$ and then on $|\mathcal{A}|$. We have
678
- \[
679
- |\mathcal{A}_0| = |\mathcal{A}| - |\mathcal{A}_1|.
680
- \]
681
- We note that $\mathcal{A}_1$ is non-empty, as $\mathcal{A}$ is left-compressed. So $|\mathcal{A}_0| < |\mathcal{A}|$.
682
-
683
- If $r = 1$ and $|\mathcal{A}| = 1$ then there is nothing to do.
684
-
685
- Now observe that $\partial \mathcal{A}_0 \subseteq \mathcal{A}_1 - 1$: if $A \in \mathcal{A}_0$ (so $1 \not \in A$), and $B \subseteq A$ is such that $|A \setminus B| = 1$, then $B \cup \{1\} \in \mathcal{A}_1$ since $\mathcal{A}$ is left-compressed. So it follows that
686
- \[
687
- |\partial \mathcal{A}_0| \leq |\mathcal{A}_1|.
688
- \]
689
- Suppose $|\mathcal{A}_1| < \binom{x - 1}{r - 1}$. Then
690
- \[
691
- |\mathcal{A}_0| > \binom{x}{r} - \binom{x - 1}{r - 1} = \binom{x - 1}{r}.
692
- \]
693
- Therefore by induction, we have
694
- \[
695
- |\partial \mathcal{A}_0| > \binom{x - 1}{r - 1}.
696
- \]
697
- This is a contradiction, since $|\partial \mathcal{A}_0| \leq |\mathcal{A}_1|$. Hence $|\mathcal{A}_1| \geq \binom{x - 1}{r - 1}$. Hence we are done, since
698
- \[
699
- |\partial \mathcal{A}| \geq |\partial \mathcal{A}_1| = |\mathcal{A}_1| + |\partial (\mathcal{A}_1 - 1)| \geq \binom{x - 1}{r - 1} + \binom{x - 1}{r - 2} = \binom{x}{r - 1}.\qedhere
700
- \]
701
- \end{proof}
702
-
703
- %$r = 2$ is a non-starter. Given $m$ edges on $[n]$, at least how many vertices do they have altogether? If $m > \binom{k}{2}$, then $|\mathrm{shadow}| \geq k + 1$. Thus, if
704
- %\[
705
- % \binom{k - 1}{2} < m \leq \binom{k}{2},
706
- %\]
707
- %then for
708
- %\[
709
- % \mathcal{A} \subseteq X^{(2)} = [n]^{(2)}
710
- %\]
711
- %with $|\mathcal{A}| = m$, then $|\partial A| \geq k$. This is the best possible.
712
- %
713
- %We hope that if $\mathcal{A} \subseteq X^{(r)}$ and $\mathcal{C}$ is the initial segment of $X^{(r)}$ in colex with $|\mathcal{C}| = |\mathcal{A}|$, then
714
- %\[
715
- % |\partial \mathcal{A}| \geq |\partial \mathcal{C}|.
716
- %\]
717
- %
718
- %
719
-
720
- \section{Isoperimetric inequalities}
721
- We are now going to ask a question similar to the one answered by Kruskal--Katona. Kruskal--Katona answered the question of how small can $\partial \mathcal{A}$ be among all $\mathcal{A} \subseteq X^{(r)}$ of fixed size. Clearly, we obtain the same answer if we sought to minimize the upper shadow instead of the lower. But what happens if we want to minimize both the upper shadow and the lower shadow? Or, more generally, if we allow $\mathcal{A} \subseteq \mathcal{P}(X)$ to contain sets of different sizes, how small can the set of ``neighbours'' of $\mathcal{A}$ be?
722
-
723
- \begin{defi}[Boundary]\index{boundary}
724
- Let $G$ be a graph and $A \subseteq V(G)$. Then the \emph{boundary} $b(A)$ is the set of all $x \in V(G)$ such that $x \not \in A$ but $x$ is adjacent to some vertex in $A$.
725
- \end{defi}
726
-
727
- \begin{eg}
728
- In the following graph
729
- \begin{center}
730
- \begin{tikzpicture}
731
-
732
- \draw (0, 0) rectangle (1, 1);
733
- \draw (1, 1) -- (2.5, 1) -- (3.07736, 0) -- (1.92264, 0) -- (2.5, 1);
734
-
735
- \node [circ, mgreen] at (0, 0) {};
736
- \node [circ, mgreen] at (1, 0) {};
737
- \node [circ, mgreen] at (1, 1) {};
738
- \node [circ, red] at (0, 1) {};
739
- \node [circ, red] (a) at (2.5, 1) {};
740
-
741
- \end{tikzpicture}
742
- \end{center}
743
- the boundary of the green vertices is the red vertices.
744
- \end{eg}
745
-
746
- An \term{isoperimetric inequality} on $G$ is an inequality of the form
747
- \[
748
- |b(A)| \geq f(|A|)
749
- \]
750
- for all $A \subseteq G$. Of course, we could set $f \equiv 0$, but we would like to do better than that.
751
-
752
- The ``continuous version'' of this problem is well-known. For example, in a plane, given a fixed area, the perimeter of the area is minimized if we pick the area to be a disc. Similarly, among subsets of $\R^3$ of a given volume, the solid sphere has the smallest surface area. Slightly more exotically, among subsets of $S^2$ of given area, the circular cap has smallest perimeter.
753
-
754
- Before we proceed, we note the definition of a \emph{neighbourhood}:
755
- \begin{defi}[Neighbourhood]\index{neighbourhood}
756
- Let $G$ be a graph and $A \subseteq V(G)$. Then the \emph{neighbourhood} of $A$ is $N(A) = A \cup b(A)$.
757
- \end{defi}
758
- Of course, $|b(A)| = |N(A)| - |A|$, and it is often convenient to express and prove our isoperimetric inequalities in terms of the neighbourhood instead.
759
-
760
- If we look at our continuous cases, then we observe that all our optimal figures are balls, i.e.\ they consist of all the points a distance at most $r$ from a point, for some $r$ and single point. We would hope that this pattern generalizes.
761
-
762
- Of course, it would be a bit ambitious to hope that balls are optimal for all graphs. However, we can at least show that it is true for the graphs we care about, namely graphs obtained from power sets.
763
-
764
- \begin{defi}[Discrete cube]\index{discrete cube}
765
- Given a set $X$, we turn $\P(X)$ into a graph as follows: join $x$ to $y$ if $|x \Delta y| = 1$, i.e.\ if $x = y \cup \{a\}$ for some $a \not \in y$, or vice versa.
766
-
767
- This is the \emph{discrete cube} $Q_n$, where $n = |X|$.
768
- \end{defi}
769
-
770
- \begin{eg}
771
- $Q_3$ looks like
772
- \begin{center}
773
- \begin{tikzpicture}
774
- \node (123) at (0, 0) {123};
775
-
776
- \node (13) at (0, -1) {13};
777
- \node (12) at (-1, -1) {12};
778
- \node (23) at (1, -1) {23};
779
-
780
- \node (2) at (0, -2) {2};
781
- \node (1) at (-1, -2) {1};
782
- \node (3) at (1, -2) {3};
783
-
784
- \node (0) at (0, -3) {$\emptyset$};
785
-
786
- \draw (0) -- (1) -- (12) -- (123);
787
- \draw (0) -- (3) -- (23) -- (123);
788
- \draw (0) -- (2) -- (12);
789
- \draw (2) -- (23);
790
- \draw (1) -- (13) -- (123);
791
- \draw (3) -- (13);
792
- \end{tikzpicture}
793
- \end{center}
794
- \end{eg}
795
- This looks like a cube! Indeed, if we identify each $x \in Q_n$ with the 0-1 sequence of length $n$ (e.g.\ $13 \mapsto 101000\cdots 0$), or, in other words, its indicator function, then $Q_n$ is naturally identified with the unit cube in $\R^n$.
796
- \begin{center}
797
- \begin{tikzpicture}[scale=1.5]
798
- \node (0) at (0, 0) {\small$\emptyset$};
799
- \node (1) at (1, 0) {\small$1$};
800
- \node (3) at (0, 1) {\small$3$};
801
- \node (2) at (0.4, 0.4) {\small$2$};
802
- \node (12) at (1.4, 0.4) {\small$12$};
803
- \node (13) at (1, 1) {\small$13$};
804
- \node (23) at (0.4, 1.4) {\small$23$};
805
- \node (123) at (1.4, 1.4) {\small$123$};
806
-
807
- \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
808
- \draw (3) -- (13) -- (1);
809
- \draw (13) -- (123);
810
-
811
- \draw [dashed] (0) -- (2) -- (12);
812
- \draw [dashed] (2) -- (23);
813
-
814
- \end{tikzpicture}
815
- \end{center}
816
- Note that in this picture, the topmost layer is the points that do have $3$, and the bottom layer consists of those that do not have a $3$, and we can make similar statements for the other directions.
817
-
818
- \begin{eg}
819
- Take $Q_3$, and try to find a set $A$ of size $4$ that has minimum boundary. There are two things we might try --- we can take a slice, or we can take a ball. In this case, we see the ball is the best.
820
- \end{eg}
821
- We can do more examples, and it appears that the ball $X^{(\leq r)}$ is the best all the time. So that might be a reasonable thing to try to prove. But what if we have $|A|$ such that $|X^{(\leq r)}| < |A| < |X^{(\leq r + 1)}|$?
822
-
823
- It is natural to think that we should pick an $A$ with $X^{(\leq r)} \subseteq A \subseteq X^{(\leq r + 1)}$, so we set $A = X^{(\leq r)} \cup B$, where $B \subseteq X^{(r + 1)}$. Such an $A$ is known as a \term{Hamming ball}\index{ball!Hamming}.
824
-
825
- What $B$ should we pick? Observe that
826
- \[
827
- N(A) = X^{(\leq r + 1)} \cup \partial^+ B.
828
- \]
829
- So we want to pick $B$ to minimize the \emph{upper} shadow. So by Kruskal--Katona, we know we should pick $B$ to be the initial segment in the lex order.
830
-
831
- Thus, if we are told to pick $1000$ points to minimize the boundary, we go up in levels, and in each level, we go up in lex.
832
- \begin{defi}[Simplicial ordering]\index{simplicial ordering}
833
- The \emph{simplicial ordering} on $Q_n$ is defined by $x < y$ if either $|x| < |y|$, or $|x| = |y|$ and $x < y$ in lex.
834
- \end{defi}
835
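- For example, the simplicial order on $Q_3$ is
- \[
-   \emptyset < 1 < 2 < 3 < 12 < 13 < 23 < 123.
- \]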
- Our aim is to show that the initial segments of the simplicial order minimize the neighbourhood. Similar to Kruskal--Katona, a reasonable strategy would be to prove it by compression.
836
-
837
- For $A \subseteq Q_n$, and $1 \leq i \leq n$, the $i$-sections of $A$ are $A_+^{(i)}, A_-^{(i)} \subseteq \P(X\setminus \{i\})$ defined by
838
- \begin{align*}
839
- A_-^{(i)} &= \{x \in A: i \not\in x\}\\
840
- A_+^{(i)} &= \{x \setminus \{i\}: x \in A, i \in x\}.
841
- \end{align*}
842
- These are the top and bottom layers in the $i$ direction.
843
-
844
- The $i$-compression (or \term{co-dimension $1$ compression}) of $A$ is $C_i(A)$, defined by
845
- \begin{align*}
846
- C_i(A)_+ &= \text{first $|A_+|$ elements of $\P(X \setminus \{i\})$ in simplicial}\\
847
- C_i(A)_- &= \text{first $|A_-|$ elements of $\P(X \setminus \{i\})$ in simplicial}
848
- \end{align*}
849
-
850
- \begin{eg}
851
- Suppose we work in $Q_4$, where the original set is
852
- \begin{center}
853
- \begin{tikzpicture}
854
- \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
855
- \draw (0, 1) -- (1, 1) -- (1, 0);
856
- \draw (1, 1) -- (1.4, 1.2);
857
- \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
858
- \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
859
-
860
- \node [circ] at (0.4, 0.2) {};
861
- \node [circ] at (0, 1) {};
862
- \node [circ] at (1, 1) {};
863
- \node [circ] at (1.4, 1.2) {};
864
- \begin{scope}[shift={(2, 0)}]
865
- \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
866
- \draw (0, 1) -- (1, 1) -- (1, 0);
867
- \draw (1, 1) -- (1.4, 1.2);
868
- \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
869
- \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
870
-
871
- \node [circ] at (1, 1) {};
872
- \node [circ] at (0, 1) {};
873
- \node [circ] at (0.4, 1.2) {};
874
- \node [circ] at (1.4, 1.2) {};
875
- \node [circ] at (1.4, 0.2) {};
876
- \end{scope}
877
-
878
- \draw [->] (0.7, -0.5) -- (2.7, -0.5) node [pos=0.5, below] {$i$};
879
- \end{tikzpicture}
880
- \end{center}
881
- The resulting set is then
882
- \begin{center}
883
- \begin{tikzpicture}
884
- \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
885
- \draw (0, 1) -- (1, 1) -- (1, 0);
886
- \draw (1, 1) -- (1.4, 1.2);
887
- \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
888
- \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
889
-
890
- \node [circ] at (0, 0) {};
891
- \node [circ] at (0, 1) {};
892
- \node [circ] at (1, 0) {};
893
- \node [circ] at (0.4, 0.2) {};
894
- \begin{scope}[shift={(2, 0)}]
895
- \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
896
- \draw (0, 1) -- (1, 1) -- (1, 0);
897
- \draw (1, 1) -- (1.4, 1.2);
898
- \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
899
- \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
900
-
901
- \node [circ] at (0, 0) {};
902
- \node [circ] at (0, 1) {};
903
- \node [circ] at (1, 0) {};
904
- \node [circ] at (0.4, 0.2) {};
905
- \node [circ] at (1.4, 0.2) {};
906
- \end{scope}
907
-
908
- \draw [->] (0.7, -0.5) -- (2.7, -0.5) node [pos=0.5, below] {$i$};
909
- \end{tikzpicture}
910
- \end{center}
911
- \end{eg}
912
- Clearly, we have $|C_i(A)| = |A|$, and $C_i(A)$ ``looks more like'' an initial segment in simplicial ordering than $A$ did.
913
-
914
- We say $A$ is \emph{$i$-compressed} if $C_i(A) = A$.
915
-
916
- \begin{lemma}
917
- For $A \subseteq Q_n$, we have $|N(C_i(A))| \leq |N(A)|$.
918
- \end{lemma}
919
-
920
- \begin{proof}
921
- We have
922
- \[
923
- |N(A)| = |N(A_+) \cup A_-| + |N(A_-) \cup A_+|
924
- \]
925
- Take $B = C_i(A)$. Then
926
- \begin{align*}
927
- |N(B)| &= |N(B_+) \cup B_-| + |N(B_-) \cup B_+|\\
928
- &= \max \{|N(B_+)|, |B_-|\} + \max \{|N(B_-)|, |B_+|\}\\
929
- &\leq \max \{|N(A_+)|, |A_-|\} + \max \{|N(A_-)|, |A_+|\}\\
930
- &\leq |N(A_+) \cup A_-| + |N(A_-) \cup A_+|\\
931
- &= |N(A)|\qedhere
932
- \end{align*}
933
- \end{proof}
934
-
935
- Since each compression moves us down in the simplicial order, we can keep applying compressions, and show that
936
- \begin{lemma}
937
- For any $A \subseteq Q_n$, there is a compressed set $B \subseteq Q_n$ such that
938
- \[
939
- |B| = |A|,\quad |N(B)| \leq |N(A)|.
940
- \]
941
- \end{lemma}
942
-
943
- Are we done? Does being compressed imply being an initial segment? No! For $n = 3$, we can take $\{\emptyset, 1, 2, 12\}$, which is obviously compressed, but is not an initial segment. To obtain the actual initial segment, we should replace $12$ with $3$.
944
- \begin{center}
945
- \begin{tikzpicture}[scale=1.5]
946
- \node [mred] (0) at (0, 0) {\small$\mathbf{\emptyset}$};
947
- \node [mred] (1) at (1, 0) {\small$\mathbf{1}$};
948
- \node (3) at (0, 1) {\small$3$};
949
- \node [mred] (2) at (0.4, 0.4) {\small$\mathbf{2}$};
950
- \node [mred] (12) at (1.4, 0.4) {\small$\mathbf{12}$};
951
- \node (13) at (1, 1) {\small$13$};
952
- \node (23) at (0.4, 1.4) {\small$23$};
953
- \node (123) at (1.4, 1.4) {\small$123$};
954
-
955
- \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
956
- \draw (3) -- (13) -- (1);
957
- \draw (13) -- (123);
958
-
959
- \draw [dashed] (0) -- (2) -- (12);
960
- \draw [dashed] (2) -- (23);
961
- \end{tikzpicture}
962
- \end{center}
963
-
964
- For $n = 4$, we can take $\{\emptyset, 1, 2, 3, 4, 12, 13, 23\}$, which is again compressed but not an initial segment. It is an initial segment only if we replace $23$ with $14$.
965
- \begin{center}
966
- \begin{tikzpicture}[scale=1.5]
967
- \node [mred] (0) at (0, 0) {\small$\mathbf{\emptyset}$};
968
- \node [mred] (1) at (1, 0) {\small$\mathbf{1}$};
969
- \node [mred] (3) at (0, 1) {\small$\mathbf{3}$};
970
- \node [mred] (2) at (0.4, 0.4) {\small$\mathbf{2}$};
971
- \node [mred] (12) at (1.4, 0.4) {\small$\mathbf{12}$};
972
- \node [mred] (13) at (1, 1) {\small$\mathbf{13}$};
973
- \node [mred] (23) at (0.4, 1.4) {\small$\mathbf{23}$};
974
- \node (123) at (1.4, 1.4) {\small$123$};
975
-
976
- \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
977
- \draw (3) -- (13) -- (1);
978
- \draw (13) -- (123);
979
- \draw [dashed] (0) -- (2) -- (12);
980
- \draw [dashed] (2) -- (23);
981
-
982
- \begin{scope}[shift={(2, 0)}]
983
- \node [mred] (0) at (0, 0) {\small$\mathbf{4}$};
984
- \node (1) at (1, 0) {\small$14$};
985
- \node (3) at (0, 1) {\small$34$};
986
- \node (2) at (0.4, 0.4) {\small$24$};
987
- \node (12) at (1.4, 0.4) {\small$124$};
988
- \node (13) at (1, 1) {\small$134$};
989
- \node (23) at (0.4, 1.4) {\small$234$};
990
- \node (123) at (1.4, 1.4) {\small$1234$};
991
-
992
- \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
993
- \draw (3) -- (13) -- (1);
994
- \draw (13) -- (123);
995
- \draw [dashed] (0) -- (2) -- (12);
996
- \draw [dashed] (2) -- (23);
997
-
998
- \end{scope}
999
- \end{tikzpicture}
1000
- \end{center}
1001
- We notice that these two examples have a common pattern. The ``swap'' we have to perform to get to an initial segment is given by replacing an element with its complement, or equivalently, swapping it with the diagonally opposite element. This is indeed general.
1002
- \begin{lemma}
1003
- For each $n$, there exists a unique element $z \in Q_n$ such that $z^c$ is the successor of $z$.
1004
-
1005
- Moreover, if $B \subseteq Q_n$ is compressed but not an initial segment, then $|B| = 2^{n - 1}$ and $B$ is obtained from the initial segment of size $2^{n - 1}$ by replacing $z$ with $z^c$.
1006
- \end{lemma}
1007
-
1008
- \begin{proof}
1009
- For the first part, simply note that complementation is an order-reversing bijection $Q_n \to Q_n$, and $|Q_n|$ is even. So the $2^{n - 1}$th element is the only such element $z$.
1010
-
1011
- Now if $B$ is not an initial segment, then we can find some $x < y$ such that $x \not \in B$ and $y \in B$. Since $B$ is compressed, it must be the case that for each $i$, there is exactly one of $x$ and $y$ that contains $i$. Hence $x = y^c$. Note that this is true for all $x < y$ such that $x \not \in B$ and $y \in B$. So if we write out the simplicial order, then $B$ must look like
1012
- \begin{center}
1013
- \begin{tikzpicture}[scale=0.5]
1014
- \foreach \x in {0,...,7} {
1015
- \fill (\x, 0) circle [radius=0.1];
1016
- }
1017
- \draw (8, 0) circle [radius=0.1];
1018
- \fill (9, 0) circle [radius=0.1];
1019
- \foreach \x in {10,...,15} {
1020
- \draw (\x, 0) circle [radius=0.1];
1021
- }
1022
- \node at (16.5, 0) {$\cdots$};
1023
- \end{tikzpicture}
1024
- \end{center}
1025
- since any $x \not \in B$ such that $x < y$ must be given by $x = y^c$, and so there must be a unique such $x$, and similarly the other way round. So it must be the case that $y$ is the successor of $x$, and so $x = z$.
1026
- \end{proof}
1027
- We observe that these anomalous compressed sets are worse off than the initial segments (exercise!). So we deduce that
1028
-
1029
- %We define these exception sets for $n \geq 3$. For $n = 2k + 1$, we define
1030
- %\[
1031
- % B_n^* = \left(X^{(\leq k)} \setminus \{\{(k + 2)(k + 3) \cdots (2k + 1)\} \}\right) \cup \{12\cdots (k + 1)\}.
1032
- %\]
1033
- %For $n = 2k$, define
1034
- %\[
1035
- % B_n^* = \left(X^{(\leq k - 1)} \cup \{(X \setminus \{1\})^{(k - 1)} + 1\} \setminus \{1 (k + 2) \cdots (2k)\}\right) \cup \{23 \cdots (k + 1)\}.
1036
- %\]
1037
-
1038
- \begin{thm}[Harper, 1967]
1039
- Let $A \subseteq Q_n$, and let $C$ be the initial segment in the simplicial order with $|C| = |A|$. Then $|N(A)| \geq |N(C)|$. In particular,
1040
- \[
1041
- |A| = \sum_{i = 0}^r \binom{n}{i}\text{ implies } |N(A)| \geq \sum_{i = 0}^{r + 1} \binom{n}{i}.
1042
- \]
1043
- \end{thm}
1044
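- For a small sanity check, take $n = 3$ and $|A| = 4 = \binom{3}{0} + \binom{3}{1}$. The theorem gives $|N(A)| \geq \binom{3}{0} + \binom{3}{1} + \binom{3}{2} = 7$, which is attained by the ball $\{\emptyset, 1, 2, 3\}$, whereas the subcube $\{\emptyset, 1, 2, 12\}$ from the earlier example has $|N(A)| = 8$.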
-
1045
- \subsubsection*{The edge isoperimetric inequality in the cube}
1046
- Let $A \subseteq V$ be a subset of vertices in a graph $G = (V, E)$. Consider the \term{edge boundary}
1047
- \[
1048
- \partial_e A = \{xy \in E: x \in A, y \not \in A\}.
1049
- \]
1050
- Given a graph $G$, and given the size of $A$, can we give a lower bound for the size of $\partial_e A$?
1051
-
1052
- \begin{eg}
1053
- Take $G = Q_3$. For the vertex isoperimetric inequality, our optimal solution with $|A| = 4$ was given by
1054
- \begin{center}
1055
- \begin{tikzpicture}[scale=1.5]
1056
- \node [mred] (0) at (0, 0) {\small$\boldsymbol{\emptyset}$};
1057
- \node [mred] (1) at (1, 0) {\small$\mathbf{1}$};
1058
- \node [mred] (3) at (0, 1) {\small$\mathbf{3}$};
1059
- \node [mred] (2) at (0.4, 0.4) {\small$\mathbf{2}$};
1060
- \node (12) at (1.4, 0.4) {\small$12$};
1061
- \node (13) at (1, 1) {\small$13$};
1062
- \node (23) at (0.4, 1.4) {\small$23$};
1063
- \node (123) at (1.4, 1.4) {\small$123$};
1064
-
1065
- \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
1066
- \draw (3) -- (13) -- (1);
1067
- \draw (13) -- (123);
1068
-
1069
- \draw [dashed] (0) -- (2) -- (12);
1070
- \draw [dashed] (2) -- (23);
1071
- \end{tikzpicture}
1072
- \end{center}
1073
- The edge boundary has size $6$. However, if we just pick a slice, then the edge boundary has size $4$ only.
1074
- \end{eg}
1075
- More generally, consider $Q_n = Q_{2k + 1}$, and take the Hamming ball $B_k = X^{(\leq k)}$. Then
1076
- \[
1077
- \partial_e B_k = \{AB: A \subseteq B \subseteq X: |A| = k, |B| = k + 1\}.
1078
- \]
1079
- So we have
1080
- \[
1081
- |\partial_e B_k| = \binom{2k + 1}{k + 1} \cdot (k + 1) \sim \frac{2^n \sqrt{n}}{\sqrt{2\pi}}.
1082
- \]
1083
- However, if we instead pick the bottom face of $Q_n$, then $|A| = 2^{n - 1}$ and $|\partial_e A| = 2^{n - 1}$. This is much better.
1084
-
1085
- More generally, it is not unreasonable to suppose that sub-cubes are always the best. For a $k$-dimensional sub-cube in $Q_n$, we have
1086
- \[
1087
- |\partial_e A| = 2^k (n - k).
1088
- \]
1089
- If we want to prove this, and also further solve the problem for $|A|$ not a power of $2$, then as our previous experience would suggest, we should define an order on $\mathcal{P}(X)$.
1090
-
1091
- \begin{defi}[Binary order]\index{binary order}
1092
- The binary order on $Q_n \cong \mathcal{P}(X)$ is given by $x < y$ if $\max x \Delta y \in y$.
1093
-
1094
- Equivalently, define $\varphi: \mathcal{P}(X) \to \N$ by
1095
- \[
1096
- \varphi(x) = \sum_{i \in x} 2^i.
1097
- \]
1098
- Then $x < y$ if $\varphi(x) < \varphi(y)$.
1099
- \end{defi}
1100
- The idea is that we avoid large elements. The first few elements in this order are
1101
- \[
1102
- \emptyset, 1, 2, 12, 3, 13, 23, 123, \ldots.
1103
- \]
1104
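- Note that the first $2^k$ elements of the binary order are precisely the subsets of $\{1, \ldots, k\}$, i.e.\ a $k$-dimensional subcube: for instance, the first four elements listed above are $\emptyset, 1, 2, 12$, which is $\P(\{1, 2\})$. So initial segments of the binary order do generalize subcubes, as we hoped.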
- \begin{thm}
1105
- Let $A \subseteq Q_n$ be a subset, and let $C \subseteq Q_n$ be the initial segment of length $|A|$ in the binary order. Then $|\partial_e C| \leq |\partial_e A|$.
1106
- \end{thm}
1107
-
1108
- \begin{proof}
1109
- We induct on $n$ using codimension-$1$ compressions. Recall that we previously defined the sets $A_{\pm}^{(i)}$.
1110
-
1111
- The $i$-compression of $A$ is the set $B \subseteq Q_n$ such that $|B_{\pm}^{(i)}| = |A_{\pm}^{(i)}|$, and $B_{\pm}^{(i)}$ are initial segments in the binary order. We set $D_i(A) = B$.
1112
-
1113
- Observe that performing $D_i$ reduces the edge boundary. Indeed, given any $A$, we have
1114
- \[
1115
- |\partial_e A| = |\partial_e A_+^{(i)}| + |\partial_e A_-^{(i)}| + |A_+^{(i)} \Delta A_-^{(i)}|.
1116
- \]
1117
- Applying $D_i$ clearly does not increase any of these three terms. So we are happy. Now note that if $A \not= D_i A$, then
1118
- \[
1119
- \sum_{x \in D_i A} \sum_{j \in x} 2^j < \sum_{x \in A} \sum_{j \in x} 2^j.
1120
- \]
1121
- So after applying compressions finitely many times, we are left with a compressed set.
1122
-
1123
- We now hope that a compressed subset must be an initial segment, but this is not quite true.
1124
- \begin{claim}
1125
- If $A$ is compressed but not an initial segment, then
1126
- \[
1127
- A = \tilde{B} = \left(\P(X \setminus \{n\}) \setminus \{123\cdots (n-1)\}\right) \cup \{n\}.
1128
- \]
1129
- \end{claim}
1130
- By direct computation, we have
1131
- \[
1132
- |\partial_e \tilde{B}| = 2^{n - 1} + 2(n - 2),
1133
- \]
1134
- and so the initial segment is better. So we are done.
1135
-
1136
- The proof of the claim is the same as last time. Indeed, by definition, we can find some $x < y$ such that $x \not \in A$ and $y \in A$. As before, for any $i$, it cannot be the case that both $x$ and $y$ contain $i$ or neither contain $i$, since $A$ is compressed. So $x = y^c$, and we are done as before.
1137
- \end{proof}
1138
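- As a sanity check, for $n = 3$ the exceptional set is $\tilde{B} = \{\emptyset, 1, 2, 3\}$, with
- \[
-   |\partial_e \tilde{B}| = 6 = 2^{n - 1} + 2(n - 2),
- \]
- whereas the initial segment $\{\emptyset, 1, 2, 12\}$, the bottom face, has edge boundary of size $4$, exactly as in the $Q_3$ example above.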
-
1139
- \section{Sum sets}
1140
- Let $G$ be an abelian group, and $A, B \subseteq G$. Define
1141
- \[
1142
- A + B = \{a + b: a \in A, b \in B\}.
1143
- \]
1144
- For example, suppose $G = \R$ and $A = \{a_1 < a_2 < \cdots < a_n\}$ and $B = \{b_1 < b_2 < \cdots < b_m\}$. Surely, $|A + B| \leq nm$, and this bound can be achieved. Can we bound it from below? The elements
1145
- \[
1146
- a_1 + b_1, a_1 + b_2, \ldots, a_1 + b_m, a_2 + b_m, \ldots, a_n + b_m
1147
- \]
1148
- are certainly distinct, since they are in strictly increasing order. So
1149
- \[
1150
- |A + B| \geq m + n - 1 = |A| + |B| - 1.
1151
- \]
1152
- What if we are working in a finite group? In general, we don't have an order, so we can't make the same argument. Indeed, the same inequality cannot always be true, since $|G + G| = |G|$. Slightly more generally, if $H$ is a subgroup of $G$, then $|H + H| = |H|$.
1153
-
1154
- So let's look at a group with no non-trivial proper subgroups. In other words, pick $G = \Z_p$.
1155
-
1156
- \begin{thm}[Cauchy--Davenport theorem]\index{Cauchy--Davenport theorem}
1157
- Let $A$ and $B$ be non-empty subsets of $\Z_p$ with $p$ a prime, and $|A| + |B| \leq p + 1$. Then
1158
- \[
1159
- |A + B| \geq |A| + |B| - 1.
1160
- \]
1161
- \end{thm}
1162
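- \begin{eg}
-   The bound is attained by arithmetic progressions with a common difference: if $A = \{0, 1, \ldots, r - 1\}$ and $B = \{0, 1, \ldots, s - 1\}$ with $r + s - 1 \leq p$, then $A + B = \{0, 1, \ldots, r + s - 2\}$, which has exactly $|A| + |B| - 1$ elements.
- \end{eg}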
-
1163
- \begin{proof}
1164
- We may assume $1 \leq |A| \leq |B|$. Apply induction on $|A|$. If $|A| = 1$, then there is nothing to do. So assume $|A| \geq 2$.
1165
-
1166
- Since everything is invariant under translation, we may assume $0, a \in A$ with $a \not= 0$. Then $\{a, 2a, \ldots, pa\} = \Z_p$. So there exists $k \geq 0$ such that $ka \in B$ and $(k + 1) a \not \in B$.
1167
-
1168
- By translating $B$, we may assume $0 \in B$ and $a \not \in B$.
1169
-
1170
- Now $0 \in A \cap B$, while $a \in A \setminus B$. Therefore we have
1171
- \[
1172
- 1 \leq |A \cap B| < |A|.
1173
- \]
1174
- Hence, by the induction hypothesis (note that $|A \cap B| + |A \cup B| = |A| + |B| \leq p + 1$),
1175
- \[
1176
- |(A \cap B) + (A \cup B)| \geq |A \cap B| + |A \cup B| - 1 = |A| + |B| - 1.
1177
- \]
1178
- Also, clearly
1179
- \[
1180
- (A \cap B) + (A \cup B) \subseteq A + B.
1181
- \]
1182
- So we are done.
1183
- \end{proof}
1184
-
1185
- \begin{cor}
1186
- Let $A_1, \ldots, A_k$ be non-empty subsets of $\Z_p$ such that
1187
- \[
1188
- \sum_{i = 1}^k |A_i| \leq p + k - 1.
1189
- \]
1190
- Then
1191
- \[
1192
- |A_1 + \ldots + A_k| \geq \sum_{i = 1}^k |A_i| - k + 1.
1193
- \]
1194
- \end{cor}
1195
- What if we don't take sets, but sequences? Let $a_1, \ldots, a_m \in \Z_n$. How large must $m$ be to guarantee that some non-empty collection of the $a_i$ sums to $0$? By the pigeonhole principle, $m \geq n$ suffices. Indeed, consider the sequence
1196
- \[
1197
- a_1, a_1 + a_2, a_1 + a_2 + a_3, \cdots, a_1 + \cdots + a_n.
1198
- \]
1199
- If they are all distinct, then one of them must be zero, and so we are done. If they are not distinct, then by the pigeonhole principle, there must be $k < k'$ such that
1200
- \[
1201
- a_1 + \cdots + a_k = a_1 + \cdots + a_{k'}.
1202
- \]
1203
- So it follows that
1204
- \[
1205
- a_{k + 1} + \cdots + a_{k'} = 0.
1206
- \]
1207
- So in fact we can even require the elements we sum over to be consecutive. On the other hand, $m \geq n$ is also necessary, since we can take $a_i = 1$ for all $i$.
1208
-
1209
- We can tackle a harder question, where we require that the sum of a fixed number of things vanishes.
1210
- \begin{thm}[Erd\"os--Ginzburg--Ziv]
1211
- Let $a_1, \ldots, a_{2n - 1} \in \Z_n$. Then there exists $I \in [2n - 1]^{(n)}$ such that
1212
- \[
1213
- \sum_{i \in I} a_i = 0
1214
- \]
1215
- in $\Z_n$.
1216
- \end{thm}
1217
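- Note that $2n - 1$ cannot be replaced by $2n - 2$: if we take $n - 1$ copies of $0$ and $n - 1$ copies of $1$, then any $n$ of these terms contain between $1$ and $n - 1$ ones, so their sum is never $0$ in $\Z_n$.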
-
1218
- \begin{proof}
1219
- First consider the case $n = p$ is a prime. Write
1220
- \[
1221
- 0 \leq a_1 \leq a_2 \leq \cdots \leq a_{2p - 1} < p.
1222
- \]
1223
- If $a_i = a_{i + p - 1}$ for some $i$, then there are $p$ terms that are the same, and so we are done by adding them up. Otherwise, set $A_i = \{a_i, a_{i + p - 1}\}$ for $i = 1, \ldots, p - 1$, and $A_p = \{a_{2p - 1}\}$, then $|A_i| = 2$ for $i = 1, \ldots, p - 1$ and $|A_p| = 1$. Hence we know
1224
- \[
1225
- |A_1 + \cdots + A_p| \geq (2(p - 1) + 1) - p + 1 = p.
1226
- \]
1227
- Thus, every element in $\Z_p$ is a sum of some $p$ of our terms, and in particular $0$ is.
1228
-
1229
- In general, suppose $n$ is not a prime. Write $n = pm$, where $p$ is a prime and $m > 1$. By induction, for every $2m - 1$ terms, we can find $m$ terms whose sum is a multiple of $m$.
1230
-
1231
- Select \emph{disjoint} $S_1, S_2, \ldots, S_{2p - 1} \in [2n - 1]^{(m)}$ such that
1232
- \[
1233
- \sum_{j \in S_i} a_j = m b_i.
1234
- \]
1235
- This can be done because after selecting, say, $S_1, \ldots, S_{2p - 2}$, we have
1236
- \[
1237
- (2n - 1) - (2p - 2)m = 2m - 1
1238
- \]
1239
- elements left, and so we can pick the next one.
1240
-
1241
- We are essentially done, because by the prime case, applied to $b_1, \ldots, b_{2p - 1}$, we can pick $i_1, \ldots, i_p$ such that $\sum_{k = 1}^p b_{i_k}$ is a multiple of $p$. Then
1242
- \[
1243
- \sum_{k = 1}^p \sum_{j \in S_{i_k}} a_j
1244
- \]
1245
- is a sum of $mp = n$ terms whose sum is a multiple of $mp$.
1246
- \end{proof}
1247
-
1248
- \section{Projections}
1249
- So far, we have been considering discrete objects only. For a change, let's work with something continuous.
1250
-
1251
- Let $K \subseteq \R^n$ be a bounded open set. For $A \subseteq [n]$, we set
1252
- \[
1253
- K_A = \{(x_i)_{i \in A} : \exists y \in K, y_i = x_i\text{ for all }i \in A\} \subseteq \R^A.
1254
- \]
1255
- We write $|K_A|$ for the Lebesgue measure of $K_A$ as a subset of $\R^A$. The question we are interested in is given some of these $|K_A|$, can we bound $|K|$? In some cases, it is completely trivial.
1256
- \begin{eg}
1257
- If we have a partition $[n] = A_1 \cup \cdots \cup A_m$, then we have
1258
- \[
1259
- |K| \leq \prod_{i = 1}^m |K_{A_i}|.
1260
- \]
1261
- \end{eg}
1262
-
1263
- But, for example, in $\R^3$, can we bound $|K|$ given $|K_{12}|$, $|K_{13}|$ and $|K_{23}|$?
1264
-
1265
- It is clearly not possible if we only know, say $|K_{12}|$ and $|K_{13}|$. For example, we can consider the boxes
1266
- \[
1267
- \left(0, \frac{1}{n}\right) \times (0, n) \times (0, n).
1268
- \]
1269
- \begin{prop}
1270
- Let $K$ be a body in $\R^3$. Then
1271
- \[
1272
- |K|^2 \leq |K_{12}| |K_{13}| |K_{23}|.
1273
- \]
1274
- \end{prop}
1275
- This is actually quite hard to prove! However, given what we have done so far, it is natural to try to compress $K$ in some sense. Indeed, we know equality holds for a box, and if we can make $K$ look more like a box, then maybe we can end up with a proof.
1276
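- Indeed, for the box $K = (0, a_1) \times (0, a_2) \times (0, a_3)$ we get
- \[
-   |K|^2 = (a_1 a_2 a_3)^2 = (a_1 a_2)(a_1 a_3)(a_2 a_3) = |K_{12}| |K_{13}| |K_{23}|.
- \]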
-
1277
- For $K \subseteq \R^n$, its \term{$n$-sections} are the sets $K(x) \subseteq \R^{n - 1}$ defined by
1278
- \[
1279
- K(x) = \{(x_1, \ldots, x_{n - 1}) \in \R^{n - 1}: (x_1, \ldots, x_{n - 1}, x) \in K\}.
1280
- \]
1281
- \begin{proof}
1282
- Suppose first that each section of $K$ is a square, i.e.
1283
- \[
1284
- K(x) = (0, f(x)) \times (0, f(x))
1285
- \]
1286
- for all $x$ and some $f$. Then
1287
- \[
1288
- |K| = \int f(x)^2\;\d x.
1289
- \]
1290
- Moreover,
1291
- \[
1292
- |K_{12}| = \left(\sup_x f(x)\right)^2 \equiv M^2,\quad |K_{13}| = |K_{23}| = \int f(x)\;\d x.
1293
- \]
1294
- So we have to show that
1295
- \[
1296
- \left(\int f(x)^2\;\d x\right)^2 \leq M^2 \left(\int f(x)\;\d x \right)^2,
1297
- \]
1298
- but this is trivial, because $f(x) \leq M$ for all $x$.
1299
-
1300
- Let's now consider what happens when we compress $K$. For the general case, define a new body $L \subseteq \R^3$ by setting its sections to be
1301
- \[
1302
- L(x) = (0, \sqrt{|K(x)|}) \times (0, \sqrt{|K(x)|}).
1303
- \]
1304
- Then $|L| = |K|$, and observe that
1305
- \[
1306
- |L_{12}| \leq \sup |K(x)| \leq \left|\bigcup K(x)\right| = |K_{12}|.
1307
- \]
1308
- To understand the other two projections, we introduce
1309
- \[
1310
- g(x) = |K(x)_1|,\quad h(x) = |K(x)_2|.
1311
- \]
1312
- Now observe that
1313
- \[
1314
- |L(x)| = |K(x)| \leq g(x) h(x),
1315
- \]
1316
- Since $L(x)$ is a square, it follows that $L(x)$ has side length $\leq g(x)^{1/2} h(x)^{1/2}$. So
1317
- \[
1318
- |L_{13}| = |L_{23}| \leq \int g(x)^{1/2} h(x)^{1/2}\;\d x.
1319
- \]
1320
- So we want to show that
1321
- \[
1322
- \left(\int g^{1/2}h^{1/2} \;\d x\right)^2 \leq \left(\int g\;\d x\right)\left(\int h\;\d x\right).
1323
- \]
1324
- Observe that this is just the Cauchy--Schwarz inequality applied to $g^{1/2}$ and $h^{1/2}$. So we are done.
1325
- \end{proof}
1326
-
1327
- Let's try to generalize this.
1328
- \begin{defi}[(Uniform) cover]\index{uniform cover}\index{cover}
1329
- We say a family $A_1, \ldots, A_r \subseteq [n]$ \emph{covers} $[n]$ if
1330
- \[
1331
- \bigcup_{i = 1}^r A_r = [n],
1332
- \]
1333
- and is a \emph{uniform $k$-cover} if each $i \in [n]$ is in exactly $k$ many of the sets.
1334
- \end{defi}
1335
-
1336
- \begin{eg}
1337
- With $n = 3$, the singletons $\{1\}, \{2\}, \{3\}$ form a $1$-uniform cover, and so does $\{1\}, \{2, 3\}$. Also, $\{1, 2\}, \{1, 3\}$ and $\{2, 3\}$ form a uniform $2$-cover. However, $\{1, 2\}$ and $\{2, 3\}$ do not form a uniform cover of $[3]$.
1338
- \end{eg}
1339
- Note that we allow repetitions.
1340
- \begin{eg}
1341
- $\{1\}, \{1\}, \{2, 3\}, \{2\}, \{3\}$ is a $2$-uniform cover of $[3]$.
1342
- \end{eg}
1343
-
1344
- \begin{thm}[Uniform cover inequality]\index{uniform cover inequality}
1345
- If $A_1, \ldots, A_r$ is a uniform $k$-cover of $[n]$, then
1346
- \[
1347
- |K|^k \leq \prod_{i = 1}^r |K_{A_i}|.
1348
- \]
1349
- \end{thm}
1350
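- For example, the uniform $1$-cover $\{1\}, \{2\}, \ldots, \{n\}$ gives $|K| \leq \prod_{i = 1}^n |K_{\{i\}}|$, and the uniform $2$-cover $\{1, 2\}, \{1, 3\}, \{2, 3\}$ of $[3]$ recovers the proposition
- \[
-   |K|^2 \leq |K_{12}| |K_{13}| |K_{23}|
- \]
- from before.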
-
1351
- \begin{proof}
1352
- Let $\mathcal{A}$ be a $k$-uniform cover of $[n]$. Note that $\mathcal{A}$ is a \emph{multiset}. Write
1353
- \begin{align*}
1354
- \mathcal{A}_- &= \{A \in \mathcal{A}: n \not \in A\}\\
1355
- \mathcal{A}_+ &= \{A \setminus \{n\} : A \in \mathcal{A}, n \in A\}
1356
- \end{align*}
1357
- We have $|\mathcal{A}_+| = k$, and $\mathcal{A}_+ \cup \mathcal{A}_-$ forms a $k$-uniform cover of $[n - 1]$.
1358
-
1359
- Now note that if $K \subseteq \R^n$ and $n \not \in A$, then
1360
- \[
1361
- |K_A| \geq |K(x)_A|\tag{1}
1362
- \]
1363
- for all $x$. Also, if $n \in A$, then
1364
- \[
1365
- |K_A| = \int |K(x)_{A \setminus \{n\}}|\;\d x.\tag{2}
1366
- \]
1367
- In the previous proof, we used Cauchy--Schwarz. What we need here is H\"older's inequality
1368
- \[
1369
- \int fg\;\d x \leq \left(\int f^p\;\d x\right)^{1/p} \left(\int g^q\;\d x\right)^{1/q},
1370
- \]
1371
- where $\frac{1}{p} + \frac{1}{q} = 1$. Iterating this, we get
1372
- \[
1373
- \int f_1 \cdots f_k\;\d x\leq \prod_{i = 1}^k \left(\int f_i^k\;\d x\right)^{1/k}.
1374
- \]
1375
- Now to perform the proof, we induct on $n$. We are done if $n = 1$. Otherwise, given $K \subseteq \R^n$ and $n \geq 2$, by induction,
1376
- \begin{align*}
1377
- |K| &= \int |K(x)|\;\d x \\
1378
- &\leq \int \prod_{A \in \mathcal{A}_-} |K(x)_A|^{1/k} \prod_{A \in \mathcal{A}_+} |K(x)_A|^{1/k}\;\d x\tag{by induction}\\
1379
- &\leq \prod_{A \in \mathcal{A}_-} |K_A|^{1/k} \int \prod_{A \in \mathcal{A}_+}|K(x)_A|^{1/k}\;\d x\tag{by (1)}\\
1380
- &\leq \prod_{A \in \mathcal{A}_-} |K_A|^{1/k} \prod_{A \in \mathcal{A}_+} \left(\int |K(x)_A|\;\d x\right)^{1/k} \tag{by H\"older}\\
1381
- &= \prod_{A \in \mathcal{A}_-} |K_A|^{1/k} \prod_{A \in \mathcal{A}_+} |K_{A \cup \{n\}}|^{1/k} = \prod_{A \in \mathcal{A}} |K_A|^{1/k}\tag{by (2)}.%\qedhere
1382
- \end{align*}
- Raising both sides to the $k$th power gives the result.
1383
- \end{proof}
1384
- This theorem is great, but we can do better. In fact,
1385
-
1386
- \begin{thm}[Box Theorem (Bollob\'as, Thomason)]\index{box theorem}
1387
- Given a body $K \subseteq \R^n$, i.e.\ a non-empty bounded open set, there exists a box $L$ such that $|L| = |K|$ and $|L_A| \leq |K_A|$ for all $A \subseteq [n]$.
1388
- \end{thm}
1389
- Of course, this trivially implies the uniform cover theorem. Perhaps more surprisingly, we can deduce this from the uniform cover inequality.
1390
-
1391
- To prove this, we first need a lemma.
1392
- \begin{defi}[Irreducible cover]\index{reducible cover}\index{irreducible cover}
1393
- A uniform $k$-cover is \emph{reducible} if it is the disjoint union of two uniform covers. Otherwise, it is \emph{irreducible}.
1394
- \end{defi}
1395
-
1396
- \begin{lemma}
1397
- There are only finitely many irreducible covers of $[n]$.
1398
- \end{lemma}
1399
-
1400
- \begin{proof}
1401
- Let $\mathcal{A}$ and $\mathcal{B}$ be covers. We say $\mathcal{A} < \mathcal{B}$ if $\mathcal{A}$ is a proper ``sub-multiset'' of $\mathcal{B}$, i.e.\ $\mathcal{A} \not= \mathcal{B}$ and for each $A \subseteq [n]$, the multiplicity of $A$ in $\mathcal{A}$ is at most the multiplicity in $\mathcal{B}$.
1402
-
1403
- Then note that the irreducible uniform $k$-covers form an anti-chain in this order, and observe that there cannot be an infinite anti-chain of multisets of subsets of $[n]$ (identifying a multiset with its multiplicity vector in $\N^{\mathcal{P}([n])}$, this is Dickson's lemma).
1404
-
1405
- % Observe that we can represent families $\mathcal{A}$ by ``indicator functions'' $\psi_\mathcal{A}: \mathcal{P}(n) \to \Z$ by setting $\psi_\mathcal{A}(A)$ to be the multiplicity of $A$ in $\mathcal{A}$. Then we can reformulate the definition of uniform $k$-covers as saying $\psi: \mathcal{P}(n) \to \Z$ satisfies
1406
- % \[
1407
- % \sum_{A : i \in A} \psi(A) = k
1408
- % \]
1409
- % for all $i \in [n]$.
1410
- %
1411
- % In this formulation, $\psi$ is reducible if we can write $\psi = \psi_1 + \psi_2$, where $\psi_1$ and $\psi_2$ are both uniform covers.
1412
- %
1413
- % Given two covers $\psi_1, \psi_2: \mathcal{P}(n) \to \Z$, we say $\psi_1 < \psi_2$ if $\psi_1(A) < \psi_2(A)$ for all $A$. Then
1414
- %
1415
- %Fix a finite set $\Omega$, and define an ordering on $\R^\Omega$ by saying $f < g$ if $f(x) < g(x)$ for all $x$.
1416
- %
1417
- %Can we have an infinite anti-chain $f_1, f_2, \ldots$? One can easily produce such an example! However, we cannot have an infinite anti-chain in $\Z^\Omega$. In fact, we can easily see that there is an increasing subsequence.
1418
- %
1419
- %Hence, there are only finitely many irreducible uniform $k$ covers of $[n]$. However, we haven't shown any upper bound!
1420
- \end{proof}
1421
-
1422
- \begin{proof}[Proof of box theorem]
1423
- For $\mathcal{A}$ an irreducible cover, we have
1424
- \[
1425
- |K|^k \leq \prod_{A \in \mathcal{A}} |K_A|.
1426
- \]
1427
- Also,
1428
- \[
1429
- |K_A| \leq \prod_{i \in A} |K_{\{i\}}|.
1430
- \]
1431
- Let $\{x_A: A \subseteq [n]\}$ be a minimal array with $x_A \leq |K_A|$ such that for each irreducible uniform $k$-cover $\mathcal{A}$ (for every $k$), we have
1432
- \[
1433
- |K|^k \leq \prod_{A \in \mathcal{A}} x_A\tag{$1$}
1434
- \]
1435
- and moreover
1436
- \[
1437
- x_A \leq \prod_{i \in A} x_{\{i\}}\tag{$2$}
1438
- \]
1439
- for all $A \subseteq [n]$. We know this exists since there are only finitely many inequalities to be satisfied, and we can just decrease the $x_A$'s one by one. Now, by minimality, for each $x_A$, there must be at least one inequality involving $x_A$ on the right-hand side that is in fact an equality.
1440
-
1441
- \begin{claim}
1442
- For each $i \in [n]$, there exists a uniform $k_i$-cover $\mathcal{C}_i$ containing $\{i\}$ with equality
1443
- \[
1444
- |K|^{k_i} = \prod_{A \in \mathcal{C}_i} x_A.
1445
- \]
1446
- \end{claim}
1447
-
1448
- Indeed, if $x_{\{i\}}$ occurs on the right of an equality of type (1), then we are done. Otherwise, it occurs on the right of (2), and then there is some $A \ni i$ such that (2) holds with equality. Now, since $x_A$ occurs on the right-hand side only in inequalities of type (1), there is some cover $\mathcal{A}$ containing $A$ such that (1) holds with equality. Then replace $A$ in $\mathcal{A}$ with $\{ \{j\}: j \in A\}$, and we are done.
1449
-
1450
- Now let
1451
- \[
1452
- \mathcal{C} = \bigcup_{i = 1}^n \mathcal{C}_i,\quad
1453
- \mathcal{C}' = \mathcal{C} \setminus \{\{1\}, \{2\}, \ldots, \{n\}\},\quad
1454
- k = \sum_{i = 1}^n k_i.
1455
- \]
1456
- Then, noting that $\mathcal{C}'$ is a uniform $(k - 1)$-cover, and that every uniform cover is a disjoint union of irreducible ones (so $(1)$ applies to $\mathcal{C}'$ as well), we get
1457
- \[
1458
- |K|^k = \prod_{A \in \mathcal{C}} x_A = \left(\prod_{A \in \mathcal{C}'} x_A\right)\prod_{i = 1}^n x_i \geq |K|^{k - 1} \prod_{i = 1}^n x_i.
1459
- \]
1460
- So we have
1461
- \[
1462
- |K| \geq \prod_{i = 1}^n x_i.
1463
- \]
1464
- But we of course also have the reverse inequality. So it must be the case that they are equal.
1465
-
1466
- Finally, for each $A$, consider the uniform $1$-cover $\mathcal{A} = \{A\} \cup \{ \{i\} : i \not \in A\}$. Dividing the corresponding instance of (1) by $\prod_{i \not \in A} x_i$, and using $|K| = \prod_{i = 1}^n x_i$, gives us
1467
- \[
1468
- \prod_{i \in A} x_i \leq x_A.
1469
- \]
1470
- By (2), we have the reverse inequality. So we have
1471
- \[
1472
- x_A = \prod_{i \in A} x_i
1473
- \]
1474
- for all $i$. So we are done by taking $L$ to be the box with side length $x_i$.
1475
- \end{proof}
1476
-
1477
- \begin{cor}
1478
- If $K$ is a union of translates of the unit cube, then for any (not necessarily uniform) $k$-cover $\mathcal{A}$, we have
1479
- \[
1480
- |K|^k \leq \prod_{A \in \mathcal{A}} |K_A|.
1481
- \]
1482
- \end{cor}
1483
- Here a $k$-cover is a cover where every element is covered at least $k$ times.
1484
-
1485
- \begin{proof}
1486
- Observe that if $B \subseteq A$, then $|K_B| \leq |K_A|$. So we can reduce $\mathcal{A}$ to a uniform $k$-cover.
1487
- \end{proof}
1488
- %
1489
- %\begin{cor}
1490
- % Let $S$ be a set of sequences of length $n$ with terms from a finite set $X$. Then every uniform $k$-cover $\mathcal{A}$ of $[n]$ satisfies
1491
- % \[
1492
- % |S|^k \leq \prod_{A \in \mathcal{A}} |S_A|,
1493
- % \]
1494
- % where $S_A$ is the restriction of the sequences in $S$ to $A$ and $|S|$ is the product of elements in $S$.
1495
- %\end{cor}
1496
-
1497
- \section{Alon's combinatorial Nullstellensatz}
1498
- Alon's combinatorial Nullstellensatz is a seemingly unexciting result that has surprisingly many useful consequences.
1499
- \begin{thm}[Alon's combinatorial Nullstellensatz]\index{combinatorial Nullstellensatz}\index{Alon's combinatorial Nullstellensatz}
1500
- Let $\F$ be a field, and let $S_1, \ldots, S_n$ be non-empty finite subsets of $\F$ with $|S_i| = d_i + 1$. Let $f \in \F[X_1, \ldots, X_n]$ have degree $d = \sum_{i = 1}^n d_i$, and let the coefficient of $X_1^{d_1} \cdots X_n^{d_n}$ be non-zero. Then $f$ is not identically zero on $S = S_1 \times \cdots \times S_n$.
1501
- \end{thm}
1502
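- Note that for $n = 1$ this is just the familiar fact that a polynomial of degree $d_1$ with non-zero leading coefficient has at most $d_1$ roots, and so cannot vanish on all $d_1 + 1$ points of $S_1$.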
-
1503
- Its proof follows from generalizing facts we know about polynomials in one variable. Here $R$ will always be a ring; $\F$ always a field, and $\F_q$ the unique field of order $q = p^n$. Recall the following result:
1504
- \begin{prop}[Division algorithm]\index{division algorithm}
1505
- Let $f, g \in R[X]$ with $g$ monic. Then we can write
1506
- \[
1507
- f = hg + r,
1508
- \]
1509
- where $\deg h \leq \deg f - \deg g$ and $\deg r < \deg g$.
1510
- \end{prop}
1511
- Our convention is that $\deg 0 = -\infty$.
1512
-
1513
- Let $X = (X_1, \ldots, X_n)$ be a sequence of variables, and write $R[X] = R[X_1, \ldots, X_n]$.
1514
-
1515
- \begin{lemma}
1516
- Let $f \in R[X]$, and for $i = 1, \ldots, n$, let $g_i(X_i) \in R[X_i] \subseteq R[X]$ be monic of degree $\deg g_i = \deg_{X_i} g_i = d_i$. Then there exist polynomials $h_1, \ldots, h_n, r \in R[X]$ such that
1517
- \[
1518
- f = \sum h_i g_i + r,
1519
- \]
1520
- where
1521
- \begin{align*}
1522
- \deg h_i &\leq \deg f - d_i & \deg_{X_i} r &\leq d_i - 1\\
1523
- \deg_{X_i} h_i &\leq \deg_{X_i} f - d_i & \deg_{X_i} r &\leq \deg_{X_i} f\\
1524
- \deg_{X_j} h_i &\leq \deg_{X_j} f & \deg r &\leq \deg f
1525
- \end{align*}
1526
- for all $i, j$.
1527
- \end{lemma}
1528
-
1529
- \begin{proof}
1530
- Consider $f$ as a polynomial with coefficients in $R[X_2, \ldots, X_n]$, then divide by $g_1$ using the division algorithm. So we write
1531
- \[
1532
- f = h_1 g_1 + r_1.
1533
- \]
1534
- Then we have
1535
- \begin{align*}
1536
- \deg_{X_1} h_1 &\leq \deg_{X_1} f - d_1 & \deg_{X_1} r_1 &\leq d_1 - 1\\
1537
- \deg h_1 &\leq \deg f & \deg_{X_j} r_1 &\leq \deg_{X_j} f\\
1538
- \deg_{X_j} h_1 &\leq \deg_{X_j}f & \deg r &\leq \deg f.
1539
- \end{align*}
1540
- Then repeat this with $f$ replaced by $r_1$, $g_1$ by $g_2$, and $X_1$ by $X_2$.
1541
- \end{proof}
1542
-
1543
- We also know that a polynomial of one variable of degree $n \geq 1$ over a field has at most $n$ zeroes.
1544
-
1545
- \begin{lemma}
1546
- Let $S_1, \ldots, S_n$ be non-empty finite subsets of a field $\F$, and let $h \in \F[X]$ be such that $\deg_{X_i} h < |S_i|$ for $i = 1, \ldots, n$. Suppose $h$ is identically $0$ on $S = S_1 \times \cdots \times S_n \subseteq \F^n$. Then $h$ is the zero polynomial.
1547
- \end{lemma}
1548
-
1549
- \begin{proof}
1550
- Let $d_i = |S_i| - 1$. We induct on $n$. If $n = 1$, then we are done. For $n \geq 2$, consider $h$ as a polynomial in $X_n$ with coefficients in $\F[X_1, \ldots, X_{n - 1}]$. Then we can write
1551
- \[
1552
- h = \sum_{i = 0}^{d_n} g_i(X_1, \ldots, X_{n - 1}) X_n^i.
1553
- \]
1554
- Fix $(x_1, \ldots, x_{n - 1}) \in S_1 \times \cdots \times S_{n - 1}$, and set $c_i = g_i(x_1, \ldots, x_{n - 1}) \in \F$. Then $\sum_{i = 0}^{d_n} c_i X_n^i$ has degree at most $d_n < |S_n|$ and vanishes on $S_n$. So $c_i = g_i(x_1, \ldots, x_{n - 1}) = 0$ for all $i$ and all $(x_1, \ldots, x_{n - 1}) \in S_1 \times \cdots \times S_{n - 1}$. So by induction, $g_i = 0$. So $h = 0$.
1555
- \end{proof}
1556
-
1557
- Another fact we know about polynomials in one variable is that if $f \in \F[X]$ vanishes at distinct points $z_1, \ldots, z_n$, then $f$ is a multiple of $\prod_{i = 1}^n (X - z_i)$.
1558
- \begin{lemma}
1559
- For $i = 1, \ldots, n$, let $S_i$ be a non-empty finite subset of $\F$, and let
1560
- \[
1561
- g_i(X_i) = \prod_{s \in S_i} (X_i - s) \in \F[X_i] \subseteq \F[X].
1562
- \]
1563
- Then if $f \in \F[X]$ is identically zero on $S = S_1 \times \cdots \times S_n$, then there exists $h_i \in \F[X]$, $\deg h_i \leq \deg f - |S_i|$ and
1564
- \[
1565
- f = \sum_{i = 1}^n h_i g_i.
1566
- \]
1567
- \end{lemma}
1568
-
1569
- \begin{proof}
1570
- By the division algorithm, we can write
1571
- \[
1572
- f = \sum_{i = 1}^n h_i g_i + r,
1573
- \]
1574
- where $r$ satisfies $\deg_{X_i} r < \deg g_i = |S_i|$. But then $r$ vanishes on $S_1 \times \cdots \times S_n$, as both $f$ and the $g_i$ do. So $r = 0$ by the previous lemma.
1575
- \end{proof}
1576
-
1577
- We finally get to Alon's combinatorial Nullstellensatz.
1578
- \begin{thm}[Alon's combinatorial Nullstellensatz]\index{combinatorial Nullstellensatz}\index{Alon's combinatorial Nullstellensatz}
1579
- Let $S_1, \ldots, S_n$ be non-empty finite subsets of $\F$ with $|S_i| = d_i + 1$. Let $f \in \F[X]$ have degree $d = \sum_{i = 1}^n d_i$, and let the coefficient of $X_1^{d_1} \cdots X_n^{d_n}$ be non-zero. Then $f$ is not identically zero on $S = S_1 \times \cdots \times S_n$.
1580
- \end{thm}
1581
-
1582
- \begin{proof}
1583
- Suppose for contradiction that $f$ is identically zero on $S$. Define $g_i(X_i)$ and $h_i$ as before such that
1584
- \[
1585
- f = \sum h_i g_i.
1586
- \]
1587
- Since the coefficient of $X_1^{d_1} \cdots X_n^{d_n}$ is non-zero in $f$, it is non-zero in some $h_j g_j$. But that's impossible, since
1588
- \[
1589
- \deg h_j \leq \left(\sum_{i = 1}^n d_i \right) - \deg g_j = \sum_{i \not= j} d_i - 1,
1590
- \]
1591
- and so $h_j$ cannot contain a $X_1^{d_1} \cdots \hat{X_j}^{d_j} \cdots X_n^{d_n}$ term.
1592
- \end{proof}
1593
-
1594
- Let's look at some applications. Here $p$ is a prime, $q = p^k$, and $\F_q$ is the unique field of order $q$.
1595
- \begin{thm}[Chevalley, 1935]
1596
- Let $f_1, \ldots, f_m \in \F_q[X_1, \ldots, X_n]$ be such that
1597
- \[
1598
- \sum_{i = 1}^m \deg f_i < n.
1599
- \]
1600
- Then the $f_i$ cannot have exactly one common zero.
1601
- \end{thm}
1602
-
1603
- \begin{proof}
1604
- Suppose not. We may assume that the common zero is $0 = (0, \ldots, 0)$. Define
1605
- \[
1606
- f = \prod_{i = 1}^m (1 - f_i(X)^{q - 1}) - \gamma \prod_{i = 1}^n \prod_{s \in \F_q^\times} (X_i - s),
1607
- \]
1608
- where $\gamma$ is chosen so that $f(0) = 0$, namely the inverse of $\left(\prod_{s \in \F_q^\times} (-s)\right)^n$.
1609
-
1610
- Now observe that for any non-zero $x \in \F_q^n$, we have $f_i(x) \not= 0$ for some $i$ (as $0$ is the only common zero), so the first product vanishes; moreover, some coordinate $x_j$ lies in $\F_q^\times$, so the second product vanishes as well. Hence $f(x) = 0$.
1611
-
1612
- Thus, we can set $S_i = \F_q$, and they satisfy the hypothesis of the theorem. In particular, since the first product has degree less than $n(q - 1)$, the coefficient of $X_1^{q - 1} \cdots X_n^{q - 1}$ is $-\gamma \not= 0$. However, $f$ vanishes on $\F_q^n$. This is a contradiction.
1613
- \end{proof}
1614
-
1615
- It is possible to prove similar results without using the combinatorial Nullstellensatz. These results are often collectively referred to as \term{Chevalley--Warning theorems}.
1616
- \begin{thm}[Warning]
1617
- Let $f(X) = f(X_1, \ldots, X_n) \in \F_q[X]$ have degree $< n$. Then $N(f)$, the number of zeroes of $f$ is a multiple of $p$.
1618
- \end{thm}
1619
-
1620
- One nice trick in doing these things is that finite fields naturally come with an ``indicator function''. Since the multiplicative group has order $q - 1$, we know that if $x \in \F_q$, then
1621
- \[
1622
- x^{q - 1} =
1623
- \begin{cases}
1624
- 1 & x \not= 0\\
1625
- 0 & x = 0
1626
- \end{cases}.
1627
- \]
1628
- \begin{proof}
1629
- We have
1630
- \[
1631
- 1 - f(x)^{q - 1} =
1632
- \begin{cases}
1633
- 1 & f(x) = 0\\
1634
- 0 & \text{otherwise}
1635
- \end{cases}.
1636
- \]
1637
- Thus, we know
1638
- \[
1639
- N(f) = \sum_{x \in \F_q^n} (1 - f(x)^{q - 1}) = -\sum_{x \in \F_q^n} f(x)^{q - 1} \in \F_q.
1640
- \]
1641
- Further, we know that if $0 \leq k \leq q - 1$, then
1642
- \[
1643
- \sum_{x \in \F_q} x^k =
1644
- \begin{cases}
1645
- -1 & k = q - 1\\
1646
- 0 & \text{otherwise}
1647
- \end{cases}.
1648
- \]
1649
- So let's write $f(x)^{q - 1}$ as a linear combination of monomials. Each monomial has degree $<n(q - 1)$. So there is at least one $k$ such that the power of $X_k$ in that monomial is $< q - 1$. Then the sum over $X_k$ vanishes for this monomial. So each monomial contributes $0$ to the sum.
1650
- \end{proof}
1651
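- As a quick check of the power sum fact used above, take $q = 3$: in $\F_3$ we have
- \[
-   \sum_{x \in \F_3} x^0 = 3 = 0,\quad \sum_{x \in \F_3} x^1 = 0 + 1 + 2 = 0,\quad \sum_{x \in \F_3} x^2 = 0 + 1 + 4 = 2 = -1.
- \]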
-
1652
- We can use Alon's combinatorial Nullstellensatz to effortlessly prove some of our previous theorems.
1653
- \begin{thm}[Cauchy--Davenport theorem]\index{Cauchy--Davenport theorem}
1654
- Let $p$ be a prime and $A, B \subseteq \Z_p$ be non-empty subsets with $|A| + |B| \leq p + 1$. Then $|A + B| \geq |A| + |B| - 1$.
1655
- \end{thm}
1656
-
1657
- \begin{proof}
1658
- Suppose for contradiction that $A + B \subseteq C \subseteq \Z_p$, and $|C| = |A| + |B| - 2$. Let's come up with a polynomial that encodes the fact that $C$ contains the sum $A + B$. We let
1659
- \[
1660
- f(X, Y) = \prod_{c \in C} (X + Y - c).
1661
- \]
1662
- Then $f$ vanishes on $A \times B$, and $\deg f = |C|$.
1663
-
1664
- To apply the theorem, we check that the coefficient of $X^{|A| - 1} Y^{|B| - 1}$ is $\binom{|C|}{|A| - 1}$, which is non-zero in $\Z_p$, since $|C| < p$. This contradicts Alon's combinatorial Nullstellensatz.
1665
- \end{proof}
1666
-
1667
- We can also use this to prove Erd\"os--Ginzburg--Ziv again.
1668
- \begin{thm}[Erd\"os--Ginzburg--Ziv]
1669
- Let $p$ be a prime and $a_1, \ldots, a_{2p - 1} \in \Z_p$. Then there exists $I \in [2p - 1]^{(p)}$ such that
1670
- \[
1671
- \sum_{i \in I} a_i = 0\in \Z_p.
1672
- \]
1673
- \end{thm}
1674
-
1675
- \begin{proof}
1676
- Define
1677
- \begin{align*}
1678
- f_1(X_1, \ldots, X_{2p - 1}) &= \sum_{i = 1}^{2p - 1} X_i^{p - 1}.\\
1679
- f_2(X_1, \ldots, X_{2p - 1}) &= \sum_{i = 1}^{2p - 1} a_i X_i^{p - 1}.
1680
- \end{align*}
1681
- Then $\deg f_1 + \deg f_2 = 2(p - 1) < 2p - 1$, so by Chevalley's theorem there cannot be exactly one common zero. But $0$ is a common zero. So there must be another, say $x \not= 0$. Let $I = \{i : x_i \not= 0\}$. Since $x_i^{p - 1} = 1$ precisely when $x_i \not= 0$, the condition $f_1(x) = 0$ says $p \mid |I|$, which together with $0 < |I| \leq 2p - 1$ forces $|I| = p$; and $f_2(x) = 0$ then says $\sum_{i \in I} a_i = 0$.
1682
- \end{proof}
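-
- A small case to illustrate what the theorem asserts:
- \begin{eg}
- Take $p = 3$ and $(a_1, \ldots, a_5) = (0, 0, 1, 1, 2)$ in $\Z_3$. Then $I = \{1, 3, 5\}$ works, since $a_1 + a_3 + a_5 = 0 + 1 + 2 = 0$ in $\Z_3$.
- \end{eg}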
1683
-
1684
- We can also consider restricted sums\index{restricted sum}\index{$\overset{\cdot}{+}$}. We set
1685
- \[
1686
- A \overset{\cdot}{+} B = \{a + b: a \in A, b \in B, a \not= b\}.
1687
- \]
1688
- \begin{eg}
1689
- If $n \not= m$, then
1690
- \begin{align*}
1691
- [n] \overset{\cdot}{+} [m] &= \{3, 4, \ldots, m + n\}\\
1692
- [n] \overset{\cdot}{+} [n] &= \{3, 4, \ldots, 2n - 1\}
1693
- \end{align*}
1694
- \end{eg}
1695
- From this example, we see that if $|A| \geq 2$, then $|A \overset{\cdot}{+} A|$ can be as small as $2|A| - 3$. In 1964, Erd\"os and Heilbronn conjectured that this is extremal:
1696
- \begin{conjecture}[Erd\"os--Heilbronn, 1964]
1697
- If $2|A|\leq p + 3$, then $|A \overset{\cdot}{+} A| \geq 2|A| - 3$.
1698
- \end{conjecture}
1699
- This remained open for $30$ years, and was proved by Dias da Silva and Hamidoune. A much simpler proof was given by Alon, Nathanson and Ruzsa in 1996.
1700
- \begin{thm}
1701
- Let $A, B \subseteq \Z_p$ be such that $2 \leq |A| < |B|$ and $|A| + |B| \leq p + 2$. Then $|A \overset{\cdot}{+} B| \geq |A| + |B| - 2$.
1702
- \end{thm}
1703
- The above example shows we cannot do better.
1704
-
1705
- \begin{proof}
1706
- Suppose not. Define
1707
- \[
1708
- f(X, Y) = (X - Y) \prod_{c \in C} (X + Y - c),
1709
- \]
1710
- where $A \overset{\cdot}{+} B \subseteq C \subseteq \Z_p$ and $|C| = |A| + |B| - 3$.
1711
-
1712
- Then $\deg f = |A| + |B| - 2$, and the coefficient of $X^{|A| - 1} Y^{|B| - 1}$ is
1713
- \[
1714
- \binom{|A| + |B| - 3}{|A| - 2} - \binom{|A| + |B| - 3}{|A| - 1} \not= 0.
1715
- \]
1716
- Hence by Alon's combinatorial Nullstellensatz, $f$ cannot vanish identically on $A \times B$. But it does: if $a \in A$, $b \in B$ and $a \not= b$ then $a + b \in C$, while if $a = b$ the factor $X - Y$ vanishes. This is a contradiction.
1717
- \end{proof}
1718
-
1719
- \begin{cor}[Erd\"os--Heilbronn conjecture]
1720
- If $A, B \subseteq \Z_p$, non-empty and $|A| + |B| \leq p + 3$, and $p$ is a prime, then $|A \overset{\cdot}{+} B| \geq |A| + |B| - 3$.
1721
- \end{cor}
1722
-
1723
- \begin{proof}
1724
- We may assume $3 \leq |A| \leq |B|$: if $|A| \leq 2$, then already $|A \overset{\cdot}{+} B| \geq |B| - 1 \geq |A| + |B| - 3$. Pick $a \in A$, and set $A' = A \setminus \{a\}$, so that $2 \leq |A'| < |B|$ and $|A'| + |B| \leq p + 2$. Then by the theorem,
1725
- \[
1726
- |A \overset{\cdot}{+} B| \geq |A' \overset{\cdot}{+} B| \geq |A'| + |B| - 2 = |A| + |B| - 3.\qedhere
1727
- \]
1728
- \end{proof}
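-
- The restricted bound is again attained by intervals:
- \begin{eg}
- In $\Z_7$, take $A = B = \{0, 1, 2\}$. The sums $a + b$ with $a \not= b$ are $0 + 1$, $0 + 2$ and $1 + 2$, so $A \overset{\cdot}{+} A = \{1, 2, 3\}$ has size $3 = 2|A| - 3$.
- \end{eg}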
1729
-
1730
- Now consider the following problem: we have a circular table with $2n + 1$ seats, labelled by $\Z_{2n + 1}$. The host sits at one seat and invites $n$ couples, and the host, being a terrible person, wants the $i$th couple to be seated a distance $d_i$ apart, for some prescribed $1 \leq d_i \leq n$. Can this be done?
1731
-
1732
- \begin{thm}
1733
- If $2n + 1$ is a prime, then this can be done.
1734
- \end{thm}
1735
-
1736
- \begin{proof}
1737
- We may wlog assume the host is at $0$. Write $p = 2n + 1$. We want to partition $\Z_p \setminus \{0\} = \Z_p^\times$ into $n$ pairs $\{x_i, x_i + d_i\}$. Consider the polynomial ring $\Z_p[X_1, \ldots, X_n] = \Z_p[X]$. We define
1738
- \[
1739
- f(X) = \prod_i X_i (X_i + d_i) \prod_{i < j} (X_i - X_j)(X_i + d_i - X_j) (X_i - X_j - d_j)(X_i + d_i - X_j - d_j).
1740
- \]
1741
- We want to show that $f$ is not identically zero on $\Z_p^n$: a point $x$ with $f(x) \not= 0$ gives $n$ pairwise disjoint pairs $\{x_i, x_i + d_i\}$ avoiding $0$, as required.
1742
-
1743
- First of all, we have
1744
- \[
1745
- \deg f = 4 \binom{n}{2} + 2n = 2n^2.
1746
- \]
1747
- So the monomial $X_1^{2n} \cdots X_n^{2n}$ has total degree exactly $\deg f$, and its coefficient in $f$ is therefore the same as its coefficient in the product of the leading terms, namely
1748
- \[
1749
- \prod X_i^2 \prod_{i < j} (X_i - X_j)^4 = \prod X_i^2 \prod_{i \not= j} (X_i - X_j)^2 = \prod X_i^{2n} \prod_{i \not= j} \left(1 - \frac{X_i}{X_j}\right)^2.
1750
- \]
1751
- Thus, we are looking for the constant term in
1752
- \[
1753
- \prod_{i \not= j} \left(1 - \frac{X_i}{ X_j}\right)^2.
1754
- \]
1755
- By a question on the example sheet, this is
1756
- \[
1757
- \binom{2n}{2,2,\ldots, 2} \not= 0\text{ in }\Z_p.\qedhere
1758
- \]
1759
- \end{proof}
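-
- Here is the smallest non-trivial instance of the seating problem:
- \begin{eg}
- Take $n = 2$, so $p = 5$, with prescribed distances $d_1 = 1$ and $d_2 = 2$. The pairs $\{2, 3\}$ (distance $1$) and $\{4, 1\}$ (distance $2$, since $4 + 2 = 1$ in $\Z_5$) partition $\Z_5 \setminus \{0\}$, so the couples can be seated as required with the host at $0$.
- \end{eg}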
1760
-
1761
- Our final example is as follows: suppose we are in $\Z_p$, and $a_1, \ldots, a_p$ and $c_1, \ldots, c_p$ are enumerations of the elements, and $b_i = c_i - a_i$. Then clearly we have $\sum b_i = 0$. Is the converse true? The answer is yes!
1762
- \begin{thm}
1763
- If $b_1, \ldots, b_p \in \Z_p$ are such that $\sum b_i = 0$, then there exist enumerations $a_1, \ldots, a_p$ and $c_1, \ldots, c_p$ of the elements of $\Z_p$ such that for each $i$, we have
1764
- \[
1765
- a_i + b_i = c_i.
1766
- \]
1767
- \end{thm}
1768
-
1769
- \begin{proof}
1770
- It suffices to show that for all $(b_i)$, there are distinct $a_1, \ldots, a_{p - 1}$ such that $a_i + b_i \not= a_j + b_j$ for all $i \not= j$: taking $a_p$ to be the remaining element of $\Z_p$ and $c_i = a_i + b_i$, we then have $\sum_i c_i = \sum_i a_i + \sum_i b_i = \sum_i a_i$, so $c_p$ is forced to be the unique element of $\Z_p$ missing from $c_1, \ldots, c_{p - 1}$, and the $c_i$ enumerate $\Z_p$. Consider the polynomial
1771
- \[
1772
- \prod_{i < j} (X_i - X_j)(X_i + b_i - X_j - b_j).
1773
- \]
1774
- The degree is
1775
- \[
1776
- 2 \binom{p - 1}{2} = (p - 1)(p - 2).
1777
- \]
1778
- We then inspect the coefficient of $X_1^{p - 2} \cdots X_{p - 1}^{p - 2}$; checking that this is non-zero is the same computation as above, with the constant term now being $(p - 1)! \equiv -1 \pmod p$ by Wilson's theorem.
1779
- \end{proof}
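-
- Again, a tiny example:
- \begin{eg}
- In $\Z_3$, take $(b_1, b_2, b_3) = (0, 1, 2)$, which sums to $0$. Then $(a_1, a_2, a_3) = (0, 1, 2)$ gives $(c_1, c_2, c_3) = (0, 2, 1)$, and both the $a_i$ and the $c_i$ enumerate $\Z_3$.
- \end{eg}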
1780
-
1781
- \printindex
1782
- \end{document}
 
books/cam/III_M/differential_geometry.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/extremal_graph_theory.tex DELETED
@@ -1,1529 +0,0 @@
1
- \documentclass[a4paper]{article}
2
-
3
- \def\npart {III}
4
- \def\nterm {Michaelmas}
5
- \def\nyear {2017}
6
- \def\nlecturer {A.\ G.\ Thomason}
7
- \def\ncourse {Extremal Graph Theory}
8
-
9
- \input{header}
10
-
11
- \newcommand\ind{\mathrm{ind}}
12
-
13
- \renewcommand\ex{\mathrm{ex}}
14
- \begin{document}
15
- \maketitle
16
- {\small
17
- \setlength{\parindent}{0em}
18
- \setlength{\parskip}{1em}
19
- Tur\'an's theorem, giving the maximum size of a graph that contains no complete $r$-vertex subgraph, is an example of an extremal graph theorem. Extremal graph theory is an umbrella title for the study of how graph and hypergraph properties depend on the values of parameters. This course builds on the material introduced in the Part II Graph Theory course, which includes Tur\'an's theorem and also the Erd\"os--Stone theorem.
20
-
21
- The first few lectures will cover the Erd\"os--Stone theorem and stability. Then we shall treat Szemer\'edi's Regularity Lemma, with some applications, such as to hereditary properties. Subsequent material, depending on available time, might include: hypergraph extensions, the flag algebra method of Razborov, graph containers and applications.
22
-
23
- \subsubsection*{Pre-requisites}
24
- A knowledge of the basic concepts, techniques and results of graph theory, as afforded by the Part II Graph Theory course, will be assumed. This includes Tur\'an's theorem, Ramsey's theorem, Hall's theorem and so on, together with applications of elementary probability.
25
- }
26
- \tableofcontents
27
-
28
- \section{The \texorpdfstring{Erd\"os--Stone}{Erdos--Stone} theorem}
29
- The starting point of extremal graph theory is perhaps Tur\'an's theorem, which you hopefully learnt from the IID Graph Theory course. To state the theorem, we need the following preliminary definition:
30
- \begin{defi}[Tur\'an graph]\index{Tur\'an graph}
31
- The \emph{Tur\'an graph} \term{$T_r(n)$} is the complete $r$-partite graph on $n$ vertices with class sizes $\lfloor n/r\rfloor$ or $\lceil n/r\rceil$. We write $t_r(n)$ for the number of edges in $T_r(n)$.\index{$t_r(n)$}
32
- \end{defi}
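-
- To see the numbers in a small case:
- \begin{eg}
- $T_3(7)$ is the complete $3$-partite graph with class sizes $3, 2, 2$, so $t_3(7) = 3 \cdot 2 + 3 \cdot 2 + 2 \cdot 2 = 16$.
- \end{eg}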
33
-
34
- The theorem then says
35
- \begin{thm}[Tur\'an's theorem]\index{Tur\'an's theorem}
36
- If $G$ is a graph with $|G| = n$, $e(G) \geq t_r(n)$ and $G \not \supseteq K_{r+1}$, then $G = T_r(n)$.
37
- \end{thm}
38
- This is an example of an \emph{extremal theorem}. More generally, given a fixed graph $F$, we seek
39
- \[
40
- \ex(n, F) = \max \{e(G): |G| = n, G \not\supseteq F\}.
41
- \]
42
- Tur\'an's theorem tells us $\ex(n, K_{r + 1}) = t_r(n)$. We cannot find a nice expression for the latter number, but we have
43
- \[
44
- \ex(n, K_{r + 1}) = t_r(n) \approx \left(1 - \frac{1}{r}\right)\binom{n}{2}.
45
- \]
46
- Tur\'an's theorem is a rather special case. First of all, we know the exact value of $\ex(n, F)$. Moreover, there is a unique \term{extremal graph} realizing the bound. Neither of these properties extends to other choices of $F$.
47
-
48
- By definition, if $e(G) > \ex(n, K_{r + 1})$, then $G$ contains a $K_{r + 1}$. The Erd\"os--Stone theorem tells us that as long as $|G| = n$ is big enough, this condition implies $G$ contains a much larger graph than $K_{r + 1}$.
49
-
50
- \begin{notation}\index{$K_r(t)$}
51
- We denote by $K_r(t)$ the complete $r$-partite graph with $t$ vertices in each class.
52
- \end{notation}
53
- So $K_r(1) = K_r$ and $K_r(t) = T_r(rt)$.
54
-
55
- \begin{thm}[Erd\"os--Stone, 1946]\index{Erd\"os--Stone theorem}
56
- Let $r \geq 1$ be an integer and $\varepsilon > 0$. Then there exists $d = d(r, \varepsilon)$ and $n_0 = n_0(r, \varepsilon)$ such that if $|G| = n \geq n_0$ and
57
- \[
58
- e(G) \geq \left(1 - \frac{1}{r} + \varepsilon \right) \binom{n}{2},
59
- \]
60
- then $G \supseteq K_{r + 1}(t)$, where $t = \lfloor d \log n\rfloor$.
61
- \end{thm}
62
- Note that we can remove $n_0$ from the statement simply by reducing $d$, since for sufficiently small $d$, whenever $n < n_0$, we have $\lfloor d \log n\rfloor = 0$.
63
-
64
- One corollary of the theorem, and a good way to think about it, is that given numbers $r, \varepsilon, t$, whenever $|G| = n$ is sufficiently large, the inequality $e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2}$ implies $G \supseteq K_{r + 1}(t)$.
65
-
66
- To prove the Erd\"os--Stone theorem, a natural strategy is to try to throw away vertices of small degree, and so that we can bound the \emph{minimal degree} of the graph instead of the total number of edges. We will make use of the following lemma to do so:
67
- \begin{lemma}
68
- Let $c, \varepsilon > 0$. Then there exists $n_1 = n_1(c, \varepsilon)$ such that if $|G| = n \geq n_1$ and $e(G) \geq (c + \varepsilon) \binom{n}{2}$, then $G$ has a subgraph $H$ where $\delta(H) \geq c |H|$ and $|H| \geq \sqrt{\varepsilon} n$.
69
- \end{lemma}
70
-
71
- \begin{proof}
72
- The idea is that we can keep removing the vertex of smallest degree, and then we must eventually get the $H$ we want. Suppose this doesn't give us a suitable graph even by the time we are down to $\lfloor\sqrt{\varepsilon}n\rfloor$ vertices. That means we can find a sequence
73
- \[
74
- G = G_n \supseteq G_{n - 1} \supseteq G_{n - 2} \supseteq \cdots \supseteq G_s,
75
- \]
76
- where $s = \lfloor \varepsilon^{1/2}n \rfloor$, $|G_j| = j$ and the vertex in $G_j \setminus G_{j - 1}$ has degree $< cj$ in $G_j$.
77
-
78
- We can then calculate
79
- \begin{align*}
80
- e(G_s) &> (c + \varepsilon) \binom{n}{2} - c \sum_{j = s + 1}^n j \\
81
- &= (c + \varepsilon) \binom{n}{2} - c \left\{\binom{n+1}{2} - \binom{s + 1}{2}\right\} \\
82
- &\sim (1 + c)\frac{\varepsilon n^2}{2}
83
- \end{align*}
84
- as $n \to \infty$, since $s = \lfloor \varepsilon^{1/2} n\rfloor$. In particular, for $n$ large this exceeds $\binom{s}{2} \leq \frac{\varepsilon n^2}{2}$. But $G_s$ only has $s$ vertices, so this is impossible.
85
- \end{proof}
86
-
87
- Using this, we can reduce the Erd\"os--Stone theorem to a version that talks about the minimum degree instead.
88
- \begin{lemma}
89
- Let $r \geq 1$ be an integer and $\varepsilon > 0$. Then there exists a $d_1 = d_1(r, \varepsilon)$ and $n_2 = n_2(r, \varepsilon)$ such that if $|G| = n \geq n_2$ and
90
- \[
91
- \delta(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right)n,
92
- \]
93
- then $G \supseteq K_{r + 1}(t)$, where $t = \lfloor d_1 \log n\rfloor$.
94
- \end{lemma}
95
- We first see how this implies the Erd\"os--Stone theorem:
96
-
97
- \begin{proof}[Proof of Erd\"os--Stone theorem]
98
- Provided $n_0$ is large, say $n_0 > n_1\left(1 - \frac{1}{r} + \frac{\varepsilon}{2}, \frac{\varepsilon}{2}\right)$, we can apply the first lemma to $G$ to obtain a subgraph $H \subseteq G$ where $|H| > \sqrt{\frac{\varepsilon}{2}} n$, and $\delta(H) \geq \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right) |H|$.
99
-
100
- We can then apply the second lemma, as long as $\sqrt{\frac{\varepsilon}{2}} n$ is big enough, and obtain $K_{r + 1}(t') \subseteq H \subseteq G$ with $t' = \left\lfloor d_1(r, \varepsilon/2) \log \left(\sqrt{\frac{\varepsilon}{2}} n\right)\right\rfloor$, which is at least $\lfloor d \log n \rfloor$ for a suitable choice of $d = d(r, \varepsilon)$ and $n$ large.
101
- \end{proof}
102
-
103
- We can now prove the lemma.
104
-
105
- \begin{proof}[Proof of lemma]
106
- We proceed by induction on $r$. If $r = 0$ or $\varepsilon \geq \frac{1}{r}$, the theorem is trivial. Otherwise, by the induction hypothesis, we may assume $G \supseteq K_r(T)$ for
107
- \[
108
- T = \left\lfloor \frac{2t}{\varepsilon r}\right\rfloor.
109
- \]
110
- Call it $K = K_r(T)$. This is always possible, as long as
111
- \[
112
- d_1(r, \varepsilon) < \frac{\varepsilon r}{2} d_1\left(r - 1, \frac{1}{r(r - 1)}\right).
113
- \]
114
- The exact form is not important. The crucial part is that $\frac{1}{r(r - 1)} = \frac{1}{r - 1} - \frac{1}{r}$, which is how we chose the $\varepsilon$ to put into $d_1(r - 1, \varepsilon)$.
115
-
116
- Let $U$ be the set of vertices in $G - K$ having at least $\left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right)|K|$ neighbours in $K$. Calculating
117
- \[
118
- \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right) |K| = \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right) rT = (r - 1)T + \frac{\varepsilon r}{2}T \geq (r - 1)T + t,
119
- \]
120
- we see that every element in $U$ is joined to at least $t$ vertices in each class, and hence is joined to some $K_r(t) \subseteq K$.
121
-
122
- So we have to show that $|U|$ is large. If so, then by a pigeonhole principle argument, it follows that we can always find enough vertices in $U$ so that adding them to $K$ gives a $K_{r + 1}(t)$.
123
- \begin{claim}
124
- There is some $c > 0$ (depending on $r$ and $\varepsilon$) such that for $n$ sufficiently large, we have
125
- \[
126
- |U| \geq cn.
127
- \]
128
- \end{claim}
129
-
130
- The argument to establish these kinds of bounds is standard, and will be used repeatedly. We write $e(K, G - K)$ for the number of edges between $K$ and $G - K$. By the minimum degree condition, each vertex of $K$ sends at least $\left(1 - \frac{1}{r} + \varepsilon\right)n - |K|$ edges to $G - K$. Then we have two inequalities
131
- \begin{align*}
132
- |K| \left\{\left(1 - \frac{1}{r} + \varepsilon\right)n - |K|\right\} &\leq e(K, G - K) \\
133
- &\leq |U||K| + (n - |U|) \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right)|K|.
134
- \end{align*}
135
- Rearranging, this tells us
136
- \[
137
- \frac{\varepsilon n}{2} - |K| \leq |U| \left(\frac{1}{r} - \frac{\varepsilon}{2}\right).
138
- \]
139
- Now note that $|K| = rT = O(\log n)$, so for large $n$, the $|K|$ term on the left is negligible, and it follows that $|U| \geq cn$ for some constant $c = c(r, \varepsilon) > 0$.
140
-
141
- We now want to calculate the number of $K_r(t)$'s in $K$. To do so, we use the simple inequality
142
- \[
143
- \binom{n}{k} \leq \left(\frac{e n}{k}\right)^k.
144
- \]
145
- Then we have
146
- \[
147
- \# K_r(t) = \binom{T}{t}^r \leq \left(\frac{eT}{t}\right)^{rt} \leq \left(\frac{3e}{\varepsilon r}\right)^{rt} \leq \left(\frac{3e}{\varepsilon r}\right)^{rd\log n} = n^{rd \log (3e/\varepsilon r)}% \leq \frac{\varepsilon r n}{3t} \leq \frac{|U|}{t},
148
- \]
149
- Now if we pick $d$ sufficiently small, then this tells us $\#K_r(t)$ grows sublinearly with $n$. Since $|U|$ grows linearly, and $t$ grows logarithmically, it follows that for $n$ large enough, we have
150
- \[
151
- |U| \geq t\cdot \# K_r(t).
152
- \]
153
- Then by the pigeonhole principle, there must be a set $W \subseteq U$ of size $t$ joined to the same $K_r(t)$, giving a $K_{r + 1}(t)$.
154
- \end{proof}
155
-
156
- Erd\"os and Simonovits noticed that Erd\"os--Stone allows us to find $\ex(n, F)$ asymptotically for all $F$.
157
-
158
- \begin{thm}[Erd\"os--Simonovits]
159
- Let $F$ be a fixed graph with chromatic number $\chi(F) = r + 1$. Then
160
- \[
161
- \lim_{n \to \infty} \frac{\ex(n, F)}{\binom{n}{2}} = 1 - \frac{1}{r}.
162
- \]
163
- \end{thm}
164
-
165
- \begin{proof}
166
- Since $\chi(F) = r + 1$, we know $F$ cannot be embedded in an $r$-partite graph. So in particular, $F \not\subseteq T_r(n)$. So
167
- \[
168
- \ex(n, F) \geq t_r(n) \geq \left(1 - \frac{1}{r}\right)\binom{n}{2}.
169
- \]
170
- On the other hand, given any $\varepsilon > 0$, if $|G| = n$ and
171
- \[
172
- e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2},
173
- \]
174
- then by the Erd\"os--Stone theorem, we have $G \supseteq K_{r + 1}(|F|) \supseteq F$ provided $n$ is large. So we know that for every $\varepsilon > 0$, we have
175
- \[
176
- \limsup \frac{\ex(n, F)}{\binom{n}{2}} \leq 1 - \frac{1}{r} + \varepsilon.
177
- \]
178
- So we are done.
179
- \end{proof}
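-
- For a concrete instance:
- \begin{eg}
- The Petersen graph $P$ has $\chi(P) = 3$, so the theorem gives $\ex(n, P) = \left(\frac{1}{2} + o(1)\right) \binom{n}{2}$.
- \end{eg}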
180
- If $r > 1$, then this gives us a genuine asymptotic expression for $\ex(n, F)$. However, if $r = 1$, i.e.\ $F$ is a bipartite graph, then this only tells us $\ex(n, F) = o\left(\binom{n}{2}\right)$, but doesn't tell us about the true asymptotic behaviour.
181
-
182
- To end the chapter, we show that $t \sim \log n$ is indeed the best we can do in the Erd\"os--Stone theorem, by actually constructing some graphs.
183
- \begin{thm}
184
- Given $r \in \N$, there exists $\varepsilon_r > 0$ such that if $0 < \varepsilon < \varepsilon_r$, then there exists $n_3(r, \varepsilon)$ so that if $n > n_3$, there exists a graph $G$ of order $n$ such that
185
- \[
186
- e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2}
187
- \]
188
- but $K_{r + 1}(t) \not\subseteq G$, where
189
- \[
190
- t = \left\lceil \frac{3 \log n}{\log 1/\varepsilon}\right\rceil.
191
- \]
192
- \end{thm}
193
- So this tells us we cannot get better than $t \sim \log n$, and this gives us some bound on $d(r, \varepsilon)$.
194
-
195
- \begin{proof}
196
- Start with the Tur\'an graph $T_r(n)$. Let $m = \left\lceil \frac{n}{r} \right\rceil$. Then there is a class $W$ of $T_r(n)$ of size $m$. The strategy is to add $\varepsilon \binom{n}{2}$ edges inside $W$ to obtain $G$ such that $G[W]$ (the subgraph of $G$ induced by $W$) does not contain a $K_2(t)$. It then follows that $G \not\supseteq K_{r + 1}(t)$: each of the $r - 1$ Tur\'an classes other than $W$ is an independent set, so it meets at most one class of any copy of $K_{r + 1}(t)$, forcing two classes of the copy to lie inside $W$ and give a $K_2(t)$ there. But also
197
- \[
198
- e(G) \geq t_r(n) + \varepsilon \binom{n}{2} \geq \left(1 - \frac{1}{r} +\varepsilon \right) \binom{n}{2},
199
- \]
200
- as desired.
201
-
202
- To see that such an addition is possible, we choose edges inside $W$ independently with probability $p$, to be determined later. Let $X$ be the number of edges chosen, and $Y$ be the number of $K_2(t)$ created. If $\E[X - Y] > \varepsilon \binom{n}{2}$, then this means there is an actual choice of edges with $X - Y > \varepsilon \binom{n}{2}$. We then remove an edge from each $K_2(t)$ to leave a $K_2(t)$-free graph with at least $X - Y > \varepsilon \binom{n}{2}$ edges.
203
-
204
- So we want to actually compute $\E[X - Y]$. Seeing that asymptotically, $m$ is much larger than $t$, we have
205
- \begin{align*}
206
- \E[X - Y] &= \E[X] - \E[Y] \\
207
- &= p\binom{m}{2} - \frac{1}{2}\binom{m}{t} \binom{m - t}{t} p^{t^2}\\
208
- &\sim \frac{1}{2} pm^2 - \frac{1}{2} m^{2t} p^{t^2}\\
209
- &= \frac{1}{2} pm^2 (1 - m^{2t - 2} p^{t^2 - 1})\\
210
- &= \frac{1}{2} pm^2 (1 - (m^2 p^{t + 1})^{t - 1}).
211
- \end{align*}
212
- We pick $p = 3 \varepsilon r^2$ and $\varepsilon_r = (3r^2)^{-6}$. Then $p < \varepsilon^{5/6}$, and
213
- \[
214
- m^2 p^{t + 1} < m^2 \varepsilon^{\frac{5}{6} (t + 1)} \leq n^2 (\varepsilon^t)^{5/6} \leq n^2 n^{-5/2} = n^{-1/2} < \frac{1}{2},
215
- \]
216
- where we used $\varepsilon^t \leq n^{-3}$, which follows from the choice of $t$. Hence, for $n$ large (so that also $t \geq 2$ and the bracketed factor above is at least $\frac{1}{2}$), we find that
217
- \[
218
- \E[X - Y] \geq \frac{1}{4} p m^2 > \varepsilon \binom{n}{2}.\qedhere
219
- \]
220
- \end{proof}
221
-
222
- Let's summarize what we have got so far. Let $t(n, r, \varepsilon)$ be the largest value of $t$ such that a graph of order $n$ with $\left(1 - \frac{1}{r} + \varepsilon \right) \binom{n}{2}$ edges is guaranteed to contain $K_{r + 1}(t)$. Then we know $t(n, r, \varepsilon)$ grows with $n$ like $\log n$. If we are keen, we can ask how $t$ depends on the other terms $r$ and $\varepsilon$. We just saw that
223
- \[
224
- t(n, r, \varepsilon) \leq \frac{3 \log n}{\log 1/\varepsilon}.
225
- \]
226
- So we see that the dependence on $\varepsilon$ is at most logarithmic, and in 1976, Bollob\'as, Erd\"os and Simonovits showed that
227
- \[
228
- t(n, r, \varepsilon) \geq c \frac{\log n}{r \log 1/\varepsilon}
229
- \]
230
- for some $c$. Thus, $t(n, r, \varepsilon)$ also grows (inverse) logarithmically with $\varepsilon$.
231
-
232
- Our original upper bound has no dependence on $r$, which is curious, while the Bollob\'as--Erd\"os--Simonovits lower bound does. In 1978, Chv\'atal and Szemer\'edi showed that
233
- \[
234
- t \geq \frac{1}{500} \frac{\log n}{\log 1/\varepsilon}
235
- \]
236
- if $n$ is large. So we know there actually is no dependence on $r$.
237
-
238
- We can also ask about the containment of less regular graphs than $K_r(t)$. In 1994, Bollob\'as and Kohayakawa adapted the proof of the Erd\"os--Stone theorem to show that there is a constant $c$ such that for any $0 < \gamma < 1$, if $e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2}$ and $n$ is large, then we can find a complete $(r + 1)$-partite subgraph with class sizes
239
- \[
240
- \left\lfloor c \gamma \frac{\log n}{r \log 1/\varepsilon}\right\rfloor,\quad \underbrace{\left\lfloor c \gamma \frac{\log n}{\log r}\right\rfloor, \ldots, \left\lfloor c \gamma \frac{\log n}{\log r}\right\rfloor}_{r - 1\text{ times}},\quad \left\lfloor c \varepsilon^{\frac{3}{2} - \frac{\gamma}{2}} n^{1 - \gamma}\right\rfloor.
241
- \]
242
- We might wonder if we can make similar statements for hypergraphs. It turns out all the analogous questions are open. Somehow graphs are much easier.
243
-
244
- \section{Stability}
245
- An extremal problem is \term{stable} if every near-optimal example looks close to some (unique) optimal example. Stability does hold for the problem of determining $\ex(n, F)$.
246
-
247
- \begin{thm}
248
- Let $t, r \geq 2$ be fixed, and suppose $|G| = n$ and $G \not\supseteq K_{r + 1}(t)$. If
249
- \[
250
- e(G) = \left(1 - \frac{1}{r} + o(1)\right) \binom{n}{2}.
251
- \]
252
- Then
253
- \begin{enumerate}
254
- \item There exists $T_r(n)$ on $V(G)$ with $|E(G) \Delta E(T_r(n))| = o(n^2)$.
255
- \item $G$ contains an $r$-partite subgraph with $\left(1 - \frac{1}{r} + o(1)\right) \binom{n}{2}$ edges.
256
- \item $G$ contains an $r$-partite subgraph with minimum degree $\left(1 - \frac{1}{r} + o(1)\right)n$.
257
- \end{enumerate}
258
- \end{thm}
259
-
260
- The outline of the proof is as follows: we first show that the three statements are equivalent, so that we only have to prove (ii). To prove (ii), we first use Erd\"os--Stone to find some $K_r(s)$ living inside $G$ (for some $s$), which is a good $r$-partite subgraph of $G$, but with not quite the right number of vertices. We now look at the remaining vertices. For each vertex $v$, find the $C_i$ such that $v$ is connected to as few vertices in $C_i$ as possible, and enlarge $C_i$ to include $v$ as well. After doing this, $C_i$ will have quite a few edges between its vertices, and we throw those away. The hope is that there are only $o(n^2)$ edges to throw away, so that we still have $\left(1 - \frac{1}{r} + o(1)\right) \binom{n}{2}$ edges left. It turns out that as long as we throw out some ``bad'' vertices first, we will be fine.
261
-
262
- \begin{proof}
263
- We first prove (ii), which looks the weakest. We will then argue that the others follow from this (and in fact they are equivalent).
264
-
265
- By Erd\"os--Stone, we can always find some $K_r(s) = K \subseteq G$ for some $s = s(n) \to \infty$. We can, of course, always choose $s(n)$ so that $s(n) \leq \log n$, which will be helpful for our future analysis. We let $C_i$ be the $i$th class of $K$.
266
-
267
- \begin{itemize}
268
- \item By throwing away $o(n)$ vertices, we may assume that
269
- \[
270
- \delta(G) \geq \left(1 - \frac{1}{r} + o(1)\right) n
271
- \]
272
- \item Let $X$ be the set of all vertices joined to at least $t$ vertices in each $C_i$. Note that there are $\binom{s}{t}^r$ many copies of $K_r(t)$ in $K$. So by the pigeonhole principle, we must have $|X| < t \binom{s}{t}^r = o(n)$, or else we can construct a $K_{r + 1}(t)$ inside $G$. We throw $X$ away as well.
273
- \item Let $Y$ be the set of vertices joined to fewer than $(r - 1) s - \frac{s}{2t} + t$ vertices of $K$. By our bound on $\delta(G)$, we certainly have
274
- \[
275
- e(K, G - K) \geq s(r - 1 + o(1))n.
276
- \]
277
- On the other hand, every remaining vertex of $G - K$ is joined to at most $(r-1)s + t$ vertices of $K$, since $X$ was thrown away. So we have
278
- \[
279
- ((r - 1)s + t)(n - |Y|) + |Y| \left((r - 1) s - \frac{s}{2t} + t\right) \geq e(K, G - K).
280
- \]
281
- So we deduce that $|Y| = o(n)$. Throw $Y$ away.
282
- \end{itemize}
283
- Let $V_i$ be the set of remaining vertices in $G \setminus K$ joined to fewer than $t$ vertices of $C_i$; every remaining vertex lies in some $V_i$, and we assign each to one such class arbitrarily. It is now enough to show that $e(G[V_i]) = o(n^2)$ for each $i$: we can then throw away the edges inside each $V_i$, together with the $O(n \log n)$ edges meeting $K$, to obtain an $r$-partite subgraph, losing only $o(n^2)$ edges.
284
-
285
- Suppose on the contrary that $e(G[V_j]) \geq \varepsilon n^2$, say, for some $j$. Then Erd\"os--Stone with $r = 1$ says we have $G[V_j] \supseteq K_2(t)$. Each vertex of this $K_2(t)$ has at least $s - \frac{s}{2t} + 1$ neighbours in each class $C_i$ with $i \not= j$, since it survived the removal of $Y$ and has fewer than $t$ neighbours in $C_j$. So the $K_2(t)$ has at least $s - 2t \left(\frac{s}{2t} - 1\right) > t$ common neighbours in each such $C_i$, giving $K_{r + 1}(t) \subseteq G$.
286
-
287
- It now remains to show that (i) and (iii) follow from (ii). The details are left for the example sheet, but we sketch out the rough argument. (ii) $\Rightarrow$ (i) is straightforward, since an $r$-partite graph with that many edges is already pretty close to being a Tur\'an graph.
288
-
289
- To deduce (iii), since we can assume $\delta(G) \geq \left(1 - \frac{1}{r} + o(1)\right) n$, we note that if (iii) did not hold, then we have $\varepsilon n$ vertices of degree $\leq \left(1 - \frac{1}{r} - \varepsilon\right)n$, and their removal leaves a graph of order $(1 - \varepsilon) n$ with at least
290
- \[
291
- \left(1 - \frac{1}{r} + o(1)\right)\binom{n}{2} - \varepsilon n \left(1 - \frac{1}{r} - \varepsilon\right)n > \left(1 - \frac{1}{r} + \varepsilon^2\right) \binom{(1 - \varepsilon)n}{2},
292
- \]
293
- which by Erd\"os--Stone would contain $K_{r + 1}(t)$. So we are done.
294
- \end{proof}
295
- %if we want this to succeed, we have to throw out some ``bad'' vertices before we start.
296
- %
297
- %\begin{proof}
298
- % Clearly $(i) \Rightarrow (ii)$, and also $(ii) \Rightarrow (i)$, since an $r$-partite graph with that many edges is already pretty close to a Tur\'an graph. This is made precise on the example sheet.
299
- %
300
- % It is also easy to see that $(iii) \Rightarrow (ii)$. Note too that we may jettison $o(n)$ vertices if we wish to. By doing that, we may assume that $\delta(G) \geq \left(1 - \frac{1}{r} + o(1)\right) n$. So $(ii) \Rightarrow (iii)$, for otherwise we have $\varepsilon n$ vertices of degree $\leq \left(1 - \frac{1}{r} - \varepsilon\right)n$, and their removal leaves a graph of order $(1 - \varepsilon) n$ with at least
301
- % \[
302
- % \left(1 - \frac{1}{r} + o(1)\right)\binom{n}{2} - \varepsilon n \left(1 - \frac{1}{r} - \varepsilon\right)n > \left(1 - \frac{1}{r} + \varepsilon^2\right) \binom{(1 - \varepsilon)n}{2},
303
- % \]
304
- % which by Erd\"os--Stone would contain $K_{r + 1}(t)$.
305
- %
306
- % So our three properties can be assumed to be equivalent, and (ii) is the weakest-looking statement. So we will prove $(ii)$
307
- %
308
- % Now we have $G \supseteq K = K_r(s)$ where $\log n \geq s = s(n) \to \infty$ by Erd\"os--Stone. Let $C_i$ be the $i$th class of $K$. Let $X$ be the set of vertices joined to at least $T$ in each $C_i$. There are $\binom{s}{t}^{r}$ many $K_r(t)$'s in $K$, and since $G \supseteq K_{r + 1}(t)$, we have $|X| < t \binom{s}{t}^r = o(n)$. Jettison $X$.
309
- %
310
- % Let $Y$ be the set of vertices joined to fewer than $(r - 1)S - \frac{s}{2t} + t$. Since $\delta(G) = \left(1 - \frac{1}{r} + o(1) \right) n$, we see
311
- % \[
312
- % e(K, G - K) \geq s(r - 1 + o(1))n.
313
- % \]
314
- % On the other hand, because $X$ has been jettisoned, we know
315
- % \[
316
- % ((r - 1)s + t)(n - |Y|) + |Y| \left((r - 1) s - \frac{s}{2t} + t\right) \geq e(K, G - K).
317
- % \]
318
- % So $|Y| = o(n)$. So we jettison $Y$.
319
- %
320
- % The remaining vertices are partitioned by letting $V_i$ be the set of vertices in $G - K$ joined to fewer than $t$ of $C_i$. Now it is enough to show that $e(G[V_i]) = o(n^2)$, since then $V_1, \ldots, V_r$ are the classes of our $r$-partite subgraph.
321
- %
322
- % Suppose on the contrary that $e(G[V_j]) \geq \varepsilon n^2$, say, for some $j$. Then Erd\"os--Stone with $r = 1$ says we have $G[V_j] \supseteq K_2(t)$. Each vertex of $K_2(t)$ has at least $s - \frac{s}{2t} + 1$ in each other $C_i$, for $i \not= j$. So we see that $K_2(t)$ has at least $s - 2t \left(\frac{s}{2t} - 1\right) > t$ common neighbours in each $C_i$, giving $K_{r + 1}(t) \subseteq G$.
323
- %\end{proof}
324
-
325
- \begin{cor}
326
- Let $\chi(F) = r + 1$, and let $G$ be extremal for $F$, i.e.\ $G \not\supseteq F$ and $e(G) = \ex(|G|, F)$. Then $\delta(G) = \left(1 - \frac{1}{r} + o(1)\right)n$.
327
- \end{cor}
328
-
329
- \begin{proof}
330
- By our asymptotic bound on $\ex(n, F)$, $\delta(G)$ certainly cannot exceed $\left(1 - \frac{1}{r} + o(1)\right)n$. Suppose the claimed lower bound fails; then for some fixed $\varepsilon > 0$ (and arbitrarily large $n$) there is a vertex $v \in G$ with $d(v) \leq \left(1 - \frac{1}{r} - \varepsilon\right) n$.
331
-
332
- We now apply version (iii) of the stability theorem to obtain an $r$-partite subgraph $H$ of $G$. We can certainly pick $|F|$ vertices in the same part of $H$, which are joined to $m = \left(1 - \frac{1}{r} + o(1)\right)n$ common neighbours. Form $G^*$ from $G - v$ by adding a new vertex $u$ joined to these $m$ vertices. Then $e(G^*) > e(G)$, since $m > d(v)$ for $n$ large, and so the maximality of $G$ entails $G^* \supseteq F$.
333
-
334
- Pick a copy of $F$ in $G^*$. This must involve $u$, since $G$, and hence $G - v$, does not contain a copy of $F$. The copy uses at most $|F| - 1$ of the $|F|$ chosen vertices, so some chosen vertex $x$ is not in it. But $x$ is joined to all $m$ common neighbours, and every neighbour of $u$ is one of these, so we can replace $u$ with $x$ to obtain a copy of $F$ in $G$, a contradiction.
335
- \end{proof}
336
-
337
- Sometimes, stability and bootstrapping can give exact results.
338
-
339
- \begin{eg}
340
- $\ex(n, C_5) = t_2(n)$ for $n$ large. In fact, $\ex(n, C_{2k + 1}) = t_2(n)$ if $n$ is large enough (depending on $k$).
341
- \end{eg}
342
-
343
- \begin{thm}[Simonovits]
344
- Let $F$ be $(r + 1)$-edge-critical\index{$r$-edge-critical}, i.e.\ $\chi(F) = r + 1$ but $\chi(F \setminus e) = r$ for every edge $e$ of $F$. Then for large $n$,
345
- \[
346
- \ex(n, F) = t_r(n).
347
- \]
348
- and the only extremal graph is $T_r(n)$.
349
- \end{thm}
350
- So we get a theorem like Tur\'an's theorem for large $n$.
351
-
352
- \begin{proof}
353
- Let $G$ be an extremal graph for $F$ and let $H$ be an $r$-partite subgraph with
354
- \[
355
- \delta(H) = \left(1 - \frac{1}{r} + o(1)\right)n.
356
- \]
357
- Note that $H$ necessarily has $r$ parts of size $\left(\frac{1}{r} + o(1)\right)n$ each. Assign each of the $o(n)$ vertices in $V(G) \setminus V(H)$ to a class where it has fewest neighbours.
358
-
359
- Suppose some vertex $v$ has at least $\varepsilon n$ neighbours in its own class. Then it has $\geq \varepsilon n$ neighbours in each class. Pick $\varepsilon n$ neighbours of $v$ in each class of $H$. These neighbours span a graph with at least $\left(1 -\frac{1}{r} + o(1)\right) \binom{r\varepsilon n}{2}$ edges. So by Erd\"os--Stone (or arguing directly), they span a $K_r(|F|)$, and hence a copy of $F - w$ for a vertex $w$ of $F$ with $\chi(F - w) = r$ (such a $w$ exists since $F$ is edge-critical). As $v$ is joined to all of these vertices, it can play the role of $w$. Hence $G \supseteq F$, contradiction.
360
-
361
- Thus, each vertex of $G$ has only $o(n)$ neighbours in its own class. So it is joined to all but $o(n)$ vertices in every other class.
362
-
363
- Suppose some class of $G$ contains an edge $xy$. Pick a set $Z$ of $|F|$ vertices in that class with $x, y \in Z$. Now $Z$ has $\left(\frac{1}{r} + o(1)\right)n$ common neighbours in each other class, so (by Erd\"os--Stone or directly) these common neighbours span a $K_{r - 1}(|F|)$. But together with $Z$, we have a $K_r(|F|)$ but with an edge added inside a class. But by our condition that $F \setminus e$ is $r$-partite for any $e$, this subgraph contains an $F$. This is a contradiction.
364
-
365
- So we conclude that no class of $G$ contains an edge. So $G$ itself is $r$-partite, but the $r$-partite graph with most edges is $T_r(n)$, which does not contain $F$. So $G = T_r(n)$.
366
- \end{proof}
367
-
368
- \section{Supersaturation}
369
- Suppose we have a graph $G$ with $e(G) > \ex(n, F)$. Then by definition, there is at least one copy of $F$ in $G$. But can we tell how many copies we have? It turns out this is not too difficult to answer, and in fact we can answer the question for any hypergraph.
370
-
371
- Recall that an \term{$\ell$-uniform hypergraph} is a pair $(V, E)$, where $E \subseteq V^{(\ell)}$. We can define the extremal function of a class of hypergraphs $\mathcal{F}$ by
372
- \[
373
- \ex(n, \mathcal{F}) = \max \{e(G): |G| = n, \text{$G$ contains no $f \in \mathcal{F}$}\}.
374
- \]
375
- Again we are interested in the limiting density
376
- \[
377
- \pi(\mathcal{F}) = \lim_{n \to \infty} \frac{\ex(n, \mathcal{F})}{\binom{n}{\ell}}.
378
- \]
379
- It is an easy exercise to show that this limit always exists. We computed it explicitly for graphs previously, but we do not need the Erd\"os--Stone theorem just to show that the limit exists.
380
-
381
- The basic theorem in supersaturation is
382
- \begin{thm}[Erd\"os--Simonovits]
383
- Let $H$ be some $\ell$-uniform hypergraph. Then for all $\varepsilon > 0$, there exists $\delta = \delta(H, \varepsilon) > 0$ such that every $\ell$-uniform hypergraph $G$ with $|G| = n$ and
384
- \[
385
- e(G) > (\pi(H) + \varepsilon) \binom{n}{\ell}
386
- \]
387
- contains at least $\lfloor \delta n^{|H|}\rfloor$ copies of $H$.
388
- \end{thm}
389
- Note that $n^{|H|}$ is approximately the number of ways to choose $|H|$ vertices from $n$, so it's the number of possible candidates for subgraphs $H$.
390
-
391
- \begin{proof}
392
- For each $m$-set $M \subseteq V(G)$, where $m$ is a constant (depending on $H$ and $\varepsilon$) to be chosen later, we let $G[M]$ be the sub-hypergraph induced by these vertices. Let the number of subsets $M$ with $e(G[M]) > \left(\pi(H) + \frac{\varepsilon}{2}\right) \binom{m}{\ell}$ be $\eta \binom{n}{m}$. Then we can estimate
393
- \begin{multline*}
394
- \left(\pi(H) + \varepsilon\right) \binom{n}{\ell} \leq e(G) = \frac{\sum_M e(G[M])}{\binom{n - \ell}{m - \ell}} \\
395
- \leq \frac{\eta \binom{n}{m} \binom{m}{\ell} + (1 - \eta) \binom{n}{m} \left(\pi(H) + \frac{\varepsilon}{2}\right) \binom{m}{\ell}}{\binom{n - \ell}{m - \ell}}.
396
- \end{multline*}
397
- Since $\binom{n}{m}\binom{m}{\ell} = \binom{n}{\ell}\binom{n - \ell}{m - \ell}$, dividing through shows that if $n > m$, then
398
- \[
399
- \pi(H) + \varepsilon \leq \eta + (1 - \eta) \left(\pi(H) + \frac{\varepsilon}{2}\right).
400
- \]
401
- So
402
- \[
403
- \eta \geq \frac{\frac{\varepsilon}{2}}{1 - \pi(H) - \frac{\varepsilon}{2}} > 0.
404
- \]
405
- The point is that it is positive, and that's all we care about.
406
-
407
- We pick $m$ large enough so that
408
- \[
409
- \ex(m, H) < \left(\pi(H) + \frac{\varepsilon}{2}\right) \binom{m}{\ell}.
410
- \]
411
- Then $H \subseteq G[M]$ for at least $\eta \binom{n}{m}$ choices of $M$. Hence $G$ contains at least
412
- \[
413
- \frac{\eta\binom{n}{m}}{\binom{n - |H|}{m - |H|}} = \frac{\eta\binom{n}{|H|}}{\binom{m}{|H|}}
414
- \]
415
- copies of $H$, and we are done. (Our results hold when $n$ is large enough, but we can choose $\delta$ small enough so that $\delta n^{|H|} < 1$ when $n$ is small)
416
- \end{proof}
417
-
418
- Let $k_p(G)$ be the number of copies of $K_p$ in $G$. Ramsey's theorem tells us $k_p(G) + k_p(\bar{G}) > 0$ if $|G|$ is large (where $\bar{G}$ is the complement of $G$). In the simplest case, where $p = 3$, it turns out there is an exact formula.
419
- \begin{thm}[Lorden, 1961]
420
- Let $G$ have degree sequence $d_1, \ldots, d_n$. Then
421
- \[
422
- k_3(G) + k_3(\bar{G}) = \binom{n}{3} - (n - 2) e(G) + \sum_{i = 1}^n \binom{d_i}{2}.
423
- \]
424
- \end{thm}
425
-
426
- \begin{proof}
427
- The number of paths of length $2$ in $G$ and $\bar{G}$ is precisely
428
- \[
429
- \sum_{i = 1}^n \left(\binom{d_i}{2} + \binom{n - 1 - d_i}{2}\right) = 2 \sum_{i = 1}^n \binom{d_i}{2} - 2 (n - 2) e(G) + 3 \binom{n}{3},
430
- \]
431
- since to find such a path we pick the middle vertex and then two of its edges of the same colour. A monochromatic (complete or empty) triple of vertices contains $3$ such paths; every other set of three vertices contains exactly $1$ such path. Hence
432
- \[
433
- \binom{n}{3} + 2 (k_3(G) + k_3(\bar{G})) = \text{number of paths of length $2$}.\qedhere
434
- \]
435
- \end{proof}
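-
- A quick check of the formula:
- \begin{eg}
- Let $G = C_5$. Then $n = 5$, $e(G) = 5$ and every degree is $2$, so the formula gives $k_3(G) + k_3(\bar{G}) = \binom{5}{3} - 3 \cdot 5 + 5 \binom{2}{2} = 10 - 15 + 5 = 0$. Indeed, both $C_5$ and its complement (another $5$-cycle) are triangle-free.
- \end{eg}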
436
- \begin{cor}[Goodman, 1959]
437
- We have
438
- \[
439
- k_3(G) + k_3(\bar{G}) \geq \frac{1}{24} n(n - 1)(n - 5).
440
- \]
441
- \end{cor}
442
-
443
- In particular, the Ramsey number of a triangle is at most $6$.
444
-
445
- \begin{proof}
446
- Let $m = e(G)$. Then
447
- \[
448
- k_3(G) + k_3(\bar{G}) \geq \binom{n}{3} - (n - 2) m + n \binom{2m/n}{2}. % Cauchy--Schwarz
449
- \]
450
- Then minimize over $m$.
451
- \end{proof}
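-
- In particular:
- \begin{eg}
- For $n = 6$ the bound reads $k_3(G) + k_3(\bar{G}) \geq \frac{6 \cdot 5 \cdot 1}{24} = \frac{5}{4}$, so every red/blue colouring of $K_6$ in fact contains at least \emph{two} monochromatic triangles.
- \end{eg}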
452
-
453
- This shows the minimum density of monochromatic $K_3$'s in a red/blue colouring of $K_n$ (for $n$ large) is $\sim \frac{1}{4}$. Now if we colour the edges independently at random, each red or blue with probability $\frac{1}{2}$, then $\frac{1}{4}$ is the probability that any given triangle is monochromatic. So the minimum density is achieved by a ``random colouring''. Recall also that the best bounds on the Ramsey numbers we have are obtained by random colourings. So we might think the best way of colouring, if we want to minimize the number of monochromatic cliques, is to do so randomly.
454
-
455
- However, this is not true. While we do not know the minimum density of monochromatic $K_4$'s in a red/blue colouring of $K_n$, it is known to be $< \frac{1}{33}$ (while $\frac{1}{32}$ is what we would expect from a random colouring). It is also known, by flag algebras, to be $> \frac{1}{35}$. So we are not very far off.
456
-
457
- \begin{cor}
458
- For $m = e(G)$ and $n = |G|$, we have
459
- \[
460
- k_3(G) \geq \frac{m}{3n}(4m - n^2).
461
- \]
462
- \end{cor}
463
-
464
- \begin{proof}
465
- By Lorden's theorem, we know
466
- \[
467
- k_3(G) + k_3(\bar{G}) = \binom{n}{3} - (n - 2) e(\bar{G}) + \sum\binom{\bar{d}_i}{2},
468
- \]
469
- where $\bar{d}_i$ is the degree sequence in $\bar{G}$. But
470
- \[
471
- 3 k_3(\bar{G}) \leq \sum \binom{\bar{d}_i}{2},
472
- \]
473
- since the sum counts the number of paths of length $2$ in $\bar{G}$, and each triangle of $\bar{G}$ contains three such paths. So we find that
474
- \[
475
- k_3(G) \geq \binom{n}{3} - (n - 2)\bar{m} + \frac{2}{3}n \binom{2 \bar{m}/n}{2},
476
- \]
477
- where $\bar{m} = \binom{n}{2} - m$; substituting and simplifying gives the claimed bound.
478
- \end{proof}
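-
- As an illustration of what this bound gives:
- \begin{eg}
- If $n$ is even and $e(G) = \frac{n^2}{4} + 1$, one more edge than $T_2(n)$, then the bound gives $k_3(G) \geq \frac{n^2/4 + 1}{3n} \cdot 4 > \frac{n}{3}$, so a single edge beyond the Tur\'an number already forces linearly many triangles.
- \end{eg}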
479
- Observe that equality is almost never attained. It is attained only for regular graphs in which no three vertices induce exactly one edge, i.e.\ with no induced subgraph looking like
480
- \begin{center}
481
- \begin{tikzpicture}
482
- \node [circ] at (0, 0) {};
483
- \node [circ] at (1, 0) {};
484
- \node [circ] at (0.5, 0.866) {};
485
-
486
- \draw (0, 0) -- (1, 0);
487
- \end{tikzpicture}
488
- \end{center}
489
- So non-adjacency is an equivalence relation, so the graph is complete multi-partite and regular. Thus it is $T_r(n)$ with $r \mid n$.
490
-
491
- \begin{thm}
492
- Let $G$ be a graph. For any graph $F$, let $i_F(G)$ be the number of induced copies of $F$ in $G$, i.e.\ the number of subsets $M \subseteq V(G)$ such that $G[M] \cong F$. So, for example, $i_{K_p}(G) = k_p(G)$.
493
-
494
- Define
495
- \[
496
- f(G) = \sum_F \alpha_F i_F(G),
497
- \]
498
- with the sum being over a finite collection of graphs $F$, each being complete multipartite, with $\alpha_F \in \R$ and $\alpha_F \geq 0$ if $F$ is not complete. Then amongst graphs of given order, $f(G)$ is maximized on a complete multi-partite graph. Moreover, if $\alpha_{\bar{K}_3} > 0$, then there are no other maxima.
499
- \end{thm}
500
-
501
- \begin{proof}
502
- We may suppose $\alpha_{\bar{K}_3} > 0$, because the case of $\alpha_{\bar{K}_3} = 0$ follows from a limit argument. Choose a graph $G$ maximizing $f$ and suppose $G$ is not complete multipartite. Then there exist non-adjacent vertices $x, y$ whose neighbourhoods $X, Y$ differ.
503
-
504
- There are four contributions to $i_F(G)$, coming from
505
- \begin{enumerate}
506
- \item $F$'s that contain both $x$ and $y$;
507
- \item $F$'s that contain $y$ but not $x$;
508
- \item $F$'s that contain $x$ but not $y$;
509
- \item $F$'s that contain neither $x$ nor $y$.
510
- \end{enumerate}
511
- We may assume that the contribution from (iii) $\geq$ (ii), and if they are equal, then $|X| \leq |Y|$.
512
-
513
- Consider what happens when we remove all edges between $y$ and $Y$, and add edges from $y$ to everything in $X$. Clearly (iii) and (iv) are unaffected.
514
- \begin{itemize}
515
- \item If (iii) $>$ (ii), then after this move, the contribution of (ii) becomes equal to the contribution of (iii) (which didn't change), and hence strictly increased.
516
-
517
- The graphs $F$ that contribute to (i) are not complete, so $\alpha_F \geq 0$. Moreover, since $F$ is complete multi-partite, it cannot contain a vertex in $X \Delta Y$. So making the move can only increase the number of graphs contributing to (i), and each contribution is non-negative.
518
- \item If (iii) $=$ (ii) and $|X| \leq |Y|$, then we make similar arguments. The contribution to (ii) is unchanged this time, and we know the contribution of (i) strictly increased, because the number of $\bar{K}_3$'s contributing to (i) is the number of points not connected to $x$ and $y$.
519
- \end{itemize}
520
- In both cases, the total sum increases, contradicting the maximality of $G$.
521
- %
522
- % coming from $F$'s that contain both $x, y$; contain $x$ but not $y$; $y$ but not $x$; or contain neither $x$ nor $y$. Moreover, the first contribution depends only on $X \cap Y$ and $V \setminus (X \cup Y)$, because $F$, being complete multi-partite, cannot contain $x, y$ and a vertex in $X \Delta Y$.
523
- %
524
- % Considering the contributions to $f(G)$ from the fourfold individual contributions,
525
- % \[
526
- % f(G) = h(X \cap Y, V - (X \cup Y)) + g(X) + g(Y) + C,
527
- % \]
528
- % where $g$ and $h$ are some functions and $C$ is independent of $X, Y$.
529
- %
530
- % Note that $h(A, B) \leq h(A', B')$ if $A \subseteq A'$ and $B \subseteq B'$, because if $F$ is of the first kind, it is not complete, and so $\alpha_F \geq 0$. Moreover, if $B \not= B'$, then $h(A, B) < h(A', B')$, because the contribution from $F = \bar{K_3}$ is $\alpha_{\bar{K}_3} |B|$, and $\alpha_{\bar{K}_3} > 0$.
531
- %
532
- % We may assume that $g(X) \geq g(Y)$ and if $g(X) = g(Y)$, we may assume $|X| \leq |Y|$. In particular, $X \not= X \cup Y$. Hence
533
- % \[
534
- % g(X) + h(X, V \setminus X) > g(Y) + h(X \cap Y, V \setminus (X \cup Y)).
535
- % \]
536
- % (we certainly have $\geq$, and in both cases, we can check that we have a strict gain)
537
- %
538
- % Now remove the edges between $y$ and $Y$, and add edges between $y$ and $X$ to get $G'$, and observe that
539
- % \[
540
- % f(G') = h(X, V - X) + 2 g(X) + C > f(G).
541
- % \]
542
- \end{proof}
543
- Perhaps that theorem seemed like a rather peculiar one to prove. However, it has some nice consequences. The following theorem relates $k_p(G)$ with $k_r(G)$ for different $p$ and $r$:
544
-
545
- \begin{thm}[Bollob\'as, 1976]
546
- Let $1 \leq p \leq r$, and for $0 \leq x \leq \binom{n}{p}$, let $\psi(x)$ be a maximal convex function lying below the points
547
- \[
548
- \{(k_p(T_q(n)), k_r(T_q(n))): q = r - 1, r, \ldots\} \cup \{(0, 0)\}.
549
- \]
550
- Let $G$ be a graph of order $n$. Then
551
- \[
552
- k_r(G) \geq \psi(k_p(G)).
553
- \]
554
- \end{thm}
555
-
556
- \begin{proof}
557
- Let $f(G) = k_p(G) - c k_r(G)$, where $c > 0$.
558
- \begin{claim}
559
- It is enough to show that $f$ is maximized on a Tur\'an graph for any $c$.
560
- \end{claim}
561
- Indeed, suppose we plot out the values of $(k_p(T_q(n)), k_r(T_q(n)))$:
562
- \begin{center}
563
- \begin{tikzpicture}
564
- \draw [->] (-1, 0) -- (6, 0) node [right] {$k_p(G)$};
565
- \draw [->] (0, 0) -- (0, 4) node [above] {$k_r(G)$};
566
-
567
- \draw [thick, mblue] (0, 0) node [circ] {} -- (1.5, 0) node [circ] {} node [below, black] {\small$k_p(T_{r - 1}(n))$} -- (3.2, 1.2) node [circ] {} -- (5, 3) node [circ] {};
568
-
569
- \draw [dashed] (3.2, 0) node [below] {\small$k_p(T_r(n))$} -- (3.2, 1.2) -- (0, 1.2) node [left] {\small$k_r(T_r(n))$};
570
- \draw [dashed] (5, 0) node [below] {\small$k_p(T_{r + 1}(n))$} -- (5, 3) -- (0, 3) node [left] {\small$k_r(T_{r + 1}(n))$};
571
- \end{tikzpicture}
572
- \end{center}
573
-
574
- If the theorem doesn't hold, then we can pick a $G$ such that $(k_p(G), k_r(G))$ lies strictly below the graph of $\psi$. Draw a straight line through this point which keeps all the plotted points strictly above it; since the point lies strictly below the convex curve, this can be done with some positive slope, say $\frac{1}{c}$. The intercept of this line on the $x$-axis is then $k_p(G) - c k_r(G) = f(G)$, while every Tur\'an point lies strictly above the line, so $f(G) > f(\text{any Tur\'an graph})$, a contradiction.
575
-
576
- Now the previous theorem immediately tells us $f$ is maximized on some complete multi-partite graph. Suppose this has $q$ classes, say of sizes $a_1 \leq a_2 \leq \cdots \leq a_q$. It is easy to verify $q \geq r - 1$. In fact, we may assume $q \geq r$, else the maximum is on a Tur\'an graph $T_{r - 1}(n)$.
577
-
578
- Then we can write
579
- \[
580
- f(G) = a_1 a_q A - c a_1 a_q B + C,
581
- \]
582
- where $A, B, C$ are rationals depending only on $a_2, \ldots, a_{q - 1}$ and $a_1 + a_q$ ($A$ and $B$ count the number of ways to pick a $K_p$ and $K_r$ respectively in a way that involves terms in both the first and last classes).
583
-
584
- We wlog assume $c$ is irrational. Hence $a_1 a_q A - c a_1 a_q B = a_1 a_q (A - cB) \not= 0$.
585
- \begin{itemize}
586
- \item If $A - cB < 0$, replace $a_1$ and $a_q$ by $0$ and $a_1 + a_q$. This would then increase $f$, which is impossible.
587
- \item If $A - cB > 0$ and $a_1 \leq a_q - 2$, then we can replace $a_1, a_q$ by $a_1 + 1, a_q - 1$ to increase $f$.
588
- \end{itemize}
589
- Hence $a_1 \geq a_q - 1$. So $G = T_q(n)$.
590
- \end{proof}
591
-
592
- %It was conjectured that the value of $\min \{k_3(G): e(G) = m, |G| = n\}$ is given by an $r$-partite graph, where $r$ is the minimum possible (subject to $e(G) = m$).
593
- %
594
- %The continuous envelope of the conjection in range $\frac{1}{2}$ to $\frac{2}{3}$ was proved by Fisher in 1989. The whole range was proved (in the limit $n \to \infty$) was proved by Razborov in 2007, who introduced the method fof flag algebras. The idea was to use a computer to find the best possible Cauchy--Schwarz inequality, using semi-definite programming. Later in 2008, Nikiforov did $k_4$'s directly. Finally, Reiher did $k_r$'s for all $r$ by another method.
595
- %
596
- %Finally, Liu, Pikhurko, Staden in 2017+ obtained the exact result for $k_3$'s (for $n$ large).
597
- %
598
- %It is an open problem to maximize the number of induced paths of length $3$. We don't even have a conjecture.
599
-
600
- \section{\texorpdfstring{Szemer\'edi's}{Szemeredi's} regularity lemma}
601
- Szemer\'edi's regularity lemma tells us that, given a very large graph, we can always equipartition it into pieces that are ``uniform'' in some sense. The lemma is arguably ``trivial'', but it also has many interesting consequences. To state the lemma, we need to know what we mean by ``uniform''.
602
-
603
- %A graph with the (large scale) property that all subsets have the same density can be regarded as ``pseudo-random'' in a quantifiable sense. This lies behind the spirit of Szemer\'edi's lemma.
604
-
605
- \begin{defi}[Density]\index{density}
606
- Let $U, W$ be disjoint subsets of the vertex set of some graph. The number of edges between $U$ and $W$ is denoted by \term{$e(U, W)$}, and the \emph{density} is
607
- \[
608
- d(U, W) = \frac{e(U, W)}{|U| |W|}.
609
- \]
610
- \end{defi}
611
- \begin{defi}[$\varepsilon$-uniform pair]\index{$\varepsilon$-uniform}
612
- Let $0 < \varepsilon < 1$. We say a pair $(U, W)$ is \emph{$\varepsilon$-uniform} if
613
- \[
614
- |d(U', W') - d(U, W)| < \varepsilon
615
- \]
616
- whenever $U' \subseteq U$, $W' \subseteq W$, and $|U'| \geq \varepsilon |U|$, $|W'| \geq \varepsilon |W|$.
617
- \end{defi}
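-
- Two extreme cases may help to get a feel for the definition:
- \begin{eg}
- If the bipartite graph between $U$ and $W$ is complete or empty, then $d(U', W') = d(U, W)$ for every choice of $U'$ and $W'$, so the pair is $\varepsilon$-uniform for every $\varepsilon > 0$. On the other hand, split $U$ and $W$ (of even size, say) into halves $U_1, U_2$ and $W_1, W_2$, and join $U_1$ completely to $W_1$ with no other edges. Then $d(U, W) = \frac{1}{4}$ but $d(U_1, W_1) = 1$, so the pair is not $\varepsilon$-uniform for any $\varepsilon \leq \frac{1}{2}$.
- \end{eg}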
618
-
619
- Note that it is necessary to impose some conditions on how small $U'$ and $W'$ can be. For example, if $|U'| = |W'| = 1$, then $d(U', W')$ is either $0$ or $1$. So we cannot have a sensible definition if we want to require the inequality to hold for arbitrary $U', W'$.
620
-
621
- But we might be worried that it is unnatural to use the same $\varepsilon$ for two different purposes. This is not something one should worry about. The Szemer\'edi regularity lemma is a fairly robust result, and everything goes through if we use different $\varepsilon$'s for the two different purposes. However, it is annoying to have to have many different $\varepsilon$'s floating around.
622
-
623
- Before we state and prove Szemer\'edi's regularity lemma, let's first try to understand why uniformity is good. The following is an elementary observation.
624
- \begin{lemma}
625
- Let $(U, W)$ be an $\varepsilon$-uniform pair with $d(U, W) = d$. Then
626
- \begin{align*}
627
- |\{u \in U: |\Gamma(u) \cap W| > (d - \varepsilon) |W|\}| &\geq (1 - \varepsilon)|U|\\
628
- |\{u \in U: |\Gamma(u) \cap W| < (d + \varepsilon) |W|\}| &\geq (1 - \varepsilon)|U|,
629
- \end{align*}
630
- where $\Gamma(u)$ is the set of neighbours of $u$.
631
- \end{lemma}
632
-
633
- \begin{proof}
634
- Let
635
- \[
636
- X = \{u \in U: |\Gamma(u) \cap W| \leq (d - \varepsilon)|W|\}.
637
- \]
638
- Then $e(X, W) \leq (d - \varepsilon) |X||W|$. So
639
- \[
640
- d(X, W) \leq d - \varepsilon = d(U, W) - \varepsilon.
641
- \]
642
- So it fails the uniformity condition. Since $W$ is definitely not small, we must have $|X| < \varepsilon |U|$.
643
-
644
- The other case is similar, or observe that the complementary bipartite graph between $U$ and $W$ has density $1 - d$ and is $\varepsilon$-uniform.
645
- \end{proof}
646
-
647
- What is good about $\varepsilon$-uniform pairs is that if we have enough of them, then we can construct essentially any subgraph we like. Later, Szemer\'edi's regularity lemma says any graph large enough has $\varepsilon$-uniform equipartitions, and together, they can give us some pretty neat results.
648
- \begin{lemma}[Graph building lemma]\index{graph building lemma}\index{building lemma}
649
- Let $G$ be a graph containing disjoint vertex subsets $V_1, \ldots, V_r$ with $|V_i| = u$, such that $(V_i, V_j)$ is $\varepsilon$-uniform and $d(V_i, V_j) \geq \lambda$ for all $1 \leq i < j \leq r$.
650
-
651
- Let $H$ be a graph with maximum degree $\Delta(H) \leq \Delta$. Suppose $H$ has an $r$-colouring in which no colour is used more than $s$ times, i.e.\ $H \subseteq K_r(s)$, and suppose $(\Delta + 1) \varepsilon \leq \lambda^\Delta$ and $s \leq \lfloor \varepsilon u\rfloor$. Then $H \subseteq G$.
652
- \end{lemma}
653
-
654
- To prove this, we just do what we did in the previous lemma, and find lots of vertices connected to lots of other vertices, and then we are done.
655
- \begin{proof}
656
- We wlog assume $V(H) = \{1, \ldots, k\}$, and let $c: V(H) \to \{1, \ldots, r\}$ be a proper colouring of $V(H)$ using no colour more than $s$ times. We want to pick distinct vertices $x_1, \ldots, x_k$ in $G$, with $x_i \in V_{c(i)}$, so that $x_i x_j \in E(G)$ whenever $ij \in E(H)$.
657
-
658
- We claim that, for $0 \leq \ell \leq k$, we can choose distinct vertices $x_1, \ldots, x_\ell$ so that $x_j \in V_{c(j)}$, and for $\ell < j \leq k$, a set $X^{\ell}_j$ of \emph{candidates} for $x_j$ such that
659
- \begin{enumerate}
660
- \item $X_j^{\ell} \subseteq V_{c(j)}$;
661
- \item $x_i y_j \in E(G)$ for all $y_j \in X_j^{\ell}$ and $i \leq \ell$ such that $ij \in E(H)$.
662
-
663
- \item $|X_j^{\ell}| \geq (\lambda - \varepsilon)^{|N(j, \ell)|} |V_{c(j)}|$, where
664
- \[
665
- N(j, \ell) = \{x_i: 1 \leq i \leq \ell \text{ and }ij \in E(H)\}.
666
- \]
667
- \end{enumerate}
668
-
669
- The claim holds for $\ell = 0$ by taking $X_j^0 = V_{c(j)}$.
670
-
671
- By induction, suppose it holds for $\ell$. To pick $x_{\ell + 1}$, of course we should pick it from our candidate set $X_{\ell + 1}^\ell$. Then the first condition is automatically satisfied. Define the set
672
- \[
673
- T = \{j > \ell + 1 : (\ell + 1)j \in E(H)\}.
674
- \]
675
- Then each $t \in T$ presents an obstruction to (ii) and (iii) being satisfied. To satisfy (ii), for each $t \in T$, we should set
676
- \[
677
- X^{\ell + 1}_t = X_t^\ell \cap \Gamma(x_{\ell + 1}).
678
- \]
679
- Thus, to satisfy (iii), we want to exclude those $x_{\ell + 1}$ that make this set too small. We define
680
- \[
681
- Y_t = \Big\{y \in X_{\ell + 1}^{\ell} : |\Gamma(y) \cap X_t^\ell| \leq (\lambda - \varepsilon) |X^{\ell}_t|\Big\}.
682
- \]
683
- So we want to find something in $X_{\ell + 1}^\ell \setminus \bigcup_{t \in T} Y_t$. We also cannot choose one of the $x_i$ already used. So our goal is to show that
684
- \[
685
- \left|X_{\ell + 1}^{\ell} - \bigcup_{t \in T} Y_t \right| > s - 1.
686
- \]
687
- This will follow simply from counting the sizes of $|X_{\ell + 1}^\ell|$ and $|Y_t|$. We already have a bound on the size of $|X_{\ell + 1}^\ell|$, and we shall show that if $|Y_t|$ is too large, then it violates $\varepsilon$-uniformity.
688
-
689
- Indeed, by definition of $Y_t$, we have
690
- \[
691
- d(Y_t, X_t^\ell) \leq \lambda - \varepsilon \leq d(V_{c(t)}, V_{c(\ell + 1)}) - \varepsilon.
692
- \]
693
- So either $|X_t^\ell| < \varepsilon |V_{c(t)}|$ or $|Y_t| < \varepsilon |V_{c(\ell + 1)}|$. But the first cannot occur.
694
-
695
- Indeed, write $m = |N(\ell + 1, \ell)|$, so that $m + |T| \leq \Delta$. Moreover, each $t \in T$ is joined in $H$ to the not-yet-embedded vertex $\ell + 1$, so $|N(t, \ell)| \leq \Delta - 1$. So we can easily bound
696
- \[
697
- |X_t^{\ell}| \geq (\lambda - \varepsilon)^{\Delta - 1} |V_{c(t)}| \geq (\lambda^{\Delta - 1} - (\Delta - 1)\varepsilon) |V_{c(t)}| > \varepsilon |V_{c(t)}|.
698
- \]
699
- Thus, by $\varepsilon$-uniformity, it must be the case that
700
- \[
701
- |Y_t| \leq \varepsilon |V_{c(\ell + 1)}|.
702
- \]
703
- Therefore, we can bound
704
- \begin{multline*}
705
- \left|X_{\ell + 1}^{\ell} - \bigcup_{t \in T} Y_t \right| \geq (\lambda - \varepsilon)^m |V_{c(\ell + 1)}| - (\Delta - m) \varepsilon|V_{c(\ell + 1)}|\\
706
- \geq (\lambda^m - m\varepsilon - (\Delta - m)\varepsilon) u \geq \varepsilon u > s - 1.
707
- \end{multline*}
708
- So we are done: at most $s - 1$ of these vertices have already been chosen among $x_1, \ldots, x_\ell$, so we may select $x_{\ell + 1}$ from this set, and then take $X_t^{\ell + 1} = X_t^\ell \cap \Gamma(x_{\ell + 1})$ for $t \in T$ and $X_j^{\ell + 1} = X_j^\ell$ for the other $j$. This establishes the claim, and $\ell = k$ gives the lemma.
709
-
710
- % At most $s - 1$ vertices of $X_{\ell + 1}^\ell - \bigcup Y_t$ have been chosen amongst $x_1, \ldots, x_\ell$, so we may select $x_{\ell + 1}$ in this set. Then take
711
- % \[
712
- % X^{\ell + 1}_t = X^\ell_t \cap \Gamma(x_{\ell + 1})
713
- % \]
714
- % for $t \in T$, and for $t \not \in T$, we just set $X_j^{\ell + 1} X_j^{\ell}$.
715
- %
716
- % This establishes the claim for $1 \leq \ell \leq k$, completing the proof.
717
- \end{proof}
718
-
719
- \begin{cor}
720
- Let $H$ be a graph with vertex set $\{v_1, \ldots, v_r\}$. Let $0 < \lambda, \varepsilon < 1$ satisfy $r \varepsilon \leq \lambda^{r - 1}$.
721
-
722
- Let $G$ be a graph with disjoint vertex subsets $V_1, \ldots, V_r$, each of size $u \geq 1$. Suppose each pair $(V_i, V_j)$ is $\varepsilon$-uniform, and $d(V_i, V_j) \geq \lambda$ if $v_i v_j \in E(H)$, and $d(V_i, V_j) \leq 1 - \lambda$ if $v_i v_j \not \in E(H)$. Then there exist $x_i \in V_i$ so that the map $v_i \mapsto x_i$ is an isomorphism $H \to G[\{x_1, \ldots, x_r\}]$.
723
- \end{cor}
724
-
725
- \begin{proof}
726
- By replacing the $V_i$-$V_j$ edges by the complementary set whenever $v_i v_j \not \in E(H)$, we may assume $d(V_i, V_j) \geq \lambda$ for all $i, j$, and $H$ is a complete graph.
727
-
728
- We then apply the previous lemma with $\Delta = r - 1$ and $s = 1$.
729
- \end{proof}
730
-
731
- Szemer\'edi showed that every graph that is sufficiently large can be partitioned into finitely many classes, with most pairs being $\varepsilon$-uniform. The idea is simple --- whenever we see something that is not uniform, we partition it further into subsets that are more uniform. The ``hard part'' of the proof is to come up with a measure of how far we are from being uniform.
732
-
733
- \begin{defi}[Equipartition]\index{equipartition}
734
- An \term{equipartition} of $V(G)$ into $k$ parts is a partition into sets $V_1, \ldots, V_k$, where $\lfloor \frac{n}{k} \rfloor \leq |V_i| \leq \lceil \frac{n}{k}\rceil$, where $n = |G|$.
735
-
736
- We say that the partition is $\varepsilon$-uniform\index{$\varepsilon$-uniform!partition} if $(V_i, V_j)$ is $\varepsilon$-uniform for all but $\varepsilon \binom{k}{2}$ pairs.
737
- \end{defi}
738
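- For example (a small illustration): if $n = 10$ and $k = 3$, an equipartition has parts of sizes $4, 3, 3$, since $\lfloor 10/3 \rfloor = 3$ and $\lceil 10/3 \rceil = 4$; and with, say, $\varepsilon = \frac{1}{2}$, being $\varepsilon$-uniform allows at most $\varepsilon \binom{3}{2} = \frac{3}{2}$, i.e.\ at most one, non-uniform pair.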
-
739
-
740
- \begin{thm}[Szemer\'edi's regularity lemma]\index{Szemer\'edi regularity lemma}
741
- Let $0 < \varepsilon < 1$ and let $\ell$ be some natural number. Then there exists some $L = L(\ell, \varepsilon)$ such that every graph has an $\varepsilon$-uniform equipartition into $m$ parts for some $\ell \leq m \leq L$, depending on the graph.
742
- \end{thm}
743
- This lemma was proved by Szemer\'edi in order to prove his theorem on arithmetic progressions in dense subsets of integers.
744
-
745
- When we want to apply this, we usually want at least $\ell$ many parts. For example, having $1$ part is usually not very helpful. The upper bound on $m$ is helpful for us to ensure the parts are large enough, by picking graphs with sufficiently many vertices.
746
-
747
- We first need a couple of trivial lemmas.
748
- \begin{lemma}
749
- Let $U' \subseteq U$ and $W' \subseteq W$, where $|U'| \geq (1 - \delta)|U|$ and $|W'| \geq (1 - \delta) |W|$. Then
750
- \[
751
- |d(U', W') - d(U, W)| \leq 2\delta.
752
- \]
753
- \end{lemma}
754
-
755
- \begin{proof}
756
- Let $d = d(U, W)$ and $d' = d(U', W')$. Then
757
- \[
758
- d = \frac{e(U, W)}{|U||W|} \geq \frac{e(U', W')}{|U||W|} = d' \frac{|U'||W'|}{|U||W|} \geq d' (1 - \delta)^2.
759
- \]
760
- Thus,
761
- \[
762
- d' - d \leq d'(1 - (1 - \delta)^2) \leq 2\delta d' \leq 2 \delta.
763
- \]
764
- The other inequality follows from considering the complementary graph, which tells us
765
- \[
766
- (1 - d') - (1 - d) \leq 2\delta.\qedhere
767
- \]
768
- \end{proof}
769
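- (To spell out the middle step of the above proof, an added remark: $1 - (1 - \delta)^2 = 2\delta - \delta^2 \leq 2\delta$, and then $2\delta d' \leq 2\delta$ since $d' \leq 1$.)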
-
770
- \begin{lemma}
771
- Let $x_1, \ldots, x_n$ be real numbers with
772
- \[
773
- X = \frac{1}{n} \sum_{i = 1}^n x_i,
774
- \]
775
- and, for some $1 \leq m < n$, let
776
- \[
777
- x = \frac{1}{m} \sum_{i = 1}^m x_i.
778
- \]
779
- Then
780
- \[
781
- \frac{1}{n} \sum_{i = 1}^n x_i^2 \geq X^2 + \frac{m}{n - m}(x - X)^2 \geq X^2 + \frac{m}{n} (x - X)^2.
782
- \]
783
- \end{lemma}
784
- If we ignore the second term on the right, then this is just Cauchy--Schwarz.
785
-
786
- \begin{proof}
787
- We have
788
- \begin{align*}
789
- \frac{1}{n} \sum_{i = 1}^n x_i^2 &= \frac{1}{n} \sum_{i = 1}^m x_i^2 + \frac{1}{n} \sum_{i = m + 1}^n x_i^2 \\
790
- &\geq \frac{m}{n} x^2 + \frac{n - m}{n} \left(\frac{nX - mx}{n - m}\right)^2\\
791
- &\geq X^2 + \frac{m}{n - m} (x - X)^2
792
- \end{align*}
793
- by two applications of Cauchy--Schwarz.
794
- \end{proof}
795
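- As a quick numerical sanity check (an illustrative computation, not in the original notes): take $n = 3$, $m = 1$ and $(x_1, x_2, x_3) = (4, 1, 1)$, so that $X = 2$ and $x = 4$. Then
- \[
-   \frac{1}{3}(16 + 1 + 1) = 6 = X^2 + \frac{m}{n - m}(x - X)^2 = 4 + \frac{1}{2}\cdot 4,
- \]
- so the first inequality is an equality here (the last $n - m$ values are all equal), while the weaker bound gives only $X^2 + \frac{m}{n}(x - X)^2 = \frac{16}{3}$.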
-
796
- We can now prove Szemer\'edi's regularity lemma.
797
- \begin{proof}
798
- Define the index $\ind(\mathcal{P})$ of an equipartition $\mathcal{P}$ into $k$ parts $V_i$ to be
799
- \[
800
- \ind(P) = \frac{1}{k^2} \sum_{i < j} d^2(V_i, V_j).
801
- \]
802
- We show that if $P$ is not $\varepsilon$-uniform, then there is a refinement equipartition $\mathcal{Q}$ into $k 4^k$ parts, with $\ind(\mathcal{Q}) \geq \ind(\mathcal{P}) + \frac{\varepsilon^5}{8}$.
803
-
804
- This is enough to prove the theorem. For choose $t \geq \ell$ with $4^t \varepsilon^5 \geq 100$. Define recursively a function $f$ by
805
- \[
806
- f(0) = t,\quad f(j + 1) = f(j) 4^{f(j)}.
807
- \]
808
- Let
809
- \[
810
- N = f(\lceil 4 \varepsilon^{-5}\rceil),
811
- \]
812
- and pick $L = N 16^N$.
813
-
814
- Then, if $n \leq L$, just take the equipartition into single vertices. Otherwise, begin with any equipartition into $t$ parts. As long as the current equipartition into $k$ parts is not $\varepsilon$-uniform, replace it by a refinement into $k 4^k$ parts. The point is that $\ind(\mathcal{P}) \leq \frac{1}{2}$ for any partition. So we can't do this more than $4 \varepsilon^{-5}$ times, at which point we have an $\varepsilon$-uniform equipartition into at most $N \leq L$ parts.
815
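- (To spell out the one-line bound just used, an added remark: since every density is at most $1$,
- \[
-   \ind(\mathcal{P}) = \frac{1}{k^2} \sum_{i < j} d^2(V_i, V_j) \leq \frac{1}{k^2} \binom{k}{2} < \frac{1}{2}
- \]
- for any equipartition $\mathcal{P}$ into $k$ parts.)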
-
816
- Note that the reason we had to set $L = N 16^N$ is that in our proof, we want to assume we have many vertices lying around.
817
-
818
- The proof really is just one line, but students tend to complain about such short proofs, so let's try to explain it in a bit more detail. If the partition is not $\varepsilon$-uniform, this means we can further partition each part into uneven pieces. Then our previous lemma tells us this discrepancy allows us to push up $\frac{1}{n} \sum x_i^2$.
819
-
820
- So given an equipartition $\mathcal{P}$ that is not $\varepsilon$-uniform, for each non-uniform pair $(V_i, V_j)$ of $P$, we pick witness sets
821
- \[
822
- X_{ij} \subseteq V_i,\quad X_{ji} \subseteq V_j
823
- \]
824
- with $|X_{ij}| \geq \varepsilon |V_i|$, $|X_{ji}| \geq \varepsilon |V_j|$ and $|d(X_{ij}, X_{ji}) - d(V_i, V_j)| > \varepsilon$.
825
-
826
- Fix $i$. Then the sets $X_{ij}$ partition $V_i$ into at most $2^{k - 1}$ \term{atoms}. Let $m = \lfloor\frac{n}{k4^k}\rfloor$, and let $n = k 4^k m + ak + b$, where $0 \leq a < 4^k$ and $0 \leq b < k$. Then we see that
827
- \[
828
- \lfloor n/k\rfloor = 4^k m + a
829
- \]
830
- and the parts of $\mathcal{P}$ have size $4^k m + a$ or $4^km + a + 1$, with $b$ of the larger size.
831
-
832
- Partition each part of $\mathcal{P}$ into $4^k$ sets of size $m$ or $m + 1$, the smaller parts $V_i$ having $a$ parts of size $m + 1$, and the larger ones having $a + 1$ such parts.
833
-
834
- We see that any such partition is an equipartition into $k 4^k$ parts of size $m$ or $m + 1$, with $ak + b$ parts of larger size $m + 1$.
835
-
836
- Let's choose such an equipartition $\mathcal{Q}$ with parts as nearly as possible inside atoms, so each atom is a union of parts of $\mathcal{Q}$ with at most $m$ extra vertices.
837
-
838
- All that remains is to check that $\ind (\mathcal{Q}) \geq \ind(\mathcal{P}) + \frac{\varepsilon^5}{8}$.
839
-
840
- Let the sets of $\mathcal{Q}$ within $V_i$ be $V_i(s)$, where $1 \leq s \leq q$, writing $q = 4^k$. So
841
- \[
842
- V_i = \bigcup_{s = 1}^q V_i(s).
843
- \]
844
- Now
845
- \[
846
- \sum_{1 \leq s, t \leq q} e(V_i(s), V_j(t)) = e(V_i, V_j).
847
- \]
848
- We'd like to divide by some numbers and convert these to densities, but this is where we have to watch out. But this is still quite easy to handle. We have
849
- \[
850
- \frac{m}{m + 1}\, q |V_i(s)| \leq |V_i| \leq \frac{m + 1}{m}\, q |V_i(s)|
851
- \]
852
- for all $s$. So we want $m$ to be large for this to not hurt us too much.
853
-
854
- So
855
- \[
856
- \left(\frac{m}{m + 1}\right)^2 d(V_i, V_j) \leq \frac{1}{q^2} \sum_{s, t} d(V_i(s), V_j(t)) \leq \left(\frac{m + 1}{m}\right)^2 d(V_i, V_j).
857
- \]
858
- Using $n \geq k 16^k$, and hence
859
- \[
860
- \left(\frac{m}{m + 1}\right)^2 \geq 1 - \frac{2}{m} \geq 1 - \frac{2}{4^k} \geq 1 - \frac{\varepsilon^5}{50},
861
- \]
862
- we have
863
- \[
864
- \left|\frac{1}{q^2} \sum_{s, t} d(V_i(s), V_j(t)) - d(V_i, V_j)\right| \leq \frac{\varepsilon^5}{50}.
865
- \]
866
- In particular,
867
- \[
868
- \frac{1}{q^2} \sum d^2(V_i(s), V_j(t)) \geq d^2(V_i, V_j) - \frac{\varepsilon^5}{25}.
869
- \]
870
- The lower bound can be improved if $(V_i, V_j)$ is not $\varepsilon$-uniform.
871
-
872
- Let $X_{ij}^*$ be the largest subset of $X_{ij}$ that is the union of parts of $\mathcal{Q}$. We may assume
873
- \[
874
- X_{ij}^* = \bigcup_{1 \leq s \leq r_i} V_i(s).
875
- \]
876
- By an argument similar to the above, we have
877
- \[
878
- \frac{1}{r_i r_j} \sum_{\substack{1 \leq s \leq r_i\\ 1 \leq t \leq r_j}} d(V_i(s), V_j(t)) \leq d(X_{ij}^*, X_{ji}^*) + \frac{\varepsilon^5}{49}.
879
- \]
880
- By the choice of parts of $\mathcal{Q}$ within atoms, and because $|V_i| \geq qm = 4^k m$, we have
881
- \begin{align*}
882
- |X_{ij}^*| &\geq |X_{ij}| - 2^{k - 1}m \\
883
- &\geq |X_{ij}| \left(1 - \frac{2^k m}{\varepsilon |V_i|}\right) \\
884
- &\geq |X_{ij}| \left(1 - \frac{1}{2^k \varepsilon}\right)\\
885
- &\geq |X_{ij}| \left(1 - \frac{\varepsilon}{10}\right).
886
- \end{align*}
887
- So by the lemma, we know
888
- \[
889
- |d(X_{ij}^*, X_{ji}^*) - d(X_{ij}, X_{ji})| < \frac{\varepsilon}{5}.
890
- \]
891
- Recalling that
892
- \[
893
- |d(X_{ij}, X_{ji}) - d(V_i, V_j)| > \varepsilon,
894
- \]
895
- we have
896
- \[
897
- \bigg| \frac{1}{r_i r_j} \sum_{\substack{1 \leq s \leq r_i\\ 1 \leq t \leq r_j}} d(V_i(s), V_j(t)) - d(V_i, V_j)\bigg| > \frac{3}{4} \varepsilon.
898
- \]
899
- We can now apply our Cauchy--Schwarz-type inequality with $n = q^2$ and $m = r_i r_j$, which gives
900
- \begin{align*}
901
- \frac{1}{q^2} \sum_{1 \leq s, t \leq q} d^2 (V_i(s), V_j(t)) &\geq d^2(V_i, V_j) - \frac{\varepsilon^5}{25} + \frac{r_i r_j}{q^2}\cdot \frac{9\varepsilon^2}{16} \\
902
- &\geq d^2(V_i, V_j) - \frac{\varepsilon^5}{25} + \frac{\varepsilon^4}{3},
903
- \end{align*}
904
- using the fact that
905
- \begin{multline*}
906
- \frac{r_i}{q} \geq \frac{m}{m + 1} \cdot \frac{|X_{ij}^*|}{|V_i|} \geq \left(1 - \frac{1}{m}\right) \frac{|X_{ij}^*|}{|V_i|}\\ \geq \left(1 - \frac{1}{m}\right) \left(1 - \frac{\varepsilon}{10}\right) \frac{|X_{ij}|}{|V_i|} \geq \left(1 - \frac{1}{m}\right) \left(1 - \frac{\varepsilon}{10}\right)\varepsilon > \frac{4\varepsilon}{5}.
907
- \end{multline*}
908
- Therefore
909
- \begin{align*}
910
- \ind(\mathcal{Q}) &= \frac{1}{k^2q^2}\sum_{\substack{1 \leq i < j \leq k\\ 1 \leq s, t\leq q}} d^2(V_i(s), V_j(t))\\
911
- &\geq \frac{1}{k^2} \sum_{1 \leq i < j \leq k} d^2(V_i, V_j) - \frac{\varepsilon^5}{25} + \frac{1}{k^2}\, \varepsilon \binom{k}{2} \frac{\varepsilon^4}{3}\\
912
- &\geq \ind(P) + \frac{\varepsilon^5}{8}.\qedhere
913
- \end{align*}
914
- \end{proof}
915
- The proof gives something like
916
- \[
917
- L \sim 2^{2^{.^{.^{.^{2}}}}},
918
- \]
919
- where the tower is $\varepsilon^{-5}$ tall. Can we do better than that?
920
-
921
- In 1997, Gowers showed that a tower of height at least $\varepsilon^{-1/16}$ is necessary. More generally, we can define $V_1, \ldots, V_k$ to be $(\varepsilon, \delta, \eta)$-uniform if all but $\eta \binom{k}{2}$ pairs $(V_i, V_j)$ satisfy $|d(V_i, V_j) - d(V_i', V_j')| \leq \varepsilon$ whenever $V_i' \subseteq V_i$ and $V_j' \subseteq V_j$ with $|V_i'| \geq \delta |V_i|$ and $|V_j'| \geq \delta |V_j|$. Then there is a graph for which every $(1 - \delta^{1/16}, \delta, 1 - 20 \delta^{1/16})$-uniform partition needs at least tower($\delta^{-1/16}$) many parts.
922
-
923
- More recently, Moshkovitz and Shapira (2012) improved these bounds. Most recently, a reformulation of the lemma due to Lov\'asz and Szegedy (2007) for which the upper bound is tower($\varepsilon^{-2}$) was shown to have lower bound tower($\varepsilon^{-2}$) by Fox and Lov\'asz (2014) (note that these are different Lov\'asz's!).
924
-
925
- Let's now turn to some applications of Szemer\'edi's regularity lemma. Recall that Ramsey's theorem says there exists $R(k)$ so every red-blue colouring of the edges of $K_n$ yields a monochromatic $K_k$ provided $n \geq R(k)$. There are known bounds
926
- \[
927
- 2^{k/2} \leq R(k) \leq 4^k.
928
- \]
929
- The existence of $R(k)$ implies that for every graph $G$, there exists a minimal number $r(G)$ such that if $n \geq r(G)$ and we red-blue colour the edges of $K_n$, we obtain a monochromatic copy of $G$. Clearly, we have
930
- \[
931
- r(G) \leq R(|G|).
932
- \]
933
- How much smaller can $r(G)$ be compared to $R(|G|)$?
934
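- For a concrete contrast (an illustration using the theorem proved below): the path $P_n$ has maximum degree $2$, so the next theorem gives $r(P_n) \leq c(2)\, n$, growing only linearly in $n$, whereas the general Ramsey bound for an $n$-vertex graph is exponential, since $R(n) \geq 2^{n/2}$.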
-
935
- \begin{thm}
936
- Given an integer $d$, there exists $c(d)$ such that
937
- \[
938
- r(G) \leq c|G|
939
- \]
940
- for every graph $G$ with $\Delta(G) \leq d$.
941
- \end{thm}
942
-
943
- \begin{proof}
944
- Let $t = R(d + 1)$. Pick $\varepsilon \leq \min \left\{\frac{1}{t}, \frac{1}{2^d (d + 1)}\right\}$. Let $\ell \geq t^2$, and let $L = L(\ell, \varepsilon)$. We show that $c = \frac{L}{\varepsilon}$ works.
945
-
946
- Indeed, let $G$ be a graph. Colour the edges of $K_n$ by red and blue, where $n \geq c |G|$. Apply Szemer\'edi's regularity lemma to the red graph with $\ell, \varepsilon$ as above. Let $H$ be the graph whose vertices are $\{V_1, \ldots, V_m\}$, where $V_1, \ldots, V_m$ is the partition of the red graph. Let $V_i V_j \in E(H)$ if $(V_i, V_j)$ is $\varepsilon$-uniform. Notice that $m = |H| \geq \ell \geq t^2$, and $e(\bar{H}) \leq \varepsilon \binom{m}{2}$. So $H \supseteq K_t$, or else by Tur\'an's theorem, there are integers $d_1, \ldots, d_{t - 1}$ with $\sum d_i = m$ and
947
- \[
948
- e(\bar{H}) \geq \sum_i \binom{d_i}{2}\geq (t - 1) \binom{m/(t - 1)}{2} > \varepsilon \binom{m}{2}
949
- \]
950
- by our choice of $\varepsilon$ and $m$.
951
-
952
- We may as well assume all pairs $V_i, V_j$ for $1 \leq i < j \leq t$ are $\varepsilon$-uniform. We colour the edge of $K_t$ green if $d(V_i, V_j) \geq \frac{1}{2}$ (in the red graph), or white if $< \frac{1}{2}$ (i.e.\ density $> \frac{1}{2}$ in the blue graph).
953
-
954
- By Ramsey's theorem, we may assume all pairs $V_i V_j$ for $1 \leq i < j \leq d + 1$ are the same colour. We may wlog assume the colour is green, and we shall find a red $G$ (a similar argument gives a blue $G$ if the colour is white).
955
-
956
- Indeed, take a vertex colouring of $G$ with at most $d + 1$ colours (using $\Delta(G) \leq d$), with no colour used more than $|G|$ times. By the building lemma with $H$ (in the lemma) being $G$ (in this proof), and $G$ (in the lemma) equal to the subgraph of the red graph spanned by $V_1, \ldots, V_{d + 1}$ (here),
957
- \[
958
- u = |V_i| \geq \frac{n}{L} \geq \frac{c |G|}{L} \geq \frac{|G|}{\varepsilon},
959
- \]
960
- $r = d + 1$, $\lambda = \frac{1}{2}$, and we are done.
961
- \end{proof}
962
-
963
- This proof is due to Chv\'atal, R\"odl, Szemer\'edi, Trotter (1983). It was extended to more general graphs by Chen and Schelp (1993) including planar graphs. It was conjectured by Burr and Erd\"os (1978) to be true for $d$-degenerate graphs ($e(H) \leq d |H|$ for all $H \subseteq G$).
964
-
965
- Kostochka--R\"odl (2004) introduced ``dependent random choice'', used by Fox--Sudakov (2009) and finally Lee (2015) proved the full conjecture.
966
-
967
- An important application of the Szemer\'edi regularity lemma is the \emph{triangle removal lemma}.
968
- \begin{thm}[Triangle removal lemma]\index{triangle removal lemma}
969
- Given $\varepsilon > 0$, there exists $\delta > 0$ such that if $|G| = n$ and $G$ contains at most $\delta n^3$ triangles, then there exists a set of at most $\varepsilon n^2$ edges whose removal leaves no triangles.
970
- \end{thm}
971
-
972
- \begin{proof}
973
- Exercise. (See example sheet)
974
- \end{proof}
975
- Appropriate modifications hold for general graphs, not just triangles.
976
-
977
- \begin{cor}[Roth, 1950's]
978
- Let $\varepsilon > 0$. Then if $n$ is large enough, and $A \subseteq [n] = \{1, 2, \ldots, n\}$ with $|A| \geq \varepsilon n$, then $A$ contains a $3$-term arithmetic progression.
979
- \end{cor}
980
- Roth originally proved this by Fourier-analytic arguments, while Szemer\'edi later proved the analogous result for arithmetic progressions of all lengths.
981
-
982
- \begin{proof}
983
- Define
984
- \[
985
- B = \{(x, y) \in [2n]^2 : x - y \in A\}.
986
- \]
987
- Then certainly $|B| \geq \varepsilon n^2$. We form a $3$-partite graph with disjoint vertex classes $X = [2n]$ and $Y = [2n]$, $Z = [4n]$. If we have $x \in X, y \in Y$ and $z \in Z$, we join $x$ to $y$ if $(x, y) \in B$; join $x$ to $z$ if $(x, z - x) \in B$ and join $y$ to $z$ if $(z - y, y) \in B$.
988
-
989
- A triangle in $G$ is a triple $(x, y, z)$ with $(x, y), (x, y + w), (x + w, y) \in B$, where $w = z - x - y$. Note that $w < 0$ is okay. A $0$-triangle is one with $w = 0$. There are at least $\varepsilon n^2 $ of these, one for each $(x, y) \in B$, and these are edge disjoint, because $z = x + y$.
990
-
991
- Hence the triangles cannot be killed by removing $\leq \varepsilon n^2/2$ edges. By the triangle removal lemma, we must have $\geq \delta n^3$ triangles for some $\delta$. In particular, for $n$ large enough, there is some triangle that is not a $0$-triangle.
992
-
993
- But then we are done, since
994
- \[
995
- x - y - w, x - y, x - y + w \in A
996
- \]
997
- where $w \not= 0$, and this is a $3$-term arithmetic progression.
998
- \end{proof}
999
- There is a simple generalization of this argument which yields $k$-term arithmetic progressions provided we have a suitable removal lemma. This needs a Szemer\'edi regularity lemma for $(k - 1)$-uniform hypergraphs, instead of just graphs, and a corresponding version of the building lemma.
1000
-
1001
- The natural generalization of Szemer\'edi's lemma to hypergraphs is easily shown to be true (exercise). The catch is that this generalization does not give us a strong enough result to apply the building lemma.
1002
-
1003
- What we need is a stronger version of the regularity that allows us to build graphs, but not too strong that we can't prove the Szemer\'edi regularity lemma. A workable hypergraph regularity lemma along these lines was proved by Nagle, R\"odl, Skokan, and by Gowers (those weren't quite the same lemma, though).
1004
-
1005
- \section{Subcontraction, subdivision and linking}
1006
- We begin with some definitions.
1007
- \begin{defi}
1008
- Let $G$ be a graph, $e = xy$ an edge. Then the graph $G/e$\index{$G/e$} is the graph formed from $G \setminus \{x, y\}$ by adding a new vertex joined to all neighbours of $x$ and $y$. We say this is the graph formed by \emph{contracting} the edge $e$.
1009
- \end{defi}
1010
-
1011
- % insert picture
1012
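- For example (an illustration of the definition): contracting one edge of a triangle leaves a single edge, while contracting the edge $ab$ of the $4$-cycle $abcd$ gives a triangle: the new vertex is joined to $d$ (a neighbour of $a$) and to $c$ (a neighbour of $b$), and the edge $cd$ survives. Note that the definition automatically produces a simple graph.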
-
1013
- \begin{defi}[(Sub)contraction]
1014
- A \term{contraction} of $G$ is a graph obtained by a sequence of edge contractions. A \term{subcontraction} of $G$ is a contraction of a subgraph. We write $G \succ H$ if $H$ is a subcontraction of $G$.
1015
- \end{defi}
1016
- If $G \succ H$, then $G$ has disjoint vertex subsets $W_v$ for each $v \in V(H)$ such that $G[W_v]$ is connected, and there is an edge of $G$ between $W_u$ and $W_v$ if $uv \in E(H)$.
1017
-
1018
- \begin{defi}[Subdivision]\index{subdivision}
1019
- If $H$ is a graph, \term{$TH$} stands for any graph obtained from $H$ by replacing its edges by vertex disjoint paths (i.e.\ we subdivide edges).
1020
- \end{defi}
1021
- The $T$ stands for ``topological'', since the resulting graph has the same topology.
1022
-
1023
- Clearly, if $G \supseteq TH$, then $G \succ H$.
1024
-
1025
- \begin{center}
1026
- \begin{tikzpicture}
1027
- \draw (0, 0) node [circ] {} -- (2, 0) node [circ] {} -- (1, 1.732) node [circ] {} -- cycle;
1028
-
1029
- \draw [->] (2.5, 0.866) -- +(1, 0);
1030
-
1031
- \draw (4, 0) node [circ] {} -- (6, 0) node [circ] {} node [pos=0.5, circ] {} -- (5, 1.732) node [circ] {} node [pos=0.333, circ] {} node [pos=0.667, circ] {} -- cycle;
1032
- \end{tikzpicture}
1033
- \end{center}
1034
-
1035
- Recall the following theorem:
1036
- \begin{thm}[Menger's theorem]\index{Menger's theorem}
1037
- Let $G$ be a graph and $s_1, \ldots, s_k, t_1, \ldots, t_k$ be distinct vertices. If $\kappa(G) \geq k$, then there exist $k$ vertex disjoint paths from $\{s_1, \ldots, s_k\}$ to $\{t_1, \ldots, t_k\}$.
1038
- \end{thm}
1039
-
1040
- This is good, but not good enough. It would be nice if we could have paths that join $s_i$ to $t_i$, as opposed to joining $s_i$ to any element in $\{t_1, \ldots, t_k\}$.
1041
-
1042
- \begin{defi}[$k$-linked graph]\index{$k$-linked graph}
1043
- We say $G$ is \emph{$k$-linked} if for any choice of $2k$ distinct vertices $s_1, \ldots, s_k, t_1, \ldots, t_k$, there exist vertex disjoint $s_i$-$t_i$ paths for $1 \leq i \leq k$.
1044
- \end{defi}
1045
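- For a small example (an illustration, not from the notes): the $4$-cycle $C_4$ is $1$-linked, since it is connected, but it is not $2$-linked: labelling its vertices $s_1, s_2, t_1, t_2$ in cyclic order, every $s_1$--$t_1$ path passes through $s_2$ or $t_2$. So even a $2$-connected graph need not be $2$-linked.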
-
1046
- We want to understand how these three notions interact. There are some obvious ways in which they interact. For example, it is not hard to see that $G \supseteq TK_t$ if $G$ is $k$-linked for some large $k$. Here is a kind of converse.
1047
- \begin{lemma}
1048
- If $\kappa(G) \geq 2k$ and $G \supseteq TK_{5k}$, then $G$ is $k$-linked.
1049
- \end{lemma}
1050
-
1051
- \begin{proof}
1052
- Let $B$ be the set of ``branch vertices'' of $TK_{5k}$, i.e.\ the $5k$ vertices joined by paths. By Menger's theorem, there exists $2k$ vertex disjoint paths joining $\{s_1, \ldots, s_k, t_1, \ldots, t_k\}$ to $B$ (note that these sets might intersect). We say one of our $2k$ paths \emph{impinges} on a path in $TK_{5k}$ if it meets that path and subsequently leaves it. Choose a set of $2k$ paths impinging minimally (counting 1 per impingement).
1053
-
1054
- Let these join $\{s_1, \ldots, t_k\}$ to $\{v_1, \ldots, v_{2k}\}$, where $B = \{v_1, \ldots, v_{5k}\}$. Then no path impinges on a path in $TK_{5k}$ from $\{v_1, \ldots, v_{2k}\}$ to $\{v_{2k + 1}, \ldots, v_{5k}\}$. Otherwise, pick the path impinging closest to $v_{2k + j}$, and reroute it to $v_{2k + j}$ rather than to something in $\{v_1, \ldots, v_{2k}\}$, reducing the number of impingements.
1055
-
1056
- Hence each of our $2k$ paths meets at most one path of $TK_{5k}$ from $\{v_1, \ldots, v_{2k}\}$ to $\{v_{2k + 1}, \ldots, v_{5k}\}$ (once we hit it, we must stay on it).
1057
-
1058
- Thus, we may assume the paths from $\{v_1, \ldots, v_{2k}\}$ to $\{v_{4k + 1}, \ldots, v_{5k}\}$ are not met at all. Then we are done, since we can link up $v_j$ to $v_{k + j}$ via $v_{4k + j}$ to join $s_j$ to $t_j$.
1059
- \end{proof}
1060
- The argument can be improved to use $TK_{3k}$.
1061
-
1062
- Since we are doing extremal graph theory, we want to ask the following question --- how many edges do we need to guarantee $G \succ K_t$ or $G \supseteq TK_t$?
1063
-
1064
- For two graphs $G$ and $H$, we write $G + H$ for the graph formed by taking the disjoint union of $G$ and $H$ and then joining everything in $G$ to everything in $H$.
1065
- \begin{lemma}
1066
- If $e(G) \geq k|G|$, then there exists some $H$ with $|H| \leq 2k$ and $\delta(H) \geq k$ such that $G \succ K_1 + H$.
1067
- \end{lemma}
1068
-
1069
- \begin{proof}
1070
- Consider the minimal subcontraction $G'$ of $G$ among those that satisfy $e(G') \geq k |G'|$. Then we must in fact have $e(G') = k|G'|$ , or else we can just throw away an edge.
1071
-
1072
- Since $e(G') = k|G'|$, it must be the case that $\delta(G') \leq 2k$. Let $v$ be a vertex of minimum degree in $G'$, and set $H = G'[\Gamma(v)]$. Then $G \succ K_1 + H$ and $|H| \leq 2k$.
1073
-
1074
- To see that $\delta(H) \geq k$, suppose $u \in V(H) = \Gamma(v)$. Then by minimality of $G'$, we have
1075
- \[
1076
- e(G'/uv) \leq k|G'| - k - 1.
1077
- \]
1078
- But the number of edges we kill when performing this contraction is exactly $1$ plus the number of triangles containing $uv$. So $uv$ lies in at least $k$ triangles of $G'$. In other words, $\delta(H) \geq k$.
1079
- \end{proof}
1080
-
1081
- By iterating this, we can find some subcontractions into $K_t$.
1082
- \begin{thm}
1083
- If $t \geq 3$ and $e(G) \geq 2^{t - 3}|G|$, then $G \succ K_t$.
1084
- \end{thm}
1085
-
1086
- \begin{proof}
1087
- If $t = 3$, then $G$ contains a cycle. So $G \succ K_3$. If $t > 3$, then $G \succ K_1 + H$ where $\delta(H) \geq 2^{t - 3}$. So $e(H) \geq 2^{t - 4} |H|$ and (by induction) $H \succ K_{t - 1}$.
1088
- \end{proof}
1089
-
1090
- We can prove similar results for the topological things.
1091
- \begin{lemma}
1092
- If $\delta(G) \geq 2k$, then $G$ contains vertex disjoint subgraphs $H, J$ with $\delta(H) \geq k$, $J$ connected, and every vertex in $H$ has a neighbour in $J$.
1093
- \end{lemma}
1094
- If we contract $J$ to a single vertex, then we get an $H + K_1$.
1095
-
1096
- \begin{proof}
1097
- We may assume $G$ is connected, or else we can replace $G$ by a component.
1098
-
1099
- Now pick a subgraph $J$ maximal such that $J$ is connected and $e(G/J) \geq k(|G| - |J| + 1)$. Note that any single vertex could be used for $J$. So such a $J$ exists.
1100
-
1101
- Let $H$ be the subgraph spanned by the vertices outside $J$ having a neighbour in $J$. Note that if $v \in V(H)$, then $v$ has at least $k$ neighbours in $H$. Indeed, when we contract $J \cup \{v\}$, maximality tells us
1102
- \[
1103
- e(G/(J \cup \{v\})) \leq k(|G| - |J| + 1) - k - 1,
1104
- \]
1105
- and the number of edges we lose in the contraction is $1$ plus the number of neighbours of $v$ in $H$ (it is easier to think of this as a two-step process --- first contract $J$, so that we get an $H + K_1$, then contract $v$ with the vertex coming from $K_1$).
1106
- \end{proof}
1107
-
1108
- Again, we iterate this result.
1109
- \begin{thm}
1110
- Let $F$ be a graph with $n$ edges and no isolated vertices. If $\delta(G) \geq 2^n$, then $G \supseteq TF$.
1111
- \end{thm}
1112
-
1113
- \begin{proof}
1114
- If $n = 1$, then this is immediate. If $F$ consists of $n$ isolated edges, then $F \subseteq G$ (in fact, $\delta(G) \geq 2n - 1$ is enough for this). Otherwise, pick an edge $e$ which is not isolated. Then $F - e$ has at most $1$ isolated vertex. Apply the previous lemma to $G$ to obtain $H$ with $\delta(H) \geq 2^{n - 1}$. Find a copy of $T(F - e)$ in $H$ (apart from the isolated vertex, if exists).
1115
-
1116
- If $e$ had an endvertex of degree $1$ in $F$, then just add an edge into $J$ to act as $e$. If not, just construct a path going through $J$ to act as $e$, which is possible since $J$ is connected and every vertex of $H$ has a neighbour in $J$.
1117
- \end{proof}
1118
-
1119
- It is convenient to make the definition
1120
- \begin{align*}
1121
- c(t) &= \inf \{c: e(G) \geq c|G| \Rightarrow G \succ K_t\}\\
1122
- t(t) &= \inf \{c: e(G) \geq c|G| \Rightarrow G \supseteq T K_t\}.
1123
- \end{align*}
1124
- We can interpret the first result as saying $c(t) \leq 2^{t - 3}$. Moreover, note that if $e(G) \geq k|G|$, then $G$ must contain a subgraph with $\delta \geq k$. Otherwise, we can keep removing vertices of degree $<k$ till we have none left, but still have a positive number of edges, which is clearly nonsense. Hence, the second result says $t(t) \leq 2^{\binom{t}{2}}$.
1125
-
1126
- Likewise, we can define
1127
- \[
1128
- f(k) = \min \{c: \kappa(G) \geq c \Rightarrow G \text{ is $k$-linked}\}.
1129
- \]
1130
- Since $\kappa(G) \geq c$ implies $\delta(G) \geq c$, the existence of $t(t)$ together with the first lemma of this section implies that $f(k)$ exists.
1131
-
1132
- In 1967, Mader showed $c(t) = t - 2$ for $t \leq 7$. Indeed, for $t \leq 7$, we have the exact result
1133
- \[
1134
- \ex(n; \succ K_t) = (t - 2) n - \binom{t - 1}{2}.
1135
- \]
1136
- So for some time, people thought $c(t)$ was linear. But in fact, $c(t)$ is superlinear.
1137
-
1138
- \begin{lemma}
1139
- We have
1140
- \[
1141
- c(t) \geq (\alpha + o(1)) t \sqrt{\log t}.
1142
- \]
1143
- To determine $\alpha$, let $\lambda < 1$ be the root to $1 - \lambda + 2 \lambda \log \lambda = 0$. In fact $\lambda \approx 0.284$. Then
1144
- \[
1145
- \alpha = \frac{1 - \lambda}{2 \sqrt{\log 1/\lambda}} \approx 0.319.
1146
- \]
1147
- \end{lemma}
1148
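- (A quick numerical check of these constants, as an added remark: with $\lambda \approx 0.284$ we have $2\lambda \log \lambda \approx -0.72 \approx \lambda - 1$, so the defining equation holds, and then $\alpha = \frac{1 - \lambda}{2\sqrt{\log (1/\lambda)}} \approx \frac{0.716}{2\sqrt{1.26}} \approx 0.319$; here $\log$ is the natural logarithm.)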
-
1149
- \begin{proof}
1150
- Consider a random graph $G(n, p)$, where $p$ is a constant to be chosen later. So $G(n, p)$ has $n$ vertices and edges are chosen independently at random, with probability $p$. Here we \emph{fix} a $t$, and then try to find the best combination of $n$ and $p$ to give the desired result.
1151
-
1152
- A partition $V_1, \ldots, V_t$ of $V(G)$ is said to be \emph{complete} if there is an edge between $V_i$ and $V_j$ for all $i \not= j$. Note that having a complete partition is a \emph{necessary}, but not sufficient condition for having a $K_t$ minor. So it suffices to show that with probability $> 0$, there is no complete partition.
1153
-
1154
- Writing $q = 1 - p$, a given partition with $|V_i| = n_i$ is complete with probability
1155
- \begin{align*}
1156
- \prod_{i < j}(1 - q^{n_i n_j}) &\leq \exp\left(- \sum_{i < j} q^{n_i n_j}\right)\\
1157
- &\leq \exp \left(- \binom{t}{2}\prod q^{n_i n_j/\binom{t}{2}}\right) \tag{AM-GM}\\
1158
- &\leq \exp \left(- \binom{t}{2} q^{n^2/t^2}\right).
1159
- \end{align*}
1160
- The expected number of complete partitions is then
1161
- \[
1162
- \leq t^n \exp \left(- \binom{t}{2} q^{n^2/t^2}\right),
1163
- \]
1164
- As long as we restrict to the choices of $n$ and $q$ such that
1165
- \[
1166
- t > n \sqrt{\frac{\log(1/q)}{\log n}},
1167
- \]
1168
- we can bound this by
1169
- \[
1170
- \leq \exp \left(n \log t - \binom{t}{2} \frac{1}{n} \right) = o(1)
1171
- \]
1172
- in the limit $t \to \infty$. We set
1173
- \[
1174
- q = \lambda,\quad n = \frac{t \sqrt{\log t}}{\sqrt{\log 1/\lambda}}.
1175
- \]
1176
- Then with probability $>0$, there is no complete partition at all, and hence no $K_t$ minor, while the graph still has
1177
- \[
1178
- p \binom{n}{2} - o(n^2) = (\alpha + o(1)) t \sqrt{\log t} \cdot n
1179
- \]
1180
- many edges. This gives the required lower bound on $c(t)$.
1181
- \end{proof}
1182
-
1183
- At this point, the obvious question to ask is --- is $t \sqrt{\log t}$ the correct growth rate? The answer is yes, and perhaps surprisingly, the proof is also probabilistic. Before we prove that, we need the following lemma:
1184
-
1185
- \begin{lemma}
1186
- Let $k \in \N$ and $G$ be a graph with $e(G) \geq 11 k |G|$. Then there exists some $H$ with
1187
- \[
1188
- |H| \leq 11k + 2,\quad 2\delta (H) \geq |H| + 4k - 1
1189
- \]
1190
- such that $G \succ H$.
1191
- \end{lemma}
1192
-
1193
- \begin{proof}
1194
- We use our previous lemma as a starting point. Letting $\ell = 11k$, we know that we can find $H_1$ such that
1195
- \[
1196
- G \succ K_1 + H_1,
1197
- \]
1198
- with $|H_1| \leq 2\ell$ and $\delta(H_1) \geq \ell$. We shall improve the connectivity of this $H_1$ at the cost of throwing away some elements.
1199
-
1200
- By divine inspiration, consider the constant $\beta \approx 0.37$ defined by the equation
1201
- \[
1202
- 1 = \beta \left(1 + \log \frac{2}{\beta}\right),
1203
- \]
1204
- and the function $\phi$ defined by
1205
- \[
1206
- \phi(F) = \beta \ell \frac{|F|}{2} \left( \log \frac{|F|}{\beta \ell} + 1\right).
1207
- \]
1208
- Now consider the set of graphs
1209
- \[
1210
- \mathcal{C} = \{F : |F| \geq \beta \ell, e(F) \geq \phi(F)\}.
1211
- \]
1212
- % where
1213
- % \[
1214
- % \phi(F) = \beta \ell \frac{|F|}{2} \left( \log \frac{|F|}{\beta \ell} + 1\right)
1215
- % \]
1216
- % and $\beta \approx 0.37$ is defined by
1217
- % \[
1218
- % 1 = \beta \left(1 + \frac{\log 2}{\beta}\right).
1219
- % \]
1220
- Observe that $H_1 \in \mathcal{C}$, since $\delta(H_1) \geq \ell$ and $|H_1| \leq 2\ell$.
1221
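- (Let us spell this check out, as an added remark: $e(H_1) \geq \frac{1}{2}\ell|H_1|$ since $\delta(H_1) \geq \ell$, while
- \[
-   \phi(H_1) = \beta \ell \frac{|H_1|}{2}\left(\log \frac{|H_1|}{\beta\ell} + 1\right) \leq \beta \ell \frac{|H_1|}{2}\left(\log \frac{2}{\beta} + 1\right) = \frac{\ell |H_1|}{2},
- \]
- using $|H_1| \leq 2\ell$ and the defining equation of $\beta$.)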
-
1222
- Let $H_2$ be a subcontraction of $H_1$, minimal with respect to $\mathcal{C}$.
1223
-
1224
- Since the complete graph of order $\lceil \beta \ell\rceil$ is not in $\mathcal{C}$, as $\phi > \binom{\beta \ell}{2}$, the only reason we are minimal is that we have hit the bound on the number of edges. Thus, we must have
1225
- \[
1226
- |H_2| \geq \beta\ell + 1,\quad e(H_2) = \lceil \phi(H_2)\rceil,
1227
- \]
1228
- and
1229
- \[
1230
- e(H_2/uv) < \phi(H_2/uv)
1231
- \]
1232
- for all edges $uv$ of $H_2$.
1233
-
1234
- Choose a vertex $u \in H_2$ of minimum degree, and put $H_3 = H_2 [\Gamma(u)]$. Then we have
1235
- \[
1236
- |H_3| = \delta (H_2) \leq \frac{2 \lceil \phi(H_2)\rceil}{|H_2|} \leq \left\lfloor\beta \ell \left(\log \left(\frac{|H_2|}{\beta \ell}\right) + 1\right) + \frac{2}{|H_2|}\right\rfloor \leq \ell,
1237
- \]
1238
- since $\beta \ell \leq |H_2| \leq 2\ell$.
1239
-
1240
- Let's write $b = \beta \ell$ and $h = |H_2|$. Then by the usual argument, we have
1241
- \begin{align*}
1242
- &\hphantom{ {}\geq{}}2 \delta (H_3) - |H_3| \\
1243
- &\geq 2 (\phi(H_2) - \phi(H_2/uv) - 1) - |H_3|\\
1244
- &\geq bh \left(\log \frac{h}{b} + 1\right) - b (h - 1) \left(\log \frac{h - 1}{b} + 1\right) - b \left(\log \frac{h}{b} + 1\right) - 3\\% funny terms for rounding.
1245
- &= b(h - 1) \log \frac{h}{h - 1} - 3\\
1246
- &> b - 4,
1247
- \end{align*}
1248
- because $h \geq b + 1$ and $x \log \left(1 + \frac{1}{x}\right) > 1 - \frac{1}{x}$ for real $x > 1$. So
1249
- \[
1250
- 2 \delta (H_3) - |H_3| \geq \beta \ell - 4 > 4k - 4.
1251
- \]
1252
- If we put $H = K_2 + H_3$, then $G \succ H$, $|H| \leq 11k + 2$ and
1253
- \[
1254
- 2\delta (H) - |H| \geq 2 \delta(H_3) - |H_3| + 2 \geq 4k - 1.\qedhere
1255
- \]
1256
- \end{proof}
1257
- % Where did that magical function $\phi$ come from? This was an argument by Mader, who had a proof using two straight lines. The exact function is found by solving a differential equation of the form $\phi(x) - \phi(x - 1) \approx \phi'(x)$.
1258
- % use a graph of steeper slope. We know we have 2\ell vertices
1259
-
1260
-
1261
- \begin{thm}
1262
- We have
1263
- \[
1264
- c(t) \leq 7 t \sqrt{\log t}
1265
- \]
1266
- if $t$ is large.
1267
- \end{thm}
1268
-
1269
- The idea of the proof is as follows: First, we pick some $H$ with $G \succ H$ such that $H$ has high minimum degree, as given by the previous lemma. We now randomly pick disjoint subsets $U_1, \ldots, U_{2t}$, and hope that they give us a subcontraction to $K_t$. There are more sets than we need, because some of them might be ``bad'', in the sense that, for example, there might be no edges from $U_1$ to $U_{57}$. However, since $H$ has high minimum degree, the probability of being bad is quite low, and with positive probability, we can find $t$ subsets that have an edge between any pair. While the $U_i$ are not necessarily connected, this is not a huge problem since $H$ has so many edges that we can easily find paths that connect the different vertices in $U_i$.
1270
-
1271
- \begin{proof}
1272
- For large $t$, we can choose $k \in \N$ so that
1273
- \[
1274
- 5.8t \sqrt{\log_2 t} \leq 11k \leq 7t \sqrt{\log t}.
1275
- \]
1276
- Let $\ell = \lceil 1.01 \sqrt{\log_2 t}\rceil$. Then $k \geq t\ell/2$. We want to show that $G \succ K_t$ if $e(G) \geq 11k |G|$.
1277
-
1278
- We already know that $G \succ H$ where $h = |H| \leq 11k + 2$, and $2 \delta(H) \geq |H| + 4k - 1$. We show that $H \succ K_t$.
1279
-
1280
- From now on, we work entirely in $H$. Note that any two non-adjacent vertices have $\geq 4k + 1$ common neighbours. Randomly select $2t$ disjoint $\ell$-sets $U_1, \ldots, U_{2t}$ in $V(H)$. This is possible since we need $2t \ell$ vertices, and $2t\ell \leq 4k$.
1281
-
1282
- Of course, we only need $t$ many of those $\ell$-sets, and we want to select the ones that have some chance of being useful to us.
1283
-
1284
- Fix a part $U$. For any vertex $v$ (not necessarily in $U$), its degree is at least $h/2$. So the probability that $v$ has no neighbour in $U$ is at most $\binom{h/2}{\ell} / \binom{h}{\ell} \leq 2^{-\ell}$. Write $X(U)$ for the vertices having no neighbour in $U$, then $\E |X(U)| \leq 2^{-\ell} h$.
1285
-
1286
- We say $U$ is \emph{bad} if $|X(U)| > 8 \cdot 2^{-\ell}h$. Then by Markov's inequality, the probability that $U$ is bad is $< \frac{1}{8}$. Hence the expected number of bad sets amongst $U_1, \ldots, U_{2t}$ is $\leq \frac{t}{4}$. By Markov again, the probability that there are more than $\frac{t}{2}$ bad sets is $< \frac{1}{2}$.
1287
-
1288
- We say a pair of sets $(U, U')$ is \emph{bad} if there is no $U-U'$ edge. Now the probability that $U, U'$ is bad is $\P(U' \subseteq X(U))$. If we condition on the event that $U$ is good (i.e.\ not bad), then this probability is bounded by
1289
- \[
1290
- \binom{8 \cdot 2^{-\ell} h}{\ell} / \binom{h - \ell}{\ell} \leq 8^\ell 2^{-\ell^2} \left(1 + \frac{\ell}{h - \ell}\right)^\ell \leq 8^\ell 2^{-\ell^2} e^{\ell^2/(h - \ell)} \leq 9^\ell 2^{-\ell^2}
1291
- \]
1292
- if $t$ is large (hence $\ell$ is slightly large and $h$ is very large). We can then bound this by $\frac{1}{8t}$.
1293
-
1294
- Hence, the expected number of bad pairs (where one of them is good) is at most
1295
- \[
1296
- \frac{1}{8t} \binom{2t}{2} \leq \frac{t}{4}.
1297
- \]
1298
- So the probability that there are more than $\frac{t}{2}$ such bad pairs is $< \frac{1}{2}$.
1299
-
1300
- Now with positive probability, $U_1, \ldots, U_{2t}$ has at most $\frac{t}{2}$ bad sets and at most $\frac{t}{2}$ bad pairs amongst the good sets. Then we can find $t$ many sets that are pairwise good. We may wlog assume they are $U_1, \ldots, U_t$.
1301
-
1302
- Fixing such a choice $U_1, \ldots, U_t$, we now work deterministically. We would be done if each $U_i$ were connected, but they are not in general. However, we can find disjoint $W_1, \ldots, W_t$ in $V(H) \setminus (U_1 \cup \cdots \cup U_t)$ such that $U_i \cup W_i$ is connected for $1 \leq i \leq t$.
1303
-
1304
- Indeed, if $U_i = \{u_1, \ldots, u_\ell\}$, we pick a common neighbour of $u_{j - 1}$ and $u_j$ if $u_{j - 1} u_j \not \in E(H)$, and put it in $W_i$. In total, this requires us to pick $\leq t\ell$ distinct vertices in $V(H) - (U_1 \cup \cdots \cup U_t)$, and we can do this because $u_{j - 1}$ and $u_j$ have at least $4k - t\ell \geq t\ell$ common neighbours in this set.
1305
- \end{proof}
1306
-
1307
- It has been shown (2001) that the lower bound is in fact sharp, i.e.\ $c(t) = (\alpha + o(1)) t \sqrt{\log t}$ with $\alpha$ as in the earlier lemma.
1308
-
1309
- So we now understand $c(t)$ quite well. How about subdivision and linking? We decide to work on $f(k)$ now, because Robertson and Seymour (1995) showed that our first lemma holds with the hypothesis $G \succ K_{3k}$ in place of $G \supseteq TK_{3k}$. We also know how many edges we need to get a minor. Combining with our previous theorem, we know
1310
- \[
1311
- f(k) = O(k \sqrt{\log k}).
1312
- \]
1313
- We know we can't do better than $k$, but we can get it down to order $k$ by proving our first lemma under the weaker hypothesis that $G \succ H$ where $H$ is dense.
1314
-
1315
- So far, we have proven theorems that say, roughly, ``if $G$ is a graph with enough edges, then it subcontracts to some highly connected thing''. Now saying that $G$ subcontracts to some highly connected thing is equivalent to saying we can find disjoint non-empty connected subgraphs $D_1, \ldots, D_m$ such that there are many edges between the $D_i$. The lemma we are going to prove next says under certain conditions, we can choose the $D_i$ in a way that they contain certain prescribed points.
1316
-
1317
- \begin{defi}[$S$-cut]\index{$S$-cut}
1318
- Given $S \subseteq V(G)$, an $S$-cut is a pair $(A, B)$ of subsets of the vertices such that $A \cup B = V(G)$, $S \subseteq A$ and $e(A \setminus B, B \setminus A) = 0$. The \term{order} of the cut is $|A \cap B|$.
1319
-
1320
- We say $(A, B)$ \emph{avoids} $C$ if $A \cap V(C) = \emptyset$.
1321
- \end{defi}
1322
-
1323
- \begin{eg}
1324
- For any $S \subseteq A$, the pair $(A, V(G))$ is an $S$-cut.
1325
- \end{eg}
1326
-
1327
- \begin{lemma}
1328
- Let $d \geq 0$, $k \geq 2$ and $h \geq d + \lfloor 3k/2 \rfloor$ be integers.
1329
-
1330
- Let $G$ be a graph, $S= \{s_1, \ldots, s_k\} \subseteq V(G)$. Suppose there exist disjoint subgraphs $C_1, \ldots, C_h$ of $G$ such that
1331
- \begin{itemize}
1332
- \item[($*$)] Each $C_i$ is either connected, or each of its components meets $S$. Moreover, each $C_i$ is adjacent to all but at most $d$ of the $C_j$, $j \not= i$, not meeting $S$.
1333
- \item[($\dagger$)] Moreover, no $S$-cut of order $< k$ avoids $d + 1$ of $C_1, \ldots, C_h$.
1334
- \end{itemize}
1335
- Then $G$ contains disjoint non-empty connected subgraphs $D_1, \ldots, D_m$, where
1336
- \[
1337
- m = h - \lfloor k/2 \rfloor,
1338
- \]
1339
- such that for $1 \leq i \leq k$, $s_i \in D_i$, and $D_i$ is adjacent to all but at most $d$ of $D_{k + 1}, \ldots, D_m$.
1340
- \end{lemma}
1341
-
1342
- \begin{proof}
1343
- Suppose the theorem is not true. Then there is a minimal counterexample $G$ (with minimality defined with respect to proper subgraphs).
1344
-
1345
- We first show that we may assume $G$ has no isolated vertices. If $v$ is an isolated vertex, and $v \not \in S$, then we can simply apply the result to $G - v$. If $v \in S$, then the $S$-cut $(S, V(G) \setminus \{v\})$ of order $k - 1$ avoids $h - k \geq d + 1$ of the $C_i$'s, contradicting $(\dagger)$.
1346
-
1347
- \begin{claim}
1348
- If $(A, B)$ is an $S$-cut of order $k$ avoiding $d + 1$ many of the $C_i$'s, then $B = V(G)$ and $A$ is discrete.
1349
- \end{claim}
1350
- Indeed, given such an $S$-cut, we define
1351
- \begin{align*}
1352
- S' &= A \cap B\\
1353
- G' &= G[B] - E(S')\\
1354
- C_i' &= C_i \cap G'.
1355
- \end{align*}
1356
- We now make a further claim:
1357
- \begin{claim}
1358
- $(G', S', \{C_i'\})$ satisfies the hypothesis of the lemma.
1359
- \end{claim}
1360
- We shall assume this claim, and then come back to establishing the claim.
1361
-
1362
- Assuming these claims, we show that $G$ isn't a counterexample after all. Since $(G', S', \{C_i'\})$ satisfies the hypothesis of the lemma, by minimality, we can find subgraphs $D_1', \ldots, D_m'$ in $G'$ that satisfy the conclusion of the lemma for $G'$ and $S'$. Of course, these do not necessarily contain our $s_i$. However, we can just add paths from $S'$ to the $s_i$.
1363
-
1364
- Indeed, $G[A]$ has no $S$-cut $(A'', B'')$ of order less than $k$ with $S' \subseteq B''$, else $(A'', B'' \cup B)$ is an $S$-cut of $G$ of the forbidden kind, since $A$, and hence $A''$ avoids too many of the $C_i$. Hence by Menger's theorem, there are vertex disjoint paths $P_1, \ldots, P_k$ in $G[A]$ joining $s_i$ to $s_i'$ (for some labelling of $S'$). Then simply take
1365
- \[
1366
- D_i = D_i' \cup P_i.
1367
- \]
1368
-
1369
- \separator
1370
-
1371
- To check that $(G', S', \{C_i'\})$ satisfy the hypothesis of the lemma, we first need to show that the $C_i'$ are non-empty.
1372
-
1373
- If $(A, B)$ avoids $C_j$, then by definition $C_j \cap A = \emptyset$. So $C_j = C_j'$. By assumption, there are at least $d + 1$ many $C_j$'s for which this holds. Also, since these don't meet $S$, we know each $C_i$ is adjacent to at least one of these $C_j$'s. So in particular, $C_i'$ is non-empty: a vertex of $C_i$ adjacent to such a $C_j = C_j' \subseteq B \setminus A$ cannot lie in $A \setminus B$, so it lies in $B$ and hence in $C_i'$.
1374
-
1375
- Let's now check the conditions:
1376
- \begin{itemize}
1377
- \item[($*$)] For each $C_i'$, consider its components. If there is some component that does not meet $S'$, then this component is contained in $B \setminus A$, and it must also be a component of $C_i$. Since this component does not meet $S$, condition $(*)$ for $G$ forces $C_i$ to be connected. So this component is $C_i$. So $C_i' = C_i$. Thus, either $C_i'$ is connected, or all its components meet $S'$.
1378
-
1379
- Moreover, since any $C_j'$ not meeting $S'$ equals $C_j$, it follows that each $C_i$ and so each $C_i'$ is adjacent to all but at most $d$ of these $C_j'$s. Therefore $(*)$ holds.
1380
-
1381
- \item[($\dagger$)] If $(A', B')$ is an $S'$-cut in $G'$ avoiding some $C_i'$, then $(A \cup A', B')$ is an $S$-cut of $G$ of the same order, and it avoids $C_i$ (since $C_i'$ doesn't meet $S'$, and we have argued this implies $C_i' = C_i$, and so $C_i \cap A = \emptyset$). In particular, no $S'$-cut $(A', B')$ of $G'$ has order less than $k$ and still avoids $d + 1$ of $C_1', \ldots, C_h'$.
1382
- \end{itemize}
1383
-
1384
- \separator
1385
-
1386
- We can now use our original claim. We know what $S$-cuts of order $k$ look like, so we see that if there is an edge that does not join two $C_i$'s, then we can contract the edge to get a smaller counterexample. Recalling that $G$ has no isolated vertices, we see that in fact $V(G) = \bigcup V(C_i)$, and $|C_i| = 1$ unless $C_i \subseteq S$.
1387
-
1388
- Let
1389
- \[
1390
- C = \bigcup \{V(C_i): |C_i| \geq 2\}.
1391
- \]
1392
- We claim that there is a set $I$ of $|C|$ independent edges meeting $C$. If not, by Hall's theorem, we can find $X \subseteq C$ whose neighbourhood $Y$ in $V(G) - S$ satisfies $|Y| < |X|$. Then $(S \cup Y, V(G) - X)$ is an $S$-cut of order $|S| - |X| + |Y| < |S| = k$ avoiding $\geq |G| - |S| - |Y|$ many $C_i$'s, and this is
1393
- \[
1394
- \geq |G| - |S| - |X| + 1 \geq |G| - |C| - k + 1.
1395
- \]
1396
- But $h \leq |G| - |C| + \frac{|C|}{2}$, since $|G| - |C|$ is the number of $C_i$'s of size $1$, so we can bound the above by
1397
- \[
1398
- \geq h - \frac{|C|}{2} - k + 1 \geq h - \frac{3k}{2} + 1 \geq d + 1.
1399
- \]
1400
- This contradiction shows that $I$ exists.
1401
-
1402
- Now we can just write down the $D_i$'s. For $1 \leq i \leq k$, set $D_i$ to be $\{s_i\}$ if $s_i$ is a $C_\ell$ with $|C_\ell| = 1$, i.e.\ $s_i \not \in C$. If $s_i \in C$, let $D_i$ be the edge of $I$ meeting $s_i$.
1403
-
1404
- Let $D_{k + 1}, \ldots, D_m$ each be single vertices of $G - S - (\text{ends of }I)$. Note that these exist because $\geq |G| - k - |C| \geq h - \lfloor \frac{3k}{2}\rfloor = m - k$ vertices are available. Note that each $D_i$ contains a $C_\ell$ with $|C_\ell| = 1$. So each $D_i$ is joined to all but $\leq d$ many $D_j$'s.
1405
- \end{proof}
1406
-
1407
- The point of this result is to prove the following theorem:
1408
- \begin{thm}
1409
- Let $G$ be a graph with $\kappa(G) \geq 2k$ and $e(G) \geq 11k |G|$. Then $G$ is $k$-linked. In particular, $f(k) \leq 22k$.
1410
- \end{thm}
1411
- Note that if $\kappa(G) \geq 22k$ then $\delta(G) \geq 22k$, so $e(G) \geq 11k |G|$.
1412
-
1413
- \begin{proof}
1414
- We know that under these conditions, $G \succ H$ with $|H| \leq 11k + 2$ and $2\delta(H) \geq |H| + 4k - 1$. Let $h = |H|$, and $d = h - 1 - \delta (H)$. Note that
1415
- \[
1416
- h = 2d + 2 + 2 \delta(H) - h \geq 2d + 4k.
1417
- \]
1418
- Let $C_1, \ldots, C_h$ be connected subgraphs of $G$ which contract to form $H$. Then clearly each $C_i$ is joined to all but at most $h - 1 - \delta(H) = d$ other $C_j$'s. Now let $s_1, \ldots, s_k, t_1, \ldots, t_k$ be the vertices we want to link up. Let $S = \{s_1, \ldots, s_k, t_1, \ldots, t_k\}$. Observe that $G$ has no $S$-cut of order $< 2k$ at all since $\kappa(G) \geq 2k$. So the conditions are satisfied, but with $2k$ instead of $k$.
1419
-
1420
- So there are subgraphs $D_1, \ldots, D_m$, where $m = h - k \geq 2d + 3k$, as described. We may assume $s_i \in D_i$ and $t_i \in D_{k + i}$. Note that for each pair $(D_i, D_{k + i})$, we can find $\geq m - 2k - 2d \geq k$ of the sets $D_{2k + 1}, \ldots, D_m$ joined to both $D_i$ and $D_{k + i}$. So for each $i = 1, \ldots, k$, we can find an unused $D_\ell$ not among $D_1, \ldots, D_{2k}$ that we can use to connect up $D_i$ and $D_{k + i}$, hence $s_i$ and $t_i$.
1421
- \end{proof}
1422
-
1423
- In 2004, the coefficient was reduced from $11$ to $5$. This shows $f(k) \leq 10 k$. It is known that $f(1) = 3, f(2) = 6$ and $f(k) \geq 3k - 2$ for $k \geq 3$.
1424
-
1425
- Finally, we consider the function $t(t)$. We begin with the following trivial lemma:
1426
- \begin{lemma}
1427
- If $\delta(G) \geq t^2$ and $G$ is $\binom{t + 1}{2}$-linked, then $G \supseteq TK_t$.
1428
- \end{lemma}
1429
-
1430
- \begin{proof}
1431
- Since $\delta(G) \geq t^2$, we may pick vertices $v_1, \ldots, v_t$ and sets $U_1, \ldots, U_t$, all disjoint, so that $|U_i| = t - 1$ and $v_i$ is joined to $U_i$. Then $G - \{v_1, \ldots, v_t\}$ is still $\binom{t + 1}{2} - t = \binom{t}{2}$-linked. So the sets $U_1, \ldots, U_t$ may be joined up by vertex disjoint paths, one for each pair $(U_i, U_j)$, to form a $TK_t$ in $G$ with branch vertices $v_1, \ldots, v_t$.
1432
- \end{proof}
1433
-
1434
- To apply our previous theorem together with this lemma, we need the following lemma:
1435
- \begin{lemma}
1436
- Let $k, d \in \N$ with $k \leq \frac{d + 1}{2}$, and suppose $e(G) \geq d|G|$. Then $G$ contains a subgraph $H$ with
1437
- \[
1438
- e(H) = d|H| - kd + 1,\quad \delta(H) \geq d + 1,\quad \kappa(H) \geq k.
1439
- \]
1440
- \end{lemma}
1441
-
1442
- \begin{proof}
1443
- Define
1444
- \[
1445
- \mathcal{E}_{d, k} = \{F \subseteq G: |F| \geq d, e(F) > d|F| - kd\}.
1446
- \]
1447
- We observe that $D \in \mathcal{E}_{d, k}$, but $K_d \not \in \mathcal{E}_{d, k}$. Hence $|F| > d$ for all $F \in \mathcal{E}_{d, k}$. let $H$ be a subgraph of $G$ minimal with respect to $H \in \mathcal{E}_{d, k}$. Then $e(H) = d|H| - kd + 1$, and $\delta(H) \geq d + 1$, else $H - v \in \mathcal{E}_{d,k}$ for some $v \in H$.
1448
-
1449
- We only have to worry about the connectivity condition. Suppose $S$ is a cutset of $H$. Let $C$ be a component of $H - S$. Since $\delta(H) \geq d + 1$, we have $|C \cup S| \geq d + 2$ and $|H - C| \geq d + 2$.
1450
-
1451
- Since neither $C \cup S$ nor $H - C$ lies in $\mathcal{E}_{d, k}$ (by the minimality of $H$), we have
1452
- \[
1453
- e(C \cup S) \leq d|C \cup S| - kd \leq d|C| + d |S| - kd
1454
- \]
1455
- and
1456
- \[
1457
- e(H - C) \leq d|H| - d|C| - kd.
1458
- \]
1459
- So we find that
1460
- \[
1461
- e(H) \leq d|H| + d|S| - 2 kd.
1462
- \]
1463
- But we know this is equal to $d|H| - kd + 1$. So we must have $|S| \geq k$.
1464
- \end{proof}
1465
-
1466
- \begin{thm}
1467
- \[
1468
- \frac{t^2}{16} \leq t(t) \leq 13 \binom{t + 1}{2}.
1469
- \]
1470
- \end{thm}
1471
- Recall our previous upper bound was something like $2^{t^2}$. So this is an improvement.
1472
- \begin{proof}
1473
- The lower bound comes from (disjoint copies of) $K_{t^2/8, t^2/8}$. For the uppper bound, write
1474
- \[
1475
- d = 13 \binom{t + 1}{2},\quad k = t\cdot (t + 1).
1476
- \]
1477
- Then we can find a subgraph $H$ with $\delta(H) \geq d + 1$ and $\kappa(H) \geq k$.
1478
-
1479
- Certainly $|H| \geq d + 2$, and we know
1480
- \[
1481
- e(H) \geq d |H| - kd > (d - k)|H| \geq 11 \binom{t + 1}{2} |H|.
1482
- \]
1483
- So we know $H$ is $\binom{t + 1}{2}$-linked, and so $H \supseteq TK_t$.
1484
- \end{proof}
1485
- So $t(t)$ is, more or less, $t^2$!
1486
-
1487
- \section{Extremal hypergraphs}
1488
- Define $K_\ell^\ell(t)$ to be the complete $\ell$-partite $\ell$-uniform hypergraph with $\ell$ classes of size $t$, containing all $t^{\ell}$ edges with one vertex in each class.
1489
-
1490
- \begin{thm}[Erd\"os]
1491
- Let $G$ be $\ell$-uniform of order $n$ with $p \binom{n}{\ell}$ edges, where $p \geq 2n^{-1/t^{\ell - 1}}$. Then $G$ contains $K_{\ell}^{\ell}(t)$ (provided $n$ is large).
1492
- \end{thm}
1493
- If we take $\ell = 2$, then this agrees with the bound we already know for the extremal function of $K_{t, t}$.
1494
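- Explicitly (a quick computation): for $\ell = 2$ the hypothesis asks for at least
- \[
-   2n^{-1/t} \binom{n}{2} = (1 + o(1))\, n^{2 - 1/t}
- \]
- edges, which is the order of magnitude we already know for $\ex(n; K_{t, t})$.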
-
1495
- \begin{proof}
1496
- Assume that $t \geq 2$. We proceed by induction on $\ell \geq 2$. For each $(\ell - 1)$-set $\sigma \subseteq V(G)$, let
1497
- \[
1498
- N(\sigma) = \{v: \sigma \cup \{v\} \in E(G)\}.
1499
- \]
1500
- Then the average degree is
1501
- \[
1502
- \binom{n}{\ell - 1}^{-1} \sum |N(\sigma)| = \binom{n}{\ell - 1}^{-1} \ell |E(G)| = p (n - \ell + 1).
1503
- \]
1504
- For each of the $\binom{n}{t}$ many $t$-sets $T \subseteq V(G)$, let
1505
- \[
1506
- D(T) = \{\sigma : T \subseteq N(\sigma)\}.
1507
- \]
1508
- Then
1509
- \[
1510
- \sum_T |D(T)| = \sum_\sigma \binom{|N(\sigma)|}{t} \geq \binom{n}{\ell - 1} \binom{p(n - \ell + 1)}{t},
1511
- \]
1512
- where the inequality follows from convexity, since we can assume that $p(n - \ell + 1) \geq t - 1$ if $n$ is large.
1513
-
1514
- In particular, there exists a $T$ with
1515
- \[
1516
- |D(T)| \geq \binom{n}{t}^{-1} \binom{n}{\ell - 1} \binom{p(n - \ell + 1)}{t} \geq \frac{1}{2} p^t \binom{n}{\ell - 1}.
1517
- \]
1518
- when $n$ is large. By our lower bound on $p$, we can write this as
1519
- \[
1520
- |D(T)| \geq 2^{t - 1} n^{-1/t^{\ell - 2}} \binom{n}{\ell - 1}.
1521
- \]
1522
- If $\ell = 2$, then this is $\geq 2^{t - 1} \geq t$. So $T$ is joined completely to some $t$-set, giving a $K_{t, t}$.
1523
-
1524
- If $\ell \geq 3$, then $|D(T)| \geq 2 n^{-1/t^{\ell - 2}} \binom{n}{\ell - 1}$. So, by induction, the $(\ell - 1)$-uniform hypergraph induced by the $\sigma$ with $T \subseteq N(\sigma)$ contains $K_{\ell - 1}^{\ell - 1}(t)$, giving $K_{\ell}^\ell(t)$ with $T$.
1525
- \end{proof}
1526
- We can do a simple random argument to show that this is the right order of magnitude.
1527
-
1528
- \printindex
1529
- \end{document}
 
books/cam/III_M/hydrodynamic_stability.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/local_fields.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/modern_statistical_methods.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/percolation_and_random_walks_on_graphs.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/quantum_computation.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/quantum_field_theory.tex DELETED
The diff for this file is too large to render. See raw diff
 
books/cam/III_M/symmetries_fields_and_particles.tex DELETED
The diff for this file is too large to render. See raw diff