## 1016_(GTM181)Numerical Analysis
Definition 10.12 The predictor-corrector method for the Euler method for the numerical solution of the initial value problem (10.7), also known as the improved Euler method or Heun method, constructs approximations \( {u}_{j} \) to the exact solution \( u\left( {x}_{j}\right) \) at the equidistant grid points \[ {x}_{j} \mathrel{\text{:=}} {x}_{0} + {jh},\;j = 1,2,\ldots , \] by \[ {u}_{j + 1} \mathrel{\text{:=}} {u}_{j} + \frac{h}{2}\left\lbrack {f\left( {{x}_{j},{u}_{j}}\right) + f\left( {{x}_{j + 1},{u}_{j} + {hf}\left( {{x}_{j},{u}_{j}}\right) }\right) }\right\rbrack ,\;j = 0,1,\ldots \] Example 10.13 Consider again the initial value problem from Example 10.7. Table 10.2 gives the difference between the exact solution as computed by the Picard-Lindelöf iterations and the approximate solution obtained by the improved Euler method for various step sizes \( h \). We observe quadratic convergence as \( h \rightarrow 0 \). TABLE 10.2. Numerical example for the improved Euler method <table><thead><tr><th>\( x \)</th><th>\( h = {0.1} \)</th><th>\( h = {0.01} \)</th><th>\( h = {0.001} \)</th></tr></thead><tr><td>0.1</td><td>-0.00016667</td><td>-0.00000167</td><td>-0.00000002</td></tr><tr><td>0.2</td><td>-0.00033326</td><td>-0.00000333</td><td>-0.00000003</td></tr><tr><td>0.3</td><td>-0.00049955</td><td>-0.00000500</td><td>-0.00000005</td></tr><tr><td>0.4</td><td>-0.00066530</td><td>-0.00000668</td><td>-0.00000007</td></tr><tr><td>0.5</td><td>-0.00083027</td><td>-0.00000837</td><td>-0.00000009</td></tr></table> In the following section we will show that the Euler method and the improved Euler method are convergent with convergence order one and two, respectively, as observed in the special cases of Examples 10.9 and 10.13. ## 10.3 Single-Step Methods We generalize the Euler methods to a broader class of single-step methods by the following definition. 
Definition 10.14 Single-step methods for the approximate solution of the initial value problem \[ {u}^{\prime } = f\left( {x, u}\right) ,\;u\left( {x}_{0}\right) = {u}_{0}, \] construct approximations \( {u}_{j} \) to the exact solution \( u\left( {x}_{j}\right) \) at the equidistant grid points \[ {x}_{j} \mathrel{\text{:=}} {x}_{0} + {jh},\;j = 1,2,\ldots , \] with step size \( h \) by \[ {u}_{j + 1} \mathrel{\text{:=}} {u}_{j} + {h\varphi }\left( {{x}_{j},{u}_{j};h}\right) ,\;j = 0,1,\ldots , \] where the function \( \varphi : G \times \left( {0,\infty }\right) \rightarrow \mathbb{R} \) is given in terms of the right-hand side \( f : G \rightarrow \mathbb{R} \) of the differential equation. Example 10.15 The Euler method and the improved Euler method are single-step methods with \[ \varphi \left( {x, u;h}\right) = f\left( {x, u}\right) \] \( \left( {10.10}\right) \) and \[ \varphi \left( {x, u;h}\right) = \frac{1}{2}\left\lbrack {f\left( {x, u}\right) + f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) }\right\rbrack \] (10.11) respectively. The function \( \varphi \) describes how the differential equation \[ {u}^{\prime } = f\left( {x, u}\right) \] is approximated by the difference equation \[ \frac{1}{h}\left\lbrack {u\left( {x + h}\right) - u\left( x\right) }\right\rbrack = \varphi \left( {x, u;h}\right) . \] From a reasonable approximation we expect that the exact solution to the initial value problem approximately satisfies the difference equation. Hence, \[ \frac{1}{h}\left\lbrack {u\left( {x + h}\right) - u\left( x\right) }\right\rbrack - \varphi \left( {x, u;h}\right) \rightarrow 0,\;h \rightarrow 0, \] must be fulfilled for the exact solution \( u \) . We also expect that the order of this convergence will influence the accuracy of the approximate solution. These considerations are made more precise by the following definition. 
Definition 10.16 For each \( \left( {x, u}\right) \in G \) denote by \( \eta = \eta \left( \xi \right) \) the unique solution to the initial value problem \[ {\eta }^{\prime } = f\left( {\xi ,\eta }\right) ,\;\eta \left( x\right) = u, \] with initial data \( \left( {x, u}\right) \) . Then \[ \Delta \left( {x, u;h}\right) \mathrel{\text{:=}} \frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack - \varphi \left( {x, u;h}\right) \] is called the local discretization error. The single-step method is called consistent (with the initial value problem) if \[ \mathop{\lim }\limits_{{h \rightarrow 0}}\Delta \left( {x, u;h}\right) = 0 \] uniformly for all \( \left( {x, u}\right) \in G \), and it is said to have consistency order \( p \) if \[ \left| {\Delta \left( {x, u;h}\right) }\right| \leq K{h}^{p} \] for all \( \left( {x, u}\right) \in G \), all \( h > 0 \), and some constant \( K \) . Without loss of generality, in the sequel we will always assume that \( f \) (and later also derivatives of \( f \) ) are uniformly continuous and bounded on \( G \) . This can always be achieved by reducing \( G \) to a smaller domain. Theorem 10.17 A single-step method is consistent if and only if \[ \mathop{\lim }\limits_{{h \rightarrow 0}}\varphi \left( {x, u;h}\right) = f\left( {x, u}\right) \] uniformly for all \( \left( {x, u}\right) \in G \) . Proof. Since we assume \( f \) to be bounded, we have \[ \eta \left( {x + t}\right) - \eta \left( x\right) = {\int }_{0}^{t}{\eta }^{\prime }\left( {x + s}\right) {ds} = {\int }_{0}^{t}f\left( {x + s,\eta \left( {x + s}\right) }\right) {ds} \rightarrow 0,\;t \rightarrow 0, \] uniformly for all \( \left( {x, u}\right) \in G \) . 
Therefore, since we also assume that \( f \) is uniformly continuous, it follows that \[ \frac{1}{h}\left| {{\int }_{0}^{h}\left\lbrack {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right\rbrack {dt}}\right| \leq \mathop{\max }\limits_{{0 \leq t \leq h}}\left| {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right| \] \[ = \mathop{\max }\limits_{{0 \leq t \leq h}}\left| {f\left( {x + t,\eta \left( {x + t}\right) }\right) - f\left( {x,\eta \left( x\right) }\right) }\right| \rightarrow 0,\;h \rightarrow 0, \] uniformly for all \( \left( {x, u}\right) \in G \) . From this we obtain that \[ \Delta \left( {x, u;h}\right) + \varphi \left( {x, u;h}\right) - f\left( {x, u}\right) = \frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack - {\eta }^{\prime }\left( x\right) \] \[ = \frac{1}{h}{\int }_{0}^{h}\left\lbrack {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right\rbrack {dt} \rightarrow 0,\;h \rightarrow 0, \] uniformly for all \( \left( {x, u}\right) \in G \) . This now implies that the two conditions \( \Delta \rightarrow 0, h \rightarrow 0 \), and \( \varphi \rightarrow f, h \rightarrow 0 \), are equivalent. Theorem 10.18 The Euler method is consistent. If \( f \) is continuously differentiable in \( G \), then the Euler method has consistency order one. Proof. Consistency is a consequence of Theorem 10.17 and the fact that \( \varphi \left( {x, u;h}\right) = f\left( {x, u}\right) \) for Euler’s method. If \( f \) is continuously differentiable, then from the differential equation \( {\eta }^{\prime } = f\left( {\xi ,\eta }\right) \) it follows that \( \eta \) is twice continuously differentiable with \[ {\eta }^{\prime \prime } = {f}_{x}\left( {\xi ,\eta }\right) + {f}_{u}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) . 
\] \( \left( {10.12}\right) \) Therefore, Taylor's formula yields \[ \left| {\Delta \left( {x, u;h}\right) }\right| = \left| {\frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack - {\eta }^{\prime }\left( x\right) }\right| = \frac{h}{2}\left| {{\eta }^{\prime \prime }\left( {x + {\theta h}}\right) }\right| \leq {Kh} \] for some \( 0 < \theta < 1 \) and some bound \( K \) for the function \( \frac{1}{2}\left( {{f}_{x} + {f}_{u}f}\right) \) . Theorem 10.19 The improved Euler method is consistent. If \( f \) is twice continuously differentiable in \( G \), then the improved Euler method has consistency order two. Proof. Consistency follows from Theorem 10.17 and \[ \varphi \left( {x, u;h}\right) = \frac{1}{2}\left\lbrack {f\left( {x, u}\right) + f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) }\right\rbrack \rightarrow f\left( {x, u}\right) ,\;h \rightarrow 0. \] If \( f \) is twice continuously differentiable, then (10.12) implies that \( \eta \) is three times continuously differentiable with \[ {\eta }^{\prime \prime \prime } = {f}_{xx}\left( {\xi ,\eta }\right) + 2{f}_{xu}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) + {f}_{uu}\left( {\xi ,\eta }\right) {f}^{2}\left( {\xi ,\eta }\right) + {f}_{u}\left( {\xi ,\eta }\right) {f}_{x}\left( {\xi ,\eta }\right) + {f}_{u}^{2}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) . \] Hence Taylor's formula yields \[ \left| {\eta \left( {x + h}\right) - \eta \left( x\right) - h{\eta }^{\prime }\left( x\right) - \frac{{h}^{2}}{2}\;{\eta }^{\prime \prime }\left( x\right) }\right| = \frac{{h}^{3}}{6}\;\left| {{\eta }^{\prime \prime \prime }\left( {x + {\theta h}}\right) }\right| \leq {K}_{1}{h}^{3} \] (10.13) for some \( 0 < \theta < 1 \) and a bound \( {K}_{1} \) for \( \frac{1}{6}\left( {{f}_{xx} + 2{f}_{xu}f + {f}_{uu}{f}^{2} + {f}_{u}{f}_{x} + {f}_{u}^{2}f}\right) \) . 
From Taylor's formula for functions of two variables we have the estimate \[ \left| {f\left( {x + h, u + k}\right) - f\left( {x, u}\right) - h{f}_{x}\left( {x, u}\right) - k{f}_{u}\left( {x, u}\right) }\right| \leq \frac{1}{2}{K}_{2}{\left( \left| h\right| + \left| k\right| \right) }^{2} \] with a bound \( {K}_{2} \) for the second derivatives \( {f}_{xx},{f}_{xu} \), and \( {f}_{uu} \) . From this, setting \( k = {hf}\left( {x, u}\right) \), in view of (10.12) we obtain \[ \left| {f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) - f\left( {x, u}\right) - h{\eta }^{\prime \prime }\left( x\right) }\right| \leq \frac{1}{2}{K}_{2}{\left( 1 + {K}_{0}\right) }^{2}{h}^{2} \]
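The quadratic convergence established above is easy to observe numerically. The following sketch applies the improved Euler (Heun) scheme of Definition 10.12 to the hypothetical test problem \( u' = -u \), \( u(0) = 1 \) (exact solution \( e^{-x} \); this is an illustrative choice, not the problem of Example 10.7), and checks that shrinking \( h \) by a factor of 10 shrinks the error by roughly \( 10^2 \).

```python
# A minimal sketch of the improved Euler (Heun) method of Definition 10.12.
# The test problem u' = -u, u(0) = 1 (exact solution e^{-x}) is a
# hypothetical choice for illustration.
import math

def improved_euler(f, x0, u0, h, n):
    """Return u_n, the approximation at x_n = x0 + n*h."""
    x, u = x0, u0
    for _ in range(n):
        k = f(x, u)                                # predictor slope f(x_j, u_j)
        u = u + h / 2 * (k + f(x + h, u + h * k))  # corrector average
        x += h
    return u

f = lambda x, u: -u
exact = math.exp(-0.5)
errors = []
for h in (0.1, 0.01):
    n = round(0.5 / h)                             # integrate up to x = 0.5
    errors.append(abs(improved_euler(f, 0.0, 1.0, h, n) - exact))

# For a second-order method the error ratio should be roughly 10^2 = 100.
ratio = errors[0] / errors[1]
print(errors[1] < errors[0], 80 < ratio < 130)
```

The observed ratio near 100 is exactly the consistency-order-two behavior proved in Theorem 10.19.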
## 1282_[张恭庆] Methods in Nonlinear Analysis
Definition 3.7.13 Assume that \( F \in {C}^{1}\left( {U, Y}\right) \) is \( \Omega \)-admissible, \( p \in U \) is a base point of \( F \), and \( y \in Y \smallsetminus F\left( {\partial \Omega }\right) \) . We define the degree \( {\deg }_{p}\left( {F,\Omega, y}\right) = \) \( {\deg }_{p}\left( {F,\Omega, z}\right) \), where \( z \in {B}_{\epsilon }\left( y\right) \subset Y \smallsetminus F\left( {\partial \Omega }\right) \) is a regular value of \( {\left. F\right| }_{\Omega } \) . Again, the degree is independent of the choice of \( z \) . For different base points \( {p}_{0},{p}_{1} \), one has \[ {\deg }_{{p}_{0}}\left( {F,\Omega, y}\right) = \sigma \left( {{F}^{\prime } \circ \gamma }\right) {\deg }_{{p}_{1}}\left( {F,\Omega, y}\right) , \] where \( \gamma \in C\left( {\left\lbrack {0,1}\right\rbrack, U}\right) \) with \( \gamma \left( i\right) = {p}_{i}, i = 0,1 \) . The following fundamental properties hold: (Homotopy invariance) Let \( H \) be an \( \Omega \)-admissible homotopy. Suppose that \( y \in Y \smallsetminus H\left( {\left\lbrack {0,1}\right\rbrack \times \partial \Omega }\right) \) and that \( p \) is a base point of \( H\left( {t, \cdot }\right) \) for all \( t \in \left\lbrack {0,1}\right\rbrack \) . Then \[ {\deg }_{p}\left( {H\left( {1, \cdot }\right) ,\Omega, y}\right) = {\deg }_{p}\left( {H\left( {0, \cdot }\right) ,\Omega, y}\right) . \] (Additivity) Suppose that \( \Omega = {\Omega }_{1} \cup {\Omega }_{2} \), where \( {\Omega }_{i}, i = 1,2 \), are disjoint open subsets of \( U \), that \( F \in {C}^{1}\left( {U, Y}\right) \) is \( \Omega \)-admissible, and that \( p \in U \) is a base point of \( F \) . Then \( F \) is \( {\Omega }_{i} \)-admissible for \( i = 1,2 \) . 
Moreover, if \( y \in Y \smallsetminus F\left( {\partial \Omega }\right) \), then \( y \in Y \smallsetminus F\left( {\partial {\Omega }_{i}}\right), i = 1,2 \), and \[ {\deg }_{p}\left( {F,\Omega, y}\right) = {\deg }_{p}\left( {F,{\Omega }_{1}, y}\right) + {\deg }_{p}\left( {F,{\Omega }_{2}, y}\right) . \] (Normality) Let \( \Omega \) be an open subset of \( X \), let \( p \in X \), and let \( y \notin \partial \Omega \) . Then \( {\deg }_{p}\left( {\mathrm{{id}},\Omega, y}\right) = 1 \) if \( y \in \Omega \), and \( = 0 \) if \( y \notin \Omega \) . Obviously, the excision property and the Kronecker existence theorem hold as well. The generalized degree has been used to extend the Rabinowitz global bifurcation theorem [PR1], and has been applied to the study of bifurcation problems for semilinear elliptic equations on \( {\mathbb{R}}^{n} \) (see [JLS]). ## Minimization Methods The calculus of variations studies the optimal shape, time, velocity, energy, volume, gain, etc., under given conditions. Laws in astronomy, mechanics, physics, and all the natural sciences and engineering technologies, as well as in economic behavior, obey variational principles. The main object of the calculus of variations is to find the solutions governed by these principles. The subject traces back to Fermat, who postulated that light follows the path of least possible time; at its core it is concerned with finding the minimizers of a given functional. Starting from the brothers Johann and Jakob Bernoulli and L. Euler, the calculus of variations has a long history, and it renews itself with the developments of mathematics and the other sciences. The problem is formulated as follows: Assume that \( f : {\mathbb{R}}^{n} \times {\mathbb{R}}^{N} \times {\mathbb{R}}^{nN} \rightarrow {\mathbb{R}}^{1} \) is a continuous function, and that \( E \) is a set of \( N \)-vector functions. Let \( J \) be a functional defined on \( E \) : \[ J\left( u\right) = \int f\left( {x, u\left( x\right) ,\nabla u\left( x\right) }\right) {dx}. 
\] Find \( {u}_{0} \in E \) such that \[ J\left( {u}_{0}\right) = \operatorname{Min}\{ J\left( u\right) \mid u \in E\} . \] The central problems in the calculus of variations are the existence and the regularity of the minimizers. These are the 19th and the 20th problems among the 23 problems posed by Hilbert in his famous lecture delivered at the International Congress of Mathematicians in 1900. This chapter is devoted to an introduction to the minimization method. We restrict our attention to the existence of minimizers, not to their regularity, although the latter is a very important and rich part of the theory of the calculus of variations. The direct method, studied in Sect. 4.2, is the core of the minimization method, in which \( {\mathrm{w}}^{ * } \)-compactness and \( {\mathrm{w}}^{ * } \) lower semi-continuity (w*l.s.c.) play crucial roles. A necessary and sufficient condition on the integrand \( f \) for the \( {\mathrm{w}}^{ * } \) l.s.c. of the functional \( J \) on the Sobolev space \( {W}^{1, p}, p \in (1,\infty \rbrack \), is studied in Sect. 4.3. In the case when \( {\mathrm{w}}^{ * } \) l.s.c. fails, the minimizing sequence may fail to converge, or may converge to something that is not a minimizer. The Young measure and the relaxation functional are introduced in Sect. 4.4. In the spaces \( {W}^{1,1} \) and \( {L}^{1} \) the closed balls are no longer \( {\mathrm{w}}^{ * } \)-compact. Instead, we consider the \( {BV} \) space and the Hardy space, respectively. They are studied in Sect. 4.5. Two interesting applications are given in Sect. 4.6. One is on phase transitions and the other on segmentation in image processing. The concentration phenomenon, which happens in many problems from geometry to physics, concerns the lack of compactness. We give a brief introduction to the methods for handling this phenomenon in Sect. 4.7. The minimax method dealing with saddle points is briefly introduced in Sect. 4.8. 
With the aid of the Ekeland variational principle and the Palais-Smale condition, it is studied in the spirit of the minimization method. Section 4.1 is an introduction, where various variational principles and their reductions are introduced. ## 4.1 Variational Principles Let \( X \) be a real Banach space, and \( U \subset X \) be an open set. A point \( {x}_{0} \in U \) is called a local maximum (or minimum) point of \( f : U \rightarrow {\mathbb{R}}^{1} \), if \[ f\left( x\right) \leq f\left( {x}_{0}\right) \;\left( {\text{ or }f\left( x\right) \geq f\left( {x}_{0}\right) }\right) \;\forall x \in {B}_{\varepsilon }\left( {x}_{0}\right) \subset U, \] for some \( \varepsilon > 0 \) . If further, \( f \) is \( G \) -differentiable at \( {x}_{0} \), then \[ {df}\left( {{x}_{0}, h}\right) = {\left. \frac{d}{dt}f\left( {x}_{0} + th\right) \right| }_{t = 0} = \theta \;\forall h \in X, \] or simply \[ {df}\left( {x}_{0}\right) = \theta . \] (4.1) Moreover, if \( f \) has second-order \( G \) -derivatives at \( {x}_{0} \), then \[ {d}^{2}f\left( {x}_{0}\right) \left( {h, h}\right) \leq 0\;\left( {\text{ or } \geq 0}\right) \;\forall h \in X. \] In particular, if \( X \) is a Hilbert space, and \( f \in {C}^{2} \) then \( {d}^{2}f\left( {x}_{0}\right) \) is a selfadjoint operator. We conclude that \( {d}^{2}f\left( {x}_{0}\right) \) is nonnegative (nonpositive) if \( {x}_{0} \) is a local minimum (or maximum) point. Conversely, \( {x}_{0} \) is a local minimum (or maximum) point if \( {d}^{2}f\left( {x}_{0}\right) \) is positive (or negative) definite. ## 4.1.1 Constraint Problems Let \( X, Y \) be real Banach spaces, \( U \subset X \) be an open set. Suppose that \( f \) : \( U \rightarrow {\mathbb{R}}^{1}, g : U \rightarrow Y \) are \( {C}^{1} \) mappings. Let \[ M = \{ x \in U \mid g\left( x\right) = \theta \} . 
\] Find the necessary condition for \[ \mathop{\min }\limits_{{x \in M}}f\left( x\right) \] (4.2) Theorem 4.1.1 (Ljusternik) Suppose that \( {x}_{0} \in M \) solves (4.2), and that \( \operatorname{Im}{g}^{\prime }\left( {x}_{0}\right) \) is closed. Then \( \exists \left( {\lambda ,{y}^{ * }}\right) \in {\mathbb{R}}^{1} \times {Y}^{ * } \) such that \( \left( {\lambda ,{y}^{ * }}\right) \neq \left( {0,\theta }\right) \), and \[ \lambda {f}^{\prime }\left( {x}_{0}\right) + {g}^{\prime }{\left( {x}_{0}\right) }^{ * }{y}^{ * } = \theta . \] (4.3) Furthermore, if \( \operatorname{Im}{g}^{\prime }\left( {x}_{0}\right) = Y \), then \( \lambda \neq 0 \) . Proof. In the case where \( {Y}_{1} = \operatorname{Im}{g}^{\prime }\left( {x}_{0}\right) \subsetneqq Y \), the conclusion (4.3) is trivial; one may choose \( \lambda = 0 \) and \( {y}^{ * } \in {Y}_{1}^{ \bot } \mathrel{\text{:=}} \left\{ {{z}^{ * } \in {Y}^{ * } \mid \left\langle {{z}^{ * }, z}\right\rangle = 0\forall z \in {Y}_{1}}\right\} \) . We assume \( {Y}_{1} = Y \) . The tangent space \( {T}_{{x}_{0}}\left( M\right) \) of \( M \) at \( {x}_{0} \) is as follows: \[ {T}_{{x}_{0}}\left( M\right) \] \[ = \left\{ {h \in X\mid \exists \varepsilon > 0,\exists v \in {C}^{1}\left( {\left( {-\varepsilon ,\varepsilon }\right), X}\right) ,{x}_{0} + v\left( t\right) \in M, v\left( 0\right) = \theta ,\dot{v}\left( 0\right) = h}\right\} . \] We want to prove that \( {T}_{{x}_{0}}\left( M\right) = \ker {g}^{\prime }\left( {x}_{0}\right) \) . In order to avoid technical complication, we make an additional assumption: Either \( {g}^{\prime }\left( {x}_{0}\right) \) is a Fredholm operator, or \( X \) is a Hilbert space. The assumption is superfluous, because a modified IFT has been studied in [De] (pp. 334) to improve the proof. Indeed, from \[ g\left( {{x}_{0} + v\left( t\right) }\right) = \theta \] it follows that \[ {\left. 
{g}^{\prime }\left( {x}_{0}\right) h = \frac{d}{dt}g\left( {x}_{0} + v\left( t\right) \right) \right| }_{t = 0} = 0\forall h \in {T}_{{x}_{0}}\left( M\right) , \] i.e., \( {T}_{{x}_{0}}\left( M\right) \subset \ker {g}^{\prime }\left( {x}_{0}\right) \) . On the other hand, if \( h \in \ker {g}^{\prime }\left( {x}_{0}\right) \), one solves the equation: \[ g\left( {{x}_{0} + {th} + w\left( t\right) }\right) = \theta , \] for \( w \in {C}^{1}\left( {\left( {-\varepsilon ,\varepsilon }\right) ,{X}_{1}}\right), w\left( 0\right) = \theta \), where \( {X}_{1} \) is the co
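In finite dimensions the multiplier rule (4.3) of Theorem 4.1.1 can be illustrated concretely. The sketch below uses a hypothetical objective \( f(x, y) = x + y \) and constraint \( g(x, y) = x^2 + y^2 - 1 \) (neither is taken from the text) and checks that at the constrained minimizer the gradient of \( f \) is a scalar multiple of the gradient of \( g \); since \( g'(x_0) \) is surjective here, this is (4.3) with \( \lambda \neq 0 \).

```python
# A finite-dimensional sketch of the Lagrange multiplier rule (4.3).
# f(x, y) = x + y and g(x, y) = x^2 + y^2 - 1 are hypothetical choices;
# the constrained minimizer is x0 = (-1/sqrt(2), -1/sqrt(2)).
import math

def grad_f(x, y):
    return (1.0, 1.0)

def grad_g(x, y):
    return (2.0 * x, 2.0 * y)

x0 = (-1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0))
gf, gg = grad_f(*x0), grad_g(*x0)

# Solve grad f = mu * grad g from the first component; the second component
# must then agree, which is the content of (4.3) with lambda = 1, y* = -mu.
mu = gf[0] / gg[0]
assert abs(gf[1] - mu * gg[1]) < 1e-12
print(round(mu, 6))  # mu = -1/sqrt(2)
```

The same parallel-gradients picture is exactly what the identity \( T_{x_0}(M) = \ker g'(x_0) \) delivers in the infinite-dimensional proof.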
## 1048_(GTM209)A Short Course on Spectral Theory
Definition 1.7.4. An element \( x \) of a Banach algebra \( A \) (with or without unit) is called quasinilpotent if \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} = 0 \] Significantly, quasinilpotence is characterized quite simply in spectral terms. Corollary 1. An element \( x \) of a unital Banach algebra \( A \) is quasinilpotent iff \( \sigma \left( x\right) = \{ 0\} \) . Proof. \( x \) is quasinilpotent \( \Leftrightarrow r\left( x\right) = 0 \Leftrightarrow \sigma \left( x\right) = \{ 0\} \) . Exercises. (1) Let \( {a}_{1},{a}_{2},\ldots \) be a sequence of complex numbers such that \( {a}_{n} \rightarrow 0 \) as \( n \rightarrow \infty \) . Show that the associated weighted shift operator on \( {\ell }^{2} \) (see the Exercises of Section 1.6) has spectrum \( \{ 0\} \) . (2) Consider the simplex \( {\Delta }_{n} \subset {\left\lbrack 0,1\right\rbrack }^{n} \) defined by \[ {\Delta }_{n} = \left\{ {\left( {{x}_{1},\ldots ,{x}_{n}}\right) \in {\left\lbrack 0,1\right\rbrack }^{n} : {x}_{1} \leq {x}_{2} \leq \cdots \leq {x}_{n}}\right\} \] Show that the volume of \( {\Delta }_{n} \) is \( 1/n! \). Give a decent proof here: For example, you might consider the natural action of the permutation group \( {S}_{n} \) on the cube \( {\left\lbrack 0,1\right\rbrack }^{n} \) and think about how permutations act on \( {\Delta }_{n} \) . (3) Let \( k\left( {x, y}\right) \) be a Volterra kernel as in Example 1.1.4, and let \( K \) be its corresponding integral operator on the Banach space \( C\left\lbrack {0,1}\right\rbrack \) . Estimate the norms \( \begin{Vmatrix}{K}^{n}\end{Vmatrix} \) by showing that there is a positive constant \( M \) such that for every \( f \in C\left\lbrack {0,1}\right\rbrack \) and every \( n = 1,2,\ldots \) , \[ \begin{Vmatrix}{{K}^{n}f}\end{Vmatrix} \leq \frac{{M}^{n}}{n!}\parallel f\parallel \] (4) Let \( K \) be a Volterra operator as in the preceding exercise. 
Show that for every complex number \( \lambda \neq 0 \) and every \( g \in C\left\lbrack {0,1}\right\rbrack \), the Volterra equation of the second kind \( {Kf} - {\lambda f} = g \) has a unique solution \( f \in C\left\lbrack {0,1}\right\rbrack \) . ## 1.8. Ideals and Quotients The purpose of this section is to collect some basic information about ideals in Banach algebras and their quotient algebras. We begin with a complex algebra \( A \) . Definition 1.8.1. An \( {ideal} \) in \( A \) is a linear subspace \( I \subseteq A \) that is invariant under both left and right multiplication, \( {AI} + {IA} \subseteq I \) . There are two trivial ideals, namely \( I = \{ 0\} \) and \( I = A \), and \( A \) is called simple if these are the only ideals. An ideal is proper if it is not all of \( A \) . Suppose now that \( I \) is a proper ideal of \( A \) . Forming the quotient vector space \( A/I \), we have a natural linear map \( x \in A \mapsto \dot{x} = x + I \in A/I \) of \( A \) onto \( A/I \) . Since \( I \) is a two-sided ideal, one can unambiguously define a multiplication in \( A/I \) by \[ \left( {x + I}\right) \cdot \left( {y + I}\right) = {xy} + I,\;x, y \in A. \] This multiplication makes \( A/I \) into a complex algebra, and the natural map \( x \mapsto \dot{x} \) becomes a surjective homomorphism of complex algebras having the given ideal \( I \) as its kernel. This information is conveniently summarized in the short exact sequence of complex algebras (1.15) \[ 0 \rightarrow I \rightarrow A \rightarrow A/I \rightarrow 0 \] the map of \( I \) to \( A \) being the inclusion map, and the map of \( A \) onto \( A/I \) being \( x \mapsto \dot{x} \) . A basic philosophical principle of mathematics is to determine what information about \( A \) can be extracted from corresponding information about both the ideal \( I \) and its quotient \( A/I \) . For example, suppose that \( A \) is finite-dimensional as a vector space over \( \mathbb{C} \) . 
Then both \( I \) and \( A/I \) are finite-dimensional vector spaces, and from the observation that (1.15) is an exact sequence of vector spaces and linear maps one finds that the dimension of \( A \) is determined by the dimensions of the ideal and its quotient by way of \( \dim A = \dim I + \dim A/I \) (see Exercise (1) below). The methods of homological algebra provide refinements of this observation that allow the computation of more subtle invariants of algebras (such as \( K \) -theoretic invariants), which have appropriate generalizations to the category of Banach algebras. Proposition 1.8.2. Let \( A \) be a Banach algebra with normalized unit \( \mathbf{1} \) and let \( I \) be a proper ideal in \( A \) . Then for every \( z \in I \) we have \( \parallel \mathbf{1} + z\parallel \geq 1 \) . In particular, the closure of a proper ideal is a proper ideal. Proof. If there is an element \( z \in I \) with \( \parallel \mathbf{1} + z\parallel < 1 \), then by Theorem 1.5.2 \( z \) must be invertible in \( A \) ; hence \( \mathbf{1} = {z}^{-1}z \in I \), which implies that \( I \) cannot be a proper ideal. The second assertion follows from the continuity of the norm; if \( \parallel \mathbf{1} + z\parallel \geq 1 \) for all \( z \in I \), then \( \parallel \mathbf{1} + z\parallel \geq 1 \) persists for all \( z \) in the closure of \( I \) . REMARK 1.8.3. If \( I \) is a proper closed ideal in a Banach algebra \( A \) with normalized unit 1, then the unit of \( A/I \) satisfies \[ \parallel \dot{\mathbf{1}}\parallel = \mathop{\inf }\limits_{{z \in I}}\parallel \mathbf{1} + z\parallel = 1 \] hence the unit of \( A/I \) is also normalized. More significantly, it follows that a unital Banach algebra \( A \) with normalized unit is simple iff it is topologically simple (i.e., \( A \) has no nontrivial closed ideals; see the corollary of Theorem 1.8.5 below). That assertion is false for nonunital Banach algebras. 
For example, in the Banach algebra \( \mathcal{K} \) of all compact operators on the Hilbert space \( {\ell }^{2} \), the set of finite-rank operators is a proper ideal that is dense in \( \mathcal{K} \) . Indeed, \( \mathcal{K} \) contains many proper ideals, such as the ideal \( {\mathcal{L}}^{2} \) of Hilbert-Schmidt operators that we will encounter later on. Nevertheless, \( \mathcal{K} \) is topologically simple (for example, see [2], Corollary 1 of Theorem 1.4.2). More generally, let \( I \) be a closed ideal in an arbitrary Banach algebra \( A \) (with or without unit). Then \( A/I \) is a Banach space; it is also a complex algebra relative to the multiplication defined above, and in fact it is a Banach algebra since for any \( x, y \in A \) , \[ \parallel \dot{x}\dot{y}\parallel = \mathop{\inf }\limits_{{z \in I}}\parallel {xy} + z\parallel \leq \mathop{\inf }\limits_{{{z}_{1},{z}_{2} \in I}}\parallel {xy} + \underset{ \in I}{\underbrace{x{z}_{2} + {z}_{1}y + {z}_{1}{z}_{2}}}\parallel \] \[ = \mathop{\inf }\limits_{{{z}_{1},{z}_{2} \in I}}\begin{Vmatrix}{\left( {x + {z}_{1}}\right) \left( {y + {z}_{2}}\right) }\end{Vmatrix} \leq \parallel \dot{x}\parallel \parallel \dot{y}\parallel . \] Notice, too, that (1.15) becomes an exact sequence of Banach algebras and continuous homomorphisms. If \( \pi : A \rightarrow A/I \) denotes the natural surjective homomorphism, then we obviously have \( \parallel \pi \parallel \leq 1 \) in general, and \( \parallel \pi \parallel = 1 \) when \( A \) is unital with normalized unit. The sequence (1.15) gives rise to a natural factorization of homomorphisms as follows. Let \( A, B \) be Banach algebras and let \( \omega : A \rightarrow B \) be a homomorphism of Banach algebras (a bounded homomorphism of the underlying algebraic structures). 
Then \( \ker \omega \) is a closed ideal in \( A \), and there is a unique homomorphism \( \dot{\omega } : A/\ker \omega \rightarrow B \) such that for all \( x \in A \) we have \( \omega \left( x\right) = \dot{\omega }\left( {x + \ker \omega }\right) \) . The properties of this promotion of \( \omega \) to \( \dot{\omega } \) are summarized as follows: Proposition 1.8.4. Every bounded homomorphism of Banach algebras \( \omega : A \rightarrow B \) has a unique factorization \( \omega = \dot{\omega } \circ \pi \), where \( \dot{\omega } \) is an injective homomorphism of \( A/\ker \omega \) to \( B \) and \( \pi : A \rightarrow A/\ker \omega \) is the natural projection. One has \( \parallel \dot{\omega }\parallel = \parallel \omega \parallel \) . Proof. The assertions in the first sentence are straightforward, and we prove \( \parallel \dot{\omega }\parallel = \parallel \omega \parallel \) . From the factorization \( \omega = \dot{\omega } \circ \pi \) and the fact that \( \parallel \pi \parallel \leq 1 \) we have \( \parallel \omega \parallel \leq \parallel \dot{\omega }\parallel \) ; the opposite inequality follows from \[ \parallel \dot{\omega }\left( \dot{x}\right) \parallel = \parallel \omega \left( x\right) \parallel = \parallel \omega \left( {x + z}\right) \parallel \leq \parallel \omega \parallel \parallel x + z\parallel ,\;z \in \ker \omega , \] after the infimum is taken over all \( z \in \ker \omega \) . Before introducing maximal ideals, we review some basic principles of set theory. A partially ordered set is a pair \( \left( {S, \leq }\right) \) consisting of a set \( S \) and a binary relation \( \leq \) that is transitive \( \left( {x \leq y, y \leq z \Rightarrow x \leq z}\right) \) and satisfies \( x \leq y \leq x \Rightarrow x = y \) . An element \( x \in S \) is said to be maximal if there is no element \( y \in S \) satisfying \( x \leq y \) and \( y \neq x \) . 
A linearly ordered subset of \( S \) is a subset \( L \subseteq S \) for which any two elements \( x, y \in L \) are related by either \( x \leq y \) or \( y \leq x \) . The set \( \mathcal{L} \) of all linearly ordered subsets of \( S \) is itself partially ordered by set inclusion. The Hausdorff maximality principle makes the assertion that every part
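Returning to Definition 1.7.4, the weighted shift of Exercise (1) admits a concrete quasinilpotence check. For the hypothetical weight sequence \( a_k = 1/k \) one has \( \| T^n \| = \sup_k a_k a_{k+1} \cdots a_{k+n-1} = 1/n! \) (the product is decreasing in \( k \), so the supremum is attained at \( k = 1 \)), and the sketch below evaluates the roots \( \| T^n \|^{1/n} = (n!)^{-1/n} \), which tend to 0.

```python
# A numerical sketch of quasinilpotence (Definition 1.7.4) for the weighted
# shift of Exercise (1), with the hypothetical weights a_k = 1/k.
import math

def power_norm(n):
    # sup over k >= 1 of prod_{j=k}^{k+n-1} 1/j; the product decreases in k,
    # so the sup is its value at k = 1, namely 1/n!.
    return 1.0 / math.factorial(n)

roots = [power_norm(n) ** (1.0 / n) for n in (5, 20, 80)]
print([round(r, 3) for r in roots])   # decreasing toward 0
assert roots[0] > roots[1] > roots[2] > 0.0
```

The same \( 1/n! \) decay underlies Exercises (2) and (3): the simplex volume and the Volterra norm estimate both force \( \| K^n \|^{1/n} \to 0 \), hence \( \sigma(K) = \{0\} \).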
## 1359_[陈省身] Lectures on Differential Geometry
Definition 4.1. If for every local frame field \( S \) of a vector bundle \( \left( {E, M,\pi }\right) \) there is a given \( q \times q \) matrix \( {\Phi }_{S} \) of exterior differential \( k \) -forms which satisfies the following transformation rule under a change (4.7) of the frame field \( S \) : \[ {\Phi }_{{S}^{\prime }} = A \cdot {\Phi }_{S} \cdot {A}^{-1} \] (4.9) then we call \( \left\{ {\Phi }_{S}\right\} \) a tensorial matrix of adjoint type. Because the transformation formula for the connection matrix \( \omega \) under the change (4.7) of the frame field \( S \) is given by \[ {\omega }^{\prime } \cdot A = {dA} + A \cdot \omega , \] (4.10) it follows by exterior differentiation of (4.9) that \[ d{\Phi }_{{S}^{\prime }} = {dA} \land {\Phi }_{S} \cdot {A}^{-1} + A \cdot d{\Phi }_{S} \cdot {A}^{-1} + {\left( -1\right) }^{k}A \cdot {\Phi }_{S} \land d{A}^{-1}. \] By plugging (4.9) into the above equation and rearranging terms, we have \[ D{\Phi }_{{S}^{\prime }} = A \cdot D{\Phi }_{S} \cdot {A}^{-1}, \] (4.11) where \[ D{\Phi }_{S} = d{\Phi }_{S} - \omega \land {\Phi }_{S} + {\left( -1\right) }^{k}{\Phi }_{S} \land \omega . \] (4.12) Thus we see that \( \left\{ {D{\Phi }_{S}}\right\} \) is still an adjoint type tensorial matrix whose elements are exterior differential \( \left( {k + 1}\right) \) -forms. We call \( D{\Phi }_{S} \) the covariant derivative of \( {\Phi }_{S} \) . By the above definition, the Bianchi identity (4.6) implies that the covariant derivative of the curvature matrix \( \Omega \) is zero, that is, \[ {D\Omega } = 0. \] (4.13) For notational simplicity in discussions on tensorial matrices of adjoint type, we usually omit the lower index \( S \) specifying a given frame field when computations are carried out only with respect to that frame field. Covariantly differentiating (4.12) again, we get \[ {D}^{2}\Phi = \Phi \land \Omega - \Omega \land \Phi \] (4.14) (The reader should verify this). 
On denoting the right hand side by \[ \left\lbrack {\Phi ,\Omega }\right\rbrack = \Phi \land \Omega - \Omega \land \Phi \] (4.15) (4.14) becomes \[ {D}^{2}\Phi = \left\lbrack {\Phi ,\Omega }\right\rbrack \] (4.16) Now consider a complex \( r \) -linear function \( P\left( {{A}_{1},\ldots ,{A}_{r}}\right) \) of \( q \times q \) matrices \( {A}_{i}\left( {1 \leq i \leq r}\right) \) . If we assume that \[ {A}_{i} = \left( {a}_{\alpha \beta }^{i}\right) ,\;1 \leq \alpha ,\beta \leq q,\;1 \leq i \leq r, \] (4.17) then the function \( P \) can be expressed as \[ P\left( {{A}_{1},\ldots ,{A}_{r}}\right) = \mathop{\sum }\limits_{{1 \leq {\alpha }_{i},{\beta }_{i} \leq q}}{\lambda }_{{\alpha }_{1}\cdots {\alpha }_{r},{\beta }_{1}\cdots {\beta }_{r}}{a}_{{\alpha }_{1}{\beta }_{1}}^{1}\cdots {a}_{{\alpha }_{r}{\beta }_{r}}^{r}, \] (4.18) where the \( {\lambda }_{{\alpha }_{1}\cdots {\alpha }_{r},{\beta }_{1}\cdots {\beta }_{r}} \) are complex numbers. If for any permutation \( \sigma \) of \( \{ 1,\ldots, r\} \) we have \[ P\left( {{A}_{\sigma \left( 1\right) },\ldots ,{A}_{\sigma \left( r\right) }}\right) = P\left( {{A}_{1},\ldots ,{A}_{r}}\right) \] (4.19) then we say \( P \) is symmetric. If for any \( B \in \mathrm{{GL}}\left( {q;\mathbb{C}}\right) \) we have \[ P\left( {B{A}_{1}{B}^{-1},\ldots, B{A}_{r}{B}^{-1}}\right) = P\left( {{A}_{1},\ldots ,{A}_{r}}\right) , \] (4.20) then we say \( P \) is an invariant polynomial. We can use the following method to obtain a sequence of symmetric invariant polynomials. Suppose I is the \( q \times q \) identity matrix. Let \[ \det \left( {\mathrm{I} + \frac{i}{2\pi }A}\right) = \mathop{\sum }\limits_{{0 \leq j \leq q}}\left( \begin{array}{l} q \\ j \end{array}\right) {P}_{j}\left( A\right) \] (4.21) here \( {P}_{j}\left( A\right) \) is a homogeneous polynomial of elements of \( A \) of order \( j \) . 
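The expansion (4.21) can be checked numerically for a random matrix. The following sketch (an added illustration, not from the text; variable names are mine) recovers the \( {P}_{j}\left( A\right) \) from the characteristic polynomial of \( M = \left( {i/2\pi }\right) A \), whose coefficients are the signed elementary symmetric functions of the eigenvalues, and confirms that \( \det \left( {\mathrm{I} + M}\right) \) equals \( \sum_{j} \binom{q}{j}{P}_{j}\left( A\right) \), with \( {P}_{1}\left( A\right) = \left( {i/2{\pi q}}\right) \operatorname{tr}A \).

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
q = 4
A = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
M = (1j / (2 * np.pi)) * A

# characteristic polynomial of M: x^q + c[1] x^{q-1} + ... ; c[k] = (-1)^k e_k(M)
c = np.poly(M)
e = [(-1) ** k * c[k] for k in range(q + 1)]   # elementary symmetric functions of eigenvalues
P = [e[j] / comb(q, j) for j in range(q + 1)]  # P_j(A), normalized as in (4.21)

# (4.21): det(I + (i/2π)A) = Σ_j C(q,j) P_j(A)
lhs = np.linalg.det(np.eye(q) + M)
rhs = sum(comb(q, j) * P[j] for j in range(q + 1))
assert np.isclose(lhs, rhs)
assert np.isclose(P[1], np.trace(M) / q)       # P_1(A) = (i/2πq) tr A
```

Each \( {P}_{j} \) is thus a symmetric function of the eigenvalues of \( \left( {i/2\pi }\right) A \), which already suggests the conjugation invariance established below.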
For any nondegenerate \( q \times q \) matrix \( B \), since \[ \mathrm{I} + \frac{i}{2\pi }{BA}{B}^{-1} = B\left( {\mathrm{I} + \frac{i}{2\pi }A}\right) {B}^{-1}, \] we have \[ \det \left( {\mathrm{I} + \frac{i}{2\pi }{BA}{B}^{-1}}\right) = \det \left( {\mathrm{I} + \frac{i}{2\pi }A}\right) . \] (4.22) Therefore \[ {P}_{j}\left( {{BA}{B}^{-1}}\right) = {P}_{j}\left( A\right) \] (4.23) that is, \( {P}_{j}\left( A\right) \) is an invariant polynomial. Suppose \( {P}_{j}\left( {{A}_{1},\ldots ,{A}_{j}}\right) \) is the completely polarized polynomial of \( {P}_{j}\left( A\right) \), that is, \( {P}_{j}\left( {{A}_{1},\ldots ,{A}_{j}}\right) \) is a symmetric \( j \) -linear function of \( {A}_{1},\ldots ,{A}_{j} \) , satisfying \[ {P}_{j}\left( {A,\ldots, A}\right) = {P}_{j}\left( A\right) . \] (4.24) It is easy to show that \( {P}_{j}\left( {{A}_{1},\ldots ,{A}_{j}}\right) \) can be expressed in terms of \( {P}_{j}\left( A\right) \) . For instance, \[ {P}_{2}\left( {{A}_{1},{A}_{2}}\right) = \frac{1}{2}\left\{ {{P}_{2}\left( {{A}_{1} + {A}_{2}}\right) - {P}_{2}\left( {A}_{1}\right) - {P}_{2}\left( {A}_{2}\right) }\right\} \] \[ {P}_{3}\left( {{A}_{1},{A}_{2},{A}_{3}}\right) = \frac{1}{6}\left\{ {{P}_{3}\left( {{A}_{1} + {A}_{2} + {A}_{3}}\right) - {P}_{3}\left( {{A}_{1} + {A}_{2}}\right) }\right. \] (4.25) \[ - {P}_{3}\left( {{A}_{1} + {A}_{3}}\right) - {P}_{3}\left( {{A}_{2} + {A}_{3}}\right) + {P}_{3}\left( {A}_{1}\right) \] \[ \left. {+{P}_{3}\left( {A}_{2}\right) + {P}_{3}\left( {A}_{3}\right) }\right\} \text{.} \] Therefore \( {P}_{j}\left( {{A}_{1},\ldots ,{A}_{j}}\right) \) is an invariant symmetric \( j \) -linear function. 
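Both the invariance (4.23) and the polarization formulas (4.25) are easy to test numerically. The sketch below (my own illustration; it works with a convenient scalar multiple of \( {P}_{2} \), namely the second elementary symmetric function of the eigenvalues) checks \( {P}_{2}\left( {{BA}{B}^{-1}}\right) = {P}_{2}\left( A\right) \), the first identity in (4.25), and the normalization (4.24).

```python
import numpy as np

rng = np.random.default_rng(1)
q = 3

# scalar multiple of P_2: the second elementary symmetric function of the eigenvalues
P2 = lambda A: (np.trace(A) ** 2 - np.trace(A @ A)) / 2

A1, A2 = rng.standard_normal((q, q)), rng.standard_normal((q, q))
B = rng.standard_normal((q, q))        # generically invertible
Binv = np.linalg.inv(B)

# invariance (4.23): P_2(B A B^{-1}) = P_2(A)
assert np.isclose(P2(B @ A1 @ Binv), P2(A1))

# polarization, first line of (4.25)
P2_pol = (P2(A1 + A2) - P2(A1) - P2(A2)) / 2
# the symmetric bilinear form it recovers
explicit = (np.trace(A1) * np.trace(A2) - np.trace(A1 @ A2)) / 2
assert np.isclose(P2_pol, explicit)

# (4.24): P_2(A, A) = P_2(A)
assert np.isclose((P2(A1 + A1) - 2 * P2(A1)) / 2, P2(A1))
```

The explicit bilinear form \( \frac{1}{2}\left( {\operatorname{tr}{A}_{1}\operatorname{tr}{A}_{2} - \operatorname{tr}\left( {{A}_{1}{A}_{2}}\right) }\right) \) makes the symmetry and bilinearity of the polarized \( {P}_{2} \) visible at a glance.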
Suppose \( P\left( {{A}_{1},\ldots ,{A}_{r}}\right) \) is an invariant polynomial, and we express a nondegenerate matrix \( B \) as \[ B = \mathrm{I} + {B}^{\prime } \] (4.26) Then \[ {B}^{-1} = \mathrm{I} - {B}^{\prime } + \cdots , \] (4.27) where the missing part contains higher order terms of the elements of the matrix \( {B}^{\prime } \) . Plugging this into (4.20) and considering the linear part of \( {B}^{\prime } \) we obtain \[ \mathop{\sum }\limits_{{1 \leq i \leq r}}P\left( {{A}_{1},\ldots ,{B}^{\prime }{A}_{i} - {A}_{i}{B}^{\prime },\ldots ,{A}_{r}}\right) = 0. \] (4.28) This equation still holds if \( {A}_{i} \) is a matrix of exterior differential forms. \( {}^{c} \) Suppose \( {A}_{i} \) is a matrix of exterior differential \( {d}_{i} \) -forms. Then for any \( q \times q \) matrix \( \theta \) of differential 1-forms we have \[ \mathop{\sum }\limits_{{1 \leq i \leq r}}{\left( -1\right) }^{{d}_{1} + \cdots + {d}_{i - 1}}P\left( {{A}_{1},\ldots ,\theta \land {A}_{i},\ldots ,{A}_{r}}\right) \] \[ + \mathop{\sum }\limits_{{1 \leq i \leq r}}{\left( -1\right) }^{{d}_{1} + \cdots + {d}_{i} + 1}P\left( {{A}_{1},\ldots ,{A}_{i} \land \theta ,\ldots ,{A}_{r}}\right) = 0. \] (4.29) To show this, we need only note that \( \theta \) is a sum of matrices of the form \( {B}^{\prime } \cdot a \) , where \( {B}^{\prime } \) is a \( q \times q \) matrix of numbers and \( a \) is a differential 1-form. Using the multilinear property of \( P \), we need only show (4.29) for \( \theta = {B}^{\prime } \cdot a \) . Plugging \( \theta = {B}^{\prime } \cdot a \) into the left hand side of (4.29), we obtain \[ a \land \left\{ {\mathop{\sum }\limits_{{1 \leq i \leq r}}P\left( {{A}_{1},\ldots ,{B}^{\prime } \cdot {A}_{i},\ldots ,{A}_{r}}\right) }\right. \] \[ \left. {-\mathop{\sum }\limits_{{1 \leq i \leq r}}P\left( {{A}_{1},\ldots ,{A}_{i} \cdot {B}^{\prime },\ldots ,{A}_{r}}\right) }\right\} . 
\] Suppose \( P\left( {{A}_{1},\ldots ,{A}_{r}}\right) \) can be expressed as in (4.18). Then we define \( P\left( {{A}_{1},\ldots ,{A}_{r}}\right) = \) \( \mathop{\sum }\limits_{{1 \leq {\alpha }_{i},{\beta }_{i} \leq q}}{\lambda }_{{\alpha }_{1}\cdots {\alpha }_{r},{\beta }_{1}\cdots {\beta }_{r}}{a}_{{\alpha }_{1}{\beta }_{1}}^{1} \land \cdots \land {a}_{{\alpha }_{r}{\beta }_{r}}^{r} \) when \( {A}_{i} = \left( {a}_{\alpha \beta }^{i}\right) \) is a matrix composed of exterior differential forms. By (4.28), the quantity inside the brackets is zero, hence (4.29) holds. Invariant polynomials establish a relation between the global and local properties of a connection. Suppose \( P\left( {{A}_{1},\ldots ,{A}_{r}}\right) \) is an invariant polynomial, and choose \( {A}_{i} \) to be a tensorial matrix of adjoint type composed of differential \( {d}_{i} \) -forms. Obviously \( P\left( {{A}_{1},\ldots ,{A}_{r}}\right) \) is an exterior differential \( \left( {{d}_{1} + {d}_{2} + \cdots + {d}_{r}}\right) \) -form independent of the choice of the local frame field. Hence it is an exterior differential form defined globally on \( M \) . By (4.12) and (4.29), we have \[ {dP}\left( {{A}_{1},\ldots ,{A}_{r}}\right) = \mathop{\sum }\limits_{{1 \leq i \leq r}}{\left( -1\right) }^{{d}_{1} + \cdots + {d}_{i - 1}}P\left( {{A}_{1},\ldots, d{A}_{i},\ldots ,{A}_{r}}\right) \] \[ = \mathop{\sum }\limits_{{1 \leq i \leq r}}{\left( -1\right) }^{{d}_{1} + \cdots + {d}_{i - 1}}\left\{ {P\left( {{A}_{1},\ldots, D{A}_{i},\ldots ,{A}_{r}}\right) }\right. \] (4.30) \[ \left. {+P\left( {{A}_{1},\ldots ,\omega \land {A}_{i} + {\left( -1\right) }^{{d}_{i} + 1}{A}_{i} \land \omega ,\ldots ,{A}_{r}}\right) }\right\} \] \[ = \mathop{\sum }\limits_{{1 \leq i \leq r}}{\left( -1\right) }^{{d}_{1} + \cdots + {d}_{i - 1}}P\left( {{A}_{1},\ldots, D{A}_{i},\ldots ,{A}_{r}}\right) . 
\] In particular, for an invariant polynomial \( {P}_{j}\left( A\right) \), if we choose \( A \) to be the curvature matrix \( \Omega \) of a connection, then \( {D\Omega } = 0 \) . Hence \[ d{P}_{j}\left( \Omega \right) = 0. \] (4.31) Thus \( {P}_{j}\left( \Omega \right) \) is a closed exterior differential \( {2j} \) -form defined globally on \( M \) . Theorem 4.1. Suppose \( \left( {E, M,\pi }\right) \) is a \( q \) -dimensional complex vector bundle on a smooth \( m \) -dimensional manifold \( M \), and \( \Omega \) and \( \widetilde{\Omega } \) are curvature forms corresponding to the connections \( \omega \) and \( \widetilde{\omega } \), respectively.
1112_(GTM267)Quantum Theory for Mathematicians
Definition 12.2
Definition 12.2 If \( A \) is a symmetric operator on \( \mathbf{H} \), then for all unit vectors \( \psi \) in \( \operatorname{Dom}\left( A\right) \), the uncertainty \( {\Delta }_{\psi }A \) of \( A \) in the state \( \psi \) is given by \[ {\left( {\Delta }_{\psi }A\right) }^{2} = \left\langle {\left( {A - {\left\langle A\right\rangle }_{\psi }I}\right) \psi ,\left( {A - {\left\langle A\right\rangle }_{\psi }I}\right) \psi }\right\rangle . \] (12.3) By expanding out the right-hand side of (12.3), we see that the uncertainty may also be computed as \[ {\left( {\Delta }_{\psi }A\right) }^{2} = \langle {A\psi },{A\psi }\rangle - {\left( \langle \psi, A\psi \rangle \right) }^{2}. \] [Compare (3.24).] Of course, if \( \psi \) happens to be in the domain of \( {A}^{2} \), then Definition 12.2 agrees with (12.2). Proposition 12.3 If \( A \) is a symmetric operator on \( \mathbf{H} \), then for all unit vectors \( \psi \in \operatorname{Dom}\left( A\right) \), we have \( {\Delta }_{\psi }A = 0 \) if and only if \( \psi \) is an eigenvector for \( A \) . Proof. If \( {\Delta }_{\psi }A = 0 \), then from (12.3), we see that \( \left( {A - \langle A{\rangle }_{\psi }I}\right) \psi = 0 \) , meaning that \( \psi \) is an eigenvector for \( A \) with eigenvalue \( \langle A{\rangle }_{\psi } \) . Conversely, if \( {A\psi } = {\lambda \psi } \) for some \( \lambda \), then \( \langle \psi ,{A\psi }\rangle = \lambda \langle \psi ,\psi \rangle = \lambda \) . Thus, \( \left( {A - \langle A{\rangle }_{\psi }I}\right) \psi = 0 \) , which, by (12.3), means that \( {\Delta }_{\psi }A = 0 \) . ∎ As discussed in the introduction to this chapter, we expect that immediately after a measurement of an observable \( A \), the state of the system will have very small uncertainty for \( A \) . Indeed, if \( A \) has discrete spectrum, we expect that the state of the system will be an eigenvector for \( A \) . 
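In finite dimensions, Definition 12.2 and Proposition 12.3 can be illustrated directly with a Hermitian matrix standing in for a symmetric operator (an added sketch; the function name is mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2                 # Hermitian stand-in for a symmetric operator

def uncertainty(A, psi):
    """Δ_ψ A from (12.3): the norm of (A - <A>_ψ I)ψ for a unit vector ψ."""
    mean = np.vdot(psi, A @ psi).real
    v = A @ psi - mean * psi
    return np.sqrt(np.vdot(v, v).real)

psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)
assert uncertainty(A, psi) > 0           # a generic state has strictly positive uncertainty

# Proposition 12.3: an eigenvector has zero uncertainty
w, V = np.linalg.eigh(A)
assert uncertainty(A, V[:, 0]) < 1e-10
```

The second assertion is exactly the "only if" direction of Proposition 12.3: for an eigenvector, \( \left( {A - \langle A{\rangle }_{\psi }I}\right) \psi = 0 \) up to rounding error.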
Even in the case of a continuous spectrum, we expect that the uncertainty in \( A \) can be made as small as one wishes, by making more and more precise measurements. Suppose now that one wishes to observe simultaneously two (or more) different observables, represented by operators \( A \) and \( B \) . In the case of a discrete spectrum, the system after the measurement should be simultaneously an eigenvector for \( A \) and an eigenvector for \( B \) . In the case where \( A \) and \( B \) commute, this idea is reasonable. There is a version of the spectral theorem for commuting self-adjoint operators; in the case of discrete spectrum, it says that two commuting self-adjoint operators have an orthonormal basis of simultaneous eigenvectors with real eigenvalues. (In the case of unbounded operators, there are, as usual, technical domain conditions in defining what it means for two self-adjoint operators to commute.) In the case where \( A \) and \( B \) do not commute, they do not need to have any simultaneous eigenvectors. Certainly, \( A \) and \( B \) cannot have an orthonormal basis of simultaneous eigenvectors, or they would in fact commute. The lack of simultaneous eigenvectors suggests, then, that it is simply not possible to make a simultaneous measurement of two self-adjoint operators unless they commute. In standard physics terminology, the quantities \( A \) and \( B \) are said to be "incommensurable," meaning not capable of being measured at the same time. (See Exercise 2 for a classification of the simultaneous eigenvectors of a representative pair of noncommuting operators.) In the case of a continuous spectrum, the notion of an eigenvector is replaced by the notion of a state with very small uncertainty for the relevant operator. In light of our discussion of simultaneous eigenvectors, we may expect that for noncommuting operators, it may be difficult to find states where the uncertainties of both operators are small. 
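The discrete-spectrum form of the spectral theorem for commuting operators mentioned above is easy to see at work numerically. The following sketch (an added illustration) builds two Hermitian matrices that are diagonal in the same unitary basis, checks that they commute, and verifies that every basis vector is a simultaneous eigenvector:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(M)                   # a unitary matrix (orthonormal columns)

# two Hermitian matrices diagonal in the same basis, hence commuting
A = Q @ np.diag(rng.standard_normal(n)) @ Q.conj().T
B = Q @ np.diag(rng.standard_normal(n)) @ Q.conj().T

assert np.allclose(A @ B, B @ A)         # [A, B] = 0

# every column of Q is a simultaneous eigenvector of A and B
for k in range(n):
    v = Q[:, k]
    for H in (A, B):
        Hv = H @ v
        lam = np.vdot(v, Hv)             # Rayleigh quotient = eigenvalue
        assert np.allclose(Hv, lam * v)
```

For noncommuting \( A \) and \( B \), no such common orthonormal eigenbasis can exist, which is the situation the uncertainty principle addresses.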
This expectation is realized in the following version of the uncertainty principle. Theorem 12.4 Suppose \( A \) and \( B \) are symmetric operators and \( \psi \) is a unit vector belonging to \( \operatorname{Dom}\left( {AB}\right) \cap \operatorname{Dom}\left( {BA}\right) \) . Then \[ {\left( {\Delta }_{\psi }A\right) }^{2}{\left( {\Delta }_{\psi }B\right) }^{2} \geq \frac{1}{4}{\left| \langle \left\lbrack A, B\right\rbrack {\rangle }_{\psi }\right| }^{2}. \] (12.4) Note that if \( \psi \in \operatorname{Dom}\left( {AB}\right) \) then in particular, \( \psi \in \operatorname{Dom}\left( B\right) \), and if \( \psi \in \operatorname{Dom}\left( {BA}\right) \) then \( \psi \in \operatorname{Dom}\left( A\right) \) . Thus, the assumptions on \( \psi \) are sufficient to guarantee that \( {\Delta }_{\psi }A \) and \( {\Delta }_{\psi }B \) make sense as in Definition 12.2. Proof. Define operators \( {A}^{\prime } \) and \( {B}^{\prime } \) by \( {A}^{\prime } \mathrel{\text{:=}} A - \langle \psi ,{A\psi }\rangle I \) and \( {B}^{\prime } \mathrel{\text{:=}} \) \( B - \langle \psi ,{B\psi }\rangle I \) . (We use the same domains for \( {A}^{\prime } \) and \( {B}^{\prime } \) as for \( A \) and \( B \), and it is easily verified that \( {A}^{\prime } \) and \( {B}^{\prime } \) are still symmetric on those domains.) Then by the Cauchy-Schwarz inequality, we obtain \[ \left\langle {{A}^{\prime }\psi ,{A}^{\prime }\psi }\right\rangle \left\langle {{B}^{\prime }\psi ,{B}^{\prime }\psi }\right\rangle \geq {\left| \left\langle {A}^{\prime }\psi ,{B}^{\prime }\psi \right\rangle \right| }^{2} \] (12.5) \[ \geq {\left| \mathrm{{Im}}\left\langle {A}^{\prime }\psi ,{B}^{\prime }\psi \right\rangle \right| }^{2} \] (12.6) \[ = \frac{1}{4}{\left| \left\langle {A}^{\prime }\psi ,{B}^{\prime }\psi \right\rangle - \left\langle {B}^{\prime }\psi ,{A}^{\prime }\psi \right\rangle \right| }^{2}. 
\] (12.7) The assumptions on \( \psi \) guarantee that \( {B\psi } \in \operatorname{Dom}\left( A\right) \) and hence also that \( {B}^{\prime }\psi \in \operatorname{Dom}\left( {A}^{\prime }\right) \), and similarly with \( {A}^{\prime } \) and \( {B}^{\prime } \) reversed. Since \( {A}^{\prime } \) and \( {B}^{\prime } \) are symmetric, we may rewrite (12.7) as \[ \left\langle {{A}^{\prime }\psi ,{A}^{\prime }\psi }\right\rangle \left\langle {{B}^{\prime }\psi ,{B}^{\prime }\psi }\right\rangle \geq \frac{1}{4}{\left| \left\langle \psi ,{A}^{\prime }{B}^{\prime }\psi \right\rangle - \left\langle \psi ,{B}^{\prime }{A}^{\prime }\psi \right\rangle \right| }^{2} \] \[ = \frac{1}{4}{\left| \left\langle \psi ,\left\lbrack {A}^{\prime },{B}^{\prime }\right\rbrack \psi \right\rangle \right| }^{2}. \] Now, since the identity operator commutes with everything, the commutator of \( {A}^{\prime } \) and \( {B}^{\prime } \) is the same as the commutator of \( A \) and \( B \) . Furthermore, \( \left\langle {{A}^{\prime }\psi ,{A}^{\prime }\psi }\right\rangle \) is nothing but \( {\left( {\Delta }_{\psi }A\right) }^{2} \) and similarly for \( B \) . Thus, we obtain \[ {\left( {\Delta }_{\psi }A\right) }^{2}{\left( {\Delta }_{\psi }B\right) }^{2} \geq \frac{1}{4}{\left| \langle \psi ,\left\lbrack A, B\right\rbrack \psi \rangle \right| }^{2}, \] which is what we wanted to prove. ∎ We now specialize Theorem 12.4 to the case in which the commutator is \( i\hslash I \) and take the square root of both sides. Corollary 12.5 Suppose \( A \) and \( B \) are symmetric operators satisfying \[ \left\lbrack {A, B}\right\rbrack = i\hslash I \] on \( \operatorname{Dom}\left( {AB}\right) \cap \operatorname{Dom}\left( {BA}\right) \) . 
Then if \( \psi \in \operatorname{Dom}\left( {AB}\right) \cap \operatorname{Dom}\left( {BA}\right) \) is a unit vector, we have \[ \left( {{\Delta }_{\psi }A}\right) \left( {{\Delta }_{\psi }B}\right) \geq \frac{\hslash }{2} \] (12.8) In particular, for all unit vectors \( \psi \in {L}^{2}\left( \mathbb{R}\right) \) in \( \operatorname{Dom}\left( {XP}\right) \cap \operatorname{Dom}\left( {PX}\right) \), we have \[ \left( {{\Delta }_{\psi }X}\right) \left( {{\Delta }_{\psi }P}\right) \geq \frac{\hslash }{2} \] (12.9) Note that the factor of \( \hslash \) appearing on the right-hand side of (12.8) is really just \( \left| {\langle \psi ,\left\lbrack {A, B}\right\rbrack \psi \rangle }\right| \) . Since, however, \( \psi \) is a unit vector and \( \left\lbrack {A, B}\right\rbrack = i\hslash I \) , \( \psi \) drops out of the right-hand side of our inequality. We see then that both sides of (12.9) make sense whenever \( {\Delta }_{\psi }X \) and \( {\Delta }_{\psi }P \) make sense, namely, whenever \( \psi \) belongs to \( \operatorname{Dom}\left( X\right) \) and to \( \operatorname{Dom}\left( P\right) \) . (Recall Definition 12.2.) On the other hand, the proof that we have given for (12.9) requires \( \psi \) to be in both \( \operatorname{Dom}\left( {XP}\right) \) and \( \operatorname{Dom}\left( {PX}\right) \) . Nevertheless, it is natural to ask whether (12.9) holds for all \( \psi \) in \( \operatorname{Dom}\left( X\right) \cap \operatorname{Dom}\left( P\right) \) . We may similarly ask whether (12.8) holds for all \( \psi \) in \( \operatorname{Dom}\left( A\right) \cap \operatorname{Dom}\left( B\right) \) . As we will see in Sects. 12.2 and 12.3, the answer to the first question is yes and the answer to the second question is no. Meanwhile, it is of interest to investigate "minimum uncertainty states," that is, states \( \psi \) for which the inequality (12.4) is an equality. 
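Before turning to minimum-uncertainty states, note that Theorem 12.4 itself is easy to test in finite dimensions, where every vector lies in every domain. (Recall, though, that no finite-dimensional matrices can satisfy \( \left\lbrack {A, B}\right\rbrack = i\hslash I \), since a commutator has trace zero while \( i\hslash I \) does not, so Corollary 12.5 is genuinely infinite-dimensional.) The following sketch, an added illustration, checks the inequality (12.4) for random Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6

def rand_herm(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

def delta(H, psi):
    """Δ_ψ H as in Definition 12.2."""
    m = np.vdot(psi, H @ psi).real
    v = H @ psi - m * psi
    return np.sqrt(np.vdot(v, v).real)

A, B = rand_herm(n), rand_herm(n)
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)

# Theorem 12.4: (Δ_ψ A)(Δ_ψ B) >= (1/2)|<ψ, [A,B]ψ>|
lhs = delta(A, psi) * delta(B, psi)
rhs = 0.5 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))
assert lhs >= rhs - 1e-12
```

Running this for many random draws never produces a violation, as the Cauchy-Schwarz argument in the proof guarantees.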
Proposition 12.6 If \( A \) and \( B \) are symmetric and \( \psi \) is a unit vector in \( \operatorname{Dom}\left( {AB}\right) \cap \operatorname{Dom}\left( {BA}\right) \), equality holds in (12.4) if and only if one of the following holds: (1) \( \psi \) is an eigenvector for \( A \) ,(2) \( \psi \) is an eigenvector for \( B \), or (3) \( \psi \) is an eigenvector for an operator of the form \[ A - {i\gamma B} \] for some nonzero real number \( \gamma \) . In the case \( A = X \) and \( B = P \), w
1139_(GTM44)Elementary Algebraic Geometry
Definition 7.5
Definition 7.5. Let \( C \) be any curve in \( {\mathbb{C}}_{XY} \), and let \( p \) be the polynomial of \( \mathbb{C}\left\lbrack {X, Y}\right\rbrack \) having no repeated factors, for which \( C = \mathrm{V}\left( p\right) \) . Then with notation as in Theorem 7.4, \( \mathrm{V}\left( p\right) \) is singular at \( P \) if \( \left( {\partial p/\partial X}\right) \left( P\right) = \left( {\partial p/\partial Y}\right) \left( P\right) \) \( = 0 \), and is nonsingular at \( P \) otherwise. We then say \( P \) is a singular (or nonsingular) point of \( C \) . Before giving the proof of Theorem 7.4, recall that in Definition 3.1 of differentiability of a function, the tangent plane at \( \left( {a, f\left( a\right) }\right) \) to the graph of a smooth function \( f : U \rightarrow {\mathbb{R}}_{Y}\left( {U \subset {\mathbb{R}}_{{X}_{1},\ldots ,{X}_{n}}}\right) \) is given by \[ Y = f\left( a\right) + \mathop{\sum }\limits_{{j = 1}}^{n}\left( {\frac{\partial f}{\partial {X}_{j}}\left( a\right) }\right) \left( {{X}_{j} - {a}_{j}}\right) . \] If \( f = \left( {{f}_{1},\ldots ,{f}_{m}}\right) : U \rightarrow {\mathbb{R}}_{{Y}_{1},\ldots ,{Y}_{m}} \) of Definition 3.1 is differentiable at \( a \) , we have in \( {\mathbb{R}}^{n + m}m \) hyperplanes through \( \left( {a, f\left( a\right) }\right) \), namely \[ {Y}_{i} = {f}_{i}\left( a\right) + \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial {f}_{i}}{\partial {X}_{j}}\left( a\right) \left( {{X}_{j} - {a}_{j}}\right) \;\left( {i = 1,\ldots, m}\right) . \] (25) Since these equations are linearly independent, the planes intersect in a real \( n \) -plane through \( \left( {a, f\left( a\right) }\right) \), which is the tangent plane to \( V = \mathrm{V}\left( {{Y}_{1} - {f}_{1},\ldots }\right. \) , \( \left. {{Y}_{m} - {f}_{m}}\right) \) at \( \left( {a, f\left( a\right) }\right) \) ; it coincides with the set of limits of secant lines through \( \left( {a, f\left( a\right) }\right) \) and nearby points of \( V \) . 
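Definition 7.5 translates directly into a computation: the singular points of \( C = \mathrm{V}\left( p\right) \) are the common zeros of \( p,\partial p/\partial X \), and \( \partial p/\partial Y \). As an added illustration (the curve is my choice, a standard nodal cubic), the following sketch locates the unique singular point:

```python
import sympy as sp

X, Y = sp.symbols('X Y')
p = Y**2 - X**2 * (X + 1)          # nodal cubic; p has no repeated factors

px, py = sp.diff(p, X), sp.diff(p, Y)
# singular points: common zeros of p, ∂p/∂X, ∂p/∂Y
sing = sp.solve([p, px, py], [X, Y], dict=True)
assert {X: 0, Y: 0} in sing        # the node at the origin is singular

# a nonsingular point: (-1, 0) lies on the curve with a nonzero gradient
assert p.subs({X: -1, Y: 0}) == 0
assert (px.subs({X: -1, Y: 0}), py.subs({X: -1, Y: 0})) != (0, 0)
```

Here \( \partial p/\partial Y = 2Y \) forces \( Y = 0 \) on any singular point, and the remaining conditions single out the origin, where the curve crosses itself.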
Proof of Theorem 7.4 \( \Leftarrow \) : This is just Corollary 3.9. \( \Rightarrow \) : We prove this half by contradiction. The strategy is this: Assume that \( \mathrm{V}\left( p\right) \) is smooth at \( P \) and that \( \left( {\partial p/\partial X}\right) \left( P\right) = \left( {\partial p/\partial Y}\right) \left( P\right) = 0 \) . Then we shall find a neighborhood and coordinate system about \( P \) relative to which: (a) \( \mathrm{V}\left( p\right) \) smooth at \( P \) implies \( \mathrm{V}\left( p\right) \) is locally at \( P \) a graph of some function. (b) \( \left( {\partial p/\partial X}\right) \left( P\right) = \left( {\partial p/\partial Y}\right) \left( P\right) = 0 \) implies \( \mathrm{V}\left( p\right) \) is locally at \( P \) not a graph of any function. First, write \( {\mathbb{C}}_{XY} = {\mathbb{R}}_{{X}_{1}{X}_{2}{Y}_{1}{Y}_{2}} = {\mathbb{R}}^{4} \) . Without loss of generality, let \( P = \left( 0\right) \in {\mathbb{R}}^{4} \) and let the part of \( \mathrm{V}\left( p\right) \) near (0) be the graph of some smooth function \( f = \left( {{f}_{1},{f}_{2}}\right) \), i.e. \[ \left( {{Y}_{1},{Y}_{2}}\right) = \left( {{f}_{1}\left( {{X}_{1},{X}_{2}}\right) ,{f}_{2}\left( {{X}_{1},{X}_{2}}\right) }\right) : U \rightarrow {U}^{\prime } \] for some open sets \( U \) and \( {U}^{\prime } \) containing 0 in \( {\mathbb{R}}_{{X}_{1}{X}_{2}} \) and \( {\mathbb{R}}_{{Y}_{1}{Y}_{2}} \) respectively. Since \( {f}_{1} \) and \( {f}_{2} \) are smooth, there are in \( {\mathbb{R}}^{4} \) real 3-spaces \[ {Y}_{1} = {c}_{11}{X}_{1} + {c}_{12}{X}_{2}\text{ and }{Y}_{2} = {c}_{21}{X}_{1} + {c}_{22}{X}_{2} \] which locally approximate \( {f}_{1} \) and \( {f}_{2} \), these 3-spaces intersecting in a real tangent plane \( T \) . ( \( T \) will be our new " \( \left( {{X}_{1},{X}_{2}}\right) \) -space" in a moment.) 
We note two things about the plane \( T \) : (7.6) This real 2-space is actually a complex 1-space; (7.7) Let \( {F}_{i} = {Y}_{i} - {f}_{i}\left( {{X}_{1},{X}_{2}}\right), i = 1,2 \) . For each real line in \( T \) through \( \left( 0\right) \), the corresponding directional derivative at \( \left( 0\right) \) of each \( {F}_{i} \) is zero. Proof of (7.6). Since \( \mathbf{V}\left( p\right) \) is smooth at \( \left( 0\right) \in {\mathbb{R}}^{4} \), the tangent plane to \( \mathbf{V}\left( p\right) \) at \( \left( {0,0}\right) \) is given by \[ {Y}_{i} = \frac{\partial {f}_{i}}{\partial {X}_{1}}\left( 0\right) \left( {{X}_{1} - 0}\right) + \frac{\partial {f}_{i}}{\partial {X}_{2}}\left( 0\right) \left( {{X}_{2} - 0}\right) \;\left( {i = 1,2}\right) ; \] this is the limit as \( \left( a\right) = \left( {{a}_{1},{a}_{2}}\right) \rightarrow 0 \) of \[ {Y}_{i} - {f}_{i}\left( a\right) = \frac{\partial {f}_{i}}{\partial {X}_{1}}\left( a\right) \left( {{X}_{1} - {a}_{1}}\right) + \frac{\partial {f}_{i}}{\partial {X}_{2}}\left( a\right) \left( {{X}_{2} - {a}_{2}}\right) \;\left( {i = 1,2}\right) . \] (26) Hence in this sense, the tangent plane \( T \) at \( \left( 0\right) \) is the limit of tangent planes \( T\left( P\right) \) to the graph at points \( P \) of \( \mathbf{V}\left( p\right) \) near (0). Since surely the limiting position of a sequence of complex lines in \( {\mathbb{C}}^{2} \) is a complex line, it suffices to show that for \( P \neq \left( 0\right) \) near \( \left( 0\right) \), each such \( T\left( P\right) \) is a complex line. 
Now if \( \left( {\partial p/\partial X}\right) \left( 0\right) = \left( {\partial p/\partial Y}\right) \left( 0\right) = 0 \), then by Lemma 3.9, in some neighborhood of any \( P \neq \left( 0\right) \) ( \( P \) in \( \mathrm{V}\left( p\right) \) and \( P \) sufficiently close to \( \left( 0\right) \) ), the part of \( \mathrm{V}\left( p\right) \) near \( P \) is the graph of the analytic function \[ Y = f\left( X\right) \;\left( {f = {f}_{1} + i{f}_{2}}\right) . \] The complex line through \( \left( {a, b}\right) \in {\mathbb{C}}_{XY} \) in the corresponding complex definition of differentiability is \[ Y = f\left( a\right) + {f}^{\prime }\left( a\right) \left( {X - a}\right) . \] By equating real and imaginary parts of this equation and making use of the Cauchy-Riemann equations, one may now verify that this real plane in \( {\mathbb{C}}_{XY} \) is the same as that defined by the corresponding real equations in (26), so each such tangent plane is a complex line. Proof of (7.7). Since \( {f}_{i} \) is differentiable, there are planes \[ {Y}_{i} = {c}_{i1}{X}_{1} + {c}_{i2}{X}_{2}\;\left( {i = 1,2}\right) \] so that \[ \mathop{\lim }\limits_{{\left( {{x}_{1},{x}_{2}}\right) \rightarrow 0}}\frac{{f}_{i}\left( {{x}_{1},{x}_{2}}\right) - \left( {{c}_{i1}{x}_{1} + {c}_{i2}{x}_{2}}\right) }{\left| {x}_{1}\right| + \left| {x}_{2}\right| } = 0\;\left( {i = 1,2}\right) . \] This means \[ \mathop{\lim }\limits_{{\left( {{x}_{1},{x}_{2},{y}_{1},{y}_{2}}\right) \rightarrow 0}}\frac{\left( {{y}_{i} - {f}_{i}\left( {{x}_{1},{x}_{2}}\right) }\right) - \left( {{y}_{i} - \left( {{c}_{i1}{x}_{1} + {c}_{i2}{x}_{2}}\right) }\right) }{\left| {x}_{1}\right| + \left| {x}_{2}\right| } = 0\;\left( {i = 1,2}\right) . 
\] Now \( {y}_{i} - \left( {{c}_{i1}{x}_{1} + {c}_{i2}{x}_{2}}\right) \) is zero for \( \left( {{x}_{1},{x}_{2},{y}_{1},{y}_{2}}\right) \in T \), so if \( \left( {{x}_{1},{x}_{2},{y}_{1},{y}_{2}}\right) \rightarrow 0 \) along points in \( T \), the above limit becomes \[ \mathop{\lim }\limits_{{\left( {{x}_{1},{x}_{2},{y}_{1},{y}_{2}}\right) \rightarrow 0}}\frac{{y}_{i} - {f}_{i}\left( {{x}_{1},{x}_{2}}\right) }{\left| {x}_{1}\right| + \left| {x}_{2}\right| } = 0 \] hence approaching along points in \( T \), we have \[ \mathop{\lim }\limits_{{\left( {{x}_{1},{x}_{2},{y}_{1},{y}_{2}}\right) \rightarrow 0}}\frac{{F}_{i}\left( {{x}_{1},{x}_{2},{y}_{1},{y}_{2}}\right) - {F}_{i}\left( {0,0,0,0}\right) }{\left| {x}_{1}\right| + \left| {x}_{2}\right| + \left| {y}_{1}\right| + \left| {y}_{2}\right| } = 0. \] Therefore (7.7) is proved. To continue with the proof of " \( \Rightarrow \) " in Theorem 7.4, note that the rank at (0) of the Jacobian matrix \[ J = \left( \begin{array}{llll} \frac{\partial {F}_{1}}{\partial {X}_{1}} & \frac{\partial {F}_{1}}{\partial {X}_{2}} & \frac{\partial {F}_{1}}{\partial {Y}_{1}} & \frac{\partial {F}_{1}}{\partial {Y}_{2}} \\ \frac{\partial {F}_{2}}{\partial {X}_{1}} & \frac{\partial {F}_{2}}{\partial {X}_{2}} & \frac{\partial {F}_{2}}{\partial {Y}_{1}} & \frac{\partial {F}_{2}}{\partial {Y}_{2}} \end{array}\right) \] is two (the last two columns form an identity matrix). We now choose new coordinates in \( {\mathbb{C}}^{2} \) about (0) as follows: Let \( {\mathbb{C}}_{{X}^{\prime }} = T \) be the tangent space to \( \mathrm{V}\left( p\right) \) at \( \left( {0,0}\right) \), and let \( {\mathbb{C}}_{{Y}^{\prime }} \) be any other complex 1-subspace of \( {\mathbb{C}}^{2} \) . 
If \( {J}^{\prime } \) denotes \( J \) after changing to a new real basis with vectors in \( {\mathbb{C}}_{{X}^{\prime }} \) and \( {\mathbb{C}}_{{Y}^{\prime }} \), then rank \( \left( {J}^{\prime }\right) \) \( = 2 \), since the rank of a matrix is unchanged by a change of basis. Now derivatives at (0) in any direction in \( {\mathbb{R}}_{{X}_{1}^{\prime }{X}_{2}^{\prime }} \) are all zero (by (7.7)), so the \( {Y}^{\prime } \) -columns of \( {J}^{\prime } \) are linearly independent; hence by the implicit function theorem (Theorem 3.6), the part of \( \mathrm{V}\left( p\right) \) near (0) is also a graph of a function relative to these new coordinates. Now, if \( \left( {\partial p/\partial X}\right) \left( 0\right) = \left( {\partial p/\partial Y}\right) \left( 0\right) = 0 \), then \( p \) has zero derivative in any direction, so if \( X = \alpha {X}^{\prime } + \beta {Y}^{\prime } \) and \( Y = \gamma {X}^{\prime } + \delta {Y}^{\prime } \), then \( p\left( {\alpha {X}^{\prime } + \beta {Y}^{\prime }}\right. \) , \( \left. {\gamma {X}^{\prime }
106_106_The Cantor function
Definition 1.1
Definition 1.1. Let \( w \in \widetilde{P} \) . The set of variables involved in \( w \), denoted by \( V\left( w\right) \), is defined by \[ V\left( w\right) = \cap \{ U \mid U \subseteq V, w \in \widetilde{P}\left( {U,\mathcal{R}}\right) \} . \] \( {}^{1} \) This is very different to the concepts of existence used in other contexts such as "Do flying saucers exist?" or "Does God exist?" or "Do electrons exist?". Exercise 1.2. Show that (i) \( V\left( F\right) = \varnothing \) . (ii) If \( r \in \mathcal{R},\operatorname{ar}\left( r\right) = n \), and \( {x}_{1},\ldots ,{x}_{n} \in V \), then \( V\left( {r\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) = \) \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) (iii) If \( {w}_{1},{w}_{2} \in \widetilde{P} \), then \( V\left( {{w}_{1} \Rightarrow {w}_{2}}\right) = V\left( {w}_{1}\right) \cup V\left( {w}_{2}\right) \) . (iv) If \( x \in V \) and \( w \in \widetilde{P} \), then \( V\left( {\left( {\forall x}\right) w}\right) = \{ x\} \cup V\left( w\right) \) . Show further that (i)-(iv) may be taken as the definition of the function \( V\left( w\right) \) . Definition 1.3. Let \( w \in \widetilde{P} \) . The depth of quantification of \( w \), denoted by \( d\left( w\right) \), is defined by (i) \( d\left( F\right) = 0, d\left( {r\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) = 0 \) for every free generator of \( \widetilde{P} \) . (ii) \( d\left( {{w}_{1} \Rightarrow {w}_{2}}\right) = \max \left( {d\left( {w}_{1}\right), d\left( {w}_{2}\right) }\right) \) . (iii) \( d\left( {\left( {\forall x}\right) w}\right) = 1 + d\left( w\right) \left( {x \in V}\right) \) . Our desired congruence relation on \( \widetilde{P} \) may now be defined. Definition 1.4. Let \( {w}_{1},{w}_{2} \in \widetilde{P} \) . 
We define \( {w}_{1} \approx {w}_{2} \) if (a) \( d\left( {w}_{1}\right) = d\left( {w}_{2}\right) = 0 \) and \( {w}_{1} = {w}_{2} \), or (b) \( d\left( {w}_{1}\right) = d\left( {w}_{2}\right) > 0,{w}_{1} = {a}_{1} \Rightarrow {b}_{1},{w}_{2} = {a}_{2} \Rightarrow {b}_{2},{a}_{1} \approx {a}_{2} \) and \( {b}_{1} \approx {b}_{2} \), or (c) \( {w}_{1} = \left( {\forall x}\right) a,{w}_{2} = \left( {\forall y}\right) b \) and either (i) \( x = y \) and \( a \approx b \), or (ii) there exists \( c = c\left( x\right) \) such that \( c\left( x\right) \approx a, c\left( y\right) \approx b \) and \( y \notin V\left( c\right) \) . We remark that in part (c) (ii), the notation \( c = c\left( x\right) \) indicates the way the element concerned is a function of \( x \), and ignores its possible dependence on other variables. We use it so we can represent the effect of substituting \( y \) for \( x \) throughout. It is therefore unnecessary for us to impose the condition \( x \notin V\left( {c\left( y\right) }\right) \) . The notation does not imply \( V\left( {c\left( x\right) }\right) = \{ x\} \), hence we must impose the condition \( y \notin V\left( {c\left( x\right) }\right) \) . Thus the condition (c) (ii) is symmetric, and \( \approx \) is trivially reflexive. The proof that it is transitive is left as an exercise. Exercise 1.5. (i) Given that \( z \notin V\left( {w}_{1}\right) \cup V\left( {w}_{2}\right) \), show by induction over \( d\left( {w}_{1}\right) \) that the element \( c = c\left( x\right) \) in (c) (ii) can always be chosen such that \( z \notin V\left( c\right) \) . (ii) If \( u\left( x\right) \approx v\left( x\right) \) and \( y \notin V\left( {u\left( x\right) }\right) \cup V\left( {v\left( x\right) }\right) \), show by induction over \( d\left( {u\left( x\right) }\right) \) that \( u\left( y\right) \approx v\left( y\right) \) . (iii) Prove that \( \approx \) is transitive. 
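The recursions for \( V\left( w\right) \) (Exercise 1.2), \( d\left( w\right) \) (Definition 1.3), and the congruence \( \approx \) (Definition 1.4) can all be prototyped in a few lines. The sketch below is an added illustration with a simplified treatment of case (c)(ii): instead of searching for a witness \( c\left( x\right) \), it substitutes one common fresh variable for both bound variables, which suffices for the examples shown. All names and the tuple encoding are mine.

```python
from itertools import count

# Formulas as tuples: ('F',), ('atom', r, args), ('imp', a, b), ('all', x, body).

def V(w):                       # variables involved, Exercise 1.2 (i)-(iv)
    tag = w[0]
    if tag == 'F':    return set()
    if tag == 'atom': return set(w[2])
    if tag == 'imp':  return V(w[1]) | V(w[2])
    if tag == 'all':  return {w[1]} | V(w[2])

def d(w):                       # depth of quantification, Definition 1.3
    tag = w[0]
    if tag in ('F', 'atom'): return 0
    if tag == 'imp':         return max(d(w[1]), d(w[2]))
    if tag == 'all':         return 1 + d(w[2])

def subst(w, x, y):             # replace free occurrences of x by y
    tag = w[0]
    if tag == 'F':    return w
    if tag == 'atom': return ('atom', w[1], tuple(y if a == x else a for a in w[2]))
    if tag == 'imp':  return ('imp', subst(w[1], x, y), subst(w[2], x, y))
    return w if w[1] == x else ('all', w[1], subst(w[2], x, y))

fresh = (f'_t{i}' for i in count())

def alpha_eq(u, v):             # the congruence ≈ of Definition 1.4 (sketch)
    if u[0] != v[0]:            return False
    if u[0] in ('F', 'atom'):   return u == v
    if u[0] == 'imp':           return alpha_eq(u[1], v[1]) and alpha_eq(u[2], v[2])
    t = next(fresh)             # case (c)(ii) via a common fresh variable
    return alpha_eq(subst(u[2], u[1], t), subst(v[2], v[1], t))

w = ('all', 'x', ('imp', ('atom', 'r', ('x', 'y')), ('F',)))
assert V(w) == {'x', 'y'} and d(w) == 1   # note: V includes the bound variable x
assert alpha_eq(('all', 'x', ('atom', 'r', ('x',))),
                ('all', 'y', ('atom', 'r', ('y',))))
assert not alpha_eq(('all', 'x', ('atom', 'r', ('x',))),
                    ('all', 'y', ('atom', 'r', ('x',))))
```

The last two assertions show \( \left( {\forall x}\right) r\left( x\right) \approx \left( {\forall y}\right) r\left( y\right) \) while \( \left( {\forall x}\right) r\left( x\right) \not\approx \left( {\forall y}\right) r\left( x\right) \), as the side condition \( y \notin V\left( c\right) \) demands.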
Since the relation \( \approx \) is an equivalence which is clearly compatible with the operations of the algebra, it is a congruence relation on \( \widetilde{P}\left( {V,\mathcal{R}}\right) \) . Definition 1.6. The (reduced) first-order algebra \( P\left( {V,\mathcal{R}}\right) \) on \( \left( {V,\mathcal{R}}\right) \) is the factor algebra of \( \widetilde{P}\left( {V,\mathcal{R}}\right) \) by the congruence relation \( \approx \) . The elements of \( P = P\left( {V,\mathcal{R}}\right) \) are the congruence classes. If \( w \in \widetilde{P} \) and \( \left\lbrack w\right\rbrack \) is the congruence class of \( w \), then \[ \left( {\forall x}\right) \left\lbrack w\right\rbrack = \left\lbrack {\left( {\forall x}\right) w}\right\rbrack \] and \[ \left\lbrack {w}_{1}\right\rbrack \Rightarrow \left\lbrack {w}_{2}\right\rbrack = \left\lbrack {{w}_{1} \Rightarrow {w}_{2}}\right\rbrack \] Definition 1.7. Let \( w \in P \) . We define the set \( \operatorname{var}\left( w\right) \) of (free) variables of \( w \) by putting \( \operatorname{var}\left( w\right) = \operatorname{var}\left( \widetilde{w}\right) \), where \( \widetilde{w} \in \widetilde{P} \) is some representative of the congruence class \( w \), and where \( \operatorname{var}\left( \widetilde{w}\right) \) is defined inductively by (i) \( \operatorname{var}\left( F\right) = \varnothing \) , (ii) \( \operatorname{var}\left( {r\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) = \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) for \( r \in \mathcal{R},{x}_{1},\ldots ,{x}_{n} \in V \) , (iii) \( \operatorname{var}\left( {{\widetilde{w}}_{1} \Rightarrow {\widetilde{w}}_{2}}\right) = \operatorname{var}\left( {\widetilde{w}}_{1}\right) \cup \operatorname{var}\left( {\widetilde{w}}_{2}\right) \) , (iv) \( \operatorname{var}\left( {\left( {\forall x}\right) \widetilde{w}}\right) = \operatorname{var}\left( \widetilde{w}\right) - \{ x\} \) . Definition 1.8. Let \( A \subseteq P \) . 
Put \[ \operatorname{var}\left( A\right) = \mathop{\bigcup }\limits_{{p \in A}}\operatorname{var}\left( p\right) \] ## Exercises 1.9. Show that if \( {\widetilde{w}}_{1} \approx {\widetilde{w}}_{2} \), then \( \operatorname{var}\left( {\widetilde{w}}_{1}\right) = \operatorname{var}\left( {\widetilde{w}}_{2}\right) \), and conclude that \( \operatorname{var}\left( w\right) \) is defined for \( w \in P \) . 1.10. Show that for any \( w \in P \), there is a representative \( \widetilde{w} \) of \( w \) such that no variable \( x \in V \) appears in \( \widetilde{w} \) more than once in a quantifier \( \left( {\forall x}\right) \) , and no \( x \in \operatorname{var}\left( w\right) \) appears at all in a quantifier (i.e., \( \widetilde{w} \) has no repeated dummy variables, and no free variables also appear as dummies). We assume henceforth that any \( w \in P \) is represented by a \( \widetilde{w} \in \widetilde{P} \) having the form described in Exercise 1.10. We shall also usually abuse notation and not distinguish between \( p \in \widetilde{P} \) and \( \left\lbrack p\right\rbrack \in P \) . ## §2 Interpretations We want to think of the elements of \( V \) as names of objects, and the elements of \( \mathcal{R} \) as relations among those objects. If we take a non-empty set \( U \) , and a function \( \varphi : V \rightarrow U \), then we can think of \( x \in V \) as a name for the element \( \varphi \left( x\right) \in U \) . Of course, not every element \( u \in U \) need have a name, while some elements \( u \) may well have more than one name. Next we take a function \( \psi \) , from \( \mathcal{R} \) into the set of all relations on \( U \), such that if \( r \in {\mathcal{R}}_{n} \), then \( \psi \left( r\right) \) is an \( n \) -ary relation. It will be convenient to write simply \( {\varphi x} \) for \( \varphi \left( x\right) \), and \( {\psi r} \) for \( \psi \left( r\right) \) . 
As for valuations, these again should be functions \( v : P \rightarrow {\mathbb{Z}}_{2} \) which will correspond to our intuitive notion of truth. Since our interpretation of the element \( r\left( {{x}_{1},\ldots ,{x}_{n}}\right) \in P \) in terms of \( U,\varphi ,\psi \) must obviously be the statement that \( \left( {\varphi {x}_{1},\ldots ,\varphi {x}_{n}}\right) \in {\psi r} \), we shall require of \( v \) that (a) if \( r \in {\mathcal{R}}_{n} \) and \( {x}_{1},\ldots ,{x}_{n} \in V \), then \( v\left( {r\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) = 1 \) if \( \left( {\varphi {x}_{1},\ldots ,\varphi {x}_{n}}\right) \in \) \( {\psi r} \), and is 0 otherwise, while we still require that (b) \( v \) is a homomorphism of \( \{ F, \Rightarrow \} \) -algebras. It remains for us to define truth for a proposition of the form \( \left( {\forall x}\right) p\left( x\right) \) in terms of our understanding of it for \( p\left( x\right) \), and so we use an induction over the depth of quantification. Let \( {P}_{k}\left( {V,\mathcal{R}}\right) \) be the set of all elements \( p \) of \( P\left( {V,\mathcal{R}}\right) \) with \( d\left( p\right) \leq k \) . If we take some new variable \( t \), then intuitively, we consider \( \left( {\forall x}\right) p\left( x\right) \left( { = \left( {\forall t}\right) p\left( t\right) }\right) \) to be true if \( p\left( t\right) \) is true no matter how we choose to interpret \( t \) . This leads to a further requirement for \( v \), namely: \( \left( {c}_{k}\right) \) Suppose \( p = \left( {\forall x}\right) q\left( x\right) \) has depth \( k \) . Put \( {V}^{\prime } = V \cup \{ t\} \) where \( t \notin V \) . 
If for every extension \( {\varphi }^{\prime } : {V}^{\prime } \rightarrow U \) of \( \varphi \) and for every \( {v}_{k - 1}^{\prime } : {P}_{k - 1}\left( {{V}^{\prime },\mathcal{R}}\right) \rightarrow {\mathbb{Z}}_{2} \) such that \( \left( {{\varphi }^{\prime },\psi ,{v}_{k - 1}^{\prime }}\right) \) satisfy (a),(b) and \( \left( {c}_{i}\right) \) for all \( i < k \), we have \( {v}_{k - 1}^{\prime }\left( {q\left( t\right) }\right) = \) 1, then \( v\left( p\right) = 1 \), otherwise \( v\left( p\right) = 0 \) . Exercise 2.1. Given \( U,\varphi ,\psi \), prove that there is one and only one function \( v : P \rightarrow {\mathbb{Z}}_{2} \) satisfying (a),(b) and \( \left( {c}_{i}\right) \) for a
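For a finite universe \( U \), the requirements (a), (b), and \( \left( {c}_{k}\right) \) determine \( v \) by direct recursion: \( v\left( {\left( {\forall x}\right) q}\right) = 1 \) exactly when \( q \) holds under every extension of the assignment to \( x \). A sketch under that finiteness assumption, assuming a hypothetical tuple encoding of formulas (`('F',)`, `('rel', r, args)`, `('imp', a, b)`, `('all', x, body)`), with `phi` and `psi` standing for \( \varphi \) and \( \psi \):

```python
def v(w, U, phi, psi):
    """Truth value of w over universe U, under assignment phi: V -> U and
    interpretation psi mapping each relation symbol to a set of tuples."""
    tag = w[0]
    if tag == 'F':                      # requirement (b): v(F) = 0
        return 0
    if tag == 'rel':                    # requirement (a)
        return 1 if tuple(phi[x] for x in w[2]) in psi[w[1]] else 0
    if tag == 'imp':                    # (b): v(a => b) = (v(a) => v(b))
        return max(1 - v(w[1], U, phi, psi), v(w[2], U, phi, psi))
    if tag == 'all':                    # (c_k), finite case: try every
        x, q = w[1], w[2]               # extension of phi to the variable x
        return min(v(q, U, {**phi, x: u}, psi) for u in U)

# Example: U = {0, 1}, with r interpreted as equality on U.
U = {0, 1}
psi = {'r': {(0, 0), (1, 1)}}
reflexive = ('all', 'x', ('rel', 'r', ('x', 'x')))
print(v(reflexive, U, {}, psi))  # 1
```

The book's \( \left( {c}_{k}\right) \) quantifies over all extensions \( {\varphi }^{\prime } \) and all lower-depth valuations; for finite \( U \) this collapses to the `min` over the finitely many choices of \( {\varphi }^{\prime }\left( t\right) \).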
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 4.5.1
Definition 4.5.1. A commutative Banach algebra \( A \) has the spectral extension property if \( {r}_{B}\left( x\right) = {r}_{A}\left( x\right) \) for every extension \( B \) of \( A \) and every \( x \in A \) . \( A \) is said to have the strong spectral extension property if \( {\sigma }_{B}\left( x\right) \cup \{ 0\} = \) \( {\sigma }_{A}\left( x\right) \cup \{ 0\} \) holds for every commutative extension \( B \) of \( A \) and every \( x \in A \) . Finally, \( A \) has the multiplicative Hahn-Banach property if, given any commutative extension \( B \) of \( A \), every \( \varphi \in \Delta \left( A\right) \) extends to some element of \( \Delta \left( B\right) \) . It is clear from the very definition that the strong spectral extension property implies the spectral extension property. For any commutative Banach algebra \( C \) and \( y \in C \) , \[ {\sigma }_{C}\left( y\right) \smallsetminus \{ 0\} \subseteq \{ \psi \left( y\right) : \psi \in \Delta \left( C\right) \} \subseteq {\sigma }_{C}\left( y\right) \] (Theorem 2.2.5). Therefore the multiplicative Hahn-Banach property implies the strong spectral extension property. We have seen in Corollary 4.2.17 that every regular semisimple commutative Banach algebra \( A \) has the multiplicative Hahn-Banach property. The purpose of this section is to characterise the semisimple commutative Banach algebras having any of these three properties by a condition similar to, but weaker than, regularity of \( A \) and conditions involving the Shilov boundary \( \partial \left( A\right) \) of \( A \) . Lemma 4.5.2. For a commutative Banach algebra \( A \), the following conditions are equivalent. (i) \( A \) has the spectral extension property. (ii) Every submultiplicative norm \( \left| \cdot \right| \) on \( A \) satisfies \( {r}_{A}\left( a\right) \leq \left| a\right| \) for all \( a \in A \) . Proof. 
(i) \( \Rightarrow \) (ii) Let \( \left| \cdot \right| \) be any submultiplicative norm on \( A \) and let \( \left( {B,\parallel \cdot \parallel }\right) \) be the completion of \( \left( {A,\left| \cdot \right| }\right) \) . By (i), for all \( a \in A \) , \[ {r}_{A}\left( a\right) = {r}_{B}\left( a\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{a}^{n}\end{Vmatrix}}^{1/n} \leq \parallel a\parallel = \left| a\right| . \] (ii) \( \Rightarrow \) (i) Let \( \left( {B,\parallel \cdot \parallel }\right) \) be any extension of \( A \) . Then \( {r}_{B} \) is a submultiplicative norm on \( B \) and hence on \( A \) . Thus (ii) implies that \( {r}_{A}\left( a\right) \leq {r}_{B}\left( a\right) \) for all \( a \in A \) , as required. For an element \( a \in A \), define the permanent radius \( {r}_{p}\left( a\right) \) of \( a \) to be \[ {r}_{p}\left( a\right) = \inf \left\{ {{r}_{B}\left( a\right) : B\text{ is an extension of }A}\right\} . \] Clearly, \( {r}_{p}\left( a\right) \leq {r}_{A}\left( a\right) \) . More precisely, \[ {r}_{p}\left( a\right) = \inf \{ \left| a\right| : \left| \cdot \right| \text{is a submultiplicative norm on}A\} \text{.} \] Indeed, if \( \left| \cdot \right| \) is any submultiplicative norm on \( A \) and \( B \) is the completion of \( A \) with respect to \( \left| \cdot \right| \), then \( B \) is an extension of \( A \) and hence \( {r}_{p}\left( a\right) \leq {r}_{B}\left( a\right) \leq \left| a\right| \) for every \( a \in A \) . Theorem 4.5.3. For a semisimple commutative Banach algebra \( A \) the following are equivalent. (i) \( A \) has the spectral extension property. (ii) If \( E \) is a closed subset of \( \Delta \left( A\right) \) that does not contain the Shilov boundary of \( A \), then there exists an element \( a \in A \) such that \( \widehat{a} = 0 \) on \( E \) and \( {r}_{p}\left( a\right) > 0 \) . 
(iii) Whenever \( B \) is a commutative extension of \( A \), every \( \varphi \in \partial \left( A\right) \) extends to some element of \( \Delta \left( B\right) \) . Proof. (i) \( \Rightarrow \) (ii) Let \( E \) be a closed subset of \( \Delta \left( A\right) \) that does not contain the Shilov boundary of \( A \) . Towards a contradiction, assume that \( E \) has the property that for any \( a \in A \), \( {\left. \widehat{a}\right| }_{E} = 0 \) implies \( a = 0 \) . Then \( \left| a\right| = {\begin{Vmatrix}{\left. \widehat{a}\right| }_{E}\end{Vmatrix}}_{\infty } \) defines a submultiplicative norm on \( A \) . Because \( A \) has the spectral extension property, Lemma 4.5.2 shows that \( {r}_{A}\left( a\right) \leq \left| a\right| \) . Now, since \( A \) is semisimple, \( {r}_{A}\left( a\right) = \parallel \widehat{a}{\parallel }_{\infty } \) and hence \( \parallel \widehat{a}{\parallel }_{\infty } = {\begin{Vmatrix}{\left. \widehat{a}\right| }_{E}\end{Vmatrix}}_{\infty } \) for all \( a \in A \) . Thus \( E \) is a boundary for \( A \) and therefore has to contain the Shilov boundary. This contradiction shows the existence of some nonzero element \( a \in A \) such that \( \widehat{a} = 0 \) on \( E \) . Finally, using once more the facts that \( A \) is semisimple and has the spectral extension property, we have \( {r}_{p}\left( a\right) = {r}_{A}\left( a\right) > 0 \) . (ii) \( \Rightarrow \) (iii) Let \( B \) be a commutative extension of \( A \) and consider the restriction map \[ \phi : \Delta \left( B\right) \cup \{ 0\} \rightarrow \Delta \left( A\right) \cup \{ 0\} ,\;\psi \mapsto {\left. \psi \right| }_{A}. \] Then \( \phi \) is continuous with respect to the \( {w}^{ * } \) -topologies. Moreover, \( \Delta \left( B\right) \cup \{ 0\} \) is a \( {w}^{ * } \) -closed subset of the unit ball of \( {B}^{ * } \), because a \( {w}^{ * } \) -limit of elements of \( \Delta \left( B\right) \) is either 0 or an element of \( \Delta \left( B\right) \) . 
Thus \( \Delta \left( B\right) \cup \{ 0\} \) is \( {w}^{ * } \) -compact and hence so is \( \phi \left( {\Delta \left( B\right) \cup \{ 0\} }\right) \subseteq \Delta \left( A\right) \cup \{ 0\} \) . Let \[ E = \Delta \left( A\right) \cap \phi \left( {\Delta \left( B\right) \cup \{ 0\} }\right) \] which is a closed subset of \( \Delta \left( A\right) \) . For all \( a \in A \) , \[ {r}_{B}\left( a\right) = \sup \{ \left| {\psi \left( a\right) }\right| : \psi \in \Delta \left( B\right) \} = \sup \{ \left| {\varphi \left( a\right) }\right| : \varphi \in E\} , \] since, by definition of \( E,\phi \left( {\Delta \left( B\right) }\right) \smallsetminus E \) consists only of the zero functional. Also, by definition of \( E \), every \( \varphi \in E \) extends to some element of \( \Delta \left( B\right) \) . It therefore suffices to show that \( \partial \left( A\right) \subseteq E \) . Assume that \( E \) does not contain \( \partial \left( A\right) \) . Then, by hypothesis (ii), there exists \( a \in A \) such that \( {r}_{p}\left( a\right) > 0 \) and \( {\left. \widehat{a}\right| }_{E} = 0 \) . Thus, by the above formula for \( {r}_{B}\left( a\right) \) , \[ 0 < {r}_{p}\left( a\right) \leq {r}_{B}\left( a\right) = {\begin{Vmatrix}{\left. \widehat{a}\right| }_{E}\end{Vmatrix}}_{\infty } = 0. \] This contradiction shows that \( \partial \left( A\right) \subseteq E \) . (iii) \( \Rightarrow \) (i) Let \( a \in A \) and let \( B \) be any extension of \( A \) . To show that \( {r}_{B}\left( a\right) = {r}_{A}\left( a\right) \), we can assume that \( B \) is commutative. In fact, if \( C \) is the closure of \( A \) in \( B \), then \( {r}_{B}\left( a\right) = {r}_{C}\left( a\right) \) . Choose \( \varphi \in \partial \left( A\right) \) such that \( \left| {\varphi \left( a\right) }\right| = \) \( {r}_{A}\left( a\right) \) . By (iii), \( \varphi \) extends to some \( \psi \in \Delta \left( B\right) \) . 
It follows that \[ {r}_{A}\left( a\right) = \left| {\varphi \left( a\right) }\right| = \left| {\psi \left( a\right) }\right| \leq {r}_{B}\left( a\right) \leq {r}_{A}\left( a\right) , \] and so (i) holds. Before proceeding, we insert a consequence concerning the existence of zero divisors. Corollary 4.5.4. Let \( A \) be a semisimple commutative Banach algebra and suppose that \( A \) has the spectral extension property. If \( A \) is not one-dimensional, then \( A \) contains zero divisors. Proof. Notice that if \( E \) is any proper closed subset of \( \partial \left( A\right) \), then, by Theorem 4.5.3, there exists \( a \neq 0 \) in \( A \) such that \( \widehat{a} = 0 \) on \( E \) . Now \( \partial \left( A\right) \) contains at least two elements. Indeed, this follows from Theorem 3.3.14 if \( \partial \left( A\right) \neq \Delta \left( A\right) \) and is clear otherwise since \( A \) is not one-dimensional and semisimple. Thus we can find two nonempty disjoint open subsets \( U \) and \( V \) of \( \partial \left( A\right) \) . Let \( E = \partial \left( A\right) \smallsetminus U \) and \( F = \partial \left( A\right) \smallsetminus V \) . Then there exist nonzero elements \( a, b \in A \) such that \( \widehat{a} = 0 \) on \( E \) and \( \widehat{b} = 0 \) on \( F \) . It follows that \( \widehat{ab} = \widehat{a}\widehat{b} = 0 \) on \( \partial \left( A\right) \) and hence on all of \( \Delta \left( A\right) \) . By semisimplicity of \( A \), we get \( {ab} = 0 \) . An obvious method to construct an extension is as follows. Lemma 4.5.5. Let \( A \) be a semisimple commutative Banach algebra. Then \( {C}_{0}\left( {\partial \left( A\right) }\right) \) is an extension of \( A \) . Furthermore, if \( \varphi \in \Delta \left( A\right) \) extends to some element of \( \Delta \left( {{C}_{0}\left( {\partial \left( A\right) }\right) }\right) \), then \( \varphi \in \partial \left( A\right) \) . Proof. 
Because \( A \) is semisimple, the mapping \( a \rightarrow {\left. \widehat{a}\right| }_{\partial \left( A\right) } \) is an injective homomorphism of \( A \) into \( {C}_{0}\left( {\partial \left( A\right) }\right) \) . Now, every element of \( \Delta \left( {{C}_{0}\left( {\partial \left( A\right) }\right) }\right) \) equ
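The disjoint-support idea behind the zero divisors of Corollary 4.5.4 can be made concrete in \( C\left( \left\lbrack {0,1}\right\rbrack \right) \): two nonzero continuous functions whose supports are disjoint multiply to zero. A numerical sketch on a grid (illustrative only, not the general Banach-algebra argument):

```python
import numpy as np

# Sample C([0,1]) on a grid; a is supported in [0, 1/4], b in [3/4, 1].
xs = np.linspace(0.0, 1.0, 1001)
a = np.maximum(0.0, 0.25 - xs)
b = np.maximum(0.0, xs - 0.75)

assert a.max() > 0 and b.max() > 0     # both elements are nonzero...
assert np.all(a * b == 0)              # ...yet their product vanishes
print("ab = 0 with a, b != 0")
```

This mirrors the proof exactly: the corollary manufactures \( a \) and \( b \) vanishing off two disjoint open subsets of the Shilov boundary, so that \( \widehat{a}\widehat{b} = 0 \) everywhere and semisimplicity forces \( {ab} = 0 \).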
1225_(Griffiths) Introduction to Algebraic Curves
Definition 4.1
Definition 4.1. Suppose \( C \) is a Riemann surface. Then a holomorphic differential (respectively, meromorphic differential) \( \omega \) is by definition a family \( \left\{ \left( {{U}_{i},{z}_{i},{\omega }_{i}}\right) \right\} \) such that: a) \( \left\{ \left( {{U}_{i},{z}_{i}}\right) \right\} \) is a holomorphic covering of \( C \), and \[ {\omega }_{i} = {f}_{i}\left( {z}_{i}\right) d{z}_{i} \] where \( {f}_{i} \in \mathcal{O}\left( {U}_{i}\right) \) (resp. \( K\left( {U}_{i}\right) \) ); b) if \( {z}_{i} = {\varphi }_{ij}\left( {z}_{j}\right) \) is the coordinate transformation on \( {U}_{i} \cap {U}_{j}\left( { \neq \varnothing }\right) \) , then \[ {f}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) }\right) \frac{d{\varphi }_{ij}\left( {z}_{j}\right) }{d{z}_{j}} = {f}_{j}\left( {z}_{j}\right) , \] i.e., the local representation of the differential changes according to the chain rule \[ {f}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) }\right) d{\varphi }_{ij}\left( {z}_{j}\right) = {f}_{j}\left( {z}_{j}\right) d{z}_{j}. \] We use \( {\Omega }^{1}\left( C\right) \) (respectively \( {K}^{1}\left( C\right) \) ) to denote the set of holomorphic differentials (respectively, meromorphic differentials) on \( C \) . The following result is obvious: Proposition 4.2. Suppose \( {\omega }_{0},{\omega }_{1} \) are meromorphic differentials on \( C \) , \( {\omega }_{0} ≢ 0 \) . Then \( {\omega }_{1}/{\omega }_{0} \) defines a meromorphic function on \( C \) . EXAMPLE 1. Suppose \( r\left( z\right) \) is a rational function on \( \mathbb{C} \), then \( r\left( z\right) {dz} \) is a meromorphic differential on the Riemann sphere \( S \) . EXERCISE 4.1. Prove that all the \( \omega \in {K}^{1}\left( S\right) \) are of the above form. Suggestion. 
Fix a rational function \( {r}_{0}\left( z\right) \), and let \[ {\omega }_{0} = {r}_{0}\left( z\right) {dz} \] then \( \omega /{\omega }_{0} \in K\left( S\right) \) for any \( \omega \in {K}^{1}\left( S\right) \), and we have determined \( K\left( S\right) \) above. EXAMPLE 2. Suppose \[ \Lambda = \left\{ {{m}_{1}{w}_{1} + {m}_{2}{w}_{2} \mid {m}_{1},{m}_{2} \in \mathbb{Z}}\right\} \] is a lattice in \( \mathbb{C} \), and \( f \) is a doubly periodic meromorphic function with period \( \left( {{w}_{1},{w}_{2}}\right) \), then \( f\left( w\right) {dw} \) is a meromorphic differential defined on \( C = \mathbb{C}/\Lambda \) . EXERCISE 4.2. Verify that \( \omega = {dw} \) yields a holomorphic differential on \( C = \mathbb{C}/\Lambda \) . EXAMPLE 3. Suppose \( C \) is a Riemann surface, with \[ f = \left\{ \left( {{U}_{i},{z}_{i},{f}_{i}\left( {z}_{i}\right) }\right) \right\} \in K\left( C\right) . \] Then the following expression defines a meromorphic differential \( {df} \in \) \( {K}^{1}\left( C\right) \) \[ {df} = \left\{ \left( {{U}_{i},{z}_{i}, d{f}_{i}\left( {z}_{i}\right) = \frac{d{f}_{i}\left( {z}_{i}\right) }{d{z}_{i}}d{z}_{i}}\right) \right\} . \] Definition 4.3. We call df, as defined above, the differential of the meromorphic function \( f \) . DEFINITION 4.4. Suppose \( C \) is a Riemann surface with \[ \omega = \left\{ \left( {{U}_{i},{z}_{i},{f}_{i}\left( {z}_{i}\right) d{z}_{i}}\right) \right\} \in {K}^{1}\left( C\right) ,\;p \in {U}_{i} \cap {U}_{j}. \] Then \[ {\nu }_{p}\left( {f}_{i}\right) = {\nu }_{p}\left( {{f}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) }\right) \frac{d{\varphi }_{ij}\left( {z}_{j}\right) }{d{z}_{j}}}\right) = {\nu }_{p}\left( {f}_{j}\right) , \] and thus we can define \[ {\nu }_{p}\left( \omega \right) = {\nu }_{p}\left( {f}_{i}\right) ,\;p \in {U}_{i}. 
\] If \( {\nu }_{p}\left( \omega \right) > 0 \), then \( p \) is called a zero of \( \omega \) ; if \( {\nu }_{p}\left( \omega \right) < 0 \), then \( p \) is called a pole of \( \omega \) . DEFINITION 4.5. Suppose \( C \) is a Riemann surface with \[ \omega = \left\{ \left( {{U}_{i},{z}_{i},{f}_{i}\left( {z}_{i}\right) d{z}_{i}}\right) \right\} \in {K}^{1}\left( C\right) , \] \( \gamma \) is a piecewise smooth curve on \( C \) not containing the poles of \( \omega \), and \( \gamma = \mathop{\bigcup }\limits_{i}{\gamma }_{i} \) is any partition of \( \gamma \) satisfying \( {\gamma }_{i} \subset {U}_{i} \) . We define the integral \[ {\int }_{\gamma }\omega = \mathop{\sum }\limits_{i}{\int }_{{\gamma }_{i}}{f}_{i}\left( {z}_{i}\right) d{z}_{i} \] This definition is well defined, for suppose \( \gamma = \mathop{\bigcup }\limits_{j}{\gamma }_{j}^{\prime } \) were another partition of \( \gamma \) satisfying \( {\gamma }_{j}^{\prime } \subset {U}_{j} \) . Then by the change of variables formula: \[ {\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{i}\left( {z}_{i}\right) d{z}_{i} = {\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{j}\left( {z}_{j}\right) d{z}_{j}, \] so that \[ \mathop{\sum }\limits_{i}{\int }_{{\gamma }_{i}}{f}_{i}\left( {z}_{i}\right) d{z}_{i} = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{i}\left( {z}_{i}\right) d{z}_{i} \] \[ = \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{i}{\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{j}\left( {z}_{j}\right) d{z}_{j} \] \[ = \mathop{\sum }\limits_{j}{\int }_{{\gamma }_{j}^{\prime }}{f}_{j}\left( {z}_{j}\right) d{z}_{j} \] Theorem 4.6 (Stokes’ Theorem for holomorphic differentials). 
Suppose \( C \) is a Riemann surface with \( \Omega \subset C \) an open set, \( \bar{\Omega } \) compact, \( \partial \Omega = \gamma \) a piecewise smooth curve, and \( \omega \) a holomorphic differential defined on an open set containing \( \bar{\Omega } \) . Then \[ {\int }_{\partial \Omega }\omega = 0 \] Proof. Suitably subdivide \( \Omega \) into a disjoint union \( \Omega = \mathop{\bigcup }\limits_{i}{\Omega }_{i} \) such that for each \( i \) , \[ {\Omega }_{i} \subset {U}_{i} \] and \( \partial {\Omega }_{i} \) is a piecewise smooth curve. By using local coordinate representations and applying Cauchy's theorems, we get \[ {\int }_{\partial {\Omega }_{i}}\omega = 0 \] whence \[ {\int }_{\partial \Omega }\omega = \mathop{\sum }\limits_{i}{\int }_{\partial {\Omega }_{i}}\omega = 0 \] since the contributions of the boundaries \( \partial {\Omega }_{i} \) cancel out other than those arcs contributing to \( \partial \Omega \) . Q.E.D. EXERCISE 4.3. Prove that there are no holomorphic differentials on the Riemann sphere \( S \) except the trivial one. Suggestion. Let \( \omega \) be a holomorphic differential, and fix \( q \in S \) . Consider \[ f\left( p\right) = {\int }_{q}^{p}\omega ,\;p \in S, \] which we know from Stokes' theorem to be well defined (why?). It is easily seen that \( f \) is a holomorphic function and thus constant. Hence \( \omega = {df} = 0. \) Definition 4.7. Let \( C \) be a Riemann surface with \( \omega \in {K}^{1}\left( C\right), p \in C \) , \( {\gamma }_{p} \) a small circle around the point \( p \), and \( \omega \) having no poles other than \( p \) on the disc surrounded by \( {\gamma }_{p} \) ( \( p \) itself may or may not be a pole). Then we define the residue at the point \( p \) of \( \omega \) to be \[ {\operatorname{Res}}_{p}\left( \omega \right) = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}\omega \] From Stokes' Theorem, this definition is independent of the choice of the small circle \( {\gamma }_{p} \) . 
Moreover, for \( p \in {U}_{j},{\gamma }_{p} \subset {U}_{j} \), if \( \omega = {f}_{j}\left( {z}_{j}\right) d{z}_{j} \) in \( {U}_{j} \), then \[ {\operatorname{Res}}_{p}\left( \omega \right) = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}\omega = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}{f}_{j}\left( {z}_{j}\right) d{z}_{j} \] \[ = {\operatorname{Res}}_{p}\left( {{f}_{j}\left( {z}_{j}\right) d{z}_{j}}\right) \text{.} \] THEOREM 4.8 (RESIDUE THEOREM). Suppose \( C \) is a compact Riemann surface. For \( \omega \in {K}^{1}\left( C\right) \), we have \[ \mathop{\sum }\limits_{{p \in C}}{\operatorname{Res}}_{p}\left( \omega \right) = 0 \] Proof. Since \( C \) is compact, \( \omega \) can have only a finite number of poles on \( C,{p}_{1},{p}_{2},\ldots ,{p}_{m} \) . Now choose mutually disjoint small discs \( {\Delta }_{1},{\Delta }_{2},\ldots ,{\Delta }_{m} \) so that each contains a distinct \( {p}_{i} \) and each satisfies the conditions in Definition 4.7. Write \[ \Omega = C \smallsetminus \mathop{\bigcup }\limits_{i}{\Delta }_{i} \] Clearly, for a suitably chosen orientation, we have \[ \partial \Omega = - \mathop{\bigcup }\limits_{i}\partial {\Delta }_{i} \] and from Stokes' Theorem, we obtain \[ {2\pi i}\mathop{\sum }\limits_{{p \in C}}{\operatorname{Res}}_{p}\left( \omega \right) = {2\pi i}\mathop{\sum }\limits_{{i = 1}}^{m}{\operatorname{Res}}_{{p}_{i}}\left( \omega \right) \] \[ = \mathop{\sum }\limits_{{i = 1}}^{m}{\int }_{\partial {\Delta }_{i}}\omega \] \[ = - {\int }_{\partial \Omega }\omega = 0.\;\text{ Q.E.D. } \] THEOREM 4.9. Let \( C \) be a compact Riemann surface. If \( f \in K\left( C\right) \) is not a constant function, then \[ \mathop{\sum }\limits_{{p \in C}}{\nu }_{p}\left( f\right) = 0 \] This implies in particular that the number of zeroes of \( f \) is equal to the number of poles of \( f \) (counting multiplicity): \[ \# \{ \text{zeroes of }f\} = \# \{ \text{poles of }f\} . \] Proof. 
Choose \( \omega = {df}/f \) and apply the above residue theorem. Q.E.D. COROLLARY 4.10. If \( f \in K\left( C\right) \) is not constant, then for any \( a \in \mathbb{C} \), we have \[ \# {f}^{-1}\left( a\right) = \# \{ \text{poles of }f\} . \] Here, \( \# {f}^{-1}\left( a\right) \) is the number of points \( p \) such that \( f\left( p\right) = a \), counting each such point with suitable multiplicity. This says that any complex number is a functional value of \( f \), and every functional value is assumed the same number of times (counting multiplicity). REMARK 4.11. If we think of \( f \in K\left(
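The residue theorem can be checked numerically for a rational differential on the Riemann sphere. In the sketch below, `residue` is a hypothetical helper approximating \( \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}\omega \) by a Riemann sum over a small circle; for \( \omega = {dz}/\left( {{z}^{2} - 1}\right) \) the residues at \( z = \pm 1 \) are \( \pm 1/2 \), and in the chart \( w = 1/z \) at infinity one computes \( \omega = {dw}/\left( {{w}^{2} - 1}\right) \), which has no pole at \( w = 0 \), so the residues sum to zero:

```python
import numpy as np

def residue(f, p, r=0.01, n=4096):
    """Approximate Res_p(f dz) = (1/2 pi i) * contour integral of f dz
    over the circle |z - p| = r, by an equispaced Riemann sum."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = p + r * np.exp(1j * t)
    dz = 1j * r * np.exp(1j * t)          # z'(t)
    return (f(z) * dz).sum() * (2.0 * np.pi / n) / (2j * np.pi)

f = lambda z: 1.0 / (z**2 - 1.0)          # omega = dz / (z^2 - 1)
res = [residue(f, 1.0), residue(f, -1.0), residue(f, 0.0)]
# In the chart w = 1/z at infinity, omega = dw / (w^2 - 1) has the same
# formula, so the third call (no pole at w = 0) is the residue at infinity.
print([round(x.real, 6) for x in res], abs(sum(res)))
```

Since the integrand is a smooth periodic function of \( t \), the equispaced sum converges spectrally, so even this crude rule reproduces the residues essentially to machine precision.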
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 2.4.1
Definition 2.4.1. Let \( A \) be a Banach algebra with involution \( x \rightarrow {x}^{ * } \) . Then \( A \) is called a \( {C}^{ * } \) -algebra, if its norm satisfies the equation \( \begin{Vmatrix}{{x}^{ * }x}\end{Vmatrix} = \parallel x{\parallel }^{2} \) for all \( x \in A \) . The definition of a \( {C}^{ * } \) -subalgebra is evident. Note that a \( {C}^{ * } \) -algebra is a Banach \( * \) -algebra since the equation \( \parallel x{\parallel }^{2} = \) \( \begin{Vmatrix}{{x}^{ * }x}\end{Vmatrix} \) implies \( \parallel x\parallel \leq \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \) and hence \( \parallel x\parallel = \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \) for all \( x \in A \) . Now let \( A \) be a commutative Banach algebra for which the Gelfand homomorphism is an isometric isomorphism onto \( {C}_{0}\left( {\Delta \left( A\right) }\right) \) . Notice first that in this case for every \( x \in A \) there is a unique element \( {x}^{ * } \in A \) such that \( \widehat{{x}^{ * }} = \overline{\widehat{x}} \) . Obviously, the mapping \( x \rightarrow {x}^{ * } \) is an involution. Moreover, \[ \begin{Vmatrix}{x}^{ * }\end{Vmatrix} = {\begin{Vmatrix}\widehat{{x}^{ * }}\end{Vmatrix}}_{\infty } = \parallel \widehat{x}{\parallel }_{\infty } = \parallel x\parallel \] and hence \[ \begin{Vmatrix}{{x}^{ * }x}\end{Vmatrix} = {\begin{Vmatrix}\widehat{{x}^{ * }x}\end{Vmatrix}}_{\infty } = \parallel \overline{\widehat{x}}\widehat{x}{\parallel }_{\infty } = \parallel \widehat{x}{\parallel }_{\infty }^{2} = \parallel x{\parallel }^{2}. \] Thus \( A \) is a \( {C}^{ * } \) -algebra. The main purpose of what follows is to show that conversely for each commutative \( {C}^{ * } \) -algebra \( A \) the Gelfand homomorphism is an isometric \( * \) -isomorphism onto \( {C}_{0}\left( {\Delta \left( A\right) }\right) \) . This is one of the most striking results in Gelfand's theory. Example 2.4.2. (1) Let \( X \) be an arbitrary topological space. 
With the involution given by \( {f}^{ * }\left( x\right) = \overline{f\left( x\right) } \) and the supremum norm \( \parallel \cdot {\parallel }_{\infty } \), \( {C}^{b}\left( X\right) \) is a commutative \( {C}^{ * } \) -algebra. If \( X \) is a locally compact Hausdorff space, then \( {C}_{0}\left( X\right) \) is a \( {C}^{ * } \) -subalgebra of \( {C}^{b}\left( X\right) \) . (2) Let \( H \) be a complex Hilbert space, and recall that for \( T \in \mathcal{B}\left( H\right) \), \( {T}^{ * } \) denotes the adjoint operator of \( T \) . Then \( \mathcal{B}\left( H\right) \) is a \( {C}^{ * } \) -algebra since \( \begin{Vmatrix}{{T}^{ * }T}\end{Vmatrix} = \parallel T{\parallel }^{2} \) holds for all \( T \in \mathcal{B}\left( H\right) \) . However, \( \mathcal{B}\left( H\right) \) is not commutative whenever \( \dim H \geq 2 \) . \( \mathcal{K}\left( H\right) \), the closed ideal consisting of all compact operators in \( H \), is a \( {C}^{ * } \) -subalgebra of \( \mathcal{B}\left( H\right) \) because \( {T}^{ * } \) is compact whenever \( T \) is. (3) Suppose \( T \in \mathcal{B}\left( H\right) \) is normal, that is, \( {T}^{ * }T = T{T}^{ * } \), and let \( A\left( T\right) \) denote the smallest closed subalgebra of \( \mathcal{B}\left( H\right) \) containing \( T,{T}^{ * } \) and the identity operator of \( H \) . Then \( A\left( T\right) \) is a commutative \( {C}^{ * } \) -algebra with identity. (4) The Gelfand-Naimark theorem [39] states that for every \( {C}^{ * } \) -algebra \( A \) there exists a Hilbert space \( H \) such that \( A \) is isometrically \( * \) -isomorphic to some \( {C}^{ * } \) -subalgebra of \( \mathcal{B}\left( H\right) \) . (5) Let \( G \) be a locally compact Abelian group. Then \( {L}^{1}\left( G\right) \) is a commutative Banach \( * \) -algebra. However, whenever \( G \neq \{ e\} \), the \( {L}^{1} \) -norm fails to be a \( {C}^{ * } \) -norm. 
In fact, it is not difficult to construct \( f \in {L}^{1}\left( G\right) \) such that \[ {\begin{Vmatrix}{f}^{ * } * f\end{Vmatrix}}_{1} \neq \parallel f{\parallel }_{1}^{2} \] (Exercise 2.12.25). (6) The assignment \( f \rightarrow {f}^{ * } \), where \( {f}^{ * }\left( z\right) = \overline{f\left( \bar{z}\right) } \), defines an involution on the disc algebra \( A\left( \mathbb{D}\right) \) (Example 1.1.7(2)). However, \( A\left( \mathbb{D}\right) \) fails to be a \( {C}^{ * } \) -algebra (Exercise 1.6.15). If \( A \) is a \( * \) -algebra, then so is \( {A}_{e} \) once we define \[ {\left( a + \lambda e\right) }^{ * } = {a}^{ * } + \bar{\lambda }e,\;a \in A,\;\lambda \in \mathbb{C}. \] Then \( {A}_{e} \) is a normed \( * \) -algebra with \( \parallel a + {\lambda e}\parallel = \parallel a\parallel + \left| \lambda \right| \), yet in general not a \( {C}^{ * } \) -algebra if \( A \) is. The following lemma, where we do not assume \( A \) to be commutative, shows that nevertheless a different norm can be introduced on \( {A}_{e} \) which extends the norm on \( A \) and turns \( {A}_{e} \) into a \( {C}^{ * } \) -algebra. Lemma 2.4.3. Let \( A \) be a \( {C}^{ * } \) -algebra without identity. There exists a norm \( \parallel \cdot {\parallel }_{0} \) on \( {A}_{e} \) such that \( \parallel a{\parallel }_{0} = \parallel a\parallel \) for all \( a \in A \) and \( \left( {{A}_{e},\parallel \cdot {\parallel }_{0}}\right) \) becomes a \( {C}^{ * } \) -algebra. Proof. Let \( \parallel \cdot \parallel \) denote the above norm on \( {A}_{e} \) ; that is, \[ \parallel a + {\lambda e}\parallel = \parallel a\parallel + \left| \lambda \right| ,\;a \in A,\;\lambda \in \mathbb{C}. \] For \( x \in {A}_{e} \), let \( {L}_{x} : A \rightarrow A \) be defined by \( {L}_{x}\left( a\right) = {xa}, a \in A \) . 
Then \[ \begin{Vmatrix}{{L}_{x}a}\end{Vmatrix} \leq \parallel x\parallel \cdot \parallel a\parallel \] so that \( {L}_{x} \) is bounded and \( \begin{Vmatrix}{L}_{x}\end{Vmatrix} \leq \parallel x\parallel \) . We claim that \( \parallel x{\parallel }_{0} = \begin{Vmatrix}{L}_{x}\end{Vmatrix} \) defines a \( {C}^{ * } \) -norm on \( {A}_{e} \) extending the given norm on \( A \) . Note first that, for \( a \in A \) , \[ \begin{Vmatrix}{{L}_{a}\left( {a}^{ * }\right) }\end{Vmatrix} = \begin{Vmatrix}{a{a}^{ * }}\end{Vmatrix} = \parallel a{\parallel }^{2} = \parallel a\parallel \cdot \begin{Vmatrix}{a}^{ * }\end{Vmatrix} \] and hence \( \begin{Vmatrix}{L}_{a}\end{Vmatrix} \geq \parallel a\parallel \) and therefore \( \begin{Vmatrix}{L}_{a}\end{Vmatrix} = \parallel a\parallel \) . Now, \( x \rightarrow \parallel x{\parallel }_{0} \) is a norm on \( {A}_{e} \) as soon as we have seen that \( {L}_{x} = 0 \) implies \( x = 0 \) . To this end let \[ x = b + {\lambda e},\;b \in A,\;\lambda \in \mathbb{C}, \] be such that \( {xa} = 0 \) for all \( a \in A \) . If \( \lambda \neq 0 \), then \( a = \left( {-\left( {1/\lambda }\right) b}\right) a \) for all \( a \in A \) , that is, \( u = - \left( {1/\lambda }\right) b \) is a left identity for \( A \) . Since \[ {u}^{ * } = u{u}^{ * } = {\left( u{u}^{ * }\right) }^{ * } = {\left( {u}^{ * }\right) }^{ * } = u, \] we obtain, for all \( a \in A \) , \[ {au} = a{u}^{ * } = {\left( u{a}^{ * }\right) }^{ * } = {\left( {a}^{ * }\right) }^{ * } = a, \] so that \( u \) is also a right identity for \( A \), contradicting the assumption that \( A \) has no identity. This contradiction yields \( \lambda = 0 \), hence \( x = b \in A \), and therefore \( x = 0 \) since \( \parallel b\parallel = \begin{Vmatrix}{L}_{b}\end{Vmatrix} = 0 \) . 
Moreover, \( \parallel \cdot {\parallel }_{0} \) is an algebra norm since \[ \parallel {xy}{\parallel }_{0} = \begin{Vmatrix}{L}_{xy}\end{Vmatrix} = \begin{Vmatrix}{{L}_{x} \circ {L}_{y}}\end{Vmatrix} \leq \begin{Vmatrix}{L}_{x}\end{Vmatrix}\begin{Vmatrix}{L}_{y}\end{Vmatrix} = \parallel x{\parallel }_{0}\parallel y{\parallel }_{0}, \] and \( {A}_{e} \) is complete because \( A \) is complete and \( {A}_{e}/A \) is one-dimensional. Finally, \( \parallel \cdot {\parallel }_{0} \) is a \( {C}^{ * } \) -norm on \( {A}_{e} \) . Indeed, from \[ {\begin{Vmatrix}{L}_{x}\left( a\right) \end{Vmatrix}}^{2} = \parallel {xa}{\parallel }^{2} = \begin{Vmatrix}{{\left( xa\right) }^{ * }\left( {xa}\right) }\end{Vmatrix} \] \[ = \begin{Vmatrix}{{a}^{ * }\left( {{x}^{ * }x}\right) a}\end{Vmatrix} \leq \begin{Vmatrix}{a}^{ * }\end{Vmatrix} \cdot \begin{Vmatrix}{{L}_{{x}^{ * }x}a}\end{Vmatrix} \] \[ \leq \parallel a{\parallel }^{2}\begin{Vmatrix}{L}_{{x}^{ * }x}\end{Vmatrix} \] it follows that \[ \parallel x{\parallel }_{0}^{2} = {\begin{Vmatrix}{L}_{x}\end{Vmatrix}}^{2} \leq \begin{Vmatrix}{L}_{{x}^{ * }x}\end{Vmatrix} = {\begin{Vmatrix}{x}^{ * }x\end{Vmatrix}}_{0} \leq \begin{Vmatrix}{L}_{{x}^{ * }}\end{Vmatrix}\begin{Vmatrix}{L}_{x}\end{Vmatrix} = {\begin{Vmatrix}{x}^{ * }\end{Vmatrix}}_{0}\parallel x{\parallel }_{0}, \] and this in turn gives \[ \parallel x{\parallel }_{0} \leq {\begin{Vmatrix}{x}^{ * }\end{Vmatrix}}_{0}\text{ and }{\begin{Vmatrix}{x}^{ * }\end{Vmatrix}}_{0} \leq {\begin{Vmatrix}{x}^{* * }\end{Vmatrix}}_{0} = \parallel x{\parallel }_{0}. \] Thus \( {\begin{Vmatrix}{x}^{ * }\end{Vmatrix}}_{0} = \parallel x{\parallel }_{0} \), and \( {\begin{Vmatrix}{x}^{ * }x\end{Vmatrix}}_{0} \leq {\begin{Vmatrix}{x}^{ * }\end{Vmatrix}}_{0}\parallel x{\parallel }_{0} = \parallel x{\parallel }_{0}^{2} \) . Lemma 2.4.4. Let \( A \) be a commutative \( {C}^{ * } \) -algebra. 
Then the Gelfand homomorphism is a \( * \) -homomorphism; that is, \( \widehat{{x}^{ * }} = \overline{\widehat{x}} \) for all \( x \in A \) . Proof. We have to show that \( \varphi \left( {x}^{ * }\right) = \overline{\varphi \left( x\right) } \) for \( \varphi \in \Delta \left( A\right) \) and \( x \in A \) . Of course, we can assume that \( A \) has an identity \( e \) . Let \[ \varphi \left( x\right) = \alpha + {i\beta }\text{ and }\varphi \left( {x}^{ * }\right) = \gamma + {i\delta }, \] \( \alpha ,\beta ,\gamma ,\delta \in \mathbb{R} \) . Towards a contradiction, assume that \( \beta + \delta \neq 0 \) and let \[ y = {\left( \beta + \delta \right) }^{-1}\left( {x + {x}^{ *
## 1118_(GTM272)Operator Theoretic Aspects of Ergodic Theory: Definition 16.2
Definition 16.2. A semigroup \( S \) endowed with a topology is called a left-topological semigroup if for each \( a \in S \) the left multiplication by \( a \), i.e., the mapping \[ S \rightarrow S,\;s \mapsto {as}, \] is continuous. Similarly, \( S \) is called a right-topological semigroup if for each \( a \in S \) the right multiplication by \( a \), i.e., the mapping \[ S \rightarrow S,\;s \mapsto {sa}, \] is continuous. If both left and right multiplications are continuous, \( S \) is called a semitopological semigroup. Further, \( S \) is a topological semigroup if the multiplication mapping \[ S \times S \rightarrow S,\;\left( {s, t}\right) \mapsto {st} \] is continuous. Note that a topological group (Example 2.9) is a topological semigroup that is algebraically a group and such that the inversion mapping \( s \mapsto {s}^{-1} \) is continuous. Clearly, an Abelian semigroup is left-topological if and only if it is semitopological. One says that in a semitopological semigroup the multiplication is separately continuous, whereas in a topological semigroup it is jointly continuous. Of course, there are examples of semitopological semigroups that are not topological, i.e., such that the multiplication is not jointly continuous (see Exercise 6). The example particularly interesting for us is \( \mathcal{L}\left( E\right) \), \( E \) a Banach space, endowed with the weak operator topology. By Proposition C.19 and Example C.19 this is a semitopological semigroup, which is not topological in general. Another example of a right-topological but not topological semigroup is \( \beta \mathbb{N} \) (as a topological space, familiar already from Chapter 4), whose semigroup structure will be studied in Chapter 19. Any semigroup can be made into a topological one by endowing it with the discrete topology. This indicates that such semigroups may be too general to study.
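A finite semigroup is compact in the discrete topology, so the structure theory developed in this chapter can be exercised by brute force on small examples. A hedged Python sketch for the assumed toy semigroup \( S = {\mathbb{Z}}_{3} \times \{ 0,1\} \) with multiplication \( \left( {g, a}\right) \left( {h, b}\right) = \left( {g + h\text{ mod }3,\min \left( {a, b}\right) }\right) \) (the example and all helper names are mine, not the book's): every element has an idempotent power, and the intersection of all principal ideals, i.e., the smallest ideal, turns out to be the group \( {\mathbb{Z}}_{3} \times \{ 0\} \).

```python
from itertools import product

# Toy commutative semigroup: S = Z_3 x {0,1} with
# (g, a)(h, b) = (g + h mod 3, min(a, b)).  (An assumed example, not from the text.)
S = list(product(range(3), [0, 1]))

def mul(x, y):
    return ((x[0] + y[0]) % 3, min(x[1], y[1]))

def idempotent_power(s):
    # iterate powers of s until they cycle; the cycle contains an idempotent
    seen, x = [], s
    while x not in seen:
        seen.append(x)
        x = mul(x, s)
    for y in seen:
        if mul(y, y) == y:
            return y

idems = {idempotent_power(s) for s in S}
assert all(mul(e, e) == e for e in idems)

def principal_ideal(s):
    # smallest two-sided ideal containing s: {s} u sS u Ss u SsS
    return ({s} | {mul(x, s) for x in S} | {mul(s, y) for y in S}
            | {mul(mul(x, s), y) for x in S for y in S})

# the smallest ideal is the intersection of all principal ideals
K = set(S)
for s in S:
    K &= principal_ideal(s)

assert K == {(g, 0) for g in range(3)}                  # K = Z_3 x {0}
e = next(x for x in K if all(mul(x, y) == y for y in K))
assert e == (0, 0)                                       # identity of K
assert all(any(mul(x, y) == e for y in K) for x in K)    # inverses exist: K is a group
```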
However, compact left-topological semigroups (i.e., ones whose topology is compact) exhibit some amazing structure, which we shall study now. Theorem 16.3 (Ellis). In a compact left-topological semigroup every right ideal contains a minimal right ideal. Every minimal right ideal is closed and contains at least one idempotent. In particular, every compact left-topological semigroup contains an idempotent. Proof. Note that \( {sS} \) is a closed right ideal for any \( s \in S \) . If \( R \) is a right ideal and \( x \in R \), then \( {xS} \subseteq R \), and hence any right ideal contains a closed one. If \( R \) is minimal, then we must have \( R = {xS} \), and \( R \) itself is closed. Now if \( {J}_{0} \) is a given right ideal, then let \( \mathcal{M} \) be the set of all closed right ideals of \( S \) contained in \( {J}_{0} \) . Then \( \mathcal{M} \) is nonempty and partially ordered by set inclusion. Moreover, every chain \( \mathcal{C} \) in \( \mathcal{M} \) has a lower bound \( \bigcap \mathcal{C} \), since this set is nonempty by compactness. By Zorn’s lemma, there is a minimal element \( R \) in \( \mathcal{M} \) . If \( J \subseteq R \) is also a right ideal and \( x \in J \), then \( {xS} \subseteq J \subseteq R \), and \( {xS} \) is a closed right ideal. By construction \( {xS} = R \), hence \( J = R \) . Finally, by an application of Zorn's lemma as before we find a nonempty closed subsemigroup \( H \) of \( R \) which is minimal within all closed subsemigroups of \( R \) . For every \( e \in H \) the set \( {eH} \) is a closed subsemigroup of \( R \), contained in \( H \) , whence \( {eH} = H \) . The set \( \{ t \in H : {et} = e\} \) is then nonempty and closed, and a subsemigroup of \( R \) . By minimality, it must coincide with \( H \), and hence contains \( e \) . This yields \( {e}^{2} = e \), concluding the proof. Lemma 16.4. 
In a compact left-topological semigroup the Sushkevich kernel satisfies \[ K\left( S\right) = \bigcup \{ R : R\text{ minimal right ideal }\} . \] (16.1) In particular, \( K\left( S\right) \neq \varnothing \). Moreover, an idempotent of \( S \) is minimal if and only if it is contained in \( K\left( S\right) \). Proof. To prove " \( \supseteq \) " let \( I \) be an ideal and \( J \) a minimal right ideal. Then \( {JI} \subseteq J \cap I \), so \( J \cap I \) is nonempty, hence a right ideal. By minimality \( J = J \cap I \), i.e., \( J \subseteq I \). For " \( \subseteq \) " it suffices to show that the right-hand side of (16.1) is an ideal. Let \( R \) be any minimal right ideal, \( x \in S \), and \( {R}^{\prime } \subseteq {xR} \) another right ideal. Then \( \varnothing \neq \{ y : {xy} \in {R}^{\prime }\} \cap R \) is a right ideal, hence by minimality equals \( R \). But this means that \( {xR} \subseteq {R}^{\prime } \), and it follows that \( {xR} \) is also minimal, whence contained in the right-hand side of (16.1). The remaining statements follow from (16.1) and Lemma 16.1. We can now state and prove the main result in this section. Theorem 16.5. Let \( S \) be a compact semitopological semigroup. Then the following assertions are equivalent: (i) Every minimal right ideal is a minimal left ideal. (ii) There is a unique minimal right ideal and a unique minimal left ideal. (iii) There is a unique minimal idempotent. (iv) The Sushkevich kernel \( K\left( S\right) \) is a group. (v) There is a minimal idempotent \( e \in S \) such that \( {es} = {se} \) for all \( s \in S \). The conditions (i)-(v) are satisfied in particular if \( S \) is Abelian. Proof. Clearly, (ii) implies (i) by Lemma 16.4. (i) \( \Rightarrow \) (iv): Let \( R \) be a minimal right ideal and let \( e \in R \) be an idempotent, which exists by Theorem 16.3. By hypothesis, \( R \) is also a left ideal, i.e., an ideal.
Hence, \( K\left( S\right) \subseteq R \subseteq K\left( S\right) \) (by Lemma 16.4), yielding \( R = K\left( S\right) \). Since \( R \) is minimal as a left and as a right ideal, \( R = {eS} = {Se} \). This implies that \( K\left( S\right) = R = {eSe} \) is a group by Lemma 16.1, hence (iv) is proved. (iv) \( \Rightarrow \) (iii): A minimal idempotent belongs to \( K\left( S\right) \) by Lemma 16.4, and so must coincide with the unique neutral element of \( K\left( S\right) \). (iii) \( \Rightarrow \) (ii): Since different minimal right ideals are disjoint and each one contains an idempotent, there can be only one of them. By symmetry, the same is true for left ideals, whence (ii) follows. (iv) \( \Rightarrow \) (v): Let \( e \in K\left( S\right) \) be the neutral element of the group \( K\left( S\right) \). Then \( e \) is a minimal idempotent by Lemma 16.4. If \( s \in S \), then \( {se},{es} \in K\left( S\right) \), hence \( {es} = {ese} = {se} \). (v) \( \Rightarrow \) (iii): Let \( {e}^{2} = e \in S \) be as in (v) and let \( f \in S \) be any minimal idempotent. Then \( p \mathrel{\text{:=}} {ef} = {fe} \) is also an idempotent satisfying \( {pe} = {ep} = p = {fp} = {pf} \). By minimality of \( e \) and \( f \) it follows (by Lemma 16.1) that \( e = p = f \). Suppose that \( S \) is a compact semitopological semigroup satisfying the equivalent conditions (i)-(v) of Theorem 16.5, so that \( K\left( S\right) \) is a group. Then \( K\left( S\right) \) is also compact, as it coincides with the unique minimal right (left) ideal, which is closed by Theorem 16.3. The following fundamental result of Ellis (see Ellis (1957) or Hindman and Strauss (1998, Sec. 2.5)) states that in this case the group \( K\left( S\right) \) is already topological. Theorem 16.6 (Ellis). Let \( G \) be a semitopological group whose topology is locally compact. Then \( G \) is a topological group. As mentioned, combining Theorem 16.5 with Ellis' result, we obtain the following. Corollary 16.7.
Let \( S \) be a compact semitopological semigroup that satisfies the equivalent conditions of Theorem 16.5 (e.g., \( S \) is Abelian). Then its minimal ideal \( K\left( S\right) \) is a compact (topological) group. One can ask for which noncommutative semigroups \( S \) the minimal ideal \( K\left( S\right) \) is a group. One class of examples is given by the so-called amenable semigroups; for details see Day (1957) or Paterson (1988). Another class, very important for our operator theoretic perspective, is given by the weakly closed semigroups of contractions on certain Banach spaces, see Section 16.2.

## Proof of Ellis' Theorem in the Metrizable Case

The proof of Theorem 16.6 is fairly involved. The major difficulty is deducing joint continuity from separate continuity. By making use of the group structure it is enough to find one single point in \( G \times G \) at which multiplication is continuous. This can be achieved by applying a more general result of Namioka (1974) that asserts that under appropriate assumptions a separately continuous function has many points of joint continuity, see also Kechris (1995, Sec. I.8M) or Todorcevic (1997, Sec. 4). Recall from Appendix A.9 the definition of a Baire space. Proposition 16.8. Let \( Z, Y \) be metric spaces, \( X \) a Baire space, and let \( f : X \times Y \rightarrow Z \) be a separately continuous function. For every \( b \in Y \), the set \[ \{ x \in X : f\text{ is not continuous at }\left( {x, b}\right) \} \] is of first category in \( X \). Proof. Let \( b \in Y \) be fixed. For \( n, k \in \mathbb{N} \) define \[ {X}_{b, n, k} \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{y \in \mathrm{B}\left( {b,\frac{1}{k}}\right) }}\left\{ {x \in X : d\left( {f\left( {x, y}\right), f\left( {x, b}\right) }\right) \leq \frac{1}{n}}\right\} . \] By the continuity of \( f \) in the first variable, we obtain that the sets \( {X}_{b, n, k} \) are closed.
From the continuity of \( f \) in the second variable, we infer that for all \( n \in \mathbb{N} \) the sets \( {X}_{b, n, k} \) cover \( X \), i.e., \[ X = \mathop{\bigcup }\limits_{{k \in \mathbb{N}}}{X}_{b, n, k} = \mathop{\bigcup }\limits_{{k \in \mathbb{N}}}{X}_{
## 1088_(GTM245)Complex Analysis: Definition 2.12
Definition 2.12. Let \( f \) be a function defined on a set \( S \) in \( \mathbb{C} \). We assume that \( f \) is complex-valued, unless otherwise stated. Thus \( f \) may be viewed as either a map from \( S \) into \( {\mathbb{R}}^{2} \) or into \( \mathbb{C} \), and also as two real-valued functions defined on the set \( S \). Let \( c \) be a limit point of \( S \) and let \( \alpha \) be a complex number. We say that the limit of \( f \) at \( c \) is \( \alpha \), and we write \[ \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \alpha \] if for each \( \epsilon > 0 \) there exists a \( \delta > 0 \) such that \[ \left| {f\left( z\right) - \alpha }\right| < \epsilon \text{ whenever }z \in S\text{ and }0 < \left| {z - c}\right| < \delta . \] Remark 2.13. The condition that \( c \) is a limit point of \( S \) ensures that there are points \( z \) in \( S \) arbitrarily close to (but different from) \( c \), so that \( f\left( z\right) \) is defined there. Note that it is not required that \( f\left( c\right) \) be defined. The above definition is again a translation of language from \( {\mathbb{R}}^{2} \) to \( \mathbb{C} \). Thus we will be able to adopt many results (the next three theorems, in particular) from real analysis. In addition to the usual algebraic operations on pairs of functions \( f : S \rightarrow \mathbb{C} \) and \( g : S \rightarrow \mathbb{C} \) familiar from real analysis, such as \( f + {cg} \) with \( c \in \mathbb{C} \), \( {fg} \), and \( \frac{f}{g} \) (provided \( g \) does not vanish on \( S \); that is, if \( g\left( z\right) \neq 0 \) for any \( z \in S \) or, equivalently, if no \( z \in S \) is a zero of \( g \)), we will consider other functions constructed from a single function \( f \) that are usually not emphasized in real analysis.
Among them are the following: \[ \left( {\Re f}\right) \left( z\right) = \Re f\left( z\right) ,\left( {\Im f}\right) \left( z\right) = \Im f\left( z\right) ,\bar{f}\left( z\right) = \overline{f\left( z\right) },\left| f\right| \left( z\right) = \left| {f\left( z\right) }\right| , \] also defined on \( S \). For instance, if \( f\left( z\right) = {z}^{2} = {x}^{2} - {y}^{2} + {2\iota xy} \) for \( z \in \mathbb{C} \), we have \( \left( {\Re f}\right) \left( z\right) = {x}^{2} - {y}^{2},\left( {\Im f}\right) \left( z\right) = {2xy},\bar{f}\left( z\right) = {\bar{z}}^{2} = {x}^{2} - {y}^{2} - {2\iota xy} \), and \( \left| f\right| \left( z\right) = {\left| z\right| }^{2} = {x}^{2} + {y}^{2} \) for \( z \in \mathbb{C} \). Theorem 2.14. Let \( S \) be a subset of \( \mathbb{C} \) and let \( f \) and \( g \) be functions defined on \( S \). If \( c \) is a limit point of \( S \), then: (a) \( \mathop{\lim }\limits_{{z \rightarrow c}}\left( {f + {ag}}\right) \left( z\right) = \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) + a\mathop{\lim }\limits_{{z \rightarrow c}}g\left( z\right) \) for all \( a \in \mathbb{C} \) (b) \( \mathop{\lim }\limits_{{z \rightarrow c}}\left( {fg}\right) \left( z\right) = \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) \mathop{\lim }\limits_{{z \rightarrow c}}g\left( z\right) \) (c) \( \mathop{\lim }\limits_{{z \rightarrow c}}\left| f\right| \left( z\right) = \left| {\mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) }\right| \) (d) \( \mathop{\lim }\limits_{{z \rightarrow c}}\bar{f}\left( z\right) = \overline{\mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) } \) Remark 2.15. The usual interpretation of the above formulae is used here and in the rest of the book: the \( {\mathrm{{LHS}}}^{6} \) exists whenever the RHS exists, and then we have the stated equality. Corollary 2.16. Let \( S \) be a subset of \( \mathbb{C} \), let \( f \) be a function defined on \( S \), and \( \alpha \in \mathbb{C} \).
Set \( u = \Re f \) and \( v = \Im f \) (so that \( f\left( z\right) = u\left( z\right) + {iv}\left( z\right) \) ). If \( c \) is a limit point of \( S \), then \[ \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \alpha \] if and only if \[ \mathop{\lim }\limits_{{z \rightarrow c}}u\left( z\right) = \Re \alpha \text{ and }\mathop{\lim }\limits_{{z \rightarrow c}}v\left( z\right) = \Im \alpha . \] Definition 2.17. Let \( S \) be a subset of \( \mathbb{C}, f : S \rightarrow \mathbb{C} \) be a function defined on \( S \) , and \( c \in S \) be a point in \( S \) . We say that: (a) \( f \) is continuous at \( c \) if \( \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = f\left( c\right) \) . (b) \( f \) is continuous on \( S \) if it is continuous at each \( c \) in \( S \) . (c) \( f \) is uniformly continuous on \( S \) if for all \( \epsilon > 0 \), there is a \( \delta > 0 \) such that \[ \left| {f\left( z\right) - f\left( w\right) }\right| < \epsilon \text{for all}z\text{and}w\text{in}S\text{with}\left| {z - w}\right| < \delta \text{.} \] Remark 2.18. A function \( f \) is (uniformly) continuous on \( S \) if and only if both \( \Re f \) and \( \Im f \) are. Uniform continuity implies continuity, but the converse is not true in general. Theorem 2.19. Let \( f \) and \( g \) be functions defined in appropriate sets, that is, sets where the composition \( g \circ f \) of these functions makes sense. Then the following properties hold: (a) If \( f \) is continuous at \( c \) and \( f\left( c\right) \neq 0 \), then \( \frac{1}{f} \) is defined in a neighborhood of \( c \) and is continuous at \( c \) . (b) If \( f \) is continuous at \( c \) and \( g \) is continuous at \( f\left( c\right) \), then \( g \circ f \) is continuous at \( c \) . Theorem 2.20. Let \( K \subset \mathbb{C} \) be a compact set and \( f : K \rightarrow \mathbb{C} \) be a continuous function on \( K \) . Then \( f \) is uniformly continuous on \( K \) . 
\( {}^{6} \) LHS (RHS) are standard abbreviations for left (right) hand side and will be used throughout this book. Proof. A continuous mapping from a compact metric space to a metric space is uniformly continuous. Definition 2.21. Given a sequence of functions \( \left\{ {f}_{n}\right\} \), all defined on the same set \( S \) in \( \mathbb{C} \), we say that \( \left\{ {f}_{n}\right\} \) converges uniformly to a function \( f \) on \( S \) if for all \( \epsilon > 0 \) there exists an \( N \in {\mathbb{Z}}_{ > 0} \) such that \[ \left| {f\left( z\right) - {f}_{n}\left( z\right) }\right| < \epsilon \text{ for all }z \in S\text{ and all }n > N. \] Remark 2.22. \( \left\{ {f}_{n}\right\} \) converges uniformly on \( S \) (to some function \( f \) ) if and only if for all \( \epsilon > 0 \) there exists an \( N \in {\mathbb{Z}}_{ > 0} \) such that \[ \left| {{f}_{n}\left( z\right) - {f}_{m}\left( z\right) }\right| < \epsilon \text{for all}z \in S\text{and all}n\text{and}m > N\text{.} \] Note that in this case the limit function \( f \) is uniquely determined; it is the pointwise limit \( f\left( z\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( z\right) \), for all \( z \in S \) . Theorem 2.23. Let \( \left\{ {f}_{n}\right\} \) be a sequence of functions defined on \( S \subseteq \mathbb{C} \) . If: (1) \( \left\{ {f}_{n}\right\} \) converges uniformly on \( S \) . (2) Each \( {f}_{n} \) is continuous on \( S \) . Then the function \( f \) defined by \[ f\left( z\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( z\right), z \in S \] is continuous on \( S \) . Proof. Start with two points \( z \) and \( c \) in \( S \) . Then for each natural number \( n \) we have \[ \left| {f\left( z\right) - f\left( c\right) }\right| \leq \left| {f\left( z\right) - {f}_{n}\left( z\right) }\right| + \left| {{f}_{n}\left( z\right) - {f}_{n}\left( c\right) }\right| + \left| {{f}_{n}\left( c\right) - f\left( c\right) }\right| . 
\] Now fix \( \epsilon > 0 \) . By (1), the first and third term on the right-hand side are less than \( \frac{\epsilon }{3} \) for \( n \) large. If we now fix \( c \) and \( n \), it follows from (2) that the second term is less than \( \frac{\epsilon }{3} \) as soon as \( z \) is close enough to \( c \) . Thus \( f \) is continuous at \( c \) . Definition 2.24. A domain or region in \( \mathbb{C} \) is a subset of \( \mathbb{C} \) which is open and connected. Remark 2.25. Note that a domain in \( \mathbb{C} \) could also be defined as an open arcwise connected subset of \( \mathbb{C} \) . (See also Exercise 2.20.) Also note that each point in a domain \( D \) is a limit point of \( D \), and therefore it makes sense to ask, at each point in \( D \), about the limit of any function defined on \( D \) . ## 2.3 Differentiability and Holomorphic Mappings Up to now, the complex numbers were used mainly to supply us with a convenient alternative notation. This is about to change. The definition of the derivative of a complex-valued function of a complex variable mimics that for the derivative of a real-valued function of a real variable. However, we shall see shortly that the properties of the two classes of functions are quite different. Definition 2.26. Let \( f \) be a function defined in some disc about \( c \in \mathbb{C} \) . We say that \( f \) is (complex) differentiable at \( c \) provided \[ \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( {c + h}\right) - f\left( c\right) }{h} \] (2.9) exists. In this case the limit is denoted by \[ {f}^{\prime }\left( c\right) ,\frac{\mathrm{d}f}{\mathrm{\;d}z}\left( c\right) ,{\left. \frac{\mathrm{d}f}{\mathrm{\;d}z}\right| }_{z = c},\text{ or }\left( {Df}\right) \left( c\right) , \] and is called the derivative of \( f \) at \( c \) . Remark 2.27. (1) It is important that \( h \) be an arbitrary complex number (of small nonzero modulus) in the above definition. 
(2) Note that \[ \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( {c + h}\right) - f\left( c\right) }{h} = \mathop{\lim }\limits_{{z \rightarrow c}}\frac{f\left( z\right) - f\left( c\right) }{z - c}. \] (3) If \( f \) i
## 1094_(GTM250)Modern Fourier Analysis: Definition 7.3.2
Definition 7.3.2. For a given \( {K}_{0} \) in \( {\mathcal{S}}^{\prime }\left( {\left( {\mathbf{R}}^{n}\right) }^{m}\right) \), let \( T \) be a multilinear operator from \( \mathcal{S}\left( {\mathbf{R}}^{n}\right) \times \cdots \times \mathcal{S}\left( {\mathbf{R}}^{n}\right) \) to \( {\mathcal{S}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) that satisfies, for all \( {\psi }_{1},\ldots ,{\psi }_{m} \) in \( \mathcal{S}\left( {\mathbf{R}}^{n}\right) \) , \[ T\left( {{\psi }_{1},\ldots ,{\psi }_{m}}\right) \left( x\right) = \left( {\left( {{\psi }_{1} \otimes \cdots \otimes {\psi }_{m}}\right) \star {K}_{0}}\right) \left( {x,\ldots, x}\right) , \] (7.3.9) where \( \star \) denotes convolution on \( {\left( {\mathbf{R}}^{n}\right) }^{m} \), and recall that \[ \left( {{\psi }_{1} \otimes \cdots \otimes {\psi }_{m}}\right) \left( {{y}_{1},\ldots ,{y}_{m}}\right) = {\psi }_{1}\left( {y}_{1}\right) \ldots {\psi }_{m}\left( {y}_{m}\right) \] for all \( {y}_{1},\ldots ,{y}_{m} \in {\mathbf{R}}^{n} \) . Then we say that \( T \) is an \( m \) -linear convolution operator. Notice that \( m \) -linear convolution operators commute with simultaneous translations. In this chapter we investigate when such operators admit bounded extensions from \( {L}^{{p}_{1}}\left( {\mathbf{R}}^{n}\right) \times \cdots \times {L}^{{p}_{m}}\left( {\mathbf{R}}^{n}\right) \) into \( {L}^{p}\left( {\mathbf{R}}^{n}\right) \) for some indices \( {p}_{1},\ldots ,{p}_{m}, p \) . For \( {\xi }_{k} \in {\mathbf{R}}^{n} \) we introduce the notation \( \overrightarrow{\xi } = \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \) for a vector in \( {\left( {\mathbf{R}}^{n}\right) }^{m} \) and we denote by \( d\overrightarrow{\xi } = d{\xi }_{1}\cdots d{\xi }_{m} \) the combined differential of all variables. 
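The translation-invariance just noted can be checked on a discrete model: replace \( {\mathbf{R}}^{n} \) by the cyclic group \( {\mathbb{Z}}_{N} \) and take \( m = 2 \). A hedged NumPy sketch (grid size, seed, and the function name `T` are assumptions for illustration) verifies that a bilinear convolution operator with an arbitrary kernel \( {K}_{0} \) commutes with simultaneous translations of its arguments:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
K0 = rng.standard_normal((N, N))   # an arbitrary kernel on (Z_N)^2; assumed test data
f, g = rng.standard_normal(N), rng.standard_normal(N)

def T(f, g):
    # discrete analogue of the bilinear convolution form:
    # T(f,g)(x) = sum_{y1,y2} K0(x - y1, x - y2) f(y1) g(y2)
    out = np.zeros(N)
    for x in range(N):
        for y1 in range(N):
            for y2 in range(N):
                out[x] += K0[(x - y1) % N, (x - y2) % N] * f[y1] * g[y2]
    return out

tau = 3                                  # translate both inputs by tau

def shift(h):
    return np.roll(h, tau)               # (shift h)(y) = h(y - tau) on Z_N

# simultaneous translation of the inputs translates the output
assert np.allclose(T(shift(f), shift(g)), shift(T(f, g)))
```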
A multilinear convolution or translation-invariant operator can be written in the form \[ T\left( {{f}_{1},\ldots ,{f}_{m}}\right) \left( x\right) = {\int }_{{\left( {\mathbf{R}}^{n}\right) }^{m}}{K}_{0}\left( {x - {y}_{1},\ldots, x - {y}_{m}}\right) {f}_{1}\left( {y}_{1}\right) \cdots {f}_{m}\left( {y}_{m}\right) d\overrightarrow{y}, \] (7.3.10) where \( {K}_{0} \) is a function on \( {\left( {\mathbf{R}}^{n}\right) }^{m} \) or a tempered distribution, in which case the integral in (7.3.10) is interpreted in the sense of distributions. Let \( \sigma \) be the distributional Fourier transform of \( {K}_{0} \) . If \( \sigma \) is a function, i.e., a locally integrable function such that there are \( R, N, C > 0 \) so that \[ \left| {\sigma \left( \overrightarrow{\xi }\right) }\right| \leq C{\left( 1 + \left| \overrightarrow{\xi }\right| \right) }^{N} \] (7.3.11) for all \( \left| \overrightarrow{\xi }\right| > R \), then the operator in (7.3.10) can also be expressed in the form \[ {T}_{\sigma }\left( {{f}_{1},\ldots ,{f}_{m}}\right) \left( x\right) = {\int }_{{\left( {\mathbf{R}}^{n}\right) }^{m}}\sigma \left( \overrightarrow{\xi }\right) {\widehat{f}}_{1}\left( {\xi }_{1}\right) \cdots {\widehat{f}}_{m}\left( {\xi }_{m}\right) {e}^{{2\pi ix} \cdot \left( {{\xi }_{1} + \cdots + {\xi }_{m}}\right) }d\overrightarrow{\xi }. \] (7.3.12) Definition 7.3.3. 
Fix \( 0 < {p}_{1},\ldots ,{p}_{m} \leq \infty \) and \( 0 < p < \infty \) satisfying \[ \frac{1}{{p}_{1}} + \cdots + \frac{1}{{p}_{m}} = \frac{1}{p} \] (7.3.13) A locally integrable function \( \sigma \) defined on \( {\left( {\mathbf{R}}^{n}\right) }^{m} \) that satisfies (7.3.11) is called a \( \left( {{p}_{1},\ldots ,{p}_{m}, p}\right) \) multilinear multiplier if the associated operator \( {T}_{\sigma } \) given by (7.3.12) extends to a bounded operator from \( {L}^{{p}_{1}}\left( {\mathbf{R}}^{n}\right) \times \cdots \times {L}^{{p}_{m}}\left( {\mathbf{R}}^{n}\right) \) to \( {L}^{p}\left( {\mathbf{R}}^{n}\right) \) . The function \( \sigma \) is also called the multilinear symbol of \( {T}_{\sigma } \) . We denote by \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}\left( {\mathbf{R}}^{n}\right) \), or simply by \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \), the space of all \( \left( {{p}_{1},\ldots ,{p}_{m}, p}\right) \) multilinear multipliers on \( {\mathbf{R}}^{n} \) , and we define the quasi-norm of \( \sigma \) in \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}\left( {\mathbf{R}}^{n}\right) \) as the quasi-norm of \( {T}_{\sigma } \) from \( {L}^{{p}_{1}} \times \cdots \times {L}^{{p}_{m}} \) into \( {L}^{p} \), i.e., \[ \parallel \sigma {\parallel }_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} = {\begin{Vmatrix}{T}_{\sigma }\end{Vmatrix}}_{{L}^{{p}_{1}} \times \cdots \times {L}^{{p}_{m}} \rightarrow {L}^{p}} \] Thus if a multilinear convolution operator is bounded from \( {L}^{{p}_{1}} \times \cdots \times {L}^{{p}_{m}} \) to \( {L}^{p} \) for some indices related as in Hölder’s inequality, then we call it a multilinear multiplier operator and we call its symbol a multilinear multiplier. We have the following list of basic properties of multilinear multipliers. Proposition 7.3.4. Let \( 0 < {p}_{1},\ldots ,{p}_{m} \leq \infty \) . 
Then the following statements are valid: (i) If \( \lambda \in \mathbf{C},\sigma ,{\sigma }_{1} \) and \( {\sigma }_{2} \) are in \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \), then so are \( {\lambda \sigma } \) and \( {\sigma }_{1} + {\sigma }_{2} \), and \[ \parallel {\lambda \sigma }{\parallel }_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} = \left| \lambda \right| \parallel \sigma {\parallel }_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} \] \[ {\begin{Vmatrix}{\sigma }_{1} + {\sigma }_{2}\end{Vmatrix}}_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} \leq {C}_{p}\left( {{\begin{Vmatrix}{\sigma }_{1}\end{Vmatrix}}_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} + {\begin{Vmatrix}{\sigma }_{2}\end{Vmatrix}}_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}}}\right) . \] (ii) If \( \sigma \left( \overrightarrow{\xi }\right) \in {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) and \( \overrightarrow{a} \in {\mathbf{R}}^{n} \), then \( \sigma \left( {\overrightarrow{\xi } + \overrightarrow{a}}\right) \) is in \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) and \[ \parallel \sigma {\parallel }_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} = \parallel \sigma \left( {\cdot + \overrightarrow{a}}\right) {\parallel }_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}}. \] (iii) If \( \sigma \left( \overrightarrow{\xi }\right) \in {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) and \( \delta > 0 \), then \( \sigma \left( {\delta \overrightarrow{\xi }}\right) \) is in \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) and \[ \parallel \sigma {\parallel }_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} = \parallel \sigma \left( {\delta \left( \cdot \right) }\right) {\parallel }_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}}.
\] (iv) If \( \sigma \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \in {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) and \( A \) is an orthogonal matrix in \( {\mathbf{R}}^{n} \), then \( \sigma \left( {A{\xi }_{1},\ldots, A{\xi }_{m}}\right) \) is in \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) with the same quasi-norm. (v) Let \( {\sigma }_{j} \) be a sequence of functions in \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) such that \( {\begin{Vmatrix}{\sigma }_{j}\end{Vmatrix}}_{{\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}}} \leq C \) for all \( j = 1,2,\ldots \). If the \( {\sigma }_{j} \) are uniformly bounded and converge pointwise to \( \sigma \) a.e. as \( j \rightarrow \infty \), then \( \sigma \) is in \( {\mathcal{M}}_{{p}_{1},\ldots ,{p}_{m}} \) with quasi-norm bounded by \( C \). Proof. Item (i) is trivial, while (ii)-(iv) are proved by a straightforward change of variables: translation, dilation, and rotation. Item (v) easily follows by applying Fatou’s lemma on \( {L}^{p} \) (where \( p \) is related to \( {p}_{1},\ldots ,{p}_{m} \) via (7.3.13)) since \[ {T}_{\sigma }\left( {{f}_{1},\ldots ,{f}_{m}}\right) \left( x\right) = \mathop{\lim }\limits_{{j \rightarrow \infty }}{T}_{{\sigma }_{j}}\left( {{f}_{1},\ldots ,{f}_{m}}\right) \left( x\right) \] for all \( x \in {\mathbf{R}}^{n} \) and all \( {f}_{k} \in \mathcal{S}\left( {\mathbf{R}}^{n}\right) \).

## 7.3.3 Regularizations of Multilinear Symbols and Consequences

Next we show that certain regularizations of the symbols \( \sigma \) of operators \( {T}_{\sigma } \) preserve boundedness. Recall that \( \star \) denotes convolution on \( {\left( {\mathbf{R}}^{n}\right) }^{m} \). Theorem 7.3.5. Let \( 0 < {p}_{1},\ldots ,{p}_{m} < \infty \) and \( 0 < p < \infty \), and let \( \sigma \) be a locally integrable function defined on \( {\left( {\mathbf{R}}^{n}\right) }^{m} \) that satisfies (7.3.11) for some \( N \geq 0 \).
If \( N = 0 \) , suppose that \( \varphi \) lies in \( {L}^{1}\left( {\mathbf{R}}^{n}\right) \), and if \( N > 0 \), suppose that \( \left| {\varphi \left( \xi \right) }\right| \leq {C}^{\prime }{\left( 1 + \left| \xi \right| \right) }^{-N - n - 1} \) for all \( \xi \in {\mathbf{R}}^{n} \), so that \( \left( {\varphi \otimes \cdots \otimes \varphi }\right) \star \sigma \) is well defined. Assume that the multilinear convolution operator \( {T}_{\sigma } \) associated with \( \sigma \) maps \( {L}^{{p}_{1}} \times \cdots \times {L}^{{p}_{m}} \rightarrow {L}^{p} \) . Then \( {T}_{\left( {\varphi \otimes \cdots \otimes \varphi }\right) \star \sigma } \) is also bounded and satisfies \[ {\begin{Vmatrix}{T}_{\left( {\varphi \otimes \cdots \otimes \varphi }\right) \star \sigma }\end{Vmatrix}}_{{L}^{{p}_{1}} \times \cdots \times {L}^{{p}_{m}} \rightarrow {L}^{p}} \leq {C}_{m,{p}_{1},\ldots ,{p}_{m}, p}\parallel \varphi {\parallel }_{{L}^{1}}^{m}{\begin{Vmatrix}{T}_{\sigma }\end{Vmatrix}}_{{L}^{{p}_{1}} \times \cdots \times {L}^{{p}_{m}} \rightarrow {L}^{p}} \] for some constant \( {C}_{m,{p}_{1},\ldots ,{p}_{m}, p} \) . Proof. Let us denote by \( {M}^{b}\left( f\right) \left( x\right) = {e}^{{2\pi ib} \cdot x}f\left( x\right) \) the modulation operator acting on a function \( f \) . For f
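A discrete sanity check of the multiplier form (7.3.12): on \( {\mathbb{Z}}_{N} \) with the DFT, the symbol \( \sigma \equiv 1 \) reproduces the pointwise product \( {T}_{\sigma }\left( {f, g}\right) = {fg} \), the basic bilinear multiplier (bounded \( {L}^{{p}_{1}} \times {L}^{{p}_{2}} \rightarrow {L}^{p} \) by Hölder’s inequality). A hedged NumPy sketch (grid size and seed are arbitrary; the double loop is the naive evaluation of the discrete analogue of (7.3.12)):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def T_sigma(sigma, f, g):
    # discrete analogue of (7.3.12) on Z_N:
    # T(f,g)(x) = (1/N^2) sum_{k,l} sigma(k,l) fhat(k) ghat(l) e^{2 pi i x (k+l)/N}
    fh, gh = np.fft.fft(f), np.fft.fft(g)
    x = np.arange(N)
    out = np.zeros(N, dtype=complex)
    for k in range(N):
        for l in range(N):
            out += sigma[k, l] * fh[k] * gh[l] * np.exp(2j * np.pi * x * (k + l) / N)
    return out / N**2

# sigma = 1 recovers the pointwise product f * g, by Fourier inversion in each factor
assert np.allclose(T_sigma(np.ones((N, N)), f, g), f * g)
```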
## 1089_(GTM246)A Course in Commutative Banach Algebras: Definition 2.3.4
Definition 2.3.4. A compact subset \( K \) of \( {\mathbb{C}}^{n}, n \in \mathbb{N} \), is said to be polynomially convex if for every \( z \in {\mathbb{C}}^{n} \smallsetminus K \) there exists a polynomial \( p \) such that \( p\left( z\right) = 1 \) and \( \left| {p\left( w\right) }\right| < 1 \) for all \( w \in K \). Lemma 2.3.5. Every compact convex subset \( K \) of \( {\mathbb{C}}^{n} \) is polynomially convex. Proof. We view \( {\mathbb{C}}^{n} \) as a \( {2n} \)-dimensional real vector space. Then, given \( w \in {\mathbb{C}}^{n} \smallsetminus K \), there exist a real linear functional \( \psi \) on \( {\mathbb{C}}^{n} = {\mathbb{R}}^{2n} \) and \( \alpha \in \mathbb{R} \) such that \[ \psi \left( w\right) > \alpha \text{ and }\psi \left( z\right) < \alpha \text{ for all }z \in K. \] Let \( z = \left( {{z}_{1},\ldots ,{z}_{n}}\right) \in {\mathbb{C}}^{n} \), with \( {z}_{j} = {x}_{j} + i{y}_{j},{x}_{j},{y}_{j} \in \mathbb{R} \). Then \( \psi \) has the form \[ \psi \left( z\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\left( {{a}_{j}{x}_{j} + {b}_{j}{y}_{j}}\right) \] where \( {a}_{j},{b}_{j} \in \mathbb{R},1 \leq j \leq n \). Let \( {c}_{j} = {a}_{j} - i{b}_{j},1 \leq j \leq n \), and consider the function \[ f\left( z\right) = \exp \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{c}_{j}{z}_{j}}\right) \] on \( {\mathbb{C}}^{n} \). Then \[ \left| {f\left( z\right) }\right| = \exp \left( {\operatorname{Re}\left( {\mathop{\sum }\limits_{{j = 1}}^{n}{c}_{j}{z}_{j}}\right) }\right) = \exp \left( {\mathop{\sum }\limits_{{j = 1}}^{n}\left( {{a}_{j}{x}_{j} + {b}_{j}{y}_{j}}\right) }\right) = \exp \psi \left( z\right) , \] and hence \( \left| {f\left( w\right) }\right| > {e}^{\alpha } \) and \( \left| {f\left( z\right) }\right| < {e}^{\alpha } \) for all \( z \in K \).
It follows that, for a suitable \( N \in \mathbb{N} \), the polynomial \( q \) defined by \[ q\left( z\right) = \mathop{\prod }\limits_{{j = 1}}^{n}\left( {\mathop{\sum }\limits_{{k = 0}}^{N}\frac{1}{k!}{c}_{j}^{k}{z}_{j}^{k}}\right) \] satisfies \( \left| {q\left( w\right) }\right| > {e}^{\alpha } \) and \( \left| {q\left( z\right) }\right| < {e}^{\alpha } \) for all \( z \in K \) . Finally, since \( q\left( w\right) \neq 0 \), the polynomial \( p = q{\left( w\right) }^{-1}q \) has the properties required in Definition 2.3.4. Theorem 2.3.6. For a compact subset \( K \) of \( {\mathbb{C}}^{n} \) the following conditions are equivalent. (i) There exists a unital commutative Banach algebra \( A \) which is generated by \( n \) elements \( {x}_{1},\ldots ,{x}_{n} \) such that \( K = {\sigma }_{A}\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) . (ii) \( K \) is polynomially convex. Proof. To prove (i) \( \Rightarrow \) (ii), let \( e \) denote the identity of \( A \) and let \[ \lambda = \left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \in {\mathbb{C}}^{n} \smallsetminus {\sigma }_{A}\left( {{x}_{1},\ldots ,{x}_{n}}\right) . \] Then, given any \( \varphi \in \Delta \left( A\right) ,\varphi \left( {x}_{j}\right) \neq {\lambda }_{j} \) for some \( 1 \leq j \leq n \) . Equivalently, for each \( M \in \operatorname{Max}\left( A\right) \) there exists \( j \) such that \( {x}_{j} - {\lambda }_{j}e \notin M \) . Consider the ideal \[ I = \left\{ {\mathop{\sum }\limits_{{j = 1}}^{n}\left( {{x}_{j} - {\lambda }_{j}e}\right) {y}_{j} : {y}_{j} \in A}\right\} \] of \( A \) . If \( I \) were a proper ideal, then \( I \subseteq M \) for some \( M \in \operatorname{Max}\left( A\right) \), but \( {x}_{j} - {\lambda }_{j}e \in I \) and \( {x}_{j} - {\lambda }_{j}e \notin M \) for some \( j \), a contradiction. 
Thus \( I = A \), and hence there exist \( {y}_{1},\ldots ,{y}_{n} \in A \) such that \[ \mathop{\sum }\limits_{{j = 1}}^{n}\left( {{x}_{j} - {\lambda }_{j}e}\right) {y}_{j} = e \] Choose \( \delta > 0 \) such that \( \delta \mathop{\sum }\limits_{{j = 1}}^{n}\begin{Vmatrix}{{x}_{j} - {\lambda }_{j}e}\end{Vmatrix} < 1 \) . Since \( A \) is generated by \( {x}_{1},\ldots ,{x}_{n} \), there exist polynomials \( {p}_{1},\ldots ,{p}_{n} \) in \( n \) variables such that \[ \begin{Vmatrix}{{p}_{j}\left( {{x}_{1},\ldots ,{x}_{n}}\right) - {y}_{j}}\end{Vmatrix} \leq \delta \] for \( 1 \leq j \leq n \) . It follows that \[ \begin{Vmatrix}{e - \mathop{\sum }\limits_{{j = 1}}^{n}\left( {{x}_{j} - {\lambda }_{j}e}\right) {p}_{j}\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\end{Vmatrix} \leq \mathop{\sum }\limits_{{j = 1}}^{n}\begin{Vmatrix}{{x}_{j} - {\lambda }_{j}e}\end{Vmatrix} \cdot \begin{Vmatrix}{{y}_{j} - {p}_{j}\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\end{Vmatrix} < 1. \] Now, define a polynomial \( p \) on \( {\mathbb{C}}^{n} \) by \[ p\left( {{z}_{1},\ldots ,{z}_{n}}\right) = 1 - \mathop{\sum }\limits_{{j = 1}}^{n}\left( {{z}_{j} - {\lambda }_{j}}\right) {p}_{j}\left( {{z}_{1},\ldots ,{z}_{n}}\right) . 
\] Then \( p\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) = 1 \), and for every \( \varphi \in \Delta \left( A\right) \) \[ \left| {p\left( {\varphi \left( {x}_{1}\right) ,\ldots ,\varphi \left( {x}_{n}\right) }\right) }\right| = \left| {1 - \mathop{\sum }\limits_{{j = 1}}^{n}\left( {\varphi \left( {x}_{j}\right) - {\lambda }_{j}}\right) {p}_{j}\left( {\varphi \left( {x}_{1}\right) ,\ldots ,\varphi \left( {x}_{n}\right) }\right) }\right| \] \[ = \left| {\varphi \left( e\right) - \mathop{\sum }\limits_{{j = 1}}^{n}\varphi \left( {{x}_{j} - {\lambda }_{j}e}\right) \varphi \left( {{p}_{j}\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) }\right| \] \[ \leq \begin{Vmatrix}{e - \mathop{\sum }\limits_{{j = 1}}^{n}\left( {{x}_{j} - {\lambda }_{j}e}\right) {p}_{j}\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\end{Vmatrix} \] \[ < 1\text{.} \] This proves that \( {\sigma }_{A}\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is polynomially convex. Conversely, suppose that \( K \subseteq {\mathbb{C}}^{n} \) is polynomially convex. Let \( A = P\left( K\right) \) , the algebra of all functions \( f : K \rightarrow \mathbb{C} \) that are uniform limits of polynomial functions on \( K \) . Then \( A \) is generated by the functions \[ {f}_{j}\left( z\right) = {z}_{j},\;z = \left( {{z}_{1},\ldots ,{z}_{n}}\right) \in K,\;1 \leq j \leq n. \] We are going to show that \( K = {\sigma }_{A}\left( {{f}_{1},\ldots ,{f}_{n}}\right) \) . For \( z \in K \), define \( {\varphi }_{z} \in \Delta \left( A\right) \) by \( {\varphi }_{z}\left( f\right) = f\left( z\right) \) . As distinct points can be separated by the functions \( {f}_{j} \), the mapping \[ \phi : K \rightarrow \Delta \left( A\right) ,\;z \rightarrow {\varphi }_{z} \] is injective. 
\( \phi \) is also continuous since \( \Delta \left( A\right) \) carries the weak topology with respect to the functions \( \varphi \rightarrow \varphi \left( f\right), f \in A \), and \( z \rightarrow {\varphi }_{z}\left( f\right) = f\left( z\right) \) is continuous on \( K \) . Thus \( \phi \) is a homeomorphism from \( K \) onto \( \phi \left( K\right) \subseteq \Delta \left( A\right) \) . We claim that \( \phi \left( K\right) = \Delta \left( A\right) \) . Towards a contradiction, suppose there exists \( \varphi \in \Delta \left( A\right) \smallsetminus \phi \left( K\right) \) and put \[ {\lambda }_{j} = \varphi \left( {f}_{j}\right) ,\;1 \leq j \leq n,\text{ and }\lambda = \left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) . \] Then \( \lambda \notin K \) since otherwise \( {\varphi }_{\lambda }\left( {f}_{j}\right) = {f}_{j}\left( \lambda \right) = {\lambda }_{j} = \varphi \left( {f}_{j}\right) ,1 \leq j \leq n \) , and hence \( \varphi = {\varphi }_{\lambda } \) as \( A \) is generated by \( {f}_{1},\ldots ,{f}_{n} \) . Because \( K \) is polynomially convex, we can choose a polynomial \( p \) in \( n \) variables such that \( \left| {p\left( {{z}_{1},\ldots ,{z}_{n}}\right) }\right| < \) 1 for all \( z = \left( {{z}_{1},\ldots ,{z}_{n}}\right) \in K \) and \( p\left( \lambda \right) = 1 \) . Then, as \( K \) is compact, \[ {\begin{Vmatrix}{\left. p\right| }_{K}\end{Vmatrix}}_{\infty } = \mathop{\sup }\limits_{{z \in K}}\left| {p\left( z\right) }\right| < 1 \] and hence \( \left| {\psi \left( {\left. p\right| }_{K}\right) }\right| < 1 \) for all \( \psi \in \Delta \left( A\right) \) . Now, \( {\left. p\right| }_{K} \) is a finite linear combination of functions of the form \[ z \rightarrow {z}_{1}^{{m}_{1}}{z}_{2}^{{m}_{2}} \cdot \ldots \cdot {z}_{n}^{{m}_{n}} = {f}_{1}{\left( z\right) }^{{m}_{1}}{f}_{2}{\left( z\right) }^{{m}_{2}} \cdot \ldots \cdot {f}_{n}{\left( z\right) }^{{m}_{n}}. 
\] As \( \varphi \left( {f}_{j}\right) = {\lambda }_{j},1 \leq j \leq n \), we obtain \( \varphi \left( {\left. p\right| }_{K}\right) = p\left( \lambda \right) = 1 \), which is a contradiction. It follows that \( \phi \left( K\right) = \Delta \left( A\right) \), and hence \[ {\sigma }_{A}\left( {{f}_{1},\ldots ,{f}_{n}}\right) = \left\{ {\left( {{\varphi }_{z}\left( {f}_{1}\right) ,\ldots ,{\varphi }_{z}\left( {f}_{n}\right) }\right) : z \in K}\right\} \] \[ = \left\{ {\left( {{z}_{1},\ldots ,{z}_{n}}\right) : z \in K}\right\} = K\text{.} \] This shows (ii) \( \Rightarrow \) (i). It is worth emphasising that the proof of (ii) \( \Rightarrow \) (i) in Theorem 2.3.6 shows that \( \Delta \left( {P\left( K\right) }\right) = K \) when \( K \) is polynomially convex. The following theorem provides a topological description of polynomially convex subsets of \( \mathbb{C} \) . Theorem 2.3.7. A compact subset \( K \) of \( \mathbb{C} \) is polynomially convex if and only if \( \mathbb{C} \smallsetminus K \) is connected. Proof. We first assume that \( K \) is polynomially convex and that nevertheless \( \mathbb{C} \smallsetminus K \) is not connected. Then \( \mathbb{C} \smallsetminus K \) has a bounded connected component \( S \neq \varnothing \) . Then \( S \) is closed in \( \mathbb{C} \smallsetminus K \) and also open in \( \mathbb{C} \smallsetminus K \), since \( \mathbb{C} \smallsetminus K \) is locally connected. Hence \( S \) is also open in \( \
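The proof of Lemma 2.3.5 can be illustrated numerically. The sketch below (a hypothetical check, not part of the text) takes \( n = 1 \), \( K \) the closed unit disk, and \( w = 2 \); the separating functional \( \psi \left( z\right) = \operatorname{Re}z \) gives \( c = 1 \), and \( \alpha = {1.5} \) sits between \( \mathop{\sup }\limits_{K}\operatorname{Re}z = 1 \) and \( \operatorname{Re}w = 2 \). The truncated exponential \( q \) and the polynomial \( p = q{\left( w\right) }^{-1}q \) (here \( q\left( w\right) \) is real and positive) then separate \( w \) from \( K \):

```python
import numpy as np
from math import factorial

# K = closed unit disk in C; w = 2 lies outside K (illustrative choices).
w, alpha, N = 2.0, 1.5, 25

def q(z):
    """Truncated exponential: the separating polynomial of Lemma 2.3.5."""
    return sum(z**k / factorial(k) for k in range(N + 1))

def p(z):
    """Normalized by q(w) (real and positive here), so that p(w) = 1."""
    return q(z) / q(w)

# By the maximum modulus principle it suffices to sample the boundary of K.
zs = np.exp(2j * np.pi * np.linspace(0, 1, 400, endpoint=False))

assert abs(q(w)) > np.exp(alpha)                    # |q(w)| > e^alpha
assert max(abs(q(z)) for z in zs) < np.exp(alpha)   # |q(z)| < e^alpha on K
assert abs(p(w) - 1) < 1e-12                        # p(w) = 1
assert max(abs(p(z)) for z in zs) < 1               # |p| < 1 on K
```

Here \( N = {25} \) already suffices because the truncated series is within \( {10}^{-19} \) of \( {e}^{z} \) on the region sampled.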
1118_(GTM272)Operator Theoretic Aspects of Ergodic Theory
Definition 11.8
Definition 11.8. We say that the sequence of operators \( {\left( {T}_{n}\right) }_{n \in \mathbb{N}} \) satisfies an (abstract) maximal inequality if there is a function \( c : \left( {0,\infty }\right) \rightarrow \lbrack 0,\infty ) \) with \( \mathop{\lim }\limits_{{\lambda \rightarrow \infty }}c\left( \lambda \right) = 0 \) such that \[ \mu \left\lbrack {{T}^{ * }f > \lambda }\right\rbrack \leq c\left( \lambda \right) \;\left( {\lambda > 0, f \in E,\parallel f\parallel \leq 1}\right) . \] --- \( {}^{2} \) This notation should not be confused with the Hilbert space adjoint of an operator. The two meanings of \( * \) will not occur in the same context. --- The following result shows that an abstract maximal inequality is exactly what we need. Proposition 11.9 (Banach’s Principle). Let \( \mathrm{X} = \left( {X,\sum ,\mu }\right) \) be a measure space, \( 1 \leq p < \infty \), and \( {\left( {T}_{n}\right) }_{n \in \mathbb{N}} \) a sequence of bounded linear operators on \( E = {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) . If the associated maximal operator \( {T}^{ * } \) satisfies a maximal inequality, then the set \[ F \mathrel{\text{:=}} \left\{ {f \in E : {\left( {T}_{n}f\right) }_{n \in \mathbb{N}}\text{ is a.e.-convergent }}\right\} \] is a closed subspace of \( E \) . Proof. Since the operators \( {T}_{n} \) are linear, \( F \) is a subspace of \( E \) . To see that it is closed, let \( f \in E \) and \( g \in F \) . For any natural numbers \( k, l \) we have \[ \left| {{T}_{k}f - {T}_{l}f}\right| \leq \left| {{T}_{k}\left( {f - g}\right) }\right| + \left| {{T}_{k}g - {T}_{l}g}\right| + \left| {{T}_{l}\left( {g - f}\right) }\right| \leq 2{T}^{ * }\left( {f - g}\right) + \left| {{T}_{k}g - {T}_{l}g}\right| . 
\] Taking the limsup in \( {\mathrm{L}}^{0} \) with \( k, l \rightarrow \infty \) (see (11.1)) one obtains \[ h \mathrel{\text{:=}} \mathop{\limsup }\limits_{{k, l \rightarrow \infty }}\left| {{T}_{k}f - {T}_{l}f}\right| \leq 2{T}^{ * }\left( {f - g}\right) + \mathop{\limsup }\limits_{{k, l \rightarrow \infty }}\left| {{T}_{k}g - {T}_{l}g}\right| = 2{T}^{ * }\left( {f - g}\right) \] since \( g \in F \) . For \( \lambda > 0 \) we thus have \( \left\lbrack {h > {2\lambda }}\right\rbrack \subseteq \left\lbrack {{T}^{ * }\left( {f - g}\right) > \lambda }\right\rbrack \) and hence \[ \mu \left\lbrack {h > {2\lambda }}\right\rbrack \leq \mu \left\lbrack {{T}^{ * }\left( {f - g}\right) > \lambda }\right\rbrack \leq c\left( \frac{\lambda }{\parallel f - g\parallel }\right) . \] If \( f \in \bar{F} \) we can make \( \parallel f - g\parallel \) arbitrarily small, and since \( \mathop{\lim }\limits_{{t \rightarrow \infty }}c\left( t\right) = 0 \), we obtain \( \mu \left\lbrack {h > {2\lambda }}\right\rbrack = 0 \) . Since \( \lambda > 0 \) was arbitrary, it follows that \( h = 0 \) . This shows that \( f \in F \), hence \( F \) is closed. ## Maximal Inequalities for Dunford-Schwartz Operators In the following, \( \mathrm{X} = \left( {X,\sum ,\mu }\right) \) denotes a general measure space and \( T \) denotes a positive Dunford-Schwartz operator on \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) . By Theorem 8.23, \( T \) is contractive for the \( p \) -norm on \( {\mathrm{L}}^{1} \cap {\mathrm{L}}^{p} \) for each \( 1 \leq p \leq \infty \), and by a standard approximation \( T \) extends in a consistent way to a positive contraction on each space \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) for \( 1 \leq p < \infty \) . Since the measure is not necessarily finite, it is unclear, however, whether \( T \) extends to \( {\mathrm{L}}^{\infty } \) . What we will use are the following simple observations. Lemma 11.10. 
Let \( 1 \leq p < \infty \) and \( 0 \leq f \in {\mathrm{L}}^{p}\left( {\mathrm{X};\mathbb{R}}\right) \) and \( \lambda > 0 \) . Then the following assertions hold: a) \( \mu \left\lbrack {f > \lambda }\right\rbrack \leq {\lambda }^{-p}\parallel f{\parallel }_{p}^{p} < \infty \) . b) \( {\left( f - \lambda \right) }^{ + } \in {\mathrm{L}}^{1} \cap {\mathrm{L}}^{p} \) . c) \( {Tf} - \lambda \leq T{\left( f - \lambda \right) }^{ + } \) . Proof. a) Let \( A \mathrel{\text{:=}} \left\lbrack {f > \lambda }\right\rbrack \) . Then \( {\lambda }^{p}{\mathbf{1}}_{A} \leq {f}^{p}{\mathbf{1}}_{A} \), and integrating proves the claim. For b) use the same set \( A \) to write \[ {\left( f - \lambda \right) }^{ + } = \left( {f - \lambda }\right) {\mathbf{1}}_{A} = f{\mathbf{1}}_{A} - \lambda {\mathbf{1}}_{A}. \] Since \( \mu \left( A\right) < \infty \), the claim follows. Finally, note that \( \left| {f - {\left( f - \lambda \right) }^{ + }}\right| \leq \lambda \), which is easily checked by distinguishing what happens on the sets \( A \) and \( {A}^{\mathrm{c}} \) . Since \( T \) is a Dunford-Schwartz operator, we obtain \[ {Tf} - T{\left( f - \lambda \right) }^{ + } \leq \left| {T\left( {f - {\left( f - \lambda \right) }^{ + }}\right) }\right| \leq \lambda , \] and c) is proved. We now turn to the so-called maximal ergodic theorem. For \( 0 \leq f \in {\mathrm{L}}^{p} \) (with \( 1 \leq p < \infty ) \) and \( \lambda > 0 \) we write \[ {\mathrm{A}}_{n}^{ * }f = \mathop{\max }\limits_{{1 \leq k \leq n}}{\mathrm{\;A}}_{k}f,\;{S}_{k}f \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 0}}^{{k - 1}}{T}^{j}f,\;{M}_{n}^{\lambda }f \mathrel{\text{:=}} \mathop{\max }\limits_{{1 \leq k \leq n}}\left( {{S}_{k}f - {k\lambda }}\right) . 
\] Then the set \[ \left\lbrack {{\mathrm{A}}_{n}^{ * }f > \lambda }\right\rbrack = \left\lbrack {{M}_{n}^{\lambda }f > 0}\right\rbrack \subseteq \mathop{\bigcup }\limits_{{k = 1}}^{n}\left\lbrack {{S}_{k}f > {k\lambda }}\right\rbrack \] has finite measure and \( {\left( {M}_{n}^{\lambda }f\right) }^{ + } \in {\mathrm{L}}^{1} \cap {\mathrm{L}}^{p} \), by Lemma 11.10.a and b) above. Theorem 11.11 (Maximal Ergodic Theorem). Let \( T \) be a positive Dunford-Schwartz operator on \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) ,\mathrm{X} \) some measure space, let \( p \in \lbrack 1,\infty ) \), and let \( 0 \leq f \in {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) . Then for each \( \lambda > 0 \) and \( n \in \mathbb{N} \) \[ \mu \left\lbrack {{\mathrm{A}}_{n}^{ * }f > \lambda }\right\rbrack \leq \frac{1}{\lambda }{\int }_{\left\lbrack {\mathrm{A}}_{n}^{ * }f > \lambda \right\rbrack }f\mathrm{\;d}\mu . \] Proof. Take \( k \in \{ 2,\ldots, n\} \) . Then, by Lemma 11.10.c, \[ {S}_{k}f - {k\lambda } = f - \lambda + T{S}_{k - 1}f - \left( {k - 1}\right) \lambda \leq f - \lambda + T{\left( {S}_{k - 1}f - \left( k - 1\right) \lambda \right) }^{ + } \] \[ \leq f - \lambda + T{\left( {M}_{n}^{\lambda }f\right) }^{ + } \] By taking the maximum with respect to \( k \) we obtain \( {M}_{n}^{\lambda }f \leq f - \lambda + T{\left( {M}_{n}^{\lambda }f\right) }^{ + } \) (for \( k = 1 \) we have \( {S}_{k}f - {k\lambda } = f - \lambda \) ). 
Now we integrate and estimate \[ {\int }_{X}{\left( {M}_{n}^{\lambda }f\right) }^{ + }\mathrm{d}\mu = {\int }_{\left\lbrack {M}_{n}^{\lambda }f > 0\right\rbrack }{M}_{n}^{\lambda }f\mathrm{\;d}\mu \leq {\int }_{\left\lbrack {M}_{n}^{\lambda }f > 0\right\rbrack }f - \lambda \mathrm{d}\mu + {\int }_{X}T{\left( {M}_{n}^{\lambda }f\right) }^{ + }\mathrm{d}\mu \] \[ \leq {\int }_{\left\lbrack {M}_{n}^{\lambda }f > 0\right\rbrack }f - \lambda \mathrm{d}\mu + {\int }_{X}{\left( {M}_{n}^{\lambda }f\right) }^{ + }\mathrm{d}\mu \] by the \( {\mathrm{L}}^{1} \) -contractivity of \( T \) . It follows that \[ {\lambda \mu }\left\lbrack {{\mathrm{A}}_{n}^{ * }f > \lambda }\right\rbrack = {\lambda \mu }\left\lbrack {{M}_{n}^{\lambda }f > 0}\right\rbrack \leq {\int }_{\left\lbrack {M}_{n}^{\lambda }f > 0\right\rbrack }f\mathrm{\;d}\mu = {\int }_{\left\lbrack {\mathrm{A}}_{n}^{ * }f > \lambda \right\rbrack }f\mathrm{\;d}\mu , \] which concludes the proof. Corollary 11.12 (Maximal Inequality). Let \( T \) be a positive Dunford-Schwartz operator on \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) ,\mathrm{X} \) some measure space, let \( p \in \lbrack 1,\infty ) \), and let \( 0 \leq f \in {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) . Then \[ \mu \left\lbrack {{\mathrm{A}}^{ * }f > \lambda }\right\rbrack \leq {\lambda }^{-p}\parallel f{\parallel }_{p}^{p}\;\left( {\lambda > 0}\right) . \] Proof. By the Maximal Ergodic Theorem 11.11 and by Hölder's inequality, \[ \mu \left\lbrack {{\mathrm{A}}_{n}^{ * }\left| f\right| > \lambda }\right\rbrack \leq \frac{1}{\lambda }{\int }_{\left\lbrack {\mathrm{A}}_{n}^{ * }\left| f\right| > \lambda \right\rbrack }\left| f\right| \mathrm{d}\mu \leq \frac{1}{\lambda }\parallel f{\parallel }_{p}\mu {\left\lbrack {\mathrm{A}}_{n}^{ * }\left| f\right| > \lambda \right\rbrack }^{1/q}, \] where \( q \) is the conjugate exponent to \( p \) . 
This leads to \[ \mu {\left\lbrack {\mathrm{A}}_{n}^{ * }\left| f\right| > \lambda \right\rbrack }^{1/p} \leq \frac{\parallel f{\parallel }_{p}}{\lambda }. \] Now, since \( T \) is positive, each \( {\mathrm{A}}_{n} \) is positive and hence \( \mathop{\max }\limits_{{1 \leq k \leq n}}\left| {{\mathrm{\;A}}_{k}f}\right| \leq {\mathrm{A}}_{n}^{ * }\left| f\right| \) . It follows that \[ \mu \left\lbrack {\mathop{\max }\limits_{{1 \leq k \leq n}}\left| {{\mathrm{\;A}}_{k}f}\right| > \lambda }\right\rbrack \leq \mu \left\lbrack {{\mathrm{A}}_{n}^{ * }\left| f\right| > \lambda }\right\rbrack \leq {\lambda }^{-p}\parallel f{\parallel }_{p}^{p}. \] We let \( n \rightarrow \infty \) and obtain the claim. We remark that there is a better estimate in the case \( 1 < p < \infty \), see Exercise 6 . ## Proof of the Pointwise Ergodic Theorem Let, as before, \( T \) be a positive Dunford-Schwartz operator on some \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) . If the measure space is finite, then, by Lemma \( {11.7},{\mathrm{\;A}}_{n}f \) converges almost everywhere for all \( f \) from the dense subspace \( F = \operatorname{fix}\left( T\right) \oplus \left( {\mathrm{I} - T}\right) {\mathrm{L}}
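The Maximal Ergodic Theorem 11.11 and the case \( p = 1 \) of Corollary 11.12 can be tested on a toy Dunford–Schwartz operator: the Koopman operator \( f \mapsto f \circ \tau \) of a permutation \( \tau \) of a finite uniform probability space. The Python sketch below is illustrative only; it assumes, as earlier in the chapter, that \( {\mathrm{A}}_{k}f = {S}_{k}f/k \) denotes the Cesàro averages:

```python
import numpy as np

rng = np.random.default_rng(1)
npts = 50
mu = np.full(npts, 1.0 / npts)          # uniform probability measure
tau = rng.permutation(npts)             # a measure-preserving bijection

def T(f):
    """Koopman operator T f = f o tau: positive and contractive on both
    L^1 and L^oo, hence a (toy) Dunford-Schwartz operator."""
    return f[tau]

f = 3 * rng.random(npts)                # 0 <= f in L^1(X)
n = 12

# S_k f = sum_{j<k} T^j f  and  A_k f = S_k f / k  (Cesaro averages)
S, Tjf, total = [], f.copy(), np.zeros(npts)
for k in range(1, n + 1):
    total = total + Tjf
    S.append(total.copy())
    Tjf = T(Tjf)
A_star = np.max([S[k - 1] / k for k in range(1, n + 1)], axis=0)  # A_n^* f

for lam in (0.5, 1.0, 2.0):
    E = A_star > lam                    # the set [A_n^* f > lam]
    # Theorem 11.11: lam * mu[A_n^* f > lam] <= integral of f over that set
    assert lam * mu[E].sum() <= (f * mu)[E].sum() + 1e-12
    # Corollary 11.12 with p = 1: mu[A_n^* f > lam] <= ||f||_1 / lam
    assert mu[E].sum() <= (f * mu).sum() / lam + 1e-12
```

Any permutation works here; the point is only to see both inequalities hold with the exact constants of the statements.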
1098_(GTM254)Algebraic Function Fields and Codes
Definition 1.4.4
Definition 1.4.4. For a divisor \( A \in \operatorname{Div}\left( F\right) \) we define the Riemann-Roch space associated to \( A \) by \[ \mathcal{L}\left( A\right) \mathrel{\text{:=}} \{ x \in F \mid \left( x\right) \geq - A\} \cup \{ 0\} . \] This definition has the following interpretation: if \[ A = \mathop{\sum }\limits_{{i = 1}}^{r}{n}_{i}{P}_{i} - \mathop{\sum }\limits_{{j = 1}}^{s}{m}_{j}{Q}_{j} \] with \( {n}_{i} > 0,{m}_{j} > 0 \) then \( \mathcal{L}\left( A\right) \) consists of all elements \( x \in F \) such that - \( x \) has zeros of order \( \geq {m}_{j} \) at \( {Q}_{j} \), for \( j = 1,\ldots, s \), and - \( x \) may have poles only at the places \( {P}_{1},\ldots ,{P}_{r} \), with the pole order at \( {P}_{i} \) being bounded by \( {n}_{i}\left( {i = 1,\ldots, r}\right) \) . Remark 1.4.5. Let \( A \in \operatorname{Div}\left( F\right) \) . Then (a) \( x \in \mathcal{L}\left( A\right) \) if and only if \( {v}_{P}\left( x\right) \geq - {v}_{P}\left( A\right) \) for all \( P \in {\mathbb{P}}_{F} \) . (b) \( \mathcal{L}\left( A\right) \neq \{ 0\} \) if and only if there is a divisor \( {A}^{\prime } \sim A \) with \( {A}^{\prime } \geq 0 \) . The proof of these remarks is trivial; nevertheless they are often very useful. In particular Remark 1.4.5(b) will be used frequently. Lemma 1.4.6. Let \( A \in \operatorname{Div}\left( F\right) \) . Then we have: (a) \( \mathcal{L}\left( A\right) \) is a vector space over \( K \) . (b) If \( {A}^{\prime } \) is a divisor equivalent to \( A \), then \( \mathcal{L}\left( A\right) \simeq \mathcal{L}\left( {A}^{\prime }\right) \) (isomorphic as vector spaces over \( K \) ). Proof. (a) Let \( x, y \in \mathcal{L}\left( A\right) \) and \( a \in K \) . 
Then for all \( P \in {\mathbb{P}}_{F},{v}_{P}\left( {x + y}\right) \geq \) \( \min \left\{ {{v}_{P}\left( x\right) ,{v}_{P}\left( y\right) }\right\} \geq - {v}_{P}\left( A\right) \) and \( {v}_{P}\left( {ax}\right) = {v}_{P}\left( a\right) + {v}_{P}\left( x\right) \geq - {v}_{P}\left( A\right) \) . So \( x + y \) and \( {ax} \) are in \( \mathcal{L}\left( A\right) \) by Remark 1.4.5(a). (b) By assumption, \( A = {A}^{\prime } + \left( z\right) \) with \( 0 \neq z \in F \) . Consider the mapping \[ \varphi : \left\{ \begin{matrix} \mathcal{L}\left( A\right) & \rightarrow & F, \\ x & \mapsto & {xz}. \end{matrix}\right. \] This is a \( K \) -linear mapping whose image is contained in \( \mathcal{L}\left( {A}^{\prime }\right) \) . In the same manner, \[ {\varphi }^{\prime } : \left\{ \begin{matrix} \mathcal{L}\left( {A}^{\prime }\right) & \rightarrow & F, \\ x & \mapsto & x{z}^{-1} \end{matrix}\right. \] is \( K \) -linear from \( \mathcal{L}\left( {A}^{\prime }\right) \) to \( \mathcal{L}\left( A\right) \) . These mappings are inverse to each other, hence \( \varphi \) is an isomorphism between \( \mathcal{L}\left( A\right) \) and \( \mathcal{L}\left( {A}^{\prime }\right) \) . Lemma 1.4.7. (a) \( \mathcal{L}\left( 0\right) = K \) . (b) If \( A < 0 \) then \( \mathcal{L}\left( A\right) = \{ 0\} \) . Proof. (a) We have \( \left( x\right) = 0 \) for \( 0 \neq x \in K \), therefore \( K \subseteq \mathcal{L}\left( 0\right) \) . Conversely, if \( 0 \neq x \in \mathcal{L}\left( 0\right) \) then \( \left( x\right) \geq 0 \) . This means that \( x \) has no pole, so \( x \in K \) by Corollary 1.1.20. (b) Assume there exists an element \( 0 \neq x \in \mathcal{L}\left( A\right) \) . Then \( \left( x\right) \geq - A > 0 \) , which implies that \( x \) has at least one zero but no pole. This is impossible. In the sequel we shall consider various \( K \) -vector spaces. The dimension of such a vector space \( V \) will be denoted by \( \dim V \) . 
Our next objective is to show that \( \mathcal{L}\left( A\right) \) is finite-dimensional for each divisor \( A \in \operatorname{Div}\left( F\right) \) . Lemma 1.4.8. Let \( A, B \) be divisors of \( F/K \) with \( A \leq B \) . Then we have \( \mathcal{L}\left( A\right) \subseteq \mathcal{L}\left( B\right) \) and \[ \dim \left( {\mathcal{L}\left( B\right) /\mathcal{L}\left( A\right) }\right) \leq \deg B - \deg A. \] Proof. \( \mathcal{L}\left( A\right) \subseteq \mathcal{L}\left( B\right) \) is trivial. In order to prove the other assertion we can assume that \( B = A + P \) for some \( P \in {\mathbb{P}}_{F} \) ; the general case follows then by induction. Choose an element \( t \in F \) with \( {v}_{P}\left( t\right) = {v}_{P}\left( B\right) = {v}_{P}\left( A\right) + 1 \) . For \( x \in \mathcal{L}\left( B\right) \) we have \( {v}_{P}\left( x\right) \geq - {v}_{P}\left( B\right) = - {v}_{P}\left( t\right) \), so \( {xt} \in {\mathcal{O}}_{P} \) . Thus we obtain a \( K \) -linear map \[ \psi : \left\{ \begin{matrix} \mathcal{L}\left( B\right) & \rightarrow & {F}_{P}, \\ x & \mapsto & \left( {xt}\right) \left( P\right) . \end{matrix}\right. \] An element \( x \) is in the kernel of \( \psi \) if and only if \( {v}_{P}\left( {xt}\right) > 0 \) ; i.e., \( {v}_{P}\left( x\right) \geq \) \( - {v}_{P}\left( A\right) \) . Consequently \( \operatorname{Ker}\left( \psi \right) = \mathcal{L}\left( A\right) \), and \( \psi \) induces a \( K \) -linear injective mapping from \( \mathcal{L}\left( B\right) /\mathcal{L}\left( A\right) \) to \( {F}_{P} \) . It follows that \[ \dim \left( {\mathcal{L}\left( B\right) /\mathcal{L}\left( A\right) }\right) \leq \dim {F}_{P} = \deg B - \deg A. \] Proposition 1.4.9. For each divisor \( A \in \operatorname{Div}\left( F\right) \) the space \( \mathcal{L}\left( A\right) \) is a finite-dimensional vector space over \( K \) . 
More precisely: if \( A = {A}_{ + } - {A}_{ - } \) with positive divisors \( {A}_{ + } \) and \( {A}_{ - } \), then \[ \dim \mathcal{L}\left( A\right) \leq \deg {A}_{ + } + 1 \] Proof. Since \( \mathcal{L}\left( A\right) \subseteq \mathcal{L}\left( {A}_{ + }\right) \), it is sufficient to show that \[ \dim \mathcal{L}\left( {A}_{ + }\right) \leq \deg {A}_{ + } + 1 \] We have \( 0 \leq {A}_{ + } \), so Lemma 1.4.8 yields \( \dim \left( {\mathcal{L}\left( {A}_{ + }\right) /\mathcal{L}\left( 0\right) }\right) \leq \deg {A}_{ + } \) . Since \( \mathcal{L}\left( 0\right) = K \) we conclude that \( \dim \mathcal{L}\left( {A}_{ + }\right) = \dim \left( {\mathcal{L}\left( {A}_{ + }\right) /\mathcal{L}\left( 0\right) }\right) + 1 \leq \) \( \deg {A}_{ + } + 1 \) . Definition 1.4.10. For \( A \in \operatorname{Div}\left( F\right) \) the integer \( \ell \left( A\right) \mathrel{\text{:=}} \dim \mathcal{L}\left( A\right) \) is called the dimension of the divisor \( A \) . One of the most important problems in the theory of algebraic function fields is to calculate the dimension of a divisor. We shall be concerned with this question in the subsequent sections; the answer to the problem will be given by the Riemann-Roch Theorem 1.5.15. We begin by proving a sharpening of Proposition 1.3.3. Roughly speaking, the next theorem states that an element \( 0 \neq x \in F \) has as many zeros as poles, provided the zeros and poles are counted properly. Theorem 1.4.11. All principal divisors have degree zero. More precisely: let \( x \in F \smallsetminus K \) and \( {\left( x\right) }_{0} \) resp. \( {\left( x\right) }_{\infty } \) denote the zero resp. pole divisor of \( x \) . Then \[ \deg {\left( x\right) }_{0} = \deg {\left( x\right) }_{\infty } = \left\lbrack {F : K\left( x\right) }\right\rbrack . \] Corollary 1.4.12. (a) Let \( A,{A}^{\prime } \) be divisors with \( A \sim {A}^{\prime } \) . 
Then we have \( \ell \left( A\right) = \ell \left( {A}^{\prime }\right) \) and \( \deg A = \deg {A}^{\prime } \) . (b) If \( \deg A < 0 \) then \( \ell \left( A\right) = 0 \) . (c) For a divisor \( A \) of degree zero the following assertions are equivalent: (1) \( A \) is principal. (2) \( \ell \left( A\right) \geq 1 \) . (3) \( \ell \left( A\right) = 1 \) . Proof of Corollary 1.4.12. (a) follows immediately from Lemma 1.4.6 and Theorem 1.4.11. (b) Suppose that \( \ell \left( A\right) > 0 \) . By Remark 1.4.5 there is some divisor \( {A}^{\prime } \sim A \) with \( {A}^{\prime } \geq 0 \), hence \( \deg A = \deg {A}^{\prime } \geq 0 \) . (c) \( \left( 1\right) \Rightarrow \left( 2\right) \) : If \( A = \left( x\right) \) is principal then \( {x}^{-1} \in \mathcal{L}\left( A\right) \), so \( \ell \left( A\right) \geq 1 \) . \( \left( 2\right) \Rightarrow \left( 3\right) \) : Assume now that \( \ell \left( A\right) \geq 1 \) and \( \deg A = 0 \) . Then \( A \sim {A}^{\prime } \) for some \( {A}^{\prime } \geq 0 \) (Remark 1.4.5(b)). The conditions \( {A}^{\prime } \geq 0 \) and \( \deg {A}^{\prime } = 0 \) imply that \( {A}^{\prime } = 0 \), hence \( \ell \left( A\right) = \ell \left( {A}^{\prime }\right) = \ell \left( 0\right) = 1 \), by Lemma 1.4.7. \( \left( 3\right) \Rightarrow \left( 1\right) \) : Suppose that \( \ell \left( A\right) = 1 \) and \( \deg A = 0 \) . Choose \( 0 \neq z \in \mathcal{L}\left( A\right) \) , then \( \left( z\right) + A \geq 0 \) . Since \( \deg \left( {\left( z\right) + A}\right) = 0 \), it follows that \( \left( z\right) + A = 0 \), therefore \( A = - \left( z\right) = \left( {z}^{-1}\right) \) is principal. Proof of Theorem 1.4.11. Set \( n \mathrel{\text{:=}} \left\lbrack {F : K\left( x\right) }\right\rbrack \) and \[ B \mathrel{\text{:=}} {\left( x\right) }_{\infty } = \mathop{\sum }\limits_{{i = 1}}^{r} - {v}_{{P}_{i}}\left( x\right) {P}_{i}, \] where \( {P}_{1},\ldots ,{P}_{r} \) are all the poles of \( x \) . 
Then \[ \deg B = \mathop{\sum }\limits_{{i = 1}}^{r}{v}_{{P}_{i}}\left( {x}^{-1}\right) \cdot \deg {P}_{i} \leq \left\lbrack {F : K\left( x\right) }\right\rbrack = n \] by Proposition 1.3.3, and thus it remains to show that \( n \leq \deg B \) as well. Choose a basis \( {u}_{1},\ldots ,{u}_{n} \) of \( F/K\left( x\right) \) and a divisor \( C \geq 0 \) such that \( \left( {u}_{i}\right) \geq - C \) for \( i = 1,\ldots, n \) . We have \[ \ell \left( {{lB} + C}\right
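For the rational function field \( F = \mathbb{Q}\left( x\right) \), the spaces \( \mathcal{L}\left( A\right) \) can be written down by partial fractions, which makes Definition 1.4.4 and the bound of Proposition 1.4.9 easy to test. The sympy sketch below is illustrative (the divisor, the helper functions, and the restriction to rational finite places, all of degree one, are assumptions made for the example):

```python
import sympy as sp

x = sp.symbols('x')

def mult(g, a):
    """Multiplicity of the root a in the polynomial g."""
    m = 0
    while g.subs(x, a) == 0:
        g = sp.quo(g, x - a, x)
        m += 1
    return m

def v(f, place):
    """Valuation v_P(f) of a nonzero f in Q(x) at a rational finite place a,
    or at the place at infinity (place='oo'). Illustrative helper."""
    num, den = sp.fraction(sp.cancel(f))
    if place == 'oo':
        return sp.Poly(den, x).degree() - sp.Poly(num, x).degree()
    return mult(num, place) - mult(den, place)

# A = 2*P_0 + 1*P_1 + 3*P_oo, an effective divisor of degree 6
A = {0: 2, 1: 1, 'oo': 3}
deg_A = sum(A.values())

# Partial-fraction basis of L(A): constants, principal parts at the finite
# places of the support, and powers of x for the pole allowed at infinity.
basis = [sp.Integer(1), 1/x, 1/x**2, 1/(x - 1), x, x**2, x**3]

# Membership test of Remark 1.4.5(a): v_P(f) >= -v_P(A) on the support
# (away from the support these f visibly have no poles).
for f in basis:
    for place, n in A.items():
        assert v(f, place) >= -n

# Distinct principal parts make these elements linearly independent, so
# l(A) >= deg A + 1 = 7, while Proposition 1.4.9 gives l(A) <= deg A + 1 = 7.
assert len(basis) == deg_A + 1
```

Matching the two bounds shows \( \ell \left( A\right) = \deg A + 1 \) here, consistent with the genus-zero case treated in Example 1.4.18.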
1098_(GTM254)Algebraic Function Fields and Codes
Definition 1.4.15
Definition 1.4.15. The genus \( g \) of \( F/K \) is defined by \[ g \mathrel{\text{:=}} \max \{ \deg A - \ell \left( A\right) + 1 \mid A \in \operatorname{Div}\left( F\right) \} . \] Observe that this definition makes sense by Proposition 1.4.14. It will turn out that the genus is the most important invariant of a function field. Corollary 1.4.16. The genus of \( F/K \) is a non-negative integer. Proof. In the definition of \( g \), put \( A = 0 \) . Then \( \deg \left( 0\right) - \ell \left( 0\right) + 1 = 0 \), hence \( g \geq 0 \) . Theorem 1.4.17 (Riemann's Theorem). Let \( F/K \) be a function field of genus \( g \) . Then we have: (a) For all divisors \( A \in \operatorname{Div}\left( F\right) \) , \[ \ell \left( A\right) \geq \deg A + 1 - g. \] (b) There is an integer \( c \), depending only on the function field \( F/K \), such that \[ \ell \left( A\right) = \deg A + 1 - g, \] whenever \( \deg A \geq c \) . Proof. (a) This is just the definition of the genus. (b) Choose a divisor \( {A}_{0} \) with \( g = \deg {A}_{0} - \ell \left( {A}_{0}\right) + 1 \) and set \( c \mathrel{\text{:=}} \deg {A}_{0} + g \) . If \( \deg A \geq c \) then \[ \ell \left( {A - {A}_{0}}\right) \geq \deg \left( {A - {A}_{0}}\right) + 1 - g \geq c - \deg {A}_{0} + 1 - g = 1. \] So there is an element \( 0 \neq z \in \mathcal{L}\left( {A - {A}_{0}}\right) \) . Consider the divisor \( {A}^{\prime } \mathrel{\text{:=}} A + \left( z\right) \) which is \( \geq {A}_{0} \) . We have \[ \deg A - \ell \left( A\right) = \deg {A}^{\prime } - \ell \left( {A}^{\prime }\right) \;\text{(by Corollary 1.4.12)} \] \[ \geq \deg {A}_{0} - \ell \left( {A}_{0}\right) \;\text{(by Lemma 1.4.8)} \] \[ = g - 1\text{.} \] Hence \( \ell \left( A\right) \leq \deg A + 1 - g \) . Example 1.4.18. We want to show that the rational function field \( K\left( x\right) /K \) has genus \( g = 0 \) . In order to prove this, let \( {P}_{\infty } \) denote the pole divisor of \( x \) (notation as in Proposition 1.2.1). 
Consider for \( r \geq 0 \) the vector space \( \mathcal{L}\left( {r{P}_{\infty }}\right) \) . Obviously the elements \( 1, x,\ldots ,{x}^{r} \) are in \( \mathcal{L}\left( {r{P}_{\infty }}\right) \), hence \[ r + 1 \leq \ell \left( {r{P}_{\infty }}\right) = \deg \left( {r{P}_{\infty }}\right) + 1 - g = r + 1 - g \] for sufficiently large \( r \) . Thus \( g \leq 0 \) . Since \( g \geq 0 \) holds for every function field, the assertion follows. In general it is hard to determine the genus of a function field. Large parts of Chapter 3 will be devoted to this problem. ## 1.5 The Riemann-Roch Theorem In this section \( F/K \) denotes an algebraic function field of genus \( g \) . Definition 1.5.1. For \( A \in \operatorname{Div}\left( F\right) \) the integer \[ i\left( A\right) \mathrel{\text{:=}} \ell \left( A\right) - \deg A + g - 1 \] is called the index of specialty of \( A \) . Riemann’s Theorem 1.4.17 states that \( i\left( A\right) \) is a non-negative integer, and \( i\left( A\right) = 0 \) if \( \deg A \) is sufficiently large. In the present section we will provide several interpretations for \( i\left( A\right) \) as the dimension of certain vector spaces. To this end we introduce the notion of an adele. Definition 1.5.2. An adele of \( F/K \) is a mapping \[ \alpha : \left\{ \begin{matrix} {\mathbb{P}}_{F} & \rightarrow & F, \\ P & \mapsto & {\alpha }_{P}, \end{matrix}\right. \] such that \( {\alpha }_{P} \in {\mathcal{O}}_{P} \) for almost all \( P \in {\mathbb{P}}_{F} \) . We regard an adele as an element of the direct product \( \mathop{\prod }\limits_{{P \in {\mathbb{P}}_{F}}}F \) and therefore use the notation \( \alpha = {\left( {\alpha }_{P}\right) }_{P \in {\mathbb{P}}_{F}} \) or, even shorter, \( \alpha = \left( {\alpha }_{P}\right) \) . The set \[ {\mathcal{A}}_{F} \mathrel{\text{:=}} \{ \alpha \mid \alpha \text{ is an adele of }F/K\} \] is called the adele space of \( F/K \) . 
It is regarded as a vector space over \( K \) in the obvious manner (actually \( {\mathcal{A}}_{F} \) can be regarded as a ring, but the ring structure will never be used). The principal adele of an element \( x \in F \) is the adele all of whose components are equal to \( x \) (note that this definition makes sense since \( x \) has only finitely many poles). This gives an embedding \( F \hookrightarrow {\mathcal{A}}_{F} \) . The valuations \( {v}_{P} \) of \( F/K \) extend naturally to \( {\mathcal{A}}_{F} \) by setting \( {v}_{P}\left( \alpha \right) \mathrel{\text{:=}} {v}_{P}\left( {\alpha }_{P}\right) \) (where \( {\alpha }_{P} \) is the \( P \) -component of the adele \( \alpha \) ). By definition we have that \( {v}_{P}\left( \alpha \right) \geq 0 \) for almost all \( P \in {\mathbb{P}}_{F} \) . We note that the notion of an adele is not consistent in the literature. Some authors use the name repartition for what we call an adele. Others mean by an adele (or a repartition) a mapping \( \alpha \) such that \( \alpha \left( P\right) \) is an element of the \( P \) -adic completion \( {\widehat{F}}_{P} \) for all \( P \in {\mathbb{P}}_{F} \) (cf. Chapter 4). Definition 1.5.3. For \( A \in \operatorname{Div}\left( F\right) \) we define \[ {\mathcal{A}}_{F}\left( A\right) \mathrel{\text{:=}} \left\{ {\alpha \in {\mathcal{A}}_{F} \mid {v}_{P}\left( \alpha \right) \geq - {v}_{P}\left( A\right) \text{ for all }P \in {\mathbb{P}}_{F}}\right\} . \] Obviously this is a \( K \) -subspace of \( {\mathcal{A}}_{F} \) . Theorem 1.5.4. For every divisor \( A \) the index of specialty is \[ i\left( A\right) = \dim \left( {{\mathcal{A}}_{F}/\left( {{\mathcal{A}}_{F}\left( A\right) + F}\right) }\right) . \] Here, as usual, dim means the dimension as a \( K \) -vector space. 
Note that although the vector spaces \( {\mathcal{A}}_{F},{\mathcal{A}}_{F}\left( A\right) \) and \( F \) are infinite-dimensional, the theorem states that the quotient space \( {\mathcal{A}}_{F}/\left( {{\mathcal{A}}_{F}\left( A\right) + F}\right) \) has finite dimension over \( K \) . As a corollary, we obtain another characterization of the genus of \( F/K \) . Corollary 1.5.5. \( \;g = \dim \left( {{\mathcal{A}}_{F}/\left( {{\mathcal{A}}_{F}\left( 0\right) + F}\right) }\right) \) . Proof of Corollary 1.5.5. \( \;i\left( 0\right) = \ell \left( 0\right) - \deg \left( 0\right) + g - 1 = 1 - 0 + g - 1 = g \) . Proof of Theorem 1.5.4. We proceed in several steps. Step 1. Let \( {A}_{1},{A}_{2} \in \operatorname{Div}\left( F\right) \) and \( {A}_{1} \leq {A}_{2} \) . Then \( {\mathcal{A}}_{F}\left( {A}_{1}\right) \subseteq {\mathcal{A}}_{F}\left( {A}_{2}\right) \) and \[ \dim \left( {{\mathcal{A}}_{F}\left( {A}_{2}\right) /{\mathcal{A}}_{F}\left( {A}_{1}\right) }\right) = \deg {A}_{2} - \deg {A}_{1}. \] (1.24) Proof of Step 1. \( {\mathcal{A}}_{F}\left( {A}_{1}\right) \subseteq {\mathcal{A}}_{F}\left( {A}_{2}\right) \) is trivial. It is sufficient to prove (1.24) in the case \( {A}_{2} = {A}_{1} + P \) with \( P \in {\mathbb{P}}_{F} \) (the general case follows by induction). Choose \( t \in F \) with \( {v}_{P}\left( t\right) = {v}_{P}\left( {A}_{1}\right) + 1 \) and consider the \( K \) -linear map \[ \varphi : \left\{ \begin{matrix} {\mathcal{A}}_{F}\left( {A}_{2}\right) & \rightarrow {F}_{P}, \\ \alpha & \mapsto \left( {t{\alpha }_{P}}\right) \left( P\right) . \end{matrix}\right. \] One checks easily that \( \varphi \) is surjective and that the kernel of \( \varphi \) is \( {\mathcal{A}}_{F}\left( {A}_{1}\right) \) . Consequently \[ \deg {A}_{2} - \deg {A}_{1} = \deg P = \left\lbrack {{F}_{P} : K}\right\rbrack = \dim \left( {{\mathcal{A}}_{F}\left( {A}_{2}\right) /{\mathcal{A}}_{F}\left( {A}_{1}\right) }\right) . \] Step 2. 
Let \( {A}_{1},{A}_{2} \in \operatorname{Div}\left( F\right) \) and \( {A}_{1} \leq {A}_{2} \) as before. Then \[ \dim \left( {\left( {{\mathcal{A}}_{F}\left( {A}_{2}\right) + F}\right) /\left( {{\mathcal{A}}_{F}\left( {A}_{1}\right) + F}\right) }\right. \] \[ = \left( {\deg {A}_{2} - \ell \left( {A}_{2}\right) }\right) - \left( {\deg {A}_{1} - \ell \left( {A}_{1}\right) }\right) . \] (1.25) Proof of Step 2. We have an exact sequence of linear mappings \[ 0 \rightarrow \mathcal{L}\left( {A}_{2}\right) /\mathcal{L}\left( {A}_{1}\right) \overset{{\sigma }_{1}}{ \rightarrow }{\mathcal{A}}_{F}\left( {A}_{2}\right) /{\mathcal{A}}_{F}\left( {A}_{1}\right) \] \[ \overset{{\sigma }_{2}}{ \rightarrow }\left( {{\mathcal{A}}_{F}\left( {A}_{2}\right) + F}\right) /\left( {{\mathcal{A}}_{F}\left( {A}_{1}\right) + F}\right) \rightarrow 0 \] (1.26) where \( {\sigma }_{1} \) and \( {\sigma }_{2} \) are defined in the obvious manner. In fact, the only non-trivial assertion is that the kernel of \( {\sigma }_{2} \) is contained in the image of \( {\sigma }_{1} \) . In order to prove this, let \( \alpha \in {\mathcal{A}}_{F}\left( {A}_{2}\right) \) with \( {\sigma }_{2}\left( {\alpha + {\mathcal{A}}_{F}\left( {A}_{1}\right) }\right) = 0 \) . Then \( \alpha \in {\mathcal{A}}_{F}\left( {A}_{1}\right) + F \), so there is some \( x \in F \) with \( \alpha - x \in {\mathcal{A}}_{F}\left( {A}_{1}\right) \) . As \( {\mathcal{A}}_{F}\left( {A}_{1}\right) \subseteq {\mathcal{A}}_{F}\left( {A}_{2}\right) \) we conclude that \( x \in {\mathcal{A}}_{F}\left( {A}_{2}\right) \cap F = \mathcal{L}\left( {A}_{2}\right) \) . Therefore \( \alpha + {\mathcal{A}}_{F}\left( {A}_{1}\right) = x + {\mathcal{A}}_{F}\left( {A}_{1}\right) = \) \( {\sigma }_{1}\left( {x + \mathcal{L}\left( {A}_{1}\right) }\right) \) lies in the image of \( {\sigma }_{1} \) . 
From the exactness of (1.26) we obtain \[ \dim \left( {{\mathcal{A}}_{F}\left( {A}_{2}\right) + F}\right) /\left( {{\mathcal{A}}_{F}\left( {A}_{1}\right) + F}\right) \] \[ = \dim \left( {{\mathcal{A}}_{F}\left( {A}_{2}\right) /{\mathcal{A}}_{F}\left( {A}_{1}\right) }\right) - \dim \left( {\mathcal{L}\left( {A}_{2}\right) /\mathcal{L}\left( {A}_{1}\right) }\right) \] \[ = \left( {\deg {A}_{2} - \deg {A}_{1}}\
111_111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 10.6
Definition 10.6 The operator \( {A}_{\mathrm{t}} \) is called the Friedrichs extension of \( T \) and denoted by \( {T}_{F} \) . The first assertion of the following theorem says that \( {T}_{F} \) is indeed an extension of the symmetric operator \( T \) . Theorem 10.17 Let \( T \) be a densely defined lower semibounded symmetric operator on a Hilbert space \( \mathcal{H} \) . (i) \( {T}_{F} \) is a lower semibounded self-adjoint extension of \( T \) which has the same greatest lower bound as \( T \) . (ii) If \( S \) is another lower semibounded self-adjoint extension of \( T \), then \( {T}_{F} \geq S \) and \( {\mathfrak{t}}_{{T}_{F}} = \overline{{\mathfrak{s}}_{T}} \subseteq {\mathfrak{t}}_{S} = \overline{{\mathfrak{s}}_{S}} \) . (iii) \( \mathcal{D}\left( {T}_{F}\right) = \mathcal{D}\left( {T}^{ * }\right) \cap \mathcal{D}\left( \overline{{\mathfrak{s}}_{T}}\right) \) and \( {T}_{F} = {T}^{ * } \upharpoonright \mathcal{D}\left( \overline{{\mathfrak{s}}_{T}}\right) \) . Moreover, \( {T}_{F} \) is the only self-adjoint extension of \( T \) with domain contained in \( \mathcal{D}\left( \overline{{\mathfrak{s}}_{T}}\right) \) . (iv) \( {\left( T + \lambda I\right) }_{F} = {T}_{F} + {\lambda I} \) for \( \lambda \in \mathbb{R} \) . Proof (i): If \( x \in \mathcal{D}\left( T\right) \), then \( {\mathfrak{t}}_{{T}_{F}}\left\lbrack {x, y}\right\rbrack = \overline{{\mathfrak{s}}_{T}}\left\lbrack {x, y}\right\rbrack = \langle {Tx}, y\rangle \) for all \( y \in \mathcal{D}\left( T\right) \) . From Proposition 10.5(v), applied to the core \( \mathcal{D}\left( T\right) = \mathcal{D}\left( {\mathfrak{s}}_{T}\right) \) of \( {\mathfrak{t}}_{{T}_{F}} = \overline{{\mathfrak{s}}_{T}} \), we obtain \( T \subseteq {T}_{F} \) . Clearly, the greatest lower bound of \( T \) coincides with the greatest lower bound of \( {\mathfrak{s}}_{T} \), so of \( \overline{{\mathfrak{s}}_{T}} \), and hence of \( {T}_{F} \) by Proposition 10.4(iii). 
(ii): Let \( S \) be a lower semibounded self-adjoint extension of \( T \) . By (i), \( {S}_{F} \) is a self-adjoint extension of \( S \), and so \( {S}_{F} = S \) . Hence, \( \overline{{\mathfrak{s}}_{S}} = {\mathfrak{t}}_{{S}_{F}} = {\mathfrak{t}}_{S} \) by the definition of \( {S}_{F} \) . From \( S \supseteq T \) we obtain \( {\mathfrak{s}}_{S} \supseteq {\mathfrak{s}}_{T} \), and hence \( {\mathfrak{t}}_{S} = \overline{{\mathfrak{s}}_{S}} \supseteq \overline{{\mathfrak{s}}_{T}} = {\mathfrak{t}}_{{T}_{F}} \) . The relation \( {\mathfrak{t}}_{{T}_{F}} \subseteq {\mathfrak{t}}_{S} \) implies that \( {T}_{F} \geq S \) according to Definition 10.5. (iii): Since \( T \subseteq {T}_{F} \) by (i), we have \( {\left( {T}_{F}\right) }^{ * } = {T}_{F} \subseteq {T}^{ * } \), so \( {T}_{F} = {T}^{ * } \upharpoonright \mathcal{D}\left( {T}_{F}\right) \) and \( \mathcal{D}\left( {T}_{F}\right) \subseteq \mathcal{D}\left( {T}^{ * }\right) \cap \mathcal{D}\left( \overline{{\mathfrak{s}}_{T}}\right) = : \mathcal{D}\left( R\right) \) . Let \( R \mathrel{\text{:=}} {T}^{ * } \upharpoonright \mathcal{D}\left( R\right) \) and \( x \in \mathcal{D}\left( R\right) \) . Using the symmetry of \( {\mathfrak{t}}_{{T}_{F}} \), formula (10.9), and the relations \( \overline{{\mathfrak{s}}_{T}} = {\mathfrak{t}}_{{T}_{F}} \) and \( T \subseteq {T}_{F} \), we obtain \[ {\mathrm{t}}_{{T}_{F}}\left\lbrack {x, y}\right\rbrack = \overline{{\mathrm{t}}_{{T}_{F}}\left\lbrack {y, x}\right\rbrack } = \overline{\left\langle {T}_{F}y, x\right\rangle } = \langle x,{Ty}\rangle = \left\langle {{T}^{ * }x, y}\right\rangle = \langle {Rx}, y\rangle \] for \( y \in \mathcal{D}\left( T\right) \) . Applying again Proposition 10.5(v) with \( \mathcal{D} = \mathcal{D}\left( T\right) \), we get \( R \subseteq {T}_{F} \) , and so \( \mathcal{D}\left( R\right) \subseteq \mathcal{D}\left( {T}_{F}\right) \) . 
Thus, we have shown that \( \mathcal{D}\left( {T}_{F}\right) = \mathcal{D}\left( {T}^{ * }\right) \cap \mathcal{D}\left( \overline{{\mathfrak{s}}_{T}}\right) \) . Let \( S \) be an arbitrary self-adjoint extension of \( T \) such that \( \mathcal{D}\left( S\right) \subseteq \mathcal{D}\left( \overline{{\mathfrak{s}}_{T}}\right) \) . Clearly, \( T \subseteq S \) implies that \( S = {S}^{ * } \subseteq {T}^{ * } \), so \( \mathcal{D}\left( S\right) \subseteq \mathcal{D}\left( {T}^{ * }\right) \cap \mathcal{D}\left( \overline{{\mathfrak{s}}_{T}}\right) = \mathcal{D}\left( {T}_{F}\right) \) . Since \( S = {T}^{ * } \upharpoonright \mathcal{D}\left( S\right) \) and \( {T}_{F} = {T}^{ * } \upharpoonright \mathcal{D}\left( {T}_{F}\right) \), we get \( S \subseteq {T}_{F} \) . Hence, \( S = {T}_{F} \) . (iv): Since \( {\mathfrak{s}}_{T + {\lambda I}} = {\mathfrak{s}}_{T} + \lambda \), we have \( {\mathfrak{t}}_{{\left( T + \lambda I\right) }_{F}} = \overline{{\mathfrak{s}}_{T + {\lambda I}}} = \overline{{\mathfrak{s}}_{T}} + \lambda = {\mathfrak{t}}_{{T}_{F}} + \lambda = {\mathfrak{t}}_{{T}_{F} + {\lambda I}} \), and hence \( {\left( T + \lambda I\right) }_{F} = {T}_{F} + {\lambda I} \) . ## 10.5 Examples of Semibounded Forms and Operators Let us begin with a simple but standard example for the Friedrichs extension. Example 10.4 (Friedrichs extension of \( A = - \frac{{d}^{2}}{d{x}^{2}} \) on \( \mathcal{D}\left( A\right) = {H}_{0}^{2}\left( {a, b}\right), a, b \in \mathbb{R} \) ) From Example 1.4 we know that \( A \) is the square \( {T}^{2} \) of the symmetric operator \( T = - \mathrm{i}\frac{d}{dx} \) with domain \( \mathcal{D}\left( T\right) = {H}_{0}^{1}\left( {a, b}\right) \) in \( {L}^{2}\left( {a, b}\right) \) and \( {A}^{ * } = - \frac{{d}^{2}}{d{x}^{2}} \) on \( \mathcal{D}\left( {A}^{ * }\right) = {H}^{2}\left( {a, b}\right) \) . 
Thus, \( A \) is a densely defined positive symmetric operator, and \[ {\mathfrak{s}}_{A}\left\lbrack f\right\rbrack = \langle {Af}, f\rangle = \parallel {Tf}{\parallel }^{2} = {\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}^{2}\;\text{ for }f \in \mathcal{D}\left( {\mathfrak{s}}_{A}\right) = \mathcal{D}\left( A\right) = {H}_{0}^{2}\left( {a, b}\right) . \] Hence, the form norm of \( {\mathfrak{s}}_{A} \) coincides with the norm of the Sobolev space \( {H}^{1}\left( {a, b}\right) \) . Since \( {C}_{0}^{\infty }\left( {a, b}\right) \subseteq {H}_{0}^{2}\left( {a, b}\right) \), the completion \( \mathcal{D}\left( \overline{{\mathfrak{s}}_{A}}\right) \) of \( \left( {\mathcal{D}\left( {\mathfrak{s}}_{A}\right) ,\parallel \cdot {\parallel }_{{\mathfrak{s}}_{A}}}\right) \) is \( {H}_{0}^{1}\left( {a, b}\right) \) . Therefore, by Theorem 10.17(iii), the Friedrichs extension \( {A}_{F} \) of \( A \) is the restriction of \( {A}^{ * } \) to \( \mathcal{D}\left( \overline{{\mathfrak{s}}_{A}}\right) \), that is, \[ {A}_{F}f = - {f}^{\prime \prime }\;\text{ for }f \in \mathcal{D}\left( {A}_{F}\right) = \left\{ {f \in {H}^{2}\left( {a, b}\right) : f\left( a\right) = f\left( b\right) = 0}\right\} . \] By Theorem D.1, the embedding of \( {H}_{0}^{1}\left( {a, b}\right) = \mathcal{D}\left( {\mathfrak{t}}_{{A}_{F}}\right) \) into \( {L}^{2}\left( {a, b}\right) \) is compact. It follows therefore from Proposition 10.6 that \( {A}_{F} \) has a purely discrete spectrum. One easily computes that \( {A}_{F} \) has the orthonormal basis \( \left\{ {{f}_{n} : n \in \mathbb{N}}\right\} \) of eigenfunctions: \[ {A}_{F}{f}_{n} = {\pi }^{2}{\left( b - a\right) }^{-2}{n}^{2}{f}_{n},\;\text{ where }{f}_{n}\left( x\right) = \sqrt{2}{\left( b - a\right) }^{-1/2}\sin {\pi n}\frac{x - a}{b - a}, n \in \mathbb{N}. \] The smallest eigenvalue of \( {A}_{F} \) is \( {\pi }^{2}{\left( b - a\right) }^{-2} \), so we have \( {A}_{F} \geq {\pi }^{2}{\left( b - a\right) }^{-2} \cdot I \) . 
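The eigenvalue computation can be corroborated numerically. The sketch below (my own illustration; the interval \( \left( {a, b}\right) = \left( {0,2}\right) \) is an arbitrary choice) checks the eigenvalue equation \( - {f}_{n}^{\prime \prime } = {\pi }^{2}{\left( b - a\right) }^{-2}{n}^{2}{f}_{n} \) by a central second difference, together with the Dirichlet boundary conditions \( {f}_{n}\left( a\right) = {f}_{n}\left( b\right) = 0 \) :

```python
import math

# Check that f_n(x) = sqrt(2/(b-a)) * sin(pi * n * (x-a)/(b-a)) satisfies
# -f_n'' = lambda_n f_n with lambda_n = pi^2 n^2 / (b-a)^2, and vanishes at a and b.
a, b = 0.0, 2.0  # an arbitrary interval (a, b)

def f(n, x):
    return math.sqrt(2.0 / (b - a)) * math.sin(math.pi * n * (x - a) / (b - a))

h = 1e-4  # step for the central second difference
for n in (1, 2, 3):
    lam = (math.pi * n / (b - a)) ** 2
    x = 0.7  # an interior sample point
    f2 = (f(n, x + h) - 2.0 * f(n, x) + f(n, x - h)) / h**2
    assert abs(-f2 - lam * f(n, x)) < 1e-5          # eigenvalue equation
    assert abs(f(n, a)) < 1e-12 and abs(f(n, b)) < 1e-9  # Dirichlet boundary conditions
```

The smallest of these eigenvalues, \( {\pi }^{2}{\left( b - a\right) }^{-2} \) for \( n = 1 \), is exactly the constant appearing in the lower bound \( {A}_{F} \geq {\pi }^{2}{\left( b - a\right) }^{-2} \cdot I \) .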
Hence, \( {\mathfrak{t}}_{{A}_{F}} \geq {\pi }^{2}{\left( b - a\right) }^{-2} \), which yields the Poincaré inequality (see, e.g., (D.1)) \[ \begin{Vmatrix}{f}^{\prime }\end{Vmatrix} \geq \pi {\left( b - a\right) }^{-1}\parallel f\parallel \;\text{ for }f \in \mathcal{D}\left\lbrack {A}_{F}\right\rbrack = {H}_{0}^{1}\left( {a, b}\right) . \] (10.24) Before we continue with other ordinary differential operators, we treat a simple operator-theoretic example. Example 10.5 Let \( T \) be a linear operator of a Hilbert space \( \left( {{\mathcal{H}}_{1},\langle \cdot , \cdot {\rangle }_{1}}\right) \) into a Hilbert space \( \left( {{\mathcal{H}}_{2},\langle \cdot , \cdot {\rangle }_{2}}\right) \) . Define a positive form \( \mathfrak{t} \) by \[ \mathfrak{t}\left\lbrack {x, y}\right\rbrack = \langle {Tx},{Ty}{\rangle }_{2},\;x, y \in \mathcal{D}\left( \mathfrak{t}\right) \mathrel{\text{:=}} \mathcal{D}\left( T\right) . \] (10.25) Taking \( m = 0 \) as a lower bound of \( \mathfrak{t} \), by (10.4) and (1.4), we have \[ \parallel x{\parallel }_{\mathfrak{t}}^{2} = \parallel {Tx}{\parallel }_{2}^{2} + \parallel x{\parallel }_{1}^{2} = \parallel x{\parallel }_{T}^{2},\;x \in \mathcal{D}\left( \mathfrak{t}\right) , \] that is, the form norm \( \parallel \cdot {\parallel }_{\mathfrak{t}} \) and the graph norm \( \parallel \cdot {\parallel }_{T} \) coincide on \( \mathcal{D}\left( \mathfrak{t}\right) = \mathcal{D}\left( T\right) \) . Therefore, the form \( \mathfrak{t} \) is closed if and only if the operator \( T \) is closed. Likewise, \( \mathfrak{t} \) is closable if and only if \( T \) is. In the latter case, we have \( \overline{\mathfrak{t}}\left\lbrack {x, y}\right\rbrack = \langle \bar{T}x,\bar{T}y{\rangle }_{2} \) for \( x, y \in \mathcal{D}\left( \overline{\mathfrak{t}}\right) = \mathcal{D}\left( \bar{T}\right) \) . Further, a linear subspace of \( \mathcal{D}\left( \mathfrak{t}\right) \) is a core of \( \mathfrak{t} \) if and only if it is a core of \( T \) . 
Statement If \( \mathcal{D}\left( T\right) \) is dense in \( {\mathcal{H}}_{1} \) and the operator \( T \) is closed, then \( {A}_{\mathrm{t}} = {T}^{ * }T \) . Proof For \( x \in \mathcal{D}\left( {A}_{\mathrm{t}}\right) \) and \( y \in \mathcal{D}\left( T\right) \), we have \( \langle {Tx},{Ty}{\rangle }_{2} = \mathfra
1129_(GTM35)Several Complex Variables and Banach Algebras
Definition 18.1
Definition 18.1. Put \( \Gamma = \{ \zeta : \left| \zeta \right| = 1\} \) . A function \( f \) in \( C\left( \Gamma \right) \) is a boundary function if \( \exists F \) continuous in \( \left| \zeta \right| \leq 1 \) and analytic in \( \left| \zeta \right| < 1 \) with \( F = f \) on \( \Gamma \) . Given \( u \) defined on \( \Gamma \), we put \[ \dot{u} = \frac{d}{d\theta }\left( {u\left( {e}^{i\theta }\right) }\right) \] Definition 18.2. \( {H}_{1} \) is the space of all real-valued functions \( u \) on \( \Gamma \) such that \( u \) is absolutely continuous, \( u \in {L}^{2}\left( \Gamma \right) \) and \( \dot{u} \in {L}^{2}\left( \Gamma \right) \) . For \( u \in {H}_{1} \), we put \[ \parallel u{\parallel }_{1} = \parallel u{\parallel }_{{L}^{2}} + \parallel \dot{u}{\parallel }_{{L}^{2}} \] Normed with \( \parallel \cdot {\parallel }_{1} \), \( {H}_{1} \) is a Banach space. Fix \( u \in {H}_{1} \) . \[ u = {a}_{0} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\cos {n\theta } + {b}_{n}\sin {n\theta }. \] Since \( \dot{u} \in {L}^{2},\mathop{\sum }\limits_{1}^{\infty }{n}^{2}\left( {{a}_{n}^{2} + {b}_{n}^{2}}\right) < \infty \) and so \( \mathop{\sum }\limits_{1}^{\infty }\left( {\left| {a}_{n}\right| + \left| {b}_{n}\right| }\right) < \infty \) . Definition 18.3. For \( u \) as above, \[ {Tu} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\sin {n\theta } - {b}_{n}\cos {n\theta }. \] Observe the following facts: (2) If \( u, v \in {H}_{1} \), then \( u + {iv} \) is a boundary function provided \( u = - {Tv} \) . (3) If \( u \in {H}_{1} \), then \( {Tu} \in {H}_{1} \) and \( \parallel {Tu}{\parallel }_{1} \leq \parallel u{\parallel }_{1} \) . Definition 18.4. Let \( {w}_{2},\ldots ,{w}_{n} \) be smooth boundary functions and put \( w = \left( {{w}_{2},\ldots ,{w}_{n}}\right) \) . \( w \) is then a map of \( \Gamma \) into \( {\mathbb{C}}^{n - 1} \) . For \( x \in {H}_{1} \) , \[ {A}_{w}x = - T\{ h\left( {x, w}\right) \} \] where \( h \) is as in (1). 
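On a trigonometric polynomial the operator \( T \) of Definition 18.3 acts purely on coefficients: it swaps the roles of \( {a}_{n} \) and \( {b}_{n} \) (with a sign) and annihilates the constant term, which is why \( \parallel {Tu}{\parallel }_{1} \leq \parallel u{\parallel }_{1} \) in fact (3). A coefficient-level sketch (my own illustration, restricted to finitely many modes):

```python
# T (Definition 18.3) on a trig polynomial u = a0 + sum a_n cos(n t) + b_n sin(n t):
# Tu = sum a_n sin(n t) - b_n cos(n t). Represent u by (a0, a, b) with
# a = [a_1, ..., a_N] and b = [b_1, ..., b_N]; T returns the triple for Tu.
def T(a0, a, b):
    """Return (A0, A, B) so that Tu = A0 + sum A_n cos(n t) + B_n sin(n t)."""
    A = [-bn for bn in b]  # cos coefficients of Tu
    B = list(a)            # sin coefficients of Tu
    return 0.0, A, B       # the constant term is annihilated

# T(cos t) = sin t and T(sin t) = -cos t:
assert T(0.0, [1.0], [0.0]) == (0.0, [0.0], [1.0])
assert T(0.0, [0.0], [1.0]) == (0.0, [-1.0], [0.0])
```

Since \( T \) only permutes and negates the coefficients with \( n \geq 1 \) and drops \( {a}_{0} \), Parseval's identity gives \( \parallel {Tu}{\parallel }_{{L}^{2}} \leq \parallel u{\parallel }_{{L}^{2}} \), and the same argument applies to \( \dot{u} \) .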
\( {A}_{w} \) is thus a map of \( {H}_{1} \) into \( {H}_{1} \) . Let \( U \) be as in Theorem 18.3 and choose \( \delta > 0 \) such that the point described by (1) with parameters \( {x}_{1} \) and \( w \) lies in \( U \) provided \( \left| {x}_{1}\right| < \delta \) and \( \left| {w}_{j}\right| < \delta \), \( 2 \leq j \leq n \) . Lemma 18.4. Let \( {w}_{2},\ldots ,{w}_{n} \) be smooth boundary functions with \( \left| {w}_{j}\right| < \delta \) for all \( j \) and such that \( {w}_{2} \) is schlicht, i.e., its analytic extension is one-one in \( \left| \zeta \right| \leq 1 \) . Put \( A = {A}_{w} \) . Suppose \( {x}^{ * } \in {H}_{1} \), \( \left| {x}^{ * }\right| < \delta \) on \( \Gamma \) and \( A{x}^{ * } = {x}^{ * } \) . Then \( \exists \) analytic disk \( E \) with \( \partial E \) contained in \( U \) . Proof. Since \( A{x}^{ * } = {x}^{ * } \), \( {x}^{ * } = - T\left\{ {h\left( {{x}^{ * }, w}\right) }\right\} \), and so \( {x}^{ * } + {ih}\left( {{x}^{ * }, w}\right) \) is a boundary function by (2). Let \( \psi \) be the analytic extension of \( {x}^{ * } + {ih}\left( {{x}^{ * }, w}\right) \) to \( \left| \zeta \right| < 1 \) . The set defined for \( \left| \zeta \right| \leq 1 \) by \( {z}_{1} = \psi \left( \zeta \right) ,{z}_{2} = {w}_{2}\left( \zeta \right) ,\ldots ,{z}_{n} = {w}_{n}\left( \zeta \right) \) is an analytic disk \( E \) in \( {\mathbb{C}}^{n} \) . \( \partial E \) is defined for \( \left| \zeta \right| = 1 \) by \( {z}_{1} = {x}^{ * }\left( \zeta \right) + {ih}\left( {{x}^{ * }\left( \zeta \right), w\left( \zeta \right) }\right) ,{z}_{2} = {w}_{2}\left( \zeta \right) ,\ldots ,{z}_{n} = {w}_{n}\left( \zeta \right) \) and so by (1) lies on \( {\Sigma }^{{2n} - 1} \) . Since by hypothesis \( \left| {x}^{ * }\right| < \delta \) and \( \left| {w}_{j}\right| < \delta \) for all \( j \), \( \partial E \subset U \) . 
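The fix-point \( {x}^{ * } \) will be produced below via the contraction principle (Lemma 18.5). As a scalar toy illustration of that principle (my own sketch, unrelated to the specific operator \( {A}_{w} \)):

```python
import math

# Banach's fixed-point principle in miniature: Phi(x) = cos(x) is a contraction on
# [0, 1] (|Phi'(x)| = |sin x| <= sin(1) < 1), so iterating Phi from any starting
# point of [0, 1] converges to the unique fix-point.
def iterate(phi, x, steps):
    for _ in range(steps):
        x = phi(x)
    return x

x_star = iterate(math.cos, 0.5, 200)
assert abs(math.cos(x_star) - x_star) < 1e-12  # x* satisfies Phi(x*) = x*
```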
In view of the preceding, to prove Theorem 18.3, it suffices to show that \( A = {A}_{w} \) has a fix-point \( {x}^{ * } \) in \( {H}_{1} \) with \( \left| {x}^{ * }\right| < \delta \) for prescribed small \( w \) . To produce this fix-point, we shall use the following well-known Lemma on metric spaces. Lemma 18.5. Let \( K \) be a complete metric space with metric \( \rho \) and \( \Phi \) a map of \( K \) into \( K \) which satisfies \[ \rho \left( {\Phi \left( x\right) ,\Phi \left( y\right) }\right) \leq {\alpha \rho }\left( {x, y}\right) ,\;\text{ all }x, y \in K, \] where \( \alpha \) is a constant with \( 0 < \alpha < 1 \) . Then \( \Phi \) has a fix-point in \( K \) . The proof is left as Exercise 18.1. As complete metric space we shall use the ball in \( {H}_{1} \) of radius \( M \), \( {B}_{M} = \left\{ {x \in {H}_{1} : \parallel x{\parallel }_{1} \leq M}\right\} \) . We shall show that for small \( M \), if \( \left| w\right| \) is sufficiently small and \( A = {A}_{w} \), then (4) \( A \) maps \( {B}_{M} \) into \( {B}_{M} \) . (5) \( \exists \alpha \), \( 0 < \alpha < 1 \), such that \[ \parallel {Ax} - {Ay}{\parallel }_{1} \leq \alpha \parallel x - y{\parallel }_{1}\;\text{ for all }x, y \in {B}_{M}. \] Hence Lemma 18.5 will apply to \( A \) . We need some notation. Fix \( N \) and let \( x = \left( {{x}_{1},\ldots ,{x}_{N}}\right) \) be a map of \( \Gamma \) into \( {\mathbb{R}}^{N} \) such that \( {x}_{i} \in {H}_{1} \) for each \( i \) . 
\[ \dot{x} = \left( {{\dot{x}}_{1},\ldots ,{\dot{x}}_{N}}\right) ,\;\left| x\right| = \sqrt{\mathop{\sum }\limits_{{i = 1}}^{N}{\left| {x}_{i}\right| }^{2}}, \] \[ \parallel x{\parallel }_{1} = \sqrt{{\int }_{\Gamma }{\left| x\right| }^{2}{d\theta }} + \sqrt{{\int }_{\Gamma }{\left| \dot{x}\right| }^{2}{d\theta }}, \] \[ \parallel x{\parallel }_{\infty } = \sup \left| x\right| \text{, taken over }\Gamma . \] Observe that \( \parallel x{\parallel }_{\infty } \leq C\parallel x{\parallel }_{1} \), where \( C \) is a constant depending only on \( N \) . In the following two Exercises, \( h \) is a smooth function on \( {\mathbb{R}}^{N} \) which vanishes at 0 of order \( \geq 2 \) . *EXERCISE 18.2. \( \exists \) constant \( K \) depending only on \( h \) such that for every map \( x \) of \( \Gamma \) into \( {\mathbb{R}}^{N} \) with \( \parallel x{\parallel }_{\infty } \leq 1 \) , \[ \parallel h\left( x\right) {\parallel }_{1} \leq K{\left( \parallel x{\parallel }_{1}\right) }^{2}. \] *EXERCISE 18.3. \( \exists \) constant \( K \) depending only on \( h \) such that for every pair of maps \( x, y \) of \( \Gamma \) into \( {\mathbb{R}}^{N} \) with \( \parallel x{\parallel }_{\infty } \leq 1 \) and \( \parallel y{\parallel }_{\infty } \leq 1 \) , \[ \parallel h\left( x\right) - h\left( y\right) {\parallel }_{1} < K\parallel x - y{\parallel }_{1}\left( {\parallel x{\parallel }_{1} + \parallel y{\parallel }_{1}}\right) . \] Fix boundary functions \( {w}_{2},\ldots ,{w}_{n} \) as earlier and put \( w = \left( {{w}_{2},\ldots ,{w}_{n}}\right) \) . Then \( w \) is a map of \( \Gamma \) into \( {\mathbb{C}}^{n - 1} = {\mathbb{R}}^{{2n} - 2} \) . Lemma 18.6. 
For all sufficiently small \( M > 0 \) the following holds: if \( \parallel w{\parallel }_{1} < \) \( M \) and \( A = {A}_{w} \), then \( A \) maps \( {B}_{M} \) into \( {B}_{M} \) and \( \exists \alpha ,0 < \alpha < 1 \), such that \( \parallel {Ax} - {Ay}{\parallel }_{1} \leq \alpha \parallel x - y{\parallel }_{1} \) for all \( x, y \in {B}_{M}. \) Proof. Fix \( M \) and choose \( w \) with \( \parallel w{\parallel }_{1} < M \) and choose \( x \in {B}_{M} \) . The map \( \left( {x, w}\right) \) takes \( \Gamma \) into \( \mathbb{R} \times {\mathbb{C}}^{n - 1} = {\mathbb{R}}^{{2n} - 1} \) . If \( M \) is small, \( \parallel \left( {x, w}\right) {\parallel }_{\infty } \leq 1 \) . Since \( \left( {x, w}\right) = \left( {x,0}\right) + \left( {0, w}\right) , \) \[ \parallel \left( {x, w}\right) {\parallel }_{1} \leq \parallel \left( {x,0}\right) {\parallel }_{1} + \parallel \left( {0, w}\right) {\parallel }_{1} = \parallel x{\parallel }_{1} + \parallel w{\parallel }_{1}. \] By Exercise 18.2, \[ \parallel h\left( {x, w}\right) {\parallel }_{1} < K{\left( \parallel \left( x, w\right) {\parallel }_{1}\right) }^{2} \] \[ < K{\left( \parallel x{\parallel }_{1} + \parallel w{\parallel }_{1}\right) }^{2} < K{\left( M + M\right) }^{2} = 4{M}^{2}K. \] \[ \parallel {Ax}{\parallel }_{1} = \parallel T\{ h\left( {x, w}\right) \} {\parallel }_{1} \leq \parallel h\left( {x, w}\right) {\parallel }_{1} < 4{M}^{2}K. \] Hence if \( M < 1/{4K},\parallel {Ax}{\parallel }_{1} \leq M \) . So for \( M < 1/{4K}, A \) maps \( {B}_{M} \) into \( {B}_{M} \) . Next fix \( M < 1/{4K} \) and \( w \) with \( \parallel w{\parallel }_{1} < M \) and fix \( x, y \in {B}_{M} \) . If \( M \) is small, \( \parallel \left( {x, w}\right) {\parallel }_{\infty } \leq 1 \) and \( \parallel \left( {y, w}\right) {\parallel }_{\infty } \leq 1. \) \[ {Ax} - {Ay} = T\{ h\left( {y, w}\right) - h\left( {x, w}\right) \} . 
\] Hence by (3) and Exercise 18.3, \[ \parallel {Ax} - {Ay}{\parallel }_{1} \leq \parallel h\left( {y, w}\right) - h\left( {x, w}\right) {\parallel }_{1} \leq K\parallel \left( {x, w}\right) - \left( {y, w}\right) {\parallel }_{1}\left( {\parallel \left( {x, w}\right) {\parallel }_{1} + \parallel \left( {y, w}\right) {\parallel }_{1}}\right) \leq K\parallel x - y{\parallel }_{1}\left( {\parallel x{\parallel }_{1} + \parallel y{\parallel }_{1} + 2\parallel w{\parallel }_{1}}\right) \leq {4MK}\parallel x - y{\parallel }_{1}. \] Put \( \alpha = {4MK} \) . Then \( \alpha < 1 \) and we are done. Proof of Theorem 18.3. Choose \( M \) by Lemma 18.6, choose \( w \) with \( \parallel w{\parallel }_{1} < M \) and put \( A = {A}_{w} \) . In view of Lemmas 18.5 and 18.6, \( A \) has a fix-point \( {x}^{ * } \) in \( {B}_{M} \) . Since for \( x \in {H}_{1},\parallel x{\parallel }_{\infty } \leq C\parallel x{\parallel }_{1} \), where \( C \) is a constant, for given \( \d
1118_(GTM272)Operator Theoretic Aspects of Ergodic Theory
Definition 2.1
Definition 2.1. A topological (dynamical) system is a pair \( \left( {K;\varphi }\right) \), where \( K \) is a nonempty compact space \( {}^{2} \) and \( \varphi : K \rightarrow K \) is continuous. A topological system \( \left( {K;\varphi }\right) \) is surjective if \( \varphi \) is surjective, and the system is invertible if \( \varphi \) is invertible, i.e., a homeomorphism. An invertible topological system \( \left( {K;\varphi }\right) \) defines two "one-sided" topological systems, namely the forward system \( \left( {K;\varphi }\right) \) and the backward system \( \left( {K;{\varphi }^{-1}}\right) \) . Many of the following notions, like the forward orbit of a point \( x \), do make sense in the more general setting of a continuous self-map of a topological space. However, we restrict ourselves to compact spaces and reserve the term topological dynamical system for this special situation. --- \( {}^{1} \) D. Mac Hale, Comic sections: the book of mathematical jokes, humour, wit, and wisdom, Boole Press, 1993. \( {}^{2} \) Note that by (our) definition compact spaces are Hausdorff, see Appendix A. --- ## 2.1 Basic Examples First we list some basic examples of topological dynamical systems. Example 2.2. The trivial system is \( \left( {K;{\mathrm{{id}}}_{K}}\right) \), where \( K = \{ 0\} \) . The trivial system is invertible, and it is abbreviated by \( \{ 0\} \) . Example 2.3 (Finite State Space). Take \( d \in \mathbb{N} \) and consider the finite set \( K \mathrel{\text{:=}} \{ 0,\ldots, d - 1\} \) with the discrete topology. Then \( K \) is compact and every map \( \varphi : K \rightarrow K \) is continuous. The system \( \left( {K;\varphi }\right) \) is invertible if and only if \( \varphi \) is a permutation of the elements of \( K \) . 
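On a finite state space the invertibility criterion of Example 2.3 is a finite check; a small sketch (my own illustration, with hypothetical maps \( \varphi \) and \( \psi \)):

```python
# Example 2.3: every self-map of the finite (discrete) space K = {0, ..., d-1} is
# continuous; the system (K; phi) is invertible iff phi is a permutation of K.
K = list(range(5))
phi = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}  # a permutation: two disjoint cycles
psi = {0: 1, 1: 2, 2: 0, 3: 0, 4: 3}  # not injective: 2 and 3 both map to 0

def invertible(f):
    """A self-map of a finite set is bijective iff its values exhaust the set."""
    return sorted(f[k] for k in K) == K

assert invertible(phi) and not invertible(psi)
```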
A topological system on a finite state space can be interpreted as a finite directed graph with the edges describing the action of \( \varphi \) : The points of \( K \) form the vertices of the graph and there is a directed edge from vertex \( i \) to vertex \( j \) precisely if \( \varphi \left( i\right) = j \) . See Figure 2.1 below and also Exercise 1. Example 2.4 (Finite-Dimensional Contractions). Let \( \parallel \cdot \parallel \) be a norm on \( {\mathbb{R}}^{d} \) and let \( T : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d} \) be linear and contractive with respect to the chosen norm, i.e., \( \parallel {Tx}\parallel \leq \parallel x\parallel, x \in {\mathbb{R}}^{d} \) . Then the unit ball \( K \mathrel{\text{:=}} \left\{ {x \in {\mathbb{R}}^{d} : \parallel x\parallel \leq 1}\right\} \) is compact, and \( \varphi \mathrel{\text{:=}} {\left. T\right| }_{K} \) is a continuous self-map of \( K \) . As a more concrete example we choose the norm \( \parallel x{\parallel }_{\infty } = \max \left\{ {\left| {x}_{1}\right| ,\ldots ,\left| {x}_{d}\right| }\right\} \) on \( {\mathbb{R}}^{d} \) and the linear operator \( T \) given by a row-substochastic matrix \( {\left( {t}_{ij}\right) }_{i, j = 1,\ldots, d} \), i.e., \[ {t}_{ij} \geq 0\;\text{ and }\;\mathop{\sum }\limits_{{k = 1}}^{d}{t}_{ik} \leq 1\;\left( {1 \leq i, j \leq d}\right) . \] Then \[ \left| {\left\lbrack Tx\right\rbrack }_{i}\right| = \left| {\mathop{\sum }\limits_{{j = 1}}^{d}{t}_{ij}{x}_{j}}\right| \leq \mathop{\sum }\limits_{{j = 1}}^{d}{t}_{ij}\left| {x}_{j}\right| \leq \parallel x{\parallel }_{\infty }\mathop{\sum }\limits_{{j = 1}}^{d}{t}_{ij} \leq \parallel x{\parallel }_{\infty } \] for every \( i = 1,\ldots, d \) . Hence \( \parallel {Tx}{\parallel }_{\infty } \leq \parallel x{\parallel }_{\infty } \), i.e., \( T \) is contractive. Example 2.5 (Shift). Take \( k \in \mathbb{N} \) and consider the set \[ K \mathrel{\text{:=}} {\mathcal{W}}_{k}^{ + } \mathrel{\text{:=}} \{ 0,1,\ldots, k - 1{\} }^{{\mathbb{N}}_{0}} \] of infinite sequences within the set \( L \mathrel{\text{:=}} \{ 0,\ldots, k - 1\} \) . ![2ce18656-55b1-426e-a113-dde3fcb83791_28_0.jpg](images/2ce18656-55b1-426e-a113-dde3fcb83791_28_0.jpg) Fig. 2.1 A topological system on the finite state space \( K = \{ 0,1,\ldots ,9\} \) depicted as a graph. In this context, the set \( L \) is often called an alphabet, its elements being the letters. So the elements of \( K \) are the infinite words composed of these letters. Endowed with the discrete metric the alphabet is a compact metric space. Hence, by Tychonoff’s theorem \( K \) is compact when endowed with the product topology (see Section A. 5 and Exercise 2). On \( K \) we consider the (left) shift \( \tau \) defined by \[ \tau : K \rightarrow K,\;{\left( {x}_{n}\right) }_{n \in {\mathbb{N}}_{0}} \mapsto {\left( {x}_{n + 1}\right) }_{n \in {\mathbb{N}}_{0}}. \] Then \( \left( {K;\tau }\right) \) is a topological system, called the one-sided shift. If we consider two-sided sequences instead, that is, \( {\mathcal{W}}_{k} \mathrel{\text{:=}} \{ 0,1,\ldots, k - 1{\} }^{\mathbb{Z}} \) with \( \tau \) defined analogously, we obtain an invertible topological system \( \left( {{\mathcal{W}}_{k};\tau }\right) \), called the two-sided shift. Example 2.6 (Cantor System). The Cantor set is \[ C = \left\{ {x \in \left\lbrack {0,1}\right\rbrack : x = \mathop{\sum }\limits_{{j = 1}}^{\infty }\frac{{a}_{j}}{{3}^{j}},{a}_{j} \in \{ 0,2\} }\right\} , \] (2.1) cf. Appendix A.8. As a closed subset of the unit interval, the Cantor set \( C \) is compact. Consider on \( C \) the mapping \[ \varphi \left( x\right) = \left\{ \begin{array}{ll} {3x} & \text{ if }0 \leq x \leq \frac{1}{3} \\ {3x} - 2 & \text{ if }\frac{2}{3} \leq x \leq 1 \end{array}\right. \] The continuity of \( \varphi \) is clear, and a close inspection using (2.1) reveals that \( \varphi \) maps \( C \) to itself. Hence, \( \left( {C;\varphi }\right) \) is a topological system, called the Cantor system. 
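The "close inspection using (2.1)" can be made concrete: in ternary digits \( {a}_{1}{a}_{2}{a}_{3}\ldots \) with \( {a}_{j} \in \{ 0,2\} \), both branches of \( \varphi \) delete the leading digit, so on \( C \) the map \( \varphi \) behaves like a one-sided shift. A sketch in exact rational arithmetic (my own illustration, using finite digit strings):

```python
from fractions import Fraction

# Example 2.6: for x = sum a_j / 3^j with a_j in {0, 2}, phi(x) = 3x (leading
# digit 0) or 3x - 2 (leading digit 2); either way the first ternary digit is
# deleted, so phi maps C to C and acts as a one-sided shift on the digits.
def phi(x):
    return 3 * x if x <= Fraction(1, 3) else 3 * x - 2

def point(digits):
    """Point of the Cantor set with the given (finite) ternary digit string."""
    return sum(Fraction(a, 3**j) for j, a in enumerate(digits, start=1))

assert phi(point([0, 2, 2, 0])) == point([2, 2, 0])
assert phi(point([2, 0, 2])) == point([0, 2])
```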
Example 2.7 (Translation mod 1). Consider the interval \( K \mathrel{\text{:=}} \lbrack 0,1) \) and define \[ d\left( {x, y}\right) \mathrel{\text{:=}} \left| {{\mathrm{e}}^{{2\pi }\mathrm{i}x} - {\mathrm{e}}^{{2\pi }\mathrm{i}y}}\right| \;\left( {x, y \in \lbrack 0,1}\right) ). \] By Exercise \( 3, d \) is a metric on \( K \), continuous with respect to the standard one, and turning \( K \) into a compact metric space. For a real number \( x \in \mathbb{R} \) we write \[ x\left( {\;\operatorname{mod}\;1}\right) \mathrel{\text{:=}} x - \lfloor x\rfloor \] where \( \lfloor x\rfloor \mathrel{\text{:=}} \max \{ n \in \mathbb{Z} : n \leq x\} \) is the greatest integer less than or equal to \( x \) . Now, given \( \alpha \in \lbrack 0,1) \) we define the translation by \( \alpha {\;\operatorname{mod}\;1} \) as \[ \varphi \left( x\right) \mathrel{\text{:=}} x + \alpha \left( {\;\operatorname{mod}\;1}\right) = x + \alpha - \lfloor x + \alpha \rfloor \;\left( {x \in \lbrack 0,1}\right) ). \] By Exercise \( 3,\varphi \) is continuous with respect to the metric \( d \), hence it gives rise to a topological system on \( \lbrack 0,1) \) . We shall abbreviate this system by \( \left( {\lbrack 0,1}\right) ;\alpha ) \) . Example 2.8 (Rotation on the Torus). Let \( K = \mathbb{T} \mathrel{\text{:=}} \{ z \in \mathbb{C} : \left| z\right| = 1\} \) be the unit circle, also called the (one-dimensional) torus. Take \( a \in \mathbb{T} \) and define \( \varphi : \mathbb{T} \rightarrow \mathbb{T} \) by \[ \varphi \left( z\right) \mathrel{\text{:=}} a \cdot z\;\text{ for all }z \in \mathbb{T}. \] Since \( \varphi \) is obviously continuous, it gives rise to an invertible topological system defined on \( \mathbb{T} \) called the rotation by \( a \) . We shall denote this system by \( \left( {\mathbb{T};a}\right) \) . Example 2.9 (Rotation on Compact Groups). The previous example is an instance of the following general set-up. 
A topological group is a group \( \left( {G, \cdot }\right) \) endowed with a topology such that the maps \[ \left( {g, h}\right) \mapsto g \cdot h,\;G \times G \rightarrow G \] and \[ g \mapsto {g}^{-1} \] are continuous. A topological group is a compact group if \( G \) is compact. Given a compact group \( G \), the left rotation by \( a \in G \) is the mapping \[ {\varphi }_{a} : G \rightarrow G,\;{\varphi }_{a}\left( g\right) \mathrel{\text{:=}} a \cdot g. \] Then \( \left( {G;{\varphi }_{a}}\right) \) is an invertible topological system, which we henceforth shall denote by \( \left( {G;a}\right) \) . Similarly, the right rotation by \( a \in G \) is \[ {\rho }_{a} : G \rightarrow G,\;{\rho }_{a}\left( g\right) \mathrel{\text{:=}} g \cdot a \] and \( \left( {G;{\rho }_{a}}\right) \) is an invertible topological system. Obviously, the trivial system (Example 2.2) is an instance of a group rotation. If the group is Abelian, then left and right rotations are identical and one often speaks of translation instead of rotation. The torus \( \mathbb{T} \) is our most important example of a compact Abelian group. Example 2.10 (Dyadic Adding Machine). For \( n \in \mathbb{N} \) consider the cyclic group \( {\mathbb{Z}}_{{2}^{n}} \mathrel{\text{:=}} \mathbb{Z}/{2}^{n}\mathbb{Z} \), endowed with the discrete topology. The quotient maps \[ {\pi }_{ij} : {\mathbb{Z}}_{{2}^{j}} \rightarrow {\mathbb{Z}}_{{2}^{i}},\;{\pi }_{ij}\left( {x + {2}^{j}\mathbb{Z}}\right) \mathrel{\text{:=}} x + {2}^{i}\mathbb{Z}\;\left( {i \leq j}\right) \] are trivially continuous, and satisfy the relations \[ {\pi }_{ii} = \mathrm{{id}}\;\text{ and }\;{\pi }_{ij} \circ {\pi }_{jk} = {\pi }_{ik}\;\left( {i \leq j \leq k}\right) . \] The topological product space \( G \mathrel{\text{:=}} \mathop{\prod }\limits_{{n \in \mathbb{N}}}{\mathbb{Z}}_{{2}^{n}} \) is a compact Abelian group by Tychonoff’s Theorem A.5. 
Since each \( {\pi }_{ij} \) is a group homomorphism, the set \[ {\mathbb{A}}_{2} = \left\{ {x = {\left( {x}_{n}\right) }_{n \in \mathbb{N}} \in \mathop{\prod }\limits_{{n \in \mathbb{N}}}{\mathbb{Z}}_{{2}^{n}} : {x}_{i} = {\pi }_{ij}\left( {x}_{j}\right) \text{ for all }i \leq j}\right\} \] is a closed subgroup of \( G \), called the group of dyadic integers. Being closed in the compact group \( G \), \( {\mathbb{A}}_{2} \) is itself a compact group.
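The compatibility condition defining \( {\mathbb{A}}_{2} \) is easy to experiment with. A minimal sketch (our own illustration; we assume, as the example's name suggests, that the adding machine is rotation by the element \( \left( {1,1,1,\ldots }\right) \), and the helper names are ours):

```python
def dyadic(x, n):
    """First n coordinates (x mod 2, x mod 4, ..., x mod 2^n) of the
    dyadic integer determined by an ordinary integer x."""
    return tuple(x % 2**k for k in range(1, n + 1))

def compatible(seq):
    """Check the defining relations x_i = pi_ij(x_j) = x_j mod 2^(i+1)
    (0-based indices: seq[i] lives in Z_{2^(i+1)})."""
    return all(seq[i] == seq[j] % 2**(i + 1)
               for j in range(len(seq)) for i in range(j + 1))

x = dyadic(13, 6)
assert compatible(x)

# Coordinatewise addition of (1, 1, 1, ...) preserves compatibility:
y = tuple((c + 1) % 2**(k + 1) for k, c in enumerate(x))
assert compatible(y) and y == dyadic(14, 6)
```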
(GTM 233) Topics in Banach Space Theory, Definition 12.3.3
Definition 12.3.3. Let \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) be a bounded sequence in a Banach space \( X \), and let \( {\left( {y}_{n}\right) }_{n = 1}^{\infty } \) be a bounded sequence in a Banach space \( Y \) . We will say that \( {\left( {y}_{n}\right) }_{n = 1}^{\infty } \) is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) if given \( \epsilon > 0 \) and \( N \in \mathbb{N} \) there exist a sequence of blocks of \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) , \[ {u}_{j} = \mathop{\sum }\limits_{{i = {p}_{j - 1} + 1}}^{{p}_{j}}{a}_{i}{x}_{i},\;j = 1,2,\ldots, N, \] where \( \left( {p}_{j}\right) \) are integers with \( 0 = {p}_{0} < {p}_{1} < \cdots < {p}_{N} \), and \( \left( {a}_{n}\right) \) are scalars, and an operator \( T : {\left\lbrack {y}_{j}\right\rbrack }_{j = 1}^{N} \rightarrow {\left\lbrack {u}_{j}\right\rbrack }_{j = 1}^{N} \) with \( T{y}_{j} = {u}_{j} \) for \( 1 \leq j \leq N \) such that \( \parallel T\parallel \begin{Vmatrix}{T}^{-1}\end{Vmatrix} < 1 + \epsilon \) . Note here that we do not assume that \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) or \( {\left( {y}_{n}\right) }_{n = 1}^{\infty } \) is a basic sequence, although usually they are. Definition 12.3.4. Let \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) be a bounded sequence in a Banach space \( X \) . A sequence space \( \mathcal{X} \) is said to be block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) if the canonical basis vectors \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) in \( \mathcal{X} \) are block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) . Obviously if \( \mathcal{X} \) is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \), it is also true that \( \mathcal{X} \) is finitely representable in \( X \) . We are thus asking for a strong form of finite representability. Definition 12.3.5.
A sequence space \( \mathcal{X} \) is said to be block finitely representable in another sequence space \( \mathcal{Y} \) if it is block finitely representable in the canonical basis of \( \mathcal{Y} \) . Proposition 12.3.6. Suppose \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a nonconstant spreading sequence in a Banach space X. (i) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) fails to be weakly Cauchy, then \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a basic sequence equivalent to the canonical \( {\ell }_{1} \) -basis. (ii) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is weakly null, then it is an unconditional basic sequence with suppression constant \( {K}_{s} = 1 \) . (iii) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is weakly Cauchy, then \( {\left( {x}_{{2n} - 1} - {x}_{2n}\right) }_{n = 1}^{\infty } \) is weakly null and spreading. Proof. (i) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is not weakly Cauchy, then no subsequence can be weakly Cauchy (by the spreading property), and so by Rosenthal's theorem (Theorem 11.2.1), some subsequence is equivalent to the canonical \( {\ell }_{1} \) -basis; but then again, this means that the entire sequence is equivalent to the \( {\ell }_{1} \) -basis. (ii) It is enough to show that if \( {a}_{1},\ldots ,{a}_{n} \in \mathbb{R} \) and \( 1 \leq m \leq n \), then \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + \mathop{\sum }\limits_{{m < j \leq n}}{a}_{j}{x}_{j}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{x}_{j}}\end{Vmatrix}. \] Suppose \( \epsilon > 0 \) .
By Mazur’s theorem we can find \( {c}_{j} \geq 0 \) for \( 1 \leq j \leq l \), say, such that \( \mathop{\sum }\limits_{{j = 1}}^{l}{c}_{j} = 1 \) and \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{l}{c}_{j}{x}_{j}}\end{Vmatrix} < \epsilon \] Now consider \[ x = \mathop{\sum }\limits_{{j = 1}}^{{m - 1}}{a}_{j}{x}_{j} + {a}_{m}\mathop{\sum }\limits_{{j = m}}^{{m + l - 1}}{c}_{j - m + 1}{x}_{j} + \mathop{\sum }\limits_{{j = m + l}}^{{n + l - 1}}{a}_{j - l + 1}{x}_{j}. \] Then \[ x = \mathop{\sum }\limits_{{i = 1}}^{l}{c}_{i}\left( {\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + {a}_{m}{x}_{i + m - 1} + \mathop{\sum }\limits_{{j = m + 1}}^{n}{a}_{j}{x}_{l + j - 1}}\right) , \] and so, by the spreading property, \[ \parallel x\parallel \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{x}_{j}}\end{Vmatrix} \] But \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + \mathop{\sum }\limits_{{m < j \leq n}}{a}_{j}{x}_{j}}\end{Vmatrix} \leq \parallel x\parallel + \left| {a}_{m}\right| \epsilon \] and so \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + \mathop{\sum }\limits_{{m < j \leq n}}{a}_{j}{x}_{j}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{x}_{j}}\end{Vmatrix} + \left| {a}_{m}\right| \epsilon . \] Since \( \epsilon > 0 \) is arbitrary, we are done. (iii) This is immediate, since \( {\left( {x}_{{2n} - 1} - {x}_{2n}\right) }_{n = 1}^{\infty } \) is weakly null and spreading (obviously, it cannot be constant). Theorem 12.3.7. Suppose \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a normalized sequence in a Banach space \( X \) such that \( {\left\{ {x}_{n}\right\} }_{n = 1}^{\infty } \) is not relatively compact. Then there is a spreading sequence space that is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) .
More precisely, there exist a subsequence \( {\left( {x}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) of \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) and a spreading sequence space \( \mathcal{X} \) such that if we let \( M = {\left\{ {n}_{k}\right\} }_{k = 1}^{\infty } \), then \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( M\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{x}_{{p}_{j}}}\end{Vmatrix} = {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{e}_{j}\end{Vmatrix}}_{\mathcal{X}}. \] Proof. This is a neat application of Ramsey's theorem due to Brunel and Sucheston [36]. We first observe that by taking a subsequence, we can assume that \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) has no convergent subsequence. Let us fix some finite sequence of real numbers \( {\left( {a}_{j}\right) }_{j = 1}^{r} \) . According to Theorem 11.1.1, given any infinite subset \( M \) of \( \mathbb{N} \), we can find a further infinite subset \( {M}_{1} \) such that \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{1}\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{x}_{{p}_{j}}}\end{Vmatrix}\text{ exists. } \] Let \( {\left( {a}_{1}^{\left( k\right) },\ldots ,{a}_{{r}_{k}}^{\left( k\right) }\right) }_{k = 1}^{\infty } \) be an enumeration of all finitely nonzero sequences of rationals, and let us construct a decreasing sequence \( {\left( {M}_{k}\right) }_{k = 1}^{\infty } \) of infinite subsets of \( \mathbb{N} \) such that \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{k}\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}^{\left( k\right) }{x}_{{p}_{j}}}\end{Vmatrix}\text{ exists. 
} \] A diagonal procedure allows us to pick an infinite subset \( {M}_{\infty } \) that is contained in each \( {M}_{k} \) up to a finite set. It is not difficult to check that \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{\infty }\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{x}_{{p}_{j}}}\end{Vmatrix}\text{ exists } \] for every finite sequence of reals \( {\left( {a}_{j}\right) }_{j = 1}^{r} \) . Given \( \xi = {\left( \xi \left( j\right) \right) }_{j = 1}^{\infty } \in {c}_{00} \), put \[ \parallel \xi {\parallel }_{\mathcal{X}} = \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{\infty }\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}\xi \left( j\right) {x}_{{p}_{j}}}\end{Vmatrix}. \] Then \( \parallel \cdot {\parallel }_{\mathcal{X}} \) satisfies the spreading property, but we need to check that it is a norm on \( {c}_{00} \) (it obviously is a seminorm). If \( \parallel \xi {\parallel }_{\mathcal{X}} = 0 \) and \( \xi = \mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{e}_{j} \) with \( {a}_{r} \neq 0 \) , then we also have \( {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{{r - 1}}{a}_{j}{e}_{j} + {a}_{r}{e}_{r + 1}\end{Vmatrix}}_{\mathcal{X}} = 0 \) . Hence \[ {\begin{Vmatrix}{e}_{1} - {e}_{2}\end{Vmatrix}}_{\mathcal{X}} = {\begin{Vmatrix}{e}_{r} - {e}_{r + 1}\end{Vmatrix}}_{\mathcal{X}} = 0. \] Returning to the definition, we see that this implies \[ \mathop{\lim }\limits_{{\left( {{p}_{1},{p}_{2}}\right) \in {\mathcal{F}}_{2}\left( {M}_{\infty }\right) }}\begin{Vmatrix}{{x}_{{p}_{1}} - {x}_{{p}_{2}}}\end{Vmatrix} = 0 \] which can mean only that the subsequence \( {\left( {x}_{j}\right) }_{j \in {M}_{\infty }} \) is convergent, contrary to our construction. Definition 12.3.8. 
The spreading sequence space \( \mathcal{X} \) introduced in Theorem 12.3.7 is called a spreading model for the sequence \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) . We now turn to Krivine's theorem. This result was obtained by Krivine in 1976, and although the main ideas of the proof we include here are the same as in Krivine's original proof, we have used ideas from two subsequent expositions of Krivine's theorem by Rosenthal [274] and Lemberg [186]. Krivine's theorem should be contrasted with Tsirelson space, which we constructed in Section 11.3. The existence of Tsirelson space implies that there is a Banach space with a basis such that n
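The limiting norm that defines a spreading model can be watched numerically. A small sketch (our own illustration, not from the text): for \( {x}_{n} = {e}_{n} + {e}_{n + 1} \) in \( {\ell }_{1} \), adjacent indices produce cancellation, but once the indices \( {p}_{1} < \cdots < {p}_{r} \) are spread out the norm stabilizes at \( 2\mathop{\sum }\limits_{j}\left| {a}_{j}\right| \), so the spreading model here is twice the canonical \( {\ell }_{1} \) -basis:

```python
def x(n, dim):
    """x_n = e_n + e_{n+1} in R^dim (1-indexed coordinates)."""
    v = [0.0] * dim
    v[n - 1] += 1.0
    v[n] += 1.0
    return v

def norm1(v):
    return sum(abs(c) for c in v)

def combo(a, ps, dim):
    """The vector sum_j a_j * x_{p_j}."""
    v = [0.0] * dim
    for aj, p in zip(a, ps):
        for i, c in enumerate(x(p, dim)):
            v[i] += aj * c
    return v

a = [1.0, -2.0, 3.0]
# Adjacent indices: cancellation keeps the norm small ...
close = norm1(combo(a, [1, 2, 3], 10))
# ... but spread-out indices attain the limit 2 * sum |a_j|:
spread = norm1(combo(a, [1, 4, 7], 10))
assert spread == 2 * sum(abs(t) for t in a) == 12.0
assert close < spread
```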
The Cantor function, Definition 3.1
Definition 3.1. The set of axioms of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) is the set \( \mathcal{A} = \) \( {\mathcal{A}}_{1} \cup \cdots \cup {\mathcal{A}}_{5} \), where \[ {\mathcal{A}}_{1} = \{ p \Rightarrow \left( {q \Rightarrow p}\right) \mid p, q \in P\left( {V,\mathcal{R}}\right) \} \] \[ {\mathcal{A}}_{2} = \{ \left( {p \Rightarrow \left( {q \Rightarrow r}\right) }\right) \Rightarrow \left( {\left( {p \Rightarrow q}\right) \Rightarrow \left( {p \Rightarrow r}\right) }\right) \mid p, q, r \in P\left( {V,\mathcal{R}}\right) \} , \] \[ {\mathcal{A}}_{3} = \{ \sim \sim p \Rightarrow p \mid p \in P\left( {V,\mathcal{R}}\right) \} \] \[ {\mathcal{A}}_{4} = \{ \left( {\forall x}\right) \left( {p \Rightarrow q}\right) \Rightarrow \left( {p \Rightarrow \left( {\left( {\forall x}\right) q}\right) }\right) \mid p, q \in P\left( {V,\mathcal{R}}\right), x \notin \operatorname{var}\left( p\right) \} , \] \[ {\mathcal{A}}_{5} = \{ \left( {\forall x}\right) p\left( x\right) \Rightarrow p\left( y\right) \mid p\left( x\right) \in P\left( {V,\mathcal{R}}\right), y \in V\} . \] We remind the reader that these axioms are stated in terms of elements of the reduced predicate algebra. In \( {\mathcal{A}}_{5} \), for example, the substitution of \( y \) for \( x \) in \( p\left( x\right) \) implies that we have chosen a representative of \( \left\lbrack {\left( {\forall x}\right) p\left( x\right) }\right\rbrack \) in which \( \left( {\forall y}\right) \) does not appear. In addition to Modus Ponens, we shall use one further rule of inference, which will enable us to formalise the following commonly occurring argument: we have proved \( p\left( x\right) \), but \( x \) was any element, and therefore \( \left( {\forall x}\right) p\left( x\right) \) . The rule of inference called Generalisation allows us to deduce \( \left( {\forall x}\right) p\left( x\right) \) from \( p\left( x\right) \) provided \( x \) is general. 
The restriction on the use of Generalisation needs to be stated carefully. Definition 3.2. Let \( A \subseteq P, p \in P \) . A proof of length \( n \) of \( p \) from \( A \) is a sequence \( {p}_{1},\ldots ,{p}_{n} \) of \( n \) elements of \( P \) such that \( {p}_{n} = p \), the sequence \( {p}_{1},\ldots ,{p}_{n - 1} \) is a proof of length \( n - 1 \) of \( {p}_{n - 1} \) from \( A \), and (a) \( {p}_{n} \in \mathcal{A} \cup A \), or (b) \( {p}_{i} = {p}_{j} \Rightarrow {p}_{n} \) for some \( i, j < n \), or (c) \( {p}_{n} = \left( {\forall x}\right) w\left( x\right) \) and some subsequence \( {p}_{{k}_{1}},\ldots ,{p}_{{k}_{r}} \) of \( {p}_{1},\ldots ,{p}_{n - 1} \) is a proof (of length \( < n \) ) of \( w\left( x\right) \) from a subset \( {A}_{0} \) of \( A \) such that \( x \notin \operatorname{var}\left( {A}_{0}\right) \) . This is an inductive definition of a proof in \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) . As for \( \operatorname{Prop}\left( X\right) \) , we require a proof to be a proof of finite length. The restriction \( x \notin \operatorname{var}\left( {A}_{0}\right) \) in (c) means that no special assumptions about \( x \) are used in proving \( w\left( x\right) \) , and is the formal analogue of the restriction on the use of Generalisation in our informal logic. As before, we write \( A \vdash p \) if there exists a proof of \( p \) from \( A \) . We denote by \( \operatorname{Ded}\left( A\right) \) the set of all \( p \) such that \( A \vdash p \) . We write \( \vdash p \) for \( \varnothing \vdash p \), and any \( p \) for which \( \vdash p \) is called a theorem of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) . Example 3.3. We show \( \{ \sim \left( {\exists x}\right) \left( { \sim p}\right) \} \vdash \left( {\forall x}\right) p \) for any element \( p \in P \) . (Recall that \( \left( {\exists x}\right) \) is an abbreviation for \( \sim \left( {\forall x}\right) \sim \) .) The following is a proof.
\[ {p}_{1} = \sim \sim \left( {\forall x}\right) \left( { \sim \sim p}\right) \Rightarrow \left( {\forall x}\right) \left( { \sim \sim p}\right) , \] \( \left( {\mathcal{A}}_{3}\right) \) \[ {p}_{2} = \sim \left( {\exists x}\right) \left( { \sim p}\right) = \sim \sim \left( {\forall x}\right) \left( { \sim \sim p}\right) , \] (assumption) \[ {p}_{3} = \left( {\forall x}\right) \left( { \sim \sim p}\right) ,\;\left( {{p}_{1} = {p}_{2} \Rightarrow {p}_{3}}\right) \] \[ {p}_{4} = \left( {\forall x}\right) \left( { \sim \sim p\left( x\right) }\right) \Rightarrow \sim \sim p\left( y\right) , \] \( \left( {\mathcal{A}}_{5}\right) \) Note that by \( \left( {\mathcal{A}}_{5}\right) \), the \( y \) in \( {p}_{4} \) may be chosen to be any variable. To permit a subsequent use of Generalisation, \( y \) must not be in \( \operatorname{var}\left( { \sim \left( {\exists x}\right) \left( { \sim p\left( x\right) }\right) }\right) \) . A possible choice for \( y \) is the variable \( x \) itself. \[ {p}_{5} = \sim \sim p\left( y\right) \] \[ \left( {{p}_{4} = {p}_{3} \Rightarrow {p}_{5}}\right) \] \[ {p}_{6} = \sim \sim p\left( y\right) \Rightarrow p\left( y\right) \] \( \left( {\mathcal{A}}_{3}\right) \) \[ {p}_{7} = p\left( y\right) \] \( \left( {{p}_{6} = {p}_{5} \Rightarrow {p}_{7}}\right) \) \[ {p}_{8} = \left( {\forall y}\right) p\left( y\right) \] (Generalisation, \( y \notin \operatorname{var}\left( { \sim \left( {\exists x}\right) \left( { \sim p\left( x\right) }\right) }\right) \) ) ## Exercises 3.4. Show that every axiom of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) is valid. 3.5. Construct a proof in \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) of \( \left( {\forall x}\right) \left( {\forall y}\right) p\left( {x, y}\right) \) from \( \{ \left( {\forall y}\right) \left( {\forall x}\right) p\left( {x, y}\right) \} \) . ## §4 Properties of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) We have now constructed the logic \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) .
Its algebra of propositions is the reduced first order algebra \( P\left( {V,\mathcal{R}}\right) \), its valuations are the valuations associated with the interpretations of \( P\left( {V,\mathcal{R}}\right) \) defined in \( §2 \), and its proofs are as defined in \( §3 \) . We can immediately inquire if there is a substitution theorem for this logic, corresponding to Theorem 4.11 of the Propositional Calculus. There, substitution was defined in terms of a homomorphism \( \varphi : {P}_{1} \rightarrow {P}_{2} \) of one algebra of propositions into another. If \( {P}_{1},{P}_{2} \) are first order algebras, then as the concept of a homomorphism from \( {P}_{1} \) to \( {P}_{2} \) requires these algebras to have the same set of operations, it follows that they must have the same set of individual variables. Even in this case, a homomorphism would be too restrictive for our purposes, for we would naturally want to be able to interchange two variables \( x, y \), so mapping elements \( p\left( x\right) \) of the algebra to \( \varphi \left( {p\left( x\right) }\right) = p\left( y\right) \), but unfortunately such a map is not a homomorphism. For if \( p\left( x\right) \in P \) is such that \( x \in \operatorname{var}\left( {p\left( x\right) }\right), y \notin \operatorname{var}\left( {p\left( x\right) }\right) \), then \[ \varphi \left( {\left( {\forall x}\right) p\left( x\right) }\right) = \left( {\forall y}\right) p\left( y\right) = \left( {\forall x}\right) p\left( x\right) , \] \[ \left( {\forall x}\right) \varphi \left( {p\left( x\right) }\right) = \left( {\forall x}\right) p\left( y\right) . \] Since \( y \in \operatorname{var}\left( {\left( {\forall x}\right) p\left( y\right) }\right) \) but \( y \notin \operatorname{var}\left( {\left( {\forall y}\right) p\left( y\right) }\right) \), these elements are distinct and \( \varphi \) is not a homomorphism. Definition 4.1. 
Let \( {P}_{1} = P\left( {{V}_{1},{\mathcal{R}}^{\left( 1\right) }}\right) \) and \( {P}_{2} = P\left( {{V}_{2},{\mathcal{R}}^{\left( 2\right) }}\right) \) . A semi-homomorphism \( \left( {\alpha ,\beta }\right) : \left( {{P}_{1},{V}_{1}}\right) \rightarrow \left( {{P}_{2},{V}_{2}}\right) \) is a pair of maps \( \alpha : {P}_{1} \rightarrow {P}_{2} \) , \( \beta : {V}_{1} \rightarrow {V}_{2} \) such that (a) \( \beta \left( {V}_{1}\right) \) is infinite, (b) \( \alpha \) is an \( \{ \mathrm{F}, \Rightarrow \} \) -homomorphism, and (c) \( \alpha \left( {\left( {\forall x}\right) p}\right) = \left( {\forall {x}^{\prime }}\right) \alpha \left( p\right) \), where \( {x}^{\prime } = \beta \left( x\right) \) . Lemma 4.2. Let \( \left( {\alpha ,\beta }\right) : \left( {{P}_{1},{V}_{1}}\right) \rightarrow \left( {{P}_{2},{V}_{2}}\right) \) be a semi-homomorphism. Let \( p \in {P}_{1} \) and suppose \( x \in {V}_{1} - \operatorname{var}\left( p\right) \) . Then \( \beta \left( x\right) \notin \operatorname{var}\left( {\alpha \left( p\right) }\right) \) . Proof: We observe first that if \( x \neq y \), then \( \left( {\forall x}\right) p = \left( {\forall y}\right) p \) if and only if neither \( x \) nor \( y \) is in \( \operatorname{var}\left( \mathrm{p}\right) \) . Since \( \beta \left( {V}_{1}\right) \) is infinite, there is an element \( {y}^{\prime } \in \beta \left( {V}_{1}\right) \) such that \( {y}^{\prime } \neq \beta \left( x\right) \) and \( {y}^{\prime } \notin \beta \left( {\operatorname{var}\left( p\right) }\right) \) . Choosing \( y \in {V}_{1} \) so that \( \beta \left( y\right) = {y}^{\prime } \), it follows that \( \left( {\forall x}\right) p = \) \( \left( {\forall y}\right) p \) . 
If \( {x}^{\prime } = \beta \left( x\right) \), then we have \[ \left( {\forall {x}^{\prime }}\right) \alpha \left( p\right) = \alpha \left( {\left( {\forall x}\right) p}\right) = \alpha \left( {\left( {\forall y}\right) p}\right) = \left( {\forall {y}^{\prime }}\right) \alpha \left( p\right) , \] and it follows again that \( {x}^{\prime } \notin \operatorname{var}\left( {\alpha \left( p\right) }\right) \) . Theorem 4.3. (The Substitution Theorem). Let \( \left( {\alpha ,\beta }\right) : \left( {{P}_{1},{V}_{1}}\right) \rightarrow \left( {{P}_{2},{V}_{2}}\right) \) be a semi-homomorphism. Let \( A \subseteq {P}_{1}, p \in {P}_{1} \) . (a) If \( A \vdash p \), then \( \alpha \left( A\right) \vdash \alpha \left( p\right) \) .
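Conditions (a) and (b) of Definition 3.2 are purely mechanical, so a candidate proof sequence can be checked by program. A minimal sketch restricted to Modus Ponens (condition (c), Generalisation, is omitted; the tuple encoding and function names are ours):

```python
def imp(p, q):
    """Encode the proposition p => q as a tuple."""
    return ('=>', p, q)

def is_proof(lines, axioms_and_assumptions):
    """Check conditions (a) and (b) of Definition 3.2: every line is
    an axiom/assumption or follows from two earlier lines by MP."""
    seen = []
    for p in lines:
        ok = p in axioms_and_assumptions or any(
            pi == imp(pj, p) for pi in seen for pj in seen)
        if not ok:
            return False
        seen.append(p)
    return True

# A three-line proof of r from {p, p => r}:
p, r = 'p', 'r'
assert is_proof([p, imp(p, r), r], {p, imp(p, r)})
assert not is_proof([r], {p, imp(p, r)})
```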
(GTM 272) Operator Theoretic Aspects of Ergodic Theory, Definition 7.9
Definition 7.9. Let \( E \) be a Banach lattice. A linear subspace \( I \subseteq E \) is called a (vector) lattice ideal if \[ f, g \in E,\left| f\right| \leq \left| g\right| ,\;g \in I\; \Rightarrow \;f \in I. \] If \( I \) is a lattice ideal, then \( f \in I \) if and only if \( \left| f\right| \in I \) if and only if \( \operatorname{Re}f,\operatorname{Im}f \in I \) . It follows from (7.3e) that the real part \( {I}_{\mathbb{R}} \mathrel{\text{:=}} I \cap {E}_{\mathbb{R}} \) of a lattice ideal \( I \) is a vector sublattice of \( {E}_{\mathbb{R}} \) . Immediate examples of closed lattice ideals in \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) are obtained from measurable sets \( A \in {\sum }_{\mathrm{X}} \) by a construction similar to the topological case (cf. page 52): \[ {I}_{A} \mathrel{\text{:=}} \left\{ {f : \left| f\right| \land {\mathbf{1}}_{A} = 0}\right\} = \left\{ {f : \left| f\right| \land \mathbf{1} \leq {\mathbf{1}}_{{A}^{c}}}\right\} = \{ f : A \subseteq \left\lbrack {f = 0}\right\rbrack \} . \] Then \( {I}_{A} \) is indeed a closed lattice ideal, and for \( A = \varnothing \) and \( A = \mathrm{X} \) we recover \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) and \( \{ 0\} \), the two trivial lattice ideals. The following characterization shows that in fact all closed lattice ideals in \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) arise in this way. Theorem 7.10. Let \( \mathrm{X} \) be a finite measure space and \( 1 \leq p < \infty \) . Then each closed lattice ideal \( I \subseteq {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) has the form \( {I}_{A} \) for some \( A \in {\sum }_{\mathrm{X}} \) . Proof. Let \( I \subseteq {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) be a closed lattice ideal. The set \[ J \mathrel{\text{:=}} \{ f \in I : 0 \leq f \leq \mathbf{1}\} \] is nonempty, closed, \( \vee \) -stable and has upper bound \( \mathbf{1} \in {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) (since \( {\mu }_{\mathrm{X}} \) is finite).
Therefore, by Corollary 7.8 it even has a least upper bound \( g \in J \) . It follows that \( 0 \leq g \leq \mathbf{1} \) and thus \( h \mathrel{\text{:=}} g \land \left( {\mathbf{1} - g}\right) \geq 0 \) . Since \( 0 \leq h \leq g \) and \( g \in I \), the ideal property yields \( h \in I \) . Since \( I \) is a subspace, \( g + h \in I \) . But \( h \leq \mathbf{1} - g \), so \( g + h \leq \mathbf{1} \), and this yields \( h + g \in J \) . Thus \( h + g \leq g \), i.e., \( h \leq 0 \) . All in all we obtain \( g \land \left( {\mathbf{1} - g}\right) = h = 0 \), hence \( g \) must be a characteristic function \( {\mathbf{1}}_{{A}^{\mathrm{c}}} \) for some \( A \in \sum \) . We claim that \( I = {I}_{A} \) . To prove the inclusion " \( \subseteq \) " take \( f \in I \) . Then \( \left| f\right| \land \mathbf{1} \in J \) and therefore \( \left| f\right| \land \mathbf{1} \leq g = {\mathbf{1}}_{{A}^{\mathrm{c}}} \) . This means that \( f \in {I}_{A} \) . To prove the converse inclusion take \( f \in {I}_{A} \) . It suffices to show that \( \left| f\right| \in I \), hence we may suppose that \( f \geq 0 \) . Then \( {f}_{n} \mathrel{\text{:=}} f \land \left( {n\mathbf{1}}\right) = n\left( {{n}^{-1}f \land \mathbf{1}}\right) \leq n{\mathbf{1}}_{{A}^{\mathrm{c}}} = {ng} \) . Since \( g \in J \subseteq I \), we have \( {ng} \in I \), whence \( {f}_{n} \in I \) . Now \( {\left( {f}_{n}\right) }_{n \in \mathbb{N}} \) is increasing and converges pointwise, hence in norm to its supremum \( f \) . Since \( I \) is closed, \( f \in I \) and this concludes the proof. Remarks 7.11. 1) In most of the results of this section we required \( p < \infty \) for good reasons. The space \( {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) is a Banach lattice, but its norm is not order continuous in general (Exercise 7). If the measure is finite, \( {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) is still order complete (Exercise 11), but this is not true for general measure spaces.
Moreover, if \( {\mathrm{L}}^{\infty } \) is not finite dimensional, then there are always closed lattice ideals not of the form \( {I}_{A} \) . 2) For a finite measure space \( \mathrm{X} \) we have \[ {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \subseteq {\mathrm{L}}^{p}\left( \mathrm{X}\right) \subseteq {\mathrm{L}}^{1}\left( \mathrm{X}\right) \;\left( {1 \leq p \leq \infty }\right) . \] Particularly important in this scale will be the Hilbert lattice \( {\mathrm{L}}^{2}\left( \mathrm{X}\right) \) . 3) Let \( K \) be a compact space. Then the closed lattice ideals in the Banach lattice \( \mathrm{C}\left( K\right) \) coincide with the closed algebra ideals, i.e., with the sets \[ {I}_{A} = \{ f \in \mathrm{C}\left( K\right) : f \equiv 0\text{ on }A\} \;\left( {A \subseteq K,\text{ closed }}\right) , \] see Exercise 8. ## 7.3 The Koopman Operator and Ergodicity We now study measure-preserving systems \( \left( {\mathrm{X};\varphi }\right) \) and the Koopman operator \( T \mathrel{\text{:=}} \) \( {T}_{\varphi } \) on \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) . We know that, for every \( 1 \leq p \leq \infty \) , 1) \( T \) is an isometry on \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) ; 2) \( T \) is a Banach lattice homomorphism on \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) (see page 117); 3) \( T\left( {fg}\right) = {Tf} \cdot {Tg} \) for all \( f \in {\mathrm{L}}^{p}\left( \mathrm{X}\right), g \in {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) ; 4) \( T \) is a \( {C}^{ * } \) -algebra homomorphism on \( {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) . As in the topological case, properties of the dynamical system are reflected in properties of the Koopman operator. Here is a first example (cf. Lemma 4.14 for the topological analogue). Proposition 7.12. 
A measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \) is invertible if and only if its Koopman operator \( {T}_{\varphi } \) is invertible on \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) for one/each \( 1 \leq p \leq \infty \) . Proof. Fix \( 1 \leq p \leq \infty \) and abbreviate \( T \mathrel{\text{:=}} {T}_{\varphi } \) . Let \( \left( {\mathrm{X};\varphi }\right) \) be invertible, i.e., \( {\varphi }^{ * } \) is surjective (Definition 6.2). Since \( T{\mathbf{1}}_{A} = {\mathbf{1}}_{{\varphi }^{ * }A} \) for any \( A \in \sum \left( \mathrm{X}\right) \), it follows that \( \operatorname{ran}\left( T\right) \) contains all characteristic functions of measurable sets. Since \( T \) is an isometry, its range is closed, and since simple functions are dense, \( T \) is surjective. Conversely, suppose that \( T \) is surjective, and let \( B \in \sum \left( \mathrm{X}\right) \) . Then there is \( f \in \) \( {\mathrm{L}}^{p}\left( \mathrm{X}\right) \) with \( {Tf} = {\mathbf{1}}_{B} \) . Then \( {Tf} = {\mathbf{1}}_{B} = {\mathbf{1}}_{B}^{2} = \left( {Tf}\right) \left( {Tf}\right) = T\left( {f}^{2}\right) \) . Since \( T \) is injective, it follows that \( f = {f}^{2} \) . But then \( f = {\mathbf{1}}_{A} \) for \( A = \left\lbrack {f \neq 0}\right\rbrack \) and hence \( {\varphi }^{ * }A = B \) . In the following we shall see that ergodicity of \( \left( {\mathrm{X};\varphi }\right) \) can be characterized by a lattice theoretic property of the associated Koopman operator, namely irreducibility. Definition 7.13. A positive operator \( T \in \mathcal{L}\left( E\right) \) on a Banach lattice \( E \) is called irreducible if the only \( T \) -invariant closed lattice ideals of \( E \) are the trivial ones, i.e., \[ I \subseteq E\text{ closed lattice ideal,}T\left( I\right) \subseteq I \Rightarrow I = \{ 0\} \text{ or }I = E\text{. } \] If \( T \) is not irreducible, it is called reducible. 
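The proof of Proposition 7.12 rests on the multiplicativity of the Koopman operator; on a finite space with uniform measure this is easy to check directly, since \( T \) is just a 0-1 matrix. A small sketch (our own illustration; \( \varphi \) is the rotation \( i \mapsto i + 1 \) on \( {\mathbb{Z}}_{4} \), which preserves the uniform measure):

```python
def koopman(phi, n):
    """Matrix of the Koopman operator (T f)(i) = f(phi(i)) on R^n."""
    return [[1.0 if phi(i) == j else 0.0 for j in range(n)]
            for i in range(n)]

def apply(T, f):
    return [sum(T[i][j] * f[j] for j in range(len(f)))
            for i in range(len(T))]

n = 4
T = koopman(lambda i: (i + 1) % n, n)   # rotation on Z_4, invertible

# T 1_A = 1_{phi^{-1}(A)}:
ind_A = [1.0, 0.0, 0.0, 1.0]            # indicator of A = {0, 3}
assert apply(T, ind_A) == [0.0, 0.0, 1.0, 1.0]

# Multiplicativity T(f g) = (T f)(T g):
f, g = [1.0, 2.0, 3.0, 4.0], [2.0, 0.0, 1.0, 5.0]
fg = [a * b for a, b in zip(f, g)]
assert apply(T, fg) == [a * b for a, b in zip(apply(T, f), apply(T, g))]
```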
Let us first discuss this notion in the finite-dimensional setting. Example 7.14. Consider the space \( {\mathrm{L}}^{1}\left( {\{ 0,\ldots, n - 1\} }\right) = {\mathbb{R}}^{n} \) and a positive operator \( T \) on it, identified with its \( n \times n \) -matrix. Then the irreducibility of \( T \) according to Definition 7.13 coincides with that notion introduced in Section 2.4 on page 27. Namely, if \( T \) is reducible, then there exists a nontrivial \( T \) -invariant ideal \( {I}_{A} \) in \( {\mathbb{R}}^{n} \) for some \( \varnothing \neq A \subsetneq \{ 0,1,\ldots, n - 1\} \) . After a permutation of the points we may suppose that \( A = \{ k,\ldots, n - 1\} \) for \( 0 < k < n \), i.e., \( {t}_{ij} = 0 \) whenever \( i \geq k \) and \( j < k \), and this means that the representing matrix (with respect to the canonical basis) has the form: \[ \left( \begin{array}{ll} * & * \\ 0 & * \end{array}\right) , \] with a zero \( \left( {n - k}\right) \times k \) block in the lower left corner. Let us return to the situation of a measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \) . We consider the associated Koopman operator \( T \mathrel{\text{:=}} {T}_{\varphi } \) on the space \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) \), but note that \( T \) leaves each space \( {\mathrm{L}}^{p} \) invariant. The following result shows that the ergodicity of a measure-preserving system is characterized by the irreducibility of the Koopman operator \( T \) or the one-dimensionality of its fixed space \[ \operatorname{fix}\left( T\right) \mathrel{\text{:=}} \left\{ {f \in {\mathrm{L}}^{1}\left( \mathrm{X}\right) : {Tf} = f}\right\} = \ker \left( {\mathrm{I} - T}\right) . \] (Note that always \( \mathbf{1} \in \mathrm{{fix}}\left( T\right) \) whence \( \dim \mathrm{{fix}}\left( T\right) \geq 1 \) .) This is reminiscent of, but also in contrast to, the topological case (cf. Corollary 4.19): there, minimality is characterized by the irreducibility of the Koopman operator, but not by the one-dimensionality of its fixed space.
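In the finite-dimensional setting of Example 7.14, Definition 7.13 can be tested by brute force over all coordinate ideals \( {I}_{A} \). A small sketch (our own illustration; the function names are ours):

```python
from itertools import combinations

def invariant_ideal(T, A):
    """I_A = {f : f vanishes on A} is T-invariant iff T[i][j] == 0
    whenever i is in A and j is outside A, since then
    (Tf)(i) = sum_j T[i][j] f(j) vanishes on A for every such f."""
    n = len(T)
    return all(T[i][j] == 0 for i in A for j in range(n) if j not in A)

def irreducible(T):
    """Check that no nontrivial coordinate ideal is T-invariant."""
    n = len(T)
    return not any(invariant_ideal(T, set(A))
                   for k in range(1, n)
                   for A in combinations(range(n), k))

cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]       # cyclic permutation
triangular = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]  # I_{2} is invariant
assert irreducible(cycle)
assert not irreducible(triangular)
```

The reducible example is exactly of the block-triangular shape discussed above, with \( A = \{ 2\} \).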
Proposition 7.15. Let \( \left( {\mathrm{X};\varphi }\right) \) be a measure-preserving system with Koopman operator \( T \mathrel{\text{:=}} {T}_{\varphi } \) on \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) . Then for \( 1 \leq p \leq \infty \) the space
1164_(GTM70)Singular Homology Theory
Definition 3.2
Definition 3.2. The Euler characteristic of a graph is the number of vertices minus the number of edges. We can now state the main theorem about the homology groups of a graph: Theorem 3.2. Let \( \left( {X,{X}^{0}}\right) \) be a finite, regular graph. Then \( {H}_{q}\left( X\right) = 0 \) for \( q > 1 \) , \( {H}_{1}\left( X\right) \) is a free abelian group, and \[ \operatorname{rank}\left( {{H}_{0}\left( X\right) }\right) - \operatorname{rank}\left( {{H}_{1}\left( X\right) }\right) = \text{Euler characteristic.} \] We leave it to the reader to prove this theorem, using the homology sequence of the pair \( \left( {X,{X}^{0}}\right) \) and the two results from linear algebra stated above. This theorem gives a simple method for determining the structure for \( {H}_{1}\left( X\right) \) . For we can determine the rank of \( {H}_{0}\left( X\right) \) by counting the number of components, and we can determine the Euler characteristic by counting the number of vertices and edges. For certain purposes it is necessary to go more deeply into the structure of \( {H}_{1}\left( X\right) \), and actually give some sort of concrete representation of the elements of this group. This we will now proceed to do. The exact sequence (3.2) shows that \( {H}_{1}\left( X\right) \) and \( {H}_{0}\left( X\right) \) are the kernel and cokernel respectively of the homomorphism \( {\partial }_{ * } : {H}_{1}\left( {X,{X}^{0}}\right) \rightarrow {H}_{0}\left( {X}^{0}\right) \) . Our procedure will be to choose convenient bases for the free abelian groups \( {H}_{1}\left( {X,{X}^{0}}\right) \) and \( {H}_{0}\left( {X}^{0}\right) \), and then express \( {\partial }_{ * } \) in terms of these bases. The edges of the graph \( X \) will be denoted by \( {e}_{1},\ldots ,{e}_{k} \) and the vertices by \( {v}_{1},\ldots ,{v}_{m} \) . It is easy to choose a natural basis for the group \( {H}_{0}\left( {X}^{0}\right) \) . 
Since \( {X}^{0} \) is a discrete space, \( {H}_{0}\left( {X}^{0}\right) \) is naturally isomorphic to the direct sum of the groups \( {H}_{0}\left( {v}_{i}\right) \) for \( i = 1,2,\ldots, m \) . The augmentation homomorphism \( \varepsilon : {H}_{0}\left( {v}_{i}\right) \rightarrow \mathbf{Z} \) is an isomorphism; therefore it is natural to choose as a generator of \( {H}_{0}\left( {v}_{i}\right) \) the element \( {a}_{i} \) such that \( \varepsilon \left( {a}_{i}\right) = 1 \) . Then \( \left\{ {{a}_{1},\ldots ,{a}_{m}}\right\} \) is a basis for \( {H}_{0}\left( {X}^{0}\right) \) . To avoid proliferation of notation, it is convenient to use the same symbol \( {v}_{i} \) for the basis element \( {a}_{i} \in {H}_{0}\left( {v}_{i}\right) \) . This abuse of notation will hardly ever lead to confusion, and it is sanctioned by many decades of use. Thus we will denote our basis of \( {H}_{0}\left( {X}^{0}\right) \) by \( \left\{ {{v}_{1},\ldots ,{v}_{m}}\right\} \) . Choosing a basis for \( {H}_{1}\left( {X,{X}^{0}}\right) \) is only slightly more complicated. According to Theorem 3.1, \( {H}_{1}\left( {X,{X}^{0}}\right) \) decomposes into the direct sum of infinite cyclic subgroups, which correspond to the edges \( {e}_{1},\ldots ,{e}_{k} \) . Thus to choose a basis for \( {H}_{1}\left( {X,{X}^{0}}\right) \) it suffices to choose a generator for the infinite cyclic group \( {H}_{1}\left( {{\bar{e}}_{i},{\dot{e}}_{i}}\right) \) for \( i = 1,2,\ldots, k \) . It turns out that such a choice is purely arbitrary; there is no natural or preferred choice of a generator. In order to understand the meaning of such a choice, consider the following commutative diagram (cf. 
Exercise II.5.5): ![26a5d8f2-88cf-4556-8447-3a182179fff0_58_0.jpg](images/26a5d8f2-88cf-4556-8447-3a182179fff0_58_0.jpg) The homomorphism \( {\partial }_{1} \) is an isomorphism; thus choosing a generator for \( {H}_{1}\left( {{\bar{e}}_{i},{\dot{e}}_{i}}\right) \) is equivalent to choosing a generator for \( {\widetilde{H}}_{0}\left( {\dot{e}}_{i}\right) \) . The set \( {\dot{e}}_{i} \) consists of two vertices; let us denote them by \( {v}_{\alpha } \) and \( {v}_{\beta } \) . Using the convention introduced in the preceding paragraph, we may use the same symbols, \( {v}_{\alpha } \) and \( {v}_{\beta } \), to denote a basis for \( {H}_{0}\left( {\dot{e}}_{i}\right) \) . With this convention, the two possible choices of a generator for the infinite cyclic subgroup \( {\widetilde{H}}_{0}\left( {\dot{e}}_{i}\right) \) are \( {v}_{\alpha } - {v}_{\beta } \) and \( {v}_{\beta } - {v}_{\alpha } \) . Thus we see that a choice of basis for \( {H}_{1}\left( {{\bar{e}}_{i},{\dot{e}}_{i}}\right) \) corresponds to an ordering of the vertices of the edge \( {e}_{i} \) . For this reason, we will say that we orient the edge \( {e}_{i} \) when we make such a choice. To make things precise, we lay down the following rule: Orient the edge \( {e}_{i} \) by choosing an ordering of its two vertices. If \( {v}_{\beta } > {v}_{\alpha } \), then this ordering of vertices corresponds to the generator \( {\partial }_{1}^{-1}\left( {{v}_{\beta } - {v}_{\alpha }}\right) \) of the group \( {H}_{1}\left( {{\bar{e}}_{i},{\dot{e}}_{i}}\right) \) . We can now give the following recipe for the homomorphism \( {\partial }_{ * } \) : \( {H}_{1}\left( {X,{X}^{0}}\right) \rightarrow {H}_{0}\left( {X}^{0}\right) \) (a) A basis for \( {H}_{0}\left( {X}^{0}\right) \) consists of the set of vertices. (b) Orient the edges by choosing an order for the vertices of each edge. 
On a diagram or drawing of the given graph, it is convenient to indicate the orientation by an arrow on each edge pointing from the first vertex to the second. (c) A basis for \( {H}_{1}\left( {X,{X}^{0}}\right) \) consists of the set of oriented edges. (d) If \( {e}_{i} \) is any edge, with vertices \( {v}_{\alpha } \) and \( {v}_{\beta } \) and orientation determined by the relation \( {v}_{\beta } > {v}_{\alpha } \), then \[ {\partial }_{ * }\left( {e}_{i}\right) = {v}_{\beta } - {v}_{\alpha } \] EXAMPLE 3.1. Figure 2 shows a graph with six vertices and nine edges which cannot be imbedded in the plane. (This graph comes up in the well-known problem of the three houses and the three utilities.) We have oriented all the edges by placing arrows on them which point upwards. According to the preceding rules, the homomorphism \( {\partial }_{ * } \) is given by the following formulas: \[ {\partial }_{ * }\left( {e}_{1}\right) = {v}_{1} - {v}_{4} \] \[ {\partial }_{ * }\left( {e}_{2}\right) = {v}_{2} - {v}_{5} \] \[ {\partial }_{ * }\left( {e}_{3}\right) = {v}_{3} - {v}_{6} \] \[ {\partial }_{ * }\left( {e}_{4}\right) = {v}_{2} - {v}_{4} \] \[ {\partial }_{ * }\left( {e}_{5}\right) = {v}_{3} - {v}_{5} \] \[ {\partial }_{ * }\left( {e}_{6}\right) = {v}_{1} - {v}_{5} \] \[ {\partial }_{ * }\left( {e}_{7}\right) = {v}_{2} - {v}_{6} \] \[ {\partial }_{ * }\left( {e}_{8}\right) = {v}_{3} - {v}_{4} \] \[ {\partial }_{ * }\left( {e}_{9}\right) = {v}_{1} - {v}_{6} \] In other words, \( {\partial }_{ * } \) is represented by the following matrix: \[ \left\lbrack \begin{array}{rrrrrr} 1 & 0 & 0 & - 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & - 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & - 1 \\ 0 & 1 & 0 & - 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & - 1 & 0 \\ 1 & 0 & 0 & 0 & - 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & - 1 \\ 0 & 0 & 1 & - 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & - 1 \end{array}\right\rbrack \] There remains the problem of determining the kernel and cokernel of \( {\partial }_{ * } \) . 
In books on linear algebra there is an algorithm described for introducing new ![26a5d8f2-88cf-4556-8447-3a182179fff0_59_0.jpg](images/26a5d8f2-88cf-4556-8447-3a182179fff0_59_0.jpg) Figure 2 bases in the domain and range of such a homomorphism so that the corresponding matrix is a diagonal matrix. Then generators of the kernel and cokernel can be read off with ease. Unfortunately, this algorithm is rather lengthy and tedious. As a practical alternative, one can proceed as follows. The Euler characteristic of this graph is \( 6 - 9 = - 3 \) . Since it is connected, \( {H}_{0}\left( X\right) \) has rank 1. Hence \( {H}_{1}\left( X\right) \) has rank 4, by Theorem 3.2. Therefore we should be able to find four linearly independent elements in the kernel of \( {\partial }_{ * } \), and then hope to prove that they form a basis for the kernel of \( {\partial }_{ * } \) . Consider the following four elements of \( {H}_{1}\left( {X,{X}^{0}}\right) \) : \[ {z}_{1} = {e}_{1} - {e}_{6} + {e}_{2} - {e}_{4} \] \[ {z}_{2} = {e}_{2} - {e}_{4} + {e}_{8} - {e}_{5} \] \[ {z}_{3} = {e}_{3} - {e}_{5} + {e}_{6} - {e}_{9} \] and \[ {z}_{4} = {e}_{7} - {e}_{2} + {e}_{5} - {e}_{3} \] These four elements (which we may as well call cycles) were determined by inspection of the above diagram. They correspond in an obvious way to certain oriented closed paths in the diagram. It is readily verified that all four of these cycles actually belong to the kernel of \( {\partial }_{ * } \), and that they are linearly independent. Finally, it is a nice exercise in linear algebra to check that the set \( \left\{ {{e}_{1},{e}_{2},{e}_{3},{e}_{4},{e}_{5},{z}_{1},{z}_{2},{z}_{3},{z}_{4}}\right\} \) is also a basis for \( {H}_{1}\left( {X,{X}^{0}}\right) \) . These facts suffice to prove that \( \left\{ {{z}_{1},{z}_{2},{z}_{3},{z}_{4}}\right\} \) is actually a basis for the kernel of \( {\partial }_{ * } \), or what is equivalent, for the homology group \( {H}_{1}\left( X\right) \) . 
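All of the facts used in this example can be checked mechanically. The following Python sketch (variable names are mine) builds the boundary matrix given in the text and verifies the ranks; computing over the rationals suffices here, since every group involved is free abelian.

```python
import numpy as np

# Boundary matrix of Example 3.1: row i is the image of the oriented
# edge e_{i+1} in the vertex basis v_1,...,v_6, copied from the text.
D = np.array([
    [ 1, 0, 0, -1,  0,  0],   # e1 -> v1 - v4
    [ 0, 1, 0,  0, -1,  0],   # e2 -> v2 - v5
    [ 0, 0, 1,  0,  0, -1],   # e3 -> v3 - v6
    [ 0, 1, 0, -1,  0,  0],   # e4 -> v2 - v4
    [ 0, 0, 1,  0, -1,  0],   # e5 -> v3 - v5
    [ 1, 0, 0,  0, -1,  0],   # e6 -> v1 - v5
    [ 0, 1, 0,  0,  0, -1],   # e7 -> v2 - v6
    [ 0, 0, 1, -1,  0,  0],   # e8 -> v3 - v4
    [ 1, 0, 0,  0,  0, -1],   # e9 -> v1 - v6
])

r = np.linalg.matrix_rank(D)
print(9 - r, 6 - r)   # dim ker = rank H1(X), dim coker = rank H0(X): 4 1

# The four cycles z1,...,z4 found by inspection (rows of coefficients
# with respect to e1,...,e9) lie in ker d_* and are independent:
Z = np.array([
    [ 1,  1,  0, -1,  0, -1, 0, 0,  0],   # z1 = e1 - e6 + e2 - e4
    [ 0,  1,  0, -1, -1,  0, 0, 1,  0],   # z2 = e2 - e4 + e8 - e5
    [ 0,  0,  1,  0, -1,  1, 0, 0, -1],   # z3 = e3 - e5 + e6 - e9
    [ 0, -1, -1,  0,  1,  0, 1, 0,  0],   # z4 = e7 - e2 + e5 - e3
])
print((Z @ D == 0).all(), np.linalg.matrix_rank(Z))   # True 4
```

The first line of output gives rank \( {H}_{1}\left( X\right) = 4 \) and rank \( {H}_{0}\left( X\right) = 1 \); the second confirms that \( {z}_{1},{z}_{2},{z}_{3},{z}_{4} \) are four independent elements of the kernel of \( {\partial }_{ * } \).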
We leave it to the reader to carry through the details of the proof. The reader is strongly urged to make diagrams of several graphs and determine a set of linearly independent cycles which constitute a basis for the 1-dimensional homology group of each graph. It is only by such exercises that one can gain an adequate understanding and intuitive feeling for homology theory. The idea that a 1-dimensional homology class is represented by a linear combination of cycles is very important. Next we will discuss the problem of determining the homomorphism in
1172_(GTM8)Axiomatic Set Theory
Definition 17.3
Definition 17.3. Let \( \gamma \) be a cardinal. A Boolean algebra \( \mathbf{B} \) satisfies the \( \gamma \) -chain condition iff \[ \left( {\forall S \subseteq B}\right) \left\lbrack {\left( {\forall x, y \in S}\right) \left\lbrack {x \neq y \rightarrow x \cdot y = \mathbf{0}}\right\rbrack \rightarrow \overline{\overline{S}} \leq \gamma }\right\rbrack . \] In particular, \( \mathbf{B} \) satisfies the \( \omega \) -chain condition iff \( \mathbf{B} \) satisfies the c.c.c. Theorem 17.4. Let \( \gamma \) be an infinite cardinal and suppose that \( \mathbf{B} \) satisfies the \( \gamma \) -chain condition. If \( \alpha > \gamma \) is a cardinal, then \( \llbracket \operatorname{Card}\left( \check{\alpha }\right) \rrbracket = \mathbf{1} \) . Proof. As in the proof of Theorem 17.2, suppose that \( \llbracket \operatorname{Card}\left( \check{\alpha }\right) \rrbracket \neq \mathbf{1} \) for some cardinal \( \alpha > \gamma \) . Then, defining \( b \) as before, we have for some \( \beta < \alpha \) and \( f \in {V}^{\left( \mathbf{B}\right) } \) \[ \text{i)}\;b \leq \mathop{\prod }\limits_{{\eta < \alpha }}\mathop{\sum }\limits_{{\xi < \beta }}\llbracket f\left( \check{\xi }\right) = \check{\eta }\rrbracket \;\text{where}\;b > \mathbf{0}\text{.} \] Therefore, using the \( {AC} \) in \( V \) , \[ \left( {\forall \eta < \alpha }\right) \left( {\exists {\xi }_{\eta } < \beta }\right) \left\lbrack {b \cdot \llbracket {f\left( {\check{\xi }}_{\eta }\right) = \check{\eta }}\rrbracket \neq \mathbf{0}}\right\rbrack . \] For \( \xi < \beta \) define \[ {A}_{\xi } = \left\{ {\eta < \alpha \mid {\xi }_{\eta } = \xi }\right\} . \] Then for some \( {\xi }_{ * } < \beta \) , ii) \( {\overline{\overline{A}}}_{{\xi }_{ * }} > \gamma \) , since otherwise \( \left( {\forall \xi < \beta }\right) \left\lbrack {{\overline{\overline{A}}}_{\xi } \leq \gamma }\right\rbrack \) . 
But \( \alpha = \mathop{\bigcup }\limits_{{\xi < \beta }}{A}_{\xi } \), so this would imply \( \overline{\overline{\alpha }} \leq \overline{\overline{\beta }} \cdot \gamma < \alpha \) since \( \beta ,\gamma < \alpha \) . This is a contradiction. Consider \[ S = \left\{ {b \cdot \llbracket {f\left( {\check{\xi }}_{ * }\right) = \check{\eta }}\rrbracket \mid \eta \in {A}_{{\xi }_{ * }}}\right\} . \] Then for \( \eta \in {A}_{{\xi }_{ * }} \) we have \( {\xi }_{\eta } = {\xi }_{ * } \), and hence \[ b \cdot \llbracket {f\left( {\check{\xi }}_{ * }\right) = \check{\eta }}\rrbracket = b \cdot \llbracket {f\left( {\check{\xi }}_{\eta }\right) = \check{\eta }}\rrbracket \neq \mathbf{0} \] since \( b \leq \llbracket f : \check{\beta } \rightarrow \check{\alpha }\rrbracket \) and \( \llbracket {\check{\xi }}_{ * } = {\check{\xi }}_{\eta }\rrbracket = \mathbf{1} \) . Therefore the elements of \( S \) are \( \neq \mathbf{0} \) . Moreover, if \( {\eta }_{1},{\eta }_{2} \in {A}_{{\xi }_{ * }} \) and \( {\eta }_{1} \neq {\eta }_{2} \) , \[ b \cdot \llbracket {f\left( {\check{\xi }}_{ * }\right) = {\check{\eta }}_{1}}\rrbracket \cdot \llbracket {f\left( {\check{\xi }}_{ * }\right) = {\check{\eta }}_{2}}\rrbracket \leq \llbracket {{\check{\eta }}_{1} = {\check{\eta }}_{2}}\rrbracket = \mathbf{0}. \] Therefore the elements of \( S \) are mutually disjoint and \( \overline{\overline{S}} > \gamma \), by ii). But the existence of such an \( S \) contradicts the assumption that \( \mathbf{B} \) satisfies the \( \gamma \) -chain condition. Corollary 17.5. If \( \mathbf{B} \) satisfies the c.c.c. and \( \alpha \) is a cardinal, then \( \llbracket \operatorname{Card}\left( \check{\alpha }\right) \rrbracket = \mathbf{1} \) . Remark. This means that cardinals are absolute if \( \mathbf{B} \) satisfies the c.c.c. We can express this fact also in the following way: Corollary 17.6. If \( \mathbf{B} \) satisfies the c.c.c. 
then \( \left( {\forall \alpha }\right) \left\lbrack {\llbracket {\left( {\omega }_{\alpha }\right) }^{ \vee } = {\omega }_{\check{\alpha }}\rrbracket = \mathbf{1}}\right\rbrack \) . [Note: For the meaning of this formula see the note stated in the proof of Theorem 17.2.] Proof. (By induction on \( \alpha \) .) We have already proved the case \( \alpha = 0 \) at the end of §13. Therefore assume \( \alpha > 0 \) and \( \left( {\forall \xi < \alpha }\right) \left\lbrack {\llbracket {\left( {\omega }_{\xi }\right) }^{ \vee } = {\omega }_{\check{\xi }}\rrbracket = \mathbf{1}}\right\rbrack \) . Since \[ u = {\omega }_{\alpha } \leftrightarrow \operatorname{Card}\left( u\right) \land \left( {\forall \xi < \alpha }\right) \left\lbrack {{\omega }_{\xi } < u}\right\rbrack \land \left( {\forall v}\right) \left\lbrack {\operatorname{Card}\left( v\right) \land \left( {\forall \xi < \alpha }\right) \left\lbrack {{\omega }_{\xi } < v}\right\rbrack \rightarrow u \leq v}\right\rbrack \] is provable in \( {ZF} \), we have \[ \llbracket u = {\omega }_{\check{\alpha }}\rrbracket = \llbracket \operatorname{Card}\left( u\right) \rrbracket \cdot \llbracket \left( {\forall \xi < \check{\alpha }}\right) \left\lbrack {{\omega }_{\xi } < u}\right\rbrack \rrbracket \cdot \llbracket \left( {\forall v}\right) \left\lbrack {\operatorname{Card}\left( v\right) \land \left( {\forall \xi < \check{\alpha }}\right) \left\lbrack {{\omega }_{\xi } < v}\right\rbrack \rightarrow u \leq v}\right\rbrack \rrbracket . \] We wish to prove that \( \llbracket {\left( {\omega }_{\alpha }\right) }^{ \vee } = {\omega }_{\check{\alpha }}\rrbracket = \mathbf{1} \) . By Corollary 17.5, \( \llbracket \operatorname{Card}\left( {\left( {\omega }_{\alpha }\right) }^{ \vee }\right) \rrbracket = \mathbf{1} \) . 
\[ \llbracket \left( {\forall \xi < \check{\alpha }}\right) \left\lbrack {{\omega }_{\xi } < {\left( {\omega }_{\alpha }\right) }^{ \vee }}\right\rbrack \rrbracket = \mathop{\prod }\limits_{{\xi < \alpha }}\llbracket {{\omega }_{\check{\xi }} < {\left( {\omega }_{\alpha }\right) }^{ \vee }}\rrbracket = \mathop{\prod }\limits_{{\xi < \alpha }}\llbracket {{\left( {\omega }_{\xi }\right) }^{ \vee } < {\left( {\omega }_{\alpha }\right) }^{ \vee }}\rrbracket = \mathbf{1} \] by the induction hypothesis, since \( {\omega }_{\xi } < {\omega }_{\alpha } \) holds in \( V \) for every \( \xi < \alpha \) . Finally, let \[ {b}_{0} = \llbracket \left( {\forall v}\right) \left\lbrack {\operatorname{Card}\left( v\right) \land \left( {\forall \xi < \check{\alpha }}\right) \left\lbrack {{\omega }_{\xi } < v}\right\rbrack \rightarrow {\left( {\omega }_{\alpha }\right) }^{ \vee } \leq v}\right\rbrack \rrbracket = \mathop{\prod }\limits_{{\eta \in {On}}}\left\lbrack {\left( {\llbracket \operatorname{Card}\left( \check{\eta }\right) \rrbracket \cdot \mathop{\prod }\limits_{{\xi < \alpha }}\llbracket {\omega }_{\check{\xi }} < \check{\eta }\rrbracket }\right) \Rightarrow \llbracket {\left( {\omega }_{\alpha }\right) }^{ \vee } \leq \check{\eta }\rrbracket }\right\rbrack . \] Let \( \eta \in {On} \) . If \( \eta \) is not a cardinal, \( \llbracket \operatorname{Card}\left( \check{\eta }\right) \rrbracket = \mathbf{0} \) . Therefore, we need only consider the case \( \operatorname{Card}\left( \eta \right) \) . Then \( \llbracket \operatorname{Card}\left( \check{\eta }\right) \rrbracket = \mathbf{1} \) and \( \mathop{\prod }\limits_{{\xi < \alpha }}\llbracket {\omega }_{\check{\xi }} < \check{\eta }\rrbracket = \mathop{\prod }\limits_{{\xi < \alpha }}\llbracket {\left( {\omega }_{\xi }\right) }^{ \vee } < \check{\eta }\rrbracket \) by the induction hypothesis. 
Hence \[ \mathop{\prod }\limits_{{\xi < \alpha }}\llbracket {{\omega }_{\check{\xi }} < \check{\eta }}\rrbracket \neq \mathbf{0} \rightarrow \left( {\forall \xi < \alpha }\right) \left\lbrack {{\omega }_{\xi } < \eta }\right\rbrack \rightarrow {\omega }_{\alpha } \leq \eta \rightarrow \llbracket {{\left( {\omega }_{\alpha }\right) }^{ \vee } \leq \check{\eta }}\rrbracket = \mathbf{1}\text{.} \] This proves \[ \left( {\forall \eta \in {On}}\right) \left\lbrack {\left( {\llbracket \operatorname{Card}\left( \check{\eta }\right) \rrbracket \cdot \mathop{\prod }\limits_{{\xi < \alpha }}\llbracket {\omega }_{\check{\xi }} < \check{\eta }\rrbracket }\right) \Rightarrow \llbracket {\left( {\omega }_{\alpha }\right) }^{ \vee } \leq \check{\eta }\rrbracket = \mathbf{1}}\right\rbrack . \] Therefore \( {b}_{0} = \mathbf{1} \) . Thus \( \llbracket {\left( {\omega }_{\alpha }\right) }^{ \vee } = {\omega }_{\check{\alpha }}\rrbracket = \mathbf{1} \) . Remark. Finally we mention a theorem which says that constructible sets are absolute in the same sense as ordinals, i.e., quantification over constructible sets (in the sense of \( {V}^{\left( \mathbf{B}\right) } \) ) can be replaced by quantification (in the Boolean sense) over the standard constructible sets. Let \( \operatorname{Const}\left( x\right) \) be the formal predicate expressing that \( x \) is constructible in the sense of Gödel: Definition 17.7. \( \operatorname{Const}\left( x\right) \overset{\Delta }{ \leftrightarrow }\left( {\exists v}\right) \left\lbrack {\operatorname{Ord}\left( v\right) \land x = F\left( v\right) }\right\rbrack \) where \( F \) is Gödel’s constructibility function. (See Definition 15.13, Introduction to Axiomatic Set Theory.) Theorem 17.8. \( \left( {\forall u \in {V}^{\left( \mathbf{B}\right) }}\right) \left\lbrack {\llbracket \operatorname{Const}\left( u\right) \rrbracket = \mathop{\sum }\limits_{{x \in L}}\llbracket u = \check{x}\rrbracket }\right\rbrack \) . Proof. 
For \( u \in {V}^{\left( \mathbf{B}\right) } \) , \[ \llbracket \operatorname{Const}\left( u\right) \rrbracket = \llbracket \left( {\exists v}\right) \left\lbrack {\operatorname{Ord}\left( v\right) \land u = F\left( v\right) }\right\rbrack \rrbracket = \mathop{\sum }\limits_{{\alpha \in {On}}}\llbracket u = F\left( \check{\alpha }\right) \rrbracket \;\text{ by Corollary 13.23. } \] In the proof of Theorem 15.28 in Introduction to Axiomatic Set Theory we established that \( x = F\left( \alpha \right) \) is equivalent to a formula \( \left( {\exists f}\right) \phi \left( {f, x,\alpha }\right) \) where \( \phi \left( {f, x,\alpha }\right) \) is a bounded form
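Stepping back to Definition 17.3, the chain condition is easy to visualize in a finite Boolean algebra. The following brute-force Python sketch (my own illustration, not from the text) confirms that the powerset algebra of a 3-element set satisfies the 3-chain condition: the three atoms form a pairwise disjoint family, and no family of four nonzero elements is pairwise disjoint, since each member must use up at least one point of the base set.

```python
from itertools import combinations

n = 3
# All nonzero elements of the Boolean algebra P({0,...,n-1}).
nonzero = [frozenset(s) for k in range(1, n + 1)
           for s in combinations(range(n), k)]

def pairwise_disjoint(fam):
    # x . y = 0 in P(X) means the underlying sets are disjoint.
    return all(not (a & b) for a, b in combinations(fam, 2))

# The n atoms {0},...,{n-1} are a pairwise disjoint family of size n ...
atoms = [frozenset([i]) for i in range(n)]
print(pairwise_disjoint(atoms))   # True

# ... and no family of n+1 nonzero elements is pairwise disjoint.
bigger = [fam for fam in combinations(nonzero, n + 1)
          if pairwise_disjoint(fam)]
print(len(bigger))                # 0
```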
110_The Schwarz Function and Its Generalization to Higher Dimensions
Definition 6.26
Definition 6.26. Let \( C \) and \( D \) be two nonempty sets, and \( H \mathrel{\text{:=}} {H}_{\left( \ell ,\alpha \right) } \) a hyperplane in a vector space \( E \) . \( H \) is called a separating hyperplane for the sets \( C \) and \( D \) if \( C \) is contained in one of the algebraically closed half-spaces determined by \( H \) and \( D \) in the other, say \( C \subseteq {\bar{H}}_{\left( \ell ,\alpha \right) }^{ + } \) and \( D \subseteq {\bar{H}}_{\left( \ell ,\alpha \right) }^{ - } \) . \( H \) is called a strictly separating hyperplane for the sets \( C \) and \( D \) if \( C \) is contained in one of the algebraically open half-spaces determined by \( H \) and \( D \) in the other, say \( C \subseteq {H}_{\left( \ell ,\alpha \right) }^{ + } \) and \( D \subseteq {H}_{\left( \ell ,\alpha \right) }^{ - } \) . \( H \) is called a strongly separating hyperplane for the sets \( C \) and \( D \) if there exist \( \beta \) and \( \gamma \) satisfying \( \gamma < \alpha < \beta \), such that \( C \subseteq {\bar{H}}_{\left( \ell ,\beta \right) }^{ + } \), and \( D \subseteq {\bar{H}}_{\left( \ell ,\gamma \right) }^{ - } \) . \( H \) is called a properly separating hyperplane for the sets \( C \) and \( D \) if \( H \) separates \( C \) and \( D \), and \( C \) and \( D \) are not both contained in the hyperplane \( H \) . If there exists a hyperplane \( H \) separating the sets \( C \) and \( D \) in one of the senses above, we say that \( C \) and \( D \) can be separated, strictly separated, strongly separated, properly separated, respectively. As in the finite-dimensional case, hyperplanes are proper, maximal affine subsets. However, when \( E \) is a topological vector space, it is no longer true that every hyperplane is necessarily closed. Lemma 6.27. Let \( E \) be a real vector space. A set \( H \subset E \) is a hyperplane if and only if \( H \) is a proper maximal affine subset of \( E \) . 
Moreover, if \( E \) is a topological vector space, then the hyperplane \( {H}_{\left( \ell ,\alpha \right) } \) is closed if and only if \( \ell \) is a continuous linear functional. Proof. Clearly, a hyperplane \( {H}_{\left( \ell ,\alpha \right) } \) is a proper affine subset of \( E \) . The maximality of \( H \) holds: if \( a \in E \smallsetminus H \), then \( \ell \left( a\right) \neq 0 \), so that if \( x \in E \), we have \( \ell \left( x\right) = \ell \left( {\left( {\ell \left( x\right) /\ell \left( a\right) }\right) a}\right) \), that is, \( x - \left( {\ell \left( x\right) /\ell \left( a\right) }\right) a \in H \), proving that \( E = \operatorname{span}\{ H, a\} \) . Conversely, suppose that \( H \) is a proper maximal affine subset of \( E \) . Assume without loss of generality that \( H \) is a linear subspace of \( E \) . If \( a \in E \smallsetminus H \), then \( E = \operatorname{span}\{ H, a\} \), so that every \( x \in E \) has a representation \( x = u + {ta} \), where \( u \in H \) and \( t \in \mathbb{R} \) . This representation is unique, since \( x = {u}_{1} + {t}_{1}a = {u}_{2} + {t}_{2}a \) implies that \( {u}_{2} - {u}_{1} = \left( {{t}_{1} - {t}_{2}}\right) a \in H \cap \operatorname{span}\left( {\{ a\} }\right) = \{ 0\} \), that is, \( {u}_{2} = {u}_{1} \) and \( {t}_{2} = {t}_{1} \) . Define \[ \ell \left( x\right) = t,\;\text{ where }\;x = u + {ta}, u \in H, t \in \mathbb{R}, \] which is easily shown to be a linear functional. Clearly, \( H = {H}_{\left( \ell ,0\right) } \), proving that \( H \) is a hyperplane. Now suppose that \( E \) is a topological vector space. If \( \ell \) is continuous, it is clear that \( {H}_{\left( \ell ,\alpha \right) } \) is a (topologically) closed set. Conversely, if \( H \mathrel{\text{:=}} {H}_{\left( \ell ,\alpha \right) } \) is closed, we claim that \( \ell \) is continuous. Pick a point \( x \) in the complement of \( H \), which is an open set. 
There exists an open neighborhood \( N \) of the origin such that \( x + N \subseteq E \smallsetminus H \) . We may assume that \( N \) is a symmetric neighborhood, that is, \( N = - N \) : since \( \left( {t, x}\right) \mapsto {tx} \) is continuous, there exist \( \delta > 0 \) and a neighborhood \( W \) of the origin such that \( {tW} \subseteq N \) for all \( \left| t\right| < \delta \) . The set \( \bar{N} \mathrel{\text{:=}} { \cup }_{\left| t\right| < \delta }{tW} \subseteq N \) is clearly a symmetric neighborhood of the origin. If \( \ell \left( N\right) \) is unbounded, then it is easy to see that \( \ell \left( N\right) = \mathbb{R} \), so that there exists \( y \in N \) satisfying \( \ell \left( y\right) = \alpha - \ell \left( x\right) \), which gives the contradiction \( x + y \in \left( {x + N}\right) \cap H = \varnothing \) . Therefore, \( \ell \left( N\right) \) is bounded, say \( \left| {\ell \left( x\right) }\right| \leq M \) for \( x \in N \) . The continuity of \( \ell \) follows, because given \( \epsilon > 0 \), we have \( \left| {\ell \left( x\right) }\right| < \epsilon \) for every \( x \in \left( {\epsilon /M}\right) N \) . Note that the proof above also establishes the following result. Corollary 6.28. Let \( E \) be a real topological vector space, and \( H \mathrel{\text{:=}} {H}_{\left( \ell ,\alpha \right) } \) a hyperplane. The linear functional \( \ell \) is continuous if and only if one of the half-spaces \( {H}^{ + },{H}^{ - } \) contains an open set. Some form of Zorn's lemma is needed to prove separation theorems in general vector spaces. Recall that a partial order \( \preccurlyeq \) on a set \( X \) is a reflexive, antisymmetric, and transitive relation on \( X \), that is, for \( x, y, z \in X \), we have (a) \( x \preccurlyeq x \) , (b) \( x \preccurlyeq y, y \preccurlyeq x \Rightarrow x = y \) , (c) \( x \preccurlyeq y, y \preccurlyeq z \Rightarrow x \preccurlyeq z \) . 
A subset \( Y \subseteq X \) is called totally ordered if any two elements \( x, y \in Y \) can be compared, that is, either \( x \preccurlyeq y \) or \( y \preccurlyeq x \) . An upper bound of any set \( Z \subseteq X \) is a point \( x \in X \) such that \( z \preccurlyeq x \) for every \( z \in Z \) . A maximal element of a partially ordered set \( X \) is a point \( x \in X \) such that \( x \preccurlyeq z \) implies that \( z = x \) . Lemma 6.29. (Zorn's lemma) A partially ordered set has a maximal element if every totally ordered subset of it has an upper bound. Zorn's lemma is a basic axiom of set theory equivalent to the axiom of choice or the well-ordering principle; see for example [125] for more details. A pair of nonempty convex sets \( C \) and \( D \) satisfying \( C \cap D = \varnothing \) and \( C \cup D = \) \( E \) are called complementary convex sets. The following result essentially goes back to [152] and [248]. Lemma 6.30. If \( A \) and \( B \) are two nonempty, disjoint convex sets in a vector space \( E \), then there exist complementary convex sets \( C \) and \( D \) in \( E \) such that \( A \subseteq C \) and \( B \subseteq D \) . Proof. We introduce a relation \( \preccurlyeq \) on the set \( \mathcal{C} \) of disjoint convex subsets \( \left( {C, D}\right) \subseteq E \times E \) such that \( A \subseteq C \) and \( B \subseteq D \) by the inclusion relation, that is, we declare \( \left( {C, D}\right) \preccurlyeq \left( {{C}^{\prime },{D}^{\prime }}\right) \) if \( C \subseteq {C}^{\prime } \) and \( D \subseteq {D}^{\prime } \) . It is evident that \( \preccurlyeq \) is a partial order relation on \( \mathcal{C} \) . Moreover, if \( \mathcal{D} \subset \mathcal{C} \) is any totally ordered subset, then the union of sets in \( \mathcal{D} \) is a pair of disjoint convex sets that is an upper bound for \( \mathcal{D} \) . 
Thus, Zorn’s lemma applies, and there exists a maximal element \( \left( {C, D}\right) \in \mathcal{C} \), that is, \( C \) and \( D \) are convex sets satisfying \( A \subseteq C \) and \( B \subseteq D \) , and whenever \( {C}^{\prime } \) and \( {D}^{\prime } \) are disjoint convex sets satisfying \( C \subseteq {C}^{\prime } \) and \( D \subseteq {D}^{\prime } \), then \( {C}^{\prime } = C \) and \( {D}^{\prime } = D \) . We claim that \( C \cup D = E \) . If this is not true, pick a point \( x \in E \smallsetminus \left( {C \cup D}\right) \) . Since \( \left( {C, D}\right) \) is a maximal pair, we have \( \operatorname{co}\left( {\{ x\} \cup C}\right) \cap D \neq \varnothing \) and \( \operatorname{co}\left( {\{ x\} \cup D}\right) \cap C \neq \varnothing \) . Let \( {y}_{1} \in \operatorname{co}\left( {\{ x\} \cup D}\right) \cap C \) and \( {y}_{2} \in \operatorname{co}\left( {\{ x\} \cup C}\right) \cap D \) ; then there exist \( {x}_{2} \in D \) such that \( {y}_{1} \in \left( {x,{x}_{2}}\right) \), and \( {x}_{1} \in C \) such that \( {y}_{2} \in \left( {x,{x}_{1}}\right) \) ; see Figure 6.5. But the intersection point \( z \) of the line segments \( \left\lbrack {{x}_{1},{y}_{1}}\right\rbrack \) and \( \left\lbrack {{x}_{2},{y}_{2}}\right\rbrack \) belongs to both \( C \) and \( D \), a contradiction. This proves the claim and the lemma. ![968fd3dd-2b91-4cd3-8e1b-204f8f1c2faa_172_0.jpg](images/968fd3dd-2b91-4cd3-8e1b-204f8f1c2faa_172_0.jpg) Fig. 6.5. Lemma 6.31. Let \( \left( {C, D}\right) \) be complementary convex sets in a vector space \( E \) . The set \[ L \mathrel{\text{:=}} \operatorname{ac}\left( C\right) \cap \operatorname{ac}\left( D\right) \] is either a hyperplane in \( E \) or the whole space \( E \) . Moreover, (a) \( L = E \) if and only if \( \operatorname{ai}\left( C\right) = \operatorname{ai}\left( D\right) = \varnothing \), or equivalently if and only if \( \operatorname{ac}\left( C\right) = \operatorname{ac}\left( D\right) = E \) . 
(b) If \( L \) is a hyperplane, then the sets \( \operatorname{ai}\left( C\right) \) and \( \operatorname{ai}\left( D\right) \) are both nonempty, and the pairs \( \left( {\operatorname{ai}\left( C\right) ,\operatorname{ai}\left( D\right) }\right) \) and \( \left( {\operatorname{ac}\left( C\right) ,\o
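For finite point sets, the separation notions of Definition 6.26 reduce to comparing the values of \( \ell \) on the two sets. The Python sketch below (the function name and sample data are mine) classifies a candidate hyperplane accordingly; note that for finite sets strict and strong separation coincide, since the extreme values of \( \ell \) are attained.

```python
import numpy as np

def classify_separation(ell, alpha, C, D):
    """Classify how the hyperplane H = {x : <ell, x> = alpha} relates
    to the finite point sets C and D (Definition 6.26)."""
    c = np.array([float(ell @ x) for x in C])
    d = np.array([float(ell @ x) for x in D])
    if c.min() > alpha and d.max() < alpha:
        # A gap exists, so beta, gamma with gamma < alpha < beta can be
        # chosen; for finite sets strict separation is automatically strong.
        return "strongly (and strictly) separating"
    if (c >= alpha).all() and (d <= alpha).all():
        proper = not ((c == alpha).all() and (d == alpha).all())
        return "properly separating" if proper else "separating, not properly"
    return "not separating"

ell = np.array([1.0, 0.0])                          # ell(x) = x_1
C = [np.array([2.0, 1.0]), np.array([3.0, -1.0])]   # ell >= 2 on C
D = [np.array([-1.0, 0.0]), np.array([-2.0, 5.0])]  # ell <= -1 on D
print(classify_separation(ell, 0.0, C, D))   # strongly (and strictly) separating
print(classify_separation(ell, 0.0, C, C))   # not separating
```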
1094_(GTM250)Modern Fourier Analysis
Definition 4.6.5
Definition 4.6.5. A bounded complex-valued function \( b \) on \( {\mathbf{R}}^{n} \) is said to be accretive if there is a constant \( {c}_{0} > 0 \) such that \( \operatorname{Re}b\left( x\right) \geq {c}_{0} \) for almost all \( x \in {\mathbf{R}}^{n} \) . The following theorem is the main result of this section. Theorem 4.6.6. Let \( {\theta }_{s} \) be a complex-valued function on \( {\mathbf{R}}^{n} \times {\mathbf{R}}^{n} \) that satisfies (4.6.28) and (4.6.29), and let \( {\Theta }_{s} \) be the linear operator in (4.6.30) whose kernel is \( {\theta }_{s} \) . If there is an accretive function \( b \) such that \[ {\Theta }_{s}\left( b\right) = 0 \] (4.6.41) for all \( s > 0 \), then there is a constant \( {C}_{n}\left( b\right) \) such that the estimate \[ {\left( {\int }_{0}^{\infty }{\begin{Vmatrix}{\Theta }_{s}\left( f\right) \end{Vmatrix}}_{{L}^{2}}^{2}\frac{ds}{s}\right) }^{\frac{1}{2}} \leq {C}_{n}\left( b\right) \parallel f{\parallel }_{{L}^{2}} \] (4.6.42) holds for all \( f \in {L}^{2} \) . Corollary 4.6.7. The Cauchy integral operator \( {\mathcal{C}}_{\Gamma } \) maps \( {L}^{2}\left( \mathbf{R}\right) \) to itself. The corollary is a consequence of Theorem 4.6.6. Indeed, the crucial and important cancellation property \[ {\Theta }_{s}\left( {1 + i{A}^{\prime }}\right) = 0 \] (4.6.43) is valid for the accretive function \( 1 + i{A}^{\prime } \), when \( {\Theta }_{s} \) and \( {\theta }_{s} \) are as in (4.6.23) and (4.6.24). 
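The cancellation property (4.6.43) can also be checked numerically for a concrete Lipschitz function. In the Python sketch below, the choice \( A\left( y\right) = \arctan y \), the sample parameters \( x, s \), and the quadrature grid are all my own; the integral should come out near zero, in agreement with \( {\Theta }_{s}\left( {1 + i{A}^{\prime }}\right) = 0 \).

```python
import numpy as np

# Lipschitz sample function A(y) = arctan(y), with A'(y) = 1/(1+y^2);
# x and s are arbitrary sample parameters (assumptions, not from the text).
def A(y):  return np.arctan(y)
def Ap(y): return 1.0 / (1.0 + y * y)
x, s = 0.3, 0.7

def integrand(y):
    # The kernel applied to 1 + iA', as in the Cauchy-integral setting.
    denom = y - x + 1j * (A(y) - A(x)) + 1j * s
    return s * (1.0 + 1j * Ap(y)) / denom ** 2

# Substitute y = sinh(u), dy = cosh(u) du, to cover the whole real line,
# then apply the composite trapezoidal rule explicitly.
u = np.linspace(-14.0, 14.0, 400_001)
vals = integrand(np.sinh(u)) * np.cosh(u)
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(u))
print(abs(integral) < 1e-5)   # True: the analytic value is exactly 0
```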
To prove (4.6.43) we simply note that \[ {\Theta }_{s}\left( {1 + i{A}^{\prime }}\right) \left( x\right) = {\int }_{\mathbf{R}}\frac{s\left( {1 + i{A}^{\prime }\left( y\right) }\right) {dy}}{{\left( y - x + i\left( A\left( y\right) - A\left( x\right) \right) + is\right) }^{2}} \] \[ = {\left\lbrack \frac{-s}{y - x + i\left( {A\left( y\right) - A\left( x\right) }\right) + {is}}\right\rbrack }_{y = - \infty }^{y = + \infty } \] \[ = 0 - 0 = 0\text{.} \] This condition plays exactly the role of (4.6.31), which may fail in general. The necessary "internal cancellation" of the family of operators \( {\Theta }_{s} \) is exactly captured by the single condition (4.6.43). It remains to prove Theorem 4.6.6. Proof. We fix an approximation of the identity operator, such as \[ {P}_{s}\left( f\right) \left( x\right) = {\int }_{{\mathbf{R}}^{n}}{\Phi }_{s}\left( {x - y}\right) f\left( y\right) {dy}, \] where \( {\Phi }_{s}\left( x\right) = {s}^{-n}\Phi \left( {{s}^{-1}x}\right) \), and \( \Phi \) is a nonnegative Schwartz function with integral 1. Then \( {P}_{s} \) is a nice positive averaging operator that satisfies \( {P}_{s}\left( 1\right) = 1 \) for all \( s > 0 \) . The key idea is to decompose the operator \( {\Theta }_{s} \) as \[ {\Theta }_{s} = \left( {{\Theta }_{s} - {M}_{{\Theta }_{s}\left( 1\right) }{P}_{s}}\right) + {M}_{{\Theta }_{s}\left( 1\right) }{P}_{s}, \] (4.6.44) where \( {M}_{{\Theta }_{s}\left( 1\right) } \) is the operator given by multiplication by \( {\Theta }_{s}\left( 1\right) \) . We begin with the first term in (4.6.44), which is essentially an error term. We simply observe that \[ \left( {{\Theta }_{s} - {M}_{{\Theta }_{s}\left( 1\right) }{P}_{s}}\right) \left( 1\right) = {\Theta }_{s}\left( 1\right) - {\Theta }_{s}\left( 1\right) {P}_{s}\left( 1\right) = {\Theta }_{s}\left( 1\right) - {\Theta }_{s}\left( 1\right) = 0. 
\] Therefore, Theorem 4.6.3 is applicable once we check that the kernel of the operator \( {\Theta }_{s} - {M}_{{\Theta }_{s}\left( 1\right) }{P}_{s} \) satisfies (4.6.28) and (4.6.29). But these are verified easily, since the kernels of both \( {\Theta }_{s} \) and \( {P}_{s} \) satisfy these estimates and \( {\Theta }_{s}\left( 1\right) \) is a bounded function uniformly in \( s \) . The latter statement is a consequence of condition (4.6.28). We now need to obtain the required quadratic estimate for the term \( {M}_{{\Theta }_{s}\left( 1\right) }{P}_{s} \) . With the use of Theorem 3.3.7, this follows once we prove that the measure \[ {\left| {\Theta }_{s}\left( 1\right) \left( x\right) \right| }^{2}\frac{dxds}{s} \] is Carleson. It is here that we use condition (4.6.41). Since \( {\Theta }_{s}\left( b\right) = 0 \) we have \[ {P}_{s}\left( b\right) {\Theta }_{s}\left( 1\right) = \left( {{P}_{s}\left( b\right) {\Theta }_{s}\left( 1\right) - {\Theta }_{s}{P}_{s}\left( b\right) }\right) + \left( {{\Theta }_{s}{P}_{s}\left( b\right) - {\Theta }_{s}\left( b\right) }\right) . \] (4.6.45) Suppose we could show that the measures \[ {\left| {\Theta }_{s}\left( b\right) \left( x\right) - {\Theta }_{s}{P}_{s}\left( b\right) \left( x\right) \right| }^{2}\frac{dxds}{s}, \] (4.6.46) \[ {\left| {\Theta }_{s}{P}_{s}\left( b\right) \left( x\right) - {P}_{s}\left( b\right) \left( x\right) {\Theta }_{s}\left( 1\right) \left( x\right) \right| }^{2}\frac{dxds}{s}, \] (4.6.47) are Carleson. Then it would follow from (4.6.45) that the measure \[ {\left| {P}_{s}\left( b\right) \left( x\right) {\Theta }_{s}\left( 1\right) \left( x\right) \right| }^{2}\frac{dxds}{s} \] is also Carleson. 
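The two properties of \( {P}_{s} \) used in this proof, namely \( {P}_{s}\left( 1\right) = 1 \) and positivity, can be illustrated numerically (a sketch: the Gaussian choice of \( \Phi \) and the accretive test function \( b\left( t\right) = 2 + \cos t + i\sin {3t} \) are assumptions of the experiment, and the integral is discretized on a truncated grid):

```python
import numpy as np

# Sketch: P_s with an assumed Gaussian Phi (nonnegative, integral 1).
# Check P_s(1) = 1 and that Re P_s(b) >= c0 for an accretive b.
y = np.linspace(-40.0, 40.0, 4001)
dy = y[1] - y[0]
s = 0.7

def P_s(f, x):
    Phi_s = np.exp(-0.5 * ((x - y) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return np.sum(Phi_s * f(y)) * dy

b = lambda t: 2.0 + np.cos(t) + 1j * np.sin(3.0 * t)   # Re b >= 1, so c0 = 1
xs = np.linspace(-5.0, 5.0, 11)

print(all(abs(P_s(lambda t: np.ones_like(t), x) - 1.0) < 1e-6 for x in xs))
print(all(P_s(b, x).real >= 1.0 for x in xs))   # the lower bound survives averaging
```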
Using the accretivity condition on \( b \) and the positivity of \( {P}_{s} \) we obtain \[ \left| {{P}_{s}\left( b\right) }\right| \geq \operatorname{Re}{P}_{s}\left( b\right) = {P}_{s}\left( {\operatorname{Re}b}\right) \geq {P}_{s}\left( {c}_{0}\right) = {c}_{0}, \] from which it follows that \( {\left| {\Theta }_{s}\left( 1\right) \left( x\right) \right| }^{2} \leq {c}_{0}^{-2}{\left| {P}_{s}\left( b\right) \left( x\right) {\Theta }_{s}\left( 1\right) \left( x\right) \right| }^{2} \) . Thus the measure \( {\left| {\Theta }_{s}\left( 1\right) \left( x\right) \right| }^{2}{dxds}/s \) must be Carleson. Therefore, the proof will be complete if we can show that both measures (4.6.46) and (4.6.47) are Carleson. Theorem 3.3.8 plays a key role here. We begin with the measure in (4.6.46). First we observe that the kernel \[ {L}_{s}\left( {x, y}\right) = {\int }_{{\mathbf{R}}^{n}}{\theta }_{s}\left( {x, z}\right) {\Phi }_{s}\left( {z - y}\right) {dz} \] of \( {\Theta }_{s}{P}_{s} \) satisfies (4.6.28) and (4.6.29). The verification of (4.6.28) is a straightforward consequence of the estimate in Appendix B.1, while (4.6.29) follows easily from the mean value theorem. It follows that the kernel of \[ {R}_{s} = {\Theta }_{s} - {\Theta }_{s}{P}_{s} \] satisfies the same estimates. Moreover, it is easy to see that \( {R}_{s}\left( 1\right) = 0 \) and thus the quadratic estimate (4.6.32) holds for \( {R}_{s} \) in view of Theorem 4.6.3. Therefore, the hypotheses of Theorem 3.3.8(c) are satisfied, and this gives that the measure in (4.6.46) is Carleson. We now continue with the measure in (4.6.47). Here we set \[ {T}_{s}\left( f\right) \left( x\right) = {\Theta }_{s}{P}_{s}\left( f\right) \left( x\right) - {P}_{s}\left( f\right) \left( x\right) {\Theta }_{s}\left( 1\right) \left( x\right) . 
\] The kernel of \( {T}_{s} \) is \( {L}_{s}\left( {x, y}\right) - {\Theta }_{s}\left( 1\right) \left( x\right) {\Phi }_{s}\left( {x - y}\right) \), which clearly satisfies (4.6.28) and (4.6.29), since \( {\Theta }_{s}\left( 1\right) \left( x\right) \) is a bounded function uniformly in \( s > 0 \) . We also observe that \( {T}_{s}\left( 1\right) = 0 \) . Using Theorem 4.6.3, we conclude that the quadratic estimate (4.6.32) holds for \( {T}_{s} \) . Therefore, the hypotheses of Theorem 3.3.8(c) are satisfied; hence the measure in (4.6.47) is Carleson. We conclude by observing that if we attempt to replace \( {\Theta }_{s} \) with \( {\widetilde{\Theta }}_{s} = {\Theta }_{s}{M}_{1 + i{A}^{\prime }} \) in the resolution identity (4.6.26), then \( {\widetilde{\Theta }}_{s}\left( 1\right) = 0 \) would hold, but the kernel of \( {\widetilde{\Theta }}_{s} \) would not satisfy the regularity estimate (4.6.29). The whole purpose of Theorem 4.6.6 was to find a certain balance between regularity and cancellation. ## Exercises 4.6.1. Given a function \( H \) on a Lipschitz graph \( \Gamma \), we associate a function \( h \) on the line by setting \( h\left( t\right) = H\left( {t + {iA}\left( t\right) }\right) \) . Prove that for all \( 0 < p < \infty \) we have \[ \parallel h{\parallel }_{{L}^{p}\left( \mathbf{R}\right) }^{p} \leq \parallel H{\parallel }_{{L}^{p}\left( \Gamma \right) }^{p} \leq \sqrt{1 + {L}^{2}}\parallel h{\parallel }_{{L}^{p}\left( \mathbf{R}\right) }^{p}, \] where \( L \) is the Lipschitz constant of the defining function \( A \) of the graph \( \Gamma \) . 4.6.2. Let \( A : \mathbf{R} \rightarrow \mathbf{R} \) satisfy \( \left| {A\left( y\right) - A\left( {y}^{\prime }\right) }\right| \leq L\left| {y - {y}^{\prime }}\right| \) for all \( y,{y}^{\prime } \in \mathbf{R} \) for some \( L > 0 \) . Let \( h \) be a Schwartz function on \( \mathbf{R} \) . 
(a) Show that for all \( s > 0 \) and \( x, y \in \mathbf{R} \) we have \[ \frac{{s}^{2} + {\left| x - y\right| }^{2}}{{\left| x - y\right| }^{2} + {\left| A\left( x\right) - A\left( y\right) + s\right| }^{2}} \leq 4{L}^{2} + 2. \] (b) Use the Lebesgue dominated convergence theorem to prove that for all \( x \in \mathbf{R} \) \[ \mathop{\lim }\limits_{{s \rightarrow 0}}{\int }_{\left| {x - y}\right| > \sqrt{s}}\frac{s\left( {1 + i{A}^{\prime }\left( y\right) }\right) h\left( y\right) }{{\left( y - x + i\left( A\left( y\right) - A\left( x\right) \right) + is\right) }^{2}}{dy} = 0. \] (c) Integrate directly to show that for all \( x \in \mathbf{R} \) we have \[ \mathop{\lim }\limits_{{s \rightarrow 0}}{\int }_{\left| {x - y}\right| \leq \sqrt{s}}\frac{s\left( {1 + i{A}^{\prime }\left( y\right) }\right) }{{\left( y - x + i\left( A\left( y\right) - A\left( x\right) \right) + is\right) }^{2}}{dy} = 0. \] (d) Use part (a) to prove that for all \( x \in \mathbf{R} \) we have \[ \mathop{\lim }\limits_{{s \rightarrow 0}}{\int }_{\left| {x - y}\right| \leq \sqrt{s}}\frac{s\left( {1 + i{A}^{\prime }\left( y\right) }\right) \left( {h\left( y\right) - h\left( x\right) }\right) }{{\left( y - x + i\left( A\left( y\right) - A\left( x\right) \right) + is\right) }^{2}}{dy} = 0. \] 
1172_(GTM8)Axiomatic Set Theory
Definition 5.9
Definition 5.9. Let \( {G}_{1} \) be an open subset of \( P \) and \( {G}_{2} \) be an open subset of \( \mathbf{F} \) . Then \[ {G}_{1}^{ * } \triangleq \bigcup \left\{ {N\left( p\right) \mid \left\lbrack p\right\rbrack \subseteq {G}_{1}}\right\} \] \[ {G}_{2}^{\Delta } \triangleq \bigcup \left\{ {\left\lbrack p\right\rbrack \mid N\left( p\right) \subseteq {G}_{2}}\right\} \] Remark. Clearly \( {G}_{1}^{ * } \) and \( {G}_{2}^{\Delta } \) are open subsets of \( \mathbf{F} \) and \( P \) respectively. Theorem 5.10. If \( {G}_{1} \) and \( {G}_{2} \) are open subsets of \( P \) and \( \mathbf{F} \) respectively then 1. \( {G}_{1} \subseteq {G}_{1}^{*\Delta } \) . 2. \( {G}_{2} \subseteq {G}_{2}^{\Delta * } \) . Proof. 1. \( a \in {G}_{1} \rightarrow \left\lbrack a\right\rbrack \subseteq {G}_{1} \) \[ \rightarrow N\left( a\right) \subseteq {G}_{1}^{ * } \] \( \rightarrow \left\lbrack a\right\rbrack \subseteq {G}_{1}^{*\Delta }\; \) (since \( {G}_{1}^{ * } \) is an open subset of \( \mathbf{F} \) ) \[ \rightarrow a \in {G}_{1}^{* \vartriangle }\text{.} \] 2. \( F \in {G}_{2} \rightarrow \left( {\exists a \in F}\right) \left\lbrack {N\left( a\right) \subseteq {G}_{2}}\right\rbrack \) \[ \rightarrow \left( {\exists a \in F}\right) \left\lbrack {\left\lbrack a\right\rbrack \subseteq {G}_{2}^{\Delta }}\right\rbrack \] \( \rightarrow \left( {\exists a \in F}\right) \left\lbrack {N\left( a\right) \subseteq {G}_{2}^{\Delta * }}\right\rbrack \; \) (since \( {G}_{2}^{\Delta } \) is an open subset of \( P \) ) \[ \rightarrow F \in {G}_{2}^{\Delta * }\text{.} \] Theorem 5.11. 1. If \( {G}_{1} \) and \( {G}_{2} \) are open subsets of \( P \) then \[ {G}_{1} \subseteq {G}_{2} \rightarrow {G}_{1}^{ * } \subseteq {G}_{2}^{ * } \] 2. If \( {G}_{1} \) and \( {G}_{2} \) are open subsets of \( \mathbf{F} \) then \[ {G}_{1} \subseteq {G}_{2} \rightarrow {G}_{1}^{\Delta } \subseteq {G}_{2}^{\Delta } \] Proof. Left to the reader. Theorem 5.12. 
If \( G \) is an open subset of \( \mathbf{F} \) and \( \left\lbrack a\right\rbrack \subseteq {G}^{\Delta } \), then \( N\left( a\right) \subseteq G \) . Proof. \[ \left\lbrack a\right\rbrack \subseteq {G}^{\Delta } \rightarrow a \in {G}^{\Delta } \] \[ \rightarrow \left( {\exists b}\right) \left\lbrack {N\left( b\right) \subseteq G \land a \in \left\lbrack b\right\rbrack }\right\rbrack \] \[ \rightarrow \left( {\exists b \geq a}\right) \left\lbrack {N\left( b\right) \subseteq G}\right\rbrack \] \[ \rightarrow \left( {\exists b}\right) \left\lbrack {N\left( a\right) \subseteq N\left( b\right) \subseteq G}\right\rbrack \] \[ \rightarrow N\left( a\right) \subseteq G\text{.} \] Theorem 5.13. If \( G \) is a regular open subset of \( P \) then \( {G}^{*\Delta } = G \) . Proof. \[ a \in {G}^{*\Delta } \rightarrow \left\lbrack a\right\rbrack \subseteq {G}^{*\Delta } \] \[ \rightarrow N\left( a\right) \subseteq {G}^{ * } \] \[ \rightarrow \left( {\forall F}\right) \left\lbrack {a \in F \rightarrow F \in {G}^{ * }}\right\rbrack \] \[ \rightarrow \left( {\forall F}\right) \left\lbrack {a \in F \rightarrow \left( {\exists b \in F}\right) \left\lbrack {\left\lbrack b\right\rbrack \subseteq G}\right\rbrack }\right\rbrack \text{.}\] For each \( c \leq a \) there is an ultrafilter \( {F}^{\prime } \) for \( \mathbf{P} \) such that \( c \in {F}^{\prime } \) . But then, since \( c \leq a \) and \( c \in {F}^{\prime } \), we have \( a \in {F}^{\prime } \) . Consequently if \( a \in {G}^{*\Delta } \) then \( \exists b \in {F}^{\prime } \cap G \) . Since \( {F}^{\prime } \) is an ultrafilter for \( \mathbf{P} \) and since both \( c \) and \( b \) are in \( {F}^{\prime } \) \[ \left( {\exists {b}^{\prime } \in {F}^{\prime }}\right) \left\lbrack {{b}^{\prime } \leq c \land {b}^{\prime } \leq b}\right\rbrack . \] But \( b \in G \land {b}^{\prime } \leq b \) . 
Therefore \( {b}^{\prime } \in G \) i.e., \[ a \in {G}^{*\Delta } \rightarrow \left( {\forall c \leq a}\right) \left( {\exists b \leq c}\right) \left\lbrack {b \in G}\right\rbrack \] \[ \rightarrow \left( {\forall c \leq a}\right) \left\lbrack {\left\lbrack c\right\rbrack \cap G \neq 0}\right\rbrack \] \[ \rightarrow \left( {\forall c \leq a}\right) \left\lbrack {c \in {G}^{ - }}\right\rbrack \] \[ \rightarrow \left\lbrack a\right\rbrack \subseteq {G}^{ - } \] \[ \rightarrow a \in {G}^{-0} \] \[ \rightarrow a \in G\text{.} \] Then by Theorem 5.10, \( {G}^{*\Delta } = G \) . Theorem 5.14. If \( G \) is an open subset of \( \mathbf{F} \) then \( {G}^{\Delta * } = G \) . Proof. \[ F \in {G}^{\Delta * } \rightarrow \left( {\exists a \in F}\right) \left\lbrack {\left\lbrack a\right\rbrack \subseteq {G}^{\Delta }}\right\rbrack \] \[ \rightarrow \left( {\exists a \in F}\right) \left\lbrack {N\left( a\right) \subseteq G}\right\rbrack \] \[ \rightarrow F \in G\text{.} \] Therefore by Theorem 5.10, \( {G}^{\Delta * } = G \) . Theorem 5.15. Let \( {G}_{1} \) and \( {G}_{2} \) be open sets of a topological space \( \langle X, T\rangle \) . If for each regular open set \( H \) \[ {G}_{1} \cap H = 0 \rightarrow {G}_{2} \cap H = 0 \] then \( {G}_{2} \subseteq {G}_{1}{}^{-0} \) . Proof. If \( H = {\left( X - {G}_{1}\right) }^{0} \) then \( H \) is regular open. If \( {G}_{1} \cap H = 0 \) then \( {G}_{2} \cap H = 0 \) and hence \[ {G}_{2} \cap {H}^{ - } = 0. \] Therefore \[ {G}_{2} \subseteq X - {H}^{ - } = {G}_{1}{}^{-0}. \] Theorem 5.16. 1. If \( {G}_{1} \) is an open subset of \( P \) then \[ {G}_{1}^{ * } = 0 \rightarrow {G}_{1} = 0. \] 2. If \( {G}_{2} \) is an open subset of \( \mathbf{F} \) \[ {G}_{2}^{\Delta } = 0 \rightarrow {G}_{2} = 0. \] Proof. Left to the reader. Theorem 5.17. 1. If \( G \) is a regular open subset of \( \mathbf{F} \) then \( {G}^{\Delta } \) is regular open. 2. If \( G \) is a regular open subset of \( P \) then \( {G}^{ * } \) is regular open. 
Proof. 1. Let \( {G}_{1} = {\left( {G}^{\Delta }\right) }^{-0} \) . Then \[ G = {G}^{\Delta * } \subseteq {\left( {G}^{\Delta }\right) }^{-0 * } = {G}_{1}^{ * }. \] If \( {G}_{2} \) is regular open and \( G \cap {G}_{2} = 0 \) then \[ {\left( {G}^{\Delta } \cap {G}_{2}^{\Delta }\right) }^{ * } \subseteq {G}^{\Delta * } \cap {G}_{2}^{\Delta * } = G \cap {G}_{2} = 0. \] Therefore \( {G}^{\Delta } \cap {G}_{2}^{\Delta } = 0 \) and hence \( {G}_{1} \cap {G}_{2}^{\Delta } = 0 \) . Furthermore \[ {\left( {G}_{1}^{ * } \cap {G}_{2}\right) }^{\Delta } \subseteq {G}_{1}^{*\Delta } \cap {G}_{2}^{\Delta } = {G}_{1} \cap {G}_{2}^{\Delta } = 0. \] Thus \( {G}_{1}^{ * } \cap {G}_{2} = 0 \) and hence, by Theorem 5.15, \( {G}_{1}^{ * } \subseteq {G}^{-0} = G \) . Consequently \[ {G}^{\Delta } = {G}_{1}^{*\Delta } = {G}_{1} \] i.e., \( {G}^{\Delta } \) is regular open. 2. Let \( {G}_{2} = {\left( {G}^{ * }\right) }^{-0} \) . Then \[ G \subseteq {G}^{*\Delta } \subseteq {G}_{2}^{\Delta } \] If \( H \) is regular open and \( G \cap H = 0 \) then \[ {\left( {G}^{ * } \cap {H}^{ * }\right) }^{\Delta } \subseteq {G}^{*\Delta } \cap {H}^{*\Delta } = G \cap H = 0. \] Therefore \( {G}^{ * } \cap {H}^{ * } = 0 \) and hence \( {G}_{2} \cap {H}^{ * } = 0 \) . Furthermore \[ {\left( {G}_{2}^{\Delta } \cap H\right) }^{ * } \subseteq {G}_{2}^{\Delta * } \cap {H}^{ * } = {G}_{2} \cap {H}^{ * } = 0. \] Consequently \( {G}_{2}^{\Delta } \cap H = 0 \) and hence \( {G}_{2}^{\Delta } \subseteq {G}^{-0} = G \) . Thus, by Theorem 5.14 \[ {G}^{ * } = {G}_{2}^{\Delta * } = {G}_{2} \] i.e., \( {G}^{ * } \) is regular open. Remark. From the foregoing theorems we obtain the following result. Theorem 5.18. If \( \mathbf{P} = \langle P, \leq \rangle \) is a partial order structure, then the Boolean algebra \( \mathbf{B} \) of regular open subsets of \( P \) is isomorphic to the Boolean algebra of regular open subsets of \( \mathbf{F} \) . Proof. 
The mapping \( * \) is a one-to-one, order preserving mapping from the first algebra onto the second. Remark. As you will see later, it is useful to consider the Boolean algebra of all regular open sets of a product topological space. So we shall show a general theorem about that. If a partial order structure \( \mathbf{P} = \langle P, \leq \rangle \) has a greatest element, then we denote it by \( 1 : \left( {\forall p \in P}\right) \left\lbrack {p \leq 1}\right\rbrack \) . In case \( P \) has an element 1, let \( {P}_{0} = P - \{ 1\} \) and \( {\mathbf{P}}_{0} = \left\langle {{P}_{0}, \leq }\right\rangle \) . Then clearly the Boolean algebra of all regular open subsets of \( \mathbf{P} \) is isomorphic to that of \( {\mathbf{P}}_{0} \) . Consequently, with regard to Boolean algebras of regular open subsets of partial order structures, we may assume that the partial order structures have a greatest element 1 . Definition 5.19. Let \( {\mathbf{P}}_{i} = \left\langle {{P}_{i}, \leq }\right\rangle, i \in I \) ( \( I \) an index set), be partial order structures, each having a greatest element \( {1}_{i} \) . Then the product structure \( \mathbf{P} \triangleq \mathop{\prod }\limits_{{i \in I}}{\mathbf{P}}_{i} \triangleq \langle P, \leq \rangle \) is the following partial order structure. 1. \( P \triangleq \left\{ {p \in \mathop{\prod }\limits_{{i \in I}}{P}_{i} \mid p\left( i\right) = {1}_{i}\text{ for all but finitely many }i\text{'s}}\right\} \) . 2. \( \left( {\forall p, q \in P}\right) \left\lbrack {p \leq q\overset{\Delta }{ \leftrightarrow }\left( {\forall i \in I}\right) \left\lbrack {p\left( i\right) \leq q\left( i\right) }\right\rbrack }\right\rbrack \) . 3. \( 1 \triangleq \) the unique \( p \in P \) such that \( \left( {\forall i \in I}\right) \left\lbrack {p\left( i\right) = {1}_{i}}\right\rbrack \) . Theorem 5.20. 
Let \( \mathbf{P} = \mathop{\prod }\limits_{{i \in I}}{\mathbf{P}}_{i} \) be given as above and let \( \mathbf{F} \) and \( {\mathbf{F}}_{i} \) be the \( {T}_{1} \) -spaces corresponding to \( \mathbf{P} \), and \( {\mathbf{P}}_{i} \) respectively in accordance with Definition 5.6. Then \(
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 13.6
Definition 13.6. Suppose that \( B \) and \( F \) are Hausdorff topological spaces. A fiber bundle with base \( B \) and fiber \( F \) is a Hausdorff topological space \( X \) together with a continuous map \( p : X \rightarrow B \), called the projection map, having the following properties. First, for each \( b \) in \( B \), the preimage \( {p}^{-1}\left( b\right) \) of \( b \) in \( X \) is homeomorphic to \( F \) . Second, for every \( b \) in \( B \), there is a neighborhood \( U \) of \( b \) such that \( {p}^{-1}\left( U\right) \) is homeomorphic to \( U \times F \) in such a way that the projection map is simply projection onto the first factor. In any fiber bundle, the sets of the form \( {p}^{-1}\left( b\right) \) are called the fibers. The second condition in the definition may be stated more pedantically as follows. For each \( b \in B \), there should exist a neighborhood \( U \) of \( b \) and a homeomorphism \( \Phi \) of \( {p}^{-1}\left( U\right) \) with \( U \times F \) having the property that \( p\left( x\right) = {p}_{1}\left( {\Phi \left( x\right) }\right) \), where \( {p}_{1} : U \times F \rightarrow U \) is the map \( {p}_{1}\left( {u, f}\right) = u \) . The simplest sort of fiber bundle is the product space \( X = B \times F \), with the projection map being simply the projection onto the first factor. Such a fiber bundle is called trivial. The second condition in the definition of a fiber bundle is called local triviality and it says that any fiber bundle must look locally like a trivial bundle. In general, \( X \) need not be globally homeomorphic to \( B \times F \) . If \( X \) were a trivial fiber bundle, then the fundamental group of \( X \) would be simply the product of the fundamental group of the base \( B \) and the fundamental group of the fiber \( F \) . 
In particular, if \( X \) were a trivial fiber bundle and \( {\pi }_{1}\left( B\right) \) were trivial, then \( {\pi }_{1}\left( X\right) \) would be isomorphic to \( {\pi }_{1}\left( F\right) \) . The following result says that if \( {\pi }_{1}\left( B\right) \) and \( {\pi }_{2}\left( B\right) \) are trivial, then the same conclusion holds, even if \( X \) is nontrivial. Theorem 13.7. Suppose that \( X \) is a fiber bundle with base \( B \) and fiber \( F \) . If \( {\pi }_{1}\left( B\right) \) and \( {\pi }_{2}\left( B\right) \) are trivial, then \( {\pi }_{1}\left( X\right) \) is isomorphic to \( {\pi }_{1}\left( F\right) \) . Proof. According to a standard topological result (e.g., Theorem 4.41 and Proposition 4.48 in [Hat]), there is a long exact sequence of homotopy groups for a fiber bundle. The portion of this sequence relevant to us is the following: \[ {\pi }_{2}\left( B\right) \underset{f}{ \rightarrow }{\pi }_{1}\left( F\right) \underset{g}{ \rightarrow }{\pi }_{1}\left( X\right) \underset{h}{ \rightarrow }{\pi }_{1}\left( B\right) . \] (13.1) Saying that the sequence is exact means that each map is a homomorphism and the image of each map is equal to the kernel of the following map. Since we are assuming \( {\pi }_{2}\left( B\right) \) is trivial, the image of \( f \) is trivial, which means the kernel of \( g \) is also trivial. Since \( {\pi }_{1}\left( B\right) \) is also trivial, the kernel of \( h \) must be \( {\pi }_{1}\left( X\right) \), which means that the image of \( g \) is \( {\pi }_{1}\left( X\right) \) . Thus, \( g \) is an isomorphism of \( {\pi }_{1}\left( F\right) \) with \( {\pi }_{1}\left( X\right) \) . Proposition 13.8. Suppose \( G \) is a matrix Lie group and \( H \) is a closed subgroup of \( G \) . 
Then \( G \) has the structure of a fiber bundle with base \( G/H \) and fiber \( H \), where the projection map \( p : G \rightarrow G/H \) is given by \( p\left( x\right) = \left\lbrack x\right\rbrack \), with \( \left\lbrack x\right\rbrack \) denoting the coset \( {xH} \in G/H \) . Proof. For any coset \( \left\lbrack x\right\rbrack \) in \( G/H \), the preimage of \( \left\lbrack x\right\rbrack \) under \( p \) is the set \( {xH} \subset G \), which is clearly homeomorphic to \( H \) . Meanwhile, the required local triviality property of the bundle follows from Lemma 11.21 and Theorem 11.22. (If we take an open set \( U \) in \( G/H \) as in the proof of Theorem 11.22, Lemma 11.21 tells us that the preimage of \( U \) under \( p \) is homeomorphic to \( U \times H \) in such a way that the projection \( p \) is just projection onto the first factor.) Proposition 13.9. Consider the map \( p : \mathrm{{SO}}\left( n\right) \rightarrow {S}^{n - 1} \) given by \[ p\left( R\right) = R{e}_{n} \] (13.2) where \( {e}_{n} = \left( {0,\ldots ,0,1}\right) \) . Then \( \left( {\mathrm{{SO}}\left( n\right), p}\right) \) is a fiber bundle with base \( {S}^{n - 1} \) and fiber \( \mathrm{{SO}}\left( {n - 1}\right) \) . Proof. We think of \( \mathrm{{SO}}\left( {n - 1}\right) \) as the (closed) subgroup of \( \mathrm{{SO}}\left( n\right) \) consisting of block diagonal matrices of the form \[ R = \left( \begin{matrix} {R}^{\prime } & 0 \\ 0 & 1 \end{matrix}\right) \] with \( {R}^{\prime } \in \mathrm{{SO}}\left( {n - 1}\right) \) . By Proposition 13.8, \( \mathrm{{SO}}\left( n\right) \) is a fiber bundle with base \( \mathrm{{SO}}\left( n\right) /\mathrm{{SO}}\left( {n - 1}\right) \) and fiber \( \mathrm{{SO}}\left( {n - 1}\right) \) . Now, it is easy to see that \( \mathrm{{SO}}\left( n\right) \) acts transitively on the sphere \( {S}^{n - 1} \) . Thus, the map \( p \) in (13.2) maps \( \mathrm{{SO}}\left( n\right) \) onto \( {S}^{n - 1} \) . 
Since \( R{e}_{n} = {e}_{n} \) if and only if \( R \in \mathrm{{SO}}\left( {n - 1}\right) \), we see that \( p \) descends to a (continuous) bijection of \( \mathrm{{SO}}\left( n\right) /\mathrm{{SO}}\left( {n - 1}\right) \) onto \( {S}^{n - 1} \) . Since both \( \mathrm{{SO}}\left( n\right) /\mathrm{{SO}}\left( {n - 1}\right) \) and \( {S}^{n - 1} \) are compact, this map is actually a homeomorphism (Theorem 4.17 in [Rud1]). Thus, \( \mathrm{{SO}}\left( n\right) \) is a fiber bundle of the claimed sort. Proposition 13.10. For all \( n \geq 3 \), the fundamental group of \( \mathrm{{SO}}\left( n\right) \) is isomorphic to \( \mathbb{Z}/2 \) . Meanwhile, \( {\pi }_{1}\left( {\mathrm{{SO}}\left( 2\right) }\right) \cong \mathbb{Z} \) . Proof. Suppose that \( n \) is at least 4, so that \( n - 1 \) is at least 3 . Then, by Proposition 13.5, \( {\pi }_{1}\left( {S}^{n - 1}\right) \) and \( {\pi }_{2}\left( {S}^{n - 1}\right) \) are trivial and, so, Theorem 13.7 and Proposition 13.9 tell us that \( {\pi }_{1}\left( {\mathrm{{SO}}\left( n\right) }\right) \) is isomorphic to \( {\pi }_{1}\left( {\mathrm{{SO}}\left( {n - 1}\right) }\right) \) . Thus, \( {\pi }_{1}\left( {\mathrm{{SO}}\left( n\right) }\right) \) is isomorphic to \( {\pi }_{1}\left( {\mathrm{{SO}}\left( 3\right) }\right) \) for all \( n \geq 4 \) . It remains to show that \( {\pi }_{1}\left( {\mathrm{{SO}}\left( 3\right) }\right) \cong \mathbb{Z}/2 \) . This can be done by noting that \( \mathrm{{SO}}\left( 3\right) \) is homeomorphic to \( {\mathbb{{RP}}}^{3} \), as in Proposition 1.17, or by observing that the map \( \Phi \) in Proposition 1.19 is a two-to-one covering map from \( \mathrm{{SU}}\left( 2\right) \sim {S}^{3} \) onto \( \mathrm{{SO}}\left( 3\right) \) . Finally, we observe that \( \mathrm{{SO}}\left( 2\right) \) is homeomorphic to the unit circle \( {S}^{1} \), so that \( {\pi }_{1}\left( {\mathrm{{SO}}\left( 2\right) }\right) \cong \mathbb{Z} \) (Theorem 1.7 in [Hat]). 
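The two-to-one covering \( \mathrm{{SU}}\left( 2\right) \rightarrow \mathrm{{SO}}\left( 3\right) \) can be checked numerically (a sketch: the concrete model of \( \Phi \) by conjugation of Pauli-type matrices is an assumed realization of the map in Proposition 1.19):

```python
import numpy as np

# An assumed concrete model of the 2-to-1 map SU(2) -> SO(3):
# Phi(U)_{jk} = (1/2) tr(sigma_j U sigma_k U*), with Pauli matrices sigma_j.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

def Phi(U):
    Ud = U.conj().T
    return np.array([[0.5 * np.trace(sj @ U @ sk @ Ud).real
                      for sk in sigma] for sj in sigma])

def lift(theta):
    # lift of the loop of rotations by angle theta in the (x2, x3)-plane
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * s1

U = lift(1.0)
R = Phi(U)
print(np.allclose(Phi(-U), R))                    # Phi(-U) = Phi(U): two-to-one
print(np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0))
print(np.allclose(lift(2 * np.pi), -np.eye(2)))   # the lift ends at -I, not I
```

The last line exhibits numerically why the loop of rotations is homotopically nontrivial: its lift to \( \mathrm{{SU}}\left( 2\right) \) starts at \( I \) and ends at \( -I \) .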
If one looks into the proof of the long exact sequence of homotopy groups for a fiber bundle, one finds that the map \( g \) in (13.1) is induced by the inclusion of \( F \) into \( X \) . Thus, if \( l \) is a homotopically nontrivial loop in \( \mathrm{{SO}}\left( n\right) \), then after we include \( \mathrm{{SO}}\left( n\right) \) into \( \mathrm{{SO}}\left( {n + 1}\right) \), the loop \( l \) is still homotopically nontrivial. Meanwhile, we may take \( \mathrm{{SU}}\left( 2\right) \) as the universal cover of \( \mathrm{{SO}}\left( 3\right) \), with covering map being the homomorphism \( \Phi \) in Proposition 1.19 (compare Exercise 8). Now, if we take \( l \) to be the loop in \( \mathrm{{SO}}\left( 3\right) \) consisting of rotations by angle \( \theta \) in the \( \left( {{x}_{2},{x}_{3}}\right) \) -plane, \( 0 \leq \theta \leq {2\pi } \), the computations in (1.15) and (1.16) show that the lift of \( l \) to \( \mathrm{{SU}}\left( 2\right) \) is not a loop. (Rather, the lift will start at \( I \) and end at \( - I \) .) Thus, by Corollary 13.4, \( l \) is homotopically nontrivial in \( \mathrm{{SO}}\left( 3\right) \) . But this loop \( l \) is conjugate in \( \mathrm{{SO}}\left( 3\right) \) to the loop of rotations in the \( \left( {{x}_{1},{x}_{2}}\right) \) -plane, so that loop is also homotopically nontrivial. Thus, by the discussion in the previous paragraph, we may say that, for any \( n \geq 3 \), the one nontrivial homotopy class in \( \mathrm{{SO}}\left( n\right) \) is represented by the loop \[ l\left( \theta \right) \mathrel{\text{:=}} \left( \begin{array}{rrrrr} \cos \theta & - \sin \theta & & & \\ \sin \theta & \cos \theta & & & \\ & & 1 & & \\ & & & \ddots & \\ & & & & 1 \end{array}\right) ,\;0 \leq \theta \leq {2\pi }. \] (Compare Exercise 6.) Proposition 13.11. The group \( \mathrm{{SU}}\left( n\right) \) is simply connected for all \( n \geq 2 \) . 
For all \( n \geq 1 \), we have that \( {\pi }_{1}\left( {\mathrm{U}\left( n\right) }\right) \cong \mathbb{Z} \) . Proof. For all \( n \geq 2 \), the group \( \mathrm{{SU}}\left( n\right) \) acts transitively on the sphere \( {S}^{{2n} - 1} \) . By a small modification of the proof of Proposition 13.9, \( \mathrm{{SU}}\left( n\right) \) is a fiber bundle with base \( {S}^{{2n} - 1} \) and fiber \( \mathrm{{SU}}\left( {n - 1}\right) \) . Since \( {2n} - 1 \geq 3 \) for all \( n \geq 2 \), Theorem 13.7 and Proposition 13.5 tell
1329_[肖梁] Abstract Algebra (2022F)
Definition 1.2.5
Definition 1.2.5. Let \( \left( {G, * }\right) \) and \( \left( {H, \circ }\right) \) be groups. Then we may form a new group structure on \( G \times H \) with group operation given by \[ \left( {g, h}\right) \star \left( {{g}^{\prime },{h}^{\prime }}\right) \mathrel{\text{:=}} \left( {g * {g}^{\prime }, h \circ {h}^{\prime }}\right) . \] This is called the direct product of \( G \) and \( H \) . 1.3. Basic properties of groups. We list a few basic properties of groups as follows. Let \( G \) be a group. (1) The identity element of a group \( G \) is unique. (If both \( e \) and \( {e}^{\prime } \) are identity elements, \( e = e * {e}^{\prime } = {e}^{\prime } \) .) (2) The inverse of an element \( a \in G \) is unique. Better: if an element \( b \in G \) satisfies either \( b * a = e \) or \( a * b = e \), then we have \( b = {a}^{-1} \) . (If \( b * a = e \), then we have \( b = b * e = b * \left( {a * {a}^{-1}}\right) = \left( {b * a}\right) * {a}^{-1} = e * {a}^{-1} = {a}^{-1} \) . The other case is similar.) (3) \( {\left( {a}^{-1}\right) }^{-1} = a \) . (This follows from \( {a}^{-1} * {\left( {a}^{-1}\right) }^{-1} = e \) and the uniqueness of the inverse of \( {a}^{-1} \) by (2).) (4) \( {\left( a * b\right) }^{-1} = {b}^{-1} * {a}^{-1} \) . (This follows from \( \left( {{b}^{-1} * {a}^{-1}}\right) * \left( {a * b}\right) = e \) and (2).) (5) \( a * u = a * v \) implies \( u = v \) . Similarly, \( u * b = v * b \) implies \( u = v \) . (For the first implication, multiplying on the left by \( {a}^{-1} \) gives \( {a}^{-1} * a * u = {a}^{-1} * a * v \), which further implies \( e * u = e * v \), i.e. \( u = v \) .) Convention 1.3.1. When writing operations in a group, there are often two conventions: - Multiplicative convention: we typically choose this convention when we do not know whether \( G \) is abelian. We write \( a \cdot b \) (or simply \( {ab} \) ) for the group operation and 1 for the identity. 
For example: \[ {\left( {a}_{1}{a}_{2}\cdots {a}_{n}\right) }^{-1} = {a}_{n}^{-1}{a}_{n - 1}^{-1}\cdots {a}_{1}^{-1},\;{g}^{n} = \underset{n\text{ times }}{\underbrace{g \cdot g\cdots g}}. \] - Additive convention: we typically adopt this convention when \( G \) is abelian. We write + for the group operation,0 for the identity, and \( - a \) for the inverse of \( a \) . For example, \[ a + b = b + a,\;n \cdot a \mathrel{\text{:=}} \underset{n\text{ times }}{\underbrace{a + a + \cdots + a}}. \] 1.4. Important examples of groups I: Dihedral groups. A first important example of groups is the dihedral groups \[ {D}_{2n} = \text{symmetry group of a regular}n\text{-gon.} \] Since it is non-abelian, we use multiplicative convention for this. ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_6_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_6_0.jpg) Symmetry of a pentagon We may list elements of \( {D}_{2n} \) as follows: \[ {D}_{2n} = \left\{ \begin{array}{l} e = \text{ identity,}r = \text{ rotation counter-clockwise }\frac{2\pi }{n},{r}^{2},\ldots ,{r}^{n - 1} \\ s = {s}_{1} = \text{ reflection about }{\ell }_{1},{s}_{2} = \text{ reflection about }{\ell }_{2},\ldots ,{s}_{n} \end{array}\right\} \] \[ = \left\{ \begin{array}{l} e, r,{r}^{2},\ldots ,{r}^{n - 1} \\ s,{rs},{r}^{2}s,\ldots ,\ldots ,{r}^{n - 1}s \end{array}\right\} . \] Here \( {rs} = {s}_{2},{r}^{2}s = {s}_{3},\ldots \) To see this (using the example \( n = 5 \) ), we note that vertex \( 1\overset{s}{ \mapsto }1\overset{r}{ \mapsto }2 \) . So it must be the reflection about \( {\ell }_{2} \) . In particular, \( \left| {D}_{2n}\right| = {2n} \) . We may rewrite this group in a more efficient form: \[ {D}_{2n} = \left\langle {r, s\left| {\;\begin{array}{l} {r}^{n} = 1,{s}^{2} = 1 \\ {srs} = {r}^{-1} \end{array}}\right. }\right\rangle . \] This means: \( {D}_{2n} \) consists of the set of words in \( r, s,{r}^{-1},{s}^{-1} \) but subject to the given relations. 
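These generators and relations can be verified computationally (a sketch: representing \( {D}_{2n} \) by its permutation action on the vertices \( 0,\ldots, n - 1 \), with \( r\left( k\right) = k + 1 \) and \( s\left( k\right) = - k \) modulo \( n \) as an assumed concrete realization):

```python
# Sketch: D_{2n} for n = 5, realized (an assumption) as permutations of the
# vertices 0..n-1, with r(k) = k+1 mod n and s(k) = -k mod n.
n = 5
e = tuple(range(n))
r = tuple((k + 1) % n for k in range(n))   # rotation
s = tuple((-k) % n for k in range(n))      # reflection fixing vertex 0

def compose(p, q):                          # (p . q)(k) = p(q(k))
    return tuple(p[q[k]] for k in range(n))

def power(p, m):
    out = e
    for _ in range(m % n):
        out = compose(p, out)
    return out

assert compose(compose(s, r), s) == power(r, n - 1)            # srs = r^{-1}
assert all(compose(compose(s, power(r, i)), s) == power(r, n - i)
           for i in range(n))                                  # s r^i s = r^{-i}

# the 2n words r^i and r^i s exhaust the group
group = {compose(power(r, i), x) for i in range(n) for x in (e, s)}
print(len(group))  # 10
```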
(The relation \( {srs} = {r}^{-1} \) may be seen as follows: draw the regular \( n \) -gon on a piece of paper; then \( {srs} \) first flips the paper, then rotates, and then flips back, which is the same as rotating backwards by \( {r}^{-1} \) .) A fun exercise is to see that \( {srs} = {r}^{-1} \) implies: \[ s{r}^{i}s = \underset{i\text{ copies of }{srs}}{\underbrace{{srs} \cdot {srs}\cdots {srs}}} = {r}^{-1}\cdots {r}^{-1} = {r}^{-i}. \] Definition 1.4.1. A subset \( S = \left\{ {{s}_{1},\ldots ,{s}_{n}}\right\} \) of a group \( G \) is called a set of generators if every element of \( G \) can be written as a product of \( {s}_{1},\ldots ,{s}_{n},{s}_{1}^{-1},\ldots ,{s}_{n}^{-1} \) . An equality consisting of generators and their inverses is called a relation (e.g. \( {srs} = {r}^{-1} \) ). We write \( G = \left\langle {{s}_{1},\ldots ,{s}_{n} \mid {R}_{1},{R}_{2},\ldots }\right\rangle \) if all relations in \( G \) can be deduced from the relations \( {R}_{1},{R}_{2},\ldots \) . Example 1.4.2. The group \( {\mathbf{Z}}_{6} = \left\langle {t \mid {t}^{6} = 1}\right\rangle \) . We may also write \( {\mathbf{Z}}_{6} = \left\langle {r, s \mid {r}^{3} = {s}^{2} = 1,{rs} = {sr}}\right\rangle \) (if \( r \) represents \( \overline{2} \) and \( s \) represents \( \overline{3} \) ). So there might be many ways to represent the same group using generators and relations. 1.5. Important examples of groups II: permutation groups. The second example of groups is the permutation groups or symmetric groups. Definition 1.5.1. Let \( \Omega \) be a set. The set \[ {S}_{\Omega } \mathrel{\text{:=}} \{ \text{ bijections }\sigma : \Omega \overset{ \sim }{ \rightarrow }\Omega \} \] admits a group structure: - the group operation is composition: \( {\sigma \tau } : \Omega \xrightarrow[]{\tau }\Omega \xrightarrow[]{\sigma }\Omega \) ; - the identity element is id : \( \Omega \rightarrow \Omega \) ; - the inverse of the element \( \sigma \) is the inverse map. 
This \( {S}_{\Omega } \) is called the symmetric group or the permutation group of \( \Omega \). When \( \Omega = \{ 1,2,\ldots, n\} \), we write \( {S}_{n} \) for \( {S}_{\Omega } \) instead. Notation 1.5.2. There are two ways to represent elements of \( {S}_{n} \) : Expression 1: For example, we write \[ \sigma = \left( \begin{array}{lllllll} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 7 & 5 & 1 & 3 & 2 & 6 & 4 \end{array}\right) \] to mean the bijection that sends \( 1 \mapsto 7,2 \mapsto 5,3 \mapsto 1,\ldots \) (reading the columns vertically). We may alternatively express this using a diagram ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_7_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_7_0.jpg) One sees that \( \sigma \) cyclically permutes \( 1,7,4,3 \) in that order, and swaps 2 with 5. So we have the following. Expression 2: We rewrite \( \sigma \) as (1743)(25). Here, for distinct numbers \( {a}_{1},\ldots ,{a}_{r} \in \{ 1,\ldots, n\} \), we call \( \left( {{a}_{1}{a}_{2}\cdots {a}_{r}}\right) \) a cycle. It represents the permutation of \( \{ 1,\ldots, n\} \) that sends \( {a}_{i} \mapsto {a}_{i + 1} \) and \( {a}_{r} \mapsto {a}_{1} \) while fixing all other numbers. Thus, \( \sigma = \left( {1743}\right) \left( {25}\right) \) can be viewed as a composition of two cycles: ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_7_1.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_7_1.jpg) Writing an element \( \sigma \in {S}_{n} \) as a product of disjoint cycles is called the cycle decomposition of \( \sigma \). Properties 1.5.3. (1) \( {S}_{n} \) is a non-commutative group for \( n \geq 3 \). (2) Disjoint cycles commute with each other. This makes cycle decompositions an effective computational tool. Taking the \( \sigma \) above as an example: \[ {\sigma }^{2} = {\left( {1743}\right) }^{2}{\left( {25}\right) }^{2} = \left( {14}\right) \left( {37}\right) . \] \[ {\sigma }^{-1} = \left( {1347}\right) \left( {25}\right) \] Exercise 1.5.4.
Cycles of two elements, namely \( \left( {ij}\right) \), are called transpositions. Prove the following statements in turn. (1) The group \( {S}_{n} \) is generated by all transpositions \( \left( {ij}\right) \) . (2) The group \( {S}_{n} \) is generated by all "adjacent" transpositions \( \left( {i, i + 1}\right) \) . (3) The group \( {S}_{n} \) is generated by \( \{ \left( {12}\right) ,\left( {{123}\ldots n}\right) \} \) . (The relations for \( {S}_{n} \) with the two generators (12) and \( \left( {{123}\cdots n}\right) \) are somewhat difficult to write down.) 1.6. Isomorphism of groups. When there are two groups \( G \) and \( H \), we often write \( {e}_{G} \) and \( {e}_{H} \) for their identity elements, respectively. Definition 1.6.1. Two groups \( \left( {G, * }\right) \) and \( \left( {H, \star }\right) \) are isomorphic if there exists a bijection \( \phi : G\overset{ \sim }{ \rightarrow }H \) such that, for any \( g, h \in G \) , (1) \( \phi \left( {g * h}\right) = \phi \left( g\right) \star \phi \left( h\right) \) ; (2) \( \phi \left( {e}_{G}\right) = {e}_{H} \) ; (3) \( \phi \left( {g}^{-1}\right) = \phi {\left( g\right) }^{-1} \) . We write \( G \simeq H \) ; such a map \( \phi \) is called an isomorphism. (In fact, we will see in the next lecture that condition (1) implies (2) and (3).) Example 1.6.2. (1) \( \exp : \left( {\mathbb{R}, + }\right) \rightarrow \left( {{\mathbb{R}}_{ > 0}, \cdot }\right) \) is an isomorphism. (2) The following is an isomorphism. \[ {\mathbf{Z}}_{n}\overset{ \cong }{ \rightarrow }{\mu }_{n} = \{ \text{ all }n\text{ th roots of unity in }\mathbb{C}\} \] \[ a \mapsto {\zeta }_{n}^{a} = {e}^{{2\pi i}\frac{a}{n}}. \] Remark 1.6.3. In group theory, isomorphic groups are considered "the same". A basic question in group theory is to classify groups with certain properties. For example, all groups of order 6 are isomorphic either to \( {\mathbf{Z}}_{6} \) or to \( {S}_{3} \) .
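Since an isomorphism preserves commutativity, the two isomorphism classes of order 6 can be told apart by brute force. A small sketch (the encodings of the two groups are our own):

```python
from itertools import permutations

# S_3 as the permutations of {0, 1, 2}, with composition as the operation
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))
s3_abelian = all(compose(p, q) == compose(q, p) for p in S3 for q in S3)

# Z_6 with addition modulo 6
z6_abelian = all((a + b) % 6 == (b + a) % 6 for a in range(6) for b in range(6))

print(len(S3), s3_abelian, z6_abelian)  # 6 False True
```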
In particular, this says that \( {D}_{6} \simeq {S}_{3} \) (by identifying the symmetries of a regular triangle with the permutations of its three vertices). Yet \( {\mathbf{Z}}_{6} ≄ {S}_{3} \) because \( {S}_{3} \) is not commutative. 1.7. Import
1092_(GTM249)Classical Fourier Analysis
Definition 3.6.8
Definition 3.6.8. A set of integers \( E \) is called a Sidon set if every function in \( {\mathcal{C}}_{E} \) has an absolutely convergent Fourier series. There are several characterizations of Sidon sets. We state them below. Proposition 3.6.9. The following assertions are equivalent for a subset \( E \) of \( \mathbf{Z} \) . (1) There is a constant \( K \) such that for all trigonometric polynomials \( P \) with \( \widehat{P} \) supported in \( E \) we have \[ \mathop{\sum }\limits_{{m \in \mathbf{Z}}}\left| {\widehat{P}\left( m\right) }\right| \leq K\parallel P{\parallel }_{{L}^{\infty }} \] (2) There exists a constant \( K \) such that \[ \parallel \widehat{f}{\parallel }_{{\ell }^{1}\left( \mathbf{Z}\right) } \leq K\parallel f{\parallel }_{{L}^{\infty }\left( {\mathbf{T}}^{1}\right) } \] for every bounded function \( f \) on \( {\mathbf{T}}^{1} \) with \( \widehat{f} \) supported in \( E \) . (3) Every function \( f \) in \( {\mathcal{C}}_{E} \) has an absolutely convergent Fourier series; i.e., \( E \) is a Sidon set. (4) For every bounded function \( b \) on \( E \) there is a finite Borel measure \( \mu \) on \( {\mathbf{T}}^{1} \) such that \( \widehat{\mu }\left( m\right) = b\left( m\right) \) for all \( m \in E \) . (5) For every function \( b \) on \( \mathbf{Z} \) with the property \( b\left( m\right) \rightarrow 0 \) as \( \left| m\right| \rightarrow \infty \), there is a function \( g \in {L}^{1}\left( {\mathbf{T}}^{1}\right) \) such that \( \widehat{g}\left( m\right) = b\left( m\right) \) for all \( m \in E \) . Proof. Suppose that (1) holds. Given \( f \) in \( {L}^{\infty }\left( {\mathbf{T}}^{1}\right) \) with \( \widehat{f} \) supported in \( E \), write \[ \left( {f * {F}_{N}}\right) \left( x\right) = \mathop{\sum }\limits_{{m = - N}}^{N}\left( {1 - \frac{\left| m\right| }{N + 1}}\right) \widehat{f}\left( m\right) {e}^{2\pi imx}, \] where \( {F}_{N} \) is the Fejér kernel.
These are trigonometric polynomials whose Fourier coefficients vanish on \( \mathbf{Z} \smallsetminus E \) . Applying (1) we obtain \[ \mathop{\sum }\limits_{{m = - N}}^{N}\left( {1 - \frac{\left| m\right| }{N + 1}}\right) \left| {\widehat{f}\left( m\right) }\right| \leq K{\begin{Vmatrix}f * {F}_{N}\end{Vmatrix}}_{{L}^{\infty }}. \] Letting \( N \rightarrow \infty \) we obtain (2). It is trivial that (2) implies (3). If (3) holds, then the map \( f \mapsto \widehat{f} \) is a linear bijection from \( {\mathcal{C}}_{E} \) to \( {\ell }^{1}\left( E\right) \) . Moreover its inverse mapping \( \widehat{f} \mapsto f \) is continuous, since \[ \parallel f{\parallel }_{{L}^{\infty }\left( {\mathbf{T}}^{1}\right) } \leq \mathop{\sup }\limits_{{t \in \left\lbrack {0,1}\right\rbrack }}\left| {\mathop{\sum }\limits_{{k \in \mathbf{Z}}}\widehat{f}\left( k\right) {e}^{2\pi ikt}}\right| \leq \mathop{\sum }\limits_{{k \in \mathbf{Z}}}\left| {\widehat{f}\left( k\right) }\right| = \parallel \widehat{f}{\parallel }_{{\ell }^{1}\left( \mathbf{Z}\right) }. \] By the open mapping theorem, it follows that \( f \mapsto \widehat{f} \) is a continuous mapping, which proves the existence of a constant \( K \) such that (1) holds. We have now proved the equivalence of (1), (2), and (3). We show that (2) implies (4). If \( E \) is a Sidon set and if \( b \) is a bounded function on \( E \), say \( \parallel b{\parallel }_{{\ell }^{\infty }} \leq 1 \), then the mapping \[ f \mapsto \mathop{\sum }\limits_{{m \in E}}\widehat{f}\left( m\right) b\left( m\right) \] is a bounded linear functional on \( {\mathcal{C}}_{E} \) with norm at most \( K \) . By the Hahn-Banach theorem this functional admits an extension to \( \mathcal{C}\left( {\mathbf{T}}^{1}\right) \) with the same norm.
Hence there is a measure \( \mu \), whose total variation \( \parallel \mu \parallel \) does not exceed \( K \), such that \[ \mathop{\sum }\limits_{{m \in E}}\widehat{f}\left( m\right) b\left( m\right) = {\int }_{{\mathbf{T}}^{1}}f\left( t\right) {d\mu }\left( t\right) . \] Taking \( f\left( t\right) = {e}^{2\pi imt} \) in the identity above, we obtain \( \widehat{\mu }\left( m\right) = b\left( m\right) \) for all \( m \in E \) . If (4) holds and \( b\left( m\right) \rightarrow 0 \) as \( \left| m\right| \rightarrow \infty \), using Lemma 3.3.2 there is a convex sequence \( c\left( m\right) \) such that \( c\left( m\right) > 0, c\left( m\right) \rightarrow 0 \) as \( \left| m\right| \rightarrow \infty, c\left( {-m}\right) = c\left( m\right) \) , and \( \left| {b\left( m\right) }\right| \leq c\left( m\right) \) for all \( m \in \mathbf{Z} \) . By (4), there is a finite Borel measure \( \mu \) with \( \widehat{\mu }\left( m\right) = b\left( m\right) /c\left( m\right) \) for all \( m \in E \) . By Theorem 3.3.4, there is a function \( g \) in \( {L}^{1}\left( {\mathbf{T}}^{1}\right) \) such that \( \widehat{g}\left( m\right) = c\left( m\right) \) for all \( m \in \mathbf{Z} \) . Then \( b\left( m\right) = \widehat{g}\left( m\right) \widehat{\mu }\left( m\right) \) for all \( m \in E \) . Since \( f = g * \mu \) is in \( {L}^{1} \), we have \( b\left( m\right) = \widehat{f}\left( m\right) \) for all \( m \in E \), and thus (4) implies (5). Finally, if (5) holds, we show (3). Given \( f \in {\mathcal{C}}_{E} \), we show that for an arbitrary sequence \( {d}_{m} \) tending to zero, we have \( \mathop{\sum }\limits_{{m \in \mathbf{Z}}}\left| {\widehat{f}\left( m\right) {d}_{m}}\right| < \infty \) ; this implies that \( \mathop{\sum }\limits_{{m \in \mathbf{Z}}}\left| {\widehat{f}\left( m\right) }\right| < \infty \) .
Given a sequence \( {d}_{m} \rightarrow 0 \), pick a function \( g \) in \( {L}^{1} \) such that \( \widehat{g}\left( m\right) \widehat{f}\left( m\right) = \left| {\widehat{f}\left( m\right) }\right| \left| {d}_{m}\right| \) for all \( m \in E \) by assumption (5). Then the series \[ \mathop{\sum }\limits_{{m \in \mathbf{Z}}}\widehat{g}\left( m\right) \widehat{f}\left( m\right) = \mathop{\sum }\limits_{{m \in \mathbf{Z}}}\widehat{f * g}\left( m\right) \] (3.6.25) has nonnegative terms and the function \( f * g \) is continuous, thus \( {F}_{N} * \left( {f * g}\right) \left( 0\right) \rightarrow \) \( \left( {f * g}\right) \left( 0\right) \) as \( N \rightarrow \infty \) . It follows that \( {D}_{N} * \left( {f * g}\right) \left( 0\right) \rightarrow \left( {f * g}\right) \left( 0\right) \), thus the series in (3.6.25) converges (see Exercise 3.5.4) and hence \( \mathop{\sum }\limits_{{m \in \mathbf{Z}}}\left| {\widehat{f}\left( m\right) {d}_{m}}\right| < \infty \) . Example 3.6.10. Every lacunary set is a Sidon set. Indeed, suppose that \( E \) is a lacunary set with constant \( A \) . If \( f \) is a continuous function which satisfies (3.6.24), then Theorem 3.6.6 gives that \[ \mathop{\sum }\limits_{{m \in E}}\left| {\widehat{f}\left( m\right) }\right| \leq C\left( A\right) \parallel f{\parallel }_{{L}^{\infty }} < \infty \] hence \( f \) has an absolutely convergent Fourier series. Example 3.6.11. There exist subsets of \( \mathbf{Z} \) that are not Sidon. For example, \( \mathbf{Z} \smallsetminus \{ 0\} \) is not a Sidon set. See Exercise 3.6.7. ## Exercises 3.6.1. Suppose that \( 0 < {\lambda }_{1} < {\lambda }_{2} < \cdots < {\lambda }_{N} \) is a lacunary sequence of integers with constant \( A \geq 3 \) .
Prove that for every integer \( m \) there exists at most one \( N \) -tuple \( \left( {{\varepsilon }_{1},\ldots ,{\varepsilon }_{N}}\right) \) with each \( {\varepsilon }_{j} \in \{ - 1,1,0\} \) such that \[ m = {\varepsilon }_{1}{\lambda }_{1} + \cdots + {\varepsilon }_{N}{\lambda }_{N} \] [Hint: Suppose there exist two such \( N \) -tuples. Pick the largest \( k \) such that the coefficients of \( {\lambda }_{k} \) are different.] 3.6.2. Is the sequence \( {\lambda }_{k} = \left\lbrack {e}^{{\left( \log k\right) }^{2}}\right\rbrack, k = 2,3,4,\ldots \) lacunary? 3.6.3. Let \( {a}_{k} \geq 0 \) for all \( k \in {\mathbf{Z}}^{ + } \) and \( 1 \leq p < \infty \) . Show that there exist constants \( {C}_{p},{c}_{p} \) such that for all \( N \in {\mathbf{Z}}^{ + } \) we have \[ {c}_{p}{\left( \mathop{\sum }\limits_{{k = 1}}^{N}{\left| {a}_{k}\right| }^{2}\right) }^{\frac{1}{2}} \leq {\left( {\int }_{0}^{1}{\left| \mathop{\sum }\limits_{{k = 1}}^{N}{a}_{k}{e}^{{2\pi i}{2}^{k}x}\right| }^{p}dx\right) }^{\frac{1}{p}} \leq {C}_{p}{\left( \mathop{\sum }\limits_{{k = 1}}^{N}{\left| {a}_{k}\right| }^{2}\right) }^{\frac{1}{2}}, \] while \[ \mathop{\sup }\limits_{{x \in \left\lbrack {0,1}\right\rbrack }}\left| {\mathop{\sum }\limits_{{k = 1}}^{N}{a}_{k}{e}^{{2\pi i}{2}^{k}x}}\right| = \mathop{\sum }\limits_{{k = 1}}^{N}\left| {a}_{k}\right| . \] 3.6.4. Suppose that \( 0 < {\lambda }_{1} < {\lambda }_{2} < \cdots \) is a lacunary sequence and let \( f \) be a bounded function on the circle that satisfies \( \widehat{f}\left( m\right) = 0 \) whenever \( m \in \mathbf{Z} \smallsetminus \left\{ {{\lambda }_{1},{\lambda }_{2},\ldots }\right\} \) . Suppose also that \[ \mathop{\sup }\limits_{{t \neq 0}}\frac{\left| f\left( t\right) - f\left( 0\right) \right| }{{\left| t\right| }^{\alpha }} = B < \infty \] for some \( 0 < \alpha < 1 \) . 
(a) Prove that there is a constant \( C \) such that \( \left| {\widehat{f}\left( {\lambda }_{k}\right) }\right| \leq {CB}{\lambda }_{k}^{-\alpha } \) for all \( k \geq 1 \) . (b) Prove that \( f \in {\dot{\Lambda }}_{\alpha }\left( {\mathbf{T}}^{1}\right) \) . [Hint: Let \( {2N} = \left\lbrack {\left( {1 - {A}^{-1}}\right) {\lambda }_{k}}\right\rbrack \) and let \( {K}_{N} \) be as in the proof of Proposition 3.6.2. Write \[ \widehat{f}\left( {\lambda }_{k}\right) = {\int }_{\left| x\right| \leq {N}^{-1}}\left( {f\left( x\right) - f\left( 0\right) }\right) {e}^{-{2\pi i}{\lambda }_{k}x}{K}_{N}\left( x\right) {dx} \] \[ + {\int }_{{N}^{-1} \leq \left| x\right| \leq \frac{1}{2}}\left( {f\left( x\right) - f\left( 0\right) }\right) {e}^{-{2\pi i}{\lambda }_{k}x}{K}_{N}\left( x\right) {dx}. \
109_The rising sea Foundations of Algebraic Geometry
Definition 1.53
Definition 1.53. A path in the chamber graph is called a gallery. Thus a gallery is a sequence of chambers \( \Gamma = \left( {{C}_{0},{C}_{1},\ldots ,{C}_{l}}\right) \) such that consecutive chambers \( {C}_{i - 1} \) and \( {C}_{i}\left( {i = 1,\ldots, l}\right) \) are adjacent. The integer \( l \) is called the length of \( \Gamma \) . We will write \[ \Gamma : {C}_{0},\ldots ,{C}_{l} \] and say that \( \Gamma \) is a gallery from \( {C}_{0} \) to \( {C}_{l} \), or that \( \Gamma \) connects \( {C}_{0} \) and \( {C}_{l} \) . The minimal length \( l \) of a gallery connecting two chambers \( C, D \) is called the gallery distance between \( C \) and \( D \) and is denoted \( d\left( {C, D}\right) \) . Finally, a gallery \( C = {C}_{0},\ldots ,{C}_{l} = D \) of minimal length \( l = d\left( {C, D}\right) \) is called a minimal gallery from \( C \) to \( D \) . This is the same as what is commonly called a geodesic in the chamber graph. Once we have proven that \( d = {d}_{\mathcal{H}} \), we will no longer need the notation \( {d}_{\mathcal{H}} \) , nor will we need to refer to the distance as "gallery distance," though we may still do so occasionally for emphasis. We sometimes represent a gallery schematically by means of a diagram \[ \Gamma : {C}_{0} - {C}_{1} - {C}_{2} - \cdots - {C}_{l}, \] which may be further decorated with hyperplanes as in the diagram (1.8). Warning. In some of the literature, including the precursor [53] of the present book, galleries are defined more generally to be sequences as above in which consecutive chambers are either equal or adjacent. Such sequences do come up naturally, as we will see, and we will call them pregalleries. A pregallery can be converted to a gallery by deleting repeated chambers. 
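To make galleries concrete, consider the toy arrangement of the \( n \) coordinate hyperplanes \( {x}_{i} = 0 \) in \( {\mathbb{R}}^{n} \) (our own illustrative example, not from the text): every sign vector is a chamber, two chambers are adjacent when exactly one sign differs, and a breadth-first search in the chamber graph computes gallery distances. In this special case one can check directly that the gallery distance equals the number of separating hyperplanes:

```python
from itertools import product
from collections import deque

n = 4  # arrangement of the n coordinate hyperplanes x_i = 0 in R^n
chambers = list(product((+1, -1), repeat=n))  # sign sequences

def adjacent(c, d):
    # adjacent chambers share a wall: exactly one sign differs
    return sum(ci != di for ci, di in zip(c, d)) == 1

def gallery_distance(c, d):
    # BFS in the chamber graph = minimal gallery length
    dist = {c: 0}
    queue = deque([c])
    while queue:
        x = queue.popleft()
        if x == d:
            return dist[x]
        for y in chambers:
            if adjacent(x, y) and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)

# gallery distance agrees with the number of separating hyperplanes
for c in chambers:
    for d in chambers:
        separating = sum(ci != di for ci, di in zip(c, d))
        assert gallery_distance(c, d) == separating
print("ok")
```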
We noted above that the metric \( {d}_{\mathcal{H}} \) of Definition 1.39 has the property that \( {d}_{\mathcal{H}}\left( {C,{C}^{\prime }}\right) = 1 \) if \( C \) and \( {C}^{\prime } \) are adjacent, i.e., if they are connected by an edge in the chamber graph. This motivates the following: Proposition 1.54. The chamber graph is connected, and the gallery distance \( d\left( {C, D}\right) \) is equal to \( {d}_{\mathcal{H}}\left( {C, D}\right) \) for any two chambers \( C, D \) . The crux of the proof is the following result: Lemma 1.55. For any two chambers \( C \neq D \), there is a chamber \( {C}^{\prime } \) adjacent to \( C \) such that \( {d}_{\mathcal{H}}\left( {{C}^{\prime }, D}\right) = {d}_{\mathcal{H}}\left( {C, D}\right) - 1 \) . Proof. Since \( C \) is defined by its set of walls (Proposition 1.32), there must be a wall of \( C \) that separates \( C \) from \( D \) . [Otherwise, we would have \( D \subseteq C \) , contradicting the fact that distinct chambers are disjoint.] Let \( A \) be the corresponding panel of \( C \), and let \( {C}^{\prime } \) be the projection \( {AD} \) (Section 1.4.6). Then \( {C}^{\prime } \) is adjacent to \( C \), and \( {d}_{\mathcal{H}}\left( {{C}^{\prime }, D}\right) = {d}_{\mathcal{H}}\left( {C, D}\right) - 1 \) . Proof of the proposition. Given two chambers \( C, D \), we may apply the lemma finitely many times to obtain a gallery of length \( {d}_{\mathcal{H}}\left( {C, D}\right) \) from \( C \) to \( D \) . In particular, the chamber graph is connected and \( d \leq {d}_{\mathcal{H}} \) . To prove the opposite inequality, consider a gallery \[ C = {C}_{0},{C}_{1},\ldots ,{C}_{l} = D \] of minimal length \( l = d\left( {C, D}\right) \) . 
Then \( {d}_{\mathcal{H}}\left( {{C}_{i - 1},{C}_{i}}\right) = 1 \) for \( i = 1,\ldots, l \), whence \( {d}_{\mathcal{H}}\left( {C, D}\right) \leq l \). Given a minimal gallery \( C = {C}_{0},\ldots ,{C}_{l} = D \), let \( {H}_{1},\ldots ,{H}_{l} \in \mathcal{H} \) be the hyperplanes such that \( {C}_{i - 1} \) and \( {C}_{i} \) are adjacent along \( {H}_{i} \) . [Warning: This notation has nothing to do with our original indexing of the elements of \( \mathcal{H} \) as \( {\left\{ {H}_{i}\right\} }_{i \in I} \) ; we will have no further need for that indexing.] We will refer to the \( {H}_{i} \) as the "walls crossed" by the gallery. Since exactly one component of the sign sequence changes as we move from one chamber to the next, and since exactly \( l = d\left( {C, D}\right) \) signs must change altogether, it is clear that \( {H}_{1},\ldots ,{H}_{l} \) are distinct and are precisely the elements of \( \mathcal{H} \) that separate \( C \) from \( D \) . Conversely, suppose we have a gallery from \( C \) to \( D \) that does not cross any wall more than once. If \( k \) is the length of the gallery, then exactly \( k \) signs change, so \( k = l \) and the gallery is minimal. This proves the following: Proposition 1.56. A gallery from \( C \) to \( D \) is minimal if and only if it does not cross any wall more than once. In this case the walls that it crosses are precisely those that separate \( C \) from \( D \) . Since the set \( \mathcal{C} = \mathcal{C}\left( \mathcal{H}\right) \) of chambers is a metric space, it has a well-defined diameter, which we will also refer to as the diameter of \( \Sigma \) ; by definition, it is the maximum distance \( d\left( {C, D}\right) \) between two chambers \( C, D \) . The following result is immediate from the interpretation of the metric on \( \mathcal{C} \) as \( {d}_{\mathcal{H}} \) : Proposition 1.57. The diameter of \( \mathcal{C} \) is \( m \mathrel{\text{:=}} \left| \mathcal{H}\right| \) .
For any chamber \( C \) , there is a unique chamber \( D \) with \( d\left( {C, D}\right) = m \), namely, the opposite chamber \( D = - C \) . Observe that for any chambers \( C \) and \( D \) , \[ d\left( {C, D}\right) + d\left( {D, - C}\right) = m. \] (1.9) Indeed, every hyperplane in \( \mathcal{H} \) separates \( D \) from either \( C \) or \( - C \), but not both. Thus if we concatenate a minimal gallery from \( C \) to \( D \) with a minimal gallery from \( D \) to \( - C \), we get a minimal gallery from \( C \) to \( - C \) . Consequently: Corollary 1.58. For any chambers \( C, D \), there is a minimal gallery from \( C \) to \( - C \) passing through \( D \) . We have confined ourselves so far to distances and galleries between chambers. But it is also possible to consider distances and galleries involving cells other than chambers. The basic facts about these are easily deduced from the chamber case via the theory of projections (Section 1.4.6); see Exercises 1.61 and 1.62 below. ## Exercises 1.59. Let \( C \) be a chamber. (a) If \( A \) is a cell that is not a chamber, show that \( {AC} \) is not opposite to \( C \) . (b) Conversely, if \( D \) is any chamber not opposite \( C \), then \( D = {AC} \) for some panel \( A \) of \( D \) . 1.60. Arguing as in the proof of Proposition 1.54, prove the following criterion for recognizing the distance function on a graph. Let \( G \) be a graph with vertex set \( \mathcal{V} \), and let \( \delta : \mathcal{V} \times \mathcal{V} \rightarrow {\mathbb{Z}}_{ + } \) be a function, where \( {\mathbb{Z}}_{ + } \) is the set of nonnegative integers. Call two vertices incident if they are connected by an edge. Assume: (1) \( \delta \left( {v, v}\right) = 0 \) for all vertices \( v \) . (2) If \( v \) and \( {v}^{\prime } \) are incident, then \( \left| {\delta \left( {v, w}\right) - \delta \left( {{v}^{\prime }, w}\right) }\right| \leq 1 \) for all vertices \( w \) . 
(3) Given vertices \( v \neq w \), there is a vertex \( {v}^{\prime } \) incident to \( v \) such that \( \delta \left( {{v}^{\prime }, w}\right) < \) \( \delta \left( {v, w}\right) \) . Then \( G \) is connected, and \( \delta \) is the graph metric. 1.61. Given \( A, C \in \Sigma \) with \( C \) a chamber, consider galleries \[ {C}_{0},\ldots ,{C}_{l} = C \] with \( A \leq {C}_{0} \) . Such a gallery will be said to connect \( A \) to \( C \) . Show that a gallery from \( A \) to \( C \) of minimal length must start with \( {C}_{0} = {AC} \) . Deduce that the minimal length \( d\left( {A, C}\right) \) of such a gallery is \( \left| {\mathcal{S}\left( {A, C}\right) }\right| \), where \( \mathcal{S}\left( {A, C}\right) \) is the set of hyperplanes in \( \mathcal{H} \) that strictly separate \( A \) from \( C \) . [A hyperplane is said to strictly separate two subsets if they are contained in opposite open half-spaces.] 1.62. More generally, given any two cells \( A, B \in \Sigma \), consider galleries \( \Gamma \) of the form \[ {C}_{0},\ldots ,{C}_{l} \] with \( A \leq {C}_{0} \) and \( B \leq {C}_{l} \) . In other words, \( \Gamma \) is a path in the chamber graph starting in \( {\mathcal{C}}_{ \geq A} \) and ending in \( {\mathcal{C}}_{ \geq B} \) . Show that the minimal length \( d\left( {A, B}\right) \) of such a gallery is \( \left| {\mathcal{S}\left( {A, B}\right) }\right| \), where \( \mathcal{S}\left( {A, B}\right) \) has the same meaning as in the previous exercise. More concisely, \[ d\left( {{\mathcal{C}}_{ \geq A},{\mathcal{C}}_{ \geq B}}\right) = \left| {\mathcal{S}\left( {A, B}\right) }\right| \] where the left side denotes the usual distance between subsets of a metric space. Moreover, the chambers \( {C}_{0} \) that can start a minimal gallery are precisely those having \( {AB} \) as a face. A glance at Dress-Scharlau [97] is illuminating in connection with the previous exercise. 1.63.
Generalize Corollary 1.58 as follows: For any cell \( A \) and chamber \( D \) , there is a minimal gallery from \( A \) to \( - A \) passing through \( D \) . 1.64. Proposition 1.57 can be viewed as giving a characterization of the chamber \( - C \) opposite a given chamber \( C \) in terms of the metric on \( \mathcal{C} \) . In this exercise we extend that characterization to arbitrary cells. (a) Fix a cell \( A \in \Sigma \), and consider the maximum value of \( d\left( {A, B}\right) \), as \( B \) varies over all cells. Show that \( d\left( {A, B}\right) \) achieves this maximum value if and only if \( B \geq -
109_The rising sea Foundations of Algebraic Geometry
Definition 6.28
Definition 6.28. We will call the subgroups of the form \( {P}_{J} \) standard parabolic subgroups, and we will call the left cosets \( g{P}_{J} \) standard parabolic cosets. As in the case of Coxeter complexes, the stabilizer calculation leads immediately to the following result: Corollary 6.29. The building \( \Delta \) is isomorphic, as a poset, to the set of standard parabolic cosets, ordered by reverse inclusion. Remark 6.30. Let \( Z \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{g \in G}}{gB}{g}^{-1} \) ; this is the normal subgroup of \( G \) consisting of the elements that act trivially on \( \Delta \) . Let \( \bar{G} \mathrel{\text{:=}} G/Z \) . By analogy with the situation for Coxeter groups and their associated complexes, one might expect to be able to recover \( \bar{G} \) from \( \Delta \) as the group \( {\operatorname{Aut}}_{0}\Delta \) of type-preserving automorphisms. This turns out to be false in general; counterexamples will be given in Section 6.9 below (see Remark 6.112(c)). ## 6.2 Bruhat Decompositions, Tits Subgroups, and BN-Pairs ## 6.2.1 Bruhat Decompositions We have seen that a Weyl-transitive action of a group \( G \) on a building leads to a subgroup \( B \) and a bijection \( C : W \rightarrow B \smallsetminus G/B \) with certain properties. Conversely, we will show that such a bijection leads easily to a Weyl-transitive action of \( G \) on a building. Definition 6.31. Suppose we are given a group \( G \), a subgroup \( B \), a Coxeter system \( \left( {W, S}\right) \), and a bijection \( C : W \rightarrow B \smallsetminus G/B \) satisfying the following condition: (B) For all \( s \in S \) and \( w \in W \) , \[ C\left( {sw}\right) \subseteq C\left( s\right) C\left( w\right) \subseteq C\left( {sw}\right) \cup C\left( w\right) . \] If \( l\left( {sw}\right) = l\left( w\right) + 1 \), then \( C\left( s\right) C\left( w\right) = C\left( {sw}\right) \) .
Then the bijection \( C \) is said to provide a Bruhat decomposition of type \( \left( {W, S}\right) \) for \( \left( {G, B}\right) \) . Note that we necessarily have \( C\left( 1\right) = B \) . For if \( {w}_{1} \in W \) is the element such that \( C\left( {w}_{1}\right) = B \), then we can take any \( s \in S \) and deduce that \[ C\left( s\right) = C\left( s\right) C\left( {w}_{1}\right) \supseteq C\left( {s{w}_{1}}\right) \] hence \( s = s{w}_{1} \) and \( {w}_{1} = 1 \) . It is now completely routine to reverse the arguments given in Section 6.1.6 and construct a building \( \Delta \), provided we use the W-metric approach to buildings. Namely, set \( \mathcal{C} \mathrel{\text{:=}} G/B \) and define \( \delta : \mathcal{C} \times \mathcal{C} \rightarrow W \) to be the composite \[ G/B \times G/B \rightarrow B \smallsetminus G/B \rightarrow W, \] where the first map is \( \left( {{gB},{hB}}\right) \mapsto B{g}^{-1}{hB} \), and the second is \( {C}^{-1} \) . Thus \[ \delta \left( {{gB},{hB}}\right) = w \Leftrightarrow {g}^{-1}h \in C\left( w\right) . \] One easily verifies the axioms (WD1), (WD2), and (WD3) of Section 5.1.1, so, by Section 5.6, we have a building \( \Delta \) with \( \mathcal{C}\left( \Delta \right) = \mathcal{C} \) . Moreover, the natural action of \( G \) on \( G/B \) induces an action of \( G \) on \( \Delta \), which we claim is Weyl transitive: Take the coset \( B \) as fundamental chamber, and consider two chambers \( {gB},{g}^{\prime }B \) with \( \delta \left( {B,{gB}}\right) = \delta \left( {B,{g}^{\prime }B}\right) \) . Then \( g \) and \( {g}^{\prime } \) are in the same double coset \( C\left( w\right) \), so \( {g}^{\prime } = {bg}{b}^{\prime } \) for some \( b,{b}^{\prime } \in B \) ; hence \( {g}^{\prime }B = {bgB} \) . Thus \( B \) , the stabilizer of the fundamental chamber, is transitive on the chambers at given Weyl distance from the fundamental chamber. This proves the claim. 
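A minimal concrete instance of these definitions (our own illustration, using the standard example of \( \mathrm{GL}_2 \) over a finite field): take \( G = {\mathrm{{GL}}}_{2}\left( {\mathbb{F}}_{2}\right) \), \( B \) the upper-triangular subgroup, and \( W = {S}_{2} = \{ 1, s\} \). The decomposition \( G = C\left( 1\right) \sqcup C\left( s\right) = B \sqcup {BsB} \) can then be checked by direct enumeration:

```python
from itertools import product

def mul(a, b):
    # 2x2 matrix product with entries reduced mod 2
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

# G = GL_2(F_2): 2x2 matrices over F_2 with nonzero determinant
G = {((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)
     if (a * d - b * c) % 2 == 1}
B = {g for g in G if g[1][0] == 0}   # upper-triangular subgroup
s = ((0, 1), (1, 0))                 # representative of the nontrivial Weyl element

BsB = {mul(b1, mul(s, b2)) for b1 in B for b2 in B}

assert len(G) == 6 and len(B) == 2
assert B.isdisjoint(BsB) and B | BsB == G    # G = C(1) ⊔ C(s)
# the length-decreasing case of (B): C(s)C(s) = C(1) ∪ C(s) = G
assert {mul(x, y) for x in BsB for y in BsB} == G
print(len(B), len(BsB))  # 2 4
```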
Note that Corollary 6.29 gives an explicit description of the building \( \Delta \) as a simplicial complex: It can be identified with the poset of standard parabolic cosets, ordered by reverse inclusion. [As a byproduct, we obtain the fact that \( {P}_{J} \mathrel{\text{:=}} \mathop{\bigcup }\limits_{{w \in {W}_{J}}}C\left( w\right) \) is in fact a subgroup of \( G \), which we will also verify algebraically in the next subsection.] Alternatively, one can derive this description from the W-metric theory, which says that \( \Delta \) is the poset of residues, ordered by reverse inclusion. Indeed, given \( J \subseteq S \), one checks directly from the definitions that two chambers \( {gB},{hB} \) are in the same \( J \) -residue if and only if \( {g}^{-1}h \in {P}_{J} \), so the \( J \) -residues are in \( 1 - 1 \) correspondence with the left \( {P}_{J} \) -cosets. Definition 6.32. Given a Bruhat decomposition for \( \left( {G, B}\right) \), we denote by \( \Delta \left( {G, B}\right) \) the poset of standard parabolic cosets, ordered by reverse inclusion. Remark 6.33. The notation \( \Delta \left( {G, B}\right) \) is somewhat misleading, since one needs the bijection \( C : W \rightarrow B \smallsetminus G/B \) in order to define the standard parabolic subgroups and hence the poset \( \Delta \left( {G, B}\right) \) . This abuse of notation is not serious, however, because it turns out that \( \Delta \left( {G, B}\right) \), if it is thick, depends only on the pair \( \left( {G, B}\right) \) . See Corollary 6.44. Combining the discussion above with Theorems 6.17 and 6.21, we obtain the following: Proposition 6.34. Given a Bruhat decomposition for \( \left( {G, B}\right) \), the poset \( \Delta = \) \( \Delta \left( {G, B}\right) \) is a building, and the natural action of \( G \) on \( \Delta \) by left translation is Weyl transitive and has \( B \) as the stabilizer of a fundamental chamber. 
Conversely, if a group \( G \) admits a Weyl-transitive action on a building \( \Delta \) and \( B \) is the stabilizer of a fundamental chamber, then \( \left( {G, B}\right) \) admits a Bruhat decomposition and \( \Delta \) is canonically isomorphic to \( \Delta \left( {G, B}\right) \) . Thus there is essentially a 1-1 correspondence between Bruhat decompositions and Weyl-transitive actions. Remark 6.35. In this subsection we have used the W-metric approach to buildings since it meshes so perfectly with the algebraic theory of Bruhat decompositions. Moreover, we do not know any way to prove Proposition 6.34 from the simplicial point of view, since there is no obvious way to construct apartments in \( \Delta \left( {G, B}\right) \) from the data given in Definition 6.31. In Section 6.2.5, however, when we develop the algebraic theory corresponding to strongly transitive actions, it will be possible to give an alternative treatment that is purely simplicial. This will be outlined in Exercise 6.54. Our next goal is to get a better algebraic understanding of Bruhat decompositions. ## 6.2.2 Axioms for Bruhat Decompositions If one wants to construct a building from group-theoretic data, it is of interest to minimize what has to be verified. It turns out that we can get by with axioms that appear to be weaker than the requirements in Section 6.2.1. Let \( G \) be a group, \( B \) a subgroup, \( \left( {W, S}\right) \) a Coxeter system, and \[ C : W \rightarrow B \smallsetminus G/B \] a function. Consider the following three axioms: (Bru1) \( C\left( w\right) = B \) if and only if \( w = 1 \) . (Bru2) \( C : W \rightarrow B \smallsetminus G/B \) is surjective, i.e., \[ G = \mathop{\bigcup }\limits_{{w \in W}}C\left( w\right) \] (Bru3) For any \( s \in S \) and \( w \in W \) , \[ C\left( {sw}\right) \subseteq C\left( s\right) C\left( w\right) \subseteq C\left( {sw}\right) \cup C\left( w\right) . 
\] There appears to be an asymmetry in (Bru3), which involves left multiplication by elements of \( S \) . But we will see in the next proposition that the axioms (Bru1)-(Bru3) imply the following "right" analogue of (Bru3): \( \left( {\mathbf{{Bru3}}}^{\prime }\right) \) For any \( s \in S \) and \( w \in W \) , \[ C\left( {ws}\right) \subseteq C\left( w\right) C\left( s\right) \subseteq C\left( {ws}\right) \cup C\left( w\right) . \] We now show that (Bru1), (Bru2), and (Bru3) suffice for a Bruhat decomposition. In other words, they imply that \( C \) is bijective and that the second assertion of (B) holds. For reasons that will become obvious in Section 6.2.3, we will take care to prove this without using the assumption that \( \left( {W, S}\right) \) is a Coxeter system. Proposition 6.36. Let \( G \) be a group and \( B \) a subgroup. Suppose we are given a group \( W \), a generating set \( S \) consisting of elements of order 2, and a function \( C : W \rightarrow B \smallsetminus G/B \) satisfying (Bru1),(Bru2), and (Bru3). Then the six conditions below are satisfied. In particular, \( C \) provides a Bruhat decomposition for \( \left( {G, B}\right) \) if \( \left( {W, S}\right) \) is a Coxeter system. (1) \( C \) is a bijection, i.e., \[ G = \mathop{\coprod }\limits_{{w \in W}}C\left( w\right) \] (2) \( C{\left( w\right) }^{-1} = C\left( {w}^{-1}\right) \) for all \( w \in W \) . Consequently, \( \left( {\mathbf{{Bru3}}}^{\prime }\right) \) holds. (3) If \( l\left( {sw}\right) \geq l\left( w\right) \) with \( s \in S \) and \( w \in W \), then \( C\left( s\right) C\left( w\right) = C\left( {sw}\right) \) . (4) Given a reduced decomposition \( w = {s}_{1}\cdots {s}_{l} \) of an element \( w \in W \), we have \( C\left( w\right) = C\left( {s}_{1}\right) \cdots C\left( {s}_{l}\right) \) .
(5) If \( l\left( {sw}\right) \leq l\left( w\right) \) with \( s \in S \) and \( w \in W \), and if \( \left\lbrack {C\left( s\right) : B}\right\rbrack \geq 2 \), then \( C\left( s\right) C\left( w\right) = C\left( {sw}\right) \cup C\left( w\right) \) . (6) Let \( J \subseteq S \) be an arbitrary subset. Then \( {P}_{J} \mathrel{\text{:=}} \mathop{\bigcup }\limits_{{w \in {W}_{J}}}C\left( w\right) \) is a su
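As a concrete sanity check (ours, not the book's), the axioms (Bru1)-(Bru3) can be verified by brute force for the smallest interesting example: \( G = {GL}\left( {2,{\mathbb{F}}_{2}}\right) \) with \( B \) the upper-triangular (Borel) subgroup and \( W = \{ 1, s\} \) generated by the permutation matrix \( s \) . The sketch below enumerates the double cosets \( C\left( w\right) = {BwB} \) directly; all helper names are our own.

```python
from itertools import product

def mat_mul(A, B):
    """Multiply two 2x2 matrices over the field F2."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

# G = GL(2, F2): all 2x2 matrices over F2 with determinant 1 mod 2.
G = [M for M in (tuple(map(tuple, (row[:2], row[2:])))
                 for row in product((0, 1), repeat=4))
     if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % 2 == 1]

I = ((1, 0), (0, 1))
s = ((0, 1), (1, 0))                   # the nontrivial Weyl group element
B = {M for M in G if M[1][0] == 0}     # Borel subgroup: upper triangular

def double_coset(w):
    """C(w) = B w B as a set of matrices."""
    return {mat_mul(mat_mul(b1, w), b2) for b1 in B for b2 in B}

C1, Cs = double_coset(I), double_coset(s)

assert C1 == B                               # (Bru1)
assert C1 | Cs == set(G) and not (C1 & Cs)   # (Bru2), and the union is disjoint
prod_ss = {mat_mul(x, y) for x in Cs for y in Cs}
assert C1 <= prod_ss <= C1 | Cs              # (Bru3) with w = s: C(ss) <= C(s)C(s) <= C(1) u C(s)
print("Bruhat axioms verified for GL(2, F2); |B| =", len(B), "|C(s)| =", len(Cs))
```

Here \( \left| G\right| = 6 \), \( \left| B\right| = 2 \), and the two cells \( C\left( 1\right) = B \) and \( C\left( s\right) = {BsB} \) (4 elements) partition \( G \), matching condition (1) of the proposition.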
1112_(GTM267)Quantum Theory for Mathematicians
Definition 16.37
Definition 16.37 If \( \Pi : G \rightarrow \mathrm{{GL}}\left( V\right) \) is a representation of a matrix Lie group \( G \), then a subspace \( W \) of \( V \) is called an invariant subspace if \( \Pi \left( g\right) w \in W \) for all \( g \in G \) and \( w \in W \) . Similarly, if \( \pi : \mathfrak{g} \rightarrow \mathrm{{gl}}\left( V\right) \) is a representation of a Lie algebra \( \mathfrak{g} \), then a subspace \( W \) of \( V \) is called an invariant subspace if \( \pi \left( X\right) w \in W \) for all \( X \in \mathfrak{g} \) and \( w \in W \) . A representation of a group or Lie algebra is called irreducible if the only invariant subspaces are \( W = V \) and \( W = \{ 0\} \) . Definition 16.38 If \( \left( {\Pi ,{V}_{1}}\right) \) and \( \left( {\Sigma ,{V}_{2}}\right) \) are representations of a matrix Lie group \( G \), a map \( \Phi : {V}_{1} \rightarrow {V}_{2} \) is called an intertwining map (or morphism) if \( \Phi \left( {\Pi \left( g\right) v}\right) = \Sigma \left( g\right) \Phi \left( v\right) \) for all \( v \in {V}_{1} \), with an analogous definition for intertwining maps of Lie algebra representations. If an intertwining map is an invertible linear map, it is called an isomorphism. Two representations are said to be isomorphic (or equivalent) if there exists an isomorphism between them. In the "action" notation, the requirement on an intertwining map \( \Phi \) is that \( \Phi \left( {g \cdot v}\right) = g \cdot \Phi \left( v\right) \), meaning that \( \Phi \) commutes with the action of \( G \) . A typical goal of representation theory is to classify all finite-dimensional irreducible representations of \( G \) up to isomorphism.
Given a representation \( \Pi : G \rightarrow \mathrm{{GL}}\left( V\right) \) of a matrix Lie group \( G \), we can identify \( \mathrm{{GL}}\left( V\right) \) with \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) and \( \mathrm{{gl}}\left( V\right) \) with \( \mathrm{{gl}}\left( {n;\mathbb{C}}\right) \) by picking a basis for \( V \) . We may then apply Theorem 16.23 to obtain a representation \( \pi : \mathfrak{g} \rightarrow \mathrm{{gl}}\left( V\right) \) such that \[ \Pi \left( {e}^{X}\right) = {e}^{\pi \left( X\right) } \] for all \( X \in \mathfrak{g} \) . Proposition 16.39 Suppose \( G \) is a connected matrix Lie group with Lie algebra \( \mathfrak{g} \) . Suppose that \( \Pi : G \rightarrow \mathrm{{GL}}\left( V\right) \) is a finite-dimensional representation of \( G \) and \( \pi : \mathfrak{g} \rightarrow \mathrm{{gl}}\left( V\right) \) is the associated Lie algebra representation. Then a subspace \( W \) of \( V \) is invariant under the action of \( G \) if and only if it is invariant under the action of \( \mathfrak{g} \) . In particular, \( \Pi \) is irreducible if and only if \( \pi \) is irreducible. Furthermore, two representations of \( G \) are isomorphic if and only if the associated Lie algebra representations are isomorphic. In general, given a representation \( \pi \) of \( \mathfrak{g} \), there may be no representation \( \Pi \) such that \( \pi \) and \( \Pi \) are related in the usual way. If, however, \( G \) is simply connected, Theorem 16.30 tells us that there is, in fact, a \( \Pi \) associated with every \( \pi \) . Proof. Suppose \( W \subset V \) is invariant under \( \pi \left( X\right) \) for all \( X \in \mathfrak{g} \) . Then \( W \) is invariant under \( \pi {\left( X\right) }^{m} \) for all \( m \) .
Since \( V \) is finite dimensional, any subspace of it is automatically a closed subset and thus \( W \) is invariant under \[ \Pi \left( {e}^{X}\right) = {e}^{\pi \left( X\right) } = \mathop{\sum }\limits_{{m = 0}}^{\infty }\frac{\pi {\left( X\right) }^{m}}{m!}. \] Since \( G \) is connected, every element of \( G \) is (Corollary 16.28) a product of exponentials of elements of \( \mathfrak{g} \), and so \( W \) is invariant under \( \Pi \left( A\right) \) for all \( A \in G \) . In the other direction, if \( W \) is invariant under \( \Pi \left( A\right) \) for all \( A \in G \), then since \( W \) is closed, it is invariant under \[ \pi \left( X\right) = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{\Pi \left( {e}^{hX}\right) - I}{h} \] for all \( X \in \mathfrak{g} \) . Now suppose \( {\Pi }_{1} \) and \( {\Pi }_{2} \) are two representations of \( G \), acting on vector spaces \( {V}_{1} \) and \( {V}_{2} \), respectively. If \( \Phi : {V}_{1} \rightarrow {V}_{2} \) is an invertible linear map, then an argument similar to the above shows \( \Phi {\Pi }_{1}\left( A\right) = {\Pi }_{2}\left( A\right) \Phi \) for all \( A \in G \) if and only if \( \Phi {\pi }_{1}\left( X\right) = {\pi }_{2}\left( X\right) \Phi \) for all \( X \in \mathfrak{g} \) . Thus, \( \Phi \) is an isomorphism of group representations if and only if it is an isomorphism of Lie algebra representations. ∎ Theorem 16.40 (Schur’s Lemma) If \( {V}_{1} \) and \( {V}_{2} \) are two irreducible representations of a group or Lie algebra, then the following hold. 1. If \( \Phi : {V}_{1} \rightarrow {V}_{2} \) is an intertwining map, then either \( \Phi = 0 \) or \( \Phi \) is an isomorphism. 2. If \( \Phi : {V}_{1} \rightarrow {V}_{2} \) and \( \Psi : {V}_{1} \rightarrow {V}_{2} \) are nonzero intertwining maps, then there exists a nonzero constant \( c \in \mathbb{C} \) such that \( \Phi = {c\Psi } \) . In particular, if \( \Phi \) is an intertwining map of \( {V}_{1} \) to itself then \( \Phi = {cI} \) .
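The conclusion of part 2 can be illustrated numerically (a sketch of ours, not from the text): for the irreducible action of \( \mathfrak{sl}\left( {2,\mathbb{C}}\right) \) on \( {\mathbb{C}}^{2} \) via the standard basis \( E, H, F \) (our choice of example), the space of self-intertwiners is computed as the null space of a linear system, and Schur's lemma predicts it consists exactly of the scalar matrices.

```python
import numpy as np

# Standard basis of sl(2,C), acting irreducibly on C^2.
E = np.array([[0, 1], [0, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)

# T intertwines the representation with itself iff T X = X T for
# X in {E, H, F}.  With the row-major convention vec(A T B) =
# (A kron B^T) vec(T), the condition T X - X T = 0 becomes
# (I kron X^T - X kron I) vec(T) = 0.
M = np.vstack([np.kron(np.eye(2), X.T) - np.kron(X, np.eye(2))
               for X in (E, H, F)])

_, svals, Vh = np.linalg.svd(M)
null_dim = int(np.sum(svals < 1e-10))   # dimension of the commutant
T = Vh[-1].reshape(2, 2)                # a null vector, reshaped to a matrix

assert null_dim == 1                          # Schur: commutant is 1-dimensional
assert np.allclose(T, T[0, 0] * np.eye(2))    # ... and spanned by the identity
print("self-intertwiners of C^2 under sl(2,C) are exactly the scalars")
```

The rank computation is just linear algebra; irreducibility is what forces the null space down to the one-dimensional span of \( \operatorname{vec}\left( I\right) \) .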
Although the first part of Schur's lemma holds for representations over an arbitrary field, the second part holds only for representations over algebraically closed fields. Proof. It is easy to see that \( \ker \Phi \) is an invariant subspace of \( {V}_{1} \) . Since \( {V}_{1} \) is irreducible, this means that either \( \ker \Phi = {V}_{1} \), in which case \( \Phi = 0 \) , or \( \ker \Phi = \{ 0\} \), in which case \( \Phi \) is injective. Similarly, the range of \( \Phi \) is invariant, and thus equal to either \( \{ 0\} \) or \( {V}_{2} \) . If \( \Phi \) is not zero, then the range of \( \Phi \) is not zero, hence all of \( {V}_{2} \) . Thus, if \( \Phi \) is not zero, it is both injective and surjective, establishing Point 1. For Point 2, since \( \Phi \) and \( \Psi \) are nonzero, they are isomorphisms, by Point 1. It suffices to prove that \( \Gamma \mathrel{\text{:=}} {\Phi }^{-1}\Psi \) is a multiple of the identity, where \( \Gamma \) is an intertwining map of \( {V}_{1} \) to itself. Since we are working over \( \mathbb{C} \), \( \Gamma \) must have at least one eigenvalue \( \lambda \) . If \( W \) denotes the \( \lambda \) -eigenspace of \( \Gamma \), then \( W \) is invariant under the action of the group or Lie algebra. After all, if \( {\Gamma w} = {\lambda w} \), then (in the notation of the group case) \( \Gamma \left( {\Pi \left( A\right) w}\right) = \Pi \left( A\right) {\Gamma w} = {\lambda \Pi }\left( A\right) w \) . Since \( \lambda \) is an eigenvalue of \( \Gamma \), the invariant subspace \( W \) is nonzero and thus \( W = {V}_{1} \), which means precisely that \( \Gamma = {\lambda I} \) . ∎
## 16.7.2 Unitary Representations
In quantum mechanics, we are interested not only in vector spaces, but, more specifically, in Hilbert spaces, since expectation values are defined in terms of an inner product. We wish to consider, then, actions of a group that preserve the inner product as well as the linear structure.
Although the Hilbert spaces in quantum mechanics are generally infinite dimensional, we restrict our attention in this section to the finite-dimensional case. Definition 16.41 Suppose \( V \) is a finite-dimensional Hilbert space over \( \mathbb{C} \) . Denote by \( \mathrm{U}\left( V\right) \) the group of invertible linear transformations of \( V \) that preserve the inner product. A (finite-dimensional) unitary representation of a matrix Lie group \( G \) is a continuous homomorphism \( \Pi : G \rightarrow \mathrm{U}\left( V\right) \) , for some finite-dimensional Hilbert space \( V \) . Proposition 16.42 Let \( \Pi : G \rightarrow \mathrm{{GL}}\left( V\right) \) be a finite-dimensional representation of a connected matrix Lie group \( G \), and let \( \pi \) be the associated representation of the Lie algebra \( \mathfrak{g} \) of \( G \) . Let \( \langle \cdot , \cdot \rangle \) be an inner product on \( V \) . Then \( \Pi \) is unitary with respect to \( \langle \cdot , \cdot \rangle \) if and only if \( \pi \left( X\right) \) is skew-selfadjoint with respect to \( \langle \cdot , \cdot \rangle \) for all \( X \in \mathfrak{g} \), that is, if and only if \[ \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \] for all \( X \in \mathfrak{g} \) . In a slight abuse of notation, we will refer to a representation \( \pi \) of a Lie algebra \( \mathfrak{g} \) on a finite-dimensional inner product space as unitary if \( \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \) for all \( X \in \mathfrak{g} \) . Proof. Suppose first that \( \Pi \left( A\right) \) is unitary for all \( A \in G \) . Then for all \( X \in \mathfrak{g} \) and \( t \in \mathbb{R} \) we have \[ \Pi {\left( {e}^{tX}\right) }^{ * } = \Pi {\left( {e}^{tX}\right) }^{-1} = \Pi \left( {e}^{-{tX}}\right) = {e}^{-{t\pi }\left( X\right) }. \] On the other hand, \[ \Pi {\left( {e}^{tX}\right) }^{ * } = {\left( {e}^{{t\pi }\left( X\right) }\right) }^{ * } = {e}^{{t\pi }{\left( X\right) }^{ * }}.
\] Thus, \[ {e}^{{t\pi }{\left( X\right) }^{ * }} = {e}^{-{t\pi }\left( X\right) } \] for all \( t \) . Differentiating at \( t = 0 \) yields \( \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \) . In the other direction, if \( \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \) for all \( X \in \mathfrak{g} \), then \[ \Pi {\left( {e}^{X}\right) }^{ * } = {e}^{\pi {\left( X\right) }^{ * }} = {e}^{-\pi \left( X\right) } = \Pi \left( {e}^{-X}\right) = \Pi {\left( {e}^{X}\right) }^{-1}, \] meaning that \( \Pi \left( {e}^{X}\right) \)
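The equivalence in Proposition 16.42 is easy to check numerically in a small case (our sketch; the helper `expm_normal` is our own name, not a library function): exponentiating a random skew-Hermitian matrix, i.e. an element of \( \mathfrak{u}\left( 3\right) \), yields a unitary matrix, and \( {\left( {e}^{X}\right) }^{ * } = {e}^{-X} = {\left( {e}^{X}\right) }^{-1} \) .

```python
import numpy as np

def expm_normal(X):
    """Matrix exponential of a normal (hence diagonalizable) matrix
    via its eigendecomposition."""
    w, V = np.linalg.eig(X)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
X = A - A.conj().T        # skew-Hermitian: X* = -X, an element of u(3)

U = expm_normal(X)
# Pi(e^X) = e^{pi(X)} is unitary precisely because pi(X)* = -pi(X):
assert np.allclose(U.conj().T @ U, np.eye(3), atol=1e-10)
# and (e^X)* = e^{X*} = e^{-X} = (e^X)^{-1}:
assert np.allclose(U.conj().T, expm_normal(-X), atol=1e-10)
print("e^X is unitary for skew-Hermitian X")
```

Differentiating \( t \mapsto {e}^{tX} \) at \( t = 0 \), as in the proof, recovers the skew-selfadjointness from the unitarity.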
1077_(GTM235)Compact Lie Groups
Definition 6.5
Definition 6.5. (a) Let \( \mathfrak{g} \) be the Lie algebra of a Lie subgroup of \( {GL}\left( {n,\mathbb{C}}\right) \) . The complexification \( {\mathfrak{g}}_{\mathbb{C}} \) of \( \mathfrak{g} \) is defined as \( {\mathfrak{g}}_{\mathbb{C}} = \mathfrak{g}{ \otimes }_{\mathbb{R}}\mathbb{C} \) . The Lie bracket on \( \mathfrak{g} \) is extended to \( {\mathfrak{g}}_{\mathbb{C}} \) by \( \mathbb{C} \) -linearity. (b) If \( \left( {\psi, V}\right) \) is a representation of \( \mathfrak{g} \), extend the domain of \( \psi \) to \( {\mathfrak{g}}_{\mathbb{C}} \) by \( \mathbb{C} \) -linearity. Then \( \left( {\psi, V}\right) \) is said to be irreducible under \( {\mathfrak{g}}_{\mathbb{C}} \) if there are no proper \( \psi \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) -invariant subspaces. Writing a matrix in terms of its skew-Hermitian and Hermitian parts, observe that \( \mathfrak{{gl}}\left( {n,\mathbb{C}}\right) = \mathfrak{u}\left( n\right) \oplus i\mathfrak{u}\left( n\right) \) . It follows that if \( \mathfrak{g} \) is the Lie algebra of a compact Lie group \( G \) realized with \( G \subseteq U\left( n\right) \), then \( {\mathfrak{g}}_{\mathbb{C}} \) may be identified with \( \mathfrak{g} \oplus i\mathfrak{g} \) equipped with the standard Lie bracket inherited from \( \mathfrak{{gl}}\left( {n,\mathbb{C}}\right) \) (Exercise 6.3). We will often make this identification without comment. In particular, \( \mathfrak{u}{\left( n\right) }_{\mathbb{C}} = \mathfrak{{gl}}\left( {n,\mathbb{C}}\right) \) .
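The decomposition \( \mathfrak{{gl}}\left( {n,\mathbb{C}}\right) = \mathfrak{u}\left( n\right) \oplus i\mathfrak{u}\left( n\right) \) is just the splitting of a matrix into its skew-Hermitian and Hermitian parts, which a few lines of numpy (our sketch, not from the text) make explicit:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

A = (X - X.conj().T) / 2   # skew-Hermitian part: A* = -A, so A lies in u(3)
S = (X + X.conj().T) / 2   # Hermitian part: S = i(-iS), with -iS in u(3)

assert np.allclose(X, A + S)                          # X = A + S
assert np.allclose(A.conj().T, -A)                    # A is skew-Hermitian
assert np.allclose((-1j * S).conj().T, 1j * S)        # -iS is skew-Hermitian
print("gl(3,C) = u(3) + i*u(3) decomposition verified")
```

Since \( A \in \mathfrak{u}\left( 3\right) \) and \( S = i\left( {-{iS}}\right) \in i\mathfrak{u}\left( 3\right) \), every \( X \) splits as claimed, and the splitting is clearly unique.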
Similarly, \( \mathfrak{{su}}{\left( n\right) }_{\mathbb{C}} = \mathfrak{{sl}}\left( {n,\mathbb{C}}\right) \), and \( \mathfrak{{so}}{\left( n\right) }_{\mathbb{C}} \) is realized by \[ \mathfrak{{so}}\left( {n,\mathbb{C}}\right) = \left\{ {X \in \mathfrak{{sl}}\left( {n,\mathbb{C}}\right) \mid {X}^{t} = - X}\right\} , \] and, realizing \( \mathfrak{{sp}}\left( n\right) \) as \( \mathfrak{u}\left( {2n}\right) \cap \mathfrak{{sp}}\left( {n,\mathbb{C}}\right) \) as in §4.1.3, \( \mathfrak{{sp}}{\left( n\right) }_{\mathbb{C}} \) is realized by \( \mathfrak{{sp}}\left( {n,\mathbb{C}}\right) \) (Exercise 6.3). Lemma 6.6. Let \( \mathfrak{g} \) be the Lie algebra of a Lie subgroup of \( {GL}\left( {n,\mathbb{C}}\right) \) and let \( \left( {\psi, V}\right) \) be a representation of \( \mathfrak{g} \) . Then \( V \) is irreducible under \( \mathfrak{g} \) if and only if it is irreducible under \( {\mathfrak{g}}_{\mathbb{C}} \) . Proof. Simply observe that since a subspace \( W \subseteq V \) is a complex subspace, \( W \) is \( \psi \left( \mathfrak{g}\right) \) -invariant if and only if it is \( \psi \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) -invariant. For example, \( \mathfrak{{su}}{\left( 2\right) }_{\mathbb{C}} = \mathfrak{{sl}}\left( {2,\mathbb{C}}\right) \) is equipped with the standard basis \[ E = \left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right) ,\;H = \left( \begin{matrix} 1 & 0 \\ 0 & - 1 \end{matrix}\right) ,\;F = \left( \begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right) \] (cf. Exercise 4.21).
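Assuming numpy, a quick check (ours, not the book's) confirms that \( E, H, F \) satisfy the \( \mathfrak{{sl}}\left( {2,\mathbb{C}}\right) \) relations \( \left\lbrack {H, E}\right\rbrack = {2E} \), \( \left\lbrack {H, F}\right\rbrack = - {2F} \), \( \left\lbrack {E, F}\right\rbrack = H \), as well as the splitting of \( E \) into \( \mathfrak{{su}}\left( 2\right) \) and \( i\,\mathfrak{{su}}\left( 2\right) \) parts used in the computation that follows:

```python
import numpy as np

E = np.array([[0, 1], [0, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)

def bracket(X, Y):
    """Lie bracket [X, Y] = XY - YX of matrices."""
    return X @ Y - Y @ X

assert np.array_equal(bracket(H, E), 2 * E)
assert np.array_equal(bracket(H, F), -2 * F)
assert np.array_equal(bracket(E, F), H)

# E written via two elements of su(2), as in the text's computation:
X1 = np.array([[0, 1], [-1, 0]], dtype=complex)    # in su(2)
X2 = np.array([[0, 1j], [1j, 0]], dtype=complex)   # in su(2)
assert np.allclose(E, 0.5 * X1 - 0.5j * X2)
print("sl(2,C) commutation relations verified")
```

The same three relations drive the raising/lowering behavior of \( E \) and \( F \) on the weight spaces of \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) .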
Since \( E = \frac{1}{2}\left( \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right) - \frac{i}{2}\left( \begin{matrix} 0 & i \\ i & 0 \end{matrix}\right) \), Equation 6.3 shows that the resulting action of \( E \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is given by \[ E \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) = \frac{1}{2}\left\lbrack {-k{z}_{1}^{k - 1}{z}_{2}^{n - k + 1} - \left( {k - n}\right) {z}_{1}^{k + 1}{z}_{2}^{n - k - 1}}\right\rbrack \] \[ - \frac{i}{2}\left\lbrack {-{ik}{z}_{1}^{k - 1}{z}_{2}^{n - k + 1} + i\left( {k - n}\right) {z}_{1}^{k + 1}{z}_{2}^{n - k - 1}}\right\rbrack \] \[ = - k{z}_{1}^{k - 1}{z}_{2}^{n - k + 1}\text{.} \] Similarly (Exercise 6.4), the action of \( H \) and \( F \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is given by (6.7) \[ H \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) = \left( {n - {2k}}\right) {z}_{1}^{k}{z}_{2}^{n - k} \] \[ F \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) = \left( {k - n}\right) {z}_{1}^{k + 1}{z}_{2}^{n - k - 1}. \] Irreducibility of \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is immediately apparent from these formulas (Exercise 6.7).
## 6.1.3 Weights
Let \( G \) be a compact Lie group and \( \left( {\pi, V}\right) \) a finite-dimensional representation of \( G \) . Fix a Cartan subalgebra \( \mathfrak{t} \) of \( \mathfrak{g} \) and write \( {\mathfrak{t}}_{\mathbb{C}} \) for its complexification. By Theorem 5.6, there exists an inner product, \( \left( {\cdot , \cdot }\right) \), on \( V \) that is \( G \) -invariant and for which \( {d\pi } \) is skew-Hermitian on \( \mathfrak{g} \) and is Hermitian on \( i\mathfrak{g} \) . Thus \( {\mathfrak{t}}_{\mathbb{C}} \) acts on \( V \) as a family of commuting normal operators and so \( V \) is simultaneously diagonalizable under the action of \( {\mathfrak{t}}_{\mathbb{C}} \) . In particular, the following definition is well defined. Definition 6.8.
Let \( G \) be a compact Lie group, \( \left( {\pi, V}\right) \) a finite-dimensional representation of \( G \), and \( \mathfrak{t} \) a Cartan subalgebra of \( \mathfrak{g} \) . There is a finite set \( \Delta \left( V\right) = \Delta \left( {V,{\mathfrak{t}}_{\mathbb{C}}}\right) \subseteq \) \( {\mathfrak{t}}_{\mathbb{C}}^{ * } \), called the weights of \( V \), so that \[ V = {\bigoplus }_{\alpha \in \Delta \left( V\right) }{V}_{\alpha } \] where \[ {V}_{\alpha } = \left\{ {v \in V \mid {d\pi }\left( H\right) v = \alpha \left( H\right) v, H \in {\mathfrak{t}}_{\mathbb{C}}}\right\} \] is nonzero. The above displayed equation is called the weight space decomposition of \( V \) with respect to \( {\mathfrak{t}}_{\mathbb{C}} \) . As an example, take \( G = {SU}\left( 2\right), V = {V}_{n}\left( {\mathbb{C}}^{2}\right) \), and \( \mathfrak{t} \) to be the diagonal matrices in \( \mathfrak{{su}}\left( 2\right) \) . Define \( {\alpha }_{m} \in {\mathfrak{t}}_{\mathbb{C}}^{ * } \) by requiring \( {\alpha }_{m}\left( H\right) = m \) . Then Equation 6.7 shows that the weight space decomposition for \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is \( {V}_{n}\left( {\mathbb{C}}^{2}\right) = {\bigoplus }_{k = 0}^{n}{V}_{n}{\left( {\mathbb{C}}^{2}\right) }_{{\alpha }_{n - {2k}}} \), where \( {V}_{n}{\left( {\mathbb{C}}^{2}\right) }_{{\alpha }_{n - {2k}}} = \mathbb{C}{z}_{1}^{k}{z}_{2}^{n - k} \) . Theorem 6.9. (a) Let \( G \) be a compact Lie group, \( \left( {\pi, V}\right) \) a finite-dimensional representation of \( G \), \( T \) a maximal torus of \( G \), and \( V = {\bigoplus }_{\alpha \in \Delta \left( {V,{\mathfrak{t}}_{\mathbb{C}}}\right) }{V}_{\alpha } \) the weight space decomposition. Each weight \( \alpha \in \Delta \left( V\right) \) is purely imaginary valued on \( \mathfrak{t} \) and is real valued on \( i\mathfrak{t} \) . (b) For \( t \in T \), choose \( H \in \mathfrak{t} \) so that \( {e}^{H} = t \) .
Then \( t{v}_{\alpha } = {e}^{\alpha \left( H\right) }{v}_{\alpha } \) for \( {v}_{\alpha } \in {V}_{\alpha } \) . Proof. Part (a) follows from the facts that \( {d\pi } \) is skew-Hermitian on \( \mathfrak{t} \) and is Hermitian on \( i\mathfrak{t} \) . Part (b) follows from the fact that \( \exp \mathfrak{t} = T \) and the relation \( {e}^{d\pi H} = \pi \left( {e}^{H}\right) . \) By \( \mathbb{C} \) -linearity, \( \alpha \in \Delta \left( V\right) \) is completely determined by its restriction to either \( \mathfrak{t} \) or \( i\mathfrak{t} \) . Thus we permit ourselves to interchangeably view \( \alpha \) as an element of any of the dual spaces \( {\mathfrak{t}}_{\mathbb{C}}^{ * },{\left( i\mathfrak{t}\right) }^{ * } \) (real valued), or \( {\mathfrak{t}}^{ * } \) (purely imaginary valued). In alternate notation (not used in this text), \( i\mathfrak{t} \) is sometimes written \( {\mathfrak{t}}_{\mathbb{C}}\left( \mathbb{R}\right) \) .
## 6.1.4 Roots
Let \( G \) be a compact Lie group. For \( g \in G \), extend the domain of \( \operatorname{Ad}\left( g\right) \) from \( \mathfrak{g} \) to \( {\mathfrak{g}}_{\mathbb{C}} \) by \( \mathbb{C} \) -linearity. Then \( \left( {\mathrm{{Ad}},{\mathfrak{g}}_{\mathbb{C}}}\right) \) is a representation of \( G \) with differential given by ad (extended by \( \mathbb{C} \) -linearity). It has a weight space decomposition \[ {\mathfrak{g}}_{\mathbb{C}} = {\bigoplus }_{\alpha \in \Delta \left( {{\mathfrak{g}}_{\mathbb{C}},{\mathfrak{t}}_{\mathbb{C}}}\right) }{\mathfrak{g}}_{\alpha } \] that is important enough to warrant its own name. Notice the zero weight space is \( {\mathfrak{g}}_{0} = \left\{ {Z \in {\mathfrak{g}}_{\mathbb{C}} \mid \left\lbrack {H, Z}\right\rbrack = 0, H \in {\mathfrak{t}}_{\mathbb{C}}}\right\} \) . Thus \[ {\mathfrak{g}}_{0} = {\mathfrak{t}}_{\mathbb{C}} \] since \( \mathfrak{t} \) is a maximal Abelian subspace of \( \mathfrak{g} \) .
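For \( \mathfrak{g} = \mathfrak{{su}}\left( 2\right) \) with \( \mathfrak{t} \) the diagonal subalgebra, this weight space decomposition of \( {\mathfrak{g}}_{\mathbb{C}} = \mathfrak{{sl}}\left( {2,\mathbb{C}}\right) \) can be computed directly: \( \operatorname{ad}\left( H\right) \) acts on the basis \( E, H, F \) with eigenvalues \( 2, 0, - 2 \), so the nonzero weight spaces are \( \mathbb{C}E \) and \( \mathbb{C}F \) and the zero weight space is \( {\mathfrak{t}}_{\mathbb{C}} = \mathbb{C}H \) . A short machine check (ours):

```python
import numpy as np

E = np.array([[0, 1], [0, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)

# ad(H)X = [H, X] acts diagonally on the basis {E, H, F} of sl(2,C):
for X, eig in ((E, 2), (H, 0), (F, -2)):
    assert np.array_equal(H @ X - X @ H, eig * X)
print("ad(H) eigenvalues on sl(2,C): 2, 0, -2")
```

The two nonzero eigenvalues are exactly the evaluations \( \pm \alpha \left( H\right) \) of the two roots of \( \mathfrak{{sl}}\left( {2,\mathbb{C}}\right) \) .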
In the definition below, it turns out to be advantageous to separate this zero weight space from the remaining nonzero weight spaces. Definition 6.10. Let \( G \) be a compact Lie group and \( \mathfrak{t} \) a Cartan subalgebra of \( \mathfrak{g} \) . There is a finite set of nonzero elements \( \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) = \Delta \left( {{\mathfrak{g}}_{\mathbb{C}},{\mathfrak{t}}_{\mathbb{C}}}\right) \subseteq {\mathfrak{t}}_{\mathbb{C}}^{ * } \), called the roots of \( {\mathfrak{g}}_{\mathbb{C}} \) , so that \[ {\mathfrak{g}}_{\mathbb{C}} = {\mathfrak{t}}_{\mathbb{C}} \oplus {\bigoplus }_{\alpha \in \Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) }{\mathfrak{g}}_{\alpha } \] where \( {\mathfrak{g}}_{\alpha } = \left\{ {Z \in {\mathfrak{g}}_{\mathbb{C}} \mid \left\lbrack {H, Z}\right\rbrack = \alpha \left( H\right) Z, H \in {\mathfrak{t}}_{\mathbb{C}}}\right\} \) is nonzero. The above displayed equation is called the root space decomposition of \( {\mathfrak{g}}_{\mathbb{C}} \) with respect to \( {\mathfrak{t}}_{\mathbb{C}} \) . Theorem 6.11. (a) Let \( G \) be a compact Lie group, \( \left( {\pi, V}\right) \) a finite-dimension
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 0.3.3
Definition 0.3.3 Let \( D \) be a Dedekind domain with field of fractions \( K \) . Then a \( D \) -submodule \( \mathcal{A} \) of \( K \) is a fractional ideal of \( D \) if there exists a nonzero \( \alpha \in D \) such that \( \alpha \mathcal{A} \subset D \) . Every ideal is a fractional ideal and the set of ideals in \( D \) is closed under multiplication of ideals. The fractional ideals are also closed under multiplication and, moreover, under taking inverses, so they form a group whose identity element is the ring \( D \) itself. Indeed, it turns out that each ideal \( I \) has, as its inverse, \[ {I}^{-1} = \{ \alpha \in K \mid {\alpha I} \subset D\} . \] Theorem 0.3.4 Let \( D \) be a Dedekind domain. 1. Let \( I \) be a non-zero ideal of \( D \) . Then \[ I = {\mathcal{P}}_{1}^{{a}_{1}}{\mathcal{P}}_{2}^{{a}_{2}}\cdots {\mathcal{P}}_{r}^{{a}_{r}} \] where \( {\mathcal{P}}_{i} \) are distinct prime ideals uniquely determined by \( I \), as are the positive integers \( {a}_{i} \) . 2. The set of fractional ideals of \( D \) forms a free abelian group under multiplication, free on the set of prime ideals. We now leave the general setting of Dedekind domains and return to the rings of integers \( {R}_{k} \) to determine more information on their prime ideals. Note that, from Theorem 0.3.2, for any non-zero ideal \( I \), the quotient \( {R}_{k}/I \) is finite. Definition 0.3.5 If \( I \) is a non-zero ideal of \( {R}_{k} \), define the norm of \( I \) by \[ N\left( I\right) = \left| {{R}_{k}/I}\right| \] The unique factorisation enables the determination of the norm of ideals to be reduced to the determination of norms of prime ideals. This reduction firstly requires the use of the Chinese Remainder Theorem in this context: Lemma 0.3.6 Let \( {\mathcal{Q}}_{1},{\mathcal{Q}}_{2},\ldots ,{\mathcal{Q}}_{r} \) be ideals in \( {R}_{k} \) such that \( {\mathcal{Q}}_{i} + {\mathcal{Q}}_{j} = {R}_{k} \) for \( i \neq j \) .
Then \[ {\mathcal{Q}}_{1}{\mathcal{Q}}_{2}\cdots {\mathcal{Q}}_{r} = { \cap }_{i = 1}^{r}{\mathcal{Q}}_{i}\text{ and }{R}_{k}/{\mathcal{Q}}_{1}\cdots {\mathcal{Q}}_{r} \cong {\bigoplus }_{i}{R}_{k}/{\mathcal{Q}}_{i}. \] For distinct prime ideals \( {\mathcal{P}}_{1},{\mathcal{P}}_{2} \) the condition \( {\mathcal{P}}_{1}^{a} + {\mathcal{P}}_{2}^{b} = {R}_{k} \) can be shown to hold for any positive integers \( a, b \) (see Exercise 0.3, No. 3). Secondly, the ring \( {R}_{k}/{\mathcal{P}}^{a} \) has ideals \( {\mathcal{P}}^{b}/{\mathcal{P}}^{a} \) for \( 0 \leq b \leq a \), and each quotient of the form \( {\mathcal{P}}^{c}/{\mathcal{P}}^{c + 1} \) can be shown to be a one-dimensional vector space over the field \( {R}_{k}/\mathcal{P} \) . Thus if \[ I = {\mathcal{P}}_{1}^{{a}_{1}}{\mathcal{P}}_{2}^{{a}_{2}}\cdots {\mathcal{P}}_{r}^{{a}_{r}} \] then \[ N\left( I\right) = \mathop{\prod }\limits_{{i = 1}}^{r}{\left( N\left( {\mathcal{P}}_{i}\right) \right) }^{{a}_{i}} \] \( \left( {0.13}\right) \) and \( N \) is multiplicative so that \[ N\left( {IJ}\right) = N\left( I\right) N\left( J\right) \] \( \left( {0.14}\right) \) The unique factorisation thus requires that the prime ideals in \( {R}_{k} \) be investigated. If \( \mathcal{P} \) is a prime ideal of \( {R}_{k} \), then \( {R}_{k}/\mathcal{P} \) is a finite field and so has order of the form \( {p}^{f} \) for some prime number \( p \) . Note that \( \mathcal{P} \cap \mathbb{Z} \) is a prime ideal \( {p}^{\prime }\mathbb{Z} \) of \( \mathbb{Z} \) and that \( \mathbb{Z}/{p}^{\prime }\mathbb{Z} \) embeds in \( {R}_{k}/\mathcal{P} \) . Thus \( {p}^{\prime } = p \) and \[ p{R}_{k} = {\mathcal{P}}_{1}^{{e}_{1}}{\mathcal{P}}_{2}^{{e}_{2}}\cdots {\mathcal{P}}_{g}^{{e}_{g}} \] \( \left( {0.15}\right) \) where, for each \( i \), \( {R}_{k}/{\mathcal{P}}_{i} \) is a field of order \( {p}^{{f}_{i}} \) for some \( {f}_{i} \geq 1 \) . The primes \( {\mathcal{P}}_{i} \) are said to lie over or above \( p \), or \( p\mathbb{Z} \) .
Note that \( {f}_{i} \) is the degree of the extension of finite fields \( \left\lbrack {{R}_{k}/{\mathcal{P}}_{i} : \mathbb{Z}/p\mathbb{Z}}\right\rbrack \) . If \( \left\lbrack {k : \mathbb{Q}}\right\rbrack = d \), then \( N\left( {p{R}_{k}}\right) = {p}^{d} \) and so \[ d = \mathop{\sum }\limits_{{i = 1}}^{g}{e}_{i}{f}_{i} \] (0.16) Definition 0.3.7 The prime number \( p \) is said to be ramified in the extension \( k \mid \mathbb{Q} \) if, in the decomposition at (0.15), some \( {e}_{i} > 1 \) . Otherwise, \( p \) is unramified. The following theorem of Dedekind connects ramification with the discriminant. Theorem 0.3.8 A prime number \( p \) is ramified in the extension \( k \mid \mathbb{Q} \) if and only if \( p \mid {\Delta }_{k} \) . There are thus only finitely many rational primes which ramify in the extension \( k \mid \mathbb{Q} \) . If \( \mathcal{P} \) is a prime ideal in \( {R}_{k} \) with \( \left| {{R}_{k}/\mathcal{P}}\right| = q\left( { = {p}^{m}}\right) \), and \( \ell \mid k \) is a finite extension, then a similar analysis to that given above holds. Thus in \( {R}_{\ell } \) , \[ \mathcal{P}{R}_{\ell } = {\mathcal{Q}}_{1}^{{e}_{1}}{\mathcal{Q}}_{2}^{{e}_{2}}\cdots {\mathcal{Q}}_{g}^{{e}_{g}} \] \( \left( {0.17}\right) \) where, for each \( i,{R}_{\ell }/{\mathcal{Q}}_{i} \) is a field of order \( {q}^{{f}_{i}} \) . The \( {e}_{i},{f}_{i} \) then satisfy (0.16) where \( \left\lbrack {\ell : k}\right\rbrack = d \) . Dedekind’s Theorem 0.3.8 also still holds when \( {\Delta }_{k} \) is replaced by the relative discriminant, and, of course, in this case, the ideal \( \mathcal{P} \) must divide the ideal \( {\delta }_{\ell \mid k} \) . Now consider the cases of quadratic extensions \( \mathbb{Q}\left( \sqrt{d}\right) \mid \mathbb{Q} \) in some detail. Denote the ring of integers in \( \mathbb{Q}\left( \sqrt{d}\right) \) by \( {O}_{d} \) . 
Note that from (0.16), there are exactly three possibilities and it is convenient to use some special terminology to describe these. 1. \( p{O}_{d} = {\mathcal{P}}^{2} \) (i.e., \( g = 1,{e}_{1} = 2 \) and so \( {f}_{1} = 1 \) ). Thus \( p \) is ramified in \( \mathbb{Q}\left( \sqrt{d}\right) \mid \mathbb{Q} \) and this will occur if \( p \mid d \) when \( d \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) and if \( p \mid {4d} \) when \( d ≢ 1\left( {\;\operatorname{mod}\;4}\right) \) . Note also in this case that \( {O}_{d}/\mathcal{P} \cong {\mathbb{F}}_{p} \), so that \( N\left( \mathcal{P}\right) = p \) . 2. \( p{O}_{d} = {\mathcal{P}}_{1}{\mathcal{P}}_{2} \) (i.e., \( g = 2,{e}_{1} = {e}_{2} = {f}_{1} = {f}_{2} = 1 \) ). In this case, we say that \( p \) decomposes in \( \mathbb{Q}\left( \sqrt{d}\right) \mid \mathbb{Q} \), and \( N\left( {\mathcal{P}}_{1}\right) = N\left( {\mathcal{P}}_{2}\right) = p \) . 3. \( p{O}_{d} = \mathcal{P} \) (i.e., \( g = 1,{e}_{1} = 1,{f}_{1} = 2 \) ). In this case, we say that \( p \) is inert in the extension. Note that \( N\left( \mathcal{P}\right) = {p}^{2} \) . The deductions here are particularly simple since the degree of the extension is 2 . How the prime ideals of \( {R}_{k} \) lie over a given rational prime \( p \) can often be decided by the result below, which is particularly useful in computations. We refer to this result as Kummer's Theorem. (It is not clear to us that this is a correct designation, and in algebraic number theory, it is not a unique designation. However, in this book, it will uniquely pick out this result.) Theorem 0.3.9 Let \( {R}_{k} = \mathbb{Z}\left\lbrack \theta \right\rbrack \) for some \( \theta \in {R}_{k} \) with minimum polynomial \( h \) . Let \( p \) be a (rational) prime.
Suppose, over \( {\mathbb{F}}_{p} \), that \[ \bar{h} = {\bar{h}}_{1}^{{e}_{1}}{\bar{h}}_{2}^{{e}_{2}}\cdots {\bar{h}}_{r}^{{e}_{r}} \] where each \( {h}_{i} \in \mathbb{Z}\left\lbrack x\right\rbrack \) is monic of degree \( {f}_{i} \), the \( {\bar{h}}_{i} \) are distinct and irreducible over \( {\mathbb{F}}_{p} \), and the overbar denotes the natural map \( \mathbb{Z}\left\lbrack x\right\rbrack \rightarrow {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \) . Then \( \;{\mathcal{P}}_{i} = p{R}_{k} + {h}_{i}\left( \theta \right) {R}_{k}\; \) is a prime ideal, \( \;N\left( {\mathcal{P}}_{i}\right) = {p}^{{f}_{i}} \), and \[ p{R}_{k} = {\mathcal{P}}_{1}^{{e}_{1}}{\mathcal{P}}_{2}^{{e}_{2}}\cdots {\mathcal{P}}_{r}^{{e}_{r}} \] There is also a relative version of this theorem applying to an extension \( \ell \mid k \) with \( {R}_{\ell } = {R}_{k}\left\lbrack \theta \right\rbrack \) and \( P \) a prime ideal in \( {R}_{k} \) . As noted earlier, such extensions may not have integral bases. Even in the absolute case of \( k \mid \mathbb{Q} \), it is not always possible to find a \( \theta \in {R}_{k} \) such that \( \left\{ {1,\theta ,{\theta }^{2},\ldots ,{\theta }^{d - 1}}\right\} \) is an integral basis. Thus the theorem as stated is not always applicable. There are further versions of this theorem which apply in a wider range of cases. Once again we consider quadratic extensions, which always have such a basis as required by Kummer’s Theorem, with \( \theta = \sqrt{d} \) if \( d ≢ 1\left( {\;\operatorname{mod}\;4}\right) \) and \( \theta = \left( {1 + \sqrt{d}}\right) /2 \) if \( d \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) . In the first case, \( p \) is ramified if \( p \mid {4d} \) . For other values of \( p \), \( {x}^{2} - \bar{d} \in {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \) factorises if and only if there exists \( a \in \mathbb{Z} \) such that \( {a}^{2} \equiv d\left( {\;\operatorname{mod}\;4}\right) \) [i.e. if and only if \( \left( \frac{d}{p}\right) = 1 \) ].
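This criterion is easy to test by machine. The sketch below is our own (the helper names `legendre` and `splitting_type` are hypothetical, not from the text): it factors \( {x}^{2} - d \) over \( {\mathbb{F}}_{p} \) by brute-force root search for \( d = - 1 \) and confirms that an odd prime decomposes in \( \mathbb{Q}\left( i\right) \) exactly when \( \left( \frac{-1}{p}\right) = 1 \), i.e. when \( p \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) .

```python
def legendre(d, p):
    """Legendre symbol (d/p) for an odd prime p, via Euler's criterion."""
    r = pow(d % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def splitting_type(d, p):
    """Behaviour of the odd prime p in Q(sqrt(d)), d != 1 mod 4, read off
    from the factorisation of x^2 - d over F_p (Kummer's Theorem)."""
    if d % p == 0:
        return "ramified"              # x^2 - d has a repeated root mod p
    roots = [a for a in range(p) if (a * a - d) % p == 0]
    return "decomposes" if roots else "inert"

# d = -1: an odd prime decomposes in Q(i) iff p = 1 mod 4.
for p in (3, 5, 7, 13, 17, 19):
    t = splitting_type(-1, p)
    assert (t == "decomposes") == (legendre(-1, p) == 1)
    assert (t == "decomposes") == (p % 4 == 1)
print("splitting of odd primes in Q(i) matches the Legendre symbol (-1/p)")
```

For instance \( 5{\mathbb{Z}}\left\lbrack i\right\rbrack = \left( {2 + i}\right) \left( {2 - i}\right) \) while 3 stays inert, matching \( 5 \equiv 1 \) and \( 3 \equiv 3\left( {\;\operatorname{mod}\;4}\right) \) .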
In the second case, if \( p \) is odd and \( p \nmid d \), then \( {x}^{2} - x + \left( {1 - \bar{d}}\right) /4 \in {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \) factorises if and only if \( {\left( 2x - 1\right) }^{2} - \bar{d} \in {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \) factorises [i.e. if and only if \( \left( \frac{d}{p}\right) = 1 \) ]. If \( p = 2 \), then \[ {x}^{2} -
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 3.4
Definition 3.4. If there exists a sequence \( {t}_{n} \rightarrow + \infty \) (or \( - \infty \) ) as \( n \rightarrow + \infty \) such that \[ \mathop{\lim }\limits_{{n \rightarrow + \infty }}\overrightarrow{f}\left( {P,{t}_{n}}\right) = q, \] then the point \( q \) is called an \( \omega \) (or \( \alpha \) ) limit point of \( \overrightarrow{f}\left( {P, t}\right) \) . The set of all \( \omega \) (or \( \alpha \) ) limit points of \( \overrightarrow{f}\left( {P, t}\right) \) will be denoted by \( {\Omega }_{P} \) (or \( {A}_{P} \) ). From Definition 3.3, if \( \overrightarrow{f}\left( {P, t}\right) \) is a periodic motion, then for any fixed \( t \) we have \( \overrightarrow{f}\left( {P,{nT} + t}\right) = \overrightarrow{f}\left( {P, t}\right) \), where \( T \) is the period and \( n \) is any integer. Letting \( n \rightarrow + \infty \) or \( - \infty \), we can then deduce that \( {\Omega }_{P} = {A}_{P} = {L}_{P} \) . In particular, if \( P \) is a critical point, then \( {\Omega }_{P} = {A}_{P} = P \) . Definition 3.5. If for every \( t \in \left( {-\infty , + \infty }\right) \) the set \( \overrightarrow{f}\left( {A, t}\right) = \{ \overrightarrow{f}\left( {P, t}\right) \mid P \in A\} \) satisfies \( \overrightarrow{f}\left( {A, t}\right) = A \), then \( A \) is called an invariant set of \( \overrightarrow{f} \) . It is clear that any orbit is an invariant set. From the definition, we see that invariant sets are collections of orbits. THEOREM 3.2. For any \( t \in \left( {-\infty , + \infty }\right) \), the set \( {\Omega }_{P} \) satisfies \( \overrightarrow{f}\left( {{\Omega }_{P}, t}\right) = \) \( {\Omega }_{P} \) . That is, \( {\Omega }_{P} \) is an invariant set. Proof. Let \( q \in {\Omega }_{P}, r = \overrightarrow{f}\left( {q, t}\right) \) . We will prove \( r \in {\Omega }_{P} \) .
Since \( q \in {\Omega }_{P} \), by definition there exists \( {t}_{n} \rightarrow + \infty \) as \( n \rightarrow \infty \) such that \( \mathop{\lim }\limits_{{n \rightarrow + \infty }}\overrightarrow{f}\left( {P,{t}_{n}}\right) = q \) . From properties II and III of dynamical systems, we have \[ \mathop{\lim }\limits_{{n \rightarrow + \infty }}\overrightarrow{f}\left( {P,{t}_{n} + t}\right) = \mathop{\lim }\limits_{{n \rightarrow + \infty }}\overrightarrow{f}\left( {\overrightarrow{f}\left( {P,{t}_{n}}\right), t}\right) = \overrightarrow{f}\left( {q, t}\right) = r. \] That is, \( r \in {\Omega }_{P} \), and we have proved that \( \overrightarrow{f}\left( {{\Omega }_{P}, t}\right) \subset {\Omega }_{P} \) for all \( t \in \left( {-\infty , + \infty }\right) \) . Moreover, applying \( \overrightarrow{f}\left( { \cdot , - t}\right) \) to this inclusion and using properties II and III, for any \( t \in \left( {-\infty , + \infty }\right) \) we obtain \( {\Omega }_{P} = \overrightarrow{f}\left( {\overrightarrow{f}\left( {{\Omega }_{P}, t}\right) , - t}\right) \subset \overrightarrow{f}\left( {{\Omega }_{P}, - t}\right) \) ; that is, \( {\Omega }_{P} \subset \overrightarrow{f}\left( {{\Omega }_{P}, - t}\right) \) . Consequently, we have shown that for any \( t \in \left( {-\infty , + \infty }\right) ,\overrightarrow{f}\left( {{\Omega }_{P}, t}\right) = {\Omega }_{P} \) . THEOREM 3.3. \( {\Omega }_{P} \) is a closed set. Proof. Let \( {P}_{n} \in {\Omega }_{P} \) and \( \mathop{\lim }\limits_{{n \rightarrow + \infty }}{P}_{n} = q \) ; we have to show \( q \in {\Omega }_{P} \) . Since \( {P}_{n} \in {\Omega }_{P} \), there exist \( {t}_{n} > n \) such that \( \rho \left( {{P}_{n},\overrightarrow{f}\left( {P,{t}_{n}}\right) }\right) < 1/n \) ; thus \( \overrightarrow{f}\left( {P,{t}_{n}}\right) \rightarrow q \) and \( q \in {\Omega }_{P} \) . This shows that \( {\Omega }_{P} \) is a closed set. Definition 3.6. 
If there do not exist nonempty, relatively closed subsets \( {M}_{1},{M}_{2} \) of \( M \) such that \( {M}_{1} \cap {M}_{2} = \varnothing \) and \( M = {M}_{1} \cup {M}_{2} \), then we say that the set \( M \) is connected. Otherwise \( M \) is disconnected. Let \( {M}_{1},{M}_{2} \) be closed subsets of \( M \) such that \( M = {M}_{1} \cup {M}_{2} \) and \( {M}_{1} \cap {M}_{2} = \varnothing \) . Since \( {M}_{2} = M \smallsetminus {M}_{1} \), the sets \( {M}_{1} \) and \( {M}_{2} \) are also relatively open subsets of \( M \) . Therefore \( {M}_{1},{M}_{2} \) are subsets of \( M \) that are both open and closed. The set \( M \) and the empty set \( \varnothing \) are called the trivial open and closed subsets of \( M \) . If \( M \) does not have any nontrivial subsets that are both open and closed, then \( M \) is connected; otherwise \( M \) is disconnected, and each minimal nonempty subset that is both open and closed is called a connected component. Note. Definition 3.6 defines (regional) connectedness; it is different from pathwise connectedness. THEOREM 3.4. Suppose that \( \overrightarrow{f}\left( {P,{I}^{ + }}\right) \) is bounded; then \( {\Omega }_{P} \) is connected. Proof. Suppose \( {\Omega }_{P} \) is disconnected. Then there exist \( {\Omega }_{P}^{\left( 1\right) },{\Omega }_{P}^{\left( 2\right) } \subset {\Omega }_{P} \) with \( {\Omega }_{P} = {\Omega }_{P}^{\left( 1\right) } \cup {\Omega }_{P}^{\left( 2\right) } \) and \( {\Omega }_{P}^{\left( 1\right) } \cap {\Omega }_{P}^{\left( 2\right) } = \varnothing \), where \( {\Omega }_{P}^{\left( 1\right) },{\Omega }_{P}^{\left( 2\right) } \) are nonempty, relatively closed subsets of \( {\Omega }_{P} \) . Since \( {\Omega }_{P} \) is closed in \( {R}^{n} \), the sets \( {\Omega }_{P}^{\left( 1\right) },{\Omega }_{P}^{\left( 2\right) } \) are closed in \( {R}^{n} \), and \( \rho \left( {{\Omega }_{P}^{\left( 1\right) },{\Omega }_{P}^{\left( 2\right) }}\right) = d > 0 \) . 
Therefore there exist sequences \( {t}_{n}^{\left( i\right) } \rightarrow + \infty \) with \( \overrightarrow{f}\left( {P,{t}_{n}^{\left( i\right) }}\right) \in S\left( {{\Omega }_{P}^{\left( i\right) }, d/3}\right), i = 1,2 \), and \( 0 < {t}_{1}^{\left( 1\right) } < {t}_{1}^{\left( 2\right) } < {t}_{2}^{\left( 1\right) } < {t}_{2}^{\left( 2\right) } < \cdots < {t}_{n}^{\left( 1\right) } < {t}_{n}^{\left( 2\right) } < \cdots \) . Since \( \rho \left\lbrack {S\left( {{\Omega }_{P}^{\left( 1\right) }, d/3}\right), S\left( {{\Omega }_{P}^{\left( 2\right) }, d/3}\right) }\right\rbrack > d/6 \), by continuity there are \( {t}_{n}^{\left( 1\right) } < {\zeta }_{n} < {t}_{n}^{\left( 2\right) } \) with \[ \overrightarrow{f}\left( {P,{\zeta }_{n}}\right) \notin S\left( {{\Omega }_{P}^{\left( 1\right) }, d/3}\right) \cup S\left( {{\Omega }_{P}^{\left( 2\right) }, d/3}\right) . \] Since \( \overrightarrow{f}\left( {P,{I}^{ + }}\right) \) is bounded, there exists a convergent subsequence of \( \left\{ {\overrightarrow{f}\left( {P,{\zeta }_{n}}\right) }\right\} \) (denoted again by the same symbol for simplicity), so that \[ \mathop{\lim }\limits_{{n \rightarrow + \infty }}\overrightarrow{f}\left( {P,{\zeta }_{n}}\right) = q \notin {\Omega }_{P}. \] This is a contradiction; hence \( {\Omega }_{P} \) is connected. Theorems 3.2, 3.3, and 3.4 all remain valid if \( {\Omega }_{P} \) is replaced by \( {A}_{P} \) . From the properties of \( {\Omega }_{P} \) (or \( {A}_{P} \) ), we can classify the orbits of the dynamical system (3.1) into three types. 1. If \( {\Omega }_{P} \) (or \( {A}_{P} \) ) is an empty set, then \( \overrightarrow{f}\left( {P, I}\right) \) will be called an orbit going away in the positive (or negative) direction. An orbit which is going away in both the positive and negative directions is called an orbit going away. 2. 
If \( {\Omega }_{P} \) (or \( {A}_{P} \) ) is nonempty and \( {\Omega }_{P} \cap \overrightarrow{f}\left( {P,{I}^{ + }}\right) = \varnothing \) (or \( {A}_{P} \cap \overrightarrow{f}\left( {P,{I}^{ - }}\right) = \varnothing \) ), then \( \overrightarrow{f}\left( {P, I}\right) \) will be called a positively (or negatively) asymptotic orbit. An orbit which is asymptotic both positively and negatively is called an asymptotic orbit. 3. If \( {\Omega }_{P} \cap \overrightarrow{f}\left( {P,{I}^{ + }}\right) \neq \varnothing \) (or \( {A}_{P} \cap \overrightarrow{f}\left( {P,{I}^{ - }}\right) \neq \varnothing \) ), then \( \overrightarrow{f}\left( {P, I}\right) \) will be called a positively (or negatively) Poisson stable orbit, or briefly a \( {P}^{ + } \) (or \( {P}^{ - } \) ) stable orbit. An orbit which is both positively and negatively Poisson stable is called a Poisson stable orbit, or briefly a \( P \) stable orbit. EXAMPLE 3.3. Given the differential equations \[ \frac{dx}{dt} = x,\;\frac{dy}{dt} = - y, \] its phase portrait is illustrated in Figure 1.3. ![bea09977-be18-4815-a30e-4fa2fe3b219c_39_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_39_0.jpg) FIGURE 1.3 \( {L}_{1},{L}_{2},{L}_{3},{L}_{4} \) - going-away orbits; \( {L}_{5},{L}_{6} \) - orbits positively asymptotic and going away in the negative direction; \( {L}_{7},{L}_{8} \) - orbits negatively asymptotic and going away in the positive direction; \( 0 \) - \( P \) stable orbit ![bea09977-be18-4815-a30e-4fa2fe3b219c_39_1.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_39_1.jpg) FIGURE 1.4 The origin \( 0 \) and the closed orbit \( L\left( {r = 1}\right) \) - \( P \) stable orbits; \( {L}_{P}\left( {0 < r\left( P\right) < 1}\right) \) - asymptotic orbits; \( {L}_{P}\left( {r\left( P\right) > 1}\right) \) - orbits positively asymptotic and going away in the negative direction EXAMPLE 3.4. 
Given the differential equations \[ \frac{dx}{dt} = y + x\left\lbrack {1 - \left( {{x}^{2} + {y}^{2}}\right) }\right\rbrack , \] \[ \frac{dy}{dt} = - x + y\left\lbrack {1 - \left( {{x}^{2} + {y}^{2}}\right) }\right\rbrack . \] Letting \( x = r\cos \theta, y = r\sin \theta \), the differential equations are transformed into \[ \frac{dr}{dt} = r\left( {1 - {r}^{2}}\right) ,\;\frac{d\theta }{dt} = - 1. \]
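The radial equation \( {dr}/{dt} = r\left( {1 - {r}^{2}}\right) \) already explains the picture in Figure 1.4: \( r = 0 \) and \( r = 1 \) are the only nonnegative equilibria, and every orbit with \( r\left( 0\right) > 0 \) tends to the circle \( r = 1 \). This can be checked numerically; the following is a small sketch supplied here (not from the text; the step size and the two initial radii are arbitrary choices):

```python
def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f(y + h / 2 * k1)
    k3 = f(y + h / 2 * k2)
    k4 = f(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def radius_at(r0, t_end=10.0, h=0.01):
    """Integrate dr/dt = r(1 - r^2) from r(0) = r0 up to time t_end."""
    f = lambda r: r * (1.0 - r * r)
    r = r0
    for _ in range(int(round(t_end / h))):
        r = rk4_step(f, r, h)
    return r

# Orbits starting inside and outside the unit circle both approach r = 1,
# the closed orbit that is their common omega-limit set.
inside, outside = radius_at(0.2), radius_at(3.0)
```

Starting inside and outside the unit circle, the computed radii both approach 1, illustrating that the closed orbit \( r = 1 \) is the common \( \omega \)-limit set of all nonzero orbits.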
1042_(GTM203)The Symmetric Group
Definition 3.6.8
Definition 3.6.8 Permutations \( \pi ,\sigma \in {\mathcal{S}}_{n} \) differ by a dual Knuth relation of the first kind, written \( \pi \overset{{1}^{ * }}{ \cong }\sigma \), if for some \( k \) , 1. \( \pi = \ldots k + 1\ldots k\ldots k + 2\ldots \) and \( \sigma = \ldots k + 2\ldots k\ldots k + 1\ldots \) or vice versa. They differ by a dual Knuth relation of the second kind, written \( \pi \overset{{2}^{ * }}{ \cong }\sigma \), if for some \( k \) , 2. \( \pi = \ldots k\ldots k + 2\ldots k + 1\ldots \) and \( \sigma = \ldots k + 1\ldots k + 2\ldots k\ldots \) or vice versa. The two permutations are dual Knuth equivalent, written \( \pi \overset{{K}^{ * }}{ \cong }\sigma \), if there is a sequence of permutations such that \[ \pi = {\pi }_{1}\overset{{i}^{ * }}{ \cong }{\pi }_{2}\overset{{j}^{ * }}{ \cong }\cdots \overset{{l}^{ * }}{ \cong }{\pi }_{k} = \sigma \] where \( i, j,\ldots, l \in \{ 1,2\} \) . ∎ Note that the only two nontrivial dual Knuth relations in \( {S}_{3} \) are \[ {213}\overset{{1}^{ * }}{ \cong }{312}\text{ and }{132}\overset{{2}^{ * }}{ \cong }{231}\text{. } \] These correspond exactly to (3.12). The following lemma is obvious from the definitions. In fact, the definition of the dual Knuth relations was concocted precisely so that this result should hold. Lemma 3.6.9 If \( \pi ,\sigma \in {\mathcal{S}}_{n} \), then \[ \pi \overset{K}{ \cong }\sigma \Leftrightarrow {\pi }^{-1}\overset{{K}^{ * }}{ \cong }{\sigma }^{-1}\text{. ∎} \] Now it is an easy matter to derive the dual version of Knuth's theorem about \( P \) -equivalence (Theorem 3.4.3). Theorem 3.6.10 If \( \pi ,\sigma \in {\mathcal{S}}_{n} \), then \[ \pi \overset{{K}^{ * }}{ \cong }\sigma \Leftrightarrow \pi \overset{Q}{ \cong }\sigma . \] Proof. 
We have the following string of equivalences: \[ \pi \overset{{K}^{ * }}{ \cong }\sigma \; \Leftrightarrow \;{\pi }^{-1}\overset{K}{ \cong }{\sigma }^{-1}\;\text{ (Lemma 3.6.9) } \] \[ \Leftrightarrow \;P\left( {\pi }^{-1}\right) = P\left( {\sigma }^{-1}\right) \;\text{(Theorem 3.4.3)} \] \[ \Leftrightarrow \;Q\left( \pi \right) = Q\left( \sigma \right) .\;\text{ (Theorem 3.6.6) } \bullet \] ## 3.7 Schützenberger’s Jeu de Taquin The jeu de taquin (or "teasing game") of Schützenberger [Scü 76] is a powerful tool. It can be used to give alternative descriptions of both the \( P \) - and \( Q \) - tableaux of the Robinson-Schensted algorithm (Theorems 3.7.7 and 3.9.4) as well as the ordinary and dual Knuth relations (Theorems 3.7.8 and 3.8.8). To get the full-strength version of these concepts, we must generalize to skew tableaux. Definition 3.7.1 If \( \mu \subseteq \lambda \) as Ferrers diagrams, then the corresponding skew diagram, or skew shape, is the set of cells \[ \lambda /\mu = \{ c : c \in \lambda \text{ and }c \notin \mu \} . \] A skew diagram is normal if \( \mu = \varnothing \) . ∎ If \( \lambda = \left( {3,3,2,1}\right) \) and \( \mu = \left( {2,1,1}\right) \), then we have the skew diagram \[ \lambda /\mu = \begin{array}{ccc} & & \bullet \\ & \bullet & \bullet \\ & \bullet & \\ \bullet & & \end{array} \] Of course, normal shapes are the left-justified ones we have been considering all along. The definitions of skew tableaux, standard skew tableaux, and so on, are all as expected. In particular, the definition of the row word of a tableau still makes sense in this setting. Thus we can say that two skew partial tableaux \( P, Q \) are Knuth equivalent, written \( P\overset{K}{ \cong }Q \), if \[ {\pi }_{P}\overset{K}{ \cong }{\pi }_{Q}. \] Similar definitions hold for the other equivalence relations that we have introduced. Note that if \( \pi = {x}_{1}{x}_{2}\ldots {x}_{n} \), then we can make \( \pi \) into a skew tableau by putting \( {x}_{i} \) in the cell \( \left( {n - i + 1, i}\right) \) for all \( i \) . 
This object is called the antidiagonal strip tableau associated with \( \pi \) and is also denoted by \( \pi \) . For example, if \( \pi = {3142} \) (a good approximation, albeit without the decimal point), then ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg) So \( \pi \overset{K}{ \cong }\sigma \) as permutations if and only if \( \pi \overset{K}{ \cong }\sigma \) as tableaux. We now come to the definition of a jeu de taquin slide, which is essential to all that follows. Definition 3.7.2 Given a partial tableau \( P \) of shape \( \lambda /\mu \), we perform a forward slide on \( P \) from cell \( c \) as follows. F1 Pick \( c \) to be an inner corner of \( \mu \) . F2 While \( c \) is not an inner corner of \( \lambda \) do Fa If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \min \left\{ {{P}_{i + 1, j},{P}_{i, j + 1}}\right\} \) . Fb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) . If only one of \( {P}_{i + 1, j},{P}_{i, j + 1} \) exists in step Fa, then the minimum is taken to be that single value. We denote the resulting tableau by \( {j}^{c}\left( P\right) \) . Similarly, a backward slide on \( P \) from cell \( c \) produces a tableau \( {j}_{c}\left( P\right) \) as follows. B1 Pick \( c \) to be an outer corner of \( \lambda \) . B2 While \( c \) is not an outer corner of \( \mu \) do Ba If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \max \left\{ {{P}_{i - 1, j},{P}_{i, j - 1}}\right\} \) . Bb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) . ∎ By way of illustration, let \[ P = \begin{array}{lllll} & & & 6 & 8 \\ & 2 & 4 & 5 & 9 \\ 1 & 3 & 7 & & \end{array} \] We let a dot indicate the position of the empty cell as we perform a forward slide from \( c = \left( {1,3}\right) \) . 
<table><tr><td></td><td></td><td>•</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td></tr><tr><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>•</td><td>5</td><td>9</td><td></td><td>2</td><td>5</td><td>•</td><td>9</td><td></td><td>2</td><td>5</td><td>9</td><td>•</td></tr><tr><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td></tr></table> Thus \[ {j}^{c}\left( P\right) = \begin{array}{lllll} & & 4 & 6 & 8 \\ & 2 & 5 & 9 & \\ 1 & 3 & 7 & & \end{array} \] A backward slide from \( c = \left( {3,4}\right) \) looks like the following. <table><tr><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td></tr><tr><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>•</td><td>5</td><td>9</td><td></td><td>•</td><td>2</td><td>5</td><td>9</td></tr><tr><td>1</td><td>3</td><td>7</td><td>•</td><td></td><td>1</td><td>3</td><td>•</td><td>7</td><td></td><td>1</td><td>3</td><td>4</td><td>7</td><td></td><td>1</td><td>3</td><td>4</td><td>7</td><td></td></tr></table> So ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg) Note that a slide is an invertible operation. Specifically, if \( c \) is a cell for a forward slide on \( P \) and the cell vacated by the slide is \( d \), then a backward slide into \( d \) restores \( P \) . In symbols, \[ {j}_{d}{j}^{c}\left( P\right) = P. \] (3.13) Similarly, \[ {j}^{c}{j}_{d}\left( P\right) = P \] (3.14) if the roles of \( d \) and \( c \) are reversed. Of course, we may want to make many slides in succession. 
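The forward slide just illustrated is mechanical enough to program directly. Below is a minimal sketch supplied here (not from the text), storing a skew tableau as a Python dict from \( \left( {i, j}\right) \) positions (row \( i \) from the top, column \( j \) from the left, matching the conventions above) to entries:

```python
def forward_slide(P, c):
    """Jeu de taquin forward slide j^c(P) on a skew partial tableau.

    P maps (row, col) -> entry; c is an inner corner of mu, i.e. an
    empty cell with a filled neighbour below or to the right.
    """
    P = dict(P)  # leave the input tableau untouched
    while True:
        i, j = c
        below, right = P.get((i + 1, j)), P.get((i, j + 1))
        if below is None and right is None:
            return P  # c is now an inner corner of lambda: stop
        # step Fa: c' holds min{P[i+1,j], P[i,j+1]}; if only one
        # neighbour exists, the minimum is that single value
        if right is None or (below is not None and below < right):
            c_next = (i + 1, j)
        else:
            c_next = (i, j + 1)
        P[c] = P.pop(c_next)  # step Fb: slide the entry into the hole
        c = c_next

# The tableau P of the running example (lambda = (5,5,3), mu = (3,1)):
P = {(1, 4): 6, (1, 5): 8,
     (2, 2): 2, (2, 3): 4, (2, 4): 5, (2, 5): 9,
     (3, 1): 1, (3, 2): 3, (3, 3): 7}
Q = forward_slide(P, (1, 3))
```

Running `forward_slide` on the tableau \( P \) above from \( c = \left( {1,3}\right) \) reproduces \( {j}^{c}\left( P\right) \) exactly as in the displayed slide sequence.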
Definition 3.7.3 A sequence of cells \( \left( {{c}_{1},{c}_{2},\ldots ,{c}_{l}}\right) \) is a slide sequence for a tableau \( P \) if we can legally form \( P = {P}_{0},{P}_{1},\ldots ,{P}_{l} \), where \( {P}_{i} \) is obtained from \( {P}_{i - 1} \) by performing a slide into cell \( {c}_{i} \) . Partial tableaux \( P \) and \( Q \) are equivalent, written \( P \cong Q \), if \( Q \) can be obtained from \( P \) by some sequence of slides. ∎ This equivalence relation is the same as Knuth equivalence, as the next series of results shows. Proposition 3.7.4 ([Scü 76]) If \( P, Q \) are standard skew tableaux, then \[ P \cong Q \Rightarrow P\overset{K}{ \cong }Q. \] Proof. By induction, it suffices to prove the theorem when \( P \) and \( Q \) differ by a single slide. In fact, if we call the operation in steps Fb or Bb of the slide definition a move, then we need to demonstrate the result only when \( P \) and \( Q \) differ by a move. (The row word of a tableau with a hole in it can still be defined by merely ignoring the hole.) The conclusion is trivial if the move is horizontal, because then \( {\pi }_{P} = {\pi }_{Q} \) . If the move is vertical, then we can clearly restrict to the case where \( P \) and \( Q \) have only two rows. So suppose that \( x \) is the element being moved and that the two tableaux are \[ \begin{array}{ccc} {R}_{l} & x & {R}_{r} \\ {S}_{l} & \bullet & {S}_{r} \end{array}\;\text{ and }\;\begin{array}{ccc} {R}_{l} & \bullet & {R}_{r} \\ {S}_{l} & x & {S}_{r} \end{array}, \] where \( {R}_{l} \) and \( {S}_{l} \) (respectively, \( {R}_{r} \) and \( {S}_{r} \) ) are the left (respectively, right) portions of the two rows. Now induct on the number of elements in \( P \) (or \( Q \) ). If both tableaux consist only of \( x \), then we are done. 
Now suppose \( \left| {R}_{r}\right| > \left| {S}_{r}\right| \) . Let \( y \) be the rightmost element of \( {R}_{r} \) and let \( {P}^{\prime },{Q}^{\prime } \) be \( P, Q \), respectively, with \( y \) removed. By our assumption \( {P}^{\prime } \) and \( {Q}^{\prime } \) are still skew tableaux, so
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 7.2.6
Definition 7.2.6. A map \( p : E \rightarrow B \) is a locally trivial fiber bundle with fiber \( F \) if the following holds: There is a cover of \( B \) by open sets \( \left\{ {U}_{i}\right\} \) and for each \( i \) a homeomorphism \( {h}_{i} : {p}^{-1}\left( {U}_{i}\right) \rightarrow {U}_{i} \times F \) making the following diagram commute ![21ef530b-1e09-406a-b041-cf4539af5c14_142_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_142_1.jpg) where the lower horizontal map is the identity and the right-hand vertical map is projection onto the first factor. The map \( p \) has a section \( s \) if there is a map \( s : B \rightarrow E \) with \( {ps} : B \rightarrow B \) the identity. Example 7.2.7. (1) The projection \( p : X \times Y \rightarrow X \) is a locally trivial fiber bundle with fiber \( Y \) . (Indeed, we call this a globally trivial fiber bundle.) (2) Every covering projection \( p : \widetilde{X} \rightarrow X \) is a locally trivial fiber bundle with fiber the discrete space \( {p}^{-1}\left( {x}_{0}\right) \) . For example, \( p : {S}^{n} \rightarrow \mathbb{R}{P}^{n} \) is a fiber bundle with fiber two points. (3) The map \( p : {S}^{{2n} + 1} \rightarrow \mathbb{C}{P}^{n} \) given by \( p\left( {{z}_{0},\ldots ,{z}_{n}}\right) = \left\lbrack {{z}_{0},\ldots ,{z}_{n}}\right\rbrack \) is a locally trivial fiber bundle with fiber \( {S}^{1} = \{ z \in \mathbb{C}\left| \right| z \mid = 1\} \) . Theorem 7.2.8. Every locally trivial fiber bundle is a fibration. Theorem 7.2.9. Let \( p : E \rightarrow B \) be a locally trivial fiber bundle. Let \( {b}_{0} \in B \) , \( F = {p}^{-1}\left( {b}_{0}\right) \), and \( {e}_{0} \in F \) . Then for every \( n \) , \[ {p}_{ * } : {\pi }_{n}\left( {E, F,{e}_{0}}\right) \rightarrow {\pi }_{n}\left( {B,{b}_{0}}\right) \] is an isomorphism. ![21ef530b-1e09-406a-b041-cf4539af5c14_143_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_143_0.jpg) Proof. First we show \( {p}_{ * } \) is onto. 
Let \( g : \left( {{I}^{n},\partial {I}^{n}}\right) \rightarrow \left( {B,{b}_{0}}\right) \) represent an element of \( {\pi }_{n}\left( {B,{b}_{0}}\right) \) . Regard \( {I}^{n} \) as \( I \times {I}^{n - 1} \) . Then \( \{ 0\} \times {I}^{n - 1} \subset \partial {I}^{n} \), so we let \( \widetilde{g} : \{ 0\} \times {I}^{n - 1} \rightarrow {e}_{0} \) . Then we can apply the covering homotopy property to obtain a commutative diagram and hence a map \( {G}^{\prime } : \left( {{I}^{n},{J}^{\prime },{K}^{\prime }}\right) \rightarrow \left( {E, F,{e}_{0}}\right) \), where \( {J}^{\prime } = \{ 0\} \times {I}^{n - 1} \) and \( {K}^{\prime } \) is the closure of \( \partial {I}^{n} - {J}^{\prime } \) . This is almost but not quite what we need to obtain a representation of an element of \( {\pi }_{n}\left( {E, F,{e}_{0}}\right) \) . For that we need \( G : \left( {{I}^{n},{J}^{n - 1},{K}^{n - 1}}\right) \rightarrow \left( {E, F,{e}_{0}}\right) \) . But we may obtain such a map \( G \) by composing a homeomorphism of \( {I}^{n} \) with itself with the map \( {G}^{\prime } \) as in the following picture, where (as usual) the heavy lines indicate points mapped to \( {e}_{0} \) : ![21ef530b-1e09-406a-b041-cf4539af5c14_143_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_143_1.jpg) Next we show \( {p}_{ * } \) is \( 1 - 1 \) . Let \( \widetilde{g} : \left( {{I}^{n},{J}^{n - 1},{K}^{n - 1}}\right) \rightarrow \left( {E, F,{e}_{0}}\right) \) and suppose that \( g = {p}_{ * }\left( \widetilde{g}\right) \) represents the trivial element of \( {\pi }_{n}\left( {B,{b}_{0}}\right) \) . Then there is a mapping \( G : \left( {{I}^{n},{J}^{n - 1},{K}^{n - 1}}\right) \times I \rightarrow \left( {B,{b}_{0}}\right) \) extending the map \( g \) on \( \left( {{I}^{n},{J}^{n - 1},{K}^{n - 1}}\right) \times \{ 0\} \) and with \( G : \left( {{I}^{n},{J}^{n - 1},{K}^{n - 1}}\right) \times \{ 1\} \rightarrow {b}_{0} \) . 
By the covering homotopy property, there is a map \( \widetilde{G} : \left( {{I}^{n},{J}^{n - 1},{K}^{n - 1}}\right) \times I \rightarrow E \) extending \( \widetilde{g} \) with \( p\widetilde{G} = G \) . In particular, \( p\widetilde{G}\left( {\left( {{I}^{n},{J}^{n - 1},{K}^{n - 1}}\right) \times \{ 1\} }\right) = {b}_{0} \), i.e., \( \widetilde{G}\left( {{I}^{n}\times \{ 1\} }\right) \subseteq F \) . But this means that \( \widetilde{G} \), and hence \( \widetilde{g} \), represents the trivial element of \( {\pi }_{n}\left( {E, F,{e}_{0}}\right) \) . We now have a corollary that generalizes both Theorems 7.2.1 and 7.2.2. Corollary 7.2.10. Let \( p : E \rightarrow B \) be a locally trivial fiber bundle with fiber \( F = \) \( {p}^{-1}\left( {b}_{0}\right) \) and let \( {f}_{0} \in F \) . Then there is an exact sequence \[ \cdots \rightarrow {\pi }_{n}\left( {F,{f}_{0}}\right) \rightarrow {\pi }_{n}\left( {E,{f}_{0}}\right) \overset{{p}_{ * }}{ \rightarrow }{\pi }_{n}\left( {B,{b}_{0}}\right) \rightarrow {\pi }_{n - 1}\left( {F,{f}_{0}}\right) \rightarrow \cdots . \] If \( p \) has a section, then for each \( n \) , \[ {\pi }_{n}\left( {E,{f}_{0}}\right) \cong {\pi }_{n}\left( {B,{b}_{0}}\right) \times {\pi }_{n}\left( {F,{f}_{0}}\right) . \] Proof. The first claim follows immediately from Theorems 7.1.9 and 7.2.9. As for the second claim, if \( s \) is a section, then \( {s}_{ * } : {\pi }_{n}\left( {B,{b}_{0}}\right) \rightarrow {\pi }_{n}\left( {E,{f}_{0}}\right) \) splits \( {p}_{ * } \) , so this long exact sequence breaks up into a series of split short exact sequences. Example 7.2.11. By Example 7.2.7 and Corollary 7.2.10, we have that \( {\pi }_{k}\left( {S}^{{2n} + 1}\right) \cong {\pi }_{k}\left( {\mathbb{C}{P}^{n}}\right) \) for every \( k \geq 3 \) . In particular, since \( \mathbb{C}{P}^{1} = {S}^{2} \), we have that \( {\pi }_{3}\left( {S}^{2}\right) \cong {\pi }_{3}\left( {S}^{3}\right) \) . 
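The isomorphism in Example 7.2.11 is worth writing out, since it is the prototype application of Corollary 7.2.10. For the bundle of Example 7.2.7(3), with fiber \( {S}^{1} \), the long exact sequence contains the segment \[ \cdots \rightarrow {\pi }_{k}\left( {S}^{1}\right) \rightarrow {\pi }_{k}\left( {S}^{{2n} + 1}\right) \overset{{p}_{ * }}{ \rightarrow }{\pi }_{k}\left( {\mathbb{C}{P}^{n}}\right) \rightarrow {\pi }_{k - 1}\left( {S}^{1}\right) \rightarrow \cdots . \] Since the universal cover of \( {S}^{1} \) is the contractible space \( \mathbb{R} \), we have \( {\pi }_{k}\left( {S}^{1}\right) = 0 \) for \( k \geq 2 \) ; hence for \( k \geq 3 \) both flanking groups vanish, and exactness forces \( {p}_{ * } : {\pi }_{k}\left( {S}^{{2n} + 1}\right) \rightarrow {\pi }_{k}\left( {\mathbb{C}{P}^{n}}\right) \) to be an isomorphism.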
We have the following general finiteness properties of homotopy groups. Theorem 7.2.12. Let \( X \) be a connected finite CW-complex. (a) If \( X \) is simply connected, then \( {\pi }_{n}\left( X\right) \) is a finitely generated abelian group for each \( n \) . (b) In general, \( {\pi }_{n}\left( {X,{x}_{0}}\right) \) is finitely generated as a \( \mathbb{Z}{\pi }_{1}\left( {X,{x}_{0}}\right) \) -module. Example 7.2.13. Let \( X \) be the space ![21ef530b-1e09-406a-b041-cf4539af5c14_144_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_144_0.jpg) Then \( {\pi }_{2}\left( X\right) \cong {\pi }_{2}\left( \widetilde{X}\right) \), where \( \widetilde{X} \), the universal cover of \( X \), is ![21ef530b-1e09-406a-b041-cf4539af5c14_144_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_144_1.jpg) and \( {\pi }_{2}\left( \widetilde{X}\right) \) is not finitely generated. \( \diamond \) Theorem 7.2.14. \( {\pi }_{i}\left( {S}^{n}\right) = 0 \) for \( i < n \) . Proof. Give \( {S}^{i} \) a CW-structure with one cell in dimension \( i \) and one cell in dimension 0, and give \( {S}^{n} \) a CW-structure with one cell in dimension \( n \) and one cell in dimension 0. Let \( f : {S}^{i} \rightarrow {S}^{n} \) represent an element of \( {\pi }_{i}\left( {S}^{n}\right) \) . Then by Theorem 4.2.28, \( f \) is (freely) homotopic to a cellular map \( g : {S}^{i} \rightarrow {S}^{n} \) . But by the definition of a cellular map, the image of the \( i \) -skeleton of \( {S}^{i} \) must be contained in the \( i \) -skeleton of \( {S}^{n} \), which is a point. Thus \( f \) is freely homotopic to a constant map. But by Corollary 2.3.3, \( {\pi }_{1}\left( {S}^{n}\right) = 0 \) for \( n > 1 \), so by Corollary 7.1.12, \( f \) is homotopic as a map of pairs to a constant map. (This proof is deceptively simple, as the proof of Theorem 4.2.28 is highly nontrivial.) Theorem 7.2.15 (Hopf). 
Two maps \( f : {S}^{n} \rightarrow {S}^{n} \) and \( g : {S}^{n} \rightarrow {S}^{n} \) are homotopic if and only if they have the same degree. Corollary 7.2.16. For any \( n \geq 1,{\pi }_{n}\left( {S}^{n}\right) \cong \mathbb{Z} \) . Proof. By Hopf's theorem, we have an isomorphism \[ {\pi }_{n}\left( {S}^{n}\right) \rightarrow \left\{ {\text{ degrees of maps from }{S}^{n}\text{ to }{S}^{n}}\right\} . \] But this latter set is \( \mathbb{Z} \) by Theorem 4.2.31. Hopf's theorem has a vast generalization due to Hurewicz, which we now give. Definition 7.2.17. The Hurewicz map \( {\theta }_{n} : {\pi }_{n}\left( {X,{x}_{0}}\right) \rightarrow {H}_{n}\left( X\right) \) or \( {\theta }_{n} : {\pi }_{n}\left( {X, A,{x}_{0}}\right) \rightarrow {H}_{n}\left( {X, A}\right) \) is defined as follows: Let \( f : \left( {{S}^{n},1}\right) \rightarrow \left( {X,{x}_{0}}\right) \) represent \( \alpha \in {\pi }_{n}\left( {X,{x}_{0}}\right) \) . Let \( {\sigma }_{n} \) be the standard generator of \( {H}_{n}\left( {S}^{n}\right) \) as defined in Remark 4.1.10. Then \[ {\theta }_{n}\left( \alpha \right) = {f}_{ * }\left( {\sigma }_{n}\right) . \] Similarly, if \( f : \left( {{D}^{n},{S}^{n - 1},1}\right) \rightarrow \left( {X, A,{x}_{0}}\right) \) represents \( \alpha \in {\pi }_{n}\left( {X, A,{x}_{0}}\right) \), then \( {\theta }_{n}\left( \alpha \right) = {f}_{ * }\left( {\delta }_{n}\right) \), where \( {\delta }_{n} \) is the standard generator of \( {H}_{n}\left( {{D}^{n},{S}^{n - 1}}\right) \) . Note that \( {\theta }_{1} : {\pi }_{1}\left( {X,{x}_{0}}\right) \rightarrow {H}_{1}\left( X\right) \) is the map \( \theta \) of Sect. 5.2. Lemma 7.2.18. The following diagram, whose rows are the long exact sequences of the pair \( \left( {X, A}\right) \) in homotopy and in homology and whose vertical maps are Hurewicz maps, commutes: \[ \begin{array}{ccccccc} \cdots \rightarrow {\pi }_{n}\left( {A,{x}_{0}}\right) & \rightarrow & {\pi }_{n}\left( {X,{x}_{0}}\right) & \rightarrow & {\pi }_{n}\left( {X, A,{x}_{0}}\right) & \overset{\partial }{ \rightarrow } & {\pi }_{n - 1}\left( {A,{x}_{0}}\right) \rightarrow \cdots \\ \downarrow & & \downarrow & & \downarrow & & \downarrow \\ \cdots \rightarrow {H}_{n}\left( A\right) & \rightarrow & {H}_{n}\left( X\right) & \rightarrow & {H}_{n}\left( {X, A}\right) & \overset{\partial }{ \rightarrow } & {H}_{n - 1}\left( A\right) \rightarrow \cdots \end{array} \] Theorem 7.2.19 (Hurewicz). Let \( X \) be a path-connected space. 
For any fixed integer \( n \geq 2 \) the following are equivalent: (a) \( {\pi }_{1}\left( X\right) = 0 \) and \( {H}_{k}\left( X\right) = 0 \) for \( k = 1,\ldots, n - 1 \) . (b) \( {\pi }_{k}\left( X\right) = 0 \) for \( k = 1,\ldots, n - 1 \) . In this situation, the Hurewicz map
1098_(GTM254)Algebraic Function Fields and Codes
Definition 3.5.4
Definition 3.5.4. Let \( {F}^{\prime }/F \) be an algebraic extension of function fields and \( P \in {\mathbb{P}}_{F} \) . (a) An extension \( {P}^{\prime } \) of \( P \) in \( {F}^{\prime } \) is said to be tamely (resp. wildly) ramified if \( e\left( {{P}^{\prime } \mid P}\right) > 1 \) and the characteristic of \( K \) does not divide \( e\left( {{P}^{\prime } \mid P}\right) \) (resp. \( \operatorname{char}K \) divides \( e\left( {{P}^{\prime } \mid P}\right) ) \) . (b) We say that \( P \) is ramified (resp. unramified) in \( {F}^{\prime }/F \) if there is at least one \( {P}^{\prime } \in {\mathbb{P}}_{{F}^{\prime }} \) over \( P \) such that \( {P}^{\prime } \mid P \) is ramified (resp. if \( {P}^{\prime } \mid P \) is unramified for all \( {P}^{\prime }|P \) ). The place \( P \) is tamely ramified in \( {F}^{\prime }/F \) if it is ramified in \( {F}^{\prime }/F \) and no extension of \( P \) in \( {F}^{\prime } \) is wildly ramified. If there is at least one wildly ramified place \( {P}^{\prime } \mid P \) we say that \( P \) is wildly ramified in \( {F}^{\prime }/F \) . (c) \( P \) is totally ramified in \( {F}^{\prime }/F \) if there is only one extension \( {P}^{\prime } \in {\mathbb{P}}_{{F}^{\prime }} \) of \( P \) in \( {F}^{\prime } \), and the ramification index is \( e\left( {{P}^{\prime } \mid P}\right) = \left\lbrack {{F}^{\prime } : F}\right\rbrack \) . (d) \( {F}^{\prime }/F \) is said to be ramified (resp. unramified) if at least one \( P \in {\mathbb{P}}_{F} \) is ramified in \( {F}^{\prime }/F \) (resp. if all \( P \in {\mathbb{P}}_{F} \) are unramified in \( {F}^{\prime }/F \) ). (e) \( {F}^{\prime }/F \) is said to be tame if no place \( P \in {\mathbb{P}}_{F} \) is wildly ramified in \( {F}^{\prime }/F \) . Corollary 3.5.5. Let \( {F}^{\prime }/F \) be a finite separable extension of algebraic function fields. 
(a) If \( P \in {\mathbb{P}}_{F} \) and \( {P}^{\prime } \in {\mathbb{P}}_{{F}^{\prime }} \) such that \( {P}^{\prime } \mid P \), then \( {P}^{\prime } \mid P \) is ramified if and only if \( {P}^{\prime } \leq \operatorname{Diff}\left( {{F}^{\prime }/F}\right) \) . If \( {P}^{\prime } \mid P \) is ramified, then \[ d\left( {{P}^{\prime } \mid P}\right) = e\left( {{P}^{\prime } \mid P}\right) - 1 \Leftrightarrow {P}^{\prime } \mid P\text{ is tamely ramified,} \] \[ d\left( {{P}^{\prime } \mid P}\right) \geq e\left( {{P}^{\prime } \mid P}\right) \Leftrightarrow {P}^{\prime } \mid P\text{is wildly ramified.} \] (b) Almost all places \( P \in {\mathbb{P}}_{F} \) are unramified in \( {F}^{\prime }/F \) . This corollary follows immediately from Dedekind's Theorem. Next we note an important special case of the Hurwitz Genus Formula. Corollary 3.5.6. Suppose that \( {F}^{\prime }/F \) is a finite separable extension of algebraic function fields having the same constant field \( K \) . Let \( g \) (resp. \( {g}^{\prime } \) ) denote the genus of \( F/K \) (resp. \( {F}^{\prime }/K \) ). Then \[ 2{g}^{\prime } - 2 \geq \left\lbrack {{F}^{\prime } : F}\right\rbrack \cdot \left( {{2g} - 2}\right) + \mathop{\sum }\limits_{{P \in {\mathbb{P}}_{F}}}\mathop{\sum }\limits_{{{P}^{\prime } \mid P}}\left( {e\left( {{P}^{\prime } \mid P}\right) - 1}\right) \cdot \deg {P}^{\prime }. \] Equality holds if and only if \( {F}^{\prime }/F \) is tame (for instance if \( K \) is a field of characteristic 0). Proof. Trivial by Theorems 3.4.13 and 3.5.1. Corollary 3.5.7. Suppose that \( {F}^{\prime }/F \) is a finite separable extension of function fields having the same constant field. Let \( g \) (resp. \( {g}^{\prime } \) ) denote the genus of \( F \) (resp. \( {F}^{\prime } \) ). Then \( g \leq {g}^{\prime } \) . Corollary 3.5.8. 
Let \( F/K\left( x\right) \) be a finite separable extension of the rational function field of degree \( \left\lbrack {F : K\left( x\right) }\right\rbrack > 1 \) such that \( K \) is the constant field of \( F \) . Then \( F/K\left( x\right) \) is ramified. Proof. The Hurwitz Genus Formula yields \[ {2g} - 2 = - 2\left\lbrack {F : K\left( x\right) }\right\rbrack + \deg \operatorname{Diff}\left( {F/K\left( x\right) }\right) , \] where \( g \) is the genus of \( F/K \) . Therefore \[ \deg \operatorname{Diff}\left( {F/K\left( x\right) }\right) \geq 2\left( {\left\lbrack {F : K\left( x\right) }\right\rbrack - 1}\right) > 0. \] The assertion follows since each place in the support of the different ramifies by Corollary 3.5.5. We give another application of the above results. Proposition 3.5.9 (Lüroth's Theorem). Every subfield of a rational function field is rational; i.e., if \( K \subsetneqq {F}_{0} \subseteq K\left( x\right) \) then \( {F}_{0} = K\left( y\right) \) for some \( y \in {F}_{0} \) . Proof. Suppose first that \( K\left( x\right) /{F}_{0} \) is separable. Let \( {g}_{0} \) denote the genus of \( {F}_{0}/K \) . Then \[ - 2 = \left\lbrack {K\left( x\right) : {F}_{0}}\right\rbrack \cdot \left( {2{g}_{0} - 2}\right) + \deg \operatorname{Diff}\left( {K\left( x\right) /{F}_{0}}\right) , \] which implies \( {g}_{0} = 0 \) . If \( P \) is a place of \( K\left( x\right) /K \) of degree one then \( {P}_{0} = P \cap {F}_{0} \) is a place of \( {F}_{0}/K \) of degree one. Therefore \( {F}_{0}/K \) is rational by Proposition 1.6.3. Now assume that \( K\left( x\right) /{F}_{0} \) is not separable. There is an intermediate field \( {F}_{0} \subseteq {F}_{1} \subseteq K\left( x\right) \) such that \( {F}_{1}/{F}_{0} \) is separable and \( K\left( x\right) /{F}_{1} \) is purely inseparable. According to what we have proved above, it is sufficient to show that \( {F}_{1}/K \) is rational. 
As \( K\left( x\right) /{F}_{1} \) is purely inseparable, \( \left\lbrack {K\left( x\right) : {F}_{1}}\right\rbrack = q = {p}^{\nu } \) where \( p = \operatorname{char}K > 0 \), and \( {z}^{q} \in {F}_{1} \) for each \( z \in K\left( x\right) \) . In particular, \[ K\left( {x}^{q}\right) \subseteq {F}_{1} \subseteq K\left( x\right) \] (3.58) The degree \( \left\lbrack {K\left( x\right) : K\left( {x}^{q}\right) }\right\rbrack \) is equal to the degree of the pole divisor of \( {x}^{q} \) in \( K\left( x\right) /K \) by Theorem 1.4.11, therefore \( \left\lbrack {K\left( x\right) : K\left( {x}^{q}\right) }\right\rbrack = q \) . By (3.58), it follows that \( {F}_{1} = K\left( {x}^{q}\right) \), hence \( {F}_{1}/K \) is rational. Next we prove a theorem that is often very useful for evaluating the different of \( {F}^{\prime }/F \) . Theorem 3.5.10. Suppose \( {F}^{\prime } = F\left( y\right) \) is a finite separable extension of a function field \( F \) of degree \( \left\lbrack {{F}^{\prime } : F}\right\rbrack = n \) . Let \( P \in {\mathbb{P}}_{F} \) be such that the minimal polynomial \( \varphi \left( T\right) \) of \( y \) over \( F \) has coefficients in \( {\mathcal{O}}_{P} \) (i.e., \( y \) is integral over \( {\mathcal{O}}_{P} \) ), and let \( {P}_{1},\ldots ,{P}_{r} \in {\mathbb{P}}_{{F}^{\prime }} \) be all places of \( {F}^{\prime } \) lying over \( P \) . Then the following hold: (a) \( d\left( {{P}_{i} \mid P}\right) \leq {v}_{{P}_{i}}\left( {{\varphi }^{\prime }\left( y\right) }\right) \) for \( 1 \leq i \leq r \) . (b) \( \left\{ {1, y,\ldots ,{y}^{n - 1}}\right\} \) is an integral basis of \( {F}^{\prime }/F \) at the place \( P \) if and only if \( d\left( {{P}_{i} \mid P}\right) = {v}_{{P}_{i}}\left( {{\varphi }^{\prime }\left( y\right) }\right) \) for \( 1 \leq i \leq r \) . (Here \( {\varphi }^{\prime }\left( T\right) \) denotes the derivative of \( \varphi \left( T\right) \) in the polynomial ring \( F\left\lbrack T\right\rbrack \) .) Proof. 
The dual basis of \( \left\{ {1, y,\ldots ,{y}^{n - 1}}\right\} \) is closely related to the different exponents \( d\left( {{P}_{i} \mid P}\right) \) by Proposition 3.4.2, therefore our first aim is to determine this dual basis. Since \( \varphi \left( y\right) = 0 \), the polynomial \( \varphi \left( T\right) \) factors in \( {F}^{\prime }\left\lbrack T\right\rbrack \) as \[ \varphi \left( T\right) = \left( {T - y}\right) \left( {{c}_{n - 1}{T}^{n - 1} + \ldots + {c}_{1}T + {c}_{0}}\right) \] (3.59) with \( {c}_{0},\ldots ,{c}_{n - 1} \in {F}^{\prime } \) and \( {c}_{n - 1} = 1 \) . We claim: \[ \left\{ {\frac{{c}_{0}}{{\varphi }^{\prime }\left( y\right) },\ldots ,\frac{{c}_{n - 1}}{{\varphi }^{\prime }\left( y\right) }}\right\} \text{is the dual basis of}\left\{ {1, y,\ldots ,{y}^{n - 1}}\right\} \text{.} \] (3.60) (Note that \( {\varphi }^{\prime }\left( y\right) \neq 0 \) since \( y \) is separable over \( F \) .) By definition of the dual basis, (3.60) is equivalent to \[ {\operatorname{Tr}}_{{F}^{\prime }/F}\left( {\frac{{c}_{i}}{{\varphi }^{\prime }\left( y\right) } \cdot {y}^{l}}\right) = {\delta }_{il}\;\text{ for }0 \leq i, l \leq n - 1. \] (3.61) In order to prove (3.61), consider the \( n \) distinct embeddings \( {\sigma }_{1},\ldots ,{\sigma }_{n} \) of \( {F}^{\prime }/F \) into \( \Phi \) (which denotes, as usual, an algebraically closed extension of \( F \) ). 
We set \( {y}_{j} \mathrel{\text{:=}} {\sigma }_{j}\left( y\right) \) and obtain \[ \varphi \left( T\right) = \mathop{\prod }\limits_{{j = 1}}^{n}\left( {T - {y}_{j}}\right) \] Differentiating this equation and substituting \( T = {y}_{\nu } \) yields \[ {\varphi }^{\prime }\left( {y}_{\nu }\right) = \mathop{\prod }\limits_{{i \neq \nu }}\left( {{y}_{\nu } - {y}_{i}}\right) \] (3.62) For \( 0 \leq l \leq n - 1 \) we consider the polynomial \[ {\varphi }_{l}\left( T\right) \mathrel{\text{:=}} \left( {\mathop{\sum }\limits_{{j = 1}}^{n}\frac{\varphi \left( T\right) }{T - {y}_{j}} \cdot \frac{{y}_{j}^{l}}{{\varphi }^{\prime }\left( {y}_{j}\right) }}\right) - {T}^{l} \in \Phi \left\lbrack T\right\rbrack . \] Its degree is at most \( n - 1 \), and for \( 1 \leq \nu \leq n \) we have \[ {\varphi }_{l}\left( {y}_{\nu }\right) = \left( {\mathop{\prod }\limits_{{i \neq \nu }}\left( {{y}_{\nu } - {y}_{i}}\right) }\right) \cdot \frac{{y}_{\nu }^{l}}{{\varphi }^{\prime }\left( {y}_{\nu }\right) } - {y}_{\nu }^{l} = 0 \] by (3.62), since the summands with \( j \neq \nu \) vanish at \( T = {y}_{\nu } \) . Hence \( {\varphi }_{l} \) vanishes at the \( n \) distinct points \( {y}_{1},\ldots ,{y}_{n} \) and is therefore identically zero. 
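The duality relation (3.61) can be checked numerically for a concrete choice of \( \varphi \) . The sketch below takes \( \varphi \left( T\right) = {T}^{3} - 2 \) over \( \mathbb{Q} \) (an illustrative assumption, not an example from the text); the conjugates \( {y}_{j} \) are the complex cube roots of 2, and the trace is computed as the sum over conjugates:

```python
import cmath

# Check the dual-basis relation (3.61) for phi(T) = T^3 - 2 (illustrative choice).
# Here phi(T)/(T - y) = T^2 + y*T + y^2, so c_2 = 1, c_1 = y, c_0 = y^2,
# i.e. c_i = y^(n-1-i); and phi'(T) = 3*T^2.
n = 3
ys = [2 ** (1 / 3) * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def dphi(t):
    return 3 * t * t  # phi'(T)

# M[i][l] = Tr(c_i / phi'(y) * y^l), the trace taken as the sum over conjugates.
M = [[sum(y ** (n - 1 - i) * y ** l / dphi(y) for y in ys) for l in range(n)]
     for i in range(n)]

# (3.61): the matrix of traces is the identity (Kronecker delta).
for i in range(n):
    for l in range(n):
        assert abs(M[i][l] - (1 if i == l else 0)) < 1e-9
```

The assertions pass because \( \mathop{\sum }\limits_{j}{y}_{j}^{m} \) vanishes unless \( 3 \mid m \), exactly as the algebraic proof predicts.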
1068_(GTM227)Combinatorial Commutative Algebra
Definition 4.3
Definition 4.3 Let \( X \) be a labeled cell complex. The cellular monomial matrix supported on \( X \) uses the reduced chain complex of \( X \) for scalar entries, with \( \varnothing \) in homological degree 0 . Row and column labels are those on the corresponding faces of \( X \) . The cellular free complex \( {\mathcal{F}}_{X} \) supported on \( X \) is the complex of \( {\mathbb{N}}^{n} \) -graded free \( S \) -modules (with basis) represented by the cellular monomial matrix supported on \( X \) . The free complex \( {\mathcal{F}}_{X} \) is a cellular resolution if it is acyclic (homology only in degree 0). By convention, the label on the empty face \( \varnothing \in X \) is \( \mathbf{0} \in {\mathbb{N}}^{n} \), which is the exponent on \( 1 \in S \), the least common multiple of no monomials. It is also possible to write down the differential \( \partial \) of \( {\mathcal{F}}_{X} \) without using monomial matrices, where it can be written as \[ {\mathcal{F}}_{X} = {\bigoplus }_{F \in X}S\left( {-{\mathbf{a}}_{F}}\right) ,\;\partial \left( F\right) \; = \mathop{\sum }\limits_{{\text{facets }G\text{ of }F}}\operatorname{sign}\left( {G, F}\right) {\mathbf{x}}^{{\mathbf{a}}_{F} - {\mathbf{a}}_{G}}G. \] The symbols \( F \) and \( G \) here are thought of both as faces of \( X \) and as basis vectors in degrees \( {\mathbf{a}}_{F} \) and \( {\mathbf{a}}_{G} \) . The sign for \( \left( {G, F}\right) \) equals \( \pm 1 \) and is part of the data in the boundary map of the chain complex of \( X \) . 
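The differential \( \partial \) just described can be implemented directly. The sketch below is a hypothetical example, not one from the text: it takes \( X \) to be the labeled full triangle on the three generators \( {x}^{2},{xy},{y}^{2} \) (the Taylor complex), builds the matrices with entries \( \operatorname{sign}\left( {G, F}\right) {\mathbf{x}}^{{\mathbf{a}}_{F} - {\mathbf{a}}_{G}} \), and checks that \( \partial \circ \partial = 0 \) :

```python
from itertools import combinations

gens = [(2, 0), (1, 1), (0, 2)]  # exponent vectors of x^2, xy, y^2 (assumed example)
N = len(gens)

def label(face):  # a_F = componentwise max of vertex labels (exponent of the lcm)
    return tuple(max((gens[v][i] for v in face), default=0) for i in range(2))

# faces[d] = size-d subsets of the vertices, i.e. faces of dimension d - 1
faces = {d: list(combinations(range(N), d)) for d in range(N + 1)}

def boundary(d):
    """Cellular differential from size-d faces to size-(d-1) faces.

    Each entry is a polynomial stored as {exponent vector: coefficient};
    the (G, F) entry is sign(G, F) * x^(a_F - a_G)."""
    rows, cols = faces[d - 1], faces[d]
    M = [[{} for _ in cols] for _ in rows]
    for j, F in enumerate(cols):
        aF = label(F)
        for k, v in enumerate(F):
            G = tuple(u for u in F if u != v)
            aG = label(G)
            e = (aF[0] - aG[0], aF[1] - aG[1])
            M[rows.index(G)][j] = {e: (-1) ** k}
    return M

def compose(A, B):  # product of two polynomial matrices
    out = [[{} for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            acc = {}
            for k in range(len(B)):
                for ea, ca in A[i][k].items():
                    for eb, cb in B[k][j].items():
                        e = (ea[0] + eb[0], ea[1] + eb[1])
                        acc[e] = acc.get(e, 0) + ca * cb
            out[i][j] = {e: c for e, c in acc.items() if c}
    return out

# F_X is a complex: consecutive differentials compose to zero.
for d in range(2, N + 1):
    prod = compose(boundary(d - 1), boundary(d))
    assert all(entry == {} for row in prod for entry in row)
```

The cancellation happens for the usual simplicial reason: the monomial coefficients compose path-independently, \( {\mathbf{x}}^{{\mathbf{a}}_{F} - {\mathbf{a}}_{G}}{\mathbf{x}}^{{\mathbf{a}}_{G} - {\mathbf{a}}_{E}} = {\mathbf{x}}^{{\mathbf{a}}_{F} - {\mathbf{a}}_{E}} \), so the signs cancel pairwise.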
Example 4.4 The following labeled hexagon appears as a face of the three-dimensional polytope at the beginning of this chapter: ![9d852306-8a03-41f2-b2e7-a141e7b451e2_74_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_74_0.jpg) Given the orientations that we have chosen for the faces of \( X \), the cellular free complex \( {\mathcal{F}}_{X} \) supported by this labeled hexagon is written as follows: ![9d852306-8a03-41f2-b2e7-a141e7b451e2_74_1.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_74_1.jpg) ![9d852306-8a03-41f2-b2e7-a141e7b451e2_74_2.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_74_2.jpg) This is the representation of the resolution in terms of cellular monomial matrices. The arrows drawn in and on the hexagon denote the orientations of its faces, which determine the values of \( \operatorname{sign}\left( {G, F}\right) \) . For example, \( \partial \left( {\langle v\rangle }\right) \) is the signed sum of the six oriented edges \( G \) of the hexagon, each appearing with coefficient \( {\mathbf{x}}^{{\mathbf{a}}_{\langle v\rangle } - {\mathbf{a}}_{G}} \) ; these coefficients are \( {b}^{2} \), \( {bc} \), \( {c}^{2} \), \( {ac} \), \( {a}^{2} \), and \( {ab} \), in the non-monomial matrix way of writing cellular free complexes. 
Similarly, denote by \( {X}_{ \prec \mathbf{b}} \) the subcomplex of \( X \) consisting of all faces with labels \( \prec \mathbf{b} \), where \( {\mathbf{b}}^{\prime } \prec \mathbf{b} \) if \( {\mathbf{b}}^{\prime } \preccurlyeq \mathbf{b} \) and \( {\mathbf{b}}^{\prime } \neq \mathbf{b} \) . A fundamental property of cellular free complexes is that their acyclicity can be determined using merely the geometry of polyhedral cell complexes. Let us call a cell complex acyclic if it is either empty or has zero reduced homology. In the empty case, its only homology lies in homological degree -1 . The property of being acyclic depends on the underlying field \( \mathbb{k} \) , as we shall see in Section 4.3.5. Proposition 4.5 The cellular free complex \( {\mathcal{F}}_{X} \) supported on \( X \) is a cellular resolution if and only if \( {X}_{ \preccurlyeq \mathbf{b}} \) is acyclic over \( \mathbb{k} \) for all \( \mathbf{b} \in {\mathbb{N}}^{n} \) . When \( {\mathcal{F}}_{X} \) is acyclic, it is a free resolution of \( S/I \), where \( I = \left\langle {{\mathbf{x}}^{{\mathbf{a}}_{v}} \mid v\text{ is a vertex of }X}\right\rangle \) is generated by the monomial labels on vertices. Proof. The free modules contributing to the part of \( {\mathcal{F}}_{X} \) in degree \( \mathbf{b} \in {\mathbb{N}}^{n} \) are precisely those generated in degrees \( \preccurlyeq \mathbf{b} \) . This proves the criterion for acyclicity, noting that if this degree \( \mathbf{b} \) complex is acyclic, then its only homology lies in homological degree 0 . If \( {\mathcal{F}}_{X} \) is acyclic, then it resolves \( S/I \) because the image of its last map equals \( I \subseteq S \) . Example 4.6 Let \( I \) be the ideal whose generating exponents are the vertex labels on the right-hand cell complex in Fig. 4.1. The label '215' in the diagrams is short for \( \left( {2,1,5}\right) \) . 
The labeled complex \( X \) on the left supports a cellular minimal free resolution of \( S/\left( {I + \left\langle {{x}^{5},{y}^{6},{z}^{6}}\right\rangle }\right) \), so Proposition 4.5 implies that the subcomplex \( {\mathcal{F}}_{{X}_{ \preccurlyeq {455}}} \) resolves \( S/I \) . ![9d852306-8a03-41f2-b2e7-a141e7b451e2_76_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_76_0.jpg) Figure 4.1: The cell complexes from Example 4.6 ## 4.2 Betti numbers and \( K \) -polynomials Given a monomial ideal \( I \) with a cellular resolution \( {\mathcal{F}}_{X} \), we next see how the Betti numbers and the \( K \) -polynomial of the monomial ideal \( I \) can be computed from the labeled cell complex \( X \) . The key is that \( X \) satisfies the acyclicity criterion of Proposition 4.5. In the forthcoming statement and its proof, we use freely the fact that \( {\beta }_{i,\mathbf{b}}\left( I\right) = {\beta }_{i + 1,\mathbf{b}}\left( {S/I}\right) \) . As in Chapter 1 for the simplicial case, if \( X \) is a polyhedral cell complex and \( \mathbb{k} \) is a field then \( {\widetilde{H}}_{ \bullet }\left( {X;\mathbb{k}}\right) \) denotes the homology of the reduced chain complex \( {\widetilde{\mathcal{C}}}_{ \bullet }\left( {X;\mathbb{k}}\right) \) . Theorem 4.7 If \( {\mathcal{F}}_{X} \) is a cellular resolution of the monomial quotient \( S/I \) , then the Betti numbers of \( I \) can be calculated for \( i \geq 1 \) as \[ {\beta }_{i,\mathbf{b}}\left( I\right) = {\dim }_{\mathbb{k}}{\widetilde{H}}_{i - 1}\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) . \] Proof. When \( {\mathbf{x}}^{\mathbf{b}} \) does not lie in \( I \), the complex \( {X}_{ \prec \mathbf{b}} \) consists at most of the empty face \( \varnothing \in X \), which has no homology in homological degrees \( \geq 0 \) . 
This is good, because \( {\beta }_{i,\mathbf{b}}\left( I\right) \) is zero unless \( {\mathbf{x}}^{\mathbf{b}} \in I \), as \( {K}^{\mathbf{b}}\left( I\right) \) is void if \( {\mathbf{x}}^{\mathbf{b}} \notin I \) . Now assume \( {\mathbf{x}}^{\mathbf{b}} \in I \), and calculate Betti numbers as in Lemma 1.32 by tensoring \( {\mathcal{F}}_{X} \) with \( \mathbb{k} \) . The resulting complex in degree \( \mathbf{b} \) is the complex of vector spaces over \( \mathbb{k} \) obtained by taking the quotient of the reduced chain complex \( {\widetilde{\mathcal{C}}}_{ \bullet }\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \) modulo its subcomplex \( {\widetilde{\mathcal{C}}}_{ \bullet }\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) \) . In other words, the desired Betti number \( {\beta }_{i,\mathbf{b}}\left( I\right) \) is the dimension over \( \mathbb{k} \) of the \( {i}^{\text{th }} \) homology of the rightmost complex in the following exact sequence of complexes: \[ 0 \rightarrow {\widetilde{\mathcal{C}}}_{ \bullet }\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) \rightarrow {\widetilde{\mathcal{C}}}_{ \bullet }\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \rightarrow {\widetilde{\mathcal{C}}}_{ \bullet }\left( \mathbf{b}\right) \rightarrow 0. \] The long exact sequence for homology reads \[ \cdots \rightarrow {\widetilde{H}}_{i}\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \rightarrow {\widetilde{H}}_{i}\left( {{\widetilde{\mathcal{C}}}_{ \bullet }\left( \mathbf{b}\right) }\right) \rightarrow {\widetilde{H}}_{i - 1}\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) \rightarrow {\widetilde{H}}_{i - 1}\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \rightarrow \cdots \] Our assumption \( {\mathbf{x}}^{\mathbf{b}} \in I \) implies by Proposition 4.5 that \( {X}_{ \preccurlyeq \mathbf{b}} \) has no reduced homology: \( {\widetilde{H}}_{j}\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) = 0 \) for all \( j \) . 
Hence the long exact sequence implies that \( {\widetilde{H}}_{i}\left( {{\widetilde{\mathcal{C}}}_{ \bullet }\left( \mathbf{b}\right) }\right) \cong {\widetilde{H}}_{i - 1}\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) \) . Now take \( \mathbb{k} \) -vector space dimensions. Example 4.8 Consider the ideal \( I = \left\langle {{x}_{1}{x}_{2},{x}_{1}{x}_{3},{x}_{1}{x}_{4},{x}_{2}{x}_{3},{x}_{2}{x}_{4},{x}_{3}{x}_{4}}\right\rangle \) , and let \( X \) be the boundary complex of the (solid) octahedron. Label the six vertices of \( X \) with the six generators of \( I \) so that opposite vertices get monomials with disjoint support. Then \( {\mathcal{F}}_{X} \) is a nonminimal free resolution \[ 0 \leftarrow {S}^{1}
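Theorem 4.7 can be tested computationally on a small hypothetical example (our own choice, not from the text): the ideal \( I = \left\langle {{x}^{2},{xy},{y}^{2}}\right\rangle \), whose Taylor complex is the labeled full triangle and is a cellular resolution. The sketch computes \( {\dim }_{\mathbb{Q}}{\widetilde{H}}_{i - 1}\left( {{X}_{ \prec \mathbf{b}};\mathbb{Q}}\right) \) by exact rank computations:

```python
from itertools import combinations
from fractions import Fraction

gens = [(2, 0), (1, 1), (0, 2)]  # x^2, xy, y^2 (assumed example ideal)

def label(face):  # componentwise max of vertex labels
    return tuple(max((gens[v][i] for v in face), default=0) for i in range(2))

def strictly_below(a, b):  # a < b in the partial order on N^2
    return all(a[i] <= b[i] for i in range(2)) and a != b

def rank(M):  # rank over Q by Gaussian elimination
    if not M or not M[0]:
        return 0
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def betti(i, b):
    """beta_{i,b}(I) = dim H~_{i-1}(X_{< b}; Q), as in Theorem 4.7."""
    X = [F for d in range(len(gens) + 1) for F in combinations(range(len(gens)), d)
         if strictly_below(label(F), b)]
    byd = {}  # faces grouped by dimension; the empty face has dimension -1
    for F in X:
        byd.setdefault(len(F) - 1, []).append(F)

    def bnd(d):  # scalar reduced boundary matrix from d-faces to (d-1)-faces
        rows, cols = byd.get(d - 1, []), byd.get(d, [])
        M = [[0] * len(cols) for _ in rows]
        for j, F in enumerate(cols):
            for k, v in enumerate(F):
                G = tuple(u for u in F if u != v)
                M[rows.index(G)][j] = (-1) ** k
        return M

    d = i - 1
    faces_d = byd.get(d, [])
    if not faces_d:
        return 0
    return len(faces_d) - rank(bnd(d)) - rank(bnd(d + 1))  # dim ker - rank

assert betti(0, (1, 1)) == 1  # xy is a generator of I
assert betti(1, (2, 1)) == 1  # a first syzygy in degree (2,1)
assert betti(1, (1, 2)) == 1  # and one in degree (1,2)
assert betti(2, (2, 2)) == 0  # no second syzygy: pd(I) = 1 for this ideal
```

For \( \mathbf{b} = \left( {2,1}\right) \), for instance, \( {X}_{ \prec \mathbf{b}} \) consists of two disconnected vertices, so \( {\widetilde{H}}_{0} \) is one-dimensional, matching the syzygy between \( {x}^{2} \) and \( {xy} \) .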
1139_(GTM44)Elementary Algebraic Geometry
Definition 2.10
Definition 2.10. Let \( f \) and \( g \) be any two functions defined on a neighborhood of \( P \in \mathcal{T} \) with values in a field; we define \( {f}_{P} \sim + {g}_{P} \sim \) by \( {\left( f + g\right) }_{P} \sim \) , \( {f}_{P} \sim \cdot {g}_{P} \sim \) by \( {\left( f \cdot g\right) }_{P} \sim \), and \( - \left( {{f}_{P} \sim }\right) \) by \( {\left( -f\right) }_{P} \sim \) . If there is a neighborhood of \( P \) on which \( f \) is never zero, then we define \( 1/{f}_{P} \sim \) by \( {\left( 1/f\right) }_{P} \sim \) . (These are clearly well defined.) Definitions 2.8-2.10 are very general; one can consider important classes of germs at different "levels," for instance the set of subsets \( S \) of \( \mathcal{T} \) closed at \( P \) (that is, \( S \) closed within a sufficiently small neighborhood of \( P \) ); if furthermore \( \mathcal{T} \) is \( \mathbb{C} \) supplied with the usual topology, we can replace closed at \( P \) by analytic at \( P \) (that is, \( S \) coincides throughout some neighborhood of \( P \) with the zero-set of a function analytic at \( P \in \mathbb{C} \) ). One then speaks of closed germs, analytic germs, etc. And for functions, one may, for instance, consider functions continuous, differentiable, or analytic at a point. Definition 2.10 shows that the set of all function germs in any such fixed level in general forms a ring. We may, more generally, consider any ring \( R \) of functions, each function being defined on some neighborhood of a fixed point \( P \) of a topological space \( \mathcal{T} \) . The set \( {R}_{P} \sim = \left\{ {{f}_{P} \sim \mid f \in R}\right\} \) forms the induced set of function germs at \( P \) . In view of Definition 2.10 we see that \( f \rightarrow {f}_{P}{}^{ \sim } \) is a ring homomorphism, and \( {R}_{P}{}^{ \sim } \) is in general a proper homomorphic image of \( R \) . Example 2.11. 
Let \( R \) be the ring of all real-valued functions on \( \mathbb{R} \) which are constant in a neighborhood of (0). The elements of \( {R}_{P}{}^{ \sim } \) are collections of functions, and each collection can be represented by a constant function. \( {R}_{P}{}^{ \sim } \) is in this case isomorphic to \( \mathbb{R} \) . As another example, consider polynomials or functions analytic at \( \left( 0\right) \in \mathbb{C} \) . There is essentially only one function in each germ (from the familiar "identity theorem" for power series). The main applications of this section in the rest of the chapter are to curves. In Theorem 2.28 we prove the following: Suppose (i) \( P \) is any point of an irreducible curve \( C \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}} \) with coordinate ring \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) and function field \( \mathbb{C}\left( {{x}_{1},\ldots ,{x}_{n}}\right) = {K}_{C} \) , (ii) there is given an evaluation at \( P \) of each function \( f \in {K}_{C} \) coinciding with the natural evaluation of \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) at \( P \) and satisfying properties (2.5.1) and (2.5.2), and (iii) \( R \) is the subring of \( {K}_{C} \) assigned finite values. Then we can conclude that there is canonically associated with \( R \) a germ \( {B}_{P} \sim \) ( \( B \) for "branch") such that \( R \) can be regarded in a natural way as a ring of function germs on \( {B}_{P} \sim \), and such that for any \( f \in R \), its initially-given evaluation at \( P \) will be the value of \( {f}_{P} \sim \) at \( P \) . Before proving Theorem 2.28 we establish a number of basic results. We first look at some properties of the above ring \( R \) . We begin with an example. 
Consider \( \mathbb{C} = {\mathbb{C}}_{X} \) ; at (0) the corresponding ring \( R \subset \mathbb{C}\left( X\right) \) of functions assigned finite values at (0) consists of the functions \( p/q \), where \( p \) and \( q \) are polynomials and \( q\left( 0\right) \neq 0 \) . The ring \( R \) can be naturally regarded as a ring of functions on \( {\mathbb{C}}_{\left( 0\right) } \sim \) . There is but one point of \( \mathbb{C} \) common to all sets in \( {\mathbb{C}}_{\left( 0\right) } \sim \) . If, as suggested earlier, \( R \) is to act like a "coordinate ring" on \( {\mathbb{C}}_{\left( 0\right) } \sim \), then we might conjecture that \( R \) has but one maximal ideal, corresponding to the point (0). This is indeed so; in fact we show, more generally (Theorem 2.12 and its corollary), that the ring \( R \) of all elements in any field \( K \) assigned values satisfying properties (2.5.1) and (2.5.2), has a unique maximal ideal. We start with the following simple characterization, which leads to Definition 2.13. Theorem 2.12. Let \( K \) and \( k \) be fields; if each element in \( K \) is assigned a value in \( k \cup \{ \infty \} \), and if this assignment satisfies properties (2.5.1) and (2.5.2), then the set of elements assigned finite values forms a subring \( R \) of \( K \), and for each \( a \in K, a \notin R \) implies \( 1/a \in R \) . Conversely, let \( K \) be a field; if \( R \) is any subring of \( K \) such that for each \( a \in K, a \notin R \) implies \( 1/a \in R \), then there is a field \( k \) such that each element of \( K \) is assigned a value in \( k \cup \{ \infty \} \), this assignment satisfying (2.5.1) and (2.5.2). Proof. The first half is obvious. For the converse, assume without loss of generality that \( R \neq K \), and let \( m \) be the set of elements \( a \) of \( R \) such that \( 1/a \notin R \) . We show that \( \mathfrak{m} \) is a maximal ideal in \( R \) . 
Then properties (2.5.1) and (2.5.2) follow at once for the field \( R/\mathfrak{m} \) . We first show that \( \mathfrak{m} \) is an ideal. \( \mathfrak{m} \) is closed under addition. For let \( a, b \in \mathfrak{m} \) . If \( a \) or \( b \) is 0, then \( a + b \in \mathfrak{m} \) . Therefore assume \( a \neq 0, b \neq 0 \) . We show \( a + b \in \mathfrak{m} \) . By hypothesis on \( R \), either \( a/b \in R \) or \( {\left( a/b\right) }^{-1} = b/a \in R \) . Suppose \( a/b \in R \) . Now \( 1 \in R \) (if not, then \( 1/1 = 1 \in R \) ); hence because \( R \) is assumed to be a ring, \( 1 + \left( {a/b}\right) = \left( {b + a}\right) /b \in R \) . To show \( a + b \in \mathfrak{m} \), suppose \( a + b \notin \mathfrak{m} \) . Then \( 1/\left( {a + b}\right) \in R \), and since \( R \) is a ring, \( \left\lbrack {\left( {b + a}\right) /b}\right\rbrack \left\lbrack {1/\left( {a + b}\right) }\right\rbrack = \) \( 1/b \in R \) . But this is impossible, since we assumed from the outset that \( b \in \mathfrak{m} \) ; by \( \mathfrak{m} \) 's definition, \( b \in \mathfrak{m} \) implies \( 1/b \notin R \) . Next, \( \mathfrak{m} \) has the absorbing property, for suppose \( a \in R, b \in \mathfrak{m} \) but that \( {ab} \notin \mathfrak{m} \) . Then \( 1/{ab} \in R \), hence \( a/{ab} = 1/b \in R \) , again giving us a contradiction. It is now easy to see that \( \mathfrak{m} \) is maximal, since any ideal \( \mathfrak{a} \subset R \) containing an element \( c \in R \smallsetminus \mathfrak{m} \) must also contain \( c \cdot \left( {1/c}\right) = 1 \) (because \( c \in R \smallsetminus \mathfrak{m} \) implies \( 1/c \in R \) ), which implies that \( \mathfrak{a} = R \) . Thus there can be no proper ideal of \( R \) larger than \( \mathfrak{m} \) . Note that if a field \( K \) is given an evaluation satisfying (2.5.1) and (2.5.2), the subring \( R \) of elements assigned finite values in turn determines the same evaluation (up to an isomorphism of \( k = R/\mathfrak{m} \) ) via \( a \rightarrow a + \mathfrak{m} \) (if \( a \in R \) ), and \( a \rightarrow \infty \) (if \( a \notin R \) ). 
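Theorem 2.12 and its proof can be exercised on a concrete valuation ring. The sketch below is an illustrative assumption of our own: it takes \( K = \mathbb{Q} \) and \( R = {\mathbb{Z}}_{\left( 2\right) } \), the rationals with odd denominator (so the residue field is \( {\mathbb{F}}_{2} \) rather than \( \mathbb{C} \)), and checks the defining property together with the ideal properties of \( \mathfrak{m} \) established in the proof:

```python
from fractions import Fraction

def v2(a):
    """2-adic valuation of a nonzero rational (illustrative helper)."""
    num, den, k = a.numerator, a.denominator, 0
    while num % 2 == 0:
        num //= 2
        k += 1
    while den % 2 == 0:
        den //= 2
        k -= 1
    return k

in_R = lambda a: v2(a) >= 0   # R = Z_(2): odd denominator in lowest terms
in_m = lambda a: v2(a) > 0    # m = {a in R : 1/a not in R}

sample = [Fraction(n, d) for n in range(-8, 9) if n for d in range(1, 9)]

for a in sample:
    assert in_R(a) or in_R(1 / a)           # valuation-ring property
    if in_R(a) and not in_m(a):
        assert in_R(1 / a)                  # units of R are exactly R \ m
for a in sample:
    for b in sample:
        if in_m(a) and in_m(b) and a + b != 0:
            assert in_m(a + b)              # m is closed under addition
        if in_R(a) and in_m(b):
            assert in_m(a * b)              # m absorbs multiplication by R
```

Here \( \mathfrak{m} \) consists of the rationals with even numerator and odd denominator, the unique maximal ideal of \( {\mathbb{Z}}_{\left( 2\right) } \) .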
In view of Theorem 2.12 we make the following Definition 2.13. A subring \( R \) of a field \( K \) is called a valuation ring if, for each \( a \in K \), either \( a \in R \) or \( 1/a \in R \) . If \( R \) contains a subfield \( k \) of \( K \), then \( R \) is called a valuation ring over \( k \) . Remark 2.14. Henceforth, the term valuation ring in this book will always mean valuation ring over \( \mathbb{C} \) unless stated otherwise. Corollary to Theorem 2.12. Every valuation ring \( R \) has a unique maximal ideal. Proof. Any maximal ideal other than \( m = \{ a \in R \mid 1/a \notin R\} \) would have to contain an element of \( R \smallsetminus \mathfrak{m} \), and we saw this is impossible. There is an important side to evaluation which we have not touched upon yet. We begin with an example. Consider the field \( K = \mathbb{C}\left( X\right) \) . In addition to assigning at a point \( P \) a value \( f\left( P\right) \in \mathbb{C} \cup \{ \infty \} \) to each \( f \in \mathbb{C}\left( X\right) \) , we may also assign an order, \( {\operatorname{ord}}_{P}\left( f\right) \) ; it is a straightforward generalization of the definition for polynomials: We define, for \( f = p/q\left( {p, q \in \mathbb{C}\left\lbrack X\right\rbrack }\right) \) , \[ {\operatorname{ord}}_{P}\left( f\right) = {\operatorname{ord}}_{P}\left( \frac{p}{q}\right) = {\operatorname{ord}}_{P}\left( p\right) - {\operatorname{ord}}_{P}\left( q\right) ,\;\text{ for any }P \in \mathbb{C}. \] (1) It is obvious that \( {\operatorname{ord}}_{P}\left( f\right) \) is well defined. Observe that if \( p/q \) (written in lowest terms) is expanded about \( P \) (e.g., expand \( p \) and \( q \) about \( P \) and use "long division"), the exponent of the lowest-degree term is just \( {\operatorname{ord}}_{P}\left( {p/q}\right) \) ; this fits in with the term order. 
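The order function (1) is easy to compute exactly. In the sketch below (the helper names are our own), a polynomial is a list of coefficients with the highest degree first, and \( {\operatorname{ord}}_{P} \) of a nonzero polynomial is found by repeated synthetic division by \( X - P \) :

```python
from fractions import Fraction

def peval(c, P):  # Horner evaluation of the polynomial c at P
    v = Fraction(0)
    for a in c:
        v = v * P + a
    return v

def syndiv(c, P):  # quotient of c by (X - P), valid when P is a root
    q, acc = [], Fraction(0)
    for a in c[:-1]:
        acc = acc * P + a
        q.append(acc)
    return q

def ord_poly(c, P):  # ord_P of a polynomial: multiplicity of the root P
    c = [Fraction(a) for a in c]
    k = 0
    while any(c) and peval(c, P) == 0:
        c = syndiv(c, P)
        k += 1
    return k

def ord_at(p, q, P):  # formula (1): ord_P(p/q) = ord_P(p) - ord_P(q)
    return ord_poly(p, P) - ord_poly(q, P)

# f = X^2 (X - 1) / (X + 1) = (X^3 - X^2) / (X + 1): a zero of order 2 at 0,
# a simple zero at 1, and a simple pole (order -1) at -1.
p, q = [1, -1, 0, 0], [1, 1]
assert ord_at(p, q, 0) == 2
assert ord_at(p, q, 1) == 1
assert ord_at(p, q, -1) == -1

# property (a), here for polynomials: ord_P(f + g) >= min(ord_P f, ord_P g)
f, g, s = [1, 0], [1, -1, 0], [1, 0, 0]  # f = X, g = X^2 - X, s = f + g = X^2
assert ord_poly(s, 0) >= min(ord_poly(f, 0), ord_poly(g, 0))
```

The last check also illustrates that the inequality in (a) can be strict, exactly when the lowest-order terms cancel.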
(For \( c \in \mathbb{C} \smallsetminus \{ 0\} \), \( {\operatorname{ord}}_{P}\left( c\right) = 0 \) ; \( {\operatorname{ord}}_{P}\left( 0\right) = \infty \), by definition. We assume \( \infty \) is greater than any element of \( \mathbb{Z} \) .) As with polynomials, \( {\operatorname{ord}}_{P} \) for elements of \( \mathbb{C}\left( X\right) \) satisfies (a) \( {\operatorname{ord}}_{P}\left( {f + g}\right) \geq \min \left( {{\operatorname{ord}}_{P}\left( f\right) ,{\operatorname{ord}}_{P}\left( g\right) }\right) \) ,
1083_(GTM240)Number Theory II
Definition 10.3.11
Definition 10.3.11. Let \( k \geq 1 \) be an integer. For any \( n \geq 1 \) such that \( {\left( -1\right) }^{k}n \equiv 0 \) or 1 modulo 4, write \( {\left( -1\right) }^{k}n = D{f}^{2} \), where \( f \in \mathbb{Z} \) and \( D \) is a fundamental discriminant (including 1). We define the functions \( {H}_{k}\left( n\right) \) by the formula \[ {H}_{k}\left( n\right) = L\left( {\left( \frac{D}{ \cdot }\right) ,1 - k}\right) \mathop{\sum }\limits_{{d \mid f}}\mu \left( d\right) \left( \frac{D}{d}\right) {d}^{k - 1}{\sigma }_{{2k} - 1}\left( {f/d}\right) , \] and we also set by convention \( {H}_{k}\left( 0\right) = \zeta \left( {1 - {2k}}\right) \) . The theorem is then as follows. Theorem 10.3.12. For \( k \geq 2 \) the Fourier series \[ {\mathcal{H}}_{k}\left( \tau \right) = \mathop{\sum }\limits_{{n \geq 0}}{H}_{k}\left( n\right) {q}^{n} \] is a modular form of weight \( k + 1/2 \) on the congruence subgroup \( {\Gamma }_{0}\left( 4\right) \), where as usual \( q = \exp \left( {2i\pi \tau }\right) \) . Since the space of modular forms is finite-dimensional, it is then an easy matter to identify precisely a given form from its first few Fourier coefficients, given a specific basis. It is easy to show that the function \[ \theta \left( \tau \right) = \mathop{\sum }\limits_{{n \in \mathbb{Z}}}{q}^{{n}^{2}} = 1 + 2\mathop{\sum }\limits_{{n \geq 1}}{q}^{{n}^{2}} \] (of weight \( 1/2 \) ) and the function \[ {\theta }^{4}\left( {\tau + 1/2}\right) = {\left( \mathop{\sum }\limits_{{n \in \mathbb{Z}}}{\left( -1\right) }^{n}{q}^{{n}^{2}}\right) }^{4} = {\left( 1 + 2\mathop{\sum }\limits_{{n \geq 1}}{\left( -1\right) }^{n}{q}^{{n}^{2}}\right) }^{4} \] (of weight 2) generate the algebra of all modular forms of half-integral weight on \( {\Gamma }_{0}\left( 4\right) \) . In other words, any modular form of integral or half-integral weight on \( {\Gamma }_{0}\left( 4\right) \) is an isobaric polynomial in these two functions. 
A little computation gives the following corollary, which is very useful for the computation of special values when many of them are needed. Corollary 10.3.13. We have \[ {\mathcal{H}}_{2}\left( \tau \right) = \frac{{5\theta }\left( \tau \right) {\theta }^{4}\left( {\tau + 1/2}\right) - {\theta }^{5}\left( \tau \right) }{480}, \] \[ {\mathcal{H}}_{3}\left( \tau \right) = - \frac{7{\theta }^{3}\left( \tau \right) {\theta }^{4}\left( {\tau + 1/2}\right) + {\theta }^{7}\left( \tau \right) }{2016}, \] \[ {\mathcal{H}}_{4}\left( \tau \right) = \frac{\theta \left( \tau \right) {\theta }^{8}\left( {\tau + 1/2}\right) + {14}{\theta }^{5}\left( \tau \right) {\theta }^{4}\left( {\tau + 1/2}\right) + {\theta }^{9}\left( \tau \right) }{3840}. \] Remarks. (1) Since the \( \theta \) function is lacunary, even applied naïvely these formulas give a very efficient method for computing large batches of special values of \( L \) -functions of quadratic characters. However, it is still \( O\left( {D}^{1/2 + \varepsilon }\right) \) on average. On the other hand, if we use FFT-based techniques for multiplying power series, we can compute large numbers of coefficients even faster, and go down to \( O\left( {D}^{\varepsilon }\right) \) on average. (2) The above formulas are essentially equivalent to those that we have given in Theorem 5.4.16. (3) Because Hilbert modular forms exist only for totally real number fields, the method using Hecke-Eisenstein series is applicable for computing special values of real quadratic characters only, while the present method is applicable both to real and to imaginary quadratic characters. The formulas obtained by the above two methods are in fact closely related. 
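Corollary 10.3.13 can be verified directly on initial \( q \) -expansion coefficients. The sketch below multiplies truncated power series of \( \theta \left( \tau \right) \) and \( \theta \left( {\tau + 1/2}\right) \) in exact rational arithmetic and recovers the first values of \( {H}_{2} \), e.g. \( {H}_{2}\left( 0\right) = \zeta \left( {-3}\right) = 1/{120} \) (the truncation order is our own choice):

```python
from fractions import Fraction

B = 12  # truncation order for the q-expansions (assumed for illustration)

def theta(twisted=False):
    """Coefficients of theta(tau), or of theta(tau + 1/2) if twisted,
    i.e. q^(n^2) gets the extra sign (-1)^n."""
    c = [Fraction(0)] * B
    c[0] = Fraction(1)
    n = 1
    while n * n < B:
        c[n * n] = Fraction(-2 if twisted and n % 2 else 2)
        n += 1
    return c

def mul(a, b):  # product of power series, truncated at order B
    out = [Fraction(0)] * B
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < B:
                    out[i + j] += ai * bj
    return out

t = theta()
w = theta(twisted=True)
w4 = mul(mul(w, w), mul(w, w))          # theta^4(tau + 1/2)
t5 = mul(mul(mul(t, t), mul(t, t)), t)  # theta^5(tau)

# H_2(tau) = (5 theta theta^4(tau + 1/2) - theta^5) / 480
H2 = [(5 * a - b) / 480 for a, b in zip(mul(t, w4), t5)]
assert H2[0] == Fraction(1, 120)   # zeta(-3)
assert H2[1] == Fraction(-1, 12)
assert H2[3] == 0                  # H_2 vanishes when n = 3 mod 4
assert H2[4] == Fraction(-7, 12)
```

The computed coefficients also agree with the finite sums of Proposition 10.3.14, which is a useful cross-check on the normalizations.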
For instance, if we set classically \[ {E}_{2}\left( \tau \right) = 1 - {24}\mathop{\sum }\limits_{{n \geq 1}}{\sigma }_{1}\left( n\right) {q}^{n}, \] which is not quite a modular form, it is easy to check directly that \[ - \frac{{\theta }^{\prime }\left( \tau \right) /\left( {2i\pi }\right) }{20} + \frac{{E}_{2}\left( {4\tau }\right) \theta \left( \tau \right) }{120} \] is a true modular form of weight \( 5/2 \), and the first coefficients show that it is equal to \( {\mathcal{H}}_{2}\left( \tau \right) \) . Similarly it is not difficult to check that \( {\mathcal{H}}_{4}\left( \tau \right) = \) \( {E}_{4}\left( {4\tau }\right) \theta \left( \tau \right) /{240} \), where \[ {E}_{4}\left( \tau \right) = 1 + {240}\mathop{\sum }\limits_{{n \geq 1}}{\sigma }_{3}\left( n\right) {q}^{n}. \] This gives the following formulas, which generalize to arbitrary \( N > 0 \) (and not only discriminants of real quadratic fields) Siegel's formulas coming from Hecke-Eisenstein series: Proposition 10.3.14. By convention set \( {\sigma }_{k}\left( 0\right) = \zeta \left( {-k}\right) /2 \) (so that \( {\sigma }_{1}\left( 0\right) = \) \( - 1/{24} \) and \( \left. {{\sigma }_{3}\left( 0\right) = 1/{240}}\right) \) . We have \[ {H}_{2}\left( N\right) = - \frac{1}{5}\mathop{\sum }\limits_{\substack{{s \in \mathbb{Z},{s}^{2} \leq N} \\ {s \equiv N\left( {\;\operatorname{mod}\;2}\right) } }}{\sigma }_{1}\left( \frac{N - {s}^{2}}{4}\right) - \frac{N}{10}\delta \left( \sqrt{N}\right) , \] \[ {H}_{4}\left( N\right) = \mathop{\sum }\limits_{\substack{{s \in \mathbb{Z},{s}^{2} \leq N} \\ {s \equiv N\left( {\;\operatorname{mod}\;2}\right) } }}{\sigma }_{3}\left( \frac{N - {s}^{2}}{4}\right) , \] where \( \delta \left( \sqrt{N}\right) = 1 \) if \( N \) is a square and 0 otherwise. Remarks. (1) There also exist similar formulas for \( {H}_{3}\left( N\right) \) and \( {H}_{5}\left( N\right) \) involving modified \( {\sigma }_{2} \) functions; see Exercise 52. 
(2) Since the formulas coming from modular forms of half-integral weight include those coming from Hilbert modular forms, the reader may wonder why we have included the latter. The main reason is that they also give explicit formulas for computing the special values of Dedekind zeta functions at negative integers of all totally real number fields, not only quadratic ones, and this is in fact how Siegel's Theorem 10.5.3 on the rationality of such values is proved. (3) The reader will have noticed that we do not mention the function \( {H}_{1}\left( N\right) \) , which is essentially a class number, and the corresponding Fourier series \( {\mathcal{H}}_{1}\left( \tau \right) \) . The theory is here complicated by the fact that the latter is not quite a modular form of weight \( 3/2 \) (analogous to but more complicated than the situation for \( {E}_{2}\left( \tau \right) \) ). However, the theory can be worked out completely, and it gives beautiful formulas on class numbers, due to Hurwitz, Eichler, Zagier, and the author. We refer for instance to [Coh2] for details. ## 10.3.3 The Pólya-Vinogradov Inequality In the next subsection we will give some bounds for \( L\left( {\chi ,1}\right) \) . For this, it is useful, although not essential, to have some good estimates on \( \mathop{\sum }\limits_{{1 \leq n \leq X}}\chi \left( n\right) \) . Such an estimate is the following Pólya-Vinogradov inequality: Proposition 10.3.15 (Pólya-Vinogradov). Let \( \chi \) be a nontrivial character modulo \( m \) of conductor \( f > 1 \) . For all \( X \geq 0 \) we have the inequality \[ \left| {\mathop{\sum }\limits_{{1 \leq a \leq X}}\chi \left( a\right) }\right| \leq d\left( {m/f}\right) {f}^{1/2}\log \left( f\right) , \] where \( d\left( n\right) \) denotes the number of positive divisors of \( n \) . Proof. Assume first that \( \chi \) is a primitive character and set \( S\left( X\right) = \) \( \mathop{\sum }\limits_{{1 \leq a \leq X}}\bar{\chi }\left( a\right) \) . 
It is clear that \( S\left( X\right) = S\left( {\lfloor X\rfloor }\right) \), so we may assume that \( X = N \in {\mathbb{Z}}_{ \geq 0} \) . By Corollary 2.1.42 and the fact that \( \chi \left( x\right) = 0 \) when \( \gcd \left( {x, m}\right) > 1 \) we have \[ \tau \left( \chi \right) S\left( N\right) = \mathop{\sum }\limits_{{1 \leq a \leq N}}\tau \left( {\chi, a}\right) = \mathop{\sum }\limits_{{1 \leq a \leq N}}\mathop{\sum }\limits_{{x{\;\operatorname{mod}\;m}}}\chi \left( x\right) {e}^{{2i\pi ax}/m} \] \[ = \mathop{\sum }\limits_{{x{\;\operatorname{mod}\;m}}}\chi \left( x\right) \mathop{\sum }\limits_{{1 \leq a \leq N}}{e}^{{2i\pi ax}/m} \] \[ = \mathop{\sum }\limits_{{x{\;\operatorname{mod}\;m},\gcd \left( {x, m}\right) = 1}}\chi \left( x\right) \frac{{e}^{{2i\pi }\left( {N + 1}\right) x/m} - {e}^{{2i\pi x}/m}}{{e}^{{2i\pi x}/m} - 1}. \] Note that the denominator does not vanish since \( \gcd \left( {x, m}\right) = 1 \) and \( m > 1 \) . We bound this crudely as follows: \[ \left| {\tau \left( \chi \right) S\left( N\right) }\right| \leq \mathop{\sum }\limits_{{1 \leq x \leq m - 1, x \neq m/2}}\frac{1}{\sin \left( {{\pi x}/m}\right) } \] \[ \leq 2\mathop{\sum }\limits_{{1 \leq x \leq \left( {m - 1}\right) /2}}\frac{1}{\sin \left( {{\pi x}/m}\right) } \leq m\mathop{\sum }\limits_{{1 \leq x \leq \left( {m - 1}\right) /2}}\frac{1}{x}, \] using the high-school inequality \( \sin \left( t\right) \geq \left( {2/\pi }\right) t \) for \( t \in \left\lbrack {0,\pi /2}\right\rbrack \) . Now since \( 1/x \) is a convex function, we have the inequality \[ {\int }_{x - 1/2}^{x + 1/2}\frac{dt}{t} > \frac{1}{x} \] (see Exercise 43). Thus \[ \mathop{\sum }\limits_{{1 \leq x \leq \left( {m - 1}\right) /2}}\frac{1}{x} < {\int }_{1/2}^{m/2}\frac{dt}{t} = \log \left( m\right) . \] Since \( \left| {\tau \left( \chi \right) }\right| = {m}^{1/2} \) by Proposition 2.1.45, the result follows for primitive characters. 
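As a quick numerical illustration of the bound just proved (a sketch, not from the text), take the Legendre symbol modulo 19 as a concrete primitive character, so \( m = f = {19} \) and \( d\left( {m/f}\right) = 1 \) :

```python
from math import sqrt, log

def legendre(a, p):
    # Legendre symbol (a/p) by Euler's criterion: a^((p-1)/2) mod p
    r = pow(a, (p - 1) // 2, p)
    return r - p if r == p - 1 else r    # map p - 1 to -1

p = 19
partial, worst = 0, 0
for a in range(1, p + 1):
    partial += legendre(a, p)            # S(a) = sum of chi over 1..a
    worst = max(worst, abs(partial))

bound = sqrt(p) * log(p)                 # f^(1/2) log(f) with f = 19
print(worst, bound)                      # the partial sums stay well below the bound
```

For this character the largest partial sum is 3, far below \( \sqrt{19}\log {19} \approx {12.8} \) ; the Pólya-Vinogradov bound is known to be far from sharp on average.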
Now let \( \chi \) be any nontrivial character modulo \( m \), let \( f \) be the conductor of \( \chi \), and let \( {\chi }_{f} \) be the character modulo \( f \) equivalent to \( \chi \) . Since \( \gcd \left( {a, f}\right) = 1 \) and \( \gcd \left( {a, m/f}\right) = 1 \) implies \( \gcd \left( {a, m}\right) = 1 \), using the definition of the Möbius function we have \[ \mathop{\sum }\limits_{{1 \leq a \leq X}}
113_Topological Groups
Definition 2.16
Definition 2.16 (Bounded minimum). Let \( R \) be an \( m \) -ary relation. For all \( {x}_{0},\ldots ,{x}_{m - 1} \in \omega \), let \[ f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = \left\{ \begin{array}{l} \text{ the least }y < {x}_{m - 1}\text{ such that }\left\langle {{x}_{0},\ldots ,{x}_{m - 2}, y}\right\rangle \in R, \\ \text{ if there is such a }y, \\ 0\;\text{ otherwise. } \end{array}\right. \] \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \) is denoted by \( {\mu y} < {x}_{m - 1}R\left( {{x}_{0},\ldots ,{x}_{m - 2}, y}\right) \) . Proposition 2.17. Let \( A \) be closed under elementary recursive operations. If \( R \) is an m-ary A-relation, then the function \( f \) of 2.16 is a member of \( A \) . Proof. Note that (1) \( \overline{\mathrm{{sg}}}\mathop{\sum }\limits_{{y < i}}{\chi }_{R}\left( {{x}_{0},\ldots ,{x}_{m - 2}, y}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\left\langle {{x}_{0},\ldots ,{x}_{m - 2}, y}\right\rangle \notin R\text{ for all }y < i, \\ 0 & \text{ otherwise. } \end{array}\right. \) Let \( g\left( {{x}_{0},\ldots ,{x}_{m - 2}, i}\right) = \overline{\mathrm{{sg}}}\mathop{\sum }\limits_{{y < i}}{\chi }_{R}\left( {{x}_{0},\ldots ,{x}_{m - 2}, y}\right) \) for all \( {x}_{0},\ldots ,{x}_{m - 2} \) , \( i \in \omega \) . Thus \( g \in A \) . From (1) we see that \[ \sum \left\{ {g\left( {{x}_{0},\ldots ,{x}_{m - 2},{si}}\right) : i < {x}_{m - 1}}\right\} = \left\{ \begin{array}{l} f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \;\text{ if there is a }y < \\ {x}_{m - 1}\text{ such that }\left\langle {{x}_{0},\ldots ,{x}_{m - 2}, y}\right\rangle \in R, \\ {x}_{m - 1}\;\text{ otherwise. } \end{array}\right. \] Hence \[ f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = \overline{\operatorname{sg}}g\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \cdot \sum \left\{ {g\left( {{x}_{0},\ldots ,{x}_{m - 2},{si}}\right) : i < {x}_{m - 1}}\right\} , \] so \( f \in A \) . 
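In programming terms (a hypothetical sketch, not Monk's formalism), the bounded minimum operator looks like this; returning 0 on failure mirrors the "otherwise" clause of Definition 2.16:

```python
def bounded_mu(R, *args):
    # mu y < x_{m-1} . R(x_0, ..., x_{m-2}, y): the least y below the last
    # argument satisfying R, and 0 if there is no such y
    *xs, bound = args
    for y in range(bound):
        if R(*xs, y):
            return y
    return 0

# least y < 10 with y * y > x, for x = 5:
print(bounded_mu(lambda x, y: y * y > x, 5, 10))   # prints 3
```

Note that the scheme is total: the search is cut off at the bound, which is what keeps bounded minimization inside classes like the elementary functions.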
The rather technical proof of 2.17 may be compared with a proof of the intuitive version of the proposition, which goes: if \( R \) is an \( m \) -ary effective relation, then the function \( f \) of 2.16 is effective. In fact, to calculate \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \), we test successively whether \( \left\langle {{x}_{0},\ldots ,{x}_{m - 2},0}\right\rangle \in R \) , \( \left\langle {{x}_{0},\ldots ,{x}_{m - 2},1}\right\rangle \in R,\ldots ,\left\langle {{x}_{0},\ldots ,{x}_{m - 2},{x}_{m - 1}}\right\rangle \in R \) . If at some point we reach an \( i \) such that \( \left\langle {{x}_{0},\ldots ,{x}_{m - 2}, i}\right\rangle \in R \), we set \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = i \) and stop testing. If we complete our testing without finding such an \( i \), we set \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = 0 \) . Proposition 2.18 (Definition by cases). Let \( A \) be closed under elementary recursive operations. Suppose \( {g}_{0},\ldots ,{g}_{m - 1} \) are n-ary members of \( A \) , \( {R}_{0},\ldots ,{R}_{m - 1} \) are pairwise disjoint n-ary A-relations with \( \mathop{\bigcup }\limits_{{i < m}}{R}_{i} = {}^{n}\omega \) , and \( f \) is the n-ary function such that, for all \( {x}_{0},\ldots ,{x}_{n - 1} \in \omega \) , \[ f\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) = \left\{ \begin{array}{ll} {g}_{0}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) & \text{ if }\left\langle {{x}_{0},\ldots ,{x}_{n - 1}}\right\rangle \in {R}_{0}, \\ {g}_{1}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) & \text{ if }\left\langle {{x}_{0},\ldots ,{x}_{n - 1}}\right\rangle \in {R}_{1}, \\ \vdots & \\ {g}_{m - 1}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) & \text{ if }\left\langle {{x}_{0},\ldots ,{x}_{n - 1}}\right\rangle \in {R}_{m - 1}. \end{array}\right. \] Then \( f \in A \) . Proof. 
For any \( {x}_{0},\ldots ,{x}_{n - 1} \in \omega \) , \[ f\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) = {\chi }_{{R}_{0}}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) \cdot {g}_{0}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) + \cdots + {\chi }_{{R}_{m - 1}}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) \cdot {g}_{m - 1}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) . \] ## Definition 2.19 (i) For \( x, y \in \omega \), let \[ x \dot{-} y = \left\{ \begin{array}{ll} x - y & \text{ if }x \geq y, \\ 0 & \text{ if }x < y. \end{array}\right. \] (ii) \[ \min \left( {x, y}\right) = \left\{ \begin{array}{ll} x & \text{ if }x \leq y, \\ y & \text{ if }x > y. \end{array}\right. \] (iii) (by induction). For \( m > 2,\mathop{\min }\limits_{m}\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = \min \left( {\mathop{\min }\limits_{{m - 1}}\left( {{x}_{0},\ldots ,{x}_{m - 2}}\right) ,{x}_{m - 1}}\right) \), with \( {\min }_{2}\left( {x, y}\right) = \min \left( {x, y}\right) \) . (iv) \( \max \left( {x, y}\right) ,\mathop{\max }\limits_{m}\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \) similarly. (v) \( \mathrm{{rm}}\left( {x, y}\right) = \) remainder upon dividing \( x \) by \( y \), if \( y \neq 0;\mathrm{{rm}}\left( {x,0}\right) = 0 \) . (vi) \( \mid \; = \{ \left( {x, y}\right) : x \) divides \( y\} = \{ \left( {x, y}\right) \) : there is a \( z \) such that \( y = x \cdot z\} . \) (vii) \( \mathrm{{PM}} = \{ x : x \) is a positive prime \( \} \) . Proposition 2.20. All of the functions and relations of 2.19 are elementary. Proof. Obvious, as concerns \( \left( i\right) - \left( {iv}\right) \) . For \( \left( v\right) \) , \[ \operatorname{rm}\left( {x, y}\right) = \left\{ \begin{array}{ll} x - \left( {y \cdot \left\lbrack {x/y}\right\rbrack }\right) & \text{ if }y \neq 0, \\ 0 & \text{ if }y = 0. \end{array}\right. 
\] For \( \left( {vi}\right) \), note that \( x \mid y \) iff there is a \( z \) such that \( y = x \cdot z \) iff there is a \( z \leq y \) such that \( y = x \cdot z \) ; now see 2.14. Finally, \( p \in \mathrm{{PM}} \) iff for every \( x < p \), either not \( x \mid p \) or \( x = 1 \), and \( p \neq 0, p \neq 1 \) ; cf. 2.15. Definition 2.21. For every \( k \) let \( {\mathrm{p}}_{k} \) be the \( \left( {k + 1}\right) \) st prime; thus \( {\mathrm{p}}_{0} = 2 \) , \( {\mathrm{p}}_{1} = 3,{\mathrm{p}}_{2} = 5,\ldots \) Proposition 2.22 (Number-theoretic). For every \( k,{\mathrm{p}}_{k} \leq \exp \left( {2,{2}^{k}}\right) \) . Proof. By induction on \( k \) . Trivial for \( k = 0,1 \) . Induction step, \( k > 0 \) : \[ {\mathrm{p}}_{k + 1} \leq {\mathrm{p}}_{0} \cdot \ldots \cdot {\mathrm{p}}_{k} - 1\;\text{(Euclid)} \] \[ \leq \exp \left( {2,{2}^{0}}\right) \cdot \ldots \cdot \exp \left( {2,{2}^{k}}\right) - 1\;\text{(induction hypothesis)} \] \[ = {2}^{\sum \{ \exp \left( {2, i}\right) : i \leq k\} } - 1 \] \[ = \exp \left( {2,{2}^{k + 1} - 1}\right) - 1 \leq \exp \left( {2,{2}^{k + 1}}\right) \text{.} \] Proposition 2.23. \( \mathrm{p} \) is elementary. Proof. Let \( N = \{ \left( {x, y}\right) : x, y \in \mathrm{{PM}}, x < y \), and \( y \) is the next prime after \( x\} \) . Thus \( N = \{ \left( {x, y}\right) : x, y \in \mathrm{{PM}} \) and \( x < y \) and for all \( z < y \), either \( z \leq x \) or \( z \notin \mathrm{{PM}}\} \), so \( N \) is elementary. Let \( \Pr = \{ \left( {x, k}\right) : x \) is the \( \left( {k + 1}\right) \) st prime \( \} \) . Thus \( \left( {x, k}\right) \in \Pr \) iff \( x \in \mathrm{{PM}} \) and \( \mathop{\sum }\limits_{{y < x}}{\chi }_{\mathrm{{PM}}}y = k \), so \( \Pr \) is elementary. Finally, \( {\mathrm{p}}_{k} = {\mu x} < \exp \left( {2,{2}^{k}}\right) + 1\left( {\left( {x, k}\right) \in \Pr }\right) \), so \( \mathrm{p} \) is elementary. Definition 2.24. If \( a = 0 \) or \( a = 1 \), let \( {\left( a\right) }_{i} = 0 \) . 
If \( a > 1 \) let \( {\left( a\right) }_{i} \) be the exponent of \( {\mathrm{p}}_{i} \) in the prime decomposition of \( a \) . Sometimes we write \( \left( a\right) i \) instead of \( {\left( a\right) }_{i} \) . Proposition 2.25. ( ) is elementary. Proof. \( \;{\left( a\right) }_{i} = {\mu x} < a\left( {{\mathrm{p}}_{i}^{x} \mid a\text{ and not }{\mathrm{p}}_{i}^{x + 1} \mid a}\right) \) . Definition 2.26. \( \;\mathrm{l}a = \) greatest \( i \) such that \( {\mathrm{p}}_{i} \mid a\;\left( { = 0\text{ if }a = 0\text{ or }1}\right) \) . Proposition 2.27. \( \mathrm{l} \) is elementary. Proof. \( \;\mathrm{l}a = {\mu i} < a\left\lbrack {{\mathrm{p}}_{i} \mid a\text{ and }\forall j \leq a\left( {i < j \Rightarrow {\mathrm{p}}_{j} \nmid a}\right) }\right\rbrack \) . We now proceed to study a larger class of functions, the class of primitive recursive functions. Most of the effective functions encountered in the literature were actually shown to be primitive recursive. Actually most of them are even elementary, and usually this can easily be shown. We feel that it is only an historical accident that elementary functions are not more widely discussed than primitive recursive functions. Definition 2.28. The class of primitive recursive functions is the intersection of all classes \( A \) of functions such that \( s,{\mathbf{U}}_{i}^{n} \in A \) for all \( n > 0 \) and \( i < n \) , and such that \( A \) is closed under composition and under the following two operations: (i) The parameterized operation of primitive recursion: if \( f \) is \( m \) -ary and \( h \) is \( \left( {m + 2}\right) \) -ary, \( m > 0 \), then define \( g \) recursively as follows: \[ g\left( {{x}_{0},\ldots ,{x}_{m - 1},0}\right) = f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) , \] \[ g\left( {{x}_{0},\ldots ,{x}_{m - 1},{sy}}\right) = h\left( {{x}_{0},\ldots ,{x}_{m - 1}, y, g\left( {{x}_{0},\ldots ,{x}_{m - 1}, y}\right) }\right) , \] for all \( {x}_{0},\ldots ,{x}_{m - 1}, y \in \omega \) . 
Then \( g \) is obtained from \( f \) and \( h \) by primitive recursion, in symbols \( g = {\mathrm{R}}^{m}\left( {f, h}\right) \) . (ii) The no-parameter operation of primitive recursion: if \( a \in \omega \) and \( h \) is 2-ary, define \( g \) : \[ {g0} = a, \] \[ {gsy} = h\left( {y,{gy}}\right) , \] for all \( y \in \omega \) .
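The parameterized scheme (i) of Definition 2.28 is easy to animate as a higher-order function (a hypothetical Python rendering, not Monk's notation); `add` and `mul` below are the classic first examples of functions built by primitive recursion:

```python
def primrec(f, h):
    # g(x_0,...,x_{m-1}, 0)   = f(x_0,...,x_{m-1})
    # g(x_0,...,x_{m-1}, y+1) = h(x_0,...,x_{m-1}, y, g(x_0,...,x_{m-1}, y))
    def g(*args):
        *xs, y = args
        acc = f(*xs)
        for i in range(y):
            acc = h(*xs, i, acc)
        return acc
    return g

add = primrec(lambda x: x, lambda x, y, r: r + 1)      # x + 0 = x, x + sy = s(x + y)
mul = primrec(lambda x: 0, lambda x, y, r: add(x, r))  # x * 0 = 0, x * sy = (x * y) + x
```

The loop replaces the recursion on \( y \), which is harmless here since the recursion is structural on the last argument.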
18_Algebra Chapter 0
Definition 7.15
Definition 7.15. The Galois group \( {\operatorname{Gal}}_{k}\left( {f\left( x\right) }\right) \) of a separable polynomial \( f\left( x\right) \in \) \( k\left\lbrack x\right\rbrack \) is the Galois group of the splitting field of \( f\left( x\right) \) over \( k \) . Corollary 7.16. Let \( k \) be a field of characteristic 0, and let \( f\left( x\right) \in k\left\lbrack x\right\rbrack \) be an irreducible polynomial. Then \( f\left( x\right) \) is solvable by radicals if and only if its Galois group is solvable. Proof. This is an immediate consequence of Lemma 7.11 and Proposition 7.14. Corollary 7.16 is called Galois' criterion. Ruffini (1799) and Abel (1824) had previously established that general formulas in radicals for the solutions of equations of degree \( \geq 5 \) do not exist (that is, Theorem 7.8); but it was Galois who identified the precise condition given in Corollary 7.16. Of course, Theorem 7.8 follows immediately from Galois' criterion: Proof of Theorem 7.8. By Lemma 7.5, the Galois group of the general polynomial of degree \( n \) is \( {S}_{n} \) . The group \( {S}_{n} \) is not solvable for \( n \geq 5 \) (Corollary IV 4.21); hence the statement follows from Corollary 7.16. In fact, we could now do more: we know that \( {S}_{3} \) and \( {S}_{4} \) are solvable (cf. Exercise IV 3.16); from a composition series with cyclic quotients we could in principle decompose explicitly the splitting field of general polynomials of degree 3 and 4 as radical extensions and as a consequence recover the Tartaglia/Cardano/Ferrari formulas for their solutions. 7.5. Galois groups of polynomials. Impressive as Galois' criterion is, it does not in itself produce a single polynomial of degree (say) 5 over \( \mathbb{Q} \) that is not solvable by radicals. Finding such polynomials requires a certain amount of extra work; we will be able to do this by the end of this subsection. 
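The pivotal group-theoretic fact quoted above, that \( {S}_{n} \) is solvable for \( n \leq 4 \) but not for \( n \geq 5 \), can be verified by brute force for small \( n \) (a sketch of ours, not part of the text): compute derived subgroups until the series stabilizes.

```python
from itertools import permutations

def compose(p, q):
    # (pq)(i) = p(q(i)), permutations stored as tuples
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def derived(G):
    # subgroup generated by all commutators a b a^-1 b^-1
    H = {compose(compose(a, b), compose(inverse(a), inverse(b)))
         for a in G for b in G}
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

def solvable(G):
    G = frozenset(G)
    while len(G) > 1:
        H = derived(G)
        if H == G:          # derived series stabilized above the trivial group
            return False
        G = H
    return True

S4 = list(permutations(range(4)))
S5 = list(permutations(range(5)))
```

For \( {S}_{4} \) the series descends \( {S}_{4} \supset {A}_{4} \supset V \supset \{ e\} \), while for \( {S}_{5} \) it stops at \( {A}_{5} \), which is its own derived subgroup.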
Computing Galois groups of polynomials is in fact a popular sport among algebraists, excelling at which would require much more information than the reader will glean here. I will just list a few straightforward observations. To begin with, recall that an element of \( {\operatorname{Aut}}_{k}\left( F\right) \) must send roots of a polynomial \( f\left( x\right) \in k\left\lbrack x\right\rbrack \) to roots of the same polynomial, and if \( f\left( x\right) \) is irreducible and \( F \) is its splitting field, then there are automorphisms of \( F \) sending any root of \( f\left( x\right) \) to any other root (Proposition 1.5, Lemma 4.2). This observation may be rephrased as follows: Lemma 7.17. Let \( f\left( x\right) \in k\left\lbrack x\right\rbrack \) be a separable irreducible polynomial of degree \( n \) . Then \( {\operatorname{Gal}}_{k}\left( {f\left( x\right) }\right) \) acts transitively on the set of roots of \( f\left( x\right) \) in \( \bar{k} \) . In particular, \( {\operatorname{Gal}}_{k}\left( {f\left( x\right) }\right) \) may be identified with a transitive subgroup of the symmetric group \( {S}_{n} \) . Of course a subgroup of \( {S}_{n} \) is transitive if the corresponding action on \( \{ \mathbf{1},\ldots ,\mathbf{n}\} \) is transitive; the diligent reader has run across this terminology in Exercise IV 4.12. The reader will easily produce a statement analogous to Lemma 7.17 in case \( f\left( x\right) \) is reducible. Of course the action is not transitive in this case, and if the factors of \( f\left( x\right) \) have degrees \( {n}_{1},\ldots ,{n}_{r} \), then \( {\operatorname{Gal}}_{k}\left( {f\left( x\right) }\right) \) is contained in a subgroup of \( {S}_{n} \) isomorphic to \( {S}_{{n}_{1}} \times \cdots \times {S}_{{n}_{r}} \) . Lemma 7.17 (or its 'reducible' variations) already gives some information. 
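The transitivity constraint of Lemma 7.17 is quite restrictive in small degree. The following sketch (ours, not the author's) enumerates the subgroups of \( {S}_{4} \) and classifies those that act transitively; it relies on the fact, true for \( {S}_{4} \), that every subgroup is generated by at most two elements.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

IDENTITY = tuple(range(4))
S4 = list(permutations(range(4)))

def generated(a, b):
    # closure of {a, b} under composition (every subgroup of S4 is 2-generated)
    H = {IDENTITY, a, b}
    while True:
        new = {compose(x, y) for x in H for y in H} - H
        if not new:
            return frozenset(H)
        H |= new

subgroups = {generated(a, b) for a in S4 for b in S4}

def transitive(H):
    return {h[0] for h in H} == {0, 1, 2, 3}   # the orbit of 0 is everything

def order_of(p):
    q, n = p, 1
    while q != IDENTITY:
        q, n = compose(p, q), n + 1
    return n

def label(H):
    size = len(H)
    if size == 24: return "S4"
    if size == 12: return "A4"
    if size == 8:  return "D8"
    return "Z/4" if any(order_of(h) == 4 for h in H) else "Z/2 x Z/2"
```

Running this finds exactly five isomorphism types of transitive subgroups: \( {S}_{4},{A}_{4},{D}_{8},\mathbb{Z}/4\mathbb{Z} \), and \( \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \) .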
For example, we see that the Galois group of a separable irreducible cubic can only be \( {A}_{3} \) or \( {S}_{3} \), since these are the only transitive subgroups of \( {S}_{3} \) . Similarly, the range of possibilities for irreducible polynomials of degree 4 is rather restricted: \( {S}_{4},{A}_{4} \) , and isomorphic copies of the dihedral group \( {D}_{8} \), of \( \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \), and of \( \mathbb{Z}/4\mathbb{Z} \) (Exercise 7.8). One approach to the computation of Galois groups of polynomials amounts to defining invariants imposing further restrictions, leading to algorithms deciding which of such possibilities occurs for a given polynomial. We have already encountered the most important of these invariants: if the roots of a separable polynomial \( f\left( x\right) \in k\left\lbrack x\right\rbrack \) in its splitting field are \( {\alpha }_{1},\ldots ,{\alpha }_{n} \), the discriminant of \( f\left( x\right) \) is the element \( D = {\Delta }^{2} \), where \[ \Delta = \mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{\alpha }_{i} - {\alpha }_{j}}\right) \] Every permutation of the roots fixes \( D \), so \( D \) must be fixed by the whole Galois group; therefore, \( D \in k \) . Odd permutations move \( \Delta \) (if char \( k \neq 2 \) ), and even permutations fix it; therefore \( \Delta \) is fixed by the Galois group \( G \) (in other words, \( \Delta \in k) \) if and only if \( G \subseteq {A}_{n} \) . This proves Lemma 7.18. Let \( k \) be a field of characteristic \( \neq 2 \), and let \( f\left( x\right) \in k\left\lbrack x\right\rbrack \) be a separable polynomial, with discriminant \( D \) . Then the Galois group of \( f\left( x\right) \) is contained in the alternating group \( {A}_{n} \) if and only if \( D \) is a square in \( k \) . Example 7.19. 
Lemma 7.18 and a discriminant computation are all that is needed to compute the Galois group of an irreducible cubic polynomial \[ f\left( x\right) = {x}^{3} + a{x}^{2} + {bx} + c. \] It would be futile to try to remember the discriminant \[ D = {a}^{2}{b}^{2} - 4{a}^{3}c - 4{b}^{3} + {18abc} - {27}{c}^{2}; \] but one may remember the trick of shifting \( x \) by \( a/3 \) (in characteristic \( \neq 3 \) ), with the effect of killing the coefficient of \( {x}^{2} \) : \[ f\left( {x - \frac{a}{3}}\right) = {x}^{3} + {px} + q \] for suitable \( p \) and \( q \) . This does not change \( D \) (shifting all roots \( {\alpha }_{i} \) by the same amount has no effect on the differences \( {\alpha }_{i} - {\alpha }_{j} \) ), yet \[ D = - 4{p}^{3} - {27}{q}^{2} \] is a little more memorable. By Lemma 7.18 and the preceding considerations, the Galois group is \( {A}_{3} \) if \( D \) is a square and \( {S}_{3} \) otherwise. For example, \( {x}^{3} - 2 \) has Galois group \( {S}_{3} \) over \( \mathbb{Q} \), since \( D = - {108} \) is not a square in \( \mathbb{Q};{x}^{3} - {3x} + 1 \) has discriminant \( {81} = {9}^{2} \), and therefore it has Galois group \( {A}_{3} \cong \mathbb{Z}/3\mathbb{Z} \) . The reader will have no difficulty locating a discussion of the different possibilities for polynomials of degree 4 and detailed information for higher degree polynomials. I will just highlight the following simple observation. Example 7.20. Let \( f\left( x\right) \in \mathbb{Q}\left\lbrack x\right\rbrack \) be an irreducible polynomial of degree \( p \), where \( p \) is prime. Assume that \( f\left( x\right) \) has \( p - 2 \) real roots and 2 nonreal, complex roots. Then the Galois group of \( f\left( x\right) \) is \( {S}_{p} \) . Indeed, complex conjugation induces an automorphism of the splitting field and acts by interchanging the two nonreal roots, so the Galois group \( G \), as a subgroup of \( {S}_{p} \), contains a transposition. 
On the other hand, the degree of the splitting field (and hence \( \left| G\right| \) ) is divisible by \( p \), because it contains a simple extension of order \( p \) , obtained by adjoining any one root to \( \mathbb{Q} \) . Since \( p \) is prime, \( G \) contains an element of order \( p \) by Cauchy’s theorem (Theorem IV12.1); the only elements of order \( p \) in \( {S}_{p} \) are \( p \) -cycles, so \( G \) contains a \( p \) -cycle. It follows that \( G = {S}_{p} \), by (a simple variation of) Exercise IV 4.7. For example, the Galois group of \( f\left( x\right) = {x}^{5} - {5x} - 1 \) over \( \mathbb{Q} \) is \( {S}_{5} \), giving a concrete example of a quintic that cannot be solved by radicals (Exercise 7.10). The reader will use this technique to produce polynomials of every prime degree \( p \) in \( \mathbb{Z}\left\lbrack x\right\rbrack \) whose Galois group is \( {S}_{p} \) (Exercise 7.11). 7.6. Abelian groups as Galois groups over \( \mathbb{Q} \) . Having mentioned the inverse Galois problem in §7.3, I should point out that the reader is now in the position of proving that every finite abelian group may be realized as the Galois group of an extension over \( \mathbb{Q} \) . In fact, the reader can prove a much more precise result: every finite abelian group may be realized as the group of some intermediate field of the extension \( \mathbb{Q} \subseteq \mathbb{Q}\left( {\zeta }_{n}\right) \) of \( \mathbb{Q} \) in a cyclotomic field. This uses a number-theoretic fact: For every integer \( N \), there are infinitely many primes \( p \) such that \( p \equiv 1 \) \( {\;\operatorname{mod}\;N} \) . This is a particular case of Dirichlet’s theorem (1837), which states that if \( a, b \) are positive integers and \( \gcd \left( {a, b}\right) = 1 \), then there are infinitely many primes of the form \( a + {nb} \) with \( n > 0 \) . 
The particular case \( a = 1, b = N \) needed here was apparently already known to Euler and can be proven by elementary means (in fact, there is a proof using cyclotomic polynomials: see Exercise 5.18). Assuming this fact, argue as follows: -By the classification theorem (Theorem IV16.6), every finite abelian group \( G \) is isomorphic to a product of cyclic groups \[ \left( {\mathbb{Z}/{n}_{1}\mathbb{Z}}\right) \times \cdots \times \left( {\mathbb{Z}/{n}_{r}\mathbb{Z}}\right) \] -By Dirichlet’s
106_106_The Cantor function
Definition 3.3
Definition 3.3. Let \( L \) be a set of identical relations on \( T \) -algebras. The class \( V \) of all \( T \) -algebras which satisfy all the identical relations in \( L \) is called the variety of \( T \) -algebras defined by \( L \) . The laws of the variety are all the identical relations satisfied by every algebra of \( V \) . Note that the set of laws of the variety includes \( L \), but may be larger. ## Examples 3.4. \( T \) consists of a single binary operation \( * \), and \( L \) has the one element \( \left( {{x}_{1} * \left( {{x}_{2} * {x}_{3}}\right) ,\left( {{x}_{1} * {x}_{2}}\right) * {x}_{3}}\right) \) . If \( A \) satisfies this identical relation, then \( a * \left( {b * c}\right) = \) \( \left( {a * b}\right) * c \) for all \( a, b, c \in A \) . Thus the operation on \( A \) is associative and \( A \) is a semigroup. The variety defined by \( L \) in this case is the class of all semigroups. 3.5. \( T \) consists of 0 -ary,1 -ary and 2 -ary operations \( e, i, * \) respectively. \( L \) has the three elements \[ \left( {{x}_{1} * \left( {{x}_{2} * {x}_{3}}\right) ,\left( {{x}_{1} * {x}_{2}}\right) * {x}_{3}}\right) , \] \[ \left( {e * {x}_{1},{x}_{1}}\right) \] \[ \left( {i\left( {x}_{1}\right) * {x}_{1}, e}\right) \text{.} \] The first law ensures that \( * \) is an associative operation in every algebra of the variety defined by \( L \) . The second shows that the distinguished element \( e \) is always a left identity, while the third guarantees that \( i\left( a\right) \) is a left inverse of the element \( a \) . Hence the algebras of the variety are groups. ## Exercises 3.6. Show that the class of all abelian groups is a variety. 3.7. \( R \) is a ring with 1 . Show that the class of unital left \( R \) -modules is a variety. 3.8. \( S \) is a commutative ring with 1 . Show that the class of commutative rings \( R \) with \( {1}_{R} = {1}_{S} \) and which contain \( S \) as a subring is a variety. 3.9. 
Is the class of finite groups a variety? ## §4 Relatively Free Algebras Let \( V \) be the variety of \( T \) -algebras defined by the set \( L \) of laws. Definition 4.1. A \( T \) -algebra \( R \) in the variety \( V \) is the (relatively) free algebra of \( V \) on the set \( X \) of (relatively) free generators (where a function \( \sigma : X \rightarrow R \) is given, usually as an inclusion) if, for every algebra \( A \) in \( V \) and every function \( \tau : X \rightarrow A \), there exists a unique homomorphism \( \varphi : R \rightarrow A \) such that \( {\varphi \sigma } = \tau \) . This definition differs from the earlier definition of a free algebra only in that we consider here only algebras in \( V \) . Definition 4.2. An algebra is relatively free if it is a free algebra of some variety. Theorem 4.3. For any type \( T \), and any set \( L \) of laws, let \( V \) be the variety of \( T \) -algebras defined by \( L \) . For any set \( X \), there exists a free \( T \) -algebra of \( V \) on \( X \) . Proof: Let \( \left( {F,\rho }\right) \) be the free \( T \) -algebra on \( X \) . A congruence relation on \( F \) is defined by putting \( u \sim v \) (where \( u, v \in F \) ) if \( \varphi \left( u\right) = \varphi \left( v\right) \) for every homomorphism \( \varphi \) of \( F \) into an algebra in \( V \) . Clearly \( \sim \) is an equivalence relation on \( F \) . If now \( t \in {T}_{k} \) and \( {u}_{i} \sim {v}_{i}\left( {i = 1,\ldots, k}\right) \), then for every such homomorphism \( \varphi ,\varphi \left( {u}_{i}\right) = \varphi \left( {v}_{i}\right) \), and so \[ \varphi \left( {t\left( {{u}_{1},\ldots ,{u}_{k}}\right) }\right) = t\left( {\varphi \left( {u}_{1}\right) ,\ldots ,\varphi \left( {u}_{k}\right) }\right) = t\left( {\varphi \left( {v}_{1}\right) ,\ldots ,\varphi \left( {v}_{k}\right) }\right) = \varphi \left( {t\left( {{v}_{1},\ldots ,{v}_{k}}\right) }\right) , \] verifying that \( \sim \) is a congruence relation. 
We define \( R \) to be the set of congruence classes of elements of \( F \) with respect to this congruence relation. Denoting the congruence class containing \( u \) by \( \bar{u} \), we define the action of \( t \in {T}_{k} \) on \( R \) by putting \( t\left( {{\bar{u}}_{1},\ldots ,{\bar{u}}_{k}}\right) = \overline{t\left( {{u}_{1},\ldots ,{u}_{k}}\right) } \) . This definition is independent of the choice of representatives \( {u}_{1},\ldots ,{u}_{k} \) of the classes \( {\bar{u}}_{1},\ldots ,{\bar{u}}_{k} \), and makes \( R \) a \( T \) -algebra. Also, the map \( u \rightarrow \bar{u} \) is clearly a homomorphism \( \eta : F \rightarrow R \) . Finally, we define \( \sigma : X \rightarrow R \) by \( \sigma \left( x\right) = \overline{\rho \left( x\right) } \) . We now prove that \( \left( {R,\sigma }\right) \) is relatively free on \( X \) . Let \( A \) be any algebra in \( V \), and let \( \tau : X \rightarrow A \) be any function from \( X \) into \( A \) . Because \( \left( {F,\rho }\right) \) is free, there exists a unique homomorphism \( \psi : F \rightarrow A \) such that \( {\psi \rho } = \tau \) . ![aa35aa61-3413-461b-96f9-006ca0282e6b_18_0.jpg](images/aa35aa61-3413-461b-96f9-006ca0282e6b_18_0.jpg) For \( \bar{u} \in R \), we define \( \varphi \left( \bar{u}\right) = \psi \left( u\right) \) . This is independent of the choice of representative \( u \) of the element \( \bar{u} \), since if \( \bar{u} = \bar{v} \), then \( \psi \left( u\right) = \psi \left( v\right) \) . The map \( \varphi : R \rightarrow A \) is clearly a homomorphism, and \( {\varphi \sigma } = {\varphi \eta \rho } = {\psi \rho } = \tau \) . If \( {\varphi }^{\prime } : R \rightarrow A \) is another homomorphism such that \( {\varphi }^{\prime }\sigma = \tau \), then \( {\varphi }^{\prime }{\eta \rho } = \tau \) and therefore \( {\varphi }^{\prime }\eta = \psi \) . 
Consequently for each element \( \bar{u} \in R \) we have \[ {\varphi }^{\prime }\left( \bar{u}\right) = {\varphi }^{\prime }\eta \left( u\right) = \psi \left( u\right) = \varphi \left( \bar{u}\right) \] and hence \( {\varphi }^{\prime } = \varphi \) . When considering only the algebras of a given variety \( V \), we may redefine variables and words accordingly. Thus we define a \( V \) -variable as an element of the free generating set of a free algebra of \( V \), and a \( V \) -word in the \( V \) -variables \( {x}_{1},\ldots ,{x}_{n} \) as an element of the free algebra of \( V \) on the free generators \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) ## Examples 4.4. \( T \) consists of a single binary operation which we shall write as juxtaposition. Let \( V \) be the variety of associative \( T \) -algebras. Then all products in the free \( T \) -algebra obtained by any bracketing of \( {x}_{1},\ldots ,{x}_{n} \) , taken in that order, are congruent under the congruence relation used in our construction of the relatively free algebra, and correspond to the one word \( {x}_{1}{x}_{2}\cdots {x}_{n} \) of \( V \) . We observe that in this example, all elements of the absolutely free algebra \( F \), which map to a given element \( {x}_{1}{x}_{2}\cdots {x}_{n} \) of the relatively free algebra, come from the same layer \( {F}_{n - 1} \) of \( F \) . 4.5. \( T \) consists of a 0-ary, a 1-ary and a 2-ary operation. \( V \) is the variety of abelian groups, defined by the laws given in Example 3.5 together with the law \( \left( {{x}_{1}{x}_{2},{x}_{2}{x}_{1}}\right) \) . In this case, the relatively free algebra on \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) is the set of all \( {x}_{1}^{{r}_{1}}{x}_{2}^{{r}_{2}}\cdots {x}_{n}^{{r}_{n}} \) (or equivalently the set of all \( n \) -tuples \( \left( {{r}_{1},\ldots ,{r}_{n}}\right) \) ) with \( {r}_{i} \in \mathbb{Z} \) . 
Here the layer property of Example 4.4 does not hold, because, for example, we have the identity \( e \in {F}_{0},{x}_{1}^{-1} \in {F}_{1},{x}_{1}^{-1} * {x}_{1} \in {F}_{2} \) and yet \( \bar{e} = \overline{{x}_{1}^{-1} * {x}_{1}} \) . ## Exercises 4.6. \( K \) is a field. Show that vector spaces over \( K \) form a variety \( V \) of algebras, and that every vector space over \( K \) is a free algebra of \( V \) . 4.7. \( R \) is a commutative ring with 1 and \( V \) is the variety of commutative rings \( S \) which contain \( R \) as a subring and in which \( {1}_{R} \) is a multiplicative identity of \( S \) . Show that the free algebra of \( V \) on the set \( X \) of variables is the polynomial ring over \( R \) in the elements of \( X \) . ## Chapter II ## Propositional Calculus ## §1 Introduction Mathematical logic is the study of logic as a mathematical theory. Following the usual procedure of applied mathematics, we construct a mathematical model of the system to be studied, and then conduct what is essentially a pure mathematical investigation of the properties of our model. Since this book is intended for mathematicians, the system we propose to study is not general logic but the logic used in mathematics. By this restriction, we achieve considerable simplification, because we do not have to worry about precise meanings of words-in mathematics, words have precisely defined meanings. Furthermore, we are free of reasoning based on things such as emotive argument, which must be accounted for in any theory of general logic. Finally, the nature of the real world need not concern us, since the world we shall study is the purely conceptual one of pure mathematics. In any formal study of logic, the language and system of reasoning needed to carry out the investigation is called the meta-language or meta-logic. 
As we are constructing a mathematical model of logic, our meta-language is mathematics, and so all our existing knowledge of mathematics is available for possible application to our model. We shall make specific use of informal set theory (including cardinal numbers and Zorn's lemma) and of the universal algebra developed in Chapter I. For the purpose of our study, it suffices t
## 18_Algebra Chapter 0
Definition 6.6. An \( R \) -module \( M \) is Noetherian if every submodule of \( M \) is finitely generated as an \( R \) -module.

Thus, a ring \( R \) is Noetherian in the sense of Definition 4.2 if and only if it is Noetherian 'as a module over itself'. The ring in Example 6.5 is not Noetherian. We will study the Noetherian condition more carefully later on (§VI.1.1); but we can already see one reason why this is a good, 'solid' notion.

Proposition 6.7. Let \( M \) be an \( R \) -module, and let \( N \) be a submodule of \( M \) . Then \( M \) is Noetherian if and only if both \( N \) and \( M/N \) are Noetherian.

Proof. If \( M \) is Noetherian, then so is \( M/N \) (same proof as for Exercise 4.2), and so is \( N \) (because every submodule of \( N \) is a submodule of \( M \), so it is finitely generated because \( M \) is Noetherian). This proves the 'only if' part of the statement.

For the converse, assume \( N \) and \( M/N \) are Noetherian, and let \( P \) be a submodule of \( M \) ; we have to prove that \( P \) is finitely generated. Since \( P \cap N \) is a submodule of \( N \) and \( N \) is Noetherian, \( P \cap N \) is finitely generated. By the 'second isomorphism theorem', Proposition 5.18,

\[ \frac{P}{P \cap N} \cong \frac{P + N}{N} \]

and hence \( P/\left( {P \cap N}\right) \) is isomorphic to a submodule of \( M/N \) . Since \( M/N \) is Noetherian, this shows that \( P/\left( {P \cap N}\right) \) is finitely generated. It follows that \( P \) itself is finitely generated, by Exercise 6.18. □

Corollary 6.8. Let \( R \) be a Noetherian ring, and let \( M \) be a finitely generated \( R \) -module. Then \( M \) is Noetherian (as an \( R \) -module).

Proof. Indeed, by hypothesis there is an onto homomorphism \( {R}^{\oplus n} \rightarrow M \) of \( R \) -modules; hence (by the first isomorphism theorem, Corollary 5.16) \( M \) is isomorphic to a quotient of \( {R}^{\oplus n} \) .
By Proposition 6.7, it suffices to prove that \( {R}^{\oplus n} \) is Noetherian. This may be done by induction. The statement is true for \( n = 1 \) by hypothesis. For \( n > 1 \), assume we know that \( {R}^{\oplus \left( {n - 1}\right) } \) is Noetherian; since \( {R}^{\oplus \left( {n - 1}\right) } \) may be viewed as a submodule of \( {R}^{\oplus n} \), in such a way that

\[ \frac{{R}^{\oplus n}}{{R}^{\oplus \left( {n - 1}\right) }} \cong R \]

(Exercise 6.4), and \( R \) is Noetherian, it follows that \( {R}^{\oplus n} \) is Noetherian, again by applying Proposition 6.7. □

6.5. Finitely generated vs. finite type. If \( S \) is an \( R \) -algebra, it may be 'finitely generated' in two very different ways: as an \( R \) -module and as an \( R \) -algebra. It is important to keep these two concepts well distinct, although unfortunately the language used to express them is very similar. The following definitions differ in three small details...

" \( S \) is finitely generated as a module over \( R \) if there is an onto homomorphism of \( R \) -modules from the free \( R \) -module on a finite set to \( S \) ."

" \( S \) is finitely generated as an algebra over \( R \) if there is an onto homomorphism of \( R \) -algebras from the free \( R \) -algebra on a finite set to \( S \) ."

The mathematical difference is more substantial than it may appear. As we have seen in [6.3], the free \( R \) -module over a finite set \( A = \{ 1,\ldots, n\} \) is isomorphic to \( {R}^{\oplus n} \) ; the free commutative \( R \) -algebra over \( A \) is isomorphic to \( R\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) .
Thus, a commutative \( {}^{31} \) ring \( S \) is finitely generated as an \( R \) -module if there is an onto homomorphism of \( R \) -modules

\[ {R}^{\oplus n} \twoheadrightarrow S \]

for some \( n \) ; it is finitely generated as an \( R \) -algebra if there is an onto homomorphism of \( R \) -algebras

\[ R\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \rightarrow S \]

for some \( n \) . In other words, \( S \) is finitely generated as an \( R \) -module if and only if \( S \cong {R}^{\oplus n}/M \) for some \( n \) and a submodule \( M \) of \( {R}^{\oplus n} \) ; it is a finite-type \( R \) -algebra if and only if \( S \cong R\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack /I \) for some \( n \) and an ideal \( I \) of \( R\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . We say that \( S \) is finite in the first case \( {}^{32} \) and of finite type in the second. It is clear that 'finite' \( \Rightarrow \) 'finite type'; it should be just as clear that the converse does not hold.

Example 6.9. The polynomial ring \( R\left\lbrack x\right\rbrack \) is a finite-type \( R \) -algebra, but it is not finite as an \( R \) -module.

The distinction, while macroscopic in general, may evaporate in special, important cases. For example, one can prove that if \( k \) and \( K \) are fields and \( k \subseteq K \), then \( K \) is of finite type over \( k \) if and only if it is in fact finite as a \( k \) -module (that is, it is a finite-dimensional \( k \) -vector space). This is one version of Hilbert's Nullstellensatz, a deep result we already mentioned in Example 4.15 and that we will prove (in an important class of examples) in §VII.2.2.
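The contrast in Example 6.9 comes down to a degree bound: an \( R \)-linear combination of finitely many polynomials never exceeds the maximum degree of the generators, while the single algebra generator \( x \) reaches every degree. A minimal sketch in Python (coefficient-list polynomials; the helper names and sample generators are ad hoc choices, not notation from the text):

```python
# An R-module combination sum r_i * f_i, with the r_i constants, stays
# within the maximum degree of the generators f_i; so no finite set of
# polynomials generates R[x] as an R-module.

def degree(p):
    """Degree of a polynomial given as a list of coefficients (-1 for 0)."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def module_combination(scalars, gens):
    """Sum r_i * f_i with scalar (degree-0) coefficients r_i."""
    n = max(len(g) for g in gens)
    out = [0] * n
    for r, g in zip(scalars, gens):
        for i, c in enumerate(g):
            out[i] += r * c
    return out

gens = [[1], [0, 2, 1], [5, 0, 0, 7]]      # 1, 2x + x^2, 5 + 7x^3
d = max(degree(g) for g in gens)           # = 3
combo = module_combination([2, -1, 3], gens)
assert degree(combo) <= d                  # x^4 is out of reach

# As an R-algebra, the single generator x already produces every monomial:
def power(k):
    return [0] * k + [1]                   # x^k as a coefficient list

assert degree(power(d + 1)) == d + 1
```

The same bound applies to any finite generating set, which is the whole content of "R[x] is of finite type but not finite."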
David Hilbert's name is associated to another important result concerning finite-type \( R \) -algebras: if \( R \) is Noetherian (as a ring, that is, as an \( R \) -module) and \( S \) is a finite-type \( R \) -algebra, then \( S \) is also Noetherian (as a ring, that is, as an \( S \) -module). This is an immediate consequence of the so-called Hilbert's basis theorem. The proof of Hilbert's basis theorem is completely elementary: it could be given here as an exercise, with a few key hints; we will see it in §VI.1.1.

\( {}^{31} \) We are mostly interested in the commutative case, so I will make this hypothesis here; the only change in the general case is typographical: \( \langle \cdots \rangle \) rather than \( \left\lbrack \cdots \right\rbrack \) . Also, note that a commutative ring is finitely generated as an algebra if and only if it is finitely generated as a commutative algebra; cf. Exercise 6.15.

\( {}^{32} \) This is particularly unfortunate, since \( S \) may very well be an infinite set.

## Exercises

6.1. \( \vartriangleright \) Prove Claim 6.3. [6.3]

6.2. Prove or disprove that if \( R \) is a ring and \( M \) is a nonzero \( R \) -module, then \( M \) is not isomorphic to \( M \oplus M \) .

6.3. Let \( R \) be a ring, \( M \) an \( R \) -module, and \( p : M \rightarrow M \) an \( R \) -module homomorphism such that \( {p}^{2} = p \) . (Such a map is called a projection.) Prove that \( M \cong \ker p \oplus \operatorname{im}p \) .

6.4. \( \vartriangleright \) Let \( R \) be a ring, and let \( n > 1 \) . View \( {R}^{\oplus \left( {n - 1}\right) } \) as a submodule of \( {R}^{\oplus n} \), via the injective homomorphism \( {R}^{\oplus \left( {n - 1}\right) } \hookrightarrow {R}^{\oplus n} \) defined by

\[ \left( {{r}_{1},\ldots ,{r}_{n - 1}}\right) \mapsto \left( {{r}_{1},\ldots ,{r}_{n - 1},0}\right) . \]

Give a one-line proof that

\[ \frac{{R}^{\oplus n}}{{R}^{\oplus \left( {n - 1}\right) }} \cong R. \]

[§6.4]

6.5.
\( \vartriangleright \) (Notation as in §6.3.) For any ring \( R \) and any two sets \( {A}_{1},{A}_{2} \), prove that \( {\left( {R}^{\oplus {A}_{1}}\right) }^{\oplus {A}_{2}} \cong {R}^{\oplus \left( {{A}_{1} \times {A}_{2}}\right) } \) . [§VIII.2.2]

6.6. \( \neg \) Let \( R \) be a ring, and let \( F = {R}^{\oplus n} \) be a finitely generated free \( R \) -module. Prove that \( {\operatorname{Hom}}_{R\text{-Mod }}\left( {F, R}\right) \cong F \) . On the other hand, find an example of a ring \( R \) and a nonzero \( R \) -module \( M \) such that \( {\operatorname{Hom}}_{R\text{-Mod }}\left( {M, R}\right) = 0 \) . [6.8]

6.7. \( \vartriangleright \) Let \( A \) be any set.

- For any family \( {\left\{ {M}_{a}\right\} }_{a \in A} \) of modules over a ring \( R \), define the product \( \mathop{\prod }\limits_{{a \in A}}{M}_{a} \) and coproduct \( {\bigoplus }_{a \in A}{M}_{a} \) . If \( {M}_{a} \cong R \) for all \( a \in A \), these are denoted \( {R}^{A},{R}^{\oplus A} \), respectively.

- Prove that \( {\mathbb{Z}}^{\mathbb{N}} ≆ {\mathbb{Z}}^{\oplus \mathbb{N}} \) . (Hint: Cardinality.)

[6.1, 6.8]

6.8. Let \( R \) be a ring. If \( A \) is any set, prove that \( {\operatorname{Hom}}_{R - \operatorname{Mod}}\left( {{R}^{\oplus A}, R}\right) \) satisfies the universal property for the product of the family \( {\left\{ {R}_{a}\right\} }_{a \in A} \), where \( {R}_{a} \cong R \) for all \( a \) ; thus, \( {\operatorname{Hom}}_{R\text{-Mod }}\left( {{R}^{\oplus A}, R}\right) \cong {R}^{A} \) . Conclude that \( {\operatorname{Hom}}_{R\text{-Mod }}\left( {{R}^{\oplus A}, R}\right) \) is not isomorphic to \( {R}^{\oplus A} \) in general (cf. Exercises 6.6 and 6.7).

6.9. \( \neg \) Let \( R \) be a ring, \( F \) a nonzero free \( R \) -module, and let \( \varphi : M \rightarrow N \) be a homomorphism of \( R \) -modules.
Prove that \( \varphi \) is onto if and only if for all \( R \) -module homomorphisms \( \alpha : F \rightarrow N \) there exists an \( R \) -module homomorphism \( \beta : F \rightarrow M \) such that \( \alpha = \varphi \circ \beta \) . (Free modules are projective, as we will see in Chapter VIII.) [7.8, VI.5.5]

6.10. \( \vartriangleright \) (Cf. Exercise 1.12.) Let \( M, N \), and \( Z \) be \( R \) -modules, and let \( \mu : M \rightarrow Z \), \( \nu : N \rightarrow Z \) be homomorphisms of \( R \) -modules. Prove that \( R \) -Mod has 'fibered products': there exists an \( R \) -module \( M{ \times }_{Z}N \) with \( R \) -module homomorphisms \( {\pi }_{M} : M{ \times }_{Z}N \rightarrow M,{\pi }_{N} : M{ \times }_{Z}N \rightarrow N \), such that \( \mu \circ {\pi }_{M} = \nu \circ {\pi }_{N} \), and which is universal with respect to this requirement.
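The fibered product of Exercise 6.10 can be made concrete in a small case: it is the submodule \( \{ (m, n) : \mu(m) = \nu(n) \} \) of \( M \times N \), with the coordinate projections. A toy sketch in Python for abelian groups (\( \mathbb{Z} \)-modules); the specific choice \( M = \mathbb{Z}/4 \), \( N = \mathbb{Z}/6 \), \( Z = \mathbb{Z}/2 \) with both maps reduction mod 2 is purely illustrative:

```python
# Fibered product in Z-Mod: the compatible pairs form a subgroup of
# M x N on which mu o pi_M = nu o pi_N holds by construction.

M = range(4)            # Z/4
N = range(6)            # Z/6
mu = lambda m: m % 2    # M -> Z/2, reduction mod 2
nu = lambda n: n % 2    # N -> Z/2, reduction mod 2

P = [(m, n) for m in M for n in N if mu(m) == nu(n)]

# The two compositions agree on the fibered product:
assert all(mu(m) == nu(n) for (m, n) in P)

# P is closed under the componentwise group operation, so it is a
# subgroup of M x N:
assert all(((a + c) % 4, (b + d) % 6) in P for (a, b) in P for (c, d) in P)

assert len(P) == 12     # exactly half of the 24 pairs are compatible
```

Checking the universal property itself amounts to noting that any pair of maps \( T \rightarrow M \), \( T \rightarrow N \) agreeing after composition with \( \mu, \nu \) lands inside \( P \).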
## 1112_(GTM267)Quantum Theory for Mathematicians
Definition 17.24 A function \( \mathbf{c} : {\mathbb{R}}^{3} \times {\mathbb{R}}^{3} \rightarrow {\mathbb{R}}^{3} \) is said to transform like a vector if

\[ \mathbf{c}\left( {R\mathbf{x}, R\mathbf{p}}\right) = R\left( {\mathbf{c}\left( {\mathbf{x},\mathbf{p}}\right) }\right) \] (17.21)

for all \( R \in \mathrm{{SO}}\left( 3\right) \) .

In the physics literature, the expression "is a vector" is sometimes used in place of "transforms like a vector." Note that in Definition 17.24, we only consider the transformation property of \( \mathbf{c} \) under elements of \( \mathrm{{SO}}\left( 3\right) \) rather than under a general element of \( \mathrm{O}\left( 3\right) \) . If \( \mathbf{c} \) transforms like a vector, one says that \( \mathbf{c} \) is a "true vector" if \( \mathbf{c} \) satisfies (17.21) for all \( R \) in \( \mathrm{O}\left( 3\right) \) [not just in \( \mathrm{{SO}}\left( 3\right) \) ] and one says that \( \mathbf{c} \) is a "pseudovector" if \( \mathbf{c} \) satisfies \( \mathbf{c}\left( {R\mathbf{x}, R\mathbf{p}}\right) = - R\left( {\mathbf{c}\left( {\mathbf{x},\mathbf{p}}\right) }\right) \) for \( R \in \mathrm{O}\left( 3\right) \smallsetminus \mathrm{{SO}}\left( 3\right) \) . For our purposes, it is not necessary to distinguish between true vectors and pseudovectors.

The position function \( {\mathbf{c}}_{1}\left( {\mathbf{x},\mathbf{p}}\right) \mathrel{\text{:=}} \mathbf{x} \), the momentum function \( {\mathbf{c}}_{2}\left( {\mathbf{x},\mathbf{p}}\right) \mathrel{\text{:=}} \mathbf{p} \), and the angular momentum function \( {\mathbf{c}}_{3}\left( {\mathbf{x},\mathbf{p}}\right) \mathrel{\text{:=}} \mathbf{x} \times \mathbf{p} \) are simple examples of functions that transform like vectors. (Transformation under rotations is one of the standard properties of the cross product.)
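Equation (17.21) can be checked exactly for these examples. A small Python sketch, using an integer 90-degree rotation about the \( x_3 \)-axis so all arithmetic is exact; the sample points are arbitrary, and the polynomial variant \( (\mathbf{x} \cdot \mathbf{p})(\mathbf{x} \times \mathbf{p}) \) drops the \( |\mathbf{x}| \) factor of a typical textbook example to stay in the integers:

```python
# Verify c(Rx, Rp) = R c(x, p) for c = x cross p, and for the scalar-
# weighted variant (x . p)(x cross p), with an exact rotation matrix.

def matvec(R, v):
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def scale(s, v):
    return tuple(s * vi for vi in v)

R = ((0, -1, 0),
     (1,  0, 0),
     (0,  0, 1))          # rotation by pi/2 in the (x1, x2)-plane, in SO(3)

x, p = (1, 2, 3), (-4, 5, 7)
Rx, Rp = matvec(R, x), matvec(R, p)

assert cross(Rx, Rp) == matvec(R, cross(x, p))   # c3 transforms like a vector
assert dot(Rx, Rp) == dot(x, p)                  # x . p is rotation invariant

def c(x, p):              # (x . p)(x cross p) also transforms like a vector
    return scale(dot(x, p), cross(x, p))

assert c(Rx, Rp) == matvec(R, c(x, p))
```

The position and momentum cases are immediate, since \( \mathbf{c}(R\mathbf{x}, R\mathbf{p}) = R\mathbf{x} \) (resp. \( R\mathbf{p} \)) by definition.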
A typical example of a function transforming like a vector is \( \mathbf{c}\left( {\mathbf{x},\mathbf{p}}\right) = \left( {\mathbf{x} \cdot \mathbf{p}}\right) \left| \mathbf{x}\right| \left( {\mathbf{x} \times \mathbf{p}}\right) \) . Proposition 17.25 Let \( \mathbf{j}\left( {\mathbf{x},\mathbf{p}}\right) = \mathbf{x} \times \mathbf{p} \) denote the angular momentum function on \( {\mathbb{R}}^{3} \times {\mathbb{R}}^{3} \) . Suppose a smooth function \( \mathbf{c} : {\mathbb{R}}^{3} \times {\mathbb{R}}^{3} \rightarrow {\mathbb{R}}^{3} \) transforms like a vector. Then we have \[ \left\{ {{c}_{k},{j}_{k}}\right\} = 0 \] (17.22) for \( k = 1,2,3 \) . Furthermore, we have \[ \left\{ {{c}_{1},{j}_{2}}\right\} = \left\{ {{j}_{1},{c}_{2}}\right\} = {c}_{3} \] \( \left( {17.23}\right) \) and other relations obtained from (17.23) by cyclically permuting the indices. Proof. Let \( R\left( \theta \right) \) denote a counterclockwise rotation by angle \( \theta \) in the \( \left( {{x}_{1},{x}_{2}}\right) \) -plane. Applying (17.21) with \( R = R\left( \theta \right) \) and looking only at the first component of the vectors, we have \[ {c}_{1}\left( {R\left( \theta \right) \mathbf{x}, R\left( \theta \right) \mathbf{p}}\right) = {c}_{1}\left( {\mathbf{x},\mathbf{p}}\right) \cos \theta - {c}_{2}\left( {\mathbf{x},\mathbf{p}}\right) \sin \theta . \] (17.24) Now, as in the proof of Proposition 2.30, the Poisson bracket \( \left\{ {{c}_{1},{j}_{3}}\right\} \) is precisely the derivative of the left-hand side of (17.24) with respect to \( \theta \) , evaluated at \( \theta = 0 \) . Thus, \[ \left\{ {{c}_{1},{j}_{3}}\right\} = - {c}_{2} \] and so \( \left\{ {{j}_{3},{c}_{1}}\right\} = {c}_{2} \), which is one of the relations obtained from (17.23) by cyclically permuting the indices. 
Meanwhile, if we again apply (17.21) with \( R = R\left( \theta \right) \) but look now at the third component of the vectors, we have that \[ {c}_{3}\left( {R\left( \theta \right) \mathbf{x}, R\left( \theta \right) \mathbf{p}}\right) = {c}_{3}\left( {\mathbf{x},\mathbf{p}}\right) . \] Differentiating this relation with respect to \( \theta \) at \( \theta = 0 \) gives \( \left\{ {{c}_{3},{j}_{3}}\right\} = 0 \) . All other brackets are computed similarly. ∎ We now turn to the quantum counterpart of a function that transforms like a vector. Definition 17.26 For any ordered triple \( \mathbf{C} \mathrel{\text{:=}} \left( {{C}_{1},{C}_{2},{C}_{3}}\right) \) of operators on \( {L}^{2}\left( {\mathbb{R}}^{3}\right) \) and any vector \( \mathbf{v} \in {\mathbb{R}}^{3} \), let \( \mathbf{v} \cdot \mathbf{C} \) be the operator \[ \mathbf{v} \cdot \mathbf{C} = \mathop{\sum }\limits_{{j = 1}}^{3}{v}_{j}{C}_{j} \] \( \left( {17.25}\right) \) Then an ordered triple \( \mathbf{C} \) of operators on \( {L}^{2}\left( {\mathbb{R}}^{3}\right) \) is called a vector operator if \[ \left( {R\mathbf{v}}\right) \cdot \mathbf{C} = \Pi \left( R\right) \left( {\mathbf{v} \cdot \mathbf{C}}\right) \Pi {\left( R\right) }^{-1} \] (17.26) for all \( R \in \mathrm{{SO}}\left( 3\right) \) . Here \( \Pi \left( \cdot \right) \) is the natural unitary action of \( \mathrm{{SO}}\left( 3\right) \) on \( {L}^{2}\left( {\mathbb{R}}^{3}\right) \) in Definition 17.1. Let us try to understand what this definition is saying in the case of, say, the angular momentum, which is (as we shall see) a vector operator. The operators \( {\widehat{J}}_{1},{\widehat{J}}_{2} \), and \( {\widehat{J}}_{3} \) represent the components of \( \widehat{\mathbf{J}} \) in the directions of \( {\mathbf{e}}_{1},{\mathbf{e}}_{2} \), and \( {\mathbf{e}}_{3} \), respectively. 
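The relations (17.22) and (17.23) of Proposition 17.25 can be verified exactly for \( \mathbf{c} = \mathbf{j} \) with a small hand-rolled polynomial algebra (a sketch; the dict-based polynomial representation and helper names are ad hoc, not notation from the text):

```python
# Polynomials on R^3 x R^3 as dicts: exponent 6-tuple
# (x1, x2, x3, p1, p2, p3) -> integer coefficient.  The canonical
# Poisson bracket is {f, g} = sum_i df/dx_i dg/dp_i - df/dp_i dg/dx_i.

def add(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) + c
    return {m: c for m, c in h.items() if c}

def mul(f, g):
    h = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            m = tuple(a + b for a, b in zip(m1, m2))
            h[m] = h.get(m, 0) + c1 * c2
    return {m: c for m, c in h.items() if c}

def diff(f, i):
    h = {}
    for m, c in f.items():
        if m[i]:
            m2 = m[:i] + (m[i] - 1,) + m[i + 1:]
            h[m2] = h.get(m2, 0) + c * m[i]
    return h

def poisson(f, g):
    out = {}
    for i in range(3):
        out = add(out, mul(diff(f, i), diff(g, i + 3)))
        out = add(out, {m: -c for m, c in mul(diff(f, i + 3), diff(g, i)).items()})
    return out

def var(i):                      # i-th coordinate function, i in 0..5
    e = [0] * 6; e[i] = 1
    return {tuple(e): 1}

x = [var(i) for i in range(3)]
p = [var(i + 3) for i in range(3)]
neg = lambda f: {m: -c for m, c in f.items()}
j = [add(mul(x[1], p[2]), neg(mul(x[2], p[1]))),   # j1 = x2 p3 - x3 p2
     add(mul(x[2], p[0]), neg(mul(x[0], p[2]))),   # j2 = x3 p1 - x1 p3
     add(mul(x[0], p[1]), neg(mul(x[1], p[0])))]   # j3 = x1 p2 - x2 p1

c = j                            # c = x cross p transforms like a vector
assert all(poisson(c[k], j[k]) == {} for k in range(3))             # (17.22)
assert poisson(c[0], j[1]) == c[2] and poisson(j[0], c[1]) == c[2]  # (17.23)
```

Since everything is exact integer arithmetic, the dictionary comparisons are literal equalities of polynomials.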
More generally, we can consider the component of \( \widehat{\mathbf{J}} \) in the direction of any unit vector \( \mathbf{v} \), which will be nothing but \( \mathbf{v} \cdot \widehat{\mathbf{J}} \), as defined in (17.25). Since there is no preferred direction in space, we expect that for any two unit vectors \( {\mathbf{v}}_{1} \) and \( {\mathbf{v}}_{2} \), the operators \( {\mathbf{v}}_{1} \cdot \widehat{\mathbf{J}} \) and \( {\mathbf{v}}_{2} \cdot \widehat{\mathbf{J}} \) should be "the same operator, up to rotation." Specifically, if \( R \) is some rotation with \( R{\mathbf{v}}_{1} = {\mathbf{v}}_{2} \), then \( {\mathbf{v}}_{1} \cdot \widehat{\mathbf{J}} \) and \( {\mathbf{v}}_{2} \cdot \widehat{\mathbf{J}} \) should differ only by the action of \( R \) on the Hilbert space \( {L}^{2}\left( {\mathbb{R}}^{3}\right) \) . But this is precisely what (17.26) says, with \( \mathbf{v} = {\mathbf{v}}_{1} \) and \( \mathbf{C} = \widehat{\mathbf{J}} \) : \[ {\mathbf{v}}_{2} \cdot \widehat{\mathbf{J}} = \Pi \left( R\right) \left( {{\mathbf{v}}_{1} \cdot \widehat{\mathbf{J}}}\right) \Pi {\left( R\right) }^{-1} \] We will not concern ourselves with the question of whether (17.26) continues to hold for \( R \in \mathrm{O}\left( 3\right) \smallsetminus \mathrm{{SO}}\left( 3\right) \) . The position and momentum operators \( \mathbf{X} \) and \( \mathbf{P} \) are easily seen to be vector operators. As in the classical case, the cross product of two vector operators is again a vector operator. (See Exercise 7 in Chap. 18.) In particular, the angular momentum, \( \widehat{\mathbf{J}} = \mathbf{X} \times \mathbf{P} \) is a vector operator. If the operators \( {C}_{1},{C}_{2} \), and \( {C}_{3} \) are unbounded, we should say something in Definition 17.26 about the domains of the operators in question. 
The simplest approach is to find some dense subspace \( V \) of \( {L}^{2}\left( {\mathbb{R}}^{3}\right) \) that is contained in the domain of each \( {C}_{j} \) and such that \( V \) is invariant under rotations. In that case, the equality in (17.26) is understood to hold when applied to a vector in \( V \) . In many cases, we can take \( V \) to be the Schwartz space \( \mathcal{S}\left( {\mathbb{R}}^{3}\right) \) . In the following proposition, the space \( V \) should satisfy certain technical domain conditions that permit differentiation of (17.29) when applied to a vector \( \psi \) in \( V \) . We will not pursue the details of such conditions here.

Proposition 17.27 If \( \mathbf{C} \) is a vector operator, then the components of \( \mathbf{C} \) satisfy

\[ \frac{1}{i\hslash }\left\lbrack {{C}_{j},{\widehat{J}}_{j}}\right\rbrack = 0 \] (17.27)

for \( j = 1,2,3 \) . Furthermore, we have

\[ \frac{1}{i\hslash }\left\lbrack {{C}_{1},{\widehat{J}}_{2}}\right\rbrack = \frac{1}{i\hslash }\left\lbrack {{\widehat{J}}_{1},{C}_{2}}\right\rbrack = {C}_{3}, \] (17.28)

and other relations obtained from (17.28) by cyclically permuting the indices.

Proof. As in the proof of Proposition 17.25, let \( R\left( \theta \right) \) denote a rotation in the \( \left( {{x}_{1},{x}_{2}}\right) \) -plane, and let \( {\mathbf{e}}_{1} = \left( {1,0,0}\right) \) . Applying (17.26) with \( R = R\left( \theta \right) \) and \( \mathbf{v} = {\mathbf{e}}_{1} \), we have

\[ \Pi \left( {R\left( \theta \right) }\right) {C}_{1}\Pi {\left( R\left( \theta \right) \right) }^{-1} = {C}_{1}\cos \theta + {C}_{2}\sin \theta . \] (17.29)

But \( R\left( \theta \right) = {e}^{\theta {F}_{3}} \), where \( \left\{ {F}_{j}\right\} \) is the basis for so(3) described in Sect. 16.5.
Thus, differentiating (17.29) with respect to \( \theta \) at \( \theta = 0 \) gives

\[ \pi \left( {F}_{3}\right) {C}_{1} - {C}_{1}\pi \left( {F}_{3}\right) = {C}_{2}. \]

Since \( {\widehat{J}}_{3} = i\hslash \pi \left( {F}_{3}\right) \) (Proposition 17.3), we obtain \( \left( {1/\left( {i\hslash }\right) }\right) \left\lbrack {{\widehat{J}}_{3},{C}_{1}}\right\rbrack = {C}_{2} \), which is one of the relations obtained from (17.28) by cyclically permuting the variables. Meanwhile, applying (17.26) with \( R = R\left( \theta \right) \) and \( \mathbf{v} = {\mathbf{e}}_{3} \) gives

\[ \Pi \left( {R\left( \theta \right) }\right) {C}_{3}\Pi {\left( R\left( \theta \right) \right) }^{-1} = {C}_{3}. \]

Differentiating this relation with respect to \( \theta \) at \( \theta = 0 \) gives \( \left\lbrack {\pi \left( {F}_{3}\right) ,{C}_{3}}\right\rbrack = 0 \) . All other brackets are computed similarly. ∎
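Both the defining relation (17.26) and the commutators (17.27) and (17.28) can be checked exactly in a finite-dimensional stand-in: in place of \( \Pi(R) \) acting on \( {L}^{2}(\mathbb{R}^3) \), take the vector representation of \( \mathrm{SO}(3) \) on \( \mathbb{C}^3 \), with \( \mathrm{so}(3) \) basis \( (F_k)_{ij} = -\epsilon_{kij} \) and \( J_k = i\hslash F_k \) (here \( \hslash = 1 \)). This is an illustrative model, not the operators on \( {L}^{2} \):

```python
# Finite-dimensional analogue: C = J is a vector operator, with
# Pi(R) = R itself acting on C^3.  All arithmetic is exact.

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def lincomb(coeffs, mats):                 # v . C = sum_k v_k C_k
    return tuple(tuple(sum(c * M[i][j] for c, M in zip(coeffs, mats))
                       for j in range(3)) for i in range(3))

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return tuple(tuple(AB[i][j] - BA[i][j] for j in range(3)) for i in range(3))

scale = lambda s, A: tuple(tuple(s * e for e in row) for row in A)

F = (((0, 0, 0), (0, 0, -1), (0, 1, 0)),   # F1
     ((0, 0, 1), (0, 0, 0), (-1, 0, 0)),   # F2
     ((0, -1, 0), (1, 0, 0), (0, 0, 0)))   # F3
J = tuple(scale(1j, Fk) for Fk in F)       # J_k = i*hbar*F_k, hbar = 1
C = J                                      # J itself is a vector operator

# (17.27) and (17.28), with 1/(i*hbar) = -1j:
assert scale(-1j, comm(C[0], J[0])) == scale(0, J[0])
assert scale(-1j, comm(C[0], J[1])) == C[2]
assert scale(-1j, comm(J[0], C[1])) == C[2]

# (17.26): (Rv).C = Pi(R)(v.C)Pi(R)^{-1}, with R the exact 90-degree
# rotation about the x3-axis and Pi(R) = R:
R  = ((0, -1, 0), (1, 0, 0), (0, 0, 1))
Rt = ((0, 1, 0), (-1, 0, 0), (0, 0, 1))    # R^{-1} = R^T
for v in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
    Rv = tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))
    assert lincomb(Rv, C) == matmul(R, matmul(lincomb(v, C), Rt))
```

The checks use only \( [F_1, F_2] = F_3 \) and its cyclic permutations, which is exactly the structure exploited in the proof above.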
## 1068_(GTM227)Combinatorial Commutative Algebra
Definition 11.11 A submodule \( M \subseteq N \) is essential if every submodule of \( N \) intersects \( M \) nontrivially: \( 0 \neq {N}^{\prime } \subseteq N \Rightarrow {N}^{\prime } \cap M \neq 0 \) . We call \( N \) an essential extension of \( M \) . The extension is proper if \( N \neq M \) . Our principal example is the inclusion of a face into its injective hull. Lemma 11.12 The inclusion \( \mathbb{k}\{ F\} \subset \mathbb{k}\{ F - Q\} \) is an essential extension. Proof. Each element \( \mathbf{u} \in F - Q \) can be expressed as \( \mathbf{u} = \mathbf{f} - \mathbf{a} \) for some \( \mathbf{a} \in Q \) and \( \mathbf{f} \in F \) . The equation \( \mathbf{a} + \mathbf{u} = \mathbf{f} \in F \) translates into \( {\mathbf{t}}^{\mathbf{a}}{\mathbf{t}}^{\mathbf{u}} = \) \( {\mathbf{t}}^{\mathbf{f}} \in \mathbb{k}\{ F\} \) . If \( {N}^{\prime } \) is a nonzero submodule of \( \mathbb{k}\{ F - Q\} \), then \( {N}^{\prime } \) contains a nonzero \( \mathbb{k} \) -linear combination of monomials \( {\mathbf{t}}^{\mathbf{u}} \) . Multiplying this element by a suitable monomial \( {\mathbf{t}}^{\mathbf{a}} \) as above yields a nonzero element of \( {N}^{\prime } \cap \mathbb{k}\{ F\} \) . \( ▱ \) The most common argument using an essential extension \( M \subseteq N \) says: If a homomorphism \( N \rightarrow {N}^{\prime } \) induces an inclusion \( M \rightarrow {N}^{\prime } \), then \( N \rightarrow {N}^{\prime } \) is also an inclusion. The proof of the "if" part of our next result uses this argument. For notation, the \( Q \) -graded part of a module \( M \) is the submodule \( {M}_{Q} = {\bigoplus }_{\mathbf{a} \in Q}{M}_{\mathbf{a}} \) obtained by ignoring all \( {\mathbb{Z}}^{d} \) -graded degrees outside of \( Q \) . Theorem 11.13 A monomial ideal \( W \) is irreducible if and only if the \( Q \) - graded part of some indecomposable injective module \( E \) satisfies \( {E}_{Q} = \bar{W} \) . Proof. First we prove the "if" direction. 
The multiplication rule in Definition 11.7 implies that \( \mathbb{k}\{ \mathbf{a} + Q - F{\} }_{Q} \) is isomorphic to \( \bar{W} \) for some ideal \( W \) . Supposing that \( W \neq \mathbb{k}\left\lbrack Q\right\rbrack \), we may as well assume \( \mathbf{a} \in Q \) by Proposition 11.9 (add an element way inside \( F \) ), so that \( {\mathbf{t}}^{\mathbf{a}} \in \bar{W} \) generates an essential submodule \( \mathbb{k}\{ \mathbf{a} + F\} \) . Suppose \( W = {I}_{1} \cap {I}_{2} \) . The copy of \( \mathbb{k}\{ \mathbf{a} + F\} \) inside \( \bar{W} \) must include into \( \mathbb{k}\left\lbrack Q\right\rbrack /{I}_{j} \) for \( j = 1 \) or 2 ; indeed, if both induced maps \( \mathbb{k}\{ \mathbf{a} + F\} \rightarrow \mathbb{k}\left\lbrack Q\right\rbrack /{I}_{j} \) have nonzero kernels, then these kernels intersect in a nonzero submodule of \( \mathbb{k}\{ \mathbf{a} + F\} \) because \( \mathbb{k}\left\lbrack F\right\rbrack \) is a domain. Essentiality of \( \mathbb{k}\{ \mathbf{a} + F\} \subseteq \bar{W} \) forces \( \bar{W} \rightarrow \mathbb{k}\left\lbrack Q\right\rbrack /{I}_{j} \) to be an inclusion for some \( j \), so \( W \) contains - and hence equals - this ideal \( {I}_{j} \) . Thus \( W \) is irreducible.

Now we prove the "only if" direction. Since \( W \) is irreducible, its radical is the unique prime ideal \( {P}_{F} = \mathbb{k}\{ Q \smallsetminus F\} \) associated to \( \bar{W} \) . Let \( N \) be the span \( \mathbb{k}\left\{ {{\mathbf{t}}^{\mathbf{u}} \in \mathbb{k}\left\lbrack Q\right\rbrack \mid \left( {W : {\mathbf{t}}^{\mathbf{u}}}\right) = {P}_{F}}\right\} \) of all monomials in \( \bar{W} \) with annihilator equal to \( {P}_{F} \), which is a \( \mathbb{k}\left\lbrack Q\right\rbrack \) -submodule of \( \bar{W} \) . Define \( U \) to be the exponent vectors on a finite set of monomials generating \( N \) .
Given \( \mathbf{u} \in U \), we have \( {\mathbf{t}}^{\mathbf{u} + \mathbf{f}} \notin W \) for \( \mathbf{f} \in F \) . Consequently, all monomials with exponents in \( Q \cap \left( {\mathbf{u} + F - Q}\right) \) lie outside \( W \), because \( W \) is an ideal. Thus the ideal \( {W}^{\mathbf{u}} \) defined by \( {\bar{W}}^{\mathbf{u}} = \mathbb{k}\{ \mathbf{u} + F - Q{\} }_{Q} \) contains \( W \) . But every monomial in \( \mathbb{k}\left\lbrack Q\right\rbrack \smallsetminus W \) has a monomial multiple whose annihilator equals \( {P}_{F} \), whence \( W = \mathop{\bigcap }\limits_{{\mathbf{u} \in U}}{W}^{\mathbf{u}} \) . Irreducibility of \( W \) implies that \( W = {W}^{\mathbf{u}} \) for some \( \mathbf{u} \) . Theorem 11.13 says approximately that the standard monomials for an irreducible monomial ideal lie in the intersection of a cone and a translate of its negative, justifying the heuristic illustration in Fig. 11.1. ## 11.3 Monomial matrices revisited Earlier in this book, we used monomial matrices as a convenient notational device to write down complexes of free modules over \( {\mathbb{Z}}^{n} \) -graded polynomial rings. Now we extend this construction to injective \( \mathbb{k}\left\lbrack Q\right\rbrack \) -modules. When we defined monomial matrices in Section 1.4, we tacitly assumed a full understanding of the \( {\mathbb{N}}^{n} \) -graded homomorphisms \( S\left( {-\mathbf{b}}\right) \rightarrow S\left( {-\mathbf{c}}\right) \) between a pair of copies of \( S = \mathbb{k}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . Of course, such a homomorphism is completely determined by the image of the generator \( {1}_{\mathbf{b}} \) of \( S\left( {-\mathbf{b}}\right) \) : either the map is zero or it takes \( {1}_{\mathbf{b}} \) to a nonzero scalar multiple of the monomial \( {\mathbf{x}}^{\mathbf{b} - \mathbf{c}} \cdot {1}_{\mathbf{c}} \), which sits in degree \( \mathbf{b} \) of \( S\left( {-\mathbf{c}}\right) \) . 
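The cone-intersection picture of Theorem 11.13 can be checked in a toy case. A Python sketch with \( Q = \mathbb{N}^2 \), the face \( F = \{\mathbf{0}\} \), and the irreducible monomial ideal \( W = \langle x^2, y^3 \rangle \); the bounding box and the socle exponent \( \mathbf{u} = (1,2) \) are specific to this example:

```python
# Standard monomials of the irreducible ideal <x^2, y^3> in k[x, y]
# coincide with Q intersected with the translated negative cone u - Q.

BOX = 8
Q = [(a, b) for a in range(BOX) for b in range(BOX)]   # truncated N^2
gens = [(2, 0), (0, 3)]                                # x^2 and y^3

def in_ideal(u):
    """A monomial lies in the ideal iff some generator divides it."""
    return any(u[0] >= g[0] and u[1] >= g[1] for g in gens)

standard = {u for u in Q if not in_ideal(u)}           # monomials outside W

u0 = (1, 2)                                            # socle exponent
translate = {u for u in Q if u[0] <= u0[0] and u[1] <= u0[1]}  # Q cap (u0 - Q)

assert standard == translate
assert len(standard) == 6          # the 2 x 3 box of standard monomials
```

For a non-irreducible ideal such as \( \langle x^2, xy \rangle = \langle x \rangle \cap \langle x^2, y \rangle \), no single translate works, which matches the "only if" direction of the theorem.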
To justify using monomial matrices here, we need to get a handle on homomorphisms between (indecomposable) injectives. For this purpose, let us review the notion of homogeneous homomorphism in more detail. In what follows, \( {\mathbb{Z}}^{d} \) -graded \( \mathbb{k} \) -algebras \( R \) always have \( \mathbb{k} \) contained in the degree zero piece \( {R}_{0} \) . The principal examples to think of are \( R = \mathbb{k}\left\lbrack Q\right\rbrack \) and \( R = \mathbb{k} \) . Definition 11.14 Let \( R \) be a \( {\mathbb{Z}}^{d} \) -graded \( \mathbb{k} \) -algebra. A map \( \phi : M \rightarrow N \) of graded \( R \) -modules is homogeneous of degree \( \mathbf{b} \in {\mathbb{Z}}^{d} \) (or just homogeneous when \( \mathbf{b} = \mathbf{0} \) ) if \( \phi \left( {M}_{\mathbf{a}}\right) \subseteq {N}_{\mathbf{a} + \mathbf{b}} \) . For fixed \( \mathbf{b} \in {\mathbb{Z}}^{d} \), the set of such maps is a \( \mathbb{k} \) -vector space denoted by \[ {\underline{\operatorname{Hom}}}_{R}{\left( M, N\right) }_{\mathbf{b}} = \text{ degree }\mathbf{b}\text{ homogeneous maps }M \rightarrow N \] \[ = \text{homogeneous maps}M \rightarrow N\left( \mathbf{b}\right) \] \[ = \text{homogeneous maps}M\left( {-\mathbf{b}}\right) \rightarrow N\text{.} \] As the notation suggests, if \( R \) is either \( \mathbb{k} \) or \( \mathbb{k}\left\lbrack Q\right\rbrack \), and \( M \) is a \( \mathbb{k}\left\lbrack Q\right\rbrack \) -module, \[ {\underline{\operatorname{Hom}}}_{R}\left( {M, N}\right) = {\bigoplus }_{\mathbf{b} \in {\mathbb{Z}}^{d}}{\underline{\operatorname{Hom}}}_{R}{\left( M, N\right) }_{\mathbf{b}} \] is a \( {\mathbb{Z}}^{d} \) -graded \( \mathbb{k}\left\lbrack Q\right\rbrack \) -module, with \( {\mathbf{x}}^{\mathbf{a}}\phi \) defined by \( \left( {{\mathbf{x}}^{\mathbf{a}}\phi }\right) \left( m\right) = \phi \left( {{\mathbf{x}}^{\mathbf{a}}m}\right) \) . 
When \( R = \mathbb{k}\left\lbrack Q\right\rbrack \), we write \( \underline{\operatorname{Hom}}\left( {M, N}\right) = {\underline{\operatorname{Hom}}}_{k\left\lbrack Q\right\rbrack }\left( {M, N}\right) \) if no confusion can result. The graded module \( \underline{\operatorname{Hom}}\left( {M, N}\right) \) is isomorphic to the \( \mathbb{Z} \) -graded and ungraded versions whenever \( M \) is finitely generated (all versions can be calculated using the same graded free presentation of \( M \) ). The obvious combinatorial relation between the localization \( \mathbb{k}\left\lbrack {Q - F}\right\rbrack \) and the injective hull \( \mathbb{k}\{ F - Q\} \) underlies a deeper algebraic duality. To pinpoint it, we "turn modules upside down" algebraically. Definition 11.15 The Matlis dual of a graded \( \mathbb{k}\left\lbrack Q\right\rbrack \) -module \( M \) is the \( \mathbb{k}\left\lbrack Q\right\rbrack \) -module \( {M}^{ \vee } = {\underline{\operatorname{Hom}}}_{\mathbb{k}}\left( {M,\mathbb{k}}\right) \) . In other words, \( {M}^{ \vee } \) is defined by \[ {\left( {M}^{ \vee }\right) }_{-\mathbf{u}} = {\operatorname{Hom}}_{\mathbb{k}}\left( {{M}_{\mathbf{u}},\mathbb{k}}\right) \] the multiplication \( {\left( {M}^{ \vee }\right) }_{-\mathbf{u}}\overset{{\mathbf{t}}^{\mathbf{a}}}{ \rightarrow }{\left( {M}^{ \vee }\right) }_{\mathbf{a} - \mathbf{u}} \) being transpose to \( {M}_{\mathbf{u} - \mathbf{a}}\overset{{\mathbf{t}}^{\mathbf{a}}}{ \rightarrow }{M}_{\mathbf{u}} \) . Observe that \( {\left( {M}^{ \vee }\right) }^{ \vee } = M \), as long as \( {\dim }_{\mathbb{k}}\left( {M}_{\mathbf{b}}\right) \) is finite for all \( \mathbf{b} \in {\mathbb{Z}}^{d} \) . Note that the Matlis dual of the localization \( \mathbb{k}\left\lbrack {Q - F}\right\rbrack \) of \( \mathbb{k}\left\lbrack Q\right\rbrack \) along \( F \) is the injective hull \( \mathbb{k}\{ F - Q\} \) of \( \mathbb{k}\left\lbrack F\right\rbrack \) . 
In symbols, \( \mathbb{k}\{ F - Q\} = \mathbb{k}{\left\lbrack Q - F\right\rbrack }^{ \vee } \) . Matlis duality behaves well with respect to Hom and tensor product: Lemma 11.16 \( \underline{\operatorname{Hom}}\left( {M,{N}^{ \vee }}\right) = {\left( M \otimes N\right) }^{ \vee } \) . Proof. The resul
## 1076_(GTM234)Analysis and Probability Wavelets, Signals, Fractals
Definition 2.8.1. One of the uses of the functions \( h \) from (2.7.2), i.e., the harmonic functions, is that they determine invariant measures \( v \) on \( X \), i.e., measures on \( X \) invariant under the endomorphism \( \sigma : X \rightarrow X \) . We say that \( v \) is \( \sigma \) -invariant, or simply invariant, if \( v \circ {\sigma }^{-1} = v \), or

\[ v\left( {{\sigma }^{-1}\left( B\right) }\right) = v\left( B\right) ,\;B \in \mathcal{B}. \] (2.8.1)

We say that \( v \) is \( R \) -invariant if

\[ {\int }_{X}{Rf}\,{dv} = {\int }_{X}f\,{dv} \] (2.8.2)

for all bounded measurable functions \( f \) on \( X \) .

Proposition 2.8.2. Let \( v \) be \( R \) -invariant, and let \( h \) satisfy \( {Rh} = h \) . Then the measure

\[ d{v}_{h} \mathrel{\text{:=}} h\,{dv} \] (2.8.3)

is invariant. Conversely, if \( {v}_{1} \) is a \( \mathcal{B} \) -measure on \( X \) which is assumed \( \sigma \) -invariant, and if \( {v}_{1} \ll v \), then the Radon-Nikodym derivative \( h = \frac{d{v}_{1}}{dv} \) satisfies

\[ {Rh} = h\;v\text{-a.e. on }X. \] (2.8.4)

Proof. (Part one!)

\[ \int f \circ \sigma \,d{v}_{h} = \int \left( {f \circ \sigma }\right) h\,{dv} = \int R\left\lbrack {\left( {f \circ \sigma }\right) h}\right\rbrack {dv} = \int f\,{Rh}\,{dv} = \int {fh}\,{dv} = \int f\,d{v}_{h}, \]

which implies that \( {v}_{h} \) is \( \sigma \) -invariant. (The second equality uses the \( R \) -invariance (2.8.2) of \( v \), and the third uses the pull-out property \( R\left\lbrack {\left( {f \circ \sigma }\right) g}\right\rbrack = f\,{Rg} \) together with \( {Rh} = h \) .)

To prove (2.8.4) under the assumptions in the second part of the proposition, let \( f \) be a bounded \( \mathcal{B} \) -measurable function on \( X \) . Then

\[ \int f\,{Rh}\,{dv} = \int R\left\lbrack {\left( {f \circ \sigma }\right) h}\right\rbrack {dv} = \int \left( {f \circ \sigma }\right) h\,{dv} = \int \left( {f \circ \sigma }\right) d{v}_{1} = \int f\,d{v}_{1} = \int {fh}\,{dv}, \]

where all integrals are over \( X \) . As \( f \) is arbitrary, the conclusion (2.8.4) follows.

Remark 2.8.3. Suppose there is some probability measure \( v \) on \( X \) satisfying

\[ v{R}_{W} = v.
\] (2.8.5)
Then the assumption (ii),
\[ \mathop{\sum }\limits_{{\sigma \left( y\right) = x}}W\left( y\right) \leq 1, \] (2.8.6)
may be reduced to the special normalization (2.4.1) by the following argument. Assuming (2.8.6), the sequence
\[ {h}_{n}\left( x\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{{\sigma }^{n}\left( y\right) = x}}W\left( {{\sigma }^{n - 1}\left( y\right) }\right) \cdots W\left( {\sigma \left( y\right) }\right) W\left( y\right) \] (2.8.7)
is monotone decreasing. If a probability measure \( v \) exists satisfying (2.8.5), then
\[ v\left( {h}_{n}\right) = {\int }_{X}{h}_{n}\, dv = 1\;\text{ for all }n, \]
and the limit function
\[ h\left( x\right) \mathrel{\text{:=}} \mathop{\inf }\limits_{n}{h}_{n}\left( x\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{h}_{n}\left( x\right) \]
is measurable, and satisfies
\[ v\left( h\right) = 1\;\text{ and }\;{R}_{W}h = h. \] (2.8.8)
Since \( W\left( x\right) h\left( x\right) \leq h\left( {\sigma \left( x\right) }\right) \), it follows that the modified \( W \)-function
\[ {W}_{h}\left( x\right) \mathrel{\text{:=}} \frac{W\left( x\right) h\left( x\right) }{h\left( {\sigma \left( x\right) }\right) },\;x \in X, \] (2.8.9)
is well defined, and satisfies the special normalization rule (2.4.1), i.e.,
\[ \mathop{\sum }\limits_{{y \in X,\sigma \left( y\right) = x}}{W}_{h}\left( y\right) = 1\;\text{ for a.e. }x \in X. \]
To see this, recall that
\[ \mathop{\sum }\limits_{{\sigma \left( y\right) = x}}{W}_{h}\left( y\right) = \mathop{\sum }\limits_{{\sigma \left( y\right) = x}}\frac{W\left( y\right) h\left( y\right) }{h\left( {\sigma \left( y\right) }\right) } = \frac{1}{h\left( x\right) }\mathop{\sum }\limits_{{\sigma \left( y\right) = x}}W\left( y\right) h\left( y\right) = \frac{1}{h\left( x\right) }h\left( x\right) = 1, \]
which is the desired identity.

## Exercises

2.1.
Verify the details in the argument in the proof of Lemma 2.4.1 for why \( \mathop{\bigcup }\limits_{n}{\mathfrak{A}}_{n} \) is an algebra of continuous functions, and for why it is dense in \( C\left( \Omega \right) \).

2.2. Give a direct and geometric argument for the identity (2.6.7) in Remark 2.6.5.

2.3. Give three different ways to see that a cocycle \( V = {V}_{h} \) may be obtained from every solution \( h \) as specified by (2.7.2) in Theorem 2.7.1.

2.4. Can the assumption in Theorem 2.7.1 of boundedness on the harmonic function \( h \) be omitted?

2.5. Give a version of Theorem 2.7.1 which holds when \( h \) is not assumed bounded.

## 2.6. Compact operators

Let \( I \mathrel{\text{:=}} \left\lbrack {0,1}\right\rbrack \), and let \( R : I \times I \rightarrow \mathbb{C} \) be a continuous function. Suppose that
\[ \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{\bar{\xi }}_{i}R\left( {{x}_{i},{x}_{j}}\right) {\xi }_{j} \geq 0 \] (2E.1)
holds for all finite sequences \( \left( {\xi }_{i}\right) \) and all point configurations \( {x}_{1},{x}_{2},\ldots \) in \( I \).

(a) Then show that there is a monotone sequence \( {\lambda }_{1},{\lambda }_{2},\ldots \), \( 0 \leq {\lambda }_{n + 1} \leq {\lambda }_{n} \leq \cdots \leq {\lambda }_{1} \), such that \( {\lambda }_{n} \rightarrow 0 \); and an ONB \( \left( {g}_{n}\right) \) in \( {L}^{2}\left( I\right) \), with Lebesgue measure, satisfying
\[ {\int }_{0}^{1}R\left( {x, y}\right) {g}_{n}\left( y\right) \, dy = {\lambda }_{n}{g}_{n}\left( x\right) .
\] (b) The spectral theorem: Show that the operator \( {T}_{R} \), defined as \[ \left( {{T}_{R}f}\right) \left( x\right) \mathrel{\text{:=}} {\int }_{0}^{1}R\left( {x, y}\right) f\left( y\right) {dy} \] in \( {L}^{2}\left( I\right) \), satisfies \[ {T}_{R} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}\left| {g}_{n}\right\rangle \left\langle {g}_{n}\right| \] where \( \left| {g}_{n}\right\rangle \left\langle {g}_{n}\right| \) is Dirac notation for the rank-one projection onto \( \mathbb{C}{g}_{n} \) . (We say that \( {T}_{R} \) is a positive (or non-negative) compact operator.) ## 2.7. Karhunen-Loève [Ash90] Let \( \left( {\Omega ,\mathcal{B}, v}\right) \) be a probability space, i.e., with \( v\left( \Omega \right) = 1 \) ; and set \[ E\left( Y\right) \mathrel{\text{:=}} {\int }_{\Omega }Y\left( \omega \right) {dv}\left( \omega \right) \] for random variables \( Y \) on \( \Omega \) . Let \( X : I \rightarrow {L}^{2}\left( {\Omega, v}\right) \) be a random process with \( X\left( x\right) \mathrel{\text{:=}} X\left( {x, \cdot }\right) \) varying continuously in \( {L}^{2}\left( \Omega \right) \) . Suppose the following two conditions hold: (i) \( E\left( {X\left( x\right) }\right) = 0 \), for all \( x \in I \), and (ii) \( R\left( {x, y}\right) \mathrel{\text{:=}} E\left( {\overline{X\left( x\right) }X\left( y\right) }\right) \) is continuous on \( I \times I \) . (a) Then show that condition (2E.1) is satisfied, and let \( \left( {g}_{n}\right) \) be an associated ONB in \( {L}^{2}\left( I\right) \) . (b) Tensor product: Show that there is a sequence \( \left( {Y}_{n}\right) \) such that (i) \( {Y}_{n} \in {L}^{2}\left( {\Omega, v}\right) \) , (ii) \( X\left( {x,\omega }\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{g}_{n}\left( x\right) {Y}_{n}\left( \omega \right) \) is convergent in \( {L}^{2}\left( I\right) \otimes {L}^{2}\left( {\Omega, v}\right) \), and (iii) \( E\left( {{\bar{Y}}_{k}{Y}_{n}}\right) = {\delta }_{k, n}{\lambda }_{k} \) . 
Hint: Set
\[ {Y}_{k} \mathrel{\text{:=}} {\int }_{0}^{1}\overline{{g}_{k}\left( x\right) }X\left( {x, \cdot }\right) \, dx. \]

## 2.8. Brownian motion

Notation as in Exercises 2.6 and 2.7: \( I = \left\lbrack {0,1}\right\rbrack \), and \( \left( {\Omega ,\mathcal{B}, v}\right) \) is a fixed probability space. Suppose \( X : I \rightarrow {L}^{2}\left( \Omega \right) \) is given, and suppose also that it is Gaussian, that is, that the family \( \{ X\left( x\right) \mid x \in I\} \) of random variables has joint Gaussian distributions.

(a) Under the stated condition, prove that the random variables \( \left\{ {{Y}_{n} \mid n \in \mathbb{N}}\right\} \) in Exercise 2.7(b) are automatically independent Gaussian.

(b) Suppose in addition to the above condition on \( \{ X\left( x\right) \mid x \in I\} \) that \( R\left( {x, y}\right) = \min \left( {x, y}\right) \) for \( \left( {x, y}\right) \in I \times I \). Then show that \( \left( {X\left( x\right) }\right) \) is Brownian motion and has the following representation:
\[ X\left( {x,\omega }\right) = \sqrt{2}\mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\sin \left( {\left( {n - \frac{1}{2}}\right) {\pi x}}\right) }{\left( {n - \frac{1}{2}}\right) \pi }{Z}_{n}\left( \omega \right) \;\text{ for }x \in I,\omega \in \Omega , \] (2E.2)
where \( \left\{ {{Z}_{n} \mid n \in \mathbb{N}}\right\} \) is an orthonormal family of random variables in \( {L}^{2}\left( \Omega \right) \). In particular, \( E\left( {Z}_{n}\right) = 0 \) and \( E\left( {Z}_{n}^{2}\right) = 1 \) hold for all \( n \in \mathbb{N} \).

(c) With the same assumptions as in (b) above, prove that for a.e. \( \omega \in \Omega \) the expansion (2E.2) in fact converges uniformly for \( x \in I \).

(d) Show that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left( \left( n - \frac{1}{2}\right) \pi \right) }^{-1}\left| {{Z}_{n}\left( \cdot \right) }\right| \) converges with probability 1.

2.9.
Consider the setting in Exercise 2.8, i.e., \( I = \left\lbrack {0,1}\right\rbrack \), and \( \left( {\Omega ,\mathcal{B}, v}\right) \) a fixed probability space. (a) Let \( {\chi }_{\lbrack 0, x)} \) denote the indicator function of the subinterval \( \lbrack 0, x) \subset I \) . Show that \[ R\left( {x, y}\right) \mathrel{\text{:=}} \min \left( {x, y}\right) = {\left\langle {\chi }_{\lbrack 0, x)} \mid {\chi }_{\lbrack 0, y)}\right\rangle }_{{L}^{2}\left( I\right) }\;\text{ for all }x, y \in I. \] (b) Find the expansion of \( {\chi }_{\lbrack 0
## Three-Dimensional Navier-Stokes Equations (James C. Robinson, José L. Rodrigo, Witold Sadowski)
Definition 6.39. A function \( f : X \rightarrow \overline{\mathbb{R}} \) on a normed space \( X \) is said to be subdifferentially compact at a point \( \bar{x} \) where it is finite if for all sequences \( \left( {x}_{n}\right) { \rightarrow }_{f}\bar{x} \) , \( \left( {t}_{n}\right) \rightarrow {0}_{ + },\left( {w}_{n}^{ * }\right) \overset{ * }{ \rightarrow }0 \) such that \( {w}_{n}^{ * } \in {t}_{n}{\partial }_{F}f\left( {x}_{n}\right) \) for all \( n \in \mathbb{N} \), one has \( \left( {w}_{n}^{ * }\right) \rightarrow 0 \) . Such a notion is related to coderivative compactness via the epigraph multimap. Proposition 6.40. A lower semicontinuous function \( f : X \rightarrow \overline{\mathbb{R}} \) on an Asplund space is subdifferentially compact at a point \( \bar{x} \) where it is finite if and only if its epigraph multimap \( E \mathrel{\text{:=}} {E}_{f} \) is coderivatively compact at \( {\bar{x}}_{f} \mathrel{\text{:=}} \left( {\bar{x}, f\left( \bar{x}\right) }\right) \) . Proof. Suppose \( E \) is coderivatively compact at \( {\bar{x}}_{f} \) . Given sequences \( \left( {t}_{n}\right) \rightarrow {0}_{ + } \) , \( \left( {x}_{n}\right) { \rightarrow }_{f}\bar{x},\left( {w}_{n}^{ * }\right) \overset{ * }{ \rightarrow }0 \) such that \( {w}_{n}^{ * } \in {t}_{n}{\partial }_{F}f\left( {x}_{n}\right) \) for all \( n \in \mathbb{N} \), one has \( \left( {{w}_{n}^{ * }, - {t}_{n}}\right) \in \) \( {N}_{F}\left( {E,\left( {{x}_{n}, f\left( {x}_{n}\right) }\right) }\right) \), hence \( \left( {w}_{n}^{ * }\right) \rightarrow 0 \), and \( f \) is subdifferentially compact at \( \bar{x} \) . 
Conversely, suppose \( f \) is subdifferentially compact at \( \bar{x} \) and let \( \left( \left( {{w}_{n},{r}_{n}}\right) \right) \rightarrow {\bar{x}}_{f} \) in \( E \), \( \left( \left( {{w}_{n}^{ * },{r}_{n}^{ * }}\right) \right) \overset{ * }{ \rightarrow }\left( {0,0}\right) \) with \( \left( {{w}_{n}^{ * }, - {r}_{n}^{ * }}\right) \in {N}_{F}\left( {E,\left( {{w}_{n},{r}_{n}}\right) }\right) \) for all \( n \). Let \( N \mathrel{\text{:=}} \left\{ {n \in \mathbb{N} : {r}_{n}^{ * } = 0}\right\} \), so that \( {w}_{n}^{ * } \in {r}_{n}^{ * }{\partial }_{F}f\left( {w}_{n}\right) \) for all \( n \in \mathbb{N} \smallsetminus N \) and \( \left( {w}_{n}^{ * }\right) \rightarrow 0 \) if \( N \) is finite, since \( f \) is subdifferentially compact at \( \bar{x} \). It remains to consider the case in which \( N \) is infinite. Using Corollary 4.130 and a sequence \( \left( {\varepsilon }_{n}\right) \rightarrow {0}_{ + } \), for all \( n \in N \) we can find \( {t}_{n} \in \left( {0,{\varepsilon }_{n}}\right) \), \( {x}_{n} \in B\left( {{w}_{n},{\varepsilon }_{n}, f}\right) \), \( {x}_{n}^{ * } \in {\partial }_{F}f\left( {x}_{n}\right) \) such that \( \begin{Vmatrix}{{w}_{n}^{ * } - {t}_{n}{x}_{n}^{ * }}\end{Vmatrix} < {\varepsilon }_{n} \). Then \( \left( {{t}_{n}{x}_{n}^{ * }}\right) \overset{ * }{ \rightarrow }0 \), hence \( \left( {{t}_{n}{x}_{n}^{ * }}\right) \rightarrow 0 \) along \( N \), since \( f \) is subdifferentially compact at \( \bar{x} \). Therefore \( \left( {w}_{n}^{ * }\right) \rightarrow 0 \) and \( {E}_{f} \) is coderivatively compact at \( {\bar{x}}_{f} \).

## Exercises

1. Check that a subset \( S \) of \( X \) is normally compact at \( \bar{x} \in S \) if and only if for every Banach space \( Y \) and \( \bar{y} \in Y \), the multimap \( F : X \rightrightarrows Y \) with graph \( S \times Y \) is coderivatively compact at \( \left( {\bar{x},\bar{y}}\right) \).

2.
Suppose \( F : X \rightrightarrows Y \) has the strong partial cone property up to a compact set around \( \left( {\bar{x},\bar{y}}\right) \) in the following sense: there exist \( \alpha ,\tau > 0 \), a neighborhood \( W \) of \( \left( {\bar{x},\bar{y}}\right) \), and compact subsets \( K \) of \( X, L \) of \( Y \) such that \[ \forall t \in \left\lbrack {0,\tau }\right\rbrack ,\;F \cap W + {t\alpha }{B}_{X} \times \{ 0\} \subset F + t\left( {K \times L}\right) . \] (6.8) (a) Show that for all \( \left( {x, y}\right) \in F \cap W,{y}^{ * } \in {Y}^{ * },{x}^{ * } \in {D}_{F}^{ * }F\left( {x, y}\right) \left( {y}^{ * }\right) \) one has \( \alpha \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \leq \) \( {h}_{K}\left( {x}^{ * }\right) + {h}_{L}\left( {y}^{ * }\right) \) . (b) Prove that the latter property ensures that \( F \) is strongly coderivatively compact at \( \left( {\bar{x},\bar{y}}\right) \) . 3. Check that if the graph of \( F : X \rightrightarrows Y \) has the cone property up to a compact set around \( \left( {\bar{x},\bar{y}}\right) \), then \( F : X \rightrightarrows Y \) has the strong partial cone property up to a compact set around \( \left( {\bar{x},\bar{y}}\right) \) . 4. Check that a subset \( S \) of \( X \) has the cone property up to a compact set at \( \bar{x} \in S \) if and only if for every Banach space \( Y \) and \( \bar{y} \in Y \), the multimap \( F : X \rightrightarrows Y \) with graph \( S \times Y \) has the strong partial cone property up to a compact set around \( \left( {\bar{x},\bar{y}}\right) \) . ## 6.3 Calculus Rules for Coderivatives and Normal Cones Since limiting subdifferentials are related to limiting coderivatives and limiting normal cones, it is sensible to deduce calculus rules for subdifferentials from calculus rules for normal cones and coderivatives under operations on sets or multimaps such as intersections and direct and inverse images. We start with intersections. 
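As a finite-dimensional sanity check of the intersection rule \( N\left( {{S}_{1} \cap {S}_{2},\bar{x}}\right) \subset N\left( {{S}_{1},\bar{x}}\right) + N\left( {{S}_{2},\bar{x}}\right) \), the Python sketch below is an illustrative construction of our own (the sets, the sampling grid, and all function names are assumptions, not from the text). For a convex set \( S \) the Fréchet normal cone at \( \bar{x} \) reduces to \( \left\{ {v : \left\langle {v, s - \bar{x}}\right\rangle \leq 0\text{ for all }s \in S}\right\} \), which can be tested by sampling:

```python
# Hypothetical finite-dimensional example (not from the text): for a convex
# set S, the Frechet normal cone at xbar is
#   N(S, xbar) = { v : <v, s - xbar> <= 0 for all s in S }.
# Take S1 = {(x, y) : x <= 0}, S2 = {(x, y) : y <= 0}, xbar = (0, 0); then
# N(S1, 0) = {(a, 0) : a >= 0}, N(S2, 0) = {(0, b) : b >= 0}, and
# N(S1 cap S2, 0) = {(a, b) : a, b >= 0} = N(S1, 0) + N(S2, 0).

def in_normal_cone(v, sample_points, xbar=(0.0, 0.0), tol=1e-9):
    """Test <v, s - xbar> <= tol for every sampled s in the set."""
    return all((s[0] - xbar[0]) * v[0] + (s[1] - xbar[1]) * v[1] <= tol
               for s in sample_points)

# crude samplings of the three sets near the origin
grid = [(-1 + 0.1 * i, -1 + 0.1 * j) for i in range(21) for j in range(21)]
S1 = [p for p in grid if p[0] <= 0]
S2 = [p for p in grid if p[1] <= 0]
S = [p for p in grid if p[0] <= 0 and p[1] <= 0]

v = (2.0, 3.0)                       # a vector in the first quadrant
assert in_normal_cone(v, S)          # v lies in N(S1 cap S2, 0)
assert in_normal_cone((2.0, 0.0), S1) and in_normal_cone((0.0, 3.0), S2)
# so v = (2, 0) + (0, 3) exhibits the sum decomposition
```

At the corner of the two half-planes, every normal to the intersection splits as a sum of normals to the individual sets; the results of this section establish that decomposition in far greater generality, under coherence or alliedness assumptions.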
## 6.3.1 Normal Cone to an Intersection Unions and intersections are such basic operations with sets that they deserve priority. Simple examples show that the simple rule \( N\left( {F \cup G, x}\right) \subset N\left( {F, x}\right) \cap N\left( {G, x}\right) \) for two subsets \( F, G \) of a Banach space, \( x \in F \cap G \) is satisfied for the firm and the directional normal cones but not for the limiting normal cone. Thus, we focus our attention on intersections. We start with the observation that metric estimates yield a rule for the normal cone to an intersection. Theorem 6.41 (Normal cone to an intersection). Let \( \left( {{S}_{1},\ldots ,{S}_{k}}\right) \) be a family of closed subsets of an Asplund space satisfying the following linear coherence condition at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) : for some \( c > 0,\rho > 0 \) , \[ \forall x \in B\left( {\bar{x},\rho }\right) ,\;d\left( {x, S}\right) \leq {cd}\left( {x,{S}_{1}}\right) + \cdots + {cd}\left( {x,{S}_{k}}\right) . \] (6.9) Then one has \[ {N}_{L}\left( {S,\bar{x}}\right) \subset {N}_{L}\left( {{S}_{1},\bar{x}}\right) + \cdots + {N}_{L}\left( {{S}_{k},\bar{x}}\right) . \] (6.10) The result follows from a passage to the limit in Theorem 4.75; but we present another proof. Proof. Let \( {\bar{x}}^{ * } \in {N}_{L}\left( {S,\bar{x}}\right) \), so that by Proposition 6.8, \( {\bar{x}}^{ * } = r{\bar{u}}^{ * } \) for some \( r \in {\mathbb{R}}_{ + } \) , \( {\bar{u}}^{ * } \in {\partial }_{L}{d}_{S}\left( \bar{x}\right) \) . Let \( f \mathrel{\text{:=}} {cd}\left( {\cdot ,{S}_{1}}\right) + \cdots + {cd}\left( {\cdot ,{S}_{k}}\right) \), so that \( {d}_{S} \leq f \) and \( {\left. f\right| }_{S} = 0 \) . Proposition 6.21 ensures that \( {\bar{u}}^{ * } \in {\partial }_{L}f\left( \bar{x}\right) \) . 
The sum rule yields \( {\bar{u}}_{i}^{ * } \in c{\partial }_{L}{d}_{{S}_{i}}\left( \bar{x}\right) \) such that \( {\bar{u}}^{ * } = {\bar{u}}_{1}^{ * } + \cdots + {\bar{u}}_{k}^{ * } \) . Then \( {\bar{x}}_{i}^{ * } \mathrel{\text{:=}} r{\bar{u}}_{i}^{ * } \in {N}_{L}\left( {{S}_{i},\bar{x}}\right) \) and \( {\bar{x}}^{ * } = {\bar{x}}_{1}^{ * } + \cdots + {\bar{x}}_{k}^{ * } \) . The study of the limiting normal cone to an intersection we undertake now makes use of the alliedness property that appeared in Chap. 4. It generalizes the notion of direct sum of linear spaces. Definition 6.42 ([813]). A finite family \( {\left( {S}_{i}\right) }_{i \in I}\left( {I \mathrel{\text{:=}} {\mathbb{N}}_{k}}\right) \) of closed subsets of a normed space \( X \) is said to be allied at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if whenever \( {x}_{n, i}^{ * } \in \) \( {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) with \( {\left( {x}_{n, i}\right) }_{n} \in {S}_{i} \) for \( \left( {n, i}\right) \in \mathbb{N} \times I,{\left( {x}_{n, i}\right) }_{n} \rightarrow \bar{x}, \) \[ {\left( \begin{Vmatrix}{x}_{n,1}^{ * } + \cdots + {x}_{n, k}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0 \Rightarrow \forall i \in I\;{\left( \begin{Vmatrix}{x}_{n, i}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0. \] This property can be reformulated as follows: there exist \( \rho > 0, c > 0 \) such that \[ \forall {x}_{i} \in {S}_{i} \cap B\left( {\bar{x},\rho }\right) ,{x}_{i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{i}}\right) ,\;c\mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{x}_{i}^{ * }\end{Vmatrix} \leq \begin{Vmatrix}{{x}_{1}^{ * } + \cdots + {x}_{k}^{ * }}\end{Vmatrix}. 
\] (6.11) This reformulation follows by homogeneity from the fact that one can find \( \rho > 0 \) and \( c > 0 \) such that for \( {x}_{i} \in {S}_{i} \cap B\left( {\bar{x},\rho }\right) ,{x}_{i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{i}}\right) \) with \( \begin{Vmatrix}{{x}_{1}^{ * } + \cdots + {x}_{k}^{ * }}\end{Vmatrix} < c \) one has \( \mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{x}_{i}^{ * }\end{Vmatrix} < 1 \) or, equivalently, \( \mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{x}_{i}^{ * }\end{Vmatrix} \geq 1 \Rightarrow \begin{Vmatrix}{{x}_{1}^{ * } + \cdots + {x}_{k}^{ * }}\end{Vmatrix} \geq c \) . The result that follows reduces alliedness to an easier requirement. Proposition 6.43. A finite family \( {\left( {S}_{i}\right) }_{i \in I}\left( {I \mathrel{\text{:=}} {\mathbb{N}}_{k}}\right) \) of closed subsets of a normed space \( X \) is allied at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if
## (GTM 35) Several Complex Variables and Banach Algebras
Definition 4.3. The dual space to \( {T}_{x} \) is denoted \( {T}_{x}^{ * } \).

Note. The dimension of \( {T}_{x}^{ * } \) over \( \mathbb{C} \) is \( N \).

Definition 4.4. A 1-form \( \omega \) on \( \Omega \) is a map \( \omega \) assigning to each \( x \) in \( \Omega \) an element of \( {T}_{x}^{ * } \).

Example. Let \( f \in {C}^{\infty } \). For \( x \in \Omega \), put
\[ {\left( df\right) }_{x}\left( v\right) = v\left( f\right) ,\;\text{ all }v \in {T}_{x}. \]
Then \( {\left( df\right) }_{x} \in {T}_{x}^{ * } \). \( {df} \) is the 1-form on \( \Omega \) assigning to each \( x \) in \( \Omega \) the element \( {\left( df\right) }_{x} \).

Note. \( d{x}_{1},\ldots, d{x}_{N} \) are particular 1-forms. In a natural way 1-forms may be added and multiplied by scalar functions.

Lemma 4.2. Every 1-form \( \omega \) admits a unique representation
\[ \omega = \mathop{\sum }\limits_{1}^{N}{C}_{j}\, d{x}_{j}, \]
the \( {C}_{j} \) being scalar functions on \( \Omega \).

Note. For \( f \in {C}^{\infty } \),
\[ {df} = \mathop{\sum }\limits_{{j = 1}}^{N}\frac{\partial f}{\partial {x}_{j}}\, d{x}_{j}. \]

We now recall some multilinear algebra. Let \( V \) be an \( N \)-dimensional vector space over \( \mathbb{C} \). Denote by \( { \land }^{k}\left( V\right) \) the vector space of \( k \)-linear alternating maps of \( V \times \cdots \times V \rightarrow \mathbb{C} \). ("Alternating" means that the value of the function changes sign if two of the variables are interchanged.) Define \( \mathcal{G}\left( V\right) \) as the direct sum
\[ \mathcal{G}\left( V\right) = { \land }^{0}\left( V\right) \oplus { \land }^{1}\left( V\right) \oplus \cdots \oplus { \land }^{N}\left( V\right) . \]
Here \( { \land }^{0}\left( V\right) = \mathbb{C} \) and \( { \land }^{1}\left( V\right) \) is the dual space of \( V \). Put \( { \land }^{j}\left( V\right) = 0 \) for \( j > N \). We now introduce a multiplication into the vector space \( \mathcal{G}\left( V\right) \).
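The multiplication about to be introduced is a signed average over permutations. As a computational preview, the Python sketch below (our own construction; the representation of forms as plain functions of vectors and all names are assumptions) antisymmetrizes the product of a \( k \)-linear and an \( l \)-linear map, with the \( 1/\left( {k + l}\right) ! \) normalization used in the definition that follows, and checks that the result alternates:

```python
import math
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s, p = 1, list(perm)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def wedge(tau, k, sigma, l):
    """Alternation of the product of a k-linear map tau and an l-linear map
    sigma, normalized by 1/(k+l)! as in the definition in the text."""
    def ts(*xi):
        total = 0.0
        for perm in permutations(range(k + l)):
            total += (sign(perm)
                      * tau(*(xi[perm[i]] for i in range(k)))
                      * sigma(*(xi[perm[k + i]] for i in range(l))))
        return total / math.factorial(k + l)
    return ts

# two 1-forms on R^2 (an illustrative choice): tau = dx, sigma = dy
dx = lambda v: v[0]
dy = lambda v: v[1]
w = wedge(dx, 1, dy, 1)

e1, e2 = (1.0, 0.0), (0.0, 1.0)
assert abs(w(e1, e2) - 0.5) < 1e-12          # (dx ^ dy)(e1, e2) = 1/2 here
assert abs(w(e1, e2) + w(e2, e1)) < 1e-12    # the result alternates
# anticommutativity for k = l = 1: dx ^ dy = -(dy ^ dx)
assert abs(wedge(dy, 1, dx, 1)(e1, e2) + w(e1, e2)) < 1e-12
```

The asserts verify, for a single example, the two properties the next lemmas state in general: the wedge of two 1-forms is alternating, and swapping the factors flips the sign.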
Fix \( \tau \in { \land }^{k}\left( V\right) \), \( \sigma \in { \land }^{l}\left( V\right) \). The map
\[ \left( {{\xi }_{1},\ldots ,{\xi }_{k},{\xi }_{k + 1},\ldots ,{\xi }_{k + l}}\right) \rightarrow \tau \left( {{\xi }_{1},\ldots ,{\xi }_{k}}\right) \sigma \left( {{\xi }_{k + 1},\ldots ,{\xi }_{k + l}}\right) \]
is a \( \left( {k + l}\right) \)-linear map from \( V \times \cdots \times V \) (\( k + l \) factors) \( \rightarrow \mathbb{C} \). It is, however, not alternating. To obtain an alternating map, we use

Definition 4.5. Let \( \tau \in { \land }^{k}\left( V\right) \), \( \sigma \in { \land }^{l}\left( V\right) \), \( k, l \geq 1 \).
\[ \tau \land \sigma \left( {{\xi }_{1},\ldots ,{\xi }_{k + l}}\right) = \frac{1}{\left( {k + l}\right) !}\mathop{\sum }\limits_{\pi }{\left( -1\right) }^{\pi }\tau \left( {{\xi }_{\pi \left( 1\right) },\ldots ,{\xi }_{\pi \left( k\right) }}\right) \cdot \sigma \left( {{\xi }_{\pi \left( {k + 1}\right) },\ldots ,{\xi }_{\pi \left( {k + l}\right) }}\right) , \]
the sum being taken over all permutations \( \pi \) of the set \( \{ 1,2,\ldots, k + l\} \), and \( {\left( -1\right) }^{\pi } \) denoting the sign of the permutation \( \pi \).

Lemma 4.3. \( \tau \land \sigma \) as defined is \( \left( {k + l}\right) \)-linear and alternating and so \( \in { \land }^{k + l}\left( V\right) \).

The operation \( \land \) (wedge) defines a product for pairs of elements, one in \( { \land }^{k}\left( V\right) \) and one in \( { \land }^{l}\left( V\right) \), the value lying in \( { \land }^{k + l}\left( V\right) \), hence in \( \mathcal{G}\left( V\right) \). By linearity, \( \land \) extends to a product on arbitrary pairs of elements of \( \mathcal{G}\left( V\right) \) with value in \( \mathcal{G}\left( V\right) \). For \( \tau \in { \land }^{0}\left( V\right) \), \( \sigma \in \mathcal{G}\left( V\right) \), define \( \tau \land \sigma \) as scalar multiplication by \( \tau \).

Lemma 4.4.
Under \( \land \), \( \mathcal{G}\left( V\right) \) is an associative algebra with identity.

\( \mathcal{G}\left( V\right) \) is not commutative. In fact,

Lemma 4.5. If \( \tau \in { \land }^{k}\left( V\right) \), \( \sigma \in { \land }^{l}\left( V\right) \), then \( \tau \land \sigma = {\left( -1\right) }^{kl}\sigma \land \tau \).

Let \( {e}_{1},\ldots ,{e}_{N} \) form a basis for \( { \land }^{1}\left( V\right) \).

Lemma 4.6. Fix \( k \). The set of elements
\[ {e}_{{i}_{1}} \land {e}_{{i}_{2}} \land \cdots \land {e}_{{i}_{k}},\;1 \leq {i}_{1} < {i}_{2} < \cdots < {i}_{k} \leq N, \]
forms a basis for \( { \land }^{k}\left( V\right) \).

We now apply the preceding to the case when \( V = {T}_{x} \), \( x \in \Omega \). Then \( { \land }^{k}\left( {T}_{x}\right) \) is the space of all \( k \)-linear alternating functions on \( {T}_{x} \), and so, for \( k = 1 \), coincides with \( {T}_{x}^{ * } \). The following thus extends our definition of a 1-form.

Definition 4.6. A \( k \)-form \( {\omega }^{k} \) on \( \Omega \) is a map \( {\omega }^{k} \) assigning to each \( x \) in \( \Omega \) an element of \( { \land }^{k}\left( {T}_{x}\right) \).

\( k \)-forms form a module over the algebra of scalar functions on \( \Omega \) in a natural way. Let \( {\tau }^{k} \) and \( {\sigma }^{l} \) be, respectively, a \( k \)-form and an \( l \)-form. For \( x \in \Omega \), put
\[ {\tau }^{k} \land {\sigma }^{l}\left( x\right) = {\tau }^{k}\left( x\right) \land {\sigma }^{l}\left( x\right) \in { \land }^{k + l}\left( {T}_{x}\right) . \]
In particular, since \( d{x}_{1},\ldots, d{x}_{N} \) are 1-forms,
\[ d{x}_{{i}_{1}} \land d{x}_{{i}_{2}} \land \cdots \land d{x}_{{i}_{k}} \]
is a \( k \)-form for each choice of \( \left( {{i}_{1},\ldots ,{i}_{k}}\right) \). Because of Lemma 4.5,
\[ d{x}_{j} \land d{x}_{j} = 0\text{ for each }j. \]
Hence \( d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}} = 0 \) unless the \( {i}_{v} \) are distinct.

Lemma 4.7.
Let \( {\omega }^{k} \) be any \( k \)-form on \( \Omega \). Then there exist (unique) scalar functions \( {C}_{{i}_{1}\cdots {i}_{k}} \) on \( \Omega \) such that
\[ {\omega }^{k} = \mathop{\sum }\limits_{{{i}_{1} < {i}_{2} < \cdots < {i}_{k}}}{C}_{{i}_{1}\cdots {i}_{k}}\, d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}}. \]

Definition 4.7. \( { \land }^{k}\left( \Omega \right) \) consists of all \( k \)-forms \( {\omega }^{k} \) such that the functions \( {C}_{{i}_{1}\cdots {i}_{k}} \) occurring in Lemma 4.7 lie in \( {C}^{\infty } \); \( { \land }^{0}\left( \Omega \right) = {C}^{\infty } \).

Recall now the map \( f \rightarrow {df} \) from \( {C}^{\infty } \rightarrow { \land }^{1}\left( \Omega \right) \). We wish to extend \( d \) to a linear map \( { \land }^{k}\left( \Omega \right) \rightarrow { \land }^{k + 1}\left( \Omega \right) \), for all \( k \).

Definition 4.8. Let \( {\omega }^{k} \in { \land }^{k}\left( \Omega \right) \), \( k = 0,1,2,\ldots \) Then
\[ {\omega }^{k} = \mathop{\sum }\limits_{{{i}_{1} < \cdots < {i}_{k}}}{C}_{{i}_{1}\cdots {i}_{k}}\, d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}}. \]
Define
\[ d{\omega }^{k} = \mathop{\sum }\limits_{{{i}_{1} < \cdots < {i}_{k}}}d{C}_{{i}_{1}\cdots {i}_{k}} \land d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}}. \]
Note that \( d \) maps \( { \land }^{k}\left( \Omega \right) \rightarrow { \land }^{k + 1}\left( \Omega \right) \). We call \( d{\omega }^{k} \) the exterior derivative of \( {\omega }^{k} \).

For \( \omega \in { \land }^{1}\left( \Omega \right) \),
\[ \omega = \mathop{\sum }\limits_{{i = 1}}^{N}{C}_{i}\, d{x}_{i}, \]
\[ {d\omega } = \mathop{\sum }\limits_{{i, j}}\frac{\partial {C}_{i}}{\partial {x}_{j}}\, d{x}_{j} \land d{x}_{i} = \mathop{\sum }\limits_{{i < j}}\left( {\frac{\partial {C}_{j}}{\partial {x}_{i}} - \frac{\partial {C}_{i}}{\partial {x}_{j}}}\right) d{x}_{i} \land d{x}_{j}.
\]
It follows that for \( f \in {C}^{\infty } \),
\[ d\left( {df}\right) = d\left( {\mathop{\sum }\limits_{{i = 1}}^{N}\frac{\partial f}{\partial {x}_{i}}\, d{x}_{i}}\right) = \mathop{\sum }\limits_{{i < j}}\left( {\frac{\partial }{\partial {x}_{i}}\left( \frac{\partial f}{\partial {x}_{j}}\right) - \frac{\partial }{\partial {x}_{j}}\left( \frac{\partial f}{\partial {x}_{i}}\right) }\right) d{x}_{i} \land d{x}_{j} = 0, \]
or \( {d}^{2} = 0 \) on \( {C}^{\infty } \). More generally,

Lemma 4.8. \( {d}^{2} = 0 \) for every \( k \); i.e., if \( {\omega }^{k} \in { \land }^{k}\left( \Omega \right) \), \( k \) arbitrary, then \( d\left( {d{\omega }^{k}}\right) = 0 \).

To prove Lemma 4.8, it is useful to prove first

Lemma 4.9. Let \( {\omega }^{k} \in { \land }^{k}\left( \Omega \right) \), \( {\omega }^{l} \in { \land }^{l}\left( \Omega \right) \). Then
\[ d\left( {{\omega }^{k} \land {\omega }^{l}}\right) = d{\omega }^{k} \land {\omega }^{l} + {\left( -1\right) }^{k}{\omega }^{k} \land d{\omega }^{l}. \]

NOTES For an exposition of the material in this section, see, e.g., I. M. Singer and J. A. Thorpe, Lecture Notes on Elementary Topology and Geometry, Scott, Foresman, Glenview, Ill., 1967, Chap. V.

## 5 The \( \bar{\partial } \)-Operator

Note. As in the preceding section, the proofs in this section are left as exercises.

Let \( \Omega \) be an open subset of \( {\mathbb{C}}^{n} \). The complex coordinate functions \( {z}_{1},\ldots ,{z}_{n} \) as well as their conjugates \( {\bar{z}}_{1},\ldots ,{\bar{z}}_{n} \) lie in \( {C}^{\infty }\left( \Omega \right) \). Hence the forms
\[ d{z}_{1},\ldots, d{z}_{n},\;d{\bar{z}}_{1},\ldots, d{\bar{z}}_{n} \]
all belong to \( { \land }^{1}\left( \Omega \right) \). Fix \( x \in \Omega \). Note that \( { \land }^{1}\left( {T}_{x}\right) = {T}_{x}^{ * } \) has dimension \( {2n} \) over \( \mathbb{C} \), since \( {\mathbb{C}}^{n} = {\mathbb{R}}^{2n} \).
If \( {x}_{j} = \operatorname{Re}\left( {z}_{j}\right) \) and \( {y}_{j} = \operatorname{Im}\left( {z}_{j}\right) \), then
\[ {\left( d{x}_{1}\right) }_{x},\ldots ,{\left( d{x}_{n}\right) }_{x},\;{\left( d{y}_{1}\right) }_{x},\ldots ,{\left(
## (GTM 95) Probability-1
Definition 3. A family of distribution functions \( F = \left\{ {{F}_{\alpha };\alpha \in \mathfrak{A}}\right\} \) defined on \( {R}^{n}, n \geq 1 \), is relatively compact (or tight) if the same property is possessed by the family \( \mathcal{P} = \left\{ {{\mathrm{P}}_{\alpha };\alpha \in \mathfrak{A}}\right\} \) of probability measures, where \( {\mathrm{P}}_{\alpha } \) is the measure constructed from \( {F}_{\alpha } \) . 3. The following result is fundamental for the study of weak convergence of probability measures. Theorem 1 (Prokhorov’s Theorem). Let \( \mathcal{P} = \left\{ {{\mathrm{P}}_{\alpha };\alpha \in \mathfrak{A}}\right\} \) be a family of probability measures defined on a complete separable metric space \( \left( {E,\mathcal{E},\rho }\right) \) . Then \( \mathcal{P} \) is relatively compact if and only if it is tight. Proof. We shall give the proof only when the space is the real line. (The proof can be carried over (see [9], [76]), almost unchanged, to arbitrary Euclidean spaces \( {R}^{n}, n \geq 2 \) . Then the theorem can be extended successively to \( {R}^{\infty } \), to \( \sigma \) -compact spaces; and finally to general complete separable metric spaces, by reducing each case to the preceding one.) Necessity. Let the family \( \mathcal{P} = \left\{ {{\mathrm{P}}_{\alpha };\alpha \in \mathfrak{A}}\right\} \) of probability measures defined on \( \left( {R,\mathcal{B}\left( R\right) }\right) \) be relatively compact but not tight. Then there is an \( \varepsilon > 0 \) such that for every compact set \( K \subseteq R \) \[ \mathop{\sup }\limits_{\alpha }{\mathrm{P}}_{\alpha }\left( {R \smallsetminus K}\right) > \varepsilon \] and therefore, for each interval \( I = \left( {a, b}\right) \) , \[ \mathop{\sup }\limits_{\alpha }{\mathrm{P}}_{\alpha }\left( {R \smallsetminus I}\right) > \varepsilon . 
\]
It follows that for every interval \( {I}_{n} = \left( {-n, n}\right) \), \( n \geq 1 \), there is a measure \( {\mathrm{P}}_{{\alpha }_{n}} \) such that
\[ {\mathrm{P}}_{{\alpha }_{n}}\left( {R \smallsetminus {I}_{n}}\right) > \varepsilon . \]
Since the original family \( \mathcal{P} \) is relatively compact, we can select from \( {\left\{ {\mathrm{P}}_{{\alpha }_{n}}\right\} }_{n \geq 1} \) a subsequence \( \left\{ {\mathrm{P}}_{{\alpha }_{{n}_{k}}}\right\} \) such that \( {\mathrm{P}}_{{\alpha }_{{n}_{k}}}\overset{w}{ \rightarrow }\mathrm{Q} \), where \( \mathrm{Q} \) is a probability measure. Then, by the equivalence of conditions (I) and (II) in Theorem 1 of Sect. 1, we have
\[ \mathop{\limsup }\limits_{{k \rightarrow \infty }}{\mathrm{P}}_{{\alpha }_{{n}_{k}}}\left( {R \smallsetminus {I}_{n}}\right) \leq \mathrm{Q}\left( {R \smallsetminus {I}_{n}}\right) \] (2)
for every \( n \geq 1 \). But \( \mathrm{Q}\left( {R \smallsetminus {I}_{n}}\right) \downarrow 0 \), \( n \rightarrow \infty \), and the left side of (2) exceeds \( \varepsilon > 0 \). This contradiction shows that relatively compact families are tight.

To prove the sufficiency we need a general result (Helly's theorem) on the sequential compactness of families of generalized distribution functions (Subsection 2 of Sect. 3, Chap. 2).

Let \( \mathcal{I} = \{ G\} \) be the collection of generalized distribution functions \( G = G\left( x\right) \) that satisfy:

(1) \( G\left( x\right) \) is nondecreasing;

(2) \( 0 \leq G\left( {-\infty }\right) \), \( G\left( {+\infty }\right) \leq 1 \);

(3) \( G\left( x\right) \) is continuous on the right.

Then \( \mathcal{I} \) clearly contains the class of distribution functions \( \mathcal{F} = \{ F\} \) for which \( F\left( {-\infty }\right) = 0 \) and \( F\left( {+\infty }\right) = 1 \).

Theorem 2 (Helly’s Theorem).
The class \( \mathcal{I} = \{ G\} \) of generalized distribution functions is sequentially compact, i.e., for every sequence \( \left\{ {G}_{n}\right\} \) of functions from \( \mathcal{I} \) we can find a function \( G \in \mathcal{I} \) and a subsequence \( \left\{ {n}_{k}\right\} \subseteq \{ n\} \) such that \[ {G}_{{n}_{k}}\left( x\right) \rightarrow G\left( x\right) ,\;k \rightarrow \infty , \] for every point \( x \) of the set \( \mathbb{C}\left( G\right) \) of points of continuity of \( G = G\left( x\right) \) . Proof. Let \( T = \left\{ {{x}_{1},{x}_{2},\ldots }\right\} \) be a countable dense subset of \( R \) . Since the sequence of numbers \( \left\{ {{G}_{n}\left( {x}_{1}\right) }\right\} \) is bounded, there is a subsequence \( {N}_{1} = \left\{ {{n}_{1}^{\left( 1\right) },{n}_{2}^{\left( 1\right) },\ldots }\right\} \) such that \( {G}_{{n}_{i}^{\left( 1\right) }}\left( {x}_{1}\right) \) approaches a limit \( {g}_{1} \) as \( i \rightarrow \infty \) . Then we extract from \( {N}_{1} \) a subsequence \( {N}_{2} = \left\{ {{n}_{1}^{\left( 2\right) },{n}_{2}^{\left( 2\right) },\ldots }\right\} \) such that \( {G}_{{n}_{i}^{\left( 2\right) }}\left( {x}_{2}\right) \) approaches a limit \( {g}_{2} \) as \( i \rightarrow \infty \) ; and so on. Define a function \( {G}_{T}\left( x\right) \) on the set \( T \subseteq R \) by \[ {G}_{T}\left( {x}_{i}\right) = {g}_{i},\;{x}_{i} \in T \] and consider the "Cantor" diagonal sequence \( N = \left\{ {{n}_{1}^{\left( 1\right) },{n}_{2}^{\left( 2\right) },\ldots }\right\} \) . Then, for each \( {x}_{i} \in T \), as \( m \rightarrow \infty \), we have \[ {G}_{{n}_{m}^{\left( m\right) }}\left( {x}_{i}\right) \rightarrow {G}_{T}\left( {x}_{i}\right) \] Finally, let us define \( G = G\left( x\right) \) for all \( x \in R \) by putting \[ G\left( x\right) = \inf \left\{ {{G}_{T}\left( y\right) : y \in T, y > x}\right\} . 
\] (3) We claim that \( G = G\left( x\right) \) is the required function and \( {G}_{{n}_{m}^{\left( m\right) }}\left( x\right) \rightarrow G\left( x\right) \) at all points \( x \) of continuity of \( G \) . Since all the functions \( {G}_{n} \) under consideration are nondecreasing, we have \( {G}_{{n}_{m}^{\left( m\right) }}\left( x\right) \leq {G}_{{n}_{m}^{\left( m\right) }}\left( y\right) \) for all \( x \) and \( y \) that belong to \( T \) and satisfy the inequality \( x \leq y \) . Hence \( {G}_{T}\left( x\right) \leq {G}_{T}\left( y\right) \) for such \( x \) and \( y \) . It follows from this and (3) that \( G = G\left( x\right) \) is nondecreasing. Now let us show that it is continuous on the right. Let \( {x}_{k} \downarrow x \) and \( d = \mathop{\lim }\limits_{k}G\left( {x}_{k}\right) \) . Clearly \( G\left( x\right) \leq d \), and we have to show that actually \( G\left( x\right) = d \) . Suppose the contrary, that is, let \( G\left( x\right) < d \) . It follows from (3) that there is a \( y \in T, x < y \) , such that \( {G}_{T}\left( y\right) < d \) . But \( x < {x}_{k} < y \) for sufficiently large \( k \), and therefore \( G\left( {x}_{k}\right) \leq {G}_{T}\left( y\right) < d \) and \( \lim G\left( {x}_{k}\right) < d \), which contradicts \( d = \mathop{\lim }\limits_{k}G\left( {x}_{k}\right) \) . Thus we have constructed a function \( G \) that belongs to \( \mathcal{I} \) . We now establish that \( {G}_{{n}_{m}^{\left( m\right) }}\left( {x}^{0}\right) \rightarrow G\left( {x}^{0}\right) \) for every \( {x}^{0} \in \mathbb{C}\left( G\right) \) . 
If \( {x}^{0} < y \in T \), then \[ \mathop{\limsup }\limits_{m}{G}_{{n}_{m}^{\left( m\right) }}\left( {x}^{0}\right) \leq \mathop{\limsup }\limits_{m}{G}_{{n}_{m}^{\left( m\right) }}\left( y\right) = {G}_{T}\left( y\right) , \] whence \[ \mathop{\limsup }\limits_{m}{G}_{{n}_{m}^{\left( m\right) }}\left( {x}^{0}\right) \leq \inf \left\{ {{G}_{T}\left( y\right) : y > {x}^{0}, y \in T}\right\} = G\left( {x}^{0}\right) . \] (4) On the other hand, let \( {x}^{1} < y < {x}^{0}, y \in T \) . Then \[ G\left( {x}^{1}\right) \leq {G}_{T}\left( y\right) = \mathop{\lim }\limits_{m}{G}_{{n}_{m}^{\left( m\right) }}\left( y\right) = \mathop{\liminf }\limits_{m}{G}_{{n}_{m}^{\left( m\right) }}\left( y\right) \leq \mathop{\liminf }\limits_{m}{G}_{{n}_{m}^{\left( m\right) }}\left( {x}^{0}\right) . \] Hence if we let \( {x}^{1} \uparrow {x}^{0} \) we find that \[ G\left( {{x}^{0} - }\right) \leq \mathop{\liminf }\limits_{m}{G}_{{n}_{m}^{\left( m\right) }}\left( {x}^{0}\right) . \] (5) But if \( G\left( {{x}^{0} - }\right) = G\left( {x}^{0}\right) \) then (4) and (5) imply that \( {G}_{{n}_{m}^{\left( m\right) }}\left( {x}^{0}\right) \rightarrow G\left( {x}^{0}\right) \), \( m \rightarrow \infty \) . This completes the proof of the theorem. \( \square \) We can now complete the proof of Theorem 1. Sufficiency. Let the family \( \mathcal{P} \) be tight and let \( \left\{ {\mathrm{P}}_{n}\right\} \) be a sequence of probability measures from \( \mathcal{P} \) . Let \( \left\{ {F}_{n}\right\} \) be the corresponding sequence of distribution functions. By Helly’s theorem, there are a subsequence \( \left\{ {F}_{{n}_{k}}\right\} \subseteq \left\{ {F}_{n}\right\} \) and a generalized distribution function \( G \in \mathcal{I} \) such that \( {F}_{{n}_{k}}\left( x\right) \rightarrow G\left( x\right) \) for \( x \in \mathbb{C}\left( G\right) \) .
Let us show that because \( \mathcal{P} \) was assumed tight, the function \( G = G\left( x\right) \) is in fact a genuine distribution function \( \left( {G\left( {-\infty }\right) = 0, G\left( {+\infty }\right) = 1}\right) \) . Take \( \varepsilon > 0 \), and let \( I = (a, b\rbrack \) be the interval for which \[ \mathop{\sup }\limits_{n}{\mathrm{P}}_{n}\left( {R \smallsetminus I}\right) < \varepsilon \] or, equivalently, \[ 1 - \varepsilon \leq {\mathrm{P}}_{n}(a, b\rbrack ,\;n \geq 1. \] Choose points \( {a}^{\prime },{b}^{\prime } \in \mathbb{C}\left( G\right) \) such that \( {a}^{\prime } < a \), \( {b}^{\prime } > b \) . Then \( 1 - \varepsilon \leq {\mathrm{P}}_{{n}_{k}}(a, b\rbrack \leq {\mathrm{P}}_{{n}_{k}}({a}^{\prime },{b}^{\prime }\rbrack = {F}_{{n}_{k}}\left( {b}^{\prime }\right) - {F}_{{n}_{k}}\left( {a}^{\prime }\right) \rightarrow G\left( {b}^{\prime }\right) - G\left( {a}^{\prime }\right) \).
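The role of tightness in the sufficiency argument can be illustrated numerically. The sketch below (a hypothetical example, not from the text) takes the non-tight family of point masses \( {\delta }_{n} \): their distribution functions converge pointwise to \( G \equiv 0 \), which is a generalized distribution function with \( G\left( {+\infty }\right) = 0 < 1 \), so no subsequence converges weakly to a probability measure.

```python
# Hypothetical illustration of why tightness is needed: the point masses
# delta_n at x = n have CDFs F_n(x) = 1_{[n, +oo)}(x).  The family is not
# tight (mass escapes to +infinity), and for every fixed x the limit of
# F_n(x) is 0, so the Helly limit G == 0 satisfies G(+inf) = 0 < 1: it is
# a generalized, but not a genuine, distribution function.

def F(n, x):
    """Distribution function of the point mass delta_n."""
    return 1.0 if x >= n else 0.0

xs = [-10.0, 0.0, 5.0, 1000.0]
print([F(10**9, x) for x in xs])  # [0.0, 0.0, 0.0, 0.0]: all mass is gone
```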
1083_(GTM240)Number Theory II
Definition 16.4.4
Definition 16.4.4. (1) A commutative ring \( R \) is semisimple if it is a finite product of fields. (2) An R-module \( M \) is simple if its only submodules are 0 and \( M \) . (3) An R-module is semisimple if it is a finite direct sum of simple modules. (4) An R-module \( M \) is cyclic if it is generated over \( R \) by a single element, in other words if \( M = {aR} \) for some \( a \in M \) . Lemma 16.4.5. Let \( H \) be a cyclic group of order \( n \), and assume that \( q \nmid n \) . Then \( {\mathbb{F}}_{q}\left\lbrack H\right\rbrack \) is a semisimple ring. Proof. Let \( {X}^{n} - 1 = \mathop{\prod }\limits_{{1 \leq i \leq g}}{P}_{i}^{{e}_{i}}\left( X\right) \) be the decomposition of \( {X}^{n} - 1 \) as a power product of distinct monic irreducible polynomials in \( {\mathbb{F}}_{q}\left\lbrack X\right\rbrack \) . Since \( q \nmid n \) the polynomial \( {X}^{n} - 1 \) has distinct roots in an algebraic closure of \( {\mathbb{F}}_{q} \), hence \( {e}_{i} = 1 \) for all \( i \) . Thus by the lemma \[ {\mathbb{F}}_{q}\left\lbrack H\right\rbrack \simeq {\mathbb{F}}_{q}\left\lbrack X\right\rbrack /\left( {\left( {{X}^{n} - 1}\right) {\mathbb{F}}_{q}\left\lbrack X\right\rbrack }\right) \simeq \mathop{\prod }\limits_{{1 \leq i \leq g}}{K}_{i}, \] where \( {K}_{i} = {\mathbb{F}}_{q}\left\lbrack X\right\rbrack /\left( {{P}_{i}\left( X\right) {\mathbb{F}}_{q}\left\lbrack X\right\rbrack }\right) \) is a field, so \( {\mathbb{F}}_{q}\left\lbrack H\right\rbrack \) is semisimple. The following proposition summarizes the results that we need. Proposition 16.4.6. Let \( R \) be a semisimple ring. Then: (1) Any R-module is semisimple. (2) Every exact sequence of \( R \) -modules is split. 
(3) For any \( R \) -module \( M \) there exists \( \alpha \in M \) such that \( {\operatorname{Ann}}_{R}\left( \alpha \right) = {\operatorname{Ann}}_{R}\left( M\right) \), so \( M \) contains the cyclic submodule \( {\alpha R} \) isomorphic to \( R/{\operatorname{Ann}}_{R}\left( M\right) \) . (4) If \( R \) and \( M \) are finite then \( \left| M\right| \geq \left| {R/{\operatorname{Ann}}_{R}\left( M\right) }\right| \) with equality if and only if \( M \) is cyclic. (5) Let \( M \) be a cyclic module. Every submodule \( {M}^{\prime } \) of \( M \) is also cyclic, \( {\operatorname{Ann}}_{R}\left( M\right) = {\operatorname{Ann}}_{R}\left( {M}^{\prime }\right) \cdot {\operatorname{Ann}}_{R}\left( {M/{M}^{\prime }}\right) \), and \( {\operatorname{Ann}}_{R}\left( {M}^{\prime }\right) \) and \( {\operatorname{Ann}}_{R}\left( {M/{M}^{\prime }}\right) \) are coprime ideals. ## 16.4.2 Preliminaries on the Plus Part Recall some notation. We let as always \( p \) and \( q \) be distinct odd primes, and we set \( K = \mathbb{Q}\left( {\zeta }_{p}\right) \) and \( G = \operatorname{Gal}\left( {K/\mathbb{Q}}\right) \), which is canonically isomorphic to \( {\left( \mathbb{Z}/p\mathbb{Z}\right) }^{ * } \) . We let \( {K}^{ + } = \mathbb{Q}\left( {{\zeta }_{p} + {\zeta }_{p}^{-1}}\right) \) be the maximal totally real subfield of \( K \), \( {G}^{ + } = \operatorname{Gal}\left( {{K}^{ + }/\mathbb{Q}}\right) = G/\langle \iota \rangle \) . We recall from Propositions 3.5.20 and 3.5.21 that \( U\left( K\right) = \left\langle {\zeta }_{p}\right\rangle U\left( {K}^{ + }\right) \) and that the natural map from \( {Cl}\left( {K}^{ + }\right) \) to \( {Cl}\left( K\right) \) is injective. Lemma 16.4.7. We have \( {Cl}\left( {K}^{ + }\right) \left\lbrack q\right\rbrack = {Cl}\left( K\right) {\left\lbrack q\right\rbrack }^{ + } \) . Proof.
By Proposition 3.5.21 we can write by abuse of notation \( {Cl}\left( {K}^{ + }\right) \left\lbrack q\right\rbrack \subset \) \( {Cl}\left( K\right) \left\lbrack q\right\rbrack \), and since evidently \( {Cl}\left( {K}^{ + }\right) \) is invariant by \( \iota \) we have \( {Cl}\left( {K}^{ + }\right) \left\lbrack q\right\rbrack \subset \) \( {Cl}\left( K\right) {\left\lbrack q\right\rbrack }^{ + } \) . Conversely, let \( \mathfrak{a} \) be a representative of an element of \( {Cl}\left( K\right) {\left\lbrack q\right\rbrack }^{ + } \) . Since \( {Cl}\left( K\right) \left\lbrack q\right\rbrack \) is an \( {\mathbb{F}}_{q}\left\lbrack G\right\rbrack \) -module and 2 is invertible in \( {\mathbb{F}}_{q} \), it follows that \( {Cl}\left( K\right) {\left\lbrack q\right\rbrack }^{ + } \) is equal to the kernel of multiplication by \( \left( {1 - \iota }\right) /2 \) (or by \( 1 - \iota \) ) from \( {Cl}\left( K\right) \left\lbrack q\right\rbrack \) to itself. Thus there exist \( \alpha \) and \( \beta \) in \( {K}^{ * } \) such that \( \mathfrak{a}\iota {\left( \mathfrak{a}\right) }^{-1} = \alpha {\mathbb{Z}}_{K} \) and \( {\mathfrak{a}}^{q} = \beta {\mathbb{Z}}_{K} \) . Let \( \mathfrak{b} \) be the ideal of \( {K}^{ + } \) defined by \( \mathfrak{b} = {\mathcal{N}}_{K/{K}^{ + }}\left( \mathfrak{a}\right) \) . We have \( \mathfrak{b}{\mathbb{Z}}_{K} = \mathfrak{a}\iota \left( \mathfrak{a}\right) \), hence \( {\mathfrak{b}}^{q}{\mathbb{Z}}_{K} = {\mathfrak{a}}^{q}\iota \left( {\mathfrak{a}}^{q}\right) = {\beta \iota }\left( \beta \right) {\mathbb{Z}}_{K} = {\mathcal{N}}_{K/{K}^{ + }}\left( \beta \right) {\mathbb{Z}}_{K} \) ; hence intersecting with \( {K}^{ + } \), we deduce that \( {\mathfrak{b}}^{q} = {\mathcal{N}}_{K/{K}^{ + }}\left( \beta \right) {K}^{ + } \), so that the class of \( \mathfrak{b} \) belongs to \( {Cl}\left( {K}^{ + }\right) \left\lbrack q\right\rbrack \) . 
Furthermore, setting \( m = \left( {q + 1}\right) /2 \) we compute that \[ {\mathfrak{b}}^{m}{\mathbb{Z}}_{K} = {\mathfrak{a}}^{m}\iota {\left( \mathfrak{a}\right) }^{m} = {\mathfrak{a}}^{m}{\left( \mathfrak{a}{\alpha }^{-1}\right) }^{m} = {\mathfrak{a}}^{q + 1}{\alpha }^{-m} = \mathfrak{a}\beta {\alpha }^{-m}, \] so the class of \( \mathfrak{a} \) is equal to the class of \( {\mathfrak{b}}^{m}{\mathbb{Z}}_{K} \), proving the lemma. Recall from Definition 16.2.4 that \( E = \left\{ {u{\pi }^{k}, u \in U\left( K\right), k \in \mathbb{Z}}\right\} = \) \( \mathbb{Z}{\left\lbrack {\zeta }_{p},1/p\right\rbrack }^{ * } \) . This is a \( \mathbb{Z}\left\lbrack G\right\rbrack \) -module, so that \( E/{E}^{q} \) is an \( {\mathbb{F}}_{q}\left\lbrack G\right\rbrack \) -module. By Lemma 3.5.19 and the fact that \( \pi = 1 - {\zeta }_{p} \), for any \( x \in \bar{E} \) the expression \( \iota \left( x\right) /x \) is a \( {2p} \) th root of unity, and since \( q \) is coprime to \( {2p} \), it is a \( q \) th power. It follows that \( E/{E}^{q} \) is pointwise invariant by \( \iota \), so that it is in fact an \( {\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack \) -module. The following lemma describes its structure very precisely when \( p ≢ 1\left( {\;\operatorname{mod}\;q}\right) \) . Lemma 16.4.8. Assume that \( p ≢ 1\left( {\;\operatorname{mod}\;q}\right) \) . (1) We have \( \left| {E/{E}^{q}}\right| = {q}^{\left( {p - 1}\right) /2} \) . (2) If we set \( W = U\left( {K}^{ + }\right) /\{ \pm 1\} \), then \( {\operatorname{Ann}}_{\mathbb{Z}\left\lbrack {G}^{ + }\right\rbrack }\left( W\right) = s\mathbb{Z}\left\lbrack {G}^{ + }\right\rbrack \), where \( s = \) \( \mathop{\sum }\limits_{{\sigma \in {G}^{ + }}}\sigma . \) (3) We have \( {\operatorname{Ann}}_{{\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack }\left( {W/{W}^{q}}\right) = s{\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack \) . 
(4) We have \( {\operatorname{Ann}}_{{\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack }\left( {E/{E}^{q}}\right) = 0 \) . (5) \( E/{E}^{q} \) is a free \( {\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack \) -module of rank 1 . Proof. (1). The map \( \left( {u, k}\right) \mapsto u{\pi }^{k} \) from \( U\left( K\right) \times \mathbb{Z} \) to \( E \) is an isomorphism since \( k \) is defined uniquely as the \( \mathfrak{p} \) -adic valuation of \( u{\pi }^{k} \), hence by Dirichlet’s theorem, as an abelian group \( E \simeq {\mu }_{2p} \times {\mathbb{Z}}^{\left( {p - 1}\right) /2} \), since the rank of the group of units of \( K \) is equal to \( \left( {p - 3}\right) /2 \) . Since \( {2p} \) is coprime to \( q \) it follows that \( E/{E}^{q} \simeq {\left( \mathbb{Z}/q\mathbb{Z}\right) }^{\left( {p - 1}\right) /2} \), proving (1). (2). Let \( \mathop{\sum }\limits_{{\sigma \in {G}^{ + }}}{a}_{\sigma }\sigma \) belong to \( {\operatorname{Ann}}_{\mathbb{Z}\left\lbrack {G}^{ + }\right\rbrack }\left( W\right) \), in other words be such that \( \mathop{\prod }\limits_{{\sigma \in {G}^{ + }}}\sigma {\left( \varepsilon \right) }^{{a}_{\sigma }} = \pm 1 \) for all \( \varepsilon \in U\left( {K}^{ + }\right) \) . Let \( {\left( {\varepsilon }_{i}\right) }_{1 \leq i \leq \left( {p - 3}\right) /2} \) be a system of fundamental units of \( {K}^{ + } \) . Taking logarithms we have \( \mathop{\sum }\limits_{{\sigma \in {G}^{ + }}}{a}_{\sigma }\log \left( \left| {\sigma \left( {\varepsilon }_{i}\right) }\right| \right) = 0 \) for all \( i \) . On the other hand, by Dirichlet’s theorem the \( \left( {\left( {p - 3}\right) /2}\right) \times \left( {\left( {p - 1}\right) /2}\right) \) matrix \( {\left( \log \left( \left| \sigma \left( {\varepsilon }_{i}\right) \right| \right) \right) }_{i \leq \left( {p - 3}\right) /2,\sigma \in {G}^{ + }} \) has rank \( \left( {p - 3}\right) /2 \), so its kernel has dimension 1.
Since \( \mathop{\sum }\limits_{{\sigma \in {G}^{ + }}}\log \left( \left| {\sigma \left( {\varepsilon }_{i}\right) }\right| \right) = 0 \), this kernel is generated over \( \mathbb{R} \) by the column vector having all \( \left( {p - 1}\right) /2 \) coordinates equal to 1 . It follows that \( {a}_{\sigma } = a \) for all \( \sigma \), hence that \( \mathop{\sum }\limits_{{\sigma \in {G}^{ + }}}{a}_{\sigma }\sigma = a \cdot s \), as claimed. (3). By Lemma 16.4.3 applied to \( H = {G}^{ + } \), we see that if \( p ≢ 1\left( {\;\operatorname{mod}\;q}\right) \) the ring \( {\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack /\left( {s{\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack }\right) \) has no nonzero nilpotent elements. Set temporarily \( I = s{\mathbb{F}}_{q}\left\lbrack {G}^{ + }\right\rbrack \) .
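Lemma 16.4.5 can be checked computationally in small cases. The sketch below (hypothetical parameters \( n = 5 \), \( q = 11 \), chosen so that \( q \nmid n \)) factors \( {X}^{n} - 1 \) over \( {\mathbb{F}}_{q} \) with SymPy and confirms that every irreducible factor occurs with exponent 1, so \( {\mathbb{F}}_{q}\left\lbrack H\right\rbrack \) is a finite product of fields.

```python
from sympy import Poly, symbols

# Hypothetical small case of Lemma 16.4.5: H cyclic of order n = 5, q = 11.
# Since q does not divide n, X^n - 1 is squarefree over F_q, hence
# F_q[H] ~ F_q[X]/(X^n - 1) is a finite product of fields (semisimple).
x = symbols('x')
n, q = 5, 11
factors = Poly(x**n - 1, x, modulus=q).factor_list()[1]

assert all(e == 1 for _, e in factors)           # squarefree: all e_i = 1
assert sum(f.degree() for f, _ in factors) == n  # degrees add up to n
print(len(factors), "irreducible factors, all with exponent 1")
```

Here \( 5 \mid q - 1 = 10 \), so \( {X}^{5} - 1 \) splits into five distinct linear factors and the group ring is a product of five copies of \( {\mathbb{F}}_{11} \).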
113_Topological Groups
Definition 26.9
Definition 26.9. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures. A partial isomorphism of \( \mathfrak{A} \) into \( \mathfrak{B} \) is a one-one function \( f \) mapping a subset of \( A \) into \( B \) such that if \( m \leq \left| {\operatorname{Dmn}f}\right| \), \( \varphi \) is an atomic formula, \( \operatorname{Fv}\varphi \subseteq \left\{ {{v}_{0},\ldots ,{v}_{m - 1}}\right\} \), and \( x \in {}^{m}\operatorname{Dmn}f \), then \( \mathfrak{A} \vDash \varphi \left\lbrack x\right\rbrack \) iff \( \mathfrak{B} \vDash \varphi \left\lbrack {f \circ x}\right\rbrack \) . For any \( \alpha \in \omega \cup \{ \omega \} \), an \( \alpha \) -system of partial isomorphisms of \( \mathfrak{A} \) into \( \mathfrak{B} \) is a system \( \left\langle {{I}_{m} : m \in \alpha }\right\rangle \) satisfying the following conditions: (i) each \( {I}_{m} \) is a nonempty set of partial isomorphisms of \( \mathfrak{A} \) into \( \mathfrak{B} \) ; (ii) if \( m + 1 \in \alpha \), then \( {I}_{m + 1} \subseteq {I}_{m} \) ; (iii) if \( m + 1 \in \alpha \), \( f \in {I}_{m + 1} \), and \( a \in A \) (resp. \( b \in B \) ), then there is a \( g \in {I}_{m} \) such that \( f \subseteq g \) and \( a \in \operatorname{Dmn}g \) (resp. \( b \in \operatorname{Rng}g \) ). In view of the above lemmas, the following lemma shows that elementary equivalence implies this new notion. Lemma 26.10. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures and let \( n \in \omega \) . If there is an \( \left( {n + 1}\right) \) -sequence for \( \mathfrak{A} \) and \( \mathfrak{B} \), then there is an \( \left( {n + 1}\right) \) -system of partial isomorphisms of \( \mathfrak{A} \) into \( \mathfrak{B} \) . Proof. Let \( \left\langle {{J}_{m} : m \in n + 1}\right\rangle \) be an \( \left( {n + 1}\right) \) -sequence for \( \mathfrak{A},\mathfrak{B} \) .
For each \( m \in n + 1 \) we set \[ {I}_{m} = \left\{ {\left\{ {\left( {{x}_{i},{y}_{i}}\right) : i < k}\right\} : k \leq n - m\text{ and }x{J}_{k}y}\right\} . \] Thus \( {26.9}\left( {ii}\right) \) is obvious. By \( {26.6}\left( {ii}\right) \left( 2\right) ,\left( 3\right) \) each \( {J}_{k} \) is nonempty, so each \( {I}_{m} \) is nonempty. Also, from \( {26.6}\left( {ii}\right) \left( 2\right) ,\left( 3\right) ,\left( 4\right) \) we infer that for each \( k \leq n \), if \( x{J}_{k}y \) then \( \mathfrak{A} \vDash \varphi \left\lbrack x\right\rbrack \) iff \( \mathfrak{B} \vDash \varphi \left\lbrack y\right\rbrack \), for each atomic formula \( \varphi \) with variables among \( {v}_{0},\ldots ,{v}_{k - 1} \) . It follows that each \( {I}_{m} \) is a set of partial isomorphisms of \( \mathfrak{A} \) into \( \mathfrak{B} \) . Finally, \( {26.6}\left( {ii}\right) \left( 3\right) \) immediately yields (iii). Our final equivalent for elementary equivalence is not quite such a simple reformulation of the relations \( { \equiv }_{m} \) . It is formulated in game terminology, and we shall first give an intuitive account of the game. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures, and let two players I and II be given. Player I begins the game by picking a positive integer \( n \), an \( \varepsilon \in 2 \), and if \( \varepsilon = 0 \) an element \( {a}_{0} \) of \( A \), while if \( \varepsilon = 1 \) an element \( {b}_{0} \) of \( B \) . The game continues with II and I moving in turn. At the \( i \) th move of \( I \), he chooses an \( \varepsilon \in 2 \), and if \( \varepsilon = 0 \) an element \( {a}_{i} \) of \( A \) while if \( \varepsilon = 1 \) he chooses an element \( {b}_{i} \) of \( B \) . Then II chooses an element \( {b}_{i} \) of \( B \) if \( \varepsilon = 0 \), or an element \( {a}_{i} \) of \( A \) if \( \varepsilon = 1 \) . 
The game ends after \( {2n} \) moves, at which point two sequences \( a \in {}^{n}A \) and \( b \in {}^{n}B \) have been constructed. The rule of the game is that II wins provided that \( \left\{ {\left( {{a}_{i},{b}_{i}}\right) : i < n}\right\} \) is a partial isomorphism of \( \mathfrak{A} \) into \( \mathfrak{B} \) . As we shall show, elementary equivalence is equivalent to the existence of a winning strategy for II. It is clear intuitively what we mean by "winning strategy": there must exist a completely deterministic method for II to make a move, given what has happened so far in the game, so that at the end II always wins. The precise definition runs as follows: Definition 26.11. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures. We say that II has a winning strategy for the m-elementary game over \( \mathfrak{A},\mathfrak{B} \) provided that there is a function \( F \) with the following two properties: (i) If \( k < m, x \in {}^{k}A \), and \( y \in {}^{k}B \), then \( F\left( {x, y,0, a}\right) \in B \) for each \( a \in A \) and \( F\left( {x, y,1, b}\right) \in A \) for each \( b \in B \) . (ii) For every \( z \in {}^{m}\left\lbrack {\left( {\{ 0\} \times A}\right) \cup \left( {\{ 1\} \times B}\right) }\right\rbrack \), define \( x \in {}^{m}A \) and \( y \in {}^{m}B \) by induction as follows. Suppose \( k < m \) and \( x \upharpoonright k, y \upharpoonright k \) have been defined. If \( {\left( zk\right) }_{0} = 0 \), let \( {x}_{k} = {\left( zk\right) }_{1} \) and \( {y}_{k} = F\left( {x \upharpoonright k, y \upharpoonright k,0,{x}_{k}}\right) \) . If \( {\left( zk\right) }_{0} = 1 \) , let \( {y}_{k} = {\left( zk\right) }_{1} \) and \( {x}_{k} = F\left( {x \upharpoonright k, y \upharpoonright k,1,{y}_{k}}\right) \) . 
Then for the so defined sequences \( x \) and \( y \) it is the case that \( \left\{ {\left( {{x}_{i},{y}_{i}}\right) : i < m}\right\} \) is a partial isomorphism of \( \mathfrak{A} \) into \( \mathfrak{B} \) . Lemma 26.12. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures, and let \( m \) be a positive integer. If there is an \( \left( {m + 1}\right) \) -system of partial isomorphisms of \( \mathfrak{A} \) into \( \mathfrak{B} \), then player II has a winning strategy for the m-elementary game over \( \mathfrak{A},\mathfrak{B} \) . Proof. Assume the hypothesis of 26.12, and let \( \left\langle {{I}_{k} : k \leq m}\right\rangle \) be an \( \left( {m + 1}\right) \) -system of partial isomorphisms of \( \mathfrak{A} \) into \( \mathfrak{B} \) . We define a function satisfying \( {26.11}\left( i\right) \) as follows. Let a well-ordering of \( A, B \), and all sets \( {I}_{k} \) for \( k < m \) be given. Let \( k < m, x \in {}^{k}A \), and \( y \in {}^{k}B \) . Set \( f = \left\{ {\left( {{x}_{i},{y}_{i}}\right) : i < k}\right\} \) . Let \( a \in A \) and \( b \in B \) . If there is no \( g \in {I}_{m - k} \) such that \( f \subseteq g \), we let \( F\left( {x, y,0, a}\right) \) be the first element of \( B \) and \( F\left( {x, y,1, b}\right) \) be the first element of \( A \) . Now suppose there is such a \( g \), and let \( g \) be the first such. By 26.9(iii) let \( h \) be the first member of \( {I}_{m - k - 1} \) such that \( g \subseteq h \) and \( a \in \operatorname{Dmn}h \), and let \( l \) be the first member of \( {I}_{m - k - 1} \) such that \( g \subseteq l \) and \( b \in \operatorname{Rng}l \) . Then we set \( F\left( {x, y,0, a}\right) = {ha} \) and \( F\left( {x, y,1, b}\right) = {l}^{-1}b \) . Thus \( F \) is constructed so that \( {26.11}\left( i\right) \) holds.
To check \( {26.11}\left( {ii}\right) \), let \( z \in {}^{m}\left\lbrack {\left( {\{ 0\} \times A}\right) \cup \left( {\{ 1\} \times B}\right) }\right\rbrack \) be given, and construct \( x \) and \( y \) as in \( {26.11}\left( {ii}\right) \) . It is straightforward to check by induction on \( k \) that whenever \( k \leq m \) there is a \( g \in {I}_{m - k} \) such that \( \left\{ {\left( {{x}_{i},{y}_{i}}\right) : i < k}\right\} \subseteq g \) . Applying this for \( k = m \), we obtain the conclusion of \( {26.11}\left( {ii}\right) \) . Our final lemma completes the circle of implications between the above notions. Lemma 26.13. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures, and let \( m \) be a positive integer. If player II has a winning strategy for the m-elementary game over \( \mathfrak{A},\mathfrak{B} \), then \( \mathfrak{A} \vDash \varphi \) iff \( \mathfrak{B} \vDash \varphi \) whenever \( \varphi \) is a prenex sentence with \( m \) initial quantifiers. Proof. Let \( F \) be as in 26.11, and for each \( z \in {}^{m}\left\lbrack {\left( {\{ 0\} \times A}\right) \cup \left( {\{ 1\} \times B}\right) }\right\rbrack \) let \( {x}_{z} \) and \( {y}_{z} \) be constructed as in 26.11(ii). Let \( \varphi \) be a sentence \( {\mathbf{Q}}_{0}{v}_{0}\cdots {\mathbf{Q}}_{m - 1}{v}_{m - 1}\psi \) where \( \psi \) is quantifier free and each \( {\mathbf{Q}}_{i} \) is \( \forall \) or \( \exists \) . We now prove the following statement by downward induction on \( k \) from \( m \) to 0 .
for all \( k \leq m \) and all \( z \in {}^{m}\left\lbrack {\left( {\{ 0\} \times A}\right) \cup \left( {\{ 1\} \times B}\right) }\right\rbrack \) ,

(1) \( \mathfrak{A} \vDash {\mathbf{Q}}_{k}{v}_{k}\cdots {\mathbf{Q}}_{m - 1}{v}_{m - 1}\psi \left\lbrack {{x}_{z}0,\ldots ,{x}_{z}\left( {k - 1}\right) }\right\rbrack \) iff \( \mathfrak{B} \vDash {\mathbf{Q}}_{k}{v}_{k}\cdots {\mathbf{Q}}_{m - 1}{v}_{m - 1}\psi \left\lbrack {{y}_{z}0,\ldots ,{y}_{z}\left( {k - 1}\right) }\right\rbrack \) .

The case \( k = m \) is true because of the conclusion of \( {26.11}\left( {ii}\right) \) . Now suppose that (1) is true for \( k + 1 \) ; we prove it for \( k \) . Suppose that \( z \in {}^{m}\left\lbrack {\left( {\{ 0\} \times A}\right) \cup \left( {\{ 1\} \times B}\right) }\right\rbrack \) . We take only the case \( {\mathbf{Q}}_{k} = \forall \), and argue from satisfaction in \( \mathfrak{A} \) to satisfaction in \( \mathfrak{B} \) . Assume, then, that \( \mathfrak{A} \vDash {\mathbf{Q}}
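The atomic-formula condition in Definition 26.9 is easy to check mechanically for finite pieces of concrete structures. The sketch below (hypothetical structures: integers with the usual order, so the atomic formulas to preserve are equalities and order comparisons) tests whether a finite map is a partial isomorphism; this is exactly the winning condition for player II in the m-elementary game.

```python
def is_partial_iso(f):
    """Check that f (a dict from integers to integers) preserves the
    atomic formulas v_i = v_j and v_i < v_j in both directions, i.e. is
    a partial isomorphism between linear orders in the sense of
    Definition 26.9 (one-one-ness follows from preserving equality)."""
    items = list(f.items())
    for a1, b1 in items:
        for a2, b2 in items:
            if (a1 == a2) != (b1 == b2):   # equality must transfer
                return False
            if (a1 < a2) != (b1 < b2):     # order must transfer
                return False
    return True

print(is_partial_iso({0: 10, 3: 17, 5: 40}))  # True: order-preserving
print(is_partial_iso({0: 17, 3: 10}))         # False: order reversed
```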
1343_[鄂维南&李铁军&Vanden-Eijnden] Applied Stochastic Analysis -GSM199
Definition 1.10
Definition 1.10. A random variable \( X \) is an \( \mathcal{F} \) -measurable real-valued function \( X : \Omega \rightarrow \mathbb{R} \) ; i.e., for any \( B \in \mathcal{R},{X}^{-1}\left( B\right) \in \mathcal{F} \) . Definition 1.11. The distribution of the random variable \( X \) is a probability measure \( \mu \) on \( \mathbb{R} \), defined for any set \( B \in \mathcal{R} \) by (1.11) \[ \mu \left( B\right) = \mathbb{P}\left( {X \in B}\right) = \mathbb{P} \circ {X}^{-1}\left( B\right) . \] In particular, we define the distribution function \( F\left( x\right) = \mathbb{P}\left( {X \leq x}\right) \) when \( B = ( - \infty, x\rbrack \) . If there exists an integrable function \( \rho \left( x\right) \) such that (1.12) \[ \mu \left( B\right) = {\int }_{B}\rho \left( x\right) {dx} \] for any \( B \in \mathcal{R} \), then \( \rho \) is called the probability density function (PDF) of \( X \) . Here \( \rho \left( x\right) = {d\mu }/{dm} \) is the Radon-Nikodym derivative of \( \mu \left( {dx}\right) \) with respect to the Lebesgue measure \( m\left( {dx}\right) \) if \( \mu \left( {dx}\right) \) is absolutely continuous with respect to \( m\left( {dx}\right) \) ; i.e., for any set \( B \in \mathcal{R} \), if \( m\left( B\right) = 0 \), then \( \mu \left( B\right) = 0 \) (see also Section C of the appendix) [Bil79]. In this case, we write \( \mu \ll m \) . Definition 1.12. The expectation of a random variable \( X \) is defined as (1.13) \[ \mathbb{E}X = {\int }_{\Omega }X\left( \omega \right) \mathbb{P}\left( {d\omega }\right) = {\int }_{\mathbb{R}}{x\mu }\left( {dx}\right) \] if the integrals are well-defined. 
The variance of \( X \) is defined as (1.14) \[ \operatorname{Var}\left( X\right) = \mathbb{E}{\left( X - \mathbb{E}X\right) }^{2} \] For two random variables \( X \) and \( Y \), we can define their covariance as \( \left( {1.15}\right) \) \[ \operatorname{Cov}\left( {X, Y}\right) = \mathbb{E}\left( {X - \mathbb{E}X}\right) \left( {Y - \mathbb{E}Y}\right) \] \( X \) and \( Y \) are called uncorrelated if \( \operatorname{Cov}\left( {X, Y}\right) = 0 \) . All of the above definitions can be extended to the vectorial case in which \( \mathbf{X} = {\left( {X}_{1},{X}_{2},\ldots ,{X}_{d}\right) }^{T} \in {\mathbb{R}}^{d} \) is a random vector and each component \( {X}_{k} \) is a random variable. In this case, the covariance matrix of \( \mathbf{X} \) is defined as (1.16) \[ \operatorname{Cov}\left( \mathbf{X}\right) = \mathbb{E}\left( {\mathbf{X} - \mathbb{E}\mathbf{X}}\right) {\left( \mathbf{X} - \mathbb{E}\mathbf{X}\right) }^{T}. \] Definition 1.13. For any \( p \geq 1 \), the space \( {L}^{p}\left( \Omega \right) \) (or \( {L}_{\omega }^{p} \) ) consists of random variables whose \( p \) th-order moment is finite: (1.17) \[ {L}^{p}\left( \Omega \right) = \left\{ {\mathbf{X}\left( \omega \right) : \mathbb{E}{\left| \mathbf{X}\right| }^{p} < \infty }\right\} . \] For \( \mathbf{X} \in {L}^{p}\left( \Omega \right) \), let (1.18) \[ \parallel \mathbf{X}{\parallel }_{p} = {\left( \mathbb{E}{\left| \mathbf{X}\right| }^{p}\right) }^{1/p},\;p \geq 1. \] Theorem 1.14. (i) Minkowski inequality. \[ \parallel \mathbf{X} + \mathbf{Y}{\parallel }_{p} \leq \parallel \mathbf{X}{\parallel }_{p} + \parallel \mathbf{Y}{\parallel }_{p},\;p \geq 1,\mathbf{X},\mathbf{Y} \in {L}^{p}\left( \Omega \right) \] (ii) Hölder inequality. 
\( \mathbb{E}\left| \left( {\mathbf{X},\mathbf{Y}}\right) \right| \leq \parallel \mathbf{X}{\parallel }_{p}\parallel \mathbf{Y}{\parallel }_{q},\;p > 1,1/p + 1/q = 1,\mathbf{X} \in {L}^{p}\left( \Omega \right) ,\mathbf{Y} \in {L}^{q}\left( \Omega \right) , \) where \( \left( {\mathbf{X},\mathbf{Y}}\right) \) denotes the standard scalar product in \( {\mathbb{R}}^{d} \) . (iii) Schwarz inequality. \[ \mathbb{E}\left| \left( {\mathbf{X},\mathbf{Y}}\right) \right| \leq \parallel \mathbf{X}{\parallel }_{2}\parallel \mathbf{Y}{\parallel }_{2} \] Obviously the Schwarz inequality is the special case \( p = q = 2 \) of the Hölder inequality. The proof of these inequalities can be found, for example, in Chapter 2 of [Shi96]. It also follows that \( \parallel \cdot {\parallel }_{p} \) is a norm. One can further prove that \( {L}^{p}\left( \Omega \right) \) is a Banach space and \( {L}^{2}\left( \Omega \right) \) is a Hilbert space with inner product (1.19) \[ {\left( \mathbf{X},\mathbf{Y}\right) }_{{L}_{\omega }^{2}} = \mathbb{E}\left( {\mathbf{X},\mathbf{Y}}\right) \] Lemma 1.15 (Chebyshev’s inequality). Let \( \mathbf{X} \) be a random variable such that \( \mathbb{E}{\left| \mathbf{X}\right| }^{p} < \infty \) for some \( p > 0 \) . Then \( \left( {1.20}\right) \) \[ \mathbb{P}\{ \left| \mathbf{X}\right| \geq \lambda \} \leq \frac{1}{{\lambda }^{p}}\mathbb{E}{\left| \mathbf{X}\right| }^{p} \] for any positive constant \( \lambda \) . Proof. For any \( \lambda > 0 \) , \[ \mathbb{E}{\left| \mathbf{X}\right| }^{p} = {\int }_{{\mathbb{R}}^{d}}{\left| \mathbf{x}\right| }^{p}\mu \left( {d\mathbf{x}}\right) \geq {\int }_{\left| \mathbf{x}\right| \geq \lambda }{\left| \mathbf{x}\right| }^{p}\mu \left( {d\mathbf{x}}\right) \geq {\lambda }^{p}{\int }_{\left| \mathbf{x}\right| \geq \lambda }\mu \left( {d\mathbf{x}}\right) = {\lambda }^{p}\mathbb{P}\left( {\left| \mathbf{X}\right| \geq \lambda }\right) .
\] It is straightforward to generalize the above estimate to any nonnegative increasing function \( f\left( x\right) \), which gives \( \mathbb{P}\left( {\left| \mathbf{X}\right| \geq \lambda }\right) \leq \mathbb{E}f\left( \left| \mathbf{X}\right| \right) /f\left( \lambda \right) \) if \( f\left( \lambda \right) > 0 \) . Lemma 1.16 (Jensen’s inequality). Let \( \mathbf{X} \) be a random variable such that \( \mathbb{E}\left| \mathbf{X}\right| < \infty \) and \( \phi : \mathbb{R} \rightarrow \mathbb{R} \) is a convex function such that \( \mathbb{E}\left| {\phi \left( \mathbf{X}\right) }\right| < \infty \) . Then (1.21) \[ \mathbb{E}\phi \left( \mathbf{X}\right) \geq \phi \left( {\mathbb{E}\mathbf{X}}\right) \] This follows directly from the definition of convex functions. Readers can also refer to [Chu01] for the details. Below we list some typical continuous distributions. Example 1.17 (Uniform distribution). The uniform distribution on a domain \( B \) (in \( {\mathbb{R}}^{d} \) ) is defined by the probability density function: \[ \rho \left( \mathbf{x}\right) = \left\{ \begin{array}{ll} \frac{1}{\operatorname{vol}\left( B\right) }, & \text{ if }\mathbf{x} \in B, \\ 0, & \text{ otherwise. } \end{array}\right. \] In one dimension if \( B = \left\lbrack {0,1}\right\rbrack \) (denoted as \( \mathcal{U}\left\lbrack {0,1}\right\rbrack \) later), this reduces to \[ \rho \left( x\right) = \left\{ \begin{array}{ll} 1, & \text{ if }x \in \left\lbrack {0,1}\right\rbrack , \\ 0, & \text{ otherwise. } \end{array}\right. \] For the uniform distribution on \( \left\lbrack {0,1}\right\rbrack \), we have \[ \mathbb{E}X = \frac{1}{2},\;\operatorname{Var}\left( X\right) = \frac{1}{12}. \] Example 1.18 (Exponential distribution).
The exponential distribution \( \mathcal{E}\left( \lambda \right) \) is defined by the probability density function: \[ \rho \left( x\right) = \left\{ \begin{array}{ll} 0, & \text{ if }x < 0, \\ \lambda {e}^{-{\lambda x}}, & \text{ if }x \geq 0. \end{array}\right. \] The mean and variance of \( \mathcal{E}\left( \lambda \right) \) are (1.22) \[ \mathbb{E}X = \frac{1}{\lambda },\;\operatorname{Var}\left( X\right) = \frac{1}{{\lambda }^{2}}. \] As an example, the waiting time of a Poisson process with rate \( \lambda \) is exponentially distributed with parameter \( \lambda \) . Example 1.19 (Normal distribution). The one-dimensional normal distribution (also called Gaussian distribution) \( N\left( {\mu ,{\sigma }^{2}}\right) \) is defined by the probability density function: (1.23) \[ \rho \left( x\right) = \frac{1}{\sqrt{{2\pi }{\sigma }^{2}}}\exp \left( {-\frac{1}{2{\sigma }^{2}}{\left( x - \mu \right) }^{2}}\right) \] with mean \( \mu \) and variance \( {\sigma }^{2} \) . If \( \mathbf{\Sigma } \) is an \( n \times n \) symmetric positive definite matrix and \( \mathbf{\mu } \) is a vector in \( {\mathbb{R}}^{n} \), we can also define the \( n \) -dimensional normal distribution \( N\left( {\mathbf{\mu },\mathbf{\Sigma }}\right) \) through the density (1.24) \[ \rho \left( \mathbf{x}\right) = \frac{1}{{\left( 2\pi \right) }^{n/2}{\left( \det \mathbf{\Sigma }\right) }^{1/2}}\exp \left( {-\frac{1}{2}{\left( \mathbf{x} - \mathbf{\mu }\right) }^{T}{\mathbf{\Sigma }}^{-1}\left( {\mathbf{x} - \mathbf{\mu }}\right) }\right) . \] In this case, we have \[ \mathbb{E}\mathbf{X} = \mathbf{\mu },\;\operatorname{Cov}\left( \mathbf{X}\right) = \mathbf{\Sigma }. \] The normal distribution is the most important probability distribution. Random variables with normal distribution are also called Gaussian random variables.
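A quick Monte Carlo sanity check of (1.24) can be run with NumPy (hypothetical parameters \( \mathbf{\mu} \) and \( \mathbf{\Sigma} \)): the empirical mean and covariance of a large sample from \( N\left( {\mathbf{\mu },\mathbf{\Sigma }}\right) \) should approach \( \mathbb{E}\mathbf{X} = \mathbf{\mu} \) and \( \operatorname{Cov}\left( \mathbf{X}\right) = \mathbf{\Sigma} \).

```python
import numpy as np

# Sample from a 2-d normal distribution N(mu, Sigma) with hypothetical
# parameters and compare the empirical moments with mu and Sigma.
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
X = rng.multivariate_normal(mu, Sigma, size=200_000)

print(np.round(X.mean(axis=0), 2))           # close to mu
print(np.round(np.cov(X, rowvar=False), 2))  # close to Sigma
```

With \( N = {200{,}000} \) samples the entrywise sampling error is of order \( 1/\sqrt{N} \approx 0.002 \), so both estimates match the parameters to about two decimal places.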
In the case of degeneracy, i.e., when the covariance matrix \( \mathbf{\Sigma } \) is not invertible (which corresponds to the case that some components lie in the subspace spanned by the other components), we need to define the Gaussian distribution via characteristic functions (see Section 1.9). Example 1.20 (Gibbs distribution). In equilibrium statistical mechanics, we are concerned with a probability distribution \( \pi \) over a state space \( S \) . In the case of an \( n \) -particle system with continuous states, we have \( \mathbf{x} = \) \( \left( {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{n},{\mathbf{p}}_{1},\ldots ,{\mathbf{p}}_{n}}\right) \in S = {\mathbb{R}}^{6n} \), where \( {\mathbf{x}}_{k} \) and \( {\mathbf{p}}_{k} \) are the position and momentum of the \( k \) th particle, respectively. The PDF \( \pi \left( \mathbf{x}\right) \), called the Gibbs distribution, has a specific form: (1.25) \[ \pi \left( \mathbf{x}\right) = \frac{1}{Z}{e}^{-{\beta H}\left( \mathbf{x}\right) },\;\mathbf{x} \in {\
1139_(GTM44)Elementary Algebraic Geometry
Definition 3.6. Let \( V \subset {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) be irreducible, and let \( {K}_{V} \) be \( V \) ’s function field; that is, the set of quotients of equal-degree forms in \( {x}_{1},\ldots ,{x}_{n + 1} \), where \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n + 1}}\right\rbrack = \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack /\mathrm{J}\left( V\right) \) . If \( W \) is an irreducible subvariety of \( V \), then the set of all elements of \( {K}_{V} \) which can be written as \( p/q \) , where \( p \) and \( q \) are forms in \( {x}_{1},\ldots ,{x}_{n + 1} \) of the same degree, and where \( q \) is not identically zero on \( W \), forms a subring of \( {K}_{V} \) ; it is called the local ring of \( V \) at \( W \), and is denoted by \( \mathfrak{o}\left( {W;V}\right) \) . Remark 3.7. If \( W \subset V \) are irreducible varieties in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \), and if \( R \) is the coordinate ring of any dehomogenization \( \mathrm{D}\left( V\right) \) of \( V \) (where \( W \) is not contained in the hyperplane at infinity), then \( \mathfrak{o}\left( {W;V}\right) \) is the localization \( {R}_{\mathfrak{p}} = \) \( \mathfrak{o}\left( {\mathrm{D}\left( W\right) ;\mathrm{D}\left( V\right) }\right) \) of \( R \) at \( \mathrm{D}\left( W\right) = \mathrm{V}\left( \mathfrak{p}\right) \) ; this follows from the fact that if we without loss of generality dehomogenize at \( {X}_{n + 1} \), then \[ \frac{p\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) }{q\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) } = \frac{p\left( {{x}_{1}/{x}_{n + 1},\ldots ,1}\right) }{q\left( {{x}_{1}/{x}_{n + 1},\ldots ,1}\right) }. \] The left-hand side is an element of \( \mathfrak{o}\left( {W;V}\right) \), while the right-hand side belongs to \( {R}_{\mathfrak{p}} \) .
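The identity in Remark 3.7 is easy to verify numerically for particular equal-degree forms: a form \( p \) of degree \( d \) satisfies \( p\left( {{x}_{1},{x}_{2}}\right) = {x}_{2}^{d}\,p\left( {{x}_{1}/{x}_{2},1}\right) \), so the common factor \( {x}_{2}^{d} \) cancels in the quotient. A sketch with two illustrative degree-2 forms, dehomogenizing at \( {x}_{2} \):

```python
# Two homogeneous forms of the same degree (degree 2, illustrative choices).
def p(x1, x2):
    return x1**2 + 3.0 * x1 * x2

def q(x1, x2):
    return x1 * x2 + 2.0 * x2**2

# At points where q does not vanish, p/q equals its dehomogenization at x2.
for x1, x2 in [(1.0, 2.0), (-3.0, 0.5), (2.5, -1.0)]:
    lhs = p(x1, x2) / q(x1, x2)
    u = x1 / x2                      # affine coordinate x1/x2
    rhs = p(u, 1.0) / q(u, 1.0)
    assert abs(lhs - rhs) < 1e-9
```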
Many of the basic algebraic and geometric relations between \( R \) and \( {R}_{\mathfrak{p}} \) may be compactly expressed using a double sequence, as in Diagrams 2 and 3 of Chapter III. We explore this next. Again, for expository purposes we select a fixed variety \( V \subset {\mathbb{C}}_{{x}_{1},\ldots ,{x}_{n}} \) having \( R = \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) as coordinate ring, and we let \( W = \mathbf{V}\left( \mathfrak{p}\right) \) be an arbitrary, fixed irreducible subvariety of \( V \) . Our sequence is given in Diagram 1. ![9396b131-9501-41be-b2cf-577fd90ab693_247_0.jpg](images/9396b131-9501-41be-b2cf-577fd90ab693_247_0.jpg) Diagram 1. In this diagram, \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) denotes the lattice \( \left( {\mathcal{I}\left( {R}_{\mathfrak{p}}\right) , \subset ,\cap , + }\right) \) of ideals of \( {R}_{\mathfrak{p}} \) and \( \mathcal{J}\left( {R}_{\mathfrak{p}}\right) \) denotes the lattice \( \left( {\mathcal{J}\left( {R}_{\mathfrak{p}}\right) , \subset ,\cap , + }\right) \) of closed ideals of \( {R}_{\mathfrak{p}} \) . Closure in \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) is with respect to the radical of Definition 1.1 of Chapter III; by Lemma 5.7 of Chapter III the radical of an ideal \( \mathfrak{a} \) in \( {R}_{\mathfrak{p}} \) will be seen to be the intersection of all prime ideals of \( {R}_{\mathfrak{p}} \) which contain \( \mathfrak{a} \), since \( {R}_{\mathfrak{p}} \) is Noetherian (Lemma 3.9). This radical is not in general the intersection of the \( \mathfrak{a} \) -containing maximal ideals of \( {R}_{\mathfrak{p}} \), since \( {R}_{\mathfrak{p}} \) has but one maximal ideal. 
Continuing the explanation of symbols in Diagram 1, \( \mathcal{G}\left( {R}_{\mathfrak{p}}\right) \) denotes the lattice \( \left( {\mathcal{G}\left( {R}_{\mathfrak{p}}\right) , \subset ,\cap , \cup }\right) \) of all \( {V}_{W} \) where \( V \in \mathcal{I} \) and \( W \) is fixed, with \( \subset , \cap \), and \( \cup \) as in Definition 3.3. The letter \( \mathcal{G} \) reminds us that these ordered pairs \( {V}_{W} \) are identified with germs. (We remark that there exists an analogous sequence at the analytic level, where one uses germs instead of representatives, since there is not in general a canonical representative of each "analytic germ," as is the case with algebraic varieties, where there is a unique smallest algebraic variety representing a given "algebraic germ." One can even push certain aspects to the differential level.) It is easily seen that \( \mathcal{G}\left( {R}_{\mathfrak{p}}\right) \) actually is a lattice, using Definition 3.3 together with the fact that \( \varnothing \) and the subvarieties of \( V \) containing \( \mathbf{V}\left( \mathfrak{p}\right) \) form a lattice. As for the various maps, \( {\left( \;\right) }^{c} \) and \( {\left( \;\right) }^{e} \) are just contraction and extension of ideals. Since \( R \rightarrow {R}_{\mathfrak{p}} \) is an embedding, \( {\left( \;\right) }^{c} \) reduces to intersection with \( R \) . In contrast to extension in Section III,10, we shall see that \( {\left( \;\right) }^{e} \) maps closed ideals in \( \mathcal{I}\left( R\right) \) to closed ideals in \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) . The map \( {\left( \;\right) }_{W} \) sends \( V \) into \( {V}_{W} \), and \( i \) assigns to each \( {V}_{W} \) the variety \( i\left( {V}_{W}\right) = {V}_{\left( W\right) } \) . (Thus \( i \) simply removes from \( {V}_{W} \) reference to the "center" \( W \) .)
Finally, the bottom horizontal maps \( {i}^{ * } \) and \( \sqrt{} \) are the embedding and radical maps; \( {G}^{ * } \) and \( {J}^{ * } \) will be defined in terms of the other maps, and will turn out to be mutually inverse lattice-reversing isomorphisms. In establishing properties of these maps, extension and contraction between \( \mathcal{I}\left( R\right) \) and \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) play a basic part; we look at them first. \( {\left( \;\right) }^{e} : \mathcal{I}\left( R\right) \rightarrow \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) This map is onto \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) ; in particular, each ideal \( {\mathfrak{a}}^{ * } \subset {R}_{\mathfrak{p}} \) comes from the ideal \( {\mathfrak{a}}^{*c} \subset R \) ; that is, for each ideal \( {\mathfrak{a}}^{ * } \subset {R}_{\mathfrak{p}} \) , \[ {\mathfrak{a}}^{ * } = {\mathfrak{a}}^{*{ce}}. \] (8) Proof. That \( {\mathfrak{a}}^{*{ce}} \subset {\mathfrak{a}}^{ * } \) is obvious, since \( {a}^{ * } \in {\mathfrak{a}}^{*{ce}} \) implies that \( {a}^{ * } = a/m \) for some \( a \in {\mathfrak{a}}^{*c} \) and some \( m \in R \smallsetminus \mathfrak{p} \) . To show \( {\mathfrak{a}}^{ * } \subset {\mathfrak{a}}^{*{ce}} \), let \( {a}^{ * } \in {\mathfrak{a}}^{ * } \) . Then \( {a}^{ * } \in {R}_{\mathfrak{p}} \) , which implies \( {a}^{ * } = a/m \) for some \( a \in R \) and \( m \in R \smallsetminus \mathfrak{p} \) ; also \( a = m{a}^{ * } \), so \( a \in {\mathfrak{a}}^{ * } \) , which means \( a \in {\mathfrak{a}}^{ * } \cap R = {\mathfrak{a}}^{*c} \) . Hence \( {a}^{ * } = a/m \in {\mathfrak{a}}^{*{ce}} \) . Next note that \( {\left( \;\right) }^{e} \) is not necessarily \( 1 : 1 \), since \[ {\mathfrak{a}}^{e} = {R}_{\mathfrak{p}}\text{ for every ideal }\mathfrak{a} ⊄ \mathfrak{p}. \] (9) ( \( \mathfrak{a} ⊄ \mathfrak{p} \) implies that there is an \( m \in \mathfrak{a} \cap \left( {R \smallsetminus \mathfrak{p}}\right) \), hence \( m/m = 1 \in {\mathfrak{a}}^{e} \) .)
However, (3.8) \( {\left( \;\right) }^{e} \) is \( 1 : 1 \) on the set of contracted ideals of \( \mathcal{I}\left( R\right) \) . For if \( \mathfrak{a} = {\mathfrak{a}}^{*c} \) and \( \mathfrak{b} = {\mathfrak{b}}^{*c} \), and if \( {\mathfrak{a}}^{e} = {\mathfrak{a}}^{*{ce}} = {\mathfrak{b}}^{e} = {\mathfrak{b}}^{*{ce}} \), then \( {\mathfrak{a}}^{ * } = {\mathfrak{b}}^{ * } \), so \( \mathfrak{a} = {\mathfrak{a}}^{*c} = \mathfrak{b} = {\mathfrak{b}}^{*c}. \) \( {\left( \;\right) }^{c} : \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \rightarrow \mathcal{I}\left( R\right) \) This map is not necessarily onto, because \( {\mathfrak{a}}^{*c} \) is either \( R \) or is contained in \( \mathfrak{p} \) . (If \( {\mathfrak{a}}^{*c} \) is not contained in \( \mathfrak{p} \), then \( {\mathfrak{a}}^{ * } = {\mathfrak{a}}^{*{ce}} = {R}_{\mathfrak{p}} \), whence \( {\mathfrak{a}}^{*c} = R \) .) Next note that \( {\left( \;\right) }^{c} \) is \( 1 : 1 \), for if \( {\mathfrak{a}}^{*c} = {\mathfrak{b}}^{*c} \), then \( {\mathfrak{a}}^{*{ce}} = {\mathfrak{b}}^{*{ce}} \), i.e., \( {\mathfrak{a}}^{ * } = {\mathfrak{b}}^{ * } \) . In general \( \mathfrak{a} \neq {\mathfrak{a}}^{ec} \), but we always have \[ \mathfrak{a} \subset {\mathfrak{a}}^{ec}. \] (10) (Theorem 3.14 will supply geometric meaning to (10), and also to Theorem 3.10 below.) The following characterization of \( {\mathfrak{a}}^{ec} \) is useful: \[ {\mathfrak{a}}^{ec} = \{ a \in R \mid {am} \in \mathfrak{a}\text{, for some }m \in R \smallsetminus \mathfrak{p}\} . \] (11) Proof \( \subset \) : Each element of \( {\mathfrak{a}}^{e} \) is a sum of quotients of elements in \( \mathfrak{a} \) by elements in \( R \smallsetminus \mathfrak{p} \) ; obviously such a sum is itself such a quotient. Hence an element \( a \) is in \( {\mathfrak{a}}^{ec} \) iff it is in \( R \) and is of the form \( a = {a}^{\prime }/m \) where \( {a}^{\prime } \in \mathfrak{a} \) .
Hence \( {am} = {a}^{\prime } \in \mathfrak{a} \) , proving the inclusion. \( \supset \) : Any \( a \) on the right-hand side of (11) can be written as \( a = {am}/m = {a}^{\prime }/m \) where \( {a}^{\prime } \in \mathfrak{a} \), hence \( a \in {\mathfrak{a}}^{e} \) ; but also \( a \in R \), so \( a \in {\mathfrak{a}}^{ec} \) . An immediate corollary of the injectivity of \( {\left( \;\right) }^{c} \) is this basic fact, referred to earlier: Lemma 3.9. The ring \( {R}_{\mathfrak{p}} \) is Noetherian. Proof. \( {\left( \;\right) }^{c} \) is 1 : 1 onto the set of contracted ideals of \( R \) ; since \( {\left( \;\right)
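Formula (11) can be made concrete in the simplest localization, \( R = \mathbb{Z} \) and \( \mathfrak{p} = \left( 2\right) \), where \( {R}_{\mathfrak{p}} \) consists of fractions with odd denominators. The choice \( \mathfrak{a} = \left( {12}\right) \) below is an illustrative one; the computation finds \( {\mathfrak{a}}^{ec} = \left( 4\right) \), a strict enlargement of \( \mathfrak{a} \), as (10) allows:

```python
# a ∈ a^{ec} iff a*m ∈ a = (12) for some m ∉ (2), i.e. for some odd m
# (formula (11) with R = Z, p = (2)); the search bound on m is illustrative.
def in_aec(a, gen=12, m_bound=101):
    return any((a * m) % gen == 0 for m in range(1, m_bound, 2))

window = range(-60, 61)
aec = {a for a in window if in_aec(a)}

# The odd factor 3 of 12 is a unit in the localization and can be absorbed
# into m, leaving a^{ec} = (4).
assert aec == {a for a in window if a % 4 == 0}
```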
1009_(GTM175)An Introduction to Knot Theory
Definition 12.3. A twist about \( \mathrm{C} \) is any homeomorphism isotopic to the homeomorphism \( \tau : F \rightarrow F \) defined such that \( \tau \mid F - A \) is the identity and, parametrising \( A \) as \( {S}^{1} \times \left\lbrack {0,1}\right\rbrack \) in an orientation-preserving manner, \( \tau \mid A \) is given by \( \tau \left( {{e}^{i\theta }, t}\right) = \left( {{e}^{i\left( {\theta - {2\pi t}}\right) }, t}\right) \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_0.jpg) Figure 12.1 Note that the effect of \( \tau \) on a path crossing \( C \) is to sweep that path all the way around the annulus. See Figure 12.1. Strictly, of course, a twist homeomorphism should here be piecewise linear; the fourth power of the piecewise linear homeomorphism shown in Figure 12.2 (which fixes the inner boundary component and moves each vertex on the outer boundary to the next vertex in a clockwise direction) is an appropriate piecewise linear model for a twist rather than the homeomorphism of Figure 12.1. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_1.jpg) Figure 12.2 Definition 12.4. Oriented simple closed curves \( p \) and \( q \) contained in the interior of the surface \( F \) are called twist-equivalent, written \( p{ \sim }_{\tau }q \), if \( {hp} = q \) for some homeomorphism \( h \) of \( F \) that is in the group of homeomorphisms generated by all twists of \( F \) (which includes homeomorphisms isotopic to the identity). In this definition \( h \) is required to carry the orientation of one curve to that of the other. Of course, in general there may be no homeomorphism of any sort that sends \( p \) to \( q \) ; that is certainly the case if \( p \) separates \( F \) and \( q \) does not. Lemma 12.5. 
Suppose oriented simple closed curves \( p \) and \( q \), contained in the interior of the surface \( F \), intersect transversely at precisely one point. Then \( p{ \sim }_{\tau }q \) . Proof. The first diagram of Figure 12.3 shows the intersection point of \( p \) and \( q \) and also a simple closed curve \( {C}_{1} \) that runs parallel to, and is slightly displaced from, \( q \) . Similarly, \( {C}_{2} \) is a slightly displaced copy of \( p \) . The second diagram shows \( {\tau }_{1}p \), where \( {\tau }_{1} \) is a twist about \( {C}_{1} \) . The third diagram shows \( {\tau }_{2}{\tau }_{1}p \), where \( {\tau }_{2} \) is a twist about \( {C}_{2} \) . In this diagram \( {\tau }_{2}{\tau }_{1}p \) has a doubled-back portion that can easily be moved by a homeomorphism isotopic to the identity (that is, a slide in \( F \) ) to change \( {\tau }_{2}{\tau }_{1}p \) to \( q \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_0.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_1.jpg) Figure 12.3 Lemma 12.6. Suppose that oriented simple closed curves \( p \) and \( q \) contained in the interior of the surface \( F \) are disjoint and that neither separates \( F \) (that is, \( \left\lbrack p\right\rbrack \neq 0 \neq \left\lbrack q\right\rbrack \) in \( {H}_{1}\left( {F,\partial F}\right) \) ). Then \( p{ \sim }_{\tau }q \) . Proof. Consideration of the surface obtained by cutting \( F \) along \( p \cup q \) shows at once that there is a simple closed curve \( r \) in \( F \) that intersects each of \( p \) and \( q \) transversely at one point. Then, by Lemma 12.5, \( p{ \sim }_{\tau }r{ \sim }_{\tau }q \) . Proposition 12.7. Suppose that oriented simple closed curves \( p \) and \( q \) are contained in the interior of the surface \( F \) and that neither separates \( F \) . Then \( p{ \sim }_{\tau }q \) . Proof.
Changing \( q \) by means of a homeomorphism of \( F \) that is (close to and) isotopic to the identity, it can be assumed that \( p \) and \( q \) intersect transversely at \( n \) points. The proof is by induction on \( n \) ; Lemmas 12.5 and 12.6 start the induction, so assume that \( n \geq 2 \) and that the result is true for fewer than \( n \) points of intersection. Let \( A \) and \( B \) be consecutive points along \( p \) of \( p \cap q \) . Suppose firstly that \( p \) leaves \( A \) on one side of \( q \) and returns to \( B \) from the other side of \( q \) . Let \( r \) be a simple closed curve in \( F \) that starts near \( A \), follows close to \( p \) until near \( B \) and then returns to its start in a neighbourhood of \( q \) . As shown in the first diagram of Figure 12.4, \( r \) can be chosen so that \( p \cap r \) contains fewer than \( n \) points and \( q \cap r \) is one point. Hence \( p{ \sim }_{\tau }r \) by the induction hypothesis, and \( r{ \sim }_{\tau }q \) by Lemma 12.5. Suppose now that \( p \) leaves \( A \) on one side of \( q \) and returns to \( B \) from the same side of \( q \) . Let \( {r}_{1} \) and \( {r}_{2} \) be the two simple closed curves shown in the second ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_2.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_3.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_3.jpg) Figure 12.4 diagram of Figure 12.4. Each starts near \( A \), proceeds near \( p \) until close to \( B \) and then back to its start following near to \( q \) . However, \( {r}_{1} \) starts on the right of \( p \) and \( {r}_{2} \) starts on the left. Now in \( {H}_{1}\left( {F,\partial F}\right) \), \( \left\lbrack {r}_{1}\right\rbrack - \left\lbrack {r}_{2}\right\rbrack = \left\lbrack q\right\rbrack \), and hence at least one of \( {r}_{1} \) and \( {r}_{2} \) does not separate (as \( \left\lbrack q\right\rbrack \neq 0 \) ).
Let that curve be defined to be \( r \) . Then \( r \) is disjoint from \( q \), so \( r{ \sim }_{\tau }q \) by Lemma 12.6 and, as \( r \cap p \) has at most \( n - 2 \) points, \( p{ \sim }_{\tau }r \) by the induction hypothesis. Corollary 12.8. Let \( {p}_{1},{p}_{2},\ldots ,{p}_{n} \) be disjoint simple closed curves in the interior of \( F \) the union of which does not separate \( F \) . Let \( {q}_{1},{q}_{2},\ldots ,{q}_{n} \) be another set of curves with the same properties. Then there is a homeomorphism \( h \) of \( F \) that is in the group generated by twists, so that \( h{p}_{i} = {q}_{i} \) for each \( i = 1,2,\ldots, n \) . Proof. Suppose inductively that such an \( h \) can be found so that \( h{p}_{i} = {q}_{i} \) for each \( i = 1,2,\ldots, n - 1 \) . Apply Proposition 12.7 to \( h{p}_{n} \) and \( {q}_{n} \) in \( F \) cut along \( {q}_{1} \cup {q}_{2} \cup \ldots \cup {q}_{n - 1} \) . The theory of homeomorphisms of surfaces will be left at that point and attention turned back to \( n \) -manifolds with particular interest in \( n = 3 \) . Definition 12.9. Let \( M \) be an \( n \) -manifold, let \( e : \partial {D}^{r} \times {D}^{n - r} \rightarrow \partial M \) be an embedding (where, as usual, \( {D}^{s} \) is the standard \( s \) -dimensional disc or ball). Then \( M{ \cup }_{e}\left( {{D}^{r} \times {D}^{n - r}}\right) \) is called “ \( M \) with an \( r \) -handle added”. Note that the boundary of this new manifold is \( \partial M \) changed by an \( \left( {r - 1}\right) \) -surgery. Definition 12.10. A handlebody of genus \( g \) is an orientable 3-manifold that is a 3-ball with \( g \) 1-handles added. Here, "orientable" can be taken to mean that every simple closed curve in the manifold has a solid torus neighbourhood. It is a straightforward exercise in the elementary technicalities of piecewise linear manifold theory to show that, up to homeomorphism, there is only one genus \( g \) handlebody.
It is indeed, as already stated, the product of an interval with a \( g \) -holed disc. A regular neighbourhood of any finite connected graph embedded in an orientable 3-manifold is a handlebody. This follows by taking the neighbourhood of a maximal tree as the 3-ball and neighbourhoods of the midpoints of the remaining edges as 1-handles. Definition 12.11. A Heegaard splitting of a (closed, connected, orientable) 3- manifold \( M \) is a pair of handlebodies \( X \) and \( Y \) contained in \( M \) such that \( X \cup Y = M \) and \( X \cap Y = \partial X = \partial Y \) . Note that \( X \) and \( Y \) have the same genus; namely, the genus of their common boundary surface. Lemma 12.12. Any closed connected orientable 3-manifold has a Heegaard splitting. Proof. This is similar to the first part of the proof of Theorem 8.2. Take a triangulation of \( M \) as a simplicial complex \( K \) . The vertices of the first derived subdivision \( {K}^{\left( 1\right) } \) of \( K \) are the barycentres \( \widehat{A} \) of the simplexes \( A \) of \( K \) . The second derived subdivision \( {K}^{\left( 2\right) } \) of \( K \) is, of course, just \( {\left( {K}^{\left( 1\right) }\right) }^{\left( 1\right) } \) . The 1-skeleton of \( K \) (that is, the sub-complex consisting of the 0 -simplexes and 1 -simplexes of \( K \) ), being a graph, has, as intimated above, for its simplicial neighbourhood in \( {K}^{\left( 2\right) } \), a handlebody. The closure of the complement of this is the simplicial neighbourhood in \( {K}^{\left( 2\right) } \) of another graph. That graph, called the dual 1-skeleton of \( K \), is the sub-complex \( \mathop{\bigcup }\limits_{A}{C}_{A} \) of \( {K}^{\left( 1\right) } \), where the union is over all 3-simplexes \( A \), and \( {C}_{A} \) is the cone with vertex \( \widehat{A} \) on the barycentres of the 2-dimensional faces of \( A \) . 
Thus \( {K}^{\left( 2\right) } \) is expressed as the union of two handlebodies that intersect in their common boundary, and this is the required Heegaard splitting. Theorem 12.13. Let \( M \) be a closed connected orientable 3-manifold. There exist finite sets of disjoint solid tori \( {T}_{1}^{\prime },{T}_{2}^{\pr
1119_(GTM273)Homotopical Topology
Definition 2. A filtration of \( C \) is a family of subgroups \( {F}_{p}C \subset C, p \in \mathbb{Z} \) such that if \( p < q \), then \( {F}_{p}C \subset {F}_{q}C \) . We will also assume that \( \bigcup {F}_{p}C = C \) and \( \bigcap {F}_{p}C = 0 \) . A filtration \( \left\{ {{F}_{p}C}\right\} \) of \( C \) is called finite if for some \( m \) and \( n \geq m,{F}_{p}C = 0 \) when \( p < m \) and \( {F}_{p}C = C \) when \( p \geq n \) . A filtration \( \left\{ {{F}_{p}C}\right\} \) is called positive if \( {F}_{p}C = 0 \) for all \( p < 0 \) . Usually, we will assume that the filtration is positive and finite, in which case it is essentially a chain \[ 0 = {F}_{-1}C \subset {F}_{0}C \subset {F}_{1}C \subset \ldots \subset {F}_{n - 1}C \subset {F}_{n}C = C \] (but even in this case we have the right to use the notation \( {F}_{p}C \) for \( p < - 1 \) when it is 0, and for \( p > n \) when it is \( C \) ). ![ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_317_0.jpg](images/ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_317_0.jpg) ![ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_318_0.jpg](images/ca3d23e8-d3f0-4fbc-a946-2e04586ccbf2_318_0.jpg) Definition 3. A grading of \( C \) is a family of subgroups \( {C}_{r} \subset C, r \in \mathbb{Z} \) such that \( C = {\bigoplus }_{r}{C}_{r} \) . Usually (not always), we will assume that the grading is positive and finite, meaning that actually \( C = {\bigoplus }_{r = 0}^{n}{C}_{r} \) and \( {C}_{r} = 0 \) for \( r < 0 \) and \( r > n \) . For a filtered group \( C \) (as in Definition 2) we define the adjoint graded group \( \operatorname{Gr}C = \) \( {\bigoplus }_{r}\left( {{F}_{r}C/{F}_{r - 1}C}\right) \) . The groups \( C \) and \( \operatorname{Gr}C \) are not always isomorphic as Abelian groups (for example, the adjoint group to the group \( C = \mathbb{Z} \) with a filtration \( 0 \subset \) \( 2\mathbb{Z} \subset \mathbb{Z} \) is \( {\mathbb{Z}}_{2} \oplus \mathbb{Z} \ncong \mathbb{Z} \) ), but they may be regarded as closely related. 
For example, if a filtered group \( C \) is finite, then the group \( \operatorname{Gr}C \) is also finite and has the same order; if \( C \) has finite rank and the filtration is finite, then \( \operatorname{Gr}C \) also has finite rank, the same as that of \( C \); and if \( C \) is a finite-dimensional vector space (over some field), filtered by subspaces, then \( \operatorname{Gr}C \) is also a vector space over the same field, of the same dimension as \( C \). Structures of these kinds may co-exist in the same Abelian group; then we usually assume that they satisfy some compatibility conditions; actually, we will not even explicitly state that these conditions are met; rather, we will state the opposite in the rare cases when the structures considered are not supposed to be compatible. If an Abelian group \( C \) possesses a differential \( d \) and a filtration \( \left\{ {{F}_{p}C}\right\} \), then we assume that \( d\left( {{F}_{p}C}\right) \subset {F}_{p}C \) for all \( p \). In this case, we have differential groups \( \left( {{F}_{p}C,{\left. d\right| }_{{F}_{p}C}}\right) \), and the inclusion map \( {F}_{p}C \rightarrow C \) induces a homology homomorphism \( H\left( {{F}_{p}C,{\left. d\right| }_{{F}_{p}C}}\right) \rightarrow H = H\left( {C, d}\right) \), whose image is denoted by \( {F}_{p}H \). [Thus, \( {F}_{p}H = \left( {{F}_{p}C \cap \operatorname{Ker}d}\right) /\left( {{F}_{p}C \cap d\left( C\right) }\right) \).] In this way, we obtain a filtration \( \left\{ {{F}_{p}H}\right\} \) of \( H \). If \( C \) has a differential \( d \) and a grading \( C = {\bigoplus }_{r \in \mathbb{Z}}{C}_{r} \), then we usually assume that \( d \) is homogeneous of some degree \( u \in \mathbb{Z} \), which means that \( d\left( {C}_{r}\right) \subset {C}_{r + u} \) for all \( r \). We are best familiar with the case \( u = - 1 \).
Then \( C \) is the same as a (chain) complex \[ \ldots \overset{{d}_{r - 1}}{ \leftarrow }{C}_{r - 1}\overset{{d}_{r}}{ \leftarrow }{C}_{r}\overset{{d}_{r + 1}}{ \leftarrow }{C}_{r + 1}\overset{{d}_{r + 2}}{ \leftarrow }\ldots \] in the sense of Sect. 12.2. The case \( u = 1 \) is represented by cochain complexes (Sect. 15.1). Ahead, we will deal with differential graded groups with differentials of all possible degrees. Notice that the homology group of a differential graded group with homogeneous differential (of some degree \( u \) ) has a natural grading: \[ {H}_{r} = \frac{\operatorname{Ker}\left( {d : {C}_{r} \rightarrow {C}_{r + u}}\right) }{\operatorname{Im}\left( {d : {C}_{r - u} \rightarrow {C}_{r}}\right) }. \] EXERCISE 1. Prove that if the differential in \( C \) is homogeneous with respect to the grading, then (whatever the degree of \( d \) is) \( H = {\bigoplus }_{r}{H}_{r} \) . If \( C \) has a filtration, \( \left\{ {{F}_{p}C}\right\} \), and a grading, \( C = {\bigoplus }_{r}{C}_{r} \), then the two structures are called compatible if for every \( p,{F}_{p}C = {\bigoplus }_{r}\left( {{F}_{p}C \cap {C}_{r}}\right) \) . This condition is quite restrictive. It is stronger than it may seem at first glance: A randomly chosen filtration and grading do not satisfy it. Here is the simplest (?) example. Let \( C \) be a free Abelian group with two generators: \( a \) and \( b \) (so \( C = \mathbb{Z}a \oplus \mathbb{Z}b \) ). Consider the filtration \( 0 = {F}_{-1}C \subset {F}_{0}C \subset {F}_{1}C = C \) with \( {F}_{0}C = \mathbb{Z}\left( {a + b}\right) \) and the grading \( C = {C}_{0} \oplus {C}_{1} \) with \( {C}_{0} = \mathbb{Z}a,{C}_{1} = \mathbb{Z}b \) . Then \[ \mathbb{Z} \cong {F}_{0}C \neq \left( {{F}_{0}C \cap {C}_{0}}\right) \oplus \left( {{F}_{0}C \cap {C}_{1}}\right) = 0 \oplus 0 = 0.
\] For a better understanding, we can notice that the filtration and the grading are compatible if every \( {F}_{p}C \) is generated by "homogeneous elements," that is, by elements belonging to the groups \( {C}_{r} \) . EXERCISE 2. Prove that if the differential, the filtration, and the grading are mutually compatible, then (whatever the degree of the differential is) the filtration and the grading in the homology are compatible. And the last common situation is when \( C \) has two (or more, but we will never encounter this case) gradings, \( C = {\bigoplus }_{r}{C}_{r} \) and \( C = {\bigoplus }_{s}{C}_{s}^{\prime } \) . These two gradings are compatible (or form a bigrading) if \( C = {\bigoplus }_{r, s}{C}_{r, s} \), where \( {C}_{r, s} = {C}_{r} \cap {C}_{s}^{\prime } \) (or, equivalently, if \( {C}_{r} = {\bigoplus }_{s}{C}_{r, s} \) ; or, equivalently, if \( {C}_{s}^{\prime } = {\bigoplus }_{r}{C}_{r, s} \) ). A typical example is contained in Exercise 3. EXERCISE 3. Prove that if \( C \) possesses a compatible filtration \( \left\{ {F}_{p}\right\} \) and grading \( C = {\bigoplus }_{s}{C}_{s} \), then \( \operatorname{Gr}C \) acquires a natural bigrading: \( \operatorname{Gr}C = {\bigoplus }_{p, r}\left( {{F}_{p}C \cap }\right. \) \( \left. {C}_{r}\right) /\left( {{F}_{p - 1}C \cap {C}_{r}}\right) \) . ## 20.2 The Spectral Sequence of a Filtered Differential Group Let \( C \) be a differential group with a differential \( d \) and a filtration \( \left\{ {{F}_{p}C}\right\} \) compatible with \( d \) . We will assume that the filtration is finite and positive, and we will briefly consider the case of the infinite filtration at the end of the section. In the next section, we will adjust our construction to the case when \( C \) also possesses a grading. Begin with a simple observation. 
Since \( d\left( {{F}_{p}C}\right) \subset {F}_{p}C \), the differential \( d \) induces a differential \( {d}_{p}^{0} : {F}_{p}C/{F}_{p - 1}C \rightarrow {F}_{p}C/{F}_{p - 1}C \) [obviously, \( {\left( {d}_{p}^{0}\right) }^{2} = 0 \) ] and the direct sum of all \( {d}_{p}^{0} \) becomes a homogeneous differential of degree \( 0 \), \( {d}^{0} : \operatorname{Gr}C \rightarrow \operatorname{Gr}C \) . Question: Are \( H\left( {\operatorname{Gr}C,{d}^{0}}\right) \) and \( \operatorname{Gr}H\left( {C, d}\right) \) the same? Answer: no, in general. Indeed, when we compute \( H\left( C\right) \), we first restrict ourselves to \( \operatorname{Ker}d = \{ c \in C \mid {dc} = 0\} \) . But when we compute \( H\left( {\operatorname{Gr}C}\right) \), we take those \( c \in {F}_{p}C \) for which \( {dc} \in {F}_{p - 1}C \) ; that is, the group of "cycles" is bigger in the second computation. On the other hand, when we compute \( H\left( C\right) \), we factorize over \( d\left( C\right) \), while in the computation of \( H\left( {\operatorname{Gr}C}\right) \) we factorize over \( d\left( {{F}_{p}C}\right) \), which is not as big. This shows that the group \( H\left( {\operatorname{Gr}C}\right) \) should be bigger than \( \operatorname{Gr}H\left( C\right) \) . This is what the spectral sequence exists for: a gradual, "monotonic" transition from \( H\left( {\operatorname{Gr}C}\right) \) to \( \operatorname{Gr}H\left( C\right) \) . Now we pass to the main definitions. For \( p, r \geq 0 \), put \[ {E}_{p}^{r} = \frac{{F}_{p}C \cap {d}^{-1}\left( {{F}_{p - r}C}\right) }{\left\lbrack {{F}_{p - 1}C \cap {d}^{-1}\left( {{F}_{p - r}C}\right) }\right\rbrack + \left\lbrack {{F}_{p}C \cap d\left( {{F}_{p + r - 1}C}\right) }\right\rbrack },{E}^{r} = {\bigoplus }_{p}{E}_{p}^{r}.
\] In words: We take elements of \( {F}_{p}C \) whose differentials lie in a smaller group, \( {F}_{p - r}C \) ; then we factorize over those chosen elements which happen to be in \( {F}_{p - 1}C \), and also over those which are differentials, not of arbitrary elements of \( C \), but only of elements of \( {F}_{p + r - 1}C \) . Consider three particular cases. \[ {E}_{p}^{0} = \frac{{F}_{p}C \cap {d}^{-1}\left( {{F}_{p}C}\right) }{\left\lbrack {{F}_{p - 1}C \cap {d}^{-1}\left( {{F}_{p}C}\right) }\right\rbrack + \left\lbrack {{F}_{p}C \cap d\left( {{F}_{p - 1}C}\right) }\right\rbrack } \] \[ = \frac{{F}_{p}C}{{F}_
1185_(GTM91)The Geometry of Discrete Groups
Definition 10.4.1. The symbol \[ \left( {g : {m}_{1},\ldots ,{m}_{r};s;t}\right) \] (10.4.1) is called the signature of \( G \) : each parameter is a non-negative integer and \( {m}_{j} \geq 2 \) . If there are no elliptic elements in \( G \), we simply write \( \left( {g : 0;s;t}\right) \) . It is possible to state precisely which signatures occur. Theorem 10.4.2. There is a non-elementary finitely generated Fuchsian group with signature (10.4.1) and \( {m}_{j} \geq 2 \) if and only if \[ {2g} - 2 + s + t + \mathop{\sum }\limits_{{j = 1}}^{r}\left( {1 - \frac{1}{{m}_{j}}}\right) > 0. \] (10.4.2) The proof that (10.4.2) is a necessary condition for the existence of a group with signature (10.4.1) is a consequence of the following result. Theorem 10.4.3. Let \( G \) be a non-elementary finitely generated Fuchsian group with signature (10.4.1) and Nielsen region N. Then \[ \text{h-area}\left( {N/G}\right) = {2\pi }\left\{ {{2g} - 2 + s + t + \mathop{\sum }\limits_{{j = 1}}^{r}\left( {1 - \frac{1}{{m}_{j}}}\right) }\right\} \text{.} \] If \( G \) is also of the first kind, then \( N = \Delta \) and \( t = 0 \) : thus we obtain a formula for the area of any fundamental polygon of \( G \) . Corollary 10.4.4. Let \( G \) be a finitely generated Fuchsian group of the first kind with signature \( \left( {g : {m}_{1},\ldots ,{m}_{r};s;0}\right) \) . Then for any convex fundamental polygon \( P \) of \( G \) , \[ \text{h-area}\left( P\right) = {2\pi }\left\lbrack {{2g} - 2 + s + \mathop{\sum }\limits_{{j = 1}}^{r}\left( {1 - \frac{1}{{m}_{j}}}\right) }\right\rbrack \text{.} \] Proof of Theorem 10.4.3. We take \( D \) to be the Dirichlet polygon for \( G \) with centre \( w \) so \[ \mathrm{h} - \operatorname{area}\left( {D \cap N}\right) = \mathrm{h} - \operatorname{area}\left( {N/G}\right) . 
\] By choosing \( w \) appropriately, we may assume that each elliptic and parabolic cycle on \( \partial D \) has length one and (by taking \( w \) to avoid a countable set of geodesics) we may assume that no cycle of vertices of \( D \) lies on the axes of hyperbolic boundary elements. Clearly, only finitely many distinct images of a hyperbolic axis can meet the closure of any locally finite fundamental domain. As \( N \) is bounded by hyperbolic axes (because \( G \) is finitely generated), this implies that only finitely many sides of \( N \) meet \( D \) and so \( D \cap N \) is a finite-sided polygon. The boundary of \( D \cap N \) consists of, say, \( {2n} \) paired sides (which are arcs of paired sides of \( D \) ) and \( k \) sides which are not paired (and consist of arcs in \( D \) of the axes bounding \( N \) ). The vertices of \( D \cap N \) are the \( r \) elliptic cycles of length one, the \( s \) parabolic cycles of length one, some accidental cycles of \( D \) (say \( a \) of these) and finally \( k \) cycles of length two corresponding to the end-points of the \( k \) unpaired sides of \( D \cap N \) . Applying Euler's formula (after "filling in" the holes), we obtain \[ 2 - {2g} = \left( {1 + t}\right) - \left( {n + k}\right) + \left( {r + a + k + s}\right) \] and hence \[ n - a = {2g} - 1 + r + s + t. \] Now join \( w \) to each vertex of \( D \cap N \), thus dividing \( D \cap N \) into \( {2n} + k \) triangles. Adding the areas of these triangles, we obtain \[ \text{h-area}\left( {D \cap N}\right) = \left( {{2n} + k}\right) \pi - {2\pi } - {2\pi a} - {\pi k} - \mathop{\sum }\limits_{{j = 1}}^{r}\frac{2\pi }{{m}_{j}} \] \[ = {2\pi }\left\lbrack {n - a - 1 - \mathop{\sum }\limits_{{j = 1}}^{r}\frac{1}{{m}_{j}}}\right\rbrack \] \[ = {2\pi }\left\lbrack {{2g} - 2 + s + t + \mathop{\sum }\limits_{{j = 1}}^{r}\left( {1 - \frac{1}{{m}_{j}}}\right) }\right\rbrack . 
\] It is evident from the nature of the formula in Theorem 10.4.3 that h-area \( \left( {N/G}\right) \) has a positive universal lower bound, valid for all groups \( G \) . For brevity, write \[ A = \left( {1/{2\pi }}\right) \mathrm{h} - \operatorname{area}\left( {N/G}\right) \] and, in order to compute this lower bound, we may assume that \( A < \frac{1}{6} \) : this is a convenient number for the following analysis and we shall soon see that there are groups for which \( A < \frac{1}{6} \) . If \( r = 0 \) or if \( {m}_{j} = 2 \) for each \( j \), then \( A = n/2 \) for some integer \( n \) . As \( A > 0 \) , we find that \( A \geq \frac{1}{2} \) so we may assume that \( r > 0 \) and that some \( {m}_{j} \) is at least three. Then \[ 1 > {6A} \] \[ \geq 6\left\lbrack {{2g} - 2 + s + t + \left( \frac{r - 1}{2}\right) + \frac{2}{3}}\right\rbrack \] which yields \[ {4g} + {2s} + {2t} + r < 4\text{.} \] Because \[ 2 < A + 2 \] \[ \leq {2g} + s + t + r \] \[ \leq {4g} + {2s} + {2t} + r \] \[ < 4\text{,} \] we obtain \[ {2g} + s + t + r = 3 \] \[ = {4g} + {2s} + {2t} + r \] so \[ g = s = t = 0,\;r = 3. \] We may now assert that \[ A = 1 - \left( {\frac{1}{{m}_{1}} + \frac{1}{{m}_{2}} + \frac{1}{{m}_{3}}}\right) > 0. \] If each \( {m}_{j} \) is at least three, then one \( {m}_{j} \) is at least four and then \( A \geq \frac{1}{12} \) . If not, then \( {m}_{3} = 2 \), say, and so \[ A = \frac{1}{2} - \left( {\frac{1}{{m}_{1}} + \frac{1}{{m}_{2}}}\right) > 0. \] If each of \( {m}_{1} \) and \( {m}_{2} \) is at least four, then one is at least five and then \( A \geq \frac{1}{20} \) . If not, then \( {m}_{2} = 3 \), say, and \[ A \geq \frac{1}{42} \] with equality when and only when \( G \) has signature \( \left( {0 : 2,3,7;0;0}\right) \) . For future reference we state this as our next result. Theorem 10.4.5. 
For every non-elementary Fuchsian group \( G \) with Nielsen region \( N \), \[ \text{h-area}\left( {N/G}\right) \geq \pi /{21}\text{.} \] Equality holds precisely when \( G \) has signature \( \left( {0 : 2,3,7;0;0}\right) \), in which case \( N = \Delta \) . We end this section with the remaining part of the proof of Theorem 10.4.2. Proof of Theorem 10.4.2. Sufficiency. Given the symbol (10.4.1) satisfying (10.4.2), we must construct a Fuchsian group \( G \) which has (10.4.1) as its signature. For any positive \( d \), construct the circle given by \( \rho \left( {z,0}\right) = d \) and also a set of \( {4g} + r + s + t \) points \( {z}_{j} \) equally spaced around this circle (and labelled in the natural way). The arcs \( {z}_{j}{z}_{j + 1} \) subtend an angle \( {2\theta } \) at the origin, where \[ \theta = \frac{2\pi }{{8g} + {2r} + {2s} + {2t}}. \] For the first four of these arcs, we construct a configuration with mappings \( {h}_{j} \) as illustrated in Figure 10.4.1. Note that the points \( {z}_{1},\ldots ,{z}_{5} \) are all images of each other. This construction is repeated \( g - 1 \) more times, starting the next stage at \( {z}_{5} \) and so on: this accounts for \( {4g} \) arcs \( {z}_{j}{z}_{j + 1} \), an angle \( {8g\theta } \) at the origin and mappings \( {h}_{1},\ldots ,{h}_{2g} \) . Using the next \( r \) arcs \( {z}_{j}{z}_{j + 1} \), we construct configurations with mappings \( {e}_{i} \) as illustrated in Figure 10.4.2 (recall that the integers \( {m}_{i} \) are available from (10.4.1) and \( {m}_{i} \geq 2 \) ). Necessarily, \( {e}_{i} \) is an elliptic element of order \( {m}_{i} \) fixing \( {w}_{i} \) . ![32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_284_0.jpg](images/32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_284_0.jpg) This part of the construction accounts for an additional angular measure of \( {2r\theta } \) at the origin. 
Next, we repeat the construction \( s \) times and now on each occasion the corresponding \( {w}_{i} \) are on \( \{ \left| z\right| = 1\} \) : the angle at \( {w}_{i} \) is zero and the corresponding mappings \( {p}_{i} \) (for \( {e}_{i} \) ) are parabolic. There are now \( t \) remaining arcs, each subtending an angle of \( {2\theta } \) at the origin. On each of these arcs we construct the configurations and hyperbolic mappings \( {b}_{i} \) as illustrated in Figure 10.4.3, where \[ {\theta }_{1} = \left( \frac{1 + d}{1 + {2d}}\right) \theta . \] We have now constructed a polygon with vertices \( {z}_{j},{u}_{i},{v}_{i},{w}_{i} \) and with side-pairings given by the \( {h}_{i},{e}_{i},{p}_{i} \) and \( {b}_{i} \) . The group \( G \) generated by these maps may or may not be discrete, but in any case the points \( {z}_{1},{z}_{2},\ldots \) lie in the same \( G \) -orbit. Moreover, the angle sum subtended at these \( {z}_{j} \) is \[ \phi \left( d\right) = {8g\alpha } + 2\left( {{\beta }_{1} + \cdots + {\beta }_{r + s + t}}\right) . \] ![32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_284_1.jpg](images/32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_284_1.jpg) Figure 10.4.2 ![32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_285_0.jpg](images/32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_285_0.jpg) Figure 10.4.3 Each of the angles \( \alpha \) and \( {\beta }_{j} \) depends continuously on the parameter \( d \) . We shall show that for some choice of \( d \) we have \( \phi \left( d\right) = {2\pi } \) . Then Poincaré’s Theorem (see Exercise 9.8.2) implies that \( G \) is discrete and that the constructed polygon is a fundamental domain for \( G \) . It then remains to verify that \( G \) does indeed have the signature (10.4.1). 
By elementary trigonometry, we have (using Figures 10.4.1, 10.4.2 and 10.4.3 in turn) (i) \[ \cosh d = \cot \theta \cot \alpha \] (ii) \[ \cosh d = \frac{\cos \theta \cos {\beta }_{j} + \cos \left( {\pi /{m}_{j}}\right) }{\sin \theta \sin {\beta }_{j}}, \] when \( j = 1,\ldots, r \), and a similar expression with \( \cos \left( {\pi /{m}_{j}}\right) \) replaced by 1 when \( j = r + 1,\ldots, r + s \) ; (iii) \[ \cosh d = \frac{\cos {\theta }_{1}\cos {\beta }_{j} + 1}{\sin {\theta }_{1}\sin {\beta }_{j}}. \] Note that as \( d \rightarrow 0 \), so \( \alpha \rightarrow \left( {\pi /2}\right) - \theta \) . In (ii), we have \[ \cos \left( {\theta + {\beta }_{j}}\right) = \cos \left( {\pi - \frac{\pi }{{m}_{j}}}\right) + \sin \theta \sin {\beta }_{j}\left( {\cosh d - 1}\right) . \] 
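The area formula of Theorem 10.4.3 makes the bound of Theorem 10.4.5 easy to check by machine. The sketch below (not from the text; the search ranges are arbitrary) computes \( A = \left( {1/{2\pi }}\right) \text{h-area}\left( {N/G}\right) \) for small signatures and confirms that the minimum positive value is \( 1/42 \), attained only at \( \left( {0 : 2,3,7;0;0}\right) \).

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def A(g, ms, s, t):
    # A = (1/2pi) * h-area(N/G) for the signature (g : m_1,...,m_r ; s ; t),
    # per Theorem 10.4.3.
    return Fraction(2 * g - 2 + s + t) + sum(1 - Fraction(1, m) for m in ms)

best = None
for g in range(3):
    for s in range(4):
        for t in range(4):
            for r in range(4):
                for ms in combinations_with_replacement(range(2, 21), r):
                    a = A(g, ms, s, t)
                    if a > 0 and (best is None or a < best[0]):
                        best = (a, (g, ms, s, t))

print(best)  # (Fraction(1, 42), (0, (2, 3, 7), 0, 0))
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ties in the comparison; the bound \( \pi /{21} \) of Theorem 10.4.5 is \( 2\pi \cdot 1/{42} \).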
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 2.4
Definition 2.4. For a given set of systems \( \mathcal{C} \subset \mathcal{B} \), we say \( \mathcal{C} \) approximates \( X \) if \( X \in \overline{\mathcal{C}} \) . In order to prove Theorem 2.2, Peixoto first proved a series of approximating lemmas (i.e., Lemma 2.2 to Lemma 2.9). Theorem 2.2 is then proved by using these lemmas. LEMMA 2.2. Any system \( X \in \mathcal{B} \) can be approximated by a system \( {Y}_{1} \) of type \( \left( 1\right) \) . The proof is not difficult. However, it requires some knowledge of differential topology, and it is thus omitted. (See [4].) As for Lemmas 2.3 to 2.9, we will present them in detail in the following two subsections. (B) On the elimination of nontrivial minimal sets. A minimal set is a nonempty invariant closed set which does not contain any nonempty invariant closed proper subset. For example, a critical point or a closed orbit of a system is clearly a minimal set of the system. In Chapter VII, we found that an irrational flow on the torus forms a minimal set of the system. Usually, a minimal set which is neither a critical point nor a closed orbit is called a nontrivial minimal set of the system [16]. In this section we essentially prove that the systems of type \( \left( {1,2}\right) \) are dense in the systems of type (1). Thus, by Lemma 2.2, the systems of type \( \left( {1,2}\right) \) are then dense in \( \mathcal{B} \) . The main idea is to show that under \( {C}^{1} \) small perturbations we can eliminate a nontrivial minimal set in the system \( {Y}_{1} \) and obtain a new closed orbit or a new orbit connecting a saddle point to a saddle point. This is the method of elimination of nontrivial minimal sets to be presented in this subsection. Let \( P \) be an ordinary point of the system \( {Y}_{1} \) . 
For \( P \), we construct a "square" \( R = {abcd} \), where the "horizontal" sides \( {ca} \) and \( {db} \) are two orbit arcs of \( {Y}_{1} \), and the "vertical" sides \( {ab} \) and \( {cd} \) are two arcs orthogonal to the vector field of \( {Y}_{1} \) (as shown in Figure 8.16). Without loss of generality, we may assume that for the given local coordinate system, \( R \) is chosen sufficiently small such that it is contained in the local coordinate neighborhood of \( P \) . Moreover, we may assume that \( R \) is \( \left| x\right| \leq 1,\left| y\right| \leq 1 \), and \( P = \left( {0,0}\right), a = \left( {1,1}\right), b = \left( {1, - 1}\right), c = \left( {-1,1}\right) \) , \( d = \left( {-1, - 1}\right) \) . Further, inside \( R \), the vector field \( {Y}_{1} \) is always in the direction of the positive \( x \) -axis and is of length 1 . Let \( q \in \left\lbrack {a, b}\right\rbrack \), and consider the orbit \( \gamma \left( q\right) \) of \( {Y}_{1} \) passing through the point \( q \) . Suppose \( \gamma \left( q\right) \) intersects \( \left\lbrack {c, d}\right\rbrack \) after \( q \) ; denote the first such intersection point by \( {T}_{q} \) . ![bea09977-be18-4815-a30e-4fa2fe3b219c_462_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_462_0.jpg) Figure 8.16 This determines a map \( T : \left\lbrack {a, b}\right\rbrack \rightarrow \left\lbrack {c, d}\right\rbrack \) ; and let its domain of definition be \( \Gamma \subset \left\lbrack {a, b}\right\rbrack \), where \( \Gamma \) can possibly be empty. Let \( {c}_{0},{d}_{0} \in \left\lbrack {a, b}\right\rbrack \) be the points, if they exist, such that \( {T}_{{c}_{0}} = c,{T}_{{d}_{0}} = d \) . For simplicity, we always assume that \( {ca} \) and \( {bd} \) of the rectangle \( R \) do not lie on the same orbit. LEMMA 2.3. 
Let the set \( \Gamma \subset \left\lbrack {a, b}\right\rbrack \) be the domain of definition for the mapping \( T : \left\lbrack {a, b}\right\rbrack \rightarrow \left\lbrack {c, d}\right\rbrack \) . Then \( \Gamma \) consists of a finite number of intervals. Moreover, suppose an endpoint \( S \) of these intervals has the property that \( S \notin \Gamma \) ; then the orbit \( \gamma \left( S\right) \) through \( S \) tends to a saddle point. Proof. Let \( q \in \Gamma \smallsetminus a \cup b \cup {c}_{0} \cup {d}_{0} \) . The continuity of the system implies that there exists a small neighborhood \( U\left( q\right) \) of \( q \) in \( \left\lbrack {a, b}\right\rbrack \) such that \( U\left( q\right) \subset \Gamma \) . That is, \( \Gamma \smallsetminus a \cup b \cup {c}_{0} \cup {d}_{0} \) is open in \( \left\lbrack {a, b}\right\rbrack \) ; and it is clearly the union of at most countably many disjoint open intervals. Let \( \left( {s,{s}^{\prime }}\right) \) be one of these intervals, and assume that \( s \notin \Gamma \) . Consider all the orbits which start from \( \left( {s,{s}^{\prime }}\right) \) and intersect \( \left\lbrack {c, d}\right\rbrack \) (see Figure 8.17). For all \( q \in \left( {s,{s}^{\prime }}\right) \), the arcs of the orbits \( \overset{⏜}{q{T}_{q}} \) form a "strip" \( \Delta \) . Consider the orbit \( \gamma \left( s\right) \) through \( s \) . Since \( s \notin \Gamma ,\gamma \left( s\right) \) must lie on the boundary \( \partial \Delta \) of \( \Delta \) and \( \omega \left( \gamma \right) \subset \partial \Delta \) . It can be readily shown that \( \Delta \) is homeomorphic to a rectangle in \( {R}^{2} \), excluding two parallel lines (as shown in Figure 8.18). ![bea09977-be18-4815-a30e-4fa2fe3b219c_462_1.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_462_1.jpg) FIGURE 8.17 ![bea09977-be18-4815-a30e-4fa2fe3b219c_463_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_463_0.jpg) Figure 8.18 Clearly, \( \omega \left( \gamma \right) \) can only be a critical point; and since the system \( {Y}_{1} \) is of type (1), this critical point must be a saddle point. On the other hand, since a system of type (1) can only have a finite number of critical points, the set \( \Gamma \smallsetminus a \cup b \cup {c}_{0} \cup {d}_{0} \) must be the union of finitely many disjoint open intervals. Hence, \( \Gamma \) consists of a finite number of open, closed, or half-open, half-closed intervals. LEMMA 2.4. Consider a point \( P \) in a nontrivial minimal set \( \mu \) of \( {Y}_{1} \) . Suppose that there exists a local coordinate square \( R \) surrounding \( P \), such that no orbit starting from the right side \( {ab} \) of \( R \) tends to a saddle point. Then \( {Y}_{1} \) can be approximated by a system which has a closed orbit passing through \( P \), and this closed orbit does not bound a cell. Proof. If \( T \) is defined at \( a \) and \( b \), then Lemma 2.3 implies that \( T \) is defined everywhere in \( \left\lbrack {a, b}\right\rbrack \) . If \( T \) is not defined at either \( a \) or \( b \), then the property of the nontrivial minimal set \( \mu \) implies that we can find a point \( \bar{P} \) in \( \mu \) with a corresponding square \( \bar{R} \) such that \( T \) is defined at both \( a \) and \( b \) of the right-hand side \( \left\lbrack {a, b}\right\rbrack \) ; and thus \( T \) is defined everywhere in \( \left\lbrack {a, b}\right\rbrack \) . (The proof is left to the reader.) Hence, we may assume that for \( P \in \mu \) and the corresponding square \( R \), the map \( T \) is defined everywhere on the right side \( \left\lbrack {a, b}\right\rbrack \) (as shown in Figure 8.19). Let \( \gamma \) denote the orbit through \( P \), and assume that there are infinitely many arcs of \( \gamma \) arbitrarily close to \( P \) . 
Let \( {q}_{i} \) be the \( i \) th intersection point after \( P \) of the orbit \( \gamma \) with the segment \( \sigma : x = - 1 \) , \( 0 \geq y \geq - 1/2 \) . For sufficiently large \( i \), the \( {q}_{i} \) are arbitrarily close to \( \left( {-1,0}\right) \) ; and let \( {P}_{i} \) be the corresponding point on the \( y \) -axis such that \( {P}_{i} \) is arbitrarily close to \( P = \left( {0,0}\right) \) . ![bea09977-be18-4815-a30e-4fa2fe3b219c_463_1.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_463_1.jpg) FIGURE 8.19 Let \( \varphi \) be a differentiable function such that \( \varphi > 0 \) inside \( R \) and \( \varphi = 0 \) outside \( R \) ; and let \( Z = \left( {0,1}\right) \) be the unit upward field in \( R \) . For \( 0 \leq u \leq 1 \) , define a new vector field on \( {M}^{2}, X\left( u\right) = {Y}_{1} + {\varepsilon u\varphi Z} \) . When \( \varepsilon \) is sufficiently small, this field can be made arbitrarily close to \( {Y}_{1} \) . Let \( \gamma \left( u\right) \) be the orbit of \( X\left( u\right) \) passing through \( P \) . Since \( T \) is defined everywhere in \( \left\lbrack {a, b}\right\rbrack \), the orbit \( \gamma \left( u\right) \) must intersect \( R \) infinitely many times, and it does not tend to a critical point. For each point \( y \in \sigma \), consider the orbits of \( X\left( 0\right) \) and \( X\left( 1\right) \) passing through the point \( y \) . Let \( \delta \left( y\right) > 0 \) denote the length of the arc segment on the \( y \) -axis determined by these two orbits. \( \delta \left( y\right) \) is continuous with respect to \( y \), and thus by compactness we conclude that there exists a constant \( \delta > 0 \) such that \( \delta \left( y\right) > \delta \) for all \( y \in \sigma \) . Choose a sufficiently large \( i \) such that \( \rho \left( {{P}_{i}, P}\right) < \delta \) . 
From the orientability of \( {M}^{2} \) it follows that we can find a sufficiently small \( {u}_{0} \) such that for \( u \leq {u}_{0} \), the \( i \) th intersection of \( \gamma \left( u\right) \) with \( \sigma \) is at a point \( {q}_{i}\left( u\right) \) above \( {q}_{i} \) . Hence, the corresponding point \( {P}_{i}\left( u\right) \), when \( \gamma \left( u\right) \) intersects the \( y \) -axis, also lies above \( {P}_{i} \) (as shown in Figure 8.20). Clearly, \( {P}_{i}\left( u\right) \) is continuous and monotonically increasing with respect to \( u \) . If \( {P}_{i}\left( {u}_{0}\right) \) is above the point \( P \), then there must exist
113_Topological Groups
Definition 3.45
Definition 3.45. For all \( x, i \in \omega \) let \( \beta \left( {x, i}\right) = \mathrm{{rm}}\left( {\operatorname{Exc}x,1 + \left( {i + 1}\right) \mathrm{L}x}\right) \) . Theorem 3.46 (Number-theoretic: Gödel’s \( \beta \) -function lemma). For any finite sequence \( {y}_{0},\ldots ,{y}_{n - 1} \) of natural numbers there is an \( x \in \omega \) such that \( \beta \left( {x, i}\right) = {y}_{i} \) for each \( i < n \) . Proof. Let \( s \) be the maximum of \( {y}_{0},\ldots ,{y}_{n - 1}, n \) . For each \( i < n \) let \( {m}_{i} = 1 + \left( {i + 1}\right) \cdot s! \) . Then for \( i < j < n \) the integers \( {m}_{i} \) and \( {m}_{j} \) are relatively prime. For, if a prime \( p \) divides both \( {m}_{i} \) and \( {m}_{j} \), it also divides \( {m}_{j} - {m}_{i} = \left( {j + 1}\right) \cdot s! - \left( {i + 1}\right) \cdot s! = \left( {j - i}\right) \cdot s! \) . Now \( p \nmid s! \), since \( p \mid 1 + \left( {i + 1}\right) s! \) . Hence \( p \mid j - i \) . But \( j - i < n \leq s \), and hence this would imply that \( p \mid s! \), which we know is impossible. Thus indeed \( {m}_{i} \) and \( {m}_{j} \) are relatively prime. Hence by the Chinese remainder theorem choose \( v \) such that \[ v \equiv {y}_{i}\left( {\;\operatorname{mod}\;{m}_{i}}\right) \;\text{ for each }i < n. \] Let \( x = \mathrm{J}\left( {v, s!}\right) \) . Then \( \operatorname{Exc}x = v \) by \( {3.43}\left( {iv}\right) \), and \( \mathrm{L}x = s! \) by \( {3.43}\left( v\right) \) . Hence if \( i < n \) we have \[ \beta \left( {x, i}\right) = \operatorname{rm}\left( {\operatorname{Exc}x,1 + \left( {i + 1}\right) \mathrm{L}x}\right) \] \[ = \operatorname{rm}\left( {v,{m}_{i}}\right) \] \[ = {y}_{i}\text{.} \] Definition 3.47. If \( f \) is a 1-place function with range \( \omega \), let \( {f}^{\left( -1\right) }y = {\mu x}\left( {{fx} = y}\right) \) for all \( y \in \omega \) . We say that \( {f}^{\left( -1\right) } \) is obtained from \( f \) by inversion. 
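The construction in the proof of Theorem 3.46 can be carried out numerically. The sketch below (Python) follows the proof step by step: \( s = \max \left( {{y}_{0},\ldots ,{y}_{n - 1}, n}\right) \), \( {m}_{i} = 1 + \left( {i + 1}\right) \cdot s! \), and \( v \) is obtained by the Chinese remainder theorem; the final pairing step \( x = \mathrm{J}\left( {v, s!}\right) \) is omitted, so we only check the residues \( v \bmod {m}_{i} = {y}_{i} \).

```python
from math import factorial, gcd, prod

def beta_witness(ys):
    # Proof of Theorem 3.46: s = max(y_0,...,y_{n-1}, n), m_i = 1 + (i+1)*s!.
    n = len(ys)
    s = max(list(ys) + [n])
    f = factorial(s)
    ms = [1 + (i + 1) * f for i in range(n)]
    # The m_i are pairwise relatively prime, as argued in the proof.
    assert all(gcd(ms[i], ms[j]) == 1 for i in range(n) for j in range(i + 1, n))
    # Chinese remainder theorem: v = y_i (mod m_i) for each i < n.
    M = prod(ms)
    v = sum(y * (M // m) * pow(M // m, -1, m) for y, m in zip(ys, ms)) % M
    return v, ms

ys = [3, 0, 2, 5]
v, ms = beta_witness(ys)
assert [v % m for m in ms] == ys
```

The modular inverse `pow(·, -1, m)` exists precisely because the \( {m}_{i} \) are pairwise coprime, which is the content of the first half of the proof.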
Theorem 3.48 (Julia Robinson). The class of recursive functions is the intersection of all classes \( A \) of functions such that \( + ,\Delta ,\mathrm{{Exc}},{\mathrm{U}}_{i}^{n} \in A \) (for \( 0 \leq i < n \) ), and such that \( A \) is closed under the operations of composition, and of inversion (applied to functions with range \( \omega \) ). Proof. Clearly the indicated intersection is a subset of the class of recursive functions ( \( {f}^{\left( -1\right) }y = {\mu x}\left( {\left| {{fx} - y}\right| = 0}\right) \), so we have here a special case of minimalization). Now suppose that \( A \) is a class with the properties indicated in the statement of the theorem. We want to show that every recursive function is in \( A \) . This will take several steps. The general idea of the proof is this: Inversion is a special case of minimalization, and the general case is obtained from inversion by using pairing functions. Primitive recursion is obtained by representing the computation of a function \( f \) as a finite sequence of the successive values of \( f \), coding the sequence into one number using the \( \beta \) function, and selecting that number out by minimalization. Our proof will begin with some preliminaries, giving a stock of members of \( A \), which leads to the fact that the pairing functions are in \( A \) . First note that for any \( x \in \omega ,{x}^{2} \leq {x}^{2} + x < {\left( x + 1\right) }^{2} \), and hence \( \operatorname{Exc}\left( {{x}^{2} + x}\right) = x \) . Thus (1) \[ \text{Exc has range}\omega \text{.} \] Next, (2) \[ {\operatorname{Exc}}^{\left( -1\right) }\left( {2x}\right) = {x}^{2} + {2x}\;\text{ for all }x \in \omega . \] For, obviously \( \operatorname{Exc}\left( {{x}^{2} + {2x}}\right) = {2x} \) . 
If \( \operatorname{Exc}\left( y\right) = {2x} \) with \( y < {x}^{2} + {2x} \), we may write \( y = {z}^{2} + {2x} < {\left( z + 1\right) }^{2} \), and so \( z < x \) and hence \( {\left( z + 1\right) }^{2} = {z}^{2} + {2z} + 1 \leq {z}^{2} + {2x} < {\left( z + 1\right) }^{2} \), a contradiction. Thus (2) holds. Again, (3) \[ {\operatorname{Exc}}^{\left( -1\right) }\left( {{2x} + 1}\right) = {x}^{2} + {4x} + 2\;\text{ for all }x \in \omega . \] For, \( {\left( x + 1\right) }^{2} = {x}^{2} + {2x} + 1 < {x}^{2} + {4x} + 2 < {x}^{2} + {4x} + 4 = {\left( x + 2\right) }^{2} \) , and hence \( \operatorname{Exc}\left( {{x}^{2} + {4x} + 2}\right) = {2x} + 1 \) . Now suppose \( \operatorname{Exc}\left( y\right) = {2x} + 1 \) , with \( y < {x}^{2} + {4x} + 2 \) . Choose \( z \) such that \( y = {z}^{2} + {2x} + 1 < {\left( z + 1\right) }^{2} \) . Then \( {z}^{2} + {2x} + 1 = y < {x}^{2} + {4x} + 2 = {\left( x + 1\right) }^{2} + {2x} + 1 \), and hence \( z \leq x \) . Hence \( {\left( z + 1\right) }^{2} = {z}^{2} + {2z} + 1 \leq {z}^{2} + {2x} + 1 = y < {\left( z + 1\right) }^{2} \), a contradiction. Thus (3) holds. From (2) we see that \( {\mathrm{C}}_{0}^{1}x = 0 = \operatorname{Exc} \circ s \circ {\operatorname{Exc}}^{\left( -1\right) }\left( {x + x}\right) \) for all \( x \in \omega \) (here \( s \) denotes the successor function: \( s{\operatorname{Exc}}^{\left( -1\right) }\left( {2x}\right) = {x}^{2} + {2x} + 1 = {\left( x + 1\right) }^{2} \), a square); hence (4) \[ {\mathrm{C}}_{0}^{1} \in A\text{.} \] Hence by composition with \( J \) , (5) \[ {\mathrm{C}}_{m}^{n} \in A\;\text{ for all }n > 0\text{ and all }m \in \omega . \] Now let \( x \ominus y = \operatorname{Exc}\left( {{\operatorname{Exc}}^{\left( -1\right) }\left( {{2x} + {2y}}\right) + {3x} + y + 4}\right) \) for all \( x, y \in \omega \) . 
Thus (6) \[ \ominus \in A\text{.} \] Now if \( x \geq y \), then \[ {\left( x + y + 2\right) }^{2} = {\left( x + y\right) }^{2} + {4x} + {4y} + 4 \] \[ \leq {\left( x + y\right) }^{2} + 2\left( {x + y}\right) + {3x} + y + 4 \] \[ = {\operatorname{Exc}}^{\left( -1\right) }\left( {{2x} + {2y}}\right) + {3x} + y + 4 \] by (2) \[ < {\left( x + y\right) }^{2} + {6x} + {6y} + 9 \] \[ = {\left( x + y + 3\right) }^{2}\text{.} \] Hence (7) \[ x \ominus y = x - y\;\text{ if }y \leq x. \] Let \( {fx} = {x}^{2} \) for all \( x \in \omega \) . Then by (2),(7), \( {fx} = {\operatorname{Exc}}^{\left( -1\right) }\left( {2x}\right) \ominus {2x} \) for all \( x \in \omega \), so (8) \[ f \in A\text{.} \] Next note that \( \operatorname{sg}x = \operatorname{Exc}s\left( {x}^{2}\right) \) and \( \overline{\operatorname{sg}}x = 1 \ominus \operatorname{sg}x \) for all \( x \in \omega \) . Thus (9) \[ \mathrm{{sg}},\overline{\mathrm{{sg}}} \in A\text{.} \] Furthermore, (10) Exc \( \circ s \) has range \( \omega \) . For, Exc \( {s0} = 0 \), and if \( x \neq 0 \), then Exc \( s\left( {{x}^{2} + x - 1}\right) = x \) . Now using 3.43(iii) we see that \( {px} = \operatorname{Exc}{\left( \operatorname{Exc} \circ s\right) }^{\left( -1\right) }\left( x\right) \) for all \( x \in \omega \) . Hence (11) \[ p \in A\text{.} \] Recall that \( \mathbb{E} \) is the set of even numbers. Next we show (12) \[ \chi \mathbb{E}\left( x\right) = \operatorname{Exc}\mathcal{{ss}}{\operatorname{Exc}}^{\left( -1\right) }x\;\text{ for all }x \in \omega . 
\] For, if \( x = {2y} \), then \[ \operatorname{Exc}\mathcal{{ss}}{\operatorname{Exc}}^{\left( -1\right) }x = \operatorname{Exc}\mathcal{{ss}}\left( {{y}^{2} + {2y}}\right) \] by (2) \[ = \operatorname{Exc}\left( {{y}^{2} + {2y} + 2}\right) \] \[ = 1\text{;} \] if \( x = {2y} + 1 \), then \[ \operatorname{Exc}\mathcal{{ss}}{\operatorname{Exc}}^{\left( -1\right) }x = \operatorname{Exc}\mathcal{{ss}}\left( {{y}^{2} + {4y} + 2}\right) \] by (3) \[ = \operatorname{Exc}\left( {{y}^{2} + {4y} + 4}\right) \] \[ = 0\text{. } \] From (12) we have: (13) \[ \chi \mathbb{E} \in A\text{.} \] Now let \( {gx} = 2\operatorname{Exc}x + \overline{\operatorname{sg}}\chi \mathbb{E}x \) for all \( x \in \omega \) . Thus (14) \[ g \in A\text{.} \] We claim: (15) \[ g\text{has range}\omega \text{.} \] For, if \( x = {2y} \) then, since \( {y}^{2} + y \) is even, \( g\left( {{y}^{2} + y}\right) = 2\operatorname{Exc}\left( {{y}^{2} + y}\right) = {2y} = x \) . If \( x = {2y} + 1 \) then, since \( {\left( y + 1\right) }^{2} + y \) is odd, \( g\left( {{\left( y + 1\right) }^{2} + y}\right) = {2y} + 1 = x \) . Let \( {hx} = \left\lbrack {x/2}\right\rbrack = \) greatest integer \( y \leq x/2 \) for all \( x \in \omega \) . Then (16) \( {hx} = \operatorname{Exc}{g}^{\left( -1\right) }x\; \) for all \( x \in \omega \), and hence \( h \in A. \) For, \( 2\operatorname{Exc}{g}^{\left( -1\right) }x + \overline{\mathrm{{sg}}}\chi \mathbb{E}{g}^{\left( -1\right) }x = x \) for any \( x \) ; thus if \( x \) is even, then \( 2\operatorname{Exc}{g}^{\left( -1\right) }x = x \) ; while if \( x \) is odd, \( 2\operatorname{Exc}{g}^{\left( -1\right) }x + 1 = x \), as desired. For any \( x \in \omega \), let \( {kx} = \left\lbrack {\left( {\operatorname{Exc}px}\right) /2}\right\rbrack + \operatorname{sg}x \) . Thus (17) \[ k \in A\text{.} \] Furthermore, (18) \[ k\left( {x}^{2}\right) = x\;\text{ for all }x. \] For, if \( x = 0 \) the result is obvious. 
If \( x \neq 0 \), then \( p{x}^{2} = {x}^{2} - 1 = {\left( x - 1\right) }^{2} + {2x} - 2 \), \( \operatorname{Exc}p{x}^{2} = {2x} - 2 \), and hence \( k\left( {x}^{2}\right) = x \), as desired. Let \( {lx} = \left\lbrack \sqrt{x}\right\rbrack \) for all \( x \in \omega \) . Then by (18), \( {lx} = k\left( {x \ominus \operatorname{Exc}x}\right) \), so (19) \[ l \in A\text{.} \] Hence by (8) and (19), (20) \[ \mathrm{J},\mathrm{L} \in A\text{.} \] (21) \[ \text{if}x < y\text{, then}x \ominus y = {3x} + y + 3\text{.} \] For, \[ {\left( x + y + 1\right) }^{2} = {\left( x + y\right) }^{2} + {2x} + {2y} + 1 \] \[ < {\left( x + y\right) }^{2} + {5x} + {3y} + 4 \] \[ < {\left( x + y\right) }^{2} + {4x} + {4y} + 4 \] \[ = {\left( x + y + 2\right) }^{2}\text{.} \] Since \( x \ominus y = \operatorname{Exc}\left( {{\left( x + y\right) }^{2} + 2\left( {x + y}\right) + {3x} + y + 4}\right) \), (21) now follows. (22) \( {\chi }_{ \geq }\left( {x, y}\right) = \operatorname{sg}\left\lbrack {\left( {x \ominus
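Identities (2), (3), (7), and (21) are easy to verify by machine. A sketch (Python), assuming the definition of \( \operatorname{Exc}x \) suggested by (1) and (19), namely the excess of \( x \) over the largest square \( \leq x \), i.e. \( \operatorname{Exc}x = x - {\left\lbrack \sqrt{x}\right\rbrack }^{2} \), and computing the inversion \( {\operatorname{Exc}}^{\left( -1\right) } \) by brute force:

```python
from math import isqrt

def exc(x):
    # assumed: Exc x = x - [sqrt(x)]^2, the excess over the largest square <= x
    return x - isqrt(x) ** 2

def exc_inv(y):
    # inversion: mu x (exc(x) = y); total since Exc has range omega, cf. (1)
    x = 0
    while exc(x) != y:
        x += 1
    return x

def ominus(a, b):
    # the function defined just before (6)
    return exc(exc_inv(2 * a + 2 * b) + 3 * a + b + 4)

for x in range(25):
    assert exc_inv(2 * x) == x * x + 2 * x          # identity (2)
    assert exc_inv(2 * x + 1) == x * x + 4 * x + 2  # identity (3)

for x in range(12):
    for y in range(12):
        if y <= x:
            assert ominus(x, y) == x - y            # identity (7)
        else:
            assert ominus(x, y) == 3 * x + y + 3    # identity (21)
```

The brute-force `exc_inv` mirrors the definition of inversion in Definition 3.47; it terminates exactly because (1) guarantees that \( \operatorname{Exc} \) has range \( \omega \).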
109_The rising sea Foundations of Algebraic Geometry
Definition 3.109
Definition 3.109. Given simplices \( A, B \in \sum \), their product is the simplex \( {AB} \) described in Theorem 3.108 and characterized by equation (3.9). The product is also denoted by \( {\operatorname{proj}}_{A}B \) and called the projection of \( B \) onto \( A \) . As in Section 1.4.6, equation (3.9) has the following consequence: Corollary 3.110. The product of simplices is associative. Hence \( \sum \) is a semigroup. Example 3.111. Let \( C \) and \( {C}^{\prime } \) be adjacent chambers. Let \( v,{v}^{\prime } \) be the vertices of \( C,{C}^{\prime } \) that are not in the common panel, as in Figure 3.9, and consider the product \( v{v}^{\prime } \) . ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_173_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_173_0.jpg) Fig. 3.9. Adjacent chambers. We show by two different methods that \( v{v}^{\prime } \leq C \) . Method 1: There is a minimal gallery \( C,{C}^{\prime } \) from \( v \) to \( {v}^{\prime } \), since \( v \) and \( {v}^{\prime } \) are not joinable. [They have the same type.] Hence \( v{v}^{\prime } \) is a face of the starting chamber \( C \) . Method 2: Use sign sequences. Assume for simplicity (and without loss of generality) that \( {\sigma }_{H}\left( C\right) = + \) for every wall \( H \), so that \( {\sigma }_{H}\left( v\right) \geq 0 \) for all \( H \) . We then have \( {\sigma }_{H}\left( {C}^{\prime }\right) = + \) for all \( H \) except the one containing \( P \mathrel{\text{:=}} C \cap {C}^{\prime } \) ; hence \( {\sigma }_{H}\left( {v}^{\prime }\right) \geq 0 \) for all \( H \) except the one containing \( P \) . Since \( {\sigma }_{H}\left( v\right) = + \) for that exceptional wall, it follows that \( {\sigma }_{H}\left( {v{v}^{\prime }}\right) \geq 0 \) for all \( H \) and hence that \( v{v}^{\prime } \leq C \) . We close this subsection by recording some connections between the poset and semigroup structures on \( \sum \), as in Proposition 1.41 and Exercise 1.44. 
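The sign-sequence calculus of Method 2 can be made concrete. The sketch below (Python) assumes the reading of equation (3.9) used there: \( {\sigma }_{H}\left( {AB}\right) = {\sigma }_{H}\left( A\right) \) whenever \( {\sigma }_{H}\left( A\right) \neq 0 \), and \( {\sigma }_{H}\left( {AB}\right) = {\sigma }_{H}\left( B\right) \) otherwise. It models sign sequences as vectors over a finite set of walls and checks, purely formally, associativity together with parts (1) and (2) of Proposition 3.112; note that not every sign vector is the sign sequence of an actual simplex of \( \sum \), so this verifies only the formal identities.

```python
from itertools import product

def mult(a, b):
    # sign rule from (3.9): sigma_H(AB) = sigma_H(A) if nonzero, else sigma_H(B)
    return tuple(x if x != 0 else y for x, y in zip(a, b))

def leq(a, b):
    # face relation on sign vectors: A <= B iff A agrees with B where nonzero
    return all(x == 0 or x == y for x, y in zip(a, b))

vecs = list(product((-1, 0, 1), repeat=3))
for a in vecs:
    for b in vecs:
        assert leq(a, mult(a, b))              # Proposition 3.112(1): A <= AB
        assert (mult(a, b) == b) == leq(a, b)  # Proposition 3.112(2)
        for c in vecs:
            assert mult(mult(a, b), c) == mult(a, mult(b, c))  # associativity
```

The associativity check is the combinatorial content of Corollary 3.110, restricted to the formal sign rule.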
The proofs are easy via sign sequences and are left to the reader. Proposition 3.112. Let \( A \) and \( B \) be arbitrary simplices in \( \sum \) . (1) \( A \leq {AB} \), with equality if and only if \( \operatorname{supp}B \leq \operatorname{supp}A \) . (2) \( A \leq B \) if and only if \( {AB} = B \) . (3) \( \operatorname{supp}A = \operatorname{supp}B \) if and only if \( {AB} = A \) and \( {BA} = B \) . (4) If \( \operatorname{supp}A = \operatorname{supp}B \), then left multiplication by \( B \) and \( A \) defines mutually inverse bijections \( {\sum }_{ \geq A} \rightleftarrows {\sum }_{ \geq B} \) . In particular, \( \dim A = \dim B \) . (5) \( {AB} \) and \( {BA} \) have the same support, which is the intersection of the walls containing both \( A \) and \( B \) . Corollary 3.113. For any simplices \( A, B \in \sum ,\dim {AB} = \dim {BA} \) . Consequently, \( \dim {AB} \geq \max \{ \dim A,\dim B\} \) . Proof. The first assertion follows immediately from parts (5) and (4) of the proposition. For the second, we have \( \dim {AB} \geq \dim A \) trivially because \( A \leq {AB} \), and similarly \( \dim {BA} \geq \dim B \) ; now use the fact that \( \dim {BA} = \dim {AB} \) . ## Exercises 3.114. Use Theorem 3.108 to give a new proof of Lemma 2.25. 3.115. Show that every root is a subsemigroup, and hence every intersection of roots is a subsemigroup. In particular, this applies to the support of any simplex (Definition 3.98). 3.116. Show that (finitely many) simplices \( A, B,\ldots, C \) are joinable if and only if they commute with one another in the semigroup \( \sum \), in which case their product is their least upper bound (see Exercise 1.43). Deduce, as in the proof of Proposition 1.127, that \( \sum \) is a flag complex. 3.117. Given simplices \( {A}_{1},{A}_{2}, B \in \sum \) with \( {A}_{1} \leq {A}_{2} \), show that \( d\left( {{A}_{1}, B}\right) \leq \) \( d\left( {{A}_{2}, B}\right) \), with equality if \( {A}_{2} \geq {A}_{1}B \) . 
In particular, \( d\left( {A, B}\right) = d\left( {{AB}, B}\right) \) for any two simplices \( A, B \) . 3.118. Figure 3.9 suggests that \( v{v}^{\prime } = C \) . Give examples to show that this is not necessarily the case. For instance, \( v{v}^{\prime } \) could be a vertex or an edge. 3.119. Recall that the link \( {L}_{A} \mathrel{\text{:=}} {\operatorname{lk}}_{\sum }A \) of any simplex \( A \) is again a Coxeter complex; hence it has a semigroup structure. Is it a subsemigroup of \( \sum \) ? If not, how is the product on \( {L}_{A} \) related to the product in \( \sum \) ? The remaining exercises are intended to show how the use of products can sometimes replace arguments based on the Tits cone. The intent of the exercises, then, is that they should be solved combinatorially, without the Tits cone. Given a chamber \( C \) and a panel \( P \) of \( C \), the wall containing \( P \) will be called a wall of \( C \) . Thus every chamber has exactly \( n + 1 \) walls if \( \dim \sum = n \) . 3.120. Fix a chamber \( C \) and let \( {\mathcal{H}}_{C} \) be its set of walls. (a) Show that \( C \) is defined by \( {\mathcal{H}}_{C} \) ; in other words, if \( D \) is a chamber such that \( {\sigma }_{H}\left( D\right) = {\sigma }_{H}\left( C\right) \) for all \( H \in {\mathcal{H}}_{C} \), then \( D = C \) . (b) Suppose \( A \) is a simplex such that \( {\sigma }_{H}\left( A\right) \leq {\sigma }_{H}\left( C\right) \) for all \( H \in {\mathcal{H}}_{C} \) . Show that \( A \leq C \) . (c) If \( A \) and \( B \) are faces of \( C \), show that \( A \leq B \) if and only if \( {\sigma }_{H}\left( A\right) \leq {\sigma }_{H}\left( B\right) \) for all \( H \in {\mathcal{H}}_{C} \) . (d) If \( A \leq C \), show that \( A \) is defined by \( {\mathcal{H}}_{C} \) ; in other words, if \( B \) is a simplex such that \( {\sigma }_{H}\left( B\right) = {\sigma }_{H}\left( A\right) \) for all \( H \in {\mathcal{H}}_{C} \), then \( B = A \) . 3.121. 
(a) Let \( C \) be a chamber, and let \( s \) and \( t \) be reflections with respect to two distinct walls of \( C \), denoted by \( {H}_{s} \) and \( {H}_{t} \) . Let \( m \) be the order of \( {st} \) , and assume \( m \geq 3 \) . If \( D \) is another chamber that also has \( {H}_{s} \) and \( {H}_{t} \) as two of its walls, show that either \( {H}_{s} \) and \( {H}_{t} \) both separate \( C \) from \( D \) or else neither of them separates \( C \) from \( D \) . (b) Give an example to show that we cannot drop the assumption that \( m \geq 3 \) in (a). (c) Generalize (a) as follows. Let \( {H}_{1},\ldots ,{H}_{k} \) be walls of a chamber \( C \) such that the corresponding reflections \( {s}_{i} \) generate an irreducible Coxeter group. If \( D \) is another chamber having \( {H}_{1},\ldots ,{H}_{k} \) as walls, show that either every \( {H}_{i} \) separates \( C \) from \( D \) or else no \( {H}_{i} \) separates \( C \) from \( D \) . 3.122. Use the previous exercise to give a combinatorial proof of the following fact, which we have proven earlier by different methods: If \( \left( {W, S}\right) \) is irreducible and \( {wS}{w}^{-1} = S \) for some \( w \neq 1 \) in \( W \), then \( W \) is finite and \( w \) is the longest element. [We gave two proofs of this for finite reflection groups, one algebraic and one geometric; see the proof of Corollary 1.91. And we generalized the algebraic proof to the infinite case in the proof of Proposition 2.73. The point of the exercise is that we now have the tools to generalize the geometric proof.] ## 3.6.5 Applications of Products In this brief subsection we use products to prove two results that will be needed later. Both proofs make use of the following lemma. Lemma 3.123. Let \( \sum = \sum \left( {W, S}\right) \), and let \( {W}_{A} \) for \( A \in \sum \) be the stabilizer of \( A \) in \( W \) . Then for any two simplices \( A, B \in \sum \) we have \( {W}_{AB} = {W}_{A} \cap {W}_{B} \) . Proof. 
It is clear that \( {W}_{A} \cap {W}_{B} \leq {W}_{AB} \) and that \( {W}_{AB} \leq {W}_{A} \) . [For the latter, note that \( A \leq {AB} \) and \( W \) is type-preserving.] So all that remains to show is that \( {W}_{AB} \) fixes \( B \) . This follows from Proposition 3.100 because \( B \in \operatorname{supp}{BA} = \operatorname{supp}{AB} \) . We can now prove a finiteness result that, a priori, is far from obvious: Proposition 3.124. Let \( A \) and \( B \) be arbitrary simplices of \( \sum \left( {W, S}\right) \) . Choose a chamber \( C \geq {AB} \) . Then every minimal gallery from \( A \) to \( B \) is equivalent under \( {W}_{A} \cap {W}_{B} \) to one that starts with \( C \) . In particular, there are only finitely many \( \left( {{W}_{A} \cap {W}_{B}}\right) \) -orbits of minimal galleries from \( A \) to \( B \) . Proof. By the lemma and Exercise 3.13, \( {W}_{A} \cap {W}_{B} \) is transitive on \( {\mathcal{C}}_{AB} \) . This implies the first assertion. For the second assertion, we need only recall that a minimal gallery from \( A \) to \( B \) starting with \( C \) must end with \( {BC} \), so there are only finitely many of these, one for each reduced decomposition of \( \delta \left( {C,{BC}}\right) \) \( \left\lbrack { = \delta \left( {A, B}\right) }\right\rbrack \) . Our next result is taken from Tits [247, Lemma 12.12]. Proposition 3.125. Let \( \sum \) be an irreducible Coxeter complex, and let \( H \) be a wall of \( \sum \) . Then \( \sum \) contains a chamber \( C \) that is disjoint from \( H \), in the sense that none of the vertices of \( C \) are in \( H \) . More generally, every simplex \( A \) disjoint from \( H \) is a face of a chamber disjoint from \( H \) . Note that we cannot drop the irreducibility assumption. 
For example, suppose \( \sum \) is of type \( {\mathrm{A}}_{1} \times {\mathrm{A}}_{1} \), i.e., \( \sum \) is the poset of cells associated with the reflection group \( \{ \pm 1\} \times \{ \pm 1\} \) acting on \( \mathbb{R} \times \mathbb{R} \) . Then there are two walls
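The sign-sequence rule behind the product used throughout this subsection — \( {\sigma }_{H}\left( {AB}\right) = {\sigma }_{H}\left( A\right) \) if \( {\sigma }_{H}\left( A\right) \neq 0 \), and \( {\sigma }_{H}\left( B\right) \) otherwise — can be experimented with directly. The following minimal Python sketch treats *all* sign vectors on a small set of walls as "simplices," which is only an illustrative model (in an actual Coxeter complex only certain sign sequences occur); the functions `prod` and `leq` and the three-wall setup are assumptions of the sketch, not the book's notation:

```python
from itertools import product as cartesian

def prod(A, B):
    # sigma_H(AB) = sigma_H(A) unless sigma_H(A) = 0, in which case sigma_H(B)
    return tuple(a if a != 0 else b for a, b in zip(A, B))

def leq(A, B):
    # face relation: A <= B iff A agrees with B on every wall where A is nonzero
    return all(a == 0 or a == b for a, b in zip(A, B))

simplices = list(cartesian((-1, 0, 1), repeat=3))  # sign vectors on 3 "walls"

# Associativity (cf. Corollary 3.110): the product makes a semigroup.
assert all(prod(prod(A, B), C) == prod(A, prod(B, C))
           for A in simplices for B in simplices for C in simplices)

# Mirrors of Proposition 3.112: (1) A <= AB, and (2) A <= B iff AB = B.
assert all(leq(A, prod(A, B)) for A in simplices for B in simplices)
assert all(leq(A, B) == (prod(A, B) == B) for A in simplices for B in simplices)
```

The first assertion checks that the "first nonzero sign wins" rule is associative; the other two mirror parts (1) and (2) of Proposition 3.112 in this toy model.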
1167_(GTM73)Algebra
Definition 3.6
Definition 3.6. A module J over a ring \( \mathrm{R} \) is said to be injective if given any diagram of \( \mathrm{R} \) -module homomorphisms ![a635b0ec-463f-4a06-bfab-0631c5cb2124_213_0.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_213_0.jpg) with top row exact (that is, \( \mathrm{g} \) a monomorphism), there exists an \( \mathrm{R} \) -module homomorphism \( \mathrm{h} : \mathrm{B} \rightarrow \mathrm{J} \) such that the diagram ![a635b0ec-463f-4a06-bfab-0631c5cb2124_213_1.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_213_1.jpg) is commutative (that is, \( \mathrm{{hg}} = \mathrm{f} \) ). Remarks analogous to those in the paragraph following Definition 3.1 apply here to unitary injective modules over a ring with identity. It is not surprising that the duals of many (but not all) of the preceding propositions may be readily proved. For example since in a category products are the dual concept of coproducts (direct sums), the dual of Proposition 3.5 is Proposition 3.7. A direct product of \( \mathrm{R} \) -modules \( \mathop{\prod }\limits_{{i \in I}}{\mathrm{\;J}}_{\mathrm{i}} \) is injective if and only if \( {\mathrm{J}}_{\mathrm{i}} \) is injective for every \( \mathrm{i} \in \mathbf{I} \) . ## PROOF. Exercise; see Proposition 3.5. Since the concept of a free module cannot be dualized (Exercise 13), there are no analogues of Theorems 3.2 or 3.4 (iii) for injective modules. However, Corollary 3.3 can be dualized. It states, in effect, that for every module \( A \) there is a projective module \( P \) and an exact sequence \( P \rightarrow A \rightarrow 0 \) . The dual of this statement is that for every module \( A \) there is an injective module \( J \) and an exact sequence \( 0 \rightarrow A \rightarrow J \) ; in other words, every module may be embedded in an injective module. The remainder of this section, which is not needed in the sequel, is devoted to proving this fact for unitary modules over a ring with identity. 
Once this has been done, the dual of Theorem 3.4 (i), (ii), is easily proved (Proposition 3.13). We begin by characterizing injective \( R \) -modules in terms of left ideals (submodules) of the ring \( R \) . Lemma 3.8. Let \( \mathrm{R} \) be a ring with identity. A unitary \( \mathrm{R} \) -module \( \mathrm{J} \) is injective if and only if for every left ideal \( \mathbf{L} \) of \( \mathbf{R} \), any \( \mathbf{R} \) -module homomorphism \( \mathbf{L} \rightarrow \mathbf{J} \) may be extended to an \( \mathrm{R} \) -module homomorphism \( \mathrm{R} \rightarrow \mathrm{J} \) . SKETCH OF PROOF. To say that \( f : L \rightarrow J \) may be extended to \( R \) means that there is a homomorphism \( h : R \rightarrow J \) such that the diagram ![a635b0ec-463f-4a06-bfab-0631c5cb2124_214_0.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_214_0.jpg) is commutative. Clearly, such an \( h \) always exists if \( J \) is injective. Conversely, suppose \( J \) has the stated extension property and suppose we are given a diagram of module homomorphisms ![a635b0ec-463f-4a06-bfab-0631c5cb2124_214_1.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_214_1.jpg) with top row exact. To show that \( J \) is injective we must find a homomorphism \( h : B \rightarrow J \) with \( {hg} = f \) . Let \( \mathcal{S} \) be the set of all \( R \) -module homomorphisms \( h : C \rightarrow J \), where \( \operatorname{Im}g \subset C \subset B \) . \( \mathcal{S} \) is nonempty since \( f{g}^{-1} : \operatorname{Im}g \rightarrow J \) is an element of \( \mathcal{S} \) ( \( g \) is a monomorphism). Partially order \( \mathcal{S} \) by extension: \( {h}_{1} \leq {h}_{2} \) if and only if \( \operatorname{Dom}{h}_{1} \subset \operatorname{Dom}{h}_{2} \) and \( {h}_{2} \mid \operatorname{Dom}{h}_{1} = {h}_{1} \) . Verify that the hypotheses of Zorn’s Lemma are satisfied and conclude that \( \mathcal{S} \) contains a maximal element \( h : H \rightarrow J \) with \( {hg} = f \) . We shall complete the proof by showing \( H = B \) .
If \( H \neq B \) and \( b \in B - H \), then \( L = \{ r \in R \mid {rb} \in H\} \) is a left ideal of \( R \) . The map \( L \rightarrow J \) given by \( r \mapsto h\left( {rb}\right) \) is a well-defined \( R \) -module homomorphism. By hypothesis there is an \( R \) -module homomorphism \( k : R \rightarrow J \) such that \( k\left( r\right) = h\left( {rb}\right) \) for all \( r \in L \) . Let \( c = k\left( {1}_{R}\right) \) and define a map \( \bar{h} : H + {Rb} \rightarrow J \) by \( a + {rb} \mapsto h\left( a\right) + {rc} \) . We claim that \( \bar{h} \) is well defined. For if \( {a}_{1} + {r}_{1}b = {a}_{2} + {r}_{2}b \in H + {Rb} \), then \( {a}_{1} - {a}_{2} = \left( {{r}_{2} - {r}_{1}}\right) b \in H \cap {Rb} \) . Hence \( {r}_{2} - {r}_{1} \in L \) and \( h\left( {a}_{1}\right) - h\left( {a}_{2}\right) = h\left( {{a}_{1} - {a}_{2}}\right) = h\left( {\left( {{r}_{2} - {r}_{1}}\right) b}\right) = k\left( {{r}_{2} - {r}_{1}}\right) = \left( {{r}_{2} - {r}_{1}}\right) k\left( {1}_{R}\right) = \left( {{r}_{2} - {r}_{1}}\right) c \) . Therefore, \( \bar{h}\left( {{a}_{1} + {r}_{1}b}\right) = h\left( {a}_{1}\right) + {r}_{1}c = h\left( {a}_{2}\right) + {r}_{2}c = \bar{h}\left( {{a}_{2} + {r}_{2}b}\right) \) and \( \bar{h} \) is well defined. Verify that \( \bar{h} : H + {Rb} \rightarrow J \) is an \( R \) -module homomorphism that is an element of the set \( \mathcal{S} \) . This contradicts the maximality of \( h \) since \( b \notin H \) and hence \( H \subsetneq H + {Rb} \) . Therefore, \( H = B \) and \( J \) is injective. An abelian group \( D \) is said to be divisible if given any \( y \in D \) and \( 0 \neq n \in \mathbf{Z} \), there exists \( x \in D \) such that \( {nx} = y \) . For example, the additive group \( \mathbf{Q} \) is divisible, but \( \mathbf{Z} \) is not (Exercise 4).
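The definition of divisibility can be checked mechanically on small examples. A short Python illustration using exact rational arithmetic; the sample elements and the brute-force search window are arbitrary choices, not from the text:

```python
from fractions import Fraction

def divides(y, n, candidates):
    """Return some x among candidates with n*x == y, or None if there is none."""
    return next((x for x in candidates if n * x == y), None)

# In Q the equation n*x = y is always solvable: take x = y/n.
y, n = Fraction(5, 3), 7
assert n * (y / n) == y

# In Z it can fail: 2*x = 1 has no integer solution, so Z is not divisible.
assert divides(1, 2, range(-10, 11)) is None
```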
It is easy to prove that a direct sum of abelian groups is divisible if and only if each summand is divisible and that the homomorphic image of a divisible group is divisible (Exercise 7). ## Lemma 3.9. An abelian group \( \mathrm{D} \) is divisible if and only if \( \mathrm{D} \) is an injective (unitary) Z-module. PROOF. If \( D \) is injective, \( y \in D \) and \( 0 \neq n \in \mathbf{Z} \), let \( f : \langle n\rangle \rightarrow D \) be the unique homomorphism determined by \( n \mapsto y;(\langle n\rangle \) is a free \( \mathbf{Z} \) -module by Theorems I.3.2 and II.1.1). Since \( D \) is injective, there is a homomorphism \( h : \mathbf{Z} \rightarrow D \) such that the diagram ![a635b0ec-463f-4a06-bfab-0631c5cb2124_215_0.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_215_0.jpg) is commutative. If \( x = h\left( 1\right) \), then \( {nx} = {nh}\left( 1\right) = h\left( n\right) = f\left( n\right) = y \) . Therefore, \( D \) is divisible. To prove the converse note that the only left ideals of \( \mathbf{Z} \) are the cyclic groups \( \langle n\rangle ,{n\varepsilon }\mathbf{Z} \) . If \( D \) is divisible and \( f : \langle n\rangle \rightarrow D \) is a homomorphism, then there exists \( {x\varepsilon D} \) with \( {nx} = f\left( n\right) \) . Define \( h : \mathbf{Z} \rightarrow D \) by \( 1 \mapsto x \) and verify that \( h \) is a homomorphism that extends \( f \) . Therefore, \( D \) is injective by Lemma 3.8. REMARK. A complete characterization of divisible abelian groups (injective unitary \( \mathbf{Z} \) -modules) is given in Exercise 11. ## Lemma 3.10. Every abelian group A may be embedded in a divisible abelian group. PROOF. By Theorem II.1.4 there is a free Z-module \( F \) and an epimorphism \( F \rightarrow A \) with kernel \( K \) so that \( F/K \cong A \) . 
Since \( F \) is a direct sum of copies of \( \mathbf{Z} \) (Theorem II.1.1) and \( \mathbf{Z} \subset \mathbf{Q}, F \) may be embedded in a direct sum \( D \) of copies of the rationals Q (Theorem I.8.10). But \( D \) is a divisible group by Proposition 3.7, Lemma 3.9, and the remarks preceding it. If \( f : F \rightarrow D \) is the embedding monomorphism, then \( f \) induces an isomorphism \( F/K \cong f\left( F\right) /f\left( K\right) \) by Corollary I.5.8. Thus the composition \( A \cong F/K \cong f\left( F\right) /f\left( K\right) \subset D/f\left( K\right) \) is a monomorphism. But \( D/f\left( K\right) \) is divisible since it is the homomorphic image of a divisible group. If \( R \) is a ring with identity and \( J \) is an abelian group, then \( {\operatorname{Hom}}_{Z}\left( {R, J}\right) \), the set of all \( \mathbf{Z} \) -module homomorphisms \( R \rightarrow J \), is an abelian group (Exercise 1.7). Verify that \( {\operatorname{Hom}}_{\mathbb{Z}}\left( {R, J}\right) \) is a unitary left \( R \) -module with the action of \( R \) defined by \( \left( {rf}\right) \left( x\right) = f\left( {xr}\right) \) , \( \left( {r, x\varepsilon R;f\varepsilon {\operatorname{Hom}}_{\mathbf{Z}}\left( {R, J}\right) }\right) \) . Lemma 3.11. If \( \mathrm{J} \) is a divisible abelian group and \( \mathrm{R} \) is a ring with identity, then \( {\operatorname{Hom}}_{\mathbf{Z}}\left( {\mathrm{R},\mathrm{J}}\right) \) is an injective left \( \mathrm{R} \) -module. SKETCH OF PROOF. By Lemma 3.8 it suffices to show that for each left ideal \( L \) of \( R \), every \( R \) -module homomorphism \( f : L \rightarrow {\operatorname{Hom}}_{\mathbf{Z}}\left( {R, J}\right) \) may be extended to an \( R \) -module homomorphism \( h : R \rightarrow {\operatorname{Hom}}_{\mathbf{Z}}\left( {R, J}\right) \) . The map \( g : L \rightarrow J \) given by \( g\left( a\right) = \left\lbrack {f\left( a\right) }\right\rbrack \left( {1}_{R}\right) \) is a group homomorphism. 
Since \( J \) is an injective \( \mathbf{Z} \) -module by Lemma 3.9 and we have the diagram ![a635b0ec-463f-4a06-bfab-0631c5cb2124_215_1.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_215_1.jpg) there is a group homomorphism \( \bar{g} : R \rightarrow J \) such that \( \bar{g} \mid L = g \) . Define \( h : R \rightarrow \) \( {\opera
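In the spirit of Lemma 3.10, one can watch a small cyclic group embed in a divisible group: \( \mathbf{Z}/n\mathbf{Z} \) embeds in \( \mathbf{Q}/\mathbf{Z} \) via \( k \mapsto k/n \) (mod 1), and \( \mathbf{Q}/\mathbf{Z} \) is divisible as a quotient of \( \mathbf{Q} \) . A hedged Python sketch — the representation of \( \mathbf{Q}/\mathbf{Z} \) by rationals in \( \lbrack 0,1) \) is our own encoding, not the book's:

```python
from fractions import Fraction

def mod1(q):
    """Reduce a rational to its representative in [0, 1) -- an element of Q/Z."""
    return q - (q.numerator // q.denominator)

n = 6
embed = {k: mod1(Fraction(k, n)) for k in range(n)}  # Z/6Z -> Q/Z, k |-> k/n

# Injective homomorphism: distinct classes map to distinct elements,
# and addition is respected mod 1.
assert len(set(embed.values())) == n
assert all(mod1(embed[a] + embed[b]) == embed[(a + b) % n]
           for a in range(n) for b in range(n))

# Divisibility of Q/Z: given y and m != 0, x = y/m solves m*x = y (mod 1).
y, m = Fraction(3, 4), 5
assert mod1(m * (y / m)) == mod1(y)
```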
1088_(GTM245)Complex Analysis
Definition 3.53
Definition 3.53. Let \( U \subset \mathbb{C} \) be a neighborhood of a point \( c \) . A function \( f \) that is holomorphic in \( {U}^{\prime } = U - \{ c\} \), a deleted neighborhood of the point \( c \), has a removable singularity at \( c \) if there is a holomorphic function in \( U \) that agrees with \( f \) on \( {U}^{\prime } \) . Otherwise \( c \) is called a singularity of \( f \) . Note that all singularities are isolated points. Let us consider two functions \( f \) and \( g \) having power series expansions at each point of a domain \( D \) in \( \widehat{\mathbb{C}} \) . Assume that neither function vanishes identically on \( D \) and fix \( c \in D \cap \mathbb{C} \) . Let \[ F\left( z\right) = \frac{f\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( f\right) }}\text{ and }G\left( z\right) = \frac{g\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( g\right) }} \] for \( z \in D \) . Then the functions \( F \) and \( G \) have removable singularities at \( c \), do not vanish there, and have power series expansions at each point of \( D \) . Furthermore, we define a new function \( h \) on \( D \) by \[ h\left( z\right) = \frac{f}{g}\left( z\right) = \frac{{\left( z - c\right) }^{{v}_{c}\left( f\right) }F\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( g\right) }G\left( z\right) }\text{ for all }z \in D \] and fixed \( c \in D \cap \mathbb{C} \) . There are exactly three distinct possibilities for the behavior of the function \( h \) at \( z = c \), which lead to the following definitions. Definition 3.54. (I) If \( {v}_{c}\left( g\right) > {v}_{c}\left( f\right) \), then \( h\left( c\right) = \infty \) (this defines \( h\left( c\right) \), and the resulting function \( h \) is continuous at \( c \) ). We say that \( h \) has a pole of order \( {v}_{c}\left( g\right) - {v}_{c}\left( f\right) \) at \( c \) . If \( {v}_{c}\left( g\right) - {v}_{c}\left( f\right) = 1 \), we say that the pole is simple. 
(II) If \( {v}_{c}\left( g\right) = {v}_{c}\left( f\right) \), then the singularity of \( h \) at \( c \) is removable, and, by definition, \( h\left( c\right) = \frac{F\left( c\right) }{G\left( c\right) } \neq 0 \) . (III) If \( {v}_{c}\left( g\right) < {v}_{c}\left( f\right) \), then the singularity is again removable and in this case \( h\left( c\right) = 0 \) . In all cases we set \( {v}_{c}\left( h\right) = {v}_{c}\left( f\right) - {v}_{c}\left( g\right) \) and call it the order or multiplicity of \( h \) at \( c \) . In cases (II) and (III) of the definition, \( h \) has a power series expansion at \( c \) as a consequence of the following result. Theorem 3.55. If a function \( f \) has a power series expansion at \( c \) and \( f\left( c\right) \neq 0 \) , then \( \frac{1}{f} \) also has a power series expansion at \( c \) . Proof. Without loss of generality we assume \( c = 0 \) and \( f\left( 0\right) = 1 \) . Thus \[ f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{z}^{n},{a}_{0} = 1, \] and the radius of convergence of the series is nonzero. We want to find the reciprocal power series, that is, a series \( g \) with positive radius of convergence, that we write as \[ g\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{b}_{n}{z}^{n} \] and satisfies \[ \left( {\sum {a}_{n}{z}^{n}}\right) \left( {\sum {b}_{n}{z}^{n}}\right) = 1 \] The LHS and the RHS are both power series, where the RHS is a power series expansion whose coefficients are all equal to zero except for the first one. Equating the first two coefficients on both sides, we obtain \[ {a}_{0}{b}_{0} = 1\text{, from where}{b}_{0} = 1\text{, and} \] \[ {a}_{1}{b}_{0} + {a}_{0}{b}_{1} = 0,\;\text{ from where }{b}_{1} = - {a}_{1}{b}_{0} = - {a}_{1}. \] Similarly, using the \( n \) -th coefficient of the power series when expanded for the LHS, for \( n \geq 1 \), we obtain \[ {a}_{n}{b}_{0} + {a}_{n - 1}{b}_{1} + \cdots + {a}_{0}{b}_{n} = 0. 
\] Thus by induction we define \[ {b}_{n} = - \mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{b}_{j}{a}_{n - j}, n \geq 1. \] Since \( \rho > 0 \), we have \( \frac{1}{\rho } < + \infty \) . Since \( \mathop{\limsup }\limits_{n}{\left| {a}_{n}\right| }^{\frac{1}{n}} = \frac{1}{\rho } \), there exists a positive number \( k \) such that \( \left| {a}_{n}\right| \leq {k}^{n} \) . We show by the use of induction, once again, that \( \left| {b}_{n}\right| \leq {2}^{n - 1}{k}^{n} \) for all \( n \geq 1 \) . For \( n = 1 \), we have \( {b}_{1} = - {a}_{1} \) and hence \( \left| {b}_{1}\right| = \left| {a}_{1}\right| \leq k \) . Suppose the inequality holds for \( 1 \leq j \leq n \) for some \( n \geq 1 \) . Then \[ \left| {b}_{n + 1}\right| \leq \mathop{\sum }\limits_{{j = 0}}^{n}\left| {b}_{j}\right| \left| {a}_{n + 1 - j}\right| = \left| {a}_{n + 1}\right| + \mathop{\sum }\limits_{{j = 1}}^{n}\left| {b}_{j}\right| \left| {a}_{n + 1 - j}\right| \] \[ \leq {k}^{n + 1} + \mathop{\sum }\limits_{{j = 1}}^{n}{2}^{j - 1}{k}^{j}{k}^{n + 1 - j} \] \[ = {k}^{n + 1}\left( {1 + {2}^{n} - 1}\right) \text{.} \] Thus there is a reciprocal series, with radius of convergence \( \sigma \) satisfying \[ \frac{1}{\sigma } = \mathop{\limsup }\limits_{n}{\left| {b}_{n}\right| }^{\frac{1}{n}} \leq \mathop{\lim }\limits_{n}\left( {2}^{1 - \frac{1}{n}}\right) k = {2k} \] and therefore nonzero. Corollary 3.56. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \) and \( f \) a function defined on \( D \) . If \( f \) has a power series expansion at each point of \( D \) and \( f\left( z\right) \neq 0 \) for all \( z \in D \), then \( \frac{1}{f} \) has a power series expansion at each point of \( D \) . Definition 3.57. For each domain \( D \subseteq \widehat{\mathbb{C}} \), we define \( \mathbf{H}\left( D\right) = \{ f : D \rightarrow \mathbb{C};f \) has a power series expansion at each point of \( D\} . \) We will see in Chap. 
5 that \( \mathbf{H}\left( D\right) \) is the set of holomorphic functions on \( D \) . Corollary 3.58. Assume that \( D \) is a domain in \( \widehat{\mathbb{C}} \) . The set \( \mathbf{H}\left( D\right) \) is an integral domain and an algebra over \( \mathbb{C} \) . Its units are the functions that never vanish on \( D \) . Definition 3.59. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \) . A function \( f : D \rightarrow \widehat{\mathbb{C}} \) is meromorphic on \( D \) if it is locally \( {}^{9} \) the ratio of two functions having power series expansions (with the denominator not identically zero). The set of meromorphic functions on \( D \) is denoted by \( \mathbf{M}\left( D\right) \) . Recall that, by our convention, \( \mathbf{M}{\left( D\right) }_{ \neq 0} \) is the set of meromorphic functions with the constant function 0 omitted, where \( 0\left( z\right) = 0 \) for all \( z \) in \( D \) . \( {}^{9} \) A property \( P \) is satisfied locally on an open set \( D \) if for each point \( c \in D \), there exists a neighborhood \( U \subset D \) of \( c \) such that \( P \) is satisfied in \( U \) . Corollary 3.60. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \), let \( c \) be any point in \( D \cap \mathbb{C} \), and let \( f \in \mathbf{M}{\left( D\right) }_{ \neq 0} \) . There exist a connected neighborhood \( U \) of \( c \) in \( D \), an integer \( n = {v}_{c}\left( f\right) \), and a unit \( g \in \mathbf{H}\left( U\right) \) such that \[ f\left( z\right) = {\left( z - c\right) }^{n}g\left( z\right) \text{ for all }z \in U. \] Remark 3.61. If \( \infty \in D \), an appropriate version of the above Corollary exists for \( c = \infty \) ; see Exercise 3.7. Corollary 3.62. If \( D \) is a domain in \( \widehat{\mathbb{C}} \), then the set \( \mathbf{M}\left( D\right) \) is a field and an algebra over \( \mathbb{C} \) . Corollary 3.63.
If \( D \) is a domain and \( c \in D \), then \[ {v}_{c} : \mathbf{M}{\left( D\right) }_{ \neq 0} \rightarrow \mathbb{Z} \] is a homomorphism; that is, \( {v}_{c}\left( {f \cdot g}\right) = {v}_{c}\left( f\right) + {v}_{c}\left( g\right) \) for all \( f \) and \( g \) in \( \mathbf{M}{\left( D\right) }_{ \neq 0} \) . Defining \( {v}_{c}\left( 0\right) = + \infty \), we also have \[ {v}_{c}\left( {f + g}\right) \geq \min \left\{ {{v}_{c}\left( f\right) ,{v}_{c}\left( g\right) }\right\} \text{ for all }f\text{ and }g\text{ in }\mathbf{M}\left( D\right) ; \] that is, \( {v}_{c} \) is a (discrete) valuation \( {}^{10} \) (of rank one) on \( \mathbf{M}\left( D\right) \) . Remark 3.64. The converse statement also holds; it is nontrivial and not established in this book. The next corollary defines the term Laurent series, the natural generalization of power series for functions in \( \mathbf{H}\left( D\right) \) to functions in \( \mathbf{M}\left( D\right) \) . Corollary 3.65. If \( f \in \mathbf{M}{\left( D\right) }_{ \neq 0} \) and \( c \in D \cap \mathbb{C} \), then \( f \) has a Laurent series expansion at \( c \) ; that is, there exist a \( \mu \in \mathbb{Z}\left( {\mu = {v}_{c}\left( f\right) }\right) \), a sequence of complex numbers \( {\left\{ {a}_{n}\right\} }_{n = \mu }^{\infty } \) with \( {a}_{\mu } \neq 0 \), and a deleted neighborhood \( {U}^{\prime } \) of \( c \) such that \[ f\left( z\right) = \mathop{\sum }\limits_{{n = \mu }}^{\infty }{a}_{n}{\left( z - c\right) }^{n} \] for all \( z \in {U}^{\prime } \) . The corresponding power series \[ \mathop{\sum }\limits_{{n = \max \left( {0,\mu }\right) }}^{\infty }{a}_{n}{\left( z - c\right) }^{n} \] converges uniformly and absolutely on compact subsets of \( U = {U}^{\prime } \cup \{ c\} \) . \( {}^{10} \) Standard, but not universal, terminology. Remark 3.66.
If \( \infty \in D \), then for all sufficiently large real numbers \( R \), the Laurent series representing \( f \) in \( \{ \left| z\right| > R\} \cup \{ \infty \} \) has the form \[ f\left( z\right) = \mathop{\sum }\limits_{{n = \mu }}^{\infty }{a}_{n}{\left( \frac{1}{z}\right) }^{n}. \] Corollary 3.67. If \( f \in \mathbf{M}\left
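The recursion from the proof of Theorem 3.55 — \( {b}_{0} = 1 \) and \( {b}_{n} = - \mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{b}_{j}{a}_{n - j} \) for a series with \( {a}_{0} = 1 \) — is directly computable. A minimal Python sketch (the function name and the geometric-series example are our own illustrative choices):

```python
from fractions import Fraction

def reciprocal_series(a, N):
    """First N+1 coefficients of 1/f for f = sum a_n z^n with a_0 = 1,
    via b_0 = 1 and b_n = -sum_{j<n} b_j a_{n-j} (proof of Theorem 3.55)."""
    assert a[0] == 1
    b = [Fraction(1)]
    for n in range(1, N + 1):
        b.append(-sum(b[j] * a[n - j] for j in range(n)))
    return b

# f(z) = 1 - z  =>  1/f = 1 + z + z^2 + ...  (the geometric series)
a = [Fraction(c) for c in (1, -1)] + [Fraction(0)] * 10
assert reciprocal_series(a, 6) == [Fraction(1)] * 7

# Sanity check: the Cauchy product of f and its reciprocal is the series 1.
b = reciprocal_series(a, 8)
conv = [sum(a[j] * b[n - j] for j in range(n + 1)) for n in range(9)]
assert conv == [Fraction(1)] + [Fraction(0)] * 8
```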
110_The Schwarz Function and Its Generalization to Higher Dimensions
Definition 2.28
Definition 2.28. Let \( M \) be a nonempty subset of \( {\mathbb{R}}^{n} \) and \( x \in M \) . A vector \( d \in {\mathbb{R}}^{n} \) is called a tangent direction of \( M \) at \( x \) if there exist a sequence \( {x}_{n} \in M \) converging to \( x \) and a nonnegative sequence \( {\alpha }_{n} \) such that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\alpha }_{n}\left( {{x}_{n} - x}\right) = d \] The tangent cone of \( M \) at \( x \), denoted by \( {T}_{M}\left( x\right) \), is the set of all tangent directions of \( M \) at \( x \) . This definition is sufficient for our purposes. We remark that the same definition is valid in a topological vector space. A detailed study of this and several related concepts is needed in nonsmooth analysis; see [230] and [199, 200]. Theorem 2.29. (Lyusternik) Let \( f : U \rightarrow {\mathbb{R}}^{m} \) be a \( {C}^{1} \) map, where \( U \subset {\mathbb{R}}^{n} \) is an open set. Let \( M = {f}^{-1}\left( {f\left( {x}_{0}\right) }\right) \) be the level set of a point \( {x}_{0} \in U \) . If the derivative \( {Df}\left( {x}_{0}\right) \) is a linear map onto \( {\mathbb{R}}^{m} \), then the tangent cone of \( M \) at \( {x}_{0} \) is the null space of the linear map \( {Df}\left( {x}_{0}\right) \), that is, \[ {T}_{M}\left( {x}_{0}\right) = \left\{ {d \in {\mathbb{R}}^{n} : {Df}\left( {x}_{0}\right) d = 0}\right\} . \] Remark 2.30. Let \( f = \left( {{f}_{1},\ldots ,{f}_{m}}\right) \), where \( \left\{ {f}_{i}\right\} \) are the components functions of \( f \) . It is easy to verify that \[ \operatorname{Ker}{Df}\left( {x}_{0}\right) = \left\{ {d \in {\mathbb{R}}^{n} : \left\langle {\nabla {f}_{i}\left( {x}_{0}\right), d}\right\rangle = 0, i = 1,\ldots, m}\right\} \] and that the surjectivity of \( {Df}\left( {x}_{0}\right) \) is equivalent to the linear independence of the gradient vectors \( {\left\{ \nabla {f}_{i}\left( {x}_{0}\right) \right\} }_{1}^{m} \) . Proof. 
We may assume that \( {x}_{0} = 0 \) and \( f\left( {x}_{0}\right) = 0 \), by considering the function \( x \mapsto f\left( {x + {x}_{0}}\right) - f\left( {x}_{0}\right) \) if necessary. Define \( A \mathrel{\text{:=}} {Df}\left( 0\right) \) . The proof of the inclusion \( {T}_{M}\left( 0\right) \subseteq \operatorname{Ker}A \) is easy: if \( d \in {T}_{M}\left( 0\right) \), then there exist points \( x\left( t\right) = \) \( {td} + o\left( t\right) \in M \), and we have \[ 0 = f\left( {0 + {td} + o\left( t\right) }\right) = f\left( 0\right) + {tDf}\left( 0\right) \left( d\right) + o\left( t\right) = {tDf}\left( 0\right) \left( d\right) + o\left( t\right) . \] Dividing both sides by \( t \) and letting \( t \rightarrow 0 \), we obtain \( {Df}\left( 0\right) \left( d\right) = 0 \) . The proof of the reverse inclusion \( \operatorname{Ker}A \subseteq {T}_{M}\left( 0\right) \) is based on the idea that the equation \( f\left( x\right) = 0 \) can be written as \( f\left( {y, z}\right) = 0 \) in a form that is suitable for applying the implicit function theorem. Define \( K \mathrel{\text{:=}} \operatorname{Ker}A \) and \( L \mathrel{\text{:=}} {K}^{ \bot } \) . Since \( A \) is onto \( {\mathbb{R}}^{m} \), we can identify \( K \) and \( L \) with \( {\mathbb{R}}^{n - m} \) and \( {\mathbb{R}}^{m} \), respectively, by introducing a suitable basis in \( {\mathbb{R}}^{n} \) . We write a point \( x \in {\mathbb{R}}^{n} \) in the form \( x = \left( {y, z}\right) \in K \times L \) . We have \( A = \left\lbrack {{D}_{y}f\left( 0\right) ,{D}_{z}f\left( 0\right) }\right\rbrack \), and \[ 0 = A\left( K\right) = \left\{ {A\left( {{d}_{1},0}\right) : {d}_{1} \in {\mathbb{R}}^{n - m}}\right\} = {D}_{y}f\left( 0\right) \left( {\mathbb{R}}^{n - m}\right) , \] so that \( {D}_{y}f\left( 0\right) = 0 \) . Since \( A \) has rank \( m \), it follows that \( {D}_{z}f\left( 0\right) \) is nonsingular. 
Theorem 2.26 implies that there exist neighborhoods \( {U}_{1} \subseteq {\mathbb{R}}^{n - m} \) and \( {U}_{2} \subseteq \) \( {\mathbb{R}}^{m} \) around the origin and a \( {C}^{1} \) map \( \alpha : {U}_{1} \rightarrow {U}_{2},\alpha \left( 0\right) = 0 \), such that \( x = \left( {y, z}\right) \in {U}_{1} \times {U}_{2} \) satisfies \( f\left( x\right) = 0 \) if and only if \( z = \alpha \left( y\right) \) . The equation \( f\left( x\right) = 0 \) can then be written as \( f\left( {y,\alpha \left( y\right) }\right) = 0 \) . Differentiating this equation and using the chain rule, we obtain \[ 0 = {D}_{y}f\left( {y,\alpha \left( y\right) }\right) + {D}_{z}f\left( {y,\alpha \left( y\right) }\right) {D\alpha }\left( y\right) . \] At the origin \( x = 0 \) we have \( {D}_{y}f\left( 0\right) = 0 \) and \( {D}_{z}f\left( 0\right) \) nonsingular, so that \( {D\alpha }\left( 0\right) = 0 \) . If \( \left| y\right| \) is small, we have \[ \alpha \left( y\right) = \alpha \left( 0\right) + {D\alpha }\left( 0\right) y + o\left( y\right) = o\left( y\right) . \] Let \( d = \left( {{d}_{1},0}\right) \in K \) . As \( t \rightarrow 0 \), the point \( x\left( t\right) \mathrel{\text{:=}} \left( {t{d}_{1},\alpha \left( {t{d}_{1}}\right) }\right) = \left( {t{d}_{1}, o\left( t\right) }\right) \) lies in \( M \), that is, \( f\left( {x\left( t\right) }\right) = 0 \), and satisfies \( \left( {x\left( t\right) - {td}}\right) /t = \left( {0, o\left( t\right) }\right) /t \rightarrow 0 \) . This implies that \( K \subseteq {T}_{M}\left( 0\right) \), and the theorem is proved. ## 2.6 Morse’s Lemma Let \( f : U \rightarrow \mathbb{R} \) be a \( {C}^{2 + k}\left( {k \geq 0}\right) \) function on an open set \( U \subseteq {\mathbb{R}}^{n} \) . Recall that a critical point \( x \in U \) is called nondegenerate if the Hessian matrix \( {D}^{2}f\left( x\right) \) is nonsingular.
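Lyusternik's theorem can be illustrated numerically for the unit sphere \( M = {f}^{-1}\left( 0\right) \) with \( f\left( x\right) = {\left| x\right| }^{2} - 1 \) in \( {\mathbb{R}}^{3} \) : at \( {x}_{0} = \left( {1,0,0}\right) \) we have \( {Df}\left( {x}_{0}\right) d = 2\left\langle {{x}_{0}, d}\right\rangle \), so the kernel consists of vectors with first coordinate zero, and each such \( d \) is exhibited as a tangent direction by an explicit curve on the sphere. A small self-contained Python check; the particular \( d \), the curve, and the tolerances are our own illustrative choices:

```python
import math

# M = unit sphere in R^3, x0 = (1, 0, 0); Ker Df(x0) = {d : d_1 = 0}.
x0 = (1.0, 0.0, 0.0)
d = (0.0, 3.0, 4.0)                      # a kernel direction (first entry 0)
norm_d = math.hypot(d[1], d[2])          # = 5.0

# The curve x(t) = (cos(t|d|), sin(t|d|) d/|d|) stays on the sphere and
# satisfies (x(t) - x0)/t -> d as t -> 0+, exhibiting d in T_M(x0).
for t in (1e-2, 1e-4, 1e-6):
    s = t * norm_d
    x = (math.cos(s), math.sin(s) * d[1] / norm_d, math.sin(s) * d[2] / norm_d)
    assert abs(sum(c * c for c in x) - 1.0) < 1e-12        # x(t) lies in M
    diff = max(abs((x[i] - x0[i]) / t - d[i]) for i in range(3))
    assert diff < 10 * t * norm_d**2                       # quotient -> d
```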
Morse's lemma, due originally to Morse [202], states that after a local, possibly nonlinear, change of coordinates, the function \( f \) is identical to its quadratic form \( q\left( x\right) \mathrel{\text{:=}} f\left( {x}_{0}\right) + \frac{1}{2}\left\langle {{D}^{2}f\left( {x}_{0}\right) \left( {x - {x}_{0}}\right), x - {x}_{0}}\right\rangle \) . Thus, the quadratic function \( q\left( x\right) \) determines the behavior of the function \( f \) around \( {x}_{0} \) . Morse's original proof uses the Gram-Schmidt process. A modern version of the proof can be found in Milnor [197]. The simple proof below is from [6]. It has the virtue that the same proof, with obvious modifications, works in Banach spaces. The following technical result is needed in the proof of Morse's lemma. Lemma 2.31. Let \( {S}^{n} \) be the space of \( n \times n \) symmetric matrices, \( A \in {S}^{n} \) nonsingular, and let \( {S}_{A}^{n} \) be the vector space of \( n \times n \) matrices \( X \) such that \( {AX} \) is symmetric. The quadratic map \[ {q}_{A} : {S}_{A}^{n} \rightarrow {S}^{n}\;\text{ defined by }{q}_{A}\left( X\right) = {X}^{T}{AX} \] is locally one-to-one around \( I \in {S}_{A}^{n} \) . Consequently, there exist open neighborhoods \( U \ni I \) and \( V \ni A \) such that \( {q}_{A}^{-1} : V \rightarrow U \) is a well-defined, infinitely differentiable map. Proof. We have \[ q\left( {I + {tH}}\right) \mathrel{\text{:=}} {q}_{A}\left( {I + {tH}}\right) = \left( {I + t{H}^{T}}\right) A\left( {I + {tH}}\right) \] \[ = A + t\left( {{H}^{T}A + {AH}}\right) + {t}^{2}{H}^{T}{AH} = A + {2tAH} + {t}^{2}A{H}^{2}, \] so that \( {Dq}\left( I\right) \left( H\right) = {2AH} \) . The mapping \( {Dq}\left( I\right) \) is one-to-one, since \( {Dq}\left( I\right) \left( H\right) = {2AH} = 0 \) implies \( H = 0 \), due to the fact that \( A \) is nonsingular.
The map \( {Dq}\left( I\right) \) is also onto, since given \( Y \in {S}^{n} \), the matrix \( X \mathrel{\text{:=}} {A}^{-1}Y/2 \) is in \( {S}_{A}^{n} \) and satisfies \( {Dq}\left( I\right) \left( X\right) = Y \) . The rest of the lemma follows from the inverse function theorem (Corollary 2.27). Theorem 2.32. (Morse’s lemma) Let \( k \geq 1 \) and \( f : U \rightarrow \mathbb{R} \) be a \( {C}^{2 + k} \) function on an open set \( U \subseteq {\mathbb{R}}^{n} \) . If \( {x}_{0} \in U \) is a nondegenerate critical point of \( f \), then there exist open neighborhoods \( V \ni {x}_{0} \) and \( W \ni 0 \) in \( {\mathbb{R}}^{n} \) and a \( {C}^{k} \) diffeomorphism \( \varphi : V \rightarrow W \) such that \[ f\left( x\right) = f\left( {x}_{0}\right) + \frac{1}{2}\left\langle {{D}^{2}f\left( {x}_{0}\right) \varphi \left( x\right) ,\varphi \left( x\right) }\right\rangle . \] Proof. We may assume without any loss of generality that \( U \) is a convex set, \( {x}_{0} = 0 \), and \( f\left( 0\right) = 0 \) . Let \( 0 \neq x \in U \), and define \( \alpha \left( t\right) \mathrel{\text{:=}} f\left( {tx}\right) \) . We have \[ \alpha \left( 1\right) = \alpha \left( 0\right) + {\alpha }^{\prime }\left( 0\right) + {\int }_{0}^{1}\left( {1 - t}\right) {\alpha }^{\prime \prime }\left( t\right) {dt} \] by Theorem 1.5, and since \( {\alpha }^{\prime }\left( t\right) = \langle \nabla f\left( {tx}\right), x\rangle ,\nabla f\left( 0\right) = 0 \) and \( {\alpha }^{\prime \prime }\left( t\right) = \) \( \left\langle {{D}^{2}f\left( {tx}\right) x, x}\right\rangle \), we obtain \[ f\left( x\right) = \frac{1}{2}\langle A\left( x\right) x, x\rangle ,\;\text{ where }\;A\left( x\right) \mathrel{\text{:=}} 2{\int }_{0}^{1}\left( {1 - t}\right) {D}^{2}f\left( {tx}\right) {dt}. 
\] Note that \( A : U \rightarrow {S}^{n} \) is a \( {C}^{k} \) map, and \( A\left( 0\right) = 2\left( {{\int }_{0}^{1}\left( {1 - t}\right) {dt}}\right) {D}^{2}f\left( 0\right) = \) \( {D}^{2}f\left( 0\right) \) . Consequently, the map \[ H : {V}_{0} \rightarrow Z\;\text{ defined by }H = {q}_{A\left( 0\right) }^{-1} \circ A, \] where \( {V}_{0} \) is a neighborhood of \( 0 \in {\mathbb{R}}^{n} \) and \( Z \) is a neighborhood of \( I \in {S}_{A\left( 0\right) }^{n} \) as in Lemma 2.31, is also \( {C}^{k} \) . We have \( A = {q}_{A\left( 0\right) } \circ H \), that is, \( A\left( x\right) = H{\left( x\right) }^{T}A\left( 0\right) H\left( x\right) \) on \( {V}_{0} \) .
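The integral representation \( f\left( x\right) = \frac{1}{2}\langle A\left( x\right) x, x\rangle \) with \( A\left( x\right) = 2{\int }_{0}^{1}\left( {1 - t}\right) {D}^{2}f\left( {tx}\right) {dt} \) can be sanity-checked numerically. A minimal sketch (the cubic \( f \), the two-variable setting, and the midpoint quadrature are our illustrative choices, not part of the text):

```python
# Sanity check of f(x) = (1/2) <A(x) x, x> with A(x) = 2 * int_0^1 (1-t) D^2 f(tx) dt.
# The cubic f below has a nondegenerate critical point at the origin.
def f(x, y):
    return x*x + 3*x*y + y*y + x**3

def hess(x, y):
    # exact Hessian D^2 f at (x, y)
    return [[2.0 + 6.0*x, 3.0], [3.0, 2.0]]

def A(x, y, m=4000):
    # midpoint-rule quadrature of 2 * int_0^1 (1 - t) D^2 f(t x, t y) dt
    acc = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(m):
        t = (k + 0.5) / m
        H = hess(t * x, t * y)
        for i in range(2):
            for j in range(2):
                acc[i][j] += 2.0 * (1.0 - t) * H[i][j] / m
    return acc

def quad(M, x, y):
    # (1/2) <M v, v> for v = (x, y)
    return 0.5 * (M[0][0]*x*x + (M[0][1] + M[1][0])*x*y + M[1][1]*y*y)

x0, y0 = 0.3, -0.2
assert abs(f(x0, y0) - quad(A(x0, y0), x0, y0)) < 1e-6
assert abs(A(0.0, 0.0)[0][0] - hess(0.0, 0.0)[0][0]) < 1e-6  # A(0) = D^2 f(0)
```

The last assertion checks \( A\left( 0\right) = {D}^{2}f\left( 0\right) \), the identity used above to define \( H \).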
1098_(GTM254)Algebraic Function Fields and Codes
Definition 2.2.6
Definition 2.2.6. Let \( G \) and \( D = {P}_{1} + \ldots + {P}_{n} \) be divisors as before (i.e., the \( {P}_{i} \) are pairwise distinct places of degree one, and \( \operatorname{supp}G \cap \operatorname{supp}D = \varnothing \) ). Then we define the code \( {C}_{\Omega }\left( {D, G}\right) \subseteq {\mathbb{F}}_{q}^{n} \) by \[ {C}_{\Omega }\left( {D, G}\right) \mathrel{\text{:=}} \left\{ {\left( {{\omega }_{{P}_{1}}\left( 1\right) ,\ldots ,{\omega }_{{P}_{n}}\left( 1\right) }\right) \mid \omega \in {\Omega }_{F}\left( {G - D}\right) }\right\} . \] Also the code \( {C}_{\Omega }\left( {D, G}\right) \) is called an algebraic geometry code. The relation between the codes \( {C}_{\mathcal{L}}\left( {D, G}\right) \) and \( {C}_{\Omega }\left( {D, G}\right) \) will be explained in Theorem 2.2.8 and Proposition 2.2.10. Our first result about \( {C}_{\Omega }\left( {D, G}\right) \) is an analogue to Theorem 2.2.2. Theorem 2.2.7. \( {C}_{\Omega }\left( {D, G}\right) \) is an \( \left\lbrack {n,{k}^{\prime },{d}^{\prime }}\right\rbrack \) code with parameters \[ {k}^{\prime } = i\left( {G - D}\right) - i\left( G\right) \;\text{ and }\;{d}^{\prime } \geq \deg G - \left( {{2g} - 2}\right) . \] Under the additional hypothesis \( \deg G > {2g} - 2 \), we have \( {k}^{\prime } = i\left( {G - D}\right) \geq \) \( n + g - 1 - \deg G \) . If moreover \( {2g} - 2 < \deg G < n \) then \[ {k}^{\prime } = n + g - 1 - \deg G. \] Proof. Let \( P \in {\mathbb{P}}_{F} \) be a place of degree one and let \( \omega \) be a Weil differential with \( {v}_{P}\left( \omega \right) \geq - 1 \) . We claim that \[ {\omega }_{P}\left( 1\right) = 0\; \Leftrightarrow \;{v}_{P}\left( \omega \right) \geq 0. \] (2.7) In order to prove this we use Proposition 1.7.3 which states that for an integer \( r \in \mathbb{Z} \) , \[ {v}_{P}\left( \omega \right) \geq r \Leftrightarrow {\omega }_{P}\left( x\right) = 0\text{ for all }x \in F\text{ with }{v}_{P}\left( x\right) \geq - r. 
\] (2.8) The implication \( \Leftarrow \) of (2.7) is an obvious consequence of (2.8). Conversely, suppose that \( {\omega }_{P}\left( 1\right) = 0 \) . Let \( x \in F \) with \( {v}_{P}\left( x\right) \geq 0 \) . Since \( \deg P = 1 \), we can write \( x = a + y \) with \( a \in {\mathbb{F}}_{q} \) and \( {v}_{P}\left( y\right) \geq 1 \) . Then \[ {\omega }_{P}\left( x\right) = {\omega }_{P}\left( a\right) + {\omega }_{P}\left( y\right) = a \cdot {\omega }_{P}\left( 1\right) + 0 = 0. \] (Observe that \( {\omega }_{P}\left( y\right) = 0 \) because \( {v}_{P}\left( \omega \right) \geq - 1 \) and \( {v}_{P}\left( y\right) \geq 1 \), cf. (2.8).) Hence (2.7) is proved. Now we consider the \( {\mathbb{F}}_{q} \) -linear mapping \[ {\varrho }_{D} : \left\{ \begin{matrix} {\Omega }_{F}\left( {G - D}\right) & \rightarrow \;{C}_{\Omega }\left( {D, G}\right) , \\ \omega & \mapsto \left( {{\omega }_{{P}_{1}}\left( 1\right) ,\ldots ,{\omega }_{{P}_{n}}\left( 1\right) }\right) . \end{matrix}\right. \] \( {\varrho }_{D} \) is surjective, and its kernel is \( {\Omega }_{F}\left( G\right) \) by (2.7). Therefore \[ {k}^{\prime } = \dim {\Omega }_{F}\left( {G - D}\right) - \dim {\Omega }_{F}\left( G\right) = i\left( {G - D}\right) - i\left( G\right) . \] (2.9) Let \( {\varrho }_{D}\left( \omega \right) \in {C}_{\Omega }\left( {D, G}\right) \) be a codeword of weight \( m > 0 \) . Then \( {\omega }_{{P}_{i}}\left( 1\right) = 0 \) for certain indices \( i = {i}_{1},\ldots ,{i}_{n - m} \), so \[ \omega \in {\Omega }_{F}\left( {G - \left( {D - \mathop{\sum }\limits_{{j = 1}}^{{n - m}}{P}_{{i}_{j}}}\right) }\right) \] by (2.7). Since \( {\Omega }_{F}\left( A\right) \neq 0 \) implies \( \deg A \leq {2g} - 2 \) (by Theorem 1.5.17), we obtain \[ {2g} - 2 \geq \deg G - \left( {n - \left( {n - m}\right) }\right) = \deg G - m. 
\] Hence the minimum distance \( {d}^{\prime } \) of \( {C}_{\Omega }\left( {D, G}\right) \) satisfies the inequality \( {d}^{\prime } \geq \) \( \deg G - \left( {{2g} - 2}\right) \) . Assume now that \( \deg G > {2g} - 2 \) . By Theorem 1.5.17 we obtain \( i\left( G\right) = 0 \) . Now (2.9) and the Riemann-Roch Theorem yield \[ {k}^{\prime } = i\left( {G - D}\right) = \ell \left( {G - D}\right) - \deg \left( {G - D}\right) - 1 + g \] \[ = \ell \left( {G - D}\right) + n + g - 1 - \deg G\text{.} \] The remaining assertions of Theorem 2.2.7 follow immediately. In analogy to Definition 2.2.4, the integer \( \deg G - \left( {{2g} - 2}\right) \) is called the designed distance of \( {C}_{\Omega }\left( {D, G}\right) \) . There is a close relation between the codes \( {C}_{\mathcal{L}}\left( {D, G}\right) \) and \( {C}_{\Omega }\left( {D, G}\right) \) : Theorem 2.2.8. The codes \( {C}_{\mathcal{L}}\left( {D, G}\right) \) and \( {C}_{\Omega }\left( {D, G}\right) \) are dual to each other; i.e., \[ {C}_{\Omega }\left( {D, G}\right) = {C}_{\mathcal{L}}{\left( D, G\right) }^{ \bot }. \] Proof. First we note the following fact: Consider a place \( P \in {\mathbb{P}}_{F} \) of degree one, a Weil differential \( \omega \) with \( {v}_{P}\left( \omega \right) \geq - 1 \) and an element \( x \in F \) with \( {v}_{P}\left( x\right) \geq 0 \) . Then \[ {\omega }_{P}\left( x\right) = x\left( P\right) \cdot {\omega }_{P}\left( 1\right) . \] (2.10) In order to prove (2.10) we write \( x = a + y \) with \( a = x\left( P\right) \in {\mathbb{F}}_{q} \) and \( {v}_{P}\left( y\right) > 0 \) . Then \( {\omega }_{P}\left( x\right) = {\omega }_{P}\left( a\right) + {\omega }_{P}\left( y\right) = a \cdot {\omega }_{P}\left( 1\right) + 0 = x\left( P\right) \cdot {\omega }_{P}\left( 1\right) \), by (2.8). Next we show that \( {C}_{\Omega }\left( {D, G}\right) \subseteq {C}_{\mathcal{L}}{\left( D, G\right) }^{ \bot } \) . 
So let \( \omega \in {\Omega }_{F}\left( {G - D}\right) \) and \( x \in \mathcal{L}\left( G\right) \) . We obtain \[ 0 = \omega \left( x\right) = \mathop{\sum }\limits_{{P \in {\mathbb{P}}_{F}}}{\omega }_{P}\left( x\right) \] (2.11) \[ = \mathop{\sum }\limits_{{i = 1}}^{n}{\omega }_{{P}_{i}}\left( x\right) \] (2.12) \[ = \mathop{\sum }\limits_{{i = 1}}^{n}x\left( {P}_{i}\right) \cdot {\omega }_{{P}_{i}}\left( 1\right) \] (2.13) \[ = \left\langle {\left( {{\omega }_{{P}_{1}}\left( 1\right) ,\ldots ,{\omega }_{{P}_{n}}\left( 1\right) }\right) ,\left( {x\left( {P}_{1}\right) ,\ldots, x\left( {P}_{n}\right) }\right) }\right\rangle , \] where \( \langle \cdot , \cdot \rangle \) denotes the canonical inner product on \( {\mathbb{F}}_{q}^{n} \) . We still have to justify the single steps in the above computation. (2.11) follows from Proposition 1.7.2 and the fact that Weil differentials vanish on principal adeles. For \( P \in {\mathbb{P}}_{F} \smallsetminus \left\{ {{P}_{1},\ldots ,{P}_{n}}\right\} \) we have \( {v}_{P}\left( x\right) \geq - {v}_{P}\left( \omega \right) \) (as \( x \in \mathcal{L}\left( G\right) \) and \( \omega \in \Omega \left( {G - D}\right) \) ), so \( {\omega }_{P}\left( x\right) = 0 \) by (2.8). This proves (2.12). Finally, (2.13) follows from (2.10). Hence \( {C}_{\Omega }\left( {D, G}\right) \subseteq {C}_{\mathcal{L}}{\left( D, G\right) }^{ \bot } \) . It is now sufficient to show that the codes \( {C}_{\Omega }\left( {D, G}\right) \) and \( {C}_{\mathcal{L}}{\left( D, G\right) }^{ \bot } \) have the same dimension. 
Using Theorems 2.2.2, 2.2.7 and the Riemann-Roch Theorem we find: \[ \dim {C}_{\Omega }\left( {D, G}\right) = i\left( {G - D}\right) - i\left( G\right) \] \[ = \ell \left( {G - D}\right) - \deg \left( {G - D}\right) - 1 + g - \left( {\ell \left( G\right) - \deg G - 1 + g}\right) \] \[ = \deg D + \ell \left( {G - D}\right) - \ell \left( G\right) \] \[ = n - \left( {\ell \left( G\right) - \ell \left( {G - D}\right) }\right) \] \[ = n - \dim {C}_{\mathcal{L}}\left( {D, G}\right) = \dim {C}_{\mathcal{L}}{\left( D, G\right) }^{ \bot }. \] Our next aim is to prove that \( {C}_{\Omega }\left( {D, G}\right) \) can be represented as \( {C}_{\mathcal{L}}\left( {D, H}\right) \) with an appropriate divisor \( H \) . For this purpose we need the following lemma. Lemma 2.2.9. There exists a Weil differential \( \eta \) such that \[ {v}_{{P}_{i}}\left( \eta \right) = - 1\;\text{ and }\;{\eta }_{{P}_{i}}\left( 1\right) = 1\;\text{ for }i = 1,\ldots, n. \] Proof. Choose an arbitrary Weil differential \( {\omega }_{0} \neq 0 \) . By the Weak Approximation Theorem there is an element \( z \in F \) with \( {v}_{{P}_{i}}\left( z\right) = - {v}_{{P}_{i}}\left( {\omega }_{0}\right) - 1 \) for \( i = 1,\ldots, n \) . Setting \( \omega \mathrel{\text{:=}} z{\omega }_{0} \) we obtain \( {v}_{{P}_{i}}\left( \omega \right) = - 1 \) . Therefore \( {a}_{i} \mathrel{\text{:=}} {\omega }_{{P}_{i}}\left( 1\right) \neq 0 \) by (2.7). Again by the Approximation Theorem we find \( y \in F \) such that \( {v}_{{P}_{i}}\left( {y - {a}_{i}}\right) > 0 \) . It follows that \( {v}_{{P}_{i}}\left( y\right) = 0 \) and \( y\left( {P}_{i}\right) = {a}_{i} \) . We put \( \eta \mathrel{\text{:=}} {y}^{-1}\omega \) and obtain \( {v}_{{P}_{i}}\left( \eta \right) = {v}_{{P}_{i}}\left( \omega \right) = - 1 \), and \[ {\eta }_{{P}_{i}}\left( 1\right) = {\omega }_{{P}_{i}}\left( {y}^{-1}\right) = {y}^{-1}\left( {P}_{i}\right) \cdot {\omega }_{{P}_{i}}\left( 1\right) = {a}_{i}^{-1} \cdot {a}_{i} = 1. 
\] Proposition 2.2.10. Let \( \eta \) be a Weil differential such that \( {v}_{{P}_{i}}\left( \eta \right) = - 1 \) and \( {\eta }_{{P}_{i}}\left( 1\right) = 1 \) for \( i = 1,\ldots, n \) . Then \[ {C}_{\mathcal{L}}{\left( D, G\right) }^{ \bot } = {C}_{\Omega }\left( {D, G}\right) = {C}_{\mathcal{L}}\left( {D, H}\right) \;\text{ with }\;H \mathrel{\text{:=}} D - G + \left( \eta \right) . \] Proof. The equality \( {C}_{\mathcal{L}}{\left( D, G\right) }^{ \bot } = {C}_{\Omega }\left( {D, G}\right) \) was already shown in Theorem 2.2.8. Observe that \( \operatorname{supp}\left( {D - G + \left( \eta \right) }\right) \cap \operatorname{supp}D = \varnothing \) .
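In the simplest case \( g = 0 \) (the rational function field) with \( G = \left( {k - 1}\right) {P}_{\infty } \) and \( D \) the sum of all places of degree one, \( {C}_{\mathcal{L}}\left( {D, G}\right) \) is a Reed-Solomon code, and Theorem 2.2.8 gives \( \dim {C}_{\Omega }\left( {D, G}\right) = n - k \). A small sanity check over \( {\mathbb{F}}_{7} \) (our illustrative choice; the dual candidate spanned by low-degree monomial evaluations is the classical description of the dual of such a code, not a construction from the text):

```python
# Dual-code check over F_7: C = { (f(a))_{a in F_7} : deg f < k } and
# C' = { (g(a))_{a in F_7} : deg g < n - k } are orthogonal, consistent with
# dim C_Omega(D, G) = n - dim C_L(D, G) in Theorem 2.2.8 (genus-zero case).
p, k = 7, 3
pts = list(range(p))
n = len(pts)  # n = 7 evaluation points (places of degree one)

def monomial_row(i):
    # evaluations of X^i at all points: one generator row
    return [pow(a, i, p) for a in pts]

G_rows = [monomial_row(i) for i in range(k)]      # spans C
H_rows = [monomial_row(i) for i in range(n - k)]  # spans the dual candidate

for g in G_rows:
    for h in H_rows:
        assert sum(gi * hi for gi, hi in zip(g, h)) % p == 0
assert len(G_rows) + len(H_rows) == n
```

The orthogonality rests on \( \mathop{\sum }\limits_{{a \in {\mathbb{F}}_{q}}}{a}^{j} = 0 \) for \( 0 \leq j \leq q - 2 \); with \( \deg G = k - 1 \) the formula \( {k}^{\prime } = n + g - 1 - \deg G \) indeed gives \( {k}^{\prime } = n - k \).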
1075_(GTM233)Topics in Banach Space Theory
Definition 9.1.1
Definition 9.1.1. A block basic sequence \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) of a basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) , \[ {u}_{n} = \mathop{\sum }\limits_{{i = {p}_{n - 1} + 1}}^{{p}_{n}}{a}_{i}{e}_{i} \] is a constant-coefficient block basic sequence if for each \( n \) there is a constant \( {c}_{n} \) such that \( {a}_{i} = {c}_{n} \) or \( {a}_{i} = 0 \) for \( {p}_{n - 1} + 1 \leq i \leq {p}_{n} \) ; that is, \[ {u}_{n} = {c}_{n}\mathop{\sum }\limits_{{i \in {A}_{n}}}{e}_{i} \] where \( {A}_{n} \) is a subset of integers contained in \( \left( {{p}_{n - 1},{p}_{n}}\right\rbrack \) . Definition 9.1.2. A basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of a Banach space \( X \) is perfectly homogeneous if every normalized constant-coefficient block basic sequence of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is equivalent to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . This definition is enough to force every perfectly homogeneous basis to be unconditional, since \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) must be equivalent to \( {\left( {\epsilon }_{n}{e}_{n}\right) }_{n = 1}^{\infty } \) for every choice of signs \( {\epsilon }_{n} = \pm 1 \) . Lemma 9.1.3. Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be a normalized perfectly homogeneous basis of a Banach space \( X \) . Then \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is uniformly equivalent to all its normalized constant-coefficient block basic sequences. 
That is, there is a constant \( \mathrm{K} \geq 1 \) such that for all normalized constant-coefficient block basic sequences \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) and \( {\left( {v}_{n}\right) }_{n = 1}^{\infty } \) of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) we have \[ {\mathrm{K}}^{-1}\begin{Vmatrix}{\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{u}_{k}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{v}_{k}}\end{Vmatrix} \leq \mathrm{K}\begin{Vmatrix}{\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{u}_{k}}\end{Vmatrix} \] (9.1) for any choice of scalars \( {\left( {a}_{i}\right) }_{i = 1}^{n} \) and every \( n \in \mathbb{N} \) . Proof. It suffices to prove such an inequality for the basic sequence \( {\left( {e}_{n}\right) }_{n = {n}_{0} + 1}^{\infty } \) for some \( {n}_{0} \) . If the lemma fails, we can inductively build constant-coefficient block basic sequences \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) and \( {\left( {v}_{n}\right) }_{n = 1}^{\infty } \) of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) such that for some increasing sequence of integers \( {\left( {p}_{n}\right) }_{n = 0}^{\infty } \) with \( {p}_{0} = 0 \) and some scalars \( {\left( {a}_{i}\right) }_{i = 1}^{\infty } \) we have \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{i = {p}_{n - 1} + 1}}^{{p}_{n}}{a}_{i}{u}_{i}}\end{Vmatrix} < {2}^{-n} \] but \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{i = {p}_{n - 1} + 1}}^{{p}_{n}}{a}_{i}{v}_{i}}\end{Vmatrix} > {2}^{n} \] which contradicts the assumption of perfect homogeneity. Equation (9.1) also yields that for every increasing sequence of integers \( {\left( {n}_{k}\right) }_{k = 1}^{\infty } \) , \[ {\mathrm{K}}^{-1}\begin{Vmatrix}{\mathop{\sum }\limits_{{k = 1}}^{n}{e}_{k}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{k = 1}}^{n}{e}_{{n}_{k}}}\end{Vmatrix} \leq \mathrm{K}\begin{Vmatrix}{\mathop{\sum }\limits_{{k = 1}}^{n}{e}_{k}}\end{Vmatrix}. 
\] (9.2) Let us suppose that \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a normalized basis for a Banach space \( X \) . For each \( N \in \mathbb{N} \) put \[ \lambda \left( N\right) = \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{N}{e}_{n}}\end{Vmatrix} \] Obviously, \[ {\mathrm{K}}_{\mathrm{b}}^{-1} \leq \lambda \left( N\right) \leq N,\;N \in \mathbb{N}, \] (9.3) where \( {\mathrm{K}}_{\mathrm{b}} \geq 1 \) is the basis constant. Notice that if \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is 1-unconditional, then the sequence \( {\left( \lambda \left( N\right) \right) }_{N = 1}^{\infty } \) is nondecreasing. Lemma 9.1.4. Suppose that \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a normalized unconditional basis of a Banach space \( X \) . If \( \mathop{\sup }\limits_{N}\lambda \left( N\right) < \infty \), then \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is equivalent to the canonical basis of \( {c}_{0} \) . Proof. For every \( N \) and scalars \( {\left( {a}_{n}\right) }_{n = 1}^{N} \) we have \[ \frac{1}{{\mathrm{\;K}}_{\mathrm{u}}}\mathop{\sup }\limits_{n}\left| {a}_{n}\right| \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{N}{a}_{n}{e}_{n}}\end{Vmatrix} \leq {\mathrm{K}}_{\mathrm{u}}\mathop{\sup }\limits_{n}\left| {a}_{n}\right| \mathop{\sup }\limits_{N}\lambda \left( N\right) , \] i.e., \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is equivalent to the unit vector basis of \( {c}_{0} \) . Lemma 9.1.5. Let \( {\left( {e}_{i}\right) }_{i = 1}^{\infty } \) be a normalized perfectly homogeneous basis of a Banach space \( X \) . Then, if \( \mathrm{K} \) is the constant given by Lemma 9.1.3, we have \[ \frac{1}{{\mathrm{\;K}}^{3}}\lambda \left( n\right) \lambda \left( m\right) \leq \lambda \left( {nm}\right) \leq {\mathrm{K}}^{3}\lambda \left( n\right) \lambda \left( m\right) \] (9.4) for all \( m, n \) in \( \mathbb{N} \) . Proof. 
Consider a family \( {\left( {f}_{j}\right) }_{j = 1}^{m} \) of \( m \) disjoint blocks of length \( n \) of the basis \( {\left( {e}_{i}\right) }_{i = 1}^{\infty } \) , \[ {f}_{j} = \mathop{\sum }\limits_{{i = \left( {j - 1}\right) n + 1}}^{{jn}}{e}_{i},\;j = 1,\ldots, m. \] Let \( {c}_{j} = \begin{Vmatrix}{f}_{j}\end{Vmatrix} \) for \( j = 1,\ldots, m \) . By hypothesis, \[ {\mathrm{K}}^{-1}\lambda \left( n\right) \leq {c}_{j} \leq \mathrm{K}\lambda \left( n\right) ,\;j = 1,2,\ldots, m. \] Note that \( \mathrm{K} \) can also serve as an unconditional constant (of course, not necessarily the optimal) for \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \), so that \[ \frac{1}{{\mathrm{K}}^{2}\lambda \left( n\right) }\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{m}{f}_{j}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{m}{c}_{j}^{-1}{f}_{j}}\end{Vmatrix} \leq \frac{{\mathrm{K}}^{2}}{\lambda \left( n\right) }\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{m}{f}_{j}}\end{Vmatrix}. \] Now, again by Lemma 9.1.3, \[ {\mathrm{K}}^{-1}\lambda \left( m\right) \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{m}{c}_{j}^{-1}{f}_{j}}\end{Vmatrix} \leq \mathrm{K}\lambda \left( m\right) \] Hence, \[ \frac{\lambda \left( {mn}\right) }{{\mathrm{K}}^{3}\lambda \left( n\right) } \leq \lambda \left( m\right) \leq \frac{{\mathrm{K}}^{3}\lambda \left( {mn}\right) }{\lambda \left( n\right) }. \] Before continuing, we need the following lemma, which is very useful in many different contexts. Lemma 9.1.6. Let \( {\left( {s}_{n}\right) }_{n = 1}^{\infty } \) be a sequence of real numbers. (i) Suppose that \( {s}_{m + n} \leq {s}_{m} + {s}_{n} \) for all \( m, n \in \mathbb{N} \) . 
Then \( \mathop{\lim }\limits_{n}{s}_{n}/n \) exists (possibly equal to \( - \infty \) ) and \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{{s}_{n}}{n} = \mathop{\inf }\limits_{n}\frac{{s}_{n}}{n} \] (ii) Suppose that \( \left| {{s}_{m + n} - {s}_{m} - {s}_{n}}\right| \leq 1 \) for all \( m, n \in \mathbb{N} \) . Then there is a constant \( c \) such that \[ \left| {{s}_{n} - {cn}}\right| \leq 1,\;n = 1,2,\ldots \] Proof. (i) Fix \( n \in \mathbb{N} \) . Then, each \( m \in \mathbb{N} \) can be written as \( m = \ln + r \) for some \( 0 \leq l \) and \( 0 \leq r < n \) . The hypothesis implies that \[ {s}_{ln} \leq l{s}_{n},\;{s}_{{ln} + r} \leq l{s}_{n} + {s}_{r}. \] Thus \[ \frac{{s}_{m}}{m} = \frac{{s}_{{ln} + r}}{{ln} + r} \leq \frac{l}{{ln} + r}{s}_{n} + \frac{{s}_{r}}{{ln} + r} \leq \frac{{s}_{n}}{n} + \frac{\mathop{\max }\limits_{{0 \leq r < n}}{s}_{r}}{m}, \] and so \[ \mathop{\limsup }\limits_{{m \rightarrow \infty }}\frac{{s}_{m}}{m} \leq \frac{{s}_{n}}{n},\;n \in \mathbb{N}. \] (9.5) Hence, \[ \mathop{\limsup }\limits_{{m \rightarrow \infty }}\frac{{s}_{m}}{m} \leq \mathop{\inf }\limits_{n}\frac{{s}_{n}}{n} \] (ii) Let \( {t}_{n} = {s}_{n} + 1 \) and \( {u}_{n} = {s}_{n} - 1 \) . Then \( {\left( {t}_{n}\right) }_{n = 1}^{\infty } \) and \( {\left( -{u}_{n}\right) }_{n = 1}^{\infty } \) both obey the conditions of \( \left( i\right) \) . Hence \( \lim {t}_{n}/n = \lim {u}_{n}/n \) both exist and are finite; let \( c \) be their common value. By \( \left( i\right) \) we have \[ \frac{{u}_{n}}{n} \leq c \leq \frac{{t}_{n}}{n},\;n = 1,2,\ldots , \] and the conclusion follows. Lemma 9.1.7. Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be a normalized perfectly homogeneous basis of a Banach space \( X \) . 
Then, either \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is equivalent to the canonical basis of \( {c}_{0} \) or there exist a constant \( C \) and \( 1 \leq p < \infty \) such that \[ {C}^{-1}{\left| A\right| }^{\frac{1}{p}} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{n \in A}}{e}_{n}}\end{Vmatrix} \leq C{\left| A\right| }^{\frac{1}{p}} \] for every finite subset \( A \) of \( \mathbb{N} \) . Proof. If we plug \( m = {2}^{k} \) and \( n = {2}^{j} \) in equation (9.4), we obtain \[ \frac{1}{{\mathrm{\;K}}^{3}}\lambda \left( {2}^{k}\right) \lambda \left( {2}^{j}\right) \leq \lambda \left( {2}^{j + k}\right) \leq {\mathrm{K}}^{3}\lambda \left( {2}^{k}\right) \lambda \left( {2}^{j}\right) . \] (9.6) Let \[ h\left( k\right) = {\log }_{2}\lambda \left( {2}^{k}\right) ,\;k = 0,1,2,\ldots \] From (9.6) we get \[ \left| {h\left( j\right) + h\left( k\right) - h\left( {j + k}\right) }\right| \leq 3{\log }_{2}\mathrm{\;K}. \] By (ii) of Lemma 9.1.6 there is a constant \( c \) such that \[ \left| {h\left( j\right) - {cj}}\right| \leq 3{\log }_{2}\mathrm{\;K},\;j = 1,2,\ldots \]
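Part (i) of Lemma 9.1.6 is Fekete's subadditive lemma. A quick numerical illustration (the sequence \( {s}_{n} = \lceil \alpha n\rceil \) is our choice, not the book's):

```python
# Illustration of Lemma 9.1.6(i): s_n = ceil(alpha * n) is subadditive,
# and s_n / n tends to inf_n s_n / n = alpha.
import math

alpha = math.sqrt(2)

def s(n):
    # ceil is subadditive: ceil(x + y) <= ceil(x) + ceil(y)
    return math.ceil(alpha * n)

for m in range(1, 60):
    for n in range(1, 60):
        assert s(m + n) <= s(m) + s(n)

ratios = [s(n) / n for n in range(1, 2001)]
assert min(ratios) >= alpha            # s_n / n >= alpha for every n
assert abs(ratios[-1] - alpha) < 1e-3  # and s_n / n approaches alpha
```

The same mechanism drives the proof of Lemma 9.1.7: there \( h\left( k\right) = {\log }_{2}\lambda \left( {2}^{k}\right) \) is additive up to the error \( 3{\log }_{2}\mathrm{K} \), so part (ii) applies.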
1167_(GTM73)Algebra
Definition 2.1
Definition 2.1. Let \( \mathrm{C} \) be an algebraically closed field with subfields \( \mathrm{K},\mathrm{E},\mathrm{F} \) such that \( \mathrm{K} \subset \mathrm{E} \cap \mathrm{F} \) . E and \( \mathrm{F} \) are linearly disjoint over \( \mathrm{K} \) if every subset of \( \mathrm{E} \) which is linearly independent over \( \mathrm{K} \) is also linearly independent over \( \mathrm{F} \) . REMARKS. An alternate definition in terms of tensor products is given in Exercise 1 . Note that a subset \( X \) of \( E \) is linearly independent over a subfield of \( C \) if and only if every finite subset of \( X \) is. Consequently, when proving linear disjointness, we need only deal with finite linearly independent sets. EXAMPLE. If \( K \subset E \) then \( E \) and \( K \) are trivially linearly disjoint over \( K \) . This fact will be used in several proofs. Other less trivial examples appear in the theorems and exercises below. The wording of Definition 2.1 suggests that the definition of linear disjointness is in fact symmetric in \( E \) and \( F \) . We now prove this fact. Theorem 2.2. Let \( \mathrm{C} \) be an algebraically closed field with subfields \( \mathrm{K},\mathrm{E},\mathrm{F} \) such that \( \mathrm{K} \subset \mathrm{E} \cap \mathrm{F} \) . Then \( \mathrm{E} \) and \( \mathrm{F} \) are linearly disjoint over \( \mathrm{K} \) if and only if \( \mathrm{F} \) and \( \mathrm{E} \) are linearly disjoint over \( \mathbf{K} \) . PROOF. It suffices to assume \( E \) and \( F \) linearly disjoint and show that \( F \) and \( E \) are linearly disjoint. Suppose \( X \subset F \) is linearly independent over \( K \), but not over \( E \) so that \( {r}_{1}{u}_{1} + \cdots + {r}_{n}{u}_{n} = 0 \) for some \( {u}_{i}{\varepsilon X} \) and \( {r}_{i}{\varepsilon E} \) not all zero. 
Choose a subset of \( \left\{ {{r}_{1},\ldots ,{r}_{n}}\right\} \) which is maximal with respect to linear independence over \( K \) ; reindex if necessary so that this set is \( \left\{ {{r}_{1},{r}_{2},\ldots ,{r}_{t}}\right\} \left( {t \geq 1}\right) \) . Then for each \( j > t,{r}_{j} = \mathop{\sum }\limits_{{i = 1}}^{t}{a}_{ij}{r}_{i} \) with \( {a}_{ij} \in K \) (Exercise IV.2.1). After a harmless change of index we have: \[ 0 = \mathop{\sum }\limits_{{j = 1}}^{n}{r}_{j}{u}_{j} = \mathop{\sum }\limits_{{j = 1}}^{t}{r}_{j}{u}_{j} + \mathop{\sum }\limits_{{j = t + 1}}^{n}\left( {\mathop{\sum }\limits_{{i = 1}}^{t}{a}_{ij}{r}_{i}}\right) {u}_{j} \] \[ = \mathop{\sum }\limits_{{k = 1}}^{t}\left( {{u}_{k} + \mathop{\sum }\limits_{{j = t + 1}}^{n}{a}_{kj}{u}_{j}}\right) {r}_{k} \] Since \( E \) and \( F \) are linearly disjoint, \( \left\{ {{r}_{1},\ldots ,{r}_{t}}\right\} \) is linearly independent over \( F \) which implies that \( {u}_{k} + \mathop{\sum }\limits_{{j = t + 1}}^{n}{a}_{kj}{u}_{j} = 0 \) for every \( k \leq t \) . This contradicts the linear independence of \( X \) over \( K \) . Therefore \( X \) is linearly independent over \( E \) . The following lemma and theorem provide some useful criteria for two fields to be linearly disjoint. Lemma 2.3. Let \( \mathrm{C} \) be an algebraically closed field with subfields \( \mathrm{K},\mathrm{E},\mathrm{F} \) such that \( \mathrm{K} \subset \mathrm{E} \cap F \) . Let \( \mathrm{R} \) be a subring of \( \mathrm{E} \) such that \( \mathrm{K}\left( \mathrm{R}\right) = E \) and \( \mathrm{K} \subset \mathrm{R} \) (which implies that \( \mathrm{R} \) is a vector space over \( \mathrm{K} \) ). 
Then the following conditions are equivalent: (i) \( \mathrm{E} \) and \( \mathrm{F} \) are linearly disjoint over \( \mathrm{K} \) ; (ii) every subset of \( \mathbf{R} \) that is linearly independent over \( \mathbf{K} \) is also linearly independent over \( \mathrm{F} \) ; (iii) there exists a basis of \( \mathrm{R} \) over \( \mathrm{K} \) which is linearly independent over \( \mathrm{F} \) . REMARK. The lemma is true with somewhat weaker hypotheses (Exercise 2) but this is all that we shall need. PROOF OF 2.3. (i) \( \Rightarrow \) (ii) and (i) \( \Rightarrow \) (iii) are trivial. (ii) \( \Rightarrow \) (i) Let \( X = \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \) be a finite subset of \( E \) which is linearly independent over \( K \) . We must show that \( X \) is linearly independent over \( F \) . Since \( {u}_{i} \in E = K\left( R\right) \) each \( {u}_{i} \) is of the form \( {u}_{i} = {c}_{i}{d}_{i}^{-1} \) \( = {c}_{i}/{d}_{i} \), where \( {c}_{i} = {f}_{i}\left( {{r}_{1},\ldots ,{r}_{{t}_{i}}}\right) ,0 \neq {d}_{i} = {g}_{i}\left( {{r}_{1},\ldots ,{r}_{{t}_{i}}}\right) \) with \( {r}_{j}{\varepsilon R} \) and \( {f}_{i},{g}_{i}\varepsilon \) \( K\left\lbrack {{x}_{1},\ldots ,{x}_{{t}_{i}}}\right\rbrack \) (Theorem V.1.3). Let \( d = {d}_{1}{d}_{2}\cdots {d}_{n} \) and for each \( i \) let \( {v}_{i} = \) \( {c}_{i}{d}_{1}\cdots {d}_{i - 1}{d}_{i + 1}\cdots {d}_{n} \in R \) . Then \( {u}_{i} = {v}_{i}{d}^{-1} \) and the subset \( {X}^{\prime } = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) of \( R \) is linearly independent over a subfield of \( C \) if and only if \( X \) is. By hypothesis \( X \) and hence \( {X}^{\prime } \) is linearly independent over \( K \) . Consequently, (ii) implies that \( {X}^{\prime } \) is linearly independent over \( F \), whence \( X \) is linearly independent over \( F \) . (iii) \( \Rightarrow \) (ii) Let \( U \) be a basis of \( R \) over \( K \) which is linearly independent over \( F \) . 
We must show that every finite subset \( X \) of \( R \) that is linearly independent over \( K \) is also linearly independent over \( F \) . Since \( X \) is finite, there is a finite subset \( {U}_{1} \) of \( U \) such that \( X \) is contained in the \( K \) -subspace \( V \) of \( R \) spanned by \( {U}_{1} \) ; (note that \( {U}_{1} \) is a basis of \( V \) over \( K \) ). Let \( {V}_{1} \) be the vector space spanned by \( {U}_{1} \) over \( F \) . \( U \), and hence \( {U}_{1} \), is linearly independent over \( F \) by (iii). Therefore \( {U}_{1} \) is a basis of \( {V}_{1} \) over \( F \) and \( {\dim }_{K}V = {\dim }_{F}{V}_{1} \) . Now \( X \) is contained in some finite basis \( W \) of \( V \) over \( K \) (Theorem IV.2.4). Since \( W \) certainly spans \( {V}_{1} \) as a vector space over \( F \), \( W \) contains a basis \( {W}_{1} \) of \( {V}_{1} \) over \( F \) . Thus \( \left| {W}_{1}\right| \leq \left| W\right| = {\dim }_{K}V = {\dim }_{F}{V}_{1} = \left| {W}_{1}\right| \), whence \( W = {W}_{1} \) . Therefore, the subset \( X \) of \( W \) is necessarily linearly independent over \( F \) . Theorem 2.4. Let \( \mathrm{C} \) be an algebraically closed field with subfields \( \mathrm{K},\mathrm{E},\mathrm{L},\mathrm{F} \) such that \( \mathrm{K} \subset \mathrm{E} \) and \( \mathrm{K} \subset \mathrm{L} \subset \mathrm{F} \) . Then \( \mathrm{E} \) and \( \mathrm{F} \) are linearly disjoint over \( \mathrm{K} \) if and only if (i) \( \mathrm{E} \) and \( \mathrm{L} \) are linearly disjoint over \( \mathrm{K} \) and (ii) \( \mathrm{{EL}} \) and \( \mathrm{F} \) are linearly disjoint over \( \mathrm{L} \) . PROOF. \( \left( \Leftarrow \right) \) If a subset \( X \) of \( E \) is linearly independent over \( K \), then \( X \) is linearly independent over \( L \) by (i). 
Therefore (since \( X \subset E \subset {EL} \) ), \( X \) is linearly independent over \( F \) by (ii). \( \left( \Rightarrow \right) \) If \( E \) and \( F \) are linearly disjoint over \( K \), then \( E \) and \( L \) are automatically linearly disjoint over \( K \) . To prove (ii) observe that \( {EL} = L\left( R\right) \), where \( R \) is the subring \( L\left\lbrack E\right\rbrack \) of \( C \) generated by \( L \) and \( E \) . By Theorem V.1.3 every element of \( R \) is of the form \( f\left( {{e}_{1},\ldots ,{e}_{n}}\right) \left( {{e}_{i}{\varepsilon E};{f\varepsilon L}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack }\right) \) . Therefore, any basis \( U \) of \( E \) over \( K \) spans \( R \) considered as a vector space over \( L \) . Since \( E \) and \( L \) are linearly disjoint over \( K, U \) is linearly independent over \( L \) . Hence \( U \) is a basis of \( R \) over \( L \) . But \( U \) is linearly independent over \( F \) by the linear disjointness of \( E \) and \( F \) . Therefore, \( {EL} \) and \( F \) are linearly disjoint over \( L \) by Lemma 2.3. Next we explore linear disjointness with respect to certain extension fields of \( K \) that will play an important part in the definition of separability. Definition 2.5. Let \( \mathrm{K} \) be a field of characteristic \( \mathrm{p} \neq 0 \) and let \( \mathrm{C} \) be an algebraically closed field containing \( \mathbf{K} \) . For each integer \( \mathrm{n} \geq 0 \) \[ {\mathrm{K}}^{1/{\mathrm{p}}^{\mathrm{n}}} = \left\{ {\mathrm{u}\varepsilon \mathrm{C} \mid {\mathrm{u}}^{{\mathrm{p}}^{\mathrm{n}}}\varepsilon \mathrm{K}}\right\} . \] \[ {\mathrm{K}}^{1/{\mathrm{p}}^{\infty }} = \mathop{\bigcup }\limits_{{n \geq 0}}{\mathrm{K}}^{1/{\mathrm{p}}^{\mathrm{n}}} = \left\{ {\mathrm{u}\varepsilon \mathrm{C} \mid {\mathrm{u}}^{{\mathrm{p}}^{\mathrm{n}}}\varepsilon \mathrm{K}\;\text{ for some }\;\mathrm{n} \geq 0}\right\} . \] REMARKS. 
Since \( {\left( u \pm v\right) }^{{p}^{n}} = {u}^{{p}^{n}} \pm {v}^{{p}^{n}} \) in a field of characteristic \( p \) (Exercise III.1.11) each \( {K}^{1/{p}^{n}} \) is actually a field. Since \( K = {K}^{1/{p}^{0}} \subset {K}^{1/{p}^{n}} \subset {K}^{1/{p}^{m}} \subset {K}^{1/{p}^{\infty }} \) for all \( n, m \) such that \( 0 \leq n \leq m \), it follows readily that \( {K}^{1/{p}^{\infty }} \) is also a field. The fact that \( C \) is algebraically closed implies that \( {K}^{1/{p}^{n}} \) is a splitting field over \( K \) of the set of polynomials \( \left\{ {{x}^{{p}^{n}} - k \mid k \in K}\right\} \) (Exercise 5). In particular, every \( k \in K \) is a \( {p}^{n} \) th power in \( {K}^{1/{p}^{n}} \) .
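To make Definition 2.1 concrete (our example, not part of the text): \( E = \mathbb{Q}\left( \sqrt{2}\right) \) and \( F = \mathbb{Q}\left( \sqrt{3}\right) \) are linearly disjoint over \( \mathbb{Q} \), and a brute-force numerical search illustrates that the \( \mathbb{Q} \)-independent set \( \left\{ {1,\sqrt{2}}\right\} \) stays independent over \( F \):

```python
# Check that (a + b*sqrt3)*1 + (c + d*sqrt3)*sqrt2 = 0 with small integer
# coefficients forces a = b = c = d = 0, i.e. {1, sqrt2} remains independent
# over Q(sqrt3). (Clearing denominators reduces rational coefficients to
# the integer case; the coefficient range here is an illustrative bound.)
import itertools, math

s2, s3 = math.sqrt(2), math.sqrt(3)

for a, b, c, d in itertools.product(range(-3, 4), repeat=4):
    val = (a + b * s3) + (c + d * s3) * s2
    if abs(val) < 1e-9:
        assert a == b == c == d == 0
```

The floating-point tolerance is safe here because a nonzero integer combination of \( 1,\sqrt{2},\sqrt{3},\sqrt{6} \) with coefficients this small is bounded away from zero.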
1139_(GTM44)Elementary Algebraic Geometry
Definition 2.22
Definition 2.22. A variety is a hypersurface in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) (or in \( {\mathbb{C}}^{n} \) ) if it can be defined by a single nonconstant homogeneous polynomial in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \) (or by a single nonconstant polynomial in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) ). Theorem 2.23. A variety in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) is a hypersurface \( \Leftrightarrow \) it is of pure dimension \( n - 1 \) . Proof. Since any variety in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) is represented by a homogeneous variety in \( {\mathbb{C}}^{n + 1} \), it suffices to prove the result in the affine case. \( \Rightarrow : \) Suppose \( V = \mathbf{V}\left( p\right) \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}} \), where \( p \) is nonconstant in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) . Assume first that \( p \) is irreducible. Then \( \mathrm{V}\left( p\right) \) has pure dimension, and for some \( i,\partial p/\partial {X}_{i} \) is not identically zero; hence \( \partial p/\partial {X}_{i} \) cannot vanish on \( V \), for otherwise it would have to be in the prime ideal \( \left( p\right) \) (that is, a multiple of \( p \) ), while \( \deg \partial p/\partial {X}_{i} < \deg p \) . Therefore the rank of \( J\left( V\right) = \) \( \left( {\partial p/\partial {X}_{1},\ldots ,\partial p/\partial {X}_{n}}\right) \) attains the maximum of 1 at a point of \( V \) ; hence \( \dim V = n - 1 \) . Since any hypersurface is a union of irreducible hypersurfaces, the dimension is pure. \( \Leftarrow \) : Suppose \( V \subset {\mathbb{C}}^{n} \) has pure dimension \( n - 1 \) ; we want to show that \( V = \mathbf{V}\left( p\right) \) for some polynomial \( p \) . 
If this is true for irreducible varieties of dimension \( n - 1 \), then it is true for arbitrary varieties of pure dimension \( n - 1 \) . Therefore assume \( V \) is irreducible, say \[ V = \mathbf{V}\left( {{p}_{1},\ldots ,{p}_{r}}\right) ,\text{ where all }{p}_{i}\text{ are nonconstant.} \] Now consider \( {p}_{1} \) . If \( {p}_{1} = {p}_{11} \cdot \ldots \cdot {p}_{1s} \) is a factorization of \( {p}_{1} \) into irreducibles, then \( \mathrm{V}\left( {p}_{1}\right) = \mathrm{V}\left( {p}_{11}\right) \cup \ldots \cup \mathrm{V}\left( {p}_{1s}\right) \) . Hence \( V \subset \mathrm{V}\left( {p}_{1i}\right) \) for some \( i \) . Since \( {p}_{1i} \) is irreducible, we have \( V = \mathrm{V}\left( {p}_{1i}\right) \) (Exercise 4.3 of Chapter III). Since \( {p}_{1i} \) is nonconstant, \( V = \mathrm{V}\left( {p}_{1i}\right) \) is a hypersurface. Just as one considers products of sets in set theory and products of spaces in topology, one also has products of varieties. Later on we shall need them, together with a basic dimensionality property of "product varieties." We begin with products of affine varieties. Theorem 2.24. Let \( V \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{m}} \) and \( W \subset {\mathbb{C}}_{{Y}_{1},\ldots ,{Y}_{n}} \) be two varieties. (2.24.1) The set-theoretic product \( V \times W \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{m},{Y}_{1},\ldots ,{Y}_{n}} \) is a variety. (We call it a product variety.) (2.24.2) Let \( V \) and \( W \) be irreducible with generic points \( \left( x\right) = \left( {{x}_{1},\ldots ,{x}_{m}}\right) \) and \( \left( y\right) = \left( {{y}_{1},\ldots ,{y}_{n}}\right) \), respectively, and suppose that \( \mathbb{C}\left\lbrack x\right\rbrack \cap \mathbb{C}\left\lbrack y\right\rbrack = \mathbb{C} \) . Then \( V \times W \) is irreducible and has \( \left( {x, y}\right) \) as a generic point. (2.24.3) \( \dim V \times W = \dim V + \dim W \) . Proof. 
The proof of (2.24.1) may be reduced to the case when \( V \) and \( W \) are both irreducible, since obviously \( \left( {\mathop{\bigcup }\limits_{i}{V}_{i}}\right) \times \left( {\mathop{\bigcup }\limits_{j}{W}_{j}}\right) = \mathop{\bigcup }\limits_{{i, j}}{V}_{i} \times {W}_{j} \) . This case then follows at once from (2.24.2) which is itself obvious. (2.24.3) is immediate from Theorem 2.14. It is natural to also ask about products of projective spaces and varieties. Just as with affine spaces, we can form the set product \( {\mathbb{P}}^{m}\left( \mathbb{C}\right) \times {\mathbb{P}}^{n}\left( \mathbb{C}\right) \), and endow it with the product topology. One might guess that this is in some sense the same as \( {\mathbb{P}}^{m + n}\left( \mathbb{C}\right) \) . But it turns out that except when \( m \) or \( n \) is zero, \( {\mathbb{P}}^{m}\left( \mathbb{C}\right) \times {\mathbb{P}}^{n}\left( \mathbb{C}\right) \neq {\mathbb{P}}^{m + n}\left( \mathbb{C}\right) \) . In fact, at a purely topological level, it turns out that the product of any two spaces homeomorphic to \( {\mathbb{P}}^{m}\left( \mathbb{C}\right) \) and \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) , where \( m, n > 0 \), is never homeomorphic to any \( {\mathbb{P}}^{k}\left( \mathbb{C}\right) \) . We indicate the gist of a proof for those who know some homology theory. It is known (see, for instance, [Vick, Prop 2.7, p. 49]) that the homology groups (over the integers) of \( {\mathbb{P}}^{k}\left( \mathbb{C}\right) \) are: \[ {H}_{i}\left( {{\mathbb{P}}^{k}\left( \mathbb{C}\right) }\right) = \left\{ \begin{array}{ll} \mathbb{Z} & \text{ for }i = 0,2,4,\ldots ,{2k} \\ 0 & \text{ otherwise. } \end{array}\right. 
\] The Künneth formula then tells us that \[ {H}_{2}\left( {{\mathbb{P}}^{m}\left( \mathbb{C}\right) \times {\mathbb{P}}^{n}\left( \mathbb{C}\right) }\right) = \mathop{\sum }\limits_{{i + j = 2}}\left( {{H}_{i}\left( {{\mathbb{P}}^{m}\left( \mathbb{C}\right) }\right) \otimes {H}_{j}\left( {{\mathbb{P}}^{n}\left( \mathbb{C}\right) }\right) }\right) = \mathbb{Z} \oplus \mathbb{Z} \] (where \( \sum \) and \( \oplus \) denote direct sum, and \( \otimes \) denotes tensor product over \( \mathbb{Z} \) ); but this is not a homology group of any \( {\mathbb{P}}^{k}\left( \mathbb{C}\right) \) . Yet products of projective spaces and varieties do naturally arise, as we will see later in this chapter when we use them (or what is the same, "multi-homogeneous varieties") in defining at the variety-theoretic level notions like order and multiplicity, and in proving Bézout's theorem. We call any product \( {\mathbb{P}}^{{n}_{1}}\left( \mathbb{C}\right) \times \ldots \times {\mathbb{P}}^{{n}_{s}}\left( \mathbb{C}\right) \) a multiprojective space, an \( s \) -way projective space, or most precisely, an \( \left( {{n}_{1},\ldots ,{n}_{s}}\right) \) -projective space, this product being looked at as the set of all \( \left( {\left( {{n}_{1} + 1}\right) + \ldots + \left( {{n}_{s} + 1}\right) }\right) \) -tuples \[ \left( {\left( {{a}_{11},\ldots ,{a}_{1,{n}_{1} + 1}}\right) ,\ldots ,\left( {{a}_{s1},\ldots ,{a}_{s,{n}_{s} + 1}}\right) }\right) ; \] (3) each such point is identified with \[ \left( {\left( {{c}_{1}{a}_{11},\ldots ,{c}_{1}{a}_{1,{n}_{1} + 1}}\right) ,\ldots ,\left( {{c}_{s}{a}_{s1},\ldots ,{c}_{s}{a}_{s,{n}_{s} + 1}}\right) }\right) , \] (4) where \( {c}_{1},\ldots ,{c}_{s} \) are arbitrary elements of \( \mathbb{C} \smallsetminus \{ 0\} \) . 
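The identification of the tuples in (3) and (4) is concrete enough to check mechanically. Here is a minimal Python sketch (the function name, the tuple-of-blocks representation, and the floating-point tolerance are my own choices) testing whether two coordinate tuples name the same point of an \( \left( {{n}_{1},\ldots ,{n}_{s}}\right) \) -projective space:

```python
def same_multiprojective_point(p, q, tol=1e-12):
    """Do p and q name the same multiprojective point?  Each argument is a
    tuple of coordinate blocks; q must differ from p by one nonzero scalar
    c_i per block, as in the identification of (3) with (4)."""
    if len(p) != len(q) or any(len(bp) != len(bq) for bp, bq in zip(p, q)):
        return False
    for bp, bq in zip(p, q):
        # recover the candidate scalar c_i from the first nonzero coordinate
        c = next((b / a for a, b in zip(bp, bq) if abs(a) > tol), None)
        if c is None or abs(c) <= tol:
            return False
        if any(abs(b - c * a) > tol for a, b in zip(bp, bq)):
            return False
    return True

# a point of P^1(C) x P^2(C); blocks rescaled by c_1 = 2 and c_2 = -i
p = ((1, 3), (2, 0, 5))
q = ((2, 6), (-2j, 0, -5j))
assert same_multiprojective_point(p, q)
# inconsistent scaling within the second block: a different point
assert not same_multiprojective_point(p, ((2, 6), (2, 0, -5)))
```

Only the ratios within each block matter, which is exactly the blockwise scaling freedom by \( {c}_{1},\ldots ,{c}_{s} \) in (4).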
In analogy with homogeneous sets, we say that a subset \( S \) of \( {\mathbb{C}}^{\left( {{n}_{1} + 1}\right) + \ldots + \left( {{n}_{s} + 1}\right) } \) is multihomogeneous (or \( s \) -way homogeneous, or \( \left( {{n}_{1} + 1,\ldots ,{n}_{s} + 1}\right) \) -homogeneous) if whenever a point of the form in (3) is in \( S \), then the corresponding point in (4) is also in \( S \) . In Theorem 2.6 of Chapter II, we proved that a variety \( V \subset {\mathbb{C}}^{n} \) is homogeneous iff it is definable by a set of homogeneous polynomials. A proof analogous to that of Theorem 2.6, Chapter II, shows that an algebraic variety is \( \left( {{n}_{1},\ldots ,{n}_{s}}\right) \) -homogeneous in \( {\mathbb{C}}_{{x}_{11},\ldots ,{x}_{s{n}_{s}}} \) iff it is defined by polynomials \( p\left( {{X}_{11},\ldots ,{X}_{s{n}_{s}}}\right) \) which are \( \left( {{n}_{1},\ldots ,{n}_{s}}\right) \) -homogeneous, that is, for each \( i = 1,\ldots, s \), homogeneous in the set of indeterminates \( \left\{ {{X}_{i1},\ldots ,{X}_{i{n}_{i}}}\right\} \) . An \( \left( {{n}_{1} + 1,\ldots ,{n}_{s} + 1}\right) \) -homogeneous variety in \( {\mathbb{C}}^{\left( {{n}_{1} + 1}\right) + \ldots + \left( {{n}_{s} + 1}\right) } \) then defines a set in \( {\mathbb{P}}^{ * } = {\mathbb{P}}^{{n}_{1}}\left( \mathbb{C}\right) \times \ldots \times {\mathbb{P}}^{{n}_{s}}\left( \mathbb{C}\right) \) which we call a variety (if no confusion can arise), a multiprojective, \( s \) -way, or \( \left( {{n}_{1},\ldots ,{n}_{s}}\right) \) -projective, variety in \( {\mathbb{P}}^{ * } \) . The reader may check that the basic lattice and decomposition properties of ordinary varieties continue to hold for multiprojective varieties. Note that for varieties \( {V}_{i} \subset {\mathbb{P}}^{{n}_{i}}\left( \mathbb{C}\right) \) where \( i = 1,\ldots, s \), the product \( {V}_{1} \times \ldots \times {V}_{s} \) is \( s \) -way projective in \( {\mathbb{P}}^{ * } \) . One may also "multidehomogenize" in the obvious way. 
If \( {\mathbb{C}}^{{n}_{i}} \) denotes a particular dehomogenization of \( {\mathbb{P}}^{{n}_{i}}\left( \mathbb{C}\right) \), then \( {\mathbb{C}}^{{n}_{1}} \times \ldots \times {\mathbb{C}}^{{n}_{s}} \) is the corresponding multidehomogenization of \( {\mathbb{P}}^{ * } \) ; any variety \( V \) in \( {\mathbb{P}}^{ * } \) then has a corresponding multi-dehomogenization which we call an affine representative of \( V \) . (This includes the case when \( V \) is a point \( P \) .) Definition 2.25. Let \( V \) be multiprojective. The dimension at \( P \) of \( V \), written \( {\dim }_{P}V \), is the dimension of any affine representative of \( V \) at an affine representative of \( P \) . The dime
1329_[肖梁] Abstract Algebra (2022F)
Definition 4.2.6
Definition 4.2.6. The normal subgroup \( {A}_{n} \mathrel{\text{:=}} \ker \left( {\operatorname{sgn} : {S}_{n} \rightarrow \{ \pm 1\} }\right) \) is called the alternating group. Properties 4.2.7. (1) \( {A}_{n} \vartriangleleft {S}_{n} \) and \( {S}_{n}/{A}_{n} \cong \{ \pm 1\} \) . In particular, \[ \left| {A}_{n}\right| = \left| {S}_{n}\right| /\left| {\{ \pm 1\} }\right| = \frac{n!}{2}. \] (2) We claim that \( \operatorname{sgn}\left( \text{transposition}\right) = - 1 \) . Indeed, \( \operatorname{sgn}\left( \left( {12}\right) \right) = - 1 \) because for \( \sigma = \left( {12}\right) \) , \[ \sigma \left( \Delta \right) = \mathop{\prod }\limits_{{1 \leq i < j \leq n}}\left( {{x}_{\sigma \left( i\right) } - {x}_{\sigma \left( j\right) }}\right) = \left( {{x}_{\sigma \left( 1\right) } - {x}_{\sigma \left( 2\right) }}\right) \mathop{\prod }\limits_{\substack{{1 \leq i < j \leq n} \\ {j \geq 3} }}\left( {{x}_{i} - {x}_{j}}\right) = - \Delta . \] For a general transposition \( \left( {ab}\right) \), fix \( \tau \in {S}_{n} \) such that \( \tau \left( 1\right) = a \) and \( \tau \left( 2\right) = b \) . Then (4.2.3.1) implies that \( \tau \left( {12}\right) {\tau }^{-1} = \left( {ab}\right) \) . Thus, \[ \operatorname{sgn}\left( \left( {ab}\right) \right) = \operatorname{sgn}\left( \tau \right) \cdot \operatorname{sgn}\left( \left( {12}\right) \right) \cdot \operatorname{sgn}{\left( \tau \right) }^{-1} = \operatorname{sgn}\left( \left( {12}\right) \right) = - 1. \] Thus, in general, we have for \( \sigma \in {S}_{n} \) , \[ \operatorname{sgn}\left( \sigma \right) = {\left( -1\right) }^{\text{number of transpositions in a factorization of }\sigma }. \] In particular, \( {A}_{n} = \left\{ {\sigma \in {S}_{n} \mid \sigma \text{ is an even permutation}}\right\} \) . Theorem 4.2.8. When \( n \geq 5,{A}_{n} \) is a simple group. Remark 4.2.9. (1) \( {A}_{3} = \langle \left( {123}\right) \rangle \) is a cyclic group of order 3 . 
(2) \( {A}_{4} \trianglerighteq \{ 1,\left( {12}\right) \left( {34}\right) ,\left( {14}\right) \left( {23}\right) ,\left( {13}\right) \left( {24}\right) \} \cong {\mathbf{Z}}_{2}^{2} \) . (3) It is known that a simple group of order 60 is isomorphic to \( {A}_{5} \) . (It is the smallest non-commutative simple group.) Proof of Theorem 4.2.8. Recall that a 3-cycle \( \left( {ijk}\right) \) always belongs to \( {A}_{n} \) . We will prove three statements; together they prove Theorem 4.2.8. (1) \( {A}_{n} \) is generated by all 3-cycles (true for \( n \geq 3 \) ). Indeed, \( \left( {a, b}\right) \left( {c, d}\right) = \left( {a, c, b}\right) \left( {a, c, d}\right) \) and \( \left( {a, c}\right) \left( {a, b}\right) = \left( {a, b, c}\right) \) . (2) If a normal subgroup \( N \trianglelefteq {A}_{n} \) contains a 3-cycle, then it contains all 3-cycles (true for \( n \geq 3 \) ). Indeed, assume that \( N \) contains the 3-cycle \( \left( {i, j, k}\right) \) . Note that, for every \( \sigma \in {S}_{n} \) , either \( \sigma \in {A}_{n} \) or \( \sigma \left( {i, j}\right) \in {A}_{n} \) . Then (4.2.3.1) implies that - either \( \sigma \left( {i, j, k}\right) {\sigma }^{-1} = \left( {\sigma \left( i\right) ,\sigma \left( j\right) ,\sigma \left( k\right) }\right) \in N \), or - \( \sigma \left( {i, j}\right) \left( {i, j, k}\right) {\left( \sigma \left( i, j\right) \right) }^{-1} = \sigma \left( {j, i, k}\right) {\sigma }^{-1} = \left( {\sigma \left( j\right) ,\sigma \left( i\right) ,\sigma \left( k\right) }\right) \in N \) (but then we have \( {\left( \sigma \left( j\right) ,\sigma \left( i\right) ,\sigma \left( k\right) \right) }^{2} = \left( {\sigma \left( i\right) ,\sigma \left( j\right) ,\sigma \left( k\right) }\right) \in N \) ). So \( N \) always contains \( \left( {\sigma \left( i\right) ,\sigma \left( j\right) ,\sigma \left( k\right) }\right) \) for every \( \sigma \in {S}_{n} \), and thus \( N \) contains all 3-cycles. 
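The two identities behind statement (1) can be verified by direct computation. A small Python sketch (the `cycle`/`compose` helpers are my own; permutations act on \( \{ 0,\ldots ,5\} \) and `compose` applies its right factor first):

```python
def cycle(*elts, n=6):
    """The cyclic permutation (elts[0], elts[1], ...) on {0, ..., n-1},
    encoded as a tuple perm with perm[i] = image of i."""
    perm = list(range(n))
    for i, e in enumerate(elts):
        perm[e] = elts[(i + 1) % len(elts)]
    return tuple(perm)

def compose(f, g):
    """(f o g)(i) = f(g(i)): apply g first, then f."""
    return tuple(f[g[i]] for i in range(len(g)))

a, b, c, d = 0, 1, 2, 3
# (a,b)(c,d) = (a,c,b)(a,c,d): a product of two disjoint transpositions
assert compose(cycle(a, b), cycle(c, d)) == compose(cycle(a, c, b), cycle(a, c, d))
# (a,c)(a,b) = (a,b,c): a product of two transpositions sharing a point
assert compose(cycle(a, c), cycle(a, b)) == cycle(a, b, c)
```

Since every element of \( {A}_{n} \) is a product of an even number of transpositions, pairing them off and applying these two identities writes it as a product of 3-cycles.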
(3) If \( \{ 1\} \neq N \vartriangleleft {A}_{n} \) is a nontrivial normal subgroup, then \( N \) contains all 3-cycles. Take a nontrivial element \( \sigma \in N \) . We separate several cases: (a) If \( \sigma \) is a product of disjoint cycles, at least one cycle of which has length \( > 3 \), i.e. \( \sigma = \) \( \mu \left( {{a}_{1},{a}_{2},\ldots ,{a}_{r}}\right) \) with \( r > 3 \), then we have \[ \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{3}\right) }^{-1} = \mu \left( {{a}_{2},{a}_{3},{a}_{1},{a}_{4},{a}_{5},\ldots ,{a}_{r}}\right) \in N. \] So \( {\sigma }^{-1} \circ \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{3}\right) }^{-1} \) sends ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_28_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_28_0.jpg) It is equal to \( \left( {{a}_{1},{a}_{3},{a}_{r}}\right) \), a 3-cycle. (b) Suppose that (a) does not hold; then \( \sigma \) is a product of disjoint 3-cycles and 2-cycles. It then follows that \( {\sigma }^{3} \) is a product of disjoint 2-cycles and \( {\sigma }^{2} \) is a product of disjoint 3-cycles (and they cannot both be 1). So (by considering \( {\sigma }^{3} \) or \( {\sigma }^{2} \) instead of \( \sigma \) ), we are reduced to the case when \( \sigma \) is purely a product of disjoint 3-cycles or a product of disjoint 2-cycles. (c) If \( \sigma \) is a single 3-cycle, we are already done. If \( \sigma \) is a product of more than one disjoint 3-cycle, we write \( \sigma = \mu \left( {{a}_{4},{a}_{5},{a}_{6}}\right) \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \) . Then \[ \left( {{a}_{1},{a}_{2},{a}_{4}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{4}\right) }^{-1}{\sigma }^{-1} \in N \] and we compute it as: ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_28_1.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_28_1.jpg) This is \( \left( {{a}_{1},{a}_{2},{a}_{5},{a}_{3},{a}_{4}}\right) \), a 5-cycle, and we are reduced to case (a). 
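The commutator computation in case (a) can be checked concretely. A Python sketch taking \( \mu = 1, r = 5 \) and \( \left( {{a}_{1},\ldots ,{a}_{5}}\right) = \left( {0,\ldots ,4}\right) \) (the helper names are my own):

```python
def cycle(*elts, n=5):
    """Cyclic permutation on {0, ..., n-1} as a tuple perm, perm[i] = image of i."""
    perm = list(range(n))
    for i, e in enumerate(elts):
        perm[e] = elts[(i + 1) % len(elts)]
    return tuple(perm)

def compose(f, g):
    """(f o g)(i) = f(g(i)): apply g first, then f."""
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    inv = [0] * len(f)
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

# sigma = (a_1, ..., a_r) with r = 5 and mu = 1
sigma = cycle(0, 1, 2, 3, 4)
t = cycle(0, 1, 2)          # the conjugating 3-cycle (a_1, a_2, a_3)
# sigma^{-1} o (a_1,a_2,a_3) sigma (a_1,a_2,a_3)^{-1}
elem = compose(inverse(sigma), compose(t, compose(sigma, inverse(t))))
assert elem == cycle(0, 2, 4)   # the 3-cycle (a_1, a_3, a_r)
```

The disjoint part \( \mu \) commutes with \( t \) and cancels against \( {\sigma }^{-1} \), which is why taking \( \mu = 1 \) loses nothing.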
(d) If \( \sigma \) is a product of a (necessarily even) number of disjoint transpositions, we write \( \sigma = \mu \left( {{a}_{1},{a}_{2}}\right) \left( {{a}_{3},{a}_{4}}\right) \) . Then \[ \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \sigma {\left( {a}_{1},{a}_{2},{a}_{3}\right) }^{-1}{\sigma }^{-1} = \left( {{a}_{1},{a}_{3}}\right) \left( {{a}_{2},{a}_{4}}\right) \in N. \] (This step "removes" the extra transpositions \( \mu \) .) Write \( {\sigma }^{\prime } \mathrel{\text{:=}} \left( {{a}_{1},{a}_{3}}\right) \left( {{a}_{2},{a}_{4}}\right) \) . After this, we use the condition \( n \geq 5 \) to take another number \( {a}_{5} \in \{ 1,\ldots, n\} \smallsetminus \left\{ {{a}_{1},\ldots ,{a}_{4}}\right\} \) . Explicit computation shows again that \[ \left( {{a}_{1},{a}_{2},{a}_{5}}\right) {\sigma }^{\prime }{\left( {a}_{1},{a}_{2},{a}_{5}\right) }^{-1}{\left( {\sigma }^{\prime }\right) }^{-1} = \left( {{a}_{1},{a}_{2},{a}_{5},{a}_{4},{a}_{3}}\right) \in N, \] producing a 5-cycle and hence reducing to case (a). ## 4.3. Direct products. Definition 4.3.1. Let \( I \) be an index set and let \( {G}_{i} \) (for \( i \in I \) ) be a group with operation \( { \star }_{i} \) and identity \( {e}_{i} \) . Define the direct product of \( {\left( {G}_{i}\right) }_{i \in I} \), denoted by \( \mathop{\prod }\limits_{{i \in I}}{G}_{i} \) (or \( {G}_{1} \times {G}_{2} \times \cdots \times {G}_{n} \) if \( I = \) \( \{ 1,2,\ldots, n\} ) \), to be the group with underlying set \( G = \mathop{\prod }\limits_{{i \in I}}{G}_{i} \), with operation \[ {\left( {g}_{i}\right) }_{i \in I} \star {\left( {h}_{i}\right) }_{i \in I} \mathrel{\text{:=}} {\left( {g}_{i}{ \star }_{i}{h}_{i}\right) }_{i \in I}. \] The identity element is \( {\left( {e}_{i}\right) }_{i \in I} \) and the inverse of \( {\left( {g}_{i}\right) }_{i \in I} \) is \( {\left( {g}_{i}^{-1}\right) }_{i \in I} \) . 
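Definition 4.3.1 in a small finite case: a Python sketch of \( {\mathbf{Z}}_{2} \times {\mathbf{Z}}_{3} \) with the componentwise operation (the representation of elements as integer pairs is my own choice):

```python
from itertools import product

# Z_2 x Z_3 as pairs (g_1, g_2) with the componentwise operation
G1, G2 = range(2), range(3)
G = list(product(G1, G2))

def op(g, h):
    """(g_1, g_2) * (h_1, h_2) := (g_1 + h_1 mod 2, g_2 + h_2 mod 3)."""
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 3)

e = (0, 0)                                        # identity (e_1, e_2)
inv = {g: ((-g[0]) % 2, (-g[1]) % 3) for g in G}  # componentwise inverses
assert all(op(g, e) == g for g in G)
assert all(op(g, inv[g]) == e for g in G)

# (1, 1) has order lcm(2, 3) = 6, so Z_2 x Z_3 is in fact cyclic of order 6
x, k = (1, 1), 1
while x != e:
    x, k = op(x, (1, 1)), k + 1
assert k == 6
```

That the product of the two cyclic groups is itself cyclic uses \( \gcd \left( {2,3}\right) = 1 \); for \( {\mathbf{Z}}_{2} \times {\mathbf{Z}}_{2} \) no element has order 4.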
For each \( j \in I \), there is a natural embedding ( \( = \) injective homomorphism) \[ {G}_{j} \hookrightarrow G = \mathop{\prod }\limits_{{i \in I}}{G}_{i} \] \[ {g}_{j} \mapsto \left( {1,\ldots ,1,{g}_{j},1,\ldots ,1}\right) \] with \( {g}_{j} \) in the \( {j}^{\text{th}} \) place. This realizes \( {G}_{j} \) as a normal subgroup of \( \mathop{\prod }\limits_{{i \in I}}{G}_{i} \) and we have \[ \left( {\mathop{\prod }\limits_{{i \in I}}{G}_{i}}\right) /{G}_{j} \cong \mathop{\prod }\limits_{{i \in I\smallsetminus \{ j\} }}{G}_{i} \] "Dually", there is a natural projection (= surjective homomorphism) \[ {\pi }_{j} : G \rightarrow {G}_{j} \] \[ {\left( {g}_{i}\right) }_{i \in I} \mapsto {g}_{j} \] We have \( \ker {\pi }_{j} \cong \mathop{\prod }\limits_{{i \in I\smallsetminus \{ j\} }}{G}_{i} \) . Finally, when the \( {G}_{i} \) ’s are all isomorphic to a group \( H \) and \( I = \{ 1,\ldots, r\} \), we write \( {H}^{r} \) for \( \mathop{\prod }\limits_{{i \in I}}{G}_{i} \) . 4.4. Finitely generated abelian groups. Recall that a group \( G \) is finitely generated if there exists a finite subset \( A \) of \( G \) such that \( G = \langle A\rangle \) . Theorem 4.4.1 (Fundamental theorem of finitely generated abelian groups). Let \( G \) be a finitely generated abelian group. Then \[ G \simeq {\mathbb{Z}}^{r} \times {Z}_{{n}_{1}} \times {Z}_{{n}_{2}} \times \cdots \times {Z}_{{n}_{s}} \] for some integers \( r \geq 0,2 \leq {n}_{1} \leq {n}_{2} \leq \cdots \leq {n}_{s} \) satisfying \( {n}_{i} \mid {n}_{i + 1} \) . Moreover, these integers \( r,{n}_{1},\ldots ,{n}_{s} \) are unique. The integer \( r \) is called the \( \mathbf{rank} \) of the abelian group \( G \) . We will explain later in the semester that abelian groups \( = \mathbb{Z} \) -modules. So this theorem will follow from the classification of modules over a PID (Theorem 15.3.3). The goal in this lecture is to see how to characterize finitely generated abelian groups. Lemma 4.4.2. 
If \( m, n \in {\mathbb{N}}_{ \geq 2} \) satisfying \( \gcd \left( {m, n}\right) = 1 \), then \[ {\mathbf{Z}}_{mn} \cong {\mathbf{Z}}_{m} \times {\mathbf{Z}}_{n} \] Proof. Consider the group homomorphism \[ {\mathbf{Z}}_{mn}\xrightarrow[]{\phi }{\mathbf{Z}}_{m} \times {\mathbf{Z}}_{n} \] \[ a \ma
1088_(GTM245)Complex Analysis
Definition 2.6
Definition 2.6. Let \( A \subseteq \mathbb{C} \) . We say that \( A \) is bounded if the set of nonnegative real numbers \( \{ \left| z\right| ;z \in A\} \) is; that is, if there exists a positive real number \( M \) such that \( \left| z\right| < M \) for all \( z \) in \( A \) . Definition 2.7. Let \( c \in \mathbb{C} \) and \( \epsilon > 0 \) . The \( \epsilon \) -ball about \( c \), or the open disc with center \( c \) and radius \( \epsilon \), is the set \[ {U}_{c}\left( \epsilon \right) = U\left( {c,\epsilon }\right) = \{ z \in \mathbb{C};\left| {z - c}\right| < \epsilon \} , \] that is, the interior of the circle with center \( c \) and radius \( \epsilon \) . Proposition 2.8. A subset \( A \) of \( \mathbb{C} \) is bounded if and only if there exist a complex number \( c \) and a positive number \( R \) such that \[ A \subset U\left( {c, R}\right) \text{.} \] Remark 2.9. A proof is omitted for one of three reasons (in addition to the reason described in Remark 2.3): either it is trivial or it follows directly from results in real analysis or it appears as an exercise at the end of the corresponding chapter. \( {}^{5} \) The third possibility is always labeled as such; when standard results in real analysis are needed, there is some indication of what they are or where to find them. For example, the next two theorems are translations to \( \mathbb{C} \) of standard metric results for \( {\mathbb{R}}^{2} \) . It should be clear from the context when the first possibility occurs. It is recommended that the reader ensures that he/she is able to supply an appropriate proof when none is given. \( {}^{5} \) Exercises can be found at the end of each chapter and are numbered by chapter, so that Exercise 2.7 is to be found at the end of Chap. 2. Theorem 2.10 (Bolzano-Weierstrass). 
Every bounded infinite set \( S \) in \( \mathbb{C} \) has at least one limit point; that is, there exists at least one \( c \in \mathbb{C} \) such that, for each \( \epsilon > 0 \) , the ball \( U\left( {c,\epsilon }\right) \) contains a point \( z \in S \) with \( z \neq c \) . Theorem 2.11. A set \( K \subset \mathbb{C} \) is compact if and only if it is closed and bounded. We will certainly be using a number of consequences of compactness not discussed in this chapter (e.g., in a compact metric space, every sequence has a convergent subsequence) and also of connectedness, which we will not define here. Definition 2.12. Let \( f \) be a function defined on a set \( S \) in \( \mathbb{C} \) . We assume that \( f \) is complex-valued, unless otherwise stated. Thus \( f \) may be viewed as either a map from \( S \) into \( {\mathbb{R}}^{2} \) or into \( \mathbb{C} \) and also as two real-valued functions defined on the set \( S \) . Let \( c \) be a limit point of \( S \) and let \( \alpha \) be a complex number. We say that the limit of \( f \) at \( c \) is \( \alpha \), and we write \[ \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \alpha \] if for each \( \epsilon > 0 \) there exists a \( \delta > 0 \) such that \[ \left| {f\left( z\right) - \alpha }\right| < \epsilon \text{ whenever }z \in S\text{ and }0 < \left| {z - c}\right| < \delta \text{.} \] Remark 2.13. The condition that \( c \) is a limit point of \( S \) ensures that there are points \( z \) in \( S \) arbitrarily close to (but different from) \( c \) so that \( f\left( z\right) \) is defined there. Note that it is not required that \( f\left( c\right) \) be defined. The above definition is again a translation of language from \( {\mathbb{R}}^{2} \) to \( \mathbb{C} \) . Thus we will be able to adopt many results (the next three theorems, in particular) from real analysis. 
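Definition 2.12 can be spot-checked numerically for \( f\left( z\right) = {z}^{2} \) : the choice \( \delta = \min \left( {1,\epsilon /\left( {1 + 2\left| c\right| }\right) }\right) \) works because \( \left| {{z}^{2} - {c}^{2}}\right| = \left| {z - c}\right| \left| {z + c}\right| < \delta \left( {1 + 2\left| c\right| }\right) \leq \epsilon \) whenever \( 0 < \left| {z - c}\right| < \delta \leq 1 \) . A Python sketch (the random sampling check is my own and is, of course, not a proof):

```python
import cmath
import random

def check_limit(f, c, alpha, eps, delta, trials=10_000):
    """Sample points with 0 < |z - c| < delta and confirm |f(z) - alpha| < eps,
    as in Definition 2.12 (a numerical spot check, not a proof)."""
    random.seed(0)
    for _ in range(trials):
        r = random.uniform(1e-12, delta)
        t = random.uniform(0.0, 2 * cmath.pi)
        z = c + r * cmath.exp(1j * t)
        if abs(f(z) - alpha) >= eps:
            return False
    return True

# lim_{z -> c} z^2 = c^2 with delta = min(1, eps / (1 + 2|c|))
c, eps = 2 - 1j, 1e-3
delta = min(1.0, eps / (1 + 2 * abs(c)))
assert check_limit(lambda z: z * z, c, c * c, eps, delta)
```

The same \( \delta \) works for every \( c \) with the same \( \left| c\right| \), mirroring how the bound depends only on \( \left| c\right| \) and \( \epsilon \).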
In addition to the usual algebraic operations on pairs of functions \( f : S \rightarrow \) \( \mathbb{C} \) and \( g : S \rightarrow \mathbb{C} \) familiar from real analysis, such as \( f + {cg} \) with \( c \in \mathbb{C},{fg} \), and \( \frac{f}{g} \) (provided \( g \) does not vanish on \( S \) ; that is, if \( g\left( z\right) \neq 0 \) for any \( z \in S \) or, equivalently, if no \( z \in S \) is a zero of \( g \) ), we will consider other functions constructed from a single function \( f \), that are usually not emphasized in real analysis. Among them are the following: \[ \left( {\Re f}\right) \left( z\right) = \Re f\left( z\right) ,\left( {\Im f}\right) \left( z\right) = \Im f\left( z\right) ,\bar{f}\left( z\right) = \overline{f\left( z\right) },\left| f\right| \left( z\right) = \left| {f\left( z\right) }\right| , \] also defined on \( S \) . For instance, if \( f\left( z\right) = {z}^{2} = {x}^{2} - {y}^{2} + {2\iota xy} \) for \( z \in \mathbb{C} \), we have \( \left( {\Re f}\right) \left( z\right) = {x}^{2} - \) \( {y}^{2},\left( {\Im f}\right) \left( z\right) = {2xy},\bar{f}\left( z\right) = {\bar{z}}^{2} = {x}^{2} - {y}^{2} - {2\iota xy} \), and \( \left| f\right| \left( z\right) = {\left| z\right| }^{2} = {x}^{2} + {y}^{2} \) for \( z \in \mathbb{C} \) . Theorem 2.14. Let \( S \) be a subset of \( \mathbb{C} \) and let \( f \) and \( g \) be functions defined on \( S \) . 
If \( c \) is a limit point of \( S \), then: (a) \( \mathop{\lim }\limits_{{z \rightarrow c}}\left( {f + {ag}}\right) \left( z\right) = \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) + a\mathop{\lim }\limits_{{z \rightarrow c}}g\left( z\right) \) for all \( a \in \mathbb{C} \) (b) \( \mathop{\lim }\limits_{{z \rightarrow c}}\left( {fg}\right) \left( z\right) = \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) \mathop{\lim }\limits_{{z \rightarrow c}}g\left( z\right) \) (c) \( \mathop{\lim }\limits_{{z \rightarrow c}}\left| f\right| \left( z\right) = \left| {\mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) }\right| \) (d) \( \mathop{\lim }\limits_{{z \rightarrow c}}\bar{f}\left( z\right) = \overline{\mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) } \) Remark 2.15. The usual interpretation of the above formulae is used here and in the rest of the book: the \( {\mathrm{{LHS}}}^{6} \) exists whenever the RHS exists, and then we have the stated equality. Corollary 2.16. Let \( S \) be a subset of \( \mathbb{C} \), let \( f \) be a function defined on \( S \), and \( \alpha \in \mathbb{C} \) . Set \( u = \Re f \) and \( v = \Im f \) (so that \( f\left( z\right) = u\left( z\right) + {iv}\left( z\right) \) ). If \( c \) is a limit point of \( S \), then \[ \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \alpha \] if and only if \[ \mathop{\lim }\limits_{{z \rightarrow c}}u\left( z\right) = \Re \alpha \text{ and }\mathop{\lim }\limits_{{z \rightarrow c}}v\left( z\right) = \Im \alpha . \] Definition 2.17. Let \( S \) be a subset of \( \mathbb{C}, f : S \rightarrow \mathbb{C} \) be a function defined on \( S \) , and \( c \in S \) be a point in \( S \) . We say that: (a) \( f \) is continuous at \( c \) if \( \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = f\left( c\right) \) . (b) \( f \) is continuous on \( S \) if it is continuous at each \( c \) in \( S \) . 
(c) \( f \) is uniformly continuous on \( S \) if for all \( \epsilon > 0 \), there is a \( \delta > 0 \) such that \[ \left| {f\left( z\right) - f\left( w\right) }\right| < \epsilon \text{for all}z\text{and}w\text{in}S\text{with}\left| {z - w}\right| < \delta \text{.} \] Remark 2.18. A function \( f \) is (uniformly) continuous on \( S \) if and only if both \( \Re f \) and \( \Im f \) are. Uniform continuity implies continuity, but the converse is not true in general. Theorem 2.19. Let \( f \) and \( g \) be functions defined in appropriate sets, that is, sets where the composition \( g \circ f \) of these functions makes sense. Then the following properties hold: (a) If \( f \) is continuous at \( c \) and \( f\left( c\right) \neq 0 \), then \( \frac{1}{f} \) is defined in a neighborhood of \( c \) and is continuous at \( c \) . (b) If \( f \) is continuous at \( c \) and \( g \) is continuous at \( f\left( c\right) \), then \( g \circ f \) is continuous at \( c \) . Theorem 2.20. Let \( K \subset \mathbb{C} \) be a compact set and \( f : K \rightarrow \mathbb{C} \) be a continuous function on \( K \) . Then \( f \) is uniformly continuous on \( K \) . \( {}^{6} \) LHS (RHS) are standard abbreviations for left (right) hand side and will be used throughout this book. Proof. A continuous mapping from a compact metric space to a metric space is uniformly continuous. Definition 2.21. Given a sequence of functions \( \left\{ {f}_{n}\right\} \), all defined on the same set \( S \) in \( \mathbb{C} \), we say that \( \left\{ {f}_{n}\right\} \) converges uniformly to a function \( f \) on \( S \) if for all \( \epsilon > 0 \) there exists an \( N \in {\mathbb{Z}}_{ > 0} \) such that \[ \left| {f\left( z\right) - {f}_{n}\left( z\right) }\right| < \epsilon \text{ for all }z \in S\text{ and all }n > N. \] Remark 2.22. 
\( \left\{ {f}_{n}\right\} \) converges uniformly on \( S \) (to some function \( f \) ) if and only if for all \( \epsilon > 0 \) there exists an \( N \in {\mathbb{Z}}_{ > 0} \) such that \[ \left| {{f}_{n}\left( z\right) - {f}_{m}\left( z\right) }\right| < \epsilon \text{for all}z \in S\text{and all}n\text{and}m > N\text{.} \] Note that in this case the limit function \( f \) is uniquely determined; it is the pointwise limit \( f\left( z\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( z\right) \), for all \( z \in S \) . Theorem 2.23. Let \( \left\{ {f}_{n}\right\} \) be a sequence of functions defined on \( S \subseteq \mathbb{C} \) . If: (1) \( \left\{ {f}_{n}\right\} \) converges uniformly on \( S \) . (2) Each \( {f}_{n} \) is continuous on \( S \) . Then the function \( f \) defined by \[ f\left( z\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( z\right), z \in S \] is continuous on \( S \) . Proof. Start with two points \( z \) and \( c \) in \( S \) . Then for each natural number \( n \) we have
1057_(GTM217)Model Theory
Definition 8.2.12
Definition 8.2.12 Suppose \( T \) is an \( \omega \) -stable theory with monster model \( \mathbb{M} \) . We say that \( T \) is one-based if whenever \( A, B \subseteq {\mathbb{M}}^{\mathrm{{eq}}}, A = {\operatorname{acl}}^{\mathrm{{eq}}}\left( A\right) \), and \( B = {\operatorname{acl}}^{\mathrm{{eq}}}\left( B\right) \), then \( A{ \downarrow }_{A \cap B}B \) . The next lemma explains why we call these theories one-based. \( {}^{1} \) Lemma 8.2.13 Suppose that \( T \) is \( \omega \) -stable. The following are equivalent. i) \( T \) is one-based. ii) For all \( \bar{a} \in {\mathbb{M}}^{\text{eq }} \) and \( B \subseteq {\mathbb{M}}^{\text{eq }} \), if \( \operatorname{tp}\left( {\bar{a}/B}\right) \) is stationary, then \( \operatorname{cb}\left( {\operatorname{tp}\left( {\bar{a}/B}\right) }\right) \subseteq {\operatorname{acl}}^{\mathrm{{eq}}}\left( \bar{a}\right) \) . Proof i) \( \Rightarrow \) ii) Let \( A = {\operatorname{acl}}^{\text{eq }}\left( \bar{a}\right) \) . Because \( \operatorname{tp}\left( {\bar{a}/{\operatorname{acl}}^{\text{eq }}\left( B\right) }\right) \) does not fork over \( B \) , we may without loss of generality assume that \( B = {\operatorname{acl}}^{\mathrm{{eq}}}\left( B\right) \) . Because \( T \) is one-based, \( \bar{a}{ \downarrow }_{A \cap B}B \) . Thus \( \operatorname{cb}\left( {\operatorname{tp}\left( {\bar{a}/B}\right) }\right) \subseteq A \cap B \subseteq A \) . --- \( {}^{1} \) Compare this to Exercise 8.4.12 for arbitrary \( \omega \) -stable theories. --- ii) \( \Rightarrow \) i) Let \( A, B \subseteq {\mathbb{M}}^{\text{eq }} \) with \( {\operatorname{acl}}^{\text{eq }}\left( A\right) = A \) , \( {\operatorname{acl}}^{\text{eq }}\left( B\right) = B \), and \( \bar{a} \in A \) . For any \( \omega \) -stable theory, \( \operatorname{cb}\left( {\operatorname{tp}\left( {\bar{a}/B}\right) }\right) \) is contained in \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( B\right) = B \) . 
By ii), \( \operatorname{cb}\left( {\operatorname{tp}\left( {\overline{a}/B}\right) }\right) \subseteq {\operatorname{acl}}^{\mathrm{{eq}}}\left( \overline{a}\right) \subseteq A \) . Thus, \( \operatorname{cb}\left( {\operatorname{tp}\left( {\overline{a}/B}\right) }\right) \subseteq A \cap B \) and \( \operatorname{tp}\left( {\overline{a}/B}\right) \) does not fork over \( A \cap B \) . Theorem 8.2.14 Suppose that \( \mathbb{M} \) is strongly minimal. Then, \( T \) is one-based if and only if \( \mathbb{M} \) is locally modular. Proof We first assume \( \left( *\right) \) For every \( \alpha \in {\mathbb{M}}^{\mathrm{{eq}}} \), there is \( \bar{d} \in \mathbb{M} \) such that \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( \alpha \right) = {\operatorname{acl}}^{\mathrm{{eq}}}\left( \bar{d}\right) \) . Under this assumption we claim that \( \mathbb{M} \) is one-based if and only if \( \mathbb{M} \) is modular. Suppose that \( \mathbb{M} \) is one-based. If \( A, B \subset \mathbb{M} \) are algebraically closed, then \( A{ \downarrow }_{{\operatorname{acl}}^{\mathrm{{eq}}}\left( A\right) \cap {\operatorname{acl}}^{\mathrm{{eq}}}\left( B\right) }B \) . By \( \left( *\right) \), we can find \( \bar{d} \in A \cap B \) such that \( A{ \downarrow }_{\bar{d}}B \) . By monotonicity, \( A{ \downarrow }_{A \cap B}B \), as desired. Suppose, on the other hand, that \( \mathbb{M} \) is modular. If \( A, B \subseteq {\mathbb{M}}^{\text{eq }} \) are algebraically closed in \( {\mathbb{M}}^{\text{eq }} \), we can find \( {A}_{0},{B}_{0} \subseteq \mathbb{M} \) such that \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( {A}_{0}\right) = A \) and \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( {B}_{0}\right) = B \) . By modularity, \( {A}_{0}{ \downarrow }_{\operatorname{acl}\left( {A}_{0}\right) \cap \operatorname{acl}\left( {B}_{0}\right) }{B}_{0} \) . Thus, \( A{ \downarrow }_{A \cap B}B \) by Corollary 6.3.21. We need to show that one-basedness is preserved by localization.
Suppose that \( X \subset \mathbb{M} \) . Let \( {\mathbb{M}}_{X} \) denote \( \mathbb{M} \) viewed as an \( {\mathcal{L}}_{X} \) -structure. Claim \( \mathbb{M} \) is one-based if and only if \( {\mathbb{M}}_{X} \) is one-based. Suppose \( \mathbb{M} \) is one-based. If \( A, B \subseteq {\mathbb{M}}^{\text{eq }} \), then \[ {\operatorname{acl}}^{\mathrm{{eq}}}\left( {AX}\right) { \downarrow }_{{\operatorname{acl}}^{\mathrm{{eq}}}\left( {AX}\right) \cap {\operatorname{acl}}^{\mathrm{{eq}}}\left( {BX}\right) }{\operatorname{acl}}^{\mathrm{{eq}}}\left( {BX}\right) . \] Thus, \( {\mathbb{M}}_{X} \) is one-based. Suppose \( {\mathbb{M}}_{X} \) is one-based. Let \( B \subseteq \mathbb{M} \) and \( \bar{a} \in \mathbb{M} \) . We want to show that \( \operatorname{cb}\left( {\operatorname{tp}\left( {\bar{a}/B}\right) }\right) \subseteq {\operatorname{acl}}^{\mathrm{{eq}}}\left( \bar{a}\right) \) . Because \( \operatorname{tp}\left( {\bar{a}/B}\right) \) does not fork over some finite \( {B}_{0} \subseteq B \), we may, without loss of generality, assume that \( B \) is finite. Also, without loss of generality, we may assume that \( \bar{a}, B{ \downarrow }_{\varnothing }X \) . Otherwise, we replace \( \bar{a} \) and \( B \) by \( {\bar{a}}^{\prime } \) and \( {B}^{\prime } \), realizing a nonforking extension of \( \operatorname{tp}\left( {\bar{a}, B}\right) \) over \( X \) . Because \( \bar{a}, B{ \downarrow }_{\varnothing }X \), we have \( \bar{a}{ \downarrow }_{B}X \) by transitivity. Let \( \bar{c} \) be a canonical base for \( \operatorname{tp}\left( {\bar{a}/B, X}\right) \) . Because \( \bar{a}{ \downarrow }_{B}X \), \( \bar{c} \in {\operatorname{acl}}^{\mathrm{{eq}}}\left( B\right) \) and, because \( {\mathbb{M}}_{X} \) is one-based, \( \bar{c} \in {\operatorname{acl}}^{\mathrm{{eq}}}\left( {\bar{a}, X}\right) \) . But \( \bar{c}{ \downarrow }_{\bar{a}}X \) because \( \bar{a}, B{ \downarrow }_{\varnothing }X \) . Thus, \( \bar{c} \in {\operatorname{acl}}^{\mathrm{{eq}}}\left( \bar{a}\right) \) . 
We can now finish the proof. Suppose that \( \mathbb{M} \) is locally modular. We can find \( d \in \mathbb{M} \) such that \( {\mathbb{M}}_{d} \) is modular. By Lemma 8.2.9, if \( X \subset \mathbb{M} \) is infinite, then \( {\mathbb{M}}_{X, d} \) satisfies \( \left( *\right) \) . By Exercise 8.4.8, \( {\mathbb{M}}_{X, d} \) is also modular. Thus, \( {\mathbb{M}}_{X, d} \) is one-based and \( \mathbb{M} \) is one-based. On the other hand, if \( \mathbb{M} \) is one-based and \( X \subset \mathbb{M} \) is infinite, then \( {\mathbb{M}}_{X} \) is one-based and satisfies \( \left( *\right) \) . Thus, \( {\mathbb{M}}_{X} \) is modular. By Theorem 8.2.11, \( \mathbb{M} \) is locally modular. --- There are much stronger versions of Theorem 8.2.14. --- Theorem 8.2.15 Suppose that \( T \) is uncountably categorical and \( \mathbb{M} \) is the monster model of \( T \) . The following are equivalent. i) \( T \) is one-based. ii) Every strongly minimal \( D \subseteq {\mathbb{M}}^{n} \) is locally modular. iii) Some strongly minimal \( D \subseteq {\mathbb{M}}^{n} \) is locally modular. For a proof, see Theorem 4.3.1 in [18]. ## 8.3 Geometry and Algebra In this section, we will sketch some important results showing the relationship between the geometry of strongly minimal sets and the presence of definable algebraic structure. We conclude with a sketch of how these ideas come together in Hrushovski's proof of the Mordell-Lang Conjecture for function fields. ## Nontrivial Locally Modular Strongly Minimal Sets So far, the only examples we have given of nontrivial locally modular strongly minimal sets are affine and projective geometries. In both cases, there is a group present. The following remarkable theorem of Hrushovski shows that this is always the case. Theorem 8.3.1 Suppose that \( \mathbb{M} \) is strongly minimal, nontrivial, and locally modular; then there is an infinite group definable in \( {\mathbb{M}}^{\text{eq }} \) . 
Proof We will deduce this result using the group configuration theorem. A direct proof of this result would be an easier special case of the group configuration theorem. The reader can find direct proofs in [18] or [76]. Because \( \mathbb{M} \) is nontrivial, we can find a finite \( A \subset \mathbb{M} \) and \( b, c \in \mathbb{M} \smallsetminus \operatorname{acl}\left( A\right) \) such that \( c \in \operatorname{acl}\left( {A, b}\right) \smallsetminus \left( {\operatorname{acl}\left( A\right) \cup \operatorname{acl}\left( b\right) }\right) \) . Choose \( d \in \mathbb{M} \) independent from \( A, b, c \) . By Theorem 8.2.11, \( {\mathbb{M}}_{d} \) is modular. Adding \( d \) to the language, we may assume that \( \mathbb{M} \) is modular. Because \( d \) is independent from \( A, b, c \), we still have \( c \in \operatorname{acl}\left( {A, b}\right) \smallsetminus \left( {\operatorname{acl}\left( A\right) \cup \operatorname{acl}\left( b\right) }\right) \) . Let \( C = \operatorname{acl}\left( A\right) \cap \operatorname{acl}\left( {b, c}\right) \) . By modularity (see Exercise 8.4.6), \[ \dim \left( {b, c, A}\right) = \dim \left( {b, c}\right) + \dim \left( A\right) - \dim C. \] Because \( \dim \left( {b, c, A}\right) = \dim A + 1 \) and \( \dim \left( {b, c}\right) = 2 \), we get \( \dim C = 1 \) . Thus, there is \( a \in C \) with \( \dim \left( a\right) = 1 \) . Note that \[ \dim \left( {a, c}\right) = \dim \left( {a, b}\right) = \dim \left( {b, c}\right) = \dim \left( {a, b, c}\right) = 2. \] Choose \( y, z \in \mathbb{M} \) such that \( \left( {b, c}\right) \) and \( \left( {y, z}\right) \) realize the same type over \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( a\right) \) and \( \left( {y, z}\right) \) is independent from \( \left( {b, c}\right) \) over \( a \) (i.e., \( \dim \left( {y, z/a, b, c}\right) = \dim \left( {y, z/a}\right) = \dim \left( {b, c/a}\right) = 1 \) ). Thus, \( \dim \left( {a, b, c, y, z}\right) = 3 \) . 
Because \( \left( {y, z}\right) \) and \( \left( {b, c}\right) \) realize the same type over \( a \) , \[ \dim \
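The modular dimension law invoked in this proof, \( \dim(b,c,A) = \dim(b,c) + \dim(A) - \dim C \), is the pregeometry analogue of the subspace identity \( \dim(U+W) = \dim U + \dim W - \dim(U \cap W) \). As a concrete illustration (ours, not the book's), the identity can be checked exhaustively in the prototypical modular pregeometry, a vector space, here over GF(2) where spans are finite:

```python
from itertools import product

def span_gf2(gens):
    """All vectors (as tuples) in the GF(2)-span of the given generators."""
    n = len(gens[0])
    return {tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2 for i in range(n))
            for coeffs in product((0, 1), repeat=len(gens))}

def dim_of_span(span):
    """A subspace of GF(2)^n has exactly 2^d elements; recover d."""
    return len(span).bit_length() - 1

# Two subspaces of GF(2)^4: U = <e1, e2, e3> (dim 3), W = <e2, e3 + e4> (dim 2).
U = span_gf2([(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)])
W = span_gf2([(0, 1, 0, 0), (0, 0, 1, 1)])
sum_UW = span_gf2(list(U | W))   # U + W, the join in the pregeometry
inter = U & W                    # the intersection, itself a subspace

# Modular law: dim(U + W) = dim U + dim W - dim(U ∩ W), checked by enumeration.
assert dim_of_span(sum_UW) == dim_of_span(U) + dim_of_span(W) - dim_of_span(inter)
```

Here `dim(U + W) = 4`, `dim U = 3`, `dim W = 2`, and the intersection is the line spanned by \( e_2 \), so both sides equal 4.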
108_The Joys of Haar Measure
Definition 3.6.23
Definition 3.6.23. For \( x \in {K}_{m} \) coprime to \( \mathfrak{p} \) we set \[ {\left( \frac{x}{\mathfrak{p}}\right) }_{m} = {\omega }_{\mathfrak{P}}^{d}\left( x\right) \;\text{ and }\;G\left( \mathfrak{p}\right) = \tau {\left( {\omega }_{\mathfrak{P}}^{-d}\right) }^{m}. \] In addition, to simplify notation we will often write \( {\chi }_{\mathfrak{p}}\left( x\right) \) instead of \( {\left( \frac{x}{\mathfrak{p}}\right) }_{m}^{-1} \), so that \( G\left( \mathfrak{p}\right) = \tau {\left( {\chi }_{\mathfrak{p}}\right) }^{m} \) . This notation is justified by Lemma 3.6.3 (3) and (4), which tell us that \( {\omega }_{\mathfrak{P}}{\left( x\right) }^{d} \) and \( \tau {\left( {\omega }_{\mathfrak{P}}^{-d}\right) }^{m} \in {K}_{m} \) depend only on \( \mathfrak{p} = \mathfrak{P} \cap {K}_{m} \) . The above definition generalizes the well-known quadratic reciprocity symbol \( \left( \frac{x}{p}\right) \) studied in Section 2.2, hence is naturally called the \( m \) th-power reciprocity symbol. The following series of definitions and formulas is completely analogous to what is done in the classical case of quadratic reciprocity. Lemma 3.6.24. (1) \( {\left( \frac{x}{\mathfrak{p}}\right) }_{m} \) is characterized by the fact that it is a character of order \( m \) such that \[ {\left( \frac{x}{\mathfrak{p}}\right) }_{m} \equiv {x}^{\left( {q - 1}\right) /m}\left( {\;\operatorname{mod}\;\mathfrak{p}}\right) . \] (2) We have \[ \tau \left( {\chi }_{\mathfrak{p}}\right) = \mathop{\sum }\limits_{{t \in {\left( {\mathbb{Z}}_{{K}_{m}}/\mathfrak{p}\right) }^{ * }}}{\chi }_{\mathfrak{p}}\left( t\right) {\psi }_{1}\left( t\right) , \] where as usual \( {\psi }_{1}\left( t\right) = {\zeta }_{p}^{{\operatorname{Tr}}_{\left( {{\mathbb{Z}}_{{K}_{m}}/\mathfrak{p}}\right) /\left( {\mathbb{Z}/p\mathbb{Z}}\right) }\left( t\right) } \) . Proof. 
(1) is the translation of the corresponding properties of the character \( {\omega }_{\mathfrak{P}} \), and (2) follows from the fact that the natural inclusion map from \( {K}_{m} \) to \( L \) induces a canonical isomorphism between \( {\mathbb{Z}}_{{K}_{m}}/\mathfrak{p} \) and \( {\mathbb{Z}}_{L}/\mathfrak{P} \) . Definition 3.6.25. Let \( \mathfrak{a} \) be an integral ideal of \( {K}_{m} \) coprime to \( m \), and \( x \) be coprime to \( \mathfrak{a} \) . We define \( {\left( \frac{x}{\mathfrak{a}}\right) }_{m} \) and \( G\left( \mathfrak{a}\right) \) by the formulas \[ {\left( \frac{x}{\mathfrak{a}}\right) }_{m} = \mathop{\prod }\limits_{{\mathfrak{p} \mid \mathfrak{a}}}{\left( \frac{x}{\mathfrak{p}}\right) }_{m}^{{v}_{\mathfrak{p}}\left( \mathfrak{a}\right) }\;\text{ and }\;G\left( \mathfrak{a}\right) = \mathop{\prod }\limits_{{\mathfrak{p} \mid \mathfrak{a}}}G{\left( \mathfrak{p}\right) }^{{v}_{\mathfrak{p}}\left( \mathfrak{a}\right) }. \] If \( \mathfrak{a} = \alpha {\mathbb{Z}}_{{K}_{m}} \) is a principal ideal, we will write \( {\left( \frac{x}{\alpha }\right) }_{m} \) and \( G\left( \alpha \right) \) instead of \( {\left( \frac{x}{\alpha {\mathbb{Z}}_{{K}_{m}}}\right) }_{m} \) and \( G\left( {\alpha {\mathbb{Z}}_{{K}_{m}}}\right) \) . Thus by definition we have \( {\left( \frac{x}{\mathfrak{a}\mathfrak{b}}\right) }_{m} = {\left( \frac{x}{\mathfrak{a}}\right) }_{m}{\left( \frac{x}{\mathfrak{b}}\right) }_{m} \) and \( G\left( {\mathfrak{a}\mathfrak{b}}\right) = G\left( \mathfrak{a}\right) G\left( \mathfrak{b}\right) \) . Proposition 3.6.26. We have: (1) \( {\left| G\left( \mathfrak{a}\right) \right| }^{2} = \mathcal{N}{\left( \mathfrak{a}\right) }^{m} \) . (2) \( G\left( \mathfrak{a}\right) {\mathbb{Z}}_{{K}_{m}} = {\mathfrak{a}}^{\gamma } \), where \( \gamma = {m\Theta } \) is as above. 
(3) If \( \alpha \in {\mathbb{Z}}_{{K}_{m}} \), there exists a unit \( \varepsilon \left( \alpha \right) \) of \( {\mathbb{Z}}_{{K}_{m}} \) such that \( G\left( \alpha \right) = \varepsilon \left( \alpha \right) {\alpha }^{\gamma } \) . Proof. If \( \mathfrak{p} \) is a prime, by Proposition 2.5.9 we have \( {\left| G\left( \mathfrak{p}\right) \right| }^{2} = {\left| \tau \left( {\omega }_{\mathfrak{P}}^{-d}\right) \right| }^{2m} = {q}^{m} = \mathcal{N}{\left( \mathfrak{p}\right) }^{m} \), so (1) follows by multiplicativity. Statement (2) for a prime ideal is exactly Proposition 3.6.13, and the general result follows by multiplicativity. By (2) we have \( G\left( \alpha \right) {\mathbb{Z}}_{{K}_{m}} = {\alpha }^{\gamma }{\mathbb{Z}}_{{K}_{m}} \), hence \( G\left( \alpha \right) = \varepsilon \left( \alpha \right) {\alpha }^{\gamma } \) for some unit \( \varepsilon \left( \alpha \right) \) . We now want to show that \( \varepsilon \left( \alpha \right) \) is a root of unity. For this we need two lemmas. Lemma 3.6.27. For any integral ideal \( \mathfrak{a} \) coprime to \( m \) and any \( \sigma \in \operatorname{Gal}\left( {{K}_{m}/\mathbb{Q}}\right) \) we have \( G{\left( \mathfrak{a}\right) }^{\sigma } = G\left( {\mathfrak{a}}^{\sigma }\right) \) . Proof. By definition, if \( \mathfrak{p} \) is a prime ideal coprime to \( m \) we have \( G\left( \mathfrak{p}\right) = \tau {\left( {\omega }_{\mathfrak{P}}^{-d}\right) }^{m} \) . Thus by Lemma 3.6.3 we have \( G{\left( \mathfrak{p}\right) }^{\sigma } = \tau {\left( {\omega }_{\sigma \left( \mathfrak{P}\right) }^{-d}\right) }^{m} = G\left( {\sigma \left( \mathfrak{p}\right) }\right) \) since \( \sigma \left( \mathfrak{p}\right) \) is the prime ideal of \( {K}_{m} \) below \( \sigma \left( \mathfrak{P}\right) \) . As usual, the lemma follows by multiplicativity. Lemma 3.6.28. If \( \alpha \in {\mathbb{Z}}_{{K}_{m}} \) then \( {\left| {\alpha }^{\gamma }\right| }^{2} = \mathcal{N}{\left( \alpha \right) }^{m} \) . Proof. 
Since \( {\sigma }_{-1} \) sends \( {\zeta }_{m} \) to \( {\zeta }_{m}^{-1} \) it is complex conjugation. Thus \[ {\left| {\alpha }^{\gamma }\right| }^{2} = {\alpha }^{\gamma }{\sigma }_{-1}\left( {\alpha }^{\gamma }\right) = {\alpha }^{\left( {1 + {\sigma }_{-1}}\right) \gamma }. \] Denoting by \( \mathop{\sum }\limits_{t}^{ * } \) a sum for \( 1 \leq t \leq m - 1 \) such that \( \gcd \left( {t, m}\right) = 1 \) we have \[ \left( {1 + {\sigma }_{-1}}\right) \gamma = \mathop{\sum }\limits_{t}^{ * }t{\sigma }_{t}^{-1} + \mathop{\sum }\limits_{t}^{ * }t{\sigma }_{-t}^{-1} = \mathop{\sum }\limits_{t}^{ * }t{\sigma }_{t}^{-1} + \mathop{\sum }\limits_{t}^{ * }\left( {m - t}\right) {\sigma }_{t}^{-1} \] \[ = m\mathop{\sum }\limits_{t}^{ * }{\sigma }_{t}^{-1} = m\mathop{\sum }\limits_{{\sigma \in \operatorname{Gal}\left( {{K}_{m}/\mathbb{Q}}\right) }}\sigma . \] It follows that \[ {\alpha }^{\left( {1 + {\sigma }_{-1}}\right) \gamma } = \mathop{\prod }\limits_{{\sigma \in \operatorname{Gal}\left( {{K}_{m}/\mathbb{Q}}\right) }}\sigma {\left( \alpha \right) }^{m} = {\mathcal{N}}_{{K}_{m}/\mathbb{Q}}{\left( \alpha \right) }^{m}. \] Proposition 3.6.29. The element \( \varepsilon \left( \alpha \right) \) is a root of unity. In other words, there exist \( i \in \mathbb{Z} \) and a sign \( \pm \) such that \( G\left( \alpha \right) = \pm {\zeta }_{m}^{i}{\alpha }^{\gamma } \) . Proof. By Proposition 3.6.26 (1) and (3) we have \[ {\left| G\left( \alpha \right) \right| }^{2} = {\left| \mathcal{N}\left( \alpha \right) \right| }^{m} = {\left| \varepsilon \left( \alpha \right) \right| }^{2}{\left| {\alpha }^{\gamma }\right| }^{2} = {\left| \varepsilon \left( \alpha \right) \right| }^{2}\mathcal{N}{\left( \alpha \right) }^{m}, \] hence \( \left| {\varepsilon \left( \alpha \right) }\right| = 1 \) (note that \( \mathcal{N}\left( \alpha \right) \) is automatically positive). 
On the other hand, applying Lemma 3.6.27 to \( \mathfrak{a} = \alpha {\mathbb{Z}}_{{K}_{m}} \) and using Proposition 3.6.26 (3) we deduce that for all \( \sigma \in \operatorname{Gal}\left( {{K}_{m}/\mathbb{Q}}\right) \) , \[ \varepsilon {\left( \alpha \right) }^{\sigma }{\alpha }^{\gamma \sigma } = G{\left( \alpha \right) }^{\sigma } = G\left( {\alpha }^{\sigma }\right) = \varepsilon \left( {\alpha }^{\sigma }\right) {\alpha }^{\gamma \sigma }, \] so that \( \varepsilon {\left( \alpha \right) }^{\sigma } = \varepsilon \left( {\alpha }^{\sigma }\right) \) . Since we have shown that \( \left| {\varepsilon \left( \alpha \right) }\right| = 1 \) for all \( \alpha \in {\mathbb{Z}}_{{K}_{m}} \), and in particular for \( {\alpha }^{\sigma } \), it follows that \( \left| {\varepsilon {\left( \alpha \right) }^{\sigma }}\right| = 1 \) for all \( \sigma \) . We conclude by Kronecker’s theorem (Corollary 3.3.10) that \( \varepsilon \left( \alpha \right) \) is a root of unity, and the proposition follows from Corollary 3.5.12. We can now proceed to the statement and proof of Eisenstein's reciprocity law. As the reader will notice, the proof of the following proposition is essentially identical to the classical proof of the quadratic reciprocity law (see the proof of Lemma 2.2.2). Proposition 3.6.30. Let \( {p}_{1} \) and \( {p}_{2} \) be two distinct prime numbers not dividing \( m \) and let \( {\mathfrak{p}}_{1} \) and \( {\mathfrak{p}}_{2} \) be prime ideals of \( {K}_{m} \) above \( {p}_{1} \) and \( {p}_{2} \) respectively. Then \[ {\left( \frac{G\left( {\mathfrak{p}}_{1}\right) }{{\mathfrak{p}}_{2}}\right) }_{m} = {\left( \frac{\mathcal{N}\left( {\mathfrak{p}}_{2}\right) }{{\mathfrak{p}}_{1}}\right) }_{m}. \] Proof. Let \( {f}_{1} \) and \( {f}_{2} \) respectively be the orders of \( {p}_{1} \) and \( {p}_{2} \) modulo \( m \), so that by Proposition 3.5.18 we have \( f\left( {{\mathfrak{p}}_{i}/{p}_{i}}\right) = {f}_{i} \) . 
Set \( {q}_{i} = \mathcal{N}\left( {\mathfrak{p}}_{i}\right) = {p}_{i}^{{f}_{i}} \equiv 1 \) \( \left( {\;\operatorname{mod}\;m}\right) \) and recall the notation of Definition 3.6.23. Since \( {\chi }_{{\mathfrak{p}}_{1}}\left( t\right) \) is an \( m \) th root of unity and \( m \mid \left( {{q}_{2} - 1}\right) \), by Lemma 3.6.24 we have \[ \tau {\left( {\chi }_{{\mathfrak{p}}_{1}}\right) }^{{q}_{2}} \equiv \mathop{\sum }\limits_{{t \in {\left( {\mat
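Proposition 3.6.26 (1), \( |G(\mathfrak{a})|^2 = \mathcal{N}(\mathfrak{a})^m \), ultimately rests on the classical evaluation \( |\tau(\chi)|^2 = q \) for a nontrivial character. In the simplest case \( m = 2 \), \( K_m = \mathbb{Q} \), where \( \chi \) is the Legendre symbol, this can be verified numerically; the sketch below is ours (the function names are not the book's notation):

```python
import cmath

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s   # s is 0, 1, or p - 1

def gauss_sum(p):
    """tau(chi) = sum over t in (Z/pZ)* of chi(t) * zeta_p^t,
    with chi the Legendre symbol mod p and zeta_p = e^{2*pi*i/p}."""
    return sum(legendre(t, p) * cmath.exp(2j * cmath.pi * t / p)
               for t in range(1, p))

# |tau(chi)|^2 = p, the m = 2 instance of |G(p)|^2 = N(p)^m.
for p in (5, 7, 11, 13):
    assert abs(abs(gauss_sum(p)) ** 2 - p) < 1e-9
```

The same loop with larger primes confirms the magnitude identity to floating-point accuracy; determining the argument of the Gauss sum, not just its modulus, is the much deeper question Gauss settled.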
1088_(GTM245)Complex Analysis
Definition 5.19
Definition 5.19. The index of a cycle \( \gamma = \left( {{\gamma }_{1},{\gamma }_{2},\ldots ,{\gamma }_{n}}\right) \) with respect to a point \( c \in \mathbb{C} \) - range \( \gamma \) is denoted by \( I\left( {\gamma, c}\right) \) and defined by \[ I\left( {\gamma, c}\right) = I\left( {{\gamma }_{1}, c}\right) + \cdots + I\left( {{\gamma }_{n}, c}\right) . \] (5.5) Definition 5.20. A cycle \( \gamma \) with range contained in a domain \( D \subseteq \mathbb{C} \) is said to be homologous to zero in \( D \) if \( I\left( {\gamma, c}\right) = 0 \) for every \( c \in \mathbb{C} - D \) . Observe that if a continuous closed path \( \gamma \) is homotopic to a point in \( D \), then the cycle \( \left( \gamma \right) \) with the single component \( \gamma \) is homologous to zero in \( D \) . However, the two notions are different; see Exercise 5.3. With these definitions and some work, \( {}^{1} \) we can obtain the most general forms of Cauchy's theorem and integral formula. Theorem 5.21 (Cauchy’s Theorem and Integral Formula: General Form). If \( f \) is analytic in a domain \( D \subseteq \mathbb{C} \) and \( \gamma \) is a cycle homologous to zero in \( D \), then (a) \( {\int }_{\gamma }f\left( z\right) \mathrm{d}z = 0 \) . (b) For all \( c \in D \) - range \( \gamma \), we have (5.1). Proof. If \[ E = \{ z \in \mathbb{C} - \text{ range }\gamma ;I\left( {\gamma, z}\right) = 0\} , \] then the set \( E \) is open in \( \mathbb{C} \) and contains the unbounded component of the complement of the range of \( \gamma \) in \( \mathbb{C} \), because it contains the unbounded component of the complement of the range of each component curve of \( \gamma \), as we saw in Sect. 4.5. Moreover \( E \supset \left( {\mathbb{C} - D}\right) \), since \( \gamma \) is homologous to zero in \( D \) . 
Define \( g : D \times D \rightarrow \mathbb{C} \) by \[ g\left( {w, z}\right) = \left\{ \begin{array}{ll} \frac{f\left( z\right) - f\left( w\right) }{z - w} & \text{ for }z \neq w, \\ {f}^{\prime }\left( w\right) & \text{ for }z = w. \end{array}\right. \] The function \( g \) is continuous in \( D \times D \), and for fixed \( z \in D, g\left( {\cdot, z}\right) \) is holomorphic on \( D \) . Furthermore, for all \( c \in D \) - range \( \gamma \), we have \[ {\int }_{\gamma }g\left( {c, z}\right) \mathrm{d}z = {\int }_{\gamma }\frac{f\left( z\right) - f\left( c\right) }{z - c}\mathrm{\;d}z \] \[ = {\int }_{\gamma }\frac{f\left( z\right) }{z - c}\mathrm{\;d}z - f\left( c\right) {\int }_{\gamma }\frac{\mathrm{d}z}{z - c} \] \[ = {\int }_{\gamma }\frac{f\left( z\right) }{z - c}\mathrm{\;d}z - {2\pi \imath }f\left( c\right) I\left( {\gamma, c}\right) . \] (5.6) We define next \[ h\left( w\right) = \left\{ \begin{array}{l} {\int }_{\gamma }g\left( {w, z}\right) \mathrm{d}z\text{ for }w \in D, \\ {\int }_{\gamma }\frac{f\left( z\right) \mathrm{d}z}{z - w}\;\text{ for }w \in E. \end{array}\right. \] Noting that \( D \cup E = \mathbb{C} \), we see from (5.6) that for \( w \in D \cap E \) , \[ {\int }_{\gamma }g\left( {w, z}\right) \mathrm{d}z = {\int }_{\gamma }\frac{f\left( z\right) }{z - w}\mathrm{\;d}z \] because \( I\left( {\gamma, w}\right) = 0 \), and thus \( h \) is a well-defined function on the plane. The set \( E \) contains the complement of a large disc, and the function \( h \) is clearly bounded there. By Exercise 4.11, \( h \) is complex differentiable; thus \( h \) is a bounded analytic function in \( \mathbb{C} \) and hence constant, by Liouville’s theorem. Since \( \mathop{\lim }\limits_{{w \rightarrow \infty }}h\left( w\right) = 0, h \) is the zero function. \( {}^{1} \) We are following a course outlined by J. D. Dixon, A brief proof of Cauchy’s integral formula, Proc. Amer. Math. Soc. 29 (1971), 625-626. 
In particular, \[ {\int }_{\gamma }g\left( {w, z}\right) \mathrm{d}z = 0 \] for all \( w \in D \) - range \( \gamma \), and (b) follows from (5.6). We now fix a point \( c \in D \) - range \( \gamma \) and apply part (b) to the analytic function defined on \( D \) by \( z \mapsto \left( {z - c}\right) f\left( z\right) \) and the cycle \( \gamma \), to obtain \[ I\left( {\gamma, w}\right) \left( {w - c}\right) f\left( w\right) = \frac{1}{{2\pi }\imath }{\int }_{\gamma }\frac{\left( {z - c}\right) f\left( z\right) }{z - w}\mathrm{\;d}z \] for all \( w \in D \) - range \( \gamma \) . We obtain part (a) by evaluating the last equation at \( w = c \) . Remark 5.22. 1. A topologist would develop the concept of homology in much more detail using chains and cycles. However, for our purposes, the above definitions suffice. 2. To help the reader with some of the problems of the last chapter, we review a standard definition from algebraic topology: two cycles \( \gamma = \left( {{\gamma }_{1},\ldots ,{\gamma }_{n}}\right) \) and \( \delta = \left( {{\delta }_{1},\ldots ,{\delta }_{m}}\right) \) with ranges contained in a domain \( D \) are homologous in \( D \) if the cycle with components \( \left( {{\gamma }_{1},\ldots ,{\gamma }_{n},{\delta }_{1 - },\ldots ,{\delta }_{m - }}\right) \) is homologous to zero in \( D \), where \( {\delta }_{i - } \) is the curve \( {\delta }_{i} \) traversed backward (see Definition 4.9). 3. It is also standard to define the next relation between curves, which we have not had any reason to use. Two non-closed paths \( {\gamma }_{1} \) and \( {\gamma }_{2} \) (with ranges contained in \( D \) ) are homologous in \( D \) if they have the same initial point and the same end point and the cycle \( \left( {{\gamma }_{1} * {\gamma }_{2 - }}\right) \) is homologous to zero in \( D \); note that the only component of this cycle is the closed path \( {\gamma }_{1} * {\gamma }_{2 - } \) . 4. 
The notions of a cycle \( \gamma = \left( {{\gamma }_{1},{\gamma }_{2},\ldots ,{\gamma }_{n}}\right) \) and the sum \( {\gamma }_{1} + {\gamma }_{2} + \cdots + {\gamma }_{n} \) of its components, as in Definition 4.58, are different and should not be confused. 5. A domain \( D \) in \( \mathbb{C} \) is simply connected if and only if \( I\left( {\gamma, c}\right) = 0 \) for all cycles \( \gamma \) in \( D \) and all \( c \in \mathbb{C} - D \) . ## 5.3 Jordan Curves We recall that the continuous closed path \( \gamma : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbb{C} \) is a simple closed path or a Jordan curve whenever \( \gamma \left( {t}_{1}\right) = \gamma \left( {t}_{2}\right) \) with \( 0 \leq {t}_{1} < {t}_{2} \leq 1 \) implies \( {t}_{1} = 0 \) and \( {t}_{2} = 1 \) . In this case, the range of \( \gamma \) is a homeomorphic image of the unit circle \( {S}^{1} \) . To see this, we define \[ h\left( {\mathrm{e}}^{2\pi it}\right) = \gamma \left( t\right) \] and note that \( h \) maps \( {S}^{1} \) onto the range of \( \gamma \) . Observe that \( h \) is well defined, continuous, and injective. Since the circle is compact, \( h \) is a homeomorphism. Theorem 5.23 (Jordan Curve Theorem \( {}^{2} \) ). If \( \gamma \) is a simple closed path in \( \mathbb{C} \), then (a) \( \mathbb{C} \) - range \( \gamma \) has exactly two connected components, one of which is bounded. (b) Range \( \gamma \) is the boundary of each of these components, and (c) \( I\left( {\gamma, c}\right) = 0 \) for all \( c \) in the unbounded component of the complement of the range of \( \gamma \) . \( I\left( {\gamma, c}\right) = \pm 1 \) for all \( c \) in the bounded component of the complement of the range of \( \gamma \) . The choice of sign depends only on the choice of direction for traversal on \( \gamma \) . Definition 5.24. 
For a simple closed path \( \gamma \) in \( \mathbb{C} \) we define the interior of \( \gamma \), \( i\left( \gamma \right) \), to be the bounded component of \( \mathbb{C} \) - range \( \gamma \) and the exterior of \( \gamma \), \( e\left( \gamma \right) \), to be the unbounded component of \( \mathbb{C} \) - range \( \gamma \) . If \( I\left( {\gamma, c}\right) = + 1 \) (respectively \( -1 \) ) for \( c \) in \( i\left( \gamma \right) \), then we say that \( \gamma \) is a Jordan curve with positive (respectively negative) orientation. We shall not prove the above theorem. It is a deep result. In all of our applications, it will be obvious that our Jordan curves have the above properties. Remark 5.25. Another important (and nontrivial to prove) property of Jordan curves is the fact that the interior of a Jordan curve is always a simply connected domain in \( \mathbb{C} \) . If we view the Jordan curve as lying on the Riemann sphere \( \widehat{\mathbb{C}} \), then each component of the complement of its range is simply connected. This property allows us to prove the following result. Theorem 5.26 (Cauchy’s Theorem (Extended Version)). Let \( {\gamma }_{0},\ldots ,{\gamma }_{n} \) be \( n + 1 \) positively oriented Jordan curves. Assume that \[ \text{range}{\gamma }_{j} \subset e\left( {\gamma }_{k}\right) \cap i\left( {\gamma }_{0}\right) \] --- \( {}^{2} \) For a proof see the appendix to Ch. IX of J. Dieudonné, Foundations of Modern Analysis, Pure and Applied Mathematics, vol. X, Academic Press, 1960 or Chap. 10 of J. R. Munkres, Topology (Second Edition), Dover, 2000. --- Fig. 5.2 Jordan curves and the domain they define 
If \( f \) is a holomorphic function on a neighborhood \( N \) of the closure of the domain \[ D = i\left( {\gamma }_{0}\right) \cap e\left( {\gamma }_{1}\right) \cap \cdots \cap e\left( {\gamma }_{n}\right) , \] then \[ {\int }_{{\gamma }_{0}}f\left( z\right) \mathrm{d}z = \mathop{\sum }\limits_{{k = 1}}^{n}{\int }_{{\gamma }_{k}}f\left( z\right) \mathrm{d}z. \] Proof. Adjoin nonintersecting curves \( {\delta }_{j} \) in \( D \) from \( {\gamma }_{0} \) to \( {\gamma }_{j} \) for \( j = 1,\ldots, n \), as in Fig. 5.2. Then the cycle \[ \delta = \left( {{\gamma }_{0},{\delta }_{
18_Algebra Chapter 0
Definition 3.9
Definition 3.9. The rank of \( \alpha \), denoted \( \operatorname{rk}\alpha \), is the dimension of \( \operatorname{im}\alpha \) . The nullity of \( \alpha \) is \( \dim \left( {\ker \alpha }\right) \) . Claim 3.10. Let \( \alpha : V \rightarrow W \) be a linear map of finite-dimensional vector spaces. Then \[ \left( {\operatorname{rank}\text{of}\alpha }\right) + \left( {\text{nullity of}\alpha }\right) = \dim V\text{.} \] Proof. Let \( n = \dim V \) and \( m = \dim W \) . By Proposition 2.10 we can represent \( \alpha \) by an \( m \times n \) matrix of the form \[ \left( \begin{matrix} {I}_{r} & 0 \\ 0 & 0 \end{matrix}\right) \] From this representation it is immediate that \( \operatorname{rk}\alpha = r \) and the nullity of \( \alpha \) is \( n - r \) , with the stated consequence. Summarizing, \( \operatorname{rk}\alpha \) equals the (column) rank of any matrix \( P \) representing \( \alpha \) ; similarly, the nullity of \( \alpha \) equals ’ \( \dim V \) minus the (row) rank’ of \( P \) . Claim 3.10 is the abstract version of the equality of row rank and column rank. 3.4. Euler characteristic and the Grothendieck group. Against my best efforts, I cannot resist extending these simple observations to more general complexes. Claim 3.10 may be reformulated as follows: Proposition 3.11. Let \[ 0 \rightarrow U \rightarrow V \rightarrow W \rightarrow 0 \] be a short exact sequence of finite-dimensional vector spaces. Then \[ \dim \left( V\right) = \dim \left( U\right) + \dim \left( W\right) \] Equivalently, this amounts to the relation \( \dim \left( {V/U}\right) = \dim \left( V\right) - \dim \left( U\right) \) . Consider then a complex of finite-dimensional vector spaces and linear maps: \[ {V}_{ \bullet } : \;0 \rightarrow {V}_{N}\overset{{\alpha }_{N}}{ \rightarrow }{V}_{N - 1}\overset{{\alpha }_{N - 1}}{ \rightarrow }\cdots \overset{{\alpha }_{2}}{ \rightarrow }{V}_{1}\overset{{\alpha }_{1}}{ \rightarrow }{V}_{0} \rightarrow 0 \] (cf. [III]7.1). 
Thus, \( {\alpha }_{i - 1} \circ {\alpha }_{i} = 0 \) for all \( i \) . This condition is equivalent to the requirement that \( \operatorname{im}\left( {\alpha }_{i + 1}\right) \subseteq \ker \left( {\alpha }_{i}\right) \) ; recall that the homology of this complex is defined as the collection of spaces \[ {H}_{i}\left( {V}_{ \bullet }\right) = \frac{\ker \left( {\alpha }_{i}\right) }{\operatorname{im}\left( {\alpha }_{i + 1}\right) }. \] The complex is exact if \( \operatorname{im}\left( {\alpha }_{i + 1}\right) = \ker \left( {\alpha }_{i}\right) \) for all \( i \), that is, if \( {H}_{i}\left( {V}_{ \bullet }\right) = 0 \) for all \( i \) . Definition 3.12. The Euler characteristic of \( {V}_{ \bullet } \) is the integer \[ \chi \left( {V}_{ \bullet }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\dim \left( {V}_{i}\right) \] The original motivation for the introduction of this number is topological: with suitable positions, this Euler characteristic equals the Euler characteristic obtained by triangulating a manifold and then computing the number of vertices of the triangulation, minus the number of edges, plus the number of faces, etc. The following simple result is then a straightforward (and very useful) generalization of Proposition 3.11 Proposition 3.13. With notation as above, \[ \chi \left( {V}_{ \bullet }\right) = \mathop{\sum }\limits_{{i = 0}}^{N}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }\right) }\right) . \] In particular, if \( {V}_{ \bullet } \) is exact, then \( \chi \left( {V}_{ \bullet }\right) = 0 \) . Proof. There is nothing to show for \( N = 0 \), and the result follows directly from Proposition 3.11 if \( N = 1 \) (Exercise 3.15). 
Arguing by induction, given a complex \[ {V}_{ \bullet } : \;0 \rightarrow {V}_{N}\overset{{\alpha }_{N}}{ \rightarrow }{V}_{N - 1}\overset{{\alpha }_{N - 1}}{ \rightarrow }\cdots \overset{{\alpha }_{2}}{ \rightarrow }{V}_{1}\overset{{\alpha }_{1}}{ \rightarrow }{V}_{0} \rightarrow 0, \] we may assume that the result is known for 'shorter' complexes. Consider then the truncation \[ {V}_{ \bullet }^{\prime } : \;0 \rightarrow {V}_{N - 1}\overset{{\alpha }_{N - 1}}{ \rightarrow }\cdots \overset{{\alpha }_{2}}{ \rightarrow }{V}_{1}\overset{{\alpha }_{1}}{ \rightarrow }{V}_{0} \rightarrow 0. \] Then \[ \chi \left( {V}_{ \bullet }\right) = \chi \left( {V}_{ \bullet }^{\prime }\right) + {\left( -1\right) }^{N}\dim \left( {V}_{N}\right) \] and \[ {H}_{i}\left( {V}_{ \bullet }\right) = {H}_{i}\left( {V}_{ \bullet }^{\prime }\right) \;\text{ for }0 \leq i \leq N - 2, \] while \[ {H}_{N - 1}\left( {V}_{ \bullet }^{\prime }\right) = \ker \left( {\alpha }_{N - 1}\right) ,\;{H}_{N - 1}\left( {V}_{ \bullet }\right) = \frac{\ker \left( {\alpha }_{N - 1}\right) }{\operatorname{im}\left( {\alpha }_{N}\right) },\;{H}_{N}\left( {V}_{ \bullet }\right) = \ker \left( {\alpha }_{N}\right) . \] By Proposition 3.11 (cf. Claim 3.10), \[ \dim \left( {V}_{N}\right) = \dim \left( {\operatorname{im}\left( {\alpha }_{N}\right) }\right) + \dim \left( {\ker \left( {\alpha }_{N}\right) }\right) \] and \[ \dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }\right) }\right) = \dim \left( {\ker \left( {\alpha }_{N - 1}\right) }\right) - \dim \left( {\operatorname{im}\left( {\alpha }_{N}\right) }\right) \] therefore \[ \dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }^{\prime }\right) }\right) - \dim \left( {V}_{N}\right) = \dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }\right) }\right) - \dim \left( {{H}_{N}\left( {V}_{ \bullet }\right) }\right) . 
\] Putting all of this together with the induction hypothesis, \[ \chi \left( {V}_{ \bullet }^{\prime }\right) = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }^{\prime }\right) }\right) \] gives \[ \chi \left( {V}_{ \bullet }\right) = \chi \left( {V}_{ \bullet }^{\prime }\right) + {\left( -1\right) }^{N}\dim \left( {V}_{N}\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }^{\prime }\right) }\right) + {\left( -1\right) }^{N}\dim \left( {V}_{N}\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{N - 2}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }^{\prime }\right) }\right) + {\left( -1\right) }^{N - 1}\left( {\dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }^{\prime }\right) }\right) - \dim \left( {V}_{N}\right) }\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{N - 2}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }\right) }\right) + {\left( -1\right) }^{N - 1}\left( {\dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }\right) }\right) - \dim \left( {{H}_{N}\left( {V}_{ \bullet }\right) }\right) }\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{N}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }\right) }\right) \] as needed. In terms of the topological motivation recalled above, Proposition 3.13 tells us that the Euler characteristic of a manifold may be computed as the alternating sum of the ranks of its homology, that is, of its Betti numbers. Having come this far, I cannot refrain from mentioning the next, equally simpleminded, generalization. 
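The bookkeeping in this proof (by rank-nullity, \( \dim \ker \left( {\alpha }_{i}\right) = \dim \left( {V}_{i}\right) - \operatorname{rank}\left( {\alpha }_{i}\right) \)) makes Proposition 3.13 easy to check numerically over \( \mathbb{R} \). A minimal sketch with NumPy; the complex below is an illustrative choice, not taken from the text:

```python
import numpy as np

def homology_dims(dims, maps):
    """dims[i] = dim V_i; maps[i] is the matrix of alpha_{i+1}: V_{i+1} -> V_i.
    Uses dim ker(alpha_i) = dims[i] - rank(alpha_i) and
    dim H_i = dim ker(alpha_i) - rank(alpha_{i+1})."""
    N = len(dims) - 1
    rank = [np.linalg.matrix_rank(m) for m in maps]  # rank[i] = rank(alpha_{i+1})
    H = []
    for i in range(N + 1):
        dim_ker = dims[i] if i == 0 else dims[i] - rank[i - 1]  # alpha_0 = 0
        dim_im = rank[i] if i < N else 0                        # alpha_{N+1} = 0
        H.append(int(dim_ker - dim_im))
    return H

# An illustrative complex 0 -> R^2 -> R^3 -> R^2 -> 0:
a1 = np.array([[1.0, 0, 0], [0, 1, 0]])    # alpha_1: V_1 -> V_0
a2 = np.array([[0.0, 0], [0, 0], [1, 2]])  # alpha_2, with im(a2) in ker(a1)
assert np.allclose(a1 @ a2, 0)             # alpha_1 . alpha_2 = 0: a complex

dims = [2, 3, 2]
H = homology_dims(dims, [a1, a2])          # here H = [0, 0, 1]
chi = sum((-1) ** i * d for i, d in enumerate(dims))
assert chi == sum((-1) ** i * h for i, h in enumerate(H))  # Proposition 3.13
```

The two alternating sums agree (both equal 1 for this complex), and the complex is exact except in degree 2, where \( \ker \left( {\alpha }_{2}\right) \) is one-dimensional.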
The reader has surely noticed that the only tool used in the proof of Proposition 3.13 was the 'additivity' property of dimension, established in Proposition 3.11: if \[ 0 \rightarrow U \rightarrow V \rightarrow W \rightarrow 0 \] is exact, then \[ \dim \left( V\right) = \dim \left( U\right) + \dim \left( W\right) . \] Proposition 3.13 is a formal consequence of this one property of \( \dim \) . With this in mind, we can reinterpret what we have just done in the following curious way. Consider the category \( k \) -Vect \( {}^{f} \) of finite-dimensional \( k \) -vector spaces. Each object \( V \) of \( k \) -Vect \( {}^{f} \) determines an isomorphism class \( \left\lbrack V\right\rbrack \) . Let \( F\left( {k\text{-}{\operatorname{Vect}}^{f}}\right) \) be the free abelian group on the set of these isomorphism classes; further, let \( E \) be the subgroup generated by the elements \[ \left\lbrack V\right\rbrack - \left\lbrack U\right\rbrack - \left\lbrack W\right\rbrack \] for all short exact sequences \[ 0 \rightarrow U \rightarrow V \rightarrow W \rightarrow 0 \] in \( k \) -Vect \( {}^{f} \) . The quotient group \[ K\left( {k\text{-}{\operatorname{Vect}}^{f}}\right) \mathrel{\text{:=}} \frac{F\left( {k\text{-}{\operatorname{Vect}}^{f}}\right) }{E} \] is called the Grothendieck group of the category \( k \) -Vect \( {}^{f} \) . The element determined by \( V \) in the Grothendieck group is still denoted \( \left\lbrack V\right\rbrack \) . More generally, a Grothendieck group may be defined for any category admitting a notion of exact sequence. Every complex \( {V}_{ \bullet } \) determines an element in \( K\left( {k\text{-}{\operatorname{Vect}}^{f}}\right) \), namely \[ {\chi }_{K}\left( {V}_{ \bullet }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\left\lbrack {V}_{i}\right\rbrack \in K\left( {k\text{-}{\operatorname{Vect}}^{f}}\right) . \] Claim 3.14.
With notation as above, we have the following: - \( {\chi }_{K} \) is ‘an Euler characteristic’, in the sense that it satisfies the formula given in Proposition 3.13: \[ {\chi }_{K}\left( {V}_{ \bullet }\right) = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\left\lbrack {{H}_{i}\left( {V}_{ \bullet }\right) }\right\rbrack \] - \( {\chi }_{K} \) is a ‘universal Euler characteristic’, in the following sense. Let \( G \) be an abelian group, and let \( \delta \) be a function associating an element of \( G \) to each finite-dimensional vector space, such that \( \delta \left( V\right) = \delta \left( {V}^{\prime }\right) \) if \( V \cong {V}^{\prime } \) and \( \delta \left( {V/U}\right) = \delta \left( V\right) - \delta \left( U\right) \) . For \( {V}_{ \bullet } \) a complex, define \[ {\chi }_{G}\left( {V}_{ \bullet }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\delta \left( {V}_{i}\right) . \]
## 111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org, Definition 1.128
Definition 1.128 (Ioffe). A family \( \left( {{f}_{1},\ldots ,{f}_{k}}\right) \) of lower semicontinuous functions on \( X \) with sum \( f \) is said to be linearly coherent around some \( \bar{x} \in X \), or to satisfy the linear metric qualification condition around \( \bar{x} \), if there exist \( c > 0,\rho > 0 \) such that for all \( x \in B\left( {\bar{x},\rho }\right) ,\left( {{t}_{1},\ldots ,{t}_{k}}\right) \in {\mathbb{R}}^{k} \) relation (1.35) holds with \( \mu \left( r\right) \mathrel{\text{:=}} {cr} \) for \( r \in {\mathbb{R}}_{ + } \) , i.e., one has \[ d\left( {\left( {x,{t}_{1} + \cdots + {t}_{k}}\right) ,\operatorname{epi}f}\right) \leq {cd}\left( {\left( {x,{t}_{1}}\right) ,\operatorname{epi}{f}_{1}}\right) + \cdots + {cd}\left( {\left( {x,{t}_{k}}\right) ,\operatorname{epi}{f}_{k}}\right) . \] (1.36) An analogue of Proposition 1.126 can be given. Proposition 1.129. (a) Every family \( \left( {{f}_{1},\ldots ,{f}_{k}}\right) \) of lower semicontinuous functions on \( X \) all but one of which are locally Lipschitzian around \( \bar{x} \) is linearly coherent. (b) If \( \left( {{f}_{1},\ldots ,{f}_{k}}\right) \) is a family of lower semicontinuous functions that is linearly coherent around \( \bar{x} \in X \) and if \( {f}_{k + 1} \) is Lipschitzian around \( \bar{x} \), then \( \left( {{f}_{1},\ldots ,{f}_{k + 1}}\right) \) is linearly coherent around \( \bar{x} \) . Proof. It suffices to prove that if \( f \) is lower semicontinuous and if \( g \) is Lipschitzian around \( \bar{x} \), then \( \left( {f, g}\right) \) is linearly coherent around \( \bar{x} \) . Then assertions (a) and (b) follow by induction on \( k \) . Since (1.36) is preserved when one changes the norm in \( X \times \mathbb{R} \) to an equivalent one, we may suppose the Lipschitz rate of \( g \) is 1 on some ball \( B\left( {\bar{x},\rho }\right) \) . 
Let \( F, G, H \) be the epigraphs of \( f, g \), and \( h \mathrel{\text{:=}} f + g \), respectively, and let \( x \in B\left( {\bar{x},\rho }\right) \) , \( s, t \in \mathbb{R} \) . Given \( \varepsilon > 0 \), choose \( \left( {u, r}\right) \in F \) satisfying \( \parallel u - x\parallel + \left| {r - s}\right| \leq {d}_{F}\left( {x, s}\right) + \varepsilon \) . When \( g\left( x\right) \geq t \), we have \( \left| {g\left( x\right) - t}\right| = {\left( g\left( x\right) - t\right) }^{ + } = {d}_{G}\left( {x, t}\right) \), and since \( \left( {u, r + g\left( u\right) }\right) \in H \) and \[ \parallel u - x\parallel + \left| {\left( {r + g\left( u\right) }\right) - \left( {s + t}\right) }\right| \leq \parallel u - x\parallel + \left| {r - s}\right| + \left| {g\left( u\right) - g\left( x\right) }\right| + \left| {g\left( x\right) - t}\right| \] \[ \leq 2\parallel u - x\parallel + \left| {r - s}\right| + \left| {g\left( x\right) - t}\right| , \] we get \( {d}_{H}\left( {x, s + t}\right) \leq 2{d}_{F}\left( {x, s}\right) + {2\varepsilon } + {d}_{G}\left( {x, t}\right) \) . When \( g\left( x\right) < t \) we have \( (u, r + g\left( u\right) + \) \( t - g\left( x\right) ) \in H \) and \[ \parallel u - x\parallel + \left| {\left( {r + g\left( u\right) + t - g\left( x\right) }\right) - \left( {s + t}\right) }\right| \leq \parallel u - x\parallel + \left| {r - s}\right| + \left| {g\left( u\right) - g\left( x\right) }\right| \] \[ \leq 2\parallel u - x\parallel + \left| {r - s}\right| \leq 2{d}_{F}\left( {x, s}\right) + {2\varepsilon }. \] Thus, in both cases we have \( {d}_{H}\left( {x, s + t}\right) \leq 2{d}_{F}\left( {x, s}\right) + {2\varepsilon } + {d}_{G}\left( {x, t}\right) \) . Since \( \varepsilon > 0 \) is arbitrary, we get \( {d}_{H}\left( {x, s + t}\right) \leq 2{d}_{F}\left( {x, s}\right) + {d}_{G}\left( {x, t}\right) \) . ## 1.6.5 Links Between Penalization and Robust Infima Now let us point out the links of the preceding concepts with penalization.
This can be done for each of the various cases of stabilized infimum. In view of the passages described above, we limit our study to the case of a composition \( h \circ g \) , where \( g : X \rightarrow Y \) and \( h : Y \rightarrow {\mathbb{R}}_{\infty } \) . In order to get some flexibility, we make use of a function \( k : Y \times Y \rightarrow {\overline{\mathbb{R}}}_{ + } \mathrel{\text{:=}} \left\lbrack {0, + \infty }\right\rbrack \) such that \[ k\left( {y,{y}^{\prime }}\right) \rightarrow 0 \Leftrightarrow d\left( {y,{y}^{\prime }}\right) \rightarrow 0. \] (1.37) We call such a function a forcing bifunction. For instance, one may choose \( k \mathrel{\text{:=}} {d}^{p} \) with \( p > 0 \) or, more generally, \( k \mathrel{\text{:=}} \mu \circ d \), where \( \mu : {\mathbb{R}}_{ + } \rightarrow {\overline{\mathbb{R}}}_{ + } \) is continuous at 0 with \( \mu \left( 0\right) = 0 \) and firm (i.e., \( \left( {t}_{n}\right) \rightarrow 0 \) whenever \( \left( {\mu \left( {t}_{n}\right) }\right) \rightarrow 0 \) ). Given \( c > 0 \), we define the penalized infimum of \( h \circ g \) by \[ {m}_{c} \mathrel{\text{:=}} \inf \{ h\left( y\right) + {ck}\left( {g\left( x\right), y}\right) : \left( {x, y}\right) \in X \times Y\} ,\;m \mathrel{\text{:=}} \mathop{\sup }\limits_{{c > 0}}{m}_{c}. \] Then \( c \mapsto {m}_{c} \) is clearly nondecreasing. One may have \( {m}_{c} = - \infty \) for all \( c > 0 \) while \( { \land }_{g}h \) is finite. This fact occurs for \( X \mathrel{\text{:=}} \{ 0\} \subset Y \mathrel{\text{:=}} \mathbb{R}, g\left( 0\right) \mathrel{\text{:=}} 0, h\left( y\right) \mathrel{\text{:=}} - {y}^{2} \) , \( k\left( {y,{y}^{\prime }}\right) \mathrel{\text{:=}} \left| {y - {y}^{\prime }}\right| \) ; note that it does not occur when \( k \) is given by \( k\left( {y,{y}^{\prime }}\right) = {\left( y - {y}^{\prime }\right) }^{2} \) , so that it is of interest to choose \( k \) appropriately. 
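The counterexample just given is easy to reproduce on a grid: with \( k\left( {y,{y}^{\prime }}\right) = \left| {y - {y}^{\prime }}\right| \) the penalized functional is \( -{y}^{2} + c\left| y\right| \), unbounded below for every \( c \), while with \( k\left( {y,{y}^{\prime }}\right) = {\left( y - {y}^{\prime }\right) }^{2} \) it is \( \left( {c - 1}\right) {y}^{2} \), bounded below as soon as \( c > 1 \). A quick numerical sketch (illustrative, not from the text):

```python
import numpy as np

ys = np.linspace(-100, 100, 20001)
h = -ys**2                      # h(y) = -y^2, and g(0) = 0 on X = {0}

# k(y, y') = |y - y'|: m_c = inf_y h(y) + c|y| is -infinity; already very
# negative on a bounded grid, and it only decreases as the grid grows.
m_c_abs = np.min(h + 10.0 * np.abs(ys))
assert m_c_abs < -1000          # -y^2 + 10|y| -> -inf as |y| -> inf

# k(y, y') = (y - y')^2: m_c = inf_y (c - 1) y^2 = 0 for c > 1
m_c_sq = np.min(h + 2.0 * ys**2)
assert abs(m_c_sq) < 1e-9       # attained at y = 0
```

With the quadratic forcing bifunction the penalized infimum stabilizes at \( 0 = { \land }_{g}h \) for every \( c > 1 \), which is exactly why the choice of \( k \) matters.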
When \( {m}_{c} > - \infty \) for at least one \( c > 0 \), one has a remarkable relationship between \( m \mathrel{\text{:=}} \mathop{\sup }\limits_{{c > 0}}{m}_{c} \) and \( { \land }_{g}h \) . It shows that \( m \) does not depend on the choice of \( k \) among those ensuring \( m > - \infty \) . Proposition 1.130. One always has \( m \leq { \land }_{g}h \) . If \( m > - \infty \), equality holds. Proof. Let us first prove that for all \( c > 0 \) we have \( {m}_{c} \leq { \land }_{g}h \) . We may suppose that \( { \land }_{g}h < + \infty \) . Let \( s > r > { \land }_{g}h \) . By definition of \( { \land }_{g}h \), for all \( \delta > 0 \), there exist some \( x \in X, y \in Y \) satisfying \( d\left( {g\left( x\right), y}\right) < \delta \) and \( h\left( y\right) < r \) . Taking \( \delta > 0 \) such that \( k\left( {{y}^{\prime },{y}^{\prime \prime }}\right) < \eta \mathrel{\text{:=}} {c}^{-1}\left( {s - r}\right) \) when \( d\left( {{y}^{\prime },{y}^{\prime \prime }}\right) < \delta \), we get some \( \left( {x, y}\right) \in X \times Y \) such that \( h\left( y\right) + {ck}\left( {y, g\left( x\right) }\right) < r + {c\eta } = s \), hence \( {m}_{c} < s \) and, \( s \) being arbitrarily close to \( { \land }_{g}h \) , \( {m}_{c} \leq { \land }_{g}h \) . Thus \( m \mathrel{\text{:=}} \mathop{\sup }\limits_{{c > 0}}{m}_{c} \leq { \land }_{g}h \) . Now let us show that \( { \land }_{g}h \leq m \) when \( m > - \infty \) . We may suppose that \( m < + \infty \) . Let \( b > 0 \) be such that \( {m}_{b} > - \infty \) and let \( r > m,\delta > 0 \) be given. Let \( \alpha > 0 \) be such that \( d\left( {y,{y}^{\prime }}\right) < \delta \) whenever \( y,{y}^{\prime } \in Y \) satisfy \( k\left( {y,{y}^{\prime }}\right) < \alpha \) . Now we pick \( c > b \) large enough that \( \left( {c - b}\right) \alpha \geq r - {m}_{b} \) . 
Since \( r > m \geq {m}_{c} \), we can find \( \left( {x, y}\right) \in X \times Y \) such that \( h\left( y\right) + {ck}\left( {g\left( x\right), y}\right) < r \) . Since \( {m}_{b} \leq h\left( y\right) + {bk}\left( {g\left( x\right), y}\right) \), we have \( \left( {c - b}\right) k\left( {g\left( x\right), y}\right) < \) \( r - {m}_{b} \), hence \( k\left( {g\left( x\right), y}\right) < \alpha \) and \( d\left( {g\left( x\right), y}\right) < \delta \) . Thus \[ \inf \left\{ {h\left( {y}^{\prime }\right) : {y}^{\prime } \in Y, d\left( {{y}^{\prime }, g\left( X\right) }\right) < \delta }\right\} \leq h\left( y\right) < r. \] Taking the supremum over \( \delta > 0 \), we get \( { \land }_{g}h \leq r \), hence \( { \land }_{g}h \leq m \) . A similar result holds for a sum. We leave the proof as an exercise. This time, given a family \( \left( {{f}_{1},\ldots ,{f}_{k}}\right) \) of functions on \( X \) and a forcing bifunction \( {k}_{X} : X \times X \rightarrow \) \( {\overline{\mathbb{R}}}_{ + } \), we set \( m \mathrel{\text{:=}} \mathop{\sup }\limits_{{c > 0}}{m}_{c} \) with \[ {m}_{c} \mathrel{\text{:=}} \inf \left\{ {{f}_{1}\left( {x}_{1}\right) + \cdots + {f}_{k}\left( {x}_{k}\right) + c\mathop{\sum }\limits_{{i, j = 1}}^{k}{k}_{X}\left( {{x}_{i},{x}_{j}}\right) : \left( {{x}_{1},\ldots ,{x}_{k}}\right) \in {X}^{k}}\right\} . \] Proposition 1.131. One always has \( m \leq \land \left( {{f}_{1},\ldots ,{f}_{k}}\right) \) . If the functions \( {f}_{i} \) are bounded below, or, more generally, if \( m > - \infty \), equality holds. Penalization methods are not limited to convergence of values. They also bear on convergence of approximate minimizers, as we are going to show for the minimization of a sum of functions and then for the minimization of a composite function. 
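Proposition 1.130 can also be illustrated numerically: for a concrete choice of \( g, h \) and \( k \mathrel{\text{:=}} {d}^{2} \), the penalized infima \( {m}_{c} \) increase toward the stabilized infimum \( { \land }_{g}h \) as \( c \rightarrow \infty \). A sketch on a grid; the functions below are illustrative choices, not from the text (with \( g\left( x\right) = {x}^{2} \) and \( h\left( y\right) = y \), one has \( { \land }_{g}h = 0 \) and, exactly, \( {m}_{c} = -1/\left( {4c}\right) \)):

```python
import numpy as np

# Illustrative data: X = Y = R, g(x) = x^2, h(y) = y, k(y, y') = (y - y')^2.
# Then g(X) = [0, +inf) and the stabilized infimum of h over g(X) is 0.
xs = np.arange(-1.0, 1.0005, 0.01)
ys = np.arange(-2.0, 2.0005, 0.0005)
X, Y = np.meshgrid(xs, ys)

def m_c(c):
    """Penalized infimum m_c = inf_{x,y} h(y) + c*k(g(x), y), on the grid."""
    return float(np.min(Y + c * (X**2 - Y) ** 2))

m1, m10, m100 = m_c(1.0), m_c(10.0), m_c(100.0)
# Exact values are -1/(4c): the penalized infima increase toward
# the stabilized infimum 0 as c grows.
assert m1 < m10 < m100 <= 0.0
assert m100 > -0.01
```

The monotone convergence \( {m}_{c} \nearrow m = { \land }_{g}h \) is exactly the content of Proposition 1.130 in this bounded-below situation.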
For the sake of simplicity, we slightly change the notation of Proposition 1.131, considering two functions \( f, g : X \rightarrow {\mathbb{R}}_{\infty } \) and setting, for \( c \in {\mathbb{R}}_{ + } \) , \[ {p}_{c}\left( {x, y}\right) \mathrel{\text{:=}} f\left( x\right) + g\left( y\right) + c{k}_{X}\left( {x, y}\right) . \] Proposition 1.132. Let \( f, g \) be bounded below, or more generally, let them be such that \( {m}_{b} \mathrel{\text{:=}} \inf {p}_{b}\left( {X \times X}\right) > - \infty \)
## 1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space, Definition 2.3.1
Definition 2.3.1. The function \( F \in {\mathbf{H}}^{2} \) is an outer function if \( F \) is a cyclic vector for the unilateral shift. That is, \( F \) is an outer function if \[ \mathop{\bigvee }\limits_{{k = 0}}^{\infty }\left\{ {{U}^{k}F}\right\} = {\mathbf{H}}^{2} \] Theorem 2.3.2. If \( F \) is an outer function, then \( F \) has no zeros in \( \mathbb{D} \) . Proof. If \( F\left( {z}_{0}\right) = 0 \), then \( \left( {{U}^{n}F}\right) \left( {z}_{0}\right) = {z}_{0}^{n}F\left( {z}_{0}\right) = 0 \) for all \( n \) . Since the limit of a sequence of functions in \( {\mathbf{H}}^{2} \) that all vanish at \( {z}_{0} \) must also vanish at \( {z}_{0} \) (Theorem 1.1.9), \[ \mathop{\bigvee }\limits_{{k = 0}}^{\infty }\left\{ {{U}^{k}F}\right\} \] cannot be all of \( {\mathbf{H}}^{2} \) . Hence there is no \( {z}_{0} \in \mathbb{D} \) with \( F\left( {z}_{0}\right) = 0 \) . Recall that a function analytic on \( \mathbb{D} \) is identically zero if it vanishes on a set that has a limit point in \( \mathbb{D} \) . The next theorem is an analogous result for boundary values of functions in \( {\mathbf{H}}^{2} \) . Theorem 2.3.3 (The F. and M. Riesz Theorem). If \( f \in {\mathbf{H}}^{2} \) and the set \[ \left\{ {{e}^{i\theta } : \widetilde{f}\left( {e}^{i\theta }\right) = 0}\right\} \] has positive measure, then \( f \) is identically 0 on \( \mathbb{D} \) . Proof. Let \( E = \left\{ {{e}^{i\theta } : \widetilde{f}\left( {e}^{i\theta }\right) = 0}\right\} \) and let \[ \mathcal{M} = \mathop{\bigvee }\limits_{{k = 0}}^{\infty }\left\{ {{U}^{k}\widetilde{f}}\right\} = \mathop{\bigvee }\limits_{{k = 0}}^{\infty }\left\{ {{e}^{ik\theta }\widetilde{f}}\right\} \] Then every function \( \widetilde{g} \in \mathcal{M} \) vanishes on \( E \), since all functions \( {e}^{ik\theta }\widetilde{f} \) do. 
If \( \widetilde{f} \) is not identically zero, it follows from Beurling's theorem (Theorem 2.2.12) that \( \mathcal{M} = \widetilde{\phi }{\widetilde{\mathbf{H}}}^{2} \) for some inner function \( \phi \) . In particular, this implies that \( \widetilde{\phi } \in \mathcal{M} \) , so \( \widetilde{\phi } \) vanishes on \( E \) . But \( \left| {\widetilde{\phi }\left( {e}^{i\theta }\right) }\right| = 1 \) a.e. This contradicts the hypothesis that \( E \) has positive measure, thus \( \widetilde{f} \), and hence \( f \), must be identically zero. Another beautiful result that follows from Beurling's theorem is the following factorization of functions in \( {\mathbf{H}}^{2} \) . Theorem 2.3.4. If \( f \) is a function in \( {\mathbf{H}}^{2} \) that is not identically zero, then \( f = {\phi F} \), where \( \phi \) is an inner function and \( F \) is an outer function. This factorization is unique up to constant factors. Proof. Let \( f \in {\mathbf{H}}^{2} \) and consider \( \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{U}^{n}f}\right\} \) . If this span is \( {\mathbf{H}}^{2} \), then \( f \) is outer by definition, and we can take \( \phi \) to be the constant function 1 and \( F = f \) to obtain the desired conclusion. If \( \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{U}^{n}f}\right\} \neq {\mathbf{H}}^{2} \), then, by Beurling’s theorem (Corollary 2.2.12), there must exist a nonconstant inner function \( \phi \) with \( \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{U}^{n}f}\right\} = \phi {\mathbf{H}}^{2} \) . Since \( f \) is in \( \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{U}^{n}f}\right\} = \phi {\mathbf{H}}^{2} \), there exists a function \( F \) in \( {\mathbf{H}}^{2} \) with \( f = {\phi F} \) . We shall show that \( F \) is outer. The invariant subspace \( \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{U}^{n}F}\right\} \) equals \( \psi {\mathbf{H}}^{2} \) for some inner function \( \psi \) . 
Then, since \( f = {\phi F} \), it follows that \( {U}^{n}f = {U}^{n}\left( {\phi F}\right) = \phi {U}^{n}F \) for every positive integer \( n \), from which we can conclude, by taking linear spans, that \( \phi {\mathbf{H}}^{2} = {\phi \psi }{\mathbf{H}}^{2} \) . Theorem 2.2.8 now implies that \( \phi \) and \( {\phi \psi } \) are constant multiples of each other. Hence \( \psi \) must be a constant function. Therefore \( \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{U}^{n}F}\right\} = {\mathbf{H}}^{2} \), so \( F \) is an outer function. Note that if \( f = {\phi F} \) with \( \phi \) inner and \( F \) outer, then \( \mathop{\bigvee }\limits_{{n = 0}}^{\infty }\left\{ {{U}^{n}f}\right\} = \phi {\mathbf{H}}^{2} \) . Thus uniqueness of the factorization follows from the corresponding assertion in Theorem 2.2.8. Definition 2.3.5. For \( f \in {\mathbf{H}}^{2} \), if \( f = {\phi F} \) with \( \phi \) inner and \( F \) outer, we call \( \phi \) the inner part of \( f \) and \( F \) the outer part of \( f \) . Theorem 2.3.6. The zeros of an \( {\mathbf{H}}^{2} \) function are precisely the zeros of its inner part. Proof. This follows immediately from Theorem 2.3.2 and Theorem 2.3.4. To understand the structure of Lat \( U \) as a lattice requires being able to determine when \( {\phi }_{1}{\mathbf{H}}^{2} \) is contained in \( {\phi }_{2}{\mathbf{H}}^{2} \) for inner functions \( {\phi }_{1} \) and \( {\phi }_{2} \) . This will be accomplished by analysis of a factorization of inner functions. ## 2.4 Blaschke Products Some of the invariant subspaces of the unilateral shift are those consisting of the functions vanishing at certain subsets of \( \mathbb{D} \) . The simplest such subspaces are those of the form, for \( {z}_{0} \in \mathbb{D} \) , \[ {\mathcal{M}}_{{z}_{0}} = \left\{ {f \in {\mathbf{H}}^{2} : f\left( {z}_{0}\right) = 0}\right\} . \] The subspace \( {\mathcal{M}}_{{z}_{0}} \) is an invariant subspace for \( U \) . 
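Before identifying the generator of \( {\mathcal{M}}_{{z}_{0}} \) abstractly, one can check numerically that the Möbius factor \( \psi \left( z\right) = \left( {{z}_{0} - z}\right) /\left( {1 - \overline{{z}_{0}}z}\right) \), which will turn out to generate \( {\mathcal{M}}_{{z}_{0}} \), is unimodular on the unit circle and vanishes at \( {z}_{0} \). An illustrative sketch (the value of \( {z}_{0} \) is an arbitrary choice):

```python
import numpy as np

z0 = 0.3 + 0.4j                      # an arbitrary point of the open unit disk
psi = lambda z: (z0 - z) / (1 - np.conj(z0) * z)

theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
circle = np.exp(1j * theta)

assert np.allclose(np.abs(psi(circle)), 1.0)  # |psi| = 1 on the unit circle
assert abs(psi(z0)) == 0.0                    # psi vanishes exactly at z0
assert np.all(np.abs(psi(0.9 * circle)) < 1)  # |psi| < 1 inside the disk
```
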
Therefore Beurling’s theorem (Corollary 2.2.12) implies that there is an inner function \( \psi \) such that \( {\mathcal{M}}_{{z}_{0}} = \psi {\mathbf{H}}^{2} \) . Theorem 2.4.1. For each \( {z}_{0} \in \mathbb{D} \), the function \[ \psi \left( z\right) = \frac{{z}_{0} - z}{1 - \overline{{z}_{0}}z} \] is an inner function and \( {\mathcal{M}}_{{z}_{0}} = \left\{ {f \in {\mathbf{H}}^{2} : f\left( {z}_{0}\right) = 0}\right\} = \psi {\mathbf{H}}^{2} \) . Proof. The function \( \psi \) is clearly in \( {\mathbf{H}}^{\infty } \) . Moreover, it is continuous on the closure of \( \mathbb{D} \) . Therefore, to show that \( \psi \) is inner, it suffices to show that \( \left| {\psi \left( z\right) }\right| = 1 \) when \( \left| z\right| = 1 \) . For this, note that \( \left| z\right| = 1 \) implies \( z\bar{z} = 1 \), so that \[ \left| \frac{{z}_{0} - z}{1 - \overline{{z}_{0}}z}\right| = \left| \frac{{z}_{0} - z}{z\left( {\bar{z} - \overline{{z}_{0}}}\right) }\right| = \frac{1}{\left| z\right| }\left| \frac{{z}_{0} - z}{\bar{z} - \overline{{z}_{0}}}\right| = 1. \] To show that \( {\mathcal{M}}_{{z}_{0}} = \psi {\mathbf{H}}^{2} \), first note that \( \psi \left( {z}_{0}\right) f\left( {z}_{0}\right) = 0 \) for all \( f \in {\mathbf{H}}^{2} \) , so \( \psi {\mathbf{H}}^{2} \subset {\mathcal{M}}_{{z}_{0}} \) . For the other inclusion, note that \( f\left( {z}_{0}\right) = 0 \) implies that \( f\left( z\right) = \psi \left( z\right) g\left( z\right) \) for some function \( g \) analytic in \( \mathbb{D} \) . Let \[ \varepsilon = \inf \left\{ {\left| {\psi \left( z\right) }\right| : z \in \mathbb{D},\;\left| z\right| \geq \frac{1 + \left| {z}_{0}\right| }{2}}\right\} . \] Clearly \( \varepsilon > 0 \) .
Thus \[ \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| f\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta } \geq {\varepsilon }^{2}\frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| g\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta } \] for \( r \geq \frac{1 + \left| {z}_{0}\right| }{2} \) . Therefore \[ \mathop{\sup }\limits_{{0 < r < 1}}\frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| g\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta } \leq \frac{1}{{\varepsilon }^{2}}\mathop{\sup }\limits_{{0 < r < 1}}\frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| f\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta }. \] It follows from Theorem 1.1.12 that \( g \in {\mathbf{H}}^{2} \) . Hence \( f = {\psi g} \) is in \( \psi {\mathbf{H}}^{2} \) . A similar result holds for subspaces of \( {\mathbf{H}}^{2} \) vanishing on any finite subset of \( \mathbb{D} \) . Theorem 2.4.2. If \( {z}_{1},{z}_{2},\ldots ,{z}_{n} \in \mathbb{D} \) , \[ \mathcal{M} = \left\{ {f \in {\mathbf{H}}^{2} : f\left( {z}_{1}\right) = f\left( {z}_{2}\right) = \cdots = f\left( {z}_{n}\right) = 0}\right\} , \] and \[ \psi \left( z\right) = \mathop{\prod }\limits_{{k = 1}}^{n}\frac{{z}_{k} - z}{1 - \overline{{z}_{k}}z} \] then \( \psi \) is an inner function and \( \mathcal{M} = \psi {\mathbf{H}}^{2} \) . Proof. It is obvious that a product of a finite number of inner functions is inner. Thus Theorem 2.4.1 above implies that \( \psi \) is inner. It is clear that \( \psi {\mathbf{H}}^{2} \) is contained in \( \mathcal{M} \) . The proof of the opposite inclusion is very similar to the proof of the case of a single factor established in Theorem 2.4.1 above. That is, if \( f\left( {z}_{1}\right) = f\left( {z}_{2}\right) = \cdots = f\left( {z}_{n}\right) = 0 \), then \( f = {\psi g} \) for some function \( g \) analytic on \( \mathbb{D} \) . 
It follows as in the previous proof (take \( r \) greater than the maximum of \( \frac{1 + \left| {z}_{j}\right| }{2} \) ) that \( g \) is in \( {\mathbf{H}}^{2} \), so \( f \in \psi {\mathbf{H}}^{2} \) . It is important to be able to factor out the zeros of inner functions. If an inner function has only a finite number of zeros in \( \mathbb{D} \), such a factorization is implicit in the preceding theorem, as we now show. (We will subsequently consider the case in which an inner function has an infinite number of zeros.) It is customary to distinguish any possible zero at 0 . Corollary 2.4.3. Suppose that the inner function \( \phi \) has a zero of
## 1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space, Definition 3.1.5
Definition 3.1.5. For \( \phi \in {\mathbf{L}}^{\infty } \), the essential range of \( \phi \) is defined to be \[ \text{ess}\operatorname{ran}\phi = \left\{ {\lambda : m\left\{ {{e}^{i\theta } : \left| {\phi \left( {e}^{i\theta }\right) - \lambda }\right| < \varepsilon }\right\} > 0,\text{ for all }\varepsilon > 0}\right\} \text{,} \] where \( m \) is the (normalized) Lebesgue measure. Note that the essential norm of an \( {\mathbf{L}}^{\infty } \) function \( \phi \) (Definition 1.1.23) is equal to \[ \sup \{ \left| \lambda \right| : \lambda \in \operatorname{essran}\phi \} \] Theorem 3.1.6. If \( \phi \in {\mathbf{L}}^{\infty } \), then \( \sigma \left( {M}_{\phi }\right) = \Pi \left( {M}_{\phi }\right) = \operatorname{ess}\operatorname{ran}\phi \) . Proof. We prove this in two steps. We first show that ess \( \operatorname{ran}\phi \subset \Pi \left( {M}_{\phi }\right) \) , and then show that \( \sigma \left( {M}_{\phi }\right) \subset \) ess ran \( \phi \) . These two assertions together imply the theorem. Let \( \lambda \in \operatorname{essran}\phi \) . For each natural number \( n \), define \[ {E}_{n} = \left\{ {{e}^{i\theta } : \left| {\phi \left( {e}^{i\theta }\right) - \lambda }\right| < \frac{1}{n}}\right\} \] and let \( {\chi }_{n} \) be the characteristic function of \( {E}_{n} \) . Notice that \( m\left( {E}_{n}\right) > 0 \) . 
Then \[ {\begin{Vmatrix}\left( {M}_{\phi } - \lambda \right) {\chi }_{n}\end{Vmatrix}}^{2} = \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| \left( \phi \left( {e}^{i\theta }\right) - \lambda \right) {\chi }_{n}\left( {e}^{i\theta }\right) \right| }^{2}{d\theta } \] \[ = \frac{1}{2\pi }{\int }_{{E}_{n}}{\left| \left( \phi \left( {e}^{i\theta }\right) - \lambda \right) \right| }^{2}{d\theta } \] \[ \leq \frac{1}{{n}^{2}}m\left( {E}_{n}\right) \] Also, \[ {\begin{Vmatrix}{\chi }_{n}\end{Vmatrix}}^{2} = \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| {\chi }_{n}\left( {e}^{i\theta }\right) \right| }^{2}{d\theta } = m\left( {E}_{n}\right) \neq 0. \] Thus, if we define \( {f}_{n} = {\chi }_{n}/\begin{Vmatrix}{\chi }_{n}\end{Vmatrix} \), then \( \left\{ {f}_{n}\right\} \) is a sequence of unit vectors such that \[ \begin{Vmatrix}{\left( {{M}_{\phi } - \lambda }\right) {f}_{n}}\end{Vmatrix} \leq \frac{1}{n} \] Therefore \( \lambda \in \Pi \left( {M}_{\phi }\right) \) . Now suppose \( \lambda \notin \operatorname{ess}\operatorname{ran}\phi \) . Then there exists \( \varepsilon > 0 \) such that \[ m\left\{ {{e}^{i\theta } : \left| {\phi \left( {e}^{i\theta }\right) - \lambda }\right| < \varepsilon }\right\} = 0. \] This means that the function \( 1/\left( {\phi - \lambda }\right) \) is defined almost everywhere, and, in fact, \( 1/\left| {\phi - \lambda }\right| \leq 1/\varepsilon \) a.e. Thus \( 1/\left( {\phi - \lambda }\right) \in {\mathbf{L}}^{\infty } \) . But then the operator \( {M}_{\frac{1}{\phi - \lambda }} \) is bounded and is clearly the inverse of \( {M}_{\phi } - \lambda \) . Thus \( \lambda \notin \sigma \left( {M}_{\phi }\right) \) . ## 3.2 Basic Properties of Toeplitz Operators The Toeplitz operators are the "compressions" of the multiplication operators to the subspace \( {\widetilde{\mathbf{H}}}^{2} \), defined as follows. Definition 3.2.1. 
For each \( \phi \) in \( {\mathbf{L}}^{\infty } \), the Toeplitz operator with symbol \( \phi \) is the operator \( {T}_{\phi } \) defined by \[ {T}_{\phi }f = {P\phi f} \] for each \( f \) in \( {\widetilde{\mathbf{H}}}^{2} \), where \( P \) is the orthogonal projection of \( {\mathbf{L}}^{2} \) onto \( {\widetilde{\mathbf{H}}}^{2} \) . Theorem 3.2.2. The matrix of the Toeplitz operator with symbol \( \phi \) with respect to the basis \( {\left\{ {e}^{in\theta }\right\} }_{n = 0}^{\infty } \) of \( {\widetilde{\mathbf{H}}}^{2} \) is \[ {T}_{\phi } = \left( \begin{matrix} {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & {\phi }_{-3} & \\ {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & \ddots \\ {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & \ddots \\ {\phi }_{3} & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & \ddots \\ & \ddots & \ddots & \ddots & \ddots \end{matrix}\right) , \] where \( {\phi }_{k} \) is the \( k \) th Fourier coefficient of \( \phi \) . Proof. This can easily be computed in the same way as the corresponding result for multiplication operators (Theorem 3.1.2). Alternatively, since \( P \) is the projection onto \( {\widetilde{\mathbf{H}}}^{2} \) and \( {T}_{\phi } \) is defined on \( {\widetilde{\mathbf{H}}}^{2} \), the matrix of \( {T}_{\phi } \) is the lower right corner of the matrix of \( {M}_{\phi } \) . 
That is, the lower right corner of \[ {M}_{\phi } = \left( \begin{matrix} & & & & & & \\ & \ddots & \ddots & \ddots & & & \\ & \ddots & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & & \\ & \ddots & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & \\ & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & \\ & & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & \ddots \\ & & & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & \ddots \\ & & & & \ddots & \ddots & \ddots \\ & & & & & & \end{matrix}\right) , \] so \[ {T}_{\phi } = \left( \begin{matrix} {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & {\phi }_{-3} & \\ {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & \ddots \\ {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & \ddots \\ {\phi }_{3} & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & \ddots \\ & \ddots & \ddots & \ddots & \ddots \end{matrix}\right) . \] Thus Toeplitz operators have singly infinite Toeplitz matrices. We will show that every singly infinite Toeplitz matrix that represents a bounded operator is the matrix of a Toeplitz operator (Theorem 3.2.6). The most tractable Toeplitz operators are the analytic ones. Definition 3.2.3. The Toeplitz operator \( {T}_{\phi } \) is an analytic Toeplitz operator if \( \phi \) is in \( {\widetilde{\mathbf{H}}}^{\infty } \) . Note that if \( \phi \) is in \( {\widetilde{\mathbf{H}}}^{\infty } \), then \( {T}_{\phi }f = {P\phi f} = {\phi f} \) for all \( f \in {\widetilde{\mathbf{H}}}^{2} \) . It is easily seen that the standard matrix representations of analytic Toeplitz operators are lower triangular matrices. Theorem 3.2.4. 
If \( {T}_{\phi } \) is an analytic Toeplitz operator, then the matrix of \( {T}_{\phi } \) with respect to the basis \( {\left\{ {e}^{in\theta }\right\} }_{n = 0}^{\infty } \) is \[ {T}_{\phi } = \left( \begin{matrix} {\phi }_{0} & 0 & 0 & 0 & 0 & \\ {\phi }_{1} & {\phi }_{0} & 0 & 0 & 0 & \ddots \\ {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & 0 & 0 & \ddots \\ {\phi }_{3} & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & 0 & \ddots \\ & \ddots & \ddots & \ddots & \ddots & \ddots \end{matrix}\right) , \] where \( \phi \left( {e}^{i\theta }\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }{\phi }_{k}{e}^{ik\theta } \) . Proof. The Fourier coefficients of \( \phi \) with negative indices are 0 since \( \phi \) is in \( {\widetilde{\mathbf{H}}}^{2} \), so this follows immediately from Theorem 3.2.2. The following theorem is analogous to Theorem 2.2.5. Theorem 3.2.5. The commutant of the unilateral shift acting on \( {\widetilde{\mathbf{H}}}^{2} \) is \[ \left\{ {{T}_{\phi } : \phi \in {\widetilde{\mathbf{H}}}^{\infty }}\right\} . \] Proof. First, every analytic Toeplitz operator commutes with the shift. This follows from the fact that the shift is \( {M}_{{e}^{i\theta }} \) and, for every \( f \in {\widetilde{\mathbf{H}}}^{2} \) , \[ {T}_{\phi }{M}_{{e}^{i\theta }}f = \phi {e}^{i\theta }f = {e}^{i\theta }{\phi f} = {M}_{{e}^{i\theta }}{T}_{\phi }f. \] The proof of the converse is very similar to the corresponding proof of Theorem 2.2.5. Suppose that \( {AU} = {UA} \) . Let \( \phi = A{e}_{0} \) . Then \( \phi \in {\widetilde{\mathbf{H}}}^{2} \) and, since \( {AU} = {UA} \), we have, for each positive integer \( n \) , \[ A{e}_{n} = A{U}^{n}{e}_{0} = {U}^{n}A{e}_{0} = {U}^{n}\phi = {e}^{in\theta }\phi . \] Thus, by linearity, \( {Ap} = {\phi p} \) for every polynomial \( p \in {\widetilde{\mathbf{H}}}^{2} \) .
For an arbitrary \( f \in {\widetilde{\mathbf{H}}}^{2} \), choose a sequence of polynomials \( \left\{ {p}_{n}\right\} \) such that \( \left\{ {p}_{n}\right\} \rightarrow f \) in \( {\widetilde{\mathbf{H}}}^{2} \) . Then, by continuity and the fact that \( A{p}_{n} = \phi {p}_{n} \), it follows that \( \left\{ {\phi {p}_{n}}\right\} \rightarrow {Af} \) . Also, there exists a subsequence \( \left\{ {p}_{{n}_{j}}\right\} \) converging almost everywhere to \( f \), since every sequence converging in \( {\mathbf{L}}^{2} \) has a subsequence converging almost everywhere [47, p. 68]. Therefore \( \left\{ {\phi {p}_{{n}_{j}}}\right\} \rightarrow {\phi f} \) almost everywhere, and thus \( {Af} = {\phi f} \) . It remains to be shown that \( \phi \) is essentially bounded. If \( A = 0 \) the result is trivial, so we may assume that \( \parallel A\parallel \neq 0 \) . Define the measurable function \( \psi \) by \( \psi = \phi /\parallel A\parallel \) . Note that \( \psi \) is in \( {\widetilde{\mathbf{H}}}^{2} \) . Then \[ {\psi f} = \frac{\phi f}{\parallel A\parallel } = \frac{Af}{\parallel A\parallel } \] for every \( f \) in \( {\widetilde{\mathbf{H}}}^{2} \) . It follows that \( \parallel {\psi f}\parallel \leq \parallel f\parallel \) for all \( f \) in \( {\widetilde{\mathbf{H}}}^{2} \) . Taking \( f \) to be the constant function 1, together with a trivial induction, yields \( \begin{Vmatrix}{\psi }^{n}\end{Vmatrix} \leq 1 \) for every natural number \( n \) . Suppose that there is a positive \( \varepsilon \) such that the set \( E \) defined by \[ E = \left\{ {{e}^{i\theta } : \left| {\psi \left( {e}^{i\theta }\right) }\right| \geq 1 + \varepsilon }\right\} \] has positive measure. Then \[ {\begin{Vmatrix}{\psi }^{n}\end{Vmatrix}}^{2} = \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| \psi \left( {e}^{i\theta }\right) \right| }^{2n}{d\theta } \]
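Finite sections make the Toeplitz matrices of Theorems 3.2.2 and 3.2.4 easy to experiment with. The sketch below (the helper name `toeplitz_truncation` and the sample symbol are ours, not the book's) builds the \( n \times n \) section with \( \left( {i, j}\right) \) entry \( {\phi }_{i - j} \) and checks that an analytic symbol, one with \( {\phi }_{k} = 0 \) for \( k < 0 \), yields a lower triangular matrix, as Theorem 3.2.4 predicts.

```python
# Sketch: finite n x n sections of a Toeplitz matrix, entry (i, j) = phi_{i-j}.
# Helper name and example symbol are illustrative, not from the book.

def toeplitz_truncation(phi_coeffs, n):
    """n x n matrix whose (i, j) entry is phi_{i-j} (0 if absent)."""
    return [[phi_coeffs.get(i - j, 0) for j in range(n)] for i in range(n)]

# An analytic symbol: phi_k = 0 for k < 0, e.g. phi(e^{i theta}) = 1 + 2 e^{i theta}.
analytic = {0: 1, 1: 2}
T = toeplitz_truncation(analytic, 4)

# Constant along diagonals (the Toeplitz property of Theorem 3.2.2) ...
assert all(T[i][j] == T[i + 1][j + 1] for i in range(3) for j in range(3))
# ... and lower triangular, as Theorem 3.2.4 predicts for analytic symbols.
assert all(T[i][j] == 0 for i in range(4) for j in range(4) if j > i)
```

A symbol with nonzero negative-index coefficients would populate the upper triangle instead, matching the two-sided picture of \( {T}_{\phi } \) above.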
## 1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds: Definition 0.6.2
Definition 0.6.2 Two valuations \( v,{v}^{\prime } \) on \( K \) are equivalent if there exists \( a \in {\mathbb{R}}^{ + } \) such that \( {v}^{\prime }\left( x\right) = {\left\lbrack v\left( x\right) \right\rbrack }^{a} \) for \( x \in K \) . An alternative formulation of this notion of equivalence is that the valuations define the same topology on \( K \) . ## Definition 0.6.3 - If the valuation \( v \) satisfies in addition (iv) \( v\left( {x + y}\right) \leq \max \{ v\left( x\right), v\left( y\right) \} \) for all \( x, y \in K \) , then \( v \) is called a non-Archimedean valuation. - If the valuation \( v \) is not equivalent to one which satisfies (iv), then \( v \) is Archimedean. Non-Archimedean valuations can be characterised among valuations as those for which \( \left\{ {v\left( {n \cdot {1}_{K}}\right) : n \in \mathbb{Z}}\right\} \) is a bounded set (see Exercise 0.6, No. 1). Lemma 0.6.4 Let \( v \) be a non-Archimedean valuation on \( K \) . Let
- \( R\left( v\right) = \{ \alpha \in K \mid v\left( \alpha \right) \leq 1\} \) ,
- \( \mathcal{P}\left( v\right) = \{ \alpha \in K \mid v\left( \alpha \right) < 1\} \) .
Then \( R\left( v\right) \) is a local ring whose unique maximal ideal is \( \mathcal{P}\left( v\right) \) and whose field of fractions is \( K \) . ## Definition 0.6.5 The ring \( R\left( v\right) \) is called the valuation ring of \( K \) (with respect to \( v \) ). Now let \( K = k \) be a number field. All the valuations on \( k \) can be determined as we now indicate. Let \( \sigma : k \rightarrow \mathbb{C} \) be any one of the Galois embeddings of \( k \) . Define \( {v}_{\sigma } \) by \( {v}_{\sigma }\left( x\right) = \left| {\sigma \left( x\right) }\right| \), where \( \left| \cdot \right| \) is the usual absolute value.
It is not difficult to see that these are all Archimedean valuations and that \( {v}_{\sigma } \) and \( {v}_{{\sigma }^{\prime }} \) are equivalent if and only if \( \left( {\sigma ,{\sigma }^{\prime }}\right) \) is a complex conjugate pair of embeddings. Out of an equivalence class of valuations, it is usual to select a normalised one. For real \( \sigma \), this is just \( {v}_{\sigma } \) as defined above, but for a complex embedding \( \sigma \), choose \( {v}_{\sigma }\left( x\right) = {\left| \sigma \left( x\right) \right| }^{2} \) . Now let \( \mathcal{P} \) be any prime ideal in \( {R}_{k} \) and let \( c \) be a real number such that \( 0 < c < 1 \) . For \( x \in {R}_{k} \smallsetminus \{ 0\} \), define \( {v}_{\mathcal{P}} \) (and \( {n}_{\mathcal{P}} \) ) by \( {v}_{\mathcal{P}}\left( x\right) = {c}^{{n}_{\mathcal{P}}\left( x\right) } \), where \( {n}_{\mathcal{P}}\left( x\right) \) is the largest integer \( m \) such that \( x \in {\mathcal{P}}^{m} \) or, alternatively, such that \( {\mathcal{P}}^{m} \mid x{R}_{k} \) . It is straightforward to show that \( {v}_{\mathcal{P}} \) satisfies (i),(ii) and (iv). Since \( k \) is the field of fractions of \( {R}_{k} \), the definition extends to \( {k}^{ * } \) by \( {v}_{\mathcal{P}}\left( {x/y}\right) = {v}_{\mathcal{P}}\left( x\right) /{v}_{\mathcal{P}}\left( y\right) \) . This is well-defined and gives a non-Archimedean valuation on \( k \) . 
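For \( k = \mathbb{Q} \) and \( \mathcal{P} = p\mathbb{Z} \) this construction is completely explicit, and the multiplicativity axiom (ii) and the ultrametric axiom (iv) can be checked mechanically with exact rational arithmetic. A minimal sketch (the helper names `n_p` and `v_p` are ours), using the normalised choice \( c = 1/N\left( \mathcal{P}\right) = 1/p \):

```python
# Sketch: the p-adic valuation v_P on Q for P = pZ, normalised with c = 1/p.
from fractions import Fraction

def n_p(x, p):
    """n_p(a/b) = n_p(a) - n_p(b) for a nonzero rational x = a/b."""
    x = Fraction(x)
    a, b, m = x.numerator, x.denominator, 0
    while a % p == 0:
        a //= p
        m += 1
    while b % p == 0:
        b //= p
        m -= 1
    return m

def v_p(x, p):
    """Normalised p-adic valuation: v_p(x) = p^{-n_p(x)}."""
    return Fraction(1, p) ** n_p(x, p)

p = 3
x, y = Fraction(5, 9), Fraction(7, 2)
# (ii) multiplicativity and (iv) the ultrametric inequality:
assert v_p(x * y, p) == v_p(x, p) * v_p(y, p)
assert v_p(x + y, p) <= max(v_p(x, p), v_p(y, p))
```

Note that `v_p` is indeed well defined on fractions: multiplying numerator and denominator by the same integer shifts both counts in `n_p` equally.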
Alternatively, the functions \( {n}_{\mathcal{P}} \) can be defined by using the unique expression of the fractional ideal \( x{R}_{k} \) as a product of prime ideals: \[ x{R}_{k} = \mathop{\prod }\limits_{\mathcal{P}}{\mathcal{P}}^{{n}_{\mathcal{P}}\left( x\right) }. \] Changing the value of \( c \) gives an equivalent valuation, and a normalised valuation is frequently selected by the choice (recall Definition 0.3.5) \( c = 1/N\left( \mathcal{P}\right) \), so that \[ {v}_{\mathcal{P}}\left( x\right) = N{\left( \mathcal{P}\right) }^{-{n}_{\mathcal{P}}\left( x\right) }. \] On a number field \( k \), all the valuations, up to equivalence, have been described in view of the following crucial result: Theorem 0.6.6 Let \( k \) be a number field. Any non-Archimedean valuation on \( k \) is equivalent to a \( \mathcal{P} \) -adic valuation \( {v}_{\mathcal{P}} \) for some prime ideal \( \mathcal{P} \) in \( {R}_{k} \) . Any Archimedean valuation on \( k \) is equivalent to a valuation \( {v}_{\sigma } \) as described earlier for a Galois monomorphism \( \sigma \) of \( k \) . For prime ideals \( {\mathcal{P}}_{1} \neq {\mathcal{P}}_{2} \), the valuations \( {v}_{{\mathcal{P}}_{1}},{v}_{{\mathcal{P}}_{2}} \) cannot be equivalent. Recall that \( {\mathcal{P}}_{1} + {\mathcal{P}}_{2} = {R}_{k} \), so that \( 1 = x + y \) with \( x \in {\mathcal{P}}_{1} \) and \( y \in {\mathcal{P}}_{2} \) . Thus \( {n}_{{\mathcal{P}}_{1}}\left( x\right) \geq 1 \) . If \( {n}_{{\mathcal{P}}_{2}}\left( x\right) \geq 1 \), then both \( {n}_{{\mathcal{P}}_{2}}\left( {1 - y}\right) \geq 1 \) and \( {n}_{{\mathcal{P}}_{2}}\left( y\right) \geq 1 \), so that \( 1 = \left( {1 - y}\right) + y \in {\mathcal{P}}_{2} \), which is impossible. Thus \( {n}_{{\mathcal{P}}_{2}}\left( x\right) = 0 \), and \( {v}_{{\mathcal{P}}_{1}},{v}_{{\mathcal{P}}_{2}} \) cannot be equivalent. An equivalence class of valuations is called a place, a prime or a prime spot of \( k \) .
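For \( k = \mathbb{Q} \), the factorization of the fractional ideal \( x{R}_{k} \) is simply the prime factorization of the rational number \( x \), with integer exponents \( {n}_{p}\left( x\right) \) that may be negative. A small illustrative sketch (the `factor_exponents` helper is hypothetical and uses naive trial division, for illustration only):

```python
# Sketch: exponents n_p(x) in the factorization x R_k = prod P^{n_P(x)}, for k = Q.
from fractions import Fraction

def factor_exponents(x):
    """Exponents n_p(x) in |x| = prod p^{n_p(x)}, for a nonzero rational x."""
    x = Fraction(x)
    exps = {}
    for sign, n in ((1, x.numerator), (-1, x.denominator)):
        n = abs(n)
        p = 2
        while n > 1:
            while n % p == 0:
                exps[p] = exps.get(p, 0) + sign
                n //= p
            p += 1
    return {p: e for p, e in exps.items() if e}

x = Fraction(360, 7)                 # 2^3 * 3^2 * 5 * 7^{-1}
exps = factor_exponents(x)
assert exps == {2: 3, 3: 2, 5: 1, 7: -1}

# Reconstruct |x| from the exponents, mirroring x R_k = prod P^{n_P(x)}.
prod = Fraction(1)
for p, e in exps.items():
    prod *= Fraction(p) ** e
assert prod == abs(x)
```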
There are \( {r}_{1} + {r}_{2} \) Archimedean places on \( k \) and these are referred to as the infinite places or infinite primes of \( k \) (recall \( §{0.1} \) ). The classes of non-Archimedean valuations are known as the finite places or finite primes and these are in one-to-one correspondence with the prime ideals of \( {R}_{k} \) . To avoid confusion, we will use \( p \) to denote any prime, finite or infinite, in \( k \) , but we will reserve \( \mathcal{P} \) for a non-Archimedean prime or prime ideal in \( k \) . For the valuations \( {v}_{\mathcal{P}} \), the image of \( {k}^{ * } \) under \( {v}_{\mathcal{P}} \) is a discrete subgroup of the positive reals under multiplication. It is isomorphic to the additive group \( {n}_{\mathcal{P}}\left( {k}^{ * }\right) \), which is \( \mathbb{Z} \) . Let \( \pi \in {R}_{k} \) be such that \( {n}_{\mathcal{P}}\left( \pi \right) = 1 \) . Such an element is called a uniformiser and will be used heavily in the next section. Then the unique maximal ideal \( \mathcal{P}\left( {v}_{\mathcal{P}}\right) = {\pi R}\left( {v}_{\mathcal{P}}\right) \) and the local ring \( R\left( {v}_{\mathcal{P}}\right) \) will be a principal ideal domain, all of whose ideals are of the form \( {\pi }^{n}R\left( {v}_{\mathcal{P}}\right) \) . Since \( k \) is the field of fractions of \( {R}_{k} \), the local ring \( R\left( {v}_{\mathcal{P}}\right) \) can be identified with the localisation of \( {R}_{k} \) at the multiplicative set \( {R}_{k} \smallsetminus \mathcal{P} \) , and \( k \) is also the field of fractions of \( R\left( {v}_{\mathcal{P}}\right) \) . The unique maximal ideal in \( R\left( {v}_{\mathcal{P}}\right) \) is \( \mathcal{P}R\left( {v}_{\mathcal{P}}\right) = {\pi R}\left( {v}_{\mathcal{P}}\right) \) and the quotient field \( R\left( {v}_{\mathcal{P}}\right) /{\pi R}\left( {v}_{\mathcal{P}}\right) \), called the residue field, coincides with \( {R}_{k}/\mathcal{P} \) . 
A principal ideal domain with only one maximal ideal is known as a discrete valuation ring, so these rings \( R\left( {v}_{\mathcal{P}}\right) \) are all discrete valuation rings. More generally, these can be used to give an alternative characterisation of Dedekind domains (see Definition 0.3.1). Theorem 0.6.7 Let \( D \) be an integral domain. The following are equivalent: 1. \( D \) is a Dedekind domain. 2. \( D \) is Noetherian and the localisation of \( D \) at each non-zero prime ideal is a discrete valuation ring. Example 0.6.8 Let \( k = \mathbb{Q} \) . Then there is precisely one infinite place represented by the usual absolute value \( v\left( x\right) = \left| x\right| \) . The finite places are in one-to-one correspondence with the rational primes \( p \) of \( \mathbb{Z} \) . For a fixed prime \( p \), the corresponding finite place can be represented by the normalised \( p \) -adic valuation. Thus for \( x \in \mathbb{Z},{v}_{p}\left( x\right) = {p}^{-{n}_{p}\left( x\right) } \), where \( {n}_{p}\left( x\right) \) is the highest power of \( p \) dividing \( x \) . Then \[ R\left( {v}_{p}\right) = \{ a/b \in \mathbb{Q} \mid p \nmid b\} \] and since \( {n}_{p}\left( p\right) = 1 \), the unique maximal ideal is the principal ideal \( {pR}\left( {v}_{p}\right) \) . Note that the field of fractions of \( R\left( {v}_{p}\right) \) is again \( \mathbb{Q} \) and the quotient field \( R\left( {v}_{p}\right) /{pR}\left( {v}_{p}\right) \) is the finite field \( {\mathbb{F}}_{p} \) . We conclude this section with a discussion of ray class groups, now that the appropriate language is available to do this. A ray class group is defined with respect to a modulus in \( k \), where the following definition holds.
Definition 0.6.9 A modulus in \( k \) is a formal product \[ \mathcal{M} = \mathop{\prod }\limits_{p}{p}^{m\left( p\right) } \] over all finite and infinite primes, with \( m\left( p\right) = 0 \) for all but a finite number, \( m\left( p\right) = 0 \) if \( p \) is a complex infinite prime, \( m\left( p\right) = 0,1 \) if \( p \) is a real infinite prime and \( m\left( p\right) \) is a positive integer otherwise. Thus a modulus is a finite product which can be split into two parts: the infinite part \( {\mathcal{M}}_{\infty } \), where the product is over the real primes and the finite part \( {\mathcal{M}}_{0} \), where the product is over a finite number of prime ideals. Recall that the ideal group \( {I}_{k} \) is the free abelian group on the prime ideals of \( k \) . Let \( {I}_{k}\left( \mathcal{M}\right) \) denote the subgroup of those fractional ideals that are relatively prime to all \( \mathcal{P} \), where \( \mathcal{P} \mid {\mathcal{M}}_{0} \), so that \( {I}_{k}\left( \mathcal{M}\right) \) is generated by prime ideals not dividing \( {\mathcal{M}}_{0} \) . With respect to \( \mathcal{M} \), we introduce the following equival
## 1083_(GTM240)Number Theory II: Definition 11.1.1
Definition 11.1.1. (1) We say that a function \( f \) is strictly differentiable at a point \( a \in {\mathbb{Z}}_{p} \) if the function of two variables \( {\Phi f}\left( {x, y}\right) = (f\left( x\right) - \) \( f\left( y\right) )/\left( {x - y}\right) \) has a limit \( \ell = {f}^{\prime }\left( a\right) \) as \( \left( {x, y}\right) \rightarrow \left( {a, a}\right), x \neq y \) . (2) We say that \( f \) is strictly differentiable on some subset \( X \) of \( {\mathbb{Z}}_{p} \), and write \( f \in {S}^{1}\left( X\right) \), if \( f \) is strictly differentiable for all \( a \in X \) . It is easy to show that \( f \in {S}^{1}\left( X\right) \) if and only if \( {\Phi f} \) can be extended to a continuous function on \( X \times X \), if and only if there exists a continuous function \( \varepsilon \) defined on \( X \times X \) such that \( \varepsilon \left( {x, x}\right) = 0 \) and satisfying \( f\left( y\right) = \) \( f\left( x\right) + \left( {y - x}\right) {f}^{\prime }\left( x\right) + \left( {y - x}\right) \varepsilon \left( {x, y}\right) \) for all \( \left( {x, y}\right) \in X \times X. \) Theorem 11.1.2. Let \( f\left( x\right) = \mathop{\sum }\limits_{{k \geq 0}}{a}_{k}\left( \begin{array}{l} x \\ k \end{array}\right) \) be the Mahler expansion of a continuous function \( f \) on \( {\mathbb{Z}}_{p} \) (see Theorem 4.2.26). (1) \( f \) is Lipschitz-continuous (in other words \( {\Phi f} \) is bounded) if and only if \( k\left| {a}_{k}\right| \) is bounded. In that case, \[ \parallel {\Phi f}\parallel = \mathop{\sup }\limits_{{x \neq y}}\left| {{\Phi f}\left( {x, y}\right) }\right| = \mathop{\sup }\limits_{{k \geq 1}}{p}^{\lfloor \log \left( k\right) /\log \left( p\right) \rfloor }\left| {a}_{k}\right| . \] (2) \( f \in {S}^{1}\left( {\mathbb{Z}}_{p}\right) \) if and only if \( k\left| {a}_{k}\right| \rightarrow 0 \) as \( k \rightarrow \infty \) . Definition 11.1.3. 
If \( f \) is Lipschitz-continuous we define the \( {L}^{1} \) -norm of \( f \) by the formula \[ \parallel f{\parallel }_{1} = \max \left( {\left| {f\left( 0\right) }\right| ,\parallel {\Phi f}\parallel }\right) , \] which is indeed a norm. We can now give the definition of the Volkenborn integral: Definition 11.1.4. Let \( g \) be a function from \( {\mathbb{Z}}_{p} \) to \( {\mathbb{C}}_{p} \) . We define the Volkenborn integral of \( g \) on \( {\mathbb{Z}}_{p} \), if it exists, by the formula \[ {\int }_{{\mathbb{Z}}_{p}}g\left( t\right) {dt} = \mathop{\lim }\limits_{{r \rightarrow \infty }}\frac{1}{{p}^{r}}\mathop{\sum }\limits_{{0 \leq n < {p}^{r}}}g\left( n\right) . \] If \( g \) is a function from \( {U}_{p} = {\mathbb{Z}}_{p}^{ * } \) to \( {\mathbb{C}}_{p} \), we define similarly \[ {\int }_{{\mathbb{Z}}_{p}^{ * }}g\left( t\right) {dt} = \mathop{\lim }\limits_{{r \rightarrow \infty }}\frac{1}{{p}^{r}}\mathop{\sum }\limits_{{0 \leq n < {p}^{r}, p \nmid n}}g\left( n\right) . \] Note that if \( g \) is a function on \( {\mathbb{Z}}_{p}^{ * } \) and if we define \( {g}_{0} \) to be the function on \( {\mathbb{Z}}_{p} \) equal to \( g \) on \( {\mathbb{Z}}_{p}^{ * } \) and to 0 on \( p{\mathbb{Z}}_{p} \) then evidently \( {\int }_{{\mathbb{Z}}_{p}^{ * }}g\left( t\right) {dt} = {\int }_{{\mathbb{Z}}_{p}}{g}_{0}\left( t\right) {dt} \) . On the other hand, because of the \( p \) -adic topology it is clear that \( g \in {S}^{1}\left( {\mathbb{Z}}_{p}^{ * }\right) \) if and only if \( {g}_{0} \in {S}^{1}\left( {\mathbb{Z}}_{p}\right) \), so that we can always reduce an integral over \( {\mathbb{Z}}_{p}^{ * } \) to an integral over \( {\mathbb{Z}}_{p} \) if desired. The following result, which we will not prove, ensures the existence of the Volkenborn integral of sufficiently regular functions; see [Rob1]. Proposition 11.1.5.
If \( g \in {S}^{1}\left( {\mathbb{Z}}_{p}\right) \) then \( {\int }_{{\mathbb{Z}}_{p}}g\left( t\right) {dt} \) exists, and similarly for \( {\mathbb{Z}}_{p}^{ * } \) . We will thus be able to define \( p \) -adic functions by integrating functions of two variables, in other words by setting \[ f\left( x\right) = {\int }_{{\mathbb{Z}}_{p}}g\left( {x, t}\right) {dt}\;\text{ or }\;f\left( x\right) = {\int }_{{\mathbb{Z}}_{p}^{ * }}g\left( {x, t}\right) {dt}. \] We will see that all the functions that we will introduce in this chapter (the logarithm of Morita’s \( p \) -adic gamma function, Diamond’s \( p \) -adic \( \log \) gamma function, and \( p \) -adic zeta and \( L \) -functions) have a simple definition in terms of Volkenborn integrals. To avoid excessive technicalities, we will be a little sloppy, and often assume without any justification that we can differentiate under the integral sign. This is done in [Rob1] for integrals of the form \( {\int }_{{\mathbb{Z}}_{p}}g\left( {x + t}\right) {dt} \), and otherwise it can be checked directly on the specific integral without appealing to general theorems. Here are some basic properties of these integrals, which we will not need. We always assume that the functions \( f \) that occur are in \( {S}^{1}\left( {\mathbb{Z}}_{p}\right) \) . Proposition 11.1.6. (1) \[ \left| {{\int }_{{\mathbb{Z}}_{p}}f\left( t\right) {dt}}\right| \leq p\parallel f{\parallel }_{1} \] (2) If \( {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{1} \rightarrow 0 \) (in other words if \( {f}_{n} \rightarrow f \) in \( {S}^{1}\left( {\mathbb{Z}}_{p}\right) \) ) then \[ {\int }_{{\mathbb{Z}}_{p}}{f}_{n}\left( t\right) {dt} \rightarrow {\int }_{{\mathbb{Z}}_{p}}f\left( t\right) {dt} \] (3) \[ {\int }_{{\mathbb{Z}}_{p}}\left( {f\left( {t + 1}\right) - f\left( t\right) }\right) {dt} = {f}^{\prime }\left( 0\right) . 
\] In particular, if \( g\left( x\right) = {\int }_{{\mathbb{Z}}_{p}}f\left( {x + t}\right) {dt} \), then \( g\left( {x + 1}\right) - g\left( x\right) = {f}^{\prime }\left( x\right) \) . (4) If \( f\left( x\right) = \mathop{\sum }\limits_{{k \geq 0}}{a}_{k}\left( \begin{array}{l} x \\ k \end{array}\right) \) then \[ {\int }_{{\mathbb{Z}}_{p}}f\left( t\right) {dt} = \mathop{\sum }\limits_{{k \geq 0}}{\left( -1\right) }^{k}\frac{{a}_{k}}{k + 1}. \] (5) If \( f \) is an odd function \( \left( {f\left( {-x}\right) = - f\left( x\right) }\right) \), then \[ {\int }_{{\mathbb{Z}}_{p}}f\left( t\right) {dt} = - \frac{{f}^{\prime }\left( 0\right) }{2}. \] Examples. (1) For \( x \in {\mathbb{C}}_{p} \) such that \( \left| x\right| < 1 \), we have \[ {\int }_{{\mathbb{Z}}_{p}}{\left( 1 + x\right) }^{t}{dt} = \frac{{\log }_{p}\left( {1 + x}\right) }{x}. \] (2) For all \( x \in {\mathbb{Q}}_{p} \) and \( k \in {\mathbb{Z}}_{ \geq 0} \) we have \[ {\int }_{{\mathbb{Z}}_{p}}{\left( x + t\right) }^{k}{dt} = {B}_{k}\left( x\right) \] We invite the reader to prove these formulas (Exercise 1). Since the second example above is essential, we give the proof of a more general result. Lemma 11.1.7. Let \( \chi \) be a periodic function defined on \( \mathbb{Z} \) of period a power of \( p \), and let \( k \in {\mathbb{Z}}_{ \geq 0} \) . For all \( x \in {\mathbb{C}}_{p} \) we have \[ {\int }_{{\mathbb{Z}}_{p}}\chi \left( t\right) {\left( x + t\right) }^{k}{dt} = {B}_{k}\left( {\chi, x}\right) . \] In particular, \[ {\int }_{{\mathbb{Z}}_{p}}{\left( x + t\right) }^{k}{dt} = {B}_{k}\left( x\right) \;\text{ and }\;{\int }_{{\mathbb{Z}}_{p}}\chi \left( t\right) {t}^{k}{dt} = {B}_{k}\left( \chi \right) . \] Proof. 
By definition and Corollary 9.4.17 we have \[ {\int }_{{\mathbb{Z}}_{p}}\chi \left( t\right) {\left( x + t\right) }^{k}{dt} = \mathop{\lim }\limits_{{r \rightarrow \infty }}\frac{1}{{p}^{r}}\mathop{\sum }\limits_{{0 \leq n < {p}^{r}}}\chi \left( n\right) {\left( n + x\right) }^{k} \] \[ = \mathop{\lim }\limits_{{r \rightarrow \infty }}\frac{{B}_{k + 1}\left( {\chi ,{p}^{r} + x}\right) - {B}_{k + 1}\left( {\chi, x}\right) }{{p}^{r}\left( {k + 1}\right) } \] \[ = \frac{{B}_{k + 1}^{\prime }\left( {\chi, x}\right) }{k + 1} = {B}_{k}\left( {\chi, x}\right) \] as soon as \( {p}^{r} \) is a multiple of the period of \( \chi \), by definition of the derivative and the fact that \( {B}_{k + 1}^{\prime }\left( {\chi, x}\right) = \left( {k + 1}\right) {B}_{k}\left( {\chi, x}\right) \) . ## 11.2 The \( p \) -adic Hurwitz Zeta Functions ## 11.2.1 Teichmüller Extensions and Characters on \( {\mathbb{Z}}_{p} \) ## Introduction. Recall that in the complex case, our fundamental building block was the Hurwitz zeta function \( \zeta \left( {s, x}\right) \), which enabled us first to motivate the definition of the gamma function and prove most of its properties as immediate consequences of the corresponding ones for \( \zeta \left( {s, x}\right) \), and second to define the Dirichlet \( L \) -functions as a finite linear combination of \( \zeta \left( {s, x}\right) \) for suitable rational values of \( x \) . We will proceed in exactly the same way in the \( p \) -adic case. We are going to see, however, that it is essential to distinguish between the cases \( {v}_{p}\left( x\right) < 0 \) and \( {v}_{p}\left( x\right) \geq 0 \) . Definition of \( {q}_{p} \) . The prime number \( p = 2 \) is always annoying in number theory, and especially in \( \mathfrak{p} \) -adic theory: over a general \( \mathfrak{p} \) -adic field the annoying primes are those for which \( e/\left( {p - 1}\right) \geq 1 \), in other words \( e \geq p - 1 \) . 
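The Volkenborn limits above converge \( p \) -adically rather than in \( \mathbb{R} \), but exact rational arithmetic still lets one watch the convergence. For \( {\int }_{{\mathbb{Z}}_{p}}{t}^{2}{dt} = {B}_{2}\left( 0\right) = 1/6 \), the partial sums \( {S}_{r} \) turn out to satisfy \( {v}_{p}\left( {{S}_{r} - 1/6}\right) \geq r \) in valuation-exponent terms. A sketch (helper names are ours; \( {B}_{2} \) is hard-coded rather than computed):

```python
# Sketch: p-adic convergence of S_r = p^{-r} sum_{n < p^r} n^2 to B_2 = 1/6.
from fractions import Fraction

def volkenborn_partial(g, p, r):
    """The r-th partial expression (1/p^r) sum_{0 <= n < p^r} g(n), exactly in Q."""
    N = p ** r
    return Fraction(sum(g(n) for n in range(N)), N)

def n_p(x, p):
    """Exponent-form p-adic valuation of a nonzero rational x."""
    x = Fraction(x)
    a, b, m = x.numerator, x.denominator, 0
    while a % p == 0:
        a //= p
        m += 1
    while b % p == 0:
        b //= p
        m -= 1
    return m

p, B2 = 5, Fraction(1, 6)
for r in (1, 2, 3):
    S_r = volkenborn_partial(lambda t: t * t, p, r)
    # |S_r - B_2|_p <= p^{-r}: the sums approach B_2 p-adically.
    assert n_p(S_r - B2, p) >= r
```

In ordinary absolute value the same sums blow up, which is exactly the point: the limit in Definition 11.1.4 only makes sense \( p \) -adically.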
In the case of \( {\mathbb{Q}}_{p} \), which is the main object of consideration in this chapter (although some variables will be in \( {\mathbb{C}}_{p} \) ), the only annoying prime is \( p = 2 \) (the "oddest prime" as a famous saying goes). It is thus convenient to set the following notation, which we have met briefly in Proposition 4.4.47: Definition 11.2.1. We set \( {q}_{p} = p \) when \( p \geq 3 \), and \( {q}_{2} = 4 \) . In addition, we define \[ {\mathrm{{CZ}}}_{p} = \left\{ {x \in {\mathbb{Q}}_{p},{v}_{p}\left( x\right) \leq - {v}_{p}\left( {q}_{p}\right) }\right\} \] so that when \( p \geq 3 \) we have \( {\mathrm{{CZ}}}_{p} =
## 1065_(GTM224)Metric Structures in Differential Geometry: Definition 5.1
Definition 5.1. A connection on a principal \( G \) -bundle \( \pi : P \rightarrow M \) is a distribution \( \mathcal{H} \) on \( P \) such that: (1) \( {TP} = \ker {\pi }_{ * } \oplus \mathcal{H} \) . (2) \( {R}_{g * }\mathcal{H} = \mathcal{H} \circ {R}_{g} \) for all \( g \in G \) . As in the vector bundle case, the splitting in (1) determines a decomposition \( u = {u}^{v} + {u}^{h} \in \ker {\pi }_{ * } \oplus \mathcal{H} \) of any \( u \in {TP} \) as a sum of a vertical and a horizontal vector. By the above definition, any connection \( \mathcal{H} \) on a vector bundle \( \xi \) determines a connection \( \widetilde{\mathcal{H}} = \left\{ {u \in {TP} \mid {\rho }_{ * }\left( {u,0}\right) \in \mathcal{H}}\right\} \) on the principal \( {GL}\left( n\right) \) -bundle \( {Fr}\left( \xi \right) \) : If \( {\pi }_{E} \) denotes the vector bundle projection, and \( {\pi }_{1} : P \times {\mathbb{R}}^{n} \rightarrow P \) the projection onto the first factor, then \( \pi \circ {\pi }_{1} = {\pi }_{E} \circ \rho \) . Since \( {\pi }_{E * \mid \mathcal{H}} \) is onto, \( {\pi }_{ * }\widetilde{\mathcal{H}} = {\pi }_{ * }{\pi }_{1 * }\left( {\widetilde{\mathcal{H}} \times 0}\right) = {\pi }_{E * }{\rho }_{ * }\left( {\widetilde{\mathcal{H}} \times 0}\right) = {\pi }_{E * }\left( \mathcal{H}\right) = {TM} \), so that \( \widetilde{\mathcal{H}} \) is complementary to \( \ker {\pi }_{ * } \) . Furthermore, if \( \gamma \) is a basis of parallel fields along a curve in \( M \) and \( g \in {GL}\left( n\right) \), then each element of \( {\gamma g} \) is a constant linear combination of the fields in \( \gamma \), and is therefore parallel. Thus, \( \mathcal{H} \) is invariant under \( {R}_{g} \) . 
Conversely, given a principal \( {GL}\left( n\right) \) -bundle \( P \rightarrow M \) and a connection \( \mathcal{H} \) on the bundle, we obtain a connection on the vector bundle \( \xi : E = P{ \times }_{{GL}\left( n\right) }{\mathbb{R}}^{n} \rightarrow \) \( M \) by requiring that \( \rho \left( {\gamma, u}\right) \) be parallel along \( c \) whenever \( \gamma \) is parallel along \( c \) and \( u \in {\mathbb{R}}^{n} \) ; i.e., we claim that \( \widetilde{\mathcal{H}} \mathrel{\text{:=}} {\rho }_{ * }\left( {\mathcal{H} \times 0}\right) \) is a connection on \( \xi \) : Clearly, \( \widetilde{\mathcal{H}} + \mathcal{V}\xi = {TE} \) . To see that \( \widetilde{\mathcal{H}} \) is invariant under multiplication \( {\mu }_{a} \) by \( a \in \mathbb{R} \) , recall that the map \( \rho \circ \left( {{R}_{g} \times {1}_{{\mathbb{R}}^{n}}}\right) \) on \( P \times {\mathbb{R}}^{n} \) equals \( \rho \circ \left( {{1}_{P} \times g}\right) \) . Thus, \[ {\widetilde{\mathcal{H}}}_{{a\rho }\left( {b, u}\right) } = {\widetilde{\mathcal{H}}}_{\rho \left( {b,{au}}\right) } = {\widetilde{\mathcal{H}}}_{\rho \left( {{ba}{I}_{n}, u}\right) } = {\rho }_{ * }\left( {{\mathcal{H}}_{{ba}{I}_{n}} \times {0}_{u}}\right) \] \[ = {\rho }_{ * } \circ {\left( {R}_{a{I}_{n}} \times {1}_{{\mathbb{R}}^{n}}\right) }_{ * }\left( {{\mathcal{H}}_{b} \times {0}_{u}}\right) = {\rho }_{ * } \circ {\left( {1}_{P} \times a{I}_{n}\right) }_{ * }\left( {{\mathcal{H}}_{b} \times {0}_{u}}\right) . \] But \( \rho \circ \left( {{1}_{P} \times a{I}_{n}}\right) = {\mu }_{a} \circ \rho \), so that \[ {\widetilde{\mathcal{H}}}_{{a\rho }\left( {b, u}\right) } = {\mu }_{a * } \circ {\rho }_{ * }\left( {{\mathcal{H}}_{b} \times {0}_{u}}\right) = {\mu }_{a * }{\widetilde{\mathcal{H}}}_{\rho \left( {b, u}\right) } \] as claimed. 
For \( b \in P \), the map \( {l}_{b} : G \rightarrow P \) given by \( {l}_{b}\left( g\right) = {R}_{g}\left( b\right) = {bg} \) is an imbedding onto the fiber of \( P \) through \( b \) by Lemma 10.1 in Chapter 5. If \( U \in \mathfrak{g} \), the fundamental vector field \( \widetilde{U} \in \mathfrak{X}P \) determined by \( U \) is defined by \[ \widetilde{U}\left( b\right) = {l}_{b * }U\left( e\right) ,\;b \in P. \] In analogy with the vector bundle case, define the horizontal lift of \( X \in \mathfrak{X}M \) to be the unique horizontal \( \bar{X} \in \mathfrak{X}P \) that is \( \pi \) -related to \( X \) . Such an \( \bar{X} \) is said to be basic. Proposition 5.1. The map \( \psi : \mathfrak{g} \rightarrow \mathfrak{X}P \) which assigns to \( U \in \mathfrak{g} \) the fundamental vector field \( \widetilde{U} \) determined by \( U \) is a Lie algebra homomorphism. Furthermore, \( \left\lbrack {\widetilde{U}, X}\right\rbrack \) is horizontal if \( X \) is, and is zero if \( X \) is basic. Proof. \( \psi \) is by definition linear. To see that it is a homomorphism, notice first of all that the flow of \( \widetilde{U} \) is \( {R}_{\exp \left( {tU}\right) } \) : In fact, if \( \gamma \) is the curve \( t \mapsto \exp \left( {tU}\right) \) in \( G \) and \( c\left( t\right) = {R}_{\exp \left( {tU}\right) }\left( b\right) = {l}_{b} \circ \gamma \left( t\right) \), then \[ \dot{c}\left( t\right) = {l}_{b * } \circ \dot{\gamma }\left( t\right) = {l}_{b * }{U}_{\exp \left( {tU}\right) } = {l}_{b * } \circ {L}_{\left( {\exp {tU}}\right) * }U\left( e\right) = {l}_{b\exp \left( {tU}\right) * }U\left( e\right) = \widetilde{U} \circ c\left( t\right) . \] Thus, by definition of the Lie bracket, \[ \left\lbrack {\widetilde{U},\widetilde{V}}\right\rbrack \left( b\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{t}\left( {{R}_{\exp \left( {-{tU}}\right) * }{l}_{b\exp \left( {tU}\right) * }V\left( e\right) - {l}_{b * }V\left( e\right) }\right) . 
\] If \( {\tau }_{a} \) denotes conjugation by \( a \) in \( G \), then \[ {R}_{\exp \left( {-{tU}}\right) } \circ {l}_{b\exp \left( {tU}\right) }\left( g\right) = b\exp \left( {tU}\right) g\exp \left( {-{tU}}\right) = {l}_{b} \circ {\tau }_{\exp \left( {tU}\right) }\left( g\right) . \] By Example 8.1(iii) in Chapter 1, \[ \left\lbrack {\widetilde{U},\widetilde{V}}\right\rbrack \left( b\right) = {l}_{b * }\mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{t}\left( {{\operatorname{Ad}}_{\exp \left( {tU}\right) }V\left( e\right) - V\left( e\right) }\right) , \] and it remains to show that the latter limit is \( \left\lbrack {U, V}\right\rbrack \left( e\right) \) . But if \( R \) now denotes right translation in \( G \), then \[ \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{t}\left( {{\operatorname{Ad}}_{\exp \left( {tU}\right) }V\left( e\right) - V\left( e\right) }\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{t}\left( {{R}_{\exp \left( {-{tU}}\right) * } \circ {L}_{\exp \left( {tU}\right) * }V\left( e\right) - V\left( e\right) }\right) \] \[ = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{t}\left( {{R}_{\exp \left( {-{tU}}\right) * } \circ V \circ {R}_{\exp \left( {tU}\right) }\left( e\right) - V\left( e\right) }\right) \] \[ = \left\lbrack {U, V}\right\rbrack \left( e\right) \] since \( U \) has flow \( {R}_{\exp \left( {tU}\right) } \) and \( V \) is left-invariant. This shows that \( \psi \) is a Lie algebra homomorphism. At this stage, it is worth noting that the above argument establishes the following: Observation. Denote by ad : \( \mathfrak{g} \rightarrow \mathfrak{{gl}}\left( \mathfrak{g}\right) \) the derivative at the identity of Ad \( : G \rightarrow {GL}\left( \mathfrak{g}\right) \) ; i.e., for \( U \in \mathfrak{g},{\operatorname{ad}}_{U} = {\operatorname{Ad}}_{*e}U \) . Then \( {\operatorname{ad}}_{U}V = \left\lbrack {U, V}\right\rbrack \) . 
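For a matrix group, where \( {\operatorname{Ad}}_{g}V = {gV}{g}^{-1} \), the Observation can be checked numerically: the derivative of \( t \mapsto {\operatorname{Ad}}_{\exp \left( {tU}\right) }V \) at \( t = 0 \) should equal \( {UV} - {VU} \). A rough sketch with \( 2 \times 2 \) matrices (pure-Python helpers, a truncated Taylor series for the exponential, and a central finite difference; all names are ours):

```python
# Sketch: d/dt Ad_{exp(tU)} V at t = 0 equals the commutator [U, V] = UV - VU.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B, s=1.0):
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=20):
    """exp(A) by truncated Taylor series (adequate for small matrices)."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[term[i][j] / k for j in range(2)] for i in range(2)]
        term = mat_mul(term, A)
        result = mat_add(result, term)
    return result

def Ad(g, g_inv, V):
    return mat_mul(mat_mul(g, V), g_inv)

U = [[0.0, 1.0], [-1.0, 0.0]]
V = [[1.0, 0.0], [0.0, -1.0]]
t = 1e-4
tU = [[t * x for x in row] for row in U]
neg_tU = [[-t * x for x in row] for row in U]
# Central difference of t |-> Ad_{exp(tU)} V at t = 0 ...
diff = mat_add(Ad(mat_exp(tU), mat_exp(neg_tU), V),
               Ad(mat_exp(neg_tU), mat_exp(tU), V), s=-1.0)
deriv = [[x / (2 * t) for x in row] for row in diff]
# ... agrees with the bracket [U, V] = UV - VU.
bracket = mat_add(mat_mul(U, V), mat_mul(V, U), s=-1.0)
assert all(abs(deriv[i][j] - bracket[i][j]) < 1e-6
           for i in range(2) for j in range(2))
```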
We now proceed to the second part of the proposition: If \( X \) is horizontal, then as above, \[ \left\lbrack {\widetilde{U}, X}\right\rbrack \left( b\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{t}\left( {{R}_{\exp \left( {-{tU}}\right) * } \circ X \circ {R}_{\exp \left( {tU}\right) }\left( b\right) - X\left( b\right) }\right) \] is horizontal, since \( \mathcal{H} \) is invariant under \( {R}_{g} \) . Finally, if \( \bar{X} \) is basic and \( \pi \) -related to \( X \in \mathfrak{X}M \), then \( {\pi }_{ * }\left\lbrack {\widetilde{U},\bar{X}}\right\rbrack = \left\lbrack {0, X}\right\rbrack \circ \pi = 0 \), since vertical fields are \( \pi \) -related to the zero field on \( M \) . Thus, the horizontal component of \( \left\lbrack {\widetilde{U},\bar{X}}\right\rbrack \), and by the above, \( \left\lbrack {\widetilde{U},\bar{X}}\right\rbrack \) itself, must vanish. We next discuss an analogue for principal bundles of the connection map \( \kappa : {TE} \rightarrow E \) for vector bundles: Recall that \( \kappa \) essentially picked out the vertical component \( {u}^{v} \) of \( u \in {TE} \) . Since \( {u}^{v} \in \ker {\pi }_{ * } \), it can be identified with an element \( \kappa \left( v\right) \) of \( E \) . A similar property holds for principal bundles: If \( u \in {T}_{b}P \) is vertical, that is, \( u \in \ker {\pi }_{ * } \), then it is tangent to the orbit of \( b \) which is diffeomorphic to \( G \), and hence parallelizable. In other words, there exists a unique \( U \in \mathfrak{g} \) with \( \widetilde{U}\left( b\right) = u \), so we may define \( \kappa \left( u\right) = U \) . It is customary to use the letter \( \omega \) instead: Definition 5.2. The connection form \( \omega \) of a connection on a principal \( G \) -bundle \( P \rightarrow M \) is the \( \mathfrak{g} \) -valued 1 -form given by \[ \omega \left( u\right) = {\left( {l}_{b * e}\right) }^{-1}{u}^{v},\;u \in {T}_{b}P,\;b \in P. 
\] (Strictly speaking, \( \omega \in {A}_{1}\left( {P,\eta }\right) \), where \( \eta \) denotes the trivial bundle over \( P \) with total space \( P \times \mathfrak{g} \) .) Proposition 5.2. The connection form \( \omega \) of a connection \( \mathcal{H} \) satisfies (1) \( {\omega }_{\mid \mathcal{H}} \equiv 0,\;{l}_{b * } \circ {\omega }_{\mid \ker {\pi }_{ * }} = {1}_{\ker {\pi }_{ * }} \) , (2) \( {R}_{g}^{ * }\omega = {\operatorname{Ad}}_{{g}^{-1}} \circ \omega ,\;g \in G \) . Conversely, if \( \omega \) is a \( \mathfrak{g} \) -valued 1-form on \( P \) satisfying the
## 1057_(GTM217)Model Theory: Definition 7.5.6
Definition 7.5.6 Suppose that \( \mathbb{M} \) is \( \omega \) -stable. We call \( a, b, c, x, y, z \in {\mathbb{M}}^{\text{eq }} \) a group configuration if: i) \( \operatorname{RM}\left( a\right) = \operatorname{RM}\left( b\right) = \operatorname{RM}\left( c\right) = \operatorname{RM}\left( x\right) = \operatorname{RM}\left( y\right) = \operatorname{RM}\left( z\right) = 1 \) ; ii) any pair of elements has rank 2 ; iii) \( \operatorname{RM}\left( {a, b, c}\right) = \operatorname{RM}\left( {c, x, y}\right) = \operatorname{RM}\left( {a, y, z}\right) = \operatorname{RM}\left( {b, x, z}\right) = 2 \) ; iv) any other triple has rank 3 ; v) \( \operatorname{RM}\left( {a, b, c, x, y, z}\right) \) has rank 3 . We represent the group configuration by the following diagram. ![9e29fa71-7654-4400-976d-440e996c799d_286_0.jpg](images/9e29fa71-7654-4400-976d-440e996c799d_286_0.jpg) The points in the diagram have rank 1. Conditions iii) and iv) assert that each line has rank 2 while any three non-collinear points have rank 3 . There is one easy way that a group configuration arises. Suppose that \( G \) is a strongly minimal Abelian group. Let \( a, b, x \) be independent generic elements of \( G \) . Let \( c = {ba}, y = {cx} \), and \( z = {bx} \) ; then, \( y = {az} \) and it is easy to check that conditions i)-v) hold. Remarkably, Hrushovski proved that whenever there is a group configuration there is also a definable group. Theorem 7.5.7 Suppose that there is a group configuration in \( {\mathbb{M}}^{\text{eq }} \) . Then, there is a rank one group definable in \( {\mathbb{M}}^{\text{eq }} \) . We give an application of the group configuration in Theorem 8.3.1. Proofs of Hrushovski’s Theorem appear in [18] §4.5 and [76] §5.4. ## 7.6 Exercises and Remarks Throughout the Exercises \( G \) is an \( \omega \) -stable group. Exercise 7.6.1 a) Show that the Descending Chain Condition fails for the stable group \( \left( {\mathbb{Z},+,0}\right) \) . 
b) Suppose that \( G \) is a stable group and \( \phi \left( {x,\bar{y}}\right) \) is a formula. Show that we cannot find \( {\bar{a}}_{1},{\bar{a}}_{2},\ldots \) such that \( {G}_{i} = \left\{ {x : G \vDash \phi \left( {x,{\bar{a}}_{i}}\right) }\right\} \) is a subgroup of \( G \) and \( {G}_{1} \supset {G}_{2} \supset {G}_{3} \supset \ldots \) [Hint: Suppose not and find a violation of the order property.] Exercise 7.6.2 Prove Lemma 7.1.8. Exercise 7.6.3 Let \( p \in {S}_{1}\left( G\right) \), let \( {G}_{1} \) be an elementary extension of \( G \) , and let \( {p}_{1} \) be the unique nonforking extension of \( p \) to \( {G}_{1} \) . Show that the formula that defines \( \operatorname{Stab}\left( p\right) \) in \( G \) defines \( \operatorname{Stab}\left( {p}_{1}\right) \) in \( {G}_{1} \) . Exercise 7.6.4 Show that if \( G \) is connected and \( H \trianglelefteq G \) is definable, then \( G/H \) is connected. Exercise 7.6.5 Suppose that \( G \) is a connected \( \omega \) -stable group and \( \sigma \) : \( G \rightarrow G \) is a definable homomorphism with finite kernel. Show that \( \sigma \) is surjective. Exercise 7.6.6 Use Lemma 7.5.2 to show that any infinite stable integral domain is a field. Exercise 7.6.7 Suppose that \( \left( {K,+,\cdot ,\ldots }\right) \) is a field of finite Morley rank. Show that \( K \) has no infinite definable subrings. [Hint: By Exercise 7.6.6, any definable subring is a subfield. Show that if there is a definable subfield, then \( K \) must have infinite rank.] Conclude that if \( \sigma : K \rightarrow K \) is a nontrivial field automorphism and \( \left( {K,+,\cdot ,\sigma ,0,1}\right) \) has finite Morley rank, then \( K \) has characteristic \( p > 0 \) and the fixed field of \( \sigma \) is finite. Exercise 7.6.8 Let \( K \) be a finite Morley rank field of characteristic 0 . a) Show that there are no nontrivial definable additive subgroups of \( K \) . 
[Hint: Let \( G \) be a definable subgroup and consider \( R = \{ a : {aG} = G\} \) .] This is still an open question in characteristic \( p \) . b) If \( \sigma : {K}^{n} \rightarrow {K}^{m} \) is a definable additive homomorphism, then \( \sigma \) is \( K \) -linear. [Hint: Consider \( \{ a : \forall {x\sigma }\left( {ax}\right) = {a\sigma }\left( x\right) \} \) .] In particular, the only definable homomorphisms of \( {K}^{ + } \) are \( x \mapsto {ax}, a \in K \) . Exercise 7.6.9 Show that \( G \) acts transitively on the generic types of \( G \) . Exercise 7.6.10 Show that if \( \operatorname{RM}\left( p\right) = \operatorname{RM}\left( {\operatorname{Stab}\left( p\right) }\right) \), then \( \operatorname{Stab}\left( p\right) \) is connected and \( p \) is a translate of the generic of \( \operatorname{Stab}\left( p\right) \) . Exercise 7.6.11 Suppose that \( \Gamma \) is a connected group and there is a definable transitive action of \( \Gamma \) on a finite set \( S \) . Then \( \left| S\right| = 1 \) . Exercise 7.6.12 Show that if \( X \subseteq G \) is definable and indecomposable and \( g \in G \), then \( {gX} \) is indecomposable and \( {gX}{g}^{-1} \) is indecomposable. Exercise 7.6.13 Suppose that \( G \) is an algebraic group and \( X \subseteq G \) is an irreducible subvariety. Prove that \( X \) is indecomposable. Exercise 7.6.14 Show that if \( G \) is an infinite group of finite Morley rank with no definable infinite proper subgroups, and \( X \subseteq G \) is infinite and definable, then \( X \) generates \( G \) . Exercise 7.6.15 Show that if \( G \) is an \( {\aleph }_{0} \) -saturated \( \omega \) -stable group and \( X \subseteq G \) is an infinite definable set, then there are \( {Y}_{1},\ldots ,{Y}_{n} \subseteq X \) such that \( X = {Y}_{1} \cup \ldots \cup {Y}_{n} \) and \( {Y}_{1},\ldots ,{Y}_{n} \) are indecomposable. 
Exercise 7.6.16 Suppose that \( \left( {K,+,\cdot ,\ldots }\right) \) is a field of finite Morley rank and \( X \subseteq K \) is an infinite definable set. a) Show that there are \( {a}_{1},\ldots ,{a}_{n} \in K \) such that \( K = {a}_{1}X + {a}_{2}X + \ldots + \) \( {a}_{n}X \) . [Hint: Without loss of generality, assume that \( K \) is \( {\aleph }_{0} \) -saturated and, by Exercise 7.6.15, that \( X \) is indecomposable. Let \( x \in X \) and \( Y = X - x \) . Show that the additive subgroup \( A \) generated by \( \{ {aY} : a \in K\} \) is definable. Argue that \( A = K \) .] b) Show that if the language \( \mathcal{L} \) is countable, then the theory of \( K \) is categorical in all uncountable powers. Exercise 7.6.17 Let \( F \) be an infinite field, and let \( G \) be a group of automorphisms of \( F \) such that the action of \( G \) on \( F \) has finite Morley rank. Show that \( G = \{ 1\} \) . [Hint: Without loss of generality, \( G \) is Abelian. Using Exercise 7.6.7, \( F \) has characteristic \( p > 0 \) and for all \( \sigma \in G - \{ 1\} ,\operatorname{Fix}\left( \sigma \right) \) , the fixed field of \( \sigma \), is finite. Show that if \( \sigma \in G - \{ 1\} \) then for all \( n > 1 \) , \( {\sigma }^{n} \neq 1 \) and \( \left| {\operatorname{Fix}\left( {\sigma }^{n}\right) }\right| > \left| {\operatorname{Fix}\left( \sigma \right) }\right| \) . Thus, if \( G \neq \{ 1\} \), then \( G \) is infinite. On the other hand, if \( G \) is infinite and \( \sigma \) is generic, so is \( {\sigma }^{n} \) . Derive a contradiction.] Exercise 7.6.18 Show that if \( G \) is a simple group of finite Morley rank, and \( H \equiv G \), then \( H \) is simple. [Hint: You first must show that an infinite Abelian simple group is not \( \omega \) -stable.] 
Exercise 7.6.19 Let \( K \) be an algebraically closed field, and let \( G \) be the affine group of matrices \[ G = \left\{ {\left( \begin{array}{ll} a & b \\ 0 & 1 \end{array}\right) : a, b \in K, a \neq 0}\right\} . \] a) Show that \( G \) is connected. b) Show that \[ {G}^{\prime } = \left\{ {\left( \begin{array}{ll} 1 & b \\ 0 & 1 \end{array}\right) : b \in K}\right\} \] and \( {G}^{\prime \prime } = \{ 1\} \) . Thus, \( G \) is solvable. c) Show that \( G \) is centerless. Exercise 7.6.20 Let \( \mathbb{M} \) be a monster model of the theory of real closed fields. Let \( I = \{ x \in \mathbb{M} : \left| x\right| < \frac{1}{n} \) for \( n = 1,2,\ldots \} \) . Show that \( \left( {I, + }\right) \) is an \( \bigwedge \) -definable group that is not definable. Exercise 7.6.21 Fill in the details in the proof of Lemma 7.4.2 iv). Exercise 7.6.22 Let \( K \) be an algebraically closed field. a) Show that there is a variety \( V \) with open cover \( {V}_{0} \cup {V}_{1} \) and \( {f}_{i} : {V}_{i} \rightarrow K \) a homeomorphism such that \( {f}_{i}\left( {{V}_{i} \cap {V}_{1 - i}}\right) = K \smallsetminus \{ 0\} \) and \( {f}_{i} \circ {f}_{1 - i}^{-1} \) is the identity on \( K \smallsetminus \{ 0\} \) for \( i = 0,1 \) . (The variety \( V \) looks like the line \( K \) with 0 "doubled".) b) Show that \( \Delta = \{ \left( {x, y}\right) \in V \times V : x = y\} \) is not closed in \( V \times V \) . c) Show that if \( G \) is an algebraic group, then \( \Delta = \{ \left( {x, y}\right) \in G \times G : x = y\} \) is closed in \( G \) . Exercise 7.6.23 Prove Lemma 7.4.6. ## Remarks We know some things about the Cherlin-Zil’ber Conjecture for groups of very small rank. Of course, a connected group of rank 1 is Abelian. Cherlin proved that there are no non-Abelian simple groups of rank 2. Theorem 7.6.24 If \( G \) is a connected rank 2 group, then \( G \) is solvable. Problems arise in the analysis starting at rank 3. 
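The solvability computation in Exercise 7.6.19 b) can be checked numerically: the commutator of any two elements of the affine group lies in the unipotent subgroup \( {G}^{\prime } \). A floating-point sketch over \( \mathbb{R} \), with arbitrarily chosen entries (the matrix identity itself is independent of the field):

```python
import numpy as np

# Commutator g h g^{-1} h^{-1} of two elements of the affine group
# G = {[[a, b], [0, 1]] : a != 0} lies in G' = {[[1, b], [0, 1]]}.
# The entries of g and h below are arbitrary sample values.
def aff(a, b):
    return np.array([[a, b], [0.0, 1.0]])

g, h = aff(2.0, 3.0), aff(5.0, -1.0)
comm = g @ h @ np.linalg.inv(g) @ np.linalg.inv(h)

# The commutator is unipotent: diagonal (1, 1), lower-left entry 0.
assert np.allclose(np.diag(comm), [1.0, 1.0])
assert np.allclose(comm[1, 0], 0.0)
print(comm)
```

Repeating this with two elements of \( {G}^{\prime } \) itself gives the identity matrix, since \( {G}^{\prime } \) is Abelian, matching \( {G}^{\prime \prime } = \{ 1\} \).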
Definition 7.6.25 We say that \( G \) is a bad group if \( G \) is connected, nonsolvable, and all proper connected definable subgroups of \( G \) are nilpotent. An algebraic group over an algebraically closed field is not a bad group. The real algebraic group \( S{O}_{3}\left( \mathbb{R}\right) \) is connected, nonsolvable, and all real algebraic subgroups are one-dimensional, but \( S{O}_{3}\left( \mathbb{R}\right) \) is not a group of finite Morley rank.
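Looking back at Definition 7.5.6: in the Abelian-group construction of a group configuration, the only identity that needs checking is \( y = {az} \). A toy symbolic check in the cyclic group \( \mathbb{Z}/p \), written additively (the prime \( p \) and the elements are arbitrary choices; genericity and independence of course cannot be modelled in a finite group, so only the algebra is verified):

```python
# Toy check of the identities in the Abelian construction following
# Definition 7.5.6, in the cyclic group Z/p written additively.
p = 101                     # arbitrary prime modulus
a, b, x = 17, 58, 93        # arbitrary "generic" elements
c = (b + a) % p             # c = ba
y = (c + x) % p             # y = cx
z = (b + x) % p             # z = bx
# y = az follows from commutativity: (b + a) + x = a + (b + x)
assert y == (a + z) % p
print("y =", y, " az =", (a + z) % p)
```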
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 5.1.5
Definition 5.1.5. For any closed subset \( E \) of \( \Delta \left( A\right) \), define an ideal \( j\left( E\right) \) of \( A \) by \[ j\left( E\right) = \{ x \in A : \widehat{x}\text{ has compact support and }\operatorname{supp}\widehat{x} \cap E = \varnothing \} . \] If \( E \) is a singleton \( \{ \varphi \} \), we simply write \( j\left( \varphi \right) \) instead of \( j\left( {\{ \varphi \} }\right) \) . Theorem 5.1.6. Suppose that \( A \) is semisimple and regular and let \( I \) be an ideal of \( A \) and \( E \) a closed subset of \( \Delta \left( A\right) \) . Then \( h\left( I\right) = E \) if and only if \[ j\left( E\right) \subseteq I \subseteq k\left( E\right) . \] In particular, \( \overline{j\left( E\right) } \) is the smallest closed ideal of \( A \) with hull equal to \( E \) . Proof. Suppose first that \( j\left( E\right) \subseteq I \subseteq k\left( E\right) \) . Then, since \( A \) is regular, \[ E = h\left( {k\left( E\right) }\right) \subseteq h\left( I\right) \subseteq h\left( {j\left( E\right) }\right) . \] To show that actually \( h\left( I\right) = E \), it therefore suffices to verify that \( h\left( {j\left( E\right) }\right) \subseteq E \) . To that end, let \( \varphi \in \Delta \left( A\right) \smallsetminus E \) and choose a relatively compact open neighbourhood \( U \) of \( \varphi \) such that \( \bar{U} \cap E = \varnothing \) . Because \( A \) is regular, there exists \( x \in A \) such that \[ \widehat{x}\left( \varphi \right) = 1\text{ and }{\left. \widehat{x}\right| }_{\Delta \left( A\right) \smallsetminus U} = 0. \] Thus \( \widehat{x} \) has compact support and vanishes on the open neighbourhood \( \Delta \left( A\right) \smallsetminus \bar{U} \) of \( E \) . So \( x \in j\left( E\right) \), whereas \( \varphi \left( x\right) \neq 0 \) . This shows \( \varphi \notin h\left( {j\left( E\right) }\right) \), as required. Conversely, suppose that \( h\left( I\right) = E \) .
Then \( I \subseteq k\left( {h\left( I\right) }\right) = k\left( E\right) \), and if \( x \in j\left( E\right) \), then \( \widehat{x} \) has compact support and \( h\left( I\right) \cap \operatorname{supp}\widehat{x} = \varnothing \), and this implies \( x \in I \) by Corollary 5.1.4. Finally, this also shows that \( h\left( \overline{j\left( E\right) }\right) = h\left( {j\left( E\right) }\right) = E \) and \( \overline{j\left( E\right) } \subseteq I \) for every closed ideal \( I \) of \( A \) with \( h\left( I\right) = E \) . We now introduce some further notions that are fundamental to the study of ideal theory in commutative Banach algebras. Definition 5.1.7. Let \( A \) be a commutative Banach algebra and \( E \) a closed subset of \( \Delta \left( A\right) \) . (i) \( E \) is called a spectral set or set of synthesis (some authors also use the term Wiener set) if \( k\left( E\right) \) is the only closed ideal of \( A \) with hull equal to \( E \) . We say that spectral synthesis holds for \( A \) or \( A \) admits spectral synthesis if every closed subset of \( \Delta \left( A\right) \) is a set of synthesis. (ii) \( E \) is called a Ditkin set or Wiener-Ditkin set for \( A \) if given \( x \in k\left( E\right) \) , there exists a sequence \( {\left( {y}_{k}\right) }_{k} \) in \( j\left( E\right) \) such that \( {y}_{k}x \rightarrow x \) as \( k \rightarrow \infty \) . (iii) \( A \) is called Tauberian if the set of all \( x \in A \) such that \( \widehat{x} \) has compact support is dense in \( A \) . Remark 5.1.8. (1) Suppose that \( A \) is semisimple and regular. Then Theorem 5.1.6 shows that a closed subset \( E \) of \( \Delta \left( A\right) \) is a set of synthesis if and only if \( k\left( E\right) = \overline{j\left( E\right) } \), and \( E \) is a Ditkin set if and only if \( E \) is a set of synthesis and \( x \in \overline{{xk}\left( E\right) } \) for every \( x \in k\left( E\right) \) . 
Furthermore, \( A \) is Tauberian precisely when \( \varnothing \) is a set of synthesis. (2) The fact that a singleton \( \{ \varphi \} \) is a Ditkin set for \( A \) is often rephrased by saying that \( A \) satisfies Ditkin’s condition at \( \varphi \) . Similarly, one says that \( A \) satisfies Ditkin’s condition at infinity if \( \varnothing \) is a Ditkin set. Moreover, \( A \) is said to satisfy Ditkin’s condition if it satisfies Ditkin’s condition at every \( \varphi \in \Delta \left( A\right) \) and at infinity. We have already observed that every proper modular ideal of a Banach algebra is contained in some maximal modular ideal (Lemma 1.4.2). It need not generally be the case that every proper closed ideal of a commutative Banach algebra is contained in some maximal modular ideal. However, we have the following Lemma 5.1.9. Let \( A \) be a regular and semisimple commutative Banach algebra and suppose that \( A \) is Tauberian. Then \( h\left( I\right) \neq \varnothing \) for every proper closed ideal \( I \) of \( A \) . In particular, if \( a \in A \) is such that \( \widehat{a}\left( \varphi \right) \neq 0 \) for all \( \varphi \in \Delta \left( A\right) \) , then the ideal \( {Aa} \) is dense in \( A \) . Proof. If \( I \) is a proper closed ideal with \( h\left( I\right) = \varnothing \), then \( j\left( \varnothing \right) \subseteq I \) by Theorem 5.1.6. However, \( j\left( \varnothing \right) \) is dense in \( A \) since \( A \) is Tauberian, so \( I = A \), a contradiction. The second statement is now obvious. In passing we insert a characterisation of sets of synthesis in terms of the dual space \( {A}^{ * } \) of \( A \) (Proposition 5.1.13). Definition 5.1.10. Let \( V \) be an open subset of \( \Delta \left( A\right) \) and let \( f \in {A}^{ * } \) . Then \( f \) is said to vanish on \( V \) if \( f\left( x\right) = 0 \) for all \( x \in A \) for which supp \( \widehat{x} \) is compact and contained in \( V \) . Lemma 5.1.11.
Let \( A \) be a semisimple and regular commutative Banach algebra. Given \( f \in {A}^{ * } \), there exists a largest open subset of \( \Delta \left( A\right) \) on which \( f \) vanishes. Proof. We first show that if \( f \) vanishes on finitely many open subsets \( {V}_{1},\ldots ,{V}_{n} \) of \( \Delta \left( A\right) \), then \( f \) vanishes on \( \mathop{\bigcup }\limits_{{j = 1}}^{n}{V}_{j} \) . To that end, let \( x \in A \) be such that supp \( \widehat{x} \) is compact and contained in \( \mathop{\bigcup }\limits_{{j = 1}}^{n}{V}_{j} \) . Since \( A \) is regular, by Corollary 4.2.12 there exist \( {u}_{1},\ldots ,{u}_{n} \in A \) so that \( \operatorname{supp}{\widehat{u}}_{j} \subseteq {V}_{j},1 \leq j \leq n \) , and \( \mathop{\sum }\limits_{{j = 1}}^{n}{\widehat{u}}_{j} = 1 \) on supp \( \widehat{x} \) . Because \( A \) is semisimple, it follows that \( x = \mathop{\sum }\limits_{{j = 1}}^{n}x{u}_{j} \), and since supp \( {\widehat{xu}}_{j} \subseteq {V}_{j} \) for \( 1 \leq j \leq n \), we conclude that \[ f\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}f\left( {x{u}_{j}}\right) = 0 \] because \( f \) vanishes on each \( {V}_{j} \) . Now, let \( \mathcal{V} \) be the collection of all open subsets of \( \Delta \left( A\right) \) on which \( f \) vanishes and let \( U = \bigcup \{ V : V \in \mathcal{V}\} \) . Then \( f \) vanishes on \( U \) . Indeed, if \( x \in A \) is such that supp \( \widehat{x} \) is compact and contained in \( U \), then there exist \( {V}_{1},\ldots ,{V}_{n} \in \mathcal{V} \) with \( \operatorname{supp}\widehat{x} \subseteq \mathop{\bigcup }\limits_{{j = 1}}^{n}{V}_{j} \), and hence \( f\left( x\right) = 0 \) by the first part of the proof. Thus \( f \) vanishes on \( U \) and, by definition, \( U \) is the largest open subset of \( \Delta \left( A\right) \) on which \( f \) vanishes. Definition 5.1.12.
Let \( f \in {A}^{ * } \) and let \( U \) be the largest open subset of \( \Delta \left( A\right) \) on which \( f \) vanishes (Lemma 5.1.11). The closed set \( \Delta \left( A\right) \smallsetminus U \) is called the support of \( f \) and denoted supp \( f \) . Now the characterisation of spectral sets in terms of \( {A}^{ * } \), announced above, is as follows. Proposition 5.1.13. Let \( E \) be a closed subset of \( \Delta \left( A\right) \) . Then \( E \) is a spectral set if and only if whenever \( f \in {A}^{ * } \) is such that \( \operatorname{supp}f \subseteq E \), then \( f\left( x\right) = 0 \) for all \( x \in k\left( E\right) \) . Proof. Suppose first that \( E \) is a set of synthesis and let \( f \in {A}^{ * } \) such that \( \operatorname{supp}f \subseteq E \) . Then \( f \) vanishes on \( \Delta \left( A\right) \smallsetminus E \) and hence \( f\left( x\right) = 0 \) for all \( x \in j\left( E\right) \) . Thus \( f\left( x\right) = 0 \) for all \( x \in \overline{j\left( E\right) } = k\left( E\right) \) . Conversely, if \( \overline{j\left( E\right) } \neq k\left( E\right) \), then by the Hahn-Banach theorem there exists \( f \in {A}^{ * } \) such that \( f\left( x\right) = 0 \) for all \( x \in \overline{j\left( E\right) } \), whereas \( f\left( y\right) \neq 0 \) for some \( y \in k\left( E\right) \) . Then \( f \) vanishes on \( \Delta \left( A\right) \smallsetminus E \) and hence \( \operatorname{supp}f \subseteq E \) . This finishes the proof. When proceeding to study the local membership principle, it is convenient to introduce the following notation. For \( a \in A \) and \( M \subseteq A \), let \( \Delta \left( {a, M}\right) \) denote the closed subset of \( \Delta \left( A\right) \) consisting of all \( \varphi \in \Delta \left( A\right) \) such that \( \widehat{a} \) does not belong locally to \( M \) at \( \varphi \) . Lemma 5.1.14. Let \( A \) be semisimple and regular and let \( I \) be a closed ideal of \( A \) . 
Let \( x \in A \) and let \( \varphi \) be an isolated point of \( \Delta \left( {x, I}\right) \) . In addition, suppose that \( \overline{j\left( \varphi \right) } \) possesses an approximate identity. Then \( \widehat{x} \) does not belong locally
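The hull-kernel relations used throughout this section can be sanity-checked in a toy finite-dimensional model. The sketch below takes \( A = {\mathbb{C}}^{n} \) with pointwise operations, so that \( \Delta \left( A\right) \) is the set of \( n \) coordinate evaluations; this is only an illustration of the formalism, since here every Gelfand transform has finite, hence compact, support, and \( j\left( E\right) \) and \( k\left( E\right) \) coincide:

```python
# Toy model of hull/kernel in A = C^n with pointwise operations,
# where Delta(A) = {0, ..., n-1} is the set of coordinate evaluations.
# (A hypothetical finite sketch of Theorem 5.1.6, not the general setting.)
n = 4

def hull(gens):
    """h(I): the points at which every generator of the ideal I vanishes."""
    return {i for i in range(n) if all(g[i] == 0 for g in gens)}

def k(E):
    """k(E): the ideal of elements vanishing on E, generated by the
    coordinate vectors supported off E."""
    return [tuple(1 if i == j else 0 for i in range(n))
            for j in range(n) if j not in E]

E = {1, 3}
assert hull(k(E)) == E          # regularity in the toy model: h(k(E)) = E
assert hull(k(set())) == set()  # k(empty set) = A has empty hull
print("h(k(E)) =", hull(k(E)))
```

Since \( j\left( E\right) = k\left( E\right) \) in this model, Theorem 5.1.6 forces \( k\left( E\right) \) to be the only ideal with hull \( E \), i.e., every closed subset is a set of synthesis here.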
1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space
Definition 4.1.1
Definition 4.1.1. The flip operator is the operator \( J \) mapping \( {\mathbf{L}}^{2} \) into \( {\mathbf{L}}^{2} \) defined by \( \left( {Jf}\right) \left( {e}^{i\theta }\right) = f\left( {e}^{-{i\theta }}\right) \) . It is clear that \( J \) is a unitary operator and that \( J \) is also self-adjoint. The matrix of \( J{M}_{\phi } \) has the following form: \[ J{M}_{\phi } = \left( \begin{matrix} & \vdots & \vdots & \vdots & \vdots & \vdots & \\ \cdots & {\phi }_{4} & {\phi }_{3} & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & \cdots \\ \cdots & {\phi }_{3} & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & \cdots \\ \cdots & {\phi }_{2} & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & \cdots \\ \cdots & {\phi }_{1} & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & {\phi }_{-3} & \cdots \\ \cdots & {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & {\phi }_{-3} & {\phi }_{-4} & \cdots \\ & \vdots & \vdots & \vdots & \vdots & \vdots & \end{matrix}\right) . \] This follows from the computation \[ \left( {J{M}_{\phi }{e}_{n},{e}_{m}}\right) = \left( {{M}_{\phi }{e}^{in\theta },{e}^{-{im\theta }}}\right) = \frac{1}{2\pi }{\int }_{0}^{2\pi }\phi \left( {e}^{i\theta }\right) \overline{{e}^{-i\left( {m + n}\right) \theta }}{d\theta } = {\phi }_{-\left( {m + n}\right) }. \] The above matrix for \( J{M}_{\phi } \) is constant along its skew-diagonals. Matrices of this form are known as Hankel matrices. Definition 4.1.2. A finite matrix, or a doubly infinite matrix (i.e., a matrix with entries in positions \( \left( {m, n}\right) \) for \( m \) and \( n \) integers), or a singly infinite matrix (i.e., a matrix with entries in positions \( \left( {m, n}\right) \) for \( m \) and \( n \) nonnegative integers) is called a Hankel matrix if its entries are constant along each skew-diagonal.
That is, the matrix \( \left( {a}_{m, n}\right) \) is Hankel if \( {a}_{{m}_{1},{n}_{1}} = {a}_{{m}_{2},{n}_{2}} \) whenever \( {m}_{1} + {n}_{1} = {m}_{2} + {n}_{2} \) . Thus the matrix of \( J{M}_{\phi } \) with respect to the standard basis of \( {\mathbf{L}}^{2} \) is a doubly infinite Hankel matrix. It is clear that a bounded operator on \( {\mathbf{L}}^{2} \) whose matrix with respect to the standard basis is a Hankel matrix is of the form \( J{M}_{\phi } \) for some \( \phi \) in \( {\mathbf{L}}^{\infty } \) . To see this, simply multiply the given Hankel matrix on the left by the matrix of \( J \) with respect to the standard basis. The resulting matrix is a doubly infinite Toeplitz matrix, and it is therefore the matrix of an operator \( {M}_{\phi } \) with respect to the standard basis. Since \( {J}^{2} = I \) it follows that the original operator is \( J{M}_{\phi } \) . The study of singly infinite Hankel matrices is much more complicated than that of doubly infinite ones. Note that the lower-right corner of the matrix representation of \( J{M}_{\phi } \) displayed above is a singly infinite Hankel matrix. That corner is the matrix of the restriction of \( {PJ}{M}_{\phi } \) to \( {\widetilde{\mathbf{H}}}^{2} \) with respect to the standard basis of \( {\widetilde{\mathbf{H}}}^{2} \) . Definition 4.1.3. A Hankel operator is an operator that is the restriction to \( {\widetilde{\mathbf{H}}}^{2} \) of an operator of the form \( {PJ}{M}_{\phi } \), where \( P \) is the projection of \( {\mathbf{L}}^{2} \) onto \( {\widetilde{\mathbf{H}}}^{2}, J \) is the flip operator, and \( \phi \) is a function in \( {\mathbf{L}}^{\infty } \) . This operator is denoted by \( {H}_{\phi } \) .
Its matrix with respect to the standard basis of \( {\widetilde{\mathbf{H}}}^{2} \) is \[ \left( \begin{matrix} {\phi }_{0} & {\phi }_{-1} & {\phi }_{-2} & {\phi }_{-3} & {\phi }_{-4} & \cdots \\ {\phi }_{-1} & {\phi }_{-2} & {\phi }_{-3} & {\phi }_{-4} & {\phi }_{-5} & \cdots \\ {\phi }_{-2} & {\phi }_{-3} & {\phi }_{-4} & {\phi }_{-5} & {\phi }_{-6} & \cdots \\ {\phi }_{-3} & {\phi }_{-4} & {\phi }_{-5} & {\phi }_{-6} & {\phi }_{-7} & \cdots \\ {\phi }_{-4} & {\phi }_{-5} & {\phi }_{-6} & {\phi }_{-7} & {\phi }_{-8} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{matrix}\right) , \] where \( \phi \left( {e}^{i\theta }\right) = \mathop{\sum }\limits_{{k = - \infty }}^{\infty }{\phi }_{k}{e}^{ik\theta } \) . Note that every Hankel operator is bounded since it is the restriction of the product of three bounded operators. As opposed to the situation with respect to Toeplitz operators, there is no unique symbol corresponding to a given Hankel operator. Theorem 4.1.4. The Hankel operators \( {H}_{\phi } \) and \( {H}_{\psi } \) are equal if and only if \( \phi - \psi \) is in \( {e}^{i\theta }{\widetilde{\mathbf{H}}}^{2} \) . Proof. Since the matrix of a Hankel operator depends only on the Fourier coefficients in nonpositive positions (Definition 4.1.3), two \( {\mathbf{L}}^{\infty } \) functions induce the same Hankel operator if and only if their Fourier coefficients agree for nonpositive indices. This is equivalent to the difference between the functions being in \( {e}^{i\theta }{\widetilde{\mathbf{H}}}^{2} \) . Definition 4.1.5. If \( f \) is a function in \( {\mathbf{L}}^{2} \), then the coanalytic part of \( f \) is the function \[ \mathop{\sum }\limits_{{n = - \infty }}^{0}\left( {f,{e}^{in\theta }}\right) {e}^{in\theta }. \] Thus two different functions with equal coanalytic parts induce the same Hankel operator, so we cannot talk about the symbol of a Hankel operator. Definition 4.1.6. The \( {\mathbf{L}}^{\infty } \) function \( \phi \) is a symbol of the Hankel operator \( H \) if \( H \) is the restriction of \( {PJ}{M}_{\phi } \) to \( {\widetilde{\mathbf{H}}}^{2} \) .
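A finite truncation of the matrix in Definition 4.1.3 can be generated directly from the nonpositive Fourier coefficients; the coefficient values below are arbitrary samples, chosen only to illustrate the skew-diagonal structure and the fact that positive-index coefficients never enter:

```python
import numpy as np

# Finite N x N truncation of the Hankel matrix of H_phi from
# Definition 4.1.3: entry (m, n) is phi_{-(m+n)}.
N = 5
phi = {k: 0.5 ** (-k) for k in range(0, -2 * N + 1, -1)}   # phi_0, phi_{-1}, ...
H = np.array([[phi[-(m + n)] for n in range(N)] for m in range(N)])

# Hankel property: entries are constant along skew-diagonals.
for m in range(N - 1):
    for n in range(N - 1):
        assert H[m + 1, n] == H[m, n + 1]

# Changing phi_k for k > 0 would change the symbol but not the matrix,
# in line with Theorem 4.1.4 on non-uniqueness of the symbol.
print(H[0])
```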
Theorem 4.1.7. The operator \( A \) has a Hankel matrix with respect to the standard basis of \( {\widetilde{\mathbf{H}}}^{2} \) if and only if it satisfies the equation \( {U}^{ * }A = {AU} \), where \( U \) is the unilateral shift. Proof. First note that \[ \left( {{U}^{ * }A{e}_{n},{e}_{m}}\right) = \left( {A{e}_{n}, U{e}_{m}}\right) = \left( {A{e}_{n},{e}_{m + 1}}\right) . \] Also, \[ \left( {{AU}{e}_{n},{e}_{m}}\right) = \left( {A{e}_{n + 1},{e}_{m}}\right) . \] Therefore, \[ \left( {{U}^{ * }A{e}_{n},{e}_{m}}\right) = \left( {{AU}{e}_{n},{e}_{m}}\right) \] for all \( m \) and \( n \) if and only if \[ \left( {A{e}_{n},{e}_{m + 1}}\right) = \left( {A{e}_{n + 1},{e}_{m}}\right) \] for all \( m \) and \( n \) . Thus \( {U}^{ * }A = {AU} \) if and only if \( A \) has a Hankel matrix. Corollary 4.1.8. If A has a Hankel matrix with respect to the standard basis of \( {\widetilde{\mathbf{H}}}^{2} \), and \( U \) is the unilateral shift, then \( {U}^{ * }{AU} \) has a Hankel matrix. Proof. This is easily seen by noticing the effect on the matrix of \( A \) of multiplying on the left by \( {U}^{ * } \) and on the right by \( U \) . Alternatively, \[ {U}^{ * }\left( {{U}^{ * }{AU}}\right) = {U}^{ * }\left( {{U}^{ * }A}\right) U \] \[ = {U}^{ * }\left( {AU}\right) U\;\text{ (by Theorem 4.1.7) } \] \[ = \left( {{U}^{ * }{AU}}\right) U\text{.} \] It follows from Theorem 4.1.7 that \( {U}^{ * }{AU} \) is Hankel. It is not too easy to show that a bounded operator that has a Hankel matrix is a Hankel operator. To prove it requires several preliminary results. Theorem 4.1.9 (Douglas’s Theorem). Let \( \mathcal{H},\mathcal{K} \), and \( \mathcal{L} \) be Hilbert spaces and suppose that \( E : \mathcal{H} \rightarrow \mathcal{K} \) and \( F : \mathcal{H} \rightarrow \mathcal{L} \) are bounded operators. 
If \( {E}^{ * }E \leq {F}^{ * }F \), then there exists an operator \( R : \mathcal{L} \rightarrow \mathcal{K} \) with \( \parallel R\parallel \leq 1 \) such that \( E = {RF} \) . Proof. First of all, observe that the hypothesis \( {E}^{ * }E \leq {F}^{ * }F \) is equivalent to \( \parallel {Ex}\parallel \leq \parallel {Fx}\parallel \) for all \( x \in \mathcal{H} \), since \( \left( {{E}^{ * }{Ex}, x}\right) = \parallel {Ex}{\parallel }^{2} \) and \( \left( {{F}^{ * }{Fx}, x}\right) = \) \( \parallel {Fx}{\parallel }^{2} \) . We first define \( R \) on the range of \( F \) . Given \( {Fx} \), set \( {RFx} = {Ex} \) . It needs to be shown that \( R \) is well-defined. If \( F{x}_{1} = F{x}_{2} \), then \( F\left( {{x}_{1} - {x}_{2}}\right) = 0 \) . Since \( \begin{Vmatrix}{E\left( {{x}_{1} - {x}_{2}}\right) }\end{Vmatrix} \leq \begin{Vmatrix}{F\left( {{x}_{1} - {x}_{2}}\right) }\end{Vmatrix} \), we have \( E\left( {{x}_{1} - {x}_{2}}\right) = 0 \) and hence \( E{x}_{1} = E{x}_{2} \) . Thus \( {RF}{x}_{1} = {RF}{x}_{2} \) . That \( R \) is linear is obvious. We must show that \( R \) is bounded. In fact, \( \parallel R\parallel \leq 1 \), since \( y = {Fx} \) yields \[ \parallel {Ry}\parallel = \parallel {RFx}\parallel = \parallel {Ex}\parallel \leq \parallel {Fx}\parallel = \parallel y\parallel . \] Thus \( R \) is bounded, and we can extend it to the closure of the range of \( F \) by continuity. Define \( R \) to be zero on the orthogonal complement of the range of \( F \) . It is then clear that \( \parallel R\parallel \leq 1 \) and that \( {RFx} = {Ex} \) for all \( x \in \mathcal{H} \) . Lemma 4.1.10. Let \( \mathcal{H} \) and \( \mathcal{K} \) be Hilbert spaces and let \( B : \mathcal{H} \rightarrow \mathcal{K} \) be a bounded operator with \( \parallel B\parallel \leq 1 \) . Then \( {\left( I - {B}^{ * }B\right) }^{1/2}{B}^{ * } = {B}^{ * }{\left( I - B{B}^{ * }\right) }^{1/2} \) . Proof. 
First of all, since \( \parallel B\parallel = \begin{Vmatrix}{B}^{ * }\end{Vmatrix} \leq 1 \), it follows that \( I - {B}^{ * }B \geq 0 \) and \( I - B{B}^{ * } \geq 0 \) . Thus \( {\left( I - {B}^{ * }B\right) }^{1/2} \) and \( {\left( I - B{B}^{ * }\right) }^{1/2} \) exist, since every positive operator has a unique positive square root (see, for example, [12, p. 240], [48, p. 331]). From the trivial equality \( \left( {I - {B}^{ * }B}\right) {B}^{ * } = {B}^{ * }\left( {I - B{B}^{ * }}\right) \), it follows by induction that \( {\left( I - {B}^{ * }B\right) }^{n}{B}^{ * } = {B}^{ * }{\left( I - B{B}^{ * }\right) }^{n} \) for all \( n \in \mathbb{N} \) .
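Douglas's Theorem 4.1.9 can be illustrated in finite dimensions, where the operator \( R \) built in the proof (\( {RFx} = {Ex} \) on the range of \( F \), zero on its orthogonal complement) is exactly \( E{F}^{ + } \) for the Moore-Penrose pseudoinverse \( {F}^{ + } \). A sketch with randomly generated matrices arranged so that \( {E}^{ * }E \leq {F}^{ * }F \):

```python
import numpy as np

rng = np.random.default_rng(0)

# We manufacture a pair with E*E <= F*F by taking E = CF with ||C|| <= 1,
# so that E*E = F*(C*C)F <= F*F.  F is made rank-deficient on purpose, so
# the extension-by-zero step in the proof is actually exercised.
F = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))   # rank 2
C = rng.standard_normal((5, 4))
C /= np.linalg.norm(C, 2)            # force operator norm ||C|| = 1
E = C @ F

# The operator R of the proof: R = E F^+ (zero on the complement of ran F).
R = E @ np.linalg.pinv(F)

assert np.allclose(R @ F, E)                 # E = RF
assert np.linalg.norm(R, 2) <= 1 + 1e-9      # ||R|| <= 1
print("operator norm of R:", np.linalg.norm(R, 2))
```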
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 5.2.14
Definition 5.2.14 For a pair of topological spaces \( \left( {X, Y}\right) \), let \[ L\left( {X, Y;G}\right) = \sup \left\{ {l \in {\mathbb{Z}}_{ + }\mid \exists \text{ nontrivial classes }\left\lbrack {\tau }_{1}\right\rbrack ,\left\lbrack {\tau }_{2}\right\rbrack ,\ldots ,\left\lbrack {\tau }_{l}\right\rbrack \in }\right. \] \[ \left. {{H}_{ * }\left( {X, Y;G}\right) \text{ such that }\left\lbrack {\tau }_{1}\right\rbrack < \cdots < \left\lbrack {\tau }_{l}\right\rbrack }\right\} . \] \( L\left( {X, Y;G}\right) \) measures the length of the chain of subordinate nontrivial singular homology classes. It is directly related to the notion of cup length: \[ {CL}\left( {X, Y;G}\right) = \sup \left\{ {l \in {\mathbb{Z}}_{ + }\mid \exists {c}_{0} \in {H}^{ * }\left( {X, Y;G}\right) ,\exists {c}_{1},\ldots ,{c}_{l} \in {H}^{ * }\left( {X;G}\right) }\right. \] \[ \text{such that}\dim \left( {c}_{i}\right) > 0, i = 1,2,\ldots, l\text{, and}\left. {{c}_{0} \cup \cdots \cup {c}_{l} \neq 0}\right\} \text{.} \] It is known from algebraic topology that \( {\operatorname{cat}}_{X}\left( X\right) \geq {CL}\left( {X,\varnothing ;G}\right) \), i.e., the cup length provides a lower bound estimate for the category; we have \( {\operatorname{cat}}_{{P}^{n}}\left( {P}^{n}\right) = n + 1,{\operatorname{cat}}_{{T}^{n}}\left( {T}^{n}\right) = n + 1 \), where \( {\mathrm{P}}^{n} \) is the \( n \)-dimensional real projective space, and \( {T}^{n} \) is the \( n \) -torus. ## 5.2.4 Index Theorem Symmetric functions, or more generally functions invariant under the action of a group \( G \), may have more critical points. This is because the quotient of the underlying space by the \( G \)-action has a more complicated topology. In this subsection we shall deal with this problem. Let us recall the proof of the Ljusternik-Schnirelmann multiplicity theorem, in which only the fundamental properties (1)-(6) of the category were used.
This inspires us to develop an abstract theory, based on these properties, for functions invariant under a \( G \)-action. Let \( M \) be a Banach-Finsler manifold with a compact group action \( G \) . Let \( \sum \) be the set of all \( G \) -invariant closed subsets of \( M \), and \( \mathcal{H} \) be the set of all \( G \) -equivariant continuous mappings from \( M \) into itself, i.e., \( h \in \mathcal{H} \) if and only if \( h \in C\left( {M, M}\right) \) and \( h \circ g = g \circ h \) for all \( g \in G \) . Definition 5.2.15 An index \( \left( {\sum ,\mathcal{H}, i}\right) \) with respect to \( G \) is defined by \( i : \sum \rightarrow \mathbb{N} \cup \{ + \infty \} \), which satisfies: (1) \( i\left( A\right) = 0 \Leftrightarrow A = \varnothing ,\forall A \in \sum \) . (2) (Monotonicity) \( A \subset B \Rightarrow i\left( A\right) \leq i\left( B\right) ,\forall A, B \in \sum \) . (3) (Subadditivity) \( i\left( {A \cup B}\right) \leq i\left( A\right) + i\left( B\right) ,\forall A, B \in \sum \) . (4) (Deformation nondecreasing) If \( \eta : \left\lbrack {0,1}\right\rbrack \times M \rightarrow M \) satisfies \( \eta \left( {t, \cdot }\right) \in \mathcal{H}\;\forall t \in \left\lbrack {0,1}\right\rbrack \), and \( \eta \left( {0, \cdot }\right) = i{d}_{M} \), then \( i\left( A\right) \leq i\left( \overline{\eta \left( {1, A}\right) }\right) ,\forall A \in \sum \) . (5) (Continuity) If \( A \in \sum \) is compact, then there is a neighborhood \( N \in \sum \) of \( A \) such that \( A \subset \operatorname{int}\left( N\right) \) and \( i\left( A\right) = i\left( N\right) \) . (6) (Normality) \( i\left( \left\lbrack p\right\rbrack \right) = 1\;\forall p \notin {\mathrm{{Fix}}}_{G} \), where \( \left\lbrack p\right\rbrack = \{ g \cdot p \mid g \in G\} \) and \( {\mathrm{{Fix}}}_{G} = \{ x \in M \mid g \cdot x = x\;\forall g \in G\} \) is the fixed-point set of \( G \), if \( G \neq \{ e\} \) . Example 1.
If \( G = \{ e\} \), then the category \( \left( {\sum ,\mathcal{H},{\operatorname{cat}}_{M}}\right) \) is an index. Example 2. Recall the genus defined in Sect. 3.3, where \( M \) is a Banach space and \( G = {\mathbb{Z}}_{2} = \{ I, - I\} \), i.e., \( {Ix} = x \) and \( \left( {-I}\right) x = - x\;\forall x \in M \). Thus \( \sum \) is the set of all closed symmetric subsets of \( M \), and \( \mathcal{H} \) is the set of odd continuous mappings. Let \[ \gamma \left( A\right) = \left\{ \begin{array}{ll} 0 & \text{ if }A = \varnothing , \\ \inf \{ k \in \mathbb{N} \mid \exists \text{ an odd map }\phi \in C\left( {A,{R}^{k}\smallsetminus \{ \theta \} }\right) \} & \text{ if such a map exists}, \\ + \infty & \text{ if no such odd map exists}. \end{array}\right. \] Then the genus \( \left( {\sum ,\mathcal{H},\gamma }\right) \) is an index with respect to \( {\mathbb{Z}}_{2} \). (Verifications) By definition, (1) and (2) are trivial. (3) (Subadditivity) Set \( \gamma \left( A\right) = n \) and \( \gamma \left( B\right) = m \); we may assume that both \( n, m < \infty \). This means that there exist odd continuous maps \( \varphi : A \rightarrow {R}^{n} \smallsetminus \{ \theta \} \) and \( \psi : B \rightarrow {R}^{m} \smallsetminus \{ \theta \} \). By Tietze’s theorem \( \exists \widetilde{\varphi } \in C\left( {M,{R}^{n}}\right) ,\exists \widetilde{\psi } \in C\left( {M,{R}^{m}}\right) \) such that \( {\left. \widetilde{\varphi }\right| }_{A} = \varphi ,{\left. \widetilde{\psi }\right| }_{B} = \psi \). Without loss of generality, we may assume that \( \widetilde{\varphi } \) and \( \widetilde{\psi } \) are odd. Define \( f\left( x\right) = \left( {\widetilde{\varphi }\left( x\right) ,\widetilde{\psi }\left( x\right) }\right) \); then \( f \in C\left( {M,{R}^{n + m}}\right) \) is odd and \( f\left( {A \cup B}\right) \subset {R}^{n + m} \smallsetminus \{ \theta \} \). This implies that \( \gamma \left( {A \cup B}\right) \leq \gamma \left( A\right) + \gamma \left( B\right) \).
(4) (Deformation nondecreasing) Suppose \( \eta : \left\lbrack {0,1}\right\rbrack \times M \rightarrow M \) is continuous and odd, satisfies \( \eta \left( {0, \cdot }\right) = \mathrm{id} \), and \( \gamma \left( \overline{\eta \left( {1, A}\right) }\right) = n \), i.e., there is a continuous odd map \( \varphi : \overline{\eta \left( {1, A}\right) } \rightarrow {R}^{n} \smallsetminus \{ \theta \} \). Let \( \psi = \varphi \circ \eta \left( {1, \cdot }\right) \); then \( \psi : A \rightarrow {R}^{n} \smallsetminus \{ \theta \} \) is continuous and odd. Therefore \( \gamma \left( A\right) \leq n \). (5) (Continuity) Let \( \gamma \left( A\right) = n \). From (2), with no loss of generality, we may assume \( n < + \infty \). There exists an odd continuous \( \varphi : A \rightarrow {R}^{n} \smallsetminus \{ \theta \} \). By Tietze’s theorem, there exists an odd continuous mapping \( \widetilde{\varphi } : M \rightarrow {R}^{n} \) with \( {\left. \widetilde{\varphi }\right| }_{A} = \varphi \). Since \( A \) is compact and \( \theta \notin \widetilde{\varphi }\left( A\right) \), \( \exists \delta > 0 \) such that \( \theta \notin \widetilde{\varphi }\left( N\right) \), where \( N = \overline{{A}_{\delta }} \) is the closure of the \( \delta \)-neighborhood of \( A \), i.e., \[ \widetilde{\varphi } : N \rightarrow {R}^{n} \smallsetminus \{ \theta \} . \] Combining with the monotonicity, we obtain \[ n = \gamma \left( A\right) \leq \gamma \left( N\right) \leq n. \] (6) (Normality) Note that \( {\operatorname{Fix}}_{G} = \{ \theta \} \); if \( p \neq \theta \), then \( \left\lbrack p\right\rbrack = \{ p, - p\} \). We define \( \phi \left( {\pm p}\right) = \pm 1 \); it follows that \( \gamma \left( \left\lbrack p\right\rbrack \right) = 1 \). Moreover, applying the Borsuk-Ulam theorem in Sect. 3.2, we have computed \( \gamma \left( {S}^{n - 1}\right) = n \). Example 3.
\( \left( {{S}^{1}\text{-index}}\right) \) Let \( M \) be a Banach space and \( {S}^{1} = \left\{ {{e}^{i\theta } \mid \theta \in \left\lbrack {0,{2\pi }}\right\rbrack }\right\} \) the compact Lie group of unit complex numbers. Let \( G \) be an isometric representation of \( {S}^{1} \) on \( M \), i.e., \( G = \left\{ {T\left( {e}^{i\theta }\right) \mid \theta \in \left\lbrack {0,{2\pi }}\right\rbrack }\right\} \), where the \( T\left( {e}^{i\theta }\right) \in L\left( {M, M}\right) \) are isometric operators satisfying \( T\left( {e}^{i\theta }\right) \cdot T\left( {e}^{i\varphi }\right) = T\left( {e}^{i\left( {\theta + \varphi }\right) }\right) \;\forall \theta ,\varphi \in \left\lbrack {0,{2\pi }}\right\rbrack \). Set \( \sum = \) the set of all \( G \)-invariant closed subsets of \( M \), and \[ \mathcal{H} = \left\{ {h \in C\left( {M, M}\right) \mid T\left( {e}^{i\theta }\right) \circ h = h \circ T\left( {e}^{i\theta }\right) \;\forall \theta \in \left\lbrack {0,{2\pi }}\right\rbrack }\right\} . \] \( \forall A \in \sum \), define \[ \gamma \left( A\right) = \left\{ \begin{array}{ll} 0 & \text{ if }A = \varnothing , \\ \inf \left\{ {k \in \mathbb{N} \mid \exists \phi \in C\left( {A,{\mathbb{C}}^{k}\smallsetminus \{ \theta \} }\right) ,\exists n \in \mathbb{N}\text{ satisfying }}\right. & \\ \;\left. {\phi \left( {T\left( {e}^{i\theta }\right) x}\right) = {e}^{in\theta }\phi \left( x\right) \;\forall \left( {x,{e}^{i\theta }}\right) \in A \times {S}^{1}}\right\} & \text{ if such a map exists}, \\ + \infty & \text{ if no such map exists}, \end{array}\right. \] where \( {\mathbb{C}}^{k} \) is the \( k \)-dimensional unitary space (complex \( k \)-space). We shall verify that \( \left( {\sum ,\mathcal{H},\gamma }\right) \) is an index with respect to \( {S}^{1} \). By definition, (1) and (2) are trivial.
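A quick numeric sanity check of the equivariance condition appearing in the definition of \( \gamma \), for the illustrative choice (ours, not from the text) \( M = \mathbb{C} \), \( T\left( {e}^{i\theta }\right) z = {e}^{i\theta }z \), and \( \phi \left( z\right) = {z}^{n} \):

```python
import cmath

# S^1 acts on M = C by T(e^{i*theta}) z = e^{i*theta} z (an isometric action).
# The map phi(z) = z**n satisfies the weight-n equivariance condition
#     phi(T(e^{i*theta}) x) = e^{i*n*theta} * phi(x)
# required of the maps in the definition of gamma(A).
n = 3
phi = lambda z: z ** n

for theta in (0.3, 1.1, 2.7):
    rot = cmath.exp(1j * theta)          # the group element e^{i*theta}
    for z in (1 + 2j, -0.5 + 0.25j):
        lhs = phi(rot * z)               # phi applied after the action
        rhs = cmath.exp(1j * n * theta) * phi(z)  # weight-n action after phi
        assert abs(lhs - rhs) < 1e-9     # equivariance holds up to rounding
```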
Before going on to verify the other fundamental properties, we need: Lemma 5.2.16 For any \( A \in \sum \), if \( \phi \in C\left( {A,{\mathbb{C}}^{k}\smallsetminus \{ \theta \} }\right) \) satisfies \[ \phi \left( {T\left( {e}^{i\theta }\right) x}\right) = {e}^{in\theta }\phi \left( x\right) \;\forall \left( {x,{e}^{i\theta }}\right) \in A \times {S}^{1}\text{ for some }n \in \mathbb{N}, \] then \( \exists \widetilde{\phi } : M \rightarrow {\mathbb{C}}^{k} \) satisfying \( {\left. \widetilde{\phi }\right| }_{A} = \phi \) and \[ \widetilde{\phi }\left( {T\left( {e}^{i\theta }\right) x}\right) = {e}^{in\theta }\widetilde{\phi }\left( x\right) \;\forall \left( {x,{e}^{i\theta }}\right) \in M \times {S}^{1}. \]
1359_[陈省身] Lectures on Differential Geometry
Definition 3.3
Definition 3.3. Suppose \( {\sum }_{0} \) is an open covering of \( M \) . If every compact subset of \( M \) intersects only finitely many elements of \( {\sum }_{0} \), then \( {\sum }_{0} \) is called a locally finite open covering of \( M \) . Theorem 3.1. Suppose \( \sum \) is a topological basis of the manifold \( M \) . Then there is a subset \( {\sum }_{0} \) of \( \sum \) such that \( {\sum }_{0} \) is a locally finite open covering of \( M \) . Proof. By the definition of manifolds, \( M \) is locally compact. Since we have assumed that \( M \) satisfies the second countability axiom, there exists a countable open covering \( \left\{ {U}_{i}\right\} \) of \( M \) such that the closure \( {\bar{U}}_{i} \) of every \( {U}_{i} \) is compact. Let \[ {P}_{i} = \mathop{\bigcup }\limits_{{1 \leq r \leq i}}{\bar{U}}_{r} \] (3.4) Then \( {P}_{i} \) is compact, \( {P}_{i} \subset {P}_{i + 1} \), and \( \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{P}_{i} = M \) . Now we construct another sequence of compact sets \( {Q}_{i} \) satisfying \[ {P}_{i} \subset {Q}_{i} \subset {\overset{ \circ }{Q}}_{i + 1} \] (3.5) where \( {\overset{ \circ }{Q}}_{i + 1} \) means the interior of \( {Q}_{i + 1} \) . By induction, assume that \( {Q}_{1},\ldots ,{Q}_{i} \) have been constructed. Since \( {Q}_{i} \cup \) \( {P}_{i + 1} \) is compact, there exist finitely many elements \( {U}_{\alpha },1 \leq \alpha \leq s \) of \( \left\{ {U}_{j}\right\} \) which together form a covering of \( {Q}_{i} \cup {P}_{i + 1} \) . Let \[ {Q}_{i + 1} = \mathop{\bigcup }\limits_{{1 \leq \alpha \leq s}}{\bar{U}}_{\alpha } \] (3.6) ![89cd1142-afa9-47ad-a74a-27b70d90fa5e_98_0.jpg](images/89cd1142-afa9-47ad-a74a-27b70d90fa5e_98_0.jpg) Figure 9. Then \( {Q}_{i + 1} \) satisfies condition (3.5). First, \( {Q}_{i + 1} \) is compact, and \( {P}_{i + 1} \subset {Q}_{i + 1} \) . 
Furthermore \[ {Q}_{i} \subset \mathop{\bigcup }\limits_{{1 \leq \alpha \leq s}}{U}_{\alpha } \subset {\overset{ \circ }{Q}}_{i + 1} \] (3.7) Obviously, \( \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{Q}_{i} = M \) . Now let \[ {L}_{i} = {Q}_{i} - {\overset{ \circ }{Q}}_{i - 1},\;{K}_{i} = {\overset{ \circ }{Q}}_{i + 1} - {Q}_{i - 2}, \] (3.8) where \( 1 \leq i < + \infty \), and \( {Q}_{-1} = {Q}_{0} = \varnothing \) . (see Figure 9). Then \( {L}_{i} \) is compact, \( {K}_{i} \) is open, and \( {L}_{i} \subset {K}_{i} \) . By assumption, \( \sum \) is a topological basis of \( M \) . Thus \( {K}_{i} \) can be expressed as a union of elements of \( \sum \) . Since \( {L}_{i} \) is compact, and \( {L}_{i} \subset {K}_{i} \), there exist for each \( i \) finitely many elements \( {V}_{\alpha }^{\left( i\right) },1 \leq \alpha \leq {r}_{i} \), in \( \sum \) such that \[ {L}_{i} \subset \mathop{\bigcup }\limits_{{1 \leq \alpha \leq {r}_{i}}}{V}_{\alpha }^{\left( i\right) } \subset {K}_{i} \] (3.9) Because \( \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{L}_{i} = M \) , \[ {\sum }_{0} = \left\{ {{V}_{\alpha }^{\left( i\right) },1 \leq \alpha \leq {r}_{i},1 \leq i < + \infty }\right\} \] (3.10) is a subcovering of \( \sum \) . We now show that \( {\sum }_{0} \) is locally finite. Suppose \( A \) is an arbitrary compact set. By (3.4) we know that there exists a sufficiently large integer \( i \) such that \( A \subset {P}_{i} \subset {Q}_{i} \) . For \( k \geq i + 2 \) , \[ {K}_{k} = {\overset{ \circ }{Q}}_{k + 1} - {Q}_{k - 2} \subset {\overset{ \circ }{Q}}_{k + 1} - {Q}_{i}, \] (3.11) hence \( {K}_{k} \cap {Q}_{i} = \varnothing \) . Therefore \[ {V}_{\alpha }^{\left( k\right) } \cap A \subset {K}_{k} \cap {Q}_{i} = \varnothing ,\;1 \leq \alpha \leq {r}_{k},\;k \geq i + 2, \] (3.12) that is, only finitely many elements of \( {\sum }_{0} \) intersect \( A \) . Theorem 3.2 (Partition of Unity Theorem). Suppose \( \sum \) is an open covering of a smooth manifold \( M \) . 
Then there exists a family of smooth functions \( \left\{ {g}_{\alpha }\right\} \) on \( M \) satisfying the following conditions: 1) \( 0 \leq {g}_{\alpha } \leq 1 \), and supp \( {g}_{\alpha } \) is compact for each \( \alpha \) . Moreover, there exists an open set \( {W}_{i} \in \sum \) such that \( \operatorname{supp}{g}_{\alpha } \subset {W}_{i} \) ; 2) For each point \( p \in M \), there is a neighborhood \( U \) that intersects supp \( {g}_{\alpha } \) for only finitely many \( \alpha \) ; 3) \( \mathop{\sum }\limits_{\alpha }{g}_{\alpha } = 1 \) . Because of condition 2), for any point \( p \in M \), there are only finitely many nonzero terms on the left hand side of condition 3). Thus the summation is meaningful. The family \( \left\{ {g}_{\alpha }\right\} \) is called a partition of unity subordinate to the open covering \( \sum \) . Proof. Because \( M \) is a manifold, there is a topological basis \( {\sum }_{0} = \left\{ {U}_{\alpha }\right\} \) such that each element \( {U}_{\alpha } \) is a coordinate neighborhood, \( {\bar{U}}_{\alpha } \) is compact, and there also exists \( {W}_{i} \in \sum \) such that \( {\bar{U}}_{\alpha } \subset {W}_{i} \) . By Theorem 3.1, \( {\sum }_{0} \) has a locally finite subcovering, so we may assume that \( {\sum }_{0} \) itself is a locally finite open covering of \( M \) and has countably many elements. It is not difficult to show by induction that we can obtain \( {V}_{\alpha } \) by a contraction of \( {U}_{\alpha }{}^{\mathrm{d}} \) such that \( {\bar{V}}_{\alpha } \subset {U}_{\alpha } \) and \( \left\{ {V}_{\alpha }\right\} \) is also an open covering for \( M \) . By Lemma 3 of \( §1 - 3 \), there exist smooth functions \( {h}_{\alpha } \), with \( 0 \leq {h}_{\alpha } \leq 1 \) on \( M \) such that \[ {h}_{\alpha }\left( p\right) = \left\{ \begin{array}{ll} 1, & p \in {V}_{\alpha } \\ 0, & p \notin {U}_{\alpha } \end{array}\right. \] (3.13) Clearly supp \( {h}_{\alpha } \subset {\bar{U}}_{\alpha } \) . 
For any point \( p \in M \), there exists a neighborhood \( U \) such that \( \bar{U} \) is compact. By the local finiteness of \( {\sum }_{0} \), \( \bar{U} \) intersects only finitely many elements of \( {\sum }_{0} \), and there are only finitely many nonzero terms in the sum \( \mathop{\sum }\limits_{\alpha }{h}_{\alpha }\left( p\right) \). Thus \( h = \mathop{\sum }\limits_{\alpha }{h}_{\alpha } \) is a smooth function on \( M \). Since \( \left\{ {V}_{\alpha }\right\} \) forms a covering for \( M \), the point \( p \) must lie in some \( {V}_{\alpha } \), i.e., \( h\left( p\right) \geq 1 \). Let \[ {g}_{\alpha } = \frac{{h}_{\alpha }}{h} \] (3.14) Then \( {g}_{\alpha } \) is a smooth function on \( M \). It is easy to verify that the family \( \left\{ {g}_{\alpha }\right\} \) satisfies all the conditions of the theorem. With the above background, we can proceed to define the integration of exterior differential forms on a manifold \( M \). Suppose \( M \) is an \( m \)-dimensional smooth manifold, and \( \varphi \) is an exterior differential \( m \)-form on \( M \) with a compact support. Choose any coordinate covering \( \sum = \left\{ {W}_{i}\right\} \) which is consistent with the orientation of \( M \), and suppose that \( \left\{ {g}_{\alpha }\right\} \) is a partition of unity subordinate to \( \sum \). \( {}^{\mathrm{d}} \) The way to contract is as follows: Let \( W = \mathop{\bigcup }\limits_{{i \neq \alpha }}{U}_{i} \). Then \( M - W \) is a closed set contained in \( {U}_{\alpha } \). Since \( {\bar{U}}_{\alpha } \) is compact, \( M - W \) is also compact. Thus there are finitely many coordinate neighborhoods \( {W}_{s},1 \leq s \leq r \), such that \( \overline{{W}_{s}} \subset {U}_{\alpha } \) and \( M - W \subset \mathop{\bigcup }\limits_{{s = 1}}^{r}{W}_{s} \). Now let \( {V}_{\alpha } = \mathop{\bigcup }\limits_{{s = 1}}^{r}{W}_{s} \).
Then \[ \varphi = \left( {\mathop{\sum }\limits_{\alpha }{g}_{\alpha }}\right) \cdot \varphi = \mathop{\sum }\limits_{\alpha }\left( {{g}_{\alpha } \cdot \varphi }\right) \] (3.15) Clearly, \( \operatorname{supp}\left( {{g}_{\alpha } \cdot \varphi }\right) \subset \operatorname{supp}{g}_{\alpha } \) is contained in some coordinate neighborhood \( {W}_{i} \in \sum \). Therefore we can define \[ {\int }_{M}{g}_{\alpha } \cdot \varphi = {\int }_{{W}_{i}}{g}_{\alpha } \cdot \varphi \] (3.16) where the right hand side is the usual Riemann integral, that is, if \( {g}_{\alpha } \cdot \varphi \) with respect to the coordinate system \( {u}^{1},\ldots ,{u}^{m} \) in \( {W}_{i} \) is expressed as \[ f\left( {{u}^{1},\ldots ,{u}^{m}}\right) d{u}^{1} \land \cdots \land d{u}^{m}, \] then the integral on the right hand side in (3.16) is \[ {\int }_{{W}_{i}}f\left( {{u}^{1},\ldots ,{u}^{m}}\right) d{u}^{1}\cdots d{u}^{m}. \] (3.17) To show that (3.16) is well-defined, we need only show that the right hand side is independent of the choice of \( {W}_{i} \). Suppose \( \operatorname{supp}\left( {{g}_{\alpha } \cdot \varphi }\right) \) is contained in two coordinate neighborhoods \( {W}_{i} \) and \( {W}_{j} \), and suppose the local coordinates consistent with the orientation of \( M \) are \( {u}^{k} \) and \( {v}^{k} \), respectively. Then the Jacobian of the change of coordinates satisfies \[ J = \frac{\partial \left( {{v}^{1},\ldots ,{v}^{m}}\right) }{\partial \left( {{u}^{1},\ldots ,{u}^{m}}\right) } > 0. \] (3.18) Suppose \( {g}_{\alpha } \cdot \varphi \) is expressed in \( {W}_{i} \) and \( {W}_{j} \) by \[ {g}_{\alpha } \cdot \varphi = {fd}{u}^{1} \land \cdots \land d{u}^{m} = {f}^{\prime }d{v}^{1} \land \cdots \land d{v}^{m}, \] (3.19) respectively.
Then \[ f = {f}^{\prime } \cdot J = {f}^{\prime } \cdot \left| J\right| \] (3.20) and \( \operatorname{supp}f = \operatorname{supp}{f}^{\prime } = \operatorname{supp}\left( {{g}_{\alpha } \cdot \varphi }\right) \subset {W}_{i} \cap {W}_{j} \). By the formula for the change of variables in the Riemann integral, we have \[ {\int }_{{W}_{i} \cap {W}_{j}}{f}^{\prime }d{v}^{1}\cdots d{v}^{m} = {\int }_{{W}_{i} \cap {W}_{j}}{f}^{\prime } \cdot \left| J\right| d{u}^{1}\cdots d{u}^{m} = {\int }_{{W}_{i} \cap {W}_{j}}{fd}{u}^{1}\cdots d{u}^{m}, \] so the definition (3.16) is independent of the choice of coordinate neighborhood.
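The normalization step \( {g}_{\alpha } = {h}_{\alpha }/h \) in the proof of Theorem 3.2 can be illustrated numerically in one dimension. The bump profile and the two-element cover below are our own illustrative choices (smooth exponential bumps rather than the cutoff functions of (3.13)):

```python
import math

def bump(x, c, r):
    """Smooth bump supported in (c - r, c + r); plays the role of h_alpha."""
    t = (x - c) / r
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

# Two-element open cover of [0, 2]: supports (-1.0, 1.2) and (0.8, 3.0).
supports = [(0.1, 1.1), (1.9, 1.1)]  # (center, radius) pairs, our choice

def g(alpha, x):
    h = [bump(x, c, r) for c, r in supports]
    return h[alpha] / sum(h)  # normalization g_alpha = h_alpha / h, as in (3.14)

# Condition 3) of Theorem 3.2: the g_alpha sum to 1 on the covered set.
for x in (0.0, 0.5, 1.0, 1.5, 2.0):
    assert abs(g(0, x) + g(1, x) - 1.0) < 1e-12
```

The choice of two overlapping bumps whose supports together cover \( \left\lbrack {0,2}\right\rbrack \) guarantees \( h > 0 \) there, so the quotient is smooth and well defined, mirroring the role of \( h\left( p\right) \geq 1 \) in the proof.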
109_The rising sea Foundations of Algebraic Geometry
Definition 5.124
Definition 5.124. Let \( \left( {\mathcal{C},\delta }\right) \) be a spherical building of type \( \left( {W, S}\right) \), and let \( {w}_{0} \) continue to denote the longest element of \( W \). Define a new function \( {\delta }_{ - } : \mathcal{C} \times \mathcal{C} \rightarrow W \) by \( {\delta }_{ - }\left( {C, D}\right) \mathrel{\text{:=}} {w}_{0}\delta \left( {C, D}\right) {w}_{0} \) for \( C, D \in \mathcal{C} \). Then \( \left( {\mathcal{C},{\delta }_{ - }}\right) \) is called the dual of \( \left( {\mathcal{C},\delta }\right) \). We write \( {\mathcal{C}}_{ - } \) instead of \( \mathcal{C} \) when we want to emphasize that we are thinking of \( \mathcal{C} \) as the set of chambers of the dual building. It is very easy, as we will see below, to verify that \( \left( {{\mathcal{C}}_{ - },{\delta }_{ - }}\right) \) is indeed a building of type \( \left( {W, S}\right) \). It is equal to the original building \( \left( {\mathcal{C},\delta }\right) \) if and only if \( {w}_{0} \) is central in \( W \), i.e., if and only if the function \( {\sigma }_{0} \) introduced in Definition 5.104 is the identity. We use the notation \( {\delta }_{ - } \) instead of \( {\delta }^{ * } \) for the dual Weyl distance function in order to be consistent with standard conventions in the theory of twin buildings (Section 5.8 below). Lemma 5.125. The dual \( \left( {\mathcal{C},{\delta }_{ - }}\right) \) of \( \left( {\mathcal{C},\delta }\right) \) is also a building of type \( \left( {W, S}\right) \), and it is \( {\sigma }_{0} \)-isometric to \( \left( {\mathcal{C},\delta }\right) \). The associated simplicial buildings \( \Delta \left( \mathcal{C}\right) \) and \( \Delta \left( {\mathcal{C}}_{ - }\right) \) are identical.
If one considers \( \Delta \left( \mathcal{C}\right) \) and \( \Delta \left( {\mathcal{C}}_{ - }\right) \) as buildings of type \( \left( {W, S}\right) \) with their natural colorings, then the identity map between \( \Delta \left( \mathcal{C}\right) \) and \( \Delta \left( {\mathcal{C}}_{ - }\right) \) has \( {\sigma }_{0} \) (or more precisely \( {\left. {\sigma }_{0}\right| }_{S} \) ) as the associated type-change map. Proof. Observe that \( {\delta }_{ - } = {\delta }^{{\sigma }_{0}} \) with the notation introduced in Exercise 5.64. So it follows from this exercise (which was a straightforward verification) that \( \left( {\mathcal{C},{\delta }_{ - }}\right) \) is a building of type \( \left( {W, S}\right) \). By the very definition of the dual, the identity function \( {\operatorname{id}}_{\mathcal{C}} : \mathcal{C} \rightarrow \mathcal{C} \) defines a \( {\sigma }_{0} \)-isometry from \( \mathcal{C} \) onto \( {\mathcal{C}}_{ - } \). Since \( \mathcal{C} \) and \( {\mathcal{C}}_{ - } \) have the same residues, Definition 5.87 implies \( \Delta \left( \mathcal{C}\right) = \Delta \left( {\mathcal{C}}_{ - }\right) \). If a residue \( \mathcal{R} \) has type \( J \) in \( \mathcal{C} \), then it has type \( {w}_{0}J{w}_{0} = {\sigma }_{0}\left( J\right) \) in \( {\mathcal{C}}_{ - } \). Hence \( {\sigma }_{0} \) is the type-change map of the identity isomorphism from \( \Delta \left( \mathcal{C}\right) \) onto \( \Delta \left( {\mathcal{C}}_{ - }\right) \). From the simplicial point of view, then, the distinction between a spherical building and its dual appears only when one considers colored buildings. Lemma 5.126. If \( \left( {{\mathcal{C}}^{\prime },{\delta }^{\prime }}\right) \) is another building of type \( \left( {W, S}\right) \), the following are equivalent: (i) \( {\mathcal{C}}^{\prime } \) and \( {\mathcal{C}}_{ - } \) are isometric.
(ii) There is a type-preserving simplicial isomorphism \( \Delta \left( {\mathcal{C}}^{\prime }\right) \rightarrow \Delta \left( {\mathcal{C}}_{ - }\right) \). (iii) There is a simplicial isomorphism \( \Delta \left( {\mathcal{C}}^{\prime }\right) \rightarrow \Delta \left( \mathcal{C}\right) \) with \( {\sigma }_{0} \) as the associated type-change map. Proof. By Remark 5.90, an isometry between \( {\mathcal{C}}^{\prime } \) and \( {\mathcal{C}}_{ - } \) induces a type-preserving simplicial isomorphism between \( \Delta \left( {\mathcal{C}}^{\prime }\right) \) and \( \Delta \left( {\mathcal{C}}_{ - }\right) \). Conversely, a type-preserving isomorphism between \( \Delta \left( {\mathcal{C}}^{\prime }\right) \) and \( \Delta \left( {\mathcal{C}}_{ - }\right) \) induces an isometry between \( {\mathcal{C}}^{\prime } \) and \( {\mathcal{C}}_{ - } \) by Proposition 5.95. This proves the equivalence of (i) and (ii). By Lemma 5.125, there is an isomorphism between the simplicial buildings \( \Delta \left( \mathcal{C}\right) \) and \( \Delta \left( {\mathcal{C}}_{ - }\right) \) of type \( \left( {W, S}\right) \) with associated type-change map \( {\sigma }_{0} \). This immediately implies the equivalence of (ii) and (iii). Specializing to \( {\mathcal{C}}^{\prime } = \mathcal{C} \), we obtain the following: Corollary 5.127. The spherical building \( \left( {\mathcal{C},\delta }\right) \) is isometric to its dual \( \left( {\mathcal{C},{\delta }_{ - }}\right) \) if and only if the simplicial building \( \Delta \left( \mathcal{C}\right) \) of type \( \left( {W, S}\right) \) admits an automorphism with associated type-change map \( {\sigma }_{0} \). Remark 5.128. Lemmas 5.125 and 5.126 show how the dual building should be defined in the category of simplicial spherical buildings of type \( \left( {W, S}\right) \). Namely, let \( \Delta \) be a building of type \( \left( {W, S}\right) \) with type function \( \tau \) having values in \( S \).
Then the dual of \( \Delta \) is the same simplicial complex \( \Delta \), endowed with the type function \( {\tau }_{ - } \mathrel{\text{:=}} {\sigma }_{0} \circ \tau \). Example 5.129. The motivation for the notion of "dual building" comes from the case that \( \Delta \) is the building \( \Delta \left( V\right) \) that we introduced in Section 4.3. The dual building in that case was essentially computed in Exercise 4.31. We will repeat some of the details. Let us first recall the setup. We are given a division ring \( k \) and a left vector space \( V \) over \( k \) of finite dimension \( n \). We assume \( n \geq 3 \) to avoid trivial cases. Then \( \Delta \left( V\right) \) is the flag complex of the poset of proper nontrivial subspaces of \( V \). We have seen in Section 4.3 that \( \Delta \left( V\right) \) is a building of type \( {\mathrm{A}}_{n - 1} \). It has a canonical coloring having values in \( \{ 1,\ldots, n - 1\} \), where the type of a vertex \( U \) of \( \Delta \left( V\right) \) is its dimension as a subspace of \( V \). As we saw in Section 5.7.4, \( {\sigma }_{0} \) is the unique nontrivial automorphism of the Coxeter diagram of type \( {\mathrm{A}}_{n - 1} \), i.e., \( {\sigma }_{0}\left( i\right) = n - i \) for all \( i \in \{ 1,\ldots, n - 1\} \). So the dual building \( \Delta {\left( V\right) }_{ - } \), according to Remark 5.128, is the same building \( \Delta \left( V\right) \), where now a vertex \( U \) is declared to have type equal to its codimension in \( V \) instead of its dimension. Equivalently, we can identify \( \Delta {\left( V\right) }_{ - } \) with \( \Delta \left( {V}^{ * }\right) \), where \( {V}^{ * } \) is the vector space dual to \( V \), and the (type-preserving) isomorphism \( \Delta {\left( V\right) }_{ - } \rightarrow \Delta \left( {V}^{ * }\right) \) sends \( U \) to its annihilator in \( {V}^{ * } \). There is one issue that deserves some attention in case the division ring \( k \) is not commutative.
(This was already pointed out in the solution to Exercise 4.32.) In that case, the dual \( {V}^{ * } \) of \( V \) has to be considered either as a right \( k \) -vector space or as a left vector space over the opposite skew field \( {k}^{\mathrm{{op}}} \) . Therefore, even though we "only" changed the coloring, \( \Delta {\left( V\right) }_{ - } \) should really be viewed as different from \( \Delta \left( V\right) \), since it has a different coordinatizing division ring \( {k}^{\text{op }} \) instead of \( k \) . This brings us to the question of when \( \Delta {\left( V\right) }_{ - } \) is isomorphic, as a colored simplicial building, to \( \Delta \left( V\right) \) . Equivalently, we ask when there is a type-preserving isomorphism between \( \Delta \left( {V}^{ * }\right) \) and \( \Delta \left( V\right) \) . From the point of view of projective geometry, this is the question of when the projective space \( P\left( V\right) \) associated with \( V \) admits a correlation. The answer is classical and well known: \( P\left( V\right) \) admits a correlation if and only if the division rings \( k \) and \( {k}^{\text{op }} \) are isomorphic (or, in other words, if and only if \( k \) admits an antiautomorphism). In view of Corollary 5.127, we can restate this result as follows: The building \( \Delta \left( V\right) \) admits an automorphism that is not type-preserving (and so has \( {\sigma }_{0} \) as its type-change map) if and only if the underlying field \( k \) admits an antiautomorphism. ## Exercises In the exercises below, \( \left( {W, S}\right) \) always denotes a Coxeter system with finite \( W \) , and \( \left( {\mathcal{C},\delta }\right) \) denotes a building of type \( \left( {W, S}\right) \) . 5.130. Let \( {\mathcal{C}}^{\prime } \) be a subbuilding of \( \mathcal{C} \) (e.g., an apartment), and let \( \mathcal{R} \) and \( \mathcal{S} \) be two residues of \( \mathcal{C} \) that meet \( {\mathcal{C}}^{\prime } \) . 
Show that \( \mathcal{R} \) and \( \mathcal{S} \) are opposite in \( \mathcal{C} \) if and only if \( \mathcal{R} \cap {\mathcal{C}}^{\prime } \) and \( \mathcal{S} \cap {\mathcal{C}}^{\prime } \) are opposite in \( {\mathcal{C}}^{\prime } \) . 5.131. Let \( \mathcal{R} \) be a residue and \( D \) a chamber of \( \mathcal{C} \) that is opposite (at least) one chamber of \( \mathcal{R} \) . Show that the set
1167_(GTM73)Algebra
Definition 8.2
Definition 8.2. Let \( \mathrm{G} = {\mathrm{G}}_{0} > {\mathrm{G}}_{1} > \cdots > {\mathrm{G}}_{\mathrm{n}} \) be a subnormal series. A one-step refinement of this series is any series of the form \( \mathrm{G} = {\mathrm{G}}_{0} > \cdots > {\mathrm{G}}_{\mathrm{i}} > \mathrm{N} > {\mathrm{G}}_{\mathrm{i} + 1} > \cdots \) \( > {\mathrm{G}}_{\mathrm{n}} \) or \( \mathrm{G} = {\mathrm{G}}_{0} > \cdots > {\mathrm{G}}_{\mathrm{n}} > \mathrm{N} \), where \( \mathrm{N} \) is a normal subgroup of \( {\mathrm{G}}_{\mathrm{i}} \) and \( \left( {\text{if}\mathrm{i} < \mathrm{n}}\right) \) \( {\mathrm{G}}_{\mathrm{i} + 1} \) is normal in \( \mathrm{N} \). A refinement of a subnormal series \( \mathrm{S} \) is any subnormal series obtained from \( \mathrm{S} \) by a finite sequence of one-step refinements. A refinement of \( \mathrm{S} \) is said to be proper if its length is larger than the length of \( \mathrm{S} \). Definition 8.3. A subnormal series \( \mathrm{G} = {\mathrm{G}}_{0} > {\mathrm{G}}_{1} > \cdots > {\mathrm{G}}_{\mathrm{n}} = \langle \mathrm{e}\rangle \) is a composition series if each factor \( {\mathrm{G}}_{i}/{\mathrm{G}}_{i + 1} \) is simple. A subnormal series \( \mathrm{G} = {\mathrm{G}}_{0} > {\mathrm{G}}_{1} > \cdots > \) \( {\mathrm{G}}_{\mathrm{n}} = \langle \mathrm{e}\rangle \) is a solvable series if each factor is abelian. The following fact is used frequently when dealing with composition series: if \( N \) is a normal subgroup of a group \( G \), then every normal subgroup of \( G/N \) is of the form \( H/N \) where \( H \) is a normal subgroup of \( G \) which contains \( N \) (Corollary I.5.12). Therefore, when \( G \neq N, G/N \) is simple if and only if \( N \) is maximal in the set of all normal subgroups \( M \) of \( G \) with \( M \neq G \) (such a subgroup \( N \) is called a maximal normal subgroup of \( G \) ). Theorem 8.4. (i) Every finite group \( \mathbf{G} \) has a composition series.
(ii) Every refinement of a solvable series is a solvable series. (iii) A subnormal series is a composition series if and only if it has no proper refinements. PROOF. (i) Let \( {G}_{1} \) be a maximal normal subgroup of \( G \) ; then \( G/{G}_{1} \) is simple by Corollary I.5.12. Let \( {G}_{2} \) be a maximal normal subgroup of \( {G}_{1} \), and so on. Since \( G \) is finite, this process must end with \( {G}_{n} = \langle e\rangle \). Thus \( G > {G}_{1} > \cdots > {G}_{n} = \langle e\rangle \) is a composition series. (ii) If \( {G}_{i}/{G}_{i + 1} \) is abelian and \( {G}_{i + 1} \vartriangleleft H \vartriangleleft {G}_{i} \), then \( H/{G}_{i + 1} \) is abelian since it is a subgroup of \( {G}_{i}/{G}_{i + 1} \) and \( {G}_{i}/H \) is abelian since it is isomorphic to the quotient \( \left( {{G}_{i}/{G}_{i + 1}}\right) /\left( {H/{G}_{i + 1}}\right) \) by the Third Isomorphism Theorem I.5.10. The conclusion now follows immediately. (iii) If \( {G}_{i + 1}\mathop{\vartriangleleft }\limits_{ \neq }H\mathop{\vartriangleleft }\limits_{ \neq }{G}_{i} \) are groups, then \( H/{G}_{i + 1} \) is a proper normal subgroup of \( {G}_{i}/{G}_{i + 1} \) and every proper normal subgroup of \( {G}_{i}/{G}_{i + 1} \) has this form by Corollary I.5.12. The conclusion now follows from the observation that a subnormal series \( G = {G}_{0} > {G}_{1} > \cdots > {G}_{n} = \langle e\rangle \) has a proper refinement if and only if there is a subgroup \( H \) such that for some \( i,{G}_{i + 1}\underset{ \neq }{ \vartriangleleft }H\underset{ \neq }{ \vartriangleleft }{G}_{i} \). Theorem 8.5. A group \( \mathrm{G} \) is solvable if and only if it has a solvable series. PROOF. If \( G \) is solvable, then the derived series \( G > {G}^{\left( 1\right) } > {G}^{\left( 2\right) } > \cdots > {G}^{\left( n\right) } \) \( = \langle e\rangle \) is a solvable series by Theorem 7.8.
If \( G = {G}_{0} > {G}_{1} > \cdots > {G}_{n} = \langle e\rangle \) is a solvable series for \( G \), then \( G/{G}_{1} \) abelian implies that \( {G}_{1} > {G}^{\left( 1\right) } \) by Theorem 7.8; \( {G}_{1}/{G}_{2} \) abelian implies \( {G}_{2} > {G}_{1}{}^{\prime } > {G}^{\left( 2\right) } \). Continue by induction and conclude that \( {G}_{i} > {G}^{\left( i\right) } \) for all \( i \) ; in particular \( \langle e\rangle = {G}_{n} > {G}^{\left( n\right) } \) and \( G \) is solvable. EXAMPLES. The dihedral group \( {D}_{n} \) is solvable since \( {D}_{n} > \langle a\rangle > \langle e\rangle \) is a solvable series, where \( a \) is the generator of order \( n \) (so that \( {D}_{n}/\langle a\rangle \cong {Z}_{2} \) ). Similarly if \( \left| G\right| = {pq}\left( {p > q\text{primes}}\right) \), then \( G \) contains an element \( a \) of order \( p \) and \( \langle a\rangle \) is normal in \( G \) (Corollary 4.10). Thus \( G > \langle a\rangle > \langle e\rangle \) is a solvable series and \( G \) is solvable. More generally we have Proposition 8.6. A finite group \( \mathrm{G} \) is solvable if and only if \( \mathrm{G} \) has a composition series whose factors are cyclic of prime order. PROOF. A (composition) series with cyclic factors is a solvable series. Conversely, assume \( G = {G}_{0} > {G}_{1} > \cdots > {G}_{n} = \langle e\rangle \) is a solvable series for \( G \). If \( {G}_{0} \neq {G}_{1} \), let \( {H}_{1} \) be a maximal normal subgroup of \( G = {G}_{0} \) which contains \( {G}_{1} \). If \( {H}_{1} \neq {G}_{1} \), let \( {H}_{2} \) be a maximal normal subgroup of \( {H}_{1} \) which contains \( {G}_{1} \), and so on. Since \( G \) is finite, this gives a series \( G > {H}_{1} > {H}_{2} > \cdots > {H}_{k} > {G}_{1} \) with each subgroup a maximal normal subgroup of the preceding, whence each factor is simple.
Doing this for each pair \( \left( {{G}_{i},{G}_{i + 1}}\right) \) gives a solvable refinement \( G = {N}_{0} > {N}_{1} > \cdots > {N}_{r} = \langle e\rangle \) of the original series by Theorem 8.4 (ii). Each factor of this series is abelian and simple and hence cyclic of prime order (Exercise I.4.3). Therefore, \( G > {N}_{1} > \cdots > {N}_{r} = \langle e\rangle \) is a composition series. A given group may have many subnormal or solvable series. Likewise it may have several different composition series (Exercise 1). However we shall now show that any two composition series of a group are equivalent in the following sense. Definition 8.7. Two subnormal series \( \mathrm{S} \) and \( \mathrm{T} \) of a group \( \mathrm{G} \) are equivalent if there is a one-to-one correspondence between the nontrivial factors of \( \mathbf{S} \) and the nontrivial factors of \( \mathrm{T} \) such that corresponding factors are isomorphic groups. Two subnormal series need not have the same number of terms in order to be equivalent, but they must have the same length (that is, the same number of nontrivial factors). Clearly, equivalence of subnormal series is an equivalence relation. Lemma 8.8. If \( \mathrm{S} \) is a composition series of a group \( \mathrm{G} \), then any refinement of \( \mathrm{S} \) is equivalent to \( \mathrm{S} \) . PROOF. Let \( S \) be denoted \( G = {G}_{0} > {G}_{1} > \cdots > {G}_{n} = \langle e\rangle \) . By Theorem 8.4 (iii) \( S \) has no proper refinements. This implies that the only possible refinements of \( S \) are obtained by inserting additional copies of each \( {G}_{i} \) . Consequently any refinement of \( S \) has exactly the same nontrivial factors as \( S \) and is therefore equivalent to \( S \) . The next lemma is quite technical. Its value will be immediately apparent in the proof of Theorem 8.10. Lemma 8.9. 
(Zassenhaus) Let \( {\mathrm{A}}^{ * },\mathrm{\;A},{\mathrm{\;B}}^{ * },\mathrm{\;B} \) be subgroups of a group \( \mathrm{G} \) such that \( {\mathrm{A}}^{ * } \) is normal in \( \mathrm{A} \) and \( {\mathrm{B}}^{ * } \) is normal in \( \mathrm{B} \) . (i) \( {\mathrm{A}}^{ * }\left( {\mathrm{\;A} \cap {\mathrm{B}}^{ * }}\right) \) is a normal subgroup of \( {\mathrm{A}}^{ * }\left( {\mathrm{\;A} \cap \mathrm{B}}\right) \) ; (ii) \( {\mathrm{B}}^{ * }\left( {{\mathrm{\;A}}^{ * } \cap \mathrm{B}}\right) \) is a normal subgroup of \( {\mathrm{B}}^{ * }\left( {\mathrm{\;A} \cap \mathrm{B}}\right) \) ; (iii) \( {\mathrm{A}}^{ * }\left( {\mathrm{\;A} \cap \mathrm{B}}\right) /{\mathrm{A}}^{ * }\left( {\mathrm{\;A} \cap {\mathrm{B}}^{ * }}\right) \cong {\mathrm{B}}^{ * }\left( {\mathrm{\;A} \cap \mathrm{B}}\right) /{\mathrm{B}}^{ * }\left( {{\mathrm{A}}^{ * } \cap \mathrm{B}}\right) \) . PROOF. Since \( {B}^{ * } \) is normal in \( B, A \cap {B}^{ * } = \left( {A \cap B}\right) \cap {B}^{ * } \) is a normal subgroup of \( A \cap B \) (Theorem I.5.3 (i)); similarly \( {A}^{ * } \cap B \) is normal in \( A \cap B \) . Consequently \( D = \left( {{A}^{ * } \cap B}\right) \left( {A \cap {B}^{ * }}\right) \) is a normal subgroup of \( A \cap B \) (Theorem I.5.3 iii) and Exercise I.5.13). Theorem I.5.3 (iii) also implies that \( {A}^{ * }\left( {A \cap B}\right) \) and \( {B}^{ * }\left( {A \cap B}\right) \) are subgroups of \( A \) and \( B \) respectively. We shall define an epimorphism \( f : {A}^{ * }\left( {A \cap B}\right) \rightarrow \left( {A \cap B}\right) /D \) with kernel \( {A}^{ * }\left( {A \cap {B}^{ * }}\right) \) . This will imply that \( {A}^{ * }\left( {A \cap {B}^{ * }}\right) \) is normal in \( {A}^{ * }\left( {A \cap B}\right) \) (Theorem I.5.5) and that \( {A}^{ * }\left( {A \cap B}\right) /{A}^{ * }\left( {A \cap {B}^{ * }}\right) \cong \left( {A \cap B}\right) /D \) (Corollary I.5.7). 
Define \( f : {A}^{ * }\left( {A \cap B}\right) \rightarrow \left( {A \cap B}\right) /D \) as follows. If \( a \in {A}^{ * } \) and \( c \in A \cap B \), let \( f\left( {ac}\right) = {Dc} \) . Then \( f \) is well defined since \( {ac} = {a}_{1}{c}_{1}\left( {a,{a}_{1} \in {A}^{ * };c,{c}_{1} \in A \cap B}\right) \) implies \( {c}_{1}{c}^{-1} = {a}_{1}^{-1}a \in \left( {A \cap B}\right) \cap {A}^{ * } = {A}^{ * } \c
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 7.1
Definition 7.1. A limit cycle \( \Gamma \) of system (7.1) is called strongly stable (or strongly unstable) if \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) < 0 \) (or \( > 0 \) ) on the entire cycle \( \Gamma \) . THEOREM 7.1. The total number of strongly stable and strongly unstable limit cycles of system (7.1) is less than or equal to \( \frac{1}{2}\left( {n - 2}\right) \left( {n - 3}\right) + 1 \) . If there is a critical point enclosed by all these limit cycles, then this number is less than or equal to \( \left\lbrack {\left( {n - 1}\right) /2}\right\rbrack \) . Proof. Let \( \left\{ {\Gamma }_{\alpha }\right\} \) be the set of all strongly stable and strongly unstable limit cycles. We now show that every \( {\Gamma }_{\alpha } \) corresponds to at least one closed component of the algebraic curve \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) . Thus the total number of cycles in \( \left\{ {\Gamma }_{\alpha }\right\} \) cannot be larger than the total number of closed components of \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) . From the theory of algebraic curves, the total number of closed components of the algebraic curve \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) of degree \( n - 1 \) is \( \leq \frac{1}{2}\left( {n - 2}\right) \left( {n - 3}\right) + 1 \) . This will prove the first half of the theorem. Let \( {R}_{\alpha } \) be the region enclosed by \( {\Gamma }_{\alpha } \) . Green’s formula gives \[ {\iint }_{{R}_{\alpha }}\operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) {dxdy} = {\oint }_{{\Gamma }_{\alpha }}{X}_{n}{dy} - {Y}_{n}{dx} = 0. \] The expression \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) \) has constant sign on \( {\Gamma }_{\alpha } \) ; thus there must exist a region \( {\gamma }_{\alpha } \subset {R}_{\alpha } \) where \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) \) has the sign opposite to that on \( {\Gamma }_{\alpha } \) . 
Therefore, there must be a closed (not necessarily simple closed) curve \( {L}_{\alpha } \subset {R}_{\alpha } \smallsetminus {\gamma }_{\alpha } \), such that \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) on \( {L}_{\alpha } \), as shown in Figure 4.36. Let \( {\Gamma }_{{\alpha }_{1}},{\Gamma }_{{\alpha }_{2}},\ldots ,{\Gamma }_{{\alpha }_{\beta }},\ldots \) be all the strongly stable and strongly unstable limit cycles in the region \( {R}_{\alpha } \) which do not mutually contain each other. Let \( {R}_{{\alpha }_{\beta }} \) be the region contained inside \( {\Gamma }_{{\alpha }_{\beta }} \) . Then \( {R}_{\alpha } \smallsetminus \mathop{\bigcup }\limits_{\beta }{\bar{R}}_{{\alpha }_{\beta }} \) does not contain any strongly stable or strongly unstable limit cycle. Green's formula gives \[ {\iint }_{{R}_{\alpha } \smallsetminus \mathop{\bigcup }\limits_{\beta }{\bar{R}}_{{\alpha }_{\beta }}}\operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) {dxdy} \] \[ = {\oint }_{{\Gamma }_{\alpha }}{X}_{n}{dy} - {Y}_{n}{dx} - {\oint }_{\mathop{\bigcup }\limits_{\beta }{\Gamma }_{{\alpha }_{\beta }}}\left( {{X}_{n}{dy} - {Y}_{n}{dx}}\right) \] \[ = 0\text{.} \] As in the argument above, there must exist a closed component \( {\bar{L}}_{\alpha } \) of \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) in \( {R}_{\alpha } \smallsetminus { \cup }_{\beta }{\bar{R}}_{{\alpha }_{\beta }} \) . Assigning \( {\bar{L}}_{\alpha } \) to \( {\Gamma }_{\alpha } \), we have thus established a correspondence between each strongly stable or strongly unstable limit cycle and a closed component of \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) . This completes the proof of the first half of the theorem. ![bea09977-be18-4815-a30e-4fa2fe3b219c_306_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_306_0.jpg) FIGURE 4.36 If there is one critical point enclosed by all the strongly stable and strongly unstable limit cycles \( {\Gamma }_{l} \) of (7.1), i.e. 
\[ {\Gamma }_{1} \supset {\Gamma }_{2} \supset \cdots \supset {\Gamma }_{l} \supset {\Gamma }_{l + 1} \supset \cdots , \] then the above argument shows that there exist closed components \( {L}_{l} \) of \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) with \( {\Gamma }_{l} \supset {L}_{l} \supset {\Gamma }_{l + 1} \), where \( {L}_{l} \) corresponds to \( {\Gamma }_{l} \), and \[ {L}_{1} \supset {L}_{2} \supset \cdots \supset {L}_{l} \supset \cdots . \] The algebraic curve \( \operatorname{div}\left( {{X}_{n},{Y}_{n}}\right) = 0 \) of degree \( n - 1 \) can have at most \( \left\lbrack {\left( {n - 1}\right) /2}\right\rbrack \) closed components which enclose each other. Consequently, the total number of strong (i.e., strongly stable and strongly unstable) limit cycles enclosing a common critical point must be less than or equal to \( \left\lbrack {\left( {n - 1}\right) /2}\right\rbrack \) . This proves the theorem. EXAMPLE 7.1. \[ \left\{ \begin{array}{l} \frac{dx}{dt} = y - x\mathop{\prod }\limits_{{i = 1}}^{k}\left( {{x}^{2} + {y}^{2} - {i}^{2}}\right) , \\ \frac{dy}{dt} = - x - y\mathop{\prod }\limits_{{i = 1}}^{k}\left( {{x}^{2} + {y}^{2} - {i}^{2}}\right) . \end{array}\right. \] (7.2) Setting \( x = \rho \cos \theta \) and \( y = \rho \sin \theta \), equation (7.2) becomes \( {d\rho }/{d\theta } = - \rho \mathop{\prod }\limits_{{i = 1}}^{k}\left( {{\rho }^{2} - {i}^{2}}\right) \) . The curves \( \rho = i, i = 1,2,\ldots, k \), are strong limit cycles. The polynomials on the right-hand side of (7.2) are of degree \( n = {2k} + 1 \), and (7.2) has \( \left\lbrack {\left( {n - 1}\right) /2}\right\rbrack = k \) strong limit cycles. This shows that the upper bound in the second half of Theorem 7.1 can be attained. However, the estimate for the first half may not be as sharp. (B) Construction of a Liénard equation which has exactly \( n \) limit cycles. In the following, we present a few theorems due to Huang Ke-cheng [9]. 
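Example 7.1 can also be checked numerically. The sketch below is ours (the value \( k = 3 \), so \( n = 7 \) and \( \left\lbrack \left( n-1\right)/2\right\rbrack = 3 \), is an illustrative choice): it evaluates the right-hand side of the reduced equation \( d\rho/d\theta = -\rho \prod_{i=1}^{k}(\rho^2 - i^2) \) at and near the circles \( \rho = i \), confirming that each circle is invariant and that consecutive circles are alternately attracting and repelling for the reduced one-dimensional equation.

```python
# Numerical sketch of Example 7.1 (illustrative choice k = 3, so n = 2k+1 = 7).
# The book reduces (7.2) in polar coordinates to
#     d(rho)/d(theta) = -rho * prod_{i=1}^{k} (rho^2 - i^2),
# so rho = 1, ..., k are invariant circles; the sign of the right-hand side
# on either side of rho = i determines their stability for the reduced equation.

def rho_prime(rho, k):
    """Right-hand side of the reduced equation d(rho)/d(theta) for (7.2)."""
    value = -rho
    for i in range(1, k + 1):
        value *= rho ** 2 - i ** 2
    return value

k = 3
for i in range(1, k + 1):
    assert abs(rho_prime(float(i), k)) < 1e-9          # rho = i is invariant
    # attracting: rho grows toward i from inside, shrinks toward i from outside
    attracting = rho_prime(i - 0.01, k) > 0 > rho_prime(i + 0.01, k)
    print(f"rho = {i}: {'attracting' if attracting else 'repelling'}")
```

Running this prints that \( \rho = 1 \) and \( \rho = 3 \) are attracting while \( \rho = 2 \) is repelling, exhibiting the \( k \) alternating strong limit cycles claimed in the text.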
Consider the system of differential equations \[ \frac{dx}{dt} = y - F\left( x\right) ,\;\frac{dy}{dt} = - g\left( x\right) , \] (7.3) where \( F\left( x\right), g\left( x\right) \) are continuous functions satisfying conditions for uniqueness of solutions. Further, assume \( F\left( 0\right) = 0 \), and \( {xg}\left( x\right) > 0 \) for \( x \neq 0 \) . Here, the origin \( O \) is the only critical point of (7.3). THEOREM 7.2. Suppose that (1) \( F\left( x\right) \geq F\left( a\right) \) for \( 0 \leq x \leq a \), and \( F\left( x\right) \) is monotonically nonincreasing in \( \left( {a, + \infty }\right) \) ; (2) \( F\left( x\right) \leq F\left( b\right) \) for \( b \leq x \leq 0 \), and \( F\left( x\right) \) is monotonically nonincreasing in \( \left( {-\infty, b}\right) \) ; (3) \( F\left( x\right) ≢ 0 \) for \( b \leq x \leq a \) . Then equations (7.3) can have at most one limit cycle which intersects both the straight lines \( x = a \) and \( x = b \) . This theorem can be deduced directly from the corollary of Lemma 4.1. In this case, the integral of the divergence along any closed orbit, which intersects both the lines \( x = a \) and \( x = b \), is always positive. Hence, there can be at most one such closed orbit. Similarly, the following theorem can be proved. THEOREM 7.3. Suppose that (1) \( F\left( x\right) \leq F\left( a\right) \) for \( 0 \leq x \leq a \), and \( F\left( x\right) \) is monotonically nondecreasing in \( \left( {a, + \infty }\right) \) ; (2) \( F\left( x\right) \geq F\left( b\right) \) for \( b \leq x \leq 0 \), and \( F\left( x\right) \) is monotonically nondecreasing in \( \left( {-\infty, b}\right) \) ; (3) \( F\left( x\right) ≢ 0 \) for \( b \leq x \leq a \) . Then equations (7.3) can have at most one limit cycle which intersects both the straight lines \( x = a \) and \( x = b \) . THEOREM 7.4. 
Suppose that (1) \( F\left( x\right) \geq - N \) for \( 0 \leq x \leq a \) ; (2) there exists \( b < 0 \), such that \( F\left( b\right) \leq - N - \sqrt{{2G}\left( a\right) } \), where \( G\left( x\right) = {\int }_{0}^{x}g\left( s\right) {ds} \) . Then any limit cycle of equations (7.3) which intersects the straight line \( x = b \) must also intersect the straight line \( x = a \) . Proof. Choose the points \( A\left( {a, - N}\right) \) and \( B\left( {b, F\left( b\right) }\right) \), and let the semiorbits \( \overrightarrow{f}\left( {A,{I}^{ + }}\right) \) and \( \overrightarrow{f}\left( {B,{I}^{ - }}\right) \) respectively intersect the negative \( y \) half-axis at \( {A}^{\prime } \) and \( {B}^{\prime } \) . Let \( \lambda \left( {x, y}\right) = {\left( y + N\right) }^{2}/2 + G\left( x\right) \) ; then \[ {\left. \frac{d\lambda }{dt}\right| }_{\left( {7.3}\right) } = - g\left( x\right) \left( {N + F\left( x\right) }\right) < 0\;\text{ for }0 \leq x \leq a. \] Thus we have \( {y}_{{A}^{\prime }} \geq - N - \sqrt{{2G}\left( a\right) } \) . Also, we have \( {y}_{{B}^{\prime }} < {y}_{B} = F\left( b\right) \) ; and therefore we obtain \( {y}_{{B}^{\prime }} < {y}_{{A}^{\prime }} \) . Consequently, an orbit which intersects the straight line \( x = b \) must also intersect the straight line \( x = a \) . See Figure 4.37. This proves the theorem. The following theorems clearly hold. THEOREM 7.5. Suppose that (1) \( F\left( x\right) \leq M \) for \( 0 \leq x \leq a \) ; (2) there exists \( b < 0 \), such that \( F\left( b\right) \geq M + \sqrt{{2G}\left( a\right) } \) . Then any closed orbit of equations (7.3) which intersects the straight line \( x = b \) must also intersect the straight line \( x = a \) . THEOREM 7.6. Suppose that (1) \( F\left( x\right) \geq - N \) for \( b \leq x \leq 0 \) ; (2) there exists \( a > 0 \), such that \( F\left( a\right) \leq - N - \sqrt{{2G}\left( b\right) } \) . 
Then any closed orbit of equations (7.3) which intersects the straight line \( x = a \) must also intersect the straight line \( x = b \) . THEOREM 7.7. Suppose that (1) \( F\left( x\right) \leq M \) for \( b \leq x \leq 0 \) ;
1099_(GTM255)Symmetry, Representations, and Invariants
Definition 4.1.8
Definition 4.1.8. A finite-dimensional \( \mathcal{A} \) -module \( V \) is completely reducible if for every \( \mathcal{A} \) -invariant subspace \( W \subset V \) there exists a complementary invariant subspace \( U \subset V \) such that \( V = W \oplus U \) . We proved in Chapter 3 that rational representations of classical groups and finite-dimensional representations of semisimple Lie algebras are completely reducible. For any associative algebra the property of complete reducibility is inherited by subrepresentations and quotient representations. Lemma 4.1.9. Let \( \left( {\rho, V}\right) \) be completely reducible and suppose \( W \subset V \) is an invariant subspace. Set \( {\left. \sigma \left( x\right) = \rho \left( x\right) \right| }_{W} \) and \( \pi \left( x\right) \left( {v + W}\right) = \rho \left( x\right) v + W \) for \( x \in \mathcal{A} \) and \( v \in V \) . Then the representations \( \left( {\sigma, W}\right) \) and \( \left( {\pi, V/W}\right) \) are completely reducible. Proof. The proof of Lemma 3.3.2 applies verbatim to this context. Remark 4.1.10. The converse to Lemma 4.1.9 is not true. For example, let \( \mathcal{A} \) be the algebra of matrices of the form \( \left\lbrack \begin{array}{ll} x & y \\ 0 & x \end{array}\right\rbrack \) with \( x, y \in \mathbb{C} \), acting on \( V = {\mathbb{C}}^{2} \) by left multiplication. The space \( W = \mathbb{C}{e}_{1} \) is invariant and irreducible. Since \( V/W \) is one-dimensional, it is also irreducible. But the matrices in \( \mathcal{A} \) have only one distinct eigenvalue and are not diagonal, so there is no invariant complement to \( W \) in \( V \) . Thus \( V \) is not completely reducible as an \( \mathcal{A} \) -module. Proposition 4.1.11. Let \( \left( {\rho, V}\right) \) be a finite-dimensional representation of the associative algebra \( \mathcal{A} \) . The following are equivalent: 1. \( \left( {\rho, V}\right) \) is completely reducible. 2. 
\( V = {W}_{1} \oplus \cdots \oplus {W}_{s} \) with each \( {W}_{i} \) an irreducible \( \mathcal{A} \) -module. 3. \( V = {V}_{1} + \cdots + {V}_{d} \) as a vector space, where each \( {V}_{i} \) is an irreducible \( \mathcal{A} \) -submodule. Furthermore, if \( V \) satisfies these conditions and if all the \( {V}_{i} \) in (3) are equivalent to a single irreducible \( \mathcal{A} \) -module \( W \), then every \( \mathcal{A} \) -submodule of \( V \) is isomorphic to a direct sum of copies of \( W \) . Proof. The equivalence of the three conditions follows by the proof of Proposition 3.3.3. Now assume that \( V \) satisfies these conditions and that the \( {V}_{i} \) are all mutually equivalent as \( \mathcal{A} \) -modules. Let \( M \) be an \( \mathcal{A} \) -submodule of \( V \) . Since \( V \) is completely reducible by (1), it follows from Lemma 4.1.9 that \( M \) is completely reducible. Hence by (2) we have \( M = {W}_{1} \oplus \cdots \oplus {W}_{r} \) with \( {W}_{i} \) an irreducible \( \mathcal{A} \) -module. Furthermore, there is a complementary \( \mathcal{A} \) -submodule \( N \) such that \( V = M \oplus N \) . Hence \[ V = {W}_{1} \oplus \cdots \oplus {W}_{r} \oplus N. \] Let \( {p}_{i} : V \rightarrow {W}_{i} \) be the projection corresponding to this decomposition. By (3) we have \( {W}_{i} = {p}_{i}\left( {V}_{1}\right) + \cdots + {p}_{i}\left( {V}_{d}\right) \) . Thus for each \( i \) there exists \( j \) such that \( {p}_{i}\left( {V}_{j}\right) \neq \left( 0\right) \) . Since \( {W}_{i} \) and \( {V}_{j} \) are irreducible and \( {p}_{i} \) is an \( \mathcal{A} \) -module map, Schur’s lemma implies that \( {W}_{i} \cong {V}_{j} \) as an \( \mathcal{A} \) -module. Hence \( {W}_{i} \cong W \) for all \( i \) . Corollary 4.1.12. Suppose \( \left( {\rho, V}\right) \) and \( \left( {\sigma, W}\right) \) are completely reducible representations of \( \mathcal{A} \) . 
Then \( \left( {\rho \oplus \sigma, V \oplus W}\right) \) is a completely reducible representation. Proof. This follows from the equivalence between conditions (1) and (2) in Proposition 4.1.11. ## 4.1.5 Double Commutant Theorem Let \( V \) be a vector space. For any subset \( \mathcal{S} \subset \operatorname{End}\left( V\right) \) we define \[ \operatorname{Comm}\left( \mathcal{S}\right) = \{ x \in \operatorname{End}\left( V\right) : {xs} = {sx}\;\text{ for all }s \in \mathcal{S}\} \] and call it the commutant of \( \mathcal{S} \) . We observe that \( \operatorname{Comm}\left( \mathcal{S}\right) \) is an associative algebra with unit \( {I}_{V} \) . Theorem 4.1.13 (Double Commutant). Suppose \( \mathcal{A} \subset \operatorname{End}V \) is an associative algebra with identity \( {I}_{V} \) . Set \( \mathcal{B} = \operatorname{Comm}\left( \mathcal{A}\right) \) . If \( V \) is a completely reducible \( \mathcal{A} \) -module, then \( \operatorname{Comm}\left( \mathcal{B}\right) = \mathcal{A} \) . Proof. By definition we have \( \mathcal{A} \subset \operatorname{Comm}\left( \mathcal{B}\right) \) . Let \( T \in \operatorname{Comm}\left( \mathcal{B}\right) \) and fix a basis \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) for \( V \) . It will suffice to find an element \( S \in \mathcal{A} \) such that \( S{v}_{i} = T{v}_{i} \) for \( i = 1,\ldots, n \) . Let \( {w}_{0} = {v}_{1} \oplus \cdots \oplus {v}_{n} \in {V}^{\left( n\right) } \) . Since \( {V}^{\left( n\right) } \) is a completely reducible \( \mathcal{A} \) - module by Proposition 4.1.11, the cyclic submodule \( M = \mathcal{A} \cdot {w}_{0} \) has an \( \mathcal{A} \) -invariant complement. Thus there is a projection \( P : {V}^{\left( n\right) } \rightarrow M \) that commutes with \( \mathcal{A} \) . The action of \( P \) is given by an \( n \times n \) matrix \( \left\lbrack {p}_{ij}\right\rbrack \), where \( {p}_{ij} \in \mathcal{B} \) . 
Since \( P{w}_{0} = {w}_{0} \) and \( T{p}_{ij} = {p}_{ij}T \), we have \[ P\left( {T{v}_{1} \oplus \cdots \oplus T{v}_{n}}\right) = T{v}_{1} \oplus \cdots \oplus T{v}_{n} \in M. \] Hence by definition of \( M \) there exists \( S \in \mathcal{A} \) such that \[ S{v}_{1} \oplus \cdots \oplus S{v}_{n} = T{v}_{1} \oplus \cdots \oplus T{v}_{n}. \] This proves that \( T = S \), so \( T \in \mathcal{A} \) . ## 4.1.6 Isotypic Decomposition and Multiplicities Let \( \mathcal{A} \) be an associative algebra with unit 1 . If \( U \) is a finite-dimensional irreducible \( \mathcal{A} \) -module, we denote by \( \left\lbrack U\right\rbrack \) the equivalence class of all \( \mathcal{A} \) -modules equivalent to \( U \) . Let \( \widehat{\mathcal{A}} \) be the set of all equivalence classes of finite-dimensional irreducible \( \mathcal{A} \) -modules. Suppose that \( V \) is an \( \mathcal{A} \) -module (we do not assume that \( V \) is finite-dimensional). For each \( \lambda \in \widehat{\mathcal{A}} \) we define the \( \lambda \) -isotypic subspace \[ {V}_{\left( \lambda \right) } = \mathop{\sum }\limits_{{U \subset V,\left\lbrack U\right\rbrack = \lambda }}U \] Fix a module \( {F}^{\lambda } \) in the class \( \lambda \) for each \( \lambda \in \widehat{\mathcal{A}} \) . There is a tautological linear map \[ {S}_{\lambda } : {\operatorname{Hom}}_{\mathcal{A}}\left( {{F}^{\lambda }, V}\right) \otimes {F}^{\lambda } \rightarrow V,\;{S}_{\lambda }\left( {u \otimes w}\right) = u\left( w\right) . \] (4.5) Make \( {\operatorname{Hom}}_{\mathcal{A}}\left( {{F}^{\lambda }, V}\right) \otimes {F}^{\lambda } \) into an \( \mathcal{A} \) -module with action \( x \cdot \left( {u \otimes w}\right) = u \otimes \left( {xw}\right) \) for \( x \in \mathcal{A} \) . Then \( {S}_{\lambda } \) is an \( \mathcal{A} \) -intertwining map. 
If \( 0 \neq u \in {\operatorname{Hom}}_{\mathcal{A}}\left( {{F}^{\lambda }, V}\right) \) then Schur’s lemma (Lemma 4.1.4) implies that \( u\left( {F}^{\lambda }\right) \) is an irreducible \( \mathcal{A} \) -submodule of \( V \) isomorphic to \( {F}^{\lambda } \) . Hence \[ {S}_{\lambda }\left( {{\operatorname{Hom}}_{\mathcal{A}}\left( {{F}^{\lambda }, V}\right) \otimes {F}^{\lambda }}\right) \subset {V}_{\left( \lambda \right) }\;\text{ for every }\lambda \in \widehat{\mathcal{A}}. \] Definition 4.1.14. The \( \mathcal{A} \) -module \( V \) is locally completely reducible if the cyclic \( \mathcal{A} \) - submodule \( \mathcal{A}v \) is finite-dimensional and completely reducible for every \( v \in V \) . For example, if \( G \) is a reductive linear algebraic group, then by Proposition 1.4.4 \( \mathcal{O}\left\lbrack G\right\rbrack \) is a locally completely reducible module for the group algebra \( \mathcal{A}\left\lbrack G\right\rbrack \) relative to the left or right translation action of \( G \) . Proposition 4.1.15. Let \( V \) be a locally completely reducible \( \mathcal{A} \) -module. Then the map \( {S}_{\lambda } \) gives an \( \mathcal{A} \) -module isomorphism \( {\operatorname{Hom}}_{\mathcal{A}}\left( {{F}^{\lambda }, V}\right) \otimes {F}^{\lambda } \cong {V}_{\left( \lambda \right) } \) for each \( \lambda \in \widehat{\mathcal{A}} \) . Furthermore, \[ V = {\bigoplus }_{\lambda \in \widehat{\mathcal{A}}}{V}_{\left( \lambda \right) }\;\text{ (algebraic direct sum). } \] (4.6) Proof. If \( U \subset V \) is an \( \mathcal{A} \) -invariant finite-dimensional irreducible subspace with \( \left\lbrack U\right\rbrack = \lambda \), then there exists \( u \in {\operatorname{Hom}}_{\mathcal{A}}\left( {{F}^{\lambda }, V}\right) \) such that Range \( \left( u\right) = U \) . Hence \( {S}_{\lambda } \) is surjective. 
To show that \( {S}_{\lambda } \) is injective, let \( {u}_{i} \in {\operatorname{Hom}}_{\mathcal{A}}\left( {{F}^{\lambda }, V}\right) \) and \( {w}_{i} \in {F}^{\lambda } \) for \( i = 1,\ldots, k \) , and suppose that \( \mathop{\sum }\limits_{i}{u}_{i}\left( {w}_{i}\right) = 0 \) . We may assume that \( \left\{ {{w}_{1},\ldots ,{w}_{k}}\right\} \) is linearly independent and that \( {u}_{i} \neq 0 \) for all \( i \) . Let \( W = {u}_{1}\l
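The Double Commutant Theorem (Theorem 4.1.13) can be checked by direct computation in a small case. The sketch below is ours, not the book's: it uses pure Python with exact rational arithmetic, and the algebra \( \mathcal{A} \) generated by \( I \) and \( \operatorname{diag}(1,1,2) \) is an illustrative assumption (it is completely reducible, being diagonalizable). Commutants are computed as null spaces of the linear constraints \( XS = SX \), and the computation recovers \( \dim \operatorname{Comm}\left( \operatorname{Comm}\left( \mathcal{A}\right) \right) = \dim \mathcal{A} = 2 \).

```python
from fractions import Fraction

def rref(rows, ncols):
    """Reduced row echelon form over Q; returns (matrix, pivot_columns)."""
    M = [[Fraction(v) for v in r] for r in rows]
    pivots, rank = [], 0
    for col in range(ncols):
        piv = next((r for r in range(rank, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        M[rank] = [v / M[rank][col] for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        pivots.append(col)
        rank += 1
    return M, pivots

def nullspace(rows, ncols):
    """Basis of {x : rows . x = 0}, one vector per free column."""
    M, pivots = rref(rows, ncols)
    basis = []
    for fcol in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[fcol] = Fraction(1)
        for r, pcol in enumerate(pivots):
            v[pcol] = -M[r][fcol]          # back-substitute pivot variables
        basis.append(v)
    return basis

def commutant(gens, n):
    """Basis of {X in M_n : XS = SX for all S in gens}, as n x n matrices."""
    rows = []
    for S in gens:
        for i in range(n):
            for j in range(n):
                row = [Fraction(0)] * (n * n)   # unknown X[i][j] sits at i*n+j
                for k in range(n):
                    row[i * n + k] += S[k][j]   # contribution of (XS)[i][j]
                    row[k * n + j] -= S[i][k]   # contribution of (SX)[i][j]
                rows.append(row)
    vecs = nullspace(rows, n * n)
    return [[v[i * n:(i + 1) * n] for i in range(n)] for v in vecs]

n = 3
S = [[1, 0, 0], [0, 1, 0], [0, 0, 2]]
I3 = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
B = commutant([I3, S], n)    # Comm(A): all 2x2-block (+) 1x1-block matrices
BB = commutant(B, n)         # Comm(Comm(A)): scalars on each block
print(len(B), len(BB))
```

Here `len(B)` is 5 (a \( 2 \times 2 \) block plus a \( 1 \times 1 \) block) and `len(BB)` is 2, matching \( \dim \mathcal{A} \); exact `Fraction` arithmetic avoids the rank-tolerance issues a floating-point computation would introduce.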
1074_(GTM232)An Introduction to Number Theory
Definition 8.10
Definition 8.10. The convolution of arithmetic functions \( \mathrm{f} \) and \( \mathrm{g} \) is the function \( \mathrm{f} * \mathrm{\;g} \) defined by \[ \left( {\mathrm{f} * \mathrm{\;g}}\right) \left( n\right) = \mathop{\sum }\limits_{{d \mid n}}\mathrm{f}\left( d\right) \mathrm{g}\left( \frac{n}{d}\right) \] Theorem 8.11. Convolution is commutative and associative. In other words, \[ \mathrm{f} * \mathrm{g} = \mathrm{g} * \mathrm{f}\;\text{ and }\;\left( {\mathrm{f} * \mathrm{g}}\right) * \mathrm{h} = \mathrm{f} * \left( {\mathrm{g} * \mathrm{h}}\right) \] for any arithmetic functions \( f, g \), and \( h \) . Proof. The sum in \[ \mathop{\sum }\limits_{{d \mid n}}\mathrm{f}\left( d\right) \mathrm{g}\left( \frac{n}{d}\right) \] runs over all pairs \( d, e \in \mathbb{N} \) with \( {de} = n \), so it is equal to \[ \mathop{\sum }\limits_{{{de} = n}}\mathrm{f}\left( d\right) \mathrm{g}\left( e\right) \] and the latter expression is symmetric in \( \mathrm{f} \) and \( \mathrm{g} \) . To see that convolution is associative, check the property for \( n = p \) a prime by hand. The proof in the general case goes in much the same way as the proof of commutativity: \[ \left( {\left( {\mathrm{f} * \mathrm{g}}\right) * \mathrm{h}}\right) \left( n\right) = \left( {\mathrm{f} * \left( {\mathrm{g} * \mathrm{h}}\right) }\right) \left( n\right) = \mathop{\sum }\limits_{{{cde} = n}}\mathrm{f}\left( c\right) \mathrm{g}\left( d\right) \mathrm{h}\left( e\right) , \] from which associativity is clear. Lemma 8.12. Define the arithmetic function \( \mathrm{I} \) by \( \mathrm{I}\left( 1\right) = 1 \) and \( \mathrm{I}\left( n\right) = 0 \) for all \( n > 1 \) . Then, for any arithmetic function \( \mathrm{f} \) , \[ f * I = I * f = f. \] Proof. 
\[ \left( {\mathrm{f} * \mathrm{I}}\right) \left( n\right) = \mathop{\sum }\limits_{{d \mid n}}\mathrm{f}\left( d\right) \mathrm{I}\left( \frac{n}{d}\right) = \mathrm{f}\left( n\right) \mathrm{I}\left( 1\right) = \mathrm{f}\left( n\right) \] since all the other summands are zero by the definition of I. Theorem 8.13. If \( \mathrm{f} \) is an arithmetic function with \( \mathrm{f}\left( 1\right) \neq 0 \), then there is a unique arithmetic function \( \mathrm{g} \) such that \( \mathrm{f} * \mathrm{g} = \mathrm{I} \) . This function is denoted \( {\mathrm{f}}^{-1} \) . Proof. The equation \( \left( {f * g}\right) \left( 1\right) = f\left( 1\right) g\left( 1\right) \) determines \( g\left( 1\right) \) . Then define \( g \) recursively as follows. Assuming that \( \mathrm{g}\left( 1\right) ,\ldots ,\mathrm{g}\left( {n - 1}\right) \) have been defined uniquely, the equation \[ \left( {\mathrm{f} * \mathrm{\;g}}\right) \left( n\right) = \mathrm{f}\left( 1\right) \mathrm{g}\left( n\right) + \mathop{\sum }\limits_{{1 < d \mid n}}\mathrm{f}\left( d\right) \mathrm{g}\left( \frac{n}{d}\right) \] allows us to calculate \( \mathrm{g}\left( n\right) \) uniquely. Example 8.14. Let \( \mathfrak{u}\left( n\right) = 1 \) for all \( n \) . Then, by Theorem 8.8, \[ {u}^{-1} = \mu \] (8.10) Exercise 8.10. Let \( f \) be a multiplicative arithmetic function with \( f\left( 1\right) \neq 0 \) . (a) Prove that \( {\mathrm{f}}^{-1}\left( n\right) = \mu \left( n\right) \mathrm{f}\left( n\right) \) for all square-free \( n \) . (b) Prove that \( {\mathrm{f}}^{-1}\left( {p}^{2}\right) = \mathrm{f}{\left( p\right) }^{2} - \mathrm{f}\left( {p}^{2}\right) \) for all primes \( p \) . Exercise 8.11. Let \( f \) be an arithmetic function, and consider the (formal) relationship \[ \mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 - {x}^{n}\right) }^{\mathrm{f}\left( n\right) /n} = \mathop{\sum }\limits_{{n = 0}}^{\infty }\mathrm{R}\left( n\right) {x}^{n}. 
\] (8.11) (a) Prove that \( \mathrm{R}\left( n\right) = - \frac{1}{n}\mathop{\sum }\limits_{{a = 1}}^{n}\left( {\mathrm{f} * \mathrm{u}}\right) \left( a\right) \cdot \mathrm{R}\left( {n - a}\right) \) for all \( n \geq 1 \) . (b) Assume that \( \mathrm{R}\left( 0\right) = 1 \) . Prove that \( \mathrm{f} \) is uniquely determined by Equation (8.11). (c) For \( \mathrm{f}\left( n\right) = {n}^{\alpha } \), prove that \( \mathrm{R}\left( n\right) = - \frac{1}{n}\mathop{\sum }\limits_{{a = 1}}^{n}\left( {{n}^{\alpha } + 1}\right) \cdot \mathrm{R}\left( {n - a}\right) \) for all \( n \geq 1 \) . Exercise 8.12. If \( f \) is multiplicative, prove that \( f \) is completely multiplicative if and only if \( {\mathrm{f}}^{-1}\left( {p}^{a}\right) = 0 \) for all primes \( p \) and \( a \geq 2 \) . Exercise 8.13. Define an arithmetic function \( \nu \left( n\right) \) to be 1 when \( n = 0 \) and the number of distinct prime factors of \( n \) for \( n \geq 1 \) . Let \( \mathrm{f} = \mu * \nu \) . Prove that \( \mathrm{f}\left( n\right) \in \{ 0,1\} \) for all \( n \in \mathbb{N} \) . Exercise 8.14. (a) Prove that the collection of all arithmetic functions \( f \) with \( \mathrm{f}\left( 1\right) \neq 0 \) forms an Abelian group under Dirichlet convolution. (b) Prove that the multiplicative arithmetic functions form a subgroup. (c) Show by example that the completely multiplicative functions do not form a subgroup. Theorem 8.15. [MÖBIUS INVERSION FORMULA] Given arithmetic functions \( f \) and \( \mathrm{g},\mathrm{f}\left( n\right) = \mathop{\sum }\limits_{{d \mid n}}\mathrm{\;g}\left( d\right) \) if and only if \( \mathrm{g}\left( n\right) = \mathop{\sum }\limits_{{d \mid n}}\mathrm{f}\left( d\right) \mu \left( \frac{n}{d}\right) \) . Proof. Assume that \( \mathrm{f}\left( n\right) = \mathop{\sum }\limits_{{d \mid n}}\mathrm{\;g}\left( d\right) \), and let \( \mathrm{u}\left( n\right) = 1 \) for all \( n \) as in Example 8.14. 
Then \( \mathrm{f} = \mathrm{g} * \mathrm{u} \) . Convolve both sides of \( \mathrm{f} = \mathrm{g} * \mathrm{u} \) with \( \mu \) and use Equation (8.10) to see that \[ \mathrm{f} * \mu = \mathrm{g} * \mathrm{u} * \mu = \mathrm{g} * \mathrm{I} = \mathrm{g}, \] so \[ \mathrm{g}\left( n\right) = \mathop{\sum }\limits_{{d \mid n}}\mathrm{f}\left( d\right) \mu \left( \frac{n}{d}\right) . \] For the converse, convolve \( \mathrm{g} = \mathrm{f} * \mu \) with \( \mathrm{u} \) . Thus Theorem 3.9 and Theorem 8.9 are equivalent: We can move from one to the other by convolving with the Möbius function or its convolution inverse. Exercise 8.15. Suppose \( \sigma \) denotes a real number for which \[ \mathrm{F}\left( \sigma \right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\mathrm{f}\left( n\right) }{{n}^{\sigma }}\text{ and }\mathrm{G}\left( \sigma \right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\mathrm{g}\left( n\right) }{{n}^{\sigma }} \] are absolutely convergent series. Prove that \[ \mathrm{F}\left( \sigma \right) \cdot \mathrm{G}\left( \sigma \right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\left( {\mathrm{f} * \mathrm{\;g}}\right) \left( n\right) }{{n}^{\sigma }}. \] Example 8.16. If \( \mathrm{f} * \mathrm{\;g} = \mathrm{I} \), then \( \mathrm{F}\left( \sigma \right) \mathrm{G}\left( \sigma \right) = 1 \), so \[ \frac{1}{\zeta \left( \sigma \right) } = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\mu \left( n\right) }{{n}^{\sigma }} \] Series such as \( \mathrm{F},\mathrm{G} \), and \( \mathrm{F} \cdot \mathrm{G} \) are called Dirichlet series. We next study the Riemann zeta function in the context of Dirichlet series. Exercise 8.16. 
For all \( s \in \mathbb{C} \) with \( \Re \left( s\right) > 2 \), show that \[ \frac{\zeta \left( {s - 1}\right) }{\zeta \left( s\right) } = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\phi \left( n\right) }{{n}^{s}} \] The traditional notation for the variable \( s \) in Definition 1.4 of the Riemann zeta function is \[ s = \sigma + \mathrm{i}t\;\text{ with }\sigma, t \in \mathbb{R}. \] For \( s \) with real part \( \sigma = \Re \left( s\right) > 1 \), we claim that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{s}} \) converges absolutely. To prove this, notice that \[ {n}^{-s} = {n}^{-\sigma - {it}} = {n}^{-\sigma }{e}^{-{it}\log n} \] has modulus \( {n}^{-\sigma } \) and that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{\sigma }} \) is a convergent series by the integral test. ## 8.3.1 Application of Möbius Inversion to Zsigmondy's Theorem Before showing how Theorem 8.15 can be used to prove Zsigmondy's Theorem (Theorem 1.15), a preliminary observation needs to be made. The polynomial \( {x}^{n} - 1 \) already has some natural factorization according to the divisors of \( n \) . If \( d \geq 1 \) denotes any integer, let \( {\phi }_{d} \) denote the monic polynomial whose zeros are the primitive \( d \) th roots of unity. The polynomial \( {\phi }_{d} \) is known as the \( d \) th cyclotomic polynomial. A simple application of Galois theory says that \[ {\phi }_{d}\left( x\right) \in \mathbb{Z}\left\lbrack x\right\rbrack \text{ for every }d \geq 1. \] If you are not familiar with Galois theory we ask you to take this on trust. A natural factorization of \( {x}^{n} - 1 \) into integral polynomials follows at once, by dividing the \( n \) th roots of unity into the \( d \) th primitive roots of unity for \( d \) running over the divisors of \( n \) , \[ {x}^{n} - 1 = \mathop{\prod }\limits_{{d \mid n}}{\phi }_{d}\left( x\right) \] (8.12) Exercise 8.17. 
Compute the polynomials \( {\phi }_{d} \) for \( 1 \leq d \leq {15} \) . The factorization given by Equation (8.12) into integral polynomials yields a partial factorization of \( {M}_{n} = {2}^{n} - 1 \) into integers, \[ {2}^{n} - 1 = \mathop{\prod }\limits_{{d \mid n}}{\phi }_{d}\left( 2\right) \] (8.13) The first thing to notice about Equation (8.13) is that, by definition, any primitive divisor of \( {M}_{n} \) must divide \( {\phi }_{n}\left( 2\right) \) . The proof of Theorem 1.15 proceeds by showing that any factor of \( {\phi }_{n}\left( 2\right) \) which is common to \( {M}_{d} \) for some \( d < n \) must itself already divide
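The convolution machinery of this section lends itself to direct verification. The sketch below is ours (pure Python; the helper names `conv`, `mu`, and `cyc2` are our own): it checks \( \mathrm{u}^{-1} = \mu \) (Equation (8.10)), recovers Euler's \( \phi \) by Möbius inversion (Theorem 8.15) from \( \mathop{\sum }\limits_{{d \mid n}}\phi(d) = n \), and verifies the factorization \( {2}^{n} - 1 = \mathop{\prod }\limits_{{d \mid n}}{\phi }_{d}\left( 2\right) \) of Equation (8.13), computing \( {\phi }_{n}\left( 2\right) = \mathop{\prod }\limits_{{d \mid n}}{\left( {2}^{d} - 1\right) }^{\mu \left( {n/d}\right) } \) by Möbius inversion of (8.12).

```python
from math import prod  # Python 3.8+

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conv(f, g, n):
    """Dirichlet convolution (f * g)(n) = sum over d | n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in divisors(n))

def mu(n):
    """Moebius function, via the recursion sum_{d|n} mu(d) = I(n)."""
    if n == 1:
        return 1
    return -sum(mu(d) for d in divisors(n)[:-1])   # exclude d = n itself

def u(n): return 1
def I(n): return 1 if n == 1 else 0

# Equation (8.10): u^{-1} = mu, i.e. u * mu = I
assert all(conv(u, mu, n) == I(n) for n in range(1, 50))

# Theorem 8.15: from f(n) = sum_{d|n} phi(d) = n, recover phi = f * mu
def f(n): return n
def phi(n): return conv(f, mu, n)
assert [phi(n) for n in range(1, 11)] == [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]

def cyc2(n):
    """Phi_n(2) = prod_{d|n} (2^d - 1)^{mu(n/d)}, Moebius inversion of (8.12)."""
    num = den = 1
    for d in divisors(n):
        e = mu(n // d)
        if e == 1:
            num *= 2 ** d - 1
        elif e == -1:
            den *= 2 ** d - 1
    return num // den                              # division is exact

# Equation (8.13): 2^n - 1 = prod_{d|n} Phi_d(2)
for n in range(1, 30):
    assert prod(cyc2(d) for d in divisors(n)) == 2 ** n - 1
print("all identities verified")
```

For instance `cyc2(6)` returns 3 and `cyc2(12)` returns 13 (since \( {\phi }_{12}\left( x\right) = {x}^{4} - {x}^{2} + 1 \)), and the final loop confirms the telescoping product for all \( n < 30 \).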
1105_(GTM260)Monomial.Ideals,Jurgen.Herzog(2010)
Definition 5.1.2
Definition 5.1.2. Let \( \Delta \) be a simplicial complex on the vertex set \( \left\lbrack n\right\rbrack \), and let \( {J}_{\Delta } \subset E \) be the monomial ideal generated by the monomials \( {\mathbf{e}}_{F} \) with \( F \notin \Delta \) . The \( K \) -algebra \( K\{ \Delta \} = E/{J}_{\Delta } \) is called the exterior face ring of \( \Delta \) . Since \( {J}_{\Delta } \) is a graded ideal, the exterior face ring \( K\{ \Delta \} \) is a graded \( K \) - algebra, and one has \[ {\dim }_{K}K\{ \Delta {\} }_{i} = {f}_{i - 1}\;\text{ for }\;i = 0,\ldots, d - 1, \] where \( {f}_{-1} = 1 \) and \( \left( {{f}_{0},{f}_{1},\ldots ,{f}_{d - 1}}\right) \) is the \( f \) -vector of \( \Delta \) . Indeed, the residue classes of the monomials \( {\mathbf{e}}_{F} \) with \( F \in \Delta \) form a \( K \) -basis of \( K\{ \Delta \} \) . ## 5.1.3 Duality We will show that \( E \) is an injective object in \( \mathcal{G} \) . Let \( M, N \in \mathcal{G} \), and let \[ {}^{ * }{\operatorname{Hom}}_{E}\left( {M, N}\right) = {\bigoplus }_{i}{\operatorname{Hom}}_{E}{\left( M, N\right) }_{i} \] where \( {\operatorname{Hom}}_{E}{\left( M, N\right) }_{i} \) is the set of homogeneous \( E \) -module homomorphisms \( \varphi : M \rightarrow N \) of degree \( i \) . Then \( {}^{ * }{\operatorname{Hom}}_{E}\left( {M, N}\right) \) is a graded \( E \) -module with left and right \( E \) -module structure defined as follows: for \( f \in E \) and \( \varphi \in \) \( {}^{ * }{\operatorname{Hom}}_{E}\left( {M, N}\right) \) we set \( \left( {f\varphi }\right) \left( x\right) = \varphi \left( {xf}\right) \) and \( \left( {\varphi f}\right) \left( x\right) = \varphi \left( x\right) f \) for all \( x \in M \) . We check condition (3) in Definition 5.1.1: let \( f \in E \) be homogeneous of degree \( i \) and \( \varphi \in {}^{ * }{\operatorname{Hom}}_{E}\left( {M, N}\right) \) be homogeneous of degree \( j \) . 
Then for \( x \in {M}_{k} \) we have \[ \left( {f\varphi }\right) \left( x\right) = \varphi \left( {xf}\right) = {\left( -1\right) }^{ik}\varphi \left( {fx}\right) = {\left( -1\right) }^{ik}{f\varphi }\left( x\right) = {\left( -1\right) }^{{ik} + i\left( {j + k}\right) }\varphi \left( x\right) f \] \[ = {\left( -1\right) }^{ij}\left( {\varphi f}\right) \left( x\right) . \] We set \( {M}^{ \vee } = {}^{ * }{\operatorname{Hom}}_{E}\left( {M, E}\right) \) and \( {M}^{ * } = {}^{ * }{\operatorname{Hom}}_{K}\left( {M, K\left( {-n}\right) }\right) \) . Then \( {M}^{ * } \) is a graded \( E \) -module with graded components \[ {\left( {M}^{ * }\right) }_{j} \cong {\operatorname{Hom}}_{K}\left( {{M}_{n - j}, K}\right) \;\text{ for all }j. \] The left \( E \) -module structure of \( {M}^{ * } \) is defined similarly as for \( {}^{ * }{\operatorname{Hom}}_{E}\left( {M, N}\right) \) , while the right multiplication we define by the equation \[ {\varphi f} = {\left( -1\right) }^{ij}{f\varphi }\;\text{ for }\;\varphi \in {\left( {M}^{ * }\right) }_{j}\;\text{ and }\;f \in {E}_{i}. \] It is clear that \( M \mapsto {M}^{ * } \) is an exact functor. Let \( \varphi \in {M}^{ \vee } \) and \( x \in M \) . Then \( \varphi \left( x\right) = \mathop{\sum }\limits_{{F \subset \left\lbrack n\right\rbrack }}{\varphi }_{F}\left( x\right) {\mathbf{e}}_{F} \) with \( {\varphi }_{F}\left( x\right) \in K \) for all \( F \subset \left\lbrack n\right\rbrack \) . Thus for each \( F \subset \left\lbrack n\right\rbrack \) we obtain a \( K \) -linear map \( {\varphi }_{F} : M \rightarrow \) \( K{\mathbf{e}}_{\left\lbrack n\right\rbrack } = K\left( {-n}\right) \) . As the main result of this section we have Theorem 5.1.3. The map \( {M}^{ \vee } \rightarrow {M}^{ * },\varphi \mapsto {\varphi }_{\left\lbrack n\right\rbrack } \) is a functorial isomorphism of graded E-modules. Proof. 
For a subset \( F \subset \left\lbrack n\right\rbrack \) we set \( \bar{F} = \left\lbrack n\right\rbrack \smallsetminus F \) . We first consider the map \( \alpha : E \rightarrow {E}^{ * } \) of graded \( K \) -vector spaces given by \[ \alpha \left( {\mathbf{e}}_{F}\right) \left( {\mathbf{e}}_{G}\right) = \left\{ \begin{array}{ll} {\left( -1\right) }^{\sigma \left( {\bar{F}, F}\right) }, & \text{ if }G = \bar{F}, \\ 0, & \text{ otherwise. } \end{array}\right. \] For each \( G \subset \left\lbrack n\right\rbrack \) we define the element \( {\mathbf{e}}_{G}^{ * } \in {E}^{ * } \) by \[ {\mathbf{e}}_{G}^{ * }\left( {\mathbf{e}}_{F}\right) = \left\{ \begin{array}{ll} 1, & \text{ if }G = F \\ 0, & \text{ otherwise. } \end{array}\right. \] The elements \( {\mathbf{e}}_{G}^{ * } \) form a \( K \) -basis of \( {E}^{ * } \) (namely the dual basis of the basis \( {\mathbf{e}}_{G} \) with \( G \subset \left\lbrack n\right\rbrack \) ), and we have \( \alpha \left( {\mathbf{e}}_{F}\right) = {\left( -1\right) }^{\sigma \left( {\bar{F}, F}\right) }{\mathbf{e}}_{\bar{F}}^{ * } \) . This shows that \( \alpha \) is an isomorphism of graded \( K \) -vector spaces. We observe that \[ {\mathbf{e}}_{H}{\mathbf{e}}_{G}^{ * } = \left\{ \begin{array}{ll} {\left( -1\right) }^{\sigma \left( {G \smallsetminus H, H}\right) }{\mathbf{e}}_{G \smallsetminus H}^{ * }, & \text{ if }H \subset G, \\ 0, & \text{ otherwise. } \end{array}\right. \] Next we notice that \( \alpha \) is a morphism in the category \( \mathcal{G} \) of graded \( E \) -modules. 
Indeed for all \( F, H \subset \left\lbrack n\right\rbrack \) we have \[ {\mathbf{e}}_{H}\alpha \left( {\mathbf{e}}_{F}\right) = {\left( -1\right) }^{\sigma \left( {\bar{F}, F}\right) }{\mathbf{e}}_{H}{\mathbf{e}}_{\bar{F}}^{ * } = {\left( -1\right) }^{\sigma \left( {\bar{F}, F}\right) + \sigma \left( {\overline{\left( F \cup H\right) }, H}\right) }{\mathbf{e}}_{\overline{\left( F \cup H\right) }}^{ * }, \] if \( H \cap F = \varnothing \), and \( {\mathbf{e}}_{H}\alpha \left( {\mathbf{e}}_{F}\right) = 0 \) if \( H \cap F \neq \varnothing \) . On the other hand, we have \[ \alpha \left( {{\mathbf{e}}_{H} \land {\mathbf{e}}_{F}}\right) = {\left( -1\right) }^{\sigma \left( {H, F}\right) }\alpha \left( {\mathbf{e}}_{H \cup F}\right) = {\left( -1\right) }^{\sigma \left( {H, F}\right) + \sigma \left( {\overline{\left( H \cup F\right) }, H \cup F}\right) }{\mathbf{e}}_{\overline{\left( H \cup F\right) }}^{ * }, \] if \( H \cap F = \varnothing \), and \( \alpha \left( {{\mathbf{e}}_{H} \land {\mathbf{e}}_{F}}\right) = 0 \) if \( H \cap F \neq \varnothing \) . Since \( \left( {{\mathbf{e}}_{\overline{\left( F \cup H\right) }} \land {\mathbf{e}}_{H}}\right) \land {\mathbf{e}}_{F} = {\mathbf{e}}_{\overline{\left( F \cup H\right) }} \land \left( {{\mathbf{e}}_{H} \land {\mathbf{e}}_{F}}\right) \) we get \[ {\left( -1\right) }^{\sigma \left( {\bar{F}, F}\right) + \sigma \left( {\overline{\left( F \cup H\right) }, H}\right) } = {\left( -1\right) }^{\sigma \left( {H, F}\right) + \sigma \left( {\overline{\left( H \cup F\right) }, H \cup F}\right) }. \] Thus the above calculations show that \( {\mathbf{e}}_{H}\alpha \left( {\mathbf{e}}_{F}\right) = \alpha \left( {{\mathbf{e}}_{H} \land {\mathbf{e}}_{F}}\right) \), so that \( \alpha : E \rightarrow {E}^{ * } \) is an \( E \) -module homomorphism. Since \( \alpha \) respects the grading and is bijective, it is indeed an isomorphism of graded \( E \) -modules.
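The displayed sign identity can also be verified by brute force. The check below assumes the standard convention \( {\mathbf{e}}_{S} \land {\mathbf{e}}_{T} = {\left( -1\right) }^{\sigma \left( {S, T}\right) }{\mathbf{e}}_{S \cup T} \) for disjoint \( S, T \), with \( \sigma \left( {S, T}\right) \) the number of pairs \( \left( {s, t}\right) \in S \times T \) with \( s > t \); this convention is an assumption of the sketch, not restated in the text.

```python
from itertools import combinations

def sigma(S, T):
    """Number of pairs (s, t) in S x T with s > t (assumed sign convention)."""
    return sum(1 for s in S for t in T if s > t)

n = 5
V = frozenset(range(1, n + 1))
subsets = [frozenset(c) for k in range(n + 1)
           for c in combinations(range(1, n + 1), k)]

# Check the displayed identity for all disjoint pairs F, H in [5].
for F in subsets:
    for H in subsets:
        if F & H:
            continue
        lhs = sigma(V - F, F) + sigma(V - (F | H), H)
        rhs = sigma(H, F) + sigma(V - (H | F), H | F)
        assert (lhs - rhs) % 2 == 0
```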
Consider the functorial homomorphism \[ \psi : {}^{ * }{\operatorname{Hom}}_{E}\left( {M,{}^{ * }{\operatorname{Hom}}_{K}\left( {E, K}\right) }\right) \rightarrow {}^{ * }{\operatorname{Hom}}_{K}\left( {M, K}\right) \] which is defined as \( \psi \left( \rho \right) \left( x\right) = \rho \left( x\right) \left( 1\right) \) for all \( \rho \in {}^{ * }{\operatorname{Hom}}_{E}\left( {M,{}^{ * }{\operatorname{Hom}}_{K}\left( {E, K}\right) }\right) \) and all \( x \in M \) . Note that \( \psi \) is an isomorphism of graded \( E \) -modules. Thus we obtain the desired isomorphism \( {M}^{ \vee } \rightarrow {M}^{ * } \) with \( \varphi \mapsto {\varphi }_{\left\lbrack n\right\rbrack } \) as the composition of the isomorphisms \[ {}^{ * }{\operatorname{Hom}}_{E}\left( {M, E}\right) \xrightarrow[]{{}^{ * }\operatorname{Hom}\left( {M,\alpha }\right) }{}^{ * }{\operatorname{Hom}}_{E}\left( {M,{E}^{ * }}\right) \xrightarrow[]{\psi }{}^{ * }{\operatorname{Hom}}_{K}\left( {M, K}\right) . \] Corollary 5.1.4. (a) The functor \( M \mapsto {M}^{ \vee } \) is contravariant and exact. In particular, \( E \) is an injective object in \( \mathcal{G} \) . (b) For all \( M \in \mathcal{G} \) one has (i) \( {\left( {M}^{ \vee }\right) }^{ \vee } \cong M \), and (ii) \( {\dim }_{K}M = {\dim }_{K}{M}^{ \vee } \) . Proof. All statements follow from Theorem 5.1.3 and the fact that the functor \( M \mapsto {M}^{ * } \) obviously has all the desired properties. We apply the duality functor \( M \mapsto {M}^{ \vee } \) to face rings. Recall that for a simplicial complex \( \Delta \) we denote by \( {\Delta }^{ \vee } \) the Alexander dual of \( \Delta \) . Proposition 5.1.5. Let \( \Delta \) be a simplicial complex on the vertex set \( \left\lbrack n\right\rbrack \) . Then one has (a) \( 0 : {}_{E}{J}_{\Delta } = {J}_{{\Delta }^{ \vee }} \) ; (b) \( K\{ \Delta {\} }^{ \vee } = {J}_{{\Delta }^{ \vee }} \) and \( {\left( {J}_{\Delta }\right) }^{ \vee } = K\left\{ {\Delta }^{ \vee }\right\} \) . 
Proof. (a) Since \( {J}_{\Delta } \) is a monomial ideal, it follows that \( 0 : {}_{E}{J}_{\Delta } \) is again a monomial ideal. Then by using (5.2) we see that \( {\mathbf{e}}_{F} \in 0 : {}_{E}{J}_{\Delta } \) if and only if \( F \cap G \neq \varnothing \) for all \( G \notin \Delta \) . This is the case if and only if \( G ⊄ \left\lbrack n\right\rbrack \smallsetminus F \) for all \( G \notin \Delta \) . This is equivalent to saying that \( \left\lbrack n\right\rbrack \smallsetminus F \in \Delta \), which in turn is equivalent to saying that \( F \notin {\Delta }^{ \vee } \) . Hence \( {\mathbf{e}}_{F} \in 0 : {}_{E}{J}_{\Delta } \) if and only if \( {\mathbf{e}}_{F} \in {J}_{{\Delta }^{ \vee }} \) . This
1056_(GTM216)Matrices
Definition 2.2
Definition 2.2 The matrix \( P \) above is the matrix of the change of basis from \( \beta \) to \( {\beta }^{\prime } \) . The matrix \( {M}_{u} \) of a linear map \( u \in \mathcal{L}\left( {E;F}\right) \) depends upon the choice of the bases of \( E \) and \( F \) . Therefore it must be modified when they are changed. The following formula describes this modification. Let \( \beta ,{\beta }^{\prime } \) be two bases of \( E \), and \( \gamma ,{\gamma }^{\prime } \) two bases of \( F \) . Let \( M \) be the matrix of \( u \) associated with the bases \( \left( {\beta ,\gamma }\right) \), and \( {M}^{\prime } \) be that associated with \( \left( {{\beta }^{\prime },{\gamma }^{\prime }}\right) \) . Finally, let \( P \) be the matrix of the change of basis \( \beta \mapsto {\beta }^{\prime } \) and \( Q \) that of \( \gamma \mapsto {\gamma }^{\prime } \) . We have \( P \in {\mathbf{{GL}}}_{m}\left( K\right) \) and \( Q \in {\mathbf{{GL}}}_{n}\left( K\right) \) . With obvious notations, we have \[ {f}_{k}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{n}{q}_{ik}{f}_{i},\;{e}_{j}^{\prime } = \mathop{\sum }\limits_{{\ell = 1}}^{m}{p}_{\ell j}{e}_{\ell }. \] We have \[ u\left( {e}_{j}^{\prime }\right) = \mathop{\sum }\limits_{{k = 1}}^{n}{m}_{kj}^{\prime }{f}_{k}^{\prime } = \mathop{\sum }\limits_{{i, k = 1}}^{n}{m}_{kj}^{\prime }{q}_{ik}{f}_{i} \] On the other hand, we have \[ u\left( {e}_{j}^{\prime }\right) = u\left( {\mathop{\sum }\limits_{{\ell = 1}}^{m}{p}_{\ell j}{e}_{\ell }}\right) = \mathop{\sum }\limits_{{\ell = 1}}^{m}{p}_{\ell j}\mathop{\sum }\limits_{{i = 1}}^{n}{m}_{i\ell }{f}_{i}. \] Comparing the two formulæ, we obtain \[ \mathop{\sum }\limits_{{\ell = 1}}^{m}{m}_{i\ell }{p}_{\ell j} = \mathop{\sum }\limits_{{k = 1}}^{n}{q}_{ik}{m}_{kj}^{\prime },\;\forall 1 \leq i \leq n,1 \leq j \leq m.
\] This exactly means the formula \[ {MP} = Q{M}^{\prime }\text{.} \] (2.4) Definition 2.3 Two matrices \( M,{M}^{\prime } \in {\mathbf{M}}_{n \times m}\left( K\right) \) are equivalent if there exist two matrices \( P \in {\mathbf{{GL}}}_{m}\left( K\right) \) and \( Q \in {\mathbf{{GL}}}_{n}\left( K\right) \) such that equality (2.4) holds true. Thus equivalent matrices represent the same linear map in different bases. ## 2.2.3.1 The Situation for Square Matrices When \( F = E \) and thus \( m = n \), it is natural to represent \( u \in \operatorname{End}\left( E\right) \) by using only one basis, that is, choosing \( {\beta }^{\prime } = \beta \) with the notations above. In a change of basis, we have likewise \( {\gamma }^{\prime } = \gamma \), which means that \( Q = P \) . We now have \[ {MP} = P{M}^{\prime } \] or equivalently \[ {M}^{\prime } = {P}^{-1}{MP} \] (2.5) Definition 2.4 Two matrices \( M,{M}^{\prime } \in {\mathbf{M}}_{n}\left( K\right) \) are similar if there exists a matrix \( P \in {\mathbf{{GL}}}_{n}\left( K\right) \) such that equality (2.5) holds true. Thus similar matrices represent the same endomorphism in different bases. The equivalence and the similarity of matrices both are equivalence relations. They are studied in detail in Chapter 9. ## 2.2.4 Multiplying Blockwise Let \( M \in {\mathbf{M}}_{n \times m}\left( K\right) \) and \( {M}^{\prime } \in {\mathbf{M}}_{{m}^{\prime } \times p}\left( K\right) \) be given. We assume that partitions \[ n = {n}_{1} + \cdots + {n}_{r},\;m = {m}_{1} + \cdots + {m}_{s}, \] \[ {m}^{\prime } = {m}_{1}^{\prime } + \cdots + {m}_{s}^{\prime },\;p = {p}_{1} + \cdots + {p}_{t} \] have been chosen, so that \( M \) and \( {M}^{\prime } \) can be written blockwise with blocks \( {M}_{\alpha \beta } \) and \( {M}_{\beta \gamma }^{\prime } \) with \( \alpha = 1,\ldots, r,\beta = 1,\ldots, s,\gamma = 1,\ldots, t \) . 
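Formulas (2.4) and (2.5) are easily tested on explicit matrices. A sketch using the Python library `sympy` (the particular matrices below are arbitrary examples, with \( m = 3 \) and \( n = 2 \)):

```python
from sympy import Matrix

# (2.4): M' = Q^{-1} M P represents u in the new bases, so M P = Q M'.
M = Matrix([[1, 2, 0],
            [3, -1, 4]])            # matrix of u in the bases (beta, gamma)
P = Matrix([[1, 1, 0],
            [0, 1, 1],
            [0, 0, 1]])             # change of basis beta -> beta', P in GL_3
Q = Matrix([[2, 1],
            [1, 1]])                # change of basis gamma -> gamma', Q in GL_2
Mp = Q.inv() * M * P
assert M * P == Q * Mp

# (2.5): similar matrices share the characteristic polynomial.
A = Matrix([[0, 1],
            [-2, 3]])
S = Matrix([[1, 2],
            [1, 3]])
B = S.inv() * A * S                 # B = S^{-1} A S is similar to A
assert A.charpoly() == B.charpoly()
```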
We can make the product \( M{M}^{\prime } \), which is an \( n \times p \) matrix, provided that \( {m}^{\prime } = m \) . On the other hand, we wish to use the block form to calculate this product more concisely. Let us write blockwise \( M{M}^{\prime } \) by using the partitions \[ n = {n}_{1} + \cdots + {n}_{r},\;p = {p}_{1} + \cdots + {p}_{t}. \] We expect that the blocks \( {\left( M{M}^{\prime }\right) }_{\alpha \gamma } \) obey a simple formula, say \[ {\left( M{M}^{\prime }\right) }_{\alpha \gamma } = \mathop{\sum }\limits_{{\beta = 1}}^{s}{M}_{\alpha \beta }{M}_{\beta \gamma }^{\prime } \] (2.6) The block products \( {M}_{\alpha \beta }{M}_{\beta \gamma }^{\prime } \) make sense provided \( {m}_{\beta }^{\prime } = {m}_{\beta } \) for every \( \beta = 1,\ldots, s \) (which in turn necessitates \( {m}^{\prime } = m \) ). Once this requirement is fulfilled, it is easy to see that formula (2.6) is correct. We leave its verification to the reader as an exercise. In conclusion, multiplication of matrices written blockwise follows the same rule as when the matrices are given entrywise. The multiplication is done in two stages: one level using block multiplication, the other one using multiplication in \( K \) . Actually, we may have as many levels as wished, by writing blocks blockwise (using subblocks), and so on. This recursive strategy is employed in Section 11.1. ## 2.3 Matrices and Bilinear Forms Let \( E, F \) be two \( K \) -vector spaces. One chooses two respective bases \( \beta = \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) and \( \gamma = \left\{ {{f}_{1},\ldots ,{f}_{m}}\right\} \) . If \( B : E \times F \rightarrow K \) is a bilinear form, then \[ B\left( {x, y}\right) = \mathop{\sum }\limits_{{i, j}}B\left( {{e}_{i},{f}_{j}}\right) {x}_{i}{y}_{j} \] where the \( {x}_{i}\mathrm{\;s} \) are the coordinates of \( x \) in \( \beta \) and the \( {y}_{j}\mathrm{\;s} \) are those of \( y \) in \( \gamma \) .
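Formula (2.6) is the usual row-by-column rule one level up, and is easy to confirm numerically. A sketch assuming the Python library `numpy` (the partitions and the random entries are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Partitions n = 2 + 3, m = m' = 1 + 3, p = 2 + 2.
n_parts, m_parts, p_parts = [2, 3], [1, 3], [2, 2]
n, m, p = sum(n_parts), sum(m_parts), sum(p_parts)
M = rng.integers(-5, 5, (n, m))
Mp = rng.integers(-5, 5, (m, p))

def cuts(parts):
    """Cumulative offsets [0, parts[0], parts[0] + parts[1], ...]."""
    out = [0]
    for q in parts:
        out.append(out[-1] + q)
    return out

rn, rm, rp = cuts(n_parts), cuts(m_parts), cuts(p_parts)

# Assemble (M M')_{alpha,gamma} = sum_beta M_{alpha,beta} M'_{beta,gamma}.
MMp = np.zeros((n, p), dtype=np.int64)
for a in range(len(n_parts)):
    for g in range(len(p_parts)):
        acc = sum(M[rn[a]:rn[a+1], rm[b]:rm[b+1]] @ Mp[rm[b]:rm[b+1], rp[g]:rp[g+1]]
                  for b in range(len(m_parts)))
        MMp[rn[a]:rn[a+1], rp[g]:rp[g+1]] = acc

assert np.array_equal(MMp, M @ Mp)
```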
Let us define a matrix \( M \in {\mathbf{M}}_{n \times m}\left( K\right) \) by \( {m}_{ij} = B\left( {{e}_{i},{f}_{j}}\right) \) . Then \( B \) can be recovered from the formula \[ B\left( {x, y}\right) \mathrel{\text{:=}} {x}^{T}{My} = \mathop{\sum }\limits_{{i, j}}{m}_{ij}{x}_{i}{y}_{j}. \] (2.7) Conversely, if \( M \in {\mathbf{M}}_{n \times m}\left( K\right) \) is given, one can construct a bilinear form on \( E \times F \) by applying (2.7). We say that \( M \) is the matrix of the bilinear form \( B \), or that \( B \) is the bilinear form associated with \( M \) . We warn the reader that once again, the correspondence \( B \leftrightarrow M \) depends upon the choice of the bases. This correspondence is a (noncanonical) isomorphism between \( \mathbf{{Bil}}\left( {E, F}\right) \) and \( {\mathbf{M}}_{n \times m}\left( K\right) \) . We point out that, in contrast with the isomorphism with \( \mathcal{L}\left( {E;F}\right) \), \( n \) is now the dimension of \( E \) and \( m \) that of \( F \) . If \( M \) is associated with \( B \), its transpose \( {M}^{T} \) is associated with the bilinear form \( {B}_{T} \) defined on \( F \times E \) by \[ {B}_{T}\left( {y, x}\right) \mathrel{\text{:=}} B\left( {x, y}\right) . \] When \( F = E \), it makes sense to assume that \( \gamma = \beta \) . Then \( M \) is symmetric if and only if \( B \) is symmetric: \( B\left( {x, y}\right) = B\left( {y, x}\right) \) . Likewise, one says that \( M \) is alternate if \( B \) itself is an alternate form. This is equivalent to saying that \[ {m}_{ij} + {m}_{ji} = 0,\;{m}_{ii} = 0,\;\forall 1 \leq i, j \leq n. \] An alternate matrix is skew-symmetric, and the converse is true if \( \operatorname{char}\left( K\right) \neq 2 \) . If \( \operatorname{char}\left( K\right) = 2 \), an alternate matrix is a skew-symmetric matrix whose diagonal vanishes.
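The distinction between skew-symmetric and alternate matrices can be made concrete with a short numerical check (assuming `numpy`; reduction mod 2 stands in for arithmetic in a field of characteristic 2):

```python
import numpy as np

# An alternate matrix: m_ij + m_ji = 0 and every m_ii = 0.
A = np.array([[0, 1, -2],
              [-1, 0, 3],
              [2, -3, 0]])
assert np.array_equal(A.T, -A) and np.all(np.diag(A) == 0)

# In characteristic 2, skew-symmetric no longer implies alternate: mod 2 the
# identity matrix satisfies M^T = -M (since -1 = 1), yet its diagonal is nonzero.
I = np.eye(2, dtype=int)
assert np.array_equal(I.T % 2, (-I) % 2)      # skew-symmetric over GF(2)
assert not np.all(np.diag(I) % 2 == 0)        # but not alternate
```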
## 2.3.1 Change of Bases As for matrices associated with linear maps, we need a description of the effect of a change of bases for the matrix associated with a bilinear form. Denoting again by \( P, Q \) the matrices of the changes of basis \( \beta \mapsto {\beta }^{\prime } \) and \( \gamma \mapsto {\gamma }^{\prime } \), and by \( M,{M}^{\prime } \) the matrices of \( B \) in the bases \( \left( {\beta ,\gamma }\right) \) or \( \left( {{\beta }^{\prime },{\gamma }^{\prime }}\right) \), respectively, one has \[ {m}_{ij}^{\prime } = B\left( {{e}_{i}^{\prime },{f}_{j}^{\prime }}\right) = \mathop{\sum }\limits_{{k,\ell }}{p}_{ki}{q}_{\ell j}B\left( {{e}_{k},{f}_{\ell }}\right) = \mathop{\sum }\limits_{{k,\ell }}{p}_{ki}{q}_{\ell j}{m}_{k\ell }. \] Therefore, \[ {M}^{\prime } = {P}^{T}{MQ} \] The case \( F = E \) When \( F = E \) and \( \gamma = \beta ,{\gamma }^{\prime } = {\beta }^{\prime } \), the change of basis has the effect of replacing \( M \) by \( {M}^{\prime } = {P}^{T}{MP} \) . We say that \( M \) and \( {M}^{\prime } \) are congruent. If \( M \) is symmetric, then \( {M}^{\prime } \) is too. This was expected, inasmuch as one expresses the symmetry of the underlying bilinear form \( B \) . If the characteristic of \( K \) is distinct from 2, there is an isomorphism between \( {\operatorname{Sym}}_{n}\left( K\right) \) and the set of quadratic forms on \( {K}^{n} \) . This isomorphism is given by the formula \[ Q\left( {{e}_{i} + {e}_{j}}\right) - Q\left( {e}_{i}\right) - Q\left( {e}_{j}\right) = 2{m}_{ij}. \] In particular, \( Q\left( {e}_{i}\right) = {m}_{ii} \) . ## Exercises 1. Let \( G \) be an \( \mathbb{R} \) -vector space. Verify that its complexification \( G{ \otimes }_{\mathbb{R}}\mathbb{C} \) is a \( \mathbb{C} \) -vector space and that \( {\dim }_{\mathbb{C}}G{ \otimes }_{\mathbb{R}}\mathbb{C} = {\dim }_{\mathbb{R}}G \) . 2. Let \( M \in {\mathbf{M}}_{n \times m}\left( K\right) \) and \( {M}^{\prime } \in {\mathbf{M}}_{m \times p}\left( K\right) \) be given.
Show that \[ \operatorname{rk}\left( {M{M}^{\prime }}\right) \leq \min \left\{ {\operatorname{rk}M,\operatorname{rk}{M}^{\prime }}\right\} \] First show that \( \operatorname{rk}\left( {M{M}^{\prime }}\right) \leq \operatorname{rk}M \), and then apply this result to the transpose matrix. 3. Let \( K \) be a field and let \( A, B, C \) be matrices with entries in \( K \), of respective sizes \( n \times m,
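Both the congruence formula \( {M}^{\prime } = {P}^{T}{MQ} \) of Section 2.3.1 and the rank inequality of Exercise 2 can be checked on small examples. A sketch assuming `numpy` (all matrices are arbitrary choices):

```python
import numpy as np

# Change of bases for a bilinear form: with x = P x' and y = Q y', the matrix
# in the new bases is M' = P^T M Q, so the value x^T M y is unchanged.
M = np.array([[1, 0, 2],
              [3, 1, -1]])                         # B on E x F, dim E = 2, dim F = 3
P = np.array([[1, 1], [0, 1]])
Q = np.array([[1, 0, 0], [2, 1, 0], [0, 1, 1]])
Mp = P.T @ M @ Q
xp, yp = np.array([1, -2]), np.array([3, 0, 1])    # coordinates in the new bases
x, y = P @ xp, Q @ yp                              # the same vectors in the old bases
assert x @ M @ y == xp @ Mp @ yp

# Exercise 2: rk(M M') <= min(rk M, rk M'), since columns of M M' lie in the
# column space of M and rows of M M' lie in the row space of M'.
A = np.array([[1, 2, 3],
              [2, 4, 6]])        # rank 1
B = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 2]])        # rank 2 (row 3 = row 1 + row 2)
r = np.linalg.matrix_rank
assert r(A @ B) <= min(r(A), r(B))
assert r(A) == 1 and r(B) == 2 and r(A @ B) == 1
```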
18_Algebra Chapter 0
Definition 1.2
Definition 1.2. Let \( \varphi : A \rightarrow B \) be a morphism in an additive category A. A morphism \( \iota : K \rightarrow A \) is a kernel of \( \varphi \) if \( \varphi \circ \iota = 0 \) and for all morphisms \( \zeta : Z \rightarrow A \) such that \( \varphi \circ \zeta = 0 \) there exists a unique \( \widetilde{\zeta } : Z \rightarrow K \) making the diagram ![23387543-548b-40c2-8595-200756212a0f_584_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_584_0.jpg) commute. A morphism \( \pi : B \rightarrow C \) is a cokernel of \( \varphi \) if \( \pi \circ \varphi = 0 \) and for all morphisms \( \beta : B \rightarrow Z \) such that \( \beta \circ \varphi = 0 \) there exists a unique \( \widetilde{\beta } : C \rightarrow Z \) making the diagram ![23387543-548b-40c2-8595-200756212a0f_584_1.jpg](images/23387543-548b-40c2-8595-200756212a0f_584_1.jpg) commute. Here is the same in sound-bite format: If \( \varphi \circ \zeta = 0 \), then \( \zeta \) factors uniquely through \( \ker \varphi \) . If \( \beta \circ \varphi = 0 \), then \( \beta \) factors uniquely through \( \operatorname{coker}\varphi \) . We have to get used to the fact that morphisms may have 'many' kernels (or cokernels), uniquely identified with each other by virtue of being answers to a universal question (Proposition 15.4); it is common to harmlessly abuse language and talk about the kernel and the cokernel of a morphism. Further, we have to get used to the fact that the kernel \( \iota : K \rightarrow A \) of a morphism \( A \rightarrow B \) is a morphism. In a category such as \( \mathrm{{Ab}} \) we are used to thinking of the kernel as a subobject of \( A \), but this is really nothing but the datum of an inclusion map: the kernel is really that map. In an arbitrary category one cannot talk about 'subobjects' or 'inclusions'; the closest one can get to these notions is monomorphism. Similarly, 'surjective' is not really an option; epimorphism is the appropriate replacement. 
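In the prototypical case of \( \mathbb{Q} \)-vector spaces, the kernel of Definition 1.2 is the inclusion of the null space, and the universal factorization can be computed explicitly. A `sympy` sketch (the maps \( \varphi \) and \( \zeta \) below are arbitrary illustrations, not from the text):

```python
from sympy import Matrix

# phi : Q^3 -> Q^2; its kernel is the inclusion iota of the null space.
phi = Matrix([[1, 2, 3],
              [2, 4, 6]])                    # rank 1
iota = Matrix.hstack(*phi.nullspace())       # iota : K -> Q^3 (here K = Q^2)
assert phi * iota == Matrix.zeros(2, 2)      # phi o iota = 0

# A test morphism zeta : Q^2 -> Q^3 with phi o zeta = 0.
zeta = Matrix([[-2, -5],
               [1, 1],
               [0, 1]])
assert phi * zeta == Matrix.zeros(2, 2)

# The unique factorization zeta = iota o zeta_tilde: since iota has full
# column rank, the normal equations pick out the unique solution.
zeta_tilde = (iota.T * iota).inv() * iota.T * zeta
assert iota * zeta_tilde == zeta             # zeta factors through the kernel
```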
Recall ( 12.6) that \( \varphi : A \rightarrow B \) is a monomorphism if for all parallel morphisms \( {\zeta }_{1},{\zeta }_{2} : Z \rightarrow A \) , \[ Z\xrightarrow[{\zeta }_{2}]{{\zeta }_{1}}A\xrightarrow[]{\varphi }B \] \( \varphi \circ {\zeta }_{1} = \varphi \circ {\zeta }_{2} \Rightarrow {\zeta }_{1} = {\zeta }_{2} \) . It is an epimorphism if for all parallel \( {\beta }_{1},{\beta }_{2} : B \rightarrow Z \) , \[ A\xrightarrow[]{\varphi }B\xrightarrow[{\beta }_{2}]{{\beta }_{1}}Z \] \( {\beta }_{1} \circ \varphi = {\beta }_{2} \circ \varphi \Rightarrow {\beta }_{1} = {\beta }_{2} \) . One benefit of working in an additive category is that these definitions simplify a little: Lemma 1.3. A morphism \( \varphi : A \rightarrow B \) in an additive category is a monomorphism if and only if for all \( \zeta : Z \rightarrow A \) , \[ \varphi \circ \zeta = 0 \Rightarrow \zeta = 0. \] It is an epimorphism if and only if for all \( \beta : B \rightarrow Z \) , \[ \beta \circ \varphi = 0 \Rightarrow \beta = 0. \] Proof. This is simply because two morphisms with the same source and target are equal if and only if their difference in the corresponding Hom-set (which is an abelian group by hypothesis) is 0 . Are kernels necessarily monomorphisms? Yes: Lemma 1.4. In any additive category, kernels are monomorphisms and cokernels are epimorphisms. We will run into several such 'dual' statements, and I will give the proof for one half, leaving the other half to the reader; as a rule, the two proofs mirror each other closely. There is a good reason for this: the opposite (cf. Exercise 113.1) of an additive category is additive, and kernels, monomorphisms, etc., in one correspond to cokernels, epimorphisms, etc., in the other. Thus, proving one of these statements for all additive categories establishes at the same time the truth of its dual statement. 
However, going through the motions necessary to produce stand-alone proofs of the dual statements makes for good practice, and I invite the reader to work out the corresponding exercises at the end of this section. It is invariably the case that these arguments are relatively easy to understand and picture in one's mind but rather awkward to write down precisely. This seems to be a feature of many arrow-theoretic arguments and probably reflects inveterate biases acquired by early exposure to Set. Proof. Let \( \varphi : A \rightarrow B \) be a morphism in an additive category \( \mathrm{A} \), and let \( \operatorname{coker}\varphi \) : \( B \rightarrow C \) be its cokernel. Let \( \gamma : C \rightarrow Z \) be a morphism such that \( \gamma \circ \operatorname{coker}\varphi = 0 \) . The composition \( \left( {\gamma \circ \operatorname{coker}\varphi }\right) \circ \varphi \) is 0 ; by definition of cokernel, \( \gamma \circ \operatorname{coker}\varphi \) factors uniquely through \( C \) : ![23387543-548b-40c2-8595-200756212a0f_586_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_586_0.jpg) Since \( \gamma \circ \operatorname{coker}\varphi = 0 = 0 \circ \operatorname{coker}\varphi \), the uniqueness forces \( \gamma = 0 \) . This proves that \( \operatorname{coker}\varphi \) is an epimorphism, by Lemma 1.3. The proof that kernels are monomorphisms is analogous and is left to the reader (Exercise 1.9). Certain limits are guaranteed to exist in an additive category: finite products and coproducts do. On the other hand, kernels and cokernels (which are also limits) do not necessarily exist in an additive category. For example, the category of finitely generated modules over a ring is additive, but it does not have kernels in general (essentially because a submodule of a finitely generated module is not necessarily finitely generated).
But as soon as a morphism \( \varphi \) does have kernels or cokernels in an additive category, the basic qualities of that morphism can be detected 'as usual' by looking at \( \ker \varphi \) and \( \operatorname{coker}\varphi \) . This is the case for monomorphisms and epimorphisms: Lemma 1.5. Let \( \varphi : A \rightarrow B \) be a morphism in an additive category. Then \( \varphi \) is a monomorphism if and only if \( 0 \rightarrow A \) is its kernel, and \( \varphi \) is an epimorphism if and only if \( B \rightarrow 0 \) is its cokernel. Compare this statement with Proposition 1116.2. Proof. Let's do kernels this time. First assume \( \varphi : A \rightarrow B \) is a monomorphism. If \( \zeta : Z \rightarrow A \) is any morphism such that the composition \( Z \rightarrow A \rightarrow B \) is 0, then \( \zeta \) is 0 by Lemma 1.3, and in particular \( \zeta \) factors (uniquely) through \( 0 \rightarrow A \) . This proves that \( 0 \rightarrow A \) is a kernel of \( \varphi \), as stated. Conversely, assume that \( 0 \rightarrow A \) is a kernel for \( \varphi : A \rightarrow B \), and let \( \zeta : Z \rightarrow A \) be a morphism such that \( \varphi \circ \zeta = 0 \) . It follows that \( \zeta \) factors through \( 0 \rightarrow A \), since the latter is a kernel for \( \varphi \) : ![23387543-548b-40c2-8595-200756212a0f_587_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_587_0.jpg) This implies \( \zeta = 0 \), proving that \( \varphi \) is a monomorphism. The statement about epimorphisms and cokernels is left to the reader (Exercise 1.9). In view of Lemma 1.5, we should be able to use diagrams \[ 0 \rightarrow A \rightarrow B,\;A \rightarrow B \rightarrow 0 \] to signal that \( A \rightarrow B \) is a monomorphism, resp., an epimorphism: think 'exact'. However, the fact that kernels and cokernels do not necessarily exist makes talking about exactness problematic in a category that is 'only' additive. This situation will be rectified very soon.
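Read in the familiar abelian category of \( \mathbb{Q} \)-vector spaces, Lemma 1.5 says that a linear map is a monomorphism exactly when its null space is trivial. A `sympy` illustration (the matrices are arbitrary examples):

```python
from sympy import Matrix

# An injective map: 3 x 2 of rank 2, so its kernel object is 0 -> A.
phi_mono = Matrix([[1, 0],
                   [0, 1],
                   [1, 1]])
assert phi_mono.nullspace() == []            # only zeta = 0 satisfies phi o zeta = 0

# A non-injective map: there is a nonzero zeta with phi o zeta = 0.
phi_not = Matrix([[1, 1],
                  [2, 2]])
ker = phi_not.nullspace()[0]                 # a nonzero kernel vector
assert phi_not * ker == Matrix([0, 0])
assert ker != Matrix([0, 0])
```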
Incidentally, it is common to denote monomorphisms and epimorphisms by suitably decorated arrows; popular choices are \( \hookrightarrow \) and \( \twoheadrightarrow \), respectively. 1.3. Abelian categories. The moral at this point is that if a morphism in an additive category has kernels and cokernels, then these will behave as expected. But kernels and cokernels do not necessarily exist, and this prevents us from going much further. Also, while (as we have seen) kernels are monomorphisms and cokernels are epimorphisms in an additive category, there is no guarantee that monomorphisms should necessarily be kernels and epimorphisms should be cokernels. In the end, we simply demand these additional features explicitly. Definition 1.6. An additive category A is abelian if kernels and cokernels exist in A; every monomorphism is the kernel of some morphism; and every epimorphism is the cokernel of some morphism. As mentioned already, \( R \) -Mod is an abelian category, for every ring \( R \) . The prototype of an abelian category is \( \mathrm{{Ab}} \) : this\( {}^{1} \) is why these categories are called abelian. Since kernels are necessarily monomorphisms (by Lemma 1.4), we see that in an abelian category we can adopt a mantra entirely analogous to the useful 'kernel \( \Leftrightarrow \) submodule' of [111]5.3 vintage: in abelian categories, the slogan would be 'kernel \( \Leftrightarrow \) monomorphism' (and similarly for cokernels vs. epimorphisms). Remark 1.7. Just as it is convenient to think of monomorphisms \( A \hookrightarrow B \) as defining \( A \) as a 'subobject' of \( B \), it is occasionally convenient to think of epimorphisms as 'quotients': if \( \varphi : A \hookrightarrow B \) is a monomorphism, we can use \( B/A \) to denote (the target of) \( \operatorname{coker}\varphi \) . We will have no real use for this notation in this section, but it will come in handy later on. --- \( {}^{1} \) There is nothing particularly 'commutative' about an abelian category.
--- The very existence of kernels and cokernels links these two notions tightly in an abelian category: Lemma 1.8. In an abelian category \( \mathrm{A} \), every kernel is the kernel of its cokernel; every cokernel is the cokernel of its kernel. Proof. I will p
1139_(GTM44)Elementary Algebraic Geometry
Definition 2.11
Definition 2.11. Let \( V \subset {\mathbb{P}}^{n}\left( k\right) \) be a projective variety, and \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) a choice of hyperplane at infinity. The part of \( V \) in \( {\mathbb{P}}^{n}\left( k\right) \smallsetminus {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) is called the dehomogenization of \( V \) at \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \), or the affine part of \( V \) relative to \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) . The \( n + 1 \) canonical choices of hyperplanes described in the last section (defined by \( {X}_{1} = 0,\ldots ,{X}_{n + 1} = 0 \) in \( {k}^{n + 1} \) ) induce \( n + 1 \) canonical dehomogenizations of \( {\mathbb{P}}^{n}\left( k\right) \), and also of any projective variety \( V \) in \( {\mathbb{P}}^{n}\left( k\right) \) . As before, \( V \) is covered by the \( n + 1 \) corresponding affine parts of \( V \) . Notation 2.12. We denote the dehomogenization of \( V \) at \( {\mathbb{P}}_{\infty }{}^{n - 1} \) by \( {\mathrm{D}}_{{\mathbb{P}}_{\infty }^{n - 1}}\left( V\right) \), or by \( \mathrm{D}\left( V\right) \) if the hyperplane \( {\mathbb{P}}_{\infty }{}^{n - 1} \) is clear from context. We denote the above canonical dehomogenizations of any \( V \) by \( {\mathrm{D}}_{1}\left( V\right) ,\ldots ,{\mathrm{D}}_{n + 1}\left( V\right) \) . Just as there are \( n + 1 \) canonical dehomogenizations of \( {\mathbb{P}}^{n}\left( k\right) \), there are also \( n + 1 \) canonical dehomogenizations of any homogeneous polynomial \( p \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \) . Definition 2.13. Let \( q\left( {{X}_{1},\ldots ,{X}_{n + 1}}\right) \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \) be a homogeneous polynomial.
The polynomial \[ q\left( {{X}_{1},\ldots ,{X}_{i - 1},1,{X}_{i + 1},\ldots ,{X}_{n + 1}}\right) \] is called the dehomogenization of \( q\left( {{X}_{1},\ldots ,{X}_{n + 1}}\right) \) at \( {X}_{i} \) ; we denote it by \( {\mathrm{D}}_{{X}_{i}}\left( q\right) \), by \( {\mathrm{D}}_{i}\left( q\right) \), or by \( \mathrm{D}\left( q\right) \) if clear from context. Lemma 2.14. Let \( {q}_{1},\ldots ,{q}_{r} \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \) be homogeneous; let \( \mathbf{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \subset {\mathbb{P}}^{n}\left( k\right) \) be the projective variety defined by \( {q}_{1},\ldots ,{q}_{r} \) . Then \[ {\mathrm{D}}_{i}\left( {\mathrm{\;V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) }\right) = \mathrm{V}\left( {{\mathrm{D}}_{i}\left( {q}_{1}\right) ,\ldots ,{\mathrm{D}}_{i}\left( {q}_{r}\right) }\right) . \] (3) Proof. The variety \( \mathrm{V}\left( {{\mathrm{D}}_{i}\left( {q}_{1}\right) ,\ldots ,{\mathrm{D}}_{i}\left( {q}_{r}\right) }\right) \) can be looked at as the intersection of the variety \( \mathrm{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \) with the plane given by \( {X}_{i} = 1 \) in \( {k}^{n + 1} \) . Here are some relations between \( \mathrm{D} \) and \( \mathrm{H} \) : Lemma 2.15. Let \( p \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) . Then \[ {\mathrm{D}}_{i}\left( {{\mathrm{H}}_{i}\left( p\right) }\right) = p. \] Proof. Obvious from the definitions of \( {\mathrm{D}}_{i} \) and \( {\mathrm{H}}_{i} \) . Lemma 2.16. Let \( q \) be a homogeneous polynomial in \( k\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) . Then for any \( i = 1,\ldots, n \), it can happen that \[ {\mathrm{H}}_{i}\left( {{\mathrm{D}}_{i}\left( q\right) }\right) \neq q\text{.} \] Proof. Let \( q\left( {{X}_{1},{X}_{2}}\right) = {X}_{1}{X}_{2} \) .
Then \( {\mathrm{D}}_{1}\left( q\right) = {X}_{2} \), and \( {\mathrm{H}}_{1}\left( {{\mathrm{D}}_{1}\left( q\right) }\right) = {X}_{2} \neq {X}_{1}{X}_{2} \) . Similarly, \( {\mathrm{H}}_{2}\left( {{\mathrm{D}}_{2}\left( q\right) }\right) = {X}_{1} \neq {X}_{1}{X}_{2} \) . Lemma 2.17. Let \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) be a hyperplane at infinity of \( {\mathbb{P}}^{n}\left( k\right) \), and let \( V \subset {k}^{n} = {\mathbb{P}}^{n}\left( k\right) \smallsetminus {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) . Let \( \mathrm{H}\left( V\right) \) be the projective completion of \( V \) , and \( \mathrm{D} \) the operation of dehomogenizing \( \mathrm{H}\left( V\right) \) at \( {\mathbb{P}}_{\infty }{}^{n - 1} \) . Then \[ \mathrm{D}\left( {\mathrm{H}\left( V\right) }\right) = V \] (4) But if \( V \) is a variety in \( {\mathbb{P}}^{n}\left( k\right) \), then it can happen that \[ \mathrm{H}\left( {\mathrm{D}\left( V\right) }\right) \neq V \] (5) Proof. We leave verification of (4) as an easy exercise. (5) follows from Lemma 2.16 by letting \( V \) be \( \mathrm{V}\left( {{X}_{1}{X}_{2}}\right) \) . More generally, if \( V \) is any variety in \( {\mathbb{P}}^{n}\left( k\right) \) not containing \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \), then (5) holds for the variety \( V \cup {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) . We now give some illustrations of the above ideas. Many of the essential features can be brought out using real varieties; in fact we can learn much from real curves in \( {\mathbb{R}}^{2} \) and \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) . The reader will see that various ways of looking at \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) (1-subspaces in \( {\mathbb{R}}^{3} \), sphere with identified antipodal points, disk with antipodal boundary points identified) will all be valuable in understanding the nature of projective curves, of homogenization, and of dehomogenization.
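Lemmas 2.15 and 2.16 can be checked mechanically for concrete polynomials. The following sympy sketch is ours, not the book's: the helpers `D` and `H` mirror the \( {\mathrm{D}}_{i} \) and \( {\mathrm{H}}_{i} \) of the text, and the sample polynomial used for Lemma 2.15 is an arbitrary choice.

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2')

def D(q, var):
    # D_var(q): dehomogenize by setting var = 1 (Definition 2.13)
    return sp.expand(q.subs(var, 1))

def H(p, var, other_vars):
    # H_var(p): homogenize p by padding each monomial with powers of var
    return sp.Poly(p, *other_vars).homogenize(var).as_expr()

# Lemma 2.15: D_i(H_i(p)) = p for a (not necessarily homogeneous) p
p = X2**2 + X2 + 1
print(D(H(p, X1, (X2,)), X1))   # recovers X2**2 + X2 + 1

# Lemma 2.16: H_i(D_i(q)) can differ from q; take q = X1*X2
q = X1 * X2
print(H(D(q, X1), X1, (X2,)))   # X2, not X1*X2
print(H(D(q, X2), X2, (X1,)))   # X1, not X1*X2
```

Dehomogenizing \( {X}_{1}{X}_{2} \) kills the factor carrying the degree, so re-homogenizing cannot restore it; this is exactly the failure recorded in (5) of Lemma 2.17.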
In the first four examples (Examples 2.18-2.21) we start with an affine variety \( V = \mathrm{V}\left( p\right) \subset {\mathbb{R}}_{XY}, p \in \mathbb{R}\left\lbrack {X, Y}\right\rbrack \) . The homogenized polynomial \( {\mathrm{H}}_{Z}\left( p\right) \) then defines a homogeneous variety \( {\mathrm{H}}_{Z}\left( V\right) = \mathrm{H}\left( V\right) \) in \( {\mathbb{R}}_{XYZ} \), the original affine part in \( {\mathbb{R}}_{XY} \) being represented by the 1-spaces of \( \mathrm{H}\left( V\right) \) in \( {\mathbb{R}}_{XYZ} \smallsetminus {\mathbb{R}}_{XY} \) . In each of these examples we note that \[ {\left\lbrack \mathrm{H}\left( V\right) \cap \left( {\mathbb{R}}_{XYZ} \smallsetminus {\mathbb{R}}_{XY}\right) \right\rbrack }^{ - } = \mathrm{H}\left( V\right) \] (The bar denotes topological closure in \( {\mathbb{R}}_{XYZ} \) .) Since \( \mathrm{H}\left( V\right) \) is a variety, by our earlier observation (Remark 2.10), \( \mathrm{H}\left( V\right) \) represents the projective completion \( {V}^{c} \) of \( V \) . Example 2.18. Consider the real circle \( \mathrm{V}\left( {{X}^{2} + {Y}^{2} - 1}\right) \subset {\mathbb{R}}^{2} \) . The homogenized polynomial \( {X}^{2} + {Y}^{2} - {Z}^{2} \in \mathbb{R}\left\lbrack {X, Y, Z}\right\rbrack \) determines the cone in Figure 3 as well as the circles in Figure 4. (Since antipodal points are identified, this is just one circle.) ![9396b131-9501-41be-b2cf-577fd90ab693_46_0.jpg](images/9396b131-9501-41be-b2cf-577fd90ab693_46_0.jpg) We may dehomogenize at an arbitrary \( {\mathbb{P}}_{\infty }{}^{1}\left( \mathbb{R}\right) \) by choosing an appropriate 2-space in \( {\mathbb{R}}_{XYZ} \) . Since the intersection of the cone with a parallel translate of this 2-space yields a copy of the affine part of the curve with respect to \( {\mathbb{P}}_{\infty }{}^{1}\left( \mathbb{R}\right) \), we see the various affine parts of the circle in \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) are conic sections. 
Thus, dehomogenizing at the plane in \( {\mathbb{R}}_{XYZ} \) with \( Z \) -axis as normal gives a circle, and as we vary the normal, we get ellipses, a parabola, and hyperbolas. Likewise, dehomogenizing \( {X}^{2} + {Y}^{2} - {Z}^{2} \) to \( {X}^{2} + {Y}^{2} - 1 \), to \( {X}^{2} + 1 - {Z}^{2} \), and to \( 1 + {Y}^{2} - {Z}^{2} \) yields a circle and two hyperbolas, respectively. We may also get specific equations for affine curves induced in 2-spaces other than in the above canonical way. For example, let the 2-space in \( {\mathbb{R}}^{3} \) given by \( X = Z \) define the line at infinity; this subspace intersects our cone in just one 1-subspace \( L \) . Hence in \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) the curve touches this line at infinity in exactly one point. An affine representative with respect to this infinite line is obtained by intersecting the cone with a parallel translate of the plane \( X = Z \), say \( X = Z + 1 \) . What is the polynomial describing this affine representative? It may easily be found by choosing new coordinates \( {X}^{\prime },{Y}^{\prime },{Z}^{\prime } \) of \( {\mathbb{R}}^{3} \) so the new \( {Z}^{\prime } \) -axis is the 1-subspace \( L \) . This may be done by setting \[ X = {X}^{\prime } + {Z}^{\prime },\;Y = {Y}^{\prime },\;Z = {Z}^{\prime }. \] The plane \( X = Z + 1 \) then becomes \( {X}^{\prime } = 1 \) ; the equation of the cone in these coordinates becomes \[ {\left( {X}^{\prime }\right) }^{2} + 2{X}^{\prime }{Z}^{\prime } + {\left( {Y}^{\prime }\right) }^{2} = 0. \] In the affine plane \( {X}^{\prime } = 1 \) this equation becomes \[ 1 + 2{Z}^{\prime } + {\left( {Y}^{\prime }\right) }^{2} = 0 \] which is a parabola. The sketch of the affine curve in \( \mathrm{V}\left( {{X}^{\prime } - 1}\right) \) appears in Figure 5. (We identify \( \mathrm{V}\left( {{X}^{\prime } - 1}\right) \) with \( {\mathbb{R}}_{{Y}^{\prime }{Z}^{\prime }} \) .) The sketch of the entire projective curve appears in Figure 6.
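The coordinate change above is a routine substitution, which can be verified symbolically. In this sketch of ours the names `Xp`, `Yp`, `Zp` stand for \( {X}^{\prime },{Y}^{\prime },{Z}^{\prime } \).

```python
import sympy as sp

Xp, Yp, Zp = sp.symbols('Xp Yp Zp')

# Substitute X = X' + Z', Y = Y', Z = Z' into the cone X^2 + Y^2 - Z^2:
# the Z'^2 terms cancel, leaving X'^2 + 2 X' Z' + Y'^2.
cone = sp.expand((Xp + Zp)**2 + Yp**2 - Zp**2)
print(cone)

# Intersect with the affine plane X' = 1: the parabola 1 + 2 Z' + Y'^2 = 0.
affine = sp.expand(cone.subs(Xp, 1))
print(affine)
```

Solving the affine equation for \( {Z}^{\prime } \) gives \( {Z}^{\prime } = - \frac{1}{2}\left( {1 + {\left( {Y}^{\prime }\right) }^{2}}\right) \), the familiar graph form of a parabola in the \( \left( {{Y}^{\prime },{Z}^{\prime }}\right) \) -plane.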
![9396b131-9501-41be-b2cf-577fd90ab693_47_0.jpg](images/9396b131-9501-41be-b2cf-577fd90ab693_47_0.jpg) Our projective circle now touches the line at infinity in just one point, but by making this point finite, we can easily show it does so with multiplicity two. From what we have said so far, the reader can see that from a projective viewpoint
1124_(GTM30)Lectures in Abstract Algebra I Basic Concepts
Definition 1
Definition 1. A partially ordered set is a system consisting of a set \( S \) and a relation \( \geq \) ("greater than or equals" or "contains") satisfying the following postulates: \( {\mathrm{P}}_{1}\;a \geq b \) and \( b \geq a \) hold if and only if \( a = b \) . \( {\mathrm{P}}_{2}\; \) If \( a \geq b \) and \( b \geq c \), then \( a \geq c \) . If \( a \) and \( b \) are any elements of \( S \) we may have \( a \geq b \) or not; in the latter case we write \( a \ngeq b \) . Also if \( a \geq b \) and \( a \neq b \), then we write \( a > b \), and we agree to use \( b \leq a \) and \( b < a \) as alternatives for \( a \geq b \) and \( a > b \) . Examples. (1) The set \( I \) of integers, the set \( P \) of positive integers and the set \( R \) of real numbers are partially ordered sets relative to the usual \( \geq \) relation. (2) The set \( P \) of positive integers, the relation \( \geq \) defined by the rule that \( a \geq b \) if \( a \mid b \) . It is clear that \( {\mathrm{P}}_{1} \) and \( {\mathrm{P}}_{2} \) are satisfied. (3) The set \( \mathfrak{P} \) of subsets of an arbitrary set \( S \) with \( A \geq B \) defined to mean that \( B \) is a subset of \( A \) . (4) The set \( \mathfrak{L} \) of subgroups of a group \( \mathfrak{G} \) with \( {\mathfrak{H}}_{1} \geq {\mathfrak{H}}_{2} \) defined as in (3). In any one of the examples, (2), (3), or (4), there exist elements \( a \) and \( b \) that are not comparable in the sense that neither \( a \geq b \) nor \( b \geq a \) holds. If every pair of elements of a partially ordered set \( S \) is comparable \( \left( {a \geq b\text{or}b \geq a}\right) \), then \( S \) is said to be linearly ordered or is a chain. All of the examples in (1) are of this type. In a finite partially ordered set the relation \( > \) can be expressed in terms of the relation of covering. 
We say that \( {a}_{1} \) is a cover of \( {a}_{2} \) if \( {a}_{1} > {a}_{2} \) and no \( u \) exists such that \( {a}_{1} > u > {a}_{2} \) . It is clear that, if \( a > b \) in a finite partially ordered set, then we can find a chain \[ a = {a}_{1} > {a}_{2} > \cdots > {a}_{n} = b \] in which each \( {a}_{i} \) covers \( {a}_{i + 1} \) . Conversely the existence of such a chain implies that \( a > b \) . This remark enables us to represent any finite partially ordered set by a diagram. One obtains such a diagram by representing the elements of \( S \) by small circles (or dots) and placing the circle for \( {a}_{1} \) above that for \( {a}_{2} \) and connecting by a line if \( {a}_{1} \) is a cover of \( {a}_{2} \) . Then \( a > b \) if and only if there is a descending broken line connecting \( a \) to \( b \) . Some examples of such diagrams are the following: ![9c7d47d5-24bb-4360-bb03-9c6a5458d669_199_0.jpg](images/9c7d47d5-24bb-4360-bb03-9c6a5458d669_199_0.jpg) Evidently the notion of a diagram of a partially ordered set gives us another means to construct examples of such sets. ## EXERCISES 1. Show that the partially ordered set of subgroups of a cyclic group of prime power order is a chain. 2. Let \( S \) be the set of all functions which are continuous over the interval \( 0 \leq x \leq 1 \) . Define \( f \geq g \) if and only if \( f\left( x\right) \geq g\left( x\right) \) for all \( x \) in the closed interval. Show that the relation \( \geq \) is a partial ordering of \( S \) . 3. Obtain diagrams for the following partially ordered sets: the set of subsets of a set of three elements, the set of subgroups of the cyclic group of order 6 , the set of subgroups of \( {S}_{3} \) . 2. Lattices. An element \( u \) of a partially ordered set \( S \) is said to be an upper bound for the subset \( A \) of \( S \) if \( u \geq a \) for every \( {a\varepsilon A} \) . The element \( u \) is a least upper bound (l.u.b.) 
if \( u \) is an upper bound and \( u \leq v \) for any upper bound \( v \) of \( A \) . It is immediate that if a least upper bound exists then it is unique. Similar definitions and remarks apply to lower bounds. These notions are fundamental in the following Definition 2. A lattice (structure) is a partially ordered set in which any two elements have a least upper bound and a greatest lower bound (g.l.b.). We denote the l.u.b. of \( a \) and \( b \) by \( a \cup b \) (" \( a\operatorname{cup}b \) " or " \( a \) union \( b \) ") and the g.l.b. by \( a \cap b \) (" \( a \) cap \( b \) " or " \( a \) intersect \( b \) "). If \( a, b, c \) are any three elements of a lattice \( L \), then \( \left( {a \cup b}\right) \cup c \geq \) \( a, b, c \) . Moreover, if \( v \) is any element such that \( v \geq a, b, c \) then \( v \geq \left( {a \cup b}\right), c \) . Hence \( v \geq \left( {a \cup b}\right) \cup c \) . Thus \( \left( {a \cup b}\right) \cup c \) is a l.u.b. for \( a, b \) and \( c \) . A simple inductive argument shows that any finite subset of \( L \) has a l.u.b. Similarly any finite subset has a g.l.b. If the set consists of \( {a}_{1},{a}_{2},\cdots ,{a}_{n} \), then we denote these elements by \[ {a}_{1} \cup {a}_{2} \cup \cdots \cup {a}_{n}\text{ and }{a}_{1} \cap {a}_{2} \cap \cdots \cap {a}_{n} \] respectively. A lattice \( L \) is said to be complete if any (finite or infinite) subset \( A = \left\{ {a}_{\alpha }\right\} \) has a l.u.b. \( \cup {a}_{\alpha } \) and a g.l.b. \( \cap {a}_{\alpha } \) . The examples (1)-(4) of partially ordered sets listed in \( §1 \) are lattices. In the example (3) of subsets of a set, \( A \cup B \) and \( A \cap B \) have the usual significance of set-theoretic sum and set intersection. 
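Example (2) above, the positive integers ordered by divisibility, is in fact a lattice: the g.l.b. of \( a \) and \( b \) is \( \gcd \left( {a, b}\right) \) and the l.u.b. is \( \operatorname{lcm}\left( {a, b}\right) \). A brute-force check of the defining properties on a small finite range (the bound 30 is an arbitrary choice of ours):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

N = 30
elems = range(1, N + 1)

def geq(a, b):
    # a >= b in Example (2) means b divides a
    return a % b == 0

for a in elems:
    for b in elems:
        g, l = gcd(a, b), lcm(a, b)
        # gcd(a, b) is a lower bound of {a, b} ...
        assert geq(a, g) and geq(b, g)
        # ... and lies above every common lower bound, so it is the g.l.b.
        assert all(geq(g, c) for c in elems if geq(a, c) and geq(b, c))
        # dually, lcm(a, b) is an upper bound ...
        assert geq(l, a) and geq(l, b)
        # ... and lies below every common upper bound, so it is the l.u.b.
        assert all(geq(v, l) for v in range(1, N * N + 1)
                   if geq(v, a) and geq(v, b))

print("divisibility on 1..%d: g.l.b. = gcd, l.u.b. = lcm" % N)
```

The same exhaustive style of check works for the subset lattice of Example (3), with intersection and union in place of gcd and lcm.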
In the partially ordered set of subgroups of a group \( \mathfrak{G} \) , \( {\mathfrak{H}}_{1} \cup {\mathfrak{H}}_{2} \) is the group \( \left\lbrack {{\mathfrak{H}}_{1},{\mathfrak{H}}_{2}}\right\rbrack \) generated by \( {\mathfrak{H}}_{1} \) and \( {\mathfrak{H}}_{2} \) while \( {\mathfrak{H}}_{1} \cap {\mathfrak{H}}_{2} \) is the usual intersection. All of the diagrams given in § 1 except the last one represent lattices. The lattice of subsets of any set, and the lattice of subgroups of any group are complete. The lattice of rational numbers (the usual \( \geq \) ) is not complete. It is worth while to list the basic algebraic properties of the binary compositions \( \cup \) and \( \cap \) in a lattice. In doing so we shall be led to a second and somewhat more algebraic definition of a lattice. We note first that the l.u.b. and the g.l.b. are symmetric functions of their arguments, that is, \( a \cup b = b \cup a \) and \( a \cap b = b \cap a \) . Also we have seen that \( \left( {a \cup b}\right) \cup c \) is the l.u.b. of \( a, b, c \) . Since the l.u.b. is unique, \[ \left( {a \cup b}\right) \cup c = \left( {b \cup c}\right) \cup a = a \cup \left( {b \cup c}\right) . \] Similarly \[ \left( {a \cap b}\right) \cap c = a \cap \left( {b \cap c}\right) \text{.} \] It is clear that \[ a \cup a = a,\;a \cap a = a. \] Since \( a \cup b \geq a,\left( {a \cup b}\right) \cap a = a \) . Similarly \( \left( {a \cap b}\right) \cup a = a \) . Conversely suppose that \( L \) is any set in which there are defined two binary compositions \( \cup \) and \( \cap \) satisfying \( {\mathrm{L}}_{1}\;a \cup b = b \cup a,\;a \cap b = b \cap a. \) \( {\mathrm{L}}_{2}\;\left( {a \cup b}\right) \cup c = a \cup \left( {b \cup c}\right) ,\;\left( {a \cap b}\right) \cap c = a \cap \left( {b \cap c}\right) . \) \[ {\mathrm{L}}_{3}\;a \cup a = a,\;a \cap a = a. \] \[ {\mathrm{L}}_{4}\;\left( {a \cup b}\right) \cap a = a,\;\left( {a \cap b}\right) \cup a = a.
\] We shall show that \( L \) is a lattice relative to a suitable definition of \( \geq \) and that \( \cup \) and \( \cap \) are the l.u.b. and the g.l.b. in this lattice. Before proceeding to the proof we remark that we have made precisely the same assumptions on the two compositions \( \cup \) and \( \cap \) . Hence, we have the important principle of duality that states that, if \( S \) is a statement which can be deduced from our axioms, then the dual statement \( {S}^{\prime } \) obtained by interchanging \( \cup \) and \( \cap \) in \( S \) can also be deduced. We note next that, if \( a \) and \( b \) belong to a system satisfying \( {\mathrm{L}}_{1} - {\mathrm{L}}_{4} \), then the conditions \( a \cup b = a \) and \( a \cap b = b \) are equivalent; for, if \( a \cup b = a \) holds, then \( a \cap b = \left( {a \cup b}\right) \cap b = b \) and dually \( a \cap b = b \) implies \( a \cup b = a \) . We shall now define a relation \( \geq \) in \( L \) by specifying that \( a \geq b \) means that either \( a \cup b = a \) or \( a \cap b = b \) . Evidently in dualizing a statement \( a \geq b \) has to be replaced by \( b \geq a \) . We shall now show that the basic rules \( {\mathrm{P}}_{1} - {\mathrm{P}}_{2} \) for partially ordered sets hold for the relation that we have introduced. Suppose that \( a \geq b \) and \( b \geq a \) . Then \( a \cup b = a \) and \( b \cup a = b \) . Hence by the commutative law \( a = b \) . Also by \( {\mathrm{L}}_{3} \), \( a \cup a = a \) so that \( a \geq a \) . This proves \( {\mathrm{P}}_{1} \) . Next assume that \( a \geq b \) and \( b \geq c \) . Then \( a \cup b = a \) and \( b \cup c = b \) . Hence, \[ a \cup c = \left( {a \cup b}\right) \cup c = a \cup \left( {b \cup c}\right) = a \cup b = a \] and \( a \geq c \) . Hence \( {\mathrm{P}}_{2} \) holds. Since \( \left( {a \cup b}\right) \cap a = a, a \cup b \geq a \) . Similarly \( a \cup b \geq b \) . Now let \( c \) be any element such that \( c \geq a \) and \( c \geq b \) .
Then \( a \cup c = c \) and \( b \cup c = c \) . Hence \[ \left( {a \cup b}\right) \cup c = a \cup \left( {b \cup c}\right) = a \cup c = c \] and \( c \geq a \cup b \) . This shows that \( a \cup b \) is a l.u.b. of \( a \) and \( b \) . By duality \( a \cap b \) is a g.l.b. of \( a \) and \( b \) . This concludes the proof that a system satisfying \( {\mathrm{L}}_{1} - {\mathrm{L}}_{4} \) is a lattice. A subs
1057_(GTM217)Model Theory
Definition 6.2.5
Definition 6.2.5 If \( \mathcal{M} \) is an \( \mathcal{L} \) -structure and \( \phi \) is any \( \mathcal{L} \) -formula, we define \( \operatorname{RM}\left( \phi \right) \), the Morley rank of \( \phi \), to be \( {\operatorname{RM}}^{\mathcal{N}}\left( \phi \right) \), where \( \mathcal{N} \) is any \( {\aleph }_{0} \) -saturated elementary extension of \( \mathcal{M} \) . Morley rank gives us our desired notion of "dimension" for definable sets. Definition 6.2.6 Suppose that \( \mathcal{M} \vDash T \) and \( X \subseteq {M}^{n} \) is defined by the \( {\mathcal{L}}_{M} \) -formula \( \phi \left( \bar{v}\right) \) . We let \( \operatorname{RM}\left( X\right) \), the Morley rank of \( X \), be \( \operatorname{RM}\left( \phi \right) \) . In particular, if \( \mathcal{M} \) is \( {\aleph }_{0} \) -saturated and \( X \subseteq {M}^{n} \) is definable, then \( \operatorname{RM}\left( X\right) \geq \alpha + 1 \) if and only if we can find \( {Y}_{1},{Y}_{2},\ldots \) pairwise disjoint definable subsets of \( X \) of Morley rank at least \( \alpha \) . The next lemma shows that Morley rank has some basic properties that we would want for a good notion of dimension. Lemma 6.2.7 Let \( \mathcal{M} \) be an \( \mathcal{L} \) -structure and let \( X \) and \( Y \) be definable subsets of \( {M}^{n} \) . i) If \( X \subseteq Y \), then \( \operatorname{RM}\left( X\right) \leq \operatorname{RM}\left( Y\right) \) . ii) \( \operatorname{RM}\left( {X \cup Y}\right) \) is the maximum of \( \operatorname{RM}\left( X\right) \) and \( \operatorname{RM}\left( Y\right) \) . iii) If \( X \) is nonempty, then \( \operatorname{RM}\left( X\right) = 0 \) if and only if \( X \) is finite. Proof We leave the proofs of i) and ii) as exercises. iii) Let \( X = \phi \left( \mathcal{M}\right) \) . Because \( X \) is nonempty, \( \operatorname{RM}\left( \phi \right) \geq 0 \) . 
Because \( \phi \left( \mathcal{M}\right) \) is finite if and only if \( \phi \left( \mathcal{N}\right) \) is finite for any \( \mathcal{M} \prec \mathcal{N} \), we may, without loss of generality, assume that \( \mathcal{M} \) is \( {\aleph }_{0} \) -saturated. If \( X \) is finite, then, because \( X \) cannot be partitioned into infinitely many nonempty sets, \( \operatorname{RM}\left( X\right) \ngeqslant 1 \) . Thus \( \operatorname{RM}\left( X\right) = 0 \) . If \( X \) is infinite, let \( {a}_{1},{a}_{2},\ldots \) be distinct elements of \( X \) . Then, \( \left\{ {a}_{1}\right\} ,\left\{ {a}_{2}\right\} ,\ldots \) is an infinite sequence of pairwise disjoint definable subsets of \( X \) . Thus \( \operatorname{RM}\left( X\right) \geq 1 \) . We will be interested in theories where every formula is ranked. Definition 6.2.8 A theory \( T \) is called totally transcendental if, for all \( \mathcal{M} \vDash T \), if \( \phi \) is an \( {\mathcal{L}}_{M} \) -formula, then \( \operatorname{RM}\left( \phi \right) < \infty \) . ## The Monster Model The definition we just gave of Morley rank is rather awkward because even if a formula has parameters from \( \mathcal{M} \vDash T \) we need to work in an \( {\aleph }_{0} \) -saturated elementary extension to calculate the Morley rank. Then, to show that Morley rank is well-defined, we must show that our calculation did not depend on our choice of \( {\aleph }_{0} \) -saturated model. Arguments such as this come up very frequently and tend to be both routine and repetitive. To simplify proofs, we will frequently adopt the expository device of assuming that we are working in a fixed, very large, saturated model of \( T \) . Let \( \mathbb{M} \vDash T \) be saturated of cardinality \( \kappa \), where \( \kappa \) is "very large." We call \( \mathbb{M} \) the monster model of \( T \) . 
If \( \mathcal{M} \vDash T \) and \( \left| M\right| \leq \kappa \), then by Lemma 4.3.17 there is an elementary embedding of \( \mathcal{M} \) into \( \mathbb{M} \) . Moreover, if \( \mathcal{M} \prec \mathbb{M} \) , \( f : \mathcal{M} \rightarrow \mathcal{N} \) is elementary, and \( \left| N\right| < \kappa \), we can find \( j : \mathcal{N} \rightarrow \mathbb{M} \) elementary such that \( j \mid M \) is the identity. Thus, if we focus attention on models of \( T \) of cardinality less than \( \kappa \), we can view all models as elementary submodels of \( \mathbb{M} \) . There are several problems with this approach. First, we really want to prove theorems about all models of \( T \), not just the small ones. But if there are arbitrarily large saturated models of \( T \), then we can prove something about all models of \( T \) by proving it for submodels of larger and larger monster models. Second, and more problematic, for general theories \( T \) there may not be any saturated models. For our purposes, this is not a serious problem because, for the remainder of this text, we will be focusing on \( \omega \) -stable theories and, by Theorem 4.3.15, if \( T \) is \( \omega \) -stable, there are saturated models of \( T \) of cardinality \( \kappa \) for each regular cardinal \( \kappa \) . If we were considering arbitrary theories, we could get around this by making some extra set-theoretic assumptions. For example, we could assume that for all cardinals \( \lambda \) there is a strongly inaccessible cardinal \( \kappa > \lambda \) . Then, by Corollary 4.3.14, there are arbitrarily large saturated models. We will tacitly assume that \( T \) has arbitrarily large saturated models, and thus we can prove theorems about all models of \( T \) by proving theorems about elementary submodels of saturated models. \( {}^{1} \) We will only use this assumption in arguments where, by careful bookkeeping as in the proofs above, we could avoid it.
For the remainder of the chapter, we make the following assumptions:

- \( \mathbb{M} \) is a large saturated model of \( T \) ;

- all \( \mathcal{M} \vDash T \) that we consider are elementary submodels of \( \mathbb{M} \) and \( \left| M\right| < \left| \mathbb{M}\right| \) ;

- all sets \( A \) of parameters that we consider are subsets of \( \mathbb{M} \) with \( \left| A\right| < \left| \mathbb{M}\right| \) ;

- if \( \phi \left( {\bar{v},\bar{a}}\right) \) is a formula with parameters, we assume \( \bar{a} \in \mathbb{M} \) ;

- we write \( \operatorname{tp}\left( {\bar{a}/A}\right) \) for \( {\operatorname{tp}}^{\mathbb{M}}\left( {\bar{a}/A}\right) \) and \( {S}_{n}\left( A\right) \) for \( {S}_{n}^{\mathbb{M}}\left( A\right) \) .

Note that if \( \bar{a} \in M \), then, because \( \mathcal{M} \prec \mathbb{M},\mathcal{M} \vDash \phi \left( \bar{a}\right) \) if and only if \( \mathbb{M} \vDash \phi \left( \bar{a}\right) \) . We will say that \( \phi \left( \bar{a}\right) \) holds if \( \mathbb{M} \vDash \phi \left( \bar{a}\right) \) . Because \( \mathbb{M} \) is saturated, if \( A \subset \mathbb{M} \) and \( p \in {S}_{n}\left( A\right) \), then \( p \) is realized in \( \mathbb{M} \) . Moreover, if \( f : A \rightarrow \mathbb{M} \) is a partial elementary map, then \( f \) extends to an automorphism of \( \mathbb{M} \) . We could define Morley rank referring only to the monster model.
The Morley rank of an \( {\mathcal{L}}_{\mathbb{M}} \) -formula is inductively defined as follows: \( \operatorname{RM}\left( \phi \right) \geq 0 \) if and only if \( \phi \left( \mathbb{M}\right) \) is nonempty; \( \operatorname{RM}\left( \phi \right) \geq \alpha + 1 \) if and only if there are \( {\mathcal{L}}_{\mathbb{M}} \) -formulas \( {\psi }_{1},{\psi }_{2},\ldots \) such that \( {\psi }_{1}\left( \mathbb{M}\right) ,{\psi }_{2}\left( \mathbb{M}\right) ,\ldots \) is an infinite sequence of pairwise disjoint subsets of \( \phi \left( \mathbb{M}\right) \) and \( \operatorname{RM}\left( {\psi }_{i}\right) \geq \alpha \) for each \( i \) ; if \( \alpha \) is a limit ordinal, \( \operatorname{RM}\left( \phi \right) \geq \alpha \) if and only if \( \operatorname{RM}\left( \phi \right) \geq \beta \) for each \( \beta < \alpha \) .

\( {}^{1} \) There are other approaches to the monster model, which we discuss in the remarks at the end of this chapter.

## Morley Degree

If \( X \) is a definable set of Morley rank \( \alpha \), then we cannot partition \( X \) into infinitely many pairwise disjoint definable subsets of Morley rank \( \alpha \) . Indeed, we will show that there is a number \( d \) such that \( X \) cannot be partitioned into more than \( d \) definable sets of Morley rank \( \alpha \) . Proposition 6.2.9 Let \( \phi \) be an \( {\mathcal{L}}_{\mathbb{M}} \) -formula with \( \operatorname{RM}\left( \phi \right) = \alpha \) for some ordinal \( \alpha \) . There is a natural number \( d \) such that if \( {\psi }_{1},\ldots ,{\psi }_{n} \) are \( {\mathcal{L}}_{\mathbb{M}} \) -formulas such that \( {\psi }_{1}\left( \mathbb{M}\right) ,\ldots ,{\psi }_{n}\left( \mathbb{M}\right) \) are disjoint subsets of \( \phi \left( \mathbb{M}\right) \) with \( \operatorname{RM}\left( {\psi }_{i}\right) = \alpha \) for all \( i \), then \( n \leq d \) . We call \( d \) the Morley degree of \( \phi \) and write \( {\deg }_{\mathrm{M}}\left( \phi \right) = d \) .
Proof We build \( S \subseteq {2}^{ < \omega } \) and \( \left( {{\phi }_{\sigma } : \sigma \in S}\right) \) with the following properties. i) If \( \sigma \in S \) and \( \tau \subseteq \sigma \), then \( \tau \in S \) . ii) \( {\phi }_{\varnothing } = \phi \) . iii) \( \operatorname{RM}\left( {\phi }_{\sigma }\right) = \alpha \) for all \( \sigma \in S \) . iv) If \( \sigma \in S \), there are two cases to consider. If there is an \( {\mathcal{L}}_{\mathbb{M}} \) -formula \( \psi \) such that \( \operatorname{RM}\left( {{\phi }_{\sigma } \land \psi }\right) = \operatorname{RM}\left( {{\phi }_{\sigma }\land \neg \psi }\right) = \alpha \), then \( \sigma ,0 \) and \( \sigma ,1 \) are in \( S \) , \( {\phi }_{\sigma ,0} \) is \( {\phi }_{\sigma } \land \psi \), and \( {\phi }_{\sigma ,1} \) is \( {\phi }_{\sigma } \land \neg \psi \) . If there is no such
1092_(GTM249)Classical Fourier Analysis
Definition 3.3.5
Definition 3.3.5. Given \( 0 < \gamma < 1 \) and \( f \) a function on \( {\mathbf{T}}^{n} \), define the homogeneous Lipschitz seminorm of order \( \gamma \) of \( f \) by \[ \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} = \mathop{\sup }\limits_{\substack{{x, h \in {\mathbf{T}}^{n}} \\ {h \neq 0} }}\frac{\left| f\left( x + h\right) - f\left( x\right) \right| }{{\left| h\right| }^{\gamma }} \] and define the homogeneous Lipschitz space of order \( \gamma \) as \[ {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) = \left\{ {f : {\mathbf{T}}^{n} \rightarrow \mathbf{C}\text{ with }\parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} < \infty }\right\} \] Functions in \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) \) are called homogeneous Lipschitz functions of order \( \gamma \) . There is an analogous definition for the inhomogeneous norm. Definition 3.3.6. For \( 0 < \gamma < 1 \) and \( f \) a function on \( {\mathbf{T}}^{n} \), define the inhomogeneous Lipschitz norm of order \( \gamma \) of \( f \) by \[ \parallel f{\parallel }_{{\Lambda }_{\gamma }} = \parallel f{\parallel }_{{L}^{\infty }} + \mathop{\sup }\limits_{\substack{{x, h \in {\mathbf{T}}^{n}} \\ {h \neq 0} }}\frac{\left| f\left( x + h\right) - f\left( x\right) \right| }{{\left| h\right| }^{\gamma }} = \parallel f{\parallel }_{{L}^{\infty }} + \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }}. \] Also define the inhomogeneous Lipschitz space of order \( \gamma \) as \[ {\Lambda }_{\gamma }\left( {\mathbf{T}}^{n}\right) = \left\{ {f : {\mathbf{T}}^{n} \rightarrow \mathbf{C}\text{ with }\parallel f{\parallel }_{{\Lambda }_{\gamma }} < \infty }\right\} . \] Functions in \( {\Lambda }_{\gamma }\left( {\mathbf{T}}^{n}\right) \) are called inhomogeneous Lipschitz functions of order \( \gamma \) . Remark 3.3.7. 
Functions in both spaces \( {\Lambda }_{\gamma }\left( {\mathbf{T}}^{n}\right) \) and \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) \) are obviously continuous and therefore bounded. Moreover, the functional \( \parallel \cdot {\parallel }_{{\Lambda }_{\gamma }} \) is a norm on \( {\Lambda }_{\gamma }\left( {\mathbf{T}}^{n}\right) \) . The positive functional \( \parallel \cdot {\parallel }_{{\dot{\Lambda }}_{\gamma }} \) satisfies the triangle inequality, but it does not satisfy the property \( \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} = 0 \Rightarrow f = 0 \) required to be a norm. It is therefore a semi-norm on \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) \) . However, if we identify functions whose difference is a constant, we form a space of the equivalence classes \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) /\{ \) constants \( \} \) on which \( \parallel \cdot {\parallel }_{{\dot{\Lambda }}_{\gamma }} \) is a norm. Remark 3.3.8. We already observed that elements of \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) \) are continuous and thus bounded. Therefore, \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) \subseteq {L}^{\infty }\left( {\mathbf{T}}^{n}\right) \) in the set-theoretic sense. However, the norm inequality \( \parallel f{\parallel }_{{L}^{\infty }} \leq C\parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} \) for all \( f \in {\dot{\Lambda }}_{\gamma } \) fails for all constants \( C \) . For example, take \( f = N + \sin \left( {{2\pi }{x}_{1}}\right) \) on \( {\mathbf{T}}^{n} \) and let \( N \rightarrow \infty \) to see that this is the case. The following theorem indicates how the smoothness of a function is reflected by the decay of its Fourier coefficients. Theorem 3.3.9. Let \( s \in \mathbf{Z} \) with \( s \geq 0 \) . (a) Suppose that \( {\partial }^{\alpha }f \) exist and are integrable for all \( \left| \alpha \right| \leq s \) . 
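The failure noted in Remark 3.3.8 can be seen numerically. The sketch below, ours and deliberately rough, estimates the homogeneous Lipschitz seminorm on \( {\mathbf{T}}^{1} \) on a grid: adding the constant \( N \) inflates \( \parallel f{\parallel }_{{L}^{\infty }} \) without changing \( \parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} \) at all, so no inequality \( \parallel f{\parallel }_{{L}^{\infty }} \leq C\parallel f{\parallel }_{{\dot{\Lambda }}_{\gamma }} \) can hold.

```python
import numpy as np

def lipschitz_seminorm(f_vals, gamma):
    """Discrete estimate of the homogeneous Lipschitz seminorm on T^1:
    max over grid points x and grid shifts h of |f(x+h) - f(x)| / |h|^gamma."""
    n = len(f_vals)
    best = 0.0
    for k in range(1, n):
        h = min(k, n - k) / n                   # distance on the torus
        diff = np.abs(np.roll(f_vals, -k) - f_vals).max()
        best = max(best, diff / h**gamma)
    return best

x = np.arange(256) / 256
f = np.sin(2 * np.pi * x)
gamma = 0.5

for N in (0, 10, 1000):
    g = N + f                                   # the function of Remark 3.3.8
    print(N, np.abs(g).max(), lipschitz_seminorm(g, gamma))
```

The seminorm column is identical for every \( N \) because differences \( g\left( {x + h}\right) - g\left( x\right) \) kill the added constant; only the \( {L}^{\infty } \) column grows.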
Then \[ \left| {\widehat{f}\left( m\right) }\right| \leq {\left( \frac{\sqrt{n}}{2\pi }\right) }^{s}\frac{\mathop{\max }\limits_{{\left| \alpha \right| = s}}\left| {\widehat{{\partial }^{\alpha }f}\left( m\right) }\right| }{{\left| m\right| }^{s}},\;m \neq 0, \] (3.3.7) and thus \[ \left| {\widehat{f}\left( m\right) }\right| \left( {1 + {\left| m\right| }^{s}}\right) \rightarrow 0 \] as \( \left| m\right| \rightarrow \infty \) . In particular this holds when \( f \) lies in \( {\mathcal{C}}^{s}\left( {\mathbf{T}}^{n}\right) \) . (b) Suppose that \( {\partial }^{\alpha }f \) exist for all \( \left| \alpha \right| \leq s \) and whenever \( \left| \alpha \right| = s,{\partial }^{\alpha }f \) are in \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) \) for some \( 0 < \gamma < 1 \) . Then \[ \left| {\widehat{f}\left( m\right) }\right| \leq \frac{{\left( \sqrt{n}\right) }^{s + \gamma }}{{\left( 2\pi \right) }^{s}{2}^{\gamma + 1}}\frac{\mathop{\max }\limits_{{\left| \alpha \right| = s}}{\begin{Vmatrix}{\partial }^{\alpha }f\end{Vmatrix}}_{{\dot{\Lambda }}_{\gamma }}}{{\left| m\right| }^{s + \gamma }},\;m \neq 0. \] (3.3.8) Proof. Fix \( m \in {\mathbf{Z}}^{n} \smallsetminus \{ 0\} \) and pick a \( j \) such that \( \left| {m}_{j}\right| = \mathop{\sup }\limits_{{1 \leq k \leq n}}\left| {m}_{k}\right| \) . Then clearly \( {m}_{j} \neq 0 \) . Integrating by parts \( s \) times with respect to the variable \( {x}_{j} \), we obtain \[ \widehat{f}\left( m\right) = {\int }_{{\mathbf{T}}^{n}}f\left( x\right) {e}^{-{2\pi ix} \cdot m}{dx} = {\left( -1\right) }^{s}{\int }_{{\mathbf{T}}^{n}}\left( {{\partial }_{j}^{s}f}\right) \left( x\right) \frac{{e}^{-{2\pi ix} \cdot m}}{{\left( -2\pi i{m}_{j}\right) }^{s}}{dx}, \] (3.3.9) where the boundary terms all vanish because of the periodicity of the integrand. Taking absolute values and using \( \left| m\right| \leq \sqrt{n}\left| {m}_{j}\right| \), we obtain assertion (3.3.7). We now turn to the second part of the theorem. 
Let \( {e}_{j} = \left( {0,\ldots ,1,\ldots ,0}\right) \) be the element of the torus \( {\mathbf{T}}^{n} \) whose \( j \) th coordinate is one and all the others are zero. A simple change of variables together with the fact that \( {e}^{\pi i} = - 1 \) gives that \[ {\int }_{{\mathbf{T}}^{n}}\left( {{\partial }_{j}^{s}f}\right) \left( x\right) {e}^{-{2\pi ix} \cdot m}{dx} = - {\int }_{{\mathbf{T}}^{n}}\left( {{\partial }_{j}^{s}f}\right) \left( {x - \frac{{e}_{j}}{2{m}_{j}}}\right) {e}^{-{2\pi ix} \cdot m}{dx}, \] which implies that \[ {\int }_{{\mathbf{T}}^{n}}\left( {{\partial }_{j}^{s}f}\right) \left( x\right) {e}^{-{2\pi ix} \cdot m}{dx} = \frac{1}{2}{\int }_{{\mathbf{T}}^{n}}\left\lbrack {\left( {{\partial }_{j}^{s}f}\right) \left( x\right) - \left( {{\partial }_{j}^{s}f}\right) \left( {x - \frac{{e}_{j}}{2{m}_{j}}}\right) }\right\rbrack {e}^{-{2\pi ix} \cdot m}{dx}. \] Now use the estimate \[ \left| {\left( {{\partial }_{j}^{s}f}\right) \left( x\right) - \left( {{\partial }_{j}^{s}f}\right) \left( {x - \frac{{e}_{j}}{2{m}_{j}}}\right) }\right| \leq \frac{{\begin{Vmatrix}{\partial }_{j}^{s}f\end{Vmatrix}}_{{\dot{\Lambda }}_{\gamma }}}{{\left( 2\left| {m}_{j}\right| \right) }^{\gamma }} \] and identity (3.3.9) to conclude the proof of (3.3.8). The following is an immediate consequence. Corollary 3.3.10. Let \( s \in \mathbf{Z} \) with \( s \geq 0 \) . (a) Suppose that \( {\partial }^{\alpha }f \) exist and are integrable for all \( \left| \alpha \right| \leq s \) . Then for some constant \( {c}_{n, s} \) we have \[ \left| {\widehat{f}\left( m\right) }\right| \leq {c}_{n, s}\frac{\max \left( {\parallel f{\parallel }_{{L}^{1}},\mathop{\max }\limits_{{\left| \alpha \right| = s}}{\begin{Vmatrix}{\partial }^{\alpha }f\end{Vmatrix}}_{{L}^{1}}}\right) }{{\left( 1 + \left| m\right| \right) }^{s}}. 
\] (3.3.10) (b) Suppose that \( {\partial }^{\alpha }f \) exist for all \( \left| \alpha \right| \leq s \) and whenever \( \left| \alpha \right| = s,{\partial }^{\alpha }f \) are in \( {\dot{\Lambda }}_{\gamma }\left( {\mathbf{T}}^{n}\right) \) for some \( 0 < \gamma < 1 \) . Then for some constant \( {c}_{n, s}^{\prime } \) we have \[ \left| {\widehat{f}\left( m\right) }\right| \leq {c}_{n, s}^{\prime }\frac{\max \left( {\parallel f{\parallel }_{{L}^{1}},\mathop{\max }\limits_{{\left| \alpha \right| = s}}{\begin{Vmatrix}{\partial }^{\alpha }f\end{Vmatrix}}_{{\dot{\Lambda }}_{\gamma }}}\right) }{{\left( 1 + \left| m\right| \right) }^{s + \gamma }}. \] (3.3.11) Remark 3.3.11. The conclusions of Theorem 3.3.9 and Corollary 3.3.10 are also valid when \( \gamma = 1 \) . In this case the spaces \( {\Lambda }_{\gamma } \) should be replaced by the space Lip 1 equipped with the seminorm \[ \parallel f{\parallel }_{\text{Lip }1} = \mathop{\sup }\limits_{\substack{{x, h \in {\mathbf{T}}^{n}} \\ {h \neq 0} }}\frac{\left| f\left( x + h\right) - f\left( x\right) \right| }{\left| h\right| }. \] There is a slight lack of uniformity in the notation here, since in the theory of Lipschitz spaces the notation \( {\dot{\Lambda }}_{1} \) is usually reserved for the space with seminorm \[ \parallel f{\parallel }_{{\dot{\Lambda }}_{1}} = \mathop{\sup }\limits_{\substack{{x, h \in {\mathbf{T}}^{n}} \\ {h \neq 0} }}\frac{\left| f\left( x + h\right) + f\left( x - h\right) - 2f\left( x\right) \right| }{\left| h\right| }. \] The following proposition provides a partial converse to Theorem 3.3.9. We denote below by \( \left\lbrack \left\lbrack s\right\rbrack \right\rbrack \) the largest integer strictly less than a given real number \( s \) . 
Then \( \left\lbrack \left\lbrack s\right\rbrack \right\rbrack \) is equal to the integer part \( \left\lbrack s\right\rbrack \) of \( s \), unless \( s \) is an integer, in which case \( \left\lbrack \left\lbrack s\right\rbrack \right\rbrack = \left\lbrack s\right\rbrack - 1 \) . Proposition 3.3.12. Let \( s > 0 \) and suppose that \( f \) is an integrable function on the torus with \[ \left| {\widehat{f}\left( m\right) }\right| \leq C{\left( 1 + \left| m\right| \right) }^{-s - n} \] (3.3.12) for all \( m \in {\mathbf{Z}}^{n} \) . Then \( f \) has partial derivatives of all
1096_(GTM252)Distributions and Operators
Definition 5.16
Definition 5.16. For \( u \in {\mathcal{S}}^{\prime } \), the prescription \[ \langle \mathcal{F}u,\varphi \rangle = \langle u,\mathcal{F}\varphi \rangle \;\text{ for all }\varphi \in \mathcal{S} \] (5.33) defines a temperate distribution \( \mathcal{F}u \) (also denoted \( \widehat{u} \) ); and \( \mathcal{F} : u \mapsto \mathcal{F}u \) is a continuous operator on \( {\mathcal{S}}^{\prime }\left( {\mathbb{R}}^{n}\right) \) . We also define \( F = {\left( 2\pi \right) }^{-n/2}\mathcal{F} \) . The definition is of course chosen such that it is consistent with the formula (5.17) for the case where \( u \in \mathcal{S} \) . It is also consistent with the definition of \( {\mathcal{F}}_{2} \) on \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \), since \( {\varphi }_{k} \rightarrow u \) in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \) implies \( \mathcal{F}{\varphi }_{k} \rightarrow {\mathcal{F}}_{2}u \) in \( {\mathcal{S}}^{\prime } \) . Similarly, the definition is consistent with the definition on \( {L}_{1}\left( {\mathbb{R}}^{n}\right) \) . That \( \mathcal{F} \) is a continuous operator on \( {\mathcal{S}}^{\prime } \) is seen as in Theorem 3.8 or by use of Remark 5.15. The operator \( \overline{\mathcal{F}} \) is similarly extended to \( {\mathcal{S}}^{\prime } \), on the basis of the identity \[ \langle \overline{\mathcal{F}}u,\varphi \rangle = \langle u,\overline{\mathcal{F}}\varphi \rangle \] (5.34) and since \[ {\left( 2\pi \right) }^{-n}\overline{\mathcal{F}}\mathcal{F} = {\left( 2\pi \right) }^{-n}\mathcal{F}\overline{\mathcal{F}} = I \] (5.35) on \( \mathcal{S} \), this identity is likewise carried over to \( {\mathcal{S}}^{\prime } \), so we obtain: Theorem 5.17. \( \mathcal{F} \) is a homeomorphism of \( {\mathcal{S}}^{\prime } \) onto \( {\mathcal{S}}^{\prime } \), with inverse \( {\mathcal{F}}^{-1} = \) \( {\left( 2\pi \right) }^{-n}\overline{\mathcal{F}} \) . 
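A finite-dimensional sanity check may help here. For the DFT on \( {\mathbb{C}}^{N} \), the analogue of the defining identity \( \langle \mathcal{F}u,\varphi \rangle = \langle u,\mathcal{F}\varphi \rangle \) in (5.33), and of the inversion in Theorem 5.17, can be verified directly. This is a sketch of ours using NumPy; the vectors are arbitrary random data:

```python
import numpy as np

# Discrete model: the DFT plays the role of F on C^N. The bilinear pairing
# <u, v> = sum_n u[n] v[n] (no conjugate) satisfies <Fu, v> = <u, Fv>,
# mirroring (5.33), since both sides equal sum_{n,k} u[n] v[k] e^{-2πikn/N};
# and ifft inverts fft, mirroring Theorem 5.17.
rng = np.random.default_rng(0)
N = 64
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)

lhs = np.sum(np.fft.fft(u) * v)   # <Fu, v>
rhs = np.sum(u * np.fft.fft(v))   # <u, Fv>
print(np.allclose(lhs, rhs))      # True: the two pairings agree

print(np.allclose(np.fft.ifft(np.fft.fft(u)), u))  # True: inversion
```

Both identities are exact for the DFT because the kernel \( {e}^{-{2\pi ikn}/N} \) is symmetric in \( k \) and \( n \) ; the normalizing factor \( {\left( 2\pi \right) }^{-n} \) of Theorem 5.17 corresponds to the \( 1/N \) built into `ifft`.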
This extension of \( \mathcal{F} \) to an operator on \( {\mathcal{S}}^{\prime } \) gives an enormous freedom in the use of the Fourier transform. We obtain directly from the theorems for \( \mathcal{F} \) on \( \mathcal{S} \), Lemma 5.9 and the definitions of the generalized operators: Theorem 5.18. For all \( u \in {\mathcal{S}}^{\prime } \), one has when \( \alpha \in {\mathbb{N}}_{0}^{n} \) and \( \varphi \in \mathcal{S} \) : \[ \text{(i)}\;\mathcal{F}\left( {{D}^{\alpha }u}\right) = {\xi }^{\alpha }\mathcal{F}u \] \[ \text{(ii)}\mathcal{F}\left( {{x}^{\alpha }u}\right) = {\left( -{D}_{\xi }\right) }^{\alpha }\mathcal{F}u\text{,} \] (5.36) \[ \text{(iii)}\mathcal{F}\left( {\varphi * u}\right) = \left( {\mathcal{F}\varphi }\right) \cdot \left( {\mathcal{F}u}\right) \text{,} \] \[ \text{(iv)}\mathcal{F}\left( {\varphi \cdot u}\right) = {\left( 2\pi \right) }^{-n}\left( {\mathcal{F}\varphi }\right) * \left( {\mathcal{F}u}\right) \text{.} \] Let us study some special examples. For \( u = \delta \) , \[ \langle \mathcal{F}u,\varphi \rangle = \langle u,\mathcal{F}\varphi \rangle = \widehat{\varphi }\left( 0\right) = \int \varphi \left( x\right) {dx} = \langle 1,\varphi \rangle ,\text{ for }\varphi \in \mathcal{S}, \] hence \[ \mathcal{F}\left\lbrack \delta \right\rbrack = 1\text{.} \] (5.37) Since clearly also \( \overline{\mathcal{F}}\left\lbrack \delta \right\rbrack = 1 \) (cf. (5.34)), we get from the inversion formula (5.35) that \[ \mathcal{F}\left\lbrack 1\right\rbrack = {\left( 2\pi \right) }^{n}\delta \] (5.38) An application of Theorem 5.18 then gives: \[ \mathcal{F}\left\lbrack {{D}^{\alpha }\delta }\right\rbrack = {\xi }^{\alpha } \] (5.39) \[ \mathcal{F}\left\lbrack {\left( -x\right) }^{\alpha }\right\rbrack = {\left( 2\pi \right) }^{n}{D}_{\xi }^{\alpha }\delta . \] Remark 5.19. 
We have shown that \( \mathcal{F} \) defines a homeomorphism of \( \mathcal{S} \) onto \( \mathcal{S} \), of \( {L}_{2} \) onto \( {L}_{2} \) and of \( {\mathcal{S}}^{\prime } \) onto \( {\mathcal{S}}^{\prime } \) . One can ask for the image by \( \mathcal{F} \) of other spaces. For example, \( \mathcal{F}\left( {{C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) }\right) \) must be a certain subspace of \( \mathcal{S} \) ; but this is not contained in \( {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \) . On the contrary, if \( \varphi \in {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \), then \( \widehat{\varphi } \) can only have compact support if \( \varphi = 0 \) ! For \( n = 1 \) we can give a quick explanation of this: When \( \varphi \in {C}_{0}^{\infty }\left( \mathbb{R}\right) \), then \( \widehat{\varphi }\left( \zeta \right) \) can be defined for all \( \zeta \in \mathbb{C} \) by the formula \[ \widehat{\varphi }\left( \zeta \right) = {\int }_{\operatorname{supp}\varphi }{e}^{-{ix\zeta }}\varphi \left( x\right) {dx} \] and this function \( \widehat{\varphi }\left( \zeta \right) \) is holomorphic in \( \zeta = \xi + {i\eta } \in \mathbb{C} \), since \( \left( {{\partial }_{\xi } + i{\partial }_{\eta }}\right) \widehat{\varphi }\left( {\xi + {i\eta }}\right) = 0 \) (the Cauchy-Riemann equation), as is seen by differentiation under the integral sign. (One could also appeal to Morera's Theorem.) Now if \( \widehat{\varphi }\left( \zeta \right) \) is identically 0 on an open, nonempty interval of the real axis, then \( \widehat{\varphi } = 0 \) everywhere. The argument can be extended to \( n > 1 \) . Even for distributions \( u \) with compact support, \( \widehat{u}\left( \zeta \right) \) is a function of \( \zeta \) which can be defined for all \( \zeta \in {\mathbb{C}}^{n} \) . 
In fact one can show that \( \widehat{u} \) coincides with the function \[ \widehat{u}\left( \zeta \right) = \left\langle {u,\psi \left( x\right) {e}^{-{ix} \cdot \zeta }}\right\rangle \left\lbrack { = \left\langle {\underset{{\mathcal{E}}^{\prime }}{u},{e}^{-{ix} \cdot \zeta }}\right\rangle }\right\rbrack \] (5.40) where \( \psi \left( x\right) \) is a function \( \in {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \) which is 1 on a neighborhood of supp \( u \) . It is seen as in Exercise 3.14 that this function \( \widehat{u}\left( \zeta \right) \) is \( {C}^{\infty } \) as a function of \( \left( {{\xi }_{1},{\eta }_{1},\ldots ,{\xi }_{n},{\eta }_{n}}\right) \in {\mathbb{R}}^{2n} \) \( \left( {{\zeta }_{j} = {\xi }_{j} + i{\eta }_{j}}\right) \), with \[ {\partial }_{{\xi }_{j}}\widehat{u}\left( \zeta \right) = \left\langle {u,\psi \left( x\right) {\partial }_{{\xi }_{j}}{e}^{-{ix} \cdot \zeta }}\right\rangle \] and similarly for \( {\partial }_{{\eta }_{j}} \) . Since \( {e}^{-{ix} \cdot \zeta } \) satisfies the Cauchy-Riemann equation in each complex variable \( {\zeta }_{j} \), so does \( \widehat{u}\left( \zeta \right) \), so \( \widehat{u}\left( \zeta \right) \) is a holomorphic function of \( {\zeta }_{j} \in \mathbb{C} \) for each \( j \) . Then it follows also here that the support of \( \widehat{u}\left( \zeta \right) \) cannot be compact unless \( u = 0 \) . The spaces of holomorphic functions obtained by applying \( \mathcal{F} \) to \( {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \) resp. \( {\mathcal{E}}^{\prime }\left( {\mathbb{R}}^{n}\right) \) may be characterized by their growth properties in \( \zeta \) (the Paley-Wiener Theorem, see e.g. the book of W. Rudin [R74, Theorems 7.22, 7.23], or the book of L. Hörmander [H63, Theorem 1.7.7]). For partial differential operators with constant coefficients, the Fourier transform gives a remarkable simplification. 
When \[ P\left( D\right) = \mathop{\sum }\limits_{{\left| \alpha \right| \leq m}}{a}_{\alpha }{D}^{\alpha } \] (5.41) is a differential operator on \( {\mathbb{R}}^{n} \) with coefficients \( {a}_{\alpha } \in \mathbb{C} \), the equation \[ P\left( D\right) u = f \] (5.42) (with \( u \) and \( f \in {\mathcal{S}}^{\prime } \) ) is by Fourier transformation carried over to the multiplication equation \[ p\left( \xi \right) \widehat{u}\left( \xi \right) = \widehat{f}\left( \xi \right) \] (5.43) where \( p\left( \xi \right) \) is the polynomial \[ p\left( \xi \right) = \mathop{\sum }\limits_{{\left| \alpha \right| \leq m}}{a}_{\alpha }{\xi }^{\alpha }; \] (5.44) it is called the symbol of \( P\left( D\right) \) . The \( m \) -th order part of \( P\left( D\right) \) is called the principal part (often denoted \( {P}_{m}\left( D\right) \) ), and its associated symbol \( {p}_{m} \) the principal symbol, i.e., \[ {P}_{m}\left( D\right) = \mathop{\sum }\limits_{{\left| \alpha \right| = m}}{a}_{\alpha }{D}^{\alpha },\;{p}_{m}\left( \xi \right) = \mathop{\sum }\limits_{{\left| \alpha \right| = m}}{a}_{\alpha }{\xi }^{\alpha }. \] (5.45) It is often the principal part that determines the solvability properties of (5.42). The operator \( P\left( D\right) \) is in particular called elliptic if \( {p}_{m}\left( \xi \right) \neq 0 \) for \( \xi \neq 0 \) . Note that \( {p}_{m}\left( \xi \right) \) is a homogeneous polynomial in \( \xi \) of degree \( m \) . Example 5.20 ("THE WORLD'S SIMPLEST EXAMPLE"). Consider the operator \( P = 1 - \Delta \) on \( {\mathbb{R}}^{n} \) . 
By Fourier transformation, the equation \[ \left( {1 - \Delta }\right) u = f\text{ on }{\mathbb{R}}^{n} \] (5.46) is carried into the equation \[ \left( {1 + {\left| \xi \right| }^{2}}\right) \widehat{u} = \widehat{f}\text{ on }{\mathbb{R}}^{n}, \] (5.47) and this leads by division with \( 1 + {\left| \xi \right| }^{2} = \langle \xi {\rangle }^{2} \) to \[ \widehat{u} = \langle \xi {\rangle }^{-2}\widehat{f} \] Thus (5.46) has the solution \[ u = {\mathcal{F}}^{-1}\left( {\langle \xi {\rangle }^{-2}\mathcal{F}f}\right) \] We see that for any \( f \) given in \( {\mathcal{S}}^{\prime } \) there is one and only one solution \( u \in {\mathcal{S}}^{\prime } \) , and if \( f \) belongs to \( \mathcal{S} \), then the solution \( u \) belongs to \( \mathcal{S} \) . When \( f \) is given in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \), we see from (5.47) that \( \l
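The recipe \( u = {\mathcal{F}}^{-1}\left( {\langle \xi {\rangle }^{-2}\mathcal{F}f}\right) \) translates directly into a discrete computation. The following is a one-dimensional periodic sketch of ours (a grid on \( \left\lbrack {0,{2\pi }}\right) \) standing in for \( {\mathbb{R}}^{n} \), the DFT for \( \mathcal{F} \), and an arbitrary smooth right-hand side):

```python
import numpy as np

# Discrete 1-D sketch of solving (1 - Δ)u = f by dividing with the symbol
# 1 + |ξ|², cf. (5.46)-(5.47); periodic grid, DFT in place of F.
N = 128
x = 2 * np.pi * np.arange(N) / N      # grid on [0, 2π)
xi = np.fft.fftfreq(N, d=1.0 / N)     # integer frequencies ξ

f = np.exp(np.cos(x))                 # a smooth periodic right-hand side
u = np.fft.ifft(np.fft.fft(f) / (1.0 + xi ** 2)).real

# Applying 1 - d²/dx² spectrally to u recovers f (up to roundoff)
back = np.fft.ifft((1.0 + xi ** 2) * np.fft.fft(u)).real
print(np.abs(back - f).max())
```

The division by \( 1 + {\xi }^{2} \) never meets a zero, which reflects the fact that the full symbol of \( 1 - \Delta \) is bounded below by 1; for \( P = - \Delta \) the symbol vanishes at \( \xi = 0 \) and the same division would fail there.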