113_Topological Groups
Definition 11.33
Definition 11.33. For any formula \( \varphi \), let \( \mathrm{{Fv}}\varphi \) be the set of all variables occurring freely in \( \varphi \) . For any first-order language \( \mathcal{L} \), a primitive Skolem expansion of \( \mathcal{L} \) is an expansion \( {\mathcal{L}}^{\prime } \) of \( \mathcal{L} \) such that there is a function \( S \) mapping \( \left\{ {\exists {v}_{i}\varphi : \varphi \in {\mathrm{{Fmla}}}_{\mathcal{L}}}\right\} \) one-one onto the set of all nonlogical constants of \( {\mathcal{L}}^{\prime } \) which are not constants of \( \mathcal{L} \), such that for each \( \varphi \in {\mathrm{{Fmla}}}_{\mathcal{L}} \) and each variable \( \alpha ,{S}_{\exists {\alpha \varphi }} \) is an operation symbol of rank \( \left| {\mathrm{{Fv}}\exists {\alpha \varphi }}\right| \) . In case \( \mathcal{L} \) and \( {\mathcal{L}}^{\prime } \) are effectivized with Gödel numbering functions \( g,{g}^{\prime } \), we call \( {\mathcal{L}}^{\prime } \) an effective primitive Skolem expansion provided that \( {\mathcal{L}}^{\prime } \) is an effective expansion of \( \mathcal{L} \) and the function \[ {g}^{\prime } \circ S \circ {g}^{+ - 1} \upharpoonright {g}^{+ * }\left\{ {\exists {v}_{i}\varphi : \varphi \in {\operatorname{Fmla}}_{\mathcal{L}}}\right\} \] is partial recursive. Given first-order languages \( \mathcal{L} = \left( {L, v,\mathcal{O},\mathcal{R}}\right) \), and \( {\mathcal{L}}^{\prime } = \left( {L, v,{\mathcal{O}}^{\prime },\mathcal{R}}\right) \) , we say that \( {\mathcal{L}}^{\prime } \) is a Skolem expansion of \( \mathcal{L} \) provided there is a sequence \( \left\langle {{\mathcal{L}}_{i} : i \in \omega }\right\rangle \) of first-order languages \( {\mathcal{L}}_{i} = \left( {L, v,{\mathcal{O}}_{i},\mathcal{R}}\right) \) such that \( {\mathcal{L}}_{0} = \mathcal{L} \) , for each \( i \in \omega {\mathcal{L}}_{i + 1} \) is a primitive Skolem expansion of \( {\mathcal{L}}_{i} \) with associated function \( {S}^{i} \), and \( {\mathcal{O}}^{\prime } = \mathop{\bigcup }\limits_{{i \in \omega }}{\mathcal{O}}_{i} \) . If all of these languages are effectivized, \( {\mathcal{L}}^{\prime } \) is an effective Skolem expansion of \( \mathcal{L} \) provided that each \( {\mathcal{L}}_{i + 1} \) is an effective primitive Skolem expansion of \( {\mathcal{L}}_{i} \), and \( {\mathcal{L}}^{\prime } \) is an effective expansion of each \( {\mathcal{L}}_{i} \) . Assuming that \( {\mathcal{L}}^{\prime } \) is a Skolem expansion of \( \mathcal{L} \), with notation as above, the Skolem set of \( {\mathcal{L}}^{\prime } \) over \( \mathcal{L} \) is the set of all sentences \[ \left\lbrack \left\lbrack {\exists {v}_{i}\varphi \left( {{v}_{0},\ldots ,{v}_{i}}\right) \rightarrow \varphi \left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) }\right\rbrack \right\rbrack \] where \( \sigma = {S}_{\exists {vi\varphi }}^{j}{\alpha }_{0}\cdots {\alpha }_{m - 1}, m = \left| {\mathrm{{Fv}}\exists {v}_{i}\varphi }\right| ,\mathrm{{Fv}}\exists {v}_{i}\varphi = \left\{ {{\alpha }_{0},\ldots ,{\alpha }_{m - 1}}\right\} \) with \( {v}^{-1}{\alpha }_{0} < \cdots < {v}^{-1}{\alpha }_{m - 1} \), and \( j \) is minimal such that \( \exists {v}_{i}\varphi \) is a formula of \( {\mathcal{L}}_{j} \) . (Recall that \( \left\lbrack \left\lbrack \chi \right\rbrack \right\rbrack \) denotes the universal closure of \( \chi \) ; see 10.94.) The following two propositions are obvious. Proposition 11.34. Any first-order language has a Skolem expansion. Proposition 11.35. 
If \( {\mathcal{L}}^{\prime } \) is a Skolem expansion of \( \mathcal{L} \), then \( \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| = \) \( \left| {\mathrm{{Fmla}}}_{{\mathcal{L}}^{\prime }}\right| \) . The following lemma is fundamental for our main results. Lemma 11.36. If \( {\mathcal{L}}^{\prime } \) is a Skolem expansion of \( \mathcal{L} \), then any \( \mathcal{L} \) -structure can be expanded to a model of the Skolem set of \( {\mathcal{L}}^{\prime } \) over \( \mathcal{L} \) . Proof. From the definition of Skolem expansions, we see that it is enough to prove the following statement: Statement. Let \( {\mathcal{L}}^{\prime } \) be a primitive Skolem expansion of \( \mathcal{L} \), and \( \mathfrak{A} \) an \( \mathcal{L} \) -structure. Then \( \mathfrak{A} \) can be expanded to an \( {\mathcal{L}}^{\prime } \) -structure which is a model of all sentences \( \left\lbrack \left\lbrack {\exists {v}_{i}\varphi \left( {{v}_{0},\ldots ,{v}_{i}}\right) \rightarrow \varphi \left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) }\right\rbrack \right\rbrack \), where \( \varphi \) is a formula of \( \mathcal{L},\sigma = {S}_{\exists {vi\varphi }}{\alpha }_{0}\cdots {\alpha }_{m - 1}, m = \left| {\mathrm{{Fv}}\exists {v}_{i}\varphi }\right| \), and \( \mathrm{{Fv}}\exists {v}_{i}\varphi = \left\{ {{\alpha }_{0},\ldots ,{\alpha }_{m - 1}}\right\} \) with \( {v}^{-1}{\alpha }_{0} < \cdots < {v}^{-1}{\alpha }_{m - 1}. \) To prove this statement, let \( C \) be a choice function for nonempty subsets of A. Let \( \psi = \exists {v}_{i}\varphi \) be a formula of \( \mathcal{L} \), with \( \mathrm{{Fv}}\exists {v}_{i}\varphi = \left\{ {{v}_{j0},\ldots ,{v}_{j\left( {m - 1}\right) }}\right\} \), where \( {j0} < \cdots < j\left( {m - 1}\right) \) . We define an \( m \) -ary operation \( {t}_{\psi } \) on \( A \) as follows. For any \( {x}_{0},\ldots ,{x}_{m - 1} \in A \), set \[ {t}_{\psi }\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = C\left\{ {a\text{ : there is a }y \in {\varphi }^{\mathfrak{A}}}\right. \text{such that}{y}_{i} = a \] \[ \text{and}{y}_{jk} = {x}_{k}\text{for each}k < m\} \text{,} \] \[ {t}_{\psi }\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = {CA}\text{if the above set is empty.} \] Let \( {\mathfrak{A}}^{\prime } \) be the expansion of \( \mathfrak{A} \) to an \( {\mathcal{L}}^{\prime } \) -structure in which each symbol \( {S}_{\psi } \) is interpreted as \( {t}_{\psi } \) . To show that \( {\mathfrak{A}}^{\prime } \) is as desired, consider any formula \( \psi = \exists {v}_{i}\varphi \) of \( \mathcal{L} \) with notation as above. Suppose \( x \in {}^{\omega }A \) and \( {\mathfrak{A}}^{\prime } \vDash \exists {v}_{i}\varphi \left\lbrack x\right\rbrack \) . Then \( \mathfrak{A} \vDash \exists {v}_{i}\varphi \left\lbrack x\right\rbrack \) since \( \exists {v}_{i}\varphi \) is a formula of \( \mathcal{L} \) . Thus there is an \( a \in A \) such that \( \mathfrak{A} \vDash \varphi \left\lbrack {x}_{a}^{i}\right\rbrack \) . Hence the first clause in the definition of \( {t}_{\psi }\left( {{x}_{j0},\ldots ,{x}_{j\left( {m - 1}\right) }}\right) \) gives an element \( b \in A \) such that \( \mathfrak{A} \vDash \varphi \left\lbrack {x}_{b}^{i}\right\rbrack \) . Let \( \sigma \) be the term \( {S}_{\exists {vi\varphi }}{v}_{j0}\cdots \) \( {v}_{j\left( {m - 1}\right) } \) . 
Recall that the formula \( \varphi \left( {{v}_{0},\ldots ,{v}_{i - 1}\sigma }\right) \) has the form Subf \( {}_{\sigma }^{vi}{\varphi }^{\prime } \), where \( {\varphi }^{\prime } \) is obtained from \( \varphi \) by replacing bound variables suitably. Clearly \( \mathfrak{A} \vDash \) \( {\varphi }^{\prime }\left\lbrack {x}_{b}^{i}\right\rbrack \), where \( b = {t}_{\psi }\left( {{x}_{j0},\ldots ,{x}_{j\left( {m - 1}\right) }}\right) \) . Hence we easily infer that \( \mathfrak{A} \vDash \) \( {\operatorname{Subf}}_{\sigma }^{vi}{\varphi }^{\prime }\left\lbrack x\right\rbrack \) . Thus \( {\mathfrak{A}}^{\prime } \) is a model of \( \left\lbrack \left\lbrack {\exists {v}_{i}\varphi \left( {{v}_{0},\ldots ,{v}_{i}}\right) \rightarrow \varphi \left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) }\right\rbrack \right\rbrack \) , as desired. The functions introduced in the expansion of \( \mathfrak{A} \) to \( {\mathfrak{A}}^{\prime } \) in the proof of 11.36 are called Skolem functions. The entire method associated with Skolem expansions is sometimes called the method of Skolem functions. One of the main properties of Skolem expansions is that every formula becomes equivalent, in a certain sense, to a prenex formula having only universal quantifiers: Definition 11.37. A formula is universal if it is in prenex normal form with only universal quantifiers. Let \( {\mathcal{L}}^{\prime } \) be a Skolem expansion of \( \mathcal{L} \), with notation as in 11.33. With each prenex formula \( \varphi \) of \( {\mathcal{L}}^{\prime } \) we associate a formula \( {\varphi }^{\mathrm{s}} \) : \[ {\varphi }^{\mathrm{s}} = \varphi \text{if}\varphi \text{is quantifier free;} \] \[ {\left( \forall \alpha \varphi \right) }^{\mathrm{S}} = \forall \alpha {\varphi }^{\mathrm{S}}; \] \[ {\left( \exists {v}_{i}\varphi \right) }^{\mathrm{S}} = {\varphi }^{\mathrm{S}}\left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) \] where \( \sigma = {S}_{\exists {vio}}^{j}{\beta }_{0}\cdots {\beta }_{m - 1},\mathrm{{Fv}}\exists {v}_{i}\varphi = \left\{ {{\beta }_{0},\ldots ,{\beta }_{m - 1}}\right\} \) with \( {v}^{-1}{\beta }_{0} < \cdots \) \( < {v}^{-1}{\beta }_{m - 1} \), and \( j \) is chosen minimal so that \( \exists {v}_{i}\varphi \) is a formula of \( {\mathcal{L}}_{j} \) . Theorem 11.38 (Skolem normal form theorem). Let \( {\mathcal{L}}^{\prime } \) be a Skolem expansion of \( \mathcal{L} \) . For every prenex formula \( \varphi \) of \( {\mathcal{L}}^{\prime } \), the formula \( {\varphi }^{\mathrm{S}} \) is universal and the same variables occur free in \( \varphi \) as do in \( {\varphi }^{\mathrm{s}} \) . Furthermore, for any prenex formula \( \varphi \) of \( {\mathcal{L}}^{\prime } \) we have: (i) \( \mathrm{F}{\varphi }^{\mathrm{S}} \rightarrow \varphi \) ; (ii) if \( \mathfrak{A} \) is a model of the Skolem set of \( {\mathcal{L}}^{\prime } \) over \( \mathcal{L} \), then \( \mathfrak{A} \vDash \varphi \rightarrow {\varphi }^{\mathrm{S}} \) ; (iii) if \( \Gamma \) is a theory in \( \mathcal{L} \) and \( {\Gamma }^{\prime } = \left\{ {{\varphi }^{\mathrm{S}} : \varphi \in \Gamma }\right\}
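As a small worked instance of 11.33 and 11.37 (an illustration added here, not part of the text), let \( \psi \) be a quantifier-free formula of \( \mathcal{L} \) with \( \mathrm{Fv}\,\psi = \left\{ {{v}_{0},{v}_{1}}\right\} \) and consider the prenex formula \( \varphi = \forall {v}_{0}\exists {v}_{1}\psi \) . Since \( \mathrm{Fv}\,\exists {v}_{1}\psi = \left\{ {v}_{0}\right\} \), the associated Skolem symbol is an operation symbol of rank 1, and since \( \exists {v}_{1}\psi \) is already a formula of \( {\mathcal{L}}_{0} = \mathcal{L} \), the index of 11.33 is \( j = 0 \) . Definition 11.37 then gives
\[ {\varphi }^{\mathrm{S}} = {\left( \forall {v}_{0}\exists {v}_{1}\psi \right) }^{\mathrm{S}} = \forall {v}_{0}\,\psi \left( {{v}_{0},{S}_{\exists {v}_{1}\psi }^{0}{v}_{0}}\right) , \]
while the Skolem set contains the sentence \( \left\lbrack \left\lbrack {\exists {v}_{1}\psi \left( {{v}_{0},{v}_{1}}\right) \rightarrow \psi \left( {{v}_{0},{S}_{\exists {v}_{1}\psi }^{0}{v}_{0}}\right) }\right\rbrack \right\rbrack \) . In any model of that sentence the Skolem function interpreting \( {S}_{\exists {v}_{1}\psi }^{0} \) picks a witness for \( {v}_{1} \) exactly as in the proof of 11.36, so \( \varphi \) and the universal formula \( {\varphi }^{\mathrm{S}} \) are satisfied by the same assignments, which is the content of 11.38(i) and (ii).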
1083_(GTM240)Number Theory II
Definition 15.2.4
Definition 15.2.4. Keep the above notation and let \( p \) be a prime number. We define \( {N}_{p} \) by the formula \( {}^{1} \)
\[ {N}_{p} = N/\mathop{\prod }\limits_{\substack{{q\parallel N} \\ {p \mid {v}_{q}\left( {\Delta }_{\min }\right) } }}q; \]
in other words, \( {N}_{p} \) is equal to \( N \) divided by the product of all prime numbers \( q \) such that \( {v}_{q}\left( N\right) = 1 \) and \( p \mid {v}_{q}\left( {\Delta }_{\min }\right) \) . We emphasize that the \( {\Delta }_{\min } \) occurring in the definition of \( {N}_{p} \) must be the minimal discriminant.

\( {}^{1} \) Highbrow remark, to be omitted on first reading. This \( {N}_{p} \) is not always the same as the Serre conductor. If \( {N}_{p}^{\prime } \) denotes the Serre conductor then \( {N}_{p}^{\prime } \mid {N}_{p} \), and \( {N}_{p}/{N}_{p}^{\prime } \) is a power of \( p \) . More precisely,
\[ {N}_{p} = \left\{ \begin{array}{ll} {N}_{p}^{\prime } & \text{ if }E\text{ has good reduction at }p, \\ p{N}_{p}^{\prime } & \text{ if }E\text{ has multiplicative reduction at }p, \\ {p}^{2}{N}_{p}^{\prime } & \text{ if }E\text{ has additive reduction at }p. \end{array}\right. \]
Ribet's theorem allows us to obtain a newform of level \( {N}_{p}^{\prime } \) and weight \( {k}_{p} \geq 2 \) (where \( {k}_{p} \) is the Serre weight). Since we have limited ourselves to weight-2 newforms, it turns out that we obtain a newform of level \( {N}_{p} \) and not \( {N}_{p}^{\prime } \) . To understand why we have chosen to restrict to weight 2, note that later \( p \) will be an unknown exponent in some Diophantine equation. Often we will not know whether \( p \) is a prime of good reduction. The restriction that we have made allows us to deal with all these cases uniformly, by giving a unique level and weight regardless of whether \( E \) has good, multiplicative, or additive reduction at \( p \) .

We can now state a simplified special case of Ribet's level-lowering theorem that will be sufficient for our applications (see [Rib1] for the full statement).

Theorem 15.2.5 (Ribet's Level-Lowering Theorem). Let \( E \) be an elliptic curve defined over \( \mathbb{Q} \) and let \( p \geq 5 \) be a prime number. Assume that there does not exist a \( p \)-isogeny (i.e., of degree \( p \) ) defined over \( \mathbb{Q} \) from \( E \) to some other elliptic curve, and let \( {N}_{p} \) be as above. There exists a newform \( f \) of level \( {N}_{p} \) such that \( E{ \sim }_{p}f \) .

As mentioned, Ribet's theorem is much more general than this, but the present statement is sufficient. In addition, in Ribet's general theorem there is a modularity assumption, but since we restrict to the case of elliptic curves this assumption is automatically satisfied thanks to the modularity theorem.

Example. Let \( E \) be the elliptic curve with minimal Weierstrass equation
\[ {y}^{2} = {x}^{3} - {x}^{2} - {77x} + {330}, \]
referenced as \( {132B1} \) in [Cre2]. We compute that the minimal discriminant and the conductor are respectively
\[ {\Delta }_{\min } = {2}^{4} \cdot {3}^{10} \cdot {11}\;\text{ and }\;N = {2}^{2} \cdot 3 \cdot {11}. \]
Using [Cre2] we see that the only isogeny that the curve has is a 2-isogeny, so we may apply Ribet's theorem with \( p = 5 \) . We find that \( {N}_{p} = {2}^{2} \cdot {11} = {44} \), so Ribet's theorem asserts the existence of a newform \( f \) at level 44 such that \( E{ \sim }_{5}f \) .
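To make Definition 15.2.4 concrete, here is a small Python sketch (not from the book; the function and variable names are ad hoc) that computes \( {N}_{p} \) from the factorizations of \( N \) and \( {\Delta }_{\min } \). Applied to the curve 132B1 above with \( p = 5 \) it returns 44.

```python
# A small sketch: compute N_p of Definition 15.2.4 from factorization data
# supplied as {prime q: exponent} dictionaries (names are ad hoc).

def level_N_p(conductor_fact, delta_min_fact, p):
    """Return N_p: N divided by every prime q with q || N and p | v_q(Delta_min)."""
    N_p = 1
    for q, e in conductor_fact.items():
        # q || N means v_q(N) = 1; any such prime of bad reduction also divides Delta_min.
        if e == 1 and delta_min_fact[q] % p == 0:
            continue                      # q is removed from the level
        N_p *= q ** e                     # otherwise q keeps its exponent from N
    return N_p

# Curve 132B1 of the example: Delta_min = 2^4 * 3^10 * 11, N = 2^2 * 3 * 11.
N_fact = {2: 2, 3: 1, 11: 1}
Delta_fact = {2: 4, 3: 10, 11: 1}
print(level_N_p(N_fact, Delta_fact, p=5))  # -> 44: only q = 3 is removed
```

In practice the factorizations would be taken from tables such as [Cre2] or computed with a computer algebra system.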
The formula given for the number of newforms shows that there is a single one at level 44, necessarily rational, and Cremona's tables show that it corresponds to the elliptic curve \( F = {44A1} \) with equation
\[ {y}^{2} = {x}^{3} + {x}^{2} + {3x} - 1 \]
so that \( E{ \sim }_{5}F \) . In order for the reader to understand what is expected from Proposition 15.2.3 we give the values of \( {a}_{\ell }\left( E\right) \) and \( {a}_{\ell }\left( F\right) \) for \( \ell \leq {37} \) .

| \( \ell \) | 2 | 3 | 5 | 7 | 11 | 13 | 17 | 19 | 23 | 29 | 31 | 37 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| \( {a}_{\ell }\left( E\right) \) | 0 | \( -1 \) | 2 | 2 | \( -1 \) | 6 | \( -4 \) | \( -2 \) | \( -8 \) | 0 | 0 | \( -6 \) |
| \( {a}_{\ell }\left( F\right) \) | 0 | 1 | \( -3 \) | 2 | \( -1 \) | \( -4 \) | 6 | 8 | \( -3 \) | 0 | 5 | \( -1 \) |

## 15.2.3 Absence of Isogenies

There are a number of technical difficulties that must be solved in order to be able to apply Ribet's theorem in practice. The most important one is the restriction that \( E \) should not have any \( p \)-isogenies defined over \( \mathbb{Q} \) (for simplicity we will say that \( E \) has no \( p \)-isogenies), in other words that there should be no subgroup of order \( p \) of \( E \) that is stable under conjugation (see Definition 8.4.1). This is not always easy to check, but there are several results that help us in doing so. We give here two of the most useful.

Theorem 15.2.6 (Mazur [Maz]). Let \( E \) be an elliptic curve defined over \( \mathbb{Q} \) of conductor \( N \) . Then \( E \) does not have any \( p \)-isogeny if at least one of the following conditions holds:
(1) \( p \geq {17} \) and \( j\left( E\right) \notin \mathbb{Z}\left\lbrack {1/2}\right\rbrack \) .
(2) \( p \geq {11} \) and \( N \) is squarefree.
(3) \( p \geq 5 \), \( N \) is squarefree, and \( 4 \mid \left| {{E}_{t}\left( \mathbb{Q}\right) }\right| \), this last condition meaning that \( E\left( \mathbb{Q}\right) \) has full 2-torsion.

Theorem 15.2.7 (Diamond-Kramer [Dia-Kra]). Let \( E \) be an elliptic curve defined over \( \mathbb{Q} \) of conductor \( N \) . If \( {v}_{2}\left( N\right) = 3,5 \), or 7, then \( E \) does not have any \( p \)-isogeny for \( p \) an odd prime.

Example. Let \( E \) be an elliptic curve defined over \( \mathbb{Q} \) of conductor \( N \) and minimal discriminant \( {\Delta }_{\min } \) . We have the following theorem, conjectured by Brumer-Kramer in [Bru-Kra] and proved by Serre in [Ser3] assuming a hypothesis that has now been removed thanks to the theorems of Ribet and Wiles.

Theorem 15.2.8. Assume that \( N \) is squarefree. If \( {\Delta }_{\min } \) is a \( p \)th power for some prime \( p \) then \( p \leq 5 \) and \( E \) has a rational point of order \( p \) .

Proof. We prove only the statement \( p \leq 7 \) using the tools that we have introduced. Since \( N \) is squarefree and \( {\Delta }_{\min } \) is a \( p \)th power, the definition shows that \( {N}_{p} = 1 \) . Assume first that \( p \geq {11} \) . Since \( N \) is squarefree, the second condition of Mazur's theorem shows that \( E \) does not have any \( p \)-isogeny. We can thus apply Ribet's theorem, which tells us that \( E{ \sim }_{p}f \) for a newform \( f \) of level 1 .
Since there are no such newforms, we have a contradiction showing that \( p \leq 7 \) . With some extra work Serre shows that \( E \) has a rational point of order \( p \) . In addition one can prove that there are no curves with \( N \) squarefree whose discriminant is a power of 7 (see [Mes-Oes]). Remark. If \( E \) has no \( p \) -isogenies then Ribet’s theorem implies that \( E{ \sim }_{p}f \) for some newform \( f \) of level \( {N}_{p} \) . At that level there may be rational newforms, but also nonrational newforms defined over number fields of relatively large degree. In fact, the following proposition shows that the degree is unbounded: Proposition 15.2.9. An elliptic curve defined over \( \mathbb{Q} \) can arise from a new-form whose field of definition \( K \) has arbitrarily large degree. Proof. Let \( p \geq 5 \) be a prime, set \( L = {2}^{p + 4} + 1 \), and let \( E \) be the elliptic curve with equation \[ {Y}^{2} = X\left( {X + 1}\right) \left( {X - {2}^{p + 4}}\right) . \] Using Tate's algorithm we easily compute that the minimal discriminant and conductor are given by \[ {\Delta }_{\min } = {2}^{2p}{L}^{2},\;N = 2\operatorname{rad}\left( L\right) \] (see Definition 14.6.3). From Mazur’s Theorem 15.2.6 we know that \( E \) has no \( p \) -isogenies, so we can apply Ribet’s theorem, which tells us that \( E{ \sim }_{p}f \) for some newform at level \( {N}_{p} \) whose field of definition is some number field \( K \) . We cannot compute \( {N}_{p} \), but since \( L \) is odd, \( 2\parallel N \), and \( {v}_{2}\left( {\Delta }_{\min }\right) = {2p} \), it follows from the definition of \( {N}_{p} \) that \( {N}_{p} \) is odd. Thus by Proposition 15.2.2 (2) applied to \( \ell = 2 \) we deduce that \( p \mid {\mathcal{N}}_{K/\mathbb{Q}}\left( {3 \pm {c}_{2}}\right) \), where we denote by \( {c}_{\ell } \) the Fourier coefficients of the newform \( f \) . However, we know that all the conjugates of \( {c}_{2} \) in \( \overline{\mathbb{Q}} \) are bounded in absolute value by \( 2\sqrt{2} \), and that \( {c}_{2} \) is an algebraic integer. It follows that \( p \leq {\left( 3 + 2\sqrt{2}\right) }^{\left\lbrack K : \mathbb{Q}\right\rbrack } < {6}^{\left\lbrack K : \mathbb{Q}\right\rbrack } \), hence that \[ \left\lbrack {K : \mathbb{Q}}\right\rbrack > \frac{\log \left( p\right) }{\log \left( 6\right) } \] proving the proposition. ## 15.2.4 How to use Ribet's Theorem The general strategy for applying to a Diophantine equation the tools that we have introduced is the following. We assume that it has a solution, and to such a solution we associate if possible in some way an elliptic curve, called a Hellegouarch-Frey curve, or simply a Frey curve. \( {}^{2} \) The key properties that a "Frey curve" \( E \) must have are the following: - The coefficients of \( E \) depend on the solution of the Diophantine equation. - The minimal discriminant \( {\Delta }_{\min } \) of \( E \) can be written in the form \( {\Delta }_{\min } = \) \( C \cdot {D}^{p} \), where \( D \) depends on the solution o
1112_(GTM267)Quantum Theory for Mathematicians
Definition 8.8
Definition 8.8 For a bounded measurable function \( f \) on \( \sigma \left( A\right) \), let \( f\left( A\right) \) be the operator associated to the quadratic form \( {Q}_{f} \) by Proposition A.63. This means that \( f\left( A\right) \) is the unique operator such that \[ \langle \psi, f\left( A\right) \psi \rangle = {Q}_{f}\left( \psi \right) = {\int }_{\sigma \left( A\right) }{fd}{\mu }_{\psi } \] for all \( \psi \in \mathbf{H} \) . Observe that if \( f \) is real valued, then \( {Q}_{f}\left( \psi \right) \) is real for all \( \psi \in \mathbf{H} \), which means (Proposition A.63) that the associated operator \( f\left( A\right) \) is self-adjoint. We will shortly associate with \( A \) a projection-valued measure \( {\mu }^{A} \), and we will show that \( f\left( A\right) \), as given by Definition 8.8, agrees with \( f\left( A\right) \) as given by \( {\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }^{A}\left( \lambda \right) \) . [See (8.10) and compare Definition 7.13.] Proposition 8.9 For any two bounded measurable functions \( f \) and \( g \), we have \[ \left( {fg}\right) \left( A\right) = f\left( A\right) g\left( A\right) \] Proof. Let \( {\mathcal{F}}_{1} \) denote the space of bounded measurable functions \( f \) such that \( \left( {fg}\right) \left( A\right) = f\left( A\right) g\left( A\right) \) for all \( g \in \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . Then \( {\mathcal{F}}_{1} \) is a vector space and contains \( \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . We have already noted that dominated convergence guarantees that the map \( f \mapsto {Q}_{f}\left( \psi \right) ,\psi \in \mathbf{H} \), is continuous under uniformly bounded pointwise convergence. By the polarization identity (Proposition A.59), the same is true for the map \( f \mapsto {L}_{f}\left( {\phi ,\psi }\right) \), where \( {L}_{f} \) is the sesquilinear form associated to \( {Q}_{f} \) . Now, by the polarization identity, \( f \) will be in \( {\mathcal{F}}_{1} \) provided that \[ \langle \psi ,\left( {fg}\right) \left( A\right) \psi \rangle = \langle \psi, f\left( A\right) g\left( A\right) \psi \rangle \] or, equivalently, \[ {Q}_{fg}\left( \psi \right) = {L}_{f}\left( {\psi, g\left( A\right) \psi }\right) \] for all \( \psi \in \mathbf{H} \) and all \( g \in \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . From this, we can see that \( {\mathcal{F}}_{1} \) is closed under uniformly bounded pointwise limits. Thus, by Exercise \( 3,{\mathcal{F}}_{1} \) consists of all bounded, Borel-measurable functions. We now let \( {\mathcal{F}}_{2} \) denote the space of all bounded, Borel-measurable functions \( f \) such that \( \left( {fg}\right) \left( A\right) = f\left( A\right) g\left( A\right) \) for all bounded Borel-measurable functions \( g \) . Our result for \( {\mathcal{F}}_{1} \) shows that \( {\mathcal{F}}_{2} \) contains \( \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . Thus, the same argument as for \( {\mathcal{F}}_{1} \) shows that \( {\mathcal{F}}_{2} \) consists of all bounded, Borel-measurable functions. Theorem 8.10 Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint. For any measurable set \( E \subset \sigma \left( A\right) \), define an operator \( {\mu }^{A}\left( E\right) \) by \[ {\mu }^{A}\left( E\right) = {1}_{E}\left( A\right) \] where \( {1}_{E}\left( A\right) \) is given by Definition 8.8. 
Then \( {\mu }^{A} \) is a projection-valued measure on \( \sigma \left( A\right) \) and satisfies \[ {\int }_{\sigma \left( A\right) }{\lambda d}{\mu }^{A}\left( \lambda \right) = A \] Theorem 8.10 establishes the existence of the projection-valued measure in our first version of the spectral theorem (Theorem 7.12). Proof. Since \( {1}_{E} \) is real-valued and satisfies \( {1}_{E} \cdot {1}_{E} = {1}_{E} \), Proposition 8.4 tells us that \( {1}_{E}\left( A\right) \) is self-adjoint and satisfies \( {1}_{E}{\left( A\right) }^{2} = {1}_{E}\left( A\right) \) . Thus, \( {\mu }^{A}\left( E\right) \) is an orthogonal projection (Proposition A.57), for any measurable set \( E \subset X \) . If \( {E}_{1} \) and \( {E}_{2} \) are measurable sets, then \( {1}_{{E}_{1} \cap {E}_{2}} = {1}_{{E}_{1}} \cdot {1}_{{E}_{2}} \) and so \[ {\mu }^{A}\left( {{E}_{1} \cap {E}_{2}}\right) = {\mu }^{A}\left( {E}_{1}\right) {\mu }^{A}\left( {E}_{2}\right) . \] If \( {E}_{1},{E}_{2},\ldots \) are disjoint measurable sets, then \( {\mu }^{A}\left( {E}_{j}\right) {\mu }^{A}\left( {E}_{k}\right) = {\mu }^{A}\left( \varnothing \right) = 0 \) , for \( j \neq k \), and so the ranges of the projections \( {\mu }^{A}\left( {E}_{j}\right) \) and \( {\mu }^{A}\left( {E}_{k}\right) \) are orthogonal. It then follows by an elementary argument that, for all \( \psi \in \mathbf{H} \) , we have \[ \mathop{\sum }\limits_{{j = 1}}^{\infty }{\mu }^{A}\left( {E}_{j}\right) \psi = {P\psi } \] where the sum converges in the norm topology of \( \mathbf{H} \) and where \( P \) is the orthogonal projection onto the smallest closed subspace containing the range of \( {\mu }^{A}\left( {E}_{j}\right) \) for every \( j \) . On the other hand, if \( E \mathrel{\text{:=}} { \cup }_{j = 1}^{\infty }{E}_{j} \), then the sequence \( {f}_{N} \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{N}{1}_{{E}_{j}} \) is uniformly bounded (by 1) and converges pointwise to \( {1}_{E} \) . Thus, using again dominated convergence in (8.8), \[ \mathop{\lim }\limits_{{N \rightarrow \infty }}\left\langle {\psi ,\mathop{\sum }\limits_{{j = 1}}^{N}{1}_{{E}_{j}}\left( A\right) \psi }\right\rangle = \left\langle {\psi ,{1}_{E}\left( A\right) \psi }\right\rangle . \] It follows that \( {1}_{E}\left( A\right) \) coincides with \( P \), which establishes the desired countable additivity for \( {\mu }^{A} \) . Finally, if \( f = {1}_{E} \) for some Borel set \( E \), then \[ {\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }^{A}\left( \lambda \right) = f\left( A\right) \] (8.10) where \( f\left( A\right) \) is given by Definition 8.8. [The integral is equal to \( {\mu }^{A}\left( E\right) \), which is, by definition, equal to \( {1}_{E}\left( A\right) \) .] The equality (8.10) then holds for simple functions by linearity and for all bounded, Borel-measurable functions by taking limits. In particular, if \( f\left( \lambda \right) = \lambda \), then the integral of \( f \) against \( {\mu }^{A} \) agrees with \( f\left( A\right) \) as defined in Definition 8.8, which agrees with \( f\left( A\right) \) as defined in the continuous functional calculus, which in turn agrees with \( f\left( A\right) \) as defined for polynomials - namely, \( f\left( A\right) = A \) . This means that \[ {\int }_{\sigma \left( A\right) }{\lambda d}{\mu }^{A}\left( \lambda \right) = A \] as desired. - We have now completed the existence of the projection-valued measure \( {\mu }^{A} \) in Theorem 7.12. The uniqueness of \( {\mu }^{A} \) is left as an exercise (Exercise 4). 
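The following short numpy sketch (added here, not from the text; all names are ad hoc) illustrates Theorem 8.10 in the finite-dimensional toy case \( \mathbf{H} = {\mathbb{C}}^{n} \) : for a Hermitian matrix \( A \), the projection-valued measure assigns to a Borel set \( E \) the orthogonal projection onto the span of the eigenvectors whose eigenvalues lie in \( E \), and \( {\int }_{\sigma \left( A\right) }{\lambda d}{\mu }^{A}\left( \lambda \right) = A \) becomes a finite sum.

```python
import numpy as np

# Toy illustration of mu^A(E) = 1_E(A) for a real symmetric matrix on C^3.
# Here sigma(A) is just the finite set of eigenvalues {1, 3, 5}.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigvals, eigvecs = np.linalg.eigh(A)      # spectral decomposition of A

def mu_A(indicator):
    """Projection-valued measure: sum of projections onto eigenvectors
    whose eigenvalue lies in E, where E is given by its indicator function."""
    P = np.zeros_like(A)
    for lam, v in zip(eigvals, eigvecs.T):
        if indicator(lam):
            P += np.outer(v, v.conj())
    return P

# mu^A(E) is an orthogonal projection ...
P = mu_A(lambda lam: lam < 4)             # E = (-infinity, 4)
assert np.allclose(P @ P, P) and np.allclose(P, P.conj().T)

# ... and, since the eigenvalues here are distinct, summing lam * mu^A({lam})
# recovers A: the finite-sum form of "integral of lambda d mu^A = A".
A_rebuilt = sum(lam * mu_A(lambda t, lam=lam: np.isclose(t, lam))
                for lam in eigvals)
assert np.allclose(A_rebuilt, A)
```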
We close this section by proving Proposition 7.16, which states that if a bounded operator \( B \) commutes with a bounded self-adjoint operator \( A \), then \( B \) commutes with \( f\left( A\right) \), for all bounded, Borel-measurable functions \( f \) on \( \sigma \left( A\right) \) .

Proof of Proposition 7.16. If \( B \) commutes with \( A \), then \( B \) commutes with \( p\left( A\right) \), for any polynomial \( p \) . Thus, by taking limits as in the construction of the continuous functional calculus, \( B \) will commute with \( f\left( A\right) \) for any continuous real-valued function \( f \) on \( \sigma \left( A\right) \) . We now let \( \mathcal{F} \) denote the space of all bounded, Borel-measurable functions \( f \) on \( \sigma \left( A\right) \) for which \( f\left( A\right) \) commutes with \( B \), so that \( \mathcal{F} \) contains \( \mathcal{C}\left( {\sigma \left( A\right) ;\mathbb{R}}\right) \) . To show that a bounded measurable \( f \) belongs to \( \mathcal{F} \), it suffices to show that for all \( \phi ,\psi \in \mathbf{H} \) we have \( \langle \phi, f\left( A\right) {B\psi }\rangle = \langle \phi ,{Bf}\left( A\right) \psi \rangle \), or, equivalently, \( \langle \phi, f\left( A\right) {B\psi }\rangle = \left\langle {{B}^{ * }\phi, f\left( A\right) \psi }\right\rangle \) . That is, we want
\[ {L}_{f}\left( {\phi ,{B\psi }}\right) = {L}_{f}\left( {{B}^{ * }\phi ,\psi }\right) . \]
But we have seen that for fixed vectors \( {\psi }_{1},{\psi }_{2} \in \mathbf{H} \), the map \( f \mapsto {L}_{f}\left( {{\psi }_{1},{\psi }_{2}}\right) \) is continuous under uniformly bounded pointwise limits. Thus, \( \mathcal{F} \) is closed under such limits, which implies (Exercise 3) that \( \mathcal{F} \) contains all bounded, Borel-measurable functions. ∎

## 8.2 Proof of the Spectral Theorem, Second Version

We now turn to the proof of Theorem 7.19. As in the proof of Theorem 7.12, we will make use of continuous functional calculus for a bounded self-adjoint operator \( A \) and the Riesz representation theorem. We begin by establishing the special case in which \( A \) has a cyclic vector, that is, a vector \( \psi \) with the property that the vectors \( {A}^{k}\psi, k = 0,1,2,\ldots \), span a dense subspace of \( \mathbf{H} \) . In that case, the direct integral will be simply an \( {L}^{2} \) space (i.e., the Hilbert spaces \( {\mathbf{H}}_{\lambda } \) are equal to \( \mathbb{C} \) for all \( \lambda \) ). Thus, in this special case, the direct integral and multiplication operator versions of the spectral theorem coincide.

Lemma 8.11 Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint and \( \psi \) is a cyclic vector for \( A \) . Let \( {\mu }_{\psi } \) be the unique measure on \( \sigma \left( A\right) \), given by Theorem 8.5, for which \[ \langle \psi, f\left( A\right) \psi \rangle = {\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }_{\psi
111_111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 5.1
Definition 5.1 A vector \( x \in \mathcal{H} \) is called a generating vector or a cyclic vector for \( A \) if the linear span of vectors \( {E}_{A}\left( M\right) x, M \in \mathfrak{B}\left( \mathbb{R}\right) \), is dense in \( \mathcal{H} \) . We say that \( A \) has a simple spectrum if \( A \) has a generating vector.

That is, by Lemma 5.17, a vector \( x \in \mathcal{H} \) is generating for \( A \) if and only if \( \mathcal{H} \) is the smallest reducing subspace for \( A \) containing \( x \) .

Example 5.4 (Multiplication operators on \( \mathbb{R} \) ) Suppose that \( \mu \) is a positive regular Borel measure on \( \mathbb{R} \) . Let \( {A}_{t} \) be the operator on \( \mathcal{H} = {L}^{2}\left( {\mathbb{R},\mu }\right) \) defined by \( \left( {{A}_{t}f}\right) \left( t\right) = {tf}\left( t\right) \) for \( f \in \mathcal{D}\left( {A}_{t}\right) = \{ f \in \mathcal{H} : {tf}\left( t\right) \in \mathcal{H}\} \) . By Example 5.2, the spectral projection \( {E}_{{A}_{t}}\left( M\right) \) acts as the multiplication operator by the characteristic function \( {\chi }_{M} \) . Hence, \( \operatorname{supp}\mu = \operatorname{supp}{E}_{{A}_{t}} \) . Since \( \operatorname{supp}{E}_{{A}_{t}} = \sigma \left( {A}_{t}\right) \) by Proposition 5.10(i), we have
\[ \operatorname{supp}\mu = \sigma \left( {A}_{t}\right) . \]
(5.25)

Statement 1. \( \lambda \in \mathbb{R} \) is an eigenvalue of \( {A}_{t} \) if and only if \( \mu \left( {\{ \lambda \} }\right) \neq 0 \) . Each eigenvalue of \( {A}_{t} \) has multiplicity one.

Proof Both assertions follow immediately from the fact that \( \mathcal{N}\left( {{A}_{t} - {\lambda I}}\right) \) consists of complex multiples of the characteristic function of the point \( \lambda \) .

Statement 2. \( {A}_{t} \) has a simple spectrum.

Proof If the measure \( \mu \) is finite, then the constant function 1 is in \( \mathcal{H} \) and a generating vector for \( {A}_{t} \) . In the general case we set
\[ x\left( t\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = - \infty }}^{\infty }{2}^{-\left| k\right| }\mu {\left( \lbrack k, k + 1)\right) }^{-1/2}{\chi }_{\lbrack k, k + 1)}\left( t\right) . \]
Then \( x \in {L}^{2}\left( {\mathbb{R},\mu }\right) \) . Clearly, the linear span of functions \( {\chi }_{M} \cdot x, M \in \mathfrak{B}\left( \mathbb{R}\right) \), contains the characteristic functions \( {\chi }_{N} \) of all bounded Borel sets \( N \) . Since the span of such functions \( {\chi }_{N} \) is dense in \( {L}^{2}\left( {\mathbb{R},\mu }\right) \), \( x\left( t\right) \) is a generating vector for \( {A}_{t} \) . □

The next proposition shows that up to unitary equivalence each self-adjoint operator with simple spectrum is of this form for some finite Borel measure \( \mu \) . For \( x \in \mathcal{H} \), we denote by \( {\mathcal{F}}_{x} \) the set of all \( {E}_{A} \)-a.e. finite measurable functions \( f : \mathbb{R} \rightarrow \mathbb{C} \cup \{ \infty \} \) for which \( x \in \mathcal{D}\left( {f\left( A\right) }\right) \) .

Proposition 5.18 Let \( x \) be a generating vector for the self-adjoint operator \( A \) on \( \mathcal{H} \) . Set \( \mu \left( \cdot \right) \mathrel{\text{:=}} \left\langle {{E}_{A}\left( \cdot \right) x, x}\right\rangle \) .
Then the map \( \left( {U\left( {f\left( A\right) x}\right) }\right) \left( t\right) = f\left( t\right), f \in {\mathcal{F}}_{x} \), is a unitary operator of \( \mathcal{H} \) onto \( {L}^{2}\left( {\mathbb{R},\mu }\right) \) such that \( A = {U}^{-1}{A}_{t}U \) and \( \left( {Ux}\right) \left( t\right) = 1 \) on \( \mathbb{R} \), where \( {A}_{t} \) is the multiplication operator on \( {L}^{2}\left( {\mathbb{R},\mu }\right) \) from Example 5.4. Proof For \( f \in {\mathcal{F}}_{x} \) it follows from Theorem 5.9,2) that \[ \parallel f\left( A\right) x{\parallel }^{2} = {\int }_{\mathbb{R}}{\left| f\left( t\right) \right| }^{2}d\left\langle {{E}_{A}\left( t\right) x, x}\right\rangle = \parallel f{\parallel }_{{L}^{2}\left( {\mathbb{R},\mu }\right) }^{2}. \] If \( f \in {L}^{2}\left( {\mathbb{R},\mu }\right) \), then \( x \in \mathcal{D}\left( {f\left( A\right) }\right) \) by (5.11). Therefore, the preceding formula shows that \( U \) is a well-defined isometric linear map of \( \left\{ {f\left( A\right) x : f \in {\mathcal{F}}_{x}}\right\} \) onto \( {L}^{2}\left( {\mathbb{R},\mu }\right) \) . Letting \( f\left( \lambda \right) \equiv 1 \), we get \( \left( {Ux}\right) \left( t\right) = 1 \) on \( \mathbb{R} \) . Let \( g \in {\mathcal{F}}_{x} \) . By (5.11), applied to the function \( f\left( \lambda \right) = \lambda \) on \( \mathbb{R} \), we conclude that \( g\left( A\right) x \in \mathcal{D}\left( A\right) \) if and only if \( {Ug}\left( A\right) x \in \mathcal{D}\left( {A}_{t}\right) \) . From the functional calculus we obtain \( {UAg}\left( A\right) x = {A}_{t}{Ug}\left( A\right) x \) . This proves that \( A = {U}^{-1}{A}_{t}U \) . Corollary 5.19 If the self-adjoint operator \( A \) has a simple spectrum, each eigenvalue of \( A \) has multiplicity one. Proof Combine Proposition 5.18 with Statement 1 of Example 5.4. Proposition 5.20 A self-adjoint operator A on \( \mathcal{H} \) has a simple spectrum if and only if there exists a vector \( x \in \mathop{\bigcap }\limits_{{n = 0}}^{\infty }\mathcal{D}\left( {A}^{n}\right) \) such that \( \operatorname{Lin}\left\{ {{A}^{n}x : n \in {\mathbb{N}}_{0}}\right\} \) is dense in \( \mathcal{H} \) . Proof Suppose that the condition is fulfilled. By the definition of spectral integrals, \( {A}^{n}x \) is a limit of linear combinations of vectors \( {E}_{A}\left( M\right) x \) . Therefore, the density of \( \operatorname{Lin}\left\{ {{A}^{n}x : n \in {\mathbb{N}}_{0}}\right\} \) in \( \mathcal{H} \) implies that the span of vectors \( {E}_{A}\left( M\right) x, M \in \mathfrak{B}\left( \mathbb{R}\right) \), is dense in \( \mathcal{H} \) . This means that \( x \) is a generating vector for \( A \) . The converse direction will be proved at the end of Sect. 7.4. Finally, we state without proof another criterion (see, e.g., [RS1, Theorem VII.5]): A self-adjoint operator A has a simple spectrum if and only if its commutant \[ \{ A{\} }^{\prime } \mathrel{\text{:=}} \{ B \in \mathbf{B}\left( \mathcal{H}\right) : {BA} \subseteq {AB}\} \] is commutative, or equivalently, if each operator \( B \in \{ A{\} }^{\prime } \) is a (bounded) function \( f\left( A\right) \) of \( A \) . ## 5.5 Spectral Theorem for Finitely Many Strongly Commuting Normal Operators In this section we develop basic concepts and results on the multidimensional spectral theory for strongly commuting unbounded normal operators. 
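Before turning to \( n \)-tuples, here is a small numerical sanity check (added here, not from the text; the helper name and matrices are ad hoc) of Proposition 5.20 and Corollary 5.19 in the finite-dimensional case: for a Hermitian matrix, a vector \( x \) is generating exactly when the Krylov vectors \( x, Ax, {A}^{2}x,\ldots \) span the whole space, which forces every eigenvalue to be simple.

```python
import numpy as np

def krylov_rank(A, x, tol=1e-10):
    """Rank of the Krylov matrix [x, Ax, ..., A^{n-1}x]; in dimension n,
    x is a generating (cyclic) vector for A iff this rank equals n
    (the finite-dimensional shadow of Proposition 5.20)."""
    n = A.shape[0]
    K = np.column_stack([np.linalg.matrix_power(A, k) @ x for k in range(n)])
    return np.linalg.matrix_rank(K, tol=tol)

# Distinct eigenvalues: a vector with nonzero components in every eigenvector
# direction is generating, so this A has a simple spectrum.
A_simple = np.diag([1.0, 2.0, 3.0])
print(krylov_rank(A_simple, np.array([1.0, 1.0, 1.0])))   # -> 3

# Repeated eigenvalue: no vector can be generating (Corollary 5.19 -- an
# eigenvalue of multiplicity two rules out a simple spectrum).
A_degenerate = np.diag([1.0, 1.0, 3.0])
for _ in range(5):
    x = np.random.randn(3)
    assert krylov_rank(A_degenerate, x) <= 2
```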
## 5.5.1 The Spectral Theorem for n-Tuples Definition 5.2 We say that two unbounded normal operators \( S \) and \( T \) acting on the same Hilbert space strongly commute if their bounded transforms \( {Z}_{T} \) and \( {Z}_{S} \) (see (5.6)) commute. This definition is justified by Proposition 5.27 below which collects various equivalent conditions. Among others this proposition contains the simple fact that two bounded normals \( T \) and \( S \) strongly commute if and only if they commute. The following theorem is the most general spectral theorem in this book. Theorem 5.21 Let \( T = \left\{ {{T}_{1},\ldots ,{T}_{n}}\right\}, n \in \mathbb{N} \), be an \( n \) -tuple of unbounded normal operators acting on the same Hilbert space \( \mathcal{H} \) such that \( {T}_{k} \) and \( {T}_{l} \) strongly commute for all \( k, l = 1,\ldots, n, k \neq l \) . Then there exists a unique spectral measure \( E = {E}_{T} \) on the Borel \( \sigma \) -algebra \( \mathfrak{B}\left( {\mathbb{C}}^{n}\right) \) on the Hilbert space \( \mathcal{H} \) such that \[ {T}_{k} = {\int }_{{\mathbb{C}}^{n}}{t}_{k}d{E}_{T}\left( {{t}_{1},\ldots ,{t}_{n}}\right) ,\;k = 1,\ldots, n. \] (5.26) In the proof of Theorem 5.21 we use the following lemma. Lemma 5.22 Let \( E = {E}_{1} \times \cdots \times {E}_{m} \) be the product measure (by Theorem 4.10) of spectral measures \( {E}_{1},\ldots ,{E}_{m} \) on \( \mathfrak{B}\left( \mathbb{R}\right) \) . If \( f \) is a Borel function on \( \mathbb{R} \), then \[ {\int }_{\mathbb{R}}f\left( {\lambda }_{k}\right) d{E}_{k}\left( {\lambda }_{k}\right) = {\int }_{{\mathbb{R}}^{m}}f\left( {\lambda }_{k}\right) {dE}\left( {{\lambda }_{1},\ldots ,{\lambda }_{m}}\right) . \] (5.27) Proof Formula (5.27) holds for any characteristic function \( {\chi }_{M}, M \in \mathfrak{B}\left( \mathbb{R}\right) \), since \[ {\int }_{\mathbb{R}}{\chi }_{M}\left( {\lambda }_{k}\right) d{E}_{k}\left( {\lambda }_{k}\right) = {E}_{k}\left( M\right) = E\left( {\mathbb{R} \times \cdots \times M \times \cdots \times \mathbb{R}}\right) \] \[ = {\int }_{{\mathbb{R}}^{m}}{\chi }_{M}\left( {\lambda }_{k}\right) {dE}\left( {{\lambda }_{1},\ldots ,{\lambda }_{m}}\right) \] by (4.21) and Theorem 5.9, 8.). Hence, formula (5.27) is valid for simple functions by linearity and for arbitrary Borel functions by passing to limits. Proof of Theorem 5.21 By Lemma 5.8,(i) and (iii), the bounded transform \( {Z}_{k} \mathrel{\text{:=}} \) \( {Z}_{{T}_{k}} \) of the normal operator \( {T}_{k} \) is a bounded normal operator. The operators \( {Z}_{1},\ldots ,{Z}_{n} \) pairwise commute, because \( {T}_{1},\ldots ,{T}_{n} \) pairwise strongly commute by assumption. We first prove the existence of a spectral measure for \( {Z}_{1},\ldots ,{Z}_{n} \), and from this we then derive a spectral measure for \( {T}_{1},\ldots ,{T}_{n} \) . To be precise, we first show that if \( {Z}_{1},\ldots ,{Z}_{n} \) are pairwise commuting bounded normal operators on \( \mathcal{H} \), there exists a spectral measure \( F \) on \( \mathfrak{B}\left( {\mathbb{C}}^{n}\right) \) such that \[ {Z}_{k} = {\int }_{{\mathbb{C}}^{n}}{z}_{k}{dF}\left( {{z}_{1},\ldots ,{z}_{n}}\right) ,\;k = 1,\ldots, n. \] (5.28) Since \( {Z}_{k} \) is normal, there are commuting bounded self-adjoint operators \( {A
1077_(GTM235)Compact Lie Groups
Definition 3.16
Definition 3.16. Let \( V \) be a unitary representation of a compact Lie group \( G \) on a Hilbert space. For \( \left\lbrack \pi \right\rbrack \in \widehat{G} \), let \( {V}_{\left\lbrack \pi \right\rbrack } \) be the largest subspace of \( V \) that is a Hilbert space direct sum of irreducible submodules equivalent to \( {E}_{\pi } \) . The submodule \( {V}_{\lbrack \pi \rbrack } \) is called the \( \pi \) -isotypic component of \( V \) . As in the finite-dimensional case, the above definition of the isotypic component \( {V}_{\left\lbrack \pi \right\rbrack } \) is well defined and \( {V}_{\left\lbrack \pi \right\rbrack } \) is the closure of the sum of all submodules of \( V \) equivalent to \( {E}_{\pi } \) These statements are verified using Zorn’s Lemma in a fashion similar to the proof of Corollary 3.15 (Exercise 3.12). Lemma 3.17. Let \( V \) be a unitary representation of a compact Lie group \( G \) on a Hilbert space with invariant inner product \( {\left( \cdot , \cdot \right) }_{V} \) and let \( {E}_{\pi },\left\lbrack \pi \right\rbrack \in \widehat{G} \), be an irreducible representation of \( G \) with invariant inner product \( {\left( \cdot , \cdot \right) }_{{E}_{\pi }} \) . Then \( {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \) is a Hilbert space with a \( G \) -invariant inner product \( {\left( \cdot , \cdot \right) }_{\text{Hom }} \) defined by \( {\left( {T}_{1},{T}_{2}\right) }_{\text{Hom }}I = {T}_{2}^{ * } \circ {T}_{1} \) . It satisfies (3.18) \[ {\left( {T}_{1},{T}_{2}\right) }_{\text{Hom }}{\left( {x}_{1},{x}_{2}\right) }_{{E}_{\pi }} = {\left( {T}_{1}{x}_{1},{T}_{2}{x}_{2}\right) }_{V} \] for \( {T}_{i} \in {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \) and \( {x}_{i} \in {E}_{\pi } \) . Moreover, \( \parallel T{\parallel }_{\text{Hom }} \) is the same as the operator norm of \( T \) . Proof. The adjoint of \( {T}_{2},{T}_{2}^{ * } \in \operatorname{Hom}\left( {V,{E}_{\pi }}\right) \), is still a \( G \) -map since \[ {\left( {T}_{2}^{ * }\left( gv\right), x\right) }_{{E}_{\pi }} = {\left( gv,{T}_{2}x\right) }_{V} = {\left( v,{T}_{2}\left( {g}^{-1}x\right) \right) }_{V} = {\left( {T}_{2}^{ * }v,{g}^{-1}x\right) }_{{E}_{\pi }} \] \[ = {\left( g{T}_{2}^{ * }v, x\right) }_{{E}_{\pi }} \] for \( x \in {E}_{\pi } \) and \( v \in V \) . Thus \( {T}_{2}^{ * } \circ {T}_{1} \in \operatorname{Hom}\left( {{E}_{\pi },{E}_{\pi }}\right) \) . Schur’s Lemma implies that there is a scalar \( {\left( {T}_{1},{T}_{2}\right) }_{\text{Hom }} \in \overline{\mathbb{C}} \), so that \( {\left( {T}_{1},{T}_{2}\right) }_{\text{Hom }}I = {T}_{2}^{ * } \circ {T}_{1} \) . By definition, \( {\left( \cdot , \cdot \right) }_{\text{Hom }} \) is clearly a Hermitian form on \( {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \) and \[ {\left( {T}_{1}{x}_{1},{T}_{2}{x}_{2}\right) }_{V} = {\left( {T}_{2}^{ * }\left( {T}_{1}{x}_{1}\right) ,{x}_{2}\right) }_{{E}_{\pi }} = {\left( {\left( {T}_{1},{T}_{2}\right) }_{\text{Hom }}{x}_{1},{x}_{2}\right) }_{{E}_{\pi }} \] \[ = {\left( {T}_{1},{T}_{2}\right) }_{\text{Hom }}{\left( {x}_{1},{x}_{2}\right) }_{{E}_{\pi }}\text{.} \] In particular, for \( T \in {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) ,\parallel T{\parallel }_{\text{Hom }} \) is the quotient of \( \parallel {Tx}{\parallel }_{V} \) and \( \parallel x{\parallel }_{{E}_{\pi }} \) for any nonzero \( x \in {E}_{\pi } \) . 
Thus \( \parallel T{\parallel }_{\text{Hom }} \) is the same as the operator norm of \( T \) viewed as an element of \( \operatorname{Hom}\left( {{E}_{\pi }, V}\right) \) . Hence \( {\left( \cdot , \cdot \right) }_{\text{Hom }} \) is an inner product making \( {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \) into a Hilbert space. Note Equation 3.18 is independent of the choice of invariant inner product on \( {E}_{\pi } \) . To see this directly, observe that scaling \( {\left( \cdot , \cdot \right) }_{{E}_{\pi }} \) scales \( {T}_{2}^{ * } \), and therefore \( {\left( \cdot , \cdot \right) }_{{\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) } \), by the inverse scalar so that the product of \( {\left( \cdot , \cdot \right) }_{{E}_{\pi }} \) and \( {\left( \cdot , \cdot \right) }_{{\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) } \) remains unchanged. If \( {V}_{i} \) are Hilbert spaces with inner products \( {\left( \cdot , \cdot \right) }_{i} \), recall that the Hilbert space tensor product, \( {V}_{1}\widehat{ \otimes }{V}_{2} \), is the completion of \( {V}_{1} \otimes {V}_{2} \) with respect to the inner product generated by \( \left( {{v}_{1} \otimes {v}_{2},{v}_{1}^{\prime } \otimes {v}_{2}^{\prime }}\right) = \left( {{v}_{1},{v}_{1}^{\prime }}\right) \left( {{v}_{2},{v}_{2}^{\prime }}\right) \) (c.f. Exercise 3.1). Theorem 3.19 (Canonical Decomposition). Let \( V \) be a unitary representation of a compact Lie group \( G \) on a Hilbert space. (1) There is a \( G \) -intertwining unitary isomorphism \( {\iota }_{\pi } \) \[ {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \widehat{ \otimes }{E}_{\pi }\overset{ \cong }{ \rightarrow }{V}_{\left\lbrack \pi \right\rbrack } \] induced by \( {\iota }_{\pi }\left( {T \otimes v}\right) = T\left( v\right) \) for \( T \in {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \) and \( v \in V \) . (2) There is a \( G \) -intertwining unitary isomorphism \[ {\bigoplus }_{\left\lbrack \pi \right\rbrack \in \widehat{G}}{\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \otimes {E}_{\pi }\overset{ \cong }{ \rightarrow }V = {\bigoplus }_{\left\lbrack \pi \right\rbrack \in \widehat{G}}{V}_{\left\lbrack \pi \right\rbrack }. \] Proof. As in the proof of Theorem 2.24, \( {\iota }_{\pi } \) is a well-defined \( G \) -map from \( {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \otimes {E}_{\pi } \) to \( {V}_{\left\lbrack \pi \right\rbrack } \) with dense range (since \( {V}_{\left\lbrack \pi \right\rbrack } \) is a Hilbert space direct sum of irreducible submodules instead of finite direct sum as in Theorem 2.24). As Lemma 3.17 implies \( {\iota }_{\pi } \) is unitary on \( {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \otimes {E}_{\pi } \), it follows that \( {\iota }_{\pi } \) is injective and uniquely extends by continuity to a \( G \) -intertwining unitary isomorphism from \( {\operatorname{Hom}}_{G}\left( {{E}_{\pi }, V}\right) \widehat{ \otimes }{E}_{\pi } \) to \( {V}_{\left\lbrack \pi \right\rbrack } \) . Finally, \( V \) is the closure of \( \mathop{\sum }\limits_{{\left\lbrack \pi \right\rbrack \in \widehat{G}}}{V}_{\left\lbrack \pi \right\rbrack } \) by Corollary 3.15 and the sum is orthogonal by Corollary 2.21. ## 3.2.4 Exercises Exercise 3.11 Let \( V \) and \( W \) be unitary representations of a compact Lie group \( G \) on Hilbert spaces and let \( T \in {\operatorname{Hom}}_{G}\left( {V, W}\right) \) be injective with dense range. 
Show that \( {T}^{ * } \in {\operatorname{Hom}}_{G}\left( {W, V}\right) ,{T}^{ * } \) is injective, and that \( {T}^{ * } \) has dense range. Exercise 3.12 Let \( V \) be a unitary representation of a compact Lie group \( G \) on a Hilbert space and let \( \left\lbrack \pi \right\rbrack \in \widehat{G} \) . (a) Consider the collection of all sets \( \left\{ {{V}_{\alpha } \mid \alpha \in \mathcal{A}}\right\} \) satisfying the properties: (1) each \( {V}_{\alpha } \) is a submodule of \( V \) isomorphic to \( {E}_{\pi } \) and (2) \( {V}_{\alpha } \bot {V}_{\beta } \) for distinct \( \alpha ,\beta \in \mathcal{A} \) . Partially order this collection by inclusion and use Zorn's Lemma to show that there is a maximal element. (b) Write \( \left\{ {{V}_{\alpha } \mid \alpha \in \mathcal{A}}\right\} \) for the maximal element. Show that the orthogonal projection \( P : V \rightarrow {\left( {\widehat{\bigoplus }}_{\alpha \in \mathcal{A}}{V}_{\alpha }\right) }^{ \bot } \) is a \( G \) -map. If \( {V}_{\gamma } \subseteq V \) is any submodule equivalent to \( {E}_{\pi } \) , use irreducibility and maximality to show that \( P{V}_{\gamma } = \{ 0\} \) . (c) Show that the definition of the isotypic component \( {V}_{\left\lbrack \pi \right\rbrack } \) in Definition 3.16 is well defined and that \( {V}_{\left\lbrack \pi \right\rbrack } \) is the closure of the sum of all submodules of \( V \) equivalent to \( {E}_{\pi } \) . Exercise 3.13 Recall that \( \widehat{{S}^{1}} \cong \mathbb{Z} \) via the one-dimensional representations \( {\pi }_{n}\left( {e}^{i\theta }\right) = \) \( {e}^{in\theta } \) for \( n \in \mathbb{Z} \) (Exercise 2.21). View \( {L}^{2}\left( {S}^{1}\right) \) as a unitary representation of \( {S}^{1} \) under the action \( \left( {{e}^{i\theta } \cdot f}\right) \left( {e}^{i\alpha }\right) = f\left( {e}^{i\left( {\alpha - \theta }\right) }\right) \) for \( f \in {L}^{2}\left( {S}^{1}\right) \) . Calculate \( {\operatorname{Hom}}_{{S}^{1}}\left( {{\pi }_{n},{L}^{2}\left( {S}^{1}\right) }\right) \) and conclude that \( {L}^{2}\left( {S}^{1}\right) = {\widehat{\bigoplus }}_{n \in \mathbb{Z}}\mathbb{C}{e}^{in\theta } \) . Exercise 3.14 Use Exercise 2.28 and Theorem 2.33 to show that \[ {L}^{2}\left( {S}^{n - 1}\right) = {\widehat{\bigoplus }}_{m \in \mathbb{N}}{\left. {\mathcal{H}}_{m}\left( {\mathbb{R}}^{n}\right) \right| }_{{S}^{n - 1}},\;n \geq 2, \] is the canonical decomposition of \( {L}^{2}\left( {S}^{n - 1}\right) \) under \( O\left( n\right) \) (or \( {SO}\left( n\right) \) for \( n \geq 3 \) ) with respect to usual action \( \left( {gf}\right) \left( v\right) = f\left( {{g}^{-1}v}\right) \) . Exercise 3.15 Recall that the irreducible unitary representations of \( \mathbb{R} \) are given by the one-dimensional representations \( {\pi }_{r}\left( x\right) = {e}^{irx} \) for \( r \in \mathbb{R} \) (Exercise 2.21) and consider the unitary representation of \( \mathbb{R} \) on \( {L}^{2}\left( \mathbb{R}\right) \) under the action \( \left( {x \cdot f}\right) \left( y\right) = \) \( f\left( {x
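As a numerical companion to Exercise 3.13 (an added sketch, not part of the text; the grid size and names are ad hoc), the following Python fragment checks on a discretized circle that rotation by \( \theta \) acts on the \( n \)th Fourier mode by the scalar \( {e}^{-{in\theta }} \), so the lines \( \mathbb{C}{e}^{in\theta } \) are one-dimensional invariant subspaces and \( {L}^{2}\left( {S}^{1}\right) = {\widehat{\bigoplus }}_{n \in \mathbb{Z}}\mathbb{C}{e}^{in\theta } \) is the canonical decomposition.

```python
import numpy as np

# Sample f at N equally spaced points of S^1; rotation by theta = 2*pi*k0/N
# becomes a cyclic shift of the samples, and the DFT diagonalizes it.
N, k0 = 64, 5
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

rotated = np.roll(f, k0)        # (e^{i theta} . f)(alpha) = f(alpha - theta)
F, R = np.fft.fft(f), np.fft.fft(rotated)

# On the n-th Fourier coefficient the rotation acts by e^{-2*pi*i*n*k0/N},
# i.e. by a one-dimensional character: each Fourier line is an isotypic
# component of L^2(S^1) in the sense of Definition 3.16.
n = np.arange(N)
assert np.allclose(R, np.exp(-2j * np.pi * n * k0 / N) * F)
```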
113_Topological Groups
Definition 21.28
Definition 21.28. A is a prime model of \( \Gamma \) if \( \mathfrak{A} \) is a model of \( \Gamma \) and \( \mathfrak{A} \) can be embedded in any model of \( \Gamma \) . Proposition 21.29. If \( \Gamma \) is model complete and has a prime model, then \( \Gamma \) is complete. Proof. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be any two models of \( \Gamma \) ; we show that \( \mathfrak{A} \equiv \mathfrak{B} \) . Let \( \mathfrak{C} \) be a prime model of \( \Gamma \) . Thus there are embeddings \( f \) and \( g \) of \( \mathfrak{C} \) into \( \mathfrak{A} \) and \( \mathfrak{B} \) respectively. Since \( \Gamma \) is model complete, \( f \) and \( g \) are actually elementary embeddings. Hence \( \mathfrak{A} \equiv \mathfrak{C} \equiv \mathfrak{B} \) . Definition 21.30. A theory \( \Gamma \) has the joint extension property if any two models of \( \Gamma \) can be embedded in a model of \( \Gamma \) . The following proposition is proved just like 21.29: Proposition 21.31. If \( \Gamma \) is model complete and has the joint extension property, then \( \Gamma \) is complete. As our final theoretical result concerning model completeness, we shall use the concept to give a highly useful purely mathematical condition for completeness. Theorem 21.32. Let \( \Gamma \) be a theory satisfying the following two conditions: (i) if \( \mathfrak{A} \) and \( \mathfrak{B} \) are any two models of \( \Gamma \), then every finitely generated substructure of \( \mathfrak{A} \) is embeddable in \( \mathfrak{B} \) ; (ii) if \( \mathfrak{A} \) and \( \mathfrak{B} \) are any two models of \( \Gamma ,\mathfrak{A} \subseteq \mathfrak{B} \), and \( {\mathfrak{B}}^{\prime } \) is a finitely generated substructure of \( \mathfrak{B} \), then there is an embedding \( f \) of \( {\mathfrak{B}}^{\prime } \) into \( \mathfrak{A} \) such that \( f \upharpoonright A \cap {B}^{\prime } = \mathbf{I} \upharpoonright A \cap {B}^{\prime }. \) Then \( \Gamma \) is complete. Proof. First we show that \( \Gamma \) is model-complete; to prove this we shall apply 21.27(iii). Assume, then, that \( \mathfrak{A} \) and \( \mathfrak{B} \) are models of \( \Gamma ,\mathfrak{A} \subseteq \mathfrak{B}, x \in {}^{\omega }A \) , \( \varphi \) is a universal formula, and not \( \left( {\mathfrak{B} \vDash \varphi \left\lbrack x\right\rbrack }\right) \) ; we want to show that not \( \left( {\mathfrak{A} \vDash \varphi \left\lbrack x\right\rbrack }\right) \) . Say \( \varphi = \forall {v}_{i0}\cdots \forall {v}_{i\left( {m - 1}\right) }\psi \) with \( \psi \) quantifier free. Then there is a \( y \in {}^{\omega }B \) with not \( \left( {\mathfrak{B} \vDash \psi \left\lbrack y\right\rbrack }\right) \) and \( {y}_{j} = {x}_{j} \) for all \( j \in \omega \sim \left\{ {{i}_{0},\ldots ,{i}_{m - 1}}\right\} \) . Let \( M = \left\{ {{y}_{j} : {v}_{j}}\right. \) occurs in \( \left. \varphi \right\} \), and let \( {\mathfrak{B}}^{\prime } \) be the substructure of \( \mathfrak{B} \) generated by M. Choose \( z \in {}^{\omega }{B}^{\prime } \) with \( {z}_{j} = {y}_{j} \) whenever \( {v}_{j} \) occurs in \( \varphi \) . By our assumption (ii), there is an embedding \( f \) of \( {\mathfrak{B}}^{\prime } \) into \( \mathfrak{A} \) such that \( f \upharpoonright A \cap {B}^{\prime } = \mathbf{I} \upharpoonright \) \( A \cap {B}^{\prime } \) . 
Since \( \psi \) is quantifier-free, \( {\mathfrak{B}}^{\prime } \vDash \neg \psi \left\lbrack z\right\rbrack \) and so \( \mathfrak{A} \vDash \neg \psi \left\lbrack {f \circ z}\right\rbrack \) and \( \mathfrak{A} \vDash \neg \varphi \left\lbrack {f \circ z}\right\rbrack \) . Since \( {\left( f \circ z\right) }_{j} = {x}_{j} \) for \( {v}_{j} \) occurring free in \( \varphi ,\mathfrak{A} \vDash \neg \varphi \left\lbrack x\right\rbrack \), as desired. Now to show that \( \Gamma \) is complete it suffices by \( {21.27}\left( {iv}\right) \) to show that if a universal sentence holds in one model of \( \Gamma \) then it holds in any model of \( \Gamma \) . Let \( \varphi = \forall {v}_{i0}\cdots \forall {v}_{i\left( {m - 1}\right) }\psi \) be a universal sentence, with \( \psi \) quantifier-free; suppose that \( \mathfrak{A} \) and \( \mathfrak{B} \) are models of \( \Gamma \) and \( \mathfrak{A} \vDash \neg \varphi \) . Choose \( x \in {}^{\omega }A \) such that \( \mathfrak{A} \vDash \neg \psi \left\lbrack x\right\rbrack \), and let \( {\mathfrak{A}}^{\prime } \) be the substructure of \( \mathfrak{A} \) generated by \( \left\{ {{x}_{j} : {v}_{j}}\right. \) occurs in \( \psi \} \) . We may assume that \( x \in {}^{\omega }{A}^{\prime } \) . By our condition \( \left( i\right) \), let \( f \) be an embedding of \( {\mathfrak{A}}^{\prime } \) into \( \mathfrak{B} \) . Then \( {\mathfrak{A}}^{\prime } \vDash \neg \psi \left\lbrack x\right\rbrack \), hence \( \mathfrak{B} \vDash \neg \psi \left\lbrack {f \circ x}\right\rbrack \), hence \( \mathfrak{B} \vDash \neg \varphi \) . We finish this chapter with two important examples of complete theories proved via model completeness: the theory of infinite atomic Boolean algebras, and the theory of real-closed fields. To fix the notation for the logical theory of Boolean algebras we introduce the following definitions. ## Definition 21.33 (i) \( {\mathcal{L}}_{\mathrm{{BA}}} \) is a first-order language for Boolean algebras; the only nonlogical constants are \( + , \cdot , - ,0,1 \), operation symbols of ranks \( 2,2,1 \) , 0,0 respectively. (ii) \( {\Gamma }_{\mathrm{{BA}}} \), the theory of Boolean algebras, has as axioms the formal sentences corresponding to the axioms given in 9.3. (iii) \( {\varphi }_{\mathrm{{At}}} \) is the following formula of \( {\mathcal{L}}_{\mathrm{{BA}}} \) : \[ \neg \left( {{v}_{0} = 0}\right) \land \forall {v}_{1}\left( {{v}_{1} \cdot {v}_{0} = {v}_{1} \rightarrow {v}_{1} = 0 \vee {v}_{1} = {v}_{0}}\right) . \] Thus \( \mathrm{{Fv}}{\varphi }_{\mathrm{{At}}} = \left\{ {v}_{0}\right\} \) and for any BA \( \mathfrak{A},{}^{1}{\varphi }^{\mathfrak{A}} \) is the set of all atoms of \( \mathfrak{A} \) . (iv) \( {\Gamma }_{\mathrm{{At}}} \) is the extension of \( {\Gamma }_{\mathrm{{BA}}} \) with the additional axiom \[ \left. {\forall {v}_{1}\left\{ {\neg \left( {{v}_{1} = 0}\right) \rightarrow \exists {v}_{0}\left\lbrack {{\varphi }_{\mathrm{{At}}}\left( {v}_{0}\right) \land {v}_{0} \cdot {v}_{1} = {v}_{0}}\right\rbrack }\right\rbrack }\right\} . \] Thus \( \mathfrak{A} \) is a model of \( {\Gamma }_{\mathrm{{At}}} \) iff \( \mathfrak{A} \) is an atomic BA. (v) \( {\Gamma }_{\mathrm{{At}}}^{\infty } \) is the extension of \( {\Gamma }_{\mathrm{{At}}} \) with the following additional axioms (for each \( m \in \omega \sim 1 \) ): \[ \exists {v}_{0}\cdots \exists {v}_{m - 1}\mathop{\bigwedge }\limits_{{i < j < m}}\neg \left( {{v}_{i} \equiv {v}_{j}}\right) . 
\]

Thus \( \mathfrak{A} \) is a model of \( {\Gamma }_{\mathrm{{At}}}^{\infty } \) iff \( \mathfrak{A} \) is an infinite atomic BA.

(vi) \( {\mathcal{L}}_{\mathrm{{At}}} \) is an expansion of \( {\mathcal{L}}_{\mathrm{{BA}}} \) obtained by adjoining a new unary relation symbol \( \mathbf{P} \) .

(vii) \( {}_{0}{\Gamma }_{\mathrm{{At}}} \) (resp. \( {}_{0}{\Gamma }_{\mathrm{{At}}}^{\infty } \) ) is obtained from \( {\Gamma }_{\mathrm{{At}}} \) (resp. \( {\Gamma }_{\mathrm{{At}}}^{\infty } \) ) by adjoining the following sentence as an axiom:

\[ \forall {v}_{0}\left\lbrack {\mathbf{P}{v}_{0} \leftrightarrow {\varphi }_{\mathrm{{At}}}\left( {v}_{0}\right) }\right\rbrack \]

It is obvious that \( \left( {{\mathcal{L}}_{\mathrm{{At}}},{}_{0}{\Gamma }_{\mathrm{{At}}}}\right) \) is a definitional expansion of \( \left( {{\mathcal{L}}_{\mathrm{{BA}}},{\Gamma }_{\mathrm{{At}}}}\right) \), and \( \left( {{\mathcal{L}}_{\mathrm{{At}}},{}_{0}{\Gamma }_{\mathrm{{At}}}^{\infty }}\right) \) is a definitional expansion of \( \left( {{\mathcal{L}}_{\mathrm{{BA}}},{\Gamma }_{\mathrm{{At}}}^{\infty }}\right) \) . We shall actually show that \( {}_{0}{\Gamma }_{\mathrm{{At}}}^{\infty } \) is complete, from which the completeness of \( {\Gamma }_{\mathrm{{At}}}^{\infty } \) is clear. To prove \( {}_{0}{\Gamma }_{\mathrm{{At}}}^{\infty } \) complete we apply 21.32, which shows that \( {}_{0}{\Gamma }_{\mathrm{{At}}}^{\infty } \) is also model-complete (see the proof of 21.32). It is easy to see that \( {\Gamma }_{\mathrm{{At}}}^{\infty } \) itself is not model-complete (see Exercise 21.47). The trick of adding a defined symbol to convert a theory into a model-complete theory is rather common.

## Theorem 21.34. The theory of infinite atomic Boolean algebras is complete.

Proof. As indicated above, it suffices to use 21.32 to show that \( {}_{0}{\Gamma }_{\mathrm{{At}}}^{\infty } \) is complete. First we verify 21.32(i). To this end, let \( \left( {\mathfrak{A}, P}\right) \) and \( \left( {\mathfrak{B}, Q}\right) \) be two models of \( {}_{0}{\Gamma }_{\mathrm{{At}}}^{\infty } \), and let \( \left( {{\mathfrak{A}}^{\prime },{P}^{\prime }}\right) \) be a finitely generated substructure of \( \left( {\mathfrak{A}, P}\right) \) . By Exercise 9.65 we know that \( {A}^{\prime } \) is finite, so by 9.29 \( {\mathfrak{A}}^{\prime } \) is atomic. Let \( {a}_{0},\ldots ,{a}_{m - 1} \) be all of the atoms of \( {\mathfrak{A}}^{\prime } \) ; say \( {a}_{0},\ldots ,{a}_{n - 1} \) are also atoms of \( \mathfrak{A} \), while \( {a}_{n},\ldots ,{a}_{m - 1} \) are not atoms of \( \mathfrak{A} \) . Thus \( n = 0 \) is possible, but \( n < m \), since otherwise \( {a}_{0},\ldots ,{a}_{m - 1} \) would be all of the atoms of \( \mathfrak{A} \) and hence, as is easily seen, \( \mathfrak{A} \) would be finite. Let \( {R}_{0},\ldots ,{R}_{m - 1} \) be a partition of \( Q \) into nonempty sets such that \( \left| {R}_{0}\right| = \cdots = \left| {R}_{n - 1}\right| = 1 \), \( \left| {R}_{n}\right| = \cdots = \left| {R}_{m - 2}\right| = 2 \), and hence \( \left| {R}_{m - 1}\right| = \left| Q\right| \geq
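A computational aside (not part of the source text): the key step in the proof of Theorem 21.34 above is the map sending each atom \( a_i \) of \( {\mathfrak{A}}^{\prime } \) to the supremum of a block \( R_i \) of atoms of \( \mathfrak{B} \). The following minimal Python sketch checks, on an invented toy example in which a finite atom set stands in for the infinite \( Q \), that such an atom-block assignment really induces a one-one Boolean embedding; all names here are hypothetical.

```python
from itertools import combinations

def powerset(atoms):
    """All subsets of a finite atom set: the finite Boolean algebra they generate."""
    s = sorted(atoms)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def embed(element, blocks):
    """Send an element (a join of atoms a_i) to the join of the blocks R_i of its atoms."""
    return frozenset().union(*(blocks[a] for a in element))

# Toy data: A' has atoms a0, a1, a2; the target algebra has atoms b0, ..., b5.
blocks = {
    "a0": frozenset({"b0"}),              # an atom that must stay an atom: |R_0| = 1
    "a1": frozenset({"b1", "b2"}),        # a non-atom of the big algebra: |R_1| = 2
    "a2": frozenset({"b3", "b4", "b5"}),  # the last block absorbs the remaining atoms
}
A_prime = powerset({"a0", "a1", "a2"})
one_A = frozenset({"a0", "a1", "a2"})
one_B = frozenset({"b0", "b1", "b2", "b3", "b4", "b5"})

f = {u: embed(u, blocks) for u in A_prime}
assert all(f[u | v] == f[u] | f[v] for u in A_prime for v in A_prime)  # preserves +
assert all(f[u & v] == f[u] & f[v] for u in A_prime for v in A_prime)  # preserves .
assert all(f[one_A - u] == one_B - f[u] for u in A_prime)              # preserves -
assert len(set(f.values())) == len(A_prime)                            # one-one
print("the atom-block map is a Boolean embedding")
```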
1042_(GTM203)The Symmetric Group
Definition 3.6.7
Definition 3.6.7 Two permutations \( \pi ,\sigma \in {\mathcal{S}}_{n} \) are said to be \( Q \) -equivalent, written \( \pi \overset{Q}{ \cong }\sigma \), if \( Q\left( \pi \right) = Q\left( \sigma \right) \) . ∎

For example,

\[ Q\left( \begin{array}{lll} 2 & 1 & 3 \end{array}\right) = Q\left( \begin{array}{lll} 3 & 1 & 2 \end{array}\right) = \begin{array}{ll} 1 & 3 \\ 2 & \end{array}\;\text{ and }\;Q\left( \begin{array}{lll} 1 & 3 & 2 \end{array}\right) = Q\left( \begin{array}{lll} 2 & 3 & 1 \end{array}\right) = \begin{array}{ll} 1 & 2 \\ 3 & \end{array}, \]

so

\[ {213}\overset{Q}{ \cong }{312}\;\text{ and }\;{132}\overset{Q}{ \cong }{231}. \] (3.12)

We also have a dual notion for the Knuth relations.

Definition 3.6.8 Permutations \( \pi ,\sigma \in {\mathcal{S}}_{n} \) differ by a dual Knuth relation of the first kind, written \( \pi \overset{{1}^{ * }}{ \cong }\sigma \), if for some \( k \) ,

1. \( \pi = \ldots k + 1\ldots k\ldots k + 2\ldots \) and \( \sigma = \ldots k + 2\ldots k\ldots k + 1\ldots \)

or vice versa. They differ by a dual Knuth relation of the second kind, written \( \pi \overset{{2}^{ * }}{ \cong }\sigma \), if for some \( k \) ,

2. \( \pi = \ldots k\ldots k + 2\ldots k + 1\ldots \) and \( \sigma = \ldots k + 1\ldots k + 2\ldots k\ldots \)

or vice versa. The two permutations are dual Knuth equivalent, written \( \pi \overset{{K}^{ * }}{ \cong }\sigma \), if there is a sequence of permutations such that

\[ \pi = {\pi }_{1}\overset{{i}^{ * }}{ \cong }{\pi }_{2}\overset{{j}^{ * }}{ \cong }\cdots \overset{{l}^{ * }}{ \cong }{\pi }_{k} = \sigma \]

where \( i, j,\ldots, l \in \{ 1,2\} \) . ∎

Note that the only two nontrivial dual Knuth relations in \( {S}_{3} \) are

\[ {213}\overset{{1}^{ * }}{ \cong }{312}\text{ and }{132}\overset{{2}^{ * }}{ \cong }{231}\text{. } \]

These correspond exactly to (3.12). The following lemma is obvious from the definitions. In fact, the definition of the dual Knuth relations was concocted precisely so that this result should hold.

Lemma 3.6.9 If \( \pi ,\sigma \in {\mathcal{S}}_{n} \), then

\[ \pi \overset{K}{ \cong }\sigma \Leftrightarrow {\pi }^{-1}\overset{{K}^{ * }}{ \cong }{\sigma }^{-1}\text{. ∎} \]

Now it is an easy matter to derive the dual version of Knuth's theorem about \( P \) -equivalence (Theorem 3.4.3).

Theorem 3.6.10 If \( \pi ,\sigma \in {\mathcal{S}}_{n} \), then

\[ \pi \overset{{K}^{ * }}{ \cong }\sigma \Leftrightarrow \pi \overset{Q}{ \cong }\sigma . \]

Proof. We have the following string of equivalences:

\[ \pi \overset{{K}^{ * }}{ \cong }\sigma \; \Leftrightarrow \;{\pi }^{-1}\overset{K}{ \cong }{\sigma }^{-1}\;\text{ (Lemma 3.6.9) } \]

\[ \Leftrightarrow \;P\left( {\pi }^{-1}\right) = P\left( {\sigma }^{-1}\right) \;\text{(Theorem 3.4.3)} \]

\[ \Leftrightarrow \;Q\left( \pi \right) = Q\left( \sigma \right) .\;\text{ (Theorem 3.6.6) } \bullet \]

## 3.7 Schützenberger’s Jeu de Taquin

The jeu de taquin (or "teasing game") of Schützenberger [Scü 76] is a powerful tool. It can be used to give alternative descriptions of both the \( P \) - and \( Q \) - tableaux of the Robinson-Schensted algorithm (Theorems 3.7.7 and 3.9.4) as well as the ordinary and dual Knuth relations (Theorems 3.7.8 and 3.8.8). To get the full-strength version of these concepts, we must generalize to skew tableaux.

Definition 3.7.1 If \( \mu \subseteq \lambda \) as Ferrers diagrams, then the corresponding skew diagram, or skew shape, is the set of cells

\[ \lambda /\mu = \{ c : c \in \lambda \text{ and }c \notin \mu \} .
\]

A skew diagram is normal if \( \mu = \varnothing \) . ∎

If \( \lambda = \left( {3,3,2,1}\right) \) and \( \mu = \left( {2,1,1}\right) \), then we have the skew diagram

\[ \lambda /\mu = \begin{array}{lll} & & \bullet \\ & \bullet & \bullet \\ & \bullet & \\ \bullet & & \end{array} \]

Of course, normal shapes are the left-justified ones we have been considering all along. The definitions of skew tableaux, standard skew tableaux, and so on, are all as expected. In particular, the definition of the row word of a tableau still makes sense in this setting. Thus we can say that two skew partial tableaux \( P, Q \) are Knuth equivalent, written \( P\overset{K}{ \cong }Q \), if

\[ {\pi }_{P}\overset{K}{ \cong }{\pi }_{Q} \]

Similar definitions hold for the other equivalence relations that we have introduced.

Note that if \( \pi = {x}_{1}{x}_{2}\ldots {x}_{n} \), then we can make \( \pi \) into a skew tableau by putting \( {x}_{i} \) in the cell \( \left( {n - i + 1, i}\right) \) for all \( i \) . This object is called the antidiagonal strip tableau associated with \( \pi \) and is also denoted by \( \pi \) . For example, if \( \pi = {3142} \) (a good approximation, albeit without the decimal point), then

![fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg)

So \( \pi \overset{K}{ \cong }\sigma \) as permutations if and only if \( \pi \overset{K}{ \cong }\sigma \) as tableaux. We now come to the definition of a jeu de taquin slide, which is essential to all that follows.

Definition 3.7.2 Given a partial tableau \( P \) of shape \( \lambda /\mu \), we perform a forward slide on \( P \) from cell \( c \) as follows.

F1 Pick \( c \) to be an inner corner of \( \mu \) .

F2 While \( c \) is not an inner corner of \( \lambda \) do

Fa If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \min \left\{ {{P}_{i + 1, j},{P}_{i, j + 1}}\right\} \) .

Fb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) .

If only one of \( {P}_{i + 1, j},{P}_{i, j + 1} \) exists in step Fa, then the minimum is taken to be that single value. We denote the resulting tableau by \( {j}^{c}\left( P\right) \) . Similarly, a backward slide on \( P \) from cell \( c \) produces a tableau \( {j}_{c}\left( P\right) \) as follows.

B1 Pick \( c \) to be an outer corner of \( \lambda \) .

B2 While \( c \) is not an outer corner of \( \mu \) do

Ba If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \max \left\{ {{P}_{i - 1, j},{P}_{i, j - 1}}\right\} \) .

Bb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) . ∎

By way of illustration, let

\[ P = \begin{array}{lllll} & & & 6 & 8 \\ & 2 & 4 & 5 & 9 \\ 1 & 3 & 7 & & \end{array} \]

We let a dot indicate the position of the empty cell as we perform a forward slide from \( c = \left( {1,3}\right) \) .
\[ \begin{array}{lllll} & & \bullet & 6 & 8 \\ & 2 & 4 & 5 & 9 \\ 1 & 3 & 7 & & \end{array}\qquad\begin{array}{lllll} & & 4 & 6 & 8 \\ & 2 & \bullet & 5 & 9 \\ 1 & 3 & 7 & & \end{array}\qquad\begin{array}{lllll} & & 4 & 6 & 8 \\ & 2 & 5 & \bullet & 9 \\ 1 & 3 & 7 & & \end{array}\qquad\begin{array}{lllll} & & 4 & 6 & 8 \\ & 2 & 5 & 9 & \bullet \\ 1 & 3 & 7 & & \end{array} \]

Thus

\[ {j}^{c}\left( P\right) = \begin{array}{lllll} & & 4 & 6 & 8 \\ & 2 & 5 & 9 & \\ 1 & 3 & 7 & & \end{array} \]

A backward slide from \( c = \left( {3,4}\right) \) looks like the following.

\[ \begin{array}{lllll} & & & 6 & 8 \\ & 2 & 4 & 5 & 9 \\ 1 & 3 & 7 & \bullet & \end{array}\qquad\begin{array}{lllll} & & & 6 & 8 \\ & 2 & 4 & 5 & 9 \\ 1 & 3 & \bullet & 7 & \end{array}\qquad\begin{array}{lllll} & & & 6 & 8 \\ & 2 & \bullet & 5 & 9 \\ 1 & 3 & 4 & 7 & \end{array}\qquad\begin{array}{lllll} & & & 6 & 8 \\ & \bullet & 2 & 5 & 9 \\ 1 & 3 & 4 & 7 & \end{array} \]

So

![fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg)

Note that a slide is an invertible operation. Specifically, if \( c \) is a cell for a forward slide on \( P \) and the cell vacated by the slide is \( d \), then a backward slide into \( d \) restores \( P \) . In symbols,

\[ {j}_{d}{j}^{c}\left( P\right) = P. \] (3.13)

Similarly,

\[ {j}^{c}{j}_{d}\left( P\right) = P \] (3.14)

if the roles of \( d \) and \( c \) are reversed. Of course, we may want to make many slides in succession.

Definition 3.7.3 A sequence of cells \( \left( {{c}_{1},{c}_{2},\ldots ,{c}_{l}}\right) \) is a slide sequence for a tableau \( P \) if we can legally form \( P = {P}_{0},{P}_{1},\ldots ,{P}_{l} \), where \( {P}_{i} \) is obtained from \( {P}_{i - 1} \) by performing a slide into cell \( {c}_{i} \) . Partial tableaux \( P \) and \( Q \) are equivalent, written \( P \cong Q \), if \( Q \) can be obtained from \( P \) by some sequence of slides. ∎

This equivalence relation is the same as Knuth equivalence, as the next series of results shows.

Proposition 3.7.4 ([Scü 76]) If \( P, Q \) are standard skew tableaux, then

\[ P \cong Q \Rightarrow P\overset{K}{ \cong }Q. \]

Proof. By induction, it suffices to prove the theorem when \( P \) and \( Q \) differ by a single slide. In fact, if we call the operation in steps Fb or Bb of the slide definition a move, then we need to demonstrate the result only when \( P \) and \( Q \) differ by a move. (The row word of a tableau with a hole in it can still be defined by merely ignoring the hole.) The conclusion is trivial if the move is horizontal because then \( {\pi }_{P} = {\pi }_{Q} \) . If the move is vertical, then we can clearly restrict to the case where \( P \) and \( Q \) have only two rows. So suppose that \( x \) is the element being moved and that

\[ \begin{array}{cccc} & {R}_{l} & x & {R}_{r} \\ {S}_{l} & & \bullet & {S}_{r} \end{array} \]

<table><thead><tr><th></th><th>\( {R}_{l} \)</th><th>·</th
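An aside that is not from the book: the forward-slide loop F1–F2 above is easy to run by machine. Here is a minimal Python sketch under the assumption that a skew tableau is stored as a dictionary from cells \( (i, j) \) to entries; the function name and this representation are invented for illustration.

```python
def forward_slide(tableau, c):
    """Forward slide into the empty cell c, following steps F1-F2; returns a new dict."""
    t = dict(tableau)                              # cells (i, j) -> entry, 1-indexed
    while True:
        i, j = c
        below, right = t.get((i + 1, j)), t.get((i, j + 1))
        if below is None and right is None:        # c has become an inner corner of lambda
            return t
        # step Fa: the neighbour holding min{P_{i+1,j}, P_{i,j+1}} is the cell c'
        if right is None or (below is not None and below < right):
            c_prime = (i + 1, j)
        else:
            c_prime = (i, j + 1)
        t[c] = t.pop(c_prime)                      # step Fb: slide P_{c'} into c
        c = c_prime

# The running example: P of shape (5,5,3)/(3,1), forward slide from c = (1, 3).
P = {(1, 4): 6, (1, 5): 8,
     (2, 2): 2, (2, 3): 4, (2, 4): 5, (2, 5): 9,
     (3, 1): 1, (3, 2): 3, (3, 3): 7}
print(sorted(forward_slide(P, (1, 3)).items()))
# cell (1,3) ends up holding 4 and cell (2,5) is vacated, matching j^c(P) above
```

A backward slide is the same loop run with the maximum of the neighbours above and to the left.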
1329_[肖梁] Abstract Algebra (2022F)
Definition 8.3.1
Definition 8.3.1. For a group \( G \), define the following subgroups

\[ {G}^{0} = G,\;{G}^{1} = \left\lbrack {G, G}\right\rbrack ,\;{G}^{i + 1} = \left\lbrack {G,{G}^{i}}\right\rbrack \text{ for } i \geq 1. \]

Then we have a chain of subgroups \( {G}^{0} \geq {G}^{1} \geq {G}^{2} \geq \cdots \) . This is called the lower central series of \( G \) . Similar to Lemma 8.2.5, we may prove that each \( {G}^{i} \) is a normal subgroup of \( G \) . It is also clear that \( {G}^{i} \geq {G}^{\left( i\right) } \) . The group \( G \) is called nilpotent if \( {G}^{c} = \{ 1\} \) for some \( c \in \mathbb{N} \) . The smallest such \( c \) is called the nilpotence class of \( G \) .

Remark 8.3.2. The construction of the lower central series commutes with passing to quotients, i.e. if \( \pi : G \rightarrow H = G/N \) is the homomorphism given by taking the quotient by a normal subgroup \( N \), then \( \pi \left( {G}^{i}\right) = {H}^{i} \) .

Corollary 8.3.3. If \( G \) is nilpotent, then \( G \) is solvable.

Proof. If \( {G}^{c} = \{ 1\} \) for some \( c \in \mathbb{N} \), then \( {G}^{\left( c\right) } \leq {G}^{c} = \{ 1\} \) . So \( {G}^{\left( c\right) } = 1 \) .

Example 8.3.4. Going back to Example 8.2.2: for the groups

\[ G = \left( \begin{matrix} {\mathbb{C}}^{ \times } & \mathbb{C} & \mathbb{C} \\ 0 & {\mathbb{C}}^{ \times } & \mathbb{C} \\ 0 & 0 & {\mathbb{C}}^{ \times } \end{matrix}\right) \supseteq {G}^{\left( 1\right) } = \left( \begin{array}{lll} 1 & \mathbb{C} & \mathbb{C} \\ 0 & 1 & \mathbb{C} \\ 0 & 0 & 1 \end{array}\right) \supseteq {G}^{\left( 2\right) } = \left( \begin{array}{lll} 1 & 0 & \mathbb{C} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \supseteq {G}^{\left( 3\right) } = \left\{ {I}_{3}\right\} \]

\( G \) is NOT nilpotent, but \( {G}^{\left( 1\right) } \) and \( {G}^{\left( 2\right) } \) are nilpotent.

We also introduce a "dual picture".

Definition 8.3.5. For any group \( G \), define the following subgroups inductively:

\[ {Z}_{0}\left( G\right) = 1,\;{Z}_{1}\left( G\right) = Z\left( G\right) , \]

and let \( {Z}_{i + 1}\left( G\right) \) be the subgroup of \( G \) containing \( {Z}_{i}\left( G\right) \) such that \( {Z}_{i + 1}\left( G\right) /{Z}_{i}\left( G\right) \cong Z\left( {G/{Z}_{i}\left( G\right) }\right) \) .

![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_52_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_52_0.jpg)

(In particular, each \( {Z}_{i}\left( G\right) \trianglelefteq G \) is a normal subgroup, by induction: suppose that \( {Z}_{i}\left( G\right) \) is a normal subgroup. Then \( Z\left( {G/{Z}_{i}\left( G\right) }\right) \) is a normal subgroup of \( G/{Z}_{i}\left( G\right) \) ; so its preimage \( {Z}_{i + 1}\left( G\right) \mathrel{\text{:=}} {\pi }^{-1}\left( {Z\left( {G/{Z}_{i}\left( G\right) }\right) }\right) \) is a normal subgroup of \( G \) .)

The sequence \( \{ 1\} = {Z}_{0}\left( G\right) \leq {Z}_{1}\left( G\right) \leq \cdots \) is called the upper central series of \( G \) .

Remark 8.3.6. We remark on the convention in choosing superscript versus subscript: typically, the convention is

- indexing by subscript for an increasing filtration,

- indexing by superscript for a decreasing filtration.

Theorem 8.3.7. A group \( G \) is nilpotent if and only if \( {Z}_{c}\left( G\right) = G \) for some \( c \in \mathbb{N} \) .
More precisely, for some \( c \in \mathbb{N} \), \( {G}^{c} = \{ 1\} \) if and only if \( {Z}_{c}\left( G\right) = G \), and in this case,

(8.3.7.1)

\[ \{ 1\} \leq {G}^{c - 1} \leq {Z}_{1}\left( G\right) \leq {G}^{c - 2} \leq {Z}_{2}\left( G\right) \leq \cdots \leq {G}^{1} \leq {Z}_{c - 1}\left( G\right) \leq G. \]

Proof. We prove this by induction on the minimal \( c \) such that either \( {G}^{c} = \{ 1\} \) or \( {Z}_{c}\left( G\right) = G \) . When \( c = 1 \), either of the conditions \( {G}^{1} = \{ 1\} \) and \( {Z}_{1}\left( G\right) = G \) is equivalent to the condition that \( G \) is abelian. The statement is clear.

Now suppose the theorem has been proved for smaller \( c \), and we treat the case when either \( {G}^{c} = \{ 1\} \) or \( {Z}_{c}\left( G\right) = G \) for this \( c \in \mathbb{N} \) . Let \( \pi : G \rightarrow G/Z\left( G\right) = : \bar{G} \) . For a subgroup \( H \) of \( G \), denote its image under \( \pi \) by \( \bar{H} \subseteq \bar{G} \) . We first show that the inductive hypothesis may be applied to \( \bar{G} \) (whenever one of the conditions of the theorem holds). In fact we will prove

\[ \begin{matrix} {G}^{c} = \{ 1\} & & {Z}_{c}\left( G\right) = G \\ \Updownarrow \left( 1\right) & & \Updownarrow \left( 2\right) \\ {\left( \bar{G}\right) }^{c - 1} = \{ 1\} & \overset{\text{ inductive hypothesis }}{\Longleftrightarrow} & {Z}_{c - 1}\left( \bar{G}\right) = \bar{G}. \end{matrix} \]

Indeed, for (1), \( {G}^{c} = \{ 1\} \) is equivalent to \( \left\lbrack {G,{G}^{c - 1}}\right\rbrack = \{ 1\} \), i.e. all elements in \( {G}^{c - 1} \) commute with all elements of \( G \) . So this is further equivalent to \( {G}^{c - 1} \leq Z\left( G\right) \) . As the construction of the lower central series is compatible with passing to quotients, this is further equivalent to \( {\left( \bar{G}\right) }^{c - 1} = \{ 1\} \) .

For statement (2), we simply note that the construction of the upper central series implies inductively that \( {Z}_{i}\left( \bar{G}\right) = \overline{{Z}_{i + 1}\left( G\right) } \) . More precisely, we have the following picture that explains this.

![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_53_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_53_0.jpg)

By definition \( {Z}_{2}\left( G\right) \) is the pullback of \( {Z}_{1}\left( \bar{G}\right) \) along \( {\pi }_{1} \), namely \( {Z}_{2}\left( G\right) = {\pi }_{1}^{-1}\left( {{Z}_{1}\left( \bar{G}\right) }\right) \) . After this, we consider \( G/{Z}_{2}\left( G\right) \cong \bar{G}/{Z}_{1}\left( \bar{G}\right) \) . Inside of this, we will pull back \( Z\left( {G/{Z}_{2}\left( G\right) }\right) = Z\left( {\bar{G}/{Z}_{1}\left( \bar{G}\right) }\right) \) . The pullback to \( \bar{G} \) gives \( {Z}_{2}\left( \bar{G}\right) = {\bar{\pi }}_{1}^{-1}\left( {Z\left( {\bar{G}/{Z}_{1}\left( \bar{G}\right) }\right) }\right) \) . This implies that

\[ {Z}_{3}\left( G\right) = {\pi }_{2}^{-1}\left( {Z\left( {\bar{G}/{Z}_{1}\left( \bar{G}\right) }\right) }\right) = {\pi }_{1}^{-1}{\bar{\pi }}_{1}^{-1}\left( {Z\left( {\bar{G}/{Z}_{1}\left( \bar{G}\right) }\right) }\right) = {\pi }_{1}^{-1}\left( {{Z}_{2}\left( \bar{G}\right) }\right) . \]

Statement (2) is a special case of this computation.

Remark 8.3.8. One way to philosophically understand the lower or upper central series is the following:

(1) abelian groups are easy to understand;

(2) if \( H \) is a nonabelian simple finite group, then \( \left\lbrack {H, H}\right\rbrack = H \) and \( Z\left( H\right) = 1 \) (because \( H \) has no nontrivial normal subgroups).
So the lower and upper central series are trivial for nonabelian simple finite groups. The following picture visualizes the lower and upper central series.

![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_54_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_54_0.jpg)

Coming from the upper central series, we will never reach the Jordan-Hölder factors that "look like" simple groups. This is indicated on the left. "Dually", coming from the lower central series, the subgroup \( {G}^{i} \) will always contain the Jordan-Hölder factors that "look like" simple groups. This filtration "shrinks" the group from the right of the picture.

8.4. Structure theorem of nilpotent groups. In fact, nilpotent groups have a very nice structure theorem.

Proposition 8.4.1. All p-groups are nilpotent.

Proof. This is because for every \( p \) -group \( P \), \( Z\left( P\right) \) is nontrivial by Proposition 6.3.4; applying this repeatedly to the quotients \( P/{Z}_{i}\left( P\right) \) shows that the upper central series strictly increases until it reaches \( P \), so \( P \) is nilpotent by Theorem 8.3.7.

Proposition 8.4.2. Let \( P \) be a p-group.

(1) We have \( Z\left( P\right) \neq \{ 1\} \) (proved earlier).

(2) If \( H \trianglelefteq P \) is a nontrivial normal subgroup, then \( H \cap Z\left( P\right) \neq \{ 1\} \) .

(3) If \( H \lneq P \), then \( H \lneq {N}_{P}\left( H\right) \) .

Proof. (1) is proved in Proposition 6.3.4.

(2) Consider the conjugation action of \( P \) on \( H \) :

\[ P\overset{\mathrm{{Ad}}}{ \curvearrowright }H,\;{\operatorname{Ad}}_{p}\left( h\right) = {ph}{p}^{-1}. \]

For this action, we write \( H \) as the disjoint union of orbits:

\[ H = \mathop{\coprod }\limits_{i}P/{\operatorname{Stab}}_{P}\left( {a}_{i}\right) \]

for representatives \( {a}_{1},{a}_{2},\ldots ,{a}_{r} \) of the orbits. We note that \( {\operatorname{Stab}}_{P}\left( {a}_{i}\right) = P \) if and only if \( {a}_{i} \in Z\left( P\right) \), namely \( {a}_{i} \in Z\left( P\right) \cap H \) . So we have the following:

\[ 0 \equiv \left| H\right| = \mathop{\sum }\limits_{i}\left| P\right| /\left| {{\operatorname{Stab}}_{P}\left( {a}_{i}\right) }\right| \equiv \left| {Z\left( P\right) \cap H}\right| \;\left( {\;\operatorname{mod}\;p}\right) . \]

This implies that \( Z\left( P\right) \cap H \neq \{ 1\} \) .

(3) We use induction on \( \left| P\right| \) . There are two cases:

Case 1: If \( Z\left( P\right) \nsubseteq H \), then since \( Z\left( P\right) \subseteq {N}_{P}\left( H\right) \) we get \( H \lneq {N}_{P}\left( H\right) \) .

Case 2: If \( Z\left( P\right) \subseteq H \), consider \( \bar{H} = H/Z\left( P\right) \) and \( \bar{P} = P/Z\left( P\right) \) . By the inductive hypothesis,

\[ \bar{H} \lneq {N}_{\bar{P}}\left( \bar{H}\right) \Rightarrow H \lneq {N}_{P}\left( H\right) \]

Corollary 8.4.3. If \( P \) is a p-group and if \( H < P \) has index \( p \), then \( H \) is a normal subgroup.

Theorem 8.4.4 (Classification theorem for nilpotent groups). Let \( G \) be a finite group of order \( n = {p}_{1}^{{\alpha }_{1}}{p}_{2}^{{\alpha }_{2}}\cdots {p}_{r}^{{\alpha }_{r}} \) and \( {P}_{i} \in {\operatorname{Syl}}_{{p}_{i}}\left( G\right) \) . The following are equivalent.

(1) \( G \) is nilpotent.

(2) If \( H \lneq G \), then \( H \lneq {N}_{G}\left( H\right) \) .

(3) All Sylow subgroups \( {P}_{i} \) are normal.
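An illustration that is not part of the lecture notes: for a small finite group one can compute the lower central series of Definition 8.3.1 directly and watch it terminate at \( \{ 1\} \). The sketch below does this for the dihedral group of order 8 (a 2-group, hence nilpotent by Proposition 8.4.1); the helper names and the naive closure routine are invented for the demonstration.

```python
from itertools import product

def closure(gens, identity, mul, inv):
    """Subgroup generated by gens (naive closure; fine for very small groups)."""
    H = {identity}
    frontier = set(gens)
    while frontier:
        H |= frontier
        frontier = {mul(a, b) for a, b in product(H, H)} | {inv(a) for a in H}
        frontier -= H
    return H

def commutator_subgroup(G, H, identity, mul, inv):
    """[G, H]: the subgroup generated by all g h g^{-1} h^{-1}."""
    comms = {mul(mul(g, h), mul(inv(g), inv(h))) for g in G for h in H}
    return closure(comms, identity, mul, inv)

def lower_central_series(G, identity, mul, inv):
    series = [set(G)]                                  # G^0 = G
    while len(series[-1]) > 1:
        nxt = commutator_subgroup(G, series[-1], identity, mul, inv)
        if nxt == series[-1]:                          # stabilized above {1}: not nilpotent
            break
        series.append(nxt)                             # G^{i+1} = [G, G^i]
    return series

# The dihedral group D_4 of order 8, realized as permutations of {0, 1, 2, 3}.
def mul(p, q): return tuple(p[q[i]] for i in range(4))
def inv(p):    return tuple(p.index(i) for i in range(4))
e = (0, 1, 2, 3)
r = (1, 2, 3, 0)                                       # rotation
s = (3, 2, 1, 0)                                       # reflection
D4 = closure({r, s}, e, mul, inv)
print([len(t) for t in lower_central_series(D4, e, mul, inv)])   # [8, 2, 1]: class 2
```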
1083_(GTM240)Number Theory II
Definition 13.2.2
Definition 13.2.2. Let \( n \in \bar{K}{\left\lbrack \mathcal{C}\right\rbrack }^{ * } \) be a nonzero polynomial function and let \( P \) be a point in \( \mathcal{C}\left( \bar{K}\right) \) . We say that \( n \) has a zero at \( P \) if \( n\left( P\right) = 0 \) . Let \( n\left( {x, y}\right) = a\left( x\right) + b\left( x\right) y \) be a polynomial function in \( \bar{K}\left( \mathcal{C}\right) \) and let \( P = \left( {{x}_{0},{y}_{0}}\right) \) be a zero of \( n \) . Let \( r \) be the largest integer such that \( {\left( x - {x}_{0}\right) }^{r} \) divides \( n\left( {x, y}\right) \), so that we can write \[ n\left( {x, y}\right) = {\left( x - {x}_{0}\right) }^{r}\left( {\alpha \left( x\right) + \beta \left( x\right) y}\right) . \] Let \( s \) be the largest integer such that \( {\left( x - {x}_{0}\right) }^{s} \) divides \( {\alpha }^{2}\left( x\right) - {\beta }^{2}\left( x\right) f\left( x\right) \) . (1) If \( {y}_{0} \neq 0 \) we define the order \( {\operatorname{ord}}_{P}\left( n\right) \) of \( P \) to be \( r + s \), and we say that \( P \) is a zero of \( n \) of order \( {\operatorname{ord}}_{P}\left( n\right) \) . (2) If \( {y}_{0} = 0 \), we define the order of \( P \) to be \( {2r} + s \) . (3) If the degree of \( f \) is odd and \( P \) is the point at infinity, we define the order of \( P \) to be \( - \max \left( {2\deg \left( a\right) ,\deg \left( f\right) + 2\deg \left( b\right) }\right) \) . (4) If the degree of \( f \) is even and \( P \) is one of the two points on the nonsingular curve that lie over the point at infinity, we define the order of \( P \) to be \( - \max \left( {\deg \left( a\right) ,\deg \left( f\right) /2 + \deg \left( b\right) }\right) \) . Example 13.2.3. Let \( \mathcal{C} \) be the hyperelliptic curve of genus 2 defined over \( \mathbb{Q} \) by the equation \[ {y}^{2} = {x}^{5} + 4 \] Let \( {n}_{1} \) be the function \( {n}_{1}\left( {x, y}\right) = x - 2 \) . This function has a zero if the \( x \) - coordinate of the point is 2, so that the points \( \left( {2,6}\right) \) and \( \left( {2, - 6}\right) \) are zeros of \( {n}_{1} \), and their order is 1 . The coefficient of \( y \) in \( {n}_{1} \) is equal to zero, so that its degree is \( - \infty \) ; hence the order of the point at infinity \( \mathcal{O} \) is -2 . Example 13.2.4. Let \( \mathcal{C} \) be the hyperelliptic curve of genus 1 (in other words the elliptic curve) defined over \( \mathbb{Q} \) by the equation \[ {y}^{2} = {x}^{3} + 1 \] Let \( {n}_{2} \) be the function \( {n}_{2}\left( {x, y}\right) = x + 1 - y \) (which is the equation of a line). The zeros of this function are the points \( \left( {2,3}\right) ,\left( {0,1}\right) \), and \( \left( {-1,0}\right) \), each with order 1. Here the degree of \( f \) is equal to 3 and the degree of the coefficient of \( y \) is equal to 0, so that the point at infinity has order -3 . This of course corresponds to the group law on the elliptic curve. We can now define the order of a zero or a pole of a function of \( \bar{K}\left( \mathcal{C}\right) \) . Definition 13.2.5. Let \( m \in \bar{K}{\left( \mathcal{C}\right) }^{ * } \) be a nonzero function and let \( P \) be a point on \( \mathcal{C}\left( \bar{K}\right) \) . Write \( m = n/d \), where \( n \) and \( d \) are polynomial functions, and set \( {\operatorname{ord}}_{P}\left( m\right) = {\operatorname{ord}}_{P}\left( n\right) - {\operatorname{ord}}_{P}\left( d\right) \) . 
If \( {\operatorname{ord}}_{P}\left( m\right) \) is strictly positive (respectively strictly negative), we say that \( m \) has a zero (respectively a pole) at \( P \) of order \( \left| {{\operatorname{ord}}_{P}\left( m\right) }\right| \) .

Theorem 13.2.6. Let \( m \) be a function in \( \bar{K}{\left( \mathcal{C}\right) }^{ * } \) . Counting orders, \( m \) has as many zeros as poles; in other words,

\[ \mathop{\sum }\limits_{{P \in \mathcal{C}\left( \bar{K}\right) }}{\operatorname{ord}}_{P}\left( m\right) = 0 \]

## 13.2.2 Divisors

We have seen that the group structure on the set of points of an elliptic curve is a powerful tool for solving many problems concerning elliptic curves, and in particular Diophantine problems that can be reduced to elliptic curves. The set of points of a curve of higher genus does not have a natural group structure, so we are going to embed this set into a larger one that does have a natural group structure by introducing the free abelian group generated by these points.

Definition 13.2.7. Let \( \mathcal{C} \) be a smooth projective algebraic curve defined over \( K \) . The divisor group \( {\operatorname{Div}}_{\bar{K}}\left( \mathcal{C}\right) \) of \( \mathcal{C} \) is the free abelian group over the points of \( \mathcal{C}\left( \bar{K}\right) \) . An element \( D \) of \( {\operatorname{Div}}_{\bar{K}}\left( \mathcal{C}\right) \) is called a divisor and is thus of the form

\[ D = \mathop{\sum }\limits_{{P \in \mathcal{C}\left( \bar{K}\right) }}{n}_{P}P \]

where the integer \( {n}_{P} \) is called the order of \( D \) at \( P \) and is zero for almost all points \( P \) on the curve.

Definition 13.2.8. (1) Let \( D = \mathop{\sum }\limits_{{P \in \mathcal{C}\left( \bar{K}\right) }}{n}_{P}P \in {\operatorname{Div}}_{\bar{K}}\left( \mathcal{C}\right) \) . We define \( \deg \left( D\right) = \mathop{\sum }\limits_{{P \in \mathcal{C}\left( \bar{K}\right) }}{n}_{P} \) .

(2) We say that \( D \) is effective if \( {n}_{P} \geq 0 \) for all \( P \) .

(3) Let \( m \) be a function on \( \mathcal{C} \) . We define the divisor of \( m \) by

\[ \operatorname{div}\left( m\right) = \mathop{\sum }\limits_{{P \in \mathcal{C}\left( \bar{K}\right) }}{\operatorname{ord}}_{P}\left( m\right) P \in {\operatorname{Div}}_{\bar{K}}\left( \mathcal{C}\right) . \]

Such divisors are called principal divisors, and the set of principal divisors is denoted by \( {\operatorname{Pr}}_{\bar{K}}\left( \mathcal{C}\right) \) .

Example 13.2.9. Let us come back to Examples 13.2.3 and 13.2.4. The divisors of the functions \( {n}_{1} \) and \( {n}_{2} \) are

\[ \operatorname{div}\left( {n}_{1}\right) = \left( {2,6}\right) + \left( {2, - 6}\right) - 2\mathcal{O} \]

\[ \operatorname{div}\left( {n}_{2}\right) = \left( {2,3}\right) + \left( {0,1}\right) + \left( {-1,0}\right) - 3\mathcal{O}. \]

Denote by \( {\operatorname{Div}}_{\bar{K}}^{0}\left( \mathcal{C}\right) \) the group of all divisors of degree 0. It follows from Theorem 13.2.6 that the set of principal divisors is a subgroup of \( {\operatorname{Div}}_{\bar{K}}^{0}\left( \mathcal{C}\right) \) . We can thus make the following definition.

Definition 13.2.10. The quotient group \( {\operatorname{Div}}_{\bar{K}}^{0}\left( \mathcal{C}\right) /{\operatorname{Pr}}_{\bar{K}}\left( \mathcal{C}\right) \) is called the Picard group of \( \mathcal{C} \) .
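A computational aside (not from the book): the divisor \( \operatorname{div}\left( {n}_{2}\right) \) in Example 13.2.9 can be checked mechanically. The sketch below, a minimal sympy script with invented variable names, recovers the three affine zeros of \( {n}_{2} = x + 1 - y \) on \( {y}^{2} = {x}^{3} + 1 \) and the order \( -3 \) at infinity from rule (3) of Definition 13.2.2, so the orders sum to 0 as Theorem 13.2.6 predicts.

```python
from sympy import symbols, Poly, solve

x, y = symbols("x y")
f = x**3 + 1                          # the curve y^2 = f(x) of Example 13.2.4
a, b = x + 1, Poly(-1, x)             # n_2(x, y) = a(x) + b(x)*y = x + 1 - y

# Affine zeros of n_2 on the curve: solve the two equations simultaneously.
zeros = solve([y**2 - f, a + b.as_expr() * y], [x, y], dict=True)
print(zeros)                          # the three points (-1, 0), (0, 1), (2, 3)

# Order at the single point at infinity (deg f odd), rule (3) of Definition 13.2.2:
ord_infinity = -max(2 * Poly(a, x).degree(), Poly(f, x).degree() + 2 * b.degree())
print(ord_infinity)                   # -3; so deg(div(n_2)) = 1 + 1 + 1 - 3 = 0
```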
We will not define the Jacobian variety \( J\left( \mathcal{C}\right) \) of \( \mathcal{C} \), which is an Abelian variety functorially associated with \( \mathcal{C} \), but simply note that the group \( {J}_{\bar{K}}\left( \mathcal{C}\right) \) of \( \bar{K} \) -rational points on this variety, which is the only structure that we will use, is naturally isomorphic to the Picard group defined above.

## 13.2.3 Rational Divisors

Definition 13.2.11. The set of \( K \) -rational divisors, denoted by \( {\operatorname{Div}}_{K}\left( \mathcal{C}\right) \), is defined by

\[ {\operatorname{Div}}_{K}\left( \mathcal{C}\right) = {\left( {\operatorname{Div}}_{\bar{K}}\left( \mathcal{C}\right) \right) }^{\operatorname{Gal}\left( {\bar{K}/K}\right) }. \]

This definition means that a divisor \( D \) is rational over \( K \) if it is invariant under the Galois action of \( \operatorname{Gal}\left( {\bar{K}/K}\right) \) . In other words, if \( P \) is a point such that the order of \( D \) at \( P \) is nonzero, then \( D \) has the same order at all the conjugates of \( P \) .

Example 13.2.12. Let \( \mathcal{C} \) be the curve of genus 2 defined over \( \mathbb{Q} \) by the equation

\[ {y}^{2} = {x}^{5} + x - 3 \]

The point \( \left( {1, i}\right) \) is of course not a \( \mathbb{Q} \) -rational point, but the divisor \( D = \left( {1, i}\right) + \left( {1, - i}\right) \) is a \( \mathbb{Q} \) -rational divisor of degree 2.

The group of \( K \) -rational elements on the Jacobian, \( {J}_{K}\left( \mathcal{C}\right) \), is the group of classes of \( K \) -rational divisors of degree 0 modulo divisors of functions in \( K\left( \mathcal{C}\right) \) or, equivalently,

\[ {J}_{K}\left( \mathcal{C}\right) = {\left( {J}_{\bar{K}}\left( \mathcal{C}\right) \right) }^{\operatorname{Gal}\left( {\bar{K}/K}\right) }. \]

All these definitions are of course also valid for elliptic curves and in this case the curve is isomorphic to its Jacobian. This indicates that the Jacobian is the correct generalization of elliptic curves if we are interested in the group structure. In fact we have the following theorem, which generalizes the Mordell-Weil theorem for elliptic curves.

Theorem 13.2.13 (Weil). If \( K \) is a number field, \( {J}_{K}\left( \mathcal{C}\right) \) is a finitely generated abelian group.

In the following we will assume that \( K \) is a number field. The structure of \( {J}_{K}\left( \mathcal{C}\right) \) can be computed analogously to the computation of the Mordell-Weil group for elliptic curves that we have studied in Chapter 8, in other words by using descent methods. It is evidently more difficult, and we will not explain this computation here. The interested reader is referred to [Sch] or to [Sto].

We have already mentioned that in genus 1, the Jacobian is isomorphic to the curve. This isomorphism is given by the map \( P \mapsto P - \mathcal{O} \) from \( \mathcal{C}\left( K\right) \) to \( {J}_{K}\left( \mathcal{C}\right) \) ; in other words, an element of the Jacobian is represented by a point on the curve. In higher genus \( g \), the following theorem states that the situation is analogous since a rational element of the Jacobian
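A tiny verification (not from the book) of Example 13.2.12, using sympy with invented names:

```python
from sympy import I, conjugate

f = lambda t: t**5 + t - 3
P, P_conj = (1, I), (1, -I)
assert P[1]**2 - f(P[0]) == 0 and P_conj[1]**2 - f(P_conj[0]) == 0   # both points lie on the curve
assert (conjugate(P[0]), conjugate(P[1])) == P_conj                   # Gal(Q(i)/Q) swaps them
# D = P + P_conj has the same order (namely 1) at a point and at its conjugate,
# so D is a Q-rational divisor of degree 2.
```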
1129_(GTM35)Several Complex Variables and Banach Algebras
Definition 25.2
Definition 25.2. The set-valued map \( \Phi : \lambda \mapsto {K}_{\lambda } \), with \( {K}_{\lambda } \) a compact subset of \( \mathbb{C} \) for each \( \lambda \in G \), is analytic if:

(1) \( \Phi \) is upper semi-continuous, and

(2) The set of all \( \left( {\lambda, w}\right) \) with \( \lambda \in G \) and \( w \in \mathbb{C} \smallsetminus {K}_{\lambda } \) is a pseudoconvex subset of \( {\mathbb{C}}^{2} \) .

Note. We could also express condition (2) by saying that the graph of \( \Phi \) consisting of all points \( \left( {\lambda, w}\right) \) with \( \lambda \in G \) and \( w \in {K}_{\lambda } \) is pseudoconcave in \( G \times \mathbb{C} \) .

Slodkowski showed in Theorem 2.1 of [Sl1] that, if \( X \) is a relatively closed set contained in a cylinder domain \( G \times \mathbb{C} \subseteq {\mathbb{C}}^{2} \), \( A \) is the restriction to \( X \) of the algebra of polynomials in \( z \) and \( w \), and \( \pi \) is the projection \( \left( {z, w}\right) \mapsto z \), then \( \left( {A, X, G,\pi }\right) \) is a maximum modulus algebra if and only if \( X \) is pseudoconcave in \( G \times \mathbb{C} \) .

Note. The definition of maximum modulus algebra in [Sl1] is somewhat stronger than the definition we have given in Chapter 11. For details see [Sl1].

Slodkowski's paper [Sl1] contains a number of interesting results relating analytic set-valued functions to operator theory and to the study of uniform algebras. An exposition of these relationships, and related questions, is given by B. Aupetit in [Au1] and [Au2]. An expository article on maximum modulus algebras is given by one of us in [We13]. Further work in this field is to be found in the book Uniform Fréchet Algebras [Go] by H. Goldmann, Chapters 15 and 16. Proofs of Theorem 11.7 were given by Slodkowski in [Sl2] and Senichkin in [Sen]. See also Kumagai [Kum].

## 4 Curve Theory

In Chapter 12 we studied the problem of finding the hull of a given curve \( \gamma \) in \( {\mathbb{C}}^{n} \) . The case of a real-analytic curve was treated in the 1950s by one of us in the papers [We3], [We4], [We5], and [We6]. The principal tool in these papers was the Cauchy transform. An elegant treatment of this case and applications to the study of the algebra of bounded analytic functions on Riemann surfaces was given in [Ro2]. In two very influential papers, [Bi3] and [Bi2], Errett Bishop gave an abstract Banach algebra approach to the problem of finding hulls of curves. In particular, he proved a version of our Theorem 11.8 for the case of Banach algebras in [Bi3]. Based in part on Bishop's work, G. Stolzenberg solved the problem for \( {\mathcal{C}}^{1} \) -smooth curves in [St2]. The case when \( \gamma \) is merely rectifiable was treated by Alexander in [Al1].

Independent of the study of algebras of functions, B. Aupetit in [Au3] applied the theory of subharmonic functions to problems in the spectral theory of operators. Aupetit and Wermer in [AuWe] gave a new proof and generalization of Bishop's result in [Bi3], by adapting the methods used in [Au3]. An independent proof of the result in [AuWe] was given by Senichkin in [Sen].

## 5 Boundaries of Complex Manifolds

Given a \( k \) -dimensional manifold \( X \) in \( {\mathbb{C}}^{n} \), identifying the polynomial hull in the case \( k > 1 \) turned out to be a much harder problem than in the case \( k = 1 \) . The first major result was found by A. Browder in [Bro1] in the case \( k = n \) .
Let \( X \) be a compact orientable \( n \) -manifold in \( {\mathbb{C}}^{n} \) ; Browder shows that \( \widehat{X} \) is always larger than \( X \) .

EXERCISE 25.2. Why is this true when \( k = n = 1 \) ?

In [Al2], Alexander obtained the stronger result that, if \( X \) is as in Browder's situation, then the closure of \( \widehat{X} \smallsetminus X \) contains \( X \), so \( \widehat{X} \smallsetminus X \) is "large."

Let \( X \) be a \( k \) -dimensional smooth oriented manifold in \( {\mathbb{C}}^{n} \), where \( k \) is an odd integer. If \( X \) is the boundary of a complex manifold \( \Sigma \) with \( \Sigma \cup X \) compact, then \( \Sigma \subseteq \widehat{X} \) . So we may ask: Given \( X \), when does such a \( \Sigma \) exist? The solution was found in 1975 by R. Harvey and Blaine Lawson in their fundamental paper [HarL2] and developed in [Har]. To obtain a tractable problem, one allows \( \Sigma \) to have singularities and thus seeks an analytic variety \( \Sigma \) with boundary \( X \), rather than a manifold. We have sketched a proof of the result of [HarL2] in Chapter 19 for \( X \) in \( {\mathbb{C}}^{3} \) .

One may ask a related question: Given a closed curve \( \gamma \) in the complex projective plane \( {\mathbb{{CP}}}^{2} \), when does there exist an analytic variety in \( {\mathbb{{CP}}}^{2} \) with boundary \( \gamma \) ? This problem was solved by P. Dolbeault and G. Henkin in [DHe].

## 6 Sets Over the Circle

Let \( X \) be a compact set in \( {\mathbb{C}}^{n} \) lying over the unit circle. Suppose that under the projection \( \left( {\lambda ,{w}_{1},\ldots ,{w}_{n - 1}}\right) \mapsto \lambda \), \( \widehat{X} \) covers some point in the open disk \( \{ \left| \lambda \right| < 1\} \) and hence covers every point. We are interested in discovering all analytic disks, if any, contained in \( \widehat{X} \smallsetminus X \) . Theorem 20.2 tells us that if the fiber \( {X}_{\lambda } \) with \( \lambda \in \Gamma \) is a convex set, then \( \widehat{X} \smallsetminus X \) is the union of a family of analytic disks, each of which is moreover a graph over \( \{ \left| \lambda \right| < 1\} \) . Theorem 20.2 was proved independently by Alexander and Wermer [AW] for \( n = 2 \) and by Slodkowski [Sl3] for arbitrary \( n \) . Forstnerič showed in [Fo1] that the hypothesis " \( {X}_{\lambda } \) is convex for all \( \lambda \) " could be replaced by the hypothesis " \( {X}_{\lambda } \) is a simply connected Jordan domain varying smoothly with \( \lambda \in \Gamma \), such that \( 0 \in \operatorname{int}\left( {X}_{\lambda }\right) \), for all \( \lambda \) ," with the same conclusion as in Theorem 20.2. The following stronger result was proved by Slodkowski in [Sl4], and a closely related result was proved by Helton and Marshall in [HeltM]:

Theorem 25.3. Assume that each fiber \( {X}_{\lambda },\lambda \in \Gamma \), is connected and simply connected. Then \( \widehat{X} \smallsetminus X \) is a union of analytic graphs over \( \{ \left| \lambda \right| < 1\} \) .

What if the fibers \( {X}_{\lambda } \) are allowed to be disconnected? We saw in Chapter 24, Theorem 24.3, that \( \widehat{X} \smallsetminus X \) may fail to contain any analytic disk, so no extension of Theorem 25.3 to arbitrary sets over the circle is possible. A number of interesting applications have been found for results concerning polynomial hulls of sets over the circle. We shall write \( D \) for the open unit disk.

(i) Convex domains in \( {\mathbb{C}}^{n} \) .
Let \( W \) be a smoothly bounded convex domain in \( {\mathbb{C}}^{n} \) . In [Lem], Lempert constructed a special homeomorphism \( \Phi \) of \( W \) onto the unit ball in \( {\mathbb{C}}^{n} \), which can be viewed as an analogue of the Riemann map in the case \( n = 1 \) . The construction of \( \Phi \) is based on certain maps of \( D \) into \( W \), called extremal: Given \( a \in W \) and \( \xi \in {\mathbb{C}}^{n} \smallsetminus \{ 0\} \), an analytic map \( f \) of \( D \) into \( W \) is called extremal with respect to \( a,\xi \) if \( f\left( 0\right) = a \) and \( {f}^{\prime }\left( 0\right) = {\lambda \xi } \) with \( \lambda > 0 \), and for every analytic map \( g \) of \( D \) into \( W \) with \( g\left( 0\right) = a \) and \( {g}^{\prime }\left( 0\right) = {\mu \xi } \) with \( \mu > 0 \), we have \( \lambda \geq \mu \) . It is shown that, given \( a,\xi \), there exists a unique such corresponding extremal map. In [Sl5], Slodkowski gives a construction of Lempert's map \( \Phi \) by using properties of polynomial hulls of sets over the circle.

(ii) Corona Theorem. Carleson's Corona Theorem [Car12] states that if \( {f}_{1},\ldots ,{f}_{n} \) are bounded analytic functions on \( D \) such that there exists \( \delta > 0 \) with \( \mathop{\sum }\limits_{{j = 1}}^{n}\left| {{f}_{j}\left( z\right) }\right| \geq \delta \) for all \( z \in D \), then there exist bounded analytic functions \( {g}_{1},\ldots ,{g}_{n} \) on \( D \) satisfying

\[ \mathop{\sum }\limits_{{j = 1}}^{n}{f}_{j}{g}_{j} = 1 \]

on \( D \) . In [BR], Berndtsson and Ransford gave a geometric proof of the Corona Theorem in the case \( n = 2 \), basing themselves on the existence of analytic graphs in the polynomial hulls of certain sets in \( {\mathbb{C}}^{2} \) lying over the circle, as well as results on analytic set-valued functions in [Sl1]. In [Sl6], Slodkowski gave a related proof of the Corona Theorem for arbitrary \( n \) .

(iii) Holomorphic motions. Let \( E \) be a subset of \( \mathbb{C} \) . A holomorphic motion of \( E \) in \( \mathbb{C} \), parametrized by \( D \), is a map \( f : D \times E \rightarrow \mathbb{C} \) such that:

(a) For fixed \( w \in E \), \( z \mapsto f\left( {z, w}\right) \) is holomorphic on \( D \) .

(b) If \( {w}_{1} \neq {w}_{2} \), then \( f\left( {z,{w}_{1}}\right) \neq f\left( {z,{w}_{2}}\right) \) for all \( z \) in \( D \) .

(c) \( f\left( {0, w}\right) = w \) for all \( w \in E \) .

In this "motion," time is the complex variable \( z \) . Extending the earlier work of Sullivan and Thurston [SuT], Slodkowski shows in [Sl7] that a holomorphic motion of an arbitrary subset \( E \) of \( \mathbb{C} \) can be extended to a holomorphic motion of the full complex plane. As in earlier applications, his proof makes use of results about the structure of polynomial hulls of sets lying over the circle.

(iv) \( {H}^{\infty } \) control theory. A branch of modern engineering known as " \( {H}^{\infty } \) control theory" leads to mat
1172_(GTM8)Axiomatic Set Theory
Definition 1.15
Definition 1.15. \( T \) is the discrete topology on \( X \) iff \( T = \mathcal{P}\left( X\right) \) . Definition 1.16. If \( T \) is a topology on \( X \) and \( A \subseteq X \) then 1. \( {A}^{0} \triangleq \{ x \in A \mid \left( {\exists N\left( x\right) }\right) \left\lbrack {N\left( x\right) \subseteq A}\right\rbrack \} \) . 2. \( {A}^{ - } \triangleq \{ x \in X \mid \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap A \neq 0}\right\rbrack \} \) . Theorem 1.17. If \( T \) is a topology on \( X \) and \( A \subseteq X \) then \( {A}^{0} \in T \) . Proof. If \( B = \{ N \in T \mid N \subseteq A\} \) then \( B \subseteq T \) . Furthermore \[ x \in {A}^{0} \leftrightarrow \exists N\left( x\right) \subseteq A \] \[ \leftrightarrow \exists N\left( x\right) \in B \] \[ \leftrightarrow x \in \cup \left( B\right) \text{.} \] Then \( {A}^{0} = \bigcup \left( B\right) \in T \) . Definition 1.18. \( {T}^{\prime } \) is a base for the topology \( T \) on \( X \) iff 1. \( {T}^{\prime } \subseteq T \) . 2. \( \left( {\forall A \subseteq X}\right) \left\lbrack {A = {A}^{0} \rightarrow \left( {\exists B \subseteq {T}^{\prime }}\right) \left\lbrack {A = \bigcup \left( B\right) }\right\rbrack }\right\rbrack \) . Theorem 1.19. If \( X \neq 0 \), if \( {T}^{\prime } \) is a collection of subsets of \( X \) with the properties 1. \( \left( {\forall a \in X}\right) \left( {\exists A \in {T}^{\prime }}\right) \left\lbrack {a \in A}\right\rbrack \) . 2. \( \left( {\forall a \in X}\right) \left( {\forall {A}_{1},{A}_{2} \in {T}^{\prime }}\right) \left\lbrack {a \in {A}_{1} \cap {A}_{2} \rightarrow }\right. \) \( \left( {\exists {A}_{3} \in {T}^{\prime }}\right) \left\lbrack {a \in {A}_{3} \land {A}_{3} \subseteq {A}_{1} \cap {A}_{2}}\right\rbrack \rbrack . \) Then \( {T}^{\prime } \) is a base for a topology on \( X \) . Proof. If \( T = \left\{ {B \subseteq X \mid \left( {\exists C \subseteq {T}^{\prime }}\right) \left\lbrack {B = \bigcup \left( C\right) }\right\rbrack }\right\} \) then \( 0 = \bigcup \left( 0\right) \in T \) and from property \( 1, X = \bigcup \left( {T}^{\prime }\right) \in T \) . This establishes property 1 of Definition 1.13. To prove 2 of Definition 1.13 we wish to show that \( \bigcup \left( S\right) \in T \) whenever \( S \subseteq T \) . From the definition of \( T \) it is clear that if \( S \subseteq T \) then \( \forall B \in S,\exists C \subseteq {T}^{\prime } \) \[ B = \bigcup \left( C\right) \] If \[ {C}_{B} = \left\{ {A \in {T}^{\prime } \mid A \subseteq B}\right\} \] then \[ B = \bigcup \left( {C}_{B}\right) \] and \[ \mathop{\bigcup }\limits_{{B \in S}}B = \mathop{\bigcup }\limits_{{B \in S}} \cup \left( {C}_{B}\right) \] \[ = \bigcup \left( {\mathop{\bigcup }\limits_{{B \in S}}{C}_{B}}\right) \] Since \( \mathop{\bigcup }\limits_{{B \in S}}{C}_{B} \subseteq {T}^{\prime },\bigcup \left( S\right) \in T \) . If \( {B}_{1},{B}_{2} \in T \) then \( \exists {C}_{1},{C}_{2} \subseteq {T}^{\prime } \) \[ {B}_{1} = \bigcup \left( {C}_{1}\right) \land {B}_{2} = \bigcup \left( {C}_{2}\right) \] Therefore \[ {B}_{1} \cap {B}_{2} = \left( {\mathop{\bigcup }\limits_{{{A}_{1} \in {C}_{1}}}{A}_{1}}\right) \cap \left( {\mathop{\bigcup }\limits_{{{A}_{2} \in {C}_{2}}}{A}_{2}}\right) \] \[ = \mathop{\bigcup }\limits_{\substack{{{A}_{1} \in {C}_{1}} \\ {{A}_{2} \in {C}_{2}} }}\left( {{A}_{1} \cap {A}_{2}}\right) \] \[ = \mathop{\bigcup }\limits_{\substack{{{A}_{1} \in {C}_{1}} \\ {{A}_{2} \in {C}_{2}} \\ {{A}_{3} \subseteq {A}_{1} \cap {A}_{2}} }}{A}_{3} \] (By 2). 
Then \( {B}_{1} \cap {B}_{2} \in T \) ; hence \( T \) is a topology on \( X \) . Clearly \( {T}^{\prime } \) is a base for \( T \) . Definition 1.20. If \( T \) is a topology on \( X \) and \( A \subseteq X \) then 1. \( A \) is open iff \( A = {A}^{0} \) . 2. \( A \) is regular open iff \( A = {A}^{-0} \) . 3. \( A \) is closed iff \( A = {A}^{ - } \) . 4. \( A \) is clopen iff \( A \) is both open and closed. 5. \( A \) is dense in \( X \) iff \( {A}^{ - } = X \) . Remark. From Theorem 1.17 we see that if \( T \) is a topology on \( X \) then \( T \) is the collection of open sets in that topology. A base for a topology is simply a collection of open sets from which all other open sets can be generated by unions. For the set of real numbers \( R \) the intervals \( \left( {a, b}\right) \triangleq \{ x \in R \mid a < x < b\} \) form a base for what is called the natural topology on \( R \) . In this topology \( \left( {0,1}\right) \), and indeed every interval \( \left( {a, b}\right) \), is not only open but regular open. \( \left\lbrack {a, b}\right\rbrack \triangleq \{ x \in R \mid a \leq x \leq b\} = {\left( a, b\right) }^{ - } \) . Thus for example \( \left\lbrack {1,2}\right\rbrack \) is closed. Furthermore \( \left( {0,1}\right) \cup \left( {1,2}\right) \) is open but not regular open. The set of all rationals is dense in \( R \) . In this topology there are exactly two clopen sets 0 and \( R \) . Theorem 1.21. 1. In any topology on \( X \) both 0 and \( X \) are clopen. 2. In the discrete topology on \( X \) every set is clopen and the collection of singleton sets is a base. Proof. Left to the reader. Remark. The next few theorems deal with properties that are true in every topological space \( \langle X, T\rangle \) . In discussing properties that depend upon \( X \) but are independent of the topology \( T \), it is conventional to suppress reference to \( T \) and to speak simply of a topological space \( X \) . Hereafter we will use this convention. Theorem 1.22. If \( A \subseteq X \) and if \( B \subseteq X \) then 1. \( {A}^{0} \subseteq A \subseteq {A}^{ - } \) . 2. \( {A}^{00} = {A}^{0} \land {A}^{- - } = {A}^{ - } \) . 3. \( A \subseteq B \rightarrow {A}^{0} \subseteq {B}^{0} \land {A}^{ - } \subseteq {B}^{ - } \) . 4. \( {\left( X - A\right) }^{ - } = X - {A}^{0} \land {\left( X - A\right) }^{0} = X - {A}^{ - } \) . Proof. 1. \( x \in {A}^{0} \rightarrow \exists N\left( x\right) \subseteq A \) \[ \rightarrow x \in A \] \[ x \in A \rightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap A \neq 0}\right\rbrack \] \[ \rightarrow x \in {A}^{ - }\text{.} \] 2. \( x \in {A}^{0} \rightarrow \exists N\left( x\right) \subseteq A \) \[ \rightarrow \left( {\exists N\left( x\right) }\right) \lbrack x \in \left( {N\left( x\right) \cap {A}^{0}}\right) \; \land \;\left( {N\left( x\right) \cap {A}^{0}}\right) \in T \] \[ \left. {\land \left( {N\left( x\right) \cap {A}^{0}}\right) \subseteq {A}^{0}}\right\rbrack \] \[ \rightarrow \exists N\left( x\right) \subseteq {A}^{0} \] \[ \rightarrow x \in {A}^{00}\text{.} \] Since by \( 1,{A}^{00} \subseteq {A}^{0} \) we conclude that \( {A}^{00} = {A}^{0} \) . 
\[ x \in {A}^{- - } \rightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap {A}^{ - } \neq 0}\right\rbrack \] \[ \rightarrow \left( {\forall N\left( x\right) }\right) \left( {\exists y}\right) \left\lbrack {y \in N\left( x\right) \land y \in {A}^{ - }}\right\rbrack \] \[ \rightarrow \left( {\forall N\left( x\right) }\right) \left( {\exists y}\right) \left( {\exists {N}^{\prime }\left( y\right) }\right) \left\lbrack {{N}^{\prime }\left( y\right) \cap A \neq 0 \land {N}^{\prime }\left( y\right) \subseteq N\left( x\right) }\right\rbrack \] \[ \rightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap A \neq 0}\right\rbrack \] \[ \rightarrow x \in {A}^{ - }\text{.} \] Since by \( 1,{A}^{ - } \subseteq {A}^{- - } \) it follows that \( {A}^{- - } = {A}^{ - } \) . 3. If \( A \subseteq B \) then \[ x \in {A}^{0} \rightarrow \exists N\left( x\right) \subseteq A \] \[ \rightarrow \exists N\left( x\right) \subseteq B \] \[ \rightarrow x \in {B}^{0} \] \[ x \in {A}^{ - } \rightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap A \neq 0}\right\rbrack \] \[ \rightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap B \neq 0}\right\rbrack \] \[ \rightarrow x \in {B}^{ - }\text{.}\] \[ \text{4.}x \in {\left( X - A\right) }^{ - } \leftrightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap \left( {X - A}\right) \neq 0}\right\rbrack \] \[ \leftrightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \nsubseteq A}\right\rbrack \] \[ \leftrightarrow x \notin {A}^{0} \] \[ \leftrightarrow x \in X - {A}^{0}\text{.} \] \[ x \in {\left( X - A\right) }^{0} \leftrightarrow \exists N\left( x\right) \subseteq \left( {X - A}\right) \] \[ \leftrightarrow \left( {\exists N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap A = 0}\right\rbrack \] \[ \leftrightarrow x \notin {A}^{ - } \] \[ \leftrightarrow x \in X - {A}^{ - }\text{.} \] Theorem 1.23. If \( A \subseteq X \) and if \( B \subseteq X \) then 1. \( A \) regular open implies \( A \) open. 2. \( A \) is open iff \( X - A \) is closed. 3. \( A \) is closed iff \( X - A \) is open. 4. \( A \subseteq B \) and \( A \) dense in \( X \) implies \( B \) dense in \( X \) . Proof. 1. If \( A = {A}^{-0} \) then \( {A}^{0} = {A}^{-{00}} = {A}^{-0} = A \) . 2. \( A = {A}^{0} \leftrightarrow \left( {X - A}\right) = \left( {X - {A}^{0}}\right) \) \[ \leftrightarrow \left( {X - A}\right) = {\left( X - A\right) }^{ - }\text{.} \] 3. Left to the reader. 4. \( A \subseteq B \rightarrow {A}^{ - } \subseteq {B}^{ - } \) . But \( A \) dense in \( X \) implies \( {A}^{ - } = X \) . Hence \( {B}^{ - } = X \) . Theorem 1.24. If \( C \) is a clopen set in the topological space \( X \) and \( {B}^{ - } - {B}^{0} \subseteq C \) then \( {B}^{ - } - C \) is clopen. Proof. If \( x \in {B}^{ - } - C \) then since \( {B}^{ - } - {B}^{0} \subseteq C \) \[ x \in {B}^{0} \land x \notin C. \] Since \( C \) is closed \( X - C \) is open. Therefore \( {B}^{0} \cap \left( {X - C}\right) \) is open. Then \( x
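A small illustration that is not part of the source: on a finite set the interior and closure operators of Definition 1.16 can be computed directly, and identities such as Theorem 1.22(2) and (4) can be checked by brute force. The topology below is an invented toy example.

```python
from itertools import combinations

X = frozenset({1, 2, 3})
# A topology on X, given as its collection of open sets.
T = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

def interior(A):
    """A^0: the union of all open sets contained in A."""
    return frozenset().union(*[U for U in T if U <= A])

def closure(A):
    """A^-: all x such that every open set containing x meets A."""
    return frozenset(x for x in X if all(U & A for U in T if x in U))

subsets = [frozenset(s) for r in range(len(X) + 1) for s in combinations(X, r)]
for A in subsets:
    assert interior(interior(A)) == interior(A)          # Theorem 1.22(2)
    assert closure(closure(A)) == closure(A)             # Theorem 1.22(2)
    assert closure(X - A) == X - interior(A)             # Theorem 1.22(4)
    assert interior(X - A) == X - closure(A)             # Theorem 1.22(4)
print("checks passed for all", len(subsets), "subsets of X")
```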
1075_(GTM233)Topics in Banach Space Theory
Definition 9.3.2
Definition 9.3.2. Two unconditional bases \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) and \( {\left( {f}_{n}\right) }_{n = 1}^{\infty } \) of a Banach space \( X \) are said to be permutatively equivalent if there is a permutation \( \pi \) of \( \mathbb{N} \) such that \( {\left( {e}_{\pi \left( n\right) }\right) }_{n = 1}^{\infty } \) and \( {\left( {f}_{n}\right) }_{n = 1}^{\infty } \) are equivalent. Then we say that a Banach space \( X \) has a (UTAP) unconditional basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) if every normalized unconditional basis in \( X \) is permutatively equivalent to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . Classifying spaces with (UTAP) bases is more difficult, because the initial step (reduction to symmetric bases) is no longer available. The first step toward this classification was taken in 1976 by Edelstein and Wojtaszczyk [83], who showed that the finite direct sums of the spaces \( {c}_{0},{\ell }_{1} \), and \( {\ell }_{2} \) have (UTAP) bases (thus adding four new spaces to the already known ones). After their work, Bourgain, Casazza, Lindenstrauss, and Tzafriri embarked on a comprehensive study, completed in 1985 [32]. They added the spaces \( {c}_{0}\left( {\ell }_{1}\right) ,{\ell }_{1}\left( {c}_{0}\right) \) and \( {\ell }_{1}\left( {\ell }_{2}\right) \) to the list, but showed, remarkably, that \( {\ell }_{2}\left( {\ell }_{1}\right) \) fails to have a (UTAP) basis! However, all hopes of a really satisfactory classification of Banach spaces having a (UTAP) basis were dashed when they also found a nonclassical Banach space that also has a (UTAP) basis. This space was a modification of Tsirelson space, to be constructed in the next chapter, which contains no copy of any space isomorphic to an \( {\ell }_{p}\left( {1 \leq p < \infty }\right) \) or \( {c}_{0} \) . The subject was revisited in [42,43], and several other "pathological" spaces with (UTAP) bases have been discovered, including the original Tsirelson space. For an account of this topic see [298]. For the classification of symmetric basic sequences in \( {L}_{p} \) spaces we refer to [34, 143, 267]. ## 9.4 Complementation of Block Basic Sequences We now turn our attention to the study of complementation of subspaces of a Banach space. Starting with the example of \( {c}_{0} \) in \( {\ell }_{\infty } \) we saw that a subspace of a Banach space need not be complemented. Using Zippin's theorem, we will now study the complementation in a Banach space of the span of block basic sequences of unconditional bases. Lemma 9.4.1. Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be an unconditional basis of a Banach space \( X \) . Suppose that \( {\left( {u}_{k}\right) }_{k = 1}^{\infty } \) is a normalized block basic sequence of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) such that the subspace \( \left\lbrack {u}_{k}\right\rbrack \) is complemented in \( X \) . Then there is a projection \( Q \) from \( X \) onto \( \left\lbrack {u}_{k}\right\rbrack \) of the form \[ Q\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{\infty }{u}_{k}^{ * }\left( x\right) {u}_{k} \] where \( \operatorname{supp}{u}_{k}^{ * } \subseteq \operatorname{supp}{u}_{k} \) for all \( k \in \mathbb{N} \) . Proof. Suppose \[ {u}_{k} = \mathop{\sum }\limits_{{j \in {A}_{k}}}{a}_{j}{e}_{j} \] where \( {A}_{k} = \operatorname{supp}{u}_{k} \), and that \( P \) is a bounded projection onto \( \left\lbrack {u}_{k}\right\rbrack \) . 
For each \( k \) let \( {Q}_{k} \) be the projection onto \( {\left\lbrack {e}_{j}\right\rbrack }_{j \in {A}_{k}} \) given by \[ {Q}_{k}x = \mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j}^{ * }\left( x\right) {e}_{j} \] We will show that the formula \[ {Qx} = \mathop{\sum }\limits_{{k = 1}}^{\infty }{Q}_{k}P{Q}_{k}x,\;x \in X, \] defines a bounded projection onto \( \left\lbrack {u}_{k}\right\rbrack \) (and it is clearly of the prescribed form). Suppose \( x = \mathop{\sum }\limits_{{j = 1}}^{m}{e}_{j}^{ * }\left( x\right) {e}_{j} \) for some \( m \) . Then for a suitable \( N \) such that supp \( x \subset \) \( {A}_{1} \cup \cdots \cup {A}_{N} \), we have \[ {Qx} = \mathop{\sum }\limits_{{k = 1}}^{N}{Q}_{k}P{Q}_{k}x \] \[ = \underset{{\epsilon }_{k} = \pm 1}{\text{ Average }}\mathop{\sum }\limits_{{j = 1}}^{N}\mathop{\sum }\limits_{{k = 1}}^{N}{\epsilon }_{j}{\epsilon }_{k}{Q}_{j}P{Q}_{k}x \] \[ = {\operatorname{Average}}_{{\epsilon }_{k} = \pm 1}\left( {\mathop{\sum }\limits_{{j = 1}}^{N}{\epsilon }_{j}{Q}_{j}}\right) P\left( {\mathop{\sum }\limits_{{k = 1}}^{N}{\epsilon }_{k}{Q}_{k}}\right) x. \] By the unconditionality of the original basis, \[ \parallel {Qx}\parallel \leq {\mathrm{K}}_{\mathrm{u}}{}^{2}\parallel P\parallel \parallel x\parallel \] It is now easy to check that \( Q \) extends to a bounded operator and has the required properties. The following characterization of the canonical bases of the \( {\ell }_{p} \) -spaces and \( {c}_{0} \) is due to Lindenstrauss and Tzafriri [200]. Theorem 9.4.2. Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be an unconditional basis of a Banach space \( X \) . Suppose that for every block basic sequence \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) of a permutation of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) , the subspace \( \left\lbrack {u}_{n}\right\rbrack \) is complemented in \( X \) . Then \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is equivalent to the canonical basis of \( {c}_{0} \) or \( {\ell }_{p} \) for some \( 1 \leq p < \infty \) . Proof. Without loss of generality we may assume that the constant of unconditionality of the basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is 1 . Our first goal is to show that whenever we have that \[ {u}_{n} = \mathop{\sum }\limits_{{k \in {A}_{n}}}{\alpha }_{k}{e}_{k},\;{v}_{n} = \mathop{\sum }\limits_{{k \in {B}_{n}}}{\beta }_{k}{e}_{k},\;n \in \mathbb{N}, \] are two normalized block basic sequences of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) such that \( {A}_{n} \cap {B}_{m} = \varnothing \) for all \( n, m \), then \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \sim {\left( {v}_{n}\right) }_{n = 1}^{\infty } \) . First we will prove that if \( {\left( {a}_{n}\right) }_{n = 1}^{\infty } \) is a sequence of scalars for which \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{u}_{n} \) converges, then the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{s}_{n}{a}_{n}{v}_{n} \) converges for every sequence of scalars \( {\left( {s}_{n}\right) }_{n = 1}^{\infty } \) tending to 0 . For each \( n \in \mathbb{N} \) consider \[ {w}_{n} = {u}_{n} + {s}_{n}{v}_{n},\;n \in \mathbb{N}. \] Then \( {\left( {w}_{n}\right) }_{n = 1}^{\infty } \) is a seminormalized block basic sequence with respect to a permutation of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . To be precise, supp \( {w}_{n} = {A}_{n} \cup {B}_{n} \) for each \( n \) and \( 1 \leq \begin{Vmatrix}{w}_{n}\end{Vmatrix} \leq 2 \) (for \( n \) big enough that \( \left. 
{\left| {s}_{n}\right| \leq 1}\right) \) . By the hypothesis, the subspace \( \left\lbrack {w}_{n}\right\rbrack \) is complemented in \( X \) . Lemma 9.4.1 yields a projection \( Q : X \rightarrow X \) of the form \[ Q\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{w}_{n}^{ * }\left( x\right) {w}_{n} \] where the elements of the sequence \( {\left( {w}_{n}^{ * }\right) }_{n = 1}^{\infty } \subset {X}^{ * } \) satisfy supp \( {w}_{n}^{ * } \subseteq {A}_{n} \cup {B}_{n} \) . Moreover, it is easy to see that \( \begin{Vmatrix}{w}_{n}^{ * }\end{Vmatrix} \leq \parallel Q\parallel \) for all \( n \) . The series \[ \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}Q\left( {u}_{n}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{w}_{n}^{ * }\left( {u}_{n}\right) {w}_{n} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{w}_{n}^{ * }\left( {u}_{n}\right) \left( {{u}_{n} + {s}_{n}{v}_{n}}\right) \] converges because \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{u}_{n} \) does. Therefore, by unconditionality, it follows that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{w}_{n}^{ * }\left( {u}_{n}\right) {s}_{n}{v}_{n} \) converges as well. From here we deduce the convergence of the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{s}_{n}{v}_{n} \) by noticing that \( {w}_{n}^{ * }\left( {u}_{n}\right) \rightarrow 1 \), since \[ {w}_{n}^{ * }\left( {u}_{n}\right) = 1 - {s}_{n}{w}_{n}^{ * }\left( {v}_{n}\right) \] and \[ 0 \leq \left| {{s}_{n}{w}_{n}^{ * }\left( {v}_{n}\right) }\right| \leq \left| {s}_{n}\right| \begin{Vmatrix}{w}_{n}^{ * }\end{Vmatrix} \leq \parallel Q\parallel \left| {s}_{n}\right| \rightarrow 0. \] Now, if \( {\left( {a}_{n}\right) }_{n = 1}^{\infty } \) is a sequence of scalars for which \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{u}_{n} \) converges, we can find a sequence of scalars \( {\left( {t}_{n}\right) }_{n = 1}^{\infty } \) tending to \( \infty \) such that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{t}_{n}{a}_{n}{u}_{n} \) converges. Since \( {\left( 1/{t}_{n}\right) }_{n = 1}^{\infty } \) tends to 0, the previous argument applies, so \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{v}_{n} \) converges. Reversing the roles of \( \left( {u}_{n}\right) \) and \( \left( {v}_{n}\right) \), we get the equivalence of these two block basic sequences. This argument applies not only to block basic sequences of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) but to block basic sequences of a permutation of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . Thus \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) is equivalent to every permutation of \( {\left( {v}_{n}\right) }_{n = 1}^{\infty } \) . This implies that \( {\left( {e}_{2n}\right) }_{n = 1}^{\infty } \) and \( {\left( {e}_{{2n} - 1}\right) }_{n = 1}^{\infty } \) are both perfectly homogeneous and equivalent to each other. We conclude the proof by applying Zippin's theorem
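As an aside, the homogeneity that drives such arguments is easy to check numerically for the canonical \( {\ell }_{p} \) basis: since the blocks of a block basic sequence have pairwise disjoint supports, a normalized block basic sequence of the canonical \( {\ell }_{p} \) basis is isometrically equivalent to the canonical basis itself. The following minimal sketch (Python; the block sizes and the exponent \( p = 3 \) are arbitrary choices made only for the illustration) compares \( \parallel \mathop{\sum }\limits_{k}{c}_{k}{u}_{k}{\parallel }_{p} \) with \( {\left( \mathop{\sum }\limits_{k}{\left| {c}_{k}\right| }^{p}\right) }^{1/p} \).

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.0                      # any 1 <= p < infinity behaves the same way
block_sizes = [4, 2, 5, 3]   # sizes of the disjoint supports A_1, A_2, ... (arbitrary)
dim = sum(block_sizes)

# Build normalized blocks u_k supported on consecutive, disjoint index ranges.
blocks = []
start = 0
for size in block_sizes:
    u = np.zeros(dim)
    u[start:start + size] = rng.standard_normal(size)
    u /= np.linalg.norm(u, ord=p)        # ||u_k||_p = 1
    blocks.append(u)
    start += size

c = rng.standard_normal(len(blocks))     # arbitrary scalar coefficients
combination = sum(ck * uk for ck, uk in zip(c, blocks))

lhs = np.linalg.norm(combination, ord=p) # || sum_k c_k u_k ||_p
rhs = np.linalg.norm(c, ord=p)           # ( sum_k |c_k|^p )^{1/p}
print(lhs, rhs)                          # the two values agree up to rounding
```

Theorem 9.4.2 runs in the opposite direction: it deduces equivalence to the canonical \( {c}_{0} \) or \( {\ell }_{p} \) basis from the complementation hypothesis alone.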
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 6.2.2
Definition 6.2.2. A local G-orientation on \( M \) at \( x \) is a choice of isomorphism \( {\bar{\varphi }}_{x} \) : \( G \rightarrow {H}_{n}\left( {M, M - x;G}\right) . \) To (attempt to) define a \( G \) -orientation on \( M \) we need to see how local \( G \) -orientations fit together. Definition 6.2.3. (1) Let \( x \) and \( y \) both lie in some coordinate patch \( {U}_{\alpha } \) . Let \( {\varphi }_{\alpha } \) : \( {\mathbb{R}}^{n} \rightarrow {U}_{\alpha } \) and set \( p = {\varphi }_{\alpha }^{-1}\left( x\right) \) and \( q = {\varphi }_{\alpha }^{-1}\left( y\right) \) . Let \( D \) be a closed disc in \( {\mathbb{R}}^{n} \) containing both \( p \) and \( q \) . The local \( G \) -orientations \( {\bar{\varphi }}_{x} \) and \( {\bar{\varphi }}_{y} \) are compatible if the following diagram commutes ![21ef530b-1e09-406a-b041-cf4539af5c14_107_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_107_0.jpg) where the unlabelled maps are all induced by inclusions. (2) Let \( x \) and \( y \) both lie in the same component of \( M \) . Then \( {\bar{\varphi }}_{x} \) and \( {\bar{\varphi }}_{y} \) are compatible if there is a sequence of points \( {x}_{0} = x,{x}_{1},\ldots ,{x}_{k} = y \) with \( {x}_{i} \) and \( {x}_{i + 1} \) both lying in some coordinate patch, for each \( i \), and \( {\bar{\varphi }}_{{x}_{i}} \) and \( {\bar{\varphi }}_{{x}_{i + 1}} \) are compatible for each \( i \) . Remark 6.2.4. There is always such a sequence of points, as we may let \( f : I \rightarrow M \) be an arbitrary map with \( f\left( 0\right) = x \) and \( f\left( 1\right) = y \) . Then \( f\left( I\right) \) is a compact set so is covered by finitely many coordinate patches, and then \( {x}_{0},\ldots ,{x}_{k} \) are easy to find. Thus the condition in the definition is really a condition on the local \( G \) -orientations. It is also easy to check that this condition is independent of the choice of intermediate points \( {x}_{1},\ldots ,{x}_{k - 1} \) . Definition 6.2.5. The \( n \) -manifold \( M \) is \( G \) -orientable if there exists a compatible collection of local \( G \) -orientations \( \left\{ {\bar{\varphi }}_{x}\right\} \) for all points \( x \in M \) . In that case a choice of mutually compatible \( \left\{ {\bar{\varphi }}_{x}\right\} \) is a \( G \) -orientation of \( M \) . If there is no compatible collection of local \( G \) -orientations on \( M \) then \( M \) is \( G \) -nonorientable. We have so far let \( G = \mathbb{Z}/2\mathbb{Z} \) or \( \mathbb{Z} \) . But now we see a big difference between these two cases. Theorem 6.2.6. Every manifold is \( \mathbb{Z}/2\mathbb{Z} \) -orientable. Proof. Following the diagram in Definition 6.2.3 all the way around from \( G \) to \( G \) gives an isomorphism from \( G \) to \( G \), and \( {\bar{\varphi }}_{x} \) and \( {\bar{\varphi }}_{y} \) are compatible if and only if this isomorphism is the identity. But the only isomorphism \( \bar{\varphi } : \mathbb{Z}/2\mathbb{Z} \rightarrow \mathbb{Z}/2\mathbb{Z} \) is the identity. Referring to the proof of this theorem, we see that in case \( G = \mathbb{Z} \) we have an isomorphism \( \varphi : \mathbb{Z} \rightarrow \mathbb{Z} \) . But now there are two isomorphisms, the identity (i.e., multiplication by 1) and multiplication by -1 . So a priori, \( M \) might or might not be orientable. Theorem 6.2.7. (1) Let \( M \) be the union of components \( M = {M}_{1} \cup {M}_{2} \cup \cdots \) . Then \( M \) is \( \mathbb{Z} \) -orientable if and only if each \( {M}_{i} \) is \( \mathbb{Z} \) -orientable. 
(2) Let \( M \) be connected. If \( M \) is \( \mathbb{Z} \) -orientable, then \( M \) has exactly two \( \mathbb{Z} \) -orientations. (3) If \( M \) has \( k \) components and is \( \mathbb{Z} \) -orientable, then \( M \) has \( {2}^{k} \) \( \mathbb{Z} \) -orientations. Proof. The important thing to note is that if \( {\bar{\varphi }}_{x} : \mathbb{Z} \rightarrow {H}_{n}\left( {M, M - x;\mathbb{Z}}\right) \) is a local \( \mathbb{Z} \) -orientation, there is exactly one other local \( \mathbb{Z} \) -orientation at \( x \), namely \( - {\bar{\varphi }}_{x} \), where \( - {\bar{\varphi }}_{x} : \mathbb{Z} \rightarrow {H}_{n}\left( {M, M - x;\mathbb{Z}}\right) \) is the map defined by \( - \left( {\bar{\varphi }}_{x}\right) \left( n\right) = - {\bar{\varphi }}_{x}\left( n\right) \), for \( n \in \mathbb{Z} \) . Thus if we have a compatible system of local \( \mathbb{Z} \) -orientations \( \left\{ {\bar{\varphi }}_{x}\right\} \) on a connected manifold \( M \), then there are exactly two compatible systems of local \( \mathbb{Z} \) -orientations on \( M \), namely \( \left\{ {\bar{\varphi }}_{x}\right\} \) and \( \left\{ {-{\bar{\varphi }}_{x}}\right\} \) . Having made our point, we now drop the \( \mathbb{Z} \) and use standard mathematical language. Definition 6.2.8. A local orientation on \( M \) is a local \( \mathbb{Z} \) -orientation. \( M \) is orientable (resp. nonorientable) if it is \( \mathbb{Z} \) -orientable (resp. \( \mathbb{Z} \) -nonorientable). An orientation of \( M \) is a \( \mathbb{Z} \) -orientation of \( M \) . Our first objective is to investigate when manifolds are orientable. In view of Theorem 6.2.7, we may confine our attention to connected manifolds. Definition 6.2.9. Let \( M \) be a connected \( n \) -manifold. Let \( {\bar{\varphi }}_{x} \) be a local orientation of \( M \) at \( x \) . Let \( y \in M \) and let \( f : I \rightarrow M \) with \( f\left( 0\right) = x \) and \( f\left( 1\right) = y \) . The transfer of \( {\bar{\varphi }}_{x} \) along \( f \) to \( y \) is the local orientation \( {f}_{y}\left( {\bar{\varphi }}_{x}\right) \) of \( M \) at \( y \) obtained as follows: Let \( 0 = {t}_{0} < {t}_{1} < \cdots < {t}_{k} = 1 \) such that, for each \( i, f\left( \left\lbrack {{t}_{i},{t}_{i + 1}}\right\rbrack \right) \) is contained in some coordinate patch. Let \( {x}_{i} = f\left( {t}_{i}\right) \) for each \( i \), so that, in particular, \( {x}_{0} = x \) and \( {x}_{k} = y \) . For each \( i \), let \( {\bar{\varphi }}_{{x}_{i + 1}} \) be the local orientation of \( M \) at \( {x}_{i + 1} \) compatible with the local orientation \( {\bar{\varphi }}_{{x}_{i}} \) of \( M \) at \( {x}_{i} \) . Then \( {f}_{y}\left( {\bar{\varphi }}_{x}\right) = {\bar{\varphi }}_{{x}_{k}} \) . The transfer of \( {\bar{\varphi }}_{x} \) along \( f \) to \( y \) does not depend on the choice of points \( \left\{ {t}_{i}\right\} \) but it certainly may depend on the choice of \( f \) . However, we have the following result. Theorem 6.2.10. Let \( M \) be a connected \( n \) -manifold. Then a system of local orientations \( \left\{ {\bar{\varphi }}_{x}\right\} \) is an orientation of \( M \) if and only if for every \( x, y \in M \) and every path \( f : I \rightarrow M \) with \( f\left( 0\right) = x \) and \( f\left( 1\right) = y,{\bar{\varphi }}_{y} = {f}_{y}\left( {\bar{\varphi }}_{x}\right) \) . In particular, \( M \) is orientable if and only if \( {f}_{y}\left( {\bar{\varphi }}_{x}\right) \) is independent of the choice of \( f \) . Proof. 
If \( M \) is orientable, let \( \left\{ {\bar{\varphi }}_{x}\right\} \) be an orientation, i.e., a compatible system of local orientations. Then for any path \( f,{f}_{y}\left( {\bar{\varphi }}_{x}\right) = {\bar{\varphi }}_{y} \) is independent of the choice of \( f \) . Conversely, if \( {f}_{y}\left( {\bar{\varphi }}_{x}\right) \) is independent of the choice of \( f \), we may obtain a compatible system of local orientations as follows: Choose a point \( x \in M \) and a local orientation \( {\bar{\varphi }}_{x} \) . Then for a point \( y \in M \), choose any path \( f \) from \( x \) to \( y \) and let \( {\bar{\varphi }}_{y} = {f}_{y}\left( {\bar{\varphi }}_{x}\right) \) This result makes it crucial to investigate the dependence of \( {f}_{y}\left( {\bar{\varphi }}_{x}\right) \) on the choice of the path \( f \) . We do that now. Lemma 6.2.11. (1) Let \( x \) and \( y \) be two points in \( M \) that are both contained in some coordinate patch \( {U}_{\alpha } \) . Then for any two paths \( f \) and \( g \) from \( x \) to \( y \) with \( f\left( I\right) \subset {U}_{\alpha } \) and \( g\left( I\right) \subset {U}_{\alpha },{f}_{y}\left( {\bar{\varphi }}_{x}\right) = {g}_{y}\left( {\bar{\varphi }}_{x}\right) \) . (2) Let \( x \) and \( y \) be any two points in \( M \) . If \( f \) and \( g \) are any two paths in \( M \) that are homotopic rel \( \{ 0,1\} \), then \( {f}_{y}\left( {\bar{\varphi }}_{x}\right) = {g}_{y}\left( {\bar{\varphi }}_{x}\right) \) . Proof. (1) Since \( I \) is compact, \( f\left( I\right) \) is a compact subset of \( {U}_{\alpha } \), and hence \( {\varphi }_{\alpha }^{-1}\left( {f\left( I\right) }\right) \) is a compact subset of \( {\mathbb{R}}^{n} \), as is \( {\varphi }_{\alpha }^{-1}\left( {g\left( I\right) }\right) \) . But then we may choose the disc \( D \) in Definition 6.2.3 so large that \( D \) includes \( {\varphi }_{\alpha }^{-1}\left( {f\left( I\right) }\right) \cup {\varphi }_{\alpha }^{-1}\left( {g\left( I\right) }\right) \) . (2) Let \( F : I \times I \rightarrow M \) be a homotopy of \( f \) to \( g \) rel \( \{ 0,1\} \) . Then \( F\left( {I \times I}\right) \) is a compact subset of \( M \), so is covered by finitely many coordinate patches. It is then easy to see that there is some \( k \) such that if \( I \times I \) is divided into \( {k}^{2} \) subsquares \( {J}_{i, j} = \) \( \left\lbrack {i/k,\left( {i + 1}\right) /k}\right\rbrack \times \left\lbrack {j/k,\left( {j + 1}\right) /k}\right\rbrack \), each \( F\left( {J}_{i, j}\right) \) is contained in a single coordinate patch. But then we have a sequence of paths from \( f \) to \( g \), or, more precisely, to the constant path followed by \( g \) followed by the constant path, with each path giving the same transferred orientation, according to part (1) and the following picture: ![21ef530b-1e09-406a-b041-cf4539af5c14_109_0.jpg](images/21e
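As a toy illustration of how the transferred orientation can depend on the path, one can replace the homological bookkeeping by signs: assume each overlap of consecutive coordinate patches along a path carries a sign, \( +1 \) if the two patches induce compatible local orientations and \( -1 \) otherwise; transferring a local orientation along the path then multiplies it by the product of these signs. The sketch below (Python; the sign patterns are invented for the illustration and stand for a cylinder-like loop and a Möbius-band-like loop) exhibits a loop along which the transfer returns \( -{\bar{\varphi }}_{x} \) rather than \( {\bar{\varphi }}_{x} \), which by Theorem 6.2.10 rules out any compatible global system.

```python
# Naive sign bookkeeping for orientation transfer along a chain of patches.
# Each list entry is the compatibility sign on one overlap of consecutive
# coordinate patches (+1 compatible, -1 incompatible); the values are made
# up for the illustration.

def transfer(initial_orientation, overlap_signs):
    """Transport a local orientation (+1 or -1) along the chain of overlaps."""
    orientation = initial_orientation
    for sign in overlap_signs:
        orientation *= sign
    return orientation

cylinder_loop = [+1, +1, +1, +1]    # all overlaps orientation-compatible
moebius_loop  = [+1, +1, +1, -1]    # one orientation-reversing overlap

print(transfer(+1, cylinder_loop))  # +1: the orientation returns unchanged
print(transfer(+1, moebius_loop))   # -1: transfer around the loop flips it
```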
1164_(GTM70)Singular Homology Theory
Definition 2.1
Definition 2.1. An orientation of an \( n \) -dimensional manifold \( M \) is a function \( \mu \) which assigns to each point \( x \in M \) a local orientation \( {\mu }_{x} \in {H}_{n}\left( {M, M-\{ x\} ;\mathbf{Z}}\right) \) subject to the following continuity condition: Given any point \( x \in M \), there exists a neighborhood \( N \) of \( x \) and an element \( {\mu }_{N} \in {H}_{n}\left( {M, M - N}\right) \) such that \( {i}_{ * }\left( {\mu }_{N}\right) = {\mu }_{y} \) for any \( y \in N \), where \( {i}_{ * } : {H}_{n}\left( {M, M - N}\right) \rightarrow {H}_{n}\left( {M, M-\{ y\} }\right) \) denotes the homomorphism induced by inclusion. In order to better understand this continuity condition, recall that any point \( x \in M \) has an open neighborhood \( U \) which is homeomorphic to \( {\mathbf{R}}^{n} \) . By the excision property, for any \( y \in U \) , \[ {H}_{n}\left( {U, U-\{ y\} }\right) \approx {H}_{n}\left( {M, M-\{ y\} }\right) . \] However, if \( x \) and \( y \) are any two points of \( {\mathbf{R}}^{n} \), there is a canonical isomorphism \( {H}_{n}\left( {{\mathbf{R}}^{n},{\mathbf{R}}^{n}-\{ x\} }\right) \approx {H}_{n}\left( {{\mathbf{R}}^{n},{\mathbf{R}}^{n}-\{ y\} }\right) \) defined by choosing a closed ball \( {E}^{n} \subset {\mathbf{R}}^{n} \) large enough so that \( x \) and \( y \) are both in the interior of \( {E}^{n} \), and noting that in the following diagram, ![26a5d8f2-88cf-4556-8447-3a182179fff0_212_0.jpg](images/26a5d8f2-88cf-4556-8447-3a182179fff0_212_0.jpg) both \( {i}_{ * } \) and \( {j}_{ * } \) are isomorphisms. Moreover, the isomorphism between \( {H}_{n}\left( {{\mathbf{R}}^{n},{\mathbf{R}}^{n}-\{ x\} }\right) \) and \( {H}_{n}\left( {{\mathbf{R}}^{n},{\mathbf{R}}^{n}-\{ y\} }\right) \) that we thus obtain is independent of the choice of the ball \( {E}^{n} \) . Terminology. The manifold \( M \) is said to be orientable if it admits at least one orientation; otherwise, it is called nonorientable. A pair consisting of a manifold \( M \) and an orientation is called an oriented manifold. Example 2.1. (a) Euclidean \( n \) -space, \( {\mathbf{R}}^{n} \), is orientable (use the fact mentioned above that there exists a canonical isomorphism \( {H}_{n}\left( {{\mathbf{R}}^{n},{\mathbf{R}}^{n}-\{ x\} }\right) \approx \) \( {H}_{n}\left( {{\mathbf{R}}^{n},{\mathbf{R}}^{n}-\{ y\} }\right) \) for any two points \( x, y \in {\mathbf{R}}^{n} \) ). (b) Similarly, the \( n \) -sphere, \( {S}^{n} \) , is orientable according to our definition. (c) If \( M \) is an \( n \) -manifold, \( \mu \) is an orientation for \( M \), and \( N \) is an open subset of \( M \), then \( \mu \) restricted to \( N \) is an orientation of the \( n \) -manifold \( N \) . (d) Let \( M \) be an \( m \) -dimensional manifold with orientation \( \mu \) and \( N \) an \( n \) -dimensional manifold with orientation \( v \) . Let \( \mu \times v \) denote the function which assigns to each point \( \left( {x, y}\right) \in M \times N \) the homology class \[ {\mu }_{x} \times {v}_{y} \in {H}_{m + n}\left( {M \times N, M \times N-\{ \left( {x, y}\right) \} }\right) . \] Using the Künneth theorem, it is seen that \( {\mu }_{x} \times {v}_{y} \) is a generator of the homology group in question. It is also easy to verify that the required continuity condition holds, and thus \( \mu \times v \) is an orientation for \( M \times N \) . Thus the product of two orientable manifolds is orientable. 
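For Example 2.1(a) it may help to recall why these relative groups are infinite cyclic in the first place. Since \( {\mathbf{R}}^{n} \) is contractible and \( {\mathbf{R}}^{n}-\{ x\} \) deformation retracts onto an \( \left( {n - 1}\right) \) -sphere about \( x \), the long exact sequence of the pair gives, for \( n \geq 2 \),
\[ {H}_{n}\left( {{\mathbf{R}}^{n},{\mathbf{R}}^{n}-\{ x\} }\right) \approx {\widetilde{H}}_{n - 1}\left( {{\mathbf{R}}^{n}-\{ x\} }\right) \approx {\widetilde{H}}_{n - 1}\left( {S}^{n - 1}\right) \approx \mathbf{Z}, \]
and for \( n = 1 \) the same conclusion follows from \( {\widetilde{H}}_{0}\left( {{\mathbf{R}}^{1}-\{ x\} }\right) \approx \mathbf{Z} \) . A local orientation \( {\mu }_{x} \) is therefore exactly a choice of one of the two generators of this group, and the canonical isomorphism described above transports such a choice between any two points of \( {\mathbf{R}}^{n} \) .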
In dealing with questions such as these, we will need to frequently consider for any subset \( A \) of the manifold \( M \), the homology groups \( {H}_{i}\left( {M, M - A}\right) \) . If \( B \subset A \), it will be convenient to denote the corresponding homomorphism \( {H}_{i}\left( {M, M - A}\right) \rightarrow {H}_{i}\left( {M, M - B}\right) \) by the symbol \( {\rho }_{B} \) ; for any homology class \( u \in {H}_{i}\left( {M, M - A}\right) ,{\rho }_{B}\left( u\right) \) can be thought of as the "restriction" of \( u \) to a homology group associated with \( B \) . Let \( M \) be an \( n \) -dimensional manifold with orientation \( \mu \) ; it would be advantageous if there were a global homology class \( {\mu }_{M} \in {H}_{n}\left( {M,\mathbf{Z}}\right) \) such that for any \( x \in M \) , \[ {\mu }_{x} = {\rho }_{x}\left( {\mu }_{M}\right) \] Unfortunately, this can not be true if \( M \) is noncompact, as the reader can easily verify by using Proposition III.6.1. The closest possible approximation to such a result is the following theorem. It will play a crucial role in the statement and proof of the Poincaré duality theorem: Theorem 2.1. Let \( M \) be an \( n \) -manifold with orientation \( \mu \) . Then for each compact set \( K \subset M \) there exists a unique homology class \( {\mu }_{K} \in {H}_{n}\left( {M, M - K}\right) \) such that \[ {\rho }_{x}\left( {\mu }_{K}\right) = {\mu }_{x} \] for each \( x \in K \) . Note that if \( M \) is a compact manifold, this theorem assures us of the existence of a unique global homology class \( {\mu }_{M} \in {H}_{n}\left( {M,\mathbf{Z}}\right) \) such that for any point \( x \in M \) , \[ {\mu }_{x} = {\rho }_{x}\left( {\mu }_{M}\right) \] Proof. The uniqueness of \( {\mu }_{K} \) is a direct consequence of a more general lemma below (Lemma 2.2). Therefore we will concentrate on the existence proof. Obviously, if the compact set \( K \) is contained in a sufficiently small neighborhood of some point, the continuity condition in the definition of \( \mu \) assures us of the existence of \( {\mu }_{K} \) . Next, suppose that \( K = {K}_{1} \cup {K}_{2} \), where \( {K}_{1} \) and \( {K}_{2} \) are compact subsets of \( M \), and both \( {\mu }_{{K}_{1}} \) and \( {\mu }_{{K}_{2}} \) are assumed to exist. Then \( \left\{ {M - {K}_{1}, M - {K}_{2}}\right\} \) is an excisive couple, and hence we have a relative Mayer-Vietoris sequence (cf. §VIII.6): \[ {H}_{n + 1}\left( {M, M - {K}_{1} \cap {K}_{2}}\right) \overset{\Delta }{ \rightarrow }{H}_{n}\left( {M, M - K}\right) \] \[ \overset{\varphi }{ \rightarrow }{H}_{n}\left( {M, M - {K}_{1}}\right) \oplus {H}_{n}\left( {M, M - {K}_{2}}\right) \] \[ \overset{\psi }{ \rightarrow }{H}_{n}\left( {M, M - {K}_{1} \cap {K}_{2}}\right) \] Recall that the homomorphisms \( \varphi \) and \( \psi \) are defined by \[ \varphi \left( u\right) = \left( {{\rho }_{{K}_{1}}\left( u\right) ,{\rho }_{{K}_{2}}\left( u\right) }\right) \] \[ \psi \left( {{v}_{1},{v}_{2}}\right) = {\rho }_{{K}_{1} \cap {K}_{2}}\left( {v}_{1}\right) - {\rho }_{{K}_{1} \cap {K}_{2}}\left( {v}_{2}\right) \] for any \( u \in {H}_{n}\left( {M, M - K}\right) ,{v}_{1} \in {H}_{n}\left( {M, M - {K}_{1}}\right) \), and \( {v}_{2} \in {H}_{n}\left( {M, M - {K}_{2}}\right) \) . 
By the uniqueness of \( {\mu }_{{K}_{1} \cap {K}_{2}} \), we see that \[ {\rho }_{{K}_{1} \cap {K}_{2}}\left( {\mu }_{{K}_{1}}\right) = {\rho }_{{K}_{1} \cap {K}_{2}}\left( {\mu }_{{K}_{2}}\right) \] \[ = {\mu }_{{K}_{1} \cap {K}_{2}} \] and hence \[ \psi \left( {{\mu }_{{K}_{1}},{\mu }_{{K}_{2}}}\right) = 0. \] It follows from Lemma 2.2 below that \( {H}_{n + 1}\left( {M, M - \left( {{K}_{1} \cap {K}_{2}}\right) }\right) = 0 \) ; hence by exactness there is a unique homology class \( {\mu }_{K} \in {H}_{n}\left( {M, M - K}\right) \) such that \[ \varphi \left( {\mu }_{K}\right) = \left( {{\mu }_{{K}_{1}},{\mu }_{{K}_{2}}}\right) \] It is readily verified that this homology class \( {\mu }_{\mathrm{K}} \) satisfies the desired condition \( {\rho }_{x}\left( {\mu }_{K}\right) = {\mu }_{x} \) for any \( x \in K \) . Next, assume that \( K = {K}_{1} \cup {K}_{2} \cup \cdots \cup {K}_{r} \), where each \( {K}_{i} \) is a compact subset of \( M \), and \( {\mu }_{{K}_{i}} \) exists. By an obvious induction on \( r \), using what we have just proved, we can conclude that \( {\mu }_{K} \) exists. But any compact subset \( K \) of \( M \) can obviously be expressed as a finite union of subsets \( {K}_{i} \), each of which is sufficiently small so that the corresponding homology class \( {\mu }_{{K}_{i}} \) exists. Hence \( {\mu }_{K} \) exists, as was to be proved. Q.E.D. It remains to state and prove Lemma 2.2. Lemma 2.2. Let \( M \) be an \( n \) -dimensional manifold and \( G \) an abelian group. (a) For any compact set \( K \subset M \) and all \( i > n \) , \[ {H}_{i}\left( {M, M - K;G}\right) = 0. \] (b) If \( u \in {H}_{n}\left( {M, M - K;G}\right) \) and \( {\rho }_{x}\left( u\right) = 0 \) for all \( x \in K \), then \( u = 0 \) . Proof. The method of proof is to start with the case \( M = {R}^{n} \) and then to progress to successively more complicated cases, ending with the general case. Case 1: \( M = {\mathbf{R}}^{n} \) and \( K \) is a compact, convex subset of \( {\mathbf{R}}^{n} \) . To prove this case, choose a large ball \( {E}^{n} \subset {\mathbf{R}}^{n} \) such that \( K \) is contained in the interior of \( {E}^{n} \) . For any \( x \in K \), consider the following commutative diagram: ![26a5d8f2-88cf-4556-8447-3a182179fff0_214_0.jpg](images/26a5d8f2-88cf-4556-8447-3a182179fff0_214_0.jpg) Then it is readily proved that Arrows 1 and 2 are isomorphisms. Hence \( {\rho }_{x} \) is an isomorphism for all \( i \) which suffices to prove the lemma in this case. Case 2: \( K = {K}_{1} \cup {K}_{2} \), where \( K,{K}_{1} \), and \( {K}_{2} \) are compact subsets of \( M \) and it is assumed that the lemma is true for \( {K}_{1},{K}_{2} \), and \( {K}_{1} \cap {K}_{2} \) . In order to prove this case, we will again use the relative Mayer-Vietoris sequence of the triad \( \left( {M;M - {K}_{1}, M - {K}_{2}}\right) \) . The proof of this case is based on the following portion of this Mayer-Vietoris sequence: \[ {H}_{i + 1}\left( {M, M - {K}_{1} \cap {K}_{2}}\right) \overset{\Delta }{ \rightarrow }{H}_{i}\left( {M, M - K}\right) \] \[ \overset{\varphi }{ \rightarrow }{H}_{i}\left( {M, M - {K}_{1}}\right) \oplus {H}_{i}\left( {M, M - {K}_{2
18_Algebra Chapter 0
Definition 5.5
Definition 5.5. Let \( \mathrm{A} \) be an abelian category. An object \( P \) of \( \mathrm{A} \) is projective if the functor \( {\operatorname{Hom}}_{\mathrm{A}}\left( {P,\_ }\right) \) is exact. An object \( Q \) is injective if the functor \( {\operatorname{Hom}}_{\mathrm{A}}\left( {\_, Q}\right) \) is exact. General remarks such as Lemma VIII.6.2 or the comments about splitting of sequences at the end of §VIII.6.1 hold in any abelian category, and a good exercise for the reader is to collect all such information and verify that it does hold in this new context. Also note that, at this level of generality, the parts of the theory dealing with injective objects are a faithful mirror of the parts dealing with projective objects. This is of course not a coincidence: the opposite of an abelian category is again abelian (Exercise 1.10), and projectives in one become injectives in the other. Thus, it is really only necessary to prove statements for, say, projectives in arbitrary abelian categories: the corresponding statements for injectives will automatically follow. This dual state of affairs should not be taken too far: projectives and injectives in a fixed abelian category may display very different features. For example, the characterizations worked out in §VIII.6.2 and §VIII.6.3 use the specific properties of the categories \( R \) -Mod, so they should not be expected to have a simple counterpart in arbitrary abelian categories. I will denote by \( \mathrm{P} \), resp., \( \mathrm{I} \), the full subcategories of \( \mathrm{A} \) determined by the projective, resp., injective, objects. Of course these are not abelian categories in any interesting case. Note that an abelian category may well have no nontrivial projective or injective objects. Example 5.6. The category of finite abelian groups is abelian (surprise, surprise), but contains no nontrivial projective or injective objects (Exercise 5.5). Definition 5.7. An abelian category A has enough projectives if for every object \( A \) in A there exists a projective object \( P \) in A and an epimorphism \( P \rightarrow A \) . The category has enough injectives if for every object \( A \) in \( \mathrm{A} \) there is an injective object \( Q \) in \( \mathrm{A} \) and a monomorphism \( A \rightarrowtail Q \) . These definitions are not new to the reader, since we ran across them in §VIII.6; in particular, we have already observed that, for every commutative ring \( R, R \) -Mod has enough projectives (this is not challenging: free modules and their direct summands are projective, Proposition VIII.6.4) and enough injectives (this is challenging; we verified it in Corollary VIII.6.12). Example 5.6 shows that we should not expect an abelian category A to have either property. An important case in which one can show that there are enough injectives is the category of sheaves of abelian groups over a topological space; this is a key step in the definition of sheaf cohomology as a 'derived functor'. In general, categories of sheaves do not have enough projectives. On the other hand, the category of finitely generated abelian groups has enough projectives but not enough (indeed, no nontrivial) injectives. So it goes. 5.4. Homotopy equivalences vs. quasi-isomorphisms in \( \mathrm{K}\left( \mathrm{A}\right) \) . As I already mentioned, we will be interested in certain subcategories of \( K\left( A\right) \) determined by complexes of projective or injective objects of \( A \) . Definition 5.8. 
I will denote by \( {\mathrm{K}}^{ - }\left( \mathrm{P}\right) \) the full subcategory of \( \mathrm{K}\left( \mathrm{A}\right) \) consisting of bounded-above complexes of projective objects of \( \mathrm{A} \), and I will denote by \( {\mathrm{K}}^{ + }\left( \mathrm{I}\right) \) the full subcategory of \( \mathrm{K}\left( \mathrm{A}\right) \) consisting of bounded-below complexes of injective objects of A. The main result of the rest of this section is the following: Theorem 5.9. Let \( \mathrm{A} \) be an abelian category, and let \( {\alpha }^{ \bullet } : {P}_{0}^{ \bullet } \rightarrow {P}_{1}^{ \bullet } \), resp., \( {\alpha }^{ \bullet } \) : \( {Q}_{0}^{ \bullet } \rightarrow {Q}_{1}^{ \bullet } \), be a quasi-isomorphism between bounded-above complexes of projectives, resp., bounded-below complexes of injectives, in \( \mathrm{A} \) . Then \( {\alpha }^{ \bullet } \) is a homotopy equivalence. Corollary 5.10. In \( {\mathrm{K}}^{ - }\left( \mathrm{P}\right) \) and \( {\mathrm{K}}^{ + }\left( \mathrm{I}\right) \) , (homotopy classes of) quasi-isomorphisms are isomorphisms. That is, all quasi-isomorphisms between bounded complexes of projectives or injectives are ’already’ inverted in \( {\mathrm{K}}^{ - }\left( \mathrm{P}\right) ,{\mathrm{K}}^{ + }\left( \mathrm{I}\right) \) . Thus, these categories are very close to the corresponding derived categories. In fact, we will see that if A has enough projectives, then \( {\mathrm{K}}^{ - }\left( \mathrm{P}\right) \) ’suffices’ to describe \( {\mathrm{D}}^{ - }\left( \mathrm{A}\right) \) (in a sense that will be made more precise); similarly, \( {\mathrm{K}}^{ + }\left( \mathrm{I}\right) \) acts as a concrete realization of \( {\mathrm{D}}^{ + }\left( \mathrm{A}\right) \) if A has enough injectives. The proof of Theorem 5.9 will require a few preliminaries, which further clarify the role played by complexes of projective and injective objects with regard to homotopies. A complicated way of saying that a complex \( {N}^{ \bullet } \) in \( \mathrm{C}\left( \mathrm{A}\right) \) is exact is to assert that the identity map \( {\operatorname{id}}_{{N}^{ \bullet }} \) and the trivial map 0 induce the same morphism in cohomology, as they would if they were homotopic to each other. It is however easy to construct examples of exact complexes for which the identity is not homotopic to 0 . As the reader will verify (Exercise 5.11), this has to do with whether the complex splits or not; in general, a complex \( {N}^{ \bullet } \) is said to be ’split exact’ if \( {\mathrm{{id}}}_{{N}^{ \bullet }} \) is homotopic to 0 . Keeping in mind that projective or injective objects 'cause' sequences to split, it may not be too surprising that for bounded exact complexes consisting of projective or injective objects the identity does turn out to be homotopic to 0 . To verify this, we study special conditions that ensure that morphisms are homotopic to 0 . Lemma 5.11. Let \( {P}^{ \bullet } \) be a complex of projective objects of an abelian category A such that \( {P}^{i} = 0 \) for \( i > 0 \), and let \( {L}^{ \bullet } \) be a complex in \( \mathrm{C}\left( \mathrm{A}\right) \) such that \( {H}^{i}\left( {L}^{ \bullet }\right) = 0 \) for \( i < 0 \) . Let \( {\alpha }^{ \bullet } : {P}^{ \bullet } \rightarrow {L}^{ \bullet } \) be a morphism inducing the zero-morphism in cohomology. Then \( {\alpha }^{ \bullet } \) is homotopic to 0 . 
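Before the proof, a minimal concrete instance of the split-exactness phenomenon just described may be useful (the complex and the homotopy here are chosen by hand). The exact complex of free, hence projective, \( \mathbb{Z} \) -modules
\[ 0 \rightarrow \mathbb{Z}\overset{{d}^{-1}}{ \rightarrow }{\mathbb{Z}}^{2}\overset{{d}^{0}}{ \rightarrow }\mathbb{Z} \rightarrow 0,\;{d}^{-1}\left( x\right) = \left( {x,0}\right) ,\;{d}^{0}\left( {a, b}\right) = b, \]
is split exact: the maps \( {h}^{0}\left( {a, b}\right) = a \) and \( {h}^{1}\left( x\right) = \left( {0, x}\right) \) satisfy \( \mathrm{{id}} = d \circ h + h \circ d \) in every degree, so the identity is homotopic to 0, which is the phenomenon that Lemma 5.11 makes systematic. A quick matrix verification (Python/NumPy, purely illustrative):

```python
import numpy as np

d_m1 = np.array([[1], [0]])   # d^{-1}: Z -> Z^2, x |-> (x, 0)
d_0  = np.array([[0, 1]])     # d^0 : Z^2 -> Z, (a, b) |-> b
h_0  = np.array([[1, 0]])     # h^0 : Z^2 -> Z, (a, b) |-> a
h_1  = np.array([[0], [1]])   # h^1 : Z -> Z^2, x |-> (0, x)

# id = d o h + h o d, checked degree by degree:
assert np.array_equal(d_m1 @ h_0 + h_1 @ d_0, np.eye(2, dtype=int))  # degree 0
assert np.array_equal(h_0 @ d_m1, np.eye(1, dtype=int))              # degree -1
assert np.array_equal(d_0 @ h_1, np.eye(1, dtype=int))               # degree +1
print("identity is homotopic to 0 on this split exact complex")
```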
The reader will provide the 'injective' version of this statement, dealing with morphisms from a complex which is exact in positive degree to a complex of injec-tives \( {Q}^{ \bullet } \) with \( {Q}^{i} = 0 \) for \( i < 0 \) . Proof. We have to construct morphisms \( {h}^{i} : {P}^{i} \rightarrow {L}^{i - 1} \) : ![23387543-548b-40c2-8595-200756212a0f_644_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_644_0.jpg) such that (*) \[ {\alpha }^{i} = {d}_{{L}^{ \bullet }}^{i - 1} \circ {h}^{i} + {h}^{i + 1} \circ {d}_{{P}^{ \bullet }}^{i}. \] Of course \( {h}^{i} = 0 \) necessarily for \( i > 0 \) . For \( i = 0 \), use the fact that the morphism induced by \( {\alpha }^{ \bullet } \) on cohomology is 0 ; this says that \( {\alpha }^{0} \) factors through the image of \( {d}_{{L}^{ \bullet }}^{-1} \) : ![23387543-548b-40c2-8595-200756212a0f_645_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_645_0.jpg) Since \( {P}^{0} \) is projective, there exists a morphism \( {h}^{0} : {P}^{0} \rightarrow {L}^{-1} \) such that \( {\alpha }^{0} = \) \( {d}_{{L}^{ \bullet }}^{-1} \circ {h}^{0} \), as needed. To define \( {h}^{i - 1} \) for \( i \leq 0 \), proceed inductively and assume that \( {h}^{i} \) and \( {h}^{i + 1} \) satisfying (*) have already been constructed. Note that by (*) (and the fact that \( {P}^{ \bullet } \) is a complex) \[ {d}_{{L}^{ \bullet }}^{i - 1} \circ {h}^{i} \circ {d}_{{P}^{ \bullet }}^{i - 1} = \left( {{\alpha }^{i} - {h}^{i + 1} \circ {d}_{{P}^{ \bullet }}^{i}}\right) \circ {d}_{{P}^{ \bullet }}^{i - 1} = {\alpha }^{i} \circ {d}_{{P}^{ \bullet }}^{i - 1} - {h}^{i + 1} \circ 0 = {\alpha }^{i} \circ {d}_{{P}^{ \bullet }}^{i - 1} : \] ![23387543-548b-40c2-8595-200756212a0f_645_1.jpg](images/23387543-548b-40c2-8595-200756212a0f_645_1.jpg) Therefore \[ {d}_{{L}^{ \bullet }}^{i - 1} \circ \left( {{\alpha }^{i - 1} - {h}^{i} \circ {d}_{{P}^{ \bullet }}^{i - 1}}\right) = {d}_{{L}^{ \bullet }}^{i - 1} \circ {\alpha }^{i - 1} - {\alpha }^{i} \circ {d}_{{P}^{ \bullet }}^{i - 1} = 0, \] since \( {\alpha }^{ \bullet } \) is a morphism of cochain complexes. This tells us that \( {\alpha }^{i - 1} - {h}^{i} \circ {d}_{{P}^{ \bullet }}^{i - 1} \) factors through \( \ker {d}_{{L}^{ \bullet }}^{i - 1} \) . Since \( {L}^{ \bullet } \) is exact at \( {L}^{i - 1} \) for \( i \leq 0 \) , \( \ker {d}_{{L}^{ \bullet }}^{i - 1} = \operatorname{im}{d}_{{L}^{ \bullet }}^{i - 2} \) . Therefore, we again have a factorization ![23387543-548b-40c2-8595-200756212a0f_645_2.jpg](images/23387543-548b-40c2-8595-200756212a0f_645_2.jpg) Now \( {P}^{i - 1} \) is projective; therefore there exists \( {h}^{i - 1} : {P}^{i - 1} \rightarrow {L}^{i - 2} \) such that \[ {\alpha }^{i - 1} - {h}^{i} \circ {d}_{{P}^{ \bullet }}^{i - 1} = {d}_{{L}^{ \bullet }}^{i - 2} \circ {h}^{i - 1}, \] which is precisely what we need for the induction step. Corollary 5.12. Let \( {P}^{ \bullet } \) be a bounded-above cochain complex of projectives of an abelian category \( \mathrm{A} \), and
1347_[陈亚浙&吴兰成] Second Order Elliptic Equations and Elliptic Systems
Definition 1.1
Definition 1.1. A function \( u \in {W}_{loc}^{1,2}\left( \Omega \right) \) is said to be a weak solution of (1.1), if for any \( \varphi \in {C}_{0}^{1}\left( \Omega \right) \) , (1.5) \[ Q\left\lbrack {u,\varphi }\right\rbrack \triangleq {\int }_{\Omega }{a}_{i}\left( {x, u,{Du}}\right) {D}_{i}{\varphi dx} + {\int }_{\Omega }b\left( {x, u,{Du}}\right) {\varphi dx} = 0. \] A function \( u \in {W}_{loc}^{1,2}\left( \Omega \right) \) is said to be a weak subsolution (weak supersolution), if (1.6) \[ Q\left\lbrack {u,\varphi }\right\rbrack \leq 0\;\left( { \geq 0}\right) \;\forall \varphi \in {C}_{0}^{1}\left( \Omega \right) ,\;\varphi \geq 0. \] The structure conditions (1.3) and (1.4) guarantee that the integrals in (1.5) are well defined. To derive the \( {L}^{\infty } \) estimate, we also assume the following structure condition: (1.7) \[ - b\left( {x, z,\eta }\right) \operatorname{sign}z \leq \Lambda \left\lbrack {\left| \eta \right| + f\left( x\right) }\right\rbrack ,\;\forall \left( {x, z,\eta }\right) \in \Omega \times \mathbb{R} \times {\mathbb{R}}^{n}. \] Theorem 1.1. Suppose that the structure conditions (1.2), (1.3), (1.4) and (1.7) hold, \( {F}_{0} \triangleq \parallel g{\parallel }_{{L}^{q}} + \parallel f{\parallel }_{{L}^{q * }} < \infty \) for some \( q > n \), where \( {q}_{ * } = {nq}/\left( {n + q}\right) \) . Then for a weak subsolution \( u \in {W}^{1,2}\left( \Omega \right) \) of (1.1), \[ \underset{\Omega }{\operatorname{ess}\sup }u \leq \mathop{\sup }\limits_{{\partial \Omega }}{u}^{ + } + C{F}_{0} \] where \( C \) depends only on \( n,\Lambda \) and \( \operatorname{diam}\Omega \) . Proof. Assume that \( \mathop{\sup }\limits_{{\partial \Omega }}{u}^{ + } < \infty \) . For \( k \geq \mathop{\sup }\limits_{{\partial \Omega }}{u}^{ + } \), we choose the test function \( \varphi = {\left( u - k\right) }^{ + } \) in (1.6), and we set \( v = {\left( u - k\right) }^{ + }, A\left( k\right) = \{ x \in \Omega \mid u\left( x\right) > k\} \) . Using the structure conditions (1.2) and (1.7) in (1.6), we obtain \[ {\int }_{\Omega }{\left| Dv\right| }^{2}{dx} \leq {\int }_{A\left( k\right) }{g}^{2}{dx} + \Lambda {\int }_{A\left( k\right) }\left( {\left| {Dv}\right| + f}\right) {vdx} \] \[ \leq \frac{1}{4}{\int }_{\Omega }{\left| Dv\right| }^{2}{dx} + C{\int }_{\Omega }{\left| v\right| }^{2}{dx} \] \[ + \parallel g{\parallel }_{{L}^{q}}^{2}{\left| A\left( k\right) \right| }^{1 - 2/q} + C\parallel v{\parallel }_{{L}^{{2}^{ * }}}\parallel f{\parallel }_{{L}^{q * }}{\left| A\left( k\right) \right| }^{\left( {1/2}\right) - \left( {1/q}\right) }, \] where \( {2}^{ * } = {2n}/\left( {n - 2}\right) \) . We shall discuss only the case \( n \geq 3 \) . By the Sobolev embedding theorem and Cauchy's inequality, \[ {\int }_{\Omega }{\left| Dv\right| }^{2}{dx} \leq C{\int }_{\Omega }{\left| v\right| }^{2}{dx} + C\left\lbrack {\parallel g{\parallel }_{{L}^{q}}^{2} + \parallel f{\parallel }_{{L}^{q * }}^{2}}\right\rbrack {\left| A\left( k\right) \right| }^{1 - \left( {2/q}\right) }. \] This estimate is similar to (4.14) of Chapter 1. Analogously to (4.18) of Chapter 1, we derive the bound \[ \underset{\Omega }{\operatorname{ess}\sup }u \leq \mathop{\sup }\limits_{{\partial \Omega }}{u}^{ + } + C\parallel u{\parallel }_{{L}^{2}\left( \Omega \right) } + C{F}_{0}{\left| \Omega \right| }^{\left( {1/n}\right) - \left( {1/q}\right) }. 
\] We can now follow the method of Step 2 in the proof of Theorem 4.2 in Chapter 1 to take care of the term \( C\parallel u{\parallel }_{{L}^{2}\left( \Omega \right) } \) on the right-hand side. Remark 1.1. If we know a priori that the weak solution \( u \in C\left( \bar{\Omega }\right) \cap {C}^{1}\left( \Omega \right) \) , then the integrals in (1.5) are well defined without structure conditions (1.3) and (1.4); and therefore conditions (1.3) and (1.4) are no longer needed in Theorem 1.1. In this case, the above theorem is valid with only the structure conditions (1.2) and (1.7). Remark 1.2. Let \( u \in C\left( \bar{\Omega }\right) \cap {C}^{1}\left( \Omega \right) \) . We can improve the structure conditions (1.2) and (1.7) as follows: for any \( k > 0 \) , \( {\left( {1.2}\right) }^{\prime } \) \[ {a}_{i}\left( {x, z,\eta }\right) {\eta }_{i} \geq {\left| \eta \right| }^{2} - {\left\lbrack \mu {\left( z - k\right) }^{ + }\right\rbrack }^{2} - {g}^{2}\left( x\right) \] \( {\left( {1.7}\right) }^{\prime } \) \[ - b\left( {x, z,\eta }\right) \operatorname{sign}z \leq \Lambda \left\lbrack {\left| \eta \right| + \mu {\left( z - k\right) }^{ + } + f\left( x\right) }\right\rbrack . \] The theorem is still valid. Remark 1.3. If we change the structure condition (1.2) to \[ {a}_{i}\left( {x, z,\eta }\right) {\eta }_{i} \geq {\left| \eta \right| }^{\tau } - {g}^{\tau }\left( x\right) \] where \( \tau > 1 \), then we can modify the remaining structure conditions and the definition of a weak solution so that a similar theorem holds. The proofs of the remarks are left as exercises for the reader. ## 2. Hölder estimates for bounded weak solutions The methods in this section are similar to those in \( §1 \) of Chapter 4. However, quasilinear nonhomogeneous equations are discussed here. In the structure conditions (1.3)-(1.4), the growth order of \( b \) in \( \eta \) is one order higher than that of \( {a}_{i} \) . Such a growth order condition is called a natural growth order condition (or called a natural growth condition, for simplicity). Examples of nonexistence of a classical solution can be constructed for the Dirichlet problem if such a natural growth order condition is not satisfied. Theorem 2.1. Let the structure conditions (1.2), (1.3) and (1.4) be in force. Suppose that \( u \in {W}^{1,2}\left( {B}_{R}\right) \) is a bounded weak subsolution of (1.1) on \( {B}_{R} \) and that (2.1) \[ {F}_{0} = {R}^{1 - n/q}\parallel g{\parallel }_{{L}^{q}} + {R}^{2 - {2n}/q}{\begin{Vmatrix}f + {g}^{2}\end{Vmatrix}}_{{L}^{q/2}} < \infty \] for some \( q > n \) . Then for any \( p > 0 \) and \( 0 < \theta < 1 \) , (2.2) \[ \underset{{B}_{\theta R}}{\operatorname{ess}\sup }\widetilde{u} \leq C{\left\lbrack {\int }_{{B}_{R}}{\widetilde{u}}^{p}dx\right\rbrack }^{1/p} \] where \( \widetilde{u} = {u}^{ + } + {F}_{0} \), and \( C \) depends only on \( n,\Lambda, q, p,{\left( 1 - \theta \right) }^{-1} \) and \( \parallel u{\parallel }_{{L}^{\infty }} \) . Proof. First, we assume that \( 0 < R \leq 1, p \geq 2 \) . We choose the test function (2.3) \[ \varphi = {\zeta }^{2}{\widetilde{u}}^{{2p} - 1}{e}^{\Lambda u} \] where \( \zeta \left( x\right) \) is a cutoff function on \( {B}_{R} \) . 
It follows from (1.6) and the structure condition (1.4) that (2.4) \[ {\int }_{{B}_{R}}{a}_{i}\left( {x, u,{Du}}\right) {D}_{i}\left( {{\zeta }^{2}{\widetilde{u}}^{{2p} - 1}{e}^{\Lambda u}}\right) {dx} \] \[ \leq \Lambda {\int }_{{B}_{R}}\left( {{\left| Du\right| }^{2} + f}\right) {\zeta }^{2}{\widetilde{u}}^{{2p} - 1}{e}^{\Lambda u}{dx}. \] Using the structure conditions (1.2)-(1.3), we derive \[ \left( {{2p} - 1}\right) {\int }_{{B}_{R}}{\zeta }^{2}{\widetilde{u}}^{{2p} - 2}{e}^{\Lambda u}{\left| Du\right| }^{2}{dx} \] \[ \leq \left( {{2p} - 1}\right) {\int }_{{B}_{R}}{\zeta }^{2}{\widetilde{u}}^{{2p} - 2}{e}^{\Lambda u}{\left| g\right| }^{2}{dx} + \Lambda {\int }_{{B}_{R}}\left( {{g}^{2} + f}\right) {\zeta }^{2}{\widetilde{u}}^{{2p} - 1}{e}^{\Lambda u}{dx} \] \[ + {2\Lambda }{\int }_{{B}_{R}}\left( {\left| {Du}\right| + g}\right) \zeta \left| {D\zeta }\right| {\widetilde{u}}^{{2p} - 1}{e}^{\Lambda u}{dx}. \] Since \( \widetilde{u} \geq {F}_{0} \) and \( \parallel u{\parallel }_{{L}^{\infty }} < \infty \) , \[ \left( {{2p} - 1}\right) {\int }_{{B}_{R}}{\zeta }^{2}{\widetilde{u}}^{{2p} - 2}{e}^{\Lambda u}{\left| Du\right| }^{2}{dx} \] \[ \leq {Cp}{\int }_{{B}_{R}}\left\lbrack {\frac{{g}^{2}}{{F}_{0}^{2}} + \frac{{g}^{2} + f}{{F}_{0}}}\right\rbrack {\zeta }^{2}{\widetilde{u}}^{2p}{dx} + \left( {{2p} - 1}\right) \varepsilon {\int }_{{B}_{R}}{\zeta }^{2}{\widetilde{u}}^{{2p} - 2}{\left| Du\right| }^{2}{dx} \] \[ + \frac{{C}_{\varepsilon }}{{2p} - 1}{\int }_{{B}_{R}}{\left| D\zeta \right| }^{2}{\widetilde{u}}^{2p}{dx} \] where \( C \) depends only on \( n,\Lambda \) and \( \parallel u{\parallel }_{{L}^{\infty }} \) . Let \( v = {\widetilde{u}}^{p} \) and \( \varepsilon = 1/2 \) . Then (2.5) \[ \frac{1}{p}{\int }_{{B}_{R}}{\zeta }^{2}{\left| Dv\right| }^{2}{dx} \leq {Cp}{\int }_{{B}_{R}}h\left( x\right) {\zeta }^{2}{\widetilde{u}}^{2p}{dx} + \frac{C}{p}{\int }_{{B}_{R}}{\left| D\zeta \right| }^{2}{v}^{2}{dx} \] where \[ h\left( x\right) = \frac{{g}^{2} + f}{{F}_{0}} + \frac{{g}^{2}}{{F}_{0}^{2}},\;\parallel h{\parallel }_{{L}^{q/2}} \leq 2. \] By Hölder's inequality and the Sobolev embedding theorem, \[ {\int }_{{B}_{R}}h{\zeta }^{2}{\widetilde{u}}^{2p}{dx} \leq \parallel h{\parallel }_{{L}^{q/2}}{\begin{Vmatrix}{\zeta }^{2}{v}^{2}\end{Vmatrix}}_{{L}^{q/\left( {q - 2}\right) }} \] (2.6) \[ \leq \varepsilon \parallel {\zeta v}{\parallel }_{{L}^{{2}^{ * }}}^{2} + C{\varepsilon }^{-n/\left( {q - n}\right) }\parallel {\zeta v}{\parallel }_{{L}^{2}}^{2} \] \[ \leq \;{C\varepsilon }\parallel D\left( {\zeta v}\right) {\parallel }_{{L}^{2}}^{2} + C{\varepsilon }^{-n/\left( {q - n}\right) }\parallel {\zeta v}{\parallel }_{{L}^{2}}^{2}. \] We take \( \varepsilon = \frac{1}{2C}{p}^{-2} \) in the above inequality and substitute (2.6) into (2.5). Then \[ {\int }_{{B}_{R}}{\left| D\left( \zeta v\right) \right| }^{2}{dx} \leq C\left( {{p}^{2 + {2n}/\left( {q - n}\right) } + \parallel \nabla \zeta {\parallel }_{{L}^{\infty }}^{2}}\right) {\int }_{{B}_{R}}{v}^{2}{dx}. \] We can now follow the standard Moser iteration to derive the desired estimate as in Lemma 1.2 of Chapter 4. Theorem 2.2. Let the structure conditions (1.2), (1.3) and (1.4) be in force. Suppose that \( u \in {W}^{1,2}\left( {B}_{\sigma R}\right) \left( {\sigma > 1}\right) \) is a bounded weak supersolution of (1.1
1167_(GTM73)Algebra
Definition 7.9
Definition 7.9. A group \( \mathrm{G} \) is said to be solvable if \( {\mathrm{G}}^{\left( \mathrm{n}\right) } = \langle \mathrm{e}\rangle \) for some \( \mathrm{n} \) . Every abelian group is trivially solvable. More generally, we have Proposition 7.10. Every nilpotent group is solvable. PROOF. Since by the definition of \( {C}_{i}\left( G\right) {C}_{i}\left( G\right) /{C}_{i - 1}\left( G\right) = C\left( {G/{C}_{i - 1}\left( G\right) }\right) \) is abelian, \( {C}_{i}{\left( G\right) }^{\prime } < {C}_{i - 1}\left( G\right) \) for all \( i > 1 \) and \( {C}_{1}{\left( G\right) }^{\prime } = C{\left( G\right) }^{\prime } = \langle e\rangle \) . For some \( n \) , \( G = {C}_{n}\left( G\right) \) . Therefore, \( C\left( {G/{C}_{n - 1}\left( G\right) }\right) = {C}_{n}\left( G\right) /{C}_{n - 1}\left( G\right) = G/{C}_{n - 1}\left( G\right) \) is abelian and hence \( {G}^{\left( 1\right) } = {G}^{\prime } < {C}_{n - 1}\left( G\right) \) . Therefore, \( {G}^{\left( 2\right) } = {G}^{{\left( 1\right) }^{\prime }} < {C}_{n - 1}{\left( G\right) }^{\prime } < {C}_{n - 2}\left( G\right) \) ; similarly \( {G}^{\left( 3\right) } < {C}_{n - 2}{\left( G\right) }^{\prime } < {C}_{n - 3}\left( G\right) ;\ldots ,{G}^{\left( n - 1\right) } < {C}_{2}{\left( G\right) }^{\prime } < {C}_{1}\left( G\right) ;{G}^{\left( n\right) } < {C}_{1}{\left( G\right) }^{\prime } \) \( = \langle e\rangle \) . Hence \( G \) is solvable. Theorem 7.11. (i) Every subgroup and every homomorphic image of a solvable group is solvable. (ii) If \( \mathrm{N} \) is a normal subgroup of a group \( \mathrm{G} \) such that \( \mathrm{N} \) and \( \mathrm{G}/\mathrm{N} \) are solvable, then G is solvable. SKETCH OF PROOF. (i) If \( f : G \rightarrow H \) is a homomorphism [epimorphism], verify that \( f\left( {G}^{\left( i\right) }\right) < {H}^{\left( i\right) }\left\lbrack {f\left( {G}^{\left( i\right) }\right) = {H}^{\left( i\right) }}\right\rbrack \) for all \( i \) . Suppose \( f \) is an epimorphism, and \( G \) is solvable. Then for some \( n,\langle e\rangle = f\left( e\right) = f\left( {G}^{\left( n\right) }\right) = {H}^{\left( n\right) } \), whence \( H \) is solvable. The proof for a subgroup is similar. (ii) Let \( f : G \rightarrow G/N \) be the canonical epimorphism. Since \( G/N \) is solvable, for some \( {nf}\left( {G}^{\left( n\right) }\right) = {\left( G/N\right) }^{\left( n\right) } = \langle e\rangle \) . Hence \( {G}^{\left( n\right) } < \operatorname{Ker}f = N \) . Since \( {G}^{\left( n\right) } \) is solvable by (i), there exists \( k \in {\mathbf{N}}^{ * } \) such that \( {G}^{\left( n + k\right) } = {\left( {G}^{\left( n\right) }\right) }^{\left( k\right) } = \langle e\rangle \) . Therefore, \( G \) is solvable. Corollary 7.12. If \( \mathrm{n} \geq 5 \), then the symmetric group \( {\mathrm{S}}_{\mathrm{n}} \) is not solvable. PROOF. If \( {S}_{n} \) were solvable, then \( {A}_{n} \) would be solvable. Since \( {A}_{n} \) is nonabelian, \( {A}_{n}{}^{\prime } \neq \left( 1\right) \) . Since \( {A}_{n}{}^{\prime } \) is normal in \( {A}_{n} \) (Theorem 7.8) and \( {A}_{n} \) is simple (Theorem I.6.10), we must have \( {A}_{n}{}^{\prime } = {A}_{n} \) . Therefore \( {A}_{n}{}^{\left( i\right) } = {A}_{n} \neq \left( 1\right) \) for all \( i \geq 1 \), whence \( {A}_{n} \) is not solvable. NOTE. The remainder of this section is not needed in the sequel. 
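For experimenting with Definition 7.9 on small examples, the derived series can be computed by brute force: repeatedly form the set of commutators and the subgroup it generates. The sketch below (Python; the choice \( n = 4 \) is only for illustration) prints the orders \( {24},{12},4,1 \), exhibiting the series \( {S}_{4} > {A}_{4} > V > \langle e\rangle \) and hence the solvability of \( {S}_{4} \) ; with \( n = 5 \) the series stabilizes at \( {A}_{5} \) of order 60, in accordance with Corollary 7.12.

```python
from itertools import permutations

def compose(p, q):                       # (p o q)(i) = p(q(i)); permutations as tuples
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated(gens, identity):
    """Subgroup generated by gens inside a finite group (naive closure)."""
    group = {identity}
    changed = True
    while changed:
        changed = False
        for g in list(group):
            for s in gens:
                x = compose(g, s)
                if x not in group:
                    group.add(x)
                    changed = True
    return group

def derived_subgroup(G, identity):
    commutators = {compose(compose(a, b), compose(inverse(a), inverse(b)))
                   for a in G for b in G}
    return generated(commutators, identity)

n = 4                                    # try n = 5 to see the series stabilize at A_5
identity = tuple(range(n))
G = set(permutations(range(n)))          # the symmetric group S_n
series = [G]
while True:
    H = derived_subgroup(series[-1], identity)
    series.append(H)
    if len(H) in (1, len(series[-2])):   # reached <e>, or the series has stabilized
        break

print([len(H) for H in series])          # for n = 4: [24, 12, 4, 1], so S_4 is solvable
```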
In order to prove a generalization of the Sylow theorems for finite solvable groups (as mentioned in the first paragraph of this section) we need some definitions and a lemma. A subgroup \( H \) of a group \( G \) is said to be characteristic [resp. fully invariant] if \( f\left( H\right) < H \) for every automorphism [resp. endomorphism] \( f : G \rightarrow G \) . Clearly every fully invariant subgroup is characteristic and every characteristic subgroup is normal (since conjugation is an automorphism). A minimal normal subgroup of a group \( G \) is a nontrivial normal subgroup that contains no proper subgroup which is normal in \( G \) . Lemma 7.13. Let \( \mathrm{N} \) be a normal subgroup of a finite group \( \mathrm{G} \) and \( \mathrm{H} \) any subgroup of \( \mathrm{G} \) . (i) If \( \mathrm{H} \) is a characteristic subgroup of \( \mathrm{N} \), then \( \mathrm{H} \) is normal in \( \mathrm{G} \) . (ii) Every normal Sylow p-subgroup of \( \mathbf{G} \) is fully invariant. (iii) If \( \mathrm{G} \) is solvable and \( \mathrm{N} \) is a minimal normal subgroup, then \( \mathrm{N} \) is an abelian \( \mathrm{p} \) - group for some prime \( \mathrm{p} \) . PROOF. (i) Since \( {aN}{a}^{-1} = N \) for all \( {a\epsilon G} \), conjugation by \( a \) is an automorphism of \( N \) . Since \( H \) is characteristic in \( N,{aH}{a}^{-1} < H \) for all \( {a\epsilon G} \) . Hence \( H \) is normal in \( G \) by Theorem I.5.1. (ii) is an exercise. (iii) It is easy to see that \( {N}^{\prime } \) is fully invariant in \( N \), whence \( {N}^{\prime } \) is normal in \( G \) by (i). Since \( N \) is a minimal normal subgroup, either \( {N}^{\prime } = \langle e\rangle \) or \( {N}^{\prime } = N \) . Since \( N \) is solvable (Theorem 7.11), \( {N}^{\prime } \neq N \) . Hence \( {N}^{\prime } = \langle e\rangle \) and \( N \) is a nontrivial abelian group. Let \( P \) be a nontrivial Sylow \( p \) -subgroup of \( N \) for some prime \( p \) . Since \( N \) is abelian, \( P \) is normal in \( N \) and hence fully invariant in \( N \) by (ii). Consequently \( P \) is normal in \( G \) by (i). Since \( N \) is minimal and \( P \) nontrivial we must have \( P = N \) . Proposition 7.14. (P. Hall) Let \( \mathrm{G} \) be a finite solvable group of order \( \mathrm{{mn}} \), with \( \left( {\mathrm{m},\mathrm{n}}\right) = 1 \) . Then (i) \( \mathrm{G} \) contains a subgroup of order \( \mathrm{m} \) ; (ii) any two subgroups of \( \mathbf{G} \) of order \( \mathbf{m} \) are conjugate; (iii) any subgroup of \( \mathrm{G} \) of order \( \mathrm{k} \), where \( \mathrm{k} \mid \mathrm{m} \), is contained in a subgroup of order \( \mathrm{m} \) . REMARKS. If \( m \) is a prime power, this theorem merely restates several results contained in the Sylow theorems. P. Hall has also proved the converse of (i): if \( G \) is a finite group such that whenever \( \left| G\right| = {mn} \) with \( \left( {m, n}\right) = 1, G \) has a subgroup of order \( m \), then \( G \) is solvable. The proof is beyond the scope of this book (see M. Hall [15; p. 143]). PROOF OF 7.14. The proof proceeds by induction on \( \left| G\right| \), the orders \( \leq 5 \) being trivial. There are two cases. CASE 1. There is a proper normal subgroup \( H \) of \( G \) whose order is not divisible by \( n \) . 
(i) \( \left| H\right| = {m}_{1}{n}_{1} \), where \( {m}_{1}\left| {m,{n}_{1}}\right| n \), and \( {n}_{1} < n.G/H \) is a solvable group of order \( \left( {m/{m}_{1}}\right) \left( {n/{n}_{1}}\right) < {mn} \), with \( \left( {m/{m}_{1}, n/{n}_{1}}\right) = 1 \) . Therefore by induction \( G/H \) contains a subgroup \( A/H \) of order \( \left( {m/{m}_{1}}\right) \) (where \( A \) is a subgroup of \( G \) -see Corollary I.5.12). Then \( \left| A\right| = \left| H\right| \left\lbrack {A : H}\right\rbrack = \left( {{m}_{1}{n}_{1}}\right) \left( {m/{m}_{1}}\right) = m{n}_{1} < {mn} \) . \( A \) is solvable (Theorem 7.11) and by induction contains a subgroup of order \( m \) . (ii) Suppose \( B, C \) are subgroups of \( G \) of order \( m \) . Since \( H \) is normal in \( G,{HB} \) is a subgroup (Theorem I.5.3), whose order \( k \) necessarily divides \( \left| G\right| = {mn} \) . Since \( k = \left| {HB}\right| = \left| H\right| \left| B\right| /\left| {H \cap B}\right| = {m}_{1}{n}_{1}m/\left| {H \cap B}\right| \), we have \( k\left| {H \cap B}\right| = {m}_{1}{n}_{1}m \) , whence \( k \mid {m}_{1}{n}_{1}m \) . Since \( \left( {{m}_{1}, n}\right) = 1 \), there are integers \( x, y \) such that \( {m}_{1}x + {ny} = 1 \) , and hence \( m{n}_{1}{m}_{1}x + m{n}_{1}{ny} = m{n}_{1} \) . Consequently \( k \mid m{n}_{1} \) . By Lagrange’s Theorem I.4.6 \( m = \left| B\right| \) and \( {m}_{1}{n}_{1} = \left| H\right| \) divide \( k \) . Thus \( \left( {m, n}\right) = 1 \) implies \( m{n}_{1} \mid k \) . Therefore \( k = m{n}_{1} \) ; similarly \( \left| {HC}\right| = m{n}_{1} \) . Thus \( {HB}/H \) and \( {HC}/H \) are subgroups of \( G/H \) of order \( m/{m}_{1} \) . By induction they are conjugate: for some \( \bar{x} \in G/H \) (where \( \bar{x} \) is the coset of \( x \in G),\bar{x}\left( {{HB}/H}\right) {\bar{x}}^{-1} = {HC}/H \) . It follows that \( {xHB}{x}^{-1} = {HC} \) . Consequently \( {xB}{x}^{-1} \) and \( C \) are subgroups of \( {HC} \) of order \( m \) and are therefore conjugate in \( {HC} \) by induction. Hence \( B \) and \( C \) are conjugate in \( G \) . (iii) If a subgroup \( K \) of \( G \) has order \( k \) dividing \( m \), then \( {HK}/H \cong K/H \cap K \) has order dividing \( k \) . Since \( {HK}/H \) is a subgroup of \( G/H \), its order also divides \( \left| {G/H}\right| \) \( = \left( {m/{m}_{1}}\right) \left( {n/{n}_{1}}\right) .\left( {k, n}\right) = 1 \) implies that the order of \( {HK}/H \) divides \( m/{m}_{1} \) . By induction there is a subgroup \( A/H \) of \( G/H \) of order \( m/{m}_{1} \) which contains \( {HK}/H \) (where \( A < G \) as above). Clearly \( K \) is a subgroup of \( A \) . Since \( \left| A\right| = \left| H\right| \left| {A/H}\right| = {m}_{1}{n}_{1}\left( {m/{m}_{1}}\right) \) \( = m{n}_{1} < {mn}, K \) is contained in a subgroup of \( A \) (and hence of \( G \) ) of order \( m \) by induction. CASE 2. Every proper normal subgroup of \( G \) has order divisible by \( n \) . If \( H \) is a minimal normal subgroup (such groups exist since \( G \) is finite), then \( \left| H\right| = {p}^{r} \) for some prime \( p \) by Lemma 7.13 (iii). Since \( \left( {m, n}\right) = 1 \) and \( n\left| \right| H \mid \), it follows that \( n = {p}^{r} \)
108_The Joys of Haar Measure
Definition 2.2.22
Definition 2.2.22. A fundamental discriminant \( D \) is said to be a prime discriminant if it is either equal to \( - 4, - 8 \), or 8, or equal to \( {\left( -1\right) }^{\left( {p - 1}\right) /2}p \) for \( p \) an odd prime. Note that all these expressions are indeed fundamental discriminants. Lemma 2.2.23. Any fundamental discriminant \( D \) can be written in a unique way as a product of prime fundamental discriminants. Proof. Since \( D \) is fundamental, no odd prime can divide \( D \) to a power larger than 1. Thus, we may write \( D = {2}^{u}\mathop{\prod }\limits_{{p \in S}}p \), where \( S \) is a finite set of odd primes. It follows that \( D = \varepsilon {2}^{u}\mathop{\prod }\limits_{{p \in S}}{\left( -1\right) }^{\left( {p - 1}\right) /2}p \) for some \( \varepsilon = \pm 1 \) . Note that the product over \( p \in S \) is congruent to 1 modulo 4 . Thus, either \( u = 0 \), in which case we must have \( \varepsilon = 1 \) (since \( D \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) ); or \( u = 2 \) , in which case we must have \( \varepsilon = - 1 \) (otherwise \( D/4 \) is also a discriminant), so the factor in front of the product is -4 ; or finally \( u = 3 \), in which case \( \varepsilon \) can be \( \pm 1 \), giving the two factors \( \pm 8 \) . Uniqueness of the decomposition is clear. The proof of the result that we are after is now immediate. Proposition 2.2.24. Let \( \chi \) be a real primitive character modulo \( m \), so that \( \chi \left( n\right) = \left( \frac{D}{n}\right) \) for \( D = \chi \left( {-1}\right) m \) a fundamental discriminant. Then \[ \tau \left( \chi \right) = \left\{ \begin{array}{ll} {m}^{1/2} & \text{ if }\chi \left( {-1}\right) = 1, \\ {m}^{1/2}i & \text{ if }\chi \left( {-1}\right) = - 1. \end{array}\right. \] Proof. By Theorem 2.2.15, we know that \( \chi = {\chi }_{D} \) with \( D = \chi \left( {-1}\right) m \) a fundamental discriminant. By Lemma 2.2.23, \( D \) is equal to a product of prime fundamental discriminants that are necessarily coprime. By Lemma 2.2.21, it is thus sufficient to prove the proposition for prime fundamental discriminants, and this is exactly the content of Theorem 2.2.19 and Lemma 2.2.20. In view of the functional equation for Dirichlet \( L \) -functions that we will study in Chapter 10 we make the following definition: Definition 2.2.25. Let \( \chi \) be any primitive character modulo \( m \) . We define the root number \( W\left( \chi \right) \) by the formula \[ W\left( \chi \right) = \left\{ \begin{array}{ll} \frac{\tau \left( \chi \right) }{{m}^{1/2}} & \text{ if }\chi \left( {-1}\right) = 1, \\ \frac{\tau \left( \chi \right) }{{m}^{1/2}i} & \text{ if }\chi \left( {-1}\right) = - 1. \end{array}\right. \] Thus a restatement of Proposition 2.2.24 is that when \( \chi \) is real we have \( W\left( \chi \right) = 1 \) . In the general case, since \( \left| {\tau \left( \chi \right) }\right| = {m}^{1/2} \) we have \( \left| {W\left( \chi \right) }\right| = 1 \), and one can show that \( W\left( \chi \right) \) is a root of unity if and only if \( \chi \) is real, in which case \( W\left( \chi \right) = 1 \) (see Exercise 17). ## 2.3 Lattices and the Geometry of Numbers ## 2.3.1 Definitions In this section, we let \( V \) be an \( \mathbb{R} \) -vector space of dimension \( n \) . Proposition 2.3.1. Let \( \Lambda \) be a sub- \( \mathbb{Z} \) -module of \( V \) . Consider the following three conditions: (1) \( \Lambda \) generates \( V \) as an \( \mathbb{R} \) -vector space. 
(2) \( \Lambda \) is discrete for the natural topology of \( V \) . (3) \( \Lambda \) is a free \( \mathbb{Z} \) -module of rank \( n \) . Then any two of these conditions imply the third. Note that (3) alone does not imply (1) since \( \Lambda \) may be a free \( \mathbb{Z} \) -module without being a free \( \mathbb{R} \) -module. Proof. Assume (1) and (2). Since \( \Lambda \) generates \( V \), by linear algebra there exists a set of \( n \) elements \( {\mathbf{b}}_{1},\ldots ,{\mathbf{b}}_{n} \) in \( \Lambda \) that are \( \mathbb{R} \) -linearly independent, hence that form an \( \mathbb{R} \) -basis of \( V \), and let \( {\Lambda }_{0} \) be the \( \mathbb{Z} \) -module generated by the \( {\mathbf{b}}_{i} \) . Since \( \Lambda \) is discrete in \( V \) there exists an integer \( M > 0 \) such that the only element \( \sum {x}_{i}{\mathbf{b}}_{i} \) of \( V \) with \( \left| {x}_{i}\right| < 1/M \) for all \( i \) and that belongs to \( \Lambda \) is the zero vector. It is clear that the \( {M}^{n} \) small cubes of the form \( {m}_{i}/M \leq \) \( {x}_{i} < \left( {{m}_{i} + 1}\right) /M \) for all \( i \), where \( {m}_{i} \) are integers such that \( 0 \leq {m}_{i} < M \), form a partition of the big cube \( C \) defined by \( 0 \leq {x}_{i} < 1 \) for all \( i \) . Let \( {\beta }_{1},\ldots ,{\beta }_{N} \) be some (not necessarily all) representatives of \( \Lambda /{\Lambda }_{0} \) . Translating them if necessary by elements of \( {\Lambda }_{0} \), we may assume that \( {\beta }_{j} \in C \) for all \( j \) . It is then clear that two distinct \( {\beta }_{j} \) cannot belong to the same small cube: indeed, if \( {\beta }_{j} \) and \( {\beta }_{k} \) both belong to the same cube, then \( {\beta }_{k} - {\beta }_{j} \) would be an element of \( \Lambda \) with coordinates \( \left| {x}_{i}\right| < 1/M \) for all \( i \), a contradiction since by assumption the only element of \( \Lambda \) lying in this cube is the origin. Thus the number of \( {\beta }_{j} \) is less than or equal to the number of small cubes, in other words \( N \leq {M}^{n} \) . It follows that \( \Lambda /{\Lambda }_{0} \) is finite (since \( N \) is uniformly bounded), and since \( {\Lambda }_{0} \) is finitely generated, \( \Lambda \) is also finitely generated. Thus \( \Lambda \) is a finitely generated \( \mathbb{Z} \) -module, and is of course torsion-free since \( \Lambda \subset V \) ; hence by the standard theorem on finitely generated torsion-free modules (see Corollary 2.1.2 for the case of \( \mathbb{Z} \) ) we deduce that \( \Lambda \) is a free \( \mathbb{Z} \) -module. In addition, since \( \Lambda /{\Lambda }_{0} \) is finite, Theorem 2.1.3 implies that the rank of \( \Lambda \) is equal to the rank of \( {\Lambda }_{0} \), which is equal to \( n \), proving (3). Assume (1) and (3); hence let \( {\mathbf{b}}_{1},\ldots ,{\mathbf{b}}_{n} \) be a \( \mathbb{Z} \) -basis of \( \Lambda \) . Thus they also form an \( \mathbb{R} \) -basis of \( V \) . If we consider the neighborhood \( \Omega \) of 0 consisting of \( x = \mathop{\sum }\limits_{{1 \leq i \leq n}}{x}_{i}{\mathbf{b}}_{i} \) with \( \left| {x}_{i}\right| < 1 \) for all \( i \), it is clear that the only element of \( \Lambda \) belonging to \( \Omega \) is 0 itself, proving that \( \Lambda \) is discrete. Finally, assume (2) and (3), and let \( W \) be the \( \mathbb{R} \) -vector space generated by \( \Lambda \) . 
Then (1) and (2) hold with \( V \) replaced by \( W \) ; hence by what we have proved, \( \Lambda \) is a free \( \mathbb{Z} \) -module on \( \dim \left( W\right) \) generators. It follows that \( \dim \left( W\right) = n \), hence that \( W = V \), proving (1). A \( \mathbb{Z} \) -module \( \Lambda \) satisfying the above three conditions (or any two of them, by the proposition) will be called a lattice in \( V \) . From now on, we will assume that \( V \) is a Euclidean vector space, in other words equipped with a Euclidean inner product \( x \cdot y \) . For instance, the most common case \( V = {\mathbb{R}}^{n} \) will be considered as a Euclidean vector space with the inner product \( x \cdot y = \mathop{\sum }\limits_{{1 \leq i \leq n}}{x}_{i}{y}_{i} \) with evident notation. We also let \( \parallel x\parallel = {\left( x \cdot x\right) }^{1/2} \) be the Euclidean norm. Definition and Proposition 2.3.2. Let \( {\left( {\mathbf{b}}_{j}\right) }_{1 \leq j \leq n} \) be a family of \( n \) vectors in \( V \) . (1) The absolute value of the determinant of the matrix of the \( {\mathbf{b}}_{j} \) on some orthonormal basis of \( V \) is independent of that basis. It will be called (with a slight abuse) the determinant of the family and denoted by \( \det \left( {{\mathbf{b}}_{1},\ldots ,{\mathbf{b}}_{n}}\right) \) . (2) The Gram matrix associated with the \( {\mathbf{b}}_{j} \) is by definition the matrix of scalar products \( G = {\left( {\mathbf{b}}_{i} \cdot {\mathbf{b}}_{j}\right) }_{1 \leq i, j \leq n} \), and we have \( \det \left( G\right) = \det {\left( {\mathbf{b}}_{1},\ldots ,{\mathbf{b}}_{n}\right) }^{2} \) . Proof. (1) follows from the fact that two orthonormal bases of \( V \) differ by a transition matrix \( P \) that is an orthogonal matrix, in other words such that \( {P}^{t}P = I \), hence with determinant equal to \( \pm 1 \) . For (2) we note that if \( \mathcal{B} \) is the matrix of the \( \left( {\mathbf{b}}_{j}\right) \) on some orthonormal basis then \( G = {\mathcal{B}}^{t}\mathcal{B} \) ; hence \( \det \left( G\right) = \det {\left( \mathcal{B}\right) }^{2} \) Remark. This terminology is the one used by Cassels and by all the literature dealing with the LLL algorithm, which is the main reason for which we study lattices. It is to be noted however that most modern experts in the geometry of numbers such as Conway-Sloane [Con-Slo] and Martinet [Mar] use a notation that is more adapted to the number-theoretic aspects of lattices: to avoid square roots, they call the determinant the determinant of the Gram matrix, hence the square of what we call the determinant. Proposition 2.3.3. Let \( \Lambda \) be a lattice in \( V \) and let \( {\left( {\mathbf{b}}_{j}\right) }_{1 \leq j \leq n} \) be a \( \mathbb{Z} \) -basis of \( \Lambda \) . (1) The quantity \( \det \left( {{\mathbf{b}}_{1},\ldots ,{\mathbf{b}}_{n}}\right) \) is independent of the choice of the \( \mathbb{Z} \) -basis \( {\mathbf{b}}_{j} \) . It is called the deter
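As a quick numerical illustration of Definition and Proposition 2.3.2 above, here is a minimal Python sketch (the vectors are arbitrary random choices, made only for the check) verifying that the determinant of the Gram matrix equals the square of the determinant of the family.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Columns of B are the vectors b_1, ..., b_n expressed in an orthonormal basis of V.
B = rng.standard_normal((n, n))

# Gram matrix G = (b_i . b_j) = B^t B, as in Proposition 2.3.2(2).
G = B.T @ B

det_family = abs(np.linalg.det(B))   # det(b_1, ..., b_n) as in Proposition 2.3.2(1)
det_gram = np.linalg.det(G)          # det(G)

print(det_gram, det_family**2)       # the two numbers agree up to rounding error
assert np.isclose(det_gram, det_family**2)
```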
1016_(GTM181)Numerical Analysis
Definition 3.6
Definition 3.6 Two norms on a linear space are called equivalent if they have the same convergent sequences. Theorem 3.7 Two norms \( \parallel \cdot {\parallel }_{a} \) and \( \parallel \cdot {\parallel }_{b} \) on a linear space \( X \) are equivalent if and only if there exist positive numbers \( c \) and \( C \) such that \[ c\parallel x{\parallel }_{a} \leq \parallel x{\parallel }_{b} \leq C\parallel x{\parallel }_{a} \] for all \( x \in X \) . The limits with respect to the two norms coincide. Proof. Provided that the conditions are satisfied, from \( {\begin{Vmatrix}{x}_{n} - x\end{Vmatrix}}_{a} \rightarrow 0 \) , \( n \rightarrow \infty \), it follows that \( {\begin{Vmatrix}{x}_{n} - x\end{Vmatrix}}_{b} \rightarrow 0, n \rightarrow \infty \), and vice versa. Conversely, let the two norms be equivalent and assume that there is no \( C > 0 \) such that \( \parallel x{\parallel }_{b} \leq C\parallel x{\parallel }_{a} \) for all \( x \in X \) . Then there exists a sequence \( \left( {x}_{n}\right) \) with \( {\begin{Vmatrix}{x}_{n}\end{Vmatrix}}_{a} = 1 \) and \( {\begin{Vmatrix}{x}_{n}\end{Vmatrix}}_{b} \geq {n}^{2} \) . Now, the sequence \( \left( {y}_{n}\right) \) with \( {y}_{n} \mathrel{\text{:=}} {x}_{n}/n \) converges to zero with respect to \( \parallel \cdot {\parallel }_{a} \), whereas with respect to \( \parallel \cdot {\parallel }_{b} \) it is divergent because of \( {\begin{Vmatrix}{y}_{n}\end{Vmatrix}}_{b} \geq n \) . Theorem 3.8 On a finite-dimensional linear space all norms are equivalent. Proof. In a linear space \( X \) with finite dimension \( n \) and basis \( {u}_{1},\ldots ,{u}_{n} \) every element can be expressed in the form \[ x = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \] As in Example 3.2, \[ \parallel x{\parallel }_{\infty } \mathrel{\text{:=}} \mathop{\max }\limits_{{j = 1,\ldots, n}}\left| {\alpha }_{j}\right| \] (3.2) defines a norm on \( X \) . Let \( \parallel \cdot \parallel \) denote any other norm on \( X \) . Then, by the triangle inequality we have \[ \parallel x\parallel \leq \mathop{\sum }\limits_{{j = 1}}^{n}\left| {\alpha }_{j}\right| \begin{Vmatrix}{u}_{j}\end{Vmatrix} \leq C\parallel x{\parallel }_{\infty } \] for all \( x \in X \), where \[ C \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}\begin{Vmatrix}{u}_{j}\end{Vmatrix} \] Assume that there is no \( c > 0 \) such that \( c\parallel x{\parallel }_{\infty } \leq \parallel x\parallel \) for all \( x \in X \) . Then there exists a sequence \( \left( {x}_{\nu }\right) \) with \( \begin{Vmatrix}{x}_{\nu }\end{Vmatrix} = 1 \) such that \( {\begin{Vmatrix}{x}_{\nu }\end{Vmatrix}}_{\infty } \geq \nu \) . Consider the sequence \( \left( {y}_{\nu }\right) \) with \( {y}_{\nu } \mathrel{\text{:=}} {x}_{\nu }/{\begin{Vmatrix}{x}_{\nu }\end{Vmatrix}}_{\infty } \) and write \[ {y}_{\nu } = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j\nu }{u}_{j} \] Because of \( {\begin{Vmatrix}{y}_{\nu }\end{Vmatrix}}_{\infty } = 1 \) each of the sequences \( \left( {\alpha }_{j\nu }\right), j = 1,\ldots, n \), is bounded in \( \mathbb{C} \) . Hence, by the Bolzano-Weierstrass theorem we can select convergent subsequences \( {\alpha }_{j,\nu \left( \ell \right) } \rightarrow {\alpha }_{j},\ell \rightarrow \infty \), for each \( j = 1,\ldots, n \) . 
This now implies \( {\begin{Vmatrix}{y}_{\nu \left( \ell \right) } - y\end{Vmatrix}}_{\infty } \rightarrow 0,\ell \rightarrow \infty \), where \[ y \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \] and also \( \begin{Vmatrix}{{y}_{\nu \left( \ell \right) } - y}\end{Vmatrix} \leq C{\begin{Vmatrix}{y}_{\nu \left( \ell \right) } - y\end{Vmatrix}}_{\infty } \rightarrow 0,\ell \rightarrow \infty \) . But on the other hand we have \( \begin{Vmatrix}{y}_{\nu }\end{Vmatrix} = 1/{\begin{Vmatrix}{x}_{\nu }\end{Vmatrix}}_{\infty } \rightarrow 0,\nu \rightarrow \infty \) . Therefore, \( y = 0 \), and consequently \( {\begin{Vmatrix}{y}_{\nu \left( \ell \right) }\end{Vmatrix}}_{\infty } \rightarrow 0,\ell \rightarrow \infty \), which contradicts \( {\begin{Vmatrix}{y}_{\nu }\end{Vmatrix}}_{\infty } = 1 \) for all \( \nu . \) The following definitions carry over some useful concepts from Euclidean space to general normed spaces. Definition 3.9 A subset \( U \) of a normed space \( X \) is called closed if it contains all limits of convergent sequences of \( U \) . The closure \( \bar{U} \) of a subset \( U \) of a normed space \( X \) is the set of all limits of convergent sequences of \( U \) . A subset \( U \) is called open if its complement \( X \smallsetminus U \) is closed. A set \( U \) is called dense in another set \( V \) if \( V \subset \bar{U} \), i.e., if each element in \( V \) is the limit of a convergent sequence from \( U \) . Obviously, a subset \( U \) is closed if and only if it coincides with its closure. For \( {x}_{0} \) in \( X \) and \( r > 0 \) the set \( B\left\lbrack {{x}_{0}, r}\right\rbrack \mathrel{\text{:=}} \left\{ {x \in X : \begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} \leq r}\right\} \) is closed and is called the closed ball of radius \( r \) and center \( {x}_{0} \) . Correspondingly, the set \( B\left( {{x}_{0}, r}\right) \mathrel{\text{:=}} \left\{ {x \in X : \begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} < r}\right\} \) is open and is called an open ball. Definition 3.10 A subset \( U \) of a normed space \( X \) is called bounded if there exists a positive number \( C \) such that \( \parallel x\parallel \leq C \) for all \( x \in U \) . Convergent sequences are bounded (see Problem 3.6). Theorem 3.11 Any bounded sequence in a finite-dimensional normed space \( X \) contains a convergent subsequence. Proof. Let \( {u}_{1},\ldots ,{u}_{n} \) be a basis of \( X \) and let \( \left( {x}_{\nu }\right) \) be a bounded sequence. Then writing \[ {x}_{\nu } = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j\nu }{u}_{j} \] and using the norm (3.2), as in the proof of Theorem 3.8 we deduce that each of the sequences \( \left( {\alpha }_{j\nu }\right), j = 1,\ldots, n \), is bounded in \( \mathbb{C} \) . Hence, by the Bolzano-Weierstrass theorem we can select convergent subsequences \( {\alpha }_{j,\nu \left( \ell \right) } \rightarrow {\alpha }_{j},\ell \rightarrow \infty \), for each \( j = 1,\ldots, n \) . This now implies \[ {x}_{\nu \left( \ell \right) } \rightarrow \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \in X,\;\ell \rightarrow \infty , \] and the proof is finished. ## 3.2 Scalar Products Definition 3.12 Let \( X \) be a complex (or real) linear space. 
Then a function \( \left( {\cdot , \cdot }\right) : X \times X \rightarrow \mathbb{C} \) (or \( \mathbb{R} \) ) with the properties (H1) \[ \left( {x, x}\right) \geq 0 \] (H2) \[ \left( {x, x}\right) = 0\text{ if and only if }x = 0,\;\text{ (definiteness) } \] (H3) \[ \left( {x, y}\right) = \overline{\left( y, x\right) }\;\text{ (symmetry) } \] (H4) \[ \left( {{\alpha x} + {\beta y}, z}\right) = \alpha \left( {x, z}\right) + \beta \left( {y, z}\right) ,\;\text{ (linearity) } \] for all \( x, y, z \in X \) and \( \alpha ,\beta \in \mathbb{C} \) (or \( \mathbb{R} \) ) is called a scalar product, or an inner product, on \( X \) . (By the bar we denote the complex conjugate.) \( A \) linear space \( X \) equipped with a scalar product is called a pre-Hilbert space. As a simple consequence of (H3) and (H4) we note the antilinearity \( \left( {\mathrm{{H4}}}^{\prime }\right) \) \[ \left( {x,{\alpha y} + {\beta z}}\right) = \bar{\alpha }\left( {x, y}\right) + \bar{\beta }\left( {x, z}\right) . \] Example 3.13 An example of a scalar product on \( {\mathbb{R}}^{n} \) and \( {\mathbb{C}}^{n} \) is given by \[ \left( {x, y}\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}{\bar{y}}_{j} \] for \( x = {\left( {x}_{1},\ldots ,{x}_{n}\right) }^{T} \) and \( y = {\left( {y}_{1},\ldots ,{y}_{n}\right) }^{T} \) . (Note that \( \left( {x, y}\right) = {y}^{ * }x \) .) Theorem 3.14 For a scalar product we have the Cauchy-Schwarz inequality \[ {\left| \left( x, y\right) \right| }^{2} \leq \left( {x, x}\right) \left( {y, y}\right) \] for all \( x, y \in X \), with equality if and only if \( x \) and \( y \) are linearly dependent. Proof. The inequality is trivial for \( x = 0 \) . For \( x \neq 0 \) it follows from \[ \left( {{\alpha x} + {\beta y},{\alpha x} + {\beta y}}\right) = {\left| \alpha \right| }^{2}\left( {x, x}\right) + 2\operatorname{Re}\{ \alpha \bar{\beta }\left( {x, y}\right) \} + {\left| \beta \right| }^{2}\left( {y, y}\right) \] \[ = \left( {x, x}\right) \left( {y, y}\right) - {\left| \left( x, y\right) \right| }^{2}, \] where we have set \( \alpha = - {\left( x, x\right) }^{-1/2}\overline{\left( x, y\right) } \) and \( \beta = {\left( x, x\right) }^{1/2} \) . Since \( \left( {\cdot , \cdot }\right) \) is positive definite, this expression is nonnegative, and it is equal to zero if and only if \( {\alpha x} + {\beta y} = 0 \) . In the latter case \( x \) and \( y \) are linearly dependent because \( \beta \neq 0 \) . Theorem 3.15 A scalar product \( \left( {\cdot , \cdot }\right) \) on a linear space \( X \) defines a norm by \[ \parallel x\parallel \mathrel{\text{:=}} {\left( x, x\right) }^{1/2} \] for all \( x \in X \) ; i.e., a pre-Hilbert space is always a normed space. Proof. We leave it as an exercise for the reader to verify the norm axioms. The triangle inequality follows by \[ \parallel x + y{\parallel }^{2} = \left( {x + y, x + y}\right) \leq \parallel x{\parallel }^{2} + 2\parallel x\parallel \parallel y\parallel + \parallel y{\parallel }^{2} = {\left( \parallel x\parallel + \parallel y\parallel \right) }^{2} \] from the Cauchy-Schwarz inequality. Note that we can rewrite the Cauchy-Schwarz inequality in the form \[ \left| \left( {x, y}\right) \right| \leq \parallel x\parallel \parallel y\parallel \] The scalar product of Example 3.13 generates the Euclidean norm of Exam
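The following minimal Python sketch, assuming the standard scalar product of Example 3.13 on \( {\mathbb{C}}^{n} \), spot-checks the Cauchy-Schwarz inequality of Theorem 3.14 and the triangle inequality for the induced norm of Theorem 3.15 on randomly chosen vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def inner(u, v):
    # (u, v) = sum_j u_j * conj(v_j), as in Example 3.13
    return np.sum(u * np.conj(v))

def norm(u):
    # induced norm ||u|| = (u, u)^{1/2}, as in Theorem 3.15
    return np.sqrt(inner(u, u).real)

# Cauchy-Schwarz: |(x, y)| <= ||x|| ||y||   (Theorem 3.14)
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12
# Triangle inequality for the induced norm: ||x + y|| <= ||x|| + ||y||
assert norm(x + y) <= norm(x) + norm(y) + 1e-12
print("Cauchy-Schwarz and triangle inequality hold for this sample")
```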
1172_(GTM8)Axiomatic Set Theory
Definition 23.39
Definition 23.39. Let \( \mathbf{P} = \langle P, \leq ,1\rangle \) be strongly coatomic. \( \mathbf{P} \) is said to be \( {\aleph }_{\alpha } \) -bounded iff 1. \( \overline{\overline{{CA}\left( p\right) }} < {\aleph }_{\alpha } \) for every \( p \in P \) . 2. Define \( {IC}\left( p\right) = \{ q \mid q \) is coatomic \( \land p \) and \( q \) are incompatible \( \} \) . Then \[ \overline{\overline{{IC}\left( p\right) }} \leq {\aleph }_{\alpha }\text{.} \] 3. If \( p \) and \( q \) are incompatible, then there exists \( {q}_{0} \in {IC}\left( p\right) \) such that \( q \leq {q}_{0} \) . 4. Let \( A \) be a set of coatomic elements with \( \overline{\overline{A}} \leq {\aleph }_{\alpha } \) . Then \[ \overline{\overline{\{ p \mid {CA}\left( p\right) \subseteq A\} }} \leq {\aleph }_{\alpha }. \] Remark. Condition 3 implies that the set of all \( q \) ’s that are incompatible with \( p \) is \[ \mathop{\bigcup }\limits_{{q \in {IC}\left( p\right) }}\left\lbrack q\right\rbrack \] Definition 23.40. Let \( {\mathbf{P}}_{0} = \left\langle {\Gamma ,{ \leq }_{0},{1}_{0}}\right\rangle \) and \( {\mathbf{P}}_{1} = \left\langle {\Delta ,{ \leq }_{1},{1}_{1}}\right\rangle \) be two partial order structures. We say that \( {\mathbf{P}}_{0} \) and \( {\mathbf{P}}_{1} \) form an \( {\aleph }_{\alpha } \) -Easton pair iff 1. \( {\mathbf{P}}_{0} \) is a set and an \( {\aleph }_{\alpha } \) -bounded strongly coatomic partial order structure, 2. For every \( \beta \leq {\aleph }_{\alpha } \) and for every \[ {q}_{0}{ \geq }_{1}{q}_{1}{ \geq }_{1}\cdots { \geq }_{1}{q}_{\gamma }{ \geq }_{1}\cdots \;\left( {\gamma < \beta }\right) \] there exists a \( q \in \Delta \) such that \[ \left( {\forall \gamma < \beta }\right) \left\lbrack {q{ \leq }_{1}{q}_{\gamma }}\right\rbrack \] (The next condition is dispensable; we add it in order to simplify the argument.) 3.1. \( \left( {\forall {p}_{1},{p}_{2} \in \Gamma }\right) \left\lbrack {{p}_{1}{ \nleq }_{0}{p}_{2} \rightarrow \left( {\exists p \in \Gamma }\right) \left\lbrack {p{ \leq }_{0}{p}_{1}\land \neg \operatorname{Comp}\left( {p,{p}_{2}}\right) }\right\rbrack }\right\rbrack \) . 3.2. \( \left( {\forall {q}_{1},{q}_{2} \in \Delta }\right) \left\lbrack {{q}_{1}{ \nleq }_{1}{q}_{2} \rightarrow \left( {\exists q \in \Delta }\right) \left\lbrack {q{ \leq }_{1}{q}_{1}\land \neg \operatorname{Comp}\left( {q,{q}_{2}}\right) }\right\rbrack }\right\rbrack \) . That is, \( \Gamma \) and \( \Delta \) are fine in the sense of Definition 5.21. Remark. Let \( {\mathbf{P}}_{0} = \left\langle {\Gamma ,{ \leq }_{0},{1}_{0}}\right\rangle \) and \( {\mathbf{P}}_{1} = \left\langle {\Delta ,{ \leq }_{1},{1}_{1}}\right\rangle \) form an \( {\aleph }_{\alpha } \) - Easton pair. Define \( \mathbf{P} = \langle P, \leq ,1\rangle \) as \( {\mathbf{P}}_{0} \times {\mathbf{P}}_{1} \) . We use an abbreviated notation such that \( p \in \Gamma \) also denotes \( \left\langle {p,{1}_{1}}\right\rangle \) and \( q \in \Delta \) also denotes \( \left\langle {{1}_{0}, q}\right\rangle \) . With this abbreviation, every member of \( P \) can be denoted by \( p \cdot q \) where \( p \in \Gamma \) and \( q \in \Delta \) and \( 1 = {1}_{0} = {1}_{1} \) . Let \( \mathbf{B} \) be the Boolean algebra of all regular open sets in \( \mathbf{P} \) . \[ \widetilde{P} = \{ b \in B \mid b \neq \mathbf{0}\} . \] Let \( {p}_{1} \cdot {q}_{1} \) and \( {p}_{2} \cdot {q}_{2} \) be two members of \( P \) . Then \( {p}_{1} \cdot {q}_{1} \) and \( {p}_{2} \cdot {q}_{2} \) are compatible iff \( {p}_{1} \) and \( {p}_{2} \) are compatible and \( {q}_{1} \) and \( {q}_{2} \) are compatible. 
Then \( P \) is fine, hence \[ {\left\lbrack p \cdot q\right\rbrack }^{-0} = \left\lbrack {p \cdot q}\right\rbrack \] and we may assume that \( P \) is a dense subset of \( \widetilde{P} \), where \( 1 = 1 \) . For the member \( p \cdot q \) of \( P \), we shall intentionally confuse \( p \cdot q \in P \) and \( \left\lbrack {p \cdot q}\right\rbrack \in \widetilde{P} \), i.e., we sometimes use \( p \cdot q \) in the place of \( \left\lbrack {p \cdot q}\right\rbrack \) and vice versa. Therefore we sometimes express " \( {p}_{1} \cdot {q}_{1} \) and \( {p}_{2} \cdot {q}_{2} \) are compatible" by \( {p}_{1} \cdot {q}_{1} \cdot {p}_{2} \cdot {q}_{2} > 0 \) . The former is considered in \( P \) and the latter is considered in \( \widetilde{P} \) . In what follows, we assume that an \( {\aleph }_{\alpha } \) -Easton pair \( {\mathbf{P}}_{0},{\mathbf{P}}_{1} \) is given as above. Theorem 23.41. If \( p,{p}_{0} \in \Gamma \) and \( q,{q}_{0} \in \Delta \) then \[ \text{1.}{p}_{0} \cdot {q}_{0} \leq p \leftrightarrow {p}_{0} \leq p\text{.} \] \[ \text{2.}{p}_{0} \cdot {q}_{0} \leq q \leftrightarrow {q}_{0} \leq q\text{.} \] 3. \( {p}_{0} \cdot {q}_{0} \leq p \cdot q \leftrightarrow {p}_{0} \leq p \land {q}_{0} \leq q \) . 4. \( {p}_{0} \cdot {q}_{0} = p \cdot q \leftrightarrow {p}_{0} = p \land {q}_{0} = q \) . Theorem 23.42. Suppose \( b \in B,\left\{ {{b}_{j} \mid j \in J}\right\} \subseteq B, b = \mathop{\sum }\limits_{{j \in J}}{b}_{j} \), where \( J \) may be a proper class, and \( {b}^{\prime } \in \widetilde{P} \) . Then \[ \left( {\exists p \in \Gamma }\right) \left( {\exists q \in \Delta }\right) \left\lbrack {p \cdot q \leq {b}^{\prime } \land \left\lbrack {p \cdot q \leq - b \vee \left( {\exists j \in J}\right) \left\lbrack {p \cdot q \leq {b}_{j}}\right\rbrack }\right\rbrack }\right\rbrack . \] Proof. Case 1: \( {b}^{\prime } \cdot b > \mathbf{0} \) . Then \( {b}^{\prime } \cdot {b}_{j} > \mathbf{0} \) for some \( j \in J \) . Hence \[ \left( {\exists p \in \Gamma }\right) \left( {\exists q \in \Delta }\right) \left\lbrack {p \cdot q \leq {b}^{\prime } \cdot {b}_{j}}\right\rbrack \] since \( P \) is dense in \( \widetilde{P} \) . Case 2: \( {b}^{\prime } \cdot b = \mathbf{0} \) . Then \( {b}^{\prime } \leq - b \) . For the same reason \[ \left( {\exists p \in \Gamma }\right) \left( {\exists q \in \Delta }\right) \left\lbrack {p \cdot q \leq {b}^{\prime } \leq - b}\right\rbrack . \] Lemma 23.43. (Easton’s main lemma.) Suppose \( {\aleph }_{\alpha } \) is regular, \( q \in \Delta \) and \( b = \mathop{\sum }\limits_{{j \in J}}{b}_{j} \), where \( J \) may be a proper class. Then \[ \left( {\exists \bar{q} \in \Delta }\right) \left( {\exists \Lambda \subseteq \Gamma }\right) \lbrack \bar{q} \leq q \land \bar{\Lambda } \leq {\aleph }_{\alpha } \] \[ \land \left( {\forall p \in \Lambda }\right) \left( {\exists j \in J}\right) \left\lbrack {p \cdot \bar{q} \leq {b}_{j} \vee p \cdot \bar{q} \leq {}^{ - }b}\right\rbrack \land \left( {\forall r \in \widetilde{P}}\right) \left( {\exists p \in \Lambda }\right) \left\lbrack {r \cdot p > 0}\right\rbrack \rbrack . \] Proof. We construct, in \( {\aleph }_{\alpha } \) stages, \( {p}_{\mu } \in \Gamma \) and \( {q}_{\mu } \in \Delta \) for \( \mu < {\aleph }_{\alpha } \cdot {\aleph }_{\alpha } \) (the ordinal product) satisfying \[ \left( {\forall {\mu }_{1},{\mu }_{2} < {\aleph }_{\alpha } \cdot {\aleph }_{\alpha }}\right) \left\lbrack {{\mu }_{1} < {\mu }_{2} \rightarrow {q}_{{\mu }_{2}}{ \leq }_{1}{q}_{{\mu }_{1}}}\right\rbrack . \] Stage 0. 
We pick \( {p}_{0} \in \Gamma \) and \( {q}_{0} \in \Delta \) such that \[ {p}_{0} \cdot {q}_{0} \leq q \land \left( {\exists j \in J}\right) \left\lbrack {{p}_{0} \cdot {q}_{0} \leq q \cdot {b}_{j} \vee {p}_{0} \cdot {q}_{0} \leq q \cdot \left( {-b}\right) }\right\rbrack . \] (See the proof of the preceding theorem.) For all \( \nu < {\aleph }_{\alpha } \) set \( {p}_{\nu } = {p}_{0} \) and \( {q}_{\nu } = {q}_{0} \) . Stage \( \mu \) (where \( 0 < \mu < {\aleph }_{\alpha } \) ). Define \( {S}_{\mu } = \mathop{\bigcup }\limits_{{\nu < {\aleph }_{\alpha } \cdot \mu }}{IC}\left( {p}_{\nu }\right) \) . Then by 2 of Definition 23.39, \( \overline{\overline{{IC}\left( {p}_{\nu }\right) }} \leq {\aleph }_{\alpha } \) for \( \nu < {\aleph }_{\alpha } \cdot \mu \) and hence \( \overline{\overline{{S}_{\mu }}} \leq {\aleph }_{\alpha } \) . Therefore by 4 of Definition 23.39 \[ \overline{\overline{\left\{ {p \mid {CA}\left( p\right) \subseteq {S}_{\mu }}\right\} }} \leq {\aleph }_{\alpha }. \] So we can enumerate the set \( \left\{ {p \mid {CA}\left( p\right) \subseteq {S}_{\mu }}\right\} \) as follows: \[ \left\{ {p \mid {CA}\left( p\right) \subseteq {S}_{\mu }}\right\} = \left\{ {{p}_{\nu }^{0} \mid {\aleph }_{\alpha } \cdot \mu \leq \nu < {\aleph }_{\alpha } \cdot \left( {\mu + 1}\right) }\right\} . \] For \( {\aleph }_{\alpha } \cdot \mu \leq \nu < {\aleph }_{\alpha } \cdot \left( {\mu + 1}\right) \) we pick \( {p}_{\nu } \in \Gamma ,{q}_{\nu }^{\prime },{q}_{\nu } \in \Delta \) such that 1. \( {q}_{\nu }^{\prime }{ \leq }_{1}{q}_{\lambda } \) for every \( \lambda \) such that \( {\aleph }_{\alpha } \cdot \mu \leq \lambda < \nu \) . 2. \( {p}_{\nu } \cdot {q}_{\nu } \leq {p}_{\nu }^{0} \cdot {q}_{\nu }^{\prime } \land \left\lbrack {{p}_{\nu } \cdot {q}_{\nu } \leq - b \vee \left( {\exists j \in J}\right) \left\lbrack {{p}_{\nu } \cdot {q}_{\nu } \leq {b}_{j}}\right\rbrack }\right\rbrack \) where the existence of \( {q}_{\nu }^{\prime } \) in 1 follows from the property of \( {\mathbf{P}}_{1} \) (2 of Definition 23.40) and the existence of \( {p}_{\nu },{q}_{\nu } \) in 2 follows from Theorem 23.42. It is easily seen that \[ {\aleph }_{\alpha } \cdot \mu \leq \lambda < \nu \rightarrow {q}_{\nu }{ \leq }_{1}{q}_{\lambda }. \] Finally, we pick \( \bar{q} \) such that \( \left( {\forall \mu < {\aleph }_{\alpha } \cdot {\aleph }_{\alpha }}\right) \left\lbrack {\bar{q}{ \leq }_{1}{q}_{\mu }}\right\rbrack \) and let \[ \Lambda = \left\{ {{p}_{\mu } \mid \mu < {\aleph }_{\alpha } \cdot {\aleph }_{\alpha }}\right\} . \] Obviously, \[ \bar{q} \leq q \land \Lambda \subseteq \Gamma \land \overline{\overline{\Lambda }} \leq {\aleph }_{\alpha } \land \left( {\forall p \in \Lambda }\right) \left( {\exists j \in J}\right) \left\lbrack {p \cdot \bar{q} \leq {b}_{j} \vee p \cdot \bar{q} \leq - b}\right\rbrack . \] Thus it remains to show that \[ \left( {\forall r \in \widetilde{P}}\right) \left( {\exists p \in \Lambda }\right) \left\lbrack {r \cdot p > \mathbf{0}}\right\rbrack . \] Let \( r = {p}^
1077_(GTM235)Compact Lie Groups
Definition 2.2
Definition 2.2. Let \( \left( {\pi, V}\right) \) and \( \left( {{\pi }^{\prime },{V}^{\prime }}\right) \) be finite-dimensional representations of a Lie group \( G \) . (1) \( T \in \operatorname{Hom}\left( {V,{V}^{\prime }}\right) \) is called an intertwining operator or \( G \) -map if \( T \circ \pi = {\pi }^{\prime } \circ T \) . (2) The set of all \( G \) -maps is denoted by \( {\operatorname{Hom}}_{G}\left( {V,{V}^{\prime }}\right) \) . (3) The representations \( V \) and \( {V}^{\prime } \) are equivalent, \( V \cong {V}^{\prime } \), if there exists a bijective \( G \) -map from \( V \) to \( {V}^{\prime } \) . ## 2.1.2 Examples Let \( G \) be a Lie group. A representation of \( G \) on a finite-dimensional vector space \( V \) smoothly assigns to each \( g \in G \) an invertible linear transformation of \( V \) satisfying \[ \pi \left( g\right) \pi \left( {g}^{\prime }\right) = \pi \left( {g{g}^{\prime }}\right) \] for all \( g,{g}^{\prime } \in G \) . Although surprisingly important at times, the most boring example of a representation is furnished by the map \( \pi : G \rightarrow {GL}\left( {1,\mathbb{C}}\right) = \mathbb{C} \smallsetminus \{ 0\} \) given by \( \pi \left( g\right) = 1 \) . This one-dimensional representation is called the trivial representation. More generally, the action of \( G \) on a vector space is called trivial if each \( g \in G \) acts as the identity operator. 2.1.2.1 Standard Representations Let \( G \) be \( {GL}\left( {n,\mathbb{F}}\right) ,{SL}\left( {n,\mathbb{F}}\right), U\left( n\right) ,{SU}\left( n\right) \) , \( O\left( n\right) \), or \( {SO}\left( n\right) \) . The standard representation of \( G \) is the representation on \( {\mathbb{C}}^{n} \) where \( \pi \left( g\right) \) is given by matrix multiplication on the left by the matrix \( g \in G \) . It is clear that this defines a representation. 2.1.2.2 \( {SU}\left( 2\right) \) This example illustrates a general strategy for constructing new representations. Namely, if a group \( G \) acts on a space \( M \), then \( G \) can be made to act on the space of functions on \( M \) (or various generalizations of functions). Begin with the standard two-dimensional representation of \( {SU}\left( 2\right) \) on \( {\mathbb{C}}^{2} \) where \( {g\eta } \) is simply left multiplication of matrices for \( g \in {SU}\left( 2\right) \) and \( \eta \in {\mathbb{C}}^{2} \) . Let \[ {V}_{n}\left( {\mathbb{C}}^{2}\right) \] be the vector space of holomorphic polynomials on \( {\mathbb{C}}^{2} \) that are homogeneous of degree \( n \) . A basis for \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) is given by \( \left\{ {{z}_{1}^{k}{z}_{2}^{n - k} \mid 0 \leq k \leq n}\right\} \), so \( \dim {V}_{n}\left( {\mathbb{C}}^{2}\right) = \) \( n + 1 \) . Define an action of \( {SU}\left( 2\right) \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) by setting \[ \left( {g \cdot P}\right) \left( \eta \right) = P\left( {{g}^{-1}\eta }\right) \] for \( g \in {SU}\left( 2\right), P \in {V}_{n}\left( {\mathbb{C}}^{2}\right) \), and \( \eta \in {\mathbb{C}}^{2} \) . 
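Before verifying this algebraically, here is a small symbolic spot-check (a sketch only; the two group elements and the basis monomial are arbitrary choices made for the example) that the prescription \( \left( {g \cdot P}\right) \left( \eta \right) = P\left( {{g}^{-1}\eta }\right) \) satisfies \( {g}_{1} \cdot \left( {{g}_{2} \cdot P}\right) = \left( {{g}_{1}{g}_{2}}\right) \cdot P \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \).

```python
import sympy as sp

z1, z2 = sp.symbols("z1 z2")
n, k = 4, 1
P = z1**k * z2**(n - k)          # a basis monomial of V_n(C^2)

def act(g, poly):
    # (g . P)(z) = P(g^{-1} z), the action defined above
    ginv = g.inv()
    w1 = ginv[0, 0] * z1 + ginv[0, 1] * z2
    w2 = ginv[1, 0] * z1 + ginv[1, 1] * z2
    return sp.expand(poly.subs({z1: w1, z2: w2}, simultaneous=True))

# Two elements of SU(2), of the form [[a, -conj(b)], [b, conj(a)]] with |a|^2 + |b|^2 = 1.
a1, b1 = sp.Rational(3, 5), sp.Rational(4, 5) * sp.I
g1 = sp.Matrix([[a1, -sp.conjugate(b1)], [b1, sp.conjugate(a1)]])
a2, b2 = sp.Rational(5, 13) * sp.I, sp.Rational(12, 13)
g2 = sp.Matrix([[a2, -sp.conjugate(b2)], [b2, sp.conjugate(a2)]])

lhs = act(g1, act(g2, P))        # g1 . (g2 . P)
rhs = act(g1 * g2, P)            # (g1 g2) . P
assert sp.simplify(lhs - rhs) == 0
print("group action property holds for", P)
```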
To verify that this is indeed a representation, calculate that \[ \left\lbrack {{g}_{1} \cdot \left( {{g}_{2} \cdot P}\right) }\right\rbrack \left( \eta \right) = \left( {{g}_{2} \cdot P}\right) \left( {{g}_{1}^{-1}\eta }\right) = P\left( {{g}_{2}^{-1}{g}_{1}^{-1}\eta }\right) = P\left( {{\left( {g}_{1}{g}_{2}\right) }^{-1}\eta }\right) \] \[ = \left\lbrack {\left( {{g}_{1}{g}_{2}}\right) \cdot P}\right\rbrack \left( \eta \right) \] so that \( {g}_{1} \cdot \left( {{g}_{2} \cdot P}\right) = \left( {{g}_{1}{g}_{2}}\right) \cdot P \) . Since smoothness and invertibility are clear, this action yields an \( \left( {n + 1}\right) \) -dimensional representation of \( {SU}\left( 2\right) \) on \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) . Although these representations are fairly simple, they turn out to play an extremely important role as building blocks in representation theory. With this in mind, we write them out in all their glory. If \( g = \left( \begin{matrix} a & - \bar{b} \\ b & \bar{a} \end{matrix}\right) \in {SU}\left( 2\right) \), then \( {g}^{-1} = \left( \begin{matrix} \bar{a} & \bar{b} \\ - b & a \end{matrix}\right) \), so that \( {g}^{-1}\eta = \left( {\bar{a}{\eta }_{1} + \bar{b}{\eta }_{2}, - b{\eta }_{1} + a{\eta }_{2}}\right) \) where \( \eta = \left( {{\eta }_{1},{\eta }_{2}}\right) \) . In particular, if \( P = {z}_{1}^{k}{z}_{2}^{n - k} \), then \( \left( {g \cdot P}\right) \left( \eta \right) = {\left( \bar{a}{\eta }_{1} + \bar{b}{\eta }_{2}\right) }^{k}{\left( -b{\eta }_{1} + a{\eta }_{2}\right) }^{n - k} \), so that (2.3) \[ \left( \begin{matrix} a & - \bar{b} \\ b & \bar{a} \end{matrix}\right) \cdot \left( {{z}_{1}^{k}{z}_{2}^{n - k}}\right) = {\left( \bar{a}{z}_{1} + \bar{b}{z}_{2}\right) }^{k}{\left( -b{z}_{1} + a{z}_{2}\right) }^{n - k}. \] Let us now consider another family of representations of \( {SU}\left( 2\right) \) . Define \[ {V}_{n}^{\prime } \] to be the vector space of holomorphic polynomials in one variable of degree less than or equal to \( n \) . As such, \( {V}_{n}^{\prime } \) has a basis consisting of \( \left\{ {{z}^{k} \mid 0 \leq k \leq n}\right\} \), so \( {V}_{n}^{\prime } \) is also \( \left( {n + 1}\right) \) -dimensional. In this case, define an action of \( {SU}\left( 2\right) \) on \( {V}_{n}^{\prime } \) by (2.4) \[ \left( {g \cdot Q}\right) \left( u\right) = {\left( -bu + a\right) }^{n}Q\left( \frac{\bar{a}u + \bar{b}}{-{bu} + a}\right) \] for \( g = \left( \begin{matrix} a & - \bar{b} \\ b & \bar{a} \end{matrix}\right) \in {SU}\left( 2\right), Q \in {V}_{n}^{\prime } \), and \( u \in \mathbb{C} \) . It is easy to see that (Exercise 2.1) this yields a representation of \( {SU}\left( 2\right) \) . In fact, this apparently new representation is old news since it turns out that \( {V}_{n}^{\prime } \cong {V}_{n}\left( {\mathbb{C}}^{2}\right) \) . To see this, we need to construct a bijective intertwining operator from \( {V}_{n}\left( {\mathbb{C}}^{2}\right) \) to \( {V}_{n}^{\prime } \) . Let \( T : {V}_{n}\left( {\mathbb{C}}^{2}\right) \rightarrow {V}_{n}^{\prime } \) be given by \( \left( {TP}\right) \left( u\right) = P\left( {u,1}\right) \) for \( P \in {V}_{n}\left( {\mathbb{C}}^{2}\right) \) and \( u \in \mathbb{C} \) . This map is clearly bijective. 
To see that \( T \) is a \( G \) -map, use the definitions to calculate that \[ \left\lbrack {T\left( {g \cdot P}\right) }\right\rbrack \left( u\right) = \left( {g \cdot P}\right) \left( {u,1}\right) = P\left( {\bar{a}u + \bar{b}, - {bu} + a}\right) \] \[ = {\left( -bu + a\right) }^{n}P\left( {\frac{\bar{a}u + \bar{b}}{-{bu} + a},1}\right) \] \[ = {\left( -bu + a\right) }^{n}\left( {TP}\right) \left( u\right) = \left\lbrack {g \cdot \left( {TP}\right) }\right\rbrack \left( u\right) , \] so \( T\left( {g \cdot P}\right) = g \cdot \left( {TP}\right) \) as desired. ## 2.1.2.3 \( O\left( n\right) \) and Harmonic Polynomials Let \[ {V}_{m}\left( {\mathbb{R}}^{n}\right) \] be the vector space of complex-valued polynomials on \( {\mathbb{R}}^{n} \) that are homogeneous of degree \( m \) . Since \( {V}_{m}\left( {\mathbb{R}}^{n}\right) \) has a basis consisting of \( \left\{ {{x}_{1}^{{k}_{1}}{x}_{2}^{{k}_{2}}\cdots {x}_{n}^{{k}_{n}} \mid {k}_{i} \in \mathbb{N}}\right. \) and \( \left. {{k}_{1} + {k}_{2} + \cdots + {k}_{n} = m}\right\} ,\dim {V}_{m}\left( {\mathbb{R}}^{n}\right) = \left( \begin{matrix} m + n - 1 \\ m \end{matrix}\right) \) (Exercise 2.4). Define an action of \( O\left( n\right) \) on \( {V}_{m}\left( {\mathbb{R}}^{n}\right) \) by \[ \left( {g \cdot P}\right) \left( x\right) = P\left( {{g}^{-1}x}\right) \] for \( g \in O\left( n\right), P \in {V}_{m}\left( {\mathbb{R}}^{n}\right) \), and \( x \in {\mathbb{R}}^{n} \) . As in \( §{2.1.2.2} \), this defines a representation. As fine and natural as this representation is, it actually contains a smaller, even nicer, representation. Write \( \Delta = {\partial }_{{x}_{1}}^{2} + \cdots + {\partial }_{{x}_{n}}^{2} \) for the Laplacian on \( {\mathbb{R}}^{n} \) . It is a well-known corollary of the chain rule and the definition of \( O\left( n\right) \) that \( \Delta \) commutes with this action, i.e., \( \Delta \left( {g \cdot P}\right) = g \cdot \left( {\Delta P}\right) \) (Exercise 2.5). Definition 2.5. Let \( {\mathcal{H}}_{m}\left( {\mathbb{R}}^{n}\right) \) be the subspace of all harmonic polynomials of degree \( m \), i.e., \( {\mathcal{H}}_{m}\left( {\mathbb{R}}^{n}\right) = \left\{ {P \in {V}_{m}\left( {\mathbb{R}}^{n}\right) \mid {\Delta P} = 0}\right\} \) . If \( P \in {\mathcal{H}}_{m}\left( {\mathbb{R}}^{n}\right) \) and \( g \in O\left( n\right) \), then \( \Delta \left( {g \cdot P}\right) = g \cdot \left( {\Delta P}\right) = 0 \) so that \( g \cdot P \in \) \( {\mathcal{H}}_{m}\left( {\mathbb{R}}^{n}\right) \) . In particular, the action of \( O\left( n\right) \) on \( {V}_{m}\left( {\mathbb{R}}^{n}\right) \) descends to a representation of \( O\left( n\right) \) (or \( {SO}\left( n\right) \), of course) on \( {\mathcal{H}}_{m}\left( {\mathbb{R}}^{n}\right) \) . It will turn out that these representations do not break into any smaller pieces. 2.1.2.4 Spin and Half-Spin Representations Any representation \( \left( {\pi, V}\right) \) of \( {SO}\left( n\right) \) automatically yields a representation of \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) by looking at \( \left( {\pi \circ \mathcal{A}, V}\right) \) where \( \mathcal{A} \) is the covering map from \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) to \( {SO}\left( n\right) \) . The set of representations of \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) constructed this way is exactly the set of representations in which \( - 1 \in {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) acts as the identity operator. 
In this section we construct an important representation, called the spin representation, of \( {\operatorname{Sp
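Stepping back to \( §{2.1.2.3} \) for a moment: the quoted fact (Exercise 2.5) that the Laplacian commutes with the \( O\left( n\right) \) action, which is what makes \( {\mathcal{H}}_{m}\left( {\mathbb{R}}^{n}\right) \) stable under the action, can be checked symbolically on an example. The sketch below uses a rotation with exact rational entries and an arbitrary homogeneous cubic; both choices are made up purely for the check.

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
X = sp.Matrix([x1, x2, x3])

# A rotation in SO(3) with exact rational entries (a 3-4-5 rotation about the x3-axis).
c, s = sp.Rational(3, 5), sp.Rational(4, 5)
g = sp.Matrix([[c, -s, 0],
               [s,  c, 0],
               [0,  0, 1]])

def act(g, P):
    # (g . P)(x) = P(g^{-1} x), the action used in section 2.1.2.3
    Y = g.inv() * X
    return sp.expand(P.subs({x1: Y[0], x2: Y[1], x3: Y[2]}, simultaneous=True))

def laplacian(P):
    return sp.diff(P, x1, 2) + sp.diff(P, x2, 2) + sp.diff(P, x3, 2)

P = x1**3 + x1 * x2 * x3 - 2 * x2 * x3**2     # an arbitrary homogeneous cubic

# Delta(g . P) = g . (Delta P)
assert sp.expand(laplacian(act(g, P)) - act(g, laplacian(P))) == 0
print("the Laplacian commutes with this rotation on the chosen cubic")
```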
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 3.2
Definition 3.2 (Chen Xiang-yan). Suppose that as \( \alpha \) varies in \( \left\lbrack {0, T}\right\rbrack \) , the critical points of the vector field \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) remain unchanged; and at all regular points (1) \( {d\theta }/{d\alpha } \geq 0 \), and \( {d\theta }/{d\alpha } ≢ 0 \) along any closed curve, (2) for any two points \( {\alpha }_{1} < {\alpha }_{2} \) in \( \left( {0, T}\right) \) . \[ 0 \leq {\int }_{{\alpha }_{1}}^{{\alpha }_{2}}\frac{d\theta }{d\alpha }{d\alpha } \leq \pi \] Then \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) are called generalized rotated vector fields. Referring to \( \left\lbrack {{27},{28}}\right\rbrack \), we now define generalized rotated vector fields as follows. Definition 3.3. Suppose that as \( \alpha \) varies in \( \left( {a, b}\right) \), the critical points of the vector fields \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) remain unchanged; and for any fixed point \( P\left( {x, y}\right) \) and any parameters \( {\alpha }_{1} < {\alpha }_{2} \) in \( \left( {a, b}\right) \), we have \[ \left| \begin{array}{ll} X\left( {x, y,{\alpha }_{1}}\right) & Y\left( {x, y,{\alpha }_{1}}\right) \\ X\left( {x, y,{\alpha }_{2}}\right) & Y\left( {x, y,{\alpha }_{2}}\right) \end{array}\right| \geq 0\left( {\text{ or } \leq 0}\right) , \] (3.5) where equality cannot hold on an entire closed orbit of \( {\left( {3.1}\right) }_{{\alpha }_{i}}, i = 1,2 \) . Then \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) are called generalized rotated vector fields. Here, the interval \( \left( {a, b}\right) \) can be either bounded or unbounded. The relation between conditions (1), (2) in Definition 3.2 and inequality (3.5) in Definition 3.3 is left to the readers as an exercise. If for some regular point \( \left( {{x}_{0},{y}_{0}}\right) \) and parameter \( {\alpha }_{0} \), there exists \( \delta \left( {{x}_{0},{y}_{0},{\alpha }_{0}}\right) > 0 \) such that for any \( \alpha \in \left\lbrack {{\alpha }_{0} - \delta ,{\alpha }_{0} + \delta }\right\rbrack \), the equality is valid in (3.5), then \( {\alpha }_{0} \) is called a stopping point for \( \left( {{x}_{0},{y}_{0}}\right) \) ; otherwise, \( {\alpha }_{0} \) is called a rotating point. Stopping points are allowed in generalized rotated vector fields. Moreover, generalized rotated vector fields do not necessarily depend on \( \alpha \) periodically; in particular, condition (3.3) is not required. The geometric meaning of condition (3.5) is that, at any fixed point \( P\left( {x, y}\right) \) , the oriented area between \[ \left( {X\left( {x, y,{\alpha }_{1}}\right), Y\left( {x, y,{\alpha }_{1}}\right) }\right) \text{ and }\left( {X\left( {x, y,{\alpha }_{2}}\right), Y\left( {x, y,{\alpha }_{2}}\right) }\right) \] has the same (or opposite) sign as \( \operatorname{sgn}\left( {{\alpha }_{2} - {\alpha }_{1}}\right) \) . That is, at any point \( P\left( {x, y}\right) \) , as the parameter \( \alpha \) increases, the vector \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) can only rotate in one direction; moreover, the angle of rotation cannot exceed \( \pi \) . This is also the geometric meaning of Definition 3.2. In the following, we describe two examples of rotated vector fields EXAMPLE 3.1. 
Consider the system of differential equations \[ \frac{dx}{dt} = X\left( {x, y}\right) ,\;\frac{dy}{dt} = Y\left( {x, y}\right) , \] (3.6) where \( X, Y \in {C}^{0} \), and satisfies conditions for uniqueness of solutions. Construct the system of differential equations containing parameter \( \alpha \) \[ \frac{dx}{dt} = X\cos \alpha - Y\sin \alpha ,\;\frac{dy}{dt} = X\sin \alpha + Y\cos \alpha . \] (3.7) It is not difficult to verify that equations (3.7) satisfy conditions (3.2), (3.3) and thus form a complete family of rotated vector fields. However, in \( 0 < \) \( \alpha \leq {2\pi } \), they are not generalized rotated vector fields. In fact, (3.7) can be regarded as a formula for axis rotation. It rotates the original vector field by an angle of \( \alpha \) , and keeps the vector lengths unchanged. Thus (3.7) are called uniformly rotated vector fields. EXAMPLE 3.2. Consider the system of differential equations \[ \frac{dx}{dt} = - {\alpha y},\;\frac{dy}{dt} = {\alpha x} - {\alpha yf}\left( {\alpha x}\right) , \] (3.8) where \( 0 < \alpha < + \infty \), and \( f\left( x\right) \) is monotonically increasing as \( \left| x\right| \) increases. It can be verified by condition (3.5) that (3.8) are generalized rotated vector fields; however, it is not a complete family of rotated vector fields. In the following, we will prove a few important theorems concerning limit cycles for generalized rotated vector fields. Naturally, they will also apply to complete families of rotated vector fields. We first prove several lemmas. LEMMA 3.1. Let \( {L}_{0} \) be a smooth simple closed curve, parametrized by \( x = \varphi \left( t\right), y = \psi \left( t\right) \) ; and suppose \( {L}_{0} \) is positively oriented (as \( t \) increases, it spirals counterclockwise). If on \( {L}_{0} \), we have \[ H\left( t\right) = \left| \begin{matrix} {\varphi }^{\prime }\left( t\right) & {\psi }^{\prime }\left( t\right) \\ X\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) & Y\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) \end{matrix}\right| \geq 0\left( {\text{ or } \leq 0}\right) , \] (3.9) then as \( t \) increases, the orbits of the system \[ \frac{dx}{dt} = X\left( {x, y}\right) ,\;\frac{dy}{dt} = Y\left( {x, y}\right) \] (3.10) cannot move from the interior (or exterior) of the region \( G \) bounded by \( {L}_{0} \) to the exterior (or interior) of \( G \) . (That is, from one region in \( {R}^{2} \smallsetminus {L}_{0} \) to another). Proof. We will only prove the case outside the parenthesis. Let \( \theta \) be the angle formed by the tangent vector at a point on \( {L}_{0} \) and the vector field \( \left( {X\left( {x, y}\right), Y\left( {x, y}\right) }\right) \) . We have \[ \sin \theta \left( t\right) = \frac{H\left( t\right) }{\sqrt{{\left\lbrack {\varphi }^{\prime }\left( t\right) \right\rbrack }^{2} + {\left\lbrack {\psi }^{\prime }\left( t\right) \right\rbrack }^{2}}\sqrt{{X}^{2}\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) + {Y}^{2}\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) }}. \] (3.11) From (3.9), we find \( \sin \theta \left( t\right) \geq 0 \), i.e., \( 0 \leq \theta \left( t\right) \leq \pi \) . If \( 0 < \theta \left( t\right) < \pi \), then the Lemma is clearly true. 
Suppose there is some point \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \in {L}_{0} \) with \( \theta \left( {t}_{0}\right) = 0 \) or \( \pi \), and the conclusion of the Lemma is not true, i.e., the orbit of (3.10) is tangent to \( {L}_{0} \) at \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \in {L}_{0} \) and moves from the interior of \( G \) to its exterior as \( t \) increases. From the continuous dependence of solutions on initial conditions, the orbits near \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \) also have this property. This situation is impossible, since we find from (3.9) and (3.11) that at these points we still have \( 0 \leq \theta \left( t\right) \leq \pi \) . If at all these points we have \( \theta \left( t\right) = 0 \) or \( \pi \), then the orbit of (3.10) near \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \) will be tangent to \( {L}_{0} \) and thus coincides with it. Thus it cannot move from the interior of \( G \) to its exterior. If there is a point with \( 0 < \theta \left( t\right) < \pi \) arbitrarily close to \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \), then this situation is impossible. This proves the Lemma. Naturally, \( {L}_{0} \) can itself be an orbit of (3.10). In this case, equality will hold identically in (3.9) on the orbit \( {L}_{0} \) . LEMMA 3.2. Consider the system \[ \frac{dx}{dt} = {X}_{i}\left( {x, y}\right) ,\;\frac{dy}{dt} = {Y}_{i}\left( {x, y}\right) , \] \( {\left( {3.12}\right) }_{i} \) where \( {X}_{i},{Y}_{i} \in {C}^{0}\left( {G \subseteq {R}^{2}}\right), i = 1,2 \), and satisfy conditions for uniqueness of solutions. Suppose that for \( \left( {x, y}\right) \in G \) \[ \left| \begin{array}{ll} {X}_{1}\left( {x, y}\right) & {Y}_{1}\left( {x, y}\right) \\ {X}_{2}\left( {x, y}\right) & {Y}_{2}\left( {x, y}\right) \end{array}\right| \] (3.13) does not change sign, then the closed orbits of \( {\left( {3.12}\right) }_{1} \) and \( {\left( {3.12}\right) }_{2} \) either coincide or do not intersect. Proof. Let \( {L}_{i} : x = {\varphi }_{i}\left( t\right), y = {\psi }_{i}\left( t\right) \) be closed orbits of \( {\left( {3.12}\right) }_{i}, i = 1,2 \) . Without loss of generality, we may assume \( {L}_{1} \) is positively oriented. From system \( {\left( {3.12}\right) }_{1} \), we have \[ {\varphi }_{1}^{\prime }\left( t\right) = {X}_{1}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) ,\;{\psi }_{1}^{\prime }\left( t\right) = {Y}_{1}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) . \] Since (3.13) never changes sign, we find that \[ \left| \begin{matrix} {\varphi }_{1}^{\prime }\left( t\right) & {\psi }_{1}^{\prime }\left( t\right) \\ {X}_{2}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) & {Y}_{2}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) \end{matrix}\right| \] (3.14) never changes sign. Suppose that it is nonnegative, then from the case outside of the pa
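As a small symbolic check of Example 3.1 above, one can compute the determinant appearing in condition (3.5) for the uniformly rotated fields (3.7): it equals \( \left( {{X}^{2} + {Y}^{2}}\right) \sin \left( {{\alpha }_{2} - {\alpha }_{1}}\right) \), which is nonnegative when \( 0 < {\alpha }_{2} - {\alpha }_{1} \leq \pi \) but changes sign once \( {\alpha }_{2} - {\alpha }_{1} \) exceeds \( \pi \); this is consistent with the remark in Example 3.1 that (3.7) is not a family of generalized rotated vector fields on \( 0 < \alpha \leq {2\pi } \). A sympy sketch of the computation:

```python
import sympy as sp

X, Y, a1, a2 = sp.symbols("X Y alpha1 alpha2", real=True)

# Uniformly rotated field (3.7) at parameter value alpha.
def rotated(alpha):
    return (X * sp.cos(alpha) - Y * sp.sin(alpha),
            X * sp.sin(alpha) + Y * sp.cos(alpha))

X1, Y1 = rotated(a1)
X2, Y2 = rotated(a2)

# Determinant from condition (3.5).
det = sp.simplify(X1 * Y2 - Y1 * X2)
print(det)   # equivalent to (X**2 + Y**2)*sin(alpha2 - alpha1)
assert sp.simplify(det - (X**2 + Y**2) * sp.sin(a2 - a1)) == 0
```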
109_The rising sea Foundations of Algebraic Geometry
Definition 1.73
Definition 1.73. Let \( G \) be a group and let \( S \) be a symmetric set of generators of \( G \) that does not contain the identity. Here "symmetric" means that \( S = \) \( {S}^{-1} \) . Then the Cayley graph of \( \left( {G, S}\right) \) is the (undirected) graph whose vertex set is \( G \) and whose edges are the unordered pairs \( \{ g, h\} \) such that \( h = {gs} \) for some \( s \in S \) . Note that the left-translation action of \( G \) on itself induces a left action of \( G \) on the Cayley graph, since the edges are defined using right translation. Note further that paths from 1 to \( g \) in the Cayley graph correspond to decompositions of \( g \) as a word in the elements of \( S \) . In particular, the distance from 1 to \( g \) is the minimal length \( l \) of an expression \[ g = {s}_{1}{s}_{2}\cdots {s}_{l} \] (1.14) of \( g \) as a product of generators. Definition 1.74. We call the minimal length \( l \) of a decomposition as in (1.14) the length of \( g \) with respect to \( S \), and we write \[ l = {l}_{S}\left( g\right) \] We omit the subscript \( S \) if it is clear from the context. A minimal-length decomposition (1.14) is called a reduced decomposition of \( g \) . The following result is little more than a restatement of our earlier analysis of galleries, combined with assertion (2) of Theorem 1.69. Corollary 1.75. The chamber graph of \( \sum \left( {W, V}\right) \) is isomorphic, as a graph with \( W \) -action, to the Cayley graph of \( \left( {W, S}\right) \) . For any \( w \in W \), there is a \( 1 - 1 \) correspondence between galleries from \( C \) to \( {wC} \) and decompositions of \( w \) as an \( S \) -word. It associates to the decomposition \( w = {s}_{1}{s}_{2}\cdots {s}_{l} \) the gallery pictured in (1.11). Consequently, minimal galleries from \( C \) to \( {wC} \) correspond to reduced decompositions of \( w \), and \[ d\left( {C,{wC}}\right) = l\left( w\right) . \] (1.15) Proof. The bijection \( {wC} \leftrightarrow w \) sets up the isomorphism on the level of vertices. The remaining details should be clear at this point and are left to the reader. In working with Cayley graphs, one often labels the edge from \( g \) to \( {gs} \) by the generator \( s \) . (Cayley [78] thought of the label as representing a color, and he called the graph a "colourgroup.") Following this convention, we will often write \[ {wC} - {wsC} \] (1.16) and say that \( {wC} \) is \( s \) -adjacent to \( {wsC} \) . Warning. If \( {C}_{1} \) and \( {C}_{2} \) are adjacent chambers, then the generator \( s \) that labels the edge in the chamber graph joining them is not in general the reflection that takes \( {C}_{1} \) to \( {C}_{2} \) . Indeed, the reflection taking \( {wC} \) to \( {wsC} \) is \( {ws}{w}^{-1} \), which is generally different from \( s \) . Exercise 1.76. Deduce from equation (1.15) that \( l\left( {ws}\right) = l\left( w\right) \pm 1 \) for all \( w \in W \) and \( s \in S \) . Deduce further that \( l\left( {sw}\right) = l\left( w\right) \pm 1 \) for all \( w, s \) . Note. The essential content of this is that one cannot have \( l\left( {ws}\right) = l\left( w\right) \) . One can prove this purely algebraically by a determinant argument. It is not, however, a general property of length functions on groups. Consider, for example, the direct product of two groups of order 2, with the three nontrivial elements as generators. 
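Since the Cayley graph of Definition 1.73 and the length function of Definition 1.74 are purely combinatorial, they are easy to experiment with. The following Python sketch (using an ad hoc encoding of the dihedral group of order \( {2m} \) as rotation-reflection pairs, chosen only for this example) computes \( {l}_{S} \) by breadth-first search from the identity and checks that \( l\left( {ws}\right) = l\left( w\right) \pm 1 \) for all \( w \) and \( s \), as asserted in Exercise 1.76.

```python
from collections import deque

m = 4                      # W is dihedral of order 2m, generated by two reflections

def mult(g, h):
    # elements are pairs (r, f): rotation^r followed by flip^f, with the usual dihedral rule
    r1, f1 = g
    r2, f2 = h
    return ((r1 + (-1) ** f1 * r2) % m, f1 ^ f2)

e = (0, 0)
S = [(0, 1), (1, 1)]       # two reflections; S is symmetric since each has order 2

# Breadth-first search from the identity computes l_S(g) = d(1, g) in the Cayley graph.
length = {e: 0}
queue = deque([e])
while queue:
    g = queue.popleft()
    for s in S:
        h = mult(g, s)
        if h not in length:
            length[h] = length[g] + 1
            queue.append(h)

assert len(length) == 2 * m                      # all of W is reached
assert max(length.values()) == m                 # the longest element has length m
assert all(abs(length[mult(g, s)] - length[g]) == 1
           for g in length for s in S)           # l(ws) = l(w) +/- 1 (Exercise 1.76)
print(sorted(length.values()))
```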
## 1.5.2 The Longest Element of \( W \) Recall from the general theory of hyperplane arrangements that \( - C \) is the unique chamber at maximal distance from the fundamental chamber \( C \) (Proposition 1.57). This leads to the following results about \( W \) and its generating set \( S \) . Proposition 1.77. Let \( \left( {W, V}\right) \) be a finite reflection group. (1) \( W \) has a unique element \( {w}_{0} \) of maximal length. Its length is given by \( l\left( {w}_{0}\right) = \left| \mathcal{H}\right| \) (2) The element \( {w}_{0} \) is characterized by the property that \( {w}_{0}C = - C \), where \( C \) is the fundamental chamber. (3) \( l\left( {w{w}_{0}}\right) = l\left( {w}_{0}\right) - l\left( w\right) \) for all \( w \in W \) . (4) \( {w}_{0}^{2} = 1 \), and \( {w}_{0} \) normalizes the set \( S \) of fundamental reflections. Proof. By Theorem 1.69, there is a unique \( {w}_{0} \in W \) such that \( {w}_{0}C = - C \) . Parts (1) and (2) now follow at once from Proposition 1.57 and equation (1.15). (3) We have \[ l\left( {w{w}_{0}}\right) = d\left( {C, w{w}_{0}C}\right) \] \[ = d\left( {C, w\left( {-C}\right) }\right) \] \[ = d\left( {-C,{wC}}\right) \] \[ = \left| \mathcal{H}\right| - d\left( {C,{wC}}\right) \] \[ = l\left( {w}_{0}\right) - l\left( w\right) \] where the second-to-last equality follows from equation (1.9). (4) \( {w}_{0}^{2}C = {w}_{0}\left( {-C}\right) = - {w}_{0}\left( C\right) = C \), so Theorem 1.69 implies that \( {w}_{0}^{2} = 1 \) . Note next that \( - C \) has the same walls as \( C \), so \( S \) is the set of reflections with respect to the walls of \( - C \) . On the other hand, \( {w}_{0}S{w}_{0}^{-1}\left\lbrack { = {w}_{0}S{w}_{0}}\right\rbrack \) is the set of reflections with respect to the walls of \( - C = {w}_{0}C \) . So \( {w}_{0}S{w}_{0}^{-1} = S \) . It follows from (4) that conjugation by \( {w}_{0} \) induces an involution of \( S \) (possibly trivial), which we denote by \( {\sigma }_{0} \) . We will give a geometric interpretation of this involution in Proposition 1.130. Exercise 1.78. Give an algebraic proof that \( {w}_{0} \) normalizes \( S \) by using (3) to calculate \( l\left( {{w}_{0}s{w}_{0}}\right) \) for \( s \in S \) . ## 1.5.3 Examples Example 1.79. Suppose that \( \left( {W, V}\right) \) is essential and of rank 2. One could simply give a direct analysis of this situation, but it will be instructive to see what Theorem 1.69 says about it. Let \( m \mathrel{\text{:=}} \left| \mathcal{H}\right| \) . Then \( m \geq 2 \), and the \( m \) lines in \( \mathcal{H} \) divide the plane \( V \) into \( {2m} \) chambers, each of which is a sector determined by two rays. The transitivity of \( W \) on the set of sectors implies that they are all congruent, so each sector must have angle \( {2\pi }/{2m} = \pi /m \) . In view of assertion (1) of Theorem 1.69, \( W \) is generated by two reflections in lines \( {L}_{1} \) and \( {L}_{2} \) that intersect at an angle of \( \pi /m \) . In other words, \( W \) is dihedral of order \( {2m} \) and \( \left( {W, V}\right) \) looks exactly like Example 1.9. Let us also record, for future reference, the following fact about this example: Let \( {L}_{1} \) and \( {L}_{2} \) be the walls of one of the chambers \( C \), and let \( {e}_{i} \) \( \left( {i = 1,2}\right) \) be the unit normal to \( {L}_{i} \) pointing to the side of \( {L}_{i} \) containing \( C \) ; see Figure 1.10. 
Then the inner product of \( {e}_{1} \) and \( {e}_{2} \) is given by \[ \left\langle {{e}_{1},{e}_{2}}\right\rangle = - \cos \frac{\pi }{m} \] [To understand the sign, note that the angle between \( {e}_{1} \) and \( - {e}_{2} \) is \( \pi /m \) .] Example 1.80. This is a trivial generalization of the previous example, but it will be useful to have it on record. Assume that \( \left( {W, V}\right) \) has rank 2 but is not necessarily essential. In other words, if we write \( V = {V}_{0} \oplus {V}_{1} \) as in Section 1.1, then \( \dim {V}_{1} = 2 \) . By the previous example applied to \( \left( {W,{V}_{1}}\right) \), we have \( W \cong {D}_{2m} \) for some \( m \geq 2 \) . Moreover, if \( {C}_{1} \) is a chamber in \( {V}_{1} \) with walls \( {L}_{i} \) and normals \( {e}_{i} \) as above, then \( {V}_{0} \times {C}_{1} \) is a chamber in \( V \) with walls \( {V}_{0} \oplus {L}_{i} \) and the same normals \( {e}_{i} \) . In particular, it is still true that a chamber \( C \) has two walls and that the corresponding unit normals (pointing toward the side containing \( C \) ) satisfy \[ \left\langle {{e}_{1},{e}_{2}}\right\rangle = - \cos \frac{\pi }{m} \] (1.17) Fig. 1.10. The canonical unit normals associated with a chamber. Example 1.81. (Type \( {\mathrm{A}}_{n - 1} \) ) Let \( W \) be the symmetric group on \( n \) letters acting on \( {\mathbb{R}}^{n} \) as in Example 1.10. Thus a permutation \( \pi \) acts by \( \pi \left( {v}_{i}\right) = {v}_{\pi \left( i\right) } \) for \( 1 \leq i \leq n \), where \( {v}_{1},\ldots ,{v}_{n} \) is the standard basis for \( {\mathbb{R}}^{n} \) . [These basis vectors were called \( {e}_{i} \) in Example 1.10, but we now call them \( {v}_{i} \) in order to avoid confusion with the canonical unit vectors associated to a chamber.] In terms of coordinates, the action is given by \( \pi \left( {{x}_{1},\ldots ,{x}_{n}}\right) = \left( {{y}_{1},\ldots ,{y}_{n}}\right) \) with \[ {x}_{i} = {y}_{\pi \left( i\right) }\;\text{ for }1 \leq i \leq n. \] (1.18) Indeed, we have \( \pi \left( {\mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{v}_{i}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}\pi \left( {v}_{i}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{v}_{\pi \left( i\right) } \), whence (1.18). To analyze this example we can take \( \mathcal{H} \) to be the braid arrangement discussed in Section 1.4.7; this set is clearly \( W \) -invariant, and the corresponding reflections generate \( W \) . We already saw in Section 1.4.7 that the chambers are in 1-1 correspondence with elements of \( W \) . The correspondence given there is identical with the one predicted by Theorem 1.69, if we take the fundamental chamber \( C \) to be given by \[ {x}_{1} > {x}_{2} > \cdots > {x}_{n} \] (1.19) To see this, just observe that by (1.18), \( {\pi C} \) is defined by \[ {x}_{\pi \left( 1\right) } > {x}_{\pi \left( 2\right) } > \cdots > {x}_{\pi \left( n\right) }. \] (1.20) The set of inequal
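To make Example 1.81 concrete, the sketch below works with \( n = 4 \): the fundamental reflections are taken to be the adjacent transpositions (the reflections in the walls \( {x}_{i} = {x}_{i + 1} \) of the chamber (1.19)), the word length \( l\left( \pi \right) \) is computed by breadth-first search, and it is compared with the number of hyperplanes \( {x}_{i} = {x}_{j} \) of the braid arrangement separating \( C \) from \( {\pi C} \), i.e., the number of inversions of \( \pi \), in line with (1.15). The encoding of permutations is an arbitrary choice made for the example.

```python
from collections import deque
from itertools import combinations

n = 4
identity = tuple(range(n))

# Adjacent transpositions s_1, ..., s_{n-1}: reflections in the walls x_i = x_{i+1} of C.
gens = [tuple(range(i)) + (i + 1, i) + tuple(range(i + 2, n)) for i in range(n - 1)]
compose = lambda p, q: tuple(p[q[i]] for i in range(n))

# l(pi): word length in the generators, computed by BFS in the Cayley graph.
length = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for s in gens:
        h = compose(g, s)
        if h not in length:
            length[h] = length[g] + 1
            queue.append(h)

# Hyperplanes x_i = x_j separating C from pi C: comparing (1.19) with (1.20),
# these correspond to the inversions of pi.
def separating(pi):
    return sum(1 for i, j in combinations(range(n), 2) if pi[i] > pi[j])

assert len(length) == 24                                   # all of the symmetric group on 4 letters
assert all(length[g] == separating(g) for g in length)     # d(C, wC) = l(w), cf. (1.15)
print(max(length.values()))                                # 6 = number of hyperplanes in H
```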
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 8.2.5
Definition 8.2.5 Let \( k \) be a totally real field and let \( A \) be a quaternion algebra over \( k \) which is ramified at all real places except one. Let \( \rho \) be a \( k \) -embedding of \( A \) in \( {M}_{2}\left( \mathbb{R}\right) \) and let \( \mathcal{O} \) be an order in \( A \) . Then a subgroup \( F \) of \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) (or \( \mathrm{{PSL}}\left( {2,\mathbb{R}}\right) \) ) is an arithmetic Fuchsian group if it is commensurable with some such \( \rho \left( {\mathcal{O}}^{1}\right) \) (or \( {P\rho }\left( {\mathcal{O}}^{1}\right) \) ). With \( k \) and \( A \) as described in this definition, \[ A{ \otimes }_{\mathbb{Q}}\mathbb{R} \cong {M}_{2}\left( \mathbb{R}\right) \oplus \mathcal{H} \oplus \cdots \oplus \mathcal{H} \] (8.3) and there is an embedding \( {\rho }_{1} : A \rightarrow {M}_{2}\left( \mathbb{R}\right) \) which we can take to be a \( k \) -embedding. Then, as earlier, \( {\rho }_{1}\left( {\mathcal{O}}^{1}\right) \) is a discrete subgroup of \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) of finite covolume which is cocompact if \( A \) is a division algebra. Again this definition is independent of the choice of order in \( A \) and arithmetic Fuchsian groups necessarily have finite covolume in \( {\mathbf{H}}^{2} \) . By similar arguments to those given in Theorems 8.2.2 and 8.2.3, we obtain the following two results: Theorem 8.2.6 Let \( k \) be a number field with at least one real embedding \( \sigma \) and let \( A \) be a quaternion algebra over \( k \) which is unramified at the place corresponding to \( \sigma \) . Let \( \rho \) be an embedding of \( A \) in \( {M}_{2}\left( \mathbb{R}\right) \) such that \( {\left. \rho \right| }_{Z\left( A\right) } = \sigma \) and let \( \mathcal{O} \) be an \( {R}_{k} \) -order in \( A \) . Then \( {P\rho }\left( {\mathcal{O}}^{1}\right) \) is a Fuchsian group of finite covolume if and only if \( k \) is totally real and \( A \) is ramified at all real places except \( \sigma \) . Theorem 8.2.7 Let \( F \) be an arithmetic Fuchsian group commensurable with \( {P\rho }\left( {\mathcal{O}}^{1}\right) \), where \( \mathcal{O} \) is an order in a quaternion algebra \( A \) over a field \( k \) . The following are equivalent: 1. \( F \) is non-cocompact. 2. \( k = \mathbb{Q} \) and \( A = {M}_{2}\left( k\right) \) . 3. \( F \) is commensurable in the wide sense with \( \operatorname{PSL}\left( {2,\mathbb{Z}}\right) \) . ## Exercise 8.2 1. Let \( {\rho }_{1}\left( {\mathcal{O}}^{1}\right) \) be an arithmetic Fuchsian group, where \( \mathcal{O} \) is an order in a quaternion algebra over a number field \( k \) . Show that \( {\rho }_{1}\left( {\mathcal{O}}^{1}\right) \) is contained in an arithmetic Kleinian group (cf. Exercise 7.3, No. 3 and Exercise 6.3, No. 3). 2. Show that there are no discrete \( S \) -arithmetic subgroups of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) or \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) obtained via an \( {R}_{S} \) -order as in Exercise 8.1, No. 4, where \( S \neq \varnothing \) . 3. Define discrete arithmetic subgroups of \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \times \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) and give necessary and sufficient conditions as in Theorem 8.2.3 for these groups to be non-cocompact. 4. Let \( k = \mathbb{Q}\left( t\right) \), where \( t \) satisfies \( {x}^{3} + x + 1 = 0 \) . 
Show that the quaternion algebras \( \left( \frac{t,1 + {2t}}{\mathbb{Q}\left( t\right) }\right) \) and \( \left( \frac{t - 1,2{t}^{2} - 1}{\mathbb{Q}\left( t\right) }\right) \) give rise to the same wide commensurability class of arithmetic Kleinian groups (cf Exercise 2.7, No. 3). ## 8.3 The Identification Theorem This identification theorem will enable us to identify when a given finite-covolume Kleinian group is arithmetic. As has already been discussed in Chapter 3, to any finite covolume Kleinian group \( \Gamma \) there is associated a pair consisting of the invariant trace field \( {k\Gamma } \) and the invariant quaternion algebra \( {A\Gamma } \) which are invariants of the wide commensurability class of \( \Gamma \) . Recall that \( {k\Gamma } = \mathbb{Q}\left( {\operatorname{tr}{\Gamma }^{\left( 2\right) }}\right) \) and \[ {A\Gamma } = {A}_{0}{\Gamma }^{\left( 2\right) } = \left\{ {\sum {x}_{i}{\gamma }_{i} : {x}_{i} \in {k\Gamma },{\gamma }_{i} \in {\Gamma }^{\left( 2\right) }}\right\} . \] If \( \Gamma \) is arithmetic, then it is commensurable with some \( \rho \left( {\mathcal{O}}^{1}\right) \), where \( \mathcal{O} \) is an order in a quaternion algebra \( A \) over a number field \( k \) with exactly one complex place and \( \rho \) is a \( k \) -embedding. Thus \( {k\Gamma } = {k\rho }\left( {\mathcal{O}}^{1}\right) \) . As remarked in the preceding section, if \( \alpha \in {\mathcal{O}}^{1} \), then \( \operatorname{tr}\rho \left( \alpha \right) \) is the reduced trace of \( \alpha \) and so lies in \( {R}_{k} \) . Thus \( {k\Gamma } \subset k \) . Now \( {k\Gamma } \) cannot be real (Theorem 3.3.7) so that \( {k\Gamma } = k \) since \( k \) has exactly one complex place (see Exercise 0.1, No. 2). Note that \( \mathbb{Q}\left( {\operatorname{tr}\rho \left( {\mathcal{O}}^{1}\right) }\right) = k \) and so by choosing \( g, h \in {\Gamma }^{\left( 2\right) } \cap \rho \left( {\mathcal{O}}^{1}\right) \) such that \( \langle g, h\rangle \) is irreducible, we see that \[ {A\Gamma } = {A}_{0}{\Gamma }^{\left( 2\right) } \subset {A}_{0}\left( {\rho \left( {\mathcal{O}}^{1}\right) }\right) \subset \rho \left( A\right) . \] Since both \( {A\Gamma } \) and \( \rho \left( A\right) \) are quaternion algebras over \( k \), they coincide. We have thus established the following: Theorem 8.3.1 If \( \Gamma \) is an arithmetic Kleinian group which is commensurable with \( \rho \left( {\mathcal{O}}^{1}\right) \), where \( \mathcal{O} \) is an order in a quaternion algebra \( A \) over the field \( k \) and \( \rho \) is a \( k \) -embedding, then \( {k\Gamma } = k \) and \( {A\Gamma } = \rho \left( A\right) \) . Note that this result already imposes two necessary conditions on \( \Gamma \) if it is to be arithmetic; namely, that \( {k\Gamma } \) has exactly one complex place and that \( {A\Gamma } \) is ramified at all real places. In Chapter 3, a variety of methods were given to calculate \( {k\Gamma } \) and \( {A\Gamma } \) and then applied to diverse examples in Chapter 4. Thus the methodology to check these two conditions is already in place. We add one further condition. If \( \Gamma \) is commensurable with \( \rho \left( {\mathcal{O}}^{1}\right) \) and \( \gamma \in \Gamma \), then \( {\gamma }^{n} \in \rho \left( {\mathcal{O}}^{1}\right) \) for some \( n \in \mathbb{Z} \) . Now the trace of \( {\gamma }^{n} \) is a monic polynomial with integer coefficients in \( \operatorname{tr}\gamma \) . 
However, \( \operatorname{tr}{\gamma }^{n} \in {R}_{k} \) so that \( \operatorname{tr}\gamma \) satisfies a monic polynomial with coefficients in \( {R}_{k} \) and so \( \operatorname{tr}\gamma \) is an algebraic integer. In essence, the main ingredients of the following proof have appeared earlier in the book, but we give them again in view of the central nature of this result. Theorem 8.3.2 Let \( \Gamma \) be a finite-covolume Kleinian group. Then \( \Gamma \) is arithmetic if and only if the following three conditions hold. 1. \( {k\Gamma } \) is a number field with exactly one complex place. 2. \( \operatorname{tr}\gamma \) is an algebraic integer for all \( \gamma \in \Gamma \) . 3. \( {A\Gamma } \) is ramified at all real places of \( {k\Gamma } \) . Proof: We have just shown that if \( \Gamma \) is arithmetic, then it satisfies these three conditions. Now suppose that \( \Gamma \) satisfies these three conditions. We already know that \( {A\Gamma } \) is a quaternion algebra over \( {k\Gamma } \) (see \( §{3.2} \) and \( §{3.3} \) ). Now set \[ \mathcal{O}\Gamma = \left\{ {\sum {x}_{i}{\gamma }_{i} \mid {x}_{i} \in {R}_{k\Gamma },{\gamma }_{i} \in {\Gamma }^{\left( 2\right) }}\right\} \] (8.4) We show that \( \mathcal{O}\Gamma \) is an order (see Exercise 3.2, No. 1). Clearly \( \mathcal{O}\Gamma \) is an \( {R}_{k\Gamma } \) -module which contains a basis of \( {A\Gamma } \) over \( {k\Gamma } \) and is a ring with 1 . We show that \( \mathcal{O}\Gamma \) is an order in \( {A\Gamma } \) by establishing that it is a finitely generated \( {R}_{k\Gamma } \) -module. To do this, we use a dual basis as in Theorem 3.2.1. Thus let \( g, h \in {\Gamma }^{\left( 2\right) } \) be such that \( \langle g, h\rangle \) is an irreducible subgroup. Let \( \left\{ {{I}^{ * },{g}^{ * },{h}^{ * },{\left( gh\right) }^{ * }}\right\} \) denote the dual basis with respect to the trace form \( T \) . Let \( \gamma \in {\Gamma }^{\left( 2\right) } \) ; thus \[ \gamma = {x}_{0}{I}^{ * } + {x}_{1}{g}^{ * } + {x}_{2}{h}^{ * } + {x}_{3}{\left( gh\right) }^{ * },\;{x}_{i} \in {k\Gamma }. \] If \( {\gamma }_{i} \in \{ I, g, h,{gh}\} \), then \[ T\left( {\gamma ,{\gamma }_{i}}\right) = \operatorname{tr}\left( {\gamma {\gamma }_{i}}\right) = {x}_{j}\;\text{ for some }j \in \{ 0,1,2,3\} . \] Now \( \gamma {\gamma }_{i} \in {\Gamma }^{\left( 2\right) } \) and so \( \operatorname{tr}\left( {\gamma {\gamma }_{i}}\right) \) is an algebraic integer in \( {k\Gamma } \) . Thus \( {x}_{j} \in {R}_{k\Gamma } \) and \[ \mathcal{O}\Gamma \subset {R}_{k\Gamma }\left\lbrack {{I}^{ * },{g}^{ * },{h}^{ * },{\left( gh\right) }^{ * }}\right\rbrack \mathrel{\text{:=}} M. \] Since each of the dual basis elements is a linear combination of \( \{ I, g, h,{gh}\} \) with coefficients in \( {k\Gamma } \), there will be an integer \( m \) such that \( {mM} \subset \mathcal{O}\Gamma \) . Now \( M/{mM} \) is a finite group and \( {mM} \) is a finitely generated \( {R}_{k\Gamma } \) -module. Thus \( \mathcal{O}\Gamma \) is an order. By the conditions imposed on \( {k\
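One standard way to see the claim above, that \( \operatorname{tr}{\gamma }^{n} \) is a monic polynomial with integer coefficients in \( \operatorname{tr}\gamma \), is the trace recursion \( \operatorname{tr}{\gamma }^{n + 1} = \operatorname{tr}\gamma \cdot \operatorname{tr}{\gamma }^{n} - \operatorname{tr}{\gamma }^{n - 1} \), valid whenever \( \det \gamma = 1 \). A minimal check of this recursion (plain Python; the matrix \( \gamma \in \mathrm{SL}\left( {2,\mathbb{Z}}\right) \) below is an arbitrary illustrative choice, not taken from the text):

```python
# Check that tr(gamma^n) equals the monic integer polynomial p_n(tr gamma)
# defined by p_0 = 2, p_1 = t, p_{n+1} = t*p_n - p_{n-1}, for a det-1 matrix.

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

gamma = [[2, 1], [1, 1]]            # an arbitrary element of SL(2, Z): det = 1, tr = 3
assert gamma[0][0] * gamma[1][1] - gamma[0][1] * gamma[1][0] == 1

t = trace(gamma)
p_prev, p_curr = 2, t               # p_0, p_1
power = [row[:] for row in gamma]   # gamma^1
for n in range(1, 12):
    assert trace(power) == p_curr, (n, trace(power), p_curr)
    power = mat_mult(power, gamma)
    p_prev, p_curr = p_curr, t * p_curr - p_prev

print("tr(gamma^n) matches the monic integer polynomial p_n(tr gamma) for n = 1,...,11")
```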
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 5.1.1
Definition 5.1.1. The standard 0-cube \( {I}^{0} \) is the point \( 0 \in {\mathbb{R}}^{0} \) . For \( n \geq 1 \), the standard \( n \) -cube is \[ {I}^{n} = \left\{ {x = \left( {{x}_{1},{x}_{2},\ldots ,{x}_{n}}\right) \in {\mathbb{R}}^{n} \mid 0 \leq {x}_{i} \leq 1, i = 1,\ldots, n}\right\} . \] Its \( i \) -th front face is \[ {A}_{i} = {A}_{i}\left( {I}^{n}\right) = \left\{ {x \in {I}^{n} \mid {x}_{i} = 0}\right\} \] and its \( i \) -th back face is \[ {B}_{i} = {B}_{i}\left( {I}^{n}\right) = \left\{ {x \in {I}^{n} \mid {x}_{i} = 1}\right\} . \] \( \diamond \) Definition 5.1.2. Let \( {I}^{n} \) be the standard \( n \) -cube. Its boundary is given by \[ \partial {I}^{0} = 0 \] and \[ \partial {I}^{n} = \mathop{\sum }\limits_{{i = 1}}^{n}{\left( -1\right) }^{i}\left( {{A}_{i} - {B}_{i}}\right) \] for \( n > 0 \) . \( \diamond \) In this definition, \( \partial {I}^{n} \) is considered to be an element in the free abelian group generated by \( \left\{ {{A}_{i},{B}_{i} \mid i = 1,\ldots, n}\right\} \) . We then have the following basic lemma: Lemma 5.1.3. For any \( n,\partial \left( {\partial {I}^{n}}\right) = 0 \) . Proof. For \( n \leq 1 \) this is clear. For \( n \geq 2,\partial \left( {\partial {I}^{n}}\right) \) is an element in the free abelian group generated by the \( \left( {n - 2}\right) \) -faces of \( {I}^{n} \), i.e., by the subsets, for each \( i \neq j \) and each \( {\varepsilon }_{i} = 0 \) or \( 1,{\varepsilon }_{j} = 0 \) or 1, \[ \left\{ {x \in {I}^{n} \mid {x}_{i} = {\varepsilon }_{i},{x}_{j} = {\varepsilon }_{j}}\right\} . \] Geometrically, each \( \left( {n - 2}\right) \) -face of \( {I}^{n} \) is a face of exactly two \( \left( {n - 1}\right) \) -faces, and the signs are chosen in Definition 5.1.2 so that they cancel. This is a routine but tedious calculation. Definition 5.1.4. Let \( X \) be a topological space. A singular \( n \) -cube of \( X \) is a map \( \Phi : {I}^{n} \rightarrow X \) . \( \diamond \) We let \( {\alpha }_{i} : {I}^{n - 1} \rightarrow {I}^{n} \) be the inclusion of the \( i \) -th front face and \( {\beta }_{i} : {I}^{n - 1} \rightarrow {I}^{n} \) be the inclusion of the \( i \) -th back face, i.e. \[ {\alpha }_{i}\left( {{x}_{1},\ldots ,{x}_{n - 1}}\right) = \left( {{x}_{1},\ldots ,{x}_{i - 1},0,{x}_{i},\ldots ,{x}_{n - 1}}\right) \] \[ {\beta }_{i}\left( {{x}_{1},\ldots ,{x}_{n - 1}}\right) = \left( {{x}_{1},\ldots ,{x}_{i - 1},1,{x}_{i},\ldots ,{x}_{n - 1}}\right) . \] Definition 5.1.5. For \( n \geq 1 \), a singular \( n \) -cube \( \Phi : {I}^{n} \rightarrow X \) is degenerate if the value of \( \Phi \) is independent of at least one coordinate \( {x}_{i} \), i.e., if there is a singular \( \left( {n - 1}\right) \) -cube \( \Psi : {I}^{n - 1} \rightarrow X \) with \[ \Phi \left( {{x}_{1},\ldots ,{x}_{n}}\right) = \Psi \left( {{x}_{1},\ldots ,{x}_{i - 1},{x}_{i + 1},\ldots ,{x}_{n}}\right) . \] A singular 0-cube is always non-degenerate. Observe that in the degenerate case we have, in particular, \[ \Phi {\alpha }_{i} = \Phi {\beta }_{i} = \Psi : {I}^{n - 1} \rightarrow X,\;\text{ for }n \geq 1. \] Definition 5.1.6. Let \( X \) be a topological space. The group \( {Q}_{n}\left( X\right) \) is the free abelian group generated by the singular \( n \) -cubes of \( X \) . The subgroup \( {D}_{n}\left( X\right) \) is the free abelian group generated by the degenerate singular \( n \) -cubes. (In particular, \( {D}_{0}\left( X\right) = \{ 0\} \) .)
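The cancellation asserted in Lemma 5.1.3 is exactly the kind of bookkeeping a few lines of code can confirm. In the sketch below (plain Python; the encoding of an \( \left( {n - 1}\right) \)-face of \( {I}^{n} \) as a pair \( \left( {i,\varepsilon }\right) \) meaning \( \left\{ {{x}_{i} = \varepsilon }\right\} \) is an implementation choice, not from the text), the boundary of \( {I}^{n} \) and the boundary of each of its faces are formed with the signs of Definition 5.1.2, and the signed multiplicity of every \( \left( {n - 2}\right) \)-face is checked to be zero:

```python
from collections import defaultdict

def boundary_of_cube(n):
    # d(I^n) as a formal sum {(i, eps): coefficient}; A_i = (i, 0) gets (-1)^i, B_i = (i, 1) gets -(-1)^i
    return {(i, eps): (-1) ** (i + eps) for i in range(1, n + 1) for eps in (0, 1)}

def boundary_of_face(n, i, eps):
    # The face {x_i = eps} is an (n-1)-cube; its own faces fix one more coordinate x_j, j != i.
    out = {}
    for j in range(1, n + 1):
        if j == i:
            continue
        k = j if j < i else j - 1          # position of x_j among the coordinates of the face
        for eta in (0, 1):
            out[frozenset({(i, eps), (j, eta)})] = (-1) ** (k + eta)
    return out

def check_dd_zero(n):
    total = defaultdict(int)
    for (i, eps), c in boundary_of_cube(n).items():
        for face2, c2 in boundary_of_face(n, i, eps).items():
            total[face2] += c * c2
    return all(v == 0 for v in total.values())

for n in range(2, 7):
    print(n, check_dd_zero(n))             # expect True for every n
```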
For each \( n \geq 0 \), the boundary map \( {\partial }_{n}^{Q} = {\partial }^{Q} \) is defined by \( {\partial }^{Q} = 0 \) if \( n = 0 \) and, if \( n \geq 1 \) and \( \Phi : {I}^{n} \rightarrow X \) is a singular \( n \) -cube, by \[ {\partial }^{Q}\Phi = \mathop{\sum }\limits_{{i = 1}}^{n}{\left( -1\right) }^{i}\left( {{A}_{i}\Phi - {B}_{i}\Phi }\right) \in {Q}_{n - 1}\left( X\right) \] where \( {A}_{i}\Phi = \Phi {\alpha }_{i} : {I}^{n - 1} \rightarrow X \) and \( {B}_{i}\Phi = \Phi {\beta }_{i} : {I}^{n - 1} \rightarrow X \) . \( \diamond \) Lemma 5.1.7. (1) \( {\partial }_{n - 1}^{Q}{\partial }_{n}^{Q} : {Q}_{n}\left( X\right) \rightarrow {Q}_{n - 2}\left( X\right) \) is the 0 map. (2) \( {\partial }^{Q}\left( {{D}_{n}\left( X\right) }\right) \subseteq {D}_{n - 1}\left( X\right) \) . Corollary 5.1.8. Let \( {C}_{n}\left( X\right) = {Q}_{n}\left( X\right) /{D}_{n}\left( X\right) \) . Then \( {\partial }_{n}^{Q} : {Q}_{n}\left( X\right) \rightarrow {Q}_{n - 1}\left( X\right) \) induces \( {\partial }_{n} : {C}_{n}\left( X\right) \rightarrow {C}_{n - 1}\left( X\right) \) with \( {\partial }_{n - 1}{\partial }_{n} = 0 \) . Definition 5.1.9. The chain complex \( C\left( X\right) \) : \[ \cdots \rightarrow {C}_{2}\left( X\right) \overset{{\partial }_{2}}{ \rightarrow }{C}_{1}\left( X\right) \overset{{\partial }_{1}}{ \rightarrow }{C}_{0}\left( X\right) \overset{{\partial }_{0}}{ \rightarrow }0 \rightarrow 0 \rightarrow \cdots \] is the singular chain complex of \( X \) . \( \diamond \) Definition 5.1.10. The homology of the singular chain complex of \( X \) is the singular homology of \( X \) . To quote the definition of the homology of a chain complex from Sect. A.2: \( {Z}_{n}\left( X\right) = \operatorname{Ker}\left( {{\partial }_{n} : {C}_{n}\left( X\right) \rightarrow {C}_{n - 1}\left( X\right) }\right) , \) the group of singular \( n \) -cycles, \( {B}_{n}\left( X\right) = \operatorname{Im}\left( {{\partial }_{n + 1} : {C}_{n + 1}\left( X\right) \rightarrow {C}_{n}\left( X\right) }\right) , \) the group of singular \( n \) -boundaries, \( {H}_{n}\left( X\right) = {Z}_{n}\left( X\right) /{B}_{n}\left( X\right) \), the \( n \) -th singular homology group of \( X \) . We now define the homology of a pair. Definition 5.1.11. Let \( \left( {X, A}\right) \) be a pair. The relative singular chain complex \( C\left( {X, A}\right) \) is the chain complex \[ \cdots \rightarrow {C}_{2}\left( X\right) /{C}_{2}\left( A\right) \overset{\partial }{ \rightarrow }{C}_{1}\left( X\right) /{C}_{1}\left( A\right) \overset{\partial }{ \rightarrow }{C}_{0}\left( X\right) /{C}_{0}\left( A\right) \overset{\partial }{ \rightarrow }0 \rightarrow 0 \rightarrow \cdots . \] Its homology is the singular homology of the pair \( \left( {X, A}\right) \) . Finally, we define the induced map on homology of a map of spaces, or of pairs. Lemma 5.1.12. Let \( f : X \rightarrow Y \) be a map. Then \( f \) induces a chain map \( \left\{ {{f}_{n} : {C}_{n}\left( X\right) \rightarrow {C}_{n}\left( Y\right) }\right\} \) where \( {f}_{n} : {C}_{n}\left( X\right) \rightarrow {C}_{n}\left( Y\right) \) is defined as follows. Let \( \Phi : {I}^{n} \rightarrow X \) be a singular \( n \) -cube. Then \( {f}_{n}\Phi = {f\Phi } : {I}^{n} \rightarrow Y \) where \( {f\Phi } \) denotes the composition. Similarly \( f : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) induces a map \( {f}_{n} : {C}_{n}\left( {X, A}\right) \rightarrow {C}_{n}\left( {Y, B}\right) \) by composition.
This chain map induces a map on singular homology \( \left\{ {{f}_{n} : {H}_{n}\left( X\right) \rightarrow {H}_{n}\left( Y\right) }\right\} \) and similarly \( \left\{ {{f}_{n} : {H}_{n}\left( {X, A}\right) \rightarrow {H}_{n}\left( {Y, B}\right) }\right\} \) . Proof. This would be immediate if we were dealing with \( {Q}_{n}\left( X\right) \) and \( {Q}_{n}\left( Y\right) \) . But since \( {f\Phi } \) is degenerate wherever \( \Phi \) is, it is just about immediate for \( {C}_{n}\left( X\right) \) and \( {C}_{n}\left( Y\right) \) . Then the fact that we have maps on homology is a direct consequence of Lemma A.2.7. Definition 5.1.13. The above maps \( \left\{ {{f}_{n} : {H}_{n}\left( X\right) \rightarrow {H}_{n}\left( Y\right) }\right\} \) or \( \left\{ {{f}_{n} : {H}_{n}\left( {X, A}\right) \rightarrow }\right. \) \( \left. {{H}_{n}\left( {Y, B}\right) }\right\} \) are the induced maps on singular homology by the map \( f : X \rightarrow Y \) or the \( \operatorname{map}f : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) . We now verify that singular homology satisfies the Eilenberg-Steenrod axioms. Theorem 5.1.14. Singular homology satisfies Axioms 1 and 2. Proof. Immediate from the definition of the induced map on singular cubes as composition. Theorem 5.1.15. Singular homology satisfies Axiom 3. Proof. Immediate from the definition of the boundary map on singular cubes and from the definition of the induced map on singular cubes as composition. Theorem 5.1.16. Singular homology satisfies Axiom 4. Proof. We have defined \( {C}_{n}\left( {X, A}\right) = {C}_{n}\left( X\right) /{C}_{n}\left( A\right) \) . Thus for every \( n \), we have a short exact sequence \[ 0 \rightarrow {C}_{n}\left( A\right) \rightarrow {C}_{n}\left( X\right) \rightarrow {C}_{n}\left( {X, A}\right) \rightarrow 0. \] In other words, we have a short exact sequence of chain complexes \[ 0 \rightarrow {C}_{ * }\left( A\right) \rightarrow {C}_{ * }\left( X\right) \rightarrow {C}_{ * }\left( {X, A}\right) \rightarrow 0. \] But then we have a long exact sequence in homology by Theorem A.2.10. Theorem 5.1.17. Singular homology satisfies Axiom 5. Proof. For simplicity we consider the case of homotopic maps of spaces \( f : X \rightarrow Y \) and \( g : X \rightarrow Y \) (rather than maps of pairs). Then by definition, setting \( {f}_{0} = f \) and \( {f}_{1} = g \), there is a map \( F : X \times I \rightarrow Y \) with \( F\left( {x,0}\right) = {f}_{0}\left( x\right) \) and \( F\left( {x,1}\right) = {f}_{1}\left( x\right) \) . Define a map \( \widetilde{F} : {C}_{n}\left( X\right) \rightarrow {C}_{n + 1}\left( Y\right) \) as follows. Let \( \Phi : {I}^{n} \rightarrow X \) be a singular \( n \) -cube. Then \( \widetilde{F}\Phi : {I}^{n + 1} \rightarrow Y \) is defined by \[ \widetilde{F}\mathbf{\Phi }\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) = F\left( {\mathbf{\Phi }\left( {{x}_{1},\ldots ,{x}_{n}}\right) ,{x}_{n + 1}}\right) . \] The
1164_(GTM70)Singular Homology Theory
Definition 2.2
Definition 2.2. Let \( K = \left\{ {{K}_{n},{\partial }_{n}}\right\} \) and \( {K}^{\prime } = \left\{ {{K}_{n}^{\prime },{\partial }_{n}^{\prime }}\right\} \) be chain complexes. A chain map \( f : K \rightarrow {K}^{\prime } \) consists of a sequence of homomorphisms \( {f}_{n} : {K}_{n} \rightarrow {K}_{n}^{\prime } \) such that the commutativity condition \[ {f}_{n - 1}{\partial }_{n} = {\partial }_{n}^{\prime }{f}_{n} \] holds for all \( n \) . EXAMPLE 2.2. A continuous map \( \varphi : X \rightarrow Y \) induces chain maps \[ {\varphi }_{\# } : Q\left( X\right) \rightarrow Q\left( Y\right) \] \[ {\varphi }_{\# } : D\left( X\right) \rightarrow D\left( Y\right) \] \[ {\varphi }_{\# } : C\left( X\right) \rightarrow C\left( Y\right) \] etc. If \( f : K \rightarrow {K}^{\prime } \) is a chain map, then \( {f}_{n}\left\lbrack {{Z}_{n}\left( K\right) }\right\rbrack \subset {Z}_{n}\left( {K}^{\prime }\right) \) and \( {f}_{n}\left\lbrack {{B}_{n}\left( K\right) }\right\rbrack \subset \) \( {B}_{n}\left( {K}^{\prime }\right) \), hence there is induced a homomorphism \[ {f}_{ * } : {H}_{n}\left( K\right) \rightarrow {H}_{n}\left( {K}^{\prime }\right) \] for all \( n \) . Note that the set of all chain complexes and chain maps constitutes a category, and that \( {H}_{n} \) is a functor from this category to the category of abelian groups and homomorphisms. Note also that if \( f \) and \( g : K \rightarrow {K}^{\prime } \) are chain maps, their sum, \[ f + g = \left\{ {{f}_{n} + {g}_{n}}\right\} \] is also a chain map, and \[ {\left( f + g\right) }_{ * } = {f}_{ * } + {g}_{ * } : {H}_{n}\left( K\right) \rightarrow {H}_{n}\left( {K}^{\prime }\right) . \] In other words, \( {H}_{n} \) is an additive functor. Definition 2.3. Let \( f, g : K \rightarrow {K}^{\prime } \) be chain maps. A chain homotopy \( D : K \rightarrow {K}^{\prime } \) between \( f \) and \( g \) is a sequence of homomorphisms \[ {D}_{n} : {K}_{n} \rightarrow {K}_{n + 1}^{\prime } \] such that \[ {f}_{n} - {g}_{n} = {\partial }_{n + 1}^{\prime }{D}_{n} + {D}_{n - 1}{\partial }_{n} \] for all \( n \) . Two chain maps are said to be chain homotopic if there exists a chain homotopy between them (notation: \( f \simeq g \) ). EXAMPLE 2.3. If \( {\varphi }_{0},{\varphi }_{1} : X \rightarrow Y \) are continuous maps, any homotopy between \( {\varphi }_{0} \) and \( {\varphi }_{1} \) gives rise to a chain homotopy between the induced chain maps \( {\varphi }_{0\# } \) and \( {\varphi }_{1\# } \) on cubical singular chains (see §II.4). The reader should prove the following two facts for himself: Proposition 2.1. Let \( f, g : K \rightarrow {K}^{\prime } \) be chain maps. If \( f \) and \( g \) are chain homotopic, then \[ {f}_{ * } = {g}_{ * } : {H}_{n}\left( K\right) \rightarrow {H}_{n}\left( {K}^{\prime }\right) \] for all \( n \) . Proposition 2.2. Chain homotopy is an equivalence relation on the set of all chain maps from \( K \) to \( {K}^{\prime } \) . ## EXERCISES 2.1. 
By analogy with the category of topological spaces and continuous maps, complete the following definitions: (a) A chain map \( f : K \rightarrow {K}^{\prime } \) is a chain homotopy equivalence if ___. (b) A chain complex \( {K}^{\prime } \) is a subcomplex of the chain complex \( K \) if ___. (c) A subcomplex \( {K}^{\prime } \) of the chain complex \( K \) is a retract of \( K \) if ___. (d) A subcomplex \( {K}^{\prime } \) of the chain complex \( K \) is a deformation retract of \( K \) if ___. (e) If \( {K}^{\prime } \) is a subcomplex of \( K \), the quotient complex \( K/{K}^{\prime } \) is ___. In each case, what assertions can be made about the homology groups of the various chain complexes involved, and about the homomorphisms induced by the various chain maps? 2.2. Let \( f, g,{f}^{\prime } \), and \( {g}^{\prime } \) be chain maps \( K \rightarrow {K}^{\prime } \) . If \( f \) is chain homotopic to \( {f}^{\prime } \), and \( g \) is chain homotopic to \( {g}^{\prime } \), then prove that \( f + g \) is chain homotopic to \( {f}^{\prime } + {g}^{\prime } \) . 2.3. Let \( f, g : K \rightarrow {K}^{\prime } \) and \( {f}^{\prime },{g}^{\prime } : {K}^{\prime } \rightarrow {K}^{\prime \prime } \) be chain maps, \( D \) a chain homotopy between \( f \) and \( g \), and \( {D}^{\prime } \) a chain homotopy between \( {f}^{\prime } \) and \( {g}^{\prime } \) . Using \( D \) and \( {D}^{\prime } \), construct an explicit chain homotopy between \( {f}^{\prime }f \) and \( {g}^{\prime }g : K \rightarrow {K}^{\prime \prime } \) . 2.4. Let \( D \) be a chain homotopy between the maps \( f \) and \( g : K \rightarrow K \) (of \( K \) into itself). Use \( D \) to construct an explicit chain homotopy between \( {f}^{n} = {fff}\cdots f \) and \( {g}^{n} = {gg}\cdots g \) ( \( n \) -fold iterates). Definition 2.4. A sequence of chain complexes and chain maps \[ \cdots \rightarrow K\overset{f}{ \rightarrow }{K}^{\prime }\overset{g}{ \rightarrow }{K}^{\prime \prime } \rightarrow \cdots \] is exact if for each integer \( n \) the sequence of abelian groups \[ \cdots \rightarrow {K}_{n}\overset{{f}_{n}}{ \rightarrow }{K}_{n}^{\prime }\overset{{g}_{n}}{ \rightarrow }{K}_{n}^{\prime \prime } \rightarrow \cdots \] is exact in the usual sense. We will be especially interested in short exact sequences of chain complexes, i.e., those of the form \[ E : 0 \rightarrow {K}^{\prime }\overset{f}{ \rightarrow }K\overset{g}{ \rightarrow }{K}^{\prime \prime } \rightarrow 0. \] This means that for each \( n,{f}_{n} \) is a monomorphism, \( {g}_{n} \) is an epimorphism, and image \( {f}_{n} = \) kernel \( {g}_{n} \) . Given any such short exact sequence of chain complexes, we can follow the procedure of §II.5
to define a connecting homomorphism or boundary operator \[ {\partial }_{E} : {H}_{n}\left( {K}^{\prime \prime }\right) \rightarrow {H}_{n - 1}\left( {K}^{\prime }\right) \] for all \( n \), and then prove that the following sequence of abelian groups \[ \cdots \overset{{\partial }_{E}}{ \rightarrow }{H}_{n}\left( {K}^{\prime }\right) \overset{{f}_{ * }}{ \rightarrow }{H}_{n}\left( K\right) \overset{{g}_{ * }}{ \rightarrow }{H}_{n}\left( {K}^{\prime \prime }\right) \overset{{\partial }_{E}}{ \rightarrow }{H}_{n - 1}\left( {K}^{\prime }\right) \rightarrow \cdots \] is exact. One can also prove the following important naturality property of this connecting homomorphism or boundary operator: Let [diagram omitted] be a commutative diagram of chain complexes and chain maps. It is assumed that the two rows, denoted by \( E \) and \( F \), are short exact sequences. Then the following diagram is commutative for each \( n \) : [diagram omitted] ## EXERCISES 2.5. Define the direct sum and direct product of an arbitrary family of chain complexes in the obvious way. How is the homology of such a direct sum or product related to the homology of the individual chain complexes of the family? 2.6. Let \( E : 0 \rightarrow {K}^{\prime }\overset{f}{ \rightarrow }K\overset{g}{ \rightarrow }{K}^{\prime \prime } \rightarrow 0 \) be a short exact sequence of chain complexes. By a splitting homomorphism for such a sequence we mean a sequence \( s = \left\{ {s}_{n}\right\} \) such that for each \( n,{s}_{n} : {K}_{n}^{\prime \prime } \rightarrow {K}_{n} \), and \( {g}_{n}{s}_{n} = \) identity map of \( {K}_{n}^{\prime \prime } \) onto itself. Note that we do not demand that \( s \) should be a chain map. Assume that such a splitting homomorphism exists. (a) Prove that there exist unique homomorphisms \( {\varphi }_{n} : {K}_{n}^{\prime \prime } \rightarrow {K}_{n - 1}^{\prime } \) for all \( n \) such that \[ {f}_{n - 1}{\varphi }_{n} = {\partial }_{n}{s}_{n} - {s}_{n - 1}{\partial }_{n}^{\prime \prime }. \] (b) Prove that \( {\partial }_{n - 1}^{\prime }{\varphi }_{n} + {\varphi }_{n - 1}{\partial }_{n}^{\prime \prime } = 0 \) for all \( n \) . (c) Let \( {s}^{\prime } = \left\{ {s}_{n}^{\prime }\right\} \) be another sequence of splitting homomorphisms, and \( {\varphi }_{n}^{\prime } : {K}_{n}^{\prime \prime } \rightarrow {K}_{n - 1}^{\prime } \) the unique homomorphisms such that \( {f}_{n - 1}{\varphi }_{n}^{\prime } = {\partial }_{n}{s}_{n}^{\prime } - {s}_{n - 1}^{\prime }{\partial }_{n}^{\prime \prime } \) . Prove that there exists a sequence of homomorphisms \( {D}_{n} : {K}_{n}^{\prime \prime } \rightarrow {K}_{n}^{\prime } \) such that \[ {\varphi }_{n} - {\varphi }_{n}^{\prime } = {\partial }_{n}^{\prime }{D}_{n} - {D}_{n - 1}{\partial }_{n}^{\prime \prime } \] for all \( n \) . (d) Prove that the connecting homomorphism \( {\partial }_{E} : {H}_{n}\left( {K}^{\prime \prime }\right) \rightarrow {H}_{n - 1}\left( {K}^{\prime }\right) \) is induced by the sequence of homomorphisms \( \left\{ {\varphi }_{n}\right\} \) in the same sense that a chain map induces homomorphisms of homology groups. (Note: The sequence of homomorphisms \( \left\{ {\varphi }_{n}\right\} \) can be thought of as a "chain map of degree -1."
The sequence of homomorphisms \( \left\{ {D}_{n}\right\} \) in Part (c) is a chain homotopy between \( \left\{ {\varphi }_{n}\right\} \) and \( \left\{ {\varphi }_{n}^{\prime }\right\} \) .) We will conclude this section on chain complexes with a discussion of a construction called the algebraic mapping cone of a chain map. Definition 2.5. Let \( K = \left\{ {{K}_{n},{\partial }_{n}}\right\} \) and \( {K}^{\prime } = \left\{ {{K}_{n}^{\prime },{\part
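Much of the homological algebra in this section can be exercised on a small numerical example. The sketch below, written in Python with numpy, uses an illustrative complex, the simplicial chains of the boundary of a triangle, together with an arbitrary homotopy \( D \); neither is taken from the text. It forms \( f = \mathrm{id} \) and \( g = f - \left( {\partial D + D\partial }\right) \) and confirms the content of Proposition 2.1: \( g \) is again a chain map, \( f \) and \( g \) agree on 1-cycles, and their difference on 0-chains is a boundary, so they induce the same maps on homology.

```python
import numpy as np

# Chain complex of the boundary of a triangle: C_1 = Q^3 (edges e01, e12, e02),
# C_0 = Q^3 (vertices v0, v1, v2), with d1(e_ij) = v_j - v_i.
d1 = np.array([[-1.,  0., -1.],
               [ 1., -1.,  0.],
               [ 0.,  1.,  1.]])           # columns: e01, e12, e02

f0, f1 = np.eye(3), np.eye(3)              # f = identity chain map
D0 = np.array([[1., 0., 2.],               # an arbitrary homotopy D_0 : C_0 -> C_1
               [0., 3., 0.],
               [1., 1., 0.]])

# g = f - (d D + D d); here D_{-1} = 0 and C_2 = 0, so only D_0 contributes.
g0 = f0 - d1 @ D0
g1 = f1 - D0 @ d1

# g is again a chain map: g0 d1 = d1 g1
assert np.allclose(g0 @ d1, d1 @ g1)

# On 1-cycles (ker d1), f and g literally agree, so f_* = g_* on H_1.
z = np.array([1., 1., -1.])                # e01 + e12 - e02 spans ker d1
assert np.allclose(d1 @ z, 0)
assert np.allclose(f1 @ z, g1 @ z)

# On 0-chains, (f - g)(c) = d1(D0 c) is a boundary, so f_* = g_* on H_0.
c = np.array([2., -1., 5.])
assert np.allclose(f0 @ c - g0 @ c, d1 @ (D0 @ c))
print("f and g are chain homotopic and induce the same maps on homology")
```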
1094_(GTM250)Modern Fourier Analysis
Definition 3.5.3
Definition 3.5.3. We define the Orlicz maximal operator \[ {M}_{L\log \left( {e + L}\right) }\left( f\right) \left( x\right) = \mathop{\sup }\limits_{{Q \ni x}}\parallel f{\parallel }_{L\log \left( {e + L}\right) \left( Q\right) }, \] where the supremum is taken over all cubes \( Q \) with sides parallel to the axes that contain the given point \( x \) . The boundedness properties of this maximal operator are a consequence of the following lemma. Lemma 3.5.4. There is a positive constant \( c\left( n\right) \) such that for any cube \( Q \) in \( {\mathbf{R}}^{n} \) and any nonnegative locally integrable function \( w \), we have \[ \parallel w{\parallel }_{L\log \left( {e + L}\right) \left( Q\right) } \leq \frac{c\left( n\right) }{\left| Q\right| }{\int }_{Q}{M}_{c}\left( w\right) {dx} \] (3.5.2) where \( {M}_{c} \) is the Hardy-Littlewood maximal operator with respect to cubes. Hence, for some other dimensional constant \( {c}^{\prime }\left( n\right) \) and all nonnegative \( w \) in \( {L}_{\mathrm{{loc}}}^{1}\left( {\mathbf{R}}^{n}\right) \) the inequality \[ {M}_{L\log \left( {e + L}\right) }\left( w\right) \left( x\right) \leq {c}^{\prime }\left( n\right) {M}^{2}\left( w\right) \left( x\right) \] (3.5.3) is valid, where \( {M}^{2} = M \circ M \) and \( M \) is the Hardy-Littlewood maximal operator. Proof. Fix a cube \( Q \) in \( {\mathbf{R}}^{n} \) with sides parallel to the axes. We introduce a maximal operator associated with \( Q \) as follows: \[ {M}_{c}^{Q}\left( f\right) \left( x\right) = \mathop{\sup }\limits_{\substack{{R \ni x} \\ {R \subseteq Q} }}\frac{1}{\left| R\right| }{\int }_{R}\left| {f\left( y\right) }\right| {dy} \] where the supremum is taken over cubes \( R \) in \( {\mathbf{R}}^{n} \) with sides parallel to the axes. The key estimate follows from the following local version of the reverse weak-type \( \left( {1,1}\right) \) estimate of the Hardy-Littlewood maximal function (see Exercise 2.1.4(c) in [156]). For each nonnegative function \( f \) on \( {\mathbf{R}}^{n} \) and \( \alpha \geq {\operatorname{Avg}}_{Q}f \), we have \[ \frac{1}{\alpha }{\int }_{Q\cap \{ f > \alpha \} }{fdx} \leq {2}^{n}\left| \left\{ {x \in Q : {M}_{c}^{Q}\left( f\right) \left( x\right) > \alpha }\right\} \right| . \] (3.5.4) Indeed, to prove (3.5.4), we apply Proposition 2.1.20 in [156] to the function \( f \) and the number \( \alpha > 0 \) . Then there exists a collection of disjoint (possibly empty) open cubes \( {Q}_{j} \) such that for almost all \( x \in {\left( \mathop{\bigcup }\limits_{j}{Q}_{j}\right) }^{c} \) we have \( f\left( x\right) \leq \alpha \) and \[ \alpha < \frac{1}{\left| {Q}_{j}\right| }{\int }_{{Q}_{j}}f\left( t\right) {dt} \leq {2}^{n}\alpha . \] (3.5.5) According to Corollary 2.1.21 in [156] we have \( Q \smallsetminus \left( {\mathop{\bigcup }\limits_{j}{Q}_{j}}\right) \subseteq \{ f \leq \alpha \} \) . This implies that \( Q \cap \{ f > \alpha \} \subseteq \mathop{\bigcup }\limits_{j}{Q}_{j} \), which is contained in \( \left\{ {x \in Q : {M}_{c}^{Q}\left( f\right) \left( x\right) > \alpha }\right\} \) . Multiplying both sides of (3.5.5) by \( \left| {Q}_{j}\right| /\alpha \) and summing over \( j \) we obtain (3.5.4). 
Using the definition of \( {M}_{L\log \left( {e + L}\right) } \) ,(3.5.2) follows from the fact that for some constant \( c > 1 \) independent of \( w \) we have \[ \frac{1}{\left| Q\right| }{\int }_{Q}\frac{w}{{\lambda }_{Q}}\log \left( {e + \frac{w}{{\lambda }_{Q}}}\right) {d\mu } \leq 1 \] (3.5.6) where \[ {\lambda }_{Q} = \frac{c}{\left| Q\right| }{\int }_{Q}{M}_{c}\left( w\right) {dx} = c{\operatorname{AvgM}}_{c}\left( w\right) . \] We let \( f = w/{\lambda }_{Q} \) ; by the Lebesgue differentiation theorem we have that \( 0 \leq \) \( {\operatorname{Avg}}_{Q}f \leq 1/c \) . It is true that \[ {\int }_{X}\phi \left( f\right) {dv} = {\int }_{0}^{\infty }{\phi }^{\prime }\left( t\right) v\left( {\{ x \in X : f\left( x\right) > t\} }\right) {dt} \] where \( v \geq 0,\left( {X, v}\right) \) is a \( \sigma \) -finite measure space, \( \phi \) is an increasing continuously differentiable function with \( \phi \left( 0\right) = 0 \), and \( f \in {L}^{p}\left( X\right) \) ; see Proposition 1.1.4 in [156]. We take \( X = Q,{dv} = {\left| Q\right| }^{-1}f{\chi }_{Q}{dx} \), and \( \phi \left( t\right) = \log \left( {e + t}\right) - 1 \) to deduce \[ \frac{1}{\left| Q\right| }{\int }_{Q}f\log \left( {e + f}\right) {dx} = \frac{1}{\left| Q\right| }{\int }_{Q}{fdx} + \frac{1}{\left| Q\right| }{\int }_{0}^{\infty }\frac{1}{e + t}\left( {{\int }_{Q\cap \{ f > t\} }{fdx}}\right) {dt} \] \[ = {I}_{0} + {I}_{1} + {I}_{2} \] where \[ {I}_{0} = \frac{1}{\left| Q\right| }{\int }_{Q}{fdx} \] \[ {I}_{1} = \frac{1}{\left| Q\right| }{\int }_{0}^{{\operatorname{Avg}}_{Q}f}\frac{1}{e + t}\left( {{\int }_{Q\cap \{ f > t\} }{fdx}}\right) {dt} \] \[ {I}_{2} = \frac{1}{\left| Q\right| }{\int }_{{\operatorname{Avg}}_{Q}f}^{\infty }\frac{1}{e + t}\left( {{\int }_{Q\cap \{ f > t\} }{fdx}}\right) {dt}. \] We now clearly have that \( {I}_{0} = {\operatorname{Avg}}_{Q}f \leq 1/c \), while \( {I}_{1} \leq {\left( {\operatorname{Avg}}_{Q}f\right) }^{2} \leq 1/{c}^{2} \) . For \( {I}_{2} \) we use estimate (3.5.4). Indeed, one has \[ {I}_{2} = \frac{1}{\left| Q\right| }{\int }_{{\operatorname{Avg}}_{Q}f}^{\infty }\frac{1}{e + t}\left( {{\int }_{Q\cap \{ f > t\} }{fdx}}\right) {dt} \] \[ \leq \frac{{2}^{n}}{\left| Q\right| }{\int }_{{\operatorname{Avg}}_{Q}f}^{\infty }\frac{t}{e + t}\left| \left\{ {x \in Q : {M}_{c}^{Q}\left( f\right) \left( x\right) > t}\right\} \right| {dt} \] \[ \leq \frac{{2}^{n}}{\left| Q\right| }{\int }_{0}^{\infty }\left| \left\{ {x \in Q : {M}_{c}^{Q}\left( f\right) \left( x\right) > \lambda }\right\} \right| {d\lambda } \] \[ = \frac{{2}^{n}}{\left| Q\right| }{\int }_{Q}{M}_{c}^{Q}\left( f\right) {dx} \] \[ = \frac{{2}^{n}}{\left| Q\right| }{\int }_{Q}{M}_{c}\left( w\right) {dx}\frac{1}{{\lambda }_{Q}} = \frac{{2}^{n}}{c} \] using the definition of \( {\lambda }_{Q} \) . Combining all the estimates obtained, we deduce that \[ {I}_{0} + {I}_{1} + {I}_{2} \leq \frac{1}{c} + \frac{1}{{c}^{2}} + \frac{{2}^{n}}{c} \leq 1 \] provided \( c \) is large enough. ## 3.5.2 A Pointwise Estimate for the Commutator We introduce certain modifications of the sharp maximal operator \( {M}^{\# } \) defined in Section 3.4. 
We have the centered version \[ {\mathcal{M}}^{\# }\left( f\right) \left( x\right) = \mathop{\sup }\limits_{\substack{{Q\text{ cube in }{\mathbf{R}}^{n}} \\ {\text{ center of }Q\text{ is }x} }}\frac{1}{\left| Q\right| }{\int }_{Q}\left| {f\left( y\right) - \mathop{\operatorname{Avg}}\limits_{Q}f}\right| {dy} \] which is pointwise equivalent to \( {M}^{\# }\left( f\right) \left( x\right) \) by a simple argument based on the fact that the smallest cube that contains a fixed cube and is centered at point in its interior has comparable size with the fixed cube. Then we introduce the "smaller" sharp maximal function \[ {\mathcal{M}}^{\# \# }\left( f\right) \left( x\right) = \mathop{\sup }\limits_{\substack{{Q\text{ cube in }{\mathbf{R}}^{n}} \\ {\text{ center of }Q\text{ is }x} }}\mathop{\inf }\limits_{c}\frac{1}{\left| Q\right| }{\int }_{Q}\left| {f\left( y\right) - c}\right| {dy} \] (3.5.7) which is pointwise equivalent to \( {\mathcal{M}}^{\# }\left( f\right) \left( x\right) \) [and thus to \( {M}^{\# }\left( f\right) \left( x\right) \) ] by an argument similar with that given in Proposition 3.4.2 (2). For \( \delta > 0 \) we also introduce the maximal operators \[ {M}_{\delta }\left( f\right) = M{\left( {\left| f\right| }^{\delta }\right) }^{1/\delta } \] \[ {M}_{\delta }^{\# }\left( f\right) = {M}^{\# }{\left( {\left| f\right| }^{\delta }\right) }^{1/\delta } \] \[ {\mathcal{M}}_{\delta }^{\# }\left( f\right) = {\mathcal{M}}^{\# }{\left( {\left| f\right| }^{\delta }\right) }^{1/\delta } \] \[ {\mathcal{M}}_{\delta }^{\# \# }\left( f\right) = {\mathcal{M}}^{\# \# }{\left( {\left| f\right| }^{\delta }\right) }^{1/\delta } \] where \( M \) is the Hardy-Littlewood maximal operator. Of these four maximal functions, the last three are pointwise comparable to each other. The next lemma states a pointwise estimate for commutators of singular integral operators with BMO functions in terms of the maximal functions and maximal functions of singular integrals. Lemma 3.5.5. Let \( T \) be a linear operator given by convolution with a tempered distribution on \( {\mathbf{R}}^{n} \) that coincides with a function \( K\left( x\right) \) on \( {\mathbf{R}}^{n} \smallsetminus \{ 0\} \) satisfying (3.4.13), (3.4.14), and (3.4.15). Let \( b \) be in \( \operatorname{BMO}\left( {\mathbf{R}}^{n}\right) \), and let \( 0 < \delta < \varepsilon \) . Then there exists a positive constant \( C = {C}_{\delta ,\varepsilon, n} \) such that for every smooth function \( f \) with compact support we have \[ {M}_{\delta }^{\# }\left( {\left\lbrack {b, T}\right\rbrack \left( f\right) }\right) \leq C\parallel b{\parallel }_{BMO}\left\{ {{M}_{\varepsilon }\left( {T\left( f\right) }\right) + {M}^{2}\left( f\right) }\right\} . \] (3.5.8) Proof. We will prove (3.5.8) for the equivalent operator \( {\mathcal{M}}_{\delta }^{\# \# }\left( {\left\lbrack {b, T}\right\rbrack \left( f\right) }\right) \) . Fix a cube \( Q \) in \( {\mathbf{R}}^{n} \) with sides parallel to the axes centered at the point \( x \) . 
Since for \( 0 < \delta < 1 \) we have \( \left| {{\left| \alpha \right| }^{\delta } - {\left| \beta \right| }^{\delta }}\right| \leq {\left| \alpha - \beta \right| }^{\delta } \) for \( \alpha ,\beta \in \mathbf{C} \), it is enough to show for some complex constant \( c = {c}_{Q} \) that there exists \( C = {C}_{\delta } > 0 \) such that \[ {\left( \frac{1}{\left| Q\right| }{\int }_{Q}{\left| \left\lbrack b, T\right\rbrack \left( f\right) \left( y\right) - c\right| }^{\delta }dy\right) }^{\frac{1}{\delta }} \leq C\parallel b{\parallel }_{BMO}\left\
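A one-dimensional numerical illustration of Lemma 3.5.4 may help fix ideas. The sketch below (Python with numpy) takes the Luxemburg-type normalization implicit in (3.5.6), namely that \( \parallel w{\parallel }_{L\log \left( {e + L}\right) \left( Q\right) } \) is the least \( \lambda > 0 \) with \( \frac{1}{\left| Q\right| }{\int }_{Q}\frac{w}{\lambda }\log \left( {e + \frac{w}{\lambda }}\right) {dx} \leq 1 \), and compares this quantity with the average of the Hardy-Littlewood maximal function over \( Q \); the test function and the discretization are arbitrary illustrative choices, not from the text:

```python
import numpy as np

# One-dimensional illustration on Q = [0, 1]; w has an integrable spike at 0 (illustrative choice).
N = 200
x = (np.arange(N) + 0.5) / N
w = 1.0 / np.sqrt(x)

# Hardy-Littlewood maximal function of w, brute force over all subintervals containing x_k
P = np.concatenate(([0.0], np.cumsum(w)))
Mw = np.array([max((P[j] - P[i]) / (j - i) for i in range(k + 1) for j in range(k + 1, N + 1))
               for k in range(N)])

def orlicz_mean(lam):
    # (1/|Q|) * integral over Q of (w/lam) log(e + w/lam), discretized on the grid
    return np.mean((w / lam) * np.log(np.e + w / lam))

# Luxemburg-type norm: least lam with orlicz_mean(lam) <= 1, located by bisection
lo, hi = 1e-6, 1e6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if orlicz_mean(mid) <= 1 else (mid, hi)

print("||w||_{L log(e+L)(Q)} ~", round(hi, 3))
print("Avg_Q M(w)            ~", round(Mw.mean(), 3))  # Lemma 3.5.4: the norm is at most c(n) times this
```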
1098_(GTM254)Algebraic Function Fields and Codes
Definition 2.1.5
Definition 2.1.5. The canonical inner product on \( {\mathbb{F}}_{q}^{n} \) is defined by \[ \langle a, b\rangle \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{b}_{i} \] for \( a = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) and \( b = \left( {{b}_{1},\ldots ,{b}_{n}}\right) \in {\mathbb{F}}_{q}^{n} \) . Obviously this is a non-degenerate symmetric bilinear form on \( {\mathbb{F}}_{q}^{n} \) . Definition 2.1.6. If \( C \subseteq {\mathbb{F}}_{q}^{n} \) is a code then \[ {C}^{ \bot } \mathrel{\text{:=}} \left\{ {u \in {\mathbb{F}}_{q}^{n}\mid \langle u, c\rangle = 0\text{ for all }c \in C}\right\} \] is called the dual of \( C \) . The code \( C \) is called self-dual (resp. self-orthogonal) if \( C = {C}^{ \bot } \) (resp. \( C \subseteq {C}^{ \bot } \) ). It is well-known from linear algebra that the dual of an \( \left\lbrack {n, k}\right\rbrack \) code is an \( \left\lbrack {n, n - k}\right\rbrack \) code, and \( {\left( {C}^{ \bot }\right) }^{ \bot } = C \) . In particular, the dimension of a self-dual code of length \( n \) is \( n/2 \) . Definition 2.1.7. A generator matrix \( H \) of \( {C}^{ \bot } \) is said to be a parity check matrix for \( C \) . Clearly a parity check matrix \( H \) of an \( \left\lbrack {n, k}\right\rbrack \) code \( C \) is an \( \left( {n - k}\right) \times n \) matrix of rank \( n - k \), and we have \[ C = \left\{ {u \in {\mathbb{F}}_{q}^{n} \mid H \cdot {u}^{t} = 0}\right\} \] (where \( {u}^{t} \) denotes the transpose of \( u \) ). Thus a parity check matrix ’checks’ whether a vector \( u \in {\mathbb{F}}_{q}^{n} \) is a codeword or not. One of the basic problems in algebraic coding theory is to construct - over a fixed alphabet \( {\mathbb{F}}_{q} \) - codes whose dimension and minimum distance are large in comparison with their length. However there are some restrictions. Roughly speaking, if the dimension of a code is large (with respect to its length), then its minimum distance is small. The simplest bound is the following. Proposition 2.1.8 (Singleton Bound). For an \( \left\lbrack {n, k, d}\right\rbrack \) code \( C \) holds \[ k + d \leq n + 1 \] Proof. Consider the linear subspace \( E \subseteq {\mathbb{F}}_{q}^{n} \) given by \[ E \mathrel{\text{:=}} \left\{ {\left( {{a}_{1},\ldots ,{a}_{n}}\right) \in {\mathbb{F}}_{q}^{n} \mid {a}_{i} = 0\;\text{ for all }\;i \geq d}\right\} . \] Every \( a \in E \) has weight \( \leq d - 1 \), hence \( E \cap C = 0 \) . As \( \dim E = d - 1 \) we obtain \[ k + \left( {d - 1}\right) = \dim C + \dim E \] \[ = \dim \left( {C + E}\right) + \dim \left( {C \cap E}\right) = \dim \left( {C + E}\right) \leq n. \] Codes with \( k + d = n + 1 \) are in a sense optimal; such codes are called MDS codes (maximum distance separable codes). If \( n \leq q + 1 \), there exist MDS codes over \( {\mathbb{F}}_{q} \) for all dimensions \( k \leq n \) (this will be shown in Section 2.3). The Singleton Bound does not take into consideration the size of the alphabet. Several other upper bounds for the parameters \( k \) and \( d \) (involving the length \( n \) of the code and the size \( q \) of the alphabet) are known. They are stronger than the Singleton Bound if \( n \) is large with respect to \( q \) . We refer to [25],[28], see also Chapter 8, Section 8.4. It is in general a much harder problem to obtain lower bounds for the minimum distance of a given code (or a given class of codes). Only few such classes are known, for instance BCH codes, Goppa codes or quadratic residue codes (cf. [25],[28]). 
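Both notions of duality and the Singleton Bound are easy to verify exhaustively for a small code. The sketch below (plain Python, over the prime field \( {\mathbb{F}}_{5} \); the generator matrix is an arbitrary illustrative choice, not from the text) uses the standard fact that a code with systematic generator matrix \( G = \left( {{I}_{k} \mid A}\right) \) has parity check matrix \( H = \left( {-{A}^{t} \mid {I}_{n - k}}\right) \):

```python
from itertools import product

q, k, n = 5, 2, 4
A = [[1, 1],
     [1, 2]]                         # arbitrary k x (n-k) block over F_5
G = [[1, 0] + A[0],                  # systematic generator matrix G = (I_k | A)
     [0, 1] + A[1]]
H = [[(-A[0][j]) % q, (-A[1][j]) % q, 1 if j == 0 else 0, 1 if j == 1 else 0]
     for j in range(n - k)]          # parity check matrix H = (-A^t | I_{n-k})

def encode(msg):                     # codeword = msg * G over F_q
    return tuple(sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G))

code = {encode(msg) for msg in product(range(q), repeat=k)}

# every codeword is annihilated by the parity check matrix
assert all(all(sum(h * c for h, c in zip(row, cw)) % q == 0 for row in H) for cw in code)

# minimum distance = minimum weight of a nonzero codeword; Singleton bound k + d <= n + 1
d = min(sum(1 for c in cw if c != 0) for cw in code if any(cw))
print("[n, k, d] =", [n, k, d], " Singleton bound holds:", k + d <= n + 1, " MDS:", k + d == n + 1)
```

With this particular choice of \( A \) the script reports \( d = 3 \), so \( k + d = n + 1 \) and the small code is MDS.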
One of the reasons for the interest in algebraic geometry codes (to be defined in the next section) is that for this large class of codes a good lower bound for the minimum distance is available. ## 2.2 AG Codes Algebraic geometry codes (AG codes) were introduced by V.D. Goppa in [15]. Therefore they are sometimes also called geometric Goppa codes. As a motivation for the construction of these codes we first consider Reed-Solomon codes over \( {\mathbb{F}}_{q} \) . This important class of codes is well-known in coding theory for a long time. Algebraic geometry codes are a very natural generalization of Reed-Solomon codes. Let \( n = q - 1 \) and let \( \beta \in {\mathbb{F}}_{q} \) be a primitive element of the multiplicative group \( {\mathbb{F}}_{q}^{ \times } \) ; i.e., \( {\mathbb{F}}_{q}^{ \times } = \left\{ {\beta ,{\beta }^{2},\ldots ,{\beta }^{n} = 1}\right\} \) . For an integer \( k \) with \( 1 \leq k \leq n \) we consider the \( k \) -dimensional vector space \[ {\mathcal{L}}_{k} \mathrel{\text{:=}} \left\{ {f \in {\mathbb{F}}_{q}\left\lbrack X\right\rbrack \mid \deg f \leq k - 1}\right\} \] (2.1) and the evaluation map ev : \( {\mathcal{L}}_{k} \rightarrow {\mathbb{F}}_{q}^{n} \) given by \[ \operatorname{ev}\left( f\right) \mathrel{\text{:=}} \left( {f\left( \beta \right), f\left( {\beta }^{2}\right) ,\ldots, f\left( {\beta }^{n}\right) }\right) \in {\mathbb{F}}_{q}^{n}. \] (2.2) Obviously this map is \( {\mathbb{F}}_{q} \) -linear, and it is injective because a non-zero polynomial \( f \in {\mathbb{F}}_{q}\left\lbrack X\right\rbrack \) of degree \( < n \) has less than \( n \) zeros. Therefore \[ {C}_{k} \mathrel{\text{:=}} \left\{ {\left( {f\left( \beta \right), f\left( {\beta }^{2}\right) ,\ldots, f\left( {\beta }^{n}\right) }\right) \mid f \in {\mathcal{L}}_{k}}\right\} \] (2.3) is an \( \left\lbrack {n, k}\right\rbrack \) code over \( {\mathbb{F}}_{q} \) ; it is called an RS code (Reed-Solomon code). The weight of a codeword \( 0 \neq c = \operatorname{ev}\left( f\right) \in {C}_{k} \) is given by \[ \operatorname{wt}\left( c\right) = n - \left| \left\{ {i \in \{ 1,\ldots, n\} ;f\left( {\beta }^{i}\right) = 0}\right\} \right| \] \[ \geq n - \deg f \geq n - \left( {k - 1}\right) \text{.} \] Hence the minimum distance \( d \) of \( {C}_{k} \) satisfies the inequality \( d \geq n + 1 - k \) . On the other hand, \( d \leq n + 1 - k \) by the Singleton Bound. Thus Reed-Solomon codes are MDS codes over \( {\mathbb{F}}_{q} \) . Observe however that RS codes are short in comparison with the size of the alphabet \( {\mathbb{F}}_{q} \), since \( n = q - 1 \) . Now we introduce the notion of an algebraic geometry code. Let us fix some notation valid for the entire section. \( F/{\mathbb{F}}_{q} \) is an algebraic function field of genus \( g \) . \( {P}_{1},\ldots ,{P}_{n} \) are pairwise distinct places of \( F/{\mathbb{F}}_{q} \) of degree 1 . \( D = {P}_{1} + \ldots + {P}_{n} \) \( G \) is a divisor of \( F/{\mathbb{F}}_{q} \) such that \( \operatorname{supp}G \cap \operatorname{supp}D = \varnothing \) . Definition 2.2.1. The algebraic geometry code (or AG code) \( {C}_{\mathcal{L}}\left( {D, G}\right) \) associated with the divisors \( D \) and \( G \) is defined as \[ {C}_{\mathcal{L}}\left( {D, G}\right) \mathrel{\text{:=}} \left\{ {\left( {x\left( {P}_{1}\right) ,\ldots, x\left( {P}_{n}\right) }\right) \mid x \in \mathcal{L}\left( G\right) }\right\} \subseteq {\mathbb{F}}_{q}^{n}. 
\] Note that this definition makes sense: for \( x \in \mathcal{L}\left( G\right) \) we have \( {v}_{{P}_{i}}\left( x\right) \geq 0 \) \( \left( {i = 1,\ldots, n}\right) \) because \( \operatorname{supp}G \cap \operatorname{supp}D = \varnothing \) . The residue class \( x\left( {P}_{i}\right) \) of \( x \) modulo \( {P}_{i} \) is an element of the residue class field of \( {P}_{i} \) (see Definition 1.1.14). As \( \deg {P}_{i} = 1 \), this residue class field is \( {\mathbb{F}}_{q} \), so \( x\left( {P}_{i}\right) \in {\mathbb{F}}_{q} \) . As in (2.2) we can consider the evaluation map \( {\operatorname{ev}}_{D} : \mathcal{L}\left( G\right) \rightarrow {\mathbb{F}}_{q}^{n} \) given by \[ {\operatorname{ev}}_{D}\left( x\right) \mathrel{\text{:=}} \left( {x\left( {P}_{1}\right) ,\ldots, x\left( {P}_{n}\right) }\right) \in {\mathbb{F}}_{q}^{n}. \] (2.4) The evaluation map is \( {\mathbb{F}}_{q} \) -linear, and \( {C}_{\mathcal{L}}\left( {D, G}\right) \) is the image of \( \mathcal{L}\left( G\right) \) under this map. The analogy with the definition of Reed-Solomon codes (2.3) is obvious. In fact, choosing the function field \( F/{\mathbb{F}}_{q} \) and the divisors \( D \) and \( G \) in an appropriate manner, RS codes are easily seen to be a special case of AG codes, see Section 2.3. Definition 2.2.1 looks like a very artificial way to define certain codes over \( {\mathbb{F}}_{q} \) . The next theorem will show why these codes are interesting: one can calculate (or at least estimate) their parameters \( n, k \) and \( d \) by means of the Riemann-Roch Theorem, and one obtains a non-trivial lower bound for their minimum distance in a very general setting. Theorem 2.2.2. \( {C}_{\mathcal{L}}\left( {D, G}\right) \) is an \( \left\lbrack {n, k, d}\right\rbrack \) code with parameters \[ k = \ell \left( G\right) - \ell \left( {G - D}\right) \;\text{ and }\;d \geq n - \deg G. \] Proof. The evaluation map (2.4) is a surjective linear map from \( \mathcal{L}\left( G\right) \) to \( {C}_{\mathcal{L}}\left( {D, G}\right) \) with kernel \[ \operatorname{Ker}\left( {\operatorname{ev}}_{D}\right) = \left\{ {x \in \mathcal{L}\left( G\right) \mid {v}_{{P}_{i}}\left( x\right) > 0\text{ for }i = 1,\ldots, n}\right\} = \mathcal{L}\left( {G - D}\right) . \] It follows that \( k = \dim {C}_{\mathcal{L}}\left( {D, G}\right) = \dim \mathcal{L}\left( G\right) - \dim \mathcal{L}\left( {G - D}\right) = \ell \left( G\right) - \) \( \ell \left( {G - D}\right) \) . The assertion regarding the minimum distance \( d \) makes sense only if \( {C}_{\mathcal{L}}\left( {D, G}\right) \neq 0 \), so we will assume this. Choose an element \( x \in \mathcal{L}\left( G\right) \) with \( \operatorname{wt}\left( {e{v}_{D}\left( x\right) }\right) = d \) . Then exactly \( n - d \) places \( {P}_{{i}_{1}},\ldots ,{P}_{{i}_{n - d}} \) in the support of \( D \) are zeros of \( x \), so \[ 0 \neq x \in \mathcal{L}\left( {G - \left( {{P}_
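The computation above showing that Reed-Solomon codes are MDS can also be confirmed by brute force over a small field. A sketch (plain Python, over \( {\mathbb{F}}_{7} \) with primitive element \( \beta = 3 \); the parameters are illustrative choices, not from the text):

```python
from itertools import product

q, k = 7, 3
n = q - 1
beta = 3                                                # a primitive element of F_7^x
points = [pow(beta, i, q) for i in range(1, n + 1)]     # beta, beta^2, ..., beta^n

def evaluate(coeffs, x):                                # f(x) for f of degree <= k - 1 over F_q
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

codewords = [tuple(evaluate(f, x) for x in points)
             for f in product(range(q), repeat=k)]

d = min(sum(1 for c in cw if c != 0) for cw in codewords if any(cw))
print("RS code over F_7: [n, k, d] =", [n, k, d])       # expect d = n - k + 1 = 4
print("MDS (k + d = n + 1)?", k + d == n + 1)
```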
109_The rising sea Foundations of Algebraic Geometry
Definition 5.106
Definition 5.106. We denote by \( {\sigma }_{0} \) the automorphism of \( \left( {W, S}\right) \) given by \( {\sigma }_{0}\left( w\right) = {w}_{0}w{w}_{0} \) for all \( w \in W \) . If \( J \subseteq S \), we set \( {J}^{0} \mathrel{\text{:=}} {\sigma }_{0}\left( J\right) \) . We call two subsets \( J \) and \( K \) of \( S \) opposite, and we write \( J \) op \( K \), if \( K = {J}^{0} \) . We have already encountered \( {\sigma }_{0} \) in Chapter 1, where it occurred in several exercises as well as in Proposition 1.130 and Corollary 1.131. We will not use those results here except in our discussion of examples in Section 5.7.4. Here is an alternative characterization of opposite residues, which also justifies the definition of opposite types in Definition 5.106. Lemma 5.107. A J-residue \( \mathcal{R} \) and a \( K \) -residue \( \mathcal{S} \) of \( \mathcal{C} \) are opposite if and only if \( J = {K}^{0} \) and there is a chamber in \( \mathcal{R} \) that is opposite to some chamber in \( \mathcal{S} \) . If this is the case, then \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \), and \( {w}_{0}\left( J\right) {w}_{0} = \) \( {w}_{0}{w}_{0}\left( K\right) \) is the unique element of minimal length in \( \delta \left( {\mathcal{R},\mathcal{S}}\right) \) . Proof. Suppose there are chambers \( C \in \mathcal{R} \) and \( D \in \mathcal{S} \) with \( C \) op \( D \), i.e., \( \delta \left( {C, D}\right) = {w}_{0} \) . Applying Lemma 5.29, we obtain \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0}{W}_{K} \) . If \( \mathcal{R} \) and \( \mathcal{S} \) are opposite, then for each \( {D}^{\prime } \in \mathcal{S} \) there exists a \( {C}^{\prime } \in \mathcal{R} \) with \( \delta \left( {{C}^{\prime },{D}^{\prime }}\right) = {w}_{0} \) . This implies that \( \delta \left( {\mathcal{R},{D}^{\prime }}\right) = {W}_{J}{w}_{0} \) . Since this is true for every \( {D}^{\prime } \in \mathcal{S} \), it follows that \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} \) . Similarly, \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {w}_{0}{W}_{K} \) . Thus \( {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \), which implies that \( {W}_{J} = {w}_{0}{W}_{K}{w}_{0} = {W}_{{K}^{0}} \) and hence that \( J = {K}^{0} \) by the basic properties of standard parabolic subgroups. If, conversely, \( J = {K}^{0} \), then \( {W}_{J} = {w}_{0}{W}_{K}{w}_{0} \) ; hence \[ \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0}{W}_{K} = \left( {{w}_{0}{W}_{K}{w}_{0}}\right) {w}_{0}{W}_{K} = {w}_{0}{W}_{K}, \] and similarly \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} \) . This implies \( \delta \left( {{C}^{\prime },\mathcal{S}}\right) = {w}_{0}{W}_{K} \) for any \( {C}^{\prime } \in \mathcal{R} \) . (since \( \delta \left( {{C}^{\prime },\mathcal{S}}\right) \) is a left coset of \( {W}_{K} \) and is contained in \( \delta \left( {\mathcal{R},\mathcal{S}}\right) \) ), and similarly \( \delta \left( {\mathcal{R},{D}^{\prime }}\right) = {W}_{J}{w}_{0} \) for any \( {D}^{\prime } \in \mathcal{S} \) . But this just means that each \( {C}^{\prime } \in \mathcal{R} \) is opposite a chamber in \( \mathcal{S} \) and each \( {D}^{\prime } \in \mathcal{S} \) is opposite a chamber in \( \mathcal{R} \) . Hence the residues \( \mathcal{R} \) and \( \mathcal{S} \) are opposite, and the first part of the lemma is proved. 
Now assume that \( \mathcal{R} \) op \( \mathcal{S} \), so that \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \) by the argument above. Consider an arbitrary element \( {w}_{J}{w}_{0} \in {W}_{J}{w}_{0}\left( {{w}_{J} \in {W}_{J}}\right) \) . We have \( l\left( {{w}_{J}{w}_{0}}\right) = l\left( {w}_{0}\right) - l\left( {w}_{J}\right) \), so we minimize \( l\left( {{w}_{J}{w}_{0}}\right) \) by maximizing \( l\left( {w}_{J}\right) \) , i.e., by taking \( {w}_{J} = {w}_{0}\left( J\right) \) . Thus \( {w}_{0}\left( J\right) {w}_{0} \) is the unique element of minimal length in \( {W}_{J}{w}_{0} \) . Similarly, \( {w}_{0}{w}_{0}\left( K\right) \) is the element of minimal length in \( {w}_{0}{W}_{K} \) . Finally, we must have \( {w}_{0}\left( J\right) {w}_{0} = {w}_{0}{w}_{0}\left( K\right) \), since \( {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \) . ## *5.7.2 A Metric Characterization of Opposition It is natural to ask whether the opposition relation on residues admits a direct characterization in terms of distances between residues, as in the definition of opposition for chambers. In this optional subsection we will give such a characterization. Note first that the distance between residues makes sense, as a special case of the distance between subsets of a metric space. Namely, if \( \mathcal{R} \) and \( \mathcal{S} \) are residues, then \[ d\left( {\mathcal{R},\mathcal{S}}\right) \mathrel{\text{:=}} \min \{ d\left( {C, D}\right) \mid C \in \mathcal{R}, D \in \mathcal{S}\} . \] If \( \left( {\mathcal{C},\delta }\right) \) is the W-metric building associated to a simplicial building \( \Delta \), then this is a familiar concept. Indeed, we have \( \mathcal{R} = {\mathcal{C}}_{ \geq A} \) and \( \mathcal{S} = {\mathcal{C}}_{ \geq B} \) for some simplices \( A, B \in \Delta \), and \( d\left( {\mathcal{R},\mathcal{S}}\right) \) is the same as the gallery distance \( d\left( {A, B}\right) \) that we have worked with in earlier chapters. As usual, we will write \( d\left( {\mathcal{R}, D}\right) \) instead of \( d\left( {\mathcal{R},\{ D\} }\right) \) in case \( \mathcal{S} \) is a singleton. From the theory of projections, we know that \[ d\left( {\mathcal{R}, D}\right) = d\left( {{\operatorname{proj}}_{\mathcal{R}}D, D}\right) . \] The key to our metric characterization of opposition is the following lemma: Lemma 5.108. Let \( \mathcal{R} \) be a \( J \) -residue and \( D \) a chamber of \( \mathcal{C} \) . (1) \( \max \{ d\left( {C, D}\right) \mid C \in \mathcal{R}\} = {d}_{0}\left( J\right) + d\left( {\mathcal{R}, D}\right) \) . (2) \( d\left( {\mathcal{R}, D}\right) \leq {d}_{0} - {d}_{0}\left( J\right) \), with equality if and only if \( \mathcal{R} \) contains a chamber opposite \( D \) . Proof. Let \( {C}^{\prime } \mathrel{\text{:=}} {\operatorname{proj}}_{\mathcal{R}}D \) . Then by the gate property (Proposition 5.34) we have \[ d\left( {C, D}\right) = d\left( {C,{C}^{\prime }}\right) + d\left( {{C}^{\prime }, D}\right) \] for all \( C \in \mathcal{R} \) . This is maximal when \( d\left( {C,{C}^{\prime }}\right) \) is maximal. Since \( \delta \left( {\mathcal{R},{C}^{\prime }}\right) = \) \( {W}_{J} \), it follows that we maximize \( d\left( {C, D}\right) \) by taking \( C \in \mathcal{R} \) such that \( \delta \left( {C,{C}^{\prime }}\right) = \) \( {w}_{0}\left( J\right) \), in which case \( d\left( {C, D}\right) = {d}_{0}\left( J\right) + d\left( {{C}^{\prime }, D}\right) \) . This proves (1), and (2) follows immediately. 
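The opposition involution \( {\sigma }_{0} \) of Definition 5.106 is easy to compute explicitly in a small example. The sketch below (plain Python) works in \( W = {S}_{4} \), the Coxeter group of type \( {A}_{3} \), with \( S = \left\{ {{s}_{1},{s}_{2},{s}_{3}}\right\} \) the adjacent transpositions and \( {w}_{0} \) the order-reversing permutation; it verifies that \( {\sigma }_{0}\left( {s}_{i}\right) = {s}_{4 - i} \), so that for instance \( \left\{ {s}_{1}\right\} \) and \( \left\{ {s}_{3}\right\} \) are opposite types, and that \( {\sigma }_{0} \) is a length-preserving involution. The representation of group elements as permutation tuples is an implementation choice, not from the text:

```python
from itertools import permutations

n = 4                                            # W = S_4, Coxeter type A_3

def compose(p, q):                               # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

def length(p):                                   # Coxeter length = number of inversions
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def s(i):                                        # simple reflection s_i = (i, i+1), i = 1, ..., n-1
    t = list(range(n)); t[i - 1], t[i] = t[i], t[i - 1]
    return tuple(t)

w0 = tuple(reversed(range(n)))                   # longest element: the order-reversing permutation
assert length(w0) == n * (n - 1) // 2            # l(w0) = 6 for S_4

# sigma_0(w) = w0 w w0 permutes the generators: s_i -> s_{n-i}
for i in range(1, n):
    conj = compose(w0, compose(s(i), w0))
    assert conj == s(n - i)
    print(f"sigma_0(s_{i}) = s_{n - i}")

# sigma_0 is an involution and preserves length
for w in permutations(range(n)):
    sw = compose(w0, compose(w, w0))
    assert compose(w0, compose(sw, w0)) == w and length(sw) == length(w)
```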
We can now give our promised characterization of opposition. See Exercise 4.80 for the same result stated in the language of simplicial buildings. Proposition 5.109. Let \( \mathcal{R} \) be a \( J \) -residue, and let \( \mathcal{S} \) be a residue with \( \operatorname{rank}\mathcal{S} \geq \operatorname{rank}\mathcal{R} \) . Then the following conditions are equivalent. (i) \( \mathcal{R} \) op \( \mathcal{S} \) . (ii) \( d\left( {\mathcal{R},\mathcal{S}}\right) = {d}_{0} - {d}_{0}\left( J\right) \) . (iii) \( d\left( {\mathcal{R},\mathcal{S}}\right) = \max \left\{ {d\left( {\mathcal{R},{\mathcal{S}}^{\prime }}\right) \mid {\mathcal{S}}^{\prime }}\right. \) is a residue of \( \mathcal{C}\} \) . Proof. It is immediate from the definitions that the maximum on the right side of (iii) is equal to \[ \max \{ d\left( {\mathcal{R}, D}\right) \mid D \in \mathcal{C}\} \] By Lemma 5.108, this maximum is equal to \( {d}_{0} - {d}_{0}\left( J\right) \) . So (ii) and (iii) are equivalent. The last assertion of Lemma 5.107 shows that (i) implies (ii). To complete the proof, we assume that (ii) and (iii) hold, and we show that \( \mathcal{R} \) op \( \mathcal{S} \) . It follows from (ii) and (iii) that every chamber \( D \in \mathcal{S} \) satisfies \( d\left( {\mathcal{R}, D}\right) = \) \( {d}_{0} - {d}_{0}\left( J\right) \) and hence, by the lemma again, \( \mathcal{R} \) contains a chamber \( C \) op \( D \) . As in the proof of Lemma 5.107, we conclude that \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} = {w}_{0}{W}_{{J}^{0}} \) . On the other hand, if \( K \) is the type of \( \mathcal{S} \), then \( \delta \left( {\mathcal{R},\mathcal{S}}\right) \) contains \( {w}_{0}{W}_{K} \), so \( K \subseteq {J}^{0} \) . But \( \left| K\right| = \operatorname{rank}\mathcal{S} \geq \left| J\right| \), so we must have \( K = {J}^{0} \), and then \( \mathcal{R} \) op \( \mathcal{S} \) by Lemma 5.107. It is worth explicitly mentioning the following corollary of the proof: Corollary 5.110. Let \( \mathcal{R} \) be a \( J \) -residue and \( \mathcal{S} \) a \( K \) -residue. If \( \mathcal{R} \) and \( \mathcal{S} \) satisfy condition (iii) of the proposition, then \( K \subseteq {J}^{0} \) . ## 5.7.3 The Thin Case Next we want to investigate the opposition relation in an apartment, which we can identify with the standard thin building. Some basic properties are collected in the following lemma. Lemma 5.111. Let \( \left( {W,\delta }\right) \) be the standard thin building of type \( \left( {W, S}\right) \) (where \( W \) is finite). Then the following hold: (1) Any \( w \in W \) is opposite precisely one \( {w}^{\prime } \in W \), namely \( {w}^{\prime } = w{w}_{0} \) . (2) The map \( {\operatorname{op}}_{W} : W \rightarrow W \) defined by \( {\operatorname{op}}_{W}\left( w\right) \mathrel{\text{:=}} w{w}_{0} \) for all \( w \in W \) is a surjective \( {\sigma }_{0} \) -isometry. (3) Any residue \( w{W}_{J} \) in \( W \) (with \( w \in W \) and \( J \subset
1358_[陈松蹊&张慧铭] A Course in Fixed and High-dimensional Multivariate Analysis (2020)
Definition 1.2.7
Definition 1.2.7 A random vector \( \mathbf{X} \in {\mathbb{R}}^{p} \) is said to have a spherically contoured (SC) distribution if its characteristic function \( {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) = \phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{t}}\right) \) for a real function \( \phi \) and all \( \mathbf{t} \in {\mathbb{R}}^{p} \) . We write \( \mathbf{X} \sim {\mathbf{S}}_{p}\left( \phi \right) \) . Example : \( \mathbf{X} \sim {N}_{p}\left( {\mathbf{0},{\mathbf{I}}_{p}}\right) ,{\phi }_{\mathbf{X}}\left( \mathbf{t}\right) = {e}^{-\frac{1}{2}{\mathbf{t}}^{\mathbf{T}}\mathbf{t}} \) . So, \( \mathbf{X} \sim {\mathbf{S}}_{p}\left( \phi \right) \) with \( \phi \left( u\right) = {e}^{-\frac{u}{2}} \) . Lemma 1.2.2 Let \( f\left( {\cdot , \cdot }\right) \) be a bivariate function, \( \mathbf{a} = {\left( {a}_{1},\ldots ,{a}_{p}\right) }^{T} \neq \mathbf{0} \) . Then, \[ {\int }_{{\mathbb{R}}^{p}}f\left( {\mathop{\sum }\limits_{{i = 1}}^{p}{a}_{i}{x}_{i},\mathop{\sum }\limits_{{i = 1}}^{p}{x}_{i}^{2}}\right) d{x}_{1}\cdots d{x}_{p} = {\int }_{{\mathbb{R}}^{p}}f\left( {\parallel a\parallel {x}_{1},\mathop{\sum }\limits_{{i = 1}}^{p}{x}_{i}^{2}}\right) d{x}_{1}\cdots d{x}_{p}. \] Proof : Let \( {\gamma }_{1} = {\left( \frac{{a}_{1}}{\parallel a\parallel },\ldots ,\frac{{a}_{p}}{\parallel a\parallel }\right) }^{T} \neq \mathbf{0} \), so that \( {\gamma }_{1}^{T}{\gamma }_{1} = 1 \) . By the Gram-Schmidt orthogonalization procedure, we can construct \( {\gamma }_{2},\ldots ,{\gamma }_{p} \) such that \( {\gamma }_{i}^{T}{\gamma }_{j} = {\delta }_{ij} \) for all \( i, j \in \{ 1,\ldots, p\} \) . Define \( \mathbf{\Gamma } = {\left( {\gamma }_{1},\ldots ,{\gamma }_{p}\right) }^{T} \) ; then clearly \( {\mathbf{\Gamma }}^{\mathbf{T}}\mathbf{\Gamma } = \mathbf{\Gamma }{\mathbf{\Gamma }}^{\mathbf{T}} = \mathbf{I} \) . Let \( \mathbf{y} = \mathbf{\Gamma }\mathbf{x} \), so that \( {\mathbf{y}}^{\mathbf{T}}\mathbf{y} = {\mathbf{x}}^{\mathbf{T}}{\mathbf{\Gamma }}^{\mathbf{T}}\mathbf{\Gamma }\mathbf{x} = {\mathbf{x}}^{\mathbf{T}}\mathbf{x} \) . Then, \( \sum {x}_{i}^{2} = {\mathbf{x}}^{\mathbf{T}}\mathbf{x} = {\mathbf{y}}^{\mathbf{T}}\mathbf{y} = \sum {y}_{i}^{2} \) . Thus, \( {y}_{1} = {\gamma }_{1}^{\mathbf{T}}\mathbf{x} = \sum {a}_{i}{x}_{i}/\parallel \mathbf{a}\parallel \) and the Jacobian is \( \left| \mathbf{J}\right| = \left| {\det \mathbf{\Gamma }}\right| = 1 \) . Hence, \[ {\int }_{{\mathbb{R}}^{p}}f\left( {\sum {a}_{i}{x}_{i},\sum {x}_{i}^{2}}\right) d{x}_{1}\cdots d{x}_{p} = {\int }_{{\mathbb{R}}^{p}}f\left( {\parallel \mathbf{a}\parallel {y}_{1},\sum {y}_{i}^{2}}\right) d{y}_{1}\cdots d{y}_{p}. \] Example : Let \( \mathbf{Z} \sim {N}_{p}\left( {\mathbf{0},{\mathbf{I}}_{p}}\right) \) and \( m{S}^{2} \sim {\chi }_{m}^{2} \), which is independent of \( \mathbf{Z} \) . Consider \( \mathbf{W} = {\left( {\mathbf{Z}}^{T}, m{S}^{2}\right) }^{T} \) and the transformation \( \mathbf{Y} = \frac{\mathbf{Z}}{S} \) . As \( \mathbf{Y} \) is multivariate \( t \) -distributed with \( m \) degrees of freedom, \( \mathbf{Y} \) has a density \[ f\left( \mathbf{y}\right) = \frac{\Gamma \left( \frac{m + p}{2}\right) }{\Gamma \left( \frac{m}{2}\right) {m}^{\frac{p}{2}}{\pi }^{\frac{p}{2}}}{\left( 1 + \frac{{\mathbf{y}}^{\mathbf{T}}\mathbf{y}}{m}\right) }^{-\frac{m + p}{2}}. 
\] Clearly, \( E\left( \mathbf{Y}\right) = E\left( {\frac{1}{S}E\left( \mathbf{Z}\right) }\right) = 0,\operatorname{Var}\left( \mathbf{Y}\right) = E\operatorname{Var}\left( {\frac{1}{S}\mathbf{Z} \mid S}\right) = E\left( {\frac{1}{{S}^{2}}{I}_{p}}\right) = \frac{m}{m - 2}{I}_{p} \), since \( E\left( \frac{1}{{S}^{2}}\right) = \frac{m}{m - 2}. \) Clearly the density is constant on the sphere: \( {\mathbf{Y}}^{\mathbf{T}}\mathbf{Y} = c,\forall c \in \mathbb{R} \) . Applying Lemma 1.2.2, the characteristic function of \( \mathbf{Y} \) is, by letting \( {c}_{m, p} = \frac{\Gamma \left( \frac{m + p}{2}\right) }{\Gamma \left( \frac{m}{2}\right) {m}^{\frac{p}{2}}{\pi }^{\frac{p}{2}}} \) , \[ {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) = {c}_{m, p}\int \cdots \int {e}^{i{\mathbf{t}}^{\mathbf{T}}\mathbf{y}}{\left( 1 + \frac{\sum {y}_{i}^{2}}{m}\right) }^{-\frac{m + p}{2}}d{y}_{1}\cdots d{y}_{p} \] \[ = {c}_{m, p}\int \cdots \int {e}^{i\parallel \mathbf{t}\parallel {y}_{1}}{\left( 1 + \frac{\sum {y}_{i}^{2}}{m}\right) }^{-\frac{m + p}{2}}d{y}_{1}\cdots d{y}_{p} \] So, \( \mathbf{Y} \) is a spherically contoured distribution with \[ \phi \left( u\right) = {c}_{m, p}\int \cdots \int {e}^{i\sqrt{u}{y}_{1}}{\left( 1 + \frac{\sum {y}_{i}^{2}}{m}\right) }^{-\frac{m + p}{2}}d{y}_{1}\cdots d{y}_{p} \] which is a function of \( \parallel \mathbf{t}\parallel \) . Hence, the \( t \) -distribution is spherically contoured distributed. A key member of \( {\mathbf{S}}_{p}\left( \phi \right) \) is \( {\mathbf{U}}^{\left( p\right) } \) the uniformly distributed random vector on the unit sphere \( {\mathbf{S}}_{p} \) in \( {\mathbb{R}}^{p} \) . Let us now gain the details. Lemma 1.2.3 Let \( g\left( \cdot \right) \) be a Borel function in \( \mathbb{R} \) and \( \mathbf{a} = {\left( {a}_{1},\ldots ,{a}_{p}\right) }^{\mathbf{T}} \neq 0 \) . Then \[ {\int }_{{\mathbf{S}}_{pc} : {\mathbf{x}}^{\mathbf{T}}\mathbf{x} = {c}^{2}}g\left( {{\mathbf{a}}^{\mathbf{T}}\mathbf{x}}\right) d{\mathbf{S}}_{pc} = \frac{2{c}^{p - 1}{\pi }^{\frac{p - 1}{2}}}{\Gamma \left( \frac{p - 1}{2}\right) }{\int }_{-\frac{\pi }{2}}^{\frac{\pi }{2}}g\left( {c\parallel \mathbf{a}\parallel \sin \theta }\right) {\cos }^{p - 2}{\theta d\theta }. \] Note that the integral on the LHS is restricted on \( {\mathbf{S}}_{pc} : {\mathbf{x}}^{\mathbf{T}}\mathbf{x} = {c}^{2} \) . Proof : Let \( \mathbf{T} = \left( \begin{matrix} {t}_{11} & \cdots & {t}_{1p} \\ \vdots & \ddots & \vdots \\ {t}_{p1} & \cdots & {t}_{pp} \end{matrix}\right) \) be an orthogonal matrix such that its first row \( \left( {{t}_{11},\ldots ,{t}_{1p}}\right) = \) \( \left( {\frac{{a}_{1}}{\parallel \mathbf{a}\parallel },\cdots ,\frac{{a}_{p}}{\parallel \mathbf{a}\parallel }}\right) \) . Let \( \mathbf{y} = \mathbf{T}\mathbf{x} \) . Then, \( {\mathbf{y}}^{\mathbf{T}}\mathbf{y} = {\mathbf{x}}^{\mathbf{T}}\mathbf{x} \) and \( {\mathbf{y}}_{\mathbf{1}} = {\mathbf{a}}^{\mathbf{T}}\mathbf{x}/\parallel \mathbf{a}\parallel \) . Choose \( {x}_{1},\ldots ,{x}_{p - 1} \) to be the only \( p - 1 \) independent variables on the sphere \( {\mathbf{S}}_{pc} \) . Without loss of generality (WLOG), choose \( {\mathbf{y}}_{\mathbf{1}},\ldots ,{\mathbf{y}}_{\mathbf{p} - \mathbf{1}} \) be the independent variables on the \( {\mathbf{{TS}}}_{pc} \) . Clearly, \( {x}_{p} = \) \( \sqrt{{c}^{2} - {x}_{1}^{2} - \cdots - {x}_{p - 1}^{2}} \) . 
Thus,
\[ \left( \begin{matrix} {y}_{1} \\ \vdots \\ {y}_{p - 1} \end{matrix}\right) = \left( \begin{matrix} {t}_{1,1} & \cdots & {t}_{1, p} \\ \vdots & & \vdots \\ {t}_{p - 1,1} & \cdots & {t}_{p - 1, p} \end{matrix}\right) \left( \begin{matrix} {x}_{1} \\ \vdots \\ {x}_{p} \end{matrix}\right) \]
\[ = \left( \begin{matrix} {t}_{1,1}{x}_{1} + \cdots + {t}_{1, p - 1}{x}_{p - 1} + {t}_{1, p}\sqrt{{c}^{2} - \mathop{\sum }\limits_{{i = 1}}^{{p - 1}}{x}_{i}^{2}} \\ \vdots \\ {t}_{p - 1,1}{x}_{1} + \cdots + {t}_{p - 1, p - 1}{x}_{p - 1} + {t}_{p - 1, p}\sqrt{{c}^{2} - \mathop{\sum }\limits_{{i = 1}}^{{p - 1}}{x}_{i}^{2}} \end{matrix}\right) . \]
The Jacobian is
\[ \left| \frac{\partial \left( {{y}_{1}\cdots {y}_{p - 1}}\right) }{\partial \left( {{x}_{1}\cdots {x}_{p - 1}}\right) }\right| = \left| \begin{matrix} {t}_{1,1} - \frac{{t}_{1, p}{x}_{1}}{{x}_{p}} & \cdots & {t}_{1, p - 1} - \frac{{t}_{1, p}{x}_{p - 1}}{{x}_{p}} \\ \vdots & & \vdots \\ {t}_{p - 1,1} - \frac{{t}_{p - 1, p}{x}_{1}}{{x}_{p}} & \cdots & {t}_{p - 1, p - 1} - \frac{{t}_{p - 1, p}{x}_{p - 1}}{{x}_{p}} \end{matrix}\right| . \]
Note that
\[ 1 = \left| \mathbf{T}\right| = \left| \begin{matrix} {t}_{1,1} & \cdots & {t}_{1, p - 1} & {t}_{1, p} \\ \vdots & \ddots & & \vdots \\ {t}_{p - 1,1} & \cdots & {t}_{p - 1, p - 1} & {t}_{p - 1, p} \\ {t}_{p,1} & \cdots & {t}_{p, p - 1} & {t}_{p, p} \end{matrix}\right| \]
\[ = \left| \begin{matrix} {t}_{1,1} - \frac{{t}_{1, p}{x}_{1}}{{x}_{p}} & \cdots & {t}_{1, p - 1} - \frac{{t}_{1, p}{x}_{p - 1}}{{x}_{p}} & {t}_{1, p} \\ \vdots & & \vdots & \vdots \\ {t}_{p - 1,1} - \frac{{t}_{p - 1, p}{x}_{1}}{{x}_{p}} & \cdots & {t}_{p - 1, p - 1} - \frac{{t}_{p - 1, p}{x}_{p - 1}}{{x}_{p}} & {t}_{p - 1, p} \\ {t}_{p,1} - \frac{{t}_{p, p}{x}_{1}}{{x}_{p}} & \cdots & {t}_{p, p - 1} - \frac{{t}_{p, p}{x}_{p - 1}}{{x}_{p}} & {t}_{p, p} \end{matrix}\right| = : \left| {\mathbf{T}\left( p\right) }\right| , \]
obtained by multiplying the \( p \) -th column by \( - {x}_{j}/{x}_{p} \) and adding it to the \( j \) -th column, for \( j = 1,\ldots, p - 1 \) . Hence, expanding \( \left| {\mathbf{T}\left( p\right) }\right| \) along its \( p \) -th row,
\[ \left| {\mathbf{T}\left( p\right) }\right| = {t}_{p, p}\left| \frac{\partial \left( {{y}_{1}\cdots {y}_{p - 1}}\right) }{\partial \left( {{x}_{1}\cdots {x}_{p - 1}}\right) }\right| + \mathop{\sum }\limits_{{j = 1}}^{{p - 1}}{\left( -1\right) }^{p + j}\left( {{t}_{p, j} - \frac{{t}_{p, p}{x}_{j}}{{x}_{p}}}\right) \left| {\mathbf{T}\left( {p \mid j}\right) }\right| , \]
where \( \left| {\mathbf{T}\left( {p \mid j}\right) }\right| \) denotes the minor obtained by removing the \( j \) -th column and the \( p \) -th row from \( \mathbf{T}\left( p\right) \) . Thus,
\[ 1 = {t}_{p, p}\left| \frac{\partial \left( {{y}_{1}\cdots {y}_{p - 1}}\right) }{\partial \left( {{x}_{1}\cdots {x}_{p - 1}}\right) }\right| + \mathop{\sum }\limits_{{j = 1}}^{{p - 1}}{\left( -1\right) }^{p + j}{t}_{p, j}\left| {\mathbf{T}\left( {p \mid j}\right) }\right| + \mathop{\sum }\limits_{{j = 1}}^{{p - 1}}{\left( -1\right) }^{p + j + 1}\frac{{t}_{p, p}{x}_{j}}{{x}_{p}}\left| {\mathbf{T}\left( {p \mid j}\right) }\right| . \]
Note that
\[ 1 = \left| \mathbf{T}\right| = \mathop{\sum }\limits_{{j = 1}}^{p}{\left( -1\right) }^{p + j}{t}_{p, j}\left| {\mathbf{T}\left( {p \mid j}\right) }\right| \]
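The spherical symmetry claimed in the multivariate \( t \) example above is easy to check numerically. The following sketch is an illustration added here (it is not part of the original text and assumes NumPy is available): it simulates \( \mathbf{Y} = \mathbf{Z}/S \), verifies \( \operatorname{Var}\left( \mathbf{Y}\right) \approx \frac{m}{m-2}{I}_{p} \), and checks that the empirical characteristic function takes approximately the same value at any two vectors \( \mathbf{t} \) with the same norm.

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n = 3, 10, 200_000

Z = rng.standard_normal((n, p))               # Z ~ N_p(0, I_p)
S = np.sqrt(rng.chisquare(m, size=n) / m)     # m S^2 ~ chi^2_m, independent of Z
Y = Z / S[:, None]                            # multivariate t with m degrees of freedom

# Var(Y) should be close to (m / (m - 2)) I_p
print(np.cov(Y, rowvar=False).round(2), m / (m - 2))

# Empirical characteristic function depends on t only through ||t||
def ecf(t):
    return np.mean(np.exp(1j * Y @ t))

t1 = np.array([1.0, 0.0, 0.0])
t2 = np.array([0.0, 0.6, 0.8])                # same Euclidean norm as t1
print(ecf(t1), ecf(t2))                       # approximately equal
```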
1129_(GTM35)Several Complex Variables and Banach Algebras
Definition 16.5
Definition 16.5. \[ {\mathcal{D}}_{{A}^{ * }} = \left\{ {x \in {H}_{2} \mid \exists {x}^{ * } \in {H}_{1}\text{ with }\left( {{Au}, x}\right) = \left( {u,{x}^{ * }}\right) \text{ for all }u \in {\mathcal{D}}_{A}}\right\} . \] Since \( {\mathcal{D}}_{A} \) is dense, \( {x}^{ * } \) is unique if it exists. For \( x \in {\mathcal{D}}_{{A}^{ * }} \), define \( {A}^{ * }x = {x}^{ * } \) . \( {A}^{ * } \) is called the adjoint of \( A \) . \( {\mathcal{D}}_{{A}^{ * }} \) is a linear space and \( {A}^{ * } \) is a linear transformation of \( {\mathcal{D}}_{{A}^{ * }} \rightarrow {H}_{1} \) . Proposition. If \( A \) is closed, then \( {\mathcal{D}}_{{A}^{ * }} \) is dense in \( {H}_{2} \) . Moreover, if \( \beta \in {H}_{1} \) and if for some constant \( \delta \) \[ \left| \left( {{A}^{ * }f,\beta }\right) \right| \leq \delta \parallel f\parallel \] for all \( f \in {\mathcal{D}}_{{A}^{ * }} \), then \( \beta \in {\mathcal{D}}_{A} \) . For the proof of this proposition and related matters the reader may consult, e.g., F. Riesz and B. Sz.-Nagy, Leçons d'analyse fonctionnelle, Budapest, 1953, Chap. 8. Consider now three Hilbert spaces \( {H}_{1},{H}_{2} \), and \( {H}_{3} \) and densely defined and closed linear operators \[ T : {H}_{1} \rightarrow {H}_{2}\;\text{ and }\;S : {H}_{2} \rightarrow {H}_{3}. \] Assume that (3) \[ S \cdot T = 0 \] i.e., for \( f \in {\mathcal{D}}_{T},{Tf} \in {\mathcal{D}}_{S} \) and \( S\left( {Tf}\right) = 0 \) . We write \( {\left( u, v\right) }_{j} \) for the inner product of \( u \) and \( v \) in \( {H}_{j}, j = 1,2,3 \), and similarly \( \parallel u{\parallel }_{j} \) for the norm in \( {H}_{j} \) . Theorem 16.2. Assume \( \exists \) a constant \( c > 0 \) such that for all \( f \in {\mathcal{D}}_{{T}^{ * }} \cap {\mathcal{D}}_{S} \) , (*) \[ {\begin{Vmatrix}{T}^{ * }f\end{Vmatrix}}_{1}^{2} + \parallel {Sf}{\parallel }_{3}^{2} \geq {c}^{2}\parallel f{\parallel }_{2}^{2}. \] Then if \( g \in {H}_{2} \) with \( {Sg} = 0,\exists u \in {\mathcal{D}}_{T} \) such that (4) \[ {Tu} = g \] and (5) \[ \parallel u{\parallel }_{1} \leq \frac{1}{c}\parallel g{\parallel }_{2} \] Proof. Put \( {N}_{S} = \left\{ {h \in {\mathcal{D}}_{S} \mid {Sh} = 0}\right\} \) . \( {N}_{S} \) is a closed subspace of \( {H}_{2} \) . (Why?) We claim that if \( g \in {N}_{S} \), then (6) \[ \left| {\left( g, f\right) }_{2}\right| \leq \frac{1}{c}{\begin{Vmatrix}{T}^{ * }f\end{Vmatrix}}_{1} \cdot \parallel g{\parallel }_{2} \] for all \( f \in {\mathcal{D}}_{{T}^{ * }} \) . To show this, fix \( f \in {\mathcal{D}}_{{T}^{ * }} \) and write \[ f = {f}^{\prime } + {f}^{\prime \prime },\text{ where }{f}^{\prime } \bot {N}_{S},{f}^{\prime \prime } \in {N}_{S}. \] By (*) we have \( {\begin{Vmatrix}{T}^{ * }{f}^{\prime \prime }\end{Vmatrix}}_{1} \geq c{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2} \) . Then \[ \left| {\left( f, g\right) }_{2}\right| = \left| {\left( {f}^{\prime \prime }, g\right) }_{2}\right| \leq \parallel g{\parallel }_{2} \cdot {\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2} \leq \frac{1}{c}\parallel g{\parallel }_{2} \cdot {\begin{Vmatrix}{T}^{ * }{f}^{\prime \prime }\end{Vmatrix}}_{1}. \] But \( {T}^{ * }{f}^{\prime } = 0 \) : for \( h \in {\mathcal{D}}_{T} \) we have \( \left( {{Th},{f}^{\prime }}\right) = 0 \), because \( {Th} \in {N}_{S} \) by (3) while \( {f}^{\prime } \bot {N}_{S} \) ; hence \( {f}^{\prime } \in {\mathcal{D}}_{{T}^{ * }} \) with \( {T}^{ * }{f}^{\prime } = 0 \) . Hence \( {T}^{ * }f = {T}^{ * }{f}^{\prime \prime } \), and so (6) holds, as claimed. 
We now define a linear functional \( L \) on the range of \( {T}^{ * } \) in \( {H}_{1} \) by \[ L\left( {{T}^{ * }f}\right) = {\left( f, g\right) }_{2}, f \in {\mathcal{D}}_{{T}^{ * }}, g\text{ fixed in }{N}_{S}. \] By (6), then, \[ \left| {L\left( {{T}^{ * }f}\right) }\right| \leq \frac{1}{c}\parallel g{\parallel }_{2}{\begin{Vmatrix}{T}^{ * }f\end{Vmatrix}}_{1}. \] It follows that \( L \) is well defined on the range of \( {T}^{ * } \) and that \( \parallel L\parallel \leq \left( {1/c}\right) \parallel g{\parallel }_{2} \) . Hence \( \exists u \in {H}_{1} \) representing \( L \) ; i.e., \[ L\left( {{T}^{ * }f}\right) = {\left( {T}^{ * }f, u\right) }_{1} \] and \( \parallel u{\parallel }_{1} = \parallel L\parallel \) . It follows by the proposition that \( u \in {\mathcal{D}}_{T} \), and \[ {\left( f, g\right) }_{2} = {\left( {T}^{ * }f, u\right) }_{1} = {\left( f, Tu\right) }_{2}, \] for all \( f \in {\mathcal{D}}_{{T}^{ * }} \) . Hence \( g = {Tu} \), and \( \parallel u{\parallel }_{1} \leq \left( {1/c}\right) \parallel g{\parallel }_{2} \) . Thus (4) and (5) are established. It is now our task to verify hypothesis (*) for our operators \( {T}_{0} \) and \( {S}_{0} \) in order to apply Theorem 16.2 to the proof of Theorem 16.1. This means that we must find a lower bound for \( {\begin{Vmatrix}{T}_{0}^{ * }f\end{Vmatrix}}^{2} + {\begin{Vmatrix}{S}_{0}f\end{Vmatrix}}^{2} \) . For this purpose it is advantageous to use not the usual inner product on \( {L}^{2}\left( \Omega \right) \) but an equivalent inner product based on a weight function. Let \( \phi \) be a smooth positive function defined in a neighborhood of \( \bar{\Omega } \) . Put \( {H}_{1} = {L}^{2}\left( \Omega \right) \) with the inner product \[ {\left( f, g\right) }_{1} = {\int }_{\Omega }f\bar{g}{e}^{-\phi }{dV}. \] Similarly, let \( {H}_{2} \) be the Hilbert space obtained by imposing on \( {L}_{0,1}^{2}\left( \Omega \right) \) the inner product \[ {\left( \mathop{\sum }\limits_{{j = 1}}^{n}{f}_{j}d{\bar{z}}_{j},\mathop{\sum }\limits_{{j = 1}}^{n}{g}_{j}d{\bar{z}}_{j}\right) }_{2} = {\int }_{\Omega }\left( {\mathop{\sum }\limits_{{j = 1}}^{n}{f}_{j}{\bar{g}}_{j}}\right) {e}^{-\phi }{dV}. \] Finally define \( {H}_{3} \) in an analogous way by putting a new inner product on \( {L}_{0,2}^{2}\left( \Omega \right) \) . Then \[ {T}_{0} : {H}_{1} \rightarrow {H}_{2},\;{S}_{0} : {H}_{2} \rightarrow {H}_{3}. \] It is easy to verify that \( {\mathcal{D}}_{{T}_{0}},{\mathcal{D}}_{{S}_{0}} \) are dense subspaces of \( {H}_{1} \) and \( {H}_{2} \), respectively, and that \( {T}_{0} \) and \( {S}_{0} \) are closed operators. Our basic result is the following. Define \( {C}_{0,1}^{1}\left( \bar{\Omega }\right) = \{ f = \mathop{\sum }\limits_{{j = 1}}^{n}{f}_{j}d{\bar{z}}_{j} \mid \) each \( {f}_{j} \in {C}^{1} \) in a neighborhood of \( \bar{\Omega }\} \) . Theorem 16.3. Fix \( f \) in \( {C}_{0,1}^{1}\left( \bar{\Omega }\right) \) with \( f \in {\mathcal{D}}_{{T}_{0}^{ * }} \cap {\mathcal{D}}_{{S}_{0}} \) . Then 
(7)
\[ {\begin{Vmatrix}{T}_{0}^{ * }f\end{Vmatrix}}_{1}^{2} + {\begin{Vmatrix}{S}_{0}f\end{Vmatrix}}_{3}^{2} = \mathop{\sum }\limits_{{j, k}}{\int }_{\Omega }{f}_{j}{\bar{f}}_{k}\frac{{\partial }^{2}\phi }{\partial {z}_{j}\partial {\bar{z}}_{k}}{e}^{-\phi }{dV} + \mathop{\sum }\limits_{{j, k}}{\int }_{\Omega }{\left| \frac{\partial {f}_{k}}{\partial {\bar{z}}_{j}}\right| }^{2}{e}^{-\phi }{dV} + \mathop{\sum }\limits_{{j, k}}{\int }_{\partial \Omega }{f}_{j}{\bar{f}}_{k}\frac{{\partial }^{2}\rho }{\partial {z}_{j}\partial {\bar{z}}_{k}}{e}^{-\phi }{dS}, \]
\( {dS} \) denoting the element of surface area on \( \partial \Omega \) . Suppose for the moment that Theorem 16.3 has been established. Put \[ \phi \left( z\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {z}_{j}\right| }^{2} = {\left| z\right| }^{2}. \] Then \( {\partial }^{2}\phi /\partial {z}_{j}\partial {\bar{z}}_{k} = 0 \) if \( j \neq k, = 1 \) if \( j = k \) . The first integral on the right in (7) is now \[ \mathop{\sum }\limits_{{j = 1}}^{n}{\int }_{\Omega }{\left| {f}_{j}\right| }^{2}{e}^{-\phi }{dV} = \parallel f{\parallel }_{2}^{2}. \] The second integral is evidently \( \geq 0 \) . Now \[ \mathop{\sum }\limits_{{j, k}}\frac{{\partial }^{2}\rho }{\partial {z}_{j}\partial {\bar{z}}_{k}}{f}_{j}{\bar{f}}_{k} \geq 0\;\text{ if }\mathop{\sum }\limits_{j}\frac{\partial \rho }{\partial {z}_{j}}{f}_{j} = 0\text{ on }\partial \Omega , \] by (2). Hence (7) gives (8) \[ {\begin{Vmatrix}{T}_{0}^{ * }f\end{Vmatrix}}_{1}^{2} + {\begin{Vmatrix}{S}_{0}f\end{Vmatrix}}_{3}^{2} \geq \parallel f{\parallel }_{2}^{2}, \] if (9) \[ \mathop{\sum }\limits_{j}\frac{\partial \rho }{\partial {z}_{j}}{f}_{j} = 0\text{ on }\partial \Omega . \] We shall show below that (9) holds whenever \( f \in {\mathcal{D}}_{{T}_{0}^{ * }} \cap {\mathcal{D}}_{{S}_{0}} \) and \( f \) is \( {C}^{1} \) in a neighborhood of \( \bar{\Omega } \) . Thus Theorem 16.3 implies that (8) holds for each smooth \( f \) in \( {\mathcal{D}}_{{T}_{0}^{ * }} \cap {\mathcal{D}}_{{S}_{0}} \) . We now quote a result from the theory of partial differential operators which seems plausible and is rather technical. We refer for its proof to [39], Proposition 2.1.1. Proposition. Let \( f \in {\mathcal{D}}_{{T}_{0}^{ * }} \cap {\mathcal{D}}_{{S}_{0}} \) (with no smoothness assumptions). Then \( \exists \) a sequence \( \left\{ {f}_{n}\right\} \) with \( {f}_{n} \in {\mathcal{D}}_{{T}_{0}^{ * }} \cap {\mathcal{D}}_{{S}_{0}} \) and each \( {f}_{n} \) of class \( {C}^{1} \) in a neighborhood of \( \bar{\Omega } \) such that as \( n \rightarrow \infty \) , \[ {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{2} \rightarrow 0,\;{\begin{Vmatrix}{T}_{0}^{ * }{f}_{n} - {T}_{0}^{ * }f\end{Vmatrix}}_{1} \rightarrow 0,\;{\begin{Vmatrix}{S}_{0}{f}_{n} - {S}_{0}f\end{Vmatrix}}_{3} \rightarrow 0. \] Since (8) holds when \( f \) is smooth, the proposition gives that (8) holds for all \( f \in {\mathcal{D}}_{{T}_{0}^{ * }} \cap {\mathcal{D}}_{{S}_{0}} \) . Theorem 16.2 now applies to \( {T}_{0} \) and \( {S}_{0} \) with \( c = 1 \) . It follows from (4) and (5) that if \( g = \mathop{\sum }\limits_{{j = 1}}^{n}{g}_{j}d{\bar{z}}_{j} \in {H}_{2} \), and if \( {S}_{0}g = 0 \), then \( \exists u \) in \( {H}_{1} \) with \( {T}_{0}u = g \) and \( \parallel u{\parallel }_{1} \leq \parallel g{\parallel }_{2} \) . Thus \[ {\int }_{\Omega }{\left| u\right| }^{2}{e}^{-\phi }{dV
1097_(GTM253)Elementary Functional Analysis
Definition 1.1
Definition 1.1. Let \( X \) be a vector space over either the scalar field \( \mathbb{R} \) of real numbers or the scalar field \( \mathbb{C} \) of complex numbers. Suppose we have a function \( \parallel \cdot \parallel : X \rightarrow \lbrack 0,\infty ) \) such that (1) \( \;\parallel x\parallel = 0 \) if and only if \( x = 0 \) , (2) \( \parallel x + y\parallel \leq \parallel x\parallel + \parallel y\parallel \) for all \( x, y \in X \), and (3) \( \parallel {\alpha x}\parallel = \left| \alpha \right| \parallel x\parallel \) for all scalars \( \alpha \) and vectors \( x \) . We call \( \left( {X,\parallel \cdot \parallel }\right) \) a normed linear space. Property (2) is called the triangle inequality, and property (3) is referred to as homogeneity. The reverse triangle inequality, \[ \parallel x + y\parallel \geq \left| {\parallel x\parallel - \parallel y\parallel }\right| , \] follows easily from (2); see Exercise 1.1. We give some examples of normed linear spaces. In these examples we won't give the details of the verification that the norm satisfies these defining properties. This verification is straightforward in some cases, while in others it may already be known to the reader or will be outlined in an exercise. Example 1.2. Let \( X = {\mathbb{C}}^{n} \equiv \left\{ {\left( {{z}_{1},{z}_{2},\ldots ,{z}_{n}}\right) : {z}_{j} \in \mathbb{C}}\right\} \) with \[ \begin{Vmatrix}\left( {{z}_{1},{z}_{2},\ldots ,{z}_{n}}\right) \end{Vmatrix} = {\left( \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {z}_{j}\right| }^{2}\right) }^{\frac{1}{2}}; \] this is called the Euclidean norm. The Euclidean space \( {\mathbb{R}}^{n} \) is similarly defined; in this case we restrict to real scalars. Example 1.3. Let \( X = {\mathbb{C}}^{n} \) with \( \begin{Vmatrix}\left( {{z}_{1},{z}_{2},\ldots ,{z}_{n}}\right) \end{Vmatrix} = \max \left\{ {\left| {z}_{j}\right| : 1 \leq j \leq n}\right\} \) . Example 1.4. Let \( Y = \left\lbrack {0,1}\right\rbrack \), or more generally any compact Hausdorff space, and let \( C\left( Y\right) \) be the vector space of continuous, complex-valued functions on \( Y \), under pointwise addition and scalar multiplication. Define a norm on \( C\left( Y\right) \) by \( \parallel f\parallel = \max \{ \left| {f\left( y\right) }\right| : y \in Y\} \) . This (specifically \( C\left\lbrack {a, b}\right\rbrack \), endowed with the metric which defines the distance between functions \( f \) and \( g \) to be \( \mathop{\max }\limits_{{a \leq x \leq b}}\left| {f\left( x\right) - g\left( x\right) }\right| \) ), was one of the important examples that Fréchet put forth in his 1906 dissertation. Example 1.5. Choose a value of \( p \geq 1 \), and let \( {\ell }^{p} = {\ell }^{p}\left( \mathbb{N}\right) \) denote the set of all sequences \( {\left\{ {a}_{n}\right\} }_{n = 1}^{\infty } \) of complex numbers (indexed by the positive integers \( \mathbb{N} \) ) for which \( \mathop{\sum }\limits_{1}^{\infty }{\left| {a}_{n}\right| }^{p} < \infty \) . In our notation for a sequence we will often abbreviate \( {\left\{ {a}_{n}\right\} }_{n = 1}^{\infty } \) by \( {\left\{ {a}_{n}\right\} }_{1}^{\infty } \) or even just \( \left\{ {a}_{n}\right\} \) . Define the norm of \( \left\{ {a}_{n}\right\} \in {\ell }^{p} \) by \[ {\begin{Vmatrix}\left\{ {a}_{n}\right\} \end{Vmatrix}}_{p} \equiv {\left( \mathop{\sum }\limits_{1}^{\infty }{\left| {a}_{n}\right| }^{p}\right) }^{1/p}. 
\] We can include the choice \( p = \infty \) by modifying this definition in the expected way: \[ {\ell }^{\infty } = \left\{ {{\left\{ {a}_{n}\right\} }_{1}^{\infty } : \mathop{\sup }\limits_{n}\left| {a}_{n}\right| < \infty }\right\} \] and \[ {\begin{Vmatrix}\left\{ {a}_{n}\right\} \end{Vmatrix}}_{\infty } = \mathop{\sup }\limits_{n}\left| {a}_{n}\right| \] For \( p = 1 \) and \( p = \infty \) the triangle inequality is easily verified; for \( 1 < p < \infty \) it goes by the name of Minkowski's inequality, in honor of Hermann Minkowski who first studied the analogue of this \( {\ell }^{p} \) -norm on the space \( {\mathbb{R}}^{n} \) . Example 1.6. We can generalize the last example as follows. Consider a positive measure space \( \left( {Y,\mathfrak{M},\mu }\right) \), where \( Y \) is a set, \( \mathfrak{M} \) is a \( \sigma \) -algebra of subsets of \( Y \), and \( \mu \) is a positive measure. Choose \( 1 \leq p < \infty \), and denote by \( {L}^{p}\left( {Y,\mu }\right) \) the collection of all equivalence classes of \( \mathfrak{M} \) -measurable functions on \( Y \) with \[ {\int }_{Y}{\left| f\right| }^{p}{d\mu } < \infty \] normed by \[ \parallel f{\parallel }_{p} = {\left( {\int }_{Y}{\left| f\right| }^{p}d\mu \right) }^{\frac{1}{p}} \] (the integral in this definition is the Lebesgue integral). Minkowski's inequality (for integrals) provides the proof that the norm satisfies the triangle inequality. We also define \( {L}^{\infty }\left( {X,\mu }\right) \) to be all equivalence classes of essentially bounded measurable functions, normed by \( \parallel f{\parallel }_{\infty } = \) ess sup \( \left| f\right| \), the essential supremum of \( f \) . Of particular interest to us will be the space \( {L}^{p}\left\lbrack {0,1}\right\rbrack = {L}^{p}\left( {\left\lbrack {0,1}\right\rbrack ,{dx}}\right) \) with respect to Lebesgue measure \( {dx} \) on the real line. For the reader unfamiliar with the concepts in the preceding example, the Appendix provides a summary of the relevant definitions and results from real analysis. The use of the Lebesgue integral in the definition of the \( {L}^{p} \) spaces is important, and in writing a history of functional analysis, Jean Dieudonné [10] states ...it is likely that progress in Functional Analysis might have been appreciably slowed down if the invention of the Lebesgue integral had not appeared, by a happy coincidence, exactly at the beginning of Hilbert's work...(pp. 119-120). replacing, what Dieudonné calls "the horrible and useless so-called Riemann integral." In Example 1.6, the particular choice \( Y = \mathbb{N} \) and \( \mu = \) counting measure on the subsets of \( \mathbb{N} \) gives the space \( {\ell }^{p} \) of Example 1.5; see Sections A. 2 and A. 3 in the Appendix for more details. Example 1.7. Fix a sequence \( \{ \beta \left( n\right) {\} }_{n = 0}^{\infty } \) of positive numbers with \( \beta \left( 0\right) = 1 \) and \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\beta {\left( n\right) }^{1/n} \geq 1 \] (1.1) The reason for this last restriction will be made clear shortly, but for right now notice that defining \( \beta \left( n\right) = {\left( n + 1\right) }^{a} \) for some fixed real number \( a \) will give an allowable choice. 
Define the weighted sequence space \( {\ell }_{\beta }^{2} \) to consist of all sequences \( {\left\{ {a}_{n}\right\} }_{0}^{\infty } \) with \[ \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left| {a}_{n}\right| }^{2}\beta {\left( n\right) }^{2} < \infty \] where the norm of \( {\left\{ {a}_{n}\right\} }_{0}^{\infty } \) is defined to be \[ {\left( \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left| {a}_{n}\right| }^{2}\beta {\left( n\right) }^{2}\right) }^{1/2}. \] From one perspective these weighted sequence spaces can be thought of simply as \( {L}^{2}\left( {X,\mathfrak{M},\mu }\right) \) for \( X = {\mathbb{N}}_{0} \equiv \{ 0\} \cup \mathbb{N},\mathfrak{M} \) the collection of all subsets of \( {\mathbb{N}}_{0} \), and \( \mu \) the measure that assigns to each point \( n \) of \( {\mathbb{N}}_{0} \) the mass \( \beta {\left( n\right) }^{2} \), so that we have a special case of the example discussed in Example 1.6. In particular, the general version of Minkowski’s inequality gives the triangle inequality in \( {\ell }_{\beta }^{2} \) . (See Exercise 1.6 for a more elementary approach.) The requirement in Equation (1.1) allows us to offer a second perspective on the spaces \( {\ell }_{\beta }^{2} \), and the interplay between the two perspectives endows these examples with a particular richness. Associate to a sequence \( {\left\{ {a}_{n}\right\} }_{0}^{\infty } \) in \( {\ell }_{\beta }^{2} \) the power series \( \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{z}^{n} \) . The radius of convergence of this series is at least one (see Exercise 1.9), and thus the series converges to an analytic function on the unit disk \( \mathbb{D} = \{ z \in \mathbb{C} \) : \( \left| z\right| < 1\} \) . This suggests that we may want to identify \( {\ell }_{\beta }^{2} \), a space of sequences, with the vector space \[ \left\{ {f = \mathop{\sum }\limits_{0}^{\infty }{a}_{n}{z}^{n}\text{ analytic in }\mathbb{D} : \mathop{\sum }\limits_{0}^{\infty }{\left| {a}_{n}\right| }^{2}\beta {\left( n\right) }^{2} < \infty }\right\} . \] In the latter guise, the space is referred to as a weighted Hardy space and denoted \( {H}^{2}\left( \beta \right) \) ; the case \( \beta \left( n\right) = 1 \) for all \( n \) gives the Hardy space \( {H}^{2} \) . In the next chapter we will have the language needed to make precise the properties of this identification, but for the moment we simply observe that the map sending \( {\left\{ {a}_{n}\right\} }_{0}^{\infty } \) to \( f = \mathop{\sum }\limits_{0}^{\infty }{a}_{n}{z}^{n} \) is one-to-one (by uniqueness of power series) and onto \( {H}^{2}\left( \beta \right) \) by definition, and we will regard \( {H}^{2}\left( \beta \right) \) as normed so that this mapping preserves norms. Example 1.8. Let \( \Omega \) be a nonempty open set in \( \mathbb{C} \) . Denote the collection of all bounded analytic functions on \( \Omega \) by \( {H}^{\infty }\left( \Omega \right) \), and introduce a norm on \( {H}^{\infty }\left( \Omega \right) \) by \( \parallel f\parallel = \sup \{ \left| {f\left( z\right) }\right| : z \in \Omega \} \) . The norms in Examples 1.3,1.4,1.8, and the \( {\ell }^{\infty } \) norm in Example 1.5, are all referred to as the "supremum norm," and when needed for clarity will be
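As a small numerical companion to Examples 1.5 and 1.7 (an added sketch, not part of the text; the particular sequences and the weight \( \beta \left( n\right) = {\left( n + 1\right) }^{a} \) are arbitrary choices), the following computes \( {\ell }^{p} \) norms, checks Minkowski's inequality on a pair of finitely supported sequences, and evaluates an \( {H}^{2}\left( \beta \right) \) norm.

```python
import numpy as np

def lp_norm(a, p):
    a = np.asarray(a, dtype=float)
    return np.max(np.abs(a)) if np.isinf(p) else (np.abs(a) ** p).sum() ** (1.0 / p)

x = np.array([3.0, -1.0, 0.5, 2.0])
y = np.array([0.5, 4.0, -2.0, 1.0])
for p in (1, 1.5, 2, 3, np.inf):
    # triangle (Minkowski) inequality: ||x + y||_p <= ||x||_p + ||y||_p
    assert lp_norm(x + y, p) <= lp_norm(x, p) + lp_norm(y, p) + 1e-12

def h2_beta_norm(coeffs, a=1.0):
    """Norm of f(z) = sum a_n z^n in H^2(beta) with beta(n) = (n + 1)**a."""
    n = np.arange(len(coeffs))
    beta = (n + 1.0) ** a
    return np.sqrt(((np.abs(np.asarray(coeffs)) * beta) ** 2).sum())

print(h2_beta_norm([1.0, 0.5, 0.25], a=1.0))  # norm of 1 + z/2 + z^2/4
```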
113_Topological Groups
Definition 12.14
Definition 12.14. Let \( \mathfrak{A} = {\left\langle A,+,\cdot ,-,0,1,{\mathrm{c}}_{\kappa },{\mathrm{d}}_{\kappa \lambda }\right\rangle }_{\kappa ,\lambda < \alpha } \) be a \( {\mathrm{{CA}}}_{\alpha } \) . An ideal of \( \mathfrak{A} \) is an ideal of \( \langle A, + , \cdot , - ,0,1\rangle \) such that for all \( \kappa < \alpha \), if \( x \in I \) then \( {\mathrm{c}}_{\kappa }x \in I \) . Proposition 12.15. Let \( I \) be an ideal in a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} \), and let \( R = \{ \left( {x, y}\right) : x \cdot - y + \) \( - x \cdot y \in I\} \) (cf. 9.16). Then for any \( \kappa < \alpha \) and \( x, y \in A \), if \( {xRy} \) then \( {\mathrm{c}}_{\kappa }{xR}{\mathrm{c}}_{\kappa }y \) . Proof. Assume that \( {xRy} \) and \( \kappa < \alpha \) . Thus \( x \cdot - y + - x \cdot y \in I \) . Now \[ {\mathrm{c}}_{\kappa }x = {\mathrm{c}}_{\kappa }\left( {x \cdot - y + x \cdot y}\right) = {\mathrm{c}}_{\kappa }\left( {x \cdot - y}\right) + {\mathrm{c}}_{\kappa }\left( {x \cdot y}\right) \leq {\mathrm{c}}_{\kappa }\left( {x \cdot - y}\right) + {\mathrm{c}}_{\kappa }y \] hence \[ {\mathrm{c}}_{\kappa }x \cdot - {\mathrm{c}}_{\kappa }y \leq {\mathrm{c}}_{\kappa }\left( {x \cdot - y}\right) \leq {\mathrm{c}}_{\kappa }\left( {x \cdot - y + y \cdot - x}\right) \in I. \] Thus \( {\mathrm{c}}_{\kappa }x \cdot - {\mathrm{c}}_{\kappa }y \in I \), and by symmetry, \( {\mathrm{c}}_{\kappa }y \cdot - {\mathrm{c}}_{\kappa }x \in I \), so \( {\mathrm{c}}_{\kappa }x \cdot - {\mathrm{c}}_{\kappa }y + - {\mathrm{c}}_{\kappa }x \) . \( {\mathrm{c}}_{\kappa }y \in I \) and \( {\mathrm{c}}_{\kappa }{xR}{\mathrm{c}}_{\kappa }y \) . Along with 9.16 and 9.17, Proposition 12.15 justifies the following definition: Definition 12.16. Let \( I \) be an ideal in a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} = {\left\langle A,+,\cdot ,-,0,1,{\mathrm{c}}_{\kappa },{\mathrm{d}}_{\kappa \lambda }\right\rangle }_{\kappa ,\lambda < \alpha } \) . We define \( \mathfrak{A}/I = {\left\langle A/I,{ + }^{\prime },{ \cdot }^{\prime },{ - }^{\prime },{0}^{\prime },{1}^{\prime },{\mathrm{c}}_{\kappa }^{\prime },{\mathrm{d}}_{\kappa \lambda }^{\prime }\right\rangle }_{\kappa ,\lambda < \alpha } \), where \[ \langle A, + , \cdot , - ,0,1\rangle /I = \left\langle {A/I,{ + }^{\prime },{ \cdot }^{\prime },{ - }^{\prime },{0}^{\prime },{1}^{\prime }}\right\rangle \] in accordance with \( {9.17},{\mathrm{c}}_{\kappa }^{\prime }\left\lbrack a\right\rbrack = \left\lbrack {{\mathrm{c}}_{\kappa }a}\right\rbrack \) for all \( a \in A \) and \( \kappa < \alpha \), and \( {\mathrm{d}}_{\kappa \lambda }^{\prime } = \) \( \left\lbrack {\mathrm{d}}_{\kappa \lambda }\right\rbrack \) for all \( \kappa ,\lambda < \alpha \) . Proposition 12.17. If \( I \) is an ideal in a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} \), then \( \mathfrak{A}/I \) is a \( {\mathrm{{CA}}}_{\alpha } \), and \( {I}^{ * } \) is a homomorphism from \( \mathfrak{A} \) onto \( \mathfrak{A}/I \) . Proposition 12.18. If \( f \) is a homomorphism from a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{A} \) onto a \( {\mathrm{{CA}}}_{\alpha }\mathfrak{B} \) and \( I = \{ x \in A : {fx} = 0\} \), then \( I \) is an ideal of \( \mathfrak{A} \), and \( \mathfrak{B} \cong \mathfrak{A}/I \) . Proposition 12.19. The intersection of any nonempty family of ideals in a \( {\mathrm{{CA}}}_{\alpha } \) is an ideal. Definition 12.20. 
If \( \mathfrak{A} \) is a \( {\mathrm{{CA}}}_{\alpha } \) and \( X \subseteq A \), then the ideal generated by \( X \) is the set \[ \bigcap \{ I : X \subseteq I, I\text{ an ideal of }\mathfrak{A}\} . \] We can directly generalize 9.23 to give a simple expression for the members of the ideal generated by a set: Proposition 12.21. If \( X \subseteq A \) and \( \mathfrak{A} \) is a \( {\mathrm{{CA}}}_{\alpha } \), then the ideal generated by \( X \) is the collection of all \( y \in A \) such that there exist \( m, n \in \omega \) and \( x \in {}^{m}X,\kappa \in {}^{n}\alpha \) with \( y \leq {\mathrm{c}}_{{\kappa 0}}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1}}\right) . \) Proof. Let \( I \) be the collection of all \( y \in A \) for which such \( m, n, x,\kappa \) exist. Clearly \( I \) is contained in the ideal generated by \( X \) . Thus it is enough to show that \( X \subseteq I \) and \( I \) is an ideal. Taking \( m = 1 \) and \( n = 0 \) we easily see that \( X \subseteq I \) . Taking \( m = n = 0 \), we see that \( 0 \in I \) and hence \( I \neq 0 \) . If \( z \leq y \in I \) , obviously also \( z \in I \) . If \( y \in I \), with \( m, n, x,\kappa \) as above, and if \( \lambda < \alpha \), then \[ {\mathrm{c}}_{\lambda }y \leq {\mathrm{c}}_{\lambda }{\mathrm{c}}_{{\kappa 0}}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1}}\right) , \] so \( {\mathrm{c}}_{\lambda }y \in I \) . Finally, suppose \( y,{y}^{\prime } \in I \), with \( m, n, x,\kappa \) and \( {m}^{\prime },{n}^{\prime },{x}^{\prime },{\kappa }^{\prime } \) satisfying the corresponding conditions. Then \[ y + {y}^{\prime } \leq {\mathrm{c}}_{{\kappa 0}}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1}}\right) + {\mathrm{c}}_{{\kappa }^{\prime }0}\cdots {\mathrm{c}}_{{\kappa }^{\prime }\left( {{n}^{\prime } - 1}\right) }\left( {{x}_{0}^{\prime } + \cdots + {x}_{{m}^{\prime } - 1}^{\prime }}\right) \] \[ \leq {\mathrm{c}}_{{\kappa 0}}\cdots {\mathrm{c}}_{\kappa \left( {n - 1}\right) }{\mathrm{c}}_{{\kappa }^{\prime }0}\cdots {\mathrm{c}}_{{\kappa }^{\prime }\left( {{n}^{\prime } - 1}\right) }\left( {{x}_{0} + \cdots + {x}_{m - 1} + {x}_{0}^{\prime } + \cdots + {x}_{{m}^{\prime } - 1}^{\prime }}\right) , \] so \( y + {y}^{\prime } \in I \) also. We shall not develop the algebraic theory of \( {\mathrm{{CA}}}_{\alpha } \) ’s any further. Instead, we now turn to the relationships between first-order logic and cylindric algebras. In this regard the following definition is fundamental. Definition 12.22. For \( \mathcal{L} \) a first-order language and \( \Gamma \) a set of sentences in \( \mathcal{L} \) we set \[ { \equiv }_{\Gamma }^{\mathcal{L}} = \{ \left( {\varphi ,\psi }\right) : \varphi \text{ and }\psi \text{ are formulas of }\mathcal{L}\text{ and }\Gamma \vDash \varphi \leftrightarrow \psi \} . 
\] Furthermore, we let \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} = {\left\langle {\mathrm{{Fmla}}}_{\mathcal{L}}/{ \equiv }_{\Gamma }^{\mathcal{L}},+,\cdot ,-,0,1,{\mathrm{c}}_{\kappa },{\mathrm{d}}_{\kappa \lambda }\right\rangle }_{\kappa ,\lambda < \omega } \) , where for any \( \varphi ,\psi \in {\operatorname{Fmla}}_{\mathcal{L}} \) and any \( \kappa ,\lambda \in \omega \) , \[ \left\lbrack \varphi \right\rbrack + \left\lbrack \psi \right\rbrack = \left\lbrack {\varphi \vee \psi }\right\rbrack ; \] \[ \left\lbrack \varphi \right\rbrack \cdot \left\lbrack \psi \right\rbrack = \left\lbrack {\varphi \land \psi }\right\rbrack ; \] \[ - \left\lbrack \varphi \right\rbrack = \left\lbrack {\neg \varphi }\right\rbrack ; \] \[ 0 = \left\lbrack {\neg {v}_{0} = {v}_{0}}\right\rbrack ; \] \[ 1 = \left\lbrack {{v}_{0} = {v}_{0}}\right\rbrack ; \] \[ {\mathrm{c}}_{\kappa }\left\lbrack \varphi \right\rbrack = \left\lbrack {\exists {\mathrm{v}}_{\kappa }\varphi }\right\rbrack ; \] \[ {\mathrm{d}}_{\kappa \lambda } = \left\lbrack {{\mathrm{v}}_{\kappa } = {\mathrm{v}}_{\lambda }}\right\rbrack . \] This definition is easily justified (see 9.54-9.56). Routine checking gives: Proposition 12.23. \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} \) is a \( {\mathrm{{CA}}}_{\omega } \) . As in the case of sentential logic and Boolean algebras, there is a natural correspondence between notions of first-order logic and notions of cylindric algebras. We give two instances of this correspondence. The first one indicates the close relationship between set algebras and models of a theory: Proposition 12.24. Let \( \Gamma \) be a set of sentences in a first-order language \( \mathcal{L} \), and let \( \mathfrak{A} \) be a model of \( \Gamma \) . Then \( \left\{ {{\varphi }^{\mathfrak{A}} : \varphi \in {\mathrm{{Fmla}}}_{\mathcal{L}}}\right\} \) is an \( \omega \) -dimensional field of sets. Let \( \mathfrak{B} \) be the associated cylindric set algebra. Then the function \( f \) such that \( f\left\lbrack \varphi \right\rbrack = {\varphi }^{\mathfrak{A}} \) for each \( \varphi \in {\mathrm{{Fmla}}}_{\mathcal{L}} \) is a homomorphism of \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} \) onto \( \mathfrak{B} \) ( \( f \) is easily seen to be well defined). This proposition can be routinely checked. The following proposition is established just like 9.59. Proposition 12.25. Let \( I \) be an ideal in \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} \), and set \( \Delta = \{ \varphi \in {\operatorname{Sent}}_{\mathcal{L}} : - \left\lbrack \varphi \right\rbrack \in I\} \) . Then \( \Gamma \subseteq \Delta \), and \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}}/I \) is isomorphic to \( {\mathfrak{M}}_{\Delta }^{\mathcal{L}} \) . The algebras \( {\mathfrak{M}}_{\Gamma } \) possess a special property not possessed by other \( {\mathrm{{CA}}}_{\omega } \) ’s; the definition of this property is as follows. Definition 12.26. Let \( \mathfrak{A} \) be a \( {\mathrm{{CA}}}_{\alpha } \) . We say that \( \mathfrak{A} \) is locally finite dimensional provided that for all \( a \in A,{\Delta a} \) is finite. In the case of an algebra \( {\mathfrak{M}}_{\Gamma }^{\mathcal{L}} \), if \( \varphi \) is any formula and \( {v}_{\kappa } \) is a variable not occurring in \( \varphi \), then \( \vDash \exists {v}_{\kappa }\varphi \leftrightarrow \varphi \), and hence \( {\mathrm{c}}_{\kappa }\left\lbrack \varphi \right\rbrack = \left\lbrack {\exists {\mathrm{v}}_{\kappa }\varphi }\right\rbrack = \left\lbrack \varphi \right\rbrack \) . 
Thus \( \Delta \left\lbrack \varphi \right\rbrack \subseteq \left\{ {\kappa : {v}_{\kappa }}\right. \) occurs in \( \left. \varphi \right\} \), and hence \( \Delta \left\lbrack \v
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 5.5.11
Definition 5.5.11 Let \( \left( {N, L}\right) \) be a pair of subspaces of \( X \) . A subset \( L \) of \( N \) is called positively invariant in \( N \) with respect to the flow \( \varphi \), if \( x \in L \) and \( \varphi \left( {\left\lbrack {0, t}\right\rbrack, x}\right) \subset N \) imply \( \varphi \left( {\left\lbrack {0, t}\right\rbrack, x}\right) \subset L \) . It is called an exit set of \( N \) if, for every \( x \in N \) and every \( {t}_{1} > 0 \) with \( \varphi \left( {{t}_{1}, x}\right) \notin N \), there exists \( {t}_{0} \in \left\lbrack {0,{t}_{1}}\right) \) such that \( \varphi \left( {\left\lbrack {0,{t}_{0}}\right\rbrack, x}\right) \subset N \) and \( \varphi \left( {{t}_{0}, x}\right) \in L \) . Example. Let \( \left( {M, f,\phi }\right) \) be a pseudo-gradient flow, and let \( \alpha < \beta < \gamma \) . Let \( N = {f}^{-1}\left\lbrack {\alpha ,\gamma }\right\rbrack \) and \( L = {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \) . \( L \) is positively invariant in \( N \), and also an exit set of \( N \) . For an isolated invariant set \( S \), we introduce: Definition 5.5.12 For \( U \in \sum \), let \( \left( {N, L}\right) \) be a pair of closed subsets of \( U \) with \( L \subset N \) . It is called an index pair relative to \( U \) if: (1) \( \overline{N \smallsetminus L} \in \sum \) , (2) \( L \) is positively invariant in \( N \) , (3) \( L \) is an exit set of \( N \) , (4) \( \overline{N \smallsetminus L} \subset U \) and \( \exists T > 0 \) such that \( {G}^{T}\left( U\right) \subset \overline{N \smallsetminus L} \) . According to the definition \( S \mathrel{\text{:=}} I\left( U\right) = I\left( {N \smallsetminus L}\right) \) is an isolated invariant set, and both \( U \) and \( \overline{N \smallsetminus L} \) are isolating neighborhoods of \( S \) . Example. Let \( \left( {M, f,\varphi }\right) \) be a pseudo-gradient flow. If \( \left( {\mathcal{O},\alpha ,\beta }\right) \) and \( \left( {{\mathcal{O}}^{\prime },{\alpha }^{\prime },{\beta }^{\prime }}\right) \) are two isolating triplets for a dynamically isolated critical set \( S \) for \( f \), with \( {\mathcal{O}}^{\prime } \subset \mathcal{O},\left\lbrack {{\alpha }^{\prime },{\beta }^{\prime }}\right\rbrack \subset \left\lbrack {\alpha ,\beta }\right\rbrack \), then \( \exists T > 0 \) such that the pair \( N = {G}^{T}\left( W\right) \), \( L = \varphi \left( {-T,{W}_{ - }}\right) \) is an index pair relative to \( U = \operatorname{cl}\left( \widetilde{{\mathcal{O}}^{\prime }}\right) \cap {f}^{-1}\left\lbrack {{\alpha }^{\prime },{\beta }^{\prime }}\right\rbrack \), where \( W = \operatorname{cl}\left( \widetilde{\mathcal{O}}\right) \cap {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \) and \( {W}_{ - } = \operatorname{cl}\left( \widetilde{\mathcal{O}}\right) \cap {f}^{-1}\left( \alpha \right) \) . Moreover, both \( U \) and \( N \) are isolating neighborhoods of \( \left\lbrack S\right\rbrack \) . Note that \( \overline{N \smallsetminus L} = N \) and \( \operatorname{int}\left( {N \smallsetminus L}\right) = \operatorname{int}\left( N\right) \) . In order to verify the conclusion, we need: 1. \( \forall T > 0, N = {G}^{T}\left( W\right) \) is a closed MVP neighborhood of \( \left\lbrack S\right\rbrack \) . It is sufficient to verify that \( \left\lbrack S\right\rbrack \subset \operatorname{int}\left( N\right) \) . From Lemma 5.5.9, we have \( \left\lbrack S\right\rbrack \subset N \) . 
If the conclusion is not true, then \( \exists x \in \left\lbrack S\right\rbrack \cap \partial N \), i.e., \( \exists {x}_{n} \notin {G}^{T}\left( W\right) \) with \( {x}_{n} \rightarrow x \) . This means that there are \( {t}_{n} \in \left\lbrack {-T, T}\right\rbrack \) such that \( \varphi \left( {{t}_{n},{x}_{n}}\right) \notin W \) . After passing to a subsequence, we have \( {t}_{n} \rightarrow t \in \left\lbrack {-T, T}\right\rbrack \), and then \( \varphi \left( {t, x}\right) \notin \operatorname{int}\left( W\right) \) . But \( x \in \left\lbrack S\right\rbrack \) implies that \( \varphi \left( {t, x}\right) \in \left\lbrack S\right\rbrack \subset \operatorname{int}\left( W\right) \), by Lemma 5.5.9. This is a contradiction. Now, we are going to verify conditions (1)-(4). 2. Applying Lemma 5.5.9 to \( U = N \), \( \exists {T}_{0} > 0 \) such that \[ {G}^{T + {T}_{0}}\left( W\right) \subset {G}^{{T}_{0}}\left( W\right) \subset \operatorname{int}\left( N\right) . \] Since \( {G}^{T + {T}_{0}}\left( W\right) = {G}^{{T}_{0}}\left( N\right) \), this shows that \( \overline{N \smallsetminus L} \in \sum \), and (1) is verified. 3. Again applying Lemma 5.5.9, \( \exists T > 0 \) such that \( N = {G}^{T}\left( W\right) \subset \operatorname{int}\left( U\right) \subset U \) . Moreover, \[ {G}^{T}\left( U\right) \subset {G}^{T}\left( W\right) = N = \overline{N \smallsetminus L}. \] Property (4) is verified. 4. Since \( {W}_{ - } \) is an exit set of \( W, L \) is an exit set of \( N \) . Obviously, \( L \) is positively invariant in \( N \) . This completes the verification. From Lemma 5.5.9, both \( U \) and \( N \) are isolating neighborhoods of \( \left\lbrack S\right\rbrack \) . For a system without variational structure, does there exist an index pair relative to any set \( U \in \sum \) ? We have: Theorem 5.5.13 (Existence of an index pair) Let \( \varphi \) be a flow on a metric space \( X \) . \( \forall U \in \sum \) , \( \left( {{G}^{T}\left( U\right) ,{\Gamma }^{T}\left( U\right) }\right) \) is an index pair relative to \( U \), where \( T > 0 \) is chosen such that \( {G}^{T}\left( U\right) \subset \operatorname{int}\left( U\right) \) . Proof. From the properties (1) and (4), both \( {G}^{T}\left( U\right) \) and \( {\Gamma }^{T}\left( U\right) \) are closed. We shall verify the four conditions in Definition 5.5.12 successively. (1) By property (4), \( \operatorname{int}\left( {{G}^{T}\left( U\right) \smallsetminus {\Gamma }^{T}\left( U\right) }\right) = \operatorname{int}\left( {{G}^{T}\left( U\right) }\right) \) . Applying property (2), \[ {G}^{T}\left( {{G}^{T}\left( U\right) \smallsetminus {\Gamma }^{T}\left( U\right) }\right) \subset {G}^{2T}\left( U\right) \subset \operatorname{int}\left( {{G}^{T}\left( U\right) }\right) . \] Thus \( \overline{{G}^{T}\left( U\right) \smallsetminus {\Gamma }^{T}\left( U\right) } \in \sum \) . (2) \( {\Gamma }^{T}\left( U\right) \) is positively invariant in \( {G}^{T}\left( U\right) \), i.e., if \( x \in {\Gamma }^{T}\left( U\right) \) and \( \varphi \left( {\left\lbrack {0,{T}_{1}}\right\rbrack, x}\right) \subset {G}^{T}\left( U\right) \), then \( \varphi \left( {\left\lbrack {0,{T}_{1}}\right\rbrack, x}\right) \subset {\Gamma }^{T}\left( U\right) \) . Suppose not, then \( \exists t \in \left\lbrack {0,{T}_{1}}\right\rbrack \) such that \( \varphi \left( {t, x}\right) \notin {\Gamma }^{T}\left( U\right) \) . Let \( {t}^{ * } = \inf \{ s \in \) \( \left. 
{\left\lbrack {0,{T}_{1}}\right\rbrack \mid \varphi \left( {s, x}\right) \notin {\Gamma }^{T}\left( U\right) }\right\} \) . Since \( {\Gamma }^{T}\left( U\right) \) is closed, \( y = \varphi \left( {{t}^{ * }, x}\right) \in {\Gamma }^{T}\left( U\right) \) and \( \exists {\epsilon }_{n} \rightarrow + 0 \) such that \( \varphi \left( {{t}^{ * } + {\epsilon }_{n}, x}\right) \notin {\Gamma }^{T}\left( U\right) \) . Thus \( \varphi \left( {\left\lbrack {0, T}\right\rbrack, y}\right) \cap \partial U \neq \varnothing \) and \( \varphi \left( {\left\lbrack {{\epsilon }_{n}, T}\right\rbrack, y}\right) \cap \partial U = \varnothing \) ; it follows that \( y \in \partial U \) . But \( y \in {G}^{T}\left( U\right) \subset \operatorname{int}\left( U\right) \) . This is a contradiction. (3) \( {\Gamma }^{T}\left( U\right) \) is an exit set of \( {G}^{T}\left( U\right) \), i.e., if \( x \in {G}^{T}\left( U\right) \) and if \( \exists {t}_{1} > 0 \) such that \( \varphi \left( {{t}_{1}, x}\right) \notin {G}^{T}\left( U\right) \), then \( \exists {t}_{0} \in \left\lbrack {0,{t}_{1}}\right) \) such that \( \varphi \left( {\left\lbrack {0,{t}_{0}}\right\rbrack, x}\right) \subset {G}^{T}\left( U\right) \) and \( y = \varphi \left( {{t}_{0}, x}\right) \in {\Gamma }^{T}\left( U\right) . \) Let us define \[ {t}_{0} = \inf \{ s > 0 \mid \varphi \left( {\left\lbrack {s - T, s + T}\right\rbrack, x}\right) ⊄ U\} \] \[ = \inf \left\{ {s > 0 \mid \varphi \left( {s, x}\right) \notin {G}^{T}\left( U\right) }\right\} ; \] we have \( {t}_{0} \in \left\lbrack {0,{t}_{1}}\right\rbrack \) . Since \( {G}^{T}\left( U\right) \) is closed, \( \varphi \left( {\left\lbrack {0,{t}_{0}}\right\rbrack, x}\right) \subset {G}^{T}\left( U\right) \), therefore \( {t}_{0} < {t}_{1} \) . Defining \( y = \varphi \left( {{t}_{0}, x}\right) \), we have \[ \varphi \left( {T, y}\right) = \varphi \left( {{t}_{0} + T, x}\right) \in \partial U \] therefore \( y \in {\Gamma }^{T}\left( U\right) \) . (4) From property (4), \( \overline{{G}^{T}\left( U\right) \smallsetminus {\Gamma }^{T}\left( U\right) } = {G}^{T}\left( U\right) \subset U \) . For \( {T}_{1} > T \), we obtain \[ {G}^{{T}_{1}}\left( U\right) \subset {G}^{T}\left( U\right) = \overline{{G}^{T}\left( U\right) \smallsetminus {\Gamma }^{T}\left( u\right) }. \] A topological invariant is introduced to describe the index pair \( \left( {N, L}\right) \) relative to an isolating neighborhood \( U \) . Conley called the homotopy type \( h\left( U\right) = \left\lbrack {N/L}\right\rbrack \) the invariant. In comparing with that in the Morse theory, we prefer to replace it by the relative homology groups. However, in order to match Conley's definition, Alexander-Spanier cohomology is more suitable, because it possesses a special excision property not shared by singular cohomology theory. For a topological pair \( \left( {X, A}\right) \) and a coeficient field \( F,{\bar{H}}^{ * }\left( {X, A;F}\right) \) stands for Alexander-Spanier cohomology. The following excision property holds. Suppose that \( X \) and \( Y
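To make the pseudo-gradient example above concrete, here is a small numerical sketch (added for illustration, not from the book; the choice \( f\left( x\right) = \parallel x{\parallel }^{2} \) and the levels \( \alpha ,\beta ,\gamma \) are arbitrary). It uses the explicit negative-gradient flow of \( f \) on \( {\mathbb{R}}^{2} \), takes \( N = {f}^{-1}\left\lbrack {\alpha ,\gamma }\right\rbrack \) and \( L = {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \), and checks that a trajectory leaving \( N \) does so through \( L \).

```python
import numpy as np

# Negative gradient flow of f(x) = |x|^2 on R^2: phi(t, x) = x * exp(-2 t)
f = lambda x: float(np.dot(x, x))
phi = lambda t, x: x * np.exp(-2.0 * t)

alpha, beta, gamma = 1.0, 2.0, 4.0
in_N = lambda x: alpha <= f(x) <= gamma       # N = f^{-1}[alpha, gamma]
in_L = lambda x: alpha <= f(x) <= beta        # L = f^{-1}[alpha, beta]

x0 = np.array([np.sqrt(3.0), 0.0])            # f(x0) = 3, so x0 lies in N but not in L
ts = np.linspace(0.0, 2.0, 20_001)
traj = [phi(t, x0) for t in ts]

i1 = next(i for i, x in enumerate(traj) if not in_N(x))   # first index outside N
i0 = i1 - 1                                               # last index still inside N
assert all(in_N(x) for x in traj[: i0 + 1])   # phi([0, t0], x0) stays in N
assert in_L(traj[i0])                         # ...and the exit happens through L
print("f at exit:", f(traj[i0]))              # close to alpha = 1
```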
1048_(GTM209)A Short Course on Spectral Theory
Definition 1.7.1
Definition 1.7.1. For every \( x \in A \) the spectral radius of \( x \) is defined by \[ r\left( x\right) = \sup \{ \left| \lambda \right| : \lambda \in \sigma \left( x\right) \} . \] REMARK 1.7.2. Since the spectrum of \( x \) is contained in the central disk of radius \( \parallel x\parallel \), it follows that \( r\left( x\right) \leq \parallel x\parallel \) . Notice too that for every \( \lambda \in \mathbb{C} \) we have \( r\left( {\lambda x}\right) = \left| \lambda \right| r\left( x\right) \) . We require the following rudimentary form of the spectral mapping theorem. If \( x \) is an element of \( A \) and \( f \) is a polynomial, then (1.11) \[ f\left( {\sigma \left( x\right) }\right) \subseteq \sigma \left( {f\left( x\right) }\right) . \] To see why this is so, fix \( \lambda \in \sigma \left( x\right) ) \) . Since \( z \mapsto f\left( z\right) - f\left( \lambda \right) \) is a polynomial having a zero at \( z = \lambda \), there a polynomial \( g \) such that \[ f\left( z\right) - f\left( \lambda \right) = \left( {z - \lambda }\right) g\left( z\right) \] Thus \[ f\left( x\right) - f\left( \lambda \right) \mathbf{1} = \left( {x - \lambda }\right) g\left( x\right) = g\left( x\right) \left( {x - \lambda }\right) \] cannot be invertible: A right (respectively left) inverse of \( f\left( x\right) - f\left( \lambda \right) \mathbf{1} \) gives rise to a right (respectively left) inverse of \( x - \lambda \) . Hence \( f\left( \lambda \right) \in \sigma \left( {f\left( x\right) }\right) \) . As a final observation, we note that for every \( x \in A \) one has (1.12) \[ r\left( x\right) \leq \mathop{\inf }\limits_{{n \geq 1}}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n}. \] Indeed, for every \( \lambda \in \sigma \left( x\right) \left( {1.11}\right) \) implies that \( {\lambda }^{n} \in \sigma \left( {x}^{n}\right) \) ; hence \[ {\left| \lambda \right| }^{n} = \left| {\lambda }^{n}\right| \leq r\left( {x}^{n}\right) \leq \begin{Vmatrix}{x}^{n}\end{Vmatrix} \] and (1.12) follows after one takes \( n \) th roots. The following formula is normally attributed to Gelfand and Mazur, although special cases were discovered independently by Beurling. THEOREM 1.7.3. For every \( x \in A \) we have \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} = r\left( x\right) \] The assertion here is that the limit exists in general, and has \( r\left( x\right) \) as its value. Proof. From (1.12) we have \( r\left( x\right) \leq \mathop{\liminf }\limits_{n}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \), so it suffices to prove that (1.13) \[ \mathop{\limsup }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \leq r\left( x\right) \] We need only consider the case \( x \neq 0 \) . To prove (1.13) choose \( \lambda \in \mathbb{C} \) satisfying \( \left| \lambda \right| < 1/r\left( x\right) \) (when \( r\left( x\right) = 0,\lambda \) may be chosen arbitrarily). We claim that the sequence \( \left\{ {{\left( \lambda x\right) }^{n} : n = 1,2,\ldots }\right\} \) is bounded. Indeed, by the Banach-Steinhaus theorem it suffices to show that for every bounded linear functional \( \rho \) on \( A \) we have \[ \left| {\rho \left( {x}^{n}\right) {\lambda }^{n}}\right| = \left| {\rho \left( {\left( \lambda x\right) }^{n}\right) }\right| \leq {M}_{\rho } < \infty ,\;n = 1,2,\ldots , \] where \( {M}_{\rho } \) perhaps depends on \( \rho \) . 
To that end, consider the complex-valued function \( f \) defined on the (perhaps infinite) disk \( \{ z \in \mathbb{C} : \left| z\right| < 1/r\left( x\right) \} \) by \[ f\left( z\right) = \rho \left( {\left( 1 - zx\right) }^{-1}\right) . \] Note first that \( f \) is analytic. Indeed, for \( \left| z\right| < 1/\parallel x\parallel \) we may expand \( {\left( 1 - zx\right) }^{-1} \) into a convergent series \( 1 + {zx} + {\left( zx\right) }^{2} + \cdots \) to obtain a power series representation for \( f \) : (1.14) \[ f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }\rho \left( {x}^{n}\right) {z}^{n} \] On the other hand, in the larger region \( R = \{ z : 0 < \left| z\right| < 1/r\left( x\right) \} \) we can write \[ f\left( z\right) = \frac{1}{z}\rho \left( {\left( {z}^{-1}\mathbf{1} - x\right) }^{-1}\right) \] and from formula (1.10) it is clear that \( f \) is analytic on \( R \) . Taken with (1.14), this implies that \( f \) is analytic on the disk \( \{ z : \left| z\right| < 1/r\left( x\right) \} \) . On the smaller disk \( \{ z : \left| z\right| < 1/\parallel x\parallel \} \) , (1.14) gives a power series representation for \( f \) ; but since \( f \) is analytic on the larger disk \( \{ z : \left| z\right| < 1/r\left( x\right) \} \), it follows that the same series (1.14) must converge to \( f\left( z\right) \) for all \( \left| z\right| < 1/r\left( x\right) \) . Thus we are free to take \( z = \lambda \) in (1.14), and the resulting series converges. It follows that \( \rho \left( {x}^{n}\right) {\lambda }^{n} \) is a bounded sequence, proving the claim. Now choose any complex number \( \lambda \) satisfying \( 0 < \left| \lambda \right| < 1/r\left( x\right) \) . By the claim, there is a constant \( M = {M}_{\lambda } \) such that \( {\left| \lambda \right| }^{n}\begin{Vmatrix}{x}^{n}\end{Vmatrix} = \begin{Vmatrix}{\left( \lambda x\right) }^{n}\end{Vmatrix} \leq M \) for every \( n = 1,2,\ldots \) ; after taking \( n \) th roots, we find that \[ \mathop{\limsup }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \leq \mathop{\limsup }\limits_{{n \rightarrow \infty }}\frac{{M}^{1/n}}{\left| \lambda \right| } = \frac{1}{\left| \lambda \right| }. \] By allowing \( \left| \lambda \right| \) to increase to \( 1/r\left( x\right) \) we obtain (1.13). Definition 1.7.4. An element \( x \) of a Banach algebra \( A \) (with or without unit) is called quasinilpotent if \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} = 0. \] Significantly, quasinilpotence is characterized quite simply in spectral terms. Corollary 1. An element \( x \) of a unital Banach algebra \( A \) is quasinilpotent iff \( \sigma \left( x\right) = \{ 0\} \) . Proof. \( x \) is quasinilpotent \( \Leftrightarrow r\left( x\right) = 0 \Leftrightarrow \sigma \left( x\right) = \{ 0\} \) . Exercises. (1) Let \( {a}_{1},{a}_{2},\ldots \) be a sequence of complex numbers such that \( {a}_{n} \rightarrow 0 \) as \( n \rightarrow \infty \) . Show that the associated weighted shift operator on \( {\ell }^{2} \) (see the Exercises of Section 1.6) has spectrum \( \{ 0\} \) . (2) Consider the simplex \( {\Delta }_{n} \subset {\left\lbrack 0,1\right\rbrack }^{n} \) defined by \[ {\Delta }_{n} = \left\{ {\left( {{x}_{1},\ldots ,{x}_{n}}\right) \in {\left\lbrack 0,1\right\rbrack }^{n} : {x}_{1} \leq {x}_{2} \leq \cdots \leq {x}_{n}}\right\} \] Show that the volume of \( {\Delta }_{n} \) is \( 1/n! \) . 
Give a decent proof here: For example, you might consider the natural action of the permutation group \( {S}_{n} \) on the cube \( {\left\lbrack 0,1\right\rbrack }^{n} \) and think about how permutations act on \( {\Delta }_{n} \) . (3) Let \( k\left( {x, y}\right) \) be a Volterra kernel as in Example 1.1.4, and let \( K \) be its corresponding integral operator on the Banach space \( C\left\lbrack {0,1}\right\rbrack \) . Estimate the norms \( \begin{Vmatrix}{K}^{n}\end{Vmatrix} \) by showing that there is a positive constant \( M \) such that for every \( f \in C\left\lbrack {0,1}\right\rbrack \) and every \( n = 1,2,\ldots \) , \[ \begin{Vmatrix}{{K}^{n}f}\end{Vmatrix} \leq \frac{{M}^{n}}{n!}\parallel f\parallel \] (4) Let \( K \) be a Volterra operator as in the preceding exercise. Show that for every complex number \( \lambda \neq 0 \) and every \( g \in C\left\lbrack {0,1}\right\rbrack \), the Volterra equation of the second kind \( {Kf} - {\lambda f} = g \) has a unique solution \( f \in C\left\lbrack {0,1}\right\rbrack \) . ## 1.8. Ideals and Quotients The purpose of this section is to collect some basic information about ideals in Banach algebras and their quotient algebras. We begin with a complex algebra \( A \) . Definition 1.8.1. An \( {ideal} \) in \( A \) is linear subspace \( I \subseteq A \) that is invariant under both left and right multiplication, \( {AI} + {IA} \subseteq I \) . There are two trivial ideals, namely \( I = \{ 0\} \) and \( I = A \), and \( A \) is called simple if these are the only ideals. An ideal is proper if it is not all of \( A \) . Suppose now that \( I \) is a proper ideal of \( A \) . Forming the quotient vector space \( A/I \), we have a natural linear map \( x \in A \mapsto \dot{x} = x + I \in A/I \) of \( A \) onto \( A/I \) . Since \( I \) is a two-sided ideal, one can unambiguously define a multiplication in \( A/I \) by \[ \left( {x + I}\right) \cdot \left( {y + I}\right) = {xy} + I,\;x, y \in A. \] This multiplication makes \( A/I \) into a complex algebra, and the natural map \( x \mapsto \dot{x} \) becomes a surjective homomorphism of complex algebras having the given ideal \( I \) as its kernel. This information is conveniently summarized in the short exact sequence of complex algebras (1.15) \[ 0 \rightarrow I \rightarrow A \rightarrow A/I \rightarrow 0 \] the map of \( I \) to \( A \) being the inclusion map, and the map of \( A \) onto \( A/I \) being \( x \mapsto \dot{x} \) . A basic philosophical principle of mathematics is to determine what information about \( A \) can be extracted from corresponding information about both the ideal \( I \) and its quotient \( A/I \) . For example, suppose that \( A \) is finite-dimensional as a vector space over \( \mathbb{C} \) . Then both \( I \) and \( A/I \) are finite-dimensional vector spaces, and from the observation that (1.15) is an exact sequence of vector spaces and linear maps one finds that the dimension of \( A \) is determined by the dimensions of the ideal and its quo
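The spectral radius formula of Theorem 1.7.3 is easy to test numerically when \( A = {M}_{k}\left( \mathbb{C}\right) \) with the operator norm. The sketch below is an added illustration (the matrix is an arbitrary choice): it compares \( {\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \) with \( r\left( x\right) \) computed from the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

r = np.abs(np.linalg.eigvals(x)).max()        # spectral radius r(x)
for n in (1, 2, 5, 20, 80, 160):
    xn = np.linalg.matrix_power(x, n)
    # operator (largest singular value) norm of x^n; the n-th root approaches r(x)
    print(n, np.linalg.norm(xn, 2) ** (1.0 / n), r)
```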
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 4.2.15
Definition 4.2.15. Let \( X \) be a CW-complex. Then the cellular homology of \( X \) is the homology of the cellular chain complex \( {C}_{ * }^{\text{cell }}\left( X\right) \) as defined in Definition A.2.2. \( \diamond \) Lemma 4.2.16. The group \( {C}_{n}^{\text{cell }}\left( X\right) \) is the free abelian group on the n-cells of \( X \) . If \( {\alpha }_{\lambda }^{n} \) is the generator corresponding to the n-cell \( {D}_{\lambda }^{n},\lambda \in {\Lambda }_{n} \), then \( \partial \left( {\alpha }_{\lambda }^{n}\right) \) is given as follows: \[ {H}_{n}\left( {{D}_{\lambda }^{n},{S}_{\lambda }^{n - 1}}\right) \overset{\partial }{ \rightarrow }{H}_{n - 1}\left( {S}_{\lambda }^{n - 1}\right) \overset{{f}_{\lambda } \mid {S}_{\lambda }^{n - 1}}{ \rightarrow }{H}_{n - 1}\left( {X}^{n - 1}\right) \rightarrow {H}_{n - 1}\left( {{X}^{n - 1},{X}^{n - 2}}\right) , \] under which composite \( {\alpha }_{\lambda }^{n} \mapsto \partial \left( {\alpha }_{\lambda }^{n}\right) \) . Proof. This follows directly from Lemma 4.2.11 and its proof. We let \( {Z}_{n}^{\text{cell }}\left( X\right) \) denote \( \operatorname{Ker}\left( {{\partial }_{n} : {C}_{n}^{\text{cell }}\left( X\right) \rightarrow {C}_{n - 1}^{\text{cell }}\left( X\right) }\right) \) and \( {B}_{n}^{\text{cell }}\left( X\right) \) denote \( \operatorname{Im}\left( {{\partial }_{n + 1} : {C}_{n + 1}^{\text{cell }}\left( X\right) \rightarrow {C}_{n}^{\text{cell }}\left( X\right) }\right) \), so that \[ {H}_{n}^{\text{cell }}\left( X\right) = {Z}_{n}^{\text{cell }}\left( X\right) /{B}_{n}^{\text{cell }}\left( X\right) . \] Theorem 4.2.17. Let \( X \) be a CW-complex. Suppose that \( X \) is finite dimensional or that \( {H}_{ * } \) is compactly supported. Then the cellular homology of \( X \) is isomorphic to the ordinary homology of \( X \) . Proof. We begin with the following purely algebraic observation. Suppose we have three abelian groups and maps as shown: \[ {H}^{1}\overset{k}{ \leftarrow }{H}^{2}\overset{j}{ \rightarrow }{H}^{3} \] where 1. \( k \) is a surjection. 2. \( j \) is an injection; set \( {Z}^{3} = \operatorname{Im}\left( j\right) \) . 3. There is a subgroup \( {B}^{3} \subseteq {Z}^{3} \) with \( {j}^{-1}\left( {B}^{3}\right) = \operatorname{Ker}\left( k\right) \) . Then \( k \circ {j}^{-1} : {Z}^{3}/{B}^{3} \rightarrow {H}^{1} \) is an isomorphism with inverse \( j \circ {k}^{-1} \) . To see this, note that \( j : {H}^{2} \rightarrow {Z}^{3} \) is an isomorphism, so \( {j}^{-1} : {Z}^{3} \rightarrow {H}^{2} \) is well-defined, and then we have isomorphisms \[ {Z}^{3}/{B}^{3}\overset{{j}^{-1}}{ \rightarrow }{H}^{2}/{j}^{-1}\left( {B}^{3}\right) = {H}^{2}/\operatorname{Ker}\left( k\right) \cong {H}^{1}. \] Note also that \( j \circ {k}^{-1} \) is well-defined, as \( j\left( {\operatorname{Ker}\left( k\right) }\right) = {B}^{3} \) . We apply this here to construct isomorphisms, for each \( n \) , \[ {\Theta }_{n} : {H}_{n}^{\text{cell }}\left( X\right) \rightarrow {H}_{n}\left( X\right) \] Consider \[ {H}_{n}\left( X\right) \overset{{k}_{n}}{ \leftarrow }{H}_{n}\left( {X}^{n}\right) \overset{{j}_{n}}{ \rightarrow }{H}_{n}\left( {{X}^{n},{X}^{n - 1}}\right) \] with the maps induced by inclusion. We must verify the three conditions above. We have already shown (1), that \( {k}_{n} \) is a surjection, in Corollary 4.2.12. Also, (2) is immediate from \( 0 = {H}_{n}\left( {X}^{n - 1}\right) \rightarrow {H}_{n}\left( {X}^{n}\right) \rightarrow {H}_{n}\left( {{X}^{n},{X}^{n - 1}}\right) \) . 
Let us identify \( {Z}^{3} = \operatorname{Im}\left( {j}_{n}\right) \) . We have a commutative diagram (figure omitted). Then \( {j}_{n - 1} \) is an injection, so \[ \operatorname{Im}\left( {j}_{n}\right) = \operatorname{Ker}\left( \partial \right) = \operatorname{Ker}\left( {{j}_{n - 1} \circ \partial }\right) = \operatorname{Ker}\left( {\partial }_{n}\right) = {Z}_{n}^{\text{cell }}\left( X\right) . \] As for (3), we have another commutative diagram (figure omitted). Note that \( {H}_{n}\left( {X}^{n + 1}\right) \rightarrow {H}_{n}\left( X\right) \) is an isomorphism. Also, \( {B}_{n}^{\text{cell }}\left( X\right) = \operatorname{Im}\left( {\partial }_{n + 1}\right) = \operatorname{Im}\left( {{j}_{n} \circ \partial }\right) \subseteq \operatorname{Im}\left( {j}_{n}\right) \) . Then \[ {j}_{n}^{-1}\left( {{B}_{n}^{\text{cell }}\left( X\right) }\right) = {j}_{n}^{-1}\left( {\operatorname{Im}\left( {\partial }_{n + 1}\right) }\right) = {j}_{n}^{-1}\left( {\operatorname{Im}\left( {{j}_{n}\partial }\right) }\right) \] \[ = \operatorname{Im}\left( \partial \right) = \operatorname{Ker}\left( {\widetilde{k}}_{n}\right) = \operatorname{Ker}\left( {k}_{n}\right) . \] Thus if \( {\Theta }_{n} = {k}_{n} \circ {j}_{n}^{-1} \), we have an isomorphism \[ {\Theta }_{n} : {Z}_{n}^{\text{cell }}\left( X\right) /{B}_{n}^{\text{cell }}\left( X\right) = {H}_{n}^{\text{cell }}\left( X\right) \overset{ \cong }{ \rightarrow }{H}_{n}\left( X\right) . \] Remark 4.2.18. Theorem 4.2.17 shows that the point of cellular homology is not that it is different from ordinary homology. Rather, for CW-complexes (where it is defined) it is the same. The point of cellular homology is that it is a better way of looking at homology. It is better for two reasons. The first reason is psychological. It makes clear how the homology of a CW-complex comes from its cells. The second reason is mathematical. If \( X \) is a finite complex, the cellular chain complex of \( X \) is finitely generated. This inherent finiteness not only makes cellular homology easier to work with, it allows us to effectively, and indeed easily, compute an important and very classical invariant of topological spaces as well. Recall that any finitely generated abelian group \( A \) is isomorphic to \( F \oplus T \), where \( F \) is a free abelian group of well-defined rank \( r \) (i.e., \( F \) is isomorphic to \( {\mathbb{Z}}^{r} \) ) and \( T \) is a torsion group. In this case we define the rank of \( A \) to be \( r \) . Definition 4.2.19. Let \( X \) be a space with \( {H}_{i}\left( X\right) \) finitely generated for each \( i \), and nonzero for only finitely many values of \( i \) . Then the Euler characteristic \( \chi \left( X\right) \) is \[ \chi \left( X\right) = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\operatorname{rank}{H}_{i}\left( X\right) \] \( \diamond \) Theorem 4.2.20. Let \( X \) be a finite CW-complex. Then \[ \chi \left( X\right) = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i} \cdot \text{ number of }i\text{-cells of }X. \] Proof. Let \( X \) have \( {d}_{i} \) \( i \) -cells and suppose \( {d}_{i} = 0 \) for \( i > n \) . 
We have the cellular chain complex of \( X \) \[ 0 \rightarrow {C}_{n}^{\mathrm{{cell}}}\left( X\right) \rightarrow {C}_{n - 1}^{\mathrm{{cell}}}\left( X\right) \rightarrow \cdots \rightarrow {C}_{1}^{\mathrm{{cell}}}\left( X\right) \rightarrow {C}_{0}^{\mathrm{{cell}}}\left( X\right) \rightarrow 0 \] with \( {C}_{i}^{\text{cell }}\left( X\right) \) free abelian of rank \( {d}_{i} \) . But then it is a purely algebraic result that \[ \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}{d}_{i} = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\operatorname{rank}{H}_{i}^{\text{cell }}\left( X\right) \] and by Theorem A.2.13 this is equal to \( \chi \left( X\right) \) . Remark 4.2.21. Note in particular that \( \chi \left( X\right) \) is independent of the CW-structure on \( X \) . For example, let \( X = {S}^{n} \) . Then \( \chi \left( X\right) = 2 \) for \( n \) even and 0 for \( n \) odd. We have seen in Example 4.2.9 three different CW-structures on \( X \) . In the first, \( X \) has a single 0-cell and a single \( n \) -cell. In the second, \( X \) has two \( i \) -cells for each \( i \) between 0 and \( n \) . In the third, \( X \) has a single 0-cell, a single \( \left( {n - 1}\right) \) -cell, and two \( n \) -cells. But counting cells in any of these CW-structures gives \( \chi \left( X\right) = 2 \) for \( n \) even and 0 for \( n \) odd. \( \;\diamond \) Remark 4.2.22. Let \( X \) be the surface of a convex polyhedron in \( {\mathbb{R}}^{3} \) . It is a famous theorem of Euler that, if \( V, E \), and \( F \) denote the number of vertices, edges, and faces of \( X \), then \[ V - E + F = 2\text{.} \] But \( X \) is topologically \( {S}^{2} \) and regarding \( X \) as the surface of a polyhedron gives a CW-structure on \( X \) with \( V \) 0-cells, \( E \) 1-cells, and \( F \) 2-cells, so this equation is a special case of Theorem 4.2.20. For example, we may compute \( V - E + F \) for each of the five Platonic solids. Tetrahedron: \( 4 - 6 + 4 = 2 \) , Cube: \( 8 - {12} + 6 = 2 \) , Octahedron: \( 6 - {12} + 8 = 2 \) , Dodecahedron: \( {20} - {30} + {12} = 2 \) , Icosahedron: \( {12} - {30} + {20} = 2 \) . \( \diamond \) Recall that we considered covering spaces in Sect. 2.2. In general, if \( \widetilde{X} \) is a covering space of \( X \), there is no simple relationship between the homology groups of \( \widetilde{X} \) and the homology groups of \( X \) . There is, however, a simple relationship between their Euler characteristics. Theorem 4.2.23. Let \( X \) be a finite \( {CW} \) -complex. Let \( \widetilde{X} \) be an \( n \) -fold cover of \( X \) . Then \( \chi \left( \widetilde{X}\right) = {n\chi }\left( X\right) \) . Proof. Given any cell decomposition of \( X \), we may refine it to obtain a cell decomposition so that every cell is evenly covered by the covering projection. Then the inverse image of every cell is \( n \) cells, so the theorem immediately follows from Theorem 4.2.20. Example 4.2.24. Let \( R \) be a \( k \) -leafed rose, and let \( \widetilde{R} \) be any \( n \) -fold cover of \( R \) . Then \( R \) has one 0-cell and \( k \) 1-cells, so \( \chi \left( R\right) = 1 - k \)
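Since a finite cellular chain complex is just a sequence of integer matrices, the passage from cell counts and boundary maps to Betti numbers and to \( \chi \left( X\right) \) can be carried out mechanically. The following sketch is only an illustration (it is not from the text, and the function names are mine); it uses sympy to compute rational Betti numbers \( {b}_{i} = \dim \operatorname{Ker}{\partial }_{i} - \operatorname{rank}{\partial }_{i + 1} \) for the standard CW structures on the torus and on \( \mathbb{R}{P}^{2} \), and checks the cell-count formula of Theorem 4.2.20 on the five Platonic solids.

```python
# A minimal sketch (not from the text): rational Betti numbers and the Euler
# characteristic of a finite CW-complex from its cellular boundary matrices.
from sympy import Matrix

def betti_and_euler(cells, boundary):
    """cells[i] = number of i-cells; boundary[i] = Matrix of d_i : C_i -> C_{i-1}
    (shape cells[i-1] x cells[i]); boundary maps omitted from the dict are zero."""
    n = len(cells) - 1
    def rank_d(i):
        return boundary[i].rank() if 1 <= i <= n and i in boundary else 0
    betti = [(cells[i] - rank_d(i)) - rank_d(i + 1) for i in range(n + 1)]
    chi_cells = sum((-1) ** i * c for i, c in enumerate(cells))
    chi_betti = sum((-1) ** i * b for i, b in enumerate(betti))
    return betti, chi_cells, chi_betti

# Torus: one 0-cell, two 1-cells, one 2-cell, and both boundary maps are zero.
print(betti_and_euler([1, 2, 1], {}))                    # ([1, 2, 1], 0, 0)
# RP^2: one cell in each of dimensions 0, 1, 2; d_1 = 0 and d_2 = (2).
print(betti_and_euler([1, 1, 1], {2: Matrix([[2]])}))    # ([1, 0, 0], 1, 1)
# Platonic solids as CW structures on S^2 (Remark 4.2.22): chi = V - E + F = 2.
for V, E, F in [(4, 6, 4), (8, 12, 6), (6, 12, 8), (20, 30, 12), (12, 30, 20)]:
    assert V - E + F == 2
```

Torsion (for instance the \( {\mathbb{Z}}_{2} \) in \( {H}_{1}\left( {\mathbb{R}{P}^{2}}\right) \) ) is invisible in the rational Betti numbers, but by the algebraic fact quoted above it does not affect \( \chi \left( X\right) \) .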
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 3.3.6
Definition 3.3.6 Let \( \Gamma \) be a finitely generated non-elementary subgroup of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) . The field \( \mathbb{Q}\left( {\operatorname{tr}{\Gamma }^{\left( 2\right) }}\right) \) will henceforth be denoted by \( {k\Gamma } \) and referred to as the invariant trace field of \( \Gamma \) . Likewise, the quaternion algebra \( {A}_{0}{\Gamma }^{\left( 2\right) } \) over \( \mathbb{Q}\left( {\operatorname{tr}\overline{{\Gamma }^{\left( 2\right) }}}\right) \) will be denoted by \( {A\Gamma } \) and referred to as the invariant quaternion algebra of \( \Gamma \) . The cases of particular interest here occur when \( \Gamma \) has finite covolume. Theorem 3.3.7 If \( \Gamma \) is a Kleinian group of finite covolume, then its invariant trace field is a finite non-real extension of \( \mathbb{Q} \) . Proof: That \( {k\Gamma } \) is a finite extension of \( \mathbb{Q} \) follows from Theorem 3.1.2. Suppose that \( {k\Gamma } \) is a real field. By Corollary 3.2.5, \( {\Gamma }^{\left( 2\right) } \) is conjugate to a subgroup of \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) . However, \( {\Gamma }^{\left( 2\right) } \) cannot then have finite covolume. We also note the fundamental relationship between the basic structure of quaternion algebras and the topology of the quotient space. Theorem 3.3.8 If \( \Gamma \) is a non-elementary group which contains parabolic elements, then \( {A}_{0}\Gamma = {M}_{2}\left( {\mathbb{Q}\left( {\operatorname{tr}\Gamma }\right) }\right) \) . In particular, if \( \Gamma \) is a Kleinian group such that \( {\mathbf{H}}^{3}/\Gamma \) has finite volume but is non-compact, then \( {A\Gamma } = {M}_{2}\left( {k\Gamma }\right) \) . Proof: If \( \Gamma \) has a parabolic element \( \gamma \), then \( \gamma - I \) is non-invertible in the quaternion algebra. Thus \( {A}_{0}\Gamma \) cannot be a division algebra. The result then follows from Theorem 2.1.7. Given \( \Gamma \) as a subgroup of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) means that its trace field is naturally embedded in \( \mathbb{C} \) . Thus the invariant trace field is a subfield of \( \mathbb{C} \) and so is not just defined up to isomorphism, but is embedded in \( \mathbb{C} \) . Only in the first section of this chapter do we use the fact that the trace field is a number field. The results elsewhere in this chapter apply to any finitely generated non-elementary subgroup of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) and so, in particular, apply to all finitely generated Fuchsian groups. It should be noted that even in the cases where the Kleinian groups are of finite covolume, the invariant trace field and quaternion algebra are not complete commensurability invariants. There are many examples of noncommensurable manifolds with the same invariant trace field and, indeed, of cocompact and non-cocompact groups with the same invariant trace field. Examples will be given in the next chapter and more will emerge later, particularly in the discussion of arithmetic groups. There are also examples of non-commensurable manifolds with isomorphic quaternion algebras and these will be discussed later. Let \( \Gamma \) be a finitely generated non-elementary subgroup of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) so that \( {\Gamma }^{\left( 2\right) } \) is a normal subgroup of finite index. 
Then, as in the proof of Theorem 3.3.4, conjugation by \( g \in \Gamma \) induces an automorphism of \( {\Gamma }^{\left( 2\right) } \) and, hence, induces an automorphism of the quaternion algebra \( {A\Gamma } \) which is necessarily inner. Thus using (3.8), the assignment \( g \rightarrow a \) induces a homomorphism of \( \Gamma \) into \( A{\Gamma }^{ * }/{\left( k\Gamma \right) }^{ * } \) and, hence, into \( \operatorname{SO}\left( {{\left( A\Gamma \right) }_{0}, n}\right) \) by Theorem 2.4.1. Thus any finite-covolume Kleinian group \( \Gamma \) in \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) admits a faithful representation in the \( {k\Gamma } \) points of a linear algebraic group defined over \( {k\Gamma } \) , where \( {k\Gamma } \) is a number field. ## Exercise 3.3 1. Let \( \Gamma \) be a Kleinian group of finite covolume. Show that there are only finitely many Kleinian groups \( {\Gamma }_{1} \) such that \( {\Gamma }_{1}^{\left( 2\right) } = {\Gamma }^{\left( 2\right) } \) . 2. Show that if \( {\mathbf{H}}^{3}/\Gamma \) is a compact hyperbolic manifold whose volume is bounded by \( c \), then \( \left\lbrack {\Gamma : {\Gamma }^{\left( 2\right) }}\right\rbrack \) is bounded by a function of \( c \) . 3. Let \( \Gamma \) be a Kleinian group such that every element of \( \Gamma \) leaves a fixed circle in the complex plane invariant. Prove that the invariant trace field \( {k\Gamma } \subset \mathbb{R} \) . 4. Let \( \mathrm{{Ad}} \) denote the adjoint representation of \( {\mathrm{{SL}}}_{2} \) to \( \mathrm{{GL}}\left( \mathcal{L}\right) \), where \( \mathcal{L} \) is the Lie algebra of \( {\mathrm{{SL}}}_{2} \) . Let \( \Gamma \) be a subgroup of finite covolume in \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) . Show that \( {k\Gamma } = \mathbb{Q}\left( {\{ \operatorname{tr}\operatorname{Ad}\gamma : \gamma \in \Gamma \} }\right) \) . 5. Let \( \Gamma \) be a Kleinian group of finite covolume. Let \( \sigma \) be a Galois embedding of \( {k\Gamma } \) such that \( \sigma \left( {k\Gamma }\right) \) is real and \( {A\Gamma } \) is ramified at the real place corresponding to \( \sigma \) . Prove that if \( \tau \) is a Galois embedding of \( \mathbb{Q}\left( {\operatorname{tr}\Gamma }\right) \) such that \( {\left. \tau \right| }_{k\Gamma } = \sigma \), then \( \tau \left( {\mathbb{Q}\left( {\operatorname{tr}\Gamma }\right) }\right) \) is real. (See Exercise 2.9, No. 6.) 6. Show that, if \( \Gamma \) is the \( \left( {2,3,8}\right) \) -Fuchsian triangle group, then \( \mathbb{Q}\left( {\operatorname{tr}\Gamma }\right) \neq \) \( {k\Gamma } \) . Show that \( {A\Gamma } \) does not split over \( {k\Gamma } \) . (See Exercise 3.2, No. 4.) Describe the linear algebraic group \( G \) defined over \( {k\Gamma } \) such that \( \Gamma \) has a faithful representation in the \( {k\Gamma } \) points of \( G \) . Deduce that \( \Gamma \) has a faithful representation in \( \mathrm{{SO}}\left( {3,\mathbb{R}}\right) \) . 7. Let \( \Gamma \) denote the orientation-preserving subgroup of index 2 in the Coxeter group generated by reflections in the faces of the (ideal) tetrahedron in \( {\mathbf{H}}^{3} \) bounded by the planes \( y = 0, x = \sqrt{3}y, x = \left( {1 + \sqrt{5}}\right) /4 \) and the unit hemisphere. Determine the invariant trace field and quaternion algebra of \( \Gamma \) . 
Let \( \Delta \) denote the orientation-preserving subgroup of index 2 in the Coxeter group generated by reflections in the faces of a regular ideal dodecahedron in \( {\mathbf{H}}^{3} \) with dihedral angles \( \pi /3 \) . Find the invariant trace field and quaternion algebra of \( \Delta \) . ## 3.4 Trace Relations There are a number of identities between traces of matrices in \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) . These are particularly useful in the determination of generators of the trace fields, which is carried out in the next section. The most useful of these identities are listed below and many are established by straightforward calculation. Trace is, of course, invariant on conjugacy classes so that \[ \operatorname{tr}{XY} = \operatorname{tr}{ZXY}{Z}^{-1}\;\text{ for }X, Y \in {M}_{2}\left( \mathbb{C}\right), Z \in \mathrm{{GL}}\left( {2,\mathbb{C}}\right) . \] (3.10) In particular, \[ \operatorname{tr}{XY} = \operatorname{tr}{YX}\;\text{ and }\;\operatorname{tr}{X}_{1}{X}_{2}\cdots {X}_{n} = \operatorname{tr}{X}_{\sigma \left( 1\right) }{X}_{\sigma \left( 2\right) }\ldots {X}_{\sigma \left( n\right) } \] (3.11) for any cyclic permutation \( \sigma \) of \( 1,2,\ldots, n \) . Recall that for \( X \in \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) \[ {X}^{2} = \left( {\operatorname{tr}X}\right) X - I \] (3.12) from which we deduce \[ \operatorname{tr}{X}^{2} = {\operatorname{tr}}^{2}X - 2 \] (3.13) and other identities for higher powers of \( X \), as given in Lemma 3.1.3. The other basic identities for elements \( X, Y \in \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) are \[ \operatorname{tr}{XY} = \left( {\operatorname{tr}X}\right) \left( {\operatorname{tr}Y}\right) - \operatorname{tr}X{Y}^{-1},\;\operatorname{tr}X = \operatorname{tr}{X}^{-1}. \] (3.14) By repeated application of these relations, the following identities, which will be useful in the next two sections, are readily obtained. \[ \operatorname{tr}\left\lbrack {X, Y}\right\rbrack = {\operatorname{tr}}^{2}X + {\operatorname{tr}}^{2}Y + {\operatorname{tr}}^{2}{XY} - \operatorname{tr}X\operatorname{tr}Y\operatorname{tr}{XY} - 2 \] (3.15) \[ \operatorname{tr}{XYXZ} = \operatorname{tr}{XY}\operatorname{tr}{XZ} - \operatorname{tr}Y{Z}^{-1} \] (3.16) \[ \operatorname{tr}{XY}{X}^{-1}Z = \operatorname{tr}{XY}\operatorname{tr}{X}^{-1}Z - \operatorname{tr}{X}^{2}Y{Z}^{-1} \] (3.17) \[ \operatorname{tr}{X}^{2}{YZ} = \operatorname{tr}X\operatorname{tr}{XYZ} - \operatorname{tr}{YZ} \] (3.18) \[ \operatorname{tr}{XYZ} + \operatorname{tr}{YXZ} + \operatorname{tr}X\operatorname{tr}Y\operatorname{tr}Z = \operatorname{tr}X\operatorname{tr}{YZ} + \operatorname{tr}Y\operatorname{tr}{XZ} + \operatorname{tr}Z\operatorname{tr}{XY} \] (3.19) For this last identity, we argue as follows: \[ \operatorname{tr}{XYZ} = \operatorname{tr}X\operatorname{tr}{YZ} - \operatorname{tr}X{Z}^{-1}{Y}^{-1} \] \[ = \operatorname{tr}X\operatorname{tr}{YZ} - \left( {\operatorname{tr}X{Z}^{-1}\operatorname{tr}Y - \operatorname{tr}X{Z}^{-1}Y}\right) \] \[ = \operatorname{tr}X\operatorname{tr}{YZ} - \operatorname{tr}Y\left( {\operatorname{tr}X\operatorname{tr}Z - \operatorname{tr}{XZ}}\right) + \left( {
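Because the identities (3.13)–(3.19) are formal consequences of the Cayley–Hamilton relation (3.12), they can be spot-checked numerically. The sketch below is only an illustration (it is not part of the text): it draws random elements of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) with numpy and verifies (3.14), (3.15) and (3.19) to floating-point accuracy.

```python
# Illustrative numerical check (not from the text) of the trace identities
# (3.14), (3.15) and (3.19) on random elements of SL(2, C).
import numpy as np

rng = np.random.default_rng(0)

def random_sl2c():
    """A random 2x2 complex matrix rescaled to have determinant 1."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return m / np.sqrt(np.linalg.det(m))   # either complex square root works

tr = lambda m: np.trace(m)

for _ in range(200):
    X, Y, Z = random_sl2c(), random_sl2c(), random_sl2c()
    # (3.14): tr XY = tr X tr Y - tr X Y^{-1}
    assert np.isclose(tr(X @ Y), tr(X) * tr(Y) - tr(X @ np.linalg.inv(Y)))
    # (3.15): tr [X, Y] = tr^2 X + tr^2 Y + tr^2 XY - tr X tr Y tr XY - 2
    comm = X @ Y @ np.linalg.inv(X) @ np.linalg.inv(Y)
    assert np.isclose(tr(comm),
                      tr(X) ** 2 + tr(Y) ** 2 + tr(X @ Y) ** 2
                      - tr(X) * tr(Y) * tr(X @ Y) - 2)
    # (3.19): tr XYZ + tr YXZ + tr X tr Y tr Z
    #         = tr X tr YZ + tr Y tr XZ + tr Z tr XY
    lhs = tr(X @ Y @ Z) + tr(Y @ X @ Z) + tr(X) * tr(Y) * tr(Z)
    rhs = tr(X) * tr(Y @ Z) + tr(Y) * tr(X @ Z) + tr(Z) * tr(X @ Y)
    assert np.isclose(lhs, rhs)
print("trace identities verified numerically")
```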
113_Topological Groups
Definition 18.15
Definition 18.15. Let \( I \) be any set and \( F \) a collection of subsets of \( I \) . (i) \( F \) has the finite intersection property if \( {a}_{0} \cap \cdots \cap {a}_{m - 1} \neq 0 \) whenever \( m \in \omega \) and \( a \in {}^{m}F \) . (ii) \( F \) is a filter over \( I \) provided \( F \neq 0 \) and: (a) \( b \supseteq a \in F \) implies \( b \in F \) ; (b) \( a, b \in F \) implies \( a \cap b \in F \) . (iii) \( F \) is an ultrafilter over \( I \) provided \( F \) is a filter over \( I,0 \notin F \), and for any \( J \subseteq I \), either \( J \in F \) or \( I \sim J \in F \) . The following proposition is obvious. Proposition 18.16. If \( F \) is a filter and \( 0 \notin F \), then \( F \) has the finite intersection property. For any filter \( F \) over \( I \) we have \( I \in F \) . For any filter \( F \) on \( I,0 \in F \) iff \( F \) is the set of all subsets of \( I \) . Proposition 18.17. Let \( F \) be a filter over \( I \) with \( 0 \notin F \) . Then the following conditions are equivalent: (i) \( F \) is an ultrafilter. (ii) for all \( a, b \subseteq I \), if \( a \cup b \in F \), then \( a \in F \) or \( b \in F \) . (iii) for any filter \( G \) on \( I \), if \( F \subset G \), then \( 0 \in G \) . Proof. \( \left( i\right) \Rightarrow \left( {ii}\right) \) . Assume that \( a \cup b \in F \) while \( a \notin F \) . Then by \( \left( i\right) \), \( I \sim a \in F \) . Now \( \left( {I \sim a}\right) \cap \left( {a \cup b}\right) \subseteq b \), so \( b \in F \) . (ii) \( \Rightarrow \) (iii). Say \( a \in G \sim F \) . Then \( a \cup \left( {I \sim a}\right) = I \in F \), so by (ii) \( I \sim a \in F \subseteq G \) . Hence \( 0 = a \cap \left( {I \sim a}\right) \in G \) . (iii) \( \Rightarrow \left( i\right) \) . Assume that \( a \subseteq I \) and \( a \notin F \) . Let \( G = \{ b \subseteq I \) : there is an \( x \in F \) with \( x \cap a \subseteq b\} \) . Clearly \( F \subseteq G, a \in G \), and \( G \) is a filter. In particular \( F \subset G \) , so \( 0 \in G \) by (iii). Hence \( x \cap a = 0 \) for some \( x \in F \) ; hence \( x \subseteq I \sim a \), so \( I \sim a \in F \) . The following is the basic existence principle for ultrafilters. Proposition 18.18. If \( F \) is a collection of subsets of \( I \neq 0 \) with the finite intersection property, then there is an ultrafilter \( G \) such that \( F \subseteq G \) . Proof. Let \( \mathcal{A} \) be the collection of all filters \( G \) such that \( F \subseteq G \) and \( 0 \notin G \) . Then \( \mathcal{A} \) is nonempty; for, let \( H = \{ x \subseteq I \) : there exist \( m \in \omega \) and \( y \in {}^{m}F \) with \( \left. {{y}_{0} \cap \cdots \cap {y}_{m - 1} \subseteq x}\right\} \) . Clearly \( H \in \mathcal{A} \) . If \( 0 \neq \mathcal{B} \subseteq \mathcal{A} \) is simply ordered by inclusion, clearly \( \bigcup \mathcal{B} \in \mathcal{A} \) . By Zorn’s lemma, let \( G \) be a maximal element of \( \mathcal{A} \) . By 18.17, \( G \) is an ultrafilter. Definition 18.19. Let \( \mathfrak{A} = \left\langle {{\mathfrak{A}}_{i} : i \in I}\right\rangle \) be a system of \( \mathcal{L} \) -structures, and let \( F \) be an ultrafilter on \( I \) . We define \[ {\bar{F}}^{\mathfrak{A}} = \left\{ {\left( {x, y}\right) \in {}^{2}{\mathrm{P}}_{i \in I}{A}_{i} : \left\{ {i : {x}_{i} = {y}_{i}}\right\} \in F}\right\} . \] We write \( \bar{F} \) if \( \mathfrak{A} \) is understood.
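Over a finite index set these notions can be checked exhaustively, which gives a concrete (if degenerate) picture: every ultrafilter over a finite set is principal, i.e. consists of all supersets of a single point. The sketch below is my own illustration, not part of the text; it enumerates all families of subsets of a three-element set, keeps those satisfying Definition 18.15(ii),(iii) with \( 0 \notin F \), and confirms that they are exactly the principal ones and that each satisfies condition (ii) of Proposition 18.17.

```python
# My own illustration (not from the text): over a finite index set, the
# ultrafilters of Definition 18.15 are exactly the principal ones.
from itertools import chain, combinations

I = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(I), r) for r in range(len(I) + 1))]

def is_filter(F):
    # Definition 18.15(ii): nonempty, closed upward and under intersections.
    return (len(F) > 0
            and all(b in F for a in F for b in subsets if a <= b)
            and all(a & b in F for a in F for b in F))

def is_ultrafilter(F):
    # Definition 18.15(iii): a filter not containing the empty set such that
    # for every J included in I, either J or its complement I ~ J is in F.
    return (is_filter(F) and frozenset() not in F
            and all(J in F or (I - J) in F for J in subsets))

families = [frozenset(fam) for r in range(len(subsets) + 1)
            for fam in combinations(subsets, r)]
ultrafilters = [F for F in families if is_ultrafilter(F)]

principal = [frozenset(a for a in subsets if i in a) for i in I]
assert set(ultrafilters) == set(principal)           # all ultrafilters are principal
for F in ultrafilters:                                # Proposition 18.17(ii)
    assert all((a | b) not in F or a in F or b in F
               for a in subsets for b in subsets)
print(len(ultrafilters), "ultrafilters over a 3-element set, all principal")
```

Nonprincipal ultrafilters, such as those obtained from Proposition 18.18 applied to the family of cofinite subsets of an infinite set, exist only over infinite index sets.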
Obviously \( \bar{F} \) depends only on the system \( \left\langle {{A}_{i} : i \in I}\right\rangle \) and not at all on the language \( \mathcal{L} \) . Proposition 18.20. Under the assumptions of 18.19, \( \bar{F} \) is an equivalence relation on \( {\mathrm{P}}_{i \in I}{A}_{i} \) . Let \( \mathfrak{B} = {\mathrm{P}}_{i \in I}{\mathfrak{A}}_{i} \) . Then (i) if \( \mathbf{O} \) is an m-ary operation symbol, and if \( {x}_{t}\bar{F}{y}_{t} \) for all \( t < m \), then \( {\mathbf{O}}^{\mathfrak{B}}x\bar{F}{\mathbf{O}}^{\mathfrak{B}}y \) ; (ii) if \( \mathbf{R} \) is an m-ary relation symbol, and if \( {x}_{t}\bar{F}{y}_{t} \) for all \( t < m \), then \( \{ i \in I : \langle {x}_{0i},\ldots ,{x}_{m - 1, i}\rangle \in {\mathbf{R}}^{{\mathfrak{A}}_{i}}\} \in F\; \) iff \( \{ i \in I : \langle {y}_{0i},\ldots ,{y}_{m - 1, i}\rangle \in {\mathbf{R}}^{{\mathfrak{A}}_{i}}\} \in F. \) Proof. The proof is routine, and we just check \( \left( i\right) \) and the transitivity of \( \bar{F} \) as examples. Assume that \( x\bar{F}y\bar{F}z \) . Thus \( \left\{ {i \in I : {x}_{i} = {y}_{i}}\right\} \in F \) and \( \left\{ {i \in I : {y}_{i} = {z}_{i}}\right\} \in F \) . But \[ \left\{ {i \in I : {x}_{i} = {y}_{i}}\right\} \cap \left\{ {i \in I : {y}_{i} = {z}_{i}}\right\} \subseteq \left\{ {i \in I : {x}_{i} = {z}_{i}}\right\} , \] so also \( \left\{ {i \in I : {x}_{i} = {z}_{i}}\right\} \in F \), and so \( x\bar{F}z \) . To check \( \left( i\right) \), note that \[ \left\{ {i \in I : \forall t < m\left( {{x}_{ti} = {y}_{ti}}\right) }\right\} = \mathop{\bigcap }\limits_{{t < m}}\left\{ {i \in I : {x}_{ti} = {y}_{ti}}\right\} \in F, \] and hence \[ \left\{ {i \in I : \forall t < m\left( {{x}_{ti} = {y}_{ti}}\right) }\right\} \subseteq \left\{ {i \in I : {\left( {\mathbf{O}}^{\mathfrak{B}}x\right) }_{i} = {\left( {\mathbf{O}}^{\mathfrak{B}}y\right) }_{i}}\right\} \in F, \] so \( {\mathbf{O}}^{\mathfrak{B}}x\bar{F}{\mathbf{O}}^{\mathfrak{B}}y \) . We may think of the members of an ultrafilter \( F \) as "big" subsets of \( I \) . Thus the passage from a member \( x \in \mathop{P}\limits_{{i \in I}}{A}_{i} \) to its equivalence class under \( \bar{F} \) amounts to identifying all functions which are equal to \( x \) on a "big" subset of \( I \) . As usual, \( {\left\lbrack x\right\rbrack }_{\bar{F}} \), or simply \( \left\lbrack x\right\rbrack \) if \( \bar{F} \) is understood, denotes the equivalence class of \( x \) under \( \bar{F} \) . Proposition 18.20 justifies the definition of ultraproducts: Definition 18.21. Let \( \mathfrak{A} = \left\langle {{\mathfrak{A}}_{i} : i \in I}\right\rangle \) be a system of \( \mathcal{L} \) -structures, set \( \mathfrak{B} = \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i} \), and let \( F \) be an ultrafilter on \( I \) .
The ultraproduct of \( \mathfrak{A} \) over \( \bar{F} \), denoted by \( \mathfrak{A}/\bar{F} \) or \( {\mathrm{P}}_{i \in I}{\mathfrak{A}}_{i}/\bar{F} \), is the structure \( \mathfrak{C} \) with universe \( C = {\mathrm{P}}_{i \in I}{A}_{i}/\bar{F} \) (the collection of all equivalence classes under \( \bar{F} \) ) and with operations and relations given as follows: (i) if \( \mathbf{O} \) is an \( m \) -ary operation symbol and \( x \in {}^{m}B \), then \( {\mathbf{O}}^{\mathfrak{C}}\left( {\left\lbrack {x}_{0}\right\rbrack ,\ldots ,\left\lbrack {x}_{m - 1}\right\rbrack }\right) = \left\lbrack {{\mathbf{O}}^{\mathfrak{B}}\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) }\right\rbrack \) ; (ii) if \( \mathbf{R} \) is an \( m \) -ary relation symbol, then we let \( {\mathbf{R}}^{\mathfrak{C}} \) consist of all \( m \) -tuples of the form \( \left\langle {\left\lbrack {x}_{0}\right\rbrack ,\ldots ,\left\lbrack {x}_{m - 1}\right\rbrack }\right\rangle \) such that \( \left\{ {i : \left\langle {{x}_{0i},\ldots ,{x}_{m - 1, i}}\right\rangle \in {\mathbf{R}}^{{\mathfrak{A}}_{i}}}\right\} \in F \) . Of particular importance later will be the case when all factors \( {\mathfrak{A}}_{i} \) of an ultraproduct are equal to some structure \( \mathfrak{C} \) . Then the ultraproduct \( {\mathrm{P}}_{i \in I}{\mathfrak{A}}_{i}/\bar{F} \) is denoted of course by \( {}^{I}\mathfrak{C}/\bar{F} \), and is called an ultrapower of \( \mathfrak{C} \) . Sometimes we omit the bar on \( F \) . We shall give only a few basic properties of ultraproducts here. Our first very simple property uses the notion of an isomorphism, which we shall now briefly discuss. Definition 18.22. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures. An isomorphism from \( \mathfrak{A} \) into \( \mathfrak{B} \), or an embedding of \( \mathfrak{A} \) into \( \mathfrak{B} \), is a one-one function \( f \) mapping \( A \) into \( B \) such that \[ f{\mathbf{O}}^{\mathfrak{A}}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) = {\mathbf{O}}^{\mathfrak{B}}\left( {f{a}_{0},\ldots, f{a}_{m - 1}}\right) \] whenever \( \mathbf{O} \) is an \( m \) -ary operation symbol and \( {a}_{0},\ldots ,{a}_{m - 1} \in A \), while \[ \left\langle {{a}_{0},\ldots ,{a}_{m - 1}}\right\rangle \in {\mathbf{R}}^{\mathfrak{A}}\text{ iff }\left\langle {f{a}_{0},\ldots, f{a}_{m - 1}}\right\rangle \in {\mathbf{R}}^{\mathfrak{B}} \] whenever \( \mathbf{R} \) is an \( m \) -ary relation symbol and \( {a}_{0},\ldots ,{a}_{m - 1} \in A \) . We say that \( f \) is an isomorphism from \( \mathfrak{A} \) onto \( \mathfrak{B} \) if the function \( f \) maps onto \( B \) . Finally, we write \( \mathfrak{A} \cong \mathfrak{B} \) if there is an isomorphism from \( \mathfrak{A} \) onto \( \mathfrak{B} \) . The following proposition, easily established by induction on \( \sigma \) and \( \varphi \), is the basic fact about isomorphism as far as first-order languages are concerned. Actually a similar theorem holds for any of the languages which have been considered by logicians. The result says roughly that isomorphic structures are indistinguishable in first-order logic. For this reason, most of our definitions and results extend automatically to isomorphic structures. Proposition 18.23.
If \( f \) is an isomorphism from \( \mathfrak{A} \) onto \( \mathfrak{B}, x \in {}^{\omega }A,\sigma \) is a term, and \( \varphi \) is a formula, then: \[ f{\sigma }^{\mathfrak{A}}x = {\sigma }^{\mathfrak{B}}\left( {f \circ x}\right) \] \[ \mathfrak{A} \vDash \varphi \left\lbrack x\right\rbrack \;\text{ iff }\mathfrak{B} \vDash \varphi \left\lbrack {f \circ x}\right\rbrack . \] Thus for any sentence \( \varphi ,\varphi \) holds in \( \mathfrak{A} \) iff \( \varphi \) holds in \( \mathfrak{B} \) .
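When the ultrafilter \( F \) is principal, say \( F = \{ J \subseteq I : j \in J\} \), the ultraproduct of Definition 18.21 collapses: \( x\bar{F}y \) iff \( {x}_{j} = {y}_{j} \), and \( {\mathrm{P}}_{i \in I}{\mathfrak{A}}_{i}/\bar{F} \) is isomorphic to \( {\mathfrak{A}}_{j} \) via \( \left\lbrack x\right\rbrack \mapsto {x}_{j} \) . The sketch below is my own illustration (not from the text); it checks this for three small structures with one binary operation, verifying that the induced operation on \( \bar{F} \) -classes is well defined and agrees with the \( j \) th factor.

```python
# My own illustration (not from the text): the ultraproduct of Definition 18.21
# over a *principal* ultrafilter is isomorphic to the distinguished factor.
from itertools import product

mods = [2, 3, 4]                       # universes Z/2, Z/3, Z/4 with addition
I = range(len(mods))
j = 1                                  # F = principal ultrafilter {S : j in S}

def in_F(S):
    return j in S

def equivalent(x, y):                  # x F-bar y  iff  {i : x_i = y_i} in F
    return in_F({i for i in I if x[i] == y[i]})

def add(x, y):                         # the operation of the product structure B
    return tuple((x[i] + y[i]) % mods[i] for i in I)

elements = list(product(*(range(m) for m in mods)))

classes = []                           # one representative per F-bar class
for x in elements:
    if not any(equivalent(x, r) for r in classes):
        classes.append(x)
assert len(classes) == mods[j]         # exactly one class per element of Z/3

for x in elements:
    for y in elements:
        for x2 in elements:            # the induced operation is well defined ...
            if equivalent(x, x2):
                assert equivalent(add(x, y), add(x2, y))
        # ... and is computed by the j-th coordinate, i.e. in A_j = Z/3
        assert add(x, y)[j] == (x[j] + y[j]) % mods[j]
print("the ultraproduct over the principal ultrafilter at j has", len(classes), "elements")
```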
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 5.5.6
Definition 5.5.6. Let \( G \) be an abelian group. The singular cohomology of \( X \) with coefficients \( G,{H}^{ * }\left( {X;G}\right) \), is the homology of \( {C}^{ * }\left( X\right) \otimes G \), where \( {C}^{ * }\left( X\right) \) is the dual chain complex to the singular chain complex of \( X \) . Theorem 5.5.7. Singular cohomology with coefficients in \( G \) is an ordinary cohomology theory with coefficient group \( G = {H}^{0}\left( {X;G}\right) \) . Proof. Again this mirrors the proof for singular homology in Sect. 5.1. Again Axiom 4 uses the fact that, for every \( n \), the sequence \( 0 \rightarrow {C}^{n}\left( {X, A}\right) \rightarrow {C}^{n}\left( X\right) \rightarrow \) \( {C}^{n}\left( A\right) \rightarrow 0 \) is split exact. Again we have a universal coefficient theorem for cohomology. Because of the problem with infinite ranks, we need additional hypotheses. Theorem 5.5.8 (Universal coefficient theorem). (1) Let \( X \) be a space and let \( G \) be an abelian group. Suppose that \( X \) is of finite type or that \( G \) is of finitely generated. Then there is a split short exact sequence \[ 0 \rightarrow {H}^{n}\left( X\right) \otimes G \rightarrow {H}^{n}\left( {X;G}\right) \rightarrow \operatorname{Tor}\left( {{H}^{n + 1}\left( X\right), G}\right) \rightarrow 0. \] (2) In this situation, if \( f : X \rightarrow Y \) is a map, there is a commutative diagram ![21ef530b-1e09-406a-b041-cf4539af5c14_85_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_85_0.jpg) Proof. Again we omit the purely algebraic argument, but we note that, while the cochain groups \( {C}^{ * }\left( X\right) \) are not in general free, they are torsion-free, and that fact, together with our additional hypotheses, suffices to be able to apply that argument. We have the following corollary (compare Corollary 5.3.13). Corollary 5.5.9. In the situation of Theorem 5.5.8, let \( f : X \rightarrow Y \) and suppose that \( {f}^{ * } : {H}^{n}\left( Y\right) \rightarrow {H}^{n}\left( X\right) \) is an isomorphism for all \( n \) . Then \( {f}^{ * } : {H}^{n}\left( {Y;G}\right) \rightarrow {H}^{n}\left( {X;G}\right) \) is an isomorphism for all \( n \) . We also have the analog of Theorem 5.3.16. Theorem 5.5.10. In the situation of Theorem 5.5.8, let \( \varphi : {G}_{1} \rightarrow {G}_{2} \) be a homomorphism. Then there is a commutative diagram of split short exact sequences, with all vertical maps induced by \( \varphi \) , ![21ef530b-1e09-406a-b041-cf4539af5c14_85_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_85_1.jpg) It is natural to expect that there will be a close relationship between homology and cohomology, and that is indeed the case. We develop that now. Given the definition of the dual chain complex, we have the evaluation map \[ e : {C}^{n}\left( X\right) \otimes {C}_{n}\left( X\right) \rightarrow \mathbb{Z} \] given by evaluating a cochain on a chain, i.e. \[ e\left( {\gamma, c}\right) = \gamma \left( c\right) \;\text{ for }\gamma \in {C}^{n}\left( X\right), c \in {C}_{n}\left( X\right) . \] Lemma 5.5.11. The evaluation map e induces a map \[ e : {H}^{n}\left( X\right) \otimes {H}_{n}\left( X\right) \rightarrow \mathbb{Z} \] by \( e\left( {\left\lbrack \gamma \right\rbrack ,\left\lbrack c\right\rbrack }\right) = \gamma \left( c\right) \), where \( \gamma \) (resp. \( c \) ) is a representative of the cohomology class [γ] (resp. the homology class [c]). Proof. 
We can restrict \( e \) to evaluate cocycles on cycles, \[ e : {Z}^{n}\left( X\right) \otimes {Z}_{n}\left( X\right) \rightarrow \mathbb{Z} \] by \( e\left( {\gamma, c}\right) = \gamma \left( c\right) \) . But then if \( c \) is a boundary, \( c = \partial d, e\left( {\gamma, c}\right) = e\left( {\gamma ,\partial d}\right) = e\left( {{\delta \gamma }, d}\right) = \) \( e\left( {0, d}\right) = 0 \), and similarly if \( \gamma \) is a coboundary. Given this lemma, we have a map \[ e : {H}^{n}\left( X\right) \rightarrow \operatorname{Hom}\left( {{H}_{n}\left( X\right) ,\mathbb{Z}}\right) \] given by \[ e\left( \left\lbrack \gamma \right\rbrack \right) \left( \left\lbrack c\right\rbrack \right) = e\left( {\left\lbrack \gamma \right\rbrack ,\left\lbrack c\right\rbrack }\right) = \gamma \left( c\right) \] i.e., \( f = e\left( \left\lbrack \gamma \right\rbrack \right) \) is the homomorphism \( f : {H}_{n}\left( X\right) \rightarrow \mathbb{Z} \) given by \( f\left( \left\lbrack c\right\rbrack \right) = \gamma \left( c\right) \) . This construction can be performed with arbitrary coefficients, to obtain maps \[ e : {H}^{n}\left( {X;G}\right) \otimes {H}_{n}\left( X\right) \rightarrow G \] and \[ e : {H}^{n}\left( {X;G}\right) \rightarrow \operatorname{Hom}\left( {{H}_{n}\left( X\right), G}\right) . \] Theorem 5.5.12 (Universal coefficient theorem). (1) For any space \( X \) and abelian group \( G \), there is a split short exact sequence \[ 0 \rightarrow \operatorname{Ext}\left( {{H}_{n - 1}\left( X\right), G}\right) \rightarrow {H}^{n}\left( {X;G}\right) \overset{e}{ \rightarrow }\operatorname{Hom}\left( {{H}_{n}\left( X\right), G}\right) \rightarrow 0. \] (2) If \( f : X \rightarrow Y \) is a map, there is a commutative diagram ![21ef530b-1e09-406a-b041-cf4539af5c14_87_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_87_0.jpg) Proof. Again this is a purely algebraic argument which we omit. Note that this theorem has important consequences even (indeed, especially) in the case \( G = \mathbb{Z} \) . Corollary 5.5.13. Let \( f : X \rightarrow Y \) induce an isomorphism \( {f}_{ * } : {H}_{n}\left( X\right) \rightarrow {H}_{n}\left( Y\right) \) for all \( n \) . Then \( {f}^{ * } : {H}^{n}\left( Y\right) \rightarrow {H}^{n}\left( X\right) \) is an isomorphism for all \( n \) . Corollary 5.5.14. If the inclusion \( \left( {X - U, A - U}\right) \rightarrow \left( {X, A}\right) \) is excisive for singular homology, it is excisive for singular cohomology. Corollary 5.5.15. Let \( X \) be a space of finite type and suppose that \( {H}_{n}\left( X\right) \approx {F}_{n} \oplus {T}_{n} \) , where \( {F}_{n} \) is a free abelian group and \( {T}_{n} \) is a torsion group, for each \( n \) . Then \[ {H}^{n}\left( X\right) \approx {F}_{n} \oplus {T}_{n - 1} \] for each \( n \) . Proof. This follows from the computation of Ext in Lemma A.3.12. Corollary 5.5.16. Let \( X \) be a space of finite type. Then \( {H}^{n}\left( X\right) \) is finitely generated for all \( n \) . Example 5.5.17. The integral singular cohomology of \( \mathbb{R}{P}^{n} \) is as follows: \[ {H}^{k}\left( {\mathbb{R}{P}^{n}}\right) = \left\{ \begin{array}{ll} 0 & k > n \\ \mathbb{Z} & k = n\text{ odd } \\ {\mathbb{Z}}_{2} & k = n\text{ even } \\ {\mathbb{Z}}_{2} & 1 \leq k \leq n - 1\text{ even } \\ 0 & 1 \leq k \leq n - 1\text{ odd } \\ \mathbb{Z} & k = 0, \end{array}\right. \] as we see from Corollary 5.5.15 and Theorem 4.3.4. \( \diamond \) Remark 5.5.18.
We may take \( \operatorname{Hom}\left( {, R}\right) \) where \( R \) is a commutative ring, and then \( {H}^{n}\left( {X;R}\right) \) becomes an \( R \) -module. In particular we may take \( R = \mathbb{F} \), a field. We then see that \( {H}_{n}\left( {X;\mathbb{F}}\right) \) and \( {H}^{n}\left( {X;\mathbb{F}}\right) \) are \( \mathbb{F} \) -vector spaces, and either they are both finite-dimensional vector spaces or they are both infinite-dimensional vector spaces. In case they are both finite-dimensional, then they are dual vector spaces with the pairing \[ e : {H}^{n}\left( {X;\mathbb{F}}\right) \otimes {H}_{n}\left( {X;\mathbb{F}}\right) \rightarrow \mathbb{F} \] being nonsingular. (In particular, \( {H}^{n}\left( {X;\mathbb{F}}\right) \) and \( {H}_{n}\left( {X;\mathbb{F}}\right) \) are \( \mathbb{F} \) -vector spaces of the same dimension.) We have Theorem 5.5.12, which tells us how to pass from homology to cohomology, and now we have a theorem which tells us how to pass back (under favorable circumstances). Theorem 5.5.19 (Universal coefficient theorem). (1) Let \( X \) be a space of finite type. For any abelian group \( G \) there is a split short exact sequence \[ 0 \rightarrow \operatorname{Ext}\left( {{H}^{n + 1}\left( X\right), G}\right) \rightarrow {H}_{n}\left( {X;G}\right) \overset{e}{ \rightarrow }\operatorname{Hom}\left( {{H}^{n}\left( X\right), G}\right) \rightarrow 0. \] (2) If \( f : X \rightarrow Y \) is a map, where both \( X \) and \( Y \) are spaces of finite type, there is a commutative diagram ![21ef530b-1e09-406a-b041-cf4539af5c14_88_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_88_0.jpg) Proof. Again we omit this purely algebraic proof. (The reader has undoubtedly observed by now that we have several theorems called the universal coefficient theorem. This reflects the fact that all these theorems have this name in the literature.) Recall we defined the Euler characteristic \( \chi \left( X\right) \) of a space \( X \) with finitely generated homology in Definition 4.2.19. We observe: Theorem 5.5.20. Let \( X \) be a space with finitely generated homology. Let \( \mathbb{F} \) be an arbitrary field. Then \( \chi \left( X\right) \) is given by \[ \chi \left( X\right) = \left\{ \begin{array}{l} \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( -1\right) }^{n}\operatorname{rank}{H}_{n}\left( {X;\mathbb{Z}}\right) \\ \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( -1\right) }^{n}\operatorname{rank}{H}^{n}\left( {X;\mathbb{Z}}\right) \\ \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( -1\right) }^{n}\dim {H}_{n}\left( {X;\mathbb{F}}\right) \\ \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( -1\right) }^{n}\dim {H}^{n}\left( {X;\mathbb{F}}\right) \end{array}\right. \] Proof. This follows directly from the universal coefficient theorems. (If \( \mathbb{F} \) is a field of characteristic zero, then all of these ranks are equal for every integer \( n \) . If \( \mathbb{F} \) does not have characteristic 0, that may not be the case, depending on the space \( X
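Corollary 5.5.15 makes the passage from integral homology to integral cohomology mechanical for spaces of finite type: \( {H}^{n} \approx {F}_{n} \oplus {T}_{n - 1} \) . The sketch below is a hypothetical illustration (not from the text; the encoding and function names are mine): each \( {H}_{n} \) is recorded as a rank together with its torsion coefficients, the corollary is applied, and the result is checked against Example 5.5.17 for \( \mathbb{R}{P}^{3} \) and against Theorem 5.5.20 for the Euler characteristic.

```python
# A hypothetical sketch (not from the text): Corollary 5.5.15 as a computation.
# Each finitely generated group is encoded as (rank, [torsion coefficients]).

def cohomology_from_homology(H):
    """H[n] = (rank of F_n, torsion coefficients of T_n); returns the list of
    H^n = F_n (+) T_{n-1}, per Corollary 5.5.15."""
    out = []
    for n, (rank, _torsion) in enumerate(H):
        prev_torsion = H[n - 1][1] if n >= 1 else []
        out.append((rank, list(prev_torsion)))
    return out

def euler(groups):
    return sum((-1) ** n * rank for n, (rank, _) in enumerate(groups))

# RP^3: H_0 = Z, H_1 = Z/2, H_2 = 0, H_3 = Z.
H = [(1, []), (0, [2]), (0, []), (1, [])]
print(cohomology_from_homology(H))
# [(1, []), (0, []), (0, [2]), (1, [])] -- i.e. Z, 0, Z/2, Z, as in Example 5.5.17
assert euler(H) == euler(cohomology_from_homology(H)) == 0   # Theorem 5.5.20
```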
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 2.1.5
Definition 2.1.5 For \( x \in A \), the (reduced) norm and (reduced) trace of \( x \) lie in \( F \) and are defined by \( n\left( x\right) = x\bar{x} \) and \( \operatorname{tr}\left( x\right) = x + \bar{x} \), respectively. Thus on a matrix algebra, these coincide with the notions of determinant and trace. The norm map \( n : A \rightarrow F \) is multiplicative, as \( n\left( {xy}\right) = \left( {xy}\right) \overline{\left( xy\right) } = \) \( {xy}\bar{y}\bar{x} = n\left( x\right) n\left( y\right) \) . Thus the invertible elements of \( A \) are precisely those such that \( n\left( x\right) \neq 0 \), with the inverse of such an \( x \) being \( \bar{x}/n\left( x\right) \) . Thus if we let \( {A}^{ * } \) denote the invertible elements of \( A \), and \[ {A}^{1} = \{ x \in A \mid n\left( x\right) = 1\} \] then \( {A}^{1} \subset {A}^{ * } \) . This reduced norm \( n \) is related to field norms (see also Exercise 2.1, No. 7). An element \( w \) of the quaternion algebra \( A \) satisfies the quadratic \[ {x}^{2} - \operatorname{tr}\left( w\right) x + n\left( w\right) = 0 \] (2.3) with \( \operatorname{tr}\left( w\right), n\left( w\right) \in F \) . Let \( F\left( w\right) \) be the smallest subalgebra of \( A \) which contains \( {F1} \) and \( w \), so that \( F\left( w\right) \) is commutative. If \( A \) is a division algebra, then the polynomial (2.3) is reducible over \( F \) if and only if \( w \in Z\left( A\right) \) . Thus for \( w \notin Z\left( A\right), F\left( w\right) = E \) is a quadratic field extension \( E \mid F \) . Then \( {\left. {N}_{E \mid F} = n\right| }_{E} \) . Lemma 2.1.6 If the quaternion algebra \( A \) over \( F \) is a division algebra and \( w \notin Z\left( A\right) \), then \( E = F\left( w\right) \) is a quadratic field extension of \( F \) and \( {\left. n\right| }_{E} = {N}_{E \mid F} \) If \( A = \left( \frac{a, b}{F}\right) \) and \( x = {a}_{0} + {a}_{1}i + {a}_{2}j + {a}_{3}k \), then \[ n\left( x\right) = {a}_{0}^{2} - a{a}_{1}^{2} - b{a}_{2}^{2} + {ab}{a}_{3}^{2}. \] In the case of Hamilton’s quaternions \( \left( \frac{-1, - 1}{\mathbb{R}}\right) ,\;n\left( x\right) = {a}_{0}^{2} + {a}_{1}^{2} + {a}_{2}^{2} + {a}_{3}^{2} \) so that every non-zero element is invertible and \( \mathcal{H} \) is a division algebra. The matrix algebras \( {M}_{2}\left( F\right) \) are, of course, not division algebras. That these matrix algebras are the only non-division algebras among quaternion algebras is a consequence of Wedderburn's Theorem. From Wedderburn's Structure Theorem for finite-dimensional simple algebras (see Theorem 2.9.6), a quaternion algebra \( A \) is isomorphic to a full matrix algebra \( {M}_{n}\left( D\right) \), where \( D \) is a division algebra, with \( n \) and \( D \) uniquely determined by \( A \) . The \( F \) -dimension of \( {M}_{n}\left( D\right) \) is \( m{n}^{2} \), where \( m = {\dim }_{F}\left( D\right) \) and, so, for the four-dimensional quaternion algebras there are only two possibilities: \( m = 4, n = 1;m = 1, n = 2 \) . Theorem 2.1.7 If \( A \) is a quaternion algebra over \( F \), then \( A \) is either a division algebra or \( A \) is isomorphic to \( {M}_{2}\left( F\right) \) . We now use the Skolem Noether Theorem (see Theorem 2.9.8) to show that quaternion algebras can be characterised algebraically as follows: Theorem 2.1.8 Every four-dimensional central simple algebra over a field \( F \) of characteristic \( \neq 2 \) is a quaternion algebra. 
Proof: Let \( A \) be a four-dimensional central simple algebra over \( F \) . If \( A \) is isomorphic to \( {M}_{2}\left( F\right) \), it is a quaternion algebra, so by Theorem 2.1.7 we can assume that \( A \) is a division algebra. For \( w \notin Z\left( A\right) \), the subalgebra \( F\left( w\right) \) will be commutative. As a subring of \( A, F\left( w\right) \) is an integral domain and since \( A \) is finite-dimensional, \( w \) will satisfy an \( F \) -polynomial. Thus \( F\left( w\right) \) is a field. Since \( A \) is central, \( F\left( w\right) \neq A \) . Pick \( {w}^{\prime } \in A \smallsetminus F\left( w\right) \) . Now the elements \( 1, w,{w}^{\prime } \) and \( w{w}^{\prime } \) are necessarily independent over \( F \) and so form a basis of A. Thus \[ {w}^{2} = {a}_{0} + {a}_{1}w + {a}_{2}{w}^{\prime } + {a}_{3}w{w}^{\prime },\;{a}_{i} \in F. \] Since \( {w}^{\prime } \notin F\left( w\right) \), it follows that \( {w}^{2} = {a}_{0} + {a}_{1}w \) . Thus \( F\left( w\right) = E \) is a quadratic extension of \( F \) . Choose \( y \in E \) such that \( {y}^{2} = a \in F \) and \( E = F\left( y\right) \) . The automorphism on \( E \) induced by \( y \rightarrow - y \) will be induced by conjugation in \( A \) by an invertible element \( z \) of \( A \) by the Skolem Noether Theorem (see Theorem 2.9.8). Thus \( {zy}{z}^{-1} = - y \) . Clearly \( z \notin E \) and \( 1, y, z \) and \( {yz} \) are linearly independent over \( F \) . Also \( {z}^{2}y{z}^{-2} = y \) so that \( {z}^{2} \in Z\left( A\right) \) (i.e., \( {z}^{2} = b \in F \) ). However, \( \{ 1, y, z,{yz}\} \) is then a standard basis of \( A \) and \( A \cong \left( \frac{a, b}{F}\right) \) . \( ▱ \) Corollary 2.1.9 Let \( A \) be a quaternion division algebra over \( F \) . If \( w \in \) \( A \smallsetminus F \) and \( E = F\left( w\right) \), then \( A{ \otimes }_{F}E \cong {M}_{2}\left( E\right) \) . Proof: As in the above theorem, \( E \) is a quadratic extension field of \( F \) . Furthermore, there exists a standard basis \( \{ 1, y, z,{yz}\} \) of \( A \) with \( E = F\left( y\right) \) and \( {y}^{2} = a \in F \) . Thus there exists \( x \in A{ \otimes }_{F}E \) such that \( {x}^{2} = 1 \) . However, then \( A{ \otimes }_{F}E \) cannot be a division algebra and so must be isomorphic to \( {M}_{2}\left( E\right) \) . Deciding for a given quaternion algebra \( \left( \frac{a, b}{F}\right) \) whether or not it is isomorphic to \( {M}_{2}\left( F\right) \) is an important problem and, as will be seen later in our applications, has topological implications. For a given \( a \) and \( b \), the problem can be re-expressed in terms of quadratic forms, as will be shown in \( §{2.3} \) . ## Exercise 2.1 1. Let \( A \) be a four-dimensional central algebra over the field \( F \) such that there is a two-dimensional separable subalgebra \( L \) over \( F \) and an element \( c \in {F}^{ * } \) with \( A = L + {Lu} \) for some \( u \in A \) with \[ {u}^{2} = c\;\text{ and }\;{um} = \bar{m}u \] where \( m \in L \) and \( m \mapsto \bar{m} \) is the non-trivial \( F \) -automorphism of \( L \) . Prove that if \( F \) has characteristic \( \neq 2 \), then \( A \) is a quaternion algebra. Indeed, this is a definition of a quaternion algebra valid for any characteristic. Show that, under this definition, conjugation can be defined as: that \( F \) - endomorphism of \( A \), denoted \( x \mapsto \bar{x} \), such that \( \bar{u} = - u \) and restricted to \( L \) is the non-trivial automorphism. 
Prove also that Theorem 2.1.8 is valid in any characteristic. 2. Show that the ring of Hamilton’s quaternions \( \mathcal{H} = \left( \frac{-1, - 1}{\mathbb{R}}\right) \) is isomorphic to the \( \mathbb{R} \) -subalgebra \[ \left\{ {\left. \left( \begin{matrix} \alpha & \beta \\ - \bar{\beta } & \bar{\alpha } \end{matrix}\right) \right| \;\alpha ,\beta \in \mathbb{C}}\right\} . \] Hence show that \( {\mathcal{H}}^{1} = \{ h \in \mathcal{H} \mid n\left( h\right) = 1\} \) is isomorphic to \( \mathrm{{SU}}\left( 2\right) \) . 3. Let \[ A = \left\{ {\left. \left( \begin{matrix} \alpha & \sqrt{2}\beta \\ \sqrt{2}\bar{\beta } & \bar{\alpha } \end{matrix}\right) \right| \;\alpha ,\beta \in \mathbb{Q}\left( i\right) }\right\} . \] Prove that \( A \) is a quaternion algebra over \( \mathbb{Q} \) . Prove that it is isomorphic to \( {M}_{2}\left( \mathbb{Q}\right) \) (cf. Exercise 2.7, No.1). 4. Let \( A \) be a quaternion algebra over a number field \( k \) . Show that there exists a quadratic extension field \( L \mid k \) such that \( A \) has a faithful representation \( \rho \) in \( {M}_{2}\left( L\right) \), such that \( \rho \left( \bar{x}\right) = \overline{\rho \left( x\right) } \) for all \( x \in A \) . 5. Let \( F \) be a finite field of characteristic \( \neq 2 \) . If \( A \) is a quaternion algebra over \( F \), prove that \( A \cong {M}_{2}\left( F\right) \) . 6. For any quaternion algebra \( A \) and \( x \in A \), show that \[ \operatorname{tr}\left( {x}^{2}\right) = \operatorname{tr}{\left( x\right) }^{2} - {2n}\left( x\right) \] 7. Let \( \lambda \) denote the left regular representation of a quaternion algebra \( A \) . Prove that, for \( x \in A, n{\left( x\right) }^{2} = \det \lambda \left( x\right) \) . ## 2.2 Orders in Quaternion Algebras Throughout this chapter, we are mainly concerned with the structure of algebras, particularly quaternion algebras, over a field. However, in this section, we briefly introduce orders, which are the analogues in quaternion algebras of rings of integers in number fields. These play a vital role in developing the arithmetic theory of quaternion algebras over a number field and all of Chapter 6 is devoted to their study. Only some of the most basic notions associated to orders which are required in the following chapters will be discussed here. Throughout this section, the ring \( R \) will be a Dedekind domain (see §0.2 and §0.6) whose field of quotients \( k \) is either a number field or a \( \mathcal{P} \) -adic field. In applications, it will usually be the case that when \( k \) is a number field, \( R = {R}_{k} \), the ring of integers in \( k \) . Recall that a Dedekind domain is an integrally closed Noetherian ring in which every non-trivial prime ideal is maximal. Definition 2.2.1 If \( V \) is a vector space over \( k \), an R-lattice \( L \) in \( V \) is a finitely generated \( R \) -module contained in \( V \) . Furthermore, \
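Several of the computations in this section are easy to verify by machine: the norm formula \( n\left( x\right) = {a}_{0}^{2} - a{a}_{1}^{2} - b{a}_{2}^{2} + {ab}{a}_{3}^{2} \), the relation \( n\left( x\right) = x\bar{x} \), the multiplicativity of \( n \), and the trace identity of Exercise 2.1, No. 6. The sketch below is my own illustration (not from the text; the function names are mine); it implements the standard basis \( \{ 1, i, j, k\} \) of \( \left( \frac{a, b}{\mathbb{Q}}\right) \) with exact rational arithmetic.

```python
# My own illustration (not from the text): the quaternion algebra (a,b / Q) with
# standard basis {1, i, j, k}, i^2 = a, j^2 = b, ij = -ji = k, over Fractions.
from fractions import Fraction as Q
import random

def mul(x, y, a, b):
    x0, x1, x2, x3 = x; y0, y1, y2, y3 = y
    return (x0*y0 + a*x1*y1 + b*x2*y2 - a*b*x3*y3,
            x0*y1 + x1*y0 - b*x2*y3 + b*x3*y2,
            x0*y2 + x2*y0 + a*x1*y3 - a*x3*y1,
            x0*y3 + x3*y0 + x1*y2 - x2*y1)

conj = lambda x: (x[0], -x[1], -x[2], -x[3])
norm = lambda x, a, b: x[0]**2 - a*x[1]**2 - b*x[2]**2 + a*b*x[3]**2
trace = lambda x: 2 * x[0]

random.seed(0)
rand_q = lambda: tuple(Q(random.randint(-9, 9), random.randint(1, 9)) for _ in range(4))

a, b = Q(-1), Q(-1)                              # Hamilton's quaternions
for _ in range(200):
    x, y = rand_q(), rand_q()
    assert mul(x, conj(x), a, b) == (norm(x, a, b), 0, 0, 0)            # n(x) = x xbar
    assert norm(mul(x, y, a, b), a, b) == norm(x, a, b) * norm(y, a, b) # multiplicative
    assert trace(mul(x, x, a, b)) == trace(x)**2 - 2*norm(x, a, b)      # Ex. 2.1 No. 6

# (1, b / Q) is never a division algebra: 1 + i has norm 1 - 1 = 0.
print(norm((Q(1), Q(1), Q(0), Q(0)), Q(1), Q(5)))    # 0, a zero divisor in (1,5 / Q)
```

The last line illustrates Theorem 2.1.7 in the simplest way: in \( \left( \frac{1, b}{F}\right) \) the element \( 1 + i \) is nonzero of norm 0, so the algebra has zero divisors and must be isomorphic to \( {M}_{2}\left( F\right) \) .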
109_The rising sea Foundations of Algebraic Geometry
Definition 11.45
Definition 11.45. If the \( \bar{W} \) -cell \( \mathfrak{D} \) is a chamber (hence a simplicial cone), then the conical cell \( x + \mathfrak{D} \) will be called a sector ("quartier" in [59]). ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_579_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_579_0.jpg) Fig. 11.4. Conical cells based at \( x \) . If the \( n \) walls of \( \mathfrak{D} \) are defined by linear equations \( {f}_{i} = 0 \), where \( {f}_{i} > 0 \) on \( \mathfrak{D} \), then a sector \( \mathfrak{C} \) with direction \( \mathfrak{D} \) is given by linear inequalities of the form \( {f}_{i} > {c}_{i}\left( {i = 1,\ldots, n}\right) \) . It is clear from this that the intersection of two sectors with direction \( \mathfrak{D} \) is again a sector with direction \( \mathfrak{D} \) . See Figure 11.5. ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_579_1.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_579_1.jpg) Fig. 11.5. The intersection of two sectors with the same direction. If \( \mathfrak{C} \) and \( {\mathfrak{C}}^{\prime } \) are sectors with \( {\mathfrak{C}}^{\prime } \subseteq \mathfrak{C} \), then we will say that \( {\mathfrak{C}}^{\prime } \) is a subsector of \( \mathfrak{C} \) . Note that \( \mathfrak{C} \) and \( {\mathfrak{C}}^{\prime } \) then necessarily have the same direction. For suppose \( \mathfrak{C} = x + \mathfrak{D} \) and \( {\mathfrak{C}}^{\prime } = {x}^{\prime } + {\mathfrak{D}}^{\prime } \) . Letting \( \mathfrak{D} \) be defined by inequalities \( {f}_{i} > 0 \) as above, we conclude that the \( {f}_{i} \) are bounded below on \( {\mathfrak{D}}^{\prime } \) ; hence no \( {f}_{i} \) can be negative on the cone \( {\mathfrak{D}}^{\prime } \) . Thus \( {f}_{i} > 0 \) on \( {\mathfrak{D}}^{\prime } \) for all \( i \), which implies that \( {\mathfrak{D}}^{\prime } \subseteq \mathfrak{D} \) and hence \( {\mathfrak{D}}^{\prime } = \mathfrak{D} \) . Consider now two sectors \( {\mathfrak{C}}_{1} \mathrel{\text{:=}} x + \mathfrak{D} \) and \( {\mathfrak{C}}_{2} \mathrel{\text{:=}} y - \mathfrak{D} \) having opposite directions \( \pm \mathfrak{D} \) . Let \( {\overline{\mathfrak{C}}}_{1} \) and \( {\overline{\mathfrak{C}}}_{2} \) be the closures \( x + \overline{\mathfrak{D}} \) and \( y - \overline{\mathfrak{D}} \) . Assume that \( x \in {\overline{\mathfrak{C}}}_{2} \) and \( y \in {\overline{\mathfrak{C}}}_{1} \), so that the two closed sectors \( {\overline{\mathfrak{C}}}_{1},{\overline{\mathfrak{C}}}_{2} \) overlap, as in Figure 11.6. ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_580_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_580_0.jpg) Fig. 11.6. Two sectors with opposite directions. We will show that if \( {C}_{1} \) and \( {C}_{2} \) are chambers that are "sufficiently far out" in \( {\mathfrak{C}}_{1} \) and \( {\mathfrak{C}}_{2} \), respectively, then \( B\left( {{C}_{1},{C}_{2}}\right) \) contains the overlap \( {\overline{\mathfrak{C}}}_{1} \cap {\overline{\mathfrak{C}}}_{2} \) . Let \( {\mathfrak{C}}_{1}^{\prime } \) be the subsector of \( {\mathfrak{C}}_{1} \) based at \( y \) and let \( {\mathfrak{C}}_{2}^{\prime } \) be the subsector of \( {\mathfrak{C}}_{2} \) based at \( x \), as indicated by the dotted lines in Figure 11.7; in other words, \( {\mathfrak{C}}_{1}^{\prime } = y + \mathfrak{D} \) and \( {\mathfrak{C}}_{2}^{\prime } = x - \mathfrak{D}. \) ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_580_1.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_580_1.jpg) Fig. 11.7. Subsectors. Lemma 11.46. 
With the notation above, suppose \( {C}_{1} \) and \( {C}_{2} \) are chambers in \( E \) such that \( {\bar{C}}_{1} \) meets \( {\mathfrak{C}}_{1}^{\prime } \) and \( {\bar{C}}_{2} \) meets \( {\mathfrak{C}}_{2}^{\prime } \) . If \( C \) is any chamber of \( E \) such that \( \bar{C} \) meets \( {\overline{\mathfrak{C}}}_{1} \cap {\overline{\mathfrak{C}}}_{2} \), then \( C \subseteq B\left( {{C}_{1},{C}_{2}}\right) \) . (Note: Since sectors are open sets, the hypothesis that \( {\bar{C}}_{i} \) meets \( {\mathfrak{C}}_{i}^{\prime } \) implies that \( {C}_{i} \) meets \( {\mathfrak{C}}_{i}^{\prime } \) .) Proof. We must show that no wall (i.e., element of \( \mathcal{H} \) ) separates \( C \) from both \( {C}_{1} \) and \( {C}_{2} \) . Let \( H \) be a wall, defined by a linear equation \( f = c \) . We may choose \( f \) such that \( f > 0 \) on \( \mathfrak{D} \), in which case we will say that the closed half-space \( f \geq c \) (resp. \( f \leq c \) ) is the positive (resp. negative) side of \( H \) . (In Figures 11.6 and 11.7, think of \( {\mathfrak{C}}_{1} \) and \( {\mathfrak{C}}_{1}^{\prime } \) as opening in the positive direction.) The closed chamber \( \bar{C} \) is on one side of \( H \) . Suppose first that it is on the positive side. Then \( y \) must be on the positive side of \( H \) . For if \( y \) were strictly on the negative side, then \( {\overline{\mathfrak{C}}}_{2} \) would be strictly on the negative side, contradicting the fact that \( {\overline{\mathfrak{C}}}_{2} \) meets \( \bar{C} \) . It follows that \( {\mathfrak{C}}_{1}^{\prime } \) is strictly on the positive side of \( H \), and hence \( {C}_{1} \) is on the positive side. Thus \( H \) does not separate \( C \) from \( {C}_{1} \) . A similar argument shows that \( H \) does not separate \( C \) from \( {C}_{2} \) if \( C \) is on the negative side of \( H \) . The following consequence of the lemma is the key step in the proof of Theorem 11.43: Corollary 11.47. Let \( {\mathfrak{C}}_{1} \) and \( {\mathfrak{C}}_{2} \) be arbitrary sectors in \( E \) with opposite directions. Given any bounded subset \( Y \) of \( E \), there are subsectors \( {\mathfrak{C}}_{1}^{\prime } \subseteq {\mathfrak{C}}_{1} \) and \( {\mathfrak{C}}_{2}^{\prime } \subseteq {\mathfrak{C}}_{2} \) with the following property: If \( {C}_{1} \) and \( {C}_{2} \) are chambers in \( E \) such that \( {\bar{C}}_{1} \) meets \( {\mathfrak{C}}_{1}^{\prime } \) and \( {\bar{C}}_{2} \) meets \( {\mathfrak{C}}_{2}^{\prime } \), then \( B\left( {{C}_{1},{C}_{2}}\right) \) contains \( Y \) . Proof. Let \( \mathfrak{D} \) be the direction of \( {\mathfrak{C}}_{1} \), so that \( - \mathfrak{D} \) is the direction of \( {\mathfrak{C}}_{2} \) . Observe first that we can find sectors \( x + \mathfrak{D} \) and \( y - \mathfrak{D} \) as in Lemma 11.46, with \( Y \subseteq {\mathfrak{C}}_{1} \cap {\mathfrak{C}}_{2} \) . In fact, with the notation above, we need only choose constants \( {c}_{i},{c}_{i}^{\prime }\left( {i = 1,\ldots, n}\right) \) such that \( {c}_{i} < {f}_{i} < {c}_{i}^{\prime } \) on \( Y \) for all \( i \) . The lemma therefore implies that the subsectors \( y + \mathfrak{D} \) and \( x - \mathfrak{D} \) have the property stated in the corollary, but they might not be subsectors of the given sectors \( {\mathfrak{C}}_{1},{\mathfrak{C}}_{2} \) . 
To achieve this, we set \( {\mathfrak{C}}_{1}^{\prime } \mathrel{\text{:=}} \left( {y + \mathfrak{D}}\right) \cap {\mathfrak{C}}_{1} \) and \( {\mathfrak{C}}_{2}^{\prime } \mathrel{\text{:=}} \left( {x - \mathfrak{D}}\right) \cap {\mathfrak{C}}_{2} \) . Proof of Theorem 11.43. Suppose \( Y \) is a bounded subset of an apartment \( E \) . Take an arbitrary pair of sectors in \( E \) with opposite directions. Then Corollary 11.47 implies that there is a pair of chambers \( {C}_{1},{C}_{2} \) in \( E \) such that \( Y \subseteq B\left( {{C}_{1},{C}_{2}}\right) \) . Conversely, given chambers \( C,{C}^{\prime } \) of \( X \), choose an apartment \( E \) containing \( C \) and \( {C}^{\prime } \) . Then the combinatorial convexity of apartments implies that \( E \) contains \( B\left( {C,{C}^{\prime }}\right) \) . The latter is therefore a bounded subset of \( E \) [since there are only finitely many minimal galleries from \( C \) to \( {C}^{\prime } \) in \( E \) ]; hence so is any subset of it. We close this section with a variant of Corollary 11.47 that will be useful later. Lemma 11.48. Let \( E \) be the geometric realization of a Euclidean Coxeter complex, and let \( C \) and \( D \) be chambers of \( E \) . Then there are sectors \( {\mathfrak{C}}_{1},{\mathfrak{C}}_{2} \) in \( E \) with the following property: For any subsectors \( {\mathfrak{C}}_{1}^{\prime } \subseteq {\mathfrak{C}}_{1} \) and \( {\mathfrak{C}}_{2}^{\prime } \subseteq {\mathfrak{C}}_{2} \) , there is a gallery that starts at a chamber meeting \( {\mathfrak{C}}_{1}^{\prime } \), ends at a chamber meeting \( {\mathfrak{C}}_{2}^{\prime } \), and passes through both \( C \) and \( D \) (in that order). Proof. Choose a point \( x \in D \) and a direction \( \mathfrak{D} \) such that \( x + d \) belongs to \( C \) for some \( d \in \mathfrak{D} \) . Let \( {\mathfrak{C}}_{1} \) be a sector with direction \( \mathfrak{D} \), and let \( {\mathfrak{C}}_{2} \) be a sector with direction \( - \mathfrak{D} \) . Then any subsector \( {\mathfrak{C}}_{1}^{\prime } \subseteq {\mathfrak{C}}_{1} \) contains \( x + {td} \) for sufficiently large \( t > 0 \), and any subsector \( {\mathfrak{C}}_{2}^{\prime } \subseteq {\mathfrak{C}}_{2} \) contains \( x - {td} \) for all sufficiently large \( t > 0 \) . We can therefore find points \( {y}_{i} \in {\mathfrak{C}}_{i}^{\prime }\left( {i = 1,2}\right) \) such that the line segment \( \left\lbrack {{y}_{1},{y}_{2}}\right\rbrack \) passes through both \( C \) and \( D \) . Moving \( {y}_{1} \) and \( {y}_{2} \) slightly if necessary, we may assume that they are contained in open chambers of \( E \) and that the line segment never crosses two walls of \( E \) simultaneously. The successive chambers that it passes through therefore form the desired gallery. ## Exercises In these exercises \( E \) continues to denote the geometric realization of a Euclidean Coxeter complex. 11.49. Take \( x = y \) in Lemma 11.46, so that \( {\overline{\mathfrak{C}}}_{1} \) and \( {\overline{\mathfrak{C}}}_{2} \) meet only at the basepoint \( x \) . Deduce that \( B\left( {{C}_{1},{C
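The coordinate description of sectors by inequalities \( {f}_{i} > {c}_{i} \) makes some of the statements above easy to check directly. The sketch below is my own illustration (not from the text): working in \( E = {\mathbb{R}}^{2} \) with a simplicial cone \( \mathfrak{D} = \left\{ {{f}_{1} > 0,{f}_{2} > 0}\right\} \), it recovers the base point of a sector from its constants and confirms that the intersection of two sectors with direction \( \mathfrak{D} \) is the sector whose constants are the componentwise maxima, as asserted after Definition 11.45.

```python
# My own illustration (not from the text): sectors x + D in R^2 for a simplicial
# cone D = {f_1 > 0, f_2 > 0}, where the f_i are the rows of an invertible A.
# The sector with constants c is {p : A p > c componentwise}; its base point is
# A^{-1} c, and intersecting two sectors of the same direction takes maxima.
import numpy as np

A = np.array([[1.0, 0.0],        # f_1(p) = x
              [-1.0, 2.0]])      # f_2(p) = -x + 2y   (any invertible A would do)

def in_sector(p, c):
    return bool(np.all(A @ p > c))

base_point = lambda c: np.linalg.solve(A, c)     # the point x with f_i(x) = c_i

rng = np.random.default_rng(1)
c1, c2 = rng.normal(size=2), rng.normal(size=2)
c_meet = np.maximum(c1, c2)                      # constants of the intersection

for p in rng.normal(scale=5.0, size=(10000, 2)):
    assert (in_sector(p, c1) and in_sector(p, c2)) == in_sector(p, c_meet)

print(base_point(c1), base_point(c_meet))        # vertices of the two sectors
```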
1347_[陈亚浙&吴兰成] Second Order Elliptic Equations and Elliptic Systems
Definition 3.1
Definition 3.1. We say that \( \Omega \) satisfies a uniform exterior cone condition if there exists a cone \( {V}_{h} \) of height \( h \) such that for any point \( {x}_{0} \in \partial \Omega \), there exists a cone congruent to \( {V}_{h} \) with vertex \( {x}_{0} \) and lying outside \( \Omega \) . Let \( {x}_{0} \in \partial \Omega \) . We set (3.1) \[ {u}_{M}^{ + } = \left\{ \begin{array}{l} \max \{ u, M\} \;\text{ for }x \in {B}_{R}\left( {x}_{0}\right) \cap \Omega , \\ M\;\text{ for }x \in {B}_{R}\left( {x}_{0}\right) \smallsetminus \Omega , \end{array}\right. \] (3.2) \[ {u}_{m}^{ - } = \left\{ \begin{array}{l} \min \{ u, m\} \;\text{ for }x \in {B}_{R}\left( {x}_{0}\right) \cap \Omega , \\ m\;\text{ for }x \in {B}_{R}\left( {x}_{0}\right) \smallsetminus \Omega . \end{array}\right. \] Lemma 3.1. Suppose that \( v \) is a bounded weak subsolution of \( {\left( {1.1}\right) }^{\prime },{x}_{0} \in \partial \Omega \) , \( M = \mathop{\sup }\limits_{{\partial \Omega \cap {B}_{R}\left( {x}_{0}\right) }}{v}^{ + } \) . Then for any \( p > 0,0 < \theta < 1 \) , (3.3) \[ \mathop{\sup }\limits_{{{B}_{\theta R}\left( {x}_{0}\right) }}{v}_{M}^{ + } \leq C{\left\lbrack {\int }_{{B}_{R}\left( {x}_{0}\right) }{\left( {v}_{M}^{ + }\right) }^{p}dx\right\rbrack }^{1/p} \] where \( C \) depends only on \( n,\Lambda /\lambda, p \) and \( {\left( 1 - \theta \right) }^{-1} \) . Proof. We choose the test function \( \varphi = {\zeta }^{2}{\left\lbrack {v}^{p - 1} - {M}^{p - 1}\right\rbrack }^{ + } \), where \( \zeta \) is a cutoff function on \( {B}_{R}\left( {x}_{0}\right) \) . Clearly, \( \varphi \in {W}_{0}^{1,2}\left( \Omega \right) \) and \( \varphi \geq 0 \) . The remaining proof is similar to that of Lemma 1.2. Lemma 3.2. Suppose that \( v \) is a bounded, nonnegative weak supersolution of \( {\left( {1.1}\right) }^{\prime },\Omega \) satisfies a uniform exterior cone condition, and \( \left( {a}^{ij}\right) \) satisfies (1.2). For \( {x}_{0} \in \partial \Omega \), we set \( m = \mathop{\inf }\limits_{{\partial \Omega \cap {B}_{R}\left( {x}_{0}\right) }}v \) . Then there exists \( {p}_{0} > 0 \) such that for \( 0 < \theta < 1 \) , (3.4) \[ \mathop{\inf }\limits_{{{B}_{\theta R}\left( {x}_{0}\right) }}{v}_{m}^{ - } \geq {C}^{-1}{\left\lbrack {\int }_{{B}_{R}\left( {x}_{0}\right) }{\left( {v}_{m}^{ - }\right) }^{{p}_{0}}dx\right\rbrack }^{1/{p}_{0}},\;\forall 0 < R \leq h, \] where \( h \) is the height of the exterior cone; \( {p}_{0} \) and \( C \) depend only on \( n,\Lambda /\lambda ,{\left( 1 - \theta \right) }^{-1} \) and \( h \) . Proof. From Lemma 1.1, we derive that \( {v}_{m}^{ - } \) is a weak supersolution of \( {\left( {1.1}\right) }^{\prime } \) and \( {\left( {v}_{m}^{ - }\right) }^{-p} \) is a weak subsolution of \( {\left( {1.1}\right) }^{\prime } \) . Using translation and scaling if necessary, we may assume without loss of generality that \( {x}_{0} \) is the origin and \( R = 1 \) . From Lemma 3.1, \[ \mathop{\inf }\limits_{{B}_{\theta }}{v}_{m}^{ - } \geq {C}^{-1}{\left\lbrack {\int }_{{B}_{1}}{\left( {v}_{m}^{ - }\right) }^{p}dx\right\rbrack }^{-1/p} \] \[ = \frac{1}{C}{\left\lbrack {\int }_{{B}_{1}}{\left( {v}_{m}^{ - }\right) }^{-p}dx{\int }_{{B}_{1}}{\left( {v}_{m}^{ - }\right) }^{p}dx\right\rbrack }^{-1/p}{\left\lbrack {\int }_{{B}_{1}}{\left( {v}_{m}^{ - }\right) }^{p}\right\rbrack }^{1/p}. \] Similarly, it suffices to prove that, for some \( p > 0 \) , \[ {\int }_{{B}_{1}}{e}^{p\left| w\right| }{dx} \leq C \] where \( w = \log {v}_{m}^{ - } - \beta \) . 
If we choose \( \beta = \log m \), then \( w \) vanishes on \( {B}_{1} \smallsetminus \Omega \) . Since \( \Omega \) satisfies a uniform exterior cone condition, \( {B}_{1} \smallsetminus \Omega \) contains a cone congruent to \( {V}_{h} \cap {B}_{1} \) . Poincaré’s inequality implies that \[ {\int }_{{B}_{1}}{w}^{2}{dx} \leq C{\int }_{{B}_{1}}{\left| Dw\right| }^{2}{dx} \] The rest of the proof is similar to that of Lemma 1.3. Theorem 3.3. Suppose that \( \Omega \) satisfies a uniform exterior cone condition, and \( \left( {a}^{ij}\right) \) satisfies (1.2). Let \( u \) be a weak solution of \( {\left( {1.1}\right) }^{\prime } \) with \( {\left\lbrack u\right\rbrack }_{{\varepsilon }_{1},\partial \Omega } < \infty \), where \( {\varepsilon }_{1} > 0 \) . Then for any \( {x}_{0} \in \partial \Omega ,0 < R \leq h \), there exist \( C > 0,0 < \gamma \leq {\varepsilon }_{1} \), such that (3.5) \[ \mathop{\operatorname{osc}}\limits_{{\Omega \cap {B}_{R}\left( {x}_{0}\right) }}u \leq C{\left( \frac{R}{h}\right) }^{\gamma }\left\lbrack {\mathop{\operatorname{osc}}\limits_{{{B}_{h}\left( {x}_{0}\right) \cap \Omega }}u + {h}^{\gamma }{\left\lbrack u\right\rbrack }_{{\varepsilon }_{1};\partial \Omega }}\right\rbrack \] where \( C \) and \( \gamma \) depend only on \( n,\Lambda /\lambda \) and the solid angle of opening of the exterior cone. Proof. Set \( {\Omega }_{R} = \Omega \cap {B}_{R}\left( {x}_{0}\right) ,\partial {\Omega }_{R} = \partial \Omega \cap {B}_{R}\left( {x}_{0}\right) \) and \[ M\left( R\right) = \mathop{\sup }\limits_{{\Omega }_{R}}u,\;m\left( R\right) = \mathop{\inf }\limits_{{\Omega }_{R}}u,\;\omega \left( R\right) = M\left( R\right) - m\left( R\right) , \] \[ {M}_{0}\left( R\right) = \mathop{\sup }\limits_{{\partial {\Omega }_{R}}}u,\;{m}_{0}\left( R\right) = \mathop{\inf }\limits_{{\partial {\Omega }_{R}}}u. \] For any \( m \geq 0 \), we use the exterior cone condition for \( {v}_{m}^{ - } \) defined in (3.2) to derive \[ {\left( {\int }_{{B}_{R}\left( {x}_{0}\right) }{\left( {v}_{m}^{ - }\right) }^{{p}_{0}}dx\right) }^{1/{p}_{0}} \geq m{\left\lbrack \frac{\left| {V}_{h} \cap {B}_{R}\left( {x}_{0}\right) \right| }{\left| {B}_{R}\left( {x}_{0}\right) \right| }\right\rbrack }^{1/{p}_{0}} = {C}^{-1}m\;\left( {R \leq h}\right) . \] Applying Lemma 3.2 to the functions \( v = M\left( R\right) - u \) and \( v = u - m\left( R\right) \), we get \[ M\left( R\right) - M\left( {\theta R}\right) \geq \frac{1}{C}\left\lbrack {M\left( R\right) - {M}_{0}\left( R\right) }\right\rbrack \] \[ m\left( {\theta R}\right) - m\left( R\right) \geq \frac{1}{C}\left\lbrack {{m}_{0}\left( R\right) - m\left( R\right) }\right\rbrack . \] Adding these two inequalities, we obtain \[ \omega \left( R\right) - \omega \left( {\theta R}\right) \geq \frac{1}{C}\left\lbrack {\omega \left( R\right) - \mathop{\operatorname{osc}}\limits_{{\partial {\Omega }_{R}}}u}\right\rbrack \] Therefore, for \( 0 < R \leq h \) , \[ \omega \left( {\theta R}\right) \leq \left( {1 - \frac{1}{C}}\right) \omega \left( R\right) + \frac{1}{C}{\left\lbrack u\right\rbrack }_{{\varepsilon }_{1};\partial {\Omega }_{h}}{R}^{{\varepsilon }_{1}}. \] Applying Lemma 2.1, we now obtain the lemma. Next we consider the nonhomogeneous equation (3.6) \[ - {D}_{j}\left( {{a}^{ij}{D}_{i}u}\right) = f + {D}_{i}{f}^{i} \] Theorem 3.4. 
Suppose that \( \Omega \) satisfies a uniform exterior cone condition, the coefficients of (3.6) satisfy (1.2), and \( f \in {L}^{{q}_{ * }}\left( \Omega \right) ,{f}^{i} \in {L}^{q}\left( \Omega \right) \) for some \( q > n \), where \( {q}_{ * } = {nq}/\left( {n + q}\right) \) . If \( u \) is a weak solution of (3.6) and \( {\left\lbrack u\right\rbrack }_{{\varepsilon }_{1};\partial \Omega } < \infty \) for some \( {\varepsilon }_{1} > 0 \) , then there exist \( C > 0 \) and \( 0 < \gamma < {\varepsilon }_{1} \) such that for \( 0 < R \leq h,{x}_{0} \in \partial \Omega \) , (3.7) \[ \mathop{\operatorname{osc}}\limits_{{\Omega \cap {B}_{R}\left( {x}_{0}\right) }}u \leq C{\left( \frac{R}{h}\right) }^{\gamma }\left\lbrack {\mathop{\operatorname{osc}}\limits_{{{B}_{h}\left( {x}_{0}\right) \cap \Omega }}u + {\left\lbrack u\right\rbrack }_{{\varepsilon }_{1};\partial \Omega } + \parallel f{\parallel }_{{L}^{{q}_{ * }}} + \mathop{\sum }\limits_{i}{\begin{Vmatrix}{f}^{i}\end{Vmatrix}}_{{L}^{q}}}\right\rbrack , \] where \( C \) and \( \gamma \) depend only on \( n,\Lambda /\lambda, q \) and the solid angle of opening of the exterior cone. The proof is exactly the same as that of Theorem 2.3. Theorem 3.5. Under the assumptions of Theorem 3.4, there exist \( C > 0 \) and \( 0 < \gamma < 1 \) such that (3.8) \[ \left| {u\left( x\right) - u\left( y\right) }\right| \leq C{\left| x - y\right| }^{\gamma }\left( {{\left| u\right| }_{0;\Omega } + {\left\lbrack u\right\rbrack }_{{\varepsilon }_{1};\partial \Omega } + \parallel f{\parallel }_{{L}^{{q}_{ * }}} + \mathop{\sum }\limits_{i}{\begin{Vmatrix}{f}^{i}\end{Vmatrix}}_{{L}^{q}}}\right) , \] \( x, y \in \Omega \), where \( C \) and \( \gamma \) depend only on \( n,\Lambda /\lambda ,{\varepsilon }_{1}, q \) and \( \Omega \) . Proof. Assume without loss of generality that \( {d}_{xy} = {d}_{x} \) . For simplicity we let \( \delta = {d}_{x} \) . Theorem 3.4 implies the following interior estimate (similar to the corollary of Theorem 2.2): (3.9) \[ \left| {u\left( x\right) - u\left( y\right) }\right| \leq C{\left| x - y\right| }^{\gamma }\left( {{\delta }^{-\gamma }\mathop{\operatorname{osc}}\limits_{{{B}_{\delta }\left( x\right) }}u + {F}_{0}}\right) \] where \( {F}_{0} = \parallel f{\parallel }_{{L}^{{q}_{ * }}} + \sum {\begin{Vmatrix}{f}^{i}\end{Vmatrix}}_{{L}^{q}} \) . The definition of \( {d}_{x} \) implies that there exists \( {x}_{0} \in \partial \Omega \) such that \( \left| {x - {x}_{0}}\right| = \delta \) . If \( {2\delta } \geq h \) (the height of the exterior cone), then we can replace \( \delta \) in (3.9) with \( h/2 \) to conclude the desired estimate. Now we assume that \( {2\delta } < h \) . Obviously, (3.10) \[ {\delta }^{-\gamma }\mathop{\operatorname{osc}}\limits_{{{B}_{\delta }\left( x\right) }}u \leq {\delta }^{-\gamma }\mathop{\operatorname{osc}}\limits_{{{B}_{2\delta }\left( {x}_{0}\right) \cap \Omega }}u. \] By Theorem 3.4, (3.11) \[ {\delta }^{-\gamma }\mathop{\operatorname{osc}}\limits_{{{B}_{2\delta }\left( {x}_{0}\right) \cap \Omega }}u
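The final step in the proof of Theorem 3.3 above feeds the one-step decay \( \omega \left( {\theta R}\right) \leq \left( {1 - 1/C}\right) \omega \left( R\right) + {C}^{-1}{\left\lbrack u\right\rbrack }_{{\varepsilon }_{1}}{R}^{{\varepsilon }_{1}} \) into an iteration lemma (the Lemma 2.1 cited there, which is not reproduced in this excerpt). The sketch below is only a numerical illustration of that iteration; the constants C, theta, eps1, h and the seminorm value K are invented for the example.

```python
import numpy as np

# Invented constants for illustration only.
C, theta, eps1, h = 5.0, 0.5, 0.5, 1.0
K = 1.0                      # plays the role of the boundary Hoelder seminorm of u
omega = 1.0                  # plays the role of osc of u over Omega cap B_h(x_0)

radii, oscs = [h], [omega]
for j in range(1, 40):
    # one-step decay: omega(theta*R) <= (1 - 1/C)*omega(R) + (K/C)*R**eps1
    R = theta ** (j - 1) * h
    omega = (1 - 1 / C) * omega + (K / C) * R ** eps1
    radii.append(theta * R)
    oscs.append(omega)

# The tail behaves like a power of the radius, as (3.5) predicts.
gamma = np.polyfit(np.log(radii[10:]), np.log(oscs[10:]), 1)[0]
print(f"observed exponent gamma ~ {gamma:.3f}, with 0 < gamma <= eps1 = {eps1}")
```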
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 3.11
Definition 3.11. A Lie algebra \( \mathfrak{g} \) is called irreducible if the only ideals in \( \mathfrak{g} \) are \( \mathfrak{g} \) and \( \{ 0\} \) . A Lie algebra \( \mathfrak{g} \) is called simple if it is irreducible and \( \dim \mathfrak{g} \geq 2 \) . A one-dimensional Lie algebra is certainly irreducible, since it has no nontrivial subspaces and therefore no nontrivial subalgebras and no nontrivial ideals. Nevertheless, such a Lie algebra is, by definition, not considered simple. Note that a one-dimensional Lie algebra \( \mathfrak{g} \) is necessarily commutative, since \( \left\lbrack {{aX},{bX}}\right\rbrack = 0 \) for any \( X \in \mathfrak{g} \) and any scalars \( a \) and \( b \) . On the other hand, if \( \mathfrak{g} \) is commutative, then any subspace of \( \mathfrak{g} \) is an ideal. Thus, the only way a commutative Lie algebra can be irreducible is if it is one dimensional. Hence, an equivalent definition of "simple" is that a Lie algebra is simple if it is irreducible and noncommutative. There is an analogy between groups and Lie algebras, in which the role of subgroups is played by subalgebras and the role of normal subgroups is played by ideals. (For example, the kernel of a Lie algebra homomorphism is always an ideal, just as the kernel of a Lie group homomorphism is always a normal subgroup.) There is, however, an inconsistency in the terminology in the two fields. On the group side, any group with no nontrivial normal subgroups is called simple, including the most obvious example, a cyclic group of prime order. On the Lie algebra side, by contrast, the most obvious example of an algebra with no nontrivial ideals, namely a one-dimensional algebra, is not called simple. We will eventually see many examples of simple Lie algebras, but for now we content ourselves with a single example. Recall the Lie algebra \( \mathrm{{sl}}\left( {n;\mathbb{C}}\right) \) in Example 3.4. Proposition 3.12. The Lie algebra \( \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \) is simple. Proof. We use the following basis for \( \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \) : \[ X = \left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right) ;\;Y = \left( \begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right) ;\;H = \left( \begin{array}{rr} 1 & 0 \\ 0 & - 1 \end{array}\right) . \] Direct calculation shows that these basis elements have the following commutation relations: \( \left\lbrack {X, Y}\right\rbrack = H,\left\lbrack {H, X}\right\rbrack = {2X} \), and \( \left\lbrack {H, Y}\right\rbrack = - {2Y} \) . Suppose \( \mathfrak{h} \) is an ideal in \( \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \) and that \( \mathfrak{h} \) contains an element \( Z = {aX} + {bH} + {cY} \), where \( a, b \), and \( c \) are not all zero. We will show, then, that \( \mathfrak{h} = \operatorname{sl}\left( {2;\mathbb{C}}\right) \) . Suppose first that \( c \neq 0 \) . Then the element \[ \left\lbrack {X,\left\lbrack {X, Z}\right\rbrack }\right\rbrack = \left\lbrack {X, - {2bX} + {cH}}\right\rbrack = - {2cX} \] is a nonzero multiple of \( X \) . Since \( \mathfrak{h} \) is an ideal, we conclude that \( X \in \mathfrak{h} \) . But \( \left\lbrack {Y, X}\right\rbrack \) is a nonzero multiple of \( H \) and \( \left\lbrack {Y,\left\lbrack {Y, X}\right\rbrack }\right\rbrack \) is a nonzero multiple of \( Y \), showing that \( Y \) and \( H \) also belong to \( \mathfrak{h} \), from which we conclude that \( \mathfrak{h} = \operatorname{sl}\left( {2;\mathbb{C}}\right) \) . 
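The commutation relations quoted above, and the bracket computation for the case \( c \neq 0 \), can be checked mechanically. A minimal NumPy sketch (the test values of a, b, c are arbitrary):

```python
import numpy as np

X = np.array([[0, 1], [0, 0]], dtype=complex)
Y = np.array([[0, 0], [1, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# The commutation relations used in Proposition 3.12.
assert np.allclose(bracket(X, Y), H)
assert np.allclose(bracket(H, X), 2 * X)
assert np.allclose(bracket(H, Y), -2 * Y)

# The step in the proof: for Z = aX + bH + cY with c != 0,
# [X, [X, Z]] equals the nonzero multiple -2c of X.
a, b, c = 1.0, 2.0, 3.0          # arbitrary test values
Z = a * X + b * H + c * Y
assert np.allclose(bracket(X, bracket(X, Z)), -2 * c * X)
print("sl(2, C) bracket identities verified")
```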
Suppose next that \( c = 0 \) but \( b \neq 0 \) . Then \( \left\lbrack {X, Z}\right\rbrack \) is a nonzero multiple of \( X \) and we may then apply the same argument as in the previous paragraph to show that \( \mathfrak{h} = \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \) . Finally, if \( c = 0 \) and \( b = 0 \) but \( a \neq 0 \), then \( Z \) itself is a nonzero multiple of \( X \) and we again conclude that \( \mathfrak{h} = \operatorname{sl}\left( {2;\mathbb{C}}\right) \) . Definition 3.13. If \( \mathfrak{g} \) is a Lie algebra, then the commutator ideal in \( \mathfrak{g} \), denoted \( \left\lbrack {\mathfrak{g},\mathfrak{g}}\right\rbrack \), is the space of linear combinations of commutators, that is, the space of elements \( Z \) in \( \mathfrak{g} \) that can be expressed as \[ Z = {c}_{1}\left\lbrack {{X}_{1},{Y}_{1}}\right\rbrack + \cdots + {c}_{m}\left\lbrack {{X}_{m},{Y}_{m}}\right\rbrack \] for some constants \( {c}_{j} \) and vectors \( {X}_{j},{Y}_{j} \in \mathfrak{g} \) . For any \( X \) and \( Y \) in \( \mathfrak{g} \), the commutator \( \left\lbrack {X, Y}\right\rbrack \) is in \( \left\lbrack {\mathfrak{g},\mathfrak{g}}\right\rbrack \) . This holds, in particular, if \( X \) is in \( \left\lbrack {\mathfrak{g},\mathfrak{g}}\right\rbrack \), showing that \( \left\lbrack {\mathfrak{g},\mathfrak{g}}\right\rbrack \) is an ideal in \( \mathfrak{g} \) . Definition 3.14. For any Lie algebra \( \mathfrak{g} \), we define a sequence of subalgebras \( {\mathfrak{g}}_{0},{\mathfrak{g}}_{1},{\mathfrak{g}}_{2},\ldots \) of \( \mathfrak{g} \) inductively as follows: \( {\mathfrak{g}}_{0} = \mathfrak{g},{\mathfrak{g}}_{1} = \left\lbrack {{\mathfrak{g}}_{0},{\mathfrak{g}}_{0}}\right\rbrack ,{\mathfrak{g}}_{2} = \left\lbrack {{\mathfrak{g}}_{1},{\mathfrak{g}}_{1}}\right\rbrack \) , etc. These subalgebras are called the derived series of \( \mathfrak{g} \) . A Lie algebra \( \mathfrak{g} \) is called solvable if \( {\mathfrak{g}}_{j} = \{ 0\} \) for some \( j \) . It is not hard to show, using the Jacobi identity and induction on \( j \), that each \( {\mathfrak{g}}_{j} \) is an ideal in \( \mathfrak{g} \) . Definition 3.15. For any Lie algebra \( \mathfrak{g} \), we define a sequence of ideals \( {\mathfrak{g}}^{j} \) in \( \mathfrak{g} \) inductively as follows. We set \( {\mathfrak{g}}^{0} = \mathfrak{g} \) and then define \( {\mathfrak{g}}^{j + 1} \) to be the space of linear combinations of commutators of the form \( \left\lbrack {X, Y}\right\rbrack \) with \( X \in \mathfrak{g} \) and \( Y \in {\mathfrak{g}}^{j} \) . These algebras are called the lower central series of \( \mathfrak{g} \) . A Lie algebra \( \mathfrak{g} \) is said to be nilpotent if \( {\mathfrak{g}}^{j} = \{ 0\} \) for some \( j \) . Equivalently, \( {\mathfrak{g}}^{j} \) is the space spanned by all \( j \) th-order commutators, \[ \left\lbrack {{X}_{1},\left\lbrack {{X}_{2},\left\lbrack {{X}_{3},\ldots \left\lbrack {{X}_{j},{X}_{j + 1}}\right\rbrack \ldots }\right\rbrack }\right\rbrack }\right\rbrack . \] Note that every \( j \) th-order commutator is also a \( \left( {j - 1}\right) \) th-order commutator, by setting \( {\widetilde{X}}_{j} = \left\lbrack {{X}_{j},{X}_{j + 1}}\right\rbrack \) . Thus, \( {\mathfrak{g}}^{j - 1} \supset {\mathfrak{g}}^{j} \) . For every \( X \in \mathfrak{g} \) and \( Y \in {\mathfrak{g}}^{j} \), we have \( \left\lbrack {X, Y}\right\rbrack \in {\mathfrak{g}}^{j + 1} \subset {\mathfrak{g}}^{j} \), showing that \( {\mathfrak{g}}^{j} \) is an ideal in \( \mathfrak{g} \) . 
Furthermore, it is clear that \( {\mathfrak{g}}_{j} \subset {\mathfrak{g}}^{j} \) for all \( j \) ; thus, if \( \mathfrak{g} \) is nilpotent, \( \mathfrak{g} \) is also solvable. Proposition 3.16. If \( \mathfrak{g} \subset {M}_{3}\left( \mathbb{R}\right) \) denotes the space of \( 3 \times 3 \) upper triangular matrices with zeros on the diagonal, then \( \mathfrak{g} \) satisfies the assumptions of Example 3.3. The Lie algebra \( \mathfrak{g} \) is a nilpotent Lie algebra. Proof. We will use the following basis for \( \mathfrak{g} \) , \[ X = \left( \begin{array}{lll} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right) ;\;Y = \left( \begin{array}{lll} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right) ;\;Z = \left( \begin{array}{lll} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right) . \] (3.5) Direct calculation then establishes the following commutation relations: \( \left\lbrack {X, Y}\right\rbrack = \) \( Z \) and \( \left\lbrack {X, Z}\right\rbrack = \left\lbrack {Y, Z}\right\rbrack = 0 \) . In particular, the bracket of two elements of \( \mathfrak{g} \) is again in \( \mathfrak{g} \), so that \( \mathfrak{g} \) is a Lie algebra. Then \( \left\lbrack {\mathfrak{g},\mathfrak{g}}\right\rbrack \) is the span of \( Z \) and \( \left\lbrack {\mathfrak{g},\left\lbrack {\mathfrak{g},\mathfrak{g}}\right\rbrack }\right\rbrack = 0 \) , showing that \( \mathfrak{g} \) is nilpotent. Proposition 3.17. If \( \mathfrak{g} \subset {M}_{2}\left( \mathbb{C}\right) \) denotes the space of \( 2 \times 2 \) matrices of the form \[ \left( \begin{array}{ll} a & b \\ 0 & c \end{array}\right) \] with \( a, b \), and \( c \) in \( \mathbb{C} \), then \( \mathfrak{g} \) satisfies the assumptions of Example 3.3. The Lie algebra \( \mathfrak{g} \) is solvable but not nilpotent. Proof. Direct calculation shows that \[ \left\lbrack {\left( \begin{array}{ll} a & b \\ 0 & c \end{array}\right) ,\left( \begin{array}{ll} d & e \\ 0 & f \end{array}\right) }\right\rbrack = \left( \begin{array}{ll} 0 & h \\ 0 & 0 \end{array}\right) \] (3.6) where \( h = {ae} + {bf} - {bd} - {ce} \), showing that \( \mathfrak{g} \) is a Lie subalgebra of \( {M}_{2}\left( \mathbb{C}\right) \) . Furthermore, the commutator ideal \( \left\lbrack {\mathfrak{g},\mathfrak{g}}\right\rbrack \) is one dimensional and hence commutative. Thus, \( {\mathfrak{g}}_{2} = \{ 0\} \), showing that \( \mathfrak{g} \) is solvable. On the other hand, consider the following elements of \( \mathfrak{g} \) : \[ H = \left( \begin{array}{rr} 1 & 0 \\ 0 & - 1 \end{array}\right) ;\;X = \left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right) . \] Using (3.6), we can see that \( \left\lbrack {H, X}\right\rbrack = {2X} \), and thus that \[ \left\lbrack {H,\left\lbrack {H,\left\lbrack {H,\cdots \left\lbrack {H, X}\right\rbrack \cdots }\right\rbrack }\right\rbrack }\right\rbrack \] is a nonzero multiple
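The two matrix computations behind Propositions 3.16 and 3.17 are also easy to verify numerically. A short sketch (not from the text): it checks that the strictly upper triangular \( 3 \times 3 \) algebra dies after two brackets, while repeated brackets with \( H \) in the \( 2 \times 2 \) upper triangular algebra never vanish.

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# Proposition 3.16: strictly upper triangular 3x3 matrices.
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], float)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], float)
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], float)
assert np.allclose(bracket(X, Y), Z)
assert np.allclose(bracket(X, Z), 0) and np.allclose(bracket(Y, Z), 0)
# so [g, [g, g]] = 0: the algebra is nilpotent after two steps.

# Proposition 3.17: 2x2 upper triangular matrices.
H = np.array([[1, 0], [0, -1]], float)
X2 = np.array([[0, 1], [0, 0]], float)
W = X2.copy()
for k in range(1, 6):
    W = bracket(H, W)                 # k-fold bracket [H, [H, ..., [H, X] ...]]
    assert np.allclose(W, 2**k * X2)  # never zero, so this algebra is not nilpotent
print("checks for Propositions 3.16 and 3.17 pass")
```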
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 3.5
Definition 3.5. The sector \( \bigtriangleup \overset{⏜}{OAB} \) consisting of radii \( {OA},{OB} \) and the circular arc \( \overset{⏜}{AB} \) centered at the critical point \( O \) is called a normal region if the following conditions are satisfied: (i) there are no critical points in \( \bigtriangleup \overset{⏜}{OAB} \) except \( O \), and \( {OA},{OB} \) (excluding the point \( O \) ) are cross sections; (ii) the vector field at any point in \( \bigtriangleup \overset{⏜}{OAB} \) is not perpendicular to the coordinate vector; (iii) there is at most one characteristic direction in \( \bigtriangleup \overset{⏜}{OAB} \), and the angles from the \( x \) axis to \( {OA},{OB} \) are not characteristic directions. From property (i), all orbits can only intersect \( {OA} \) (or \( {OB} \) ) on the same side, ie., either enter \( \bigtriangleup \overset{⏜}{OAB} \) from \( {OA} \) (or \( {OB} \) ) or leave \( \bigtriangleup \overset{⏜}{OAB} \) from \( {OA} \) (or \( {OB} \) ). From property (ii), orbits can only intersect \( {OA} \) (or \( {OB} \) ) in the same direction, i.e., either all in the positive direction or all in the negative direction. Moreover, it follows from property (ii) that orbits intersect \( {OA} \) and \( {OB} \) in the same direction. Otherwise, the continuity of the vector field implies that the coordinate vector and field vector would be perpendicular at some point \( P \) inside \( \bigtriangleup \overset{⏜}{OAB} \), contradicting property (ii). See Figure 2.15. From the above discussions, ignoring time direction, there are only three types of normal regions as indicated in Figure 2.16. Here, the arrows represent the direction as \( t \) increases or as \( t \) decreases. To fix ideas, in the following three lemmas we will interpret the arrows as pointing in the direction of increasing \( t \) . If the interpretation is changed to decreasing \( t \), then the statements of the lemmas will be valid if \( t \rightarrow + \infty \) is replaced by \( t \rightarrow - \infty \) . LEMMA 3.1. Suppose that \( \bigtriangleup \overset{⏜}{OAB} \) is a normal region of the first type, then an orbit starting from any point in \( {OA},{OB} \) will tend to the critical point \( O \) as \( t \rightarrow + \infty \) . Proof. From property (ii) in Definition 3.5, an orbit starting from any point in \( {OA},{OB} \) cannot leave the normal region through \( \overset{⏜}{AB} \) as \( t \) increases. Moreover, the radius of any point on the orbit is monotonically decreasing as \( t \) increases; otherwise, there would be a point in \( \bigtriangleup \overset{⏜}{OAB} \) at which the field vector is perpendicular to the coordinate vector. Further, an orbit cannot remain in \( \bigtriangleup \overset{⏜}{OAB} \) indefinitely without tending to \( O \) . Otherwise, its \( \omega \) -limit set would contain a critical point different from \( O \), contradicting property (i). Hence, the orbit must tend to the critical point \( O \) as \( t \rightarrow + \infty \) . ![bea09977-be18-4815-a30e-4fa2fe3b219c_72_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_72_0.jpg) FIGURE 2.15 ![bea09977-be18-4815-a30e-4fa2fe3b219c_72_1.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_72_1.jpg) FIGURE 2.16 LEMMA 3.2. Suppose \( \bigtriangleup \overset{⏜}{OAB} \) is a normal region of the second type, then there exists a point or a closed subarc in \( \overset{⏜}{AB} \) such that any orbit starting from there will tend to the critical point \( O \) as \( t \rightarrow + \infty \) . Proof. Let \( M \in {OA} \) . 
The orbit \( \overrightarrow{f}\left( {M,{I}^{ - }}\right) \) must intersect \( \overset{⏜}{AB} \) at \( {P}_{M} \) ; and let \( \mathop{\lim }\limits_{{M \rightarrow 0}}{P}_{M} = P \) . Similarly, let \( N \in {OB} \) and \( \overrightarrow{f}\left( {N,{I}^{ - }}\right) \) intersect \( \overset{⏜}{AB} \) at \( {Q}_{N} \) ; and let \( \mathop{\lim }\limits_{{N \rightarrow 0}}{Q}_{N} = Q \) . Clearly, starting from \( P \) or \( Q \), the orbit will tend to the critical point \( O \) as \( t \rightarrow + \infty \) . If \( P = Q \), then there is only one orbit \( \overrightarrow{f}\left( {P,{I}^{ + }}\right) \) that tends to \( O \) . If \( P \neq Q \), then an orbit \( \overrightarrow{f}\left( {R,{I}^{ + }}\right) \) starting from any point \( R \) on the closed arc \( \overset{⏜}{PQ} \) must tend to \( O \) . LEMMA 3.3. Suppose that \( \bigtriangleup \overset{⏜}{OAB} \) is a normal region of the third type, then there are two possible cases: (i) there is no orbit in \( \bigtriangleup \overset{⏜}{OAB} \) tending to the critical point \( O \) ; (ii) there exists \( P \in {OB} \) or \( \overset{⏜}{AB} \), such that for any \( R \in {OP} \) or \( R \in \) \( {OB} \cup \overset{⏜}{BP} \), the orbit \( \overrightarrow{f}\left( {R, I}\right) \) must tend to the critical point \( O \) as \( t \rightarrow + \infty \) . Proof. Let \( M \in {OA} \) . The orbit \( \overrightarrow{f}\left( {M,{I}^{ - }}\right) \) must intersect \( {OB} \) or \( \overset{⏜}{AB} \) at \( {P}_{M} \) . Let \( \mathop{\lim }\limits_{{M \rightarrow 0}}{P}_{M} = P \) . If \( P = O \), then we have case (i); if \( P \neq O \) , then we have case (ii). In the study of qualitative properties of orbits near a nonlinear critical point, a major technique is to find all the characteristic directions. Then we investigate the existence and number of orbits tending to the critical point along these characteristic directions. Let \( \alpha \) be the angle between the coordinate vector and field vector at the point \( P\left( {r,\theta }\right) \), as indicated in Figure 2.17. It follows that \[ \tan \alpha = \mathop{\lim }\limits_{{{\Delta r} \rightarrow 0}}r\frac{\Delta \theta }{\Delta r} = r\frac{d\theta }{dr} \] ![bea09977-be18-4815-a30e-4fa2fe3b219c_73_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_73_0.jpg) FIGURE 2.17 From Definition 3.2, if \( \theta = {\theta }_{0} \) is a characteristic direction, then, calculating \( \tan \alpha \) at the sequence of points \( {A}_{n}\left( {{r}_{n},{\theta }_{n}}\right) \), we obtain \[ \mathop{\lim }\limits_{{n \rightarrow + \infty }}\tan {\alpha }_{n} = {\left. \mathop{\lim }\limits_{{n \rightarrow + \infty }}r\frac{d\theta }{dr}\right| }_{\left( {r}_{n},{\theta }_{n}\right) } = 0. \] (3.4) Letting \( x = r\cos \theta, y = r\sin \theta \) , (3.2) is transformed into (3.5) \[ \frac{1}{r}\frac{dr}{d\theta } = \frac{\cos \theta \left\lbrack {{X}_{m}\left( {r\cos \theta, r\sin \theta }\right) + \Phi \left( {r\cos \theta, r\sin \theta }\right) }\right\rbrack + \sin \theta \left\lbrack {{Y}_{n}\left( {r\cos \theta, r\sin \theta }\right) + \Psi \left( {r\cos \theta, r\sin \theta }\right) }\right\rbrack }{\cos \theta \left\lbrack {{Y}_{n}\left( {r\cos \theta, r\sin \theta }\right) + \Psi \left( {r\cos \theta, r\sin \theta }\right) }\right\rbrack - \sin \theta \left\lbrack {{X}_{m}\left( {r\cos \theta, r\sin \theta }\right) + \Phi \left( {r\cos \theta, r\sin \theta }\right) }\right\rbrack } = \frac{M\left( {r,\theta }\right) }{I\left( {r,\theta }\right) } \] Let \[ \left. 
\begin{array}{l} G\left( \theta \right) = \cos \theta {Y}_{n}\left( {\cos \theta ,\sin \theta }\right) ,\;\text{ if }m > n \\ G\left( \theta \right) = - \sin \theta {X}_{m}\left( {\cos \theta ,\sin \theta }\right) ,\;\text{ if }m < n \\ G\left( \theta \right) = \cos \theta {Y}_{n}\left( {\cos \theta ,\sin \theta }\right) - \sin \theta {X}_{m}\left( {\cos \theta ,\sin \theta }\right) ,\;\text{ if }m = n. \end{array}\right\} \] (3.6) From the hypothesis on the terms \( \Phi \) and \( \Psi \), there exist \( \bar{r} > 0, K > 0 \) such that: 1. If \( m > n \), dividing the numerator and denominator by \( {r}^{n} \) yields \[ \left| \frac{M\left( {r,\theta }\right) }{{r}^{n}}\right| < K,\;\text{ when }r < \bar{r} \] and \[ \frac{I\left( {r,\theta }\right) }{{r}^{n}} = \cos \theta {Y}_{n}\left( {\cos \theta ,\sin \theta }\right) + o\left( 1\right) = G\left( \theta \right) + o\left( 1\right) ,\;\text{ as }r \rightarrow 0. \] 2. If \( m < n \), dividing the numerator and denominator by \( {r}^{m} \) yields \[ \left| \frac{M\left( {r,\theta }\right) }{{r}^{m}}\right| < K,\;\text{ when }r < \bar{r} \] and \[ \frac{I\left( {r,\theta }\right) }{{r}^{m}} = - \sin \theta {X}_{m}\left( {\cos \theta ,\sin \theta }\right) + o\left( 1\right) = G\left( \theta \right) + o\left( 1\right) ,\;\text{ as }r \rightarrow 0. \] 3. If \( m = n \), dividing the numerator and denominator by \( {r}^{m} \) yields \[ \left| \frac{M\left( {r,\theta }\right) }{{r}^{m}}\right| < K,\;\text{ when }r < \bar{r} \] and \[ \frac{I\left( {r,\theta }\right) }{{r}^{m}} = \cos \theta {Y}_{n}\left( {\cos \theta ,\sin \theta }\right) - \sin \theta {X}_{m}\left( {\cos \theta ,\sin \theta }\right) + o\left( 1\right) \] \[ = G\left( \theta \right) + o\left( 1\right) ,\;\text{ as }r \rightarrow 0. \] All three cases above can thus be written in the combined form \[ \frac{1}{r}\frac{dr}{d\theta } = \frac{A\left( {r,\theta }\right) }{G\left( \theta \right) + o\left( 1\right) },\;r \rightarrow 0, \] (3.7) where \( \left| {A\left( {r,\theta }\right) }\right| < K \) when \( r < \bar{r} \), and \( G\left( \theta \right) \) is defined in (3.6). From (3.7), it follows that \( G\left( {\theta }_{0}\right) = 0 \) is a necessary condition for \( \theta = {\theta }_{0} \) to be a characteristic direction. Hence, \( G\left( \theta \right) = 0 \) is called the characteristic equation for the differential equation (3.2). In the following, we will analyze nonlinear critical points by studying the corresponding characteristic equation. THEOREM 3.1. Suppose that \( G\left( \theta \right) \neq 0 \) in the sector \( \bigtriangleup \overset{⏜}{OAB} : {\theta }_{0} \leq \theta \leq {\theta }_{1} \) , \( 0 \leq r \leq {r}_{1} \leq \bar{r} \), then there is no orbit tending to the critical point in \( \bigtriangleup \overset{⏜}{OAB} \) . Moreover, all orbits move from one boundary
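As a concrete illustration of the characteristic equation \( G\left( \theta \right) = 0 \), the following sympy sketch computes \( G \) from (3.6) for an invented system with \( m = n = 2 \) (namely \( \dot{x} = {x}^{2} - {y}^{2} \), \( \dot{y} = {2xy} \)) and finds the candidate characteristic directions; as noted above, \( G\left( {\theta }_{0}\right) = 0 \) is only a necessary condition.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
x, y = sp.cos(theta), sp.sin(theta)

# Invented example with m = n = 2:  dx/dt = x**2 - y**2,  dy/dt = 2*x*y.
X_m = x**2 - y**2
Y_n = 2 * x * y

# Characteristic function G(theta) from (3.6), case m = n.
G = sp.simplify(sp.cos(theta) * Y_n - sp.sin(theta) * X_m)
print("G(theta) =", G)                                   # simplifies to sin(theta)
roots = sp.solveset(sp.Eq(G, 0), theta, sp.Interval.Ropen(0, 2 * sp.pi))
print("candidate characteristic directions:", roots)     # {0, pi}
```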
1057_(GTM217)Model Theory
Definition 7.4.1
Definition 7.4.1 A variety \( {}^{1} \) is a topological space \( V \) such that \( V \) has a finite open cover \( V = {V}_{1} \cup \ldots \cup {V}_{n} \) where for \( i = 1,\ldots, n \) there is \( {U}_{i} \subseteq {K}^{{n}_{i}} \) a Zariski closed set and a homeomorphism \( {f}_{i} : {V}_{i} \rightarrow {U}_{i} \) such that: i) \( {U}_{i, j} = {f}_{i}\left( {{V}_{i} \cap {V}_{j}}\right) \) is an open subset of \( {U}_{i} \), and ii) \( {f}_{i, j} = {f}_{i} \circ {f}_{j}^{-1} : {U}_{j, i} \rightarrow {U}_{i, j} \) is a rational map. We call \( {f}_{1},\ldots ,{f}_{n} \) charts for \( V \) . Varieties arise in many natural ways. Let \( K \) be an algebraically closed field. Lemma 7.4.2 i) If \( V \subseteq {K}^{n} \) is Zariski closed, then \( V \) is a variety. ii) If \( V \subseteq {K}^{n} \) is Zariski closed and \( O \subseteq {K}^{n} \) is Zariski open, then \( V \cap O \) is a variety. iii) \( {\mathbb{P}}^{1}\left( K\right) \) is a variety. iv) If \( V \subseteq {\mathbb{P}}^{n}\left( K\right) \) is Zariski closed and \( O \subseteq {\mathbb{P}}^{n}\left( K\right) \) is Zariski open, then \( V \cap O \) is a variety. ## Proof i) Clear. ii) Let \( O = \mathop{\bigcup }\limits_{{i = 1}}^{m}{O}_{i} \) where \( {O}_{i} = \left\{ {x \in {K}^{n} : {g}_{i}\left( x\right) \neq 0}\right\} \) for some \( {g}_{i} \in \) \( K\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) . Let \( {V}_{i} = \left\{ {x \in V : {g}_{i}\left( x\right) \neq 0}\right\} \) . Then, \( {V}_{i} \) is an open subset of \( V \) and \( V \cap O = {V}_{1} \cup \ldots \cup {V}_{m} \) . Let \( {U}_{i} = \left\{ {\left( {x, y}\right) \in {K}^{n + 1} : x \in V}\right. \) and \( \left. {y{g}_{i}\left( x\right) = 1}\right\} \), and let \( {f}_{i} : {V}_{i} \rightarrow {U}_{i} \) be \( x \mapsto \left( {x,\frac{1}{{g}_{i}\left( x\right) }}\right) \) . Then, \( {f}_{i} \) is a rational bijection with rational inverse \( \left( {x, y}\right) \mapsto x \), so \( {V}_{i} \) and \( {U}_{i} \) are homeomorphic. In this case, \( {U}_{i, j} \) is the open set \( \left\{ {\left( {x, y}\right) \in {U}_{i} : {g}_{j}\left( x\right) \neq 0}\right\} \) and \( {f}_{i, j} \) is the rational map \( \left( {x, y}\right) \mapsto \left( {x,\frac{1}{{g}_{i}\left( x\right) }}\right) \) . iii) Projective 1-space \( {\mathbb{P}}^{1}\left( K\right) \) is the quotient of \( {K}^{2} \smallsetminus \{ \left( {0,0}\right) \} \) by the equivalence relation \( \left( {x, y}\right) \sim \left( {u, v}\right) \) if there is \( \lambda \in K \) such that \( {\lambda x} = u \) and \( {\lambda y} = v \) , (i.e., if \( {xv} = {yu} \) ). Let \( {V}_{1} = \{ \left( {x, y}\right) / \sim : x \neq 0\} \), and let \( {V}_{2} = \) \( \{ \left( {x, y}\right) / \sim : y \neq 0\} \) . Let \( {U}_{1} = {U}_{2} = K \), and let \( {f}_{1}\left( {\left( {x, y}\right) / \sim }\right) = y/x \), while \( {f}_{2}\left( {\left( {x, y}\right) / \sim }\right) = x/y \) . Then, \( {U}_{1,2} = {U}_{2,1} = K \smallsetminus \{ 0\} \) and \( {f}_{i, j}\left( x\right) = {f}_{j, i}\left( x\right) = \frac{1}{x} \) . iv) Exercise. A quasiprojective variety is the intersection of Zariski open and Zariski closed subsets of projective space. --- \( {}^{1} \) The objects we are defining here are usually called prevarieties and varieties are prevarieties where the diagonal \( \{ \left( {x, y}\right) : x = y\} \) is closed in \( V \times V \) . If a prevariety has a group structure, the diagonal is automatically closed, so this distinction is not so important to us. See Exercise 7.6.22. --- 
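The chart used in part ii), identifying a basic open subset \( \{ g \neq 0\} \) of a Zariski closed set with a Zariski closed subset of \( {K}^{n + 1} \), can be sampled numerically. In the sketch below, the circle \( {x}^{2} + {y}^{2} = 1 \) and the polynomial \( g = x \) are invented for illustration:

```python
import numpy as np

# Invented example: V = {x^2 + y^2 = 1} in K^2, O = {x != 0}, g(x, y) = x.
# Lemma 7.4.2 ii) identifies V \cap O with the closed set
#   U = {(x, y, t) : x^2 + y^2 = 1 and t*x = 1}
# via f(x, y) = (x, y, 1/x), with inverse the projection (x, y, t) -> (x, y).

def f(p):
    x, y = p
    return (x, y, 1.0 / x)

def f_inv(q):
    x, y, t = q
    return (x, y)

rng = np.random.default_rng(0)
for _ in range(100):
    s = rng.uniform(-1.4, 1.4)            # keeps x = cos(s) away from 0
    p = (np.cos(s), np.sin(s))            # a point of V with g(p) != 0
    q = f(p)
    assert abs(q[0]**2 + q[1]**2 - 1) < 1e-12 and abs(q[2] * q[0] - 1) < 1e-12
    assert np.allclose(f_inv(q), p)       # the round trip is the identity
print("chart of Lemma 7.4.2 ii) verified on sample points")
```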
Part iv) of the preceding lemma shows that quasiprojective varieties are examples of abstract algebraic varieties. Lemma 7.4.3 If \( V \) is a variety, then \( V \) is interpretable in the algebraically closed field \( K \) . Proof Let \( V = {V}_{1} \cup \ldots \cup {V}_{n} \) with charts \( {f}_{i} : {V}_{i} \rightarrow {U}_{i} \), without loss of generality, there is an \( m \) such that each \( {U}_{i} \subseteq {K}^{m} \) . Let \( {a}_{1},\ldots ,{a}_{n} \in K \) be distinct, and let \( X = \left\{ {\left( {x, y}\right) \in {K}^{m + 1} : y = {a}_{i}}\right. \) and \( \left. {x \in {U}_{i}\text{for some}i \leq n}\right\} \) . Then, \( X \) is a Zariski closed subset of \( {K}^{m + 1} \) . We define an equivalence relation \( \sim \) on \( X \), by \( \left( {x,{a}_{i}}\right) \sim \left( {y,{a}_{j}}\right) \), if and only if \( {a}_{i} = {a}_{j} \) and \( x = y \) or \( {a}_{i} \neq {a}_{j}, x \in {U}_{i, j}, y \in {U}_{j, i} \), and \( {f}_{i, j}\left( y\right) = x \) . If \( V \) is a variety with charts \( {f}_{1} : {V}_{0} \rightarrow {U}_{0},\ldots ,{f}_{n} : {V}_{n} \rightarrow {U}_{n}, X \subseteq {U}_{i} \) is open in \( {U}_{i} \), and \( W = {f}_{i}^{-1}\left( X\right) \), then we call \( W \) an affine open subset of \( V \) . Any open subset of \( V \) is a finite union of affine open subsets of \( V \) . We will consider maps between varieties that are given locally by rational functions. Definition 7.4.4 Suppose that \( V \) and \( W \) are varieties and \( f : V \rightarrow W \) . We say that \( f \) is a morphism if we can find \( {V}_{1},\ldots ,{V}_{n} \) and \( {W}_{1},\ldots ,{W}_{m} \) covers of \( V \) and \( W \) by affine open sets with homeomorphisms \( {f}_{i} : {V}_{i} \rightarrow {U}_{i} \) , \( {g}_{j} : {W}_{j} \rightarrow {U}_{j}^{\prime } \), where \( {U}_{i} \) and \( {U}_{j}^{\prime } \) are open subsets of affine Zariski closed sets and \( {g}_{j} \circ f \circ {f}_{i}^{-1} \) is a rational function for each \( i \leq n, j \leq m \) . In characteristic \( p > 0 \), one should also consider quasimorphisms where the maps are locally given by quasirational functions (i.e., compositions of rational functions and \( x \mapsto \sqrt[p]{x} \) ). We next define the product of two varieties. Definition 7.4.5 Suppose that \( V \) and \( W \) are varieties and \( f : V \rightarrow W \) . Suppose that \( V = {V}_{1} \cup \ldots \cup {V}_{n} \) and \( {f}_{i} : {V}_{i} \rightarrow {U}_{i} \) and \( W = {W}_{1} \cup \ldots \cup {W}_{m} \) and \( {g}_{i} : {W}_{i} \rightarrow {U}_{i}^{\prime } \) are charts for \( V \) and \( W \) . We topologize \( {V}_{i} \times {W}_{j} \) so that \( \left( {{f}_{i},{g}_{j}}\right) : {V}_{i} \times {W}_{j} \rightarrow {U}_{i} \times {U}_{j}^{\prime } \) is a homeomorphism and take \( \left\{ {{V}_{i} \times {W}_{j} : i \leq }\right. \) \( n, j \leq m\} \) as a finite open cover of \( V \times W \) . Note that the topology on \( V \times W \) is a proper refinement of the product topology. For example the line \( y = x \) is a closed subset of \( {K}^{2} \), but it is not closed in the product topology on \( K \times K \) . We summarize some of the basic topological properties of varieties that we will need. These follow easily from the corresponding properties of Zariski closed sets. We leave the proofs as exercises. Lemma 7.4.6 Suppose that \( V \) and \( W \) are varieties. i) If \( V \) is a variety and \( X \subseteq V \) is open, then \( X \) is a variety. 
ii) There are no infinite descending chains of closed subsets of \( V \) . iii) Any closed subset of \( V \) is a finite union of irreducible components (see Exercise 3.4.17). iv) If \( f : V \rightarrow W \) is a morphism, then \( f \) is continuous. v) If \( f : V \rightarrow W \) and for each \( a \in V \) there is an open \( U \subseteq V \) such that \( a \in U \) and \( f \mid U \) is a morphism, then \( f \) is a morphism. vi) The product \( V \times W \) is a variety and the topology on \( V \times W \) refines the product topology. In our proof that constructible groups are definably isomorphic to algebraic groups, we will use heavily Proposition 3.2.14, which states that if \( X \) is constructible and \( f : X \rightarrow K \) is definable, then we can partition \( X \) into constructible sets \( {X}_{1},\ldots ,{X}_{m} \) such that \( f \mid {X}_{i} \) is quasirational for each \( i \leq m \) . The next lemma shows how we will combine Proposition 3.2.14 and Lemma 6.2.26. Lemma 7.4.7 Suppose that \( V \) and \( W \) are varieties, \( {V}_{0} \subseteq V \) is open, and \( f : {V}_{0} \rightarrow W \) is a definable function. There is an affine open \( U \subseteq {V}_{0} \) such that \( f \mid U \) is a quasimorphism. Proof Without loss of generality, we may assume that \( {V}_{0} \) is an affine open subset of \( V \), the closure of \( {V}_{0} \) is irreducible, \( {W}_{0} \) is an affine open subset of \( W \), and \( f : {V}_{0} \rightarrow {W}_{0} \) . By Proposition 3.2.14, there are quasirational functions \( {f}_{1},\ldots ,{f}_{m} \) such that for each \( a \in {V}_{0} \) there is \( i \leq m \) such that \( f\left( a\right) = {f}_{i}\left( a\right) \) . Choose \( i \) such that \( \left\{ {x \in {V}_{0} : f\left( x\right) = {f}_{i}\left( x\right) }\right\} \) has maximal rank. By Lemma 6.2.26, this set has a nonempty interior in \( {V}_{0} \) . ## Algebraic Groups Definition 7.4.8 An algebraic group is a group \( \left( {G, \cdot }\right) \) where \( G \) is a variety and \( \cdot \) and \( x \mapsto {x}^{-1} \) are morphisms. We derive some basic properties of algebraic groups. Lemma 7.4.9 A definable subgroup of an algebraic group is closed. Proof Suppose that \( G \) is an algebraic group and \( H \leq G \) is definable. Let \( V \) be the closure of \( H \) in \( G \) . Suppose, for contradiction, that \( a \in V \smallsetminus H \) . By Exercise 6.6.14, \( \operatorname{RM}\left( {V \smallsetminus H}\right) < \operatorname{RM}\left( H\right) \) . Every open set containing \( a \) intersects \( H \) . If \( b \in H \), then \( x \mapsto {bx} \) is continuous. Thus, every open set containing \( {ba} \) intersects \( H \) and \( {Ha} \subseteq V \smallsetminus H \) . But \( x \mapsto {xa} \) is a definable bijection. Thus \[ \operatorname{RM}\left( H\right) = \operatorname{RM}\left( {Ha}\right) \leq \operatorname{RM}\left( {V \smallsetminus H}\
1096_(GTM252)Distributions and Operators
Definition 6.1
Definition 6.1. Let \( p\left( \xi \right) \in {\mathcal{O}}_{M} \) . The associated pseudodifferential operator \( \operatorname{Op}\left( {p\left( \xi \right) }\right) \), also called \( P\left( D\right) \), is defined by \[ \operatorname{Op}\left( p\right) u \equiv P\left( D\right) u = {\mathcal{F}}^{-1}\left( {p\left( \xi \right) \widehat{u}\left( \xi \right) }\right) ; \] (6.1) it maps \( \mathcal{S} \) into \( \mathcal{S} \) and \( {\mathcal{S}}^{\prime } \) into \( {\mathcal{S}}^{\prime } \) (continuously). The function \( p\left( \xi \right) \) is called the symbol of \( \operatorname{Op}\left( p\right) \) . As observed, differential operators with constant coefficients are covered by this definition; but it is interesting that also the solution operator in Example 5.20 is of this type, since it equals \( \operatorname{Op}\left( {\langle \xi {\rangle }^{-2}}\right) \) . For these pseudodifferential operators one has the extremely simple rule of calculus: \[ \mathrm{{Op}}\left( p\right) \mathrm{{Op}}\left( q\right) = \mathrm{{Op}}\left( {pq}\right) \] (6.2) since \( \operatorname{Op}\left( p\right) \operatorname{Op}\left( q\right) u = {\mathcal{F}}^{-1}\left( {p\mathcal{F}{\mathcal{F}}^{-1}\left( {q\mathcal{F}u}\right) }\right) = {\mathcal{F}}^{-1}\left( {{pq}\mathcal{F}u}\right) \) . In other words, composition of operators corresponds to multiplication of symbols. Moreover, if \( p \) is a function in \( {\mathcal{O}}_{M} \) for which \( 1/p \) belongs to \( {\mathcal{O}}_{M} \), then the operator \( \mathrm{{Op}}\left( p\right) \) has the inverse \( \mathrm{{Op}}\left( {1/p}\right) \) : \[ \operatorname{Op}\left( p\right) \operatorname{Op}\left( {1/p}\right) = \operatorname{Op}\left( {1/p}\right) \operatorname{Op}\left( p\right) = I. \] (6.3) For example, \( 1 - \Delta = \operatorname{Op}\left( {\langle \xi {\rangle }^{2}}\right) \) has the inverse \( \operatorname{Op}\left( {\langle \xi {\rangle }^{-2}}\right) \), cf. Example 5.20. Remark 6.2. We here use the notation pseudodifferential operator for all operators that are obtained by Fourier transformation from multiplication operators in \( \mathcal{S} \) (and \( {\mathcal{S}}^{\prime } \) ). In practical applications, one usually considers restricted classes of symbols with special properties. On the other hand, one allows symbols depending on \( x \) also, associating the operator \( \operatorname{Op}\left( {p\left( {x,\xi }\right) }\right) \) defined by \[ \left\lbrack {\operatorname{Op}\left( {p\left( {x,\xi }\right) }\right) u}\right\rbrack \left( x\right) = {\left( 2\pi \right) }^{-n}\int {e}^{{ix} \cdot \xi }p\left( {x,\xi }\right) \widehat{u}\left( \xi \right) {d\xi } \] (6.4) to the symbol \( p\left( {x,\xi }\right) \) . This is consistent with the fact that when \( P \) is a differential operator of the form \[ P\left( {x, D}\right) u = \mathop{\sum }\limits_{{\left| \alpha \right| \leq m}}{a}_{\alpha }\left( x\right) {D}^{\alpha }u, \] (6.5) then \( P\left( {x, D}\right) = \operatorname{Op}\left( {p\left( {x,\xi }\right) }\right) \), where the symbol is \[ p\left( {x,\xi }\right) = \mathop{\sum }\limits_{{\left| \alpha \right| \leq m}}{a}_{\alpha }\left( x\right) {\xi }^{\alpha }. \] (6.6) Allowing "variable coefficients" makes the theory much more complicated, in particular because the identities (6.2) and (6.3) then no longer hold in an exact way, but in a certain approximative sense, depending on which symbol class one considers. 
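On a periodic grid, the definition (6.1) and the composition rule (6.2) have exact finite-dimensional analogues via the discrete Fourier transform. The sketch below (a discrete analogue, not the operator on \( {\mathcal{S}}^{\prime } \) itself) applies \( \operatorname{Op}\left( {\langle \xi {\rangle }^{-2}}\right) \) and checks that composing with \( \operatorname{Op}\left( {\langle \xi {\rangle }^{2}}\right) = 1 - \Delta \) returns the input, as in (6.3):

```python
import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
xi = np.fft.fftfreq(N, d=L / N) * 2 * np.pi       # integer angular frequencies here

def Op(p, u):
    """Discrete analogue of Op(p)u = F^{-1}(p(xi) * Fu)."""
    return np.fft.ifft(p(xi) * np.fft.fft(u))

bracket_xi_sq = lambda k: 1.0 + k**2               # <xi>^2 = 1 + |xi|^2 in one dimension

u = np.exp(np.sin(x))                              # a smooth periodic test function
v = Op(lambda k: 1.0 / bracket_xi_sq(k), u)        # v = (1 - Laplacian)^{-1} u

# Composition of symbols: Op(<xi>^2) Op(<xi>^{-2}) u = u, cf. (6.2)-(6.3).
w = Op(bracket_xi_sq, v)
print("max error:", np.max(np.abs(w - u)))         # ~ machine precision
```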
The systematic theory of pseudodifferential operators plays an important role in the modern mathematical literature, as a general framework around differential operators and their solution operators. It is technically more complicated than what we are doing at present, and will be taken up later, in Chapter 7. Let us consider the \( {L}_{2} \) -realizations of a pseudodifferential operator \( P\left( D\right) \) . In this "constant-coefficient" case we can appeal to Theorem 12.13 on multiplication operators in \( {L}_{2} \) . Theorem 6.3. Let \( p\left( \xi \right) \in {\mathcal{O}}_{M} \) and let \( P\left( D\right) \) be the associated pseudodiffer-ential operator \( \mathrm{{Op}}\left( p\right) \) . The maximal realization \( P{\left( D\right) }_{\max } \) of \( P\left( D\right) \) in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \) with domain \[ D\left( {P{\left( D\right) }_{\max }}\right) = \left\{ {u \in {L}_{2}\left( {\mathbb{R}}^{n}\right) \mid P\left( D\right) u \in {L}_{2}\left( {\mathbb{R}}^{n}\right) }\right\} , \] (6.7) is densely defined (with \( \mathcal{S} \subset D\left( {P{\left( D\right) }_{\max }}\right) \) ) and closed. Let \( P{\left( D\right) }_{\min } \) denote the closure of \( {\left. P\left( D\right) \right| }_{{C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) } \) (the minimal realization); then \[ P{\left( D\right) }_{\max } = P{\left( D\right) }_{\min }. \] (6.8) Furthermore, \( {\left( P{\left( D\right) }_{\max }\right) }^{ * } = {P}^{\prime }{\left( D\right) }_{\max } \), where \( {P}^{\prime }\left( D\right) = \mathrm{{Op}}\left( \bar{p}\right) \) . Proof. We write \( P \) for \( P\left( D\right) \) and \( {P}^{\prime } \) for \( {P}^{\prime }\left( D\right) \) . It follows immediately from the Parseval-Plancherel theorem (Theorem 5.5) that \[ {P}_{\max } = {\mathcal{F}}^{-1}{M}_{p}\mathcal{F};\text{ with } \] \[ D\left( {P}_{\max }\right) = {\mathcal{F}}^{-1}D\left( {M}_{p}\right) = {\mathcal{F}}^{-1}\left\{ {f \in {L}_{2}\left( {\mathbb{R}}^{n}\right) \mid {pf} \in {L}_{2}\left( {\mathbb{R}}^{n}\right) }\right\} , \] where \( {M}_{p} \) is the multiplication operator in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \) defined as in Theorem 12.13. In particular, \( {P}_{\max } \) is a closed, densely defined operator, and \( \mathcal{S} \subset \) \( D\left( {M}_{p}\right) \) implies \( \mathcal{S} \subset D\left( {P{\left( D\right) }_{\max }}\right) \) . We shall now first show that \( {P}_{\max } \) and \( {P}_{\min }^{\prime } \) are adjoints of one another. This goes in practically the same way as in Section 4.1: For \( u \in {\mathcal{S}}^{\prime } \) and \( \varphi \in {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \) one has: \[ \langle {Pu},\bar{\varphi }\rangle = \left\langle {{\mathcal{F}}^{-1}p\mathcal{F}u,\bar{\varphi }}\right\rangle = \left\langle {p\mathcal{F}u,{\mathcal{F}}^{-1}\bar{\varphi }}\right\rangle \] (6.9) \[ = \left\langle {u,\mathcal{F}p{\mathcal{F}}^{-1}\bar{\varphi }}\right\rangle = \left\langle {u,\overline{{\mathcal{F}}^{-1}\bar{p}\mathcal{F}\varphi }}\right\rangle = \left\langle {u,\overline{{P}^{\prime }\varphi }}\right\rangle , \] using that \( \overline{\mathcal{F}} = {\left( 2\pi \right) }^{n}{\mathcal{F}}^{-1} \) . We see from this on one hand that when \( u \in \) \( D\left( {P}_{\max }\right) \), i.e., \( u \) and \( {Pu} \in {L}_{2} \), then \[ \left( {{Pu},\varphi }\right) = \left( {u,{P}^{\prime }\varphi }\right) \;\text{ for all }\varphi \in {C}_{0}^{\infty }, \] so that \[ {P}_{\max } \subset {\left( {\left. 
{P}^{\prime }\right| }_{{C}_{0}^{\infty }}\right) }^{ * }\text{ and }{\left. {P}^{\prime }\right| }_{{C}_{0}^{\infty }} \subset {\left( {P}_{\max }\right) }^{ * }, \] and thereby \[ {P}_{\min }^{\prime } = \text{ closure of }{\left. {P}^{\prime }\right| }_{{C}_{0}^{\infty }} \subset {\left( {P}_{\max }\right) }^{ * }. \] On the other hand, we see from (6.9) that when \( u \in D\left( {\left( {\left. {P}^{\prime }\right| }_{{C}_{0}^{\infty }}\right) }^{ * }\right) \), i.e., there exists \( v \in {L}_{2} \) so that \( \left( {u,{P}^{\prime }\varphi }\right) = \left( {v,\varphi }\right) \) for all \( \varphi \in {C}_{0}^{\infty } \), then \( v \) equals \( {Pu} \), i.e., \[ {\left( {\left. {P}^{\prime }\right| }_{{C}_{0}^{\infty }}\right) }^{ * } \subset {P}_{\max } \] Thus \( {P}_{\max } = {\left( {\left. {P}^{\prime }\right| }_{{C}_{0}^{\infty }}\right) }^{ * } = {\left( {P}_{\min }^{\prime }\right) }^{ * } \) (cf. Corollary 12.6). So Lemma 4.3 extends to the present situation. But now we can furthermore use that \( {\left( {M}_{p}\right) }^{ * } = {M}_{\bar{p}} \) by Theorem 12.13, which by Fourier transformation is carried over to \[ {\left( {P}_{\max }\right) }^{ * } = {P}_{\max }^{\prime }. \] In detail: \[ {\left( {P}_{\max }\right) }^{ * } = {\left( {\mathcal{F}}^{-1}{M}_{p}\mathcal{F}\right) }^{ * } = {\mathcal{F}}^{ * }{M}_{p}^{ * }{\left( {\mathcal{F}}^{-1}\right) }^{ * } = \overline{\mathcal{F}}{M}_{\bar{p}}{\overline{\mathcal{F}}}^{-1} = {\mathcal{F}}^{-1}{M}_{\bar{p}}\mathcal{F} = {P}_{\max }^{\prime }, \] using that \( {\mathcal{F}}^{ * } = \overline{\mathcal{F}} = {\left( 2\pi \right) }^{n}{\mathcal{F}}^{-1} \) . Since \( {\left( {P}_{\max }\right) }^{ * } = {P}_{\min }^{\prime } \), it follows that \( {P}_{\max }^{\prime } = {P}_{\min }^{\prime } \), showing that the maximal and the minimal operators coincide, for all these multiplication operators and Fourier transformed multiplication operators. Theorem 6.4. One has for the operators introduced in Theorem 6.3: \( {1}^{ \circ }P{\left( D\right) }_{\max } \) is a bounded operator in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \) if and only if \( p\left( \xi \right) \) is bounded, and the norm satisfies \[ \begin{Vmatrix}{P{\left( D\right) }_{\max }}\end{Vmatrix} = \sup \left\{ {\left| {p\left( \xi \right) }\right| \mid \xi \in {\mathbb{R}}^{n}}\right\} . \] (6.10) \( {2}^{ \circ }P{\left( D\right) }_{\max } \) is selfadjoint in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \) if and only if \( p \) is real. \( {3}^{ \circ }P{\left( D\right) }_{\max } \) has the lower bound \[ m\left( {P{\left( D\right) }_{\max }}\right) = \inf \left\{ {\operatorname{Re}p\left( \xi \right) \mid \xi \in {\mathbb{R}}^{n}}\right\} \geq - \infty . \] (6.11) Proof. \( {1}^{ \circ } \) . We have from Theorem 12.13 and the subsequent remarks that \( {M}_{p} \) is a bounded operator in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \) when \( p \) is a
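The norm identity (6.10) is just Parseval's theorem: on the Fourier side the operator acts by multiplication with \( p\left( \xi \right) \). A discrete sanity check with an arbitrarily chosen bounded symbol (same periodic grid as in the earlier sketch):

```python
import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
xi = np.fft.fftfreq(N, d=L / N) * 2 * np.pi

p = lambda k: np.exp(-k**2) + 0.5 * np.cos(k)      # an arbitrary bounded symbol
sup_p = np.max(np.abs(p(xi)))                       # discrete stand-in for sup |p|

Op = lambda u: np.fft.ifft(p(xi) * np.fft.fft(u))

rng = np.random.default_rng(1)
ratios = [np.linalg.norm(Op(u)) / np.linalg.norm(u)
          for u in rng.standard_normal((200, N))]
print(f"max ratio over random u: {max(ratios):.4f} <= sup|p| = {sup_p:.4f}")

# A function concentrated at the maximizing frequency attains the norm.
k_star = xi[np.argmax(np.abs(p(xi)))]
u_star = np.cos(k_star * x)
print(np.linalg.norm(Op(u_star)) / np.linalg.norm(u_star), "~", sup_p)
```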
1189_(GTM95)Probability-1
Definition 4
Definition 4. We say that the sets (events) \( {A}_{1},\ldots ,{A}_{n} \) are mutually independent or statistically independent (with respect to the probability \( \mathrm{P} \) ) if for any \( k = 1,\ldots, n \) and \( 1 \leq {i}_{1} < {i}_{2} < \cdots < {i}_{k} \leq n \) \[ \mathrm{P}\left( {{A}_{{i}_{1}}\ldots {A}_{{i}_{k}}}\right) = \mathrm{P}\left( {A}_{{i}_{1}}\right) \ldots \mathrm{P}\left( {A}_{{i}_{k}}\right) . \] (12) Definition 5. The algebras \( {\mathcal{A}}_{1},\ldots ,{\mathcal{A}}_{n} \) of sets (events) are called mutually independent or statistically independent (with respect to the probability \( \mathrm{P} \) ) if any sets \( {A}_{1},\ldots ,{A}_{n} \) belonging respectively to \( {\mathcal{A}}_{1},\ldots ,{\mathcal{A}}_{n} \) are independent. Note that pairwise independence of events does not imply their independence. In fact if, for example, \( \Omega = \left\{ {{\omega }_{1},{\omega }_{2},{\omega }_{3},{\omega }_{4}}\right\} \) and all outcomes are equiprobable, it is easily verified that the events \[ A = \left\{ {{\omega }_{1},{\omega }_{2}}\right\} ,\;B = \left\{ {{\omega }_{1},{\omega }_{3}}\right\} ,\;C = \left\{ {{\omega }_{1},{\omega }_{4}}\right\} \] are pairwise independent, whereas \[ \mathrm{P}\left( {ABC}\right) = \frac{1}{4} \neq {\left( \frac{1}{2}\right) }^{3} = \mathrm{P}\left( A\right) \mathrm{P}\left( B\right) \mathrm{P}\left( C\right) . \] Also note that if \[ \mathrm{P}\left( {ABC}\right) = \mathrm{P}\left( A\right) \mathrm{P}\left( B\right) \mathrm{P}\left( C\right) \] for events \( A, B \) and \( C \), it by no means follows that these events are pairwise independent. In fact, let \( \Omega \) consist of the 36 ordered pairs \( \left( {i, j}\right) \), where \( i, j = 1,2,\ldots ,6 \) and all the pairs are equiprobable. Then if \( A = \{ \left( {i, j}\right) : j = 1,2 \) or \( 5\}, B = \{ \left( {i, j}\right) : j = \) \( 4,5 \) or \( 6\}, C = \{ \left( {i, j}\right) : i + j = 9\} \) we have \[ \mathrm{P}\left( {AB}\right) = \frac{1}{6} \neq \frac{1}{4} = \mathrm{P}\left( A\right) \mathrm{P}\left( B\right) \] \[ \mathrm{P}\left( {AC}\right) = \frac{1}{36} \neq \frac{1}{18} = \mathrm{P}\left( A\right) \mathrm{P}\left( C\right) , \] \[ \mathsf{P}\left( {BC}\right) = \frac{1}{12} \neq \frac{1}{18} = \mathsf{P}\left( B\right) \;\mathsf{P}\left( C\right) , \] but also \[ \mathrm{P}\left( {ABC}\right) = \frac{1}{36} = \mathrm{P}\left( A\right) \mathrm{P}\left( B\right) \mathrm{P}\left( C\right) . \] 6. Let us consider in more detail, from the point of view of independence, the classical discrete model \( \left( {\Omega ,\mathcal{A},\mathrm{P}}\right) \) that was introduced in Sect. 2 and used as a basis for the binomial distribution. In this model \[ \Omega = \left\{ {\omega : \omega = \left( {{a}_{1},\ldots ,{a}_{n}}\right) ,{a}_{i} = 0,1}\right\} ,\;\mathcal{A} = \{ A : A \subseteq \Omega \} \] and \[ p\left( \omega \right) = {p}^{\sum {a}_{i}}{q}^{n - \sum {a}_{i}}. \] (13) Consider an event \( A \subseteq \Omega \) . We say that this event depends on a trial at time \( k \) if it is determined by the value \( {a}_{k} \) alone. Examples of such events are \[ {A}_{k} = \left\{ {\omega : {a}_{k} = 1}\right\} ,\;{\bar{A}}_{k} = \left\{ {\omega : {a}_{k} = 0}\right\} . \] Let us consider the sequence of algebras \( {\mathcal{A}}_{1},{\mathcal{A}}_{2},\ldots ,{\mathcal{A}}_{n} \), where \( {\mathcal{A}}_{k} = \left\{ {{A}_{k},{\bar{A}}_{k}}\right. \) , \( \varnothing ,\Omega \} \) and show that under (13) these algebras are independent. 
It is clear that \[ \mathrm{P}\left( {A}_{k}\right) = \mathop{\sum }\limits_{\left\{ \omega : {a}_{k} = 1\right\} }p\left( \omega \right) = \mathop{\sum }\limits_{\left\{ \omega : {a}_{k} = 1\right\} }{p}^{\sum {a}_{i}}{q}^{n - \sum {a}_{i}} \] \[ = p\mathop{\sum }\limits_{\left( {a}_{1},\ldots ,{a}_{k - 1},{a}_{k + 1},\ldots ,{a}_{n}\right) }{p}^{{a}_{1} + \cdots + {a}_{k - 1} + {a}_{k + 1} + \cdots + {a}_{n}} \] \[ \times {q}^{\left( {n - 1}\right) - \left( {{a}_{1} + \cdots + {a}_{k - 1} + {a}_{k + 1} + \cdots + {a}_{n}}\right) } = p\mathop{\sum }\limits_{{l = 0}}^{{n - 1}}{C}_{n - 1}^{l}{p}^{l}{q}^{\left( {n - 1}\right) - l} = p, \] and a similar calculation shows that \( \mathrm{P}\left( {\bar{A}}_{k}\right) = q \) and that, for \( k \neq l \) , \[ \mathrm{P}\left( {{A}_{k}{A}_{l}}\right) = {p}^{2},\;\mathrm{P}\left( {{A}_{k}{\bar{A}}_{l}}\right) = {pq},\;\mathrm{P}\left( {{\bar{A}}_{k}{A}_{l}}\right) = {pq},\;\mathrm{P}\left( {{\bar{A}}_{k}{\bar{A}}_{l}}\right) = {q}^{2}. \] It is easy to deduce from this that \( {\mathcal{A}}_{k} \) and \( {\mathcal{A}}_{l} \) are independent for \( k \neq l \) . It can be shown in the same way that \( {\mathcal{A}}_{1},{\mathcal{A}}_{2},\ldots ,{\mathcal{A}}_{n} \) are independent. This is the basis for saying that our model \( \left( {\Omega ,\mathcal{A},\mathrm{P}}\right) \) corresponds to " \( n \) independent trials with two outcomes and probability \( p \) of success." James Bernoulli was the first to study this model systematically, and established the law of large numbers (Sect. 5) for it. Accordingly, this model is also called the Bernoulli scheme with two outcomes (success and failure) and probability \( p \) of success. A detailed study of the probability space for the Bernoulli scheme shows that it has the structure of a direct product of probability spaces, defined as follows. Suppose that we are given a collection \( \left( {{\Omega }_{1},{\mathcal{B}}_{1},{\mathrm{P}}_{1}}\right) ,\ldots ,\left( {{\Omega }_{n},{\mathcal{B}}_{n},{\mathrm{P}}_{n}}\right) \) of discrete probability spaces. Form the space \( \Omega = {\Omega }_{1} \times {\Omega }_{2} \times \cdots \times {\Omega }_{n} \) of points \( \omega = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \), where \( {a}_{i} \in {\Omega }_{i} \) . Let \( \mathcal{A} = {\mathcal{B}}_{1} \otimes \cdots \otimes {\mathcal{B}}_{n} \) be the algebra of the subsets of \( \Omega \) that consists of sums of sets of the form \[ A = {B}_{1} \times {B}_{2} \times \cdots \times {B}_{n} \] with \( {B}_{i} \in {\mathcal{B}}_{i} \) . Finally, for \( \omega = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) take \( p\left( \omega \right) = {p}_{1}\left( {a}_{1}\right) \cdots {p}_{n}\left( {a}_{n}\right) \) and define \( \mathrm{P}\left( A\right) \) for the set \( A = {B}_{1} \times {B}_{2} \times \cdots \times {B}_{n} \) by \[ \mathrm{P}\left( A\right) = \mathop{\sum }\limits_{\left\{ {a}_{1} \in {B}_{1},\ldots ,{a}_{n} \in {B}_{n}\right\} }{p}_{1}\left( {a}_{1}\right) \ldots {p}_{n}\left( {a}_{n}\right) . \] It is easy to verify that \( \mathrm{P}\left( \Omega \right) = 1 \) and therefore the triple \( \left( {\Omega ,\mathcal{A},\mathrm{P}}\right) \) defines a probability space. 
This space is called the direct product of the probability spaces \( \left( {{\Omega }_{1},{\mathcal{B}}_{1},{\mathrm{P}}_{1}}\right) ,\ldots ,\left( {{\Omega }_{n},{\mathcal{B}}_{n},{\mathrm{P}}_{n}}\right) \) We note an easily verified property of the direct product of probability spaces: with respect to \( \mathrm{P} \), the events \[ {A}_{1} = \left\{ {\omega : {a}_{1} \in {B}_{1}}\right\} ,\ldots ,{A}_{n} = \left\{ {\omega : {a}_{n} \in {B}_{n}}\right\} , \] where \( {B}_{i} \in {\mathcal{B}}_{i} \), are independent. In the same way, the algebras of subsets of \( \Omega \) , \[ {\mathcal{A}}_{1} = \left\{ {{A}_{1} : {A}_{1} = \left\{ {\omega : {a}_{1} \in {B}_{1}}\right\} ,{B}_{1} \in {\mathcal{B}}_{1}}\right\} \] \[ {\mathcal{A}}_{n} = \left\{ {{A}_{n} : {A}_{n} = \left\{ {\omega : {a}_{n} \in {B}_{n}}\right\} ,{B}_{n} \in {\mathcal{B}}_{n}}\right\} \] are independent. It is clear from our construction that the Bernoulli scheme \[ \left( {\Omega ,\mathcal{A},\mathrm{P}}\right) \text{with}\Omega = \left\{ {\omega : \omega = \left( {{a}_{1},\ldots ,{a}_{n}}\right) ,{a}_{i} = 0\text{or 1}}\right\} \text{,} \] \[ \mathcal{A} = \{ A : A \subseteq \Omega \} \text{ and }p\left( \omega \right) = {p}^{\sum {a}_{i}}{q}^{n - \sum {a}_{i}} \] can be thought of as the direct product of the probability spaces \( \left( {{\Omega }_{i},{\mathcal{B}}_{i},{\mathrm{P}}_{i}}\right), i = 1 \) , \( 2,\ldots, n \), where \[ {\Omega }_{i} = \{ 0,1\} ,\;{\mathcal{B}}_{i} = \left\{ {\{ 0\} ,\{ 1\} ,\varnothing ,{\Omega }_{i}}\right\} \] \[ {\mathrm{P}}_{i}\left( {\{ 1\} }\right) = p,\;{\mathrm{P}}_{i}\left( {\{ 0\} }\right) = q. \] 7. Problems 1. Give examples to show that in general the equations \[ \mathrm{P}\left( {B \mid A}\right) + \mathrm{P}\left( {B \mid \bar{A}}\right) = 1 \] \[ \mathsf{P}\left( {B\;|\;A}\right) + \mathsf{P}\left( {\overline{B}\;|\;\overline{A}}\right) = 1 \] are false. 2. An urn contains \( M \) balls, of which \( {M}_{1} \) are white. Consider a sample of size \( n \) . Let \( {B}_{j} \) be the event that the ball selected at the \( j \) th step is white, and \( {A}_{k} \) the event that a sample of size \( n \) contains exactly \( k \) white balls. Show that \[ \mathrm{P}\left( {{B}_{j} \mid {A}_{k}}\right) = k/n \] both for sampling with replacement and for sampling without replacement. 3. Let \( {A}_{1},\ldots ,{A}_{n} \) be independent events. Then \[ \mathrm{P}\left( {\mathop{\bigcup }\limits_{{i = 1}}^{n}{A}_{i}}\right) = 1 - \mathop{\prod }\limits_{{i = 1}}^{n}\mathrm{P}\left( {\bar{A}}_{i}\right) \] 4. Let \( {A}_{1},\ldots ,{A}_{n} \) be independent events with \( \mathrm{P}\left( {A}_{i}\right) = {p}_{i} \) . Then the probability \( {P}_{0} \) that neither event occurs is \[ {P}_{0} = \mathop{\prod }\limits_{{i = 1}}^{n}\left( {1 - {p}_{i}}\right) \] 5. Let \( A \) and \( B \) be independent events. In terms of \( \mathrm{P}\left( A\right) \) and \( \mathrm{P}\left( B\right) \), find the probabilities of the events that exactly \( k \), at least \( k \), and at most \( k \) of \( A \) and \( B \) occur \( \left( {k = 0,1,2}\right) \) . 6. Let event \( A \) be independent of itself, i.e., let \( A \) and \( A \
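Both the dice example earlier in this section and the independence of the algebras in the Bernoulli scheme are finite statements and can be verified by enumeration. A short sketch (the choices n = 4 and p = 1/3 are arbitrary):

```python
from itertools import product
from fractions import Fraction

# Dice example from the text: P(ABC) = P(A)P(B)P(C) but no pairwise independence.
dice = set(product(range(1, 7), repeat=2))
P = lambda E: Fraction(len(E), 36)
A = {w for w in dice if w[1] in (1, 2, 5)}
B = {w for w in dice if w[1] in (4, 5, 6)}
C = {w for w in dice if sum(w) == 9}
assert P(A & B & C) == P(A) * P(B) * P(C) == Fraction(1, 36)
assert P(A & B) != P(A) * P(B)                      # 1/6 vs 1/4

# Bernoulli scheme with n = 4, p = 1/3: the events A_k = {a_k = 1} are independent.
n, p = 4, Fraction(1, 3)
q = 1 - p
def prob(event):
    return sum(p**sum(w) * q**(n - sum(w))
               for w in product((0, 1), repeat=n) if event(w))
for k in range(n):
    assert prob(lambda w: w[k] == 1) == p
    for l in range(k + 1, n):
        assert prob(lambda w: w[k] == 1 and w[l] == 1) == p * p
print("independence checks pass")
```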
1042_(GTM203)The Symmetric Group
Definition 1.7.1
Definition 1.7.1 Given a matrix representation \( X : G \rightarrow G{L}_{d} \), the corresponding commutant algebra is \[ \operatorname{Com}X = \left\{ {T \in {\operatorname{Mat}}_{d} : {TX}\left( g\right) = X\left( g\right) T\text{ for all }g \in G}\right\} \] where \( {\operatorname{Mat}}_{d} \) is the set of all \( d \times d \) matrices with entries in \( \mathbb{C} \) . Given a \( G \) -module \( V \), the corresponding endomorphism algebra is \[ \text{End}V = \{ \theta : V \rightarrow V : \theta \text{is a}G\text{-homomorphism}\} \text{. ∎} \] It is easy to check that both the commutant and endomorphism algebras do satisfy the axioms for an algebra. The reader can also verify that if \( V \) is a \( G \) -module and \( X \) is a corresponding matrix representation, then End \( V \) and Com \( X \) are isomorphic as algebras. Merely take the basis \( \mathcal{B} \) that produced \( X \) and use the map that sends each \( \theta \in \operatorname{End}V \) to the matrix \( T \) of \( \theta \) in the basis \( \mathcal{B} \) . Let us compute \( \operatorname{Com}X \) for various representations \( X \) . Example 1.7.2 Suppose that \( X \) is a matrix representation such that \[ X = \left( \begin{matrix} {X}^{\left( 1\right) } & 0 \\ 0 & {X}^{\left( 2\right) } \end{matrix}\right) = {X}^{\left( 1\right) } \oplus {X}^{\left( 2\right) }, \] where \( {X}^{\left( 1\right) },{X}^{\left( 2\right) } \) are inequivalent and irreducible of degrees \( {d}_{1},{d}_{2} \), respectively. What does \( \operatorname{Com}X \) look like? Suppose that \[ T = \left( \begin{array}{ll} {T}_{1,1} & {T}_{1,2} \\ {T}_{2,1} & {T}_{2,2} \end{array}\right) \] is a matrix partitioned in the same way as \( X \) . If \( {TX} = {XT} \), then we can multiply out each side to obtain \[ \left( \begin{array}{ll} {T}_{1,1}{X}^{\left( 1\right) } & {T}_{1,2}{X}^{\left( 2\right) } \\ {T}_{2,1}{X}^{\left( 1\right) } & {T}_{2,2}{X}^{\left( 2\right) } \end{array}\right) = \left( \begin{array}{ll} {X}^{\left( 1\right) }{T}_{1,1} & {X}^{\left( 1\right) }{T}_{1,2} \\ {X}^{\left( 2\right) }{T}_{2,1} & {X}^{\left( 2\right) }{T}_{2,2} \end{array}\right) . \] Equating corresponding blocks we get \[ {T}_{1,1}{X}^{\left( 1\right) } = {X}^{\left( 1\right) }{T}_{1,1} \] \[ {T}_{1,2}{X}^{\left( 2\right) } = {X}^{\left( 1\right) }{T}_{1,2} \] \[ {T}_{2,1}{X}^{\left( 1\right) } = {X}^{\left( 2\right) }{T}_{2,1} \] \[ {T}_{2,2}{X}^{\left( 2\right) } = {X}^{\left( 2\right) }{T}_{2,2} \] Using Corollaries 1.6.6 and 1.6.8 along with the fact that \( {X}^{\left( 1\right) } \) and \( {X}^{\left( 2\right) } \) are inequivalent, these equations can be solved to yield \[ {T}_{1,1} = {c}_{1}{I}_{{d}_{1}},\;{T}_{1,2} = {T}_{2,1} = 0,\;{T}_{2,2} = {c}_{2}{I}_{{d}_{2}}, \] where \( {c}_{1},{c}_{2} \in \mathbb{C} \) and \( {I}_{{d}_{1}},{I}_{{d}_{2}} \) are identity matrices of degrees \( {d}_{1},{d}_{2} \) . Thus \[ T = \left( \begin{matrix} {c}_{1}{I}_{{d}_{1}} & 0 \\ 0 & {c}_{2}{I}_{{d}_{2}} \end{matrix}\right) \] We have shown that when \( X = {X}^{\left( 1\right) } \oplus {X}^{\left( 2\right) } \) with \( {X}^{\left( 1\right) } \ncong {X}^{\left( 2\right) } \) and irreducible, then \[ \operatorname{Com}X = \left\{ {{c}_{1}{I}_{{d}_{1}} \oplus {c}_{2}{I}_{{d}_{2}} : {c}_{1},{c}_{2} \in \mathbb{C}}\right\} \] where \( {d}_{1} = \deg {X}^{\left( 1\right) },{d}_{2} = \deg {X}^{\left( 2\right) } \) . 
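The block description of \( \operatorname{Com}X \) can also be found numerically: \( {TX}\left( g\right) = X\left( g\right) T \) is a linear system in the entries of \( T \), so the commutant is a matrix null space. The sketch below uses an invented example, the direct sum of the trivial and the two-dimensional irreducible representation of \( {\mathcal{S}}_{3} \) (given on a transposition and a 3-cycle); the expected dimension is 2, one scalar per block:

```python
import numpy as np
from scipy.linalg import null_space, block_diag

# Generators of S_3 in the representation X = X(1) (trivial) + X(2) (2-dimensional).
s = block_diag(np.array([[1.0]]), np.array([[-1.0, 1.0], [0.0, 1.0]]))   # a transposition
r = block_diag(np.array([[1.0]]), np.array([[0.0, -1.0], [1.0, -1.0]]))  # a 3-cycle

d = 3
rows = []
for X in (s, r):
    # TX = XT  <=>  (X^T kron I - I kron X) vec(T) = 0  (column-major vec convention)
    rows.append(np.kron(X.T, np.eye(d)) - np.kron(np.eye(d), X))
ns = null_space(np.vstack(rows))

print("dim Com X =", ns.shape[1])             # expected: 2
T = ns[:, 0].reshape(d, d, order="F")          # a sample commuting matrix
print(np.round(T, 3))                          # block form c1*I_1 (+) c2*I_2
```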
In general, if \( X = { \oplus }_{i = 1}^{k}{X}^{\left( i\right) } \), where the \( {X}^{\left( i\right) } \) are pairwise inequivalent irreducibles, then a similar argument proves that \[ \operatorname{Com}X = \left\{ {{ \oplus }_{i = 1}^{k}{c}_{i}{I}_{{d}_{i}} : {c}_{i} \in \mathbb{C}}\right\} \] where \( {d}_{i} = \deg {X}^{\left( i\right) } \) . Notice that the degree of \( X \) is \( \mathop{\sum }\limits_{{i = 1}}^{k}{d}_{i} \) . Note also that the dimension of \( \operatorname{Com}X \) (as a vector space) is just \( k \) . This is because there are \( k \) scalars \( {c}_{i} \) that can vary, whereas the identity matrices are fixed. Next we deal with the case of sums of equivalent representations. A convenient notation is \[ {mX} = \overset{m}{\overbrace{X \oplus X \oplus \cdots \oplus X}} \] where the nonnegative integer \( m \) is called the multiplicity of \( X \) . Example 1.7.3 Suppose that \[ X = \left( \begin{matrix} {X}^{\left( 1\right) } & 0 \\ 0 & {X}^{\left( 1\right) } \end{matrix}\right) = 2{X}^{\left( 1\right) } \] where \( {X}^{\left( 1\right) } \) is irreducible of degree \( d \) . Take \( T \) partitioned as before. Doing the multiplication in \( {TX} = {XT} \) and equating blocks now yields four equations, all of the form \[ {T}_{i, j}{X}^{\left( 1\right) } = {X}^{\left( 1\right) }{T}_{i, j} \] for all \( i, j = 1,2 \) . Corollaries 1.6.6 and 1.6.8 come into play again to reveal that, for all \( i \) and \( j \) , \[ {T}_{i, j} = {c}_{i, j}{I}_{d} \] where \( {c}_{i, j} \in \mathbb{C} \) . Thus \[ \operatorname{Com}X = \left\{ {\left( \begin{array}{ll} {c}_{1,1}{I}_{d} & {c}_{1,2}{I}_{d} \\ {c}_{2,1}{I}_{d} & {c}_{2,2}{I}_{d} \end{array}\right) : {c}_{i, j} \in \mathbb{C}\text{ for all }i, j}\right\} \] (1.11) is the commutant algebra in this case. - The matrices in \( \operatorname{Com}2{X}^{\left( 1\right) } \) have a name. Definition 1.7.4 Let \( X = \left( {x}_{i, j}\right) \) and \( Y \) be matrices. Then their tensor product is the block matrix \[ X \otimes Y = \left( {{x}_{i, j}Y}\right) = \left( \begin{matrix} {x}_{1,1}Y & {x}_{1,2}Y & \cdots \\ {x}_{2,1}Y & {x}_{2,2}Y & \cdots \\ \vdots & \vdots & \ddots \end{matrix}\right) . \] Thus we could write the elements of (1.11) as \[ T = \left( \begin{array}{ll} {c}_{1,1} & {c}_{1,2} \\ {c}_{2,1} & {c}_{2,2} \end{array}\right) \otimes {I}_{d} \] and so \[ \operatorname{Com}X = \left\{ {{M}_{2} \otimes {I}_{d} : {M}_{2} \in {\operatorname{Mat}}_{2}}\right\} \] If we take \( X = m{X}^{\left( 1\right) } \), then \[ \operatorname{Com}X = \left\{ {{M}_{m} \otimes {I}_{d} : {M}_{m} \in {\operatorname{Mat}}_{m}}\right\} \] where \( d \) is the degree of \( {X}^{\left( 1\right) } \) . Computing degrees and dimensions, we obtain \[ \deg X = \deg m{X}^{\left( 1\right) } = m\deg {X}^{\left( 1\right) } = {md} \] and \[ \dim \left( {\operatorname{Com}X}\right) = \dim \left\{ {{M}_{m} : {M}_{m} \in {\operatorname{Mat}}_{m}}\right\} = {m}^{2}. \] Finally, we are led to consider the most general case: \[ X = {m}_{1}{X}^{\left( 1\right) } \oplus {m}_{2}{X}^{\left( 2\right) } \oplus \cdots \oplus {m}_{k}{X}^{\left( k\right) } \] \( \left( {1.12}\right) \) where the \( {X}^{\left( i\right) } \) are pairwise inequivalent irreducibles with \( \deg {X}^{\left( i\right) } = {d}_{i} \) . The degree of \( X \) is given by \[ \deg X = \mathop{\sum }\limits_{{i = 1}}^{k}\deg \left( {{m}_{i}{X}^{\left( i\right) }}\right) = {m}_{1}{d}_{1} + {m}_{2}{d}_{2} + \cdots + {m}_{k}{d}_{k}. 
\] The reader should have no trouble combining Examples 1.7.2 and 1.7.3 to obtain \[ \operatorname{Com}X = \left\{ {{ \oplus }_{i = 1}^{k}\left( {{M}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) : {M}_{{m}_{i}} \in {\operatorname{Mat}}_{{m}_{i}}\text{ for all }i}\right\} \] (1.13) of dimension \[ \dim \left( {\operatorname{Com}X}\right) = \dim \{ { \oplus }_{i = 1}^{k}{M}_{{m}_{i}}\; : \;{M}_{{m}_{i}} \in {\mathrm{{Mat}}}_{{m}_{i}}\} = {m}_{1}^{2} + {m}_{2}^{2} + \cdots + {m}_{k}^{2}. \] Before continuing our investigation of the commutant algebra, we should briefly mention the abstract vector space analogue of the tensor product. Definition 1.7.5 Given vector spaces \( V \) and \( W \), then their tensor product is the set \[ V \otimes W = \left\{ {\mathop{\sum }\limits_{{i, j}}{c}_{i, j}{\mathbf{v}}_{i} \otimes {\mathbf{w}}_{j} : {c}_{i, j} \in \mathbb{C},{\mathbf{v}}_{i} \in V,{\mathbf{w}}_{j} \in W}\right\} \] subject to the relations \[ \left( {{c}_{1}{\mathbf{v}}_{1} + {c}_{2}{\mathbf{v}}_{2}}\right) \otimes \mathbf{w} = {c}_{1}\left( {{\mathbf{v}}_{1} \otimes \mathbf{w}}\right) + {c}_{2}\left( {{\mathbf{v}}_{2} \otimes \mathbf{w}}\right) \] and \[ \mathbf{v} \otimes \left( {{d}_{1}{\mathbf{w}}_{1} + {d}_{2}{\mathbf{w}}_{2}}\right) = {d}_{1}\left( {\mathbf{v} \otimes {\mathbf{w}}_{1}}\right) + {d}_{2}\left( {\mathbf{v} \otimes {\mathbf{w}}_{2}}\right) . \] It is easy to see that \( V \otimes W \) is also a vector space. In fact, the reader can check that if \( \mathcal{B} = \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2},\ldots ,{\mathbf{v}}_{d}}\right\} \) and \( \mathcal{C} = \left\{ {{\mathbf{w}}_{1},{\mathbf{w}}_{2},\ldots ,{\mathbf{w}}_{f}}\right\} \) are bases for \( V \) and \( W \), respectively, then the set \[ \left\{ {{\mathbf{v}}_{i} \otimes {\mathbf{w}}_{j} : 1 \leq i \leq d,1 \leq j \leq f}\right\} \] is a basis for \( V \otimes W \) . This gives the connection with the definition of matrix tensor products: The algebra \( {\operatorname{Mat}}_{d} \) has as basis the set \[ \mathcal{B} = \left\{ {{E}_{i, j} : 1 \leq i, j \leq d}\right\} \] where \( {E}_{i, j} \) is the matrix of zeros with exactly one 1 in position \( \left( {i, j}\right) \) . So if \( X = \left( {x}_{i, j}\right) \in {\operatorname{Mat}}_{d} \) and \( Y = \left( {y}_{k, l}\right) \in {\operatorname{Mat}}_{f} \), then, by the fact that \( \otimes \) is linear, \[ X \otimes Y = \left( {\mathop{\sum }\limits_{{i, j = 1}}^{d}{x}_{i, j}{E}_{i, j}}\right) \otimes \left( {\mathop{\sum }\limits_{{k, l = 1}}^{f}{y}_{k, l}{E}_{k, l}}\right) \] \[ = \mathop{\sum }\limits_{{i, j = 1}}^{d}\mathop{\sum }\limits_{{k, l = 1}}^{f}{x}_{i, j}{y}_{k, l}\left( {{E}_{i, j} \otimes {E}_{k, l}}\right) . \] (1.14) But if \( {E}_{i, j} \otimes {E}_{k, l} \) represents the \( \left( {k, l}\right) \) th position of the \( \left( {i, j}\right) \) th block of a matrix, then equation (1.14) says that the corresponding entry for \( X \otimes Y \) should be \( {x}_{i, j}{y}_{k, l} \), agreeing with the matrix definition. We return
110_The Schwarz Function and Its Generalization to Higher Dimensions
Definition 2.4
Definition 2.4. A function \( f : D \rightarrow \mathbb{R} \) on a subset \( D \) of a normed vector space \( E \) is called coercive if \[ f\left( x\right) \rightarrow \infty \;\text{ as }\parallel x\parallel \rightarrow \infty . \] Corollary 2.5. If \( f : D \rightarrow \mathbb{R} \) is a continuous coercive function defined on a closed set \( D \subseteq {\mathbb{R}}^{n} \), then \( f \) achieves a global minimum on \( D \) . Proof. The sublevel sets \( {l}_{\alpha }\left( f\right) = \{ x \in D : f\left( x\right) \leq \alpha \} \) are closed, since \( f \) is continuous, and bounded, since \( f \) is coercive. Thus, any nonempty sublevel set \( {l}_{\alpha }\left( f\right) \) is compact, and \( f \) achieves its minimum on it at a point \( {x}^{ * } \), which is also a global minimizer of \( f \) on \( D \) . Example 2.6. (The fundamental theorem of algebra) This famous theorem states that every polynomial \[ p\left( z\right) \mathrel{\text{:=}} {a}_{n}{z}^{n} + {a}_{n - 1}{z}^{n - 1} + \cdots + {a}_{1}z + {a}_{0}, \] with leading coefficient \( {a}_{n} \neq 0 \) and where the coefficients \( {a}_{i} \) are complex numbers, has a complex root, hence \( n \) complex roots counting multiplicities. The problem has a fascinating history, and it is generally agreed that the first rigorous proof of it was given by the great mathematician Gauss in 1797, when he was just 20 years old, and appeared in his doctoral thesis of 1799. Here, we give an elementary proof of this result. This very short proof from [253] uses optimization techniques, but the essential idea is already in Fefferman [92], and probably in earlier works. Consider minimizing the function \[ f\left( z\right) = \left| {p\left( z\right) }\right| \] over the complex numbers. We have \[ \left| {p\left( z\right) }\right| = {\left| z\right| }^{n} \cdot \left| {{a}_{n} + \frac{{a}_{n - 1}}{z} + \frac{{a}_{n - 2}}{{z}^{2}} + \cdots + \frac{{a}_{1}}{{z}^{n - 1}} + \frac{{a}_{0}}{{z}^{n}}}\right| . \] As \( \left| z\right| \rightarrow \infty \), the second factor above converges to \( \left| {a}_{n}\right| > 0 \) . Thus, \( f\left( z\right) \) is a coercive function, and so has a minimizer \( {z}^{ * } \) in \( \mathbb{C} \) . Without loss of generality, we may assume that \( {z}^{ * } = 0 \) ; otherwise, we can consider the polynomial \( q\left( z\right) = p\left( {z + {z}^{ * }}\right) \) . We have \[ \left| {a}_{0}\right| = f\left( 0\right) \leq f\left( z\right) = \left| {\mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{z}^{k}}\right| ,\;z \in \mathbb{C}. \] If \( {a}_{0} = 0 \), then \( z = 0 \) is a root of \( p \), and we are done. We claim that in fact, \( {a}_{0} = 0 \) . Suppose \( {a}_{0} \neq 0 \) and let \[ p\left( z\right) = {a}_{0} + {a}_{k}{z}^{k} + {z}^{k + 1}q\left( z\right) , \] where \( {a}_{k} \neq 0 \) is the first nonzero coefficient after \( {a}_{0} \) and \( q \) is a polynomial. Choose a \( k \) th root \( w \in \mathbb{C} \) of \( - {a}_{0}/{a}_{k} \) . Then \[ p\left( {tw}\right) = {a}_{0} + {a}_{k}{t}^{k}{w}^{k} + {t}^{k + 1}{w}^{k + 1}q\left( {tw}\right) = \left( {1 - {t}^{k}}\right) {a}_{0} + {t}^{k}\left\lbrack {t{w}^{k + 1}q\left( {tw}\right) }\right\rbrack . \] If \( 0 < t < 1 \) is small enough, then \( t\left| {{w}^{k + 1}q\left( {tw}\right) }\right| < \left| {a}_{0}\right| \), and \[ \left| {p\left( {tw}\right) }\right| < \left( {1 - {t}^{k}}\right) \left| {a}_{0}\right| + {t}^{k}\left| {a}_{0}\right| = \left| {a}_{0}\right| , \] a contradiction. ## 2.2 First-Order Optimality Conditions Theorem 2.7.
(First-order necessary condition for a local optimizer) Let \( f : U \rightarrow \mathbb{R} \) be a Gâteaux differentiable function on an open set \( U \subseteq {\mathbb{R}}^{n} \) . A local optimizer is a critical point, that is, \[ x\text{ a local optimizer }\; \Rightarrow \;\nabla f\left( x\right) = 0. \] Clearly, the theorem holds verbatim if \( U \subseteq {\mathbb{R}}^{n} \) is an arbitrary set with a nonempty interior, \( f \) is Gâteaux differentiable on int \( U \), and \( x \in \operatorname{int}U \) . We will not always point out such obvious facts in the interest of not complicating the statements of our theorems. Proof. We first assume that \( x \) is a local minimizer of \( f \) . If \( d \in {\mathbb{R}}^{n} \), then \[ {f}^{\prime }\left( {x;d}\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{f\left( {x + {td}}\right) - f\left( x\right) }{t} = \langle \nabla f\left( x\right), d\rangle . \] If \( \left| t\right| \) is small, then the numerator above is nonnegative, since \( x \) is a local minimizer. If \( t > 0 \), then the difference quotient is nonnegative, so in the limit as \( t \searrow 0 \), we have \( {f}^{\prime }\left( {x;d}\right) \geq 0 \) . However, if \( t < 0 \), the difference quotient is nonpositive, and we have \( {f}^{\prime }\left( {x;d}\right) \leq 0 \) . Thus, we conclude that \( {f}^{\prime }\left( {x;d}\right) = \langle \nabla f\left( x\right), d\rangle = 0 \) . If \( x \) is a local maximizer of \( f \), then \( \langle \nabla f\left( x\right), d\rangle = 0 \), since \( x \) is a local minimizer of \( - f \) . Picking \( d = \nabla f\left( x\right) \) gives \( {f}^{\prime }\left( {x;d}\right) = \parallel \nabla f\left( x\right) {\parallel }^{2} = 0 \), that is, \( \nabla f\left( x\right) = 0 \) . We note that the proof of Theorem 2.7 establishes the following more general result. Corollary 2.8. Let \( f : U \rightarrow \mathbb{R} \) be a function on an open set \( U \subseteq {\mathbb{R}}^{n} \) . If \( x \in U \) is a local minimizer of \( f \) and the directional derivative \( {f}^{\prime }\left( {x;d}\right) \) exists for a direction \( d \in {\mathbb{R}}^{n} \), then \( {f}^{\prime }\left( {x;d}\right) \geq 0 \) . Remark 2.9. Functions that have directional derivatives but are not necessarily differentiable occur naturally in optimization, for example in minimizing a function that is the pointwise maximum of a set of differentiable functions. See Danskin's theorem, Theorem 1.29, on page 20. In fact, it is possible to use this approach to derive optimality conditions for constrained optimization problems. See Section 12.1 for the derivation of optimality conditions in semi-infinite programming. Example 2.10. Here is an optimization problem from the theory of orthogonal polynomials; see [250], whose solution is obtained using a novel technique, a differential equation. We determine the minimizers and the minimum value of the function \[ f\left( {{x}_{1},\ldots ,{x}_{n}}\right) = \frac{1}{2}\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}^{2} - \mathop{\sum }\limits_{{1 \leq i < j \leq n}}\ln \left| {{x}_{i} - {x}_{j}}\right| . \] Differentiate \( f \) with respect to each variable \( {x}_{j} \) and set to zero to obtain \[ \frac{\partial f}{\partial {x}_{j}} = {x}_{j} - \mathop{\sum }\limits_{{i \neq j}}\frac{1}{{x}_{j} - {x}_{i}} = 0. \] To solve for \( x \), consider the polynomial \[ g\left( x\right) = \mathop{\prod }\limits_{{j = 1}}^{n}\left( {x - {x}_{j}}\right) \] which has roots at the points \( x = {x}_{1},\ldots ,{x}_{n} \) .
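Before carrying the derivation further, it may help to check this critical-point system numerically. The following sketch is not part of the original text; it assumes numpy and scipy are available, and the comparison at the end anticipates the closed-form answer derived below.

```python
import numpy as np
from numpy.polynomial import hermite
from scipy.optimize import fsolve

n = 4

def grad_f(x):
    # partial f / partial x_j = x_j - sum_{i != j} 1/(x_j - x_i)
    return np.array([x[j] - sum(1.0 / (x[j] - x[i]) for i in range(n) if i != j)
                     for j in range(n)])

# start from well-separated points so that no two coordinates collide
x0 = np.linspace(-1.6, 1.6, n)
x_star = np.sort(fsolve(grad_f, x0))
print(x_star)

# the derivation below identifies the minimizer with the roots of H_n;
# numpy's 'hermite' module uses the same (physicists') convention
print(np.sort(hermite.hermroots([0.0] * n + [1.0])))
```

We now return to the derivation.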
Differentiating this function gives \[ {g}^{\prime }\left( {x}_{j}\right) = \mathop{\prod }\limits_{{i \neq j}}\left( {{x}_{j} - {x}_{i}}\right) ,\;\frac{{g}^{\prime \prime }\left( {x}_{j}\right) }{{g}^{\prime }\left( {x}_{j}\right) } = 2\mathop{\sum }\limits_{{i \neq j}}\frac{1}{{x}_{j} - {x}_{i}}, \] so that \( \partial f/\partial {x}_{j} = 0 \) can be written as \[ {g}^{\prime \prime }\left( {x}_{j}\right) - 2{x}_{j}{g}^{\prime }\left( {x}_{j}\right) = 0 \] meaning that the polynomial \[ {g}^{\prime \prime }\left( x\right) - {2x}{g}^{\prime }\left( x\right) \] of degree \( n \) has the same roots as the polynomial \( g\left( x\right) \) (the \( {x}_{j} \) are necessarily distinct, since \( f \) is finite only when \( {x}_{i} \neq {x}_{j} \) for \( i \neq j \) ), so it must be proportional to \( g\left( x\right) \) . Comparing the coefficients of \( {x}^{n} \) gives \[ {g}^{\prime \prime }\left( x\right) - {2x}{g}^{\prime }\left( x\right) + {2ng}\left( x\right) = 0. \] The polynomial solution to this differential equation is the Hermite polynomial of order \( n \), \[ {H}_{n}\left( x\right) = n!\mathop{\sum }\limits_{{k = 0}}^{\left\lbrack {n/2}\right\rbrack }\frac{{\left( -1\right) }^{k}{\left( 2x\right) }^{n - {2k}}}{k!\left( {n - {2k}}\right) !}. \] Therefore, the solutions \( {x}_{j} \) are the roots of the Hermite polynomial \( {H}_{n}\left( x\right) \) . The discriminant of \( {H}_{n} \) is given by \[ \mathop{\prod }\limits_{{i < j}}{\left( {x}_{i} - {x}_{j}\right) }^{2} = {2}^{-n\left( {n - 1}\right) /2}\mathop{\prod }\limits_{{j = 1}}^{n}{j}^{j}, \] and the above formula for \( {H}_{n} \) gives \[ \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}^{2} = n\left( {n - 1}\right) /2. \] Thus, the minimum value of \( f \) is \[ \frac{1}{4}n\left( {n - 1}\right) \left( {1 + \ln 2}\right) - \frac{1}{2}\mathop{\sum }\limits_{{j = 1}}^{n}j\ln j. \] ## 2.3 Second-Order Optimality Conditions Definition 2.11. An \( n \times n \) matrix \( A \) is called positive semidefinite if \[ \langle {Ad}, d\rangle \geq 0\text{ for all }d \in {\mathbb{R}}^{n}. \] It is called positive definite if \[ \langle {Ad}, d\rangle > 0\text{ for all }d \in {\mathbb{R}}^{n}, d \neq 0. \] Note that if \( A \) is positive semidefinite, then \( {a}_{ii} = \left\langle {A{e}_{i},{e}_{i}}\right\rangle \geq 0 \), and if \( A \) is positive definite, then \( {a}_{ii} > 0 \) . Similarly, choosing \( d = t{e}_{i} + {e}_{j} \) gives \( q\left( t\right) \mathrel{\text{:=}} {a}_{ii}{t}^{2} + 2{a}_{ij}t + {a}_{jj} \geq 0 \) for all \( t \in \mathbb{R} \) . Recall that the quadratic function \( q\left( t\right) \) is nonnegative (positive) if and only if its discriminant \( \Delta = 4\left( {{a}_{ij}^{2} - {a}_{ii}{a}_{jj}}\right) \) is nonpositive (negative). Thus, \( {a}_{ii}{a}_{jj} - {a}_{ij}^{2} \geq 0 \) if \( A \) is positive semidefinite, and \( {a}_{ii}{a}_{jj} - {a}_{ij}^{2} > 0 \) if \( A \) is positive definite. Theorem 2.12. (Second-order necessary condition for a local minimizer) Let \( f : U \rightarrow \mathbb{R} \) be twice Gâteaux differentiable on an open set \( U \subseteq {\mathbb{R}}^{n} \
1347_[陈亚浙&吴兰成] Second Order Elliptic Equations and Elliptic Systems
Definition 1.1
Definition 1.1. Let \( u \in C\left( \Omega \right) \), where \( \Omega \) is a bounded open domain in \( {\mathbb{R}}^{n} \) . For \( y \in \Omega \), we set (1.1) \[ \chi \left( y\right) = \left\{ {p \in {\mathbb{R}}^{n} \mid u\left( x\right) \leq u\left( y\right) + p \cdot \left( {x - y}\right) ,\forall x \in \Omega }\right\} . \] \( \chi \) defines a map from \( \Omega \) to a class consisting of subsets of \( {\mathbb{R}}^{n} \) . We say that \( \chi \) is a normal mapping defined by \( u \) . The normal mapping has a clear geometric meaning. The lower space of the graph \( z = u\left( x\right) \) is the set (in \( {\mathbb{R}}^{n + 1} \) ) \[ \left\{ {\left( {x, z}\right) \in {\mathbb{R}}^{n} \times \mathbb{R} \mid x \in \Omega , - \infty < z < u\left( x\right) }\right\} . \] If \( p \in \chi \left( y\right) \), then the hyperplane \( z = u\left( y\right) + p \cdot \left( {x - y}\right) \) is a supporting plane for the lower space of the graph \( z = u\left( x\right) \) at \( \left( {y, u\left( y\right) }\right) \) . Thus \( \chi \left( y\right) \) is the set of all \( p \) ’s corresponding to supporting planes for the lower space of the graph \( z = u\left( x\right) \) at \( \left( {y, u\left( y\right) }\right) \) such that \( \left( {-p,1}\right) \) is a normal vector of the supporting plane. Definition 1.2. Let \( u \in C\left( \Omega \right) \) . The set \[ {\Gamma }_{u} = \{ y \in \Omega \mid \chi \left( y\right) \neq \varnothing \} \] (1.2) \[ = \left\{ {y \in \Omega \mid \exists p \in {\mathbb{R}}^{n}\text{ such that }u\left( x\right) \leq u\left( y\right) + p \cdot \left( {x - y}\right) ,\forall x \in \Omega }\right\} \] is said to be the contact set of \( u \) . Next, we consider the convex hull of the lower space of the graph \( z = u\left( x\right) \) ; this convex hull is the lower space of some graph \( z = \widehat{u}\left( x\right) \) . It is clear that \( \widehat{u}\left( x\right) \geq u\left( x\right) \) and that \( \widehat{u} \) is the smallest concave function lying above \( u \) . \( {\Gamma }_{u} \) consists of the projections to the plane \( z = 0 \) of those points where the hypersurfaces \( z = \widehat{u}\left( x\right) \) and \( z = u\left( x\right) \) meet. This is where the name contact set comes from. If \( u \in {C}^{1}\left( \Omega \right), y \in {\Gamma }_{u} \), then \( \chi \left( y\right) = \{ {Du}\left( y\right) \} \) ; if furthermore \( u \in {C}^{2}\left( \Omega \right) \) and \( \chi \left( y\right) \) is nonempty, then \( - {D}^{2}u\left( y\right) \geq 0 \) (i.e., the Hessian matrix is negative semidefinite). In fact, for \( p \in \chi \left( y\right) \) the function associated with the normal mapping, (1.3) \[ w\left( x\right) = u\left( y\right) + p \cdot \left( {x - y}\right) - u\left( x\right) ,\;\forall x \in \Omega , \] attains its minimum at \( y \), and therefore \( {Dw}\left( y\right) = 0 \) and \( {D}^{2}w\left( y\right) \geq 0 \) . This implies the above result. More generally, we have Lemma 1.1. Let \( u \in {W}_{loc}^{2,1}\left( \Omega \right) \cap C\left( \Omega \right) \) . Then (1.4) \[ \chi \left( y\right) = \{ {Du}\left( y\right) \} ,\; - {D}^{2}u\left( y\right) \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u}. \] Proof. Let \( w\left( x\right) \) be defined by (1.3).
For each fixed direction \( \xi \in {\mathbb{R}}^{n},\left| \xi \right| = 1 \), we have (1.5) \[ \frac{w\left( {y + {h\xi }}\right) - w\left( y\right) }{h} \rightarrow \frac{\partial w}{\partial \xi } \] (1.6) \[ \frac{w\left( {y + {h\xi }}\right) + w\left( {y - {h\xi }}\right) - {2w}\left( y\right) }{{h}^{2}} \rightarrow \frac{{\partial }^{2}w}{\partial {\xi }^{2}} \] where the convergence is in the space \( {L}_{loc}^{1}\left( \Omega \right) \) . If we take subsequences, then the above limits are valid for almost every \( y \in \Omega \) . If \( y \in {\Gamma }_{u} \), then \( w\left( x\right) \) takes its minimum at \( y \) . Letting \( h \rightarrow {0}^{ + } \) and \( h \rightarrow {0}^{ - } \) in (1.5), we deduce that \[ \frac{\partial w}{\partial \xi } = 0,\;\text{ a.e. }y \in {\Gamma }_{u} \] If we take \( \xi \) to be the direction of coordinate axes, then \[ \chi \left( y\right) = \{ {Du}\left( y\right) \} ,\;\text{ a.e. }y \in {\Gamma }_{u}. \] Similarly, by (1.6), \[ - \frac{{\partial }^{2}u}{\partial {\xi }^{2}} = \frac{{\partial }^{2}w}{\partial {\xi }^{2}} \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u}. \] It follows that for \( \xi \) in a dense countable subset of the unit sphere, \[ - \frac{{\partial }^{2}u}{\partial {\xi }^{2}} \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u} \] From this inequality we deduce that, for any \( \xi ,\left| \xi \right| = 1 \), \[ - \frac{{\partial }^{2}u}{\partial {\xi }^{2}} \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u} \] This implies that \( - {D}^{2}u\left( y\right) \geq 0 \) for almost every \( y \in {\Gamma }_{u} \) . The proof is complete. Definition 1.3. The set \[ \chi \left( \Omega \right) = \chi \left( {\Gamma }_{u}\right) = \mathop{\bigcup }\limits_{{y \in \Omega }}\chi \left( y\right) \] is said to be the image set of the normal mapping determined by \( u \) . Example. Let \( \Omega = {B}_{d}\left( {x}_{0}\right) \) . Consider the function (1.7) \[ u\left( x\right) = \frac{\lambda }{d}\left( {d - \left| {x - {x}_{0}}\right| }\right) \] Its graph is a cone surface with vertex at \( \left( {{x}_{0},\lambda }\right) \), base \( {B}_{d}\left( {x}_{0}\right) \), and height \( \lambda \) . Clearly, \( {\Gamma }_{u} = \Omega \) and \[ \chi \left( y\right) = \left\{ \begin{array}{l} {B}_{\lambda /d}\left( 0\right) \;\text{ if }y = {x}_{0}, \\ - \frac{\lambda }{d}\frac{y - {x}_{0}}{\left| y - {x}_{0}\right| }\;\text{ if }y \neq {x}_{0}. \end{array}\right. \] The image set of the normal mapping is given by (1.8) \[ \chi \left( \Omega \right) = {B}_{\lambda /d}\left( 0\right) \] Definition 1.4. Let \( \Omega \subset {\mathbb{R}}^{n},{x}_{0} \in \Omega \) . Let \( w \) be the function such that its graph is a cone surface with vertex at \( \left( {{x}_{0},\lambda }\right) \) and base \( \Omega \) (see Fig. 1). We denote its image set of the normal mapping by (1.9) \[ \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack = {\chi }_{w}\left( \Omega \right) \] [Fig. 1: the cone surface with vertex \( \left( {{x}_{0},\lambda }\right) \) and base \( \Omega \)] Lemma 1.2. Let \( u \in C\left( \Omega \right) \) . Then (1) for any \( y \in {\Gamma }_{u} \), (1.10) \[ \left| p\right| \leq \frac{2\sup \left| u\right| }{\operatorname{dist}\{ y,\partial \Omega \} },\;\forall p \in \chi \left( y\right) \] \( \left( 2\right) \) the normal mapping maps any compact subset of \( \Omega \) to a closed set in \( {\mathbb{R}}^{n} \) . Proof.
For \( y \in {\Gamma }_{u} \) , (1.11) \[ u\left( y\right) + p \cdot \left( {x - y}\right) \geq u\left( x\right) ,\;\forall x \in \Omega . \] The ray starting at \( y \) with direction \( - p \) intersects \( \partial \Omega \) at \( {x}_{0} \), i.e., \( \left( {1.12}\right) \) \[ {x}_{0} = y - \frac{1}{\left| p\right| }\left| {{x}_{0} - y}\right| p \] Using compact subsets of \( \Omega \) to approximate \( \Omega \) if necessary, we may assume without loss of generality that \( u \) is continuous on \( \bar{\Omega } \) . Choosing \( x \) to be \( {x}_{0} \) in (1.11), we obtain \[ u\left( y\right) - \left| {{x}_{0} - y}\right| \left| p\right| \geq u\left( {x}_{0}\right) \] and therefore \[ \left| p\right| \leq \frac{2\sup \left| u\right| }{\left| {x}_{0} - y\right| } \leq \frac{2\sup \left| u\right| }{\operatorname{dist}\{ y,\partial \Omega \} }. \] Now we prove (2). Let \( F \) be a compact subset of \( \Omega \) . Suppose that \( \left\{ {p}_{n}\right\} \subset \chi \left( F\right) \) and \( {p}_{n} \rightarrow {p}_{0}\left( {n \rightarrow \infty }\right) \) . We want to show that \( {p}_{0} \in \chi \left( F\right) \) . Since \( {p}_{n} \in \chi \left( F\right) \), there exists \( {y}_{n} \in F \) such that \( {p}_{n} \in \chi \left( {y}_{n}\right) \) . From the definition of a normal mapping, \[ u\left( {y}_{n}\right) + {p}_{n} \cdot \left( {x - {y}_{n}}\right) \geq u\left( x\right) ,\;\forall x \in \Omega . \] Since \( F \) is compact, a subsequence \( \left\{ {y}_{{n}_{k}}\right\} \) converges to some \( {y}_{0} \in F \) as \( k \rightarrow \infty \) . Letting \( n = {n}_{k} \rightarrow \infty \) in the above inequality, we easily see that \( {p}_{0} \in \chi \left( {y}_{0}\right) \) . Lemma 1.3. Suppose that \( \Omega, A \) are open domains in \( {\mathbb{R}}^{n} \) . (1) If \( \Omega \subset A \), then for \( {x}_{0} \in \Omega \) , \[ \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack \supset A\left\lbrack {{x}_{0},\lambda }\right\rbrack \] (2) If the diameter of \( \Omega \) is \( d \), then (1.13) \[ \left| {\Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| \geq {\left( \frac{\lambda }{d}\right) }^{n}{\omega }_{n} \] where \( \left| \cdot \right| \) denotes the measure of the set and \( {\omega }_{n} \) is the volume of an \( n \) -dimensional unit ball. Proof. (1) is obvious. We prove (2). Clearly, \( {B}_{d}\left( {x}_{0}\right) \supset \Omega \) . Let \( A = {B}_{d}\left( {x}_{0}\right) \) . By (1) and (1.8), \[ \left| {\Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| \geq \left| {A\left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| = \left| {{B}_{\lambda /d}\left( 0\right) }\right| = {\omega }_{n}{\left( \frac{\lambda }{d}\right) }^{n}, \] where we use Lemma 1.2 (2) to deduce that \( \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack \) and \( A\left\lbrack {{x}_{0},\lambda }\right\rbrack \) are measurable sets. Lemma 1.4. Suppose that \( u \in {C}^{2}\left( \Omega \right), g \in C\left( \bar{\Omega }\right), g \geq 0 \), and \( E \) is a measurable subset of \( {\Gamma }_{u} \) . Then (1.14) \[ {\int }_{{Du}\left( E\right) }g\left( {\xi \left( p\right) }\right) {dp} \leq {\int }_{E}g\left( x\right) \det \left( {-{D}^{2}u}\right) {dx} \] where \( \xi \left( p\right) = {\left( Du\righ
18_Algebra Chapter 0
Definition 6.8
Definition 6.8. Let \( F \) be a free \( R \) -module, and let \( \alpha \in {\operatorname{End}}_{R}\left( F\right) \) . Denote by \( I \) the identity map \( F \rightarrow F \) . The characteristic polynomial of \( \alpha \) is the polynomial \[ {P}_{\alpha }\left( t\right) \mathrel{\text{:=}} \det \left( {{tI} - \alpha }\right) \in R\left\lbrack t\right\rbrack \] Proposition 6.9. Let \( F \) be a free \( R \) -module of rank \( n \), and let \( \alpha \in {\operatorname{End}}_{R}\left( F\right) \) . - The characteristic polynomial \( {P}_{\alpha }\left( t\right) \) is a monic polynomial of degree \( n \) . - The coefficient of \( {t}^{n - 1} \) in \( {P}_{\alpha }\left( t\right) \) equals \( - \operatorname{tr}\left( \alpha \right) \) . - The constant term of \( {P}_{\alpha }\left( t\right) \) equals \( {\left( -1\right) }^{n}\det \left( \alpha \right) \) . - If \( \alpha \) and \( \beta \) are similar, then \( {P}_{\alpha }\left( t\right) = {P}_{\beta }\left( t\right) \) . Proof. The first point is immediate, and the third is checked by setting \( t = 0 \) . To verify the second assertion, let \( A = \left( {a}_{ij}\right) \) be a matrix representing \( \alpha \) with respect to any basis for \( F \), so that \[ {P}_{\alpha }\left( t\right) = \det \left( \begin{matrix} t - {a}_{11} & - {a}_{12} & \ldots & - {a}_{1n} \\ - {a}_{21} & t - {a}_{22} & \ldots & - {a}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ - {a}_{n1} & - {a}_{n2} & \ldots & t - {a}_{nn} \end{matrix}\right) . \] Expanding the determinant according to Definition 3.1 (or in any other way), we see that the only contributions to the coefficient of \( {t}^{n - 1} \) come from the diagonal entries, in the form \[ \mathop{\sum }\limits_{{i = 1}}^{n}t\cdots t \cdot \left( {-{a}_{ii}}\right) \cdot t\cdots t \] and the statement follows. Finally, assume that \( \alpha \) and \( \beta \) are similar. Then there exists an invertible \( \pi \) such that \( \beta = \pi \circ \alpha \circ {\pi }^{-1} \), and hence \[ {tI} - \beta = \pi \circ \left( {{tI} - \alpha }\right) \circ {\pi }^{-1} \] are similar (as endomorphisms of \( R{\left\lbrack t\right\rbrack }^{n} \) ). By Exercise 6.3 these two transformations must have the same determinant, and this proves the fourth point. By Proposition 6.9, all coefficients in the characteristic polynomial \[ {t}^{n} - \operatorname{tr}\left( \alpha \right) {t}^{n - 1} + \cdots + {\left( -1\right) }^{n}\det \left( \alpha \right) \] are invariant under similarity; as far as I know, trace and determinant are the only ones that have special names. Determinants, traces, and more generally the characteristic polynomial can show very quickly that two linear transformations are not similar; but do they tell us unfailingly when two transformations are similar? In general, the answer is no, even over fields. For example, the two matrices \[ I = \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) ,\;A = \left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) \] both have characteristic polynomial \( {\left( t - 1\right) }^{2} \), but they are not similar. Indeed, \( I \) is the identity and clearly only the identity is similar to the identity: \( {PI}{P}^{-1} = I \) for all invertible matrices \( P \) . Therefore, the equivalence relation defined by prescribing that two transformations have the same characteristic polynomial is coarser than similarity. This hints that there must be other interesting quantities associated with a linear transformation and invariant under similarity. 
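This example is easy to verify by machine. The following sympy sketch is not part of the text; it only re-checks the computations just made (and, in the last line, anticipates the observation about annihilating polynomials made below).

```python
from sympy import Matrix, eye, symbols

t = symbols('t')
I2 = eye(2)
A = Matrix([[1, 1], [0, 1]])

# Both matrices have characteristic polynomial (t - 1)^2 ...
print(I2.charpoly(t).as_expr().factor())   # (t - 1)**2
print(A.charpoly(t).as_expr().factor())    # (t - 1)**2

# ... so they share the coefficient data of Proposition 6.9:
print(A.trace(), A.det())                  # 2 1, matching t^2 - 2t + 1

# Yet they are not similar: P I P^{-1} = I for every invertible P, so only
# the identity is similar to the identity.  The common characteristic
# polynomial nevertheless annihilates both matrices:
print((A - I2)**2)                         # Matrix([[0, 0], [0, 0]])
```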
We are going to understand this much more thoroughly in a short while (at least over fields), but we can already gain some insight by contemplating another kind of 'polynomial' information, which also turns out to be invariant under similarity. As \( {\operatorname{End}}_{R}\left( F\right) \) is an \( R \) -algebra, we can evaluate every polynomial \[ f\left( t\right) = {r}_{m}{t}^{m} + {r}_{m - 1}{t}^{m - 1} + \cdots + {r}_{0} \in R\left\lbrack t\right\rbrack \] at any \( \alpha \in {\operatorname{End}}_{R}\left( F\right) \) : \[ f\left( \alpha \right) = {r}_{m}{\alpha }^{m} + {r}_{m - 1}{\alpha }^{m - 1} + \cdots + {r}_{0} \in {\operatorname{End}}_{R}\left( F\right) . \] In other words, we can perform these operations in the ring \( {\operatorname{End}}_{R}\left( F\right) \) ; multiplication by \( r \in R \) amounts to composition with \( {rI} \in {\operatorname{End}}_{R}\left( F\right) \), and \( {\alpha }^{k} \) stands for the \( k \) -fold composition \( \alpha \circ \cdots \circ \alpha \) of \( \alpha \) with itself. The set of polynomials \( f\left( t\right) \) such that \( f\left( \alpha \right) = 0 \) is an ideal of \( R\left\lbrack t\right\rbrack \) (Exercise 6.7), which I will denote \( {\mathcal{I}}_{\alpha } \) and call the annihilator ideal of \( \alpha \) . Lemma 6.10. If \( \alpha \) and \( \beta \) are similar, then \( {\mathcal{I}}_{\alpha } = {\mathcal{I}}_{\beta } \) . Proof. By hypothesis there exists an invertible \( \pi \) such that \( \beta = \pi \circ \alpha \circ {\pi }^{-1} \) . As \[ {\beta }^{k} = {\left( \pi \circ \alpha \circ {\pi }^{-1}\right) }^{k} = \left( {\pi \circ \alpha \circ {\pi }^{-1}}\right) \circ \left( {\pi \circ \alpha \circ {\pi }^{-1}}\right) \circ \cdots \circ \left( {\pi \circ \alpha \circ {\pi }^{-1}}\right) = \pi \circ {\alpha }^{k} \circ {\pi }^{-1}, \] we see that for all \( f\left( t\right) \in R\left\lbrack t\right\rbrack \) we have \[ f\left( \beta \right) = \pi \circ f\left( \alpha \right) \circ {\pi }^{-1}. \] It follows immediately that \( f\left( \alpha \right) = 0 \Leftrightarrow f\left( \beta \right) = 0 \), which is the statement. Going back to the simple example shown above, the polynomial \( t - 1 \) is in the annihilator ideal of the identity, while it is not in the annihilator ideal of the matrix \[ A = \left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) \] An optimistic reader might now guess that two linear transformations are similar if and only if both their characteristic polynomials and annihilator ideals coincide. This is unfortunately not the case in general, but I promise that the situation will be considerably clarified in short order (cf. Exercise 7.3). In any case, even the simple example given above allows me to point out a remarkable fact. Note that the (common) characteristic polynomial \( {\left( t - 1\right) }^{2} \) of \( I \) and \( A \) annihilates both: \[ {\left( A - I\right) }^{2} = {\left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right) }^{2} = \left( \begin{array}{ll} 0 & 0 \\ 0 & 0 \end{array}\right) \] This is not a coincidence: \( {P}_{\alpha }\left( t\right) \in {\mathcal{I}}_{\alpha } \) for all linear transformations \( \alpha \) . That is, Theorem 6.11 (Cayley-Hamilton). Let \( {P}_{\alpha }\left( t\right) \) be the characteristic polynomial of the linear transformation \( \alpha \in {\operatorname{End}}_{R}\left( F\right) \) . Then \[ {P}_{\alpha }\left( \alpha \right) = 0 \] This beautiful observation can be proved directly by judicious use of Cramer's rule, in the form of Corollary 3.5, cf.
Exercise 6.9. In any case, the Cayley-Hamilton theorem will become essentially evident once we connect these linear algebra considerations with the classification theorem for finitely generated modules over a PID; the adventurous reader can already look back at Remark 5.8 and figure out why the Cayley-Hamilton theorem is obvious. If \( R \) is an arbitrary integral domain, we cannot expect too much of \( R\left\lbrack t\right\rbrack \), and it seems hard to say something a priori concerning \( {\mathcal{I}}_{\alpha } \) . However, consider the field of fractions \( K \) of \( R \) ( \( 1/{4.2} \) ); viewing \( \alpha \) as an element of \( {\operatorname{End}}_{K}\left( {K}^{n}\right) \) (that is, viewing the entries of a matrix representation of \( \alpha \) as elements of \( K \), rather than \( R \) ), \( \alpha \) will have an annihilator ideal \( {\mathcal{I}}_{\alpha }^{\left( K\right) } \) ’over \( K \) ’, and it is clear that \[ {\mathcal{I}}_{\alpha } = {\mathcal{I}}_{\alpha }^{\left( K\right) } \cap R\left\lbrack t\right\rbrack \] The advantage of considering \( {\mathcal{I}}_{\alpha }^{\left( K\right) } \subseteq K\left\lbrack t\right\rbrack \) is that \( K\left\lbrack t\right\rbrack \) is a PID, and it follows that \( {\mathcal{I}}_{\alpha }^{\left( K\right) } \) has a (unique) monic generator. Definition 6.12. Let \( F \) be a free \( R \) -module, and let \( \alpha \in {\operatorname{End}}_{R}\left( F\right) \) . Let \( K \) be the field of fractions of \( R \) . The minimal polynomial of \( \alpha \) is the monic generator \( {m}_{\alpha }\left( t\right) \in K\left\lbrack t\right\rbrack \) of \( {\mathcal{I}}_{\alpha }^{\left( K\right) } \) . With this terminology, the Cayley-Hamilton theorem amounts to the assertion that the minimal polynomial divides the characteristic polynomial: \( {m}_{\alpha }\left( t\right) \mid {P}_{\alpha }\left( t\right) \) . Of course the situation is simplified, at least from an expository point of view, if \( R \) is itself a field: then \( K = R,{m}_{\alpha }\left( t\right) \in R\left\lbrack t\right\rbrack \), and \( {\mathcal{I}}_{\alpha } = \left( {{m}_{\alpha }\left( t\right) }\right) \) . This is one reason why we will eventually assume that \( R \) is a field. ## 6.3. Eigenvalues, eigenvectors, eigenspaces. Definition 6.13. Let \( F \) be a free \( R \) -module, and let \( \alpha \in {\operatorname{End}}_{R}\left( F\right) \) be a linear transformation of \( F \) . A scalar \( \lambda \in R \) is an eigenvalue for \( \alpha \) if there exists \( \mathbf{v} \in F \) , \( \mathbf{v} \neq 0 \), such that \[ \alpha \left( \mathbf{v}\right) = \lambda \mathbf{v}. \] For example,0 is an eigenvalue for \( \alpha \) precisely when \( \alpha \) has a nontrivial kernel. The notion of eigenvalue is one of the most important in linear algebra, if not in algebra, if
1112_(GTM267)Quantum Theory for Mathematicians
Definition 3.13
Definition 3.13 If \( A \) is a self-adjoint operator on a Hilbert space \( \mathbf{H} \) and \( \psi \) is a unit vector in \( \mathbf{H} \), let \( {\Delta }_{\psi }A \) denote the standard deviation associated with measurements of \( A \) in the state \( \psi \), which is computed as \[ {\left( {\Delta }_{\psi }A\right) }^{2} = {\left\langle {\left( A - \langle A{\rangle }_{\psi }I\right) }^{2}\right\rangle }_{\psi } \] \[ = {\left\langle {A}^{2}\right\rangle }_{\psi } - {\left( \langle A{\rangle }_{\psi }\right) }^{2} \] We refer to \( {\Delta }_{\psi }A \) as the uncertainty of \( A \) in the state \( \psi \) . For any single observable \( A \), it is possible to choose \( \psi \) so that \( {\Delta }_{\psi }A \) is as small as we like. In Chap. 12, however, we will see that when two observables \( A \) and \( B \) do not commute, then \( {\Delta }_{\psi }A \) and \( {\Delta }_{\psi }B \) cannot both be made arbitrarily small for the same \( \psi \) . In particular, we will derive there the famous Heisenberg uncertainty principle, which states that \[ \left( {{\Delta }_{\psi }X}\right) \left( {{\Delta }_{\psi }P}\right) \geq \frac{\hslash }{2} \] for all \( \psi \) for which \( {\Delta }_{\psi }X \) and \( {\Delta }_{\psi }P \) are defined. ## 3.7 Time-Evolution in Quantum Theory ## 3.7.1 The Schrödinger Equation Up to now, we have been considering the wave function \( \psi \) at a fixed time. We now consider the way in which the wave function evolves in time. Recall that in the Hamiltonian formulation of classical mechanics (Sect. 2.5), the time-evolution of the system is governed by the Hamiltonian (energy) function \( H \), through Hamilton’s equations. According to Axiom 2, there is a corresponding self-adjoint linear operator \( \widehat{H} \) on the quantum Hilbert space \( \mathbf{H} \), which we call the Hamiltonian operator for the system. See Sect. 3.7.4 for an example. Recall that we motivated the definition of the momentum operator by the de Broglie hypothesis, \( p = \hslash k \), where \( k \) is the spatial frequency of the wave function. We can similarly motivate the time-evolution in quantum mechanics by a similar relation between the energy and the temporal frequency of our wave function: \[ E = \hslash \omega \] \( \left( {3.25}\right) \) This relationship between energy and temporal frequency is nothing but the relationship proposed by Planck in his model of blackbody radiation (Sect. 1.1.3). Suppose that a wave function \( {\psi }_{0} \) has definite energy \( E \), meaning that \( {\psi }_{0} \) is an eigenvector for \( \widehat{H} \) with eigenvalue \( E \) . Then (3.25) means that the time-dependence of the wave function should be purely at frequency \( \omega = E/\hslash \) . That is to say, if the state of the system at time \( t = 0 \) is \( {\psi }_{0} \), then the state of the system at any other time \( t \) should be \[ \psi \left( t\right) = {e}^{-{i\omega t}}{\psi }_{0} = {e}^{-{iEt}/\hslash }{\psi }_{0} \] (3.26) We can rewrite (3.26) as a differential equation: \[ \frac{d\psi }{dt} = - \frac{iE}{\hslash }\psi = \frac{E}{i\hslash }\psi \] (3.27) Note that we are taking "temporal frequency \( \omega \) " to mean that the time-dependence is of the form \( {e}^{-{i\omega t}} \), whereas we took "spatial frequency \( k \) " to mean that the space-dependence is of the form \( {e}^{ikx} \), with no minus sign in the exponent. This curious convention is convenient when we look at pure exponential solutions to the free Schrödinger equation (Chap. 
4) of the form \( \exp \left\lbrack {i\left( {{kx} - {\omega t}}\right) }\right\rbrack \), which describes a solution moving to the right with speed \( \omega /k \) . Equation (3.27) tells us the time-evolution for a particle that is initially in a state of definite energy, that is, an eigenvector for the Hamiltonian operator. A natural way to generalize this equation is to recognize that \( {E\psi } \) is nothing but \( \widehat{H}\psi \), since \( \psi \) is just a multiple of \( {\psi }_{0} \), which is an eigenvector for \( \widehat{H} \) with eigenvalue \( E \) . Replacing \( E \) by \( \widehat{H} \) in (3.27) leads to the following general prescription for the time-evolution of a quantum system. Axiom 5 The time-evolution of the wave function \( \psi \) in a quantum system is given by the Schrödinger equation, \[ \frac{d\psi }{dt} = \frac{1}{i\hslash }\widehat{H}\psi \] (3.28) Here \( \widehat{H} \) is the operator corresponding to the classical Hamiltonian \( H \) by means of Axiom 2. Although both Hamilton's equations and the Schrödinger equation involve a Hamiltonian, the two equations otherwise do not seem parallel. Of course, since quantum mechanics is not classical mechanics, we should not expect the two theories to have the same time-evolution. Nevertheless, we might hope to see some similarities between the time-evolution of a classical system and that of the corresponding quantum system. Such a similarity can be seen when we consider how the expectation values of observables evolve in quantum mechanics. Proposition 3.14 Suppose \( \psi \left( t\right) \) is a solution of the Schrödinger equation and \( A \) is a self-adjoint operator on \( \mathbf{H} \) . Assuming certain natural domain conditions hold, we have \[ \frac{d}{dt}\langle A{\rangle }_{\psi \left( t\right) } = {\left\langle \frac{1}{i\hslash }\left\lbrack A,\widehat{H}\right\rbrack \right\rangle }_{\psi \left( t\right) }, \] (3.29) where \( \langle A{\rangle }_{\psi } \) is as in Notation 3.10 and where \( \left\lbrack {\cdot , \cdot }\right\rbrack \) denotes the commutator, defined as ' \[ \left\lbrack {A, B}\right\rbrack = {AB} - {BA}\text{.} \] Equation (3.29) should be compared to the way a function \( f \) on the classical phase space evolves in time along a solution of Hamilton's equations: \( {df}/{dt} = \{ f, H\} \) . We see, then, that the commutator of operators (divided by \( i\hslash \) ) plays a role in quantum mechanics similar to the role of the Poisson bracket in classical mechanics. Proof. Let \( \psi \left( t\right) \) be a solution to the Schrödinger equation and let us compute at first without worrying about domains of the operators involved. If we use the product rule (Exercise 1) for differentiation of the inner product, we obtain \[ \frac{d}{dt}\langle \psi \left( t\right) ,{A\psi }\left( t\right) \rangle = \left\langle {\frac{d\psi }{dt},{A\psi }}\right\rangle + \left\langle {\psi, A\frac{d\psi }{dt}}\right\rangle \] \[ = \frac{i}{\hslash }\langle \widehat{H}\psi ,{A\psi }\rangle - \frac{i}{\hslash }\langle \psi, A\widehat{H}\psi \rangle \] \[ = \frac{1}{i\hslash }\langle \psi ,\left\lbrack {A,\widehat{H}}\right\rbrack \psi \rangle \] where in the last step we have used the self-adjointness of \( \widehat{H} \) to move it to the other side of the inner product. Recall that we are following the convention of putting the complex conjugate on the first factor in the inner product, which accounts for the plus sign in the first term on the second line. 
Rewriting this using Notation 3.10 gives the desired result. If \( A \) and \( \widehat{H} \) are (as usual) unbounded operators, then the preceding calculation is not completely rigorous. Since, however, we are deferring a detailed examination of issues of unbounded operators until Chap. 9, let us simply state the conditions needed for the calculation to be valid. For every \( t \in \mathbb{R} \), we need to have \( \psi \left( t\right) \in \operatorname{Dom}\left( A\right) \cap \operatorname{Dom}\left( \widehat{H}\right) \), we need \( {A\psi }\left( t\right) \in \) \( \operatorname{Dom}\left( \widehat{H}\right) \), and we need \( \widehat{H}\psi \left( t\right) \in \operatorname{Dom}\left( A\right) \) . (These conditions are needed for \( \left\lbrack {A,\widehat{H}}\right\rbrack \psi \left( t\right) \) to be defined.) In addition, we need \( {A\psi }\left( t\right) \) to be a continuous path in \( \mathbf{H} \) . ∎ Note that to see interesting behavior in the time-evolution of a quantum system, there has to be noncommutativity present. If all the physically interesting operators \( A \) commuted with the Hamiltonian operator \( \widehat{H} \), then \( \left\lbrack {\widehat{H}, A}\right\rbrack \) would be zero and the expectation values of these operators would be constant in time. Noncommutativity of the basic operators is therefore an essential property of quantum mechanics. In the case of a particle in \( {\mathbb{R}}^{1} \), noncommutativity is built into the commutation relation for \( X \) and \( P \) , given in Proposition 3.8. Although it is not reasonable to have all physically interesting operators commute with \( \widehat{H} \), there may be some operators with this property. If \( \left\lbrack {A,\widehat{H}}\right\rbrack = 0 \), then the expectation value of \( A \) (and, indeed, all the moments of \( A \) ) is independent of time along any solution of the Schrödinger equation. We may therefore call such an operator \( A \) a conserved quantity (or constant of motion). Just as in the classical setting, conserved quantities (when we can find them) are helpful in understanding how to solve the Schrödinger equation. Proposition 3.14 suggests that the map \[ \left( {A, B}\right) \mapsto \frac{1}{i\hslash }\left\lbrack {A, B}\right\rbrack \] where \( A \) and \( B \) are self-adjoint operators, plays a role similar to that of the Poisson bracket in classical mechanics. This analogy is supported by the following list of elementary properties of the commutator, which should be compared to the properties of the Poisson bracket listed in Proposition 2.23. Proposition 3.15 For any vector space \( V \) over \( \mathbb{C} \) and linear operators \( A \) , \( B \), and \( C \) on \( V \), the following relations hold. \[ \text{1.}\left\lbrack {A, B + {\alpha C}}\right\rbrack = \left\lbrack {A, B}\right\rbrack + \alpha \left\lbrack {A, C}\
1139_(GTM44)Elementary Algebraic Geometry
Definition 6.9
Definition 6.9. The number given by Theorem 6.8 is called the order of \( V \) at \( P \) , or the multiplicity of \( V \) at \( P \) ; we denote it by \( m\left( {V;P}\right) \) . Our proofs of Theorems 6.6 and 6.8 will run along the same lines as that of Theorem 6.2, and for this reason we need corresponding local forms of Theorem 11.1.1 of Chapter III. The analogue of Theorem 11.1.1 of Chapter III we use in proving Theorem 6.6 is contained in the following Theorem 6.10. One may, in Theorem 2.13, replace the concluding phrase "above each point \( a \) in \( {\Delta }^{d} \) there is a point of \( V \) in \( a \times {\Delta }^{n - d} \) " by the phrase "above almost each point \( a \) in \( {\Delta }^{d} \) there is a common, fixed (positive) number of points of \( V \) in \( a \times {\Delta }^{n - d} \) ." The proof is the same as the proof of Theorem 2.13. Proof of Theorem 6.6 (6.6.1) The proof is essentially the same as the proof of Theorem 6.2; one need only replace the reference to Theorem 11.1.1 of Chapter III by a reference to Theorem 6.10, applied at a point of \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \times {\mathbb{C}}^{n + 1} \) corresponding to the point \( P \) of \( V \cap L \) . (6.6.2) We may assume that \( V \) is affine. Therefore let \( P = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \in {\mathbb{C}}^{n} \) ; then with respect to any of \( L \) ’s parametrizations \( {X}_{i} - {a}_{i} = {c}_{i}T\left( {i = 1,\ldots, n}\right) \) , \( p\left( {{c}_{1}T,\ldots ,{c}_{n}T}\right) \) has a zero of multiplicity \( m \) at \( T = 0 \), where \( m \) is the order with respect to \( L \) of \( p \) at \( \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) . But since \( p\left( {{X}_{1},\ldots ,{X}_{n}}\right) \) is a product of distinct irreducibles, its zeros are all distinct on almost every line in \( {\mathbb{C}}^{n} \) (Theorem 6.2), hence also on almost every line near \( L \) . Applying Lemma 10.4 of Chapter II together with Theorem 6.10 to the variety corresponding to (15) shows that for every \( {L}^{\prime } \) sufficiently near \( L \), there are exactly \( m \) distinct points of \( V \cap {L}^{\prime } \) arbitrarily near \( P \) . Let us now see what is involved in proving Theorem 6.8. In Theorem 6.6 we showed that the number of points near \( P \) of \( {V}^{r} \cap {L}^{\prime } \) is the same for all \( {L}^{\prime } \) near a fixed \( L = {L}^{n - r} \) properly intersecting \( {V}^{r} \) (that is, the same for all points near that point of \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \) corresponding to \( L \) ). We want to generalize from one space \( L \) through \( P \), to all transforms of \( {L}^{n - r} \) passing through \( P \) (that is, from one point of \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \) to a whole subspace of \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \) ). For this, we shall appropriately generalize Theorem 6.10 so " \( \left( 0\right) \in V \) " can be replaced by "irreducible subvariety of \( V \) ." 
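Before doing so, it may help to see the order along a line in a concrete instance. The following sympy sketch is not part of the text, and the nodal cubic chosen here is purely illustrative: restricting \( p \) to a parametrized line through the origin and reading off the order of vanishing in the parameter \( T \) gives the multiplicity \( m\left( {V;P}\right) \) for almost every line, with a larger order only along the finitely many tangent directions.

```python
from sympy import symbols, Poly, expand

x, y, t, a, b = symbols('x y t a b')

p = y**2 - x**3 - x**2          # a nodal cubic; the node is at the origin

def order_along_line(p, c1, c2):
    """Order of vanishing at t = 0 of p restricted to the line x = c1*t, y = c2*t."""
    q = Poly(expand(p.subs({x: c1 * t, y: c2 * t})), t)
    return min(mono[0] for mono in q.monoms())   # smallest exponent with a nonzero coefficient

print(order_along_line(p, a, b))   # 2: a generic line meets the curve doubly at the origin
print(order_along_line(p, 1, 1))   # 3: along the tangent line y = x the order jumps
```

The common value 2 on generic lines is \( m\left( {V;\left( 0\right) }\right) \) for this curve, while the two tangent lines \( y = \pm x \) are the exceptional directions excluded by the "almost every line" statement.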
First, from Theorem 6.10, we know that if \( {V}^{s} \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}} \) is an irreducible variety of dimension \( s \) variety-theoretically projecting onto \( {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{s}} \), then at every point \( P \in {V}^{s} \), it is true that for each sufficiently small polydisk \( {\Delta }^{n - s}\left( P\right) \subset {\mathbb{C}}_{{X}_{s + 1},\ldots ,{X}_{n}} \) centered at \( P \), there is a polydisk \( {\Delta }^{s}\left( P\right) \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{s}} \) centered at \( P \) so that over almost each point \( Q \in {\Delta }^{s}\left( P\right) \), there is a common, fixed number \( n\left( P\right) \) of points of \( V \cap \left( {{\Delta }^{s}\left( P\right) \times {\Delta }^{n - s}\left( P\right) }\right) \) . We shall use the following result: Theorem 6.11. Let \( {V}^{s} \) and \( n\left( P\right) \) be as immediately above, and let \( W \) be any irreducible subvariety of \( {V}^{s} \) . The numbers \( n\left( P\right) \) assume the same value at almost all points \( P \in W \) . Theorem 6.11 is an immediate consequence of Theorem 6.12. Let \( {V}^{s} \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}} \) be an irreducible variety of dimension \( s \) variety-theoretically projecting onto \( {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{s}} \), and let \( W \) be an irreducible subvariety of \( {V}^{s} \) . Then for each integer \( k \geq 0 \), the set of points \( P \) of \( W \) such that \( n\left( P\right) \geq k \) forms a subvariety \( {W}_{k} \) of \( W \) . Proof. Suppose, first, that \( {V}^{s} \) is a hypersurface. Without loss of generality, let \( p \in \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) be irreducible; then from Theorem 6.6.2, the order with respect to \( {\mathbb{C}}_{{X}_{n}} \) of \( {V}^{s} = \mathbf{V}\left( p\right) \) at \( \left( {{a}_{1},\ldots ,{a}_{n}}\right) \in \mathbf{V}\left( p\right) \) is just the order in \( {X}_{n} \) of \( p \) at \( \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) . Now it is easily seen that the order in \( {X}_{n} \) of \( p \in \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) at \( \left( {{a}_{1},\ldots ,{a}_{n}}\right) \), is \( \geq k \) iff its first \( k - 1 \) partial derivatives with respect to \( {X}_{n} \) vanish there. This condition obviously defines a subvariety of \( \mathbf{V}\left( p\right) \), and therefore also of \( W \) . The proof for arbitrary irreducible \( {V}^{s} \) is very similar to the proof of Theorem 6.10 for arbitrary \( V \) ; we therefore leave it as an easy exercise (Exercise 6.2). Proof of Theorem 6.8 (6.8.1) The proof is basically the same as that of Theorem 6.2.1, except that we apply Theorem 6.11 (instead of Theorem 11.1.1 of Chapter III). For the variety \( {V}^{s} \) in Theorem 6.11, we take the variety \( {V}^{ \dagger } \cap \left( {{\mathbb{C}}^{{\left( n + 1\right) }^{2}} \times {V}^{\prime }}\right) \) appearing in (15). \( {V}^{s} \) (which lies in \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \times {\mathbb{C}}^{n} \times \{ 1\} \subset {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \times {\mathbb{C}}^{n + 1} \) ) is "algebraic" over \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \) (in the sense that the coordinate ring of \( {V}^{s} \) is algebraic over that of \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \) ).
Now consider the subspace \( S \) of \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \) parametrizing those \( P \) -containing transforms \( {L}^{\prime } \) of \( L \) ; the subvariety in \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \times {\mathbb{C}}^{n + 1} \) consisting of those points of the variety in (15) which lie above \( S \) is easily seen to have as an irreducible component the translate \( S \times \{ P\} \times \{ 1\} \) of \( S \) . (All our transforms of the given \( L \) contain the fixed point \( P \in {V}^{r} \) .) Let this translate be \( W \) in Theorem 6.11. For almost every transformation \( T \in S,\dim {\left( L\right) }^{T} = n - r \), and for each such \( T, i\left( {{V}^{r},{\left( L\right) }^{T};P}\right) \) equals \( n\left( Q\right) \) (as defined immediately before Theorem 6.11), where \( Q \in W \) corresponds to \( {\left( L\right) }^{T} \) . This completes the proof of (6.8.1). (6.8.2) We assume without loss of generality that \( V \) is affine and that \( P = \left( 0\right) \in {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}} \) . We may write \( p = {p}_{m} + {p}_{m + 1} + \ldots \), where \( {p}_{i} \) is 0 or homogeneous in \( {X}_{1},\ldots ,{X}_{n} \) of degree \( i \), and where \( {p}_{m} \neq 0 \) . The order of \( p \) at (0) is then \( m \), and under the substitution \( {X}_{i} = {c}_{i}T \) parametrizing a typical line \( L \) through \( \left( 0\right) \), \[ {p}_{m}\left( {{c}_{1}T,\ldots ,{c}_{n}T}\right) = {T}^{m}{p}_{m}\left( {{c}_{1},\ldots ,{c}_{n}}\right) , \] which is thus either zero or still homogeneous of degree \( m \) . It is zero only at points \( \left( {{c}_{1},\ldots ,{c}_{n}}\right) \in \mathbf{V}\left( {p}_{m}\right) \), and \( \mathbf{V}\left( {p}_{m}\right) \) is proper in \( {\mathbb{C}}^{n} \) ; when it is nonzero, and hence of degree exactly \( m \), we have \( i\left( {V, L;\left( 0\right) }\right) = m \) . Hence for almost all (0)-containing transforms \( {\left( L\right) }^{T} \) of some \( L \), \( i\left( {V,{\left( L\right) }^{T};\left( 0\right) }\right) \) is the order of \( p \) at (0). Thus (6.8.2) is proved, and therefore also Theorem 6.8. We can generalize the notion of order or multiplicity of a variety at a point, to order or multiplicity of a variety at, or along, an irreducible subvariety. Theorem 6.13. Let \( X \) be an irreducible subvariety of a pure-dimensional variety \( V \) in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) . For almost every point \( P \) on \( X, m\left( {V;P}\right) \) has a common, fixed value. Proof. The proof is essentially the same as the proof of Theorem 6.8.1; assume without loss of generality that \( X \) is affine, and in place of the translate \( S \times \{ P\} \times \{ 1\} \), use \( S \times X \times \{ 1\} \) . This gives an "almost all" statement on points of \( X \) instead of only at \( P \) . Definition 6.14. The number in Theorem 6.13 is called the multiplicity of \( V \) along \( X \), denoted by \( m\left( {V;X}\right) \) . More generally, if any \( V \) has multiplicity \( k \) at almost every point of a pure-dimensional subvariety \( W \), then \( k \), denoted by \( m\left( {V;W}\right) \), is the multiplicity of \( V \) along \( W \) . We now turn to the definitions of degree and of intersection multiplicity of properly-intersecting varieties. As before, we use linear transformations. Thus, let \( {V}_{1} \) and \( {V}_{2} \) in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) be properly-intersecting varieties of pure dimensions \( r \) and \( s \), respectively.
We assume \( r + s \geq n \) . (Therefore \( \dim \left( {{
1083_(GTM240)Number Theory II
Definition 16.3.1
Definition 16.3.1. We denote by \( X \) the annihilator of \( \left\lbrack {x - {\zeta }_{p}}\right\rbrack \) in \( \mathbb{Z}\left\lbrack G\right\rbrack \), in other words the set of \( \theta \in \mathbb{Z}\left\lbrack G\right\rbrack \) such that \( {\left( x - {\zeta }_{p}\right) }^{\theta } = {\alpha }^{q} \) for some \( \alpha \in {K}^{ * } \) . It is clear that \( X \) is an ideal of \( \mathbb{Z}\left\lbrack G\right\rbrack \) . Lemma 16.3.2. The map sending \( \theta \in X \) to \( \alpha \in {K}^{ * } \) such that \( {\left( x - {\zeta }_{p}\right) }^{\theta } = {\alpha }^{q} \) is a well-defined injective group homomorphism. Proof. The map is well defined since \( K = \mathbb{Q}\left( {\zeta }_{p}\right) \) does not contain any other \( q \) th root of unity than 1 . It is clear that it is a group homomorphism from the additive group \( X \) to the multiplicative group \( {K}^{ * } \) . Let us show that it is injective: let \( \theta \in X \) be such that \( {\left( x - {\zeta }_{p}\right) }^{\theta } = 1 \) . For any \( \sigma \in G \) we thus have \( {\left( x - \sigma \left( {\zeta }_{p}\right) \right) }^{\theta } = \sigma \left( 1\right) = 1 \), hence \( \mathcal{N}{\left( x - {\zeta }_{p}\right) }^{\theta } = 1 \) . If \( \theta = \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }\sigma \), since \( \mathcal{N}\left( {x - {\zeta }_{p}}\right) \in \mathbb{Z} \) it follows that \( \mathcal{N}{\left( x - {\zeta }_{p}\right) }^{s} = 1 \), where \( s = \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma } \) . Now recall that by Lemma 16.1.1 we have \( \left( {x - {\zeta }_{p}}\right) /\left( {1 - {\zeta }_{p}}\right) \in {\mathbb{Z}}_{K} \), and so, since \( \mathcal{N}\left( {1 - {\zeta }_{p}}\right) = p \), we have \( p \mid \mathcal{N}\left( {x - {\zeta }_{p}}\right) \), and in particular \( \mathcal{N}\left( {x - {\zeta }_{p}}\right) \geq p \) . Thus we must have \( s = \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma } = 0 \), so we can write \[ 1 = \frac{{\left( x - {\zeta }_{p}\right) }^{\theta }}{{\left( 1 - {\zeta }_{p}\right) }^{s}} = \mathop{\prod }\limits_{{\sigma \in G}}{\left( \frac{x - \sigma \left( {\zeta }_{p}\right) }{1 - {\zeta }_{p}}\right) }^{{a}_{\sigma }}, \] and since \( \left( {1 - \sigma \left( {\zeta }_{p}\right) }\right) /\left( {1 - {\zeta }_{p}}\right) \) is a unit for all \( \sigma \in G \) it follows that \( \mathop{\prod }\limits_{{\sigma \in G}}\sigma {\left( \beta \right) }^{{a}_{\sigma }} \) is a unit, where we have set \( \beta = \left( {x - {\zeta }_{p}}\right) /\left( {1 - {\zeta }_{p}}\right) \) . Now by Lemma 16.1.1 the ideals \( {\mathfrak{b}}_{\sigma } = \sigma \left( \beta \right) {\mathbb{Z}}_{K} \) are (integral and) coprime. Since \( \mathop{\prod }\limits_{{\sigma \in G}}{\mathfrak{b}}_{\sigma }^{{a}_{\sigma }} = {\mathbb{Z}}_{K} \) it follows that \( {a}_{\sigma } = 0 \) for all \( \sigma \in G \), in other words that \( \theta = 0 \), proving injectivity and the lemma. Proposition 16.3.3. Assume that \( \min \left( {p, q}\right) \geq {11} \) . Let \( \theta = \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }\sigma \in \) \( X \cap \left( {1 - \iota }\right) \mathbb{Z}\left\lbrack G\right\rbrack \), let \( \alpha \in {K}^{ * } \) be such that \( {\left( x - {\zeta }_{p}\right) }^{\theta } = {\alpha }^{q} \), and assume that \( \parallel \theta \parallel = \mathop{\sum }\limits_{{\sigma \in G}}\left| {a}_{\sigma }\right| \leq {3q}/\left( {p - 1}\right) \) . 
Then for all \( \tau \in G \) we have \[ \left| {\operatorname{Arg}\left( {\tau {\left( \alpha \right) }^{q}}\right) }\right| \leq \frac{\parallel \theta \parallel }{\left| x\right| - 1}\;\text{ and }\;\left| {\operatorname{Arg}\left( {\tau \left( \alpha \right) }\right) }\right| > \frac{\pi }{q}, \] where \( \operatorname{Arg}\left( z\right) \) denotes the principal determination of the argument, i.e., such that \( - \pi < \operatorname{Arg}\left( z\right) \leq \pi \) . Proof. Since \( \theta \in \left( {1 - \iota }\right) \mathbb{Z}\left\lbrack G\right\rbrack \) we have \( {\iota \theta } = - \theta \), so for all \( \tau \in G \) , \[ {\left| \tau \left( \alpha \right) \right| }^{2q} = {\left| {\left( x - {\zeta }_{p}\right) }^{\tau \theta }\right| }^{2} = {\left( x - {\zeta }_{p}\right) }^{\tau \theta }{\left( x - {\zeta }_{p}\right) }^{\tau \iota \theta } = {\left( x - {\zeta }_{p}\right) }^{\tau \theta }{\left( x - {\zeta }_{p}\right) }^{-{\tau \theta }} = 1, \] so that \( \left| {\tau \left( \alpha \right) }\right| = 1 \) . For the same reason we have \( {a}_{\iota \sigma } = - {a}_{\sigma } \), hence \( s = \) \( \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma } = 0 \) . It follows that \[ {\alpha }^{q} = {\left( x - {\zeta }_{p}\right) }^{\theta } = \mathop{\prod }\limits_{{\sigma \in G}}{\left( x - \sigma \left( {\zeta }_{p}\right) \right) }^{{a}_{\sigma }} \] \[ = {x}^{s}\mathop{\prod }\limits_{{\sigma \in G}}{\left( 1 - \sigma \left( {\zeta }_{p}\right) /x\right) }^{{a}_{\sigma }} = \mathop{\prod }\limits_{{\sigma \in G}}{\left( 1 - \sigma \left( {\zeta }_{p}\right) /x\right) }^{{a}_{\sigma }}. \] Fix some \( \tau \in G \), and set \( \zeta = \tau \left( {\zeta }_{p}\right) \) . We thus have \[ \tau {\left( \alpha \right) }^{q} = \mathop{\prod }\limits_{{\sigma \in G}}{\left( 1 - \sigma \left( \zeta \right) /x\right) }^{{a}_{\sigma }}. \] Denote by Log the principal branch of the complex logarithm, in other words such that \( \log \left( z\right) = \log \left( \left| z\right| \right) + i\operatorname{Arg}\left( z\right) \), and let \( f \) be some determination of the complex logarithm, so that \( f\left( z\right) - \log \left( z\right) \) is an integral multiple of \( {2i\pi } \) . We thus have \[ \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }\log \left( {1 - \sigma \left( \zeta \right) /x}\right) = f\left( {\tau {\left( \alpha \right) }^{q}}\right) . \] Now since \( \left| x\right| > 1 \) we have \[ \left| {\log \left( {1 - \sigma \left( \zeta \right) /x}\right) }\right| = \left| {\mathop{\sum }\limits_{{k \geq 1}}\sigma {\left( \zeta \right) }^{k}/\left( {k{x}^{k}}\right) }\right| \leq \mathop{\sum }\limits_{{k \geq 1}}{\left| x\right| }^{-k} = 1/\left( {\left| x\right| - 1}\right) . \] Note that for all \( z \) we have \( f\left( z\right) = \log \left( \left| z\right| \right) + i\left( {\operatorname{Arg}\left( z\right) + {2k\pi }}\right) \) for some \( k \in \mathbb{Z} \) , hence \( \left| {f\left( z\right) }\right| \geq \left| {\operatorname{Arg}\left( z\right) + {2k\pi }}\right| \) . 
If \( k = 0 \) this gives \( \left| {f\left( z\right) }\right| \geq \left| {\operatorname{Arg}\left( z\right) }\right| \), while if \( k \neq 0 \) this gives \[ \left| {f\left( z\right) }\right| \geq \left| {2k\pi }\right| - \left| {\operatorname{Arg}\left( z\right) }\right| \geq \left( {2\left| k\right| - 1}\right) \pi \geq \pi \geq \left| {\operatorname{Arg}\left( z\right) }\right| \] since \( \left| {\operatorname{Arg}\left( z\right) }\right| \leq \pi \), so that we always have \( \left| {f\left( z\right) }\right| \geq \left| {\operatorname{Arg}\left( z\right) }\right| \) . Thus \[ \left| {\operatorname{Arg}\left( {\tau {\left( \alpha \right) }^{q}}\right) }\right| \leq \left| {f\left( {\tau {\left( \alpha \right) }^{q}}\right) }\right| \leq \frac{1}{\left| x\right| - 1}\mathop{\sum }\limits_{{\sigma \in G}}\left| {a}_{\sigma }\right| \leq \frac{\parallel \theta \parallel }{\left| x\right| - 1}, \] proving the first inequality. Now assume by contradiction that \( \left| {\operatorname{Arg}\left( {\tau \left( \alpha \right) }\right) }\right| \leq \) \( \pi /q \) . It is immediately checked that in that case \( \left| {\operatorname{Arg}\left( {\tau {\left( \alpha \right) }^{q}}\right) }\right| = q\left| {\operatorname{Arg}\left( {\tau \left( \alpha \right) }\right) }\right| \) , so that \( \left| {\operatorname{Arg}\left( {\tau \left( \alpha \right) }\right) }\right| \leq \parallel \theta \parallel /\left( {q\left( {\left| x\right| - 1}\right) }\right) \) . Furthermore, if we set \( \phi = \operatorname{Arg}\left( {\tau \left( \alpha \right) }\right) \) , since \( \left| {\tau \left( \alpha \right) }\right| = 1 \) we have \( \tau \left( \alpha \right) = \cos \left( \phi \right) + i\sin \left( \phi \right) \), hence \[ \tau \left( \alpha \right) - 1 = 2\sin \left( {\phi /2}\right) \left( {-\sin \left( {\phi /2}\right) + i\cos \left( {\phi /2}\right) }\right) , \] so that \[ \left| {\tau \left( \alpha \right) - 1}\right| = 2\left| {\sin \left( {\phi /2}\right) }\right| \leq \left| \phi \right| = \left| {\operatorname{Arg}\left( {\tau \left( \alpha \right) }\right) }\right| . \] We thus have \( \left| {\tau \left( \alpha \right) - 1}\right| \leq \parallel \theta \parallel /\left( {q\left( {\left| x\right| - 1}\right) }\right) \), so taking the product over all \( \sigma \in G \) we obtain \[ \left| {\mathcal{N}\left( {\alpha - 1}\right) }\right| = {\left| \tau \left( \alpha \right) - 1\right| }^{2}\mathop{\prod }\limits_{\substack{{\sigma \in G} \\ {\sigma \neq \tau ,\sigma \neq {\iota \tau }} }}\left| {\sigma \left( \alpha \right) - 1}\right| \leq {\left( \frac{\parallel \theta \parallel }{q\left( {\left| x\right| - 1}\right) }\right) }^{2}{2}^{p - 3}, \] since \( \left| {\sigma \left( \alpha \right) - 1}\right| \leq \left| {\sigma \left( \alpha \right) }\right| + 1 = 2 \) . Now set \( {\theta }^{ + } = \mathop{\sum }\limits_{{\sigma \in G,{a}_{\sigma } \geq 0}}{a}_{\sigma }\sigma \) and \( {\theta }^{ - } = \mathop{\sum }\limits_{{\sigma \in G,{a}_{\sigma } \leq 0}}\left( {-{a}_{\sigma }}\right) \sigma \), so that \( \theta = \) \( {\theta }^{ + } - {\theta }^{ - } \) . Since \( {a}_{\iota \sigma } = - {a}_{\sigma } \) we have \( \iota {\theta }^{ + } = {\theta }^{ - } \) hence \( {\alpha }^{q} = {\left( x - {\zeta }_{p}\right) }^{\theta } = \beta /\iota \left( \beta \right) \) , where \( \beta = {\left( x - {\zeta }_{p}\right) }^{{\theta }^{ + }} \) is an a
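Two facts used above are easy to check numerically: the identity \( \mathcal{N}\left( {1 - {\zeta }_{p}}\right) = \mathop{\prod }\limits_{{k = 1}}^{{p - 1}}\left( {1 - {\zeta }_{p}^{k}}\right) = p \) invoked in the proof of Lemma 16.3.2, and the bound \( \left| {\operatorname{Log}\left( {1 - \sigma \left( \zeta \right) /x}\right) }\right| \leq 1/\left( {\left| x\right| - 1}\right) \) from the proof of Proposition 16.3.3. The following sketch is not part of the source text; it is a plain-Python sanity check in which the values \( p = 7 \) and \( x = 10 \) are arbitrary.

```python
import cmath
import math

# Illustrative sanity check only; p = 7 and x = 10 are arbitrary sample values.
p = 7
zetas = [cmath.exp(2j * math.pi * k / p) for k in range(1, p)]

# N(1 - zeta_p) = prod_{k=1}^{p-1} (1 - zeta_p^k) = p   (used in Lemma 16.3.2)
norm = 1
for z in zetas:
    norm *= 1 - z
print(abs(norm - p) < 1e-9)                                      # True

# |Log(1 - sigma(zeta)/x)| <= 1/(|x| - 1) for |x| > 1   (used in Prop. 16.3.3)
x = 10.0
bound = 1.0 / (abs(x) - 1.0)
print(all(abs(cmath.log(1 - z / x)) <= bound for z in zetas))    # True
```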
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 7.5.5
Definition 7.5.5 Let \( K \) be a \( \mathcal{P} \) -adic field and \( H = K \) or a quaternion algebra \( A \) over \( K \) . Suppose \( K \) is a finite extension of \( {\mathbb{Q}}_{p} \) and let \( \mathcal{B} \) be a maximal order in \( H \) . Choose a \( {\mathbb{Z}}_{p} \) -basis \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{n}}\right\} \) of \( \mathcal{B} \) . Then the discriminant of \( H = {D}_{H} = {\begin{Vmatrix}\det \left( {T}_{H}\left( {e}_{i}{e}_{j}\right) \right) \end{Vmatrix}}_{{\mathbb{Q}}_{p}}^{-1} \) . Note that when \( H = K \), this notion of discriminant agrees with the field discriminant \( K \mid {\mathbb{Q}}_{p} \) . (See Definition 0.1.2.) For the connection in the cases where \( H = A \), see Exercise 7.5, No. 2. ## Lemma 7.5.6 1. If \( \mathbb{R} \subset H \), then the Tamagawa measure on \( H \) is \( d{x}_{H} \), as given in Definition 7.5.2. 2. If \( \mathbb{R} ⊄ H \), the Tamagawa measure on \( H \) is \( {D}_{H}^{-1/2}d{x}_{H} \), where \( d{x}_{H} \) is given in Definition 7.5.2. Proof: The first part is a straightforward calculation (see Exercise 7.5, No. 3). For the second part, consider first the case where \( H = {\mathbb{Q}}_{p} \) . Let \( \Phi \) denote the characteristic function of \( {\mathbb{Z}}_{p} \) and let \( {dx} \) denote the additive Haar measure. Recall that \( {\psi }_{p}\left( x\right) = {e}^{{2\pi i} < x > } \) where \( < x > \) is the unique rational of the form \( a/{p}^{m} \) in the interval \( (0,1\rbrack \) such that \( x - < x > \in {\mathbb{Z}}_{p} \) . Now \[ \widehat{\Phi }\left( x\right) = {\int }_{{\mathbb{Z}}_{p}}{\psi }_{p}\left( {xy}\right) {dy} = 1\;\text{ if }x \in {\mathbb{Z}}_{p}. \] Now suppose that \( x \notin {\mathbb{Z}}_{p} \) and so \( \langle x\rangle = a/{p}^{m} \in \left( {0,1}\right) \) . Let \( {\mathbb{Z}}_{p} = \) \( { \cup }_{i = 0}^{{p}^{m} - 1}\left( {i + {p}^{m}{\mathbb{Z}}_{p}}\right) \) and let \( \xi = \exp \left( {{2\pi i}/{p}^{m}}\right) \) . Then \[ \widehat{\Phi }\left( x\right) = {\int }_{{p}^{m}{\mathbb{Z}}_{p}}\left( {1 + \xi + {\xi }^{2} + \cdots + {\xi }^{{p}^{m} - 1}}\right) {dy} = 0. \] Thus \( \widehat{\Phi } = \Phi \) and so \( {dx} \) is the Tamagawa measure in this case. More generally, let \( \mathcal{B} \) denote a maximal order in \( H \) with the \( {\mathbb{Z}}_{p} \) -basis \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{n}}\right\} \) . Take the dual basis with respect to the trace so that \( {e}_{i}^{ * } \) is defined by \( {T}_{H}\left( {{e}_{i}^{ * }{e}_{j}}\right) = {\delta }_{ij} \) . Thus if \[ \widetilde{\mathcal{B}} = \left\{ {x \in H \mid {T}_{H}\left( {xy}\right) \in {\mathbb{Z}}_{p}\forall y \in \mathcal{B}}\right\} \] then \( \left\{ {{e}_{1}^{ * },{e}_{2}^{ * },\ldots ,{e}_{n}^{ * }}\right\} \) is a \( {\mathbb{Z}}_{p} \) -basis of \( \widetilde{\mathcal{B}} \) and \( \widetilde{\widetilde{\mathcal{B}}} = \mathcal{B} \) . Let \( \Phi \) be the characteristic function of \( \mathcal{B} \) . Then in the same way as for \( {\mathbb{Z}}_{p},\widehat{\Phi } \) is the characteristic function of \( \widetilde{\mathcal{B}} \) . Thus \( \widehat{\widehat{\Phi }} = \operatorname{Vol}\left( \widetilde{\mathcal{B}}\right) \Phi \), so that the Tamagawa measure will be \( \operatorname{Vol}{\left( \widetilde{\mathcal{B}}\right) }^{-1/2}d{x}_{H} \) . Now if \( {e}_{i}^{ * } = \sum {q}_{ji}{e}_{j} \), then \( \operatorname{Vol}\left( \widetilde{\mathcal{B}}\right) = \parallel \det \left( Q\right) {\parallel }_{{\mathbb{Q}}_{p}} \), where \( Q = \left( {q}_{ij}\right) \) . 
However, \( {Q}^{-1} = \left( {{T}_{H}\left( {{e}_{i}{e}_{j}}\right) }\right) \) and the result follows. This then normalises the Haar measure on the additive structures of local fields and quaternion algebras over these local fields. We now extend this to multiplicative structures and also to other related locally compact groups. Continuing to use our blanket notation \( H \), the multiplicative Tamagawa measure \( d{x}_{H}^{ * } \) on \( {H}^{ * } \) is obtained from the additive measure as in Definition 7.5.2. For discrete groups \( G \) which arise, the chosen measure will, in general, assign to each element the value 1. Exceptionally, in the cases where \( \mathbb{R} ⊄ H \) and \( G \) is the discrete group of modules \( \begin{Vmatrix}{H}^{ * }\end{Vmatrix} \), each element is assigned its real value. All other locally compact groups which will be considered both in this section and the following two are obtained from previously defined ones via obvious exact sequences. In these circumstances, it is required that the measures be compatible. Thus suppose that we have a short exact sequence of locally compact groups \[ 1 \rightarrow Y\overset{i}{ \rightarrow }Z\overset{j}{ \rightarrow }T \rightarrow 1 \] with Haar measures \( {dy},{dz} \) and \( {dt} \), respectively. These measures are said to be compatible if, for every suitable function \( f \) , \[ {\int }_{Z}f\left( z\right) {dz} = {\int }_{T}{\int }_{Y}f\left( {i\left( y\right) z}\right) {dydt}\;\text{ where }t = j\left( z\right) . \] It should be noted that this depends not just on the groups involved, but on the particular exact sequence used. Given measures on two of the groups involved in the exact sequence, the measure on the third group will be defined by requiring that it be compatible with the other two and the short exact sequence. All volumes which are calculated and used subsequently are computed using the Tamagawa measures and otherwise using compatible measures obtained from these. These local volumes will be used to obtain covolumes of arithmetic Kleinian and Fuchsian groups and so are key components going in to the volume calculations in \( §{11.1} \) . Some of the calculations are made here, others are assigned to Exercises 7.5. Lemma 7.5.7 \( \operatorname{Vol}\left( {\mathcal{H}}^{1}\right) = \operatorname{Vol}\left\{ {x \in {\mathcal{H}}^{ * } \mid n\left( x\right) = 1}\right\} = 4{\pi }^{2} \) . Proof: For the usual measures on \( {\mathbb{R}}^{4} \), the volume of a ball of radius \( r \) is \( {\pi }^{2}{r}^{4}/2 \) . Thus, for \( x = {x}_{1} + {x}_{2}i + {x}_{3}j + {x}_{4}{ij}, n\left( x\right) = {x}_{1}^{2} + {x}_{2}^{2} + {x}_{3}^{2} + {x}_{4}^{2} \), so that \( \parallel x\parallel = n{\left( x\right) }^{2} \) . The volume of \( {\mathcal{H}}^{1} \) will be obtained from the short exact sequence \[ 1 \rightarrow {\mathcal{H}}^{1}\overset{i}{ \rightarrow }{\mathcal{H}}^{ * }\overset{n}{ \rightarrow }{\mathbb{R}}^{ + } \rightarrow 1 \] Now the Tamagawa measure on \( {\mathcal{H}}^{ * } \) is \( n{\left( x\right) }^{-2}{4d}{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} \) (see Exercise 7.5, No.1), and on \( {\mathbb{R}}_{ + }^{ * } \) it is \( {t}^{-1}{dt} \) . As a suitable function on \( {\mathcal{H}}^{ * } \) choose, \[ g\left( x\right) = \left\{ \begin{array}{ll} n{\left( x\right) }^{2} & \text{if }1/2 \leq n\left( x\right) \leq 1 \\ 0 & \text{otherwise.} \end{array}\right. 
\] Now if \( t = n\left( x\right) \), we obtain \[ {\int }_{{\mathcal{H}}^{ * }}g\left( x\right) n{\left( x\right) }^{-2}{4d}{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} = \frac{4{\pi }^{2}}{2}\left( {1 - \frac{1}{4}}\right) = \frac{3{\pi }^{2}}{2} \] \[ = {\int }_{{\mathbb{R}}_{ + }^{ * }}{\int }_{{\mathcal{H}}^{1}}g\left( {i\left( y\right) x}\right) {dydt} = \operatorname{Vol}\left( {\mathcal{H}}^{1}\right) {\int }_{1/2}^{1}\frac{{t}^{2}{dt}}{t} = \frac{3}{8}\operatorname{Vol}\left( {\mathcal{H}}^{1}\right) . \] Lemma 7.5.8 Let \( \mathcal{O} \) be a maximal order in the quaternion algebra \( A \) over the \( \mathcal{P} \) - adic field \( K \) . Let \( {D}_{K} \) denote the discriminant of \( K \) and \( q = \left| {R/{\pi R}}\right| \) . Then \[ \operatorname{Vol}\left( {\mathcal{O}}^{1}\right) = {D}_{K}^{-3/2}\left( {1 - {q}^{-2}}\right) \left\{ \begin{array}{ll} {\left( q - 1\right) }^{-1} & \text{ if }A\text{ is a division algebra } \\ 1 & \text{ if }A = {M}_{2}\left( K\right) . \end{array}\right. \] Proof: Note that the reduced norm \( n \) maps \( {\mathcal{O}}^{ * } \) onto \( {R}^{ * } \) (see Exercise 6.7, No. 1) so there is an exact sequence \[ 1 \rightarrow {\mathcal{O}}^{1}\overset{i}{ \rightarrow }{\mathcal{O}}^{ * }\overset{n}{ \rightarrow }{R}^{ * } \rightarrow 1 \] Thus for the volume of \( {\mathcal{O}}^{1} \), we have \[ \frac{\text{ Tamagawa Vol of }{\mathcal{O}}^{ * }}{\text{ Tamagawa Vol of }{R}^{ * }} = \frac{\left( {1 - {q}^{-1}}\right) {D}_{A}^{-1/2}\text{ multiplicative Haar Vol. of }{\mathcal{O}}^{ * }}{\left( {1 - {q}^{-1}}\right) {D}_{K}^{-1/2}\text{ multiplicative Haar Vol. of }{R}^{ * }} \] by Lemma 7.5.6 and Definition 7.5.2. The result then follows by Lemma 7.5.3 and Exercise 7.5, No 2. ## Exercise 7.5 1. Show that the additive Haar measures on \( H \), where \( H \supset \mathbb{R} \), are as follows: (a) \( H = \mathbb{C}, x = {x}_{1} + i{x}_{2}, d{x}_{\mathbb{C}} = {2d}{x}_{1}d{x}_{2} \) . (b) \( H = \mathcal{H}, x = {x}_{1} + {x}_{2}i + {x}_{3}j + {x}_{4}{ij}, d{x}_{\mathcal{H}} = {4d}{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} \) . (c) \( H = {M}_{2}\left( \mathbb{R}\right), x = \left( \begin{array}{ll} {x}_{1} & {x}_{2} \\ {x}_{3} & {x}_{4} \end{array}\right), d{x}_{{M}_{2}\left( \mathbb{R}\right) } = d{x}_{1}d{x}_{2}d{x}_{3}d{x}_{4} \) . (d) \( H = {M}_{2}\left( \mathbb{C}\right), x = \left( \begin{array}{ll} {x}_{1} + i{x}_{2} & {x}_{3} + i{x}_{4} \\ {x}_{5} + i{x}_{6} & {x}_{7} + i{x}_{8} \end{array}\right), d{x}_{{M}_{2}\left( \mathbb{C}\right) } = d{x}_{1}\cdots d{x}_{8} \) . 2. If \( A \) is a quaternion algebra over the \( \mathcal{P} \) -adic field and \( \mathcal{O} \) is a maximal order in \( A \), show that the discriminant \( {D}_{A} \) defined in Definition 7.5.5 and the discriminant \( {D}_{K} \) are linked by the equation \[ {D}_{A} = {D}_{K}^{4}{N}_{K \mid {\mathbb{Q}}_{p}}\left( {d\left( \mathcal{O}\right) }\right) \] where \( d\left( \mathcal{O}\right) \) is the discriminant of \( \mathcal{O} \) . 3. If \( H \supse
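The numerical content of Lemma 7.5.7 can be checked directly: the integral on the left equals \( 4 \) times the Euclidean volume of the shell \( \left\{ {x \in {\mathbb{R}}^{4} : 1/2 \leq n\left( x\right) \leq 1}\right\} \), namely \( 3{\pi }^{2}/2 \), and comparing with the factor \( 3/8 \) on the right gives \( \operatorname{Vol}\left( {\mathcal{H}}^{1}\right) = 4{\pi }^{2} \) . The Monte Carlo sketch below is not from the book and uses an arbitrary sample size.

```python
import math
import random

random.seed(0)
N = 500_000
hits = 0
for _ in range(N):
    x = [random.uniform(-1.0, 1.0) for _ in range(4)]
    n = sum(t * t for t in x)                 # reduced norm n(x) of the quaternion x
    if 0.5 <= n <= 1.0:
        hits += 1

shell_vol = 16.0 * hits / N                   # the sampling box [-1, 1]^4 has volume 16
lhs = 4.0 * shell_vol                         # estimates the integral in Lemma 7.5.7
print(lhs, 3 * math.pi ** 2 / 2)              # both approximately 14.8
print(lhs * 8 / 3, 4 * math.pi ** 2)          # estimate of Vol(H^1) against 4*pi^2
```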
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 1.2.5
Definition 1.2.5 A Kleinian group \( \Gamma \) is a discrete subgroup of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) . This condition is equivalent to requiring that \( \Gamma \) acts discontinuously on \( {\mathbf{H}}^{3} \), where this means that, for any compact subset \( K \) of \( {\mathbf{H}}^{3} \), the set \( \{ \gamma \in \) \( \Gamma \mid {\gamma K} \cap K \neq \varnothing \} \) is finite. Thus the stabiliser of a point in \( {\mathbf{H}}^{3} \) is finite. The stabiliser of a point on the sphere at infinity can be conjugated to a subgroup of \( B \) described at (1.7). The discrete subgroups of \( B \) fall into one of the following classes: - Finite cyclic - A finite extension of an infinite cyclic group generated either by a loxodromic element or by a parabolic element - A finite extension of \( \mathbb{Z} \oplus \mathbb{Z} \), which is generated by a pair of parabolics A more precise classification of the discrete subgroups of \( B \) can be given. One outcome of this is the following: Lemma 1.2.6 If \( \Gamma \) is a Kleinian group, then a parabolic and loxodromic element cannot have a fixed point in \( \widehat{\mathbb{C}} \) in common. The last category of discrete subgroups of \( B \) described above is critical for the description of hyperbolic manifolds. Definition 1.2.7 A point \( \zeta \in \widehat{\mathbb{C}} \), the sphere at infinity, is a cusp of the Kleinian group \( \Gamma \) if the stabiliser \( {\Gamma }_{\zeta } \) contains a free abelian group of rank 2. Since a Kleinian group acts discontinuously on \( {\mathbf{H}}^{3} \), we can construct a fundamental domain for this action of \( \Gamma \) on \( {\mathbf{H}}^{3} \) . A fundamental domain is a closed subset \( \mathcal{F} \) of \( {\mathbf{H}}^{3} \) such that - \( { \cup }_{\gamma \in \Gamma }\gamma \mathcal{F} = {\mathbf{H}}^{3} \) - \( {\mathcal{F}}^{o} \cap \gamma {\mathcal{F}}^{o} = \varnothing \) for every \( \gamma \neq \operatorname{Id},\gamma \in \Gamma \), where \( {\mathcal{F}}^{o} \) is the interior of \( \mathcal{F} \) - the boundary of \( \mathcal{F} \) has measure zero. The following construction of Dirichlet gives the existence of such a fundamental domain. Pick a point \( P \in {\mathbf{H}}^{3} \) such that \( \gamma \left( P\right) \neq P \) for all \( \gamma \in \Gamma ,\gamma \neq \) Id. Define \[ {\mathcal{F}}_{P}\left( \Gamma \right) = \left\{ {Q \in {\mathbf{H}}^{3} \mid d\left( {Q, P}\right) \leq d\left( {\gamma \left( Q\right), P}\right) \text{ for all }\gamma \in \Gamma }\right\} . \] (1.9) The boundary of \( {\mathcal{F}}_{P}\left( \Gamma \right) \) consists of parts of hyperbolic planes bounding the half-spaces which contain \( {\mathcal{F}}_{P}\left( \Gamma \right) \) . These fundamental domains are polyhedra so that the boundary is a union of faces, each of which is a polygon on a geodesic plane. The following definition gives the most important finiteness conditions on \( \Gamma \) : Definition 1.2.8 A Kleinian group \( \Gamma \) is called geometrically finite if it admits a finite-sided Dirichlet domain. Such groups are therefore finitely generated. A Kleinian group \( \Gamma \) is said to be of finite covolume if it has a fundamental domain of finite hyperbolic volume. The covolume of \( \Gamma \) is then \[ \operatorname{Covol}\left( \Gamma \right) = {\int }_{\mathcal{F}}{dV} \] (1.10) The group \( \Gamma \) is said to be cocompact if \( \Gamma \) has a compact fundamental domain. 
Implicit in the above definition is the result that the volume is independent of the choice of fundamental domain. This is stated more precisely as follows: Lemma 1.2.9 Let \( {\mathcal{F}}_{1} \) and \( {\mathcal{F}}_{2} \) be fundamental domains for the Kleinian group \( \Gamma \) . Then, if \( {\int }_{{\mathcal{F}}_{1}}{dV} \) is finite, so is \( {\int }_{{\mathcal{F}}_{2}}{dV} \) and they are equal. Of course, cocompact groups are necessarily of finite covolume. This condition has the following geometric and algebraic consequences. Theorem 1.2.10 If \( \Gamma \) has finite covolume, then there is a \( P \in {\mathbf{H}}^{3} \) such that \( {\mathcal{F}}_{P}\left( \Gamma \right) \) has finitely many faces. In particular, \( \Gamma \) is geometrically finite and so finitely generated. Again, if we start instead with \( \operatorname{PSL}\left( {2,\mathbb{R}}\right) \), much of the above discussion goes through, in particular with Kleinian replaced by Fuchsian. Definition 1.2.11 A Fuchsian group is a discrete subgroup of \( \operatorname{PSL}\left( {2,\mathbb{R}}\right) \) . Since a Fuchsian group is a Kleinian group, it is only when we consider the actions on \( {\mathbf{H}}^{2} \) or \( {\mathbf{H}}^{3} \) that differences arise. Thus a Fuchsian group will be said to have finite covolume (or strictly finite coarea) if a fundamental domain in \( {\mathbf{H}}^{2} \) has finite hyperbolic area. Connecting the two, note that if a Kleinian group \( \Gamma \) has a subgroup \( F \) which leaves invariant a circle or straight line in \( \mathbb{C} \) and the complementary components, then \( F \) will be termed a Fuchsian subgroup of \( \Gamma \) . Note that \( F \) is conjugate to a subgroup of \( \operatorname{PSL}\left( {2,\mathbb{R}}\right) \) . We will normally be interested only in the cases where \( F \) is non-elementary. There is a sharp distinction between those Kleinian groups \( \Gamma \) which contain parabolic elements and those that do not, which is reflected in the related topology, geometry and, as we shall see later, algebra. So let us assume that \( \Gamma \) contains a parabolic element whose fixed point, by conjugation, can be assumed to be at \( \infty \) . Then there is a horoball neighbourhood of \( \infty \) ; that is, an open upper-half space of the form \[ {H}_{\infty }\left( {t}_{0}\right) = \left\{ {\left( {x, y, t}\right) \in {\mathbf{H}}^{3} \mid t > {t}_{0}}\right\} \] (1.11) such that any two points of \( {H}_{\infty }\left( {t}_{0}\right) \) which are equivalent under the action of \( \Gamma \) are equivalent under the action of \( {\Gamma }_{\infty } \), the stabiliser of \( \infty \) . Now, \( {\Gamma }_{\infty } \) being a subgroup of \( B \) acts as a group of Euclidean similarities on the horosphere bounding the horoball [i.e., \( \left. \left\{ \left( {x, y,{t}_{0}}\right) \right\} \right\rbrack \) . Thus we have a precise description of the action of a Kleinian group in the neighbourhood of a cusp. A horoball neighbourhood of a parabolic fixed point \( \zeta \in \mathbb{C} \) is a Euclidean ball in \( {\mathbf{H}}^{3} \) tangent to \( \mathbb{C} \) at \( \zeta \), as it is the image of some \( {H}_{\infty }\left( {t}_{0}\right) \) at (1.11) under an element of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) . It is then not difficult to see that if \( \Gamma \) contains a parabolic element, then \( \Gamma \) is not cocompact. 
However, under the finite covolume condition, much more can be obtained: Theorem 1.2.12 Let \( \Gamma \) be a Kleinian group of finite covolume. If \( \Gamma \) is not cocompact, then \( \Gamma \) must contain a parabolic element. If \( \zeta \) is the fixed point of such a parabolic element, then \( \zeta \) is a cusp. Furthermore, there are only finitely many \( \Gamma \) -equivalence classes of cusps, so the horoball neighbourhoods can be chosen to be mutually disjoint. ## Exercise 1.2 1. Prove Lemma 1.2.3. 2. Establish the formula (1.8). 3. Determine the groups which can be stabilisers of cusps of Kleinian groups. 4. Let \( \Gamma \) be a non-elementary Kleinian group. The limit set \( \Lambda \left( \Gamma \right) \) of \( \Gamma \) is the set of accumulation points on the sphere at infinity of orbits of points in \( {\mathbf{H}}^{3} \) . Show that the limit set is the closure of the set of fixed points of loxodromic elements in \( \Gamma \) . Show also that it is the smallest non-empty \( \Gamma \) - invariant closed subset on the sphere at infinity. 5. If \( K \) is a non-trivial normal subgroup of \( \Gamma \) of infinite index where \( \Gamma \) is a Kleinian group of finite covolume, show that \( K \) cannot be geometrically finite. 6. Show that if the Kleinian group \( \Gamma \) contains a parabolic element, then \( \Gamma \) cannot be cocompact. ## 1.3 Hyperbolic Manifolds and Orbifolds A hyperbolic \( n \) -manifold is a manifold which is modelled on hyperbolic \( n \) -space. More precisely, it is an \( n \) -manifold \( M \) with a Riemannian metric such that every point on \( M \) has a neighbourhood isometric to an open subset of hyperbolic \( n \) -space. If \( \Gamma \) is a torsion-free Kleinian group, then \( \Gamma \) acts discontinuously and freely on \( {\mathbf{H}}^{3} \) so that the quotient \( {\mathbf{H}}^{3}/\Gamma \) is an orientable hyperbolic 3-manifold. Conversely, the hyperbolic structure of an orientable hyperbolic 3-manifold \( M \) can be lifted to the universal cover \( \widetilde{M} \) which, by uniqueness, will be isometric to \( {\mathbf{H}}^{3} \) . Thus the fundamental group \( {\pi }_{1}\left( M\right) \) can be identified with the covering group which will be a subgroup \( \Gamma \) of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) acting freely and discontinuously. Theorem 1.3.1 If \( M \) is an orientable hyperbolic 3-manifold, then \( M \) is isometric to \( {\mathbf{H}}^{3}/\Gamma \), where \( \Gamma \) is a torsion-free Kleinian group. Now let us suppose that the manifold \( M = {\mathbf{H}}^{3}/\Gamma \) has finite volume. This means that the fundamental domain for \( \Gamma \) has finite volume and so \( \Gamma \) has finite covolume. Thus \( \Gamma \) is finitely generated. Furthermore, if \( M \) is not compact, then the ends of \( M \) can be described following Theorem 1.2.12 and the remarks preceding it. Theorem 1.3.2 If \( M \) is a non-compact orientable hyperbolic 3-manifold of finite volume, then \( M \) has finitely many ends and e
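The types of elements referred to above (parabolic, loxodromic, elliptic) can be detected from the trace of a representative matrix in \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) : an element other than the identity is parabolic iff \( {\operatorname{tr}}^{2} = 4 \), elliptic iff \( {\operatorname{tr}}^{2} \) is real and lies in \( \lbrack 0,4) \), and loxodromic otherwise. This criterion is standard but is not stated in the excerpt above, so the sketch below is only an illustration, with hypothetical sample matrices.

```python
import cmath

def classify(a, b, c, d, tol=1e-9):
    """Classify the Mobius map z -> (az + b)/(cz + d), where ad - bc = 1.

    Standard trace test (not taken from the text): identity/parabolic when
    tr^2 = 4, elliptic when tr^2 is real in [0, 4), loxodromic otherwise;
    the loxodromic case here includes hyperbolic elements.
    """
    assert abs(a * d - b * c - 1) < tol, "normalise the matrix to SL(2, C) first"
    t2 = (a + d) ** 2
    if abs(t2 - 4) < tol:
        return "identity" if abs(b) < tol and abs(c) < tol else "parabolic"
    if abs(t2.imag) < tol and 0 <= t2.real < 4:
        return "elliptic"
    return "loxodromic"

print(classify(1, 1, 0, 1))                                  # parabolic: z -> z + 1
print(classify(2, 0, 0, 0.5))                                # loxodromic: z -> 4z
print(classify(cmath.exp(0.3j), 0, 0, cmath.exp(-0.3j)))     # elliptic rotation
```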
117_《微积分笔记》最终版_by零蛋大
Definition 7.1
Definition 7.1. If \( \alpha ,\beta \) are open covers of \( X \) their join \( \alpha \vee \beta \) is the open cover by all sets of the form \( A \cap B \) where \( A \in \alpha, B \in \beta \) . Similarly we can define the join \( \mathop{\bigvee }\limits_{{i = 1}}^{n}{\alpha }_{i} \) of any finite collection of open covers of \( X \) . Definition 7.2. An open cover \( \beta \) is a refinement of an open cover \( \alpha \), written \( \alpha < \beta \), if every member of \( \beta \) is a subset of a member of \( \alpha \) . Hence \( \alpha < \alpha \vee \beta \) for any open covers \( \alpha ,\beta \) . Also if \( \beta \) is a subcover of \( \alpha \) then \( \alpha < \beta \) . Definition 7.3. If \( \alpha \) is an open cover of \( X \) and \( T : X \rightarrow X \) is continuous then \( {T}^{-1}\alpha \) is the open cover consisting of all sets \( {T}^{-1}A \) where \( A \in \alpha \) . We have \[ {T}^{-1}\left( {\alpha \vee \beta }\right) = {T}^{-1}\left( \alpha \right) \vee {T}^{-1}\left( \beta \right) ,\text{ and }\alpha < \beta \text{ implies }{T}^{-1}\alpha < {T}^{-1}\beta . \] We shall denote \( \alpha \vee {T}^{-1}\alpha \vee \cdots \vee {T}^{-\left( {n - 1}\right) }\alpha \) by \( \mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha \) . Definition 7.4. If \( \alpha \) is an open cover of \( X \) let \( N\left( \alpha \right) \) denote the number of sets in a finite subcover of \( \alpha \) with smallest cardinality. We define the entropy of \( \alpha \) by \( H\left( \alpha \right) = \log N\left( \alpha \right) \) . Remarks (1) \( H\left( \alpha \right) \geq 0 \) . (2) \( H\left( \alpha \right) = 0 \) iff \( N\left( \alpha \right) = 1 \) iff \( X \in \alpha \) . (3) If \( \alpha < \beta \) then \( H\left( \alpha \right) \leq H\left( \beta \right) \) . Proof. Let \( \left\{ {{B}_{1},\ldots ,{B}_{N\left( \beta \right) }}\right\} \) be a subcover of \( \beta \) with minimal cardinality. For each \( i \) there exists \( {A}_{i} \in \alpha \) with \( {A}_{i} \supseteq {B}_{i} \) . Therefore \( \left\{ {{A}_{1},\ldots ,{A}_{N\left( \beta \right) }}\right\} \) covers \( X \) and is a subcover of \( \alpha \) . Thus \( N\left( \alpha \right) \leq N\left( \beta \right) \) . (4) \( H\left( {\alpha \vee \beta }\right) \leq H\left( \alpha \right) + H\left( \beta \right) \) . Proof. Let \( \left\{ {{A}_{1},\ldots ,{A}_{N\left( \alpha \right) }}\right\} \) be a subcover of \( \alpha \) of minimal cardinality, and \( \left\{ {{B}_{1},\ldots ,{B}_{N\left( \beta \right) }}\right\} \) be a subcover of \( \beta \) of minimal cardinality. Then \[ \left\{ {{A}_{i} \cap {B}_{j} : 1 \leq i \leq N\left( \alpha \right) ,1 \leq j \leq N\left( \beta \right) }\right\} \] is a subcover of \( \alpha \vee \beta \), so \( N\left( {\alpha \vee \beta }\right) \leq N\left( \alpha \right) N\left( \beta \right) \) . (5) If \( T : X \rightarrow X \) is a continuous map then \( H\left( {{T}^{-1}\alpha }\right) \leq H\left( \alpha \right) \) . If \( T \) is also surjective then \( H\left( {{T}^{-1}\alpha }\right) = H\left( \alpha \right) \) . Proof. If \( \left\{ {{A}_{1},\ldots ,{A}_{N\left( \alpha \right) }}\right\} \) is a subcover of \( \alpha \) of minimal cardinality then \( \left\{ {{T}^{-1}{A}_{1},\ldots ,{T}^{-1}{A}_{N\left( \alpha \right) }}\right\} \) is a subcover of \( {T}^{-1}\alpha \), so \( N\left( {{T}^{-1}\alpha }\right) \leq N\left( \alpha \right) \) . 
If \( T \) is surjective and \( \left\{ {{T}^{-1}{A}_{1},\ldots ,{T}^{-1}{A}_{N\left( {{T}^{-1}\alpha }\right) }}\right\} \) is a subcover of \( {T}^{-1}\alpha \) of minimal cardinality then \( \left\{ {{A}_{1},\ldots ,{A}_{N\left( {{T}^{-1}\alpha }\right) }}\right\} \) also covers \( X \), so \( N\left( \alpha \right) \leq N\left( {{T}^{-1}\alpha }\right) \) . Theorem 7.1. If \( \alpha \) is an open cover of \( X \) and \( T : X \rightarrow X \) is continuous then \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\left( {1/n}\right) H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) \) exists. Proof. Recall that if we set \[ {a}_{n} = H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) \] then by Theorem 4.9 it suffices to show that \[ {a}_{n + k} \leq {a}_{n} + {a}_{k}\;\text{ for }k, n \geq 1. \] We have \[ {a}_{n + k} = H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n + k - 1}}{T}^{-i}\alpha }\right) \] \[ \leq H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) + H\left( {{T}^{-n}\mathop{\bigvee }\limits_{{j = 0}}^{{k - 1}}{T}^{-j}\alpha }\right) \text{by Remark (4)} \] \[ \leq {a}_{n} + {a}_{k}\text{by Remark (5).} \] Definition 7.5. If \( \alpha \) is an open cover of \( X \) and \( T : X \rightarrow X \) is a continuous map then the entropy of \( T \) relative to \( \alpha \) is given by: \[ h\left( {T,\alpha }\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{1}{n}H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) \] Remarks (6) \( h\left( {T,\alpha }\right) \geq 0 \) by (1). (7) If \( \alpha < \beta \) then \( h\left( {T,\alpha }\right) \leq h\left( {T,\beta }\right) \) . Proof. If \( \alpha < \beta \) then \( \mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha < \mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\beta \), so by (3) we have that \( H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) \leq H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\beta }\right) \) . Hence \( h\left( {T,\alpha }\right) \leq h\left( {T,\beta }\right) \) . Note that if \( \beta \) is a finite subcover of \( \alpha \) then \( \alpha < \beta \) so then \( h\left( {T,\alpha }\right) \leq h\left( {T,\beta }\right) \) . (8) \( h\left( {T,\alpha }\right) \leq H\left( \alpha \right) \) . Proof. By (4) we have \[ H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) \leq \mathop{\sum }\limits_{{i = 0}}^{{n - 1}}H\left( {{T}^{-i}\alpha }\right) \] \[ \leq n \cdot H\left( \alpha \right) \;\text{ by (5). } \] Definition 7.6. If \( T : X \rightarrow X \) is continuous, the topological entropy of \( T \) is given by: \[ h\left( T\right) = \mathop{\sup }\limits_{\alpha }h\left( {T,\alpha }\right) \] where \( \alpha \) ranges over all open covers of \( X \) . ## Remarks (9) \( h\left( T\right) \geq 0 \) . (10) In the definition of \( h\left( T\right) \) one can take the supremum over finite open covers of \( X \) . This follows from (7). (11) \( h\left( I\right) = 0 \) where \( I \) is the identity map of \( X \) . (12) If \( Y \) is a closed subset of \( X \) and \( {TY} = Y \) then \( h\left( {T \mid Y}\right) \leq h\left( T\right) \) . The next result shows that topological entropy is an invariant of topological conjugacy. Theorem 7.2. 
If \( {X}_{1},{X}_{2} \) are compact spaces and \( {T}_{i} : {X}_{i} \rightarrow {X}_{i} \) are continuous for \( i = 1,2 \), and if \( \phi : {X}_{1} \rightarrow {X}_{2} \) is a continuous map with \( \phi {X}_{1} = {X}_{2} \) and \( \phi {T}_{1} = {T}_{2}\phi \) then \( h\left( {T}_{1}\right) \geq h\left( {T}_{2}\right) \) . If \( \phi \) is a homeomorphism then \( h\left( {T}_{1}\right) = h\left( {T}_{2}\right) \) . Proof. Let \( \alpha \) be an open cover of \( {X}_{2} \) . Then \[ h\left( {{T}_{2},\alpha }\right) = \lim \frac{1}{n}H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}_{2}^{-i}\alpha }\right) \] \[ = \mathop{\lim }\limits_{n}\frac{1}{n}H\left( {{\phi }^{-1}\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}_{2}^{-i}\alpha }\right) \] \[ = \mathop{\lim }\limits_{n}\frac{1}{n}H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{\phi }^{-1}{T}_{2}^{-i}\alpha }\right) \] \[ = \mathop{\lim }\limits_{n}\frac{1}{n}H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}_{1}^{-i}{\phi }^{-1}\alpha }\right) \] \[ = h\left( {{T}_{1},{\phi }^{-1}\alpha }\right) \] Hence \( h\left( {T}_{2}\right) \leq h\left( {T}_{1}\right) \) . If \( \phi \) is a homeomorphism then \( {\phi }^{-1}{T}_{2} = {T}_{1}{\phi }^{-1} \) so, by the above, \( h\left( {T}_{1}\right) \leq h\left( {T}_{2}\right) \) . In the next section we shall give a definition of \( h\left( T\right) \) that does not require \( X \) to be compact and we shall prove properties of \( h\left( T\right) \) in this more general setting. However, one result that is false when \( X \) is not compact is the following. Theorem 7.3. If \( T : X \rightarrow X \) is a homeomorphism of a compact space \( X \) then \( h\left( T\right) = h\left( {T}^{-1}\right) \) . Proof \[ h\left( {T,\alpha }\right) = \lim \frac{1}{n}H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) \] \[ = \lim \frac{1}{n}H\left( {{T}^{n - 1}\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{-i}\alpha }\right) }\right) \text{by Remark (5)} \] \[ = \lim \frac{1}{n}H\left( {\mathop{\bigvee }\limits_{{i = 0}}^{{n - 1}}{T}^{i}\alpha }\right) \] \[ = h\left( {{T}^{-1},\alpha }\right) \] ## §7.2 Bowen’s Definition In this section we give the definition of topological entropy using separating and spanning sets. This was done by Dinaburg and by Bowen, but Bowen also gave the definition when the space \( X \) is not compact and thus will prove useful later. We shall give the definition when \( X \) is a metric space but the definition can easily be formulated when \( X \) is a uniform space. In this section \( \left( {X, d}\right) \) is a metric space, not necessarily compact. The open ball centre \( x \) radius \( r \) will be denoted by \( B\left( {x;r}\right) \), and the closed ball by \( \bar{B}\left( {x;r}\right) \) . We shall define topological entropy for uniformly continuous maps \( T : X \rightarrow X \) . The space of all uniformly continuous maps of the metric space \( \left( {X, d}\right) \) will be denoted by \( {UC}\left( {X, d}\right) \) . Our definitions will depend on the metric \( d \) on \( X \) ; we shall see later what the dependence on \(
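Definitions 7.1-7.4 and Remarks (4) and (5) are purely combinatorial once the covers are fixed, so they can be exercised on a finite toy model in which any cover by subsets plays the role of an open cover. The sketch below is not from the text; the set \( X \), the map \( T \) and the covers \( \alpha ,\beta \) are arbitrary choices, and \( N\left( \cdot \right) \) is computed by brute force.

```python
from itertools import combinations
from math import log

def N(cover, X):
    """Smallest cardinality of a subcover of X (brute force, as in Definition 7.4)."""
    cover = [C for C in cover if C]                  # discard empty members
    for k in range(1, len(cover) + 1):
        for sub in combinations(cover, k):
            if set().union(*sub) >= X:
                return k
    raise ValueError("not a cover of X")

def H(cover, X):
    return log(N(cover, X))

def join(alpha, beta):                               # Definition 7.1
    return [A & B for A in alpha for B in beta]

def preimage(T, cover, X):                           # Definition 7.3
    return [frozenset(x for x in X if T[x] in A) for A in cover]

X = frozenset(range(6))
T = {x: (2 * x) % 6 for x in X}                      # an arbitrary map T : X -> X
alpha = [frozenset({0, 1, 2, 3}), frozenset({3, 4, 5})]
beta = [frozenset({0, 5}), frozenset({1, 2, 3, 4}), frozenset({2, 3, 4, 5})]

print(H(join(alpha, beta), X) <= H(alpha, X) + H(beta, X))   # Remark (4): True
print(H(preimage(T, alpha, X), X) <= H(alpha, X))            # Remark (5): True
```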
1073_(GTM231)Combinatorics of Coxeter Groups
Definition 4.6.1
Definition 4.6.1 The depth of \( \beta \in {\Phi }^{ + } \) is \[ \operatorname{dp}\left( \beta \right) = \min \left\{ {k : w\left( \beta \right) \in {\Phi }^{ - }}\right. \text{for some}\left. {w \in W\text{with}\ell \left( w\right) = k}\right\} \text{.} \] Since \( {t}_{\beta }\left( \beta \right) \in {\Phi }^{ - } \), the concept of depth is well defined, and by Lemma 4.4.3, the roots of depth 1 are precisely the simple roots. It is important to know how the depth changes when acting on a positive root by a simple reflection. The answer is very elegant. Lemma 4.6.2 Let \( s \in S \) and \( \beta \in {\Phi }^{ + } - \left\{ {\alpha }_{s}\right\} \) . Then, \[ \operatorname{dp}\left( {s\left( \beta \right) }\right) = \left\{ \begin{array}{ll} \operatorname{dp}\left( \beta \right) - 1, & \text{ if }\left( {\beta \mid {\alpha }_{s}}\right) > 0, \\ \operatorname{dp}\left( \beta \right) , & \text{ if }\left( {\beta \mid {\alpha }_{s}}\right) = 0, \\ \operatorname{dp}\left( \beta \right) + 1, & \text{ if }\left( {\beta \mid {\alpha }_{s}}\right) < 0. \end{array}\right. \] Proof. If \( \left( {\beta \mid {\alpha }_{s}}\right) = 0 \), then \( s\left( \beta \right) = \beta \), so trivially \( \operatorname{dp}\left( {s\left( \beta \right) }\right) = \operatorname{dp}\left( \beta \right) \) . Suppose that \( \left( {\beta \mid {\alpha }_{s}}\right) > 0 \) . Clearly, \( \operatorname{dp}\left( {s\left( \beta \right) }\right) \geq \operatorname{dp}\left( \beta \right) - 1 \) . Hence, it will suffice to show that \( \operatorname{dp}\left( {s\left( \beta \right) }\right) < \operatorname{dp}\left( \beta \right) \) . For this, choose \( w \in W \) such that \( w\left( \beta \right) \in {\Phi }^{ - } \) and \( \ell \left( w\right) = \mathrm{{dp}}\left( \beta \right) \) . Now, consider the two possibilities: \( {ws} < w \) and \( {ws} > w \) . If \( {ws} < w \), we are done, since \( {ws}\left( {s\left( \beta \right) }\right) = w\left( \beta \right) \in {\Phi }^{ - } \) shows that \( \mathrm{{dp}}\left( {s\left( \beta \right) }\right) \leq \ell \left( {ws}\right) < \mathrm{{dp}}\left( \beta \right) . \) Assume that \( {ws} > w \) . Consider the root \[ \gamma = {ws}\left( \beta \right) = w\left( {\beta - 2\left( {\beta \mid {\alpha }_{s}}\right) {\alpha }_{s}}\right) = w\left( \beta \right) - 2\left( {\beta \mid {\alpha }_{s}}\right) w\left( {\alpha }_{s}\right) . \] By assumption, \( w\left( \beta \right) \in {\Phi }^{ - } \) and \( \left( {\beta \mid {\alpha }_{s}}\right) > 0 \), and (by Proposition 4.2.5) \( w\left( {\alpha }_{s}\right) \in {\Phi }^{ + } \) . Hence, \( \gamma \in {\Phi }^{ - } \) . Furthermore, \( \gamma \neq - {\alpha }_{{s}^{\prime }} \) for all \( {s}^{\prime } \in S \), since \( - {\alpha }_{{s}^{\prime }} \) can never be the sum of two negative roots. Now, choose \( {s}^{\prime } \in S \) such that \( {s}^{\prime }w < w \) . Then, \( {s}^{\prime }w\left( {s\left( \beta \right) }\right) = {s}^{\prime }\left( \gamma \right) \), and \( {s}^{\prime }\left( \gamma \right) \in {\Phi }^{ - } \) by Lemma 4.4.3, since \( \gamma \in {\Phi }^{ - } \smallsetminus \left\{ {-{\alpha }_{{s}^{\prime }}}\right\} \) . Therefore, \( \operatorname{dp}\left( {s\left( \beta \right) }\right) \leq \ell \left( {{s}^{\prime }w}\right) < \ell \left( w\right) = \operatorname{dp}\left( \beta \right) \), as desired. Finally, suppose that \( \left( {\beta \mid {\alpha }_{s}}\right) < 0 \) . 
Then, \( \left( {s\left( \beta \right) \mid {\alpha }_{s}}\right) = \left( {\beta - 2\left( {\beta \mid {\alpha }_{s}}\right) {\alpha }_{s} \mid {\alpha }_{s}}\right) = - \left( {\beta \mid {\alpha }_{s}}\right) > 0 \), so by the previous case \( \operatorname{dp}\left( \beta \right) = \operatorname{dp}\left( {s\left( {s\left( \beta \right) }\right) }\right) = \operatorname{dp}\left( {s\left( \beta \right) }\right) - 1 \) . \( ▱ \) Using the concept of depth we can now define the root poset. Definition 4.6.3 For \( \beta ,\gamma \in {\Phi }^{ + } \), let \( \beta \leq \gamma \) if there exist \( {s}_{1},{s}_{2},\ldots ,{s}_{k} \in S \) such that (i) \( \gamma = {s}_{k}{s}_{k - 1}\ldots {s}_{1}\left( \beta \right) \) , (ii) \( \operatorname{dp}\left( {{s}_{i}{s}_{i - 1}\ldots {s}_{1}\left( \beta \right) }\right) = \operatorname{dp}\left( \beta \right) + i \), for all \( 1 \leq i \leq k \) . What we have proved so far shows that the root poset \( \left( {{\Phi }^{ + }, \leq }\right) \) has the following structure. The minimal elements are the simple roots. All maximal chains in an interval \( \left\lbrack {\beta ,\gamma }\right\rbrack \) have the same length \( \mathrm{{dp}}\left( \gamma \right) - \mathrm{{dp}}\left( \beta \right) \), and all maximal chains in \( \{ \beta \mid \beta \leq \gamma \} \) have the same length \( \operatorname{dp}\left( \gamma \right) - 1 \) . Hence, depth is a rank function. See Figures 4.4 and 4.5 for two examples of root posets. ![63e5d629-ce51-4f7f-a61a-425829a5c179_120_0.jpg](images/63e5d629-ce51-4f7f-a61a-425829a5c179_120_0.jpg) Figure 4.4. Root poset of \( {A}_{4} \) . ![63e5d629-ce51-4f7f-a61a-425829a5c179_120_1.jpg](images/63e5d629-ce51-4f7f-a61a-425829a5c179_120_1.jpg) Figure 4.5. Root poset of \( {\widetilde{A}}_{2} \) . Root posets have a natural edge labeling by elements of \( S \) . Namely, for every covering \( \beta \vartriangleleft \gamma \), there is a unique \( s \in S \) such that \( s\left( \beta \right) = \gamma \), which provides the label \( \lambda \left( {\beta ,\gamma }\right) = s \) . The labels are indicated in the figures. Let \( s \in S \) and suppose that \( \beta = \mathop{\sum }\limits_{{{s}^{\prime } \in S}}{b}_{{s}^{\prime }}{\alpha }_{{s}^{\prime }} \) . We have from equations (4.10) and (4.14) that \[ s\left( \beta \right) = \beta + \left( {\mathop{\sum }\limits_{{{s}^{\prime } \in S}}{k}_{s,{s}^{\prime }}{b}_{{s}^{\prime }}}\right) {\alpha }_{s}, \] which if we define \[ {B}_{s}\overset{\text{ def }}{ = } - {b}_{s} + \mathop{\sum }\limits_{{{s}^{\prime } : {s}^{\prime } \sim s}}{k}_{s,{s}^{\prime }}{b}_{{s}^{\prime }} \] (4.39) can be written \[ s\left( \beta \right) = \beta + \left( {{B}_{s} - {b}_{s}}\right) {\alpha }_{s} \] (4.40) The sum in the definition of \( {B}_{s} \) is over all neighbors \( {s}^{\prime } \) of \( s \) in the Coxeter diagram (written \( {s}^{\prime } \sim s \) above). We then get the following criterion for moving up or down along an \( s \) -labeled edge in the root poset. Lemma 4.6.4 \( s\left( \beta \right) > \beta \Leftrightarrow {B}_{s} > {b}_{s} \) . Proof. It is clear from our definitions that \( {B}_{s} - {b}_{s} = - 2\left( {{\alpha }_{s} \mid \beta }\right) \) . Now use Lemma 4.6.2. \( ▱ \) This lemma has the following useful consequence. Corollary 4.6.5 Let \( \beta = \mathop{\sum }\limits_{{s \in S}}{b}_{s}{\alpha }_{s} \) and \( \gamma = \mathop{\sum }\limits_{{s \in S}}{c}_{s}{\alpha }_{s},\beta ,\gamma \in {\Phi }^{ + } \) . If \( \beta \leq \gamma \), then \( {b}_{s} \leq {c}_{s} \) for all \( s \in S \) . 
\( ▱ \) Note, for example, from Figure 4.5, that the converse of this corollary is not true. Lemma 4.6.4 gives a simple algorithmic procedure for generating the edge-labeled root poset. Namely, start with the roots of depth 1 , (i.e., the unit basis vectors \( \left. \left\{ {{\alpha }_{s} \mid s \in S}\right\} \right) \) . Recursively, assume that we have constructed the edge-labeled root poset up to (and including) the roots of depth \( j \) . Then, for each root \( \beta \) of depth \( j \) and each \( s \in S \) such that no \( s \) - labeled edge leads down from \( \beta \), compute the quantity \( {B}_{s} \) defined by (4.39); that is, compute the \( {k}_{s,{s}^{\prime }} \) -weighted sum of \( \beta \) ’s coordinates at all neighbors of \( s \) in the Coxeter diagram minus its coordinate \( {b}_{s} \) at \( s \) . If \( {B}_{s} > {b}_{s} \), let \( \gamma \) be the vector that you get by replacing \( {b}_{s} \) by \( {B}_{s} \) as the \( s \) -coordinate of \( \beta \) . Then, \( \gamma \) is a root of depth \( j + 1 \) and \( \left( {\beta ,\gamma }\right) \) is an \( s \) -labeled edge. If \( {B}_{s} = {b}_{s} \) , then do nothing \( \left( {{B}_{s} < {b}_{s}}\right. \) cannot occur, since then an \( s \) -labeled edge would lead down from \( \beta \) to a root of depth \( j - 1 \) ). After performing this for all pairs \( \beta \) and \( s \) (of the specified kinds), the root poset up to depth \( j + 1 \) will be constructed. The algorithm described in the preceding paragraph can be thought of as a "dual numbers game." Namely, positive roots \( \beta = \mathop{\sum }\limits_{{s \in S}}{b}_{s}{\alpha }_{s} \) are assignments of numbers \( {b}_{s} \) to the nodes \( s \) of the Coxeter diagram, or "dual positions." It is allowed to move from one such position to another by "firing" the node \( s \), which replaces the number \( {b}_{s} \) attached to \( s \) by the number \( {B}_{s} \) . By only allowing firings that increase the number at the fired node we create the "legal" games. These correspond to unrefinable ascending chains in the root poset. Playing legal games from the starting positions with " 1 " on one node and " 0 " on all the others, all positive roots will be generated. Thus, an appealing picture emerges. The dual numbers game - modeling the root poset \( \left( {{\Phi }^{ + }, \leq }\right) \) - takes place in the space \( V \), whereas the numbers game - a combinatorial model of the Coxeter group \( W \) and its right weak order - lives in the dual space \( {V}^{ * } \) . See Figures 4.2 and 4.3 for an illustration. The Coxeter graph itself is the game board for both games, the positions are assignments of numbers to its vertices, and the moves in both cases are certain firings of the vertices involving only numbers at the fired node and its neighbors in the diagram. The two kinds of firing, both given by simple combinatorial rules, are dual in the sense of Proposition 4.2.3. Every position \( {p}^{w} \) of th
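The firing procedure just described is easy to implement. The sketch below is not from the book; it assumes the standard geometric-representation value \( {k}_{s,{s}^{\prime }} = 2\cos \left( {\pi /m\left( {s,{s}^{\prime }}\right) }\right) \) for neighbors \( {s}^{\prime } \) of \( s \) (equations (4.10) and (4.14) are not reproduced in this excerpt), and it simply collects the positive roots produced by legal firings, with a depth cap so that it also terminates on infinite types such as \( {\widetilde{A}}_{2} \) .

```python
from math import cos, pi

def positive_roots(m, max_depth=20, tol=1e-9):
    """Generate positive roots by the firing rule; m[i][j] = m(s_i, s_j).

    Assumes k_{s,s'} = 2*cos(pi/m(s,s')) for neighbouring nodes, a stand-in
    for equations (4.10)/(4.14), which are not shown in the excerpt above.
    """
    n = len(m)
    level = [tuple(float(i == j) for j in range(n)) for i in range(n)]   # depth 1
    found = set(level)
    for _ in range(max_depth - 1):
        nxt = []
        for b in level:
            for s in range(n):
                # B_s = -b_s + sum over neighbours s' of k_{s,s'} * b_{s'}    (4.39)
                Bs = -b[s] + sum(2 * cos(pi / m[s][t]) * b[t]
                                 for t in range(n) if t != s and m[s][t] >= 3)
                if Bs > b[s] + tol:                          # legal (upward) firing
                    new = tuple(round(Bs if t == s else b[t], 6) for t in range(n))
                    if new not in found:
                        found.add(new)
                        nxt.append(new)
        if not nxt:
            break
        level = nxt
    return found

A3 = [[1, 3, 2], [3, 1, 3], [2, 3, 1]]      # Coxeter matrix of type A_3
B2 = [[1, 4], [4, 1]]                       # Coxeter matrix of type B_2
print(len(positive_roots(A3)), len(positive_roots(B2)))      # 6 4
```

Playing the game from the three starting positions of \( {A}_{3} \) recovers its six positive roots; the \( {B}_{2} \) example shows a non-simply-laced case, where fired coordinates take the value \( \sqrt{2} \) .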
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 7.3
Definition 7.3. Two curves \( y = {F}_{1}\left( x\right) \) and \( y = {F}_{2}\left( x\right) \) are said to be \( n \) -fold mutually inclusive in the interval \( \left\lbrack {a, b}\right\rbrack \) if they satisfy the following conditions. (1) \( y = {F}_{1}\left( x\right) \) and \( y = {F}_{2}\left( x\right) \) intersect at \( n + 2 \) points \( \left( {{a}_{1},{b}_{1}}\right), i = \) \( 1,2,\ldots, n + 2 \), where \( a = {a}_{1} < {a}_{2} < \cdots < {a}_{n + 1} < {a}_{n + 2} = b \), and \( {\left( -1\right) }^{i + 1}\left\lbrack {{F}_{2}\left( x\right) - {F}_{1}\left( x\right) }\right\rbrack \geq 0 \) for \( {a}_{i} < x < {a}_{i + 1}, i = 1,2,\ldots, n + 1. \) (2) There exist \( {\tau }_{t + 1}^{j},{\xi }_{t + 1}^{j} \in \left\lbrack {{a}_{t + 1},{a}_{t + 2}}\right\rbrack \), with \( {\xi }_{t + 1}^{j} \geq {\tau }_{t + 1}^{j} \), such that if we let \( {\Delta }_{t + 1}^{J} = {\tau }_{t + 1}^{J} - {a}_{t},{\bar{\Delta }}_{t + 1}^{J} = {\xi }_{t + 1}^{J} - {a}_{t} \), and \( {\gamma }_{t + 1} = \mathop{\max }\limits_{{J = 1,2}}\left( {{\xi }_{t + 1}^{J} + {\Delta }_{t + 1}^{J}}\right) \), then (i) \( {\left( -1\right) }^{i + j}{F}_{J}\left( x\right) \geq 0 \) if \( x \in \left\lbrack {{\tau }_{\iota + 1}^{j},{\gamma }_{\iota + 1}}\right\rbrack \subset \left\lbrack {{a}_{\iota + 1},{a}_{\iota + 2}}\right\rbrack \) , (ii) \( {\left( -1\right) }^{l}\left\lbrack {{\left( -1\right) }^{J}{F}_{J}\left( x\right) + {\left( -1\right) }^{l}{F}_{l}\left( {x + {\bar{\Delta }}_{J + 1}^{l}}\right) }\right\rbrack \geq 0 \), if \( x \in \left\lbrack {{a}_{l},{\tau }_{l + 1}^{J}}\right\rbrack \) , where \( j \neq l, j, l = 1,2, i = 1,2,\ldots, n + 1 \) . Figure 4.40 indicates the situation when \( n = 1 \) . If \( F\left( x\right) = - F\left( {-x}\right) ,{F}_{1}\left( x\right) = F\left( x\right) ,{F}_{2}\left( x\right) = F\left( {-x}\right) \), then \( {F}_{1}\left( x\right) = \) \( - {F}_{2}\left( x\right) ,{b}_{1} = 0 \) for \( i = 1,2,\ldots, n + 2 \) and \( {\tau }_{t + 1}^{1} = {\tau }_{t + 1}^{2},{\xi }_{t + 1}^{1} = {\xi }_{t + 1}^{2} \) for \( i = 1,2,\ldots, n + 1 \) . In this case, the curves \( y = {F}_{1}\left( x\right) \) and \( y = {F}_{2}\left( x\right) \) are \( n \) -fold mutually inclusive in the interval \( \left\lbrack {a, b}\right\rbrack \) requires that there exist \( {\tau }_{\iota + 1},{\xi }_{\iota + 1} \in \left\lbrack {{a}_{\iota + 1},{a}_{\iota + 2}}\right\rbrack \), with \( {\xi }_{\iota + 1} \geq {\tau }_{\iota + 1} \), such that when the curve segment \( y = {F}_{1}\left( x\right) ,{a}_{t} \leq x < {\tau }_{t + 1} \) is shifted to the right for a distance of \( {\Delta }_{t + 1} = \) \( {\tau }_{t + 1} - {a}_{t} \), it will not intersect the curve \( y = {F}_{2}\left( x\right) \), for \( i = 1,2,\ldots, n \) . In the definition given by G. S. Rychkov, \( F\left( x\right) \neq - F\left( {-x}\right) \), but \( {\tau }_{l + 1}^{1} = \) \( {\tau }_{\imath + 1}^{2} = {\xi }_{\imath + 1}^{1} = {\xi }_{\imath + 1}^{2}. \) In the following, we show that when the definition is modified as described above, and the restriction that \( \phi \left( y\right) \) is an odd function as given in [71] is removed, but the conclusion as given in [71] still remains valid. REMARK. The remark following Definition 7.2 also applies to Definition 7.3 ![bea09977-be18-4815-a30e-4fa2fe3b219c_318_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_318_0.jpg) FIGURE 4.40 LEMMA 7.2. Consider (7.8) which is equivalent to the system (7.6). 
Suppose that in the interval \( \left\lbrack {a, b}\right\rbrack ,\left( {a \geq 0}\right) \), the curves for \( {F}_{1}\left( x\right) \) and \( {F}_{2}\left( x\right) \) are 1 -fold mutually inclusive. Further, assume the solution curves \( y = {y}_{J}\left( x\right) \) are defined on \( \left\lbrack {a, b}\right\rbrack \), and lie above \( \varphi \left( y\right) - {F}_{J}\left( x\right) = 0 \) for \( j = 1,2 \) . Then \[ {y}_{2}\left( a\right) - {y}_{1}\left( a\right) \geq 0 \] (7.21) implies that \[ {y}_{2}\left( b\right) - {y}_{1}\left( b\right) \geq 0 \] (7.22) Proof. From Lemma 7.1, if there exists a number \( \eta \in \left\lbrack {{a}_{2},{\gamma }_{2}}\right\rbrack \) with \[ {y}_{2}\left( \eta \right) - {y}_{1}\left( \eta \right) \geq 0 \] (7.23) then (7.22) must hold. Otherwise, we have \[ {y}_{2}\left( x\right) - {y}_{1}\left( x\right) < 0\;\text{ for }{a}_{2} \leq x \leq {\gamma }_{2}. \] (7.24) Since the solutions of (7.8), \( y = {y}_{1}\left( x\right), j = 1,2 \) are monotonic decreasing when \( x \geq 0,\left( {7.24}\right) \) implies that \[ {y}_{1}\left( x\right) \geq {y}_{2}\left( {x + {\bar{\Delta }}_{2}^{2}}\right) \;\text{ for }{a}_{1} \leq x \leq {\tau }_{2}^{1}. \] (7.25) Since \[ {y}_{2}\left( a\right) \geq {y}_{1}\left( a\right) \geq {y}_{1}\left( {a + {\bar{\Delta }}_{2}^{1}}\right) , \] (7.26) and \( {F}_{1}\left( x\right) ,{F}_{2}\left( x\right) \) are 1 -fold mutually inclusive, the condition (2ii) in Definition 7.3 implies that \[ {F}_{2}\left( x\right) \leq \left( ≢ \right) {F}_{1}\left( {x + {\bar{\Delta }}_{2}^{1}}\right) \;\text{ for }{a}_{1} \leq x \leq {\tau }_{2}^{2}. \] (7.27) Moreover, we have \[ g\left( x\right) \leq g\left( {x + {\bar{\Delta }}_{2}^{1}}\right) \;\text{ for }{a}_{1} \leq x \leq {\tau }_{2}^{2}, \] and thus Lemma 7.1 implies that \[ {y}_{2}\left( x\right) \geq {y}_{1}\left( {x + {\bar{\Delta }}_{2}^{1}}\right) \;\text{ for }{a}_{1} \leq x \leq {\tau }_{2}^{2}. \] (7.28) We will use inequalities (7.25), (7.28) to obtain a contradiction. 
From formulas (7.9), (7.10) we obtain \[ \mathop{\sum }\limits_{{j = 1}}^{2}{\left( -1\right) }^{j}{\int }_{a}^{{\gamma }_{2}}\varphi \left( {{y}_{j}\left( x\right) }\right) d{y}_{j}\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{2}{\left( -1\right) }^{j}{\int }_{a}^{{\gamma }_{2}}\frac{-{F}_{j}\left( x\right) g\left( x\right) {dx}}{\varphi \left( {{y}_{j}\left( x\right) }\right) - {F}_{j}\left( x\right) } \] \[ \geq {\int }_{a}^{{\tau }_{2}^{2}} + {\int }_{{\xi }_{2}^{2}}^{{\xi }_{2}^{2} + {\Delta }_{2}^{1}}\frac{-{F}_{2}\left( x\right) g\left( x\right) {dx}}{\varphi \left( {{y}_{2}\left( x\right) }\right) - {F}_{2}\left( x\right) } \] \[ + {\int }_{a}^{{\tau }_{2}^{1}} + {\int }_{{\xi }_{2}^{1}}^{{\xi }_{2}^{1} + {\Delta }_{2}^{2}}\frac{{F}_{1}\left( x\right) g\left( x\right) {dx}}{\varphi \left( {{y}_{1}\left( x\right) }\right) + {F}_{1}\left( x\right) } \] \[ = {\int }_{a}^{{\tau }_{2}^{2}}\left\lbrack {\frac{-{F}_{2}\left( x\right) g\left( x\right) }{\varphi \left( {{y}_{2}\left( x\right) }\right) - {F}_{2}\left( x\right) } + \frac{{F}_{1}\left( {x + {\bar{\Delta }}_{2}^{1}}\right) g\left( {x + {\bar{\Delta }}_{2}^{1}}\right) }{\varphi \left( {{y}_{1}\left( {x + {\bar{\Delta }}_{2}^{1}}\right) }\right) - {F}_{1}\left( {x + {\bar{\Delta }}_{2}^{1}}\right) }}\right\rbrack {dx} \] \[ + {\int }_{a}^{{\tau }_{2}^{1}}\left\lbrack {\frac{{F}_{1}\left( x\right) g\left( x\right) }{\varphi \left( {{y}_{1}\left( x\right) }\right) - {F}_{1}\left( x\right) } - \frac{{F}_{2}\left( {x + {\bar{\Delta }}_{2}^{2}}\right) g\left( {x + {\bar{\Delta }}_{2}^{2}}\right) }{\varphi \left( {{y}_{2}\left( {x + {\bar{\Delta }}_{2}^{2}}\right) }\right) - {F}_{2}\left( {x + {\bar{\Delta }}_{2}^{2}}\right) }}\right\rbrack {dx} \geq 0. \] (7.29) If \( {F}_{2}\left( x\right) < 0 \) in the subinterval \( \left\lbrack {a,{\tau }_{2}^{2}}\right\rbrack \), then the function under the first integral is clearly larger than zero in this subinterval; and (7.25), (7.28) imply that the first integral in the remaining part of \( \left\lbrack {a,{\tau }_{2}^{2}}\right\rbrack \) is also nonnegative. Similar properties can be proved for the second integral. In this manner, we can prove inequality (7.29) from inequalities (7.25), (7.28). On the other hand, from inequality (7.24), we have \( {y}_{1}\left( {\gamma }_{2}\right) < {y}_{1}\left( {\gamma }_{2}\right) \) ; and thus the monotone property of \( \Phi \left( y\right) \) leads to \[ \Phi \left( {{y}_{2}\left( {\gamma }_{2}\right) }\right) - \Phi \left( {{y}_{1}\left( {\gamma }_{2}\right) }\right) + \Phi \left( {{y}_{1}\left( a\right) }\right) - \Phi \left( {{y}_{2}\left( a\right) }\right) < 0. \] This contradicts inequality (7.29). Consequently, there must be some \( \eta \in \) \( \left\lbrack {{a}_{2},{\gamma }_{2}}\right\rbrack \) such that (7.23) holds, that is, \( {y}_{2}\left( \eta \right) - {y}_{1}\left( \eta \right) > 0 \) . This proves the lemma. THEOREM 7.9 ([72]). Consider the system of differential equations (7.6). Suppose that for its equivalent equation (7.8), \( {F}_{1}\left( x\right) \) and \( {F}_{2}\left( x\right) \) are \( n \) -fold mutually inclusive in the interval \( \left\lbrack {0, b}\right\rbrack \) . Then the system (7.6) has at least \( n \) closed orbits in the strip \( \left| x\right| \leq b = {a}_{n + 2} \), each intersecting one interval \( \left\lbrack {{a}_{l},{a}_{l + 1}}\right\rbrack, i = 2,3,\ldots, n + 1 \) . Proof. 
First assume that the solutions \( y = {y}_{1}\left( x\right) \) of (7.8), \( j = 1,2 \), lie above the curve \( \varphi \left( y\right) - F\left( x\right) = 0 \) . Since \[ {F}_{2}\left( x\right) \geq \left( ≢ \right) {F}_{1}\left( x\right) ,\;0 = {a}_{1} \leq x \leq {a}_{2}, \] we deduce from the fact \( {y}_{1}\left( 0\right) = {y}_{2}\left( 0\right) > 0 \) and Lemma 7.1 that \[ {y}_{1}\left( {a}_{2}\right) > {y}_{2}\left( {a}_{2}\right) \] (7.30) Since \( G\left( {-x}\right) = G\left( x\right) \) and \( \Phi \left( y\right) \) is a monotonic increasing function of \( y \) in (7.9), we obtain from (7.9) and (7.30) that \( \lambda \left( {a}_{2}\right) - \lambda \left( {-{a}_{2}}\right) > 0 \) . Since \( {F}_{1}\left( x\right) \) and \( {F}_{2}\left( x\right) \) are 1 -fold mutually inclusive in \( \left\lbrack {0,{a}_{3}}\right\rbrack \), using Lemma 7.2, we can deduce from \( {y}_{2}\left( 0\right) = {y}_{1}\left( 0\right) \) that \( {y}_{2}\left( {a}_{3}\right) - {y}_{1}\l
117_《微积分笔记》最终版_by零蛋大
Definition 4.7
Definition 4.7. The entropy of \( \mathcal{A} \) given \( \mathcal{C} \) is the number \[ H\left( {\xi \left( \mathcal{A}\right) /\xi \left( \mathcal{C}\right) }\right) = H\left( {\mathcal{A}/\mathcal{C}}\right) = - \mathop{\sum }\limits_{{j = 1}}^{p}m\left( {C}_{j}\right) \mathop{\sum }\limits_{{i = 1}}^{k}\frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \] \[ = - \mathop{\sum }\limits_{{i, j}}m\left( {{A}_{i} \cap {C}_{j}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \] omitting the \( j \) -terms when \( m\left( {C}_{j}\right) = 0 \) . So to get \( H\left( {\mathcal{A}/\mathcal{C}}\right) \) one considers \( {C}_{j} \) as a measure space with normalized measure \( m\left( \cdot \right) /m\left( {C}_{j}\right) \) and calculates the entropy of the partition of the set \( {C}_{j} \) induced by \( \xi \left( \mathcal{A}\right) \) (this gives \[ - \mathop{\sum }\limits_{{i = 1}}^{k}\frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \] ) and then averages the answer taking into account the size of \( {C}_{j} \) . ( \( H\left( {\mathcal{A}/\mathcal{C}}\right) \) measures the uncertainty about the outcome of \( \mathcal{A} \) given that we will be told the outcome of \( \mathcal{C} \) .) Let \( \mathcal{N} \) denote the \( \sigma \) -field \( \{ \phi, X\} \) . Then \( H\left( {\mathcal{A}/\mathcal{N}}\right) = H\left( \mathcal{A}\right) \) . (Since \( \mathcal{N} \) represents the outcome of the trivial experiment one gains nothing from knowledge of it.) ## Remarks (1) \( H\left( {\mathcal{A}/\mathcal{C}}\right) \geq 0 \) . (2) If \( \mathcal{A} \doteq \mathcal{D} \) then \( H\left( {\mathcal{A}/\mathcal{C}}\right) = H\left( {\mathcal{D}/\mathcal{C}}\right) \) . (3) If \( \mathcal{C} \doteq \mathcal{D} \) then \( H\left( {\mathcal{A}/\mathcal{C}}\right) = H\left( {\mathcal{A}/\mathcal{D}}\right) \) . Theorem 4.3. Let \( \left( {X,\mathcal{B}, m}\right) \) be a probability space. If \( \mathcal{A},\mathcal{C},\mathcal{D} \) are finite subalgebras of \( \mathcal{B} \) then: (i) \( H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) = H\left( {\mathcal{A}/\mathcal{D}}\right) + H\left( {\mathcal{C}/\mathcal{A} \vee \mathcal{D}}\right) \) . (ii) \( H\left( {\mathcal{A} \vee \mathcal{C}}\right) = H\left( \mathcal{A}\right) + H\left( {\mathcal{C}/\mathcal{A}}\right) \) . (iii) \( \mathcal{A} \subseteq \mathcal{C} \Rightarrow H\left( {\mathcal{A}/\mathcal{D}}\right) \leq H\left( {\mathcal{C}/\mathcal{D}}\right) \) . (iv) \( \mathcal{A} \subseteq \mathcal{C} \Rightarrow H\left( \mathcal{A}\right) \leq H\left( \mathcal{C}\right) \) . (v) \( \mathcal{C} \subseteq \mathcal{D} \Rightarrow H\left( {\mathcal{A}/\mathcal{C}}\right) \geq H\left( {\mathcal{A}/\mathcal{D}}\right) \) . (vi) \( H\left( \mathcal{A}\right) \geq H\left( {\mathcal{A}/\mathcal{D}}\right) \) . (vii) \( H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) \leq H\left( {\mathcal{A}/\mathcal{D}}\right) + H\left( {\mathcal{C}/\mathcal{D}}\right) \) . (viii) \( H\left( {\mathcal{A} \vee \mathcal{C}}\right) \leq H\left( \mathcal{A}\right) + H\left( \mathcal{C}\right) \) . 
(ix) If \( T \) is measure-preserving then: \[ H\left( {{T}^{-1}\mathcal{A}/{T}^{-1}\mathcal{C}}\right) = H\left( {\mathcal{A}/\mathcal{C}}\right) \text{, and} \] (x) \( H\left( {{T}^{-1}\mathcal{A}}\right) = H\left( \mathcal{A}\right) \) . (The reader should think of the intuitive meaning of each statement. This enables one to remember these results easily.) Proof. Let \( \xi \left( \mathcal{A}\right) = \left\{ {A}_{i}\right\} ,\xi \left( \mathcal{C}\right) = \left\{ {C}_{j}\right\} ,\xi \left( \mathcal{D}\right) = \left\{ {D}_{k}\right\} \) and assume, without loss of generality, that all sets have strictly positive measure (since if \( \xi \left( \mathcal{A}\right) = \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} \) with \( m\left( {A}_{i}\right) > 0,1 \leq i \leq r \) and \( m\left( {A}_{i}\right) = 0, r < i \leq k \) we can replace \( \xi \left( \mathcal{A}\right) \) by \( \left\{ {{A}_{1},\ldots ,{A}_{r - 1},{A}_{r} \cup {A}_{r + 1} \cup \cdots \cup {A}_{k}}\right\} \) (see remarks (2),(3) above)). (i) \[ H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) = - \mathop{\sum }\limits_{{i, j, k}}m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) }\text{.} \] But \[ \frac{m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } = \frac{m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) }{m\left( {{A}_{i} \cap {D}_{k}}\right) }\frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } \] unless \( m\left( {{A}_{i} \cap {D}_{k}}\right) = 0 \) and then the left hand side is zero and we need not consider it; and therefore \[ H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) = - \mathop{\sum }\limits_{{i, j, k}}m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) \log \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } \] \[ - \mathop{\sum }\limits_{{i, j, k}}m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) }{m\left( {{A}_{i} \cap {D}_{k}}\right) } \] \[ = - \mathop{\sum }\limits_{{i, k}}m\left( {{A}_{i} \cap {D}_{k}}\right) \log \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } + H\left( {\mathcal{C}/\mathcal{A} \vee \mathcal{D}}\right) \] \[ = H\left( {\mathcal{A}/\mathcal{D}}\right) + H\left( {\mathcal{C}/\mathcal{A} \vee \mathcal{D}}\right) . \] (ii) Put \( \mathcal{D} = \mathcal{N} = \{ \phi, X\} \) in (i). (iii) By (i) \[ H\left( {\mathcal{C}/\mathcal{D}}\right) = H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) = H\left( {\mathcal{A}/\mathcal{D}}\right) + H\left( {\mathcal{C}/\mathcal{A} \vee \mathcal{D}}\right) \geq H\left( {\mathcal{A}/\mathcal{D}}\right) \] (iv) Put \( \mathcal{D} = \mathcal{N} \) in (iii). (v) Fix \( i, j \) and let \[ {\alpha }_{k} = \frac{m\left( {{D}_{k} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) },\;{x}_{k} = \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) }. 
\] Then by Theorem 4.2 \[ \phi \left( {\mathop{\sum }\limits_{k}\frac{m\left( {{D}_{k} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) }}\right) \leq \mathop{\sum }\limits_{k}\frac{m\left( {{D}_{k} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\phi \left( \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) }\right) , \] but since \( \mathcal{C} \subseteq \mathcal{D} \) the left hand side equals \[ \phi \left( \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\right) = \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }. \] Multiply both sides by \( m\left( {C}_{j}\right) \) and sum over \( i \) and \( j \) to give \[ \mathop{\sum }\limits_{{i, j}}m\left( {{A}_{i} \cap {C}_{j}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \leq \mathop{\sum }\limits_{{i, j, k}}m\left( {{D}_{k} \cap {C}_{j}}\right) \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) }\log \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } \] \[ = \mathop{\sum }\limits_{{i, k}}m\left( {D}_{k}\right) \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) }\log \frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } \] or \( - H\left( {\mathcal{A}/\mathcal{C}}\right) \leq - H\left( {\mathcal{A}/\mathcal{D}}\right) \) . Therefore, \( H\left( {\mathcal{A}/\mathcal{D}}\right) \leq H\left( {\mathcal{A}/\mathcal{C}}\right) \) . (vi) Put \( \mathcal{C} = \mathcal{N} \) in (v). (vii) Use (i) and (v). (viii) Set \( \mathcal{D} = \mathcal{N} \) in (vii). (ix), (x) Clear from definitions. The following also fits in with our intuitive ideas. Theorem 4.4. Let \( \mathcal{A} \) , \( \mathcal{C} \) be finite sub-algebras of \( \mathcal{B} \) . Then (i) \( H\left( {\mathcal{A}/\mathcal{C}}\right) = 0 \) (i.e. \( H\left( {\mathcal{A} \vee \mathcal{C}}\right) = H\left( \mathcal{C}\right) \) ) iff \( \mathcal{A} \subseteq \mathcal{C} \) . (ii) \( H\left( {\mathcal{A}/\mathcal{C}}\right) = H\left( \mathcal{A}\right) \) (i.e. \( H\left( {\mathcal{A} \vee \mathcal{C}}\right) = H\left( \mathcal{A}\right) + H\left( \mathcal{C}\right) \) ) iff \( \mathcal{A} \) and \( \mathcal{C} \) are independent (i.e. \( m\left( {A \cap C}\right) = m\left( A\right) \cdot m\left( C\right) \) whenever \( A \in \mathcal{A}, C \in \mathcal{C} \) ). Proof. Let \( \xi \left( \mathcal{A}\right) = \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} ,\xi \left( \mathcal{C}\right) = \left\{ {{C}_{1},\ldots ,{C}_{p}}\right\} \) . Without loss of generality we can assume all these sets have non-zero measure. (i) \( \mathcal{A} \subseteq \mathcal{C} \) means for each \( i \) and each \( j \) either \( m\left( {{A}_{i} \cap {C}_{j}}\right) = m\left( {C}_{j}\right) \) or \( m\left( {{A}_{i} \cap {C}_{j}}\right) = 0 \) . Clearly this implies \( H\left( {\mathcal{A}/\mathcal{C}}\right) = 0 \) . Suppose \( H\left( {\mathcal{A}/\mathcal{C}}\right) = 0 \) . 
Then \[ 0 = - \mathop{\sum }\limits_{{i = 1}}^{k}\mathop{\sum }\limits_{{j = 1}}^{p}m\left( {{A}_{i} \cap {C}_{j}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \] and since \[ - m\left( {{A}_{i} \cap {C}_{j}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \geq 0 \] we must have \[ m\left( {{A}_{i} \cap {C}_{j}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } = 0 \] for each
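As a quick numerical companion to Definition 4.7 and Theorem 4.3, here is a minimal Python sketch; the probability vector and the two partitions of an eight-point space are made-up data chosen only for illustration.

```python
import math
from itertools import product

# A made-up eight-point probability space and two partitions of it.
prob = [0.05, 0.15, 0.10, 0.20, 0.05, 0.15, 0.10, 0.20]
A = [{0, 1, 2, 3}, {4, 5, 6, 7}]            # xi(A)
C = [{0, 4}, {1, 5}, {2, 6}, {3, 7}]        # xi(C)

def m(S):
    # measure of a set of points
    return sum(prob[x] for x in S)

def H(P):
    # entropy of a partition, with the convention 0 log 0 = 0
    return -sum(m(S) * math.log(m(S)) for S in P if m(S) > 0)

def Hcond(P, Q):
    # H(P/Q) as in Definition 4.7, omitting atoms of measure zero
    return -sum(m(S & T) * math.log(m(S & T) / m(T))
                for S, T in product(P, Q) if m(T) > 0 and m(S & T) > 0)

join = [S & T for S, T in product(A, C) if S & T]   # xi(A) v xi(C)

print(abs(H(join) - (H(A) + Hcond(C, A))) < 1e-12)  # Theorem 4.3(ii)
print(H(A) >= Hcond(A, C))                          # Theorem 4.3(vi)
```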
110_The Schwarz Function and Its Generalization to Higher Dimensions
Definition 13.20
Definition 13.20. Let \( K \subseteq E \) be a pointed convex cone. \( K \) is called decomposable if there exist cones \( {\left\{ {K}_{i}\right\} }_{i = 1}^{m}, m \geq 2 \), such that \( K = {K}_{1} + \cdots + {K}_{m} \) , where each \( {K}_{i} \) lies in a linear subspace \( {E}_{i} \subset E \), and where the spaces \( {\left\{ {E}_{i}\right\} }_{i = 1}^{m} \) decompose \( E \) into a direct sum \( E = {E}_{1} \oplus {E}_{2} \oplus \cdots \oplus {E}_{m} \) . Each \( {K}_{i} \) is called a direct summand of \( K \), and \( K \) is called the direct sum of the \( \left\{ {K}_{i}\right\} \) . We write \[ K = {K}_{1} \oplus {K}_{2} \oplus \cdots \oplus {K}_{m} \] (13.7) to denote this relationship between \( K \) and \( {\left\{ {K}_{i}\right\} }_{i = 1}^{m} \) . \( K \) is called indecomposable or irreducible if it cannot be decomposed into a nontrivial direct sum. Let us define \( {\widehat{E}}_{i} \mathrel{\text{:=}} { \oplus }_{j \neq i}{E}_{j} \) and \( {\widehat{K}}_{i} \mathrel{\text{:=}} { \oplus }_{j \neq i}{K}_{j} \) . If \( K \) is the direct sum (13.7), then every \( x \in K \) has a unique representation \( x = {x}_{1} + \cdots + {x}_{m} \) with \( {x}_{i} \in {K}_{i} \subseteq {E}_{i} \) . Thus, \( {x}_{i} = {\Pi }_{{E}_{i}}x \), where \( {\Pi }_{{E}_{i}} \) is the projection of \( E \) onto \( {E}_{i} \) along \( {\widehat{E}}_{i} \) . Also, since \( 0 \in {K}_{i} \), we have \( {K}_{i} = {K}_{i} + \mathop{\sum }\limits_{{j \neq i}}\{ 0\} \subseteq \mathop{\sum }\limits_{{j = 1}}^{m}{K}_{j} = K \) . Therefore, \[ {\Pi }_{{E}_{i}}K = {K}_{i} \subseteq K \] This implies that \( {K}_{i} = {\Pi }_{{E}_{i}}K \) is a convex cone. Similarly, we have \[ \left( {I - {\Pi }_{{E}_{i}}}\right) K = {\Pi }_{{\widehat{E}}_{i}}K = {\widehat{K}}_{i} \subseteq K. \] We first prove a useful technical result. Lemma 13.21. Let \( K \) be a pointed convex cone that decomposes into the direct sum (13.7). If \( x \in {K}_{i} \) is a sum \( x = {x}_{1} + \cdots + {x}_{k} \) of elements \( {x}_{j} \in K \) , then each \( {x}_{j} \in {K}_{i} \) . Proof. We have \( 0 = {\Pi }_{{\widehat{E}}_{i}}x = {\Pi }_{{\widehat{E}}_{i}}{x}_{1} + \cdots + {\Pi }_{{\widehat{E}}_{i}}{x}_{k} \) . Each term \( {\widehat{x}}_{j} \mathrel{\text{:=}} {\Pi }_{{\widehat{E}}_{i}}{x}_{j} \) belongs to \( {\widehat{K}}_{i} \subseteq K \), so that \( {\widehat{x}}_{j} \in K \) and \( - {\widehat{x}}_{j} = \mathop{\sum }\limits_{{l \neq j}}{\widehat{x}}_{l} \in K \) . Since \( K \) contains no lines, we have \( {\widehat{x}}_{j} = 0 \), that is, \( {x}_{j} = {\Pi }_{{E}_{i}}{x}_{j} \in {K}_{i}, j = 1,\ldots, k \) . Theorem 13.22. Let \( K \subseteq E \) be a decomposable pointed convex cone. The irreducible decompositions of \( K \) are identical modulo indexing, that is, the set of cones \( {\left\{ {K}_{i}\right\} }_{i = 1}^{m} \) is unique. Moreover, the subspaces \( {E}_{i} \) corresponding to the nonzero cones \( {K}_{i} \) are also unique. If \( K \) is a solid cone, then all the cones \( {K}_{i} \) are nonzero and the subspaces \( {\left\{ {E}_{i}\right\} }_{1}^{m} \) are unique. Proof. Suppose that \( K \) admits two irreducible decompositions \[ K = {\bigoplus }_{i = 1}^{m}{K}_{i} \subseteq {\bigoplus }_{i = 1}^{m}{E}_{i}\;\text{ and }\;K = {\bigoplus }_{j = 1}^{q}{C}_{j} \subseteq {\bigoplus }_{j = 1}^{q}{F}_{j}. 
\] Note that each nonzero summand in either decomposition of \( K \) must lie in \( \operatorname{span}\left( K\right) \) and that the subspace corresponding to each zero summand must be one-dimensional, for otherwise the summand would be decomposable. This implies that the number of zero summands in both decompositions is \( \operatorname{codim}\left( {\operatorname{span}\left( K\right) }\right) \) . We may thus concentrate our efforts on \( \operatorname{span}\left( K\right) \), that is, we can assume that \( K \) is solid and all the summands of both decompositions of \( K \) are nonzero. By (13.7), each \( x \in {C}_{j} \subseteq K \) has a unique representation \( x = {x}_{1} + \cdots + {x}_{m} \) , where \( {x}_{i} = {\Pi }_{{E}_{i}}x \in {K}_{i} \subseteq K \) . Also, Lemma 13.21 implies that \( {x}_{i} \in {C}_{j} \) , and hence \( {x}_{i} \in {K}_{i} \cap {C}_{j} \) . Consequently, every \( x \in {C}_{j} \) lies in the set \( \left( {{K}_{1} \cap }\right. \) \( \left. {C}_{j}\right) + \cdots + \left( {{K}_{m} \cap {C}_{j}}\right) \) . Conversely, we have \( {K}_{i} \cap {C}_{j} \subseteq {C}_{j} \), implying that \( \left( {{K}_{1} \cap {C}_{j}}\right) + \cdots + \left( {{K}_{m} \cap {C}_{j}}\right) \subseteq {C}_{j} \) ; therefore, \[ {C}_{j} = \left( {{K}_{1} \cap {C}_{j}}\right) + \cdots + \left( {{K}_{m} \cap {C}_{j}}\right) . \] Note that \( {K}_{i} \cap {C}_{j} \subseteq {E}_{i} \cap {F}_{j},{F}_{j} = \left( {{E}_{1} \cap {F}_{j}}\right) + \cdots + \left( {{E}_{m} \cap {F}_{j}}\right) \), and that the intersection of any two distinct summands in the last sum is the trivial subspace \( \{ 0\} \) . The above decompositions of \( {F}_{j} \) and \( {C}_{j} \) are therefore direct sums. Since \( {C}_{j} \) is indecomposable, exactly one of the summands in the decomposition of \( {C}_{j} \) is nontrivial. Thus, \( {C}_{j} = {K}_{i} \cap {C}_{j} \), and hence \( {C}_{j} \subseteq {K}_{i} \) for some \( i \) . Arguing symmetrically, we also have \( {K}_{i} \subseteq {C}_{l} \) for some \( l \), implying that \( {C}_{j} \subseteq {C}_{l} \) . Therefore, \( j = l \), for otherwise \( {C}_{j} \subseteq {F}_{j} \cap {F}_{l} = \{ 0\} \), contradicting our assumption above. This shows that \( {C}_{j} = {K}_{i} \) . The theorem is proved by repeating the above arguments for the cone \( {\widehat{K}}_{i} = { \oplus }_{k \neq i}{K}_{k} = { \oplus }_{l \neq j}{C}_{l} \) . Theorem 13.22 is reminiscent of the Krull-Remak-Schmidt theorem in algebra; see [184]. ## 13.7 Norms of Polynomials and Multilinear Maps Let \( E, F \) be two vector spaces over \( \mathbb{R} \) or \( \mathbb{C} \) endowed with some norms. A mapping \( p : E \rightarrow F \) is called a polynomial if for fixed \( x, y \in E \), the map \( t \mapsto p\left( {x + {ty}}\right) \) is a polynomial in \( t \) . A homogeneous polynomial of degree \( k \) induces a \( k \) -multilinear symmetric mapping \( \widetilde{p} : {E}^{k} \rightarrow F \) such that \[ p\left( x\right) = \widetilde{p}\left( {x, x,\ldots, x}\right) . \] In fact, it is a well-known result of Mazur and Orlicz [194] that \[ \widetilde{p}\left( {{x}_{1},\ldots ,{x}_{k}}\right) = \frac{1}{k!}\mathop{\sum }\limits_{{\varepsilon \in \{ 0,1{\} }^{k}}}{\left( -1\right) }^{k + \mathop{\sum }\limits_{1}^{k}{\varepsilon }_{j}}p\left( {\mathop{\sum }\limits_{1}^{k}{\varepsilon }_{j}{x}_{j}}\right) \] (13.8) see [35] and [141], p. 393. 
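The Mazur-Orlicz polarization formula (13.8) is easy to test numerically. Below is a minimal Python sketch, assuming the illustrative choice \( p\left( x\right) = {x}_{1}{x}_{2} \) on \( {\mathbb{R}}^{2} \), whose symmetric bilinear form is \( \widetilde{p}\left( {x, y}\right) = \left( {{x}_{1}{y}_{2} + {x}_{2}{y}_{1}}\right) /2 \); the test points are arbitrary.

```python
import itertools
import math
import numpy as np

def p(x):
    # a homogeneous polynomial of degree 2 on R^2 (illustrative choice)
    return x[0] * x[1]

def polarize(p, k, xs):
    # right-hand side of formula (13.8)
    total = 0.0
    for eps in itertools.product((0, 1), repeat=k):
        pt = sum(e * x for e, x in zip(eps, xs))
        total += (-1) ** (k + sum(eps)) * p(pt)
    return total / math.factorial(k)

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])

# Agrees with the known bilinear form (x1*y2 + x2*y1)/2 ...
print(polarize(p, 2, [x, y]), (x[0] * y[1] + x[1] * y[0]) / 2)
# ... and the diagonal recovers p: p(x) = p~(x, x).
print(polarize(p, 2, [x, x]), p(x))
```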
One may associate two norms with such a mapping, \[ \parallel p\parallel \mathrel{\text{:=}} \sup \{ \parallel p\left( x\right) \parallel : \parallel x\parallel = 1\} = \sup \{ \parallel \widetilde{p}\left( {x,\ldots, x}\right) \parallel : \parallel x\parallel = 1\} , \] \[ \parallel \widetilde{p}\parallel = \sup \left\{ {\begin{Vmatrix}{\widetilde{p}\left( {{x}_{1},\ldots ,{x}_{k}}\right) }\end{Vmatrix} : \begin{Vmatrix}{x}_{1}\end{Vmatrix} \leq 1,\ldots ,\begin{Vmatrix}{x}_{k}\end{Vmatrix} \leq 1}\right\} . \] Of course, \( \parallel p\parallel \leq \parallel \widetilde{p}\parallel \) ; conversely, the formula (13.8) implies that if \( \begin{Vmatrix}{x}_{i}\end{Vmatrix} = 1 \) , \( i = 1,\ldots, k \), then \[ \begin{Vmatrix}{\widetilde{p}\left( {{x}_{1},\ldots ,{x}_{k}}\right) }\end{Vmatrix} \leq \frac{1}{k!}\mathop{\sum }\limits_{{\varepsilon \in \{ 0,1{\} }^{k}}}\parallel p\parallel \cdot {\begin{Vmatrix}\mathop{\sum }\limits_{1}^{k}{\varepsilon }_{j}{x}_{j}\end{Vmatrix}}^{k} \leq \frac{{\left( 2k\right) }^{k}}{k!}\parallel p\parallel \] that is, \[ \parallel \widetilde{p}\parallel \leq \frac{{\left( 2k\right) }^{k}}{k!}\parallel p\parallel \] If \( E, F \) are finite-dimensional vector spaces over \( \mathbb{R} \) and \( E \) is a Euclidean space, then the above norms are in fact equal. This result plays an important role in deriving some properties of self-concordant barrier functions in the book by Nesterov and Nemirovski [209] and is proved there in Appendix 1. It has an interesting history and seems to have been rediscovered many times. The first proof seems to have been given by Kellogg [162]. Subsequently, independent proofs have been given in \( \left\lbrack {{258},{21},{139},{35},{264},{209}}\right\rbrack \), and possibly others. The following simple and elegant proof of the result is in Bochnak and Siciak [35] and is attributed to Lojasiewicz. Theorem 13.23. Let \( E \) be a finite-dimensional real Euclidean space, and \( F \) a real normed space. Then \[ \parallel p\parallel = \parallel \widetilde{p}\parallel \] Proof. It suffices to show that \( \parallel \widetilde{p}\parallel \leq \parallel p\parallel \) . Let \( S = \{ x \in E : \parallel x\parallel = 1\} \) be the unit sphere in \( E \) . First, consider the case \( k = 2 \) . If \( x, y \in S \) are such that \( \parallel \widetilde{p}\left( {x, y}\right) \parallel = \parallel \widetilde{p}\parallel \), we claim that \[ \parallel \widetilde{p}\left( {x + y, x + y}\right) \parallel = \parallel \widetilde{p}\parallel \cdot \parallel x + y{\parallel }^{2}. \] Otherwise, \( \parallel \widetilde{p}\left( {x + y, x + y}\right) \parallel < \parallel \widetilde{p}\parallel \cdot \parallel x + y{\parallel }^{2} \) ; since \( \parallel \widetilde{p}\left( {x - y, x - y}\right) \parallel \leq \parallel \widetilde{p}\parallel \cdot \parallel x - y{\parallel }^{2} \) and \[ \widetilde{p}\left( {x, y}\right) = \frac{\widetilde{p}\left( {x + y, x + y}\right) - \widetilde{p}\left( {x - y, x - y}\right) }{4}, \] we have \[ \parallel \widetilde{p}\parallel = \parallel \widetilde{p}\left( {x, y}\right) \parallel < \frac{\parallel \widetilde{p}\parallel }{4}\left( {\parallel x + y{\parallel }^{2} + \para
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 3.7.11
Definition 3.7.11 Let \( \phi \) be a strict set contraction mapping; we define \[ \deg \left( {\mathrm{{id}} - \phi ,\Omega ,\theta }\right) = \deg \left( {\mathrm{{id}} - \widetilde{\phi },\Omega ,\theta }\right) . \] To verify that the degree is well defined, we shall prove that the definition does not depend on the special choice of \( \widetilde{\phi } \), i.e., if \( {\widetilde{\phi }}_{1},{\widetilde{\phi }}_{2} \) are two such extensions: \( {\widetilde{\phi }}_{i} : \bar{\Omega } \rightarrow A \) with \( {\left. {\widetilde{\phi }}_{i}\right| }_{\bar{\Omega } \cap A} = {\left. \phi \right| }_{\bar{\Omega } \cap A}, i = 1,2 \), then we define \( F\left( {t, x}\right) = x - \left\lbrack {\left( {1 - t}\right) {\widetilde{\phi }}_{1}\left( x\right) + t{\widetilde{\phi }}_{2}\left( x\right) }\right\rbrack \) . From \( \theta \notin \left( {\mathrm{{id}} - \phi }\right) \left( {\partial \Omega }\right) \), it follows that \( \theta \notin F\left( {\left\lbrack {0,1}\right\rbrack \times \partial \Omega }\right) \), i.e., \( {\widetilde{\phi }}_{1} \simeq {\widetilde{\phi }}_{2} \) . According to the homotopy invariance of the Leray-Schauder degree, we have \( \deg \left( {\mathrm{{id}} - {\widetilde{\phi }}_{1},\Omega ,\theta }\right) = \deg \left( {\mathrm{{id}} - {\widetilde{\phi }}_{2},\Omega ,\theta }\right) \) . It is easy to verify that the degree enjoys the homotopy invariance, additivity, translation invariance and normality. They are left to the readers as exercises. Accordingly, this enables us to apply the degree theory to a map that can be decomposed into the sum of a contraction mapping and a compact map. Finally, we extend the degree to condensing mappings. If \( \phi \) is a condensing mapping, \( \phi \left( \bar{\Omega }\right) \) is bounded, say it is included in the ball centered at \( \theta \) with radius \( R > 0 \) . For any \( \epsilon > 0 \), setting \( \lambda \in \left( {1 - \frac{\epsilon }{R},1}\right) \) and \( {\phi }_{\lambda } = {\lambda \phi } \), we have \( \begin{Vmatrix}{\phi \left( x\right) - {\phi }_{\lambda }\left( x\right) }\end{Vmatrix} \leq \epsilon \), and \[ \alpha \left( {{\phi }_{\lambda }\left( A\right) }\right) \leq {\lambda \alpha }\left( {\phi \left( A\right) }\right) \leq {\lambda \alpha }\left( A\right) \forall \text{ bounded }A, \] i.e., \( {\phi }_{\lambda } \) is a strict set contraction mapping which is close to \( \phi \) . We define \[ \deg \left( {\mathrm{{id}} - \phi ,\Omega ,\theta }\right) = \deg \left( {\mathrm{{id}} - {\phi }_{\lambda },\Omega ,\theta }\right) . \] Again, it is easy to verify that the degree is also well defined and enjoys all basic properties of the Leray-Schauder degree. Again, this is left to the readers. ## 3.7.3 Fredholm Mappings We know that the Leray-Schauder degree can be applied to quasilinear elliptic equations: Find \( u \in {C}^{2,\gamma } \cap {C}_{0}\left( \bar{\Omega }\right) ,\gamma \in \left( {0,1}\right) \) satisfying \[ \mathop{\sum }\limits_{{i, j = 1}}^{n}{a}_{ij}\left( {x, u,\nabla u}\right) \frac{{\partial }^{2}u}{\partial {x}_{i}\partial {x}_{j}} + f\left( {x, u,\nabla u}\right) = 0,\text{ in }\Omega . \] (3.60) It is executed as follows. 
Define a mapping \( K : {C}_{0}^{1}\left( \bar{\Omega }\right) \rightarrow {C}_{0}^{1}\left( \bar{\Omega }\right) \) such that \( \forall u \in {C}_{0}^{1}\left( \bar{\Omega }\right), v = {Ku} \in {C}^{2,\gamma } \cap {C}_{0}\left( \bar{\Omega }\right) \) satisfies the following linear equation: \[ \mathop{\sum }\limits_{{i, j}}^{n}{a}_{ij}\left( {x, u,\nabla u}\right) \frac{{\partial }^{2}v}{\partial {x}_{i}\partial {x}_{j}} + f\left( {x, u,\nabla u}\right) = 0\text{ in }\Omega . \] Thus the fixed points of \( K \) are the solutions of equation (3.60). However, if we consider a general elliptic equation: \[ A\left( u\right) \mathrel{\text{:=}} A\left( {x, u,\nabla u,{\nabla }^{2}u}\right) = g\left( x\right) \text{ in }\Omega , \] (3.61) where \( g \) is a given function and the quadratic form is positive definite: \[ \mathop{\sum }\limits_{{i, j = 1}}^{n}\frac{\partial A}{\partial {u}_{{x}_{i}{x}_{j}}}{\xi }_{i}{\xi }_{j} \geq \alpha {\left| \xi \right| }^{2},\alpha > 0, \] then it seems that the Leray-Schauder degree argument is not applicable, because we do not know how to recast it as a fixed-point problem for compact mappings. However, the linearization of \( A\left( u\right) \) is a linear second-order elliptic operator, so is a Fredholm operator with index zero. People have made great efforts in defining an integer-valued degree theory for \( {C}^{1} \) Fredholm mappings of index 0 between Banach manifolds. Among them we should mention Caccioppoli [Cac 2], Smale [Sm 3](for \( {\mathbf{Z}}_{2} \) valued), Elworthy and Tromba [EIT 1],[EIT 2], Borisovich, Zvyagin, and Sapronov [BZS] (for \( {C}^{2} \) mappings and \( \mathbf{Z} \) valued), and Fitzpatrick, Pejsachowicz, and Rabier [FP],[FPR] and [PR 1] [PR 2] etc. We are satisfied to introduce the idea of the definition; details are to be found in the above-mentioned references. The main difficulty in extending the Leray-Schauder degree theory to Fredholm mappings lies in the fact discovered by Kuiper [Kui] that the general linear group of infinite-dimensional separable Hilbert space is connected and even contractible. A new ingredient has to be introduced to define the orientation of Fredholm mappings. Following [FPR], one defines the parity of curves of Fredholm operators. Let us denote by \( \mathcal{K}\left( X\right) \) the space of all compact linear operators, and by \( {\mathbf{\Phi }}_{0}\left( {X, Y}\right) ,\left( {{GL}\left( {X, Y}\right) }\right) \) the space of all Fredholm operators of index 0 (and isomorphisms resp.) between Banach spaces \( X \) and \( Y \) . For a curve of linear compact operators \( K \in C\left( {\left\lbrack {0,1}\right\rbrack ,\mathcal{K}\left( X\right) }\right) \), if \( I - K\left( i\right), i = \) 0,1 are invertible, then we define the parity by \[ \sigma \left( {I - K}\right) = {i}_{LS}\left( {I - K\left( 0\right) ,\theta }\right) {i}_{LS}\left( {I - K\left( 1\right) ,\theta }\right) , \] where \( {i}_{LS}\left( {I - K\left( i\right) ,\theta }\right), i = 0,1 \), are the Leray-Schauder indices. 
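In finite dimensions the Leray-Schauder index of an invertible linear map \( I - K \) at \( \theta \) reduces to the sign of \( \det \left( {I - K}\right) \), so the parity of a curve can be illustrated concretely. The following Python sketch is only a finite-dimensional toy (matrices standing in for compact operators), not the infinite-dimensional construction described above; the particular path is made up.

```python
import numpy as np

def ls_index(M):
    # finite-dimensional stand-in for the Leray-Schauder index of the
    # invertible linear map M = I - K at theta: the sign of det(M)
    d = np.linalg.det(M)
    assert d != 0, "endpoints of the path must be invertible"
    return 1 if d > 0 else -1

def parity(K, n):
    # sigma(I - K) = i_LS(I - K(0), theta) * i_LS(I - K(1), theta)
    I = np.eye(n)
    return ls_index(I - K(0.0)) * ls_index(I - K(1.0))

# A toy path of finite-rank (hence compact) operators on R^2: the eigenvalue
# 2t of K(t) crosses 1 exactly once as t runs from 0 to 1.
K = lambda t: np.diag([2.0 * t, 0.5])
print(parity(K, 2))   # -1: the path I - K(t) meets the singular set once
```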
The notion of parity is extended to curves of Fredholm mappings with index 0: \( \forall A \in C\left( {\left\lbrack {0,1}\right\rbrack ,{\mathbf{\Phi }}_{0}\left( {X, Y}\right) }\right) \), if \( A\left( i\right) \in {GL}\left( {X, Y}\right) \), then there exists \( N \in C\left( {\left\lbrack {0,1}\right\rbrack ,{GL}\left( {Y, X}\right) }\right) \) such that \( N\left( t\right) A\left( t\right) = I - K\left( t\right) ,\forall t \in \left\lbrack {0,1}\right\rbrack \), where \( K \in C\left( {\left\lbrack {0,1}\right\rbrack ,\mathcal{K}\left( X\right) }\right) \) . If \( A\left( i\right) \in {GL}\left( {X, Y}\right), i = 0,1 \), then we define \[ \sigma \left( A\right) = \sigma \left( {I - K}\right) . \] One can show that \( \sigma \left( A\right) \) does not depend on the special choice of \( N \) . In particular, if \( A \in C\left( {\left\lbrack {0,1}\right\rbrack ,{GL}\left( {X, Y}\right) }\right) \), then we take \( N\left( t\right) = {A}^{-1}\left( t\right) \), and then \( \sigma \left( A\right) = 1 \) . If \( X = Y \) is a finite-dimensional Banach space, by definition, \( \sigma \left( A\right) = \pm 1 \) if and only if \( A\left( 0\right) \) and \( A\left( 1\right) \) lie in the same/different connected component(s) in \( {GL}\left( X\right) \) . But this is not true for infinite-dimensional space due to the above-mentioned Kuiper's theorem. A geometric interpretation is given in [FP]: Let \( {S}_{j} = \left\{ {L \in \Phi \left( {X, Y}\right) \mid \operatorname{dimker}L = j}\right\}, j = 1,2,\ldots \), and \( S = {\Phi }_{0}\left( {X, Y}\right) \smallsetminus {GL}\left( {X, Y}\right) \), then \( S = \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{S}_{j} \), and \( \sigma \left( A\right) \) is the number of points of transversal intersection of a generic path \( A \) with \( {S}_{1} \) . The parity of the curve \( A \in C\left( {\left\lbrack {0,1}\right\rbrack ,{\mathbf{\Phi }}_{0}}\right) \) enjoys the following homotopy invariance and the invariance under reparametrizations: (Homotopy invariance) Let \( H \in C\left( {{\left\lbrack 0,1\right\rbrack }^{2},{\mathbf{\Phi }}_{0}\left( {X, Y}\right) }\right) \) and suppose that \( H\left( {t, i}\right) \in {GL}\left( {X, Y}\right), i = 0,1,\forall t \in \left\lbrack {0,1}\right\rbrack \) . Then \( \sigma \left( {H\left( {t, \cdot }\right) }\right) \) is a constant. (Invariance under reparametrizations) Let \( A \in C\left( {\left\lbrack {0,1}\right\rbrack ,{\mathbf{\Phi }}_{0}\left( {X, Y}\right) }\right) \) with \( A\left( i\right) \in {GL}\left( {X, Y}\right), i = 0,1 \), and let \( \gamma \in C\left( {\left\lbrack {0,1}\right\rbrack ,\left\lbrack {0,1}\right\rbrack }\right) \) satisfy \( \gamma \left( i\right) = i, i = 0,1 \) . Then \( \sigma \left( {A \circ \gamma }\right) = \sigma \left( A\right) \) . Let \( \Omega \subset U \subset X \) be open subsets, and \( U \) be connected and simply connected. In the following the closure \( \bar{\Omega } \) and the boundary \( \partial \Omega \) are understood relative to \( U \) . Definition 3.7.12 A Fredholm mapping \( F \in {C}^{1}\left( {U, Y}\right) \) of index 0 is called \( \Omega \) -admissible if \( {\left. F\right| }_{\bar{\Omega }} \) is proper. A Fredholm homotopy \( H \in {C}^{1}\left( {\left\lbrack {0,1}\right\rbrack \times U, Y}\right) \) of index 1 is called \( \Omega \) -admissible if \( {\left. H\right| }_{\left\lbrack {0,1}\right\rbrack \times \bar{\Omega }} \) is proper. 
Let \( p \in U \) be a regular point of a \( {C}^{1}\Omega \) -admissible Fredholm mapping \( F \) ; we take it as a base point. Let \( y \in Y \smallsetminus F\left( {\partial \Omega }\right) \) be a regular value of \( {\left. F\right
1065_(GTM224)Metric Structures in Differential Geometry
Definition 5.3
Definition 5.3. Let \( f : M \rightarrow N \) be an immersion. The normal bundle of \( f \) is the bundle \( \nu \left( f\right) = {f}^{ * }{\tau }_{N}/h\left( {\tau }_{M}\right) \) over \( M \) . Since \( 0 \rightarrow {\tau }_{M} \rightarrow {f}^{ * }{\tau }_{N} \rightarrow \nu \left( f\right) \rightarrow 0 \) is an exact sequence of homomorphisms, Proposition 5.3 implies that \( {f}^{ * }{\tau }_{N} \cong {\tau }_{M} \oplus \nu \left( f\right) \) . In fact, given a Euclidean metric on \( {f}^{ * }{\tau }_{N} \) (for instance one induced by a Riemannian metric on \( N),\nu \left( f\right) \) is equivalent to the orthogonal complement of \( h\left( {\tau }_{M}\right) \) . EXAMPLE 5.2. Consider the inclusion \( \imath : {S}^{n} \rightarrow {\mathbb{R}}^{n + 1} \) . By the remark following Proposition 5.1, the pullback of the trivial tangent bundle of \( {\mathbb{R}}^{n + 1} \) via \( \imath \) is the trivial rank \( \left( {n + 1}\right) \) bundle \( {\epsilon }^{n + 1} \) over \( {S}^{n} \) . The normal bundle of \( \imath \) is also the trivial rank 1 bundle \( {\epsilon }^{1} \) over \( {S}^{n} \) : Indeed, the restriction of the position vector field \( p \mapsto {\mathcal{J}}_{p}p \) to the sphere is a section of the frame bundle of \( {\tau }_{{S}^{n}}^{ \bot } \) . Thus, \[ {\tau }_{{S}^{n}} \oplus {\epsilon }^{1} \cong {\epsilon }^{n + 1} \] even though \( {\tau }_{{S}^{n}} \) is not, in general, trivial. EXERCISE 58. Show that if \( \xi \) is a vector bundle, then \( \xi \oplus \xi \) admits a complex structure, see Exercise 57. Hint: Let \( J\left( {u, v}\right) = \left( {-v, u}\right) \) . EXERCISE 59. If \( \xi = \pi : E \rightarrow B \) is a vector bundle, show that the vertical bundle of \( \xi \) is equivalent to the pullback \( {\pi }^{ * }\xi \) . Hint: Recall the canonical isomorphism \( {\mathcal{J}}_{u} \) of the vector space \( {E}_{b} \) with its tangent space \( {\left( {E}_{b}\right) }_{u} \) at \( u \) . Show that \( f : {\pi }^{ * }E \rightarrow \mathcal{V}E \) is an equivalence, where \( f\left( {u, v}\right) = {\mathcal{J}}_{u}v \) . EXERCISE 60. If \( \xi = \pi : E \rightarrow B \) is a vector bundle, then by Example 5.1 and Exercise \( {59},{\tau }_{E} \cong {\pi }^{ * }\xi \oplus {\pi }^{ * }{\tau }_{B} \) . Prove that if \( s \) is the zero section of \( \xi \), then \[ {s}^{ * }{\tau }_{E} \cong \xi \oplus {\tau }_{B} \] Thus, the normal bundle of the zero section in \( \xi \) is \( \xi \) itself. ## 6. Fibrations and the Homotopy Lifting/Covering Properties Although we have so far only considered bundles over manifolds, the definition used also makes sense for manifolds with boundary (and even for topological spaces - the traditional type of base in bundle theory- if we replace diffeomorphisms by homeomorphisms). Let \( B \) be a manifold, \( I = \left\lbrack {0,1}\right\rbrack \), and for \( t \in I \), denote by \( {\imath }_{t} : B \rightarrow B \times I \) the imbedding \( {\imath }_{t}\left( b\right) = \left( {b, t}\right) \) . Recall that two maps \( f, g : \bar{B} \rightarrow B \) are said to be homotopic if there exists \( H : \bar{B} \times I \rightarrow B \) with \( H \circ {\imath }_{0} = f \) and \( H \circ {\imath }_{1} = g \) . \( H \) is called a homotopy of \( f \) into \( g \) . 
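Example 5.2 can also be checked fiberwise with a few lines of numerical linear algebra: at a point \( p \in {S}^{2} \), every vector of \( {\mathbb{R}}^{3} \) splits into a part tangent to the sphere plus a multiple of the position vector, which spans the trivial normal line bundle. The Python sketch below uses an arbitrarily chosen point and vector and is only an illustration of this splitting \( {\tau }_{{S}^{n}} \oplus {\epsilon }^{1} \cong {\epsilon }^{n + 1} \) on a single fiber.

```python
import numpy as np

p = np.array([1.0, 2.0, 2.0])
p = p / np.linalg.norm(p)          # a point on S^2
v = np.array([0.3, -1.0, 0.7])     # an arbitrary vector in the fiber R^3

normal_part = np.dot(v, p) * p     # component along the position vector
tangent_part = v - normal_part     # component in T_p S^2

print(np.allclose(tangent_part + normal_part, v))   # True: v is recovered
print(np.isclose(np.dot(tangent_part, p), 0.0))     # True: tangent to S^2
```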
Homotopies play an essential role in the classification of bundles: In this section, we will see that if \( \xi \) is a bundle over \( B \), then for any two homotopic maps \( f, g : \bar{B} \rightarrow B \), the induced bundles \( {f}^{ * }\xi \) and \( {g}^{ * }\xi \) are equivalent. We begin by introducing the notion of fibration, which is weaker than that of fiber bundle: Definition 6.1. A surjective map \( \pi : M \rightarrow B \) is said to be a fibration if it has the homotopy lifting property: namely, given \( f : \bar{B} \rightarrow M \), any homotopy \( H : \bar{B} \times I \rightarrow B \) of \( \pi \circ f \) can be lifted to a homotopy \( \widetilde{H} : \bar{B} \times I \rightarrow M \) of \( f \) ; i.e., \( \pi \circ \widetilde{H} = H,\widetilde{H} \circ {\imath }_{0} = f \) . In order to show that a fiber bundle \( \xi = \pi : M \rightarrow B \) is a fibration, we first rephrase the problem: Notice that a homotopy \( H : \bar{B} \times I \rightarrow B \) can be lifted to \( \widetilde{H} : \bar{B} \times I \rightarrow M \) iff the pullback bundle \( {H}^{ * }M \rightarrow \bar{B} \times I \) admits a section. Indeed, if \( \widetilde{H} \) is a lift of \( H \), then \( \left( {b, t}\right) \mapsto \left( {b, t,\widetilde{H}\left( {b, t}\right) }\right) \) is a section. Conversely, if \( s \) is a section of \( {H}^{ * }M \rightarrow \bar{B} \times I \), then \( {\pi }_{2} \circ s \) is a lift of \( H \), where \( {\pi }_{2} : {H}^{ * }M \rightarrow M \) is the second factor projection. In other words, the homotopy lifting property may be paraphrased as saying that if \( \xi \) is a fiber bundle over \( B \times I \), then any section of \( {\xi }_{\mid B \times 0} \) can be extended to a section of \( \xi \) . We begin with the following: LEMMA 6.1. Let \( \xi \) be a principal bundle over \( B \times I \) . Then any \( b \in B \) has a neighborhood \( U \) such that the restriction \( {\xi }_{\mid U \times I} \) is trivial. Proof. By compactness of \( b \times I \), there exist neighborhoods \( {V}_{1},\ldots ,{V}_{k} \) of \( b \) , and intervals \( {I}_{1},\ldots ,{I}_{k} \) such that \( \left\{ {{V}_{i} \times {I}_{i}}\right\} \) is a cover of \( b \times I \), and each restriction \( {\xi }_{\mid {V}_{i} \times {I}_{i}} \) is trivial. We claim that \( U \) may be taken to be \( {V}_{1} \cap \cdots \cap {V}_{k} \) . The proof will be by induction on \( k \) . The case \( k = 1 \) being trivial, assume the statement holds for \( k - 1 \) . Order the intervals \( {I}_{j} \) by their left endpoints, so that if \( {I}_{j} = \left( {{t}_{j}^{0},{t}_{j}^{1}}\right) \), then \( {t}_{j}^{0} < {t}_{j + 1}^{0} \) (if \( {t}_{j}^{0} = {t}_{j + 1}^{0} \), then either \( {I}_{j} \) or \( {I}_{j + 1} \) can be discarded). We may also assume that \( {t}_{1}^{1} < {t}_{2}^{1} \) since otherwise \( {I}_{2} \) may be discarded. If \( {t}_{0} \in \left( {{t}_{2}^{0},{t}_{1}^{1}}\right) \), then by the induction hypothesis, \( \xi \) is trivial over \( {U}_{1} \times \left\lbrack {0,{t}_{0}}\right) \), and over \( {U}_{2} \times \left( {{t}_{0},1}\right\rbrack \), where \( {U}_{1} = {V}_{1} \) and \( {U}_{2} = {V}_{2} \cap \cdots \cap {V}_{k} \) . Let \( {s}_{1} \) and \( {s}_{2} \) be sections over these two sets. For each \( \left( {q, t}\right) \in \left( {{U}_{1} \cap {U}_{2}}\right) \times \left( {{t}_{2}^{0},{t}_{1}^{1}}\right) \), there exists a unique \( \left. 
{g\left( {q, t}\right) }\right) \in G \) such that \( {s}_{1}\left( {q, t}\right) = {s}_{2}\left( {q, t}\right) g\left( {q, t}\right) \), and \( g : \left( {{U}_{1} \cap {U}_{2}}\right) \times \left( {{t}_{2}^{0},{t}_{1}^{1}}\right) \rightarrow G \) is smooth because the sections are. Extend \( g \) to a differentiable map \( g : \left( {{U}_{1} \cap {U}_{2}}\right) \times \left( {{t}_{2}^{0},1}\right\rbrack \rightarrow G \) . We then obtain a section \( s \) of \( \xi \) restricted to \( \left( {{U}_{1} \cap {U}_{2}}\right) \times I \) by defining \[ s\left( {q, t}\right) = \left\{ \begin{array}{ll} {s}_{1}\left( {q, t}\right) , & \text{ for }t \leq {t}_{0}, \\ {s}_{2}\left( {q, t}\right) g\left( {q, t}\right) , & \text{ for }t \geq {t}_{0}. \end{array}\right. \] THEOREM 6.1. Let \( \xi = \pi : P \rightarrow B \times I \) be a principal \( G \) -bundle, and consider the maps \( p : B \times I \rightarrow B \times 1, p\left( {b, t}\right) = \left( {b,1}\right) \), and \( j : B \times 1 \rightarrow B \times I \) , \( j\left( {b,1}\right) = \left( {b,1}\right) \) . Then \[ \xi \cong {\left( j \circ p\right) }^{ * }\xi = {p}^{ * }{\xi }_{\mid B \times 1} \] Proof. Denote by \( {\pi }_{B} : P \rightarrow B \) and \( u : P \rightarrow I \) the maps obtained by composing \( \pi \) with the projections of \( B \times I \) onto its two factors. We will construct a \( G \) -equivariant bundle map \( f : P \rightarrow {\pi }^{-1}\left( {B \times 1}\right) \) covering \( p \) ; the theorem will then follow from Theorem 3.1 and Proposition 5.1. By Lemma 6.1, there exists a countable cover \( \left\{ {U}_{n}\right\} \) of \( B \) such that \( \xi \) is trivial over each \( {U}_{n} \times I \) . Let \( {s}_{n} \) denote a section of \( {\xi }_{\mid {U}_{n} \times I} \), and \( \left\{ {\phi }_{n}\right\} \) a partition of unity subordinate to \( \left\{ {U}_{n}\right\} \) . Since any element in \( {\pi }^{-1}\left( {b, t}\right) \) with \( b \in {U}_{n} \) can be written as \( {s}_{n}\left( {b, t}\right) g \) for a unique \( g \in G \), the assignment \[ {f}_{n} : {\pi }^{-1}\left( {{U}_{n} \times I}\right) \rightarrow {\pi }^{-1}\left( {{U}_{n} \times I}\right) \] \[ {s}_{n}\left( {b, t}\right) g \mapsto {s}_{n}\left( {b,\min \left\{ {t + {\phi }_{n}\left( b\right) ,1}\right\} }\right) g \] is a \( G \) -equivariant bundle map. Furthermore, \( {f}_{n} \) is the identity on an open set containing \( {\pi }^{-1}\left( {\partial {U}_{n} \times I}\right) \), and may therefore be continuously extended to all of \( P \) by defining \( {f}_{n}\left( q\right) = q \) for \( q \notin {\pi }^{-1}\left( {{U}_{n} \times I}\right) \) . Finally, set \( f = {f}_{1} \circ {f}_{2} \circ \cdots \) . The composition makes sense because all but finitely many \( {f}_{n} \) are the identity on a neighborhood of any point. \( f \) is \( G \) -equivariant since each \( {f}_{n} \) is, and \( u \circ f = \) \( \min \left\{ {u + \left( {\sum {\phi }_{n}}\right) \circ {\pi }_{B},1}\right\} \), so that \( u \circ f \equiv 1 \) . Thus, \( f \) maps into \( {\pi }^{-1}\left( {B \times 1}\right) \) , and furthermore, \( f \) is differentiable, because although \( u \circ {f}_{n} \) is in general only continuous, \( u \circ f \) is diffe
1077_(GTM235)Compact Lie Groups
Definition 7.4
Definition 7.4. Let \( G \) be connected and let \( V \) be an irreducible representation of \( G \) with highest weight \( \lambda \) . As \( V \) is uniquely determined by \( \lambda \), write \( V\left( \lambda \right) \) for \( V \) and write \( {\chi }_{\lambda } \) for its character. Lemma 7.5. Let \( G \) be connected. If \( V\left( \lambda \right) \) is an irreducible representation of \( G \), then \( V{\left( \lambda \right) }^{ * } \cong V\left( {-{w}_{0}\lambda }\right) \), where \( {w}_{0} \in W\left( {\Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) }\right) \) is the unique element mapping the positive Weyl chamber to the negative Weyl chamber (c.f. Exercise 6.40). Proof. Since \( V\left( \lambda \right) \) is irreducible, the character theory of Theorems 3.5 and 3.7 show that \( V{\left( \lambda \right) }^{ * } \) is irreducible. It therefore suffices to show that the highest weight of \( V{\left( \lambda \right) }^{ * } \) is \( - {w}_{0}\lambda \) . Fix a \( G \) -invariant inner product, \( \left( {\cdot , \cdot }\right) \), on \( V\left( \lambda \right) \), so that \( V{\left( \lambda \right) }^{ * } = \left\{ {{\mu }_{v} \mid v \in V\left( \lambda \right) }\right\} \) , where \( {\mu }_{v}\left( {v}^{\prime }\right) = \left( {{v}^{\prime }, v}\right) \) for \( {v}^{\prime } \in V\left( \lambda \right) \) . By the invariance of the form, \( g{\mu }_{v} = {\mu }_{gv} \) for \( g \in G \), so that \( X{\mu }_{v} = {\mu }_{Xv} \) for \( X \in \mathfrak{g} \) . Since \( \left( {\cdot , \cdot }\right) \) is Hermitian, it follows that \( Z{\mu }_{v} = {\mu }_{\theta \left( Z\right) v} \) for \( Z \in {\mathfrak{g}}_{\mathbb{C}} \) . Let \( {v}_{\lambda } \) be a highest weight vector for \( V\left( \lambda \right) \) . Identifying \( W\left( G\right) \) with \( W\left( {\Delta {\left( {\mathfrak{g}}_{\mathbb{C}}\right) }^{ \vee }}\right) \) and \( W\left( {\Delta \left( {\mathfrak{g}}_{\mathbb{C}}\right) }\right) \) as in Theorem 6.43 via the Ad-action of Equation 6.35, it follows from Theorem 7.3 that \( {w}_{0}{v}_{\lambda } \) is a weight vector of weight \( {w}_{0}\lambda \) (called the lowest weight vector). As \( \theta \left( Y\right) = - Y \) for \( Y \in {it} \) and since weights are real valued on \( {it} \), it follows that \( {\mu }_{{w}_{0}{v}_{\lambda }} \) is a weight vector of weight \( - {w}_{0}\lambda \) . It remains to see that \( {\mathfrak{n}}^{ - }{w}_{0}{v}_{\lambda } = 0 \) since Lemma 6.14 shows \( \theta {\mathfrak{n}}^{ + } = {\mathfrak{n}}^{ - } \) . By construction, \( {w}_{0}{\Delta }^{ + }\left( {\mathfrak{g}}_{\mathbb{C}}\right) = {\Delta }^{ - }\left( {\mathfrak{g}}_{\mathbb{C}}\right) \) and \( {w}_{0}^{2} = I \), so that \( \operatorname{Ad}\left( {w}_{0}\right) {\mathfrak{n}}^{ - } = {\mathfrak{n}}^{ + } \) . Thus \[ {\mathfrak{n}}^{ - }{w}_{0}{v}_{\lambda } = {w}_{0}\left( {\operatorname{Ad}\left( {w}_{0}^{-1}\right) {\mathfrak{n}}^{ - }}\right) {v}_{\lambda } = {w}_{0}{\mathfrak{n}}^{ + }{v}_{\lambda } = 0 \] and the proof is complete. ## 7.1.1 Exercises Exercise 7.1 Consider the representation of \( {SU}\left( n\right) \) on \( \mathop{\bigwedge }\limits^{p}{\mathbb{C}}^{n} \) . For \( T \) equal to the usual set of diagonal elements, show that a basis of weight vectors is given by vectors of the form \( {e}_{{l}_{1}} \land \cdots \land {e}_{{l}_{p}} \) with weight \( \mathop{\sum }\limits_{{i = 1}}^{p}{\epsilon }_{{l}_{i}} \) . 
Verify that only \( {e}_{1} \land \cdots \land {e}_{p} \) is a highest weight to conclude that \( \mathop{\bigwedge }\limits^{p}{\mathbb{C}}^{n} \) is an irreducible representation of \( {SU}\left( n\right) \) with highest weight \( \mathop{\sum }\limits_{{i = 1}}^{p}{\epsilon }_{i} \) . Exercise 7.2 Recall that \( {V}_{p}\left( {\mathbb{R}}^{n}\right) \), the space of complex-valued polynomials on \( {\mathbb{R}}^{n} \) homogeneous of degree \( p \), and \( {\mathcal{H}}_{p}\left( {\mathbb{R}}^{n}\right) \), the harmonic polynomials, are representations of \( {SO}\left( n\right) \) . Let \( T \) be the standard maximal torus given in \( §{5.1.2.3} \) and \( §{5.1.2.4} \) , let \( {h}_{j} = {E}_{{2j} - 1,{2j}} - {E}_{{2j},{2j} - 1} \in \mathfrak{t},1 \leq k \leq m \equiv \left\lfloor \frac{n}{2}\right\rfloor \), i.e., \( {h}_{j} \) is an embedding of \( \left( \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right) \), and let \( {\epsilon }_{j} \in {\mathfrak{t}}^{ * } \) be defined by \( {\epsilon }_{j}\left( {h}_{{j}^{\prime }}\right) = - i{\delta }_{j,{j}^{\prime }} \) (c.f. Exercise 6.14). (1) Show that \( {h}_{j} \) acts on \( {V}_{p}\left( {\mathbb{R}}^{n}\right) \) by the operator \( - {x}_{2j}{\partial }_{{x}_{{2j} - 1}} + {x}_{{2j} - 1}{\partial }_{{x}_{2j}} \) . (2) For \( n = {2m} + 1 \), conclude that a basis of weight vectors is given by polynomials of the form \[ {\left( {x}_{1} + i{x}_{2}\right) }^{{j}_{1}}\cdots {\left( {x}_{{2m} - 1} + i{x}_{2m}\right) }^{{j}_{m}}{\left( {x}_{1} - i{x}_{2}\right) }^{{k}_{1}}\cdots {\left( {x}_{{2m} - 1} - i{x}_{2m}\right) }^{{k}_{m}}{x}_{{2m} + 1}^{{l}_{0}}, \] \( {l}_{0} + \mathop{\sum }\limits_{i}{j}_{i} + \mathop{\sum }\limits_{i}{k}_{i} = p \), each with weight \( \mathop{\sum }\limits_{i}\left( {{k}_{i} - {j}_{i}}\right) {\epsilon }_{i} \) . (3) For \( n = {2m} \), conclude that a basis of weight vectors is given by polynomials of the form \[ {\left( {x}_{1} + i{x}_{2}\right) }^{{j}_{1}}\cdots {\left( {x}_{n - 1} + i{x}_{n}\right) }^{{j}_{m}}{\left( {x}_{1} - i{x}_{2}\right) }^{{k}_{1}}\cdots {\left( {x}_{n - 1} - i{x}_{n}\right) }^{{k}_{m}}, \] \( \mathop{\sum }\limits_{i}{j}_{i} + \mathop{\sum }\limits_{i}{k}_{i} = p \), each with weight \( \mathop{\sum }\limits_{i}\left( {{k}_{i} - {j}_{i}}\right) {\epsilon }_{i} \) . (4) Using the root system of \( \mathfrak{{so}}\left( {n,\mathbb{C}}\right) \) and Theorem 2.33, conclude that the weight vector \( {\left( {x}_{1} - i{x}_{2}\right) }^{p} \) of weight \( p{\epsilon }_{1} \) must be the highest weight vector of \( {\mathcal{H}}_{p}\left( {\mathbb{R}}^{n}\right) \) for \( n \geq 3 \) . (5) Using Lemma 2.27, show that a basis of highest weight vectors for \( {V}_{p}\left( {\mathbb{R}}^{n}\right) \) is given by the vectors \( {\left( {x}_{1} - i{x}_{2}\right) }^{p - {2j}}\parallel x{\parallel }^{2j} \) of weight \( \left( {p - {2j}}\right) {\epsilon }_{1},1 \leq j \leq m \) . Exercise 7.3 Consider the representation of \( {SO}\left( n\right) \) on \( \mathop{\bigwedge }\limits^{p}{\mathbb{C}}^{n} \) and continue the notation from Exercise 7.2. (1) For \( n = {2m} + 1 \), examine the wedge product of elements of the form \( {e}_{{2j} - 1} \pm i{e}_{2j} \) as well as \( {e}_{{2m} + 1} \) to find a basis of weight vectors (the weights will be of the form \( \left. {\pm {\epsilon }_{{j}_{1}}\cdots \pm {\epsilon }_{{j}_{r}}\text{with}1 \leq {j}_{1} < \ldots < {j}_{r} \leq p}\right) \) . 
For \( p \leq m \), show that only one is a highest weight vector and conclude that \( \mathop{\bigwedge }\limits^{p}{\mathbb{C}}^{n} \) is irreducible with highest weight \( \mathop{\sum }\limits_{{i = 1}}^{p}{\epsilon }_{i} \) (2) For \( n = {2m} \), examine the wedge product of elements of the form \( {e}_{{2j} - 1} \pm i{e}_{2j} \) to find a basis of weight vectors. For \( p < m \), show that only one is a highest weight vector and conclude that \( \mathop{\bigwedge }\limits^{p}{\mathbb{C}}^{n} \) is irreducible with highest weight \( \mathop{\sum }\limits_{{i = 1}}^{p}{\epsilon }_{i} \) . For \( p = m \) , show that there are exactly two highest weights and that they are \( \mathop{\sum }\limits_{{i = 1}}^{{m - 1}}{\epsilon }_{i} \pm {\epsilon }_{m} \) . In this case, conclude that \( \mathop{\bigwedge }\limits^{m}{\mathbb{C}}^{n} \) is the direct sum of two irreducible representations. Exercise 7.4 Let \( G \) be a compact Lie group, \( T \) a maximal torus, and \( {\Delta }^{ + }\left( {\mathfrak{g}}_{\mathbb{C}}\right) \) a system of positive roots with respect to \( {\mathfrak{t}}_{\mathbb{C}} \) with corresponding simple system \( \Pi \left( {\mathfrak{g}}_{\mathbb{C}}\right) \) . (1) If \( V\left( \lambda \right) \) and \( V\left( {\lambda }^{\prime }\right) \) are irreducible representations of \( G \), show that the weights of \( V\left( \lambda \right) \otimes V\left( {\lambda }^{\prime }\right) \) are of the form \( \mu + {\mu }^{\prime } \), where \( \mu \) is a weight of \( V\left( \lambda \right) \) and \( {\mu }^{\prime } \) is a weight of \( V\left( {\lambda }^{\prime }\right) \) . (2) By looking at highest weight vectors, show \( V\left( {\lambda + {\lambda }^{\prime }}\right) \) appears exactly once as a summand in \( V\left( \lambda \right) \otimes V\left( {\lambda }^{\prime }\right) \) . (3) Suppose \( V\left( v\right) \) is an irreducible summand of \( V\left( \lambda \right) \otimes V\left( {\lambda }^{\prime }\right) \) and write the highest weight vector of \( V\left( v\right) \) in terms of the weights of \( V\left( \lambda \right) \otimes V\left( {\lambda }^{\prime }\right) \) . By considering a term in which the contribution from \( V\left( \lambda \right) \) is as large as possible, show that \( v = \lambda + {\mu }^{\prime } \) for a weight \( {\mu }^{\prime } \) of \( V\left( {\lambda }^{\prime }\right) \) . Exercise 7.5 Recall that \( {V}_{p, q}\left( {\mathbb{C}}^{n}\right) \) from Exercise 2.35 is a representations of \( {SU}\left( n\right) \) on the set of complex polynomials homogeneous of degree \( p \) in \( {z}_{1},\ldots ,{z}_{n} \) and homogeneous of degree \( q \) in \( \overline{{z}_{1}},\ldots ,\overline{{z}_{n}} \) and that \( {\mathcal{H}}_{p, q}\left( {\mathbb{C}}^{n}\right) \) is an irreducible subrepre-sentation. (1) If \( H = \operatorname{diag}\left( {{t}_{1},\ldots ,{t}_{n}}\right) \) with \( \mathop{\sum }\limits_{j}{t}_{j} = 0 \), show that \( H \) acts on \( {V}_{p, q}\left( {\mathbb{C}}^{n}\right) \) as \( \mathop{\sum }\limits_{j}{t}_{j}\left( {-{z}_{j}{\partial }_{{z
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 1.2
Definition 1.2. Let \( {M}_{n}\left( \mathbb{C}\right) \) denote the space of all \( n \times n \) matrices with complex entries. We may identify \( {M}_{n}\left( \mathbb{C}\right) \) with \( {\mathbb{C}}^{{n}^{2}} \) and use the standard notion of convergence in \( {\mathbb{C}}^{{n}^{2}} \) . Explicitly, this means the following. Definition 1.3. Let \( {A}_{m} \) be a sequence of complex matrices in \( {M}_{n}\left( \mathbb{C}\right) \) . We say that \( {A}_{m} \) converges to a matrix \( A \) if each entry of \( {A}_{m} \) converges (as \( m \rightarrow \infty \) ) to the corresponding entry of \( A \) (i.e., if \( {\left( {A}_{m}\right) }_{jk} \) converges to \( {A}_{jk} \) for all \( 1 \leq j, k \leq n \) ). We now consider subgroups of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \), that is, subsets \( G \) of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) such that the identity matrix is in \( G \) and such that for all \( A \) and \( B \) in \( G \), the matrices \( {AB} \) and \( {A}^{-1} \) are also in \( G \) . Definition 1.4. A matrix Lie group is a subgroup \( G \) of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) with the following property: If \( {A}_{m} \) is any sequence of matrices in \( G \), and \( {A}_{m} \) converges to some matrix \( A \), then either \( A \) is in \( G \) or \( A \) is not invertible. The condition on \( G \) amounts to saying that \( G \) is a closed subset of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . (This does not necessarily mean that \( G \) is closed in \( {M}_{n}\left( \mathbb{C}\right) \) .) Thus, Definition 1.4 is equivalent to saying that a matrix Lie group is a closed subgroup of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . Throughout the book, all topological properties of a matrix Lie group \( G \) will be considered with respect to the topology \( G \) inherits as a subset of \( {M}_{n}\left( \mathbb{C}\right) \cong {\mathbb{C}}^{{n}^{2}} \) . The condition that \( G \) be a closed subgroup, as opposed to merely a subgroup, should be regarded as a technicality, in that most of the interesting subgroups of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) have this property. Most of the matrix Lie groups \( G \) we will consider have the stronger property that if \( {A}_{m} \) is any sequence of matrices in \( G \), and \( {A}_{m} \) converges to some matrix \( A \), then \( A \in G \) (i.e., that \( G \) is closed in \( {M}_{n}\left( \mathbb{C}\right) \) ). An example of a subgroup of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) which is not closed (and hence is not a matrix Lie group) is the set of all \( n \times n \) invertible matrices with rational entries. This set is, in fact, a subgroup of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \), but not a closed subgroup. That is, one can ![a7bfd4a7-7795-4350-a407-6ad11be11f96_18_0.jpg](images/a7bfd4a7-7795-4350-a407-6ad11be11f96_18_0.jpg) Fig. 1.1 A small portion of the group \( G \) inside \( \bar{G} \) (left) and a larger portion (right) (easily) have a sequence of invertible matrices with rational entries converging to an invertible matrix with some irrational entries. (In fact, every real invertible matrix is the limit of some sequence of invertible matrices with rational entries.) Another example of a group of matrices which is not a matrix Lie group is the following subgroup of \( \mathrm{{GL}}\left( {2;\mathbb{C}}\right) \) . Let \( a \) be an irrational real number and let \[ G = \left\{ {\left. 
\left( \begin{matrix} {e}^{it} & 0 \\ 0 & {e}^{ita} \end{matrix}\right) \right| \;t \in \mathbb{R}}\right\} \] (1.1) Clearly, \( G \) is a subgroup of \( \mathrm{{GL}}\left( {2;\mathbb{C}}\right) \) . According to Exercise 10, the closure of \( G \) is the group \[ \bar{G} = \left\{ {\left. \left( \begin{matrix} {e}^{i\theta } & 0 \\ 0 & {e}^{i\phi } \end{matrix}\right) \right| \;\theta ,\phi \in \mathbb{R}}\right\} . \] The group \( G \) inside \( \bar{G} \) is known as an "irrational line in a torus"; see Figure 1.1. ## 1.2 Examples Mastering the subject of Lie groups involves not only learning the general theory but also familiarizing oneself with examples. In this section, we introduce some of the most important examples of (matrix) Lie groups. Among these are the classical groups, consisting of the general and special linear groups, the unitary and orthogonal groups, and the symplectic groups. The classical groups, and their associated Lie algebras, will be key examples in Parts II and III of the book. ## 1.2.1 General and Special Linear Groups The general linear groups (over \( \mathbb{R} \) or \( \mathbb{C} \) ) are themselves matrix Lie groups. Of course, \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) is a subgroup of itself. Furthermore, if \( {A}_{m} \) is a sequence of matrices in \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) and \( {A}_{m} \) converges to \( A \), then by the definition of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) , either \( A \) is in \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \), or \( A \) is not invertible. Moreover, \( \mathrm{{GL}}\left( {n;\mathbb{R}}\right) \) is a subgroup of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \), and if \( {A}_{m} \in \mathrm{{GL}}\left( {n;\mathbb{R}}\right) \) and \( {A}_{m} \) converges to \( A \), then the entries of \( A \) are real. Thus, either \( A \) is not invertible or \( A \in \mathrm{{GL}}\left( {n;\mathbb{R}}\right) \) . The special linear group (over \( \mathbb{R} \) or \( \mathbb{C} \) ) is the group of \( n \times n \) invertible matrices (with real or complex entries) having determinant one. Both of these are subgroups of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . Furthermore, if \( {A}_{n} \) is a sequence of matrices with determinant one and \( {A}_{n} \) converges to \( A \), then \( A \) also has determinant one, because the determinant is a continuous function. Thus, \( \mathrm{{SL}}\left( {n;\mathbb{R}}\right) \) and \( \mathrm{{SL}}\left( {n;\mathbb{C}}\right) \) are matrix Lie groups. ## 1.2.2 Unitary and Orthogonal Groups An \( n \times n \) complex matrix \( A \) is said to be unitary if the column vectors of \( A \) are orthonormal, that is, if \[ \mathop{\sum }\limits_{{l = 1}}^{n}\overline{{A}_{lj}}{A}_{lk} = {\delta }_{jk} \] (1.2) We may rewrite (1.2) as \[ \mathop{\sum }\limits_{{l = 1}}^{n}{\left( {A}^{ * }\right) }_{jl}{A}_{lk} = {\delta }_{jk} \] (1.3) where \( {\delta }_{jk} \) is the Kronecker delta, equal to 1 if \( j = k \) and equal to zero if \( j \neq k \) . Here \( {A}^{ * } \) is the adjoint of \( A \), defined by \[ {\left( {A}^{ * }\right) }_{jk} = \overline{{A}_{kj}} \] Equation (1.3) says that \( {A}^{ * }A = I \) ; thus, we see that \( A \) is unitary if and only if \( {A}^{ * } = {A}^{-1} \) . In particular, every unitary matrix is invertible. The adjoint operation on matrices satisfies \( {\left( AB\right) }^{ * } = {B}^{ * }{A}^{ * } \) . 
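The defining condition (1.3) and its consequences are easy to verify numerically. The following Python sketch, assuming a random unitary matrix obtained from a QR factorization (a convenience of the example, not a construction from the text), checks \( {A}^{ * }A = I \), \( \left| {\det A}\right| = 1 \), and preservation of the standard inner product; the algebraic closure properties use only the adjoint identity \( {\left( AB\right) }^{ * } = {B}^{ * }{A}^{ * } \) just stated.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary matrix: the Q factor of an invertible complex matrix.
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A, _ = np.linalg.qr(Z)

print(np.allclose(A.conj().T @ A, np.eye(3)))     # A*A = I, as in (1.3)
print(np.isclose(abs(np.linalg.det(A)), 1.0))     # |det A| = 1

# The inner product <x, y> = sum conj(x_j) y_j is preserved by A.
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose(np.vdot(A @ x, A @ y), np.vdot(x, y)))
```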
From this, we can see that if \( A \) and \( B \) are unitary, then \[ {\left( AB\right) }^{ * }\left( {AB}\right) = {B}^{ * }{A}^{ * }{AB} = {B}^{-1}{A}^{-1}{AB} = I, \] showing that \( {AB} \) is also unitary. Furthermore, since \( {\left( A{A}^{-1}\right) }^{ * } = {I}^{ * } = I \), we see that \( {\left( {A}^{-1}\right) }^{ * }{A}^{ * } = I \), which shows that \( {\left( {A}^{-1}\right) }^{ * } = {\left( {A}^{ * }\right) }^{-1} \) . Thus, if \( A \) is unitary, we have \[ {\left( {A}^{-1}\right) }^{ * }{A}^{-1} = {\left( {A}^{ * }\right) }^{-1}{A}^{-1} = {\left( A{A}^{ * }\right) }^{-1} = I, \] showing that \( {A}^{-1} \) is again unitary. Thus, the collection of unitary matrices is a subgroup of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . We call this group the unitary group and we denote it by \( \mathrm{U}\left( n\right) \) . We may also define the special unitary group \( \mathrm{{SU}}\left( n\right) \), the subgroup of \( \mathrm{U}\left( n\right) \) consisting of unitary matrices with determinant 1 . It is easy to check that both \( \mathrm{U}\left( n\right) \) and \( \mathrm{{SU}}\left( n\right) \) are closed subgroups of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) and thus matrix Lie groups. Meanwhile, let \( \langle \cdot , \cdot \rangle \) denote the standard inner product on \( {\mathbb{C}}^{n} \), given by \[ \langle x, y\rangle = \mathop{\sum }\limits_{j}\overline{{x}_{j}}{y}_{j} \] (Note that we put the conjugate on the first factor in the inner product.) By Proposition A.8, we have \[ \langle x,{Ay}\rangle = \left\langle {{A}^{ * }x, y}\right\rangle \] for all \( x, y \in {\mathbb{C}}^{n} \) . Thus, \[ \langle {Ax},{Ay}\rangle = \left\langle {{A}^{ * }{Ax}, y}\right\rangle \] from which we can see that if \( A \) is unitary, then \( A \) preserves the inner product on \( {\mathbb{C}}^{n} \), that is, \[ \langle {Ax},{Ay}\rangle = \langle x, y\rangle \] for all \( x \) and \( y \) . Conversely, if \( A \) preserves the inner product, we must have \( \left\langle {{A}^{ * }{Ax}, y}\right\rangle = \langle x, y\rangle \) for all \( x, y \) . It is not hard to see that this condition holds only if \( {A}^{ * }A = I \) . Thus, an equivalent characterization of unitarity is that \( A \) is unitary if and only if \( A \) preserves the standard inner product on \( {\mathbb{C}}^{n} \) . Finally, for any matrix \( A \), we have that \( \det {A}^{ * } = \overline{\det A} \) . Thus, if \( A \) is unitary, we have \[ \det \left( {{A}^{ * }A}\right) = {\left| \det A\right| }^{2} = \det I = 1. \] Hence, for all unitary matrices \( A \), we have \( \left| {\det A}\right| = 1 \) . In a similar fashion, an \( n \times n \) real matrix \( A \) is said to be orthogonal if the column vectors of \( A \) are orthonormal. As in the unitary case, we may give equivalent versions of this condition. The only difference is that if \( A \) is real, \( {A}^{ * } \) is the same as the transpose \( {A}^{tr} \) of \( A \), given by \[ {\left( {A}^{tr}\right) }_{jk} = {A}
1143_(GTM48)General Relativity for Mathematicians
Definition 2.3.2
Definition 2.3.2. A vector field \( \mathbf{W} \) over \( \gamma \) is called a neighbor of \( \gamma \) in \( \mathbf{Q} \) iff there exists a vector field \( {\mathbf{W}}^{\prime } \) over \( \gamma \) such that \( p{\mathbf{W}}^{\prime } = \mathbf{W} \) and \( {L}_{\mathbf{Q}}{\mathbf{W}}^{\prime } = 0 \) (Section 2.0.3 and Exercise 2.0.6). Let \( F \) be the Fermi-Walker connection over \( \gamma .{F}_{{\gamma }_{ \bullet }}W \) is called the neighbor’s 3-velocity relative to \( \gamma \), and \( {F}_{{\gamma }_{ \bullet }}{}^{2}W\left( { \equiv {F}_{{\gamma }_{ \bullet }}{F}_{{\gamma }_{ \bullet }}W}\right) \) is called the neighbor’s 3-acceleration relative to \( \gamma \) . Both \( \left( {{F}_{{\gamma }_{ \bullet }}W}\right) u \) and \( \left( {{F}_{{\gamma }_{ \bullet }}{}^{2}W}\right) u \) lie in the local rest space of \( \left( {{\gamma u},{\gamma }_{ * }u}\right) \forall u \in \mathcal{E} \) (which accounts for the " 3 " in these definitions) because, according to Proposition 2.2.2b and 2.2.2c, denoting the 3-velocity by \( \mathbf{V} \) for the moment, \( p\mathbf{W} = \mathbf{W} \) and \( q\mathbf{W} = 0 \Rightarrow \mathbf{V} = p{D}_{\gamma, p}p\mathbf{W} \) (Proposition 2.2.1) \( \Rightarrow {pV} = V \), and similarly for the acceleration. Newtonian analogue. Let \( \overrightarrow{x}\left( t\right) \) be the path of a point particle in Euclidean 3-space and \( \overrightarrow{y}\left( t\right) \) be another such path with \( \overrightarrow{n}\left( t\right) = \overrightarrow{y}\left( t\right) - \overrightarrow{x}\left( t\right) \) small \( \forall t \in \) Newtonian time axis. Regard \( \overrightarrow{n}\left( t\right) \) as an element of the tangent space at \( \overrightarrow{x}\left( t\right) \) . Then \( W,{F}_{{\gamma }_{ * }}W \), and \( {F}_{{\gamma }_{ * }}{}^{2}W \) are, respectively, like \( \overrightarrow{n} \) , \( d\overrightarrow{n}/{dt},{d}^{2}\overrightarrow{n}/d{t}^{2} \) . In Definition 2.3.2, the key quantity is \( {\mathbf{W}}^{\prime } \) . We have chosen to call \( \mathbf{W} \) (instead of \( {\mathbf{W}}^{\prime } \) ) a neighbor simply because for technical discussions, such as Newtonian interpretations, it is more convenient to have the property that a neighbor always lies in the local rest spaces of \( \gamma \) . Moreover, if \( \mathbf{Q} \) is geodesic, then a neighbor \( \mathbf{W} \) of \( \gamma \) in \( \mathbf{Q} \) must himself satisfy \( {L}_{\mathbf{Q}}\mathbf{W} = 0 \) (Exercise 2.0.4). Conceptually, one thinks of a neighbor as an "infinitesimally nearby" observer in \( \mathbf{Q} \) (compare Section 2.0.3, especially (d)). A neighbor replaces the more cumbersome concept of a one-parameter family of neighboring observers, in the same way that Jacobi fields replace a one-parameter family of geodesics in Riemannian geometry. Our next proposition in fact shows that a neighbor in a geodesic reference frame necessarily satisfies the Lorentzian version of the Jacobi equation. Compare the remarks in Section 2.1.2. We need some notation for the next two propositions, which give mathematical interpretations of a neighbor's 3-acceleration and 3-velocity, respectively. Let \( \left( {z, Z}\right) \) be an instantaneous observer and let \( {M}_{z} = R \oplus T \) be his associated orthogonal decomposition. Denoting the curvature tensor of \( \left( {M, g, D}\right) \) by \( \mathbf{R} \) as usual, we define a linear transformation \( {\psi }_{Z} : R \rightarrow R \) by \( {\psi }_{Z}X \rightarrow {R}_{ZX}Z,\forall X \in R \) . 
That, in fact, \( {\psi }_{Z}R \subset R \) follows from: \( g\left( {{\psi }_{Z}X, Z}\right) = g\left( {{R}_{ZX}Z, Z}\right) = 0\;\forall X \in R \), because \( {R}_{ZX} \) is skew-adjoint (Section 1.0.2 and Exercise 1.0.6). \( {\psi }_{Z} \) is self-adjoint with respect to \( {\left. {g}_{z}\right| }_{R} \) because \( \forall V, W \in R \), \( g\left( {{\psi }_{Z}V, W}\right) = g\left( {{R}_{ZV}Z, W}\right) = \widehat{R}\left( {W, Z, Z, V}\right) = \widehat{R}\left( {V, Z, Z, W}\right) = g\left( {{R}_{ZW}Z, V}\right) = g\left( {{\psi }_{Z}W, V}\right) \), where \( \widehat{\mathbf{R}} \) is the \( \left( {0,4}\right) \) tensor field physically equivalent to \( \mathbf{R} \) (Section 1.0.2). Proposition 2.3.3. Let \( Q \) be a geodesic reference frame and let \( W \) be a neighbor of an observer \( \gamma : \mathcal{E} \rightarrow M \) in \( Q \) . Then the 3-acceleration of \( W \) relative to \( \gamma \) satisfies: \( {F}_{{\gamma }_{ \bullet }}{}^{2}W = {\psi W} \), where \( \left( {\psi W}\right) u = {\psi }_{{\gamma }_{ \bullet }u}\left( {Wu}\right) \forall u \in \mathcal{E} \) . Proof. By Exercise 2.0.4, \( {L}_{Q}W = 0 \) . Fix a \( u \in \mathcal{E} \), and let \( \widetilde{W} \) be a vector field defined in some neighborhood of \( {\gamma u} \) such that \( \left\lbrack {\widetilde{W}, Q}\right\rbrack = 0 \) and \( \widetilde{W} \circ \gamma = W \) (Section 2.0.3). Now \( {D}_{Q}{}^{2}\widetilde{W} = {D}_{Q}{D}_{Q}\widetilde{W} = {D}_{Q}\left( {{D}_{\widetilde{W}}Q + \left\lbrack {Q,\widetilde{W}}\right\rbrack }\right) = {D}_{Q}{D}_{\widetilde{W}}Q = {R}_{Q\widetilde{W}}Q + {D}_{\widetilde{W}}{D}_{Q}Q + {D}_{\left\lbrack {Q,\widetilde{W}}\right\rbrack }Q = {R}_{Q\widetilde{W}}Q \) ; the last equality is because \( {D}_{Q}Q = 0 \) by assumption. Restricting to \( \gamma \), we have \( {D}_{{\gamma }_{ \bullet }}{}^{2}W = {\mathbf{R}}_{{\gamma }_{ \bullet }W}{\gamma }_{ * } \) . Since \( \gamma \) is a geodesic, the Fermi-Walker connection \( F \) coincides with \( {\gamma }^{ * }D \) (Proposition 2.2.2a). Thus the preceding equation is equivalent to \( {F}_{{\gamma }_{ \bullet }}{}^{2}W = {R}_{{\gamma }_{ \bullet }W}{\gamma }_{ * } = {\psi W} \) . Proposition 2.3.3 indicates the basic way to check, when inside a freely falling elevator, whether one is in the trivial gravitational field: Take two freely falling apples in the elevator; if they have nonvanishing relative 3-accelerations, then \( \mathbf{R} \neq 0 \) and spacetime is not isometric to Minkowski space. For Proposition 2.3.4, recall that given an observer \( \gamma : \mathcal{E} \rightarrow M \), \( {R}_{u} \) denotes \( \gamma \) ’s local rest space at \( u \) . Proposition 2.3.4. Let \( Q \) be a reference frame and let \( \gamma : \mathcal{E} \rightarrow M \) be an observer in \( \mathbf{Q} \) . The assignment \( X \rightarrow - {D}_{X}\mathbf{Q} \) then defines a linear transformation \( {A}_{Q} : {R}_{u} \rightarrow {R}_{u} \) which assigns to each neighbor of \( \gamma \) in \( Q \) the negative of his 3-velocity relative to \( \gamma \) . The negative sign in the definition \( {A}_{Q}X = - {D}_{X}Q \) has no special significance; it is merely a convention universally adopted by differential geometers (cf. Sections 8.1.9 and 8.1.10 for more on \( {A}_{Q} \) ). Proof of 2.3.4. We first show that \( {A}_{Q}{R}_{u} \subset {R}_{u} \) . This is because \( g\left( {{A}_{Q}X, Q}\right) = - g\left( {{D}_{X}Q, Q}\right) = - \frac{1}{2}{Xg}\left( {Q, Q}\right) = \frac{1}{2}{X1} = 0,\forall X \in {R}_{u} \) . 
Next let \( W \) be a neighbor of \( \gamma \) in \( \mathbf{Q} \) . We have to show that \( \forall u \in \mathcal{E},{F}_{{\gamma }_{ \bullet }u}W = {D}_{Wu}Q \) . This is equivalent to showing \( g\left( {{F}_{{\gamma }_{ \bullet }u}W, V}\right) = g\left( {{D}_{Wu}Q, V}\right) \forall V \in {R}_{u} \) . By Proposition 2.2.2d, \( g\left( {{F}_{{\gamma }_{ \bullet }u}W, V}\right) = g\left( {{D}_{{\gamma }_{ \bullet }u}W, V}\right) \) . Thus it suffices to show: \[ g\left( {{D}_{{\gamma }_{ \bullet }u}W, V}\right) = g\left( {{D}_{Wu}Q, V}\right) \] \( \forall V \in {R}_{u} \) . Let \( {W}^{\prime } \) be a vector field over \( \gamma \) such that \( p{W}^{\prime } = W \) and \( {L}_{Q}{W}^{\prime } = 0 \) . Write \( {W}^{\prime } - W = f{\gamma }_{ * } \) for some \( {C}^{\infty } \) function \( f \) over \( \gamma \), and let \( {\widetilde{W}}^{\prime } \) be a vector field defined in some neighborhood \( \mathcal{U} \) of \( {\gamma u} \) such that \( {\widetilde{W}}^{\prime } \circ \gamma = {W}^{\prime } \) and \( \left\lbrack {{\widetilde{W}}^{\prime }, Q}\right\rbrack = 0 \) (Section 2.0.3). We may assume \( \mathcal{U} \) is so small that there exists a \( {C}^{\infty } \) function \( F : \mathcal{U} \rightarrow \mathbb{R} \) such that \( F \circ \gamma = f \) . Define a vector field \( \widetilde{W} \) in \( \mathcal{U} \) by \( \widetilde{W} = {\widetilde{W}}^{\prime } - {FQ} \) ; then \( \widetilde{W} \circ \gamma = W \) . Now, \[ {D}_{\widetilde{W}}Q = {D}_{Q}\widetilde{W} + \left\lbrack {\widetilde{W}, Q}\right\rbrack \] \[ = {D}_{Q}\widetilde{W} + \left\lbrack {{\widetilde{W}}^{\prime } - {FQ}, Q}\right\rbrack \] \[ = {D}_{Q}\widetilde{W} - \left\lbrack {{FQ}, Q}\right\rbrack \] \[ = {D}_{Q}\widetilde{W} + \left( {QF}\right) Q\text{.} \] At \( u \), this becomes \[ {D}_{Wu}Q = {D}_{{\gamma }_{ * }u}W + {f}^{\prime }\left( u\right) {\gamma }_{ * }u \] where \( {f}^{\prime } \) denotes the derivative of \( f \) as usual. This immediately implies that \[ g\left( {{D}_{{\gamma }_{ \bullet }u}W, V}\right) = g\left( {{D}_{Wu}Q, V}\right) \forall V \in {R}_{u},\text{ as desired. } \] \( \mathbf{Q} \) is called irrotational at \( x = {\gamma u} \) iff \( {A}_{\mathbf{Q}} \) is self-adjoint with respect to \( {\left. {g}_{x}\right| }_{{R}_{u}} \), rigid at \( x \) iff \( {A}_{Q} \) is skew-adjoint with respect to \( {\left. {g}_{x}\right| }_{{R}_{u}} \), and irrotational or rigid iff it is irrotational or rigid at every \( x \in M \) (cf. Section 8.1.10). The terminology intentionally parallels that of Newtonian hydrodynamics and can be motivated mathematically, as follows. We have seen that a neighbor of an observer \( \gamma \) in \( Q \) corresponds to a one-parameter family of \( \gamma \) ’s neighboring observers in \( \mathbf{Q} \) . Proposition 2.3.4 therefore implies that \( {A}_{\mathbf{Q}} : {R}_{u} \rightarrow {R}_{u} \) is the algebraic object that
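Proposition 2.3.3 above says that in a geodesic reference frame a neighbor's relative 3-acceleration is governed by the curvature operator \( \psi \). The following is only a minimal numerical sketch of that behaviour (not the text's construction): it assumes a constant toy "tidal" matrix standing in for \( \psi \), and integrates the resulting second-order equation for the separation vector; all names and values are illustrative.

```python
import numpy as np

# Toy model of F^2 W = psi(W): relative acceleration of a neighbor driven by a
# constant "tidal" operator psi.  psi = 0 would be the flat (Minkowski) case,
# in which freely falling neighbors show no relative 3-acceleration.
psi = np.array([[-1.0, 0.0],
                [0.0,  0.5]])   # assumed toy matrix, for illustration only

def evolve(n0, v0, dt=1e-3, steps=5000):
    """Integrate d^2 n/dt^2 = psi @ n with simple explicit Euler steps."""
    n, v = np.array(n0, float), np.array(v0, float)
    for _ in range(steps):
        v += (psi @ n) * dt     # relative 3-acceleration updates the 3-velocity
        n += v * dt             # relative 3-velocity updates the separation
    return n

# Separation oscillates/shrinks along the negative-eigenvalue axis and grows
# along the positive-eigenvalue axis.
print(evolve([1.0, 1.0], [0.0, 0.0]))
```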
113_Topological Groups
Definition 18.13
Definition 18.13. Let \( \left\langle {{\mathfrak{A}}_{i} : i \in I}\right\rangle \) be a system of \( \mathcal{L} \) -structures, where \( \mathcal{L} \) is any first-order language. By the product \( \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i} \) of the system \( \left\langle {{\mathfrak{A}}_{i} : i \in I}\right\rangle \) we mean the \( \mathcal{L} \) -structure \( \mathfrak{B} \) with universe \( B = \mathop{P}\limits_{{i \in I}}{A}_{i} = \{ f : f \) is a function, \( \operatorname{Dmn}f = I \), and for each \( \left. {i \in I,{f}_{i} \in {A}_{i}}\right\} \), and with relations and operations as follows: (i) if \( \mathbf{O} \) is an \( m \) -ary operation symbol and \( {x}_{0},\ldots ,{x}_{m - 1} \in B \), then for any \( i \in I \) , \[ {\left\lbrack {\mathbf{O}}^{\mathfrak{B}}\left( {x}_{0},\ldots ,{x}_{m - 1}\right) \right\rbrack }_{i} = {\mathbf{O}}^{\mathfrak{A}i}\left( {{x}_{0i},\ldots ,{x}_{m - 1, i}}\right) ; \] (ii) if \( \mathbf{R} \) is an \( m \) -ary relation symbol, then \[ {\mathbf{R}}^{\mathfrak{B}} = \left\{ {\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \in {}^{m}B : \forall i \in I\left( {{x}_{0i},\ldots ,{x}_{m - 1, i}}\right) \in {\mathbf{R}}^{\mathfrak{A}i}}\right\} . \] If \( {\mathfrak{A}}_{i} = \mathfrak{C} \) for all \( i \in I \) we write \( {}^{I}\mathfrak{C} \) instead of \( \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i};{}^{I}\mathfrak{C} \) is the \( {I}^{\text{th }} \) direct power of \( \mathfrak{C} \) . In case \( I \) has exactly two elements \( i, j \), we write \( {\mathfrak{A}}_{i} \times {\mathfrak{A}}_{j} \) instead of \( {\mathrm{P}}_{i \in I}{\mathfrak{A}}_{i} \) . Thus the product of two structures \( \mathfrak{C} \) and \( \mathfrak{D} \) has been defined and is denoted by \( \mathfrak{C} \times \mathfrak{D} \) ; we think of it as \( {P}_{i \in 2}{\mathfrak{A}}_{i} \), where \( {\mathfrak{A}}_{0} = \mathfrak{C} \) and \( {\mathfrak{A}}_{1} = \mathfrak{D} \) . One final bit of notation for the general product \( \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i} \) : for each \( i \in I \) we denote by \( {\operatorname{pr}}_{i}^{\mathfrak{A}} \), or simply \( {\operatorname{pr}}_{i} \) when \( \mathfrak{A} \) is understood from the context, the projection function whose domain is \( \mathop{P}\limits_{{i \in I}}{A}_{i} \) such that for any \( x \in \mathop{P}\limits_{{i \in I}}{A}_{i} \) , \[ {\operatorname{pr}}_{i}x = {x}_{i}. \] The following useful proposition is easily established by induction on \( \sigma \) : Proposition 18.14. Let \( \mathfrak{B} = \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i} \) . For any term \( \sigma \), any \( x \in {}^{\omega }\mathop{P}\limits_{{i \in I}}{A}_{i} \), and any \( i \in I,{\left( {\sigma }^{\mathfrak{B}}x\right) }_{i} = {\sigma }^{\mathfrak{A}i}\left( {{\mathrm{{pr}}}_{i} \circ x}\right) . \) Now we give the basic definitions and a few basic facts concerning ultrafilters. Filters and ultrafilters are used in various parts of mathematics, especially in general topology. Their usefulness in logic is mainly connected with the ultraproduct construction and with certain large cardinals that arise in the study of infinitary languages and the metamathematics of set theory. Definition 18.15. Let \( I \) be any set and \( F \) a collection of subsets of \( I \) . (i) \( F \) has the finite intersection property if \( {a}_{0} \cap \cdots \cap {a}_{m - 1} \neq 0 \) whenever \( m \in \omega \) and \( a \in {}^{m}F \) . 
(ii) \( F \) is a filter over \( I \) provided \( F \neq 0 \) and: (a) \( b \supseteq a \in F \) implies \( b \in F \) ; (b) \( a, b \in F \) implies \( a \cap b \in F \) . (iii) \( F \) is an ultrafilter over \( I \) provided \( F \) is a filter over \( I,0 \notin F \), and for any \( J \subseteq I \), either \( J \in F \) or \( I \sim J \in F \) . The following proposition is obvious. Proposition 18.16. If \( F \) is a filter and \( 0 \notin F \), then \( F \) has the finite intersection property. For any filter \( F \) over \( I \) we have \( I \in F \) . For any filter \( F \) on \( I,0 \in F \) iff \( F \) is the set of all subsets of \( I \) . Proposition 18.17. Let \( F \) be a filter over \( I \) with \( 0 \notin F \) . Then the following conditions are equivalent: (i) \( F \) is an ultrafilter. (ii) for all \( a, b \subseteq I \), if \( a \cup b \in F \), then \( a \in F \) or \( b \in F \) . (iii) for any filter \( G \) on \( I \), if \( F \subset G \), then \( 0 \in G \) . Proof. \( \left( i\right) \Rightarrow \left( {ii}\right) \) . Assume that \( a \cup b \in F \) while \( a \notin F \) . Then by \( \left( i\right), I \sim \) \( a \in F \) . Now \( \left( {I \sim a}\right) \cap \left( {a \cup b}\right) \subseteq b \), so \( b \in F \) . (ii) \( \Rightarrow \) (iii). Say \( a \in G \sim F \) . Then \( a \cup \left( {I \sim a}\right) = I \in F \), so by (ii) \( I \sim a \in F \subseteq G \) . Hence \( 0 = a \cap \left( {I \sim a}\right) \in G \) . (iii) \( \Rightarrow \left( i\right) \) . Assume that \( a \subseteq I \) and \( a \notin F \) . Let \( G = \{ b \subseteq I \) : there is an \( x \in F \) with \( x \cap a \subseteq b\} \) . Clearly \( F \subseteq G, a \in G \), and \( G \) is a filter. In particular \( F \subset G \) , so \( 0 \in G \) by (iii). Hence \( x \cap a = 0 \) for some \( x \in F \) ; hence \( x \subseteq I \sim a \), so \( I \sim a \in F \) . The following is the basic existence principle for ultrafilters. Proposition 18.18. If \( F \) is a collection of subsets of \( I \neq 0 \) with the finite intersection property, then there is an ultrafilter \( G \) such that \( F \subseteq G \) . Proof. Let \( \mathcal{A} \) be the collection of all filters \( G \) such that \( F \subseteq G \) and \( 0 \notin G \) . Then \( \mathcal{A} \) is nonempty; for, let \( H = \{ x \subseteq I \) : there exist \( m \in \omega \) and \( y \in {}^{m}F \) with \( \left. {{y}_{0} \cap \cdots \cap {y}_{m - 1} \subseteq x}\right\} \) . Clearly \( H \in \mathcal{A} \) . If \( 0 \neq \mathcal{B} \subseteq \mathcal{A} \) is simply ordered by inclusion, clearly \( \bigcup \mathcal{B} \in \mathcal{A} \) . By Zorn’s lemma, let \( G \) be a maximal element of \( \mathcal{A} \) . By 18.17, \( G \) is an ultrafilter. Definition 18.19. Let \( \mathfrak{A} = \left\langle {{\mathfrak{A}}_{i} : i \in I}\right\rangle \) be a system of \( \mathcal{L} \) -structures, and let \( F \) be an ultrafilter on \( I \) . We define \[ {\bar{F}}^{\mathfrak{A}} = \left\{ {\left( {x, y}\right) \in {}^{2}{\mathrm{P}}_{i \in I}{A}_{i} : \left\{ {i : {x}_{i} = {y}_{i}}\right\} \in F}\right\} . \] We write \( \bar{F} \) if \( \mathfrak{A} \) is understood. Obviously \( \bar{F} \) depends only on the system \( \left\langle {{A}_{i} : i \in I}\right\rangle \) and not at all on the language \( \mathcal{L} \) . Proposition 18.20. Under the assumptions of 18.19, \( \bar{F} \) is an equivalence relation on \( {\mathrm{P}}_{i \in I}{A}_{i} \) . Let \( \mathfrak{B} = {\mathrm{P}}_{i \in I}{\mathfrak{A}}_{i} \) . 
Then (i) if \( \mathbf{O} \) is an m-ary operation symbol, and if \( {x}_{t}\bar{F}{y}_{t} \) for all \( t < m \), then \( {\mathbf{O}}^{\mathfrak{B}}x\bar{F}{\mathbf{O}}^{\mathfrak{B}}y \) ; (ii) if \( \mathbf{R} \) is an m-ary relation symbol, and if \( {x}_{t}\bar{F}{y}_{t} \) for all \( t < m \), then \( \{ i \in I : \langle {x}_{0i},\ldots ,{x}_{m - 1, i}\rangle \in {\mathbf{R}}^{\mathfrak{A}i}\} \in F\; \) iff \( \{ i \in I : \langle {y}_{0i},\ldots ,{y}_{m - 1, i}\rangle \in {\mathbf{R}}^{\mathfrak{A}i}\} \in F. \) Proof. The proof is routine, and we just check \( \left( i\right) \) and the transitivity of \( \bar{F} \) as examples. Assume that \( x\bar{F}y\bar{F}z \) . Thus \( \left\{ {i \in I : {x}_{i} = {y}_{i}}\right\} \in F \) and \( \left\{ {i \in I : {y}_{i} = {z}_{i}}\right\} \in F \) . But \[ \left\{ {i \in I : {x}_{i} = {y}_{i}}\right\} \cap \left\{ {i \in I : {y}_{i} = {z}_{i}}\right\} \subseteq \left\{ {i \in I : {x}_{i} = {z}_{i}}\right\} , \] so also \( \left\{ {i \in I : {x}_{i} = {z}_{i}}\right\} \in F \), and so \( x\bar{F}z \) . To check \( \left( i\right) \), note that \[ \left\{ {i \in I : \forall t < m\left( {{x}_{ti} = {y}_{ti}}\right) }\right\} = \mathop{\bigcap }\limits_{{t < m}}\left\{ {i \in I : {x}_{ti} = {y}_{ti}}\right\} \in F, \] and hence \[ \left\{ {i \in I : \forall t < m\left( {{x}_{ti} = {y}_{ti}}\right) }\right\} \subseteq \left\{ {i \in I : {\left( {\mathbf{O}}^{\mathfrak{B}}x\right) }_{i} = {\left( {\mathbf{O}}^{\mathfrak{B}}y\right) }_{i}}\right\} \in F, \] so \( {\mathbf{O}}^{\mathfrak{B}}x\bar{F}{\mathbf{O}}^{\mathfrak{B}}y \) . We may think of the members of an ultrafilter \( F \) as "big" subsets of \( I \) . Thus the passage from a member \( x \in \mathop{P}\limits_{{i \in I}}{A}_{i} \) to its equivalence class under \( \bar{F} \) amounts to identifying all functions which are equal to \( x \) on a "big" subset of \( I \) . As usual, \( {\left\lbrack x\right\rbrack }_{\bar{F}} \), or simply \( \left\lbrack x\right\rbrack \) if \( \bar{F} \) is understood, denotes the equivalence class of \( x \) under \( \bar{F} \) . Proposition 18.20 justifies the definition of ultraproducts: Definition 18.21. Let \( \mathfrak{A} = \left\langle {{\mathfrak{A}}_{i} : i \in I}\right\rangle \) be a system of \( \mathcal{L} \) -structures, set \( \mathfrak{B} = \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i} \), and let \( F \) be an ultrafilter on \( I \) . The ultraproduct of \( \mathfrak{A} \) over \( \bar{F} \) , denoted by \( \mathfrak{A}/\bar{F} \) or \( {\mathrm{P}}_{i \in I}{\mathfrak{A}}_{i}/\bar{F} \), is the structure \( \mathfrak{C} \) with universe \( C = {\mathrm{P}}_{i \in I}{A}_{i}/\bar{F} \) (the collection of all equivalence classes under \( \bar{F} \) ) and with operations and relations given as follows: (i) if \( \mathbf{O} \) is an \( m \) -ary operation symbol and \( x \in {}^{m}B \), then \( {\mathbf{O}}^{\mathfrak{C}}\left( {\left\lbrack {x}_{0}\right\r
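A small computational sketch of the preceding definitions on a finite index set (purely illustrative; infinite index sets and nonprincipal ultrafilters are of course beyond such a check). It tests the filter and ultrafilter conditions of Definition 18.15 and exhibits the relation \( \bar{F} \) of Definition 18.19 for a principal ultrafilter; all names here are made up for the example.

```python
from itertools import combinations

# Finite toy universe I = {0,1,2,3}; every subset is represented as a frozenset.
I = frozenset(range(4))
subsets = [frozenset(s) for r in range(len(I) + 1) for s in combinations(sorted(I), r)]

def is_filter(F):
    return (len(F) > 0
            and all(b in F for a in F for b in subsets if a <= b)   # closed upward
            and all(a & b in F for a in F for b in F))              # closed under intersection

def is_ultrafilter(F):
    return (is_filter(F) and frozenset() not in F
            and all(J in F or (I - J) in F for J in subsets))       # Definition 18.15(iii)

# The principal ultrafilter generated by the point 2.
U = {J for J in subsets if 2 in J}
print(is_ultrafilter(U))                                    # True
print(is_ultrafilter({J for J in subsets if {0, 1} <= J}))  # False: a filter, but not ultra

# F-bar for this ultrafilter: x ~ y iff they agree on a "big" set, i.e. iff x(2) == y(2).
x, y = (7, 1, 5, 0), (0, 0, 5, 9)
print(frozenset(i for i in I if x[i] == y[i]) in U)         # True, so x F-bar y
```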
110_The Schwarz Function and Its Generalization to Higher Dimensions
Definition 4.8
Definition 4.8. A subset \( C \) in an affine space \( E \) is called a convex set if for \( x, y \in C \), the line segment \[ \left\lbrack {x, y}\right\rbrack \mathrel{\text{:=}} \{ \left( {1 - \lambda }\right) x + {\lambda y} : 0 \leq \lambda \leq 1\} \] lies in \( C \) . In other words, if \( x, y \in C \), then \( \left( {1 - \lambda }\right) x + {\lambda y} \in C \) for all \( 0 \leq \lambda \leq 1 \) . In Figure 4.1, the sets \( A \) and \( B \) are convex. However, the set \( C \) is non-convex, since \( x \) and \( y \) are in \( C \), but not all of the line segment connecting \( x \) and \( y \) . ![968fd3dd-2b91-4cd3-8e1b-204f8f1c2faa_105_0.jpg](images/968fd3dd-2b91-4cd3-8e1b-204f8f1c2faa_105_0.jpg) Fig. 4.1. Convex and nonconvex sets. Lemma 4.9. Let \( E \) be an affine space. The following statements are true. (a) Intersections of convex sets are convex: if \( {\left\{ {C}_{\gamma }\right\} }_{\gamma \in \Gamma } \) is a family of convex sets in \( E \), then \( { \cap }_{\gamma \in \Gamma }{C}_{\gamma } \) is a convex set. (b) Minkowski sums of convex sets are convex: if \( {\left\{ {C}_{i}\right\} }_{i = 1}^{k} \) is a set of convex sets, then their Minkowski sum \[ {C}_{1} + \cdots + {C}_{k} \mathrel{\text{:=}} \left\{ {{x}_{1} + \cdots + {x}_{k} : {x}_{i} \in {C}_{i}, i = 1,\ldots, k}\right\} \] is a convex set. (c) An affine image of a convex set is convex: if \( C \subseteq E \) is a convex set and \( T : E \rightarrow F \) is an affine map from \( E \) into another affine space \( F \), then \( T\left( C\right) \subseteq F \) is also a convex set. Proof. These statements are all easy to prove; we prove only (a). Let \( x, y \in \) \( C \mathrel{\text{:=}} { \cap }_{\gamma \in \Gamma }{C}_{\gamma } \) . For each \( \gamma \in \Gamma \), we have \( x, y \in {C}_{\gamma } \), and since \( {C}_{\gamma } \) is convex, \( \left\lbrack {x, y}\right\rbrack \subseteq {C}_{\gamma } \) ; therefore, \( \left\lbrack {x, y}\right\rbrack \subseteq C \) and \( C \) is a convex set. Definition 4.10. Let \( {\left\{ {x}_{i}\right\} }_{1}^{k} \) be a finite set of points in an affine space \( E \) . \( A \) convex combination of \( {\left\{ {x}_{i}\right\} }_{1}^{k} \) is any point of the form \[ \mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}{x}_{i},\;{\lambda }_{i} \geq 0,\;\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i} = 1 \] Let \( A \subseteq E \) be a nonempty set. The convex hull of \( A \) is the set of all convex combinations of points from \( A \), that is, \[ \operatorname{co}\left( A\right) \mathrel{\text{:=}} \left\{ {\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}{x}_{i} : {x}_{i} \in A,\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i} = 1,{\lambda }_{i} \geq 0, k \geq 1}\right\} . \] Theorem 4.11. Let \( A \neq \varnothing \) be a subset of an affine space \( E \) . Then \( \operatorname{co}\left( A\right) \) is a convex set; in fact, \( \operatorname{co}\left( A\right) \) is the smallest convex set containing \( A \) . Proof. The proof is essentially a repeat of the proof of Lemma 4.2, but we now make the additional requirements that \( 0 < \alpha < 1 \) and that \( {\left\{ {\lambda }_{i}\right\} }_{1}^{k},{\left\{ {\mu }_{j}\right\} }_{1}^{l} \) be nonnegative in that proof. It suffices to note that all the affine combinations now become convex combinations. The following result is an immediate consequence of the above theorem. Corollary 4.12. 
If \( C \) is a convex set in an affine space \( E \), then \( \operatorname{co}\left( C\right) = C \) , that is, all convex combinations of elements from \( C \) lie in \( C \) , \[ {\lambda }_{i} \geq 0,{x}_{i} \in C, i = 1,\ldots, k,\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i} = 1\; \Rightarrow \;\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}{x}_{i} \in C. \] The following theorem, due to Carathéodory [53], is a fundamental result in convexity in finite-dimensional vector spaces, and has many applications, including in optimization. Theorem 4.13. (Carathéodory) Let \( A \) be a nonempty subset of an affine space \( E \) . Every element of \( \operatorname{co}\left( A\right) \) can be represented as a convex combination of affinely independent elements from \( A \) . Consequently, if \( n = \dim \left( {\operatorname{aff}\left( A\right) }\right) < \infty \), then every element of \( \operatorname{co}\left( A\right) \) can be represented as a convex combination of at most \( n + 1 \) elements from \( A \) ; in other words, \[ \operatorname{co}\left( A\right) = \left\{ {\mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i}{x}_{i} : {x}_{i} \in A,{\lambda }_{i} \geq 0, i = 1,\ldots, n + 1,\mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i} = 1}\right\} . \] Proof. Let \[ x = \mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}{x}_{i} \in \operatorname{co}\left( A\right) ,\text{ where }\mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i} = 1,\;{\lambda }_{i} > 0. \] (4.3) If \( {\left\{ {x}_{i}\right\} }_{1}^{k} \) is affinely independent, then \( {\left\{ {x}_{i} - {x}_{1}\right\} }_{2}^{k} \) is linearly independent and \( k - 1 \leq n \) ; thus, \( k \leq n + 1 \), and the theorem is proved. Suppose that \( {\left\{ {x}_{i}\right\} }_{1}^{k} \) is affinely dependent. It follows from Lemma 4.6 that there exist scalars \( {\left\{ {\delta }_{i}\right\} }_{1}^{k} \) such that \[ \mathop{\sum }\limits_{{i = 1}}^{k}{\delta }_{i}{x}_{i} = 0,\;\mathop{\sum }\limits_{{i = 1}}^{k}{\delta }_{i} = 0,\left( {{\delta }_{1},\ldots ,{\delta }_{k}}\right) \neq 0. \] (4.4) If we subtract from (4.3) \( \varepsilon \) times (4.4) \( \left( {\varepsilon > 0}\right) \), we obtain \[ x = \left( {{\lambda }_{1} - \varepsilon {\delta }_{1}}\right) {x}_{1} + \cdots + \left( {{\lambda }_{k} - \varepsilon {\delta }_{k}}\right) {x}_{k},\;\mathop{\sum }\limits_{{i = 1}}^{k}\left( {{\lambda }_{i} - \varepsilon {\delta }_{i}}\right) = 1. \] (4.5) Since \( \mathop{\sum }\limits_{{i = 1}}^{k}{\delta }_{i} = 0 \), there exist positive and negative scalars \( {\delta }_{i} \) . If \( {\delta }_{i} \leq 0 \) , then \( {\lambda }_{i} - \varepsilon {\delta }_{i} \geq 0 \) remains nonnegative for all \( \varepsilon \geq 0 \) ; however, if \( {\delta }_{i} > 0 \), then \( {\lambda }_{i} - \varepsilon {\delta }_{i} \geq 0 \) if and only if \( \varepsilon \leq {\lambda }_{i}/{\delta }_{i} \) . Therefore, if we set \( \varepsilon = \min \left\{ {{\lambda }_{i}/{\delta }_{i} : {\delta }_{i} > }\right. \) \( 0\} \), then \( x \) remains a convex combination in (4.5), but has at least one fewer term. We can continue this process until the vectors \( \left\{ {{x}_{2} - {x}_{1},\ldots ,{x}_{k} - {x}_{1}}\right\} \) in the representation (4.3) are linearly independent. When we halt, we will have \( k \leq n + 1 \) . We immediately have the following. Corollary 4.14. 
If \( A \) is a nonempty subset of an \( n \) -dimensional vector space \( E \), then every element of \( \operatorname{co}\left( A\right) \) can be represented as a convex combination of at most \( n + 1 \) elements from \( A \) . Corollary 4.15. If \( C \) is a nonempty compact subset of a finite-dimensional affine space \( E \), then so is the set \( \operatorname{co}\left( C\right) \) . Proof. It follows from Theorem 4.13 that \[ \operatorname{co}\left( C\right) = \left\{ {\mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i}{x}_{i} : {\lambda }_{i} \geq 0,{x}_{i} \in C, i = 1,\ldots, n + 1,\mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i} = 1}\right\} , \] where \( n = \dim \left( C\right) \) . Consider a sequence \( {\left\{ {x}^{k}\right\} }_{1}^{\infty } \) in \( \operatorname{co}\left( C\right) \), where \[ {x}^{k} = \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i}^{k}{x}_{i}^{k} \] Since \( C \) is compact, the sequence \( \left\{ {x}_{1}^{k}\right\} \) has a convergent subsequence \( {x}_{1}^{{k}_{j}} \rightarrow {x}_{1} \in C \) . Next, let the sequence \( \left\{ {x}_{2}^{{k}_{j}}\right\} \) have convergent subsequence \( {x}_{2}^{{k}_{j}} \rightarrow {x}_{2} \in C \), and so on. Eventually, we can find a subsequence \( {k}_{j} \) such that \[ \mathop{\lim }\limits_{{j \rightarrow \infty }}{x}_{i}^{{k}_{j}} = {x}_{i} \in C\text{ for all }i = 1,\ldots, n + 1. \] Using the same arguments, we can assume that \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{\lambda }_{i}^{{k}_{j}} = {\lambda }_{i} \geq 0 \) for all \( i = 1,\ldots, n + 1 \) . Then we have \( \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i} = 1 \), and \[ {x}^{{k}_{j}} \rightarrow \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i}{x}_{i} \in \operatorname{co}\left( C\right) \] This proves that \( \operatorname{co}\left( C\right) \) is compact. An alternative proof runs as follows: Let \[ {\Delta }_{n} \mathrel{\text{:=}} \left\{ {\left( {{\lambda }_{1},\ldots ,{\lambda }_{n + 1}}\right) : {\lambda }_{i} \geq 0, i = 1,\ldots, n + 1,\mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i} = 1}\right\} \] be the standard unit simplex in \( {\mathbb{R}}^{n + 1} \), and consider the map \[ {\Delta }_{n} \times \underset{n + 1\text{ times }}{\underbrace{C \times \cdots \times C}} \rightarrow E \] given by \[ T\left( {{\lambda }_{1},\ldots ,{\lambda }_{n + 1},{x}_{1},\ldots ,{x}_{n + 1}}\right) = \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i}{x}_{i}. \] Note that the image of \( T \) is \( \operatorname{co}\left( C\right) \) . Since the map \( T \) is continuous and the domain of \( T \) is compact \( \left( {\Delta }_{n}\right. \) and \( C \) are compact), we conclude that \( \operatorname{co}\left( C\right) \) is compact. We also record here the following elementary results. Lemma 4.16. Let \( E \) be an affine space in a normed vector space. If \( C \subseteq E \) is a convex set, then its closure \( \bar{C} \) is also a convex set. If \( {C}_{1},{C}_{2} \subseteq E \) are convex sets, \( {C}_{1} \) is compact, and \( {C}_{2} \) is closed, then
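The \( \varepsilon \)-argument in the proof of Theorem 4.13 is effectively an algorithm: while the points carrying positive weight are affinely dependent, shift the weights along the dependence until one of them vanishes. The following is a minimal numerical sketch of that reduction (illustrative only; the function name and tolerances are ad hoc, and the affine dependence is found with an SVD rather than by hand).

```python
import numpy as np

def caratheodory(points, weights, tol=1e-10):
    """Rewrite x = sum_i weights[i]*points[i] as a convex combination of
    affinely independent points (hence at most n+1 of them), following the
    epsilon-trick in the proof of Theorem 4.13."""
    pts = np.asarray(points, float)
    lam = np.asarray(weights, float)
    while True:
        keep = lam > tol                       # drop terms whose weight has hit zero
        pts, lam = pts[keep], lam[keep]
        k = len(lam)
        A = np.vstack([pts.T, np.ones(k)])     # affine dependence <=> A has a null vector
        _, s, Vt = np.linalg.svd(A)
        if np.sum(s > tol) == k:               # affinely independent: done (so k <= n+1)
            return pts, lam
        delta = Vt[-1]                         # sum(delta)=0, sum(delta_i x_i)=0, delta != 0
        pos = delta > tol
        eps = np.min(lam[pos] / delta[pos])    # largest step keeping all weights >= 0
        lam = lam - eps * delta                # kills at least one term

rng = np.random.default_rng(1)
pts = rng.random((7, 2))                       # 7 points in the plane (n = 2)
lam = rng.dirichlet(np.ones(7))                # convex weights
x = lam @ pts
P, L = caratheodory(pts, lam)
print(len(P), np.allclose(x, L @ P), np.isclose(L.sum(), 1.0))   # <= 3 points, same x
```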
1189_(GTM95)Probability-1
Definition 1
Definition 1. Let \( \left( {\Omega ,\mathcal{F}}\right) \) and \( \left( {E,\mathcal{E}}\right) \) be measurable spaces. We say that a function \( X = X\left( \omega \right) \), defined on \( \Omega \) and taking values in \( E \), is \( \mathcal{F}/\mathcal{E} \) -measurable, or is a random element (with values in \( E \) ), if \[ \{ \omega : X\left( \omega \right) \in B\} \in \mathcal{F} \] (1) for every \( B \in \mathcal{E} \) . Random elements (with values in \( E \) ) are sometimes called \( E \) -valued random variables. Let us consider some special cases. If \( \left( {E,\mathcal{E}}\right) = \left( {R,\mathcal{B}\left( R\right) }\right) \), the definition of a random element is the same as the definition of a random variable (Sect. 4). Let \( \left( {E,\mathcal{E}}\right) = \left( {{R}^{n},\mathcal{B}\left( {R}^{n}\right) }\right) \) . Then a random element \( X\left( \omega \right) \) is a "random point" in \( {R}^{n} \) . If \( {\pi }_{k} \) is the projection of \( {R}^{n} \) on the \( k \) th coordinate axis, \( X\left( \omega \right) \) can be represented in the form \[ X\left( \omega \right) = \left( {{\xi }_{1}\left( \omega \right) ,\ldots ,{\xi }_{n}\left( \omega \right) }\right) , \] (2) where \( {\xi }_{k} = {\pi }_{k} \circ X \) . It follows from (1) that \( {\xi }_{k} \) is an ordinary random variable. In fact, for \( B \in \mathcal{B}\left( R\right) \) we have \[ \left\{ {\omega : {\xi }_{k}\left( \omega \right) \in B}\right\} = \left\{ {\omega : {\xi }_{1}\left( \omega \right) \in R,\ldots ,{\xi }_{k - 1} \in R,{\xi }_{k} \in B,{\xi }_{k + 1} \in R,\ldots ,{\xi }_{n}\left( \omega \right) \in R}\right\} \] \[ = \{ \omega : X\left( \omega \right) \in \left( {R \times \cdots \times R \times B \times R \times \cdots \times R}\right) \} \in \mathcal{F}, \] since \( R \times \cdots \times R \times B \times R \times \cdots \times R \in \mathcal{B}\left( {R}^{n}\right) \) . Definition 2. An ordered set \( \left( {{\eta }_{1}\left( \omega \right) ,\ldots ,{\eta }_{n}\left( \omega \right) }\right) \) of random variables is called an \( n \) -dimensional random vector. According to this definition, every random element \( X\left( \omega \right) \) with values in \( {R}^{n} \) is an \( n \) -dimensional random vector. The converse is also true: every random vector \( X\left( \omega \right) = \left( {{\xi }_{1}\left( \omega \right) ,\ldots ,{\xi }_{n}\left( \omega \right) }\right) \) is a random element in \( {R}^{n} \) . In fact, if \( {B}_{k} \in \mathcal{B}\left( R\right), k = \) \( 1,\ldots, n \), then \[ \left\{ {\omega : X\left( \omega \right) \in \left( {{B}_{1} \times \cdots \times {B}_{n}}\right) }\right\} = \mathop{\bigcap }\limits_{{k = 1}}^{n}\left\{ {\omega : {\xi }_{k}\left( \omega \right) \in {B}_{k}}\right\} \in \mathcal{F}. \] But \( \mathcal{B}\left( {R}^{n}\right) \) is the smallest \( \sigma \) -algebra containing the sets \( {B}_{1} \times \cdots \times {B}_{n} \) . Consequently we find immediately, by an evident generalization of Lemma 1 of Sect. 4, that whenever \( B \in \mathcal{B}\left( {R}^{n}\right) \), the set \( \{ \omega : X\left( \omega \right) \in B\} \) belongs to \( \mathcal{F} \) . Let \( \left( {E,\mathcal{E}}\right) = \left( {\mathbb{Z}, B\left( \mathbb{Z}\right) }\right) \), where \( \mathbb{Z} \) is the set of complex numbers \( x + {iy}, x, y \in R \) , and \( B\left( \mathbb{Z}\right) \) is the smallest \( \sigma \) -algebra containing the sets \( \{ z : z = x + {iy},{a}_{1} < x \leq \) \( \left. 
{{b}_{1},{a}_{2} < y \leq {b}_{2}}\right\} \) . It follows from the discussion above that a complex-valued random variable \( Z\left( \omega \right) \) can be represented as \( Z\left( \omega \right) = X\left( \omega \right) + {iY}\left( \omega \right) \), where \( X\left( \omega \right) \) and \( Y\left( \omega \right) \) are random variables. Hence we may also call \( Z\left( \omega \right) \) a complex random variable. Let \( \left( {E,\mathcal{E}}\right) = \left( {{R}^{T},\mathcal{B}\left( {R}^{T}\right) }\right) \), where \( T \) is a subset of the real line. In this case every random element \( X = X\left( \omega \right) \) can evidently be represented as \( X = {\left( {\xi }_{t}\right) }_{t \in T} \) with \( {\xi }_{t} = {\pi }_{t} \circ X \), and is called a random function with time domain \( T \) . Definition 3. Let \( T \) be a subset of the real line. A set of random variables \( X = {\left( {\xi }_{t}\right) }_{t \in T} \) is called a random process (or stochastic process) with time domain \( T \) . If \( T = \{ 1,2,\ldots \} \), we call \( X = \left( {{\xi }_{1},{\xi }_{2},\ldots }\right) \) a random process with discrete time, or a random sequence. If \( T = \left\lbrack {0,1}\right\rbrack ,\left( {-\infty ,\infty }\right) ,\lbrack 0,\infty ),\ldots \), we call \( X = {\left( {\xi }_{t}\right) }_{t \in T} \) a random process with continuous time. It is easy to show, by using the structure of the \( \sigma \) -algebra \( \mathcal{B}\left( {R}^{T}\right) \) (Sect. 2), that every random process \( X = {\left( {\xi }_{t}\right) }_{t \in T} \) (in the sense of Definition 3) is also a random function (a random element with values in \( {R}^{T} \) ). Definition 4. Let \( X = {\left( {\xi }_{t}\right) }_{t \in T} \) be a random process. For each given \( \omega \in \Omega \) the function \( {\left( {\xi }_{t}\left( \omega \right) \right) }_{t \in T} \) is said to be a realization or a trajectory of the process corresponding to the outcome \( \omega \) . The following definition is a natural generalization of Definition 2 of Sect. 4. Definition 5. Let \( X = {\left( {\xi }_{t}\right) }_{t \in T} \) be a random process. The probability measure \( {P}_{X} \) on \( \left( {{R}^{T},\mathcal{B}\left( {R}^{T}\right) }\right) \) defined by \[ {P}_{X}\left( B\right) = \mathrm{P}\{ \omega : X\left( \omega \right) \in B\} ,\;B \in \mathcal{B}\left( {R}^{T}\right) , \] is called the probability distribution of \( X \) . The probabilities \[ {P}_{{t}_{1},\ldots ,{t}_{n}}\left( B\right) \equiv \mathrm{P}\left\{ {\omega : \left( {{\xi }_{{t}_{1}},\ldots ,{\xi }_{{t}_{n}}}\right) \in B}\right\} ,\;B \in \mathcal{B}\left( {R}^{n}\right) \] with \( {t}_{1} < {t}_{2} < \cdots < {t}_{n},{t}_{i} \in T \), are called finite-dimensional probabilities (or probability distributions). The functions \[ {F}_{{t}_{1},\ldots ,{t}_{n}}\left( {{x}_{1},\ldots ,{x}_{n}}\right) \equiv \mathrm{P}\left\{ {\omega : {\xi }_{{t}_{1}} \leq {x}_{1},\ldots ,{\xi }_{{t}_{n}} \leq {x}_{n}}\right\} \] with \( {t}_{1} < {t}_{2} < \cdots < {t}_{n},{t}_{i} \in T \), are called finite-dimensional distribution functions of the process \( X = {\left( {\xi }_{t}\right) }_{t \in T} \) . Let \( \left( {E,\mathcal{E}}\right) = \left( {C,{\mathcal{B}}_{0}\left( C\right) }\right) \), where \( C \) is the space of continuous functions \( x = {\left( {x}_{t}\right) }_{t \in T} \) on \( T = \left\lbrack {0,1}\right\rbrack \) and \( {\mathcal{B}}_{0}\left( C\right) \) is the \( \sigma \) -algebra generated by the open sets (Sect. 2). 
We show that every random element \( X \) on \( \left( {C,{\mathcal{B}}_{0}\left( C\right) }\right) \) is also a random process with continuous trajectories in the sense of Definition 3. In fact, according to Sect. 2 the set \( A = \left\{ {x \in C : {x}_{t} < a}\right\} \) is open in \( {\mathcal{B}}_{0}\left( C\right) \) . Therefore \[ \left\{ {\omega : {\xi }_{t}\left( \omega \right) < a}\right\} = \{ \omega : X\left( \omega \right) \in A\} \in \mathcal{F}. \] On the other hand, let \( X = {\left( {\xi }_{t}\left( \omega \right) \right) }_{t \in T} \) be a random process (in the sense of Definition 3) whose trajectories are continuous functions for every \( \omega \in \Omega \) . According to (17) of Sect. 2 \[ \left\{ {x \in C : x \in {S}_{\rho }\left( {x}^{0}\right) }\right\} = \mathop{\bigcap }\limits_{{t}_{k}}\left\{ {x \in C : \left| {{x}_{{t}_{k}} - {x}_{{t}_{k}}^{0}}\right| < \rho }\right\} , \] where \( {t}_{k} \) are the rational points of \( \left\lbrack {0,1}\right\rbrack ,{x}^{0} \) is an element of \( C \) and \[ {S}_{\rho }\left( {x}^{0}\right) = \left\{ {x \in C : \mathop{\sup }\limits_{{t \in T}}\left| {{x}_{t} - {x}_{t}^{0}}\right| < \rho }\right\} . \] Therefore \[ \left\{ {\omega : X\left( \omega \right) \in {S}_{\rho }\left( {{X}^{0}\left( \omega \right) }\right) }\right\} = \mathop{\bigcap }\limits_{{t}_{k}}\left\{ {\omega : \left| {{\xi }_{{t}_{k}}\left( \omega \right) - {\xi }_{{t}_{k}}^{0}\left( \omega \right) }\right| < \rho }\right\} \in \mathcal{F}, \] and therefore we also have \( \{ \omega : X\left( \omega \right) \in B\} \in \mathcal{F} \) for every \( B \in {\mathcal{B}}_{0}\left( C\right) \) . Similar reasoning will show that every random element of the space \( \left( {D,{\mathcal{B}}_{0}\left( D\right) }\right) \) can be considered as a random process with trajectories in the space of functions with no discontinuities of the second kind; and conversely. 2. Let \( \left( {\Omega ,\mathcal{F},\mathrm{P}}\right) \) be a probability space and \( \left( {{E}_{\alpha },{\mathcal{E}}_{\alpha }}\right) \) measurable spaces, where \( \alpha \) belongs to an (arbitrary) set \( \mathfrak{A} \) . Definition 6. We say that the \( \mathcal{F}/{\mathcal{E}}_{\alpha } \) -measurable functions \( \left( {{X}_{\alpha }\left( \omega \right) }\right) ,\alpha \in \mathfrak{A} \), are independent (or mutually independent) if, for every finite set of indices \( {\alpha }_{1},\ldots ,{\alpha }_{n} \) the random elements \( {X}_{{\alpha }_{1}},\ldots ,{X}_{{\alpha }_{n}} \) are independent, i.e. \[ \mathrm{P}\left( {{X}_{{\alpha }_{1}} \in {B}_{{\alpha }_{1}},\ldots ,{X}_{{\alpha }_{n}} \in {B}_{{\alpha }_{n}}}\right) = \mathrm{P}\left( {{X}_{{\alpha }_{1}} \in {B}_{{\alpha }_{1}}}\right) \cdots \mathrm{P}\left( {{X}_{{\alpha }_{n}} \in {B}_{{\alpha }_{n}}}\right) , \] (3) where \( {B}_{\alph
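As a small illustration of Definitions 3–5 above (only a sketch, with made-up parameters): a random process with discrete time can be simulated as an array whose rows are trajectories, and a finite-dimensional distribution function can be estimated by Monte Carlo.

```python
import numpy as np

# Illustrative: a random process with discrete time T = {1,...,50}, viewed as a
# random element of R^T.  Each row below is one trajectory (realization)
# corresponding to one outcome omega (Definition 4).
rng = np.random.default_rng(0)
n_outcomes, T_len = 1000, 50
xi = np.cumsum(rng.normal(size=(n_outcomes, T_len)), axis=1)   # xi_t(omega), a random walk

# Empirical finite-dimensional distribution function F_{t1,t2}(x1,x2) (Definition 5),
# estimated here at (t1, t2) = (10, 30) and (x1, x2) = (0, 0).
t1, t2 = 9, 29
F_hat = np.mean((xi[:, t1] <= 0.0) & (xi[:, t2] <= 0.0))
print(F_hat)   # Monte Carlo estimate of P(xi_10 <= 0, xi_30 <= 0)
```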
1359_[陈省身] Lectures on Differential Geometry
Definition 4.1
Definition 4.1. Suppose \( M \) is an \( m \) -dimensional smooth manifold. A region \( D \) with boundary is a subset of the manifold \( M \) with two kinds of points: 1) Interior points, each of which has a neighborhood in \( M \) contained in \( D \) . 2) Boundary points \( p \), for each of which there exists a coordinate chart \( \left( {U;{u}^{i}}\right) \) such that \( {u}^{i}\left( p\right) = 0 \) and \[ U \cap D = \left\{ {q \in U \mid {u}^{m}\left( q\right) \geq 0}\right\} . \] (4.8) A coordinate system \( {u}^{i} \) with the above property is called an adapted coordinate system for the boundary point \( p \) . The set of all the boundary points of \( D \) is called the boundary of \( D \) , denoted by \( B \) . Theorem 4.1. The boundary \( B \) of a region \( D \) with boundary is a regular imbedded closed submanifold. If \( M \) is orientable, then \( B \) is also orientable. Proof. The boundary \( B \) of the region \( D \) is obviously a closed subset of \( M \) . Suppose \( \left( {U;{u}^{i}}\right) \) is an adapted coordinate neighborhood. Then \[ U \cap B = \left\{ {q \in U \mid {u}^{m}\left( q\right) = 0}\right\} . \] (4.9) By Definition 3.2 in Chapter \( 1, B \) is a regular imbedded closed submanifold of \( M \) . Suppose \( M \) is an oriented manifold. Choose an adapted coordinate neighborhood \( \left( {U;{u}^{i}}\right) \) which is consistent with the orientation of \( M \) at an arbitrary point \( p \in B \) . Then \( \left( {u,\ldots ,{u}^{m - 1}}\right) \) is a local coordinate system of \( B \) at the point \( p \) . Let \[ {\left( -1\right) }^{m}d{u}^{1} \land \cdots \land d{u}^{m - 1} \] (4.10) specify the orientation of the boundary \( B \) in the coordinate neighborhood \( U \cap B \) of the point \( p \) . We will prove that the orientations given in this way to the coordinate neighborhoods are consistent. Suppose \( \left( {V;{v}^{i}}\right) \) is another coordinate neighborhood of a boundary point \( p \) consistent with the orientation of \( M \) . Then \[ \frac{\partial \left( {{v}^{1},\ldots ,{v}^{m}}\right) }{\partial \left( {{u}^{1},\ldots ,{u}^{m}}\right) } > 0 \] (4.11) Suppose \( {v}^{m} = {f}^{m}\left( {{u}^{1},\ldots ,{u}^{m}}\right) \) . Then for any fixed \( {u}^{i},\ldots ,{u}^{m - 1} \) the sign of the variable \( {v}^{m} \) is the same as that of \( {u}^{m} \), and \( {v}^{m} = 0 \) when \( {u}^{m} = 0 \) . Therefore, at the point \( p,\partial {v}^{m}/\partial {u}^{m} > 0 \) . Without loss of generality, we may assume that \( {v}^{m} = {u}^{m} \) . Then (4.11) becomes \[ \frac{\partial \left( {{v}^{1},\ldots ,{v}^{m - 1}}\right) }{\partial \left( {{u}^{1},\ldots ,{u}^{m - 1}}\right) } > 0 \] \( \left( {4.12}\right) \) This shows that \( {\left( -1\right) }^{m}d{u}^{1} \land \cdots \land d{u}^{m - 1} \) and \( {\left( -1\right) }^{m}d{v}^{1} \land \cdots \land d{v}^{m - 1} \) give consistent orientations in \( U \cap V \cap B \) . Hence \( B \) is orientable. The orientation of \( B \) given in (4.10) is called the induced orientation on the boundary \( B \) by an oriented manifold \( M \) . If \( D \) has the same orientation as \( M \) we denote the boundary \( B \) with the induced orientation by \( \partial D \) . It is easy to verify that the orientations of \( \partial D \) and \( \partial \sum \) in the preceding four examples are induced in this way. Theorem 4.2 (Stokes’ Formula). 
Suppose \( D \) is a region with boundary in an \( m \) -dimensional oriented manifold \( M \), and \( \omega \) is an exterior differential ( \( m - \) 1)-form on \( M \) with compact support. Then \[ {\int }_{D}{d\omega } = {\int }_{\partial D}\omega \] (4.13) If \( \partial D = \varnothing \), then the integral on the right hand side is zero. Proof. Suppose \( \left\{ {U}_{i}\right\} \) is a coordinate covering consistent with the orientation of \( M \), and \( \left\{ {g}_{\alpha }\right\} \) is a subordinate partition of unity. Then \[ \omega = \mathop{\sum }\limits_{\alpha }{g}_{\alpha } \cdot \omega \] (4.14) Since supp \( \omega \) is compact, the right hand side of the above formula is a sum of finitely many terms. Therefore \[ {\int }_{D}{d\omega } = \mathop{\sum }\limits_{\alpha }{\int }_{D}d\left( {{g}_{\alpha } \cdot \omega }\right) \] (4.15) \[ {\int }_{\partial D}\omega = \mathop{\sum }\limits_{\alpha }{\int }_{\partial D}{g}_{\alpha } \cdot \omega . \] This implies that we only need to prove \[ {\int }_{D}d\left( {{g}_{\alpha } \cdot \omega }\right) = {\int }_{\partial D}{g}_{\alpha } \cdot \omega \] (4.16) for each \( \alpha \) . We may assume that supp \( \omega \) is contained in a coordinate neighborhood \( \left( {U;{u}^{i}}\right) \) consistent with the orientation of \( M \) . Suppose \( \omega \) can be expressed as \[ \omega = \mathop{\sum }\limits_{{j = 1}}^{m}{\left( -1\right) }^{j - 1}{a}_{j}d{u}^{1} \land \cdots \land \overset{⏜}{d{u}^{j}} \land \cdots \land d{u}^{m}, \] (4.17) where the \( {a}_{j} \) are smooth functions on \( U \) . Then \[ {d\omega } = \left( {\mathop{\sum }\limits_{{j = 1}}^{m}\frac{\partial {a}_{j}}{\partial {u}^{j}}}\right) d{u}^{1} \land \cdots \land d{u}^{m} \] (4.18) There are two cases to consider. 1) If \( U \cap \partial D = \varnothing \), the right hand side of (4.13) is zero. Then either \( U \) is contained in \( M - D \) or in the interior of \( D \) . In the former case, the left hand side of (4.13) is obviously zero. In the latter, we have \[ {\int }_{D}{d\omega } = \mathop{\sum }\limits_{{j = 1}}^{m}{\int }_{U}\frac{\partial {a}_{j}}{\partial {u}^{j}}d{u}^{1}\cdots d{u}^{m}. \] (4.19) Consider a cube \( C = \left\{ {u \in {\mathbb{R}}^{m}\left| \right| {u}^{i}| \leq K,1 \leq i \leq m}\right\} \) such that \( U \) is contained in \( C \) . Extend the functions \( {a}_{j} \) to \( C \) by letting them be zero outside \( U \) . Obviously the \( {a}_{j} \) are continuously differentiable in \( C \) . Hence \[ {\int }_{U}\frac{\partial {a}_{j}}{\partial {u}^{j}}d{u}^{1}\cdots d{u}^{m} = {\int }_{C}\frac{\partial {a}^{j}}{\partial {u}^{j}}d{u}^{1}\cdots d{u}^{m} \] \[ = {\int }_{\begin{matrix} {\left| {u}^{i}\right| \leq K} \\ {i \neq j} \end{matrix}}\left( {{\int }_{-K}^{+K}\frac{\partial {a}_{j}}{\partial {u}^{j}}d{u}^{j}}\right) d{u}^{1}\cdots d{u}^{j - 1}d{u}^{j + 1}\cdots d{u}^{m} \] \[ = 0\text{. } \] (4.20) The last integral above vanishes since \[ {\int }_{-K}^{+K}\frac{\partial {a}_{j}}{\partial {u}^{j}}d{u}^{j} = {a}_{j}\left( {{u}^{1},\ldots ,{u}^{j - 1}, K,{u}^{j + 1},\ldots ,{u}^{m}}\right) \] \[ - {a}_{j}\left( {{u}^{1},\ldots ,{u}^{j - 1}, - K,{u}^{j + 1},\ldots ,{u}^{m}}\right) \] (4.21) \[ = 0\text{.} \] 2) If \( U \cap \partial D \neq \varnothing \), we may assume that \( U \) is an adapted coordinate neighborhood consistent with the orientation of \( M \) . 
Then we have \[ U \cap D = \left\{ {q \in U \mid {u}^{m}\left( q\right) \geq 0}\right\} ,\text{ and } \] (4.22) \[ U \cap \partial D = \left\{ {q \in U \mid {u}^{m}\left( q\right) = 0}\right\} . \] (4.23) Choose a cube \( C = \left\{ {u \in {\mathbb{R}}^{m}\left| \right| {u}^{i}\left| {\; \leq K,1 \leq i \leq m - 1;0 \leq {u}^{m} \leq K}\right. }\right\} \) . When \( K \) is sufficiently large, \( U \cap D \) lies in the union of the interior of \( C \) and the boundary \( {u}^{m} = 0 \) . Extend \( {a}_{j} \) as in 1). Then the right hand side of (4.13) becomes \[ {\int }_{\partial D}\omega = {\int }_{U \cap \partial D}\omega \] \[ = \mathop{\sum }\limits_{{j = 1}}^{m}{\left( -1\right) }^{j - 1}{\int }_{U \cap \partial D}{a}_{j}d{u}^{1} \land \cdots \land d{u}^{j - 1} \land d{u}^{j + 1} \land \cdots d{u}^{m} \] \[ = {\left( -1\right) }^{m - 1}{\int }_{U \cap \partial D}{a}_{m}d{u}^{1} \land \cdots \land d{u}^{m - 1} \] \[ = - {\int }_{\begin{matrix} {\left| {u}^{i}\right| \leq K} \\ {1 \leq i \leq m - 1} \end{matrix}}{a}_{m}\left( {{u}^{1},\ldots ,{u}^{m - 1},0}\right) d{u}^{1}\cdots d{u}^{m - 1}, \] (4.24) where the third equality holds since \( d{u}^{m} = 0 \) on \( U \cap \partial D \), while in the last equality we have used the induced orientation on \( \partial D \) by \( M \) . \[ {\int }_{D}{d\omega } = {\int }_{D \cap U}{d\omega } \] (4.25) \[ = \mathop{\sum }\limits_{{j = 1}}^{m}{\int }_{D \cap U}\frac{\partial {a}_{j}}{\partial {u}^{j}}d{u}^{1} \land \cdots \land d{u}^{m} \] But for \( 1 \leq j \leq m - 1 \) , \[ {\int }_{D \cap U}\frac{\partial {a}_{j}}{\partial {u}^{j}}d{u}^{1} \land \cdots \land d{u}^{m} \] \[ = \mathop{\int }\limits_{\substack{{\left| {u}^{i}\right| \leq K, i \neq j, m} \\ {0 \leq {u}^{m} \leq K} }}\left( {{\int }_{-K}^{+K}\frac{\partial {a}_{j}}{\partial {u}^{j}}d{u}^{j}}\right) d{u}^{1}\cdots d{u}^{j - 1}d{u}^{j + 1}\cdots d{u}^{m} \] (4.26) \[ = 0\text{.} \] Thus there is only one nonzero term in (4.25): \[ {\int }_{D \cap U}\frac{\partial {a}_{m}}{\partial {u}^{m}}d{u}^{1} \land \cdots \land d{u}^{m} \] \[ = \mathop{\int }\limits_{{\left| {u}^{i}\right| \leq K, i \neq m}}\left( {{a}_{m}\left( {{u}^{1},\ldots ,{u}^{m - 1}, K}\right) - {a}_{m}\left( {{u}^{1},\ldots ,{u}^{m - 1},0}\right) }\right) d{u}^{1}\cdots d{u}^{m - 1} \] \[ = - {\int }_{\left| {u}^{i}\right| \leq K, i \neq m}{a}_{m}\left( {{u}^{1},\ldots ,{u}^{m - 1},0}\right) d{u}^{1}\cdots d{u}^{m - 1}. \] (4.27) Hence (4.13) is true, and the theorem is proved. Remark. In most applications, the closed region \( D \) is compact. In such cases we need not assume that the exterior differential \( \left( {m - 1}\right) \) -form has a compact support for the Stokes' formula to be valid. The Stokes formula plays an important role in physics, mechanics, partial differential equations, and differential geometry. Integration is additive with respect to the domain of integration, hence we can further define the integral of exterior differential forms on singular chains. \( {}^{e} \) Viewing the integral as a pairing between an exterior differential form and a domain of int
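The excerpt above establishes Stokes' formula \( \int_{D} d\omega = \int_{\partial D} \omega \). A minimal numerical sanity check in the simplest nontrivial case (the plane, where Stokes' formula reduces to Green's theorem, with \( D \) the unit square and an assumed toy 1-form \( \omega = P\,dx + Q\,dy \)); this is only an illustration, not the text's machinery:

```python
import numpy as np

# Toy 1-form omega = P dx + Q dy on D = [0,1]^2, so d(omega) = (dQ/dx - dP/dy) dx^dy.
P = lambda x, y: x * y        # assumed components, chosen arbitrarily
Q = lambda x, y: x ** 2

N = 1000
t = (np.arange(N) + 0.5) / N  # midpoint grid on [0, 1]
X, Y = np.meshgrid(t, t)

dQdx = 2 * X                  # dQ/dx for Q = x^2
dPdy = X                      # dP/dy for P = x*y
lhs = np.mean(dQdx - dPdy)    # mean over the unit square = the integral (area 1)

# Boundary integral, traversing the square counterclockwise (the induced orientation).
rhs = (np.mean(P(t, 0 * t))        # bottom edge: y = 0, dx > 0
       + np.mean(Q(1 + 0 * t, t))  # right edge:  x = 1, dy > 0
       - np.mean(P(t, 1 + 0 * t))  # top edge:    y = 1, dx < 0
       - np.mean(Q(0 * t, t)))     # left edge:   x = 0, dy < 0

print(lhs, rhs)               # both approximately 0.5
```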
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 4.2.1
Definition 4.2.1. A commutative Banach algebra \( A \) is called regular if its algebra of Gelfand transforms is regular in the above sense, that is, given any closed subset \( E \) of \( \Delta \left( A\right) \) and \( {\varphi }_{0} \in \Delta \left( A\right) \smallsetminus E \), there exists \( x \in A \) such that \( {\varphi }_{0}\left( x\right) \neq 0 \) and \( \varphi \left( x\right) = 0 \) for all \( \varphi \in E \) . Some authors (see [108] and [19], for example) call such Banach algebras completely regular rather than regular. However, the term regular is more widely used. Example 4.2.2. (1) Every commutative \( {C}^{ * } \) -algebra \( A \) is regular. Indeed, \( A \) is isomorphic to \( {C}_{0}\left( {\Delta \left( A\right) }\right) \), and Urysohn’s lemma ensures that for any locally compact Hausdorff space \( T,{C}_{0}\left( T\right) \) is a regular space of functions. (2) It is easily seen that \( {C}^{n}\left\lbrack {a, b}\right\rbrack \) is regular since, when \( \Delta \left( {{C}^{n}\left\lbrack {a, b}\right\rbrack }\right) \) is identified with \( \left\lbrack {a, b}\right\rbrack \), the Gelfand homomorphism is nothing but the identity. (3) The disc algebra \( A\left( \mathbb{D}\right) \) fails to be regular since the Gelfand homomorphism is the identity mapping and a nonzero holomorphic function cannot vanish on, say, a nonempty open set. It is fairly difficult to prove and is postponed to Section 4.4 that \( {L}^{1}\left( G\right) \) , for \( G \) a locally compact Abelian group, is regular. In fact, this is one of the most crucial results in commutative harmonic analysis. We continue to identify \( \Delta \left( A\right) \) with \( \operatorname{Max}\left( A\right) \) via the mapping \( \varphi \rightarrow \ker \varphi \) (Theorem 2.1.7) and proceed with relating regularity of a commutative Banach algebra \( A \) to properties of the hull-kernel topology on \( \Delta \left( A\right) \) . Theorem 4.2.3. For a commutative Banach algebra \( A \), the following conditions are equivalent. (i) \( A \) is regular. (ii) The hull-kernel topology and the Gelfand topology on \( \Delta \left( A\right) \) coincide. (iii) The hull-kernel topology on \( \Delta \left( A\right) \) is Hausdorff, and every point in \( \Delta \left( A\right) \) possesses a hull-kernel neighbourhood with modular kernel. Proof. We show the chain of implications (i) \( \Rightarrow \) (ii) \( \Rightarrow \) (iii) \( \Rightarrow \) (i). If \( I \) is a closed ideal of \( A \), we consider \( \Delta \left( {A/I}\right) \) as embedded into \( \Delta \left( A\right) \) (Lemma 4.1.5). Suppose that \( A \) is regular and consider a subset \( E \) of \( \Delta \left( A\right) \) that is closed in the Gelfand topology. Then, for every \( \varphi \in \Delta \left( A\right) \smallsetminus E \), there exists \( {x}_{\varphi } \in A \) with \( {\left. \widehat{{x}_{\varphi }}\right| }_{E} = 0 \) and \( \widehat{{x}_{\varphi }}\left( \varphi \right) \neq 0 \) . This means that \( k\left( E\right) \nsubseteq \ker \varphi \) for every \( \varphi \in \Delta \left( A\right) \smallsetminus E \), and hence \( E = h\left( {k\left( E\right) }\right) \), which is an \( {hk} \) -closed set. This proves that the two topologies on \( \Delta \left( A\right) \) coincide. To prove (ii) \( \Rightarrow \) (iii) we only have to show that every \( {\varphi }_{0} \in \Delta \left( A\right) \) has a neighbourhood \( V \) with modular kernel \( k\left( V\right) \) . 
Fix \( x \in A \) with \( {\varphi }_{0}\left( x\right) \neq 0 \) and let \[ V = \left\{ {\varphi \in \Delta \left( A\right) : \left| {\varphi \left( x\right) }\right| > \frac{1}{2}\left| {{\varphi }_{0}\left( x\right) }\right| }\right\} . \] Then \( V \) is open and \( \bar{V} \), the closure of \( V \) in the Gelfand topology, is contained in the set \( \{ \varphi \in \Delta \left( A\right) : \left| {\varphi \left( x\right) }\right| \geq \frac{1}{2}\left| {{\varphi }_{0}\left( x\right) }\right| \} \) . Because \( \widehat{x} \) vanishes at infinity, \( \bar{V} \) is compact. Now, by hypothesis, \( \bar{V} = h\left( {k\left( V\right) }\right) = \Delta \left( {A/k\left( V\right) }\right) \) . Thus the semisimple algebra \( A/k\left( V\right) \) has a compact structure space and \( \psi \left( {x + k\left( V\right) }\right) \neq \) 0 for every \( \psi \in \bar{V} \) . Corollary 3.2.2 now yields that \( A/k\left( V\right) \) has an identity. Finally, suppose that (iii) holds. To show (i), let \( E \) be a subset of \( \Delta \left( A\right) \) which is closed in the Gelfand topology and let \( {\varphi }_{0} \in \Delta \left( A\right) \smallsetminus E \) . Choose an open hull-kernel neighbourhood \( V \) of \( {\varphi }_{0} \) with modular kernel \( k\left( V\right) \) . Since \( A/k\left( V\right) \) has an identity, \( h\left( {k\left( V\right) }\right) = \Delta \left( {A/k\left( V\right) }\right) \) is compact with respect to the Gelfand topology, and hence so is \( {E}_{0} = E \cap h\left( {k\left( V\right) }\right) \) . Consequently, \( {E}_{0} \) is \( {hk} \) -compact. Now, \( {\varphi }_{0} \notin {E}_{0} \), and the hull-kernel topology is Hausdorff by hypothesis. By the standard covering argument, \( {\varphi }_{0} \) and \( {E}_{0} \) can be separated by \( {hk} \) -open sets. Thus, let \( U \) be an \( {hk} \) -open set containing \( {E}_{0} \) such that \( {\varphi }_{0} \notin \bar{U} = h\left( {k\left( U\right) }\right) \) . Then there exists \( y \in A \) such that \( {\varphi }_{0}\left( y\right) \neq 0 \), but \( \varphi \left( y\right) = 0 \) for all \( \varphi \in \bar{U} \) . On the other hand, \( {\varphi }_{0} \in V \) and \( \Delta \left( A\right) \smallsetminus V \) is \( {hk} \) -closed. Hence there exists \( z \in k\left( {\Delta \left( A\right) \smallsetminus V}\right) \) with \( {\varphi }_{0}\left( z\right) \neq 0 \) . Let \( x = {yz} \), then \( {\varphi }_{0}\left( x\right) = {\varphi }_{0}\left( y\right) {\varphi }_{0}\left( z\right) \neq 0 \) and \( \varphi \left( x\right) = 0 \) for all \( \varphi \in \left( {\Delta \left( A\right) \smallsetminus V}\right) \cup \bar{U} \) . Now \[ \left( {\Delta \left( A\right) \smallsetminus V}\right) \cup \bar{U} \supseteq \left\lbrack {\Delta \left( A\right) \smallsetminus h\left( {k\left( V\right) }\right) }\right\rbrack \cup \left\lbrack {E \cap h(k\left( V\right) \rbrack \supseteq E,}\right. \] so that \( {\left. \widehat{x}\right| }_{E} = 0 \) . This shows that \( A \) is regular. By Lemma 2.2.14, the Gelfand topology on \( \Delta \left( A\right) \) equals the weak topology with respect to the functions \( \widehat{x}, x \in A \) . Therefore the equivalence of (i) and (ii) in Theorem 4.2.3 can obviously be reformulated as follows. Corollary 4.2.4. A is regular if and only if \( \widehat{x} \) is hull-kernel continuous on \( \Delta \left( A\right) \) for each \( x \in A \) . Remark 4.2.5. 
In [75, Theorem 7.1.2] it is claimed that a commutative Banach algebra is regular provided that the hull-kernel topology on \( \Delta \left( A\right) \) is Hausdorff. Of course, this is true when \( A \) is unital. However, even though we are unaware of a counterexample, this strengthening of the implication (iii) \( \Rightarrow \) (i) in Theorem 4.2.3 does not seem to be correct. In what follows we show that in the definition of regularity the singleton \( \{ \varphi \} \) can be replaced by any compact subset of \( \Delta \left( A\right) \) which is disjoint from \( E \), and we investigate how regularity behaves under the standard operations on Banach algebras, such as adjoining an identity and forming closed ideals, quotients, and tensor products. Theorem 4.2.6. Let \( A \) be a commutative Banach algebra. (i) Let \( I \) be closed ideal of \( A \) . If \( A \) is regular, then so are the algebras \( I \) and \( A/I \) . (ii) \( A \) is regular if and only if \( {A}_{e} \), the unitisation of \( A \), is regular. Proof. (i) Because \( A \) is regular, by Theorem 4.2.3 the Gelfand topology coincides with the \( {hk} \) -topology on \( \Delta \left( A\right) \) . By Lemma 4.1.5(ii), the map \( \varphi \rightarrow {\left. \varphi \right| }_{I} \) is a homeomorphism for the \( {hk} \) -topologies on \( \Delta \left( A\right) \smallsetminus h\left( I\right) \) and \( \Delta \left( I\right) \), and the same is true of the Gelfand topologies by Lemma 2.2.15(ii). So the Gelfand topology and the \( {hk} \) -topology on \( \Delta \left( I\right) \) coincide. Another application of Theorem 4.2.3 now shows that \( I \) is regular. Similarly, using Lemma 4.1.5(i) and Lemma 2.2.15(i), it follows that \( A/I \) is regular. (ii) If \( {A}_{e} \) is regular, so is \( A \) by (i). Conversely, suppose that \( A \) is regular. Then, for every \( a \in A,\widehat{a} \) is \( {hk} \) -continuous on \( \Delta \left( A\right) \) by Corollary 4.2.4 and hence on \( \Delta \left( {A}_{e}\right) \) by Lemma 4.1.6. This of course implies that \( \widehat{x} \) is \( {hk} \) -continuous on \( \Delta \left( {A}_{e}\right) \) for each \( x \in {A}_{e} \) . So \( {A}_{e} \) is regular by Corollary 4.2.4. We show later (Theorem 4.3.8) that conversely \( A \) is regular whenever \( A \) has a closed ideal \( I \) such that both \( I \) and \( A/I \) are regular. This result is more difficult and involves the existence of a greatest closed regular ideal in a commutative Banach algebra. It is worth mentioning that a closed subalgebra of a regular algebra need not be regular. In fact, \( C\left( \mathbb{D}\right) \) is regular whereas the closed subalgebra \( A\left( \mathbb{D}\right) \) is not. Lemma 4.2.7. Let \( I \) be an ideal in the regular commutative Banach algebra \( A \) . Given any \( {\varphi }_{0} \in \Delta \left( A\right) \smallsetminus h\left( I\right) \), there exists \( u \in I \) such that \( \widehat{u} = 1 \) in some neighbourhood of \( {\varphi }_{0} \) . Proof. Because \( A \) is regular, by Theorem 4.2.3 the hull-ker
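Example 4.2.2(1) above rests on the fact that continuous functions separate points from closed sets. The following tiny sketch illustrates only that separation step for a toy closed subset of \( \mathbb{R} \) (the cutoff needed to land in \( C_{0} \) is omitted; the set and names are made up for the example).

```python
import numpy as np

# dist(., E) is continuous, vanishes exactly on the closed set E, and is nonzero
# off E -- the separating function behind regularity of C_0(T) in Example 4.2.2(1).
E = np.array([-2.0, 0.0, 3.0])          # a toy closed set (finite, hence closed)

def dist_E(x):
    x = np.atleast_1d(np.asarray(x, float))
    return np.abs(x[:, None] - E[None, :]).min(axis=1)

print(dist_E(E))          # [0. 0. 0.]   vanishes on E
print(dist_E(1.0))        # [1.]         nonzero at a point outside E
```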
1029_(GTM192)Elements of Functional Analysis
Definition 1.2
Definition 1.2 If \( T \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) and \( \alpha \in \mathcal{E}\left( \Omega \right) \), the product distribution \( {\alpha T} \) on \( \Omega \) is defined by setting \[ \langle {\alpha T},\varphi \rangle = \langle T,{\alpha \varphi }\rangle \;\text{ for all }\varphi \in \mathcal{D}\left( \Omega \right) . \] If \( T \in {\mathcal{D}}^{\prime m}\left( \Omega \right) \) and \( \alpha \in {\mathcal{E}}^{m}\left( \Omega \right) \), the product \( {\alpha T} \in {\mathcal{D}}^{\prime m}\left( \Omega \right) \) is defined by \[ \langle {\alpha T},\varphi \rangle = \langle T,{\alpha \varphi }\rangle \;\text{ for all }\varphi \in {\mathcal{D}}^{m}\left( \Omega \right) . \] (Recall that \( {\mathcal{D}}^{\prime m}\left( \Omega \right) \) is the set of continuous linear forms on the space \( {\mathcal{D}}^{m}\left( \Omega \right) \), which by Proposition 3.3 on page 282 can be identified with the space of distributions of order at most \( m \) ). That \( {\alpha T} \) really is a distribution follows from the preceding lemma: If \( {\left( {\varphi }_{n}\right) }_{n \in \mathbb{N}} \) is a sequence in \( \mathcal{D}\left( \Omega \right) \) or \( {\mathcal{D}}^{m}\left( \Omega \right) \) that converges to 0, Lemma 1.1 implies that the sequence \( {\left( \left\langle T,\alpha {\varphi }_{n}\right\rangle \right) }_{n \in \mathbb{N}} \) tends to 0 since \( T \) is a distribution. Thus \( {\alpha T} \) really is a continuous linear form on \( \mathcal{D}\left( \Omega \right) \) or \( {\mathcal{D}}^{m}\left( \Omega \right) \), as the case may be. Obviously, if \( f \in {L}_{\text{loc }}^{1}\left( \Omega \right) \) and \( \alpha \in C\left( \Omega \right) \), we have \[ \alpha \left\lbrack f\right\rbrack = \left\lbrack {\alpha f}\right\rbrack \] In this sense, this multiplication extends the usual product of functions. We will see in Exercise 1 below that this extension cannot be pushed further to the case of the product of two arbitrary distributions without the loss of the elementary algebraic properties of multiplication, such as associativity and commutativity. Remark. The definition immediately implies that if \( \alpha \in \mathcal{E}\left( \Omega \right) \) the linear map \( T \mapsto {\alpha T} \) from \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) to \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) is continuous, in the sense that, if \( {\left( {T}_{n}\right) }_{n \in \mathbb{N}} \) converges to \( T \) in \( {\mathcal{D}}^{\prime }\left( \Omega \right) \), then \( {\left( \alpha {T}_{n}\right) }_{n \in \mathbb{N}} \) converges to \( {\alpha T} \) in \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) . Proposition 1.3 With the notation introduced in Definition 1.2, we have \[ \operatorname{Supp}\left( {\alpha T}\right) \subset \operatorname{Supp}\alpha \cap \operatorname{Supp}T \] and, if \( \beta \in \mathcal{E}\left( \Omega \right) \) (or \( \beta \in {\mathcal{E}}^{m}\left( \Omega \right) \) ), we have \[ \alpha \left( {\beta T}\right) = \left( {\alpha \beta }\right) T \] Proof. The second claim is obvious. To show the first, take \( \varphi \in \mathcal{D}\left( \Omega \right) \) . If \( \operatorname{Supp}\varphi \subset \Omega \smallsetminus \operatorname{Supp}\alpha \), then \( {\alpha \varphi } = 0 \), so \( \langle {\alpha T},\varphi \rangle = 0 \) . 
It follows that \( \Omega \smallsetminus \operatorname{Supp}\alpha \) is contained in \( \Omega \smallsetminus \operatorname{Supp}\left( {\alpha T}\right) \), so \( \operatorname{Supp}\left( {\alpha T}\right) \subset \operatorname{Supp}\alpha \) . Now if \( \operatorname{Supp}\varphi \subset \Omega \smallsetminus \operatorname{Supp}T \), then \[ \operatorname{Supp}{\alpha \varphi } \subset \operatorname{Supp}\varphi \subset \Omega \smallsetminus \operatorname{Supp}T, \] which implies that \( \langle {\alpha T},\varphi \rangle = 0 \) . Therefore \( \Omega \smallsetminus \operatorname{Supp}T \) is contained in \( \Omega \smallsetminus \) \( \operatorname{Supp}\left( {\alpha T}\right) \), so \( \operatorname{Supp}\left( {\alpha T}\right) \subset \operatorname{Supp}T \) . The result follows. The inclusion in the proposition may be strict. For example, if \( T = \delta \) is the Dirac measure at 0 in \( {\mathbb{R}}^{d} \), and if \( \alpha \in C\left( {\mathbb{R}}^{d}\right) \) is such that \( \alpha \left( 0\right) = 0 \) and \( 0 \in \operatorname{Supp}\alpha \) (say \( \alpha \left( x\right) = x \) ), then \( {\alpha T} = \alpha \left( 0\right) \delta = 0 \) and the support of \( {\alpha T} \) is empty, whereas \( \operatorname{Supp}\alpha \cap \operatorname{Supp}T = \{ 0\} \) . Division of distributions is an important problem: If \( S \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) and \( \alpha \in \mathcal{E}\left( \Omega \right) \), is there a \( T \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) such that \( {\alpha T} = S \) ? Clearly, if \( \alpha \) vanishes nowhere in \( \Omega \), the product \( T = \left( {1/\alpha }\right) S \) is the unique solution to the problem, by the second part of Proposition 1.3. In the general case, the restriction of \( T \) to the open set \( \left\{ {x \in {\mathbb{R}}^{d} : \alpha \left( x\right) \neq 0}\right\} \) is uniquely defined by the same equality, but the global problem may have infinitely many solutions. Here is an example in dimension 1. Proposition 1.4 For every \( S \in {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \), there exists \( T \in {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) such that \( {xT} = S \) . If \( {T}_{0} \) is such that \( x{T}_{0} = S \), the set of solutions of the equation \( {xT} = S \) equals \( \left\{ {{T}_{0} + {C\delta } : C \in \mathbb{C}}\right\} \) . Proof. Take \( \chi \in \mathcal{D}\left( \mathbb{R}\right) \) such that \( \chi \left( 0\right) = 1 \) . To each \( \varphi \in \mathcal{D}\left( \mathbb{R}\right) \) we associate \( \widetilde{\varphi } \), defined by \[ \widetilde{\varphi }\left( x\right) = {\int }_{0}^{1}\left( {{\varphi }^{\prime }\left( {tx}\right) - \varphi \left( 0\right) {\chi }^{\prime }\left( {tx}\right) }\right) {dt}. \] One easily checks that \( \widetilde{\varphi } \in \mathcal{D}\left( \mathbb{R}\right) \) and that the map \( \varphi \mapsto \widetilde{\varphi } \) from \( \mathcal{D}\left( \mathbb{R}\right) \) to \( \mathcal{D}\left( \mathbb{R}\right) \) is continuous. Moreover, if \( x \in {\mathbb{R}}^{ * },\widetilde{\varphi }\left( x\right) = \left( {\varphi \left( x\right) - \varphi \left( 0\right) \chi \left( x\right) }\right) /x \) . Now put \[ \langle T,\varphi \rangle = \langle S,\widetilde{\varphi }\rangle \;\text{ for all }\varphi \in \mathcal{D}\left( \mathbb{R}\right) . 
\] Since \( \varphi \mapsto \widetilde{\varphi } \) is continuous, \( T \) belongs to \( {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) ; since \( \widetilde{x\varphi } = \varphi \), we get \( {xT} = S \) . Now take \( T \in {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) with \( {xT} = 0 \) . If \( \varphi \in \mathcal{D}\left( \mathbb{R}\right) \), we have \[ 0 = \langle {xT},\widetilde{\varphi }\rangle = \langle T,\varphi - \varphi \left( 0\right) \chi \rangle = \langle T,\varphi \rangle - \langle T,\chi \rangle \langle \delta ,\varphi \rangle . \] It follows that \( T = \langle T,\chi \rangle \delta \) . Here is a particular case. Proposition 1.5 Suppose \( T \in {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) . Then \( {xT} = 1 \) if and only if there exists \( C \in \mathbb{C} \) such that \( T = \operatorname{pv}\left( {1/x}\right) + {C\delta } \) . Note that, in the equality \( {xT} = 1 \), the symbol 1 represents the constant function equal to 1 , identified with the distribution [1], which is none other than Lebesgue measure \( \lambda \) . Proof. By Proposition 1.4, it suffices to show that \( x\mathrm{{pv}}\left( {1/x}\right) = 1 \) . To do this, take \( \varphi \in \mathcal{D}\left( \mathbb{R}\right) \) . By definition, \[ \langle x\operatorname{pv}\left( {1/x}\right) ,\varphi \rangle = \langle \operatorname{pv}\left( {1/x}\right) ,{x\varphi }\rangle = \mathop{\lim }\limits_{{\varepsilon \rightarrow {0}^{ + }}}{\int }_{\{ \left| x\right| > \varepsilon \} }\left( {1/x}\right) {x\varphi }\left( x\right) {dx} \] \[ = \int \varphi \left( x\right) {dx} = \langle \left\lbrack 1\right\rbrack ,\varphi \rangle \] as we wished to show. ## Exercises 1. Show that it is impossible to define a multiplication operation on the set \( {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) that is at once associative, commutative and an extension of the multiplication defined in the text. Hint. Suppose there is such a multiplication and compute in two ways the product \( {x\delta }\operatorname{pv}\left( {1/x}\right) \), where \( \delta \) is the Dirac measure at 0 . 2. Consider an open set \( \Omega \) in \( {\mathbb{R}}^{d} \) and elements \( \alpha \in \mathcal{E}\left( \Omega \right) \) and \( T \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) . Assume that \( \alpha = 1 \) on an open set that contains the support of \( T \) . Show that \( {\alpha T} = T \) . 3. Suppose \( T \in {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{d}\right), a \in {\mathbb{R}}^{d} \), and \( m \in \mathbb{N} \) . Show that \( {\left( x - a\right) }^{p}T = 0 \) for every multiindex \( p \) of length \( m + 1 \) if and only if \( T \) can be written as \[ \langle T,\varphi \rangle = \mathop{\sum }\limits_{{\left| q\right| \leq m}}{c}_{q}{D}^{q}\varphi \left( a\right) \;\text{ for all }\varphi \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) , \] with \( {c}_{q} \in \mathbb{C} \) for \( \left| q\right| \leq m \) . (As might be expected, by \( {\left( x - a\right) }^{p} \) we mean the product \( {\left( {x}_{1} - {a}_{1}\right) }^{{p}_{1}}\ldots {\left( {x}_{d} - {a}_{d}\right) }^{{p}_{d}} \) .) Hint. Show first that, if \( {\left( x - a\right) }^{p}T = 0 \) for every multiindex \( p \
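To make the hint to Exercise 1 concrete, here is a sketch of the computation it suggests, assuming a product on \( {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) that is associative, commutative, and extends the multiplication of Definition 1.2. One has \( {x\delta } = 0 \), since \( \langle {x\delta },\varphi \rangle = \langle \delta ,{x\varphi }\rangle = 0 \cdot \varphi \left( 0\right) = 0 \), while \( x\operatorname{pv}\left( {1/x}\right) = 1 \) by Proposition 1.5. Hence \[ 0 = \left( {x\delta }\right) \operatorname{pv}\left( {1/x}\right) = \left( {\delta x}\right) \operatorname{pv}\left( {1/x}\right) = \delta \left( {x\operatorname{pv}\left( {1/x}\right) }\right) = \delta \cdot 1 = \delta , \] a contradiction; so no such product can exist.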
1347_[陈亚浙&吴兰成] Second Order Elliptic Equations and Elliptic Systems
Definition 1.2
Definition 1.2. Let \( u \in C\left( \Omega \right) \) . The set \[ {\Gamma }_{u} = \{ y \in \Omega \mid \chi \left( y\right) \neq \varnothing \} \] (1.2) \[ = \left\{ {y \in \Omega \mid \exists p \in {\mathbb{R}}^{n}\text{ such that }u\left( x\right) \leq u\left( y\right) + p \cdot \left( {x - y}\right) ,\forall x \in \Omega }\right\} \] is said to be the contact set of \( u \) . Next, we consider the convex hull of the lower space of the graph \( z = u\left( x\right) \) ; this convex hull is the lower space of some graph \( z = \widehat{u}\left( x\right) \) . It is clear that \( \widehat{u}\left( x\right) \geq u\left( x\right) \) and that it is the smallest concave function. \( {\Gamma }_{u} \) consists of the projections to the plane \( z = 0 \) of those points where the hypersurfaces \( z = \widehat{u}\left( x\right) \) and \( z = u\left( x\right) \) meet. This is where the name contact set comes from. If \( u \in {C}^{1}\left( \Omega \right), y \in {\Gamma }_{u} \), then \( \chi \left( y\right) = \{ {Du}\left( y\right) \} \) ; if furthermore \( u \in {C}^{2}\left( \Omega \right) \) and \( \chi \left( y\right) \) is nonempty, then \( - {D}^{2}u\left( y\right) \geq 0 \) (i.e., the Hessian matrix is negative semidefinite). In fact, the function associated with the normal mapping (1.3) \[ w\left( x\right) = u\left( y\right) + p \cdot \left( {x - y}\right) - u\left( x\right) ,\;\forall x \in \Omega , \] attains its minimum at \( y \), and therefore \( {Dw}\left( y\right) = 0 \) and \( {D}^{2}w\left( y\right) \geq 0 \) . This implies the above result. More generally, we have Lemma 1.1. Let \( u \in {W}_{loc}^{2,1}\left( \Omega \right) \cap C\left( \Omega \right) \) . Then (1.4) \[ \chi \left( y\right) = \{ {Du}\left( y\right) \} ,\; - {D}^{2}u\left( y\right) \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u}. \] Proof. Let \( w\left( x\right) \) be defined by (1.3). For each fixed direction \( \xi \in {\mathbb{R}}^{n},\left| \xi \right| = 1 \) , we have (1.5) \[ \frac{w\left( {y + {h\xi }}\right) - w\left( y\right) }{h} \rightarrow \frac{\partial w}{\partial \xi } \] (1.6) \[ \frac{w\left( {y + {h\xi }}\right) + w\left( {y - {h\xi }}\right) - {2w}\left( y\right) }{{h}^{2}} \rightarrow \frac{{\partial }^{2}w}{\partial {\xi }^{2}} \] where the convergence is in the space \( {L}_{loc}^{1}\left( \Omega \right) \) . If we take subsequences, then the above limits are valid for almost every \( y \in \Omega \) . If \( y \in {\Gamma }_{u} \), then \( w\left( x\right) \) takes its minimum at \( y \) . Letting \( h \rightarrow {0}^{ + } \) and \( h \rightarrow {0}^{ - } \) in (1.5), we deduce that \[ \frac{\partial w}{\partial \xi } = 0,\;\text{ a.e. }y \in {\Gamma }_{u} \] If we take \( \xi \) to be the direction of coordinate axes, then \[ \chi \left( y\right) = \{ {Du}\left( y\right) \} ,\;\text{ a.e. }y \in {\Gamma }_{u}. \] Similarly, by (1.6), \[ - \frac{{\partial }^{2}u}{\partial {\xi }^{2}} = \frac{{\partial }^{2}w}{\partial {\xi }^{2}} \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u}. \] It follows that for \( \xi \) in a dense countable subset of the unit sphere, \[ - \frac{{\partial }^{2}u}{\partial {\xi }^{2}} \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u} \] From this inequality we deduce that, for any \( \xi ,\left| \xi \right| = 1 \) , \[ - \frac{{\partial }^{2}u}{\partial {\xi }^{2}} \geq 0,\;\text{ a.e. }y \in {\Gamma }_{u} \] This implies that \( - {D}^{2}u\left( y\right) \geq 0 \) for almost every \( y \in {\Gamma }_{u} \) . The proof is complete. Definition 1.3. 
The set \[ \chi \left( \Omega \right) = \chi \left( {\Gamma }_{u}\right) = \mathop{\bigcup }\limits_{{y \in \Omega }}\chi \left( y\right) \] is said to be the image set of the normal mapping determined by \( u \) . Example. Let \( \Omega = {B}_{d}\left( {x}_{0}\right) \) . Consider the function (1.7) \[ u\left( x\right) = \frac{\lambda }{d}\left( {d - \left| {x - {x}_{0}}\right| }\right) \] Its graph is a cone surface with vertex at \( \left( {{x}_{0},\lambda }\right) \), base \( {B}_{d}\left( {x}_{0}\right) \), and height \( \lambda \) . Clearly, \( {\Gamma }_{u} = \Omega \) and \[ \chi \left( y\right) = \left\{ \begin{array}{l} {B}_{\lambda /d}\left( 0\right) \;\text{ if }y = {x}_{0}, \\ - \frac{\lambda }{d}\frac{y - {x}_{0}}{\left| y - {x}_{0}\right| }\;\text{ if }y \neq {x}_{0}. \end{array}\right. \] The image set of the normal mapping is given by (1.8) \[ \chi \left( \Omega \right) = {B}_{\lambda /d}\left( 0\right) \] Definition 1.4. Let \( \Omega \subset {\mathbb{R}}^{n},{x}_{0} \in \Omega \) . Let \( w \) be the function such that its graph is a cone surface with vertex at \( \left( {{x}_{0},\lambda }\right) \) and base \( \Omega \) (see Fig. 1). We denote its image set of the normal mapping by (1.9) \[ \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack = {\chi }_{w}\left( \Omega \right) \] ![ea7682a1-65ee-483d-b2fe-4cbf0ebf6e25_96_0.jpg](images/ea7682a1-65ee-483d-b2fe-4cbf0ebf6e25_96_0.jpg) Fig. 1 Lemma 1.2. Let \( u \in C\left( \Omega \right) \) . Then (1) for any \( y \in {\Gamma }_{u} \) , (1.10) \[ \left| p\right| \leq \frac{2\sup \left| u\right| }{\operatorname{dist}\{ y,\partial \Omega \} },\;\forall p \in \chi \left( y\right) \] \( \left( 2\right) \) the normal mapping maps any compact subset of \( \Omega \) to a closed set in \( {\mathbb{R}}^{n} \) . Proof. For \( y \in {\Gamma }_{u} \) , (1.11) \[ u\left( y\right) + p \cdot \left( {x - y}\right) \geq u\left( x\right) ,\;\forall x \in \Omega . \] The ray starting at \( y \) with direction \( - p \) intersects \( \partial \Omega \) at \( {x}_{0} \), i.e., \( \left( {1.12}\right) \) \[ {x}_{0} = y - \frac{1}{\left| p\right| }\left| {{x}_{0} - y}\right| p \] Using compact subsets of \( \Omega \) to approximate \( \Omega \) if necessary, we may assume without loss of generality that \( u \) is continuous on \( \bar{\Omega } \) . Choosing \( x \) to be \( {x}_{0} \) in (1.11), we obtain \[ u\left( y\right) - \left| {{x}_{0} - y}\right| \left| p\right| \geq u\left( {x}_{0}\right) \] and therefore \[ \left| p\right| \leq \frac{2\sup \left| u\right| }{\left| {x}_{0} - y\right| } \leq \frac{2\sup \left| u\right| }{\operatorname{dist}\{ y,\partial \Omega \} }. \] Now we prove (2). Let \( F \) be a compact subset of \( \Omega \) . Suppose that \( \left\{ {p}_{n}\right\} \subset \chi \left( F\right) \) and \( {p}_{n} \rightarrow {p}_{0}\left( {n \rightarrow \infty }\right) \) . We want to show that \( {p}_{0} \in \chi \left( F\right) \) . Since \( {p}_{n} \in \chi \left( F\right) \), there exists \( {y}_{n} \in F \) such that \( {p}_{n} \in \chi \left( {y}_{n}\right) \) . From the definition of a normal mapping, \[ u\left( {y}_{n}\right) + {p}_{n} \cdot \left( {x - {y}_{n}}\right) \geq u\left( x\right) ,\;\forall x \in \Omega . \] Since \( F \) is compact, a subsequence \( \left\{ {y}_{{n}_{k}}\right\} \) converges to some \( {y}_{0} \in F \) as \( k \rightarrow \infty \) . Letting \( n = {n}_{k} \rightarrow \infty \) in the above inequality, we easily see that \( {p}_{0} \in \chi \left( {y}_{0}\right) \) . Lemma 1.3. 
Suppose that \( \Omega, A \) are open domains in \( {\mathbb{R}}^{n} \) . (1) If \( \Omega \subset A \), then for \( {x}_{0} \in \Omega \) , \[ \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack \supset A\left\lbrack {{x}_{0},\lambda }\right\rbrack \] (2) If the diameter of \( \Omega \) is \( d \), then (1.13) \[ \left| {\Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| \geq {\left( \frac{\lambda }{d}\right) }^{n}{\omega }_{n} \] where \( \left| \cdot \right| \) denotes the measure of the set and \( {\omega }_{n} \) is the volume of an \( n \) -dimensional unit ball. Proof. (1) is obvious. We prove (2). Clearly, \( {B}_{d}\left( {x}_{0}\right) \supset \Omega \) . Let \( A = {B}_{d}\left( {x}_{0}\right) \) . By (1) and (1.8), \[ \left| {\Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| \geq \left| {A\left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| = \left| {{B}_{\lambda /d}\left( 0\right) }\right| = {\omega }_{n}{\left( \frac{\lambda }{d}\right) }^{n}, \] where we use Lemma 1.2 (2) to deduce that \( \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack \) and \( A\left\lbrack {{x}_{0},\lambda }\right\rbrack \) are measurable sets. Lemma 1.4. Suppose that \( u \in {C}^{2}\left( \Omega \right), g \in C\left( \bar{\Omega }\right), g \geq 0 \), and \( E \) is a measurable subset of \( {\Gamma }_{u} \) . Then (1.14) \[ {\int }_{{Du}\left( E\right) }g\left( {\xi \left( p\right) }\right) {dp} \leq {\int }_{E}g\left( x\right) \det \left( {-{D}^{2}u}\right) {dx} \] where \( \xi \left( p\right) = {\left( Du\right) }^{-1}\left( p\right) \) is well defined and is continuous on \( {Du}\left( E\right) \) except on a zero measure set. Proof. Let \( J\left( x\right) = \det \left( {-{D}^{2}u}\right), S = \{ x \in \Omega \mid J\left( x\right) = 0\} \) . By Sard’s theorem (cf. Appendix 2), \( \left| {{Du}\left( S\right) }\right| = 0 \) . First we assume that \( E \) is open. Then \( E \smallsetminus S \) is an open set. Thus there exist cubes \( {\left\{ {C}_{l}\right\} }_{l = 1}^{\infty } \) with disjoint interior and sides parallel to the coordinate axes such that \( E \smallsetminus S = \mathop{\bigcup }\limits_{{l = 1}}^{\infty }{C}_{l} \) . We assume without loss of generality that the \( {C}_{l} \) are small enough so that \( {Du} : {C}_{l} \rightarrow {Du}\left( {C}_{l}\right) \) is a diffeomorphism. Then \[ {\int }_{{Du}\left( {C}_{l}\right) }g\left( {\xi \left( p\right) }\right) {dp} = {\int }_{{C}_{l}}g\left( x\right) J\left( x\right) {dx}. \] It follows that \[ {\int }_{{Du}\left( {E \smallsetminus S}\right) }g\left( {\xi \left( p\right) }\right) {dp} \leq \mathop{\sum }\limits_{l}{\int }_{{Du}\left( {C}_{l}\right) }g\left( {\xi \left( p\right) }\right) {dp} \] \[ = \mathop{\sum }\limits_{l}{\int }_{{C}_{l}}g\left( x\right) J\left( x\right) {dx} = {\int }_{E \smallsetminus S}g\left( x\right) J\l
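As a check on the example following Definition 1.3, here is the inclusion \( {B}_{\lambda /d}\left( 0\right) \subseteq \chi \left( {x}_{0}\right) \) for the cone function (1.7): if \( \left| p\right| \leq \lambda /d \), then for every \( x \in {B}_{d}\left( {x}_{0}\right) \) \[ u\left( {x}_{0}\right) + p \cdot \left( {x - {x}_{0}}\right) \geq \lambda - \left| p\right| \left| {x - {x}_{0}}\right| \geq \lambda - \frac{\lambda }{d}\left| {x - {x}_{0}}\right| = u\left( x\right) , \] so \( p \in \chi \left( {x}_{0}\right) \) . Conversely, taking \( x = {x}_{0} - {tp}/\left| p\right| \) with small \( t > 0 \) in the defining inequality shows that \( \left| p\right| > \lambda /d \) is impossible, confirming \( \chi \left( {x}_{0}\right) = {B}_{\lambda /d}\left( 0\right) \) .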
113_Topological Groups
Definition 19.1
Definition 19.1. Two \( \mathcal{L} \) -structures \( \mathfrak{A} \) and \( \mathfrak{B} \) are elementarily equivalent, in symbols \( \mathfrak{A}{ \equiv }_{\mathrm{{ee}}}\mathfrak{B} \), if \( \left( {\mathfrak{A} \vDash \varphi \Leftrightarrow \mathfrak{B} \vDash \varphi }\right) \) for every sentence \( \varphi \) . Thus two elementarily equivalent structures are indistinguishable by first-order means. Note from 18.19 that isomorphic structures are automatically elementarily equivalent. We shall see that the converse is far from true. The following useful theorem is easy to establish: Proposition 19.2. For any theory \( \Gamma \) the following conditions are equivalent: (i) \( \Gamma \) is complete; (ii) any two models of \( \Gamma \) are elementarily equivalent. Proposition 19.3. If \( {\mathfrak{A}}_{i}{ \equiv }_{\mathrm{{ee}}}{\mathfrak{B}}_{i} \) for each \( i \in I \), and \( F \) is an ultrafilter over \( I \) , then \( \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i}/\bar{F}{ \equiv }_{\mathrm{{ee}}}\mathop{P}\limits_{{i \in I}}{\mathfrak{B}}_{i}/\bar{F} \) . Proof. Let \( \varphi \) be a sentence which holds in \( \mathop{P}\limits_{{i \in I}}{\mathfrak{A}}_{i}/\bar{F} \) . Then by the basic theorem on ultraproducts, \( \left\{ {i \in I : {\mathfrak{A}}_{i} \vDash \varphi }\right\} \in F \) . But \( \left\{ {i \in I : {\mathfrak{A}}_{i} \vDash \varphi }\right\} = \left\{ {i \in I : {\mathfrak{B}}_{i} \vDash \varphi }\right\} \) by hypothesis, so by the basic theorem on ultraproducts, \( {P}_{i \in I}{\mathfrak{B}}_{i}/\bar{F} \vDash \varphi \) . Taking \( \neg \varphi \) for \( \varphi \), we see that the converse holds also. In Chapter 26 we shall make a deeper study of elementary equivalence; in particular, we provide there mathematical equivalents of this logical notion. We shall be concerned for most of this section with a stronger form of elementary equivalence which can hold between two structures when one is a substructure of the other. So we now turn to a brief discussion of the general algebraic notion of a substructure. Definition 19.4. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be two \( \mathcal{L} \) -structures. We say that \( \mathfrak{A} \) is a substructure of \( \mathfrak{B} \), and \( \mathfrak{B} \) is an extension of \( \mathfrak{A},\mathfrak{A} \subseteq \mathfrak{B} \) or \( \mathfrak{B} \supseteq \mathfrak{A} \), provided that the following conditions hold: (i) \( A \subseteq B \) ; (ii) for each operation symbol \( \mathbf{O} \) (say \( m \) -ary), \( {\mathbf{O}}^{\mathfrak{A}} = {\mathbf{O}}^{\mathfrak{B}}{ \upharpoonright }^{m}A \) ; (iii) for each relation symbol \( \mathbf{R} \) (say \( m \) -ary), \( {\mathbf{R}}^{\mathfrak{A}} = {\mathbf{R}}^{\mathfrak{B}} \cap {}^{m}A \) . Note that if \( \mathfrak{B} \) is an \( \mathcal{L} \) -structure, \( 0 \neq A \subseteq B \), and \( A \) is closed under all of the operations of \( \mathfrak{B} \), then there is a unique \( \mathcal{L} \) -structure \( \mathfrak{A} \) with universe \( A \) such that \( \mathfrak{A} \subseteq \mathfrak{B} \) . In case \( \mathcal{L} \) has no operation symbols, each nonempty subset of \( B \) is the universe of some substructure of \( \mathfrak{B} \) . This is no longer true in general when there are operation symbols. For example, \( \mathfrak{B} = \left( {\omega, s,0}\right) \) has no proper subalgebras, and in particular no finite substructures. The following simple proposition will be useful later. Proposition 19.5. 
Suppose \( \mathfrak{A} \subseteq \mathfrak{B}, x \in {}^{\omega }A \), and \( \sigma \) is a term. Then \( {\sigma }^{\mathfrak{A}}x = {\sigma }^{\mathfrak{B}}x \) . Proposition 19.6. If \( \mathfrak{A} \) and \( \mathfrak{B} \) are \( \mathcal{L} \) -structures, a function \( f \) is an embedding of \( \mathfrak{A} \) into \( \mathfrak{B} \) iff \( f \) is an isomorphism of \( \mathfrak{A} \) onto a substructure of \( \mathfrak{B} \) . The following is a basic result on embeddings which is usually implicit in basic courses in algebra. Proposition 19.7. If \( f \) is an embedding of \( \mathfrak{A} \) into \( \mathfrak{B} \), then there is an \( \mathcal{L} \) -structure \( \mathfrak{C} \) and an isomorphism \( g \) of \( \mathfrak{C} \) onto \( \mathfrak{B} \) such that \( \mathfrak{A} \subseteq \mathfrak{C} \) and \( f \subseteq g \) . This is indicated by the following diagram, where \( \mathfrak{D} \) is the image of \( f \) : ![57474f65-18c7-4127-acaf-c92c2d62e43e_333_0.jpg](images/57474f65-18c7-4127-acaf-c92c2d62e43e_333_0.jpg) Proof. Let \( C = A \cup \{ \left( {A, x}\right) : x \in B \sim D\} \) . Note that \( A \cap \{ \left( {A, x}\right) : x \in B \sim D\} = 0 \) ; for if \( \left( {A, x}\right) \in A \), then \( A \in \{ A\} \in \left( {A, x}\right) \in A \), contradicting the regularity axiom of set theory. Define \( g : C \rightarrow B \) by: \( {ga} = {fa} \) for all \( a \in A \), and \( g\left( {A, x}\right) = x \) for all \( x \in B \sim D \) . Clearly \( g \) is a one-one function mapping \( C \) onto \( B \) . We define an \( \mathcal{L} \) -structure \( \mathfrak{C} \) with universe \( C \) so that \( g \) is automatically an isomorphism from \( \mathfrak{C} \) onto \( \mathfrak{B} \) : for \( \mathbf{O} \) an \( m \) -ary operation symbol and for \( {c}_{0},\ldots ,{c}_{m - 1} \in C, \) \[ {\mathbf{O}}^{\mathfrak{C}}\left( {{c}_{0},\ldots ,{c}_{m - 1}}\right) = {g}^{-1}{\mathbf{O}}^{\mathfrak{B}}\left( {g{c}_{0},\ldots, g{c}_{m - 1}}\right) \] while for \( \mathbf{R} \) an \( m \) -ary relation symbol, \[ {\mathbf{R}}^{\mathfrak{C}} = \left\{ {\left( {{c}_{0},\ldots ,{c}_{m - 1}}\right) : \left( {g{c}_{0},\ldots, g{c}_{m - 1}}\right) \in {\mathbf{R}}^{\mathfrak{B}}}\right\} . \] Since \( f \subseteq g \), it remains only to check that \( \mathfrak{A} \subseteq \mathfrak{C} \) . For an \( m \) -ary operation symbol \( \mathbf{O} \) of \( \mathcal{L} \) and for any \( {a}_{0},\ldots ,{a}_{m - 1} \in A \) , \[ {\mathbf{O}}^{\mathfrak{C}}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) = {g}^{-1}{\mathbf{O}}^{\mathfrak{B}}\left( {g{a}_{0},\ldots, g{a}_{m - 1}}\right) = {g}^{-1}{\mathbf{O}}^{\mathfrak{B}}\left( {f{a}_{0},\ldots, f{a}_{m - 1}}\right) \] \[ = {g}^{-1}{\mathbf{O}}^{\mathfrak{D}}\left( {f{a}_{0},\ldots, f{a}_{m - 1}}\right) = {g}^{-1}f{\mathbf{O}}^{\mathfrak{A}}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) \] \[ = {\mathbf{O}}^{\mathfrak{A}}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) \text{.} \] The case of relation symbols is similar. We now introduce the technique of diagrams, due to Henkin and A. Robinson, which gives a method for dealing with substructures in a logical context. There are many variations on this important technique, and we will meet with some of them later. Definition 19.8. (Diagrams). Let \( X \) be any set. An \( X \) -expansion of \( \mathcal{L} \) is an expansion \( {\mathcal{L}}^{\prime } \) of \( \mathcal{L} \) obtained from \( \mathcal{L} \) by adding new distinct individual constants \( {\mathbf{c}}_{x} \) for all \( x \in X \) . 
We sometimes denote \( {\mathcal{L}}^{\prime } \) by \( {\left( \mathcal{L},{\mathbf{c}}_{x}\right) }_{x \in X} \), and \( {\mathcal{L}}^{\prime } \) structures are denoted by \( {\left( \mathfrak{A},{l}_{x}\right) }_{x \in X} \), where \( \mathfrak{A} \) is an \( \mathcal{L} \) -structure and \( {l}_{x} \) is a member of \( A \) for each \( x \in X;{l}_{x} \) is the denotation in the structure of \( {\mathbf{c}}_{x} \) . Let \( \mathfrak{A} \) be an \( \mathcal{L} \) -structure, and let \( {\mathcal{L}}^{\prime } \) be an \( A \) -expansion of \( \mathcal{L} \) . The \( {\mathcal{L}}^{\prime } \) - diagram of \( \mathfrak{A} \) is the set of all sentences of \( {\mathcal{L}}^{\prime } \) of the following forms: \( \neg {\mathbf{c}}_{a} = {\mathbf{c}}_{b} \) for \( a, b \in A \) and \( a \neq b; \) \( {\mathbf{{Oc}}}_{a0}\cdots {\mathbf{c}}_{a\left( {m - 1}\right) } = {\mathbf{c}}_{b} \) if \( \mathbf{O} \) is an operation symbol of \( \mathcal{L} \) of rank \( m \) , \( a \in {}^{m}A \), and \( {\mathbf{O}}^{\mathfrak{A}}{a}_{0}\cdots {a}_{m - 1} = b \) ; \( {\mathbf{{Rc}}}_{a0}\cdots {\mathbf{c}}_{a\left( {m - 1}\right) } \) if \( \mathbf{R} \) is an \( m \) -ary relation symbol of \( \mathcal{L} \), and \( \left\langle {{a}_{0},\ldots ,{a}_{m - 1}}\right\rangle \in {\mathbf{R}}^{\mathfrak{A}} \) \( \neg {\mathbf{{Rc}}}_{a0}\cdots {\mathbf{c}}_{a\left( {m - 1}\right) } \) if \( \mathbf{R} \) is an \( m \) -ary relation symbol of \( \mathcal{L} \), and \( \left\langle {{a}_{0},\ldots ,{a}_{m - 1}}\right\rangle \notin {\mathbf{R}}^{\mathfrak{A}}. \) Diagrams are essentially a logical expression of the notion of substructure, or embedding. The basic properties of diagrams are given in the next two theorems, which essentially show that the models of the diagram of \( \mathfrak{A} \) are exactly the structures in which \( \mathfrak{A} \) can be embedded. Proposition 19.9. Let \( \mathfrak{A} \) and \( \mathfrak{B} \) be \( \mathcal{L} \) -structures with \( f \) an embedding of \( \mathfrak{A} \) into \( \mathfrak{B} \) . Let \( {\mathcal{L}}^{\prime } \) be an \( A \) -expansion of \( \mathcal{L} \) . Then \( {\left( \mathfrak{B}, fa\right) }_{a \in A} \) is a model of the \( {\mathcal{L}}^{\prime } \) -diagram of \( \mathfrak{A} \) . Proof. The proof is essentially trivial, and we only illustrate it by verifying that a member \( \varphi = \neg {\mathbf{{Rc}}}_{a0}\cdots {\mathbf{c}}_{a\left( {m - 1}\right) } \) of the diagram holds in \( {\left( \mathfrak{V}, fa\right) }_{a \in A} \) , where \( \mathbf{R} \) is an \( m \) -ary relation symbol of \( \mathcal{L} \) and \( \left\langle {{a}_{0},\ldots ,{a}_{m - 1}}\right\rangle \notin {\mathbf{R}}^{\mathfrak{A}} \) . \[ {\left( \mathfrak{B}, fa\right) }_{a \in A} \vDash \varphi \;\text{iff not }{\left( \mathfrak{B}, fa\right) }_{a \in A} \vDash {\mathbf{{Rc}}}_{a0}\cdots {\mathbf{c}}_{a\left( {m -
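A classical example showing that elementarily equivalent structures need not be isomorphic: the theory of dense linear orderings without endpoints is \( {\aleph }_{0} \) -categorical and has no finite models, hence complete, so by Proposition 19.2 \[ \left( {\mathbb{Q}, < }\right) { \equiv }_{\mathrm{{ee}}}\left( {\mathbb{R}, < }\right) , \] although \( \left( {\mathbb{Q}, < }\right) \) and \( \left( {\mathbb{R}, < }\right) \) are not isomorphic, one being countable and the other not.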
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 3.3
Definition 3.3. Suppose that as \( \alpha \) varies in \( \left( {a, b}\right) \), the critical points of the vector fields \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) remain unchanged; and for any fixed point \( P\left( {x, y}\right) \) and any parameters \( {\alpha }_{1} < {\alpha }_{2} \) in \( \left( {a, b}\right) \), we have \[ \left| \begin{array}{ll} X\left( {x, y,{\alpha }_{1}}\right) & Y\left( {x, y,{\alpha }_{1}}\right) \\ X\left( {x, y,{\alpha }_{2}}\right) & Y\left( {x, y,{\alpha }_{2}}\right) \end{array}\right| \geq 0\left( {\text{ or } \leq 0}\right) , \] (3.5) where equality cannot hold on an entire closed orbit of \( {\left( {3.1}\right) }_{{\alpha }_{i}}, i = 1,2 \) . Then \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) are called generalized rotated vector fields. Here, the interval \( \left( {a, b}\right) \) can be either bounded or unbounded. The relation between conditions (1), (2) in Definition 3.2 and inequality (3.5) in Definition 3.3 is left to the readers as an exercise. If for some regular point \( \left( {{x}_{0},{y}_{0}}\right) \) and parameter \( {\alpha }_{0} \), there exists \( \delta \left( {{x}_{0},{y}_{0},{\alpha }_{0}}\right) > 0 \) such that for any \( \alpha \in \left\lbrack {{\alpha }_{0} - \delta ,{\alpha }_{0} + \delta }\right\rbrack \), the equality is valid in (3.5), then \( {\alpha }_{0} \) is called a stopping point for \( \left( {{x}_{0},{y}_{0}}\right) \) ; otherwise, \( {\alpha }_{0} \) is called a rotating point. Stopping points are allowed in generalized rotated vector fields. Moreover, generalized rotated vector fields do not necessarily depend on \( \alpha \) periodically; in particular, condition (3.3) is not required. The geometric meaning of condition (3.5) is that, at any fixed point \( P\left( {x, y}\right) \) , the oriented area between \[ \left( {X\left( {x, y,{\alpha }_{1}}\right), Y\left( {x, y,{\alpha }_{1}}\right) }\right) \text{ and }\left( {X\left( {x, y,{\alpha }_{2}}\right), Y\left( {x, y,{\alpha }_{2}}\right) }\right) \] has the same (or opposite) sign as \( \operatorname{sgn}\left( {{\alpha }_{2} - {\alpha }_{1}}\right) \) . That is, at any point \( P\left( {x, y}\right) \) , as the parameter \( \alpha \) increases, the vector \( \left( {X\left( {x, y,\alpha }\right), Y\left( {x, y,\alpha }\right) }\right) \) can only rotate in one direction; moreover, the angle of rotation cannot exceed \( \pi \) . This is also the geometric meaning of Definition 3.2. In the following, we describe two examples of rotated vector fields EXAMPLE 3.1. Consider the system of differential equations \[ \frac{dx}{dt} = X\left( {x, y}\right) ,\;\frac{dy}{dt} = Y\left( {x, y}\right) , \] (3.6) where \( X, Y \in {C}^{0} \), and satisfies conditions for uniqueness of solutions. Construct the system of differential equations containing parameter \( \alpha \) \[ \frac{dx}{dt} = X\cos \alpha - Y\sin \alpha ,\;\frac{dy}{dt} = X\sin \alpha + Y\cos \alpha . \] (3.7) It is not difficult to verify that equations (3.7) satisfy conditions (3.2), (3.3) and thus form a complete family of rotated vector fields. However, in \( 0 < \) \( \alpha \leq {2\pi } \), they are not generalized rotated vector fields. In fact, (3.7) can be regarded as a formula for axis rotation. It rotates the original vector field by an angle of \( \alpha \) , and keeps the vector lengths unchanged. Thus (3.7) are called uniformly rotated vector fields. EXAMPLE 3.2. 
Consider the system of differential equations \[ \frac{dx}{dt} = - {\alpha y},\;\frac{dy}{dt} = {\alpha x} - {\alpha yf}\left( {\alpha x}\right) , \] (3.8) where \( 0 < \alpha < + \infty \), and \( f\left( x\right) \) is monotonically increasing as \( \left| x\right| \) increases. It can be verified by condition (3.5) that (3.8) are generalized rotated vector fields; however, it is not a complete family of rotated vector fields. In the following, we will prove a few important theorems concerning limit cycles for generalized rotated vector fields. Naturally, they will also apply to complete families of rotated vector fields. We first prove several lemmas. LEMMA 3.1. Let \( {L}_{0} \) be a smooth simple closed curve, parametrized by \( x = \varphi \left( t\right), y = \psi \left( t\right) \) ; and suppose \( {L}_{0} \) is positively oriented (as \( t \) increases, it spirals counterclockwise). If on \( {L}_{0} \), we have \[ H\left( t\right) = \left| \begin{matrix} {\varphi }^{\prime }\left( t\right) & {\psi }^{\prime }\left( t\right) \\ X\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) & Y\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) \end{matrix}\right| \geq 0\left( {\text{ or } \leq 0}\right) , \] (3.9) then as \( t \) increases, the orbits of the system \[ \frac{dx}{dt} = X\left( {x, y}\right) ,\;\frac{dy}{dt} = Y\left( {x, y}\right) \] (3.10) cannot move from the interior (or exterior) of the region \( G \) bounded by \( {L}_{0} \) to the exterior (or interior) of \( G \) . (That is, from one region in \( {R}^{2} \smallsetminus {L}_{0} \) to another). Proof. We will only prove the case outside the parenthesis. Let \( \theta \) be the angle formed by the tangent vector at a point on \( {L}_{0} \) and the vector field \( \left( {X\left( {x, y}\right), Y\left( {x, y}\right) }\right) \) . We have \[ \sin \theta \left( t\right) = \frac{H\left( t\right) }{\sqrt{{\left\lbrack {\varphi }^{\prime }\left( t\right) \right\rbrack }^{2} + {\left\lbrack {\psi }^{\prime }\left( t\right) \right\rbrack }^{2}}\sqrt{{X}^{2}\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) + {Y}^{2}\left( {\varphi \left( t\right) ,\psi \left( t\right) }\right) }}. \] (3.11) From (3.9), we find \( \sin \theta \left( t\right) \geq 0 \), i.e., \( 0 \leq \theta \left( t\right) \leq \pi \) . If \( 0 < \theta \left( t\right) < \pi \), then the Lemma is clearly true. Suppose there is some point \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \in {L}_{0} \) with \( \theta \left( {t}_{0}\right) = 0 \) or \( \pi \), and the conclusion of the Lemma is not true, i.e., the orbit of (3.10) is tangent to \( {L}_{0} \) at \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \in {L}_{0} \) and moves from the interior of \( G \) to its exterior as \( t \) increases. From the continuous dependence of solutions on initial conditions, the orbits near \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \) also have this property. This situation is impossible, since we find from (3.9) and (3.11) that at these points we still have \( 0 \leq \theta \left( t\right) \leq \pi \) . Suppose that at all these points we have \( \theta \left( t\right) = 0 \) or \( \pi \), then the orbit of (3.10) near \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \) will be tangent to \( {L}_{0} \) and thus coincides with it. Thus it cannot move from the interior of \( G \) to its exterior. 
If there is a point with \( 0 < \theta \left( t\right) < \pi \) arbitrarily close to \( \left( {\varphi \left( {t}_{0}\right) ,\psi \left( {t}_{0}\right) }\right) \), then this situation is impossible. This proves the Lemma. Naturally, \( {L}_{0} \) can itself be an orbit of (3.10). In this case, equality will hold identically in (3.9) on the orbit \( {L}_{0} \) . LEMMA 3.2. Consider the system \[ \frac{dx}{dt} = {X}_{i}\left( {x, y}\right) ,\;\frac{dy}{dt} = {Y}_{i}\left( {x, y}\right) , \] \( {\left( {3.12}\right) }_{i} \) where \( {X}_{i},{Y}_{i} \in {C}^{0}\left( {G \subseteq {R}^{2}}\right), i = 1,2 \), and satisfy conditions for uniqueness of solutions. Suppose that for \( \left( {x, y}\right) \in G \) \[ \left| \begin{array}{ll} {X}_{1}\left( {x, y}\right) & {Y}_{1}\left( {x, y}\right) \\ {X}_{2}\left( {x, y}\right) & {Y}_{2}\left( {x, y}\right) \end{array}\right| \] (3.13) does not change sign, then the closed orbits of \( {\left( {3.12}\right) }_{1} \) and \( {\left( {3.12}\right) }_{2} \) either coincide or do not intersect. Proof. Let \( {L}_{i} : x = {\varphi }_{i}\left( t\right), y = {\psi }_{i}\left( t\right) \) be closed orbits of \( {\left( {3.12}\right) }_{i}, i = 1,2 \) . Without loss of generality, we may assume \( {L}_{1} \) is positively oriented. From system \( {\left( {3.12}\right) }_{1} \), we have \[ {\varphi }_{1}^{\prime }\left( t\right) = {X}_{1}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) ,\;{\psi }_{1}^{\prime }\left( t\right) = {Y}_{1}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) . \] Since (3.13) never changes sign, we find that \[ \left| \begin{matrix} {\varphi }_{1}^{\prime }\left( t\right) & {\psi }_{1}^{\prime }\left( t\right) \\ {X}_{2}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) & {Y}_{2}\left( {{\varphi }_{1}\left( t\right) ,{\psi }_{1}\left( t\right) }\right) \end{matrix}\right| \] (3.14) never changes sign. Suppose that it is nonnegative, then from the case outside of the parenthesis in Lemma 3.1 we conclude that \( {L}_{2} \) and \( {L}_{1} \) cannot intersect. If \( {L}_{1} \) and \( {L}_{2} \) do not coincide, and are tangential to each other ![bea09977-be18-4815-a30e-4fa2fe3b219c_223_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_223_0.jpg) FIGURE 4.15 internally or externally as shown in Figure 4.15, then the continuous dependence of solutions on initial conditions implies that an orbit \( {\overrightarrow{f}}_{2}\left( {P, I}\right) \) of \( {\left( {3.12}\right) }_{2} \) moves from the interior of region \( {G}_{1} \) enclosed by \( {L}_{1} \) to the exterior of \( {G}_{1} \) as \( t \) increases. This contradicts Lemma 3.1; and consequently either \( {L}_{2} \) coincides with \( {L}_{1} \) or they never intersect each other. If (3.14) is nonpositive, the proof is similar. THEOREM 3.1 (nonintersect
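For the uniformly rotated vector fields (3.7) of Example 3.1, the determinant appearing in condition (3.5) can be computed explicitly: at a point where the original field is \( \left( {X, Y}\right) \), \[ \left| \begin{matrix} X\cos {\alpha }_{1} - Y\sin {\alpha }_{1} & X\sin {\alpha }_{1} + Y\cos {\alpha }_{1} \\ X\cos {\alpha }_{2} - Y\sin {\alpha }_{2} & X\sin {\alpha }_{2} + Y\cos {\alpha }_{2} \end{matrix}\right| = \left( {{X}^{2} + {Y}^{2}}\right) \sin \left( {{\alpha }_{2} - {\alpha }_{1}}\right) . \] The sign is constant as long as \( {\alpha }_{2} - {\alpha }_{1} \) stays in \( \left( {0,\pi }\right\rbrack \), but it changes once the parameters may differ by more than \( \pi \), consistent with the remark in Example 3.1 that (3.7) fails to be a family of generalized rotated vector fields on all of \( 0 < \alpha \leq {2\pi } \) .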
1112_(GTM267)Quantum Theory for Mathematicians
Definition 22.2
Definition 22.2 For any symplectic potential \( \theta \) and vector field \( X \) on \( {\mathbb{R}}^{2n} \) , let \( {\nabla }_{X} \) denote the covariant derivative operator, acting on \( {C}^{\infty }\left( {\mathbb{R}}^{2n}\right) \) , given by \[ {\nabla }_{X} = X - \frac{i}{\hslash }\theta \left( X\right) \] (22.4) Note that our prequantized operators can be written as \[ {Q}_{\text{pre }}\left( f\right) = i\hslash {\nabla }_{{X}_{f}} + f \] Proposition 22.3 For any symplectic potential \( \theta \), let \( {\nabla }_{X} \) denote the associated covariant derivative in (22.4). Then for all smooth vector fields \( X \) and \( Y \) on \( {\mathbb{R}}^{2n} \), we have \[ \left\lbrack {{\nabla }_{X},{\nabla }_{Y}}\right\rbrack = {\nabla }_{\left\lbrack X, Y\right\rbrack } - \frac{i}{\hslash }\omega \left( {X, Y}\right) \] (22.5) In particular, if \( X = {X}_{f} \) and \( Y = {X}_{g} \), we have \[ \left\lbrack {{\nabla }_{{X}_{f}},{\nabla }_{{X}_{g}}}\right\rbrack = {\nabla }_{{X}_{\{ f, g\} }} + \frac{i}{\hslash }\{ f, g\} . \] According to standard differential geometric definitions, the 2-form \( \omega /\hslash \) on the right-hand side of (22.5) is the curvature of the covariant derivative \( \nabla \) . For our purposes, the fact that \( \left\lbrack {{\nabla }_{{X}_{f}},{\nabla }_{{X}_{g}}}\right\rbrack \) in not simply \( {\nabla }_{{X}_{\{ f, g\} }} \) is an advantage. The extra term in the formula for the commutator is just what we need to compensate for the failure of the operators \( i\hslash {X}_{f} + f \) to have the desired commutation relations. Proof. Using the easily verified identity \( \left\lbrack {{\nabla }_{X}, f}\right\rbrack = X\left( f\right) \), we obtain \[ \left\lbrack {{\nabla }_{X},{\nabla }_{Y}}\right\rbrack - {\nabla }_{\left\lbrack X, Y\right\rbrack } = - \frac{i}{\hslash }\left\lbrack {X\left( {\theta \left( Y\right) }\right) - Y\left( {\theta \left( X\right) }\right) - \theta \left( \left\lbrack {X, Y}\right\rbrack \right) }\right\rbrack \] In light of (21.6), the right-hand side becomes \( - \left( {i/\hslash }\right) \left( {d\theta }\right) \left( {X, Y}\right) \), where \( {d\theta } = \omega \) . ∎ We may now easily prove Proposition 22.1. Proof of Proposition 22.1. Using Proposition 22.3, we obtain \[ \frac{1}{i\hslash }\left\lbrack {i\hslash {\nabla }_{{X}_{f}} + f, i\hslash {\nabla }_{{X}_{g}} + g}\right\rbrack \] \[ = \left( {i\hslash }\right) \left( {{\nabla }_{{X}_{\{ f, g\} }} + \frac{i}{\hslash }\{ f, g\} }\right) + {X}_{f}\left( g\right) - {X}_{g}\left( f\right) \] \[ = i\hslash {\nabla }_{{X}_{\{ f, g\} }} - \{ f, g\} + \{ f, g\} + \{ f, g\} \] which reduces to what we want. - Example 22.4 If \( \theta = {p}_{j}d{x}_{j} \), the prequantized position and momentum operators are given by \[ {Q}_{\text{pre }}\left( {x}_{j}\right) = {x}_{j} + i\hslash \frac{\partial }{\partial {p}_{j}} \] \[ {Q}_{\text{pre }}\left( {p}_{j}\right) = - i\hslash \frac{\partial }{\partial {x}_{j}}. \] These operators are essentially self-adjoint on \( {C}_{c}^{\infty }\left( {\mathbb{R}}^{2n}\right) \) and their self-adjoint extensions satisfy the exponentiated commutation relations of Definition 14.2. Proof. We compute that \( {X}_{{x}_{j}} = \partial /\partial {p}_{j} \) and that \( \theta \left( {X}_{{x}_{j}}\right) = 0 \), giving the indicated expression for \( {Q}_{\text{pre }}\left( {x}_{j}\right) \) . Meanwhile, \( {X}_{{p}_{j}} = - \partial /\partial {x}_{j} \) and \( \theta \left( {X}_{{p}_{j}}\right) = \) \( - {p}_{j} \) . 
There is a cancellation of the \( \theta \left( {X}_{{p}_{j}}\right) \) term in the definition of \( {Q}_{\mathrm{{pre}}}\left( {p}_{j}\right) \) with the \( {p}_{j} \) term, leaving \( {Q}_{\text{pre }}\left( {p}_{j}\right) = i\hslash {X}_{{p}_{j}} \) . The essential self-adjointness of the operators follows from Proposition 9.40. To verify the exponentiated commutation relations, we calculate the associated one-parameter unitary groups as \[ \left( {{e}^{{it}{Q}_{\mathrm{{pre}}}\left( {x}_{j}\right) }\psi }\right) \left( {\mathbf{x},\mathbf{p}}\right) = {e}^{{it}{x}_{j}}\psi \left( {\mathbf{x},\mathbf{p} - t\hslash {\mathbf{e}}_{j}}\right) \] \[ \left( {{e}^{{it}{Q}_{\text{pre }}\left( {p}_{j}\right) }\psi }\right) \left( {\mathbf{x},\mathbf{p}}\right) = \psi \left( {\mathbf{x} + t\hslash {\mathbf{e}}_{j},\mathbf{p}}\right) , \] (22.6) where we now let \( {Q}_{\mathrm{{pre}}}\left( {x}_{j}\right) \) and \( {Q}_{\mathrm{{pre}}}\left( {p}_{j}\right) \) denote the unique self-adjoint extensions of the given operators on \( {C}_{c}^{\infty }\left( {\mathbb{R}}^{2n}\right) \) . (Compare Proposition 13.5.) The exponentiated commutation relations can now be easily verified by direct calculation. - As we have presented things so far, the concept of covariant derivative, and thus also of prequantization, depends on the choice of symplectic potential \( \theta \) . This dependence is, however, illusory; we will now show that the prequantum maps obtained with two different symplectic potentials are unitarily equivalent. Proposition 22.5 Suppose that \( {\theta }_{1} \) and \( {\theta }_{2} \) are two different symplectic potentials for the canonical 2 -form \( \omega \), so that \( d\left( {{\theta }^{1} - {\theta }^{2}}\right) = 0 \) . Let the associated covariant derivatives be denoted by \( {\nabla }^{1} \) and \( {\nabla }^{2} \) . Choose a real-valued function \( \gamma \) so that \( {d\gamma } = {\theta }^{1} - {\theta }^{2} \) and let \( {U}_{\gamma } \) be the unitary map of \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) to itself given by \[ {U}_{\gamma }\psi = {e}^{-{i\gamma }/\hslash }\psi \] Then for every vector field \( X \), we have \[ {U}_{\gamma }{\nabla }_{X}^{1}{U}_{\gamma }^{-1} = {\nabla }_{X}^{2} \] (22.7) If \( {Q}_{\mathrm{{pre}}}^{j}\left( f\right), j = 1,2 \), are the associated prequantization maps, it follows that \[ {U}_{\gamma }{Q}_{\text{pre }}^{1}\left( f\right) {U}_{\gamma }^{-1} = {Q}_{\text{pre }}^{2}\left( f\right) \] (22.8) The map \( {U}_{\gamma } \) is called a gauge transformation. Proof. The operation of multiplication by \( {\theta }^{1}\left( X\right) \) commutes with multiplication by \( {e}^{-{i\gamma }/\hslash } \), whereas \[ X\left( {{e}^{{i\gamma }/\hslash }\psi }\right) = {e}^{{i\gamma }/\hslash }{X\psi } + \frac{i}{\hslash }{e}^{{i\gamma }/\hslash }X\left( \gamma \right) \psi . \] Since \( X\left( \gamma \right) = \left( {d\gamma }\right) \left( X\right) = {\theta }^{1}\left( X\right) - {\theta }^{2}\left( X\right) \), we obtain \[ {\nabla }_{X}^{1}\left( {{e}^{{i\gamma }/\hslash }\psi }\right) = {e}^{{i\gamma }/\hslash }\left( {X + \frac{i}{\hslash }X\left( \gamma \right) - \frac{i}{\hslash }{\theta }^{1}\left( {X}_{f}\right) }\right) \psi \] \[ = {e}^{{i\gamma }/\hslash }\left( {X - \frac{i}{\hslash }{\theta }^{2}\left( {X}_{f}\right) }\right) \psi \] \[ = {e}^{{i\gamma }/\hslash }{\nabla }_{X}^{2}\psi \] Multiplying both sides of this equality by \( {e}^{-{i\gamma }/\hslash } \) gives (22.7). 
Equation (22.8) follows by observing that multiplication by \( f \) commutes with multiplication by \( {e}^{-{i\gamma }/\hslash } \) . ## 22.3 Problems with Prequantization Given the naturalness of the prequantization construction, it is tempting to think that prequantization could actually be considered as quantization. Why not take our Hilbert space to be \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) and the quantized operators to be \( {Q}_{\mathrm{{pre}}}\left( f\right) \) ? To answer this question, we now examine some undesirable properties of prequantization. In the first place, the Hilbert space \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) is very far from irreducible under the action of the quantized position and momentum operators, in contrast to the ordinary Schrödinger Hilbert space \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \), which is irreducible, by Proposition 14.7. Indeed, in Sect. 22.4, we will construct a large family of invariant subspaces. (See Proposition 22.13.) In the second place, the prequantization map is very far from being multiplicative. Of course, since quantum operators do not commute, we cannot expect any quantization scheme \( Q \) to satisfy \( Q\left( {fg}\right) = Q\left( f\right) Q\left( g\right) \) for all \( f \) and \( g \) . Nevertheless, the standard quantization schemes we have considered in Chap. 13 do satisfy this relation for certain classes of observables \( f \) and \( g \) . In the Weyl quantization, for example, we have multiplicativity if \( f \) and \( g \) are both functions of \( \mathbf{x} \) only, independent of \( \mathbf{p} \) (or functions of \( \mathbf{p} \), independent of \( \mathbf{x} \) ). For the prequantization map, however, we almost never have multiplicativity, for the simple reason that \( {Q}_{\text{pre }}\left( {fg}\right) \) is a first-order differential operator, whereas \( {Q}_{\text{pre }}\left( f\right) {Q}_{\text{pre }}\left( g\right) \) is second-order, provided there is at least one point where \( {X}_{f} \) and \( {X}_{g} \) are both nonzero. In the third place, the prequantization map badly fails to map positive functions to positive operators. Although most of the quantization schemes in Chap. 13 do not always map positive functions to positive operators, they somehow come close to doing so. Indeed, \( {Q}_{\text{Weyl }},{Q}_{\text{Wick }} \), and \( {Q}_{\text{anti-Wick }} \) all map the harmonic oscillator Hamiltonian to a non-negative operator, since \( {a}^{ * }a + \left( {1/2}\right) I,{a}^{ * }a \), and \( a{a}^{ * } \) are all non-negative. (See Exercise 4 in Chap. 13.) By contrast, the prequantized harmonic oscillator Hamiltonian has spectrum that is unbounded below, as we now demonstrate. Proposition 22.6 Consider a harmonic oscillator Hamiltonian of the form \[ H\left( {x, p}\right) = \frac{1}{2m}\left( {{p}^{2} + {\left( m\omega x\right) }^{2}}\right) . \] Then for each integer
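As a direct check of Proposition 22.1 in the simplest case, the prequantized operators of Example 22.4 satisfy the canonical commutation relation: since \( \partial /\partial {p}_{j} \) commutes with \( \partial /\partial {x}_{j} \), \[ \left\lbrack {{Q}_{\mathrm{{pre}}}\left( {x}_{j}\right) ,{Q}_{\mathrm{{pre}}}\left( {p}_{j}\right) }\right\rbrack = \left\lbrack {{x}_{j} + i\hslash \frac{\partial }{\partial {p}_{j}}, - i\hslash \frac{\partial }{\partial {x}_{j}}}\right\rbrack = \left\lbrack {{x}_{j}, - i\hslash \frac{\partial }{\partial {x}_{j}}}\right\rbrack = i\hslash , \] so that \( \frac{1}{i\hslash }\left\lbrack {{Q}_{\mathrm{{pre}}}\left( {x}_{j}\right) ,{Q}_{\mathrm{{pre}}}\left( {p}_{j}\right) }\right\rbrack = 1 = {Q}_{\mathrm{{pre}}}\left( {\left\{ {{x}_{j},{p}_{j}}\right\} }\right) \), since \( \left\{ {{x}_{j},{p}_{j}}\right\} = 1 \) and the constant function 1 is prequantized as the identity operator.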
1075_(GTM233)Topics in Banach Space Theory
Definition 9.2.4
Definition 9.2.4. A basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of a Banach space \( X \) is subsymmetric provided it is unconditional and for every increasing sequence of integers \( {\left\{ {n}_{i}\right\} }_{i = 1}^{\infty } \), the subbasis \( {\left( {e}_{{n}_{i}}\right) }_{i = 1}^{\infty } \) is equivalent to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . Lemma 9.2.2 yields that symmetric bases are subsymmetric. However, these two concepts do not coincide, as shown by the following example, due to Garling [98]. Example 9.2.5. A subsymmetric basis that is not symmetric. Let \( X \) be the space of all sequences of scalars \( \xi = {\left( {\xi }_{n}\right) }_{n = 1}^{\infty } \) for which \[ \parallel \xi \parallel = \sup \mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{\left| {\xi }_{{n}_{k}}\right| }{\sqrt{k}} < \infty \] the supremum being taken over all increasing sequences of integers \( {\left( {n}_{k}\right) }_{k = 1}^{\infty } \) . We leave for the reader the task to check that \( X \), endowed with the norm defined above, is a Banach space whose unit vectors \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) form a subsymmetric basis that is not symmetric. Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be a symmetric basis in a Banach space \( X \) . For every permutation \( \pi \) of \( \mathbb{N} \) and every sequence of signs \( \epsilon = {\left( {\epsilon }_{n}\right) }_{n = 1}^{\infty } \), there is an automorphism \[ {T}_{\pi ,\epsilon } : X \rightarrow X,\;x = \mathop{\sum }\limits_{{n = 1}}{a}_{n}{e}_{n} \mapsto {T}_{\pi ,\epsilon }\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\epsilon }_{n}{a}_{n}{e}_{\pi \left( n\right) }. \] The uniform boundedness principle yields a number \( K \) such that \[ \mathop{\sup }\limits_{{\pi ,\epsilon }}\begin{Vmatrix}{T}_{\pi ,\epsilon }\end{Vmatrix} \leq K \] i.e., the estimate \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{\epsilon }_{n}{a}_{n}{e}_{\pi \left( n\right) }}\end{Vmatrix} \leq K\begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n}}\end{Vmatrix} \] (9.10) holds for all choices of signs \( \left( {\epsilon }_{n}\right) \) and all permutations \( \pi \) . The smallest constant \( 1 \leq K \) in (9.10) is called the symmetric constant of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) and will be denoted by \( {\mathrm{K}}_{\mathrm{s}} \) . We then say that \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is \( K \) -symmetric whenever \( {\mathrm{K}}_{\mathrm{s}} \leq K \) . For every \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n} \in X \), put \[ \parallel \left| x\right| \parallel = \sup \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{\epsilon }_{n}{a}_{n}{e}_{\pi \left( n\right) }}\end{Vmatrix} \] (9.11) the supremum being taken over all choices of scalars \( \left( {\epsilon }_{n}\right) \) of signs and all permutations of the natural numbers. Equation (9.11) defines a new norm on \( X \) equivalent to \( \parallel \cdot \parallel \), since \( \parallel x\parallel \leq \parallel \left| x\right| \parallel \leq K\parallel x\parallel \) for all \( x \in X \) . With respect to this norm, \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a 1-symmetric basis of \( X \) . Theorem 9.2.6. Let \( X \) be a Banach space with normalized 1-symmetric basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . Suppose that \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) is a normalized constant-coefficient block basic sequence. 
Then the subspace \( \left\lbrack {u}_{n}\right\rbrack \) is complemented in \( X \) by a norm-one projection. Proof. For each \( k = 1,2,\ldots \), let \( {u}_{k} = {c}_{k}\mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j} \), where \( {\left( {A}_{k}\right) }_{k = 1}^{\infty } \) is a sequence of mutually disjoint subsets of \( \mathbb{N} \) (notice that since \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is 1-symmetric, the blocks of the basis need not be in increasing order). For every fixed \( n \in \mathbb{N} \), let \( {\Pi }_{n} \) denote the set of all permutations \( \pi \) of \( \mathbb{N} \) such that for each \( 1 \leq k \leq n,\pi \) restricted to \( {A}_{k} \) acts as a cyclic permutation of the elements of \( {A}_{k} \) (in particular, \( \pi \left( {A}_{k}\right) = {A}_{k} \) ), and \( \pi \left( j\right) = j \) for all \( j \notin { \cup }_{k = 1}^{n}{A}_{k} \) . Every \( \pi \in {\Pi }_{n} \) has an associated operator on \( X \), defined for \( x = \mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{j} \) as \[ {T}_{n,\pi }\left( {\mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{j}}\right) = \mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{\pi \left( j\right) } \] Notice that due to the 1-symmetry of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \), we have \( \begin{Vmatrix}{{T}_{n,\pi }\left( x\right) }\end{Vmatrix} = \parallel x\parallel \) . Let us define an operator on \( X \) by averaging over all possible choices of permutations \( \pi \in {\Pi }_{n} \) . Given \( x = \mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{j} \) , \[ {T}_{n}\left( x\right) = \frac{1}{\left| {\Pi }_{n}\right| }\mathop{\sum }\limits_{{\pi \in {\Pi }_{n}}}{T}_{n,\pi }\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{n}\left( {\frac{1}{\left| {A}_{k}\right| }\mathop{\sum }\limits_{{j \in {A}_{k}}}{a}_{j}}\right) \mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j} + \mathop{\sum }\limits_{{j \notin { \cup }_{k = 1}^{n}{A}_{k}}}{a}_{j}{e}_{j}. \] Then, \[ \begin{Vmatrix}{{T}_{n}\left( x\right) }\end{Vmatrix} = \begin{Vmatrix}{\frac{1}{\left| {\Pi }_{n}\right| }\mathop{\sum }\limits_{{\pi \in {\Pi }_{n}}}{T}_{n,\pi }\left( x\right) }\end{Vmatrix} \leq \frac{1}{\left| {\Pi }_{n}\right| }\mathop{\sum }\limits_{{\pi \in {\Pi }_{n}}}\begin{Vmatrix}{{T}_{n,\pi }\left( x\right) }\end{Vmatrix} = \parallel x\parallel . \] Therefore, for each \( n \in \mathbb{N} \) the operator \[ {P}_{n}\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{n}\left( {\frac{1}{\left| {A}_{k}\right| }\mathop{\sum }\limits_{{j \in {A}_{k}}}{a}_{j}}\right) \mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j},\;x \in X, \] is a norm-one projection onto \( {\left\lbrack {u}_{k}\right\rbrack }_{k = 1}^{n} \) . Now it readily follows that \[ P\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {\frac{1}{\left| {A}_{k}\right| }\mathop{\sum }\limits_{{j \in {A}_{k}}}{a}_{j}}\right) \underset{{c}_{k}^{-1}{u}_{k}}{\underbrace{\mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j}}} \] is a well-defined projection from \( X \) onto \( \left\lbrack {u}_{k}\right\rbrack \) with \( \parallel P\parallel = 1 \) . ## 9.3 Uniqueness of Unconditional Basis Zippin's theorem (Theorem 9.1.8) has a number of very elegant applications. We give a couple in this section. The first relates to the theorem of Lindenstrauss and Pelczyński proved in Section 8.3. 
There we saw that the normalized unconditional bases of the three spaces \( {c}_{0},{\ell }_{1} \), and \( {\ell }_{2} \) are unique (up to equivalence); we also saw that in contrast, the spaces \( {\ell }_{p} \) for \( p \neq 1,2 \) have at least two nonequivalent normalized unconditional bases. In 1969, Lindenstrauss and Zippin [205] completed the story by showing that the list ends with these three spaces! Theorem 9.3.1 (Lindenstrauss-Zippin). A Banach space X has a unique unconditional basis (up to equivalence) if and only if \( X \) is isomorphic to one of the following three spaces: \( {c}_{0},{\ell }_{1},{\ell }_{2} \) . Proof. Suppose that \( X \) has a unique normalized unconditional basis, \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . Then, in particular, the basis \( {\left( {e}_{\pi \left( n\right) }\right) }_{n = 1}^{\infty } \) is equivalent to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) for each permutation \( \pi \) of \( \mathbb{N} \) . That is, \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a symmetric basis of \( X \) . Without loss of generality we can assume that its symmetric constant is 1 . Let \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) be a normalized constant-coefficient block basic sequence with respect to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) such that there are infinitely many blocks of size \( k \) for all \( k \in \mathbb{N} \) . That is, \( \left| \left\{ {{u}_{n} : \left| {\operatorname{supp}{u}_{n}}\right| = k}\right\} \right| = \infty \) for each \( k \in \mathbb{N} \) . Let us call \( Y \) the closed linear span of the sequence \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) . The subspace \( Y \) is complemented in \( X \) by Theorem 9.2.6. On the other hand, the subsequence of \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) consisting of the blocks whose supports have size 1 spans a subspace isometrically isomorphic to \( X \), which is complemented in \( Y \) because of the unconditionality of \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) . By the symmetry of the basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty }, X \) is isomorphic to \( {X}^{2} \) . Analogously, if we split the natural numbers into two subsets \( {S}_{1},{S}_{2} \) such that \[ \left| \left\{ {n \in {S}_{1} : \left| {\operatorname{supp}{u}_{n}}\right| = k}\right\} \right| = \left| \left\{ {n \in {S}_{2} : \left| {\operatorname{supp}{u}_{n}}\right| = k}\right\} \right| = \infty \] for all \( k \in \mathbb{N} \), we see that \[ {\left\lbrack {u}_{n}\right\rbrack }_{n = 1}^{\infty } \approx {\left\lbrack {u}_{n}\right\rbrack }_{n \in {S}_{1}} \oplus {\left\lbrack {u}_{n}\right\rbrack }_{n \in {S}_{2}} \approx {\left\lbrack {u}_{n}\right\rbrack }_{n = 1}^{\infty } \oplus {\left\lbrack {u}_{n}\right\rbrack }_{n = 1}^{\infty }. \] Hence \( Y \approx {Y}^{2} \) . Using Pelczyński’s decomposition trick (Theorem 2.2.3), we deduce that \( X \approx Y \) . Since \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) is an unconditio
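A simple case in which the symmetric constant of (9.10) can be read off directly: for the unit vector basis of \( {\ell }_{p} \), \( 1 \leq p < \infty \), every choice of signs \( \left( {\epsilon }_{n}\right) \) and every permutation \( \pi \) of \( \mathbb{N} \) give \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{\epsilon }_{n}{a}_{n}{e}_{\pi \left( n\right) }}\end{Vmatrix} = {\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {a}_{n}\right| }^{p}\right) }^{1/p} = \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n}}\end{Vmatrix}, \] so the basis is 1-symmetric; the analogous computation with the supremum norm shows that the unit vector basis of \( {c}_{0} \) is 1-symmetric as well.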
1164_(GTM70)Singular Homology Theory
Definition 2.3
Definition 2.3. Let \( f, g : K \rightarrow {K}^{\prime } \) be chain maps. A chain homotopy \( D : K \rightarrow {K}^{\prime } \) between \( f \) and \( g \) is a sequence of homomorphisms \[ {D}_{n} : {K}_{n} \rightarrow {K}_{n + 1}^{\prime } \] such that \[ {f}_{n} - {g}_{n} = {\partial }_{n + 1}^{\prime }{D}_{n} + {D}_{n - 1}{\partial }_{n} \] for all \( n \) . Two chain maps are said to be chain homotopic if there exists a chain homotopy between them (notation: \( f \simeq g \) ). EXAMPLE 2.3. If \( {\varphi }_{0},{\varphi }_{1} : X \rightarrow Y \) are continuous maps, any homotopy between \( {\varphi }_{0} \) and \( {\varphi }_{1} \) gives rise to a chain homotopy between the induced chain maps \( {\varphi }_{0\# } \) and \( {\varphi }_{1\# } \) on cubical singular chains (see §II.4). The reader should prove the following two facts for himself: Proposition 2.1. Let \( f, g : K \rightarrow {K}^{\prime } \) be chain maps. If \( f \) and \( g \) are chain homotopic, then \[ {f}_{ * } = {g}_{ * } : {H}_{n}\left( K\right) \rightarrow {H}_{n}\left( {K}^{\prime }\right) \] for all \( n \) . Proposition 2.2. Chain homotopy is an equivalence relation on the set of all chain maps from \( K \) to \( {K}^{\prime } \) . ## EXERCISES 2.1. By analogy with the category of topological spaces and continuous maps, complete the following definitions: (a) A chain map \( f : K \rightarrow {K}^{\prime } \) is a chain homotopy equivalence if ___. (b) A chain complex \( {K}^{\prime } \) is a subcomplex of the chain complex \( K \) if ___. (c) A subcomplex \( {K}^{\prime } \) of the chain complex \( K \) is a retract of \( K \) if ___. (d) A subcomplex \( {K}^{\prime } \) of the chain complex \( K \) is a deformation retract of \( K \) if ___. (e) If \( {K}^{\prime } \) is a subcomplex of \( K \), the quotient complex \( K/{K}^{\prime } \) is ___. In each case, what assertions can be made about the homology groups of the various chain complexes involved, and about the homomorphisms induced by the various chain maps? 2.2. Let \( f, g,{f}^{\prime } \), and \( {g}^{\prime } \) be chain maps \( K \rightarrow {K}^{\prime } \) . If \( f \) is chain homotopic to \( {f}^{\prime } \), and \( g \) is chain homotopic to \( {g}^{\prime } \), then prove that \( f + g \) is chain homotopic to \( {f}^{\prime } + {g}^{\prime } \) . 2.3. Let \( f, g : K \rightarrow {K}^{\prime } \) and \( {f}^{\prime },{g}^{\prime } : {K}^{\prime } \rightarrow {K}^{\prime \prime } \) be chain maps, \( D \) a chain homotopy between \( f \) and \( g \), and \( {D}^{\prime } \) a chain homotopy between \( {f}^{\prime } \) and \( {g}^{\prime } \) . Using \( D \) and \( {D}^{\prime } \), construct an explicit chain homotopy between \( {f}^{\prime }f \) and \( {g}^{\prime }g : K \rightarrow {K}^{\prime \prime } \) . 2.4. Let \( D \) be a chain homotopy between the maps \( f \) and \( g : K \rightarrow K \) (of \( K \) into itself). Use \( D \) to construct an explicit chain homotopy between \( {f}^{n} = {fff}\cdots f \) and \( {g}^{n} = {gg}\cdots g \) ( \( n \) -fold iterates). Definition 2.4. 
A sequence of chain complexes and chain maps \[ \cdots \rightarrow K\overset{f}{ \rightarrow }{K}^{\prime }\overset{g}{ \rightarrow }{K}^{\prime \prime } \rightarrow \cdots \] is exact if for each integer \( n \) the sequence of abelian groups \[ \cdots \rightarrow {K}_{n}\overset{{f}_{n}}{ \rightarrow }{K}_{n}^{\prime }\overset{{g}_{n}}{ \rightarrow }{K}_{n}^{\prime \prime } \rightarrow \cdots \] is exact in the usual sense. We will be especially interested in short exact sequences of chain complexes, i.e., those of the form \[ E : 0 \rightarrow {K}^{\prime }\overset{f}{ \rightarrow }K\overset{g}{ \rightarrow }{K}^{\prime \prime } \rightarrow 0. \] This means that for each \( n \), \( {f}_{n} \) is a monomorphism, \( {g}_{n} \) is an epimorphism, and image \( {f}_{n} = \) kernel \( {g}_{n} \) . Given any such short exact sequence of chain complexes, we can follow the procedure of §II.5 to define a connecting homomorphism or boundary operator \[ {\partial }_{E} : {H}_{n}\left( {K}^{\prime \prime }\right) \rightarrow {H}_{n - 1}\left( {K}^{\prime }\right) \] for all \( n \), and then prove that the following sequence of abelian groups \[ \cdots \overset{{\partial }_{E}}{ \rightarrow }{H}_{n}\left( {K}^{\prime }\right) \overset{{f}_{ * }}{ \rightarrow }{H}_{n}\left( K\right) \overset{{g}_{ * }}{ \rightarrow }{H}_{n}\left( {K}^{\prime \prime }\right) \overset{{\partial }_{E}}{ \rightarrow }{H}_{n - 1}\left( {K}^{\prime }\right) \rightarrow \cdots \] is exact. One can also prove the following important naturality property of this connecting homomorphism or boundary operator: Let [diagram: a commutative ladder whose two rows, denoted \( E \) and \( F \), are short exact sequences of chain complexes, joined by vertical chain maps] be a commutative diagram of chain complexes and chain maps. It is assumed that the two rows, denoted by \( E \) and \( F \), are short exact sequences. Then the following diagram is commutative for each \( n \) : [diagram: the square formed by \( {\partial }_{E} \), \( {\partial }_{F} \), and the homomorphisms induced on homology by the vertical chain maps] ## EXERCISES 2.5. Define the direct sum and direct product of an arbitrary family of chain complexes in the obvious way. How is the homology of such a direct sum or product related to the homology of the individual chain complexes of the family? 2.6. Let \( E : 0 \rightarrow {K}^{\prime }\overset{f}{ \rightarrow }K\overset{g}{ \rightarrow }{K}^{\prime \prime } \rightarrow 0 \) be a short exact sequence of chain complexes. By a splitting homomorphism for such a sequence we mean a sequence \( s = \left\{ {s}_{n}\right\} \) such that for each \( n \), \( {s}_{n} : {K}_{n}^{\prime \prime } \rightarrow {K}_{n} \), and \( {g}_{n}{s}_{n} = \) identity map of \( {K}_{n}^{\prime \prime } \) onto itself. Note that we do not demand that \( s \) should be a chain map. Assume that such a splitting homomorphism exists. (a) Prove that there exist unique homomorphisms \( {\varphi }_{n} : {K}_{n}^{\prime \prime } \rightarrow {K}_{n - 1}^{\prime } \) for all \( n \) such that \[ {f}_{n - 1}{\varphi }_{n} = {\partial }_{n}{s}_{n} - {s}_{n - 1}{\partial }_{n}^{\prime \prime }. \] (b) Prove that \( {\partial }_{n - 1}^{\prime }{\varphi }_{n} + {\varphi }_{n - 1}{\partial }_{n}^{\prime \prime } = 0 \) for all \( n \) .
(c) Let \( {s}^{\prime } = \left\{ {s}_{n}^{\prime }\right\} \) be another sequence of splitting homomorphisms, and \( {\varphi }_{n}^{\prime } : {K}_{n}^{\prime \prime } \rightarrow \) \( {K}_{n - 1}^{\prime } \) the unique homomorphisms such that \( {f}_{n - 1}{\varphi }_{n}^{\prime } = {\partial }_{n}{s}_{n}^{\prime } - {s}_{n - 1}^{\prime }{\partial }_{n}^{\prime \prime } \) . Prove that there exists a sequence of homomorphisms \( {D}_{n} : {K}_{n}^{\prime \prime } \rightarrow {K}_{n}^{\prime } \) such that \[ {\varphi }_{n} - {\varphi }_{n}^{\prime } = {\partial }_{n}^{\prime }{D}_{n} - {D}_{n - 1}{\partial }_{n}^{\prime \prime } \] for all \( n \) . (d) Prove that the connecting homomorphism \( {\partial }_{E} : {H}_{n}\left( {K}^{\prime \prime }\right) \rightarrow {H}_{n - 1}\left( {K}^{\prime }\right) \) is induced by the sequence of homomorphisms \( \left\{ {\varphi }_{n}\right\} \) in the same sense that a chain map induces homomorphisms of homology groups. (Note: The sequence of homomorphisms \( \left\{ {\varphi }_{n}\right\} \) can be thought of as a "chain map of degree -1 ." The sequence of homomorphisms \( \left\{ {D}_{n}\right\} \) in Part (c) is a chain homotopy between \( \left\{ {\varphi }_{n}\right\} \) and \( \left\{ {\varphi }_{n}^{\prime }\right\} \) .) We will conclude this section on chain complexes with a discussion of a construction called the algebraic mapping cone of a chain map. Definition 2.5. Let \( K = \left\{ {{K}_{n},{\partial }_{n}}\right\} \) and \( {K}^{\prime } = \left\{ {{K}_{n}^{\prime },{\partial }_{n}^{\prime }}\right\} \) be chain complexes and \( f : K \rightarrow {K}^{\prime } \) a chain map. The algebraic mapping cone of \( f \), denoted by \( M\left( f\right) = \left\{ {M{\left( f\right) }_{n},{d}_{n}}\right\} \) is a chain complex defined as follows: \[ M{\left( f\right) }_{n} = {K}_{n - 1} \oplus {K}_{n}^{\prime }\;\text{ (direct sum). } \] The boundary operator \( {d}_{n} : M{\left( f\right) }_{n} \rightarrow M{\left( f\right) }_{n - 1} \) is defined by \[ {d}_{n}\left( {x,{x}^{\prime }}\right) = \left( {-{\partial }_{n - 1}x,{\partial }_{n}^{\prime }{x}^{\prime } + {f}_{n - 1}x}\right) \] for any \( x \in {K}_{n - 1} \) and \( {x}^{\prime } \in {K}_{n}^{\prime } \) . It is trivial to verify that \( {d}_{n - 1}{d}_{n} = 0 \) . Next, define \( {i}_{n} : {K}_{n}^{\prime } \rightarrow M{\left( f\right) }_{n} \) by \( {i}_{n}\left( {x}^{\prime }\right) = \left( {0,{x}^{\prime }}\right) \) . The sequence of homomorphisms \( i = \left\{ {i}_{n}\right\} \) is easily seen to be a chain map \( {K}^{\prime } \rightarrow M\left( f\right) \) . Similarly, the sequence of projections \( {j}_{n} : M{\left( f\right) }_{n} \rightarrow {K}_{n - 1} \) (defined by \( {j}_{n}\left( {x,{x}^{\prime }}\right) = x \) ) is almost a chain map. However, it reduces degrees by one, and instead of commuting with the boundary operators, we have the relation \[ {\partial }_{n - 1}{j}_{n} = - {j}_{n - 1}{d}_{n} \] It is a "chain map of degree -1." It induces a homomorphism of homology groups which reduces degrees by one. The chain maps \( i \) and \( j \) define a short exact sequence of chain complexes: \[ 0 \rightarrow {K}^{\prime }\overset{i}{ \rightarrow }M\left( f\right) \overset{j}{ \rightarrow }K \rightarrow 0. \]
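Definition 2.5 is easy to test on a small example. The following Python sketch (using numpy, with a toy complex and the identity chain map chosen purely for illustration) encodes chain complexes by their boundary matrices, builds the block matrices of \( M(f) \), and checks that \( d_{n-1}d_n = 0 \) and that the projections \( j_n \) satisfy \( \partial_{n-1}j_n = -j_{n-1}d_n \).

```python
import numpy as np

# A small chain complex K (and K' = K):  0 -> Z --d2--> Z^2 --d1--> Z -> 0
d1 = np.array([[1, 1]])           # boundary K_1 -> K_0
d2 = np.array([[1], [-1]])        # boundary K_2 -> K_1
assert not (d1 @ d2).any()        # ∂_1 ∂_2 = 0

# chain map f : K -> K' taken to be the identity (any chain map would do)
f0, f1, f2 = np.eye(1), np.eye(2), np.eye(1)

# Mapping cone: M(f)_n = K_{n-1} (+) K'_n,  d_n(x, x') = (-∂_{n-1} x, ∂'_n x' + f_{n-1} x)
D1 = np.hstack([f0, d1])                                   # M_1 = K_0 (+) K'_1 -> M_0 = K'_0  (K_{-1} = 0)
D2 = np.block([[-d1, np.zeros((1, 1))], [f1, d2]])         # M_2 = K_1 (+) K'_2 -> M_1
D3 = np.vstack([-d2, f2])                                  # M_3 = K_2 (+) K'_3 -> M_2         (K'_3 = 0)
assert not (D1 @ D2).any() and not (D2 @ D3).any()         # d_{n-1} d_n = 0

# the projections j_n : M(f)_n -> K_{n-1} form a "chain map of degree -1"
j1 = np.hstack([np.eye(1), np.zeros((1, 2))])
j2 = np.hstack([np.eye(2), np.zeros((2, 1))])
assert np.allclose(d1 @ j2, -j1 @ D2)                      # ∂_{n-1} j_n = - j_{n-1} d_n
```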
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 4.54
Definition 4.54. Let \( X \) be an arbitrary normed space and let \( f : X \rightarrow \overline{\mathbb{R}} \) be a function finite at \( \bar{x} \in X \) . The viscosity Hadamard (resp. Fréchet) subdifferential of \( f \) at \( \bar{x} \) is the set \( {\partial }_{H}f\left( \bar{x}\right) \) (resp. \( {\partial }_{F}^{V}f\left( \bar{x}\right) \) ) of Hadamard (resp. Fréchet) derivatives \( {\varphi }^{\prime }\left( \bar{x}\right) \) of functions \( \varphi \) of class \( {D}^{1} \) (resp. \( {C}^{1} \) ) on some neighborhood \( U \) of \( \bar{x} \) minorizing \( f \) on \( U \) and satisfying \( \varphi \left( \bar{x}\right) = f\left( \bar{x}\right) \) . When there exists a bump function of class \( {D}^{1} \) (resp. \( {C}^{1} \) ) on \( X \), we may suppose \( \varphi \) is defined on the whole of \( X \) in this definition (however, the inequality \( \varphi \leq f \) is required only near \( \bar{x} \) ). It seems necessary to make a distinction between \( {\partial }_{D} \) and \( {\partial }_{H} \) even in smooth spaces. In contrast, since we shall show that \( {\partial }_{F}^{V}f = {\partial }_{F}f \) for \( f \) defined on a Fréchet smooth space, we can keep for a while the heavy notation \( {\partial }_{F}^{V}f \) . The proof of the coincidence \( {\partial }_{F}^{V}f = {\partial }_{F}f \) uses the following smoothing result for one-variable functions. Lemma 4.55. For \( a > 0 \), let \( r : \left\lbrack {0, a}\right\rbrack \rightarrow {\mathbb{R}}_{ + } \) be a remainder, i.e., a function with a right derivative at 0 and such that \( r\left( 0\right) = 0,{r}_{ + }^{\prime }\left( 0\right) = 0 \) . Suppose \( b \mathrel{\text{:=}} \sup r\left( \left\lbrack {0, a}\right\rbrack \right) < \) \( + \infty \) . Then there exists a nondecreasing remainder \( s : \left\lbrack {0, a}\right\rbrack \rightarrow {\mathbb{R}}_{ + } \) of class \( {C}^{1} \) such that \( s \geq r, s\left( t\right) \leq \sup r\left( \left\lbrack {0,{2t}}\right\rbrack \right) \) for \( t \in \left\lbrack {0, a/2}\right\rbrack \) . Proof. Let \( {a}_{0} = a,{b}_{0} \mathrel{\text{:=}} b,{a}_{n} \mathrel{\text{:=}} {2}^{-n}a,{b}_{n} \mathrel{\text{:=}} \sup r\left( \left\lbrack {0,{a}_{n - 1}}\right\rbrack \right) \) for \( n \geq 1 \), so that \( \left( {b}_{n}\right) \) is nonincreasing and \( \left( {{b}_{n}/{a}_{n - 1}}\right) \rightarrow 0 \) . Let us set \( {m}_{n} \mathrel{\text{:=}} \left( {1/2}\right) \left( {{a}_{n} + {a}_{n + 1}}\right) ,{c}_{n} \mathrel{\text{:=}} \) \( 2{\left( {a}_{n} - {a}_{n + 1}\right) }^{-2}\left( {{b}_{n} - {b}_{n + 1}}\right) \) and construct \( s \) by setting \( s\left( 0\right) \mathrel{\text{:=}} 0 \) , \[ s\left( t\right) = {b}_{n + 1} + {c}_{n}{\left( t - {a}_{n + 1}\right) }^{2},\;t \in \left\lbrack {{a}_{n + 1},{m}_{n}}\right\rbrack , \] \[ s\left( t\right) = {b}_{n} - {c}_{n}{\left( t - {a}_{n}\right) }^{2},\;t \in \left\lbrack {{m}_{n},{a}_{n}}\right\rbrack , \] so that \( {D}_{\ell }s\left( {a}_{n}\right) = 0,{D}_{r}s\left( {a}_{n + 1}\right) = 0, s \) is continuous and derivable at \( {a}_{n},{m}_{n} \) with \[ s\left( {a}_{n}\right) = {b}_{n},\;{s}^{\prime }\left( {a}_{n}\right) = 0,\;s\left( {m}_{n}\right) = \left( {{b}_{n} + {b}_{n + 1}}\right) /2,\;{s}^{\prime }\left( {m}_{n}\right) = {c}_{n}\left( {{a}_{n} - {a}_{n + 1}}\right) . 
\] Thus \( s \) is of class \( {C}^{1} \) and for \( t \in \left\lbrack {{a}_{n + 1},{a}_{n}}\right\rbrack \) one has \( s\left( t\right) \geq {b}_{n + 1} \geq r\left( t\right) \) and \( 0 \leq s\left( t\right) /t \leq {b}_{n}/{a}_{n + 1} \leq 4{b}_{n}/{a}_{n - 1} \), so that \( s\left( t\right) /t \rightarrow 0 \) as \( t \rightarrow {0}_{ + } \) . Theorem 4.56. Let \( X \) be a normed space satisfying condition \( \left( {H}_{F}\right) \) . Then for every lower semicontinuous function \( f \) on \( X \), \( {\partial }_{F}^{V}f\left( \bar{x}\right) \), the viscosity Fréchet subdifferential of \( f \) at \( \bar{x} \), coincides with \( {\partial }_{F}f\left( \bar{x}\right) \) . Proof. Without loss of generality we suppose \( \bar{x} = 0 \) . Clearly, \( {\partial }_{F}^{V}f\left( 0\right) \subset {\partial }_{F}f\left( 0\right) \) . Given \( {\bar{x}}^{ * } \in {\partial }_{F}f\left( 0\right) \), consider the remainder \[ r\left( t\right) \mathrel{\text{:=}} \sup \left\{ {f\left( 0\right) - f\left( x\right) + \left\langle {{\bar{x}}^{ * }, x}\right\rangle : x \in t{B}_{X}}\right\} ,\;t \in {\mathbb{R}}_{ + }, \] and we associate with it the remainder \( s \) of the preceding lemma, where \( a \) is chosen so that \( \sup r\left( \left\lbrack {0, a}\right\rbrack \right) < + \infty \) . Then using \( j \) given by \( \left( {\mathrm{H}}_{F}\right) \), the function \( \varphi \) defined by \[ \varphi \left( x\right) \mathrel{\text{:=}} f\left( 0\right) + \left\langle {{\bar{x}}^{ * }, x}\right\rangle - s\left( {j\left( x\right) }\right) ,\;x \in {j}^{-1}\left( \left( {-a, a}\right) \right) , \] is of class \( {C}^{1} \) and satisfies \( \varphi \left( 0\right) = f\left( 0\right) \) and \( {\varphi }^{\prime }\left( 0\right) = {\bar{x}}^{ * } \) (since \( s\left( t\right) /t \rightarrow 0 \) as \( t \rightarrow {0}_{ + } \) ), as well as \( \varphi \leq f \) (since \( s \) is nondecreasing, \( s \geq r \), and \( \parallel x\parallel \leq j\left( x\right) \leq c\parallel x\parallel \) ). Thus \( {\bar{x}}^{ * } \in {\partial }_{F}^{V}f\left( \bar{x}\right) \) . While in spaces satisfying \( \left( {\mathrm{H}}_{F}\right) \) there is no need to distinguish the Fréchet viscosity subdifferential from the Fréchet subdifferential, the situation is not the same for the Hadamard subdifferential, even when assumption \( \left( {\mathrm{H}}_{D}\right) \) holds. However, in a finite-dimensional space one has \( {\partial }_{H} = {\partial }_{D} \), since \( {\partial }_{F}^{V} = {\partial }_{H} \subset {\partial }_{D} = {\partial }_{F} \) . Let us study some relationships with geometrical notions. If \( S \) is a subset of \( X \) and \( \bar{x} \in S \), we denote by \( {N}_{H}\left( {S,\bar{x}}\right) \) the (viscosity) Hadamard normal cone defined by \( {N}_{H}\left( {S,\bar{x}}\right) \mathrel{\text{:=}} {\partial }_{H}{\iota }_{S}\left( \bar{x}\right) \) . In the next statements, we just write \( N\left( {S,\bar{x}}\right) \) for the viscosity normal cone associated with a subdifferential \( \partial \in \left\{ {{\partial }_{H},{\partial }_{F}^{V}}\right\} \) .
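Returning to Lemma 4.55 for a moment: its construction is entirely explicit and can be reproduced numerically. The following minimal Python sketch implements the parabolic-arc gluing for a concrete remainder; to keep the suprema directly computable we assume in addition that \( r \) is nondecreasing (so that \( \sup r([0,u]) = r(u) \)), which is not required by the lemma.

```python
import numpy as np

def make_s(r, a):
    """Glue the parabolic arcs of Lemma 4.55 on [a_{n+1}, a_n], a_n = 2^{-n} a,
    with b_n = sup r([0, a_{n-1}]) = r(a_{n-1}) for the nondecreasing r used here."""
    b = lambda n: r(a) if n == 0 else r(a / 2 ** (n - 1))
    def s(t):
        if t == 0.0:
            return 0.0
        n = int(np.floor(np.log2(a / t)))                  # so that a_{n+1} <= t <= a_n
        an, an1 = a / 2 ** n, a / 2 ** (n + 1)
        m, c = (an + an1) / 2, 2 * (b(n) - b(n + 1)) / (an - an1) ** 2
        return b(n + 1) + c * (t - an1) ** 2 if t <= m else b(n) - c * (t - an) ** 2
    return np.vectorize(s)

r = np.vectorize(lambda t: 0.0 if t == 0 else t / (1 - np.log(t)))   # a nondecreasing remainder on [0, 1]
s = make_s(r, 1.0)
t = np.logspace(-8, 0, 2000)
assert (s(t) >= r(t) - 1e-15).all()                        # s majorizes r
ratio = s(t) / t
print(ratio[t < 1e-4].max(), ratio[t < 1e-7].max())        # s(t)/t decreases toward 0 (slowly here, like 1/log(1/t))
```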
If \( F : X \rightrightarrows Y \) is a multimap between two normed spaces, the (viscosity) Hadamard coderivative \( {D}_{H}^{ * }F\left( {\bar{x},\bar{y}}\right) \) of \( F \) at \( \left( {\bar{x},\bar{y}}\right) \in \operatorname{gph}\left( F\right) \) is defined by \[ \operatorname{gph}\left( {{D}_{H}^{ * }F\left( {\bar{x},\bar{y}}\right) }\right) \mathrel{\text{:=}} \left\{ {\left( {{y}^{ * },{x}^{ * }}\right) : \left( {{x}^{ * }, - {y}^{ * }}\right) \in {N}_{H}\left( {\operatorname{gph}\left( F\right) ,\left( {\bar{x},\bar{y}}\right) }\right) }\right\} . \] Proposition 4.57. Let \( E \) be a closed subset of a Banach space \( X \) and let \( \bar{x} \in E \) . For both viscosity subdifferentials \( \partial = {\partial }_{H},{\partial }_{F}^{V} \), one has \( N\left( {E,\bar{x}}\right) = {\mathbb{R}}_{ + }\partial {d}_{E}\left( \bar{x}\right) \) . Proof. Since for every \( r \in {\mathbb{R}}_{ + } \) and every smooth function \( \varphi \) satisfying \( \varphi \leq r{d}_{E} \) around \( \bar{x},\varphi \left( \bar{x}\right) = r{d}_{E}\left( \bar{x}\right) \) one has \( \varphi \leq {\iota }_{E} \) near \( \bar{x} \), we get the inclusion \( {\mathbb{R}}_{ + }\partial {d}_{E}\left( \bar{x}\right) \subset \) \( N\left( {E,\bar{x}}\right) \) . Conversely, let \( {\bar{x}}^{ * } \in N\left( {E,\bar{x}}\right) \), so that there exists a smooth function \( \varphi \) minorizing \( {\iota }_{E} \) around \( \bar{x} \) and satisfying \( \varphi \left( \bar{x}\right) = {\iota }_{E}\left( \bar{x}\right) ,{\varphi }^{\prime }\left( \bar{x}\right) = {\bar{x}}^{ * } \) . Since \( \varphi \) is locally Lipschitzian, we can find \( \rho, r > 0 \) such that the Lipschitz rate of \( {r\varphi } \) on \( U \mathrel{\text{:=}} B\left( {\bar{x},{2\rho }}\right) \) is 1 . Then for \( x \in B\left( {\bar{x},\rho }\right) \) and \( u \in E \cap U \), we have \( {r\varphi }\left( x\right) \leq {r\varphi }\left( u\right) + \parallel x - u\parallel \leq \parallel x - u\parallel \), hence \( {r\varphi }\left( x\right) \leq d\left( {x, E \cap U}\right) = d\left( {x, E}\right) \) by an easy argument already used. Thus \( r{\bar{x}}^{ * } \in \partial {d}_{E}\left( \bar{x}\right) \) . Let us note the following analogue of Corollary 4.15 and Proposition 4.16. Proposition 4.58. Let \( {E}_{f} \) be the epigraph of a lower semicontinuous function \( f \) on an arbitrary Banach space \( X \) and for \( \bar{x} \in \operatorname{dom}f \), let \( {\bar{x}}_{f} \mathrel{\text{:=}} \left( {\bar{x}, f\left( \bar{x}\right) }\right) \) . Then for both viscosity subdifferentials, one has \( {\bar{x}}^{ * } \in \partial f\left( \bar{x}\right) \) if and only if \( \left( {{\bar{x}}^{ * }, - 1}\right) \in N\left( {{E}_{f},{\bar{x}}_{f}}\right) \) . Proof. Let us first consider the viscosity Fréchet case. Given \( {\bar{x}}^{ * } \in \partial f\left( \bar{x}\right) \), let \( \varphi \) be a smooth function satisfying \( \varphi \leq f \) on a neighborhood \( U \) of \( \bar{x} \) and \( \varphi \left( \bar{x}\right) = f\left( \bar{x}\right) \) , \( {\varphi }^{\prime }\left( \bar{x}\right) = {\bar{x}}^{ * } \) . 
Then \( \psi : U \times \mathbb{R} \rightarrow \mathbb{R} \) given by \( \psi \left( {x, r}\right) \mathrel{\text{:=}} \varphi \left( x\right) - r \) is smooth, minorizes \( {\iota }_{{E}_{f}} \), and satisfies \( \psi \left( {\bar{x}}_{f}\right) = 0 = {\iota }_{{E}_{f}}\left( {\bar{x}}_{f}\right) ,\left( {{\bar{x}}^{ * }, - 1}\right) = {\psi }^{\prime }\left( {\bar{x}}_{f}\right) \in N\left( {{E}_{f},{\bar{x}}_{f}}\right
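For a concrete one-dimensional illustration of Proposition 4.58, take \( f(x) = |x| \) and \( \bar{x} = 0 \), for which \( \partial f(0) = [-1,1] \). The following sketch uses the analytic description of Fréchet normals (legitimate here by Theorem 4.56, since \( \mathbb{R}^{2} \) satisfies \( (\mathrm{H}_F) \)): \( (\bar{x}^{*}, -1) \in N(E_f,(0,0)) \) exactly when \( \langle (\bar{x}^{*},-1), p\rangle \le o(\|p\|) \) over points \( p \) of the epigraph near the origin. The sampling ranges are arbitrary choices.

```python
import numpy as np

def normal_defect(v, pts):
    """max over sampled epigraph points p != 0 of <v, p>/|p|; nonpositive (up to o(1))
    exactly when v is a Fréchet normal (analytic form) to the sampled set at the origin."""
    norms = np.linalg.norm(pts, axis=1)
    return np.max(pts @ v / norms)

# sample the epigraph E_f of f(x) = |x| near (0, 0)
x = np.linspace(-1e-2, 1e-2, 2001)
x = x[x != 0]
pts = np.array([(xi, abs(xi) + s) for xi in x for s in np.linspace(0, 1e-2, 25)])

for xstar in (0.0, 0.7, 1.0, 1.3):
    print(xstar, normal_defect(np.array([xstar, -1.0]), pts))
# defect <= 0 for |xstar| <= 1 (so xstar is in the subdifferential of f at 0),
# but bounded away from 0 from above for xstar = 1.3
```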
1075_(GTM233)Topics in Banach Space Theory
Definition 6.1.1
Definition 6.1.1. Given \( f \in {L}_{1}\left( {\Omega ,\sum ,\mu }\right) \), the conditional expectation of \( f \) on the \( \sigma \) -algebra \( {\sum }^{\prime } \) is the (unique) function \( \psi \) that satisfies \[ {\int }_{E}{fd\mu } = {\int }_{E}{\psi d\mu },\;\forall E \in {\sum }^{\prime }. \] The function \( \psi \) will be denoted by \( \mathbb{E}\left( {f \mid {\sum }^{\prime }}\right) \) . Let us notice that if \( {\sum }^{\prime } \) consists of countably many disjoint atoms \( {\left( {A}_{n}\right) }_{n = 1}^{\infty } \), the definition of \( \mathbb{E}\left( {f \mid {\sum }^{\prime }}\right) \) is especially simple: \[ \mathbb{E}\left( {f \mid {\sum }^{\prime }}\right) \left( t\right) = \mathop{\sum }\limits_{{j = 1}}^{\infty }\frac{1}{\mu \left( {A}_{j}\right) }\left( {{\int }_{{A}_{j}}{fd\mu }}\right) {\chi }_{{A}_{j}}\left( t\right) . \] We also observe that if \( f \in {L}_{1}\left( \mu \right) \), for all \( {\sum }^{\prime } \) -measurable simple functions \( g \), we have \[ {\int }_{\Omega }{gfd\mu } = {\int }_{\Omega }g\mathbb{E}\left( {f \mid {\sum }^{\prime }}\right) {d\mu } \] and \[ \mathbb{E}\left( {{fg} \mid {\sum }^{\prime }}\right) = g\mathbb{E}\left( {f \mid {\sum }^{\prime }}\right) \] Lemma 6.1.2. Let \( \left( {\Omega ,\sum ,\mu }\right) \) be a probability measure space and suppose \( {\sum }^{\prime } \) is a sub- \( \sigma \) -algebra of \( \sum \) . Then \( \mathbb{E}\left( {\cdot \mid {\sum }^{\prime }}\right) \) is a norm-one linear projection from \( {L}_{p}\left( {\Omega ,\sum ,\mu }\right) \) onto \( {L}_{p}\left( {\Omega ,{\sum }^{\prime },\mu }\right) \) for every \( 1 \leq p \leq \infty \) . Proof. Fix \( 1 \leq p \leq \infty \) . It is immediate to check that \( \mathbb{E}{\left( \cdot \mid {\sum }^{\prime }\right) }^{2} = \mathbb{E}\left( {\cdot \mid {\sum }^{\prime }}\right) \) . If \( f \in {L}_{p}\left( \mu \right) \), using Hölder’s inequality in \( {L}_{p}\left( {\Omega ,{\sum }^{\prime },\mu }\right) \) (see C. 2 in the appendix), we have \[ {\begin{Vmatrix}\mathbb{E}\left( f \mid {\sum }^{\prime }\right) \end{Vmatrix}}_{p} = \sup \left\{ {{\int }_{\Omega }\mathbb{E}\left( {f \mid {\sum }^{\prime }}\right) {gd\mu } : g\text{ simple }{\sum }^{\prime }\text{-measurable with }\parallel g{\parallel }_{q} \leq 1}\right\} \] \[ = \sup \left\{ {{\int }_{\Omega }{fgd\mu } : g\text{ simple }{\sum }^{\prime }\text{-measurable with }\parallel g{\parallel }_{q} \leq 1}\right\} \] \[ \leq \sup \left\{ {{\int }_{\Omega }{fgd\mu } : g\text{ simple with }\parallel g{\parallel }_{q} \leq 1}\right\} = \parallel f{\parallel }_{p}. \] We leave the case \( p = \infty \) to the reader. Proposition 6.1.3. The Haar system is a monotone basis in \( {L}_{p} \) for \( 1 \leq p < \infty \) . Proof. 
Let us consider an increasing sequence of \( \sigma \) -algebras, \( {\left( {\mathcal{B}}_{n}\right) }_{n = 1}^{\infty } \), contained in the Borel \( \sigma \) -algebra of \( \left\lbrack {0,1}\right\rbrack \) defined as follows: we let \( {\mathcal{B}}_{1} \) be the trivial \( \sigma \) -algebra, \( \{ \varnothing ,\left\lbrack {0,1}\right\rbrack \} \), and for \( n = {2}^{k} + s\left( {k = 0,1,2,\ldots ,1 \leq s \leq {2}^{k}}\right) \) we let \( {\mathcal{B}}_{n} \) be the finite subalgebra of the Borel sets of \( \left\lbrack {0,1}\right\rbrack \) whose atoms are the dyadic intervals of the family \[ {\mathcal{F}}_{n} = \left\{ \begin{array}{ll} \left\lbrack {\frac{j - 1}{{2}^{k + 1}},\frac{j}{{2}^{k + 1}}}\right) & \text{ for }j = 1,\ldots ,{2s}, \\ \left\lbrack {\frac{j - 1}{{2}^{k}},\frac{j}{{2}^{k}}}\right) & \text{ for }j = s + 1,\ldots ,{2}^{k}. \end{array}\right. \] Fix \( 1 \leq p < \infty \) . For each \( n,{\mathbb{E}}_{n} \) will denote the conditional expectation operator on the \( \sigma \) -algebra \( {\mathcal{B}}_{n} \) . By Lemma 6.1.2, \( {\mathbb{E}}_{n} \) is a norm-one projection from \( {L}_{p} \) onto \( {L}_{p}\left( {\left\lbrack {0,1}\right\rbrack ,{\mathcal{B}}_{n},\lambda }\right) \), the space of functions that are constant on intervals of the family \( {\dot{\mathcal{F}}}_{n} \) . We will denote this space by \( {L}_{p}\left( {\mathcal{B}}_{n}\right) \) . Clearly, rank \( {\mathbb{E}}_{n} = n \) . Furthermore, \( {\mathbb{E}}_{n}{\mathbb{E}}_{m} = {\mathbb{E}}_{m}{\mathbb{E}}_{n} = {\mathbb{E}}_{\min \{ m, n\} } \) for any two positive integers \( m, n \) . On the other hand, the set \[ \left\{ {f \in {L}_{p} : {\begin{Vmatrix}{\mathbb{E}}_{n}\left( f\right) - f\end{Vmatrix}}_{p} \rightarrow 0}\right\} \] is closed by the partial converse of the Banach-Steinhaus theorem (see E. 14 in the appendix) and contains the set \( { \cup }_{k = 1}^{\infty }{L}_{p}\left( {\mathcal{B}}_{k}\right) \), which is dense in \( {L}_{p} \) . Therefore \( {\begin{Vmatrix}{\mathbb{E}}_{n}\left( f\right) - f\end{Vmatrix}}_{p} \rightarrow 0 \) for all \( f \in {L}_{p} \) . By Proposition 1.1.7, \( {L}_{p} \) has a basis whose natural projections are \( {\left( {\mathbb{E}}_{n}\right) }_{n = 1}^{\infty } \) . This basis is actually the Haar system, because for each \( n \in \mathbb{N} \) we have \( {\mathbb{E}}_{m}\left( {h}_{n}\right) = {h}_{n} \) for \( m \geq n \) and \( {\mathbb{E}}_{m}{h}_{n} = 0 \) for \( m < n \) . The basis constant is \( \mathop{\sup }\limits_{n}\begin{Vmatrix}{\mathbb{E}}_{n}\end{Vmatrix} = 1 \) . Remark 6.1.4. (a) Integrating the Haar system, we obtain Schauder's original basis \( {\left( {\varphi }_{n}\left( t\right) \right) }_{n = 1}^{\infty } \) for \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) (see Section 1.2). More precisely, if \( n = {2}^{k} + s \), where \( k = 0,1,2,\ldots \), and \( s = 1,2,\ldots ,{2}^{k} \), then \[ {\varphi }_{n}\left( t\right) = {2}^{k + 1}{\int }_{0}^{t}{h}_{n}\left( u\right) {du} = \left\{ \begin{array}{lll} {2}^{k + 1}t - \left( {{2s} - 2}\right) & \text{ if } & \frac{{2s} - 2}{{2}^{k + 1}} \leq t \leq \frac{{2s} - 1}{{2}^{k + 1}}, \\ - {2}^{k + 1}t + {2s} & \text{ if } & \frac{{2s} - 1}{{2}^{k + 1}} \leq t \leq \frac{2s}{{2}^{k + 1}}, \\ 0 & & \text{ otherwise. } \end{array}\right. \] (b) The Haar system as we have defined it is not normalized in \( {L}_{p} \) for \( 1 \leq p < \infty \) . 
It is normalized in \( {L}_{\infty } \), since \( {\begin{Vmatrix}{h}_{{2}^{k} + s}\end{Vmatrix}}_{p} = {\left( 1/{2}^{k}\right) }^{1/p} \) . To normalize in \( {L}_{p} \) one should take \( {h}_{n}/{\begin{Vmatrix}{h}_{n}\end{Vmatrix}}_{p} = {\left| {I}_{n}\right| }^{-1/p}{h}_{n} \), where \( {I}_{n} \) denotes the support of the Haar function \( {h}_{n} \) . (c) Let us observe that if \( f \in {L}_{p}\left( {1 \leq p < \infty }\right) \), then \[ {\mathbb{E}}_{n}\left( f\right) - {\mathbb{E}}_{n - 1}\left( f\right) = \left( {\frac{1}{\left| {I}_{n}\right| }\int f\left( t\right) {h}_{n}\left( t\right) {dt}}\right) {h}_{n}. \] We deduce that the dual functionals associated to the Haar system are given by \[ {h}_{n}^{ * } = \frac{1}{\left| {I}_{n}\right| }{h}_{n},\;n \in \mathbb{N}, \] and the series expansion of \( f \in {L}_{p} \) in terms of the Haar basis is \[ f = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {\frac{1}{\left| {I}_{n}\right| }\int f\left( t\right) {h}_{n}\left( t\right) {dt}}\right) {h}_{n} \] Notice that if \( p = 2 \), then \( {\left( {h}_{n}/{\begin{Vmatrix}{h}_{n}\end{Vmatrix}}_{2}\right) }_{n = 1}^{\infty } \) is an orthonormal basis for the Hilbert space \( {L}_{2} \) and is thus unconditional. It is an important fact that, actually, the Haar basis is an unconditional basis in \( {L}_{p} \) for \( 1 < p < \infty \) . This was first proved by Paley [235] in 1932. Much more recently, Burkholder [37] established the best constant. We are going to present another proof by Burkholder from 1988 [38]. We will treat only the real case here, although, remarkably, the same proof works for complex scalars with the same constant; however, the calculations needed for the complex case are a little harder to follow. For our purposes the constant is not so important, and we simply note that if the Haar basis is unconditional for real scalars, one readily checks that it is also unconditional for complex scalars. There is one drawback to Burkholder's argument: it is simply too clever in the sense that the proof looks very much like magic. We start with some elementary calculus. Lemma 6.1.5. Suppose \( p > 2 \) . Then \[ \frac{{p}^{p - 2}}{{\left( p - 1\right) }^{p - 1}} < 1 \] (6.1) Proof. If we let \( t = p - 1 \), inequality (6.1) is equivalent to \[ H\left( t\right) = - \left( {t - 1}\right) \log \left( {1 + t}\right) + t\log \left( t\right) > 0,\;\forall t > 1. \] Indeed, differenting \( H \) gives \[ {H}^{\prime }\left( t\right) = \frac{2}{t + 1} - \log \left( \frac{1 + t}{t}\right) \geq \frac{2}{t + 1} - \frac{1}{t} = \frac{t - 1}{t\left( {t + 1}\right) } > 0 \] for all \( t > 1 \) . Therefore \( H\left( t\right) > H\left( 1\right) = 0 \) for all \( t > 1 \) . In the next lemma we introduce a mysterious function that will enable us to prove Burkholder's theorem. This function appears to be plucked out of the air, although there are sound reasons behind its selection. The use of such functions to prove sharp inequalities has been developed extensively by Nazarov, Treil, and Volberg, who termed them Bellman functions. We refer to [226] for a discussion of this technique. Lemma 6.1.6. Suppose \( p > 2 \) and define a function \( \varphi \) on the first quadrant of \( {\mathbb{R}}^{2} \) by \[ \varphi \left( {x, y}\right) = {\left( x + y\right) }^{p - 1}\left( {\left( {p - 1}\right) x - y}\right) ,\;x, y \geq 0. 
\] (a) The following inequality holds for all \( \left( {x, y}\right) \) with \( x \geq 0 \) and \( y \geq 0 \) : \[ \frac{{\left( p - 1\right) }^{p - 1}}{{p}^{p - 2}}\varphi \left( {x, y}\right) \leq {\left( p - 1\right) }^{p}{x}^{p} - {y}^{p}. \] (b) For all real numbers \( x, y \), a and for \( \varepsilon = \pm 1 \) , \[ \varphi \left( {\left| {x +
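The identities in Remark 6.1.4(c) are easy to check numerically. The following Python sketch (numpy; the dyadic grid resolution and the sample function are arbitrary choices) builds the atoms of \( \mathcal{B}_n \) from the family \( \mathcal{F}_n \), computes \( \mathbb{E}_n(f) \) by averaging over atoms, and verifies that \( \mathbb{E}_n(f) - \mathbb{E}_{n-1}(f) \) is the \( n \)-th Haar coefficient times \( h_n \).

```python
import numpy as np

N = 2 ** 12
t = (np.arange(N) + 0.5) / N                       # midpoints of a dyadic grid on [0, 1)

def haar(n):
    """h_1 = 1; for n = 2^k + s, h_n is +1 on the left half of I_n = [(s-1)/2^k, s/2^k), -1 on the right."""
    if n == 1:
        return np.ones(N)
    k = int(np.log2(n - 1)); s = n - 2 ** k
    supp = ((s - 1) / 2 ** k <= t) & (t < s / 2 ** k)
    left = t < (2 * s - 1) / 2 ** (k + 1)
    return supp * np.where(left, 1.0, -1.0)

def atoms(n):
    """The dyadic atoms of B_n, i.e. the family F_n from the proof of Proposition 6.1.3."""
    if n == 1:
        return [(0.0, 1.0)]
    k = int(np.log2(n - 1)); s = n - 2 ** k
    return [((j - 1) / 2 ** (k + 1), j / 2 ** (k + 1)) for j in range(1, 2 * s + 1)] + \
           [((j - 1) / 2 ** k, j / 2 ** k) for j in range(s + 1, 2 ** k + 1)]

def E(f, n):
    """Conditional expectation E_n(f): average f over each atom of B_n."""
    out = np.empty_like(f)
    for a, b in atoms(n):
        idx = (a <= t) & (t < b)
        out[idx] = f[idx].mean()
    return out

f = np.sin(2 * np.pi * t) + (t > 0.3)              # a sample function
for n in range(2, 33):
    k = int(np.log2(n - 1))
    coeff = 2 ** k * np.mean(f * haar(n))          # (1/|I_n|) * integral of f h_n
    assert np.allclose(E(f, n) - E(f, n - 1), coeff * haar(n))   # Remark 6.1.4(c)
```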
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 0.2.1
Definition 0.2.1 An element \( \alpha \in \mathbb{C} \) is an algebraic integer if it satisfies a monic polynomial with coefficients in \( \mathbb{Z} \) . From Gauss' Lemma (see Exercise 0.2, No. 1), the minimum polynomial of an algebraic integer will have its coefficients in \( \mathbb{Z} \) . Also, an element \( \alpha \in \mathbb{C} \) will be an algebraic integer if and only if the ring \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) is a finitely generated abelian group. Using this, it follows that the set of all algebraic integers is a subring of \( \mathbb{C} \) . Notation Let \( k \) be a number field. The set of algebraic integers in \( k \) will be denoted by \( {R}_{k} \) . Theorem 0.2.2 The set \( {R}_{k} \) is a ring. In the next section, the ideal structure of these rings will be discussed. For the moment, only the elementary structure will be considered. To distinguish elements of \( \mathbb{Z} \) among all algebraic integers, they may be referred to as rational integers. An algebraic integer is integral over \( \mathbb{Z} \) in the following more general sense. ## Definition 0.2.3 - Let \( R \) be a subring of the commutative ring \( A \) . Then \( \alpha \in A \) is integral over \( R \) if it satisfies a monic polynomial with coefficients in \( R \) . - The set of all elements of \( A \) which are integral over \( R \) is called the integral closure of \( R \) in \( A \) . Thus \( {R}_{k} \) is the integral closure of \( \mathbb{Z} \) in \( k \) . If \( \alpha \in \mathbb{C} \) satisfies a monic polynomial whose coefficients are algebraic integers \( {\alpha }_{1},{\alpha }_{2},\ldots ,{\alpha }_{n} \), then \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) is a finitely generated module over the ring \( \mathbb{Z}\left\lbrack {{\alpha }_{1},{\alpha }_{2},\ldots ,{\alpha }_{n}}\right\rbrack \), which is a finitely generated abelian group. Thus \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) is a finitely generated abelian group and so \( \alpha \) is an algebraic integer. Thus if \( \ell \mid k \) is a finite extension, then \( {R}_{\ell } \) is also the integral closure of \( {R}_{k} \) in \( \ell \) . This also shows that \( {R}_{k} \) is integrally closed in \( k \) ; that is, if \( \alpha \in k \) is integral over \( {R}_{k} \), then \( \alpha \in {R}_{k} \) . Let \( k \) be a number field and let \( \alpha \in k \) have minimum polynomial \( f \) of degree \( n \) . If \( N \) is the least common multiple of the denominators of the coefficients of \( f \), then \( {N\alpha } \) is an algebraic integer. Thus the field \( k \) can be recovered from \( {R}_{k} \) as the field of fractions of \( {R}_{k} \) . Since every number field \( k \) is a simple extension \( \mathbb{Q}\left( \alpha \right) \) of \( \mathbb{Q} \), it also follows that \( \alpha \) can be chosen to be an algebraic integer. Thus the free abelian group \( {R}_{k} \) has rank at least \( n \) . Definition 0.2.4 A \( \mathbb{Z} \) -basis for the abelian group \( {R}_{k} \) is called an integral basis of \( k \) . Theorem 0.2.5 Every number field has an integral basis. If \( \alpha \) is an algebraic integer such that \( k = \mathbb{Q}\left( \alpha \right) \), then we have seen that \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \subset {R}_{k} \) . If \( \delta \) is the discriminant of the basis \( \left\{ {1,\alpha ,{\alpha }^{2},\ldots ,{\alpha }^{n - 1}}\right\} \), then it can be shown that \( {R}_{k} \subset \frac{1}{\delta }\mathbb{Z}\left\lbrack \alpha \right\rbrack \) (see Exercise 0.2, No. 
4), so that \( {R}_{k} \) has rank exactly \( n \) . Not every number field has an integral basis which has the simple form \( \left\{ {1,\alpha ,{\alpha }^{2},\ldots ,{\alpha }^{n - 1}}\right\} \) . Such a basis is termed a power basis. (See Examples 0.3.11, No. 3 and Exercise 0.2, No. 11). In general, finding an integral basis is a tricky problem. The discriminant of an integral basis is an algebraic integer which also lies in \( \mathbb{Q} \), and hence its discriminant lies in \( \mathbb{Z} \) . For two integral bases of a number field \( k \), the change of bases matrix, and its inverse, will have rational integer entries and, hence, determinant \( \pm 1 \) . Thus by (0.2), any two integral bases of \( k \) will have the same discriminant. Definition 0.2.6 The discriminant of a number field \( k \), written \( {\Delta }_{k} \), is the discriminant of any integral basis of \( k \) . Recall that the discriminant is defined in terms of all Galois embeddings of \( k \), so that the discriminant of a number field is an invariant of the isomorphism class of \( k \) . ## Examples 0.2.7 1. The quadratic number fields \( k = \mathbb{Q}\left( \sqrt{d}\right) \), where \( d \) is a square-free integer, positive or negative, have integral bases \( \{ 1,\alpha \} \), where \( \alpha = \sqrt{d} \) if \( d ≢ \) \( 1\left( {\;\operatorname{mod}\;4}\right) \) and \( \alpha = \left( {1 + \sqrt{d}}\right) /2 \) if \( d \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) . Thus \( {\Delta }_{k} = {4d} \) if \( d ≢ 1\left( {\;\operatorname{mod}\;4}\right) \) and \( {\Delta }_{k} = d \) if \( d \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) . 2. For the cyclotomic number fields \( k = \mathbb{Q}\left( \xi \right) \) where \( \xi \) is a primitive \( p \) th root of unity for some odd prime \( p \), it can be shown with some effort that \( 1,\xi ,{\xi }^{2},\ldots ,{\xi }^{p - 2} \) is an integral basis. Hence, \( {\Delta }_{k} = {\left( -1\right) }^{\left( {p - 1}\right) /2}{p}^{p - 2} \) (see Exercise 0.1, No. 9). The discriminant is a strong invariant as the following important theorem shows. Theorem 0.2.8 For any positive integer \( D \), there are only finitely many fields with \( \left| {\Delta }_{k}\right| \leq D \) . This theorem can be deduced from Minkowski's theorem in the geometry of numbers on the existence of lattice points in convex bodies in \( {\mathbb{R}}^{n} \) whose volume is large enough relative to a fundamental region for the lattice. Considerable effort has gone into determining fields of small discriminant and much data is available on these. There do exist non-isomorphic fields with the same discriminant, but they are rather thinly spread. (See Examples 0.2.11). Thus in pinning down a number field, it is frequently sufficient to determine its degree over \( \mathbb{Q} \), the number of real and complex places and its discriminant. One of our first priorities is to be able to compute the discriminant. Recall that the discriminant of a polynomial, and, hence, of a basis of the form \( \left\{ {1, t,{t}^{2},\ldots ,{t}^{d - 1}}\right\} \) can be determined systematically (see Exercise 0.1, No. 6). Note also, that if \( \left\{ {{\alpha }_{1},{\alpha }_{2},\ldots ,{\alpha }_{d}}\right\} \) is a basis of \( k \) consisting of algebraic integers, then \[ \operatorname{discr}\left\{ {{\alpha }_{1},{\alpha }_{2},\ldots ,{\alpha }_{d}}\right\} = {m}^{2}{\Delta }_{k} \] \( \left( {0.11}\right) \) where \( m \in \mathbb{Z} \) by (0.2). 
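Examples 0.2.7 (1) and formula (0.11) can be checked symbolically. In the following sketch (sympy; the sample values of \( d \) are arbitrary) the discriminant of a basis \( \{1,\alpha\} \) of \( \mathbb{Q}(\sqrt{d}) \) is computed as the square of the determinant of the matrix of Galois embeddings; for \( d \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) one also sees \( \operatorname{discr}\{1,\sqrt{d}\} = 4d = 2^{2}{\Delta }_{k} \), an instance of (0.11) with \( m = 2 \).

```python
from sympy import sqrt, Matrix, simplify

def disc_basis(alpha, d):
    """Discriminant of the basis {1, alpha(sqrt(d))} of Q(sqrt(d)):
    the determinant of the matrix of the two embeddings sqrt(d) -> +-sqrt(d), squared."""
    s = sqrt(d)
    return simplify(Matrix([[1, alpha(s)], [1, alpha(-s)]]).det() ** 2)

for d in (2, 3, -1, -2):                  # d not congruent to 1 mod 4: integral basis {1, sqrt(d)}, disc = 4d
    print(d, disc_basis(lambda s: s, d))
for d in (5, 13, -3, -7):                 # d congruent to 1 mod 4: integral basis {1, (1+sqrt(d))/2}, disc = d
    print(d, disc_basis(lambda s: (1 + s) / 2, d))
    print(d, disc_basis(lambda s: s, d))  # the non-integral basis {1, sqrt(d)} gives 4d = 2^2 * d, as in (0.11)
```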
Thus if the discriminant of a basis consisting of algebraic integers is square-free, then that basis will be an integral basis and that discriminant will be the field discriminant. We may also use relative discriminants to assist in the computation. In general, for a field extension \( \ell \mid k \), there may not be a relative integral basis, since \( {R}_{k} \) need not be a principal ideal domain and \( {R}_{\ell } \) is not necessarily a free \( {R}_{k} \) -module. Definition 0.2.9 The relative discriminant \( {\delta }_{\ell \mid k} \) of a finite extension of number fields \( \ell \mid k \) is the ideal in \( {R}_{k} \) generated by the set of elements \( \left\{ {\operatorname{discr}\left\{ {{\alpha }_{1},{\alpha }_{2},\ldots ,{\alpha }_{d}}\right\} }\right\} \) where \( \left\{ {{\alpha }_{1},{\alpha }_{2},\ldots ,{\alpha }_{d}}\right\} \) runs through the bases of \( \ell \mid k \) consisting of algebraic integers. The following theorem then connects the discriminants (cf. (0.7)). Theorem 0.2.10 Let \( \ell \mid k \) be a finite extension of number fields, with \( \left\lbrack {\ell : k}\right\rbrack = d \) . \[ \left| {\Delta }_{\ell }\right| = \left| {N\left( {\delta }_{\ell \mid k}\right) {\Delta }_{k}^{d}}\right| \] \( \left( {0.12}\right) \) In this formula, \( N\left( I\right) \) is the norm of the ideal \( I \), which is the cardinality of the ring \( {R}_{k}/I \) . As we shall see in the next section, this is finite. ## Examples 0.2.11 1. Let \( k = \mathbb{Q}\left( t\right) \), where \( t \) satisfies the polynomial \( {x}^{3} + x + 1 \) . This polynomial has discriminant -31 . Thus this is the field discriminant and \( \left\{ {1, t,{t}^{2}}\right\} \) is an integral basis. 2. Consider again the example \( \ell = \mathbb{Q}\left( t\right) \), where \( t = \sqrt{3 - 2\sqrt{5}} \) . From (0.4) the discriminant of the basis \( \left\{ {1, t,{t}^{2},{t}^{3}}\right\} \) is \( -1,{126},{400} \) . However, \( u = \left( {1 + t}\right) /2 \) satisfies \( {x}^{2} - x + \left( {-1 + \sqrt{5}}\right) /2 = 0 \) and so is an algebraic integer. The discriminant of the basis \( \left\{ {1, u,{u}^{2},{u}^{3}}\right\} \) is -275 (see Exercise \( {0.1} \), No. 7). Note that \( k = \mathbb{Q}\left( \sqrt{5}\right) \subset \ell \) and so by \( \left( {0.12}\right), N\left( {\delta }_{\ell \mid k}\right) \mid {11} \) . In this case, \( {R}_{k} \) is a principal ideal domain, so that \( {R}_{\ell } \) is a free \( {R}_{k} \) -module and has a basis over \( {R}_{k} \), which we can take to be of the form \( \left\{ {{a}_{1} + {b}_{1}u,{a}_{2} + {b}_{2}u}\right\} \) with \( {a}_{i},{b}_{i} \in k \) . The discriminant of this basis is the ideal generated by \( {\left( {a}_{2}{b}_{1} - {a}_{1}{b}_{2}\right) }^{2}\left( {3 - 2\sqrt{5}}\right) \) . It now easily follows that \( {\delta }_{\ell \mid k} \) cannot be \( {R}_{k} \) . Th
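The numerical claims in Examples 0.2.11 can likewise be verified with a computer algebra system; the following sympy sketch recomputes the polynomial discriminants (for a power basis \( \{1, t, \ldots, t^{d-1}\} \) the basis discriminant equals the discriminant of the minimal polynomial of \( t \)).

```python
from sympy import symbols, sqrt, discriminant, minimal_polynomial

x = symbols('x')

# Example 0.2.11(1): disc(x^3 + x + 1) = -31 is square-free, so it is the field discriminant
print(discriminant(x ** 3 + x + 1, x))                       # -31

# Example 0.2.11(2): t = sqrt(3 - 2*sqrt(5)) and u = (1 + t)/2
t = sqrt(3 - 2 * sqrt(5))
pt = minimal_polynomial(t, x)                                # x**4 - 6*x**2 - 11
pu = minimal_polynomial((1 + t) / 2, x)
print(pt, discriminant(pt, x))                               # -1126400 = (-275) * 64**2
print(pu, discriminant(pu, x))                               # -275, the field discriminant quoted in the text
```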
113_Topological Groups
Definition 4.11
Definition 4.11. \( {R}_{2} = \{ \left( {p, m, n}\right) : p \) is the Gödel number of a Markov algorithm \( A, m, n \) are Gödel numbers of words \( a, b \) respectively, and \( \left( {a, b}\right) \) is a nonterminating computation step under \( A\} \) . Lemma 4.12. \( {R}_{2} \) is elementary. Proof. \( \left( {p, m, n}\right) \in {R}_{2} \) iff \( p \) is the Gödel number of a Markov algorithm, \( m \) and \( n \) are Gödel numbers of words, \( \exists i \leq \lg \) such that \( \left( {{\left( {\left( p\right) }_{i}\right) }_{0}, m}\right) \in {R}_{0} \), and \( \forall i \leq \lg \forall x \leq m\;\forall y \leq m\lbrack \left( {{\left( {\left( p\right) }_{i}\right) }_{0}, m}\right) \in {R}_{0}\;\& \;\forall j < i\lbrack \left( {{\left( {\left( p\right) }_{j}\right) }_{0}, m}\right) \notin {R}_{0}\rbrack \;\& \;(x, \) \( \left. {{\left( {\left( p\right) }_{i}\right) }_{0}, y, m) \in {R}_{1} \Rightarrow \operatorname{Cat}\left( {\operatorname{Cat}\left( {x,{\left( {\left( p\right) }_{i}\right) }_{1}, y}\right) = n\& {\left( {\left( p\right) }_{i}\right) }_{2} = 0}\right. }\right\rbrack . \) Definition 4.13. \( {R}_{3} \) is like \( {R}_{2} \) except with "terminating" instead of "nonterminating". Lemma 4.14. \( {R}_{3} \) is elementary. Definition 4.15. If \( \left\langle {{d}_{0},\ldots ,{d}_{m}}\right\rangle \) is a finite sequence of words, its Gödel number is \[ \mathop{\prod }\limits_{{i < m}}{p}_{i}^{{\mathcal{G}}_{di}} \] Also let \( {R}_{4} = \{ \left( {m, n}\right) : m \) is the Gödel number of a Markov algorithm \( A \) , and \( n \) is the Gödel number of a computation under \( A\} \) . Lemma 4.16. \( {R}_{4} \) is elementary. Proof. \( \left( {m, n}\right) \in {R}_{4} \) iff \( m \) is a Gödel number of a Markov algorithm, \( \ln \geq 1 \) , and \( \forall i < \ln - 1\left\lbrack {\left( {m,{\left( n\right) }_{i},{\left( n\right) }_{i + 1}}\right) \in {R}_{2}}\right\rbrack \) and \( \left( {m,{\left( n\right) }_{\ln + 1},{\left( n\right) }_{\ln }}\right) \in {R}_{3} \) . Definition 4.17. \( {f}_{1}x = \mathop{\prod }\limits_{{i \leq x}}{p}_{i}^{2} \) . Lemma 4.18. \( {f}_{1} \) is elementary. Definition 4.19. \( {f}_{2}^{1}x = \operatorname{Cat}\left( {2,{f}_{1}x}\right) \cdot {f}_{2}^{m + 1}\left( {{x}_{0},\ldots ,{x}_{m}}\right) = \operatorname{Cat}\left( {{f}_{2}^{m}\left( {{x}_{0},\ldots ,{x}_{m - 1},}\right) }\right. \) \( \left. {{f}_{2}^{1}{x}_{m}}\right) \) . Lemma 4.20. \( {f}_{2}^{m} \) is elementary, for each \( m \in \omega \sim \{ 0\} \) . Lemma 4.21. \( {f}_{2}^{m}\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \) is the Gödel number of \( \begin{array}{llllll} 0 & {1}^{\left( x0 + 1\right) } & 0 & \cdots & 0 & {1}^{\left( x\left( m - 1\right) + 1\right) }. \end{array} \) The notations \( {R}_{1},{R}_{2},{R}_{3},{R}_{4},{f}_{1},{f}_{2}^{m} \) will not be used beyond the present section. Definition 4.22. \( {T}_{m}^{\prime } = \left\{ {\left( {e,{x}_{0},\ldots ,{x}_{m - 1}, c}\right) : e}\right. \) is the Gödel number of a Markov algorithm \( A \), and \( c \) is the Gödel number of a computation \( \left\langle {{d}_{0},\ldots ,{d}_{n}}\right\rangle \) under \( A,{\left( c\right) }_{0} = \operatorname{Cat}\left( {{f}_{2}^{m}\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) ,2 \cdot {3}^{3}}\right) \), and 2 occurs only once in \( {d}_{n}\} \) . Lemma 4.23. \( {T}_{m}^{\prime } \) is elementary. Definition 4.24. 
\( {V}^{\prime }y = {\mu x} \leq y\left\lbrack {\left( {\operatorname{Cat}\left( {{f}_{2}^{1}x,2 \cdot {3}^{3}}\right) ,{\left( y\right) }_{1y}}\right) \in {R}_{0}}\right\rbrack \) . Lemma 4.25. \( {V}^{\prime } \) is elementary. Lemma 4.26. Every algorithmic function is recursive. Proof. Say \( f \) is \( m \) -ary and is computed by a Markov algorithm \( A \) . Let \( e \) be the Gödel number of \( A \) . Then for any \( {x}_{0},\ldots ,{x}_{m - 1} \in \omega \) we have \[ f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = {V}^{\prime }{\mu z}\left( {\left\langle {e,{x}_{0},\ldots ,{x}_{m - 1}, z}\right\rangle \in {T}_{m}^{\prime }}\right) . \] Thus \( f \) is recursive, as desired. Theorem 4.27. Turing computable \( = \) recursive \( = \) algorithmic. ## BIBLIOGRAPHY 1. Curry, H. Foundations of Mathematical Logic. New York: McGraw-Hill (1963). 2. Detlovs, V. The equivalence of normal algorithms and recursive functions. A.M.S. Translations Ser. 2, Vol. 23, pp. 15-81. 3. Markov, A. Theory of Algorithms. Jerusalem: Israel Program for Scientific Translations (1961). ## EXERCISES 4.28. Let \( A \) be the algorithm \[ {20} \rightarrow 0\;2 \] \[ 2\;1 \rightarrow 1\;2 \] \[ 2 \rightarrow \cdot \;{1}^{\left( 3\right) } \] \[ \rightarrow 2 \] Show that \( A \) converts any word \( a \) on 0,1 (i.e., involving only 0 and 1) into \( a{1}^{\left( 3\right) } \) . 4.29. Construct an algorithm which converts every word into a fixed word \( a \) . 4.30. Construct an algorithm which converts every word \( a \) into \( {1}^{\left( n + 1\right) } \), where \( n \) is the length of \( a \) . 4.31. Let \( a \) be a fixed word. Construct an algorithm which converts any word \( \neq a \) into the empty word, but leaves \( a \) alone. 4.32. There is no algorithm which converts any word \( a \) into \( {aa} \) . 4.33. Construct an algorithm which converts any word \( a \) on 0,1 into \( {aa} \) . 4.34*. Show directly that any algorithmic function is Turing computable. ## Recursion Theory We have been concerned so far with just the definitions of mathematical notions of effectiveness. We now want to give an introduction to the theory of effectiveness based on these definitions. Most of the technical details of the proofs of the results of this chapter are implicit in our earlier work. We wish to look at the proofs and results so far stated and try to see their significance. In order to formulate some of the results in their proper degree of generality we need to discuss the notion of partial functions. An \( m \) -ary partial function on \( \omega \) is a function \( f \) mapping some subset of \( {}^{m}\omega \) into \( \omega \) . The domain of \( f \) may be empty-then \( f \) itself is the empty set. The domain of \( f \) may be finite; it may also be infinite but not consist of all of \( {}^{m}\omega \) . Finally, it may be all of \( {}^{m}\omega \) , in which case \( f \) is an ordinary \( m \) -ary function on \( \omega \) . When talking about partial functions, we shall sometimes refer to those \( f \) with \( \operatorname{Dmn}f = {}^{m}\omega \) as total. 
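Before turning to partial functions in earnest, it is worth noting that a Markov algorithm such as the one in Exercise 4.28 is simple enough to simulate directly. The following minimal Python sketch (the representation of rules and the step bound are our own choices) applies, at each step, the first rule whose left side occurs in the current word, replaces its leftmost occurrence, and stops on a terminal rule; run on the algorithm of Exercise 4.28 it appends \( {1}^{\left( 3\right) } \) to any word on 0,1.

```python
def run_markov(rules, word, max_steps=10_000):
    """rules: list of (lhs, rhs, terminal). At each step the first applicable rule
    rewrites the leftmost occurrence of its lhs; a terminal rule ends the computation."""
    for _ in range(max_steps):
        for lhs, rhs, terminal in rules:
            i = word.find(lhs)                 # note: ''.find(...) is 0, so an empty lhs always applies
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                if terminal:
                    return word
                break
        else:
            return word                        # no rule applies: the algorithm halts
    raise RuntimeError("no termination within max_steps")

# Exercise 4.28:  20 -> 02,  21 -> 12,  2 -> .111,  (empty) -> 2
A = [("20", "02", False), ("21", "12", False), ("2", "111", True), ("", "2", False)]
print(run_markov(A, "0110"))                   # 0110111, i.e. the input word followed by three 1s
```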
Intuitively speaking, a partial function \( f \) (say \( m \) -ary) is effective if there is an automatic procedure \( P \) such that for any \( {x}_{0},\ldots ,{x}_{m - 1} \in \omega \), if \( P \) is presented with the \( m \) -tuple \( \left\langle {{x}_{0},\ldots ,{x}_{m - 1}}\right\rangle \) then it proceeds to calculate, and if \( \left\langle {{x}_{0},\ldots ,{x}_{m - 1}}\right\rangle \in \operatorname{Dmn}f \), then after finitely many steps \( P \) produces the answer \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \) and stops. In case \( \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \notin \operatorname{Dmn}f \) the procedure \( P \) never stops. We do not require that there be an automatic method for recognizing membership in Dmn \( f \) . Clearly if \( f \) is total then this notion of effectiveness coincides with our original intuitive notion (see p. 12). Now we want to give mathematical equivalents for the notion of an effective partial function. Definition 5.1. Let \( f \) be an \( m \) -ary partial function. We say that \( f \) is partial Turing computable iff there is a Turing machine \( M \) as in 1.1 such that for every tape description \( F \), all \( q, n \in \mathbb{Z} \), and all \( {x}_{0},\ldots ,{x}_{m - 1} \in \omega \), if \( {01}^{\left( x0 + 1\right) }0 \) \( \cdots {01}^{\left( x\left( m - 1\right) + 1\right) } \) lies on \( F \) beginning at \( q \) and ending at \( n \), and if \( {Fi} = 0 \) for all \( i > n \), then the two conditions (i) \( \left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \in \operatorname{Dmn}F \) , (ii) there is a computation of \( M \) beginning with \( \left( {F,{c}_{1}, n + 1}\right) \) are equivalent; and if one of them holds, and \( \left\langle {F,{c}_{1}, n + }\right. \) 1), \( \left( {{G}_{1},{a}_{1},{b}_{1}}\right) ,\ldots \) , \( \left( {{G}_{p - 1},{a}_{p - 1},{b}_{p - 1}}\right) \rangle \) is a computation of \( M \), then (1)-(3) of 3.9(ii) hold. Clearly any partial Turing computable function is effectively calculable. Corollary 5.2. Every Turing computable function is partial Turing computable. Every total partial Turing computable function is Turing computable. Next, we want to generalize our Definition 3.1 of recursive functions. To shorten some of our following exposition we shall use the informal notation \( \cdots \simeq - - - \) to mean that \( \cdots \) is defined iff - - - is defined, and if \( \cdots \) is defined, then \( \cdots = \cdots \) . For example, if \( f \) is the function with domain \( \{ 2,3\} \) then when we say \[ {gx} + {hx} \simeq f\left( {x + 2}\right) \;\text{ for all }x \in \omega , \] we mean that \( \operatorname{Dmn}g \cap \operatorname{Dmn}h = \{ 0,1\} \) and for any \( x \in \{ 0,1\} ,{gx} + {hx} = \) \( f\left( {x + 2}\right) \) . ## Definition 5.3 (i) Composition. We extend the operator \( {\mathrm{K}}_{n}^{m} \) of 2.1 to act upon partial functions. Let \( f \) be an \( m \) -ary partial function, and \( {g}_{0},\ldots ,{g}_{m - 1}n \) -ary partial functions. Then \( {\mathrm{K}}_{n}^{m} \) is the \( n \) -ary partial function \( h \) such that for any \( {x}_{0},\ldots ,{x}_{n - 1} \in \omega \) \[ h\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) \simeq f\left( {{g}_{0}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) ,\ldots ,{g}_{m - 1}\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) }\right) . \] (ii) Primitive recursion with parameters. If \( f \) is an \( m \) -ary partial function and \( h \)
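Definition 5.3(i) can be mirrored in code if one is willing to model "undefined" by a sentinel value rather than by genuine nontermination. In the following minimal Python sketch (the sample partial functions are arbitrary), None stands for an undefined value, and the composite is undefined as soon as one of the inner functions, or the outer one, is.

```python
from typing import Callable, Optional

Value = Optional[int]          # None models "undefined" (a stand-in for a nonterminating computation)

def compose(f: Callable[..., Value], *gs: Callable[..., Value]) -> Callable[..., Value]:
    """The operator of Definition 5.3(i) for partial functions:
    h(x_0,...,x_{n-1}) is f(g_0(x...), ..., g_{m-1}(x...)) when everything is defined."""
    def h(*xs: int) -> Value:
        ys = [g(*xs) for g in gs]
        if any(y is None for y in ys):
            return None
        return f(*ys)
    return h

half = lambda x: x // 2 if x % 2 == 0 else None      # defined only on even numbers
pred = lambda x: x - 1 if x > 0 else None            # defined only on positive numbers
h = compose(pred, half)
print([h(x) for x in range(6)])                      # [None, None, 0, None, 1, None]
```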
1185_(GTM91)The Geometry of Discrete Groups
Definition 3.7.4
Definition 3.7.4. Let \( Q \) be the hyperboloid model defined by \[ Q = \left\{ {\left( {{x}_{0},\ldots ,{x}_{n}}\right) \in {\mathbb{R}}^{n + 1} : q\left( {x, x}\right) = 1,{x}_{0} > 0}\right\} , \] where \[ q\left( {x, y}\right) = {x}_{0}{y}_{0} - \left( {{x}_{1}{y}_{1} + \cdots + {x}_{n}{y}_{n}}\right) . \] Observe that \( Q \) is one sheet of a hyperboloid of two sheets and that if \( x \in Q \) then \[ {x}_{0}^{2} = 1 + \left( {{x}_{1}^{2} + \cdots + {x}_{n}^{2}}\right) \] so, in fact, \( {x}_{0} \geq 1 \) . Now let \( \gamma = \left( {{\gamma }_{0},\ldots ,{\gamma }_{n}}\right) \) be any smooth curve on \( Q \) . Thus for all \( t \) , \[ {\gamma }_{0}{\left( t\right) }^{2} = {\gamma }_{1}{\left( t\right) }^{2} + \cdots + {\gamma }_{n}{\left( t\right) }^{2} + 1, \] so differentiating, \[ {\gamma }_{0}\left( t\right) {\dot{\gamma }}_{0}\left( t\right) = {\gamma }_{1}\left( t\right) {\dot{\gamma }}_{1}\left( t\right) + \cdots + {\gamma }_{n}\left( t\right) {\dot{\gamma }}_{n}\left( t\right) \] (more briefly, \( q\left( {\gamma ,\gamma }\right) = 1 \) so \( q\left( {\gamma ,\dot{\gamma }}\right) = 0 \) ). We deduce that \[ q\left( {\dot{\gamma },\dot{\gamma }}\right) = {\left( \frac{{\gamma }_{1}{\dot{\gamma }}_{1} + \cdots + {\gamma }_{n}{\dot{\gamma }}_{n}}{{\gamma }_{0}}\right) }^{2} - \left( {{\dot{\gamma }}_{1}^{2} + \cdots + {\dot{\gamma }}_{n}^{2}}\right) \] \[ \leq \left( {\sum {\gamma }_{j}^{2}}\right) \left( {\sum {\dot{\gamma }}_{j}^{2}}\right) /{\gamma }_{0}^{2} - \left( {\sum {\dot{\gamma }}_{j}^{2}}\right) \] \[ = - \left( {\sum {\dot{\gamma }}_{j}^{2}}\right) /{\gamma }_{0}^{2} \] \[ \leq 0 \] the summations being over \( j = 1,\ldots, n \) . Observe also that a strict inequality holds unless \( {\dot{\gamma }}_{1} = \cdots = {\dot{\gamma }}_{n} = 0 \) in which case, \( {\dot{\gamma }}_{0} = 0 \) also. It follows that we can construct a metric on \( Q \) in the usual way by the line element \[ d{s}^{2} = d{x}_{1}^{2} + \cdots + d{x}_{n}^{2} - d{x}_{0}^{2}, \] (3.7.1) the distance between two points on \( Q \) being the infimum of \[ \int {\left\lbrack -q\left( \dot{\gamma },\dot{\gamma }\right) \right\rbrack }^{1/2}{dt} \] over all curves joining the two points. The associated metric topology is the Euclidean topology on \( Q \) . We shall now compare \( Q \) and this metric with the model \( {B}^{n} \) and the metric \[ d{s}^{2} = \frac{{4d}{x}^{2}}{{\left( 1 - {\left| x\right| }^{2}\right) }^{2}}. \] (3.7.2) Theorem 3.7.5. The map \[ F : \left( {{x}_{0},\ldots ,{x}_{n}}\right) \mapsto \left( {\frac{{x}_{1}}{1 + {x}_{0}},\ldots ,\frac{{x}_{n}}{1 + {x}_{0}}}\right) \] is an isometry of \( Q \) with the metric (3.7.1) onto \( {B}^{n} \) with the metric (3.7.2). Proof. For brevity, we write \[ \left( {{y}_{1},\ldots ,{y}_{n}}\right) = \left( {\frac{{x}_{1}}{1 + {x}_{0}},\ldots ,\frac{{x}_{n}}{1 + {x}_{0}}}\right) \] and denote the vectors by \( x \) and \( y \) in the obvious way. As \( x \in Q \), a computation yields \[ {\left| y\right| }^{2} = \frac{{x}_{0} - 1}{{x}_{0} + 1} \] (3.7.3) so \( 0 \leq \left| y\right| < 1 \) and \( F \) maps \( Q \) into \( {B}^{n} \) . By direct computation we find that the map \[ {F}^{-1} : \left( {{y}_{1},\ldots ,{y}_{n}}\right) \mapsto \left( {\frac{1 + {\left| y\right| }^{2}}{1 - {\left| y\right| }^{2}},\frac{2{y}_{1}}{1 - {\left| y\right| }^{2}},\ldots ,\frac{2{y}_{n}}{1 - {\left| y\right| }^{2}}}\right) \] (3.7.4) is indeed the inverse of \( F \) and so \( F \) is a bijection of \( Q \) onto \( {B}^{n} \) . 
To verify that \( F \) is an isometry, we observe that \[ d{y}_{j} = \frac{d{x}_{j}}{1 + {x}_{0}} - \frac{{x}_{j}d{x}_{0}}{{\left( 1 + {x}_{0}\right) }^{2}} \] Thus, using this and (3.7.3) we have \[ \frac{4\left( {d{y}_{1}^{2} + \cdots + d{y}_{n}^{2}}\right) }{{\left( 1 - {\left| y\right| }^{2}\right) }^{2}} = {\left( 1 + {x}_{0}\right) }^{2}\mathop{\sum }\limits_{{j = 1}}^{n}{\left( \frac{d{x}_{j}}{1 + {x}_{0}} - \frac{{x}_{j}d{x}_{0}}{{\left( 1 + {x}_{0}\right) }^{2}}\right) }^{2} \] \[ = \mathop{\sum }\limits_{{j = 1}}^{n}d{x}_{j}^{2} + \frac{d{x}_{0}^{2}}{{\left( 1 + {x}_{0}\right) }^{2}}\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}^{2} - \frac{2\left( {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}d{x}_{j}}\right) d{x}_{0}}{\left( 1 + {x}_{0}\right) } \] \[ = \mathop{\sum }\limits_{{j = 1}}^{n}d{x}_{j}^{2} + \left( \frac{{x}_{0} - 1}{{x}_{0} + 1}\right) d{x}_{0}^{2} - \frac{d{x}_{0}d\left( {{x}_{0}^{2} - 1}\right) }{1 + {x}_{0}} \] \[ = \mathop{\sum }\limits_{{j = 1}}^{n}d{x}_{j}^{2} - d{x}_{0}^{2} \] It is now clear that the group \( G\left( Q\right) \) of isometries of \( Q \) and the group \( \operatorname{GM}\left( {B}^{n}\right) \) of isometries of \( {B}^{n} \) are isomorphic by virtue of the relation \[ \operatorname{GM}\left( {B}^{n}\right) = F\left( {G\left( Q\right) }\right) {F}^{-1}. \] Our aim now is to prove an alternative characterization of \( G\left( Q\right) \) and hence of \( \operatorname{GM}\left( {B}^{n}\right) \) . Theorem 3.7.6. The isometries of \( Q \) are precisely the \( \left( {n + 1}\right) \times \left( {n + 1}\right) \) matrices which preserve both the quadratic form \( q\left( {x, x}\right) \) and the half-space given by \( {x}_{0} > 0 \) . Proof. First, let \( A \) be any matrix with the prescribed properties. As \( {x}_{0} > 0 \) is preserved and as \[ q\left( {{xA},{xA}}\right) = q\left( {x, x}\right) = 1, \] when \( x \in Q \) we see that \( A \) preserves \( Q \) . Moreover, for any curve \( \gamma \) on \( Q \), let \( \Gamma = {\gamma A} \) . Then \( \dot{\Gamma } = \dot{\gamma }A \) so \[ q\left( {\dot{\Gamma },\dot{\Gamma }}\right) = q\left( {\dot{\gamma },\dot{\gamma }}\right) \] and this simply expresses the fact that \( \gamma \) and \( {\gamma A} \) have the same length. Thus each such \( A \) is an isometry of \( Q \) onto itself. It remains to show that every \( \phi \) in \( \operatorname{GM}\left( {B}^{n}\right) \) is of the form \( F\left( A\right) {F}^{-1} \) for some such matrix \( A \) and to do this, we simply compute the action of \( F\left( A\right) {F}^{-1} \) on \( {B}^{n} \) . Suppose then that \( A = \left( {a}_{ij}\right) \) where \( i, j = 0,1,\ldots, n \) . With the obvious notation, we write \[ \left( {{y}_{1},\ldots ,{y}_{n}}\right) \overset{{F}^{-1}}{ \mapsto }\left( {{u}_{0},{u}_{1},\ldots ,{u}_{n}}\right) \] \[ \overset{A}{ \mapsto }\left( {{v}_{0},{v}_{1},\ldots ,{v}_{n}}\right) \] \[ \overset{F}{ \mapsto }\left( {{w}_{1},\ldots ,{w}_{n}}\right) \] Now \[ \left( {{v}_{0},\ldots ,{v}_{n}}\right) = \left( {{u}_{0},\ldots ,{u}_{n}}\right) A, \] so \[ {v}_{j} = {u}_{0}{a}_{0j} + \cdots + {u}_{n}{a}_{nj} \] Using (3.7.4), this yields \[ \left( {1 - {\left| y\right| }^{2}}\right) {v}_{j} = \left( {1 + {\left| y\right| }^{2}}\right) {a}_{0j} + 2\left( {{y}_{1}{a}_{1j} + \cdots + {y}_{n}{a}_{nj}}\right) . 
\] Thus \[ {w}_{j} = \frac{{v}_{j}}{1 + {v}_{0}} \] \[ = \frac{\left( {1 - {\left| y\right| }^{2}}\right) {v}_{j}}{\left( {1 - {\left| y\right| }^{2}}\right) + \left( {1 - {\left| y\right| }^{2}}\right) {v}_{0}} \] \[ = \frac{\left( {1 + {\left| y\right| }^{2}}\right) {a}_{0j} + 2\left( {{y}_{1}{a}_{1j} + \cdots + {y}_{n}{a}_{nj}}\right) }{{\left| y\right| }^{2}\left( {{a}_{00} - 1}\right) + 2\left( {{y}_{1}{a}_{10} + \cdots + {y}_{n}{a}_{n0}}\right) + \left( {{a}_{00} + 1}\right) } \] (3.7.5) and this is the explicit expression for the map \( F\left( A\right) {F}^{-1} \) . If \( {A}_{0} \) is an orthogonal \( n \times n \) matrix (viewed as an isometry of \( {B}^{n} \) ), then \[ A = \left( \begin{matrix} 1 & 0 & \cdots & 0 \\ 0 & & & {A}_{0} \\ \vdots & & & \\ 0 & & & \end{matrix}\right) \] preserves \( q \) and the condition \( {x}_{0} > 0 \) . In this case,(3.7.5) yields \( w = y{A}_{0} \) and so every isometry of \( {B}^{n} \) which fixes the origin does arise in the form \( F\left( A\right) {F}^{-1} \) . It is only necessary to show now that the reflection in the sphere \( S\left( {\zeta, r}\right) \) orthogonal to \( {S}^{n - 1} \) is of the form \( F\left( A\right) {F}^{-1} \) . Because orthogonal transformations are of this form, we need only consider the case when \( \zeta \) is of the form \( \left( {s,0,\ldots ,0}\right) \) . It is actually more convenient to introduce another positive parameter \( t \) with \[ \zeta = \left( {c\left( t\right) ,0,\ldots ,0}\right) ,\;c\left( t\right) = \frac{\cosh t}{\sinh t} \] and \[ r = 1/\sinh t \] so the orthogonality requirement \( {\left| \zeta \right| }^{2} = 1 + {r}^{2} \) is satisfied. Consider now the matrix \[ P = \left( \begin{matrix} \cosh {2t} & \sinh {2t} & 0 & \cdots & 0 \\ - \sinh {2t} & - \cosh {2t} & 0 & \cdots & 0 \\ 0 & 0 & & & \\ \vdots & \vdots & & {I}_{n - 1} & \\ 0 & 0 & & & \end{matrix}\right) . \] observe that \( \det \left( P\right) = - 1 \) and that \( P \) preserves both the quadratic form \( q\left( {x, x}\right) \) and the half-space \( {x}_{0} > 0 \) . The effect \( y \mapsto w \) of \( F\left( A\right) {F}^{-1} \) on \( {B}^{n} \) is given by (3.7.5) and the denominator of this expression can be simplified as follows: \[ {\left| y\right| }^{2}\left( {{a}_{00} - 1}\right) + 2\left( {{y}_{1}{a}_{10} + \cdots + {y}_{n}{a}_{n0}}\right) + \left( {{a}_{00} + 1}\right) \] \[ = 2{\left| y\right| }^{2}{\sinh }^{2}t - 2{y}_{1}\sinh \left( {2t}\right) + 2{\cosh }^{2}t \] \[ = 2{\left| y - \zeta \right| }^{2}{\sinh }^{2}t \] \[ = 2{\left| y - \zeta \right| }^{2}/{r}^{2}\text{.} \] Now for \( j = 2,\ldots, n \) the formula (3.7.5) yields \[ {w}_{j} = \frac{{r}^{2}{y}_{j}}{{\left| y - \zeta \right| }^{2}} \] Also, \[ {w}_{1} = \frac{\left( {1 + {\left| y\right| }^{2}}\right) \sinh \left( {2t}\right) - 2{y}_{1}\cosh \left( {2t}\right) }{2{\left| y - \zeta \right| }^{2}{\sinh }^{2}t} \] \[ = \frac{\sinh \left( {2t}\right) \left\lbrack {{\left| y - \zeta \right| }^{2} + 1 - {\left| \zeta \right| }^{2} + 2\left( {y.\zeta }\right) }\right\rb
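The maps \( F \), \( {F}^{-1} \) and the matrix \( P \) above can all be checked numerically. The following sketch (numpy, with \( n = 2 \) and an arbitrary value of \( t \)) verifies that \( {F}^{-1} \) lands on \( Q \), that \( P \) preserves both \( q \) and the condition \( {x}_{0} > 0 \), and that \( F\left( P\right) {F}^{-1} \) acts on \( {B}^{2} \) as the reflection (inversion) in the sphere \( S\left( {\zeta, r}\right) \) with \( \zeta = \left( {\cosh t/\sinh t,0}\right) \) and \( r = 1/\sinh t \).

```python
import numpy as np

t = 0.7
zeta = np.array([np.cosh(t) / np.sinh(t), 0.0])         # c(t) = cosh t / sinh t
r = 1.0 / np.sinh(t)
P = np.array([[ np.cosh(2*t),  np.sinh(2*t), 0.0],
              [-np.sinh(2*t), -np.cosh(2*t), 0.0],
              [ 0.0,           0.0,          1.0]])

q = lambda x, y: x[0]*y[0] - x[1:] @ y[1:]              # the quadratic form q(x, y)
F = lambda x: x[1:] / (1 + x[0])                        # Q -> B^2
Finv = lambda y: np.concatenate(([(1 + y@y) / (1 - y@y)], 2*y / (1 - y@y)))
reflect = lambda y: zeta + r**2 * (y - zeta) / ((y - zeta) @ (y - zeta))   # inversion in S(zeta, r)

rng = np.random.default_rng(0)
for _ in range(5):
    y = 0.9 * rng.uniform(-1, 1, size=2) / np.sqrt(2)   # a point of B^2
    x = Finv(y)
    assert abs(q(x, x) - 1) < 1e-12 and x[0] > 0        # F^{-1}(y) lies on Q
    xP = x @ P                                          # row vectors act on the right, as in the text
    assert abs(q(xP, xP) - 1) < 1e-12 and xP[0] > 0     # P preserves q and the half-space x_0 > 0
    assert np.allclose(F(xP), reflect(y))               # F(P)F^{-1} is the reflection in S(zeta, r)
```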
1083_(GTM240)Number Theory II
Definition 10.5.18
Definition 10.5.18. Let \( K \) be a number field and \( \chi \) a character of the class group of \( K \) . For any ideal \( \mathfrak{a} \) of \( K \) denote by \( \left\lbrack \mathfrak{a}\right\rbrack \) its ideal class. We define the \( L \) -function \( {L}_{K}\left( {\chi, s}\right) \) associated with \( \chi \) by the formula \[ {L}_{K}\left( {\chi, s}\right) = \mathop{\sum }\limits_{{\mathfrak{a} \subset {\mathbb{Z}}_{K}}}\frac{\chi \left( \left\lbrack \mathfrak{a}\right\rbrack \right) }{\mathcal{N}{\left( \mathfrak{a}\right) }^{s}} = \mathop{\prod }\limits_{\mathfrak{p}}\frac{1}{1 - \chi \left( \left\lbrack \mathfrak{p}\right\rbrack \right) \mathcal{N}{\left( \mathfrak{p}\right) }^{-s}}, \] where as usual \( \mathfrak{a} \) runs through all integral ideals of \( {\mathbb{Z}}_{K} \) and \( \mathfrak{p} \) through all prime ideals of \( {\mathbb{Z}}_{K} \) . It is clear that \[ {L}_{K}\left( {\chi, s}\right) = \mathop{\sum }\limits_{{\mathcal{A} \in {Cl}\left( K\right) }}\chi \left( \mathcal{A}\right) {\zeta }_{K}\left( {\mathcal{A}, s}\right) . \] Proposition 10.5.19. As above, let \( {D}_{1} \) and \( {D}_{2} \) be two coprime fundamental discriminants, \( D = {D}_{1}{D}_{2}, K = \mathbb{Q}\left( \sqrt{D}\right) \), and let \( {\chi }_{{D}_{1}} \) be the character of \( {Cl}\left( K\right) \) defined in the above corollary. Then \[ {L}_{K}\left( {{\chi }_{{D}_{1}}, s}\right) = L\left( {{\chi }_{{D}_{1}}, s}\right) L\left( {{\chi }_{{D}_{2}}, s}\right) , \] where the \( L \) -functions on the right-hand side are the ordinary Dirichlet \( L \) - functions associated with the Dirichlet characters \( {\chi }_{{D}_{i}} \) . Proof. It is sufficient to show that the corresponding Euler factors are the same on both sides. Let \( p \) be a prime number. As usual we consider three cases. If \( p \) is inert in \( K \) there is a single prime ideal \( \mathfrak{p} = p{\mathbb{Z}}_{K} \) above \( p \) that is a principal ideal, so that \( {\chi }_{{D}_{1}}\left( \left\lbrack \mathfrak{p}\right\rbrack \right) = 1 \), and in fact \[ {\chi }_{{D}_{1}}\left( \left\lbrack \mathfrak{p}\right\rbrack \right) = \left( \frac{{D}_{1}}{\mathcal{N}\left( \mathfrak{p}\right) }\right) = \left( \frac{{D}_{1}}{{p}^{2}}\right) = 1. \] The Euler factor on the LHS is thus equal to \( {\left( 1 - {p}^{-{2s}}\right) }^{-1} \) . On the other hand, since \( p \) is inert we have \( \left( \frac{D}{p}\right) = - 1 \), hence \( \left( \frac{{D}_{1}}{p}\right) = - \left( \frac{{D}_{2}}{p}\right) \), so the Euler factor on the RHS is equal to \( {\left( 1 - {p}^{-s}\right) }^{-1}{\left( 1 + {p}^{-s}\right) }^{-1} = {\left( 1 - {p}^{-{2s}}\right) }^{-1} \) . If \( p \) is split in \( K \) we have two ideals \( \mathfrak{p} \) and \( \overline{\mathfrak{p}} \) above \( p \), and \[ {\chi }_{{D}_{1}}\left( \left\lbrack \mathfrak{p}\right\rbrack \right) = {\chi }_{{D}_{1}}\left( \left\lbrack \overline{\mathfrak{p}}\right\rbrack \right) = \left( \frac{{D}_{1}}{\mathcal{N}\left( \mathfrak{p}\right) }\right) = \left( \frac{{D}_{1}}{p}\right) , \] so the Euler factor on the LHS is equal to \( {\left( 1 - \left( \frac{{D}_{1}}{p}\right) {p}^{-s}\right) }^{-2} \) . On the other hand, since \( p \) is split we have \( \left( \frac{D}{p}\right) = 1 \) hence \( \left( \frac{{D}_{1}}{p}\right) = \left( \frac{{D}_{2}}{p}\right) \), so the Euler factor on the RHS is equal to \( {\left( 1 - \left( \frac{{D}_{1}}{p}\right) {p}^{-s}\right) }^{-2} \) . 
Finally, if \( p \) is ramified in \( K \) we have a single ideal \( \mathfrak{p} \) above \( p \), and \( p \mid D \) . Since \( D = {D}_{1}{D}_{2} \) with \( {D}_{1} \) and \( {D}_{2} \) coprime, \( p \) divides exactly one of the \( {D}_{i} \) . We consider both cases. If \( p \mid {D}_{2} \) then \( \mathcal{N}\left( \mathfrak{p}\right) = p \) is coprime to \( {D}_{1} \) ; hence \( {\chi }_{{D}_{1}}\left( \left\lbrack \mathfrak{p}\right\rbrack \right) = \left( \frac{{D}_{1}}{p}\right) \), so the Euler factor on the LHS is equal to \( {\left( 1 - \left( \frac{{D}_{1}}{p}\right) {p}^{-s}\right) }^{-1} \), which is equal to the Euler factor on the RHS since \( \left( \frac{{D}_{2}}{p}\right) = 0 \) . If \( p \mid {D}_{1} \) then \( p \nmid {D}_{2} \), and by Corollary 10.5.16 (4) we have \( {\chi }_{{D}_{1}}\left( \left\lbrack \mathfrak{p}\right\rbrack \right) = {\chi }_{{D}_{2}}\left( \left\lbrack \mathfrak{p}\right\rbrack \right) = \left( \frac{{D}_{2}}{p}\right) \), and since \( \left( \frac{{D}_{1}}{p}\right) = 0 \) we conclude again that the Euler factors are equal. We can now obtain the desired result on real quadratic fields. Corollary 10.5.20. Let \( {D}_{1} \) and \( {D}_{2} \) be two coprime fundamental discriminants, \( D = {D}_{1}{D}_{2}, K = \mathbb{Q}\left( \sqrt{D}\right) \), and assume that \( {D}_{1} > 0,{D}_{2} < 0 \), hence \( D < 0 \) . Then \[ L\left( {{\chi }_{{D}_{1}},1}\right) = - \frac{\omega \left( {D}_{2}\right) }{h\left( {D}_{2}\right) {D}_{1}^{1/2}}\mathop{\sum }\limits_{{\mathcal{A} \in {Cl}\left( K\right) }}{\chi }_{{D}_{1}}\left( \mathcal{A}\right) \log \left( {\Im {\left( {\tau }_{\mathcal{A}}\right) }^{1/2}{\left| \eta \left( {\tau }_{\mathcal{A}}\right) \right| }^{2}}\right) , \] where \( {\tau }_{\mathcal{A}} \) is the complex number corresponding to the ideal class \( \mathcal{A} \) as above. Equivalently, if we denote by \( {\varepsilon }_{{D}_{1}} \) the fundamental unit greater than 1 of the real quadratic field \( \mathbb{Q}\left( \sqrt{{D}_{1}}\right) \) we have \[ {\varepsilon }_{{D}_{1}} = {\left( \mathop{\prod }\limits_{{\mathcal{A} \in {Cl}\left( K\right) }}{\left( \Im {\left( {\tau }_{\mathcal{A}}\right) }^{1/4}\left| \eta \left( {\tau }_{\mathcal{A}}\right) \right| \right) }^{{\chi }_{{D}_{1}}\left( \mathcal{A}\right) }\right) }^{-\omega \left( {D}_{2}\right) /\left( {h\left( {D}_{1}\right) h\left( {D}_{2}\right) }\right) }. \] Proof. By the proposition we have \[ {L}_{K}\left( {{\chi }_{{D}_{1}}, s}\right) = \mathop{\sum }\limits_{{\mathcal{A} \in {Cl}\left( K\right) }}{\chi }_{{D}_{1}}\left( \mathcal{A}\right) {\zeta }_{K}\left( {\mathcal{A}, s}\right) = L\left( {{\chi }_{{D}_{1}}, s}\right) L\left( {{\chi }_{{D}_{2}}, s}\right) . \] Note the trivial fact that \( {D}_{1} \geq 5 \) and \( \left| {D}_{2}\right| \geq 3 \), so that \( \left| D\right| \geq {15} \) and \( \omega \left( D\right) = 2 \) . By Kronecker’s limit formula (here Corollary 10.5.9), around \( s = 1 \) we have \[ \zeta \left( {\mathcal{A}, s}\right) = \frac{\pi }{{\left| D\right| }^{1/2}}\left( {\frac{1}{s - 1} + {C}_{K}\left( \mathcal{A}\right) + O\left( {s - 1}\right) }\right) , \] where \[ {C}_{K}\left( \mathcal{A}\right) = {2\gamma } - 2\log \left( 2\right) - \frac{\log \left( {\left| D\right| /4}\right) }{2} - 2\log \left( {\Im {\left( \tau \right) }^{1/2}{\left| \eta \left( \tau \right) \right| }^{2}}\right) . 
\] Since \( {\chi }_{{D}_{1}} \) is a nontrivial character on \( {Cl}\left( K\right) \) we have \( \mathop{\sum }\limits_{{\mathcal{A} \in {Cl}\left( K\right) }}{\chi }_{{D}_{1}}\left( \mathcal{A}\right) = 0 \) , so \( {L}_{K}\left( {{\chi }_{{D}_{1}}, s}\right) \) does not have a pole at \( s = 1 \) and we have \[ {L}_{K}\left( {{\chi }_{{D}_{1}},1}\right) = - \frac{2\pi }{{\left| D\right| }^{1/2}}\mathop{\sum }\limits_{{\mathcal{A} \in {Cl}\left( K\right) }}{\chi }_{{D}_{1}}\left( \mathcal{A}\right) \log \left( {\Im {\left( {\tau }_{\mathcal{A}}\right) }^{1/2}{\left| \eta \left( {\tau }_{\mathcal{A}}\right) \right| }^{2}}\right) . \] On the other hand, by Proposition 10.5.10 (which is the simplest nontrivial case of Dirichlet's class number formula) we have \[ L\left( {{\chi }_{{D}_{2}},1}\right) = \frac{{2\pi h}\left( {D}_{2}\right) }{\omega \left( {D}_{2}\right) {\left| {D}_{2}\right| }^{1/2}}. \] Thus we obtain the formula \[ L\left( {{\chi }_{{D}_{1}},1}\right) = - \frac{\omega \left( {D}_{2}\right) }{h\left( {D}_{2}\right) {D}_{1}^{1/2}}\mathop{\sum }\limits_{{\mathcal{A} \in {Cl}\left( K\right) }}{\chi }_{{D}_{1}}\left( \mathcal{A}\right) \log \left( {\Im {\left( {\tau }_{\mathcal{A}}\right) }^{1/2}{\left| \eta \left( {\tau }_{\mathcal{A}}\right) \right| }^{2}}\right) , \] proving the first formula of the corollary. The second immediately follows from Dirichlet’s class number formula for real quadratic fields \( L\left( {{\chi }_{{D}_{1}},1}\right) = \) \( {2h}\left( {D}_{1}\right) \log \left( {\varepsilon }_{{D}_{1}}\right) /{D}_{1}^{1/2} \) . Since \( {\varepsilon }_{{D}_{1}} \) is the fundamental solution of Pell’s equation, this is called Kronecker's solution to Pell's equation, expressing an algebraic number as a combination of values of a transcendental function, which was part of Kronecker's Jugendtraum. Example. Consider the case \( {D}_{1} = 8,{D}_{2} = - 3 \), so that \( D = - {24}, K = \) \( \mathbb{Q}\left( \sqrt{-6}\right) \) . There are two ideal classes in \( {\mathbb{Z}}_{K} \), the corresponding \( {\tau }_{\mathcal{A}} \) are \( \sqrt{-6} \) and \( \sqrt{-6}/2 \), and the corresponding values of \( {\chi }_{{D}_{1}}\left( \mathcal{A}\right) \) are 1 (for the trivial class) and -1 (since otherwise \( {\chi }_{{D}_{1}} \) would be a trivial character). Since \( {\varepsilon }_{8} = \) \( 1 + \sqrt{2} \) we obtain the formula \[ \frac{2\log \left( {1 + \sqrt{2}}\right) }{\sqrt{8}} = - \frac{6}{\sqrt{8}}\left( {\log \left( {{\left( \sqrt{6}\right) }^{1/2}{\left| \eta \left( \sqrt{-6}\right) \right| }^{2}}\right) }\right. \] \[ \left. {-\log \left( {{\left( \sqrt{6}/2\right) }^{1/2}{\left| \eta \left( \sqrt{-6}/2\right) \right| }^{2}}\right) }\right) , \] hence \[ \log \left( {1 + \sqrt{2}}\right) = - 3\left( {\frac{\log \left( 2\right) }{2} + \log \left( \frac{{\left| \eta \left( \sqrt{-6}\right) \right| }^{2}}{{\left| \eta \left( \sqrt{-6}/2\right) \right| }^{2}}\right) }\right) . \] Since by definition \( \eta \left( \sqrt{-6}\right) \) and \( \eta \left( {\sqrt{-6}/2}\right) \) are positive real, this can also be writ
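The closing identity for \( {D}_{1} = 8,{D}_{2} = - 3 \) is easy to test numerically before believing it. The following sketch assumes mpmath; the helper `dedekind_eta` is ours, computed directly from the q-product \( \eta \left( \tau \right) = {q}^{1/24}\mathop{\prod }\limits_{{n \geq 1}}\left( {1 - {q}^{n}}\right) \) with \( q = {e}^{2\pi i\tau } \).

```python
from mpmath import mp, exp, pi, log, sqrt

mp.dps = 30

def dedekind_eta(tau, terms=200):
    # eta(tau) = q^(1/24) * prod_{n>=1} (1 - q^n), with q = exp(2*pi*i*tau)
    q = exp(2j * pi * tau)
    prod = mp.mpf(1)
    for n in range(1, terms + 1):
        prod *= 1 - q**n
    return exp(1j * pi * tau / 12) * prod

tau1 = 1j * sqrt(6)        # tau for the principal class of Q(sqrt(-6))
tau2 = 1j * sqrt(6) / 2    # tau for the non-principal class

lhs = log(1 + sqrt(2))     # log of the fundamental unit of Q(sqrt(8))
rhs = -3 * (log(2) / 2 + log(abs(dedekind_eta(tau1))**2 / abs(dedekind_eta(tau2))**2))
print(lhs, rhs)            # both print 0.881373587019543...
```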
1065_(GTM224)Metric Structures in Differential Geometry
Definition 2.1
Definition 2.1. A fiber bundle \( \pi : P \rightarrow B \) with fiber and group \( G \) is called a principal \( G \) -bundle if there exists a free right action of \( G \) on \( P \) and an atlas such that for each bundle chart \( \left( {{\pi }^{-1}\left( U\right) ,\left( {\pi ,\phi }\right) }\right) \), the map \( \phi : {\pi }^{-1}\left( U\right) \rightarrow G \) is \( G \) -equivariant; i.e., \[ \left( {\pi ,\phi }\right) \left( {pg}\right) = \left( {\pi \left( p\right) ,\phi \left( p\right) g}\right) ,\;p \in {\pi }^{-1}\left( U\right) ,\;g \in G. \] It follows that \( B \) is the quotient space \( P/G \) : Since \( \pi \left( {pg}\right) = \pi \left( p\right) \), the orbit \( G\left( p\right) = \{ {pg} \mid g \in G\} \) of \( p \) is contained in \( {\pi }^{-1}\left( {\pi \left( p\right) }\right) \) ; conversely, if \( \left( {\pi ,\phi }\right) \) is a bundle chart around \( p \), then for \( q \in {\pi }^{-1}\left( {\pi \left( p\right) }\right) \) , \[ q = {\left( \pi ,\phi \right) }^{-1}\left( {\pi \left( q\right) ,\phi \left( q\right) }\right) = {\left( \pi ,\phi \right) }^{-1}\left( {\pi \left( p\right) ,\phi \left( p\right) \phi {\left( p\right) }^{-1}\phi \left( q\right) }\right) \] \[ = {\left( \pi ,\phi \right) }^{-1}\left( {\pi \left( p\right) ,\phi \left( p\right) g}\right) = {pg}, \] where \( g = \phi {\left( p\right) }^{-1}\phi \left( q\right) \in G \) . Furthermore, the structure group is \( G \) acting on itself by left translations: for \( p \in P \) , \[ {f}_{\phi ,\psi }\left( {\pi \left( p\right) }\right) = \psi \left( p\right) \phi {\left( p\right) }^{-1}, \] where the choice of the element \( p \in {\pi }^{-1}\left( {\pi \left( p\right) }\right) \) is irrelevant because \[ \psi \left( {pg}\right) \phi {\left( pg\right) }^{-1} = \psi \left( p\right) g{\left( \phi \left( p\right) g\right) }^{-1} = \psi \left( p\right) g{g}^{-1}\phi {\left( p\right) }^{-1} = \psi \left( p\right) \phi {\left( p\right) }^{-1}. \] EXAMPLES AND REMARKS 2.1. (i) The Hopf fibrations \( {S}^{{2n} + 1} \rightarrow \mathbb{C}{P}^{n} \) and \( {S}^{{4n} + 3} \rightarrow \mathbb{H}{P}^{n} \) are principal \( {S}^{1} \) and \( {S}^{3} \) bundles. (ii) The trivial principal \( G \) -bundle over \( B \) is the projection \( B \times G \rightarrow B \) onto the first factor. The action of \( G \) is by right multiplication \( \left( {b,{g}_{1}}\right) g = \left( {b,{g}_{1}g}\right) \) on the second factor. (iii) Let \( G \) be a Lie group, \( H \) a closed subgroup of \( G \), and denote by \( B \) the homogeneous space \( G/H \) . We first show that the quotient space \( {G}^{n}/{H}^{k} \) admits a (unique) differentiable structure of dimension \( n - k \) for which the projection \( \pi : G \rightarrow G/H \) becomes a submersion. This actually follows from Theorem 14.2 in Chapter 1, but we provide an independent argument, since that theorem won’t be proved until Chapter 5. Observe that \( \pi \) is an open map for the quotient topology on \( G/H \) : If \( U \) is open in \( G \), then so is \( \pi \left( U\right) \) (in \( G/H \) ), because \( {\pi }^{-1}\left( {\pi \left( U\right) }\right) = { \cup }_{h \in H}{R}_{h}\left( U\right) \) is open in \( G \) . Furthermore, the quotient space is Hausdorff: If \( \pi \left( a\right) \neq \pi \left( b\right) \), so that \( {a}^{-1}b \notin H \), there exists a neighborhood of \( {a}^{-1}b \) that does not intersect \( H \) . 
Such a neighborhood always contains an open set of the form \( U \cdot {a}^{-1}b \cdot U \), where \( U \) is a neighborhood of the identity with \( U = {U}^{-1} \) . Then \( U{a}^{-1}{bU} \cap H = \varnothing \), which implies that \( {bUH} \cap {aUH} = \varnothing \) . Thus, \( \pi \circ {L}_{b}\left( U\right) \) and \( \pi \circ {L}_{a}\left( U\right) \) are disjoint open sets containing \( \pi \left( b\right) \) and \( \pi \left( a\right) \), respectively. In order to exhibit a manifold structure on \( G/H \), recall that Frobenius’ theorem applied to the distribution \( {L}_{g * }{H}_{e}, g \in G \), guarantees the existence of a chart \( \left( {U, x}\right) \) around \( e \), with \( x\left( U\right) = {\left( 0,1\right) }^{n} \), such that each slice \[ \left\{ {g \in U \mid {x}^{k + 1}\left( g\right) = {a}_{1},\ldots ,{x}^{n}\left( g\right) = {a}_{n - k}}\right\} \] is contained in a left coset of \( H \) . If \( S \) denotes the slice containing \( e \), there exists a neighborhood \( V \) of \( e \) such that \( V \cap S = V \cap H \) (since \( H \) is a submanifold of \( G \) ), and \( V = {V}^{-1}, V \cdot V \subset U \) . For the sake of simplicity, denote \( V \) by \( U \) again. Let \( N = {\left( {\pi }_{1} \circ x\right) }^{-1}\left( a\right) \), where \( {\pi }_{1} : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{k} \times 0 \) denotes projection, and \( a \mathrel{\text{:=}} {\pi }_{1} \circ x\left( e\right) \) . We claim that \( \pi \) is one-to-one when restricted to \( N \) : Indeed, if \( \pi \left( a\right) = \pi \left( b\right) \), then \( {a}^{-1}b \in U \cap H = U \cap S \), so that \( b \) belongs to \( {L}_{a}\left( {U \cap S}\right) \) . The latter set, being connected, is contained in a single slice. Since it also contains \( a, a \) and \( b \) lie in the same slice, so that \( x\left( a\right) = x\left( b\right) \) ; i.e, \( a = b \) . It follows that \( {\pi }_{\mid N} : N \rightarrow W \mathrel{\text{:=}} \pi \left( N\right) \) is an open, bijective map, hence a homeomorphism. So is \( \widetilde{x} \mathrel{\text{:=}} {\pi }_{2} \circ x \circ {\left( {\pi }_{\mid N}\right) }^{-1} : W \rightarrow \widetilde{x}\left( W\right) \subset 0 \times {\mathbb{R}}^{n - k} \), where \( {\pi }_{2} : {\mathbb{R}}^{n} \rightarrow 0 \times {\mathbb{R}}^{n - k} \) denotes the projection onto the other factor. We may then take \( \left( {W,\widetilde{x}}\right) \) as a chart around \( \pi \left( e\right) \) . In order to produce a chart around \( \pi \left( a\right) \), consider the homeomorphism \( {\mathbb{L}}_{a} \) of \( G/H \) induced by left-multiplication by \( a \) in \( G,{\mathbb{L}}_{a}\left( {\pi \left( g\right) }\right) \mathrel{\text{:=}} \pi \left( {ag}\right) \) . The desired chart is then given by \( \left( {{\mathbb{L}}_{a}\left( W\right) ,\widetilde{x}}\right. \circ \) \( {\mathbb{L}}_{{a}^{-1}} \) ). Given \( b \in G \), the corresponding transition function is \( {\pi }_{2} \circ {x}_{\mid N} \circ {L}_{{a}^{-1}b} \circ \) \( {\left( {\pi }_{2} \circ {x}_{\mid N}\right) }^{-1} \), so that the collection \( \left\{ {\left( {{\mathbb{L}}_{a}\left( W\right) ,\widetilde{x} \circ {\mathbb{L}}_{{a}^{-1}}}\right) \mid a \in G}\right\} \) induces a differentiable structure on \( G/H \) . It remains to check that \( \pi \) is differentiable at \( g \in G \) . 
Using the charts \( \left( {{L}_{g}\left( U\right), x \circ {L}_{{g}^{-1}}}\right) \) around \( g \) and \( \left( {{\mathbb{L}}_{g}\left( W\right) ,\widetilde{x} \circ {\mathbb{L}}_{{g}^{-1}}}\right) \) around \( \pi \left( g\right) \), we have \[ \widetilde{x} \circ {\mathbb{L}}_{{g}^{-1}} \circ \pi \circ {\left( x \circ {L}_{{g}^{-1}}\right) }^{-1} = \widetilde{x} \circ {\mathbb{L}}_{{g}^{-1}} \circ \pi \circ {L}_{g} \circ {x}^{-1} = \widetilde{x} \circ \pi \circ {x}^{-1} = {\pi }_{2}, \] which establishes the claim. Finally, we show that \( \pi : G \rightarrow G/H \) is a principal \( H \) -bundle: Notice that for any \( \left\lbrack g\right\rbrack \mathrel{\text{:=}} \pi \left( g\right) \), there exists a neighborhood \( U = {\mathbb{L}}_{g}\left( W\right) \) of \( \left\lbrack g\right\rbrack \) on which \( \pi \) has a right inverse \( {s}_{U} \) . In fact, taking \( {s}_{U} = {L}_{g} \circ {\left( {\pi }_{\mid N}\right) }^{-1} \circ {\mathbb{L}}_{{g}^{-1} \mid U} \), we have \( \pi \circ {s}_{U} = {1}_{U} \) . Then the map \[ U \times H \rightarrow {\pi }^{-1}\left( U\right) \] \[ \left( {\left\lbrack g\right\rbrack, h}\right) \mapsto \left( {{s}_{U}\left\lbrack g\right\rbrack }\right) \cdot h \] is a diffeomorphism. Its inverse is of the form \( \left( {\pi ,{\phi }_{U}}\right) \), where \( {\phi }_{U} : {\pi }^{-1}\left( U\right) \rightarrow H \) is \( H \) -equivariant, since \( {\phi }_{U}\left( g\right) = {s}_{U}{\left( \pi \left( g\right) \right) }^{-1}g \), so that \[ {\phi }_{U}\left( {gh}\right) = {s}_{U}{\left( \pi \left( gh\right) \right) }^{-1}{gh} = \left( {{s}_{U}{\left( \pi \left( g\right) \right) }^{-1} \cdot g}\right) h = {\phi }_{U}\left( g\right) h. \] Thus, the collection of such maps \( \left( {\pi ,{\phi }_{U}}\right) \) forms a principal bundle atlas on \( G \) over \( B \) . We have seen that given a fiber bundle \( \pi : M \rightarrow B \) with fiber \( F \) and group \( G \), a bundle atlas \( \left\{ \left( {{\pi }^{-1}\left( {U}_{\alpha }\right) ,\left( {\pi ,{\phi }_{\alpha }}\right) }\right) \right\} \) determines a family of transition functions \( {f}_{\alpha ,\beta } : {U}_{\alpha } \cap {U}_{\beta } \rightarrow G \) which satisfy \( {f}_{\alpha ,\gamma } = {f}_{\beta ,\gamma } \cdot {f}_{\alpha ,\beta } \) . It turns out that the bundle may be reconstructed from these transition functions. More generally, one has the following: Proposition 2.1. Let \( {\left\{ {U}_{\alpha }\right\} }_{\alpha \in A} \) be an open cover of a manifold \( B \), and \( G \) a Lie group acting effectively on a manifold \( F \) . Suppose there is a collection of maps \( {f}_{\alpha ,\beta } : {U}_{\alpha } \cap {U}_{\beta } \rightarrow G \) such that (2.1) \[ {f}_{\alpha ,\gamma }\left( p\right) = {f}_{\beta ,\gamma }\left( p\right) \cdot {f}_{\alpha ,\beta }\left( p\right) ,\;p \in {U}_{\alpha } \cap {U}_{\beta } \cap {U}_{\gamma },\;\alpha ,\beta ,\gamma \in A. \] Then there exists a fiber bundle \( \pi : M \rightarrow B \) with fiber \( F \), structure group \( G \) , and a bundle atlas whose transition functions are the given collection \( \left\{ {f}_{\alpha ,\beta }\right\} \) . Furthermore, if \( F = G \) and \(
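Example (i) above can be made concrete in a few lines. The sketch below is only an illustration (NumPy assumed; the chart map \( \phi \left( {{z}_{1},{z}_{2}}\right) = {z}_{1}/\left| {z}_{1}\right| \) over \( \left\{ {{z}_{1} \neq 0}\right\} \) is our choice of local trivialization, not notation from the text). It checks the equivariance \( \phi \left( {pg}\right) = \phi \left( p\right) g \) for the Hopf bundle \( {S}^{3} \rightarrow \mathbb{C}{P}^{1} \) with group \( {S}^{1} \), and recovers the group element \( g = \phi {\left( p\right) }^{-1}\phi \left( q\right) \) relating two points of the same fiber, exactly as in the argument following Definition 2.1.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_point_S3():
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    return complex(v[0], v[1]), complex(v[2], v[3])   # (z1, z2) with |z1|^2 + |z2|^2 = 1

def act(p, g):
    # free right S^1-action on S^3 by scalar multiplication
    return (p[0] * g, p[1] * g)

def phi(p):
    # chart map over U = {[z1 : z2] : z1 != 0}: phi(z1, z2) = z1/|z1| lies in S^1
    return p[0] / abs(p[0])

p = random_point_S3()
g = np.exp(0.83j)                      # an element of S^1
q = act(p, g)                          # q lies in the same fiber as p

print(np.isclose(phi(act(p, g)), phi(p) * g))         # equivariance: phi(pg) = phi(p) g
h = phi(p).conjugate() * phi(q)                        # h = phi(p)^{-1} phi(q)
print(np.isclose(h, g), np.allclose(act(p, h), q))     # q = p h, so each fiber is a G-orbit
```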
1167_(GTM73)Algebra
Definition 2.1
Definition 2.1. An ideal \( \mathrm{P} \) of a ring \( \mathrm{R} \) is said to be left [resp. right] primitive if the quotient ring \( \mathrm{R}/\mathrm{P} \) is a left [resp. right] primitive ring. REMARK. Since the zero ring has no simple modules and hence is not primitive, \( R \) itself is not a left (or right) primitive ideal. Definition 2.2. An element \( \mathrm{a} \) in a ring \( \mathrm{R} \) is said to be left quasi-regular if there exists \( \mathrm{r}\varepsilon \mathrm{R} \) such that \( \mathrm{r} + \mathrm{a} + \mathrm{{ra}} = 0 \) . The element \( \mathrm{r} \) is called a left quasi-inverse of \( \mathrm{a} \) . \( A \) (right, left or two-sided) ideal \( \mathbf{I} \) of \( \mathbf{R} \) is said to be \( \mathbf{{left}} \) quasi-regular if every element of \( \mathbf{I} \) is left quasi-regular. Similarly, \( \mathbf{a} \in \mathbf{R} \) is said to be \( \mathbf{{right}} \) quasi-regular if there exists \( \mathrm{r}\varepsilon \mathrm{R} \) such that \( \mathrm{a} + \mathrm{r} + \mathrm{{ar}} = 0 \) . Right quasi-inverses and right quasi-regular ideals are defined analogously. REMARKS. It is sometimes convenient to write \( r \circ a \) for \( r + a + {ra} \) . If \( R \) has an identity, then \( a \) is left [resp. right] quasi-regular if and only if \( {1}_{R} + a \) is left [resp. right] invertible (Exercise 1). In order to simplify the statement of several results, we shall adopt the following convention (which is actually a theorem of axiomatic set theory). If the class \( \mathcal{C} \) of those subsets of a ring \( \mathbf{R} \) that satisfy a given property is empty, then \( \bigcap \mathrm{I} \) is defined to be \( \mathrm{R} \) . ## Theorem 2.3. If \( \mathrm{R} \) is a ring, then there is an ideal \( \mathrm{J}\left( \mathrm{R}\right) \) of \( \mathrm{R} \) such that: (i) \( \mathrm{J}\left( \mathrm{R}\right) \) is the intersection of all the left annihilators of simple left \( \mathrm{R} \) -modules; (ii) \( \mathrm{J}\left( \mathrm{R}\right) \) is the intersection of all the regular maximal left ideals of \( \mathrm{R} \) ; (iii) J(R) is the intersection of all the left primitive ideals of \( \mathbf{R} \) ; (iv) \( \mathrm{J}\left( \mathrm{R}\right) \) is a left quasi-regular left ideal which contains every left quasi-regular left ideal of \( \mathrm{R} \) ; (v) Statements (i)-(iv) are also true if "left" is replaced by "right". Theorem 2.3 is proved below (p. 428). The ideal \( J\left( R\right) \) is called the Jacobson radical of the ring \( R \) . Historically it was first defined in terms of quasi-regularity (Theorem 2.3 (iv)), which turns out to be a radical property as defined in the introductory remarks above (see p. 431). As the importance of the role of modules in the study of rings became clearer the other descriptions of \( J\left( R\right) \) were developed (Theorem 2.3 (i)-(iii)). REMARKS. According to Theorem 2.3 (i) and the convention adopted above, \( J\left( R\right) = R \) if \( R \) has no simple left \( R \) -modules (and hence no annihilators of same). If \( R \) has an identity, then every ideal is regular and maximal left ideals always exist (Theorem III.2.18), whence \( J\left( R\right) \neq R \) by Theorem 2.3(ii). Theorem 2.3(iv) does not imply that \( J\left( R\right) \) contains every left quasi-regular element of \( R \) ; see Exercise 4. The proof of Theorem 2.3 (which begins on p. 428) requires five preliminary lemmas. The lemmas are stated and proved for left ideals. 
However, each of Lemmas 2.4-2.8 is valid with "left" replaced by "right" throughout. Examples are given after the proof of Theorem 2.3. Lemma 2.4. If \( \mathrm{I}\left( { \neq \mathrm{R}}\right) \) is a regular left ideal of a ring \( \mathrm{R} \), then \( \mathrm{I} \) is contained in a maximal left ideal which is regular. SKETCH OF PROOF. Since \( I \) is regular, there exists \( e \in R \) such that \( r - {re\varepsilon I} \) for all \( {r\varepsilon R} \) . Thus any left ideal \( J \) containing \( I \) is also regular (with the same element \( e \in R) \) . If \( I \subset J \) and \( {e\varepsilon J} \), then \( r - {re\varepsilon I} \subset J \) implies \( {r\varepsilon J} \) for every \( {r\varepsilon R} \), whence \( R = J \) . Use this fact to verify that Zorn’s Lemma is applicable to the set \( \mathcal{S} \) of all left ideals \( L \) such that \( I \subset L \subset R \), partially ordered by inclusion. A maximal element of \( \mathcal{S} \) is a regular maximal left ideal containing \( I \) . Lemma 2.5. Let \( \mathrm{R} \) be a ring and let \( \mathrm{K} \) be the intersection of all regular maximal left ideals of \( \mathbf{R} \) . Then \( \mathbf{K} \) is a left quasi-regular left ideal of \( \mathbf{R} \) . PROOF. \( K \) is obviously a left ideal. If \( a \in K \) let \( T = \{ r + {ra} \mid r \in R\} \) . If \( T = R \) , then there exists \( {r\varepsilon R} \) such that \( r + {ra} = - a \) . Consequently \( r + a + {ra} = 0 \) and hence \( a \) is left quasi-regular. Thus it suffices to show that \( T = R \) . Verify that \( T \) is a regular left ideal of \( R \) (with \( e = - a \) ). If \( T \neq R \), then \( T \) is contained in a regular maximal left ideal \( {I}_{0} \) by Lemma 2.4. (Thus \( T \neq R \) is impossible if \( R \) has no regular maximal left ideals.) Since \( a \in K \subset {I}_{0},{ra\varepsilon }{I}_{0} \) for all \( r \in R \) . Thus since \( r + {ra\varepsilon T} \subset {I}_{0} \), we must have \( r \in {I}_{0} \) for all \( r \in R \) . Consequently, \( R = {I}_{0} \), which contradicts the maximality of \( {I}_{0} \) . Therefore \( T = R \) . Lemma 2.6. Let \( \mathrm{R} \) be a ring that has a simple left \( \mathrm{R} \) -module. If \( \mathrm{I} \) is a left quasi-regular left ideal of \( \mathrm{R} \), then \( \mathrm{I} \) is contained in the intersection of all the left annihilators of simple left \( \mathrm{R} \) -modules. PROOF. If \( I ⊄ \cap \mathcal{Q}\left( A\right) \), where the intersection is taken over all simple left \( R \) -modules \( A \), then \( {IB} \neq 0 \) for some simple left \( R \) -module \( B \), whence \( {Ib} \neq 0 \) for some nonzero \( {b\varepsilon B} \) . Since \( I \) is a left ideal, \( {Ib} \) is a nonzero submodule of \( B \) . Consequently \( B = {Ib} \) by simplicity and hence \( {ab} = - b \) for some \( a \in I \) . Since \( I \) is left quasi-regular, there exists \( r \in R \) such that \( r + a + {ra} = 0 \) . Therefore, \( 0 = {0b} \) \( = \left( {r + a + {ra}}\right) b = {rb} + {ab} + {rab} = {rb} - b - {rb} = - b \) . Since this conclusion contradicts the fact that \( b \neq 0 \), we must have \( I \subset \cap \mathcal{Q}\left( A\right) \) . I Lemma 2.7. An ideal \( \mathrm{P} \) of a ring \( \mathrm{R} \) is left primitive if and only if \( \mathrm{P} \) is the left annihilator of a simple left \( \mathrm{R} \) -module. PROOF. If \( P \) is a left primitive ideal, let \( A \) be a simple faithful \( R/P \) -module. 
Verify that \( A \) is an \( R \) -module, with \( {ra}\left( {r \in R, a \in A}\right) \) defined to be \( \left( {r + P}\right) a \) . Then \( {RA} = \left( {R/P}\right) A \neq 0 \) and every \( R \) -submodule of \( A \) is an \( R/P \) -submodule of \( A \), whence \( A \) is a simple \( R \) -module. If \( r \in R \), then \( {rA} = 0 \) if and only if \( \left( {r + P}\right) A = 0 \) . But \( \left( {r + P}\right) A = 0 \) if and only if \( r \in P \) since \( A \) is a faithful \( R/P \) -module. Therefore \( P \) is the left annihilator of the simple \( R \) -module \( A \) . Conversely suppose that \( P \) is the left annihilator of a simple \( R \) -module \( B \) . Verify that \( B \) is a simple \( R/P \) -module with \( \left( {r + P}\right) b = {rb} \) for \( r \in R, b \in B \) . Furthermore if \( \left( {r + P}\right) B = 0 \), then \( {rB} = 0 \), whence \( r \in \mathcal{Q}\left( B\right) = P \) and \( r + P = 0 \) in \( R/P \) . Consequently, \( B \) is a faithful \( R/P \) -module. Therefore \( R/P \) is a left primitive ring, whence \( P \) is a left primitive ideal of \( R \) . Lemma 2.8. Let \( \mathrm{I} \) be a left ideal of a ring \( \mathrm{R} \) . If \( \mathrm{I} \) is left quasi-regular, then \( \mathrm{I} \) is right quasi-regular. PROOF. If \( I \) is left quasi-regular and \( {a\varepsilon I} \), then there exists \( {r\varepsilon R} \) such that \( r \circ a = r + a + {ra} = 0 \) . Since \( r = - a - {ra\varepsilon I} \), there exists \( s \in R \) such that \( s \circ r = s + r + {sr} = 0 \), whence \( s \) is right quasi-regular. The operation \( \circ \) is easily seen to be associative. Consequently \[ a = 0 \circ a = \left( {s \circ r}\right) \circ a = s \circ \left( {r \circ a}\right) = s \circ 0 = s. \] Therefore \( a \), and hence \( I \), is right quasi-regular. PROOF OF THEOREM 2.3. Let \( J\left( R\right) \) be the intersection of all the left annihilators of simple left \( R \) -modules. If \( R \) has no simple left \( R \) -modules, then \( J\left( R\right) = R \) by the convention adopted above. \( J\left( R\right) \) is an ideal by Theorem 1.4. We now show that statements (ii)-(iv) are true for all left ideals. We first observe that \( R \) itself cannot be the annihilator of a simple left \( R \) -module \( A \) (otherwise \( {RA} = 0 \) ). This fact together with Theorem 1.3 and Lemma 2.7 implies that the following conditions are equivalent: (a) \( J\left( R\right) = R \) ; (b) \( R \) has no simple left \( R \) -modules; (c) \( R \) has no regular maximal left ideals; (d) \( R \) has no left primitive ideals. Therefore by the convention adopted above, (ii), (iii), and (iv) are true if \( J\left( R\right) = R \) . (ii) Assume \( J\left( R\right) \neq R \) and let \( K \) be the intersection of all the regular maximal left ideals of \( R \) . Then \( K \subset J\left( R\right) \) by Lemmas 2.5 and 2.6. Conversely suppose \( c \in J\left( R\right) \) . By Theorem 1.3, \( J\left
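For a concrete feel for Definition 2.2 and Theorem 2.3, everything can be tested by brute force in a small commutative ring; the sketch below (plain Python, and the choice of \( \mathbb{Z}/{12}\mathbb{Z} \) is ours) checks that \( J\left( {\mathbb{Z}/{12}\mathbb{Z}}\right) = \left( 6\right) \), the intersection of the maximal ideals \( \left( 2\right) \) and \( \left( 3\right) \), is a quasi-regular ideal, and that the set of all quasi-regular elements is strictly larger and is not an ideal, in the spirit of the remark that Theorem 2.3(iv) does not force every quasi-regular element into \( J\left( R\right) \).

```python
n = 12
R = range(n)

def quasi_regular(a):
    # a is quasi-regular iff some r satisfies r + a + r*a = 0 in Z/nZ
    # (in a commutative ring with identity this says exactly that 1 + a is a unit)
    return any((r + a + r * a) % n == 0 for r in R)

J = {a for a in R if a % 6 == 0}                       # (6) = (2) ∩ (3), the Jacobson radical of Z/12Z
print(sorted(J), all(quasi_regular(a) for a in J))     # [0, 6] True: J is a quasi-regular ideal

Q = {a for a in R if quasi_regular(a)}
print(sorted(Q))                                       # [0, 4, 6, 10]: strictly more than J ...
print(all((a + b) % n in Q for a in Q for b in Q))     # False: ... but not an ideal (4 + 4 = 8 fails)
```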
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 4.7
Definition 4.7. If \( V \) is a finite-dimensional inner product space and \( G \) is a matrix Lie group, a representation \( \Pi : G \rightarrow \mathrm{{GL}}\left( V\right) \) is unitary if \( \Pi \left( A\right) \) is a unitary operator on \( V \) for every \( A \in G \) . Proposition 4.8. Suppose \( G \) is a matrix Lie group with Lie algebra \( \mathfrak{g} \) . Suppose \( V \) is a finite-dimensional inner product space, \( \Pi \) is a representation of \( G \) acting on \( V \) , and \( \pi \) is the associated representation of \( \mathfrak{g} \) . If \( \Pi \) is unitary, then \( \pi \left( X\right) \) is skew selfadjoint for all \( X \in \mathfrak{g} \) . Conversely, if \( G \) is connected and \( \pi \left( X\right) \) is skew self-adjoint for all \( X \in \mathfrak{g} \), then \( \Pi \) is unitary. In a slight abuse of notation, we will say that a representation \( \pi \) of a real Lie algebra \( \mathfrak{g} \) acting on a finite-dimensional inner product space is unitary if \( \pi \left( X\right) \) is skew self-adjoint for all \( X \in \mathfrak{g} \) . Proof. The proof is similar to the computation of the Lie algebra of the unitary group \( \mathrm{U}\left( n\right) \) . If \( \Pi \) is unitary, then for all \( X \in \mathfrak{g} \) we have \[ {\left( {e}^{{t\pi }\left( X\right) }\right) }^{ * } = \Pi {\left( {e}^{tX}\right) }^{ * } = \Pi {\left( {e}^{tX}\right) }^{-1} = {e}^{-{t\pi }\left( X\right) },\;t \in \mathbb{R}, \] so that \( {e}^{{t\pi }{\left( X\right) }^{ * }} = {e}^{-{t\pi }\left( X\right) } \) . Differentiating this relation with respect to \( t \) at \( t = 0 \) gives \( \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \) . In the other direction, if \( \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \), then the above calculation shows that \( \Pi \left( {e}^{tX}\right) = {e}^{{t\pi }\left( X\right) } \) is unitary. If \( G \) is connected, then by Corollary 3.47, every \( A \in G \) is a product of exponentials, showing that \( \Pi \left( A\right) \) is unitary. ## 4.2 Examples of Representations A matrix Lie group \( G \) is, by definition, a subset of some \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . The inclusion map of \( G \) into \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) (i.e., the map \( \Pi \left( A\right) = A \) ) is a representation of \( G \), called the standard representation of \( G \) . If \( G \) happens to be contained in \( \mathrm{{GL}}\left( {n;\mathbb{R}}\right) \subset \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \), then we can also think of the standard representation as a real representation. Thus, for example, the standard representation of \( \mathrm{{SO}}\left( 3\right) \) is the one in which \( \mathrm{{SO}}\left( 3\right) \) acts in the usual way on \( {\mathbb{R}}^{3} \) and the standard representation of \( \mathrm{{SU}}\left( 2\right) \) is the one in which \( \mathrm{{SU}}\left( 2\right) \) acts on \( {\mathbb{C}}^{2} \) in the usual way. Similarly, if \( \mathfrak{g} \subset {M}_{n}\left( \mathbb{C}\right) \) is a Lie algebra of matrices, the map \( \pi \left( X\right) = X \) is called the standard representation of \( \mathfrak{g} \) . Consider the one-dimensional complex vector space \( \mathbb{C} \) . For any matrix Lie group \( G \), we can define the trivial representation, \( \Pi : G \rightarrow \mathrm{{GL}}\left( {1;\mathbb{C}}\right) \), by the formula \[ \Pi \left( A\right) = I \] for all \( A \in G \) . 
Of course, this is an irreducible representation, since \( \mathbb{C} \) has no nontrivial subspaces, let alone nontrivial invariant subspaces. If \( \mathfrak{g} \) is a Lie algebra, we can also define the trivial representation of \( \mathfrak{g},\pi : \mathfrak{g} \rightarrow \mathfrak{{gl}}\left( {1;\mathbb{C}}\right) \), by \[ \pi \left( X\right) = 0 \] for all \( X \in \mathfrak{g} \) . This is an irreducible representation. Recall the adjoint map of a group or Lie algebra, described in Definitions 3.32 and 3.7. Definition 4.9. If \( G \) is a matrix Lie group with Lie algebra \( \mathfrak{g} \), the adjoint representation of \( G \) is the map \( \mathrm{{Ad}} : G \rightarrow \mathrm{{GL}}\left( \mathfrak{g}\right) \) given by \( A \mapsto {\mathrm{{Ad}}}_{A} \) . Similarly, the adjoint representation of a finite-dimensional Lie algebra \( \mathfrak{g} \) is the map ad : \( \mathfrak{g} \rightarrow \mathfrak{{gl}}\left( \mathfrak{g}\right) \) given by \( X \mapsto {\operatorname{ad}}_{X} \) . If \( G \) is a matrix Lie group with Lie algebra \( \mathfrak{g} \), then by Proposition 3.34, the Lie algebra representation associated to the adjoint representation of \( G \) is the adjoint representation of \( \mathfrak{g} \) . Note that in the case of \( \mathrm{{SO}}\left( 3\right) \), the standard representation and the adjoint representation are both three-dimensional real representations. In fact, these two representations are isomorphic (Exercise 2). Example 4.10. Let \( {V}_{m} \) denote the space of homogeneous polynomials of degree \( m \) in two complex variables. For each \( U \in \mathrm{{SU}}\left( 2\right) \), define a linear transformation \( {\Pi }_{m}\left( U\right) \) on the space \( {V}_{m} \) by the formula \[ \left\lbrack {{\Pi }_{m}\left( U\right) f}\right\rbrack \left( z\right) = f\left( {{U}^{-1}z}\right) ,\;z \in {\mathbb{C}}^{2}. \] (4.2) Then \( {\Pi }_{m} \) is a representation of \( \mathrm{{SU}}\left( 2\right) \) . Elements of \( {V}_{m} \) have the form \[ f\left( {{z}_{1},{z}_{2}}\right) = {a}_{0}{z}_{1}^{m} + {a}_{1}{z}_{1}^{m - 1}{z}_{2} + {a}_{2}{z}_{1}^{m - 2}{z}_{2}^{2} + \cdots + {a}_{m}{z}_{2}^{m} \] (4.3) with \( {z}_{1},{z}_{2} \in \mathbb{C} \) and the \( {a}_{j} \) ’s arbitrary complex constants, from which we see that \( \dim \left( {V}_{m}\right) = m + 1 \) . Explicitly, if \( f \) is as in (4.3), then \[ \left\lbrack {{\Pi }_{m}\left( U\right) f}\right\rbrack \left( {{z}_{1},{z}_{2}}\right) = \mathop{\sum }\limits_{{k = 0}}^{m}{a}_{k}{\left( {U}_{11}^{-1}{z}_{1} + {U}_{12}^{-1}{z}_{2}\right) }^{m - k}{\left( {U}_{21}^{-1}{z}_{1} + {U}_{22}^{-1}{z}_{2}\right) }^{k}. \] By expanding out the right-hand side of this formula, we see that \( {\Pi }_{m}\left( U\right) f \) is again a homogeneous polynomial of degree \( m \) . Thus, \( {\Pi }_{m}\left( U\right) \) actually maps \( {V}_{m} \) into \( {V}_{m} \) . To see that \( {\Pi }_{m} \) is actually a representation, compute that \[ {\Pi }_{m}\left( {U}_{1}\right) \left\lbrack {{\Pi }_{m}\left( {U}_{2}\right) f}\right\rbrack \left( z\right) = \left\lbrack {{\Pi }_{m}\left( {U}_{2}\right) f}\right\rbrack \left( {{U}_{1}^{-1}z}\right) = f\left( {{U}_{2}^{-1}{U}_{1}^{-1}z}\right) \] \[ = {\Pi }_{m}\left( {{U}_{1}{U}_{2}}\right) f\left( z\right) \text{.} \] The inverse on the right-hand side of (4.2) is necessary in order to make \( {\Pi }_{m} \) a representation. We will see in Proposition 4.11 that each \( {\Pi }_{m} \) is irreducible and we will see in Sect. 
4.6 that every finite-dimensional irreducible representation of \( \mathrm{{SU}}\left( 2\right) \) is isomorphic to one (and only one) of the \( {\Pi }_{m} \) ’s. (Of course, no two of the \( {\Pi }_{m} \) ’s are isomorphic, since they do not even have the same dimension.) The associated representation \( {\pi }_{m} \) of \( \mathrm{{su}}\left( 2\right) \) can be computed as \[ \left( {{\pi }_{m}\left( X\right) f}\right) \left( z\right) = {\left. \frac{d}{dt}f\left( {e}^{-{tX}}z\right) \right| }_{t = 0}. \] Now, let \( z\left( t\right) = \left( {{z}_{1}\left( t\right) ,{z}_{2}\left( t\right) }\right) \) be the curve in \( {\mathbb{C}}^{2} \) defined as \( z\left( t\right) = {e}^{-{tX}}z \) . By the chain rule, we have \[ {\pi }_{m}\left( X\right) f = {\left. \frac{\partial f}{\partial {z}_{1}}\frac{d{z}_{1}}{dt}\right| }_{t = 0} + {\left. \frac{\partial f}{\partial {z}_{2}}\frac{d{z}_{2}}{dt}\right| }_{t = 0}. \] Since \( {dz}/{\left. dt\right| }_{t = 0} = - {Xz} \), we obtain \[ {\pi }_{m}\left( X\right) f = - \frac{\partial f}{\partial {z}_{1}}\left( {{X}_{11}{z}_{1} + {X}_{12}{z}_{2}}\right) - \frac{\partial f}{\partial {z}_{2}}\left( {{X}_{21}{z}_{1} + {X}_{22}{z}_{2}}\right) . \] (4.4) We may then take the unique complex-linear extension of \( \pi \) to \( \operatorname{sl}\left( {2;\mathbb{C}}\right) \cong \operatorname{su}{\left( 2\right) }_{\mathbb{C}} \) , as in Proposition 3.39. This extension is given by the same formula, but with \( X \in \) sl \( \left( {2;\mathbb{C}}\right) \) . If \( X, Y \), and \( H \) are the following basis elements for \( \operatorname{sl}\left( {2;\mathbb{C}}\right) \) : \[ H = \left( \begin{array}{rr} 1 & 0 \\ 0 & - 1 \end{array}\right) ;\;X = \left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right) ;\;Y = \left( \begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right) , \] then applying formula (4.4) gives \[ {\pi }_{m}\left( H\right) = - {z}_{1}\frac{\partial }{\partial {z}_{1}} + {z}_{2}\frac{\partial }{\partial {z}_{2}} \] \[ {\pi }_{m}\left( X\right) = - {z}_{2}\frac{\partial }{\partial {z}_{1}} \] \[ {\pi }_{m}\left( Y\right) = - {z}_{1}\frac{\partial }{\partial {z}_{2}}. \] Applying these operators to a basis element \( {z}_{1}^{m - k}{z}_{2}^{k} \) for \( {V}_{m} \) gives \[ {\pi }_{m}\left( H\right) \left( {{z}_{1}^{m - k}{z}_{2}^{k}}\right) = \left( {-m + {2k}}\right) {z}_{1}^{m - k}{z}_{2}^{k} \] \[ {\pi }_{m}\left( X\right) \left( {{z}_{1}^{m - k}{z}_{2}^{k}}\right) = - \left( {m - k}\right) {z}_{1}^{m - k - 1}{z}_{2}^{k + 1}, \] \[ {\pi }_{m}\left( Y\right) \left( {{z}_{1}^{m - k}{z}_{2}^{k}}\right) = - k{z}_{1}^{m - k + 1}{z}_{2}^{k - 1}. \] (4.5) Thus, \( {z}_{1}^{m - k}{z}_{2}^{k} \) is an eigenvector for \( {\pi }_{m}\left( H\right) \) with eigenvalue \( - m + {2k} \), while \( {\pi }_{m}\left( X\right) \) and \( {\pi }_{m}\left( Y\right) \) have the effect of shifting the exponent \( k \) of \( {z}_{2} \) up or down by one. Note that since \( {\pi }_{m}\left( X\righ
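Formula (4.5) can be checked by writing the operators as matrices in the monomial basis \( {\left\{ {z}_{1}^{m - k}{z}_{2}^{k}\right\} }_{k = 0}^{m} \). The sketch below (NumPy assumed; only a verification, not part of the text) builds those matrices for \( m = 4 \) and confirms that they satisfy the \( \operatorname{sl}\left( {2;\mathbb{C}}\right) \) commutation relations \( \left\lbrack {X, Y}\right\rbrack = H \), \( \left\lbrack {H, X}\right\rbrack = {2X} \), \( \left\lbrack {H, Y}\right\rbrack = - {2Y} \), i.e. that \( {\pi }_{m} \) really does preserve brackets on this basis.

```python
import numpy as np

m = 4
d = m + 1                                       # dim V_m, basis z1^{m-k} z2^k, k = 0..m
H = np.diag([float(-m + 2 * k) for k in range(d)])
X = np.zeros((d, d))
Y = np.zeros((d, d))
for k in range(d):
    if k + 1 <= m:
        X[k + 1, k] = -(m - k)                  # pi_m(X): z1^{m-k} z2^k -> -(m-k) z1^{m-k-1} z2^{k+1}
    if k >= 1:
        Y[k - 1, k] = -k                        # pi_m(Y): z1^{m-k} z2^k -> -k z1^{m-k+1} z2^{k-1}

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(X, Y), H))               # [pi(X), pi(Y)] = pi(H)
print(np.allclose(comm(H, X), 2 * X))           # [pi(H), pi(X)] = 2 pi(X)
print(np.allclose(comm(H, Y), -2 * Y))          # [pi(H), pi(Y)] = -2 pi(Y)
```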
1126_(GTM32)Lectures in Abstract Algebra III. Theory of Fields and Galois Theory
Definition 7
Definition 7. Let \( \Phi \) be a field and let \( V \) be an ordered (commutative) group with 0. A mapping \( \varphi : \alpha \rightarrow \varphi \left( \alpha \right) \) of \( \Phi \) into \( V \) is called a valuation if (i) \( \varphi \left( \alpha \right) = 0 \) if and only if \( \alpha = 0 \) . (ii) \( \varphi \left( {\alpha \beta }\right) = \varphi \left( \alpha \right) \varphi \left( \beta \right) \) . (iii) \( \varphi \left( {\alpha + \beta }\right) \leq \max \left( {\varphi \left( \alpha \right) ,\varphi \left( \beta \right) }\right) \) . The exact sweep of this definition will become apparent soon. At this point it is clear that real non-archimedean valuations are a special case in which \( V \) is the set of non-negative real numbers. On the other hand, it should be noted that the real archimedean valuations are not valuations in the present sense. This inconsistency in terminology will cause no real difficulty. We shall now give an example of a valuation for which \( V \) is not the non-negative reals. Example. In this example we shall find it convenient to use the additive notation in the group \( G \) . The modifications in Definition 7 which are necessitated by this change are obvious, so we shall not write these down. The group \( G \) we shall consider is the additive group of integer pairs \( \left( {k, l}\right) \) . We introduce the lexicographic order in \( G \), that is, we define \( \left( {k, l}\right) < \left( {{k}^{\prime },{l}^{\prime }}\right) \) if either \( k < {k}^{\prime } \) or \( k = {k}^{\prime } \) and \( l < {l}^{\prime } \) . One checks that this is a linear ordering preserved under addition; hence \( G \) is an ordered (additive) group. We let \( V = G \cup \{ \infty \} \) where the ordering is extended to \( V \) by setting \( \infty > \left( {k, l}\right) \) for every \( \left( {k, l}\right) \) e \( G \) . Also we define \( \left( {k, l}\right) + \infty = \infty \) . Now let \( \mathrm{P} = \Phi \left( {\xi ,\eta }\right) \), a purely transcendental extension of a field \( \Phi \) where \( \{ \xi ,\eta \} \) is a transcendency basis for \( \mathrm{P} \) over \( \Phi \) . If \( {a\varepsilon }\mathrm{P} \) and \( a \neq 0 \), we can write \( a = {\xi }^{m}{\eta }^{n}p\left( {\xi ,\eta }\right) q{\left( \xi ,\eta \right) }^{-1} \) where \( p\left( {\xi ,\eta }\right) \) and \( q\left( {\xi ,\eta }\right) \) are polynomials in \( \xi ,\eta \) with non-zero constant terms, and \( m \) and \( n \) are integers. Then we define \( \varphi \left( a\right) = \left( {m, n}\right) \) . Also we set \( \varphi \left( 0\right) = \infty \) . Then (i) holds. It is easy to check that \( \varphi \left( {ab}\right) = \varphi \left( a\right) + \varphi \left( b\right) \) and \( \varphi \left( {a + b}\right) \geq \min \left( {\varphi \left( a\right) ,\varphi \left( b\right) }\right) \) . The first of these is (ii) in the additive notation and the second can be changed to (iii) by reversing the ordering (writing \( > \) for \( < \) ). Hence our function is essentially a valuation. ## EXERCISES 1. Let \( G \) be the additive ordered group of integer pairs \( \left( {k, l}\right) \) given in the foregoing example. Let \( c \) and \( e \) be real numbers such that \( 0 < c < 1 \) and \( e \) is positive and irrational. Show that the mapping \( \left( {k, l}\right) \rightarrow {c}^{k + {el}} \) is an isomorphism of \( G \) into the ordered multiplicative group of positive real numbers \( P \) . Show that \( G \) is not order isomorphic to a subgroup of \( P \) . 2. 
Let \( \mathrm{P} = \Phi \left( {\xi ,\eta }\right) \) and \( a = {\xi }^{m}{\eta }^{n}p\left( {\xi ,\eta }\right) q{\left( \xi ,\eta \right) }^{-1} \) where \( p \) and \( q \) are polynomials in \( \xi ,\eta \) with non-zero constant terms, as in the example above. Define \( \psi \left( a\right) = \) \( {c}^{m + {en}} \) where \( c \) and \( e \) are real numbers, \( 0 < c < 1, e \) positive irrational. Show that \( \psi \) is a non-archimedean real valuation which is not discrete. 3. Define a valuation \( \varphi \) of an integral domain 0 by replacing the field \( \Phi \) in Definition 7 by the integral domain 0 . Show that any valuation \( \psi \) of 0 into \( V \) has a unique extension to a valuation of the field of fractions \( \Phi \) of \( 0 \) . 4. Let \( G \) be an arbitrary (commutative) ordered group and let \( \mathfrak{o} = {\Phi }_{0}\left( G\right) \) be the group ring over a field \( {\Phi }_{0} \) of \( G \) (Vol. I, ex. 2, p. 95). Show that \( \mathfrak{o} \) is an integral domain. If \( a = \mathop{\sum }\limits_{1}^{r}{\alpha }_{i}{g}_{i},{\alpha }_{i} \neq 0 \) in \( {\Phi }_{0},{g}_{i} \) e \( G \), define \( \varphi \left( a\right) = \min {g}_{i} \) (in the ordering \( < \) defined in \( G \) ). Define \( \varphi \left( 0\right) = 0 \) . Show that \( \varphi \) is a valuation of \( \mathfrak{o} \) . Use exs. 3 and 4 to show that if \( V \) is any ordered group with 0, then there exists a field \( \Phi \) with a valuation \( \varphi \) of \( \Phi \) into \( V \) such that \( \varphi \left( \Phi \right) = V \) . 9. Valuations, valuation rings, and places. In this section we shall establish an equivalence between the concepts of a valuation in the sense of Definition 7 and two other concepts: valuation ring and place. The first of these, valuation ring, is an intrinsic notion in the sense that its definition does not require any system external to the given field \( \Phi \) . Moreover, the valuation rings give the link between valuations and places. We have already encountered these for real non-archimedean valuations. Now let \( \Phi \) be any field and let \( \varphi \) be a valuation with values in the ordered group \( V \) with 0 . We note first that \( \varphi {\left( 1\right) }^{2} = \varphi \left( {1}^{2}\right) = \) \( \varphi \left( 1\right) \) and, since \( \bar{G} \) contains no elements of finite order \( \neq 1,\varphi \left( 1\right) \) \( = 1 \) . Also \( \varphi {\left( -1\right) }^{2} = \varphi \left( 1\right) = 1 \), so \( \varphi \left( {-1}\right) = 1 \) and \( \varphi \left( {-\alpha }\right) = \) \( \varphi \left( {-1}\right) \varphi \left( \alpha \right) = \varphi \left( \alpha \right) \) . From \( \alpha {\alpha }^{-1} = 1 \) we obtain \( \varphi \left( {\alpha }^{-1}\right) = \varphi {\left( \alpha \right) }^{-1} \) and \( \varphi \left( {\alpha {\beta }^{-1}}\right) = \varphi \left( \alpha \right) \varphi {\left( \beta \right) }^{-1} \) . Now let \( \mathfrak{o} \) be the subset of \( \Phi \) of elements \( \alpha \) such that \( \varphi \left( \alpha \right) \leq 1 \) . Then, if \( \alpha ,{\beta \varepsilon }\mathfrak{o},\varphi \left( {\alpha - \beta }\right) \leq \max \) \( \left( {\varphi \left( \alpha \right) ,\varphi \left( \beta \right) }\right) \leq 1 \) and \( \varphi \left( {\alpha \beta }\right) = \varphi \left( \alpha \right) \varphi \left( \beta \right) \leq 1 \) . Hence \( \mathfrak{o} \) is a subring. 
Now suppose \( \alpha \notin \mathfrak{o} \), then \( \varphi \left( \alpha \right) > 1 \) and \( \varphi \left( {\alpha }^{-1}\right) = \varphi {\left( \alpha \right) }^{-1} \) \( < 1 \) . Hence \( {\alpha }^{-1}\varepsilon \mathfrak{o} \) . We therefore see that \( \mathfrak{o} \) is a valuation ring (in \( \Phi \) ) in the sense of the following Definition 8. If \( \Phi \) is a field, a valuation ring \( \mathfrak{o} \) in \( \Phi \) is a subring of \( \Phi \) (containing 1) such that every element of \( \Phi \) is either in \( \mathfrak{o} \) or is the inverse of an element of \( \mathfrak{o} \) . If \( \mathfrak{o} \) is the subring of elements \( \alpha \) satisfying \( \varphi \left( \alpha \right) \leq 1 \) for the valuation \( \varphi \), then \( \mathfrak{o} \) is called the valuation ring of \( \varphi \) . This is a direct generalization of the definition we gave before for non-archimedean real valuations. We shall now show that any valuation ring gives rise to a valuation \( {\varphi }^{\prime } \) for which the given ring is the valuation ring. Suppose \( \mathfrak{o} \) is a valuation ring in \( \Phi \) . Let \( U \) be the set of units of \( \mathfrak{o},\mathfrak{p} \) the set of non-units, \( {\mathfrak{p}}^{ * } \) the set of non-units \( \neq 0,{\Phi }^{ * } \) the multiplicative group of non-zero elements of \( \Phi \) . Then \( U \) is a subgroup of the commutative group \( {\Phi }^{ * } \) and we shall take \( {G}^{\prime } = \) \( {\Phi }^{ * }/U \) for our group. We introduce an ordering in \( {G}^{\prime } \) by letting \( {H}^{\prime } \) be the set of cosets \( {\beta U},{\beta \varepsilon }{\mathfrak{p}}^{ * } \) . It is clear that the product of a non-unit of \( \mathfrak{o} \) with any element of \( \mathfrak{o} \) is a non-unit. Hence if \( {\beta }_{1} \) , \( {\beta }_{2}\varepsilon {\mathfrak{p}}^{ * } \), then \( {\beta }_{1}{\beta }_{2}\varepsilon {\mathfrak{p}}^{ * } \) ; so if \( {\beta }_{1}U,{\beta }_{2}U,\varepsilon {H}^{\prime } \), then \( \left( {{\beta }_{1}U}\right) \left( {{\beta }_{2}U}\right) \) \( = {\beta }_{1}{\beta }_{2}{U\varepsilon }{H}^{\prime } \) . If \( {\beta U} \) is any element of \( {G}^{\prime } = {\Phi }^{ * }/U \), then \( \beta \neq 0 \), and if \( \beta \notin {\mathfrak{p}}^{ * } \), then either \( {\beta \varepsilon U} \) or \( \beta \notin U \) and \( \beta \notin {\mathfrak{p}}^{ * } \) . In the first case \( {\beta U} = U \), and in the second \( \beta \notin \mathfrak{o} \), so \( {\beta }^{-1} \) e \( \mathfrak{o} \) and, since \( {\beta }^{-1}{\varepsilon U} \) implies \( {\beta \varepsilon U} \), we have \( {\beta }^{-1}\varepsilon {\mathfrak{p}}^{ * } \) . Hence \( {\left( \beta U\right) }^{-1} = \) \( {\beta }^{-1}{U\varepsilon }{H}^{\prime } \) . Thus we see that \( {G}^{\prime } = {H}^{\prime } \cup \{ 1\} \cup {\left( {H}^{\prime }\right) }^{-1} \) holds. Also \( 1 = U \notin {H}^{\prime } \) . Hence \( {H}^{\prime } \) makes \( {G}^{\prime } \) an ordered group as in Defini
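Definition 8 is easy to experiment with in the most familiar special case, the \( p \) -adic valuation on \( \mathbb{Q} \), written additively so that products become sums and (iii) becomes \( v\left( {\alpha + \beta }\right) \geq \min \left( {v\left( \alpha \right), v\left( \beta \right) }\right) \), as in the example following Definition 7. The sketch below (plain Python; the helper names are ours) checks the valuation axioms on random rationals and checks the defining property of the valuation ring \( \mathfrak{o} \) : every nonzero \( x \) satisfies \( x \in \mathfrak{o} \) or \( {x}^{-1} \in \mathfrak{o} \).

```python
from fractions import Fraction
from random import randint, seed

p = 5

def v(x):
    # additive p-adic valuation on Q: v(p^k * a/b) = k with p dividing neither a nor b; v(0) = +infinity
    if x == 0:
        return float('inf')
    k, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; k += 1
    while den % p == 0:
        den //= p; k -= 1
    return k

in_o = lambda x: v(x) >= 0      # the valuation ring o = {x : v(x) >= 0}

seed(0)
xs = [Fraction(randint(-60, 60), randint(1, 60)) for _ in range(200)]
print(all(v(a * b) == v(a) + v(b) for a in xs for b in xs if a and b))     # property (ii)
print(all(v(a + b) >= min(v(a), v(b)) for a in xs for b in xs))            # property (iii), additive form
print(all(in_o(x) or in_o(1 / x) for x in xs if x != 0))                   # o is a valuation ring (Definition 8)
```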
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 12.29
Definition 12.29. If \( \delta \) is not analytically integral, we say that \( \lambda \in \mathfrak{t} \) is half integral if \( \lambda - \delta \) is analytically integral. That is to say, the half integral elements are those of the form \( \lambda = \delta + {\lambda }^{\prime } \), with \( {\lambda }^{\prime } \) being analytically integral. Proposition 12.30. If \( \lambda \) and \( \eta \) are half integral, then \( \lambda + \eta \) is analytically integral. If \( \lambda \) is half integral, then \( - \lambda \) is also half integral and \( w \cdot \lambda \) is half integral for all \( w \in W \) . Proof. If \( \lambda = \delta + {\lambda }^{\prime } \) and \( \eta = \delta + {\eta }^{\prime } \) are half integral, then \( \lambda + \eta = {2\delta } + {\lambda }^{\prime } + {\eta }^{\prime } \) is analytically integral. If \( \lambda = \delta + {\lambda }^{\prime } \) is half integral, so is \[ - \lambda = - \delta - {\lambda }^{\prime } = \delta - {2\delta } - {\lambda }^{\prime }. \] For each \( w \in W \), the set \( w \cdot {R}^{ + } \) will consist of a certain subset \( S \) of the positive roots, together with the negatives of the roots in \( {R}^{ + } \smallsetminus S \) . Thus, \( w \cdot \delta \) will consist of half the sum of the elements of \( S \) minus half the sum of the elements of \( {R}^{ + } \smallsetminus S \) . It follows that \[ \delta - w \cdot \delta = \mathop{\sum }\limits_{{\alpha \in {R}^{ + } \smallsetminus S}}\alpha \] showing that \( w \cdot \delta \) is again half integral. (Recall that each root is analytically integral, by Proposition 12.7.) More generally, if \( \lambda = \delta + {\lambda }^{\prime } \) is half integral, so is \( w \cdot \lambda = \) \( w \cdot \delta + w \cdot {\lambda }^{\prime } \) . Note that exponentials of the form \( {e}^{i\langle \lambda, H\rangle } \), with \( \lambda \) being half integral, do not descend to functions on \( T \) . Our next result says that, nevertheless, the product of two such exponentials (possibly conjugated) does descend to \( T \) . Furthermore, such exponentials are still "orthonormal on \( T \) ," as in Proposition 12.10 in the integral case. Proposition 12.31. If \( \lambda \) and \( \eta \) are half integral, there is a well-defined function \( f \) on \( T \) such that \[ f\left( {e}^{H}\right) = \overline{{e}^{i\langle \lambda, H\rangle }}{e}^{i\langle \eta, H\rangle } \] and \[ {\int }_{T}\overline{{e}^{i\langle \lambda, H\rangle }}{e}^{i\langle \eta, H\rangle }{dH} = {\delta }_{\lambda ,\eta } \] Proof. If \( \lambda \) and \( \eta \) are half integral, then \( - \lambda \) is half integral, so that \( \eta - \lambda \) is analytically integral. Thus, by Proposition 12.9, there is a well-defined function \( f : T \rightarrow \mathbb{C} \) satisfying \[ f\left( {e}^{H}\right) = \overline{{e}^{i\langle \lambda, H\rangle }}{e}^{i\langle \eta, H\rangle } = {e}^{i\langle \eta - \lambda, H\rangle }. \] Furthermore, if \( \lambda = \delta + {\lambda }^{\prime } \) and \( \eta = \delta + {\eta }^{\prime } \) are half integral, we have \[ {\int }_{T}\overline{{e}^{i\langle \lambda, H\rangle }}{e}^{i\langle \eta, H\rangle }{dH} = {\int }_{T}{e}^{-i\left\langle {\delta + {\lambda }^{\prime }, H}\right\rangle }{e}^{i\left\langle {\delta + {\eta }^{\prime }, H}\right\rangle }{dH} \] \[ = {\int }_{T}{e}^{-i\left\langle {{\lambda }^{\prime }, H}\right\rangle }{e}^{i\left\langle {{\eta }^{\prime }, H}\right\rangle }{dH} \] \[ = {\delta }_{{\lambda }^{\prime },{\eta }^{\prime }} \] by Proposition 12.10. 
Since \( \lambda = \eta \) if and only if \( {\lambda }^{\prime } = {\eta }^{\prime } \), we have the desired "orthonormality" result. We now discuss how the results of Sects. 11.6, 12.4, and 12.5 should be modified when \( \delta \) is not analytically integral. In the case of the Weyl integral formula (Theorem 11.30), the function \( q\left( H\right), H \in \mathfrak{t} \), does not descend to \( T \) when \( \delta \) is not integral. That is to say, there is no function \( Q\left( t\right) \) on \( T \) such that \( Q\left( {e}^{H}\right) = q\left( H\right) \) . Nevertheless, the function \( {\left| q\left( H\right) \right| }^{2} \) does descend to \( T \), since \( {\left| q\left( H\right) \right| }^{2} \) is a sum of products of half integral exponentials. The Weyl integral formula, with the same proof, then holds even if \( \delta \) is not analytically integral, provided that the expression \( {\left| Q\left( t\right) \right| }^{2} \) is interpreted as the function \( {e}^{H} \mapsto {\left| q\left( H\right) \right| }^{2} \) . We may then consider the case \( f \equiv 1 \) and use Proposition 12.31 to verify the correctness of the normalization in the Weyl integral formula. In the case of the Weyl character formula, we claim that the right-hand side of (12.14) descends to a function on \( T \) . To see this, note that we can pull a factor of \( {e}^{i\langle \delta, H\rangle } \) out of each exponential in the numerator and each exponential in the denominator. After canceling these factors, we are left with exponentials in both the numerator and denominator that descend to \( T \) . Meanwhile, in the proof of the character formula, although the function \( Q\left( t\right) {\chi }_{\Pi }\left( t\right) \) is not well defined on \( T \) , the function \( {\left| Q\left( t\right) {\chi }_{\Pi }\left( t\right) \right| }^{2} \) is well defined. The Weyl integral formula (interpreted as in the previous paragraph) tells us that the integral of \( {\left| Q\left( t\right) {\chi }_{\Pi }\left( t\right) \right| }^{2} \) over \( T \) is equal to \( \left| W\right| \), as in the case where \( \delta \) is analytically integral. If we then apply the orthonormality result in Proposition 12.31, we see that, just as in the integral case, the only exponentials present in the product \( q\left( H\right) {\chi }_{\Pi }\left( {e}^{H}\right) \) are those in the numerator of the character formula. Finally, we consider the proof that every dominant, analytically integral element is the highest weight of a representation. If \( \delta \) is not analytically integral, then neither the numerator nor the denominator on the right-hand side of (12.19) descends to a function on \( T \) . Nevertheless, the ratio of these functions does descend to \( T \), by the argument in the preceding paragraph. The argument that \( {\phi }_{\mu } \) extends to a continuous function on \( T \) then goes through without change. (This argument requires only that each weight \( \lambda = w \cdot \left( {\mu + \delta }\right) \) in \( {\psi }_{\mu } \) be algebraically integral, which holds even if \( \delta \) is not analytically integral, by Proposition 8.38.) Thus, we may apply the half-integral version of the Weyl integral formula to show that the functions \( {\Phi }_{\mu } \) on \( K \) are orthonormal, as \( \mu \) ranges over the set of dominant, analytically integral elements. The rest of the argument then proceeds without change. ## 12.7 Exercises 1. 
Let \( K = \mathrm{{SU}}\left( 2\right) \) and let \( \mathrm{t} \) be the diagonal subalgebra of \( \mathrm{{su}}\left( 2\right) \) . Prove directly that every algebraically integral element is analytically integral. Note: Since \( \mathrm{{SU}}\left( 2\right) \) is simply connected, this claim also follows from the general result in Corollary 13.20. 2. This exercise asks you to use the theory of Fourier series to give a direct proof of the completeness result for characters (Theorem 12.18), in the case \( K = \mathrm{{SU}}\left( 2\right) \) . To this end, suppose \( f \) is a continuous class function on \( \mathrm{{SU}}\left( 2\right) \) that \( f \) is orthogonal to the character of every representation. (a) Using the explicit form of the Weyl integral formula for \( \mathrm{{SU}}\left( 2\right) \) (Example 11.33) and the explicit form of the characters for \( \mathrm{{SU}}\left( 2\right) \) (Example 12.23), show that \[ {\int }_{-\pi }^{\pi }\overline{f\left( {\operatorname{diag}\left( {{e}^{i\theta },{e}^{-{i\theta }}}\right) }\right) \left( {\sin \theta }\right) }\sin \left( {\left( {m + 1}\right) \theta }\right) {d\theta } = 0 \] for every non-negative integer \( m \) . (b) Show that the function \( \theta \mapsto f\left( {\operatorname{diag}\left( {{e}^{i\theta },{e}^{-{i\theta }}}\right) }\right) \left( {\sin \theta }\right) \) is an odd function of \( \theta \) . (c) Using standard results from the theory of Fourier series, conclude that \( f \) must be identically zero. 3. Suppose \( \left( {\Pi, V}\right) \) and \( \left( {\sum, W}\right) \) are representations of a group \( G \), and let \( \operatorname{Hom}\left( {V, W}\right) \) denote the space of all linear maps from \( V \) to \( W \) . Let \( G \) act on \( \operatorname{Hom}\left( {V, W}\right) \) by \[ g \cdot A = \sum \left( g\right) {A\Pi }{\left( g\right) }^{-1}, \] (12.25) for all \( g \in G \) and \( A \in \operatorname{Hom}\left( {W, V}\right) \) . Show that \( A \) is an intertwining map of \( V \) to \( W \) if and only if \( g \cdot A = A \) for all \( g \in G \) . 4. If \( V \) and \( W \) are finite-dimensional vector spaces, let \( \Phi : {V}^{ * } \otimes W \rightarrow \operatorname{Hom}\left( {V, W}\right) \) be the unique linear map such that for all \( \xi \in {V}^{ * } \) and \( w \in W \), we have \[ \Phi \left( {\xi \otimes w}\right) \left( v\right) = \xi \left( v\right) w,\;v \in V. \] (a) Show that \( \Phi \) is an isomorphism. (b) Let \( \left( {\Pi, V}\right) \) and \( \left( {\sum, W}\right) \) be representations of a group \( G \), let \( G \) act on \( {V}^{ * } \) as in Sect. 4.3.3, and let \( G \) act on \( \operatorname{Hom}\left( {V, W}\right) \) as in (12.25). Show that the map \( \Phi : {V}^{ * } \otimes W \rightarrow \operatorname{Hom}\left( {V, W}\right) \) in Part (a) is an intertwining map. 5. Suppose \( f\left( x\right) \mathrel{\text{:=}} \operatorname{trac
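Exercise 2 is pleasant to check numerically before proving it. The sketch below (NumPy assumed) uses the standard explicit forms referred to there, namely the character \( {\chi }_{m}\left( \theta \right) = \sin \left( {\left( {m + 1}\right) \theta }\right) /\sin \theta \) of the \( \left( {m + 1}\right) \) -dimensional irreducible of \( \mathrm{{SU}}\left( 2\right) \) and the Weyl measure \( \frac{2}{\pi }{\sin }^{2}\theta \, d\theta \) on \( \left\lbrack {0,\pi }\right\rbrack \) for class functions; the normalization is our reading of Example 11.33. It verifies the orthonormality of the irreducible characters, which is exactly the relation that drives the completeness argument.

```python
import numpy as np

N = 200000
th = (np.arange(N) + 0.5) * np.pi / N           # midpoint grid on (0, pi)
dth = np.pi / N
weight = (2 / np.pi) * np.sin(th) ** 2           # Weyl measure for class functions on SU(2)

def char(m):
    # character of the (m+1)-dimensional irreducible on diag(e^{i th}, e^{-i th})
    return np.sin((m + 1) * th) / np.sin(th)

def inner(m, n):
    return np.sum(char(m) * char(n) * weight) * dth

print(round(inner(3, 3), 6), round(inner(3, 5), 6), round(inner(2, 4), 6))
# 1.0 0.0 0.0: the irreducible characters are orthonormal for the Weyl measure
```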
1098_(GTM254)Algebraic Function Fields and Codes
Definition 1.4.1
Definition 1.4.1. The divisor group of \( F/K \) is defined as the (additively written) free abelian group which is generated by the places of \( F/K \) ; it is denoted by \( \operatorname{Div}\left( F\right) \) . The elements of \( \operatorname{Div}\left( F\right) \) are called divisors of \( F/K \) . In other words, a divisor is a formal sum \[ D = \mathop{\sum }\limits_{{P \in {\mathbb{P}}_{F}}}{n}_{P}P\;\text{ with }\;{n}_{P} \in \mathbb{Z},\;\text{ almost all }\;{n}_{P} = 0. \] The support of \( D \) is defined as \[ \operatorname{supp}D \mathrel{\text{:=}} \left\{ {P \in {\mathbb{P}}_{F} \mid {n}_{P} \neq 0}\right\} . \] It will often be found convenient to write \[ D = \mathop{\sum }\limits_{{P \in S}}{n}_{P}P \] where \( S \subseteq {\mathbb{P}}_{F} \) is a finite set with \( S \supseteq \operatorname{supp}D \) . \( A \) divisor of the form \( D = P \) with \( P \in {\mathbb{P}}_{F} \) is called a prime divisor. Two divisors \( D = \sum {n}_{P}P \) and \( {D}^{\prime } = \sum {n}_{P}^{\prime }P \) are added coefficientwise, \[ D + {D}^{\prime } = \mathop{\sum }\limits_{{P \in {\mathbb{P}}_{F}}}\left( {{n}_{P} + {n}_{P}^{\prime }}\right) P. \] The zero element of the divisor group \( \operatorname{Div}\left( F\right) \) is the divisor \[ 0 \mathrel{\text{:=}} \mathop{\sum }\limits_{{P \in {\mathbb{{IP}}}_{F}}}{r}_{P}P,\text{ all }{r}_{P} = 0. \] For \( Q \in {\mathbb{P}}_{F} \) and \( D = \sum {n}_{P}P \in \operatorname{Div}\left( F\right) \) we define \( {v}_{Q}\left( D\right) \mathrel{\text{:=}} {n}_{Q} \), therefore \[ \operatorname{supp}D = \left\{ {P \in {\mathbb{P}}_{F} \mid {v}_{P}\left( D\right) \neq 0}\right\} \text{ and }D = \mathop{\sum }\limits_{{P \in \operatorname{supp}D}}{v}_{P}\left( D\right) \cdot P. \] \( A \) partial ordering on \( \operatorname{Div}\left( F\right) \) is defined by \[ {D}_{1} \leq {D}_{2} : \Leftrightarrow {v}_{P}\left( {D}_{1}\right) \leq {v}_{P}\left( {D}_{2}\right) \text{ for all }P \in {\mathbb{P}}_{F}. \] If \( {D}_{1} \leq {D}_{2} \) and \( {D}_{1} \neq {D}_{2} \) we will also write \( {D}_{1} < {D}_{2} \) . A divisor \( D \geq 0 \) is called positive (or effective). The degree of a divisor is defined as \[ \deg D \mathrel{\text{:=}} \mathop{\sum }\limits_{{P \in {\mathbb{P}}_{F}}}{v}_{P}\left( D\right) \cdot \deg P \] and this yields a homomorphism \( \deg : \operatorname{Div}\left( F\right) \rightarrow \mathbb{Z} \) . By Corollary 1.3.4 a nonzero element \( x \in F \) has only finitely many zeros and poles in \( {\mathbb{P}}_{F} \) . Thus the following definition makes sense. Definition 1.4.2. Let \( 0 \neq x \in F \) and denote by \( Z \) (resp. \( N \) ) the set of zeros (resp. poles) of \( x \) in \( {\mathbb{P}}_{F} \) . Then we define \[ {\left( x\right) }_{0} \mathrel{\text{:=}} \mathop{\sum }\limits_{{P \in Z}}{v}_{P}\left( x\right) P\text{, the zero divisor of}x\text{,} \] \[ {\left( x\right) }_{\infty } \mathrel{\text{:=}} \mathop{\sum }\limits_{{P \in N}}\left( {-{v}_{P}\left( x\right) }\right) P\text{, the pole divisor of}x\text{,} \] \[ \left( x\right) \mathrel{\text{:=}} {\left( x\right) }_{0} - {\left( x\right) }_{\infty }\text{, the principal divisor of}x\text{.} \] Clearly \( {\left( x\right) }_{0} \geq 0,{\left( x\right) }_{\infty } \geq 0 \) and \[ \left( x\right) = \mathop{\sum }\limits_{{P \in {\mathbb{P}}_{F}}}{v}_{P}\left( x\right) P \] (1.17) The elements \( 0 \neq x \in F \) which are constant are characterized by \[ x \in K \Leftrightarrow \left( x\right) = 0. 
\] This follows immediately from Corollary 1.1.20 (note the general assumption made previously that \( K \) is algebraically closed in \( F \) ). Definition 1.4.3. The set of divisors \[ \operatorname{Princ}\left( F\right) \mathrel{\text{:=}} \{ \left( x\right) \mid 0 \neq x \in F\} \] is called the group of principal divisors of \( F/K \) . This is a subgroup of \( \operatorname{Div}\left( F\right) \) , since for \( 0 \neq x, y \in F,\left( {xy}\right) = \left( x\right) + \left( y\right) \) by (1.17). The factor group \[ \mathrm{{Cl}}\left( F\right) \mathrel{\text{:=}} \operatorname{Div}\left( F\right) /\operatorname{Princ}\left( F\right) \] is called the divisor class group of \( F/K \) . For a divisor \( D \in \operatorname{Div}\left( F\right) \), the corresponding element in the factor group \( \mathrm{{Cl}}\left( F\right) \) is denoted by \( \left\lbrack D\right\rbrack \), the divisor class of \( D \) . Two divisors \( D,{D}^{\prime } \in \operatorname{Div}\left( F\right) \) are said to be equivalent, written \[ D \sim {D}^{\prime } \] if \( \left\lbrack D\right\rbrack = \left\lbrack {D}^{\prime }\right\rbrack \) ; i.e., \( D = {D}^{\prime } + \left( x\right) \) for some \( x \in F \smallsetminus \{ 0\} \) . This is easily verified to be an equivalence relation. Our next definition plays a fundamental role in the theory of algebraic function fields. Definition 1.4.4. For a divisor \( A \in \operatorname{Div}\left( F\right) \) we define the Riemann-Roch space associated to \( A \) by \[ \mathcal{L}\left( A\right) \mathrel{\text{:=}} \{ x \in F \mid \left( x\right) \geq - A\} \cup \{ 0\} . \] This definition has the following interpretation: if \[ A = \mathop{\sum }\limits_{{i = 1}}^{r}{n}_{i}{P}_{i} - \mathop{\sum }\limits_{{j = 1}}^{s}{m}_{j}{Q}_{j} \] with \( {n}_{i} > 0,{m}_{j} > 0 \) then \( \mathcal{L}\left( A\right) \) consists of all elements \( x \in F \) such that - \( x \) has zeros of order \( \geq {m}_{j} \) at \( {Q}_{j} \), for \( j = 1,\ldots, s \), and - \( x \) may have poles only at the places \( {P}_{1},\ldots ,{P}_{r} \), with the pole order at \( {P}_{i} \) being bounded by \( {n}_{i}\left( {i = 1,\ldots, r}\right) \) . Remark 1.4.5. Let \( A \in \operatorname{Div}\left( F\right) \) . Then (a) \( x \in \mathcal{L}\left( A\right) \) if and only if \( {v}_{P}\left( x\right) \geq - {v}_{P}\left( A\right) \) for all \( P \in {\mathbb{P}}_{F} \) . (b) \( \mathcal{L}\left( A\right) \neq \{ 0\} \) if and only if there is a divisor \( {A}^{\prime } \sim A \) with \( {A}^{\prime } \geq 0 \) . The proof of these remarks is trivial; nevertheless they are often very useful. In particular Remark 1.4.5(b) will be used frequently. Lemma 1.4.6. Let \( A \in \operatorname{Div}\left( F\right) \) . Then we have: (a) \( \mathcal{L}\left( A\right) \) is a vector space over \( K \) . (b) If \( {A}^{\prime } \) is a divisor equivalent to \( A \), then \( \mathcal{L}\left( A\right) \simeq \mathcal{L}\left( {A}^{\prime }\right) \) (isomorphic as vector spaces over \( K \) ). Proof. (a) Let \( x, y \in \mathcal{L}\left( A\right) \) and \( a \in K \) . Then for all \( P \in {\mathbb{P}}_{F},{v}_{P}\left( {x + y}\right) \geq \) \( \min \left\{ {{v}_{P}\left( x\right) ,{v}_{P}\left( y\right) }\right\} \geq - {v}_{P}\left( A\right) \) and \( {v}_{P}\left( {ax}\right) = {v}_{P}\left( a\right) + {v}_{P}\left( x\right) \geq - {v}_{P}\left( A\right) \) . So \( x + y \) and \( {ax} \) are in \( \mathcal{L}\left( A\right) \) by Remark 1.4.5(a). 
(b) By assumption, \( A = {A}^{\prime } + \left( z\right) \) with \( 0 \neq z \in F \) . Consider the mapping \[ \varphi : \left\{ \begin{matrix} \mathcal{L}\left( A\right) & \rightarrow & F, \\ x & \mapsto & {xz}. \end{matrix}\right. \] This is a \( K \) -linear mapping whose image is contained in \( \mathcal{L}\left( {A}^{\prime }\right) \) . In the same manner, \[ {\varphi }^{\prime } : \left\{ \begin{matrix} \mathcal{L}\left( {A}^{\prime }\right) & \rightarrow & F, \\ x & \mapsto & x{z}^{-1} \end{matrix}\right. \] is \( K \) -linear from \( \mathcal{L}\left( {A}^{\prime }\right) \) to \( \mathcal{L}\left( A\right) \) . These mappings are inverse to each other, hence \( \varphi \) is an isomorphism between \( \mathcal{L}\left( A\right) \) and \( \mathcal{L}\left( {A}^{\prime }\right) \) . Lemma 1.4.7. (a) \( \mathcal{L}\left( 0\right) = K \) . (b) If \( A < 0 \) then \( \mathcal{L}\left( A\right) = \{ 0\} \) . Proof. (a) We have \( \left( x\right) = 0 \) for \( 0 \neq x \in K \), therefore \( K \subseteq \mathcal{L}\left( 0\right) \) . Conversely, if \( 0 \neq x \in \mathcal{L}\left( 0\right) \) then \( \left( x\right) \geq 0 \) . This means that \( x \) has no pole, so \( x \in K \) by Corollary 1.1.20. (b) Assume there exists an element \( 0 \neq x \in \mathcal{L}\left( A\right) \) . Then \( \left( x\right) \geq - A > 0 \) , which implies that \( x \) has at least one zero but no pole. This is impossible. In the sequel we shall consider various \( K \) -vector spaces. The dimension of such a vector space \( V \) will be denoted by \( \dim V \) . Our next objective is to show that \( \mathcal{L}\left( A\right) \) is finite-dimensional for each divisor \( A \in \operatorname{Div}\left( F\right) \) . Lemma 1.4.8. Let \( A, B \) be divisors of \( F/K \) with \( A \leq B \) . Then we have \( \mathcal{L}\left( A\right) \subseteq \mathcal{L}\left( B\right) \) and \[ \dim \left( {\mathcal{L}\left( B\right) /\mathcal{L}\left( A\right) }\right) \leq \deg B - \deg A. \] Proof. \( \mathcal{L}\left( A\right) \subseteq \mathcal{L}\left( B\right) \) is trivial. In order to prove the other assertion we can assume that \( B = A + P \) for some \( P \in {\mathbb{P}}_{F} \) ; the general case follows then by induction. Choose an element \( t \in F \) with \( {v}_{P}\left( t\right) = {v}_{P}\left( B\right) = {v}_{P}\left( A\right) + 1 \) . For \( x \in \mathcal{L}\left( B\right) \) we have \( {v}_{P}\left( x\right) \geq - {v}_{P}\left( B\right) = - {v}_{P}\left( t\right) \), so \( {xt} \in {\mathcal{O}}_{P} \) . Thus we obtain a \( K \) -linear map \[ \psi : \left\{ \begin{matrix} \mathcal{L}\left( B\right) & \rightarrow & {F}_{P}, \\ x & \mapsto & \left( {xt}\right) \left( P\right) . \end{matrix}\right. \] An element \( x \) is in the kernel of \( \psi \) if and only if \( {v}_{P}\left( {xt}\right) > 0 \) ; i.e., \( {v}_{P}\left( x\right) \geq \) \( - {v}_{P}\left( A\right) \) . Consequently \(
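The divisor calculus of Definition 1.4.1 is purely combinatorial: a divisor is a finitely supported integer-valued function on the places, and addition, support, degree, and the partial ordering are all computed coefficientwise. The following minimal Python sketch (not from the text) models places as labelled objects carrying an assumed degree and implements exactly those operations; the sample places `P`, `Q` and their degrees are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Place:
    """Stand-in for a place P of F/K; 'deg' plays the role of deg P."""
    name: str
    deg: int = 1

class Divisor:
    """A divisor D = sum n_P * P: a finitely supported map from places to Z."""
    def __init__(self, coeffs=None):
        # keep only nonzero coefficients, so that supp D = set(self.coeffs)
        self.coeffs = {P: n for P, n in (coeffs or {}).items() if n != 0}

    def v(self, P):                       # v_Q(D) := n_Q
        return self.coeffs.get(P, 0)

    @property
    def support(self):                    # supp D = {P : v_P(D) != 0}
        return set(self.coeffs)

    def degree(self):                     # deg D = sum v_P(D) * deg P
        return sum(n * P.deg for P, n in self.coeffs.items())

    def __add__(self, other):             # coefficientwise addition
        places = self.support | other.support
        return Divisor({P: self.v(P) + other.v(P) for P in places})

    def __le__(self, other):              # D1 <= D2 iff v_P(D1) <= v_P(D2) for all P
        return all(self.v(P) <= other.v(P) for P in self.support | other.support)

# tiny usage example with two hypothetical places of degree 1 and 2
P, Q = Place("P", 1), Place("Q", 2)
D1, D2 = Divisor({P: 2, Q: -1}), Divisor({P: 1})
assert (D1 + D2).degree() == 3 * 1 + (-1) * 2     # = 1
assert D2 <= Divisor({P: 1, Q: 3}) and not (D1 <= D2)
assert (D1 + D2).support == {P, Q}
```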
1048_(GTM209)A Short Course on Spectral Theory
Definition 4.7.2
Definition 4.7.2. Let \( \rho \) be a positive linear functional on a Banach *-algebra \( A \) . By a GNS pair for \( \rho \) we mean a pair \( \left( {\pi ,\xi }\right) \) consisting of a representation \( \pi \) of \( A \) on a Hilbert space \( H \) and a vector \( \xi \in H \) such that (1) (Cyclicity) \( \overline{\pi \left( A\right) \xi } = H \), and (2) \( \rho \left( x\right) = \langle \pi \left( x\right) \xi ,\xi \rangle \), for every \( x \in A \) . Two GNS pairs \( \left( {\pi ,\xi }\right) \) and \( \left( {{\pi }^{\prime },{\xi }^{\prime }}\right) \) are said to be equivalent if there is a unitary operator \( W : H \rightarrow {H}^{\prime } \) such that \( {W\xi } = {\xi }^{\prime } \) and \( {W\pi }\left( x\right) = {\pi }^{\prime }\left( x\right) W \) , \( x \in A \) . THEOREM 4.7.3. Every positive linear functional \( \rho \) on a unital Banach \( * \) - algebra \( A \) has a GNS pair \( \left( {\pi ,\xi }\right) \), and any two GNS pairs for \( \rho \) are equivalent. Proof. Consider the set \[ N = \left\{ {a \in A : \rho \left( {{a}^{ * }a}\right) = 0}\right\} . \] With fixed \( a \in A \), the Schwarz inequality (4.19) implies that for every \( x \in A \) we have \( {\left| \rho \left( {x}^{ * }a\right) \right| }^{2} \leq \rho \left( {{a}^{ * }a}\right) \rho \left( {{x}^{ * }x}\right) \), from which it follows that \( \rho \left( {{a}^{ * }a}\right) = \) \( 0 \Leftrightarrow \rho \left( {{x}^{ * }a}\right) = 0 \) for every \( x \in A \) . Thus \( N \) is a left ideal: a linear subspace of \( A \) such that \( A \cdot N \subseteq N \) . The sesquilinear form \( x, y \in A \mapsto \rho \left( {{y}^{ * }x}\right) \) promotes naturally to a sesquilinear form \( \langle \cdot , \cdot \rangle \) on the quotient space \( A/N \) via \[ \langle x + N, y + N\rangle = \rho \left( {{y}^{ * }x}\right) ,\;x, y \in A, \] and for every \( x \) we have \[ \langle x + N, x + N\rangle = \rho \left( {{x}^{ * }x}\right) = 0 \Rightarrow x + N = 0. \] Hence \( A/N \) becomes an inner product space. Its completion is a Hilbert space \( H \), and there is a natural vector \( \xi \in H \) defined by \[ \xi = 1 + N \] It remains to define \( \pi \in \operatorname{rep}\left( {A, H}\right) \), and this is done as follows. Since \( N \) is a left ideal, for every fixed \( a \in A \) there is a linear operator \( \pi \left( a\right) \) defined on \( A/N \) by \( \pi \left( a\right) \left( {x + N}\right) = {ax} + N, x \in A \) . Note first that (4.20) \[ \langle \pi \left( a\right) \eta ,\zeta \rangle = \left\langle {\eta ,\pi \left( {a}^{ * }\right) \zeta }\right\rangle \] for every pair of elements \( \eta = y + N,\zeta = z + N \in A/N \) . Indeed, the left side of (4.20) is \( \rho \left( {{z}^{ * }{ay}}\right) \), while the right side is \( \rho \left( {{\left( {a}^{ * }z\right) }^{ * }y}\right) = \rho \left( {{z}^{ * }{ay}}\right) \), as asserted. We claim next that for every \( a \in A,\parallel \pi \left( a\right) \parallel \leq \parallel a\parallel \), where \( \pi \left( a\right) \) is viewed as an operator on the inner product space \( A/N \) . 
Indeed, if \( \parallel a\parallel \leq 1 \), then for every \( x \in A \) we have \[ \langle \pi \left( a\right) \left( {x + N}\right) ,\pi \left( a\right) \left( {x + N}\right) \rangle = \langle {ax} + N,{ax} + N\rangle = \rho \left( {{\left( ax\right) }^{ * }{ax}}\right) \] (4.21) \[ = \rho \left( {{x}^{ * }{a}^{ * }{ax}}\right) \text{.} \] Since \( {a}^{ * }a \) is a self-adjoint element in the unit ball of \( A \), we can find a self-adjoint square root \( y \) of \( 1 - {a}^{ * }a \) (see Exercise (2b)). It follows that \( {x}^{ * }x - {x}^{ * }{a}^{ * }{ax} = {x}^{ * }\left( {1 - {a}^{ * }a}\right) x = {x}^{ * }{y}^{2}x = {\left( yx\right) }^{ * }{yx} \) ; hence \[ \rho \left( {{x}^{ * }x - {x}^{ * }{a}^{ * }{ax}}\right) = \rho \left( {{\left( yx\right) }^{ * }{yx}}\right) \geq 0, \] from which we conclude that \( \rho \left( {{x}^{ * }{a}^{ * }{ax}}\right) \leq \rho \left( {{x}^{ * }x}\right) \) . This provides an upper bound for the right side of (4.21), and we obtain \[ \langle \pi \left( a\right) \left( {x + N}\right) ,\pi \left( a\right) \left( {x + N}\right) \rangle \leq \rho \left( {{x}^{ * }x}\right) = \langle x + N, x + N\rangle . \] It follows that \( \parallel \pi \left( a\right) \parallel \leq 1 \) when \( \parallel a\parallel \leq 1 \), and the claim is proved. Thus, for each \( a \in A \) we may extend \( \pi \left( a\right) \) uniquely to a bounded operator on the completion \( H \) by taking the closure of its graph; and we denote the closure \( \pi \left( a\right) \in \mathcal{B}\left( H\right) \) with the same notation. Note that (4.20) implies that \( \langle \pi \left( a\right) \eta ,\zeta \rangle = \left\langle {\eta ,\pi \left( {a}^{ * }\right) \zeta }\right\rangle \) for all \( \eta ,\zeta \in H \), and from this we deduce that \( \pi {\left( a\right) }^{ * } = \pi \left( {a}^{ * }\right), a \in A \) . It is clear from the definition of \( \pi \) that \( \pi \left( {ab}\right) = \) \( \pi \left( a\right) \pi \left( b\right) \) for \( a, b \in A \) ; hence \( \pi \in \operatorname{rep}\left( {A, H}\right) \) . Finally, note that \( \left( {\pi ,\xi }\right) \) is a GNS pair for \( \rho \) . Indeed, \[ \pi \left( A\right) \xi = \pi \left( A\right) \left( {\mathbf{1} + N}\right) = \{ a + N : a \in A\} \] is obviously dense in \( H \), and \[ \langle \pi \left( a\right) \xi ,\xi \rangle = \langle a + N,\mathbf{1} + N\rangle = \rho \left( {{\mathbf{1}}^{ * }a}\right) = \rho \left( a\right) . \] For the uniqueness assertion, let \( \left( {{\pi }^{\prime },{\xi }^{\prime }}\right) \) be another GNS pair for \( \rho \) , \( {\pi }^{\prime } \in \operatorname{rep}\left( {A,{H}^{\prime }}\right) \) . Notice that there is a unique linear isometry \( {W}_{0} \) from the dense subspace \( \pi \left( A\right) \xi \) onto \( {\pi }^{\prime }\left( A\right) {\xi }^{\prime } \) defined by \( {W}_{0} : \pi \left( a\right) \xi \mapsto {\pi }^{\prime }\left( a\right) {\xi }^{\prime } \), simply because for all \( a \in A \) , \[ \langle \pi \left( a\right) \xi ,\pi \left( a\right) \xi \rangle = \left\langle {\pi \left( {{a}^{ * }a}\right) \xi ,\xi }\right\rangle = \rho \left( {{a}^{ * }a}\right) = \left\langle {{\pi }^{\prime }\left( a\right) {\xi }^{\prime },{\pi }^{\prime }\left( a\right) {\xi }^{\prime }}\right\rangle . 
\] The isometry \( {W}_{0} \) extends uniquely to a unitary operator \( W : H \rightarrow {H}^{\prime } \), and one verifies readily that \( {W\xi } = {\xi }^{\prime } \), and that \( {W\pi }\left( a\right) = {\pi }^{\prime }\left( a\right) W \) on the dense set of vectors \( \pi \left( A\right) \xi \subseteq H \) . It follows that \( \left( {\pi ,\xi }\right) \) and \( \left( {{\pi }^{\prime },{\xi }^{\prime }}\right) \) are equivalent. REMARK 4.7.4. Many important Banach *-algebras do not have units. For example, the group algebras \( {L}^{1}\left( G\right) \) of locally compact groups fail to have units except when \( G \) is discrete. \( {C}^{ * } \) -algebras such as \( \mathcal{K} \) do not have units. But the most important examples of Banach \( * \) -algebras have "approximate units," and it is significant that there is an appropriate generalization of the GNS construction (Theorem 4.7.3) that applies to Banach \( * \) -algebras containing an approximate unit [10], [2]. Exercises. (1) (a) Fix \( \alpha \) in the interval \( 0 < \alpha < 1 \) . Show that the binomial series of \( {\left( 1 - z\right) }^{\alpha } \) has the form \[ {\left( 1 - z\right) }^{\alpha } = 1 - \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n}{z}^{n} \] where \( {c}_{n} > 0 \) for \( n = 1,2,\ldots \) . (b) Deduce that \[ \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n} = 1 \] (2) (a) Let \( A \) be a Banach algebra with normalized unit, and let \( {c}_{1},{c}_{2},\ldots \) be the binomial coefficients of the preceding exercise for the parameter value \( \alpha = \frac{1}{2} \) . Show that for every element \( x \in A \) satisfying \( \parallel x\parallel \leq 1 \), the series \[ 1 - \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n}{x}^{n} \] converges absolutely to an element \( y \in A \) satisfying \[ {y}^{2} = 1 - x \] (b) Suppose in addition that \( A \) is a Banach \( * \) -algebra. Deduce that for every self-adjoint element \( x \) in the unit ball of \( A,1 - x \) has a self-adjoint square root in \( A \) . In the remaining exercises, \( \Delta = \{ z \in \mathbb{C} : \left| z\right| \leq 1\} \) denotes the closed unit disk and \( A \) denotes the disk algebra, consisting of all functions \( f \in C\left( \Delta \right) \) that are analytic on the interior of \( \Delta \) . (3) (a) Show that the map \( f \mapsto {f}^{ * } \) defined by \[ {f}^{ * }\left( z\right) = \overline{f\left( \bar{z}\right) },\;z \in \Delta , \] makes \( A \) into a Banach \( * \) -algebra. (b) For each \( z \in \Delta \), let \( {\omega }_{z}\left( f\right) = f\left( z\right), f \in A \) . Show that \( {\omega }_{z} \) is a positive linear functional if and only if \( z \in \left\lbrack {-1,1}\right\rbrack \) is real. (4) Let \( \rho \) be the linear functional defined on \( A \) by \[ \rho \left( f\right) = {\int }_{0}^{1}f\left( x\right) {dx} \] (a) Show that \( \rho \) is a state. (b) Calculate a GNS pair \( \left( {\pi ,\xi }\right) \) for \( \rho \) in concrete terms as follows. Consider the Hilbert space \( {L}^{2}\left\lbrack {0,1}\right\rbrack \), and let \( \xi \in {L}^{2}\left\lbrack {0,1}\right\rbrack \) be the constant function \( \xi \left( t\right) = 1, t \in \left\lbrack {0,1}\right\rbrack \) . Exhibit a representation \( \pi \) of \( A \) on \( {L}^{2}\left\lbrack {0,1}\right\rbrack \) such that \( \left( {\pi ,\xi }\right) \) becomes a GNS pair for \( \rho \) . (c) Show that \( \pi \) is faithful; that is, for \( f \in A \) we have \[ \pi \left( f\right) = 0 \Rightarrow f = 0. \] (d) Show that the closure of
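The GNS construction of Theorem 4.7.3 can be carried out explicitly in a toy finite-dimensional case. The sketch below is an illustration, not the book's construction: it takes \( A = {\mathbb{C}}^{4} \) with pointwise multiplication and complex conjugation as the involution, and \( \rho \left( f\right) = \sum {w}_{i}{f}_{i} \) with nonnegative weights, one of them zero so that the null space \( N \) is nontrivial. The quotient \( A/N \) is represented by weighted coordinate vectors, \( \pi \left( f\right) \) acts by multiplication, and the two defining properties of a GNS pair are checked numerically. The weight vector and all variable names are assumptions of the example.

```python
import numpy as np

# A toy GNS construction for the commutative *-algebra A = C^4 (pointwise
# product, * = complex conjugation) with the positive functional
# rho(f) = sum_i w[i] * f[i], w[i] >= 0.  Illustrative data only.

rng = np.random.default_rng(0)
w = np.array([0.5, 0.3, 0.0, 0.2])          # weights; one vanishes, so N != {0}
S = np.flatnonzero(w)                        # N = {f : f = 0 on S}; H ~ C^{|S|}

def rho(f):
    return np.dot(w, f)

# Represent the class f + N by the vector (sqrt(w_i) f_i)_{i in S}; then the
# GNS inner product <f+N, g+N> = rho(conj(g) f) becomes the standard one.
def embed(f):
    return np.sqrt(w[S]) * f[S]

def pi(f):                                   # pi(f) acts by multiplication
    return np.diag(f[S])

xi = embed(np.ones_like(w))                  # xi = 1 + N

# Property (2) of Definition 4.7.2, checked on random elements of A:
for _ in range(5):
    f = rng.normal(size=4) + 1j * rng.normal(size=4)
    assert np.allclose(np.vdot(xi, pi(f) @ xi), rho(f))   # <pi(f) xi, xi> = rho(f)

# Cyclicity: {pi(f) xi : f in A} spans H, since pi(e_i) xi for the coordinate
# functions e_i are the weighted standard basis vectors of C^{|S|}.
basis = np.array([pi(np.eye(4)[i]) @ xi for i in range(4)])
assert np.linalg.matrix_rank(basis) == len(S)
```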
1112_(GTM267)Quantum Theory for Mathematicians
Definition 2.14
Definition 2.14 For a system of \( N \) particles moving in \( {\mathbb{R}}^{n} \), the center of mass of the system at a fixed time is the vector \( \mathbf{c} \in {\mathbb{R}}^{n} \) given by \[ \mathbf{c} = \mathop{\sum }\limits_{{j = 1}}^{N}\frac{{m}_{j}}{M}{\mathbf{x}}^{j} \] where \( M = \mathop{\sum }\limits_{{j = 1}}^{N}{m}_{j} \) is the total mass of the system. The center of mass is a weighted average of the positions of the various particles. Differentiating \( \mathbf{c}\left( t\right) \) with respect to \( t \) gives \[ \frac{d\mathbf{c}}{dt} = \frac{1}{M}\mathop{\sum }\limits_{{j = 1}}^{N}{m}_{j}{\dot{\mathbf{x}}}^{j} = \frac{\mathbf{p}}{M} \] (2.15) where \( \mathbf{p} \) is the total momentum. Proposition 2.15 Suppose the total momentum \( \mathbf{p} \) of a system is conserved. Then the center of mass moves in a straight line at constant speed. Specifically, \[ \mathbf{c}\left( t\right) = \mathbf{c}\left( {t}_{0}\right) + \left( {t - {t}_{0}}\right) \frac{\mathbf{p}}{M} \] where \( \mathbf{c}\left( {t}_{0}\right) \) is the center of mass at some initial time \( {t}_{0} \) . Proof. The result follows easily from (2.15). ∎ The notion of center of mass is particularly useful in a system of two particles in which momentum is conserved. For a system of two particles, if the potential energy \( V\left( {{\mathbf{x}}^{1},{\mathbf{x}}^{2}}\right) \) is invariant under simultaneous translations of \( {\mathbf{x}}^{1} \) and \( {\mathbf{x}}^{2} \), then it is of the form \[ V\left( {{\mathbf{x}}^{1},{\mathbf{x}}^{2}}\right) = \widetilde{V}\left( {{\mathbf{x}}^{1} - {\mathbf{x}}^{2}}\right) \] where \( \widetilde{V}\left( \mathbf{a}\right) = V\left( {\mathbf{a},0}\right) \) . Now, the positions \( {\mathbf{x}}^{1},{\mathbf{x}}^{2} \) of the particles can be recovered from knowledge of the center of mass and the relative position \[ \mathbf{y} \mathrel{\text{:=}} {\mathbf{x}}^{1} - {\mathbf{x}}^{2} \] as follows: \[ {\mathbf{x}}^{1} = \frac{\mathbf{c} + {m}_{2}\mathbf{y}}{{m}_{1} + {m}_{2}} \] \[ {\mathbf{x}}^{2} = \frac{\mathbf{c} - {m}_{1}\mathbf{y}}{{m}_{1} + {m}_{2}} \] Meanwhile, we may compute that \[ \ddot{\mathbf{y}}\left( t\right) = {\ddot{\mathbf{x}}}^{1} - {\ddot{\mathbf{x}}}^{2} = - \frac{1}{{m}_{1}}\nabla \widetilde{V}\left( {{\mathbf{x}}^{1} - {\mathbf{x}}^{2}}\right) - \frac{1}{{m}_{2}}\nabla \widetilde{V}\left( {{\mathbf{x}}^{1} - {\mathbf{x}}^{2}}\right) . \] This calculation gives the following result. Proposition 2.16 For a two-particle system with potential energy of the form \( V\left( {{\mathbf{x}}^{1},{\mathbf{x}}^{2}}\right) = \widetilde{V}\left( {{\mathbf{x}}^{1} - {\mathbf{x}}^{2}}\right) \), the relative position \( \mathbf{y} \mathrel{\text{:=}} {\mathbf{x}}^{1} - {\mathbf{x}}^{2} \) satisfies the differential equation \[ \mu \ddot{\mathbf{y}} = - \nabla \widetilde{V}\left( \mathbf{y}\right) \] where \( \mu \) is the reduced mass given by \[ \mu = \frac{1}{\frac{1}{{m}_{1}} + \frac{1}{{m}_{2}}} = \frac{{m}_{1}{m}_{2}}{{m}_{1} + {m}_{2}}. \] Thus, when the total momentum of a two-particle system is conserved, the relative position evolves as a one-particle system with "effective" mass \( \mu \) , while the center of mass moves "trivially," as described in Proposition 2.15. ![c8334438-d654-407c-88c9-c41bf21128ed_47_0.jpg](images/c8334438-d654-407c-88c9-c41bf21128ed_47_0.jpg) FIGURE 2.1. \( A\left( t\right) \) is the area of the shaded region. 
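Propositions 2.15 and 2.16 can be checked numerically for a concrete two-particle system. The sketch below uses illustrative parameters (not from the text): a translation-invariant spring potential \( \widetilde{V}\left( \mathbf{y}\right) = \frac{1}{2}k{\left| \mathbf{y}\right| }^{2} \) and a semi-implicit Euler integrator. The total momentum is conserved exactly by the scheme, the center of mass moves on a straight line, and the relative acceleration satisfies \( \mu \ddot{\mathbf{y}} = - k\mathbf{y} \).

```python
import numpy as np

# Numerical illustration of Propositions 2.15/2.16 for two particles in R^2 with
# the translation-invariant potential V(x1, x2) = 0.5*k*|x1 - x2|^2 (a spring).
# Integrator and parameter values are illustrative choices, not from the text.

m1, m2, k, dt, steps = 1.0, 3.0, 2.0, 1e-3, 5000
M, mu = m1 + m2, m1 * m2 / (m1 + m2)           # total and reduced mass

x1, x2 = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
v1, v2 = np.array([0.0, 0.3]), np.array([0.2, -0.1])
c0 = (m1 * x1 + m2 * x2) / M                   # initial center of mass
p = m1 * v1 + m2 * v2                          # total momentum

# Proposition 2.16: mu * y'' = -grad V~(y) with y = x1 - x2; here -k*y.
F = -k * (x1 - x2)                             # force on particle 1
assert np.allclose(mu * (F / m1 - (-F) / m2), -k * (x1 - x2))

# Proposition 2.15: integrate and compare c(T) with c(0) + T*p/M.
for _ in range(steps):
    F = -k * (x1 - x2)
    v1, v2 = v1 + dt * F / m1, v2 - dt * F / m2   # momentum conserved exactly
    x1, x2 = x1 + dt * v1, x2 + dt * v2

assert np.allclose(m1 * v1 + m2 * v2, p)
assert np.allclose((m1 * x1 + m2 * x2) / M, c0 + steps * dt * p / M)
```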
## 2.4 Angular Momentum We start by considering angular momentum in the simplest nontrivial case, motion in \( {\mathbb{R}}^{2} \) . Definition 2.17 Consider a particle moving in \( {\mathbb{R}}^{2} \), having position \( \mathbf{x} \) , velocity \( \mathbf{v} \), and momentum \( \mathbf{p} = m\mathbf{v} \) . Then the angular momentum of the particle, denoted \( J \), is given by \[ J = {x}_{1}{p}_{2} - {x}_{2}{p}_{1} \] (2.16) In more geometric terms, \( J = \left| \mathbf{x}\right| \left| \mathbf{p}\right| \sin \phi \), where \( \phi \) is the angle (measured counterclockwise) between \( \mathbf{x} \) and \( \mathbf{p} \) . We can look at \( J \) in yet another way as follows. If \( \theta \) is the usual angle in polar coordinates on \( {\mathbb{R}}^{2} \), then an elementary calculation (Exercise 9) shows that \[ J = m{r}^{2}\frac{d\theta }{dt} \] \( \left( {2.17}\right) \) It then follows that \[ J = {2m}\frac{dA}{dt} \] (2.18) where \( A = \left( {1/2}\right) \int {r}^{2}{d\theta } \) is the area being swept out by the curve \( \mathbf{x}\left( t\right) \) . See Fig. 2.1. One significant property of the angular momentum is that it (like the energy) is conserved in certain situations. Proposition 2.18 Suppose a particle of mass \( m \) is moving in \( {\mathbb{R}}^{2} \) under the influence of a conservative force with the potential function \( V\left( \mathbf{x}\right) \) . If \( V \) is invariant under rotations in \( {\mathbb{R}}^{2} \), then the angular momentum \( J = \) \( {x}_{1}{p}_{2} - {x}_{2}{p}_{1} \) is independent of time along any solution of Newton’s equation. Conversely, if \( J \) is independent of time along every solution of Newton’s equation, then \( V \) is invariant under rotations. Proof. Differentiating (2.16) along a solution of Newton's law gives \[ \frac{dJ}{dt} = \frac{d{x}_{1}}{dt}{p}_{2} + {x}_{1}\frac{d{p}_{2}}{dt} - \frac{d{x}_{2}}{dt}{p}_{1} - {x}_{2}\frac{d{p}_{1}}{dt} \] \[ = \frac{1}{m}{p}_{1}{p}_{2} - {x}_{1}\frac{\partial V}{\partial {x}_{2}} - \frac{1}{m}{p}_{2}{p}_{1} + {x}_{2}\frac{\partial V}{\partial {x}_{1}} \] \[ = {x}_{2}\frac{\partial V}{\partial {x}_{1}} - {x}_{1}\frac{\partial V}{\partial {x}_{2}} \] On the other hand, consider rotations \( {R}_{\theta } \) in \( {\mathbb{R}}^{2} \) given by \[ {R}_{\theta } = \left( \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array}\right) \] If we differentiate \( V \) along this family of rotations, we obtain \[ {\left. \frac{d}{d\theta }V\left( {R}_{\theta }\mathbf{x}\right) \right| }_{\theta = 0} = \frac{\partial V}{\partial x}\frac{dx}{d\theta } + \frac{\partial V}{\partial y}\frac{dy}{d\theta } = - {x}_{2}\frac{\partial V}{\partial {x}_{1}} + {x}_{1}\frac{\partial V}{\partial {x}_{2}} = - \frac{dJ}{dt}\left( \mathbf{x}\right) . \] Thus, the angular derivative of \( V \) is zero if and only if \( J \) is constant. ∎ Conservation of \( J \) [together with the relation (2.18)] gives the following result. Corollary 2.19 (Kepler's Second Law) Suppose a particle is moving in \( {\mathbb{R}}^{2} \) in the presence of a force associated with a rotationally invariant potential. If \( \mathbf{x}\left( t\right) \) is the trajectory of the particle, then the area swept out by \( \mathbf{x}\left( t\right) \) between times \( t = a \) and \( t = b \) is \( \left( {b - a}\right) J/\left( {2m}\right) \), where \( J \) is the constant value of the angular momentum along the trajectory. 
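Proposition 2.18 and Corollary 2.19 lend themselves to a quick numerical check. The sketch below is an illustration with arbitrary initial data: it integrates Newton's equation for the rotationally invariant potential \( V\left( \mathbf{x}\right) = - 1/\left| \mathbf{x}\right| \) in \( {\mathbb{R}}^{2} \) with a semi-implicit Euler step, and verifies that \( J = {x}_{1}{p}_{2} - {x}_{2}{p}_{1} \) stays constant and that the swept area grows at the rate \( J/\left( {2m}\right) \) of (2.18).

```python
import numpy as np

# Check of Proposition 2.18: for the rotationally invariant potential
# V(x) = -1/|x| (an attractive central force), the angular momentum
# J = x1*p2 - x2*p1 stays constant along a numerically integrated trajectory.
# The semi-implicit Euler scheme and the initial data are illustrative choices.

m, dt, steps = 1.0, 1e-4, 50000
x = np.array([1.0, 0.0])
v = np.array([0.1, 0.9])

def J(x, v):
    return m * (x[0] * v[1] - x[1] * v[0])

def force(x):                        # F = -grad V = -x/|x|^3
    r = np.linalg.norm(x)
    return -x / r**3

J0, area = J(x, v), 0.0
for _ in range(steps):
    v = v + dt * force(x) / m
    x = x + dt * v
    area += 0.5 * (x[0] * v[1] - x[1] * v[0]) * dt   # dA = (1/2) r^2 dtheta

assert abs(J(x, v) - J0) < 1e-9 * abs(J0)            # J is conserved
# Kepler's second law: area swept in time T equals T*J/(2m), cf. (2.18).
T = steps * dt
assert abs(area - T * J0 / (2 * m)) < 1e-2 * abs(T * J0 / (2 * m))
```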
Since the area swept out depends only on \( b - a \), we may say that "equal areas are swept out in equal times." Kepler, of course, was interested in the motion of planets in \( {\mathbb{R}}^{3} \), not in \( {\mathbb{R}}^{2} \) . The motion of a planet moving in the "inverse square" force of a sun will, however, always lie in a plane. (This claim follows from the three-dimensional version of conservation of angular momentum, as explained in Sect. 2.6.1.) In \( {\mathbb{R}}^{3} \), the angular momentum of the particle is a vector, given by \[ \mathbf{J} = \mathbf{x} \times \mathbf{p} \] (2.19) where \( \times \) denotes the cross product (or vector product). Thus, for example, \[ {J}_{3} = {x}_{1}{p}_{2} - {x}_{2}{p}_{1} \] \( \left( {2.20}\right) \) If, then, we have a particle in \( {\mathbb{R}}^{3} \) that just happens to be moving in \( {\mathbb{R}}^{2} \) (i.e., \( {x}_{3} = 0 \) and \( {p}_{3} = 0 \) ), then the angular momentum will be in the \( z \) - direction with \( z \) -component given by the quantity \( J \) defined in Definition 2.17. The representation of the angular momentum of a particle in \( {\mathbb{R}}^{3} \) as a vector is a low-dimensional peculiarity. For a particle in \( {\mathbb{R}}^{n} \), the angular momentum is a skew-symmetric matrix given by \[ {J}_{jk} = {x}_{j}{p}_{k} - {x}_{k}{p}_{j} \] (2.21) In the \( {\mathbb{R}}^{3} \) case, the entries of the \( 3 \times 3 \) angular momentum matrix are made up by the three components of the angular momentum vector together with their negatives, with zeros along the diagonal. [Compare, e.g., (2.20) and (2.21).] Definition 2.20 For a system of \( N \) particles moving in \( {\mathbb{R}}^{n} \), the total angular momentum of the system is the skew-symmetric matrix \( \mathbf{J} \) given \( {by} \) \[ {J}_{jk} = \mathop{\sum }\limits_{{l = 1}}^{N}\left( {{x}_{j}^{l}{p}_{k}^{l} - {x}_{k}^{l}{p}_{j}^{l}}\right) . \] (2.22) Theorem 2.21 Suppose a system of \( N \) particles in \( {\mathbb{R}}^{n} \) is moving under the influence of conservative forces with potential function \( V \) . If \( V \) satisfies \[ V\left( {R{\mathbf{x}}^{1}, R{\mathbf{x}}^{2},\ldots, R{\mathbf{x}}^{N}}\right) = V\left( {{\mathbf{x}}^{1},{\mathbf{x}}^{2},\ldots ,{\mathbf{x}}^{N}}\right) \] (2.23) for every rotation matrix \( R \), then the total angular momentum of the system is conserved (constant along each trajectory). Conversely, if the total angular momentum is constant along each trajectory, then \( V \) satisfies (2.23). The proof of this result is similar to that of Proposition 2.18 and is left as an exercise (Exercise 10). We will re-examine the concept of angular momentum in the next section using the language of Poisson brackets and Hamiltonian flows. ## 2.5 Poisson Brackets and Hamiltonian Mechanics We consider now the Hamiltonian approach to classical mechanics. (There is also the Lagrangian approach, but that approach is not as relevant for our purposes.) The Hamiltonian approach, and in particular the Poisson br
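As a small illustration of (2.21) and (2.22), not taken from the text, the total angular momentum matrix can be assembled from outer products, \( \mathbf{J} = \mathop{\sum }\limits_{l}\left( {{\mathbf{x}}^{l}{\left( {\mathbf{p}}^{l}\right) }^{T} - {\mathbf{p}}^{l}{\left( {\mathbf{x}}^{l}\right) }^{T}}\right) \); in \( {\mathbb{R}}^{3} \) its entries reproduce the components of the cross-product vector (2.19). The particle data below are arbitrary.

```python
import numpy as np

# The total angular momentum (2.22) as a skew-symmetric matrix, built from
# outer products; the particle positions and momenta are arbitrary sample data.

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))          # positions of N = 5 particles in R^3
P = rng.normal(size=(5, 3))          # momenta

J = sum(np.outer(x, p) - np.outer(p, x) for x, p in zip(X, P))
assert np.allclose(J, -J.T)          # skew-symmetry

# In R^3 the entries of J recover the usual cross-product vector (2.19):
Jvec = sum(np.cross(x, p) for x, p in zip(X, P))
assert np.allclose(J[0, 1], Jvec[2])     # J_12 = x_1 p_2 - x_2 p_1 = (x × p)_3
assert np.allclose(J[2, 0], Jvec[1])     # J_31 = (x × p)_2
assert np.allclose(J[1, 2], Jvec[0])     # J_23 = (x × p)_1
```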
1042_(GTM203)The Symmetric Group
Definition 5.5.2
Definition 5.5.2 A coloring of a graph \( \Gamma \) from a color set \( C \) is a function \( \kappa : V \rightarrow C \) . The coloring is proper if it satisfies \[ {uv} \in E \Rightarrow \kappa \left( u\right) \neq \kappa \left( v\right) \text{. ∎} \] So a coloring just assigns a color to each vertex and it is proper if no two vertices of the same color are connected by an edge. One coloring of our graph from the color set \( C = \{ 1,2,3\} \) is \[ \kappa \left( {v}_{1}\right) = \kappa \left( {v}_{2}\right) = \kappa \left( {v}_{4}\right) = 1\;\text{ and }\;\kappa \left( {v}_{3}\right) = 2. \] It is not proper, since both vertices of edge \( {v}_{1}{v}_{2} \) have the same color. The proper colorings of \( \Gamma \) are exactly those where \( {v}_{1},{v}_{2},{v}_{3} \) all have distinct colors and \( {v}_{4} \) has a color different from that of \( {v}_{3} \) . Perhaps the most famous (and controversial because of its computer proof by Appel and Haken) theorem about proper colorings is the Four Color Theorem. It is equivalent to the fact that one can always color a map with four colors so that adjacent countries are colored differently. Theorem 5.5.3 (Four Color Theorem [A-H 76]) If graph \( \Gamma \) is planar (i.e., can be drawn in the plane without edge crossings), then there is a proper coloring \( \kappa : V \rightarrow \{ 1,2,3,4\} \) . ∎ Definition 5.5.4 The chromatic polynomial of \( \Gamma \) is defined to be \[ {P}_{\Gamma } = {P}_{\Gamma }\left( n\right) = \text{the number of proper}\kappa : V \rightarrow \{ 1,2,\ldots, n\} \text{. ∎} \] The chromatic polynomial provides an interesting tool for studying proper colorings. Note that Theorem 5.5.3 can be restated: If \( \Gamma \) is planar, then \( {P}_{\Gamma }\left( 4\right) \geq 1 \) . To compute \( {P}_{\Gamma }\left( n\right) \) in our running example, if we color the vertices in order of increasing subscript, then there are \( n \) ways to color \( {v}_{1} \) . This leaves \( n - 1 \) choices for \( {v}_{2} \), since \( {v}_{1}{v}_{2} \in E \), and \( n - 2 \) for \( {v}_{3} \) . Finally, there are \( n - 1 \) colors available for \( {v}_{4} \), namely any except \( \kappa \left( {v}_{3}\right) \) . So in this case \( {P}_{\Gamma }\left( n\right) = n{\left( n - 1\right) }^{2}\left( {n - 2}\right) \) is a polynomial in \( n \) . This is always true, as suggested by the terminology, but is not obvious. The interested reader will find two different proofs in Exercises 28 and 29. In order to bring symmetric functions into play, one needs to generalize the chromatic polynomial. This motivates the following definition of Stanley. Definition 5.5.5 ([Stn 95]) If \( \Gamma \) has vertex set \( V = \left\{ {{v}_{1},\ldots ,{v}_{d}}\right\} \), then the chromatic symmetric function of \( \Gamma \) is defined to be \[ {X}_{\Gamma } = {X}_{\Gamma }\left( \mathbf{x}\right) = \mathop{\sum }\limits_{{\kappa : V \rightarrow \mathbb{P}}}{x}_{\kappa \left( {v}_{1}\right) }{x}_{\kappa \left( {v}_{2}\right) }\cdots {x}_{\kappa \left( {v}_{d}\right) }, \] where the sum is over all proper colorings \( \kappa \) with colors from \( \mathbb{P} \), the positive integers. - It will be convenient to let \( {\mathbf{x}}^{\kappa } = {x}_{\kappa \left( {v}_{1}\right) }{x}_{\kappa \left( {v}_{2}\right) }\cdots {x}_{\kappa \left( {v}_{d}\right) } \) . If we return to our example, we see that given any four colors we can assign them to \( V \) in \( 4! = {24} \) ways. 
If we have three colors with one specified as being used twice, then it is not hard to see that there are 4 ways to color \( \Gamma \) . Since these are the only possibilities, \[ \begin{matrix} {X}_{\Gamma }\left( \mathbf{x}\right) & = & {24}{x}_{1}{x}_{2}{x}_{3}{x}_{4} + {24}{x}_{1}{x}_{2}{x}_{3}{x}_{5} + \cdots + 4{x}_{1}^{2}{x}_{2}{x}_{3} + 4{x}_{1}{x}_{2}^{2}{x}_{3} + \cdots \end{matrix} \] \[ = {24}{m}_{{1}^{4}} + 4{m}_{2,{1}^{2}}\text{.} \] We next collect a few elementary properties of \( {X}_{\Gamma } \), including its connection to the chromatic polynomial. Proposition 5.5.6 ([Stn 95]) Let \( \Gamma \) be any graph with vertex set \( V \) . 1. \( {X}_{\Gamma } \) is homogeneous of degree \( d = \left| V\right| \) . 2. \( {X}_{\Gamma } \) is a symmetric function. 3. If we set \( {x}_{1} = \cdots = {x}_{n} = 1 \) and \( {x}_{i} = 0 \) for \( i > n \), written \( \mathbf{x} = {1}^{n} \), then \[ {X}_{\Gamma }\left( {1}^{n}\right) = {P}_{\Gamma }\left( n\right) \] Proof. 1. Every monomial in \( {X}_{\Gamma } \) has a factor for each vertex. 2. Any permutation of the colors of a proper coloring gives another proper coloring. This means that permuting the subscripts of \( {X}_{\Gamma } \) leaves the function invariant. And since it is homogeneous of degree \( d \), it is a finite sum of monomial symmetric functions. 3. With the given specialization, each monomial of \( {X}_{\Gamma } \) becomes either 1 or 0 . The surviving monomials are exactly those that use only the first \( n \) colors. So their sum is \( {P}_{\Gamma }\left( n\right) \) by definition of the chromatic polynomial as the number of such colorings. - Since \( {X}_{\Gamma } \in \Lambda \), we can consider its expansion in the various bases introduced in Chapter 4. In order to do so, we must first talk about partitions of a set \( S \) . Definition 5.5.7 A set partition \( \beta = {B}_{1}/{\mathcal{B}}_{2}/\ldots /{B}_{l} \) of \( S,\beta \vdash S \), is a collection of subsets, or blocks, \( {B}_{1},\ldots ,{B}_{l} \) whose disjoint union is \( S \) . The type of \( \beta \) is the integer partition \[ \lambda \left( \beta \right) = \left( {\left| {B}_{1}\right| ,\left| {B}_{2}\right| ,\ldots ,\left| {B}_{l}\right| }\right) \] where we assume that the \( {B}_{i} \) are listed in weakly decreasing order of size. ∎ For example, the partitions of \( \{ 1,2,3,4\} \) with two blocks are \[ 1,2,3/4;1,2,4/3;1,3,4/2;2,3,4/1;1,2/3,4;1,3/2,4;1,4/2,3\text{.} \] The first four are of type \( \left( {3,1}\right) \), while the last three are of type \( \left( {2}^{2}\right) \) . Let us first talk about the expansion of \( {X}_{\Gamma } \) in terms of monomial symmetric functions. Definition 5.5.8 An independent, or stable, set of \( \Gamma \) is a subset \( W \subseteq V \) such that there is no edge \( {uv} \) with both of its vertices in \( W \) . We call a partition \( \beta \vdash V \) independent or stable if all of its blocks are. We let \( {i}_{\lambda } = {i}_{\lambda }\left( V\right) = \) the number of independent partitions of \( V \) of type \( \lambda \) . ∎ Continuing with our example graph, the independent partitions of \( V \) are \[ {v}_{1}/{v}_{2}/{v}_{3}/{v}_{4};{v}_{1},{v}_{4}/{v}_{2}/{v}_{3};{v}_{2},{v}_{4}/{v}_{1}/{v}_{3} \] \( \left( {5.15}\right) \) having types \( \left( {1}^{4}\right) \) and \( \left( {2,{1}^{2}}\right) \) . The fact that these are exactly the partitions occuring in the previous expansion of \( {X}_{\Gamma } \) into monomial symmetric functions is no accident. 
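For a graph this small, both \( {P}_{\Gamma } \) and the monomial expansion of \( {X}_{\Gamma } \) can be verified by brute force. The sketch below is illustrative only; the running example graph is taken to be the triangle \( {v}_{1}{v}_{2}{v}_{3} \) together with the edge \( {v}_{3}{v}_{4} \), as described above. It enumerates proper colorings, checks \( {P}_{\Gamma }\left( n\right) = n{\left( n - 1\right) }^{2}\left( {n - 2}\right) \), and recovers the coefficients 24 and 4 of \( {m}_{{1}^{4}} \) and \( {m}_{2,{1}^{2}} \).

```python
from itertools import product
from collections import Counter

# Brute-force check, for the running example graph (triangle v1 v2 v3 plus the
# edge v3 v4), of P_Gamma(n) = n(n-1)^2(n-2) and of the expansion
# X_Gamma = 24*m_{1^4} + 4*m_{2,1,1}.  A sanity check on tiny data only.

V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (2, 3), (3, 4)]

def proper_colorings(n):
    """All proper colorings kappa: V -> {0, ..., n-1}, as tuples indexed by V."""
    for kappa in product(range(n), repeat=len(V)):
        if all(kappa[u - 1] != kappa[v - 1] for u, v in E):
            yield kappa

for n in range(1, 7):
    assert sum(1 for _ in proper_colorings(n)) == n * (n - 1) ** 2 * (n - 2)

def coefficient(lam, n=4):
    """Coefficient of m_lam in X_Gamma = coefficient of the single monomial
    x_0^{lam[0]} x_1^{lam[1]} ...; since sum(lam) = |V|, no other variable
    can occur in a contributing coloring."""
    return sum(1 for kappa in proper_colorings(n)
               if tuple(Counter(kappa).get(i, 0) for i in range(len(lam))) == tuple(lam))

assert coefficient((1, 1, 1, 1)) == 24
assert coefficient((2, 1, 1)) == 4
```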
To state the actual result it will be convenient to associate with a partition \( \lambda = \left( {{1}^{{m}_{1}},\ldots ,{d}^{{m}_{d}}}\right) \) the integer \[ {y}_{\lambda }\overset{\text{ def }}{ = }{m}_{1}!{m}_{2}!\cdots {m}_{d}! \] Proposition 5.5.9 ([Stn 95]) The expansion of \( {X}_{\Gamma } \) in terms of monomial symmetric functions is \[ {X}_{\Gamma } = \mathop{\sum }\limits_{{\lambda \vdash d}}{i}_{\lambda }{y}_{\lambda }{m}_{\lambda } \] Proof. In any proper coloring, the set of all vertices of a given color form an independent set. So given \( \kappa : V \rightarrow \mathbb{P} \) proper, the set of nonempty \( {\kappa }^{-1}\left( i\right) \) , \( i \in \mathbb{P} \), form an independent partition \( \beta \vdash V \) . Thus the coefficient of \( {\mathbf{x}}^{\lambda } \) in \( {X}_{\Gamma } \) is just the number of ways to choose \( \beta \) of type \( \lambda \) and then assign colors to the blocks so as to give this monomial. There are \( {i}_{\lambda } \) possiblities for the first step. Also, colors can be permuted among the blocks of a fixed size without changing \( {\mathbf{x}}^{\lambda } \), which gives the factor of \( {y}_{\lambda } \) for the second. - To illustrate this result in our example, the list (5.15) of independent partitions shows that \[ {X}_{\Gamma } = {i}_{\left( {1}^{4}\right) }{y}_{\left( {1}^{4}\right) }{m}_{\left( {1}^{4}\right) } + {i}_{\left( 2,{1}^{2}\right) }{y}_{\left( 2,{1}^{2}\right) }{m}_{\left( 2,{1}^{2}\right) } = 1 \cdot 4!{m}_{\left( {1}^{4}\right) } + 2 \cdot 2!{m}_{\left( 2,{1}^{2}\right) }, \] which agrees with our previous calculations. We now turn to the expansion of \( {X}_{\Gamma } \) in terms of the power sum symmetric functions. If \( F \subseteq E \) is a set of edges, it will be convenient to let \( F \) also stand for the subgraph of \( \Gamma \) with vertex set \( V \) and edge set \( F \) . Also, by the components of a graph \( \Gamma \) we will mean the topologically connected components. These components determine a partition \( \beta \left( F\right) \) of the vertex set, whose type will be denoted by \( \lambda \left( F\right) \) . In our usual example, \( F = \left\{ {{e}_{1},{e}_{2}}\right\} \) is a subgraph of \( \Gamma \) with two components. The corresponding partitions are \( \beta \left( F\right) = {v}_{1},{v}_{2},{v}_{3}/{v}_{4} \) and \( \lambda \left( F\right) = \left( {3,1}\right) \) . Theorem 5.5.10 ([Stn 95]) We have \[ {X}_{\Gamma } = \mathop{\sum }\limits_{{F \subseteq E}}{\left( -1\right) }^{\left| F\right| }{p}_{\lambda \left( F\right) } \] Proof. Let \( K\left( F\right) \) denote the set of all colorings of \( \Gamma \) that are monochromatic on the components of \( F \) (which will usually not be proper). If \( \beta \left( F\right) = \) \( {B}_{1}/\ldots /{B}_{l} \), then we can compute the weight generating function for such colorings as \[ \mathop{\sum }\limits_{{\kappa \in K\left( F\right) }}{\mathbf{x}}^{\kappa }
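Theorem 5.5.10 can be spot-checked numerically in its specialization at \( \mathbf{x} = {1}^{n} \) : since \( {p}_{k}\left( {1}^{n}\right) = n \) and \( {X}_{\Gamma }\left( {1}^{n}\right) = {P}_{\Gamma }\left( n\right) \) by Proposition 5.5.6(3), the theorem reduces to \( {P}_{\Gamma }\left( n\right) = \mathop{\sum }\limits_{{F \subseteq E}}{\left( -1\right) }^{\left| F\right| }{n}^{c\left( F\right) } \), where \( c\left( F\right) \) is the number of components of \( \left( {V, F}\right) \) . The sketch below verifies this identity for the running example graph; the union-find helper is an implementation detail of the sketch, not from the text.

```python
from itertools import combinations

# Check of Theorem 5.5.10 specialized at x = 1^n:
#   P_Gamma(n) = sum over F <= E of (-1)^|F| * n^{c(F)},
# where c(F) is the number of connected components of (V, F).

V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (2, 3), (3, 4)]

def components(F):
    parent = {v: v for v in V}
    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in F:
        parent[find(u)] = find(w)
    return len({find(v) for v in V})

def subset_sum(n):
    return sum((-1) ** r * n ** components(F)
               for r in range(len(E) + 1)
               for F in combinations(E, r))

for n in range(1, 8):
    assert subset_sum(n) == n * (n - 1) ** 2 * (n - 2)   # = P_Gamma(n)
```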
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 1.2.1
Definition 1.2.1. Two maps \( {f}_{0} : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) and \( {f}_{1} : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) are homotopic, written \( {f}_{0} \sim {f}_{1} \), if there is a map \[ F : \left( {X, A}\right) \times I \rightarrow \left( {Y, B}\right) \] with \( F\left( {x,0}\right) = {f}_{0}\left( x\right) \) and \( F\left( {x,1}\right) = {f}_{1}\left( x\right) \) for every \( x \in X \) . \( \diamond \) Note that \( F : \left( {X, A}\right) \times I \rightarrow \left( {Y, B}\right) \) is equivalent to \( F : X \times I \rightarrow Y \) with \( F\left( {a, t}\right) \in B \) for every \( a \in A \) and \( t \in I \) . It is psychologically very helpful to think of \( t \in I \) as "time" and \( {f}_{t} : \left( {X, A}\right) \rightarrow \) \( \left( {Y, B}\right) \) by \( {f}_{t}\left( x\right) = F\left( {x, t}\right) \) being " \( F \) at time \( t \) ". With this notion, a homotopy is a deformation through time of \( {f}_{0} \) into \( {f}_{1} \) . It is important to note that while each \( {f}_{t} \) must be continuous, the condition that \( F \) be continuous is stronger than that. For example, the maps \( {f}_{0} : \{ 0,1\} \rightarrow \{ 0,1\} \) given by \( {f}_{0}\left( 0\right) = 0 \) and \( {f}_{0}\left( 1\right) = 1 \), and \( {f}_{1} : \{ 0,1\} \rightarrow \) \( \{ 0,1\} \) given by \( {f}_{1}\left( 0\right) = {f}_{1}\left( 1\right) = 0 \) are not homotopic, but \( F : \{ 0,1\} \times I \rightarrow \{ 0,1\} \) defined by \( F\left( {x, t}\right) = 0 \) if \( \left( {x, t}\right) \neq \left( {1,1}\right) \) and \( F\left( {1,1}\right) = 1 \) has each \( {f}_{t} \) continuous. Lemma 1.2.2. Homotopy is an equivalence relation. Proof. Reflexive: \( \;f \) is homotopic to \( f \) via the homotopy of waiting (i.e., changing nothing) for one unit of time. Symmetric: If \( f \) is homotopic to \( g \), then \( g \) is homotopic to \( f \) via the homotopy of running the original homotopy backwards in time. Transitive: If \( f \) is homotopic to \( g \) and \( g \) is homotopic to \( h \), then \( f \) is homotopic to \( h \) via the homotopy of first doing the original homotopy from \( f \) to \( g \) twice as fast in the first half of the interval of time, and then doing the original homotopy from \( g \) to \( h \) twice as fast in the second half of the interval of time. We have a closely allied definition. Definition 1.2.3. Two maps \( {f}_{0} : X \rightarrow Y \) and \( {f}_{1} : X \rightarrow Y \) are homotopic rel \( A \), where \( A \) is a subspace of \( X \), written \( {f}_{0}{ \sim }_{A}{f}_{1} \), if there is a map \[ F : X \times I \rightarrow Y \] with \( F\left( {x,0}\right) = {f}_{0}\left( x\right), F\left( {x,1}\right) = {f}_{1}\left( x\right) \) for every \( x \in X \), and also \( F\left( {a, t}\right) = {f}_{0}\left( a\right) = \) \( {f}_{1}\left( a\right) \) for every \( a \in A \) and every \( t \in I \) . \( \diamond \) In other words, in a homotopy rel \( A \), the points in \( A \) never move under the deformation from \( {f}_{0} \) to \( {f}_{1} \) . By exactly the same logic, homotopy rel \( A \) is an equivalence relation. Similar to the relationship of homotopy between maps there is a relationship between spaces. Definition 1.2.4. 
Two pairs \( \left( {X, A}\right) \) and \( \left( {Y, B}\right) \) are homotopy equivalent if there are maps \( f : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) and \( g : \left( {Y, B}\right) \rightarrow \left( {X, A}\right) \) such that the composition \( {gf} \) : \( \left( {X, A}\right) \rightarrow \left( {X, A}\right) \) is homotopic to the identity map \( {id} : \left( {X, A}\right) \rightarrow \left( {X, A}\right) \) and \( {fg} \) : \( \left( {Y, B}\right) \rightarrow \left( {Y, B}\right) \) is homotopic to the identity map \( {id} : \left( {Y, B}\right) \rightarrow \left( {Y, B}\right) \) . As a special case of this we have the following. Definition 1.2.5. A subspace \( A \) of \( X \) is a deformation retract of \( X \) if there is a retraction \( g : X \rightarrow A \) (i.e., a map \( g : X \rightarrow A \) with \( g\left( a\right) = a \) for every \( a \in A \) ) such that \( g \) is homotopic to the identity map \( {id} : X \rightarrow X \) . A subspace \( A \) of \( X \) is a strong deformation retract if there is a retraction \( g : X \rightarrow A \) such that \( g \) is homotopic rel \( A \) to the identity map \( {id} : X \rightarrow X \) . Lemma 1.2.6. If \( A \) is a deformation retract of \( X \) then the inclusion map \( i : A \rightarrow X \) (defined by \( i\left( a\right) = a \in X \) for every \( a \in A \) ) is a homotopy equivalence. Example 1.2.7. Let us regard \( X : {\mathbb{R}}^{n} - \{ \left( {0,\ldots ,0}\right) \} \) as the space of nonzero vectors \( \left\{ {v \in {\mathbb{R}}^{n} \mid v \neq 0}\right\} \) . Then \( A = {S}^{n - 1} = \left\{ {v \in {\mathbb{R}}^{n} \mid \parallel v\parallel = 1}\right\} \) is a subspace of \( X \), and is a strong deformation retract of \( X \) . The map \[ F : X \times I \rightarrow A \] given by \[ F\left( {v, t}\right) = \parallel v{\parallel }^{t}\left( {v/\parallel v\parallel }\right) \] gives a homotopy rel \( A \) from the retraction \( {f}_{0}\left( v\right) = v/\parallel v\parallel \) to the identity map \( {f}_{1}\left( v\right) = v \) \( \diamond \) We have the following common language, which we will use throughout. Definition 1.2.8. Spaces \( X \) and \( Y \) that are homotopy equivalent are said to be of the same homotopy type. \( \diamond \) Definition 1.2.9. A space \( X \) is contractible if it has the homotopy type of a point \( * \), or, equivalently, if for some, and hence for any, point \( {x}_{0} \in X,{x}_{0} \) is a deformation retract of \( X \) . \( \diamond \) Here is an important construction. Definition 1.2.10. Let \( X \) be a space. The cone on \( X \) is the quotient space \( {cX} = \) \( X \times I/X \times \{ 1\} \) . \( \diamond \) Example 1.2.11. For any \( n \geq 1, c{S}^{n - 1} \) is homeomorphic to \( {D}^{n} \) . \( \diamond \) Lemma 1.2.12. For any space \( X,{cX} \) is contractible. Proof. Let \( F : {cX} \times I \rightarrow {cX} \) be defined by \[ F\left( {\left( {x, s}\right), t}\right) = \left( {x,\max \left( {s, t}\right) }\right) . \] ## 1.3 Exercises Exercise 1.3.1. (a) Construct a homeomorphism \( h : {\mathring{D}}^{n} \rightarrow {\mathbb{R}}^{n} \) . (b) Construct a homeomorphism \( h : {D}^{n}/{S}^{n - 1} \rightarrow {S}^{n} \) . Exercise 1.3.2. Let \( H \) be the "southern hemisphere" in \( {S}^{n}, H = \left\{ {\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) \in }\right. \) \( \left. {{S}^{n} \mid {x}_{n + 1} \leq 0}\right\} \) . Let \( p \) be the "south pole" \( p = \left( {0,0,\ldots ,0, - 1}\right) \in {S}^{n} \) . 
Show that the inclusion \( \left( {{S}^{n}, p}\right) \rightarrow \left( {{S}^{n}, H}\right) \) is a homotopy equivalence of pairs. Exercise 1.3.3. Carefully prove that homotopy is an equivalence relation (Lemma 1.2.2). Exercise 1.3.4. Prove Lemma 1.2.6. Exercise 1.3.5. (a) Prove that \( c{S}^{n - 1} \) is homeomorphic to \( {D}^{n} \) (Example 1.2.11). (b) The suspension \( {\sum X} \) of a space \( X \) is the quotient space \( X \times \left\lbrack {-1,1}\right\rbrack / \sim \), where the relation \( \sim \) identifies \( X \times \{ 1\} \) to a point and \( X \times \{ - 1\} \) to a point. Prove that \( \sum {S}^{n - 1} \) is homeomorphic to \( {S}^{n} \) . ## Chapter 2 The Fundamental Group One of the basic invariants of the homotopy type of a topological space is its fundamental group. In this chapter we define the fundamental group of a space and see how to calculate it. We also see the intimate relationship between the fundamental group and covering spaces. Throughout this chapter, all spaces are assumed to be path-connected, unless explicitly stated otherwise. ## 2.1 Definition and Basic Properties Definition 2.1.1. Let \( {x}_{0} \in X \) be a given point, called the base point. The fundamental group \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) is the set of homotopy classes of maps \[ f : \left( {{S}^{1},1}\right) \rightarrow \left( {X,{x}_{0}}\right) \] with composition given by \( h = {fg} \) where \[ h\left( {e}^{i\theta }\right) = \left\{ \begin{array}{ll} f\left( {e}^{2i\theta }\right) & 0 \leq \theta \leq \pi \\ g\left( {e}^{{2i}\left( {\theta - \pi }\right) }\right) & \pi \leq \theta \leq {2\pi }. \end{array}\right. \] Equivalently, \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) is the set of homotopy classes of maps \[ f : \left( {I,\{ 0\} \cup \{ 1\} }\right) \rightarrow \left( {X,{x}_{0}}\right) \] with composition given by \( h = {fg} \) where \[ h\left( t\right) = \left\{ \begin{array}{ll} f\left( {2t}\right) & 0 \leq t \leq \frac{1}{2} \\ g\left( {{2t} - 1}\right) & \frac{1}{2} \leq t \leq 1 \end{array}\right. \] We give the equivalence explicitly. We let \( {S}^{1} = \{ \exp \left( {2\pi it}\right) \mid 0 \leq t \leq 1\} \) and note that \( 1 = \exp \left( {2\pi i0}\right) = \exp \left( {2\pi i1}\right) \) . If \( \widetilde{f} : \left( {{S}^{1},1}\right) \rightarrow \left( {X,{x}_{0}}\right) \) is a map, then we obtain \( f \) : \( \left( {I,\{ 0\} \cup \{ 1\} }\right) \rightarrow \left( {X,{x}_{0}}\right) \) by \( f\left( t\right) = \widetilde{f}\left( {\exp \left( {2\pi it}\right) }\right) \), and if \( f : \left( {I,\{ 0\} \cup \{ 1\} }\right) \rightarrow \left( {X,{x}_{0}}\right) \) is a map, then we obtain \( \widetilde{f} : \left( {{S}^{1},1}\right) \rightarrow \left( {X,{x}_{0}}\right) \) by \( \widetilde{f}\left( {\exp \left( {2\pi it}\right) }\right) = f\left( t\right) \) . But in the sequel we will use this identification implicitly and will not distinguish between \( \widetilde{f} \) and \( f \) . Lemma 2.1.2. The fundamental group \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) is a group. Proof. The identity element of this group is represented by the constant map \( i
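The homotopy in Example 1.2.7 is explicit enough to test numerically. The short sketch below is only a spot check on random sample points: it confirms the three defining properties of the strong deformation retraction \( F\left( {v, t}\right) = \parallel v{\parallel }^{t}\left( {v/\parallel v\parallel }\right) \) , namely that at \( t = 0 \) it is the retraction onto \( {S}^{n - 1} \), at \( t = 1 \) it is the identity, and points of \( A = {S}^{n - 1} \) never move.

```python
import numpy as np

# Spot-check of the strong deformation retraction of Example 1.2.7:
# F(v, t) = |v|^t * (v/|v|) on X = R^n \ {0}.  Sample points are arbitrary.

def F(v, t):
    r = np.linalg.norm(v)
    return r ** t * (v / r)

rng = np.random.default_rng(2)
for _ in range(100):
    v = rng.normal(size=3)
    t = rng.uniform()
    assert np.allclose(F(v, 0.0), v / np.linalg.norm(v))   # f_0 = retraction onto S^2
    assert np.allclose(F(v, 1.0), v)                        # f_1 = identity
    a = v / np.linalg.norm(v)                                # a point of A = S^2
    assert np.allclose(F(a, t), a)                           # homotopy rel A: A never moves
```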
1048_(GTM209)A Short Course on Spectral Theory
Definition 1.6.4
Definition 1.6.4. A division algebra (over \( \mathbb{C} \) ) is a complex associative algebra \( A \) with unit 1 such that every nonzero element in \( A \) is invertible. Definition 1.6.5. An isomorphism of Banach algebras \( A \) and \( B \) is an isomorphism \( \theta : A \rightarrow B \) of the underlying algebraic structures that is also a topological isomorphism; thus there are positive constants \( a, b \) such that \[ a\parallel x\parallel \leq \parallel \theta \left( x\right) \parallel \leq b\parallel x\parallel \] for every element \( x \in A \) . COROLLARY 1. Any Banach division algebra is isomorphic to the one-dimensional algebra \( \mathbb{C} \) . Proof. Define \( \theta : \mathbb{C} \rightarrow A \) by \( \theta \left( \lambda \right) = \lambda \mathbf{1} \) . \( \theta \) is clearly an isomorphism of \( \mathbb{C} \) onto the Banach subalgebra \( \mathbb{C}1 \) of \( A \) consisting of all scalar multiples of the identity, and it suffices to show that \( \theta \) is onto \( A \) . But for any element \( x \in A \) Gelfand’s theorem implies that there is a complex number \( \lambda \in \sigma \left( x\right) \) . Thus \( x - \lambda \) is not invertible. Since \( A \) is a division algebra, \( x - \lambda \) must be 0, hence \( x = \theta \left( \lambda \right) \), as asserted. There are many division algebras in mathematics, especially commutative ones. For example, there is the algebra of all rational functions \( r\left( z\right) = p\left( z\right) /q\left( z\right) \) of one complex variable, where \( p \) and \( q \) are polynomials with \( q \neq 0 \), or the algebra of all formal Laurent series of the form \( \mathop{\sum }\limits_{{-\infty }}^{\infty }{a}_{n}{z}^{n} \) , where \( \left( {a}_{n}\right) \) is a doubly infinite sequence of complex numbers with \( {a}_{n} = 0 \) for sufficiently large negative \( n \) . It is significant that examples such as these cannot be endowed with a norm that makes them into a Banach algebra. Exercises. (1) Give an example of a one-dimensional Banach algebra that is not isomorphic to the algebra of complex numbers. (2) Let \( X \) be a compact Hausdorff space and let \( A = C\left( X\right) \) be the Banach algebra of all complex-valued continuous functions on \( X \) . Show that for every \( f \in C\left( X\right) ,\sigma \left( f\right) = f\left( X\right) \) . (3) Let \( T \) be the operator defined on \( {L}^{2}\left\lbrack {0,1}\right\rbrack \) by \( {Tf}\left( x\right) = {xf}\left( x\right), x \in \) \( \left\lbrack {0,1}\right\rbrack \) . What is the spectrum of \( T \) ? Does \( T \) have point spectrum? For the remaining exercises, let \( \left( {{a}_{n} : n = 1,2,\ldots }\right) \) be a bounded sequence of complex numbers and let \( H \) be a complex Hilbert space having an orthonormal basis \( {e}_{1},{e}_{2},\ldots \) . (4) Show that there is a (necessarily unique) bounded operator \( A \in \) \( \mathcal{B}\left( H\right) \) satisfying \( A{e}_{n} = {a}_{n}{e}_{n + 1} \) for every \( n = 1,2,\ldots \) Such an operator \( A \) is called a unilateral weighted shift (with weight sequence \( \left. \left( {a}_{n}\right) \right) \) . A unitary operator on a Hilbert space \( H \) is an invertible isometry \( U \in \mathcal{B}\left( H\right) \) . (5) Let \( A \in \mathcal{B}\left( H\right) \) be a weighted shift as above. 
Show that for every complex number \( \lambda \) with \( \left| \lambda \right| = 1 \) there is a unitary operator \( U = \) \( {U}_{\lambda } \in \mathcal{B}\left( H\right) \) such that \( {UA}{U}^{-1} = {\lambda A} \) . (6) Deduce that the spectrum of a weighted shift must be the union of (possibly degenerate) concentric circles about \( z = 0 \) . (7) Let \( A \) be the weighted shift associated with a sequence \( \left( {a}_{n}\right) \in {\ell }^{\infty } \) . (a) Calculate \( \parallel A\parallel \) in terms of \( \left( {a}_{n}\right) \) . (b) Assuming that \( {a}_{n} \rightarrow 0 \) as \( n \rightarrow \infty \), show that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{A}^{n}\end{Vmatrix}}^{1/n} = 0 \] ## 1.7. Spectral Radius Throughout this section, \( A \) denotes a unital Banach algebra with \( \parallel \mathbf{1}\parallel = 1 \) . We introduce the concept of spectral radius and prove a useful asymptotic formula due to Gelfand, Mazur, and Beurling. Definition 1.7.1. For every \( x \in A \) the spectral radius of \( x \) is defined by \[ r\left( x\right) = \sup \{ \left| \lambda \right| : \lambda \in \sigma \left( x\right) \} . \] REMARK 1.7.2. Since the spectrum of \( x \) is contained in the central disk of radius \( \parallel x\parallel \), it follows that \( r\left( x\right) \leq \parallel x\parallel \) . Notice too that for every \( \lambda \in \mathbb{C} \) we have \( r\left( {\lambda x}\right) = \left| \lambda \right| r\left( x\right) \) . We require the following rudimentary form of the spectral mapping theorem. If \( x \) is an element of \( A \) and \( f \) is a polynomial, then (1.11) \[ f\left( {\sigma \left( x\right) }\right) \subseteq \sigma \left( {f\left( x\right) }\right) . \] To see why this is so, fix \( \lambda \in \sigma \left( x\right) \) . Since \( z \mapsto f\left( z\right) - f\left( \lambda \right) \) is a polynomial having a zero at \( z = \lambda \), there is a polynomial \( g \) such that \[ f\left( z\right) - f\left( \lambda \right) = \left( {z - \lambda }\right) g\left( z\right) \] Thus \[ f\left( x\right) - f\left( \lambda \right) \mathbf{1} = \left( {x - \lambda }\right) g\left( x\right) = g\left( x\right) \left( {x - \lambda }\right) \] cannot be invertible: A right (respectively left) inverse of \( f\left( x\right) - f\left( \lambda \right) \mathbf{1} \) gives rise to a right (respectively left) inverse of \( x - \lambda \) . Hence \( f\left( \lambda \right) \in \sigma \left( {f\left( x\right) }\right) \) . As a final observation, we note that for every \( x \in A \) one has (1.12) \[ r\left( x\right) \leq \mathop{\inf }\limits_{{n \geq 1}}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n}. \] Indeed, for every \( \lambda \in \sigma \left( x\right) \), (1.11) implies that \( {\lambda }^{n} \in \sigma \left( {x}^{n}\right) \) ; hence \[ {\left| \lambda \right| }^{n} = \left| {\lambda }^{n}\right| \leq r\left( {x}^{n}\right) \leq \begin{Vmatrix}{x}^{n}\end{Vmatrix} \] and (1.12) follows after one takes \( n \) th roots. The following formula is normally attributed to Gelfand and Mazur, although special cases were discovered independently by Beurling. THEOREM 1.7.3. For every \( x \in A \) we have \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} = r\left( x\right) \] The assertion here is that the limit exists in general, and has \( r\left( x\right) \) as its value. Proof. 
From (1.12) we have \( r\left( x\right) \leq \mathop{\liminf }\limits_{n}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \), so it suffices to prove that (1.13) \[ \mathop{\limsup }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \leq r\left( x\right) \] We need only consider the case \( x \neq 0 \) . To prove (1.13) choose \( \lambda \in \mathbb{C} \) satisfying \( \left| \lambda \right| < 1/r\left( x\right) \) (when \( r\left( x\right) = 0,\lambda \) may be chosen arbitrarily). We claim that the sequence \( \left\{ {{\left( \lambda x\right) }^{n} : n = 1,2,\ldots }\right\} \) is bounded. Indeed, by the Banach-Steinhaus theorem it suffices to show that for every bounded linear functional \( \rho \) on \( A \) we have \[ \left| {\rho \left( {x}^{n}\right) {\lambda }^{n}}\right| = \left| {\rho \left( {\left( \lambda x\right) }^{n}\right) }\right| \leq {M}_{\rho } < \infty ,\;n = 1,2,\ldots , \] where \( {M}_{\rho } \) perhaps depends on \( \rho \) . To that end, consider the complex-valued function \( f \) defined on the (perhaps infinite) disk \( \{ z \in \mathbb{C} : \left| z\right| < 1/r\left( x\right) \} \) by \[ f\left( z\right) = \rho \left( {\left( 1 - zx\right) }^{-1}\right) . \] Note first that \( f \) is analytic. Indeed, for \( \left| z\right| < 1/\parallel x\parallel \) we may expand \( (1 - \) \( {zx}{)}^{-1} \) into a convergent series \( 1 + {zx} + {\left( zx\right) }^{2} + \cdots \) to obtain a power series representation for \( f \) : (1.14) \[ f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }\rho \left( {x}^{n}\right) {z}^{n} \] On the other hand, in the larger region \( R = \{ z : 0 < \left| z\right| < 1/r\left( x\right) \} \) we can write \[ f\left( z\right) = \frac{1}{z}\rho \left( {\left( {z}^{-1}\mathbf{1} - x\right) }^{-1}\right) \] and from formula (1.10) it is clear that \( f \) is analytic on \( R \) . Taken with (1.14), this implies that \( f \) is analytic on the disk \( \{ z : \left| z\right| < 1/r\left( x\right) \} \) . On the smaller disk \( \{ z : \left| z\right| < 1/\parallel x\parallel \} \) ,(1.14) gives a power series representation for \( f \) ; but since \( f \) is analytic on the larger disk \( \{ z : \left| z\right| < 1/r\left( x\right) \} \), it follows that the same series (1.14) must converge to \( f\left( z\right) \) for all \( \left| z\right| < 1/r\left( x\right) \) . Thus we are free to take \( z = \lambda \) in (1.14), and the resulting series converges. It follows that \( \rho \left( {x}^{n}\right) {\lambda }^{n} \) is a bounded sequence, proving the claim. Now choose any complex number \( \lambda \) satisfying \( 0 < \left| \lambda \right| < 1/r\left( x\right) \) . By the claim, there is a constant \( M = {M}_{\lambda } \) such that \( {\left| \lambda \right| }^{n}\begin{Vmatrix}{x}^{n}\end{Vmatrix} = \begin{Vmatrix}{\left( \lambda x\right) }^{n}\end{Vmatrix} \leq M \) for every \( n = 1,2,\ldots \) ; after taking \( n \) th roots, we find that \[ \mathop{\limsup }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \leq \mathop{\limsup }\limits_{{n \rightarrow \infty }}\frac{{M}^{1/n}}{\left| \lambda \right|
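The content of Theorem 1.7.3 is easy to see numerically in the Banach algebra \( {M}_{2}\left( \mathbb{C}\right) \) with the operator norm, where \( r\left( x\right) \) can be far smaller than \( \parallel x\parallel \) . The sketch below uses illustrative matrices, not taken from the text: it computes \( {\begin{Vmatrix}{x}^{n}\end{Vmatrix}}^{1/n} \) for a triangular matrix with spectrum \( \{ 1/2\} \), watches it approach \( r\left( x\right) = 1/2 \) from above in accordance with (1.12), and exhibits a nilpotent element with \( r\left( x\right) = 0 \), in the spirit of Exercise 7(b).

```python
import numpy as np

# Numerical illustration of the spectral radius formula (Theorem 1.7.3) in
# A = M_2(C) with the operator norm.  The matrices are illustrative examples.

def op_norm(a):
    return np.linalg.norm(a, 2)              # largest singular value

x = np.array([[0.5, 10.0],
              [0.0, 0.5]])                    # sigma(x) = {0.5}, so r(x) = 0.5
r = max(abs(np.linalg.eigvals(x)))
assert np.isclose(r, 0.5) and op_norm(x) > 10     # r(x) << ||x||

vals = [op_norm(np.linalg.matrix_power(x, n)) ** (1.0 / n) for n in (1, 10, 50, 200)]
assert vals[-1] < 0.6                         # ||x^n||^{1/n} is already close to r(x)
assert all(v >= r - 1e-12 for v in vals)      # consistent with (1.12)

# A nilpotent element shows the extreme case r(x) = 0 (cf. Exercise 7(b)):
nil = np.array([[0.0, 10.0], [0.0, 0.0]])
assert op_norm(np.linalg.matrix_power(nil, 2)) == 0.0
```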
1359_[陈省身] Lectures on Differential Geometry
Definition 1.2
Definition 1.2. Suppose \( M \) and \( N \) are \( m \) -dimensional and \( n \) -dimensional complex manifolds, respectively, and \( f : M \rightarrow N \) is a continuous map. If, for every point \( p \in M \), there exists a neighborhood \( U \) such that \( f \) can be expressed in \( U \) by local coordinates as \[ {w}^{k} = {w}^{k}\left( {{z}^{1},\ldots ,{z}^{m}}\right) ,\;1 \leq k \leq n, \] (1.6) where \( {w}^{k} \) are all holomorphic functions, then \( f \) is called a holomorphic map. Suppose \( f : M \rightarrow \mathbb{C} \) is a holomorphic function on the complex manifold \( M \) . By the maximum modulus theorem, if the modulus of \( f \) assumes its maximum value at \( {p}_{0} \) in a neighborhood \( U \) of \( {p}_{0} \in M \), i.e., \( \left| {f\left( p\right) }\right| \leq \left| {f\left( {p}_{0}\right) }\right| \) \( \left( {p \in U}\right) \), then we have, in \( U \) , \[ f\left( p\right) = f\left( {p}_{0}\right) \] Suppose \( M \) is a compact, connected complex manifold. If \( \left| {f\left( p\right) }\right| \left( {p \in M}\right) \) is a continuous function on \( M \), then it must assume a maximum value on \( M \) . By the previous conclusion, a holomorphic function \( f \) on \( M \) must be a constant. Thus we know that a holomorphic map \( f : M \rightarrow {\mathbb{C}}_{n} \) from a compact, connected complex manifold \( M \) to \( {\mathbb{C}}_{n} \) must map \( M \) to a point in \( {\mathbb{C}}_{n} \) . Example 1. \( {\mathbb{C}}_{m} \) is an \( m \) -dimensional complex manifold. \( {\mathbb{C}}_{1} \) is called the Gauss complex plane. Example 2 (The \( m \) -dimensional complex projective space \( \mathbb{C}{P}_{m} \) ). Define a relation \( \sim \) among the elements in \( {C}_{m + 1} - \{ 0\} \) as follows: \[ \left( {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right) \sim \left( {{w}^{0},{w}^{1},\ldots ,{w}^{m}}\right) \] if and only if there exists a nonzero complex number \( \lambda \) such that \[ \left( {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right) = \lambda \left( {{w}^{0},{w}^{1},\ldots ,{w}^{m}}\right) . \] (1.7) It is easy to verify that this is an equivalence relation. The \( m \) -dimensional complex projective space \( \mathbb{C}{P}_{m} \) is the quotient space \( \left( {{\mathbb{C}}_{m + 1}-\{ 0\} }\right) / \sim \) . An element in this space is denoted by \( \left\lbrack {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right\rbrack \) . The \( \left( {m + 1}\right) \) -tuple \( \left( {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right) \) of numbers are called the homogeneous coordinates of the point \( \left\lbrack {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right\rbrack \), and are determined by a point in \( \mathbb{C}{P}_{m} \) up to a nonzero complex factor. As in real projective spaces, \( \mathbb{C}{P}_{m} \) can be covered by \( m + 1 \) open sets \( {U}_{j}\left( {0 \leq j \leq m}\right) \), where \[ {U}_{j} = \left\{ {\left\lbrack {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right\rbrack \in \mathbb{C}{P}_{m}, \mid {z}^{j} \neq 0}\right\} , \] (1.8) and the coordinates on \( {U}_{j} \) are \[ {}_{j}{\zeta }^{k} = {z}^{k}/{z}^{j},\;0 \leq k \leq m,\;k \neq j. \] (1.9) Since \( {}_{j}{\zeta }^{k} \) can assume any complex value, every \( {U}_{j} \) is homeomorphic to \( {\mathbb{C}}_{m} \) . The formula for the change of coordinates in \( {U}_{j} \cap {U}_{k} \) is \[ \left\{ \begin{array}{l} {}_{j}{\zeta }^{h} = {}_{k}{\zeta }^{h}/{}_{k}{\zeta }^{j},\;h \neq j, k, \\ {}_{j}{\zeta }^{k} = 1/{}_{k}{\zeta }^{j}. \end{array}\right. 
\] \( \left( {1.10}\right) \) These are all holomorphic functions, so \( \mathbb{C}{P}_{m} \) is an \( m \) -dimensional complex manifold. When the 1-dimensional complex projective space \( \mathbb{C}{P}_{1} \) is viewed as a 2-dimensional real manifold, it is usually called the Riemann sphere. Indeed, \( \mathbb{C}{P}_{1} \) is covered by the two coordinate neighborhoods \( {U}_{0},{U}_{1} \), and \( {U}_{0} \) is just \( \mathbb{C}{P}_{1} \) with the single point \( p = \left\lbrack {0,1}\right\rbrack \) removed. Since \( {U}_{0} \) is homeomorphic to the Gauss complex plane, \( \mathbb{C}{P}_{1} \) is its one-point compactification, and hence is the 2-dimensional sphere \( {S}^{2} \) . Consider the natural projection \( \pi : {\mathbb{C}}_{m + 1} - \{ 0\} \rightarrow \mathbb{C}{P}_{m} \) such that \[ \pi \left( {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right) = \left\lbrack {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right\rbrack . \] (1.11) For \( p \in \mathbb{C}{P}_{m} \), we can identify \( {\pi }^{-1}\left( p\right) \) with \( {\mathbb{C}}^{ * } = {\mathbb{C}}_{1} - \{ 0\} \) . If we substitute for the coordinates \( \left( {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right) \) of \( {\pi }^{-1}\left( {U}_{j}\right) \) the quantities \( {}_{j}{\zeta }^{k} = {z}^{k}/{z}^{j} \) \( \left( {0 \leq k \leq m, k \neq j}\right) \), together with \( {z}^{j} \), then \[ {\pi }^{-1}\left( {U}_{j}\right) \cong {U}_{j} \times {\mathbb{C}}^{ * } \] This shows that locally \( {\mathbb{C}}_{m + 1} - \{ 0\} \) has a product structure, and \( {z}^{j} \) gives the coordinate on the fiber \( {\pi }^{-1}\left( p\right) \) . If \( p \in {U}_{j} \cap {U}_{k} \), then \( {U}_{j} \) and \( {U}_{k} \) give coordinate systems \( {z}^{j} \) and \( {z}^{k} \), respectively, on \( {\pi }^{-1}\left( p\right) \) . At the same point \( x \in {\pi }^{-1}\left( p\right) \), there is a relation between the two coordinates \( {z}^{j} \) and \( {z}^{k} \) : \[ {z}^{j} = {z}^{k} \cdot {}_{k}{\zeta }^{j} = {z}^{k}/{}_{j}{\zeta }^{k} \] (1.12) where \( {}_{k}{\zeta }^{j} : {U}_{j} \cap {U}_{k} \rightarrow {\mathbb{C}}^{ * } \) is a nonzero holomorphic function on \( {U}_{j} \cap {U}_{k}. \) Hence \( {\mathbb{C}}_{m + 1} - \{ 0\} \) is a holomorphic fiber bundle over the \( m \) -dimensional complex projective space \( \mathbb{C}{P}_{m} \) whose typical fiber and structure group are both \( {\mathbb{C}}^{ * } \) . Consider the following equation in \( {\mathbb{C}}_{m + 1} \) : \[ \mathop{\sum }\limits_{{k = 0}}^{m}{z}^{k}{\bar{z}}^{k} = 1 \] \( \left( {1.13}\right) \) If we view \( {\mathbb{C}}_{m + 1} \) as the real vector space \( {\mathbb{R}}^{2\left( {m + 1}\right) } \), then equation (1.13) defines a \( \left( {{2m} + 1}\right) \) -dimensional unit sphere \( {S}^{{2m} + 1} \) in \( {\mathbb{R}}^{2\left( {m + 1}\right) } \) . (To distinguish these structures, we denote the real dimension on the upper right and the complex dimension on the lower right.) Restricting the natural projection (1.11) to \( {S}^{{2m} + 1} \), we get \[ \pi : {S}^{{2m} + 1} \rightarrow \mathbb{C}{P}_{m} \] (1.14) For any \( p \in \mathbb{C}{P}_{m} \), the complete preimage \( {\pi }^{-1}\left( p\right) \) is a circle. The map \( \pi \) is called the Hopf fibering of \( {S}^{{2m} + 1} \) . When \( m = 1 \), (1.14) can be written \[ \pi : {S}^{3} \rightarrow \mathbb{C}{P}_{1} \approx {S}^{2} \] \( \left( {1.15}\right) \) where \( \mathbb{C}{P}_{1} \) is topologically homeomorphic to \( {S}^{2} \) . 
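To make the Hopf fibering (1.15) concrete, here is a small numerical sketch (an added illustration, not part of the original text; it assumes Python with NumPy). A point of \( {S}^{3} \subset {\mathbb{C}}_{2} \) is sent to \( \left\lbrack {z}^{0},{z}^{1}\right\rbrack \in \mathbb{C}{P}_{1} \), represented in the chart \( {U}_{0} \) by the affine coordinate \( {}_{0}{\zeta }^{1} = {z}^{1}/{z}^{0} \); multiplying \( \left( {z}^{0},{z}^{1}\right) \) by a unit complex number moves along the fiber circle and leaves the image unchanged.

```python
import numpy as np

def hopf(z0, z1):
    """pi : S^3 -> CP_1 expressed in the chart U_0 (assumes z0 != 0):
    (z0, z1) |-> the affine coordinate z1 / z0."""
    return z1 / z0

# A random point of S^3, viewed as a unit vector in C_2 = R^4.
rng = np.random.default_rng(0)
v = rng.normal(size=4)
v /= np.linalg.norm(v)
z0, z1 = complex(v[0], v[1]), complex(v[2], v[3])

# Move along the fiber pi^{-1}(p): multiply both coordinates by e^{i theta}.
theta = 0.7
w0, w1 = np.exp(1j * theta) * z0, np.exp(1j * theta) * z1

print(np.isclose(abs(w0) ** 2 + abs(w1) ** 2, 1.0))  # still on S^3
print(np.isclose(hopf(z0, z1), hopf(w0, w1)))         # same point of CP_1
```

Composing with the identification of \( {U}_{0} \cup \{ p\} \) with \( {S}^{2} \) (for instance by inverse stereographic projection) exhibits (1.15) as a map \( {S}^{3} \rightarrow {S}^{2} \) whose fibers are circles.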
The Hopf fibering (1.15) is an example of an essential map\( {}^{\mathrm{b}} \) from a higher-dimensional space to a lower-dimensional one, and it plays an important historical role in the development of homotopy theory in topology. Example 3. The locus in \( \mathbb{C}{P}_{m} \) determined by the system of equations \[ {P}_{l}\left( {{z}^{0},{z}^{1},\ldots ,{z}^{m}}\right) = 0,\;1 \leq l \leq q, \] where each \( {P}_{l} \) is a homogeneous polynomial in the homogeneous coordinates, is called an algebraic variety. For instance, the complex submanifold of \( \mathbb{C}{P}_{m} \) given by the equation \[ {\left( {z}^{0}\right) }^{2} + \cdots + {\left( {z}^{m}\right) }^{2} = 0 \] (1.16) is called a hyperquadric. A theorem of W.-L. Chow (Chow's theorem) states that every compact complex submanifold imbedded in \( \mathbb{C}{P}_{m} \) is an algebraic variety. Example 4 (Complex torus). \( {\mathbb{C}}_{m} \) can be viewed as a \( {2m} \) -dimensional real vector space \( {\mathbb{R}}^{2m} \) . Choose \( {2m} \) real-linearly independent vectors \( \left\{ {v}_{\alpha }\right\} \) in \( {\mathbb{R}}^{2m} \) . They generate the lattice \[ L = \left\{ {\mathop{\sum }\limits_{{\alpha = 1}}^{{2m}}{n}_{\alpha }{v}_{\alpha },\;{n}_{\alpha } \in \mathbb{Z}}\right\} \] (1.17) \( {\mathbb{C}}_{m} \) and \( L \) are both groups under addition. The quotient group \( {\mathbb{C}}_{m}/L \) is an \( m \) -dimensional complex manifold, called the \( m \) -dimensional complex torus. Topologically, the \( m \) -dimensional complex torus and the \( {2m} \) -dimensional real torus are homeomorphic. Yet the former has a complex manifold structure, and thus has a richer structure. For instance, when \( m = 1 \), a holomorphic map from a complex torus to itself is conformal (angle-preserving). Hence the angle between two vectors \( {v}_{1},{v}_{2} \) and the ratio of their lengths are invariant under holomorphic maps. If a complex torus can be imbedded in a complex projective space as a non-singular submanifold, then for a sufficiently large \( N \) there exists a nondegenerate holomorphic map \[ f : {\mathbb{C}}_{m}/L \rightarrow \mathbb{C}{P}_{N} \] (1.18) Such a complex torus is called an Abelian variety. The study of Abelian varieties forms an important branch of algebraic geometry and number theory. \( {}^{\mathrm{b}} \) A continuous map \( f : X \rightarrow Y \) is called essential if it is not homotopic to a constant map \( X \rightarrow {y}_{0} \in Y \), that is, not homotopic to zero. FIGURE 13. Example 5 (Hopf Manifolds). Consider the transformation \( \alpha : {\mathbb{C}}_{m} - \{ 0\} \rightarrow {\mathbb{C}}_{m} - \{ 0\} \) such that \[ \alpha \left( {{z}^{1},\ldots ,{z}^{m}}\right) = 2\left( {{z}^{1},\ldots ,{z}^{m}}\right) . \] (1.19) Let the discrete group generated by \( \alpha \) be denoted by \( \Delta \) . Then the quotient space \( \left( {{\mathbb{C}}_{m}-\{ 0\} }\right) /\Delta \) is an \( m \) -dimensional complex manifold, called a Hopf manifold. Topologi
1042_(GTM203)The Symmetric Group
Definition 5.5.1
Definition 5.5.1 A graph, \( \Gamma \), consists of a finite set of vertices \( V = V\left( \Gamma \right) \) and a set \( E = E\left( \Gamma \right) \) of edges, which are unordered pairs of vertices. An edge connecting vertices \( u \) and \( v \) will be denoted by \( {uv} \) . In this case we say that \( u \) and \( v \) are neighbors. - As an example, the graph in the following figure has \[ V = \left\{ {{v}_{1},{v}_{2},{v}_{3},{v}_{4}}\right\} \;\text{ and }\;E = \left\{ {{e}_{1},{e}_{2},{e}_{3},{e}_{4}}\right\} = \left\{ {{v}_{1}{v}_{2},{v}_{1}{v}_{3},{v}_{2}{v}_{3},{v}_{3}{v}_{4}}\right\} \] so \( {v}_{1} \) is a neighbor of \( {v}_{2} \) and \( {v}_{3} \) but not of \( {v}_{4} \) : [Figure: the graph with vertices \( {v}_{1},{v}_{2},{v}_{3},{v}_{4} \) and edges \( {e}_{1} = {v}_{1}{v}_{2} \), \( {e}_{2} = {v}_{1}{v}_{3} \), \( {e}_{3} = {v}_{2}{v}_{3} \), \( {e}_{4} = {v}_{3}{v}_{4} \).] Definition 5.5.2 A coloring of a graph \( \Gamma \) from a color set \( C \) is a function \( \kappa : V \rightarrow C \) . The coloring is proper if it satisfies \[ {uv} \in E \Rightarrow \kappa \left( u\right) \neq \kappa \left( v\right) \text{. ∎} \] So a coloring just assigns a color to each vertex, and it is proper if no two vertices of the same color are connected by an edge. One coloring of our graph from the color set \( C = \{ 1,2,3\} \) is \[ \kappa \left( {v}_{1}\right) = \kappa \left( {v}_{2}\right) = \kappa \left( {v}_{4}\right) = 1\;\text{ and }\;\kappa \left( {v}_{3}\right) = 2. \] It is not proper, since both vertices of edge \( {v}_{1}{v}_{2} \) have the same color. The proper colorings of \( \Gamma \) are exactly those where \( {v}_{1},{v}_{2},{v}_{3} \) all have distinct colors and \( {v}_{4} \) has a color different from that of \( {v}_{3} \) . Perhaps the most famous (and controversial because of its computer proof by Appel and Haken) theorem about proper colorings is the Four Color Theorem. It is equivalent to the fact that one can always color a map with four colors so that adjacent countries are colored differently. Theorem 5.5.3 (Four Color Theorem [A-H 76]) If graph \( \Gamma \) is planar (i.e., can be drawn in the plane without edge crossings), then there is a proper coloring \( \kappa : V \rightarrow \{ 1,2,3,4\} \) . ∎ Definition 5.5.4 The chromatic polynomial of \( \Gamma \) is defined to be \[ {P}_{\Gamma } = {P}_{\Gamma }\left( n\right) = \text{the number of proper}\kappa : V \rightarrow \{ 1,2,\ldots, n\} \text{. ∎} \] The chromatic polynomial provides an interesting tool for studying proper colorings. Note that Theorem 5.5.3 can be restated: If \( \Gamma \) is planar, then \( {P}_{\Gamma }\left( 4\right) \geq 1 \) . To compute \( {P}_{\Gamma }\left( n\right) \) in our running example, if we color the vertices in order of increasing subscript, then there are \( n \) ways to color \( {v}_{1} \) . This leaves \( n - 1 \) choices for \( {v}_{2} \), since \( {v}_{1}{v}_{2} \in E \), and \( n - 2 \) for \( {v}_{3} \) . Finally, there are \( n - 1 \) colors available for \( {v}_{4} \), namely any except \( \kappa \left( {v}_{3}\right) \) . So in this case \( {P}_{\Gamma }\left( n\right) = n{\left( n - 1\right) }^{2}\left( {n - 2}\right) \) is a polynomial in \( n \) . This is always true, as the terminology suggests, but it is not obvious. The interested reader will find two different proofs in Exercises 28 and 29. In order to bring symmetric functions into play, one needs to generalize the chromatic polynomial. This motivates the following definition of Stanley. 
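Before turning to that definition, the value of \( {P}_{\Gamma }\left( n\right) \) computed above for the running example can be confirmed by brute force. The following short Python sketch (an added illustration, not part of the text) counts proper colorings directly and checks the formula \( {P}_{\Gamma }\left( n\right) = n{\left( n - 1\right) }^{2}\left( {n - 2}\right) \).

```python
from itertools import product

# The running example: V = {v1, v2, v3, v4}, E = {v1v2, v1v3, v2v3, v3v4}.
V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (2, 3), (3, 4)]

def P(n):
    """Number of proper colorings kappa : V -> {1, ..., n}, counted directly."""
    count = 0
    for assignment in product(range(1, n + 1), repeat=len(V)):
        kappa = dict(zip(V, assignment))
        if all(kappa[u] != kappa[v] for u, v in E):
            count += 1
    return count

for n in range(1, 6):
    assert P(n) == n * (n - 1) ** 2 * (n - 2)
print([P(n) for n in range(1, 6)])  # [0, 0, 12, 72, 240]
```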
Definition 5.5.5 ([Stn 95]) If \( \Gamma \) has vertex set \( V = \left\{ {{v}_{1},\ldots ,{v}_{d}}\right\} \), then the chromatic symmetric function of \( \Gamma \) is defined to be \[ {X}_{\Gamma } = {X}_{\Gamma }\left( \mathbf{x}\right) = \mathop{\sum }\limits_{{\kappa : V \rightarrow \mathbb{P}}}{x}_{\kappa \left( {v}_{1}\right) }{x}_{\kappa \left( {v}_{2}\right) }\cdots {x}_{\kappa \left( {v}_{d}\right) }, \] where the sum is over all proper colorings \( \kappa \) with colors from \( \mathbb{P} \), the positive integers. - It will be convenient to let \( {\mathbf{x}}^{\kappa } = {x}_{\kappa \left( {v}_{1}\right) }{x}_{\kappa \left( {v}_{2}\right) }\cdots {x}_{\kappa \left( {v}_{d}\right) } \) . If we return to our example, we see that given any four colors we can assign them to \( V \) in \( 4! = {24} \) ways. If we have three colors with one specified as being used twice, then it is not hard to see that there are 4 ways to color \( \Gamma \) . Since these are the only possibilities, \[ \begin{matrix} {X}_{\Gamma }\left( \mathbf{x}\right) & = & {24}{x}_{1}{x}_{2}{x}_{3}{x}_{4} + {24}{x}_{1}{x}_{2}{x}_{3}{x}_{5} + \cdots + 4{x}_{1}^{2}{x}_{2}{x}_{3} + 4{x}_{1}{x}_{2}^{2}{x}_{3} + \cdots \end{matrix} \] \[ = {24}{m}_{{1}^{4}} + 4{m}_{2,{1}^{2}}\text{.} \] We next collect a few elementary properties of \( {X}_{\Gamma } \), including its connection to the chromatic polynomial. Proposition 5.5.6 ([Stn 95]) Let \( \Gamma \) be any graph with vertex set \( V \) . 1. \( {X}_{\Gamma } \) is homogeneous of degree \( d = \left| V\right| \) . 2. \( {X}_{\Gamma } \) is a symmetric function. 3. If we set \( {x}_{1} = \cdots = {x}_{n} = 1 \) and \( {x}_{i} = 0 \) for \( i > n \), written \( \mathbf{x} = {1}^{n} \), then \[ {X}_{\Gamma }\left( {1}^{n}\right) = {P}_{\Gamma }\left( n\right) \] Proof. 1. Every monomial in \( {X}_{\Gamma } \) has a factor for each vertex. 2. Any permutation of the colors of a proper coloring gives another proper coloring. This means that permuting the subscripts of \( {X}_{\Gamma } \) leaves the function invariant. And since it is homogeneous of degree \( d \), it is a finite sum of monomial symmetric functions. 3. With the given specialization, each monomial of \( {X}_{\Gamma } \) becomes either 1 or 0 . The surviving monomials are exactly those that use only the first \( n \) colors. So their sum is \( {P}_{\Gamma }\left( n\right) \) by definition of the chromatic polynomial as the number of such colorings. - Since \( {X}_{\Gamma } \in \Lambda \), we can consider its expansion in the various bases introduced in Chapter 4. In order to do so, we must first talk about partitions of a set \( S \) . Definition 5.5.7 A set partition \( \beta = {B}_{1}/{\mathcal{B}}_{2}/\ldots /{B}_{l} \) of \( S,\beta \vdash S \), is a collection of subsets, or blocks, \( {B}_{1},\ldots ,{B}_{l} \) whose disjoint union is \( S \) . The type of \( \beta \) is the integer partition \[ \lambda \left( \beta \right) = \left( {\left| {B}_{1}\right| ,\left| {B}_{2}\right| ,\ldots ,\left| {B}_{l}\right| }\right) \] where we assume that the \( {B}_{i} \) are listed in weakly decreasing order of size. ∎ For example, the partitions of \( \{ 1,2,3,4\} \) with two blocks are \[ 1,2,3/4;1,2,4/3;1,3,4/2;2,3,4/1;1,2/3,4;1,3/2,4;1,4/2,3\text{.} \] The first four are of type \( \left( {3,1}\right) \), while the last three are of type \( \left( {2}^{2}\right) \) . Let us first talk about the expansion of \( {X}_{\Gamma } \) in terms of monomial symmetric functions. 
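The coefficients 24 and 4 in the expansion of \( {X}_{\Gamma } \) just computed can also be checked by machine. The sketch below (an added illustration, not part of the text) extracts the coefficient of a single monomial of each type by enumerating proper colorings that use a prescribed multiset of colors; by symmetry these are the coefficients of \( {m}_{{1}^{4}} \) and \( {m}_{2,{1}^{2}} \).

```python
from itertools import product
from collections import Counter

V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (2, 3), (3, 4)]

def monomial_coefficient(color_multiset):
    """Coefficient in X_Gamma of the monomial x^kappa determined by the given
    multiset of colors, e.g. (1, 2, 3, 4) for x1 x2 x3 x4 and (1, 1, 2, 3)
    for x1^2 x2 x3: count proper colorings using exactly these colors."""
    target = Counter(color_multiset)
    count = 0
    for assignment in product(sorted(set(color_multiset)), repeat=len(V)):
        kappa = dict(zip(V, assignment))
        if Counter(assignment) == target and \
           all(kappa[u] != kappa[v] for u, v in E):
            count += 1
    return count

print(monomial_coefficient((1, 2, 3, 4)))  # 24, the coefficient of m_{1^4}
print(monomial_coefficient((1, 1, 2, 3)))  # 4, the coefficient of m_{2,1^2}
```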
Definition 5.5.8 An independent, or stable, set of \( \Gamma \) is a subset \( W \subseteq V \) such that there is no edge \( {uv} \) with both of its vertices in \( W \) . We call a partition \( \beta \vdash V \) independent or stable if all of its blocks are. We let \( {i}_{\lambda } = {i}_{\lambda }\left( V\right) = \) the number of independent partitions of \( V \) of type \( \lambda \) . ∎ Continuing with our example graph, the independent partitions of \( V \) are \[ {v}_{1}/{v}_{2}/{v}_{3}/{v}_{4};{v}_{1},{v}_{4}/{v}_{2}/{v}_{3};{v}_{2},{v}_{4}/{v}_{1}/{v}_{3} \] \( \left( {5.15}\right) \) having types \( \left( {1}^{4}\right) \) and \( \left( {2,{1}^{2}}\right) \) . The fact that these are exactly the partitions occurring in the previous expansion of \( {X}_{\Gamma } \) into monomial symmetric functions is no accident. To state the actual result it will be convenient to associate with a partition \( \lambda = \left( {{1}^{{m}_{1}},\ldots ,{d}^{{m}_{d}}}\right) \) the integer \[ {y}_{\lambda }\overset{\text{ def }}{ = }{m}_{1}!{m}_{2}!\cdots {m}_{d}! \] Proposition 5.5.9 ([Stn 95]) The expansion of \( {X}_{\Gamma } \) in terms of monomial symmetric functions is \[ {X}_{\Gamma } = \mathop{\sum }\limits_{{\lambda \vdash d}}{i}_{\lambda }{y}_{\lambda }{m}_{\lambda } \] Proof. In any proper coloring, the set of all vertices of a given color forms an independent set. So given \( \kappa : V \rightarrow \mathbb{P} \) proper, the nonempty sets \( {\kappa }^{-1}\left( i\right) \) , \( i \in \mathbb{P} \), form an independent partition \( \beta \vdash V \) . Thus the coefficient of \( {\mathbf{x}}^{\lambda } \) in \( {X}_{\Gamma } \) is just the number of ways to choose \( \beta \) of type \( \lambda \) and then assign colors to the blocks so as to give this monomial. There are \( {i}_{\lambda } \) possibilities for the first step. Also, colors can be permuted among the blocks of a fixed size without changing \( {\mathbf{x}}^{\lambda } \), which gives the factor of \( {y}_{\lambda } \) for the second. - To illustrate this result in our example, the list (5.15) of independent partitions shows that \[ {X}_{\Gamma } = {i}_{\left( {1}^{4}\right) }{y}_{\left( {1}^{4}\right) }{m}_{\left( {1}^{4}\right) } + {i}_{\left( 2,{1}^{2}\right) }{y}_{\left( 2,{1}^{2}\right) }{m}_{\left( 2,{1}^{2}\right) } = 1 \cdot 4!{m}_{\left( {1}^{4}\right) } + 2 \cdot 2!{m}_{\left( 2,{1}^{2}\right) }, \] which agrees with our previous calculations. We now turn to the expansion of \( {X}_{\Gamma } \) in terms of the power sum symmetric functions. If \( F \subseteq E \) is a set of edges, it will be convenient to let \( F \) also stand for the subgraph of \( \Gamma \) with vertex set \( V \) and edge set \( F \) . Also, by the components of a graph \( \Gamma \) we will mean the topologically connected components. These components determine a partition \( \beta \left( F\right) \) of the vertex set, whose type will be denoted by \( \lambda \left( F\right) \) . In our usual
1172_(GTM8)Axiomatic Set Theory
Definition 6.7
Definition 6.7. A B-valued structure \( \mathbf{A} = \left\langle {A,\bar{ = },{\bar{R}}_{0},\ldots ,{\bar{c}}_{0},\ldots }\right\rangle \) is separated iff \[ \left( {\forall {a}_{1},{a}_{2} \in A}\right) \left\lbrack {\llbracket {a}_{1} = {a}_{2}\rrbracket = 1 \rightarrow {a}_{1} = {a}_{2}}\right\rbrack . \] Remark. Every B-valued structure \( \mathbf{A} \) is equivalent to a separated B-valued structure \( \left\langle {\widehat{A}, \triangleq ,{\widehat{R}}_{0},\ldots ,{\widehat{c}}_{0},\ldots }\right\rangle \) obtained from \( \mathbf{A} \) by considering the equivalence classes of the relation \( \left\{ {\langle a, b\rangle \in {A}^{2} \mid \llbracket a = b\rrbracket = 1}\right\} \) . If \( \widehat{A} \) is the set of these equivalence classes there are B-valued relations, \( \triangleq {\widehat{R}}_{0},\ldots \), on \( \widehat{A} \) and members \( {\widehat{c}}_{0},\ldots \) of \( \widehat{A} \) (which are uniquely determined) such that for every formula \( \varphi \) of \( \mathcal{L} \) and any \( {a}_{1},\ldots ,{a}_{n} \in A \) \[ \llbracket \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) {\rrbracket }_{\mathbf{A}} = \llbracket \varphi \left( {{\widehat{a}}_{1},\ldots ,{\widehat{a}}_{n}}\right) {\rrbracket }_{\widehat{\mathbf{A}}} \] where \( {\widehat{a}}_{i} \) is the equivalence class containing \( {a}_{i} \) . Definition 6.8. A partition of unity is an indexed family \( \left\langle {{b}_{i} \mid i \in I}\right\rangle \) of elements of \( B \) such that \[ \mathop{\sum }\limits_{{i \in I}}{b}_{i} = \mathbf{1} \land \left( {\forall i, j \in I}\right) \left\lbrack {i \neq j \rightarrow {b}_{i}{b}_{j} = \mathbf{0}}\right\rbrack . \] A B-valued structure \( \mathbf{A} = \left\langle {A,\bar{ = },{\bar{R}}_{0},\ldots ,{\bar{c}}_{0},\ldots }\right\rangle \) is complete iff whenever \( \left\langle {{b}_{i} \mid i \in I}\right\rangle \) is a partition of unity and \( \left\langle {{a}_{i} \mid i \in I}\right\rangle \) is any family of elements of \( A \) then \[ \left( {\exists a \in A}\right) \left( {\forall i \in I}\right) \left\lbrack {{b}_{i} \leq \left\lbrack {a = {a}_{i}}\right\rbrack }\right\rbrack . \] Remark. The set \( a \), of Definition 6.8, is unique in the following sense. Theorem 6.9. If \( \left\langle {{b}_{i} \mid i \in I}\right\rangle \) is a partition of unity, if \[ \left( {\forall i \in I}\right) \left\lbrack {{b}_{i} \leq \left\lbrack {a = {a}_{i}}\right\rbrack }\right\rbrack \] and \[ \left( {\forall i \in I}\right) \left\lbrack {{b}_{i} \leq \left\lbrack {{a}^{\prime } = {a}_{i}}\right\rbrack }\right\rbrack \] then \( \llbracket a = {a}^{\prime }\rrbracket = 1 \) . \[ \text{Proof.}{b}_{i} \leq \llbracket a = {a}_{i}\rrbracket \left\lbrack {{a}^{\prime } = {a}_{i}}\right\rbrack \leq \llbracket a = {a}^{\prime }\rrbracket, i \in I \] \[ 1 = \mathop{\sum }\limits_{{i \in I}}{b}_{i} \leq \left\lbrack {a = {a}^{\prime }}\right\rbrack \leq 1. \] Hence \( \llbracket a = {a}^{\prime }\rrbracket = 1 \) . Remark. This unique \( a \) is sometimes denoted by \( \mathop{\sum }\limits_{{i \in I}}{b}_{i}{a}_{i} \) . We next provide an example of a B-valued structure that is separated and complete. Example. Define \( \mathbf{A} = \langle B, \equiv \rangle \) by \[ \left\lbrack {{b}_{1} = {b}_{2}}\right\rbrack = {b}_{1}{b}_{2} + \left( {-{b}_{1}}\right) \left( {-{b}_{2}}\right) \] i.e., \( \llbracket {b}_{1} = {b}_{2}\rrbracket \) is the Boolean complement of the symmetric difference of \( {b}_{1} \) and \( {b}_{2} \) . 
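Before verifying that this structure is separated and complete, it may help to compute in a concrete finite instance (an added illustration, not part of the text): take \( \mathbf{B} \) to be the power-set Boolean algebra of a four-element set, with Boolean product, sum, and complement given by intersection, union, and set complement. The Python sketch below checks that \( \llbracket {b}_{1} = {b}_{2}\rrbracket \) is the complement of the symmetric difference, that \( \llbracket {b}_{1} = {b}_{2}\rrbracket = \mathbf{1} \) forces \( {b}_{1} = {b}_{2} \), and that \( a = \mathop{\sum }\limits_{i}{b}_{i}{a}_{i} \) satisfies \( {b}_{i} \leq \llbracket a = {a}_{i}\rrbracket \) for a partition of unity \( \left\langle {b}_{i}\right\rangle \).

```python
from itertools import combinations

U = frozenset(range(4))          # B = power set of U, with 1 = U and 0 = empty set
B = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

meet = lambda x, y: x & y        # Boolean product
join = lambda x, y: x | y        # Boolean sum
comp = lambda x: U - x           # Boolean complement

def eq(b1, b2):
    """[[b1 = b2]] = b1*b2 + (-b1)*(-b2)."""
    return join(meet(b1, b2), meet(comp(b1), comp(b2)))

# [[b1 = b2]] is the complement of the symmetric difference ...
assert all(eq(b1, b2) == comp(b1 ^ b2) for b1 in B for b2 in B)
# ... so [[b1 = b2]] = 1 implies b1 = b2 (separatedness).
assert all(b1 == b2 for b1 in B for b2 in B if eq(b1, b2) == U)

# Completeness: for a partition of unity <b_i> and any <a_i>, a = sum_i b_i*a_i works.
bs = [frozenset({0}), frozenset({1, 2}), frozenset({3})]   # pairwise disjoint, sum = 1
as_ = [frozenset({0, 3}), frozenset(), frozenset({1, 2, 3})]
a = frozenset().union(*(meet(b, ai) for b, ai in zip(bs, as_)))
assert all(b <= eq(a, ai) for b, ai in zip(bs, as_))       # b_i <= [[a = a_i]]
print("separated and complete on this example")
```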
It is easily proved that \( \mathbf{A} \) is a \( \mathbf{B} \) -valued structure that satisfies 1–3 of Definition 6.5. Since \[ \llbracket {b}_{1} = {b}_{2}\rrbracket = 1 \rightarrow {b}_{1}{b}_{2} + \left( {-{b}_{1}}\right) \left( {-{b}_{2}}\right) = 1 \] \[ \rightarrow {b}_{1}{b}_{2} = - \left( {-{b}_{1}}\right) \left( {-{b}_{2}}\right) = {b}_{1} + {b}_{2} \] \[ \rightarrow {b}_{1}{b}_{2} = {b}_{1} \land {b}_{2}{b}_{1} = {b}_{2} \] \[ \rightarrow {b}_{2} \leq {b}_{1} \land {b}_{1} \leq {b}_{2} \] \[ \rightarrow {b}_{1} = {b}_{2} \] \( \mathbf{A} \) is separated. If \( \left\langle {{b}_{i} \mid i \in I}\right\rangle \) is a partition of unity and \( \left\langle {{a}_{i} \mid i \in I}\right\rangle \) is a family of elements of \( B \), then since \( \mathbf{B} \) is complete \[ a \triangleq \mathop{\sum }\limits_{{i \in I}}{b}_{i}{a}_{i} \in B \] Then \[ {b}_{i}\llbracket a = {a}_{i}\rrbracket = {b}_{i}{a}_{i}\mathop{\sum }\limits_{{j \in I}}{a}_{j}{b}_{j} + {b}_{i}\left( {-{a}_{i}}\right) \left( {-\mathop{\sum }\limits_{{j \in I}}{a}_{j}{b}_{j}}\right) \] \[ = \left( {{a}_{i}{b}_{i} + \left( {-{a}_{i}}\right) {b}_{i}}\right) \left( {\mathop{\sum }\limits_{{j \in I}}{a}_{j}{b}_{j} + - \mathop{\sum }\limits_{{j \in I}}{a}_{j}{b}_{j}}\right) . \] But the \( {b}_{i} \) ’s are pairwise disjoint. Then \[ \left( {-{a}_{i}}\right) {b}_{i}\mathop{\sum }\limits_{{j \in I}}{a}_{j}{b}_{j} = \left( {-{a}_{i}}\right) {a}_{i}{b}_{i} = 0 \] \[ {a}_{i}{b}_{i}\left( {-\mathop{\sum }\limits_{{j \in I}}{a}_{j}{b}_{j}}\right) = 0 \] Then \[ {b}_{i}\llbracket a = {a}_{i}\rrbracket = {a}_{i}{b}_{i} + \left( {-{a}_{i}}\right) {b}_{i} = {b}_{i}. \] Therefore \( \left( {\forall i \in I}\right) \left\lbrack {{b}_{i} \leq \llbracket a = {a}_{i}\rrbracket }\right\rbrack \) i.e., \( \mathbf{A} \) is complete. ## 7. Relative Constructibility Gödel's constructibility was generalized, in a natural way, by Levy and Shoenfield to a relative constructibility which assures us of the existence of a standard transitive model \( L\left\lbrack a\right\rbrack \) of \( {ZF} \) for each set \( a \) . Levy-Shoenfield’s relative constructibility is rather narrow but quite easily generalized. In this section we will study a general theory of relative constructibility and deal with several basic relative constructibilities as special cases. Later we will extend our relative constructibility to Boolean valued relative constructibility, from which we will in turn define forcing. There is a modern tendency to avoid the rather cumbersome theory of relative constructibility. We believe this to be a mistake. Although we do not pursue the subject, it is clear that one can consider wider and wider types of relative constructibility. Accordingly, we have many types of Boolean valued relative constructibility. We feel that these sometimes wild Boolean valued relative constructibilities might be very important for future work. Indeed, it is not at all clear whether the structures they produce can be constructed by the usual method of Scott-Solovay's Boolean valued models without using relative constructibility. If \( a \) and \( b \) are sets, there are two different definitions of the notion " \( b \) is constructible from \( a \) ", namely \( b \in {L}_{a} \) or \( b \in L\left\lbrack a\right\rbrack \), where \( {L}_{a} \) is the smallest class \( M \) satisfying 1. \( M \) is a standard transitive model of \( {ZF} \) . 2. \( {On} \subseteq M \) . 3. \( \left( {\forall x \in M}\right) \left\lbrack {x \cap a \in M}\right\rbrack \) . 
\( L\left\lbrack a\right\rbrack \) is the smallest class \( M \) satisfying 1. \( M \) is a standard transitive model of \( {ZF} \) . 2. \( {On} \subseteq M \) . 3. \( a \in M \) . Obviously, \( {L}_{a} \subseteq L\left\lbrack a\right\rbrack \) . In this section we will show, by a modification of Gödel's methods used to define the class \( L \) of constructible sets, that the classes \( {L}_{a} \) and \( L\left\lbrack a\right\rbrack \) exist. It should be noted that neither the characterization of \( {L}_{a} \) nor of \( L\left\lbrack a\right\rbrack \) can be formalized in \( {ZF} \) . The main difference between \( {L}_{a} \) and \( L\left\lbrack a\right\rbrack \), as we will see, is that \( {L}_{a} \) satisfies the \( {AC} \) while \( L\left\lbrack a\right\rbrack \) need not. Since we will eventually wish to prove the independence of the \( {AC} \) from the axioms of \( {ZF} \) using results of this and later sections, we must exercise care to avoid the use of the \( {AC} \) in proving the following results. It is of interest to consider a slightly more general situation allowing \( a \) to be a proper class \( A \) . \( {L}_{A} \) can be characterized exactly as \( {L}_{a} \) was. For \( L\left\lbrack A\right\rbrack \) there is however a problem in that we cannot have \( A \in M \) . Instead we define: \[ L\left\lbrack A\right\rbrack = \mathop{\bigcup }\limits_{{\alpha \in \mathrm{{On}}}}L\left\lbrack {A \cap {R}^{\prime }\alpha }\right\rbrack \] We first develop a general theory that allows us to treat \( {L}_{A} \) and \( L\left\lbrack A\right\rbrack \) simultaneously. Let \( \mathcal{L} \) be a language with predicate constants \[ {R}_{0},\ldots ,{R}_{n} \] and individual constants \[ {c}_{0},\ldots ,{c}_{m} \] Some results of this section remain true if we allow \( \mathcal{L} \) to have an arbitrary well-ordered set (possibly even uncountably many) of constants. Definition 7.1. If \( \mathbf{A} = \left\langle {A,{R}_{0}{}^{\mathbf{A}},\ldots ,{R}_{n}{}^{\mathbf{A}},{c}_{0}{}^{\mathbf{A}},\ldots ,{c}_{m}{}^{\mathbf{A}}}\right\rangle \) and \( \mathbf{B} = \left\langle {B,{R}_{0}{}^{\mathbf{B}},\ldots ,{R}_{n}{}^{\mathbf{B}},{c}_{0}{}^{\mathbf{B}},\ldots ,{c}_{m}{}^{\mathbf{B}}}\right\rangle \) are two structures for the language \( \mathcal{L} \), then \( \mathbf{A} \) is a substructure of \( \mathbf{B},\left( {\mathbf{A} \subseteq \mathbf{B}}\right) \) iff 1. \( A \subseteq B \) . 2. For each \( {R}_{i}, i = 0,\ldots, n \), if \( {R}_{i} \) is \( p \) -ary then \( \forall {a}_{1},\ldots ,{a}_{p} \in A \) \[ {R}_{i}{}^{\mathbf{A}}\left( {{a}_{1},\ldots ,{a}_{p}}\right) \leftrightarrow {R}_{i}{}^{\mathbf{B}}\left( {{a}_{1},\ldots ,{a}_{p}}\right) . \] 3. \( {c}_{j}{}^{\mathbf{A}} = {c}_{j}{}^{\mathbf{B}}\;j = 0,\ldots, m \) . Exercise. If \( \mathbf{B} = \left\langle {B,{R}_{0}{}^{\mathbf{B}},\ldots ,{R}_{n}{}^{\mathbf{B}},{c}_{0}{}
1099_(GTM255)Symmetry, Representations, and Invariants
Definition 6.1.1
Definition 6.1.1. A Clifford algebra for \( \left( {V,\beta }\right) \) is an associative algebra Cliff \( \left( {V,\beta }\right) \) with unit 1 over \( \mathbb{C} \) and a linear map \( \gamma : V \rightarrow \operatorname{Cliff}\left( {V,\beta }\right) \) satisfying the following properties: (C1) \( \{ \gamma \left( x\right) ,\gamma \left( y\right) \} = \beta \left( {x, y}\right) 1 \) for \( x, y \in V \), where \( \{ a, b\} = {ab} + {ba} \) is the anticommutator of \( a, b \) . (C2) \( \gamma \left( V\right) \) generates \( \operatorname{Cliff}\left( {V,\beta }\right) \) as an algebra. (C3) Given any complex associative algebra \( \mathcal{A} \) with unit element 1 and a linear map \( \varphi : V \rightarrow \mathcal{A} \) such that \( \{ \varphi \left( x\right) ,\varphi \left( y\right) \} = \beta \left( {x, y}\right) 1 \), there exists an associative algebra homomorphism \( \widetilde{\varphi } : \operatorname{Cliff}\left( {V,\beta }\right) \rightarrow \mathcal{A} \) such that \( \varphi = \widetilde{\varphi } \circ \gamma \) ; that is, the triangle \( V \rightarrow \operatorname{Cliff}\left( {V,\beta }\right) \rightarrow \mathcal{A} \) commutes. It is easy to see that an algebra satisfying properties (C1), (C2), and (C3) is unique (up to isomorphism). Indeed, if \( \mathcal{C} \) and \( {\mathcal{C}}^{\prime } \) are two such algebras with associated linear maps \( \gamma : V \rightarrow \mathcal{C} \) and \( {\gamma }^{\prime } : V \rightarrow {\mathcal{C}}^{\prime } \), then property (C3) provides algebra homomorphisms \( \widetilde{{\gamma }^{\prime }} : \mathcal{C} \rightarrow {\mathcal{C}}^{\prime } \) and \( \widetilde{\gamma } : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \) such that \( {\gamma }^{\prime } = \widetilde{{\gamma }^{\prime }} \circ \gamma \) and \( \gamma = \widetilde{\gamma } \circ {\gamma }^{\prime } \) . It follows that \( \widetilde{\gamma } \circ \widetilde{{\gamma }^{\prime }} \) is the identity map on \( \gamma \left( V\right) \) and hence it is the identity map on \( \mathcal{C} \) by property (C2). Likewise, \( \widetilde{{\gamma }^{\prime }} \circ \widetilde{\gamma } \) is the identity map on \( {\mathcal{C}}^{\prime } \) . This shows that \( \widetilde{\gamma } : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \) is an algebra isomorphism. To prove existence of a Clifford algebra, we start with the tensor algebra \( \mathcal{T}\left( V\right) \) (see Appendix C.1.2) and let \( \mathcal{J}\left( {V,\beta }\right) \) be the two-sided ideal of \( \mathcal{T}\left( V\right) \) generated by the elements \[ x \otimes y + y \otimes x - \beta \left( {x, y}\right) 1,\;x, y \in V. \] Define \( \operatorname{Cliff}\left( {V,\beta }\right) = \mathcal{T}\left( V\right) /\mathcal{J}\left( {V,\beta }\right) \) and let \( \gamma : V \rightarrow \operatorname{Cliff}\left( {V,\beta }\right) \) be the natural quotient map coming from the embedding \( V \hookrightarrow \mathcal{T}\left( V\right) \) . Clearly this pair satisfies (C1) and (C2). To verify (C3), we first factor \( \varphi \) through the map \( \widehat{\varphi } : \mathcal{T}\left( V\right) \rightarrow \mathcal{A} \) whose existence is provided by the universal property of \( \mathcal{T}\left( V\right) \) . Then \( \widehat{\varphi }\left( {\mathcal{J}\left( {V,\beta }\right) }\right) = 0 \), so we obtain a map \( \widetilde{\varphi } \) by passing to the quotient. Let \( {\operatorname{Cliff}}_{k}\left( {V,\beta }\right) \) be the span of 1 and the operators \( \gamma \left( {a}_{1}\right) \cdots \gamma \left( {a}_{p}\right) \) for \( {a}_{i} \in V \) and \( p \leq k \) . 
The subspaces \( {\operatorname{Cliff}}_{k}\left( {V,\beta }\right) \), for \( k = 0,1,\ldots \), give a filtration of the Clifford algebra: \[ {\operatorname{Cliff}}_{k}\left( {V,\beta }\right) \cdot {\operatorname{Cliff}}_{m}\left( {V,\beta }\right) \subset {\operatorname{Cliff}}_{k + m}\left( {V,\beta }\right) . \] Let \( \left\{ {{v}_{i} : i = 1,\ldots, n}\right\} \) be a basis for \( V \) . Since \( \left\{ {\gamma \left( {v}_{i}\right) ,\gamma \left( {v}_{j}\right) }\right\} = \beta \left( {{v}_{i},{v}_{j}}\right) \), we see from (C1) that \( {\operatorname{Cliff}}_{k}\left( {V,\beta }\right) \) is spanned by 1 and the products \[ \gamma \left( {v}_{{i}_{1}}\right) \cdots \gamma \left( {v}_{{i}_{p}}\right) ,\;1 \leq {i}_{1} < {i}_{2} < \cdots < {i}_{p} \leq n, \] where \( p \leq k \) . In particular, \[ \operatorname{Cliff}\left( {V,\beta }\right) = {\operatorname{Cliff}}_{n}\left( {V,\beta }\right) \;\text{ and }\;\dim \operatorname{Cliff}\left( {V,\beta }\right) \leq {2}^{n}. \] The linear map \( v \mapsto - \gamma \left( v\right) \) from \( V \) to \( \operatorname{Cliff}\left( {V,\beta }\right) \) satisfies (C3), so it extends to an algebra homomorphism \[ \alpha : \operatorname{Cliff}\left( {V,\beta }\right) \rightarrow \operatorname{Cliff}\left( {V,\beta }\right) \] such that \( \alpha \left( {\gamma \left( {v}_{1}\right) \cdots \gamma \left( {v}_{k}\right) }\right) = {\left( -1\right) }^{k}\gamma \left( {v}_{1}\right) \cdots \gamma \left( {v}_{k}\right) \) . Obviously \( {\alpha }^{2}\left( u\right) = u \) for all \( u \in \) \( \operatorname{Cliff}\left( {V,\beta }\right) \) . Hence \( \alpha \) is an automorphism, which we call the main involution of \( \operatorname{Cliff}\left( {V,\beta }\right) \) . There is a decomposition \[ \operatorname{Cliff}\left( {V,\beta }\right) = {\operatorname{Cliff}}^{ + }\left( {V,\beta }\right) \oplus {\operatorname{Cliff}}^{ - }\left( {V,\beta }\right) , \] where \( {\operatorname{Cliff}}^{ + }\left( {V,\beta }\right) \) is spanned by products of an even number of elements of \( V \) , \( {\operatorname{Cliff}}^{ - }\left( {V,\beta }\right) \) is spanned by products of an odd number of elements of \( V \), and \( \alpha \) acts by \( \pm 1 \) on \( {\operatorname{Cliff}}^{ \pm }\left( {V,\beta }\right) \) . ## 6.1.2 Spaces of Spinors From now on we assume that \( V \) is a finite-dimensional complex vector space with nondegenerate symmetric bilinear form \( \beta \) . In the previous section we proved the existence and uniqueness of the Clifford algebra \( \operatorname{Cliff}\left( {V,\beta }\right) \) (as an abstract associative algebra). We now study its irreducible representations. Definition 6.1.2. Let \( S \) be a complex vector space and let \( \gamma : V \rightarrow \operatorname{End}\left( S\right) \) be a linear map. Then \( \left( {S,\gamma }\right) \) is a space of spinors for \( \left( {V,\beta }\right) \) if (S1) \( \{ \gamma \left( x\right) ,\gamma \left( y\right) \} = \beta \left( {x, y}\right) I \) for all \( x, y \in V \) . (S2) The only subspaces of \( S \) that are invariant under \( \gamma \left( V\right) \) are 0 and \( S \) . If \( \left( {S,\gamma }\right) \) is a space of spinors for \( \left( {V,\beta }\right) \), then the map \( \gamma \) extends to an irreducible representation \[ \widetilde{\gamma } : \operatorname{Cliff}\left( {V,\beta }\right) \rightarrow \operatorname{End}\left( S\right) \] (by axioms (C1), (C2), and (C3) of Section 6.1.1). 
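To make Definitions 6.1.1 and 6.1.2 concrete in the smallest even-dimensional case, the following Python sketch (an added illustration, not part of the text; it assumes NumPy) realizes a space of spinors for \( \dim V = 2 \): take \( V = W \oplus {W}^{ * } \) with \( W = \mathbb{C}w \), \( {W}^{ * } = \mathbb{C}{w}^{ * } \), \( \beta \left( {w,{w}^{ * }}\right) = 1 \), \( \beta \left( {w, w}\right) = \beta \left( {{w}^{ * },{w}^{ * }}\right) = 0 \), and let \( \gamma \left( {w}^{ * }\right) \) act on \( S = \bigwedge {W}^{ * } \) (basis \( \{ 1,{w}^{ * }\} \)) by exterior multiplication and \( \gamma \left( w\right) \) by contraction. The code checks (S1) on basis vectors and verifies that the algebra generated by \( \gamma \left( V\right) \) is all of \( {M}_{2}\left( \mathbb{C}\right) \), so (S2) holds as well.

```python
import numpy as np

# dim V = 2: V = W + W*, W = C w, W* = C w*, with beta(w, w*) = beta(w*, w) = 1
# and beta(w, w) = beta(w*, w*) = 0.  Spinor space S = wedge(W*), basis {1, w*}.
gamma_wstar = np.array([[0, 0], [1, 0]], dtype=complex)  # exterior mult.: 1 -> w*, w* -> 0
gamma_w     = np.array([[0, 1], [0, 0]], dtype=complex)  # contraction:    w* -> 1, 1 -> 0

def anticomm(a, b):
    return a @ b + b @ a

I = np.eye(2)
# (S1) on the basis vectors of V:
assert np.allclose(anticomm(gamma_w, gamma_wstar), 1 * I)      # beta(w, w*)  = 1
assert np.allclose(anticomm(gamma_w, gamma_w), 0 * I)          # beta(w, w)   = 0
assert np.allclose(anticomm(gamma_wstar, gamma_wstar), 0 * I)  # beta(w*, w*) = 0

# (S2): the algebra generated by gamma(V) is all of M_2(C) (dimension 4),
# so S = C^2 has no invariant subspaces other than 0 and S.
spanning = [I, gamma_w, gamma_wstar, gamma_w @ gamma_wstar]
print(np.linalg.matrix_rank(np.array([m.flatten() for m in spanning])))  # 4
```

Here \( \dim S = 2 = {2}^{l} \) for \( \dim V = {2l} = 2 \), and the extension \( \widetilde{\gamma } \) provided by (C1)–(C3) is the defining representation of \( {M}_{2}\left( \mathbb{C}\right) \) on \( {\mathbb{C}}^{2} \).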
Conversely, every irreducible representation of \( \operatorname{Cliff}\left( {V,\beta }\right) \) arises this way. Since \( \operatorname{Cliff}\left( {V,\beta }\right) \) is a finite-dimensional algebra, a space of spinors for \( \left( {V,\beta }\right) \) must also be finite-dimensional. Let \( \left( {S,\gamma }\right) \) and \( \left( {{S}^{\prime },{\gamma }^{\prime }}\right) \) be spaces of spinors for \( \left( {V,\beta }\right) \) . One says that \( \left( {S,\gamma }\right) \) is isomorphic to \( \left( {{S}^{\prime },{\gamma }^{\prime }}\right) \) if there exists a linear bijection \( T : S \rightarrow {S}^{\prime } \) such that \( {T\gamma }\left( v\right) = \) \( {\gamma }^{\prime }\left( v\right) T \) for all \( v \in V \) . Theorem 6.1.3. Assume that \( \beta \) is a nondegenerate bilinear form on \( V \) . 1. If \( \dim V = {2l} \) is even, then up to isomorphism there is exactly one space of spinors for \( \left( {V,\beta }\right) \), and it has dimension \( {2}^{l} \) . 2. If \( \dim V = {2l} + 1 \) is odd, then there are exactly two nonisomorphic spaces of spinors for \( \left( {V,\beta }\right) \), and each space has dimension \( {2}^{l} \) . Proof. Let \( \dim V = n \) . We begin by an explicit construction of some spaces of spinors. Fix a pair \( W,{W}^{ * } \) of dual maximal isotropic subspaces of \( V \) relative to \( \beta \), as in Section B.2.1. We identify \( {W}^{ * } \) with the dual space of \( W \) via the form \( \beta \) and write \( \beta \left( {{x}^{ * }, x}\right) = \left\langle {{x}^{ * }, x}\right\rangle \) for \( x \in W \) and \( {x}^{ * } \in {W}^{ * } \) . When \( n \) is even, then \( V = {W}^{ * } \oplus W \) and \[ \beta \left( {x + {x}^{ * }, y + {y}^{ * }}\right) = \left\langle {{x}^{ * }, y}\right\rangle + \left\langle {{y}^{ * }, x}\right\rangle \] (6.1) for \( x, y \in W \) and \( {x}^{ * },{y}^{ * } \in {W}^{ * } \) . When \( n \) is odd, we take a one-dimensional subspace \( U = \mathbb{C}{e}_{0} \) such that \( \beta \left( {{e}_{0},{e}_{0}}\right) = 2 \) and \( \beta \left( {{e}_{0}, W}\right) = \beta \left( {{e}_{0},{W}^{ * }}\right) = 0 \) . Then \( V = W \oplus U \oplus \) \( {W}^{ * } \) and \[ \beta \left( {x + \lambda {e}_{0} + {x}^{ * }, y + \mu {e}_{0} + {y}^{ * }}\right) = \left\langle {{x}^{ * }, y}\right\rangle + {2\lambda \mu } + \left\langle {{y}^{ * }, x}\right\rangle \] (6.2) for \( x, y \in W,{x}^{ * },{y}^{ * } \in {W}^{ * } \), and \( \lambda ,\mu \in \mathbb{C} \) . We shall identify \( \mathop{\bigwedge }\limits^{p}{W}^{ * } \) with \( {C}^{p}\left( W\right) \), the space of \( p \) -multilinear functions on \( W \) that are skew-symmetric in their arguments, as follows (see Appendix B.2.4): Given \( p \) elements \( {w}_{1}^{ * },\ldots ,{w}_{p}^{ * } \in {W}^{ * } \), define a skew-symmetric \( p \) -linear function \( \psi \) on \( W \) by \[ \psi \left( {{w}_{1},\ldots ,{w}_{p}}\righ
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 12.14
Definition 12.14. Suppose \( \left( {\Pi, V}\right) \) is a representation of \( K \) . Then the character of \( \Pi \) is the function \( {\chi }_{\Pi } : K \rightarrow \mathbb{C} \) given by \[ {\chi }_{\Pi }\left( x\right) = \operatorname{trace}\left( {\Pi \left( x\right) }\right) . \] Note that we now consider the character as a function on the group \( K \), rather than on the Lie algebra \( \mathfrak{g} = {\mathfrak{k}}_{\mathbb{C}} \), as in Chapter 10. If \( \pi \) is the associated representation of \( \mathfrak{g} \), then the character \( {\chi }_{\pi } \) of \( \pi \) (Definition 10.11) is related to the character \( {\chi }_{\Pi } \) of \( \Pi \) by \[ {\chi }_{\Pi }\left( {e}^{H}\right) = {\chi }_{\pi }\left( H\right) ,\;H \in \mathfrak{k}. \] Note that each character is a class function on \( K \) : \[ {\chi }_{\Pi }\left( {{yx}{y}^{-1}}\right) = \operatorname{trace}\left( {\Pi \left( y\right) \Pi \left( x\right) \Pi {\left( y\right) }^{-1}}\right) = \operatorname{trace}\left( {\Pi \left( x\right) }\right) . \] The following theorem says that the characters of irreducible representations form an orthonormal set in the space of class functions. Theorem 12.15. If \( \left( {\Pi, V}\right) \) and \( \left( {\Sigma, W}\right) \) are irreducible representations of \( K \), then \[ {\int }_{K}\overline{\operatorname{trace}\left( {\Pi \left( x\right) }\right) }\operatorname{trace}\left( {\Sigma \left( x\right) }\right) {dx} = \left\{ {\begin{array}{ll} 1 & \text{ if }V \cong W \\ 0 & \text{ if }V ≆ W \end{array},}\right. \] where \( {dx} \) is the normalized left-invariant volume form on \( K \) . If \( \left( {\Pi, V}\right) \) is a representation of \( K \), let \( {V}^{K} \) denote the space given by \[ {V}^{K} = \{ v \in V \mid \Pi \left( x\right) v = v\text{ for all }x \in K\} . \] Lemma 12.16. Suppose \( \left( {\Pi, V}\right) \) is a finite-dimensional representation of \( K \), and let \( P \) be the operator on \( V \) given by \[ P = {\int }_{K}\Pi \left( x\right) {dx} \] Then \( P \) is a projection onto \( {V}^{K} \) . That is to say, \( P \) maps \( V \) into \( {V}^{K} \) and \( {Pv} = v \) for all \( v \in {V}^{K} \) . Clearly, \( {V}^{K} \) is an invariant subspace for \( \Pi \) . If we pick an inner product on \( V \) for which \( \Pi \) is unitary, then \( {\left( {V}^{K}\right) }^{ \bot } \) is also invariant under each \( \Pi \left( x\right) \) and thus under \( P \) . But since \( P \) maps into \( {V}^{K} \), the map \( P \) must be zero on \( {\left( {V}^{K}\right) }^{ \bot } \) ; thus, \( P \) is actually the orthogonal projection onto \( {V}^{K} \) . Proof. For any \( y \in K \) and \( v \in V \), we have \[ \Pi \left( y\right) {Pv} = \Pi \left( y\right) \left( {{\int }_{K}\Pi \left( x\right) {dx}}\right) v \] \[ = \left( {{\int }_{K}\Pi \left( {yx}\right) {dx}}\right) v \] \[ = {Pv} \] by the left-invariance of the form \( {dx} \) . This shows that \( {Pv} \) belongs to \( {V}^{K} \) . Meanwhile, if \( v \in {V}^{K} \), then \[ {Pv} = {\int }_{K}\Pi \left( x\right) {vdx} \] \[ = \left( {{\int }_{K}{dx}}\right) v \] \[ = v \] by the normalization of the volume form \( {dx} \) . Note that if \( V \) is irreducible and nontrivial, then \( {V}^{K} = \{ 0\} \) . In this case, the proposition says that \( {\int }_{K}\Pi \left( x\right) {dx} = 0 \) . Lemma 12.17. 
For \( A : V \rightarrow V \) and \( B : W \rightarrow W \), we have \[ \operatorname{trace}\left( A\right) \operatorname{trace}\left( B\right) = \operatorname{trace}\left( {A \otimes B}\right) , \] where \( A \otimes B : V \otimes W \rightarrow V \otimes W \) is as in Proposition 4.16. Proof. If \( \left\{ {v}_{j}\right\} \) and \( \left\{ {w}_{l}\right\} \) are bases for \( V \) and \( W \), respectively, then \( \left\{ {{v}_{j} \otimes {w}_{l}}\right\} \) is a basis for \( V \otimes W \) . If \( {A}_{jk} \) and \( {B}_{lm} \) are the matrices of \( A \) and \( B \) with respect to \( \left\{ {v}_{j}\right\} \) and \( \left\{ {w}_{l}\right\} \), respectively, then the matrix of \( A \otimes B \) with respect to \( \left\{ {{v}_{j} \otimes {w}_{l}}\right\} \) is easily seen to be \[ {\left( A \otimes B\right) }_{\left( {j, l}\right) \left( {k, m}\right) } = {A}_{jk}{B}_{lm} \] Thus, \[ \operatorname{trace}\left( {A \otimes B}\right) = \mathop{\sum }\limits_{{j, l}}{A}_{jj}{B}_{ll} = \operatorname{trace}\left( A\right) \operatorname{trace}\left( B\right) , \] as claimed. Proof of Theorem 12.15. We know that there exists an inner product on \( V \) for which each \( \Pi \left( x\right) \) is unitary. Thus, \[ \overline{\operatorname{trace}\left( {\Pi \left( x\right) }\right) } = \operatorname{trace}\left( {\Pi {\left( x\right) }^{ * }}\right) = \operatorname{trace}\left( {\Pi \left( {x}^{-1}\right) }\right) . \] (12.7) Recall from Sect. 4.3.3 that for any \( A : V \rightarrow V \), we have the transpose operator \( {A}^{tr} : {V}^{ * } \rightarrow {V}^{ * } \) . Since the matrix of \( {A}^{tr} \) with respect to the dual of any basis \( \left\{ {v}_{j}\right\} \) of \( V \) is the transpose of the matrix of \( A \) with respect to \( \left\{ {v}_{j}\right\} \), we see that \( \operatorname{trace}\left( {A}^{tr}\right) = \operatorname{trace}\left( A\right) \) . Thus, \[ \overline{\operatorname{trace}\left( {\Pi \left( x\right) }\right) } = \operatorname{trace}\left( {\Pi {\left( {x}^{-1}\right) }^{tr}}\right) = \operatorname{trace}\left( {{\Pi }^{tr}\left( x\right) }\right) , \] where \( {\Pi }^{tr} \) is the dual representation to \( \Pi \) . Thus, the complex conjugate of the character of \( \Pi \) is the character of the dual representation \( {\Pi }^{tr} \) of \( \Pi \) . Using Lemma 12.17, we then obtain \[ \overline{\operatorname{trace}\left( {\Pi \left( x\right) }\right) }\operatorname{trace}\left( {\Sigma \left( x\right) }\right) = \operatorname{trace}\left( {{\Pi }^{tr}\left( x\right) \otimes \Sigma \left( x\right) }\right) \] \[ = \operatorname{trace}\left( {\left( {{\Pi }^{tr} \otimes \Sigma }\right) \left( x\right) }\right) . \] By Lemma 12.16, this becomes \[ {\int }_{K}\overline{\operatorname{trace}\left( {\Pi \left( x\right) }\right) }\operatorname{trace}\left( {\Sigma \left( x\right) }\right) {dx} = {\int }_{K}\operatorname{trace}\left( {\left( {{\Pi }^{tr} \otimes \Sigma }\right) \left( x\right) }\right) {dx} \] \[ = \operatorname{trace}\left( {{\int }_{K}\left( {{\Pi }^{tr} \otimes \Sigma }\right) \left( x\right) {dx}}\right) \] \[ = \operatorname{trace}\left( P\right) \] \[ = \dim \left( {\left( {V}^{ * } \otimes W\right) }^{K}\right) \] (12.8) where \( P \) is a projection of \( {V}^{ * } \otimes W \) onto \( {\left( {V}^{ * } \otimes W\right) }^{K} \) . 
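As an aside, the trace identity from Lemma 12.17 used in the first step above is easy to test numerically; the following sketch (an added illustration, not part of the text; it assumes NumPy, whose `kron` computes the matrix of \( A \otimes B \) in the product basis) checks it on random complex matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    # np.kron(A, B) has entries A_{jk} B_{lm}, i.e., the matrix of A (x) B.
    assert np.isclose(np.trace(A) * np.trace(B), np.trace(np.kron(A, B)))
print("trace(A) trace(B) = trace(A (x) B) on 100 random pairs")
```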
Now, for any two finite-dimensional vector spaces \( V \) and \( W \), there is a natural isomorphism between \( {V}^{ * } \otimes W \) and \( \operatorname{End}\left( {V, W}\right) \), the space of linear maps from \( V \) to \( W \) . This isomorphism is actually an intertwining map of representations, where \( x \in K \) acts on \( A \in \operatorname{End}\left( {V, W}\right) \) by \[ x \cdot A = \Sigma \left( x\right) {A\Pi }{\left( x\right) }^{-1}. \] Finally, under this isomorphism, \( {\left( {V}^{ * } \otimes W\right) }^{K} \) maps to the space of intertwining maps of \( V \) to \( W \) . (See Exercises 3 and 4 for the proofs of the preceding claims.) By Schur’s lemma, the space of intertwining maps has dimension 1 if \( V \cong W \) and dimension 0 otherwise. Thus, (12.8) reduces to the claimed result. Our next result says that the characters form a complete orthonormal set in the space of class functions on \( K \) . Theorem 12.18. Suppose \( f \) is a continuous class function on \( K \) and that for every finite-dimensional, irreducible representation \( \Pi \) of \( K \), the function \( f \) is orthogonal to the character of \( \Pi \) : \[ {\int }_{K}\overline{f\left( x\right) }\operatorname{trace}\left( {\Pi \left( x\right) }\right) {dx} = 0. \] Then \( f \) is identically zero. If \( K = {S}^{1} \), the irreducible representations are one-dimensional and of the form \( \Pi \left( {e}^{i\theta }\right) = {e}^{im\theta }I, m \in \mathbb{Z} \), so that \( {\chi }_{\Pi }\left( {e}^{i\theta }\right) = {e}^{im\theta } \) . In this case, the completeness result for characters reduces to a standard result about Fourier series. The proof given in this section assumes in an essential way that \( K \) is a compact matrix Lie group. (Of course, we work throughout the book with matrix Lie groups, but most of the proofs we give extend with minor modifications to arbitrary Lie groups.) In Appendix D, we sketch a proof of Theorem 12.18 that does not rely on the assumption that \( K \) is a matrix group. That proof, however, requires a bit more functional analysis than the proof given in this section. We now consider a class of functions called matrix entries, which include the characters of representations as a special case. We will prove a completeness result for matrix entries and then specialize this result to class functions in order to obtain completeness for characters. Definition 12.19. If \( \left( {\Pi, V}\right) \) is a representation of \( K \) and \( \left\{ {v}_{j}\right\} \) is a basis for \( V \), the functions \( f : K \rightarrow \mathbb{C} \) of the form \[ f\left( x\right) = {\left( \Pi \left( x\right) \right) }_{jk} \] (12.9) are called matrix entries for \( \Pi \) . Here \( {\left( \Pi \left( x\right) \right) }_{jk} \) denotes the \( \left( {j, k}\right) \) entry of the matrix of \( \Pi \left( x\right) \) in the basis \( \left\{ {v}_{j}\right\} \) . In a slight abuse of notation, we will also call \( f \) a matrix entry for \( \Pi \) if \( f \) is expressible as a linear combination of the functions in (12.9): \[ f\left( x\right) = \mathop{\sum }\limits_{{j, k}}{c}_{jk}{\left( \Pi \left( x\right) \right) }_{jk} \] (12.10) We may write functions of the form (12.10) in a basis-independen