1054_(GTM214)Partial Differential Equations
Definition 8.1.2
Definition 8.1.2. A continuous semigroup \( {\left\{ {T}_{t}\right\} }_{t \geq 0} \) of continuous linear operators of a Banach space \( B \) with norm \( \parallel \cdot \parallel \) is called contracting if for all \( v \in B \) and all \( t \geq 0 \),
\[ \begin{Vmatrix}{{T}_{t}v}\end{Vmatrix} \leq \parallel v\parallel . \] (8.1.13)
(Here, continuity of the semigroup means continuous dependence of the operators \( {T}_{t} \) on \( t \).)

## 8.2 Infinitesimal Generators of Semigroups

If the initial values \( f\left( x\right) = u\left( {x,0}\right) \) of a solution \( u \) of the heat equation
\[ {u}_{t}\left( {x, t}\right) - {\Delta u}\left( {x, t}\right) = 0 \] (8.2.1)
are of class \( {C}^{2} \), we expect that
\[ \mathop{\lim }\limits_{{t \searrow 0}}\frac{u\left( {x, t}\right) - u\left( {x,0}\right) }{t} = {u}_{t}\left( {x,0}\right) = {\Delta u}\left( {x,0}\right) = {\Delta f}\left( x\right) , \] (8.2.2)
or, with the notation
\[ u\left( {x, t}\right) = {P}_{t}f\left( x\right) \]
of the previous section,
\[ \mathop{\lim }\limits_{{t \searrow 0}}\frac{1}{t}\left( {{P}_{t} - \mathrm{{Id}}}\right) f = {\Delta f}. \] (8.2.3)
We want to discuss this in more abstract terms and formulate the following definition:

Definition 8.2.1. Let \( {\left\{ {T}_{t}\right\} }_{t \geq 0} \) be a continuous semigroup on a Banach space \( B \). We put
\[ D\left( A\right) \mathrel{\text{:=}} \left\{ {v \in B : \mathop{\lim }\limits_{{t \searrow 0}}\frac{1}{t}\left( {{T}_{t} - \operatorname{Id}}\right) v\text{ exists }}\right\} \subset B \] (8.2.4)
and call the linear operator
\[ A : D\left( A\right) \rightarrow B, \]
defined as
\[ {Av} \mathrel{\text{:=}} \mathop{\lim }\limits_{{t \searrow 0}}\frac{1}{t}\left( {{T}_{t} - \mathrm{{Id}}}\right) v, \] (8.2.5)
the infinitesimal generator of the semigroup \( \left\{ {T}_{t}\right\} \). Then \( D\left( A\right) \) is nonempty, since it contains 0.

Lemma 8.2.1.
For all \( v \in D\left( A\right) \) and all \( t \geq 0 \), we have \[ {T}_{t}{Av} = A{T}_{t}v \] (8.2.6) Thus \( A \) commutes with all the \( {T}_{t} \) . Proof. For \( v \in D\left( A\right) \), we have \[ {T}_{t}{Av} = {T}_{t}\mathop{\lim }\limits_{{\tau \searrow 0}}\frac{1}{\tau }\left( {{T}_{\tau } - \mathrm{{Id}}}\right) v \] \[ = \mathop{\lim }\limits_{{\tau \searrow 0}}\frac{1}{\tau }\left( {{T}_{t}{T}_{\tau } - {T}_{t}}\right) v\text{(since}{T}_{t}\text{is continuous and linear)} \] \[ = \mathop{\lim }\limits_{{\tau \searrow 0}}\frac{1}{\tau }\left( {{T}_{\tau }{T}_{t} - {T}_{t}}\right) v\text{(by the semigroup property)} \] \[ = \mathop{\lim }\limits_{{\tau \searrow 0}}\frac{1}{\tau }\left( {{T}_{\tau } - \operatorname{Id}}\right) {T}_{t}v \] \[ = A{T}_{t}v \] In particular, if \( v \in D\left( A\right) \), then so is \( {T}_{t}v \) . In that sense, there is no loss of regularity of \( {T}_{t}v \) when compared with \( v\left( { = {T}_{0}v}\right) \) . In the sequel, we shall employ the notation \[ {J}_{\lambda }v \mathrel{\text{:=}} {\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}{T}_{s}v\mathrm{\;d}s\;\text{ for }\lambda > 0 \] (8.2.7) for a contracting semigroup \( \left\{ {T}_{t}\right\} \) . The integral here is a Riemann integral for functions with values in some Banach space. The standard definition of the Riemann integral as a limit of step functions easily generalizes to the Banach-space-valued case. 
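To make these definitions concrete, here is a minimal numerical sketch (our own toy model, not from the text, with arbitrarily chosen eigenvalues): on \( B = \mathbb{R}^2 \) with the Euclidean norm, take \( A = \operatorname{diag}(-1,-2) \); then \( T_t = \operatorname{diag}(e^{-t}, e^{-2t}) \) is a contracting semigroup whose infinitesimal generator is \( A \), and the commutation of Lemma 8.2.1 can be checked directly.

```python
import math

# Toy model (our illustration): diagonal generator A = diag(-1, -2) on R^2,
# with semigroup T_t = diag(e^{-t}, e^{-2t}).

EIGS = (-1.0, -2.0)  # eigenvalues of the generator

def T(t, v):
    """Apply the semigroup operator T_t componentwise."""
    return tuple(math.exp(a * t) * x for a, x in zip(EIGS, v))

def A(v):
    """Apply the infinitesimal generator."""
    return tuple(a * x for a, x in zip(EIGS, v))

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dist(u, w):
    return norm(tuple(p - q for p, q in zip(u, w)))

v = (3.0, -4.0)

# semigroup property T_{s+t} = T_s T_t and contraction ||T_t v|| <= ||v||
assert dist(T(0.7, v), T(0.3, T(0.4, v))) < 1e-12
assert all(norm(T(t, v)) <= norm(v) for t in (0.0, 0.5, 1.0, 5.0))

# the difference quotient (1/t)(T_t - Id)v tends to Av as t -> 0, cf. (8.2.5)
t = 1e-7
dq = tuple((p - q) / t for p, q in zip(T(t, v), v))
assert dist(dq, A(v)) < 1e-5

# A commutes with every T_t, as in Lemma 8.2.1
assert dist(T(1.0, A(v)), A(T(1.0, v))) < 1e-12
```

In this diagonal model every identity is exact; the only approximation is the difference quotient, whose error is of order \( t \).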
The convergence of the improper integral follows from the estimate \[ \mathop{\lim }\limits_{{K, M \rightarrow \infty }}\begin{Vmatrix}{{\int }_{K}^{M}\lambda {\mathrm{e}}^{-{\lambda s}}{T}_{s}v\mathrm{\;d}s}\end{Vmatrix} \leq \mathop{\lim }\limits_{{K, M \rightarrow \infty }}{\int }_{K}^{M}\lambda {\mathrm{e}}^{-{\lambda s}}\begin{Vmatrix}{{T}_{s}v}\end{Vmatrix}\mathrm{d}s \] \[ \leq \mathop{\lim }\limits_{{K, M \rightarrow \infty }}\parallel v\parallel {\int }_{K}^{M}\lambda {\mathrm{e}}^{-{\lambda s}}\mathrm{\;d}s \] \[ = 0\text{,} \] which holds because of the contraction property and the completeness of \( B \) . Since \[ {\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}\mathrm{\;d}s = {\int }_{0}^{\infty } - \frac{\mathrm{d}}{\mathrm{d}s}\left( {\mathrm{e}}^{-{\lambda s}}\right) \mathrm{d}s = 1, \] (8.2.8) \( {J}_{\lambda }v \) is a weighted mean of the semigroup \( \left\{ {T}_{t}\right\} \) applied to \( v \) . Since \[ \begin{Vmatrix}{{J}_{\lambda }v}\end{Vmatrix} \leq {\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}\begin{Vmatrix}{{T}_{s}v}\end{Vmatrix}\mathrm{d}s \] \[ \leq \parallel v\parallel {\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}\mathrm{\;d}s \] by the contraction property \[ \leq \parallel v\parallel \] (8.2.9) by (8.2.8), \( {J}_{\lambda } : B \rightarrow B \) is a bounded linear operator with norm \( \begin{Vmatrix}{J}_{\lambda }\end{Vmatrix} \leq 1 \) . Lemma 8.2.2. For all \( v \in B \), we have \[ \mathop{\lim }\limits_{{\lambda \rightarrow \infty }}{J}_{\lambda }v = v \] (8.2.10) Proof. By (8.2.8), \[ {J}_{\lambda }v - v = {\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}\left( {{T}_{s}v - v}\right) \mathrm{d}s. 
\] For \( \delta > 0 \), let \[ {I}_{\lambda }^{1} \mathrel{\text{:=}} \begin{Vmatrix}{{\int }_{0}^{\delta }\lambda {\mathrm{e}}^{-{\lambda s}}\left( {{T}_{s}v - v}\right) \mathrm{d}s}\end{Vmatrix},\;{I}_{\lambda }^{2} \mathrel{\text{:=}} \begin{Vmatrix}{{\int }_{\delta }^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}\left( {{T}_{s}v - v}\right) \mathrm{d}s}\end{Vmatrix}. \] Now let \( \varepsilon > 0 \) be given. Since \( {T}_{s}v \) is continuous in \( s \), there exists \( \delta > 0 \) such that \[ \begin{Vmatrix}{{T}_{s}v - v}\end{Vmatrix} < \frac{\varepsilon }{2}\;\text{ for }0 \leq s \leq \delta \] and thus also \[ {I}_{\lambda }^{1} \leq \frac{\varepsilon }{2}{\int }_{0}^{\delta }\lambda {\mathrm{e}}^{-{\lambda s}}\mathrm{\;d}s < \frac{\varepsilon }{2} \] by (8.2.8). For each \( \delta > 0 \), there also exists \( {\lambda }_{0} \in \mathbb{R} \) such that for all \( \lambda \geq {\lambda }_{0} \) , \[ {I}_{\lambda }^{2} \leq {\int }_{\delta }^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}\left( {\begin{Vmatrix}{{T}_{s}v}\end{Vmatrix} + \parallel v\parallel }\right) \mathrm{d}s \] \[ \leq 2\parallel v\parallel {\int }_{\delta }^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}\mathrm{\;d}s\text{ (by the contraction property) } \] \[ < \frac{\varepsilon }{2}\text{.} \] This easily implies (8.2.10). Theorem 8.2.1. Let \( {\left\{ {T}_{t}\right\} }_{t \geq 0} \) be a contracting semigroup with infinitesimal generator \( A \) . Then \( D\left( A\right) \) is dense in \( B \) . Proof. We shall show that for all \( \lambda > 0 \) and all \( v \in B \) , \[ {J}_{\lambda }v \in D\left( A\right) \text{.} \] (8.2.11) Since by Lemma 8.2.2, \[ \left\{ {{J}_{\lambda }v : \lambda > 0, v \in B}\right\} \] is dense in \( B \), this will imply the assertion. 
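The operators \( J_\lambda \) from (8.2.7), the bound \( \Vert J_\lambda \Vert \leq 1 \), and the convergence (8.2.10) can be illustrated in the same diagonal toy model \( T_s = \operatorname{diag}(e^{-s}, e^{-2s}) \) on \( \mathbb{R}^2 \) (our own sketch, not from the text): on an eigenvector of the generator with eigenvalue \( a < 0 \), \( J_\lambda \) acts as multiplication by \( \int_0^\infty \lambda e^{-\lambda s} e^{as}\,ds = \lambda/(\lambda - a) \).

```python
import math

EIGS = (-1.0, -2.0)  # eigenvalues of the diagonal generator

def J(lam, v):
    """Closed form: J_lambda multiplies an eigenvector by lam/(lam - a)."""
    return tuple(lam / (lam - a) * x for a, x in zip(EIGS, v))

def J_riemann(lam, v, n=100_000, s_max=40.0):
    """Midpoint Riemann sum for the defining integral (8.2.7)."""
    h = s_max / n
    acc = [0.0, 0.0]
    for k in range(n):
        s = (k + 0.5) * h
        w = lam * math.exp(-lam * s) * h
        for i, a in enumerate(EIGS):
            acc[i] += w * math.exp(a * s) * v[i]
    return tuple(acc)

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dist(u, w):
    return norm(tuple(p - q for p, q in zip(u, w)))

v, lam = (1.0, 2.0), 3.0

assert dist(J(lam, v), J_riemann(lam, v)) < 1e-6   # integral matches closed form
assert norm(J(lam, v)) <= norm(v)                  # ||J_lambda|| <= 1, cf. (8.2.9)

# Lemma 8.2.2: J_lambda v -> v as lambda -> infinity
for lam in (10.0, 100.0, 1000.0):
    assert dist(J(lam, v), v) < 5.0 / lam

# the identity A J_lambda v = lambda (J_lambda - Id) v derived in the proof
# of Theorem 8.2.1, with A = diag(-1, -2)
lam = 3.0
lhs = tuple(a * x for a, x in zip(EIGS, J(lam, v)))
rhs = tuple(lam * (p - q) for p, q in zip(J(lam, v), v))
assert dist(lhs, rhs) < 1e-12
```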
We have
\[ \frac{1}{t}\left( {{T}_{t} - \mathrm{{Id}}}\right) {J}_{\lambda }v = \frac{1}{t}{\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}{T}_{t + s}v\mathrm{\;d}s - \frac{1}{t}{\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}{T}_{s}v\mathrm{\;d}s \]
(since \( {T}_{t} \) is continuous and linear)
\[ = \frac{1}{t}{\int }_{t}^{\infty }\lambda {\mathrm{e}}^{\lambda t}{\mathrm{e}}^{-{\lambda \sigma }}{T}_{\sigma }v\mathrm{\;d}\sigma - \frac{1}{t}{\int }_{0}^{\infty }\lambda {\mathrm{e}}^{-{\lambda s}}{T}_{s}v\mathrm{\;d}s \]
\[ = \frac{{\mathrm{e}}^{\lambda t} - 1}{t}{\int }_{t}^{\infty }\lambda {\mathrm{e}}^{-{\lambda \sigma }}{T}_{\sigma }v\mathrm{\;d}\sigma - \frac{1}{t}{\int }_{0}^{t}\lambda {\mathrm{e}}^{-{\lambda s}}{T}_{s}v\mathrm{\;d}s \]
\[ = \frac{{\mathrm{e}}^{\lambda t} - 1}{t}\left( {{J}_{\lambda }v - {\int }_{0}^{t}\lambda {\mathrm{e}}^{-{\lambda \sigma }}{T}_{\sigma }v\mathrm{\;d}\sigma }\right) - \frac{1}{t}{\int }_{0}^{t}\lambda {\mathrm{e}}^{-{\lambda s}}{T}_{s}v\mathrm{\;d}s. \]
Since the integrand is continuous in \( s \), the last term tends to \( -\lambda {T}_{0}v = -{\lambda v} \) as \( t \rightarrow 0 \), while the first term in the last line tends to \( \lambda {J}_{\lambda }v \). This implies
\[ A{J}_{\lambda }v = \lambda \left( {{J}_{\lambda } - \mathrm{{Id}}}\right) v\;\text{ for all }v \in B, \] (8.2.12)
which in turn implies (8.2.11). □

For a contracting semigroup \( {\left\{ {T}_{t}\right\} }_{t \geq 0} \), we now define operators
\[ {D}_{t}{T}_{t} : D\left( {{D}_{t}{T}_{t}}\right) \left( { \subset B}\right) \rightarrow B \]
by
\[ {D}_{t}{T}_{t}v \mathrel{\text{:=}} \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{1}{h}\left( {{T}_{t + h} - {T}_{t}}\right) v, \] (8.2.13)
where \( D\left( {{D}_{t}{T}_{t}}\right) \) is the subspace of \( B \) where this limit exists.

Lemma 8.2.3.
\( v \in D\left( A\right) \) implies \( v \in D\left( {{D}_{t}{T}_{t}}\right) \), and we have
\[ {D}_{t}{T}_{t}v = A{T}_{t}v = {T}_{t}{Av}\;\text{ for }t \geq 0. \] (8.2.14)
Proof. The second equation was already established in Lemma 8.2.1. We thus have for \( v \in D\left( A\right) \),
\[ \mathop{\lim }\limits_{{h \searrow 0}}\frac{1}{h}\left( {{T}_{t + h} - {T}_{t}}\right) v = A{T}_{t}v = {T}_{t}{Av}. \] (8.2.15)
Equation (8.2.15) means that the right derivative of \( {T}_{t}v \) with respect to \( t \) exists for all \( v \in D\left( A\right) \) and is continuous in \( t \). By a well-known calculus lemma, this then implies that the left derivative exists as well and coincides with the right one, implying differentiability and (8.2.14). The proof of the calculus lemma goes as follows: Let \( f : \lbrack 0,\infty ) \rightarrow B \) be continuous, and suppose that for all \( t \geq 0 \), the right derivative \( {d}^{ + }f\left( t\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{{h \searrow 0}}\frac{1}{h}\left( {f\left( {t + h}\right) - f\left( t\right) }\right) \) exists and is continuous. The continuity of \( {d}^{ + }f \) implies that on every interval
109_The rising sea Foundations of Algebraic Geometry
Definition 7.137
Definition 7.137. Let \( V \) be a \( K \) -vector space, possibly infinite-dimensional. Fix \( \epsilon = \pm 1 \). A function \( B : V \times V \rightarrow K \) is said to be \( \left( {\sigma ,\epsilon }\right) \) -Hermitian if it is linear in the first variable and satisfies
\[ B\left( {y, x}\right) = {\epsilon B}{\left( x, y\right) }^{\sigma } \] (7.27)
for all \( x, y \in V \). Note that (7.27) implies that \( B \) is \( \sigma \) -linear in the second variable, i.e., it is additive and satisfies \( B\left( {x,{\lambda y}}\right) = {\lambda }^{\sigma }B\left( {x, y}\right) \) for \( \lambda \in K \) and \( x, y \in V \). In order to relate this notion to the examples we saw in Chapter 6, note that:

- If \( \left( {\sigma ,\epsilon }\right) = \left( {\mathrm{{id}},1}\right) \), then \( B \) is a symmetric bilinear form; this is the orthogonal case.
- If \( \left( {\sigma ,\epsilon }\right) = \left( {\mathrm{{id}}, - 1}\right) \), then \( B \) is a skew-symmetric bilinear form; this is the symplectic case.
- If \( \sigma \neq \mathrm{{id}} \) and \( \epsilon = 1 \), then \( B \) is Hermitian in the sense of Section 6.8; this is the unitary case.

Remark 7.138. The remaining case, in which \( \sigma \neq \mathrm{id} \) and \( \epsilon = - 1 \), is essentially the same as the unitary case. Indeed, if \( \sigma \neq \mathrm{{id}} \), then there is a scalar \( a \neq 0 \) such that \( {a}^{\sigma } = - a \). [Choose any \( b \in K \) with \( {b}^{\sigma } \neq b \), and set \( a \mathrel{\text{:=}} b - {b}^{\sigma } \).] We can then replace \( B \) by \( {aB} \) to convert a \( \left( {\sigma ,\epsilon }\right) \) -Hermitian form to a \( \left( {\sigma , - \epsilon }\right) \) -Hermitian form. We will therefore also refer to this case as the unitary case.
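As a sanity check, the three cases can be verified numerically in a small example of our own (not from the text): \( K = \mathbb{C} \), \( V = \mathbb{C}^2 \), with \( \sigma \) either the identity or complex conjugation, and \( B \) given by a Gram matrix that is symmetric, skew-symmetric, or Hermitian respectively.

```python
# Check of the (sigma, eps)-Hermitian condition B(y, x) = eps * sigma(B(x, y))
# in the orthogonal, symplectic, and unitary cases (our own toy Gram matrices).

def form(G, sigma):
    """B(x, y) = sum_{i,j} G[i][j] * x_i * sigma(y_j):
    linear in x, sigma-linear in y."""
    def B(x, y):
        return sum(G[i][j] * x[i] * sigma(y[j]) for i in range(2) for j in range(2))
    return B

ident = lambda z: z
conj = lambda z: z.conjugate()

cases = [
    # (Gram matrix, sigma, eps)
    ([[1, 2], [2, -1]], ident, 1),      # orthogonal: symmetric G
    ([[0, 1], [-1, 0]], ident, -1),     # symplectic: skew-symmetric G
    ([[1, 1j], [-1j, 2]], conj, 1),     # unitary: Hermitian G (G^T = conj(G))
]

x, y = (1 + 2j, -3j), (2 - 1j, 4 + 1j)
for G, sigma, eps in cases:
    B = form(G, sigma)
    assert abs(B(y, x) - eps * sigma(B(x, y))) < 1e-12
```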
Assume from now on that we are given a \( \left( {\sigma ,\epsilon }\right) \) -Hermitian form on \( V \) satisfying the following two conditions: (1) \( B \) is nondegenerate in the sense that \( {V}^{ \bot } = 0 \), where \[ {V}^{ \bot } \mathrel{\text{:=}} \{ x \in V \mid B\left( {x, - }\right) = 0\} . \] (2) \( B \) has finite Witt index \( n \geq 1 \) . Here, as before, the Witt index is the maximal dimension of a totally isotropic subspace. It follows from these assumptions that we can find vectors \( {e}_{1},\ldots ,{e}_{n},{e}_{-n},\ldots ,{e}_{-1} \) satisfying the same relations as in the examples we treated in Sections 6.6,6.7, and 6.8, where \( {e}_{-i} \) plays the role of the vector called \( {f}_{i}\left( {1 \leq i \leq n}\right) \) in those examples. Explicitly, if we set \[ \epsilon \left( i\right) = \left\{ \begin{array}{ll} 1 & \text{ if }i > 0 \\ \epsilon & \text{ if }i < 0 \end{array}\right. \] and \( \langle x, y\rangle \mathrel{\text{:=}} B\left( {x, y}\right) \), then the relations are \[ \left\langle {{e}_{i},{e}_{-i}}\right\rangle = \epsilon \left( i\right) \] (7.28) for all \( i \in I \mathrel{\text{:=}} \{ \pm 1,\ldots , \pm n\} \) and \[ \left\langle {{e}_{i},{e}_{j}}\right\rangle = 0 \] (7.29) for all \( i, j \in I \) with \( j \neq - i \) . Our assumptions also imply that \( V \) splits into a direct sum \[ V = {V}_{1} \oplus \cdots \oplus {V}_{n} \oplus {V}_{0} \oplus {V}_{-n} \oplus \cdots \oplus {V}_{-1}, \] where \( {V}_{i} = K{e}_{i} \) for \( i \in I \) and \( {V}_{0} \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{i \in I}}{e}_{i}^{ \bot } \) . We might have \( {V}_{0} = 0 \) ; this is necessarily true in the symplectic case. Or at the other extreme, \( {V}_{0} \) might be infinite-dimensional. 
In what follows we will represent linear maps \( V \rightarrow V \) by matrices whose rows and columns are labeled by \( I \cup \{ 0\} \), where the \( \left( {i, j}\right) \) -entry describes the \( \left( {{V}_{j} \rightarrow {V}_{i}}\right) \) component of the map. These components are not scalars, in general, if they involve \( {V}_{0} \). For example, the \( \left( {0,1}\right) \) -entry is a vector in \( {V}_{0} \) (the image of \( {e}_{1} \) under a map \( {V}_{1} \rightarrow {V}_{0} \) ); the \( \left( {1,0}\right) \) -entry is an element of \( {V}_{0}^{ * } \) (representing a map \( {V}_{0} \rightarrow {V}_{1} \cong K \) ); and the \( \left( {0,0}\right) \) -entry is a linear map \( {V}_{0} \rightarrow {V}_{0} \). We now consider the isometry group
\[ G \mathrel{\text{:=}} \{ g \in \mathrm{{GL}}\left( V\right) \mid \langle {gx},{gy}\rangle = \langle x, y\rangle \text{ for all }x, y \in V\} , \]
and we will exhibit an RGD system. First, we set \( T \) equal to the set of all \( g \in G \) represented by diagonal matrices (i.e., \( g{V}_{i} = {V}_{i} \) for all \( i \in I \cup \{ 0\} \) ). Next, we define root groups using "elementary" automorphisms of \( V \). The basic idea for this has already been illustrated in some of the examples in Chapter 6, where we used elementary subgroups in various copies of \( {\mathrm{{SL}}}_{2} \) or \( {\mathrm{{GL}}}_{2} \) in \( G \) as an aid in verifying the BN-pair axioms. Given any \( i, j \in I \) with \( i \neq \pm j \) and any \( \lambda \in K \), we can perform an "elementary change of basis" in which we replace \( {e}_{j} \) by \( {e}_{j}^{\prime } \mathrel{\text{:=}} {e}_{j} + \epsilon \left( i\right) \lambda {e}_{i} \) and we replace \( {e}_{-i} \) by \( {e}_{-i}^{\prime } \mathrel{\text{:=}} {e}_{-i} - \epsilon \left( j\right) {\lambda }^{\sigma }{e}_{-j} \). The new basis vectors satisfy the same inner-product relations as the old ones and have the same orthogonal complement \( {V}_{0} \).
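The claim that the new basis vectors satisfy the same relations can be checked by a small exact computation (our own sketch, in the symplectic case \( \sigma = \mathrm{id} \), \( \epsilon = -1 \), with \( n = 2 \), \( i = 1 \), \( j = 2 \), and rational arithmetic):

```python
from fractions import Fraction as F

LABELS = [1, 2, -2, -1]   # basis order e_i, e_j, e_{-j}, e_{-i} with i=1, j=2
EPS = -1                  # symplectic case: sigma = id, eps = -1
idx = {k: n for n, k in enumerate(LABELS)}

def eps(k):
    return 1 if k > 0 else EPS

def pair(x, y):
    """<x, y> from the relations <e_k, e_{-k}> = eps(k), all other pairs 0."""
    return sum(eps(k) * x[idx[k]] * y[idx[-k]] for k in LABELS)

def E12(lam):
    """Images of the basis vectors under the elementary change of basis."""
    cols = {k: [F(int(p == k)) for p in LABELS] for k in LABELS}  # identity
    cols[2][idx[1]] = eps(1) * lam      # e_j   -> e_j + eps(i) lam e_i
    cols[-1][idx[-2]] = -eps(2) * lam   # e_{-i} -> e_{-i} - eps(j) lam^sigma e_{-j}
    return cols

def apply(cols, x):
    out = [F(0)] * 4
    for k in LABELS:
        for n in range(4):
            out[n] += x[idx[k]] * cols[k][n]
    return out

lam = F(5, 3)
g = E12(lam)
basis = {k: [F(int(p == k)) for p in LABELS] for k in LABELS}

# the form is preserved on all pairs of basis vectors
for a in LABELS:
    for b in LABELS:
        assert pair(apply(g, basis[a]), apply(g, basis[b])) == pair(basis[a], basis[b])

# and the images are as claimed: e_2' = e_2 + lam e_1, e_{-1}' = e_{-1} - lam e_{-2}
assert apply(g, basis[2]) == [lam, F(1), F(0), F(0)]
assert apply(g, basis[-1]) == [F(0), F(0), -lam, F(1)]
```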
So there is an element \( {E}_{ij}\left( \lambda \right) \in G \) given by
\[ {e}_{j} \mapsto {e}_{j}^{\prime }, \]
\[ {e}_{-i} \mapsto {e}_{-i}^{\prime }, \]
\[ {e}_{l} \mapsto {e}_{l}\;\text{ if }l \in I \smallsetminus \{ j, - i\} , \]
\[ x \mapsto x\;\text{ if }x \in {V}_{0}. \]
Thus the restriction of \( {E}_{ij}\left( \lambda \right) \) to \( {V}_{ij} \mathrel{\text{:=}} {V}_{i} \oplus {V}_{j} \oplus {V}_{-j} \oplus {V}_{-i} \) is represented by the matrix
\[ \begin{pmatrix} 1 & \epsilon \left( i\right) \lambda & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & - \epsilon \left( j\right) {\lambda }^{\sigma } \\ 0 & 0 & 0 & 1 \end{pmatrix} \]
(rows and columns indexed by \( i, j, - j, - i \)), and \( {E}_{ij}\left( \lambda \right) \) is the identity on \( {V}_{ij}^{ \bot } \). We now set \( {U}_{ij} \mathrel{\text{:=}} \left\{ {{E}_{ij}\left( \lambda \right) \mid \lambda \in K}\right\} \leq G \). It is isomorphic to the additive group of \( K \). Note that \( {U}_{ij} \) is simply the image of the strict upper-triangular subgroup of \( {\mathrm{{SL}}}_{2}\left( K\right) \) under an embedding of the latter into \( G \). In most cases there is a second family of root subgroups \( {U}_{i}\left( {i \in I}\right) \), where \( {U}_{i} \) consists of the elements \( g \in G \) satisfying the following conditions:

(1) \( g{e}_{j} = {e}_{j} \) for \( j \in I \smallsetminus \{ \pm i\} \) .
(2) \( g \) stabilizes \( {V}_{i} \oplus {V}_{0} \oplus {V}_{-i} \) .
(3) The restriction of \( g \) to \( {V}_{i} \oplus {V}_{0} \oplus {V}_{-i} \) has a matrix of the form

<table><thead><tr><th></th><th>\( i \)</th><th>0</th><th>\( - i \)</th></tr></thead><tr><td>\( i \)</td><td>1</td><td>\( f \)</td><td>\( \lambda \)</td></tr><tr><td>0</td><td>0</td><td>1</td><td>\( v \)</td></tr><tr><td>\( - i \)</td><td>0</td><td>0</td><td>1</td></tr></table>

for some \( f \in {V}_{0}^{ * },\lambda \in K \), and \( v \in {V}_{0} \) .
In other words, \( g \) is given by \[ {e}_{j} \mapsto {e}_{j}\;\text{ if }j \in I \smallsetminus \{ \pm i\} , \] \[ {e}_{i} \mapsto {e}_{i} \] \[ {e}_{-i} \mapsto {e}_{-i} + v + \lambda {e}_{i} \] \[ x \mapsto x + f\left( x\right) {e}_{i}\;\text{ if }x \in {V}_{0}. \] It is easy to work out the conditions that \( f,\lambda, v \) must satisfy in order for \( g \) to preserve inner products. The crucial relations turn out to be \( \left\langle {{gx}, g{e}_{-i}}\right\rangle = 0 \) (for \( x \in {V}_{0} \) ) and \( \left\langle {g{e}_{-i}, g{e}_{-i}}\right\rangle = 0 \), which translate to \[ f\left( x\right) = - \epsilon \left( i\right) \langle x, v\rangle \] (7.30) and \[ \lambda + \epsilon {\lambda }^{\sigma } = - \epsilon \left( i\right) Q\left( v\right) \] (7.31) where \( Q\left( v\right) \mathrel{\text{:=}} \langle v, v\rangle \) . In particular, \( g \) is completely determined by the two parameters \( v \in {V}_{0} \) and \( \lambda \in K \) . If we write \( g = {g}_{i}\left( {v,\lambda }\right) \), then we have the multiplication rule \[ {g}_{i}\left( {v,\lambda }\right) {g}_{i}\left( {{v}^{\prime },{\lambda }^{\prime }}\right) = {g}_{i}\left( {v + {v}^{\prime },\lambda + {\lambda }^{\prime } - \epsilon \left( i\right) \left\langle {{v}^{\prime }, v}\right\rangle }\right) , \] (7.32) which follows from the calculation \[ \left( \begin{array}{lll} 1 & f & \lambda \\ 0 & 1 & v \\ 0 & 0 & 1 \end{array}\right) \left( \begin{matrix} 1 & {f}^{\prime } & {\lambda }^{\prime } \\ 0 & 1 & {v}^{\prime } \\ 0 & 0 & 1 \end{matrix}\right) = \left( \begin{matrix} 1 & f + {f}^{\prime } & \lambda + {\lambda }^{\prime } + f\left( {v}^{\prime }\right) \\ 0 & 1 & v + {v}^{\prime } \\ 0 & 0 & 1 \end{matrix}\right) \] for \( f,{f}^{\prime } \in {V}_{0}^{ * },\lambda ,{\lambda }^{\prime } \in K \), and \( v,{v}^{\prime } \in {V}_{0} \) . 
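The multiplication rule (7.32), together with the constraints (7.30) and (7.31), can be checked numerically in a small example of our own (unitary case: \( K = \mathbb{C} \), \( \sigma \) complex conjugation, \( \epsilon = 1 \), \( V_0 = \mathbb{C} \) with \( \langle u, w\rangle = u\bar{w} \), and \( i > 0 \) so that \( \epsilon(i) = 1 \)); here \( f \) reduces to the scalar \( -\bar{v} \), by (7.30).

```python
# Check of g_i(v, lam) g_i(v', lam') = g_i(v + v', lam + lam' - <v', v>)
# via 3x3 upper-triangular matrices on V_i + V_0 + V_{-i}.

def g(v, lam):
    """Matrix of g_i(v, lam); by (7.30), f is the scalar -conj(v)."""
    return [[1, -v.conjugate(), lam],
            [0, 1, v],
            [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def isotropy(v, lam):
    """Relation (7.31): lam + conj(lam) = -Q(v), with Q(v) = |v|^2 here."""
    return abs(lam + lam.conjugate() + v * v.conjugate()) < 1e-12

v, lam = 1 + 2j, -2.5 + 7j    # |v|^2 = 5,  Re(lam) = -5/2
w, mu = 3 - 1j, -5.0 - 4j     # |w|^2 = 10, Re(mu) = -5
assert isotropy(v, lam) and isotropy(w, mu)

prod = matmul(g(v, lam), g(w, mu))
expected = g(v + w, lam + mu - w * v.conjugate())   # <v', v> = v' * conj(v)
assert all(abs(prod[r][c] - expected[r][c]) < 1e-12
           for r in range(3) for c in range(3))

# the product again satisfies (7.31), so it lies in the same root subgroup
assert isotropy(v + w, lam + mu - w * v.conjugate())
```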
In the symplectic case, where \( \left( {\sigma ,\epsilon }\right) = \left( {\mathrm{{id}}, - 1}\right) \), we have \( {V}_{0} = 0 \), and (7.31) holds for all \( \lambda \) . An element \( g \in {U}_{i} \) is then determined by the parameter \( \lambda \) , and \( {U}_{i} \) is isomorphic to the additive group of \( K \) . It is the image of the strict upper-triangular subgroup of \( {\mathrm{{SL}}}_{2}\left( K\right) \) under an embedding of the latter into \( G \) . In the orthogonal case, where \( \left( {\sigma ,\epsilon }\right) = \left( {\mathrm{{id}},1}\right) \), equation (7.31) says that \[ \lambda = - \epsilon \left( i\right) Q\left( v\right) /2, \] so that an element \( g = {g}_{i}\left( {v,\lambda }\right)
111_111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 4.2
Definition 4.2 A spectral premeasure on \( \mathfrak{A} \) is a mapping \( E \) of \( \mathfrak{A} \) into the orthogonal projections on \( \mathcal{H} \) such that

(i) \( E\left( \Omega \right) = I \) ,
(ii) \( E \) is countably additive, that is, \( E\left( {\mathop{\bigcup }\limits_{{n = 1}}^{\infty }{M}_{n}}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }E\left( {M}_{n}\right) \) for any sequence \( {\left( {M}_{n}\right) }_{n \in \mathbb{N}} \) of pairwise disjoint sets from \( \mathfrak{A} \) whose union is also in \( \mathfrak{A} \) .

If \( \mathfrak{A} \) is a \( \sigma \) -algebra, then a spectral premeasure on \( \mathfrak{A} \) is called a spectral measure. Infinite sums as above are always meant in the sense of strong convergence; that is, the equation in (ii) says that \( E\left( {\mathop{\bigcup }\limits_{{n = 1}}^{\infty }{M}_{n}}\right) x = \mathop{\lim }\limits_{{k \rightarrow \infty }}\mathop{\sum }\limits_{{n = 1}}^{k}E\left( {M}_{n}\right) x \) for \( x \in \mathcal{H} \). Let us begin with the standard example of a spectral measure.

Example 4.4 Let \( \left( {\Omega ,\mathfrak{A},\mu }\right) \) be a measure space, and \( \mathcal{H} = {L}^{2}\left( {\Omega ,\mu }\right) \). For \( M \in \mathfrak{A} \), let \( E\left( M\right) \) be the multiplication operator by the characteristic function \( {\chi }_{M} \), that is,
\[ \left( {E\left( M\right) f}\right) \left( t\right) = {\chi }_{M}\left( t\right) \cdot f\left( t\right) ,\;f \in \mathcal{H}. \] (4.15)
Since \( {\chi }_{M}^{2} = {\chi }_{M} = {\bar{\chi }}_{M} \), we have \( E{\left( M\right) }^{2} = E\left( M\right) = E{\left( M\right) }^{ * } \), so \( E\left( M\right) \) is an orthogonal projection. Obviously, \( E\left( \Omega \right) = I \). We verify axiom (ii). Let \( {\left( {M}_{n}\right) }_{n \in \mathbb{N}} \) be a sequence of disjoint sets of \( \mathfrak{A} \) and set \( M \mathrel{\text{:=}} \mathop{\bigcup }\limits_{n}{M}_{n} \) .
For \( f \in \mathcal{H} \), set \( {f}_{k} \mathrel{\text{:=}} \mathop{\sum }\limits_{{n = 1}}^{k}{\chi }_{{M}_{n}}f \) . Since \( {\left| {\chi }_{M}f - {f}_{k}\right| }^{2} \rightarrow 0 \) as \( k \rightarrow \infty \) and \( {\left| {\chi }_{M}f - {f}_{k}\right| }^{2} \leq 4{\left| f\right| }^{2} \) on \( \Omega \), it follows from Lebesgue’s theorem (Theorem B.1) that \( \mathop{\lim }\limits_{k}{\begin{Vmatrix}{\chi }_{M}f - {f}_{k}\end{Vmatrix}}^{2} = 0 \), and so \( {\chi }_{M}f = \mathop{\lim }\limits_{k}\mathop{\sum }\limits_{{n = 1}}^{k}{\chi }_{{M}_{n}}f \) . The latter means that \( E\left( M\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }E\left( {M}_{n}\right) \) . Hence, \( E \) is a spectral measure on \( \mathfrak{A} \) . Now we discuss some simple consequences of Definition 4.2 and suppose that \( E \) is a spectral premeasure on an algebra \( \mathfrak{A} \) . The case \( {M}_{n} = \varnothing \) for all \( n \) in (ii) yields \( E\left( \varnothing \right) = 0 \) . Setting \( {M}_{n} = \varnothing \) if \( n \geq k + 1 \) in (ii), we obtain the finite additivity of \( E \), that is, for pairwise disjoint sets \( {M}_{1},\ldots ,{M}_{k} \in \mathfrak{A} \), we have \[ E\left( {{M}_{1} \cup \cdots \cup {M}_{k}}\right) = E\left( {M}_{1}\right) + \cdots + E\left( {M}_{k}\right) . \] Lemma 4.3 If \( E \) is a finitely additive map of an algebra \( \mathfrak{A} \) into the orthogonal projections on a Hilbert space \( \mathcal{H} \), then we have \[ E\left( M\right) E\left( N\right) = E\left( {M \cap N}\right) \;\text{ for }M, N \in \mathfrak{A}. \] (4.16) In particular, \( E\left( M\right) E\left( N\right) = 0 \) if \( M, N \in \mathfrak{A} \) are disjoint. Proof First we note that \( E\left( {M}_{1}\right) E\left( {M}_{2}\right) = 0 \) if \( {M}_{1},{M}_{2} \in \mathfrak{A} \) are disjoint. 
Indeed, by the finite additivity of \( E \), the sum of the two projections \( E\left( {M}_{1}\right) \) and \( E\left( {M}_{2}\right) \) is again a projection. But if \( P \) and \( Q \) are projections whose sum \( P + Q \) is a projection, then \( {\left( P + Q\right) }^{2} = P + Q \) yields \( {PQ} + {QP} = 0 \); multiplying this relation by \( P \) from the left and from the right gives \( {PQ} + {PQP} = 0 = {PQP} + {QP} \), so \( {PQ} = {QP} \) and hence \( {PQ} = 0 \). Therefore, \( E\left( {M}_{1}\right) E\left( {M}_{2}\right) = 0 \). Now we put \( {M}_{0} \mathrel{\text{:=}} M \cap N,{M}_{1} \mathrel{\text{:=}} M \smallsetminus {M}_{0},{M}_{2} \mathrel{\text{:=}} N \smallsetminus {M}_{0} \). Since \( {M}_{1} \cap {M}_{2} = {M}_{0} \cap {M}_{2} = {M}_{1} \cap {M}_{0} = \varnothing \), by the preceding we have
\[ E\left( {M}_{1}\right) E\left( {M}_{2}\right) = E\left( {M}_{0}\right) E\left( {M}_{2}\right) = E\left( {M}_{1}\right) E\left( {M}_{0}\right) = 0. \] (4.17)
Since \( M = {M}_{1} \cup {M}_{0} \) and \( N = {M}_{2} \cup {M}_{0} \), from the finite additivity of \( E \) and from formula (4.17) we derive
\[ E\left( M\right) E\left( N\right) = \left( {E\left( {M}_{1}\right) + E\left( {M}_{0}\right) }\right) \left( {E\left( {M}_{2}\right) + E\left( {M}_{0}\right) }\right) = E{\left( {M}_{0}\right) }^{2} = E\left( {M \cap N}\right) . \]
Note that for scalar measures, equality (4.16) does not hold in general. By (4.16), two arbitrary projections \( E\left( M\right) \) and \( E\left( N\right) \) for \( M, N \in \mathfrak{A} \) commute. Moreover, if \( M \supseteq N \) for \( M, N \in \mathfrak{A} \), then \( E\left( M\right) = E\left( N\right) + E\left( {M \smallsetminus N}\right) \geq E\left( N\right) \). The next lemma characterizes a spectral measure in terms of scalar measures.

Lemma 4.4 A map \( E \) of an algebra (resp. \( \sigma \) -algebra) \( \mathfrak{A} \) on a set \( \Omega \) into the orthogonal projections on \( \mathcal{H} \) is a spectral premeasure (resp. spectral measure) if and only if \( E\left( \Omega \right) = I \) and for each vector \( x \in \mathcal{H} \), the set function \( {E}_{x}\left( \cdot \right) \mathrel{\text{:=}} \langle E\left( \cdot \right) x, x\rangle \) on \( \mathfrak{A} \) is countably additive (resp.
is a measure). Proof The only if assertion follows at once from Definition 4.2. We prove the if direction. Let \( {\left( {M}_{n}\right) }_{n \in \mathbb{N}} \) be a sequence of disjoint sets in \( \mathfrak{A} \) such that \( M \mathrel{\text{:=}} \mathop{\bigcup }\limits_{n}{M}_{n} \) is also in \( \mathfrak{A} \) . Since \( {E}_{x} \) is finitely additive for each element \( x \in \mathcal{H} \) , \( E \) is finitely additive as well. Therefore, by Lemma 4.3, \( \left( {E\left( {M}_{n}\right) }\right) \) is a sequence of pairwise orthogonal projections. Hence, the series \( \mathop{\sum }\limits_{n}E\left( {M}_{n}\right) \) converges strongly. Because \( {E}_{x} \) is countably additive by assumption, we have \[ \langle E\left( M\right) x, x\rangle = {E}_{x}\left( M\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{E}_{x}\left( {M}_{n}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {E\left( {M}_{n}\right) x, x}\right\rangle = \left\langle {\mathop{\sum }\limits_{{n = 1}}^{\infty }E\left( {M}_{n}\right) x, x}\right\rangle \] for each \( x \in \mathcal{H} \), and hence \( E\left( M\right) = \mathop{\sum }\limits_{n}E\left( {M}_{n}\right) \) by the polarization formula (1.2). Thus, \( E \) is a spectral premeasure on \( \mathfrak{A} \) . Let \( E \) be a spectral measure on the \( \sigma \) -algebra \( \mathfrak{A} \) in \( \mathcal{H} \) . As noted in Lemma 4.4, each vector \( x \in \mathcal{H} \) gives rise to a scalar positive measure \( {E}_{x} \) on \( \mathfrak{A} \) by \[ {E}_{x}\left( M\right) \mathrel{\text{:=}} \parallel E\left( M\right) x{\parallel }^{2} = \langle E\left( M\right) x, x\rangle ,\;M \in \mathfrak{A}. \] The measure \( {E}_{x} \) is finite, since \( {E}_{x}\left( \Omega \right) = \parallel E\left( \Omega \right) x{\parallel }^{2} = \parallel x{\parallel }^{2} \) . The family of these measures \( {E}_{x} \) plays a crucial role for the study of spectral integrals in Section 4.3. Let \( x, y \in \mathcal{H} \) . 
Then there is a complex measure \( {E}_{x, y} \) on \( \mathfrak{A} \) given by \( {E}_{x, y}\left( M\right) = \langle E\left( M\right) x, y\rangle \). The complex measure \( {E}_{x, y} \) is a linear combination of four positive measures \( {E}_{z}, z \in \mathcal{H} \). Indeed, by the polarization formula (1.2),
\[ {E}_{x, y} = \frac{1}{4}\left( {{E}_{x + y} - {E}_{x - y} + \mathrm{i}{E}_{x + \mathrm{i}y} - \mathrm{i}{E}_{x - \mathrm{i}y}}\right) . \]
The following lemma is very similar to the scalar case. We omit the simple proof.

Lemma 4.5 Let \( E \) be a spectral premeasure on \( \mathfrak{A} \). Suppose that \( {\left( {M}_{n}\right) }_{n \in \mathbb{N}} \) is a decreasing sequence and \( {\left( {N}_{n}\right) }_{n \in \mathbb{N}} \) is an increasing sequence of sets in \( \mathfrak{A} \) such that \( M \mathrel{\text{:=}} \mathop{\bigcap }\limits_{n}{M}_{n} \) and \( N \mathrel{\text{:=}} \mathop{\bigcup }\limits_{n}{N}_{n} \) are in \( \mathfrak{A} \). Then we have \( E\left( M\right) = \mathrm{s} - \mathop{\lim }\limits_{{n \rightarrow \infty }}E\left( {M}_{n}\right) \) and \( E\left( N\right) = \mathrm{s} - \mathop{\lim }\limits_{{n \rightarrow \infty }}E\left( {N}_{n}\right) \).

The following theorem states a one-to-one correspondence between resolutions of the identity and spectral measures on the Borel \( \sigma \) -algebra \( \mathfrak{B}\left( \mathbb{R}\right) \). Its proof uses Lemma 4.9 below, which deals with the main technical difficulty, namely showing that the corresponding operators \( E\left( M\right), M \in \mathfrak{B}\left( \mathbb{R}\right) \), are indeed projections.

Theorem 4.6 If \( E \) is a spectral measure on the Borel \( \sigma \) -algebra \( \mathfrak{B}\left( \mathbb{R}\right) \) in \( \mathcal{H} \), then
\[ E\left( \lambda \right) \mathrel{\text{:=}} E(\left( {-\infty ,\lambda \rbrack }\right) ,\;\lambda \in \mathbb{R}, \] (4.18)
defines a resolution of the identity.
Conversely, for each resolution of the identity \( \{ E\left( \lambda \right) : \lambda \in \mathbb{R}\} \), there is a unique spectral measure \( E \) on \( \mathfrak{B}\left( \mathbb{R}\right) \) such that (4.18) holds. Proof Let \( E \
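The notions above can be illustrated in a finite-dimensional toy model of our own (not from the text): \( \Omega = \{0,\ldots,4\} \) with counting measure, \( \mathcal{H} = L^2(\Omega) = \mathbb{C}^5 \), and \( E(M) = \) multiplication by \( \chi_M \), as in Example 4.4. The defining axioms, the multiplicativity of Lemma 4.3, and the polarization identity behind \( E_{x,y} \) can all be checked directly.

```python
def E(M):
    """Diagonal of the projection E(M): (E(M) f)(t) = chi_M(t) f(t) on C^5."""
    return [1 if t in M else 0 for t in range(5)]

def apply(d, f):
    return [a * x for a, x in zip(d, f)]

def inner(u, w):
    """Inner product on C^5, linear in the first slot."""
    return sum(a * b.conjugate() for a, b in zip(u, w))

OMEGA = set(range(5))
M, N = {0, 1, 3}, {1, 2, 3}

# E(Omega) = I, and each E(M) is an orthogonal projection (real diagonal)
assert E(OMEGA) == [1] * 5
assert all(a * a == a for a in E(M))

# additivity on disjoint sets: E(M1 u M2) = E(M1) + E(M2)
M1, M2 = {0, 2}, {1, 4}
assert E(M1 | M2) == [a + b for a, b in zip(E(M1), E(M2))]

# Lemma 4.3: E(M) E(N) = E(M n N) -- a property scalar measures lack,
# e.g. |M| * |N| != |M n N| for the counting measure
assert [a * b for a, b in zip(E(M), E(N))] == E(M & N)
assert len(M) * len(N) != len(M & N)

# polarization: E_{x,y}(M) = <E(M)x, y> recovered from four positive
# measures E_z(M) = <E(M)z, z>
x = [1 + 1j, 2, -3j, 0.5, -1]
y = [2j, 1 - 1j, 1, 0, 3]
comb = lambda c: [a + c * b for a, b in zip(x, y)]
Ez = lambda z: inner(apply(E(M), z), z)
rhs = (Ez(comb(1)) - Ez(comb(-1)) + 1j * Ez(comb(1j)) - 1j * Ez(comb(-1j))) / 4
assert abs(inner(apply(E(M), x), y) - rhs) < 1e-12
```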
1112_(GTM267)Quantum Theory for Mathematicians
Definition 13.7
Definition 13.7 For all \( f \in {L}^{2}\left( {\mathbb{R}}^{2n}\right) \), define \( {\kappa }_{f} : {\mathbb{R}}^{2n} \rightarrow \mathbb{C} \) by \[ {\kappa }_{f}\left( {\mathbf{x},\mathbf{y}}\right) = {\left( 2\pi \hslash \right) }^{-n}{\int }_{{\mathbb{R}}^{n}}f\left( {\left( {\mathbf{x} + \mathbf{y}}\right) /2,\mathbf{p}}\right) {e}^{-i\left( {\mathbf{y} - \mathbf{x}}\right) \cdot \mathbf{p}/\hslash }d\mathbf{p}, \] (13.26) and define the Weyl quantization of \( f \), as an operator on \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \), by \[ {Q}_{\text{Weyl }}\left( f\right) = {A}_{{\kappa }_{f}} \] where \( {A}_{{\kappa }_{f}} \) is defined by (13.25). The integral in (13.26) is not necessarily absolutely convergent, and should be understood as computing a partial Fourier transform. Thus, we should, strictly speaking, replace the right-hand side of (13.26) with \[ \mathop{\lim }\limits_{{R \rightarrow \infty }}{\left( 2\pi \hslash \right) }^{-n}{\int }_{\left| \mathbf{p}\right| \leq R}f\left( {\left( {\mathbf{x} + \mathbf{y}}\right) /2,\mathbf{p}}\right) {e}^{-i\left( {\mathbf{y} - \mathbf{x}}\right) \cdot \mathbf{p}/\hslash }d\mathbf{p}, \] (13.27) where the limit is in the norm topology of \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) . [The partial Fourier transform maps the Schwartz space \( \mathcal{S}\left( {\mathbb{R}}^{2n}\right) \) to itself. By Fubini’s theorem and the Plancherel formula for \( {\mathbb{R}}^{n} \), the partial Fourier transform is an \( {L}^{2} \) - isometry and extends to a unitary map of \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) to itself. This unitary map can be computed by the usual formula on functions in \( {L}^{1} \cap {L}^{2} \) and can be computed by the limiting formula similar to (13.27) in general.] In words, we may describe the procedure for computing \( {\kappa }_{f} \) at a point \( \left( {{\mathbf{x}}^{1},{\mathbf{x}}^{2}}\right) \) in \( {\mathbb{R}}^{2n} \) as follows. 
First, compute the partial Fourier transform \( {\mathcal{F}}_{\mathbf{p}} \) of \( f\left( {\mathbf{x},\mathbf{p}}\right) \) in the \( \mathbf{p} \) -variable, resulting in the function \( \left( {{\mathcal{F}}_{\mathbf{p}}f}\right) \left( {\mathbf{x},\xi }\right) \) . Then evaluate \( {\mathcal{F}}_{\mathbf{p}}f \) at the point \( \mathbf{x} = \left( {{\mathbf{x}}^{1} + {\mathbf{x}}^{2}}\right) /2,\xi = \left( {{\mathbf{x}}^{2} - {\mathbf{x}}^{1}}\right) /\hslash \) . Finally, multiply the result by \( {\hslash }^{-n}{\left( 2\pi \right) }^{-n/2} \) to get \[ {\kappa }_{f}\left( {{\mathbf{x}}^{1},{\mathbf{x}}^{2}}\right) = {\hslash }^{-n}{\left( 2\pi \right) }^{-n/2}\left( {{\mathcal{F}}_{\mathbf{p}}f}\right) \left( {\left( {{\mathbf{x}}^{1} + {\mathbf{x}}^{2}}\right) /2,\left( {{\mathbf{x}}^{2} - {\mathbf{x}}^{1}}\right) /\hslash }\right) . \] (13.28) Theorem 13.8 The map \( {Q}_{\text{Weyl }} \) is a constant multiple of a unitary map of \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) onto \( \operatorname{HS}\left( {{L}^{2}\left( {\mathbb{R}}^{n}\right) }\right) \) . The inverse map \( {Q}_{\text{Weyl }}^{-1} : \operatorname{HS}\left( {{L}^{2}\left( {\mathbb{R}}^{n}\right) }\right) \rightarrow \) \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) is given by \[ {Q}_{\text{Weyl }}^{-1}\left( A\right) \left( {\mathbf{x},\mathbf{p}}\right) = {\hslash }^{n}{\int }_{{\mathbb{R}}^{n}}\kappa \left( {\mathbf{x} - \hslash \mathbf{b}/2,\mathbf{x} + \hslash \mathbf{b}/2}\right) {e}^{i\mathbf{b} \cdot \mathbf{p}}d\mathbf{b}, \] where \( \kappa \) is the integral kernel of \( A \) as in Proposition 13.6. Furthermore, for all \( f \in {L}^{2}\left( {\mathbb{R}}^{2n}\right) \), we have \( {Q}_{\text{Weyl }}\left( \bar{f}\right) = {Q}_{\text{Weyl }}{\left( f\right) }^{ * } \) ; in particular, \( {Q}_{\text{Weyl }}\left( f\right) \) is self-adjoint if \( f \) is real valued. Properly speaking, the integral in the theorem should be understood as an \( {L}^{2} \) limit, as in (13.27). 
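The self-adjointness claim in Theorem 13.8 can be checked numerically in the case \( n = 1 \). The sketch below is illustrative only (the choice \( \hslash = 1 \), the Gaussian symbol, and the grids are assumptions, not from the text): it evaluates \( {\kappa }_{f} \) from (13.26) by direct quadrature and verifies that a real symbol \( f \) yields a Hermitian kernel, \( {\kappa }_{f}\left( {x, y}\right) = \overline{{\kappa }_{f}\left( {y, x}\right) } \).

```python
import numpy as np

hbar = 1.0  # illustrative choice of Planck's constant

def kappa(f, xs, ps):
    """Kernel kappa_f(x, y) of the Weyl quantization of f, for n = 1.

    Direct Riemann-sum quadrature of (13.26):
    kappa_f(x, y) = (2*pi*hbar)^(-1) * int f((x+y)/2, p) e^{-i(y-x)p/hbar} dp
    """
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    K = np.zeros((len(xs), len(xs)), dtype=complex)
    dp = ps[1] - ps[0]
    for p in ps:
        K += f((X + Y) / 2, p) * np.exp(-1j * (Y - X) * p / hbar) * dp
    return K / (2 * np.pi * hbar)

# A real-valued Gaussian symbol on phase space (illustrative choice).
f = lambda q, p: np.exp(-(q**2 + p**2))

xs = np.linspace(-4, 4, 81)    # position grid for (x, y)
ps = np.linspace(-8, 8, 201)   # momentum grid; integrand decays fast
K = kappa(f, xs, ps)

# Real f  =>  kappa_f(x, y) = conj(kappa_f(y, x)), i.e. Q_Weyl(f) is Hermitian.
print(np.max(np.abs(K - K.conj().T)))
```

The discretized kernel inherits the symmetry exactly (up to rounding), mirroring the relation \( {\kappa }_{\bar{f}}\left( {\mathbf{x},\mathbf{y}}\right) = \overline{{\kappa }_{f}\left( {\mathbf{y},\mathbf{x}}\right) } \) used in the proof below.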
The fact that \( {Q}_{\text{Weyl }} \) is unitary (up to a constant) tells us that for an appropriate constant \( c \), the operators \( c{e}^{i\left( {\mathbf{a} \cdot \mathbf{X} + \mathbf{b} \cdot \mathbf{P}}\right) } \) form an "orthonormal basis in the continuous sense" for the Hilbert space \( \operatorname{HS}\left( {{L}^{2}\left( {\mathbb{R}}^{n}\right) }\right) \) . (Compare Sect. 6.6.) It is possible, using the same formulas, to extend the notion of Weyl quantization to symbols belonging to the space of tempered distributions, that is, the space of continuous linear functionals on \( \mathcal{S}\left( {\mathbb{R}}^{2n}\right) \) . We will not, however, develop this construction here. See [11] for more information. Proof. Proposition 13.6 gives a unitary identification of \( \operatorname{HS}\left( {{L}^{2}\left( {\mathbb{R}}^{n}\right) }\right) \) with \( {L}^{2}\left( {{\mathbb{R}}^{n} \times {\mathbb{R}}^{n}}\right) \) . Thus, it suffices to show that the map \( f \mapsto {\kappa }_{f} \) is a multiple of a unitary map. This result holds because the partial Fourier transform is a unitary map of \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) to itself and composition with an invertible linear map is a constant multiple of a unitary map. The inverse of the map \( f \mapsto {\kappa }_{f} \) is obtained by inverting the linear map and undoing the partial Fourier transform. Finally, it is apparent from (13.26) that \[ {\kappa }_{\bar{f}}\left( {\mathbf{x},\mathbf{y}}\right) = \overline{{\kappa }_{f}\left( {\mathbf{y},\mathbf{x}}\right) } \] This, along with Exercise 6, shows that \( {Q}_{\text{Weyl }}\left( \bar{f}\right) = {Q}_{\text{Weyl }}{\left( f\right) }^{ * } \) . ∎ ## 13.3.3 The Composition Formula If \( f \) and \( g \) are \( {L}^{2} \) functions on \( {\mathbb{R}}^{2n} \), then \( {Q}_{\text{Weyl }}\left( f\right) \) and \( {Q}_{\text{Weyl }}\left( g\right) \) are Hilbert-Schmidt operators, in which case their product is again Hilbert-Schmidt. 
(Indeed, the product of a Hilbert-Schmidt operator and a bounded operator is always Hilbert-Schmidt.) Thus, since \( {Q}_{\text{Weyl }} \) is a bijection of \( {L}^{2}\left( {\mathbb{R}}^{2n}\right) \) with \( \operatorname{HS}\left( {{L}^{2}\left( {\mathbb{R}}^{n}\right) }\right) \), there is a unique \( {L}^{2} \) function, which we denote by \( f \star g \), such that \[ {Q}_{\text{Weyl }}\left( f\right) {Q}_{\text{Weyl }}\left( g\right) = {Q}_{\text{Weyl }}\left( {f \star g}\right) . \] (13.29) (Of course, the operator \( \star \), like the Weyl quantization itself, depends on \( \hslash \) , but we suppress this dependence in the notation.) Proposition 13.9 The Moyal product \( f \star g \) may be characterized in terms of the Fourier transform as \[ \widehat{\left( f \star g\right) }\left( {\mathbf{a},\mathbf{b}}\right) = {\left( 2\pi \right) }^{-n}\iint {e}^{-i\hslash \left( {\mathbf{a} \cdot {\mathbf{b}}^{\prime } - \mathbf{b} \cdot {\mathbf{a}}^{\prime }}\right) /2} \] \[ \times \widehat{f}\left( {\mathbf{a} - {\mathbf{a}}^{\prime },\mathbf{b} - {\mathbf{b}}^{\prime }}\right) \widehat{g}\left( {{\mathbf{a}}^{\prime },{\mathbf{b}}^{\prime }}\right) d{\mathbf{a}}^{\prime }d{\mathbf{b}}^{\prime }, \] where both integrals are over \( {\mathbb{R}}^{n} \) . Note that if we set \( \hslash = 0 \) in the above formula, \( \widehat{f \star g} \) reduces to \( {\left( 2\pi \right) }^{-n} \) times the convolution of \( \widehat{f} \) and \( \widehat{g} \), which is nothing but the Fourier transform of \( {fg} \) . It is thus not difficult to show (Exercise 10) that \[ \mathop{\lim }\limits_{{\hslash \rightarrow {0}^{ + }}}f \star g = {fg} \] That is to say, the Moyal product \( f \star g \) is a "deformation" of the ordinary pointwise product of functions on \( {\mathbb{R}}^{2n} \) . More generally, the Moyal product can be expanded in an asymptotic expansion in powers of \( \hslash \), as explained in Sect. 2.3 of [11]. 
This expansion terminates in the case that \( f \) and \( g \) are both polynomials. Proof. It is, of course, possible to obtain this formula using kernel functions. It is, however, easier to work with the formula (13.17), which can be shown (Exercise 7) to give the same result as Definition 13.7 when \( f \) is a Schwartz function. We assume standard properties of the Bochner integral for functions with values in a Banach space [in our case, \( \mathcal{B}\left( \mathbf{H}\right) \) ], which are similar to those of the Lebesgue integral. (See, for example, Sect. V. 5 of [46].) We have, then, \[ {Q}_{\text{Weyl }}\left( f\right) {Q}_{\text{Weyl }}\left( g\right) = {\left( 2\pi \right) }^{-n}\iint \widehat{f}\left( {\mathbf{a},\mathbf{b}}\right) {e}^{i\left( {\mathbf{a} \cdot \mathbf{X} + \mathbf{b} \cdot \mathbf{P}}\right) }d\mathbf{a}d\mathbf{b} \] \[ \times {\left( 2\pi \right) }^{-n}\iint \widehat{g}\left( {{\mathbf{a}}^{\prime },{\mathbf{b}}^{\prime }}\right) {e}^{i\left( {{\mathbf{a}}^{\prime } \cdot \mathbf{X} + {\mathbf{b}}^{\prime } \cdot \mathbf{P}}\right) }d{\mathbf{a}}^{\prime }d{\mathbf{b}}^{\prime }. \] (13.30) Now, it is an easy calculation to verify, using Proposition 13.5, that \[ {e}^{i\left( {\mathbf{a} \cdot \mathbf{X} + \mathbf{b} \cdot \mathbf{P}}\right) }{e}^{i\left( {{\mathbf{a}}^{\prime } \cdot \mathbf{X} + {\mathbf{b}}^{\prime } \cdot \mathbf{P}}\right) } = {e}^{-i\hslash \left( {\mathbf{a} \cdot {\mathbf{b}}^{\prime } - \mathbf{b} \cdot {\mathbf{a}}^{\prime }}\right) /2}{e}^{i\left( {\left( {\mathbf{a} + {\mathbf{a}}^{\prime }}\right) \cdot \mathbf{X} + \left( {\mathbf{b} + {\mathbf{b}}^{\prime }}\right) \cdot \mathbf{P}}\right) }, \] (13.31) which is what one obtains by formally applying the special case of the Baker-Campbell-Hausdorff formula in (13.18). 
Thus, we may combine the integrals in (13.30) to obtain \[ {Q}_{\text{Weyl }}\left( f\right) {Q}_{\text{Weyl }}\left( g\right) = {\left( 2\pi \right) }^{-{2n}}\iiiint {e}^{-i\hslash \left( {\mathbf{a} \cdot {\mathbf{b}}^{\prime } - \mathbf{b} \cdot {\mathbf{a}}^{\prime }}\right) /2}\widehat{f}\left( {\mathbf{a},\mathbf{b}}\right) \widehat{g}\left( {{\mathbf{a}}^{\prime },{\mathbf{b}}^{\prime }}\right) {e}^{i\left( {\left( {\mathbf{a} + {\mathbf{a}}^{\prime }}\right) \cdot \mathbf{X} + \left( {\mathbf{b} + {\mathbf{b}}^{\prime }}\right) \cdot \mathbf{P}}\right) }\,d\mathbf{a}\,d\mathbf{b}\,d{\mathbf{a}}^{\prime }\,d{\mathbf{b}}^{\prime }.
18_Algebra Chapter 0
Definition 7.5
Definition 7.5. Let A, B be abelian categories, and assume that A has enough projectives. Let \( \mathcal{F} : \mathrm{A} \rightarrow \mathrm{B} \) be an additive functor. The \( i \) -th left-derived functor \( {\mathrm{L}}_{i}\mathcal{F} \) of \( \mathcal{F} \) is the functor \( \mathrm{A} \rightarrow \mathrm{B} \) given by \[ {\mathsf{L}}_{i}\mathcal{F} = {H}^{-i} \circ \mathsf{L}\mathcal{F} \circ {\mathcal{P}}_{\mathrm{A}} \circ {\iota }_{\mathrm{A}} \] For an object \( M \) in \( \mathrm{A} \), the complex in \( \mathrm{C}\left( \mathrm{B}\right) \) with \( {\mathrm{L}}_{i}\mathcal{F}\left( M\right) \) in degree \( - i \) and with vanishing differentials is denoted by \( {\mathrm{L}}_{ \bullet }\mathcal{F}\left( M\right) \) . Let’s spell this out. Given an object \( M \) of \( \mathrm{A},{\mathrm{L}}_{i}\mathcal{F}\left( M\right) \) is obtained by finding any projective resolution \( {P}_{M}^{ \bullet } \) of \( M \), applying the functor \( \mathrm{C}\left( \mathcal{F}\right) \) to \( {P}_{M}^{ \bullet } \) to obtain a complex in \( \mathrm{C}\left( \mathrm{B}\right) \), and taking the \( \left( {-i}\right) \) -th cohomology of this complex\( {}^{31} \). Up to isomorphism, the result does not depend on the choice of the projective resolution: this is clear from the path of concepts that led us here, and it is directly implied by - Proposition 6.4, showing that any two projective resolutions are homotopy equivalent, in conjunction with - Theorem 4.14, showing that the images of homotopy equivalent complexes via an additive functor have isomorphic cohomology. 
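The 'concrete' recipe is easy to carry out in the category of abelian groups, anticipating Example 7.6 below. For \( \mathcal{F} = \_ { \otimes }_{\mathbb{Z}}\mathbb{Z}/n \) and \( M = \mathbb{Z}/m \), the free (hence projective) resolution \( 0 \rightarrow \mathbb{Z}\overset{m}{ \rightarrow }\mathbb{Z} \rightarrow \mathbb{Z}/m \rightarrow 0 \) becomes, after applying \( \mathcal{F} \), the complex \( \mathbb{Z}/n\overset{m}{ \rightarrow }\mathbb{Z}/n \) in degrees 1 and 0, so \( {\mathrm{L}}_{0}\mathcal{F}\left( M\right) \) and \( {\mathrm{L}}_{1}\mathcal{F}\left( M\right) \) are its cokernel and kernel, each of size \( \gcd \left( {m, n}\right) \). A brute-force sketch (illustrative code, not part of the text):

```python
from math import gcd

def left_derived_tensor(m, n):
    """Sizes of L_0 F(Z/m) and L_1 F(Z/m) for F = _ (x)_Z Z/n.

    Apply F to the free resolution 0 -> Z --m--> Z -> Z/m -> 0,
    giving the complex  Z/n --m--> Z/n  in degrees 1 and 0,
    then take homology: cokernel in degree 0, kernel in degree 1.
    """
    image = {(m * x) % n for x in range(n)}
    kernel = [x for x in range(n) if (m * x) % n == 0]
    h0 = n // len(image)   # |coker| = |Tor_0(Z/m, Z/n)| = |Z/m (x) Z/n|
    h1 = len(kernel)       # |ker|   = |Tor_1(Z/m, Z/n)|
    return h0, h1

for m, n in [(4, 6), (3, 5), (12, 18)]:
    g = gcd(m, n)
    assert left_derived_tensor(m, n) == (g, g)
print("Tor_i(Z/m, Z/n) has gcd(m, n) elements for i = 0, 1")
```

Note that the answer depends only on \( M \) and \( N \), not on the resolution used, exactly as the two bullet points above guarantee.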
Of course we can similarly define the \( i \) -th right-derived functor \( {\mathrm{R}}^{i}\mathcal{F} \) of an additive functor \( \mathcal{F} : \mathrm{A} \rightarrow \mathrm{B} \) and complexes \( {\mathrm{R}}^{ \bullet }\mathcal{F}\left( M\right) \), provided that \( \mathrm{A} \) has enough injectives: concretely, \( {\mathrm{R}}^{i}\mathcal{F}\left( M\right) \) is the cohomology of the image of an injective resolution of \( M \), i.e., \[ {\mathrm{R}}^{i}\mathcal{F} = {H}^{i} \circ \mathrm{R}\mathcal{F} \circ {\mathcal{Q}}_{\mathrm{A}} \circ {\iota }_{\mathrm{A}} \] So far, I have always implicitly assumed that \( \mathcal{F} \) was a covariant functor. Contravariant functors \( \mathrm{A} \rightarrow \mathrm{B} \) should be viewed as covariant functors \( {\mathrm{A}}^{op} \rightarrow \mathrm{B} \) (Definition VIII.1.1); the roles of injectives and projectives should therefore be swapped. Thus, the right-derived functors of an additive contravariant functor \( \mathrm{A} \rightarrow \mathrm{B} \) will be defined if \( \mathrm{A} \) has enough projectives: \( {\mathrm{A}}^{op} \) will then have enough injectives, as needed. The attentive reader already knows two families of examples of derived functors, both defined from the category \( R \) -Mod of modules over a (commutative) ring to itself. Recall that \( R \) -Mod has both enough projectives and enough injectives (as seen in §VIII.6.2 and §VIII.6.3); thus every covariant/contravariant functor \( R \) -Mod \( \rightarrow \) \( R \) -Mod can be derived on the left and on the right. Example 7.6. Every \( R \) -module \( N \) determines a functor \( {}_{ - }{ \otimes }_{R}N : M \mapsto M{ \otimes }_{R}N \) (see §VIII.2.2). The left-derived functor of \( {}_{ - }{ \otimes }_{R}N \) is denoted \( {}_{ - }{\overset{\mathbb{L}}{ \otimes }}_{R}N \) and acts \( {\mathrm{D}}^{ - }\left( {R\text{-Mod}}\right) \rightarrow {\mathrm{D}}^{ - }\left( {R\text{-Mod}}\right) \) . 
The \( i \) -th left-derived functor of \( {}_{ - }{ \otimes }_{R}N \), viewed as a functor \( R \) -Mod \( \rightarrow R \) -Mod, is \( {\operatorname{Tor}}_{i}^{R}\left( {\_, N}\right) \) : indeed, the construction of \( {\operatorname{Tor}}_{i}^{R}\left( {M, N}\right) \) given in §VIII.2.4 matches precisely the ’concrete’ interpretation of the \( i \) -th left-derived functor given above. The reader may note that in §VIII.2.4 we used a free resolution of \( M \) ; free modules are projective, so this was simply a convenient way to choose a projective resolution. The fact that we could use any projective resolution of \( M \) to compute \( {\operatorname{Tor}}_{i}^{R}\left( {M, N}\right) \) was mentioned at the end of §VIII.6.2 and was attributed there to the 'magic of homological algebra'. This piece of magic has now been explained in gory detail. However, I also mentioned that flat resolutions may be used in place of projective resolutions, and this piece of magic still needs to be explained. The same applies to the fact, mentioned in §VIII.2.4, that \( {\operatorname{Tor}}_{i}\left( {M, N}\right) \) may in fact be computed by using a projective resolution of \( N \) rather than \( M \) . Both mysteries will be dispelled in §8. \( {}^{31} \) Projective resolutions of an object are complexes with nonzero objects only in degree \( \leq 0 \) ; hence \( {\mathrm{L}}_{i}\mathcal{F} = 0 \) for \( i < 0 \) . Example 7.7. Similarly, \( {\operatorname{Hom}}_{R} \) admits a right-derived functor \( {\operatorname{RHom}}_{R} \), and its manifestations as the right-derived functors of \( {\operatorname{Hom}}_{R}\left( {M,\_ }\right) \) are the Ext modules: \( {\operatorname{Ext}}_{R}^{i}\left( {M, N}\right) \) is (isomorphic to) the \( i \) -th cohomology of \( {\operatorname{Hom}}_{R}\left( {M,{N}^{ \bullet }}\right) \), where \( {N}^{ \bullet } \) is an injective resolution of \( N \) . 
As mentioned in §VIII.6.4, it is also the \( i \) -th cohomology of \( {\operatorname{Hom}}_{R}\left( {{M}^{ \bullet }, N}\right) \), where \( {M}^{ \bullet } \) is a projective resolution of \( M \) . This projective resolution should really be viewed as an injective resolution in the opposite category \( R{\text{-Mod}}^{op} \), since the functor \( {\operatorname{Hom}}_{R}\left( {\_, N}\right) \) is contravariant. At this point we understand why the results of these operations are independent of the chosen injective/projective resolution. We are, however, not quite ready to verify that the two strategies lead to the same Ext functor; this last point will also be clarified in §8. Also note that we could define the Ext functors as functors to \( \mathrm{{Ab}} \) on any abelian category with enough injectives and/or projectives: any abelian category has left-exact Hom functors \( {}^{32} \) to \( \mathrm{{Ab}} \) . 7.4. Long exact sequence of derived functors. The most remarkable property of the functors \( {\operatorname{Tor}}_{i} \) and \( {\operatorname{Ext}}^{i} \) mentioned in Chapter VIII is probably that they ’repair’ the lack of exactness of \( \otimes \) and Hom, respectively, in the sense that they agree with these functors in degree 0 and they fit into long exact sequences. I stated the existence of these exact sequences in §VIII.2.4 and §VIII.6.4 without proof (save for indications in case the base ring \( R \) is a PID); now we are ready to understand fully why these sequences exist, in the general context of derived functors. I will keep assuming that A has enough projectives. We will prove that every short exact sequence \[ 0 \rightarrow L \rightarrow M \rightarrow N \rightarrow 0 \] in A induces a 'long exact sequence of derived functors'; the sequences for Tor and Ext encountered in Chapter VIII will be particular cases. 
Surely the reader expects this general fact to follow one way or the other from the long exact cohomology sequence (Theorem 3.5); that reader will not be disappointed. From a more sophisticated perspective, what happens is that derived functors fit the vertices of a 'distinguished triangle' in the derived category: as I mentioned in passing in §4.2, these triangles play the role of exact sequences in the homotopic and derived categories, which do not happen to be abelian. Distinguished triangles give rise to long exact sequences, in much the same way as do exact sequences of complexes in the abelian case explored in §3.3. Since we do not have the machinery of triangulated categories at our disposal, we have to resort to bringing the action back to the ordinary category of complexes, with the aim of using Theorem 3.5. The key point is therefore the following: assuming that \( \mathrm{A} \) has enough projectives and that \[ 0 \rightarrow L \rightarrow M \rightarrow N \rightarrow 0 \] is an exact sequence in \( \mathrm{A} \), can we arrange for projective resolutions of \( L, M, N \) to form an exact sequence in \( \mathrm{C}\left( \mathrm{A}\right) \) ? \( {}^{32} \) In fact, the presence of injectives or projectives may be bypassed if we adopt the ’Yoneda’ viewpoint on Ext. Yes. This is often called the 'horseshoe lemma', after the shape of the main diagram appearing in its proof. Lemma 7.8. Let (*) \[ 0 \rightarrow L \rightarrow M \rightarrow N \rightarrow 0 \] be an exact sequence in an abelian category A with enough projectives. Assume \( {P}_{L}^{ \bullet },{P}_{N}^{ \bullet } \) are projective resolutions of \( L, N \), respectively. Then there exists an exact sequence \( \left( {* * }\right) \) \[ 0 \rightarrow {P}_{L}^{ \bullet } \rightarrow {P}_{M}^{ \bullet } \rightarrow {P}_{N}^{ \bullet } \rightarrow 0 \] where \( {P}_{M}^{ \bullet } \) is a projective resolution of \( M \), inducing (*) in cohomology. 
By 'inducing (*) in cohomology' I mean that the (not too) long exact cohomology sequence induced by (**) reduces to the \( {H}^{0} \) part, since all other cohomology objects of a resolution vanish; identifying the zero-th cohomology of the resolutions with the corresponding objects, this part is nothing but the original short exact sequence \( \left( *\right) \) . Proof. The hypotheses give us the solid part of the diagram ![23387543-548b-40c2-8595-200756212a0f_671_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_671_0.jpg) and our task is to fill in the blanks with projective objects and morphisms so that all rows are exact, and the middle column is a resolution of \( M \) . I claim that
1009_(GTM175)An Introduction to Knot Theory
Definition 1.6
Definition 1.6. Let \( K \) be an oriented knot in (oriented) \( {S}^{3} \) with solid torus neighbourhood \( N \) . A meridian \( \mu \) of \( K \) is a non-separating simple closed curve in \( \partial N \) that bounds a disc in \( N \) . A longitude \( \lambda \) of \( K \) is a simple closed curve in \( \partial N \) that is homologous to \( K \) in \( N \) and null-homologous in the exterior of \( K \) . Note that \( \lambda \) and \( \mu \), the longitude and meridian, both have standard orientations coming from orientations of \( K \) and \( {S}^{3} \) ; they are well defined up to homotopy in \( \partial N \), and their homology classes form a base for \( {H}_{1}\left( {\partial N}\right) \) . The above ideas can easily be extended to the following result for links of several components. Theorem 1.7. Let \( L \) be an oriented link of \( n \) components in (oriented) \( {S}^{3} \) and let \( X \) be its exterior. Then \( {H}_{2}\left( X\right) = {\bigoplus }_{n - 1}\mathbb{Z} \) . Further, \( {H}_{1}\left( X\right) \) is canonically isomorphic to \( {\bigoplus }_{n}\mathbb{Z} \) generated by the homology classes of the meridians \( \left\{ {\mu }_{i}\right\} \) of the individual components of \( L \) . Proof. The proof of this is just an adaptation of that of the previous theorem. Here \( N \) is now a disjoint union of \( n \) solid tori. The map \( {H}_{3}\left( {S}^{3}\right) \rightarrow {H}_{2}\left( {X \cap N}\right) \) is the map \( \mathbb{Z} \rightarrow {\bigoplus }_{n}\mathbb{Z} \) that sends 1 to \( \left( {1,1,\ldots ,1}\right) \), implying that \( {H}_{2}\left( X\right) = {\bigoplus }_{n - 1}\mathbb{Z} \) . Now \( {H}_{1}\left( {N \cap X}\right) = {\bigoplus }_{2n}\mathbb{Z} \) and \( {H}_{1}\left( N\right) = {\bigoplus }_{n}\mathbb{Z} \), and the map \( {H}_{1}\left( {N \cap X}\right) \rightarrow \) \( {H}_{1}\left( N\right) \oplus {H}_{1}\left( X\right) \) is still an isomorphism, so \( {H}_{1}\left( X\right) = {\bigoplus }_{n}\mathbb{Z} \) . 
The argument about the generators is as before. If \( C \) is an oriented simple closed curve in the exterior of the oriented link \( L \) , the linking number of \( C \) and \( L \) is defined by \( \operatorname{lk}\left( {C, L}\right) = \mathop{\sum }\limits_{i}\operatorname{lk}\left( {C,{L}_{i}}\right) \) where the \( {L}_{i} \) are the components of \( L \) . By Theorem 1.7, \( \operatorname{lk}\left( {C, L}\right) \) is the image of \( \left\lbrack C\right\rbrack \in \) \( {H}_{1}\left( X\right) \equiv {\bigoplus }_{n}\mathbb{Z} \) under the projection onto \( \mathbb{Z} \) that maps each generator to 1 . ## Exercises 1. Show that the knot \( {4}_{1} \) is equivalent to its reverse and to its reflection. 2. A diagram of an oriented knot is shown on a screen by means of an overhead projector. What knot appears on the screen if the transparency is turned over? 3. From the theory of the Reidemeister moves, prove that two diagrams in \( {S}^{2} \) of the same oriented knot in \( {S}^{3} \) are equivalent, by Reidemeister moves of only Types II and III, if and only if the sum of the signs of the crossings is the same for the two diagrams. 4. Attempt a classification of links of two components up to six crossings, noting any pairs of links in your table that you have not yet proved to be distinct. 5. Show that any diagram of a knot \( K \) can be changed to a diagram of the unknot by changing some of the crossings from "over" to "under". How many changes are necessary? 6. Prove that the \( \left( {p, q}\right) \) torus knot, where \( p \) and \( q \) are coprime, is equivalent to the \( \left( {q, p}\right) \) torus knot. How does it relate to the \( \left( {p, - q}\right) \) and \( \left( {-p, - q}\right) \) torus knots? 7. Find descriptions of the knot \( {8}_{9} \) in the Dowker-Thistlethwaite notation, in the Conway notation as a 2-bridge knot \( C\left( {{a}_{1},{a}_{2},{a}_{3},{a}_{4}}\right) \) and also as a closed braid \( \widehat{b} \) . 8. 
Prove that any 2-bridge knot is an alternating knot. 9. A knot diagram is said to be three-colourable if each segment of the diagram (from one under-pass to the next) can be coloured red, blue or green so that all three colours are used and at each crossing either one colour or all three colours appear. Show that three-colourability is unchanged by Reidemeister moves. Deduce that the knot \( {3}_{1} \) is indeed distinct from the unknot and that \( {3}_{1} \) and \( {4}_{1} \) are distinct. Generalise this idea to \( n \) -colourability by labelling segments with integers so that at every crossing, the over-pass is labelled with the average, modulo \( n \), of the labels of the two segments on either side. 10. Can \( n \) -colourability distinguish the Kinoshita-Terasaka knot (Figure 3.3) from the unknot? 11. Let \( {X}_{1} \) and \( {X}_{2} \) be the exteriors of two non-trivial knots \( {K}_{1} \) and \( {K}_{2} \) . Determine how a homeomorphism \( h : \partial {X}_{1} \rightarrow \partial {X}_{2} \) can be chosen so that the 3-manifold \( {X}_{1}{ \cup }_{h}{X}_{2} \) has the same homology groups as \( {S}^{3} \) . 12. Let \( M \) be a homology 3-sphere, that is, a 3-manifold with the same homology groups as \( {S}^{3} \) . Show that the linking number of a link of two disjoint oriented simple closed curves in \( M \) can be defined in a way that gives the standard linking number when \( M = {S}^{3} \) . ## 2 Seifert Surfaces and Knot Factorisation It will now be shown that any link in \( {S}^{3} \) can be regarded as the boundary of some surface embedded in \( {S}^{3} \) . Such surfaces can be used to study the link in different ways. Here they are used to show that knots can be factorised into a sum of prime knots. Later they will feature in the theory and calculation of the Alexander polynomial. Definition 2.1. 
A Seifert surface for an oriented link \( L \) in \( {S}^{3} \) is a connected compact oriented surface contained in \( {S}^{3} \) that has \( L \) as its oriented boundary. Examples of such surfaces are shown in Figure 2.1 and have been mentioned in Chapter 1 for two-bridge knots. Of course, any embedding into \( {S}^{3} \) of a compact connected oriented surface with non-empty boundary provides an example of a link equipped with a Seifert surface. A surface is non-orientable if and only if it contains a Möbius band. Some surface can be constructed with a given link as its boundary in the following way: Colour black or white, in chessboard fashion, the regions of \( {S}^{2} \) that form the complement of a diagram of the link. Consider all the regions of one colour joined by "half-twisted" strips at the crossings. This is a surface with the link as boundary, and it may well be orientable. However, it may quite well be non-orientable for either one or both of the two colours. The usual diagram of the knot \( {4}_{1} \) has both such surfaces non-orientable. Thus, although this method may provide an excellent Seifert surface, a general method, such as that of Seifert which follows, is needed. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_25_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_25_0.jpg) Figure 2.1 ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_0.jpg) Figure 2.2 ## Theorem 2.2. Any oriented link in \( {S}^{3} \) has a Seifert surface. Proof. Let \( D \) be an oriented diagram for the oriented link \( L \) and let \( \widehat{D} \) be \( D \) modified as shown in Figure 2.2. \( \widehat{D} \) is the same as \( D \) except in a small neighbourhood of each crossing where the crossing has been removed in the only way compatible with the orientation. This \( \widehat{D} \) is just a disjoint union of oriented simple closed curves in \( {S}^{2} \) . 
Thus \( \widehat{D} \) is the boundary of the union of some disjoint discs all on one side of (above) \( {S}^{2} \) . Join these discs together with half-twisted strips at the crossings. This forms an oriented surface with \( L \) as boundary; each disc gets an orientation from the orientation of \( \widehat{D} \), and the strips faithfully relay this orientation. If this surface is not connected, connect components together by removing small discs and inserting long, thin tubes. In the above proof, \( \widehat{D} \) was a collection of disjoint simple closed curves constructed from \( D \) . These curves are called the Seifert circuits of \( D \) . The Seifert circuits of the knot \( {8}_{20} \) are shown in Figure 2.3. A Seifert surface for this knot is then constructed by adding three discs above the page and eight half-twisted strips near the crossings to join the discs together. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_1.jpg) Figure 2.3 The proof of Theorem 2.2 gives a way of constructing a Seifert surface from a diagram of the link. The surface that results may however not be the easiest for any specific use. A surface coming from the chessboard colouring technique, or from some partial use of it, may well seem more agreeable. The diagram of Figure 2.4 shows how, at least intuitively, a knot can have two very different Seifert surfaces; the two thin circles can be joined by a tube after following along the narrow ("knotted") strip or after swallowing that part of the picture. Definition 2.3. The genus \( g\left( K\right) \) of a knot \( K \) is defined by \[ g\left( K\right) = \min \left\{ {\operatorname{genus}\left( F\right) : F\text{ is a Seifert surface for }K}\right\} . \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_27_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_27_0.jpg) Figure 2.4 Here \( F \) has one boundary component, so as an abstract surface it is a disc with a number of "hollow handles" added. 
That number is its genus. More precisely, the genus of \( F \) is \( \frac{1}{2}\left( {1 - \chi \left( F\right) }\right) \), where \( \chi \left( F\right) \) is the Euler characteristic of \( F \) . The
1329_[肖梁] Abstract Algebra (2022F)
Definition 3.4.4
Definition 3.4.4. A group \( G \) is called solvable if there exists a chain of subgroups \[ \{ 1\} = {G}_{0} \vartriangleleft {G}_{1} \vartriangleleft \cdots \vartriangleleft {G}_{s} = G \] such that \( {G}_{i}/{G}_{i - 1} \) is abelian for \( i = 1,\ldots, s \) . Corollary 3.4.5. For a finite group \( G, G \) is solvable if and only if all of the composition factors of \( G \) are of the form \( {\mathbf{Z}}_{p} \) . Example 3.4.6. The group of upper triangular invertible matrices is solvable. \[ G = \left\{ {\left( \begin{matrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \end{matrix}\right) \in {\mathrm{{GL}}}_{3}\left( \mathbb{C}\right) }\right\} \supseteq N = \left\{ {\left( \begin{matrix} 1 & * & * \\ 0 & 1 & * \\ 0 & 0 & 1 \end{matrix}\right) \in {\mathrm{{GL}}}_{3}\left( \mathbb{C}\right) }\right\} \supseteq {N}^{\prime } = \left\{ {\left( \begin{matrix} 1 & 0 & * \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \in {\mathrm{{GL}}}_{3}\left( \mathbb{C}\right) }\right\} \] The subquotients are \[ G/N \cong {\left( {\mathbb{C}}^{ \times }, \cdot \right) }^{3},\;N/{N}^{\prime } \cong {\left( \mathbb{C}, + \right) }^{2},\;{N}^{\prime } \cong \left( {\mathbb{C}, + }\right) . \] 4. Jordan-Hölder theorem, simplicity of \( {A}_{n} \), and direct product groups ## 4.1. Jordan-Hölder theorem. Theorem 4.1.1 (Jordan-Hölder). Assume that a group \( G \) has the following two composition series \[ \{ 1\} = {A}_{0} \vartriangleleft {A}_{1} \vartriangleleft \cdots \vartriangleleft {A}_{m} = G\;\text{ and }\;\{ 1\} = {B}_{0} \vartriangleleft {B}_{1} \vartriangleleft \cdots \vartriangleleft {B}_{n} = G, \] then \( m = n \), and there exists a bijection \( \sigma : \{ 1,\ldots, m\} \rightarrow \{ 1,\ldots, n\} \) such that \[ {A}_{\sigma \left( i\right) }/{A}_{\sigma \left( i\right) - 1} \simeq {B}_{i}/{B}_{i - 1}. \] 4.1.2. Toy model. A set theoretic version. Let \( X \) be a set with two filtrations. 
\[ \varnothing = {A}_{0} \subseteq {A}_{1} \subseteq \cdots \subseteq {A}_{m} = X,\;\varnothing = {B}_{0} \subseteq {B}_{1} \subseteq \cdots \subseteq {B}_{n} = X. \] We use the following picture to explain the situation. ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_23_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_23_0.jpg) Then we must have for every \( i, j \) , (4.1.2.1) \[ \left( {{A}_{i - 1} \cup \left( {{A}_{i} \cap {B}_{j}}\right) }\right) \smallsetminus \left( {{A}_{i - 1} \cup \left( {{A}_{i} \cap {B}_{j - 1}}\right) }\right) = \left( {{B}_{j - 1} \cup \left( {{A}_{i} \cap {B}_{j}}\right) }\right) \smallsetminus \left( {{B}_{j - 1} \cup \left( {{A}_{i - 1} \cap {B}_{j}}\right) }\right) . \] Here \( {B}_{j - 1} \cup \left( {{A}_{i - 1} \cap {B}_{j}}\right) \) is the blue part and \( {A}_{i - 1} \cup \left( {{A}_{i} \cap {B}_{j - 1}}\right) \) is the brown part. The equality can be seen as both parts represent the shaded red area. To make the proof a bit more effective, we can first show that both sides are the same as \[ \left( {{A}_{i} \cap {B}_{j}}\right) \smallsetminus \left( {\left( {{A}_{i} \cap {B}_{j - 1}}\right) \cup \left( {{A}_{i - 1} \cap {B}_{j}}\right) }\right) , \] where the latter set is the green shaded area. (Indeed, to identify the above complement with the left-hand side of (4.1.2.1), we may intersect both terms with \( {B}_{j} \) ; and to identify it with the right-hand side of (4.1.2.1), we may intersect both terms with \( {A}_{i} \) .) The above proof is of course trivial, but we will see quickly how it helps us understand the proof of the Jordan-Hölder theorem. 4.1.3. Proof of Theorem 4.1.1. We prove a slightly stronger version: let \( G \) be a group. Suppose that we are given two chains of subgroups \[ \{ 1\} = {A}_{0} \trianglelefteq {A}_{1} \trianglelefteq \cdots \trianglelefteq {A}_{m} = G,\;\{ 1\} = {B}_{0} \trianglelefteq {B}_{1} \trianglelefteq \cdots \trianglelefteq {B}_{n} = G. 
\] Then we have (1) \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) \) is a normal subgroup of the group \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) ; (2) \( {B}_{j - 1}\left( {{A}_{i - 1} \cap {B}_{j}}\right) \) is a normal subgroup of the group \( {B}_{j - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) ; (3) and we have an isomorphism (4.1.3.1) \[ \frac{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) } \cong \frac{{B}_{j - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{B}_{j - 1}\left( {{A}_{i - 1} \cap {B}_{j}}\right) }. \] This in particular shows that one may refine both chains of subgroups into (setting \( {A}_{ij}^{\prime } = {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) and \( {B}_{ij}^{\prime } = {B}_{j - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \)) \[ \{ 1\} = {A}_{0} = {A}_{10}^{\prime } \trianglelefteq {A}_{11}^{\prime } \trianglelefteq \cdots \trianglelefteq {A}_{1n}^{\prime } = {A}_{1} = {A}_{20}^{\prime } \trianglelefteq \cdots \trianglelefteq {A}_{2n}^{\prime } = {A}_{2} \trianglelefteq \cdots \trianglelefteq {A}_{mn}^{\prime } = {A}_{m} = G, \] \[ \{ 1\} = {B}_{0} = {B}_{01}^{\prime } \trianglelefteq {B}_{11}^{\prime } \trianglelefteq \cdots \trianglelefteq {B}_{m1}^{\prime } = {B}_{1} = {B}_{02}^{\prime } \trianglelefteq \cdots \trianglelefteq {B}_{m2}^{\prime } = {B}_{2} \trianglelefteq \cdots \trianglelefteq {B}_{mn}^{\prime } = {B}_{n} = G, \] so that \( {A}_{ij}^{\prime }/{A}_{i, j - 1}^{\prime } \cong {B}_{ij}^{\prime }/{B}_{i - 1, j}^{\prime } \) . 
This means that in the special case of the theorem when \( {A}_{i}/{A}_{i - 1} \) and \( {B}_{j}/{B}_{j - 1} \) are simple groups, \( {A}_{i}/{A}_{i - 1} \cong {A}_{i,\sigma \left( i\right) }^{\prime }/{A}_{i,\sigma \left( i\right) - 1}^{\prime } \) for a unique \( \sigma \left( i\right) \in \{ 1,\ldots, n\} \) and \( {B}_{j}/{B}_{j - 1} \cong {B}_{\tau \left( j\right), j}^{\prime }/{B}_{\tau \left( j\right) - 1, j}^{\prime } \) for a unique \( \tau \left( j\right) \in \{ 1,\ldots, m\} \) . It is clear that \( m = n \), and that \( \sigma \) and \( \tau \) are inverses of each other. Moreover, (4.1.3.1) implies that \( {A}_{i}/{A}_{i - 1} \cong {B}_{\sigma \left( i\right) }/{B}_{\sigma \left( i\right) - 1} \) . Now we return to prove the stronger version of the theorem above. We first check (1) and (2). By symmetry, it suffices to prove (1). We first show that \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) is a subgroup of \( G \) . Indeed, view both \( {A}_{i - 1} \) and \( {A}_{i} \cap {B}_{j} \) as subgroups of \( {A}_{i} \) ; since \( {A}_{i - 1} \) is normal in \( {A}_{i} \), the product \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) is a subgroup of \( {A}_{i} \) (and hence of \( G \) ). Next, we observe that \( {B}_{j - 1} \trianglelefteq {B}_{j} \) implies that \( \left( {{A}_{i} \cap {B}_{j - 1}}\right) \trianglelefteq \left( {{A}_{i} \cap {B}_{j}}\right) \) . 
To show that \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) \trianglelefteq {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \), take \( a \in {A}_{i - 1}, b \in {A}_{i} \cap {B}_{j - 1},\alpha \in {A}_{i - 1} \), and \( \beta \in {A}_{i} \cap {B}_{j} \) . Then \[ \left( {\alpha \beta }\right) \left( {ab}\right) {\left( \alpha \beta \right) }^{-1} = {\alpha \beta ab}{\beta }^{-1}{\alpha }^{-1} = \alpha \cdot \underset{\text{in }{A}_{i - 1}}{\underbrace{{\beta a}{\beta }^{-1}}} \cdot \underset{\text{in }{A}_{i} \cap {B}_{j - 1}}{\underbrace{{\beta b}{\beta }^{-1}}} \cdot {\alpha }^{-1} \in {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) {A}_{i - 1} = {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) . \] (Here \( {\beta b}{\beta }^{-1} \in {A}_{i} \cap {B}_{j - 1} \) by the observation above, and the last equality uses that \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) \) is a group.) This completes the proof of (1), and the two quotients in (4.1.3.1) make sense. ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_25_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_25_0.jpg) As suggested by the proof in the toy model, we hope to prove the following: (4.1.3.2) \[ \frac{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) } \cong \frac{{A}_{i} \cap {B}_{j}}{\left( {{A}_{i - 1} \cap {B}_{j}}\right) \cdot \left( {{A}_{i} \cap {B}_{j - 1}}\right) } \cong \frac{{B}_{j - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{B}_{j - 1}\left( {{A}_{i - 1} \cap {B}_{j}}\right) }. \] By symmetry, it suffices to prove the left isomorphism. We construct a homomorphism \[ \phi : {A}_{i} \cap {B}_{j} \rightarrow {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \rightarrow \frac{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) } \] \[ a \mapsto a \mapsto a{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) . \] Such a homomorphism is clearly surjective; it remains to compute its kernel. 
\[ \ker \phi = \left( {{A}_{i} \cap {B}_{j}}\right) \cap \left( {{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) }\right) . \] Suppose \( a \in {A}_{i - 1} \) and \( \beta \in {A}_{i} \cap {B}_{j - 1} \) are such that \( {a\beta } \in {A}_{i} \cap {B}_{j} \) . Then \[ {a\beta } \in {B}_{j}\; \Rightarrow \;a \in {B}_{j} \cdot {\beta }^{-1} = {B}_{j}. \] So \( \ker \phi \subseteq \left( {{A}_{i - 1} \cap {B}_{j}}\right) \left( {{A}_{i} \cap {B}_{j - 1}}\right) \) . Conversely, we clearly have \[ \left( {{A}_{i - 1} \cap {B}_{j}}\right) \left( {{A}_{i} \cap {B}_{j - 1}}\right) \subseteq {A}_{i} \cap {B}_{j} \cap \left( {{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) }\right) . \] Thus, we have \( \ker \phi = \left( {{A}_{i - 1} \cap {B}_{j}}\right) \left( {{A}_{i} \cap {B}_{j - 1}}\right) \) . By the first isomorphism theorem, we deduce \[ \frac{{A}_{i} \cap {B}_{j}}{\left( {{A}_{i - 1} \cap {B}_{j}}\right) \cdot \left( {{A}_{i} \cap {B}_{j - 1}}\right) } \cong \frac{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) }. \] This completes the proof of (3), and hence of the Jordan-Hölder theorem.
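The refinement mechanism just proved can be checked concretely on a small abelian group, where every subgroup is normal and the products \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) become sums of subgroups. The following sketch (our own Python illustration, using \( \mathbb{Z}/12 \) and two chains chosen for the example) builds both refinements and verifies the equal quotient orders forced by the isomorphism (4.1.3.1):

```python
def subgroup(d):                       # the cyclic subgroup <d> of Z/12
    return frozenset(d * k % 12 for k in range(12))

def prod(H, K):                        # the product HK; here H + K, since Z/12 is abelian
    return frozenset((h + k) % 12 for h in H for k in K)

# two illustrative chains {1} = A_0 ⊴ ... ⊴ A_m = G and {1} = B_0 ⊴ ... ⊴ B_n = G
A = [subgroup(0), subgroup(6), subgroup(3), subgroup(1)]
B = [subgroup(0), subgroup(4), subgroup(2), subgroup(1)]
m, n = len(A) - 1, len(B) - 1

# the refinements A'_{ij} = A_{i-1}(A_i ∩ B_j) and B'_{ij} = B_{j-1}(A_i ∩ B_j)
Ap = {(i, j): prod(A[i - 1], A[i] & B[j]) for i in range(1, m + 1) for j in range(n + 1)}
Bp = {(i, j): prod(B[j - 1], A[i] & B[j]) for i in range(m + 1) for j in range(1, n + 1)}

# (4.1.3.1) forces |A'_{ij}/A'_{i,j-1}| = |B'_{ij}/B'_{i-1,j}| for all i, j
for i in range(1, m + 1):
    for j in range(1, n + 1):
        assert len(Ap[i, j]) // len(Ap[i, j - 1]) == len(Bp[i, j]) // len(Bp[i - 1, j])

# the nontrivial quotient orders are the orders of the composition factors
factors = sorted(len(Ap[i, j]) // len(Ap[i, j - 1])
                 for i in range(1, m + 1) for j in range(1, n + 1)
                 if len(Ap[i, j]) != len(Ap[i, j - 1]))
print(factors)   # → [2, 2, 3]
```

Both chains refine to series whose nontrivial subquotients have orders 2, 2, 3, matching the Jordan-Hölder multiset for \( \mathbb{Z}/12 \).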
1068_(GTM227)Combinatorial Commutative Algebra
Definition 5.45
Definition 5.45 Given an ideal \( I \) generated in degrees preceding \( \mathbf{a} \), the cohull complex of \( I \) with respect to \( \mathbf{a} \) is the weakly colabeled complex \[ {\operatorname{cohull}}_{\mathbf{a}}\left( I\right) = {\left( \mathbf{a} + \mathbf{1} - X\right) }_{ \preccurlyeq \mathbf{a}}\;\text{ for }\;X = \operatorname{hull}\left( {{I}^{\left\lbrack \mathbf{a}\right\rbrack } + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) , \] and \( {\mathcal{F}}^{{\operatorname{cohull}}_{\mathbf{a}}\left( I\right) } \) is the cohull resolution of \( I \) with respect to \( \mathbf{a} \) . Theorem 5.37 justifies our terminology. Corollary 5.46 \( {\mathcal{F}}^{{\operatorname{cohull}}_{\mathbf{a}}\left( I\right) } \) is a weakly cocellular free resolution of \( I \) . Proof. The complex \( {\mathcal{F}}^{{\operatorname{cohull}}_{\mathbf{a}}\left( I\right) } \) is Alexander dual to the hull resolution of \( S/\left( {{I}^{\left\lbrack \mathbf{a}\right\rbrack } + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) \), which satisfies the hypotheses of Theorem 5.37. The center diagram in Fig. 5.3 betrays the fact that the cohull resolution of \( {I}^{ \star } \) can also be construed as a cellular resolution supported on the right-hand cell complex of Fig. 5.3. In fact, this is the cellular resolution we drew in Section 4.3.4. This example suggests that cohull resolutions are always cellular (Exercise 5.16). It is not hard to show that arbitrary cohull resolutions are weakly cellular (Exercise 4.3), and therefore cellular if minimal; see Exercises 5.13-5.15. ![9d852306-8a03-41f2-b2e7-a141e7b451e2_109_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_109_0.jpg) Figure 5.4: The cellular resolutions of Example 5.47 Example 5.47 Not all cellular resolutions come directly from hull and cohull resolutions. All resolutions in this example can be construed as being cellular, supported on labeled cell complexes depicted in Fig. 5.4. 
Set \( I = \left\langle {{z}^{2},{x}^{3}z,{x}^{4},{y}^{3},{y}^{2}z,{xyz}}\right\rangle \) so that \( {I}^{\left\lbrack {432}\right\rbrack } = \left\langle {{xy}{z}^{2},{x}^{2}{y}^{3}z,{x}^{4}{y}^{2}z}\right\rangle \) . Then hull \( \left( I\right) \) and \( {\operatorname{cohull}}_{432}\left( I\right) \) are not minimal (the offending cells have italic labels); moreover, \( {\operatorname{cohull}}_{\mathbf{a}}\left( I\right) = {\operatorname{cohull}}_{432}\left( I\right) \) for all \( \mathbf{a} \succcurlyeq \left( {4,3,2}\right) \) . Nonetheless, \( {I}^{\left\lbrack {432}\right\rbrack } + {\mathfrak{m}}^{432} \) has a minimal cellular resolution \( {\mathcal{F}}_{X} \), so Theorem 5.37 yields a minimal cocellular resolution for \( I \) . In fact, this cocellular resolution is cellular, supported on the labeled cell complex \( Y \) . The next theorem can be thought of as the reflection for arbitrary monomial ideals of the fact that Hochster's formula has two equivalent and dual statements. In the case where \( I = {I}_{\Delta } \) and \( \mathbf{a} = \left( {1,\ldots ,1}\right) \), it reduces to simplicial Alexander duality, Theorem 5.6. Theorem 5.48 (Duality for Betti numbers) If \( I \) is generated in degrees preceding \( \mathbf{a} \) and \( \mathbf{1} \preccurlyeq \mathbf{b} \preccurlyeq \mathbf{a} \), then \( {\beta }_{n - i,\mathbf{b}}\left( {S/I}\right) = {\beta }_{i,\mathbf{a} + \mathbf{1} - \mathbf{b}}\left( {I}^{\left\lbrack \mathbf{a}\right\rbrack }\right) \) . Proof. Let \( X = \operatorname{hull}\left( {I + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) \) and \( Y = {\operatorname{cohull}}_{\mathbf{a}}\left( {I}^{\left\lbrack \mathbf{a}\right\rbrack }\right) \) . By Theorem 4.7 applied to \( X \), we get the equality \( {\beta }_{i,\mathbf{b}}\left( {S/I}\right) = {\beta }_{i,\mathbf{b}}\left( {S/\left( {I + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) }\right) \) when \( \mathbf{b} \preccurlyeq \mathbf{a} \) . 
Now calculate the Betti numbers of \( S/I \) and \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) as in Lemma 1.32 by tensoring \( {\mathcal{F}}_{X} \) and \( {\mathcal{F}}^{Y} \) with \( \mathbb{k} \) . By Theorem 4.31 and Theorem 5.37, the resulting complexes \( \mathbb{k}{ \otimes }_{S}{\mathcal{F}}_{X} \) and \( \mathbb{k}{ \otimes }_{S}{\mathcal{F}}^{Y} \) in degrees \( \mathbf{b} \) and \( \mathbf{a} + \mathbf{1} - \mathbf{b} \) are vector space duals over \( \mathbb{k} \), and their homological indexing has been reversed (subtracted from \( n \) ). Therefore the \( {\left( n - i\right) }^{\text{th }} \) homology of \( \mathbb{k}{ \otimes }_{S}{\mathcal{F}}_{X} \) has the same vector space dimension as the \( {i}^{\text{th }} \) homology of \( \mathbb{k}{ \otimes }_{S}{\mathcal{F}}^{Y} \) over \( \mathbb{k} \) . When \( S/\left( {I + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) \) has a minimal cellular resolution \( {\mathcal{F}}_{X} \), the equality of Betti numbers in Theorem 5.48 comes from a geometric bijection of syzygies rather than an equality of vector space dimensions: the \( \left( {n - i - 1}\right) \) -faces labeled by \( \mathbf{b} \) in \( X \) are the same faces of \( \underline{X} \) labeled by \( \mathbf{a} + \mathbf{1} - \mathbf{b} \) in \( Y \) . It is just that \( G \in X \) represents a minimal \( {\left( n - i\right) }^{\text{th }} \) syzygy of \( S/I \), whereas \( G \in Y \) represents a minimal \( {i}^{\text{th }} \) syzygy of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . 
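The Alexander duals appearing above can be computed mechanically: \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) is the intersection of the irreducible ideals \( {\mathfrak{m}}^{\mathbf{a} \smallsetminus \mathbf{b}} \) over the minimal generators \( {\mathbf{x}}^{\mathbf{b}} \) of \( I \), and monomial ideals are intersected generator-by-generator via componentwise lcms. A minimal Python sketch (the helper names are our own) reproduces the dual \( {I}^{\left\lbrack {432}\right\rbrack } \) of Example 5.47:

```python
from itertools import product

nvars, a = 3, (4, 3, 2)

# minimal generators of I = <z^2, x^3 z, x^4, y^3, y^2 z, xyz> as exponent vectors
gens = [(0, 0, 2), (3, 0, 1), (4, 0, 0), (0, 3, 0), (0, 2, 1), (1, 1, 1)]

def a_minus(b):
    # a∖b has i-th entry a_i + 1 - b_i where b_i >= 1, and 0 elsewhere
    return tuple(a[i] + 1 - b[i] if b[i] >= 1 else 0 for i in range(nvars))

def lcm(u, v):
    return tuple(max(x, y) for x, y in zip(u, v))

def minimalize(monos):
    ms = set(monos)
    divides = lambda u, v: all(x <= y for x, y in zip(u, v))
    return sorted(m for m in ms if not any(divides(o, m) for o in ms if o != m))

# I^[a] = intersection over generators x^b of the irreducible ideals m^{a∖b};
# each intersection step takes componentwise lcms of the current generators
dual = [(0, 0, 0)]                     # start with the unit ideal <1>
for b in gens:
    v = a_minus(b)
    irred = [tuple(v[i] if k == i else 0 for k in range(nvars))
             for i in range(nvars) if v[i] > 0]
    dual = minimalize(lcm(u, w) for u, w in product(dual, irred))

print(dual)   # → [(1, 1, 2), (2, 3, 1), (4, 2, 1)], i.e. <xyz^2, x^2y^3z, x^4y^2z>
```

The output is exactly the generating set of \( {I}^{\left\lbrack {432}\right\rbrack } = \left\langle {{xy}{z}^{2},{x}^{2}{y}^{3}z,{x}^{4}{y}^{2}z}\right\rangle \) stated in Example 5.47.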
Example 5.49 The following table lists some instances where the Betti numbers are 1 for the permutohedron and tree ideals \( I \) and \( {I}^{ \star } = {I}^{\left\lbrack {333}\right\rbrack } \) of Sections 4.3.3 and 4.3.4: \[ \begin{matrix} 3 - i & \mathbf{b} & i & \mathbf{a} + \mathbf{1} - \mathbf{b} \\ 0 & \left( {1,2,3}\right) & 2 & \left( {3,2,1}\right) \\ 1 & \left( {1,3,3}\right) & 1 & \left( {3,1,1}\right) \\ 2 & \left( {3,3,3}\right) & 0 & \left( {1,1,1}\right) \end{matrix} \] \[ {\beta }_{3 - i,\mathbf{b}}\left( I\right) = {\beta }_{i,{444} - \mathbf{b}}\left( {I}^{\left\lbrack {333}\right\rbrack }\right) = 1 \] Look at the figures in Sections 4.3.3 and 4.3.4 to verify these equalities, noting both the positions of these degrees in the staircase diagrams and which faces correspond in the cellular resolutions. Fig. 5.3 may also be helpful. \( \diamond \) Alexander duality for resolutions in three variables has a striking interpretation for planar graphs. To state it, let us call axial an almost 3-connected planar map that minimally resolves an artinian ideal in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) . This term refers to the three axial vertices each labeled by a power of a variable and lying on the corresponding axis in the staircase surface. An axial planar map has a well-defined outer cycle. The planar dual of a given map \( G \) is the planar map \( \widehat{G} \) obtained by placing a vertex in each region of \( G \) and connecting pairs of vertices if they are in adjacent regions. For axial planar maps, we omit the vertex of \( \widehat{G} \) in the unique unbounded region of \( G \) , and we instead draw infinite arcs emanating from vertices of \( \widehat{G} \) in bounded regions of \( G \) adjacent to the unbounded region. The resulting dual of an axial planar map is called its dual radial map. Theorem 5.50 Let \( I \supseteq {\mathfrak{m}}^{\mathbf{a}} \), where \( \mathfrak{m} = \langle x, y, z\rangle \) . 
An axial planar map \( G \) supports a minimal cellular resolution of \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack /I \) if and only if its dual radial map \( \widehat{G} \) supports a minimal cellular resolution of \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack /{I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . ![9d852306-8a03-41f2-b2e7-a141e7b451e2_111_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_111_0.jpg) Figure 5.5: Duality for planar graphs as Alexander duality Example 5.51 In nice cases, the dual axial and radial graphs can both be embedded in their staircase surfaces. We shall not make this precise here, but we instead present an example in Fig. 5.5 that we hope is convincing. Note that both surfaces are the same; this makes it easier to compare the planar maps drawn on them. Turning the picture upside down yields two pictures of the Alexander dual staircase surface, with the radial embedding appearing the right way out and the axial embedding backward. Note how the irreducible components form natural spots to place the dual vertices and how the "outer" ridges naturally carry edges of the planar dual. \( \diamond \) The reader is invited to produce their own proof for Theorem 5.50 (the key being duality for resolutions) or to see the Notes for references. It is an open question how to generalize the embeddings of planar maps in 3-dimensional staircases to get embeddings of cellular resolutions inside staircases - canonically or otherwise - in higher dimensions. ## 5.5 Projective dimension and regularity The interaction of Alexander duality with the commutative algebra of arbitrary monomial ideals, as developed in this chapter, was sparked in large part by a fundamental observation relating free resolutions of Alexander dual squarefree ideals. Specifically, duality interchanges two standard types of homological invariants, which we introduce in Definitions 5.52 and 5.54. 
Definition 5.52 The length of a minimal resolution of a module \( M \) is the projective dimension \( \operatorname{pd}\left( M\right) \) . The module \( M \) is Cohen-Macaulay if \( \operatorname{pd}\left( M\right) \) equals the codimension of \( M \) . The Auslander-Buchsbaum formula [BH98, Theorem 1.3.3] implies that the projective dimension of \( M \) is at least its codimension, which, if \( M \) is a monomial quotient \( S/I \), equals the smallest number of generators of any irreducible component of \( I \) . Hence the Cohen-Macaulay condition is a certain kind of desirable minimality: the f
1088_(GTM245)Complex Analysis
Definition 5.17
Definition 5.17. A cycle \( \gamma \) is a finite sequence of continuous closed paths in the complex plane. If \( \gamma \) is a cycle, we refer to the continuous closed paths \( {\gamma }_{1},{\gamma }_{2},\ldots ,{\gamma }_{n} \) that make up \( \gamma \) as its component curves, and we write \( \gamma = \left( {{\gamma }_{1},{\gamma }_{2},\ldots ,{\gamma }_{n}}\right) \) . Note that the component curves of a cycle need not be distinct; we also mention that the order of the component curves will not be relevant in our considerations. We consider the range of \( \gamma \) to be the union of the ranges of its components. We extend the notion of the integral of a function over a single closed path to the integral over a cycle as follows. Definition 5.18. If \( \gamma = \left( {{\gamma }_{1},{\gamma }_{2},\ldots ,{\gamma }_{n}}\right) \) is a cycle, then for any holomorphic function \( f \) defined on a domain \( D \) such that range \( \gamma \subset D \), we set \[ {\int }_{\gamma }f\left( z\right) \mathrm{d}z = {\int }_{{\gamma }_{1}}f\left( z\right) \mathrm{d}z + \cdots + {\int }_{{\gamma }_{n}}f\left( z\right) \mathrm{d}z. \] (5.4) We can extend the notion of the index of a point with respect to a path to the index of a point with respect to a cycle as follows. Definition 5.19. The index of a cycle \( \gamma = \left( {{\gamma }_{1},{\gamma }_{2},\ldots ,{\gamma }_{n}}\right) \) with respect to a point \( c \in \mathbb{C} \) - range \( \gamma \) is denoted by \( I\left( {\gamma, c}\right) \) and defined by \[ I\left( {\gamma, c}\right) = I\left( {{\gamma }_{1}, c}\right) + \cdots + I\left( {{\gamma }_{n}, c}\right) . \] (5.5) Definition 5.20. A cycle \( \gamma \) with range contained in a domain \( D \subseteq \mathbb{C} \) is said to be homologous to zero in \( D \) if \( I\left( {\gamma, c}\right) = 0 \) for every \( c \in \mathbb{C} - D \) . 
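The index in Definition 5.19 is concretely computable: for a smooth parametrized closed path it is the integral \( I\left( {\gamma, c}\right) = \frac{1}{2\pi i}{\int }_{\gamma }\frac{\mathrm{d}z}{z - c} \), and the index of a cycle is the sum over its component curves. Below is a minimal numerical sketch (the midpoint quadrature rule and the step count \( n = 4000 \) are our choices, not from the text):

```python
import cmath

def winding_number(gamma, dgamma, c, n=4000):
    """Approximate I(gamma, c) = (1/2*pi*i) * integral of dz/(z - c) for a
    smooth closed path gamma : [0,1] -> C with derivative dgamma,
    using the midpoint rule on n subintervals."""
    total = sum(dgamma((k + 0.5) / n) / (gamma((k + 0.5) / n) - c)
                for k in range(n)) / n
    return total / (2j * cmath.pi)

# the unit circle traversed once counterclockwise
circle  = lambda t: cmath.exp(2j * cmath.pi * t)
dcircle = lambda t: 2j * cmath.pi * cmath.exp(2j * cmath.pi * t)

print(round(winding_number(circle, dcircle, 0).real))   # → 1  (c inside)
print(round(winding_number(circle, dcircle, 3).real))   # → 0  (c outside)
```

For a cycle \( \gamma = \left( {{\gamma }_{1},\ldots ,{\gamma }_{n}}\right) \) one sums the component indices, and checking that this sum vanishes for every \( c \) outside \( D \) is exactly the test of Definition 5.20 for being homologous to zero.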
Observe that if a continuous closed path \( \gamma \) is homotopic to a point in \( D \), then the cycle \( \left( \gamma \right) \) with the single component \( \gamma \) is homologous to zero in \( D \) . However, the two notions are different, see Exercise 5.3. With these definitions and some work, \( {}^{1} \) we can obtain the most general forms of Cauchy's theorem and integral formula. Theorem 5.21 (Cauchy’s Theorem and Integral Formula: General Form). If \( f \) is analytic in a domain \( D \subseteq \mathbb{C} \) and \( \gamma \) is a cycle homologous to zero in \( D \), then (a) \( {\int }_{\gamma }f\left( z\right) \mathrm{d}z = 0 \) . (b) For all \( c \in D \) -range \( \gamma \), we have (5.1). Proof. If \[ E = \{ z \in \mathbb{C} - \text{ range }\gamma ;I\left( {\gamma, z}\right) = 0\} , \] then the set \( E \) is open in \( \mathbb{C} \) and contains the unbounded component of the complement of the range of \( \gamma \) in \( \mathbb{C} \), because it contains the unbounded component of the complement of the range of each component curve of \( \gamma \), as we saw in Sect. 4.5. Moreover \( E \supset \left( {\mathbb{C} - D}\right) \), since \( \gamma \) is homologous to zero in \( D \) . Define \( g : D \times D \rightarrow \mathbb{C} \) by \[ g\left( {w, z}\right) = \left\{ \begin{array}{ll} \frac{f\left( z\right) - f\left( w\right) }{z - w} & \text{ for }z \neq w, \\ {f}^{\prime }\left( w\right) & \text{ for }z = w. \end{array}\right. \] The function \( g \) is continuous in \( D \times D \), and for fixed \( z \in D, g\left( {\cdot, z}\right) \) is holomorphic on \( D \) . 
Furthermore, for all \( c \in D \) -range \( \gamma \), we have \[ {\int }_{\gamma }g\left( {c, z}\right) \mathrm{d}z = {\int }_{\gamma }\frac{f\left( z\right) - f\left( c\right) }{z - c}\mathrm{\;d}z \] \[ = {\int }_{\gamma }\frac{f\left( z\right) }{z - c}\mathrm{\;d}z - f\left( c\right) {\int }_{\gamma }\frac{\mathrm{d}z}{z - c} \] \[ = {\int }_{\gamma }\frac{f\left( z\right) }{z - c}\mathrm{\;d}z - f\left( c\right) {2\pi \iota I}\left( {\gamma, c}\right) . \] (5.6) We define next \[ h\left( w\right) = \left\{ \begin{array}{l} {\int }_{\gamma }g\left( {w, z}\right) \mathrm{d}z\text{ for }w \in D, \\ {\int }_{\gamma }\frac{f\left( z\right) \mathrm{d}z}{z - w}\;\text{ for }w \in E. \end{array}\right. \] Noting that \( D \cup E = \mathbb{C} \), we see from (5.6) that for \( w \in D \cap E \) , \[ {\int }_{\gamma }g\left( {w, z}\right) \mathrm{d}z = {\int }_{\gamma }\frac{f\left( z\right) }{z - w}\mathrm{\;d}z \] because \( I\left( {\gamma, w}\right) = 0 \), and thus \( h \) is a well-defined function on the plane. The set \( E \) contains the complement of a large disc, and the function \( h \) is clearly bounded there. By Exercise 4.11, \( h \) is complex differentiable; thus \( h \) is a bounded analytic function in \( \mathbb{C} \) and hence constant, by Liouville’s theorem. Since \( \mathop{\lim }\limits_{{w \rightarrow \infty }}h\left( w\right) = 0, h \) is the zero function. In particular, \[ {\int }_{\gamma }g\left( {w, z}\right) \mathrm{d}z = 0 \] for all \( w \in D \) - range \( \gamma \), and (b) follows from (5.6). \( {}^{1} \) We are following a course outlined by J. D. Dixon, A brief proof of Cauchy’s integral formula, Proc. Amer. Math. Soc. 29 (1971), 625-626. 
We now fix a point \( c \in D \) - range \( \gamma \) and apply part (b) to the analytic function defined on \( D \) by \( z \mapsto \left( {z - c}\right) f\left( z\right) \) and the cycle \( \gamma \), to obtain \[ I\left( {\gamma, w}\right) \left( {w - c}\right) f\left( w\right) = \frac{1}{{2\pi }\imath }{\int }_{\gamma }\frac{\left( {z - c}\right) f\left( z\right) }{z - w}\mathrm{\;d}z \] for all \( w \in D \) -range \( \gamma \) . We obtain part (a) by evaluating the last equation at \( w = c \) . Remark 5.22. 1. A topologist would develop the concept of homology in much more detail using chains and cycles. However, for our purposes, the above definitions suffice. 2. To help the reader with some of the problems of the last chapter, we review a standard definition from algebraic topology: two cycles \( \gamma = \left( {{\gamma }_{1},\ldots ,{\gamma }_{n}}\right) \) and \( \delta = \left( {{\delta }_{1},\ldots ,{\delta }_{m}}\right) \) with ranges contained in a domain \( D \) are homologous in \( D \) if the cycle with components \( \left( {{\gamma }_{1},\ldots ,{\gamma }_{n},{\delta }_{1 - },\ldots ,{\delta }_{m - }}\right) \) is homologous to zero in \( D \), where \( {\delta }_{i - } \) is the curve \( {\delta }_{i} \) traversed backward (see Definition 4.9). 3. It is also standard to define the next relation between curves, that we have not had any reason to use. Two non-closed paths \( {\gamma }_{1} \) and \( {\gamma }_{2} \) (with ranges contained in \( D \) ) are homologous in \( D \) if they have the same initial point and the same end point and the cycle \( \left( {{\gamma }_{1} * {\gamma }_{2 - }}\right) \) is homologous to zero in \( D \) ; note that the only component of this cycle is the closed path \( {\gamma }_{1} * {\gamma }_{2 - } \) . 4. 
The notions of a cycle \( \gamma = \left( {{\gamma }_{1},{\gamma }_{2},\ldots ,{\gamma }_{n}}\right) \) and the sum \( {\gamma }_{1} + {\gamma }_{2} + \cdots + \) \( {\gamma }_{n} \) of its components, as in Definition 4.58, are different and should not be confused. 5. A domain \( D \) in \( \mathbb{C} \) is simply connected if and only if \( I\left( {\gamma, c}\right) = 0 \) for all cycles \( \gamma \) in \( \mathrm{D} \) and all \( c \in \mathbb{C} - D \) . ## 5.3 Jordan Curves We recall that the continuous closed path \( \gamma : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbb{C} \) is a simple closed path or a Jordan curve whenever \( \gamma \left( {t}_{1}\right) = \gamma \left( {t}_{2}\right) \) with \( 0 \leq {t}_{1} < {t}_{2} \leq 1 \) implies \( {t}_{1} = 0 \) and \( {t}_{2} = 1 \) . In this case, the range of \( \gamma \) is a homeomorphic image of the unit circle \( {S}^{1} \) . To see this, we define \[ h\left( {\mathrm{e}}^{2\pi it}\right) = \gamma \left( t\right) \] and note that \( h \) maps \( {S}^{1} \) onto the range of \( \gamma \) . Observe that \( h \) is well defined, continuous, and injective. Since the circle is compact, \( h \) is a homeomorphism. Theorem 5.23 (Jordan Curve Theorem \( {}^{2} \) ). If \( \gamma \) is a simple closed path in \( \mathbb{C} \), then (a) \( \mathbb{C} \) - range \( \gamma \) has exactly two connected components, one of which is bounded. (b) Range \( \gamma \) is the boundary of each of these components, and (c) \( I\left( {\gamma, c}\right) = 0 \) for all \( c \) in the unbounded component of the complement of the range of \( \gamma .I\left( {\gamma, c}\right) = \pm 1 \) for all \( c \) in the bounded component of the complement of the range of \( \gamma \) . The choice of sign depends only on the choice of direction for traversal on \( \gamma \) . Definition 5.24. 
For a simple closed path \( \gamma \) in \( \mathbb{C} \) we define the interior of \( \gamma, i\left( \gamma \right) \) , to be the bounded component of \( \mathbb{C} \) - range \( \gamma \) and the exterior of \( \gamma, e\left( \gamma \right) \), to be the unbounded component of \( \mathbb{C} \) -range \( \gamma \) . If \( I\left( {\gamma, c}\right) = + 1 \) (respectively -1) for \( c \) in \( i\left( \gamma \right) \) then we say that \( \gamma \) is a Jordan curve with positive (respectively negative) orientation. We shall not prove the above theorem. It is a deep result. In all of our applications, it will be obvious that our Jordan curves have the above properties. Remark 5.25. Another important (and nontrivial to prove) property of Jordan curves is the fact that the interior of a Jordan curve is always a simply connected domain in \( \mathbb{C} \) . If we view the Jordan curve as lying on the Riemann sphere \( \widehat{\mathbb{C}} \), then each component of the complement of its range is simply connected. This property allows us to prove the following result. Theorem 5.26 (C
1189_(GTM95)Probability-1
Definition 1
Definition 1. The distance in variation between measures \( P \) and \( \widetilde{P} \) in \( \mathcal{P} \) (notation: \( \parallel P - \widetilde{P}\parallel ) \) is the total variation of \( P - \widetilde{P} \), i.e., \[ \parallel P - \widetilde{P}\parallel = \operatorname{var}\left( {P - \widetilde{P}}\right) \equiv \sup \left| {{\int }_{\Omega }\varphi \left( \omega \right) d\left( {P - \widetilde{P}}\right) }\right| , \] (1) where the sup is over the class of all \( \mathcal{F} \) -measurable functions that satisfy the condition that \( \left| {\varphi \left( \omega \right) }\right| \leq 1 \) . Lemma 1. The distance in variation is given by \[ \parallel P - \widetilde{P}\parallel = 2\mathop{\sup }\limits_{{A \in \mathcal{F}}}\left| {P\left( A\right) - \widetilde{P}\left( A\right) }\right| . \] (2) Proof. Since, for all \( A \in \mathcal{F} \) , \[ P\left( A\right) - \widetilde{P}\left( A\right) = \widetilde{P}\left( \bar{A}\right) - P\left( \bar{A}\right) \] we have \[ 2\left| {P\left( A\right) - \widetilde{P}\left( A\right) }\right| = \left| {P\left( A\right) - \widetilde{P}\left( A\right) }\right| + \left| {P\left( \bar{A}\right) - \widetilde{P}\left( \bar{A}\right) }\right| \leq \parallel P - \widetilde{P}\parallel , \] where the last inequality follows from (1). For the proof of the converse inequality we turn to the Hahn decomposition (see, for example,[52],§ 5, Chapter VI, or [39], p. 121) of the signed measure \( \mu \equiv P - \widetilde{P} \) . In this decomposition the measure \( \mu \) is represented in the form \( \mu = {\mu }_{ + } - {\mu }_{ - } \), where the nonnegative measures \( {\mu }_{ + } \) and \( {\mu }_{ - } \) (the upper and lower variations of \( \mu \) ) are of the form \[ {\mu }_{ + }\left( A\right) = {\int }_{A \cap M}{d\mu },{\mu }_{ - }\left( A\right) = - {\int }_{A \cap \bar{M}}{d\mu },\;A \in \mathcal{F}, \] where \( M \) is a set in \( \mathcal{F} \) . 
Here \[ \operatorname{var}\mu = \operatorname{var}{\mu }_{ + } + \operatorname{var}{\mu }_{ - } = {\mu }_{ + }\left( \Omega \right) + {\mu }_{ - }\left( \Omega \right) . \] Since \[ {\mu }_{ + }\left( \Omega \right) = P\left( M\right) - \widetilde{P}\left( M\right) ,\;{\mu }_{ - }\left( \Omega \right) = \widetilde{P}\left( \bar{M}\right) - P\left( \bar{M}\right) , \] we have \[ \parallel P - \widetilde{P}\parallel = \left( {P\left( M\right) - \widetilde{P}\left( M\right) }\right) + \left( {\widetilde{P}\left( \bar{M}\right) - P\left( \bar{M}\right) }\right) \leq 2\mathop{\sup }\limits_{{A \in \mathcal{F}}}\left| {P\left( A\right) - \widetilde{P}\left( A\right) }\right| . \] This completes the proof of the lemma. Definition 2. A sequence of probability measures \( {P}_{n}, n \geq 1 \), is said to be convergent in variation to the measure \( P \) (denoted \( {P}_{n}\xrightarrow[]{\text{ var }}P \) ), if \[ \begin{Vmatrix}{{P}_{n} - P}\end{Vmatrix} \rightarrow 0,\;n \rightarrow \infty . \] (3) From this definition and Theorem 1 of Sect. 1 it is easily seen that convergence in variation of probability measures defined on a metric space \( \left( {\Omega ,\mathcal{F},\rho }\right) \) implies their weak convergence. The proximity in variation of distributions is, perhaps, the strongest form of closeness of probability distributions, since if two distributions are close in variation, then in practice, in specific situations, they can be considered indistinguishable. In this connection, the impression may be created that the study of distance in variation is not of much probabilistic interest. However, for example, in Poisson's theorem (Sect. 6, Chap. 1) the convergence of the binomial to the Poisson distribution takes place in the sense of convergence to zero of the distance in variation between these distributions. (Later, in Sect. 12, we shall obtain an upper bound for this distance.) 
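Lemma 1 and the discrete-measure formula of Corollary 2 below can be checked by brute force on a small finite space; the Python sketch below (the two five-point distributions are hypothetical numbers of our own) also anticipates formula (5), since \( \sum \min \left( {{p}_{i},{\widetilde{p}}_{i}}\right) = 1 - \frac{1}{2}\sum \left| {{p}_{i} - {\widetilde{p}}_{i}}\right| \):

```python
from itertools import combinations

# two hypothetical distributions on the five-point space {0, 1, 2, 3, 4}
P  = [0.10, 0.20, 0.30, 0.25, 0.15]
Pt = [0.25, 0.05, 0.30, 0.10, 0.30]
pts = range(len(P))

# Corollary 2: ||P - P~|| = sum_i |p_i - p~_i|
var_dist = sum(abs(p - q) for p, q in zip(P, Pt))

# Lemma 1: ||P - P~|| = 2 sup_A |P(A) - P~(A)|, sup over all 2^5 events A
sup_dist = 2 * max(abs(sum(P[i] for i in A) - sum(Pt[i] for i in A))
                   for r in range(len(P) + 1) for A in combinations(pts, r))

# the sup is attained at the Hahn set M = {i : p_i > p~_i}
M = [i for i in pts if P[i] > Pt[i]]

# minimal total error of hypothesis testing: sum_i min(p_i, p~_i) = 1 - ||P - P~||/2
err = sum(min(p, q) for p, q in zip(P, Pt))

print(round(var_dist, 10), round(sup_dist, 10), round(err, 10))   # → 0.6 0.6 0.7
```

Enumerating all \( {2}^{5} \) events is feasible only for tiny spaces, but it makes the Hahn-decomposition argument of Lemma 1 completely concrete: the supremum is achieved at \( M = \{ i : {p}_{i} > {\widetilde{p}}_{i}\} \).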
We also provide an example from the field of mathematical statistics, where the necessity of determining the distance in variation between measures \( P \) and \( \widetilde{P} \) arises in a natural way in connection with the problem of discrimination (based on observed data) between two statistical hypotheses \( H \) (the true distribution is \( P \) ) and \( \widetilde{H} \) (the true distribution is \( \widetilde{P} \) ) in order to decide which probabilistic model \( \left( {\Omega ,\mathcal{F}, P}\right) \) or \( \left( {\Omega ,\mathcal{F},\widetilde{P}}\right) \) better fits the statistical data. If \( \omega \in \Omega \) is treated as the result of an observation, by a test (for discrimination between the hypotheses \( H \) and \( \widetilde{H} \) ) we understand any \( \mathcal{F} \) -measurable function \( \varphi = \varphi \left( \omega \right) \) with values in \( \left\lbrack {0,1}\right\rbrack \), the statistical meaning of which is that \( \varphi \left( \omega \right) \) is "the probability with which hypothesis \( \widetilde{H} \) is accepted if the result of the observation is \( \omega \) ." We shall characterize the performance of this rule for discrimination between \( H \) and \( \widetilde{H} \) by the probabilities of errors of the first and second kind: \[ \alpha \left( \varphi \right) = {E\varphi }\left( \omega \right) \;\left( { = \operatorname{Prob}\left( {\text{ accepting }\widetilde{H} \mid H\text{ is true }}\right) }\right) , \] \[ \beta \left( \varphi \right) = \widetilde{E}\left( {1 - \varphi \left( \omega \right) }\right) \;\left( { = \operatorname{Prob}\left( {\text{ accepting }H \mid \widetilde{H}\text{ is true }}\right) }\right) . 
\] In the case when hypotheses \( H \) and \( \widetilde{H} \) are equally significant to us, it is natural to consider the test \( {\varphi }^{ * } = {\varphi }^{ * }\left( \omega \right) \) (if there is such a test) that minimizes the sum \( \alpha \left( \varphi \right) + \beta \left( \varphi \right) \) of the errors as the optimal one. We set \[ \mathcal{E}r\left( {P,\widetilde{P}}\right) = \mathop{\inf }\limits_{\varphi }\left\lbrack {\alpha \left( \varphi \right) + \beta \left( \varphi \right) }\right\rbrack \] (4) Let \( Q = \left( {P + \widetilde{P}}\right) /2 \) and \( z = {dP}/{dQ},\widetilde{z} = d\widetilde{P}/{dQ} \) . Then \[ \mathcal{E}r\left( {P,\widetilde{P}}\right) = \mathop{\inf }\limits_{\varphi }\left\lbrack {{E\varphi } + \widetilde{E}\left( {1 - \varphi }\right) }\right\rbrack \] \[ = \mathop{\inf }\limits_{\varphi }{E}_{Q}\left\lbrack {{z\varphi } + \widetilde{z}\left( {1 - \varphi }\right) }\right\rbrack = 1 + \mathop{\inf }\limits_{\varphi }{E}_{Q}\left\lbrack {\varphi \left( {z - \widetilde{z}}\right) }\right\rbrack \] where \( {E}_{Q} \) is the expectation with respect to the measure \( Q \) . It is easy to see that the inf is attained by the function \[ {\varphi }^{ * }\left( \omega \right) = I\{ \widetilde{z} \geq z\} \] and, since \( {E}_{Q}\left( {z - \widetilde{z}}\right) = 0 \), that \[ \mathcal{E}r\left( {P,\widetilde{P}}\right) = 1 - \frac{1}{2}{E}_{Q}\left| {z - \widetilde{z}}\right| = 1 - \frac{1}{2}\parallel P - \widetilde{P}\parallel , \] (5) where the last equation will follow from Lemma 2, below. Thus it is seen from (5) that the performance \( \mathcal{E}r\left( {P,\widetilde{P}}\right) \) of the optimal test for discrimination between the two hypotheses depends on the total variation distance between \( P \) and \( \widetilde{P} \) . Lemma 2. 
Let \( Q \) be a \( \sigma \) -finite measure such that \( P \ll Q,\widetilde{P} \ll Q \) and let \( z = {dP}/{dQ} \) , \( \widetilde{z} = d\widetilde{P}/{dQ} \) be the Radon-Nikodym derivatives of \( P \) and \( \widetilde{P} \) with respect to \( Q \) . Then \[ \parallel P - \widetilde{P}\parallel = {E}_{Q}\left| {z - \widetilde{z}}\right| \] (6) and if \( Q = \left( {P + \widetilde{P}}\right) /2 \), we have \[ \parallel P - \widetilde{P}\parallel = {E}_{Q}\left| {z - \widetilde{z}}\right| = 2{E}_{Q}\left| {1 - z}\right| = 2{E}_{Q}\left| {1 - \widetilde{z}}\right| . \] (7) Proof. For all \( \mathcal{F} \) -measurable functions \( \psi = \psi \left( \omega \right) \) with \( \left| {\psi \left( \omega \right) }\right| \leq 1 \), we see from the definitions of \( z \) and \( \widetilde{z} \) that \[ \left| {{E\psi } - \widetilde{E}\psi }\right| = \left| {{E}_{Q}\psi \left( {z - \widetilde{z}}\right) }\right| \leq {E}_{Q}\left| \psi \right| \left| {z - \widetilde{z}}\right| \leq {E}_{Q}\left| {z - \widetilde{z}}\right| . \] (8) Therefore, \[ \parallel P - \widetilde{P}\parallel \leq {E}_{Q}\left| {z - \widetilde{z}}\right| \] (9) However, for the function \[ \psi = \operatorname{sign}\left( {\widetilde{z} - z}\right) = \begin{cases} 1, & \widetilde{z} \geq z \\ - 1, & \widetilde{z} < z \end{cases} \] we have \[ \left| {{E\psi } - \widetilde{E}\psi }\right| = {E}_{Q}\left| {z - \widetilde{z}}\right| \] (10) We obtain the required equation (6) from (9) and (10). Then (7) follows from (6) because \( z + \widetilde{z} = 2 \) ( \( Q \) -a. s.). Corollary 1. Let \( P \) and \( \widetilde{P} \) be two probability distributions on \( \left( {R,\mathcal{B}\left( R\right) }\right) \) with probability densities \( p\left( x\right) \) and \( \widetilde{p}\left( x\right) \) , \( x \in R \), with respect to Lebesgue measure \( {dx} \) . Then \[ \parallel P - \widetilde{P}\parallel = {\int }_{-\infty }^{\infty }\left| {p\left( x\right) - \widetilde{p}\left( x\right) }\right| {dx}. 
\] (11) (As the measure \( Q \), we are to take Lebesgue measure on \( \left( {R,\mathcal{B}\left( R\right) }\right) \) .) Corollary 2. Let \( P \) and \( \widetilde{P} \) be two discrete measures, \( P = \left( {{p}_{1},{p}_{2},\ldots }\right) ,\widetilde{P} = \) \( \left( {{\widetilde{p}}_{1},{\widetilde{p}}_{2},\ldots }\right) \), concentrated on a countable set of points \( {x}_{1},{x}_{2},\ldots \) . Then \[ \parallel P - \widetilde{P}\parallel = \mathop{\sum }\limits_{{i = 1}}^{\infty }\left| {{p}_{i} - {\widetilde{p}}_{i}}\right| \] (12) (As the measure \( Q \), we are to take the counting measure concentrated on the points \( {x}_{1},{x}_{2},\ldots \) .)
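For discrete distributions as in Corollary 2, relation (5) between the optimal test and the total variation distance is easy to check numerically. The following sketch (with made-up distributions `p` and `q`, not taken from the text) verifies that the test \( {\varphi }^{ * } \) achieves \( \alpha + \beta = 1 - \frac{1}{2}\parallel P - \widetilde{P}\parallel \) :

```python
# Two discrete hypotheses P = (p_i) and P~ = (q_i) on points x_1, ..., x_n.
p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

# Total variation distance, formula (12): ||P - P~|| = sum_i |p_i - q_i|.
tv = sum(abs(pi - qi) for pi, qi in zip(p, q))

# Optimal test phi* = I{z~ > z}; with Q = (P + P~)/2 this is I{q_i > p_i}.
phi = [1.0 if qi > pi else 0.0 for pi, qi in zip(p, q)]

alpha = sum(f * pi for f, pi in zip(phi, p))        # error of the first kind
beta = sum((1 - f) * qi for f, qi in zip(phi, q))   # error of the second kind

# Formula (5): alpha + beta = 1 - ||P - P~|| / 2.
assert abs((alpha + beta) - (1 - tv / 2)) < 1e-12
```

Any other test can only increase the sum of the errors; for instance, the constant test \( \varphi \equiv 0 \) gives \( \alpha + \beta = 1 \) .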
1359_[陈省身] Lectures on Differential Geometry
Definition 1.1
Definition 1.1. Suppose \( E, M \) are two smooth manifolds, and \( \pi : E \rightarrow M \) is a smooth surjective map. Let \( V = {\mathbb{R}}^{q} \) be a \( q \) -dimensional vector space. If an open covering \( \{ U, W, Z,\ldots \} \) of \( M \) and a set of maps \( \left\{ {{\varphi }_{U},{\varphi }_{W},{\varphi }_{Z},\ldots }\right\} \) satisfy all of the following conditions, then \( \left( {E, M,\pi }\right) \) is called a (real) \( q \) -dimensional vector bundle on \( M \), where \( E \) is called the bundle space, \( M \) is called the base space, \( \pi \) is called the bundle projection, and \( V = {\mathbb{R}}^{q} \) is called the typical fiber: 1) Every map \( {\varphi }_{U} \) is a diffeomorphism from \( U \times {\mathbb{R}}^{q} \) to \( {\pi }^{-1}\left( U\right) \), and for any \( p \in U, y \in {\mathbb{R}}^{q} \) , \[ \pi \circ {\varphi }_{U}\left( {p, y}\right) = p. \] (1.21) 2) For any fixed \( p \in U \), let \[ {\varphi }_{U, p}\left( y\right) = {\varphi }_{U}\left( {p, y}\right) ,\;y \in {\mathbb{R}}^{q}. \] (1.22) Then \( {\varphi }_{U, p} : {\mathbb{R}}^{q} \rightarrow {\pi }^{-1}\left( p\right) \) is a homeomorphism. When \( U \cap W \neq \varnothing \) , for any \( p \in U \cap W \) , \[ {g}_{UW}\left( p\right) = {\varphi }_{W, p}^{-1} \circ {\varphi }_{U, p} : {\mathbb{R}}^{q} \rightarrow {\mathbb{R}}^{q} \] (1.23) is a linear automorphism of \( V = {\mathbb{R}}^{q} \), i.e., \( {g}_{UW}\left( p\right) \in {GL}\left( V\right) \) . 3) When \( U \cap W \neq \varnothing \), the map \( {g}_{UW} : U \cap W \rightarrow {GL}\left( V\right) \) is smooth. 
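As a concrete instance of conditions 1)-3) (an illustrative sketch, not taken from the text), the Möbius line bundle over \( {S}^{1} \) can be described by two charts whose transition function takes the values \( \pm 1 \in {GL}\left( {1,\mathbb{R}}\right) \) on the two arcs of the overlap:

```python
import math

# Points of S^1 are angles t in [0, 2*pi); chart U omits t = 0 and
# chart W omits t = pi, so U ∩ W consists of two open arcs.
def g_UW(t):
    """Transition function g_UW : U ∩ W -> GL(1, R), i.e. a nonzero scalar."""
    return 1.0 if 0.0 < t < math.pi else -1.0

def g_WU(t):
    """On the overlap, g_WU(p) = g_UW(p)^{-1}."""
    return 1.0 / g_UW(t)

# g_UW(p) * g_WU(p) = id at every point of the overlap.
for t in (0.3, 1.0, 4.0, 6.0):
    assert math.isclose(g_UW(t) * g_WU(t), 1.0)

# Fiber coordinates transform by (1.25): y_W = y_U * g_UW(p).
y_U = 2.5
y_W = y_U * g_UW(4.0)   # 4.0 lies on the arc where g_UW = -1, so y_W = -2.5
```

Gluing the two local products \( U \times \mathbb{R} \) and \( W \times \mathbb{R} \) by this \( \pm 1 \) identification pastes the strips together with a twist, producing the (nontrivial) Möbius band.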
From condition 2), we know that a necessary and sufficient condition for elements \( {y}_{U},{y}_{W} \) in \( V \) to satisfy \[ {\varphi }_{U}\left( {p,{y}_{U}}\right) = {\varphi }_{W}\left( {p,{y}_{W}}\right) \] (1.24) is \[ {y}_{U} \cdot {g}_{UW}\left( p\right) = {y}_{W} \] \( \left( {1.25}\right) \) where \( {g}_{UW}\left( p\right) \) is viewed as a nondegenerate \( \left( {q \times q}\right) \) matrix. For any \( p \in M \), define \( {E}_{p} = {\pi }^{-1}\left( p\right) \) and call it the fiber of the vector bundle \( E \) at the point \( p \) . Suppose \( U \) is a coordinate neighborhood of \( M \) containing \( p \) . Then the linear structure of the typical fiber \( V \) can be transported to the fiber \( {E}_{p} \) through the map \( {\varphi }_{U, p} \), making \( {E}_{p} \) a \( q \) -dimensional vector space. By condition 2), the linear structure of \( {E}_{p} \) is independent of the choices of \( U \) and \( {\varphi }_{U} \) . (The reader should verify this.) A vector bundle \( E \) can therefore be viewed intuitively as the result of pasting together product manifolds of the form \( U \times {\mathbb{R}}^{q} \) along corresponding fibers at the same point \( p \in M \) ( \( U \) being a coordinate neighborhood of \( M \) ), in such a way that the linear relationship between the fibers is preserved. The product manifold \( M \times {\mathbb{R}}^{q} = E \) is the simplest example of a vector bundle, called the trivial bundle over \( M \), or the product bundle. Obviously, all the tensor bundles \( {T}_{s}^{r} \) mentioned previously are vector bundles. Remark. If \( V \) is a \( q \) -dimensional complex vector space, then Definition 1.1 defines a \( q \) -dimensional complex vector bundle on \( M \) . In this case, \( {GL}\left( V\right) \) is isomorphic to \( {GL}\left( {q;\mathbb{C}}\right) \), and the fiber \( {\pi }^{-1}\left( p\right), p \in M \), is a \( q \) -dimensional complex vector space. 
Even though the contents in this section are developed for real vector bundles, they can be applied to complex vector bundles after appropriate adjustments. The map \( {g}_{UW} : U \cap W \rightarrow {GL}\left( V\right) \) defined in condition 2) satisfies the following compatibility conditions: 1) for \( p \in U,{g}_{UU}\left( p\right) = \mathrm{{id}} : V \rightarrow V \) ; 2) if \( p \in U \cap W \cap Z \neq \varnothing \), then \[ {g}_{UW}\left( p\right) \cdot {g}_{WZ}\left( p\right) \cdot {g}_{ZU}\left( p\right) = \text{id} : V \rightarrow V. \] The set \( \left\{ {g}_{UW}\right\} \) is called the family of transition functions of the vector bundle \( \left( {E, M,\pi }\right) \), and the above compatibility conditions are necessary and sufficient conditions for \( \left\{ {g}_{UW}\right\} \) to be such a family. More precisely, we have the following theorem: Theorem 1.1. Suppose \( M \) is an \( m \) -dimensional smooth manifold, \( {\left\{ {U}_{\alpha }\right\} }_{\alpha \in \mathcal{A}} \) is an open covering of \( M \), and \( V \) is a \( q \) -dimensional vector space. If for any pair of indices, \( \alpha ,\beta \in \mathcal{A} \) where \( {U}_{\alpha } \cap {U}_{\beta } \neq \varnothing \), there exists a smooth map \( {g}_{\alpha \beta } : {U}_{\alpha } \cap {U}_{\beta } \rightarrow {GL}\left( V\right) \) that satisfies both compatibility conditions 1) and 2), then there exists a q-dimensional vector bundle \( \left( {E, M,\pi }\right) \) which has \( \left\{ {g}_{\alpha \beta }\right\} \) as its transition functions. For a detailed proof of Theorem 1.1, see p.14 of Steenrod 1951. The idea of the proof is to paste the local products \( {U}_{\alpha } \times V \) along the corresponding fibers. To describe it briefly, let \[ \widetilde{E} = \mathop{\bigcup }\limits_{{\alpha \in \mathcal{A}}}\{ \alpha \} \times {U}_{\alpha } \times V \] (1.26) which is naturally a differentiable manifold. 
Define an equivalence relation \( \sim \) in \( \widetilde{E} \) as follows. For any \( \left( {\alpha, p, y}\right) ,\left( {\beta ,{p}^{\prime },{y}^{\prime }}\right) \in \widetilde{E} \), a necessary and sufficient condition for \( \left( {\alpha, p, y}\right) \sim \left( {\beta ,{p}^{\prime },{y}^{\prime }}\right) \) is that \[ p = {p}^{\prime } \in {U}_{\alpha } \cap {U}_{\beta },\;\text{ and } \] \[ {y}^{\prime } = y \cdot {g}_{\alpha \beta }\left( p\right) \] (1.27) Let \( E = \widetilde{E}/ \sim \) denote the quotient space of \( \widetilde{E} \) with respect to the equivalence relation \( \sim \) . Then it is also a smooth manifold. Denote the equivalence class of \( \left( {\alpha, p, y}\right) \) by \( \left\lbrack {\alpha, p, y}\right\rbrack \), and define the projection \( \pi : E \rightarrow M \) by \[ \pi \left( \left\lbrack {\alpha, p, y}\right\rbrack \right) = p \] (1.28) which is a smooth map. We can show that \( \left( {E, M,\pi }\right) \) is a \( q \) -dimensional vector bundle on \( M \), and its transition functions are precisely \( \left\{ {g}_{\alpha \beta }\right\} \) . By the theorem, we know that the family of transition functions describes the essence of a vector bundle. To construct a vector bundle, we need only specify its transition functions. Example 1 (The dual bundle \( {E}^{ * } \) of a vector bundle \( E \) ). Suppose \( {V}^{ * } \) is the dual space of \( V,{E}^{ * } \) is the vector bundle on \( M \) with \( {V}^{ * } \) as its typical fiber, and the bundle projection is denoted by \( \widetilde{\pi } \) . The structure of the local products of the bundle \( {E}^{ * } \) is given by \( \left\{ {\left( {U,{\psi }_{U}}\right) ,\left( {W,{\psi }_{W}}\right) ,\left( {Z,{\psi }_{Z}}\right) ,\ldots }\right\} \) . 
If for any \( p \in U \cap W \neq \varnothing \), and \( {y}_{U},{y}_{W} \in V,{\lambda }_{U},{\lambda }_{W} \in {V}^{ * } \) satisfying \[ {\varphi }_{U}\left( {p,{y}_{U}}\right) = {\varphi }_{W}\left( {p,{y}_{W}}\right) \] (1.29) \[ {\psi }_{U}\left( {p,{\lambda }_{U}}\right) = {\psi }_{W}\left( {p,{\lambda }_{W}}\right) \] it is always true that \[ < {y}_{U},{\lambda }_{U} > = < {y}_{W},{\lambda }_{W} > \] (1.30) then we can define a pairing between the fibers \( {\pi }^{-1}\left( p\right) \) and \( {\widetilde{\pi }}^{-1}\left( p\right) \) such that they become vector spaces dual to each other. The pairing between the fibers is defined by \[ \left\langle {{\varphi }_{U}\left( {p,{y}_{U}}\right) ,{\psi }_{U}\left( {p,{\lambda }_{U}}\right) }\right\rangle = < {y}_{U},{\lambda }_{U} > , \] (1.31) which is independent of the choice of \( U \) . We call the vector bundle \( {E}^{ * } \) the dual bundle of \( E \) . If we choose dual bases of \( V \) and \( {V}^{ * } \), and then denote any element \( y \) in \( V \) by a coordinate row and any element \( \lambda \) in \( {V}^{ * } \) by a coordinate column, then the pairing between \( V \) and \( {V}^{ * } \) can be expressed as multiplication of matrices: \[ < y,\lambda > = y \cdot \lambda \text{.} \] (1.32) By the first equation in (1.29), \[ {y}_{W} = {y}_{U} \cdot {g}_{UW}\left( p\right) . \] Substituting into (1.30), we get \[ {y}_{U} \cdot {\lambda }_{U} = {y}_{U} \cdot {g}_{UW}\left( p\right) \cdot {\lambda }_{W} \] Therefore \[ {\lambda }_{U} = {g}_{UW}\left( p\right) \cdot {\lambda }_{W} \] (1.33) If we also denote the elements in \( {V}^{ * } \) by coordinate rows, then elements of \( {GL}\left( {V}^{ * }\right) \) operate on \( {V}^{ * } \) on the right, and the transition functions of \( {E}^{ * } \) are \[ {h}_{UW} = {}^{t}\left( {g}_{UW}^{-1}\right) = {}^{t}{g}_{WU}. 
\] (1.34) When \( E \) is the tangent bundle of \( M \), the family \( \left\{ {J}_{UW}\right\} \) of its transition functions is composed of the Jacobian matrices of coordinate transformations. Any transition function of the cotangent bundle is the transpose of the inverse matrix of some \( {J}_{UW} \) . Hence the cotangent bundle is the dual bundle to the tangent bundle. Example 2 (The direct sum \( E \oplus {E}^{\prime } \) of \( E \) and \( {E}^{\prime } \) ). Suppose \( E \) and \( {E}^{\prime } \) are vector bundles on a manifold \( M \) with typical fibers \( V \) and \( {V}^{\prime } \), transition functions \( \left\{ {g}_{UW}\right\} \) and \( \left\{ {g}_{UW}^{\prime }\right\} \), respectively. Let \[ {h}_{UW} = \left( \begin{matrix} {g}_{UW} & 0 \\ 0 & {g}_{UW}^{\prime } \end{matrix}\right) . \] Then \( \left\{ {h}_{UW}\right\} \) satisfies the compatibility conditions, and the vector bundle it determines, with typical fiber \( V \oplus {V}^{\prime } \), is called the direct sum \( E \oplus {E}^{\prime } \) of \( E \) and \( {E}^{\prime } \) .
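Both examples can be checked with explicit matrices. The sketch below (made-up numerical data, not from the text) verifies that the dual transition functions \( {h}_{UW} = {}^{t}\left( {g}_{UW}^{-1}\right) \) preserve the pairing between \( V \) and \( {V}^{ * } \), and that a block-diagonal matrix built from two transition functions is inverted blockwise, as the direct sum construction requires:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 3

# A made-up transition matrix g = g_UW(p) at one fixed point p; adding
# q*I keeps it comfortably invertible.
g = rng.standard_normal((q, q)) + q * np.eye(q)

# Dual bundle: row vectors of V transform by y_W = y_U g, column vectors
# of V* by lam_U = g lam_W, so the pairing <y, lam> = y . lam is unchanged.
y_U = rng.standard_normal(q)       # row vector in V
lam_W = rng.standard_normal(q)     # column vector in V*
y_W = y_U @ g
lam_U = g @ lam_W
assert np.isclose(y_U @ lam_U, y_W @ lam_W)

# Direct sum: diag(g, g') is inverted blockwise, so the compatibility
# conditions for {g_UW} and {g'_UW} are inherited by {h_UW}.
g2 = rng.standard_normal((q, q)) + q * np.eye(q)
h = np.block([[g, np.zeros((q, q))], [np.zeros((q, q)), g2]])
h_inv = np.linalg.inv(h)
assert np.allclose(h_inv[:q, :q], np.linalg.inv(g))
assert np.allclose(h_inv[q:, q:], np.linalg.inv(g2))
```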
1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space
Definition 2.6.9
Definition 2.6.9. The inner function \( {\phi }_{2} \) divides the inner function \( {\phi }_{1} \) if there exists an inner function \( {\phi }_{3} \) such that \( {\phi }_{1} = {\phi }_{2}{\phi }_{3} \) . The inner function \( {\phi }_{g} \) is the greatest common divisor of the collection \( \left\{ {\phi }_{\alpha }\right\} \) of inner functions if \( {\phi }_{g} \) divides \( {\phi }_{\alpha } \) for every \( \alpha \) and \( \phi \) divides \( {\phi }_{g} \) whenever \( \phi \) is an inner function that divides all the \( {\phi }_{\alpha } \) . The inner function \( {\phi }_{m} \) is the least common multiple of the collection \( \left\{ {\phi }_{\alpha }\right\} \) of inner functions if \( {\phi }_{\alpha } \) divides \( {\phi }_{m} \) for every \( \alpha \) and \( {\phi }_{m} \) divides \( \phi \) whenever \( \phi \) is an inner function such that \( {\phi }_{\alpha } \) divides \( \phi \) for all \( \alpha \) . Theorem 2.6.10. Every collection of inner functions has a greatest common divisor. Every finite collection of inner functions has a least common multiple. Proof. Let \( \left\{ {\phi }_{\alpha }\right\} \) be a collection of inner functions. Define \[ \mathcal{M} = \bigvee \left\{ {f : f \in {\phi }_{\alpha }{\mathbf{H}}^{2}\text{ for some }\alpha }\right\} \] It is easily seen that \( \mathcal{M} \) is invariant under \( U \) . Thus \( \mathcal{M} = {\phi }_{g}{\mathbf{H}}^{2} \) for some inner function \( {\phi }_{g} \) . Since \( {\phi }_{\alpha }{\mathbf{H}}^{2} \subset {\phi }_{g}{\mathbf{H}}^{2} \) for every \( \alpha \), it follows that \( {\phi }_{g} \) divides \( {\phi }_{\alpha } \) for all \( \alpha \), by Theorem 2.6.7. Therefore \( {\phi }_{g} \) is a common divisor of the \( {\phi }_{\alpha } \) . Suppose \( \phi \) is an inner function such that \( \phi \) divides \( {\phi }_{\alpha } \) for all \( \alpha \) . Then \( {\phi }_{\alpha }{\mathbf{H}}^{2} \subset \phi {\mathbf{H}}^{2} \) for all \( \alpha \) . 
But then \[ \bigvee \left\{ {f : f \in {\phi }_{\alpha }{\mathbf{H}}^{2}\text{ for some }\alpha }\right\} \subset \phi {\mathbf{H}}^{2} \] and thus \( {\phi }_{g}{\mathbf{H}}^{2} \subset \phi {\mathbf{H}}^{2} \) . This implies that \( \phi \) divides \( {\phi }_{g} \) . Therefore \( {\phi }_{g} \) is the greatest common divisor. (Notice that \( {\phi }_{g} \) could be the constant function 1.) To prove the second assertion, let \( \left\{ {{\phi }_{1},{\phi }_{2},\ldots ,{\phi }_{n}}\right\} \) be a finite collection of inner functions. Let \[ \mathcal{M} = \mathop{\bigcap }\limits_{{j = 1}}^{n}{\phi }_{j}{\mathbf{H}}^{2} \] Clearly \( \left( {{\phi }_{1}{\phi }_{2}\cdots {\phi }_{n}}\right) \in \mathcal{M} \), so \( \mathcal{M} \) is not \( \{ 0\} \) . Therefore \( \mathcal{M} = {\phi }_{l}{\mathbf{H}}^{2} \) for some inner function \( {\phi }_{l} \) . Since \( {\phi }_{l}{\mathbf{H}}^{2} \subset {\phi }_{j}{\mathbf{H}}^{2} \), it follows that every \( {\phi }_{j} \) divides \( {\phi }_{l} \) . Moreover, if \( \psi \) is an inner function that is divisible by every \( {\phi }_{j} \), then \( \psi {\mathbf{H}}^{2} \subset \) \( {\phi }_{j}{\mathbf{H}}^{2} \) for each \( j \), so \( \psi {\mathbf{H}}^{2} \subset \mathcal{M} = {\phi }_{l}{\mathbf{H}}^{2} \) . Thus \( {\phi }_{l} \) divides \( \psi \) . Therefore \( {\phi }_{l} \) is the least common multiple. Corollary 2.6.11. If \( \mathcal{M} \) is an invariant subspace, other than \( \{ 0\} \), of the unilateral shift, then \( \mathcal{M} = \phi {\mathbf{H}}^{2} \), where \( \phi \) is the greatest common divisor of all the inner parts of all the functions in \( \mathcal{M} \) . Proof. Since \( \mathcal{M} \) is an invariant subspace for the unilateral shift, Beurling’s theorem (Theorem 2.2.12) guarantees that there exists an inner function \( \phi \) such that \( \mathcal{M} = \phi {\mathbf{H}}^{2} \) . 
We will show that \( \phi \) is the greatest common divisor of all the inner parts of all the functions in \( \mathcal{M} \) . Let \( f \in \mathcal{M} = \phi {\mathbf{H}}^{2} \) . Then there exists a function \( g \in {\mathbf{H}}^{2} \) such that \( f = {\phi g} \) . Factor the functions \( f \) and \( g \) into their inner and outer parts: \( f = {f}_{I}{f}_{O} \) and \( g = {g}_{I}{g}_{O} \) . We then have \( {f}_{I}{f}_{O} = \phi {g}_{I}{g}_{O} \), and, by Theorem 2.3.4, there must exist a constant \( c \) of modulus 1 such that \( {f}_{I} = {c\phi }{g}_{I} \) . Hence \( \phi \) divides the inner part of \( f \) . Let \( \psi \) be an inner function such that \( \psi \) divides the inner parts of all functions in \( \mathcal{M} \) . Since \( \phi \) is inner and is in \( \phi {\mathbf{H}}^{2} = \mathcal{M} \), it follows that \( \psi \) divides \( \phi \) . Therefore \( \phi \) is the greatest common divisor of all the inner parts of all the functions in \( \mathcal{M} \) . It is of interest to determine what abstract lattices can arise in the form Lat \( A \) for bounded linear operators on a separable Hilbert space. Definition 2.6.12. The abstract lattice \( \mathcal{L} \) is attainable if there exists a bounded linear operator \( A \) on an infinite-dimensional separable complex Hilbert space such that Lat \( A \) is order-isomorphic to \( \mathcal{L} \) . Surprisingly little is known about which lattices are attainable. The invariant subspace problem, the question whether there is an \( A \) whose only invariant subspaces are \( \{ 0\} \) and \( \mathcal{H} \), can be rephrased as: is the totally ordered lattice with two elements attainable? For \( U \) the unilateral shift, Lat \( U \) is a very complicated and rich lattice, as Theorem 2.6.7 indicates. Some of its sublattices will be of the form Lat \( A \) for suitable operators \( A \) . 
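For the power inner functions \( \phi \left( z\right) = {z}^{k} \), the span and intersection constructions in the proof of Theorem 2.6.10 can be imitated in a finite truncation (an illustrative sketch, not the book's setting): truncating \( {\mathbf{H}}^{2} \) to degree \( < N \), the subspace \( {z}^{k}{\mathbf{H}}^{2} \) becomes the coefficient vectors vanishing in the first \( k \) coordinates, the greatest common divisor of \( \left\{ {{z}^{2},{z}^{3}}\right\} \) is \( {z}^{2} \), and the least common multiple is \( {z}^{3} \).

```python
import numpy as np

N = 6   # truncate H^2 to polynomials of degree < N

def zk_H2(k):
    """Columns form a basis of the truncation of z^k H^2 (coefficient
    vectors supported on coordinates k, ..., N-1)."""
    return np.eye(N)[:, k:]

# gcd via the closed span, as in the proof of Theorem 2.6.10:
# span(z^2 H^2 ∪ z^3 H^2) = z^2 H^2, i.e. gcd(z^2, z^3) = z^2.
span = np.hstack([zk_H2(2), zk_H2(3)])
assert np.linalg.matrix_rank(span) == N - 2

# lcm via the intersection:
#   dim(z^2 H^2 ∩ z^3 H^2) = dim z^2 H^2 + dim z^3 H^2 - dim span
#                          = N - 3 = dim z^3 H^2,
# and z^3 H^2 ⊆ z^2 H^2, so the intersection is z^3 H^2: lcm = z^3.
assert (N - 2) + (N - 3) - np.linalg.matrix_rank(span) == N - 3
```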
Recall that given two subspaces \( \mathcal{M} \) and \( \mathcal{N} \) of a Hilbert space \( \mathcal{H} \), the subspace \( \mathcal{N} \ominus \mathcal{M} \) is defined to be \( \mathcal{N} \cap {\mathcal{M}}^{ \bot } \) . The next theorem shows that an "interval" of an attainable lattice is an attainable lattice. Theorem 2.6.13. Let \( A \) be a bounded operator on an infinite-dimensional separable Hilbert space. Suppose that \( \mathcal{M} \) and \( \mathcal{N} \) are in Lat \( A \) and \( \mathcal{M} \subset \mathcal{N} \) . If \( \mathcal{N} \ominus \mathcal{M} \) is infinite-dimensional, then the lattice \[ \{ \mathcal{L} : \mathcal{L} \in \operatorname{Lat}A\text{and}\mathcal{M} \subset \mathcal{L} \subset \mathcal{N}\} \] is attainable. Proof. Let \( P \) be the projection onto the subspace \( \mathcal{N} \ominus \mathcal{M} \) . Define the bounded linear operator \( B \) on \( \mathcal{N} \ominus \mathcal{M} \) as \( B = {\left. PA\right| }_{\mathcal{N} \ominus \mathcal{M}} \) . We will show that Lat \( B \) is order-isomorphic to the lattice of the theorem. Let \( \mathcal{K} \in \operatorname{Lat}B \) . We show that \( \mathcal{M} \oplus \mathcal{K} \) is in the lattice of the theorem. First of all, since \( \mathcal{K} \subset \mathcal{N} \ominus \mathcal{M} \), it is clear that \( \mathcal{M} \subset \mathcal{M} \oplus \mathcal{K} \subset \mathcal{N} \) . Let \( m + k \in \mathcal{M} \oplus \mathcal{K} \) . Then \[ A\left( {m + k}\right) = {Am} + \left( {I - P}\right) {Ak} + {PAk}. \] Since \( m \in \mathcal{M} \) and \( \mathcal{M} \in \operatorname{Lat}A \), it follows that \( {Am} \in \mathcal{M} \) . Since \( k \in \mathcal{K} \subset \mathcal{N} \) and \( \mathcal{N} \in \operatorname{Lat}A \), we have that \( {Ak} \in \mathcal{N} \) . Thus, since \( I - P \) is the projection onto \( {\mathcal{N}}^{ \bot } \oplus \mathcal{M} \), it follows that \( \left( {I - P}\right) {Ak} \in \mathcal{M} \) . 
Lastly, \( {PAk} = {Bk} \) and since \( k \in \mathcal{K} \) and \( \mathcal{K} \in \operatorname{Lat}B \) we must have that \( {Bk} \in \mathcal{K} \) . Thus \[ A\left( {m + k}\right) = \left( {{Am} + \left( {I - P}\right) {Ak}}\right) + {Bk} \in \mathcal{M} \oplus \mathcal{K}. \] Hence \( \mathcal{M} \oplus \mathcal{K} \) is a member of the lattice of the theorem. Now suppose that \( \mathcal{L} \) is a member of the given lattice. Define \( \mathcal{K} = \mathcal{L} \ominus \mathcal{M} \) . Clearly \( \mathcal{K} \subset \mathcal{N} \ominus \mathcal{M} \) . We will prove that \( \mathcal{K} \in \operatorname{Lat}B \) . Let \( k \in \mathcal{K} \) . Since \( \mathcal{K} \subset \mathcal{L} \) and \( \mathcal{L} \in \operatorname{Lat}A \), we have \( {Ak} \in \mathcal{L} \) . Write \( {Ak} \) as \( {Ak} = f + g \) with \( f \in \mathcal{L} \ominus \mathcal{M} \) and \( g \in \mathcal{M} \) . Then \( {PAk} = f \) since \( P \) is the projection onto \( \mathcal{N} \ominus \mathcal{M} \) and \( \mathcal{L} \ominus \mathcal{M} \subset \mathcal{N} \ominus \mathcal{M} \) . Thus \( {Bk} = {PAk} = f \in \mathcal{L} \ominus \mathcal{M} = \mathcal{K} \), so \( \mathcal{K} \in \operatorname{Lat}B \) . Since \( \mathcal{M} \subset \mathcal{L},\mathcal{K} = \mathcal{L} \ominus \mathcal{M} \) is equivalent to \( \mathcal{L} = \mathcal{M} \oplus \mathcal{K} \) . Thus \( \mathcal{K} \in \operatorname{Lat}B \) if and only if \( \mathcal{L} = \mathcal{M} \oplus \mathcal{K} \) is in the lattice in the statement of the theorem, which establishes the isomorphism. The invariant subspace lattice of the unilateral shift has interesting "intervals", including the ordinary closed unit interval. Example 2.6.14. Let \[ \phi \left( z\right) = \exp \left( \frac{z + 1}{z - 1}\right) \] and let \( \mathcal{M} = {\left( \phi {\mathbf{H}}^{2}\right) }^{ \bot } \) . Then Lat \( \left( {\left. 
{U}^{ * }\right| }_{\mathcal{M}}\right) \) is order-isomorphic to the closed unit interval \( \left\lbrack {0,1}\right\rbrack \) with its standard ordering. Proof. The function \( \phi \) is inner singular. The measure \( \mu \) defined by \( \mu \left( A\right) = {2\pi } \) for any Borel set \( A \) containing 0 and \( \mu \left( B\right) = 0 \) for Borel sets \( B \) not containing 0
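The compression argument in the proof of Theorem 2.6.13 can be sketched numerically in finite dimensions (made-up matrices; the theorem itself concerns infinite-dimensional spaces). Order coordinates so that the first block spans \( \mathcal{M} \), the middle block spans \( \mathcal{N} \ominus \mathcal{M} \), and the last block spans \( {\mathcal{N}}^{ \bot } \) ; then \( A \) is block upper triangular, \( B \) is the middle diagonal block, and \( \mathcal{M} \oplus \mathcal{K} \) is \( A \) -invariant whenever \( \mathcal{K} \in \operatorname{Lat}B \) :

```python
import numpy as np

rng = np.random.default_rng(1)

# Coordinates 0-1 span M, 2-3 span N ⊖ M, 4-5 span N^⊥.
A = rng.standard_normal((6, 6))
A[2:, :2] = 0.0    # A M ⊆ M
A[4:, 2:4] = 0.0   # A N ⊆ N
A[3, 2] = 0.0      # chosen so that B has an obvious invariant subspace

B = A[2:4, 2:4]    # B = P A restricted to N ⊖ M

# K = span(e_2) satisfies B K ⊆ K ...
k = np.array([1.0, 0.0])
assert np.isclose((B @ k)[1], 0.0)

# ... and then L = M ⊕ K = span(e_0, e_1, e_2) is A-invariant, as in
# the proof: A x has no components in coordinates 3, 4, 5.
x = np.array([0.5, -1.0, 2.0, 0.0, 0.0, 0.0])   # an element of M ⊕ K
assert np.allclose((A @ x)[3:], 0.0)
```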
1068_(GTM227)Combinatorial Commutative Algebra
Definition 1.4
Definition 1.4 An (abstract) simplicial complex \( \Delta \) on the vertex set \( \{ 1,\ldots, n\} \) is a collection of subsets called faces or simplices, closed under taking subsets; that is, if \( \sigma \in \Delta \) is a face and \( \tau \subseteq \sigma \), then \( \tau \in \Delta \) . A simplex \( \sigma \in \Delta \) of cardinality \( \left| \sigma \right| = i + 1 \) has dimension \( i \) and is called an \( i \) -face of \( \Delta \) . The dimension \( \dim \left( \Delta \right) \) of \( \Delta \) is the maximum of the dimensions of its faces, or it is \( - \infty \) if \( \Delta = \{ \} \) is the void complex, which has no faces. The empty set \( \varnothing \) is the unique dimension -1 face in any simplicial complex \( \Delta \) that is not the void complex \( \{ \} \) . Thus the irrelevant complex \( \{ \varnothing \} \) , whose unique face is the empty set, is to be distinguished from the void complex. The reason for this distinction will become clear when we introduce (co)homology as well as in numerous applications to monomial ideals. We frequently identify \( \{ 1,\ldots, n\} \) with the variables \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \), as in our next example, or with \( \{ a, b, c,\ldots \} \), as in Example 1.8. Example 1.5 The simplicial complex \( \Delta \) on \( \{ 1,2,3,4,5\} \) consisting of all subsets of the sets \( \{ 1,2,3\} ,\{ 2,4\} ,\{ 3,4\} \), and \( \{ 5\} \) is pictured below: [Figure: the simplicial complex \( \Delta \)] Note that \( \Delta \) is completely specified by its facets, or maximal faces, by definition of simplicial complex. Simplicial complexes determine squarefree monomial ideals. For notation, we identify each subset \( \sigma \subseteq \{ 1,\ldots, n\} \) with its squarefree vector in \( \{ 0,1{\} }^{n} \), which has entry 1 in the \( {i}^{\text{th }} \) spot when \( i \in \sigma \), and 0 in all other entries. 
This convention allows us to write \( {\mathbf{x}}^{\sigma } = \mathop{\prod }\limits_{{i \in \sigma }}{x}_{i} \) . Definition 1.6 The Stanley-Reisner ideal of the simplicial complex \( \Delta \) is the squarefree monomial ideal \[ {I}_{\Delta } = \left\langle {{\mathbf{x}}^{\tau } \mid \tau \notin \Delta }\right\rangle \] generated by monomials corresponding to nonfaces \( \tau \) of \( \Delta \) . The Stanley-Reisner ring of \( \Delta \) is the quotient ring \( S/{I}_{\Delta } \) . There are two ways to present a squarefree monomial ideal: either by its generators or as an intersection of monomial prime ideals. These are generated by subsets of \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) . For notation, we write \[ {\mathfrak{m}}^{\tau } = \left\langle {{x}_{i} \mid i \in \tau }\right\rangle \] for the monomial prime ideal corresponding to \( \tau \) . Frequently, \( \tau \) will be the complement \( \bar{\sigma } = \{ 1,\ldots, n\} \smallsetminus \sigma \) of some simplex \( \sigma \) . Theorem 1.7 The correspondence \( \Delta \rightsquigarrow {I}_{\Delta } \) constitutes a bijection from simplicial complexes on vertices \( \{ 1,\ldots, n\} \) to squarefree monomial ideals inside \( S = \mathbb{k}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . Furthermore, \[ {I}_{\Delta } = \mathop{\bigcap }\limits_{{\sigma \in \Delta }}{\mathfrak{m}}^{\bar{\sigma }} \] Proof. By definition, the set of squarefree monomials that have nonzero images in the Stanley-Reisner ring \( S/{I}_{\Delta } \) is precisely \( \left\{ {{\mathbf{x}}^{\sigma } \mid \sigma \in \Delta }\right\} \) . This shows that the map \( \Delta \rightsquigarrow {I}_{\Delta } \) is bijective. In order for \( {\mathbf{x}}^{\tau } \) to lie in the intersection \( \mathop{\bigcap }\limits_{{\sigma \in \Delta }}{\mathfrak{m}}^{\bar{\sigma }} \), it is necessary and sufficient that \( \tau \) share at least one element with \( \bar{\sigma } \) for each face \( \sigma \in \Delta \) . 
Equivalently, \( \tau \) must be contained in no face of \( \Delta \) ; that is, \( \tau \) must be a nonface of \( \Delta \) . Example 1.8 The simplicial complex \( \Delta \) from Example 1.5, after replacing the variables \( \left\{ {{x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5}}\right\} \) by \( \{ a, b, c, d, e\} \), has Stanley-Reisner ideal \[ {I}_{\Delta } = \left\langle {{ad},{ae},{be},{ce},{de},{bcd}}\right\rangle = \left\langle {d, e}\right\rangle \cap \left\langle {a, c, e}\right\rangle \cap \left\langle {a, b, e}\right\rangle \cap \left\langle {a, b, c, d}\right\rangle . \] This expresses \( {I}_{\Delta } \) via its minimal generators and its prime decomposition. Each prime component is generated by the variables outside the corresponding facet of \( \Delta \) . Remark 1.9 Because of the expression of Stanley-Reisner ideals \( {I}_{\Delta } \) as intersections in Theorem 1.7, they are also in bijection with unions of coordinate subspaces in the vector space \( {\mathbb{k}}^{n} \), or equivalently, unions of coordinate subspaces in the projective space \( {\mathbb{P}}_{\mathbb{k}}^{n - 1} \) . A little bit of caution is warranted here: if \( \mathbb{k} \) is finite, it is not true that \( {I}_{\Delta } \) equals the ideal of polynomials vanishing on the corresponding collection of coordinate subspaces; in fact, this vanishing ideal will not be a monomial ideal! On the other hand, when \( \mathbb{k} \) is infinite, the Zariski correspondence between radical ideals and algebraic sets does induce the bijection between squarefree monomial ideals and their zero sets, which are unions of coordinate subspaces. (The zero set inside \( {\mathbb{k}}^{n} \) of an ideal \( I \) in \( \mathbb{k}\left\lbrack \mathbf{x}\right\rbrack \) is the set of points \( \left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \in {\mathbb{k}}^{n} \) such that \( f\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) = 0 \) for every polynomial \( f \in I \) .) ## 1.2 Hilbert series Even if the goal is to study monomial ideals, it is necessary to consider graded modules more general than ideals. 
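As a quick computational aside (an illustrative sketch, not from the text), the intersection formula of Theorem 1.7 can be verified for the complex of Examples 1.5 and 1.8; intersecting over the facets alone suffices, since \( \sigma \subseteq {\sigma }^{\prime } \) implies \( {\mathfrak{m}}^{{\bar{\sigma }}^{\prime }} \subseteq {\mathfrak{m}}^{\bar{\sigma }} \) :

```python
from itertools import combinations

# The complex of Examples 1.5 / 1.8 on vertices a..e, given by its facets.
facets = [{"a", "b", "c"}, {"b", "d"}, {"c", "d"}, {"e"}]
faces = {frozenset(t) for F in facets
         for k in range(len(F) + 1) for t in combinations(sorted(F), k)}
verts = set("abcde")

# A squarefree monomial x^tau lies in I_Delta iff tau is a nonface; it
# lies in m^(complement of a facet F) iff tau meets the complement of F.
def in_I(tau):
    return frozenset(tau) not in faces

def in_intersection(tau):
    return all(tau & (verts - F) for F in facets)

subsets = [set(t) for k in range(1, 6) for t in combinations(sorted(verts), k)]
assert all(in_I(t) == in_intersection(t) for t in subsets)

# The minimal nonfaces give the minimal generators of I_Delta.
nonfaces = [t for t in subsets if in_I(t)]
minimal = sorted("".join(sorted(t)) for t in nonfaces
                 if not any(s < t for s in nonfaces))
assert minimal == ["ad", "ae", "bcd", "be", "ce", "de"]
```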
Definition 1.10 An \( S \) -module \( M \) is \( {\mathbb{N}}^{n} \) -graded if \( M = {\bigoplus }_{\mathbf{b} \in {\mathbb{N}}^{n}}{M}_{\mathbf{b}} \) and \( {\mathbf{x}}^{\mathbf{a}}{M}_{\mathbf{b}} \subseteq {M}_{\mathbf{a} + \mathbf{b}} \) . If the vector space dimension \( {\dim }_{\mathbb{k}}\left( {M}_{\mathbf{a}}\right) \) is finite for all \( \mathbf{a} \in {\mathbb{N}}^{n} \), then the formal power series \[ H\left( {M;\mathbf{x}}\right) = \mathop{\sum }\limits_{{\mathbf{a} \in {\mathbb{N}}^{n}}}{\dim }_{\mathbb{k}}\left( {M}_{\mathbf{a}}\right) \cdot {\mathbf{x}}^{\mathbf{a}} \] is the finely graded or \( {\mathbb{N}}^{n} \) -graded Hilbert series of \( M \) . Setting \( {x}_{i} = t \) for all \( i \) yields the ( \( \mathbb{Z} \) -graded or coarse) Hilbert series \( H\left( {M;t,\ldots, t}\right) \) . The ring of formal power series in which finely graded Hilbert series live is \( \mathbb{Z}\left\lbrack \left\lbrack \mathbf{x}\right\rbrack \right\rbrack = \mathbb{Z}\left\lbrack \left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \right\rbrack \) . In this ring, each element \( 1 - {x}_{i} \) is invertible, the series \( \frac{1}{1 - {x}_{i}} = 1 + {x}_{i} + {x}_{i}^{2} + \cdots \) being its inverse. Example 1.11 The Hilbert series of \( S \) itself is the rational function \[ H\left( {S;\mathbf{x}}\right) = \mathop{\prod }\limits_{{i = 1}}^{n}\frac{1}{1 - {x}_{i}} \] \[ = \text{sum of all monomials in}S\text{.} \] Denote by \( S\left( {-\mathbf{a}}\right) \) the free module generated in degree \( \mathbf{a} \), so \( S\left( {-\mathbf{a}}\right) \cong \left\langle {\mathbf{x}}^{\mathbf{a}}\right\rangle \) as \( {\mathbb{N}}^{n} \) -graded modules. 
The Hilbert series \[ H\left( {S\left( {-\mathbf{a}}\right) ;\mathbf{x}}\right) = \frac{{\mathbf{x}}^{\mathbf{a}}}{\mathop{\prod }\limits_{{i = 1}}^{n}\left( {1 - {x}_{i}}\right) } \] of such an \( {\mathbb{N}}^{n} \) -graded translate of \( S \) is just \( {\mathbf{x}}^{\mathbf{a}} \cdot H\left( {S;\mathbf{x}}\right) \) . In the rest of Part I, our primary examples of Hilbert series are \[ H\left( {S/I;\mathbf{x}}\right) = \text{ sum of all monomials not in }I \] for monomial ideals \( I \) . A running theme of Part I of this book is to analyze not so much the whole Hilbert series, but its numerator, as defined in Definition 1.12. (In fact, Parts II and III are frequently concerned with similar analyses of such numerators, for ideals in other gradings.) Definition 1.12 If the Hilbert series of an \( {\mathbb{N}}^{n} \) -graded \( S \) -module \( M \) is expressed as a rational function \( H\left( {M;\mathbf{x}}\right) = \mathcal{K}\left( {M;\mathbf{x}}\right) /\left( {1 - {x}_{1}}\right) \cdots \left( {1 - {x}_{n}}\right) \) , then its numerator \( \mathcal{K}\left( {M;\mathbf{x}}\right) \) is the \( K \) -polynomial of \( M \) . We will eventually see in Corollary 4.20 (but see also Theorem 8.20) that the Hilbert series of every monomial quotient of \( S \) can in fact be expressed as a rational function as in Definition 1.12, and therefore every such quotient has a \( K \) -polynomial. That these \( K \) -polynomials are polynomials (as opposed to Laurent polynomials, say) is also proved in Corollary 4.20. Next we want to show that Stanley-Reisner rings \( S/{I}_{\Delta } \) have \( K \) -polynomials by explicitly writing them down in terms of \( \Delta \) . 
Theorem 1.13 The Stanley-Reisner ring \( S/{I}_{\Delta } \) has the \( K \) -polynomial \[ \mathcal{K}\left( {S/{I}_{\Delta };\mathbf{x}}\right) = \mathop{\sum }\limits_{{\sigma \in \Delta }}\left( {\mathop{\prod }\limits_{{i \in \sigma }}{x}_{i} \cdot \mathop{\prod }\limits_{{j \notin \sigma }}\left( {1 - {x}_{j}}\right) }\right) . \] Proof. The definition of \( {I}_{\Delta } \) says which squarefree monomials are not in \( {I}_{\Delta } \) . However, because the generators of \( {I}_{\Delta } \) are themselves squarefree, a monomial \( {\mathbf{x}}^{\mathbf{a}} \) lies outside \( {I}_{\Delta } \) precisely w
113_Topological Groups
Definition 18.4
Definition 18.4. Let \( {\mathcal{L}}^{\prime } \) be a rich expansion of \( \mathcal{L} \) by \( C \) . Let \( S \) be a family of sets of sentences of \( {\mathcal{L}}^{\prime } \) . Then \( S \) is a consistency family (for \( \mathcal{L},{\mathcal{L}}^{\prime } \) ) iff for each \( \Gamma \in S \) all of the following hold, for all sentences \( \varphi ,\psi \) of \( {\mathcal{L}}^{\prime } \) : (C0) If \( \Delta \subseteq \Gamma \), then \( \Delta \in S \) . (C1) \( \varphi \notin \Gamma \) or \( \neg \varphi \notin \Gamma \) . (C2) If \( \neg \varphi \in \Gamma \), then \( \Gamma \cup \{ \varphi \rightarrow \} \in S \) . (C3) If \( \varphi \land \psi \in \Gamma \), then \( \Gamma \cup \{ \varphi \} \in S \) and \( \Gamma \cup \{ \psi \} \in S \) . (C4) If \( \varphi \vee \psi \in \Gamma \), then \( \Gamma \cup \{ \varphi \} \in S \) or \( \Gamma \cup \{ \psi \} \in S \) . (C5) If \( \forall {\alpha \varphi } \in \Gamma \), then for all \( \mathbf{c} \in C,\Gamma \cup \left\{ {{\operatorname{Subf}}_{\mathbf{c}}^{\alpha }\varphi }\right\} \in S \) . (C6) If \( \exists {\alpha \varphi } \in \Gamma \), then for some \( \mathbf{c} \in C,\Gamma \cup \left\{ {{\operatorname{Subf}}_{\mathbf{c}}^{\alpha }\varphi }\right\} \in S \) . (C7) If \( \mathbf{c},\mathbf{d} \in C \) and \( \left( {\mathbf{c} = \mathbf{d}}\right) \in \Gamma \), then \( \Gamma \cup \{ \mathbf{d} = \mathbf{c}\} \in S \) . (C8) If \( \mathbf{c} \in C \), \( \tau \) is a primitive term, \( \mathbf{c} = \tau \in \Gamma \), and \( {\operatorname{Subf}}_{\tau }^{\alpha }\varphi \in \Gamma \), then \( \Gamma \cup \left\{ {{\operatorname{Subf}}_{\mathbf{c}}^{\alpha }\varphi }\right\} \in S \) . (C9) For any primitive term \( \tau \) there is a \( \mathbf{c} \in C \) such that \( \Gamma \cup \{ \mathbf{c} = \tau \} \in S \) .
(C10) If \( \alpha \) is a limit ordinal \( < \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| \), \( \Gamma \in {}^{\alpha }S \), \( {\Gamma }_{\beta } \subseteq {\Gamma }_{\gamma } \) whenever \( \beta < \gamma < \alpha \), and \( \left| \left\{ {\mathbf{c} \in C : \mathbf{c}\text{ occurs in }\varphi \text{ for some }\varphi \in \mathop{\bigcup }\limits_{{\beta < \alpha }}{\Gamma }_{\beta }}\right\} \right| < \left| C\right| \), then \( \mathop{\bigcup }\limits_{{\beta < \alpha }}{\Gamma }_{\beta } \in S \) . (C11) \( \left| {\{ \mathbf{c} : \mathbf{c}\text{ occurs in some }\varphi \in \Gamma \} }\right| < \left| C\right| \) . We may think of the members of a consistency family as being consistent sets of sentences in \( {\mathcal{L}}^{\prime } \) . They are so related in the family that they can be extended using the properties (C1)-(C11) just as we extended our consistent set in Chapter 11 to a complete rich consistent set. As we shall see, analyzing that construction in this way enables us to get "inside" the construction and make modifications which will assure special desirable properties of the resulting model; see the later proofs of 22.1, 27.4, and 28.6, for example. Note that if \( \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| = {\aleph }_{0} \), then condition (C10) drops away. Under the assumption \( \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| = {\aleph }_{0} \), assumption (C11) can also be dropped and the main theorem 18.9 remains valid; and by this assumption (C0) can also be dropped (see 18.7). Now we shall give a couple of examples of consistency families. More will be found in the exercises and in later chapters. Proposition 18.5. Let \( {\mathcal{L}}^{\prime } \) be a rich expansion of \( \mathcal{L} \) by \( C \) . Let \( S \) be the set of all formally consistent sets \( \Gamma \) of sentences of \( {\mathcal{L}}^{\prime } \) such that \( \left| {\{ \mathbf{c} \in C : \mathbf{c}\text{ occurs in some }\varphi \in \Gamma \} }\right| < \left| C\right| \) .
Then \( S \) is a consistency family. Proof. We need to check conditions (C0)-(C11) for an arbitrary \( \Gamma \in S \) . (C0) and (C1) are clear since \( \Gamma \) is consistent. (C2) follows from 18.2. Since \( \varphi \land \psi \in \Gamma \) implies that \( \Gamma \vDash \varphi \) and \( \Gamma \vDash \psi \), (C3) is clear. For (C4), assume that \( \Gamma \cup \{ \varphi \} \notin S \) and \( \Gamma \cup \{ \psi \} \notin S \) . Then \( \Gamma \vDash \neg \varphi \) and \( \Gamma \vDash \neg \psi \), so \( \Gamma \vDash \neg \left( {\varphi \vee \psi }\right) \) and hence \( \varphi \vee \psi \notin \Gamma \) since \( \Gamma \) is consistent. (C5) is clear since \( \Gamma \vDash {\operatorname{Subf}}_{\mathbf{c}}^{\alpha }\varphi \) . For (C6), assume that \( \Gamma \cup \left\{ {{\operatorname{Subf}}_{\mathbf{c}}^{\alpha }\varphi }\right\} \notin S \) for all \( \mathbf{c} \in C \) . Choose \( \mathbf{c} \in C \) so that \( \mathbf{c} \) does not occur in \( \varphi \) or in any sentence of \( \Gamma \) . Then \( \Gamma \vDash \neg {\operatorname{Subf}}_{\mathbf{c}}^{\alpha }\varphi \) . In a proof of \( \neg {\operatorname{Subf}}_{\mathbf{c}}^{\alpha }\varphi \) from \( \Gamma \), replace \( \mathbf{c} \) by a new variable \( \beta \) . Thus \( \Gamma \vDash \neg {\operatorname{Subf}}_{\beta }^{\alpha }\varphi \), hence \( \Gamma \vDash \forall \beta \neg {\operatorname{Subf}}_{\beta }^{\alpha }\varphi \), hence \( \Gamma \vDash \forall \alpha \neg \varphi \) . Thus \( \exists {\alpha \varphi } \notin \Gamma \) . (C7) and (C8) are clear. For (C9), choose \( \mathbf{c} \in C \) not occurring in \( \tau \) or in any sentence of \( \Gamma \) . If \( \Gamma \cup \{ \mathbf{c} = \tau \} \notin S \), then \( \Gamma \vDash \neg \left( {\mathbf{c} = \tau }\right) \) . In a proof of \( \neg \left( {\mathbf{c} = \tau }\right) \) from \( \Gamma \), replace \( \mathbf{c} \) by a new variable \( \alpha \) .
Thus \( \Gamma \vDash \neg \left( {\alpha = \tau }\right) \), hence \( \Gamma \vDash \forall \alpha \neg \left( {\alpha = \tau }\right) \), hence \( \Gamma \vDash \neg \left( {\tau = \tau }\right) \), hence \( \Gamma \) is inconsistent, contradiction. (C10) is clear, as is (C11). Proposition 18.6. Let \( {\mathcal{L}}^{\prime } \) be a rich expansion of \( \mathcal{L} \) by \( C \), where \( \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| = {\aleph }_{0} \) . Let \( S \) be the set of all sets \( \Gamma \) of sentences such that \( \Gamma \) has a model \( {\left( \mathfrak{A},{a}_{\mathbf{c}}\right) }_{\mathbf{c} \in C} \) with \( A = \left\{ {{a}_{\mathbf{c}} : \mathbf{c} \in C}\right\} \) and (C11) holds. Then \( S \) is a consistency family. This proposition is clear. Another lemma which will be used in applying our main theorem is as follows. Lemma 18.7. If \( S \) satisfies conditions (C1)-(C9), then \( {S}^{\prime } = \{ \Gamma : \Gamma \subseteq \Delta \) for some \( \Delta \in S\} \) satisfies (C0)-(C9). Proof. Assume that \( \Gamma \in {S}^{\prime } \), say \( \Gamma \subseteq \Delta \in S \) . Both (C0) and (C1) are clear for \( \Gamma \) . To check (C2), suppose \( \neg \varphi \in \Gamma \) . Thus \( \neg \varphi \in \Delta \), so by (C2) for \( S \) , \( \Delta \cup \{ \varphi \rightarrow \} \in S \) . Now \( \Gamma \cup \{ \varphi \rightarrow \} \subseteq \Delta \cup \{ \varphi \rightarrow \} \), so \( \Gamma \cup \{ \varphi \rightarrow \} \in {S}^{\prime } \) . (C3)-(C9) are established similarly. The concept that really enables us to "get inside" the construction is as follows: Definition 18.8. Let \( {\mathcal{L}}^{\prime } \) be a rich expansion of \( \mathcal{L} \) by \( C \), and let \( S \) be a consistency family.
A function \( f : S \rightarrow S \) is admissible over \( S \) provided that for any \( \Gamma \in S \) the following two conditions hold: (i) \( \Gamma \subseteq {f\Gamma } \) ; (ii) \( \left| {\{ \mathbf{c} \in C : \mathbf{c}\text{ occurs in some }\varphi \in {f\Gamma }\} }\right| = \left| {\{ \mathbf{c} \in C : \mathbf{c}\text{ occurs in some }\varphi \in \Gamma \} }\right| + m \) for some \( m \in \omega \) . Theorem 18.9. (Model existence theorem). Let \( {\mathcal{L}}^{\prime } \) be a rich expansion of \( \mathcal{L} \) by \( C \), let \( S \) be a consistency family, and let \( \left\langle {{f}_{\alpha } : \alpha < \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| }\right\rangle \) be a family of admissible functions over \( S \) . Then for any \( \Gamma \in S \) there is a model \( {\left( \mathfrak{A},{a}_{\mathbf{c}}\right) }_{\mathbf{c} \in C} \) of \( \Gamma \) satisfying the following two conditions: (i) \( A = \left\{ {{a}_{\mathbf{c}} : \mathbf{c} \in C}\right\} \) and so \( \left| A\right| \leq \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| \) ; (ii) for each \( \alpha < \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| \) there is a \( \Delta \in S \) such that \[ \Gamma \subseteq {f}_{\alpha }\Delta \subseteq \left\{ {\varphi : {\left( \mathfrak{A},{a}_{\mathbf{c}}\right) }_{\mathbf{c} \in C} \vDash \varphi }\right\} \] Proof. Set \( m = \left| {\mathrm{{Fmla}}}_{\mathcal{L}}\right| \) for brevity. Let \( \varphi \) map \( m \) onto \( {\operatorname{Sent}}_{{\mathcal{L}}^{\prime }} \), let \( \tau \) map \( m \) one-one onto the set of primitive terms of \( {\mathcal{L}}^{\prime } \), and let a well-ordering of \( C \) be fixed. Let \( \Gamma \in S \) . We now define a sequence \( \left\langle {{\Theta }_{\alpha } : \alpha \leq m}\right\rangle \) . Let \( {\Theta }_{0} = \Gamma \) .
Suppose \( {\Theta }_{\alpha } \in S \) has been defined, where \( \alpha < m \), and the following condition holds: (0) \( \left| \left\{ {\mathbf{c} : \mathbf{c}\text{ occurs in some }\varphi \in {\Theta }_{\alpha }}\right\} \right| \leq \left| {\{ \mathbf{c} : \mathbf{c}\text{ occurs in some }\varphi \in \Gamma \} }\right| + \left| \alpha \right| + {\aleph }_{0} \) . We now define \( {\Theta }_{\alpha + 1} \) . Let \( {\Theta }_{\alpha }^{\prime } = {\Theta }_{\alpha } \) if \( {\Theta }_{\alpha } \cup \left\{ {\varphi }_{\alpha }\right\} \notin S \), and \( {\Theta }_{\alpha }^{\prime } = {\Theta }_{\alpha } \cup \left\{ {\varphi }_{\alpha }\right\} \) if \( {\Theta }_{\alpha } \cup \left\{ {\varphi }_{\alpha }\right\} \in S \) .
113_Topological Groups
Definition 3.17
Definition 3.17. Let \( \mathbb{E} \) be the set of even numbers. Let \( T \) be the class of all Turing machines. If \( M \) is a Turing machine, with notation as in 1.1, we let the Gödel number of \( M \), \( {gM} \), be the number \[ \mathop{\prod }\limits_{{i < {2m}}}{\mathrm{p}}_{i}^{{t}_{i}} \] where, for each \( i < {2m} \), \( {t}_{i} = {2}^{c\left\lbrack {i + 2/2}\right\rbrack } \cdot {3}^{\left( {\chi \mathbb{E}}\right) \left( {i + 1}\right) } \cdot {5}^{v\left( {i + 1}\right) } \cdot {7}^{d\left( {i + 1}\right) } \) . Lemma 3.18. \( {g}^{ * }T \) is elementary. Proof. For any \( x \in \omega \), \( x \in {g}^{ * }T \) iff \( \mathrm{l}x \) is odd, \( x > 1 \), for every \( i \leq \mathrm{l}x \) we have \( {\left( {\left( x\right) }_{i}\right) }_{2} < 5 \), for every \( i \leq \mathrm{l}x \) there is a \( j \leq \mathrm{l}x \) such that \( {\left( {\left( x\right) }_{i}\right) }_{3} = {\left( {\left( x\right) }_{j}\right) }_{0} \), for every \( i \leq \mathrm{l}x \), if \( i \) is even then \( {\left( {\left( x\right) }_{i}\right) }_{0} = {\left( {\left( x\right) }_{i + 1}\right) }_{0} \), and for all \( i, j \leq \mathrm{l}x \), if \( i + 2 \leq j \), then \( {\left( {\left( x\right) }_{i}\right) }_{0} \neq {\left( {\left( x\right) }_{j}\right) }_{0} \), and if \( i \) is even then \( {\left( {\left( x\right) }_{i}\right) }_{1} = 0 \), while if \( i \) is odd, \( {\left( {\left( x\right) }_{i}\right) }_{1} = 1 \) . Definition 3.19. If \( F \) is a tape description, then the Gödel number of \( F \), \( {gF} \), is the number \[ \mathop{\prod }\limits_{{i = 0}}^{\infty }{\mathrm{p}}_{i}^{{k}_{i}} \] where \[ {k}_{i} = \left\{ \begin{array}{l} F\left( {i/2}\right) \;\text{ if }i\text{ is even,} \\ F\left( {-\left( {i + 1}\right) /2}\right) \;\text{ if }i\text{ is odd. } \end{array}\right. \] Note that a natural number \( m \) is the Gödel number of some tape description iff \( \forall x < \mathrm{l}m\left( {{\left( m\right) }_{x} < 2}\right) \) and \( m \neq 0 \) . Definition 3.20.
A complete configuration is a quadruple \( \left( {M, F, d, e}\right) \) such that \( \left( {F, d, e}\right) \) is a configuration in the Turing machine \( M \) . \( \mathbb{C} \) is the set of all complete configurations. The Gödel number \( \mathcal{G}\left( {M, F, d, e}\right) \) of such a complete configuration is the number \[ {2}^{gM} \cdot {3}^{gF} \cdot {5}^{d} \cdot {7}^{n} \] where \[ n = \left\{ \begin{array}{l} {2e}\;\text{ if }e \geq 0 \\ - {2e} - 1\;\text{ if }e < 0 \end{array}\right. \] Lemma 3.21. \( {g}^{ * }\mathbb{C} \) is elementary. Proof. For any \( x \in \omega \), \( x \in {g}^{ * }\mathbb{C} \) iff \( \forall i \leq \mathrm{l}{\left( x\right) }_{1}\left( {{\left( {\left( x\right) }_{1}\right) }_{i} < 2}\right) \), \( {\left( x\right) }_{1} \neq 0 \), \( {\left( x\right) }_{0} \in {g}^{ * }T \), and there is an \( i \leq \mathrm{l}{\left( x\right) }_{0} \) such that \( {\left( x\right) }_{2} = {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{0} \), and \( \mathrm{l}x \leq 3 \) . Definition 3.22. (i) For any \( e \in \mathbb{Z} \), let \[ {ge} = \left\{ \begin{array}{l} {2e}\;\text{ if }e \geq 0 \\ - {2e} - 1\;\text{ if }e < 0 \end{array}\right. \] For any \( x \in \omega \), let \[ {f}_{0}x = \left\{ \begin{array}{ll} x + 2 & \text{ if }x\text{ is even,} \\ 0 & \text{ if }x = 1, \\ x - 2 & \text{ if }x\text{ is odd and }x > 1, \end{array}\right. \] \[ {f}_{1}x = \left\{ \begin{array}{ll} x - 2 & \text{ if }x\text{ is even and }x > 0, \\ 1 & \text{ if }x = 0, \\ x + 2 & \text{ if }x\text{ is odd. } \end{array}\right. \] Lemma 3.23. \( {f}_{0} \) and \( {f}_{1} \) are elementary. For any \( e \in \mathbb{Z} \) we have \( {f}_{0}{ge} = g\left( {e + 1}\right) \) and \( {f}_{1}{ge} = g\left( {e - 1}\right) \) .
Proof \[ {f}_{0}{ge} = \left\{ \begin{array}{ll} {f}_{0}{2e} & e \geq 0 \\ {f}_{0}\left( {-{2e} - 1}\right) & e < 0 \end{array}\right\} = \left\{ \begin{array}{ll} 2\left( {e + 1}\right) & e \geq 0 \\ 0 & e = - 1 \\ - {2e} - 3 & e < - 1 \end{array}\right\} = g\left( {e + 1}\right) ; \] \[ {f}_{1}{ge} = \left\{ \begin{array}{ll} {f}_{1}{2e} & e \geq 0 \\ {f}_{1}\left( {-{2e} - 1}\right) & e < 0 \end{array}\right\} = \left\{ \begin{array}{ll} 2\left( {e - 1}\right) & e > 0 \\ 1 & e = 0 \\ - {2e} + 1 & e < 0 \end{array}\right\} = g\left( {e - 1}\right) . \] Lemma 3.24. Let \( {R}_{0} = \{ \left( {x, n,\varepsilon, y}\right) : x = {gF} \) for some tape description \( F \) , \( n = {ge} \) for some \( e \in \mathbb{Z},\varepsilon = 0 \) or \( \varepsilon = 1 \), and \( \left. {y = g\left( {F}_{\varepsilon }^{e}\right) }\right\} \) . Then \( {R}_{0} \) is elementary. Proof. \( \left( {x, n,\varepsilon, y}\right) \in {R}_{0} \) iff \( \forall i \leq \operatorname{lx}\left( {{\left( x\right) }_{i} < 2}\right), x \neq 0,\varepsilon < 2 \), and \( y = \) \( \left\lbrack {x/{\mathrm{p}}_{n}^{\left( x\right) n}}\right\rbrack \cdot {\mathrm{p}}_{n}^{\varepsilon }. \) Lemma 3.25. Let \( {R}_{1} = \{ \left( {x, y}\right) : x \) is the Gödel number of a complete configuration \( \left( {M, F, d, e}\right), y \) is the Gödel number of a complete configuration \( \left( {M,{F}^{\prime },{d}^{\prime },{e}^{\prime }}\right) \) (same \( M \) ), and \( \left( {\left( {F, d, e}\right) ,\left( {{F}^{\prime },{d}^{\prime },{e}^{\prime }}\right) }\right) \) is a computation step \( \} \) . Then \( {R}_{1} \) is elementary. Proof. 
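Definition 3.22 and Lemma 3.23 are easy to machine-check. A minimal sketch in plain Python (the function names `g_int`, `f0`, `f1` are ours):

```python
def g_int(e):
    """Gödel code of an integer e (Definition 3.22(i)): 2e if e >= 0, else -2e - 1."""
    return 2 * e if e >= 0 else -2 * e - 1

def f0(x):
    """Successor on codes: f0(g e) = g(e + 1)  (Lemma 3.23)."""
    if x % 2 == 0:
        return x + 2
    if x == 1:
        return 0
    return x - 2          # x odd, x > 1

def f1(x):
    """Predecessor on codes: f1(g e) = g(e - 1)  (Lemma 3.23)."""
    if x % 2 == 0 and x > 0:
        return x - 2
    if x == 0:
        return 1
    return x + 2          # x odd

# Check Lemma 3.23 on a range of integers.
for e in range(-100, 100):
    assert f0(g_int(e)) == g_int(e + 1)
    assert f1(g_int(e)) == g_int(e - 1)
```

The three branches of each definition correspond exactly to the three cases in the displayed computation of the proof (nonnegative \( e \), the boundary value, and negative \( e \)).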
For any \( x, y \), \( \left( {x, y}\right) \in {R}_{1} \) iff \( x \in {g}^{ * }\mathbb{C} \), \( y \in {g}^{ * }\mathbb{C} \), \( {\left( x\right) }_{0} = {\left( y\right) }_{0} \), and there is an \( i \leq \mathrm{l}{\left( x\right) }_{0} \) such that \( {\left( x\right) }_{2} = {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{0} \), \( {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{1} = {\left( {\left( x\right) }_{1}\right) }_{{\left( x\right) }_{3}} \), and one of the following conditions holds: (a) \( {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{2} = 0 \), \( \left( {{\left( x\right) }_{1},{\left( x\right) }_{3},0,{\left( y\right) }_{1}}\right) \in {R}_{0} \), \( {\left( y\right) }_{2} = {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{3} \), and \( {\left( y\right) }_{3} = {\left( x\right) }_{3} \) ; (b) \( {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{2} = 1 \), \( \left( {{\left( x\right) }_{1},{\left( x\right) }_{3},1,{\left( y\right) }_{1}}\right) \in {R}_{0} \), \( {\left( y\right) }_{2} = {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{3} \), and \( {\left( y\right) }_{3} = {\left( x\right) }_{3} \) ; (c) \( {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{2} = 2 \), \( {\left( y\right) }_{1} = {\left( x\right) }_{1} \), \( {\left( y\right) }_{2} = {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{3} \), and \( {\left( y\right) }_{3} = {f}_{1}\left( {\left( x\right) }_{3}\right) \) ; (d) \( {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{2} = 3 \), \( {\left( y\right) }_{1} = {\left( x\right) }_{1} \), \( {\left( y\right) }_{2} = {\left( {\left( {\left( x\right) }_{0}\right) }_{i}\right) }_{3} \), and \( {\left( y\right) }_{3} = {f}_{0}\left( {\left( x\right) }_{3}\right) \) . Definition 3.26. A complete computation is a sequence \( \mathfrak{M} = \left\langle \left( {M,{F}_{0},{d}_{0},{e}_{0}}\right) \right. \) , \( \left.
{\ldots ,\left( {M,{F}_{m},{d}_{m},{e}_{m}}\right) }\right\rangle \) such that \( \left\langle {\left( {{F}_{0},{d}_{0},{e}_{0}}\right) ,\ldots ,\left( {{F}_{m},{d}_{m},{e}_{m}}\right) }\right\rangle \) is a computation in \( M \) . The Gödel number of such a complete computation is the number \[ \mathop{\prod }\limits_{{i < m}}{\mathrm{p}}_{i}^{\mathcal{G}\left( {M,{F}_{i},{d}_{i},{e}_{i}}\right) } \] Let \( {R}_{2} \) be the set of all Gödel numbers of complete computations. Lemma 3.27. \( {R}_{2} \) is elementary. Proof. \( {\left( {\left( x\right) }_{0}\right) }_{2} \), and for every \( i < \mathrm{l}x,\left( {{\left( x\right) }_{i},{\left( x\right) }_{i + 1}}\right) \in {R}_{1} \), and there is an \( i \leq \mathrm{l}{\left( {\left( x\right) }_{0}\right) }_{0} \) such that \( {\left( {\left( {\left( {\left( x\right) }_{0}\right) }_{0}\right) }_{i}\right) }_{0} = {\left( {\left( x\right) }_{\mathrm{l}x}\right) }_{2} \), \( {\left( {\left( {\left( {\left( x\right) }_{0}\right) }_{0}\right) }_{i}\right) }_{1} = {\left( {\left( {\left( x\right) }_{\mathrm{l}x}\right) }_{1}\right) }_{{\left( {\left( x\right) }_{\mathrm{l}x}\right) }_{3}} \), and \[ {\left( {\left( {\left( {\left( x\right) }_{0}\right) }_{0}\right) }_{i}\right) }_{2} = 4. \] Definition 3.28. If \( h \) is a finite sequence of 0’s and 1’s, we let \[ {gh} = \mathop{\prod }\limits_{{i < \operatorname{Dmn}h}}{\mathrm{p}}_{i}^{{h}_{i} + 1}. \] For any \( x \in \omega \), let \( {f}_{2}x = \mathop{\prod }\limits_{{i \leq x}}{\mathrm{p}}_{i}^{2} \) . Lemma 3.29. \( {f}_{2} \) is elementary, and \( {f}_{2}x = g\left( {1}^{\left( {x + 1}\right) }\right) \) for any \( x \) . Definition 3.30. For any \( x, y \in \omega \), \( \operatorname{Cat}\left( {x, y}\right) = x \cdot \mathop{\prod }\limits_{{i \leq \mathrm{l}y}}{\mathrm{p}}_{\mathrm{l}x + i + 1}^{{\left( y\right) }_{i}} \) . Lemma 3.31.
If \( h \) and \( k \) are finite sequences of 0 ’s and 1’s, then \( g\left( {hk}\right) = \) Cat \( \left( {{gh},{gk}}\right) \) . (Recall the definition of \( {hk} \) from 1.11.) Definition 3.32. \( {f}_{3}^{1}x = \operatorname{Cat}\left( {2,{f}_{2}x}\right) \) . For \( m > 1 \) , \[ {f}_{3}^{m}\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = \operatorname{Cat}\left( {{f}_{3}^{m - 1}\left( {{x}_{0},\ldots ,{x}_{m - 2}}\right) ,\operatorname{Cat}\left( {2,{f}_{2}{x}_{m - 1}}\right) }\right) . \] Lemma 3.33. \( {f}_{3}^{m} \) is elementary for each \( m \), and \( {f}_{3}^{m}\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = \) \( g\left( {{\left\lbrack \begin{matrix} 0 & 1 \end{matrix}\right\rbrack }^{\left( x0 + 1\right) }{\left\lbrack \begin{matrix}
1358_[陈松蹊&张慧铭] A Course in Fixed and High-dimensional Multivariate Analysis (2020)
Definition 1.1.4
Definition 1.1.4 It is said that \( {X}_{1},{X}_{2},\ldots ,{X}_{p} \) are independent if \[ F\left( {{u}_{1},\ldots ,{u}_{p}}\right) = {F}_{1}\left( {u}_{1}\right) {F}_{2}\left( {u}_{2}\right) \cdots {F}_{p}\left( {u}_{p}\right) \] for any \( {\left( {u}_{1},\ldots ,{u}_{p}\right) }^{T} \in {\mathbb{R}}^{p} \), or \[ P\left( {{X}_{1} \leq {u}_{1},\ldots ,{X}_{p} \leq {u}_{p}}\right) = P\left( {{X}_{1} \leq {u}_{1}}\right) P\left( {{X}_{2} \leq {u}_{2}}\right) \cdots P\left( {{X}_{p} \leq {u}_{p}}\right) . \] If \( F,{F}_{1},\ldots ,{F}_{p} \) have absolutely continuous density functions, say \( f,{f}_{1},\ldots ,{f}_{p} \), then the above definition of independence can be replaced by \( f\left( {{u}_{1},\ldots ,{u}_{p}}\right) = {f}_{1}\left( {u}_{1}\right) \cdots {f}_{p}\left( {u}_{p}\right) \) . If \( {X}_{1},{X}_{2},\ldots ,{X}_{p} \) are independent, \[ E\left( {{X}_{1}^{{h}_{1}}\cdots {X}_{p}^{{h}_{p}}}\right) = E\left( {X}_{1}^{{h}_{1}}\right) \cdots E\left( {X}_{p}^{{h}_{p}}\right) \] and \[ \operatorname{Cov}\left( {{X}_{i},{X}_{j}}\right) = \left\{ {\begin{array}{ll} \operatorname{Var}\left( {X}_{i}\right) & \text{ if }i = j; \\ 0 & \text{ if }i \neq j. \end{array}}\right. \] Therefore, \[ \operatorname{Var}\left( \mathbf{X}\right) = \left( \begin{matrix} \operatorname{Var}\left( {X}_{1}\right) & \cdots & 0 \\ \vdots & \ddots & \\ 0 & & \operatorname{Var}\left( {X}_{p}\right) \end{matrix}\right) = \operatorname{diag}\left( {\operatorname{Var}\left( {X}_{1}\right) ,\ldots ,\operatorname{Var}\left( {X}_{p}\right) }\right) . \] ## 1.1.3 Convergence of Random Vectors ## 1.1.4 Slutsky's theorem and Delta method ## 1.1.5 Multivariate Characteristic Functions and independent CLT Now we begin with the moment-generating function of a \( p \) -dimensional random vector \( \mathbf{X} \) : \[ {M}_{\mathbf{X}}\left( \mathbf{t}\right) = \mathrm{E}{e}^{-{\mathbf{t}}^{T}\mathbf{X}}\;\text{ for any }\mathbf{t} \in {\mathbb{R}}^{p}.
\] If \( \mathbf{X} \) admits the density function \( f\left( \mathbf{x}\right) \), then \[ {M}_{\mathbf{X}}\left( \mathbf{t}\right) = {\int }_{-\infty }^{\infty }\cdots {\int }_{-\infty }^{\infty }{e}^{-{\mathbf{t}}^{T}\mathbf{x}}f\left( \mathbf{x}\right) d\mathbf{x} = {\int }_{-\infty }^{\infty }\cdots {\int }_{-\infty }^{\infty }{e}^{-\left( {{t}_{1}{x}_{1} + \ldots + {t}_{p}{x}_{p}}\right) }f\left( {{x}_{1},\ldots ,{x}_{p}}\right) d{x}_{1}\cdots d{x}_{p}, \] provided that the integral converges absolutely. This is also called the Laplace transform of the function \( f\left( \mathbf{x}\right) \) . However, for \( p = 1 \), if \( f\left( x\right) = 0 \) when \( x > 0 \), i.e., \( X \) does not take positive values, the associated Laplace transform may fail to converge. For example, let \( P\left( {X = - n}\right) = {2}^{-n} \) for \( n = 1,2,\cdots \) ; then \[ {M}_{X}\left( t\right) = \mathop{\lim }\limits_{{k \rightarrow \infty }}\mathop{\sum }\limits_{{a = 1}}^{k}{2}^{-a}{e}^{at} = \mathop{\lim }\limits_{{k \rightarrow \infty }}\frac{{e}^{t}}{2} \cdot \frac{{\left( {e}^{t}/2\right) }^{k} - 1}{{e}^{t}/2 - 1} = + \infty \] for \( t > \log 2 \) . To avoid this drawback, we replace the vector \( -\mathbf{t} \) in the moment-generating function by the purely imaginary vector \( i\mathbf{t} \), with \( i = \sqrt{-1} \) . Thus, the characteristic function (cf) of a \( p \) -dimensional random vector \( \mathbf{X} \) is defined by \[ {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) = \mathrm{E}{e}^{i{\mathbf{t}}^{T}\mathbf{X}} = {\int }_{{\mathbb{R}}^{p}}{e}^{i{\mathbf{t}}^{T}\mathbf{x}}d{F}_{\mathbf{X}}\left( \mathbf{x}\right) \;\text{ for any }\mathbf{t} \in {\mathbb{R}}^{p}, \] (1.1.1) where \( {F}_{\mathbf{X}} \) is the cumulative distribution function.
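The contrast between the possibly divergent Laplace transform and the always-convergent characteristic function can be seen numerically for a distribution like the one above, taking \( P\left( {X = - n}\right) = {2}^{-n} \) for \( n \geq 1 \) and the sign convention \( {M}_{X}\left( t\right) = \mathrm{E}{e}^{-{tX}} \) used in the text. A sketch in plain Python (truncation levels and sample points are our own choices):

```python
import cmath

# X with P(X = -n) = 2^{-n}, n = 1, 2, ...
def mgf_partial(t, k):
    """Partial sums of M_X(t) = E e^{-tX} = sum_n 2^{-n} e^{tn}."""
    return sum(2.0**-n * cmath.exp(t * n).real for n in range(1, k + 1))

def cf(t, k=200):
    """Truncated characteristic function phi_X(t) = E e^{itX} = sum_n 2^{-n} e^{-itn}."""
    return sum(2.0**-n * cmath.exp(-1j * t * n) for n in range(1, k + 1))

# For t > log 2 the Laplace-transform partial sums blow up...
assert mgf_partial(1.0, 100) > 1000 * mgf_partial(1.0, 50)

# ...while the characteristic function satisfies phi(0) = 1 and |phi(t)| <= 1.
assert abs(cf(0.0) - 1) < 1e-12
assert all(abs(cf(t)) <= 1 + 1e-12 for t in (0.3, 1.0, 2.5))

# The truncated sum matches the closed geometric form r/(1 - r), r = e^{-it}/2.
t = 0.7
r = cmath.exp(-1j * t) / 2
assert abs(cf(t) - r / (1 - r)) < 1e-12
```

The geometric closed form is the same one used in the displayed computation of \( {M}_{X}\left( t\right) \), with \( {e}^{t}/2 \) replaced by \( {e}^{-{it}}/2 \), whose modulus is \( 1/2 < 1 \) for every real \( t \).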
The characteristic function \( {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) \) always exists and is uniformly continuous on \( {\mathbb{R}}^{p} \), because (1.1.1) is an integral of a bounded continuous function over a space with finite measure. If \( \mathbf{X} \) has a \( p \) -dimensional density function \( f\left( \mathbf{x}\right) \), then \[ {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) = {\int }_{-\infty }^{\infty }\cdots {\int }_{-\infty }^{\infty }{e}^{i\left( {{t}_{1}{x}_{1} + \ldots + {t}_{p}{x}_{p}}\right) }f\left( {{x}_{1},\ldots ,{x}_{p}}\right) d{x}_{1}\cdots d{x}_{p}. \] In mathematics, \( {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) \) is also known as the Fourier transform of \( f\left( \mathbf{x}\right) \) . The inverse Fourier transform of \( \phi \left( \mathbf{t}\right) \) is \( f\left( \mathbf{x}\right) \), namely \[ f\left( \mathbf{x}\right) = \frac{1}{{\left( 2\pi \right) }^{p}}{\int }_{-\infty }^{\infty }\cdots {\int }_{-\infty }^{\infty }{e}^{-i{\mathbf{t}}^{T}\mathbf{x}}\phi \left( \mathbf{t}\right) d{t}_{1}\cdots d{t}_{p} \] (see Cramér (1946), Sections 10.6-10.7, for details). If \( \mathbf{X} \) does not have a density, the characteristic function still uniquely determines \( P\left( {\mathbf{X} \in A}\right) \) for any Borel set \( A \subseteq {\mathbb{R}}^{p} \) ; see (10.6.2) of Cramér (1946). Hence the characteristic function uniquely determines the distribution of a random vector \( \mathbf{X} \) . The characteristic function has the following nice properties. ## Properties of characteristic function 1. \( {\phi }_{\mathbf{X}}\left( \mathbf{0}\right) = 1 \) . 2. \( \left| {{\phi }_{\mathbf{X}}\left( \mathbf{t}\right) }\right| \leq 1 \) . 3. \( {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) \) is uniformly continuous in \( {\mathbb{R}}^{p} \) . 4. If \( \mathbf{X} \) and \( \mathbf{Y} \) are independent real vectors in \( {\mathbb{R}}^{p} \), \( {\phi }_{\mathbf{X} + \mathbf{Y}}\left( \mathbf{t}\right) = {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) \cdot {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) \) . 5.
If a random variable \( X \) has moments up to \( h \) -th order, then \( \mathrm{E}{X}^{h} = {\left( -i\right) }^{h}{\phi }_{X}^{\left( h\right) }\left( 0\right) \) . 6. (Mixed moments) Like the univariate case, let \( \mathbf{h} \mathrel{\text{:=}} {\left( {h}_{1},\ldots ,{h}_{k}\right) }^{T}, k \leq p \) . The \( \mathbf{h} \) -th order moments of \( \mathbf{X} \mathrel{\text{:=}} {\left( {X}_{1},{X}_{2},\ldots ,{X}_{p}\right) }^{T} \) (moments involving multiple variables) are given by \[ \mathrm{E}\left( {{X}_{{i}_{1}}^{{h}_{1}}{X}_{{i}_{2}}^{{h}_{2}}\cdots {X}_{{i}_{k}}^{{h}_{k}}}\right) = {\left. \frac{1}{{i}^{\left| \mathbf{h}\right| }}\frac{{\partial }^{\left| \mathbf{h}\right| }{\phi }_{\mathbf{X}}\left( \mathbf{t}\right) }{\partial {t}_{{i}_{1}}^{{h}_{1}}\cdots \partial {t}_{{i}_{k}}^{{h}_{k}}}\right| }_{\mathbf{t} = \mathbf{0}} \] (1.1.2) where \( \left| \mathbf{h}\right| \mathrel{\text{:=}} {h}_{1} + {h}_{2} + \cdots + {h}_{k} \), provided \( \mathrm{E}\left( {{X}_{{i}_{1}}^{{h}_{1}}{X}_{{i}_{2}}^{{h}_{2}}\cdots {X}_{{i}_{k}}^{{h}_{k}}}\right) \) exists. Equality in Distribution \( \left( \overset{d}{ = }\right) \) Two random vectors are said to be equal (or equivalent) in distribution if they have the same distribution functions. Definition 1.1.5 If random vectors \( \mathbf{X} \) and \( \mathbf{Y} \) have the same distribution, \[ P\left( {\mathbf{X} \leq \mathbf{x}}\right) = P\left( {\mathbf{Y} \leq \mathbf{x}}\right) ,\text{ for all }\mathbf{x}, \] we denote \( \mathbf{X}\overset{d}{ = }\mathbf{Y} \) . These two random vectors need not be defined on the same probability space; we measure their equality in distribution by comparing the distribution functions. The notation \( \overset{d}{ = } \) simply signifies that the two distribution functions are the same.
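Property 5 can be verified numerically for a simple two-point distribution (a sketch; the distribution and the step size \( h \) are our own illustrative choices):

```python
import cmath

# X with P(X = 0) = P(X = 1) = 1/2, so phi(t) = (1 + e^{it})/2,
# and E X = E X^2 = 1/2.
def phi(t):
    return 0.5 * (1 + cmath.exp(1j * t))

h = 1e-4

# First derivative at 0 by central difference: E X = -i * phi'(0).
d1 = (phi(h) - phi(-h)) / (2 * h)
EX = (-1j * d1).real
assert abs(EX - 0.5) < 1e-6

# Second derivative at 0: E X^2 = (-i)^2 phi''(0) = -phi''(0).
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2
EX2 = (-d2).real
assert abs(EX2 - 0.5) < 1e-5
```

Since \( {X}^{2} = X \) for this distribution, both moments equal \( 1/2 \), matching \( {\left( -i\right) }^{h}{\phi }_{X}^{\left( h\right) }\left( 0\right) \) for \( h = 1,2 \) up to finite-difference error.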
Clearly, \( \mathbf{X}\overset{d}{ = }\mathbf{Y} \) if and only if (iff) \( {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) = {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) \) for all \( \mathbf{t} \in {\mathbb{R}}^{p} \) . 1. If \( \mathbf{X}\overset{d}{ = }\mathbf{Y} \) and \( {f}_{j}\left( \cdot \right) \), \( j = 1,\ldots, m \), are \( m \) Borel functions, then \[ \left( \begin{matrix} {f}_{1}\left( \mathbf{X}\right) \\ \vdots \\ {f}_{m}\left( \mathbf{X}\right) \end{matrix}\right) \overset{d}{ = }\left( \begin{matrix} {f}_{1}\left( \mathbf{Y}\right) \\ \vdots \\ {f}_{m}\left( \mathbf{Y}\right) \end{matrix}\right) , \] since \[ {\phi }_{\left( {f}_{1}\left( \mathbf{X}\right) \ldots {f}_{m}\left( \mathbf{X}\right) \right) }\left( t\right) = E\left( {e}^{i\sum {t}_{i}{f}_{i}\left( \mathbf{X}\right) }\right) = \int {e}^{i\sum {t}_{i}{f}_{i}\left( x\right) }d{F}_{X}\left( x\right) = \int {e}^{i\sum {t}_{i}{f}_{i}\left( y\right) }d{F}_{Y}\left( y\right) = {\phi }_{\left( {f}_{1}\left( \mathbf{Y}\right) \ldots {f}_{m}\left( \mathbf{Y}\right) \right) }\left( t\right) . \] 2. \( \mathbf{X},\mathbf{Y},\mathbf{Z} \) and \( \mathbf{W} \) are random vectors; \( \mathbf{X} \) and \( \mathbf{Z} \) are independent; \( \mathbf{Y} \) and \( \mathbf{W} \) are independent. a) If \( \mathbf{X}\overset{d}{ = }\mathbf{Y} \) and \( \mathbf{Z}\overset{d}{ = }\mathbf{W} \), then \( \mathbf{X} + \mathbf{Z}\overset{d}{ = }\mathbf{Y} + \mathbf{W} \) . b) If \( \mathbf{Z}\overset{d}{ = }\mathbf{W} \) and \( {\phi }_{\mathbf{Z}}\left( t\right) \neq 0 \) almost surely (a.s.) for all \( t \) (the set of \( {\phi }_{\mathbf{Z}}\left( t\right) = 0 \) has zero measure), then \( \mathbf{X} + \mathbf{Z}\overset{d}{ = }\mathbf{Y} + \mathbf{W} \) implies \( \mathbf{X}\overset{d}{ = }\mathbf{Y} \) .
Proof of (b): \[ {\phi }_{\mathbf{X}}\left( t\right) {\phi }_{\mathbf{Z}}\left( t\right) = {\phi }_{\mathbf{X} + \mathbf{Z}}\left( t\right) = {\phi }_{\mathbf{Y} + \mathbf{W}}\left( t\right) = {\phi }_{\mathbf{Y}}\left( t\right) {\phi }_{\mathbf{W}}\left( t\right) . \] As \( {\phi }_{\mathbf{Z}}\left( t\right) = {\phi }_{\mathbf{W}}\left( t\right) \neq 0 \) almost surely for all \( t \), then \[ {\phi }_{\mathbf{X}}\left( t\right) = {\phi }_{\mathbf{Y}}\left( t\right) \text{ almost surely for all }t \Rightarrow \mathbf{X}\overset{d}{ = }\mathbf{Y}. \] The Cramér-Wold device allows the issue of convergence of multivariate distribution to be reduced to that of con
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 3.1
Definition 3.1. Let \( L \) be an orbit of the system, and \( A\left( {r,\theta }\right) \) be a point moving on \( L \) . If \( \theta \rightarrow {\theta }_{0} \) as \( r \rightarrow 0 \), then the orbit \( L \) is said to tend to the critical point \( O\left( {0,0}\right) \) along the fixed direction \( \theta = {\theta }_{0} \) . Definition 3.2. Let the origin \( O \) be an isolated critical point of system (3.1). If there exists a sequence of points \( {A}_{n} = A\left( {{r}_{n},{\theta }_{n}}\right) \) such that as \( n \rightarrow + \infty \), \( {r}_{n} \rightarrow 0,{\theta }_{n} \rightarrow {\theta }_{0} \), and \( {\alpha }_{n} \rightarrow 0 \), then \( \theta = {\theta }_{0} \) is called a characteristic direction for (3.1). (Here \( {\alpha }_{n} \) is the tangent of the angle rotating from the position coordinate vector for \( {A}_{n} \) in counterclockwise manner toward the direction of the vector field evaluated at \( {A}_{n} \) .) Clearly, if there is an orbit tending to the critical point \( O \) along the (fixed) direction \( \theta = {\theta }_{0} \), then \( \theta = {\theta }_{0} \) will be a characteristic direction. For saddles and nodes in \( §2 \), the corresponding systems after transformations have four characteristic directions, \( \theta = 0,\frac{\pi }{2},\pi \), and \( \frac{3\pi }{2} \) . For improper nodes, the transformed systems have two characteristic directions \( \theta = 0,\pi \) or \( \theta = \pi /2,{3\pi }/2 \) . For proper nodes, every direction is a characteristic direction. For centers and foci, there is no characteristic direction. Note that along a characteristic direction, there might be no orbit tending to the critical point. This is indicated by the following example. Example 3.1. Consider the system \[ \frac{dx}{dt} = - x, \] \[ \frac{dy}{dt} = - y + \frac{x\cos \ln \left| {\ln \left( {1/\left| x\right| }\right) }\right| }{\ln \left( {1/\left| x\right| }\right) }, \] and discuss the structure of its orbits in a neighborhood of the critical point \( O \) .
Solution. From the first equation, \( x\left( t\right) = x\left( 0\right) {e}^{-t} \) . Suppose \( 0 < \left| {x\left( 0\right) }\right| < 1 \) . Substituting into the second equation, we obtain \[ y\left( t\right) = {e}^{-t}\left\lbrack {{C}_{1} + {\int }_{0}^{t}\frac{x\left( 0\right) \cos \ln \left( {t - \ln \left| {x\left( 0\right) }\right| }\right) }{t - \ln \left| {x\left( 0\right) }\right| }{dt}}\right\rbrack \] \[ = {e}^{-t}\left\lbrack {{C}_{1} + x\left( 0\right) \sin \ln \left( {t - \ln \left| {x\left( 0\right) }\right| }\right) - x\left( 0\right) \sin \ln \left( {-\ln \left| {x\left( 0\right) }\right| }\right) }\right\rbrack \] \[ = x\left( t\right) \left\lbrack {{C}_{2} + \sin \ln \left| {\ln \frac{1}{\left| x\left( t\right) \right| }}\right| }\right\rbrack . \] Consequently, \[ \mathop{\lim }\limits_{{t \rightarrow + \infty }}x\left( t\right) = 0,\;\mathop{\lim }\limits_{{t \rightarrow + \infty }}y\left( t\right) = 0, \] and \[ \mathop{\limsup }\limits_{{t \rightarrow + \infty }}\frac{y\left( t\right) }{x\left( t\right) } = {C}_{2} + 1,\;\mathop{\liminf }\limits_{{t \rightarrow + \infty }}\frac{y\left( t\right) }{x\left( t\right) } = {C}_{2} - 1. \] (3.3) We can select a sequence of times \( \left\{ {t}_{n}\right\} \) with \( {t}_{n} \rightarrow + \infty \) such that \[ \ln \left| {\ln \frac{1}{\left| x\left( {t}_{n}\right) \right| }}\right| = \pm \frac{\pi }{2} + {2n\pi },\;n \gg 1. \] Hence when \( t = {t}_{n} \), we have \( {dy}/{dx} = y/x = {C}_{2} \pm 1 \) . From Definition \( {3.2},\theta = {\theta }_{0} = {\tan }^{-1}\left( {{C}_{2} \pm 1}\right) \) is a characteristic direction, where \( {C}_{2} \) is an arbitrary constant. Clearly \( {C}_{2} \) can be chosen so that any \( 0 \leq \theta < {2\pi } \) is a characteristic direction. 
Since \( x = 0 \) is a solution, there are orbits tending to the critical point along directions \( \theta = \frac{\pi }{2},\frac{3\pi }{2} \) . However, when \( x\left( 0\right) \neq 0 \) , (3.3) implies that any other orbit cannot tend to the critical point \( O \) along a fixed direction (i.e., along any other fixed direction, there is no orbit tending to the critical point). The vector field defined by the differential equations is symmetric with respect to the origin, and the phase portrait is illustrated in Figure 2.13. Definition 3.3. Let the orbit \( L \) intersect the ray \( \theta = {\theta }_{0} \) at the point \( P \) . If the angle \( {\alpha }_{P} \) between the coordinate vector and field vector at \( P \) satisfies \( {\alpha }_{P} < \pi \left( { > \pi }\right) \), then \( L \) is said to intersect \( \theta = {\theta }_{0} \) on the positive (or negative) side (cf. Definition 3.2 for the calculation of \( {\alpha }_{P} \) ). ![bea09977-be18-4815-a30e-4fa2fe3b219c_71_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_71_0.jpg) Figure 2.14 Definition 3.4. Let the orbit \( L \) intersect the ray \( \theta = {\theta }_{0} \) at the point \( P \) . If the angle \( {\alpha }_{P} \) between the coordinate vector and field vector at \( P \) satisfies \[ \frac{\pi }{2} < {\alpha }_{P} < \frac{3\pi }{2}\;\left( {\text{ or }\frac{-\pi }{2} < {\alpha }_{P} < \frac{\pi }{2}}\right) , \] then \( L \) is said to intersect \( \theta = {\theta }_{0} \) in the positive (or negative) direction (cf. Definition 3.2 for the calculation of \( {\alpha }_{P} \) ). See Figure 2.14. Definition 3.5. 
The sector \( \bigtriangleup \overset{⏜}{OAB} \) consisting of the radii \( {OA},{OB} \) and the circular arc \( \overset{⏜}{AB} \) centered at the critical point \( O \) is called a normal region if the following conditions are satisfied: (i) there are no critical points in \( \bigtriangleup \overset{⏜}{OAB} \) except \( O \), and \( {OA},{OB} \) (excluding the point \( O \) ) are cross sections; (ii) the vector field at any point in \( \bigtriangleup \overset{⏜}{OAB} \) is not perpendicular to the coordinate vector; (iii) there is at most one characteristic direction in \( \bigtriangleup \overset{⏜}{OAB} \), and the angles from the \( x \) -axis to \( {OA},{OB} \) are not characteristic directions. From property (i), all orbits can only intersect \( {OA} \) (or \( {OB} \) ) on the same side, i.e., they either all enter \( \bigtriangleup \overset{⏜}{OAB} \) through \( {OA} \) (or \( {OB} \) ) or all leave \( \bigtriangleup \overset{⏜}{OAB} \) through \( {OA} \) (or \( {OB} \) ). From property (ii), orbits can only intersect \( {OA} \) (or \( {OB} \) ) in the same direction, i.e., either all in the positive direction or all in the negative direction. Moreover, it follows from property (ii) that orbits intersect \( {OA} \) and \( {OB} \) in the same direction. Otherwise, the continuity of the vector field would imply that the coordinate vector and the field vector are perpendicular at some point \( P \) inside \( \bigtriangleup \overset{⏜}{OAB} \), contradicting property (ii). See Figure 2.15. From the above discussion, ignoring the time direction, there are only three types of normal regions, as indicated in Figure 2.16. Here, the arrows represent the direction as \( t \) increases or as \( t \) decreases. To fix ideas, in the following three lemmas we will interpret the arrows as pointing in the direction of increasing \( t \) . 
If the interpretation is changed to decreasing \( t \), then the statements of the lemmas remain valid with \( t \rightarrow + \infty \) replaced by \( t \rightarrow - \infty \) . LEMMA 3.1. Suppose that \( \bigtriangleup \overset{⏜}{OAB} \) is a normal region of the first type. Then an orbit starting from any point in \( {OA},{OB} \) will tend to the critical point \( O \) as \( t \rightarrow + \infty \) . Proof. From property (ii) in Definition 3.5, an orbit starting from any point in \( {OA},{OB} \) cannot leave the normal region through \( \overset{⏜}{AB} \) as \( t \) increases. Moreover, the radius of any point on the orbit is monotonically decreasing as \( t \) increases; otherwise, there would be a point in \( \bigtriangleup \overset{⏜}{OAB} \) at which the field vector is perpendicular to the coordinate vector. Further, an orbit cannot remain in \( \bigtriangleup \overset{⏜}{OAB} \) indefinitely without tending to \( O \) . Otherwise, its \( \omega \) -limit set would contain a critical point different from \( O \), contradicting property (i). Hence, the orbit must tend to the critical point \( O \) as \( t \rightarrow + \infty \) . ![bea09977-be18-4815-a30e-4fa2fe3b219c_72_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_72_0.jpg) FIGURE 2.15 ![bea09977-be18-4815-a30e-4fa2fe3b219c_72_1.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_72_1.jpg) FIGURE 2.16 LEMMA 3.2. Suppose that \( \bigtriangleup \overset{⏜}{OAB} \) is a normal region of the second type. Then there exists a point or a closed subarc of \( \overset{⏜}{AB} \) such that any orbit starting from there will tend to the critical point \( O \) as \( t \rightarrow + \infty \) . Proof. Let \( M \in {OA} \) . The orbit \( \overrightarrow{f}\left( {M,{I}^{ - }}\right) \) must intersect \( \overset{⏜}{AB} \) at a point \( {P}_{M} \) ; let \( \mathop{\lim }\limits_{{M \rightarrow O}}{P}_{M} = P \) . 
Similarly, let \( N \in {OB} \), let \( \overrightarrow{f}\left( {N,{I}^{ - }}\right) \) intersect \( \overset{⏜}{AB} \) at \( {Q}_{N} \), and let \( \mathop{\lim }\limits_{{N \rightarrow O}}{Q}_{N} = Q \) . Clearly, starting from \( P \) or \( Q \), the orbit will tend to the critical point \( O \) as \( t \rightarrow + \infty \) . If \( P = Q \), then there is only one orbit \( \overrightarrow{f}\left( {P,{I}^{ + }}\right) \) that tends to \( O \) . If \( P \neq Q \), then an orbit \( \overrightarrow{f}\left( {R,{I}^{ + }}\right) \) starting from any point \( R \) on the closed arc \( \overset{⏜}{PQ} \) must tend to \( O \) . LEMMA 3.3. Suppose that \( \bigtriangleup \overset{⏜}{OAB} \) is a normal region of the third type. Then there are two possible cases
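To complement the discussion of characteristic directions in this section, here is a quick numerical sketch (not from the text). It uses the hypothetical linear node \( dx/dt = -x \), \( dy/dt = -2y \), whose characteristic directions are \( \theta = 0, \pi/2, \pi, 3\pi/2 \), and checks along one orbit that the radius decreases monotonically and that \( \theta \) converges to the fixed direction \( \theta = 0 \).

```python
import math

# linear node dx/dt = -x, dy/dt = -2y: explicit orbits x = x0 e^{-t}, y = y0 e^{-2t}
def orbit(x0, y0, t):
    return x0 * math.exp(-t), y0 * math.exp(-2 * t)

x0, y0 = 1.0, 1.0
rs, thetas = [], []
for k in range(60):
    x, y = orbit(x0, y0, 0.2 * k)
    rs.append(math.hypot(x, y))          # polar radius r
    thetas.append(math.atan2(y, x))      # polar angle theta

# r decreases monotonically and theta tends to the characteristic direction 0
assert all(r2 < r1 for r1, r2 in zip(rs, rs[1:]))
assert abs(thetas[-1]) < 1e-4
```

In contrast with Example 3.1, here every orbit off the axes tends to \( O \) along the fixed direction \( \theta = 0 \) (or \( \pi \)).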
1189_(GTM95)Probability-1
Definition 2
Definition 2. Let \( x \in {R}^{1} \) . The function \[ {F}_{\xi }\left( x\right) = \mathrm{P}\{ \omega : \xi \left( \omega \right) \leq x\} \] is called the distribution function of the random variable \( \xi \) . Clearly \[ {F}_{\xi }\left( x\right) = \mathop{\sum }\limits_{\left\{ i : {x}_{i} \leq x\right\} }{P}_{\xi }\left( {x}_{i}\right) \] and \[ {P}_{\xi }\left( {x}_{i}\right) = {F}_{\xi }\left( {x}_{i}\right) - {F}_{\xi }\left( {{x}_{i} - }\right) , \] where \( {F}_{\xi }\left( {x - }\right) = \mathop{\lim }\limits_{{y \uparrow x}}{F}_{\xi }\left( y\right) \) . If we suppose that \( {x}_{1} < {x}_{2} < \cdots < {x}_{m} \) and put \( {F}_{\xi }\left( {x}_{0}\right) = 0 \), then \[ {P}_{\xi }\left( {x}_{i}\right) = {F}_{\xi }\left( {x}_{i}\right) - {F}_{\xi }\left( {x}_{i - 1}\right) ,\;i = 1,\ldots, m. \] The following diagrams (Fig. 5) exhibit \( {P}_{\xi }\left( x\right) \) and \( {F}_{\xi }\left( x\right) \) for a binomial random variable.* ![ae2752e7-1bef-4323-9faa-016008475a91_51_0.jpg](images/ae2752e7-1bef-4323-9faa-016008475a91_51_0.jpg) Fig. 5 It follows immediately from Definition 2 that the distribution function \( {F}_{\xi } = \) \( {F}_{\xi }\left( x\right) \) has the following properties: (1) \( {F}_{\xi }\left( {-\infty }\right) = 0,{F}_{\xi }\left( {+\infty }\right) = 1 \) ; (2) \( {F}_{\xi }\left( x\right) \) is continuous on the right \( \left( {{F}_{\xi }\left( {x + }\right) = {F}_{\xi }\left( x\right) }\right) \) and piecewise constant. Along with random variables it is often necessary to consider random vectors \( \xi = \left( {{\xi }_{1},\ldots ,{\xi }_{r}}\right) \) whose components are random variables. 

* We use the terms "Bernoulli, binomial, Poisson, Gaussian, ..., random variables" for what are more usually called random variables with Bernoulli, binomial, Poisson, Gaussian, ..., distributions.
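Before turning to random vectors, the jump relation \( P_\xi(x_i) = F_\xi(x_i) - F_\xi(x_i-) \) of Definition 2 can be illustrated concretely. The following sketch is not part of the text; the binomial parameters \( n = 4 \), \( p = 0.3 \) are arbitrary sample values.

```python
from math import comb

n, p = 4, 0.3
# binomial probabilities P_xi(k) = C(n, k) p^k (1-p)^{n-k}
pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

def F(x):
    # distribution function F_xi(x) = P(xi <= x) of a discrete random variable
    return sum(prob for xi, prob in pmf.items() if xi <= x)

# each atom equals the jump of F at that point: P_xi(k) = F(k) - F(k-)
for k in pmf:
    jump = F(k) - F(k - 1e-9)     # F(k - 1e-9) approximates the left limit F(k-)
    assert abs(jump - pmf[k]) < 1e-9

assert F(-1) == 0.0 and abs(F(n) - 1.0) < 1e-12   # properties (1)
```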
For example, when we considered the multinomial distribution we were dealing with a random vector \( v = \) \( \left( {{v}_{1},\ldots ,{v}_{r}}\right) \), where \( {v}_{i} = {v}_{i}\left( \omega \right) \) was the number of elements equal to \( {b}_{i}, i = 1,\ldots, r \) , in the sequence \( \omega = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) . The set of probabilities \[ {P}_{\xi }\left( {{x}_{1},\ldots ,{x}_{r}}\right) = \mathrm{P}\left\{ {\omega : {\xi }_{1}\left( \omega \right) = {x}_{1},\ldots ,{\xi }_{r}\left( \omega \right) = {x}_{r}}\right\} , \] where \( {x}_{i} \in {X}_{i} \), the range of \( {\xi }_{i} \), is called the probability distribution of the random vector \( \xi \), and the function \[ {F}_{\xi }\left( {{x}_{1},\ldots ,{x}_{r}}\right) = \mathrm{P}\left\{ {\omega : {\xi }_{1}\left( \omega \right) \leq {x}_{1},\ldots ,{\xi }_{r}\left( \omega \right) \leq {x}_{r}}\right\} , \] where \( {x}_{i} \in {R}^{1} \), is called the distribution function of the random vector \( \xi = \) \( \left( {{\xi }_{1},\ldots ,{\xi }_{r}}\right) \) . For example, for the random vector \( v = \left( {{v}_{1},\ldots ,{v}_{r}}\right) \) mentioned above, \[ {P}_{v}\left( {{n}_{1},\ldots ,{n}_{r}}\right) = {C}_{n}\left( {{n}_{1},\ldots ,{n}_{r}}\right) {p}_{1}^{{n}_{1}}\cdots {p}_{r}^{{n}_{r}} \] (see (2), Sect. 2). 2. Let \( {\xi }_{1},\ldots ,{\xi }_{r} \) be a set of random variables with values in a (finite) set \( X \subseteq {R}^{1} \) . Let \( \mathcal{X} \) be the algebra of all subsets of \( X \) . Definition 3. 
The random variables \( {\xi }_{1},\ldots ,{\xi }_{r} \) are said to be (mutually) independent if \[ \mathrm{P}\left\{ {{\xi }_{1} = {x}_{1},\ldots ,{\xi }_{r} = {x}_{r}}\right\} = \mathrm{P}\left\{ {{\xi }_{1} = {x}_{1}}\right\} \cdots \mathrm{P}\left\{ {{\xi }_{r} = {x}_{r}}\right\} \] for all \( {x}_{1},\ldots ,{x}_{r} \in X \) ; or, equivalently, if \[ \mathrm{P}\left\{ {{\xi }_{1} \in {B}_{1},\ldots ,{\xi }_{r} \in {B}_{r}}\right\} = \mathrm{P}\left\{ {{\xi }_{1} \in {B}_{1}}\right\} \cdots \mathrm{P}\left\{ {{\xi }_{r} \in {B}_{r}}\right\} \] for all \( {B}_{1},\ldots ,{B}_{r} \in \mathcal{X} \) . We can get a very simple example of independent random variables from the Bernoulli scheme. Let \[ \Omega = \left\{ {\omega : \omega = \left( {{a}_{1},\ldots ,{a}_{n}}\right) ,{a}_{i} = 0,1}\right\} ,\;p\left( \omega \right) = {p}^{\sum {a}_{i}}{q}^{n - \sum {a}_{i}} \] and \( {\xi }_{i}\left( \omega \right) = {a}_{i} \) for \( \omega = \left( {{a}_{1},\ldots ,{a}_{n}}\right), i = 1,\ldots, n \) . Then the random variables \( {\xi }_{1},{\xi }_{2},\ldots ,{\xi }_{n} \) are independent, as follows from the independence of the events \[ {A}_{1} = \left\{ {\omega : {a}_{1} = 1}\right\} ,\ldots ,{A}_{n} = \left\{ {\omega : {a}_{n} = 1}\right\} , \] which was established in Sect. 3. 3. We shall frequently encounter the problem of finding the probability distributions of random variables that are functions \( f\left( {{\xi }_{1},\ldots ,{\xi }_{r}}\right) \) of random variables \( {\xi }_{1},\ldots ,{\xi }_{r} \) . For the present we consider only the determination of the distribution of a sum \( \zeta = \xi + \eta \) of random variables. If \( \xi \) and \( \eta \) take values in the respective sets \( X = \left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} \) and \( Y = \) \( \left\{ {{y}_{1},\ldots ,{y}_{l}}\right\} \), the random variable \( \zeta = \xi + \eta \) takes values in the set \( Z = \{ z : z = \) \( \left. 
{{x}_{i} + {y}_{j}, i = 1,\ldots, k;j = 1,\ldots, l}\right\} \) . Then it is clear that \[ {P}_{\zeta }\left( z\right) = \mathrm{P}\{ \zeta = z\} = \mathrm{P}\{ \xi + \eta = z\} = \mathop{\sum }\limits_{\left\{ \left( i, j\right) : {x}_{i} + {y}_{j} = z\right\} }\mathrm{P}\left\{ {\xi = {x}_{i},\eta = {y}_{j}}\right\} . \] The case of independent random variables \( \xi \) and \( \eta \) is particularly important. In this case \[ \mathrm{P}\left\{ {\xi = {x}_{i},\eta = {y}_{j}}\right\} = \mathrm{P}\left\{ {\xi = {x}_{i}}\right\} \mathrm{P}\left\{ {\eta = {y}_{j}}\right\} , \] and therefore \[ {P}_{\zeta }\left( z\right) = \mathop{\sum }\limits_{\left\{ \left( i, j\right) : {x}_{i} + {y}_{j} = z\right\} }{P}_{\xi }\left( {x}_{i}\right) {P}_{\eta }\left( {y}_{j}\right) = \mathop{\sum }\limits_{{i = 1}}^{k}{P}_{\xi }\left( {x}_{i}\right) {P}_{\eta }\left( {z - {x}_{i}}\right) \] (3) for all \( z \in Z \), where in the last sum \( {P}_{\eta }\left( {z - {x}_{i}}\right) \) is taken to be zero if \( z - {x}_{i} \notin Y \) . For example, if \( \xi \) and \( \eta \) are independent Bernoulli random variables, taking the values 1 and 0 with respective probabilities \( p \) and \( q \), then \( Z = \{ 0,1,2\} \) and \[ {P}_{\zeta }\left( 0\right) = {P}_{\xi }\left( 0\right) {P}_{\eta }\left( 0\right) = {q}^{2}, \] \[ {P}_{\zeta }\left( 1\right) = {P}_{\xi }\left( 0\right) {P}_{\eta }\left( 1\right) + {P}_{\xi }\left( 1\right) {P}_{\eta }\left( 0\right) = {2pq} \] \[ {P}_{\zeta }\left( 2\right) = {P}_{\xi }\left( 1\right) {P}_{\eta }\left( 1\right) = {p}^{2}. 
\] It is easy to show by induction that if \( {\xi }_{1},{\xi }_{2},\ldots ,{\xi }_{n} \) are independent Bernoulli random variables with \( \mathrm{P}\left\{ {{\xi }_{i} = 1}\right\} = p,\mathrm{P}\left\{ {{\xi }_{i} = 0}\right\} = q \), then the random variable \( \zeta = {\xi }_{1} + \cdots + {\xi }_{n} \) has the binomial distribution \[ {P}_{\zeta }\left( k\right) = {C}_{n}^{k}{p}^{k}{q}^{n - k},\;k = 0,1,\ldots, n. \] (4) 4. We now turn to the important concept of the expectation, or mean value, of a random variable. Let \( \left( {\Omega ,\mathcal{A},\mathrm{P}}\right) \) be a (discrete) probability space and \( \xi = \xi \left( \omega \right) \) a random variable with values in the set \( X = \left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} \) . If we put \( {A}_{i} = \left\{ {\omega : \xi = {x}_{i}}\right\}, i = 1,\ldots, k \) , then \( \xi \) can evidently be represented as \[ \xi \left( \omega \right) = \mathop{\sum }\limits_{{i = 1}}^{k}{x}_{i}I\left( {A}_{i}\right) \] (5) where the sets \( {A}_{1},\ldots ,{A}_{k} \) form a decomposition of \( \Omega \) (i.e., they are pairwise disjoint and their sum is \( \Omega \) ; see Subsection 3 of Sect. 1). Let \( {p}_{i} = \mathrm{P}\left\{ {\xi = {x}_{i}}\right\} \) . It is intuitively plausible that if we observe the values of the random variable \( \xi \) in " \( n \) repetitions of identical experiments," the value \( {x}_{i} \) ought to be encountered about \( {p}_{i}n \) times, \( i = 1,\ldots, k \) . Hence the mean value calculated from the results of \( n \) experiments is roughly \[ \frac{1}{n}\left\lbrack {n{p}_{1}{x}_{1} + \cdots + n{p}_{k}{x}_{k}}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{k}{p}_{i}{x}_{i} \] This discussion provides the motivation for the following definition. Definition 4. 
The expectation* or mean value of the random variable \( \xi = \mathop{\sum }\limits_{{i = 1}}^{k}{x}_{i} \) \( I\left( {A}_{i}\right) \) is the number \[ \mathrm{E}\xi = \mathop{\sum }\limits_{{i = 1}}^{k}{x}_{i}\mathrm{P}\left( {A}_{i}\right) \] (6) Since \( {A}_{i} = \left\{ {\omega : \xi \left( \omega \right) = {x}_{i}}\right\} \) and \( {P}_{\xi }\left( {x}_{i}\right) = \mathrm{P}\left( {A}_{i}\right) \), we have \[ \mathrm{E}\xi = \mathop{\sum }\limits_{{i = 1}}^{k}{x}_{i}{P}_{\xi }\left( {x}_{i}\right) \] (7) Recalling the definition of \( {F}_{\xi } = {F}_{\xi }\left( x\right) \) and writing \[ \Delta {F}_{\xi }\left( x\right) = {F}_{\xi }\left( x\right) - {F}_{\xi }\left( {x - }\right) \] we obtain \( {P}_{\xi }\left( {x}_{i}\right) = \Delta {F}_{\xi }\left( {x}_{i}\right) \) and consequently \[ \mathrm{E}\xi = \mathop{\sum }\limits_{{i = 1}}^{k}{x}_{i}\Delta {F}_{\xi }\left( {x}_{i}\right) \] (8) Before discussing the properties of the expectation, we remark that it is often convenient to use another representation of the random variable \( \xi \), namely \[ \xi \left( \omega \right) = \mathop{\sum }\limits_{{j = 1}}^{l}{x}_{j}^{\prime }I\left( {B}_{j}\right) \] where \( {B}_{1} + \cdots + {B}_{l}
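The constructions of this section, namely independence (Definition 3), the convolution formula (3), the binomial law (4), and the expectation (7), can all be checked exhaustively on a small Bernoulli scheme. The following sketch is illustrative only; the parameters \( n = 3 \), \( p = 0.4 \) are arbitrary.

```python
from itertools import product
from math import comb

# Bernoulli scheme: Omega = {0,1}^n with p(omega) = p^{sum a_i} q^{n - sum a_i}
n, p = 3, 0.4
q = 1 - p
omega = list(product([0, 1], repeat=n))
P = {w: p**sum(w) * q**(n - sum(w)) for w in omega}

def prob(event):
    return sum(P[w] for w in omega if event(w))

# Definition 3: the coordinates xi_i(omega) = a_i are mutually independent
for xs in product([0, 1], repeat=n):
    joint = prob(lambda w: w == xs)
    marg = 1.0
    for i in range(n):
        marg *= prob(lambda w, i=i, x=xs[i]: w[i] == x)
    assert abs(joint - marg) < 1e-12

def convolve(P1, P2):
    # formula (3): distribution of the sum of two independent discrete variables
    R = {}
    for x, px in P1.items():
        for y, py in P2.items():
            R[x + y] = R.get(x + y, 0.0) + px * py
    return R

def expectation(pmf):
    # formula (7): E xi = sum_i x_i P_xi(x_i)
    return sum(x, ) if False else sum(x * pr for x, pr in pmf.items())

dist = {0: 1.0}
for _ in range(n):
    dist = convolve(dist, {1: p, 0: q})    # zeta = xi_1 + ... + xi_n

for k in range(n + 1):
    assert abs(dist[k] - comb(n, k) * p**k * q**(n - k)) < 1e-12   # formula (4)
assert abs(expectation(dist) - n * p) < 1e-12                      # E zeta = n*p
```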
1235_[丁一文] Number Theory 2
Definition 4.2.1
Definition 4.2.1. A subgroup \( \mathcal{N} \) of \( {A}_{K} \) is called a norm group if there exists a finite separable extension \( L \) over \( K \) such that \( \mathcal{N} = {N}_{L/K}{A}_{L} \mathrel{\text{:=}} {\mathcal{N}}_{L} \) . Lemma 4.2.2. Let \( M \supset L \supset K \) be finite separable extensions; then \( {\mathcal{N}}_{M} \subset {\mathcal{N}}_{L} \) . Lemma 4.2.3. Let \( L/K \) be a finite separable extension of \( K \), and let \( E \) be the maximal abelian extension of \( K \) in \( L \) ; then \( {\mathcal{N}}_{L} = {\mathcal{N}}_{E} \) . Proof. It suffices to show \( {\mathcal{N}}_{E} \subset {\mathcal{N}}_{L} \) . Let \( a \in {\mathcal{N}}_{E} \) . Let \( M \) be a finite Galois extension of \( K \) containing \( L \), put \( G \mathrel{\text{:=}} \operatorname{Gal}\left( {M/K}\right), H \mathrel{\text{:=}} \operatorname{Gal}\left( {M/L}\right) \), and let \( {M}^{\text{ab }} \) be the maximal abelian extension of \( K \) in \( M \) (so \( {M}^{\text{ab }} \supset E \) ). The automorphism \( {\rho }_{M/K}\left( a\right) \) acts trivially on \( E \) . As \( E \) is maximal abelian over \( K \), we see \( \left\lbrack {G, G}\right\rbrack H = \operatorname{Gal}\left( {M/E}\right) \), and hence the morphism \( H \hookrightarrow \operatorname{Gal}\left( {M/E}\right) \rightarrow \operatorname{Gal}\left( {{M}^{ab}/E}\right) \) is surjective. Note that the composition factors through \( {H}^{\mathrm{{ab}}} \) . In summary, there exists \( \sigma \in {H}^{\text{ab }} \) whose image is \( {\rho }_{M/K}\left( a\right) \) . Let \( b \in {A}_{L} \) be such that \( \sigma = {\rho }_{M/L}\left( b\right) \) . By Proposition 4.1.6, we see \( {\rho }_{M/K}\left( {{N}_{L/K}\left( b\right) }\right) = {\rho }_{M/K}\left( a\right) \) . Hence there exists \( c \in {A}_{M} \) such that \( {N}_{L/K}\left( b\right) - a = {N}_{M/K}\left( c\right) \) . So \( a = {N}_{L/K}\left( {b - {N}_{M/L}\left( c\right) }\right) \) . The lemma follows. 
Remark 4.2.4. Note that this statement is stronger than Lemma 3.2.11. Corollary 4.2.5. Every norm group \( \mathcal{N} \) has finite index in \( {A}_{K} \), with \( \left\lbrack {{A}_{K} : \mathcal{N}}\right\rbrack \leq \left\lbrack {L : K}\right\rbrack \) for \( \mathcal{N} = {\mathcal{N}}_{L} \), and equality holds if and only if \( L/K \) is abelian. Proposition 4.2.6. For any finite abelian extensions \( L, M \) of \( K \), the following hold. (1) \( {\mathcal{N}}_{L} \cap {\mathcal{N}}_{M} = {\mathcal{N}}_{LM} \) . (2) \( {\mathcal{N}}_{L} + {\mathcal{N}}_{M} = {\mathcal{N}}_{L \cap M} \) . (3) \( {\mathcal{N}}_{M} \subset {\mathcal{N}}_{L} \) if and only if \( L \subset M \) . (4) For any subgroup \( \mathcal{N} \) of \( {A}_{K} \) containing \( {\mathcal{N}}_{L} \), there exists an intermediate field \( E \) in \( L/K \) with \( \mathcal{N} = {\mathcal{N}}_{E} \) . Proof. (1) The inclusion \( \supset \) is clear. We have ![00eeb6ce-d106-4d6c-bb86-c4abb4702764_59_0.jpg](images/00eeb6ce-d106-4d6c-bb86-c4abb4702764_59_0.jpg) hence \( {\mathcal{N}}_{L} \cap {\mathcal{N}}_{M} = {\mathcal{N}}_{LM} \) . (3) The "if" part is clear. If \( {\mathcal{N}}_{M} \subset {\mathcal{N}}_{L} \), then \( {\mathcal{N}}_{LM} = {\mathcal{N}}_{M} \) by (1). By the reciprocity law, we see \( {LM} = M \) . (4) Let \( E \mathrel{\text{:=}} {L}^{{\rho }_{L/K}\left( \mathcal{N}\right) } \) . We have a commutative diagram \[ {A}_{E}/{N}_{L/E}\left( {A}_{L}\right) \xrightarrow[ \sim ]{{\rho }_{L/E}}\operatorname{Gal}\left( {L/E}\right) \] \[ {N}_{E/K} \downarrow \] \[ {A}_{K}/{N}_{L/K}\left( {A}_{L}\right) \xrightarrow[ \sim ]{{\rho }_{L/K}}\operatorname{Gal}\left( {L/K}\right) \] and we deduce \( {N}_{E/K}\left( {A}_{E}\right) = \mathcal{N} \) . (2) The inclusion \( \subset \) is clear. By (4), there exists an intermediate field \( E \) of \( L/K \) such that \( {\mathcal{N}}_{E} = \) \( {\mathcal{N}}_{L} + {\mathcal{N}}_{M} \) . 
As \( {\mathcal{N}}_{E} \supset {\mathcal{N}}_{M} \), we see by (3) \( E \subset M \) hence \( E \subset L \cap M \) and \( {\mathcal{N}}_{E} \supset {\mathcal{N}}_{L \cap M} \) . Corollary 4.2.7. For a norm subgroup \( \mathcal{N} \) of \( {A}_{K} \), there exists a unique finite abelian extension \( L/K \) such that \( \mathcal{N} = {\mathcal{N}}_{L} \) . For a finite separable extension \( L \) of \( K \), we set \( {D}_{L} = \ker {\rho }_{L} \) . Lemma 4.2.8. Let \( L \) be a finite separable extension of \( K \) . We have \[ {D}_{L} = { \cap }_{M}{N}_{M/L}\left( {A}_{M}\right) \] where \( M \) runs through finite abelian (or separable) extensions of \( L \) . Definition 4.2.9. We say that a class formation \( \left( {A,\mathrm{{inv}}}\right) \) for \( K \) is topological if each \( {A}_{L} \) (for finite extensions \( L \) of \( K \) ) is given an additional Hausdorff topology such that if \( L/K \) is Galois, \( {A}_{L} \) is a topological \( \operatorname{Gal}\left( {L/K}\right) \) -module, and for \( M \subset L \subset K \), the topology on \( {A}_{L} \) coincides with the induced topology from \( {A}_{M} \) via \( {A}_{L} \hookrightarrow {A}_{M} \), and moreover the following properties are satisfied: 1. the norm map \( {N}_{M/L} : {A}_{M} \rightarrow {A}_{L} \) has closed image and compact kernel for each finite extension \( M/L \) of finite separable extensions of \( K \) , 2. for each prime \( p \), there exists a finite separable extension \( {K}_{p} \) over \( K \) such that for all finite separable extensions \( L \) of \( {K}_{p} \), the kernel of \( {\phi }_{p} : {A}_{L} \rightarrow {A}_{L}, a \mapsto {pa} \) is compact and the image of \( {\phi }_{p} \) contains \( {D}_{L} \) , 3. for each finite separable extension \( L \) of \( K \) there exists a compact subgroup \( {U}_{L} \) of \( {A}_{L} \) such that every closed subgroup of finite index in \( {A}_{L} \) that contains \( {U}_{L} \) is a norm group. Remark 4.2.10. 
Let \( L \) be a finite separable extension of \( K \), and \( M \) be a finite Galois extension of \( K \) containing \( L \) . Then \( {A}_{M} \) is a topological \( \operatorname{Gal}\left( {M/K}\right) \) -module, and we deduce that \( {A}_{K} \cong {A}_{M}^{\mathrm{{Gal}}\left( {M/K}\right) } \) is closed in \( {A}_{L} \cong {A}_{M}^{\mathrm{{Gal}}\left( {M/L}\right) } \), which is closed in \( {A}_{M} \) . Property 1 implies that \( {D}_{K} \) is closed in \( {A}_{K} \) . Property 2 implies that \( \ker \left\lbrack {{\phi }_{p} : {A}_{K} \rightarrow {A}_{K}}\right\rbrack \) is compact. Example 4.2.11. Let \( K \) be a finite extension of \( {\mathbb{Q}}_{p} \) ; we show \( \left( {{\bar{K}}^{ \times },\text{inv }}\right) \) is a topological class formation, where \( {\bar{K}}^{ \times } \) is equipped with the \( p \) -adic topology. Condition (1) is clear. For (3), one can take \( {U}_{L} \mathrel{\text{:=}} {\mathcal{O}}_{L}^{ \times } \) ; then (3) follows by considering unramified extensions of \( L \) . For (2), the map \( {L}^{ \times } \rightarrow {L}^{ \times }, x \mapsto {x}^{p} \) has compact kernel. As we knew \( {D}_{L} = 1 \), the second part of (2) is also clear (recall that our proof of \( {D}_{L} = 1 \) used the Lubin-Tate theory). But in fact, the second part can also be deduced from (the much easier) Kummer theory, which we leave as an exercise. Theorem 4.2.12. Suppose \( \left( {A,\operatorname{inv}}\right) \) is a topological class formation for \( K \) ; then a subgroup \( \mathcal{M} \) of \( {A}_{K} \) is a norm group if and only if \( \mathcal{M} \) is closed of finite index in \( {A}_{K} \) . Corollary 4.2.13. 
Let \( \left( {A,\mathrm{{inv}}}\right) \) be a topological class formation for \( K \) ; then there is a canonical isomorphism (induced by the reciprocity map): \[ {\rho }_{K} : \widehat{{A}_{K}} \mathrel{\text{:=}} \mathop{\lim }\limits_{\mathcal{M}}{A}_{K}/\mathcal{M}\overset{ \sim }{ \rightarrow }\operatorname{Gal}{\left( \bar{K}/K\right) }^{\mathrm{{ab}}}, \] where \( \mathcal{M} \) runs through the open subgroups of finite index of \( {A}_{K} \) . We prove the theorem in the rest of the section. We will frequently use the finite intersection property of compact spaces: let \( X \) be a compact space and \( \left\{ {Z}_{i}\right\} \) a family of closed subsets; if any finitely many of the \( {Z}_{i} \) have non-empty intersection, then \( \cap {Z}_{i} \neq \varnothing \) . Now let \( \left( {A\text{, inv}}\right) \) be a topological class formation; we first prove: Lemma 4.2.14. Let \( L \) be a finite separable extension of \( K \) ; then \( {N}_{L/K}{D}_{L} = {D}_{K} \) . Proof. It is clear that \( {N}_{L/K}{D}_{L} \subset {D}_{K} \) . Let \( a \in {D}_{K} \), and consider \( {N}_{L/K}^{-1}\left( a\right) \), which is a compact subset of \( {A}_{L} \) by property 1. For any finite separable extension \( M/L \), as \( a \in \) \( {N}_{M/K}\left( {A}_{M}\right) \), we deduce \( {N}_{M/L}\left( {A}_{M}\right) \cap {N}_{L/K}^{-1}\left( a\right) \neq \varnothing \) . Using the finite intersection property (and Proposition 4.2.6 (1)), we deduce \( \varnothing \neq { \cap }_{M}\left( {{N}_{M/L}\left( {A}_{M}\right) \cap {N}_{L/K}^{-1}\left( a\right) }\right) = {D}_{L} \cap {N}_{L/K}^{-1}\left( a\right) \) . The lemma follows. Proposition 4.2.15. The group \( {D}_{K} \) is divisible, and \( {D}_{K} = { \cap }_{n}n{A}_{K} \) . Proof. To show \( {D}_{K} \) is divisible, it suffices to show that \( {\phi }_{p} : {D}_{K} \rightarrow {D}_{K}, x \mapsto {px} \) is surjective for any prime number \( p \) . For any \( x \in {D}_{K} \), the set \( {\phi }_{p}^{-1}\left( x\right) \) is closed, hence compact. 
For any finite separable extension \( L/K \), there exists \( y \in {D}_{L} \) such that \( {N}_{L/K}\left( y\right) = x \) . Enlarging \( L \), we may assume \( L \) contains \( {K}_{p} \) ; by property 2, we have \( y \in p{A}_{L} \), hence \( {\phi }_{p}^{-1}\left( x\right) \cap {N}_{L/K}\left( {A}_{L}\right) \neq \varnothing \) (for any finite separable \( L \) over \( {K}_{p} \) ). Using the finite intersection property, we deduce \( {\phi
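The lattice identities of Proposition 4.2.6 can be mimicked in an elementary toy model, replacing norm groups by the subgroups \( n\mathbb{Z} \subset \mathbb{Z} \), for which \( m\mathbb{Z} \cap n\mathbb{Z} = \operatorname{lcm}(m,n)\mathbb{Z} \) and \( m\mathbb{Z} + n\mathbb{Z} = \gcd(m,n)\mathbb{Z} \). This is only an analogy, not part of the class formation formalism; the helper `NL` and the window bounds are ad hoc choices.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def NL(n, bound=200):
    # toy "norm group": the multiples of n inside a symmetric window of Z
    return {k for k in range(-bound, bound + 1) if k % n == 0}

for m, n in [(4, 6), (3, 5), (8, 12)]:
    # analogue of Proposition 4.2.6 (1): N_L \cap N_M = N_{LM}
    assert NL(m) & NL(n) == NL(lcm(m, n))
    # analogue of Proposition 4.2.6 (2): N_L + N_M = N_{L \cap M},
    # compared on a smaller window where the sum set is complete
    sums = {a + b for a in NL(m) for b in NL(n)}
    w = 100
    assert {k for k in sums if abs(k) <= w} == {k for k in NL(gcd(m, n)) if abs(k) <= w}
```

The inclusion-reversing behavior of (3) is visible here as well: \( m \mid n \) exactly when \( n\mathbb{Z} \subset m\mathbb{Z} \).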
1094_(GTM250)Modern Fourier Analysis
Definition 4.1.10
Definition 4.1.10. Given a kernel \( K \) in \( {SK}\left( {\delta, A}\right) \) and \( \varepsilon > 0 \), we define the truncated kernel \[ {K}^{\left( \varepsilon \right) }\left( {x, y}\right) = K\left( {x, y}\right) {\chi }_{\left| {x - y}\right| > \varepsilon }. \] Given a continuous linear operator \( T \) from \( \mathcal{S}\left( {\mathbf{R}}^{n}\right) \) to \( {\mathcal{S}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) and \( \varepsilon > 0 \), we define the truncated operator \( {T}^{\left( \varepsilon \right) } \) by \[ {T}^{\left( \varepsilon \right) }\left( f\right) \left( x\right) = {\int }_{{\mathbf{R}}^{n}}{K}^{\left( \varepsilon \right) }\left( {x, y}\right) f\left( y\right) {dy} \] and the maximal singular operator associated with \( T \) as follows: \[ {T}^{\left( *\right) }\left( f\right) \left( x\right) = \mathop{\sup }\limits_{{\varepsilon > 0}}\left| {{T}^{\left( \varepsilon \right) }\left( f\right) \left( x\right) }\right| . \] Note that both \( {T}^{\left( \varepsilon \right) }\left( f\right) \) and \( {T}^{\left( *\right) }\left( f\right) \) are well defined for \( f \) in \( \mathop{\bigcup }\limits_{{1 \leq p < \infty }}{L}^{p}\left( {\mathbf{R}}^{n}\right) \), by an application of Hölder's inequality. We investigate a certain connection between the boundedness of \( T \) and the boundedness of the family \( {\left\{ {T}^{\left( \varepsilon \right) }\right\} }_{\varepsilon > 0} \) uniformly in \( \varepsilon > 0 \) . Proposition 4.1.11. Let \( K \) be a kernel in \( {SK}\left( {\delta, A}\right) \) and let \( T \) in \( {CZO}\left( {\delta, A, B}\right) \) be associated with \( K \) . For \( \varepsilon > 0 \), let \( {T}^{\left( \varepsilon \right) } \) be the truncated operators obtained from \( T \) . 
Assume that there exists a constant \( {B}^{\prime } < \infty \) such that \[ \mathop{\sup }\limits_{{\varepsilon > 0}}{\begin{Vmatrix}{T}^{\left( \varepsilon \right) }\end{Vmatrix}}_{{L}^{2} \rightarrow {L}^{2}} \leq {B}^{\prime } \] (4.1.21) Then there exists a linear operator \( {T}_{0} \) defined on \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \) such that (1) The distributional kernel of \( {T}_{0} \) coincides with \( K \) on \[ {\mathbf{R}}^{n} \times {\mathbf{R}}^{n} \smallsetminus \left\{ {\left( {x, x}\right) : x \in {\mathbf{R}}^{n}}\right\} \] (2) For some subsequence \( {\varepsilon }_{j} \downarrow 0 \), we have \[ {\int }_{{\mathbf{R}}^{n}}{T}^{\left( {\varepsilon }_{j}\right) }\left( f\right) \left( x\right) g\left( x\right) {dx} \rightarrow {\int }_{{\mathbf{R}}^{n}}{T}_{0}\left( f\right) \left( x\right) g\left( x\right) {dx} \] (4.1.22) as \( j \rightarrow \infty \) for all \( f, g \) in \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \) . (3) \( {T}_{0} \) is bounded on \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \) with norm \[ {\begin{Vmatrix}{T}_{0}\end{Vmatrix}}_{{L}^{2} \rightarrow {L}^{2}} \leq {B}^{\prime } \] (4) There exists a measurable function \( b \) on \( {\mathbf{R}}^{n} \) with \( \parallel b{\parallel }_{{L}^{\infty }} \leq B + {B}^{\prime } \) such that \[ T\left( f\right) - {T}_{0}\left( f\right) = {bf} \] for all \( f \in {L}^{2}\left( {\mathbf{R}}^{n}\right) \) . Proof. Since \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \) is separable, by the Banach-Alaoglu theorem the unit ball of its dual is weak* compact and metrizable for the weak* topology. Let \( {\left\{ {f}_{k}\right\} }_{k = 1}^{\infty } \) be a dense countable subset of \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \) . 
By (4.1.21), the functions \( {T}^{\left( \varepsilon \right) }\left( {f}_{k}\right) \) lie in a multiple of the unit ball of \( {\left( {L}^{2}\right) }^{ * } \), which is weak* compact, and hence for each \( {f}_{k} \) we find a sequence \( {\left\{ {\varepsilon }_{j}^{k}\right\} }_{j = 1}^{\infty } \) such that for each \( g \in {L}^{2}\left( {\mathbf{R}}^{n}\right) \) we have \[ \mathop{\lim }\limits_{{j \rightarrow \infty }}{\int }_{{\mathbf{R}}^{n}}{T}^{\left( {\varepsilon }_{j}^{k}\right) }\left( {f}_{k}\right) \left( x\right) g\left( x\right) {dx} = {\int }_{{\mathbf{R}}^{n}}{T}_{0}^{{f}_{k}}\left( x\right) g\left( x\right) {dx} \] (4.1.23) for some function \( {T}_{0}^{{f}_{k}} \) in \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \) . Moreover, each \( {\left\{ {\varepsilon }_{j}^{k}\right\} }_{j = 1}^{\infty } \) can be chosen to be a subsequence of \( {\left\{ {\varepsilon }_{j}^{k - 1}\right\} }_{j = 1}^{\infty }, k \geq 2 \) . Then the diagonal sequence \( {\left\{ {\varepsilon }_{j}^{j}\right\} }_{j = 1}^{\infty } = {\left\{ {\varepsilon }_{j}\right\} }_{j = 1}^{\infty } \) satisfies \[ \mathop{\lim }\limits_{{j \rightarrow \infty }}{\int }_{{\mathbf{R}}^{n}}{T}^{\left( {\varepsilon }_{j}\right) }\left( {f}_{k}\right) \left( x\right) g\left( x\right) {dx} = {\int }_{{\mathbf{R}}^{n}}{T}_{0}^{{f}_{k}}\left( x\right) g\left( x\right) {dx} \] (4.1.24) for each \( k \) and \( g \in {L}^{2} \) . Since \( {\left\{ {f}_{k}\right\} }_{k = 1}^{\infty } \) is dense in \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \), a standard \( \varepsilon /3 \) argument shows that for every \( f \in {L}^{2}\left( {\mathbf{R}}^{n}\right) \) the sequence of complex numbers \[ {\int }_{{\mathbf{R}}^{n}}{T}^{\left( {\varepsilon }_{j}\right) }\left( f\right) \left( x\right) g\left( x\right) {dx} \] is Cauchy and thus converges. 
Now \( {L}^{2} \) is complete \( {}^{1} \) in the weak* topology; therefore for each \( f \in {L}^{2}\left( {\mathbf{R}}^{n}\right) \) there is a function \( {T}_{0}\left( f\right) \) in \( {L}^{2}\left( {\mathbf{R}}^{n}\right) \) such that (4.1.22) holds for all \( g \in {L}^{2}\left( {\mathbf{R}}^{n}\right) \) as \( j \rightarrow \infty \) . It is easy to see that \( {T}_{0} \) is a linear operator with the property \( {T}_{0}\left( {f}_{k}\right) = {T}_{0}^{{f}_{k}} \) for each \( k = 1,2,\ldots \) . This proves (2). The \( {L}^{2} \) boundedness of \( {T}_{0} \) is a consequence of (4.1.22), (4.1.21), and duality, since \[ {\begin{Vmatrix}{T}_{0}\left( f\right) \end{Vmatrix}}_{{L}^{2}} \leq \mathop{\sup }\limits_{{\parallel g{\parallel }_{{L}^{2}} \leq 1}}\mathop{\limsup }\limits_{{j \rightarrow \infty }}\left| {{\int }_{{\mathbf{R}}^{n}}{T}^{\left( {\varepsilon }_{j}\right) }\left( f\right) \left( x\right) g\left( x\right) {dx}}\right| \leq {B}^{\prime }\parallel f{\parallel }_{{L}^{2}}. \] This proves (3). Finally, (1) is a consequence of the integral representation \[ {\int }_{{\mathbf{R}}^{n}}{T}^{\left( {\varepsilon }_{j}\right) }\left( f\right) \left( x\right) g\left( x\right) {dx} = {\int }_{{\mathbf{R}}^{n}}{\int }_{{\mathbf{R}}^{n}}{K}^{\left( {\varepsilon }_{j}\right) }\left( {x, y}\right) f\left( y\right) {dy}\, g\left( x\right) {dx}, \] valid whenever \( f, g \) are Schwartz functions with disjoint supports, by letting \( j \rightarrow \infty \) . We finally prove (4). We first observe that if \( g \) is a bounded function with compact support and \( Q \) is an open cube in \( {\mathbf{R}}^{n} \), we have \[ \left( {{T}^{\left( {\varepsilon }_{j}\right) } - T}\right) \left( {g{\chi }_{Q}}\right) \left( x\right) = {\chi }_{Q}\left( x\right) \left( {{T}^{\left( {\varepsilon }_{j}\right) } - T}\right) \left( g\right) \left( x\right) , \] (4.1.25) for almost all \( x \notin \partial Q \) whenever \( {\varepsilon }_{j} \) is small enough (depending on \( x \) ). 
Indeed, since \( g{\chi }_{Q} \) is bounded and has compact support, by the integral representation formula (4.1.18) in Proposition 4.1.9 there is a null set \( E\left( {g{\chi }_{Q}}\right) \) such that for \( x \notin \bar{Q} \cup E\left( {g{\chi }_{Q}}\right) \) and for \( {\varepsilon }_{j} < \operatorname{dist}\left( {x,\operatorname{supp}g{\chi }_{Q}}\right) \), the left-hand side in (4.1.25) is zero, since in this case \( x \) is not in the support of \( g{\chi }_{Q} \) ; the right-hand side is zero as well, since \( {\chi }_{Q}\left( x\right) = 0 \) . Moreover, since \( g{\chi }_{{Q}^{c}} \) is also bounded and compactly supported, there is a null set \( E\left( {g{\chi }_{{Q}^{c}}}\right) \) such that for \( x \in Q \smallsetminus E\left( {g{\chi }_{{Q}^{c}}}\right) \) and \( {\varepsilon }_{j} < \operatorname{dist}\left( {x,\operatorname{supp}g{\chi }_{{Q}^{c}}}\right) \) we have that \( x \) does not lie in the support of \( g{\chi }_{{Q}^{c}} \), and thus \( \left( {{T}^{\left( {\varepsilon }_{j}\right) } - T}\right) \left( {g{\chi }_{{Q}^{c}}}\right) \left( x\right) = 0 \) ; hence (4.1.25) holds in this case as well. This proves (4.1.25) for almost all \( x \) not in the boundary \( \partial Q \) . Taking weak limits in (4.1.25) as \( {\varepsilon }_{j} \rightarrow 0 \), we obtain that \[ \left( {{T}_{0} - T}\right) \left( {g{\chi }_{Q}}\right) = {\chi }_{Q}\left( {{T}_{0} - T}\right) \left( g\right) \;\text{ a.e. } \] (4.1.26) for all open cubes \( Q \) in \( {\mathbf{R}}^{n} \) . This means that for any bounded function \( g \) with compact support and any cube \( Q \) in \( {\mathbf{R}}^{n} \) there is a set of measure zero \( {E}_{Q, g} \) such that (4.1.26) holds on \( {\mathbf{R}}^{n} \smallsetminus {E}_{Q, g} \) . Consider the countable family \( \mathcal{F} \) of all cubes in \( {\mathbf{R}}^{n} \) with corners in \( {\mathbf{Q}}^{n} \) and set \( {E}_{g} = { \cup }_{Q \in \mathcal{F}}{E}_{Q, g} \) .

\( {}^{1} \) The unit ball of \( {L}^{2} \) in the weak* topology is compact and metrizable, hence complete. 
Then \( \left| {E}_{g}\right| = 0 \) and by linearity we obtain \[ \left( {{T}_{0} - T}\right) \left( {gh}\right) = h\left( {{T}_{0} - T}\right) \left( g\right) \;\text{ on }{\mathbf{R}}^{n} \smallsetminus {E}_{g} \] whenever \( h \) is a finite linear combination of characteristic functions of cubes in \( \mathcal{F} \) , which is a dense subspace of \( {L}^{2} \) . Via a simple density argument, using the fact that \( {T}_{0} - T \) is \( {L}^{2} \) bounded, we obtain that for all \( f \) in \( {L}^{2} \) and \( g \) bounded with compact support there is a null set \( {E}_{f, g} \) such that \[ \left( {{T}_{0} - T}\right) \left( {gf}\right) = f\left( {{T}_{0} - T}\right) \left( g\right) \;\text{ on }{\mathbf{R}}^{n} \smallsetminus {E}_{f, g}. \]
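The identity just obtained says that \( {T}_{0} - T \) commutes with multiplication by arbitrary functions, and this is what forces it to be multiplication by a single function \( b \) . A finite-dimensional sketch of the same principle (ad hoc matrices, for illustration only): a linear map on \( {\mathbf{R}}^{n} \) that commutes with multiplication by every coordinate indicator must be diagonal, i.e., multiplication by a fixed vector.

```python
import numpy as np

def commutes_with_indicators(S):
    """Check S(chi_i * v) = chi_i * S(v) for every coordinate indicator,
    i.e., that S commutes with all diagonal 0/1 projections."""
    n = S.shape[0]
    for i in range(n):
        P = np.zeros((n, n))
        P[i, i] = 1.0                     # multiplication by chi_{i}
        if not np.allclose(S @ P, P @ S):
            return False
    return True

S = np.diag([2.0, -1.0, 0.5])             # multiplication by b = (2, -1, 0.5)
assert commutes_with_indicators(S)
b = np.diag(S)                            # the multiplier sits on the diagonal

R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])           # a genuinely non-local operator
assert not commutes_with_indicators(R)
```

In the proof above, the cubes with rational corners play the role of the finitely many coordinate projections here.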
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 5.1.27
Definition 5.1.27. A space \( X \) is of finite type if \( {H}_{n}\left( X\right) \) is finitely generated for each \( n \) . ## 5.2 The Geometric Meaning of \( {H}_{0} \) and \( {H}_{1} \) In this section we see the geometric content of the singular homology groups \( {H}_{0}\left( X\right) \) and \( {H}_{1}\left( X\right) \) . Theorem 5.2.1. Let \( X \) be a space. Then \( {H}_{0}\left( X\right) \) is isomorphic to the free abelian group on the path components of \( X \) . Proof. We assume \( X \) nonempty. We have already seen in Corollary 5.1.25 that if \( X = {X}_{1} \cup {X}_{2} \cup \cdots \) is a union of path components, then \( {H}_{i}\left( X\right) = {\bigoplus }_{k}{H}_{i}\left( {X}_{k}\right) \) . Thus it suffices to prove the theorem in case \( X \) is path connected, so we make that assumption. A singular 0 -simplex of \( X \) is a map \( f \) with \( f\left( *\right) = x \) for some \( x \in X \), so we may identify \( {C}_{0}\left( X\right) \) with the free abelian group on the points of \( X \), \( {C}_{0}\left( X\right) = \left\{ {\mathop{\sum }\limits_{i}{n}_{i}{x}_{i} \mid {n}_{i} \in \mathbb{Z},{x}_{i} \in X}\right\} \) . Since \( {C}_{-1}\left( X\right) = 0 \), \( {Z}_{0}\left( X\right) = {C}_{0}\left( X\right) \), i.e., every 0 -chain is a cycle. Define \( \varepsilon : {Z}_{0}\left( X\right) \rightarrow \mathbb{Z} \) by \( \varepsilon \left( {\mathop{\sum }\limits_{i}{n}_{i}{x}_{i}}\right) = \mathop{\sum }\limits_{i}{n}_{i} \) . We claim that \( \varepsilon \) is a surjection with kernel \( {B}_{0}\left( X\right) \) . Then \( {H}_{0}\left( X\right) = \) \( {Z}_{0}\left( X\right) /{B}_{0}\left( X\right) \cong \mathbb{Z} \) . Now to prove the claim. First, \( \varepsilon \) is obviously surjective: Choose \( x \in X \) . Then for any \( n \), \( \varepsilon \left( {nx}\right) = n \) . Next, \( \operatorname{Ker}\left( \varepsilon \right) \supseteq {B}_{0}\left( X\right) \) : \( {B}_{0}\left( X\right) \) is generated by the boundaries of singular 1-simplices. 
But a singular 1-simplex is a map \( f : I \rightarrow X \), and the boundary of that is \( q - p \) where \( q = f\left( 1\right) \) and \( p = f\left( 0\right) \) . Then \( \varepsilon \left( {q - p}\right) = 1 - 1 = 0 \) . Finally, \( \operatorname{Ker}\left( \varepsilon \right) \subseteq {B}_{0}\left( X\right) \) : Suppose \( \varepsilon \left( {\mathop{\sum }\limits_{i}{n}_{i}{x}_{i}}\right) = 0 \), i.e., \( \mathop{\sum }\limits_{i}{n}_{i} = 0 \) . Rewrite \( {n}_{i}{x}_{i} \) as \( {x}_{i} + \cdots + {x}_{i} \) , where there are \( {n}_{i} \) terms, if \( {n}_{i} > 0 \), or as \( - {x}_{i} - \cdots - {x}_{i} \), where there are \( \left| {n}_{i}\right| \) terms, if \( {n}_{i} < 0 \) . Then \( \mathop{\sum }\limits_{i}{n}_{i}{x}_{i} = {x}_{1}^{\prime } + \cdots + {x}_{k}^{\prime } + \left( {-{x}_{1}^{\prime \prime }}\right) + \cdots + \left( {-{x}_{k}^{\prime \prime }}\right) \) for some \( k \) and some points \( {x}_{1}^{\prime },\ldots ,{x}_{k}^{\prime },{x}_{1}^{\prime \prime },\ldots ,{x}_{k}^{\prime \prime } \) . But now for each \( i \) between 1 and \( k \), since \( X \) is path connected we may let \( {c}_{i} \) be the singular 1-simplex given by \( f : I \rightarrow X \) with \( f\left( 0\right) = {x}_{i}^{\prime \prime } \) and \( f\left( 1\right) = {x}_{i}^{\prime } \) . Then \( \partial {c}_{i} = {x}_{i}^{\prime } - {x}_{i}^{\prime \prime } \), so \( \mathop{\sum }\limits_{i}{n}_{i}{x}_{i} = \) \( \partial \left( {\mathop{\sum }\limits_{{j = 1}}^{k}{c}_{j}}\right) \in {B}_{0}\left( X\right) \) . Lemma 5.2.2. Let \( f : I \rightarrow X \) and \( g : I \rightarrow X \) with \( f\left( 1\right) = g\left( 0\right) \) . Define \( h : I \rightarrow X \) by \( h\left( t\right) = f\left( {2t}\right) \) for \( 0 \leq t \leq \frac{1}{2} \), and \( h\left( t\right) = g\left( {{2t} - 1}\right) \) for \( \frac{1}{2} \leq t \leq 1 \) . Then \( \left\lbrack {f + g - h}\right\rbrack = \) \( 0 \in {H}_{1}\left( X\right) \) . Proof. 
We exhibit a 2-cell \( C \) with \( \partial C = f + g - h \) . Here \( C : I \times I \rightarrow X \) is given by following \( f \) and then \( g \) along each of the heavy solid lines as indicated: ![21ef530b-1e09-406a-b041-cf4539af5c14_72_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_72_0.jpg) Then \( \partial C = f + g - h - k \), where \( k \) is the path on the left-hand side of the square. But that is the constant path at \( f\left( 0\right) \), and hence a degenerate 1-chain, so \( \partial C = f + g - h \) in \( {Z}_{1}\left( X\right) \) . Remark 5.2.3. Obviously this generalizes to the composition of any finite number of paths (proof by induction). Theorem 5.2.4. Let \( X \) be a path-connected space. The map \( \theta : {\pi }_{1}\left( {X,{x}_{0}}\right) \rightarrow {H}_{1}\left( X\right) \) given by \( \theta \left( f\right) = \left\lbrack f\right\rbrack \), where \( f : \left( {{S}^{1},1}\right) \rightarrow \left( {X,{x}_{0}}\right) \), is an epimorphism with kernel the commutator subgroup of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . Thus \( {H}_{1}\left( X\right) \) is isomorphic to the abelianization of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . Proof. There are several things to show: (1) \( \theta \) is a homomorphism: That follows immediately from Lemma 5.2.2. (2) \( \theta \) is surjective: Once and for all, for every point \( x \in X \) choose a path \( {\alpha }_{x} \) from \( {x}_{0} \) to \( x \) . We make this choice completely arbitrarily, except that we let \( {\alpha }_{{x}_{0}} \) be the constant path at \( {x}_{0} \) . Let \( {\beta }_{x} \) be \( {\alpha }_{x} \) run backwards, \( {\beta }_{x} \) from \( x \) to \( {x}_{0} \) . Let \( z = \mathop{\sum }\limits_{{i \in I}}{a}_{i}{T}^{i} \) represent an element of \( {H}_{1}\left( X\right) \), where \( {T}^{i} : {I}^{1} \rightarrow X \) . Let \( {p}_{i} = {T}^{i}\left( 0\right) \) and \( {q}_{i} = {T}^{i}\left( 1\right) \) . 
Then \( 0 = \partial z = \mathop{\sum }\limits_{{i \in I}}{a}_{i}\left( {{B}_{1}{T}^{i} - {A}_{1}{T}^{i}}\right) \), so after gathering terms the coefficient of every 0 -simplex \( {S}_{j} : {I}^{0} \rightarrow X, j \in J \), is zero. Then \( \mathop{\sum }\limits_{{i \in I}}{a}_{i}\left( {{\alpha }_{{p}_{i}} + {T}^{i} + {\beta }_{{q}_{i}}}\right) \) is homologous to \( \sum {a}_{i}{T}^{i} = z \) (since the coefficient of each 0 -simplex in \( \partial z \) is zero, the paths \( \alpha \) and \( \beta \) occur in pairs that cancel in homology). Now for each \( i \), \( {\alpha }_{{p}_{i}} + {T}^{i} + {\beta }_{{q}_{i}} \) is homologous, again by Lemma 5.2.2, to the image of an element of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) , that element being obtained by beginning at \( {x}_{0} \), following \( {\alpha }_{{p}_{i}} \) from \( {x}_{0} \) to \( {p}_{i} \) , then following \( {T}^{i} \) from \( {p}_{i} \) to \( {q}_{i} \), then following \( {\beta }_{{q}_{i}} \) from \( {q}_{i} \) back to \( {x}_{0} \) . (Observe that this composite path is a loop at \( {x}_{0} \) .) (3) \( \operatorname{Ker}\left( \theta \right) \supseteq G \), the commutator subgroup of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . Algebraically, that is immediate, as \( \operatorname{Im}\left( \theta \right) \subseteq {H}_{1}\left( X\right) \), which is an abelian group. But geometrically that is easy to see as well. It amounts to showing that if \( {f}_{1} \) and \( {f}_{2} \) are conjugate elements of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \), then \( \theta \left( {f}_{2}\right) = \theta \left( {f}_{1}\right) \) . But \( {f}_{1} \) and \( {f}_{2} \) conjugate simply means \( {f}_{2} = g{f}_{1}{g}^{-1} \) for some \( g \in {\pi }_{1}\left( {X,{x}_{0}}\right) \), and then \( \theta \left( {f}_{2}\right) = \theta \left( {f}_{1}\right) \) again by Lemma 5.2.2. (4) \( \operatorname{Ker}\left( \theta \right) \subseteq G \) . For a 1-cell \( T \), let \( \delta \left( T\right) = {\alpha }_{p}T{\beta }_{q} : I \rightarrow X \) where \( p = T\left( 0\right) \) and \( q = T\left( 1\right) \) . 
Observe that if \( T \) is degenerate, \( \delta \left( T\right) \) represents 1 in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . For a 2-cell \( U \), let \( \bigtriangleup \left( U\right) = \delta \left( {{A}_{2}U}\right) \delta \left( {{B}_{1}U}\right) \delta \left( {\left( {B}_{2}U\right) }^{-1}\right) \delta \left( {\left( {A}_{1}U\right) }^{-1}\right) \) where the inverse denotes that the path is traversed in the opposite direction. ![21ef530b-1e09-406a-b041-cf4539af5c14_73_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_73_0.jpg) Note that \( \bigtriangleup \left( U\right) \) is homotopic to \( {\alpha }_{p}{\beta }_{p}, p = U\left( {0,0}\right) \), so \( \bigtriangleup \left( U\right) \) represents 1 in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . Now suppose \( \theta \left( f\right) = 0 \) in \( {H}_{1}\left( X\right) \) . Then, as a 1-cycle, \[ f = \partial \left( {\mathop{\sum }\limits_{{k \in K}}{a}_{k}{U}^{k}}\right) \;\text{ in }{C}_{ * }\left( X\right) \] \[ = \partial \left( {\mathop{\sum }\limits_{{k \in K}}{a}_{k}{U}^{k}}\right) + \mathop{\sum }\limits_{{q \in Q}}{b}_{q}{D}_{q}\;\text{ in }{Q}_{ * }\left( X\right) \] where \( \left\{ {D}_{q}\right\} \) are degenerate 1-cells, \[ = \sum {a}_{k}\left( {{B}_{1}{U}^{k} - {A}_{1}{U}^{k} + {A}_{2}{U}^{k} - {B}_{2}{U}^{k}}\right) + \mathop{\sum }\limits_{q}{b}_{q}{D}_{q}. \] In this sum, \( f \) appears with coefficient 1 and every other nondegenerate cell appears with coefficient 0 . 
Now, if \( \left\lbrack \cdot \right\rbrack \) denotes the homotopy class in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) , \[ \mathop{\prod }\limits_{k}{\left\lbrack \bigtriangleup \left( {U}^{k}\right) \right\rbrack }^{{a}_{k}} = 1,\;\mathop{\prod }\limits_{q}{\left\lbrack \delta \left( {D}_{q}\right) \right\rbrack }^{{b}_{q}} = 1, \] so in \( {\pi }_{1}\left( {X,{x}_{0}}\right) /G \) , \[ 0 = \sum {a}_{k}\bigtriangleup \left( {U}^{k}\right) + \sum {b}_{q}\delta \left( {D}_{q}\right) \] \[ = \sum {a}_{k}\left( {\delta \left( {{B}_{1}{U}^{k}}\right) - \delta \left( {{A}_{1}{U}^{k}}\right) + \delta \left( {{A}_{2}{U}^{k}}\right) - \delta \left( {{B}_{2}{U}^{k}}\right) }\right) + \mathop{\sum }\limits_{q}{b}_{q}\delta \left( {D}_{q}\right) . \] Applying \( \delta
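The algebra behind steps (3) and (4) is exponent counting: abelianizing a word in a free group records only the exponent sum of each generator. A small sketch (words are lists of signed generators; the particular words are made-up examples) checks that commutators abelianize to zero and that conjugate words have equal images, mirroring \( \theta \left( {g{f}{g}^{-1}}\right) = \theta \left( f\right) \) .

```python
from collections import Counter

def abelianize(word):
    """Exponent sum of each generator: the image of a word in a free
    group F(a, b, ...) under F -> F/[F, F].  A word is a list of
    (generator, +1/-1) pairs read left to right."""
    out = Counter()
    for gen, e in word:
        out[gen] += e
    return {g: n for g, n in out.items() if n != 0}

a, A = ("a", 1), ("a", -1)
b, B = ("b", 1), ("b", -1)

# The commutator a b a^{-1} b^{-1} dies under abelianization ...
assert abelianize([a, b, A, B]) == {}

# ... and conjugate words g f g^{-1} and f have the same image:
f = [a, a, b]
g = [b, A]
g_inv = [a, B]
assert abelianize(g + f + g_inv) == abelianize(f)
```

This is exactly why only the commutator subgroup can be lost when passing from \( {\pi }_{1} \) to the abelian group \( {H}_{1} \) .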
1359_[陈省身] Lectures on Differential Geometry
Definition 2.1
Definition 2.1. Suppose \( V \) is an \( n \) -dimensional vector space over \( \mathbb{F} \) with dual space \( {V}^{ * } \) . The elements in the tensor product \[ {V}_{s}^{r} = \underset{r\text{ terms }}{\underbrace{V \otimes \cdots \otimes V}} \otimes \underset{s\text{ terms }}{\underbrace{{V}^{ * } \otimes \cdots \otimes {V}^{ * }}} \] (2.1) are called \( \left( {r, s}\right) \) -type tensors, where \( r \) is the contravariant order and \( s \) is the covariant order. In particular, the elements in \( {V}_{0}^{r} \) are called contravariant tensors of order \( r \), and those in \( {V}_{s}^{0} \) are called covariant tensors of order \( s \) . We also have the following conventions: \( {V}_{0}^{0} = \mathbb{F},{V}_{0}^{1} = V,{V}_{1}^{0} = {V}^{ * } \) . The elements of \( V \) are called contravectors, and the elements of \( {V}^{ * } \) are called covectors. Remark. In practice, the elements in \( V \) and in \( {V}^{ * } \) may appear alternately in the tensor product \( {V}_{s}^{r} \) . Writing them down in the order as shown in (2.1) is done here just for convenience of notation. By the discussion in \( §2 - 1,\dim {V}_{s}^{r} = {n}^{r + s} \), and \[ {V}_{s}^{r} = \mathcal{L}\left( {\underset{r\text{ terms }}{\underbrace{{V}^{ * },\ldots ,{V}^{ * }}},\underset{s\text{ terms }}{\underbrace{V,\ldots, V}};\mathbb{F}}\right) . \] This shows that \( \left( {r, s}\right) \) -type tensors are \( \mathbb{F} \) -valued \( \left( {r + s}\right) \) -linear functions defined on \[ \underset{r\text{ terms }}{\underbrace{{V}^{ * } \times \cdots \times {V}^{ * }}} \times \underset{s\text{ terms }}{\underbrace{V \times \cdots \times V}}. \] Suppose \( \{ {e}_{i}{\} }_{1 \leq i \leq n} \) and \( \{ {e}^{*i}{\} }_{1 \leq i \leq n} \) are dual bases in \( V \) and \( {V}^{ * } \), respectively. 
Then \[ {e}_{{i}_{1}} \otimes \cdots \otimes {e}_{{i}_{r}} \otimes {e}^{*{k}_{1}} \otimes \cdots \otimes {e}^{*{k}_{s}},\;1 \leq {i}_{1},\ldots ,{i}_{r},{k}_{1},\ldots ,{k}_{s} \leq n \] (2.2) form a basis of \( {V}_{s}^{r} \) . Therefore an \( \left( {r, s}\right) \) -type tensor \( x \) can be uniquely expressed as \[ x = \mathop{\sum }\limits_{\substack{{{i}_{1}\cdots {i}_{r}} \\ {{k}_{1}\cdots {k}_{s}} }}{x}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}}{e}_{{i}_{1}} \otimes \cdots \otimes {e}_{{i}_{r}} \otimes {e}^{*{k}_{1}} \otimes \cdots \otimes {e}^{*{k}_{s}}, \] (2.3) where the \( {x}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}} \) are called the components of the tensor \( x \) under the basis (2.2). It is obvious that \[ {x}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}} = x\left( {{e}^{*{i}_{1}},\ldots ,{e}^{*{i}_{r}},{e}_{{k}_{1}},\ldots ,{e}_{{k}_{s}}}\right) \] \[ = \left\langle {{e}^{*{i}_{1}} \otimes \cdots \otimes {e}^{*{i}_{r}} \otimes {e}_{{k}_{1}} \otimes \cdots \otimes {e}_{{k}_{s}}, x}\right\rangle . \] (2.4) In working with tensors, we often use the summation convention of Einstein: if an index occurs as both a subscript and superscript in the same term, then the term is summed over the range of the repeated index, and the summation sign is omitted. For example, there should be \( \left( {r + s}\right) \) summation signs in (2.3), but by this convention it can be written as \[ x = {x}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}}{e}_{{i}_{1}} \otimes \cdots \otimes {e}_{{i}_{r}} \otimes {e}^{*{k}_{1}} \otimes \cdots \otimes {e}^{*{k}_{s}}. \] (2.5) When the basis of the vector space \( V \) is changed, the components of a tensor are changed according to specific rules. Suppose \( {\left\{ {\bar{e}}_{i}\right\} }_{1 \leq i \leq n} \) is another basis of \( V \) with dual basis \( {\left\{ {\bar{e}}^{*i}\right\} }_{1 \leq i \leq n} \) . 
We may assume that under the original basis, \[ {\bar{e}}_{i} = {\alpha }_{i}^{j}{e}_{j} \] (2.6) where \( \alpha = \left( {\alpha }_{i}^{j}\right) \) is a nonsingular \( n \times n \) matrix. Therefore \[ {\bar{e}}^{*i} = {\beta }_{j}^{i}{e}^{*j} \] (2.7) where \( \beta = \left( {\beta }_{i}^{j}\right) \) is the inverse matrix of \( \alpha \), that is, \[ {\alpha }_{i}^{j}{\beta }_{j}^{k} = {\beta }_{i}^{j}{\alpha }_{j}^{k} = {\delta }_{i}^{k} \] (2.8) If the components of a tensor \( x \) under the new basis are denoted by \( {\bar{x}}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}} \) , then \[ x = {\bar{x}}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}}{\bar{e}}_{{i}_{1}} \otimes \cdots \otimes {\bar{e}}_{{i}_{r}} \otimes {\bar{e}}^{*{k}_{1}} \otimes \cdots \otimes {\bar{e}}^{*{k}_{s}} \] \[ = {\bar{x}}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}}{\alpha }_{{i}_{1}}^{{j}_{1}}\cdots {\alpha }_{{i}_{r}}^{{j}_{r}}{\beta }_{{l}_{1}}^{{k}_{1}}\cdots {\beta }_{{l}_{s}}^{{k}_{s}}{e}_{{j}_{1}} \otimes \cdots \otimes {e}_{{j}_{r}} \otimes {e}^{*{l}_{1}} \otimes \cdots \otimes {e}^{*{l}_{s}}. \] Therefore \[ {x}_{{l}_{1}\cdots {l}_{s}}^{{j}_{1}\cdots {j}_{r}} = {\bar{x}}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}}{\alpha }_{{i}_{1}}^{{j}_{1}}\cdots {\alpha }_{{i}_{r}}^{{j}_{r}}{\beta }_{{l}_{1}}^{{k}_{1}}\cdots {\beta }_{{l}_{s}}^{{k}_{s}}. \] (2.9) In classical tensor analysis, (2.9) is used to define tensors. The space \( {V}_{s}^{r} \) of all \( \left( {r, s}\right) \) -type tensors is a vector space. Thus two tensors of the same type can be added, and any tensor can be multiplied by scalars. Also tensors admit the operations of multiplication and contraction. The product of two tensors is their tensor product when they are viewed as multilinear functions. Definition 2.2. Suppose \( x \) is an \( \left( {{r}_{1},{s}_{1}}\right) \) -type tensor and \( y \) is an \( \left( {{r}_{2},{s}_{2}}\right) \) -type tensor. 
Then their tensor product \( x \otimes y \) is an \( \left( {{r}_{1} + {r}_{2},{s}_{1} + {s}_{2}}\right) \) -type tensor given by \[ x \otimes y\left( {{v}^{*1},\ldots ,{v}^{*{r}_{1} + {r}_{2}},{v}_{1},\ldots ,{v}_{{s}_{1} + {s}_{2}}}\right) \] \[ = x\left( {{v}^{*1},\ldots ,{v}^{*{r}_{1}},{v}_{1},\ldots ,{v}_{{s}_{1}}}\right) \] (2.10) \[ \cdot y\left( {{v}^{*{r}_{1} + 1},\ldots ,{v}^{*{r}_{1} + {r}_{2}},{v}_{{s}_{1} + 1},\ldots ,{v}_{{s}_{1} + {s}_{2}}}\right) . \] When a basis is chosen, the components of \( x \otimes y \) are the products of the components of \( x \) and \( y \), i.e., \[ {\left( x \otimes y\right) }_{{k}_{1}\cdots {k}_{{s}_{1} + {s}_{2}}}^{{i}_{1}\cdots {i}_{{r}_{1} + {r}_{2}}} = {x}_{{k}_{1}\cdots {k}_{{s}_{1}}}^{{i}_{1}\cdots {i}_{{r}_{1}}} \cdot {y}_{{k}_{{s}_{1} + 1}\cdots {k}_{{s}_{1} + {s}_{2}}}^{{i}_{{r}_{1} + 1}\cdots {i}_{{r}_{1} + {r}_{2}}}. \] (2.11) As discussed in \( §2 - 1 \), the multiplication of tensors satisfies the distributive and associative laws (see Theorem 1.2). Definition 2.3. Choose two indices \( \lambda ,\mu ,1 \leq \lambda \leq r,1 \leq \mu \leq s \) . For any reducible \( \left( {r, s}\right) \) -type tensor \[ x = {v}_{1} \otimes \cdots \otimes {v}_{r} \otimes {v}^{*1} \otimes \cdots \otimes {v}^{*s} \in {V}_{s}^{r}, \] (2.12) let \[ {C}_{\lambda \mu }\left( x\right) = \left\langle {{v}_{\lambda },{v}^{*\mu }}\right\rangle {v}_{1} \otimes \cdots \otimes {\widehat{v}}_{\lambda } \otimes \cdots \otimes {v}_{r} \otimes {v}^{*1} \otimes \cdots \otimes {\widehat{v}}^{*\mu } \otimes \cdots \otimes {v}^{*s}, \] (2.13) where the notation " \( {\widehat{v}}_{\lambda } \) " means that we omit the term \( {v}_{\lambda } \) . Then \( {C}_{\lambda \mu }\left( x\right) \in \) \( {V}_{s - 1}^{r - 1} \) . We extend the map \( x \mapsto {C}_{\lambda \mu }\left( x\right) \) linearly to get a linear map \( {C}_{\lambda \mu } \) : \( {V}_{s}^{r} \rightarrow {V}_{s - 1}^{r - 1} \), called a tensor contraction map. 
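Formula (2.11) for the components of a tensor product can be checked directly with `numpy` (random data, purely illustrative): for a vector \( u \) and a covector \( w \), the tensor \( u \otimes w \) has components \( {u}^{i}{w}_{k} \), the outer product, and evaluating it as a bilinear function reproduces the defining formula (2.10).

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(3)                 # a (1,0)-type tensor (contravector)
w = rng.standard_normal(3)                 # a (0,1)-type tensor (covector)

# Components (2.11): (u ⊗ w)^i_k = u^i w_k, i.e., the outer product.
uw = np.einsum("i,k->ik", u, w)
assert np.allclose(uw, np.outer(u, w))

vstar = rng.standard_normal(3)             # an argument from V*
v = rng.standard_normal(3)                 # an argument from V
# Defining property (2.10): (u ⊗ w)(v*, v) = u(v*) · w(v)
assert np.isclose(np.einsum("ik,i,k->", uw, vstar, v), (vstar @ u) * (w @ v))
```

Here `np.einsum` carries out exactly the Einstein summation of (2.5): every index that appears twice in the index string is summed over.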
Suppose \( x \) can be expressed in components as \[ x = {x}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}}{e}_{{i}_{1}} \otimes \cdots \otimes {e}_{{i}_{r}} \otimes {e}^{*{k}_{1}} \otimes \cdots \otimes {e}^{*{k}_{s}}. \] (2.14) By the definition of tensor contraction, we get \[ {C}_{\lambda \mu }\left( x\right) = {x}_{{k}_{1}\cdots {k}_{s}}^{{i}_{1}\cdots {i}_{r}}{C}_{\lambda \mu }\left( {{e}_{{i}_{1}} \otimes \cdots \otimes {e}_{{i}_{r}} \otimes {e}^{*{k}_{1}} \otimes \cdots \otimes {e}^{*{k}_{s}}}\right) \] \( \left( {2.15}\right) \) \[ = {x}_{{k}_{1}\cdots {k}_{\mu - 1}j{k}_{\mu }\cdots {k}_{s - 1}}^{{i}_{1}\cdots {i}_{\lambda - 1}j{i}_{\lambda }\cdots {i}_{r - 1}}{e}_{{i}_{1}} \otimes \cdots \otimes {e}_{{i}_{r - 1}} \otimes {e}^{*{k}_{1}} \otimes \cdots \otimes {e}^{*{k}_{s - 1}}. \] Thus from the viewpoint of components, the tensor contraction \( {C}_{\lambda \mu } \) is the sum over equal values of the \( \lambda \) -th upper index and the \( \mu \) -th lower index. A contraction lowers the order of a tensor, and is a very basic operation. For instance, suppose \( x = {\xi }_{j}^{i}{e}_{i} \otimes {e}^{*j} \) is a \( \left( {1,1}\right) \) -type tensor. Then the tensor contraction of \( x \) is the trace \( \mathop{\sum }\limits_{{i = 1}}^{n}{\xi }_{i}^{i} \) of the matrix \( \left( {\xi }_{j}^{i}\right) \), and hence a scalar independent of the choice of coordinate systems. Suppose \[ {T}^{r}\left( V\right) = {V}_{0}^{r} = \underset{r\text{ terms }}{\underbrace{V \otimes \cdots \otimes V}}. \] Consider the direct sum \( T\left( V\right) = \mathop{\sum }\limits_{{r \geq 0}}{T}^{r}\left( V\right) \) . Any element \( x \) in it can be expressed as the formal sum \[ x = \mathop{\sum }\limits_{{r \geq 0}}{x}^{r},\;{x}^{r} \in {T}^{r}\left( V\right) \] (2.16) where all but finitely many terms are zero. Thus \( T\left( V\right) \) is an infinite-dimensional vector space. 
With the distributive law, the multiplication between tensors can be extended to multiplication in \( T\left( V\right) \) . Therefore the vector space \( T\left( V\right) \) becomes an algebra with respect to this multiplication, and is called the tensor algebra of \( V \) . Similarly, the tensor algebra of \( {V}^{ * } \) is \( T\left( {V}^{ * }\right) =
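The transformation rule (2.9) and the basis-invariance of contraction can be checked numerically for a \( \left( {1,1}\right) \) -type tensor (random data, illustrative only; in the sketch `alpha[i, j]` stores \( {\alpha }_{i}^{j} \), `beta[l, k]` stores \( {\beta }_{l}^{k} \), and `np.einsum` performs the Einstein summations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
alpha = rng.standard_normal((n, n)) + n * np.eye(n)   # a nonsingular change of basis
beta = np.linalg.inv(alpha)                           # (2.8): beta is the inverse of alpha

xbar = rng.standard_normal((n, n))                    # components xbar^i_k in the new basis
# (2.9) for a (1,1)-type tensor: x^j_l = xbar^i_k alpha_i^j beta_l^k
x = np.einsum("ik,ij,lk->jl", xbar, alpha, beta)

# Transforming back (alpha and beta exchange roles) recovers xbar:
xbar_again = np.einsum("jl,ji,kl->ik", x, beta, alpha)
assert np.allclose(xbar_again, xbar)

# The contraction of a (1,1)-type tensor is the trace, and it does not
# depend on the basis:
assert np.isclose(np.einsum("ii->", x), np.einsum("ii->", xbar))
```

The last assertion is the component-free statement made above: the contraction is a scalar attached to the tensor itself, not to any particular basis.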
109_The rising sea Foundations of Algebraic Geometry
Definition 1.38
Definition 1.38. Given two cells \( A, B \in \sum \), their product is the cell \( {AB} \) with sign sequence \[ {\sigma }_{i}\left( {AB}\right) = \left\{ \begin{array}{ll} {\sigma }_{i}\left( A\right) & \text{ if }{\sigma }_{i}\left( A\right) \neq 0, \\ {\sigma }_{i}\left( B\right) & \text{ if }{\sigma }_{i}\left( A\right) = 0. \end{array}\right. \] (1.4) The cell \( {AB} \) is characterized by the property that if we choose \( x \in A \) and \( y \in B \), then \( \left( {1 - t}\right) x + {ty} \) is in \( {AB} \) for all sufficiently small \( t > 0 \) . See Figure 1.5 for a simple example, where \( A \) and \( B \) are half-lines and \( {AB} \) turns out to be a chamber. For a second example, let \( {A}^{\prime } \) be the half-line opposite \( A \) in the same figure; then \( A{A}^{\prime } = A \) . One can easily check from (1.4) ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_47_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_47_0.jpg) Fig. 1.5. The product of two half-lines. that the associative law holds: \[ A\left( {BC}\right) = \left( {AB}\right) C \] (1.5) for all \( A, B, C \in \sum \) . In fact, the triple product, with either way of associating, can be characterized by the property that \( {\sigma }_{i}\left( {ABC}\right) \) is \( {\sigma }_{i}\left( A\right) \) unless \( {\sigma }_{i}\left( A\right) = 0 \) , in which case it is \( {\sigma }_{i}\left( B\right) \) unless \( {\sigma }_{i}\left( B\right) = 0 \), in which case it is \( {\sigma }_{i}\left( C\right) \) . So \( \sum \) is indeed a semigroup. It has an identity, consisting of the cell \( \mathop{\bigcap }\limits_{{i \in I}}{H}_{i} \) with sign sequence \( \left( {0,0,\ldots ,0}\right) \) . Following Tits [247], we will often call \( {AB} \) the projection of \( B \) on \( A \) and write \[ {AB} = {\operatorname{proj}}_{A}B. \] This may serve as a reminder of the geometric meaning of the product. 
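The product (1.4) and the associative law (1.5) are purely combinatorial statements about sign sequences, so they can be checked exhaustively. In the sketch below every sequence in \( \{ -1,0, + 1{\} }^{3} \) is treated as a formal cell; not every such sequence arises from an actual arrangement, but the semigroup identities are formal, so this is only an illustration.

```python
from itertools import product

def cell_product(A, B):
    """Product (1.4) of two cells given by sign sequences over {-1, 0, +1}:
    take A's sign where it is nonzero, otherwise fall back to B's."""
    return tuple(a if a != 0 else b for a, b in zip(A, B))

signs = (-1, 0, 1)
cells = list(product(signs, repeat=3))       # all formal sign sequences of length 3

# Associativity (1.5) holds for every triple:
for A in cells:
    for B in cells:
        for C in cells:
            assert cell_product(A, cell_product(B, C)) == \
                   cell_product(cell_product(A, B), C)

# The identity is the all-zero sequence, and A A' = A for opposite cells:
e = (0, 0, 0)
A = (1, 0, 0)
assert cell_product(e, A) == A and cell_product(A, e) == A
assert cell_product(A, tuple(-s for s in A)) == A
```

The last line is the half-line example from the text: the product of a half-line with its opposite is the half-line itself.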
We will see, however, that the product notation is quite useful, especially to facilitate application of the associative law. Note that the associative law, in the language of projections, takes the complicated form \[ {\operatorname{proj}}_{A}\left( {{\operatorname{proj}}_{B}C}\right) = {\operatorname{proj}}_{{\operatorname{proj}}_{A}B}C. \] (1.6) Equation (1.6) appears (in a slightly different context from ours) in Tits's appendix [249] to Solomon's paper [221] on the descent algebra, and the observation that (1.6) is actually an associative law can be used to give a much simpler treatment; see [55]. The geometry of projections is especially clear when the second factor is a chamber. In order to state the result, we introduce a metric on the set \( \mathcal{C} \mathrel{\text{:=}} \mathcal{C}\left( \mathcal{H}\right) \) of chambers. We will temporarily denote this metric by \( {d}_{\mathcal{H}}\left( {-, - }\right) \) ; later, after showing that \( {d}_{\mathcal{H}} \) coincides with another naturally defined metric, we will drop the subscript \( \mathcal{H} \) . Definition 1.39. The distance \( {d}_{\mathcal{H}}\left( {C, D}\right) \) between two chambers \( C, D \) is the number of hyperplanes in \( \mathcal{H} \) separating \( C \) and \( D \) . Equivalently, \( {d}_{\mathcal{H}}\left( {C, D}\right) \) is the number of positions at which the sign sequences of \( C \) and \( D \) differ. The following result justifies the term "projection." Proposition 1.40. Given a cell \( A \) and a chamber \( C \), the product \( {AC} \) (or the projection of \( C \) on \( A \) ) is a chamber having \( A \) as a face; among the chambers having \( A \) as a face, it is the unique one at minimal distance from \( C \) . Proof. To minimize the distance to \( C \) of a chamber \( D \geq A \), we must maximize the number of indices \( i \) such that \( {\sigma }_{i}\left( D\right) = {\sigma }_{i}\left( C\right) \) . 
We have no choice about \( {\sigma }_{i}\left( D\right) \) whenever \( {\sigma }_{i}\left( A\right) \neq 0 \), so the best we can do is make \( {\sigma }_{i}\left( D\right) = {\sigma }_{i}\left( C\right) \) whenever \( {\sigma }_{i}\left( A\right) = 0 \) . This is precisely what the definition of \( {AC} \) in (1.4) achieves. Finally, since \( \sum \) is now both a poset and a semigroup, it is natural to ask how these structures interact. We record a few simple results in the following proposition, whose proof is routine and is left to the reader. Proposition 1.41. Let \( A \) and \( B \) be arbitrary cells. (1) \( A \leq {AB} \), with equality if and only if \( \operatorname{supp}B \leq \operatorname{supp}A \) . (2) \( A \leq B \) if and only if \( {AB} = B \) . (3) \( \operatorname{supp}A = \operatorname{supp}B \) if and only if \( {AB} = A \) and \( {BA} = B \) . (4) \( {AB} \) and \( {BA} \) have the same support, which is the intersection of the hy-perplanes in \( \mathcal{H} \) containing both \( A \) and \( B \) . ## Exercises 1.42. Prove the following more precise version of Proposition 1.40: For any chamber \( D \geq A \) , \[ {d}_{\mathcal{H}}\left( {C, D}\right) = {d}_{\mathcal{H}}\left( {C,{AC}}\right) + {d}_{\mathcal{H}}\left( {{AC}, D}\right) . \] (1.7) In the language of Dress-Scharlau [97], this says that the set \( {\mathcal{C}}_{ \geq A} \) of chambers \( D \geq A \) is a gated subset of the metric space of chambers. Here \( {AC} \) is the "gate" through which one enters \( {\mathcal{C}}_{ \geq A} \) to get from \( C \) to an arbitrary chamber \( D \geq A \) . See Figure 1.6 for a schematic illustration. 1.43. We say that cells \( A, B,\ldots \) are joinable if they have an upper bound in the poset \( \sum \) . Show that this holds if and only if they commute with one another in the semigroup \( \sum \), in which case their product is their least upper bound. 
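Proposition 1.40 and the gate property of Exercise 1.42 can be verified concretely for the braid arrangement in \( {\mathbb{R}}^{3} \) (a small worked example; the choice of the face \( A \) is arbitrary). Chambers are encoded by the signs of \( {x}_{1} - {x}_{2},{x}_{1} - {x}_{3},{x}_{2} - {x}_{3} \) .

```python
from itertools import permutations

def sign_seq(pt):
    """Sign sequence of a point under x1 - x2, x1 - x3, x2 - x3."""
    s = lambda t: (t > 0) - (t < 0)
    return (s(pt[0] - pt[1]), s(pt[0] - pt[2]), s(pt[1] - pt[2]))

# The 3! chambers, one per ordering of the coordinates:
chambers = {sign_seq(p) for p in permutations((1, 2, 3))}

def prod(A, B):                              # product (1.4)
    return tuple(a if a != 0 else b for a, b in zip(A, B))

def dist(C, D):                              # d_H: number of separating hyperplanes
    return sum(c != d for c, d in zip(C, D))

def is_face(A, D):                           # A <= D iff AD = D (Proposition 1.41)
    return prod(A, D) == D

A = sign_seq((1, 1, 0))                      # the face x1 = x2 > x3
for C in chambers:
    AC = prod(A, C)
    # Proposition 1.40: AC is a chamber having A as a face ...
    assert AC in chambers and is_face(A, AC)
    # ... and the gate property (1.7) holds for every chamber D >= A:
    for D in chambers:
        if is_face(A, D):
            assert dist(C, D) == dist(C, AC) + dist(AC, D)
```

So \( {AC} \) really is the "gate" of \( {\mathcal{C}}_{ \geq A} \) as seen from \( C \) : every path of separating hyperplanes into \( {\mathcal{C}}_{ \geq A} \) passes through it.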
![85b011f4-34bf-48b4-8882-cd79e6f4beb0_49_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_49_0.jpg) Fig. 1.6. The gate property. 1.44. If \( A \) and \( B \) have the same support, show that left multiplication by \( A \) gives a bijection \( {\sum }_{ \geq B} \rightarrow {\sum }_{ \geq A} \), with inverse given by multiplication by \( B \) . This holds, for example, if \( A \) and \( B \) are opposite, i.e., \( A = - B \) . 1.45. Given \( A \in \sum \), show that the poset \( {\sum }_{ \geq A} \) is isomorphic to the set of cells of a hyperplane arrangement. ## 1.4.7 Example: The Braid Arrangement Let \( \mathcal{H} \) be the arrangement in \( {\mathbb{R}}^{n} \) consisting of the \( \left( \begin{array}{l} n \\ 2 \end{array}\right) \) hyperplanes \( {x}_{i} = {x}_{j} \) \( \left( {i \neq j}\right) \) . This has already occurred implicitly in the discussion of Example 1.10. This arrangement, or its essential version in the \( \left( {n - 1}\right) \) -dimensional subspace \( {x}_{1} + \cdots + {x}_{n} = 0 \), is called the braid arrangement for reasons that are explained in [182]. It is also called, for more transparent reasons, the reflection arrangement of type \( {A}_{n - 1} \) . A chamber with respect to \( \mathcal{H} \) is a nonempty set defined by inequalities \( {x}_{i} - {x}_{j} > 0 \) or \( {x}_{i} - {x}_{j} < 0\left( {i < j}\right) \), i.e., it is a set defined by specifying an ordering of the coordinates. Thus there are \( n \) ! chambers, one for each possible ordering. A typical chamber is given by \[ {x}_{\pi \left( 1\right) } > {x}_{\pi \left( 2\right) } > \cdots > {x}_{\pi \left( n\right) } \] where \( \pi \) is a permutation of \( \{ 1,2,\ldots, n\} \) . Figure 1.7 shows the correspondence between chambers and permutations when \( n = 4 \) . Here a permutation \( \pi \) is represented by its list of values \( \pi \left( 1\right) \pi \left( 2\right) \cdots \pi \left( n\right) \) . 
(Recall that this is a rank-3 example, and that cells can be represented by their intersections with the unit sphere; see Figure 1.3.) Faces are gotten by changing zero or more inequalities to equalities. They correspond to compositions \( B = \left( {{B}_{1},{B}_{2},\ldots ,{B}_{k}}\right) \) of the set \( \{ 1,2,\ldots, n\} \), also called ordered partitions. Here the blocks \( {B}_{i} \) form a set partition in the usual sense, and their order matters. The set composition \( B \) encodes the ordering of the coordinates and which coordinates are equal to one another. For example, the common face between the chambers 1234 and 1324 in Figure 1.7 is given by \[ {x}_{1} > {x}_{2} = {x}_{3} > {x}_{4} \] and it corresponds to the set composition \( \left( {\{ 1\} ,\{ 2,3\} ,\{ 4\} }\right) \) . Notice that the chambers can be identified with the set compositions in which all blocks are singletons. ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_50_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_50_0.jpg) Fig. 1.7. Chambers correspond to permutations. One can verify from equation (1.4) the following interpretation of the product in terms of set compositions: Take (nonempty) intersections of the blocks in lexicographic order; more precisely, if \( B = \left( {{B}_{1},\ldots ,{B}_{l}}\right) \) and \( C = \left( {{C}_{1},\ldots ,{C}_{m}}\right) \), then \[ {BC} = {\left( {B}_{1} \cap {C}_{1},\ldots ,{B}_{1} \cap {C}_{m},\ldots ,{B}_{l} \cap {C}_{1},\ldots ,{B}_{l} \cap {C}_{m}\right) }^{ \frown }, \] where the hat means "delete empty intersections." More briefly, \( {BC} \) is obtained by using \( C \) to refine \( B \) . This product, at least when the second factor is a chamber, has an interesting interpretation in terms of card shuffling. See [55] and further references cited there. Remark 1.46. Although chambers correspond to permutations, their product in the semigroup \( \sum \) of cells has nothing to do with the usual product of permutations. 
In fact, the product of two chambers is always equal to the first factor. ## 1.4.8 Formal Properties of the Poset of Cells Recall from Section 1.4.2 that the poset \( \sum \mathrel{\text{:=}} \sum \left( \mathcal{H}\right) \) of (open) cells is isomorphic to the poset of closed cells, where the latt
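The set-composition description of the product in Section 1.4.7 (use \( C \) to refine \( B \), deleting empty intersections) is easy to implement directly; a small sketch checking the worked example from the text and the observation of Remark 1.46:

```python
from itertools import permutations

def compose(B, C):
    """Product BC of set compositions: intersect the blocks of B with those of C
    in lexicographic order, deleting empty intersections ('use C to refine B')."""
    return tuple(tuple(sorted(set(b) & set(c)))
                 for b in B for c in C if set(b) & set(c))

# The common face B of the chambers 1234 and 1324, times the chamber C = 1324:
B = ((1,), (2, 3), (4,))          # x1 > x2 = x3 > x4
C = ((1,), (3,), (2,), (4,))      # chamber 1324
assert compose(B, C) == C         # B <= C, so BC = C (Proposition 1.41(2))

# Remark 1.46: the product of two chambers is always the first factor.
chambers = [tuple((i,) for i in p) for p in permutations(range(1, 5))]
assert all(compose(D, E) == D for D in chambers for E in chambers)
print("set-composition product checks pass")
```

Since a chamber has all blocks singletons, refining a chamber changes nothing, which is exactly why the product of two chambers returns the first factor.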
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadowski
Definition 4.23
Definition 4.23. The directional or contingent derivative at \( z \mathrel{\text{:=}} \left( {x, y}\right) \) of a multimap \( F : X \rightrightarrows Y \) between two normed spaces is the multimap \( {DF}\left( {x, y}\right) : X \rightrightarrows Y \) whose graph is the tangent cone \( T\left( {F, z}\right) \mathrel{\text{:=}} {T}^{D}\left( {F, z}\right) \) to the graph of \( F \) at \( z \) : \[ {DF}\left( {x, y}\right) \left( u\right) \mathrel{\text{:=}} {D}_{D}F\left( {x, y}\right) \left( u\right) \mathrel{\text{:=}} \{ v \in Y : \left( {u, v}\right) \in T\left( {F, z}\right) \} . \] Definition 4.24. The directional (or contingent) coderivative of \( F : X \rightrightarrows Y \) at \( z \mathrel{\text{:=}} \left( {x, y}\right) \) is the multimap \( {D}^{ * }F\left( {x, y}\right) \mathrel{\text{:=}} {D}_{D}^{ * }F\left( {x, y}\right) : {Y}^{ * } \rightrightarrows {X}^{ * } \) that is the transpose of \( {DF}\left( {x, y}\right) \) : \[ {D}^{ * }F\left( {x, y}\right) \left( {y}^{ * }\right) = \left\{ {{x}^{ * } \in {X}^{ * } : \left\langle {{x}^{ * }, u}\right\rangle - \left\langle {{y}^{ * }, v}\right\rangle \leq 0\;\forall u \in X,\forall v \in {DF}\left( z\right) \left( u\right) }\right\} \] \[ = \left\{ {{x}^{ * } \in {X}^{ * } : \left( {{x}^{ * }, - {y}^{ * }}\right) \in {N}_{D}\left( {F, z}\right) }\right\} . \] The firm (or Fréchet) coderivative of \( F \) at \( \left( {x, y}\right) \) is the multimap \( {D}_{F}^{ * }F\left( {x, y}\right) : {Y}^{ * } \rightrightarrows {X}^{ * } \) given by \[ {D}_{F}^{ * }F\left( {x, y}\right) \left( {y}^{ * }\right) \mathrel{\text{:=}} \left\{ {{x}^{ * } \in {X}^{ * } : \left( {{x}^{ * }, - {y}^{ * }}\right) \in {N}_{F}\left( {F, z}\right) }\right\} . \] Since \( {N}_{F}\left( {F, z}\right) \subset {N}_{D}\left( {F, z}\right) \), one has \( {D}_{F}^{ * }F\left( {x, y}\right) \left( {y}^{ * }\right) \subset {D}^{ * }F\left( {x, y}\right) \left( {y}^{ * }\right) \) for all \( {y}^{ * } \in {Y}^{ * } \) .
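For a single-valued differentiable map the coderivative reduces to the adjoint of the derivative, \( {D}^{ * }g\left( x\right) \left( {y}^{ * }\right) = \left\{ {{g}^{\prime }{\left( x\right) }^{\top }{y}^{ * }}\right\} \), a fact recorded in the text just below. A quick finite-difference sanity check, with an illustrative smooth map of our own choosing and assuming NumPy is available:

```python
import numpy as np

def g(x):  # an illustrative smooth map R^2 -> R^2 (not from the text)
    return np.array([x[0]**2 + x[1], x[0] * x[1]])

def jacobian_fd(f, x, h=1e-6):
    """Central finite-difference Jacobian J with J[i, j] = d f_i / d x_j."""
    n, m = len(x), len(f(x))
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x_bar = np.array([1.0, 2.0])
y_star = np.array([3.0, -1.0])

# Gradient of the scalarization y* o g versus the adjoint J^T y*:
lhs = jacobian_fd(lambda x: np.array([y_star @ g(x)]), x_bar)[0]
rhs = jacobian_fd(g, x_bar).T @ y_star
assert np.allclose(lhs, rhs, atol=1e-4)
# Analytically J = [[2 x0, 1], [x1, x0]], so J^T y* = (4, 2) at x_bar = (1, 2):
assert np.allclose(lhs, [4.0, 2.0], atol=1e-3)
```

This is exactly the smooth case in which the scalarization inclusions of Proposition 4.25 below collapse to equalities.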
When \( F\left( x\right) \) is a singleton \( \{ y\} \), one writes \( {DF}\left( x\right) \) instead of \( {DF}\left( {x, y}\right) \) and \( {D}^{ * }F\left( x\right) \) (resp. \( {D}_{F}^{ * }F\left( x\right) \) ) instead of \( {D}^{ * }F\left( {x, y}\right) \) (resp. \( {D}_{F}^{ * }F\left( {x, y}\right) \) ). When \( F \) is a mapping that is Hadamard differentiable at \( x \), one has \( {DF}\left( x\right) = {F}^{\prime }\left( x\right) \) and \( {D}^{ * }F\left( {x, y}\right) = {F}^{\prime }{\left( x\right) }^{\top } \), the transpose of the derivative \( {F}^{\prime }\left( x\right) \) of \( F \) at \( x \), as is easily checked. Similarly, when \( F \) is Fréchet differentiable at \( x \), one has \( {D}_{F}^{ * }F\left( x\right) = {F}^{\prime }{\left( x\right) }^{\top } \) . When \( Y = \mathbb{R} \) and \( F\left( \cdot \right) \mathrel{\text{:=}} \) \( \lbrack f\left( \cdot \right) , + \infty ) \) for some function \( f : X \rightarrow {\mathbb{R}}_{\infty } \), one has \[ {\partial }_{D}f\left( x\right) = {D}^{ * }F\left( {x, f\left( x\right) }\right) \left( 1\right) \] in view of Proposition 4.15, which asserts that \( {x}^{ * } \in {\partial }_{D}f\left( x\right) \) if and only if \( \left( {{x}^{ * }, - 1}\right) \in \) \( {N}_{D}\left( {\operatorname{epi}f,{x}_{f}}\right) = {N}_{D}\left( {F,{x}_{f}}\right) \) for \( {x}_{f} \mathrel{\text{:=}} \left( {x, f\left( x\right) }\right) \) ; similarly, \( {\partial }_{F}f\left( x\right) = {D}_{F}^{ * }F\left( {x}_{f}\right) \left( 1\right) \) . The calculus rules we have given for normal cones entail calculus rules for coderivatives. We also have the following scalarization result. 
Here we say that a map \( g : X \rightarrow Y \) between two normed spaces is tangentially compact at \( \bar{x} \in X \) if for every \( u \in X \smallsetminus \{ 0\} ,\left( {u}_{n}\right) \rightarrow u,\left( {t}_{n}\right) \rightarrow {0}_{ + } \) the sequence \( \left( {{t}_{n}^{-1}\left( {g\left( {\bar{x} + {t}_{n}{u}_{n}}\right) - g\left( \bar{x}\right) }\right) }\right) \) has a convergent subsequence. This condition is satisfied if \( g \) is directionally differentiable at \( \bar{x} \) or if \( Y \) is finite-dimensional and if \( g \) is directionally stable at \( \bar{x} \) in the sense that for every \( u \in X \smallsetminus \{ 0\} \) there exist \( \varepsilon > 0 \) and \( c \in {\mathbb{R}}_{ + } \) such that \( \parallel g\left( {\bar{x} + {tv}}\right) - g\left( \bar{x}\right) \parallel \leq {ct} \) for all \( t \in \left( {0,\varepsilon }\right), v \in B\left( {u,\varepsilon }\right) \) . The latter condition is satisfied when \( g \) is stable (or Stepanovian) at \( \bar{x} \) in the sense that there exist \( r > 0 \) and \( k \in {\mathbb{R}}_{ + } \) such that \( \parallel g\left( x\right) - g\left( \bar{x}\right) \parallel \leq k\parallel x - \bar{x}\parallel \) for all \( x \in B\left( {\bar{x}, r}\right) \) ; for \( Y = \mathbb{R} \) this definition coincides with the one given above for functions. Proposition 4.25 (Scalarization). For every map \( g : X \rightarrow Y \) between two normed spaces and for every \( \bar{x} \in X,{y}^{ * } \in {Y}^{ * } \) one has the following inclusions. The first one is an equality if \( g \) is tangentially compact at \( \bar{x} \) ; the second one is an equality if \( g \) is stable at \( \bar{x} \) : \[ {\partial }_{D}\left( {{y}^{ * } \circ g}\right) \left( \bar{x}\right) \subset {D}^{ * }g\left( \bar{x}\right) \left( {y}^{ * }\right) ,\;{\partial }_{F}\left( {{y}^{ * } \circ g}\right) \left( \bar{x}\right) \subset {D}_{F}^{ * }g\left( \bar{x}\right) \left( {y}^{ * }\right) . \] Proof. 
Let \( h \mathrel{\text{:=}} {y}^{ * } \circ g \), let \( {x}^{ * } \in {\partial }_{D}h\left( \bar{x}\right) \), and let \( G \) be the graph of \( g \) . Then for every \( \left( {u, v}\right) \in {T}^{D}\left( {G,\left( {\bar{x},\bar{y}}\right) }\right) \), where \( \bar{y} \mathrel{\text{:=}} g\left( \bar{x}\right) \), we can find sequences \( \left( {t}_{n}\right) \rightarrow {0}_{ + },\left( {u}_{n}\right) \rightarrow u \) , \( \left( {v}_{n}\right) \rightarrow v \) such that \( \bar{y} + {t}_{n}{v}_{n} = g\left( {\bar{x} + {t}_{n}{u}_{n}}\right) \) for all \( n \), hence \[ \left\langle {{y}^{ * }, v}\right\rangle = \left\langle {{y}^{ * },\mathop{\lim }\limits_{n}\frac{1}{{t}_{n}}\left( {g\left( {\bar{x} + {t}_{n}{u}_{n}}\right) - g\left( \bar{x}\right) }\right) }\right\rangle \] \[ = \mathop{\lim }\limits_{n}\frac{1}{{t}_{n}}\left\lbrack {\left\langle {{y}^{ * }, g\left( {\bar{x} + {t}_{n}{u}_{n}}\right) }\right\rangle - \left\langle {{y}^{ * }, g\left( \bar{x}\right) }\right\rangle }\right\rbrack \geq {h}^{D}\left( {\bar{x}, u}\right) \geq \left\langle {{x}^{ * }, u}\right\rangle , \] so that \( {x}^{ * } \in {D}^{ * }g\left( \bar{x}\right) \left( {y}^{ * }\right) \) . If \( {x}^{ * } \in {\partial }_{F}h\left( \bar{x}\right) \), then for every \( \varepsilon > 0 \) we can find some \( \delta > 0 \) such that for all \( x \in B\left( {\bar{x},\delta }\right) \) (hence for all \( x \) such that \( \left( {x, g\left( x\right) }\right) \in B\left( {\left( {\bar{x},\bar{y}}\right) ,\delta }\right) \) ), we have \[ \left\langle {\left( {{x}^{ * }, - {y}^{ * }}\right) ,\left( {x, g\left( x\right) }\right) - \left( {\bar{x},\bar{y}}\right) }\right\rangle = \left\langle {{x}^{ * }, x - \bar{x}}\right\rangle - \left( {h\left( x\right) - h\left( \bar{x}\right) }\right) \leq \varepsilon \parallel x - \bar{x}\parallel . 
\] Since \( \parallel x - \bar{x}\parallel \leq \parallel \left( {x - \bar{x}, g\left( x\right) - \bar{y}}\right) \parallel \), we get \( \left( {{x}^{ * }, - {y}^{ * }}\right) \in {N}_{F}\left( {G,\left( {\bar{x},\bar{y}}\right) }\right) : {x}^{ * } \in {D}_{F}^{ * }g\left( \bar{x}\right) \left( {y}^{ * }\right) \) . Let \( g \) be tangentially compact at \( \bar{x} \) and let \( {x}^{ * } \in {D}^{ * }g\left( \bar{x}\right) \left( {y}^{ * }\right) \) . Then for every \( u \in \) \( X \) and every sequence \( \left( \left( {{u}_{n},{t}_{n}}\right) \right) \rightarrow \left( {u,{0}_{ + }}\right) \) such that \( \left( {{t}_{n}^{-1}\left( {h\left( {\bar{x} + {t}_{n}{u}_{n}}\right) - h\left( \bar{x}\right) }\right) }\right) \rightarrow \) \( {h}^{D}\left( {\bar{x}, u}\right) \) we can find \( v \in Y \) that is a cluster point of the sequence \( \left( {{t}_{n}^{-1}\left( {g\left( {\bar{x} + {t}_{n}{u}_{n}}\right) - }\right. }\right. \) \( g\left( \bar{x}\right) )) \) . Then \( \left( {u, v}\right) \in {T}^{D}\left( {G,\left( {\bar{x},\bar{y}}\right) }\right) ,{h}^{D}\left( {\bar{x}, u}\right) = \left\langle {{y}^{ * }, v}\right\rangle \) and \( \left( {{x}^{ * }, - {y}^{ * }}\right) \in {N}_{D}\left( {G,\left( {\bar{x},\bar{y}}\right) }\right) \) , whence \[ \left\langle {{x}^{ * }, u}\right\rangle - {h}^{D}\left( {\bar{x}, u}\right) = \left\langle {{x}^{ * }, u}\right\rangle - \left\langle {{y}^{ * }, v}\right\rangle \leq 0, \] so that \( {x}^{ * } \in {\partial }_{D}h\left( \bar{x}\right) \) . Finally, suppose \( g \) is stable at \( \bar{x} \) and \( {x}^{ * } \in {D}_{F}^{ * }g\left( \bar{x}\right) \left( {y}^{ * }\right) \) . Let \( c \in {\mathbb{R}}_{ + },\rho > 0 \) be such that \( \parallel g\left( x\right) - g\left( \bar{x}\right) \parallel \leq c\parallel x - \bar{x}\parallel \) for all \( x \in B\left( {\bar{x},\rho }\right) \) . 
Since \( \left( {{x}^{ * }, - {y}^{ * }}\right) \in {N}_{F}\left( {G,\left( {\bar{x},\bar{y}}\right) }\right) \) , given \( \varepsilon > 0 \) we can find \( \delta \in \left( {0,\rho }\right) \) such that for all \( \left( {x, y}\right) \in G \cap B\left( {\left( {\bar{x},\bar{y}}\right) ,\delta }\right) \) one has \[ \left\langle {{x}^{ * }, x - \bar{x}}\right\rangle - \left\langle {{y}^{ * }, y - \bar{y}}\right\rangle \leq \varepsilon {\left( c + 1\right) }^{-1}\left( {\parallel x - \bar{x}\parallel + \parallel y - \bar{y}\parallel }\right) . \] Since \( y - \bar{y} = g\left( x\right) - g\left( \bar{x}
1069_(GTM228)A First Course in Modular Forms
Definition 6.4.4
Definition 6.4.4. A number field is a field \( \mathbb{K} \subset \overline{\mathbb{Q}} \) such that the degree \( \left\lbrack {\mathbb{K} : \mathbb{Q}}\right\rbrack = {\dim }_{\mathbb{Q}}\left( \mathbb{K}\right) \) is finite. The ring of integers \( \mathbb{Z} \) in the rational number field \( \mathbb{Q} \) has a natural analog in the field of algebraic numbers \( \overline{\mathbb{Q}} \) . A complex number \( \alpha \) is an algebraic integer if \( \alpha \) satisfies some monic polynomial with integer coefficients. The algebraic integers are denoted \( \overline{\mathbb{Z}} \) . The algebraic integers in the rational number field \( \mathbb{Q} \) are the usual integers \( \mathbb{Z} \) (Exercise 6.4.3), now called the rational integers. Every algebraic number takes the form of an algebraic integer divided by a rational integer (Exercise 6.4.4). Similarly to Theorem 6.4.1 and its corollaries (Exercise 6.4.5), Theorem 6.4.5. Let \( \alpha \) be a complex number. The following conditions on \( \alpha \) are equivalent: (1) \( \alpha \) is an algebraic integer, i.e., \( \alpha \in \overline{\mathbb{Z}} \) , (2) The ring \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) is finitely generated as an Abelian group, (3) \( \alpha \) belongs to a ring \( R \) in \( \mathbb{C} \) that is finitely generated as an Abelian group. Corollary 6.4.6. The algebraic integers \( \overline{\mathbb{Z}} \) form a ring. Corollary 6.4.7. The algebraic integers form an integrally closed ring, meaning that every monic polynomial with coefficients in \( \overline{\mathbb{Z}} \) factors down to linear terms over \( \overline{\mathbb{Z}} \), i.e., its roots lie in \( \overline{\mathbb{Z}} \) . Definition 6.4.8. Let \( \mathbb{K} \) be a number field. 
The number ring of \( \mathbb{K} \) is the ring of algebraic integers in \( \mathbb{K} \) , \[ {\mathcal{O}}_{\mathbb{K}} = \overline{\mathbb{Z}} \cap \mathbb{K} \] We need a few more results from algebraic number theory, stated here without proof. Every number ring \( {\mathcal{O}}_{\mathbb{K}} \) is a free Abelian group of rank \( \left\lbrack {\mathbb{K} : \mathbb{Q}}\right\rbrack \) . That is, letting \( d = \left\lbrack {\mathbb{K} : \mathbb{Q}}\right\rbrack \), there is a basis \( \left\{ {{\alpha }_{1},\ldots ,{\alpha }_{d}}\right\} \subset {\mathcal{O}}_{\mathbb{K}} \) such that \( {\mathcal{O}}_{\mathbb{K}} \) is the free Abelian group \[ {\mathcal{O}}_{\mathbb{K}} = \mathbb{Z}{\alpha }_{1} \oplus \cdots \oplus \mathbb{Z}{\alpha }_{d} \] There exist \( d \) nonzero field homomorphisms from \( \mathbb{K} \) to \( \mathbb{C} \) . These have trivial kernel, making them embeddings \( {\sigma }_{1},\ldots ,{\sigma }_{d} : \mathbb{K} \hookrightarrow \mathbb{C} \), and they fix the rational field \( \mathbb{Q} \) elementwise. The \( d \) -by- \( d \) matrix \[ A = \left\lbrack {\alpha }_{i}^{{\sigma }_{j}}\right\rbrack = \left\lbrack \begin{matrix} {\alpha }_{1}^{{\sigma }_{1}} & \cdots & {\alpha }_{1}^{{\sigma }_{d}} \\ \vdots & \ddots & \vdots \\ {\alpha }_{d}^{{\sigma }_{1}} & \cdots & {\alpha }_{d}^{{\sigma }_{d}} \end{matrix}\right\rbrack \] has nonzero determinant. Every embedding \( \sigma : \mathbb{K} \hookrightarrow \mathbb{C} \) extends (in many ways) to an automorphism \( \sigma : \mathbb{C} \rightarrow \mathbb{C} \) . Conversely every such automorphism restricts to an automorphism \( \sigma : \overline{\mathbb{Q}} \rightarrow \overline{\mathbb{Q}} \) of the algebraic numbers that takes the algebraic integers \( \overline{\mathbb{Z}} \) to \( \overline{\mathbb{Z}} \), and this automorphism further restricts to an embedding of \( \mathbb{K} \) in \( \overline{\mathbb{Q}} \) that injects \( {\mathcal{O}}_{\mathbb{K}} \) into \( \overline{\mathbb{Z}} \) . 
The only algebraic numbers that are fixed under all automorphisms of \( \mathbb{C} \) are the rational numbers \( \mathbb{Q} \) . After this we will be casual about distinguishing between maps \( \sigma \) as embeddings of \( \mathbb{K} \), as automorphisms of \( \mathbb{C} \), or as automorphisms of \( \overline{\mathbb{Q}} \) . ## Exercises 6.4.1. Recall from algebra that if \( f\left( x\right) = {a}_{0}{x}^{m} + {a}_{1}{x}^{m - 1} + \cdots + {a}_{m} \) and \( g\left( x\right) = {b}_{0}{x}^{n} + {b}_{1}{x}^{n - 1} + \cdots + {b}_{n} \) then their resultant \( R\left( {f\left( x\right), g\left( x\right) ;x}\right) \) is the determinant of their Sylvester matrix \[ M = \left\lbrack \begin{matrix} {a}_{0} & {a}_{1} & \cdots & \cdots & {a}_{m} & & \\ & \ddots & \ddots & & & \ddots & \\ & & {a}_{0} & {a}_{1} & \cdots & \cdots & {a}_{m} \\ {b}_{0} & {b}_{1} & \cdots & {b}_{n} & & & \\ & {b}_{0} & {b}_{1} & \cdots & {b}_{n} & & \\ & & \ddots & \ddots & & \ddots & \\ & & & {b}_{0} & {b}_{1} & \cdots & {b}_{n} \end{matrix}\right\rbrack \] ( \( n \) staggered rows of \( {a}_{i} \) ’s, \( m \) staggered rows of \( {b}_{j} \) ’s, all other entries 0 ). The resultant eliminates \( x \), leaving a polynomial in the coefficients that vanishes if and only if \( f \) and \( g \) share a root. Let \( p\left( x\right) \) and \( q\left( x\right) \) be monic polynomials in \( \mathbb{Q}\left\lbrack x\right\rbrack \) . Consider the resultants \[ \widetilde{q}\left( {t, u}\right) = R\left( {p\left( s\right), u - s - t;s}\right) ,\;r\left( u\right) = R\left( {\widetilde{q}\left( {t, u}\right), q\left( t\right) ;t}\right) . \] Show that if \( \alpha \) and \( \beta \) satisfy the polynomials \( p \) and \( q \) then \( \alpha + \beta \) satisfies the polynomial \( r \) . Similarly, find polynomials satisfied by \( {\alpha \beta } \) and \( 1/\alpha \) if \( \alpha \neq 0 \) . (A hint for this exercise is at the end of the book.) 6.4.2.
Use the methods of the section or of Exercise 6.4.1 to find a monic integer polynomial satisfied by \( \sqrt{2} + {\mu }_{3} \) . 6.4.3. Show that \( \overline{\mathbb{Z}} \cap \mathbb{Q} = \mathbb{Z} \) . (A hint for this exercise is at the end of the book.) 6.4.4. Show that every algebraic number takes the form of an algebraic integer divided by a rational integer. (A hint for this exercise is at the end of the book.) 6.4.5. Prove Theorem 6.4.5 and its corollaries. ## 6.5 Algebraic eigenvalues Returning to the material of Section 6.3, recall the action of the weight-2 Hecke operators \( T = {T}_{p} \) and \( T = \langle d\rangle \) on the dual space as composition from the right, \[ T : {\mathcal{S}}_{2}{\left( {\Gamma }_{1}\left( N\right) \right) }^{ \land } \rightarrow {\mathcal{S}}_{2}{\left( {\Gamma }_{1}\left( N\right) \right) }^{ \land },\;\varphi \mapsto \varphi \circ T, \] and recall that the action descends to the quotient \( {\mathrm{J}}_{1}\left( N\right) \) . Thus the operators act as endomorphisms on the kernel \( {\mathrm{H}}_{1}\left( {{X}_{1}\left( N\right) ,\mathbb{Z}}\right) \), a finitely generated Abelian group. In particular the characteristic polynomial \( f\left( x\right) \) of \( {T}_{p} \) acting on \( {\mathrm{H}}_{1}\left( {{X}_{1}\left( N\right) ,\mathbb{Z}}\right) \) has integer coefficients, and being a characteristic polynomial it is monic. Since an operator satisfies its characteristic polynomial, \( f\left( {T}_{p}\right) = 0 \) on \( {\mathrm{H}}_{1}\left( {{X}_{1}\left( N\right) ,\mathbb{Z}}\right) \) . Since \( {T}_{p} \) is \( \mathbb{C} \) -linear, also \( f\left( {T}_{p}\right) = 0 \) on \( {\mathcal{S}}_{2}{\left( {\Gamma }_{1}\left( N\right) \right) }^{ \land } \) and so \( f\left( {T}_{p}\right) = 0 \) on \( {\mathcal{S}}_{2}\left( {{\Gamma }_{1}\left( N\right) }\right) \) . 
Therefore the minimal polynomial of \( {T}_{p} \) on \( {\mathcal{S}}_{2}\left( {{\Gamma }_{1}\left( N\right) }\right) \) divides \( f\left( x\right) \) and the eigenvalues of \( {T}_{p} \) satisfy \( f\left( x\right) \) , making them algebraic integers. Since \( p \) is arbitrary this proves Theorem 6.5.1. Let \( f \in {\mathcal{S}}_{2}\left( {{\Gamma }_{1}\left( N\right) }\right) \) be a normalized eigenform for the Hecke operators \( {T}_{p} \) . Then the eigenvalues \( {a}_{n}\left( f\right) \) are algebraic integers. To refine this result we need to view the Hecke operators as lying within an algebraic structure, not merely as a set. Definition 6.5.2. The Hecke algebra over \( \mathbb{Z} \) is the algebra of endomor-phisms of \( {\mathcal{S}}_{2}\left( {{\Gamma }_{1}\left( N\right) }\right) \) generated over \( \mathbb{Z} \) by the Hecke operators, \[ {\mathbb{T}}_{\mathbb{Z}} = \mathbb{Z}\left\lbrack \left\{ {{T}_{n},\langle n\rangle : n \in {\mathbb{Z}}^{ + }}\right\} \right\rbrack \] The Hecke algebra \( {\mathbb{T}}_{\mathbb{C}} \) over \( \mathbb{C} \) is defined similarly. Each level has its own Hecke algebra, but \( N \) is omitted from the notation since it is usually written somewhere nearby. Clearly any \( f \in {\mathcal{S}}_{2}\left( {{\Gamma }_{1}\left( N\right) }\right) \) is an eigenform for all of \( {\mathbb{T}}_{\mathbb{C}} \) if and only if \( f \) is an eigenform for all Hecke operators \( {T}_{p} \) and \( \langle d\rangle \) . For the remainder of this chapter the methods will shift to working with algebraic structure rather than thinking about objects such as Hecke operators one at a time. In particular modules will figure prominently, and so in this context Abelian groups will often be called \( \mathbb{Z} \) -modules. 
For example, viewing the \( \mathbb{Z} \) -module \( {\mathbb{T}}_{\mathbb{Z}} \) as a ring of endomorphisms of the finitely generated free \( \mathbb{Z} \) -module \( {\mathrm{H}}_{1}\left( {{X}_{1}\left( N\right) ,\mathbb{Z}}\right) \) shows that it is finitely generated as well (Exercise 6.5.1). Again letting \( f\left( \tau \right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\left( f\right) {q}^{n} \) be a normalized eigenform, the eigenvalue homomorphism (defined by its characterizing property in the next display) \[ {\lambda }_{f} : {\mathbb{T}}_{\mathbb{Z}} \rightarrow \mathbb{C},\;{Tf} = {\lambda }_{f}\left( T
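The resultant construction of Exercise 6.4.1 can be illustrated numerically: the polynomial \( r\left( u\right) \) built there has as roots all sums (root of \( p \)) + (root of \( q \)). Taking \( \alpha = \sqrt{2} \) and \( \beta = \sqrt{3} \) (a simpler real instance than the \( \sqrt{2} + {\mu }_{3} \) of Exercise 6.4.2), a sketch assuming NumPy:

```python
import numpy as np

p = [1, 0, -2]   # p(x) = x^2 - 2, satisfied by alpha = sqrt(2)
q = [1, 0, -3]   # q(x) = x^2 - 3, satisfied by beta  = sqrt(3)

# r(u) = R(R(p(s), u - s - t; s), q(t); t) has roots exactly the sums
# (root of p) + (root of q), so we can recover it from those roots.
sums = [a + b for a in np.roots(p) for b in np.roots(q)]
r = np.round(np.poly(sums).real).astype(int)   # monic, integer coefficients
# r is the coefficient list of x^4 - 10 x^2 + 1 (highest degree first).
assert abs(np.polyval(r, np.sqrt(2) + np.sqrt(3))) < 1e-9
print(r)
```

That the rounded coefficients are integers is exactly the content of Corollary 6.4.6: a sum of algebraic integers is again an algebraic integer, with a monic integer polynomial produced by the resultant.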
110_The Schwarz Function and Its Generalization to Higher Dimensions
Definition 1.20
Definition 1.20. Let \( f : U \rightarrow {\mathbb{R}}^{m} \) be a map on an open set \( U \subseteq {\mathbb{R}}^{n} \) . We call \( f \) twice Fréchet differentiable on \( U \) if both \( f \) and \( {Df} \) are Fréchet differentiable on \( U \), and denote by \( {D}^{2}f \mathrel{\text{:=}} D\left( {Df}\right) \) the second derivative of \( f \) . By induction, we say that \( f \) is \( k \) -times Fréchet differentiable on \( U \) if \( f \) is \( \left( {k - 1}\right) \) -times Fréchet differentiable and \( {D}^{k - 1}f \) is Fréchet differentiable. We denote by \( {D}^{k}f = D\left( {{D}^{k - 1}f}\right) \) the \( k \) th derivative of \( f \) . If \( a \in U \) and \( u, v \in {\mathbb{R}}^{n} \), then we denote by \( {D}^{2}f\left( a\right) \left\lbrack {u, v}\right\rbrack \) the directional derivative at \( a \) of the function \( h\left( x\right) \mathrel{\text{:=}} {f}^{\prime }\left( {x;u}\right) \) along direction \( v \), that is, \[ {D}^{2}f\left( a\right) \left\lbrack {u, v}\right\rbrack \mathrel{\text{:=}} {h}^{\prime }\left( {a;v}\right) . \] The operation of taking successive directional derivatives is commutative under suitable differentiability assumptions. Theorem 1.21. If \( f : U \rightarrow \mathbb{R} \) is twice Fréchet differentiable on an open set \( U \) in \( {\mathbb{R}}^{n} \), then \( {D}^{2}f\left( a\right) \) is a symmetric bilinear form for all \( a \in U \), that is, \[ {D}^{2}f\left( a\right) \left\lbrack {u, v}\right\rbrack = {D}^{2}f\left( a\right) \left\lbrack {v, u}\right\rbrack \;\text{ for all }\;u, v \in {\mathbb{R}}^{n}. \] Proof. Define \( g\left( t\right) \mathrel{\text{:=}} f\left( {a + u + {tv}}\right) - f\left( {a + {tv}}\right) \) .
We have \[ {g}^{\prime }\left( t\right) = {Df}\left( {a + u + {tv}}\right) \left( v\right) - {Df}\left( {a + {tv}}\right) \left( v\right) , \] and Lemma 1.17 applied to \( g\left( t\right) - t{g}^{\prime }\left( 0\right) \) gives \[ \begin{Vmatrix}{g\left( 1\right) - g\left( 0\right) - {g}^{\prime }\left( 0\right) }\end{Vmatrix} \leq \mathop{\sup }\limits_{{0 \leq t \leq 1}}\begin{Vmatrix}{{g}^{\prime }\left( t\right) - {g}^{\prime }\left( 0\right) }\end{Vmatrix}. \] (1.10) Since \( {Df} \) is Fréchet differentiable, we have \[ {Df}\left( {a + u + {tv}}\right) \left( v\right) - {Df}\left( a\right) \left( v\right) - {D}^{2}f\left( a\right) \left\lbrack {v, u + {tv}}\right\rbrack = o\left( {\parallel v\parallel \cdot \parallel u + {tv}\parallel }\right) \] \[ \leq o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) \] \[ {Df}\left( {a + {tv}}\right) \left( v\right) - {Df}\left( a\right) \left( v\right) - {D}^{2}f\left( a\right) \left\lbrack {v,{tv}}\right\rbrack = o\left( {\parallel v\parallel \cdot \parallel {tv}\parallel }\right) \] \[ \leq o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) . \] Subtracting the second equation from the first one gives \[ {Df}\left( {a + u + {tv}}\right) \left( v\right) - {Df}\left( {a + {tv}}\right) \left( v\right) - {D}^{2}f\left( a\right) \left\lbrack {v, u}\right\rbrack = o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) , \] that is, \[ {g}^{\prime }\left( t\right) - {D}^{2}f\left( a\right) \left\lbrack {v, u}\right\rbrack = o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) . 
\] Using this in (1.10) gives the equations \( {g}^{\prime }\left( t\right) - {g}^{\prime }\left( 0\right) = o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) \) and \[ g\left( 1\right) - g\left( 0\right) - {D}^{2}f\left( a\right) \left\lbrack {v, u}\right\rbrack = o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) . \] Since \( g\left( 1\right) - g\left( 0\right) = f\left( {a + u + v}\right) - f\left( {a + v}\right) - f\left( {a + u}\right) + f\left( a\right) \) is symmetric in \( u \) and \( v \), we similarly have \[ g\left( 1\right) - g\left( 0\right) - {D}^{2}f\left( a\right) \left\lbrack {u, v}\right\rbrack = o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) . \] Consequently, \[ B\left\lbrack {u, v}\right\rbrack \mathrel{\text{:=}} {D}^{2}f\left( a\right) \left\lbrack {u, v}\right\rbrack - {D}^{2}f\left( a\right) \left\lbrack {v, u}\right\rbrack = o\left( {\left( \parallel u\parallel + \parallel v\parallel \right) }^{2}\right) . \] Let \( \parallel u\parallel = \parallel v\parallel = 1 \), and let \( t \rightarrow 0 \) . We have \( B\left\lbrack {{tu},{tv}}\right\rbrack = o\left( {t}^{2}\right) \) . Thus, \( B\left\lbrack {u, v}\right\rbrack = \) \( o\left( {t}^{2}\right) /{t}^{2} \rightarrow 0 \), that is, \( B\left\lbrack {u, v}\right\rbrack = 0 \) . This proves the symmetry of \( {D}^{2}f\left( a\right) \) . Exercise 12 shows that \( {D}^{2}f\left( x\right) \left\lbrack {u, v}\right\rbrack = {D}^{2}f\left( x\right) \left\lbrack {v, u}\right\rbrack \) may fail in the absence of sufficient differentiability assumptions. Corollary 1.22. 
If \( f : U \rightarrow \mathbb{R} \) is \( k \) -times Fréchet differentiable on an open set \( U \) in \( {\mathbb{R}}^{n} \) and \( a \in U \), then \( {D}^{k}f\left( a\right) \) is a symmetric \( k \) -linear form, that is, \[ {D}^{k}f\left( a\right) \left\lbrack {{u}_{\sigma \left( 1\right) },\ldots ,{u}_{\sigma \left( k\right) }}\right\rbrack = {D}^{k}f\left( a\right) \left\lbrack {{u}_{1},\ldots ,{u}_{k}}\right\rbrack \text{ for all }{u}_{1},\ldots ,{u}_{k} \in {\mathbb{R}}^{n}, \] where \( \sigma \) is a permutation of the set \( \{ 1,2,\ldots, k\} \) . The proof is obtained from Theorem 1.21 by induction. ## 1.5 Taylor's Formula for Functions of Several Variables Taylor’s formula for a function of a single variable extends to a function \( f : U \rightarrow \mathbb{R} \) defined on an open set \( U \subseteq {\mathbb{R}}^{n} \) . For this purpose, we restrict \( f \) to line segments in \( U \) . Let \( \{ x + {td} : t \in \mathbb{R}\} \) be a line passing through \( x \) and having the direction \( d \neq 0 \in {\mathbb{R}}^{n} \) . Define the function \( h\left( t\right) = f\left( {g\left( t\right) }\right) = f\left( {x + {td}}\right) = \left( {f \circ g}\right) \left( t\right) \), where \( g\left( t\right) = x + {td} \) . It follows from the chain rule that \[ {h}^{\prime }\left( t\right) = \langle \nabla f\left( {x + {td}}\right), d\rangle = \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{1}}{d}_{1} + \cdots + \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{n}}{d}_{n}.
\] Differentiating \( {h}^{\prime } \) using the chain rule again, we obtain \[ {h}^{\prime \prime }\left( t\right) = \left\langle {\nabla \left( \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{1}}\right), d}\right\rangle {d}_{1} + \cdots + \left\langle {\nabla \left( \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{n}}\right), d}\right\rangle {d}_{n} \] \[ = \left( {\frac{\partial }{\partial {x}_{1}}\left( \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{1}}\right) {d}_{1} + \cdots + \frac{\partial }{\partial {x}_{n}}\left( \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{1}}\right) {d}_{n}}\right) {d}_{1} + \cdots \] \[ + \left( {\frac{\partial }{\partial {x}_{1}}\left( \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{n}}\right) {d}_{1} + \cdots + \frac{\partial }{\partial {x}_{n}}\left( \frac{\partial f\left( {x + {td}}\right) }{\partial {x}_{n}}\right) {d}_{n}}\right) {d}_{n} \] \[ = \left( {\frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{1}^{2}}{d}_{1}^{2} + \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{2}\partial {x}_{1}}{d}_{2}{d}_{1} + \cdots + \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{n}\partial {x}_{1}}{d}_{n}{d}_{1}}\right) \] \[ + \cdots + \left( {\frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{1}\partial {x}_{n}}{d}_{1}{d}_{n} + \cdots + \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{n}^{2}}{d}_{n}^{2}}\right) \] \[ = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{i = 1}}^{n}\frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{i}\partial {x}_{j}}{d}_{i}{d}_{j} \] Therefore, \[ {h}^{\prime \prime }\left( t\right) = \left( {{d}_{1},\ldots ,{d}_{n}}\right) \left\lbrack \begin{matrix} \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{1}^{2}} & \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{j}\partial {x}_{1}} & \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{n}\partial 
{x}_{1}} \\ \vdots & \vdots & \vdots \\ \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{1}\partial {x}_{i}} & \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{j}\partial {x}_{i}} & \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{n}\partial {x}_{i}} \\ \vdots & \vdots & \vdots \\ \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{1}\partial {x}_{n}} & \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{j}\partial {x}_{n}} & \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{n}^{2}} \end{matrix}\right\rbrack \left( \begin{matrix} {d}_{1} \\ \vdots \\ {d}_{n} \end{matrix}\right) \] \[ = {d}^{T}\left\lbrack \frac{{\partial }^{2}f\left( {x + {td}}\right) }{\partial {x}_{j}\partial {x}_{i}}\right\rbrack d = {d}^{T}{D}^{2}f\left( {x + {td}}\right) d. \] The matrix \[ {Hf}\left( x\right) \mathrel{\text{:=}} {D}^{2}f\left( x\right) = D\left( {\nabla f\left( x\right) }\right) = \left\lbrack {{\partial }^{2}f\left( x\right) /\partial {x}_{i}\partial {x}_{j}}\right\rbrack \] is called the Hessian matrix of \( f \) at \( x \) . If the second-order partial derivatives \( {\partial }^{2}f\left( x\right) /\partial {x}_{i}\partial {x}_{j} \) are continuous, then the mixed derivatives are equal, that is, \[ \frac{{\partial }^{2}f\left( x\right) }{\partial {x}_{i}\partial {x}_{j}} = \frac{{\partial }^{2}f\left( x\right) }{\partial {x}_{j}\partial {x}_{i}}. \]
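Both the symmetry of the Hessian (Theorem 1.21) and the identity \( {h}^{\prime \prime }\left( t\right) = {d}^{T}{D}^{2}f\left( {x + {td}}\right) d \) are easy to check numerically on a sample function; a sketch assuming NumPy, with an illustrative \( f \) of our own choosing:

```python
import numpy as np

def f(x):  # illustrative smooth function of three variables
    return x[0]**3 * x[1] + np.sin(x[1] * x[2])

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian [d^2 f / dx_i dx_j]."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej) - f(x-ei+ej) + f(x-ei-ej)) / (4*h*h)
    return H

x = np.array([1.0, 0.5, -2.0])
d = np.array([0.3, -1.0, 2.0])
H = hessian_fd(f, x)
assert np.allclose(H, H.T, atol=1e-5)   # symmetry of the mixed partials

def h(t):  # restriction of f to the line through x with direction d
    return f(x + t * d)

t0 = 1e-4
h2 = (h(t0) - 2*h(0.0) + h(-t0)) / t0**2   # finite-difference h''(0)
assert np.isclose(h2, d @ H @ d, rtol=1e-3)  # h''(0) = d^T Hf(x) d
```

The second-difference quotient for \( {h}^{\prime \prime }\left( 0\right) \) and the quadratic form \( {d}^{T}{Hf}\left( x\right) d \) agree to the accuracy of the finite-difference scheme, which is the content of the display above at \( t = 0 \).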
113_Topological Groups
Definition 17.1
Definition 17.1. Let \( \mathcal{L} \) be an expansion of \( {\mathcal{L}}_{\text{nos }} \) . A theory \( \Gamma \) in \( \mathcal{L} \) is a strong theory provided that for every \( m \) -ary elementary function \( f \) there is an \( m \) -ary operation symbol \( \mathbf{O} \) of \( \mathcal{L} \) such that for any \( {x}_{0},\ldots ,{x}_{m - 1}, y \in \omega \) the following conditions hold: (i) if \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) = y \), then \( \Gamma \vDash \mathbf{O}\left( {{\mathbf{x}}_{0},\ldots ,{\mathbf{x}}_{m - 1}}\right) = \mathbf{y} \) ; (ii) if \( f\left( {{x}_{0},\ldots ,{x}_{m - 1}}\right) \neq y \), then \( \Gamma \vDash \neg \mathbf{O}\left( {{\mathbf{x}}_{0},\ldots ,{\mathbf{x}}_{m - 1}}\right) = \mathbf{y} \) . Thus we can find strong theories among definitional expansions of the theory \( R \) of \( {\mathcal{L}}_{\text{nos }} \), or the set theory \( S \) (cf. 14.11 and 16.48). We shall now formulate a general theorem giving conditions on a formula \( \Pr \left( {v}_{0}\right) \) so that unprovability of consistency follows. Intuitively we think of \( \Pr \left( {v}_{0}\right) \) as saying that \( {v}_{0} \) is provable. After the statement and proof of the general theorem we shall indicate how the conditions on \( \Pr \left( {v}_{0}\right) \) can be satisfied. For simplicity we write \( \Pr \sigma \) in place of \( \Pr \left( \sigma \right) \) . Theorem 17.2 (Löb). Let \( \Gamma \) be a strong theory in a language \( \mathcal{L} \) . 
Assume that \( \Pr \) is a formula of \( \mathcal{L} \) such that \( \operatorname{Fv}\Pr \subseteq \left\{ {v}_{0}\right\} \) and the following conditions hold for all sentences \( \psi ,\chi \) of \( \mathcal{L} \) : (i) if \( \Gamma \vDash \psi \), then \( \Gamma \vDash \Pr \Delta {g}^{ + }\psi \) ; (ii) \( \Gamma \vDash \Pr \Delta {g}^{ + }\psi \rightarrow \Pr \Delta {g}^{ + }\Pr \Delta {g}^{ + }\psi \) ; (iii) \( \Gamma \vDash \Pr \Delta {g}^{ + }\left( {\chi \rightarrow \psi }\right) \rightarrow \left( {\Pr \Delta {g}^{ + }\chi \rightarrow \Pr \Delta {g}^{ + }\psi }\right) \) . Furthermore, let \( \varphi \) be a sentence of \( \mathcal{L} \) such that \( \Gamma \vDash \Pr \Delta {g}^{ + }\varphi \rightarrow \varphi \) . Then \( \Gamma \vDash \varphi . \) Proof. There clearly is an elementary function \( f \) such that for any formula \( \psi \) and any \( m \in \omega, f\left( {{g}^{ + }\psi, m}\right) = {g}^{ + }\psi \left( \mathbf{m}\right) \) . Let \( \mathbf{O} \) be a binary operation symbol of \( \mathcal{L} \) which represents \( f \), in the sense of 17.1. Let \( a = {g}^{ + }\left( {\Pr \mathbf{O}\left( {{v}_{0},{v}_{0}}\right) \rightarrow \varphi }\right) \), and let \( \psi \) be the sentence \( \Pr \mathbf{O}\left( {\mathbf{a},\mathbf{a}}\right) \rightarrow \varphi \) . Thus if \( \chi \) is the formula \( \Pr \mathbf{O}\left( {{v}_{0},{v}_{0}}\right) \rightarrow \varphi \), then \( f\left( {{g}^{ + }\chi ,{g}^{ + }\chi }\right) = {g}^{ + }\psi \) . 
Hence

(1) \( \Gamma \vDash \mathbf{O}\left( {\mathbf{a},\mathbf{a}}\right) = \Delta {g}^{ + }\psi \) ;

(2) \( \Gamma \vDash \psi \rightarrow \left( {\Pr \Delta {g}^{ + }\psi \rightarrow \varphi }\right) \;\) by (1), definition of \( \psi \)

(3) \( \Gamma \vDash \Pr \Delta {g}^{ + }\left( {\psi \rightarrow \left( {\Pr \Delta {g}^{ + }\psi \rightarrow \varphi }\right) }\right) \;\) by (2),(i)

(4) \( \Gamma \vDash \Pr \Delta {g}^{ + }\psi \rightarrow \Pr \Delta {g}^{ + }\left( {\Pr \Delta {g}^{ + }\psi \rightarrow \varphi }\right) \;\) by (3),(iii)

(5) \( \Gamma \vDash \Pr \Delta {g}^{ + }\left( {\Pr \Delta {g}^{ + }\psi \rightarrow \varphi }\right) \rightarrow \left( {\Pr \Delta {g}^{ + }\Pr \Delta {g}^{ + }\psi \rightarrow \Pr \Delta {g}^{ + }\varphi }\right) \;\) by (iii)

(6) \( \Gamma \vDash \Pr \Delta {g}^{ + }\psi \rightarrow \Pr \Delta {g}^{ + }\Pr \Delta {g}^{ + }\psi \;\) by (ii)

(7) \( \Gamma \vDash \Pr \Delta {g}^{ + }\varphi \rightarrow \varphi \;\) by hypothesis

(8) \( \Gamma \vDash \Pr \Delta {g}^{ + }\psi \rightarrow \varphi \;\) by (4),(5),(6),(7), and a tautology

(9) \( \Gamma \vDash \Pr \mathbf{O}\left( {\mathbf{a},\mathbf{a}}\right) \rightarrow \varphi \;\) by (8),(1)

(10) \( \Gamma \vDash \psi \;\) by (9), definition of \( \psi \)

(11) \( \Gamma \vDash \Pr \Delta {g}^{ + }\psi \;\) by (10),(i)

(12) \( \Gamma \vDash \varphi \;\) by (8),(11)

Corollary 17.3. Assume the hypothesis of 17.2, up to "Furthermore." Let \( \varphi \) be the sentence \( \neg \forall {v}_{0}\left( {{v}_{0} = {v}_{0}}\right) \) . If \( \Gamma \) is consistent, then \( \Gamma \nvDash \neg \Pr \Delta {g}^{ + }\varphi \) . Proof. 
If \( \Gamma \vDash \neg \Pr \Delta {g}^{ + }\varphi \), then of course \( \Gamma \vDash \Pr \Delta {g}^{ + }\varphi \rightarrow \varphi \), so by 17.2, \( \Gamma \vDash \varphi \), contradicting the consistency of \( \Gamma \) by \( \left( i\right) \) . The formula \( \neg \Pr \Delta {g}^{ + }\varphi \) in 17.3 of course intuitively expresses that \( \Gamma \) is consistent, if \( \Pr {v}_{0} \) expresses that \( {v}_{0} \) is provable in \( \Gamma \) . The content of 17.3 can be expressed in an intuitive form as follows. If \( \Gamma \) is a consistent theory in \( \mathcal{L} \) and we have a consistency proof for \( \Gamma \), then there is no formula \( \Pr \) which represents our consistency proof in \( \Gamma \), in the sense of the hypotheses of 17.2 and 17.3. What has become of attempts to prove important theories consistent in a convincing way? Finitary consistency proofs for the theory \( P \) have been given. Although finitary, such proofs cannot be internalized in \( P \) . It can be seen, in fact, that the proofs involve induction exceeding induction over natural numbers, and in fact going up to the first \( \varepsilon \) -number. No finitary consistency proofs for ZF (full set theory) have been given, and indeed it is hard to imagine any proof which could be called finitary which could not be formulated in ZF, in the sense of 17.2. Now we turn to the proof of \( {17.2}\left( i\right) - \left( {iii}\right) \) for certain natural formulas \( \Pr \) . For this purpose we shall make some further assumptions on the theories we deal with. These additional assumptions, as is easily seen, do not really restrict the generality of the final result. The assumptions amount to an extension of our assumptions in 17.1 so as to formalize within a theory the full syntax of first-order logic. Definition 17.4. 
We describe an expansion \( {\mathcal{L}}_{\mathrm{{el}}} \) of \( {\mathcal{L}}_{\text{nos }} \) and, simultaneously, a theory \( \mathcal{P} \) in \( {\mathcal{L}}_{\mathrm{{el}}} \) . The following are the first axioms of \( \mathcal{P} \) : (i) \( \forall {v}_{0}\left( {\neg \mathrm{s}{v}_{0} = 0}\right) \) ; (ii) \( \forall {v}_{0}\forall {v}_{1}\left( {\mathbf{s}{v}_{0} = \mathbf{s}{v}_{1} \rightarrow {v}_{0} = {v}_{1}}\right) \) ; (iii) \( \forall {v}_{0}\left( {{v}_{0} + 0 = {v}_{0}}\right) \) ; (iv) \( \forall {v}_{0}\forall {v}_{1}\left\lbrack {{v}_{0} + \mathrm{s}{v}_{1} = \mathrm{s}\left( {{v}_{0} + {v}_{1}}\right) }\right\rbrack \) ; (v) \( \forall {v}_{0}\left( {{v}_{0} \cdot \mathbf{0} = \mathbf{0}}\right) \) ; (vi) \( \forall {v}_{0}\forall {v}_{1}\left( {{v}_{0} \cdot \mathbf{s}{v}_{1} = {v}_{0} \cdot {v}_{1} + {v}_{0}}\right) \) . We introduce a binary operation symbol \( \left| { \cdot - \cdot }\right| \) and a new axiom (vii) \( \forall {v}_{0}\forall {v}_{1}\left\lbrack {\left( {{v}_{0} \leq {v}_{1} \rightarrow {v}_{0} + \left| {{v}_{0} - {v}_{1}}\right| = {v}_{1}}\right) \land }\right. \) \[ \left. \left( {{v}_{1} \leq {v}_{0} \rightarrow {v}_{1} + \left| {{v}_{0} - {v}_{1}}\right| = {v}_{0}}\right) \right\rbrack . \] We introduce a binary operation symbol \( \left\lbrack { \cdot / \cdot }\right\rbrack \) and a new axiom \[ \text{(viii)}\forall {v}_{0}\left( {\left\lbrack {{v}_{0}/\mathbf{0}}\right\rbrack = \mathbf{0}}\right) \land \forall {v}_{0}\forall {v}_{1}\left( {\neg {v}_{1} = \mathbf{0}}\right. \] \[ \rightarrow \left\lbrack {{v}_{0}/{v}_{1}}\right\rbrack \cdot {v}_{1} \leq {v}_{0} \land {v}_{0} + \mathrm{s}\mathbf{0} \leq \left( {\left\lbrack {{v}_{0}/{v}_{1}}\right\rbrack + \mathrm{s}\mathbf{0}}\right) \cdot {v}_{1}). \] Next we introduce new \( n \) -ary operation symbols \( {\mathbf{U}}_{i}^{n} \) and axioms \[ \text{(ix)}\forall {v}_{0}\cdots \forall {v}_{n - 1}\left( {{\mathbf{U}}_{i}^{n}{v}_{0}\cdots {v}_{n - 1} = {v}_{i}}\right) \text{,} \] where \( i < n \) . 
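The intended interpretations of the new operation symbols are elementary functions on \( \omega \). A small Python sketch (the function names are ours, for illustration) checks that the standard model satisfies axioms (vii)–(ix) on sample values:

```python
# Standard interpretations over the natural numbers of the operation symbols
# introduced in (vii)-(ix).  The Python names are ours, for illustration only.

def absdiff(a, b):
    # |a - b| of axiom (vii): absolute difference
    return abs(a - b)

def intdiv(a, b):
    # [a / b] of axiom (viii), with the convention [a / 0] = 0
    return 0 if b == 0 else a // b

def proj(i, n):
    # U_i^n of axiom (ix): projection onto the i-th of n arguments
    return lambda *args: args[i]

# Axiom (vii): adding the difference to the smaller argument recovers the larger.
for a, b in [(3, 7), (7, 3), (5, 5)]:
    assert min(a, b) + absdiff(a, b) == max(a, b)

# Axiom (viii): [a/b]*b <= a and a + 1 <= ([a/b] + 1)*b whenever b != 0.
for a, b in [(17, 5), (0, 4), (9, 3)]:
    assert intdiv(a, b) * b <= a < (intdiv(a, b) + 1) * b

assert intdiv(17, 0) == 0
assert proj(1, 3)(10, 20, 30) == 20
```

The two conjuncts of (viii) pin down \( \left\lbrack {{v}_{0}/{v}_{1}}\right\rbrack \) as the integer quotient, which is what the `//` operator computes for nonnegative arguments.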
Having introduced an \( m \) -ary operation symbol \( \mathbf{O} \) and \( m \) \( n \) -ary operation symbols \( {\mathbf{P}}_{0},\ldots ,{\mathbf{P}}_{m - 1} \), we introduce an \( n \) -ary operation symbol \( {\mathbf{C}}_{n}^{m}\left( {\mathbf{O};{\mathbf{P}}_{0},\ldots ,{\mathbf{P}}_{m - 1}}\right) \) and an axiom \[ \text{(x)}\begin{aligned} & \forall {v}_{0}\cdots \forall {v}_{n - 1}\left\lbrack {{\mathbf{C}}_{n}^{m}\left( {\mathbf{O};{\mathbf{P}}_{0},\ldots ,{\mathbf{P}}_{m - 1}}\right) \left( {{v}_{0},\ldots ,{v}_{n - 1}}\right) }\right. \\ & \left. { = \mathbf{O}\left( {{\mathbf{P}}_{0}\left( {{v}_{0},\ldots ,{v}_{n - 1}}\right) ,\ldots ,{\mathbf{P}}_{m - 1}\left( {{v}_{0},\ldots ,{v}_{n - 1}}\right) }\right) }\right\rbrack . \end{aligned} \] Having introduced an \( m \) -ary operation symbol \( \mathbf{O} \), we introduce an \( m \) -ary operation symbol \( \mathbf{\sum }\left( \mathbf{O}\right) \) and axioms \[ \text{(xi)}\forall {v}_{0}\cdots \forall {v}_{m - 2}\left\lbrack {\mathbf{\sum }\left( \mathbf{O}\right) \left( {{v}_{0},\ldots ,{v}_{m - 2},\mathbf{0}}\right) = \mathbf{0}}\right\rbrack \text{;} \] \[ \text{(xii)}\forall {v}_{0}\cdots \forall {v}_{m - 1}\left\lbrack {\mathbf{\sum }\left( \mathbf{O}\right) \left( {{v}_{0},\ldots ,{v}_{m - 2},\mathrm{s}{v}_{m - 1}}\right) = \mathbf{\sum }\left( \mathbf{O}\right) \left( {{v}_{0},\ldots ,{v}_{m - 1}}\right) + \mathbf{O}\left( {{v}_{0},\ldots ,{v}_{m - 1}}\right) }\right\rbrack . \] Having introduced an \( m \) -ary operation symbol \( \mathbf{O} \), we introduce an \( m \) -ary operation symbol \( \mathbf{\Pi }\left( \mathbf{O}\right) \) and axioms 
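Read functionally, the schemes (x)–(xii) say that the definable operations are closed under composition and bounded sums (and, with \( \mathbf{\Pi }\left( \mathbf{O}\right) \), bounded products). The following Python sketch gives illustrative higher-order analogues of these combinators; the names and the encoding of argument tuples are ours, not from the text.

```python
import math

# Functional analogues (our illustrative encoding) of the closure operations:
# composition C(O; P_0, ..., P_{m-1}), bounded sum Sum(O), bounded product Prod(O).

def compose(O, *Ps):
    # Axiom (x): C(O; P_0,...,P_{m-1})(v) = O(P_0(v), ..., P_{m-1}(v))
    return lambda *v: O(*(P(*v) for P in Ps))

def bounded_sum(O):
    # Axioms (xi)/(xii): Sum(O)(v, k) = O(v, 0) + O(v, 1) + ... + O(v, k - 1)
    return lambda *v: sum(O(*v[:-1], i) for i in range(v[-1]))

def bounded_prod(O):
    # The analogous bounded product: Prod(O)(v, k) = O(v, 0) * ... * O(v, k - 1)
    return lambda *v: math.prod(O(*v[:-1], i) for i in range(v[-1]))

add = lambda a, b: a + b
sq = lambda a, b: (a + b) ** 2

# An instance of axiom (x):
f = compose(add, lambda a, b: a * b, lambda a, b: a + b)
assert f(2, 3) == 2 * 3 + (2 + 3)

# The recursion of axiom (xii): Sum(O)(v, k + 1) = Sum(O)(v, k) + O(v, k)
S = bounded_sum(sq)
for k in range(5):
    assert S(1, k + 1) == S(1, k) + sq(1, k)

P = bounded_prod(add)
assert P(2, 3) == 2 * 3 * 4
```

The last argument of each operation plays the role of the bound variable \( {v}_{m-1} \); the recursion equations of (xi)/(xii) then reduce to the closed sums checked above.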
1075_(GTM233)Topics in Banach Space Theory
Definition 12.3.2
Definition 12.3.2. A sequence space \( \mathcal{X} \) is spreading if the canonical basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of \( \mathcal{X} \) is spreading. Definition 12.3.3. Let \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) be a bounded sequence in a Banach space \( X \), and let \( {\left( {y}_{n}\right) }_{n = 1}^{\infty } \) be a bounded sequence in a Banach space \( Y \) . We will say that \( {\left( {y}_{n}\right) }_{n = 1}^{\infty } \) is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) if given \( \epsilon > 0 \) and \( N \in \mathbb{N} \) there exist a sequence of blocks of \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) , \[ {u}_{j} = \mathop{\sum }\limits_{{i = {p}_{j - 1} + 1}}^{{p}_{j}}{a}_{i}{x}_{i},\;j = 1,2,\ldots, N, \] where \( \left( {p}_{j}\right) \) are integers with \( 0 = {p}_{0} < {p}_{1} < \cdots < {p}_{N} \), and \( \left( {a}_{i}\right) \) are scalars, and an operator \( T : {\left\lbrack {y}_{j}\right\rbrack }_{j = 1}^{N} \rightarrow {\left\lbrack {u}_{j}\right\rbrack }_{j = 1}^{N} \) with \( T{y}_{j} = {u}_{j} \) for \( 1 \leq j \leq N \) such that \( \parallel T\parallel \begin{Vmatrix}{T}^{-1}\end{Vmatrix} < 1 + \epsilon \) . Note here that we do not assume that \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) or \( {\left( {y}_{n}\right) }_{n = 1}^{\infty } \) is a basic sequence, although usually they are. Definition 12.3.4. Let \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) be a bounded sequence in a Banach space \( X \) . A sequence space \( \mathcal{X} \) is said to be block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) if the canonical basis vectors \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) in \( \mathcal{X} \) are block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) . 
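For a concrete instance of Definition 12.3.3 (an illustrative example of ours, not from the text): in \( {\ell }_{2} \), disjointly supported, \( {\ell }_{2} \)-normalized blocks \( {u}_{j} \) of the canonical basis satisfy \( \begin{Vmatrix}{\sum {c}_{j}{u}_{j}}\end{Vmatrix} = \begin{Vmatrix}{\sum {c}_{j}{e}_{j}}\end{Vmatrix} \), so the operator \( T{e}_{j} = {u}_{j} \) is an isometry and \( \parallel T\parallel \begin{Vmatrix}{T}^{-1}\end{Vmatrix} = 1 < 1 + \epsilon \) for every \( \epsilon > 0 \). The sketch below verifies this numerically (block layout and coefficients are assumptions):

```python
import numpy as np

# Illustrative check: disjointly supported, normalized blocks of the canonical
# l_2 basis span an isometric copy of l_2^3, so T e_j = u_j witnesses block
# finite representability with ||T|| ||T^{-1}|| = 1.

rng = np.random.default_rng(0)

# Three disjoint blocks over coordinates 0:2, 2:5, 5:9, each l_2-normalized.
blocks = [(0, 2), (2, 5), (5, 9)]
us = []
for lo, hi in blocks:
    u = np.zeros(9)
    u[lo:hi] = rng.standard_normal(hi - lo)
    u /= np.linalg.norm(u)
    us.append(u)

for _ in range(100):
    c = rng.standard_normal(3)
    lhs = np.linalg.norm(sum(cj * uj for cj, uj in zip(c, us)))
    rhs = np.linalg.norm(c)        # = || sum_j c_j e_j ||_2 in l_2
    assert abs(lhs - rhs) < 1e-12
```

The point of the definition is of course that such almost-isometric copies exist inside a given sequence \( {\left( {x}_{n}\right) } \), which in general is far from a symmetric basis.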
Obviously if \( \mathcal{X} \) is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \), it is also true that \( \mathcal{X} \) is finitely representable in \( X \) . We are thus asking for a strong form of finite representability. Definition 12.3.5. A sequence space \( \mathcal{X} \) is said to be block finitely representable in another sequence space \( \mathcal{Y} \) if it is block finitely representable in the canonical basis of \( \mathcal{Y} \) . Proposition 12.3.6. Suppose \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a nonconstant spreading sequence in a Banach space X. (i) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) fails to be weakly Cauchy, then \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a basic sequence equivalent to the canonical \( {\ell }_{1} \) -basis. (ii) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is weakly null, then it is an unconditional basic sequence with suppression constant \( {K}_{s} = 1 \) . (iii) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is weakly Cauchy, then \( {\left( {x}_{{2n} - 1} - {x}_{2n}\right) }_{n = 1}^{\infty } \) is weakly null and spreading. Proof. (i) If \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is not weakly Cauchy, then no subsequence can be weakly Cauchy (by the spreading property), and so by Rosenthal's theorem (Theorem 11.2.1), some subsequence is equivalent to the canonical \( {\ell }_{1} \) -basis; but then again, this means that the entire sequence is equivalent to the \( {\ell }_{1} \) -basis. (ii) It is enough to show that if \( {a}_{1},\ldots ,{a}_{n} \in \mathbb{R} \) and \( 1 \leq m \leq n \), then \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + \mathop{\sum }\limits_{{m < j \leq n}}{a}_{j}{x}_{j}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{x}_{j}}\end{Vmatrix}. \] Suppose \( \epsilon > 0 \) . 
By Mazur’s theorem we can find \( {c}_{j} \geq 0 \) for \( 1 \leq j \leq l \), say, such that \( \mathop{\sum }\limits_{{j = 1}}^{l}{c}_{j} = 1 \) and \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{l}{c}_{j}{x}_{j}}\end{Vmatrix} < \epsilon \] Now consider \[ x = \mathop{\sum }\limits_{{j = 1}}^{{m - 1}}{a}_{j}{x}_{j} + {a}_{m}\mathop{\sum }\limits_{{j = m}}^{{m + l - 1}}{c}_{j - m + 1}{x}_{j} + \mathop{\sum }\limits_{{j = m + l}}^{{n + l - 1}}{a}_{j - l + 1}{x}_{j}. \] Then \[ x = \mathop{\sum }\limits_{{i = 1}}^{l}{c}_{i}\left( {\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + {a}_{m}{x}_{m + i - 1} + \mathop{\sum }\limits_{{j = m + 1}}^{n}{a}_{j}{x}_{l + j - 1}}\right) , \] and so \[ \parallel x\parallel \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{x}_{j}}\end{Vmatrix} \] But \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + \mathop{\sum }\limits_{{m < j \leq n}}{a}_{j}{x}_{j}}\end{Vmatrix} \leq \parallel x\parallel + \left| {a}_{m}\right| \epsilon \] and so \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{j < m}}{a}_{j}{x}_{j} + \mathop{\sum }\limits_{{m < j \leq n}}{a}_{j}{x}_{j}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{x}_{j}}\end{Vmatrix} + \left| {a}_{m}\right| \epsilon . \] Since \( \epsilon > 0 \) is arbitrary, we are done. (iii) This is immediate, since \( {\left( {x}_{{2n} - 1} - {x}_{2n}\right) }_{n = 1}^{\infty } \) is weakly null and spreading (obviously, it cannot be constant). Theorem 12.3.7. Suppose \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a normalized sequence in a Banach space \( X \) such that \( {\left\{ {x}_{n}\right\} }_{n = 1}^{\infty } \) is not relatively compact. Then there is a spreading sequence space that is block finitely representable in \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) . 
More precisely, there exist a subsequence \( {\left( {x}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) of \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) and a spreading sequence space \( \mathcal{X} \) such that if we let \( M = {\left\{ {n}_{k}\right\} }_{k = 1}^{\infty } \), then \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( M\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{x}_{{p}_{j}}}\end{Vmatrix} = {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{e}_{j}\end{Vmatrix}}_{\mathcal{X}}. \] Proof. This is a neat application of Ramsey's theorem due to Brunel and Sucheston [36]. We first observe that by taking a subsequence, we can assume that \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) has no convergent subsequence. Let us fix some finite sequence of real numbers \( {\left( {a}_{j}\right) }_{j = 1}^{r} \) . According to Theorem 11.1.1, given any infinite subset \( M \) of \( \mathbb{N} \), we can find a further infinite subset \( {M}_{1} \) such that \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{1}\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{x}_{{p}_{j}}}\end{Vmatrix}\text{ exists. } \] Let \( {\left( {a}_{1}^{\left( k\right) },\ldots ,{a}_{{r}_{k}}^{\left( k\right) }\right) }_{k = 1}^{\infty } \) be an enumeration of all finitely nonzero sequences of rationals, and let us construct a decreasing sequence \( {\left( {M}_{k}\right) }_{k = 1}^{\infty } \) of infinite subsets of \( \mathbb{N} \) such that \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{k}\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}^{\left( k\right) }{x}_{{p}_{j}}}\end{Vmatrix}\text{ exists. 
} \] A diagonal procedure allows us to pick an infinite subset \( {M}_{\infty } \) that is contained in each \( {M}_{k} \) up to a finite set. It is not difficult to check that \[ \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{\infty }\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{x}_{{p}_{j}}}\end{Vmatrix}\text{ exists } \] for every finite sequence of reals \( {\left( {a}_{j}\right) }_{j = 1}^{r} \) . Given \( \xi = {\left( \xi \left( j\right) \right) }_{j = 1}^{\infty } \in {c}_{00} \), put \[ \parallel \xi {\parallel }_{\mathcal{X}} = \mathop{\lim }\limits_{\substack{{\left( {{p}_{1},\ldots ,{p}_{r}}\right) \in {\mathcal{F}}_{r}\left( {M}_{\infty }\right) } \\ {{p}_{1} < \cdots < {p}_{r}} }}\begin{Vmatrix}{\mathop{\sum }\limits_{{j = 1}}^{r}\xi \left( j\right) {x}_{{p}_{j}}}\end{Vmatrix}. \] Then \( \parallel \cdot {\parallel }_{\mathcal{X}} \) satisfies the spreading property, but we need to check that it is a norm on \( {c}_{00} \) (it obviously is a seminorm). If \( \parallel \xi {\parallel }_{\mathcal{X}} = 0 \) and \( \xi = \mathop{\sum }\limits_{{j = 1}}^{r}{a}_{j}{e}_{j} \) with \( {a}_{r} \neq 0 \) , then we also have \( {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{{r - 1}}{a}_{j}{e}_{j} + {a}_{r}{e}_{r + 1}\end{Vmatrix}}_{\mathcal{X}} = 0 \) . Hence \[ {\begin{Vmatrix}{e}_{1} - {e}_{2}\end{Vmatrix}}_{\mathcal{X}} = {\begin{Vmatrix}{e}_{r} - {e}_{r + 1}\end{Vmatrix}}_{\mathcal{X}} = 0. \] Returning to the definition, we see that this implies \[ \mathop{\lim }\limits_{{\left( {{p}_{1},{p}_{2}}\right) \in {\mathcal{F}}_{2}\left( {M}_{\infty }\right) }}\begin{Vmatrix}{{x}_{{p}_{1}} - {x}_{{p}_{2}}}\end{Vmatrix} = 0 \] which can mean only that the subsequence \( {\left( {x}_{j}\right) }_{j \in {M}_{\infty }} \) is convergent, contrary to our construction. Definition 12.3.8. 
The spreading sequence space \( \mathcal{X} \) introduced in Theorem 12.3.7 is called a spreading model for the sequence \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) . We now turn to Krivine's theorem. This result was obtained by Krivine in 1976, and although the main ideas of the proof we include here are the same as in Krivine's original proof, we have used ideas from two subsequent expositions of Krivine's theorem by Rosenthal [274] and Lemberg [186]. Krivine's theorem
1076_(GTM234)Analysis and Probability Wavelets, Signals, Fractals
Definition 8.4.1
Definition 8.4.1. Let \( U \) be a unitary operator in a Hilbert space \( \mathcal{H} \), and let \( {\mathcal{H}}_{0} \) be a closed subspace in \( \mathcal{H} \) . We say that \( {\mathcal{H}}_{0} \) generates a multiresolution for \( U \) if the following two conditions (i)-(ii) hold (if all three conditions (i)-(iii) hold, we say that the multiresolution is pure): (i) \( {\mathcal{H}}_{0} \subset U{\mathcal{H}}_{0} \) , (ii) \( \mathop{\bigvee }\limits_{{n \in \mathbb{Z}}}{U}^{n}{\mathcal{H}}_{0} = \mathcal{H} \) (iii) \( \mathop{\bigwedge }\limits_{{n \in \mathbb{Z}}}{U}^{n}{\mathcal{H}}_{0} = \{ 0\} \) where the symbols \( \vee \) and \( \land \) denote the lattice operations from Hilbert space, i.e., \( \bigvee \) applied to a family of closed subspaces in \( \mathcal{H} \) means "the closed linear span", and \( \bigwedge \) means "intersection". Remark 8.4.2. Let a system \( \left( {U,\mathcal{H},{\mathcal{H}}_{0}}\right) \) be given as in Definition 8.4.1, but suppose only (i)-(ii) are satisfied. Then \( U \) and \( \mathcal{H} \) may be modified such that the resulting reduced system \( \left( {{U}^{\prime },{\mathcal{H}}^{\prime },{\mathcal{H}}_{0}^{\prime }}\right) \) satisfies all three conditions (i)-(iii). This follows from a simple application of the Wold decomposition; see [BrJo02b]. We now sketch the details of proof: Suppose only (i)-(ii) hold. 
Then set \[ \mathcal{K} \mathrel{\text{:=}} \mathop{\bigwedge }\limits_{{n \in \mathbb{Z}}}{U}^{n}{\mathcal{H}}_{0} \] (8.4.1) \[ {\mathcal{H}}^{\prime } \mathrel{\text{:=}} \mathcal{H} \ominus \mathcal{K} = \{ f \in \mathcal{H} \mid \langle f \mid k\rangle = 0\text{ for all }k \in \mathcal{K}\} , \] (8.4.2) and \[ {\mathcal{H}}_{0}^{\prime } = {\mathcal{H}}_{0} \ominus \mathcal{K} \] We leave to the reader the verification of the assertions; i.e., that the reduced system \( {\mathcal{H}}_{0} \rightarrow {\mathcal{H}}_{0}^{\prime },\mathcal{H} \rightarrow {\mathcal{H}}^{\prime } \), and \( U \rightarrow {U}^{\prime },{U}^{\prime } \mathrel{\text{:=}} {\left. U\right| }_{{\mathcal{H}}^{\prime }} \), satisfies all three conditions (i)-(iii). The significance of Definition 8.4.1 is reflected in the following general result on unitary equivalence. Theorem 8.4.3. Let \( \left( {U,\mathcal{H},{\mathcal{H}}_{0}}\right) \) and \( \left( {{U}^{\prime },{\mathcal{H}}^{\prime },{\mathcal{H}}_{0}^{\prime }}\right) \) be two given systems as stated in Definition 8.4.1, and assume that all three conditions (i)–(iii) hold for both systems, i.e., that the two multiresolutions are pure. Then there is a unitary isomorphism \[ W : \mathcal{H} \rightarrow {\mathcal{H}}^{\prime } \] (8.4.3) which satisfies the following two conditions: \[ W{\mathcal{H}}_{0} = {\mathcal{H}}_{0}^{\prime } \] (8.4.4) and \[ {WU} = {U}^{\prime }W\;\text{ (intertwining). } \] (8.4.5) (We say that the systems are unitarily equivalent.) Proof. This is a standard result in operator theory, and we refer the reader to [BrJo02b, Chapter 2]. We further add that the underlying ideas date back to Kolmogorov [Kol77] in the 1930s. We now show that the tensor factorization \( U = \mathop{\sum }\limits_{i}{P}_{i} \otimes {S}_{i}^{ * } \) from Section 8.2 above gives rise to multiresolutions. Theorem 8.4.4. 
Let \( N \in \mathbb{N}, N \geq 2 \), and let \( \left( {X,\sigma ,{\tau }_{i}}\right) \) satisfy the conditions in Lemma 8.2.2. Let \( {\left( {P}_{i}\right) }_{i = 0}^{N - 1} \) be the corresponding representation of \( {\mathcal{O}}_{N} \) in \( {\ell }^{2}\left( X\right) \) . Let \( {\left( {S}_{i}\right) }_{i = 0}^{N - 1} \) be a representation of \( {\mathcal{O}}_{N} \) in a Hilbert space \( \mathcal{K} \), and set \[ \mathcal{H} \mathrel{\text{:=}} {\ell }^{2}\left( X\right) \otimes \mathcal{K} \] (8.4.6) \[ U \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{P}_{i} \otimes {S}_{i}^{ * } \] (8.4.7) Let a subset \( E \subset X \) satisfy (a) and (b) below, (a) \( \;\sigma \left( E\right) = E \) and (b) \( \mathop{\bigcup }\limits_{{n \in {\mathbb{N}}_{0}}}{\sigma }^{-n}\left( E\right) = X \) and set \[ {\mathcal{H}}_{n} \mathrel{\text{:=}} {\ell }^{2}\left( {{\sigma }^{-n}\left( E\right) }\right) \otimes \mathcal{K}. \] (8.4.8) Then \[ {U}^{n}{\mathcal{H}}_{0} = {\mathcal{H}}_{n} \] (8.4.9) and \[ \left( {U,\mathcal{H},{\mathcal{H}}_{0}}\right) \text{is a multiresolution system,} \] (8.4.10) i.e., \[ {\mathcal{H}}_{0} \mathrel{\text{:=}} {\ell }^{2}\left( E\right) \otimes \mathcal{K} \] generates a multiresolution for \( U \) . Proof. 
To prove that (i) in Definition 8.4.1 is satisfied note that for every subset \( E \subset X \), we have \[ E \subset {\sigma }^{-1}\left( {\sigma \left( E\right) }\right) \] (8.4.11) Substituting (a) from the theorem, we get \[ E \subset {\sigma }^{-1}\left( E\right) = \mathop{\bigcup }\limits_{{i = 0}}^{{N - 1}}{\tau }_{i}\left( E\right) \] (8.4.12) But if \( e \in E \) and \( k \in \mathcal{K} \), then \[ |e\rangle \otimes k = U{U}^{ * }|e\rangle \otimes k \] (8.4.13) \[ = U\left( {\mathop{\sum }\limits_{i}{P}_{i}^{ * }|e\rangle \otimes {S}_{i}k}\right) \] \[ = U\left( {\mathop{\sum }\limits_{i}{\chi }_{{\tau }_{i}\left( X\right) }\left( e\right) |\sigma \left( e\right) \rangle \otimes {S}_{i}k}\right) . \] The desired conclusion \( {\mathcal{H}}_{0} \subset U{\mathcal{H}}_{0} \) follows. If \( n \in {\mathbb{N}}_{0} \), then \[ {U}^{n}\left( {|e\rangle \otimes k}\right) = \mathop{\sum }\limits_{{{i}_{1},\ldots ,{i}_{n}}}\left| {{\tau }_{{i}_{1}}{\tau }_{{i}_{2}}\cdots {\tau }_{{i}_{n}}e}\right\rangle \otimes {S}_{{i}_{n}}^{ * }\cdots {S}_{{i}_{1}}^{ * }k. \] (8.4.14) Now, combining (8.4.14) and (8.4.12), we conclude that \[ {U}^{n}{\mathcal{H}}_{0} \subset {\mathcal{H}}_{n} \] However, the argument from (8.4.13) shows that this inclusion is in fact an identity. The remaining property (ii) from Definition 8.4.1 clearly follows from this, and (b) in the theorem. Remark 8.4.5. In applications of Theorem 8.4.4, it is useful to reduce general multiresolutions to a system of minimal ones. If \( N \) and \( \left( {X,\sigma ,{\tau }_{i}}\right) \) are as stated in Theorem 8.4.4, it is helpful to make a careful choice of the sets \( E \) to be used in (a) and (b) of the theorem. If \( E \subset X \) satisfies \( \sigma \left( E\right) = E \), we say that \( E \) is minimal if the following holds: \[ \left\{ \begin{matrix} \varnothing \neq A \subset E, \\ \sigma \left( A\right) = A \end{matrix}\right\} \Rightarrow A = E. 
\] (8.4.15) In our example (8.2.6) above, we had \( E = \{ - 3, - 2, - 1,0\} \) satisfying \( \sigma \left( E\right) = E \) , but each of the sets \[ {E}_{1} = \{ 0\} \] \[ {E}_{2} = \{ - 2, - 1\} \] and \[ {E}_{3} = \{ - 3\} \] satisfies \( \sigma \left( {E}_{i}\right) = {E}_{i} \), and each \( {E}_{i} \) is minimal. Remark 8.4.6. The next result shows that such minimal sets \( {E}_{i} \) may be chosen in general. However, it is only for rather special branching systems \( \left( {X,\sigma ,{\tau }_{i}}\right) \) that the choices are natural. Lemma 8.4.7. Let \( N \in \mathbb{N}, N \geq 2 \), and let \( \left( {X,\sigma ,{\tau }_{i}}\right) \) satisfy the conditions in Lemma 8.2.2. Suppose some subset \( E \subset X \) satisfies conditions (a) and (b) in Theorem 8.4.4. Then there are subsets \( {E}_{i} \subset E \) such that \[ \sigma \left( {E}_{i}\right) = {E}_{i} \] (8.4.16) \[ {E}_{i} \cap {E}_{j} = \varnothing \;\text{ for }i \neq j, \] (8.4.17) \[ \mathop{\bigcup }\limits_{i}{E}_{i} = E \] (8.4.18) and \[ \text{each}{E}_{i}\text{is a minimal solution to (8.4.16).} \] (8.4.19) Proof. Zorn's lemma. Definition 8.4.8. If \( A \) is a subset of \( X \), we set \[ \bar{A} \mathrel{\text{:=}} \mathop{\bigcup }\limits_{{n \in {\mathbb{N}}_{0}}}{\sigma }^{-n}\left( A\right) \] (8.4.20) Proposition 8.4.9. Let \( N \in \mathbb{N}, N \geq 2 \), and let \( \left( {X,\sigma ,{\tau }_{i}}\right) \) satisfy the conditions in Lemma 8.2.2. Let \( \left( {E,{\left( {E}_{i}\right) }_{i \in I}}\right) \) be a system of subsets in \( X \) which satisfies the conditions in Lemma 8.4.7, where \( I \) is a chosen index set. Then \[ {\bar{E}}_{i} \cap {\bar{E}}_{j} = \varnothing \;\text{ for }i \neq j \] (8.4.21) and \[ \mathop{\bigcup }\limits_{{i \in I}}{\bar{E}}_{i} = X \] (8.4.22) Proof. We leave the easy details to the reader. Remark 8.4.10. 
The significance of conditions (8.4.21)-(8.4.22) in Proposition 8.4.9 is that they generate mutually orthogonal multiresolutions in the Hilbert space \( {\ell }^{2}\left( X\right) \otimes \mathcal{K} \) . Specifically, we get an orthogonal decomposition \[ {\ell }^{2}\left( X\right) \otimes \mathcal{K} = \mathop{\sum }\limits_{{i \in I}}{}^{ \oplus }{\ell }^{2}\left( {\bar{E}}_{i}\right) \otimes \mathcal{K} \] (8.4.23) which reduces the operators, and the representations on \( {\ell }^{2}\left( X\right) \otimes \mathcal{K} \) considered above. Returning to the three sets \( {E}_{1},{E}_{2} \), and \( {E}_{3} \) from Remark 8.4.5 (and Figure 8.2, p. 161, from (8.2.6)), we get \( \mathbb{Z} \) written as a disjoint union of the associated three sets \( {\bar{E}}_{1},{\bar{E}}_{2} \), and \( {\bar{E}}_{3} \) . The sets are equivalence classes for an equivalence relation studied in [BrJo99a]: \[ {\bar{E}}_{1} = \{ 0,3,6,9,{12},{15},{18},{21},{24},{27},{30},{33},{36},{39},{42},{45},\ldots \} , \] \[ {\bar{E}}_{2} = \{ - 2, - 1, - 4,1, - 8, - 5,2,5, - {16}, - {13}, - {10}, - 7,4,7,{10},{13},\ldots \} , \] ![c9c6b607-efb6-4d90-9e43-38960086b6c1_217_0.jpg](images/c9c6b607-efb6-4d90-9e43-38960086b6c1_217_0.jpg) Fig. 8.3. The pyramid on \( {E}_{2} = \{ - 2, - 1\} \) (may be seen as a subdiagram of Figure 8.2, p. 161). and \[ {\bar{E}}_{3} = \{ - 3, - 6, - {12}, - 9, - {24}, - {21}, - {18}, - {15},\ldots \} . \] The two sets \( {\bar{E}}_{1} \) and \( {\bar{E}}_{3} \) are singly generated. The middle one can be
1124_(GTM30)Lectures in Abstract Algebra I Basic Concepts
Definition 1
Definition 1. A ring is a system consisting of a set \( \mathfrak{A} \) and two binary compositions in \( \mathfrak{A} \) called addition and multiplication such that 1. \( \mathfrak{A} \) together with addition \( \left( +\right) \) is a commutative group. 2. \( \mathfrak{A} \) together with multiplication \( \left( \cdot \right) \) is a semi-group. 3. The distributive laws D \[ a\left( {b + c}\right) = {ab} + {ac} \] \[ \left( {b + c}\right) a = {ba} + {ca} \] hold. Thus the assumptions included under 1 and 2 are that \( a + b \) and \( {ab} \in \mathfrak{A} \) and satisfy the following conditions: A1 \[ \left( {a + b}\right) + c = a + \left( {b + c}\right) . \] A2 \[ a + b = b + a. \] A3 There is an element 0 such that \( a + 0 = a = 0 + a \) . A4 For each \( a \) there is a negative \( - a \) such that \( a + \left( {-a}\right) = 0 \) \( = - a + a \) . M \[ \left( {ab}\right) c = a\left( {bc}\right) . \] The system \( \mathfrak{A}, + \) will be called the additive group and the system \( \mathfrak{A}, \cdot \) will be called the multiplicative semi-group of the ring. Examples. (1) The set \( I \) of integers with the ordinary addition and multiplication operations. We have noted in the Introduction that this is a ring. (2) The set \( {R}_{0} \) of rational numbers with the usual addition and multiplication. A rigorous definition of this ring will be given in the next chapter. (3) The set \( R \) of real numbers with the usual addition and multiplication. (4) The set \( I\left\lbrack \sqrt{2}\right\rbrack \) of real numbers of the form \( m + n\sqrt{2} \) where \( m \) and \( n \) are integers, addition and multiplication as usual. Clearly the sum and difference of two numbers in \( I\left\lbrack \sqrt{2}\right\rbrack \) belong to this set. 
Also \[ \left( {m + n\sqrt{2}}\right) \left( {{m}^{\prime } + {n}^{\prime }\sqrt{2}}\right) = \left( {m{m}^{\prime } + {2n}{n}^{\prime }}\right) + \left( {m{n}^{\prime } + n{m}^{\prime }}\right) \sqrt{2} \] so that \( I\left\lbrack \sqrt{2}\right\rbrack \) is closed under multiplication. It follows easily that this system is a ring (see the discussion of subrings in \( §5 \) ). (5) The set \( {R}_{0}\left\lbrack \sqrt{2}\right\rbrack \) of real numbers of the form \( a + b\sqrt{2} \) where \( a \) and \( b \) are rational numbers, addition and multiplication as usual. (6) The set \( C \) of complex numbers with the usual addition and multiplication. (7) The set \( I\left\lbrack \sqrt{-1}\right\rbrack \) of complex numbers of the form \( m + n\sqrt{-1}, m \) and \( n \) integers with ordinary addition and multiplication. This example is similar to (4). (8) The set \( \Gamma \) of real valued continuous functions on the interval \( \left\lbrack {0,1}\right\rbrack \) where \( \left( {f + g}\right) \left( x\right) = f\left( x\right) + g\left( x\right) \) and \( \left( {fg}\right) \left( x\right) = f\left( x\right) g\left( x\right) \) . (9) The set consisting of the two elements 0,1 with the following addition and multiplication tables: ![9c7d47d5-24bb-4360-bb03-9c6a5458d669_61_0.jpg](images/9c7d47d5-24bb-4360-bb03-9c6a5458d669_61_0.jpg) ![9c7d47d5-24bb-4360-bb03-9c6a5458d669_61_1.jpg](images/9c7d47d5-24bb-4360-bb03-9c6a5458d669_61_1.jpg) ## EXERCISES 1. Let \( A \) be the set of all real valued functions on \( \left( {-\infty ,\infty }\right) \) . Show that \( A \) is a group with the ordinary addition and that \( A \) is a semi-group relative to \( f \cdot g\left( x\right) = f\left( {g\left( x\right) }\right) \) . Is \( A \) a ring relative to these two compositions? 2. 
Show that the three elements \( 0,1,2 \) constitute a ring if addition and multiplication are defined by the following tables \[ \begin{array}{c|ccc} + & 0 & 1 & 2 \\ \hline 0 & 0 & 1 & 2 \\ 1 & 1 & 2 & 0 \\ 2 & 2 & 0 & 1 \end{array}\;\;\begin{array}{c|ccc} \cdot & 0 & 1 & 2 \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 2 \\ 2 & 0 & 2 & 1 \end{array} \] A number of elementary properties of rings are consequences of the fact that a ring is a group relative to addition and a semi-group relative to multiplication. For example, we have \( - \left( {a + b}\right) = - a - b \equiv - a + \left( {-b}\right) \) and, if \( {na} \) is defined for the integer \( n \) as before, then the rules for multiples \[ n\left( {a + b}\right) = {na} + {nb} \] \[ \left( {n + m}\right) a = {na} + {ma} \] \[ \left( {nm}\right) a = n\left( {ma}\right) \] hold. Also the generalized associative laws hold for addition and for multiplication, and the generalized commutative law holds for addition. There are also a number of other simple results that follow from the distributive laws. In the first place, induction on \( m \) and \( n \) gives the generalization \[ \left( {{a}_{1} + {a}_{2} + \cdots + {a}_{m}}\right) \left( {{b}_{1} + {b}_{2} + \cdots + {b}_{n}}\right) \] \[ = {a}_{1}{b}_{1} + {a}_{1}{b}_{2} + \cdots + {a}_{1}{b}_{n} + {a}_{2}{b}_{1} + {a}_{2}{b}_{2} + \cdots + {a}_{2}{b}_{n} + \cdots \] \[ + {a}_{m}{b}_{1} + \cdots + {a}_{m}{b}_{n} \] or \[ \left( {\mathop{\sum }\limits_{1}^{m}{a}_{i}}\right) \left( {\mathop{\sum }\limits_{1}^{n}{b}_{j}}\right) = \mathop{\sum }\limits_{{i = 1, j = 1}}^{{m, n}}{a}_{i}{b}_{j} \] We note next that \[ {a0} = 0 = {0a} \] for all \( a \) ; for we have \( {a0} = a\left( {0 + 0}\right) = {a0} + {a0} \) . Addition of \( - {a0} \) gives \( {a0} = 0 \) . Similarly \( {0a} = 0 \) . We have the equation \[ 0 = {0b} = \left( {a + \left( {-a}\right) }\right) b = {ab} + \left( {-a}\right) b, \] which shows that \[ \left( {-a}\right) b = - {ab}.
\] Similarly \( a\left( {-b}\right) = - {ab} \) and consequently \[ \left( {-a}\right) \left( {-b}\right) = - a\left( {-b}\right) = - \left( {-{ab}}\right) = {ab}. \] ## EXERCISES 1. Prove that \( a\left( {b - c}\right) = {ab} - {ac} \) . 2. Prove that for any integer \( n, n\left( {ab}\right) = \left( {na}\right) b = a\left( {nb}\right) \) . 3. Let \( \mathfrak{A} \) be a system which satisfies all the conditions for a ring except commutativity of addition. Prove that, if \( \mathfrak{A} \) contains an element \( c \) that can be left cancelled in the sense that \( {ca} = {cb} \) implies \( a = b \), then \( \mathfrak{A} \) is a ring. If \( a \) and \( b \) commute in the sense that \( {ab} = {ba} \), then the powers of \( a \) commute with the powers of \( b \) and we can prove by induction the important binomial theorem: (1) \[ {\left( a + b\right) }^{n} = {a}^{n} + \left( \begin{array}{l} n \\ 1 \end{array}\right) {a}^{n - 1}b + \left( \begin{array}{l} n \\ 2 \end{array}\right) {a}^{n - 2}{b}^{2} + \cdots + {b}^{n}, \] where \( \left( \begin{array}{l} n \\ i \end{array}\right) \) is an integer and is given by the formula (2) \[ \left( \begin{array}{l} n \\ i \end{array}\right) = \frac{n!}{i!\left( {n - i}\right) !} \] This is evident if \( n = 1 \) . Assume now that (3) \[ {\left( a + b\right) }^{r} = \mathop{\sum }\limits_{{k = 0}}^{r}\left( \begin{array}{l} r \\ k \end{array}\right) {a}^{k}{b}^{r - k}. \] We use here the convention that \( 0! = 1 \) so that (3) agrees with (1) for \( n = r \) . Now multiply both sides of (3) by \( a + b \) . Then we obtain \[ {\left( a + b\right) }^{r + 1} = \mathop{\sum }\limits_{{k = 0}}^{r}\left( \begin{array}{l} r \\ k \end{array}\right) {a}^{k + 1}{b}^{r - k} + \mathop{\sum }\limits_{{k = 0}}^{r}\left( \begin{array}{l} r \\ k \end{array}\right) {a}^{k}{b}^{r - k + 1}. 
\] The term \( {a}^{k}{b}^{r + 1 - k}, k \neq 0, r + 1 \), in the right-hand side of this equation has the coefficient \[ \left( \begin{array}{l} r \\ k \end{array}\right) + \left( \begin{matrix} r \\ k - 1 \end{matrix}\right) = \frac{r!}{k!\left( {r - k}\right) !} + \frac{r!}{\left( {k - 1}\right) !\left( {r - k + 1}\right) !} \] \[ = \frac{r!\left( {r - k + 1}\right) + r!k}{k!\left( {r - k + 1}\right) !} \] \[ = \frac{\left( {r + 1}\right) !}{k!\left( {r - k + 1}\right) !} = \left( \begin{matrix} r + 1 \\ k \end{matrix}\right) . \] Hence (1) holds for \( n = r + 1 \) and this completes the proof. 2. Types of rings. We obtain various types of rings by imposing conditions on the multiplicative semi-group. Thus a ring \( \mathfrak{A} \) is said to be commutative if its multiplicative semi-group is commutative. The ring \( \mathfrak{A} \) is said to have an identity if its multiplicative semi-group has an identity. If such an element exists, it is unique. All of the examples listed above are commutative and have identities. An example of a ring without an identity is the set of even integers. Examples of non-commutative rings will be given in \( §§4 - 5 \) . If the identity \( 1 = 0 \), any \( a = {a1} = {a0} = 0 \) so that \( \mathfrak{A} \) has only one element. In other words, if \( \mathfrak{A} \neq 0 \), then \( 1 \neq 0 \) . A ring is called an integral domain (domain of integrity) if the set \( {\mathfrak{A}}^{ * } \) of non-zero elements determines a sub-semi-group of the multiplicative semi-group. This, of course, means simply that, if \( a \neq 0 \) and \( b \neq 0 \) in \( \mathfrak{A} \), then \( {ab} \neq 0 \) . All of the foregoing examples except (8) are of this type. On the other hand, in (8) we can take the two elements \[ f\left( x\right) = \left\{ \begin{array}{l} 0\text{ for }0 \leq x \leq \frac{1}{2} \\ x - \frac{1}{2}\text{ for }\frac{1}{2} < x \leq 1 \end{array}\right. 
\] \[ g\left( x\right) = \left\{ \begin{array}{l} - x + \frac{1}{2}\text{ for }0 \leq x \leq \frac{1}{2} \\ 0\text{ for }\frac{1}{2} < x \leq 1 \end{array}\right. . \] Then \( f \neq 0 \) and \( g \neq 0 \) (where 0 denotes the constant function 0 ) but \( {fg} = 0 \) . Hence the ring of continuous functions on \( \left\lbrack {0,1}\right\rbrack \) is not an integral domain. If \( a \) is an element of a ring \( \mathfrak{A} \) for which there exists \( b \neq 0 \) such that \( {ab} = 0 \) (respectively \( {ba} = 0 \) ), then \( a \) is called a left (right) zero-divisor in \( \mathfrak{A} \) . Clearly the element 0 is a left and right zero-divisor if \( \mathfrak{A} \) contains more than one element. If \( a \neq 0 \) is a left zero-divisor and \( {ab} = 0 \) for \( b \
1098_(GTM254)Algebraic Function Fields and Codes
Definition 2.3.1
Definition 2.3.1. An algebraic geometry code \( {C}_{\mathcal{L}}\left( {D, G}\right) \) associated with divisors \( G \) and \( D \) of a rational function field \( {\mathbb{F}}_{q}\left( z\right) /{\mathbb{F}}_{q} \) is said to be rational (as in Section 2.2 it is assumed that \( D = {P}_{1} + \ldots + {P}_{n} \) with pairwise distinct places of degree one, and \( \operatorname{supp}G \cap \operatorname{supp}D = \varnothing \) ). Observe that the length of a rational AG code is bounded by \( q + 1 \) because \( {\mathbb{F}}_{q}\left( z\right) \) has only \( q + 1 \) places of degree one: the pole \( {P}_{\infty } \) of \( z \) and for each \( \alpha \in {\mathbb{F}}_{q} \), the zero \( {P}_{\alpha } \) of \( z - \alpha \) (see Proposition 1.2.1). The following results follow immediately from Section 2.2. Proposition 2.3.2. Let \( C = {C}_{\mathcal{L}}\left( {D, G}\right) \) be a rational AG code over \( {\mathbb{F}}_{q} \), and let \( n, k, d \) be the parameters of \( C \) . Then we have: (a) \( n \leq q + 1 \) . (b) \( k = 0 \) if and only if \( \deg G < 0 \), and \( k = n \) if and only if \( \deg G > n - 2 \) . (c) For \( 0 \leq \deg G \leq n - 2 \) , \[ k = 1 + \deg G\;\text{ and }d = n - \deg G. \] In particular, \( C \) is an MDS code. (d) \( {C}^{ \bot } \) is also a rational \( {AG} \) code. Next we determine specific generator matrices for rational AG codes. Proposition 2.3.3. Let \( C = {C}_{\mathcal{L}}\left( {D, G}\right) \) be a rational AG code over \( {\mathbb{F}}_{q} \) with parameters \( n, k \) and \( d \) . 
(a) If \( n \leq q \) then there exist pairwise distinct elements \( {\alpha }_{1},\ldots ,{\alpha }_{n} \in {\mathbb{F}}_{q} \) and \( {v}_{1},\ldots ,{v}_{n} \in {\mathbb{F}}_{q}^{ \times } \) (not necessarily distinct) such that \[ C = \left\{ {\left( {{v}_{1}f\left( {\alpha }_{1}\right) ,{v}_{2}f\left( {\alpha }_{2}\right) ,\ldots ,{v}_{n}f\left( {\alpha }_{n}\right) }\right) \mid f \in {\mathbb{F}}_{q}\left\lbrack z\right\rbrack \text{ and }\deg f \leq k - 1}\right\} . \] The matrix \[ M = \left( \begin{matrix} {v}_{1} & {v}_{2} & \ldots & {v}_{n} \\ {\alpha }_{1}{v}_{1} & {\alpha }_{2}{v}_{2} & \ldots & {\alpha }_{n}{v}_{n} \\ {\alpha }_{1}^{2}{v}_{1} & {\alpha }_{2}^{2}{v}_{2} & \ldots & {\alpha }_{n}^{2}{v}_{n} \\ \vdots & \vdots & & \vdots \\ {\alpha }_{1}^{k - 1}{v}_{1} & {\alpha }_{2}^{k - 1}{v}_{2} & \ldots & {\alpha }_{n}^{k - 1}{v}_{n} \end{matrix}\right) \] (2.14) is a generator matrix for \( C \) . (b) If \( n = q + 1, C \) has a generator matrix \[ M = \left( \begin{matrix} {v}_{1} & {v}_{2} & \ldots & {v}_{n - 1} & 0 \\ {\alpha }_{1}{v}_{1} & {\alpha }_{2}{v}_{2} & \ldots & {\alpha }_{n - 1}{v}_{n - 1} & 0 \\ {\alpha }_{1}^{2}{v}_{1} & {\alpha }_{2}^{2}{v}_{2} & \ldots & {\alpha }_{n - 1}^{2}{v}_{n - 1} & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ {\alpha }_{1}^{k - 1}{v}_{1} & {\alpha }_{2}^{k - 1}{v}_{2} & \ldots & {\alpha }_{n - 1}^{k - 1}{v}_{n - 1} & 1 \end{matrix}\right) \] (2.15) where \( {\mathbb{F}}_{q} = \left\{ {{\alpha }_{1},\ldots ,{\alpha }_{n - 1}}\right\} \) and \( {v}_{1},\ldots ,{v}_{n - 1} \in {\mathbb{F}}_{q}^{ \times } \) . Proof. (a) Let \( D = {P}_{1} + \ldots + {P}_{n} \) . As \( n \leq q \), there is a place \( P \) of degree one which is not in the support of \( D \) . Choose a place \( Q \neq P \) of degree one (e.g., \( Q = {P}_{1} \) ). By Riemann-Roch, \( \ell \left( {Q - P}\right) = 1 \), hence \( Q - P \) is a principal divisor (Corollary 1.4.12). 
Let \( Q - P = \left( z\right) \) ; then \( z \) is a generating element of the rational function field over \( {\mathbb{F}}_{q} \) and \( P \) is the pole divisor of \( z \) . As usual we write \( P = {P}_{\infty } \) . By Proposition 2.3.2 we can assume that \( \deg G = k - 1 \geq 0 \) (the case \( k = 0 \) being trivial). The divisor \( \left( {k - 1}\right) {P}_{\infty } - G \) has degree zero, so it is principal (Riemann-Roch and Corollary 1.4.12), say \( \left( {k - 1}\right) {P}_{\infty } - G = \left( u\right) \) with \( 0 \neq u \in F \) . The \( k \) elements \( u,{zu},\ldots ,{z}^{k - 1}u \) are in \( \mathcal{L}\left( G\right) \) and they are linearly independent over \( {\mathbb{F}}_{q} \) . Since \( \ell \left( G\right) = k \), they constitute a basis of \( \mathcal{L}\left( G\right) \) ; i.e., \[ \mathcal{L}\left( G\right) = \left\{ {{uf}\left( z\right) \mid f \in {\mathbb{F}}_{q}\left\lbrack z\right\rbrack \text{ and }\deg f \leq k - 1}\right\} . \] Setting \( {\alpha }_{i} \mathrel{\text{:=}} z\left( {P}_{i}\right) \) and \( {v}_{i} \mathrel{\text{:=}} u\left( {P}_{i}\right) \) we obtain \[ \left( {{uf}\left( z\right) }\right) \left( {P}_{i}\right) = u\left( {P}_{i}\right) f\left( {z\left( {P}_{i}\right) }\right) = {v}_{i}f\left( {\alpha }_{i}\right) \] for \( i = 1,\ldots, n \) . Therefore \[ C = {C}_{\mathcal{L}}\left( {D, G}\right) = \left\{ {\left( {{v}_{1}f\left( {\alpha }_{1}\right) ,\ldots ,{v}_{n}f\left( {\alpha }_{n}\right) }\right) \mid \deg f \leq k - 1}\right\} . \] The codeword in \( C \) corresponding to \( u{z}^{j} \) is \( \left( {{v}_{1}{\alpha }_{1}^{j},{v}_{2}{\alpha }_{2}^{j},\ldots ,{v}_{n}{\alpha }_{n}^{j}}\right) \), so the matrix (2.14) is a generator matrix of \( C \) . (b) The proof is essentially the same as in the case \( n \leq q \) . Now we have \( n = q + 1 \) and we can choose \( z \) in such a way that \( {P}_{n} = {P}_{\infty } \) is the pole of \( z \) .
As above, \( \left( {k - 1}\right) {P}_{\infty } - G = \left( u\right) \) with \( 0 \neq u \in F \), and \( \left\{ {u,{zu},\ldots ,{z}^{k - 1}u}\right\} \) is a basis of \( \mathcal{L}\left( G\right) \) . For \( 1 \leq i \leq n - 1 = q \) the elements \( {\alpha }_{i} \mathrel{\text{:=}} z\left( {P}_{i}\right) \in {\mathbb{F}}_{q} \) are pairwise distinct, so \( {\mathbb{F}}_{q} = \left\{ {{\alpha }_{1},\ldots ,{\alpha }_{n - 1}}\right\} \) . Moreover, \( {v}_{i} \mathrel{\text{:=}} u\left( {P}_{i}\right) \in {\mathbb{F}}_{q}^{ \times } \) for \( i = 1,\ldots, n - 1 \) . For \( 0 \leq j \leq k - 2 \) we obtain \[ \left( {\left( {u{z}^{j}}\right) \left( {P}_{1}\right) ,\ldots ,\left( {u{z}^{j}}\right) \left( {P}_{n}\right) }\right) = \left( {{\alpha }_{1}^{j}{v}_{1},\ldots ,{\alpha }_{n - 1}^{j}{v}_{n - 1},0}\right) , \] but for \( j = k - 1 \) we have \[ \left( {\left( {u{z}^{k - 1}}\right) \left( {P}_{1}\right) ,\ldots ,\left( {u{z}^{k - 1}}\right) \left( {P}_{n}\right) }\right) = \left( {{\alpha }_{1}^{k - 1}{v}_{1},\ldots ,{\alpha }_{n - 1}^{k - 1}{v}_{n - 1},\gamma }\right) \] with an element \( 0 \neq \gamma \in {\mathbb{F}}_{q} \) . Replacing \( u \) by \( {\gamma }^{-1}u \) yields the generator matrix (2.15). Definition 2.3.4. Let \( \alpha = \left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \) where the \( {\alpha }_{i} \) are distinct elements of \( {\mathbb{F}}_{q} \), and let \( v = \left( {{v}_{1},\ldots ,{v}_{n}}\right) \) where the \( {v}_{i} \) are nonzero (not necessarily distinct) elements of \( {\mathbb{F}}_{q} \) . Then the Generalized Reed-Solomon code, denoted by \( {\operatorname{GRS}}_{k}\left( {\alpha, v}\right) \), consists of all vectors \[ \left( {{v}_{1}f\left( {\alpha }_{1}\right) ,\ldots ,{v}_{n}f\left( {\alpha }_{n}\right) }\right) \] with \( f\left( z\right) \in {\mathbb{F}}_{q}\left\lbrack z\right\rbrack \) and \( \deg f \leq k - 1 \) (for a fixed \( k \leq n \) ).
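To see Definition 2.3.4 in action, here is a small sketch (ours; the prime field \( {\mathbb{F}}_{7} \), the vectors `alpha` and `v`, and the function name are chosen purely for illustration) that produces \( {\operatorname{GRS}}_{k}\left( {\alpha, v}\right) \) codewords by evaluating polynomials of degree \( \leq k - 1 \):

```python
# Illustrative sketch: GRS_k(alpha, v) codewords over the prime field F_7.
p = 7  # work in F_p = Z/pZ for a prime p

def grs_encode(f_coeffs, alpha, v):
    """Encode coefficients f_0..f_{k-1} of f(z) as (v_i * f(alpha_i)) mod p."""
    def f(x):
        return sum(c * pow(x, j, p) for j, c in enumerate(f_coeffs)) % p
    return [(vi * f(ai)) % p for ai, vi in zip(alpha, v)]

alpha = [0, 1, 2, 3, 4, 5]      # distinct elements of F_7, so n = 6
v     = [1, 3, 1, 2, 1, 1]      # nonzero multipliers v_i
k     = 3

c = grs_encode([1, 0, 1], alpha, v)   # f(z) = 1 + z^2
print(c)                              # [1, 6, 5, 6, 3, 5]
print(sum(ci != 0 for ci in c))       # weight of this codeword: 6
```

A nonzero \( f \) of degree \( \leq k - 1 \) has at most \( k - 1 \) roots, so a nonzero codeword has weight at least \( n - k + 1 \); this is the MDS property of Proposition 2.3.2 seen directly.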
In the case \( \alpha = \left( {\beta ,{\beta }^{2},\ldots ,{\beta }^{n}}\right) \) (where \( n = q - 1 \) and \( \beta \) is a primitive \( n \) -th root of unity) and \( v = \left( {1,1,\ldots ,1}\right) ,{\operatorname{GRS}}_{k}\left( {\alpha, v}\right) \) is a Reed-Solomon code, cf. Section 2.2. Obviously \( {\operatorname{GRS}}_{k}\left( {\alpha, v}\right) \) is an \( \left\lbrack {n, k}\right\rbrack \) code, and Proposition 2.3.3.(a) states that all rational AG codes over \( {\mathbb{F}}_{q} \) of length \( n \leq q \) are Generalized Reed-Solomon codes. The converse is also true: Proposition 2.3.5. Every generalized Reed-Solomon code \( {\operatorname{GRS}}_{k}\left( {\alpha, v}\right) \) can be represented as a rational \( \mathrm{{AG}} \) code. Proof. Let \( \alpha = \left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \) with \( {\alpha }_{i} \in {\mathbb{F}}_{q} \) and \( v = \left( {{v}_{1},\ldots ,{v}_{n}}\right) \) with \( {v}_{i} \in {\mathbb{F}}_{q}^{ \times } \) . Consider the rational function field \( F = {\mathbb{F}}_{q}\left( z\right) \) . Denote by \( {P}_{i} \) the zero of \( z - {\alpha }_{i}\left( {i = 1,\ldots, n}\right) \) and by \( {P}_{\infty } \) the pole of \( z \) . Choose \( u \in F \) such that \[ u\left( {P}_{i}\right) = {v}_{i}\;\text{ for }\;i = 1,\ldots, n. \] (2.16) Such an element exists by the Approximation Theorem. (One can also determine a polynomial \( u = u\left( z\right) \in {\mathbb{F}}_{q}\left\lbrack z\right\rbrack \) satisfying (2.16), by using Lagrange interpolation.) Now let \[ D \mathrel{\text{:=}} {P}_{1} + \ldots + {P}_{n}\;\text{ and }\;G \mathrel{\text{:=}} \left( {k - 1}\right) {P}_{\infty } - \left( u\right) . \] The proof of Proposition 2.3.3 shows that \( {\operatorname{GRS}}_{k}\left( {\alpha, v}\right) = {C}_{\mathcal{L}}\left( {D, G}\right) \) . The same arguments apply to a code of length \( n = q + 1 \) over \( {\mathbb{F}}_{q} \) which has a generator matrix of the specific form (2.15). 
All such codes can be represented as rational AG codes. In order to determine the dual of a rational AG code \( C = {C}_{\mathcal{L}}\left( {D, G}\right) \), we need (by Theorem 2.2.8 and Proposition 2.2.10) a Weil differential \( \omega \) of \( {\mathbb{F}}_{q}\left( z\right) \) such that \[ {v}_{{P}_{i}}\left( \omega \right) = - 1\;\text{ and }\;{\omega }_{{P}_{i}}\left( 1\right) = 1\;\text{ for }\;i = 1,\ldots, n. \] (2.17) Lemma 2.3.6. Consider the rational function field \( F = {\mathbb{F}}_{q}\left( z\right) \) and \( n \) distinct elements \( {\alpha }_{1},\ldots ,{\alpha }_{n} \in {\mat
1309_[杨诗武] Hyperbolic Partial Differential Equations (2022)
Definition 2.2
Definition 2.2. Let \( P\left( \partial \right) = \sum {a}_{\alpha }{\partial }^{\alpha },{a}_{\alpha } \in \mathbb{C} \), be a linear partial differential operator with constant coefficients. A fundamental solution for this operator is a distribution \( E \) such that \[ P\left( \partial \right) E = \delta . \] Here recall that the distribution \( \delta \) verifies \( \delta \left( \phi \right) = \phi \left( 0\right) \) . The fundamental solution is used to solve the linear equation \( P\left( \partial \right) u = f \) : the convolution \( u = E * f \) is a solution, since \[ P\left( \partial \right) \left( {E * f}\right) = P\left( \partial \right) \left( {\int E\left( {x - y}\right) f\left( y\right) {dy}}\right) = \int P\left( \partial \right) E\left( {x - y}\right) f\left( y\right) {dy} \] \[ = \int \delta \left( {x - y}\right) f\left( y\right) {dy} = \int \delta \left( y\right) f\left( {x - y}\right) {dy} = f\left( x\right) . \] The fundamental solution \( E \) for the linear operator \( P\left( \partial \right) \) is unique (if it exists, and provided the convolutions below are defined). Indeed, if both \( E \) and \( {E}^{\prime } \) are fundamental solutions, \[ P\left( \partial \right) E = P\left( \partial \right) {E}^{\prime } = \delta , \] then \[ E = \delta * E = \left( {P\left( \partial \right) {E}^{\prime }}\right) * E = {E}^{\prime } * \left( {P\left( \partial \right) E}\right) = {E}^{\prime } * \delta = {E}^{\prime }. \] Now we are looking for the fundamental solution to the wave operator \( ▱ \) . For this we need to construct a family of distributions. On the real line \( \mathbb{R} \) define \[ {\chi }_{ + }^{a} = \frac{{x}_{ + }^{a}}{\Gamma \left( {a + 1}\right) },\;a > - 1,\;{x}_{ + } = \max \{ 0, x\} , \] where \( \Gamma \left( x\right) \) is the standard Gamma function \[ \Gamma \left( x\right) = {\int }_{0}^{\infty }{t}^{x - 1}{e}^{-t}{dt}. \] These distributions extend to all real \( a \) (in fact to the whole complex plane) in such a way that the rule \[ \frac{d}{dx}{\chi }_{ + }^{a} = {\chi }_{ + }^{a - 1} \] continues to hold. This leads to the following Lemma 2.1.
For positive integer \( k \), we have \[ {\chi }_{ + }^{-k} = {\delta }^{\left( k - 1\right) }\left( x\right) , \] \[ {\chi }_{ + }^{-k - \frac{1}{2}} = \frac{1}{\sqrt{\pi }}{\left( {x}_{ + }^{-\frac{1}{2}}\right) }^{\left( k\right) }. \] For the proof, one only needs to note that \( {\chi }_{ + }^{0} = H\left( x\right) \), so that in particular \( {\chi }_{ + }^{-1} = \delta \) . This implies the integer case. For the half integer case, it suffices to check the case \( k = 0 \) . By definition \[ {\chi }_{ + }^{-\frac{1}{2}} = \frac{{x}_{ + }^{-\frac{1}{2}}}{\Gamma \left( \frac{1}{2}\right) } = {\pi }^{-\frac{1}{2}}{x}_{ + }^{-\frac{1}{2}}. \] Here we used the formula \[ \Gamma \left( a\right) \Gamma \left( {1 - a}\right) = \frac{\pi }{\sin {a\pi }},\;0 < a < 1. \] The value of \( \Gamma \left( \frac{1}{2}\right) \) also follows from the Gauss integral. We now look for the forward fundamental solution \( {E}_{ + } \) to the wave operator \( ▱ \), verifying the following properties: - As a distribution it verifies the relation \[ ▱{E}_{ + } = \delta . \] Here \( \delta \) is the distribution on the whole spacetime \( {\mathbb{R}}^{1 + n} \) . - The support of \( {E}_{ + } \) lies in the forward light cone: \[ \operatorname{supp}{E}_{ + } \subset \left\{ {\left( {t, x}\right) \mid 0 \leq \left| x\right| \leq t}\right\} . \] In particular this means that we only consider the case in the future \( t \geq 0 \) . The reason for doing this is that the wave operator is time reversible: changing \( t \) to \( - t \) leaves the operator unchanged. This is in contrast to the heat operator \( {\partial }_{t} - \Delta \), for which we can only solve the equation forward in time, while the wave equation can be solved both in the future and in the past. In other words, the behavior of the solution in the future is the same as that in the past. Hence for this type of equation, it suffices to study the situation in the future \( t \geq 0 \) .
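The ladder rule \( \frac{d}{dx}{\chi }_{ + }^{a} = {\chi }_{ + }^{a - 1} \) introduced above can be sanity-checked numerically in the range \( a > 1 \), where both sides are ordinary functions (a sketch of ours, not part of the text):

```python
# Numerical check of d/dx chi_+^a = chi_+^{a-1} for a > 1,
# where chi_+^a(x) = max(x, 0)^a / Gamma(a + 1).
import math

def chi_plus(a, x):
    return (max(x, 0.0) ** a) / math.gamma(a + 1)

a, x, h = 2.5, 1.3, 1e-6
deriv = (chi_plus(a, x + h) - chi_plus(a, x - h)) / (2 * h)  # central difference
print(deriv, chi_plus(a - 1, x))  # the two values agree
```

The agreement is just the recursion \( \Gamma \left( {a + 1}\right) = a\,\Gamma \left( a\right) \) in disguise; for \( a \leq 1 \) the identity must instead be read in the distributional sense.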
Throughout this course we always assume that \( t \geq 0 \) unless specified otherwise. We can explicitly find the forward fundamental solution to the wave operator. Proposition 2.3. The forward fundamental solution to the wave operator \( ▱ \) is \[ {E}_{ + }\left( {t, x}\right) = - \frac{{\pi }^{\frac{1 - n}{2}}}{2}H\left( t\right) {\chi }_{ + }^{-\frac{n - 1}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) . \] Here \( n \) is the space dimension, that is, \( x \in {\mathbb{R}}^{n} \) . Proof. We first check that \( {E}_{ + } \) given in the proposition is indeed a forward fundamental solution. It is easy to see from the definition of \( {\chi }_{ + }^{-k} \) that the support of \( {E}_{ + } \) lies inside the forward light cone \( \{ 0 \leq \left| x\right| \leq t\} \) . Inside this cone with \( {t}^{2} + {\left| x\right| }^{2} > 0 \) (this means that \( t > 0 \) ), we can compute that \[ ▱\left( {{\chi }_{ + }^{-\frac{n - 1}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) }\right) \] \[ = - {\partial }_{t}\left( {{2t}\,{\chi }_{ + }^{-\frac{n + 1}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) }\right) - {\partial }_{i}\left( {2{x}_{i}\,{\chi }_{ + }^{-\frac{n + 1}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) }\right) \] \[ = - 2\left( {n + 1}\right) {\chi }_{ + }^{-\frac{n + 1}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) - 4\left( {{t}^{2} - {\left| x\right| }^{2}}\right) {\chi }_{ + }^{-\frac{n + 3}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) \] (summation over the repeated spatial index \( i \) is understood). On the other hand, by definition \[ x{\chi }_{ + }^{a} = \frac{x\,{x}_{ + }^{a}}{\Gamma \left( {a + 1}\right) } = \frac{{x}_{ + }^{a + 1}}{\Gamma \left( {a + 2}\right) }\,\frac{\Gamma \left( {a + 2}\right) }{\Gamma \left( {a + 1}\right) } = \left( {a + 1}\right) {\chi }_{ + }^{a + 1}.
\] With \( a = - \frac{n + 3}{2} \) this gives \( \left( {{t}^{2} - {\left| x\right| }^{2}}\right) {\chi }_{ + }^{-\frac{n + 3}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) = - \frac{n + 1}{2}{\chi }_{ + }^{-\frac{n + 1}{2}}\left( {{t}^{2} - {\left| x\right| }^{2}}\right) \), so the two terms above cancel and \( ▱{E}_{ + } = 0 \) away from the origin. This indicates that \( ▱{E}_{ + } \) is a distribution supported at the origin \( \left( {t = 0, x = 0}\right) \) . According to Proposition 2.2, \( ▱{E}_{ + } \) must be a finite linear combination of derivatives of the \( \delta \) distribution. Next we show that \( ▱{E}_{ + } \) is in fact a constant multiple of \( \delta \) . For a smooth compactly supported test function \( \phi \), define \( {\phi }_{\lambda } = \phi \left( {{\lambda t},{\lambda x}}\right) \) . Then we have \[ \langle ▱{E}_{ + },{\phi }_{\lambda }\rangle = \langle {E}_{ + },▱{\phi }_{\lambda }\rangle = {\lambda }^{2}\langle {E}_{ + },\left( {▱\phi }\right) \left( {{\lambda t},{\lambda x}}\right) \rangle = {\lambda }^{1 - n}\langle {E}_{ + }\left( {{\lambda }^{-1}t,{\lambda }^{-1}x}\right) ,▱\phi \rangle = \langle {E}_{ + },▱\phi \rangle = \langle ▱{E}_{ + },\phi \rangle \] due to the fact that \( {\chi }_{ + }^{a}\left( {\lambda x}\right) = {\lambda }^{a}{\chi }_{ + }^{a}\left( x\right) \), so that \( {E}_{ + } \) is homogeneous of degree \( 1 - n \) . On the other hand, \[ \langle {\delta }^{\left( k\right) },{\phi }_{\lambda }\rangle = {\lambda }^{k}\langle {\delta }^{\left( k\right) },\phi \rangle . \] The above two facts lead to the conclusion that \( ▱{E}_{ + } \) is a constant multiple of \( \delta \) . The precise constant is determined by testing against a suitable test function, which we leave as an exercise. Once we have the fundamental solution to the wave operator \( ▱ \), we can write down the solution to the linear wave equation \( ▱\phi = F \) with initial data \( \left( {{\phi }_{0},{\phi }_{1}}\right) \) . Proposition 2.4. Assume that the initial data \( {\phi }_{0},{\phi }_{1} \) and the inhomogeneous term \( F \) are compactly supported smooth functions. Let \( {E}_{ + }\left( {t, x}\right) \) be the forward fundamental solution to the wave operator we just constructed.
Then the Cauchy problem for the linear wave equation \[ ▱\phi = F,\;\phi \left( {0, x}\right) = {\phi }_{0}\left( x\right) ,\;{\partial }_{t}\phi \left( {0, x}\right) = {\phi }_{1}\left( x\right) \] has the unique solution \[ \phi \left( {t, x}\right) = - {E}_{ + } * \left( {{\phi }_{1}\delta \left( t\right) }\right) - \left( {{\partial }_{t}{E}_{ + }}\right) * \left( {{\phi }_{0}\delta \left( t\right) }\right) + \left( {{FH}\left( t\right) }\right) * {E}_{ + }. \] Here \( H\left( t\right) \) is the Heaviside function. Proof. In fact we can compute that \[ \phi \left( {t, x}\right) H\left( t\right) = \left( {\phi H}\right) * \delta = \left( {\phi H}\right) * ▱{E}_{ + } \] \[ = - \left( {\phi H}\right) * \left( {{\partial }_{tt}{E}_{ + }}\right) + \left( {H\Delta \phi }\right) * {E}_{ + } \] \[ = - \left( {H{\partial }_{t}\phi }\right) * \left( {{\partial }_{t}{E}_{ + }}\right) - \left( {\delta \left( t\right) {\phi }_{0}}\right) * \left( {{\partial }_{t}{E}_{ + }}\right) + \left( {HF}\right) * {E}_{ + } + \left( {H{\partial }_{tt}\phi }\right) * {E}_{ + } \] \[ = - \left( {\delta \left( t\right) {\phi }_{1}}\right) * {E}_{ + } - \left( {\delta \left( t\right) {\phi }_{0}}\right) * \left( {{\partial }_{t}{E}_{ + }}\right) + \left( {HF}\right) * {E}_{ + }. \] In the computation we used \( {H}^{\prime }\left( t\right) = \delta \left( t\right) \) . Next we find the explicit representation for the solution in terms of the initial data and the inhomogeneous term.
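The identity \( {H}^{\prime }\left( t\right) = \delta \left( t\right) \) used in this proof is a distributional statement: \( \langle {H}^{\prime },\phi \rangle = - \langle H,{\phi }^{\prime }\rangle = - {\int }_{0}^{\infty }{\phi }^{\prime }\left( t\right) {dt} = \phi \left( 0\right) \) for compactly supported \( \phi \). A small numerical check of ours (the bump function and step sizes are arbitrary choices):

```python
# Check <H', phi> = -<H, phi'> = phi(0) for a smooth compactly supported phi.
import math

def phi(t):  # smooth bump supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

def phi_prime(t, h=1e-5):
    return (phi(t + h) - phi(t - h)) / (2 * h)  # central difference

dt = 1e-4
# -<H, phi'> = -int_0^1 phi'(t) dt, computed by a midpoint Riemann sum
val = -sum(phi_prime((i + 0.5) * dt) * dt for i in range(10000))
print(val, phi(0))  # both close to phi(0) = e^{-1}
```

The sum reproduces \( \phi \left( 0\right) \) because \( - {\int }_{0}^{1}{\phi }^{\prime } = \phi \left( 0\right) - \phi \left( 1\right) \) and the bump vanishes at \( t = 1 \).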
First note that we can write \[ \phi \left( {t, x}\right) = - {E}_{ + } * \left( {{\phi }_{1}\delta \left( t\right) }\right) - \left( {{\partial }_{t}{E}_{ + }}\right) * \left( {{\phi }_{0}\delta \left( t\right) }\right) + \left( {{FH}\left( t\right) }\right) * {E}_{ + } \] \[ = - {E}_{ + } * \left( {{\phi }_{1}\delta \left( t\right) }\right) - {\partial }_{t}\left( {{E}_{ + } * \left( {{\phi }_{0}\delta \left( t\right) }\right) }\right) + {\int }_{0}^{t}{\int }_{{\mathbb{R}}^{n}}F\left( {t - s, x - y}\right) {E}_{ + }\left( {s, y}\right) {dy}\,{ds} \] \[ = - {E}_{ + } * \left( {{\phi }_{1}\delta \left( t\right) }\right) - {\partial }_{t}\left( {{E}_{ + } * \left( {{\phi }_{0}\delta \left( t\right) }\right) }\right) + {\int }_{0}^{t}\left( {\left( {F\left( {t - s, y}\right) \delta \left( {s}^{\prime }\right) }\right) * {E}_{ + }\left( {{s}^{\prime }, y}\right) }
1112_(GTM267)Quantum Theory for Mathematicians
Definition 17.5
Definition 17.5 If \( \left( {\pi, V}\right) \) is an irreducible finite-dimensional representation of \( \operatorname{so}\left( 3\right) \), then the spin of \( \left( {\pi, V}\right) \) is the largest eigenvalue of the operator \( {L}_{3} \mathrel{\text{:=}} {i\pi }\left( {F}_{3}\right) \) . Equivalently, the spin is the unique number \( l \) such that \( \dim V = {2l} + 1 \) . Our next result says that all the values of \( l \) in (17.7) actually arise as spins of irreducible representations of \( \mathfrak{{so}}\left( 3\right) \) . Theorem 17.6 For any \( l = 0,\frac{1}{2},1,\frac{3}{2},\ldots \) there exists an irreducible representation of \( \operatorname{so}\left( 3\right) \) of dimension \( {2l} + 1 \), and any two irreducible representations of \( \operatorname{so}\left( 3\right) \) of dimension \( {2l} + 1 \) are isomorphic. Note that the theorem is only asserting the existence, for each \( l \), of a representation of the Lie algebra so(3). As we will see in the next section, an irreducible representation \( \pi \) of so(3) comes from a representation \( \Pi \) of \( \mathrm{{SO}}\left( 3\right) \) if and only if \( l \) is an integer. Nevertheless, the representations of so(3) with half-integer values of \( l \) (the ones where \( l \) is half of an integer but not an integer) still play an important role in quantum physics, as discussed in Sect. 17.8. (Although it would be clearer to refer to the case \( l = 1/2,3/2,5/2,\ldots \) as "integer plus a half," the terminology "half-integer" is firmly established.) By comparison to Proposition 17.3, we may think of \( {L}_{3} \) as the analog of the third component of the dimensionless angular momentum operator on the space \( V \) . Indeed, we will eventually be interested in applying Theorem 17.4 to the case in which \( V \) is a subspace of \( {L}^{2}\left( {\mathbb{R}}^{3}\right) \) that is invariant under the action of \( \mathrm{{SO}}\left( 3\right) \) .
In that case, \( {L}_{3} \) will be precisely (the restriction to \( V \) of) the dimensionless angular momentum operator \( {\widetilde{J}}_{3} \) . Observe that Theorem 17.4 bears a strong similarity to our analysis of the quantum harmonic oscillator. In both cases, we have a "chain" of eigenvectors for a certain operator, along with raising and lowering operators that raise and lower the eigenvalue of that operator. In the case of the harmonic oscillator, we have a chain that begins with a ground state and then extends infinitely in one direction. In the case of so(3) representations, we have a chain that is finite in both directions. The chain begins with an eigenvector \( {v}_{0} \) for \( {L}_{3} \) with maximal eigenvalue, so that \( {v}_{0} \) is annihilated by the raising operator \( {L}^{ + } \) . A key step in the proof of Theorem 17.4 is to determine how the chain can terminate (in the direction of lower eigenvalues for \( {L}_{3} \) ) without violating the commutation relations among \( {L}_{3},{L}^{ + } \) , and \( {L}^{ - } \) . Proof of Theorem 17.4. Since \( \pi \) is a Lie algebra homomorphism, the \( \pi \left( {F}_{j}\right) \) ’s satisfy the same commutation relations as the \( {F}_{j} \) ’s themselves. From this we can easily verify the following relations among the operators \( {L}^{ + },{L}^{ - } \), and \( {L}_{3} \) : \[ \left\lbrack {{L}_{3},{L}^{ + }}\right\rbrack = {L}^{ + } \] (17.8) \[ \left\lbrack {{L}_{3},{L}^{ - }}\right\rbrack = - {L}^{ - } \] (17.9) \[ \left\lbrack {{L}^{ + },{L}^{ - }}\right\rbrack = 2{L}_{3} \] \( \left( {17.10}\right) \) Now, since we are working over the algebraically closed field \( \mathbb{C} \), the operator \( {L}_{3} \) has at least one eigenvector \( v \) with eigenvalue \( \lambda \) . Consider, then, \( {L}^{ + }v \) . 
Using (17.8), we compute that \[ {L}_{3}{L}^{ + }v = \left( {{L}^{ + }{L}_{3} + {L}^{ + }}\right) v = {L}^{ + }\left( {\lambda v}\right) + {L}^{ + }v = \left( {\lambda + 1}\right) {L}^{ + }v. \] (17.11) Thus, either \( {L}^{ + }v = 0 \) or \( {L}^{ + }v \) is an eigenvector for \( {L}_{3} \) with eigenvalue \( \lambda + 1 \) . We call \( {L}^{ + } \) the "raising operator," since it has the effect of raising the eigenvalue of \( {L}_{3} \) by 1 . If we apply \( {L}^{ + } \) repeatedly to \( v \), we obtain eigenvectors for \( {L}_{3} \) with eigenvalues increasing by 1 at each step, as long as we do not get the zero vector. Eventually, though, we must get 0, since the operator \( {L}_{3} \) has only finitely many eigenvalues. Thus, there exists \( k \geq 0 \) such that \( {\left( {L}^{ + }\right) }^{k}v \neq 0 \) but \( {\left( {L}^{ + }\right) }^{k + 1}v = 0 \) . By applying (17.11) repeatedly, we see that \( {\left( {L}^{ + }\right) }^{k}v \) is an eigenvector for \( {L}_{3} \) with eigenvalue \( \lambda + k \) . Let us now introduce the notation \( {v}_{0} \mathrel{\text{:=}} {\left( {L}^{ + }\right) }^{k}v \) and \( \mu = \lambda + k \) . Then \( {v}_{0} \) is a nonzero vector with \( {L}^{ + }{v}_{0} = 0 \) and \( {L}_{3}{v}_{0} = \mu {v}_{0} \) . We now forget about the original vector \( v \) and eigenvalue \( \lambda \) and consider only \( {v}_{0} \) and \( \mu \) . Define vectors \( {v}_{j} \) by \[ {v}_{j} = {\left( {L}^{ - }\right) }^{j}{v}_{0},\;j = 0,1,2,\ldots \] Arguing as in (17.11), but using (17.9) in place of (17.8), we see that \( {L}^{ - } \) has the effect of either lowering the eigenvalue of \( {L}_{3} \) by 1 or of giving the zero vector. Thus, \( {L}_{3}{v}_{j} = \left( {\mu - j}\right) {v}_{j} \) . Next, we claim that for \( j \geq 1 \) we have \[ {L}^{ + }{v}_{j} = j\left( {{2\mu } + 1 - j}\right) {v}_{j - 1},\;j = 1,2,3,\ldots , \] (17.12) which is easily proved by induction on \( j \), using (17.10) (Exercise 2).
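The operators in this proof can be realized concretely as matrices. The following sketch (ours, not from the book) builds \( {L}_{3},{L}^{ + },{L}^{ - } \) on the basis \( {v}_{0},\ldots ,{v}_{N} \) from the actions \( {L}_{3}{v}_{j} = \left( {l - j}\right) {v}_{j} \), \( {L}^{ - }{v}_{j} = {v}_{j + 1} \), \( {L}^{ + }{v}_{j} = j\left( {{2l} + 1 - j}\right) {v}_{j - 1} \) and verifies the commutation relations (17.8)-(17.10) numerically:

```python
# Matrices of L3, L+, L- in the basis v_0, ..., v_N (N = 2l) and a check
# of the commutation relations of the proof.
import numpy as np

def rep(two_l):
    """Return (L3, L+, L-) on the (2l+1)-dimensional representation."""
    N, l, dim = two_l, two_l / 2.0, two_l + 1
    L3 = np.diag([l - j for j in range(dim)])
    Lm = np.zeros((dim, dim))
    Lp = np.zeros((dim, dim))
    for j in range(N):             # L- v_j = v_{j+1}, and L- v_N = 0
        Lm[j + 1, j] = 1.0
    for j in range(1, dim):        # L+ v_j = j(2l + 1 - j) v_{j-1}
        Lp[j - 1, j] = j * (2 * l + 1 - j)
    return L3, Lp, Lm

def comm(A, B):
    return A @ B - B @ A

L3, Lp, Lm = rep(3)  # l = 3/2, a spin-3/2 representation of so(3)
print(np.allclose(comm(L3, Lp), Lp))       # [L3, L+] = L+   -> True
print(np.allclose(comm(L3, Lm), -Lm))      # [L3, L-] = -L-  -> True
print(np.allclose(comm(Lp, Lm), 2 * L3))   # [L+, L-] = 2L3  -> True
```

This is exactly the computation requested in Exercise 4 below: the chain terminates correctly at both ends, so the relations hold including the boundary vectors \( {v}_{0} \) and \( {v}_{N} \).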
Since, again, \( {L}_{3} \) has only finitely many eigenvalues, \( {v}_{j} \) must eventually be zero. Thus, there exists some \( N \geq 0 \) such that \( {v}_{N} \neq 0 \) but \( {v}_{N + 1} = 0 \) . Since \( {v}_{N + 1} = 0 \), applying (17.12) with \( j = N + 1 \) gives \[ 0 = {L}^{ + }{v}_{N + 1} = \left( {N + 1}\right) \left( {{2\mu } - N}\right) {v}_{N} \] Since \( {v}_{N} \neq 0 \) and \( N + 1 > 0 \), we must have \( \left( {{2\mu } - N}\right) = 0 \) . This means that \( \mu \) must equal \( N/2 \) . Setting \( l = N/2 \), so that \( \mu = l \), we have the formulas recorded in (17.6). Meanwhile, since the \( {v}_{j} \) ’s are eigenvectors for \( {L}_{3} \) with distinct eigenvalues, the \( {v}_{j} \) ’s are automatically linearly independent. Furthermore, the span of the \( {v}_{j} \) ’s is invariant under \( {L}^{ + },{L}^{ - } \), and \( {L}_{3} \), hence under all of so(3). Since \( V \) is assumed to be irreducible, the span of the \( {v}_{j} \) ’s must be all of \( V \) . Thus, the \( {v}_{j} \) ’s form a basis for \( V \) . The dimension of \( V \) is therefore equal to the number of \( {v}_{j} \) ’s, which is \( N + 1 = {2l} + 1 \) . ∎ Proof of Theorem 17.6. We construct \( V \) simply by defining a space \( V \) with basis \( {v}_{0},{v}_{1},\ldots ,{v}_{2l} \) and defining the action of so(3) by (17.6). It is a simple matter (Exercise 4) to check that \( {L}^{ + },{L}^{ - } \), and \( {L}_{3} \), defined in this way, have the correct commutation relations, so that \( V \) is indeed a representation of \( \operatorname{so}\left( 3\right) \) . It remains to show that \( V \) is irreducible. Suppose that \( W \) is an invariant subspace of \( V \) and that \( W \neq \{ 0\} \) . We need to show that \( W = V \) . To this end, suppose that \( w \) is some nonzero element of \( W \), which we can decompose as \( w = \mathop{\sum }\limits_{{j = 0}}^{{2l}}{a}_{j}{v}_{j} \) . Let \( {j}_{0} \) be the largest index for which \( {a}_{j} \) is nonzero. 
According to the formula for \( {L}^{ + } \) in (17.6), applying \( {L}^{ + } \) to any of the vectors \( {v}_{1},\ldots ,{v}_{2l} \) gives a nonzero multiple of the previous element in our chain. Thus, \( {\left( {L}^{ + }\right) }^{{j}_{0}}w \) will be a nonzero multiple of \( {v}_{0} \) . Since \( W \) is invariant, this means that \( {v}_{0} \) belongs to \( W \) . But then by applying \( {L}^{ - } \) repeatedly, we see that \( {v}_{j} \) belongs to \( W \) for each \( j \), so that \( W = V \) . Theorem 17.4 tells us that any irreducible representation of \( \operatorname{so}\left( 3\right) \) of dimension \( {2l} + 1 \) has a basis as in (17.6). We can then construct an isomorphism between any two irreducible representations by mapping this basis in one space to the corresponding basis in the other space. ∎ In the rest of this section, we look at some additional properties of representations of \( \operatorname{so}\left( 3\right) \) . Proposition 17.7 Let \( \pi : \operatorname{so}\left( 3\right) \rightarrow \mathrm{{gl}}\left( V\right) \) be an irreducible representation of \( \operatorname{so}\left( 3\right) \) . Then there exists an inner product on \( V \), unique up to multiplication by a constant, such that \( \pi \left( X\right) \) is skew-self-adjoint for all \( X \in \operatorname{so}\left( 3\right) \) . Proof. Recalling how the operators \( {L}_{3},{L}^{ + } \), and \( {L}^{ - } \) are defined, we can see that the assertion that each \( \pi \left( X\right), X \in \operatorname{so}\left( 3\right) \), is skew-self-adjoint is equivalent to the assertion that \( {L}_{3} \) is self-adjoint and that \( {L}^{ + } \) and \( {L}^{ - } \) are adjoints of each other. Since the \( {v}_{j} \) ’s are eigenvectors for \( {L}_{3} \) with distinct eigenvalues, if \( {L}_{3} \) is to be self-adjoint, the \( {v}_{j} \) ’s must be orthogonal. 
Conversely, if we have any inner product for which the \( {v}_{j} \) ’s are orthogonal, then \( {L}_{3} \) will be self-adjoint, as is easily verified. It remains to investigate the consequences of the condition \( {\left( {L}^{ + }\right) }^{ * } = {L}^{ - } \) . Assuming this condition, we compute that \[ \left\langle {{v}_{j},{v}_{j}}\right\rangle = \left\la
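The chain structure established in the proof of Theorem 17.4 can be checked numerically. On a basis \( {v}_{0},\ldots ,{v}_{N} \) with \( N = {2l} \), the rules \( {L}_{3}{v}_{j} = \left( {l - j}\right) {v}_{j} \), \( {L}^{ - }{v}_{j} = {v}_{j + 1} \) (with \( {v}_{N + 1} = 0 \) ), and \( {L}^{ + }{v}_{j} = j\left( {{2l} + 1 - j}\right) {v}_{j - 1} \) define matrices that satisfy the commutation relations (17.8), (17.9), and (17.10) exactly. A minimal numpy sketch (the helper names are ours, not from the text):

```python
import numpy as np

def so3_rep(l):
    """Matrices of L3, L+, L- on the basis v_0, ..., v_N (N = 2l), using
    L3 v_j = (l - j) v_j,  L- v_j = v_{j+1},  L+ v_j = j(2l + 1 - j) v_{j-1}."""
    N = int(round(2 * l))
    L3 = np.diag([l - j for j in range(N + 1)]).astype(float)
    Lp = np.zeros((N + 1, N + 1))
    Lm = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        if j < N:
            Lm[j + 1, j] = 1.0                  # L- v_j = v_{j+1}, L- v_N = 0
        if j >= 1:
            Lp[j - 1, j] = j * (2 * l + 1 - j)  # L+ v_0 = 0
    return L3, Lp, Lm

def comm(A, B):
    return A @ B - B @ A

for l in (0.5, 1.0, 1.5, 2.0):
    L3, Lp, Lm = so3_rep(l)
    assert np.allclose(comm(L3, Lp), Lp)        # (17.8)
    assert np.allclose(comm(L3, Lm), -Lm)       # (17.9)
    assert np.allclose(comm(Lp, Lm), 2 * L3)    # (17.10)
```

In particular, the termination analysis of the proof is visible here: the first column of \( {L}^{ + } \) is zero (the top of the chain is annihilated) and the last column of \( {L}^{ - } \) is zero (the chain stops after \( {2l} + 1 \) vectors).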
## 1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds: Definition 2.1.1
Definition 2.1.1 A quaternion algebra A over \( F \) is a four-dimensional \( F \) -space with basis vectors \( 1, i, j \) and \( k \), where multiplication is defined on A by requiring that 1 is a multiplicative identity element, that \[ {i}^{2} = {a1},\;{j}^{2} = {b1},\;{ij} = - {ji} = k \] (2.1) for some \( a \) and \( b \) in \( {F}^{ * } \) and by extending the multiplication linearly so that \( A \) is an associative algebra over \( F \) . The algebra so constructed can be denoted by the Hilbert symbol \[ \left( \frac{a, b}{F}\right) \] (2.2) Note that \[ {k}^{2} = {\left( ij\right) }^{2} = - {ab} \] and that any pair of the basis vectors \( i, j \) and \( k \) anti-commute. Thus this quaternion algebra could equally well be denoted by the Hilbert symbols \[ \left( \frac{b, a}{F}\right) ,\;\left( \frac{a, - {ab}}{F}\right) ,\;\text{ etc. } \] Thus it should be noted that the quaternion algebra does not uniquely determine a Hilbert Symbol. If \( K \) is a field extending \( F \), then \[ \left( \frac{a, b}{F}\right) { \otimes }_{F}K \cong \left( \frac{a, b}{K}\right) \] Familiar examples of quaternion algebras are Hamilton's quaternions \[ \mathcal{H} = \left( \frac{-1, - 1}{\mathbb{R}}\right) \] and, for any field \( F \) , \[ {M}_{2}\left( F\right) \cong \left( \frac{1,1}{F}\right) \] with generators \[ i = \left( \begin{matrix} 1 & 0 \\ 0 & - 1 \end{matrix}\right) ,\;j = \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \] Lemma 2.1.2 1. \( \left( \frac{a, b}{F}\right) \cong \left( \frac{a{x}^{2}, b{y}^{2}}{F}\right) \) for any \( a, b, x, y \in {F}^{ * } \) 2. The centre of \( \left( \frac{a, b}{F}\right) \) is \( {F1} \) . 3. \( \left( \frac{a, b}{F}\right) \) is a simple algebra (i.e., has no proper two-sided ideals). ## Proof: 1. 
Let \( A = \left( \frac{a, b}{F}\right) \) and \( {A}^{\prime } = \left( \frac{a{x}^{2}, b{y}^{2}}{F}\right) \) have bases \( \{ 1, i, j, k\} \) and \( \{ 1,{i}^{\prime },{j}^{\prime },{k}^{\prime }\} \) , respectively. Define \( \phi : {A}^{\prime } \rightarrow A \) by \( \phi \left( 1\right) = 1,\phi \left( {i}^{\prime }\right) = {xi},\phi \left( {j}^{\prime }\right) = {yj} \) and \( \phi \left( {k}^{\prime }\right) = {xyk} \) and extend linearly. Since \( {\left( xi\right) }^{2} = a{x}^{2},{\left( yj\right) }^{2} = b{y}^{2} \), and \( \left( {xi}\right) \left( {yj}\right) = \left( {xy}\right) {ij} = - \left( {xy}\right) {ji} = - \left( {yj}\right) \left( {xi}\right) \), it follows that \( \phi \) is an \( F \) -algebra isomorphism. 2. Let \( \widehat{F} \) denote an algebraic closure of \( F \) . Extending the coefficients gives \( \left( \frac{a, b}{F}\right) { \otimes }_{F}\widehat{F} = \left( \frac{a, b}{\widehat{F}}\right) \) . Every element in \( \widehat{F} \) is a square, so by part 1, \( \left( \frac{a, b}{\widehat{F}}\right) \cong \left( \frac{1,1}{\widehat{F}}\right) \cong {M}_{2}\left( \widehat{F}\right) \), whose centre is \( \widehat{F}1 \) . Thus the centre of \( \left( \frac{a, b}{F}\right) \) is \( {F1} \) . 3. If \( I \) is a non-zero ideal in \( A \), then \( I{ \otimes }_{F}\widehat{F} \) is a non-zero ideal in \( {M}_{2}\left( \widehat{F}\right) \) . However, \( {M}_{2}\left( \widehat{F}\right) \) is simple, so \( I{ \otimes }_{F}\widehat{F} = {M}_{2}\left( \widehat{F}\right) \) . As a vector space over \( F \), the ideal \( I \) will then have dimension 4 and so \( I = A \) . Thus quaternion algebras are central and simple. They can be characterised in terms of central simple algebras and this will be shown later in this section, modulo some results on central simple algebras which will be discussed later in this chapter. Like Hamilton's quaternions, every quaternion algebra admits a "conjugation" leading to the notions of trace and norm. To discuss these, we first introduce the subspace of pure quaternions. 
Let \( A = \left( \frac{a, b}{F}\right) \) as above with basis \( \{ 1, i, j, k\} \) satisfying (2.1). This is referred to as a standard basis. Definition 2.1.3 Let \( {A}_{0} \) be the subspace of \( A \) spanned by the vectors \( i, j \) and \( k \) . Then the elements of \( {A}_{0} \) are the pure quaternions in \( A \) . This definition does not depend on the choice of basis. For, let \( x = {a}_{0} + \) \( {a}_{1}i + {a}_{2}j + {a}_{3}k \) . Then \[ {x}^{2} = \left( {{a}_{0}^{2} + a{a}_{1}^{2} + b{a}_{2}^{2} - {ab}{a}_{3}^{2}}\right) + 2{a}_{0}\left( {{a}_{1}i + {a}_{2}j + {a}_{3}k}\right) . \] Lemma 2.1.4 \( x \in A\left( {x \neq 0}\right) \) is a pure quaternion if and only if \( x \notin Z\left( A\right) \) and \( {x}^{2} \in Z\left( A\right) \) . Thus each \( x \in A \) has a unique decomposition as \( x = a + \alpha \), where \( a \in \) \( Z\left( A\right) = F \) and \( \alpha \in {A}_{0} \) . Define the conjugate \( \bar{x} \) of \( x \) by \( \bar{x} = a - \alpha \) . This defines an anti-involution of the algebra such that \( \overline{\left( x + y\right) } = \bar{x} + \bar{y},\overline{xy} = \bar{y}\bar{x} \) , \( \overline{\bar{x}} = x \) and \( \overline{rx} = r\bar{x} \) for \( r \in F \) . On a matrix algebra \( {M}_{2}\left( F\right) \) , \[ \overline{\left( \begin{array}{ll} a & b \\ c & d \end{array}\right) } = \left( \begin{matrix} d & - b \\ - c & a \end{matrix}\right) \] Definition 2.1.5 For \( x \in A \), the (reduced) norm and (reduced) trace of \( x \) lie in \( F \) and are defined by \( n\left( x\right) = x\bar{x} \) and \( \operatorname{tr}\left( x\right) = x + \bar{x} \), respectively. Thus on a matrix algebra, these coincide with the notions of determinant and trace. The norm map \( n : A \rightarrow F \) is multiplicative, as \( n\left( {xy}\right) = \left( {xy}\right) \overline{\left( xy\right) } = \) \( {xy}\bar{y}\bar{x} = n\left( x\right) n\left( y\right) \) . 
Thus the invertible elements of \( A \) are precisely those such that \( n\left( x\right) \neq 0 \), with the inverse of such an \( x \) being \( \bar{x}/n\left( x\right) \) . Thus if we let \( {A}^{ * } \) denote the invertible elements of \( A \), and \[ {A}^{1} = \{ x \in A \mid n\left( x\right) = 1\} \] then \( {A}^{1} \subset {A}^{ * } \) . This reduced norm \( n \) is related to field norms (see also Exercise 2.1, No. 7). An element \( w \) of the quaternion algebra \( A \) satisfies the quadratic equation \[ {x}^{2} - \operatorname{tr}\left( w\right) x + n\left( w\right) = 0 \] (2.3) with \( \operatorname{tr}\left( w\right), n\left( w\right) \in F \) . Let \( F\left( w\right) \) be the smallest subalgebra of \( A \) which contains \( {F1} \) and \( w \), so that \( F\left( w\right) \) is commutative. If \( A \) is a division algebra, then the polynomial (2.3) is reducible over \( F \) if and only if \( w \in Z\left( A\right) \) . Thus for \( w \notin Z\left( A\right), F\left( w\right) = E \) is a quadratic field extension \( E \mid F \) . Then \( {N}_{E \mid F} = {\left. n\right| }_{E} \) . Lemma 2.1.6 If the quaternion algebra \( A \) over \( F \) is a division algebra and \( w \notin Z\left( A\right) \), then \( E = F\left( w\right) \) is a quadratic field extension of \( F \) and \( {\left. n\right| }_{E} = {N}_{E \mid F} \) . If \( A = \left( \frac{a, b}{F}\right) \) and \( x = {a}_{0} + {a}_{1}i + {a}_{2}j + {a}_{3}k \), then \[ n\left( x\right) = {a}_{0}^{2} - a{a}_{1}^{2} - b{a}_{2}^{2} + {ab}{a}_{3}^{2}. \] In the case of Hamilton’s quaternions \( \left( \frac{-1, - 1}{\mathbb{R}}\right) ,\;n\left( x\right) = {a}_{0}^{2} + {a}_{1}^{2} + {a}_{2}^{2} + {a}_{3}^{2} \), so that every non-zero element is invertible and \( \mathcal{H} \) is a division algebra. The matrix algebras \( {M}_{2}\left( F\right) \) are, of course, not division algebras. 
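The multiplication rules (2.1), conjugation, and the norm formula above can be exercised in exact rational arithmetic. The sketch below (function names ours, not from the text) expands the product of two quaternions coefficient-wise and checks that \( x\bar{x} = n\left( x\right) 1 \) and that the norm is multiplicative:

```python
from fractions import Fraction as F

def qmul(x, y, a, b):
    """Product in (a,b / F) on the standard basis 1, i, j, k, with
    i^2 = a, j^2 = b, ij = -ji = k; elements are 4-tuples of coefficients."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return (x0*y0 + a*x1*y1 + b*x2*y2 - a*b*x3*y3,
            x0*y1 + x1*y0 - b*x2*y3 + b*x3*y2,
            x0*y2 + x2*y0 + a*x1*y3 - a*x3*y1,
            x0*y3 + x3*y0 + x1*y2 - x2*y1)

def conj(x):
    # conjugate: a + alpha -> a - alpha on the pure part
    return (x[0], -x[1], -x[2], -x[3])

def norm(x, a, b):
    # n(x) = a_0^2 - a a_1^2 - b a_2^2 + ab a_3^2
    return x[0]**2 - a*x[1]**2 - b*x[2]**2 + a*b*x[3]**2

a, b = F(-1), F(-1)                 # Hamilton's quaternions
x = (F(1), F(2), F(3), F(4))
y = (F(5), F(-1), F(0), F(2))
assert qmul(x, conj(x), a, b) == (norm(x, a, b), 0, 0, 0)   # x xbar = n(x) 1
assert norm(qmul(x, y, a, b), a, b) == norm(x, a, b) * norm(y, a, b)
```

The same code works for any choice of \( a, b \in {F}^{ * } \); for instance one can check \( {k}^{2} = - {ab} \) directly by multiplying \( k = \left( {0,0,0,1}\right) \) with itself.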
That these matrix algebras are the only non-division algebras among quaternion algebras is a consequence of Wedderburn's Theorem. From Wedderburn's Structure Theorem for finite-dimensional simple algebras (see Theorem 2.9.6), a quaternion algebra \( A \) is isomorphic to a full matrix algebra \( {M}_{n}\left( D\right) \), where \( D \) is a division algebra, with \( n \) and \( D \) uniquely determined by \( A \) . The \( F \) -dimension of \( {M}_{n}\left( D\right) \) is \( m{n}^{2} \), where \( m = {\dim }_{F}\left( D\right) \) and, so, for the four-dimensional quaternion algebras there are only two possibilities: \( m = 4, n = 1;m = 1, n = 2 \) . Theorem 2.1.7 If \( A \) is a quaternion algebra over \( F \), then \( A \) is either a division algebra or \( A \) is isomorphic to \( {M}_{2}\left( F\right) \) . We now use the Skolem Noether Theorem (see Theorem 2.9.8) to show that quaternion algebras can be characterised algebraically as follows: Theorem 2.1.8 Every four-dimensional central simple algebra over a field \( F \) of characteristic \( \neq 2 \) is a quaternion algebra. Proof: Let \( A \) be a four-dimensional central simple algebra over \( F \) . If \( A \) is isomorphic to \( {M}_{2}\left( F\right) \), it is a quaternion algebra, so by Theorem 2.1.7 we can assume that \( A \) is a division algebra. For \( w \notin Z\left( A\right) \), the subalgebra \( F\left( w\right) \) will be commutative. As a subring of \( A, F\left( w\right) \) is an integral domain and since \( A \) is finite-dimensional, \( w \) will satisfy an \( F \) -polynomial. Thus \( F\left( w\right) \) is a field. Since \( A \) is central, \( F\left( w\right) \neq A \) . Pick \( {w}^{\prime } \in A \smallsetminus F\left( w\right) \) . Now the elements \( 1, w,{w}^{\prime } \) and \( w{w}^{\prime } \) are necessarily independent over \( F \) and so form a basis of A. Thus \[ {w}^{2} = {a}_{0} + {a}_{1}w + {a}_{2}{w}^{\prime } + {a}_{3}w{w}^{\prime },\;{a}_{i} \in F. 
\] Since \( {w}^{\prime } \notin F\left( w\right) \), it follows that \( {w}^{2} = {a}_{0} + {a}_{1}w \) . Thus \( F\left( w\right) = E \) is a quadratic extension of \( F \) . Choose \( y \in E \) such that \( {y}^{2} = a \in F \) and \( E = F\left( y\right) \) . The automorphism on \( E \) induced by \( y \rightarrow - y \) will be induced by conjugation in \( A \) by an invertible ele
## 1358_[陈松蹊&张慧铭] A Course in Fixed and High-dimensional Multivariate Analysis (2020): Definition 4.1.13
Definition 4.1.13 (Sub-Gaussian distribution). A random variable \( X \) is said to be sub-Gaussian with variance proxy \( {\sigma }^{2} \) if \( \mathrm{E}X = 0 \) and its moment generating function satisfies \[ \mathrm{E}\left\lbrack {\exp \left( {sX}\right) }\right\rbrack \leq \exp \left( \frac{{\sigma }^{2}{s}^{2}}{2}\right) ,\;\forall s \in \mathrm{R}. \] In this case we write \( X \sim \operatorname{subG}\left( {\sigma }^{2}\right) \) . Note that \( \operatorname{subG}\left( {\sigma }^{2}\right) \) denotes a class of distributions rather than a single distribution. The Gaussian distribution is a special case of a sub-Gaussian distribution. Equivalent characterizations of sub-Gaussian random variables include: (i) for fixed positive constants \( C \) and \( \nu, P\left( {\left| X\right| > t}\right) \leq C{e}^{-\nu {t}^{2}} \) for any \( t > 0 \) ; (ii) there exists a positive \( C \) such that \( {\left( \mathrm{E}{\left| X\right| }^{p}\right) }^{1/p} \leq C\sqrt{p} \) for any \( p \geq 1 \) . Gaussian random variables and random variables with bounded support are sub-Gaussian. The latter is a consequence of Hoeffding’s lemma, which states that for any \( X \) supported on an interval \( \left\lbrack {a, b}\right\rbrack \) with mean \( \mu \) , \[ E\left\{ {e}^{\left( {X - \mu }\right) \lambda }\right\} \leq \exp \left( {{\lambda }^{2}{\left( b - a\right) }^{2}/8}\right) ; \] see Hoeffding (1963) and Section 14.4 of Bühlmann and van de Geer (2011). This lemma is used to derive the Hoeffding inequality for sums of bounded independent random variables \( \left\{ {X}_{i}\right\} \) with each \( {X}_{i} \) bounded within \( \left\lbrack {{a}_{i},{b}_{i}}\right\rbrack \) . 
Specifically, let \( {S}_{n} = \mathop{\sum }\limits_{{i = 1}}^{n}{X}_{i} \) ; then for any \( s, t \geq 0 \) , \[ P\left( {{S}_{n} - E\left( {S}_{n}\right) \geq t}\right) = P\left( {{e}^{s\left\{ {{S}_{n} - E\left( {S}_{n}\right) }\right\} } \geq {e}^{st}}\right) \leq {e}^{-{st}}\mathop{\prod }\limits_{{i = 1}}^{n}E\left\{ {e}^{s\left( {{X}_{i} - {\mu }_{i}}\right) }\right\} \] \[ \leq {e}^{-{st}}\mathop{\prod }\limits_{{i = 1}}^{n}{e}^{{s}^{2}{\left( {b}_{i} - {a}_{i}\right) }^{2}/8} = {e}^{-{st} + {s}^{2}\sum {\left( {b}_{i} - {a}_{i}\right) }^{2}/8}. \] (4.1.3) The smallest bound is attained at \( s = \frac{4t}{\sum {\left( {b}_{i} - {a}_{i}\right) }^{2}} \) . Hence, the Hoeffding inequality \[ P\left( {{S}_{n} - E\left( {S}_{n}\right) \geq t}\right) \leq \exp \left\{ {-\frac{2{t}^{2}}{\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {b}_{i} - {a}_{i}\right) }^{2}}}\right\} . \] To establish high-probability events in terms of the responses in GLMs, we will adopt sub-Gaussian-type concentration inequalities for non-random weighted sums of exponential family random variables. Lemma 4.1.14 (Lemma 6.1 in Rigollet (2012)). Let \( {\left\{ {Y}_{i}\right\} }_{i = 1}^{n} \) be a sequence of random variables whose distribution belongs to the canonical exponential family with \( f\left( {{y}_{i};{\theta }_{i}}\right) = \) \( c\left( {y}_{i}\right) \exp \left( {{y}_{i}{\theta }_{i} - b\left( {\theta }_{i}\right) }\right) \) . We assume a uniformly bounded variance condition: there exists a constant \( {C}_{b} \) such that \( \mathop{\sup }\limits_{{{\theta }_{i} \in \mathbb{R}}}\ddot{b}\left( {\theta }_{i}\right) \leq {C}_{b}^{2} \) for all \( i \) . 
Let \( \mathbf{w} \mathrel{\text{:=}} {\left( {w}_{1},\cdots ,{w}_{n}\right) }^{T} \in {\mathrm{R}}^{n} \) be a non-random vector and define the weighted sum \( {S}_{n}^{w} \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}{Y}_{i} \) ; then we have \[ P\left\{ {\left| {{S}_{n}^{w} - E\left( {S}_{n}^{w}\right) }\right| > t}\right\} \leq 2\exp \left\{ {-\frac{{t}^{2}}{2{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}}\right\} . \] (4.1.4) Moreover, we have \( \mathrm{E}{\left| {S}_{n}^{w} - \mathrm{E}{S}_{n}^{w}\right| }^{k} \leq {D}_{k, C}\parallel \mathbf{w}{\parallel }_{2}^{k} \), where \( {D}_{k, C} = k{\left( 2{C}_{b}^{2}\right) }^{k/2}\Gamma \left( {k/2}\right) \) and \( \Gamma \left( \cdot \right) \) stands for the Gamma function. Proof: Let \( {Y}_{i} = \dot{b}\left( {\theta }_{i}\right) + {Z}_{i} \), where \( {Z}_{1},{Z}_{2},\cdots ,{Z}_{n} \) are centred and independent random variables. From the MGF \( {M}_{{Y}_{i}}\left( t\right) = \mathrm{E}\exp \left( {t{Y}_{i}}\right) = \exp \left\{ {b\left( {{\theta }_{i} + t}\right) - b\left( {\theta }_{i}\right) }\right\} \) and the bounded variance assumption \( \mathop{\sup }\limits_{{{\theta }_{i} \in \mathbb{R}}}\ddot{b}\left( {\theta }_{i}\right) \leq {C}_{b}^{2} \) for all \( i \), it can easily be derived that there exists \( \eta \in \left\lbrack {0,1}\right\rbrack \) such that \[ \operatorname{E}\exp \left( {s{Z}_{i}}\right) = \operatorname{E}\exp \left\{ {s\left( {{Y}_{i} - \dot{b}\left( {\theta }_{i}\right) }\right) }\right\} = \exp \left( {b\left( {{\theta }_{i} + s}\right) - b\left( {\theta }_{i}\right) - \dot{b}\left( {\theta }_{i}\right) s}\right) \] \[ \text{[by Taylor’s expansion]} = \exp \left( \frac{\ddot{b}\left( {{\theta }_{i} + {\eta s}}\right) {s}^{2}}{2}\right) \leq \exp \left( \frac{{C}_{b}^{2}{s}^{2}}{2}\right) \text{.} \] (4.1.5) Since \( {S}_{n}^{w} - \mathrm{E}{S}_{n}^{w} = \mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}{Z}_{i} \), it follows that \[ \mathrm{E}\exp \left( {s\left( {{S}_{n}^{w} - 
\mathrm{E}\left( {S}_{n}^{w}\right) }\right) }\right) = \mathrm{E}\exp \left( {s\mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}{Z}_{i}}\right) = \mathop{\prod }\limits_{{i = 1}}^{n}\mathrm{E}\exp \left( {s{w}_{i}{Z}_{i}}\right) \] \[ \text{ [using (4.1.5)] } \leq \mathop{\prod }\limits_{{i = 1}}^{n}\exp \left( \frac{{s}^{2}{C}_{b}^{2}{w}_{i}^{2}}{2}\right) = \exp \left( \frac{{s}^{2}{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}{2}\right) . \] Then by Chernoff's inequality, we have \[ P\left( {{S}_{n}^{w} - \mathrm{E}\left( {S}_{n}^{w}\right) \geq t}\right) \leq \exp \left( {-{st}}\right) \mathrm{E}\exp \left\{ {s\left( {{S}_{n}^{w} - \mathrm{E}\left( {S}_{n}^{w}\right) }\right) }\right\} \] \[ \leq \exp \left( {\frac{{s}^{2}{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}{2} - {st}}\right) = \exp \left( {-\frac{{t}^{2}}{2{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}}\right) , \] where the last equality follows by choosing \( s = \frac{t}{{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}} \), which minimizes the bound. By replacing \( {Y}_{i} \) with \( - {Y}_{i} \), the same argument gives \( P\left( {\mathrm{E}\left( {S}_{n}^{w}\right) - {S}_{n}^{w} \geq t}\right) \leq \exp \left( {-\frac{{t}^{2}}{2{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}}\right) \) . Therefore \[ P\left( {\left| {{S}_{n}^{w} - \mathrm{E}\left( {S}_{n}^{w}\right) }\right| \geq t}\right) = P\left( {{S}_{n}^{w} - \mathrm{E}\left( {S}_{n}^{w}\right) \geq t}\right) + P\left( {\mathrm{E}\left( {S}_{n}^{w}\right) - {S}_{n}^{w} \geq t}\right) \leq 2\exp \left( {-\frac{{t}^{2}}{2{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}}\right) . 
\] For the second claim, for any integer \( k \geq 1 \) it follows that \[ \mathrm{E}{\left| {S}_{n}^{w} - \mathrm{E}\left( {S}_{n}^{w}\right) \right| }^{k} = {\int }_{0}^{\infty }k{t}^{k - 1}P\left( {\left| {{S}_{n}^{w} - \mathrm{E}\left( {S}_{n}^{w}\right) }\right| \geq t}\right) {dt} \] \[ \leq {2k}{\int }_{0}^{\infty }{t}^{k - 1}\exp \left( {-\frac{{t}^{2}}{2{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}}\right) {dt} \] \[ \text{[by letting }x = \frac{{t}^{2}}{2{C}_{b}^{2}\parallel \mathbf{w}{\parallel }_{2}^{2}}\text{] } = k{\left( 2{C}_{b}^{2}\right) }^{k/2}\parallel \mathbf{w}{\parallel }_{2}^{k}{\int }_{0}^{\infty }{x}^{k/2 - 1}\exp \left( {-x}\right) {dx} \] \[ = {D}_{k, C}\parallel \mathbf{w}{\parallel }_{2}^{k}, \] where \( {D}_{k, C} = k{\left( 2{C}_{b}^{2}\right) }^{k/2}\Gamma \left( {k/2}\right) \) and \( \Gamma \left( \cdot \right) \) stands for the Gamma function. We say that a random vector \( \mathbf{X} \) in \( {\mathrm{R}}^{n} \) is sub-Gaussian if the one-dimensional marginals \( \langle \mathbf{X},\mathbf{x}\rangle \) are sub-Gaussian random variables for all \( \mathbf{x} \in {\mathrm{R}}^{n} \) . Letting \( \parallel R{\parallel }_{{\psi }_{2}} \mathrel{\text{:=}} \mathop{\sup }\limits_{{p \geq 1}}{p}^{-1/2}{\left( \mathrm{E}{\left| R\right| }^{p}\right) }^{1/p} \) for a random variable \( R \), the sub-Gaussian norm of \( \mathbf{X} \) is defined as \[ \parallel \mathbf{X}{\parallel }_{{\psi }_{2}} \mathrel{\text{:=}} \mathop{\sup }\limits_{{\mathbf{x} \in {S}^{n - 1}}}\parallel \langle \mathbf{X},\mathbf{x}\rangle {\parallel }_{{\psi }_{2}}. \] A random variable \( X \) with mean \( \mu \) is said to be sub-exponential with parameters \( \left( {{\sigma }^{2},{\lambda }_{0}}\right) \) if \[ E\left\{ {e}^{\left( {X - \mu }\right) \lambda }\right\} \leq \exp \left\{ {{\lambda }^{2}{\sigma }^{2}/2}\right\} \;\text{ for all }\left| \lambda \right| < {\lambda }_{0}. \] Sub-Gaussian random variables are sub-exponential but not vice versa. 
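The sub-Gaussian definition and the Hoeffding inequality can both be illustrated exactly for Rademacher variables \( \left( {P\left( {X = \pm 1}\right) = 1/2}\right) \), which are bounded in \( \left\lbrack {-1,1}\right\rbrack \): the MGF is \( \cosh \left( s\right) \leq \exp \left( {{s}^{2}/2}\right) \), so \( X \sim \operatorname{subG}\left( 1\right) \), and the Hoeffding tail bound with \( {\left( {b}_{i} - {a}_{i}\right) }^{2} = 4 \) can be compared against exact binomial tail probabilities. A small sketch (helper names ours):

```python
import math

# Rademacher X (P(X = 1) = P(X = -1) = 1/2) is subG(1): its MGF is
# E exp(sX) = cosh(s), and cosh(s) <= exp(s^2 / 2) for every real s.
for s in [x / 10 for x in range(-50, 51)]:
    assert math.cosh(s) <= math.exp(s * s / 2) + 1e-12

def tail(n, t):
    """Exact P(S_n >= t) for S_n a sum of n independent Rademacher variables."""
    # S_n = 2B - n with B ~ Binomial(n, 1/2), so S_n >= t iff B >= (n + t)/2.
    kmin = math.ceil((n + t) / 2)
    return sum(math.comb(n, k) for k in range(kmin, n + 1)) / 2 ** n

# Hoeffding with a_i = -1, b_i = 1: P(S_n >= t) <= exp(-2 t^2 / (4n)).
n = 30
for t in range(1, n + 1):
    assert tail(n, t) <= math.exp(-t * t / (2 * n))
```

The exact tails are of course smaller than the bound, often by a wide margin for moderate \( t \); the value of the inequality is that the exponential rate \( {t}^{2}/\left( {2n}\right) \) holds uniformly.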
## 4.1.3 Bernstein-type inequalities

The following Bernstein inequality for sums of independent random variables can be found on p. 119 of Giné and Nickl (2015) or in Lemma 2.2.11 of van der Vaart and Wellner (1996). Theorem 4.1.15 (Bernstein's inequality). The centred independent random variables \( {X}_{1},\ldots ,{X}_{n} \) satisfy \[ \mathrm{E}{\left| {X}_{i}\right| }^{k} \leq \frac{1}{2}{v}_{i}^{2}{\kappa }^{k - 2}k!,\;\left( {i = 1,2,\cdots, n}\right) ,\text{ for all }k \geq 2, \] where \( \kappa \) and \( \left\{ {v}_{i}\right\} \) (variance factors) are constants independent of \( k \) . Let \( {\nu }_{n}^{2} = \mathop{\sum }\limits_{{i = 1}}^{n}{v}_{i}^{2} \), which denotes the fluctuation of the sum; then the following inequality (where \( r > 0 \) ) is valid for the sum \( {S}_{n} = {X}_{1} + \cdots + {X}_{n} \) : \[ P\left( {\left| {S}_{n}\right| \geq r}\right) \leq 2\exp \left( {-\frac{{r}^{2}}{2{\nu }_{n}^{2} + {2\kappa r}}}\right) ,\;P\left( {\left| {S}_{n}\right| \geq \s
## 1164_(GTM70)Singular Homology Theory: Definition 4.1
Definition 4.1. An abelian group \( A \) is divisible if given any \( a \in A \) and any nonzero integer \( n \), there exists an element \( x \in A \) such that \( {nx} = a \) . Example 4.1. The additive group of rational numbers is divisible. It is easily proved that any quotient group of a divisible group is divisible, and any direct sum of divisible groups is divisible. Thus we could construct many more examples. In a certain sense, divisible groups have properties which are dual to those of free abelian groups. For example, any subgroup of a free abelian group is also free abelian, while any quotient group of a divisible group is divisible. Any free abelian group \( F \) is projective (in the category of abelian groups), in the sense that given any epimorphism \( h : A \rightarrow B \) and any homomorphism \( g : F \rightarrow B \) , there exists a homomorphism \( f : F \rightarrow A \) such that the following diagram is commutative: ![26a5d8f2-88cf-4556-8447-3a182179fff0_171_1.jpg](images/26a5d8f2-88cf-4556-8447-3a182179fff0_171_1.jpg) (the proof is easy). Dually, an abelian group \( G \) is called injective if given any monomorphism \( h : B \rightarrow A \) and any homomorphism \( g : B \rightarrow G \), there exists a homomorphism \( f : A \rightarrow G \) such that the following diagram is commutative: ![26a5d8f2-88cf-4556-8447-3a182179fff0_172_0.jpg](images/26a5d8f2-88cf-4556-8447-3a182179fff0_172_0.jpg) Note that this diagram is obtained from the previous one by reversing all the arrows. Theorem 4.1. An abelian group is injective if and only if it is divisible. The proof that an injective group is divisible is easy, and is left to the reader. Assume that \( G \) is divisible; we will prove that it is injective. Let \( A, B, h \) , and \( g \) be as in the diagram above. We may as well assume that \( B \) is a subgroup of \( A \), and \( h \) is the inclusion map. 
Consider all pairs \( \left( {{G}_{i},{h}_{i}}\right) \) where \( {G}_{i} \) is a subgroup of \( A \) which contains \( B \), and \( {h}_{i} : {G}_{i} \rightarrow G \) is a homomorphism such that \( {h}_{i} \mid B = g \) . This family of pairs is nonvacuous, because \( \left( {B, g}\right) \) obviously satisfies the required conditions. Define \( \left( {{G}_{i},{h}_{i}}\right) < \left( {{G}_{j},{h}_{j}}\right) \) if \( {G}_{i} \subset {G}_{j} \) and \( {h}_{j} \mid {G}_{i} = {h}_{i} \) . Apply Zorn’s lemma to this family with this ordering to conclude there exists a maximal pair \( \left( {{G}_{m},{h}_{m}}\right) \) . We assert \( {G}_{m} = A \) ; for if \( {G}_{m} \neq A \), let \( a \in A - {G}_{m} \) ; using the fact that \( G \) is divisible, it is easily shown that \( {h}_{m} \) can be extended to the subgroup generated by \( {G}_{m} \) and \( a \) . But this contradicts maximality of \( {G}_{m} \) . It is well known that every abelian group is isomorphic to a quotient of a free abelian group. The following is the dual property: Proposition 4.2. Any group is isomorphic to a subgroup of a divisible group. Proof. There are various ways to prove this. One way is to express the given group \( G \) as the quotient group of a free group \( F \) : \[ G \approx F/R\text{.} \] Obviously \( F \) can be considered as a subgroup of a divisible group \( D \) ; for if \( \left\{ {b}_{i}\right\} \) is a basis for \( F \), then we may take \( D \) as a rational vector space on the same basis. Then \( G \) is isomorphic to a subgroup of the divisible group \( D/R \) . Q.E.D. We will now list the basic properties of \( \operatorname{Ext}\left( {A, B}\right) \) . For any abelian groups \( A \) and \( B,\operatorname{Ext}\left( {A, B}\right) \) is also an abelian group. 
If \( f : {A}^{\prime } \rightarrow A \) and \( g : B \rightarrow {B}^{\prime } \) are homomorphisms, then \[ \operatorname{Ext}\left( {f, g}\right) : \operatorname{Ext}\left( {A, B}\right) \rightarrow \operatorname{Ext}\left( {{A}^{\prime },{B}^{\prime }}\right) \] is a homomorphism with the usual functorial properties. There are two ways to define or construct \( \operatorname{Ext}\left( {A, B}\right) \) : (a) By means of a free or projective resolution of \( A \) . Choose a short exact sequence \( 0 \rightarrow {F}_{1}\overset{d}{ \rightarrow }{F}_{0}\overset{\varepsilon }{ \rightarrow }A \rightarrow 0 \) with \( {F}_{0} \) (and hence \( {F}_{1} \) ) free abelian. Then the following sequence is exact: \[ 0 \leftarrow \mathrm{{Ext}}\left( {A, B}\right) \leftarrow \mathrm{{Hom}}\left( {{F}_{1}, B}\right) \overset{\mathrm{{Hom}}\left( {d,1}\right) }{ \leftarrow }\mathrm{{Hom}}\left( {{F}_{0}, B}\right) \overset{\mathrm{{Hom}}\left( {\varepsilon ,1}\right) }{ \leftarrow }\mathrm{{Hom}}\left( {A, B}\right) \leftarrow 0. \] In other words, \( \operatorname{Ext}\left( {A, B}\right) \) is the cokernel of the homomorphism \( \operatorname{Hom}\left( {d,1}\right) \) . (b) By means of an injective resolution of \( B \) . Choose a short exact sequence \( 0 \rightarrow B\overset{s}{ \rightarrow }{D}_{0}\overset{d}{ \rightarrow }{D}_{1} \rightarrow 0 \) with \( {D}_{0} \) (and hence \( {D}_{1} \) ) divisible. (By the proposition above, such a sequence always exists.) Then the following sequence is exact: \[ 0 \rightarrow \operatorname{Hom}\left( {A, B}\right) \xrightarrow[]{\operatorname{Hom}\left( {1, s}\right) }\operatorname{Hom}\left( {A,{D}_{0}}\right) \xrightarrow[]{\operatorname{Hom}\left( {1, d}\right) }\operatorname{Hom}\left( {A,{D}_{1}}\right) \rightarrow \operatorname{Ext}\left( {A, B}\right) \rightarrow 0. \] Thus \( \operatorname{Ext}\left( {A, B}\right) \) is the cokernel of the homomorphism \( \operatorname{Hom}\left( {1, d}\right) \) . 
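Definition (a) can be made computational for cyclic groups. Taking \( A = {Z}_{n} \) with the resolution \( 0 \rightarrow Z\overset{n}{ \rightarrow }Z \rightarrow {Z}_{n} \rightarrow 0 \) (where the first map is multiplication by \( n \) ), applying \( \operatorname{Hom}\left( { - , B}\right) \) turns \( d \) into multiplication by \( n \) on \( B \); then \( \operatorname{Ext}\left( {{Z}_{n}, B}\right) \) is its cokernel and \( \operatorname{Hom}\left( {{Z}_{n}, B}\right) \) its kernel. A brute-force sketch for \( B = {Z}_{m} \) (the function name is ours):

```python
from math import gcd

def ext_hom_cyclic(n, m):
    """Orders of Ext(Z_n, Z_m) and Hom(Z_n, Z_m), computed from the free
    resolution 0 -> Z --(mult. by n)--> Z -> Z_n -> 0: applying Hom(-, Z_m)
    turns d into multiplication by n on Z_m, so Ext is its cokernel and
    Hom is its kernel."""
    image = {(n * x) % m for x in range(m)}          # the subgroup n Z_m
    coker_size = m // len(image)                     # |Z_m / n Z_m|
    ker_size = sum(1 for x in range(m) if (n * x) % m == 0)
    return coker_size, ker_size

# Both groups are cyclic of order gcd(n, m).
for n in range(1, 12):
    for m in range(1, 12):
        e, h = ext_hom_cyclic(n, m)
        assert e == gcd(n, m) and h == gcd(n, m)
```

This reproduces, for cyclic coefficient groups, the computation \( \operatorname{Ext}\left( {{Z}_{n},{Z}_{m}}\right) \approx {Z}_{m}/n{Z}_{m} \approx {Z}_{\gcd \left( {n, m}\right) } \).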
Naturally, one must prove that the group \( \operatorname{Ext}\left( {A, B}\right) \) is independent of the projective resolution in (a), and of the injective resolution in (b). Also, it must be proved that the two definitions give rise to the same group. For information on these matters, the reader is referred to books on homological algebra (see the bibliography for Chapter V). The definition of the induced homomorphism \( \operatorname{Ext}\left( {f, g}\right) \) is left to the reader. From these definitions, the following two statements are obvious consequences: (1) If \( A \) is a free abelian group, then \( \operatorname{Ext}\left( {A, B}\right) = 0 \) for any group \( B \) . (2) If \( B \) is a divisible group, then \( \operatorname{Ext}\left( {A, B}\right) = 0 \) for any group \( A \) . Using the definition (a) above, one readily shows that: (3) \( \operatorname{Ext}\left( {{Z}_{n}, B}\right) \approx B/{nB} \) and \( \operatorname{Hom}\left( {{Z}_{n}, B}\right) \approx \{ x \in B \mid {nx} = 0\} \) . By means of (1) and (3), the structure of \( \operatorname{Ext}\left( {A, B}\right) \) can be determined in case \( A \) is a finitely generated abelian group. We conclude this summary of the principal properties of the functor Ext by mentioning the following two exact sequences. Let \[ 0 \rightarrow A\overset{h}{ \rightarrow }B\overset{k}{ \rightarrow }C \rightarrow 0 \] be a short exact sequence of abelian groups, and let \( G \) be an arbitrary abelian group. 
Then the following two sequences are exact: \[ 0 \rightarrow \mathrm{{Hom}}\left( {C, G}\right) \xrightarrow[]{\;\mathrm{{Hom}}\left( {k,1}\right) }\mathrm{{Hom}}\left( {B, G}\right) \xrightarrow[]{\;\mathrm{{Hom}}\left( {h,1}\right) }\mathrm{{Hom}}\left( {A, G}\right) \] (4.1) \[ \rightarrow \operatorname{Ext}\left( {C, G}\right) \xrightarrow[]{\;\operatorname{Ext}\left( {k,1}\right) }\operatorname{Ext}\left( {B, G}\right) \xrightarrow[]{\;\operatorname{Ext}\left( {h,1}\right) }\operatorname{Ext}\left( {A, G}\right) \rightarrow 0, \] \[ 0 \rightarrow \mathrm{{Hom}}\left( {G, A}\right) \xrightarrow[]{\;\mathrm{{Hom}}\left( {1, h}\right) }\mathrm{{Hom}}\left( {G, B}\right) \xrightarrow[]{\;\mathrm{{Hom}}\left( {1, k}\right) }\mathrm{{Hom}}\left( {G, C}\right) \] (4.2) \[ \rightarrow \mathrm{{Ext}}\left( {G, A}\right) \xrightarrow[]{\;\mathrm{{Ext}}\left( {1, h}\right) }\mathrm{{Ext}}\left( {G, B}\right) \xrightarrow[]{\;\mathrm{{Ext}}\left( {1, k}\right) }\mathrm{{Ext}}\left( {G, C}\right) \rightarrow 0. \] In these exact sequences, the connecting homomorphisms, \( \operatorname{Hom}\left( {A, G}\right) \rightarrow \) \( \operatorname{Ext}\left( {C, G}\right) \) and \( \operatorname{Hom}\left( {G, C}\right) \rightarrow \operatorname{Ext}\left( {G, A}\right) \) have all the naturality properties that one might expect. With these preliminaries out of the way, we can now state the main result in this area: Theorem 4.3 (Universal coefficient theorem for cohomology). Let \( K \) be a chain complex of free abelian groups, and let \( G \) be an arbitrary abelian group. Then there exists a split exact sequence \[ 0 \rightarrow \operatorname{Ext}\left( {{H}_{n - 1}\left( K\right), G}\right) \overset{\beta }{ \rightarrow }{H}^{n}\left( {\operatorname{Hom}\left( {K, G}\right) }\right) \overset{\alpha }{ \rightarrow }\operatorname{Hom}\left( {{H}_{n}\left( K\right), G}\right) \rightarrow 0. 
\] The homomorphism \( \beta \) is natural with respect to coefficient homomorphisms and chain maps. The splitting is natural with respect to coefficient homomorphisms but not with respect to chain maps.

Proof. The proof we present is dual to that given in §V.6. For the reader who has some feeling for this duality, it is a purely mechanical exercise to transpose the previous proof to the present one. First we need a lemma, which is the dual of Lemma V.6.1.

Lemma 4.4. If \( G \) is a divisible group, then the homomorphism
\[ \alpha : {H}^{n}\left( {\operatorname{Hom}\left( {K, G}\right) }\right) \rightarrow \operatorname{Hom}\left( {{H}_{n}\left( K\right), G}\right) \]
is an isomorphism for any chain complex \( K \).

The proof of this lemma is a nice exercise, involving the various definitions and the fact that divisible groups are injective.
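As a concrete illustration of facts (1)–(3), for finite cyclic groups they give \( \operatorname{Ext}\left( {{Z}_{n},{Z}_{m}}\right) \approx {Z}_{m}/n{Z}_{m} \cong {Z}_{\gcd \left( {n, m}\right) } \) and \( \operatorname{Hom}\left( {{Z}_{n},{Z}_{m}}\right) \cong {Z}_{\gcd \left( {n, m}\right) } \). The following Python sketch checks this by brute force; the function names are ours, and only the orders of the groups are compared.

```python
from math import gcd

def ext_zn(n, m):
    """|Ext(Z_n, Z_m)| = |Z_m / n*Z_m|, computed by brute force via fact (3)."""
    nB = {(n * x) % m for x in range(m)}   # the subgroup nB of B = Z_m
    return m // len(nB)                    # index of nB in B, i.e. |B/nB|

def hom_zn(n, m):
    """|Hom(Z_n, Z_m)| = |{x in Z_m : n*x = 0}|, computed by brute force."""
    return sum(1 for x in range(m) if (n * x) % m == 0)

# Both orders equal gcd(n, m) for every pair tested.
for n in (2, 3, 4, 6):
    for m in (2, 3, 4, 6, 8):
        g = gcd(n, m)
        assert ext_zn(n, m) == g and hom_zn(n, m) == g
```

Together with fact (1) (free summands contribute nothing), this determines \( \operatorname{Ext}\left( {A, B}\right) \) for finitely generated \( A \) and finite cyclic \( B \).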
1281_[张恭庆] Lecture Notes on Calculus of Variations
Definition 20.1
Definition 20.1 A function \( u : J = \left( {a, b}\right) \rightarrow {\mathbb{R}}^{1} \) is said to be of bounded variation, denoted BV, if there is a constant \( M > 0 \) such that
\[ {S}_{\pi }\left( u\right) = \mathop{\sum }\limits_{{i = 1}}^{n}\left| {u\left( {t}_{i + 1}\right) - u\left( {t}_{i}\right) }\right| \leq M \]
for every partition \( \pi = \left\{ {a < {t}_{1} < \cdots < {t}_{n + 1} < b}\right\} \) of \( \left( {a, b}\right) \). We call
\[ {V}_{a}^{b}\left( u\right) = \mathop{\sup }\limits_{\pi }{S}_{\pi }\left( u\right) \]
the total variation of \( u \) on \( J \). The set of BV functions on \( J \) is denoted by \( \operatorname{BV}\left( J\right) \), which is itself a linear space. The following simple properties are straightforward to verify:

(1) A bounded monotone function on \( J \) is a BV function.

(2) \( \operatorname{Lip}\left( {J,{\mathbb{R}}^{1}}\right) \subset \operatorname{BV}\left( J\right) \).

(3) \( \forall u \in \operatorname{BV}\left( J\right), u \) can be decomposed as the difference of two monotone increasing functions:
\[ u\left( x\right) = \frac{1}{2}\left( {{T}_{u}\left( x\right) + u\left( x\right) }\right) - \frac{1}{2}\left( {{T}_{u}\left( x\right) - u\left( x\right) }\right) , \]
where
\[ {T}_{u}\left( x\right) = \sup \left\{ {\mathop{\sum }\limits_{{i = 1}}^{m}\left| {u\left( {t}_{i + 1}\right) - u\left( {t}_{i}\right) }\right| \mid \sigma : a < {t}_{1} < \cdots < {t}_{m + 1} < x\text{ is any partition of }\left( {a, x}\right) }\right\} . \]

(4) \( \forall u \in \operatorname{BV}\left( J\right) \), the set of jump points \( {D}_{u} = \{ x \in J \mid u\left( {x - 0}\right) \neq u\left( {x + 0}\right) \} \) is at most countable.

Since the value of a BV function at a jump discontinuity is not determined, in order to avoid unnecessary fuss caused by this, we choose to normalize
\[ u\left( x\right) = u\left( {x - 0}\right) ,\;\forall x \in J.
\] To further signify the variation part of \( u \) on \( J \), we also normalize \( u\left( {a + 0}\right) = 0 \). The set of all normalized functions of bounded variation on \( J \) is still a linear space, denoted by \( \operatorname{NBV}\left( J\right) \). On it, we define the norm
\[ \parallel u\parallel = {V}_{a}^{b}\left( u\right) . \]
Then \( \operatorname{NBV}\left( J\right) \) is a Banach space. In the following, we establish a one-to-one correspondence between NBV functions and \( \sigma \) -additive Borel set functions (signed Borel measures). Let \( u : J \rightarrow {\mathbb{R}}^{1} \) be a monotone increasing normalized BV function, and define
\[ {\mu }_{u}\left( \left( {a, x}\right) \right) = u\left( x\right) . \]
We claim that \( {\mu }_{u} \) can be extended to a measure on \( J \), i.e. a \( \sigma \) -additive nonnegative set function. In fact, \( \forall \lbrack \alpha ,\beta ) \subset J \), let
\[ {\mu }_{u}\left( {\lbrack \alpha ,\beta )}\right) = u\left( \beta \right) - u\left( \alpha \right) ; \]
then \( {\mu }_{u}\left( {\{ \alpha \} }\right) = u\left( {\alpha + 0}\right) - u\left( \alpha \right) \). Hence, for any Borel set \( E \subset J \), we have
\[ {\mu }_{u}\left( E\right) = {\mathcal{L}}^{1}\left( {\mathop{\bigcup }\limits_{{x \in E}}\left\lbrack {u\left( x\right), u\left( {x + 0}\right) }\right\rbrack }\right) , \]
where \( {\mathcal{L}}^{1} \) denotes the Lebesgue measure on \( {\mathbb{R}}^{1} \). Conversely, for a given Borel measure \( \mu \) on \( J \), the function \( u\left( x\right) = \mu \left( \left( {a, x}\right) \right) \) is monotonically increasing and left continuous; moreover, it satisfies \( u\left( {a + 0}\right) = 0 \). From property (3), \( \forall u \in \operatorname{BV}\left( J\right) \), there exist two monotone increasing, left continuous functions \( {u}_{1} \) and \( {u}_{2} \) such that \( u = {u}_{1} - {u}_{2} \).
By the above statement, they correspond to two respective Borel measures \( {\mu }_{i} = {\mu }_{{u}_{i}} \) for \( i = 1,2 \) . By defining \[ \mu \left( E\right) = {\mu }_{1}\left( E\right) - {\mu }_{2}\left( E\right) \] it follows that \( \mu \) is a \( \sigma \) -additive Borel measurable set function, which satisfies \[ \mu \left( \left( {a, x}\right) \right) = {\mu }_{1}\left( \left( {a, x}\right) \right) - {\mu }_{2}\left( \left( {a, x}\right) \right) = u\left( x\right) ,\;\forall x \in J. \] Subsequently, for any \( \sigma \) -additive Borel measurable set function \( \mu \) , \[ u\left( x\right) = \mu \left( \left( {a, x}\right) \right) \] (20.1) defines a unique normalized BV function. As a consequence, for any \( \sigma \) -additive Borel measurable set function \( \mu \) and any Borel measurable set \( E \in \mathcal{B} \), we can also define a "total variation": \[ \left| \mu \right| \left( E\right) = \sup \left\{ {\mathop{\sum }\limits_{{i = 1}}^{\infty }\left| {\mu \left( {B}_{i}\right) }\right| \mid {B}_{i} \in \mathcal{B},{B}_{i} \cap {B}_{j} = \varnothing, i \neq j,\mathop{\bigcup }\limits_{{i = 1}}^{\infty }{B}_{i} = E}\right\} . \] It is not difficult to verify: (1) \( \left| \mu \right| \) is a Borel measure, (2) \( \left| {\mu \left( E\right) }\right| \leq \left| \mu \right| \left( E\right) ,\forall E \in \mathcal{B} \) , (3) \( \left| \mu \right| \left( J\right) = {V}_{a}^{b}\left( u\right) \) . We denote the space of all \( \sigma \) -additive Borel measurable set functions on \( J \) by \( \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \) . On \( \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \), define the norm to be \[ \parallel \mu {\parallel }_{\mathcal{M}} = \left| \mu \right| \left( J\right) \] then it is also a Banach space. The above one-to-one correspondence \( u \mapsto \mu \) is an isomorphism between the Banach spaces \( \operatorname{NBV}\left( J\right) \) and \( \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \) . 
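Returning to Definition 20.1, the quantities \( {S}_{\pi }\left( u\right) \) and \( {V}_{a}^{b}\left( u\right) \) are easy to approximate numerically. The Python sketch below (function names are ours) evaluates \( {S}_{\pi } \) on a single fine uniform partition; this only bounds the supremum from below, but it suffices for piecewise-monotone functions.

```python
import math

def variation_on_partition(u, pts):
    """S_pi(u) = sum of |u(t_{i+1}) - u(t_i)| over a partition pts."""
    return sum(abs(u(s) - u(t)) for t, s in zip(pts, pts[1:]))

def total_variation(u, a, b, n=10_000):
    """Approximate V_a^b(u) using one fine uniform partition."""
    pts = [a + (b - a) * i / n for i in range(n + 1)]
    return variation_on_partition(u, pts)

# Property (1): for a bounded monotone function, V_a^b(u) = u(b) - u(a).
assert abs(total_variation(math.atan, -5, 5) - (math.atan(5) - math.atan(-5))) < 1e-6
# sin on (0, 2*pi) is monotone on three pieces (up, down, up): variation 4.
assert abs(total_variation(math.sin, 0, 2 * math.pi) - 4.0) < 1e-3
```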
Recall the Riesz representation theorem for the space of continuous functions on \( J \) :
\[ {C}_{0}{\left( J\right) }^{ * } \cong \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) , \]
where \( {C}_{0}\left( J\right) = \{ \varphi \in C\left( J\right) \mid \varphi \left( a\right) = \varphi \left( b\right) = 0\} \). Since \( {C}_{0}\left( J\right) \) is separable, \( \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \) is the dual space of a separable Banach space. This means: \( \forall F \in {C}_{0}{\left( J\right) }^{ * } \), there exists a unique \( \mu \in \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \) such that
\[ F\left( \varphi \right) = {\int }_{J}\varphi \, d\mu ,\;\forall \varphi \in {C}_{0}\left( J\right) , \]
with \( \parallel F\parallel = \parallel \mu {\parallel }_{\mathcal{M}} \), and the map \( F \mapsto \mu \) is surjective. By the Lebesgue–Nikodym decomposition theorem, every BV function has the following decomposition:
\[ u\left( x\right) = v\left( x\right) + r\left( x\right) + s\left( x\right) , \]
where \( v\left( x\right) \) is the absolutely continuous part of \( u \), \( r\left( x\right) \) is the Cantor part of \( u \), and \( s\left( x\right) \) is the jump part of \( u \). In particular, \( s \) is itself a jump function, while \( r \) is a continuous function of bounded variation whose derivative equals zero almost everywhere (yet \( r \) need not be constant). Since a BV function \( u \) is itself a bounded measurable function, \( u \in {L}^{1}\left( J\right) \). As a function in \( {L}^{1}\left( J\right) \), it has a generalized derivative (in the sense of distributions) \( {Du} \), which satisfies
\[ \langle {Du},\varphi \rangle = - {\int }_{J}u \cdot {\varphi }^{\prime }\,{dx},\;\forall \varphi \in {C}_{0}^{\infty }\left( J\right) . \]
(20.2) It is worth noting that, in contrast to Sobolev spaces, for a BV function \( u \) the distributional derivative \( {Du} \) and the almost-everywhere derivative \( {u}^{\prime } \) need not coincide!
Only when \( r = s = 0 \) is \( u \in \mathrm{{AC}}\left( J\right) \), in which case \( {Du} = {u}^{\prime } \). If \( s \neq 0 \), then \( {Du} \) is a genuine measure rather than a function. For example, let \( J = \left( {-1,1}\right) \) and
\[ u\left( x\right) = \left\{ \begin{array}{l} 0, x \leq 0, \\ 1, x > 0; \end{array}\right. \]
then \( {Du} = \delta \left( x\right) \). That is, if we define \( \mu = {Du} \), then \( \mu \left( {\{ 0\} }\right) = 1 \) and \( \mu \left( B\right) = 0 \) for all \( B \in \mathcal{B} \) with \( 0 \notin B \). We now reveal the explicit relation between a BV function \( u \) and its corresponding measure \( \mu \). Notice that, as the dual space of \( {C}_{0}\left( J\right) \), the space \( \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \) carries a natural dual pairing:
\[ \langle \mu ,\varphi \rangle = {\int }_{J}\varphi \,\mathrm{d}\mu ,\;\forall \left( {\mu ,\varphi }\right) \in \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \times {C}_{0}\left( J\right) . \]
Thus,
\[ \parallel \mu {\parallel }_{\mathcal{M}} = \sup \left\{ {{\int }_{J}\varphi \, d\mu \mid \parallel \varphi {\parallel }_{{C}_{0}} \leq 1}\right\} . \]
Furthermore, according to the one-to-one correspondence between \( \mathcal{M}\left( {J,{\mathbb{R}}^{1}}\right) \) and \( \operatorname{NBV}\left( J\right) \), we also have
\[ \langle \mu ,\varphi \rangle = {\int }_{J}\varphi \,{du}\left( x\right) = - {\int }_{J}u\left( x\right) {\varphi }^{\prime }\left( x\right) \,{dx}. \]
Combining with (20.2), it follows immediately that
\[ {Du} = \mu . \]
Since there is no notion of end (left and right) points in the domain of a function of several variables, the idea of normalizing the function value is impractical. However, the above formula sheds new light on how to generalize functions of bounded variation from a single variable to several variables.
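Before moving on, the identity \( {Du} = \delta \) for the step-function example can be checked numerically against (20.2): for a test function \( \varphi \) vanishing at the endpoints of \( J = \left( {-1,1}\right) \), the pairing \( -{\int }_{J}u{\varphi }^{\prime }\,{dx} \) should return \( \varphi \left( 0\right) \). A small Python sketch using midpoint quadrature (the particular \( \varphi \) is our choice):

```python
# phi(x) = (1 - x^2)^2 vanishes at both endpoints of J = (-1, 1),
# playing the role of a test function.
def phi(x):  return (1 - x * x) ** 2
def dphi(x): return -4 * x * (1 - x * x)   # phi'(x)
def u(x):    return 0.0 if x <= 0 else 1.0 # the step function above

n = 200_000
h = 2.0 / n
# midpoint rule for <Du, phi> = -integral of u(x) * phi'(x) over (-1, 1)
pairing = -sum(u(-1 + (i + 0.5) * h) * dphi(-1 + (i + 0.5) * h) * h
               for i in range(n))
# Du = delta would give exactly phi(0) = 1.
assert abs(pairing - phi(0)) < 1e-6
```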
More precisely, we are led to the following definition: Definition 20.2 On \( \operatorname{BV}\left( J\right) \), we define the norm
\[ \parallel u{\parallel }_{\mathrm{{BV}}} = \parallel u{\parallel }_{{L}^{1}} + \parallel {Du}{\parallel }_{\mathcal{M}}. \]
It is not difficult to verify that \( \mathrm{{BV}}\left( J\right) \) is a Banach space. ## 20.2 Functions of bounded variation in several variables Let \( \Omega \subset {\mathbb{R
1068_(GTM227)Combinatorial Commutative Algebra
Definition 15.1
Definition 15.1 Let \( w \in {M}_{k\ell } \) be a partial permutation, meaning that \( w \) is a \( k \times \ell \) matrix having all entries equal to 0 except for at most one entry equal to 1 in each row and column. The matrix Schubert variety \( {\bar{X}}_{w} \) inside \( {M}_{k\ell } \) is the subvariety \[ {\bar{X}}_{w} = \left\{ {Z \in {M}_{k\ell } \mid \operatorname{rank}\left( {Z}_{p \times q}\right) \leq \operatorname{rank}\left( {w}_{p \times q}\right) \text{ for all }p\text{ and }q}\right\} \] where \( {Z}_{p \times q} \) is the upper left \( p \times q \) rectangular submatrix of \( Z \) . Let \( r\left( w\right) \) be the \( k \times \ell \) rank array whose entry at \( \left( {p, q}\right) \) is \( {r}_{pq}\left( w\right) = \operatorname{rank}\left( {w}_{p \times q}\right) \) . Example 15.2 The classical determinantal variety is the set of all \( k \times \ell \) matrices over \( \mathbb{k} \) of rank at most \( r \) . This variety is the matrix Schubert variety \( {\bar{X}}_{w} \) for the partial permutation matrix \( w \) with \( r \) nonzero entries \[ {w}_{11} = {w}_{22} = \cdots = {w}_{rr} = 1 \] along the diagonal, and all other entries \( {w}_{\alpha \beta } \) equal to zero. The classical determinantal ideal, generated by the set of all \( \left( {r + 1}\right) \times \left( {r + 1}\right) \) minors of the \( k \times \ell \) matrix of variables, vanishes on this variety. In Definition 15.5 this ideal will be called the Schubert determinantal ideal \( {I}_{w} \) for the special partial permutation \( w \) above. We will see in Corollary 16.29 that in fact \( {I}_{w} \) is the prime ideal of \( {\bar{X}}_{w} \) . In Example 15.39 we show that the multidegree of this classical determinantal ideal is a Schur polynomial. Our results in Chapter 16 imply that the set of all \( \left( {r + 1}\right) \times \left( {r + 1}\right) \) minors is a Gröbner basis and its determinantal variety is Cohen-Macaulay. 
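Definition 15.1 translates directly into a membership test: compute the rank array \( r\left( w\right) \) and compare ranks of all northwest submatrices. The following Python sketch does this with exact rational arithmetic; the function names are ours, and we check the classical determinantal variety of Example 15.2 with \( k = \ell = 3 \) and \( r = 1 \).

```python
from fractions import Fraction

def rank(M):
    """Rank of a list-of-lists matrix, by row reduction over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    r, rows, cols = 0, len(M), len(M[0]) if M else 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def nw(M, p, q):
    """Upper left p-by-q rectangular submatrix."""
    return [row[:q] for row in M[:p]]

def rank_array(w):
    k, l = len(w), len(w[0])
    return [[rank(nw(w, p, q)) for q in range(1, l + 1)] for p in range(1, k + 1)]

def in_schubert_variety(Z, w):
    """Membership test of Definition 15.1."""
    return all(rank(nw(Z, p + 1, q + 1)) <= r
               for p, row in enumerate(rank_array(w)) for q, r in enumerate(row))

# Example 15.2 with r = 1: w has a single 1 in the upper left corner.
w = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
Z_rank1 = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]   # rank 1: lies in the variety
Z_rank2 = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]   # rank 2: does not
assert in_schubert_variety(Z_rank1, w) and not in_schubert_variety(Z_rank2, w)
```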
Some readers may wonder whether the machinery developed below is really the right way to prove the Gröbner basis property in this classical case. Our answer to this question is "yes": the weak order transitions in Sections 15.3-15.5 provide the steps for an elementary and self-contained proof by an induction involving all matrix Schubert varieties (starting from the case of a coordinate subspace in Example 15.3), and it is this induction that captures the combinatorics of determinantal ideals so richly. The combinatorics is inherent in the multigrading that is universal among those making all minors homogeneous (see Exercise 15.1), and the induction relies crucially on having a multigrading beyond the standard \( \mathbb{Z} \) -grading. For the classical determinantal ideal of maximal minors, where \( r = \) \( \min \left( {k,\ell }\right) \), the induction is particularly simple and explicit, as is the combinatorial multidegree formula; see Exercises 15.4, 15.5, and 15.12. Partial permutation matrices \( w \) are sometimes called rook placements, because rooks placed on the 1 entries in \( w \) are not attacking one another. The number \( \operatorname{rank}\left( {w}_{p \times q}\right) \) appearing in Definition 15.1 is simply the number of 1 entries (rooks) in the northwest \( p \times q \) submatrix of \( w \) . A partial permutation can be viewed as a correspondence that takes some of the integers \( 1,\ldots, k \) to distinct integers from \( \{ 1,\ldots ,\ell \} \) . We write \( w\left( i\right) = j \) if the partial permutation \( w \) has a 1 in row \( i \) and column \( j \) . This convention results from viewing matrices in \( {M}_{k\ell } \) as acting on row vectors from the right; it is therefore transposed from the more common convention for writing permutation matrices using columns. 
When \( w \) is an honest square permutation matrix of size \( n \), so \( k = n = \ell \) and there are exactly \( n \) entries of \( w \) equal to 1, then we can express \( w \) in one-line notation: the permutation \( w = {w}_{1}\ldots {w}_{n} \) of \( \{ 1,\ldots, n\} \) sends \( i \mapsto {w}_{i} \) . This is not to be confused with cycle notation, where (for instance) the permutation \( {\sigma }_{i} = \left( {i, i + 1}\right) \) is the adjacent transposition switching \( i \) and \( i + 1 \) . The number \( \operatorname{rank}\left( {w}_{p \times q}\right) \) can alternatively be expressed as \[ {r}_{pq}\left( w\right) = \operatorname{rank}\left( {w}_{p \times q}\right) = \# \{ \left( {i, j}\right) \leq \left( {p, q}\right) \mid w\left( i\right) = j\} \] for permutations \( w \) . The symmetric group of permutations of \( \{ 1,\ldots, n\} \) is denoted by \( {S}_{n} \) . There is a special permutation \( {w}_{0} = n\ldots {321} \) called the long word inside \( {S}_{n} \), which reverses the order of \( 1,\ldots, n \) . Example 15.3 The variety \( {\bar{X}}_{{w}_{0}} \) inside \( {M}_{nn} \) for the long word \( {w}_{0} \in {S}_{n} \) is just the linear subspace of lower-right-triangular matrices; its ideal is \( \left\langle {{x}_{ij} \mid i + j \leq n}\right\rangle \) . As we will see in Section 15.3, this is the smallest matrix Schubert variety indexed by an honest permutation in \( {S}_{n} \) . 
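For honest permutations the counting formula for \( {r}_{pq}\left( w\right) \) is easy to implement. The Python sketch below (names ours) verifies the description in Example 15.3: for the long word \( {w}_{0} \) the entries \( {x}_{pq} \) forced to vanish on \( {\bar{X}}_{{w}_{0}} \), namely those with \( {r}_{pq}\left( {w}_{0}\right) = 0 \), are exactly the entries with \( p + q \leq n \).

```python
def r(w, p, q):
    """r_pq(w) = #{(i, j) <= (p, q) : w(i) = j}, with w given in
    1-based one-line notation as a tuple, e.g. (3, 2, 1)."""
    return sum(1 for i in range(1, p + 1) if w[i - 1] <= q)

n = 4
w0 = tuple(range(n, 0, -1))   # the long word 4321 in S_4
forced_zero = {(p, q)
               for p in range(1, n + 1) for q in range(1, n + 1)
               if r(w0, p, q) == 0}   # 1x1 minors x_pq lying in I_{w0}
# Matches the ideal <x_ij | i + j <= n> of Example 15.3.
assert forced_zero == {(p, q)
                       for p in range(1, n + 1) for q in range(1, n + 1)
                       if p + q <= n}
```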
Example 15.4 Five of the six \( 3 \times 3 \) matrix Schubert varieties for honest permutations are linear subspaces:
\[ {\bar{X}}_{123} = {M}_{33}, \]
\[ {I}_{213} = \left\langle {x}_{11}\right\rangle ,\;{\bar{X}}_{213} = \left\{ {Z \in {M}_{33} \mid {x}_{11} = 0}\right\} , \]
\[ {I}_{231} = \left\langle {{x}_{11},{x}_{21}}\right\rangle ,\;{\bar{X}}_{231} = \left\{ {Z \in {M}_{33} \mid {x}_{11} = {x}_{21} = 0}\right\} , \]
\[ {I}_{312} = \left\langle {{x}_{11},{x}_{12}}\right\rangle ,\;{\bar{X}}_{312} = \left\{ {Z \in {M}_{33} \mid {x}_{11} = {x}_{12} = 0}\right\} , \]
\[ {I}_{321} = \left\langle {{x}_{11},{x}_{12},{x}_{21}}\right\rangle ,\;{\bar{X}}_{321} = \left\{ {Z \in {M}_{33} \mid {x}_{11} = {x}_{12} = {x}_{21} = 0}\right\} . \]
The remaining permutation, \( w = {132} \), has the permutation matrix with 1 entries at positions \( \left( {1,1}\right) \), \( \left( {2,3}\right) \), and \( \left( {3,2}\right) \), so that
\[ {I}_{132} = \left\langle {{x}_{11}{x}_{22} - {x}_{12}{x}_{21}}\right\rangle ,\;{\bar{X}}_{132} = \left\{ {Z \in {M}_{33} \mid \operatorname{rank}\left( {Z}_{2 \times 2}\right) \leq 1}\right\} . \]
Thus \( {\bar{X}}_{132} \) is the set of matrices whose upper left \( 2 \times 2 \) block is singular. \( \diamond \) Since a matrix has rank at most \( r \) if and only if its minors of size \( r + 1 \) all vanish, the matrix Schubert variety \( {\bar{X}}_{w} \) is the (reduced) subvariety of \( {\mathbb{k}}^{k \times \ell } \) cut out by the ideal \( {I}_{w} \) defined as follows. Definition 15.5 Let \( w \in {M}_{k\ell } \) be a partial permutation. The Schubert determinantal ideal \( {I}_{w} \subset \mathbb{k}\left\lbrack \mathbf{x}\right\rbrack \) is generated by all minors in \( {\mathbf{x}}_{p \times q} \) of size \( 1 + {r}_{pq}\left( w\right) \) for all \( p \) and \( q \), where \( \mathbf{x} = \left( {x}_{\alpha \beta }\right) \) is the \( k \times \ell \) matrix of variables.
It is a nontrivial fact that Schubert determinantal ideals are prime, but we will not need it in this chapter, where we work exclusively with the zero set \( {\bar{X}}_{w} \) of \( {I}_{w} \). We therefore write \( I\left( {\bar{X}}_{w}\right) \) instead of \( {I}_{w} \) when we mean the radical of \( {I}_{w} \). Chapter 16 gives a combinatorial algebraic primality proof. Example 15.6 Let \( w = {13865742} \), so that the matrix for \( w \) is given by replacing each \( \times \) by 1 in the left matrix below. ![9d852306-8a03-41f2-b2e7-a141e7b451e2_301_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_301_0.jpg) Each \( 8 \times 8 \) matrix in \( {\bar{X}}_{w} \) has the property that every rectangular submatrix contained in the region filled with 1 ’s has rank \( \leq 1 \), and every rectangular submatrix contained in the region filled with 2’s has rank \( \leq 2 \), and so on. The ideal \( {I}_{w} \) contains the 21 minors of size \( 2 \times 2 \) in the first region and the 144 minors of size \( 3 \times 3 \) in the second region. These 165 minors in fact generate \( {I}_{w} \) ; see Theorem 15.15. \( \diamond \) Example 15.7 Let \( w \) be the \( 3 \times 3 \) partial permutation matrix with 1 entries at positions \( \left( {1,2}\right) \) and \( \left( {2,1}\right) \) and zeros elsewhere. The matrix Schubert variety \( {\bar{X}}_{w} \) is the set of \( 3 \times 3 \) matrices whose upper left entry is 0, and whose determinant vanishes. The ideal \( {I}_{w} \) is
\[ \left\langle {{x}_{11},\det \left\lbrack \begin{array}{lll} {x}_{11} & {x}_{12} & {x}_{13} \\ {x}_{21} & {x}_{22} & {x}_{23} \\ {x}_{31} & {x}_{32} & {x}_{33} \end{array}\right\rbrack }\right\rangle \]
The generators of \( {I}_{w} \) are the same as those of the ideal \( {I}_{2143} \) for the permutation in \( {S}_{4} \) sending \( 1 \mapsto 2,2 \mapsto 1,3 \mapsto 4 \) and \( 4 \mapsto 3 \) .
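Definition 15.5 is likewise algorithmic: for each \( \left( {p, q}\right) \) one lists the minors of size \( 1 + {r}_{pq}\left( w\right) \) whose rows and columns are drawn from the upper left \( p \times q \) block. The Python sketch below (names ours) enumerates the distinct row/column index pairs for the partial permutation of Example 15.7, with 1 entries at \( \left( {1,2}\right) \) and \( \left( {2,1}\right) \); after removing repetitions, only \( {x}_{11} \) and the full \( 3 \times 3 \) determinant survive.

```python
from itertools import combinations

def rank_array(ones, k, l):
    """ones = set of positions of the 1 entries of the partial permutation;
    r_pq is the number of 1 entries weakly northwest of (p, q)."""
    return {(p, q): sum(1 for (i, j) in ones if i <= p and j <= q)
            for p in range(1, k + 1) for q in range(1, l + 1)}

def minor_specs(ones, k, l):
    """Distinct (rows, cols) index pairs of the minors generating I_w."""
    specs = set()
    for (p, q), rpq in rank_array(ones, k, l).items():
        s = rpq + 1   # minors of size 1 + r_pq(w) in the p-by-q block
        for rows in combinations(range(1, p + 1), s):
            for cols in combinations(range(1, q + 1), s):
                specs.add((rows, cols))
    return specs

# Example 15.7: generators are x11 and the full 3x3 determinant.
specs = minor_specs({(1, 2), (2, 1)}, 3, 3)
assert specs == {((1,), (1,)), ((1, 2, 3), (1, 2, 3))}
```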
It might seem a bit unhelpful of us to have ignored partial permutations until the very last example above, but in fact there is a general principle illustrated by Example 15.7 that gets us off the hook. Let us say that a partial permutation \( \widetilde{w} \) extends \( w \) if the matrix \( \widetilde{w} \) has northwest corner \( w \) . Proposition 15.8 Every partial permutation matrix \( w \) can be extended canonically to a square permutation matrix \( \widetilde{w} \) whose Schubert determinantal ideal \( {I}_{\widetilde{w}} \) has the same minimal generating minors as \( {I}_{w} \) . Proof. Suppose that \( w \) is not already a permutation and that by symmetry there is a row (as opposed to a column) of \( w \) that has no 1 entries. Define \( {w}^{\prime } \) by adding a new column and placing a 1 entry in its highest possible row. Define \( \widetilde{w} \) by continuing until there is a 1 entry in every row and column. The Schubert determinantal ideal of any partial permutation matrix extending \( w \) contains the generators of \( {I}_{w} \) by definition. Therefore it is enough—by
1167_(GTM73)Algebra
Definition 3.1
Definition 3.1. Let \( \mathrm{K} \) be a field and \( \mathrm{f} \in \mathrm{K}\left\lbrack \mathrm{x}\right\rbrack \) a polynomial of positive degree. An extension field \( \mathrm{F} \) of \( \mathrm{K} \) is said to be a splitting field over \( \mathrm{K} \) of the polynomial \( \mathrm{f} \) if \( \mathrm{f} \) splits in \( \mathrm{F}\left\lbrack \mathrm{x}\right\rbrack \) and \( \mathrm{F} = \mathrm{K}\left( {{\mathrm{u}}_{1},\ldots ,{\mathrm{u}}_{\mathrm{n}}}\right) \) where \( {\mathrm{u}}_{1},\ldots ,{\mathrm{u}}_{\mathrm{n}} \) are the roots of \( \mathrm{f} \) in \( \mathrm{F} \) . Let \( \mathrm{S} \) be a set of polynomials of positive degree in \( \mathrm{K}\left\lbrack \mathrm{x}\right\rbrack \) . An extension field \( \mathrm{F} \) of \( \mathrm{K} \) is said to be a splitting field over \( \mathbf{K} \) of the set \( \mathbf{S} \) of polynomials if every polynomial in \( \mathbf{S} \) splits in \( \mathrm{F}\left\lbrack \mathrm{x}\right\rbrack \) and \( \mathrm{F} \) is generated over \( \mathrm{K} \) by the roots of all the polynomials in \( \mathrm{S} \) . EXAMPLES. The only roots of \( {x}^{2} - 2 \) over \( \mathbf{Q} \) are \( \sqrt{2} \) and \( - \sqrt{2} \) and \( {x}^{2} - 2 \) \( = \left( {x - \sqrt{2}}\right) \left( {x + \sqrt{2}}\right) \) . Therefore \( \mathbf{Q}\left( \sqrt{2}\right) = \mathbf{Q}\left( {\sqrt{2}, - \sqrt{2}}\right) \) is a splitting field of \( {x}^{2} - 2 \) over \( \mathbf{Q} \) . Similarly \( \mathbf{C} \) is a splitting field of \( {x}^{2} + 1 \) over \( \mathbf{R} \) . However, if \( u \) is a root of an irreducible \( f \in K\left\lbrack x\right\rbrack, K\left( u\right) \) need not be a splitting field of \( f \) . For instance if \( u \) is the real cube root of 2 (the others being complex), then \( \mathbf{Q}\left( u\right) \subset \mathbf{R} \), whence \( \mathbf{Q}\left( u\right) \) is not a splitting field of \( {x}^{3} - 2 \) over \( \mathbf{Q} \) . REMARKS. 
If \( F \) is a splitting field of \( S \) over \( K \), then \( F = K\left( X\right) \), where \( X \) is the set of all roots of polynomials in the subset \( S \) of \( K\left\lbrack x\right\rbrack \) . Theorem 1.12 immediately implies that \( F \) is algebraic over \( K \) (and finite dimensional if \( S \), and hence \( X \), is a finite set). Note that if \( S \) is finite, say \( S = \left\{ {{f}_{1},{f}_{2},\ldots ,{f}_{n}}\right\} \), then a splitting field of \( S \) coincides with a splitting field of the single polynomial \( f = {f}_{1}{f}_{2}\cdots {f}_{n} \) (Exercise 1). This fact will be used frequently in the sequel without explicit mention. Thus the splitting field of a set \( S \) of polynomials will be chiefly of interest when \( S \) either consists of a single polynomial or is infinite. It will turn out that every [finite dimensional] algebraic Galois extension is in fact a particular kind of splitting field of a [finite] set of polynomials. The obvious question to be answered next is whether every set of polynomials has a splitting field. In the case of a single polynomial (or equivalently a finite set of polynomials), the answer is relatively easy. Theorem 3.2. If \( \mathrm{K} \) is a field and \( \mathrm{f} \in \mathrm{K}\left\lbrack \mathrm{x}\right\rbrack \) has degree \( \mathrm{n} \geq 1 \), then there exists a splitting field \( \mathrm{F} \) of \( \mathrm{f} \) with \( \left\lbrack {\mathrm{F} : \mathrm{K}}\right\rbrack \leq \mathrm{n} \) ! SKETCH OF PROOF. Use induction on \( n = \deg f \) . If \( n = 1 \) or if \( f \) splits over \( K \), then \( F = K \) is a splitting field. If \( n > 1 \) and \( f \) does not split over \( K \), let \( g \in K\left\lbrack x\right\rbrack \) be an irreducible factor of \( f \) of degree greater than one. By Theorem 1.10 there is a simple extension field \( K\left( u\right) \) of \( K \) such that \( u \) is a root of \( g \) and \( \left\lbrack {K\left( u\right) : K}\right\rbrack = \deg g > 1 \) . 
Then by Theorem III. 6.6, \( f = \left( {x - u}\right) h \) with \( h \in K\left( u\right) \left\lbrack x\right\rbrack \) of degree \( n - 1 \) . By induction there exists a splitting field \( F \) of \( h \) over \( K\left( u\right) \) of dimension at most \( \left( {n - 1}\right) \) ! Show that \( F \) is a splitting field of \( f \) over \( K \) (Exercise 3) of dimension \( \left\lbrack {F : K}\right\rbrack = \left\lbrack {F : K\left( u\right) }\right\rbrack \left\lbrack {K\left( u\right) : K}\right\rbrack \) \( \leq \left( {n - 1}\right) !\left( {\deg g}\right) \leq n! \) Proving the existence of a splitting field of an infinite set of polynomials is considerably more difficult. We approach the proof obliquely by introducing a special case of such a splitting field (Theorem 3.4) which is of great importance in its own right. Note: The reader who is interested only in splitting fields of a single polynomial (i.e. finite dimensional splitting fields) should skip to Theorem 3.8. Theorem 3.12 should be omitted and Theorems 3.8-3.16 read in the finite dimensional case. The proof of each of these results is either divided in two cases (finite and infinite dimensional) or is directly applicable to both cases. The only exception is the proof of (ii) \( \Rightarrow \) (i) in Theorem 3.14; an alternate proof is suggested in Exercise 25. Theorem 3.3. The following conditions on a field \( \mathrm{F} \) are equivalent. 
(i) Every nonconstant polynomial \( \mathrm{f} \in \mathrm{F}\left\lbrack \mathrm{x}\right\rbrack \) has a root in \( \mathrm{F} \) ; (ii) every nonconstant polynomial \( \mathrm{f}\varepsilon \mathrm{F}\left\lbrack \mathrm{x}\right\rbrack \) splits over \( \mathrm{F} \) ; (iii) every irreducible polynomial in \( \mathrm{F}\left\lbrack \mathrm{x}\right\rbrack \) has degree one; (iv) there is no algebraic extension field of \( \mathrm{F} \) (except \( \mathrm{F} \) itself); (v) there exists a subfield \( \mathrm{K} \) of \( \mathrm{F} \) such that \( \mathrm{F} \) is algebraic over \( \mathrm{K} \) and every polynomial in \( \mathrm{K}\left\lbrack \mathrm{x}\right\rbrack \) splits in \( \mathrm{F}\left\lbrack \mathrm{x}\right\rbrack \) . PROOF. Exercise; see Section III. 6 and Theorems 1.6, 1.10, 1.12 and 1.13. A field that satisfies the equivalent conditions of Theorem 3.3 is said to be algebraically closed. For example, we shall show that the field \( \mathbf{C} \) of complex numbers is algebraically closed (Theorem 3.19). Theorem 3.4. If \( \mathrm{F} \) is an extension field of \( \mathrm{K} \), then the following conditions are equivalent. (i) \( \mathrm{F} \) is algebraic over \( \mathrm{K} \) and \( \mathrm{F} \) is algebraically closed; (ii) \( \mathrm{F} \) is a splitting field over \( \mathrm{K} \) of the set of all [irreducible] polynomials in \( \mathrm{K}\left\lbrack \mathrm{x}\right\rbrack \) . PROOF. Exercise; also see Exercises 9, 10. An extension field \( F \) of a field \( K \) that satisfies the equivalent conditions of Theorem 3.4 is called an algebraic closure of \( K \) . For example, \( \mathbf{C} = \mathbf{R}\left( i\right) \) is an algebraic closure of \( \mathbf{R} \) . 
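The examples above lend themselves to a quick numerical illustration: all three complex roots of \( {x}^{3} - 2 \) exist in \( \mathbf{C} \) (consistent with \( \mathbf{C} \) being algebraically closed), but only one of them is real, so \( \mathbf{Q}\left( u\right) \subset \mathbf{R} \) cannot be a splitting field of \( {x}^{3} - 2 \). A Python sketch, floating-point and purely illustrative:

```python
import cmath

# The three cube roots of 2 in C: the real root times the cube roots of unity.
roots = [2 ** (1 / 3) * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
for z in roots:
    assert abs(z ** 3 - 2) < 1e-9          # each really is a root

real_roots = [z for z in roots if abs(z.imag) < 1e-9]
assert len(real_roots) == 1                # only the real cube root lies in R
```

The splitting field \( \mathbf{Q}\left( {u,\omega }\right) \), with \( \omega \) a primitive cube root of unity, has dimension \( 6 = 3! \) over \( \mathbf{Q} \), attaining the bound of Theorem 3.2.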
Clearly, if \( F \) is an algebraic closure of \( K \) and \( S \) is any set of polynomials in \( K\left\lbrack x\right\rbrack \), then the subfield \( E \) of \( F \) generated by \( K \) and all roots of polynomials in \( S \) is a splitting field of \( S \) over \( K \) by Theorems 3.3 and 3.4. Thus the existence of arbitrary splitting fields over a field \( K \) is equivalent to the existence of an algebraic closure of \( K \). The chief difficulty in proving that every field \( K \) has an algebraic closure is set-theoretic rather than algebraic. The basic idea is to apply Zorn's Lemma to a suitably chosen set of algebraic extension fields of \( K.{}^{2} \) To do this we need ## Lemma 3.5. If \( \mathrm{F} \) is an algebraic extension field of \( \mathrm{K} \), then \( \left| \mathrm{F}\right| \leq {\aleph }_{\mathrm{o}}\left| \mathrm{K}\right| \) . SKETCH OF PROOF. Let \( T \) be the set of monic polynomials of positive degree in \( K\left\lbrack x\right\rbrack \). We first show that \( \left| T\right| = {\aleph }_{0}\left| K\right| \). For each \( n \in {\mathbf{N}}^{ * } \) let \( {T}_{n} \) be the set of all polynomials in \( T \) of degree \( n \). Then \( \left| {T}_{n}\right| = \left| {K}^{n}\right| \), where \( {K}^{n} = K \times K \times \cdots \times K \) ( \( n \) factors), since every polynomial \( f = {x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{0} \in {T}_{n} \) is completely determined by its \( n \) coefficients \( {a}_{0},{a}_{1},\ldots ,{a}_{n - 1} \in K \). For each \( n \in {\mathbf{N}}^{ * } \) let \( {f}_{n} : {T}_{n} \rightarrow {K}^{n} \) be a bijection. Since the sets \( {T}_{n} \) [resp. \( {K}^{n} \) ] are mutually disjoint, the map \( f : T = \mathop{\bigcup }\limits_{{n \in {\mathbf{N}}^{ * }}}{T}_{n} \rightarrow \mathop{\bigcup }\limits_{{n \in {\mathbf{N}}^{ * }}}{K}^{n} \), given by \( f\left( u\right) = {f}_{n}\left( u\right) \) for \( u \in {T}_{n} \), is a well-defined bijection.
Therefore \( \left| T\right| = \left| {\mathop{\bigcup }\limits_{{{n\varepsilon }{\mathrm{N}}^{ * }}}{K}^{n}}\right| = {\aleph }_{0}\left| K\right| \) by Introduction, Theorem 8.12(ii). Next we show that \( \left| F\right| \leq \left| T\right| \), which will complete the proof. For each irreducible \( {f\varepsilon T} \), choose an ordering of the distinct roots of \( f \) in \( F \) . Define a map \( F \rightarrow T \times {\mathbf{N}}^{ * } \) as follows. If \( a \in F \), then \( a \) is algebraic over \( K \) by hypothesis, and there exists a unique irreducible monic polynomial \( {f\varepsilon T} \) with \( f\left( a\right) = 0 \) (Theorem 1.6). Assign to \( {a\varepsilon F} \) the pair \( \left( {f, i}\right) \in T \times {\mathbf{N}}^{ * } \) where \( a \) is the \( i \) th root of \( f \) in the previously chosen ordering of the roots of \( f \) in \( F \) . Verify t
1069_(GTM228)A First Course in Modular Forms
Definition 7.2.1
Definition 7.2.1. If \( \overline{\mathbf{k}}\left( C\right) \) is a finite extension of a field \( \overline{\mathbf{k}}\left( t\right) \) where \( t \) is transcendental over \( \overline{\mathbf{k}} \) then \( C \) is an affine algebraic curve over \( \mathbf{k} \) . If also for each point \( P \in C \) the \( m \) -by-n derivative matrix \( \left\lbrack {{D}_{j}{\varphi }_{i}\left( P\right) }\right\rbrack \) has rank \( n - 1 \) then the curve \( C \) is nonsingular. The definition shows that if \( C \) is an affine algebraic curve over \( \mathbf{k} \) then \( m \geq n - 1 \), i.e., the number of constraints on the variables is at least the geometrically obvious minimal number for a curve. After we introduce more ideas it will be an exercise to show that the nonsingularity condition is a maximality condition on the rank of the derivative matrix, i.e., the rank cannot be \( n \) . Homogenize the polynomials \( {\varphi }_{i} \) with another variable \( {x}_{0} \), consider the ideal \[ {I}_{\text{hom }} = \left\langle {\varphi }_{i,\text{ hom }}\right\rangle \subset \overline{\mathbf{k}}\left\lbrack {{x}_{0},\ldots ,{x}_{n}}\right\rbrack \] generated by the homogeneous polynomials but also containing inhomogeneous ones (unless \( {I}_{\text{hom }} = \{ 0\} \) ), and consider the set \[ {C}_{\text{hom }} = \left\{ {P \in {\mathbb{P}}^{n}\left( \overline{\mathbf{k}}\right) : \varphi \left( P\right) = 0\text{ for all homogeneous }\varphi \in {I}_{\text{hom }}}\right\} . \] This is the projective version of the curve, a superset of \( C \), the union of the affine curves obtained by dehomogenizing the \( {\varphi }_{i,\text{ hom }} \) at each coordinate, cf. Exercise 7.1.3(d). The definition of nonsingularity extends to \( {C}_{\text{hom }} \) by dehomogenizing to appropriate coordinate systems at the points outside of \( {\overline{\mathbf{k}}}^{n} \) . 
In this chapter we are interested in nonsingular projective algebraic curves, but for convenience we usually work in the \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) affine coordinate system, illustrating some projective ideas in examples and exercises. Setting \( m = 0 \) and \( n = 1 \) shows that the projective line \( {\mathbb{P}}^{1}\left( \overline{\mathbf{k}}\right) \) is a nonsingular algebraic curve over \( \mathbf{k} \) with function field \( \overline{\mathbf{k}}\left( x\right) \) . When \( m = 1 \) and \( n = 2 \) and the single polynomial defining \( C \) is a Weierstrass polynomial \( E\left( {x, y}\right) \) then \( C \) is an elliptic curve \( \mathcal{E} \) as in Section 7.1. The function field is \( \overline{\mathbf{k}}\left( {x, y}\right) \) where \( x \) and \( y \) are related by a Weierstrass equation, or \( \overline{\mathbf{k}}\left( x\right) \left\lbrack y\right\rbrack /\langle E\left( {x, y}\right) \rangle \), showing that it is quadratic over \( \overline{\mathbf{k}}\left( x\right) \) . Thus elliptic curves over \( \mathbf{k} \) are indeed nonsingular projective algebraic curves over \( \mathbf{k} \) as in Definition 7.2.1 and the preceding paragraph. The projective line can also be placed in the \( m = 1, n = 2 \) context by taking \( \varphi \left( {x, y}\right) = y \) . At the end of the section we will see why the definition of algebraic curve has not been limited to \( m = 1 \) and \( n = 2 \), i.e., to plane curves. Let \( C \) be a nonsingular algebraic curve over \( \mathbf{k} \) . Since \( C \) is defined by the condition \( \varphi \left( P\right) = 0 \) for all \( \varphi \in I \), an element \( f + I \) of the coordinate ring takes a well defined value in \( \overline{\mathbf{k}} \) at each point \( P \in C \) even though its representative \( f \) is not unique. That is, a polynomial function on \( C \), a formal algebraic object, also defines a mapping from the set \( C \) to \( \overline{\mathbf{k}} \) . 
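As a concrete check of the rank condition in Definition 7.2.1, take the case \( m = 1, n = 2 \) with \( \varphi(x, y) = y^2 - x^3 - 1 \), a Weierstrass-type curve (the specific example is ours). Nonsingularity at \( P \) means the \( 1 \)-by-\( 2 \) matrix \( [D_1\varphi(P), D_2\varphi(P)] \) has rank \( n - 1 = 1 \), i.e. the gradient is nonzero:

```python
from fractions import Fraction

# phi(x, y) = y^2 - x^3 - 1 defines the affine curve C (m = 1, n = 2).
def phi(x, y):
    return y * y - x ** 3 - 1

def grad_phi(x, y):
    # derivative matrix [D_1 phi, D_2 phi] = [-3x^2, 2y]
    return (-3 * x * x, 2 * y)

def is_nonsingular_at(x, y):
    assert phi(x, y) == 0, "point must lie on the curve"
    return grad_phi(x, y) != (0, 0)  # rank 1 iff gradient nonzero

points = [(Fraction(-1), Fraction(0)), (Fraction(0), Fraction(1)),
          (Fraction(2), Fraction(3))]
print(all(is_nonsingular_at(x, y) for x, y in points))  # True
```

At \( (-1, 0) \) the gradient is \( (-3, 0) \): the tangent line there is vertical, but the rank is still \( 1 \), so the point is nonsingular.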
A priori it is not clear that an element \( F = f/g \) of the function field takes a well defined value in \( \overline{\mathbf{k}} \cup \{ \infty \} \) at each \( P \in C \) since \( f\left( P\right) /g\left( P\right) \) might evaluate to \( 0/0 \), but we will see that it does and therefore a rational function \( F \in \overline{\mathbf{k}}\left( C\right) \) defines a mapping from \( C \) to \( {\mathbb{P}}^{1}\left( \overline{\mathbf{k}}\right) \) . In fact it defines a mapping on all of \( {C}_{\text{hom }} \) . Let \( P = \left( {{p}_{1},\ldots ,{p}_{n}}\right) \) be a point of \( C \) . To study rational functions on \( C \) at \( P \) start with the elements of the coordinate ring that vanish at \( P \) , \[ {m}_{P} = \{ f \in \overline{\mathbf{k}}\left\lbrack C\right\rbrack : f\left( P\right) = 0\} = \left\langle \left\{ {{x}_{i} - {p}_{i} + I : 1 \leq i \leq n}\right\} \right\rangle . \] This is a maximal ideal of \( \overline{\mathbf{k}}\left\lbrack C\right\rbrack \) (Exercise 7.2.2), and its second power is \[ {m}_{P}^{2} = \left\langle \left\{ {\left( {{x}_{i} - {p}_{i}}\right) \left( {{x}_{j} - {p}_{j}}\right) + I : 1 \leq i, j \leq n}\right\} \right\rangle . \] Lemma 7.2.2. Let \( C \) be a nonsingular algebraic curve. For any point \( P \in C \) , the quotient \( {m}_{P}/{m}_{P}^{2} \) is a 1-dimensional vector space over \( \overline{\mathbf{k}} \) . Proof. The tangent line to \( C \) at \( P \) is \[ {T}_{P}\left( C\right) = \left\{ {v \in {\overline{\mathbf{k}}}^{n} : \left\lbrack {{D}_{j}{\varphi }_{i}\left( P\right) }\right\rbrack v = 0}\right\} \] a 1-dimensional vector space over \( \overline{\mathbf{k}} \) by the Rank-Nullity Theorem of linear algebra since \( C \) is nonsingular. Although a function \( f + I \in \overline{\mathbf{k}}\left\lbrack C\right\rbrack \) has a well defined value at \( P \), its form as a coset makes its gradient (vector of partial derivatives) at \( P \) ill defined. 
But changing coset representatives changes the gradient to \[ \nabla \left( {f + \varphi }\right) \left( P\right) = \nabla f\left( P\right) + \nabla \varphi \left( P\right) ,\;\varphi \in I, \] and since \( \nabla \varphi \left( P\right) \) is in the row space of \( \left\lbrack {{D}_{j}{\varphi }_{i}\left( P\right) }\right\rbrack \) (Exercise 7.2.3(a)) the definition of the tangent line shows that the inner product of the gradient \( \nabla f\left( P\right) \) with any \( v \in {T}_{P}\left( C\right) \) is well defined. Translating \( f \) by a constant has no effect on the inner product, and any function \( f \in {m}_{P}^{2} \) satisfies \( \nabla f\left( P\right) \cdot v = 0 \) . Thus there is a well defined pairing \[ {m}_{P}/{m}_{P}^{2} \times {T}_{P}\left( C\right) \rightarrow \overline{\mathbf{k}},\;\left( {f, v}\right) \mapsto \nabla f\left( P\right) \cdot v. \] This pairing is perfect, meaning that it is linear and nondegenerate in each component, cf. the proof of Proposition 6.6.4 (Exercise 7.2.3(b)). It follows that \( {m}_{P}/{m}_{P}^{2} \) and \( {T}_{P}\left( C\right) \) have the same dimension as vector spaces over \( \overline{\mathbf{k}} \) since each can be viewed as the dual space of the other. Since the tangent line \( {T}_{P}\left( C\right) \) has dimension 1, so does \( {m}_{P}/{m}_{P}^{2} \) . For example, let \( \mathcal{E} \) be an elliptic curve defined by a Weierstrass polynomial \( E \) . The proof of Lemma 7.2.2 shows that \( x - {x}_{P} + {m}_{P}^{2} \) spans \( {m}_{P}/{m}_{P}^{2} \) if \( {D}_{2}E\left( P\right) \neq 0 \), meaning the tangent line at \( P \) is not vertical, and \( y - {y}_{P} + {m}_{P}^{2} \) spans \( {m}_{P}/{m}_{P}^{2} \) if \( {D}_{1}E\left( P\right) \neq 0 \), meaning the tangent line is not horizontal (Exercise 7.2.3(c)).
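The pairing in the proof of Lemma 7.2.2 can be made concrete. For the curve \( E(x, y) = y^2 - x^3 - 1 \) at \( P = (0, 1) \) (our own toy example), the tangent line is the null space of \( \nabla E(P) = (0, 2) \), and pairing it against the coset representative \( x - x_P \) is a dot product:

```python
# Tangent-line pairing for E(x, y) = y^2 - x^3 - 1 at P = (0, 1) (toy example).
P = (0, 1)
grad_E = (-3 * P[0] ** 2, 2 * P[1])   # [D_1 E(P), D_2 E(P)] = (0, 2)

# T_P(C) is the null space of the 1-by-2 matrix grad_E, spanned here by v:
v = (1, 0)
assert grad_E[0] * v[0] + grad_E[1] * v[1] == 0

# Pair the coset of f = x - x_P (gradient (1, 0)) with v:
grad_f = (1, 0)
pairing = grad_f[0] * v[0] + grad_f[1] * v[1]
print(pairing)  # 1
```

The pairing is nonzero, so \( x - x_P + m_P^2 \) spans \( m_P/m_P^2 \), consistent with \( D_2E(P) = 2 \neq 0 \) (tangent not vertical).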
The local ring of \( C \) over \( \overline{\mathbf{k}} \) at \( P \) is \[ \overline{\mathbf{k}}{\left\lbrack C\right\rbrack }_{P} = \{ f/g \in \overline{\mathbf{k}}\left( C\right) : g\left( P\right) \neq 0\} , \] a subring of the function field. The local maximal ideal at \( P \) is the ideal of functions that vanish at the point, \[ {M}_{P} = {m}_{P}\overline{\mathbf{k}}{\left\lbrack C\right\rbrack }_{P} = \left\{ {f/g \in \overline{\mathbf{k}}{\left\lbrack C\right\rbrack }_{P} : f\left( P\right) = 0}\right\} . \] This is a maximal ideal of \( \overline{\mathbf{k}}{\left\lbrack C\right\rbrack }_{P} \), and since it consists of all the noninvertible elements it is the unique maximal ideal (this is not true for \( {m}_{P} \) and \( \overline{\mathbf{k}}\left\lbrack C\right\rbrack \) ). Consider the natural map from the maximal ideal to the quotient of its localization, \[ i : {m}_{P} \rightarrow {M}_{P}/{M}_{P}^{2},\;f \mapsto f + {M}_{P}^{2}. \] The kernel is \( {m}_{P} \cap {M}_{P}^{2} = {m}_{P}^{2} \) (Exercise 7.2.3(d)). The map surjects since for any \( f/g \in {M}_{P} \) the subtraction \( f/g\left( P\right) - f/g = f\left( {g - g\left( P\right) }\right) /\left( {g\left( P\right) g}\right) \in {M}_{P}^{2} \) shows that \( i\left( {f/g\left( P\right) }\right) = f/g + {M}_{P}^{2} \) . Thus \( i \) induces an isomorphism \[ i : {m}_{P}/{m}_{P}^{2}\overset{ \sim }{ \rightarrow }{M}_{P}/{M}_{P}^{2} \] Proposition 7.2.3. For any nonsingular point \( P \in C \) the ideal \( {M}_{P} \) is principal. Proof. By the lemma and the isomorphism \( i \), some function \( t \in \overline{\mathbf{k}}\left\lbrack C\right\rbrack \) generates \( {M}_{P}/{M}_{P}^{2} \) as a vector space over \( \overline{\mathbf{k}} \) . This function will also generate \( {M}_{P} \) as an ideal of \( \overline{\mathbf{k}}{\left\lbrack C\right\rbrack }_{P} \) . 
Letting \( N = t\overline{\mathbf{k}}{\left\lbrack C\right\rbrack }_{P} \), an ideal of \( {M}_{P} \), it suffices to show that the quotient \( {M}_{P}/N \) is trivial. Since \( {M}_{P} \) is a \( \overline{\mathbf{k}}{\left\lbrack C\right\rbrack }_{P} \) -module, so is the quotient. Noting that \( {M}_{P} \cdot \left( {{M}_{P}/N}\right) = \left( {N + {M}_{P}^{2}}\right) /N \) and that \( N + {M}_{P}^{2} = \) \( \left( {\overline{\mathbf{k}}t + {M}_{P}^{2}}\right) \overlin
1129_(GTM35)Several Complex Variables and Banach Algebras
Definition 23.3
Definition 23.3. Let \( \Omega \) be a bounded domain in \( {\mathbb{C}}^{N} \), defined by \( \{ \rho < 0\} \), with \( \rho \) continuous. Fix \( {z}^{0} \in \partial \Omega \) such that \( \rho \) is \( {\mathcal{C}}^{2} \) on a neighborhood of \( {z}^{0} \) . We say that \( \partial \Omega \) is strictly pseudoconvex at \( {z}^{0} \) if we have (8) \[ \mathop{\sum }\limits_{{j, k = 1}}^{N}{\left( \frac{{\partial }^{2}\rho }{\partial {z}_{j}\partial {\bar{z}}_{k}}\right) }_{0}{\zeta }_{j}{\bar{\zeta }}_{k} > 0\;\forall \text{ complex tangents }\zeta \neq 0 \in {T}_{{z}^{0}}. \] Remark. This definition is analogous to strict convexity in \( {\mathbb{R}}^{N} \) . In fact, the next lemma, due to R. Narasimhan, shows that there is more than an analogy here. Lemma 23.3. Suppose that \( \partial \Omega \) is strictly pseudoconvex at \( {z}^{0} \), where \( \Omega \subseteq {\mathbb{C}}^{n} \) . Then there exists an open neighborhood \( V \) of \( {z}^{0} \) in \( {\mathbb{C}}^{n} \) and a biholomorphism \( \phi \) of \( V \) onto an open set in \( {\mathbb{C}}^{n} \) such that \( \phi \left( {\Omega \cap V}\right) \) is strictly convex in \( {\mathbb{C}}^{n} \) . Remark. Suppose that \( {z}^{0},\Omega \), and \( V \) are as in the lemma. Then there exists a function \( \psi \) holomorphic in \( V \) such that \[ Z\left( \psi \right) = \{ z \in V : \psi \left( z\right) = 0\} \] is a complex submanifold of \( V \) that passes through \( {z}^{0} \) and satisfies (9) \[ Z\left( \psi \right) \smallsetminus \left\{ {z}^{0}\right\} \subset {\mathbb{C}}^{n} \smallsetminus \bar{\Omega } \] In fact, if \( \Omega \) is strictly convex at \( {z}^{0} \), then for \( \psi \) we can take the affine complex linear function \( F \) such that \( \{ \operatorname{Re}F = 0\} = {z}^{0} + {T}_{{z}^{0}}\left( {\partial \Omega }\right) \) . 
In the general case, we apply the lemma and we set \( \psi = F \circ \phi \), where \( F \) is the affine linear function for the strictly convex image \( \phi \left( {\Omega \cap V}\right) \) . Proof of Lemma 23.3. Without loss of generality we may assume that \( {z}^{0} = 0 \) and that \( {T}_{{z}^{0}}\left( {\partial \Omega }\right) = \left\{ {{x}_{n} = 0}\right\} \) . We write \[ \rho \left( z\right) = {x}_{n} + \operatorname{Re}\left( {\mathop{\sum }\limits_{{1 \leq j, k \leq n}}{\alpha }_{jk}{z}_{j}{z}_{k}}\right) + \mathop{\sum }\limits_{{1 \leq j, k \leq n}}{c}_{jk}{z}_{j}{\bar{z}}_{k} + o\left( {\left| z\right| }^{2}\right) , \] where \( \left( {c}_{jk}\right) \) is Hermitian positive definite. We make a quadratic change of coordinates, putting \[ {w}_{j} = {z}_{j},1 \leq j < n\;\text{ and }\;{w}_{n} = {z}_{n} + \mathop{\sum }\limits_{{1 \leq j, k \leq n}}{\alpha }_{jk}{z}_{j}{z}_{k}. \] Then \( z \mapsto w \equiv \phi \left( z\right) \) gives a biholomorphism near \( z = 0 \), and in the \( w \) -coordinates \[ \rho = \operatorname{Re}{w}_{n} + \mathop{\sum }\limits_{{1 \leq j, k \leq n}}{c}_{jk}{w}_{j}{\bar{w}}_{k} + o\left( {\left| w\right| }^{2}\right) . \] Writing \( {w}_{j} = {u}_{j} + i{v}_{j},1 \leq j \leq n \), we see that the real Hessian \( H \) of \( \rho \circ {\phi }^{-1} \) at \( w = 0 \), with respect to the real coordinates \( {u}_{1},{v}_{1},{u}_{2},\cdots ,{u}_{n},{v}_{n} \), is given by \( H\left( {{u}_{1},{v}_{1},{u}_{2},\cdots ,{u}_{n},{v}_{n}}\right) = \mathop{\sum }\limits_{{1 \leq j, k \leq n}}{c}_{jk}{w}_{j}{\bar{w}}_{k} \) . Hence, since \( \left( {c}_{jk}\right) \) is Hermitian positive definite, it follows that \( H \) is (real) positive definite; this yields the desired strict convexity. We shall denote the projection map of \( {\mathbb{C}}^{2} \) to the first coordinate by \( \pi \), i.e., \( \pi \left( {\lambda, w}\right) = \lambda \) . 
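The quadratic change of coordinates in the proof of Lemma 23.3 can be sanity-checked numerically. The toy computation below (our own one-variable analogue, with made-up values \( a = 2 + i \) and \( c = 3 \)) takes \( \rho(z) = \operatorname{Re} z + \operatorname{Re}(a z^2) + c|z|^2 \), sets \( w = z + a z^2 \), and confirms that \( \rho - (\operatorname{Re} w + c|w|^2) \) is \( o(|z|^2) \):

```python
import cmath

a = 2 + 1j   # plays the role of the alpha-coefficients (made-up value)
c = 3.0      # plays the role of the positive definite Hermitian part

def rho(z):
    return z.real + (a * z * z).real + c * abs(z) ** 2

def defect(z):
    # difference between rho and its normal form in the w-coordinate
    w = z + a * z * z
    return rho(z) - (w.real + c * abs(w) ** 2)

z0 = cmath.exp(0.7j)  # a fixed direction on the unit circle
ratios = [abs(defect(t * z0)) / t ** 2 for t in (1e-1, 1e-2, 1e-3)]
print(ratios[0] > ratios[1] > ratios[2])  # True: defect / |z|^2 -> 0
```

Here the defect is exactly \( c\left( |z|^2 - |w|^2 \right) = O(|z|^3) \), so the ratio decays linearly as \( z \rightarrow 0 \).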
In the proof of the next theorem, we shall view \( \pi \) as a map restricted to a subset \( K \) of \( {\mathbb{C}}^{2} \), and we shall write \( {\pi }^{-1}\left( S\right) = \{ \left( {\lambda, w}\right) \in K : \lambda \in S\} \) . Theorem 23.4. Let \( \Omega \) be a domain in the product set \( \{ \left| \lambda \right| < 1\} \times \mathbb{C} \subset {\mathbb{C}}^{2} \) so that \( \partial \Omega \cap \{ \left| \lambda \right| < 1\} \) is smooth and strictly pseudoconvex at each point. Put \( K = \left\lbrack {\{ \left| \lambda \right| < 1\} \times \mathbb{C}}\right\rbrack \smallsetminus \Omega \) . Assume that \( K \) is bounded in \( {\mathbb{C}}^{2} \) . Denote by \( A \) the algebra of restrictions to \( K \) of all polynomials in \( \lambda \) and \( w \) . Then \( \left( {A, K, D,\pi }\right) \) is a maximum modulus algebra on \( K \) . Proof. We first fix a point \( \left( {{\lambda }_{0},{w}_{0}}\right) \) in \( K \) that lies on \( \partial \Omega \cap \{ \left| \lambda \right| < 1\} \) . Since \( \partial \Omega \) is strictly pseudoconvex at \( \left( {{\lambda }_{0},{w}_{0}}\right) \), we can use the remark to obtain a neighborhood \( V \) of \( \left( {{\lambda }_{0},{w}_{0}}\right) \) in \( {\mathbb{C}}^{2} \) as well as a function \( \psi \) holomorphic in \( V \) such that \( Z\left( \psi \right) \) passes through \( \left( {{\lambda }_{0},{w}_{0}}\right) \), and otherwise lies totally outside of \( \bar{\Omega } \) . We write \( \Sigma = Z\left( \psi \right) \) . Let us denote by \( \Lambda \) the restriction of the coordinate function \( \lambda \) to \( \Sigma \) . Case 1. \( \Lambda \) is not a constant. Then there exists \( \epsilon > 0 \) such that the image of \( \Sigma \) under \( \Lambda \) contains the disk \( \left\{ {\left| {\lambda - {\lambda }_{0}}\right| \leq \epsilon }\right\} \) . We define the region \[ W = \left\{ {\left| {\Lambda - {\lambda }_{0}}\right| < \epsilon }\right\} \] on \( \Sigma \) . 
Then \[ \partial W = \left\{ {\left| {\Lambda - {\lambda }_{0}}\right| = \epsilon }\right\} \] on \( \Sigma \) . Since \( \Sigma \smallsetminus \left\{ \left( {{\lambda }_{0},{w}_{0}}\right) \right\} \) lies outside \( \bar{\Omega } \), \( \Sigma \subseteq K \) . Hence \( \partial W \subseteq K \), and so \[ \partial W \subseteq {\pi }^{-1}\left( \left\{ {\left| {\lambda - {\lambda }_{0}}\right| = \epsilon }\right\} \right) . \] Now fix a polynomial \( Q \) in \( \lambda \) and \( w \) . The restriction of \( Q \) to \( \Sigma \) is analytic on \( \Sigma \) . The maximum principle on \( W \) then gives \[ \left| {Q\left( {{\lambda }_{0},{w}_{0}}\right) }\right| \leq \mathop{\max }\limits_{{\partial W}}\left| Q\right| \leq \mathop{\max }\limits_{{{\pi }^{-1}\left( \left\{ {\left| {\lambda - {\lambda }_{0}}\right| = \epsilon }\right\} \right) }}\left| Q\right| . \] Case 2. \( \Lambda \) is constant on \( \Sigma \) . Then \( \Lambda \equiv {\lambda }_{0} \) on \( \Sigma \), so \( \Sigma \) is an open set on the complex line \( \left\{ {\lambda = {\lambda }_{0}}\right\} \) . Since \( \Sigma \smallsetminus \left\{ \left( {{\lambda }_{0},{w}_{0}}\right) \right\} \) lies outside \( \bar{\Omega } \), \( \Sigma \smallsetminus \left\{ \left( {{\lambda }_{0},{w}_{0}}\right) \right\} \) lies in the interior of \( K \), and so we can choose a region \( W \) on \( \Sigma \) such that \( \left( {{\lambda }_{0},{w}_{0}}\right) \in W \) and \( \partial W \) is a compact subset of \( \operatorname{int}\left( K\right) \) . It follows that there exists \( \epsilon > 0 \) such that, for each \( \left( {\lambda ,{w}_{1}}\right) \in \partial W \), the entire horizontal disk \[ \left\{ {\left( {{\lambda }_{0} + \tau ,{w}_{1}}\right) : \left| \tau \right| \leq \epsilon }\right\} \] is contained in \( K \) . 
The boundary of the disk, \[ \left\{ {\left( {{\lambda }_{0} + \tau ,{w}_{1}}\right) : \left| \tau \right| = \epsilon }\right\} \] then is contained in \[ {\pi }^{-1}\left( \left\{ {\left| {\lambda - {\lambda }_{0}}\right| = \epsilon }\right\} \right) \] Fix a polynomial \( Q\left( {\lambda, w}\right) \) . Then (10) \[ \left| {Q\left( {{\lambda }_{0},{w}_{0}}\right) }\right| \leq \left| {Q\left( {{\lambda }_{0},{w}_{1}}\right) }\right| \] for some \( \left( {{\lambda }_{0},{w}_{1}}\right) \in \partial W \) . Then, by the maximum principle on the horizontal complex line through \( \left( {{\lambda }_{0},{w}_{1}}\right) \) , (11) \[ \left| {Q\left( {{\lambda }_{0},{w}_{1}}\right) }\right| \leq \left| {Q\left( {{\lambda }_{0} + \tau ,{w}_{1}}\right) }\right| \] for some \( \tau \) with \( \left| \tau \right| = \epsilon \) . It follows from (10) and (11) that \( \left| {Q\left( {{\lambda }_{0},{w}_{0}}\right) }\right| \leq \) \( \mathop{\max }\limits_{{{\pi }^{-1}\left( \left\{ {\left| {\lambda - {\lambda }_{0}}\right| = \epsilon }\right\} \right) }}\left| Q\right| \) . We have completed the case of a point \( \left( {{\lambda }_{0},{w}_{0}}\right) \) in \( K \) that lies on \( \partial \Omega \cap \{ \left| \lambda \right| < 1\} \) . The case of \( \left( {{\lambda }_{0},{w}_{0}}\right) \in K \), but \( \left( {{\lambda }_{0},{w}_{0}}\right) \notin \partial \Omega \), clearly reduces to the previous one. Finally, it remains to show that \( \pi \) is a proper map of \( K \) onto \( D = \{ \left| \lambda \right| < 1\} \) . That \( \pi \) is proper is clear since \( K \) is a bounded and relatively closed subset of \( D \times \mathbb{C} \) . In particular, \( \pi \left( K\right) \) is a closed subset of \( D \) . To see that \( \pi \) maps onto \( D \) it therefore suffices to show that \( \pi \left( K\right) \) is an open subset of \( D \) . 
We have \( K = \operatorname{int}\left( K\right) \cup \left( {\partial \Omega \cap \left( {D \times \mathbb{C}}\right) }\right) \) . \( \pi \left( {\operatorname{int}\left( K\right) }\right) \) is clearly open and so it suffices to show that \( \pi \left( {\partial \Omega \cap \left( {D \times \mathbb{C}}\right) }\right) \) is contained in the interior of \( \pi \left( K\right) \) . This is clear from the discussions of Case 1 and of Case 2 above. This completes the proof. ## 5 Levi-Flat Hypersurfaces Consider a region \( \Omega \) in \( {\mathbb{R}}^{N} \
1172_(GTM8)Axiomatic Set Theory
Definition 3.19
Definition 3.19. If \( \langle X, T\rangle \) is a topological space, if \( {X}^{\prime } \subseteq X \) and \( {T}^{\prime } = \) \( \left\{ {{X}^{\prime } \cap N \mid N \in T}\right\} \) then \( {T}^{\prime } \) is the relative topology on \( {X}^{\prime } \) induced by \( T \) and \( \left\langle {{X}^{\prime },{T}^{\prime }}\right\rangle \) is a subspace of \( \langle X, T\rangle \) . Theorem 3.20. If \( \langle X, T\rangle \) is a topological space, if \( {X}^{\prime } \subseteq X \), if \( {T}^{\prime } \) is the relative topology on \( {X}^{\prime } \) induced by \( T \), if \( B \) is a base for \( T \) and \[ {B}^{\prime } = \left\{ {{X}^{\prime } \cap N \mid N \in B}\right\} \] then \( {B}^{\prime } \) is a base for \( {T}^{\prime } \) . Proof. Left to the reader. Theorem 3.21. If \( \langle X, T\rangle \) is a topological space, if \( {X}^{\prime } \subseteq X \), and if \( {T}^{\prime } \) is the relative topology on \( {X}^{\prime } \) induced by \( T \) then 1. \( A \) is an open set in \( T \) implies \( A \cap {X}^{\prime } \) is an open set in \( {T}^{\prime } \) 2. \( A \) is closed in \( T \) implies \( A \cap {X}^{\prime } \) is closed in \( {T}^{\prime } \) 3. \( A \) is clopen in \( T \) implies \( A \cap {X}^{\prime } \) is clopen in \( {T}^{\prime } \) . Proof. 1. If \( A \) is open in \( T \) then \[ \left( {\forall a \in A \cap {X}^{\prime }}\right) \left( {\exists N\left( a\right) \in T}\right) \left\lbrack {N\left( a\right) \subseteq A}\right\rbrack . \] Then \( N\left( a\right) \cap {X}^{\prime } \in {T}^{\prime } \) and \( a \in N\left( a\right) \cap {X}^{\prime } \subseteq A \cap {X}^{\prime } \) . Thus \( A \cap {X}^{\prime } \) is open in \( {T}^{\prime } \) . 2. 
If \( a \in {X}^{\prime } \) and \( \left( {\forall N\left( a\right) \in {T}^{\prime }}\right) \left\lbrack {N\left( a\right) \cap \left( {A \cap {X}^{\prime }}\right) \neq 0}\right\rbrack \) then \[ \left( {\forall N\left( a\right) \in T}\right) \left\lbrack {N\left( a\right) \cap {X}^{\prime } \cap A \neq 0}\right\rbrack . \] Thus \[ \left( {\forall N\left( a\right) \in T}\right) \left\lbrack {N\left( a\right) \cap A \neq 0}\right\rbrack . \] Since \( A \) is closed \( a \in A \) and hence \( a \in A \cap {X}^{\prime } \) i.e., \( A \cap {X}^{\prime } \) is closed in \( {T}^{\prime } \) . 3. If \( A \) is both open and closed in \( T \) then by 1 and 2 above \( A \cap {X}^{\prime } \) is both open and closed in \( {T}^{\prime } \) . Theorem 3.22. If \( \langle X, T\rangle \) is a locally compact Hausdorff space then for each open set \( A \) and each \( a \in A \) there exists an open set \( B \) such that \[ a \in B \land {B}^{ - } \subseteq A\text{.} \] Proof. If \( A \) is an open set in \( X \) and \( a \in A \) then since \( \langle X, T\rangle \) is locally compact \( \exists N\left( a\right), N{\left( a\right) }^{ - } \) is compact. If \[ M = {\left( N{\left( a\right) }^{ - } \cap A\right) }^{0} \] then \( {M}^{ - } \) is also compact. If \[ {T}^{\prime } = \left\{ {{M}^{ - } \cap A \mid A \in T}\right\} \] then \( \left\langle {{M}^{ - },{T}^{\prime }}\right\rangle \) is a compact Hausdorff space. In this space \( {M}^{ - } - M \) is closed and hence compact. Moreover \[ \left( {\forall y \in {M}^{ - } - M}\right) \left( {\exists N\left( y\right) \in {T}^{\prime }}\right) \left\lbrack {a \notin N{\left( y\right) }^{ - }}\right\rbrack . 
\] Since \( {M}^{ - } - M \) is compact there is a finite collection of elements of \( {T}^{\prime } \) \[ N\left( {y}_{1}\right) ,\ldots, N\left( {y}_{n}\right) \] such that \[ {M}^{ - } - M \subseteq N\left( {y}_{1}\right) \cup \cdots \cup N\left( {y}_{n}\right) \] and \[ a \notin N{\left( {y}_{1}\right) }^{ - } \land \cdots \land a \notin N{\left( {y}_{n}\right) }^{ - }. \] Therefore there exist neighborhoods in \( {T}^{\prime } \) \[ {M}_{1}\left( a\right) ,\ldots ,{M}_{n}\left( a\right) \] such that \[ {M}_{i}\left( a\right) \cap N\left( {y}_{i}\right) = 0\;i = 1,\ldots, n. \] If \( M\left( a\right) = \mathop{\bigcap }\limits_{{i \leq n}}{M}_{i}\left( a\right) \) then \[ M\left( a\right) \cap \left\lbrack {N\left( {y}_{1}\right) \cup \cdots \cup N\left( {y}_{n}\right) }\right\rbrack = 0 \] Therefore \[ M\left( a\right) \subseteq {M}^{ - } - \left\lbrack {N\left( {y}_{1}\right) \cup \cdots \cup N\left( {y}_{n}\right) }\right\rbrack . \] But since \( N\left( {y}_{1}\right) \cup \cdots \cup N\left( {y}_{n}\right) \) is open \( {M}^{ - } - \left\lbrack {N\left( {y}_{1}\right) \cup \cdots \cup N\left( {y}_{n}\right) }\right\rbrack \) is closed. Hence \[ M{\left( a\right) }^{ - } \subseteq {M}^{ - } - \left\lbrack {N\left( {y}_{1}\right) \cup \cdots \cup N\left( {y}_{n}\right) }\right\rbrack . \] Therefore \[ M{\left( a\right) }^{ - } \cap \left\lbrack {N\left( {y}_{1}\right) \cup \cdots \cup N\left( {y}_{n}\right) }\right\rbrack = 0 \] and \[ M{\left( a\right) }^{ - } \subseteq M = {\left( N{\left( a\right) }^{ - } \cap A\right) }^{0}. \] But since \( M\left( a\right) \in {T}^{\prime } \) , \[ \left( {\exists N \in T}\right) \left\lbrack {M\left( a\right) = {M}^{ - } \cap N}\right\rbrack . \] And since \( M\left( a\right) \subseteq M = {M}^{0} \) \[ M\left( a\right) = M \cap N \in T. \] Theorem 3.23. (The Baire Category Theorem.) Every open meager set in a locally compact Hausdorff space is empty. Proof. 
If \( B \) is an open meager set in the locally compact Hausdorff space \( \langle X, T\rangle \) then there exists an \( \omega \) -sequence of nowhere dense sets \[ {A}_{0},{A}_{1},\ldots \] such that \[ B = \mathop{\bigcup }\limits_{{\alpha < \omega }}{A}_{\alpha } \] If \( B \neq 0 \) then by Theorem 3.22 \[ \left( {\exists {N}_{1} \in T}\right) \left\lbrack {{N}_{1}^{ - } \subseteq B}\right\rbrack \] and since \( {A}_{1} \) is nowhere dense \[ \left( {\exists {N}_{2} \subseteq {N}_{1}}\right) \left\lbrack {{N}_{2} \cap {A}_{1} = 0}\right\rbrack \] for otherwise \( {N}_{1} \subseteq {A}_{1}{}^{-0} \) . Then \[ \left( {\exists {N}_{3} \in T}\right) \left\lbrack {{N}_{3}{}^{ - } \subseteq {N}_{2}}\right\rbrack \] Inductively we define a nested sequence of neighborhoods such that \[ {N}_{n + 1} \subseteq {N}_{n + 1}^{ - } \subseteq {N}_{n},\;n < \omega . \] Consequently \[ \mathop{\bigcap }\limits_{{n < \omega }}{N}_{{2n} + 1} = \mathop{\bigcap }\limits_{{n < \omega }}{N}_{{2n} + 1}^{ - } \neq 0 \] (Theorem 3.17). Therefore \[ \exists x \in \mathop{\bigcap }\limits_{{n < \omega }}{N}_{{2n} + 1} \subseteq B. \] But then \( \forall n \in \omega \) \[ x \in {N}_{{2n} + 1} \land {N}_{{2n} + 1} \cap {A}_{n} = 0. \] Therefore \( x \notin \mathop{\bigcup }\limits_{{\alpha \in \omega }}{A}_{\alpha } \) . This is a contradiction that compels the conclusion \( B = 0 \) . Theorem 3.24. If \( \mathbf{B} \) is the Boolean \( \sigma \) -algebra of all Borel sets in the locally compact Hausdorff space \( \langle X, T\rangle \) and if \( I \) is the \( \sigma \) -ideal of all meager Borel sets then \( \mathbf{B}/I \) is isomorphic to \( {\mathbf{B}}^{\prime } \), the complete Boolean algebra of all regular open sets in \( X \) . Proof. If \[ F\left( G\right) = G/I,\;G \in \left| {\mathbf{B}}^{\prime }\right| \] then \( F\left( {G}_{1}\right) = F\left( {G}_{2}\right) \leftrightarrow {G}_{1} - {G}_{2} \in I \land {G}_{2} - {G}_{1} \in I \) . 
Then \( {G}_{1} - {G}_{2}^{ - } \) is meager and open. Thus, by the Baire Category Theorem \[ {G}_{1} - {G}_{2}^{ - } = 0. \] Similarly \[ {G}_{2} - {G}_{1}{}^{ - } = 0. \] Then \( {G}_{1} \subseteq {G}_{2}{}^{ - } \land {G}_{2} \subseteq {G}_{1}{}^{ - } \) . Since \( {G}_{1} \) and \( {G}_{2} \) are each regular open \[ {G}_{1} = {G}_{1}{}^{0} \subseteq {G}_{2}{}^{-0} = {G}_{2}\text{ and }{G}_{2} = {G}_{2}{}^{0} \subseteq {G}_{1}{}^{-0} = {G}_{1}. \] Therefore \( {G}_{1} = {G}_{2} \) and hence \( F \) is one-to-one. If \( G \in \left| \mathbf{B}\right| \) then by Corollary 3.11 there exists a regular open set \( {G}^{\prime } \) and meager sets \( {N}_{1},{N}_{2} \) such that \[ G = \left( {{G}^{\prime } + {N}_{1}}\right) - {N}_{2} \] Then \( G - {G}^{\prime } \subseteq {N}_{1} - {N}_{2} \) i.e., \( G - {G}^{\prime } \in I \) . Similarly \( {G}^{\prime } - G \in I \) and hence \[ G/I = {G}^{\prime }/I \] Then \[ F\left( {G}^{\prime }\right) = {G}^{\prime }/I = G/I. \] That is \( F \) is onto. That \( F \) has the morphism properties is clear from its definition. Definition 3.25. A Boolean algebra \( \mathbf{B} \) satisfies the countable chain condition (c.c.c.) iff \[ \left( {\forall S \subseteq \left| \mathbf{B}\right| }\right) \left\lbrack {\left( {\forall a, b \in S}\right) \left\lbrack {a \neq b \rightarrow {ab} = \mathbf{0}}\right\rbrack \rightarrow \overline{\bar{S}} \leq \omega }\right\rbrack . \] Theorem 3.26. If \( X \) is a topological space with a countable base then the Boolean algebra of regular open sets in \( X \) satisfies the countable chain condition. Proof. If \( {U}_{1},{U}_{2},\ldots \) is a countable base and if \( S \) is a pairwise disjoint subset of \( \left| \mathbf{B}\right| \) then since the elements of \( S \) are open it follows that \[ \left( {\forall A \in S}\right) \left( {\exists n < \omega }\right) \left\lbrack {{U}_{n} \subseteq A}\right\rbrack . 
\] Furthermore \[ \left( {\forall A, B \in S}\right) \left\lbrack {\left\lbrack {{U}_{n} \subseteq A}\right\rbrack \land \left\lbrack {{U}_{n} \subseteq B}\right\rbrack \rightarrow A = B}\right\rbrack . \] Therefore \( S \) is countable. Theorem 3.27. If \( \mathbf{B} \) satisfies the c.c.c. then for each subset \( E \) of \( \left| \mathbf{B}\right| \) there exists a countable subset \( D \) of \( E \) such that \( D \) and \( E \) have the same set of upper bounds. Proof. If \( I \) is the ideal generated by \( E \) then \( E \subseteq I \) . Consequently every upper bound for \( I \) is an upper bound for \( E \) . Conversely \[ \left( {\forall b \in I}\right) \left( {\exists {b}_{1},\ldots ,{b}_{n} \in E}\right
1065_(GTM224)Metric Structures in Differential Geometry
Definition 15.1
Definition 15.1. Let \( {M}^{n} \) be a manifold. For \( k \leq n \), the de Rham cohomology vector spaces with compact support \( {H}_{c}^{k}\left( M\right) \) are the spaces \( {Z}_{c}^{k}\left( M\right) /{B}_{c}^{k}\left( M\right) \) , where \( {Z}_{c}^{k}\left( M\right) \) denotes the space of all closed \( k \) -forms with compact support, and \( {B}_{c}^{k}\left( M\right) \) the space of all \( k \) -forms \( {d\alpha } \), where \( \alpha \in {A}_{k - 1}\left( M\right) \) has compact support. \( {H}^{k}\left( M\right) \) and \( {H}_{c}^{k}\left( M\right) \) coincide of course when \( M \) is compact. In general, though, not every exact \( k \) -form with compact support belongs to \( {B}_{\mathrm{c}}^{k}\left( M\right) \) : If \( f \) is a nonnegative function on \( {\mathbb{R}}^{n} \) which is positive at some point and has compact support, then the \( n \) -form \( \omega = {fd}{u}^{1} \land \cdots \land d{u}^{n} \) is exact (because it is closed and \( {H}^{n}\left( {\mathbb{R}}^{n}\right) = 0 \) ). Since \( {\int }_{{\mathbb{R}}^{n}}\omega > 0 \), the following proposition shows it does not equal \( {d\alpha } \) for any \( \alpha \) with compact support: Proposition 15.3. Let \( {M}^{n} \) be connected and orientable (without boundary). Given \( \omega \in {Z}_{c}^{n}\left( M\right) ,\omega \) belongs to \( {B}_{c}^{n}\left( M\right) \) iff \( {\int }_{M}\omega = 0 \) . Proof. If \( \omega \) belongs to \( {B}_{c}^{n}\left( M\right) \), then \( \omega = {d\alpha } \), where \( \alpha \) has compact support. By Stokes' theorem, \[ {\int }_{M}\omega = {\int }_{M}{d\alpha } = {\int }_{\partial M}\alpha = 0. \] We merely illustrate the proof of the converse in the case \( M = \mathbb{R} \) : Suppose \( \omega \) is a 1 -form on \( \mathbb{R} \) with compact support such that \( {\int }_{\mathbb{R}}\omega = 0 \) . Since \( {H}^{1}\left( \mathbb{R}\right) = 0 \) , \( \omega = {df} \) for some function \( f \) (which need not, a priori, have compact support). 
However, \( {df} \) must vanish outside some interval \( \left\lbrack {-N, N}\right\rbrack \), so that \( f\left( t\right) = {c}_{1} \) when \( t < - N \) and \( f\left( t\right) = {c}_{2} \) when \( t > N \) for some constants \( {c}_{1} \) and \( {c}_{2} \) . Then \[ 0 = {\int }_{\mathbb{R}}\omega = {\int }_{\left\lbrack -N - 1, N + 1\right\rbrack }{df} = {\int }_{\partial \left\lbrack {-N - 1, N + 1}\right\rbrack }f = {c}_{2} - {c}_{1}, \] so that \( {c}_{1} = {c}_{2} = c \) . Then \( f - c \) has compact support, and \( \omega = d\left( {f - c}\right) \) . THEOREM 15.2. If \( {M}^{n} \) is connected and orientable, then \( {H}_{c}^{n}\left( M\right) \cong \mathbb{R} \) . Proof. Consider the linear transformation from \( {Z}_{c}^{n}\left( M\right) \) to \( \mathbb{R} \) which maps a closed \( n \) -form \( \omega \) with compact support to \( {\int }_{M}\omega \in \mathbb{R} \) . This map is nontrivial (and hence onto) since for example if \( \left( {U, x}\right) \) is a positively oriented chart around some \( p \in M \), and \( f \) a nonnegative function which is positive at \( p \) and has support in \( U \), then \( {\int }_{M}\omega > 0 \), where \( \omega = {fd}{x}^{1} \land \cdots \land d{x}^{n} \) . By Proposition 15.3, its kernel is \( {B}_{c}^{n}\left( M\right) \), and the statement follows. Thus, for example, if \( {M}^{n} \) is compact, connected, and orientable, then \( {H}^{n}\left( M\right) \cong \mathbb{R} \) . It can be shown that for \( n \geq 1,{H}_{c}^{n}\left( M\right) = 0 \) if \( M \) is not orientable, and in general, \( {H}^{n}\left( M\right) = {H}_{c}^{n}\left( M\right) \) if \( M \) is compact and equals 0 otherwise. Let \( {M}_{1}^{n},{M}_{2}^{n} \) be connected orientable, \( f : {M}_{1} \rightarrow {M}_{2} \) differentiable. 
We then have a linear transformation \( \mathbb{R} \rightarrow \mathbb{R} \) such that the diagram [diagram: \( {f}^{ * } \) along the top row, the integration maps down the two columns, and the induced map \( \mathbb{R} \rightarrow \mathbb{R} \) along the bottom] commutes. For \( \left\lbrack \omega \right\rbrack \in {H}^{n}\left( {M}_{2}\right) \), the above diagram reads [diagram: \( \left\lbrack \omega \right\rbrack \mapsto \left\lbrack {{f}^{ * }\omega }\right\rbrack \) along the top, \( {\int }_{{M}_{2}}\omega \mapsto {\int }_{{M}_{1}}{f}^{ * }\omega \) along the bottom]. The bottom map must then be multiplication by some number \( \deg f \), called the degree of \( f \) ; i.e., \[ {\int }_{{M}_{1}}{f}^{ * }\omega = \left( {\deg f}\right) {\int }_{{M}_{2}}\omega . \] This number can in many cases be computed as follows: THEOREM 15.3. Let \( {M}_{i}^{n} \) be connected and orientable, \( i = 1,2 \), and consider a proper map \( f : {M}_{1} \rightarrow {M}_{2} \) . Suppose \( q \in {M}_{2} \) is a regular value of \( f \) . For each \( p \in {f}^{-1}\left( q\right) \), define the sign of \( f \) at \( p \) to be the number \( {\operatorname{sgn}}_{p}f = + 1 \) if \( {f}_{*p} \) is orientation-preserving, and -1 if it is orientation-reversing. Then \[ \deg f = \mathop{\sum }\limits_{{p \in {f}^{-1}\left( q\right) }}{\operatorname{sgn}}_{p}f \] EXAMPLES AND REMARKS 15.1. (i) Recall that \( f \) is proper if the preimage of a compact set is compact. In particular, \( f \) is proper whenever \( {M}_{1} \) is compact. (ii) Regular values always exist by Sard's theorem; in fact, their complement has measure 0 . Notice that \( \deg f = 0 \) if \( f \) is not onto. Proof of Theorem 15.3. By the inverse function theorem, \( {f}^{-1}\left( q\right) \) consists of isolated points; being compact, it is a finite collection \( \left\{ {{p}_{1},\ldots ,{p}_{k}}\right\} \) . Choose charts \( \left( {{U}_{i},{x}_{i}}\right) \) around \( {p}_{i} \) such that each restriction \( f : {U}_{i} \rightarrow {V}_{i} \mathrel{\text{:=}} f\left( {U}_{i}\right) \) is a diffeomorphism, and \( {U}_{i} \cap {U}_{j} = \varnothing \) . 
Then \( V \mathrel{\text{:=}} { \cap }_{i}{V}_{i} \) is the domain of a chart \( \left( {V, y}\right) \) around \( q \), and redefining \( {U}_{i} \) to be \( {U}_{i} \cap {f}^{-1}\left( V\right) \), we still have diffeomorphisms \( f : {U}_{i} \rightarrow V \) . Let \( g \) be a nonnegative function with compact support in \( V \), and set \( \omega = {gd}{y}^{1} \land \cdots \land d{y}^{n} \) . Then \( {f}^{ * }\omega \) has support in \( {U}_{1} \cup \cdots \cup {U}_{k} \), so that by Exercise 39, \[ {\int }_{{M}_{1}}{f}^{ * }\omega = \mathop{\sum }\limits_{{i = 1}}^{k}{\int }_{{U}_{i}}{f}^{ * }\omega = \mathop{\sum }\limits_{{i = 1}}^{k}\left( {{\operatorname{sgn}}_{{p}_{i}}f}\right) {\int }_{V}\omega = \mathop{\sum }\limits_{{i = 1}}^{k}\left( {{\operatorname{sgn}}_{{p}_{i}}f}\right) {\int }_{{M}_{2}}\omega . \] We end this chapter with a couple of topological applications of Theorem 15.3: COROLLARY 15.3. If \( n \) is even, then any vector field on \( {S}^{n} \) vanishes somewhere. Proof. We have seen in Section 14 that the antipodal map \( f : {S}^{n} \rightarrow {S}^{n} \) is orientation-reversing when \( n \) is even, so that \( f \) has degree -1 . By Theorems 15.1 and 15.3, \( f \) is not homotopic to the identity \( I \) . But if \( X \) were a nowhere-zero vector field on the sphere, it would induce a homotopy between \( f \) and \( I \) : Recall that there is a canonical inner product on each tangent space (the one for which \( {\mathcal{J}}_{p} : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}_{p}^{n} \) becomes a linear isometry for each \( p \) ), so that we may assume \( \left| X\right| \equiv 1 \) . Given \( p \in {S}^{n} \), let \( {c}_{p} \) denote the great circle \( {c}_{p}\left( t\right) = \left( {\cos {\pi t}}\right) p + \left( {\sin {\pi t}}\right) {\mathcal{J}}_{p}^{-1}X\left( p\right) \) . The desired homotopy \( H \) is then given by \( H\left( {p, t}\right) = {c}_{p}\left( t\right) \) . COROLLARY 15.4. 
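Theorem 15.3 makes the degree directly computable: pick a regular value, locate its preimages, and add the orientation signs. As a numerical sketch of this recipe for circle maps \( f : {S}^{1} \rightarrow {S}^{1} \) (the map \( \theta \mapsto 3\theta \), the helper names, and the bisection tolerance are all invented for the illustration, not taken from the text):

```python
import math

TWO_PI = 2.0 * math.pi

def degree_via_preimages(f, df, q, samples=20000):
    """Approximate deg f for a smooth map f: S^1 -> S^1 (angles mod 2π) by
    finding the preimages of a regular value q and summing the signs of the
    derivative there, as in Theorem 15.3."""
    def g(t):  # signed angular distance from f(t) to q, lies in (-π, π]
        return (f(t) - q + math.pi) % TWO_PI - math.pi
    deg = 0
    for i in range(samples):
        a, b = TWO_PI * i / samples, TWO_PI * (i + 1) / samples
        if g(a) * g(b) < 0:          # bracket: either a preimage or a ±π wrap jump
            lo, hi = a, b
            for _ in range(60):      # bisection
                m = 0.5 * (lo + hi)
                if g(lo) * g(m) <= 0:
                    hi = m
                else:
                    lo = m
            m = 0.5 * (lo + hi)
            if abs(g(m)) < 1e-9:     # genuine preimage (a wrap jump leaves |g| ≈ π)
                deg += 1 if df(m) > 0 else -1
    return deg

# θ ↦ 3θ wraps the circle three times, preserving orientation everywhere:
print(degree_via_preimages(lambda t: (3.0 * t) % TWO_PI, lambda t: 3.0, q=1.0))  # prints 3
```

Each of the three preimages of the regular value contributes \( +1 \), so the degree is \( 3 \); the reflection \( \theta \mapsto - \theta \) has a single orientation-reversing preimage and degree \( -1 \).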
\( {H}^{k}\left( {{\mathbb{R}}^{n}\smallsetminus \{ 0\} }\right) \cong {H}^{k}\left( {S}^{n - 1}\right) \) for all \( k \) . Proof. Let \( r : {\mathbb{R}}^{n} \smallsetminus \{ 0\} \rightarrow {S}^{n - 1} \) denote the retraction \( r\left( p\right) = p/\left| p\right| \), and \( \imath \) : \( {S}^{n - 1} \hookrightarrow {\mathbb{R}}^{n} \smallsetminus \{ 0\} \) the inclusion. Then \( r \circ \imath \) is the identity map on the sphere, and \( \imath \circ r \) is homotopic to the identity map on \( {\mathbb{R}}^{n} \smallsetminus \{ 0\} \) via \( H\left( {p, t}\right) = {tp} + \left( {1 - t}\right) \left( {\imath \circ r}\right) \left( p\right) \) . By Theorem 15.1, \( {\left( r \circ \iota \right) }^{ * } = {\iota }^{ * } \circ {r}^{ * } \) and \( {\left( \iota \circ r\right) }^{ * } = {r}^{ * } \circ {\iota }^{ * } \) are the identity on the respective cohomology spaces, so that \( {\imath }^{ * } : {H}^{k}\left( {{\mathbb{R}}^{n}\smallsetminus \{ 0\} }\right) \rightarrow {H}^{k}\left( {S}^{n - 1}\right) \) is an isomorphism. EXERCISE 42. Let \( \omega \) be a 1 -form on \( M \) such that \( {\int }_{c}\omega = 0 \) for any closed curve \( c \) in \( M \) . Show that \( \omega \) is exact. EXERCISE 43. \( M \) is said to be simply connected if any closed curve \( c \) : \( {S}^{1} \rightarrow M \) is homotopic to a constant map. Use Exercise 42 to prove that if \( M \) is simply connected, then \( {H}^{1}\left( M\right) = 0 \) . EXERCISE 44. Let \( U = {\mathbb{R}}^{3} \smallsetminus \{ \left( {0,0, z}\right) \mid z \geq 0\}, V = {\mathbb{R}}^{3} \smallsetminus \{ \left( {0,0, z}\right) \mid z \leq 0\} \) . (a) Show that \( U \) and \( V \) are contractible. (b) Suppose \( \omega \) is a closed 1 -form on \( {\mathbb{R}}^{3} \smallsetminus \{ 0\} \) . Show that there are functions \( f : U \rightarrow \mathbb{R} \) and \( g : V \rightarrow \mathbb{R} \), such that \( {\omega }_{\mid U} = {df},{\omega }_{\mid V} = {dg} \) . 
Conclude that \( \omega = {dh} \) for some function \( h \), so that \( {H}^{1}\left( {{\mathbb{R}}^{3}\smallsetminus \{ 0\} }\right) = 0 \) . (c) Prove that \( {H}^{1}\left( {S}^{2}\right) = 0 \) . EXERCISE 45. This exercise generalizes Example 11.1, exhibiting a closed \( \
1099_(GTM255)Symmetry, Representations, and Invariants
Definition 5.1.2
Definition 5.1.2. Let \( \left\{ {{f}_{1},\ldots ,{f}_{n}}\right\} \) be a set of generators for \( \mathcal{P}{\left( V\right) }^{G} \) with \( n \) as small as possible. Then \( \left\{ {{f}_{1},\ldots ,{f}_{n}}\right\} \) is called a set of basic invariants. Theorem 5.1.1 asserts that there always exists a finite set of basic invariants when \( G \) is reductive. Since \( \mathcal{P}\left( V\right) \) and \( \mathcal{J} = \mathcal{P}{\left( V\right) }^{G} \) are graded algebras, relative to the usual degree of a polynomial, there is a set of basic invariants with each \( {f}_{i} \) homogeneous, say of degree \( {d}_{i} \) . If we enumerate the \( {f}_{i} \) so that \( {d}_{1} \leq {d}_{2} \leq \cdots \) then the sequence \( \left\{ {d}_{i}\right\} \) is uniquely determined (even though the set of basic invariants is not unique). To prove this, define \[ {m}_{k} = \dim {\mathcal{J}}_{k}/{\left( {\mathcal{J}}_{ + }^{2}\right) }_{k} \] where \( {\mathcal{J}}_{k} \) is the homogeneous component of degree \( k \) of \( \mathcal{J} \) . We claim that \[ {m}_{k} = \operatorname{Card}\left\{ {j : {d}_{j} = k}\right\} . \] (5.3) Indeed, if \( \varphi \in {\mathcal{J}}_{k} \) then \[ \varphi = \mathop{\sum }\limits_{I}{a}_{I}{f}^{I}\;\left( {\text{ sum over }I\text{ with }{d}_{1}{i}_{1} + \cdots + {d}_{n}{i}_{n} = k}\right) , \] where \( {f}^{I} = {f}_{1}^{{i}_{1}}\cdots {f}_{n}^{{i}_{n}} \) and \( {a}_{I} \in \mathbb{C} \) . Since \( {f}^{I} \in {\mathcal{J}}_{ + }^{2} \) if \( {i}_{1} + \cdots + {i}_{n} \geq 2 \), we can write \[ \varphi = \sum {b}_{i}{f}_{i} + \psi \;\left( {\text{ sum over }i\text{ with }{d}_{i} = k}\right) , \] where \( \psi \in {\left( {\mathcal{J}}_{ + }^{2}\right) }_{k} \) and \( {b}_{i} \in \mathbb{C} \) . This proves (5.3) and shows that the set \( \left\{ {d}_{i}\right\} \) is intrinsically determined by \( \mathcal{J} \) as a graded algebra. 
We can also associate the Hilbert series \[ H\left( t\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }\dim \left( {\mathcal{J}}_{k}\right) {t}^{k} \] to the graded algebra \( \mathcal{J} \) . When the set of basic invariants is algebraically independent the Hilbert series is convergent for \( \left| t\right| < 1 \) and is given by the rational function \[ H\left( t\right) = \mathop{\prod }\limits_{{i = 1}}^{n}{\left( 1 - {t}^{{d}_{i}}\right) }^{-1}, \] where \( {d}_{1},\ldots ,{d}_{n} \) are the degrees of the basic invariants. ## 5.1.2 Invariant Polynomials for \( {\mathfrak{S}}_{\mathrm{n}} \) Let the symmetric group \( {\mathfrak{S}}_{n} \) act on the polynomial ring \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) by \[ \left( {s \cdot f}\right) \left( {{x}_{1},\ldots ,{x}_{n}}\right) = f\left( {{x}_{s\left( 1\right) },\ldots ,{x}_{s\left( n\right) }}\right) \] for \( s \in {\mathfrak{S}}_{n} \) and \( f \in \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . We can view this as the action of \( {\mathfrak{S}}_{n} \) on \( \mathcal{P}\left( {\mathbb{C}}^{n}\right) \) arising from the representation of \( {\mathfrak{S}}_{n} \) on \( {\mathbb{C}}^{n} \) as permutation matrices, with \( {x}_{1},\ldots ,{x}_{n} \) being the coordinate functions on \( {\mathbb{C}}^{n} \) . Define the elementary symmetric functions \( {\sigma }_{1},\ldots ,{\sigma }_{n} \) by \[ {\sigma }_{i}\left( {{x}_{1},\ldots ,{x}_{n}}\right) = \mathop{\sum }\limits_{{1 \leq {j}_{1} < \cdots < {j}_{i} \leq n}}{x}_{{j}_{1}}\cdots {x}_{{j}_{i}}, \] and set \( {\sigma }_{0} = 1 \) . For example, \( {\sigma }_{1}\left( x\right) = {x}_{1} + \cdots + {x}_{n} \) and \( {\sigma }_{n}\left( x\right) = {x}_{1}\cdots {x}_{n} \) . Clearly, \( {\sigma }_{i} \in \mathbb{C}{\left\lbrack {x}_{1},\ldots ,{x}_{n}\right\rbrack }^{{\mathfrak{S}}_{n}} \) . 
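For \( {\mathfrak{S}}_{n} \) the basic invariants will turn out (Theorem 5.1.3 below) to be \( {\sigma }_{1},\ldots ,{\sigma }_{n} \), of degrees \( 1,2,\ldots, n \), so the Hilbert series should be \( \mathop{\prod }\nolimits_{{i = 1}}^{n}{\left( 1 - {t}^{i}\right) }^{-1} \). A quick computational sanity check (function names invented; the direct count uses the fact that the orbit sums of monomials \( {x}^{I} \), one per dominant index, form a basis of the degree-\( k \) invariants):

```python
from itertools import product

def hilbert_coeffs(degrees, N):
    """Taylor coefficients of H(t) = ∏_d 1/(1 - t^d) up to degree N, obtained
    by multiplying in one geometric series 1 + t^d + t^{2d} + ... at a time."""
    h = [1] + [0] * N
    for d in degrees:
        out = [0] * (N + 1)
        for k in range(N + 1):
            j = 0
            while j * d <= k:
                out[k] += h[k - j * d]
                j += 1
        h = out
    return h

def dim_invariants(n, k):
    """dim of the degree-k symmetric polynomials in n variables, counted
    directly: one dominant index i_1 >= ... >= i_n >= 0 with |I| = k per basis
    element (the orbit sum of x^I)."""
    return sum(1 for I in product(range(k + 1), repeat=n)
               if sum(I) == k and all(I[j] >= I[j + 1] for j in range(n - 1)))

n, N = 3, 8
coeffs = hilbert_coeffs(range(1, n + 1), N)   # degrees d_i = 1, ..., n
assert coeffs == [dim_invariants(n, k) for k in range(N + 1)]
print(coeffs)  # [1, 1, 2, 3, 4, 5, 7, 8, 10]
```

Both counts give the number of partitions of \( k \) into at most \( n \) parts, as expected.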
Furthermore, we have the identity \[ \mathop{\prod }\limits_{{i = 1}}^{n}\left( {t - {x}_{i}}\right) = \mathop{\sum }\limits_{{j = 0}}^{n}{t}^{n - j}{\left( -1\right) }^{j}{\sigma }_{j}\left( x\right) . \] (5.4) Thus the functions \( \left\{ {{\sigma }_{i}\left( x\right) }\right\} \) express the coefficients of a monic polynomial in the variable \( t \) as symmetric functions of the roots \( {x}_{1},\ldots ,{x}_{n} \) of the polynomial. Theorem 5.1.3. The set of functions \( \left\{ {{\sigma }_{1},\ldots ,{\sigma }_{n}}\right\} \) is algebraically independent and \( \mathbb{C}{\left\lbrack {x}_{1},\ldots ,{x}_{n}\right\rbrack }^{{\mathfrak{S}}_{n}} = \mathbb{C}\left\lbrack {{\sigma }_{1},\ldots ,{\sigma }_{n}}\right\rbrack \) . Hence the elementary symmetric functions are a set of basic invariants for \( {\mathfrak{S}}_{n} \) . Proof. We put the graded lexicographic order on \( {\mathbb{N}}^{n} \) . Let \( I, J \in {\mathbb{N}}^{n} \) with \( I \neq J \) . We define \( I\overset{\text{ grlex }}{ > }J \) if either \( \left| I\right| > \left| J\right| \), or else \( \left| I\right| = \left| J\right| \) and there is an index \( r \) such that \( {i}_{p} = {j}_{p} \) for \( p < r \) and \( {i}_{r} > {j}_{r} \) . This is a total order on \( {\mathbb{N}}^{n} \), and the set \( \left\{ {J \in {\mathbb{N}}^{n} : I\overset{\text{ grlex }}{ > }J}\right\} \) is finite for every \( I \in {\mathbb{N}}^{n} \) . Furthermore, this order is compatible with addition: \[ \text{if } I\overset{\text{ grlex }}{ > }J \text{ and } P\overset{\text{ grlex }}{ > }Q \text{ then } I + P\overset{\text{ grlex }}{ > }J + Q. \] Given \( f = \mathop{\sum }\limits_{I}{a}_{I}{x}^{I} \in \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \), we define the support of \( f \) to be \[ \mathcal{S}\left( f\right) = \left\{ {I \in {\mathbb{N}}^{n} : {a}_{I} \neq 0}\right\} . \] Assume that \( f \) is invariant under \( {\mathfrak{S}}_{n} \) . Then \( {a}_{I} = {a}_{s \cdot I} \) for all \( s \in {\mathfrak{S}}_{n} \), where \( s \cdot \) 
\( \left\lbrack {{i}_{1},\ldots ,{i}_{n}}\right\rbrack = \left\lbrack {{i}_{{s}^{-1}\left( 1\right) },\ldots ,{i}_{{s}^{-1}\left( n\right) }}\right\rbrack \) . Thus \( \mathcal{S}\left( f\right) \) is invariant under \( {\mathfrak{S}}_{n} \) . If \( J \in \mathcal{S}\left( f\right) \) then the set of indices \( \left\{ {s \cdot J : s \in {\mathfrak{S}}_{n}}\right\} \) contains a unique index \( I = \left\lbrack {{i}_{1},\ldots ,{i}_{n}}\right\rbrack \) with \( {i}_{1} \geq {i}_{2} \geq \cdots \geq {i}_{n} \) (we call such an index \( I \) dominant). If \( I \) is the largest index in \( \mathcal{S}\left( f\right) \) (for the graded lexicographic order), then \( I \) must be dominant (since \( \left| {s \cdot I}\right| = \left| I\right| \) for \( s \in {\mathfrak{S}}_{n} \) ). The corresponding term \( {a}_{I}{x}^{I} \) is called the dominant term of \( f \) . For example, the dominant term in \( {\sigma }_{i} \) is \( {x}_{1}{x}_{2}\cdots {x}_{i} \) . Given a dominant index \( I \), set \[ {\sigma }_{I} = {\sigma }_{1}^{{i}_{1} - {i}_{2}}{\sigma }_{2}^{{i}_{2} - {i}_{3}}\cdots {\sigma }_{n}^{{i}_{n}}. \] Then \( {\sigma }_{I} \) is a homogeneous polynomial of degree \( \left| I\right| \) that is invariant under \( {\mathfrak{S}}_{n} \) . We claim that \[ \mathcal{S}\left( {{\sigma }_{I} - {x}^{I}}\right) \subset \left\{ {J \in {\mathbb{N}}^{n} : J\overset{\text{ grlex }}{ < }I}\right\} . \] (5.5) We prove (5.5) by induction on the graded lexicographic order of \( I \) . The smallest dominant index in this order is \( I = \left\lbrack {1,0,\ldots ,0}\right\rbrack \), and in this case \[ {\sigma }_{I} - {x}^{I} = {\sigma }_{1} - {x}_{1} = {x}_{2} + \cdots + {x}_{n}. \] Thus (5.5) holds for \( I = \left\lbrack {1,0,\ldots ,0}\right\rbrack \) . Given a dominant index \( I \), we may thus assume that (5.5) holds for all dominant indices less than \( I \) . 
We can write \( I = J + {M}_{i} \) for some \( i \leq n \), where \( {M}_{i} = \left\lbrack {\underset{i}{\underbrace{1,\ldots ,1}},0,\ldots ,0}\right\rbrack \) and \( J \) is a dominant index less than \( I \) . Thus \( {\sigma }_{I} = {\sigma }_{i}{\sigma }_{J} \) . Now \[ {\sigma }_{i} = \left( {\mathop{\prod }\limits_{{p = 1}}^{i}{x}_{p}}\right) + \cdots , \] where \( \cdots \) indicates a linear combination of monomials \( {x}^{K} \) with \( K\overset{\text{ grlex }}{ < }{M}_{i} \) . By induction, \( {\sigma }_{J} = {x}^{J} + \cdots \), where \( \cdots \) indicates a linear combination of monomials \( {x}^{L} \) with \( L\overset{\text{ grlex }}{ < }J \) . Hence \( {\sigma }_{i}{\sigma }_{J} = {x}^{J + {M}_{i}} + \cdots \), where \( \cdots \) indicates a linear combination of monomials \( {x}^{P} \) with \( P\overset{\text{ grlex }}{ < }{M}_{i} + J = I \) . This proves (5.5). Suppose \( f\left( x\right) \in \mathbb{C}{\left\lbrack {x}_{1},\ldots ,{x}_{n}\right\rbrack }^{{\mathfrak{S}}_{n}} \) . Let \( a{x}^{I} \) be the dominant term in \( f \) . Then \( g\left( x\right) = f\left( x\right) - a{\sigma }_{I} \in \mathbb{C}{\left\lbrack {x}_{1},\ldots ,{x}_{n}\right\rbrack }^{{\mathfrak{S}}_{n}} \) . If \( b{x}^{J} \) is the dominant term in \( g\left( x\right) \) then (5.5) implies that \( J\overset{\text{ grlex }}{ < }I \) . By induction on the graded lexicographic order of the dominant term, we may assume that \( g \in \mathbb{C}\left\lbrack {{\sigma }_{1},\ldots ,{\sigma }_{n}}\right\rbrack \), so it follows that \( f \in \mathbb{C}\left\lbrack {{\sigma }_{1},\ldots ,{\sigma }_{n}}\right\rbrack \) . It remains to prove that the set \( \left\{ {{\sigma }_{1},\ldots ,{\sigma }_{n}}\right\} \subset \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) is algebraically independent. This is true for \( n = 1 \), since \( {\sigma }_{1}\left( {x}_{1}\right) = {x}_{1} \) . Assume that this is true for \( n \) . 
Suppose for the sake of contradiction that the elementary symmetric functions in the variables \( {x}_{1},\ldots ,{x}_{n + 1} \) satisfy a nontrivial polynomial relation. We can write such a relation as \[ \mathop{\sum }\limits_{{j = 0}}^{p}{f}_{j}\left( {{\sigma }_{1},\ldots ,{\sigma }_{n}}\right) {\sigma }_{n + 1}^{j} = 0 \] (5.6) where each \( {f}_{j} \) is a polynomial in \( n \) variables and \( {f}_{p} \neq 0 \) . We take the small
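The dominant-term induction in the proof of Theorem 5.1.3 is an effective algorithm: repeatedly subtract \( a{\sigma }_{I} \), where \( a{x}^{I} \) is the dominant term, until nothing is left. A Python sketch of this rewriting (polynomials are represented as dictionaries from exponent tuples to coefficients; all names are invented, and the input is assumed to be genuinely symmetric, since otherwise the exponent differences defining \( {\sigma }_{I} \) could be negative):

```python
from itertools import combinations

def poly_mul(p, q):
    """Multiply two polynomials given as {exponent tuple: coefficient}."""
    r = {}
    for I, a in p.items():
        for J, b in q.items():
            K = tuple(i + j for i, j in zip(I, J))
            r[K] = r.get(K, 0) + a * b
    return {K: c for K, c in r.items() if c != 0}

def elem_sym(n, i):
    """The elementary symmetric polynomial σ_i in n variables."""
    return {tuple(1 if j in S else 0 for j in range(n)): 1
            for S in combinations(range(n), i)}

def in_elementaries(f, n):
    """Rewrite a symmetric polynomial f as a polynomial in σ_1,...,σ_n by
    cancelling dominant terms, as in the proof of Theorem 5.1.3.
    Returns {(e_1,...,e_n): c} standing for Σ c · σ_1^{e_1} ··· σ_n^{e_n}."""
    sigmas = [elem_sym(n, i) for i in range(n + 1)]
    result, f = {}, dict(f)
    while f:
        I = max(f, key=lambda J: (sum(J), J))   # dominant index (grlex maximum)
        a = f[I]
        E = tuple(I[k] - I[k + 1] for k in range(n - 1)) + (I[n - 1],)
        result[E] = result.get(E, 0) + a        # record a · σ_1^{i_1-i_2} ··· σ_n^{i_n}
        term = {(0,) * n: a}
        for k, e in enumerate(E):
            for _ in range(e):
                term = poly_mul(term, sigmas[k + 1])
        for J, c in term.items():               # subtract a·σ_I; by (5.5) the
            f[J] = f.get(J, 0) - c              # dominant term strictly drops
            if f[J] == 0:
                del f[J]
    return result

# Newton's identity p_2 = σ_1² - 2σ_2, here for n = 3:
p2 = {(2, 0, 0): 1, (0, 2, 0): 1, (0, 0, 2): 1}
print(in_elementaries(p2, 3))  # {(2, 0, 0): 1, (0, 1, 0): -2}, i.e. σ_1² - 2σ_2
```

For the power sum \( {x}_{1}^{2} + {x}_{2}^{2} + {x}_{3}^{2} \) the loop terminates in two steps and recovers Newton's identity \( {p}_{2} = {\sigma }_{1}^{2} - 2{\sigma }_{2} \); termination in general is exactly the finiteness of the set of indices below \( I \) in the graded lexicographic order.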
1065_(GTM224)Metric Structures in Differential Geometry
Definition 11.4
Definition 11.4. The \( k \) -th (de Rham) cohomology vector space of \( M \) is the space \( {H}^{k}\left( M\right) = {Z}_{k}\left( M\right) /{B}_{k}\left( M\right) \) . Thus, \( {H}^{k}\left( M\right) = 0 \) iff every closed \( k \) -form on \( M \) is exact. When \( k = 0 \), we define \( {H}^{0}\left( M\right) \mathrel{\text{:=}} {Z}_{0}\left( M\right) \) . If \( M \) is connected, it follows from Exercise 10 that \( {H}^{0}\left( M\right) \cong \mathbb{R} \) . Definition 11.5. Let \( f : M \rightarrow N \) be differentiable. For \( \alpha \in {A}_{k}\left( N\right) \), define the pullback of \( \alpha \) via \( f \) to be the \( k \) -form \( {f}^{ * }\alpha \) on \( M \) given by \[ \left( {{f}^{ * }\alpha }\right) \left( p\right) \left( {{v}_{1},\ldots ,{v}_{k}}\right) = \alpha \left( {f\left( p\right) }\right) \left( {{f}_{ * }{v}_{1},\ldots ,{f}_{ * }{v}_{k}}\right) ,\;p \in M,\;{v}_{i} \in {M}_{p}. \] In the special case that \( k = 0 \), i.e., when \( \alpha \) is a function \( \phi \) on \( M \), define \( {f}^{ * }\phi = \phi \circ f \) Clearly, \( {f}^{ * } : A\left( N\right) \rightarrow A\left( M\right) \) is linear. THEOREM 11.2. If \( f : M \rightarrow N \) is differentiable, then (1) \( {f}^{ * } : A\left( N\right) \rightarrow A\left( M\right) \) is an algebra homomorphism, (2) \( d{f}^{ * } = {f}^{ * }d \), and (3) \( {f}^{ * } \) induces a linear transformation \( {f}^{ * } : {H}^{k}\left( N\right) \rightarrow {H}^{k}\left( M\right) \) . Proof. In order to establish (1), notice that because \( {f}^{ * } \) is linear, it suffices to check that \( {f}^{ * }\left( {{\alpha }_{1} \land \cdots \land {\alpha }_{k}}\right) = {f}^{ * }{\alpha }_{1} \land \cdots \land {f}^{ * }{\alpha }_{k} \) for 1 -forms \( {\alpha }_{i} \) on \( N \) . 
But if \( {X}_{1},\ldots ,{X}_{k} \in \mathfrak{X}\left( M\right) \), then \[ {f}^{ * }\left( {{\alpha }_{1} \land \cdots \land {\alpha }_{k}}\right) \left( {{X}_{1},\ldots ,{X}_{k}}\right) = \left( {{\alpha }_{1} \land \cdots \land {\alpha }_{k}}\right) \circ f\left( {{f}_{ * }{X}_{1},\ldots ,{f}_{ * }{X}_{k}}\right) \] \[ = \det \left( {\left( {{\alpha }_{i} \circ f}\right) \left( {{f}_{ * }{X}_{j}}\right) }\right) = \det \left( {{f}^{ * }{\alpha }_{i}\left( {X}_{j}\right) }\right) \] \[ = {f}^{ * }{\alpha }_{1} \land \cdots \land {f}^{ * }{\alpha }_{k}\left( {{X}_{1},\ldots ,{X}_{k}}\right) \] (2) We first prove the statement for functions. If \( \phi \in \mathcal{F}\left( N\right) \) and \( X \in \mathfrak{X}\left( M\right) \) , then \[ {f}^{ * }{d\phi }\left( X\right) = \left( {d\phi }\right) \circ f\left( {{f}_{ * }X}\right) = \left( {{f}_{ * }X}\right) \phi = X\left( {\phi \circ f}\right) = d\left( {\phi \circ f}\right) \left( X\right) = d\left( {{f}^{ * }\phi }\right) \left( X\right) , \] so (2) holds on \( {A}_{0}\left( N\right) \) . In the general case, \( \alpha \in A\left( N\right) \) may locally be written as \( \alpha = \sum {\alpha }_{I}d{x}^{{i}_{1}} \land \cdots \land d{x}^{{i}_{k}} \) . By the above and (1), \[ {f}^{ * }\alpha = \sum \left( {{\alpha }_{I} \circ f}\right) {f}^{ * }d{x}^{{i}_{1}} \land \cdots \land {f}^{ * }d{x}^{{i}_{k}} = \sum \left( {{\alpha }_{I} \circ f}\right) d{f}^{ * }{x}^{{i}_{1}} \land \cdots \land d{f}^{ * }{x}^{{i}_{k}}, \] so that \[ d{f}^{ * }\alpha = \sum d\left( {{\alpha }_{I} \circ f}\right) \land {f}^{ * }d{x}^{{i}_{1}} \land \cdots \land {f}^{ * }d{x}^{{i}_{k}} \] \[ = \sum {f}^{ * }d{\alpha }_{I} \land {f}^{ * }d{x}^{{i}_{1}} \land \cdots \land {f}^{ * }d{x}^{{i}_{k}} \] \[ = {f}^{ * }\left( {\sum d{\alpha }_{I} \land d{x}^{{i}_{1}} \land \cdots \land d{x}^{{i}_{k}}}\right) = {f}^{ * }{d\alpha }. \] The last statement in the theorem is a direct consequence of the first two. 
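Part (2) of Theorem 11.2, \( d{f}^{ * } = {f}^{ * }d \), can also be checked symbolically on a concrete example. The sketch below (assuming SymPy is available; the map \( f \) and the form \( \alpha \) are invented for the illustration) computes both \( d\left( {{f}^{ * }\alpha }\right) \) and \( {f}^{ * }\left( {d\alpha }\right) \) for a 1-form on \( {\mathbb{R}}^{2} \), using the coordinate formulas for pulling back 1-forms and 2-forms:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# A hypothetical map f: R^2 -> R^2, (x, y) ↦ (u, v), and a 1-form α = P du + Q dv
f1, f2 = x**2 - y**2, 2*x*y
P, Q = u*v, u + v**2
sub = {u: f1, v: f2}

# f*α = (P∘f) df1 + (Q∘f) df2 = A dx + B dy
A = P.subs(sub) * sp.diff(f1, x) + Q.subs(sub) * sp.diff(f2, x)
B = P.subs(sub) * sp.diff(f1, y) + Q.subs(sub) * sp.diff(f2, y)
d_of_pullback = sp.expand(sp.diff(B, x) - sp.diff(A, y))   # d(f*α) = (B_x - A_y) dx∧dy

# dα = (Q_u - P_v) du∧dv; pulling back a 2-form multiplies by det Df
dalpha = sp.diff(Q, u) - sp.diff(P, v)
detDf = sp.diff(f1, x) * sp.diff(f2, y) - sp.diff(f1, y) * sp.diff(f2, x)
pullback_of_d = sp.expand(dalpha.subs(sub) * detDf)        # f*(dα)

assert sp.simplify(d_of_pullback - pullback_of_d) == 0     # Theorem 11.2(2)
```

The computation is coordinate-bound, but that is the point: the two very different-looking expressions agree identically in \( x, y \).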
We end this section with a coordinate-free characterization of the exterior derivative operator \( d \) . The proof uses concepts and results from Exercises 33 and 34 below. THEOREM 11.3. If \( \omega \in {A}_{k}\left( M\right) \) and \( {X}_{0},{X}_{1},\ldots ,{X}_{k} \in \mathfrak{X}\left( M\right) \), then \[ {d\omega }\left( {{X}_{0},\ldots ,{X}_{k}}\right) = \mathop{\sum }\limits_{{i = 0}}^{k}{\left( -1\right) }^{i}{X}_{i}\left( {\omega \left( {{X}_{0},\ldots ,{\widehat{X}}_{i},\ldots ,{X}_{k}}\right) }\right) \] \[ + \mathop{\sum }\limits_{{i < j}}{\left( -1\right) }^{i + j}\omega \left( {\left\lbrack {{X}_{i},{X}_{j}}\right\rbrack ,{X}_{0},\ldots ,{\widehat{X}}_{i},\ldots ,{\widehat{X}}_{j},\ldots ,{X}_{k}}\right) . \] (The "hat" over a vector field means the latter is deleted.) Proof. We shall consider the case \( k = 1 \), the general case being a straightforward induction. It follows from Exercise 33 that, in general, \[ \left( {{L}_{{X}_{0}}\omega }\right) \left( {{X}_{1},\ldots ,{X}_{k}}\right) = {L}_{{X}_{0}}\left( {\omega \left( {{X}_{1},\ldots ,{X}_{k}}\right) }\right) - \mathop{\sum }\limits_{{i = 1}}^{k}\omega \left( {{X}_{1},\ldots ,{L}_{{X}_{0}}{X}_{i},\ldots ,{X}_{k}}\right) . \] Together with Exercise 34, this implies that \[ {d\omega }\left( {{X}_{0},{X}_{1}}\right) = \left( {i\left( {X}_{0}\right) {d\omega }}\right) \left( {X}_{1}\right) = \left( {{L}_{{X}_{0}}\omega }\right) \left( {X}_{1}\right) - d\left( {i\left( {X}_{0}\right) \omega }\right) \left( {X}_{1}\right) \] \[ = {L}_{{X}_{0}}\left( {\omega \left( {X}_{1}\right) }\right) - \omega \left\lbrack {{X}_{0},{X}_{1}}\right\rbrack - d\left( {\omega \left( {X}_{0}\right) }\right) \left( {X}_{1}\right) \] \[ = {X}_{0}\left( {\omega {X}_{1}}\right) - {X}_{1}\left( {\omega {X}_{0}}\right) - \omega \left\lbrack {{X}_{0},{X}_{1}}\right\rbrack . \] EXERCISE 31. 
Show that the form \( \alpha \) in Example 11.1 is equal to \[ \frac{-{u}^{2}}{{\left( {u}^{1}\right) }^{2} + {\left( {u}^{2}\right) }^{2}}d{u}^{1} + \frac{{u}^{1}}{{\left( {u}^{1}\right) }^{2} + {\left( {u}^{2}\right) }^{2}}d{u}^{2}. \] EXERCISE 32. Let \( \alpha \) be a 1 -form on \( {\mathbb{R}}^{2} \), so that we may write \( \alpha = {f}_{1}d{u}^{1} + \) \( {f}_{2}d{u}^{2} \) for smooth functions \( {f}_{i} \) on \( {\mathbb{R}}^{2} \) . (a) Show that \( {d\alpha } = 0 \) iff \( {D}_{2}{f}_{1} = {D}_{1}{f}_{2} \) . (b) Show that if \( \alpha \) is closed, then it is exact. Thus, \( {H}^{1}\left( {\mathbb{R}}^{2}\right) = 0 \) . Hint: Fix any \( \left( {a, b}\right) \in {\mathbb{R}}^{2} \), and show that \( \alpha = {df} \), where \( f \) is defined by \( f\left( {x, y}\right) = {\int }_{a}^{x}{f}_{1}\left( {t, b}\right) {dt} + {\int }_{b}^{y}{f}_{2}\left( {x, t}\right) {dt}. \) EXERCISE 33. If \( \omega \) is a \( k \) -form on \( M \), and \( X \) a vector field with flow \( {\Phi }_{t} \) , one defines the Lie derivative of \( \omega \) with respect to \( X \) to be the \( k \) -form given by \[ \left( {{L}_{X}\omega }\right) \left( p\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{1}{t}\left\lbrack {\left( {{\Phi }_{t}^{ * }\omega }\right) \left( p\right) - \omega \left( p\right) }\right\rbrack ,\;p \in M. \] (a) Show that \( {L}_{X}f = {Xf} \) for \( f \in {A}_{0}\left( M\right) \) . (b) Show that \( {L}_{X}\left( {{\omega }_{1} \land \cdots \land {\omega }_{k}}\right) = \mathop{\sum }\limits_{i}{\omega }_{1} \land \cdots \land {L}_{X}{\omega }_{i} \land \cdots \land {\omega }_{k} \) for 1-forms \( {\omega }_{i} \) . (c) Show that \( {L}_{X} \circ d = d \circ {L}_{X} \) on \( {A}_{0}\left( M\right) \) . (d) Use (a) through (c) to show that \( {L}_{X} \circ d = d \circ {L}_{X} \) on \( A\left( M\right) \) . EXERCISE 34. 
Given a vector field \( X \) on \( M \), interior multiplication \( i\left( X\right) \) : \( {A}_{k}\left( M\right) \rightarrow {A}_{k - 1}\left( M\right) \) by \( X \) is defined by \[ \left( {i\left( X\right) \omega }\right) \left( {{X}_{1},\ldots ,{X}_{k - 1}}\right) \mathrel{\text{:=}} \omega \left( {X,{X}_{1},\ldots ,{X}_{k - 1}}\right) ,\;\omega \in {A}_{k}\left( M\right) ,\;{X}_{i} \in \mathfrak{X}\left( M\right) . \] Prove that \( {L}_{X} = i\left( X\right) \circ d + d \circ i\left( X\right) \) (see Exercise 33). ## 12. Integration on Chains We are now ready to generalize integration on Euclidean space to manifolds. Thanks to the work done in the preceding sections, we will be able to integrate differential forms rather than functions; one advantage lies in that the change of variables formula for integrals is particularly simple for differential forms. Definition 12.1. A singular \( k \) -cube in a manifold \( {M}^{n} \) is a differentiable map \( c : {\left\lbrack 0,1\right\rbrack }^{k} \rightarrow M \) . (For \( k = 0 \), define \( {\left\lbrack 0,1\right\rbrack }^{0} = \{ 0\} \), so that a singular 0 -cube is determined by one point \( c\left( 0\right) \in M \) ). The standard \( k \) -cube is the inclusion map \( {I}^{k} : {\left\lbrack 0,1\right\rbrack }^{k} \hookrightarrow {\mathbb{R}}^{k} \) . Definition 12.2. Let \( \omega \) be a \( k \) -form on \( {\left\lbrack 0,1\right\rbrack }^{k} \), and write \( \omega = {fd}{u}^{1} \land \cdots \land \) \( d{u}^{k} \), where \( f = \omega \left( {{D}_{1},\ldots ,{D}_{k}}\right) \) . 
The integral of \( \omega \) over \( {\left\lbrack 0,1\right\rbrack }^{k} \) is defined to be (12.1) \[ {\int }_{{\left\lbrack 0,1\right\rbrack }^{k}}\omega = {\int }_{{\left\lbrack 0,1\right\rbrack }^{k}}f \] If \( \omega \) is a \( k \) -form on a manifold \( M, k > 0 \), and \( c \) is a singular \( k \) -cube in \( M \) , the integral of \( \omega \) over \( c \) is (12.2) \[ {\int }_{c}\omega = {\int }_{{\left\lbrack 0,1\right\rbrack }^{k}}{c}^{ * }\omega \] where the right side is defined in (12.1). For \( k = 0,{\int }_{c}\omega \mathrel{\text{:=}} \omega \left( {c\left( 0\right) }\right) \) . EXAMPLES AND REMARKS 12.1. (i) In classical calculus, a vector field on the plane is a differentiable map \( F = \left( {f, g}\right) : {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{2} \) . If \( c : \left\lbrack {0,1}\right\rbrack \rightarrow {\mathbb{R}}^{2} \) is a curve (or a singular 1-c
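Definition 12.2 is directly computable: parametrize the cube, pull the form back, and integrate the resulting function over \( \left\lbrack {0,1}\right\rbrack \). A numerical sketch (invented names; midpoint rule): for \( \omega = - y\,{dx} + x\,{dy} \), which on the unit circle restricts to the form \( \alpha \) of Example 11.1 and Exercise 31, the integral over the singular 1-cube \( c\left( t\right) = \left( {\cos {2\pi t},\sin {2\pi t}}\right) \) is \( {2\pi } \neq 0 \), so \( \alpha \) is closed but not exact.

```python
import math

TWO_PI = 2.0 * math.pi

def integrate_1form(c, dc, P, Q, steps=10000):
    """∫_c ω for ω = P dx + Q dy over a singular 1-cube c: [0,1] -> R^2.
    Following Definition 12.2 this is ∫_[0,1] c*ω, and the pullback is
    c*ω = (P(c(t)) c1'(t) + Q(c(t)) c2'(t)) dt  (midpoint rule below)."""
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        cx, cy = c(t)
        dx, dy = dc(t)
        total += (P(cx, cy) * dx + Q(cx, cy) * dy) / steps
    return total

# c runs once around the unit circle; ω = -y dx + x dy pulls back to 2π dt
c  = lambda t: (math.cos(TWO_PI * t), math.sin(TWO_PI * t))
dc = lambda t: (-TWO_PI * math.sin(TWO_PI * t), TWO_PI * math.cos(TWO_PI * t))
val = integrate_1form(c, dc, lambda x, y: -y, lambda x, y: x)
print(val)  # ≈ 6.283185... = 2π
```

Here the pullback \( {c}^{ * }\omega \) is the constant form \( {2\pi }\,{dt} \), so the quadrature is exact up to rounding.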
1112_(GTM267)Quantum Theory for Mathematicians
Definition 9.5
Definition 9.5 An unbounded operator \( A \) on \( \mathbf{H} \) is self-adjoint if \[ \operatorname{Dom}\left( {A}^{ * }\right) = \operatorname{Dom}\left( A\right) \] and \( {A}^{ * }\phi = {A\phi } \) for all \( \phi \in \operatorname{Dom}\left( A\right) \) . We may reformulate the definition of self-adjointness by saying that \( A \) is self-adjoint if \( {A}^{ * } \) is equal to \( A \), provided that equality of unbounded operators is understood to include equality of domains. Every self-adjoint operator is symmetric (by Proposition 9.4), but there exist many operators that are symmetric without being self-adjoint. In light of Proposition 9.4, a symmetric operator is self-adjoint if and only if \( \operatorname{Dom}\left( {A}^{ * }\right) = \operatorname{Dom}\left( A\right) \) . In trying to show that a symmetric operator is self-adjoint, the difficulty lies in showing that \( \operatorname{Dom}\left( {A}^{ * }\right) \) is no bigger than \( \operatorname{Dom}\left( A\right) \) . Definition 9.6 An unbounded operator \( A \) on \( \mathbf{H} \) is said to be closed if the graph of \( A \) is a closed subset of \( \mathbf{H} \times \mathbf{H} \) . An unbounded operator \( A \) on \( \mathbf{H} \) is said to be closable if the closure in \( \mathbf{H} \times \mathbf{H} \) of the graph of \( A \) is the graph of a function. If \( A \) is closable, then the closure \( {A}^{cl} \) of \( A \) is the operator with graph equal to the closure of the graph of \( A \) . To be more explicit, an operator \( A \) is closed if and only if the following condition holds: Suppose a sequence \( {\psi }_{n} \) belongs to \( \operatorname{Dom}\left( A\right) \) and suppose that there exist vectors \( \psi \) and \( \phi \) in \( \mathbf{H} \) with \( {\psi }_{n} \rightarrow \psi \) and \( A{\psi }_{n} \rightarrow \phi \) . Then \( \psi \) belongs to \( \operatorname{Dom}\left( A\right) \) and \( {A\psi } = \phi \) . 
Regarding closability, an operator \( A \) is not closable if there exist two elements in the closure of the graph of \( A \) of the form \( \left( {\phi ,\psi }\right) \) and \( \left( {\phi ,\chi }\right) \), with \( \psi \neq \chi \) . Another way of putting it is to say that an operator \( A \) is closable if there exists some closed extension of it, in which case the closure of \( A \) is the smallest closed extension of \( A \) . The notion of the closure of a (closable) operator is useful because it sweeps away some of the arbitrariness in the choice of a domain of an operator. If we consider, for example, the operator \( A = - i\hslash d/{dx} \) as an unbounded operator on \( {L}^{2}\left( \mathbb{R}\right) \), there are many different reasonable choices for \( \operatorname{Dom}\left( A\right) \), including (1) the space of \( {C}^{\infty } \) functions of compact support, (2) the Schwartz space (Definition A.15), and (3) the space of continuously differentiable functions \( \psi \) for which both \( \psi \) and \( {\psi }^{\prime } \) belong to \( {L}^{2}\left( \mathbb{R}\right) \) . As it turns out, each of these three choices for \( \operatorname{Dom}\left( A\right) \) leads to the same operator \( {A}^{cl} \) . Note that we are not claiming that every choice for \( \operatorname{Dom}\left( A\right) \) leads to the same closure; nevertheless, it is often the case that many reasonable choices do lead to the same closure. Definition 9.7 An unbounded operator \( A \) on \( \mathbf{H} \) is said to be essentially self-adjoint if \( A \) is symmetric and closable and \( {A}^{cl} \) is self-adjoint. Actually, as we shall see in the next section, a symmetric operator is always closable. Many symmetric operators fail to be even essentially self-adjoint. We will see examples of such operators in Sects. 9.6 and 9.10. Section 9.5 gives some reasonably simple criteria for determining when a symmetric operator is essentially self-adjoint. 
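On any of the three domains just listed, the symmetry of \( A = - i\hslash d/{dx} \) comes from integration by parts, with the boundary terms killed by decay or compact support. A numerical sketch of this (with \( \hslash = 1 \), a grid discretization, and invented bump functions; an illustration of the mechanism, not a treatment of any of the precise domains above):

```python
import numpy as np

N = 20001
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

def bump(c, w):
    """A smooth bump function supported in (c - w, c + w)."""
    u = (x - c) / w
    out = np.zeros_like(x)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - u[inside]**2))
    return out

# Two smooth states with compact support well inside (0, 1)
psi = bump(0.35, 0.2) * np.exp(2j * np.pi * 5 * x)
phi = bump(0.55, 0.2).astype(complex)

A_psi = -1j * np.gradient(psi, x)        # A = -i d/dx  (ħ = 1)
A_phi = -1j * np.gradient(phi, x)

lhs = np.sum(np.conj(psi) * A_phi) * h   # <ψ, Aφ>
rhs = np.sum(np.conj(A_psi) * phi) * h   # <Aψ, φ>

# the boundary terms from integration by parts vanish, so A acts symmetrically:
assert abs(lhs - rhs) < 1e-8
```

Because the supports stay away from the endpoints, the discrete summation-by-parts identity for central differences holds exactly, so the two inner products agree to rounding error.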
## 9.3 Elementary Properties of Adjoints and Closed Operators In this section, we spell out some of the most basic and useful properties of adjoints and closures of unbounded operators. In Sect. 9.5, we will draw on these results to prove some more substantial results. In what follows, if we say that two operators "coincide," it means that they have the same domain and that they are equal on that common domain. Proposition 9.8 1. If \( A \) is an unbounded operator on \( \mathbf{H} \), then the graph of the operator \( {A}^{ * } \) (which may or may not be densely defined) is closed in \( \mathbf{H} \times \mathbf{H} \) . 2. A symmetric operator is always closable. Proof. Suppose \( {\psi }_{n} \) is a sequence in the domain of \( {A}^{ * } \) that converges to some \( \psi \in \mathbf{H} \) . Suppose also that \( {A}^{ * }{\psi }_{n} \) converges to some \( \phi \in \mathbf{H} \) . Then \( \left\langle {{\psi }_{n}, A \cdot }\right\rangle = \left\langle {{A}^{ * }{\psi }_{n}, \cdot }\right\rangle \) and for any \( \chi \in \operatorname{Dom}\left( A\right) \), we have \[ \langle \psi ,{A\chi }\rangle = \mathop{\lim }\limits_{{n \rightarrow \infty }}\left\langle {{\psi }_{n},{A\chi }}\right\rangle = \mathop{\lim }\limits_{{n \rightarrow \infty }}\left\langle {{A}^{ * }{\psi }_{n},\chi }\right\rangle = \langle \phi ,\chi \rangle . \] This shows that \( \psi \) belongs to the domain of \( {A}^{ * } \) and that \( {A}^{ * }\psi = \phi \), establishing that the graph of \( {A}^{ * } \) is closed. If \( A \) is symmetric, \( {A}^{ * } \) is an extension of \( A \) . Since, as we have just proved, \( {A}^{ * } \) is closed, \( A \) has a closed extension and is therefore closable. ∎ Corollary 9.9 If \( A \) is a symmetric operator with \( \operatorname{Dom}\left( A\right) = \mathbf{H} \), then \( A \) is bounded. Proof. Since \( A \) is symmetric, it is closable by Proposition 9.8. 
But since the domain of \( A \) is already all of \( \mathbf{H} \), the closure of \( A \) must coincide with \( A \) itself. (The closure of \( A \) always agrees with \( A \) on \( \operatorname{Dom}\left( A\right) \), which in this case is all of \( \mathbf{H} \) .) Thus, \( A \) is a closed operator defined on all of \( \mathbf{H} \), and the closed graph theorem (Theorem A.39) implies that \( A \) is bounded. Proposition 9.10 If \( A \) is a closable operator on \( \mathbf{H} \), then the adjoint of \( {A}^{cl} \) coincides with the adjoint of \( A \) . Proof. Suppose that for some \( \psi \in \mathbf{H} \) there exists a \( \phi \) such that \( \left\langle {\psi ,{A}^{cl}\chi }\right\rangle = \) \( \langle \phi ,\chi \rangle \) for all \( \chi \in \operatorname{Dom}\left( {A}^{cl}\right) \) . Since \( {A}^{cl} \) is an extension of \( A \), it follows that \( \langle \psi ,{A\chi }\rangle = \langle \phi ,\chi \rangle \) for all \( \chi \in \operatorname{Dom}\left( A\right) \) . This shows that \( \operatorname{Dom}\left( {A}^{ * }\right) \supset \) \( \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \) and that \( {A}^{ * } \) agrees with \( {\left( {A}^{cl}\right) }^{ * } \) on \( \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \) . In the other direction, suppose for some \( \psi \in \mathbf{H} \) there exists a \( \phi \) such that \( \langle \psi ,{A\chi }\rangle = \langle \phi ,\chi \rangle \) for all \( \chi \in \operatorname{Dom}\left( A\right) \) . Suppose now \( \xi \in \operatorname{Dom}\left( {A}^{cl}\right) \) with \( {A}^{cl}\xi = \eta \) . Then there exists a sequence \( {\chi }_{n} \) in \( \operatorname{Dom}\left( A\right) \) with \( {\chi }_{n} \rightarrow \xi \) and \( A{\chi }_{n} \rightarrow \eta \), and we have \[ \left\langle {\psi, A{\chi }_{n}}\right\rangle = \left\langle {\phi ,{\chi }_{n}}\right\rangle \] for all \( n \) . 
Letting \( n \) tend to infinity, we obtain \( \langle \psi ,\eta \rangle = \langle \phi ,\xi \rangle \), or \( \left\langle {\psi ,{A}^{cl}\xi }\right\rangle = \langle \phi ,\xi \rangle \). This shows that \( \psi \in \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \) and \( {\left( {A}^{cl}\right) }^{ * }\psi = \phi \). Thus, \( \operatorname{Dom}\left( {A}^{ * }\right) \subset \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \). ∎

Proposition 9.11 If \( A \) is essentially self-adjoint, then \( {A}^{cl} \) is the unique self-adjoint extension of \( A \).

Proof. Suppose \( B \) is a self-adjoint extension of \( A \). Since \( B = {B}^{ * } \), \( B \) is closed and is, therefore, an extension of \( {A}^{cl} \). It then follows from the definition of the adjoint that \( \operatorname{Dom}\left( {B}^{ * }\right) \subset \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \); since \( A \) is essentially self-adjoint, \( {\left( {A}^{cl}\right) }^{ * } = {A}^{cl} \), so \( \operatorname{Dom}\left( {B}^{ * }\right) \subset \operatorname{Dom}\left( {A}^{cl}\right) \). Thus, we have
\[ \operatorname{Dom}\left( {B}^{ * }\right) \subset \operatorname{Dom}\left( {A}^{cl}\right) \subset \operatorname{Dom}\left( B\right) . \]
Since \( B \) is self-adjoint, all three of the above sets must be equal, so actually \( B = {A}^{cl} \). ∎

Proposition 9.12 If \( A \) is an unbounded operator on \( \mathbf{H} \), then
\[ {\left( \operatorname{Range}\left( A\right) \right) }^{ \bot } = \ker \left( {A}^{ * }\right) . \]

Proof. First assume that \( \psi \in {\left( \operatorname{Range}\left( A\right) \right) }^{ \bot } \). Then for all \( \phi \in \operatorname{Dom}\left( A\right) \) we have
\[ \langle \psi ,{A\phi }\rangle = 0. \]
That is to say, the linear functional \( \langle \psi, A \cdot \rangle \) is bounded (in fact, zero) on \( \operatorname{Dom}\left( A\right) \). Thus, from the definition of the adjoint, we conclude that \( \psi \in \operatorname{Dom}\left( {A}^{ * }\right) \) and \( {A}^{ * }\psi = 0 \). Meanwhile, suppose that \( \psi \) is in \( \operatorname{Dom}\left( {A}^{ * }\right) \) and that \( {A}^{ * }\psi = 0 \).
The only way this can happen is if the linear functional \( \langle \psi, A \cdot \rangle \) is zero on \( \operatorname{Dom}\left( A\right) \) , which means that \( \psi \) is orthogonal to the image of \( A \) . Proposition 9.13 Suppose \( A \) is an
## (GTM 222) Lie Groups, Lie Algebras, and Representations — Definition 4.21
Definition 4.21. Suppose \( G \) is a matrix Lie group and \( \Pi \) is a representation of \( G \) acting on a finite-dimensional vector space \( V \). Then the dual representation \( {\Pi }^{ * } \) to \( \Pi \) is the representation of \( G \) acting on \( {V}^{ * } \) and given by
\[ {\Pi }^{ * }\left( g\right) = {\left\lbrack \Pi \left( {g}^{-1}\right) \right\rbrack }^{tr}. \] (4.9)
If \( \pi \) is a representation of a Lie algebra \( \mathfrak{g} \) acting on a finite-dimensional vector space \( V \), then \( {\pi }^{ * } \) is the representation of \( \mathfrak{g} \) acting on \( {V}^{ * } \) and given by
\[ {\pi }^{ * }\left( X\right) = - \pi {\left( X\right) }^{tr}. \] (4.10)
Using (4.8), it is easy to check that both \( {\Pi }^{ * } \) and \( {\pi }^{ * } \) are actually representations. (Here the inverse on the right-hand side of (4.9) and the minus sign on the right-hand side of (4.10) are essential.) The dual representation is also called the contragredient representation.

Proposition 4.22. If \( \Pi \) is a representation of a matrix Lie group \( G \), then (1) \( {\Pi }^{ * } \) is irreducible if and only if \( \Pi \) is irreducible and (2) \( {\left( {\Pi }^{ * }\right) }^{ * } \) is isomorphic to \( \Pi \). Similar statements apply to Lie algebra representations.

Proof. See Exercise 6.

## 4.4 Complete Reducibility

Much of representation theory is concerned with studying irreducible representations of a group or Lie algebra. In favorable cases, knowing the irreducible representations leads to a description of all representations.

Definition 4.23. A finite-dimensional representation of a group or Lie algebra is said to be completely reducible if it is isomorphic to a direct sum of a finite number of irreducible representations.

Definition 4.24. A group or Lie algebra is said to have the complete reducibility property if every finite-dimensional representation of it is completely reducible.
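As a quick sanity check on Definition 4.21 (the snippet below is my own illustration, not from the text): for concrete matrices, \( g \mapsto {\left( {g}^{-1}\right) }^{tr} \) preserves products, whereas transpose alone reverses them, which is why the inverse in (4.9) is essential.

```python
# My own sketch: g -> (g^{-1})^T is a homomorphism, g -> g^T is not.
# 2x2 integer matrices with det 1, so inverses are exact.

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

def inverse(A):  # valid since det A = 1
    (a, b), (c, d) = A
    assert a * d - b * c == 1
    return ((d, -b), (-c, a))

def dual(A):  # Pi*(g) = [Pi(g^{-1})]^tr, as in (4.9)
    return transpose(inverse(A))

A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))   # A and B do not commute

# dual is a homomorphism: dual(AB) = dual(A) dual(B).
assert dual(matmul(A, B)) == matmul(dual(A), dual(B))

# transpose alone reverses products on this noncommuting pair:
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
assert transpose(matmul(A, B)) != matmul(transpose(A), transpose(B))
```

The pair \( A, B \) is the standard noncommuting pair of unipotent matrices; any two noncommuting matrices of determinant 1 would serve equally well.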
As it turns out, most groups and Lie algebras do not have the complete reducibility property. Nevertheless, many interesting example groups and Lie algebras do have this property, as we will see in this section and Sect. 10.3. Example 4.25. Let \( \Pi : \mathbb{R} \rightarrow \mathrm{{GL}}\left( {2;\mathbb{C}}\right) \) be given by \[ \Pi \left( x\right) = \left( \begin{array}{ll} 1 & x \\ 0 & 1 \end{array}\right) \] Then \( \Pi \) is not completely reducible. Proof. Direct calculation shows that \( \Pi \) is, in fact, a representation of \( \mathbb{R} \) . If \( \left\{ {{e}_{1},{e}_{2}}\right\} \) is the standard basis for \( {\mathbb{C}}^{2} \), then clearly the span of \( {e}_{1} \) is an invariant subspace. We now claim that \( \left\langle {e}_{1}\right\rangle \) is the only nontrivial invariant subspace for \( \Pi \) . To see this, suppose \( V \) is a nonzero invariant subspace and suppose \( V \) contains a vector not in the span of \( {e}_{1} \), say, \( v = a{e}_{1} + b{e}_{2} \) with \( b \neq 0 \) . Then \[ \Pi \left( 1\right) v - v = b{e}_{1} \] also belongs to \( V \) . Thus, \( {e}_{1} \) and \( {e}_{2} = \left( {v - a{e}_{1}}\right) /b \) belong to \( V \), showing that \( V = {\mathbb{C}}^{2} \) . We conclude, then, that \( {\mathbb{C}}^{2} \) does not decompose as a direct sum of irreducible invariant subspaces. Proposition 4.26. If \( V \) is a completely reducible representation of a group or Lie algebra, then the following properties hold. 1. For every invariant subspace \( U \) of \( V \), there is another invariant subspace \( W \) such that \( V \) is the direct sum of \( U \) and \( W \) . 2. Every invariant subspace of \( V \) is completely reducible. Proof. For Point 1, suppose that \( V \) decomposes as \[ V = {U}_{1} \oplus {U}_{2} \oplus \cdots \oplus {U}_{k} \] where the \( {U}_{j} \) ’s are irreducible invariant subspaces, and that \( U \) is any invariant subspace of \( V \) . 
If \( U \) is all of \( V \), then we can take \( W = \{ 0\} \) and we are done. If \( U \neq V \), there must be some \( {j}_{1} \) such that \( {U}_{{j}_{1}} \) is not contained in \( U \). Since \( {U}_{{j}_{1}} \) is irreducible, it follows that the invariant subspace \( {U}_{{j}_{1}} \cap U \) must be \( \{ 0\} \). Suppose now that \( U + {U}_{{j}_{1}} = V \). If so, the sum is direct (since \( {U}_{{j}_{1}} \cap U = \{ 0\} \)) and we are done. If \( U + {U}_{{j}_{1}} \neq V \), there is some \( {j}_{2} \) such that \( U + {U}_{{j}_{1}} \) does not contain \( {U}_{{j}_{2}} \), in which case \( \left( {U + {U}_{{j}_{1}}}\right) \cap {U}_{{j}_{2}} = \{ 0\} \). Proceeding on in the same way, we must eventually obtain some family \( {j}_{1},{j}_{2},\ldots ,{j}_{l} \) such that \( U + {U}_{{j}_{1}} + \cdots + {U}_{{j}_{l}} = V \) and the sum is direct. Then \( W \mathrel{\text{:=}} {U}_{{j}_{1}} + \cdots + {U}_{{j}_{l}} \) is the desired complement to \( U \).

For Point 2, suppose \( U \) is an invariant subspace of \( V \). We first establish that \( U \) has the "invariant complement property" of Point 1. Suppose, then, that \( X \) is another invariant subspace of \( V \) with \( X \subset U \). By Point 1, we can find an invariant subspace \( Y \) such that \( V = X \oplus Y \). Let \( Z = Y \cap U \), which is then an invariant subspace. We want to show that \( U = X \oplus Z \). For all \( u \in U \), we can write \( u = x + y \) with \( x \in X \) and \( y \in Y \). But since \( X \subset U \), we have \( x \in U \) and therefore \( y = u - x \in U \). Thus, \( y \in Z = Y \cap U \). We have shown, then, that every \( u \in U \) can be written as the sum of an element of \( X \) and an element of \( Z \). Furthermore, \( X \cap Z \subset X \cap Y = \{ 0\} \), so actually \( U \) is the direct sum of \( X \) and \( Z \).

We may now easily show that \( U \) is completely reducible. If \( U \) is irreducible, we are done.
If not, \( U \) has a nontrivial invariant subspace \( X \) and thus \( U \) decomposes as \( U = X \oplus Z \) for some invariant subspace \( Z \). If \( X \) and \( Z \) are irreducible, we are done, and if not, we proceed on in the same way. Since \( U \) is finite dimensional, this process must eventually terminate with \( U \) being decomposed as a direct sum of irreducibles.

Proposition 4.27. If \( G \) is a matrix Lie group and \( \Pi \) is a finite-dimensional unitary representation of \( G \), then \( \Pi \) is completely reducible. Similarly, if \( \mathfrak{g} \) is a real Lie algebra and \( \pi \) is a finite-dimensional "unitary" representation of \( \mathfrak{g} \) (meaning that \( \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \) for all \( X \in \mathfrak{g} \)), then \( \pi \) is completely reducible.

Proof. Let \( V \) denote the Hilbert space on which \( \Pi \) acts and let \( \langle \cdot , \cdot \rangle \) denote the inner product on \( V \). If \( W \subset V \) is an invariant subspace, let \( {W}^{ \bot } \) be the orthogonal complement of \( W \), so that \( V \) is the direct sum of \( W \) and \( {W}^{ \bot } \). We claim that \( {W}^{ \bot } \) is also an invariant subspace for \( \Pi \) or \( \pi \). To see this, note that since \( \Pi \) is unitary, \( \Pi {\left( A\right) }^{ * } = \Pi {\left( A\right) }^{-1} = \Pi \left( {A}^{-1}\right) \) for all \( A \in G \). Then, for any \( w \in W \) and any \( v \in {W}^{ \bot } \), we have
\[ \langle \Pi \left( A\right) v, w\rangle = \left\langle {v,\Pi {\left( A\right) }^{ * }w}\right\rangle = \left\langle {v,\Pi \left( {A}^{-1}\right) w}\right\rangle = \left\langle {v,{w}^{\prime }}\right\rangle = 0. \]
In the last step, we have used that \( {w}^{\prime } = \Pi \left( {A}^{-1}\right) w \) is in \( W \), since \( W \) is invariant. This shows that \( \Pi \left( A\right) v \) is orthogonal to every element of \( W \), as claimed.
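The displayed computation can be tested numerically in a toy case. The following sketch is my own (the order-two "swap" representation of \( \mathbb{Z}_2 \) on \( \mathbb{R}^2 \) is an assumed example, not one from the text): \( W = \operatorname{span}\{(1,1)\} \) is invariant, and the code checks that its orthogonal complement \( \operatorname{span}\{(1,-1)\} \) is invariant as well.

```python
# My own toy example: Pi(1) is the coordinate swap on R^2, an orthogonal
# (hence unitary) matrix of order 2; Pi(0) is the identity.

SWAP = ((0.0, 1.0), (1.0, 0.0))

def apply(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

w = (1.0, 1.0)    # spans the invariant subspace W
v = (1.0, -1.0)   # spans the orthogonal complement of W

# Unitarity: SWAP preserves the inner product.
assert dot(apply(SWAP, w), apply(SWAP, v)) == dot(w, v)

# The complement is invariant: SWAP v lands back in span{v} ...
sv = apply(SWAP, v)
assert sv == (-1.0, 1.0)  # = -v

# ... equivalently, SWAP v stays orthogonal to W, as in the displayed computation.
assert dot(sv, w) == 0.0
```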
A similar argument, with \( \Pi \left( {A}^{-1}\right) \) replaced by \( - \pi \left( X\right) \), shows that the orthogonal complement of an invariant subspace for \( \pi \) is also invariant. We have established, then, that for a unitary representation, the orthogonal complement of an invariant subspace is again invariant. Suppose now that \( V \) is not irreducible. Then we can find an invariant subspace \( W \) that is neither \( \{ 0\} \) nor \( V \), and we decompose \( V \) as \( W \oplus {W}^{ \bot } \) . Then \( W \) and \( {W}^{ \bot } \) are both invariant subspaces and thus unitary representations of \( G \) in their own right. Then \( W \) is either irreducible or it splits as an orthogonal direct sum of invariant subspaces, and similarly for \( {W}^{ \bot } \) . We continue this process, and since \( V \) is finite dimensional, it cannot go on forever, and we eventually arrive at a decomposition of \( V \) as a direct sum of irreducible invariant subspaces. Theorem 4.28. If \( G \) is a compact matrix Lie group, every finite-dimensional representation of \( G \) is completely reducible. See also Sect. 10.3 for a similar result for semisimple Lie algebras. The argument below is sometimes called "Weyl's unitarian trick" for the role of unitarity in the proof. We require a notion of integration over matrix Lie groups that is invariant under the right action of the group. One way to construct such a right-invariant integral is to construct a right-invariant measure on \( G \), known as a Haar measure. It is, however, simpler to introduce the integral by means of a right-invariant differential form on \( G \) . (See Appendix B for a quick introduction to the notion of differential forms.) If \( G \subset {M}_{n}\left( \mathbb{C}\right) \) is a matrix Lie group, then the tangent space to \( G \) at the identity is the Lie algebra \( \mathfrak{g} \) of \( G \) (Corollary 3.46). 
It is then easy to see that the tangent space \( {T}_{A}G \) at any point \( A \in G \) is the space of vectors of the form \( {XA} \) with \( X \in \mathfrak{g} \) . If the dimension of \( \mathfrak{g} \) as a real vector space is \(
## [肖梁] Abstract Algebra (2022F) — Definition 19.3.1
Definition 19.3.1. The \( q \)-Frobenius automorphism of \( {\mathbb{F}}_{{q}^{n}} \) is the automorphism
\[ {\phi }_{q} : {\mathbb{F}}_{{q}^{n}} \rightarrow {\mathbb{F}}_{{q}^{n}},\;a \mapsto {a}^{q}. \]
(Writing \( q = {p}^{r} \), it is the same as composing the \( p \)-Frobenius \( r \) times.)

Lemma 19.3.2. The Galois group of \( {\mathbb{F}}_{{q}^{n}} \) over \( {\mathbb{F}}_{q} \) is isomorphic to \( {\mathbf{Z}}_{n} \), generated by \( {\phi }_{q} \).

Proof. For the \( q \)-Frobenius automorphism \( {\phi }_{q} \) of \( {\mathbb{F}}_{{q}^{n}} \), \( {\phi }_{q}^{n}\left( b\right) = {b}^{{q}^{n}} = b \) for any \( b \in {\mathbb{F}}_{{q}^{n}} \). So \( {\phi }_{q}^{n} = \mathrm{{id}} \). Moreover, no smaller power of \( {\phi }_{q} \) is the identity: if \( {\phi }_{q}^{m} = \mathrm{{id}} \) for some \( 0 < m < n \), then every element of \( {\mathbb{F}}_{{q}^{n}} \) would be a root of \( {x}^{{q}^{m}} - x \), which has at most \( {q}^{m} < {q}^{n} \) roots. The rest is clear.

Example 19.3.3. We give an example of the Galois diagram for extensions of finite fields. ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_122_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_122_0.jpg)

## 19.4. Cyclotomic extension.

Definition 19.4.1. For a positive integer \( n \in \mathbb{N} \),
\[ {\mu }_{n} \mathrel{\text{:=}} \left\{ {n\text{th roots of unity in }\mathbb{C}}\right\} = \left\langle {\zeta }_{n}\right\rangle \cong {\mathbf{Z}}_{n}, \]
where \( {\zeta }_{n} = {e}^{{2\pi }\mathbf{i}/n} \). Define \( \mathbb{Q}\left( {\mu }_{n}\right) = \mathbb{Q}\left( {\zeta }_{n}\right) \subseteq \mathbb{C} \); it is a finite field extension of \( \mathbb{Q} \), called the \( n \)th cyclotomic extension of \( \mathbb{Q} \). A primitive \( n \)th root of unity is a generator of \( {\mu }_{n} \); it is equal to \( {\zeta }_{n}^{a} \) for some \( a \in {\mathbf{Z}}_{n}^{ \times } \). Define
\[ {\Phi }_{n}\left( x\right) \mathrel{\text{:=}} \mathop{\prod }\limits_{{a \in {\mathbf{Z}}_{n}^{ \times }}}\left( {x - {\zeta }_{n}^{a}}\right) ; \]
it is called the \( n \)th cyclotomic polynomial.

Example 19.4.2. We have \( {\Phi }_{1}\left( x\right) = x - 1,{\Phi }_{2}\left( x\right) = x + 1,{\Phi }_{3}\left( x\right) = {x}^{2} + x + 1 \).

Lemma 19.4.3.
We have
\[ {x}^{n} - 1 = \mathop{\prod }\limits_{{d \mid n}}{\Phi }_{d}\left( x\right) . \]
Each \( {\Phi }_{n}\left( x\right) \) is a polynomial of degree \( \varphi \left( n\right) \) with coefficients in \( \mathbb{Z} \).

Proof. The first equality is easy: grouping the \( b \in {\mathbf{Z}}_{n} \) according to the order \( d \) of \( {\zeta }_{n}^{b} \) (so \( b = \left( {n/d}\right) i \) with \( i \in {\mathbf{Z}}_{d}^{ \times } \), and \( {\zeta }_{n}^{n/d} = {\zeta }_{d} \)),
(19.4.3.1)
\[ {x}^{n} - 1 = \mathop{\prod }\limits_{{b \in {\mathbf{Z}}_{n}}}\left( {x - {\zeta }_{n}^{b}}\right) = \mathop{\prod }\limits_{{d \mid n}}\mathop{\prod }\limits_{{i \in {\mathbf{Z}}_{d}^{ \times }}}\left( {x - {\zeta }_{d}^{i}}\right) = \mathop{\prod }\limits_{{d \mid n}}{\Phi }_{d}\left( x\right) . \]
We will prove that \( {\Phi }_{n}\left( x\right) \) has coefficients in \( \mathbb{Z} \) whose gcd is 1. Assume that this has been proved for all smaller \( n \). Then (19.4.3.1) and Gauss' lemma imply that \( {\Phi }_{n}\left( x\right) \) has coefficients in \( \mathbb{Z} \) whose gcd is 1.

Theorem 19.4.4. The polynomial \( {\Phi }_{n}\left( x\right) \) is irreducible in \( \mathbb{Q}\left\lbrack x\right\rbrack \). So \( \left\lbrack {\mathbb{Q}\left( {\zeta }_{n}\right) : \mathbb{Q}}\right\rbrack = \varphi \left( n\right) \).

Proof. It suffices to show that \( {\Phi }_{n}\left( x\right) \) is irreducible in \( \mathbb{Z}\left\lbrack x\right\rbrack \). Let \( \zeta \) be a primitive \( n \)th root of unity in a splitting field of \( {\Phi }_{n}\left( x\right) \). We need to show that the minimal polynomial \( f\left( x\right) \mathrel{\text{:=}} {m}_{\zeta ,\mathbb{Q}}\left( x\right) \) of \( \zeta \) over \( \mathbb{Q} \) is equal to \( {\Phi }_{n}\left( x\right) \); it is clear that \( f\left( x\right) \mid {\Phi }_{n}\left( x\right) \). We will show that for any integer \( a \) relatively prime to \( n \), \( {\zeta }^{a} \) is a zero of \( f\left( x\right) \). We take a prime \( p \) not dividing \( n \). Claim: \( {\zeta }^{p} \) is also a zero of \( f\left( x\right) \).
This claim would imply the theorem: if \( a = {p}_{1}^{{\alpha }_{1}}\cdots {p}_{r}^{{\alpha }_{r}} \) is relatively prime to \( n \), then applying the Claim iteratively, one prime factor at a time (each intermediate power of \( \zeta \) is again a primitive \( n \)th root of unity with minimal polynomial \( f\left( x\right) \)), we conclude that \( {\zeta }^{a} \) is a zero of \( f\left( x\right) \). From this, we deduce that \( f\left( x\right) = {\Phi }_{n}\left( x\right) \).

Now, we prove the Claim. Suppose this is not true. Let \( g\left( x\right) = {m}_{{\zeta }^{p},\mathbb{Q}}\left( x\right) \) be the minimal polynomial of \( {\zeta }^{p} \) over \( \mathbb{Q} \). As \( f\left( x\right) \neq g\left( x\right) \), we have \( \gcd \left( {f\left( x\right), g\left( x\right) }\right) = 1 \) and thus
\[ f\left( x\right) g\left( x\right) \mid {\Phi }_{n}\left( x\right) . \]
On the other hand, \( g\left( {\zeta }^{p}\right) = 0 \) implies that \( \zeta \) is a zero of \( g\left( {x}^{p}\right) \). This implies that
\[ f\left( x\right) \mid g\left( {x}^{p}\right) \; \Rightarrow \;g\left( {x}^{p}\right) = f\left( x\right) h\left( x\right) \text{ in }\mathbb{Z}\left\lbrack x\right\rbrack . \]
Taking this equation modulo \( p \), we have
\[ \bar{g}{\left( x\right) }^{p} = \bar{g}\left( {x}^{p}\right) = \bar{f}\left( x\right) \bar{h}\left( x\right) \;\text{ in }{\mathbb{F}}_{p}\left\lbrack x\right\rbrack . \]
This implies that \( \bar{f}\left( x\right) \) and \( \bar{g}\left( x\right) \) have a common factor in \( {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \). Yet \( \bar{f}\left( x\right) \bar{g}\left( x\right) \) divides \( {\bar{\Phi }}_{n}\left( x\right) \), which further divides \( {x}^{n} - 1 \). This implies that \( {x}^{n} - 1 \) has repeated zeros. But
\[ \left( {{x}^{n} - 1, D\left( {{x}^{n} - 1}\right) }\right) = \left( {{x}^{n} - 1, n{x}^{n - 1}}\right) = \left( {{x}^{n} - 1,{x}^{n - 1}}\right) = \left( 1\right) , \]
using that \( n \) is invertible in \( {\mathbb{F}}_{p} \). This contradicts the criterion for repeated zeros.
This completes the proof of irreducibility of \( {\Phi }_{n}\left( x\right) \). The following is clear.

Corollary 19.4.5. The Galois group of \( \mathbb{Q}\left( {\zeta }_{n}\right) /\mathbb{Q} \) is \( {\mathbf{Z}}_{n}^{ \times } \). Explicitly, for \( a \in {\mathbf{Z}}_{n}^{ \times } \), the associated automorphism is the composite
\[ \mathbb{Q}\left( {\zeta }_{n}\right) \overset{ \cong }{ \leftarrow }\mathbb{Q}\left\lbrack x\right\rbrack /\left( {{\Phi }_{n}\left( x\right) }\right) \overset{ \cong }{ \rightarrow }\mathbb{Q}\left( {\zeta }_{n}\right) , \]
where the left isomorphism sends \( x + \left( {{\Phi }_{n}\left( x\right) }\right) \) to \( {\zeta }_{n} \) and the right one sends it to \( {\zeta }_{n}^{a} \).

Corollary 19.4.6. For every finite abelian group \( G \), there exists a finite Galois extension \( K \) of \( \mathbb{Q} \) with Galois group \( G \).

Proof. Write \( G = {\mathbf{Z}}_{{n}_{1}} \times \cdots \times {\mathbf{Z}}_{{n}_{r}} \). For each \( {n}_{i} \), find distinct odd prime numbers \( {p}_{i} \) such that \( {p}_{i} \equiv 1{\;\operatorname{mod}\;{n}_{i}} \). Then \( G \) is a quotient of
\[ {\mathbf{Z}}_{{p}_{1}}^{ \times } \times \cdots \times {\mathbf{Z}}_{{p}_{r}}^{ \times } \simeq {\mathbf{Z}}_{{p}_{1} - 1} \times \cdots \times {\mathbf{Z}}_{{p}_{r} - 1}. \]
Say the kernel of this quotient is \( H \). Then we consider the field extensions: ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_123_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_123_0.jpg) The field \( K = \mathbb{Q}{\left( {\zeta }_{{p}_{1}\cdots {p}_{r}}\right) }^{H} \) is the field we seek.

Example 19.4.7. We illustrate the above proof by constructing a cyclic extension of \( \mathbb{Q} \) of degree 3. Write \( \zeta = {\zeta }_{7} \). Then we have ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_124_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_124_0.jpg) But we have to translate \( \ker \phi \) in terms of \( {\mathbf{Z}}_{7}^{ \times } \): it is the subset \( \{ 1, - 1\} \) (namely \( \left\{ {x \in {\mathbf{Z}}_{7}^{ \times } \mid {x}^{2} = 1{\;\operatorname{mod}\;7}}\right\} \)).
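As a computational aside (the helper code below is my own, not part of the lecture), the recursion \( {x}^{n} - 1 = \prod_{d \mid n}{\Phi }_{d}\left( x\right) \) from Lemma 19.4.3 computes cyclotomic polynomials by exact polynomial division over \( \mathbb{Z} \):

```python
# My own sketch: Phi_n(x) = (x^n - 1) / prod_{d | n, d < n} Phi_d(x).
# Polynomials are integer coefficient lists, lowest degree first.

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def poly_div(f, g):  # exact division; g is monic here
    f = f[:]
    q = [0] * (len(f) - len(g) + 1)
    for k in range(len(q) - 1, -1, -1):
        q[k] = f[k + len(g) - 1] // g[-1]
        for j, b in enumerate(g):
            f[k + j] -= q[k] * b
    assert all(c == 0 for c in f), "division was not exact"
    return q

def cyclotomic(n):
    num = [-1] + [0] * (n - 1) + [1]          # x^n - 1
    den = [1]
    for d in range(1, n):
        if n % d == 0:
            den = poly_mul(den, cyclotomic(d))
    return poly_div(num, den)

assert cyclotomic(1) == [-1, 1]               # x - 1
assert cyclotomic(2) == [1, 1]                # x + 1
assert cyclotomic(3) == [1, 1, 1]             # x^2 + x + 1
assert cyclotomic(6) == [1, -1, 1]            # x^2 - x + 1
```

The degrees produced this way are \( \varphi \left( n\right) \), consistent with Lemma 19.4.3 and Theorem 19.4.4.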
Taking trace, we have
\[ \theta \mathrel{\text{:=}} \zeta + {\zeta }^{-1} \in \mathbb{Q}{\left( \zeta \right) }^{\{ 1, - 1\} }. \]
We may compute the minimal polynomial of \( \theta \) as follows:
\[ {\theta }^{2} = {\zeta }^{2} + {\zeta }^{-2} + 2, \]
\[ {\theta }^{3} = {\zeta }^{3} + {\zeta }^{-3} + 3\left( {\zeta + {\zeta }^{-1}}\right) . \]
Using the fact that \( 1 + \zeta + {\zeta }^{2} + {\zeta }^{3} + {\zeta }^{-3} + {\zeta }^{-2} + {\zeta }^{-1} = 0 \), we deduce that
\[ {\theta }^{3} + {\theta }^{2} = {\zeta }^{3} + {\zeta }^{-3} + {\zeta }^{2} + {\zeta }^{-2} + 3\left( {\zeta + {\zeta }^{-1}}\right) + 2 = 2\left( {\zeta + {\zeta }^{-1}}\right) + 1 = {2\theta } + 1. \]
So the minimal polynomial of \( \theta \) is \( {x}^{3} + {x}^{2} - {2x} - 1 \).

The following is a converse of Corollary 19.4.6. It is one of the first achievements in the history of number theory, marking a starting point for the study of abelian extensions of number fields.

Theorem 19.4.8 (Kronecker-Weber). Every finite abelian extension \( K \) of \( \mathbb{Q} \) is contained in some \( \mathbb{Q}\left( {\zeta }_{n}\right) \).

## Extended reading after Lecture 19

In Example 19.2.4, we in fact used a standard tool to produce elements in a fixed field, called the "trace" map. We give a formal definition here.

Definition 19.4.9. For a finite extension \( K \) of \( F \) of degree \( n \) and \( a \in K \), we define its trace and norm over \( F \) as follows: viewing \( K \) as a finite-dimensional \( F \)-vector space (and choosing a basis), multiplication by \( a \) is an \( F \)-linear map \( {T}_{a} \) on \( K \) given by an \( n \times n \) matrix (with coefficients in \( F \)). Then the trace and the norm of \( a \) over \( F \) are
\[ \operatorname{Tr}\left( a\right) = {\operatorname{Tr}}_{K/F}\left( a\right) = \operatorname{Tr}\left( {T}_{a}\right) \;\text{ and }\;\operatorname{Nm}\left( a\right) = {\operatorname{Nm}}_{K/F}\left( a\right) = \det \left( {T}_{a}\right) .
\] As the traces and determinants of matrices do not depend on the choice of the ba
## (GTM 8) Axiomatic Set Theory — Definition 5.3
Definition 5.3. \( F \) is an ultrafilter for the partial order structure \( \mathbf{P} \) iff \( F \) is a maximal filter i.e., \( F \) is a filter for \( \mathbf{P} \) and for each filter \( {F}^{\prime } \) \[ F \subseteq {F}^{\prime } \rightarrow F = {F}^{\prime }. \] Remark. Note that an ultrafilter for a partial order structure \( \mathbf{P} = \) \( \langle P, \leq \rangle \) need not be a proper filter, i.e., \( P \) could be an ultrafilter. Indeed if \( P \) is compatible \( P \) is an ultrafilter. We next establish a connection between filters for Boolean algebras and filters for partial order structures. Definition 5.4. Let \( \mathbf{B} = \langle B, + , \cdot , - ,\mathbf{0},\mathbf{1}\rangle \) be a Boolean algebra with natural order \( \leq \) (see Definition 1.5). Let \( P = B - \{ \mathbf{0}\} \) and \( \mathbf{P} = \langle P, \leq \rangle \) . Then \( \mathbf{P} \) is the partial order structure associated with \( \mathbf{B} \) . Theorem 5.5. Let \( \mathbf{B} = \langle B, + , \cdot , - ,\mathbf{0},\mathbf{1}\rangle \) be a Boolean algebra and \( \mathbf{P} = \langle P, \leq \rangle \) be its associated partial order structure. If \( F \) is a nonempty subset of \( \left| \mathbf{B}\right| \) then \( F \) is a proper filter for the Boolean algebra \( \mathbf{B} \) iff \( F \) is a filter for the partial order structure \( \mathbf{P} \) . Proof. Let \( F \) be a proper filter for \( \mathbf{B} \) . Then \( \mathbf{0} \notin F \) i.e., \( F \subseteq B - \{ \mathbf{0}\} \) . If \( x, y \in F \) then \( {xy} \in F \) and hence there exists a \( z \in F \), namely \( {xy} \), such that \( z \leq x \) and \( z \leq y \) . Thus \( F \) is a filter for \( \mathbf{P} \) . Conversely let \( F \) be a filter for \( \mathbf{P} \) . If \( x, y \in F \) then there is a \( z \in F \) such that \( z \leq x \) and \( z \leq y \) . Therefore \( z \leq {xy} \) and since \( F \) is upward hereditary \( {xy} \in F \) . 
Furthermore since \( \mathbf{0} \notin P \) it follows that \( \mathbf{0} \notin F \), i.e., \( F \) is a proper filter for \( \mathbf{B} \).

Definition 5.6. Let \( \mathbf{P} = \langle P, \leq \rangle \) be a partial order structure and let \( \mathbf{F} \) be the set of all ultrafilters for \( \mathbf{P} \). Then
\[ N\left( p\right) \triangleq \{ F \in \mathbf{F} \mid p \in F\} ,\;p \in P, \]
\[ \mathbf{T} \triangleq \{ G \subseteq \mathbf{F} \mid \left( {\forall F \in G}\right) \left( {\exists p \in P}\right) \left\lbrack {F \in N\left( p\right) \subseteq G}\right\rbrack \} . \]

Theorem 5.7. \( \langle \mathbf{F},\mathbf{T}\rangle \) is a \( {T}_{1} \)-space.

Proof. First of all we shall show that \( \langle \mathbf{F},\mathbf{T}\rangle \) is a topological space. From Definition 5.6 it is clear that \( \varnothing \) and \( \mathbf{F} \) are each open. Let \( {G}_{1} \) and \( {G}_{2} \) be open sets and let \( F \in {G}_{1} \cap {G}_{2} \). Then there exist \( p \) and \( {p}^{\prime } \) such that
\[ F \in N\left( p\right) \subseteq {G}_{1} \]
and
\[ F \in N\left( {p}^{\prime }\right) \subseteq {G}_{2}. \]
Then \( p \in F \), \( {p}^{\prime } \in F \) and hence there exists a \( z \in F \) such that \( z \leq p \) and \( z \leq {p}^{\prime } \). Therefore, since every ultrafilter is upward hereditary,
\[ F \in N\left( z\right) \subseteq N\left( p\right) \cap N\left( {p}^{\prime }\right) \subseteq {G}_{1} \cap {G}_{2}, \]
and hence \( {G}_{1} \cap {G}_{2} \) is open. It is clear that if each \( {G}_{a}, a \in A \), is open then \( \bigcup \left\{ {{G}_{a} \mid a \in A}\right\} \) is also open. Thus \( \langle \mathbf{F},\mathbf{T}\rangle \) is a topological space.

Next we will show that \( \langle \mathbf{F},\mathbf{T}\rangle \) satisfies the \( {T}_{1} \)-axiom of separation. Let \( {F}_{1} \) and \( {F}_{2} \) be different elements of \( \mathbf{F} \).
From the maximality of \( {F}_{1} \) and of \( {F}_{2} \), there is a \( p \in {F}_{1} - {F}_{2} \) and a \( {p}^{\prime } \in {F}_{2} - {F}_{1} \). Then \( {F}_{2} \notin N\left( p\right) \) and \( {F}_{1} \notin N\left( {p}^{\prime }\right) \), i.e., \( \langle \mathbf{F},\mathbf{T}\rangle \) is a \( {T}_{1} \)-space. ∎

Remark. There exist examples of partial order structures such that the corresponding topological space \( \left\langle {{\mathbf{F}}^{\prime },{\mathbf{T}}^{\prime }}\right\rangle \) is not Hausdorff.

Theorem 5.8. Let \( \mathbf{P} = \langle P, \leq \rangle \) be the partial order structure associated with the Boolean algebra \( \mathbf{B} \). Then \( \langle \mathbf{F},\mathbf{T}\rangle \) is a Hausdorff space.

Proof. Suppose not. Then there would exist distinct \( {F}_{1},{F}_{2} \in \mathbf{F} \) such that
\[ \left( {\forall {p}_{1} \in {F}_{1}}\right) \left( {\forall {p}_{2} \in {F}_{2}}\right) \left( {\exists F \in \mathbf{F}}\right) \left\lbrack {{p}_{1} \in F \land {p}_{2} \in F}\right\rbrack . \]
Then \( \left( {\exists r \in F}\right) \left\lbrack {r \leq {p}_{1} \land r \leq {p}_{2}}\right\rbrack \), i.e., \( {p}_{1}{p}_{2} \neq \mathbf{0} \). If
\[ G = \left\{ {p \in P \mid \left( {\exists {p}_{1} \in {F}_{1}}\right) \left( {\exists {p}_{2} \in {F}_{2}}\right) \left\lbrack {{p}_{1}{p}_{2} \leq p}\right\rbrack }\right\} \]
then \( G \) is a filter for \( \mathbf{P} \). For, if \( p, q \in G \) then
\[ \left( {\exists {p}_{1},{q}_{1} \in {F}_{1}}\right) \left( {\exists {p}_{2},{q}_{2} \in {F}_{2}}\right) \left\lbrack {{p}_{1}{p}_{2} \leq p \land {q}_{1}{q}_{2} \leq q}\right\rbrack . \]
Since \( {F}_{1} \) and \( {F}_{2} \) are filters \( {p}_{1}{q}_{1} \in {F}_{1} \) and \( {p}_{2}{q}_{2} \in {F}_{2} \). Furthermore \( {p}_{1}{q}_{1}{p}_{2}{q}_{2} \leq {pq} \). Then \( {pq} \in G \), i.e., for each \( p, q \in G \) there is an \( r \in G \), namely \( {pq} \), such that \( r \leq p \) and \( r \leq q \). Clearly \( G \) is upward hereditary.
Since \( G \) is a filter, since \( {F}_{1} \subseteq G \) and \( {F}_{2} \subseteq G \), and since \( {F}_{1} \) and \( {F}_{2} \) are distinct ultrafilters, we have a contradiction. Therefore \( \langle \mathbf{F},\mathbf{T}\rangle \) is Hausdorff. ∎

Remark. In order for \( F \) to be a filter for a partial order structure \( \mathbf{P} = \langle P, \leq \rangle \) we require that \( F \) be strongly compatible (Definition 5.2). This raises a very natural question. Why do we not define a more general notion by requiring that \( F \) only be compatible? That is, instead of requiring \( F \) to satisfy 1 of Definition 5.2, why do we not require instead that \( F \) satisfy the weaker requirement
\[ \left( {\forall x, y \in F}\right) \left( {\exists z \in P}\right) \left\lbrack {z \leq x \land z \leq y}\right\rbrack ? \]
For purposes of discussion let us call filters as originally defined strong filters and filters as newly proposed, weak filters. The change from strong filter to weak filter also changes the notion of ultrafilter, for being maximal among weak filters is a stronger restriction than being maximal among strong filters. There are two interesting consequences of this fact. If ultrafilters are maximal among weak filters then the sets \( N\left( p\right) \) of Definition 5.6 form only a subbase for the topological space \( \langle \mathbf{F},\mathbf{T}\rangle \). Furthermore this space is Hausdorff. The fact that \( \langle \mathbf{F},\mathbf{T}\rangle \) satisfies the \( {T}_{2} \)-axiom of separation was first pointed out by H. Tanaka. Nevertheless, for the work that comes later we need strong filters and we want ultrafilter to mean a strong filter that is maximal among strong filters. Thus, later use brings us back to the definition as given.
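The strong/weak distinction in the Remark is easy to see on a small example. The following sketch is my own (the four-element "bowtie" poset is an assumed example, not from the text): the set \( F = \{ a, b\} \) is upward hereditary and compatible, hence a weak filter, but it is not strongly compatible, hence not a strong filter.

```python
# My own example: c and d each lie below both a and b, with c, d
# incomparable and a, b incomparable (plus reflexivity).
P = {"a", "b", "c", "d"}
pairs = {("c", "a"), ("c", "b"), ("d", "a"), ("d", "b")} | {(x, x) for x in P}
leq = lambda x, y: (x, y) in pairs

def upward_hereditary(F):
    return all(y in F for x in F for y in P if leq(x, y))

def strongly_compatible(F):   # condition 1 of Definition 5.2: witness z in F
    return all(any(leq(z, x) and leq(z, y) for z in F) for x in F for y in F)

def compatible(F):            # the proposed weaker condition: witness z in P
    return all(any(leq(z, x) and leq(z, y) for z in P) for x in F for y in F)

F = {"a", "b"}
assert upward_hereditary(F) and compatible(F)   # F is a weak filter ...
assert not strongly_compatible(F)               # ... but not a strong one
```

Adding a common lower bound to \( F \), as in \( \{ c, a, b\} \), restores strong compatibility, which is exactly what Definition 5.2 demands of the witness \( z \).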
We do not know whether every \( {T}_{1} \)-space is homeomorphic to a topological space \( \langle \mathbf{F},\mathbf{T}\rangle \) associated with some partial order structure or whether every Hausdorff space is homeomorphic to a topological space \( \left\langle {{\mathbf{F}}^{\prime },{\mathbf{T}}^{\prime }}\right\rangle \) associated with the partial order structure associated with a Boolean algebra.

Let \( \mathbf{P} = \langle P, \leq \rangle \) be a partial order structure and let \( \mathbf{F} \) be the set of all ultrafilters for \( \mathbf{P} \). In order to investigate some relations between the topologies on \( \mathbf{P} \) and \( \mathbf{F} \) we introduce the following notation.

Definition 5.9. Let \( {G}_{1} \) be an open subset of \( P \) and \( {G}_{2} \) be an open subset of \( \mathbf{F} \). Then
\[ {G}_{1}^{ * } \triangleq \bigcup \left\{ {N\left( p\right) \mid \left\lbrack p\right\rbrack \subseteq {G}_{1}}\right\} , \]
\[ {G}_{2}^{\Delta } \triangleq \bigcup \left\{ {\left\lbrack p\right\rbrack \mid N\left( p\right) \subseteq {G}_{2}}\right\} . \]

Remark. Clearly \( {G}_{1}^{ * } \) and \( {G}_{2}^{\Delta } \) are open subsets of \( \mathbf{F} \) and \( P \) respectively.

Theorem 5.10. If \( {G}_{1} \) and \( {G}_{2} \) are open subsets of \( P \) and \( \mathbf{F} \) respectively then
1. \( {G}_{1} \subseteq {G}_{1}^{*\Delta } \).
2. \( {G}_{2} \subseteq {G}_{2}^{\Delta * } \).

Proof. 1.
\[ a \in {G}_{1} \rightarrow \left\lbrack a\right\rbrack \subseteq {G}_{1} \]
\[ \rightarrow N\left( a\right) \subseteq {G}_{1}^{ * } \]
\[ \rightarrow \left\lbrack a\right\rbrack \subseteq {G}_{1}^{*\Delta }\;\left( \text{since }{G}_{1}^{ * }\text{ is an open subset of }\mathbf{F}\right) \]
\[ \rightarrow a \in {G}_{1}^{*\Delta }. \]
2.
\( F \in {G}_{2} \rightarrow \left( {\exists a \in F}\right) \left\lbrack {N\left( a\right) \subseteq {G}_{2}}\right\rbrack \) \[ \rightarrow \left( {\exists a \in F}\right) \left\lbrack {\left\lbrack a\right\rbrack \subseteq {G}_{2}^{\Delta }}\right\rbrack \] \( \rightarrow \left( {\exists a \in F}\right) \left\lbrack {N\left( a\right) \subseteq {G}_{2}^{\Delta * }}\right\rbrack \; \) (since \( {G}_{2}^{\Delta } \) is an open subset of \( P \) ) \[ \rightarrow F \in {G}_{2}^{\Delta * }\text{.} \] Theorem 5.11. 1. If \( {G}_{1} \) and \( {G}_{2} \) are open sub
1068_(GTM227)Combinatorial Commutative Algebra
Definition 5.60
Definition 5.60 An \( {i}^{\text{th }} \) Betti number \( {\beta }_{i, j}\left( M\right) \neq 0 \) of an \( \mathbb{N} \) -graded module \( M \) in degree \( j \) is extremal if \( {\beta }_{p, q}\left( M\right) = 0 \) for all \( p \) and \( q \) satisfying the following three conditions: (i) \( p \geq i \), (ii) \( q - p \geq j - i \), and (iii) \( q \geq j + 1 \) . In the Macaulay betti diagram of \( M \), the Betti number \( {\beta }_{i, j}\left( M\right) \) is plotted in column \( i \) and row \( j - i \) . Using this notation, condition (i) says that \( {\beta }_{p, q}\left( M\right) \) lies in a column weakly east of \( {\beta }_{i, j}\left( M\right) \), condition (ii) says that \( {\beta }_{p, q}\left( M\right) \) lies in a row weakly south of \( {\beta }_{i, j}\left( M\right) \), and imposing condition (iii) is equivalent to the additional requirement that \( \left( {p, q}\right) \neq \left( {i, j}\right) \) . Thus a nonzero Betti number \( {\beta }_{i, j}\left( M\right) \) is extremal if it is the only nonzero Macaulay betti entry in the quadrant of which it is the northwest corner. Projective dimension measures the column index of the easternmost extremal Betti number, whereas regularity measures the row index of the southernmost extremal Betti number. The following theorem implies, in particular, that these roles are switched under Alexander duality. Theorem 5.61 The Betti number \( {\beta }_{i, j}\left( {S/{I}_{\Delta }}\right) \) is extremal if and only if \( {\beta }_{j - i - 1, j}\left( {S/{I}_{\Delta }^{ \star }}\right) \) is extremal, and in this case \( {\beta }_{i, j}\left( {S/{I}_{\Delta }}\right) = {\beta }_{j - i - 1, j}\left( {S/{I}_{\Delta }^{ \star }}\right) \) . Theorem 5.59 is refined by Theorem 5.61 for squarefree monomial ideals, in the sense that the former is an immediate consequence of the latter. 
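Extremality is a purely combinatorial condition on the Betti table, so it is easy to test mechanically. The following sketch checks Definition 5.60 for each nonzero entry of a hypothetical Betti table (chosen for illustration, not computed from an actual module):

```python
def is_extremal(betti, i, j):
    """beta_{i,j} is extremal if it is nonzero and beta_{p,q} = 0 whenever
    (i) p >= i (weakly east), (ii) q - p >= j - i (weakly south), and
    (iii) q >= j + 1 (which excludes (p,q) = (i,j) itself)."""
    if betti.get((i, j), 0) == 0:
        return False
    return not any(b != 0 and p >= i and q - p >= j - i and q >= j + 1
                   for (p, q), b in betti.items())

# Hypothetical Betti table {(homological degree i, internal degree j): beta}.
betti = {(0, 0): 1, (1, 2): 3, (2, 3): 2, (2, 4): 1, (3, 4): 1}
extremals = [(i, j) for (i, j) in betti if is_extremal(betti, i, j)]
```

Here the extremal entries are \( (2,4) \) and \( (3,4) \): the former sits in the southernmost row \( j - i = 2 \) (the regularity) and the latter in the easternmost column \( i = 3 \) (the projective dimension), matching the description above.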
For arbitrary monomial ideals, even Theorem 5.59 cannot hold verbatim, since one side of the equality (projective dimension) is bounded while the other (regularity) is not. On the other hand, regularity is not a particularly \( {\mathbb{N}}^{n} \) -graded thing to measure: the definition requires us to sum the coordinates of the degree \( \mathbf{b} \), which is more of a \( \mathbb{Z} \) -graded procedure. The generalization to arbitrary monomial ideals of Theorems 5.56 and 5.59 needs an \( {\mathbb{N}}^{n} \) -graded analogue of regularity. Definition 5.62 The support-regularity of a monomial ideal \( I \) is \[ \operatorname{supp.reg}\left( I\right) = \max \left\{ {\left| {\operatorname{supp}\left( \mathbf{b}\right) }\right| - i \mid {\beta }_{i,\mathbf{b}}\left( I\right) \neq 0}\right\} , \] and \( I \) is said to have a support-linear free resolution if there is a \( d \in \mathbb{N} \) such that \( \left| {\operatorname{supp}\left( m\right) }\right| = d = \operatorname{supp.reg}\left( I\right) \) for all minimal generators \( m \) of \( I \) . For squarefree ideals the notions of regularity and support-regularity coincide, because the only degrees we ever care about are squarefree. In particular, the two sentences in the following result specialize to the Eagon-Reiner Theorem and Theorem 5.59 when \( \mathbf{a} = \left( {1,\ldots ,1}\right) \) . Theorem 5.63 If a monomial ideal \( I \) is generated in degrees preceding \( \mathbf{a} \), then \( S/I \) is Cohen-Macaulay if and only if the Alexander dual ideal \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) has a support-linear free resolution. More generally, \( \operatorname{pd}\left( {S/I}\right) = \operatorname{supp.reg}\left( {I}^{\left\lbrack \mathbf{a}\right\rbrack }\right) \) . The optimal insight provided by Theorem 5.63 comes in a context combining monomial matrices for free and injective resolutions, the latter of which we will introduce in Chapter 11. 
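Given multigraded Betti data, Definition 5.62 is a one-line computation; the sketch below uses hypothetical Betti numbers for an ideal in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \), chosen only to exercise the formula:

```python
def support_regularity(betti):
    """supp.reg(I) = max(|supp(b)| - i) over nonzero multigraded Betti
    numbers beta_{i,b}(I), where b is an exponent vector (Definition 5.62)."""
    return max(sum(1 for bi in b if bi > 0) - i
               for (i, b), beta in betti.items() if beta != 0)

# Hypothetical multigraded Betti data {(i, b): beta_{i,b}}:
betti = {(0, (2, 0, 0)): 1, (0, (0, 3, 0)): 1, (1, (2, 3, 0)): 1}
# supports have sizes 1, 1, 2, so supp.reg = max(1-0, 1-0, 2-1) = 1
assert support_regularity(betti) == 1
```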
For a glimpse of this context, see Exercise 11.2. Essentially, decreases in the dimensions of the indecomposable injective summands in a minimal injective resolution of \( S/I \) correspond precisely to increases in the supports of the degrees in a minimal free resolution of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . The former detect the projective dimension of \( S/I \) by the Auslander-Buchsbaum formula. Thus, when the supports of syzygy degrees of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) increase as slowly as possible, so that \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) has a support-linear free resolution, the dimensions of indecomposable summands in a minimal injective resolution of \( S/I \) decrease as slowly as possible. This slowest possible decrease in dimension postpones the occurrence of summands isomorphic to injective hulls of \( \mathbb{k} \) as long as possible, making the depth of \( S/I \) as large as possible. As a result, \( S/I \) must be Cohen-Macaulay (see Theorem 13.37.7). At the beginning of this section, we noted that Alexander duality interchanges two types of homological invariants, by which we meant projective dimension and regularity. Theorem 5.61 extends this interchange to a flip on a family of refinements of this pair of invariants. In contrast, the crux of Theorem 5.63 is that we could have meant a different interchange: namely the switch of Betti numbers for Bass numbers (Definition 11.37): whereas Betti numbers determine the regularity, the projective dimension can be reinterpreted in terms of depth, and hence in terms of Bass numbers, via the Auslander-Buchsbaum formula. ## Exercises 5.1 Prove Theorem 5.11 directly, by tensoring the coKoszul complex \( {\mathbb{K}}^{ \bullet } \) with \( S/I \) . 5.2 Prove Corollary 5.12 by applying Theorem 5.6 to Corollary 1.40. 
5.3 Compute the Alexander dual of \( \left\langle {{x}^{4},{y}^{4},{x}^{3}z,{y}^{3}z,{x}^{2}{z}^{2},{y}^{2}{z}^{2}, x{z}^{3}, y{z}^{3}}\right\rangle \) with respect to \( \mathbf{a} = \left( {5,6,8}\right) \) . 5.4 Resume the notation from Exercise 3.6. (a) Turning the picture there upside down yields the staircase diagram for an Alexander dual ideal \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . What is \( \mathbf{a} \) ? (b) On a photocopy of the upside down staircase diagram, draw the Buchberger graph of \( {I}^{\left\lbrack \mathbf{a}\right\rbrack } \) . Compare it to the graph \( \operatorname{Buch}\left( I\right) \) that you drew in Exercise 3.6. (c) Use the labels on the planar map determined by \( \operatorname{Buch}\left( {I}^{\left\lbrack \mathbf{a}\right\rbrack }\right) \) to relabel the vertices, edges, and regions in the planar map determined by \( \operatorname{Buch}\left( I\right) \) . (d) Show that this relabeled planar map is colabeled and determines the resolution Alexander dual to the usual one from \( \operatorname{Buch}\left( I\right) \), as in Theorem 5.37. 5.5 For any monomial ideal \( I \), let \( {\mathbf{a}}_{I} \) be the exponent on the least common multiple of all minimal generators of \( I \), and define the tight Alexander dual \( {I}^{ \star } = {I}^{\left\lbrack {\mathbf{a}}_{I}\right\rbrack } \) . Find a monomial ideal \( I \) such that \( {\left( {I}^{ \star }\right) }^{ \star } \neq I \) . Characterize such ideals \( I \) . 5.6 Show that tight Alexander duality commutes with radicals: \( \operatorname{rad}{\left( I\right) }^{ \star } = \operatorname{rad}\left( {I}^{ \star }\right) \) . 5.7 Prove from first principles that a monomial ideal is irreducible as in Definition 5.16 if and only if it cannot be expressed as an intersection of two (perhaps ungraded) ideals strictly containing it. 
5.8 The socle of a module \( M \) is the set \( \operatorname{soc}\left( M\right) = \left( {0{ : }_{M}\mathfrak{m}}\right) \) of elements in \( M \) annihilated by every variable. If \( M = S/I \) is artinian, prove that \( {\mathbf{x}}^{\mathbf{b}} \in \operatorname{soc}\left( M\right) \) if and only if \( {\mathfrak{m}}^{\mathbf{b} + \mathbf{1}} \) is an irreducible component of \( I \) . Use Corollary 5.39 and Hochster's formula to construct another proof of Theorem 5.42. 5.9 The monomial localization of a monomial ideal \( I \subseteq \mathbb{k}\left\lbrack \mathbf{x}\right\rbrack \) at \( {x}_{i} \) is the ideal \( {\left. I\right| }_{{x}_{i} = 1} \subseteq \mathbb{k}\left\lbrack {\mathbf{x} \smallsetminus {x}_{i}}\right\rbrack \) that results after setting \( {x}_{i} = 1 \) in all generators of \( I \) . Suppose that a labeled cell complex \( X \) supports a minimal cellular resolution of \( S/\left( {I + {\mathfrak{m}}^{\mathbf{a} + \mathbf{1}}}\right) \) . Explain how to recover a minimal cellular resolution of \( {\left. I\right| }_{{x}_{i} = 1} \) from the faces of \( X \) containing the vertex \( v \in X \) labeled by \( {\mathbf{a}}_{v} = {x}_{i}^{{a}_{i} + 1} \) . This set of faces is called the star of \( v \), and the minimal cellular resolution will be supported on the link of \( v \) (also known as the vertex figure of \( X \) in a neighborhood of \( v \) ). 
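The monomial localization of Exercise 5.9 is a mechanical operation on exponent vectors: delete the \( i \) th coordinate, then discard generators that become redundant by divisibility. A minimal sketch (the example ideal is my own, not from the exercise):

```python
def monomial_localization(gens, i):
    """I|_{x_i = 1}: set x_i = 1 in each generator (delete coordinate i of
    each exponent vector), then keep only divisibility-minimal generators."""
    localized = {g[:i] + g[i + 1:] for g in gens}
    return {g for g in localized
            if not any(h != g and all(a <= b for a, b in zip(h, g))
                       for h in localized)}

# I = <x^2*y, x*z, y^3> in k[x,y,z], given as exponent vectors:
I = [(2, 1, 0), (1, 0, 1), (0, 3, 0)]
# Setting x = 1 gives <y, z, y^3> = <y, z> in k[y,z]:
assert monomial_localization(I, 0) == {(1, 0), (0, 1)}
```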
5.11 Exhibit an example demonstrating that if the condition of minimality in Theorem 5.42 is omitted, then the intersection given there can fail to be an irreducible decomposition-even a redundant one. Nonetheless, prove that if the intersection is taken over a suitable subset of facets, then the conclusion still holds. 5.12 If \( {\mathcal{F}}_{X} \) is a minimal cellular resolution of an artinian quotient, then a face \( G \in X \) is in the boundary of \( X \) if and only if its label \( {\mathbf{a}}_{G} \) fails to have full support. 5.13 Prove that weakly cel
1068_(GTM227)Combinatorial Commutative Algebra
Definition 11.21
Definition 11.21 A graded \( \mathbb{k}\left\lbrack Q\right\rbrack \) -module \( J \) is called homologically injective if \( M \mapsto {\underline{\operatorname{Hom}}}_{\mathbb{k}\left\lbrack Q\right\rbrack }\left( {M, J}\right) \) takes exact sequences to exact sequences. In other words, if \( 0 \rightarrow M \rightarrow N \rightarrow P \rightarrow 0 \) is exact, then so is \[ 0 \leftarrow \underline{\operatorname{Hom}}\left( {M, J}\right) \leftarrow \underline{\operatorname{Hom}}\left( {N, J}\right) \leftarrow \underline{\operatorname{Hom}}\left( {P, J}\right) \leftarrow 0. \] For (10.2) in Chapter 10 we exploited this valuable property in the context of (ungraded) \( \mathbb{Z} \) -modules, otherwise known as abelian groups: divisible groups, such as \( {\mathbb{C}}^{ * } \), are homologically injective. In general, only the surjectivity of \( \underline{\operatorname{Hom}}\left( {M, J}\right) \leftarrow \underline{\operatorname{Hom}}\left( {N, J}\right) \) can fail, even for arbitrary \( J \) . The surjectivity for homologically injective \( J \) can be read equivalently as follows. Lemma 11.22 \( J \) is homologically injective if and only if, whenever \( M \subseteq N \) and \( \phi : M \rightarrow J \) are given, some map \( \psi : N \rightarrow J \) extends \( \phi \) ; that is, \( {\left. \psi \right| }_{M} = \phi \) . Judging from what we have already called the modules \( \mathbb{k}\{ F - Q\} \) and their direct sums in Definition 11.10, we had better reconcile our combinatorial definition of injective module with the usual homological one. The goal of this section is to accomplish just that, in Theorem 11.30. Recall that a module \( N \) is flat if tensoring any exact sequence with \( N \) yields another exact sequence. The examples of flat modules to keep in mind are the localizations \( \mathbb{k}\left\lbrack {Q - F}\right\rbrack \) . 
In fact, localizations are pretty much the only examples that can come up in the context of graded modules over affine semigroup rings (cf. the next lemma and Theorem 11.30). Lemma 11.23 \( N \) is flat if and only if \( {N}^{ \vee } \) is homologically injective. Proof. \( M \mapsto M \otimes N \) is exact if and only if \( M \mapsto {\left( M \otimes N\right) }^{ \vee } \) is. Now use the equality \( {\left( M \otimes N\right) }^{ \vee } = \underline{\operatorname{Hom}}\left( {M,{N}^{ \vee }}\right) \) of Lemma 11.16. Thus "flat" and "injective" are Matlis dual conditions. Heuristically, a module \( \mathbb{k}\{ T\} \) is flat if \( T \) is an intersection of positive half-spaces for facets of \( Q \), whereas \( \mathbb{k}\left\lbrack T\right\rbrack \) is injective if \( T \) is an intersection of negative half-spaces. Proposition 11.24 Indecomposable injectives are homologically injective. Proof. Since \( \mathbb{k}{\left\lbrack Q - F\right\rbrack }^{ \vee } = \mathbb{k}\{ F - Q\} \), this follows from Lemma 11.23. For any \( {\mathbb{Z}}^{d} \) -graded module \( M \), the Matlis dual can be expressed as \( {M}^{ \vee } = \) \( {\underline{\operatorname{Hom}}}_{\mathbb{k}\left\lbrack Q\right\rbrack }\left( {M,\mathbb{k}{\left\lbrack Q\right\rbrack }^{ \vee }}\right) \) by Lemma 11.16 with \( N = \mathbb{k}\left\lbrack Q\right\rbrack \) . Proposition 11.24 says in this case that Matlis duality is exact, which is obvious from the fact that \( \mathbb{k} \) is a field, because taking vector space duals is exact. Taking Hom into \( \mathbb{k}{\left\lbrack Q\right\rbrack }^{ \vee } \) (= the injective hull of \( \mathbb{k} \) ) provides a better algebraic formulation of Matlis duality than Definition 11.15, by avoiding degree-by-degree vector space duals. It should convince you that dualization with respect to injective modules can have concrete combinatorial interpretations. 
Homological injectivity behaves very well with respect to (categorical) direct products of modules. Unfortunately, the usual product of infinitely many \( {\mathbb{Z}}^{d} \) -graded modules \( {\left( {M}^{p}\right) }_{p \in P} \) is not necessarily \( {\mathbb{Z}}^{d} \) -graded. Indeed, there may be sequences \( {\left( {y}_{p}\right) }_{p \in P} \in \mathop{\prod }\limits_{{p \in P}}{M}^{p} \) of homogeneous elements that have distinct degrees, in which case \( \mathop{\prod }\limits_{{p \in P}}{M}^{p} \) fails to be the direct sum of its graded components. Such poor behavior occurs even in the simplest of cases, in the presence of only one variable \( x \) (so \( Q = \mathbb{N} \) ): the product \( \mathop{\prod }\limits_{{i = 0}}^{\infty }\mathbb{k}\left\lbrack x\right\rbrack \) of infinitely many copies of \( \mathbb{k}\left\lbrack x\right\rbrack \) has an element \( \left( {1, x,{x}^{2},\ldots }\right) \) that is not expressible as a finite sum of homogeneous elements. The remedy is to take the largest \( {\mathbb{Z}}^{d} \) -graded submodule of the usual product. Definition 11.25 The \( {\mathbb{Z}}^{d} \) -graded product \( {}^{ * }\mathop{\prod }\limits_{{p \in P}}{M}^{p} \) is the submodule of the usual product generated by arbitrary products of homogeneous elements of the same degree. Explicitly, this is the module that has \[ {\left( {}^{ * }\mathop{\prod }\limits_{{p \in P}}{M}^{p}\right) }_{\mathbf{b}} = \mathop{\prod }\limits_{{p \in P}}{M}_{\mathbf{b}}^{p} \] as its component in \( {\mathbb{Z}}^{d} \) -graded degree \( \mathbf{b} \) . Lemma 11.26 Arbitrary \( {\mathbb{Z}}^{d} \) -graded products of homologically injective modules are homologically injective. Proof. 
The natural map \( \operatorname{Hom}\left( {N,{}^{ * }\mathop{\prod }\limits_{{p \in P}}{M}^{p}}\right) \rightarrow {}^{ * }\mathop{\prod }\limits_{{p \in P}}\operatorname{Hom}\left( {N,{M}^{p}}\right) \) is an isomorphism (write out carefully what it means to be a homogeneous element of degree a on each side). Apply Definition 11.21 to the case where each \( {M}^{p} \) is homologically injective. It is very easy to produce (in an abstract sense) nonzero maps from arbitrary modules to homological injectives. The next result capitalizes on this ease: we can stick a module injectively into a product of indecomposable injectives by explicitly making sure that no element maps to zero. Proposition 11.27 Every module \( M \) is isomorphic to a submodule of a homologically injective module. If \( M \) is finitely generated, then \( M \) is isomorphic to a submodule of a finite direct sum of indecomposable injectives. Proof. Homogeneous elements \( y \in M \) generate finitely generated submodules. Using Proposition 8.11 and Lemma 7.10, pick a face \( F \) such that \( {P}_{F} \) is associated to \( M \), so \( \left\langle {{\mathbf{t}}^{\mathbf{a}}y}\right\rangle \cong \mathbb{k}\left\{ {{\mathbf{u}}_{y} + {F}^{y}}\right\} \) for some \( \mathbf{a} \in Q \) and some vector \( {\mathbf{u}}_{y} \in {\mathbb{Z}}^{d} \) . The corresponding inclusion \( \left\langle {{\mathbf{t}}^{\mathbf{a}}y}\right\rangle \hookrightarrow \mathbb{k}\left\{ {{\mathbf{u}}_{y} + {F}^{y} - Q}\right\} \) extends to a map \( {\phi }_{y} : M \rightarrow \mathbb{k}\left\{ {{\mathbf{u}}_{y} + {F}^{y} - Q}\right\} \) by homological injectivity of the latter. The graded product of such maps over \( y \in M \) is a homomorphism \( {\left( {\phi }_{y}\right) }_{y \in M} : M \rightarrow {}^{ * }\mathop{\prod }\limits_{y}\mathbb{k}\left\{ {{\mathbf{u}}_{y} + {F}^{y} - Q}\right\} \) to a homologically injective module (Lemma 11.26 and Proposition 11.24) that is an inclusion by construction. 
When \( M \) is finitely generated, each of the finitely many submodules \( \left( {0{ : }_{M}{P}_{F}}\right) \) annihilated by a monomial prime ideal is itself finitely generated. Using the above construction, it suffices to take the graded product over all \( y \) in a finite set containing generators for each of the modules \( \left( {0{ : }_{M}{P}_{F}}\right) \) . This finite product is a direct sum. Lemma 11.28 Let \( J \) be homologically injective and \( E \) any module. 1. If \( E \) is a direct summand of \( J \), then \( E \) is homologically injective. 2. If \( J \subseteq E \), then \( J \) is a direct summand of \( E \) . Proof. To prove the first part, let \( J = {J}^{\prime } \oplus {J}^{\prime \prime } \) and apply \( \underline{\operatorname{Hom}}\left( {\_, J}\right) = \underline{\operatorname{Hom}}\left( {\_ ,{J}^{\prime }}\right) \oplus \underline{\operatorname{Hom}}\left( {\_ ,{J}^{\prime \prime }}\right) \) to any exact sequence. For the second part, the surjection \( \underline{\operatorname{Hom}}\left( {J, J}\right) \twoheadleftarrow \underline{\operatorname{Hom}}\left( {E, J}\right) \) produces a homomorphism \( E \rightarrow J \) mapping to \( {\operatorname{id}}_{J} \), which is by definition a splitting of the inclusion \( J \hookrightarrow E \) . \( \square \) Proposition 11.29 A module \( J \) is homologically injective if and only if \( J \) has no proper essential extensions. Proof. First assume \( J \) is homologically injective. If \( J \subseteq M \) is an essential extension, then writing \( M = J \oplus N \) for some \( N \) by the second part of Lemma 11.28, it must be that \( N = 0 \), so \( J = M \) . Now assume \( J \) has no proper essential extension. Use Proposition 11.27 to find an inclusion \( J \hookrightarrow E \) into a homologically injective module \( E \) . The set of submodules of \( E \) trivially intersecting \( J \) has a maximal element \( M \) by Zorn’s Lemma. 
The natural map \( J \rightarrow E/M \) makes the quotient \( E/M \) into an essential extension of \( J \) by construction, so \( J \cong E/M \) . Thus \( E = J \oplus M \) . Homological injectivity of \( J \) now follows from the first part of Lemma 11.28. Theorem 11.30 A module is homologically injective if and only if it is injective in the combinatorial sense of Definition 11.10. Proof. Finite direct sums of indecomposable injectives are homologically injective by Proposition 11.24 and Lemma 11.26. Now let \( J \) be an arbitrary direct sum of indecomposable injectives, and
1077_(GTM235)Compact Lie Groups
Definition 2.10
Definition 2.10. Let \( V \) and \( W \) be finite-dimensional representations of a Lie group \( G \) . (1) \( G \) acts on \( V \oplus W \) by \( g\left( {v, w}\right) = \left( {{gv},{gw}}\right) \) . (2) \( G \) acts on \( V \otimes W \) by \( g\sum {v}_{i} \otimes {w}_{j} = \sum g{v}_{i} \otimes g{w}_{j} \) . (3) \( G \) acts on \( \operatorname{Hom}\left( {V, W}\right) \) by \( \left( {gT}\right) \left( v\right) = g\left\lbrack {T\left( {{g}^{-1}v}\right) }\right\rbrack \) . (4) \( G \) acts on \( { \otimes }^{k}V \) by \( g\sum {v}_{{i}_{1}} \otimes \cdots \otimes {v}_{{i}_{k}} = \sum \left( {g{v}_{{i}_{1}}}\right) \otimes \cdots \otimes \left( {g{v}_{{i}_{k}}}\right) \) . (5) \( G \) acts on \( \mathop{\bigwedge }\limits^{k}V \) by \( g\sum {v}_{{i}_{1}} \land \cdots \land {v}_{{i}_{k}} = \sum \left( {g{v}_{{i}_{1}}}\right) \land \cdots \land \left( {g{v}_{{i}_{k}}}\right) \) . (6) \( G \) acts on \( {S}^{k}\left( V\right) \) by \( g\sum {v}_{{i}_{1}}\cdots {v}_{{i}_{k}} = \sum \left( {g{v}_{{i}_{1}}}\right) \cdots \left( {g{v}_{{i}_{k}}}\right) \) . (7) \( G \) acts on \( {V}^{ * } \) by \( \left( {gT}\right) \left( v\right) = T\left( {{g}^{-1}v}\right) \) . (8) \( G \) acts on \( \bar{V} \) by the same action as it does on \( V \) . It needs to be verified that each of these actions defines a representation. All are simple. We check numbers (3) and (5) and leave the rest for Exercise 2.14. For number (3), smoothness and invertibility are clear. It remains to verify the homomorphism property, so we calculate \[ \left\lbrack {{g}_{1}\left( {{g}_{2}T}\right) }\right\rbrack \left( v\right) = {g}_{1}\left\lbrack {\left( {{g}_{2}T}\right) \left( {{g}_{1}^{-1}v}\right) }\right\rbrack = {g}_{1}{g}_{2}\left\lbrack {T\left( {{g}_{2}^{-1}{g}_{1}^{-1}v}\right) }\right\rbrack = \left\lbrack {\left( {{g}_{1}{g}_{2}}\right) T}\right\rbrack \left( v\right) \] for \( {g}_{i} \in G, T \in \operatorname{Hom}\left( {V, W}\right) \), and \( v \in V \) . 
For number (5), recall that \( \mathop{\bigwedge }\limits^{k}V \) is simply \( {\bigotimes }^{k}V \) modulo \( {\mathcal{I}}_{k} \), where \( {\mathcal{I}}_{k} \) is \( {\bigotimes }^{k}V \) intersect the ideal generated by \( \{ v \otimes v \mid v \in V\} \) . Since number (4) is a representation, it therefore suffices to show that the action of \( G \) on \( {\bigotimes }^{k}V \) preserves \( {\mathcal{I}}_{k} \), but this is clear. Some special notes are in order. For number (1) dealing with \( V \oplus W \), choose in the obvious way a basis for \( V \oplus W \) that is constructed from a basis for \( V \) and a basis for \( W \) . With respect to this basis, the action of \( G \) can be realized on \( V \oplus W \) by multiplication by a matrix of the form \( \left( \begin{array}{ll} * & 0 \\ 0 & * \end{array}\right) \) where the upper left block is given by the action of \( G \) on \( V \) and the lower right block is given by the action of \( G \) on \( W \) . For number (7) dealing with \( {V}^{ * } \), fix a basis \( \mathcal{B} = {\left\{ {v}_{i}\right\} }_{i = 1}^{n} \) for \( V \) and let \( {\mathcal{B}}^{ * } = \) \( {\left\{ {v}_{i}^{ * }\right\} }_{i = 1}^{n} \) be the dual basis for \( {V}^{ * } \), i.e., \( {v}_{i}^{ * }\left( {v}_{j}\right) \) is 1 when \( i = j \) and is 0 when \( i \neq j \) . Using these bases, identify \( V \) and \( {V}^{ * } \) with \( {\mathbb{C}}^{n} \) by the coordinate maps \( {\left\lbrack \mathop{\sum }\limits_{i}{c}_{i}{v}_{i}\right\rbrack }_{\mathcal{B}} = \) \( \left( {{c}_{1},\ldots ,{c}_{n}}\right) \) and \( {\left\lbrack \mathop{\sum }\limits_{i}{c}_{i}{v}_{i}^{ * }\right\rbrack }_{{\mathcal{B}}^{ * }} = \left( {{c}_{1},\ldots ,{c}_{n}}\right) \) . 
With respect to these bases, realize the action of \( g \) on \( V \) and \( {V}^{ * } \) by matrices \( {M}_{g} \) and \( {M}_{g}^{\prime } \) so that \( {\left\lbrack g \cdot v\right\rbrack }_{\mathcal{B}} = {M}_{g}{\left\lbrack v\right\rbrack }_{\mathcal{B}} \) and \( {\left\lbrack g \cdot T\right\rbrack }_{{\mathcal{B}}^{ * }} = {M}_{g}^{\prime }{\left\lbrack T\right\rbrack }_{{\mathcal{B}}^{ * }} \) for \( v \in V \) and \( T \in {V}^{ * } \) . In particular, \( {\left\lbrack {M}_{g}\right\rbrack }_{i, j} = {v}_{i}^{ * }\left( {g \cdot {v}_{j}}\right) \) and \( {\left\lbrack {M}_{g}^{\prime }\right\rbrack }_{i, j} = \left( {g \cdot {v}_{j}^{ * }}\right) \left( {v}_{i}\right) \) . Thus \( {\left\lbrack {M}_{g}^{\prime }\right\rbrack }_{i, j} = {v}_{j}^{ * }\left( {{g}^{-1} \cdot {v}_{i}}\right) = {\left\lbrack {M}_{{g}^{-1}}\right\rbrack }_{j, i} \) so that \( {M}_{g}^{\prime } = \) \( {M}_{g}^{-1, t} \) . In other words, once appropriate bases are chosen and the \( G \) action is realized by matrix multiplication, the action of \( G \) on \( {V}^{ * } \) is obtained from the action of \( G \) on \( V \) simply by taking the inverse transpose of the matrix. For number (8) dealing with \( \bar{V} \), fix a basis for \( V \) and realize the action of \( g \) by a matrix \( {M}_{g} \) as above. To examine the action of \( g \) on \( v \in \bar{V} \), recall that scalar multiplication is the conjugate of the original scalar multiplication in \( V \) . In particular, in \( \bar{V}, g \cdot v \) is therefore realized by the matrix \( \overline{{M}_{g}} \) . In other words, once a basis is chosen and the \( G \) action is realized by matrix multiplication, the action of \( G \) on \( \bar{V} \) is obtained from the action of \( G \) on \( V \) simply by taking the conjugate of the matrix. It should also be noted that few of these constructions are independent of each other. 
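The inverse-transpose rule can be checked concretely: \( g \mapsto {\left( {M}_{g}^{-1}\right) }^{t} \) is again a group homomorphism, since \( {\left( {\left( {M}_{g}{M}_{h}\right) }^{-1}\right) }^{t} = {\left( {M}_{g}^{-1}\right) }^{t}{\left( {M}_{h}^{-1}\right) }^{t} \). A minimal sketch with exact integer arithmetic; the two generating matrices are my choice of a two-dimensional representation of \( {S}_{3} \), not taken from the book:

```python
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(A):
    # exact inverse of an integer 2x2 matrix with determinant +-1
    (a, b), (c, d) = A
    det = a * d - b * c
    return ((d // det, -b // det), (-c // det, a // det))

def transpose(A):
    return tuple(zip(*A))

# Generators of a (hypothetical) 2-dimensional representation of S_3:
gens = [((0, -1), (1, -1)), ((0, 1), (1, 0))]
group = set(gens)
while True:
    new = {matmul(g, h) for g in group for h in gens} - group
    if not new:
        break
    group |= new
assert len(group) == 6            # all six elements of S_3 are realized

def dual(A):
    # matrix of the dual action: M'_g = (M_g^{-1})^t
    return transpose(inv(A))

# The inverse-transpose assignment is again a homomorphism:
assert all(dual(matmul(g, h)) == matmul(dual(g), dual(h))
           for g in group for h in group)
```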
For instance, the action in number (7) on \( {V}^{ * } \) is just the special case of the action in number (3) on \( \operatorname{Hom}\left( {V, W}\right) \) in which \( W = \mathbb{C} \) is the trivial representation. Also the actions in (4), (5), and (6) really only make repeated use of number (2). Moreover, as representations, it is the case that \( {V}^{ * } \otimes W \cong \operatorname{Hom}\left( {V, W}\right) \) (Exercise 2.15) and, for compact \( G,{V}^{ * } \cong \bar{V} \) (Corollary 2.20). ## 2.2.2 Irreducibility and Schur's Lemma Now that we have many ways to glue representations together, it makes sense to seek some sort of classification. For this to be successful, it is necessary to examine the smallest possible building blocks. Definition 2.11. Let \( G \) be a Lie group and \( V \) a finite-dimensional representation of \( G \) . (1) A subspace \( U \subseteq V \) is \( G \) -invariant (also called a submodule or a subrepresenta-tion) if \( {gU} \subseteq U \) for \( g \in G \) . Thus \( U \) is a representation of \( G \) in its own right. (2) A nonzero representation \( V \) is irreducible if the only \( G \) -invariant subspaces are \( \{ 0\} \) and \( V \) . A nonzero representation is called reducible if there is a proper (i.e., neither zero nor all of \( V \) ) \( G \) -invariant subspace of \( V \) . It follows that a nonzero finite-dimensional representation \( V \) is irreducible if and only if \[ V = {\operatorname{span}}_{\mathbb{C}}\{ {gv} \mid g \in G\} \] for each nonzero \( v \in V \), since this property is equivalent to excluding proper \( G \) - invariant subspaces. For example, it is well known from linear algebra that this condition is satisfied for each of the standard representations in \( §{2.1.2.1} \) and so each is irreducible. For more general representations, this approach is often impossible to carry out. In those cases, other tools are needed. One important tool is based on the next result. 
Theorem 2.12 (Schur’s Lemma). Let \( V \) and \( W \) be finite-dimensional representations of a Lie group \( G \) . If \( V \) and \( W \) are irreducible, then \[ \dim {\operatorname{Hom}}_{G}\left( {V, W}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }V \cong W \\ 0 & \text{ if }V \ncong W. \end{array}\right. \] Proof. If nonzero \( T \in {\operatorname{Hom}}_{G}\left( {V, W}\right) \), then \( \ker T \) is not all of \( V \) and \( G \) -invariant so irreducibility implies \( T \) is injective. Similarly, the image of \( T \) is nonzero and \( G \) -invariant, so irreducibility implies \( T \) is surjective and therefore a bijection. Thus there exists a nonzero \( T \in {\operatorname{Hom}}_{G}\left( {V, W}\right) \) if and only if \( V \cong W \) . In the case \( V \cong W \), fix a bijective \( {T}_{0} \in {\operatorname{Hom}}_{G}\left( {V, W}\right) \) . If also \( T \in {\operatorname{Hom}}_{G}\left( {V, W}\right) \) , then \( T \circ {T}_{0}^{-1} \in {\operatorname{Hom}}_{G}\left( {V, V}\right) \) . Since \( V \) is a finite-dimensional vector space over \( \mathbb{C} \) , there exists an eigenvalue \( \lambda \) for \( T \circ {T}_{0}^{-1} \) . As \( \ker \left( {T \circ {T}_{0}^{-1} - {\lambda I}}\right) \) is nonzero and \( G \) - invariant, irreducibility implies \( T \circ {T}_{0}^{-1} - {\lambda I} = 0 \), and so \( {\operatorname{Hom}}_{G}\left( {V, W}\right) = \mathbb{C}{T}_{0} \) . Note Schur's Lemma implies that \[ {\operatorname{Hom}}_{G}\left( {V, V}\right) = \mathbb{C}I \tag{2.13} \] for irreducible \( V \) . ## 2.2.3 Unitarity Definition 2.14. (1) Let \( V \) be a representation of a Lie group \( G \) . A form \( \left( {\cdot , \cdot }\right) \) : \( V \times V \rightarrow \mathbb{C} \) is called \( G \) -invariant if \( \left( {{gv}, g{v}^{\prime }}\right) = \left( {v,{v}^{\prime }}\right) \) for \( g \in G \) and \( v,{v}^{\prime } \in V \) . 
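The conclusion \( {\operatorname{Hom}}_{G}\left( {V, V}\right) = \mathbb{C}I \) can be observed by brute force: any matrix commuting with every element of an irreducible representation must be scalar. The sketch below assumes (my assumption, not the book's example) that the two integer matrices generate an irreducible two-dimensional representation of \( {S}_{3} \), and searches small integer matrices for members of the commutant:

```python
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Generators of a (hypothetical) irreducible 2-dim representation of S_3:
r = ((0, -1), (1, -1))            # order 3
s = ((0, 1), (1, 0))              # order 2

# Which small integer matrices T satisfy Tg = gT for both generators?
# Schur's Lemma predicts exactly the scalar matrices.
commuting = [((a, b), (c, d))
             for a in range(-2, 3) for b in range(-2, 3)
             for c in range(-2, 3) for d in range(-2, 3)
             if matmul(((a, b), (c, d)), r) == matmul(r, ((a, b), (c, d)))
             and matmul(((a, b), (c, d)), s) == matmul(s, ((a, b), (c, d)))]

# Every commuting matrix found is a scalar multiple of the identity:
assert all(T == ((T[0][0], 0), (0, T[0][0])) for T in commuting)
```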
(2) A representation \( V \) of a Lie group \( G \) is called unitary if there exists a \( G \) -invariant (Hermitian) inner product on \( V \) . Noncompact groups abound with nonunitary representations (Exercise 2.18). However, compact groups are much more nicely behaved. Theorem 2.15. Every representation of a compact Lie group is unitary. Proof. Begin with any inner product \( \langle \cdot , \cdot \rangle \) on \( V \) and define \[ \left( {v,{v}^{\prime }}\right) = {\i
1056_(GTM216)Matrices
Definition 7.2
Definition 7.2 Two norms \( N \) and \( {N}^{\prime } \) on a (real or complex) vector space are said to be equivalent if there exist two numbers \( c,{c}^{\prime } \in \mathbb{R} \) such that \[ N \leq c{N}^{\prime },\;{N}^{\prime } \leq {c}^{\prime }N \] The equivalence between norms is obviously an equivalence relation, as its name implies. As announced above, we have the following result. Proposition 7.3 All norms on \( E = {K}^{n} \) are equivalent. For example, \[ \parallel x{\parallel }_{\infty } \leq \parallel x{\parallel }_{p} \leq {n}^{1/p}\parallel x{\parallel }_{\infty } \] Proof. It is sufficient to show that every norm is equivalent to \( \parallel \cdot {\parallel }_{1} \) . Let \( N \) be a norm on \( E \) . If \( x \in E \), the triangle inequality gives \[ N\left( x\right) \leq \mathop{\sum }\limits_{i}\left| {x}_{i}\right| N\left( {\mathbf{e}}^{i}\right) \] where \( \left( {{\mathbf{e}}^{1},\ldots ,{\mathbf{e}}^{n}}\right) \) is the canonical basis. One thus has \( N \leq c\parallel \cdot {\parallel }_{1} \) for \( c \mathrel{\text{:=}} \mathop{\max }\limits_{i}N\left( {\mathbf{e}}^{i}\right) \) . Observe that this first inequality expresses the fact that \( N \) is Lipschitz (hence continuous) on the metric space \( X = \left( {E,\parallel \cdot {\parallel }_{1}}\right) \) . For the reverse inequality, we argue by contradiction. Let us assume that the supremum of \( \parallel x{\parallel }_{1}/N\left( x\right) \) over \( x \neq 0 \) is infinite. By homogeneity, there would then exist a sequence of vectors \( {\left( {x}^{m}\right) }_{m \in \mathbb{N}} \) such that \( {\begin{Vmatrix}{x}^{m}\end{Vmatrix}}_{1} = 1 \) and \( N\left( {x}^{m}\right) \rightarrow 0 \) when \( m \rightarrow + \infty \) . The unit sphere of \( X \) is compact, thus one may assume (up to the extraction of a subsequence) that \( {x}^{m} \) converges to a vector \( x \) such that \( \parallel x{\parallel }_{1} = 1 \) . In particular, \( x \neq 0 \) . 
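The displayed inequalities between \( \parallel \cdot {\parallel }_{\infty } \) and \( \parallel \cdot {\parallel }_{p} \) are easy to spot-check numerically; a minimal sketch on random vectors:

```python
import random

def norm_p(x, p):
    return sum(abs(t) ** p for t in x) ** (1 / p)

def norm_inf(x):
    return max(abs(t) for t in x)

# Check  ||x||_inf <= ||x||_p <= n^(1/p) * ||x||_inf  on random vectors.
random.seed(0)
n, p = 5, 3
for _ in range(100):
    x = [random.uniform(-10, 10) for _ in range(n)]
    assert norm_inf(x) <= norm_p(x, p) + 1e-12
    assert norm_p(x, p) <= n ** (1 / p) * norm_inf(x) + 1e-12
```

The left inequality is tight on vectors with a single nonzero coordinate, the right one on constant-magnitude vectors, so the constants \( 1 \) and \( {n}^{1/p} \) cannot be improved.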
Because \( N \) is continuous on \( X \), one has also \( N\left( x\right) = \mathop{\lim }\limits_{{m \rightarrow + \infty }}N\left( {x}^{m}\right) = 0 \) and because \( N \) is a norm, we deduce \( x = 0 \), a contradiction. ## 7.1.3 Duality Definition 7.3 Given a norm \( \parallel \cdot \parallel \) on \( {K}^{n}\left( {K = \mathbb{R}\text{or}\mathbb{C}}\right) \), its dual norm on \( {K}^{n} \) is defined by \[ \parallel x{\parallel }^{\prime } \mathrel{\text{:=}} \mathop{\sup }\limits_{{y \neq 0}}\frac{\Re \langle x, y\rangle }{\parallel y\parallel } \] or equivalently \[ \parallel x{\parallel }^{\prime } \mathrel{\text{:=}} \mathop{\sup }\limits_{{y \neq 0}}\frac{\left| \langle x, y\rangle \right| }{\parallel y\parallel }. \] The fact that \( \parallel \cdot {\parallel }^{\prime } \) is a norm is obvious. For every \( x, y \in {K}^{n} \), one has \[ \left| {\langle x, y\rangle }\right| \leq \parallel x\parallel \cdot \parallel y{\parallel }^{\prime } \] (7.3) Proposition 7.2 shows that the dual norm of \( \parallel \cdot {\parallel }_{p} \) is \( \parallel \cdot {\parallel }_{q} \) for \( 1/p + 1/q = 1 \) . Because \( p \mapsto q \) is an involution, this suggests the following property: Proposition 7.4 The bi-dual (dual of the dual norm) of a norm is this norm itself: \[ {\left( \parallel \cdot {\parallel }^{\prime }\right) }^{\prime } = \parallel \cdot \parallel \] Proof. From (7.3), one has \( {\left( \parallel \cdot {\parallel }^{\prime }\right) }^{\prime } \leq \parallel \cdot \parallel \) . The converse is a consequence of the Hahn-Banach theorem: the unit ball \( B \) of \( \parallel \cdot \parallel \) is convex and compact. If \( x \) is a point of its boundary (i.e., \( \parallel x\parallel = 1 \) ), there exists an \( \mathbb{R} \) -affine (i.e., of the form constant plus \( \mathbb{R} \) -linear) function that vanishes at \( x \) and is nonpositive on \( B \) . 
Such a function can be written in the form \( z \mapsto \Re \langle z, y\rangle + c \), where \( c \) is a constant, necessarily equal to \( - \Re \langle x, y\rangle \) . Without loss of generality, one may assume that \( \langle y, x\rangle \) is real and nonnegative. Hence \[ \parallel y{\parallel }^{\prime } = \mathop{\sup }\limits_{{\parallel z\parallel = 1}}\Re \langle y, z\rangle = \langle y, x\rangle \] One deduces \[ {\left( \parallel x{\parallel }^{\prime }\right) }^{\prime } \geq \frac{\langle y, x\rangle }{\parallel y{\parallel }^{\prime }} = 1 = \parallel x\parallel \] By homogeneity, this is true for every \( x \in {\mathbb{C}}^{n} \) . ## 7.1.4 Matrix Norms Let us recall that \( {\mathbf{M}}_{n}\left( K\right) \) can be identified with the set of endomorphisms of \( E = {K}^{n} \) by \[ A \mapsto \left( {x \mapsto {Ax}}\right) . \] Definition 7.4 If \( \parallel \cdot \parallel \) is a norm on \( E \) and if \( A \in {\mathbf{M}}_{n}\left( K\right) \), we define \[ \parallel A\parallel \mathrel{\text{:=}} \mathop{\sup }\limits_{{x \neq 0}}\frac{\parallel {Ax}\parallel }{\parallel x\parallel }. \] Equivalently, \[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel x\parallel \leq 1}}\parallel {Ax}\parallel = \mathop{\max }\limits_{{\parallel x\parallel \leq 1}}\parallel {Ax}\parallel \] One verifies easily that \( A \mapsto \parallel A\parallel \) is a norm on \( {\mathbf{M}}_{n}\left( K\right) \) . It is called the norm induced by that of \( E \), or the norm subordinated to that of \( E \) . Although we adopted the same notation \( \parallel \cdot \parallel \) for the two norms, that on \( E \) and that on \( {\mathbf{M}}_{n}\left( K\right) \), these are, of course, distinct objects. In many places, one finds the notation \( \left| \left| \left| \cdot \right| \right| \right| \) for the induced norm.
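Returning to the duality of Section 7.1.3: a minimal numerical sketch (the vector \( x \) and the trial vectors below are arbitrary choices) illustrates that the dual of \( \parallel \cdot {\parallel }_{1} \) is \( \parallel \cdot {\parallel }_{\infty } \), in line with the \( 1/p + 1/q = 1 \) pairing. For the \( {\ell }^{1} \) norm the supremum defining the dual norm is attained at a canonical basis vector, so it can be computed exactly.

```python
# Sketch: the dual of the l1 norm is the l-infinity norm (real case).
# The sup of |<x,y>| / ||y||_1 over y != 0 is attained at a canonical
# basis vector, so a finite set of trial vectors suffices here.

def norm1(v):
    return sum(abs(t) for t in v)

def norm_inf(v):
    return max(abs(t) for t in v)

def dual_of_norm1(x, trial_vectors):
    # sup of |<x,y>| / ||y||_1 over the supplied trial vectors
    best = 0.0
    for y in trial_vectors:
        inner = abs(sum(a * b for a, b in zip(x, y)))
        best = max(best, inner / norm1(y))
    return best

x = [3.0, -1.0, 2.0]
n = len(x)
basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
trials = basis + [[1.0, 1.0, 1.0], [2.0, -1.0, 0.5], [-1.0, 4.0, 1.0]]

# The basis vectors already attain the supremum ||x||_inf,
# and no other trial vector exceeds it.
assert dual_of_norm1(x, trials) == norm_inf(x)
```

The same brute-force scheme, with denominator \( \parallel y{\parallel }_{p} \), approximates the dual of any \( {\ell }^{p} \) norm from below.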
When one does not wish to mention by which norm on \( E \) a given norm on \( {\mathbf{M}}_{n}\left( K\right) \) is induced, one says that \( A \mapsto \parallel A\parallel \) is a matrix norm. The main properties of matrix norms are \[ \parallel {AB}\parallel \leq \parallel A\parallel \parallel B\parallel ,\;\begin{Vmatrix}{I}_{n}\end{Vmatrix} = 1. \] These properties are those of any algebra norm. In particular, one has \( \begin{Vmatrix}{A}^{k}\end{Vmatrix} \leq \parallel A{\parallel }^{k} \) for every \( k \in \mathbb{N} \) . ## Examples Three \( {l}^{p} \) -matrix norms can be computed in closed form: \[ \parallel A{\parallel }_{1} = \mathop{\max }\limits_{{1 \leq j \leq n}}\mathop{\sum }\limits_{{i = 1}}^{{i = n}}\left| {a}_{ij}\right| \] \[ \parallel A{\parallel }_{\infty } = \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\sum }\limits_{{j = 1}}^{{j = n}}\left| {a}_{ij}\right| \] \[ \parallel A{\parallel }_{2} = {\begin{Vmatrix}{A}^{ * }\end{Vmatrix}}_{2} = \rho {\left( {A}^{ * }A\right) }^{1/2}. \] To prove these formulæ, we begin by proving the inequalities \( \geq \), selecting a suitable vector \( x \), and writing \( \parallel A{\parallel }_{p} \geq \parallel {Ax}{\parallel }_{p}/\parallel x{\parallel }_{p} \) . For \( p = 1 \) we choose an index \( j \) such that the maximum in the above formula is achieved. Then we let \( {x}_{j} = 1 \), and \( {x}_{k} = 0 \) otherwise. For \( p = \infty \), we let \( {x}_{j} = {\bar{a}}_{{i}_{0}j}/\left| {a}_{{i}_{0}j}\right| \), where \( {i}_{0} \) achieves the maximum in the above formula. For \( p = 2 \) we choose an eigenvector of \( {A}^{ * }A \) associated with an eigenvalue of maximal modulus. We thus obtain three inequalities. The reverse inequalities are direct consequences of the definitions. 
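The closed-form \( {l}^{1} \) and \( {l}^{\infty } \) matrix norms (max column sum and max row sum) and the submultiplicativity property \( \parallel {AB}\parallel \leq \parallel A\parallel \parallel B\parallel \) can be checked directly; the matrices below are illustrative choices.

```python
# Sketch: max-column-sum and max-row-sum formulas for the induced
# l1 and l-infinity matrix norms, plus a submultiplicativity check.

def mat_norm_1(A):
    # max over columns j of sum_i |a_ij|
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def mat_norm_inf(A):
    # max over rows i of sum_j |a_ij|
    return max(sum(abs(a) for a in row) for row in A)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, -2.0], [3.0, 0.5]]
B = [[0.0, 4.0], [-1.0, 1.0]]

assert mat_norm_1(A) == 4.0      # column sums: 4 and 2.5
assert mat_norm_inf(A) == 3.5    # row sums: 3 and 3.5
assert mat_norm_1(mat_mul(A, B)) <= mat_norm_1(A) * mat_norm_1(B)
assert mat_norm_inf(mat_mul(A, B)) <= mat_norm_inf(A) * mat_norm_inf(B)
```

The \( {l}^{2} \) norm \( \rho {\left( {A}^{ * }A\right) }^{1/2} \) has no such elementary closed form and is omitted from this sketch.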
The similarity of the formulæ for \( \parallel A{\parallel }_{1} \) and \( \parallel A{\parallel }_{\infty } \), as well as the equality \( \parallel A{\parallel }_{2} = {\begin{Vmatrix}{A}^{ * }\end{Vmatrix}}_{2} \) illustrate the general formula \[ {\begin{Vmatrix}{A}^{ * }\end{Vmatrix}}_{{p}^{\prime }} = \parallel A{\parallel }_{p} = \mathop{\sup }\limits_{{x \neq 0}}\mathop{\sup }\limits_{{y \neq 0}}\frac{\Re \left( {{y}^{ * }{Ax}}\right) }{\parallel x{\parallel }_{p} \cdot \parallel y{\parallel }_{{p}^{\prime }}} = \mathop{\sup }\limits_{{x \neq 0}}\mathop{\sup }\limits_{{y \neq 0}}\frac{\left| \left( {y}^{ * }Ax\right) \right| }{\parallel x{\parallel }_{p} \cdot \parallel y{\parallel }_{{p}^{\prime }}}, \] where again \( p \) and \( {p}^{\prime } \) are conjugate exponents. We point out that if \( H \) is Hermitian, then \( \parallel H{\parallel }_{2} = \rho {\left( {H}^{2}\right) }^{1/2} = \rho \left( H\right) \) . Therefore the spectral radius is a norm over \( {\mathbf{H}}_{n} \), although it is not over \( {\mathbf{M}}_{n}\left( \mathbb{C}\right) \) . We already mentioned this fact, as a consequence of the Weyl inequalities. Proposition 7.5 For an induced norm, the condition \( \parallel B\parallel < 1 \) implies that \( {I}_{n} - B \) is invertible, with the inverse given by the sum of the series \[ \mathop{\sum }\limits_{{k = 0}}^{\infty }{B}^{k} \] Proof. The series \( \mathop{\sum }\limits_{k}{B}^{k} \) is normally convergent, because \( \mathop{\sum }\limits_{k}\begin{Vmatrix}{B}^{k}\end{Vmatrix} \leq \mathop{\sum }\limits_{k}\parallel B{\parallel }^{k} \), where the latter series converges because \( \parallel B\parallel < 1 \) . Because \( {\mathbf{M}}_{n}\left( K\right) \) is complete, the series \( \mathop{\sum }\limits_{k}{B}^{k} \) converges. Furthermore, \( \left( {{I}_{n} - B}\right) \mathop{\sum }\limits_{{k \leq N}}{B}^{k} = {I}_{n} - {B}^{N + 1} \), which tends to \( {I}_{n} \) . 
The sum of the series is thus the inverse of \( {I}_{n} - B \) . One has, moreover, \[ \begin{Vmatrix}{\left( {I}_{n} - B\right) }^{-1}\end{Vmatrix} \leq \mathop{\sum }\limits_{k}\parallel B{\parallel }^{k} = \frac{1}{1 - \parallel B\parallel }. \] \( \square \) One can also deduce Proposition 7.5 from the following statement. Proposition 7.6 For every induced norm, one has \[ \rho \left( A\right) \leq \parallel A\parallel \] Proof. The case \( K = \mathbb{C} \) is easy, because there exists an eigenvector \( X \in E \)
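The Neumann series of Proposition 7.5 can be checked numerically; a minimal sketch, using a diagonal \( B \) with \( \parallel B\parallel < 1 \) so that the exact inverse is known in closed form:

```python
# Sketch of Proposition 7.5: for ||B|| < 1, the partial sums of
# sum_k B^k converge to (I - B)^{-1}. B here is diagonal, so the
# exact inverse is known.

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B = [[0.5, 0.0], [0.0, 0.25]]           # ||B||_inf = 0.5 < 1
I = [[1.0, 0.0], [0.0, 1.0]]

partial, power = I, I
for _ in range(60):                     # partial sum of the series
    power = mat_mul(power, B)
    partial = mat_add(partial, power)

exact = [[2.0, 0.0], [0.0, 4.0 / 3.0]]  # (I - B)^{-1} for this B
assert all(abs(partial[i][j] - exact[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

The convergence is geometric with ratio \( \parallel B\parallel \), matching the bound \( \begin{Vmatrix}{B}^{k}\end{Vmatrix} \leq \parallel B{\parallel }^{k} \) used in the proof.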
1358_[陈松蹊&张慧铭] A Course in Fixed and High-dimensional Multivariate Analysis (2020)
Definition 2.2.2
Definition 2.2.2 Let \( C \) be a copula and \( {a}_{1},\ldots ,{a}_{k - 1},{a}_{k + 1},\ldots ,{a}_{p} \) be fixed numbers on \( I \). The \( k \)-section of \( C \), for \( k = 1,\ldots, p \), at \[ {a}_{\left( k\right) } = \left( {{a}_{1},\ldots ,{a}_{k - 1},{a}_{k + 1},\ldots ,{a}_{p}}\right) \] is \( t \rightarrow C\left( {{a}_{1},\ldots ,{a}_{k - 1}, t,{a}_{k + 1},\ldots ,{a}_{p}}\right) \); and the diagonal section of \( C \) is \( {\delta }_{C} : I \rightarrow I \) with \( {\delta }_{C}\left( t\right) = C\left( {t,\ldots, t}\right) . \) Corollary 2.2.1 All the \( k \)-sections and the diagonal section of \( C \) are non-decreasing and uniformly continuous on \( I \) . Proof : Follows readily from Lemma ?? and Theorem 2.2.1. Let \( p = 2 \) . The copula surface \( \left( {u, v, C\left( {u, v}\right) }\right) \) is continuous and "monotone" within \( {\left\lbrack 0,1\right\rbrack }^{3} \) and passes through \( \left( {0,0,0}\right) ,\left( {0,1,0}\right) ,\left( {1,0,0}\right) ,\left( {1,1,1}\right) \). A copula is not only uniformly continuous, but also "almost everywhere" differentiable with respect to the Lebesgue measure. This leads the way to the density of copulae. Lemma 2.2.2 Let \( C \) be a copula and \( u = \left( {{u}_{1},\ldots ,{u}_{p}}\right) \in {I}^{p} \) . Then, i) \( \frac{\partial C\left( u\right) }{\partial {u}_{k}} \) exists for almost all \( {u}_{k} \in I \) and \( 0 \leq \frac{\partial C\left( u\right) }{\partial {u}_{k}} \leq 1 \) . ii) The function \( {u}_{\left( k\right) } = \left( {{u}_{1},\ldots ,{u}_{k - 1},{u}_{k + 1},\ldots ,{u}_{p}}\right) \rightarrow \frac{\partial C\left( u\right) }{\partial {u}_{k}} \) is non-decreasing almost everywhere on \( {I}^{p - 1} \) . Proof : i) As all the \( k \)-sections of \( C \) are non-decreasing and continuous, they are differentiable almost everywhere (a.e.) with respect to the Lebesgue measure; that is, \( \frac{\partial C\left( u\right) }{\partial {u}_{k}} \) exists a.e..
For any \( 0 \leq {t}_{1} \leq {t}_{2} \leq 1 \), since the \( k \)-sections of \( C \) are monotone and by Theorem 2.2.1, \[ 0 \leq C\left( {{u}_{1},\ldots ,{u}_{k - 1},{t}_{2},{u}_{k + 1},\ldots ,{u}_{p}}\right) - C\left( {{u}_{1},\ldots ,{u}_{k - 1},{t}_{1},{u}_{k + 1},\ldots ,{u}_{p}}\right) \leq {t}_{2} - {t}_{1}. \] Hence, \[ 0 \leq \frac{C\left( {{u}_{1},\ldots ,{u}_{k - 1},{t}_{2},{u}_{k + 1},\ldots ,{u}_{p}}\right) - C\left( {{u}_{1},\ldots ,{u}_{k - 1},{t}_{1},{u}_{k + 1},\ldots ,{u}_{p}}\right) }{{t}_{2} - {t}_{1}} \leq 1. \] Letting \( {t}_{2} \rightarrow {t}_{1} \) from the right and using that \( \frac{\partial C\left( u\right) }{\partial {u}_{k}} \) exists a.e., \[ 0 \leq \frac{\partial C\left( u\right) }{\partial {u}_{k}} \leq 1 \] ii) See Nelson (1998) for the proof. Remarks: The monotonicity of \( {u}_{\left( k\right) } \rightarrow \frac{\partial C\left( u\right) }{\partial {u}_{k}} \) implies that \( \frac{{\partial }^{2}C\left( u\right) }{\partial {u}_{k}\partial {u}_{\ell }} \) exists a.e. for \( \ell \neq k \) . Repeating this procedure \( p - 2 \) times over the remaining variables shows that the copula density \( \frac{{\partial }^{p}C\left( u\right) }{\partial {u}_{1}\cdots \partial {u}_{p}} \) exists a.e.. The copula density is more revealing about the underlying data structure than the copula function, just as a density is more revealing than a distribution function. ## 2.3 Sklar Theorem and Fréchet-Hoeffding Bounds There had been earlier work on linking a joint distribution with its marginals, for instance that of Fréchet (1951) and Féron (1956). The most significant contribution was by Abe Sklar, announced in 1959. A sketch of the proof appeared in Sklar (1973) and in Schweizer and Sklar (1974). Theorem 2.3.1. (Sklar 1959, Schweizer and Sklar 1974) Let \( F \) be a p-dimensional distribution function with margins \( {F}_{1},\ldots ,{F}_{p} \) .
Then, there exists a p-dimensional copula \( C \) such that for all \( x \in {\overline{\mathbb{R}}}^{p} \) , \[ F\left( {{x}_{1},\ldots ,{x}_{p}}\right) = C\left( {{F}_{1}\left( {x}_{1}\right) ,\ldots ,{F}_{p}\left( {x}_{p}\right) }\right) . \] (2.3.1) If \( {F}_{1},\ldots ,{F}_{p} \) are all continuous, i.e., \( \operatorname{Ran}{F}_{i} = \left\lbrack {0,1}\right\rbrack \), then \( C \) is unique. Otherwise, \( C \) is uniquely determined on \( {\operatorname{RanF}}_{1} \times \cdots \times {\operatorname{RanF}}_{p} \) . Conversely, if \( C \) is a copula and \( {F}_{i} \) ’s are marginal distributions, then \( F \) defined in (2.3.1) is a distribution function on \( {\overline{\mathbb{R}}}^{p} \) with \( {F}_{1},\ldots ,{F}_{p} \) as its marginal distributions. Proof : As \( F \) is a distribution function, it satisfies the conditions of Lemma 2.2.1. So, for all \( x = \left( {{x}_{1},\ldots ,{x}_{p}}\right) \) and \( y = \left( {{y}_{1},\ldots ,{y}_{p}}\right) \in {\mathbb{R}}^{p} \) , \[ \left| {F\left( x\right) - F\left( y\right) }\right| \leq \mathop{\sum }\limits_{{k = 1}}^{p}\left| {{F}_{k}\left( {x}_{k}\right) - {F}_{k}\left( {y}_{k}\right) }\right| . \] Hence, if \( {F}_{k}\left( {x}_{k}\right) = {F}_{k}\left( {y}_{k}\right) \) for all \( k = 1,2,\ldots, p \), then \( F\left( x\right) = F\left( y\right) \) . This means that there exists a function of \( \left( {{F}_{1}\left( {x}_{1}\right) ,\ldots ,{F}_{p}\left( {x}_{p}\right) }\right) \) denoted as \( C \) such that \[ F\left( {{x}_{1},\ldots ,{x}_{p}}\right) = C\left( {{F}_{1}\left( {x}_{1}\right) ,\ldots ,{F}_{p}\left( {x}_{p}\right) }\right) . \] So, \( C \) is unique. It can be checked that \( C \) is a copula; see Nelson (1998) for details. Let \( X = \left( {{X}_{1},\ldots ,{X}_{p}}\right) \) be a random vector with a joint distribution \( F \) and marginal distributions \( {F}_{1},\ldots ,{F}_{p} \) . 
Then, Sklar's Theorem implies that there is a copula \( C \) such that \[ F\left( {{x}_{1},\ldots ,{x}_{p}}\right) = C\left( {{F}_{1}\left( {x}_{1}\right) ,\ldots ,{F}_{p}\left( {x}_{p}\right) }\right) , \] and \( C \) is the copula of \( X \), denoted \( {C}_{X} \) . For \( u = \left( {{u}_{1},\cdots ,{u}_{p}}\right) \in {I}^{p} \), define the so-called independence copula \( \Pi \left( u\right) = {u}_{1} \times \cdots \times {u}_{p} \) . Clearly, for a continuous random vector \( X = \left( {{X}_{1},\ldots ,{X}_{p}}\right) \) with independent components, \( {C}_{X} = \Pi \) . ## Definition 2.3.1 (Quantile) Let \( F \) be a univariate distribution. The generalized inverse (quantile) of \( F \) is \( {F}^{-1}\left( t\right) = \) \( \inf \{ x \in \mathbb{R} \mid F\left( x\right) \geq t\} \), with the convention that \( \inf \varnothing = \infty \) . Corollary 2.3.1 Let \( F \) be a p-dimensional distribution with continuous margins \( {F}_{i} \) and copula \( C \) . Then, for all \( u = \left( {{u}_{1},\ldots ,{u}_{p}}\right) \in {\left\lbrack 0,1\right\rbrack }^{p} \) , \[ C\left( u\right) = F\left( {{F}_{1}^{-1}\left( {u}_{1}\right) ,\ldots ,{F}_{p}^{-1}\left( {u}_{p}\right) }\right) . \] Proof : As each \( {F}_{i} \) is continuous, \( C \) is unique and \( \operatorname{Ran}{F}_{i} = I \) . Hence, for all \( {u}_{i} \in I \), let \( {x}_{i} = {F}_{i}^{-1}\left( {u}_{i}\right) \) . Then, Sklar's Theorem implies \[ F\left( {{x}_{1},\ldots ,{x}_{p}}\right) = F\left( {{F}_{1}^{-1}\left( {u}_{1}\right) ,\ldots ,{F}_{p}^{-1}\left( {u}_{p}\right) }\right) \] \[ = C\left( {{F}_{1}\left( {{F}_{1}^{-1}\left( {u}_{1}\right) }\right) ,\ldots ,{F}_{p}\left( {{F}_{p}^{-1}\left( {u}_{p}\right) }\right) }\right) \] \[ = C\left( {{u}_{1},\ldots ,{u}_{p}}\right) \text{.} \] Remarks: i) The corollary can be used to find the copula that corresponds to a joint distribution \( F \) . ii) The copula is the link between a joint distribution \( F \) and its margins.
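The generalized inverse of Definition 2.3.1 can be sketched for a discrete (step) distribution; the support points and probabilities below are illustrative choices.

```python
# Sketch of Definition 2.3.1: the generalized inverse
# F^{-1}(t) = inf { x : F(x) >= t } for a discrete (step) CDF.

def make_cdf(support, probs):
    def F(x):
        return sum(p for s, p in zip(support, probs) if s <= x)
    return F

def quantile(F, support, t):
    # scanning the sorted support suffices for a step function
    for x in support:
        if F(x) >= t:
            return x
    return float("inf")        # convention: inf of the empty set

support, probs = [0, 1, 2], [0.2, 0.5, 0.3]
F = make_cdf(support, probs)

assert quantile(F, support, 0.1) == 0
assert quantile(F, support, 0.2) == 0   # F(0) = 0.2 >= 0.2
assert quantile(F, support, 0.5) == 1
assert quantile(F, support, 0.95) == 2
```

For continuous, strictly increasing \( F \) the generalized inverse coincides with the ordinary inverse, which is the case used in Corollary 2.3.1.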
It "fully" describes the dependence among components of the random vector that has \( F \) as its distribution. iii) The dependence described by the copula is comprehensive, as it is a function rather than a single number (like the correlation coefficient) or a few numbers (Kendall’s \( \tau \), Spearman’s \( \rho ,\ldots \) ), which are only partial summaries of the dependence. Gaussian Copula. Let \( {\Phi }_{p,\Sigma } \) be the distribution of \( {N}_{p}\left( {0,\Sigma }\right) \) and \( \Phi \) be the marginal distribution of \( N\left( {0,1}\right) \), where the covariance matrix \( \Sigma = {\left( {\rho }_{ij}\right) }_{i, j = 1,\ldots, p} \) and \( {\rho }_{ii} = 1 \) for \( i = 1,\ldots, p \) . Its copula, called the Gaussian copula, is \[ C\left( {{u}_{1},\ldots ,{u}_{p}}\right) = {\Phi }_{p,\Sigma }\left( {{\Phi }^{-1}\left( {u}_{1}\right) ,\ldots ,{\Phi }^{-1}\left( {u}_{p}\right) }\right) . \] If \( p = 2 \), then \[ C\left( {u, v}\right) = {\int }_{-\infty }^{{\Phi }^{-1}\left( u\right) }{\int }_{-\infty }^{{\Phi }^{-1}\left( v\right) }\frac{1}{{2\pi }\sqrt{1 - {\rho }^{2}}}\exp \left( {-\frac{{s}^{2} - {2\rho st} + {t}^{2}}{2\left( {1 - {\rho }^{2}}\right) }}\right) {ds}\,{dt}. \] Similarly, the \( t \)-copula can be derived. Ali-Mikhail-Haq Copula. Consider a bivariate distribution \[ F\left( {x, y}\right) = \left\{ \begin{array}{ll} \frac{\left( {x + 1}\right) \left( {{e}^{y} - 1}\right) }{x + 2{e}^{y} - 1}, & \left( {x, y}\right) \in \left\lbrack {-1,1}\right\rbrack \times \left\lbrack {0,\infty }\right\rbrack \\ 1 - {e}^{-y}, & \left( {x, y}\right) \in (1,\infty \rbrack \times \left\lbrack {0,\infty }\right\rbrack \\ 0 & \text{ otherwise. } \end{array}\right. \] One can check that \( F \) is a bivariate distribution function: \( F\left( {-\infty, y}\right) = 0 = F\left( {x, - \infty }\right) \) , and in addition \( F\left( {\infty ,\infty }\right) = 1 - {e}^{-\infty } = 1 \) .
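Sampling from the bivariate Gaussian copula is straightforward once \( \Phi \) is available; a minimal sketch using only the standard library (the sample size, seed, and correlation are arbitrary choices, and `statistics.NormalDist` supplies \( \Phi \)):

```python
# Sketch: sampling (U, V) with a bivariate Gaussian copula of
# correlation rho: draw correlated normals via the Cholesky factor
# of [[1, rho], [rho, 1]], then push each coordinate through Phi.
import random
from statistics import NormalDist

def sample_gaussian_copula(rho, n, seed=0):
    rng = random.Random(seed)
    phi = NormalDist()
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        x1 = z1
        x2 = rho * z1 + (1 - rho ** 2) ** 0.5 * z2
        out.append((phi.cdf(x1), phi.cdf(x2)))
    return out

pairs = sample_gaussian_copula(0.8, 2000)
# Each coordinate has a Uniform(0,1) margin.
assert all(0.0 < u < 1.0 and 0.0 < v < 1.0 for u, v in pairs)
mean_u = sum(u for u, _ in pairs) / len(pairs)
assert abs(mean_u - 0.5) < 0.05   # loose sanity check on the margin
```

The dependence lives entirely in the joint behaviour of the pair; each margin is uniform regardless of \( \rho \), which is exactly the copula viewpoint.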
Its first margin is \[ {F}_{1}\left( x\right) = F\left( {x,\infty }\right) = \left\{ \begin{array}{ll} \frac{\left( {x + 1}\right) \left( {{e}^{\infty } - 1}\right) }{x + 2{e}^{\infty } - 1} = \frac{x + 1}{2}, & x \in \left\lbrack {-1,1}\right\rbrack \\ 1 - {e}^{-\infty } = 1, & x > 1 \\ 0 & \text{ ot
18_Algebra Chapter 0
Definition 1.2
Definition 1.2. The quotient of the set \( S \) with respect to the equivalence relation \( \sim \) is the set \[ S/ \sim \mathrel{\text{:=}} {\mathcal{P}}_{ \sim } \] of equivalence classes of elements of \( S \) with respect to \( \sim \) . Example 1.3. Take \( S = \mathbb{Z} \), and let \( \sim \) be the relation defined by \[ a \sim b \Leftrightarrow a - b\text{is even.} \] Then \( \mathbb{Z}/ \sim \) consists of two equivalence classes: \[ \mathbb{Z}/ \sim = \left\{ {{\left\lbrack 0\right\rbrack }_{ \sim },{\left\lbrack 1\right\rbrack }_{ \sim }}\right\} \] Indeed, every integer \( b \) is either even (and hence \( b - 0 \) is even, so \( b \sim 0 \), and \( b \in {\left\lbrack 0\right\rbrack }_{ \sim } \) ) or odd (and hence \( b - 1 \) is even, so \( b \sim 1 \), and \( b \in {\left\lbrack 1\right\rbrack }_{ \sim } \) ). This is of course the starting point of modular arithmetic, which we will cover in due detail later on (§II 2.3). One way to think about this operation is that the equivalence relation 'becomes equality in the quotient’: that is, two elements of the quotient \( S/ \sim \) are equal if and only if the corresponding elements in \( S \) are related by \( \sim \) . In other words, taking a quotient is a way to turn any equivalence relation into an equality. This observation will be further formalized in 'categorical terms' in a short while (§5.3). ## Exercises Exercises marked with a \( \vartriangleright \) are referred to from the text; exercises marked with a \( \neg \) are referred to from other exercises. These referring exercises and sections are listed in brackets following the current exercise; see the introduction for further clarifications, if necessary. 1.1. Locate a discussion of Russell's paradox, and understand it. 1.2.
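Example 1.3 can be sketched computationally on a finite window of integers (the window \( \{-4,\ldots,4\} \) is an arbitrary choice):

```python
# Sketch of Example 1.3: the quotient Z/~ for "a ~ b iff a - b is
# even", computed on a finite window of integers.

def equivalence_classes(elements, related):
    classes = []
    for a in elements:
        for cls in classes:
            if related(a, cls[0]):   # compare against a representative
                cls.append(a)
                break
        else:
            classes.append([a])      # a starts a new class
    return classes

parity = lambda a, b: (a - b) % 2 == 0
classes = equivalence_classes(range(-4, 5), parity)

assert len(classes) == 2                       # [0]~ and [1]~
assert sorted(classes[0]) == [-4, -2, 0, 2, 4]
assert sorted(classes[1]) == [-3, -1, 1, 3]
```

Each class is determined by any one of its representatives, which is precisely the sense in which \( \sim \) "becomes equality in the quotient".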
\( \vartriangleright \) Prove that if \( \sim \) is an equivalence relation on a set \( S \), then the corresponding family \( {\mathcal{P}}_{ \sim } \) defined in §1.5 is indeed a partition of \( S \) : that is, its elements are nonempty, disjoint, and their union is \( S \) . [9,1,5] 1.3. \( \vartriangleright \) Given a partition \( \mathcal{P} \) on a set \( S \), show how to define a relation \( \sim \) on \( S \) such that \( \mathcal{P} \) is the corresponding partition. [4.5] 1.4. How many different equivalence relations may be defined on the set \( \{ 1,2,3\} \) ? 1.5. Give an example of a relation that is reflexive and symmetric but not transitive. What happens if you attempt to use this relation to define a partition on the set? (Hint: Thinking about the second question will help you answer the first one.) 1.6. \( \vartriangleright \) Define a relation \( \sim \) on the set \( \mathbb{R} \) of real numbers by setting \( a \sim b \Leftrightarrow b - a \in \) \( \mathbb{Z} \) . Prove that this is an equivalence relation, and find a ’compelling’ description for \( \mathbb{R}/ \sim \) . Do the same for the relation \( \approx \) on the plane \( \mathbb{R} \times \mathbb{R} \) defined by declaring \( \left( {{a}_{1},{a}_{2}}\right) \approx \left( {{b}_{1},{b}_{2}}\right) \Leftrightarrow {b}_{1} - {a}_{1} \in \mathbb{Z} \) and \( {b}_{2} - {a}_{2} \in \mathbb{Z} \) . [418.1,118.10] ## 2. Functions between sets 2.1. Definition. A common thread we will follow for just about every structure introduced in this book will be to try to understand both the type of structures and the ways in which different instances of a given structure may interact. Sets interact with each other through functions. It is tempting to think of a function \( f \) from a set \( A \) to a set \( B \) in ’dynamic’ terms, as a way to ’go from \( A \) to \( B \) ’.
Similarly to the business with relations, it is straightforward to formalize this notion in ways that do not need to invoke any deep ’meaning’ of any given \( f \) : everything that can be known about a function \( f \) is captured by the information of which element \( b \) of \( B \) is the image of any given element \( a \) of \( A \) . This information is nothing but a subset of \( A \times B \) : \[ {\Gamma }_{f} \mathrel{\text{:=}} \{ \left( {a, b}\right) \in A \times B \mid b = f\left( a\right) \} \subseteq A \times B. \] This set \( {\Gamma }_{f} \) is the graph of \( f \) ; officially, a function really ’is’ its graph 6 . Not all subsets \( \Gamma \subseteq A \times B \) correspond to (’are’) functions: we need to put one requirement on the graphs of functions, which can be expressed as follows: \[ \left( {\forall a \in A}\right) \left( {\exists !b \in B}\right) \;\left( {a, b}\right) \in {\Gamma }_{f}, \] or ('in functional notation') \[ \left( {\forall a \in A}\right) \left( {\exists !b \in B}\right) \;f\left( a\right) = b. \] That is, a function must send each element \( a \) of \( A \) to exactly one element of \( B \) , depending on \( a \) . ’Multivalued functions’ such as \( \pm \sqrt{x} \) (which are very important in, e.g., the study of Riemann surfaces) are not functions in this sense. To announce that \( f \) is a function from a set \( A \) to a set \( B \), one writes \( f : A \rightarrow B \) or draws the following picture ('diagram'): \[ A\overset{f}{ \rightarrow }B \] The action of a function \( f : A \rightarrow B \) on an element \( a \in A \) is sometimes indicated by a 'decorated' arrow, as in \[ a \mapsto f\left( a\right) . \] The collection of all functions from a set \( A \) to a set \( B \) is itself a set 7, denoted \( {B}^{A} \) . If we take seriously the notion that a function is really the same thing as its graph, then we can view \( {B}^{A} \) as a (special) subset of the power set of \( A \times B \) . 
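The condition \( \left( {\forall a \in A}\right) \left( {\exists !b \in B}\right) \left( {a, b}\right) \in {\Gamma }_{f} \) is easy to test on finite sets; a minimal sketch (the sets and pairs below are illustrative):

```python
# Sketch: testing whether a subset Gamma of A x B is the graph of a
# function, i.e. each a in A appears in exactly one pair (a, b).

def is_function_graph(gamma, A, B):
    return all(
        sum(1 for (a, b) in gamma if a == x and b in B) == 1
        for x in A
    )

A, B = {1, 2, 3}, {"x", "y"}
good = {(1, "x"), (2, "y"), (3, "x")}   # a genuine graph
not_total = {(1, "x"), (2, "y")}        # 3 has no image
multivalued = good | {(1, "y")}         # 1 has two images

assert is_function_graph(good, A, B)
assert not is_function_graph(not_total, A, B)
assert not is_function_graph(multivalued, A, B)
```

The `multivalued` example is exactly the \( \pm \sqrt{x} \) phenomenon mentioned in the text: one input with two candidate outputs fails the uniqueness requirement.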
Every set \( A \) comes equipped with a very special function, whose graph is the diagonal in \( A \times A \) : the identity function on \( A \) \[ {\operatorname{id}}_{A} : A \rightarrow A \] defined by \( \left( {\forall a \in A}\right) {\operatorname{id}}_{A}\left( a\right) = a \) . More generally, the inclusion of any subset \( S \) of a set \( A \) determines a function \( S \rightarrow A \), simply sending every element \( s \) of \( S \) to ’itself’ in \( A \) . If \( S \) is a subset of \( A \), we denote by \( f\left( S\right) \) the subset of \( B \) defined by \[ f\left( S\right) \mathrel{\text{:=}} \{ b \in B \mid \left( {\exists a \in S}\right) b = f\left( a\right) \} . \] That is, \( f\left( S\right) \) is the subset of \( B \) consisting of all elements that are images of elements of \( S \) by the function \( f \) . The largest such subset, that is, \( f\left( A\right) \), is called the image of \( f \), denoted ’im \( f \) ’. Also, \( {\left. f\right| }_{S} \) denotes the ’restriction’ of \( f \) to the subset \( S \) : this is the function \( S \rightarrow B \) defined by \[ \left( {\forall s \in S}\right) : {\left. \;f\right| }_{S}\left( s\right) = f\left( s\right) . \] \( {}^{6} \) To be precise, it is the graph \( {\Gamma }_{f} \) together with the information of the source \( A \) and the target \( B \) of \( f \) . These are part of the data of the function. \( {}^{7} \) This is another ’operation among sets’, not listed in [1.3] Can you see why we use \( {B}^{A} \) for this set? (Cf. Exercise 2.10) That is, \( {\left. f\right| }_{S} \) is the composition (in the sense explained in (2.3) \( f \circ i \), where \( i : S \rightarrow A \) is the inclusion. Note that \( f\left( S\right) = \operatorname{im}\left( {\left. f\right| }_{S}\right) \) . 2.2. Examples: Multisets, indexed sets. The 'multisets' mentioned briefly in [1.1] are a simple example of a notion easily formalized by means of functions. 
A multiset may be defined by giving a function from a (regular) set \( A \) to the set \( {\mathbb{N}}^{ * } \) of positive 8 integers; if \( m : A \rightarrow {\mathbb{N}}^{ * } \) is such a function, the corresponding multiset consists of the elements \( a \in A \), each taken \( m\left( a\right) \) times. Thus, the multiset \( \{ a, a, a, b, b, b, b, b, c\} \) is really the function \( m : \{ a, b, c\} \rightarrow {\mathbb{N}}^{ * } \) for which \( m\left( a\right) = 3 \) , \( m\left( b\right) = 5, m\left( c\right) = 1 \) . As with ordinary sets, the order in which the elements are listed is not part of the information carried by a multiset. Simple set-theoretic notions such as inclusion, union, etc., extend in a straightforward way to multisets. For another viewpoint on multisets, see Exercise 3.9 Another example is given by the use of ’indices’. If we write let \( {a}_{1},\ldots ,{a}_{n} \) be integers..., we really mean consider a function \( a : \{ 1,\ldots, n\} \rightarrow \mathbb{Z}\ldots \), with the understanding that \( {a}_{i} \) is shorthand for the value \( a\left( i\right) \) (for \( i = 1,\ldots, n \) ). It is tempting to think of an indexed set \( {\left\{ {a}_{i}\right\} }_{i \in I} \) simply as a set whose elements happen to be denoted \( {a}_{i} \), for \( i \) ranging over some ’set of indices’ \( I \) ; but such an indexed set is more properly a function \( I \rightarrow A \), where \( A \) is some set from which we draw the elements \( {a}_{i} \) . For example, this allows us to consider \( {a}_{0} \) and \( {a}_{1} \) as distinct elements of \( {\left\{ {a}_{i}\right\} }_{i \in \mathbb{N}} \), even if by coincidence \( {a}_{0} = {a}_{1} \) as elements of the target set \( A \) . It is easy to miss such subtleties, and some abuse of notation is common and usually harmless. These distinctions play a role in (for example) discussions of linear independence of sets of vectors; cf. SVI 1.2. 2.3. Composition of functions. 
Functions may be composed: if \( f : A \rightarrow B \) and \( g : B \rightarrow C \) are functions, then so is the operation \( g \circ f \) defined by \( \left( *\right) \) \[ \left( {\forall a \in A}\right) \;\left( {g \circ f}\right) \left( a\right) \mathrel{\text{:=}} g\left( {f\left( a\right) }\right) : \] that is, we use \( f \) to go from \( A \) to \( B \), then
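The composition rule \( \left( *\right) \) can be sketched directly (the particular functions \( f \) and \( g \) are arbitrary choices):

```python
# Sketch of (*): (g o f)(a) := g(f(a)).

def compose(g, f):
    return lambda a: g(f(a))

f = lambda a: a + 1          # f : Z -> Z
g = lambda b: 2 * b          # g : Z -> Z

h = compose(g, f)
assert h(3) == 8             # g(f(3)) = g(4) = 8
# Composition is generally not commutative:
assert compose(f, g)(3) == 7 # f(g(3)) = f(6) = 7
```

Note the order: \( g \circ f \) applies \( f \) first, matching the convention of \( \left( *\right) \).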
18_Algebra Chapter 0
Definition 4.5
Definition 4.5. Two groups \( G, H \) are isomorphic if they are isomorphic in Grp in the sense of [14.1]; that is (by Proposition 4.3), if there is a bijective group homomorphism \( G \rightarrow H \) . We have observed once and for all in § 14.1 that 'isomorphic' is automatically an equivalence relation. We write \( G \cong H \) if \( G \) and \( H \) are isomorphic. Automorphisms of a group \( G \) are isomorphisms \( G \rightarrow G \) ; these form a group \( {\operatorname{Aut}}_{\mathsf{{Grp}}}\left( G\right) \) (cf. [14.1]), usually denoted \( \operatorname{Aut}\left( G\right) \) . Example 4.6. We have introduced our template of cyclic groups in [2.3]. The notion of isomorphism allows us to give a formal definition: Definition 4.7. A group \( G \) is cyclic if it is isomorphic to \( \mathbb{Z} \) or to \( {C}_{n} = \mathbb{Z}/n\mathbb{Z} \) for some \( {}^{19}n \) . Thus, \( {C}_{2} \times {C}_{3} \) is cyclic, of order 6, since \( {C}_{2} \times {C}_{3} \cong {C}_{6} \) . More generally (Exercise 4.9) \( {C}_{m} \times {C}_{n} \) is cyclic if \( \gcd \left( {m, n}\right) = 1 \) . The reader will check easily (Exercise 4.3) that a group of order \( n \) is cyclic if and only if it contains an element of order \( n \) . There is a somewhat surprising source of cyclic groups: if \( p \) is prime, the group \( \left( {{\left( \mathbb{Z}/p\mathbb{Z}\right) }^{ * }, \cdot }\right) \) is cyclic. We will prove a more general statement when we have accumulated more machinery (Theorem IV16.10), but the adventurous reader can already enjoy a proof by working out Exercise 4.11 This is a relatively deep fact; note that, for example, \( {\left( \mathbb{Z}/{12}\mathbb{Z}\right) }^{ * } \) is not cyclic (cf. Exercise 2.19 and Exercise 4.10).
The fact that \( {\left( \mathbb{Z}/p\mathbb{Z}\right) }^{ * } \) is cyclic for \( p \) prime means that there must be integers \( a \) such that every nonmultiple of \( p \) is congruent to a power of \( a \) ; the usual proofs of this fact are not constructive, that is, they do not explicitly produce an integer with this property. There is a very pretty connection between the order of an element of the cyclic group \( {\left( \mathbb{Z}/p\mathbb{Z}\right) }^{ * } \) and the so-called ’cyclotomic polynomials’; but that will have to wait for a little field theory (cf. Exercise VII 5.15). As we have seen, the groups \( {D}_{6} \) and \( {S}_{3} \) are isomorphic. Are \( {C}_{6} \) and \( {S}_{3} \) isomorphic? There are 46,656 functions between the sets \( {C}_{6} \) and \( {S}_{3} \), of which 720 are bijections and 120 are bijections preserving the identity. The reader is welcome to list all 120 and attempt to verify by hand if any of them is a homomorphism. But maybe there is a better strategy to answer such questions... Isomorphic objects of a category are essentially indistinguishable in that category. Thus, isomorphic groups share every group-theoretic structure. In particular, \( {}^{19} \) This includes the possibility that \( n = 1 \), that is, trivial groups are cyclic. Proposition 4.8. Let \( \varphi : G \rightarrow H \) be an isomorphism. Then: - \( \left( {\forall g \in G}\right) : \left| {\varphi \left( g\right) }\right| = \left| g\right| \) ; - \( G \) is commutative if and only if \( H \) is commutative. Proof. The first assertion follows from Proposition 4.1: the order of \( \varphi \left( g\right) \) divides the order of \( g \), and on the other hand the order of \( g = {\varphi }^{-1}\left( {\varphi \left( g\right) }\right) \) must divide the order of \( \varphi \left( g\right) \) ; thus the two orders must be equal. The proof of the second assertion is left to the reader.
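Rather than checking 120 bijections by hand, one can compare the order profiles of \( {C}_{6} \) and \( {S}_{3} \) by brute force; a minimal sketch, modelling \( {C}_{6} \) as \( \mathbb{Z}/6\mathbb{Z} \) and \( {S}_{3} \) as permutations of \( \{ 0,1,2\} \):

```python
# Sketch: element orders in C6 versus S3 differ, so no bijection
# between them can be a group isomorphism (Proposition 4.8).
from itertools import permutations

def order_in_c6(g):
    k, x = 1, g % 6
    while x != 0:            # keep adding g until we return to 0
        x = (x + g) % 6
        k += 1
    return k

def order_of_perm(p):
    ident = tuple(range(len(p)))
    q, k = p, 1
    while q != ident:        # compose with p until the identity
        q = tuple(p[i] for i in q)
        k += 1
    return k

def profile(orders):
    return {n: orders.count(n) for n in sorted(set(orders))}

c6 = profile([order_in_c6(g) for g in range(6)])
s3 = profile([order_of_perm(p) for p in permutations(range(3))])

assert c6 == {1: 1, 2: 1, 3: 2, 6: 2}
assert s3 == {1: 1, 2: 3, 3: 2}
assert c6 != s3   # hence C6 and S3 are not isomorphic
```

This is exactly the order-counting argument used in Example 4.9, carried out exhaustively.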
Further instances of this principle will be assumed without explicit mention. Example 4.9. \( {C}_{6} \ncong {S}_{3} \), since one is commutative and the other is not. Here is another reason: in \( {C}_{6} \) there is 1 element of order one, 1 of order two, 2 of order three, and 2 of order six; in \( {S}_{3} \) the situation is different: 1 element of order one, 3 of order two, 2 of order three. Thus, none of the 120 bijections \( {C}_{6} \rightarrow {S}_{3} \) preserving the identity is a group homomorphism. Note: Two finite commutative groups are isomorphic if and only if they have the same number of elements of any given order, but we are not yet in a position to prove this; the reader will verify this fact in due time (Exercise IV16.13). The commutativity hypothesis is necessary: there do exist pairs of nonisomorphic finite groups with the same number of elements of any given order (same exercise). 4.4. Homomorphisms of abelian groups. I have already mentioned that \( \mathrm{{Ab}} \) is in some ways 'better behaved' than Grp, and I am ready to highlight another instance of this observation. As we have seen, \( {\operatorname{Hom}}_{\mathrm{{Grp}}}\left( {G, H}\right) \) is a pointed set for any two groups \( G, H \) . In Ab, we can say much more: \( {\operatorname{Hom}}_{\mathrm{{Ab}}}\left( {G, H}\right) \) is a group (in fact, an abelian group) for any two abelian groups \( G, H \) . The operation in \( {\operatorname{Hom}}_{\mathrm{{Ab}}}\left( {G, H}\right) \) is ’inherited’ from the operation in \( H \) : if \( \varphi ,\psi \) : \( G \rightarrow H \) are two group homomorphisms, let \( \varphi + \psi \) be the function defined by \[ \left( {\forall a \in G}\right) : \;\left( {\varphi + \psi }\right) \left( a\right) \mathrel{\text{:=}} \varphi \left( a\right) + \psi \left( a\right) . \] Is \( \varphi + \psi \) a group homomorphism?
Yes, because \( \forall a, b \in G \)
\[ \left( {\varphi + \psi }\right) \left( {a + b}\right) = \varphi \left( {a + b}\right) + \psi \left( {a + b}\right) = \left( {\varphi \left( a\right) + \varphi \left( b\right) }\right) + \left( {\psi \left( a\right) + \psi \left( b\right) }\right) \]
\[ \overset{!}{ = }\left( {\varphi \left( a\right) + \psi \left( a\right) }\right) + \left( {\varphi \left( b\right) + \psi \left( b\right) }\right) = \left( {\varphi + \psi }\right) \left( a\right) + \left( {\varphi + \psi }\right) \left( b\right) . \]
Note that the equality marked by ! uses crucially the fact that \( H \) is commutative. With this operation, \( {\operatorname{Hom}}_{\mathrm{{Ab}}}\left( {G, H}\right) \) is clearly a group: the associativity of + is inherited from that of the operation in \( H \); the trivial homomorphism is the identity element, and the inverse\( {}^{20} \) of \( \varphi : G \rightarrow H \) is defined (not surprisingly) by
\[ \left( {\forall a \in G}\right) : \;\left( {-\varphi }\right) \left( a\right) = - \varphi \left( a\right) . \]
In fact, note that these conclusions may be drawn as soon as \( H \) is commutative: \( {\operatorname{Hom}}_{\mathrm{{Grp}}}\left( {G, H}\right) \) is a group if \( H \) is commutative (even if \( G \) is not). In fact, if \( H \) is a commutative group, then \( {H}^{A} = {\operatorname{Hom}}_{\text{Set }}\left( {A, H}\right) \) is a commutative group for all sets \( A \); we will come back to this group in §5.4.

\( {}^{20} \) Unfortunate clash of terminology! I mean the ’inverse’ as in ’group inverse’, not as a possible function \( H \rightarrow G \).

## Exercises

4.1. \( \vartriangleright \) Check that the function \( {\pi }_{m}^{n} \) defined in §4.1 is well-defined and makes the diagram commute. Verify that it is a group homomorphism. Why is the hypothesis \( m \mid n \) necessary? [§4.1]

4.2.
Show that the homomorphism \( {\pi }_{2}^{4} \times {\pi }_{2}^{4} : {C}_{4} \rightarrow {C}_{2} \times {C}_{2} \) is not an isomorphism. In fact, is there any isomorphism \( {C}_{4} \rightarrow {C}_{2} \times {C}_{2} \)?

4.3. \( \vartriangleright \) Prove that a group of order \( n \) is isomorphic to \( \mathbb{Z}/n\mathbb{Z} \) if and only if it contains an element of order \( n \). [§4.3]

4.4. Prove that no two of the groups \( \left( {\mathbb{Z}, + }\right) ,\left( {\mathbb{Q}, + }\right) ,\left( {\mathbb{R}, + }\right) \) are isomorphic to one another. Can you decide whether \( \left( {\mathbb{R}, + }\right) ,\left( {\mathbb{C}, + }\right) \) are isomorphic to one another? (Cf. Exercise VI.1.1.)

4.5. Prove that the groups \( \left( {\mathbb{R}\smallsetminus \{ 0\} , \cdot }\right) \) and \( \left( {\mathbb{C}\smallsetminus \{ 0\} , \cdot }\right) \) are not isomorphic.

4.6. We have seen that \( \left( {\mathbb{R}, + }\right) \) and \( \left( {{\mathbb{R}}^{ > 0}, \cdot }\right) \) are isomorphic (Example 4.4). Are the groups \( \left( {\mathbb{Q}, + }\right) \) and \( \left( {{\mathbb{Q}}^{ > 0}, \cdot }\right) \) isomorphic?

4.7. Let \( G \) be a group. Prove that the function \( G \rightarrow G \) defined by \( g \mapsto {g}^{-1} \) is a homomorphism if and only if \( G \) is abelian. Prove that \( g \mapsto {g}^{2} \) is a homomorphism if and only if \( G \) is abelian.

4.8. \( \neg \) Let \( G \) be a group, and let \( g \in G \). Prove that the function \( {\gamma }_{g} : G \rightarrow G \) defined by \( \left( {\forall a \in G}\right) : {\gamma }_{g}\left( a\right) = {ga}{g}^{-1} \) is an automorphism of \( G \). (The automorphisms \( {\gamma }_{g} \) are called ’inner’ automorphisms of \( G \).) Prove that the function \( G \rightarrow \operatorname{Aut}\left( G\right) \) defined by \( g \mapsto {\gamma }_{g} \) is a homomorphism. Prove that this homomorphism is trivial if and only if \( G \) is abelian. [6.7, 7.11, IV.1.5]

4.9.
\( \vartriangleright \) Prove that if \( m, n \) are positive integers such that \( \gcd \left( {m, n}\right) = 1 \), then \( {C}_{mn} \cong {C}_{m} \times {C}_{n} \). [§4.3, 4.10, §IV.6.1, V.6.8]

4.10. \( \vartriangleright \) Let \( p \neq q \) be odd prime integers; show that \( {\left( \mathbb{Z}/pq\mathbb{Z}\right) }^{ * } \) is not cyclic. (Hint: Use Exercise 4.9 to compute the order \( N \) of \( {\left( \mathbb{Z}/pq\mathbb{Z}\right) }^{ * } \)
1189_(GTM95)Probability-1
Definition 1
Definition 1. A random vector \( \xi = {\left( {\xi }_{1},\ldots ,{\xi }_{n}\right) }^{ * } \) is Gaussian, or normally distributed, if its characteristic function has the form \[ {\varphi }_{\xi }\left( t\right) = {e}^{i\left( {t, m}\right) - \left( {1/2}\right) \left( {\mathbb{R}t, t}\right) }, \] (5) where \( m = {\left( {m}_{1},\ldots ,{m}_{n}\right) }^{ * },\left| {m}_{k}\right| < \infty \), and \( \mathbb{R} = \begin{Vmatrix}{r}_{kl}\end{Vmatrix} \) is a symmetric positive semidefinite \( n \times n \) matrix; we use the abbreviation \( \xi \sim \mathcal{N}\left( {m,\mathbb{R}}\right) \) . This definition immediately makes us ask whether (5) is in fact a characteristic function. Let us show that it is. First suppose that \( \mathbb{R} \) is nonsingular. Then we can define the inverse \( A = {\mathbb{R}}^{-1} \) and the function \[ f\left( x\right) = \frac{{\left| A\right| }^{1/2}}{{\left( 2\pi \right) }^{n/2}}\exp \left\{ {-\frac{1}{2}\left( {A\left( {x - m}\right) ,\left( {x - m}\right) }\right) }\right\} , \] (6) where \( x = {\left( {x}_{1},\ldots ,{x}_{n}\right) }^{ * } \) and \( \left| A\right| = \det A \) . This function is nonnegative. Let us show that \[ {\int }_{{R}^{n}}{e}^{i\left( {t, x}\right) }f\left( x\right) {dx} = {e}^{i\left( {t, m}\right) - \left( {1/2}\right) \left( {\mathbb{R}t, t}\right) }, \] or equivalently that \[ {I}_{n} \equiv {\int }_{{R}^{n}}{e}^{i\left( {t, x - m}\right) }\frac{{\left| A\right| }^{1/2}}{{\left( 2\pi \right) }^{n/2}}{e}^{-\left( {1/2}\right) \left( {A\left( {x - m}\right) ,\left( {x - m}\right) }\right) }{dx} = {e}^{-\left( {1/2}\right) \left( {\mathbb{R}t, t}\right) }. 
\] (7) Let us make the change of variable \[ x - m = \mathcal{O}u,\;t = \mathcal{O}v \] where \( \mathcal{O} \) is an orthogonal matrix such that \[ {\mathcal{O}}^{ * }\mathbb{R}\mathcal{O} = D \] and \[ D = \left( \begin{array}{ll} {d}_{1} & 0 \\ & \ddots \\ 0 & {d}_{n} \end{array}\right) \] is a diagonal matrix with \( {d}_{i} \geq 0 \) (see the proof of the lemma in Sect. 8). Since \( \left| \mathbb{R}\right| = \det \mathbb{R} \neq 0 \), we have \( {d}_{i} > 0, i = 1,\ldots, n \) . Therefore \[ \left| A\right| = \left| {\mathbb{R}}^{-1}\right| = {d}_{1}^{-1}\cdots {d}_{n}^{-1}. \] (8) Moreover (for notation, see Subsection 1, Sect. 12) \[ i\left( {t, x - m}\right) - \frac{1}{2}\left( {A\left( {x - m}\right), x - m}\right) = i\left( {\mathcal{O}v,\mathcal{O}u}\right) - \frac{1}{2}\left( {A\mathcal{O}u,\mathcal{O}u}\right) \] \[ = i{\left( \mathcal{O}v\right) }^{ * }\mathcal{O}u - \frac{1}{2}{\left( \mathcal{O}u\right) }^{ * }A\left( {\mathcal{O}u}\right) \] \[ = i{v}^{ * }u - \frac{1}{2}{u}^{ * }{\mathcal{O}}^{ * }A\mathcal{O}u \] \[ = i{v}^{ * }u - \frac{1}{2}{u}^{ * }{D}^{-1}u. \] Together with (9) of Sect. 12 and (8), this yields \[ {I}_{n} = {\left( 2\pi \right) }^{-n/2}{\left( {d}_{1}\cdots {d}_{n}\right) }^{-1/2}{\int }_{{R}^{n}}\exp \left( {i{v}^{ * }u - \frac{1}{2}{u}^{ * }{D}^{-1}u}\right) {du} \] \[ = \mathop{\prod }\limits_{{k = 1}}^{n}{\left( 2\pi {d}_{k}\right) }^{-1/2}{\int }_{-\infty }^{\infty }\exp \left( {i{v}_{k}{u}_{k} - \frac{{u}_{k}^{2}}{2{d}_{k}}}\right) d{u}_{k} = \mathop{\prod }\limits_{{k = 1}}^{n}\exp \left( {-\frac{1}{2}{v}_{k}^{2}{d}_{k}}\right) \] \[ = \exp \left( {-\frac{1}{2}{v}^{ * }{Dv}}\right) = \exp \left( {-\frac{1}{2}{v}^{ * }{\mathcal{O}}^{ * }\mathbb{R}\mathcal{O}v}\right) = \exp \left( {-\frac{1}{2}{t}^{ * }\mathbb{R}t}\right) = \exp \left( {-\frac{1}{2}\left( {\mathbb{R}t, t}\right) }\right) . 
\] It also follows from (6) that \[ {\int }_{{R}^{n}}f\left( x\right) {dx} = 1 \] (9) Therefore (5) is the characteristic function of a nondegenerate \( n \) -dimensional Gaussian distribution (see Subsection 3, Sect. 3). Now let \( \mathbb{R} \) be singular. Take \( \varepsilon > 0 \) and consider the positive definite symmetric matrix \( {\mathbb{R}}^{\varepsilon } \equiv \mathbb{R} + {\varepsilon E} \), where \( E \) is the identity matrix. Then by what has been proved, \[ {\varphi }^{\varepsilon }\left( t\right) = \exp \left\{ {i\left( {t, m}\right) - \frac{1}{2}\left( {{\mathbb{R}}^{\varepsilon }t, t}\right) }\right\} \] is a characteristic function: \[ {\varphi }^{\varepsilon }\left( t\right) = {\int }_{{R}^{n}}{e}^{i\left( {t, x}\right) }d{F}_{\varepsilon }\left( x\right) \] where \( {F}_{\varepsilon }\left( x\right) = {F}_{\varepsilon }\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is an \( n \) -dimensional distribution function. As \( \varepsilon \rightarrow 0 \) , \[ {\varphi }^{\varepsilon }\left( t\right) \rightarrow \varphi \left( t\right) = \exp \{ i\left( {t, m}\right) - \frac{1}{2}\left( {\mathbb{R}t, t}\right) \} . \] The limit function \( \varphi \left( t\right) \) is continuous at \( \left( {0,\ldots ,0}\right) \) . Hence, by Theorem 1 and Problem 1 of Sect. 3, Chap. 3, it is a characteristic function. Thus we have shown that Definition 1 is correct. 3. Let us now discuss the meaning of the vector \( m \) and the matrix \( \mathbb{R} = \begin{Vmatrix}{r}_{kl}\end{Vmatrix} \) that appear in (5). Since \[ \log {\varphi }_{\xi }\left( t\right) = i\left( {t, m}\right) - \frac{1}{2}\left( {\mathbb{R}t, t}\right) = i\mathop{\sum }\limits_{{k = 1}}^{n}{t}_{k}{m}_{k} - \frac{1}{2}\mathop{\sum }\limits_{{k, l = 1}}^{n}{r}_{kl}{t}_{k}{t}_{l}, \] (10) we find from (35) of Sect. 
12 and the formulas that connect the moments and the semi-invariants that \[ {m}_{1} = {s}_{\xi }^{\left( 1,0,\ldots ,0\right) } = \mathrm{E}{\xi }_{1},\ldots ,{m}_{n} = {s}_{\xi }^{\left( 0,\ldots ,0,1\right) } = \mathrm{E}{\xi }_{n}. \] Similarly \[ {r}_{11} = {s}_{\xi }^{\left( 2,0,\ldots ,0\right) } = \operatorname{Var}{\xi }_{1},\;{r}_{12} = {s}_{\xi }^{\left( 1,1,0,\ldots \right) } = \operatorname{Cov}\left( {{\xi }_{1},{\xi }_{2}}\right) , \] and generally \[ {r}_{kl} = \operatorname{Cov}\left( {{\xi }_{k},{\xi }_{l}}\right) \] Consequently \( m \) is the mean-value vector of \( \xi \) and \( \mathbb{R} \) is its covariance matrix. If \( \mathbb{R} \) is nonsingular, we can obtain this result in a different way. In fact, in this case \( \xi \) has a density \( f\left( x\right) \) given by (6). A direct calculation then shows that \[ \mathrm{E}{\xi }_{k} \equiv \int {x}_{k}f\left( x\right) {dx} = {m}_{k} \] (11) \[ \operatorname{Cov}\left( {{\xi }_{k},{\xi }_{l}}\right) = \int \left( {{x}_{k} - {m}_{k}}\right) \left( {{x}_{l} - {m}_{l}}\right) f\left( x\right) {dx} = {r}_{kl}. \] 4. Let us discuss some properties of Gaussian vectors. ## Theorem 1 (a) The components of a Gaussian vector are uncorrelated if and only if they are independent. (b) A vector \( \xi = {\left( {\xi }_{1},\ldots ,{\xi }_{n}\right) }^{ * } \) is Gaussian if and only if, for every vector \( \lambda = \) \( {\left( {\lambda }_{1},\ldots ,{\lambda }_{n}\right) }^{ * } \in {R}^{n} \) the random variable \( \left( {\xi ,\lambda }\right) = {\lambda }_{1}{\xi }_{1} + \cdots + {\lambda }_{n}{\xi }_{n} \) has a Gaussian distribution. Proof. 
(a) If the components of \( \xi = {\left( {\xi }_{1},\ldots ,{\xi }_{n}\right) }^{ * } \) are uncorrelated, it follows from the form of the characteristic function \( {\varphi }_{\xi }\left( t\right) \) that it is a product of characteristic functions: \[ {\varphi }_{\xi }\left( t\right) = \mathop{\prod }\limits_{{k = 1}}^{n}{\varphi }_{{\xi }_{k}}\left( {t}_{k}\right) \] Therefore, by Theorem 4 of Sect. 12, the components are independent. The converse is evident, since independence always implies lack of correlation. (b) If \( \xi \) is a Gaussian vector, it follows from (5) that \[ \mathrm{E}\exp \left\{ {{it}\left( {{\xi }_{1}{\lambda }_{1} + \cdots + {\xi }_{n}{\lambda }_{n}}\right) }\right\} = \exp \left\{ {{it}\left( {\sum {\lambda }_{k}{m}_{k}}\right) - \frac{{t}^{2}}{2}\left( {\sum {r}_{kl}{\lambda }_{k}{\lambda }_{l}}\right) }\right\}, t \in R, \] and consequently \[ \left( {\xi ,\lambda }\right) \sim \mathcal{N}\left( {\sum {\lambda }_{k}{m}_{k},\sum {r}_{kl}{\lambda }_{k}{\lambda }_{l}}\right) \] Conversely, to say that the random variable \( \left( {\xi ,\lambda }\right) = {\xi }_{1}{\lambda }_{1} + \cdots + {\xi }_{n}{\lambda }_{n} \) is Gaussian means, in particular, that \[ \begin{aligned} \mathrm{E}{e}^{i\left( {\xi ,\lambda }\right) } & = \exp \left\{ {i\mathrm{\;E}\left( {\xi ,\lambda }\right) - \frac{1}{2}\operatorname{Var}\left( {\xi ,\lambda }\right) }\right\} \\ & = \exp \left\{ {i\sum {\lambda }_{k}\mathrm{E}{\xi }_{k} - \frac{1}{2}\sum {\lambda }_{k}{\lambda }_{l}\operatorname{Cov}\left( {{\xi }_{k},{\xi }_{l}}\right) }\right\} . \end{aligned} \] Since \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) are arbitrary it follows from Definition 1 that the vector \( \xi = \) \( \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \) is Gaussian. This completes the proof of the theorem. Remark. 
Let \( \left( \begin{array}{l} \theta \\ \xi \end{array}\right) \) be a Gaussian vector with \( \theta = {\left( {\theta }_{1},\ldots ,{\theta }_{k}\right) }^{ * } \) and \( \xi = {\left( {\xi }_{1},\ldots ,{\xi }_{l}\right) }^{ * } \). If \( \theta \) and \( \xi \) are uncorrelated, i.e., \( \operatorname{Cov}\left( {{\theta }_{i},{\xi }_{j}}\right) = 0, i = 1,\ldots, k;j = 1,\ldots, l \), then they are independent. The proof is the same as for conclusion (a) of the theorem.

Let \( \xi = {\left( {\xi }_{1},\ldots ,{\xi }_{n}\right) }^{ * } \) be a Gaussian vector; let us suppose, for simplicity, that its mean-value vector is zero. If \( \operatorname{rank}\mathbb{R} = r < n \), then (as was shown in Sect. 11) there are \( n - r \) linear relations connecting \( {\xi }_{1},\ldots ,{\xi }_{n} \). We may then suppose that, say, \( {\xi }_{1},\ldots ,{\xi }_{r} \) are linearly independent, and the others can be expressed linearly in terms of them. Hence all the basic properties of the vector \( \xi \) are determined by the first \( r \) components \( {\xi }_{1},\ldots ,{\xi }_{r} \).
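The density (6) and the moment identities (9) and (11) can be checked numerically. The sketch below (the particular \( m \) and \( \mathbb{R} \), the grid size, and the plain Riemann sum are our illustrative choices, not from the text) integrates the two-dimensional Gaussian density on a grid and recovers total mass 1, mean \( m \), and covariance \( \mathbb{R} \):

```python
import numpy as np

# Illustrative parameters (chosen here, not from the text)
m = np.array([1.0, -1.0])
R = np.array([[2.0, 1.0], [1.0, 2.0]])
A = np.linalg.inv(R)  # A = R^{-1}, as in the nonsingular case

# Grid covering roughly +/- 6 standard deviations around the mean
h = 0.05
u = np.arange(-9.0, 9.0, h)
X, Y = np.meshgrid(m[0] + u, m[1] + u, indexing="ij")
d = np.stack([X - m[0], Y - m[1]], axis=-1)

# Density (6): f(x) = |A|^{1/2} / (2*pi)^{n/2} * exp(-(1/2)(A(x-m), x-m)), n = 2
quad = np.einsum("...i,ij,...j->...", d, A, d)
f = np.sqrt(np.linalg.det(A)) / (2 * np.pi) * np.exp(-0.5 * quad)

w = f * h * h                                    # Riemann weights
total = w.sum()                                  # should be 1, cf. (9)
mean = np.array([(X * w).sum(), (Y * w).sum()])  # should be m, cf. (11)
cov = np.array([
    [((X - m[0]) ** 2 * w).sum(), ((X - m[0]) * (Y - m[1]) * w).sum()],
    [((X - m[0]) * (Y - m[1]) * w).sum(), ((Y - m[1]) ** 2 * w).sum()],
])                                               # should be R, cf. (11)
```

Since the Gaussian decays rapidly, both the truncation of the plane to the grid and the discretization contribute only negligible error here.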
1074_(GTM232)An Introduction to Number Theory
Definition 9.8
Definition 9.8. The Schwartz space \( \mathcal{S} \) is the set of functions \( \mathrm{f} : \mathbb{R} \rightarrow \mathbb{C} \) that are infinitely differentiable and whose derivatives \( {\mathrm{f}}^{\left( n\right) } \) (including the function itself \( {\mathrm{f}}^{\left( 0\right) } = \mathrm{f} \)) all satisfy
\[ {\left( 1 + \left| x\right| \right) }^{m}{\mathrm{f}}^{\left( n\right) } = \mathrm{O}\left( 1\right) \]
(9.4)
for all \( m \in \mathbb{N} \). The bound in \( \mathrm{O}\left( 1\right) \) may depend upon \( m \) and \( n \).

Example 9.9. The Gaussian function \( \mathrm{f}\left( x\right) = {e}^{-{x}^{2}} \) is in \( \mathcal{S} \).

--- \( {}^{1} \) On May 24th 2000, the Clay Mathematics Institute established seven Millennium Prize Problems, each worth one million dollars, including the Riemann Hypothesis because "they are important classic questions that have resisted solution over the years." ---

Notice that \( \mathcal{S} \) is a complex vector space and that any function \( \mathrm{f} \in \mathcal{S} \) is integrable,
\[ \left| {{\int }_{-\infty }^{\infty }\mathrm{f}\left( x\right) {dx}}\right| \leq {\int }_{-\infty }^{\infty }\left| {\mathrm{f}\left( x\right) }\right| {dx} \leq C{\int }_{-\infty }^{\infty }\frac{1}{{\left( 1 + \left| x\right| \right) }^{2}}{dx} < \infty , \]
just by taking \( n = 0 \) and \( m = 2 \) in Equation (9.4).

Definition 9.10. For any function \( \mathrm{f} \in \mathcal{S} \), the Fourier transform of \( \mathrm{f} \) is the function
\[ \widehat{\mathrm{f}}\left( y\right) = {\int }_{-\infty }^{\infty }\mathrm{f}\left( x\right) {e}^{-{2\pi }\mathrm{i}{xy}}{dx}. \]
The integral exists for the same reason as before,
\[ \left| {\widehat{\mathrm{f}}\left( y\right) }\right| \leq {\int }_{-\infty }^{\infty }\left| {\mathrm{f}\left( x\right) }\right| {dx} < \infty , \]
and in fact \( \widehat{\mathrm{f}} \in \mathcal{S} \) again since we may apply Equation (9.4) with \( m = n \) to get the bound for \( {\widehat{\mathrm{f}}}^{\left( n\right) } \).
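Definition 9.10 can be explored numerically. In the sketch below (truncating the integral to \( \left\lbrack -10, 10\right\rbrack \) and using the trapezoidal rule are our discretization choices, not from the text), the transform of \( \mathrm{f}\left( x\right) = {e}^{-\pi {x}^{2}} \) is approximated and compared with \( {e}^{-\pi {y}^{2}} \), anticipating the fixed-point property established next:

```python
import numpy as np

def fourier_transform(f, y, L=10.0, dx=1e-3):
    # Approximate f_hat(y) = ∫ f(x) e^{-2πixy} dx (Definition 9.10) by the
    # trapezoidal rule on [-L, L]; for rapidly decaying smooth f both the
    # truncation and the discretization errors are tiny.
    x = np.arange(-L, L + dx, dx)
    v = f(x) * np.exp(-2j * np.pi * x * y)
    return (0.5 * (v[1:] + v[:-1])).sum() * dx

gaussian = lambda x: np.exp(-np.pi * x ** 2)

ys = np.array([0.0, 0.5, 1.3])
approx = np.array([fourier_transform(gaussian, y) for y in ys])
exact = np.exp(-np.pi * ys ** 2)
```

The imaginary parts vanish (the integrand's odd part cancels), and the real parts agree with \( {e}^{-\pi {y}^{2}} \) to high precision.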
Thus \( f \rightarrow \widehat{f} \) is a linear map from \( \mathcal{S} \) to \( \mathcal{S} \) . It turns out that this map has a fixed point - a function equal to its Fourier transform. Recall that \( {\int }_{-\infty }^{\infty }{e}^{-{x}^{2}}{dx} = \sqrt{\pi } \) . Lemma 9.11. If \( \mathrm{f}\left( y\right) = {e}^{-\pi {y}^{2}} \), then \( \widehat{\mathrm{f}}\left( y\right) = \mathrm{f}\left( y\right) \) . Proof. \[ \widehat{\mathrm{f}}\left( y\right) = {\int }_{-\infty }^{\infty }{e}^{-\pi {x}^{2}}{e}^{-{2\pi }\mathrm{i}{xy}}{dx}. \] The idea is to complete the square, \[ - \pi \left( {{x}^{2} + 2\mathrm{i}{xy}}\right) = - \pi \left\lbrack {{\left( x + \mathrm{i}y\right) }^{2} + {y}^{2}}\right\rbrack \] so the Fourier transform of \( f \) is \[ \widehat{\mathrm{f}}\left( y\right) = {e}^{-\pi {y}^{2}}{\int }_{-\infty }^{\infty }{e}^{-\pi {\left( x + \mathrm{i}y\right) }^{2}}{dx}. \] Let \[ I\left( y\right) = {\int }_{-\infty }^{\infty }{e}^{-\pi {\left( x + \mathrm{i}y\right) }^{2}}{dx}. \] We know that \( I\left( 0\right) = 1 \) . What happens if \( y \neq 0 \) ? Fix some large \( N \) and consider the following paths: \[ {\gamma }_{1} = \left\lbrack {-N, N}\right\rbrack ,\;{\gamma }_{2} = \left\lbrack {N, N + y\mathrm{i}}\right\rbrack , \] \[ {\gamma }_{3} = \left\lbrack {N + y\mathrm{i}, - N + y\mathrm{i}}\right\rbrack ,{\gamma }_{4} = \left\lbrack {-N + y\mathrm{i}, - N}\right\rbrack . \] Put \( \gamma = {\gamma }_{1} + {\gamma }_{2} + {\gamma }_{3} + {\gamma }_{4} \) (a rectangle). 
Since \( {e}^{-\pi {z}^{2}} \) is an analytic function on the whole of the complex plane, we have, for any \( N \geq 0 \),
\[ {\int }_{\gamma }{e}^{-\pi {z}^{2}}{dz} = 0. \]
Now, as \( N \rightarrow \infty \), the integral of \( {e}^{-\pi {z}^{2}} \) over \( {\gamma }_{1} \) tends to \( I\left( 0\right) = 1 \), the integral over \( {\gamma }_{3} \) tends to \( - I\left( y\right) \), and the integrals over \( {\gamma }_{2} \) and \( {\gamma }_{4} \) both tend to 0. This completes the proof of Lemma 9.11.

Exercise 9.3. Prove that \( {\int }_{N}^{N + y\mathrm{i}}{e}^{-{z}^{2}}{dz} \rightarrow 0 \) as \( N \rightarrow \infty \) for any \( y \in \mathbb{R} \).

## 9.4 Fourier Analysis of Periodic Functions

Fourier analysis is more familiar in the setting of periodic functions.

Definition 9.12. A function \( \mathrm{g} : \mathbb{R} \rightarrow \mathbb{C} \) is periodic with period 1 if
\[ \mathrm{g}\left( x\right) = \mathrm{g}\left( {x + 1}\right) \text{ for all }x \in \mathbb{R}\text{.} \]
If \( \mathrm{g} \) is periodic and piecewise continuous, then its \( k \) th Fourier coefficient is defined for \( k \in \mathbb{Z} \) by
\[ {c}_{k} = {\int }_{0}^{1}\mathrm{\;g}\left( x\right) {e}^{-{2\pi }\mathrm{i}{kx}}{dx}, \]
and its Fourier series is the function
\[ \mathrm{G}\left( x\right) = \mathop{\sum }\limits_{{k \in \mathbb{Z}}}{c}_{k}{e}^{{2\pi }\mathrm{i}{kx}}. \]

Lemma 9.13. If \( \mathrm{g} \) is periodic and twice differentiable with continuous second derivative, then there exists a constant \( C > 0 \), depending only upon \( \mathrm{g} \), such that
\[ \left| {c}_{k}\right| \leq \frac{C}{{k}^{2}} \]
for all \( k \neq 0 \).

Proof. Integrate by parts:
\[ {c}_{k} = {\left\lbrack \frac{-{e}^{-{2\pi }\mathrm{i}{kx}}\mathrm{\;g}\left( x\right) }{{2\pi }\mathrm{i}k}\right\rbrack }_{0}^{1} + {\int }_{0}^{1}\frac{{e}^{-{2\pi }\mathrm{i}{kx}}{\mathrm{\;g}}^{\prime }\left( x\right) }{{2\pi }\mathrm{i}k}{dx}.
\] Now the bracketed term vanishes because \( \mathrm{g} \) is periodic. Integrate by parts again, so that \( {k}^{2} \) appears in the denominator, and then bound the exponential by 1 . Finally, put \( C = {\int }_{0}^{1}\left| {\mathrm{\;g}}^{\prime \prime }\right| {dx}/\left( {4{\pi }^{2}}\right) \) . Theorem 9.14. Any function \( \mathrm{g} \) that is periodic and differentiable infinitely often has a Fourier series expansion \[ \mathrm{g}\left( x\right) = \mathop{\sum }\limits_{{k \in \mathbb{Z}}}{c}_{k}{e}^{{2\pi }\mathrm{i}{kx}} \] that is uniformly convergent on \( \mathbb{R} \) . Proof. Let \( \mathrm{G} \) be the Fourier series of \( \mathrm{g} \), and apply Lemma 9.13: \[ \left| {\mathrm{G}\left( x\right) - \mathop{\sum }\limits_{{k = - n}}^{n}{c}_{k}{e}^{{2\pi }\mathrm{i}{kx}}}\right| \leq C\mathop{\sum }\limits_{{\left| k\right| > n}}\frac{1}{{k}^{2}} \] where the last sum tends to zero independent of \( x \) since the constant \( C \) depends only on \( \mathrm{g} \) . This proves the convergence is uniform. The equality \( \mathrm{g}\left( x\right) = \mathrm{G}\left( x\right) \) is not so easy to prove. We first record a few lemmas that are of interest in their own right. Lemma 9.15. Consider the sequence of functions \( \left( {D}_{K}\right) \) defined by \[ {D}_{K}\left( x\right) = \mathop{\sum }\limits_{{k = - K}}^{K}{e}^{{2\pi }\mathrm{i}{kx}},\;\text{ for }K \in \mathbb{N}, \] called the Dirichlet kernel. Then \[ {\int }_{0}^{1}{D}_{K}\left( x\right) {dx} = 1 \] (9.5) \[ {D}_{K}\left( x\right) = \frac{\sin \left( {\left( {{2K} + 1}\right) {\pi x}}\right) }{\sin \left( {\pi x}\right) } \] (9.6) and \[ {\int }_{0}^{1}\mathrm{\;g}\left( {y + x}\right) {D}_{K}\left( x\right) {dx} = \mathop{\sum }\limits_{{k = - K}}^{K}{c}_{k}{e}^{{2\pi }\mathrm{i}{ky}}, \] (9.7) where \( {c}_{k} \) are the Fourier coefficients of \( \mathrm{g} \) as in Theorem 9.14. 
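The identities (9.5) and (9.6) are easy to confirm numerically. In this sketch (the value \( K = 11 \), the sample points, and the grid are our choices), the direct sum defining \( {D}_{K} \) is compared against the closed form, and the normalization is checked by the trapezoidal rule over one period:

```python
import numpy as np

def dirichlet_direct(K, x):
    # D_K(x) = sum_{k=-K}^{K} e^{2πikx}, straight from the definition
    k = np.arange(-K, K + 1)
    return np.exp(2j * np.pi * k * x).sum()

def dirichlet_closed(K, x):
    # Closed form (9.6); valid wherever sin(πx) != 0
    return np.sin((2 * K + 1) * np.pi * x) / np.sin(np.pi * x)

K = 11
samples = [0.1, 0.25, 0.37]
direct = np.array([dirichlet_direct(K, x) for x in samples])
closed = np.array([dirichlet_closed(K, x) for x in samples])

# Normalization (9.5): the trapezoidal rule integrates a trigonometric
# polynomial of degree K exactly once the grid is fine enough.
grid = np.linspace(0.0, 1.0, 65)
vals = np.array([dirichlet_direct(K, x) for x in grid])
integral = (0.5 * (vals[1:] + vals[:-1])).sum() / 64
```

At \( x = 0 \) the sum is \( {2K} + 1 \) ones, consistent with the peak visible in Figure 9.2.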
The functions \( {D}_{K} \) are useful because they concentrate at the origin and pick out the Fourier coefficients conveniently. The shape of \( {D}_{K} \) is illustrated in Figure 9.2, which shows the graph of \( {D}_{11} \).

[Figure 9.2. The Dirichlet kernel \( {D}_{11}\left( x\right) \) for \( - \frac{1}{2} \leq x \leq \frac{1}{2} \).]

Proof of Lemma 9.15. Equation (9.5) follows from the fact that
\[ {\int }_{0}^{1}{e}^{{2\pi }\mathrm{i}{kx}}{dx} = 0\text{ for all }k \neq 0. \]
Equation (9.6) is proved by induction on \( K \) or directly by summation of a geometric progression. Equation (9.7) follows since
\[ {\int }_{0}^{1}\mathrm{\;g}\left( {y + x}\right) {D}_{K}\left( x\right) {dx} = {\int }_{y - 1}^{y}\mathrm{\;g}\left( z\right) {D}_{K}\left( {z - y}\right) {dz} = {\int }_{-1/2}^{1/2}\mathrm{\;g}\left( z\right) {D}_{K}\left( {y - z}\right) {dz}. \]
In the last step, we have used the fact that \( \mathrm{g} \) and \( {D}_{K} \) are periodic functions and that \( {D}_{K} \) is an even function. At this point, we put in the definition of the \( {D}_{K} \), interchange the integral and the sum, and extract a factor \( {e}^{{2\pi }\mathrm{i}{ky}} \) from each summand, which gives the right-hand side of Equation (9.7).

Lemma 9.16. [Riemann-Lebesgue Lemma] Let \( \mathrm{g} \) be a continuous periodic function, and let \( {c}_{k} \) be the \( k \) th Fourier coefficient of \( \mathrm{g} \). Then
\[ \mathop{\lim }\limits_{{\left| k\right| \rightarrow \infty }}{c}_{k} = 0. \]

Proof of Lemma 9.16. Define for continuous complex-valued periodic functions \( \mathrm{u},\mathrm{v} \) the inner product
\[ \left( {\mathrm{u},\mathrm{v}}\right) = {\int }_{0}^{1}\mathrm{u}\left( x\right) \overline{\mathrm{v}\left( x\right) }{dx} \]
and the norm
\[ \parallel \mathrm{u}\parallel = \sqrt{\left( \mathrm{u},\mathrm{u}\right) }.
\]
Let \( {\mathrm{u}}_{k}\left( x\right) = {e}^{{2\pi }\mathrm{i}{kx}} \) so that \( {c}_{k} = \left( {\mathrm{g},{\mathrm{u}}_{k}}\right) \). Using the linearity of the inner product and the orthogonality relations
\[ \left( {{\mathrm{u}}_{k},{\mathrm{u}}_{\ell }}\right) = \left\{ \begin{array}{l} 0\text{ if }k \neq \ell , \\ 1\text{ if }k = \ell , \end{array}\right. \]
we get
\[ {\begin{Vmatrix}\mathrm{g} - \mathop{\sum }\limits_{{k = - K}}^{K}\left( \mathrm{g},{\mathrm{u}}_{k}\right) {\mathrm{u}}_{k}\end{Vmatrix}}^{2} = \parallel \mathrm{g}{\parallel }^{2} - \mathop{\sum }\limits_{{k = - K}}^{K}{\left| {c}_{k}\right| }^{2} \geq 0. \]
Hence \( \mathop{\sum }\limits_{{k = - K}}^{K}{\left| {c}_{k}\right| }^{2} \leq \parallel \mathrm{g}{\parallel }^{2} \) for every \( K \), so the series \( \sum {\left| {c}_{k}\right| }^{2} \) converges, and in particular \( {c}_{k} \rightarrow 0 \) as \( \left| k\right| \rightarrow \infty \).
1112_(GTM267)Quantum Theory for Mathematicians
Definition 16.50
Definition 16.50 A finite-dimensional representation of a group or Lie algebra is said to be completely reducible if it is isomorphic to a direct sum of irreducible representations.

Proposition 16.51 Every finite-dimensional unitary representation of a group or Lie algebra is completely reducible.

Proof. Suppose \( \left( {\Pi, V}\right) \) is a unitary representation of a matrix Lie group \( G \). If \( W \) is a subspace of \( V \) invariant under each \( \Pi \left( A\right) \), then \( {W}^{ \bot } \) is invariant under each \( \Pi {\left( A\right) }^{ * } \), as the reader may easily verify. But since \( \Pi \) is unitary,
\[ \Pi {\left( A\right) }^{ * } = \Pi {\left( A\right) }^{-1} = \Pi \left( {A}^{-1}\right) . \]
Thus, \( {W}^{ \bot } \) is invariant under \( \Pi \left( {A}^{-1}\right) \) for all \( A \in G \), hence under \( \Pi \left( A\right) \) for all \( A \in G \). We conclude that, in the unitary case, the orthogonal complement of an invariant subspace is always invariant.

If \( V \) is irreducible, there is nothing to prove. If not, we pick a nontrivial invariant subspace \( W \) and decompose \( V \) as \( W \oplus {W}^{ \bot } \). The restriction of \( \Pi \) to \( W \) or to \( {W}^{ \bot } \) is again a unitary representation, so we can repeat this procedure for each of these subspaces. Since \( V \) is finite dimensional, the process must eventually terminate, yielding an orthogonal decomposition of \( V \) as a direct sum of irreducible invariant subspaces.

If we consider a unitary representation \( \pi \) of a Lie algebra \( \mathfrak{g} \), we have the same argument, but with the identity \( \Pi {\left( A\right) }^{ * } = \Pi \left( {A}^{-1}\right) \) replaced by \( \pi {\left( X\right) }^{ * } = - \pi \left( X\right) \).

Proposition 16.52 Suppose \( K \) is a compact matrix Lie group.
For any finite-dimensional representation \( \left( {\Pi, V}\right) \) of \( K \), there exists an inner product on \( V \) such that \( \Pi \left( A\right) \) is unitary for all \( A \in K \). In particular, every finite-dimensional representation of \( K \) is completely reducible.

See Proposition 4.36 in [21].

## 16.9 Infinite-Dimensional Unitary Representations

For the applications we have in mind, we need to consider representations that are infinite dimensional. The theory of such representations is inevitably more complicated than that of finite-dimensional representations. For our purposes, it suffices to consider the nicest sort of infinite-dimensional representations - unitary representations in a Hilbert space.

## 16.9.1 Ordinary Unitary Representations

We begin by considering ordinary representations and then turn to projective representations.

Definition 16.53 Suppose \( G \) is a matrix Lie group. Then a unitary representation of \( G \) is a strongly continuous homomorphism \( \Pi : G \rightarrow \mathrm{U}\left( \mathbf{H}\right) \), where \( \mathbf{H} \) is a separable Hilbert space and \( \mathrm{U}\left( \mathbf{H}\right) \) is the group of unitary operators on \( \mathbf{H} \). Here, strong continuity of \( \Pi \) means that if a sequence \( {A}_{m} \) in \( G \) converges to \( A \in G \), then
\[ \mathop{\lim }\limits_{{m \rightarrow \infty }}\begin{Vmatrix}{\Pi \left( {A}_{m}\right) \psi - \Pi \left( A\right) \psi }\end{Vmatrix} = 0 \]
for all \( \psi \in \mathbf{H} \).

We can attempt to associate to a unitary representation \( \Pi \) of \( G \) some sort of representation \( \pi \) of the Lie algebra \( \mathfrak{g} \) of \( G \), by imitating the construction in Theorem 16.23. For any \( X \in \mathfrak{g} \), the map \( t \mapsto \Pi \left( {e}^{tX}\right) \) is a strongly continuous one-parameter unitary group.
Thus, Stone's theorem (Theorem 10.15) tells us that there exists a unique self-adjoint operator \( A \) such that \( \Pi \left( {e}^{tX}\right) = {e}^{itA} \) for all \( t \in \mathbb{R} \) . If we let \( \pi \left( X\right) \) denote the skew-selfadjoint operator \( {iA} \), we will have \[ \Pi \left( {e}^{tX}\right) = {e}^{{t\pi }\left( X\right) } \] (16.5) The operators \( \pi \left( X\right), X \in \mathfrak{g} \), are in general unbounded and defined only on a dense subspace of \( \mathbf{H} \) . Nevertheless, it can be shown (see, e.g.,[43]) that there exists a dense subspace \( V \) of \( \mathbf{H} \) contained in the domain of each \( \pi \left( X\right) \) and that is invariant under each \( \pi \left( X\right) \), and on which we have \( \pi \left( \left\lbrack {X, Y}\right\rbrack \right) = \left\lbrack {\pi \left( X\right) ,\pi \left( Y\right) }\right\rbrack \) . In the case of the particular representation that we will consider in the next chapter, we can avoid these difficulties by looking at finite-dimensional invariant subspaces. Proposition 16.54 Suppose \( G \) is a matrix Lie group and \( \Pi : G \rightarrow \mathrm{U}\left( \mathbf{H}\right) \) is a unitary representation of \( G \) . For each \( X \in \mathfrak{g} \), let \( \pi \left( X\right) \) denote the operator in (16.5). Suppose \( V \subset \mathbf{H} \) is a finite-dimensional subspace of \( \mathbf{H} \) such that \( \Pi \left( A\right) \) maps \( V \) into \( V \), for all \( A \in G \) . Then for all \( X \in \mathfrak{g}, V \subset \operatorname{Dom}\left( {\pi \left( X\right) }\right) \) , \( \pi \left( X\right) \) maps \( V \) into \( V \), and we have \[ \pi \left( \left\lbrack {X, Y}\right\rbrack \right) v = \left\lbrack {\pi \left( X\right) ,\pi \left( Y\right) }\right\rbrack v \] (16.6) for all \( v \in V \) . 
In the other direction, suppose \( G \) is connected and suppose \( V \) is any finite-dimensional subspace of \( \mathbf{H} \) such that for all \( X \in \mathfrak{g}, V \subset \operatorname{Dom}\left( {\pi \left( X\right) }\right) \) and \( \pi \left( X\right) \) maps \( V \) into \( V \) . Then \( \Pi \left( A\right) \) also maps \( V \) into \( V \), for all \( A \in G \) . Proof. Since \( V \) is invariant under both \( \Pi \left( A\right) \) and \( \Pi {\left( A\right) }^{ * } = \Pi \left( {A}^{-1}\right) \), the restriction to \( V \) of each \( \Pi \left( A\right) \) is unitary. The operators \( {\left. \Pi \left( A\right) \right| }_{V} \) form a finite-dimensional unitary representation of \( G \) that is strongly continuous and thus continuous. (In the finite-dimensional case, all reasonable notions of continuity for representations coincide.) For each \( X \in \mathfrak{g} \), Theorem 16.18 tells us that there is an operator \( \widetilde{X} \) on \( V \) such that \[ {\left. \Pi \left( {e}^{tX}\right) \right| }_{V} = {e}^{t\widetilde{X}} \] Thus, for any \( v \in V \), we have \[ \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{\Pi \left( {e}^{tX}\right) v - v}{t} = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{{e}^{t\widetilde{X}}v - v}{t} = \widetilde{X}v. \] This calculation shows that \( v \) is in the domain of the infinitesimal generator \( \pi \left( X\right) \) of the unitary group \( \Pi \left( {e}^{tX}\right) \), and that \( \pi \left( X\right) v = \widetilde{X}v \) . Since the operators \( \widetilde{X}, X \in \mathfrak{g} \), form a representation of \( \mathfrak{g} \), we have the relation (16.6). In the other direction, if \( V \) is invariant under \( \pi \left( X\right) \), the restriction of \( \pi \left( X\right) \) to \( V \) is automatically bounded. Thus, there is a constant \( C \) such that \[ \begin{Vmatrix}{\pi {\left( X\right) }^{m}v}\end{Vmatrix} \leq {C}^{m}\parallel v\parallel \] (16.7) for all \( v \in V \) . 
If we use the direct-integral form of the spectral theorem for the self-adjoint operator \( A \mathrel{\text{:=}} - {i\pi }\left( X\right) \), it is easy to see that (16.7) can only hold if \( v \), viewed as an element of the direct integral, is supported on a bounded interval inside the spectrum of \( A \) . Since the power series of the function \( \lambda \mapsto {e}^{t\lambda } \) converges to \( {e}^{t\lambda } \) uniformly on any finite interval, we will have \[ \Pi \left( {e}^{tX}\right) v = {e}^{itA}v = \mathop{\sum }\limits_{{m = 0}}^{\infty }\frac{{t}^{m}\pi {\left( X\right) }^{m}}{m!}v. \] Each term in the above power series belongs to \( V \), which is finite dimensional and thus closed. We conclude that \( \Pi \left( {e}^{tX}\right) v \) belongs to \( V \) for all \( X \in \mathfrak{g} \) . Since \( G \) is connected, each element of \( G \) is a product of exponentials of Lie algebra elements, and we have the claim. ## 16.9.2 Projective Unitary Representations Given a Hilbert space \( \mathbf{H} \), let \( {S}^{\mathbf{H}} \) denote the unit sphere in \( \mathbf{H} \), that is, the set of vectors with norm 1 . Let \( P\mathbf{H} \) be the quotient space \( \left( {S}^{\mathbf{H}}\right) / \sim \), where " \( \sim \) " denotes the equivalence relation in which \( u \sim v \) if and only if \( u = {e}^{i\theta }v \) for some \( \theta \in \mathbb{R} \) . The quotient map \( q : {S}^{\mathbf{H}} \rightarrow P\mathbf{H} \) induces a topology on \( P\mathbf{H} \) in which a set \( U \subset P\mathbf{H} \) is open if and only if \( {q}^{-1}\left( U\right) \) is open as a subset of the metric space \( {S}^{\mathbf{H}} \subset \mathbf{H} \) . 
As in the finite-dimensional case, we can form the quotient group \[ \mathrm{{PU}}\left( \mathbf{H}\right) \mathrel{\text{:=}} \mathrm{U}\left( \mathbf{H}\right) /\left\{ {{e}^{i\theta }I}\right\} . \] The action of \( \mathrm{U}\left( \mathbf{H}\right) \) on \( {S}^{\mathbf{H}} \) descends to a well-defined action of \( \mathrm{{PU}}\left( \mathbf{H}\right) \) on \( P\mathbf{H} \) . Definition 16.55 A projective unitary representation of a matrix Lie group \( G \) is a homomorphism \( \Pi : G \rightarrow \mathrm{{PU}}\left( \mathbf{H}\right) \), for some Hilbert space \( \mathbf{H} \), with the property that if a sequence \( {A}_{m} \) in \( G \) converges to \( A \) in \( G \), then \[ \Pi \left( {A}_{m}\right) x \rightarrow \Pi \left( A\right) x \] for every \( x \in P\mathbf{H} \) .
1329_[肖梁] Abstract Algebra (2022F)
Definition 11.2.1
Definition 11.2.1. A subset \( I \subseteq R \) is called a left ideal if (1) for any \( a, b \in I, a - b \in I \) (so that \( I \) is a subgroup of \( \left( {R, + }\right) \) ); (2) for any \( a \in I \) and \( x \in R \), we have \( {xa} \in I \) . We say that \( I \) is a right ideal if it satisfies the above conditions with (2) replaced by \( {ax} \in I \) . We say that \( I \) is an ideal (or a two-sided ideal) if it is a left ideal and a right ideal at the same time. We say that \( I \) is a proper ideal if \( I \neq R \) . Remark 11.2.2. (1) For commutative rings, there is no difference between left, right, and two-sided ideals. (2) An ideal of a ring is (usually) not a ring, because \( 1 \notin I \) . \( \left( {1 \in I\text{ implies that }I = R\text{.}}\right) \) Definition 11.2.3. Let \( R \) be a ring and \( I \) a two-sided ideal such that \( I \neq R \) . We define the quotient ring \( R/I \mathrel{\text{:=}} \{ x + I \mid x \in R\} \) (quotient as an additive group) with operations: \[ \left( {x + I}\right) + \left( {y + I}\right) = \left( {x + y}\right) + I\text{ and }\left( {x + I}\right) \cdot \left( {y + I}\right) = \left( {xy}\right) + I. \] We check that the multiplication is well-defined: if \( {x}^{\prime } = x + a \) and \( {y}^{\prime } = y + b \) with \( a, b \in I \), then \[ {x}^{\prime }{y}^{\prime } + I = \left( {x + a}\right) \left( {y + b}\right) + I = {xy} + \underset{\text{each term } \in I}{\underbrace{{xb} + {ay} + {ab}}} + I = {xy} + I. \] There is a natural surjective quotient homomorphism \[ \pi : R \rightarrow R/I \] \[ x \mapsto x + I = : \bar{x} \] with \( \ker \pi = I \) . The following is the analogue of the isomorphism theorems for rings. Theorem 11.2.4 (Isomorphism Theorems). (1) If \( \phi : R \rightarrow S \) is a ring homomorphism, then \( \ker \phi \) is a two-sided ideal and \( \phi \left( R\right) \) is a subring of \( S \) .
Moreover, \( \phi \) induces an isomorphism \[ R/\ker \phi \xrightarrow[]{ \cong }\phi \left( R\right) \] \[ x + \ker \phi \mapsto \phi \left( x\right) . \] (2) Let \( I \subseteq J \) be proper ideals of \( R \) . Then \( J/I \subseteq R/I \) is a proper ideal and \[ \left( {R/I}\right) /\left( {J/I}\right) \cong R/J. \] (3) Let \( I \) be a proper ideal of \( R \) . Then there is a 1-1 correspondence \( \{ \textit{left/right/two-sided ideals J containing I}\} \leftrightarrow \{ \textit{left/right/two-sided ideals }\bar{J}\textit{ of R/I}\} \) \[ J \mapsto J/I \] \[ \bar{J} \mapsto {\pi }^{-1}\left( \bar{J}\right) \] preserving inclusion orders, sums, intersections, and quotients. Remark 11.2.5. There is no good analogue of the second isomorphism theorem in the case of rings. We represent ideals using the following notation. Notation 11.2.6. Let \( R \) be a commutative ring, and let \( {a}_{1},\ldots ,{a}_{s} \in R \) . Define \[ \left( {{a}_{1},\ldots ,{a}_{s}}\right) = \left\{ {\mathop{\sum }\limits_{{i = 1}}^{s}{x}_{i}{a}_{i} \mid {x}_{i} \in R}\right\} \subseteq R. \] (This definition also works for an infinite set \( \left\{ {{a}_{i} \mid i \in J}\right\} \) when we allow only finite sums.) We call this ideal the ideal generated by \( {a}_{1},\ldots ,{a}_{s} \) . It is the minimal ideal that contains all of \( {a}_{1},\ldots ,{a}_{s} \) . Example 11.2.7. (1) In \( R = \mathbb{Z} \), \[ \left( {4,6}\right) = \{ {4x} + {6y} \mid x, y \in \mathbb{Z}\} = 2\mathbb{Z} = \left( 2\right) . \] In general, \( \left( {{a}_{1},\ldots ,{a}_{s}}\right) = \left( {\gcd \left( {{a}_{1},\ldots ,{a}_{s}}\right) }\right) \) . (2) For the homomorphism \( \phi : \mathbb{Z} \rightarrow {\mathbf{Z}}_{n} \) given by reduction modulo \( n \), \( \ker \phi = n\mathbb{Z} = \left( n\right) \) .
So \( {\mathbf{Z}}_{n} \cong \mathbb{Z}/n\mathbb{Z} \) . (3) For a commutative ring \( R \) and \( a \in R \), we have an evaluation homomorphism \[ {\phi }_{a} : R\left\lbrack x\right\rbrack \rightarrow R \] \[ f\left( x\right) \mapsto f\left( a\right) . \] The kernel \( \ker {\phi }_{a} = \{ f\left( x\right) \in R\left\lbrack x\right\rbrack \mid f\left( a\right) = 0\} = \left( {x - a}\right) \) . (This is because one can always write every \( f\left( x\right) \in R\left\lbrack x\right\rbrack \) as \( f\left( x\right) = g\left( x\right) \left( {x - a}\right) + f\left( a\right) \) .) So \( R\left\lbrack x\right\rbrack /\left( {x - a}\right) \cong R \) . (4) Let \( R \) be a ring and \( G \) a group. There is a natural homomorphism from the group ring \( R\left\lbrack G\right\rbrack \) : \[ \phi : R\left\lbrack G\right\rbrack \rightarrow R \] \[ \mathop{\sum }\limits_{{g \in G}}{a}_{g}\left\lbrack g\right\rbrack \mapsto \mathop{\sum }\limits_{{g \in G}}{a}_{g}. \] The kernel \( \ker \phi = \left( {g - 1;g \in G}\right) \) is called the augmentation ideal of \( R\left\lbrack G\right\rbrack \) . (5) For \( R = {R}_{1} \times {R}_{2} \) a direct product of rings, both \( {R}_{1} \times \{ 0\} \) and \( \{ 0\} \times {R}_{2} \) are ideals. They can alternatively be written as \[ {R}_{1} \times \{ 0\} = \left( \left( {1,0}\right) \right) \;\text{ and }\;\{ 0\} \times {R}_{2} = \left( \left( {0,1}\right) \right) . \] Caveat: the map \[ {R}_{1} \rightarrow {R}_{1} \times {R}_{2} \] \[ a \mapsto \left( {a,0}\right) \] does not take \( {1}_{{R}_{1}} \) to \( {1}_{{R}_{1} \times {R}_{2}} \) ; so it is NOT a homomorphism in our convention. We have the following operations on ideals. Definition 11.2.8. Let \( I \) and \( J \) be ideals of a ring \( R \) . (1) Define the sum of ideals to be \[ I + J = \{ a + b \mid a \in I, b \in J\} . \] (2) Define the product of ideals to be \[ {IJ} = \{ \text{ finite sums of elements }{ab}\text{ for }a \in I, b \in J\} . \] Caveat 11.2.9.
In general, it is not true that all elements of \( {IJ} \) can be written as a pure product \( {ab} \) for \( a \in I \) and \( b \in J \) . For example, take \( R = \mathbb{Z}\left\lbrack x\right\rbrack \) and \( I = \left( {2, x}\right) = \{ f\left( x\right) \in \mathbb{Z}\left\lbrack x\right\rbrack \mid f\left( 0\right) \in 2\mathbb{Z}\} \) . Then \( {x}^{2} + 4 \in {I}^{2} \), yet it cannot be written in the form \( {ab} \) with \( a, b \in I \) . Example 11.2.10. If \( R \) is a commutative ring and \( I = \left( {{a}_{1},\ldots ,{a}_{s}}\right) \) and \( J = \left( {{b}_{1},\ldots ,{b}_{t}}\right) \), then \[ I + J = \left( {{a}_{1},\ldots ,{a}_{s},{b}_{1},\ldots ,{b}_{t}}\right) ,\;{IJ} = \left( {{a}_{1}{b}_{1},\ldots ,{a}_{1}{b}_{t},\ldots ,{a}_{i}{b}_{j},\ldots ,{a}_{s}{b}_{t}}\right) . \] Remark 11.2.11. It is important to explain the practical meaning of taking quotient rings: it is to impose relations among generators. We explain this through an example: for \( k \) a field, we show that \[ k\left\lbrack {x, y, z}\right\rbrack /\left( {x - {y}^{2}, y - {z}^{3}}\right) \cong k\left\lbrack z\right\rbrack . \] Indeed, for example, the element \( {x}^{2}y \) can be written as \[ {x}^{2}y = {\left( x - {y}^{2} + {y}^{2}\right) }^{2}y = \left( {x - {y}^{2}}\right) \cdot * + {y}^{4}y = \left( {x - {y}^{2}}\right) \cdot * + {\left( y - {z}^{3} + {z}^{3}\right) }^{5} \] \[ = \left( {x - {y}^{2}}\right) \cdot * + \left( {y - {z}^{3}}\right) \cdot * + {z}^{15}. \] So \( {x}^{2}y \) is equivalent to \( {z}^{15} \) in the quotient. For another example, consider the homomorphism \[ {\phi }_{i} : \mathbb{R}\left\lbrack x\right\rbrack \rightarrow \mathbb{C} \] \[ f\left( x\right) \mapsto f\left( i\right) . \] The kernel \( \ker {\phi }_{i} = \left( {{x}^{2} + 1}\right) \) . So \[ \mathbb{R}\left\lbrack x\right\rbrack /\left( {{x}^{2} + 1}\right) \cong \mathbb{C} \] (namely, we are imposing the relation \( {x}^{2} + 1 = 0 \) in the ring).
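The isomorphism \( \mathbb{R}\left\lbrack x\right\rbrack /\left( {{x}^{2} + 1}\right) \cong \mathbb{C} \) can be made concrete in a few lines: multiplying residues \( a + bx \) while imposing the relation \( {x}^{2} = - 1 \) reproduces complex multiplication. A minimal sketch (the helper name is ours):

```python
def mul_mod_x2_plus_1(p, q):
    """Multiply p = (a, b) ~ a + bx and q = (c, d) ~ c + dx in R[x]/(x^2 + 1):
    expand (a + bx)(c + dx) and replace x^2 by -1."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

p, q = (1.0, 2.0), (3.0, -1.0)        # residues of 1 + 2x and 3 - x
prod = mul_mod_x2_plus_1(p, q)
z = complex(*p) * complex(*q)          # the same computation in C, with x -> i
# prod == (z.real, z.imag): the quotient ring is a copy of C.
```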
We also point out that this is the prototype for field extensions later, describing \( \mathbb{C} \) in terms of \( \mathbb{R} \) .

## Extended readings after Section 11

11.3. Quaternions over \( \mathbb{Q} \) . In the construction of the Hamilton quaternions, we do not really need the coefficients to lie in \( \mathbb{R} \) . In fact, for nonzero numbers \( A, B \in \mathbb{Q} \), we may define a quaternion ring over \( \mathbb{Q} \) : \[ {D}_{A, B} \mathrel{\text{:=}} \{ a + {bi} + {cj} + {dij} \mid a, b, c, d \in \mathbb{Q}\} , \] where the multiplications are \( \mathbb{Q} \) -linear and are governed by \[ {i}^{2} = A,\;{j}^{2} = B,\;{ij} = - {ji}. \] When \( A = B = - 1 \) and we change the coefficients from \( \mathbb{Q} \) to \( \mathbb{R} \), we recover the Hamilton quaternions. It is an interesting fact (which is also important in number theory) that such a \( {D}_{A, B} \) is either isomorphic to the matrix ring \( {\operatorname{Mat}}_{2 \times 2}\left( \mathbb{Q}\right) \) or is a division ring. (For example, if both \( A \) and \( B \) are negative, then \( {D}_{A, B} \) is a division ring for the "same" reason as for \( \mathbb{H} \) . Yet for each prime \( p \), if \( A \) is exactly divisible by \( p \) and \( B \) is an integer whose reduction modulo \( p \) is not a square, then \( {D}_{A, B} \) is a division ring "for reasons at \( p \) ".) 12.1. Chinese remainder theorem. Recall that the classical Chinese remainder theorem can be restated as follows: if \( {n}_{1},\ldots ,{n}_{r} \) are pairwise coprime integers, then \[ \mathbb{Z} \rightarrow \mathbb{Z}/{n}_{1}\mathbb{Z} \times \cdots \times \mathbb{Z}/{n}_{r}\mathbb{Z} \] is surjective and its kernel is \( {n}_{1}\mathbb{Z} \cap \cdots \cap {n}_{r}\mathbb{Z} = {n}_{1}\cdots {n}_{r}\mathbb{Z} \) . Definition 12.1.1. Let \( R \) be a commutative ring. We say two ideals \( I \) and \( J \) of \( R \) are comaximal if \( I + J = R \), i.e.
\( 1 \in R \) can be written as \( 1 = a + b \) with \( a \in I \) and \( b \in J \) . (Note that in the case \( R = \mathbb{Z} \), \( m, n \in \mathbb{Z} \) are coprime if and only if \( \left( m\right) + \left( n\right) = \left( {\gcd \left( {m, n}\right) }\right) = \left( 1\right) \) .) Theorem 12.1.2. Let \( {I}_{1},\ldots ,{I}_{k} \) be ideals of \( R \) that are pairwise comaximal. Then the natural homomorphism \( R \rightarrow R/{I}_{1} \times \cdots \times R/{I}_{k} \) is surjective, and its kernel is \( {I}_{1} \cap \cdots \cap {I}_{k} = {I}_{1}\cdots {I}_{k} \) .
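The classical Chinese remainder theorem recalled above is effective: the extended Euclidean algorithm produces an integer realizing any prescribed tuple of residues modulo pairwise coprime moduli. A sketch for \( R = \mathbb{Z} \) (function names are ours):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def crt(residues, moduli):
    """Solve x = r_i (mod n_i) for pairwise coprime n_i;
    the solution is unique modulo the product n_1 * ... * n_r."""
    x, n = 0, 1
    for r_i, n_i in zip(residues, moduli):
        g, u, v = ext_gcd(n, n_i)           # u*n + v*n_i = 1 since gcd = 1
        x = (x * v * n_i + r_i * u * n) % (n * n_i)
        n *= n_i
    return x

sol = crt([2, 3, 2], [3, 5, 7])   # Sunzi's classical instance
# sol % 3 == 2, sol % 5 == 3, sol % 7 == 2
```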
1172_(GTM8)Axiomatic Set Theory
Definition 15.1
Definition 15.1. We define a mapping \( j : V\left\lbrack {F}_{0}\right\rbrack \rightarrow {V}^{\left( \mathbf{B}\right) } \) and a denotation operator \( {D}_{j} \) on the terms and formulas of the ramified language corresponding to \( V\left\lbrack {F}_{0}\right\rbrack \) by recursion in the following way (cf. Definition 9.36): 1. \( j\left( \underline{k}\right) \triangleq \breve{k}, k \in V \) . 2. \( {D}_{j}\left( {V\left( t\right) }\right) \triangleq \llbracket M\left( {j\left( t\right) }\right) \rrbracket \) . 3. \( {D}_{j}\left( {F\left( t\right) }\right) \triangleq \mathop{\sum }\limits_{{b \in R\left( \alpha \right) \cap B}}\llbracket j\left( t\right) = \check{b}\rrbracket \cdot b \) where \( \alpha = \rho \left( t\right) + 1 \) . 4. \( {D}_{j}\left( {{t}_{1} \in {t}_{2}}\right) \triangleq \llbracket j\left( {t}_{1}\right) \in j\left( {t}_{2}\right) \rrbracket \) . 5. \( {D}_{j}\left( {{t}_{1} = {t}_{2}}\right) \triangleq \llbracket j\left( {t}_{1}\right) = j\left( {t}_{2}\right) \rrbracket \) . 6. \( {D}_{j}\left( {-\varphi }\right) \triangleq - {D}_{j}\left( \varphi \right) ,\;{D}_{j}\left( {{\varphi }_{1} \land {\varphi }_{2}}\right) \triangleq {D}_{j}\left( {\varphi }_{1}\right) \cdot {D}_{j}\left( {\varphi }_{2}\right) \) . 7. \( {D}_{j}\left( {\left( {\forall {x}_{n}{}^{\beta }}\right) \varphi \left( {{x}_{n}{}^{\beta }}\right) }\right) \triangleq \mathop{\prod }\limits_{{t \in {T}_{\beta }}}{D}_{j}\left( {\varphi \left( t\right) }\right) \) . 8. \( j\left( {{\widehat{x}}_{n}{}^{\beta }\varphi \left( {{x}_{n}{}^{\beta }}\right) }\right) \triangleq v \) where \( v \in {V}^{\left( \mathbf{B}\right) } \) is given by \[ \mathcal{D}\left( v\right) = \left\{ {j\left( t\right) \mid t \in {T}_{\beta }}\right\} \] and \[ v\left( {j\left( t\right) }\right) = {D}_{j}\left( {\varphi \left( t\right) }\right) \;\text{ for }t \in {T}_{\beta }. \] Remark.
We use the notation \( F\left( b\right) \) for the value of the function \( F \) at \( b \), while at the same time \( F\left( \;\right) \) (e.g., in 3) is a formal symbol of the ramified language. Also \( \llbracket \;\rrbracket \) refers both to \( V\left\lbrack {F}_{0}\right\rbrack \) and to \( {V}^{\left( \mathbf{B}\right) } \) . Despite these ambiguities it is hoped that the proper meaning of \( F \) and \( \llbracket \;\rrbracket \) is always clear from the context. Theorem 15.2. If \( u, v \in {V}^{\left( \mathbf{B}\right) } \), if \( \mathcal{D}\left( u\right) \subseteq S \) and \( \mathcal{D}\left( v\right) \subseteq S \) where \( S \subseteq {V}^{\left( \mathbf{B}\right) } \), then \( \llbracket u = v\rrbracket = \mathop{\prod }\limits_{{w \in S}}\llbracket w \in u \leftrightarrow w \in v\rrbracket \) . Proof. \( \llbracket u = v\rrbracket \leq \mathop{\prod }\limits_{{w \in S}}\llbracket w \in u \leftrightarrow w \in v\rrbracket \) follows from the Axioms of Equality. On the other hand, \[ \mathop{\prod }\limits_{{w \in S}}\llbracket w \in u \leftrightarrow w \in v\rrbracket \leq \mathop{\prod }\limits_{{w \in \mathcal{D}\left( u\right) }}\llbracket w \in u \rightarrow w \in v\rrbracket \cdot \mathop{\prod }\limits_{{w \in \mathcal{D}\left( v\right) }}\llbracket w \in v \rightarrow w \in u\rrbracket \] \[ \leq \mathop{\prod }\limits_{{w \in \mathcal{D}\left( u\right) }}\left( {u\left( w\right) \Rightarrow \llbracket w \in v\rrbracket }\right) \cdot \mathop{\prod }\limits_{{w \in \mathcal{D}\left( v\right) }}\left( {v\left( w\right) \Rightarrow \llbracket w \in u\rrbracket }\right) \;\text{ by Corollary 13.4.3} \] \[ = \llbracket u = v\rrbracket . \] Theorem 15.3. \( t \in {T}_{\alpha } \rightarrow j\left( t\right) \in {V}_{\alpha }^{\left( \mathbf{B}\right) } \) . Proof. (By induction on \( \alpha \) .)
If \( t = \underline{k} \) and \( \underline{k} \in {T}_{\alpha } \), then \( k \in R\left( \alpha \right) \), i.e., \( k \in R\left( {\beta + 1}\right) \) for some \( \beta < \alpha \) . Therefore \[ j\left( \underline{k}\right) = \check{k} = \left\{ {\left\langle {{\check{k}}_{1},1}\right\rangle \mid {k}_{1} \in k}\right\} \] and \[ \mathcal{D}\left( \check{k}\right) = \left\{ {{\check{k}}_{1} \mid {k}_{1} \in k}\right\} \] From the induction hypothesis, \( \mathcal{D}\left( \check{k}\right) \subseteq {V}_{\beta }{}^{\left( \mathrm{B}\right) } \) . Hence \[ \check{k} \in {V}_{\beta + 1}^{\left( \mathbf{B}\right) } \subseteq {V}_{\alpha }^{\left( \mathbf{B}\right) } \] If \( t = {\widehat{x}}^{\beta }\varphi \left( {x}^{\beta }\right) \) for some \( \varphi \) and \( t \in {T}_{\alpha } \), then \( \beta < \alpha \) . For \( v = j\left( t\right) \) we have, by the induction hypothesis, \[ \mathcal{D}\left( v\right) = \left\{ {j\left( t\right) \mid t \in {T}_{\beta }}\right\} \subseteq {V}_{\beta }^{\left( \mathbf{B}\right) }. \] Therefore \[ v \in {V}_{\beta + 1}^{\left( \mathbf{B}\right) } \subseteq {V}_{\alpha }^{\left( \mathbf{B}\right) } \] Theorem 15.4. If \( {t}_{1},{t}_{2} \) are constant terms and \( \varphi \) is a limited formula, then \( \llbracket \varphi \rrbracket = {D}_{j}\left( \varphi \right) \) . In particular, 1. \( \llbracket {t}_{1} = {t}_{2}\rrbracket = \llbracket j\left( {t}_{1}\right) = j\left( {t}_{2}\right) \rrbracket \) . 2. \( \llbracket {t}_{1} \in {t}_{2}\rrbracket = \llbracket j\left( {t}_{1}\right) \in j\left( {t}_{2}\right) \rrbracket \) . Proof. (By induction on Ord ( \( \varphi \) ).) 1. Let \( \beta = \max \left( {\rho \left( {t}_{1}\right) ,\rho \left( {t}_{2}\right) }\right) \) . 
Then \[ \llbracket {t}_{1} = {t}_{2}\rrbracket = \mathop{\prod }\limits_{{t \in {T}_{\beta }}}\llbracket t \in {t}_{1} \leftrightarrow t \in {t}_{2}\rrbracket \;\text{by Definition 9.27.6} \] \[ = \mathop{\prod }\limits_{{t \in {T}_{\beta }}}\llbracket j\left( t\right) \in j\left( {t}_{1}\right) \leftrightarrow j\left( t\right) \in j\left( {t}_{2}\right) \rrbracket \;\text{by the induction hypothesis} \] \[ = \llbracket j\left( {t}_{1}\right) = j\left( {t}_{2}\right) \rrbracket \;\text{by Theorem 15.2,} \] since \[ \mathcal{D}\left( {j\left( {t}_{i}\right) }\right) \subseteq \left\{ {j\left( t\right) \mid t \in {T}_{\beta }}\right\} \;\text{ for }i = 1,2. \] 2. We distinguish the following three cases: 2.1 \( {t}_{1} = {\underline{k}}_{1} \land {t}_{2} = {\underline{k}}_{2} \) for some \( {k}_{1},{k}_{2} \in V \) . Then \[ \llbracket {t}_{1} \in {t}_{2}\rrbracket = \llbracket {\underline{k}}_{1} \in {\underline{k}}_{2}\rrbracket = \llbracket {\check{k}}_{1} \in {\check{k}}_{2}\rrbracket = \llbracket j\left( {\underline{k}}_{1}\right) \in j\left( {\underline{k}}_{2}\right) \rrbracket . \] 2.2 \( {t}_{2} = {\widehat{x}}^{\beta }\varphi \left( {x}^{\beta }\right) \) . Let \( v = j\left( {{\widehat{x}}^{\beta }\varphi \left( {x}^{\beta }\right) }\right) = j\left( {t}_{2}\right) \) .
Then \[ \llbracket {t}_{1} \in {t}_{2}\rrbracket = \mathop{\sum }\limits_{{t \in {T}_{\beta }}}\llbracket {t}_{1} = t\rrbracket \cdot \llbracket \varphi \left( t\right) \rrbracket \;\text{by Definition 9.27.5} \] \[ = \mathop{\sum }\limits_{{t \in {T}_{\beta }}}\llbracket j\left( {t}_{1}\right) = j\left( t\right) \rrbracket \cdot {D}_{j}\left( {\varphi \left( t\right) }\right) \;\text{by the induction hypothesis} \] \[ = \mathop{\sum }\limits_{{x \in \mathcal{D}\left( v\right) }}\llbracket j\left( {t}_{1}\right) = x\rrbracket \cdot v\left( x\right) \] \[ = \llbracket j\left( {t}_{1}\right) \in v\rrbracket = \llbracket j\left( {t}_{1}\right) \in j\left( {t}_{2}\right) \rrbracket . \] 2.3 \( {t}_{2} = {\underline{k}}_{2} \) for some \( {k}_{2} \) . Then \[ \llbracket {t}_{1} \in {\underline{k}}_{2}\rrbracket = \mathop{\sum }\limits_{{k \in {k}_{2}}}\llbracket {t}_{1} = \underline{k}\rrbracket \;\text{by Definition 9.27.4} \] \[ = \mathop{\sum }\limits_{{k \in {k}_{2}}}\llbracket j\left( {t}_{1}\right) = \check{k}\rrbracket \;\text{by the induction hypothesis} \] \[ = \llbracket j\left( {t}_{1}\right) \in {\check{k}}_{2}\rrbracket = \llbracket j\left( {t}_{1}\right) \in j\left( {t}_{2}\right) \rrbracket . \] Among the remaining cases we need only consider the following. 3. If \( \varphi \) is \( \left( {\forall {x}^{\beta }}\right) \psi \left( {x}^{\beta }\right) \), then \[ \llbracket \varphi \rrbracket = \mathop{\prod }\limits_{{t \in {T}_{\beta }}}\llbracket \psi \left( t\right) \rrbracket \;\text{by Definition 9.27.8} \] \[ = \mathop{\prod }\limits_{{t \in {T}_{\beta }}}{D}_{j}\left( {\psi \left( t\right) }\right) \;\text{by the induction hypothesis} \] \[ = {D}_{j}\left( {\left( {\forall {x}^{\beta }}\right) \psi \left( {x}^{\beta }\right) }\right) = {D}_{j}\left( \varphi \right) . \] 4.
\[ \llbracket V\left( t\right) \rrbracket = \mathop{\sum }\limits_{{k \in R\left( \alpha \right) }}\llbracket t = \underline{k}\rrbracket \;\text{where }\alpha = \rho \left( t\right) + 1 \] \[ = \mathop{\sum }\limits_{{k \in R\left( \alpha \right) }}\llbracket j\left( t\right) = \check{k}\rrbracket \;\text{by the induction hypothesis} \] \[ = \llbracket M\left( {j\left( t\right) }\right) \rrbracket = {D}_{j}\left( {V\left( t\right) }\right) . \] 5. \[ \llbracket F\left( t\right) \rrbracket = \mathop{\sum }\limits_{{b \in R\left( \alpha \right) \cap B}}\llbracket t = \underline{b}\rrbracket \cdot {F}_{0}\left( b\right) \;\text{where }\alpha = \rho \left( t\right) + 1 \] \[ = \mathop{\sum }\limits_{{b \in R\left( \alpha \right) \cap B}}\llbracket j\left( t\right) = \check{b}\rrbracket \cdot b \;\text{by the induction hypothesis} \] \[ = {D}_{j}\left( {F\left( t\right) }\right) . \] Corollary 15.5. \( \left( {\exists \alpha }\right) \llbracket j\left( {{\widehat{x}}_{n}{}^{\alpha }F\left( {{x}_{n}{}^{\alpha }}\right) }\right) = F\rrbracket = 1 \) . Proof. Choose \( \alpha = \operatorname{rank}\left( B\right) \) and use 5. Remark. From now on we again assum
111_111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 1.5
Definition 1.5 A linear subspace \( \mathcal{D} \) of \( \mathcal{D}\left( T\right) \) is called a core for \( T \) if \( \mathcal{D} \) is dense in \( \left( {\mathcal{D}\left( T\right) ,\parallel \cdot {\parallel }_{T}}\right) \), that is, for each \( x \in \mathcal{D}\left( T\right) \), there exists a sequence \( {\left( {x}_{n}\right) }_{n \in \mathbb{N}} \) of vectors \( {x}_{n} \in \mathcal{D} \) such that \( x = \mathop{\lim }\limits_{n}{x}_{n} \) in \( {\mathcal{H}}_{1} \) and \( {Tx} = \mathop{\lim }\limits_{n}T\left( {x}_{n}\right) \) in \( {\mathcal{H}}_{2} \) . If \( T \) is closed, a linear subspace \( \mathcal{D} \) of \( \mathcal{D}\left( T\right) \) is a core for \( T \) if and only if \( T \) is the closure of its restriction \( T \mid \mathcal{D} \) . That is, a closed operator can be restored from its restriction to any core. The advantage of a core is that closed operators are often easier to handle on appropriate cores rather than on full domains. Example 1.1 (Nonclosable operators) Let \( \mathcal{D} \) be a linear subspace of a Hilbert space \( \mathcal{H} \), and let \( e \neq 0 \) be a vector of \( \mathcal{H} \) . Let \( F \) be a linear functional on \( \mathcal{D} \) which is not continuous in the Hilbert space norm. Define the operator \( T \) by \( \mathcal{D}\left( T\right) = \mathcal{D} \) and \( T\left( x\right) = F\left( x\right) e \) for \( x \in \mathcal{D}. \) Statement \( T \) is not closable. Proof Since \( F \) is not continuous, there exists a sequence \( {\left( {x}_{n}\right) }_{n \in \mathbb{N}} \) from \( \mathcal{D} \) such that \( \mathop{\lim }\limits_{n}{x}_{n} = 0 \) in \( \mathcal{H} \) and \( \left( {F\left( {x}_{n}\right) }\right) \) does not converge to zero. By passing to a subsequence if necessary we can assume that there is a constant \( c > 0 \) such that \( \left| {F\left( {x}_{n}\right) }\right| \geq c \) for all \( n \in \mathbb{N} \) . 
Putting \( {x}_{n}^{\prime } = F{\left( {x}_{n}\right) }^{-1}{x}_{n} \), we have \( \mathop{\lim }\limits_{n}{x}_{n}^{\prime } = 0 \) and \( T\left( {x}_{n}^{\prime }\right) = F\left( {x}_{n}^{\prime }\right) e = e \neq 0 \) . Hence, \( T \) is not closable by Proposition 1.5(ii). The preceding proof has also shown that \( \left( {0, e}\right) \in \overline{\mathcal{G}\left( T\right) } \), so \( \overline{\mathcal{G}\left( T\right) } \) is not the graph of a linear operator. Explicit examples of discontinuous linear functionals are easily obtained as follows: If \( \mathcal{D} \) is the linear span of an orthonormal sequence \( {\left( {e}_{n}\right) }_{n \in \mathbb{N}} \) of a Hilbert space, define \( F \) on \( \mathcal{D} \) by \( F\left( {e}_{n}\right) = 1, n \in \mathbb{N} \) . If \( \mathcal{H} = {L}^{2}\left( \mathbb{R}\right) \) and \( \mathcal{D} = {C}_{0}^{\infty }\left( \mathbb{R}\right) \), define \( F\left( f\right) = f\left( 0\right) \) for \( f \in \mathcal{D} \) . ## 1.2 Adjoint Operators In this section the scalar products of the underlying Hilbert spaces are essentially used to define adjoints of densely defined linear operators. Let \( \left( {{\mathcal{H}}_{1},\langle \cdot , \cdot {\rangle }_{1}}\right) \) and \( \left( {{\mathcal{H}}_{2},\langle \cdot , \cdot {\rangle }_{2}}\right) \) be Hilbert spaces. Let \( T \) be a linear operator from \( {\mathcal{H}}_{1} \) into \( {\mathcal{H}}_{2} \) such that the domain \( \mathcal{D}\left( T\right) \) is dense in \( {\mathcal{H}}_{1} \) . Set \[ \mathcal{D}\left( {T}^{ * }\right) = \left\{ {y \in {\mathcal{H}}_{2} : }\right. 
\text{there exists }u \in {\mathcal{H}}_{1}\text{ such that }\langle {Tx}, y{\rangle }_{2} = \langle x, u{\rangle }_{1}\text{ for }x \in \mathcal{D}\left( T\right) \} . \] By Riesz’ theorem, a vector \( y \in {\mathcal{H}}_{2} \) belongs to \( \mathcal{D}\left( {T}^{ * }\right) \) if and only if the map \( x \rightarrow \langle {Tx}, y{\rangle }_{2} \) is a continuous linear functional on \( \left( {\mathcal{D}\left( T\right) ,\parallel \cdot {\parallel }_{1}}\right) \), or equivalently, there is a constant \( {c}_{y} > 0 \) such that \( \left| {\langle {Tx}, y{\rangle }_{2}}\right| \leq {c}_{y}\parallel x{\parallel }_{1} \) for all \( x \in \mathcal{D}\left( T\right) \) . An explicit description of the set \( \mathcal{D}\left( {T}^{ * }\right) \) is in general a very difficult matter. Since \( \mathcal{D}\left( T\right) \) is dense in \( {\mathcal{H}}_{1} \), the vector \( u \in {\mathcal{H}}_{1} \) satisfying \( \langle {Tx}, y{\rangle }_{2} = \langle x, u{\rangle }_{1} \) for all \( x \in \mathcal{D}\left( T\right) \) is uniquely determined by \( y \) . Therefore, setting \( {T}^{ * }y = u \), we obtain a well-defined mapping \( {T}^{ * } \) from \( {\mathcal{H}}_{2} \) into \( {\mathcal{H}}_{1} \) . It is easily seen that \( {T}^{ * } \) is linear. Definition 1.6 The linear operator \( {T}^{ * } \) is called the adjoint operator of \( T \) . By the preceding definition we have \[ \langle {Tx}, y{\rangle }_{2} = {\left\langle x,{T}^{ * }y\right\rangle }_{1}\;\text{ for all }x \in \mathcal{D}\left( T\right), y \in \mathcal{D}\left( {T}^{ * }\right) . \] (1.5) Let \( T \) be a densely defined linear operator on \( \mathcal{H} \) . Then \( T \) is called symmetric if \( T \subseteq {T}^{ * } \) . Further, we say that \( T \) is self-adjoint if \( T = {T}^{ * } \) and that \( T \) is essentially self-adjoint if its closure \( \bar{T} \) is self-adjoint. These are fundamental notions studied extensively in this book.
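The discontinuous functional \( F\left( {e}_{n}\right) = 1 \) from Example 1.1 can be visualized with a finite-dimensional truncation: the vectors \( {x}_{N} = \frac{1}{N}\left( {{e}_{1} + \cdots + {e}_{N}}\right) \) have norm \( 1/\sqrt{N} \rightarrow 0 \) while \( F\left( {x}_{N}\right) = 1 \) for every \( N \) . A sketch in coordinates with respect to the orthonormal sequence (this setup is ours, for illustration only):

```python
import math

def norm(v):
    """Hilbert space norm of a finitely supported coordinate vector."""
    return math.sqrt(sum(c * c for c in v))

def F(v):
    """The linear functional with F(e_n) = 1: the sum of the coordinates."""
    return sum(v)

def x(N):
    """x_N = (1/N) * (e_1 + ... + e_N), stored as its first N coordinates."""
    return [1.0 / N] * N

norms = [norm(x(N)) for N in (1, 100, 10000)]    # 1/sqrt(N): tends to 0
values = [F(x(N)) for N in (1, 100, 10000)]      # stays (numerically) at 1
```

Since the `x(N)` tend to 0 while `F` of them does not, no constant can bound \( \left| F \right| \) on the unit sphere; this is exactly the discontinuity exploited in Examples 1.1 and 1.2.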
The domain \( \mathcal{D}\left( {T}^{ * }\right) \) of \( {T}^{ * } \) may not be dense in \( {\mathcal{H}}_{2} \), as the next example shows. There are even operators \( T \) such that \( \mathcal{D}\left( {T}^{ * }\right) \) consists of the null vector only. Example 1.2 (Example 1.1 continued) Suppose that \( \mathcal{D}\left( T\right) \) is dense in \( \mathcal{H} \) . Since the functional \( F \) is discontinuous, the map \( x \rightarrow \langle {Tx}, y\rangle = F\left( x\right) \langle e, y\rangle \) is continuous if and only if \( y \bot e \) . Hence, \( \mathcal{D}\left( {T}^{ * }\right) = {e}^{ \bot } \) in \( \mathcal{H} \) and \( {T}^{ * }y = 0 \) for \( y \in \mathcal{D}\left( {T}^{ * }\right) \) . Example 1.3 (Multiplication operators by continuous functions) Let \( \mathcal{J} \) be an interval. For a continuous function \( \varphi \) on \( \mathcal{J} \), we define the multiplication operator \( {M}_{\varphi } \) on the Hilbert space \( {L}^{2}\left( \mathcal{J}\right) \) by \[ \left( {{M}_{\varphi }f}\right) \left( x\right) = \varphi \left( x\right) f\left( x\right) \;\text{ for }f \in \mathcal{D}\left( {M}_{\varphi }\right) \mathrel{\text{:=}} \left\{ {f \in {L}^{2}\left( \mathcal{J}\right) : \varphi \cdot f \in {L}^{2}\left( \mathcal{J}\right) }\right\} . \] Since \( \mathcal{D}\left( {M}_{\varphi }\right) \) contains all continuous functions with compact support, \( \mathcal{D}\left( {M}_{\varphi }\right) \) is dense, so the adjoint operator \( {\left( {M}_{\varphi }\right) }^{ * } \) exists. We will prove the following: Statement \( {\left( {M}_{\varphi }\right) }^{ * } = {M}_{\bar{\varphi }} \) .
Proof From the relation \[ \left\langle {{M}_{\varphi }f, g}\right\rangle = {\int }_{\mathcal{J}}{\varphi f}\bar{g}{dx} = {\int }_{\mathcal{J}}f\overline{\bar{\varphi }g}{dx} = \left\langle {f,{M}_{\bar{\varphi }}g}\right\rangle \] for \( f, g \in \mathcal{D}\left( {M}_{\varphi }\right) = \mathcal{D}\left( {M}_{\bar{\varphi }}\right) \) we conclude that \( {M}_{\bar{\varphi }} \subseteq {\left( {M}_{\varphi }\right) }^{ * } \) . To prove the converse inclusion, let \( g \in \mathcal{D}\left( {\left( {M}_{\varphi }\right) }^{ * }\right) \) and set \( h = {\left( {M}_{\varphi }\right) }^{ * }g \) . Let \( {\chi }_{K} \) be the characteristic function of a compact subset \( K \) of \( \mathcal{J} \) . For \( f \in \mathcal{D}\left( {M}_{\varphi }\right) \), we have \( f \cdot {\chi }_{K} \in \mathcal{D}\left( {M}_{\varphi }\right) \) and \( \left\langle {{M}_{\varphi }\left( {f{\chi }_{K}}\right), g}\right\rangle = \langle {\varphi f}{\chi }_{K}, g\rangle = \left\langle {f{\chi }_{K}, h}\right\rangle \), so that \[ {\int }_{\mathcal{J}}f{\chi }_{K}\left( {\varphi \bar{g} - \bar{h}}\right) {dx} = 0. \] Since \( \mathcal{D}\left( {M}_{\varphi }\right) \) is dense, the element \( {\chi }_{K}\left( {\varphi \bar{g} - \bar{h}}\right) \) of \( {L}^{2}\left( \mathcal{J}\right) \) must be zero, so that \( \bar{\varphi }g = h \) on \( K \) and hence on the whole interval \( \mathcal{J} \) . That is, we have \( g \in \mathcal{D}\left( {M}_{\bar{\varphi }}\right) \) and \( h = {\left( {M}_{\varphi }\right) }^{ * }g = {M}_{\bar{\varphi }}g \) . This completes the proof of the equality \( {\left( {M}_{\varphi }\right) }^{ * } = {M}_{\bar{\varphi }} \) . The special case where \( \varphi \left( x\right) = x \) and \( \mathcal{J} = \mathbb{R} \) is of particular importance. The operator \( Q \mathrel{\text{:=}} {M}_{\varphi } = {M}_{x} \) is then the position operator of quantum mechanics. 
By the preceding statement we have \( Q = {Q}^{ * } \), so \( Q \) is a self-adjoint operator on \( {L}^{2}\left( \mathbb{R}\right) \) . Multiplication operators by measurable functions on general measure spaces will be studied in Sect. 3.4. We now begin to develop basic properties of adjoint operators. Proposition 1.6 Let \( S \) and \( T \) be linear operators from \( {\mathcal{H}}_{1} \) into \( {\mathcal{H}}_{2} \) such that \( \mathcal{D}\left( T\right) \) is dense in \( {\mathcal{H}}_{1} \) . Then: (i) \( {T}^{ * } \) is a closed linear operator from \( {\mathcal{H}}_{2} \) into \( {\mathcal{H}}_{1} \) . (ii) \( \mathcal{R}{\left( T\right) }^{ \bot } = \mathcal{N}\left( {T}^{ * }\right) \) . (iii) If \( \mathcal{D}\left( {T}^{ * }\right) \) is dense in \( {\mathcal{H}}_{2} \), then \( T \subseteq {T}^{* * } \), where \( {T}^{* * } \mathrel{\text{:=}} {\left( {T}^{ * }\right) }^{ * } \)
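The statement \( {\left( {M}_{\varphi }\right) }^{ * } = {M}_{\bar{\varphi }} \) of Example 1.3 has a transparent finite-dimensional analogue: a multiplication operator on \( {\mathbb{C}}^{n} \) is a diagonal matrix, whose adjoint is the diagonal matrix of the conjugated entries. A numerical sketch (the sample data are arbitrary):

```python
def inner(u, v):
    """Standard complex inner product, conjugate-linear in the second slot."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

def mult(phi, f):
    """Discrete multiplication operator: (M_phi f)_k = phi_k * f_k."""
    return [p * c for p, c in zip(phi, f)]

phi = [1 + 2j, -3j, 0.5 + 0j, 2 - 1j]      # sampled symbol phi
f = [1 + 1j, 2 + 0j, -1j, 0.5 + 0.5j]
g = [1j, 1 - 1j, 3 + 0j, -2 + 0j]

phi_bar = [p.conjugate() for p in phi]
lhs = inner(mult(phi, f), g)                # <M_phi f, g>
rhs = inner(f, mult(phi_bar, g))            # <f, M_phibar g>
# lhs == rhs, mirroring the relation at the start of the proof above.
```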
1139_(GTM44)Elementary Algebraic Geometry
Definition 5.5
Definition 5.5. Let \( \mathfrak{a} \) and \( \mathfrak{b} \) be fractional ideals of \( R \) . Then the set of all elements \( x \in K \) satisfying \( x\mathfrak{b} \subset \mathfrak{a} \) forms a fractional ideal called the quotient of \( \mathfrak{a} \) by \( \mathfrak{b} \), denoted \( \mathfrak{a} : \mathfrak{b} \) . A fractional ideal \( \mathfrak{a} \) is called invertible if there exists a fractional ideal \( \mathfrak{b} \) such that \( \mathfrak{a} \cdot \mathfrak{b} = R \) ; then \( \mathfrak{b} \) is the inverse of \( \mathfrak{a} \), and is denoted by \( {\mathfrak{a}}^{-1} \) or \( 1/\mathfrak{a} \) . Remark 5.6. When \( \mathfrak{a} \) and \( \mathfrak{b} \) are both ordinary ideals of \( R \), Definition 5.5 yields a larger set than the " \( \mathfrak{a} : \mathfrak{b} \) " of Exercise 4.5 of Chapter III, since in that exercise we restrict \( x \) to be in \( R \) . One thus must make clear in which sense one is taking the quotient. For the remainder of the book, we shall always mean it in the sense of Definition 5.5. We can now easily see that any fractional ideal is the quotient of two integral ones: Let \( \mathfrak{A} \) be any fractional ideal of \( R \), and let \( {d}_{0} \in R \) be a universal denominator of \( \mathfrak{A} \) . Then \( \mathfrak{a} = \mathfrak{A} \cap R \) and \( \left( {d}_{0}\right) \) are integral ideals of \( R \), and \( \mathfrak{A} = \mathfrak{a} : \left( {d}_{0}\right) \) . (Each of " \( \subset \) " and " \( \supset \) " of the last equality follows directly from the definitions of fractional ideal and of \( \mathfrak{a} : \left( {d}_{0}\right) \) .) Using fractional ideals, we will be able to see the essentially identical nature of point chains on an irreducible nonsingular curve and nonzero ideals of its coordinate ring \( R \) (Theorem 5.12).
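Definition 5.5 is easy to experiment with in the prototype case \( R = \mathbb{Z} \), \( K = \mathbb{Q} \), where every nonzero fractional ideal is \( a\mathbb{Z} \) for a single generator \( a \in \mathbb{Q} \), and products and inverses act on generators. A sketch (this generator bookkeeping is ours):

```python
from fractions import Fraction

# Nonzero fractional ideals of Z are a*Z with a in Q; we track the generator.

def product(a, b):
    """Generator of (a*Z) * (b*Z) = (a*b)*Z."""
    return a * b

def inverse(a):
    """Generator of the inverse fractional ideal: (a*Z)^{-1} = (1/a)*Z."""
    return 1 / a

a = Fraction(2, 3)        # the fractional ideal (2/3)Z
b = inverse(a)            # (3/2)Z
unit = product(a, b)      # (2/3)(3/2)Z = 1*Z = Z, so (2/3)Z is invertible
```

Here \( \left( {2/3}\right) \mathbb{Z} = 2\mathbb{Z} : 3\mathbb{Z} \), exhibiting a fractional ideal as a quotient of two integral ideals with universal denominator \( {d}_{0} = 3 \).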
Theorem 5.12 says that every integral ideal \( \mathfrak{a} \) of \( R \) is uniquely the product \( \mathfrak{a} = {\mathfrak{p}}_{1}^{{m}_{1}} \cdot \ldots \cdot {\mathfrak{p}}_{s}^{{m}_{s}} \) of finitely many (ordinary) prime ideals of \( R \), and every fractional ideal of \( R \) can be uniquely written as \( {\mathfrak{p}}_{1}^{{m}_{1}} \cdot \ldots \cdot {\mathfrak{p}}_{s}^{{m}_{s}}/{\mathfrak{q}}_{1}^{{n}_{1}} \cdot \ldots \cdot {\mathfrak{q}}_{t}^{{n}_{t}} \) (where the ideals \( {\mathfrak{p}}_{i} \) and \( {\mathfrak{q}}_{j} \) are prime, and no \( {\mathfrak{p}}_{i} \) equals any \( {\mathfrak{q}}_{j} \) ). Before stating Theorem 5.12 formally, we shall convert the local ring translation of nonsingularity, namely (5.1.2), to a form which will allow us to state Theorem 5.12 in a somewhat more standard form, and will simplify the proof of the result. Specifically, we convert (5.1.2) to (5.12.2). (We need not convert (5.1.1); condition (5.12.1) is simply (5.1.1) stated in the more general setting of Theorem 5.12.) The key concept here is that of a normal domain. We begin with the following result. Lemma 5.7. Suppose \( R \) is a Noetherian domain with quotient field \( K \), and let \( \mathfrak{m} \) be any maximal ideal of \( R \) ; if the maximal ideal \( \mathfrak{M} \) of \( {R}_{\mathfrak{m}} \) is principal, then \( {R}_{\mathfrak{m}} \) is a valuation ring. Proof. Suppose \( \mathfrak{M} = \left( m\right) \) . We first show this: (5.8) Each element \( a \) of \( {R}_{\mathfrak{m}} \) can be written as \( a = u{m}^{n} \) for some unit \( u \in {R}_{\mathfrak{m}} \smallsetminus \mathfrak{M} \) and some nonnegative integer \( n \) . If \( m \) does not divide \( a \) (that is, if for no \( x \in {R}_{\mathfrak{m}} \) do we have \( a = {xm} \) ), then \( a \) cannot be in \( \mathfrak{M} \), hence \( a \) is a unit of \( {R}_{\mathfrak{m}}\left( {a = u \cdot {m}^{0}}\right) \) . If \( m \) does divide \( a \), write \( a = {x}_{1}m \) . 
If \( m \) divides \( {x}_{1} \), then \( a = {x}_{2}{m}^{2} \), etc. Since \( {R}_{\mathfrak{m}} \) is Noetherian, this process must terminate after a finite number \( n \) of steps, \( a = {x}_{n}{m}^{n} \) ; otherwise we would have a strictly ascending sequence of ideals \( \left( a\right) \subsetneqq \left( {x}_{1}\right) \subsetneqq \left( {x}_{2}\right) \subsetneqq \ldots \) . (This sequence would be strict, for if there were an \( i \) such that \( \left( {x}_{i}\right) = \left( {x}_{i + 1}\right) \), that is, if \( {x}_{i + 1} = r{x}_{i} \) for some \( r \in {R}_{\mathfrak{m}} \), then \( a = {x}_{i}{m}^{i} = {x}_{i + 1}{m}^{i + 1} = r{x}_{i}{m}^{i + 1} \) . Cancelling \( {x}_{i}{m}^{i} \) gives \( 1 = {rm} \), so \( \left( m\right) = {R}_{\mathfrak{m}} \), a contradiction.) Since \( m \) does not divide \( {x}_{n} \), \( {x}_{n} \) is a unit of \( {R}_{\mathfrak{m}} \), and (5.8) follows. It is now easily seen that \( {R}_{\mathfrak{m}} \) is a valuation ring of \( K \), for if \( b \) is any nonzero element of \( K \), we may write \( b = c/d \), where \( c, d \in R \) ; if \( c = u{m}^{n} \) and \( d = {u}^{\prime }{m}^{{n}^{\prime }} \), then \( b = {u}^{\prime \prime }{m}^{n - {n}^{\prime }} \) (with \( u \), \( {u}^{\prime } \), and \( {u}^{\prime \prime } \) units of \( {R}_{\mathfrak{m}} \) ). If \( n - {n}^{\prime } \geq 0 \), then \( b \in {R}_{\mathfrak{m}} \) ; if \( n - {n}^{\prime } \leq 0 \), then \( 1/b \in {R}_{\mathfrak{m}} \) . Hence Lemma 5.7 is proved. Now recall the notion of an element being integral over a domain (after the proof of (6.1.1) in Chapter III). Definition 5.9. Let \( R \) be an integral domain with quotient field \( K \) . If each element of \( K \) integral over \( R \) is already in \( R \), then we say \( R \) is integrally closed in \( K \), or that \( R \) is normal. Example 5.10. The ring \( \mathbb{Z} \) is a canonical example of a normal domain. 
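A concrete instance of (5.8) is \( R = \mathbb{Z} \) with \( \mathfrak{m} = \left( p\right) \) : in the localization \( {\mathbb{Z}}_{\left( p\right)} \) the maximal ideal is generated by \( m = p \), and the proof's repeated division by \( m \) is just extraction of the power of \( p \) . A short sketch (the helper name is an illustrative choice), which also handles the extension to the quotient field \( \mathbb{Q} \) exactly as in the proof, via \( b = {u}^{\prime \prime }{m}^{n - {n}^{\prime }} \) :

```python
from fractions import Fraction

def unit_times_power(a, p):
    """Write a positive element a of the quotient field of Z_(p) as u * p**n,
    where u is a unit of Z_(p), i.e. numerator and denominator prime to p.
    For a in Z_(p) itself the exponent n is nonnegative, as in (5.8)."""
    a = Fraction(a)
    n = 0
    while a.numerator % p == 0:      # strip factors of p from the numerator
        a /= p
        n += 1
    while a.denominator % p == 0:    # strip factors of p from the denominator
        a *= p
        n -= 1
    return a, n
```

For example, \( 12 = 3 \cdot 2^2 \) gives the unit \( 3 \) and exponent \( 2 \) at \( p = 2 \), while \( 3/8 \) gives exponent \( -3 \), so exactly one of \( b \) and \( 1/b \) lies in \( {\mathbb{Z}}_{\left( 2\right)} \) .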
Every element \( a \) of \( \mathbb{Q} \) satisfies an equation \( {bx} - c = 0 \), with \( b, c \in \mathbb{Z} \) . Clearly \( a = c/b \) is integral over \( \mathbb{Z} \) iff \( b \) may be taken to be 1 . In a similar way we see \( \mathbb{C}\left\lbrack X\right\rbrack \) is normal. The coordinate ring \( \mathbb{C}\left\lbrack {X, Y}\right\rbrack /\left( {{Y}^{2} - {X}^{3}}\right) = \) \( \mathbb{C}\left\lbrack {X,{X}^{3/2}}\right\rbrack \) of the cusp curve \( \mathrm{V}\left( {{Y}^{2} - {X}^{3}}\right) \subset {\mathbb{C}}_{XY} \) is not normal, for \( {X}^{1/2} \) is not in \( \mathbb{C}\left\lbrack {X,{X}^{3/2}}\right\rbrack \), yet it is integral over \( \mathbb{C}\left\lbrack {X,{X}^{3/2}}\right\rbrack \) since \( Z = {X}^{1/2} \) satisfies the integral equation \( {Z}^{2} - X = 0 \) . Note that the abstract varieties determined by the rings \( \mathbb{Z} \) and \( \mathbb{C}\left\lbrack X\right\rbrack \) above are nonsingular; the cusp curve, having coordinate ring \( \mathbb{C}\left\lbrack {X,{X}^{3/2}}\right\rbrack \) , has a singularity at \( \left( {0,0}\right) \) . We shall see in Exercise 5.3 that a coordinate ring of an irreducible variety \( V \) is normal iff \( V \) is "nonsingular in codimension 1"- that is, iff each irreducible subvariety of codimension 1 in \( V \) is nonsingular in \( V \) . (Cf. Definition 4.1.) (Thus the subvariety \( \mathbf{S}\left( V\right) \) of points singular in \( V \) has codimension \( \geq 2 \) in \( V \) .) The next result provides an example at the local level, and will be used in what follows. Lemma 5.11. Let \( {R}_{m} \) be the valuation ring of Lemma 5.7. Then \( {R}_{m} \) is integrally closed in its quotient field \( K \) . Proof. If \( {R}_{\mathrm{m}} \) ’s maximal ideal is \( \left( m\right) \), let \( a = u{m}^{-n} \), where \( n > 0 \), be a typical element of \( K \smallsetminus {R}_{\mathfrak{m}}\left( u\right. \), a unit in \( \left. {R}_{\mathfrak{m}}\right) \) . 
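The normality of \( \mathbb{Z} \) can be probed mechanically: by the rational root theorem, a rational number in lowest terms \( c/b \) satisfying a monic polynomial with integer coefficients must have \( b = 1 \) . The sketch below brute-forces this for monic quadratics; the function name and search bound are illustrative choices, not a complete integrality test.

```python
from fractions import Fraction

def monic_int_root(r, bound=30):
    """Is r a root of some monic quadratic X**2 + p*X + q with integer
    coefficients p, q in [-bound, bound]?  (A brute-force, degree-2
    stand-in for 'r is integral over Z'.)"""
    r = Fraction(r)
    return any(r * r + p * r + q == 0
               for p in range(-bound, bound + 1)
               for q in range(-bound, bound + 1))
```

Integers such as \( 2 \) and \( -5 \) are caught (roots of \( {X}^{2} - 4 \) and \( {X}^{2} + 5X \)), while \( 3/2 \) never is: \( {\left( 3/2\right) }^{2} + p\left( {3/2}\right) + q = 0 \) would force \( 9 + 6p + 4q = 0 \), impossible modulo 2.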
If \( a \) were integral over \( {R}_{\mathfrak{m}} \), there would be an equation of the form \( {m}^{-{nr}} + {c}_{1}{m}^{-n\left( {r - 1}\right) } + \ldots + {c}_{r} = 0\left( {{c}_{i} \in {R}_{m}}\right) \), which implies that \( 1/m = {d}_{1} + \ldots + {d}_{{nr} - 1}{m}^{{nr} - 1}\left( {{d}_{i} \in {R}_{m}}\right) \) . Since \( m \) is not a unit, \( 1/m \notin {R}_{m} \) . Yet \( {d}_{1} + \ldots + {d}_{{nr} - 1}{m}^{{nr} - 1} \in {R}_{m} \), a contradiction. Note that since any element outside all maximal ideals of a domain \( R \) is a unit of \( R \), we have that \( R = \cap {R}_{m} \), where intersection extends over all maximal ideals of \( R \) . Also, it follows immediately from Definition 5.9 that for any collection of subrings \( {R}_{\gamma } \) of a field \( K \), if each \( {R}_{\gamma } \) is integrally closed in \( K \), then \( \mathop{\bigcap }\limits_{\gamma }{R}_{\gamma } \) is too. From this it follows that \( R = \cap {R}_{m} \) is integrally closed in \( K \) . If \( D \) is a domain in which every nonzero proper prime ideal \( \mathfrak{p} \) is maximal, and if \( {D}_{\mathfrak{p}} \) is regular, then \( {D}_{\mathfrak{p}} \) is a principal ideal ring. This fact and Lemmas 5.7 and 5.11 convert (5.1.2) to (5.12.2) below. In Theorem 5.12, (5.12.1) and (5.12.2) together imply that any concrete model of \( D \) is a nonsingular curve. Our basic decomposition result is: Theorem 5.12. Let \( D \) be a Noetherian integral domain satisfying these properties: (5.12.1) Every nonzero proper prime ideal of \( D \) is maximal; (5.12.2) \( D \) is integrally closed in its quotient field. Then: If \( \mathfrak{a} \) is a nonzero integral ideal of \( D \), it is a product of finitely many (not necessarily distinct) prime ideals \( {\mathfrak{p}}_{i} \) : \[ \mathfrak{a} = {\mathfrak{p}}_{1} \cdot \ldots \cdot {\mathfrak{p}}_{r} \] (24) This factorization is unique up to the order of the prime ideals \( {\mathfrak{p}}_{i} \) . 
If \( \mathfrak{A} \) is a nonzero fractional ideal of \( D \), it is a quotient of products of prime integral ideals-that is, we may write \[ \mathfrak{A} = \frac{{\mathfrak{p}}_{1} \cdot \ldots \cdot {\mathfrak{p}}_{s}}{{\mathfrak{q}}_{1} \cdot \ldots \cdot {\mathfrak{q}}_{t}} \] (25) We may assume that no \( {\mathfrak{p}}_{i} \) equals any \( {\mathfrak{q}}_{j} \) ; then this representation is unique up to the order
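The simplest Dedekind domain is \( D = \mathbb{Z} \), where every ideal is principal and Theorem 5.12 reduces to unique factorization of integers; the representation (25) is realized by factoring numerator and denominator separately. A hedged sketch (the function name is illustrative; since `Fraction` reduces to lowest terms, no \( {\mathfrak{p}}_{i} \) equals any \( {\mathfrak{q}}_{j} \) automatically):

```python
from fractions import Fraction

def ideal_factorization(q):
    """Exponents of the prime ideals (p) in the fractional ideal (q) of Z,
    so that (q) = prod over p of (p)**e_p with e_p in Z."""
    q = abs(Fraction(q))                 # ideals ignore units, so take |q|
    exps = {}
    def absorb(n, sign):                 # trial division of the integer n
        p = 2
        while p * p <= n:
            while n % p == 0:
                exps[p] = exps.get(p, 0) + sign
                n //= p
            p += 1
        if n > 1:
            exps[n] = exps.get(n, 0) + sign
    absorb(q.numerator, +1)              # primes of the p_i part
    absorb(q.denominator, -1)            # primes of the q_j part
    return exps
```

For instance \( \left( {360}\right) = {\left( 2\right) }^{3}{\left( 3\right) }^{2}\left( 5\right) \), and the fractional ideal \( \left( {45/28}\right) \) has exponents \( \{ 3 : 2, 5 : 1, 2 : -2, 7 : -1\} \) .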
1057_(GTM217)Model Theory
Definition 3.3.10
Definition 3.3.10 If \( F \) is a formally real field, a real closure of \( F \) is a real closed algebraic extension of \( F \) . By Zorn’s Lemma, every formally real field \( F \) has a maximal formally real algebraic extension. This maximal extension is a real closure of \( F \) . The real closure of a formally real field may not be unique. Let \( F = \) \( \mathbb{Q}\left( X\right) ,{F}_{0} = F\left( \sqrt{X}\right) \), and \( {F}_{1} = F\left( \sqrt{-X}\right) \) . By Theorem 3.3.3, \( {F}_{0} \) and \( {F}_{1} \) are formally real. Let \( {R}_{i} \) be a real closure of \( {F}_{i} \) . There is no isomorphism between \( {R}_{0} \) and \( {R}_{1} \) fixing \( F \) because \( X \) is a square in \( {R}_{0} \) but not in \( {R}_{1} \) . Thus, some work needs to be done to show that any ordered field \( \left( {F, < }\right) \) has a real closure whose canonical order extends the ordering of \( F \) . Lemma 3.3.11 If \( \left( {F, < }\right) \) is an ordered field, \( 0 < x \in F \), and \( x \) is not a square in \( F \), then we can extend the ordering of \( F \) to \( F\left( \sqrt{x}\right) \) . Proof We can extend the ordering to \( F\left( \sqrt{x}\right) \) by \( 0 < a + b\sqrt{x} \) if and only if i) \( b = 0 \) and \( a > 0 \), or ii) \( b > 0 \) and \( \left( {a > 0}\right. \) or \( \left. {x > \frac{{a}^{2}}{{b}^{2}}}\right) \), or iii) \( b < 0 \) and \( \left( {a > 0}\right. \) and \( \left. {x < \frac{{a}^{2}}{{b}^{2}}}\right) \) . Corollary 3.3.12 i) If \( \left( {F, < }\right) \) is an ordered field, there is a real closure \( R \) of \( F \) such that the canonical ordering of \( R \) extends the ordering on \( F \) . ii) \( {\mathrm{{RCF}}}_{\forall } \) is the theory of ordered integral domains. ## Proof i) By successive applications of Lemma 3.3.11, we can find an ordered field \( \left( {L, < }\right) \) extending \( \left( {F, < }\right) \) such that every positive element of \( F \) has a square root in \( L \) . 
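The case analysis of Lemma 3.3.11 uses only the field operations and the order of \( F \), never a square root, so it can be checked against the real embedding \( a + b\sqrt{x} \mapsto a + b\sqrt{x} \in \mathbb{R} \) . A sketch for \( F = \mathbb{Q} \) (note that positivity with \( b < 0 \) forces \( a > 0 \) and \( {a}^{2} > {b}^{2}x \)):

```python
import math
from fractions import Fraction

def positive(a, b, x):
    """Decide 0 < a + b*sqrt(x) for rational a, b and positive rational x,
    using only exact field operations and the order of Q."""
    a, b, x = Fraction(a), Fraction(b), Fraction(x)
    if b == 0:
        return a > 0
    if b > 0:                                 # need b*sqrt(x) > -a
        return a > 0 or x > a * a / (b * b)
    return a > 0 and x < a * a / (b * b)      # b < 0: need a > |b|*sqrt(x)
```

Agreement with the floating-point sign of \( a + b\sqrt{2} \) over a grid of small integers confirms the case split; since \( \sqrt{2} \notin \mathbb{Q} \), no nonzero \( a + b\sqrt{2} \) is ambiguous.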
We now apply Zorn’s Lemma to find a maximal formally real algebraic extension \( R \) of \( L \) . Because every positive element of \( F \) is a square in \( R \), the canonical ordering of \( R \) extends the ordering of \( F \) . ii) Clearly, any substructure of a real closed field is an ordered integral domain. If \( \left( {D, < }\right) \) is an ordered integral domain and \( F \) is the fraction field of \( D \), then we can order \( F \) by \[ \frac{a}{b} > 0 \Leftrightarrow a, b > 0\text{ or }a, b < 0. \] By i), we can find \( \left( {R, < }\right) \vDash \) RCF such that \( \left( {F, < }\right) \subseteq \left( {R, < }\right) \) . Although a formally real field may have nonisomorphic real closures, if \( \left( {F, < }\right) \) is an ordered field there will be a unique real closure compatible with the ordering of \( F \) . Theorem 3.3.13 If \( \left( {F, < }\right) \) is an ordered field, and \( {R}_{1} \) and \( {R}_{2} \) are real closures of \( F \) where the canonical ordering extends the ordering of \( F \), then there is a unique field isomorphism \( \phi : {R}_{1} \rightarrow {R}_{2} \) that is the identity on \( F \) . Note that because the ordering of a real closed field is definable in \( {\mathcal{L}}_{\mathrm{r}} \), \( \phi \) also preserves the ordering. We often say that any ordered field \( \left( {F, < }\right) \) has a unique real closure. By this we mean that there is a unique real closure that extends the given ordering. Corollary 3.3.14 RCF has algebraically prime models. Proof Let \( \left( {D, < }\right) \) be an ordered domain, and let \( \left( {R, < }\right) \) be the real closure of the fraction field compatible with the ordering of \( D \) . Let \( \left( {F, < }\right) \) be any real closed field extension of \( \left( {D, < }\right) \) . Let \( K = \{ \alpha \in F : \alpha \) is algebraic over the fraction field of \( D\} \) . By Theorem 3.3.5, it is easy to see that \( K \) is real closed. 
Because the ordering of \( K \) extends \( \left( {D, < }\right) \), by Theorem 3.3.13 there is an isomorphism \( \phi : R \rightarrow K \) fixing \( D \) . We are now ready to prove quantifier elimination. Theorem 3.3.15 The theory RCF admits elimination of quantifiers in \( {\mathcal{L}}_{\text{or }} \) . Proof Because RCF has algebraically prime models, by Corollary 3.1.12, we need only show that \( F{ \prec }_{s}K \) when \( F, K \vDash \operatorname{RCF} \) and \( F \subseteq K \) . Let \( \phi \left( {v,\bar{w}}\right) \) be a quantifier-free formula and let \( \bar{a} \in F, b \in K \) be such that \( K \vDash \phi \left( {b,\bar{a}}\right) \) . We must find \( {b}^{\prime } \in F \) such that \( F \vDash \phi \left( {{b}^{\prime },\bar{a}}\right) \) . Note that \[ p\left( \bar{X}\right) \neq 0 \leftrightarrow \left( {p\left( \bar{X}\right) > 0 \vee - p\left( \bar{X}\right) > 0}\right) \] and \[ p\left( \bar{X}\right) \ngtr 0 \leftrightarrow \left( {p\left( \bar{X}\right) = 0 \vee - p\left( \bar{X}\right) > 0}\right) . \] With this in mind, we may assume that \( \phi \) is a disjunction of conjunctions of formulas of the form \( p\left( {v,\bar{w}}\right) = 0 \) or \( p\left( {v,\bar{w}}\right) > 0 \) . As in Theorem 3.2.2, we may assume that there are polynomials \( {p}_{1},\ldots ,{p}_{n} \) and \( {q}_{1},\ldots ,{q}_{m} \in F\left\lbrack X\right\rbrack \) such that \[ \phi \left( {v,\bar{a}}\right) \leftrightarrow \mathop{\bigwedge }\limits_{{i = 1}}^{n}{p}_{i}\left( v\right) = 0 \land \mathop{\bigwedge }\limits_{{i = 1}}^{m}{q}_{i}\left( v\right) > 0. \] If any of the polynomials \( {p}_{i}\left( X\right) \) is nonzero, then \( b \) is algebraic over \( F \) . Because \( F \) has no proper formally real algebraic extensions, in this case \( b \in F \) . 
Thus, we may assume that \[ \phi \left( {v,\bar{a}}\right) \leftrightarrow \mathop{\bigwedge }\limits_{{i = 1}}^{m}{q}_{i}\left( v\right) > 0. \] The polynomial \( {q}_{i}\left( X\right) \) can only change sign at zeros of \( {q}_{i} \), and because \( F \) is real closed, all zeros of \( {q}_{i} \) lie in \( F \) . Thus, we can find \( {c}_{i},{d}_{i} \in F \) such that \( {c}_{i} < b < {d}_{i} \) and \( {q}_{i}\left( x\right) > 0 \) for all \( x \in \left( {{c}_{i},{d}_{i}}\right) \) . Let \( c = \max \left( {{c}_{1},\ldots ,{c}_{m}}\right) \) and \( d = \min \left( {{d}_{1},\ldots ,{d}_{m}}\right) \) . Then \( c < d \) and \( \mathop{\bigwedge }\limits_{{i = 1}}^{m}{q}_{i}\left( x\right) > 0 \) whenever \( c < x < d \) . Thus, we can find \( {b}^{\prime } \in F \) such that \( F \vDash \phi \left( {{b}^{\prime },\bar{a}}\right) \) . Corollary 3.3.16 RCF is complete, model complete, and decidable. In particular, RCF is the theory of \( \left( {\mathbb{R},+,\cdot , < }\right) \) . Proof By quantifier elimination, RCF is model complete. Every real closed field has characteristic zero; thus, the rational numbers are embedded in every real closed field. Therefore, \( {\mathbb{R}}_{\text{alg }} \), the field of real algebraic numbers (i.e., the real closure of the rational numbers), is a subfield of any real closed field. Thus, for any real closed field \( R \), \( {\mathbb{R}}_{\text{alg }} \prec R \), so \( R \equiv {\mathbb{R}}_{\text{alg }} \equiv \mathbb{R} \) . Because RCF is complete and recursively axiomatized, it is decidable. ## Semialgebraic Sets Quantifier elimination for real closed fields has a geometric interpretation. Definition 3.3.17 Let \( F \) be an ordered field. We say that \( X \subseteq {F}^{n} \) is semialgebraic if it is a Boolean combination of sets of the form \( \{ \bar{x} : p\left( \bar{x}\right) > 0\} \), where \( p\left( \bar{X}\right) \in F\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) . 
By quantifier elimination, the semialgebraic sets are exactly the definable sets. The next corollary is a geometric restatement of quantifier elimination. It is analogous to Chevalley's Theorem (3.2.8) for algebraically closed fields. Corollary 3.3.18 (Tarski-Seidenberg Theorem) The semialgebraic sets are closed under projection. The next corollary is a typical application of quantifier elimination. Corollary 3.3.19 If \( F \vDash {RCF} \) and \( A \subseteq {F}^{n} \) is semialgebraic, then the closure (in the Euclidean topology) of \( A \) is semialgebraic. Proof We repeat the main idea of Lemma 1.3.3. Let \( d \) be the definable function \[ d\left( {{x}_{1},\ldots ,{x}_{n},{y}_{1},\ldots ,{y}_{n}}\right) = z\text{ if and only if }z \geq 0 \land {z}^{2} = \mathop{\sum }\limits_{{i = 1}}^{n}{\left( {x}_{i} - {y}_{i}\right) }^{2}. \] The closure of \( A \) is \[ \{ \bar{x} : \forall \epsilon > 0\exists \bar{y} \in A\;d\left( {\bar{x},\bar{y}}\right) < \epsilon \} . \] Because this set is definable, it is semialgebraic. We say that a function is semialgebraic if its graph is semialgebraic. The next result shows how we can use the completeness of RCF to transfer results from \( \mathbb{R} \) to other real closed fields. Corollary 3.3.20 Let \( F \) be a real closed field. If \( X \subseteq {F}^{n} \) is closed and bounded, and \( f \) is a continuous semialgebraic function, then \( f\left( X\right) \) is closed and bounded. Proof If \( F = \mathbb{R} \), then \( X \) is closed and bounded if and only if \( X \) is compact. Because the continuous image of a compact set is compact, the continuous image of a closed and bounded set is closed and bounded. In general, there are \( \bar{a},\bar{b} \in F \) and formulas \( \phi \) and \( \psi \) such that \( \phi \left( {\bar{x},\bar{a}}\right) \) defines \( X \) and \( \psi \left( {\bar{x}, y,\bar{b}}\right) \) defines \( f\left( \bar{x}\right) = y \) . 
There is a sentence \( \Phi \) asserting: \( \forall \bar{u},\bar{w} \) [if \( \psi \left( {\bar{x}, y,\bar{w}}\right) \) defines a continuous function with domain \( \phi \left( {\bar{x},\bar{u}}\right) \) and \( \phi \left( {\bar{x},\bar{u}}\right) \) is a closed and bounded set, then th
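The Tarski-Seidenberg theorem can be seen in the simplest case: the projection of the open unit disk \( \{ \left( {x, y}\right) : 1 - {x}^{2} - {y}^{2} > 0\} \) onto the \( x \) -axis is the semialgebraic set \( \{ x : 1 - {x}^{2} > 0\} \), with the existential quantifier over \( y \) eliminated. A numerical sketch (a finite \( y \) -grid stands in for \( \exists y \) ; the grid sizes are arbitrary choices):

```python
import numpy as np

# Semialgebraic set in the plane: the open unit disk.
def in_disk(x, y):
    return 1 - x**2 - y**2 > 0

ys = np.linspace(-2.0, 2.0, 4001)       # y-grid; contains a point at (or
                                        # extremely near) y = 0

def exists_y(x):                        # finite stand-in for "there is a y"
    return any(in_disk(x, y) for y in ys)

# The projection should match the quantifier-free description 1 - x^2 > 0.
agrees = all(exists_y(x) == (1 - x**2 > 0) for x in np.linspace(-2.0, 2.0, 81))
```

Whenever \( 1 - {x}^{2} > 0 \), the grid point nearest \( y = 0 \) already witnesses membership, so the two descriptions agree at every sampled \( x \) .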
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 10.3.10
Definition 10.3.10 A field \( k \subset \mathbb{C} \) is a field of definition of \( \Gamma \) if there is a \( k \) -form \( H \) of \( G \) and an isomorphism \( \rho : G \rightarrow H \) defined over some finite extension of \( k \) such that \( \rho \left( \Gamma \right) \subset {H}_{k} \) . Theorem 10.3.11 (Vinberg) There is a least field of definition of \( \Gamma \) which is an invariant of the commensurability class of \( \Gamma \) . Furthermore, it is the field \( \mathbb{Q} \) (tr Ad \( \gamma : \gamma \in \Gamma \) ), where Ad is the adjoint representation of \( G \) . In the special case where \( G = {\mathrm{{PGL}}}_{2} \) and \( \Gamma \) is discrete of finite covolume, then \( \Gamma \) is Zariski dense by Borel’s density theorem. Furthermore, \( {k\Gamma } = \) \( \mathbb{Q}\left( {\operatorname{tr}\text{Ad}\gamma : \gamma \in \Gamma }\right) \) (see Exercise 3.3, No. 4). Using Theorem 10.3.7, one can prove Vinberg's Theorem (see Exercise 10.3, No. 5) in this particular case and the \( k \) -form which corresponds to the least field of definition is then the invariant quaternion algebra, as described in Chapter 3. ## Exercise 10.3 1. In the notation used to describe the restriction of scalars functor, show that \( \tau \left( {s}_{ij}^{\prime }\right) = {s}_{{i\tau }\left( j\right) }^{\prime } \) . 2. Complete the proof of Lemma 10.3.3. 3. Let \( M \) be a compact hyperbolic 3-manifold. Then \( M \) admits "hidden symmetries" if there is an isometry between two finite covers of \( M \) which is not the lift of an isometry of \( M \) . If \( M \) is non-arithmetic, show that there is a finite cover \( {M}^{\prime } \) of \( M \) such that all hidden symmetries come from isometries of \( {M}^{\prime } \) . If \( M \) is arithmetic, show that \( M \) always has hidden symmetries. 4. Let \( \left( {V, q}\right) \) be a fixed non-degenerate three-dimensional quadratic space over a number field \( k \) . 
Show that every \( k \) -form of \( \mathrm{{SO}}\left( {V, q}\right) \) is of the form \( \mathrm{{SO}}\left( {{V}^{\prime },{q}^{\prime }}\right) \) where \( \left( {{V}^{\prime },{q}^{\prime }}\right) \) is defined over \( k \) and is isometric to \( \left( {V, q}\right) \) over some extension of \( k \) . 5. If \( G = {\mathrm{{PGL}}}_{2}\left( \mathbb{C}\right) \) and \( \Gamma \) is a finite-covolume Kleinian group, prove Vin-berg’s Theorem in this case and show that \( {k\Gamma } \) is the least field of definition. ## 10.4 Reflection Groups If \( P \) is a polyhedron in \( {\mathbf{H}}^{3} \) whose dihedral angles are submultiples of \( \pi \) , then the group \( \Gamma \left( P\right) \) generated by reflections in the faces of \( P \) is a discrete subgroup of Isom \( {\mathbf{H}}^{3} \) and the orientation-preserving subgroup \( {\Gamma }^{ + }\left( P\right) \) is a Kleinian group. Clearly, if \( P \) is compact or of finite volume, then \( {\Gamma }^{ + }\left( P\right) \) is cocompact or of finite covolume, respectively. Several cases have been examined in Chapter 4 to determine their invariant trace field and quaternion algebra. This was usually done by suitably locating the polyhedron in the upper half-space model of \( {\mathbf{H}}^{3} \) and specifically calculating generating matrices. Utilising the Lobachevski model of \( {\mathbf{H}}^{3} \), such polyhedra can be conveniently described by their Gram matrix. This matrix then provides a link to determining the invariant trace field and quaternion algebra and arithmeticity or otherwise of such groups without specifically positioning the polyhedron in \( {\mathbf{H}}^{3} \) (cf. §4.7.1 and §4.7.2). We describe this in this section, drawing on the work of Vinberg, which also applies, more generally, to \( {\mathbf{H}}^{n} \) . Recall the Lobachevski model in the language used at the start of \( §{10.2} \) . 
A hyperbolic plane in \( {\mathbf{H}}^{3} \) is the projective image of a three-dimensional linear hyperbolic subspace \( S \) of \( V \) . The orthogonal complement in \( \left( {V, q}\right) \) of \( S \) will be a one-dimensional subspace spanned by a vector \( \mathbf{e} \) such that \( q\left( \mathbf{e}\right) > 0 \) . Thus for a polyhedron \( P \), we choose a set of outward-pointing normal vectors \( \left\{ {{\mathbf{e}}_{1},{\mathbf{e}}_{2},\ldots ,{\mathbf{e}}_{n}}\right\} \), one for each face, and normalise so that each \( q\left( {\mathbf{e}}_{i}\right) = 1 \) . The associated bilinear form \( B \) on \( V \) is defined by \( B\left( {\mathbf{x},\mathbf{y}}\right) = \) \( q\left( {\mathbf{x} + \mathbf{y}}\right) - q\left( \mathbf{x}\right) - q\left( \mathbf{y}\right) \) and \( P \) is the image of \[ \left\{ {\mathbf{x} \in V \mid B\left( {\mathbf{x},{\mathbf{e}}_{i}}\right) \leq 0\text{ for }i = 1,2,\ldots, n}\right\} . \] The Gram matrix \( G\left( P\right) \) of \( P \) is then the \( n \times n \) matrix \( G\left( P\right) = \left\lbrack {a}_{ij}\right\rbrack \), where \( {a}_{ij} = B\left( {{\mathbf{e}}_{i},{\mathbf{e}}_{j}}\right) \) . The diagonal entries of this matrix are 2 . If the faces \( {F}_{i} \) and \( {F}_{j} \) meet with dihedral angle \( {\theta }_{ij} \) and \( {F}_{i} \) is the projective image of \( {\mathbf{e}}_{i}^{ \bot } \), then \( B\left( {{\mathbf{e}}_{i},{\mathbf{e}}_{j}}\right) = - 2\cos {\theta }_{ij} \) . If the faces do not intersect (and are not parallel), then they have a unique common perpendicular in \( {\mathbf{H}}^{3} \) whose hyperbolic length is \( {\ell }_{ij} \) . In that case, \( B\left( {{\mathbf{e}}_{i},{\mathbf{e}}_{j}}\right) = - 2\cosh {\ell }_{ij} \) . The matrix \( G\left( P\right) \) is \( n \times n \) with \( n \geq 4 \) and \( n = 4 \) if and only if \( P \) is a tetrahedron. 
In all cases, the matrix \( G\left( P\right) \) has rank 4 and signature \( \left( {3,1}\right) \) over \( \mathbb{R} \) . Indeed, necessary and sufficient conditions for the existence of acute-angled polyhedra of finite volume in \( {\mathbf{H}}^{3} \) (and, more generally, in \( {\mathbf{H}}^{n} \) for \( n \geq 3 \) ) can be described in terms of the matrix \( G\left( P\right) \) . For the moment, consider the following fields obtained from \( G\left( P\right) \) : \[ K\left( P\right) = \mathbb{Q}\left( \left\{ {{a}_{ij} : i, j = 1,2,\ldots, n}\right\} \right) . \] (10.16) For any subset \( \left\{ {{i}_{1},{i}_{2},\ldots ,{i}_{r}}\right\} \subset \{ 1,2,\ldots, n\} \), define the cyclic product by \[ {b}_{{i}_{1}{i}_{2}\cdots {i}_{r}} = {a}_{{i}_{1}{i}_{2}}{a}_{{i}_{2}{i}_{3}}\cdots {a}_{{i}_{r}{i}_{1}} \] (10.17) and the field \( k\left( P\right) \) by \[ k\left( P\right) = \mathbb{Q}\left( \left\{ {b}_{{i}_{1}{i}_{2}\cdots {i}_{r}}\right\} \right) . \] (10.18) It is not difficult to see that the non-zero cyclic products \( {b}_{{i}_{1}{i}_{2}\cdots {i}_{r}} \) correspond to closed paths \( \left\{ {{i}_{1},{i}_{2},\ldots ,{i}_{r},{i}_{1}}\right\} \) in the Coxeter symbol for the polyhedron (see Exercise 10.4, No. 1). With \( \left\{ {{i}_{1},{i}_{2},\ldots ,{i}_{r}}\right\} \) as defined above, also define \[ {\mathbf{v}}_{{i}_{1}{i}_{2}\cdots {i}_{r}} = {a}_{1{i}_{1}}{a}_{{i}_{1}{i}_{2}}\cdots {a}_{{i}_{r - 1}{i}_{r}}{\mathbf{e}}_{{i}_{r}}. \] (10.19) These vectors arise from paths starting at the vertex labelled 1 in the Coxeter symbol of the polyhedron. Let \( M \) be the \( k\left( P\right) \) -subspace of \( V \) spanned by all \( {\mathbf{v}}_{{i}_{1}{i}_{2}\cdots {i}_{r}} \) . 
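As a concrete check of the rank and signature claim, one can take the compact hyperbolic Coxeter tetrahedron with linear Coxeter symbol \( \left\lbrack {4,3,5}\right\rbrack \) (one of the Lannér simplices): consecutive faces meet at dihedral angles \( \pi /4,\pi /3,\pi /5 \), and all other pairs meet at \( \pi /2 \) . A sketch computing \( G\left( P\right) \) and its eigenvalue signs:

```python
import numpy as np

# Gram matrix of the compact hyperbolic Coxeter tetrahedron [4,3,5]:
# diagonal entries 2; a_ij = -2 cos(theta_ij) for consecutive faces,
# and theta_ij = pi/2 (so a_ij = 0) for the remaining pairs.
angles = [np.pi / 4, np.pi / 3, np.pi / 5]
G = 2.0 * np.eye(4)
for i, th in enumerate(angles):
    G[i, i + 1] = G[i + 1, i] = -2.0 * np.cos(th)

eig = np.linalg.eigvalsh(G)         # G is symmetric
pos = int(np.sum(eig > 1e-9))       # number of positive eigenvalues
neg = int(np.sum(eig < -1e-9))      # number of negative eigenvalues
# Expect signature (3, 1): rank 4 with exactly one negative direction.
```

Here the leading principal minors are \( 2,2,2 \) and a negative determinant, so the form indeed has signature \( \left( {3,1}\right) \) over \( \mathbb{R} \) .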
Note that the value of the form on these spanning vectors lies in \( k\left( P\right) \), since \( B\left( {{\mathbf{v}}_{{i}_{1}{i}_{2}\cdots {i}_{r}},{\mathbf{v}}_{{j}_{1}{j}_{2}\cdots {j}_{s}}}\right) = {b}_{1{i}_{1}{i}_{2}\cdots {i}_{r}{j}_{s}{j}_{s - 1}\cdots {j}_{1}} \in k\left( P\right) \) . Thus, with the restriction of the quadratic form \( q \) on \( V,\left( {M, q}\right) \) is a quadratic space over \( k\left( P\right) \) . When \( P \) has finite volume, its Coxeter symbol is connected and so, for any \( {i}_{r} \), there exists a non-zero \( {\mathbf{v}}_{{i}_{1}{i}_{2}\cdots {i}_{r}} \) as described at (10.19). Thus \( M \otimes \mathbb{R} = V \) so that \( M \) will be four-dimensional over \( k\left( P\right) \) and have signature \( \left( {3,1}\right) \) over \( \mathbb{R} \) (see Exercise 10.4, No. 1). Let \( d \) be the discriminant of the quadratic space \( \left( {M, q}\right) \) so that \( d \in k\left( P\right) \) and \( d < 0 \) . Recall that \( {\Gamma }^{ + }\left( P\right) \) is the subgroup of orientation-preserving isometries in the group generated by reflections in the faces of \( P \) and, as such, is a subgroup of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) . The main purpose of this section is to prove the following: Theorem 10.4.1 \[ k{\Gamma }^{ + }\left( P\right) = k\left( P\right) \left( \sqrt{d}\right) \] to identify the invariant quaternion algebra \( A{\Gamma }^{ + }\left( P\right) \) from the Gram matrix and to investigate when the group \( {\Gamma }^{ + }\left( P\right) \) is arithmetic. Thus the invariant trace field can be determined directly from the Gram matrix and, hence, directly from the geometry of \( P \) . Let \( {r}_{i} \) denote the reflection in the face \( {F}_{i} \) for \( i = 1,2,\ldots, n \) so that \( \Gamma \left( P\right) = \left\langle {{r}_{1},{r}_{2},\ldots ,{r}_{n}}\right\rangle \) . Let \( {\gamma }_{ij} = {r}_{i}{r}_{j} \) so that \( {\gamma }_{ij} \in {\Gamma }^{ + }\left( P\right) \) . 
Furthermore, regarded as an element of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \), tr \( {\gamma }_{ij} = {a}_{ij} \), at least up to sign. Lemma 10.4.2 \[ {b}_{{i}_{1}{i}_{2}\cdots {i}_{r}} \in k{\Gamma }^{ + }\left( P\right) \] Proof: Note that \( {b}_{{i}_{1}{i}_{2}\cdots {i}_{r}} = \operatorname{tr}{\gamma }_{{i}_{1}{i}_{2}}\operatorname{tr}{\gamma }_{{i}_{2}{i}_{3}}\cdots \operatorname{tr}{\gamma }_{{i}_{r}{i}_{1}} \) . For brevity, let \( {\gamma }_{j} = \) \( {\gamma }_{{i}_{j}{i}_{j + 1}} \)
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 1.115
Definition 1.115. For a function \( f : X \rightarrow {\mathbb{R}}_{\infty } \) finite at \( x \in X \), the slope (or strong slope or calmness rate) of \( f \) at \( x \) is the function \( \left| \nabla \right| \left( f\right) : X \rightarrow {\mathbb{R}}_{\infty } \) given by \[ \left| \nabla \right| \left( f\right) \left( x\right) \mathrel{\text{:=}} \mathop{\limsup }\limits_{{v \rightarrow x, v \neq x}}\frac{{\left( f\left( x\right) - f\left( v\right) \right) }^{ + }}{d\left( {x, v}\right) } \mathrel{\text{:=}} \mathop{\inf }\limits_{{\varepsilon > 0}}\mathop{\sup }\limits_{{v \in B\left( {x,\varepsilon }\right) \smallsetminus \{ x\} }}\frac{{\left( f\left( x\right) - f\left( v\right) \right) }^{ + }}{d\left( {x, v}\right) }, \] where the positive part \( {r}^{ + } \) of an extended real number \( r \) is \( \max \left( {r,0}\right) \) . The terminology "calmness rate" is justified by the following observation. If \( f \) is calm at \( x \in X \) in the sense that \( f\left( x\right) < + \infty \) and for some \( r, c > 0 \) one has \[ \forall v \in B\left( {x, r}\right) ,\;f\left( v\right) \geq f\left( x\right) - {cd}\left( {v, x}\right) , \] then one has \( \left| \nabla \right| \left( f\right) \left( x\right) \leq c \) . In fact, \( \left| \nabla \right| \left( f\right) \left( x\right) \) is the infimum of the constants \( c > 0 \) such that the preceding inequality holds for some \( r > 0 \) . We observe that the terms "calmness rate" and "downward slope" are more justified than the classical term "slope," since the behavior of \( f \) on the superlevel set \( \{ u \in X : f\left( u\right) > f\left( x\right) \} \) is not involved in the definition. 
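The slope can be approximated numerically by replacing the limsup with a supremum over a fixed small punctured ball; this overestimates slightly, but it exhibits the two standard behaviors: for the differentiable function \( f\left( t\right) = {t}^{2} \) at \( x = 1 \) the slope equals \( \left| {f}^{\prime }\left( 1\right) \right| = 2 \), while \( f\left( t\right) = - {t}^{2} \) has slope \( 0 \) at \( x = 0 \) although \( 0 \) is not a local minimizer. A sketch (the grid parameters are arbitrary choices):

```python
def slope(f, x, eps=1e-3, n=2001):
    """Grid approximation of the strong slope |nabla|(f)(x): the supremum
    of (f(x) - f(v))^+ / |x - v| over grid points v of the punctured ball
    around x of radius eps (a slight overestimate of the true limsup)."""
    best = 0.0
    for k in range(n):
        v = x - eps + 2.0 * eps * k / (n - 1)
        if v == x:
            continue
        best = max(best, max(f(x) - f(v), 0.0) / abs(x - v))
    return best
```

For \( f\left( t\right) = \left| t\right| \) at \( 0 \) the positive part vanishes identically, so the approximation returns exactly \( 0 \), consistent with \( 0 \) being a minimizer.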
Moreover, with the convention \( 0/0 = 0 \), one has \[ \left| \nabla \right| \left( f\right) \left( x\right) = \max \left( {\mathop{\inf }\limits_{{\varepsilon > 0}}\mathop{\sup }\limits_{{v \in B\left( {x,\varepsilon }\right) \smallsetminus \{ x\} }}\frac{f\left( x\right) - f\left( v\right) }{d\left( {x, v}\right) },0}\right) = \mathop{\limsup }\limits_{{v \rightarrow x}}\frac{f\left( x\right) - f\left( v\right) }{d\left( {x, v}\right) }. \] We also note that \( \left| \nabla \right| \left( f\right) \left( x\right) = 0 \) when \( x \) is a local minimizer of \( f \), but the converse does not hold, as the example of \( f : \mathbb{R} \rightarrow \mathbb{R} \) given by \( f\left( t\right) = - {t}^{2} \) shows for \( x = 0 \) . The notation we use (that is a slight variant of the original notation) evokes the following example and takes into account the fact that in general one cannot speak of the gradient of \( f \) or of the derivative of \( f \) . Here we use notions from Chap. 2. Example-Exercise. If \( f \) and \( g \) are finite at \( x \) and tangent at \( x \) in the sense that \( g\left( v\right) = \) \( f\left( v\right) + o\left( {d\left( {v, x}\right) }\right) \), then \( \left| \nabla \right| \left( g\right) \left( x\right) = \left| \nabla \right| \left( f\right) \left( x\right) \) . Deduce from this property that if \( X \) is a normed space and if \( f \) is Fréchet differentiable at \( x \in X \) then \( \left| \nabla \right| \left( f\right) \left( x\right) = \parallel {Df}\left( x\right) \parallel \) . Remark. Let \( {\delta }_{f} \) be a decrease index for \( f \) and let \( {\delta }_{f}^{\prime } : X \rightarrow {\overline{\mathbb{R}}}_{ + } \) be such that \( {\delta }_{f}^{\prime } \leq {\delta }_{f} \) . Then \( {\delta }_{f}^{\prime } \) is a decrease index for \( f \) . This simple observation shows the versatility of the notion of decrease index. Remark-Exercise. 
A function \( {\delta }_{f} \) on \( X \) is a decrease index for \( f \) if and only if its lower semicontinuous hull \( \overline{{\delta }_{f}} \) given by \( \overline{{\delta }_{f}}\left( x\right) \mathrel{\text{:=}} \mathop{\liminf }\limits_{{v \rightarrow x}}{\delta }_{f}\left( v\right) \) is a decrease index for \( f \) . [Hint: Use the fact that the infimum of the lower semicontinuous hull \( \bar{\varphi } \) of a function \( \varphi \) on an open subset \( B \) of \( X \) coincides with the infimum of \( \varphi \) on \( B \) .] Taking into account the preceding two remarks, the next result states in essence that the slope is in some sense the best decrease index: it is almost the largest one. Proposition 1.116. For every bounded-below lower semicontinuous function \( f \) on a complete metric space \( X \), the slope of \( f \) is a decrease index. Moreover, for every decrease index \( {\delta }_{f} \) for \( f \), the lower semicontinuous hull \( \overline{{\delta }_{f}} \) of \( {\delta }_{f} \) satisfies \( \overline{{\delta }_{f}} \leq \left| \nabla \right| \left( f\right) \) . This result is a consequence of a lemma that brings some additional information. Lemma 1.117. Let \( X \) be a complete metric space and let \( f \) be a lower semicontinuous function on the open ball \( B \mathrel{\text{:=}} B\left( {x, r}\right) \) . Suppose \( \inf f\left( B\right) > - \infty \) and let \( \beta \mathrel{\text{:=}} f\left( x\right) - \inf f\left( B\right) \) . Then for all \( t \in \left( {0,1}\right) \) there exists \( u \in B\left\lbrack {x,{rt}}\right\rbrack \) such that \( \left| \nabla \right| \left( f\right) \left( u\right) \leq \beta /{rt} \) and \( f\left( u\right) \leq f\left( x\right) \) . Proof. We may suppose \( \beta < + \infty \) . We apply Theorem 1.88 to the restriction of \( f \) to the closed ball \( B\left\lbrack {x,{rs}}\right\rbrack \), where \( s \in \left( {t,1}\right) \) .
Then there exists \( u \in B\left\lbrack {x,{rt}}\right\rbrack \) satisfying \( f\left( u\right) \leq f\left( x\right) \) that is a minimizer of \( f\left( \cdot \right) + \left( {\beta /{rt}}\right) d\left( {\cdot, u}\right) \) on \( B\left\lbrack {x,{rs}}\right\rbrack \), hence on \( B\left\lbrack {u,\sigma }\right\rbrack \) for \( \sigma \mathrel{\text{:=}} r\left( {s - t}\right) \), so that \[ \left| \nabla \right| \left( f\right) \left( u\right) = \mathop{\inf }\limits_{{\rho > 0}}\mathop{\sup }\limits_{{w \in B\left( {u,\rho }\right) \smallsetminus \{ u\} }}\frac{{\left( f\left( u\right) - f\left( w\right) \right) }^{ + }}{d\left( {u, w}\right) } \leq \mathop{\sup }\limits_{{w \in B\left( {u,\sigma }\right) \smallsetminus \{ u\} }}\frac{{\left( f\left( u\right) - f\left( w\right) \right) }^{ + }}{d\left( {u, w}\right) } \leq \frac{\beta }{rt}. \] Proof of Proposition 1.116. Given \( x \in X \) and \( r, c > 0 \) such that \( {cr} > \beta \mathrel{\text{:=}} f\left( x\right) - \inf f\left( {B\left( {x, r}\right) }\right) \), one picks \( t \in \left( {0,1}\right) \) such that \( {crt} > \beta \) . Then the lemma yields some \( u \in B\left\lbrack {x,{rt}}\right\rbrack \subset B\left( {x, r}\right) \) such that \( \left| \nabla \right| \left( f\right) \left( u\right) \leq \beta /{rt} < c \), so that (1.27) is satisfied and \( \left| \nabla \right| \left( f\right) \) is a decrease index. Now let us prove that for every decrease index \( {\delta }_{f} \) of \( f \) and \( x \in X \) one has \( \overline{{\delta }_{f}}\left( x\right) \leq \left| \nabla \right| \left( f\right) \left( x\right) \) . Let \( c > \left| \nabla \right| \left( f\right) \left( x\right) \) .
Then for every \( b \in \left( {\left| \nabla \right| \left( f\right) \left( x\right), c}\right) \), there exists some \( s > 0 \) such that \[ \mathop{\sup }\limits_{{u \in B\left( {x, s}\right) \smallsetminus \{ x\} }}\frac{{\left( f\left( x\right) - f\left( u\right) \right) }^{ + }}{d\left( {u, x}\right) } < b \] hence for all \( r \in \left( {0, s}\right) \) and all \( u \in B\left( {x, r}\right) \), one has \( f\left( x\right) \leq f\left( u\right) + {bd}\left( {u, x}\right) \leq f\left( u\right) + {br} \) and \( f\left( x\right) \leq \inf f\left( {B\left( {x, r}\right) }\right) + {br} < \inf f\left( {B\left( {x, r}\right) }\right) + {cr} \) . Thus, by (1.27) there exists some \( u \in B\left( {x, r}\right) \) such that \( {\delta }_{f}\left( u\right) < c \) . Thus \( \overline{{\delta }_{f}}\left( x\right) = \mathop{\sup }\limits_{{r \in \left( {0, s}\right) }}\mathop{\inf }\limits_{{u \in B\left( {x, r}\right) }}{\delta }_{f}\left( u\right) \leq c \) . Since \( c \) is arbitrarily close to \( \left| \nabla \right| \left( f\right) \left( x\right) \), we get \( \overline{{\delta }_{f}}\left( x\right) \leq \left| \nabla \right| \left( f\right) \left( x\right) \) . It will be useful to have at our disposal a parameterized version of the decrease principle. The novelty here lies in the (inward) continuous dependence of the solution set on the parameter \( w \) . Theorem 1.118 (Parameterized decrease principle). Let \( W \) be a topological space and let \( X \) be a complete metric space. Let \( f : W \times X \rightarrow {\overline{\mathbb{R}}}_{ + } \) and let \( \left( {\bar{w},\bar{x}}\right) \in S \mathrel{\text{:=}} \left\{ {\left( {w, x}\right) \in W \times X : f\left( {w, x}\right) = 0}\right\} \) .
For each \( w \in W \) let \( {\delta }_{w} : X \rightarrow {\overline{\mathbb{R}}}_{ + } \) be a decrease index for \( {f}_{w} \mathrel{\text{:=}} f\left( {w, \cdot }\right) \) and let \( S\left( w\right) \mathrel{\text{:=}} {f}_{w}^{-1}\left( {\{ 0\} }\right) \) . Suppose there exist \( c, r > 0 \) and a neighborhood \( U \) of \( \bar{w} \) such that (a) \( {\delta }_{w}\left( x\right) \geq c \) for all \( \left( {w, x}\right) \in \left( {U \times B\left( {\bar{x}, r}\right) }\right) \smallsetminus S \) ; (b) The multimap \( w \rightrightarrows \operatorname{epi}{f}_{w} \) is inward continuous at \( \left( {\bar{w},\left( {\bar{x},0}\right) }\right) \) ; (c) For all \( w \in U \) the function \( {f}_{w} \) is such that \( {f}_{w}^{-1} \) is closed at 0 . Then the multimap \( S\left( \cdot \right) \) is inward continuous at \( \left( {\bar{w},\bar{x}}\right) \) . Moreover, for all \( s \in \left( {0, r/2}\right) \) there exists a neighborhood \( V \) of \( \bar{w} \) such that fo
1114_(GTM269)Locally Convex Spaces
Definition 5.35
Definition 5.35. Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and \( T : X \rightarrow Y \) is a continuous linear transformation. Then \( T \) has closed range when \( T\left( X\right) \) is a closed subspace of \( Y \) . Some of what matters here appears in disguise in Theorem 5.2. The following is simply a clarification. Theorem 5.36. Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and \( T : X \rightarrow Y \) is a continuous linear transformation. Then (a) \( T{\left( X\right) }^{ \bot } = \ker {T}^{ * } \) , (b) \( {\left( \ker {T}^{ * }\right) }_{ \bot } = T{\left( X\right) }^{ - } \) , (c) \( {T}^{ * }{\left( {Y}^{ * }\right) }_{ \bot } = \ker T \), and (d) \( {\left( \ker T\right) }^{ \bot } \) is the weak-* closure of \( {T}^{ * }\left( {Y}^{ * }\right) \) . Proof. In the third sentence of Theorem 5.2, setting \( A = X \) gives \( T{\left( X\right) }^{ \circ } = {\left( {T}^{ * }\right) }^{-1}\left( {X}^{ \circ }\right) = \ker {T}^{ * } \) ; but \( T{\left( X\right) }^{ \circ } = T{\left( X\right) }^{ \bot } \) since \( T\left( X\right) \) is a subspace. This gives (a); it also gives (c) by using weak-* topologies on the dual spaces. As for (b): \( {\left( T{\left( X\right) }^{ \bot }\right) }_{ \bot } = {\left( \ker {T}^{ * }\right) }_{ \bot } \) by part (a). But \( {\left( {E}^{ \bot }\right) }_{ \bot } = {E}^{ - } \) for subspaces by the bipolar theorem, so \( {\left( \ker {T}^{ * }\right) }_{ \bot } = T{\left( X\right) }^{ - } \) . Part (d) now follows by again using weak-* topologies. Moral: If all maps in Theorem 5.36 have closed range, then there is a nice symmetry between (a) and (b), and between (c) and (d). Nice symmetries are not enough, however. The utility of "closed range" goes far beyond that.
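In finite dimensions, where every subspace is closed and \( {T}^{ * } \) is the transpose, part (a) of Theorem 5.36 can be checked by hand. The following plain-Python sketch, with a hand-picked matrix of our own choosing (nothing here is from the text), verifies that a vector annihilating the range of \( T \) is exactly a kernel vector of \( {T}^{\mathsf{T}} \).

```python
# Finite-dimensional sanity check of Theorem 5.36(a): for a real matrix T
# (so T^* is the transpose), a vector annihilates the range T(X) exactly
# when it lies in ker(T^T).  The matrix and the vector f are hand-picked
# illustrations, not taken from the text.

T = [[1.0, 2.0],
     [2.0, 4.0],
     [0.0, 1.0]]                 # T : R^2 -> R^3, of rank 2

def apply(M, v):
    """Matrix-vector product M @ v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

f = [2.0, -1.0, 0.0]             # spans ker(T^T), found by hand

print(apply(transpose(T), f))    # [0.0, 0.0]: f lies in ker(T^T)

# ... and f therefore annihilates every vector T @ x in the range:
x = [3.0, -5.0]
print(sum(fi * ti for fi, ti in zip(f, apply(T, x))))   # 0.0
```

Since \( \operatorname{rank}T = 2 \), the annihilator \( T{\left( X\right) }^{ \bot } \) is one-dimensional, so the single vector \( f \) spans it; this matches \( \dim \ker {T}^{\mathsf{T}} = 3 - \operatorname{rank}T \).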
One example is the formation of homology spaces: If \( d : X \rightarrow Y \) and \( \delta : Y \rightarrow Z \) are continuous linear transformations of Hausdorff locally convex spaces with \( {\delta d} = 0 \), then \( d \) has closed range exactly when the homology space \( \ker \delta /d\left( X\right) \) is a Hausdorff locally convex space. This kind of thing matters. Other usages apply to solving equations: By part (b), \( T \) has closed range exactly when \[ \text{“}T\left( x\right) = y\text{ has a solution for a fixed }y \Leftrightarrow f\left( y\right) = 0\text{ for all }f \in \ker \left( {T}^{ * }\right) \text{”} \] is valid. The first few theorems relate \( T \) with \( {T}^{ * } \) . For the first one, the condition on the range space is a bit peculiar, but does arise in practice. Proposition 5.37. Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and suppose \( X \) is B-complete (or a Fréchet space) and \( Y \) has the property that every closed subspace is barreled. If \( T : X \rightarrow Y \) is a continuous linear transformation with closed range, then \( {T}^{ * } \) has weak-* closed range. Remark. The property hypothesized for \( Y \) holds for Fréchet spaces. Proof. Assuming \( T\left( X\right) \) is closed forces \( T\left( X\right) \) to be barreled, so we now have that \( T \) is an open map (Corollary 5.34 or Theorem 4.35) onto \( T\left( X\right) \) . Hence the induced map \( {T}_{0} : X/\ker \left( T\right) \rightarrow T\left( X\right) \) is a topological isomorphism (Theorem 1.23). Hence the induced map \( {T}_{0}^{ * } : T{\left( X\right) }^{ * } \rightarrow {\left( X/\ker \left( T\right) \right) }^{ * } \approx \ker {\left( T\right) }^{ \bot } \) is bijective.
However, using the composite \[ X\overset{\pi }{ \rightarrow }X/\ker \left( T\right) \overset{{T}_{0}}{ \rightarrow }T\left( X\right) ,\;T = {T}_{0}\pi , \] we get that \( {T}^{ * } = {\pi }^{ * }{T}_{0}^{ * }{\iota }^{ * } \), where \( \iota : T\left( X\right) \rightarrow Y \) is the inclusion: \[ {Y}^{ * }\overset{{\iota }^{ * }}{ \rightarrow }T{\left( X\right) }^{ * }\overset{{T}_{0}^{ * }}{ \rightarrow }{\left( X/\ker \left( T\right) \right) }^{ * }\overset{{\pi }^{ * }}{ \rightarrow }{X}^{ * }. \] Since \( {\iota }^{ * } \) is onto (Hahn–Banach) and \( {T}_{0}^{ * } \) is bijective, \( {T}^{ * }\left( {Y}^{ * }\right) = {\pi }^{ * }\left( {{\left( X/\ker \left( T\right) \right) }^{ * }}\right) = {\left( \ker T\right) }^{ \bot } \), which is weak-* closed. This has a companion. Proposition 5.38. Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and suppose \( X \) is barreled and \( B \) -complete (or is a Fréchet space) and \( Y \) is first countable. If \( T : X \rightarrow Y \) is a continuous linear transformation, and if \( {T}^{ * } \) has weak-* closed range, then \( T \) has closed range. Proof. (Very weird): Assuming that \( {T}^{ * } \) has weak-* closed range, we have that \( {T}^{ * }\left( {Y}^{ * }\right) = {\left( \ker T\right) }^{ \bot } \) (Theorem 5.36(d)), so that \( {T}^{ * }\left( {Y}^{ * }\right) \approx {\left( X/\ker \left( T\right) \right) }^{ * } \) by Theorem 5.7. Let \( {\tau }_{1} \) be the quotient topology of \( X/\ker \left( T\right) \) transported over to \( T\left( X\right) \), and let \( {\tau }_{2} \) be the induced topology on \( T\left( X\right) \) as a subspace of \( Y \) . The preceding shows that the dual space of \( \left( {T\left( X\right) ,{\tau }_{1}}\right) \) is precisely given by \( {Y}^{ * }/\ker \left( {T}^{ * }\right) \) (since that is isomorphic to \( {T}^{ * }\left( {Y}^{ * }\right) = {\left( X/\ker \left( T\right) \right) }^{ * } \) as a vector space), while the dual of \( \left( {T\left( X\right) ,{\tau }_{2}}\right) \) is given by \( {Y}^{ * }/T{\left( X\right) }^{ \bot } \) by Theorem 5.6. But \( T{\left( X\right) }^{ \bot } = \ker \left( {T}^{ * }\right) \) by Theorem 5.36(a), so \( \left( {T\left( X\right) ,{\tau }_{1}}\right) \) and \( \left( {T\left( X\right) ,{\tau }_{2}}\right) \) have the same continuous linear functionals. Now for the weird part.
\( \left( {T\left( X\right) ,{\tau }_{1}}\right) \) is barreled since \( X/\ker \left( T\right) \) is barreled (Proposition 4.2(a)), so \( \left( {T\left( X\right) ,{\tau }_{1}}\right) \) is infrabarreled, and so is a Mackey space (Corollary 4.9). But \( \left( {T\left( X\right) ,{\tau }_{2}}\right) \) is first countable, so it is bornological (Proposition 4.10), hence is infrabarreled, hence is also a Mackey space. That is, both \( {\tau }_{1} \) and \( {\tau }_{2} \) agree with the Mackey topology on \( T\left( X\right) \), where its dual space is \( {Y}^{ * }/T{\left( X\right) }^{ \bot } \) . In particular, \( {\tau }_{1} = {\tau }_{2} \) . But \( {\tau }_{1} \) is \( B \) -complete (Proposition 5.32) and so is complete (Proposition 5.31), so \( T\left( X\right) \) is a complete, hence closed (Proposition 1.30) subspace of \( Y \) . (If \( X \) is a Fréchet space, then so is \( T\left( X\right) \), making things even simpler.) There is one more result along these lines that is suitable for presentation here. It concerns strong topologies. It is difficult to state without replacing \( Y \) with \( T{\left( X\right) }^{ - } \) ; but then, any closed range theorem can be restated this way. Proposition 5.39. Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and suppose \( X \) is a Fréchet space and \( Y \) is infrabarreled. Suppose \( T : X \rightarrow Y \) is a continuous linear map for which \( T{\left( X\right) }^{ - } = Y \) . Then \( {T}^{ * } \) is one-to-one. If \( {\left( {T}^{ * }\right) }^{-1} : {T}^{ * }\left( {Y}^{ * }\right) \rightarrow {Y}^{ * } \) is strongly bounded, then \( T \) has closed range. Proof. \( {T}^{ * } \) is one-to-one since \( T \) has dense range (Theorem 5.36(a)). Suppose \( U \) is a barrel neighborhood of 0 in \( X \) .
Then \( {U}^{ \circ } \) is equicontinuous, hence is strongly bounded in \( {X}^{ * } \) [Theorem 4.16(a)], so \( {U}^{ \circ }\bigcap {T}^{ * }\left( {Y}^{ * }\right) \) is bounded in \( {T}^{ * }\left( {Y}^{ * }\right) \), so \( {\left( {T}^{ * }\right) }^{-1}\left( {{U}^{ \circ } \cap {T}^{ * }\left( {Y}^{ * }\right) }\right) = {\left( {T}^{ * }\right) }^{-1}\left( {U}^{ \circ }\right) \) is strongly bounded in \( {Y}^{ * } \) by assumption. But \( {\left( {T}^{ * }\right) }^{-1}\left( {U}^{ \circ }\right) = T{\left( U\right) }^{ \circ } \) by Theorem 5.2, so \( T{\left( U\right) }^{ \circ } \) is strongly bounded in \( {Y}^{ * } \) . Hence \( T{\left( U\right) }^{ \circ } \) is equicontinuous by Theorem 4.16(b), so that \( {\left( T{\left( U\right) }^{ \circ }\right) }_{ \circ } \) is a neighborhood of 0 in \( Y \) . But \( T\left( U\right) \) is nonempty, convex, and balanced since \( U \) is a barrel, so \( {\left( T{\left( U\right) }^{ \circ }\right) }_{ \circ } = T{\left( U\right) }^{ - } \) by the bipolar theorem. The preceding shows that \( T{\left( U\right) }^{ - } \) is a neighborhood of 0 in \( Y \) whenever \( U \) is a barrel neighborhood of 0 in \( X \), so \( T \) is nearly open (Corollary 4.33), and so is onto (Theorem 4.35(b)). Remark. If \( X \) and \( Y \) are Banach spaces, then the strong topologies on the dual spaces are Banach space topologies, and the preceding theorem shows that if \( {T}^{ * } \) has a strongly closed range, then \( T \) has a closed range. [The boundedness of \( {\left( {T}^{ * }\right) }^{-1} \) comes from the open mapping theorem.] This is also true for Fréchet spaces (Theorem 6.1), but that is a bit more involved. Like the result for Banach spaces, it depends on Proposition 5.39. The final topic here concerns compact linear maps. 
If \( X \) and \( Y \) are Hausdorff locally convex spaces, and \( T : X \rightarrow Y \) is a linear map, then \( T \) is compact when \( T{\left( U\right) }^{ - } \) is compact in \( Y \) for some neighborhood \( U \) of 0 in \( X \) . The next result gives what we need for preparation. Proposition 5.40. Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces over \( \mathbb{R} \) , and \( T : X \rightarrow Y \) is a compact linear map. Then \( T \) is continuous. If \( U \) is a barrel neighborhood of 0 in \( X \) for which \( T{\left( U\right) }^{ - } \) is compact, and \( V \) is a neighborhood of 0 in \( Y \), then there exists a closed subspace \( E \) of \( X \) such that \( X/E \) is finite-dimensional and \( T{\left( U \cap E\right) }^{ - } \subset V \) . Proof. Suppose \( T \) is c
1042_(GTM203)The Symmetric Group
Definition 5.4.5
Definition 5.4.5 A finite, graded poset \( A \) of rank \( n \) is ample if \[ \operatorname{rk}{X}_{k} = \min \left\{ {\left| {A}_{k}\right| ,\left| {A}_{k + 1}\right| }\right\} \;\text{ for }\;0 \leq k < n. \] This is the definition needed to connect unimodality of posets and modules. We will also be able to tie in the concept of orbit as defined in Exercise 2 of Chapter 1. If \( G \) acts on a finite set \( S \), then the orbit of \( s \in S \) is \[ {\mathcal{O}}_{s} = \{ {gs} : g \in G\} . \] Also, let \[ S/G = \left\{ {{\mathcal{O}}_{s} : s \in S}\right\} . \] Theorem 5.4.6 ([Stn 82]) Let \( A \) be a finite, graded poset of rank \( n \) . Let \( G \) be a group of automorphisms of \( A \) and \( V \) be an irreducible \( G \) -module. If \( A \) is unimodal and ample, then the following sequences are unimodal. (1) \( \mathbb{C}{\mathbf{A}}_{0},\mathbb{C}{\mathbf{A}}_{1},\mathbb{C}{\mathbf{A}}_{2},\ldots ,\mathbb{C}{\mathbf{A}}_{n} \) . (2) \( {m}_{0}\left( V\right) ,{m}_{1}\left( V\right) ,{m}_{2}\left( V\right) ,\ldots ,{m}_{n}\left( V\right) \), where \( {m}_{k}\left( V\right) \) is the multiplicity of \( V \) in \( {\mathbb{{CA}}}_{k} \) . (3) \( \left| {{A}_{0}/G}\right| ,\left| {{A}_{1}/G}\right| ,\left| {{A}_{2}/G}\right| ,\ldots ,\left| {{A}_{n}/G}\right| \) . Proof. The fact that the first sequence is unimodal follows immediately from the definition of ample and Lemma 5.4.4. This implies that the second sequence is as well by definition of the partial order on \( G \) -modules. Finally, by Exercise 5b in Chapter 2, (3) is the special case of (2) where one takes \( V \) to be the trivial module. ∎ In order to apply this theorem to the Boolean algebra, we will need the following lemma. The proof we give is based on Kantor's [Kan 72]. Another is indicated in Exercise 25. Proposition 5.4.7 The Boolean algebra \( {B}_{n} \) is ample. Proof.
By Proposition 5.4.2 and the fact that \( {X}_{n - k - 1} \) is the transpose of \( {X}_{k} \), it suffices to show that \( \operatorname{rk}{X}_{k} = \left( \begin{array}{l} n \\ k \end{array}\right) \) for \( k < n/2 \) . Let \( {X}_{S} \) denote the row of \( X = {X}_{k} \) indexed by the set \( S \) . Suppose, towards a contradiction, that there is a dependence relation \[ {X}_{\bar{S}} = \mathop{\sum }\limits_{{S \neq \bar{S}}}{a}_{S}{X}_{S} \] (5.11) for certain real numbers \( {a}_{S} \), not all zero. Let \( G \) be the subgroup of \( {\mathcal{S}}_{n} \) that fixes \( \bar{S} \) as a set, i.e., \( G = {\mathcal{S}}_{\bar{S}} \times {\mathcal{S}}_{{\bar{S}}^{c}} \), where \( {\bar{S}}^{c} \) is the complement of \( \bar{S} \) in \( \{ 1,2,\ldots, n\} \) . Then \( G \) acts on the \( k \) th rank \[ {B}_{n, k}\overset{\text{ def }}{ = }{\left( {B}_{n}\right) }_{k} \] and if \( g \in G \), then \( g\bar{S} = \bar{S} \) and \( {X}_{{gS},{gT}} = {X}_{S, T} \) for all \( S \in {B}_{n, k}, T \in {B}_{n, k + 1} \) . Also, \( g \) permutes the \( S \neq \bar{S} \) among themselves. So, using (5.11), we obtain \[ {X}_{\bar{S}, T} = {X}_{g\bar{S},{gT}} = {X}_{\bar{S},{gT}} = \mathop{\sum }\limits_{{S \neq \bar{S}}}{a}_{S}{X}_{S,{gT}} = \mathop{\sum }\limits_{{S \neq \bar{S}}}{a}_{gS}{X}_{{gS},{gT}} = \mathop{\sum }\limits_{{S \neq \bar{S}}}{a}_{gS}{X}_{S, T}. \] It follows that \( {X}_{\bar{S}} = \mathop{\sum }\limits_{{S \neq \bar{S}}}{a}_{gS}{X}_{S} \), and summing over all \( g \in G \) gives \[ \left| G\right| {X}_{\bar{S}} = \mathop{\sum }\limits_{{g \in G}}\mathop{\sum }\limits_{{S \neq \bar{S}}}{a}_{gS}{X}_{S} = \mathop{\sum }\limits_{{S \neq \bar{S}}}{X}_{S}\mathop{\sum }\limits_{{g \in G}}{a}_{gS}. \] \( \left( {5.12}\right) \) By Exercise 5a in Chapter 2, the action of \( G \) partitions \( {B}_{n, k} \) into orbits \( {\mathcal{O}}_{i} \) . 
Directly from the definitions we see that we can write \[ {\mathcal{O}}_{i} = \left\{ {S \in {B}_{n, k} : \left| {S \cap \bar{S}}\right| = i}\right\} ,\;0 \leq i \leq k. \] So \( {\mathcal{O}}_{k} = \{ \bar{S}\} \) and \( { \uplus }_{i = 0}^{k - 1}{\mathcal{O}}_{i} \) is the set of all \( S \neq \bar{S}, S \in {B}_{n, k} \) . Because of the bijection from Exercise 2b in Chapter 1, if \( S \in {\mathcal{O}}_{i} \), then as \( g \) runs over \( G \) , the sets \( T = {gS} \) will run over \( {\mathcal{O}}_{i} \) with each set being repeated \( \left| G\right| /\left| {\mathcal{O}}_{i}\right| \) times. So (5.12) can be rewritten \[ {X}_{\bar{S}} = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{i = 0}}^{{k - 1}}\mathop{\sum }\limits_{{S \in {\mathcal{O}}_{i}}}{X}_{S}\mathop{\sum }\limits_{{g \in G}}{a}_{gS} \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}\mathop{\sum }\limits_{{S \in {\mathcal{O}}_{i}}}{X}_{S}\frac{1}{\left| {\mathcal{O}}_{i}\right| }\mathop{\sum }\limits_{{T \in {\mathcal{O}}_{i}}}{a}_{T} \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}\frac{1}{\left| {\mathcal{O}}_{i}\right| }\mathop{\sum }\limits_{{T \in {\mathcal{O}}_{i}}}{a}_{T}\mathop{\sum }\limits_{{S \in {\mathcal{O}}_{i}}}{X}_{S} \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{b}_{i}\mathop{\sum }\limits_{{S \in {\mathcal{O}}_{i}}}{X}_{S} \] for certain real numbers \( {b}_{i} \), not all zero (since the sum is a nonzero vector). Now, for \( 0 \leq j < k \), choose \( {T}_{j} \in {B}_{n, k + 1} \) with \( \left| {{T}_{j} \cap \bar{S}}\right| = j \), which can be done since \( \left( {k + 1}\right) + k = {2k} + 1 \leq n \) . But then \( {X}_{\bar{S},{T}_{j}} = 0 \) . So the previous equation for \( {X}_{\bar{S}} \) gives \[ 0 = \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{b}_{i}\mathop{\sum }\limits_{{S \in {\mathcal{O}}_{i}}}{X}_{S,{T}_{j}} \] (5.13) which can be viewed as a system of \( k \) equations in \( k \) unknowns, namely the \( {b}_{i} \) . 
Now, if \( i > j \) then \( {X}_{S,{T}_{j}} = 0 \) for all \( S \in {\mathcal{O}}_{i} \) . So this system is actually triangular. Furthermore, if \( i = j \) then there is an \( S \in {\mathcal{O}}_{i} \) with \( {X}_{S,{T}_{j}} = 1 \), so that the system has nonzero coefficients along the diagonal. It follows that \( {b}_{i} = 0 \) for all \( i \), the desired contradiction. ∎ Corollary 5.4.8 Let \( G \leq {\mathcal{S}}_{n} \) act on \( {B}_{n} \) and let \( V \) be an irreducible \( G \) - module. Then, keeping the notation in the statement and proof of the previous proposition, the following sequences are symmetric and unimodal. (1) \( {m}_{0}\left( V\right) ,{m}_{1}\left( V\right) ,{m}_{2}\left( V\right) ,\ldots ,{m}_{n}\left( V\right) \), where \( {m}_{k}\left( V\right) \) is the multiplicity of \( V \) in \( {\mathbb{{CB}}}_{n, k} \) . (2) \( \left| {{B}_{n,0}/G}\right| ,\left| {{B}_{n,1}/G}\right| ,\left| {{B}_{n,2}/G}\right| ,\ldots ,\left| {{B}_{n, n}/G}\right| \) . Proof. The fact that the sequences are unimodal follows from Theorem 5.4.6, and Propositions 5.4.2 and 5.4.7. For symmetry, note that the map \( f : {B}_{n, k} \rightarrow \) \( {B}_{n, n - k} \) sending \( S \) to its complement induces a \( G \) -module isomorphism. ∎ As an application of this corollary, fix nonnegative integers \( k, l \) and consider the partition \( \left( {l}^{k}\right) \), which is just a \( k \times l \) rectangle. There is a corresponding lower order ideal in Young's lattice \[ {Y}_{k, l} = \left\{ {\lambda \in Y : \lambda \leq \left( {l}^{k}\right) }\right\} \] consisting of all partitions fitting inside the rectangle. We will show that this poset is symmetric and unimodal. But first we need an appropriate group action. Definition 5.4.9 Let \( G \) and \( H \) be permutation groups acting on sets \( S \) and \( T \), respectively. The wreath product, \( G\wr H \), acts on \( S \times T \) as follows. 
The elements of \( G\wr H \) are all pairs of the form \( \left( {g, h}\right) \), where \( h \in H \) and \( g \in {G}^{\left| T\right| } \), so \( g = {\left( {g}_{t}\right) }_{t \in T} \) with each \( {g}_{t} \in G \) . The action is \[ \left( {g, h}\right) \left( {s, t}\right) = \left( {{g}_{t}s,{ht}}\right) . \] (5.14) It is easy to verify that \( G\wr H \) is a group and that this is actually a group action. ∎ To illustrate, if \( G = {\mathcal{S}}_{3} \) and \( H = {\mathcal{S}}_{2} \) acting on \( \{ 1,2,3\} \) and \( \{ 1,2\} \), respectively, then a typical element of \( G\wr H \) is \[ \left( {g, h}\right) = \left( {g,\left( {1,2}\right) }\right) ,\;\text{ where }\;g = \left( {{g}_{1},{g}_{2}}\right) = \left( {\left( {1,2,3}\right) ,{\left( 1,2,3\right) }^{2}}\right) . \] So an example of the action would be \[ \left( {g, h}\right) \left( {3,2}\right) = \left( {{g}_{2}\left( 3\right), h\left( 2\right) }\right) = \left( {2,1}\right) . \] Now, let \[ {p}_{k, l}\left( i\right) = \left| \left\{ {\lambda \vdash i : \lambda \subseteq \left( {l}^{k}\right) }\right\} \right| . \] Corollary 5.4.10 For fixed \( k, l \), the sequence \[ {p}_{k, l}\left( 0\right) ,{p}_{k, l}\left( 1\right) ,{p}_{k, l}\left( 2\right) ,\ldots ,{p}_{k, l}\left( {kl}\right) \] is symmetric and unimodal. So the poset \( {Y}_{k, l} \) is as well. Proof. Identify the cells of \( \left( {l}^{k}\right) \) with the elements of \( \{ 1,2,\ldots, k\} \times \{ 1,2,\ldots, l\} \) in the usual way. Then \( {\mathcal{S}}_{k}\wr {\mathcal{S}}_{l} \) has an induced action on subsets of \( \left( {l}^{k}\right) \), which are partially ordered as in \( {B}_{kl} \) (by containment). There is a unique partition in each orbit, and so the result follows from Corollary 5.4.8, part (2). ∎

## 5.5 Chromatic Symmetric Functions

In this section we will introduce the reader to combinatorial graphs and their proper colorings.
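Before turning to colorings, the wreath-product action (5.14) of Definition 5.4.9 can be checked on the worked example above. This short sketch is ours: permutations are encoded as Python dicts, an implementation choice rather than the book's notation.

```python
# Minimal sketch (not from the text) of the wreath-product action (5.14):
# an element (g, h) of G wr H, with h in H and g = (g_t) a T-indexed
# tuple of elements of G, acts by (g, h)(s, t) = (g_t s, h t).
# Permutations are encoded as dicts mapping each point to its image.

def act(g, h, s, t):
    """Apply (g, h) in G wr H to the point (s, t)."""
    return (g[t][s], h[t])

# The worked example above: G = S_3, H = S_2,
# h = (1,2), g_1 = (1,2,3), g_2 = (1,2,3)^2.
cycle = {1: 2, 2: 3, 3: 1}                    # the 3-cycle (1,2,3)
cycle2 = {s: cycle[cycle[s]] for s in cycle}  # its square
g = {1: cycle, 2: cycle2}
h = {1: 2, 2: 1}                              # the transposition (1,2)

print(act(g, h, 3, 2))   # (2, 1), matching the computation in the text
```

Note that the copy of \( G \) that acts on the first coordinate is selected by the second coordinate \( t \) before \( h \) moves it, exactly as in (5.14).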
It turns out that one can construct a generating function for such colorings that is symmetric and so can be studied in the light of Chapter 4. Definition 5.5.1 A graph, \( \Gamma \), consists of a finite set of vertices \( V = V\left( \Gamma \right) \) and a set \( E = E\left( \Gamma \right) \) of edges, which are unordered pairs of vertices. An edge connecting vertices \( u \) and \( v \)
1112_(GTM267)Quantum Theory for Mathematicians
Definition 7.1
Definition 7.1 The Banach space of bounded operators on \( \mathbf{H} \), with respect to the operator norm (7.1), is denoted \( \mathcal{B}\left( \mathbf{H}\right) \) . Recall (Appendix A.4.3) that for any \( A \in \mathcal{B}\left( \mathbf{H}\right) \) there is a unique operator \( {A}^{ * } \in \mathcal{B}\left( \mathbf{H}\right) \), called the adjoint of \( A \), such that \[ \langle \phi ,{A\psi }\rangle = \left\langle {{A}^{ * }\phi ,\psi }\right\rangle \] for all \( \phi ,\psi \in \mathbf{H} \) . An operator \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is called self-adjoint if \( {A}^{ * } = A \) . We say that \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is non-negative if \[ \langle \psi ,{A\psi }\rangle \geq 0 \] (7.3) for all \( \psi \in \mathbf{H} \) . Proposition 7.2 For all \( A \in \mathcal{B}\left( \mathbf{H}\right) \), we have \[ \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \parallel A\parallel \] and \[ \begin{Vmatrix}{{A}^{ * }A}\end{Vmatrix} = \parallel A{\parallel }^{2}. \] In particular, if \( A \) is self-adjoint, we have the useful result that \( \begin{Vmatrix}{A}^{2}\end{Vmatrix} = \parallel A{\parallel }^{2} \) . Proof. The operator norm of \( A \) can also be computed as \[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel \psi \parallel = 1}}\parallel {A\psi }\parallel \] Furthermore, for any vector \( \phi \in \mathbf{H} \), \( \parallel \phi \parallel = \mathop{\sup }\limits_{{\parallel \chi \parallel = 1}}\left| {\langle \chi ,\phi \rangle }\right| \) . (The inequality in one direction is the Cauchy–Schwarz inequality, and the inequality in the other direction follows by taking \( \chi \) to be a multiple of \( \phi \) .)
Thus, \[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel \phi \parallel = \parallel \psi \parallel = 1}}\left| {\langle \phi ,{A\psi }\rangle }\right| \] From this, we get \[ \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel \phi \parallel = \parallel \psi \parallel = 1}}\left| \left\langle {\phi ,{A}^{ * }\psi }\right\rangle \right| \] \[ = \mathop{\sup }\limits_{{\parallel \phi \parallel = \parallel \psi \parallel = 1}}\left| {\langle {A\phi },\psi \rangle }\right| \] \[ = \mathop{\sup }\limits_{{\parallel \phi \parallel = \parallel \psi \parallel = 1}}\left| {\langle \psi ,{A\phi }\rangle }\right| \] \[ = \parallel A\parallel \text{.} \] Meanwhile, \( \begin{Vmatrix}{{A}^{ * }A}\end{Vmatrix} \leq \begin{Vmatrix}{A}^{ * }\end{Vmatrix}\parallel A\parallel = \parallel A{\parallel }^{2} \) . On the other hand, \[ \begin{Vmatrix}{{A}^{ * }A}\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel \phi \parallel = \parallel \psi \parallel = 1}}\left| \left\langle {\phi ,{A}^{ * }{A\psi }}\right\rangle \right| \] \[ = \mathop{\sup }\limits_{{\parallel \phi \parallel = \parallel \psi \parallel = 1}}\left| {\langle {A\phi },{A\psi }\rangle }\right| \] \[ \geq \mathop{\sup }\limits_{{\parallel \psi \parallel = 1}}\left| {\langle {A\psi },{A\psi }\rangle }\right| \] \[ = \parallel A{\parallel }^{2} \] which establishes the inequality in the other order. - We now record an elementary but very useful result. Proposition 7.3 For all \( A \in \mathcal{B}\left( \mathbf{H}\right) \), we have \[ {\left\lbrack \operatorname{Range}\left( A\right) \right\rbrack }^{ \bot } = \ker \left( {A}^{ * }\right) \] where for any \( B \in \mathcal{B}\left( \mathbf{H}\right) ,\ker \left( B\right) \) denotes the kernel of \( B \) . Proof. Suppose first that \( \psi \) belongs to \( {\left\lbrack \operatorname{Range}\left( A\right) \right\rbrack }^{ \bot } \) . 
Then for all \( \phi \in \mathbf{H} \) , we have \[ 0 = \langle \psi ,{A\phi }\rangle = \left\langle {{A}^{ * }\psi ,\phi }\right\rangle \] (7.4) This implies that \( {A}^{ * }\psi = 0 \) and thus that \( \psi \in \ker \left( {A}^{ * }\right) \) . Conversely, suppose \( \psi \in \ker \left( {A}^{ * }\right) \) . Then for all \( \phi \in \mathbf{H} \), (7.4) holds (reading the equation from right to left). This shows that \( \psi \) is orthogonal to every element of the form \( {A\phi } \), meaning that \( \psi \in {\left\lbrack \operatorname{Range}\left( A\right) \right\rbrack }^{ \bot } \) . ∎ Next, we define the spectrum of a bounded operator, which plays the same role as the set of eigenvalues in the finite-dimensional case. Definition 7.4 For \( A \in \mathcal{B}\left( \mathbf{H}\right) \), the resolvent set of \( A \), denoted \( \rho \left( A\right) \), is the set of all \( \lambda \in \mathbb{C} \) such that the operator \( \left( {A - {\lambda I}}\right) \) has a bounded inverse. The spectrum of \( A \), denoted by \( \sigma \left( A\right) \), is the complement in \( \mathbb{C} \) of the resolvent set. For \( \lambda \) in the resolvent set of \( A \), the operator \( {\left( A - \lambda I\right) }^{-1} \) is called the resolvent of \( A \) at \( \lambda \) . Saying that \( \left( {A - {\lambda I}}\right) \) has a bounded inverse means that there exists a bounded operator \( B \) such that \[ \left( {A - {\lambda I}}\right) B = B\left( {A - {\lambda I}}\right) = I. \] If \( A \) is bounded and \( A - {\lambda I} \) is one-to-one and maps \( \mathbf{H} \) onto \( \mathbf{H} \), then it follows from the closed graph theorem (Theorem A.39) that the inverse map must be bounded. Thus, the resolvent set of \( A \) can alternatively be described as the set of \( \lambda \in \mathbb{C} \) for which \( A - {\lambda I} \) is one-to-one and onto. Proposition 7.5 For all \( A \in \mathcal{B}\left( \mathbf{H}\right) \), the following results hold. 1.
The spectrum \( \sigma \left( A\right) \) of \( A \) is a closed, bounded, and nonempty subset of \( \mathbb{C} \) . 2. If \( \left| \lambda \right| > \parallel A\parallel \), then \( \lambda \) is in the resolvent set of \( A \) . Lemma 7.6 Suppose \( X \in \mathcal{B}\left( \mathbf{H}\right) \) satisfies \( \parallel X\parallel < 1 \) . Then the operator \( I - X \) is invertible, with the inverse given by the following convergent series in \( \mathcal{B}\left( \mathbf{H}\right) \) : \[ {\left( I - X\right) }^{-1} = I + X + {X}^{2} + {X}^{3} + \cdots \] (7.5) Proof. As a consequence of (7.2), we have \( \begin{Vmatrix}{X}^{m}\end{Vmatrix} \leq \parallel X{\parallel }^{m} \) . The (geometric) series on the right-hand side of (7.5) is therefore absolutely convergent and thus convergent in the Banach space \( \mathcal{B}\left( \mathbf{H}\right) \) (Appendix A.3.4). If we multiply this series on either side by \( \left( {I - X}\right) \), everything will cancel except \( I \), showing that the sum of the series is the inverse of \( \left( {I - X}\right) \) . ∎ Proof of Proposition 7.5. For any nonzero \( \lambda \in \mathbb{C} \), consider the operator \[ A - {\lambda I} = - \lambda \left( {I - \frac{A}{\lambda }}\right) \] If \( \left| \lambda \right| > \parallel A\parallel \), then \( \parallel A/\lambda \parallel < 1 \), and \( I - A/\lambda \) is invertible by the lemma. It then follows that \( A - {\lambda I} \) is invertible, with \[ {\left( A - \lambda I\right) }^{-1} = - \frac{1}{\lambda }\left( {I + \frac{A}{\lambda } + \frac{{A}^{2}}{{\lambda }^{2}} + \cdots }\right) . \] (7.6) Thus, \( \lambda \) is in the resolvent set of \( A \) . This establishes Point 2 in the proposition and shows that \( \sigma \left( A\right) \) is bounded. Suppose now that \( {\lambda }_{0} \in \mathbb{C} \) is in the resolvent set of \( A \) . 
Then for another number \( \lambda \in \mathbb{C} \), we have \[ A - {\lambda I} = A - {\lambda }_{0}I - \left( {\lambda - {\lambda }_{0}}\right) I \] \[ = \left( {A - {\lambda }_{0}I}\right) \left( {I - \left( {\lambda - {\lambda }_{0}}\right) {\left( A - {\lambda }_{0}I\right) }^{-1}}\right) . \] (7.7) Thus, if \[ \left| {\lambda - {\lambda }_{0}}\right| < \frac{1}{\begin{Vmatrix}{\left( A - {\lambda }_{0}I\right) }^{-1}\end{Vmatrix}} \] both factors on the right-hand side of (7.7) will be invertible, so that \( A - {\lambda I} \) is also invertible. Thus, the resolvent set of \( A \) is open and the spectrum is closed. To show that \( \sigma \left( A\right) \) is nonempty, note that the resolvent \( {\left( A - \lambda I\right) }^{-1} \) may be computed as follows: \[ {\left( A - \lambda I\right) }^{-1} = {\left( I - \left( \lambda - {\lambda }_{0}\right) {\left( A - {\lambda }_{0}I\right) }^{-1}\right) }^{-1}{\left( A - {\lambda }_{0}I\right) }^{-1} \] \[ = \left( {\mathop{\sum }\limits_{{m = 0}}^{\infty }{\left( \lambda - {\lambda }_{0}\right) }^{m}{\left( {\left( A - {\lambda }_{0}I\right) }^{-1}\right) }^{m}}\right) {\left( A - {\lambda }_{0}I\right) }^{-1}. \] (7.8) Thus, near any point \( {\lambda }_{0} \) in the resolvent set of \( A \), the resolvent \( {\left( A - \lambda I\right) }^{-1} \) can be computed by the locally convergent series (7.8) in powers of \( \lambda - {\lambda }_{0} \) , with the coefficients of the series being elements of \( \mathcal{B}\left( \mathbf{H}\right) \) . For any \( \phi ,\psi \in \mathbf{H} \) , the map \[ \lambda \mapsto \left\langle {\phi ,{\left( A - \lambda I\right) }^{-1}\psi }\right\rangle \] (7.9) will be given by a locally convergent power series with coefficients in \( \mathbb{C} \) , meaning that the function (7.9) is a holomorphic function on the resolvent set of \( A \) . 
Furthermore, from (7.6) we can see that \( \begin{Vmatrix}{\left( A - \lambda I\right) }^{-1}\end{Vmatrix} \) tends to zero as \( \left| \lambda \right| \) tends to infinity, and so also does the right-hand side of (7.9). If \( \sigma \left( A\right) \) were the empty set, the function (7.9) would be holomorphic on all of \( \mathbb{C} \) and tending to zero at infinity. By Liouville’s theorem, the right-hand side of (7.9) would have to be identically zero for all \( \phi \) and \( \psi \), which would mean that \( {\left( A - \lambda I\right) }^{-1} \) is the zero operator. But since \( \left( {A - {\lambda I}}\right) {\left( A - \lambda I\right) }^{-1} = I \), the operator \( {\left( A - \lambda I\right) }^{-1} \) cannot be the zero operator. This contradiction shows that \( \sigma \left( A\right) \) is nonempty. ∎
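In the finite-dimensional case \( \mathbf{H} = {\mathbb{C}}^{n} \), both Lemma 7.6 and Point 2 of Proposition 7.5 can be checked numerically. The sketch below is our own illustration (not from the text): it sums the geometric series for a matrix with \( \parallel X\parallel = {0.9} \), and checks that every eigenvalue of a sample matrix lies in the closed disk of radius \( \parallel A\parallel \).

```python
import numpy as np

# Finite-dimensional sanity checks (our own illustration; H = C^n).
rng = np.random.default_rng(0)

# Lemma 7.6: if ||X|| < 1, then I + X + X^2 + ... converges to (I - X)^{-1}.
X = rng.standard_normal((5, 5))
X *= 0.9 / np.linalg.norm(X, 2)          # rescale so the operator norm is 0.9
S, term = np.zeros((5, 5)), np.eye(5)
for _ in range(400):                     # partial sums of the geometric series
    S += term
    term = term @ X
neumann_err = np.linalg.norm(S - np.linalg.inv(np.eye(5) - X), 2)

# Proposition 7.5, Point 2: every eigenvalue lies in the disk |lambda| <= ||A||.
A = rng.standard_normal((6, 6))
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
print(neumann_err < 1e-8, spectral_radius <= np.linalg.norm(A, 2) + 1e-9)
```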
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 2.2.13
Definition 2.2.13. Let \( p : Y \rightarrow X \) be a covering space with \( Y \) path connected. Then the group of covering translations (or deck transformations) \( {G}_{p} \) is the group of homeomorphisms \( f : Y \rightarrow Y \) making the diagram over \( X \) commute, i.e., satisfying \( p \circ f = p \) . \( \diamond \) Lemma 2.2.14. Let \( {y}_{0} \) be any point of \( Y \) . Then \( f \in {G}_{p} \) is determined by \( f\left( {y}_{0}\right) \) . Corollary 2.2.15. \( {G}_{p} \) acts properly discontinuously on \( Y \) . Definition 2.2.16. Let \( p : Y \rightarrow X \) be a covering projection with \( Y \) path connected. Let \( {x}_{0} \in X \) . For \( f : \left( {I,\{ 0,1\} }\right) \rightarrow \left( {X,{x}_{0}}\right) \) and \( y \in Y \) with \( p\left( y\right) = {x}_{0} \), let \( {\widetilde{f}}_{y} : \left( {I,0}\right) \rightarrow \) \( \left( {Y, y}\right) \) be the unique lift of \( f \) given by Theorem 2.2.4. Then \( Y \) is a regular cover of \( X \) if for each such \( f \), either \( {\widetilde{f}}_{y}\left( 1\right) = y \) for every \( y \) with \( p\left( y\right) = {x}_{0} \) or \( {\widetilde{f}}_{y}\left( 1\right) \neq y \) for every \( y \) with \( p\left( y\right) = {x}_{0} \) . Informally, \( Y \) is regular if either every lift of a loop is a loop, or no lift of a loop is a loop. (It is easy to check that the condition of being regular is independent of the choice of \( {x}_{0} \) .) Although some results are true more generally, henceforth we shall assume that for a covering projection \( p : Y \rightarrow X \) : Hypotheses 2.2.17. (i) \( X \) is connected. (ii) \( X \) is locally path connected, i.e., for any \( x \in X \) and any neighborhood \( U \) of \( x \) , there is an open neighborhood \( V \subset U \) of \( x \) such that \( V \) is path connected. 
(iii) \( X \) is semilocally simply connected, i.e., for any \( x \in X \) and any neighborhood \( U \) of \( x \), there is an open neighborhood \( V \subset U \) of \( x \) such that \( {i}_{ * } : {\pi }_{1}\left( {V, x}\right) \rightarrow \) \( {\pi }_{1}\left( {U, x}\right) \) is the trivial map, where \( i : V \rightarrow U \) is the inclusion. (iv) \( Y \) is connected. Note that (i) and (ii) imply that \( X \) is path connected. Note that since \( p \) is a local homeomorphism, properties (ii), (iii), and (iv) hold for \( Y \) as well, and in particular \( Y \) is also path connected. For example, we see that these hypotheses are satisfied for the covering projections in Example 2.2.3(ii). (However, \( X \) may be connected without \( Y \) being connected, as in Example 2.2.3(i).) Definition 2.2.18. The degree of the cover is the cardinality of \( {p}^{-1}\left( {x}_{0}\right) \) . \( \diamond \) Theorem 2.2.19. Under Hypotheses 2.2.17: (i) To each subgroup \( H \) of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) there corresponds a covering projection \( p \) : \( Y \rightarrow X \) and a point \( {y}_{0} \in Y \) with \( p\left( {y}_{0}\right) = {x}_{0} \) such that \[ {p}_{ * }\left( {{\pi }_{1}\left( {Y,{y}_{0}}\right) }\right) = H \subseteq {\pi }_{1}\left( {X,{x}_{0}}\right) \] and \( \left( {Y,{y}_{0}}\right) \) is unique up to equivalence. (ii) The points in \( {p}^{-1}\left( {x}_{0}\right) \) are in 1-1 correspondence with the right cosets of \( H \) in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . Thus the degree of the cover is the index of \( H \) in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . (iii) \( H \) is normal in \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) if and only if \( Y \) is a regular cover. In this case the group of covering translations is isomorphic to the quotient group \( {\pi }_{1}\left( {X,{x}_{0}}\right) /H \) . Remark 2.2.20. By Corollary 2.2.10, this is a 1-1 correspondence. \( \diamond \) Corollary 2.2.21. 
Under Hypotheses 2.2.17: Every \( X \) has a simply-connected cover \( p : \widetilde{X} \rightarrow X \), unique up to equivalence. \( \widetilde{X} \) is the universal cover of \( X \), and \( X \) is the quotient of \( \widetilde{X} \) by the group of covering translations. Also, if \( Y \) is any cover of \( X \), then \( \widetilde{X} \) is a cover of \( Y \) . Proof. This is a direct consequence of Theorem 2.2.19, and our earlier results, taking \( H \) to be the trivial subgroup of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . Remark 2.2.22. This shows that, in the situation where Hypotheses 2.2.17 hold, the covering projection \( p : \widetilde{X} \rightarrow X \) from the universal cover \( \widetilde{X} \) to \( X \) is exactly the quotient map under the action of the group \( {G}_{p} \) of covering translations, isomorphic to \( {\pi }_{1}\left( {X,{x}_{0}}\right) \), considered in Theorem 2.2.6. The only difference is that we have reversed our point of view: In Theorem 2.2.6 we assumed \( {G}_{p} \) was known, and used it to find \( {\pi }_{1}\left( {X,{x}_{0}}\right) \), while in Theorem 2.2.19 we assumed \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) was known, and used it to find \( {G}_{p} \) . ◇ ## 2.3 van Kampen's Theorem and Applications van Kampen's theorem allows us, under suitable circumstances, to compute the fundamental group of a space from the fundamental groups of subspaces. Theorem 2.3.1. Let \( X = {X}_{1} \cup {X}_{2} \) and suppose that \( {X}_{1},{X}_{2} \), and \( A = {X}_{1} \cap {X}_{2} \) are all open, path connected subsets of \( X \) . Let \( {x}_{0} \in A \) . Then \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) is the free product with amalgamation \[ {\pi }_{1}\left( {X,{x}_{0}}\right) = {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) { * }_{{\pi }_{1}\left( {A,{x}_{0}}\right) }{\pi }_{1}\left( {{X}_{2},{x}_{0}}\right) . 
\] In other words, if \( {i}_{1} : A \rightarrow {X}_{1} \) and \( {i}_{2} : A \rightarrow {X}_{2} \) are the inclusions, then \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) is the free product \( {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) * {\pi }_{1}\left( {{X}_{2},{x}_{0}}\right) \) modulo the relations \( {\left( {i}_{1}\right) }_{ * }\left( \alpha \right) = {\left( {i}_{2}\right) }_{ * }\left( \alpha \right) \) for every \( \alpha \in {\pi }_{1}\left( {A,{x}_{0}}\right) \) . As important special cases we have: Corollary 2.3.2. Under the hypotheses of van Kampen's theorem: (i) If \( {X}_{1} \) and \( {X}_{2} \) are simply connected, then \( X \) is simply connected. (ii) If \( A \) is simply connected, then \( {\pi }_{1}\left( {X,{x}_{0}}\right) = {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) * {\pi }_{1}\left( {{X}_{2},{x}_{0}}\right) \) . (iii) If \( {X}_{2} \) is simply connected, then \( {\pi }_{1}\left( {X,{x}_{0}}\right) = {\pi }_{1}\left( {{X}_{1},{x}_{0}}\right) /\left\langle {{\pi }_{1}\left( {A,{x}_{0}}\right) }\right\rangle \) where \( \left\langle {{\pi }_{1}\left( {A,{x}_{0}}\right) }\right\rangle \) denotes the subgroup normally generated by \( {\pi }_{1}\left( {A,{x}_{0}}\right) \) . Corollary 2.3.3. For \( n > 1 \), the \( n \) -sphere \( {S}^{n} \) is simply connected. Proof. We regard \( {S}^{n} \) as the unit sphere in \( {\mathbb{R}}^{n + 1} \) . Let \( {X}_{1} = {S}^{n} - \{ \left( {0,0,\ldots ,0,1}\right) \} \) and \( {X}_{2} = {S}^{n} - \{ \left( {0,0,\ldots ,0, - 1}\right) \} \) . Then \( {X}_{1} \) and \( {X}_{2} \) are both homeomorphic to \( {\mathring{D}}^{n} \), so are path connected and simply connected, and \( {X}_{1} \cap {X}_{2} \) is path connected, as \( n > 1 \), so by Corollary 2.3.2(i) \( {S}^{n} \) is simply connected. Example 2.3.4. 
(i) Regard \( {S}^{n} \) as the unit sphere in \( {\mathbb{R}}^{n + 1} \) and let \( {\mathbb{Z}}_{2} \) act on \( {S}^{n} \), where the nontrivial element \( g \) of \( {\mathbb{Z}}_{2} \) acts via the antipodal map, \( g\left( {{z}_{1},\ldots ,{z}_{n + 1}}\right) = \) \( \left( {-{z}_{1},\ldots , - {z}_{n + 1}}\right) \) . The quotient \( \mathbb{R}{P}^{n} = {S}^{n}/{\mathbb{Z}}_{2} \) is real projective \( n \) -space. Note that \( p : {S}^{0} \rightarrow \mathbb{R}{P}^{0} \) is the map from the space of two points to the space of one point, and \( p : {S}^{1} \rightarrow \mathbb{R}{P}^{1} \) may be identified with the cover in Example 2.2.3(iib) for \( n = 2 \) . But for \( n > 1 \), by Corollary 2.3.3 and Theorem 2.2.6 we see that \( {\pi }_{1}\left( {\mathbb{R}{P}^{n},{x}_{0}}\right) = {\mathbb{Z}}_{2}. \) (ii) For \( n = {2m} - 1 \) odd, regard \( {S}^{n} \) as the unit sphere in \( {\mathbb{C}}^{m} \) . Fix a positive integer \( k \) and integers \( {j}_{1},\ldots ,{j}_{m} \) relatively prime to \( k \) . Let the group \( {\mathbb{Z}}_{k} \) act on \( {S}^{n} \) where a fixed generator \( g \) acts by \( g\left( {{z}_{1},\ldots ,{z}_{m}}\right) = \) \( \left( {\exp \left( {{2\pi i}{j}_{1}/k}\right) {z}_{1},\ldots ,\exp \left( {{2\pi i}{j}_{m}/k}\right) {z}_{m}}\right) \) . The quotient \( L = {L}^{{2m} - 1}\left( {k;{j}_{1},\ldots ,{j}_{m}}\right) \) is a lens space. For \( m = 1 \) the projection \( p : {S}^{{2m} - 1} \rightarrow L \) may be identified with the cover in Example 2.2.3(iib) with (in the notation there) \( n = k \) . But for \( m > 1 \) , by Corollary 2.3.3 and Theorem 2.2.6 we see that \( {\pi }_{1}\left( {L,{x}_{0}}\right) = {\mathbb{Z}}_{k} \) . Example 2.3.5. Regard \( {S}^{1} \) as the unit circle in \( \mathbb{C} \) . Let \( n \) be a positive integer. For \( k = 1,\ldots, n \) let \( {\left( {S}^{1}\right) }_{k} \) be a copy of \( {S}^{1} \) . 
The \( n \) -leafed rose is the space \( {R}_{n} \) obtained from the disjoint union of \( {\left( {S}^{1}\right) }_{1},\ldots ,{\left( {S}^{1}\right) }_{n} \) by identifying the point 1 in each copy of \( {S}^{1} \) . Let \( {r}_{0} \in {R}_{n} \) be the common identification point. Let \( {\left( {S}^{1}\right) }_{k} \) be coordinated by \( {\left( z\right) }_{k} \) , and let \( {i}_{k} : {\left( {S}^{1}\right) }_{k} \rightarrow {R}_{n} \) be the inclusion. Corollary 2.3.6. The fundamental group \( {\pi }_{1}\left( {{R}_{n},{r}_{0}}\right) \) is a free group on \( n \) generators.
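Corollary 2.3.6 identifies \( {\pi }_{1}\left( {{R}_{n},{r}_{0}}\right) \) with a free group. The following toy model is our own code (not from the text): free-group elements are represented as reduced words, with \( \left( {k, + 1}\right) \) the loop around the \( k \) th circle and \( \left( {k, - 1}\right) \) its inverse, illustrating both the cancellation of a loop with its inverse and noncommutativity.

```python
# A toy model (ours) of the free group on n generators, the fundamental
# group of the n-leafed rose.
def reduce_word(word):
    out = []
    for g in word:
        if out and out[-1][0] == g[0] and out[-1][1] == -g[1]:
            out.pop()                     # cancel adjacent g g^{-1}
        else:
            out.append(g)
    return out

def mul(w1, w2):                          # concatenate loops, then reduce
    return reduce_word(w1 + w2)

a, ainv, b = [(1, 1)], [(1, -1)], [(2, 1)]
print(mul(a, ainv))                       # [] : a * a^{-1} is the identity
print(mul(mul(a, b), ainv))               # [(1, 1), (2, 1), (1, -1)] : not b
```

The last line shows that \( {ab}{a}^{-1} \neq b \), so the group is nonabelian for \( n > 1 \).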
1056_(GTM216)Matrices
Definition 11.1
Definition 11.1 Let \( M \in {\mathbf{{GL}}}_{n}\left( k\right) \) be given. We say that \( M \) admits an LU factorization if there exist in \( {\mathbf{{GL}}}_{n}\left( k\right) \) two matrices \( L \) (lower-triangular with 1s on the diagonal) and \( U \) (upper-triangular) such that \( M = {LU} \) . ## Remarks - The diagonal entries of \( U \) are not equal to 1 in general. The \( {LU} \) factorization is thus asymmetric with respect to \( L \) and \( U \) . - The letters \( L \) and \( U \) recall the shape of the matrices: \( L \) for lower and \( U \) for upper. - If there exists an \( {LU} \) factorization (which is unique, as we show below), then it can be computed by induction on the size of the matrix. The algorithm is provided in the proof of the next theorem. Indeed, if \( {N}^{\left( p\right) } \) denotes the matrix extracted from \( N \) by keeping only the first \( p \) rows and columns, we easily obtain \[ {M}^{\left( p\right) } = {L}^{\left( p\right) }{U}^{\left( p\right) } \] which is nothing but the \( {LU} \) factorization of \( {M}^{\left( p\right) } \) . Definition 11.2 The leading principal minors of \( M \) are the determinants of the matrices \( {M}^{\left( p\right) } \) defined above, for \( 1 \leq p \leq n \) . Theorem 11.1 The matrix \( M \in {\mathbf{{GL}}}_{n}\left( k\right) \) admits an LU factorization if and only if its leading principal minors are nonzero. When this condition is fulfilled, the LU factorization is unique. Proof. Let us begin with uniqueness: if \( {LU} = {L}^{\prime }{U}^{\prime } \), then \( {\left( {L}^{\prime }\right) }^{-1}L = {U}^{\prime }{U}^{-1} \), which reads \( {L}^{\prime \prime } = {U}^{\prime \prime } \), where \( {L}^{\prime \prime } \) and \( {U}^{\prime \prime } \) are triangular of opposite types, the diagonal entries of \( {L}^{\prime \prime } \) being 1s. 
We deduce \( {L}^{\prime \prime } = {U}^{\prime \prime } = {I}_{n} \) ; that is, \( {L}^{\prime } = L,{U}^{\prime } = U \) . We next prove the necessity. Let us assume that \( M \) admits an \( {LU} \) factorization. Then \( \det {M}^{\left( p\right) } = \det {L}^{\left( p\right) }\det {U}^{\left( p\right) } = \mathop{\prod }\limits_{{1 \leq j \leq p}}{u}_{jj} \), which is nonzero because \( U \) is invertible. Finally, we prove the sufficiency, that is, the existence of an \( {LU} \) factorization. We proceed by induction on the size of the matrices. It is clear if \( n = 1 \) . Otherwise, let us assume that the statement is true up to the order \( n - 1 \) and let \( M \in {\mathbf{{GL}}}_{n}\left( k\right) \) be given, with nonzero leading principal minors. We look for \( L \) and \( U \) in the blockwise form \[ L = \left( \begin{matrix} {L}^{\prime } & 0 \\ {X}^{T} & 1 \end{matrix}\right) ,\;U = \left( \begin{matrix} {U}^{\prime } & Y \\ 0 & u \end{matrix}\right) , \] with \( {L}^{\prime },{U}^{\prime } \in {\mathbf{M}}_{n - 1}\left( k\right) \), and so on. We likewise obtain the description \[ M = \left( \begin{matrix} {M}^{\prime } & R \\ {S}^{T} & m \end{matrix}\right) \] Multiplying blockwise, we obtain the equations \[ {L}^{\prime }{U}^{\prime } = {M}^{\prime },\;{L}^{\prime }Y = R,\;{\left( {U}^{\prime }\right) }^{T}X = S,\;u = m - {X}^{T}Y. \] By assumption, the leading principal minors of \( {M}^{\prime } \) are nonzero. The induction hypothesis guarantees the existence of the factorization \( {M}^{\prime } = {L}^{\prime }{U}^{\prime } \) . Then \( Y \) and \( X \) are the unique solutions of (triangular) Cramer systems. Finally, \( u \) is explicitly given. Let us evaluate the number of operations needed in the computation of \( L \) and \( U \) . 
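The inductive construction in this proof translates directly into code. Below is a minimal sketch of our own; for brevity the two triangular Cramer systems are solved with a general solver rather than by forward/back substitution.

```python
import numpy as np

# A direct transcription (ours) of the inductive construction in the proof.
def lu(M):
    n = M.shape[0]
    if n == 1:
        return np.eye(1), M.astype(float).copy()
    Lp, Up = lu(M[:-1, :-1])                 # M' = L'U' by induction
    Y = np.linalg.solve(Lp, M[:-1, -1])      # L'Y = R
    X = np.linalg.solve(Up.T, M[-1, :-1])    # (U')^T X = S
    u = M[-1, -1] - X @ Y                    # u = m - X^T Y
    L = np.block([[Lp, np.zeros((n - 1, 1))], [X[None, :], np.eye(1)]])
    U = np.block([[Up, Y[:, None]], [np.zeros((1, n - 1)), np.array([[u]])]])
    return L, U

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # leading minors nonzero here
L, U = lu(M)
print(np.allclose(L @ U, M))                     # True
```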
We pass from a factorization in \( {\mathbf{{GL}}}_{n - 1}\left( k\right) \) to a factorization in \( {\mathbf{{GL}}}_{n}\left( k\right) \) by means of the computations of \( X \) (in \( \left( {n - 1}\right) \left( {n - 2}\right) \) operations), \( Y \) (in \( {\left( n - 1\right) }^{2} \) operations) and \( u \) (in \( 2\left( {n - 1}\right) \) operations), for a total of \( \left( {n - 1}\right) \left( {{2n} - 1}\right) \) operations. Finally, the computation ex nihilo of an \( {LU} \) factorization costs \[ P\left( n\right) = 3 + {10} + \cdots + \left( {n - 1}\right) \left( {{2n} - 1}\right) = \frac{2}{3}{n}^{3} + O\left( {n}^{2}\right) \] operations. Proposition 11.1 The LU factorization is computable in \( \frac{2}{3}{n}^{3} + O\left( {n}^{2}\right) \) operations. One says that the complexity of the \( {LU} \) factorization is \( \frac{2}{3}{n}^{3} \) . ## Remark When all leading principal minors but the last (the determinant of M) are nonzero, the proof above furnishes a factorization \( M = {LU} \), in which \( U \) is not invertible; that is, \( {u}_{nn} = 0 \) . ## 11.1.1 Block Factorization One can likewise perform a blockwise \( {LU} \) factorization. If \( n = {p}_{1} + \cdots + {p}_{r} \) with \( {p}_{j} \geq 1 \), the matrices \( L \) and \( U \) are block-triangular. The diagonal blocks are square, of respective sizes \( {p}_{1},\ldots ,{p}_{r} \) . Those of \( L \) are of the form \( {I}_{{p}_{j}} \), whereas those of \( U \) are invertible. A necessary and sufficient condition for such a factorization to exist is that the leading principal minors of \( M \), of orders \( {p}_{1} + \cdots + {p}_{j}\left( {j \leq r}\right) \), be nonzero. As above, we may relax the assumption \( \det M \neq 0 \), at the price that the last diagonal block of \( U \) is singular. 
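A quick sanity check (our own) of the count \( P\left( n\right) \) and of its leading term \( \frac{2}{3}{n}^{3} \):

```python
# P(n) = 3 + 10 + ... + (n-1)(2n-1), summed over the induction steps.
def P(n):
    return sum((p - 1) * (2 * p - 1) for p in range(2, n + 1))

print(P(2), P(3))             # 3 13  (first partial sums: 3, then 3 + 10)
for n in (10, 100, 1000):
    print(n, P(n) / n**3)     # the ratio approaches 2/3
```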
Such a factorization is useful for the resolution of the linear system \( {MX} = b \) if the diagonal blocks of \( U \) are easily inverted, for instance if their sizes are small enough (say \( {p}_{j} \approx \sqrt{n} \) ). Another favorable situation is when most of the diagonal blocks are equal to each other, because then one has to invert only a few blocks. We have performed this blockwise factorization in Section 3.3.1 when \( r = 2 \) . Recall that if \[ M = \left( \begin{array}{ll} A & B \\ C & D \end{array}\right) \] (11.1) where the diagonal blocks are square and \( A \) is invertible, then \[ M = {LU}\;\text{ with }\;L = \left( \begin{matrix} I & 0 \\ C{A}^{-1} & I \end{matrix}\right) ,\;U = \left( \begin{matrix} A & B \\ 0 & D - C{A}^{-1}B \end{matrix}\right) . \] (11.2) From this, we see that if \( M \) is nonsingular too, then \[ {M}^{-1} = {U}^{-1}{L}^{-1} = \left( \begin{matrix} {A}^{-1} & \cdot \\ 0 & {\left( D - C{A}^{-1}B\right) }^{-1} \end{matrix}\right) \cdot \left( \begin{matrix} I & 0 \\ \cdot & I \end{matrix}\right) = \left( \begin{matrix} \cdot & \cdot \\ \cdot & {\left( D - C{A}^{-1}B\right) }^{-1} \end{matrix}\right) . \] When all the blocks have the same size, a similar analysis yields Banachiewicz' formula. Corollary 11.1 Let \( M \in {\mathbf{{GL}}}_{n}\left( k\right) \), with \( n = {2m} \), read blockwise \[ M = \left( \begin{array}{ll} A & B \\ C & D \end{array}\right) ,\;A, B, C, D \in {\mathbf{{GL}}}_{m}\left( k\right) . \] Then \[ {M}^{-1} = \left( \begin{array}{ll} {\left( A - B{D}^{-1}C\right) }^{-1} & {\left( C - D{B}^{-1}A\right) }^{-1} \\ {\left( B - A{C}^{-1}D\right) }^{-1} & {\left( D - C{A}^{-1}B\right) }^{-1} \end{array}\right) . \] Proof. We can verify the formula by multiplying by \( M \) . The only point to show is that the inverses are meaningful: \( A - B{D}^{-1}C,\ldots \) are invertible. Because of the symmetry of the formulæ, it is enough to check it for a single term, namely \( D - C{A}^{-1}B \) . 
Schur’s complement formula gives \( \det \left( {D - C{A}^{-1}B}\right) = \det M/\det A \), which is nonzero by assumption. ## 11.1.2 Complexity of Matrix Inversion We can now show that the complexity of inverting a matrix is not higher than that of matrix multiplication, at equivalent sizes. This fact is due independently to Boltz, Banachiewicz, and Strassen. We assume here that \( k = \mathbb{R} \) or \( \mathbb{C} \) . Notation 11.1 We denote by \( {J}_{n} \) the number of operations in \( k \) used in the inversion of a typical \( n \times n \) matrix, and by \( {P}_{n} \) the number of operations (in \( k \) ) used in the product of two \( n \times n \) matrices. Of course, the number \( {J}_{n} \) must be understood for generic matrices, that is, for matrices within a dense open subset of \( {\mathbf{M}}_{n}\left( k\right) \) . More important, \( {J}_{n},{P}_{n} \) also depend on the algorithm chosen for inversion or for multiplication. In the sequel we wish to adapt the inversion algorithm to the one used for multiplication. Let us examine first of all the matrices whose size \( n \) has the form \( {2}^{k} \) . We decompose the matrices \( M \in {\mathbf{{GL}}}_{n}\left( k\right) \) blockwise as in (11.1), with blocks of size \( n/2 \times n/2 \) . The condition \( A \in {\mathbf{{GL}}}_{n/2}\left( k\right) \) defines a dense open set, because \( M \mapsto \) \( \det A \) is a nonzero polynomial. Suppose that we are given an inversion algorithm for generic matrices in \( {\mathbf{{GL}}}_{n/2}\left( k\right) \) in \( {j}_{k - 1} = {J}_{{2}^{k - 1}} \) operations. 
Then blockwise \( {LU} \) factorization and the formula \( {M}^{-1} = {U}^{-1}{L}^{-1} \), where \[ {L}^{-1} = \left( \begin{matrix} I & 0 \\ - C{A}^{-1} & I \end{matrix}\right) ,\;{U}^{-1} = \left( \begin{matrix} {A}^{-1} & - {A}^{-1}B{\left( D - C{A}^{-1}B\right) }^{-1} \\ 0 & {\left( D - C{A}^{-1}B\right) }^{-1} \end{matrix}\right) , \] furnish an inversion algorithm for generic matrices in \( {\mathbf{{GL}}}_{n}\left( k\right) \) . We can then bound \( {j}_{k} \) by means of \( {j}_{k - 1} \) and the number \( {\pi }_{k - 1} = {P}_{{2}^{k - 1}} \) of operations used in the computation of the product of two matrices of size \( {2}^{k - 1} \times {2}^{k - 1} \) . We also denote by \( {\sigma }_{k} = {2}^{2k} \) the number of operations involved in the computation of the sum of matrices in \( {\mathbf{M}}_{{2}^{k}}\left( k\right) \) . To compute \( {M}^{-1} \), we first compute \( {A}^{-1} \), then the Schur complement \( D - C{A}^{-1}B \) and its inverse.
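The recursion just described can be sketched as follows (our own illustration for sizes \( n = {2}^{k} \), writing \( S = D - C{A}^{-1}B \) for the Schur complement), with the result compared against a library inverse:

```python
import numpy as np

# A sketch (ours) of the recursive inversion for sizes n = 2^k, built on
# the blockwise LU factorization and M^{-1} = U^{-1} L^{-1}.
def blockinv(M):
    n = M.shape[0]
    if n == 1:
        return 1.0 / M
    m = n // 2
    A, B = M[:m, :m], M[:m, m:]
    C, D = M[m:, :m], M[m:, m:]
    Ai = blockinv(A)                   # invert the leading block recursively
    Si = blockinv(D - C @ Ai @ B)      # invert the Schur complement
    return np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                     [-Si @ C @ Ai, Si]])

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8)) + 8 * np.eye(8)  # generic: blocks invertible
print(np.allclose(blockinv(M), np.linalg.inv(M)))  # True
```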
1077_(GTM235)Compact Lie Groups
Definition 1.1
Definition 1.1. An \( n \) -dimensional topological manifold is a second countable (i.e., possessing a countable basis for the topology) Hausdorff topological space \( M \) that is locally homeomorphic to an open subset of \( {\mathbb{R}}^{n} \) . This means that for all \( m \in M \) there exists a homeomorphism \( \varphi : U \rightarrow V \) for some open neighborhood \( U \) of \( m \) and an open subset \( V \) of \( {\mathbb{R}}^{n} \) . Such a homeomorphism \( \varphi \) is called a chart. Definition 1.2. An \( n \) -dimensional smooth manifold is a topological manifold \( M \) along with a collection of charts, \( \left\{ {{\varphi }_{\alpha } : {U}_{\alpha } \rightarrow {V}_{\alpha }}\right\} \), called an atlas, so that (1) \( M = { \cup }_{\alpha }{U}_{\alpha } \) and (2) For all \( \alpha ,\beta \) with \( {U}_{\alpha } \cap {U}_{\beta } \neq \varnothing \), the transition map \( {\varphi }_{\alpha ,\beta } = {\varphi }_{\beta } \circ {\varphi }_{\alpha }^{-1} \) : \( {\varphi }_{\alpha }\left( {{U}_{\alpha } \cap {U}_{\beta }}\right) \rightarrow {\varphi }_{\beta }\left( {{U}_{\alpha } \cap {U}_{\beta }}\right) \) is a smooth map on \( {\mathbb{R}}^{n} \) . It is an elementary fact that each atlas can be completed to a unique maximal atlas containing the original. By common convention, a manifold's atlas will always be extended to this completion. Besides \( {\mathbb{R}}^{n} \), common examples of manifolds include the \( n \) -sphere, \[ {S}^{n} = \left\{ {x \in {\mathbb{R}}^{n + 1} \mid \parallel x\parallel = 1}\right\} \] where \( \parallel \cdot \parallel \) denotes the standard Euclidean norm, and the \( n \) -torus, \[ {T}^{n} = \underset{n\text{ copies }}{\underbrace{{S}^{1} \times {S}^{1} \times \cdots \times {S}^{1}}}. \] Another important manifold is real projective space, \( \mathbb{P}\left( {\mathbb{R}}^{n}\right) \), which is the \( n \) -dimensional compact manifold of all lines in \( {\mathbb{R}}^{n + 1} \) . 
It may be alternately realized as \( {\mathbb{R}}^{n + 1} \smallsetminus \{ 0\} \) modulo the equivalence relation \( x \sim {\lambda x} \) for \( x \in {\mathbb{R}}^{n + 1} \smallsetminus \{ 0\} \) and \( \lambda \in \mathbb{R} \smallsetminus \{ 0\} \) , or as \( {S}^{n} \) modulo the equivalence relation \( x \sim \pm x \) for \( x \in {S}^{n} \) . More generally, the Grassmannian, \( {\operatorname{Gr}}_{k}\left( {\mathbb{R}}^{n}\right) \), consists of all \( k \) -planes in \( {\mathbb{R}}^{n} \) . It is a compact manifold of dimension \( k\left( {n - k}\right) \) and reduces to \( \mathbb{P}\left( {\mathbb{R}}^{n - 1}\right) \) when \( k = 1 \) . Write \( {M}_{n, m}\left( \mathbb{F}\right) \) for the set of \( n \times m \) matrices over \( \mathbb{F} \) where \( \mathbb{F} \) is either \( \mathbb{R} \) or \( \mathbb{C} \) . By looking at each coordinate, \( {M}_{n, m}\left( \mathbb{R}\right) \) may be identified with \( {\mathbb{R}}^{nm} \) and \( {M}_{n, m}\left( \mathbb{C}\right) \) with \( {\mathbb{R}}^{2nm} \) . Since the determinant is continuous on \( {M}_{n, n}\left( \mathbb{F}\right) \), we see \( {\det }^{-1}\{ 0\} \) is a closed subset. Thus the general linear group \[ {GL}\left( {n,\mathbb{F}}\right) = \left\{ {g \in {M}_{n, n}\left( \mathbb{F}\right) \mid g\text{ is invertible }}\right\} \] (1.3) is an open subset of \( {M}_{n, n}\left( \mathbb{F}\right) \) and therefore a manifold. In a similar spirit, for any finite-dimensional vector space \( V \) over \( \mathbb{F} \), we write \( {GL}\left( V\right) \) for the set of invertible linear transformations on \( V \) . ## 1.1.2 Lie Groups Definition 1.4. A Lie group \( G \) is a group and a manifold so that (1) the multiplication map \( \mu : G \times G \rightarrow G \) given by \( \mu \left( {g,{g}^{\prime }}\right) = g{g}^{\prime } \) is smooth and (2) the inverse map \( \iota : G \rightarrow G \) by \( \iota \left( g\right) = {g}^{-1} \) is smooth. 
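Before turning to examples of Lie groups, here is a small numerical illustration (our own, not from the text) of Definition 1.2 on \( {S}^{1} \): the two stereographic charts, projecting from the north and south poles, have transition map \( t \mapsto 1/t \), which is smooth on the chart overlap.

```python
import numpy as np

# Stereographic charts on S^1 (our worked example).
def north(p):                  # chart on S^1 minus the north pole (0, 1)
    x, y = p
    return x / (1 - y)

def south_inv(t):              # inverse of the chart at the south pole (0, -1)
    return np.array([2 * t, 1 - t**2]) / (1 + t**2)

ts = np.array([-3.0, -0.5, 0.25, 2.0])            # points in the chart overlap
print(np.allclose(north(south_inv(ts)), 1 / ts))  # transition map is t -> 1/t
```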
A trivial example of a Lie group is furnished by \( {\mathbb{R}}^{n} \) with its additive group structure. A slightly fancier example of a Lie group is given by \( {S}^{1} \) . In this case, the group structure is inherited from multiplication in \( \mathbb{C} \smallsetminus \{ 0\} \) via the identification \[ {S}^{1} \cong \left\{ {z \in \mathbb{C} : \left| z\right| = 1}\right\} \] However, the most interesting example of a Lie group so far is \( {GL}\left( {n,\mathbb{F}}\right) \) . To verify \( {GL}\left( {n,\mathbb{F}}\right) \) is a Lie group, first observe that multiplication is smooth since it is a polynomial map in the coordinates. Checking that the inverse map is smooth requires the standard linear algebra formula \( {g}^{-1} = \operatorname{adj}\left( g\right) /\det g \), where \( \operatorname{adj}\left( g\right) \) is the transpose of the matrix of cofactors. In particular, the coordinates of \( \operatorname{adj}\left( g\right) \) are polynomial functions in the coordinates of \( g \) and \( \det g \) is a nonvanishing polynomial on \( {GL}\left( {n,\mathbb{F}}\right) \) so the inverse is a smooth map. Writing down further examples of Lie groups requires a bit more machinery. In fact, most of our future examples of Lie groups arise naturally as subgroups of \( {GL}\left( {n,\mathbb{F}}\right) \) . To this end, we next develop the notion of a Lie subgroup. ## 1.1.3 Lie Subgroups and Homomorphisms Recall that an (immersed) submanifold \( N \) of \( M \) is the image of a manifold \( {N}^{\prime } \) under an injective immersion \( \varphi : {N}^{\prime } \rightarrow M \) (i.e., a one-to-one smooth map whose differential has full rank at each point of \( {N}^{\prime } \) ) together with the manifold structure on \( N \) making \( \varphi : {N}^{\prime } \rightarrow N \) a diffeomorphism. It is a familiar fact from differential geometry that the resulting topology on \( N \) may not coincide with the relative topology on \( N \) as a subset of \( M \) . 
A submanifold \( N \) whose topology agrees with the relative topology is called a regular (or imbedded) submanifold. Defining the notion of a Lie subgroup is very similar. Essentially the word homomorphism needs to be thrown in. Definition 1.5. A Lie subgroup \( H \) of a Lie group \( G \) is the image in \( G \) of a Lie group \( {H}^{\prime } \) under an injective immersive homomorphism \( \varphi : {H}^{\prime } \rightarrow G \) together with the Lie group structure on \( H \) making \( \varphi : {H}^{\prime } \rightarrow H \) a diffeomorphism. The map \( \varphi \) in the above definition is required to be smooth. However, we will see in Exercise 4.13 that it actually suffices to verify that \( \varphi \) is continuous. As with manifolds, a Lie subgroup is not required to be a regular submanifold. A typical example of this phenomenon is constructed by wrapping a line around the torus at an irrational angle (Exercise 1.5). However, regular Lie subgroups play a special role and there happens to be a remarkably simple criterion for determining when Lie subgroups are regular. Theorem 1.6. Let \( G \) be a Lie group and \( H \subseteq G \) a subgroup (with no manifold assumption). Then \( H \) is a regular Lie subgroup if and only if \( H \) is closed. The proof of this theorem requires a fair amount of effort. Although some of the necessary machinery is developed in §4.1.2, the proof lies almost entirely within the purview of a course on differential geometry. For the sake of clarity of exposition and since the result is only used to efficiently construct examples of Lie groups in §1.1.4 and §1.3.2, the proof of this theorem is relegated to Exercise 4.28. While we are busy putting off work, we record another useful theorem whose proof, for similar reasons, can also be left to a course on differential geometry (e.g., [8] or [88]). We note, however, that a proof of this result follows almost immediately once Theorem 4.6 is established. 
Theorem 1.7. Let \( H \) be a closed subgroup of a Lie group \( G \) . Then there is a unique manifold structure on the quotient space \( G/H \) so the projection map \( \pi : G \rightarrow G/H \) is smooth, and so there exist local smooth sections of \( G/H \) into \( G \) . Pressing on, an immediate corollary of Theorem 1.6 provides an extremely useful method of constructing new Lie groups. The corollary requires the well-known fact that when \( f : H \rightarrow M \) is a smooth map of manifolds with \( f\left( H\right) \subseteq N, N \) a regular submanifold of \( M \), then \( f : H \rightarrow N \) is also a smooth map (see [8] or [88]). Corollary 1.8. A closed subgroup of a Lie group is a Lie group in its own right with respect to the relative topology. Another common method of constructing Lie groups depends on the Rank Theorem from differential geometry. Definition 1.9. A homomorphism of Lie groups is a smooth homomorphism between two Lie groups. Theorem 1.10. If \( G \) and \( {G}^{\prime } \) are Lie groups and \( \varphi : G \rightarrow {G}^{\prime } \) is a homomorphism of Lie groups, then \( \varphi \) has constant rank and \( \ker \varphi \) is a (closed) regular Lie subgroup of \( G \) of dimension \( \dim G - \operatorname{rk}\varphi \) where \( \operatorname{rk}\varphi \) is the rank of the differential of \( \varphi \) . Proof. It is well known (see [8]) that if a smooth map \( \varphi \) has constant rank, then \( {\varphi }^{-1}\{ e\} \) is a closed regular submanifold of \( G \) of dimension \( \dim G - \operatorname{rk}\varphi \) . Since \( \ker \varphi \) is a subgroup, it suffices to show that \( \varphi \) has constant rank. Write \( {l}_{g} \) for left translation by \( g \) . Because \( \varphi \) is a homomorphism, \( \varphi \circ {l}_{g} = {l}_{\varphi \left( g\right) } \circ \varphi \), and since \( {l}_{g} \) is a diffeomorphism, the rank result follows by taking differentials. 
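As a worked instance of Theorem 1.10 (our own example), take \( \varphi = \det : {GL}\left( {n,\mathbb{R}}\right) \rightarrow {GL}\left( {1,\mathbb{R}}\right) \) . By Jacobi’s formula the differential at \( g \) is \( H \mapsto \operatorname{tr}\left( {\operatorname{adj}\left( g\right) H}\right) \), which is nonzero, hence of rank 1, at every invertible \( g \) ; so \( \ker \varphi = {SL}\left( {n,\mathbb{R}}\right) \) is a regular Lie subgroup of dimension \( {n}^{2} - 1 \) . A finite-difference check:

```python
import numpy as np

# Central finite difference of det at g in direction H.
def ddet(g, H, eps=1e-6):
    return (np.linalg.det(g + eps * H) - np.linalg.det(g - eps * H)) / (2 * eps)

rng = np.random.default_rng(2)
n = 3
g = rng.standard_normal((n, n))

# Gradient of det at g: entry (i, j) is the derivative along E_ij.
E = np.eye(n)
grad = np.array([[ddet(g, np.outer(E[i], E[j])) for j in range(n)]
                 for i in range(n)])

# Jacobi's formula: grad = adj(g)^T = det(g) * (g^{-1})^T, which is nonzero
# on GL(n), so d(det)_g has rank 1 and dim ker(det) = n^2 - 1.
print(np.allclose(grad, np.linalg.det(g) * np.linalg.inv(g).T, atol=1e-5))
```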
## 1.1.4 Compact Classical Lie Groups With the help of Corollary 1.8, it is easy
1094_(GTM250)Modern Fourier Analysis
Definition 6.1.4
Definition 6.1.4. A finite set of tiles \( \mathbf{P} \) is called a tree if there exists a tile \( t \in \mathbf{P} \) such that all \( s \in \mathbf{P} \) satisfy \( s \leq t \) . We call \( t \) the top of \( \mathbf{P} \) and we denote it by \( t = \operatorname{top}\left( \mathbf{P}\right) \) . Observe that the top of a tree is unique. We denote trees by \( \mathbf{T},{\mathbf{T}}^{\prime },{\mathbf{T}}_{1},{\mathbf{T}}_{2} \), and so on. We observe that every finite set of tiles \( \mathbf{P} \) can be written as a union of trees whose tops are maximal elements. Indeed, consider all maximal elements of \( \mathbf{P} \) under the partial order \( < \) . Then every nonmaximal element \( s \) of \( \mathbf{P} \) satisfies \( s < t \) for some maximal element \( t \in \mathbf{P} \), and thus it belongs to a tree with top \( t \) . Tiles can be written as a union of two semitiles \( {I}_{s} \times {\omega }_{s\left( 1\right) } \) and \( {I}_{s} \times {\omega }_{s\left( 2\right) } \) . Since tiles have area 1, semitiles have area \( 1/2 \) . Definition 6.1.5. A tree \( \mathbf{T} \) is called a 1-tree if \[ {\omega }_{\operatorname{top}\left( \mathbf{T}\right) \left( 1\right) } \subseteqq {\omega }_{s\left( 1\right) } \] for all \( s \in \mathbf{T} \) . A tree \( {\mathbf{T}}^{\prime } \) is called a 2-tree if for all \( s \in {\mathbf{T}}^{\prime } \) we have \[ {\omega }_{\operatorname{top}\left( {\mathbf{T}}^{\prime }\right) \left( 2\right) } \subseteqq {\omega }_{s\left( 2\right) } \] We make a few observations about 1-trees and 2-trees. First note that every tree can be written as the union of a 1-tree and a 2-tree, and the intersection of these is exactly the top of the tree. Also, if \( \mathbf{T} \) is a 1-tree, then the intervals \( {\omega }_{\operatorname{top}\left( \mathbf{T}\right) \left( 2\right) } \) and \( {\omega }_{s\left( 2\right) } \) are disjoint for all \( s \in \mathbf{T} \) and similarly for 2-trees. See Figure 6.2. Definition 6.1.6.
Let \( N : \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \) be a measurable function, let \( s \in \mathbf{D} \), and let \( E \) be a set of finite measure. Then we introduce the quantity \[ \mathcal{M}\left( {E;\{ s\} }\right) = \frac{1}{\left| E\right| }\mathop{\sup }\limits_{\substack{{u \in \mathbf{D}} \\ {s < u} }}\mathop{\int }\limits_{{E \cap {N}^{-1}\left\lbrack {\omega }_{u}\right\rbrack }}\frac{{\left| {I}_{u}\right| }^{-1}{dx}}{{\left( 1 + \frac{\left| x - c\left( {I}_{u}\right) \right| }{\left| {I}_{u}\right| }\right) }^{10}}. \] We call \( \mathcal{M}\left( {E;\{ s\} }\right) \) the mass of \( E \) with respect to \( \{ s\} \) . ![4d96d124-1e54-43db-8e54-2e727636ea55_442_0.jpg](images/4d96d124-1e54-43db-8e54-2e727636ea55_442_0.jpg) Fig. 6.2 A tree of seven tiles, including the darkened top. The top together with the three tiles on the right forms a 1-tree, while the top together with the three tiles on the left forms a 2-tree. Given a subset \( \mathbf{P} \) of \( \mathbf{D} \), we define the mass of \( E \) with respect to \( \mathbf{P} \) as \[ \mathcal{M}\left( {E;\mathbf{P}}\right) = \mathop{\sup }\limits_{{s \in \mathbf{P}}}\mathcal{M}\left( {E;\{ s\} }\right) \] We observe that the mass of \( E \) with respect to any set of tiles is at most \[ \frac{1}{\left| E\right| }{\int }_{-\infty }^{+\infty }\frac{dx}{{\left( 1 + \left| x\right| \right) }^{10}} \leq \frac{1}{\left| E\right| }. \] Definition 6.1.7.
Given a finite subset \( \mathbf{P} \) of \( \mathbf{D} \) and a function \( g \) in \( {L}^{2}\left( \mathbf{R}\right) \), we introduce the quantity \[ \mathcal{E}\left( {g;\mathbf{P}}\right) = \frac{1}{\parallel g{\parallel }_{{L}^{2}}}\mathop{\sup }\limits_{\mathbf{T}}{\left( \frac{1}{\left| {I}_{\operatorname{top}\left( \mathbf{T}\right) }\right| }\mathop{\sum }\limits_{{s \in \mathbf{T}}}{\left| \left\langle g \mid {\varphi }_{s}\right\rangle \right| }^{2}\right) }^{\frac{1}{2}}, \] where the supremum is taken over all 2-trees \( \mathbf{T} \) contained in \( \mathbf{P} \) . We call \( \mathcal{E}\left( {g;\mathbf{P}}\right) \) the energy of the function \( g \) with respect to the set of tiles \( \mathbf{P} \) . We now state three important lemmas which we prove in the remaining three subsections, respectively. Lemma 6.1.8. There exists a constant \( {C}_{1} \) such that for any measurable function \( N \) : \( \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \), for any measurable subset \( E \) of the real line with finite measure, and for any finite set of tiles \( \mathbf{P} \) there is a subset \( {\mathbf{P}}^{\prime } \) of \( \mathbf{P} \) such that \[ \mathcal{M}\left( {E;\mathbf{P} \smallsetminus {\mathbf{P}}^{\prime }}\right) \leq \frac{1}{4}\mathcal{M}\left( {E;\mathbf{P}}\right) \] and \( {\mathbf{P}}^{\prime } \) is a union of trees \( {\mathbf{T}}_{j} \) satisfying \[ \mathop{\sum }\limits_{j}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{j}\right) }\right| \leq \frac{{C}_{1}}{\mathcal{M}\left( {E;\mathbf{P}}\right) } \] (6.1.33) Lemma 6.1.9. 
There exists a constant \( {C}_{2} \) such that for any finite set of tiles \( \mathbf{P} \) and for all functions \( g \) in \( {L}^{2}\left( \mathbf{R}\right) \) there is a subset \( {\mathbf{P}}^{\prime \prime } \) of \( \mathbf{P} \) such that \[ \mathcal{E}\left( {g;\mathbf{P} \smallsetminus {\mathbf{P}}^{\prime \prime }}\right) \leq \frac{1}{2}\mathcal{E}\left( {g;\mathbf{P}}\right) \] and \( {\mathbf{P}}^{\prime \prime } \) is a union of trees \( {\mathbf{T}}_{j} \) satisfying \[ \mathop{\sum }\limits_{j}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{j}\right) }\right| \leq \frac{{C}_{2}}{\mathcal{E}{\left( g;\mathbf{P}\right) }^{2}} \] (6.1.34) Lemma 6.1.10. (The basic estimate) There is a finite constant \( {C}_{3} \) such that for all trees \( \mathbf{T} \), all functions \( g \) in \( {L}^{2}\left( \mathbf{R}\right) \), for any measurable function \( N : \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \), and for all measurable sets \( E \) we have \[ \mathop{\sum }\limits_{{s \in \mathbf{T}}}\left| {\left\langle {g \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right| \] (6.1.35) \[ \leq {C}_{3}\left| {I}_{\operatorname{top}\left( \mathbf{T}\right) }\right| \mathcal{E}\left( {g;\mathbf{T}}\right) \mathcal{M}\left( {E;\mathbf{T}}\right) \parallel g{\parallel }_{{L}^{2}}\left| E\right| . \] In the rest of this subsection, we conclude the proof of Theorem 6.1.1 assuming Lemmas 6.1.8, 6.1.9, and 6.1.10. Given a finite set of tiles \( \mathbf{P} \), a measurable set \( E \) of finite measure, a measurable function \( N : \mathbf{R} \rightarrow {\mathbf{R}}^{ + } \), and a function \( f \) in \( \mathcal{S}\left( \mathbf{R}\right) \), we find a very large integer \( {n}_{0} \) such that \[ \mathcal{E}\left( {f;\mathbf{P}}\right) \leq {2}^{{n}_{0}} \] \[ \mathcal{M}\left( {E;\mathbf{P}}\right) \leq {2}^{2{n}_{0}}. 
\] We shall construct by decreasing induction a sequence of pairwise disjoint sets \[ {\mathbf{P}}_{{n}_{0}},{\mathbf{P}}_{{n}_{0} - 1},{\mathbf{P}}_{{n}_{0} - 2},{\mathbf{P}}_{{n}_{0} - 3},\ldots \] such that \[ \mathop{\bigcup }\limits_{{j = - \infty }}^{{n}_{0}}{\mathbf{P}}_{j} = \mathbf{P} \] (6.1.36) and such that the following properties are satisfied: (1) \( \mathcal{E}\left( {f;{\mathbf{P}}_{j}}\right) \leq {2}^{j + 1} \) for all \( j \leq {n}_{0} \) ; (2) \( \mathcal{M}\left( {E;{\mathbf{P}}_{j}}\right) \leq {2}^{{2j} + 2} \) for all \( j \leq {n}_{0} \) ; (3) \( \mathcal{E}\left( {f;\mathbf{P} \smallsetminus \left( {{\mathbf{P}}_{{n}_{0}} \cup \cdots \cup {\mathbf{P}}_{j}}\right) }\right) \leq {2}^{j} \) for all \( j \leq {n}_{0} \) ; (4) \( \mathcal{M}\left( {E;\mathbf{P} \smallsetminus \left( {{\mathbf{P}}_{{n}_{0}} \cup \cdots \cup {\mathbf{P}}_{j}}\right) }\right) \leq {2}^{2j} \) for all \( j \leq {n}_{0} \) ; (5) \( {\mathbf{P}}_{j} \) is a union of trees \( {\mathbf{T}}_{jk} \) such that for all \( j \leq {n}_{0} \) we have \[ \mathop{\sum }\limits_{k}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{jk}\right) }\right| \leq {C}_{0}{2}^{-{2j}} \] where \( {C}_{0} = {C}_{1} + {C}_{2} \) and \( {C}_{1} \) and \( {C}_{2} \) are the constants that appear in Lemmas 6.1.8 and 6.1.9, respectively. Assume momentarily that we have constructed a sequence \( {\left\{ {\mathbf{P}}_{j}\right\} }_{j \leq {n}_{0}} \) with the described properties. 
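The decomposition of a finite tile set into trees whose tops are maximal elements (the observation following Definition 6.1.4, used again in building the sets \( {\mathbf{P}}_{j} \) above) is a purely order-theoretic step. A small illustrative sketch, under the assumption that a tile \( s \) is encoded as a pair of half-open intervals \( ({I}_{s},{\omega }_{s}) \) and that \( s \leq t \) means \( {I}_{s} \subseteq {I}_{t} \) and \( {\omega }_{t} \subseteq {\omega }_{s} \):

```python
def leq(s, t):
    """Partial order on tiles s = (I_s, w_s): s <= t iff I_s ⊆ I_t and w_t ⊆ w_s."""
    (Is, ws), (It, wt) = s, t
    inside = lambda a, b: b[0] <= a[0] and a[1] <= b[1]
    return inside(Is, It) and inside(wt, ws)

def tree_decomposition(tiles):
    """Split a finite tile set into trees whose tops are the maximal elements."""
    maximal = [t for t in tiles
               if not any(leq(t, u) and u != t for u in tiles)]
    trees = {top: [top] for top in maximal}
    for s in tiles:
        if s not in maximal:
            # every nonmaximal tile sits below some maximal one
            top = next(t for t in maximal if leq(s, t))
            trees[top].append(s)
    return trees

# toy tiles (time interval, frequency interval); the first dominates the next two
tiles = [
    ((0, 4), (0.0, 0.25)),    # t: time [0,4), frequency [0, 1/4)
    ((0, 2), (0.0, 0.5)),     # s1 <= t
    ((2, 4), (0.0, 0.5)),     # s2 <= t
    ((0, 1), (0.5, 1.5)),     # s3: frequency disjoint from t's, hence maximal
]
forest = tree_decomposition(tiles)
print({top: sorted(members) for top, members in forest.items()})
```

Here the first three tiles form one tree with the large tile as top, and the fourth tile is a tree by itself.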
Then to obtain estimate (6.1.32) we use (1), (2), (5), the observation that the mass of any set of tiles is always bounded by \( {\left| E\right| }^{-1} \), and Lemma 6.1.10 to obtain \[ \mathop{\sum }\limits_{{s \in \mathbf{P}}}\left| {\left\langle {f \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right| \] \[ = \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{{s \in {\mathbf{P}}_{j}}}\left| {\left\langle {f \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right| \] \[ \leq \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{k}\mathop{\sum }\limits_{{s \in {\mathbf{T}}_{jk}}}\left| {\left\langle {f \mid {\varphi }_{s}}\right\rangle \left\langle {{\chi }_{E \cap {N}^{-1}\left\lbrack {\omega }_{s\left( 2\right) }\right\rbrack } \mid {\varphi }_{s}}\right\rangle }\right| \] \[ \leq {C}_{3}\mathop{\sum }\limits_{j}\mathop{\sum }\limits_{k}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{jk}\right) }\right| \mathcal{E}\left( {f;{\mathbf{T}}_{jk}}\right) \mathcal{M}\left( {E;{\mathbf{T}}_{jk}}\right) \parallel f{\parallel }_{{L}^{2}}\left| E\right| \] \[ \leq {C}_{3}\mathop{\sum }\limits_{j}\mathop{\sum }\limits_{k}\left| {I}_{\operatorname{top}\left( {\mathbf{T}}_{jk}\right) }
1167_(GTM73)Algebra
Definition 1.1
Definition 1.1. A field \( \mathrm{F} \) is said to be an extension field of \( \mathrm{K} \) (or simply an extension of \( \mathrm{K} \) ) provided that \( \mathrm{K} \) is a subfield of \( \mathrm{F} \) . If \( F \) is an extension field of \( K \), then it is easy to see that \( {1}_{K} = {1}_{F} \) . Furthermore, \( F \) is a vector space over \( K \) (Definition IV.1.1). Throughout this chapter the dimension of the \( K \) -vector space \( F \) will be denoted by \( \left\lbrack {F : K}\right\rbrack \) rather than \( {\dim }_{K}F \) as previously. \( F \) is said to be a finite dimensional extension or infinite dimensional extension of \( K \) according as \( \left\lbrack {F : K}\right\rbrack \) is finite or infinite. Theorem 1.2. Let \( \mathrm{F} \) be an extension field of \( \mathrm{E} \) and \( \mathrm{E} \) an extension field of \( \mathrm{K} \) . Then \( \left\lbrack {\mathrm{F} : \mathrm{K}}\right\rbrack = \left\lbrack {\mathrm{F} : \mathrm{E}}\right\rbrack \left\lbrack {\mathrm{E} : \mathrm{K}}\right\rbrack \) . Furthermore \( \left\lbrack {\mathrm{F} : \mathrm{K}}\right\rbrack \) is finite if and only if \( \left\lbrack {\mathrm{F} : \mathrm{E}}\right\rbrack \) and \( \left\lbrack {\mathrm{E} : \mathrm{K}}\right\rbrack \) are finite. PROOF. This is a restatement of Theorem IV.2.16. In the situation \( K \subset E \subset F \) of Theorem 1.2, \( E \) is said to be an intermediate field of \( K \) and \( F \) . If \( F \) is a field and \( X \subset F \), then the subfield [resp. subring] generated by \( \mathbf{X} \) is the intersection of all subfields [resp. subrings] of \( F \) that contain \( X \) . If \( F \) is an extension field of \( K \) and \( X \subset F \), then the subfield [resp. subring] generated by \( K \cup X \) is called the subfield [resp. subring] generated by \( \mathbf{X} \) over \( \mathbf{K} \) and is denoted \( K\left( X\right) \) [resp. \( K\left\lbrack X\right\rbrack \) ]. 
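As a concrete instance of Theorem 1.2, take \( K = \mathbf{Q}, E = \mathbf{Q}\left( \sqrt{2}\right), F = \mathbf{Q}\left( {\sqrt{2},\sqrt{3}}\right) \) : then \( \left\lbrack {F : K}\right\rbrack = \left\lbrack {F : E}\right\rbrack \left\lbrack {E : K}\right\rbrack = 2 \cdot 2 = 4 \), with \( K \) -basis \( \{ 1,\sqrt{2},\sqrt{3},\sqrt{6}\} \) . A pure-Python sketch (an illustration, not part of the text), coding an element \( a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6} \) as a coefficient 4-tuple, checks that \( u = \sqrt{2} + \sqrt{3} \) satisfies the degree-4 relation \( {u}^{4} - {10}{u}^{2} + 1 = 0 \) that the tower predicts:

```python
from fractions import Fraction as Q

# element of Q(sqrt2, sqrt3) as (a, b, c, d) <-> a + b*sqrt2 + c*sqrt3 + d*sqrt6
def mul(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 + 2*b1*b2 + 3*c1*c2 + 6*d1*d2,   # constant term
            a1*b2 + b1*a2 + 3*(c1*d2 + d1*c2),     # sqrt2 coefficient
            a1*c2 + c1*a2 + 2*(b1*d2 + d1*b2),     # sqrt3 coefficient
            a1*d2 + d1*a2 + b1*c2 + c1*b2)         # sqrt6 coefficient

def add(x, y):
    return tuple(p + q for p, q in zip(x, y))

def scale(k, x):
    return tuple(k * p for p in x)

u  = (Q(0), Q(1), Q(1), Q(0))        # u = sqrt2 + sqrt3
u2 = mul(u, u)                        # 5 + 2*sqrt6
u4 = mul(u2, u2)                      # 49 + 20*sqrt6

# u^4 - 10 u^2 + 1 = 0, so [Q(u):Q] <= 4; since u generates Q(sqrt2, sqrt3),
# the tower [F:Q] = [F:Q(sqrt2)][Q(sqrt2):Q] = 2*2 = 4 forces equality.
relation = add(add(u4, scale(Q(-10), u2)), (Q(1), Q(0), Q(0), Q(0)))
print(relation)    # all four coordinates are 0
```

The multiplication table used in `mul` is exactly the statement that \( \{ 1,\sqrt{2},\sqrt{3},\sqrt{6}\} \) spans a ring, hence (being finite dimensional over \( \mathbf{Q} \) ) a field.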
Note that \( K\left\lbrack X\right\rbrack \) is necessarily an integral domain. If \( X = \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \), then the subfield \( K\left( X\right) \) [resp. subring \( K\left\lbrack X\right\rbrack \) ] of \( F \) is denoted \( K\left( {{u}_{1},\ldots ,{u}_{n}}\right) \) [resp. \( K\left\lbrack {{u}_{1},\ldots ,{u}_{n}}\right\rbrack \) ]. The field \( K\left( {{u}_{1},\ldots ,{u}_{n}}\right) \) is said to be a finitely generated extension of \( K \) (but it need not be finite dimensional over \( K \) ; see Exercise 2). If \( X = \{ u\} \), then \( K\left( u\right) \) is said to be a simple extension of \( K \) . A routine verification shows that neither \( K\left( {{u}_{1},\ldots ,{u}_{n}}\right) \) nor \( K\left\lbrack {{u}_{1},\ldots ,{u}_{n}}\right\rbrack \) depends on the order of the \( {u}_{i} \) and that \( K\left( {{u}_{1},\ldots ,{u}_{n - 1}}\right) \left( {u}_{n}\right) = K\left( {{u}_{1},\ldots ,{u}_{n}}\right) \) and \( K\left\lbrack {{u}_{1},\ldots ,{u}_{n - 1}}\right\rbrack \left\lbrack {u}_{n}\right\rbrack = K\left\lbrack {{u}_{1},\ldots ,{u}_{n}}\right\rbrack \) (Exercise 4). These facts will be used frequently in the sequel without explicit mention. NOTATION. If \( F \) is a field \( u, v \in F \), and \( v \neq 0 \), then \( u{v}^{-1} \in F \) will sometimes be denoted by \( u/v \) . Theorem 1.3. 
If \( \mathrm{F} \) is an extension field of a field \( \mathrm{K},\mathrm{u},{\mathrm{u}}_{\mathrm{i}}\varepsilon \mathrm{F} \), and \( \mathrm{X} \subset \mathrm{F} \), then (i) the subring \( \mathrm{K}\left\lbrack \mathrm{u}\right\rbrack \) consists of all elements of the form \( \mathrm{f}\left( \mathrm{u}\right) \), where \( \mathrm{f} \) is a polynomial with coefficients in \( \mathrm{K} \) (that is, \( \mathrm{f}\varepsilon \mathrm{K}\left\lbrack \mathrm{x}\right\rbrack \) ); (ii) the subring \( \mathrm{K}\left\lbrack {{\mathrm{u}}_{1},\ldots ,{\mathrm{u}}_{\mathrm{m}}}\right\rbrack \) consists of all elements of the form \( \mathrm{g}\left( {{\mathrm{u}}_{1},{\mathrm{u}}_{2},\ldots ,{\mathrm{u}}_{\mathrm{m}}}\right) \) , where \( \mathrm{g} \) is a polynomial in \( \mathrm{m} \) indeterminates with coefficients in \( \mathrm{K} \) (that is, \( \left. {g \in K\left\lbrack {{x}_{1},\ldots ,{x}_{m}}\right\rbrack }\right) \) ; (iii) the subring \( \mathrm{K}\left\lbrack \mathrm{X}\right\rbrack \) consists of all elements of the form \( \mathrm{h}\left( {{\mathrm{u}}_{1},\ldots ,{\mathrm{u}}_{\mathrm{n}}}\right) \), where each \( {\mathrm{u}}_{\mathrm{i}} \in \mathrm{X},\mathrm{n} \) is a positive integer, and \( \mathrm{h} \) is a polynomial in \( \mathrm{n} \) indeterminates with coefficients in \( \mathrm{K} \) (that is, \( \mathrm{n}\varepsilon {\mathbf{N}}^{ * },\mathrm{\;h}\varepsilon \mathrm{K}\left\lbrack {{\mathrm{x}}_{1},\ldots ,{\mathrm{x}}_{\mathrm{n}}}\right\rbrack \) ); (iv) the subfield \( \mathrm{K}\left( \mathrm{u}\right) \) consists of all elements of the form \( \mathrm{f}\left( \mathrm{u}\right) /\mathrm{g}\left( \mathrm{u}\right) = \mathrm{f}\left( \mathrm{u}\right) \mathrm{g}{\left( \mathrm{u}\right) }^{-1} \) , where \( \mathrm{f},\mathrm{g} \in \mathrm{K}\left\lbrack \mathrm{x}\right\rbrack \) and \( \mathrm{g}\left( \mathrm{u}\right) \neq 0 \) ; (v) the subfield \( \mathrm{K}\left( {{\mathrm{u}}_{1},\ldots 
,{\mathrm{u}}_{\mathrm{m}}}\right) \) consists of all elements of the form \[ h\left( {{u}_{1},\ldots ,{u}_{m}}\right) /k\left( {{u}_{1},\ldots ,{u}_{m}}\right) = h\left( {{u}_{1},\ldots ,{u}_{m}}\right) k{\left( {u}_{1},\ldots ,{u}_{m}\right) }^{-1}, \] where \( \mathrm{h},\mathrm{k}\varepsilon \mathrm{K}\left\lbrack {{\mathrm{x}}_{1},\ldots ,{\mathrm{x}}_{\mathrm{m}}}\right\rbrack \) and \( \mathrm{k}\left( {{\mathrm{u}}_{1},\ldots ,{\mathrm{u}}_{\mathrm{m}}}\right) \neq 0 \) ; (vi) the subfield \( \mathrm{K}\left( \mathrm{X}\right) \) consists of all elements of the form \[ f\left( {{u}_{1},\ldots ,{u}_{n}}\right) /g\left( {{u}_{1},\ldots ,{u}_{n}}\right) = f\left( {{u}_{1},\ldots ,{u}_{n}}\right) g{\left( {u}_{1},\ldots ,{u}_{n}\right) }^{-1} \] where \( \mathrm{n}\varepsilon {\mathrm{N}}^{ * },\mathrm{f},\mathrm{g}\varepsilon \mathrm{K}\left\lbrack {{\mathrm{x}}_{1},\ldots ,{\mathrm{x}}_{n}}\right\rbrack ,{\mathrm{u}}_{1},\ldots ,{\mathrm{u}}_{n}\varepsilon \mathrm{X} \) and \( \mathrm{g}\left( {{\mathrm{u}}_{1},\ldots ,{\mathrm{u}}_{n}}\right) \neq 0 \) . (vii) For each \( \mathrm{v}\varepsilon \mathrm{K}\left( \mathrm{X}\right) \) (resp. \( \mathrm{K}\left\lbrack \mathrm{X}\right\rbrack \) ) there is a finite subset \( {\mathrm{X}}^{\prime } \) of \( \mathrm{X} \) such that \( \mathrm{v}\varepsilon \mathrm{K}\left( {\mathrm{X}}^{\prime }\right) \) (resp. \( \mathrm{K}\left\lbrack {\mathrm{X}}^{\prime }\right\rbrack \) ). SKETCH OF PROOF. (vi) Every field that contains \( K \) and \( X \) must contain the set \( E = \left\{ {f\left( {{u}_{1},\ldots ,{u}_{n}}\right) /g\left( {{u}_{1},\ldots ,{u}_{n}}\right) \mid n \in {\mathbf{N}}^{ * };\; f, g \in K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack ;\;{u}_{i} \in X;\; g\left( {{u}_{1},\ldots ,{u}_{n}}\right) \neq 0}\right\} \), whence \( K\left( X\right) \supset E \) .
Conversely, if \( f, g \in K\left\lbrack {{x}_{1},\ldots ,{x}_{m}}\right\rbrack \) and \( {f}_{1},{g}_{1} \in K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \), then define \( h, k \in K\left\lbrack {{x}_{1},\ldots ,{x}_{m + n}}\right\rbrack \) by \[ h\left( {{x}_{1},\ldots ,{x}_{m + n}}\right) = f\left( {{x}_{1},\ldots ,{x}_{m}}\right) {g}_{1}\left( {{x}_{m + 1},\ldots ,{x}_{m + n}}\right) \] \[ - g\left( {{x}_{1},\ldots ,{x}_{m}}\right) {f}_{1}\left( {{x}_{m + 1},\ldots ,{x}_{m + n}}\right) \] \[ k\left( {{x}_{1},\ldots ,{x}_{m + n}}\right) = g\left( {{x}_{1},\ldots ,{x}_{m}}\right) {g}_{1}\left( {{x}_{m + 1},\ldots ,{x}_{m + n}}\right) . \] Then for any \( {u}_{1},\ldots ,{u}_{m},{v}_{1},\ldots ,{v}_{n} \in X \) such that \( g\left( {{u}_{1},\ldots ,{u}_{m}}\right) \neq 0,{g}_{1}\left( {{v}_{1},\ldots ,{v}_{n}}\right) \neq 0 \) , \[ \frac{f\left( {{u}_{1},\ldots ,{u}_{m}}\right) }{g\left( {{u}_{1},\ldots ,{u}_{m}}\right) } - \frac{{f}_{1}\left( {{v}_{1},\ldots ,{v}_{n}}\right) }{{g}_{1}\left( {{v}_{1},\ldots ,{v}_{n}}\right) } = \frac{h\left( {{u}_{1},\ldots ,{u}_{m},{v}_{1},\ldots ,{v}_{n}}\right) }{k\left( {{u}_{1},\ldots ,{u}_{m},{v}_{1},\ldots ,{v}_{n}}\right) } \in E. \] Therefore, \( E \) is a group under addition (Theorem I.2.5). Similarly the nonzero elements of \( E \) form a group under multiplication, whence \( E \) is a field. Since \( X \subset E \) and \( K \subset E \), we have \( K\left( X\right) \subset E \) . Therefore, \( K\left( X\right) = E \) . (vii) If \( u \in K\left( X\right) \), then by (vi) \( u = f\left( {{u}_{1},\ldots ,{u}_{n}}\right) /g\left( {{u}_{1},\ldots ,{u}_{n}}\right) \in K\left( {X}^{\prime }\right) \), where \( {X}^{\prime } = \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \subset X \) . If \( L \) and \( M \) are subfields of a field \( F \), the composite of \( L \) and \( M \) in \( F \), denoted \( {LM} \) is the subfield generated by the set \( L \cup M \) . 
An immediate consequence of this definition is that \( {LM} = L\left( M\right) = M\left( L\right) \) . It is easy to show that if \( K \) is a subfield of \( L \cap M \) such that \( M = K\left( S\right) \) where \( S \subset M \), then \( {LM} = L\left( S\right) \) (Exercise 5). The relationships of the dimensions \( \left\lbrack {L : K}\right\rbrack ,\left\lbrack {M : K}\right\rbrack ,\left\lbrack {{LM} : K}\right\rbrack \), etc. are considered in Exercises 20-21. The composite of any finite number of subfields \( {E}_{1},{E}_{2},\ldots ,{E}_{n} \) is defined to be the subfield generated by the set \( {E}_{1} \cup {E}_{2} \cup \cdots \cup {E}_{n} \) and is denoted \( {E}_{1}{E}_{2}\cdots {E}_{n} \) (see Exercise 5). The next step in the study of field extensions is to distinguish two fundamentally different situation
1033_(GTM196)Basic Homological Algebra
Definition 4.41
Definition 4.41 (von Neumann) If \( R \) is a ring, then \( R \) is regular if, for all \( a \in R \), there exists \( r \in R \) for which \( a = {ara} \) . Suppose \( a = {ara} \) . Then \( {ra} = {rara} = {\left( ra\right) }^{2} \), that is, \( {ra} \) is an idempotent. Furthermore, \( {Rra} \subset {Ra} \) trivially, while \( a = {ara} \in {Rra} \), so that \( {Rra} \supset {Ra} \) . Hence, \( {Ra} = {Rra} \) . In words, every principal left ideal is generated by an idempotent. The significance of this is contained in the next lemma, the first of four we need in discussing weak dimension zero. Lemma 4.42 Suppose \( R \) is a ring, and \( I \) is a left ideal. Then \( I \) is a direct summand of \( R \) if and only if \( I \) is principal and generated by an idempotent. Proof: If \( I = {Re} \), with \( e = {e}^{2} \), set \( f = 1 - e \) . Then \( 1 = e + \left( {1 - e}\right) \in I + {Rf} \) , while if \( r, s \in R \) and \( {re} = s\left( {1 - e}\right) \in I \cap {Rf} \), then \( {re} = s - {se} \Rightarrow s = {se} + {re} \Rightarrow \) \( {se} = \left( {{se} + {re}}\right) e = s{e}^{2} + r{e}^{2} = {se} + {re} \Rightarrow {re} = 0 \) . Hence, \( I \cap {Rf} = 0 \) and \( I \oplus {Rf} = R \) . If \( R = I \oplus J \) for some left ideal \( J \), then \( 1 = e + f \) for some \( e \in I \) and \( f \in J \) . Thus, \( {Re} \subset I \) and \( {Rf} \subset J \) ; also, if \( x \in R \), then \( x = x \cdot 1 = \) \( x \cdot \left( {e + f}\right) = {xe} + {xf} \) . Hence, " \( x = {xe} + {xf} \) " is the decomposition of \( x \) into a sum of elements of \( I \) and \( J \) . Taking \( x \in I \) gives \( x = {xe} \) (and \( 0 = {xf} \) ); in particular, \( e = {e}^{2} \) and \( I \subset {Ie} \) . Hence, \( I \supset {Re} \supset {Ie} \supset I \), so \( I = {Re} \) is principal and generated by an idempotent.
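The definition and the remark after it can be checked by brute force in a small commutative example: \( \mathbf{Z}/6 \) is von Neumann regular (6 is squarefree), and for each \( a \) the idempotent \( {ra} \) generates the same principal ideal as \( a \) . A minimal sketch (an illustration, not part of the text):

```python
n = 6
Zn = range(n)

def pseudo_inverse(a):
    """Return some r with a = a*r*a in Z/n, or None if a is not regular."""
    return next((r for r in Zn if (a * r * a) % n == a % n), None)

for a in Zn:
    r = pseudo_inverse(a)
    assert r is not None                      # Z/6 is von Neumann regular
    e = (r * a) % n
    assert (e * e) % n == e                   # ra is idempotent
    ideal_a = {(x * a) % n for x in Zn}
    ideal_e = {(x * e) % n for x in Zn}
    assert ideal_a == ideal_e                 # Ra = Rra, as in the text

print("Z/6 is regular; idempotent generators:",
      {a: (pseudo_inverse(a) * a) % n for a in Zn})
```

The idempotents that arise are exactly \( 0,1,3,4 \), the four idempotents of \( \mathbf{Z}/6 \) ; by Lemma 4.42 each of the corresponding principal ideals is a direct summand.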
Lemma 4.43 Suppose \( R \) is a ring, and suppose \( e \) and \( f \) are idempotents in \( R \) such that \( {ef} = 0 = {fe} \) . Then \( e + f \) is idempotent and \( {Re} + {Rf} = R\left( {e + f}\right) \) . Proof: \( {\left( e + f\right) }^{2} = {e}^{2} + {ef} + {fe} + {f}^{2} = {e}^{2} + {f}^{2} = e + f \) . Further, \( e + f \in \) \( {Re} + {Rf} \), so \( R\left( {e + f}\right) \subset {Re} + {Rf} \) . Finally, \( e = e\left( {e + f}\right) \in R\left( {e + f}\right) \) and \( f = f\left( {e + f}\right) \in R\left( {e + f}\right) \), so that \( {Re} + {Rf} \subset R\left( {e + f}\right) \) . Lemma 4.44 Suppose \( R \) is a ring. Then \( {Ra} + {Rb} = {Ra} + {Rb}\left( {1 - a}\right) \) . Proof: \( b\left( {1 - a}\right) = b - {ba} = - {ba} + b \in {Ra} + {Rb} \), so \( {Rb}\left( {1 - a}\right) \subset {Ra} + {Rb} \) . \( {Ra} \subset {Ra} + {Rb} \) trivially, so \( {Ra} + {Rb}\left( {1 - a}\right) \subset {Ra} + {Rb} \) . On the other hand, \( b = {ba} + b\left( {1 - a}\right) \in {Ra} + {Rb}\left( {1 - a}\right) \), so \( {Rb} \subset {Ra} + {Rb}\left( {1 - a}\right) \) . Again \( {Ra} \subset {Ra} + {Rb}\left( {1 - a}\right) \), so \( {Ra} + {Rb} \subset {Ra} + {Rb}\left( {1 - a}\right) \) . Lemma 4.45 Suppose \( R \) is regular. Then every finitely generated left ideal is principal (and generated by an idempotent). Proof: Suppose we knew the sum of two principal left ideals was principal. Then the set of principal left ideals would be closed under addition of ideals, and so would include every finitely generated ideal. It thus suffices to show that \( {Re} + {Rf} \) is principal if \( e \) and \( f \) are idempotents. (Recall that if \( R \) is regular, then every principal left ideal is generated by an idempotent.) For this purpose, we successively modify \( e \) and \( f \) without changing the sum \( {Re} + {Rf} \) until we are in the situation of Lemma 4.43. First of all, \( {Re} + {Rf} = {Re} + {Rf}\left( {1 - e}\right) \) by Lemma 4.44. 
Write \( f\left( {1 - e}\right) = \) \( f\left( {1 - e}\right) {rf}\left( {1 - e}\right) \), and set \( {f}^{\prime } = {rf}\left( {1 - e}\right) \) . Then \( {Rf}\left( {1 - e}\right) = R{f}^{\prime } \), so \( {Re} + {Rf} = {Re} + R{f}^{\prime } \) . Also, \( {f}^{\prime }e = {rf}\left( {1 - e}\right) e = {rf}\left( {e - {e}^{2}}\right) = 0 \) . Next, do the same to \( e : R{f}^{\prime } + {Re} = R{f}^{\prime } + {Re}\left( {1 - {f}^{\prime }}\right) \) . Set \( {e}^{\prime } = e\left( {1 - {f}^{\prime }}\right) \) . This \( {e}^{\prime } \) is already idempotent: \( {\left( {e}^{\prime }\right) }^{2} = e\left( {1 - {f}^{\prime }}\right) e\left( {1 - {f}^{\prime }}\right) = e\left( {e - {f}^{\prime }e}\right) \left( {1 - {f}^{\prime }}\right) = {e}^{2}\left( {1 - {f}^{\prime }}\right) = \) \( e\left( {1 - {f}^{\prime }}\right) = {e}^{\prime } \) . Furthermore, \( {e}^{\prime }{f}^{\prime } = e\left( {1 - {f}^{\prime }}\right) {f}^{\prime } = e\left( {{f}^{\prime } - {\left( {f}^{\prime }\right) }^{2}}\right) = 0 \), while \( {f}^{\prime }{e}^{\prime } = {f}^{\prime }e\left( {1 - {f}^{\prime }}\right) = 0 \) . Hence, \( {Re} + {Rf} = {Re} + R{f}^{\prime } = R{e}^{\prime } + R{f}^{\prime } = R\left( {{e}^{\prime } + {f}^{\prime }}\right) \) , the last equality by Lemma 4.43. Theorem 4.46 (Weak Dimension Zero Characterization) Suppose \( R \) is a ring. The following conditions are equivalent: \[ \text{i)}\mathrm{W} - \dim R = 0\text{.} \] ii) Every left \( R \) -module is flat. iii) For every finitely generated left ideal \( I, R/I \) is projective. iv) \( {\operatorname{Tor}}_{1}\left( {R/J, R/I}\right) = 0 \) for every finitely generated right ideal \( J \) and every finitely generated left ideal I. \[ \text{v)}{\operatorname{Tor}}_{1}\left( {R/{aR}, R/{Ra}}\right) = 0\text{for every}a \in R\text{.} \] vi) \( R \) is regular. 
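The successive modifications in the proof of Lemma 4.45 can be traced concretely. A sketch (an illustration, not part of the text) in the regular ring \( {M}_{2}\left( {\mathbf{F}}_{2}\right) \) (matrix rings over a field are von Neumann regular), with \( e, f \) chosen so that \( {ef} \) and \( {fe} \) are nonzero; the steps \( {f}^{\prime } = {rf}\left( {1 - e}\right) \) and \( {e}^{\prime } = e\left( {1 - {f}^{\prime }}\right) \) below follow the proof verbatim:

```python
from itertools import product

# 2x2 matrices over F_2, encoded as 4-tuples (a, b, c, d), row-major
R = list(product((0, 1), repeat=4))
I2 = (1, 0, 0, 1)

def mul(x, y):
    a, b, c, d = x
    p, q, u, v = y
    return ((a*p + b*u) % 2, (a*q + b*v) % 2,
            (c*p + d*u) % 2, (c*q + d*v) % 2)

def sub(x, y):                      # subtraction = addition in characteristic 2
    return tuple((p - q) % 2 for p, q in zip(x, y))

def left_ideal(*gens):              # R g1 + R g2 + ... as a set of elements
    sums = {(0, 0, 0, 0)}
    for g in gens:
        sums = {tuple((p + q) % 2 for p, q in zip(s, mul(x, g)))
                for s in sums for x in R}
    return sums

e = (1, 0, 0, 0)
f = (1, 1, 0, 0)                    # both idempotent, with ef != 0 != fe

# Step 1: replace f by f' = r f(1-e), where f(1-e) = f(1-e) r f(1-e)
f1e = mul(f, sub(I2, e))
r = next(x for x in R if mul(mul(f1e, x), f1e) == f1e)   # regularity of R
fp = mul(r, f1e)
# Step 2: replace e by e' = e(1 - f'); then e'f' = f'e' = 0
ep = mul(e, sub(I2, fp))

assert mul(fp, e) == (0, 0, 0, 0) and mul(fp, ep) == (0, 0, 0, 0)
assert mul(ep, fp) == (0, 0, 0, 0)
target = tuple((p + q) % 2 for p, q in zip(ep, fp))       # e' + f'
assert left_ideal(e, f) == left_ideal(target)             # Re+Rf = R(e'+f')
print("Lemma 4.45 construction verified in M2(F2)")
```

With these particular \( e \) and \( f \) the construction produces orthogonal idempotents whose sum generates the same left ideal as \( e \) and \( f \) together, exactly as Lemma 4.43 requires.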
Proof: We prove that (i) \( \Rightarrow \) (ii) \( \Rightarrow \) (iv) \( \Rightarrow \) (v) \( \Rightarrow \) (vi) \( \Rightarrow \) (iii) \( \Rightarrow \) (i). (i) \( \Rightarrow \) (ii) by definition. (ii) \( \Rightarrow \) (iv) by Corollary 3.19. (iv) \( \Rightarrow \) (v) trivially. (v) \( \Rightarrow \) (vi) by Exercise 9(c) of Chapter 3, which states, " \( {\operatorname{Tor}}_{1}\left( {R/J, R/I}\right) \approx \left( {J \cap I}\right) /{JI} \) ": If \( \;{aR} \cap {Ra} = {aR} \cdot {Ra}, \) then \( \;a \in {aR} \cap {Ra} = {aR} \cdot {Ra} = {aRa} \Rightarrow a = {ara} \) for some \( r \in R \) . (vi) \( \Rightarrow \) (iii), since every finitely generated left ideal \( I \) is then principal and generated by an idempotent (Lemma 4.45), hence is a direct summand of \( R \) (Lemma 4.42). But \( R/I \) is the other summand of \( R \) , and so is projective (since \( R \) is projective). Finally,(iii) \( \Rightarrow \) (i) by the weak dimension theorem. One final remark. The occurrence of the word "projective" in (iii) is no accident. \( R/I \) is finitely presented (since \( I \) is finitely generated), so \( R/I \) is projective if and only if it is flat (Theorem 4.19). ## 4.4 An Example There are examples to complement some of the results. In Exercise 7 there is a commutative example where: (i) weak and global dimensions differ; (ii) there is a flat, finitely generated module which is not finitely presented (and not projective); and (iii) there is a projective ideal (consisting of zero divisors) which is not finitely generated. In the next chapter, there will be an example where right and left global dimensions are not the same. Here we shall discuss an example where weak and global dimensions differ in a systematic way. This example is not as well known as it should be; it is a Bézout domain, that is, an integral domain in which every finitely generated ideal is principal. It is not a PID.
The global and weak dimensions always differ in this circumstance. Furthermore, the constructions computing the global dimension mimic (in a simpler setting) some constructions in the next chapter. First, a few simple facts about Bézout domains. Suppose \( R \) is an integral domain, and \( a \) is a nonzero element of \( R \) . Then \( R \) is isomorphic to \( {Ra} \) as an \( R \) -module via \( x \mapsto {xa} \) . Thus, any principal ideal in an integral domain is projective, hence flat. It follows that any Bézout domain has weak dimension less than or equal to one by Corollary 4.14. However, in an integral domain, \( 0 \neq a = {ara} \) implies \( {ra} = 1 \) . That is, a regular integral domain is a field. Consequently, by Theorem 4.46, any Bézout domain which is not a field has weak dimension one. If a Bézout domain is not a PID, then it must contain ideals which are not finitely generated, hence not projective by Proposition 4.23. By Corollary 4.11, the global dimension will be greater than or equal to two. Suppose \( R \) is a Bézout domain, and suppose \( I \) is a nonprincipal ideal generated by a countable set \( \left\{ {{r}_{1},{r}_{2},\ldots }\right\} ,{r}_{1} \neq 0 \) . We shall show that P-dim \( I = 1 \) . Let \( {I}_{n} \) be the ideal generated by \( \left\{ {{r}_{1},\ldots ,{r}_{n}}\right\} \) . Say \( {I}_{n} = R{a}_{n} \) , so \( {a}_{n} \mid {r}_{k} \) if \( k \leq n \), and \( {I}_{n} \subset {I}_{n + 1} \), so that \( {a}_{n + 1} \mid {a}_{n} \), say \( {a}_{n} = {d}_{n}{a}_{n + 1} \) . \( \bigcup {I}_{n} = I \) . Note that all \( {a}_{n} \) are nonzero. Let \( F \) be free on \( 1,2,\ldots \), and send \( \left( {{x}_{1},{x}_{2},\ldots }\right) \in F = {\bigoplus }_{i = 1}^{\infty }R \) to \( \sum {x}_{j}{a}_{j} \) . This maps \( F \) onto \( I \) . Set \( {v}_{1} = \left( {1, - {d}_{1},0,0,\ldots }\right) ,{v}_{2} = \left( {0,1, - {d}_{2},0,0,\ldots }\right) \) , etc.
Note that \( {v}_{j} \mapsto {a}_{j} - {d}_{j}{a}_{j + 1} = 0 \), so all \( {v}_{j} \) are in the kernel \( K \) of \( F \rightarrow I \) . In fact, the \( {v}_{j} \) form a free basis of \( K \), as we shall shortly see. \[ \text{Suppose}\left( {{x}_{1},{x}_{2},\ldots }\right) \in K\text{, so that}\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}{a}_{j} = 0\text{. Then}
1189_(GTM95)Probability-1
Definition 1
Definition 1. The conditional probability of event \( B \) given that event \( A,\mathrm{P}\left( A\right) > 0 \) , occurred (denoted by \( \mathrm{P}\left( {B \mid A}\right) \) ) is \[ \frac{\mathrm{P}\left( {AB}\right) }{\mathrm{P}\left( A\right) }\text{.} \] (1) In the classical approach we have \( \mathrm{P}\left( A\right) = N\left( A\right) /N\left( \Omega \right) ,\mathrm{P}\left( {AB}\right) = N\left( {AB}\right) /N\left( \Omega \right) \) , and therefore \[ \mathrm{P}\left( {B \mid A}\right) = \frac{N\left( {AB}\right) }{N\left( A\right) }. \] (2) From Definition 1 we immediately get the following properties of conditional probability: \[ \mathrm{P}\left( {A \mid A}\right) = 1 \] \[ \mathrm{P}\left( {\varnothing \mid A}\right) = 0 \] \[ \mathrm{P}\left( {B \mid A}\right) = 1,\;B \supseteq A, \] \[ \mathrm{P}\left( {{B}_{1} + {B}_{2} \mid A}\right) = \mathrm{P}\left( {{B}_{1} \mid A}\right) + \mathrm{P}\left( {{B}_{2} \mid A}\right) . \] It follows from these properties that for a given set \( A \) the conditional probability \( \mathrm{P}\left( {\cdot \mid A}\right) \) has the same properties on the space \( \left( {\Omega \cap A,\mathcal{A} \cap A}\right) \), where \( \mathcal{A} \cap A = \) \( \{ B \cap A : B \in \mathcal{A}\} \), that the original probability \( \mathrm{P}\left( \cdot \right) \) has on \( \left( {\Omega ,\mathcal{A}}\right) \) . Note that \[ \mathrm{P}\left( {B \mid A}\right) + \mathrm{P}\left( {\bar{B} \mid A}\right) = 1 \] however in general \[ \mathrm{P}\left( {B \mid A}\right) + \mathrm{P}\left( {B \mid \bar{A}}\right) \neq 1 \] \[ \mathrm{P}\left( {B \mid A}\right) + \mathrm{P}\left( {\bar{B} \mid \bar{A}}\right) \neq 1. \] Example 1. Consider a family with two children. We ask for the probability that both children are boys, assuming (a) that the older child is a boy; (b) that at least one of the children is a boy. 
The sample space is \[ \Omega = \{ \mathrm{{BB}},\mathrm{{BG}},\mathrm{{GB}},\mathrm{{GG}}\} \] where BG means that the older child is a boy and the younger is a girl, etc. Let us suppose that all sample points are equally probable: \[ \mathrm{P}\left( \mathrm{{BB}}\right) = \mathrm{P}\left( \mathrm{{BG}}\right) = \mathrm{P}\left( \mathrm{{GB}}\right) = \mathrm{P}\left( \mathrm{{GG}}\right) = \frac{1}{4}. \] Let \( A \) be the event that the older child is a boy, and \( B \), that the younger child is a boy. Then \( A \cup B \) is the event that at least one child is a boy, and \( {AB} \) is the event that both children are boys. In question (a) we want the conditional probability \( \mathrm{P}\left( {{AB} \mid A}\right) \), and in (b), the conditional probability \( \mathrm{P}\left( {{AB} \mid A \cup B}\right) \) . It is easy to see that \[ \mathrm{P}\left( {{AB} \mid A}\right) = \frac{\mathrm{P}\left( {AB}\right) }{\mathrm{P}\left( A\right) } = \frac{\frac{1}{4}}{\frac{1}{2}} = \frac{1}{2} \] \[ \mathrm{P}\left( {{AB} \mid A \cup B}\right) = \frac{\mathrm{P}\left( {AB}\right) }{\mathrm{P}\left( {A \cup B}\right) } = \frac{\frac{1}{4}}{\frac{3}{4}} = \frac{1}{3}. \] 2. The simple but important formula (3), below, is called the formula for total probability. It provides the basic means for calculating the probabilities of complicated events by using conditional probabilities. Consider a decomposition \( \mathcal{D} = \left\{ {{A}_{1},\ldots ,{A}_{n}}\right\} \) with \( \mathrm{P}\left( {A}_{i}\right) > 0, i = 1,\ldots, n \) (such a decomposition is often called a complete set of disjoint events). 
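Returning to Example 1, both answers can be checked by brute-force enumeration of the sample space. A minimal sketch (the helper names `prob` and `cond` are ours, not from the text):

```python
from fractions import Fraction

# Sample space: first letter = older child, second = younger; all equally likely.
omega = ["BB", "BG", "GB", "GG"]
p = {w: Fraction(1, 4) for w in omega}

A = {w for w in omega if w[0] == "B"}   # older child is a boy
B = {w for w in omega if w[1] == "B"}   # younger child is a boy
AB = A & B                              # both children are boys

def prob(event):
    return sum(p[w] for w in event)

def cond(event, given):
    # P(event | given) = P(event ∩ given) / P(given), as in Definition 1
    return prob(event & given) / prob(given)

print(cond(AB, A))       # 1/2  -- question (a)
print(cond(AB, A | B))   # 1/3  -- question (b)
```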
It is clear that \[ B = B{A}_{1} + \cdots + B{A}_{n} \] and therefore \[ \mathrm{P}\left( B\right) = \mathop{\sum }\limits_{{i = 1}}^{n}\mathrm{P}\left( {B{A}_{i}}\right) \] But \[ \mathrm{P}\left( {B{A}_{i}}\right) = \mathrm{P}\left( {B \mid {A}_{i}}\right) \mathrm{P}\left( {A}_{i}\right) \] Hence we have the formula for total probability: \[ \mathrm{P}\left( B\right) = \mathop{\sum }\limits_{{i = 1}}^{n}\mathrm{P}\left( {B \mid {A}_{i}}\right) \mathrm{P}\left( {A}_{i}\right) \] (3) In particular, if \( 0 < \mathrm{P}\left( A\right) < 1 \), then \[ \mathrm{P}\left( B\right) = \mathrm{P}\left( {B \mid A}\right) \mathrm{P}\left( A\right) + \mathrm{P}\left( {B \mid \bar{A}}\right) \mathrm{P}\left( \bar{A}\right) . \] (4) Example 2. An urn contains \( M \) balls, \( m \) of which are "lucky." We ask for the probability that the second ball drawn is lucky (assuming that the result of the first draw is unknown, that a sample of size 2 is drawn without replacement, and that all outcomes are equally probable). Let \( A \) be the event that the first ball is lucky, and \( B \) the event that the second is lucky. Then \[ \mathrm{P}\left( {B \mid A}\right) = \frac{\mathrm{P}\left( {BA}\right) }{\mathrm{P}\left( A\right) } = \frac{\frac{m\left( {m - 1}\right) }{M\left( {M - 1}\right) }}{\frac{m}{M}} = \frac{m - 1}{M - 1}, \] \[ \mathrm{P}\left( {B \mid \bar{A}}\right) = \frac{\mathrm{P}\left( {B\bar{A}}\right) }{\mathrm{P}\left( \bar{A}\right) } = \frac{\frac{m\left( {M - m}\right) }{M\left( {M - 1}\right) }}{\frac{M - m}{M}} = \frac{m}{M - 1} \] and \[ \mathrm{P}\left( B\right) = \mathrm{P}\left( {B \mid A}\right) \mathrm{P}\left( A\right) + \mathrm{P}\left( {B \mid \bar{A}}\right) \mathrm{P}\left( \bar{A}\right) \] \[ = \frac{m - 1}{M - 1} \cdot \frac{m}{M} + \frac{m}{M - 1} \cdot \frac{M - m}{M} = \frac{m}{M}. \] It is interesting to observe that \( \mathrm{P}\left( A\right) \) is precisely \( m/M \) . 
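The answer \( \mathrm{P}\left( B\right) = m/M \) in Example 2 can be confirmed by exhaustive enumeration of all ordered samples of size 2 without replacement. A minimal sketch (the function name is ours):

```python
from fractions import Fraction
from itertools import permutations

def p_second_lucky(M, m):
    # M balls, m of them lucky; enumerate all ordered draws of size 2.
    balls = ["L"] * m + ["U"] * (M - m)
    draws = list(permutations(range(M), 2))
    lucky = [d for d in draws if balls[d[1]] == "L"]   # second ball lucky
    return Fraction(len(lucky), len(draws))

print(p_second_lucky(5, 2))   # 2/5, i.e., m/M
```

The result agrees with the total-probability computation above for every choice of \( M \) and \( m \).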
Hence, when the nature of the first ball is unknown, it does not affect the probability that the second ball is lucky. By the definition of conditional probability (with \( \mathrm{P}\left( A\right) > 0 \) ), \[ \mathrm{P}\left( {AB}\right) = \mathrm{P}\left( {B \mid A}\right) \mathrm{P}\left( A\right) \] (5) This formula, the multiplication formula for probabilities, can be generalized (by induction) as follows: If \( {A}_{1},\ldots ,{A}_{n - 1} \) are events with \( \mathrm{P}\left( {{A}_{1}\cdots {A}_{n - 1}}\right) > 0 \), then \[ \mathrm{P}\left( {{A}_{1}\cdots {A}_{n}}\right) = \mathrm{P}\left( {A}_{1}\right) \mathrm{P}\left( {{A}_{2} \mid {A}_{1}}\right) \cdots \mathrm{P}\left( {{A}_{n} \mid {A}_{1}\cdots {A}_{n - 1}}\right) \] (6) (here \( {A}_{1}\cdots {A}_{n} = {A}_{1} \cap {A}_{2} \cap \cdots \cap {A}_{n} \) ). 3. Suppose that \( A \) and \( B \) are events with \( \mathrm{P}\left( A\right) > 0 \) and \( \mathrm{P}\left( B\right) > 0 \) . Then along with (5) we have the parallel formula \[ \mathrm{P}\left( {AB}\right) = \mathrm{P}\left( {A \mid B}\right) \mathrm{P}\left( B\right) . \] (7) From (5) and (7) we obtain Bayes's formula \[ \mathrm{P}\left( {A \mid B}\right) = \frac{\mathrm{P}\left( A\right) \mathrm{P}\left( {B \mid A}\right) }{\mathrm{P}\left( B\right) }. \] (8) If the events \( {A}_{1},\ldots ,{A}_{n} \) form a decomposition of \( \Omega \), then (3) and (8) imply Bayes's theorem: \[ \mathrm{P}\left( {{A}_{i} \mid B}\right) = \frac{\mathrm{P}\left( {A}_{i}\right) \mathrm{P}\left( {B \mid {A}_{i}}\right) }{\mathop{\sum }\limits_{{j = 1}}^{n}\mathrm{P}\left( {A}_{j}\right) \mathrm{P}\left( {B \mid {A}_{j}}\right) }. \] (9) In statistical applications, \( {A}_{1},\ldots ,{A}_{n}\left( {{A}_{1} + \cdots + {A}_{n} = \Omega }\right) \) are often called hypotheses, and \( \mathrm{P}\left( {A}_{i}\right) \) is called the prior (or a priori)* probability of \( {A}_{i} \) . The conditional probability \( \mathrm{P}\left( {{A}_{i} \mid B}\right) \) is considered as the posterior (or a posteriori) probability of \( {A}_{i} \) after the occurrence of event \( B \) . * A priori: before the experiment; a posteriori: after the experiment. Example 3. Let an urn contain two coins: \( {A}_{1} \), a fair coin with probability \( \frac{1}{2} \) of falling \( \mathrm{H} \) ; and \( {A}_{2} \), a biased coin with probability \( \frac{1}{3} \) of falling \( \mathrm{H} \) . A coin is drawn at random and tossed. Suppose that it falls heads. We ask for the probability that the fair coin was selected. Let us construct the corresponding probabilistic model. Here it is natural to take the sample space to be the set \( \Omega = \left\{ {{A}_{1}\mathrm{H},{A}_{1}\mathrm{\;T},{A}_{2}\mathrm{H},{A}_{2}\mathrm{\;T}}\right\} \), which describes all possible outcomes of a selection and a toss \( \left( {{A}_{1}\mathrm{H}}\right. \) means that \( \operatorname{coin}{A}_{1} \) was selected and fell heads, etc.) The probabilities \( p\left( \omega \right) \) of the various outcomes have to be assigned so that, according to the statement of the problem, \[ \mathrm{P}\left( {A}_{1}\right) = \mathrm{P}\left( {A}_{2}\right) = \frac{1}{2} \] and \[ \mathrm{P}\left( {\mathrm{H} \mid {A}_{1}}\right) = \frac{1}{2},\;\mathrm{P}\left( {\mathrm{H} \mid {A}_{2}}\right) = \frac{1}{3}. \] With these assignments, the probabilities of the sample points are uniquely determined: \[ \mathrm{P}\left( {{A}_{1}\mathrm{H}}\right) = \frac{1}{4},\;\mathrm{P}\left( {{A}_{1}\mathrm{\;T}}\right) = \frac{1}{4},\;\mathrm{P}\left( {{A}_{2}\mathrm{H}}\right) = \frac{1}{6},\;\mathrm{P}\left( {{A}_{2}\mathrm{\;T}}\right) = \frac{1}{3}.
\] Then by Bayes's formula the probability in question is \[ \mathrm{P}\left( {{A}_{1} \mid \mathrm{H}}\right) = \frac{\mathrm{P}\left( {A}_{1}\right) \mathrm{P}\left( {\mathrm{H} \mid {A}_{1}}\right) }{\mathrm{P}\left( {A}_{1}\right) \mathrm{P}\left( {\mathrm{H} \mid {A}_{1}}\right) + \mathrm{P}\left( {A}_{2}\right) \mathrm{P}\left( {\mathrm{H} \mid {A}_{2}}\right) } = \frac{3}{5}, \] and therefore \[ \mathrm{P}\left( {{A}_{2} \mid \mathrm{H}}\right) = \frac{2}{5}. \] 4. In a certain sense, the concept of independence, which we are now going to introduce, plays a central role in probability theory: it is precisely this concept that distinguishes probability theory from the general theory of measure spaces. If \( A \) and \( B \) are two events, it is natural to say that \( B \) is independent of \( A \) if knowing
113_Topological Groups
Definition 11.39
Definition 11.39. A formula \( \varphi \) is existential if it is in prenex normal form with only existential quantifiers. Let \( {\mathcal{L}}^{\prime } \) be a Skolem expansion of \( \mathcal{L} \), with notation as in 11.33. With each prenex formula \( \varphi \) of \( {\mathcal{L}}^{\prime } \) we associate the prenex formula \( {\varphi }^{\mathrm{n}} \) obtained from \( \varphi \) by interchanging \( \exists \) and \( \forall \) and replacing the matrix \( \psi \) of \( \varphi \) by \( \neg \psi \) . Now with each prenex formula \( \varphi \) of \( {\mathcal{L}}^{\prime } \) we associate a formula \( {\varphi }^{\mathrm{H}} \) : \[ {\varphi }^{\mathrm{H}} = \varphi \text{if}\varphi \text{is quantifier free;} \] \[ {\left( \exists \alpha \varphi \right) }^{\mathrm{H}} = \exists \alpha {\varphi }^{\mathrm{H}}; \] \[ {\left( \forall {v}_{i}\varphi \right) }^{\mathrm{H}} = {\varphi }^{\mathrm{H}}\left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) , \] where \( \sigma = {S}_{\psi }^{j}{\beta }_{0}\cdots {\beta }_{m - 1},\psi = \exists {v}_{i}{\varphi }^{\mathrm{n}},\mathrm{{Fv}}\psi = \left\{ {{\beta }_{0},\ldots ,{\beta }_{m - 1}}\right\} \) with \( {v}^{-1}{\beta }_{0} < \) \( \cdots < {v}^{-1}{\beta }_{m - 1} \), and \( j \) is chosen minimal so that \( \psi \) is a formula of \( {\mathcal{L}}_{j} \) . Theorem 11.40. Let \( {\mathcal{L}}^{\prime } \) be a Skolem expansion of \( \mathcal{L} \), and let \( \varphi \) be a prenex formula of \( {\mathcal{L}}^{\prime } \) . Then: (i) \( {\varphi }^{\mathrm{H}} \) is an existential formula; (ii) \( \vDash {\varphi }^{\mathrm{H}} \leftrightarrow \neg {\varphi }^{\mathrm{{nS}}} \) ; (iii) if \( \varphi \) is a sentence, then \( \vDash \varphi \) iff \( \vDash {\varphi }^{\mathrm{H}} \) . Proof. (i) is obvious from the definitions. To prove (ii), first note that \( \vdash \neg \varphi \leftrightarrow {\varphi }^{\mathrm{n}} \) . 
Next we prove (ii) by induction on the length of \( \varphi \) ; it is clear if \( \varphi \) is quantifier free. Now \( {\left( \exists \alpha \varphi \right) }^{\mathrm{H}} = \exists \alpha {\varphi }^{\mathrm{H}} \), and we assume inductively that \( \vDash {\varphi }^{\mathrm{H}} \leftrightarrow \neg {\varphi }^{\mathrm{{nS}}} \) . Thus \( \vDash {\left( \exists \alpha \varphi \right) }^{\mathrm{H}} \leftrightarrow \exists \alpha \neg {\varphi }^{\mathrm{{nS}}} \) . Also \( \vDash \exists \alpha \neg {\varphi }^{\mathrm{{nS}}} \leftrightarrow \neg \forall \alpha {\varphi }^{\mathrm{{nS}}} \), and \( {\left( \forall \alpha {\varphi }^{\mathrm{n}}\right) }^{\mathrm{S}} = \forall \alpha {\varphi }^{\mathrm{{nS}}} \) . Since \( {\left( \exists \alpha \varphi \right) }^{\mathrm{n}} = \forall \alpha {\varphi }^{\mathrm{n}} \), it follows that \( \vDash {\left( \exists \alpha \varphi \right) }^{\mathrm{H}} \leftrightarrow \neg {\left( \exists \alpha \varphi \right) }^{\mathrm{{nS}}} \) , as desired. Next, let \( {\left( \forall {v}_{i}\varphi \right) }^{\mathrm{H}} = {\varphi }^{\mathrm{H}}\left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) \), with notation as in 11.39. By the induction hypothesis, \( \vDash {\varphi }^{\mathrm{H}} \leftrightarrow \neg {\varphi }^{\mathrm{{nS}}} \), so \( \vDash {\varphi }^{\mathrm{H}}\left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) \) \( \leftrightarrow \neg {\varphi }^{\mathrm{{nS}}}\left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) \) . Recalling 11.37, we see that \( {\varphi }^{\mathrm{{nS}}}\left( {{v}_{0},\ldots ,{v}_{i - 1},\sigma }\right) = \) \( {\left( \exists {v}_{i}{\varphi }^{\mathrm{n}}\right) }^{\mathrm{S}} \) . Note that \( {\left( \forall {v}_{i}\varphi \right) }^{\mathrm{n}} = \exists {v}_{i}{\varphi }^{\mathrm{n}} \) . Thus \( \vDash {\left( \forall {v}_{i}\varphi \right) }^{\mathrm{H}} \leftrightarrow \neg {\left( \forall {v}_{i}\varphi \right) }^{\mathrm{{nS}}} \), as desired.
Finally, we prove (iii): \( \vDash \varphi \) iff \( \neg \varphi \) does not have a model iff \( {\varphi }^{\mathrm{n}} \) does not have a model iff \( {\varphi }^{\mathrm{{nS}}} \) does not have a model (by \( {11.38}\left( {iv}\right) \) ) iff \( \vDash \neg {\varphi }^{\mathrm{{nS}}} \) iff \( \vDash {\varphi }^{\mathrm{H}} \) (by (ii)). Herbrand’s theorem in a sense reduces provability to checking tautologies. The following result, interesting in itself, is one of the main lemmas for the theorem. Theorem 11.41. If \( \varphi \) is a quantifier-free formula not involving equality and \( \vDash \varphi \), then \( \varphi \) is a tautology. Proof. Assume that \( \varphi \) is not a tautology; let \( f \) be a truth valuation (10.19) such that \( {f\varphi } = 0 \) . Let \( A = {\operatorname{Trm}}_{\mathcal{L}} \) . For any relation symbol \( \mathbf{R} \), say \( \mathbf{R} \) of rank \( m \), let \[ {\mathbf{R}}^{\mathfrak{A}} = \left\{ {\left( {{\tau }_{0},\ldots ,{\tau }_{m - 1}}\right) : f\mathbf{R}{\tau }_{0}\cdots {\tau }_{m - 1} = 1}\right\} . \] Also, for any operation symbol \( \mathbf{O} \), say of rank \( m \), let \[ {\mathbf{O}}^{\mathfrak{A}}\left( {{\tau }_{0},\ldots ,{\tau }_{m - 1}}\right) = \mathbf{O}{\tau }_{0}\cdots {\tau }_{m - 1}. \] With these denotations for operation symbols and relation symbols we obtain an \( \mathcal{L} \) -structure \( \mathfrak{A} \) . Note that \( {\sigma }^{\mathfrak{A}}x = \sigma \) for every term \( \sigma \), where \( {x}_{i} = \left\langle {v}_{i}\right\rangle \) for each \( i \in \omega \) . Hence by induction on \( \psi \) we easily obtain: for any quantifier-free formula \( \psi \) not involving equality, \( \mathfrak{A} \vDash \psi \left\lbrack x\right\rbrack \) iff \( {f\psi } = 1 \) . Since \( {f\varphi } = 0 \), it follows that \( \mathfrak{A} \nvDash \varphi \left\lbrack x\right\rbrack \), so \( \nvDash \varphi \), as desired.
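The proof of Theorem 11.41 suggests a purely mechanical check: for a quantifier-free, equality-free formula, validity amounts to a finite truth-table computation over its atomic subformulas. A minimal brute-force sketch, treating atomic formulas as propositional variables (the encoding is illustrative, not from the text):

```python
from itertools import product

def is_tautology(formula, atoms):
    # Try every truth valuation f : atoms -> {0, 1}, as in the proof above.
    for values in product([0, 1], repeat=len(atoms)):
        f = dict(zip(atoms, values))
        if not formula(f):
            return False
    return True

# "R(x) or not R(x)" is a tautology ...
print(is_tautology(lambda f: f["Rx"] or not f["Rx"], ["Rx"]))          # True
# ... while "R(x) implies R(y)" is not.
print(is_tautology(lambda f: (not f["Rx"]) or f["Ry"], ["Rx", "Ry"]))  # False
```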
Now we can give our version of Herbrand's theorem. Several versions of this theorem can be found in the literature. It has found considerable use, especially in finitary proofs of the consistency of theories. Theorem 11.42 (Herbrand). Let \( {\mathcal{L}}^{\prime } \) be a Skolem expansion of \( \mathcal{L} \), and let \( \varphi \) be a prenex sentence of \( {\mathcal{L}}^{\prime } \) not involving equality. Say \( {\varphi }^{\mathrm{H}} = \exists {\alpha }_{0}\cdots \exists {\alpha }_{m - 1}\psi \) with \( \psi \) quantifier-free. Then \( \vDash \varphi \) iff some disjunction of instances \( {\operatorname{Subf}}_{\sigma \left( 0\right) }^{\alpha \left( 0\right) }\cdots {\operatorname{Subf}}_{\sigma \left( {m - 1}\right) }^{\alpha \left( {m - 1}\right) }\psi \) of \( \psi \) is a tautology, where \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \) are variable-free terms. Proof. \( \Rightarrow \) . Assume \( \vDash \varphi \) . Thus by \( {11.40}, \vDash {\varphi }^{\mathrm{H}} \) . Now let \( \Gamma \) be the set of all sentences \[ \neg {\operatorname{Subf}}_{\sigma \left( 0\right) }^{\alpha \left( 0\right) }\cdots {\operatorname{Subf}}_{\sigma \left( {m - 1}\right) }^{\alpha \left( {m - 1}\right) }\psi \] where \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \) are variable-free terms of \( {\mathcal{L}}^{\prime } \) . We claim that \( \Gamma \) is inconsistent. For, if it is consistent, then it has a model \( \mathfrak{A} \) . Let \( B \) be the set of all elements \( {}^{0}{\sigma }^{\mathfrak{A}} \) of \( A \) (see 11.4), where \( \sigma \) is a variable-free term. If \( \mathbf{R} \) is a relation symbol of rank \( n \), denoted by \( {\mathbf{R}}^{\mathfrak{A}} \) in \( \mathfrak{A} \), let \[ {\mathbf{R}}^{\mathfrak{B}} = \left\{ {\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) : {x}_{0},\ldots ,{x}_{n - 1} \in B\text{ and }\left( {{x}_{0},\ldots ,{x}_{n - 1}}\right) \in {\mathbf{R}}^{\mathfrak{A}}}\right\} .
\] If \( \mathbf{O} \) is an operation symbol of rank \( n \), denoted by \( {\mathbf{O}}^{\mathfrak{A}} \) in \( \mathfrak{A} \), let for variable-free terms \( {\sigma }_{0},\ldots ,{\sigma }_{n - 1} \) \[ {\mathbf{O}}^{\mathfrak{B}}\left( {{}^{0}{\sigma }_{0}^{\mathfrak{A}},\ldots ,{}^{0}{\sigma }_{n - 1}^{\mathfrak{A}}}\right) = {}^{0}{\left( \mathbf{O}{\sigma }_{0}\cdots {\sigma }_{n - 1}\right) }^{\mathfrak{A}}. \] This definition is easily justified. Thus we obtain a structure \( \mathfrak{B} \) . Since each member of \( \Gamma \) is quantifier-free, \( \mathfrak{B} \) is a model of \( \Gamma \) . Since \( \vDash {\varphi }^{\mathrm{H}} \), choose \( {x}_{0},\ldots \) , \( {x}_{m - 1} \in B \) so that \( \mathfrak{B} \vDash \psi \left\lbrack {{x}_{0},\ldots ,{x}_{m - 1}}\right\rbrack \) . Say \( {x}_{i} = {}^{0}{\sigma }_{i}^{\mathfrak{A}} \) for each \( i < m \) . Thus \( {\operatorname{Subf}}_{\sigma \left( 0\right) }^{\alpha \left( 0\right) }\cdots {\operatorname{Subf}}_{\sigma \left( {m - 1}\right) }^{\alpha \left( {m - 1}\right) }\psi \) holds in \( \mathfrak{B} \), which is a contradiction since \( \mathfrak{B} \) is a model of \( \Gamma \) . Thus \( \Gamma \) is inconsistent. Hence by 10.92, some disjunction of negations of members of \( \Gamma \) is valid, so by 11.41 it is a tautology, as desired. \( \Leftarrow \) . Obvious. We give one simple application of Herbrand's theorem: Consider the formula \( \psi = \exists \alpha \forall {\beta \varphi } \rightarrow \forall \beta \exists {\alpha \varphi } \), where \( \alpha \) and \( \beta \) are distinct variables and \( \varphi \) is any formula (possibly involving equality). We want to prove that \( \vDash \psi \) by using Herbrand’s theorem. It is of course easy to prove \( \vDash \psi \) by a direct semantic argument, but as we shall see, an application of Herbrand's theorem is more routine. Let \( \mathcal{L} \) be our original language. Choose \( n > 0 \) so that \( \operatorname{Fv}\varphi \subseteq \) \( \left\{ {{v}_{0},\ldots ,{v}_{n - 1}}\right\} \) .
Expand the language to \( {\mathcal{L}}^{\prime } \) by adding a new \( n \) -ary relation symbol \( \mathbf{R} \) . Now let \( {\varphi }^{\prime } = \mathbf{R}{v}_{0}\cdots {v}_{n - 1} \) and \( {\psi }^{\prime } = \exists \alpha \forall \beta {\varphi }^{\prime } \rightarrow \forall \beta \exists \alpha {\varphi }^{\prime } \) . We first show that \( \vDash {\psi }^{\prime } \)
109_The rising sea Foundations of Algebraic Geometry
Definition 5.104
Definition 5.104. We denote by \( {w}_{0} \) the unique element of maximal length in \( W \), and we set \( {d}_{0} \mathrel{\text{:=}} l\left( {w}_{0}\right) \) . For \( J \subseteq S,{w}_{0}\left( J\right) \) denotes the longest element of \( {W}_{J} \), and \( {d}_{0}\left( J\right) \mathrel{\text{:=}} l\left( {{w}_{0}\left( J\right) }\right) \) . In particular, \( {w}_{0}\left( S\right) = {w}_{0} \) and \( {d}_{0}\left( S\right) = {d}_{0} \) . The existence of a longest element \( {w}_{0} \) in \( W \) has the important consequence that the diameter of the building \( \mathcal{C} \) is finite. Here the diameter of \( \mathcal{C} \) is defined as usual in a metric space by \[ \operatorname{diam}\mathcal{C} \mathrel{\text{:=}} \sup \{ d\left( {C, D}\right) \mid C, D \in \mathcal{C}\} = l\left( {w}_{0}\right) = {d}_{0}, \] where \( {w}_{0} \) and \( {d}_{0} \) are as in Definition 5.104. This leads to the fundamental concept that distinguishes spherical buildings from general buildings: Definition 5.105. Two chambers \( C, D \in \mathcal{C} \) are called opposite if \( d\left( {C, D}\right) = \) \( {d}_{0} \) or, equivalently, \( \delta \left( {C, D}\right) = {w}_{0} \), in which case we also write \( C \) op \( D \) . Two residues \( \mathcal{R} \) and \( \mathcal{S} \) of \( \mathcal{C} \) are called opposite if for each \( C \in \mathcal{R} \), there exists a \( D \in \mathcal{S} \) with \( D \) op \( C \) and for each \( {D}^{\prime } \in \mathcal{S} \), there exists a \( {C}^{\prime } \in \mathcal{R} \) with \( {C}^{\prime } \) op \( {D}^{\prime } \) . If this is the case, we also use the notation \( \mathcal{R} \) op \( \mathcal{S} \) . We noted above that \( {w}_{0} \) normalizes \( S \) . Hence conjugation by \( {w}_{0} \) induces an automorphism of \( \left( {W, S}\right) \) . Definition 5.106. 
We denote by \( {\sigma }_{0} \) the automorphism of \( \left( {W, S}\right) \) given by \( {\sigma }_{0}\left( w\right) = {w}_{0}w{w}_{0} \) for all \( w \in W \) . If \( J \subseteq S \), we set \( {J}^{0} \mathrel{\text{:=}} {\sigma }_{0}\left( J\right) \) . We call two subsets \( J \) and \( K \) of \( S \) opposite, and we write \( J \) op \( K \), if \( K = {J}^{0} \) . We have already encountered \( {\sigma }_{0} \) in Chapter 1, where it occurred in several exercises as well as in Proposition 1.130 and Corollary 1.131. We will not use those results here except in our discussion of examples in Section 5.7.4. Here is an alternative characterization of opposite residues, which also justifies the definition of opposite types in Definition 5.106. Lemma 5.107. A J-residue \( \mathcal{R} \) and a \( K \) -residue \( \mathcal{S} \) of \( \mathcal{C} \) are opposite if and only if \( J = {K}^{0} \) and there is a chamber in \( \mathcal{R} \) that is opposite to some chamber in \( \mathcal{S} \) . If this is the case, then \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \), and \( {w}_{0}\left( J\right) {w}_{0} = \) \( {w}_{0}{w}_{0}\left( K\right) \) is the unique element of minimal length in \( \delta \left( {\mathcal{R},\mathcal{S}}\right) \) . Proof. Suppose there are chambers \( C \in \mathcal{R} \) and \( D \in \mathcal{S} \) with \( C \) op \( D \), i.e., \( \delta \left( {C, D}\right) = {w}_{0} \) . Applying Lemma 5.29, we obtain \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0}{W}_{K} \) . If \( \mathcal{R} \) and \( \mathcal{S} \) are opposite, then for each \( {D}^{\prime } \in \mathcal{S} \) there exists a \( {C}^{\prime } \in \mathcal{R} \) with \( \delta \left( {{C}^{\prime },{D}^{\prime }}\right) = {w}_{0} \) . This implies that \( \delta \left( {\mathcal{R},{D}^{\prime }}\right) = {W}_{J}{w}_{0} \) . 
Since this is true for every \( {D}^{\prime } \in \mathcal{S} \), it follows that \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} \) . Similarly, \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {w}_{0}{W}_{K} \) . Thus \( {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \), which implies that \( {W}_{J} = {w}_{0}{W}_{K}{w}_{0} = {W}_{{K}^{0}} \) and hence that \( J = {K}^{0} \) by the basic properties of standard parabolic subgroups. If, conversely, \( J = {K}^{0} \), then \( {W}_{J} = {w}_{0}{W}_{K}{w}_{0} \) ; hence \[ \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0}{W}_{K} = \left( {{w}_{0}{W}_{K}{w}_{0}}\right) {w}_{0}{W}_{K} = {w}_{0}{W}_{K}, \] and similarly \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} \) . This implies \( \delta \left( {{C}^{\prime },\mathcal{S}}\right) = {w}_{0}{W}_{K} \) for any \( {C}^{\prime } \in \mathcal{R} \) (since \( \delta \left( {{C}^{\prime },\mathcal{S}}\right) \) is a left coset of \( {W}_{K} \) and is contained in \( \delta \left( {\mathcal{R},\mathcal{S}}\right) \) ), and similarly \( \delta \left( {\mathcal{R},{D}^{\prime }}\right) = {W}_{J}{w}_{0} \) for any \( {D}^{\prime } \in \mathcal{S} \) . But this just means that each \( {C}^{\prime } \in \mathcal{R} \) is opposite a chamber in \( \mathcal{S} \) and each \( {D}^{\prime } \in \mathcal{S} \) is opposite a chamber in \( \mathcal{R} \) . Hence the residues \( \mathcal{R} \) and \( \mathcal{S} \) are opposite, and the first part of the lemma is proved. Now assume that \( \mathcal{R} \) op \( \mathcal{S} \), so that \( \delta \left( {\mathcal{R},\mathcal{S}}\right) = {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \) by the argument above. Consider an arbitrary element \( {w}_{J}{w}_{0} \in {W}_{J}{w}_{0}\left( {{w}_{J} \in {W}_{J}}\right) \) .
We have \( l\left( {{w}_{J}{w}_{0}}\right) = l\left( {w}_{0}\right) - l\left( {w}_{J}\right) \), so we minimize \( l\left( {{w}_{J}{w}_{0}}\right) \) by maximizing \( l\left( {w}_{J}\right) \) , i.e., by taking \( {w}_{J} = {w}_{0}\left( J\right) \) . Thus \( {w}_{0}\left( J\right) {w}_{0} \) is the unique element of minimal length in \( {W}_{J}{w}_{0} \) . Similarly, \( {w}_{0}{w}_{0}\left( K\right) \) is the element of minimal length in \( {w}_{0}{W}_{K} \) . Finally, we must have \( {w}_{0}\left( J\right) {w}_{0} = {w}_{0}{w}_{0}\left( K\right) \), since \( {W}_{J}{w}_{0} = {w}_{0}{W}_{K} \) . ## *5.7.2 A Metric Characterization of Opposition It is natural to ask whether the opposition relation on residues admits a direct characterization in terms of distances between residues, as in the definition of opposition for chambers. In this optional subsection we will give such a characterization. Note first that the distance between residues makes sense, as a special case of the distance between subsets of a metric space. Namely, if \( \mathcal{R} \) and \( \mathcal{S} \) are residues, then \[ d\left( {\mathcal{R},\mathcal{S}}\right) \mathrel{\text{:=}} \min \{ d\left( {C, D}\right) \mid C \in \mathcal{R}, D \in \mathcal{S}\} . \] If \( \left( {\mathcal{C},\delta }\right) \) is the W-metric building associated to a simplicial building \( \Delta \), then this is a familiar concept. Indeed, we have \( \mathcal{R} = {\mathcal{C}}_{ \geq A} \) and \( \mathcal{S} = {\mathcal{C}}_{ \geq B} \) for some simplices \( A, B \in \Delta \), and \( d\left( {\mathcal{R},\mathcal{S}}\right) \) is the same as the gallery distance \( d\left( {A, B}\right) \) that we have worked with in earlier chapters. As usual, we will write \( d\left( {\mathcal{R}, D}\right) \) instead of \( d\left( {\mathcal{R},\{ D\} }\right) \) in case \( \mathcal{S} \) is a singleton. 
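The quantities \( {w}_{0}, l \), and \( {\sigma }_{0} \) used above can be made concrete in type \( A \) (an illustration of ours, not part of the text's argument): take \( W \) to be the symmetric group \( {S}_{n} \) with the adjacent transpositions as generators. Then \( l\left( w\right) \) is the number of inversions of the permutation \( w \), the longest element \( {w}_{0} \) is the order-reversing permutation with \( {d}_{0} = n\left( {n - 1}\right) /2 \), and conjugation by \( {w}_{0} \) sends \( {s}_{i} \) to \( {s}_{n - i} \) :

```python
from itertools import permutations

n = 4  # W = S_4, Coxeter type A_3

def length(w):
    # Coxeter length in type A = number of inversions.
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

W = list(permutations(range(1, n + 1)))
w0 = max(W, key=length)
print(w0, length(w0))   # (4, 3, 2, 1) 6

def compose(u, v):      # (u v)(k) = u(v(k)), permutations as tuples of images
    return tuple(u[v[k] - 1] for k in range(n))

def s(i):               # the adjacent transposition s_i = (i, i+1)
    return tuple(range(1, i)) + (i + 1, i) + tuple(range(i + 2, n + 1))

# sigma_0(w) = w0 w w0 maps the generator s_i to s_{n-i}.
for i in range(1, n):
    assert compose(w0, compose(s(i), w0)) == s(n - i)
```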
From the theory of projections, we know that \[ d\left( {\mathcal{R}, D}\right) = d\left( {{\operatorname{proj}}_{\mathcal{R}}D, D}\right) . \] The key to our metric characterization of opposition is the following lemma: Lemma 5.108. Let \( \mathcal{R} \) be a \( J \) -residue and \( D \) a chamber of \( \mathcal{C} \) . (1) \( \max \{ d\left( {C, D}\right) \mid C \in \mathcal{R}\} = {d}_{0}\left( J\right) + d\left( {\mathcal{R}, D}\right) \) . (2) \( d\left( {\mathcal{R}, D}\right) \leq {d}_{0} - {d}_{0}\left( J\right) \), with equality if and only if \( \mathcal{R} \) contains a chamber opposite \( D \) . Proof. Let \( {C}^{\prime } \mathrel{\text{:=}} {\operatorname{proj}}_{\mathcal{R}}D \) . Then by the gate property (Proposition 5.34) we have \[ d\left( {C, D}\right) = d\left( {C,{C}^{\prime }}\right) + d\left( {{C}^{\prime }, D}\right) \] for all \( C \in \mathcal{R} \) . This is maximal when \( d\left( {C,{C}^{\prime }}\right) \) is maximal. Since \( \delta \left( {\mathcal{R},{C}^{\prime }}\right) = \) \( {W}_{J} \), it follows that we maximize \( d\left( {C, D}\right) \) by taking \( C \in \mathcal{R} \) such that \( \delta \left( {C,{C}^{\prime }}\right) = \) \( {w}_{0}\left( J\right) \), in which case \( d\left( {C, D}\right) = {d}_{0}\left( J\right) + d\left( {{C}^{\prime }, D}\right) \) . This proves (1), and (2) follows immediately. We can now give our promised characterization of opposition. See Exercise 4.80 for the same result stated in the language of simplicial buildings. Proposition 5.109. Let \( \mathcal{R} \) be a \( J \) -residue, and let \( \mathcal{S} \) be a residue with \( \operatorname{rank}\mathcal{S} \geq \operatorname{rank}\mathcal{R} \) . Then the following conditions are equivalent. (i) \( \mathcal{R} \) op \( \mathcal{S} \) . (ii) \( d\left( {\mathcal{R},\mathcal{S}}\right) = {d}_{0} - {d}_{0}\left( J\right) \) . 
(iii) \( d\left( {\mathcal{R},\mathcal{S}}\right) = \max \left\{ {d\left( {\mathcal{R},{\mathcal{S}}^{\prime }}\right) \mid {\mathcal{S}}^{\prime }}\right. \) is a residue of \( \mathcal{C}\} \) . Proof. It is immediate from the definitions that the maximum on the right side of (iii) is equal to \[ \max \{ d\left( {\mathcal{R}, D}\right) \mid D \in \mathcal{C}\} \] By Lemma 5.108, this maximum is equal to \( {d}_{0} - {d}_{0}\left( J\right) \) . So (ii) and (iii) are equivalent. The last assertion of Lemma 5.107 shows that (i) implies (ii). To complete the proof, we assume that (ii) and (iii) hold, and we show that \( \mathcal{R} \) op \( \mathcal{S} \) . It follows from (ii
1076_(GTM234)Analysis and Probability Wavelets, Signals, Fractals
Definition 9.3.3
Definition 9.3.3. Let \( X \) be a set, and let \( R : X \rightarrow X \) be an endomorphism. Let \( {R}^{-1}\left( x\right) \mathrel{\text{:=}} \{ y \in X \mid R\left( y\right) = x\} \) for \( x \in X \) . We say that \( R \) is an \( n \) -fold branch mapping if \( R \) is onto and if \[ \# {R}^{-1}\left( x\right) = n\;\text{ for all }x \in X. \] (9.3.10) For a given \( n \) -fold branch mapping, we shall select \( n \) branches of the inverse, i.e., \( n \) distinct mappings \[ {\sigma }_{i} : X \rightarrow X,\;i = 1,\ldots, n, \] (9.3.11) such that \[ R \circ {\sigma }_{i} = {\operatorname{id}}_{X}\;\text{ for all }i = 1,\ldots, n. \] (9.3.12) Definition 9.3.4. Let \( \mathcal{H} \) be a complex Hilbert space, and let \( \left( {S}_{i}\right) \in \operatorname{Rep}\left( {{\mathcal{O}}_{n},\mathcal{H}}\right) \) for some \( n \in \mathbb{N}, n \geq 2 \) . We say that \( \left( {S}_{i}\right) \) is a permutative representation (see [BrJo99a]) if each isometry \( {S}_{i} \) permutes the elements in some orthonormal basis (ONB) for \( \mathcal{H} \) . Lemma 9.3.5. [BrJo99a] Up to unitary equivalence of representations, every permutative representation \( \left( {S}_{i}\right) \) of \( {\mathcal{O}}_{n} \) in a Hilbert space \( \mathcal{H} \) has the following form for some set \( X \) and some \( n \) -fold branch mapping \( R : X \rightarrow X \) . Let \( \mathcal{H} = {\ell }^{2}\left( X\right) \), and for \( x \in X \), let \( |x\rangle \) be the corresponding basis vector in \( \mathcal{H} \) . If \( c : X \rightarrow \mathbb{C} \) is in \( {\ell }^{2}\left( X\right) \), then \[ c = \mathop{\sum }\limits_{{x \in X}}c\left( x\right) |x\rangle ,\;\parallel c{\parallel }^{2} = \mathop{\sum }\limits_{{x \in X}}{\left| c\left( x\right) \right| }^{2}, \] (9.3.13) and \[ \langle x \mid y\rangle = {\delta }_{x, y}\;\text{ for all }x, y \in X.
\] (9.3.14) For a choice of distinct branches \( {\sigma }_{i}, i = 1,\ldots, n \), set \[ {S}_{i}|x\rangle \mathrel{\text{:=}} \left| {{\sigma }_{i}\left( x\right) }\right\rangle . \] (9.3.15) Proof. Let \( \left( {S}_{i}\right) \in \operatorname{Rep}\left( {{\mathcal{O}}_{n},\mathcal{H}}\right) \) be a permutative representation. Let \( X \) be an index set for some ONB which is permuted by the respective isometries \( {S}_{i} \) . As a result, there are maps \( {\sigma }_{i} : X \rightarrow X \) such that \[ {S}_{i}|x\rangle = \left| {{\sigma }_{i}\left( x\right) }\right\rangle \;\text{ for }i = 1,\ldots, n\text{ and }x \in X. \] (9.3.16) Using (a) in (9.3.1), we conclude that each \( {\sigma }_{i} \) is one-to-one, and that \[ {\sigma }_{i}\left( X\right) \cap {\sigma }_{j}\left( X\right) = \varnothing \;\text{ if }i \neq j. \] (9.3.17) Using (9.3.14) and (9.3.16), we derive the formula \[ {S}_{i}^{ * }\left| {{\sigma }_{j}\left( x\right) }\right\rangle = {\delta }_{i, j}|x\rangle \;\text{ for }i, j = 1,\ldots, n\text{ and }x \in X, \] (9.3.18) for the adjoint operators \( {S}_{i}^{ * }, i = 1,\ldots, n \) . When (9.3.18) is substituted into (b) of (9.3.1) we conclude that \[ \mathop{\bigcup }\limits_{{i = 1}}^{n}{\sigma }_{i}\left( X\right) = X. \] (9.3.19) As a result, we may define \( R : X \rightarrow X \) by setting \[ R\left( {{\sigma }_{i}\left( x\right) }\right) = x\;\text{ for }i = 1,\ldots, n\text{ and }x \in X, \] (9.3.20) and conclude that \( R \), defined this way, is an \( n \) -fold branch mapping. Substituting back into (9.3.18), we conclude that \[ {S}_{i}^{ * }|x\rangle = {\chi }_{{\sigma }_{i}\left( X\right) }\left( x\right) \left| {R\left( x\right) }\right\rangle \;\text{ for }i = 1,\ldots, n\text{ and }x \in X. \] ## 9.4 Tilings Let \( X \) be a set, and let \( n \in \mathbb{N}, n \geq 2 \), be given. Let \( R : X \rightarrow X \) be an \( n \) -fold branch mapping.
For \( x \in X \) and \( p \in \mathbb{N} \), set \[ {R}^{-p}\left( x\right) \mathrel{\text{:=}} \left\{ {y \in X \mid {R}^{p}\left( y\right) = x}\right\} . \] (9.4.1) It is convenient to include the case \( p = 0 \), and set \[ E\left( {x, p}\right) = \left\{ \begin{array}{ll} {R}^{-p}\left( x\right) & \text{ if }p \in \mathbb{N}, \\ \{ x\} \text{ (the singleton) } & \text{ if }p = 0. \end{array}\right. \] (9.4.2) Definition 9.4.1. A subset \( \mathcal{A} \subset X \times {\mathbb{N}}_{0} \) is said to define a tiling of \( X \) if \[ \mathop{\bigcup }\limits_{{\left( {x, p}\right) \in \mathcal{A}}}E\left( {x, p}\right) = X \] (9.4.3) and \[ E\left( {x, p}\right) \cap E\left( {{x}^{\prime },{p}^{\prime }}\right) = \varnothing \;\text{ if }\left( {x, p}\right) \neq \left( {{x}^{\prime },{p}^{\prime }}\right) \text{ in }\mathcal{A}, \] (9.4.4) i.e., the sets \( E\left( {x, p}\right) \), indexed by distinct points in \( \mathcal{A} \), are disjoint. Examples 9.4.2. In the next three examples we illustrate the use of tensor products as outlined in Lemma 9.3.2. The scaling will be represented by the unitary operator \( U \) . One representation \( \left( {S}_{i}\right) \) of \( {\mathcal{O}}_{n} \) serves to encode subdivision bands, and a second representation is then used in recovering \( U \) from \( \left( {S}_{i}\right) \) . This leads to a representation (9.3.2) for the operator \( U \) . The first representation \( \left( {S}_{i}\right) \) of \( {\mathcal{O}}_{n} \) will be a "permutative representation," i.e., it will be defined from a permutation of a canonical basis in one of the two tensor factors. Formula (9.2.10) is an example of such a representation, but there are many more; see for example the memoir [BrJo99a]. The second representation \( \left( {V}_{i}\right) \) will be given by a quadrature-mirror filter as in (9.2.12), or more generally filters corresponding to \( n \) subbands as given in Chapter 7. 
The reader is encouraged to work out the two types of representations in Exercises 8.3 and 8.4. The additional feature in this construction is tensor products of Hilbert spaces. For this part the use of operator theory helps clarify the use of tensor products and setting up a "wavelet transform." We are further relying on Dirac's elegant bra-ket notation: see the conclusion of Chapter 2 for details. Example 9.4.2.1. Let \( X = {\mathbb{N}}_{0} \), and set \[ R\left( {2n}\right) = n\;\text{ and }\;R\left( {{2n} + 1}\right) = n\;\text{ for }n \in {\mathbb{N}}_{0}. \] (9.4.5) Then for \( \left( {n, p}\right) \in X \times {\mathbb{N}}_{0} \), we have the identity \[ E\left( {n, p}\right) = \left\lbrack {n{2}^{p},\left( {n + 1}\right) {2}^{p}}\right) . \] (9.4.6) One easily checks that the mapping \( R \) in (9.4.5) is a 2-fold branch mapping with branches \[ {\sigma }_{0}\left( n\right) = {2n},\;{\sigma }_{1}\left( n\right) = {2n} + 1\;\text{ for }n \in {\mathbb{N}}_{0}. \] (9.4.7) If \( I \in \Omega \left( p\right) \), i.e., \( I = \left( {{i}_{1},\ldots ,{i}_{p}}\right) \), then \[ {\sigma }_{I}\left( n\right) = {i}_{1} + {i}_{2}2 + \cdots + {i}_{p}{2}^{p - 1} + n{2}^{p}, \] (9.4.8) and it follows that the sets \( E\left( {n, p}\right) \) are as described in (9.4.6). In conclusion, the possible tilings of \( X = {\mathbb{N}}_{0} \) associated with \( R \) in (9.4.5) are the partitions of \( {\mathbb{N}}_{0} \) into non-overlapping segments of the form (9.4.6). Here are three distinct types of such tilings. \[ \text{Case (a):}\;\mathcal{A} = \left\{ {\left( {n,0}\right) \mid n \in {\mathbb{N}}_{0}}\right\} \text{, and} \] \[ E\left( {n,0}\right) = \{ n\} \text{ (the singleton) }\;\text{ for }n \in {\mathbb{N}}_{0}. \] \[ \text{Case (b):}\;\mathcal{A} = \left\{ {\left( {0,0}\right) ,\left( {1, p}\right) \mid p \in {\mathbb{N}}_{0}}\right\} \text{, and} \] \[ E\left( {1, p}\right) = \left\lbrack {{2}^{p},{2}^{p + 1}}\right) \;\text{ for }p \in {\mathbb{N}}_{0}. 
\] \[ \text{Case (c):}\;\mathcal{A} = \{ \left( {0,2}\right) ,\left( {1,{2k}}\right) ,\left( {2,{2k}}\right) ,\left( {3,{2k}}\right) \mid k \in \mathbb{N}\} \text{, where now} \] \[ E\left( {0,2}\right) = \{ 0,1,2,3\} \text{, and} \] \[ E\left( {j,{2k}}\right) = \left\lbrack {j{2}^{2k},\left( {j + 1}\right) {2}^{2k}}\right) \;\text{ for }j = 1,2,3\text{ and }k \in \mathbb{N}. \] We stress, however, that there are many more types of examples. Example 9.4.2.2. It would be tempting to define \( R \) on \( {\mathbb{N}}_{0} \) by the following rules, \[ R\left( {2n}\right) = n\;\text{ and }\;R\left( {{2n} + 3}\right) = n, \] (9.4.9) by analogy to (9.4.5) in Example 9.4.2.1, but the equation \( 1 = {2n} + 3 \) does not have solutions in \( {\mathbb{N}}_{0} \) . One checks that no subset \( X \) of \( {\mathbb{N}}_{0} \) allows the rules (9.4.9) in the definition of a 2-fold branch mapping. Nonetheless, if we take \( X = \mathbb{Z} \), then (9.4.9) does define a 2-fold branch mapping. So there are several differences between the two examples 9.4.2.1 and 9.4.2.2. For Example 9.4.2.1, every point is attracted to \( \{ 0\} \) in the sense that if \( n \in {\mathbb{N}}_{0} \), then there is a \( p \) such that \( {R}^{p}n = 0 \) . For Example 9.4.2.2, there is no singleton in \( X = \mathbb{Z} \) which serves as an attractor. Nonetheless, for every \( n \in \mathbb{Z} \), there is a \( p \) such that \( {R}^{p}n \in \{ - 3, - 2, - 1,0\} \) . The analogues of the tiling systems (a)-(c) in Example 9.4.2.1 carry over to Example 9.4.2.2 as follows. Case (a): \( \;\mathcal{A} = \{ \left( {n,0}\right) \mid n \in \mathbb{Z}\} \), and \[ E\left( {n,0}\right) = \{ n\} \text{ (the singleton) }\;\text{ for }n \in \mathbb{Z}. \] Case (b): This tile system now contains a more varied system of tiles for the set \( X = \mathbb{Z} \) . 
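The tiling conditions (9.4.3) and (9.4.4) for Case (b) of Example 9.4.2.1 can be confirmed directly on a truncated range. In the Python sketch below (an illustration with ad hoc names, truncating \( p \) at \( P = {10} \)), the tiles \( E\left( {1, p}\right) = \left\lbrack {{2}^{p},{2}^{p + 1}}\right) \) together with \( E\left( {0,0}\right) = \{ 0\} \) partition \( \left\lbrack {0,{2}^{P}}\right) \).

```python
def E(n, p):
    # E(n, p) = [n*2^p, (n+1)*2^p) as in (9.4.6)
    return set(range(n * 2 ** p, (n + 1) * 2 ** p))

# Case (b): A = {(0,0)} together with {(1,p) | p in N_0}; truncate p at P
P = 10
tiles = [E(0, 0)] + [E(1, p) for p in range(P)]

# (9.4.4): tiles indexed by distinct (x, p) in A are pairwise disjoint
for a in range(len(tiles)):
    for b in range(a + 1, len(tiles)):
        assert tiles[a] & tiles[b] == set()

# (9.4.3): the truncated union covers the initial segment {0, ..., 2^P - 1}
union = set().union(*tiles)
assert union == set(range(2 ** P))
```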
The singleton tiles are \( E\left( {n,0}\right) = \{ n\} \) for \( n = - 3, - 2, - 1,0 \), and in addition there are the following five classes of (nonoverlapping) dyadic tiles: \( E\left( {-6, p}\right) \) for all \( p \in {\mathbb{N}}_{0} \) , \( E\left( {-4, p}\right) \) for all \( p \in {\mathbb{N}}_{0} \) , \( E\left(
1094_(GTM250)Modern Fourier Analysis
Definition 4.1.13
Definition 4.1.13. Suppose that for a given \( T \) in \( {CZO}\left( {\delta, A, B}\right) \) there is a sequence \( {\varepsilon }_{j} \) of positive numbers that tends to zero as \( j \rightarrow \infty \) such that for all \( f \in {L}^{2}\left( {\mathbf{R}}^{n}\right) \) , \[ {T}^{\left( {\varepsilon }_{j}\right) }\left( f\right) \rightarrow T\left( f\right) \] weakly in \( {L}^{2} \) . Then \( T \) is called a Calderón-Zygmund singular integral operator. Thus Calderón-Zygmund singular integral operators are special kinds of Calderón-Zygmund operators. The subclass of \( {CZO}\left( {\delta, A, B}\right) \) consisting of all Calderón-Zygmund singular integral operators is denoted by \( \operatorname{CZSIO}\left( {\delta, A, B}\right) \) . In view of Proposition 4.1.11 and Remark 4.1.12, a Calderón-Zygmund operator is equal to a Calderón-Zygmund singular integral operator plus a bounded function times the identity operator. For this reason, the study of Calderón-Zygmund operators is equivalent to the study of Calderón-Zygmund singular integral operators, and in almost all situations it suffices to restrict attention to the latter. ## 4.1.3 Calderón-Zygmund Operators Acting on Bounded Functions We are now interested in defining the action of a Calderón-Zygmund operator \( T \) on bounded and smooth functions. To achieve this we first need to define the space of special test functions \( {\mathcal{D}}_{0} \) . Definition 4.1.14. We denote by \( \mathcal{D}\left( {\mathbf{R}}^{n}\right) = {\mathcal{C}}_{0}^{\infty }\left( {\mathbf{R}}^{n}\right) \) the space of all smooth functions with compact support on \( {\mathbf{R}}^{n} \) . We define \( {\mathcal{D}}_{0}\left( {\mathbf{R}}^{n}\right) \) to be the space of all smooth functions with compact support and integral zero. We equip \( {\mathcal{D}}_{0}\left( {\mathbf{R}}^{n}\right) \) with the same topology as the space \( \mathcal{D}\left( {\mathbf{R}}^{n}\right) \) . 
This means that a linear functional \( u \) on \( {\mathcal{D}}_{0}\left( {\mathbf{R}}^{n}\right) \) is continuous if for any compact set \( K \) in \( {\mathbf{R}}^{n} \) there is a constant \( {C}_{K} \) and an integer \( M \) such that \[ \left| {\langle u,\varphi \rangle }\right| \leq {C}_{K}\mathop{\sum }\limits_{{\left| \alpha \right| \leq M}}{\begin{Vmatrix}{\partial }^{\alpha }\varphi \end{Vmatrix}}_{{L}^{\infty }} \] for all smooth functions \( \varphi \) supported in \( K \) . The dual space of \( {\mathcal{D}}_{0}\left( {\mathbf{R}}^{n}\right) \) under this topology is denoted by \( {\mathcal{D}}_{0}^{\prime }\left( {\mathbf{R}}^{n}\right) \) . This is a space of distributions larger than \( {\mathcal{D}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) . Example 4.1.15. BMO functions are examples of elements of \( {\mathcal{D}}_{0}^{\prime }\left( {\mathbf{R}}^{n}\right) \) . Indeed, given \( b \in {BMO}\left( {\mathbf{R}}^{n}\right) \), for any compact set \( K \) there is a constant \( {C}_{K} = \parallel b{\parallel }_{{L}^{1}\left( K\right) } \) such that \[ \left| {{\int }_{{\mathbf{R}}^{n}}b\left( x\right) \varphi \left( x\right) {dx}}\right| \leq {C}_{K}\parallel \varphi {\parallel }_{{L}^{\infty }} \] for any \( \varphi \in {\mathcal{D}}_{0}\left( {\mathbf{R}}^{n}\right) \) supported in \( K \) . Moreover, observe that the preceding integral remains unchanged if the \( {BMO} \) function \( b \) is replaced by \( b + c \), where \( c \) is a constant. Definition 4.1.16. Let \( T \) be a continuous linear operator from \( \mathcal{S}\left( {\mathbf{R}}^{n}\right) \) to \( {\mathcal{S}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) that satisfies (4.1.5) for some distribution \( W \) that coincides with a standard kernel \( K\left( {x, y}\right) \) satisfying (4.1.1),(4.1.2), and (4.1.3). 
Given \( f \) bounded and smooth, we define an element \( T\left( f\right) \) of \( {\mathcal{D}}_{0}^{\prime }\left( {\mathbf{R}}^{n}\right) \) as follows: For a given \( \varphi \) in \( {\mathcal{D}}_{0}\left( {\mathbf{R}}^{n}\right) \), select \( \eta \) in \( {\mathcal{C}}_{0}^{\infty } \) with \( 0 \leq \eta \leq 1 \) and equal to 1 in a neighborhood of the support of \( \varphi \) . Since \( T \) maps \( \mathcal{S} \) to \( {\mathcal{S}}^{\prime } \), the expression \( T\left( {f\eta }\right) \) is a tempered distribution, and its action on \( \varphi \) is well defined. We define the action of \( T\left( f\right) \) on \( \varphi \) via the identity \[ \langle T\left( f\right) ,\varphi \rangle = \langle T\left( {f\eta }\right) ,\varphi \rangle + {\int }_{{\mathbf{R}}^{n}}\left\lbrack {{\int }_{{\mathbf{R}}^{n}}K\left( {x, y}\right) \varphi \left( x\right) {dx}}\right\rbrack f\left( y\right) \left( {1 - \eta \left( y\right) }\right) {dy}, \] (4.1.29) provided we make sense of the double integral as an absolutely convergent integral. To do this, we pick \( {x}_{0} \) in the support of \( \varphi \) and we split the \( y \) -integral in (4.1.29) into the sum of integrals over the regions \( {I}_{0} = \left\{ {y \in {\mathbf{R}}^{n} : \left| {x - {x}_{0}}\right| > \frac{1}{2}\left| {{x}_{0} - y}\right| }\right\} \) and \( {I}_{\infty } = \left\{ {y \in {\mathbf{R}}^{n} : \left| {x - {x}_{0}}\right| \leq \frac{1}{2}\left| {{x}_{0} - y}\right| }\right\} \) . By the choice of \( \eta \) we must necessarily have dist (supp \( \left( {1 - \eta }\right) \), supp \( \varphi ) > 0 \), and hence the part of the double integral in (4.1.29) when \( y \) is restricted to \( {I}_{0} \) is absolutely convergent in view of (4.1.1). 
For \( y \in {I}_{\infty } \) we use the mean value property of \( \varphi \) to write the expression inside the square brackets in (4.1.29) as \[ {\int }_{{\mathbf{R}}^{n}}\left( {K\left( {x, y}\right) - K\left( {{x}_{0}, y}\right) }\right) \varphi \left( x\right) {dx}. \] With the aid of (4.1.2) we deduce the absolute convergence of the double integral in (4.1.29) as follows: \[ {\iint }_{\left| {y - {x}_{0}}\right| \geq 2\left| {x - {x}_{0}}\right| }\left| {K\left( {x, y}\right) - K\left( {{x}_{0}, y}\right) }\right| \left| {\varphi \left( x\right) }\right| \left( {1 - \eta \left( y\right) }\right) \left| {f\left( y\right) }\right| {dxdy} \] \[ \leq {\int }_{{\mathbf{R}}^{n}}A{\left| x - {x}_{0}\right| }^{\delta }{\int }_{\left| {y - {x}_{0}}\right| \geq 2\left| {x - {x}_{0}}\right| }{\left| {x}_{0} - y\right| }^{-n - \delta }\left| {f\left( y\right) }\right| {dy}\left| {\varphi \left( x\right) }\right| {dx} \] \[ \leq A\frac{{\omega }_{n - 1}}{\delta {2}^{\delta }}\parallel \varphi {\parallel }_{{L}^{1}}\parallel f{\parallel }_{{L}^{\infty }} < \infty . \] This completes the definition of \( T\left( f\right) \) as an element of \( {\mathcal{D}}_{0}^{\prime } \) when \( f \in {\mathcal{C}}^{\infty } \cap {L}^{\infty } \) , and certainly (4.1.29) is independent of \( {x}_{0} \), but leaves two points open. First, we need to show that this definition is independent of \( \eta \) and secondly that whenever \( f \) is a Schwartz function, the distribution \( T\left( f\right) \) defined in (4.1.29) coincides with the original element of \( {\mathcal{S}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) given in Definition 4.1.8. Remark 4.1.17. We show that the definition of \( T\left( f\right) \) is independent of the choice of the function \( \eta \) . 
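The constant in the last step comes from the identity \( {\int }_{\left| {y - {x}_{0}}\right| \geq 2\left| {x - {x}_{0}}\right| }{\left| {x}_{0} - y\right| }^{-n - \delta }{dy} = \frac{{\omega }_{n - 1}}{\delta }{\left( 2\left| {x - {x}_{0}}\right| \right) }^{-\delta } \). For \( n = 1 \) (where \( {\omega }_{0} = 2 \)) this can be checked numerically; the Python sketch below (our own names) substitutes \( y = r{e}^{u} \), which turns the tail integral into a smooth integral over \( \left\lbrack {0,\infty }\right) \).

```python
import math

def tail_integral(r, delta, U=60.0, steps=200000):
    # int_{|y| >= r} |y|^{-1-delta} dy  (n = 1, two-sided tail);
    # the substitution y = r*e^u gives 2 * r^{-delta} * int_0^inf e^{-delta*u} du,
    # approximated here by a midpoint rule on [0, U]
    h = U / steps
    s = sum(math.exp(-delta * (k + 0.5) * h) for k in range(steps)) * h
    return 2.0 * r ** (-delta) * s

delta, a = 0.5, 1.0
r = 2 * a                              # radius 2|x - x0| with |x - x0| = a
exact = 2.0 * r ** (-delta) / delta    # omega_0 * (2a)^{-delta} / delta
assert abs(tail_integral(r, delta) - exact) < 1e-6
```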
Indeed, if \( \zeta \) is another function satisfying \( 0 \leq \zeta \leq 1 \) that is also equal to 1 in a neighborhood of the support of \( \varphi \), then \( f\left( {\eta - \zeta }\right) \) and \( \varphi \) have disjoint supports, and by (4.1.7) we have the absolutely convergent integral realization \[ \langle T\left( {f\left( {\eta - \zeta }\right) }\right) ,\varphi \rangle = {\int }_{{\mathbf{R}}^{n}}{\int }_{{\mathbf{R}}^{n}}K\left( {x, y}\right) f\left( y\right) \left( {\eta - \zeta }\right) \left( y\right) {dy}\,\varphi \left( x\right) {dx}. \] It follows that the expression in (4.1.29) coincides with the corresponding expression obtained when \( \eta \) is replaced by \( \zeta \) . Next, if \( f \) is a Schwartz function, then both \( {\eta f} \) and \( \left( {1 - \eta }\right) f \) are Schwartz functions; by the linearity of \( T \) one has \( \langle T\left( f\right) ,\varphi \rangle = \langle T\left( {\eta f}\right) ,\varphi \rangle + \langle T\left( {\left( {1 - \eta }\right) f}\right) ,\varphi \rangle \) , and by (4.1.7) the second expression can be written as the double absolutely convergent integral in (4.1.29), since \( \varphi \) and \( \left( {1 - \eta }\right) f \) have disjoint supports. Thus the distribution \( T\left( f\right) \) defined in (4.1.29) coincides with the original element of \( {\mathcal{S}}^{\prime }\left( {\mathbf{R}}^{n}\right) \) given in Definition 4.1.8. Remark 4.1.18. When \( T \) has a bounded extension that maps \( {L}^{2} \) to itself, we may define \( T\left( f\right) \) for all \( f \in {L}^{\infty }\left( {\mathbf{R}}^{n}\right) \), not necessarily smooth. 
Simply observe that under this assumption, the expression \( T\left( {f\eta }\right) \) is a well-defined \( {L}^{2} \) function and thus \[ \langle T\left( {f\eta }\right) ,\varphi \rangle = {\int }_{{\mathbf{R}}^{n}}T\left( {f\eta }\right) \left( x\right) \varphi \left( x\right) {dx} \] is given by an absolutely convergent integral for all \( \varphi \in {\mathcal{D}}_{0} \) . Finally, observe that although \( \langle T\left( f\right) ,\varphi \rangle \) is defined for \( f \) in \( {L}^{\infty } \) and \( \varphi \) in \( {\mathcal{D}}_{0} \), this definition is valid for all square integrable functions \( \varphi \) with compact support and integral zero; indeed, the smoothness of \( \varphi \) was never an issue in the definition of \( \langle T\left( f\right) ,\varphi \rangle \) . In summary, if \( T \) is a Calderón-Zygmund operator and \( f \) lies in \( {L}^{\infty }\left( {\mathbf{R}}^{n}\right) \), then \( T\left( f\right) \) has a well-defined action \( \langle T\left( f\right) ,\varphi \rangle \) on square integrable functions \( \varphi \) with compact support and integral zero. 
1139_(GTM44)Elementary Algebraic Geometry
Definition 2.5
Definition 2.5. A homogeneous variety in \( {k}^{n} \) is an algebraic variety which is a homogeneous set. Theorem 2.6. Let \( k \) be infinite. An algebraic variety \( V \) in \( {k}^{n} \) is homogeneous iff it is defined by a set of homogeneous polynomials. (We agree that the variety defined by the empty set of polynomials is \( {k}^{n} \) .) Proof. Since the theorem is trivial if \( V = {k}^{n} \), assume \( V \subsetneqq {k}^{n} \) . \( \Leftarrow \) : Let the variety be \( V = \mathrm{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \), where each \( {q}_{i} \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) is homogeneous of degree \( {d}_{i} \) . Now \( x \in V \) iff \( {q}_{i}\left( x\right) = 0 \) for \( i = 1,\ldots, r \) . But for any \( t \in k,{q}_{i}\left( {tx}\right) = {t}^{{d}_{i}}{q}_{i}\left( x\right) \), so \( x \in V \) implies \( {tx} \in V \) for all \( t \in k \) . \( \Rightarrow \) : Suppose \( V = \mathrm{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \) is homogeneous. Now \( V \) may be homogeneous without every (or even any) \( {q}_{i} \) being homogeneous. (Example: \( \{ 0\} \subset {\mathbb{R}}^{2} \) is homogeneous, yet it is the intersection of two parabolas, \( \{ 0\} = \mathrm{V}\left( {Y + {X}^{2}, Y - {X}^{2}}\right) \) .) However, we shall manufacture from \( \left\{ {{q}_{1},\ldots ,{q}_{r}}\right\} \) a set of homogeneous polynomials defining \( V \) ; this set is just the set of all homogeneous components of all the \( {q}_{i} \) . (Thus, \( \{ 0\} = \mathrm{V}\left( {Y,{X}^{2}, Y, - {X}^{2}}\right) = \mathrm{V}\left( {Y,{X}^{2}}\right) \) .) Let \( {x}_{0} \) be a fixed point in \( V \) ; then each \( {q}_{i}\left( {x}_{0}\right) = 0 \) . Now let \( t \) be an arbitrary element of \( k \), and write \( {q}_{i} = \sum {q}_{ij} \), where \( {q}_{ij} \) is the homogeneous component of degree \( j \) of \( {q}_{i} \) . 
Then \[ {q}_{i}\left( {t{x}_{0}}\right) = \sum {t}^{j}{q}_{ij}\left( {x}_{0}\right) ; \] (1) since \( {x}_{0} \) is fixed and \( t \) is arbitrary, the polynomial in (1) may be looked at as a polynomial in an indeterminate \( T \), namely \( {q}_{i}\left( {T{x}_{0}}\right) \in k\left\lbrack T\right\rbrack \) . Since \( V \) is homogeneous, \( {q}_{i}\left( {T{x}_{0}}\right) \) is 0 for each \( T = t \) ; because \( k \) is infinite, \( {q}_{i}\left( {T{x}_{0}}\right) \) is the zero polynomial in \( k\left\lbrack T\right\rbrack \) . Hence each coefficient of \( {q}_{i}\left( {T{x}_{0}}\right) \) is 0 ; that is, each \( {q}_{ij}\left( {x}_{0}\right) = 0 \) . Hence \( {x}_{0} \in V \) implies \( {q}_{ij}\left( {x}_{0}\right) = 0 \) for each \( {q}_{ij} \) above, or \( {q}_{ij} \) is 0 at each point of \( V \) . Hence \( V \subset \mathrm{V}\left( \left\{ {q}_{ij}\right\} \right) \) . But obviously each \( {q}_{ij}\left( {x}_{0}\right) = 0 \) implies each \( {q}_{i}\left( {x}_{0}\right) = 0 \), so \( \mathrm{V}\left( \left\{ {q}_{ij}\right\} \right) \subset V \) . Therefore \( V = \mathrm{V}\left( \left\{ {q}_{ij}\right\} \right) \), so " \( \Rightarrow \) " is proved. Definition 2.7. A projective variety \( V \) in \( {\mathbb{P}}^{n}\left( k\right) \) is a subset of \( {\mathbb{P}}^{n}\left( k\right) \) represented by a homogeneous variety in \( {k}^{n + 1} \) . If \( {q}_{i} \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \), where \( i = 1,\ldots, r \), are homogeneous polynomials, then by abuse of language, \( \mathrm{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \) denotes the projective variety in \( {\mathbb{P}}^{n}\left( k\right) \) represented by the homogeneous variety \( \mathbf{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \) in \( {k}^{n + 1} \) . 
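The construction in this proof is easy to carry out in coordinates: decompose each \( {q}_{i} \) into its homogeneous components and compare zero sets. The Python sketch below (a hedged illustration; polynomials are encoded as dicts from exponent tuples to coefficients) does this for the parabola example \( \{ 0\} = \mathrm{V}\left( {Y + {X}^{2}, Y - {X}^{2}}\right) \) on a finite grid.

```python
from collections import defaultdict
from itertools import product

# A polynomial in k[X, Y] as a dict: exponent tuple -> coefficient.
# The example of Theorem 2.6:  {0} = V(Y + X^2, Y - X^2) in R^2.
q1 = {(0, 1): 1, (2, 0): 1}    # Y + X^2
q2 = {(0, 1): 1, (2, 0): -1}   # Y - X^2

def homogeneous_components(q):
    """Split q into its homogeneous components q_j, grouped by total degree."""
    comps = defaultdict(dict)
    for expo, c in q.items():
        comps[sum(expo)][expo] = c
    return list(comps.values())

def ev(q, pt):
    # evaluate q at a point pt = (x, y)
    return sum(c * pt[0] ** e[0] * pt[1] ** e[1] for e, c in q.items())

comps = [h for q in (q1, q2) for h in homogeneous_components(q)]
# the components are Y, X^2, Y, -X^2, so V({q_ij}) = V(Y, X^2) = {0}
grid = [(x / 4, y / 4) for x, y in product(range(-8, 9), repeat=2)]
V  = [p for p in grid if ev(q1, p) == 0 and ev(q2, p) == 0]
Vh = [p for p in grid if all(ev(h, p) == 0 for h in comps)]
assert V == Vh == [(0.0, 0.0)]
```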
It is clear that the intersection of any number of homogeneous varieties in \( {k}^{n + 1} \) is a homogeneous variety; likewise for projective varieties in \( {\mathbb{P}}^{n}\left( k\right) \) . Hence for any subset of \( {\mathbb{P}}^{n}\left( k\right) \) there is a smallest projective variety in \( {\mathbb{P}}^{n}\left( k\right) \) containing that subset. Definition 2.8. Let \( {k}^{n} \subset {\mathbb{P}}^{n}\left( k\right) \), and let \( V \) be a variety in \( {k}^{n} \) . The smallest projective variety in \( {\mathbb{P}}^{n}\left( k\right) \) containing \( V \) is called the projective completion of \( V \) in \( {\mathbb{P}}^{n}\left( k\right) \) and is denoted by \( {V}^{c} \) ; sometimes notationally it is preferable to refer to the homogeneous variety \( \mathrm{H}\left( V\right) \) in \( {k}^{n + 1} \) representing \( {V}^{c} \), and we then also denote \( {V}^{c} \) by \( \mathrm{H}\left( V\right) \) . Definition 2.9. Let \( p\left( {{X}_{1},\ldots ,{X}_{n}}\right) \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) be of degree \( d \), and write \( p = {p}_{0} + {p}_{1} + \ldots + {p}_{d} \) as in Definition 2.3. Then \[ {p}_{0}{X}_{n + 1}{}^{d} + {p}_{1}{X}_{n + 1}{}^{d - 1} + \ldots + {p}_{d} \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n},{X}_{n + 1}}\right\rbrack \] is homogeneous of degree \( d \) and is called the homogenization of \( p \) ; we denote it by \( {\mathrm{H}}_{{X}_{n + 1}}\left( p\right) ,{\mathrm{H}}_{n + 1}\left( p\right) \), or by just \( \mathrm{H}\left( p\right) \), depending on context. If \( p \in k\left\lbrack {{X}_{1},\ldots ,{\widehat{X}}_{i},\ldots ,{X}_{n + 1}}\right\rbrack \), the homogenization \( {\mathrm{H}}_{{X}_{i}}\left( p\right) = {\mathrm{H}}_{i}\left( p\right) \) of \( p \) at \( {X}_{i} \) is defined analogously. Remark 2.10. Suppose \( k = \mathbb{C} \) . 
It turns out that if \( \mathrm{V}\left( {{p}_{1},\ldots ,{p}_{r}}\right) \subset {\mathbb{C}}^{n} \), then \( {V}^{c} \) is represented by the homogeneous variety \[ \mathrm{V}\left( {{\mathrm{H}}_{n + 1}\left( {p}_{1}\right) ,\ldots ,{\mathrm{H}}_{n + 1}\left( {p}_{r}\right) }\right) \subset {\mathbb{C}}^{n + 1}. \] (2) Also we shall see that \( {V}^{c} \) is the topological closure of \( V \) in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) . Neither of these statements is true in general for varieties over \( \mathbb{R} \) (see Example 2.22). However, any projective variety \( V \subseteq {\mathbb{P}}^{n}\left( \mathbb{R}\right) \) is topologically closed. \( H\left( V\right) \) is the intersection of hypersurfaces, and each hypersurface is the inverse image under a polynomial function of the closed set \( \{ 0\} \subset \mathbb{R} \) ; hence each hypersurface is closed. (We note, of course, that our canonical map from \( {\mathbb{R}}^{n + 1} \) to \( {\mathbb{P}}^{n}\left( \mathbb{R}\right) \) sends closed sets to closed sets.) Thus if the topological closure of \( V \) in \( {\mathbb{P}}^{n}\left( \mathbb{R}\right) \) is a variety, it is the projective completion of \( V \) ; this will be of use to us in the examples of this section. We have homogenized both polynomials and varieties; those operations, when \( k = \mathbb{C} \), are related in (2). We can also reverse the process: Definition 2.11. Let \( V \subset {\mathbb{P}}^{n}\left( k\right) \) be a projective variety, and \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) a choice of hyperplane at infinity. The part of \( V \) in \( {\mathbb{P}}^{n}\left( k\right) \smallsetminus {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) is called the dehomogenization of \( V \) at \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \), or the affine part of \( V \) relative to \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) . 
The \( n + 1 \) canonical choices of hyperplanes described in the last section (defined by \( {X}_{1} = 0,\ldots ,{X}_{n + 1} = 0 \) in \( {k}^{n + 1} \) ) induce \( n + 1 \) canonical de-homogenizations of \( {\mathbb{P}}^{n}\left( k\right) \), and also of any projective variety \( V \) in \( {\mathbb{P}}^{n}\left( k\right) \) . As before, \( V \) is covered by the \( n + 1 \) corresponding affine parts of \( V \) . Notation 2.12. We denote the dehomogenization of \( V \) at \( {\mathbb{P}}_{\infty }{}^{n - 1} \) by \( {\mathrm{D}}_{{\mathbb{P}}_{\infty }}{}^{n - 1}\left( V\right) \) , or by \( \mathrm{D}\left( V\right) \) if the hyperplane \( {\mathbb{P}}_{\infty }{}^{n - 1} \) is clear from context. We denote the above canonical dehomogenizations of any \( V \) by \( {\mathrm{D}}_{1}\left( V\right) ,\ldots ,{\mathrm{D}}_{n + 1}\left( V\right) \) . Just as there are \( n + 1 \) canonical dehomogenizations of \( {\mathbb{P}}^{n}\left( k\right) \), there are also \( n + 1 \) canonical dehomogenizations of any homogeneous polynomial \( p \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \) . Definition 2.13. Let \( q\left( {{X}_{1},\ldots ,{X}_{n + 1}}\right) \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \) be a homogeneous polynomial. The polynomial \[ q\left( {{X}_{1},\ldots ,{X}_{i - 1},1,{X}_{i + 1},\ldots ,{X}_{n + 1}}\right) \] is called the dehomogenization of \( q\left( {{X}_{1},\ldots ,{X}_{n + 1}}\right) \) at \( {X}_{i} \) ; we denote it by \( {\mathrm{D}}_{{X}_{i}}\left( q\right) \), by \( {\mathrm{D}}_{i}\left( q\right) \), or by \( \mathrm{D}\left( q\right) \) if clear from context. Lemma 2.14. Let \( {q}_{1},\ldots ,{q}_{r} \in k\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack \) be homogeneous; let \( \mathbf{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \subset {\mathbb{P}}^{n}\left( k\right) \) be the projective variety defined by \( {q}_{1},\ldots ,{q}_{r} \) . 
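Homogenization (Definition 2.9) and dehomogenization (Definition 2.13) are mechanical operations on coefficients, and in particular \( {\mathrm{D}}_{n + 1}\left( {{\mathrm{H}}_{n + 1}\left( p\right) }\right) = p \). A minimal Python sketch, with our own encoding of polynomials as dicts from exponent tuples to coefficients:

```python
# p in k[X1,...,Xn] as a dict: exponent tuple -> coefficient
def homogenize(p):
    """H_{n+1}(p): pad each monomial with X_{n+1}^(d - j), d = deg p (Def. 2.9)."""
    d = max(sum(e) for e in p)
    return {e + (d - sum(e),): c for e, c in p.items()}

def dehomogenize(q, i):
    """D_i(q): set X_i = 1 and drop that variable (Def. 2.13)."""
    out = {}
    for e, c in q.items():
        key = e[:i] + e[i + 1:]
        out[key] = out.get(key, 0) + c
    return out

# p = Y - X^2 in k[X, Y], with exponent tuples (ex, ey)
p = {(2, 0): -1, (0, 1): 1}
P = homogenize(p)                     # -X^2 + Y*Z, homogeneous of degree 2
assert P == {(2, 0, 0): -1, (0, 1, 1): 1}
assert dehomogenize(P, 2) == p        # D_{n+1}(H_{n+1}(p)) = p
```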
Then \[ {\mathrm{D}}_{i}\left( {\mathrm{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) }\right) = \mathrm{V}\left( {{\mathrm{D}}_{i}\left( {q}_{1}\right) ,\ldots ,{\mathrm{D}}_{i}\left( {q}_{r}\right) }\right) . \] (3) Proof. The variety \( \mathrm{V}\left( {{\mathrm{D}}_{i}\left( {q}_{1}\right) ,\ldots ,{\mathrm{D}}_{i}\left( {q}_{r}\right) }\right) \) can be looked at as the intersection of the variety \( \mathrm{V}\left( {{q}_{1},\ldots ,{q}_{r}}\right) \) with the plane given by \( {X}_{i} = 1 \) in \( {k
1169_(GTM75)Basic Theory of Algebraic Groups and Lie Algebras
Definition 1.1
Definition 1.1. Let \( K \) be an extension field of a field \( F \) . We say that \( K \) is separable over \( F \) if, for every \( K \) -space \( S \), every derivation from \( F \) to \( S \) extends to one from \( K \) to \( S \) . The natural heredity pattern is described in the following proposition. Proposition 1.2. Let \( F \subset K \subset L \) be a tower of fields. If \( K \) is separable over \( F \) and \( L \) is separable over \( K \) then \( L \) is separable over \( F \) . If \( L \) is separable over \( F \), so is \( K \) . Proof. The first part is clear from the definition. In order to prove the second part, let \( \tau \) be a derivation from \( F \) to a \( K \) -space \( S \) . We form the \( L \) - space \( L \otimes S \), and write it as a direct \( K \) -space sum \( S + T \) . Now we may view \( \tau \) as a derivation from \( F \) to \( L \otimes S \) . By assumption on \( L \), this derivation extends to a derivation, \( \sigma \) say, from \( L \) to \( L \otimes S \) . If \( \pi \) is the \( K \) -space projection from \( L \otimes S \) to \( S \) corresponding to our above decomposition, then the restriction of \( \pi \circ \sigma \) to \( K \) is evidently a derivation from \( K \) to \( S \) extending \( \tau \) . In the following proposition, "separably algebraic" has the usual meaning. Proposition 1.3. Let \( K \) be a field, \( F \) a subfield of \( K \), and \( u \) an element of \( K \) . Let \( \tau \) be a derivation from \( F \) to an \( F\left( u\right) \) -space \( S \) . If \( u \) is not algebraic over \( F \) then, for every element \( s \) of \( S \), there is one and only one extension of \( \tau \) to a derivation from \( F\left( u\right) \) to \( S \) sending \( u \) onto \( s \) . If \( u \) is separably algebraic over \( F \) then \( \tau \) has precisely one extension to a derivation from \( F\left( u\right) \) to \( S \) . Proof. First, consider the case where \( u \) is not algebraic over \( F \) . 
Clearly, there is one and only one derivation \( \sigma \) from \( F\left\lbrack u\right\rbrack \) to \( S \) sending \( u \) onto \( s \) and coinciding with \( \tau \) on \( F \) . In fact, \( \sigma \) is given by \[ \sigma \left( {\mathop{\sum }\limits_{i}{c}_{i}{u}^{i}}\right) = \mathop{\sum }\limits_{i}\left( {{u}^{i} \cdot \tau \left( {c}_{i}\right) + i{c}_{i}{u}^{i - 1} \cdot s}\right) . \] Now \( \sigma \) extends in one and only one way to a derivation from \( F\left( u\right) \) to \( S \) by the usual formula for the derivative of a fraction: \[ \sigma \left( {a{b}^{-1}}\right) = {b}^{-2} \cdot \left( {b \cdot \sigma \left( a\right) - a \cdot \sigma \left( b\right) }\right) . \] Next, suppose that \( u \) is separably algebraic over \( F \) . Let \( f \) denote the monic minimum polynomial for \( u \) relative to \( F \), and let \( {f}^{\prime } \) denote the formal derivative of \( f \) . The assumption on \( u \) means that \( {f}^{\prime }\left( u\right) \neq 0 \) . Let us denote the coefficients of \( f \) by \( {c}_{i}\left( {i = 0,\ldots, n}\right) \), with \( {c}_{n} = 1 \) . Let \( x \) be an auxiliary variable, and let us regard \( S \) as an \( F\left\lbrack x\right\rbrack \) -module via the \( F \) -algebra homomorphism from \( F\left\lbrack x\right\rbrack \) to \( F\left\lbrack u\right\rbrack \) sending \( x \) onto \( u \) . Let \( \rho \) be the derivation from \( F\left\lbrack x\right\rbrack \) to \( S \) that is determined by the conditions that \( \rho \) be an extension of \( \tau \) and that \[ \rho \left( x\right) = - {f}^{\prime }{\left( u\right) }^{-1} \cdot \mathop{\sum }\limits_{{i = 0}}^{n}{u}^{i} \cdot \tau \left( {c}_{i}\right) . \] Then \( \rho \) annihilates the ideal \( F\left\lbrack x\right\rbrack f\left( x\right) \), and therefore induces a derivation from \( F\left\lbrack u\right\rbrack \) to \( S \) extending \( \tau \) . 
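The displayed extension formula can be exercised concretely. Taking \( F = \mathbb{Q} \) with \( \tau = 0 \) and \( s = 1 \), the formula reduces to the formal derivative \( d/{du} \) on \( F\left\lbrack u\right\rbrack \), and the derivation (Leibniz) property can be checked directly. A minimal Python sketch, with polynomials as coefficient lists:

```python
# F[u] with F = Q; polynomials as integer coefficient lists (index = power of u).
# With tau = 0 on F, the displayed extension formula reduces to
# sigma(sum c_i u^i) = sum i*c_i*u^(i-1) * s; we take s = 1 (formal d/du).

def sigma(f):
    return [i * c for i, c in enumerate(f)][1:] or [0]

def mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

# Leibniz rule: sigma(f*g) = f*sigma(g) + g*sigma(f)
f = [1, 0, 3]        # 1 + 3u^2
g = [2, -1]          # 2 - u
lhs = trim(sigma(mul(f, g)))
rhs = trim(add(mul(f, sigma(g)), mul(g, sigma(f))))
assert lhs == rhs
```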
If \( \sigma \) is any such extension of \( \tau \), we must have \[ 0 = \sigma \left( {f\left( u\right) }\right) = {f}^{\prime }\left( u\right) \cdot \sigma \left( u\right) + \mathop{\sum }\limits_{{i = 0}}^{n}{u}^{i} \cdot \tau \left( {c}_{i}\right) , \] which shows that \( \sigma \) must coincide with the derivation induced by \( \rho \) . Via an evident application of Zorn's Lemma, Proposition 1.3 shows that, in characteristic 0 , every field extension is separable, and also that every purely transcendental field extension is separable. Lemma 1.4. Let \( F \) be a field of non-zero characteristic \( p \), let \( S \) be an \( F \) -space, \( s \) an element of \( S \), and \( u \) an element of \( F \) that is not the \( p \) -th power of an element of \( F \) . There is a derivation \( \tau \) from \( F \) to \( S \) such that \( \tau \left( u\right) = s \) . Proof. Let \( {F}^{\left\lbrack p\right\rbrack } \) denote the subfield of \( F \) consisting of the \( p \) -th powers of the elements of \( F \), let \( L \) be a subfield of \( F \) containing \( {F}^{\left\lbrack p\right\rbrack } \), and let \( v \) be an element of \( F \) not belonging to \( L \) . Let \( f\left( x\right) \) denote the minimum polynomial for \( v \) relative to \( L \) . Then \( f\left( x\right) \) divides \( {x}^{p} - {v}^{p} = {\left( x - v\right) }^{p} \) in \( F\left\lbrack x\right\rbrack \), so that we must have \( f\left( x\right) = {\left( x - v\right) }^{q} \), with \( 0 < q \leq p \) . Now \( {v}^{q} \) and \( {v}^{p} \) lie in \( L \) . If \( q \neq p \) , there are integers \( r \) and \( s \) such that \( {rp} + {sq} = 1 \), so that \( v = {\left( {v}^{p}\right) }^{r}{\left( {v}^{q}\right) }^{s} \in L \) , contrary to assumption. Therefore, we have \( q = p \), so that \( f\left( x\right) = {x}^{p} - {v}^{p} \) . Let \( \rho \) be any derivation from \( L \) to \( S \) . 
Clearly, \( \rho \) can be extended to a derivation from \( L\left\lbrack x\right\rbrack \) to \( S \) sending \( x \) to \( s \) . This extension sends \( f\left( x\right) \) to 0 and hence induces an extension of \( \rho \) to a derivation from \( L\left\lbrack v\right\rbrack \) to \( S \) sending \( v \) to \( s \) . Using this result in an evident application of Zorn's Lemma, we obtain the required derivation \( \tau \) . Proposition 1.5. Let \( F \) be a field of non-zero characteristic \( p \), and let \( K \) be a field extension of \( F \) . Then \( K \) is separable over \( F \) if and only if, for every \( F \) -linearly independent subset \( U \) of \( K \), the set \( {U}^{\left\lbrack p\right\rbrack } \) of \( p \) -th powers of the elements of \( U \) is F-linearly independent. Proof. First, suppose that the condition is satisfied. Then the multiplication map from \( F{ \otimes }_{{F}^{\left\lbrack p\right\rbrack }}{K}^{\left\lbrack p\right\rbrack } \) to \( K \) is injective, so that the subfield \( F\left\lbrack {K}^{\left\lbrack p\right\rbrack }\right\rbrack \) of \( K \) is \( F \) -algebra isomorphic with \( F{ \otimes }_{{F}^{\left\lbrack p\right\rbrack }}{K}^{\left\lbrack p\right\rbrack } \) . Let \( \tau \) be a derivation from \( F \) to a \( K \) -space \( S \) . Evidently, \( \tau \) annihilates \( {F}^{\left\lbrack p\right\rbrack } \), so that it is an \( {F}^{\left\lbrack p\right\rbrack } \) -linear map. As such, it extends naturally to a \( {K}^{\left\lbrack p\right\rbrack } \) -linear map from \( F{ \otimes }_{{F}^{\left\lbrack p\right\rbrack }}{K}^{\left\lbrack p\right\rbrack } \) to \( \mathrm{S} \) , which is clearly a derivation. Because of the isomorphism noted above, this means that \( \tau \) extends to a derivation from \( F\left\lbrack {K}^{\left\lbrack p\right\rbrack }\right\rbrack \) to \( S \) . 
It is clear from Lemma 1.4 that we can apply Zorn's Lemma in the usual way in order to extend this further to a derivation from \( K \) to \( S \) . Thus, \( K \) is separable over \( F \) . Now suppose that the condition of the proposition is not satisfied, and choose an \( F \) -linearly independent subset \( \left( {{u}_{1},\ldots ,{u}_{n}}\right) \) of \( K \) such that the \( {u}_{i}^{p} \) ’s are not linearly independent over \( F \), with \( n \) as small as possible. Then there are elements \( {c}_{2},\ldots ,{c}_{n} \) in \( F \) such that \[ {u}_{1}^{p} + \mathop{\sum }\limits_{{i = 2}}^{n}{c}_{i}{u}_{i}^{p} = 0 \] Suppose that, contrary to what we must prove, \( K \) is separable over \( F \) . Then every derivation \( \tau \) from \( F \) to \( F \) extends to a derivation \( \sigma \) from \( K \) to \( K \) . Applying \( \sigma \) to our above relation, we obtain \[ \mathop{\sum }\limits_{{i = 2}}^{n}\tau \left( {c}_{i}\right) {u}_{i}^{p} = 0 \] By the minimality of \( n \), this gives \( \tau \left( {c}_{i}\right) = 0 \) for each \( i \) from 2 to \( n \) . Thus, each \( {c}_{i} \) is annihilated by every derivation from \( F \) to \( F \) . By Lemma 1.4, this implies that each \( {c}_{i} \) is the \( p \) -th power \( {d}_{i}^{p} \) of an element \( {d}_{i} \) of \( F \) . Our original relation may now be written \[ {\left( {u}_{1} + \mathop{\sum }\limits_{{i = 2}}^{n}{d}_{i}{u}_{i}\right) }^{p} = 0 \] This contradicts the \( F \) -linear independence of the \( {u}_{i} \) ’s. The conclusion is that if the condition of the proposition is not satisfied then \( K \) is not separable over \( F \) . If \( R \) is any commutative ring, let us agree to call a derivation from \( R \) to \( R \) simply a derivation of \( R \) . The last part of the proof of Proposition 1.5 has shown that if every derivation of \( F \) extends to a derivation of \( K \) then the condition of Proposition 1.5 is satisfied. 
Hence, if \( K \) is an extension field of the field \( F \) such that every derivation of \( F \) extends to one of \( K \), then \( K \) is separable over \( F \) . It follows from Zorn's Lemma and Proposition 1.3 that an extension that is separably algebraic in the usual sense is separable also in the sense of Definition 1.1. Conversely, if \( K \) is an algebraic
1092_(GTM249)Classical Fourier Analysis
Definition 4.3.6
Definition 4.3.6. Let \( {t}_{0} \in {\mathbf{R}}^{n} \) . A bounded function \( b \) on \( {\mathbf{R}}^{n} \) is called regulated at the point \( {t}_{0} \) if \[ \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}\frac{1}{{\varepsilon }^{n}}{\int }_{\left| t\right| \leq \varepsilon }\left( {b\left( {{t}_{0} - t}\right) - b\left( {t}_{0}\right) }\right) {dt} = 0. \] (4.3.8) The function \( b \) is called regulated if it is regulated at every \( {t}_{0} \in {\mathbf{R}}^{n} \) . Clearly, if \( {t}_{0} \) is a Lebesgue point of \( b \), then \( b \) is regulated at \( {t}_{0} \) . In particular, this is the case if \( b \) is continuous at \( {t}_{0} \) . If \( b\left( {t}_{0}\right) = 0 \), condition (4.3.8) also holds when \( b\left( {{t}_{0} - t}\right) = - b\left( {{t}_{0} + t}\right) \) whenever \( \left| t\right| \leq \varepsilon \) for some \( \varepsilon > 0 \) ; for instance the function \( b\left( t\right) = - i\operatorname{sgn}\left( {t - {t}_{0}}\right) \) has this property. An example of a regulated function is the following modification of the characteristic function of the cube \( {\left\lbrack -1,1\right\rbrack }^{n} \) \[ {\widetilde{\chi }}_{{\left\lbrack -1,1\right\rbrack }^{n}}\left( {{x}_{1},\ldots ,{x}_{n}}\right) = \left\{ \begin{array}{ll} 1 & \text{ when all }\left| {x}_{j}\right| < 1, \\ {2}^{k - n} & \text{ if }\left( {{x}_{1},\ldots ,{x}_{n}}\right) \text{ belongs to some }k\text{-dimensional } \\ & \text{ face of the boundary of }{\left\lbrack -1,1\right\rbrack }^{n}, \\ 0 & \text{ when some }\left| {x}_{j}\right| > 1, \end{array}\right. \] with the understanding that points are zero-dimensional. The first transference result we discuss is the following. Theorem 4.3.7. Suppose that \( b \) is a regulated function at every point \( m \in {\mathbf{Z}}^{n} \) and that \( b \) lies in \( {\mathcal{M}}_{p}\left( {\mathbf{R}}^{n}\right) \) for some \( 1 < p < \infty \) . 
Then the sequence \( \{ b\left( m\right) {\} }_{m \in {\mathbf{Z}}^{n}} \) is in \( {\mathcal{M}}_{p}\left( {\mathbf{Z}}^{n}\right) \) and moreover, \[ {\begin{Vmatrix}\{ b\left( m\right) {\} }_{m \in {\mathbf{Z}}^{n}}\end{Vmatrix}}_{{\mathcal{M}}_{p}\left( {\mathbf{Z}}^{n}\right) } \leq \parallel b{\parallel }_{{\mathcal{M}}_{p}\left( {\mathbf{R}}^{n}\right) }. \] If \( b \) is regulated everywhere, then for all \( R > 0 \) the sequences \( \{ b\left( {m/R}\right) {\} }_{m \in {\mathbf{Z}}^{n}} \) are in \( {\mathcal{M}}_{p}\left( {\mathbf{Z}}^{n}\right) \) and we have \[ \mathop{\sup }\limits_{{R > 0}}{\begin{Vmatrix}\{ b\left( m/R\right) {\} }_{m \in {\mathbf{Z}}^{n}}\end{Vmatrix}}_{{\mathcal{M}}_{p}\left( {\mathbf{Z}}^{n}\right) } \leq \parallel b{\parallel }_{{\mathcal{M}}_{p}\left( {\mathbf{R}}^{n}\right) }. \] The second conclusion of the theorem is a consequence of the first, since for a given \( R > 0 \) the function \( b\left( {\xi /R}\right) \) is regulated on \( {\mathbf{Z}}^{n} \) and has the same \( {\mathcal{M}}_{p}\left( {\mathbf{R}}^{n}\right) \) norm as \( b\left( \xi \right) \) . Before we prove this result, we state and prove a couple of lemmas. Lemma 4.3.8. Suppose that the function \( b \) on \( {\mathbf{R}}^{n} \) is regulated at the point \( {x}_{0} \) . Let \( {K}_{\varepsilon }\left( x\right) = {\varepsilon }^{-n}{e}^{-\pi {\left| x/\varepsilon \right| }^{2}} \) for \( \varepsilon > 0 \) . Then we have that \( \left( {b * {K}_{\varepsilon }}\right) \left( {x}_{0}\right) \rightarrow b\left( {x}_{0}\right) \) as \( \varepsilon \rightarrow 0 \) . Proof. For \( r > 0 \) define the function \[ {F}_{{x}_{0}}\left( r\right) = \frac{1}{{r}^{n}}{\int }_{\left| t\right| \leq r}\left( {b\left( {{x}_{0} - t}\right) - b\left( {x}_{0}\right) }\right) {dt} = \frac{1}{{r}^{n}}{\int }_{0}^{r}{\int }_{{\mathbf{S}}^{n - 1}}\left( {b\left( {{x}_{0} - {s\theta }}\right) - b\left( {x}_{0}\right) }\right) {d\theta }{s}^{n - 1}{ds}. \] Let \( \eta > 0 \) . 
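The quantity \( {F}_{{x}_{0}}\left( r\right) \) just introduced is easy to probe numerically. A minimal sketch in dimension \( n = 1 \) with \( {x}_{0} = 0 \) (both test functions are our own illustrative choices, not from the text): at a Lebesgue point the average tends to 0, while at a genuine jump it does not.

```python
# Numerical probe of F_{x0}(r) in dimension n = 1 with x0 = 0.
# Both test functions are our own illustrative choices, not from the text.

def F(b, r, steps=10_000):
    """Midpoint-rule approximation of (1/r) * integral_{|t| <= r} (b(0 - t) - b(0)) dt."""
    h = 2 * r / steps
    total = 0.0
    for i in range(steps):
        t = -r + (i + 0.5) * h
        total += (b(-t) - b(0.0)) * h
    return total / r

# b(t) = min(|t|, 1): 0 is a Lebesgue point, and F(r) = r exactly, so F(r) -> 0.
def regulated(t):
    return min(abs(t), 1.0)

# b = indicator of (0, infinity) with b(0) = 0: here F(r) = 1 for every r,
# so condition (4.3.8) fails and b is not regulated at 0.
def jump(t):
    return 1.0 if t > 0 else 0.0

for r in (0.5, 0.1, 0.02):
    print(r, round(F(regulated, r), 6), round(F(jump, r), 6))
```

For these two integrands the midpoint rule is exact, so the printed averages match the closed forms \( F\left( r\right) = r \) and \( F\left( r\right) = 1 \).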
Since \( b \) is regulated at \( {x}_{0} \) there is a \( \delta > 0 \) such that for \( r \leq \delta \) we have \( \left| {{F}_{{x}_{0}}\left( r\right) }\right| \leq \eta \) . Fix such a \( \delta \) and write \[ \left( {b * {K}_{\varepsilon }}\right) \left( {x}_{0}\right) - b\left( {x}_{0}\right) = {\int }_{y \in {\mathbf{R}}^{n}}\left( {b\left( {{x}_{0} - y}\right) - b\left( {x}_{0}\right) }\right) {K}_{\varepsilon }\left( y\right) {dy} = {A}_{1}^{\varepsilon } + {A}_{2}^{\varepsilon }, \] where \[ {A}_{1}^{\varepsilon } = {\int }_{\left| y\right| \geq \delta }\left( {b\left( {{x}_{0} - y}\right) - b\left( {x}_{0}\right) }\right) {K}_{\varepsilon }\left( y\right) {dy} \] and \[ {A}_{2}^{\varepsilon } = {\int }_{\left| y\right| < \delta }\left( {b\left( {{x}_{0} - y}\right) - b\left( {x}_{0}\right) }\right) {K}_{\varepsilon }\left( y\right) {dy} \] \[ = {\int }_{0}^{\delta }\frac{1}{{\varepsilon }^{n}}{e}^{-\pi {\left( r/\varepsilon \right) }^{2}}{\int }_{{\mathbf{S}}^{n - 1}}\left( {b\left( {{x}_{0} - {r\theta }}\right) - b\left( {x}_{0}\right) }\right) {d\theta }\,{r}^{n - 1}\,{dr} \] \[ = {\int }_{0}^{\delta }\frac{1}{{\varepsilon }^{n}}{e}^{-\pi {\left( r/\varepsilon \right) }^{2}}\frac{d}{dr}\left( {{r}^{n}{F}_{{x}_{0}}\left( r\right) }\right) {dr}. \] For our given \( \eta > 0 \) there is an \( {\varepsilon }_{0} > 0 \) such that for \( \varepsilon < {\varepsilon }_{0} \) we have \[ \left| {A}_{1}^{\varepsilon }\right| \leq 2\parallel b{\parallel }_{{L}^{\infty }}{\int }_{\left| y\right| \geq \frac{\delta }{\varepsilon }}{e}^{-\pi {\left| y\right| }^{2}}{dy} < \eta .
\] Via an integration by parts, \( {A}_{2}^{\varepsilon } \) can be estimated as \[ \left| {A}_{2}^{\varepsilon }\right| = \left| {{\delta }^{n}{F}_{{x}_{0}}\left( \delta \right) \frac{1}{{\varepsilon }^{n}}{e}^{-\pi {\left( \delta /\varepsilon \right) }^{2}} - 0 + {2\pi }{\int }_{0}^{\delta }\frac{r}{{\varepsilon }^{n + 2}}{e}^{-\pi {\left( r/\varepsilon \right) }^{2}}{r}^{n}{F}_{{x}_{0}}\left( r\right) {dr}}\right| \] \[ = \left| {{F}_{{x}_{0}}\left( \delta \right) \frac{{\delta }^{n}}{{\varepsilon }^{n}}{e}^{-\pi {\left( \delta /\varepsilon \right) }^{2}} + {2\pi }{\int }_{0}^{\delta /\varepsilon }{r}^{n + 1}{F}_{{x}_{0}}\left( {\varepsilon r}\right) {e}^{-\pi {r}^{2}}{dr}}\right| \] \[ \leq \left| {{F}_{{x}_{0}}\left( \delta \right) }\right| \frac{{\delta }^{n}}{{\varepsilon }^{n}}{e}^{-\pi {\left( \delta /\varepsilon \right) }^{2}} + \mathop{\sup }\limits_{{0 < r \leq \frac{\delta }{\varepsilon }}}\left| {{F}_{{x}_{0}}\left( {\varepsilon r}\right) }\right| {2\pi }{\int }_{0}^{\delta /\varepsilon }{r}^{n + 1}{e}^{-\pi {r}^{2}}{dr} \] \[ \leq \left| {{F}_{{x}_{0}}\left( \delta \right) }\right| {C}_{n} + \mathop{\sup }\limits_{{0 < r \leq \delta }}\left| {{F}_{{x}_{0}}\left( r\right) }\right| {C}_{n}^{\prime } \] \[ \leq \left( {{C}_{n} + {C}_{n}^{\prime }}\right) \eta \] where we set \( {C}_{n} = \mathop{\sup }\limits_{{t > 0}}{t}^{n}{e}^{-\pi {t}^{2}} \) and \( {C}_{n}^{\prime } = {2\pi }{\int }_{0}^{\infty }{r}^{n + 1}{e}^{-\pi {r}^{2}}{dr} \) . Then for \( \varepsilon < {\varepsilon }_{0} \) we have \( \left| {\left( {b * {K}_{\varepsilon }}\right) \left( {x}_{0}\right) - b\left( {x}_{0}\right) }\right| < \left( {{C}_{n} + {C}_{n}^{\prime } + 1}\right) \eta \), and thus \( \left( {b * {K}_{\varepsilon }}\right) \left( {x}_{0}\right) \rightarrow b\left( {x}_{0}\right) \) as \( \varepsilon \rightarrow 0 \) . \( ▱ \) Lemma 4.3.9.
Let \( T \) be the operator on \( {\mathbf{R}}^{n} \) whose multiplier is \( b\left( \xi \right) \), and let \( S \) be the operator on \( {\mathbf{T}}^{n} \) whose multiplier is the sequence \( \{ b\left( m\right) {\} }_{m \in {\mathbf{Z}}^{n}} \) . Assume that \( b\left( \xi \right) \) is regulated at every point \( \xi = m \in {\mathbf{Z}}^{n} \) . Suppose that \( P \) and \( Q \) are trigonometric polynomials on \( {\mathbf{T}}^{n} \) and let \( {L}_{\varepsilon }\left( x\right) = {e}^{-{\pi \varepsilon }{\left| x\right| }^{2}} \) for \( x \in {\mathbf{R}}^{n} \) and \( \varepsilon > 0 \) . Then the following identity is valid whenever \( \alpha ,\beta > 0 \) and \( \alpha + \beta = 1 \) : \[ \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}{\varepsilon }^{\frac{n}{2}}{\int }_{{\mathbf{R}}^{n}}T\left( {P{L}_{\varepsilon \alpha }}\right) \left( x\right) \overline{Q\left( x\right) {L}_{\varepsilon \beta }\left( x\right) }{dx} = {\int }_{{\mathbf{T}}^{n}}S\left( P\right) \left( x\right) \overline{Q\left( x\right) }{dx}. \] (4.3.9) Proof. It suffices to prove the required assertion for \( P\left( x\right) = {e}^{{2\pi im} \cdot x} \) and \( Q\left( x\right) = \) \( {e}^{{2\pi ik} \cdot x}, k, m \in {\mathbf{Z}}^{n} \), since the general case follows from this case by linearity. In view of Parseval's relation (Proposition 3.2.7 (3)), we have \[ {\int }_{{\mathbf{T}}^{n}}S\left( P\right) \left( x\right) \overline{Q\left( x\right) }{dx} = \mathop{\sum }\limits_{{r \in {\mathbf{Z}}^{n}}}b\left( r\right) \widehat{P}\left( r\right) \overline{\widehat{Q}\left( r\right) } = \left\{ \begin{array}{ll} b\left( m\right) & \text{ when }k = m, \\ 0 & \text{ when }k \neq m. \end{array}\right. 
\] (4.3.10) On the other hand, using the identity in Theorem 2.2.14 (3), we obtain \[ {\varepsilon }^{\frac{n}{2}}{\int }_{{\mathbf{R}}^{n}}T\left( {P{L}_{\varepsilon \alpha }}\right) \left( x\right) \overline{Q\left( x\right) {L}_{\varepsilon \beta }\left( x\right) }{dx} \] \[ = {\varepsilon }^{\frac{n}{2}}{\int }_{{\mathbf{R}}^{n}}b\left( \xi \right) \widehat{P{L}_{\varepsilon \alpha }}\left( \xi \right) \overline{\widehat{Q{L}_{\varepsilon \beta }}\left( \xi \right) }{d\xi } \] \[ = {\varepsilon }^{\frac{n}{2}}{\int }_{{\mathbf{R}}^{n}}b\left( \xi \right) {\left( \varepsilon \alpha \right) }^{-\frac{n}{2}}{e}^{-\pi \frac{{\left| \xi - m\right| }^{2}}{\varepsilon \alpha }}{\left( \varepsilon \beta \right) }^{-\frac{n}{2}}{e}^{-\pi \frac{{\left| \xi - k\right| }^{2}}{\varepsilon \beta }}{d\xi } \] \[ = {\left( \varepsilon \alpha \beta \right) }^{-\frac{n}{2}}
1172_(GTM8)Axiomatic Set Theory
Definition 9.36
Definition 9.36. \( {F}_{0} \triangleq \left\{ {k \in K \mid h\left( {{f}_{0}\left( k\right) }\right) = \mathbf{1}}\right\} \) . For \( t,{t}_{1},{t}_{2} \) constant terms, 1. \( D\left( {{t}_{1} \in {t}_{2}}\right) \overset{\Delta }{ \leftrightarrow }D\left( {t}_{1}\right) \in D\left( {t}_{2}\right) \) . 2. \( D\left( {V\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }D\left( t\right) \in M \) . 3. \( D\left( {F\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }D\left( t\right) \in {F}_{0} \) . The remainder of the definition is the same as in Definition 8.14. Let \( M\left\lbrack {F}_{0}\right\rbrack \triangleq M\left\lbrack h\right\rbrack \triangleq \{ D\left( t\right) \mid t \in T\} \), where \( T = \left\{ {{T}_{\alpha }{}^{M} \mid \alpha \in M}\right\} \) . When we wish to identify the particular denotation operator associated with a particular \( M\left\lbrack {F}_{0}\right\rbrack \) we will write \( {D}_{M\left\lbrack {F}_{0}\right\rbrack } \) instead of \( D \) . 4. \( {D}^{\prime }\left( {V\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\exists k \in M}\right) {D}^{\prime }\left( {t = \underline{k}}\right) \) . 5. \( {D}^{\prime }\left( {F\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\exists k \in {F}_{0}}\right) {D}^{\prime }\left( {t = \underline{k}}\right) \) . 6. \( {D}^{\prime }\left( {{\underline{k}}_{1} \in {\underline{k}}_{2}}\right) \overset{\Delta }{ \leftrightarrow }{k}_{1} \in {k}_{2},\;{k}_{1},{k}_{2} \in K \) . 7. \( {D}^{\prime }\left( {t \in \underline{k}}\right) \overset{\Delta }{ \leftrightarrow }\left( {\exists {k}^{\prime } \in K}\right) {D}^{\prime }\left( {t = {\underline{k}}^{\prime }}\right) \) . 8.
\( {D}^{\prime }\left( {t \in {\widehat{x}}^{\beta }\varphi \left( {x}^{\beta }\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\exists {t}^{\prime } \in {T}_{\beta }}\right) \left\lbrack {{D}^{\prime }\left( {t = {t}^{\prime }}\right) \land {D}^{\prime }\left( {\varphi \left( {t}^{\prime }\right) }\right) }\right\rbrack ,\beta \in M \) . 9. \( {D}^{\prime }\left( {{t}_{1} = {t}_{2}}\right) \overset{\Delta }{ \leftrightarrow }\left( {\forall t \in {T}_{\beta }}\right) {D}^{\prime }\left( {t \in {t}_{1} \leftrightarrow t \in {t}_{2}}\right) ,\beta = \max \left( {\rho \left( {t}_{1}\right) ,\rho \left( {t}_{2}\right) }\right) \) . 10. \( {D}^{\prime }\left( {\neg \varphi }\right) \overset{\Delta }{ \leftrightarrow }\neg {D}^{\prime }\left( \varphi \right) ,{D}^{\prime }\left( {\varphi \land \psi }\right) \overset{\Delta }{ \leftrightarrow }{D}^{\prime }\left( \varphi \right) \land {D}^{\prime }\left( \psi \right) \) . 11. \( {D}^{\prime }\left( {\left( {\forall {x}^{\beta }}\right) \varphi \left( {x}^{\beta }\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\forall t \in {T}_{\beta }}\right) {D}^{\prime }\left( {\varphi \left( t\right) }\right) ,\beta \in M \) . Remark. It is easy to see that \( D \) is equivalent to \( {D}^{\prime } \) and the following theorem holds: Theorem 9.37. \( M\left\lbrack {F}_{0}\right\rbrack \vDash \varphi \leftrightarrow h\left( {\llbracket \varphi \rrbracket }\right) = 1 \) . Remark. Since \( \mathbf{T} \) is a \( \mathbf{B} \) -valued model of \( {ZF} \), we have \( \llbracket \varphi \rrbracket = \mathbf{1} \) for each axiom \( \varphi \) . Consequently we have the following result. Theorem 9.38. \( M\left\lbrack {F}_{0}\right\rbrack \) is a standard transitive model of \( {ZF} \) . If \( M \) satisfies the \( {AC} \) so does \( M\left\lbrack {F}_{0}\right\rbrack \) . Proof.
To show that \( M\left\lbrack {F}_{0}\right\rbrack \) satisfies the \( {AC} \) if \( M \) does, we note that if \( M \) satisfies the \( {AC} \) then, since \( K \in M, K \) is well ordered and, since \( {F}_{0} \subseteq K,{F}_{0} \) is also well ordered in \( M\left\lbrack {F}_{0}\right\rbrack \) . Hence \( M\left\lbrack {F}_{0}\right\rbrack \) satisfies the \( {AC} \) . Remark. Comparing the results of Theorem 9.38 with those discussed at the end of \( §8 \), note that we did not require the existence of a model \( \widetilde{M} \) of \( {ZF} \) with the same order type as \( M \) such that \( {F}_{0} \) is a class in \( \widetilde{M} \) ; instead \( {F}_{0} \) must satisfy certain requirements to ensure that \( M\left\lbrack {F}_{0}\right\rbrack \) is a model of \( {ZF} \) . Defining \( M\left\lbrack {F}_{0}\right\rbrack \) by considering \( \mathbf{B} \) -valued relative constructibility has many advantages, as will become clear from the applications in the next sections. We can however give an application of this method now: Suppose that \( M \) is a countable standard transitive model of \( {ZF},\mathbf{B} \in M \) and \( \mathbf{B} \) is \( M \) -complete, also \( K \in M \) and \( {\left( V\left\lbrack {f}_{0}\right\rbrack \right) }^{M} \) is defined as before. Now assume that there is some sentence \( \varphi \) such that \( \llbracket \varphi \rrbracket \neq \mathbf{0} \) . Then there is a homomorphism \( h : \left| \mathbf{B}\right| \rightarrow \left| \mathbf{2}\right| \) that is \( M \) -complete, sends \( \llbracket \varphi \rrbracket \) to \( \mathbf{1} \), and from which we get a standard 2-valued model \( M\left\lbrack {F}_{0}\right\rbrack \) of \( {ZF} \) in which \( \varphi \) is true in the ordinary sense of 2-valued logic. ## 10. Forcing As an application of the general theory developed in the previous sections we give a definition of "forcing" and derive its elementary properties.
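The passage above from a Boolean-valued model to a 2-valued one rests on a homomorphism \( h : \left| \mathbf{B}\right| \rightarrow \left| \mathbf{2}\right| \) determined by an ultrafilter. A finite sketch of this mechanism (the algebra, the point, and the principal ultrafilter are our own toy choices; for a finite algebra, \( M \)-completeness of \( h \) is automatic):

```python
from itertools import chain, combinations

# Toy example (ours, not from the text): B is the Boolean algebra of all
# subsets of a 3-point set, and h : |B| -> |2| is the two-valued homomorphism
# induced by the principal ultrafilter of sets containing the point 0.
X = frozenset({0, 1, 2})
B = [frozenset(s) for s in chain.from_iterable(combinations(sorted(X), k)
                                               for k in range(len(X) + 1))]

def h(b):
    return 1 if 0 in b else 0          # h(b) = 1  iff  b lies in the ultrafilter

F0 = [b for b in B if h(b) == 1]       # F_0 = { b in B : h(b) = 1 }

# h respects meet, join, and complement, hence is a homomorphism onto 2.
for a in B:
    assert h(X - a) == 1 - h(a)
    for b in B:
        assert h(a & b) == min(h(a), h(b))
        assert h(a | b) == max(h(a), h(b))

print(len(B), len(F0))  # 8 4: eight elements, four of which lie in F_0
```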
Throughout this section, \( M \) denotes a standard transitive model of \( {ZF} \) , \( \mathbf{P} \in M \) is a partial order structure, and \( \mathbf{B} \) is the corresponding \( M \) -complete Boolean algebra of regular open sets of \( \mathbf{P} \) in the relative sense of \( M \) . Furthermore we have \( h \) an \( M \) -complete homomorphism of \( \mathbf{B} \) into 2, \( F \) an \( M \) -complete ultrafilter for \( \mathbf{B} \), and \( G \) a set that is \( \mathbf{P} \) -generic over \( M \) , such that \( h, F \), and \( G \) are related to each other as described in \( §2 \) . Thus one of them may be given and the remaining sets are obtained from it as in \( §2 \) . We now specialize the construction of \( M\left\lbrack {F}_{0}\right\rbrack \) to one of the following cases: 1. \( K = B \) and \( {f}_{0} : B \rightarrow B \) is the identity on \( B \) or 2. \( K = P \) and \( {f}_{0} : P \rightarrow B \) is defined by \[ {f}_{0}\left( p\right) = {\left\lbrack p\right\rbrack }^{-0},\;p \in P. \] In case 1 \[ {F}_{0} = \left\{ {b \in B \mid h\left( {{f}_{0}\left( b\right) }\right) = 1}\right\} \] \[ = \{ b \in B \mid h\left( b\right) = 1\} \] \[ = F\text{.} \] In case 2 \[ {F}_{0} = \left\{ {p \in P \mid h\left( {\left\lbrack p\right\rbrack }^{-0}\right) = 1}\right\} \] \[ = \left\{ {p \in P \mid {\left\lbrack p\right\rbrack }^{-0} \in F}\right\} \] \[ = G\text{.} \] Since \( h, F, G \) are obtainable from each other in a simple way, we have in both cases \( M\left\lbrack {F}_{0}\right\rbrack = M\left\lbrack h\right\rbrack = M\left\lbrack G\right\rbrack = M\left\lbrack F\right\rbrack \) and Theorem 10.1 follows. Theorem 10.1. If \( G \) is \( \mathbf{P} \) -generic over \( M \) then \( M\left\lbrack G\right\rbrack \) is a standard transitive model of \( {ZF} \) that has the same order type as \( M \) . 
For any formula \( \varphi \) of \( {\mathcal{L}}_{0} \) \[ M\left\lbrack G\right\rbrack \vDash \varphi \leftrightarrow h\left( {\llbracket \varphi \rrbracket }\right) = 1 \] \[ \leftrightarrow \llbracket \varphi \rrbracket \in F \] \[ \leftrightarrow \llbracket \varphi \rrbracket \cap G \neq 0. \] Furthermore if \( M \) satisfies the \( {AC} \) so does \( M\left\lbrack G\right\rbrack \) . Remark. \( \llbracket \varphi \rrbracket = b \) is definable in \( {\mathcal{L}}_{0} \) from \( \mathbf{P} \) and \( \ulcorner \varphi \urcorner \) (uniformly in \( \ulcorner \varphi \urcorner \) if \( \varphi \) ranges over limited formulas only). Definition 10.2. If \( p \in P \) and \( \varphi \) is a formula, limited or unlimited, then \[ p \Vdash \varphi \overset{\Delta }{ \leftrightarrow }p \in \llbracket \varphi \rrbracket . \] Theorem 10.3. \( p \Vdash \varphi \) is definable in \( {\mathcal{L}}_{0} \) from \( \ulcorner \varphi \urcorner, b \) and \( \mathbf{P} \) (uniformly in \( \ulcorner \varphi \urcorner \) if \( \varphi \) ranges over limited formulas only). Remark. As can be seen from Theorem 10.1 there is a close relationship between satisfaction in \( M\left\lbrack G\right\rbrack \) and the notion of forcing. In particular the forcing relation satisfies certain recursive conditions similar to the notion of satisfaction in \( M\left\lbrack G\right\rbrack \) : Theorem 10.4. Let \( k,{k}_{1},{k}_{2} \in V \) and \( t,{t}_{1},{t}_{2} \) be constant terms. 1. \( p \Vdash \neg \varphi \leftrightarrow \left( {\forall q \leq p}\right) \neg \left( {q \Vdash \varphi }\right) \) . 2. \( p \Vdash {\varphi }_{1} \land {\varphi }_{2} \leftrightarrow p \Vdash {\varphi }_{1} \land p \Vdash {\varphi }_{2} \) . 3. \( p \Vdash \left( {\forall x}\right) \varphi \left( x\right) \leftrightarrow \left( {\forall t \in T}\right) \left\lbrack {p \Vdash \varphi \left( t\right) }\right\rbrack \) . 4.
\( p \Vdash \left( {\forall {x}^{\alpha }}\right) \varphi \left( {x}^{\alpha }\right) \leftrightarrow \left( {\forall q \leq p}\right) \left( {\exists {q}^{\prime } \leq q}\right) \left( {\forall t \in {T}_{\alpha }}\right) \left\lbrack {{q}^{\prime } \Vdash \varphi \left( t\right) }\right\rbrack \) . 5. \( p \Vdash V\left( t\right) \leftrightarrow \left( {\forall q \leq p}\right) \left( {\exists {q}^{\prime } \leq q}\right) \left( {\exists k}\right) \left\lbrack {{q}^{\prime } \Vdash t = \underline{k}}\right\rbrack \) in particular \( p \Vdash V\left( \underline{k}\right) \) . 6. \( p \Vdash F\left( t\right) \leftrightarrow \left( {\forall q \leq p}\right) \left( {\exists {q}^{\prime } \leq q}\right) \left( {\exi
1329_[肖梁] Abstract Algebra (2022F)
Definition 9.2.1
Definition 9.2.1. Let \( \rho : G \rightarrow \mathrm{{GL}}\left( V\right) \) be a linear representation of \( G \) . A subrepresenta-tion is a \( \mathbb{C} \) -linear subspace \( W \subseteq V \) that is "stable under \( G \) -action", i.e. \[ \forall g \in G,\rho \left( g\right) \left( W\right) \subseteq W. \] This then induces a representation \( {\rho }_{W} : G \rightarrow \mathrm{{GL}}\left( W\right) \) . Example 9.2.2. Assume that \( G \) is finite. Consider the regular representation in Example 9.1.3(2). Then the subspace \[ W = \left\{ {\left. {a \cdot \mathop{\sum }\limits_{{h \in G}}\left\lbrack h\right\rbrack }\right| \;a \in \mathbb{C}}\right\} \subseteq \mathbb{C}\left\lbrack G\right\rbrack \] is one-dimensional, and can be seen to be stable under \( G \) -action: for \( g \in G \) , \[ \rho \left( g\right) \left( {a \cdot \mathop{\sum }\limits_{{h \in G}}\left\lbrack h\right\rbrack }\right) = a \cdot \mathop{\sum }\limits_{{h \in G}}\left\lbrack {gh}\right\rbrack = a \cdot \mathop{\sum }\limits_{{h \in G}}\left\lbrack h\right\rbrack \in W. \] The corresponding representation \( {\rho }_{W} : G \rightarrow \mathrm{{GL}}\left( W\right) = {\mathbb{C}}^{ \times } \) is the trivial representation \( {\rho }_{W}\left( g\right) = 1 \) for all \( g \in G \) . Example 9.2.3. If \( \phi : V \rightarrow W \) is a homomorphism of linear representations of \( G \), then \( \ker \left( \phi \right) = \{ v \in V,\phi \left( v\right) = 0\} \) is a subrepresentation of \( V \) . Definition 9.2.4. Let \( \left( {\rho, W}\right) \) and \( \left( {{\rho }^{\prime },{W}^{\prime }}\right) \) be linear representations of \( G \) . 
Then we can form their direct sum \[ {\rho }^{\prime \prime } = \rho \oplus {\rho }^{\prime } : G \rightarrow \mathrm{{GL}}\left( {W \oplus {W}^{\prime }}\right) \] \[ g \mapsto \left( \begin{matrix} \rho \left( g\right) & 0 \\ 0 & {\rho }^{\prime }\left( g\right) \end{matrix}\right) \] or in terms of \( G \) -action: for \( g \in G, w \in W \), and \( {w}^{\prime } \in {W}^{\prime } \) , \[ {\rho }^{\prime \prime }\left( g\right) \left( {w,{w}^{\prime }}\right) = \left( {\rho \left( g\right) \left( w\right) ,{\rho }^{\prime }\left( g\right) \left( {w}^{\prime }\right) }\right) . \] Theorem 9.2.5. Let \( G \) be a finite group. If \( W \subseteq V \) is a subrepresentation of \( G \), then there exists a subrepresentation \( {W}^{ \circ } \subseteq V \) such that \[ V = W \oplus {W}^{ \circ } \] We call \( {W}^{ \circ } \) a complementary representation of \( W \) in \( V \) . We introduce an interesting construction before proving the theorem; this construction will be used frequently in the study of representation theory of finite groups. Construction 9.2.6. It is essential to assume that \( G \) is finite. Let \( \left( {{\rho }_{1},{W}_{1}}\right) \) and \( \left( {{\rho }_{2},{W}_{2}}\right) \) be two linear representations of a finite group \( G \) . If \( \phi : {W}_{1} \rightarrow {W}_{2} \) is a homomorphism of representations of \( G \), then for any \( g \in G,{\rho }_{2}\left( g\right) \circ \phi \circ {\rho }_{1}{\left( g\right) }^{-1} = \phi \) . Now if we start with just a \( \mathbb{C} \) -linear map \( \phi : {W}_{1} \rightarrow {W}_{2} \) (not necessarily respecting the \( G \) -action), we can define a homomorphism of \( G \) -representations by "averaging over \( G \) " as follows. \[ \widetilde{\phi } \mathrel{\text{:=}} \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}{\rho }_{2}\left( g\right) \circ \phi \circ {\rho }_{1}{\left( g\right) }^{-1} \in {\operatorname{Hom}}_{\mathbb{C}}\left( {{W}_{1},{W}_{2}}\right) .
\] Let us check the properties for being a homomorphism: for \( h \in G \) , \[ \widetilde{\phi } \circ {\rho }_{1}\left( h\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}{\rho }_{2}\left( g\right) \circ \phi \circ {\rho }_{1}\left( {{g}^{-1}h}\right) \] \[ \overset{g = {hk}}{ = }\frac{1}{\left| G\right| }\mathop{\sum }\limits_{{k \in G}}{\rho }_{2}\left( {hk}\right) \circ \phi \circ {\rho }_{1}\left( {k}^{-1}\right) = {\rho }_{2}\left( h\right) \circ \widetilde{\phi }. \] We also remark that if \( \phi \) is already a homomorphism of representations, then \[ \widetilde{\phi } = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}{\rho }_{2}\left( g\right) \circ \phi \circ {\rho }_{1}{\left( g\right) }^{-1} = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\phi = \phi . \] 9.2.7. Proof of Theorem 9.2.5. Pick an arbitrary subspace \( {W}^{\prime } \subseteq V \) such that \( V = W \oplus {W}^{\prime } \) as \( \mathbb{C} \) -vector spaces. This is equivalent to giving a map \( \operatorname{pr} : V \rightarrow W \) (projection along \( {W}^{\prime } \) ) such that \( {\left. \operatorname{pr}\right| }_{W} : W \rightarrow W \) is the identity map, and ker pr recovers the subspace \( {W}^{\prime } \) . (But this is in general not a homomorphism of \( G \) -representations, so \( \ker \left( \mathrm{{pr}}\right) \) need not be a subrepresentation.) Now we apply the "averaging construction" in Construction 9.2.6 to \( \operatorname{pr} : V \rightarrow W \) to get a \( G \) -homomorphism \[ \widetilde{\operatorname{pr}} \mathrel{\text{:=}} \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\rho \left( g\right) \circ \operatorname{pr} \circ \rho {\left( g\right) }^{-1} : V \rightarrow W.
\] We check that for \( w \in W \) , \[ \widetilde{\operatorname{pr}}\left( w\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\rho \left( g\right) \circ \underset{\text{same as }\rho \left( {g}^{-1}\right) \left( w\right) }{\underbrace{\operatorname{pr} \circ \underset{\text{still in }W}{\underbrace{\rho {\left( g\right) }^{-1}\left( w\right) }}}} = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}w = w. \] So \( \widetilde{\mathrm{{pr}}} \) is another projection to \( W \), respecting the structure of \( G \) -representations. Thus \( \ker \left( \widetilde{\mathrm{{pr}}}\right) \) is the required complement of \( W \) in \( V \), stable under \( G \) . Definition 9.2.8. Let \( G \) be a group. A linear representation \( V \) is called irreducible if the only subrepresentations of \( V \) are \( \{ 0\} \) and \( V \) . We write \( \operatorname{Irr}\left( G\right) \) for the set of irreducible representations (up to isomorphism). Corollary 9.2.9. If \( G \) is finite, then every finite-dimensional representation \( V \) is "completely reducible", i.e. is a direct sum of irreducible representations \( \left( {9.2.9.1}\right) \) \[ V \simeq {W}_{1} \oplus {W}_{2} \oplus \cdots \oplus {W}_{k} \] Proof. This follows from repeatedly applying Theorem 9.2.5. Remark 9.2.10. The decomposition (9.2.9.1) is not unique, but the multiplicity of each irreducible factor is unique. ## 9.3. Tensor product representations and dual representations. 9.3.1. Recollection of tensor product of vector spaces. For two \( \mathbb{C} \) -vector spaces \( V \) and \( W \) with basis \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{m}}\right\} \) and \( \left\{ {{f}_{1},{f}_{2},\ldots ,{f}_{n}}\right\} \), the tensor product \( V \otimes W \) is the vector space with basis \( \left\{ {{e}_{i} \otimes {f}_{j} \mid 1 \leq i \leq m,1 \leq j \leq n}\right\} \) .
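Numerically, expanding a tensor of two vectors in these bases is just the Kronecker product of their coefficient vectors; a quick sketch (the particular vectors are our own example):

```python
import numpy as np

# v = 3 e_1 - e_2 and w = f_1 + 2 f_2, as coefficient vectors in the chosen bases
v = np.array([3, -1])
w = np.array([1, 2])

# Coefficients of v ⊗ w in the basis e_1⊗f_1, e_1⊗f_2, e_2⊗f_1, e_2⊗f_2:
print(np.kron(v, w))  # [ 3  6 -1 -2]
```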
For example, we compute \[ \left( {{e}_{1} + 2{e}_{2}}\right) \otimes \left( {2{f}_{1} - 3{f}_{2}}\right) = 2{e}_{1} \otimes {f}_{1} - 3{e}_{1} \otimes {f}_{2} + 4{e}_{2} \otimes {f}_{1} - 6{e}_{2} \otimes {f}_{2}. \] Definition 9.3.2. If \( \left( {{\rho }_{1}, V}\right) \) and \( \left( {{\rho }_{2}, W}\right) \) are two representations of \( G \), their tensor product is \( V \otimes W \), on which \( G \) acts as \[ \rho \left( g\right) \left( {v \otimes w}\right) \mathrel{\text{:=}} {\rho }_{1}\left( g\right) \left( v\right) \otimes {\rho }_{2}\left( g\right) \left( w\right) . \] 9.3.3. Dual vector space. For a \( \mathbb{C} \) -vector space \( V \), set \( {V}^{ * } \mathrel{\text{:=}} {\operatorname{Hom}}_{\mathbb{C}}\left( {V,\mathbb{C}}\right) \) ; it is the dual vector space. For each \( {v}^{ * } \in {V}^{ * } \) and \( v \in V \), we may evaluate \( {v}^{ * } \) at \( v \) as \( {v}^{ * }\left( v\right) \) . In other terms, we may rewrite this as a natural pairing \( \left( {9.3.3.1}\right) \) \[ {V}^{ * } \times V \rightarrow \mathbb{C} \] \[ \left( {{v}^{ * }, v}\right) \mapsto {v}^{ * }\left( v\right) \] Definition 9.3.4. If \( \left( {\rho, V}\right) \) is a representation of \( G \), we define its dual representation or contragredient representation to be \( {V}^{ * } \mathrel{\text{:=}} {\operatorname{Hom}}_{\mathbb{C}}\left( {V,\mathbb{C}}\right) \), with the \( G \) -action given by, for \( {v}^{ * } \in {V}^{ * } \) and \( v \in V \) , (9.3.4.1) \[ \left( {{\rho }^{ * }\left( g\right) \left( {v}^{ * }\right) }\right) \left( v\right) \mathrel{\text{:=}} {v}^{ * }\left( {\rho \left( {g}^{-1}\right) \left( v\right) }\right) . \] In terms of (9.3.3.1), the action in (9.3.4.1) is given by \[ \left\langle {{\rho }^{ * }\left( g\right) \left( {v}^{ * }\right), v}\right\rangle = \left\langle {{v}^{ * },\rho \left( {g}^{-1}\right) \left( v\right) }\right\rangle . \] Remark 9.3.5.
We explain why we use \( {g}^{-1} \) in (9.3.4.1) of the dual representation. This is because we need \( {\rho }^{ * }\left( {gh}\right) = {\rho }^{ * }\left( g\right) {\rho }^{ * }\left( h\right) \) . We check: \( \left( {{\rho }^{ * }\left( g\right) \left( {{\rho }^{ * }\left( h\right) \left( {v}^{ * }\right) }\right) }\right) \left( v\right) = \left( {{\rho }^{ * }\left( h\right) \left( {v}^{ * }\right) }\right) \left( {\rho \left( {g}^{-1}\right) \left( v\right) }\right) = {v}^{ * }\left( {\rho \left( {h}^{-1}\right) \left( {\rho \left( {g}^{-1}\right) \left( v\right) }\right) }\right) = {v}^{ * }\left( {\rho \left( {\left( {gh}\right) }^{-1}\right) \left( v\right) }\right) . \) Remark 9.3.6. In terms of matrix representations, when fixing an isomorphism \( V \simeq {\mathbb{C}}^{n} \),
18_Algebra Chapter 0
Definition 4.8
Definition 4.8. Let \( \mathrm{C} \) be a category. A morphism \( f \in {\operatorname{Hom}}_{\mathrm{C}}\left( {A, B}\right) \) is an epimorphism if the following holds: for all objects \( Z \) of \( \mathrm{C} \) and all morphisms \( {\beta }^{\prime },{\beta }^{\prime \prime } \in {\operatorname{Hom}}_{\mathrm{C}}\left( {B, Z}\right) \) , \[ {\beta }^{\prime } \circ f = {\beta }^{\prime \prime } \circ f \Rightarrow {\beta }^{\prime } = {\beta }^{\prime \prime }. \] Example 4.9. As proven in Proposition 2.3, in the category Set the monomorphisms are precisely the injective functions. The reader should have by now checked that, likewise, in Set the epimorphisms are precisely the surjective functions (cf. Exercise 2.5). Thus, while the definitions given in §2.6 may have looked counterintuitive at first, they work as natural 'categorical counterparts' of the ordinary notions of injective/surjective functions. Example 4.10. In the categories of Example 3.3, every morphism is both a monomorphism and an epimorphism. Indeed, recall that there is at most one morphism between any two objects in these categories; hence the conditions defining monomorphisms and epimorphisms are vacuous. Contemplating Example 4.10 reveals a few unexpected twists in these definitions, which defy our intuition as set-theorists. For instance, in Set, a function is an isomorphism if and only if it is both injective and surjective, hence if and only if it is both a monomorphism and an epimorphism. But in the category defined by \( \leq \) on \( \mathbb{Z} \), every morphism is both a monomorphism and an epimorphism, while the only isomorphisms are the identities (Example 4.5). Thus this property is a special feature of Set, and we should not expect it to hold automatically in every category; it will not hold in the category Ring of rings (cf. §III.2.3). It will hold in every abelian category (of which Set is not an example!), but that is a story for a very distant future (Lemma IX.1.9).
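These Set characterizations can be verified by brute force on small finite sets. A sketch (the sets, the test object \( Z \), and the functions are our own toy choices; for maps between these finite sets a fixed two-point test object suffices):

```python
from itertools import product

Z = (0, 1)  # a fixed two-point test object (enough for these finite sets)

def all_maps(dom, cod):
    """All functions dom -> cod, represented as dicts."""
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def is_epi(f, dom, cod):
    # beta1 ∘ f = beta2 ∘ f  forces  beta1 = beta2   (beta_i : cod -> Z)
    return all(b1 == b2
               for b1, b2 in product(all_maps(cod, Z), repeat=2)
               if all(b1[f[a]] == b2[f[a]] for a in dom))

def is_mono(f, dom, cod):
    # f ∘ a1 = f ∘ a2  forces  a1 = a2               (a_i : Z -> dom)
    return all(a1 == a2
               for a1, a2 in product(all_maps(Z, dom), repeat=2)
               if all(f[a1[z]] == f[a2[z]] for z in Z))

A, B = (0, 1, 2), ("x", "y")
surj = {0: "x", 1: "y", 2: "x"}   # surjective but not injective, A -> B
inj = {"x": 0, "y": 2}            # injective but not surjective, B -> A

print(is_epi(surj, A, B), is_mono(surj, A, B))   # True False
print(is_epi(inj, B, A), is_mono(inj, B, A))     # False True
```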
Similarly, in Set a function is an epimorphism, that is, surjective, if and only if it has a right-inverse (Proposition 2.1); this may fail in general, even in respectable categories such as the category Grp of groups (cf. Exercise II.8.24). ## Exercises 4.1. \( \vartriangleright \) Composition is defined for two morphisms. If more than two morphisms are given, e.g., \[ A\overset{f}{ \rightarrow }B\overset{g}{ \rightarrow }C\overset{h}{ \rightarrow }D\overset{i}{ \rightarrow }E, \] then one may compose them in several ways, for example: \[ \left( {ih}\right) \left( {gf}\right) ,\;\left( {i\left( {hg}\right) }\right) f,\;i\left( {\left( {hg}\right) f}\right) ,\;\text{ etc. } \] so that at every step one is only composing two morphisms. Prove that the result of any such nested composition is independent of the placement of the parentheses. (Hint: Use induction on \( n \) to show that any such choice for \( {f}_{n}{f}_{n - 1}\cdots {f}_{1} \) equals \[ \left( {\left( {\cdots \left( {{f}_{n}{f}_{n - 1}}\right) {f}_{n - 2}\cdots }\right) {f}_{1}}\right) . \] Carefully working out the case \( n = 5 \) is helpful.) [4.1,111.3] 4.2. \( \vartriangleright \) In Example 3.3 we have seen how to construct a category from a set endowed with a relation, provided this latter is reflexive and transitive. For what types of relations is the corresponding category a groupoid (cf. Example 4.6)? [4.1] 4.3. Let \( A, B \) be objects of a category \( \mathrm{C} \), and let \( f \in {\operatorname{Hom}}_{\mathrm{C}}\left( {A, B}\right) \) be a morphism. - Prove that if \( f \) has a right-inverse, then \( f \) is an epimorphism. - Show that the converse does not hold, by giving an explicit example of a category and an epimorphism without a right-inverse. 4.4. Prove that the composition of two monomorphisms is a monomorphism.
Deduce that one can define a subcategory \( {\mathrm{C}}_{\text{mono }} \) of a category \( \mathrm{C} \) by taking the same objects as in \( \mathrm{C} \) and defining \( {\operatorname{Hom}}_{{\mathrm{C}}_{\text{mono }}}\left( {A, B}\right) \) to be the subset of \( {\operatorname{Hom}}_{\mathrm{C}}\left( {A, B}\right) \) consisting of monomorphisms, for all objects \( A, B \). (Cf. Exercise 3.8; of course, in general \( {\mathrm{C}}_{\text{mono }} \) is not full in \( \mathrm{C} \).) Do the same for epimorphisms. Can you define a subcategory \( {\mathrm{C}}_{\text{nonmono }} \) of \( \mathrm{C} \) by restricting to morphisms that are not monomorphisms? 4.5. Give a concrete description of monomorphisms and epimorphisms in the category MSet you constructed in Exercise 3.9. (Your answer will depend on the notion of morphism you defined in that exercise!) ## 5. Universal properties The 'abstract' examples in [3] may have left the reader with the impression that one can produce at will a large number of minute variations of the same basic ideas, without really breaking any new ground. This may be fun in itself, but why do we really want to explore this territory? Categories offer a rich unifying language, giving us a bird's eye view of many constructions in algebra (and other fields). In this course, this will be most apparent in the steady appearance of constructions satisfying suitable universal properties. For instance, we will see in a moment that products and disjoint unions (as reviewed in [1.3] and following) are characterized by certain universal properties having to do with the categories \( {\mathrm{C}}_{A, B} \) and \( {\mathrm{C}}^{A, B} \) considered in Example 3.9. Many of the concepts introduced in this course will have an explicit description (such as the definition of product of sets given in §1.4) and an accompanying description in terms of a universal property (such as the one we will see in §5.4).
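The universal property of products previewed here can already be made concrete in Set: for small finite sets one can enumerate all candidate maps and check existence and uniqueness directly. A sketch (the sets are chosen arbitrarily for illustration):

```python
from itertools import product as cartesian

A, B, Z = (0, 1), ("x", "y"), (0, 1, 2)
AxB = tuple(cartesian(A, B))            # the usual explicit product
pA = {ab: ab[0] for ab in AxB}          # projection A x B -> A
pB = {ab: ab[1] for ab in AxB}          # projection A x B -> B

def functions(dom, cod):
    """All set-functions dom -> cod, as dicts."""
    for values in cartesian(cod, repeat=len(dom)):
        yield dict(zip(dom, values))

# For every pair f: Z -> A, g: Z -> B there is a *unique* h: Z -> A x B
# with pA . h = f and pB . h = g.
for f in functions(Z, A):
    for g in functions(Z, B):
        matches = [h for h in functions(Z, AxB)
                   if all(pA[h[z]] == f[z] and pB[h[z]] == g[z] for z in Z)]
        assert len(matches) == 1
print("universal property of A x B verified on these sets")
```

Of course the exhaustive search is only feasible on tiny sets; the point of the universal property is precisely that no such verification is needed once it is proved once and for all.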
The 'explicit' description may be very useful in concrete computations or arguments, but as a rule it is the universal property that clarifies the true nature of the construction. In some cases (such as for the disjoint union) the explicit description may turn out to depend on a seemingly arbitrary choice, while the universal property will have no element of arbitrariness. In fact, viewing the construction in terms of its corresponding universal property clarifies why one can only expect it to be defined 'up to isomorphism'. Also, deeper relationships become apparent when the constructions are viewed in terms of their universal properties. For example, we will see that products of sets and disjoint unions of sets are really 'mirror' constructions (in the sense that reversing arrows transforms the universal property for one into that for the other). This is not so clear (to this writer, anyway) from the explicit descriptions in [1.4]. ## 5.1. Initial and final objects. Definition 5.1. Let \( \mathrm{C} \) be a category. We say that an object \( I \) of \( \mathrm{C} \) is initial in \( \mathrm{C} \) if for every object \( A \) of \( \mathrm{C} \) there exists exactly one morphism \( I \rightarrow A \) in \( \mathrm{C} \) : \[ \forall A \in \operatorname{Obj}\left( \mathrm{C}\right) : \;{\operatorname{Hom}}_{\mathrm{C}}\left( {I, A}\right) \text{ is a singleton. } \] We say that an object \( F \) of \( \mathrm{C} \) is final in \( \mathrm{C} \) if for every object \( A \) of \( \mathrm{C} \) there exists exactly one morphism \( A \rightarrow F \) in \( \mathrm{C} \) : \[ \forall A \in \operatorname{Obj}\left( \mathrm{C}\right) : \;{\operatorname{Hom}}_{\mathrm{C}}\left( {A, F}\right) \text{ is a singleton. } \] One may use terminal to denote either possibility, but in general I would advise the reader to be explicit about which ’end’ of \( \mathrm{C} \) one is considering. A category need not have initial or final objects, as the following example shows. Example 5.2.
The category obtained by endowing \( \mathbb{Z} \) with the relation \( \leq \) (see Example 3.3) has no initial or final object. Indeed, an initial object in this category would be an integer \( i \) such that \( i \leq a \) for all integers \( a \) ; there is no such integer. Similarly, a final object would be an integer \( f \) larger than every integer, and there is no such thing. By contrast, the category considered in Example 3.6 does have a final object, namely the pair \( \left( {3,3}\right) \) ; it still has no initial object. Also, initial and final objects, when they exist, may or may not be unique: Example 5.3. In Set, the empty set \( \varnothing \) is initial (the ’empty graph’ defines the unique function from \( \varnothing \) to any given object!), and clearly it is the unique set that fits this requirement (Exercise 5.2). Set also has final objects: for every set \( A \), there is a unique function from \( A \) to a singleton \( \{ p\} \) (that is, the ’constant’ function). Every singleton is final in Set; thus, final objects are not unique in this category. However, I claim that if initial/final objects exist, then they are unique up to a unique isomorphism. I will invoke this fact frequently, so here is its official statement and its (immediate) proof: Proposition 5.4. Let \( \mathrm{C} \) be a category. - If \( {I}_{1},{I}_{2} \) are both initial objects in \( \mathrm{C} \), then \( {I}_{1} \cong {I}_{2} \) . - If \( {F}_{1},{F}_{2} \) are both final objects in \( \mathrm{C} \), then \( {F}_{1} \cong {F}_{2} \) . Further, these isomorphisms are uniquely determined. Proof. Recall that (by definition of category!) for every object \( A \) of \( \mathrm{C} \) there is at least one element in \( {\operatorname{Hom}}_{\mathrm{C}}\left( {A, A}\right) \), namely the identity \( {1}_{A} \) . If \( I \) is initial, then there is a unique morphism \( I \rightarrow I \), which therefore must be the identity \( {1}_{I} \) . 
Now assume \( {I}_{1} \) and \( {I}_{2} \) are both initial in \( \mathrm{C} \) . Since \( {I}_{1} \) is initial, there is a unique morphism \( f : {I}_{1} \rightarrow {I}_{2} \) in \( \mathrm{C} \)
Algebra Chapter 0
Definition 1.11. The normalizer \( {N}_{G}\left( A\right) \) of \( A \) is its stabilizer under conjugation. The centralizer of \( A \) is the subgroup \( {Z}_{G}\left( A\right) \subseteq {N}_{G}\left( A\right) \) fixing each element of \( A \). Thus, \( g \in {N}_{G}\left( A\right) \) if and only if \( {gA}{g}^{-1} = A \), and \( g \in {Z}_{G}\left( A\right) \) if and only if \( \forall a \in A,{ga}{g}^{-1} = a. \) For \( A = \{ a\} \) a singleton, we have \( {N}_{G}\left( {\{ a\} }\right) = {Z}_{G}\left( {\{ a\} }\right) = {Z}_{G}\left( a\right) \). In general, \( {Z}_{G}\left( A\right) \varsubsetneq {N}_{G}\left( A\right) \). If \( H \) is a subgroup of \( G \), every conjugate \( {gH}{g}^{-1} \) of \( H \) is also a subgroup of \( G \); conjugate subgroups have the same order. \( {}^{3} \) If \( A \) is finite (but not in general), this condition is equivalent to \( {gA}{g}^{-1} \subseteq A \). Remark 1.12. The definition implies immediately that \( H \subseteq {N}_{G}\left( H\right) \) and that \( H \) is normal in \( G \) if and only if \( {N}_{G}\left( H\right) = G \). More generally, the normalizer \( {N}_{G}\left( H\right) \) of \( H \) in \( G \) is (clearly) the largest subgroup of \( G \) in which \( H \) is normal. One could apply Proposition 1.1 to the conjugation action on subsets or subgroups; however, there are too many subsets, and one has little control over the number of subgroups. Other numerical considerations involving the number of conjugates of a given subset or subgroup may be very useful. Lemma 1.13. Let \( H \subseteq G \) be a subgroup. Then (if finite) the number of subgroups conjugate to \( H \) equals the index \( \left\lbrack {G : {N}_{G}\left( H\right) }\right\rbrack \) of the normalizer of \( H \) in \( G \). Proof. This is again an immediate consequence of Proposition II.9.9. Corollary 1.14.
If \( \left\lbrack {G : H}\right\rbrack \) is finite, then the number of subgroups conjugate to \( H \) is finite and divides \( \left\lbrack {G : H}\right\rbrack \). Proof. \[ \left\lbrack {G : H}\right\rbrack = \left\lbrack {G : {N}_{G}\left( H\right) }\right\rbrack \cdot \left\lbrack {{N}_{G}\left( H\right) : H}\right\rbrack \] (cf. §II.8.5). One of the celebrated Sylow theorems will strengthen this statement substantially in the case in which \( H \) is a maximal \( p \)-group contained in a finite group \( G \). For a statement concerning the size of the normalizer of an arbitrary \( p \)-subgroup of a group, see Lemma 2.9. Another useful numerical tool is the observation that if \( H \) and \( K \) are subgroups of a group \( G \) and \( H \subseteq {N}_{G}\left( K\right) \) - so that \( {gK}{g}^{-1} = K \) for all \( g \in H \) - then conjugation by \( g \in H \) gives an automorphism of \( K \). Indeed, I have already observed that conjugation is a bijection, and it is immediate to see that it is a homomorphism: \( \forall {k}_{1},{k}_{2} \in K \) \[ \left( {g{k}_{1}{g}^{-1}}\right) \left( {g{k}_{2}{g}^{-1}}\right) = g{k}_{1}\left( {{g}^{-1}g}\right) {k}_{2}{g}^{-1} = g\left( {{k}_{1}{k}_{2}}\right) {g}^{-1}. \] Thus, conjugation gives a set-function \[ \gamma : H \rightarrow {\operatorname{Aut}}_{\mathrm{{Grp}}}\left( K\right) \] The reader will check that this is a group homomorphism and will determine ker \( \gamma \) (Exercise 1.21). This is especially useful if \( H \) is finite and some information is available concerning \( {\operatorname{Aut}}_{\mathrm{{Grp}}}\left( K\right) \) (for an example, see Exercise 4.14). A classic application is presented in Exercise 1.22. ## Exercises 1.1. \( \vartriangleright \) Let \( p \) be a prime integer, let \( G \) be a \( p \)-group, and let \( S \) be a set such that \( \left| S\right| ≢ 0{\;\operatorname{mod}\;p} \). If \( G \) acts on \( S \), prove that the action must have fixed points.
[4,1,1, 2.3] 1.2. Find the center of \( {D}_{2n} \). (The answer depends on the parity of \( n \). You have actually done this already: Exercise II.2.7. This time, use a presentation.) 1.3. Prove that the center of \( {S}_{n} \) is trivial for \( n \geq 3 \). (Suppose that \( \sigma \in {S}_{n} \) sends \( a \) to \( b \neq a \), and let \( c \neq a, b \). Let \( \tau \) be the permutation that acts solely by swapping \( b \) and \( c \). Then compare the action of \( {\sigma \tau } \) and \( {\tau \sigma } \) on \( a \).) 1.4. \( \vartriangleright \) Let \( G \) be a group, and let \( N \) be a subgroup of \( Z\left( G\right) \). Prove that \( N \) is normal in \( G \). [§2.2] 1.5. \( \vartriangleright \) Let \( G \) be a group. Prove that \( G/Z\left( G\right) \) is isomorphic to the group \( \operatorname{Inn}\left( G\right) \) of inner automorphisms of \( G \). (Cf. Exercise II.4.8.) Then prove Lemma 1.5 again by using the result of Exercise II.6.7. [1.2] 1.6. \( \vartriangleright \) Let \( p, q \) be prime integers, and let \( G \) be a group of order \( {pq} \). Prove that either \( G \) is commutative or the center of \( G \) is trivial. Conclude (using Corollary 1.9) that every group of order \( {p}^{2} \), for a prime \( p \), is commutative. [4.3] 1.7. Prove or disprove that if \( p \) is prime, then every group of order \( {p}^{3} \) is commutative. 1.8. \( \vartriangleright \) Let \( p \) be a prime number, and let \( G \) be a \( p \)-group: \( \left| G\right| = {p}^{r} \). Prove that \( G \) contains a normal subgroup of order \( {p}^{k} \) for every nonnegative \( k \leq r \). [§2.2] 1.9. \( \neg \) Let \( p \) be a prime number, \( G \) a \( p \)-group, and \( H \) a nontrivial normal subgroup of \( G \). Prove that \( H \cap Z\left( G\right) \neq \{ e\} \). (Hint: Use the class formula.) [3.11] 1.10.
Prove that if \( G \) is a group of odd order and \( g \in G \) is conjugate to \( {g}^{-1} \), then \( g = {e}_{G}. \) 1.11. Let \( G \) be a finite group, and suppose there exist representatives \( {g}_{1},\ldots ,{g}_{r} \) of the \( r \) distinct conjugacy classes in \( G \), such that \( \forall i, j,{g}_{i}{g}_{j} = {g}_{j}{g}_{i} \). Prove that \( G \) is commutative. (Hint: What can you say about the sizes of the conjugacy classes?) 1.12. Verify that the class formula for both \( {D}_{8} \) and \( {Q}_{8} \) (cf. Exercise III.1.12) is \( 8 = 2 + 2 + 2 + 2 \). (Also note that \( {D}_{8} \ncong {Q}_{8} \).) 1.13. \( \vartriangleright \) Let \( G \) be a noncommutative group of order 6. As observed in Example 1.10, \( G \) must have trivial center and exactly two conjugacy classes, of order 2 and 3. - Prove that if every element of a group has order \( \leq 2 \), then the group is commutative. Conclude that \( G \) has an element \( y \) of order 3. - Prove that \( \langle y\rangle \) is normal in \( G \). - Prove that \( \left\lbrack y\right\rbrack \) is the conjugacy class of order 2 and \( \left\lbrack y\right\rbrack = \left\{ {y,{y}^{2}}\right\} \). - Prove that there is an \( x \in G \) such that \( {yx} = x{y}^{2} \). - Prove that \( x \) has order 2. - Prove that \( x \) and \( y \) generate \( G \). - Prove that \( G \cong {S}_{3} \). [§3.3, §2.5] 1.14. Let \( G \) be a group, and assume \( \left\lbrack {G : Z\left( G\right) }\right\rbrack = n \) is finite. Let \( A \subseteq G \) be any subset. Prove that the number of conjugates of \( A \) is at most \( n \). 1.15. Suppose that the class formula for a group \( G \) is \( {60} = 1 + {15} + {20} + {12} + {12} \). Prove that the only normal subgroups of \( G \) are \( \{ e\} \) and \( G \). 1.16. \( \vartriangleright \) Let \( G \) be a finite group, and let \( H \subseteq G \) be a subgroup of index 2.
For \( a \in H \), denote by \( {\left\lbrack a\right\rbrack }_{H} \), resp., \( {\left\lbrack a\right\rbrack }_{G} \), the conjugacy class of \( a \) in \( H \), resp., \( G \). Prove that either \( {\left\lbrack a\right\rbrack }_{H} = {\left\lbrack a\right\rbrack }_{G} \) or \( {\left\lbrack a\right\rbrack }_{H} \) is half the size of \( {\left\lbrack a\right\rbrack }_{G} \), according to whether the centralizer \( {Z}_{G}\left( a\right) \) is not or is contained in \( H \). (Hint: Note that \( H \) is normal in \( G \), by Exercise II.8.2; apply Proposition II.8.11.) [4.4] 1.17. \( \neg \) Let \( H \) be a proper subgroup of a finite group \( G \). Prove that \( G \) is not the union of the conjugates of \( H \). (Hint: You know the number of conjugates of \( H \); keep in mind that any two subgroups overlap, at least at the identity.) [1.18, 1.20] 1.18. Let \( S \) be a set endowed with a transitive action of a finite group \( G \), and assume \( \left| S\right| \geq 2 \). Prove that there exists a \( g \in G \) without fixed points in \( S \), that is, such that \( {gs} \neq s \) for all \( s \in S \). (Hint: By Proposition II.9.9 you may assume \( S = G/H \), with \( H \) proper in \( G \). Use Exercise 1.17.) 1.19. Let \( H \) be a proper subgroup of a finite group \( G \). Prove that there exists a \( g \in G \) whose conjugacy class is disjoint from \( H \). 1.20. Let \( G = {\mathrm{{GL}}}_{2}\left( \mathbb{C}\right) \), and let \( H \) be the subgroup consisting of upper triangular matrices (Exercise II.6.2). Prove that \( G \) is the union of the conjugates of \( H \). Thus, the finiteness hypothesis in Exercise 1.17 is necessary. (Hint: Equivalently, prove that every \( 2 \times 2 \) matrix is conjugate to a matrix in \( H \). You will use the fact that \( \mathbb{C} \) is algebraically closed; see Example III.4.14.) 1.21. \( \vartriangleright \) Let \( H, K \) be subgroups of a group \( G \), with \( H \subseteq {N}_{G}\left( K\right) \).
Verify that the function \( \gamma : H \rightarrow {\operatorname{Aut}}_{\mathrm{{Grp}}}\left( K\right) \) defined by conjugation is a homomorphism of groups and that \( \ker \gamma = H \cap {Z}_{G}\left( K\right) \), where \( {Z}_{G}\left( K\right) \) is the centralizer of \( K \). [4.4, 1.22]
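Exercise 1.21 can be checked by brute force in a small case. The sketch below takes \( G = H = {S}_{3} \) (permutations of \( \{0,1,2\} \) stored as tuples) and \( K = {A}_{3} \), which is normal, so \( H \subseteq {N}_{G}\left( K\right) \) holds automatically; the choice of groups is mine, for illustration only.

```python
from itertools import permutations

def compose(f, g):
    """(f o g)(i) = f(g(i)); permutations stored as tuples."""
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    inv = [0] * len(f)
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

G = set(permutations(range(3)))        # G = S_3
r = (1, 2, 0)                          # a 3-cycle
K = {(0, 1, 2), r, compose(r, r)}      # K = A_3, normal in G
H = G                                  # H is contained in N_G(K) since K is normal

def gamma(g):
    """Conjugation by g, restricted to K (an automorphism of K)."""
    return {k: compose(compose(g, k), inverse(g)) for k in K}

# gamma is a homomorphism: gamma(g1 g2) = gamma(g1) o gamma(g2) on K
for g1 in H:
    for g2 in H:
        lhs = gamma(compose(g1, g2))
        rhs = {k: gamma(g1)[gamma(g2)[k]] for k in K}
        assert lhs == rhs

# ker gamma = H ∩ Z_G(K); here the centralizer of A_3 in S_3 is A_3 itself
ker = {g for g in H if all(gamma(g)[k] == k for k in K)}
assert ker == K
print("gamma is a homomorphism; |ker gamma| =", len(ker))
```

The computation confirms that the transpositions act as the nontrivial automorphism of \( {A}_{3} \), while \( {A}_{3} \), being abelian, centralizes itself.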
Applied Stochastic Analysis (Weinan E, Tiejun Li, Eric Vanden-Eijnden), GSM 199
Definition 3.19 (Lumpability). A Markov chain is said to be lumpable with respect to the partition \( S = \mathop{\bigcup }\limits_{{k = 1}}^{K}{C}_{k} \) if for every initial distribution \( {\mathbf{\mu }}_{0} \) the lumped process defined through (3.51) is a Markov chain and the transition probabilities do not depend on the choice of \( {\mathbf{\mu }}_{0} \). The following result gives an equivalent characterization of lumpability [KS76]. Theorem 3.20. A Markov chain with transition matrix \( \mathbf{P} \) is lumpable with respect to the partition \( S = \mathop{\bigcup }\limits_{{k = 1}}^{K}{C}_{k} \) if and only if for any \( k, l \in \{ 1,2,\ldots, K\} \) \[ {p}_{i,{C}_{l}} \mathrel{\text{:=}} \mathop{\sum }\limits_{{j \in {C}_{l}}}{p}_{ij} \] is piecewise constant with respect to the partition \( S = \mathop{\bigcup }\limits_{{k = 1}}^{K}{C}_{k} \). These constants form the components \( \left( {\widehat{p}}_{kl}\right) \) of the lumped transition probability matrix for \( k, l \in \{ 1,2,\ldots, K\} \). A closely related concept is spectral clustering. In this context, one has the following result [MS01, SM00]. Theorem 3.21. Assume that the transition matrix \( \mathbf{P} \) has \( I \) independent eigenvectors. Let \( S = \mathop{\bigcup }\limits_{{k = 1}}^{K}{C}_{k} \) be a partition of \( S \). Then the Markov chain is lumpable with respect to this partition and the corresponding lumped matrix \( {\left\{ {\widehat{p}}_{kl}\right\} }_{k, l = 1}^{K} \) is nonsingular if and only if \( \mathbf{P} \) has \( K \) eigenvectors that are piecewise constant with respect to the partition and the corresponding eigenvalues are nonzero. For applications of this result to image segmentation, please see [MS01, SM00]. We finish with an approximate coarse-graining procedure for Markov chains [ELVE08]. For this purpose, we define a metric in the space of Markov chains (stochastic matrices) on \( S \) with invariant distribution \( \mathbf{\pi } \).
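The criterion of Theorem 3.20 is easy to test mechanically: aggregate the columns of \( \mathbf{P} \) over each \( {C}_{l} \) and check that the result is constant on each \( {C}_{k} \). A sketch, in which the 4-state chain and the partition are invented for illustration:

```python
import numpy as np

# A 4-state chain, lumpable w.r.t. the partition C_1 = {0,1}, C_2 = {2,3}:
P = np.array([
    [0.1, 0.3, 0.2, 0.4],
    [0.2, 0.2, 0.5, 0.1],
    [0.3, 0.3, 0.1, 0.3],
    [0.4, 0.2, 0.2, 0.2],
])
blocks = [[0, 1], [2, 3]]

def lump(P, blocks):
    """Return the lumped matrix (p^_kl) if P is lumpable, else None."""
    K = len(blocks)
    P_hat = np.empty((K, K))
    for l, Cl in enumerate(blocks):
        agg = P[:, Cl].sum(axis=1)               # p_{i, C_l} for every state i
        for k, Ck in enumerate(blocks):
            vals = agg[Ck]
            if not np.allclose(vals, vals[0]):   # must be constant on C_k
                return None
            P_hat[k, l] = vals[0]
    return P_hat

P_hat = lump(P, blocks)
assert P_hat is not None                         # this chain is lumpable
print(P_hat)                                     # approx [[0.4, 0.6], [0.6, 0.4]]
```

Perturbing any single entry of \( \mathbf{P} \) breaks the constancy and `lump` returns `None`, illustrating that lumpability is a nongeneric property.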
Let \( \mathbf{Q} = {\left( {q}_{ij}\right) }_{i, j \in S} \) be a stochastic matrix with invariant distribution \( \mathbf{\pi } \). We define its norm by (3.52) \[ \parallel \mathbf{Q}{\parallel }_{\mathbf{\pi }}^{2} = \mathop{\sum }\limits_{{i, j \in S}}\frac{{\pi }_{i}}{{\pi }_{j}}{q}_{ij}^{2} \] It is easy to see that if \( \mathbf{Q} \) satisfies the condition of detailed balance, this norm is in fact the sum of the squares of the eigenvalues of \( \mathbf{Q} \). Given a partition of \( S : S = \mathop{\bigcup }\limits_{{k = 1}}^{K}{C}_{k} \) with \( {C}_{k} \cap {C}_{l} = \varnothing \) if \( k \neq l \), let \( \widehat{\mathbf{P}} = {\left( {\widehat{p}}_{kl}\right) }_{k, l = 1}^{K} \) be a stochastic matrix on the state space \( C = \left\{ {{C}_{1},\ldots ,{C}_{K}}\right\} \). This matrix can be naturally lifted to the space of stochastic matrices on the original state space \( S \) by (3.53) \[ {\widetilde{p}}_{ij} = \frac{{\pi }_{j}{\widehat{p}}_{kl}}{\mathop{\sum }\limits_{{m \in {C}_{l}}}{\pi }_{m}}\;\text{ if }i \in {C}_{k}, j \in {C}_{l}. \] Equation (3.53) says that the probability of jumping from any state in \( {C}_{k} \) is the same and the walker enters \( {C}_{l} \) according to the invariant distribution. This is consistent with the idea of coarsening the original dynamics onto the new state space \( C = \left\{ {{C}_{1},\ldots ,{C}_{K}}\right\} \) and ignoring the details of the dynamics within the sets \( {C}_{k} \). Note that \( \widetilde{\mathbf{P}} \) is a stochastic matrix on \( S \) with invariant distribution \( \mathbf{\pi } \) if \( \widehat{\mathbf{P}} \) is a stochastic matrix on \( C \) with invariant distribution \( \widehat{\pi } \). Figure 3.4. Schematics of the network partition and community structure.
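Formula (3.53) translates directly into code. The sketch below (all numbers invented) lifts a 2-state matrix \( \widehat{\mathbf{P}} \) whose invariant distribution matches the block masses of \( \mathbf{\pi } \), and checks that the lifted \( \widetilde{\mathbf{P}} \) is stochastic with invariant distribution \( \mathbf{\pi } \):

```python
import numpy as np

blocks = [[0, 1], [2, 3]]               # partition of S = {0, 1, 2, 3}
pi = np.array([0.1, 0.2, 0.3, 0.4])    # target invariant distribution on S
# pi_hat = (0.3, 0.7); this P_hat leaves pi_hat invariant:
P_hat = np.array([[0.65, 0.35],
                  [0.15, 0.85]])

def lift(P_hat, pi, blocks):
    """Lift via (3.53): p~_ij = pi_j * p^_kl / sum_{m in C_l} pi_m."""
    n = len(pi)
    P_tilde = np.empty((n, n))
    for k, Ck in enumerate(blocks):
        for l, Cl in enumerate(blocks):
            mass = pi[Cl].sum()
            for i in Ck:
                for j in Cl:
                    P_tilde[i, j] = pi[j] * P_hat[k, l] / mass
    return P_tilde

P_tilde = lift(P_hat, pi, blocks)
assert np.allclose(P_tilde.sum(axis=1), 1.0)   # stochastic on S
assert np.allclose(pi @ P_tilde, pi)           # pi is invariant, as claimed
```

Within each block the rows of \( \widetilde{\mathbf{P}} \) are identical, which is exactly the "ignore the internal dynamics" feature described in the text.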
Given a Markov chain \( \mathbf{P} \) and predefined partition \( C \), finding the optimal coarse-grained chain \( \widetilde{\mathbf{P}} \) is a useful problem and can be investigated under the framework of optimal prediction [Cho03] with respect to the norm (3.52): \[ \mathop{\min }\limits_{{\widehat{\mathbf{P}} \cdot \mathbf{1} = \mathbf{1},\widehat{\mathbf{P}} \geq 0}}\parallel \widetilde{\mathbf{P}} - \mathbf{P}{\parallel }_{\mathbf{\pi }} \] One can ask further about the optimal partition, and this can be approached by minimizing the approximation error with respect to all possible partitions. Algebraically, the existence of community structure or clustering property of a network corresponds to the existence of a spectral gap in the eigenvalues of \( \mathbf{P} \); i.e., \( {\lambda }_{k} = 1 - {\eta }_{k}\delta \) for \( k = 0,\ldots, K - 1 \) and \( \left| {\lambda }_{k}\right| < {\lambda }^{ * } < 1 \) for \( k = K,\ldots, I - 1 \), where \( 0 < \delta \ll 1,{\eta }_{k} > 0 \). See [ELVE08] for more details. ## Exercises 3.1. Write down the transition probability matrix \( \mathbf{P} \) of Ehrenfest’s diffusion model and analyze its invariant distribution. 3.2. Suppose \( {\left\{ {X}_{n}\right\} }_{n \in \mathbb{N}} \) is a Markov chain on a finite state space \( S \). Prove that \[ \mathbb{P}\left( {A \cap \left\{ {{X}_{m + 1} = {i}_{m + 1},\ldots ,{X}_{m + n} = {i}_{m + n}}\right\} \mid {X}_{m} = {i}_{m}}\right) \] \[ = \mathbb{P}\left( {A \mid {X}_{m} = {i}_{m}}\right) {\mathbb{P}}^{i}\left( \left\{ {{X}_{1} = {i}_{m + 1},\ldots ,{X}_{n} = {i}_{m + n}}\right\} \right) , \] where the set \( A \) is an arbitrary union of elementary events like \( \left\{ {\left( {{X}_{0},\ldots ,{X}_{m - 1}}\right) = \left( {{i}_{0},\ldots ,{i}_{m - 1}}\right) }\right\} . \) 3.3. Prove Lemma 3.7. 3.4.
Show that the exponent \( s \) in \( {\mathbf{P}}^{s} \) in Theorem 3.11 necessarily satisfies \( s \leq I - 1 \), where \( I = \left| S\right| \) is the number of states of the chain. 3.5. Show that the recurrence relation \( {\mathbf{\mu }}_{n} = {\mathbf{\mu }}_{n - 1}\mathbf{P} \) can be written as \[ {\mu }_{n, i} = {\mu }_{n - 1, i}\left( {1 - \mathop{\sum }\limits_{{j \in S, j \neq i}}{p}_{ij}}\right) + \mathop{\sum }\limits_{{j \in S, j \neq i}}{\mu }_{n - 1, j}{p}_{ji}. \] The first term on the right-hand side gives the total probability of not making a transition from state \( i \), while the last term is the probability of transition from one of states \( j \neq i \) to state \( i \) . 3.6. Prove (3.8) by mathematical induction. 3.7. Derive (3.8) through the moment generating function \[ g\left( {t, z}\right) = \mathop{\sum }\limits_{{m = 0}}^{\infty }{p}_{m}\left( t\right) {z}^{m}. \] 3.8. Let \( f \) be a function defined on the state space \( S \), and let \[ {h}_{i}\left( t\right) = {\mathbb{E}}^{i}f\left( {X}_{t}\right) ,\;i \in S. \] Prove that \( d\mathbf{h}/{dt} = \mathbf{P}\mathbf{h} \) for \( \mathbf{h} = {\left( {h}_{1}\left( t\right) ,\ldots ,{h}_{I}\left( t\right) \right) }^{T} \) . 3.9. Let \( \left\{ {N}_{t}^{1}\right\} \) and \( \left\{ {N}_{t}^{2}\right\} \) be two independent Poisson processes with parameters \( {\lambda }_{1} \) and \( {\lambda }_{2} \), respectively. Prove that \( {N}_{t} = {N}_{t}^{1} + {N}_{t}^{2} \) is a Poisson process with parameter \( {\lambda }_{1} + {\lambda }_{2} \) . 3.10. Let \( \left\{ {N}_{t}\right\} \) be a Poisson process with rate \( \lambda \) and let \( \tau \sim \mathcal{E}\left( \mu \right) \) be an independent random variable. Prove that \[ \mathbb{P}\left( {{N}_{\tau } = m}\right) = {\left( 1 - p\right) }^{m}p,\;m \in \mathbb{N}, \] where \( p = \mu /\left( {\lambda + \mu }\right) \) . 3.11. Let \( \left\{ {N}_{t}\right\} \) be a Poisson process with rate \( \lambda \) . 
Show that the distribution of the arrival time \( {\tau }_{k} \) between \( k \) successive jumps is a gamma distribution with probability density \[ p\left( t\right) = \frac{{\lambda }^{k}{t}^{k - 1}}{\Gamma \left( k\right) }{e}^{-{\lambda t}},\;t > 0, \] where \( \Gamma \left( k\right) = \left( {k - 1}\right) \) ! is the gamma function. 3.12. Consider the Poisson process \( \left\{ {N}_{t}\right\} \) with rate \( \lambda \) . Given that \( {N}_{t} = n \) , prove that the \( n \) arrival times \( {J}_{1},{J}_{2},\ldots ,{J}_{n} \) have the same distribution as the order statistics corresponding to \( n \) independent random variables uniformly distributed in \( \left( {0, t}\right) \) . That is, \( \left( {{J}_{1},\ldots ,{J}_{n}}\right) \) has the probability density \[ p\left( {{x}_{1},{x}_{2},\ldots ,{x}_{n} \mid {N}_{t} = n}\right) = \frac{n!}{{t}^{n}},\;0 < {x}_{1} < \cdots < {x}_{n} < t. \] 3.13. The compound Poisson process \( \left\{ {X}_{t}\right\} \left( {t \in {\mathbb{R}}^{ + }}\right) \) is defined by \[ {X}_{t} = \mathop{\sum }\limits_{{k = 1}}^{{N}_{t}}{Y}_{k} \] where \( \left\{ {N}_{t}\right\} \) is a Poisson process with rate \( \lambda \) and \( \left\{ {Y}_{k}\right\} \) are i.i.d. random variables on \( \mathbb{R} \) with probability density function \( q\left( y\right) \) . Derive the equation for the probability density function \( p\left( {x, t}\right) \) of \( {X}_{t} \) . 3.14. Suppose that \( \left\{ {N}_{t}^{\left( k\right) }\right\} \left( {k = 1,2,\ldots }\right) \) are independent Poisson processes with rate \( {\lambda }_{k} \), respectively, \( \lambda = \mathop{\sum }\limits_{{k = 1}}^{\infty }{\lambda }_{k} < \infty \) . Prove that \( \left\{ {{X}_{t} = \mathop{\sum }\limits_{{k = 1}}^{\infty }k{N}_{t}^{\left( k\right) }}\right\} \) is a compound Poisson process. 3.15. Prove (3.36) and (3.40). 3.16. Consider an irreducible Markov chain \( {\left\{ {X}_{n}\right\} }_{n \in \mathbb{N}} \) on a finite state space \( S \) . 
Let \( H \subset S \) . Define the first passage time \( {T}_{H} = \inf \left\{ {n \mid {X}_{n} \in }\right. \) \( H, n \in \mathbb{N}\} \) and \[ {h}_{i} = {\mathbb{P}}^{i}\left( {{T}_{H} < \infty }\right) ,\;i \in S. \] Prove that \( \mathbf{h} = {\left( {h}_{i}\right) }_{i \in S} \) satisfies the equatio
The Joys of Haar Measure
Definition 4.3.10. For \( x \in {U}_{0} \), we define \( \langle x\rangle = x/\omega \left( x\right) \) and call it the diamond of \( x \) . Proposition 4.3.11. If we are not in the special case \( p = 2 \) and \( K = {\mathbb{Q}}_{2} \) then \( \langle x\rangle \) is the unique element of \( {U}_{1} \) such that \( x/\langle x\rangle \) is an \( \left( {\mathcal{N}\mathfrak{p} - 1}\right) \) st root of unity. On the other hand, if \( p = 2 \) and \( K = {\mathbb{Q}}_{2} \) then \( \langle x\rangle \) is the unique element of \( {U}_{2} \) such that \( x/\langle x\rangle \in \{ \pm 1\} \) . In particular, for any prime number \( p \), if \( K = {\mathbb{Q}}_{p} \) then \( {\exp }_{p}\left( {{\log }_{p}\left( {\langle x\rangle }\right) }\right) = \langle x\rangle \) . Proof. Immediate and left to the reader. ## 4.3.3 Study of the Groups \( {U}_{i} \) We begin with the following result. Proposition 4.3.12. Set \( z\left( \mathfrak{p}\right) = \left\lfloor \frac{e\left( {\mathfrak{p}/p}\right) }{p - 1}\right\rfloor + 1 \) . Then for all \( i \geq z\left( \mathfrak{p}\right) \) the \( \mathfrak{p} \) - adic logarithm and exponential give inverse isomorphisms between the multiplicative group \( {U}_{i} \) and the additive group \( {\mathfrak{p}}^{i}{\mathbb{Z}}_{\mathfrak{p}} \) . In particular, if \( e\left( {\mathfrak{p}/p}\right) < p - 1 \) then \( {U}_{1} \) is isomorphic to \( \mathfrak{p}{\mathbb{Z}}_{\mathfrak{p}} \) . Proof. If \( x \in {U}_{z\left( \mathfrak{p}\right) } \) then \( \left| {x - 1}\right| < 1 \), so that the logarithm converges. Furthermore, writing as usual \( e = e\left( {\mathfrak{p}/p}\right) \), we have \[ {v}_{p}\left( {{\left( x - 1\right) }^{k - 1}/k}\right) = \left( {k - 1}\right) {v}_{p}\left( {x - 1}\right) - {v}_{p}\left( k\right) \geq \left( {k - 1}\right) z\left( \mathfrak{p}\right) /e - {v}_{p}\left( k\right) . \] Now \( {v}_{p}\left( k\right) \leq \log \left( k\right) /\log \left( p\right) \) (the ordinary logarithm here!) 
and \( z\left( \mathfrak{p}\right) \geq \left( {e + 1}\right) /\left( {p - 1}\right) \); hence \[ {v}_{p}\left( \frac{{\left( x - 1\right) }^{k - 1}}{k}\right) \geq \left( {k - 1}\right) \frac{e + 1}{e\left( {p - 1}\right) } - \frac{\log \left( k\right) }{\log \left( p\right) }, \] so that for \( k > 1 \) we have \[ {v}_{p}\left( \frac{{\left( x - 1\right) }^{k - 1}}{k}\right) > \frac{k - 1}{p - 1} - \frac{\log \left( k\right) }{\log \left( p\right) } = \frac{\log \left( k\right) }{p - 1}\left( {\frac{k - 1}{\log \left( k\right) } - \frac{p - 1}{\log \left( p\right) }}\right) . \] Since the function \( \left( {k - 1}\right) /\log \left( k\right) \) is an increasing function of \( k > 1 \), it follows that \( {v}_{p}\left( {{\left( x - 1\right) }^{k - 1}/k}\right) > 0 \) for \( k \geq p \). On the other hand, when \( k < p \) we have \( {v}_{p}\left( k\right) = 0 \), hence \( {v}_{p}\left( {{\left( x - 1\right) }^{k - 1}/k}\right) \geq \left( {k - 1}\right) /e > 0 \) for \( k > 1 \). Thus for each \( k \geq 2 \) the term in \( {\left( x - 1\right) }^{k} \) in the power series expansion of \( {\log }_{p}\left( x\right) \) has a \( p \)-adic valuation strictly greater than that of the term with \( k = 1 \). It follows that \( {v}_{p}\left( {{\log }_{p}\left( x\right) }\right) = {v}_{p}\left( {x - 1}\right) \); hence if \( x \in {U}_{i} \), we have \( {\log }_{p}\left( x\right) \in {\mathfrak{p}}^{i}{\mathbb{Z}}_{\mathfrak{p}} \).
Conversely, if \( y \in {\mathfrak{p}}^{i}{\mathbb{Z}}_{\mathfrak{p}} \) for some \( i \geq z\left( \mathfrak{p}\right) \), then \( {v}_{p}\left( y\right) \geq z\left( \mathfrak{p}\right) /e > 1/\left( {p - 1}\right) \), so that by Proposition 4.2.10 the power series for \( {\exp }_{p}\left( y\right) \) converges, and if \( y = {\log }_{p}\left( x\right) \) we have \( {\exp }_{p}\left( y\right) = x \), and similarly we check that \( {v}_{p}\left( {{\exp }_{p}\left( y\right) - 1}\right) = {v}_{p}\left( y\right) \geq i/e \), so that \( {\log }_{p}\left( {{\exp }_{p}\left( y\right) }\right) = y \), proving the proposition. Corollary 4.3.13. (1) For every \( i \geq 1 \), \( {U}_{i} \) has a natural \( {\mathbb{Z}}_{p} \) -module structure. (2) For \( i \geq z\left( \mathfrak{p}\right) \), \( {U}_{i} \) is a free \( {\mathbb{Z}}_{p} \) -module of dimension \( \left\lbrack {{K}_{\mathfrak{p}} : {\mathbb{Q}}_{p}}\right\rbrack \) . (3) For every \( i \geq 1 \), \( {U}_{i} \) is finitely generated of rank \( \left\lbrack {{K}_{\mathfrak{p}} : {\mathbb{Q}}_{p}}\right\rbrack \), and more precisely \[ {U}_{i} \simeq {\mu }_{\mathfrak{p}, i} \times {\mathbb{Z}}_{p}^{\left\lbrack {K}_{\mathfrak{p}} : {\mathbb{Q}}_{p}\right\rbrack }, \] where \( {\mu }_{\mathfrak{p}, i} \) is the finite cyclic group of roots of unity in \( {K}_{\mathfrak{p}} \) congruent to 1 modulo \( {\mathfrak{p}}^{i} \), and \( \left| {\mu }_{\mathfrak{p}, i}\right| = {p}^{m} \) for some \( m \) such that \( 0 \leq m \leq f\lfloor e/\left( {p - 1}\right) \rfloor \) . Proof. (1). For \( x = 1 + y \in {U}_{i} \) with \( i \geq 1 \) and \( \alpha \in {\mathbb{Z}}_{p} \), we set directly \[ {\left( 1 + y\right) }^{\alpha } = \mathop{\sum }\limits_{{n \geq 0}}\binom{\alpha }{n}{y}^{n}. \] By Corollary 4.2.15 this series converges and we have \( {x}^{\alpha } \equiv 1\;\left( {\operatorname{mod}\;{\mathfrak{p}}^{i}{\mathbb{Z}}_{\mathfrak{p}}}\right) \), so thanks again to the above-mentioned corollary this clearly induces a \( {\mathbb{Z}}_{p} \) -module structure on \( {U}_{i} \) . (2). If \( \pi \) is a uniformizer of \( \mathfrak{p} \), then multiplication by \( {\pi }^{i} \) clearly gives a noncanonical isomorphism between the additive groups \( {\mathbb{Z}}_{\mathfrak{p}} \) and \( {\mathfrak{p}}^{i}{\mathbb{Z}}_{\mathfrak{p}} \) . By Corollary 4.1.27, \( {\mathbb{Z}}_{\mathfrak{p}} \) is a free \( {\mathbb{Z}}_{p} \) -module of dimension \( \left\lbrack {{K}_{\mathfrak{p}} : {\mathbb{Q}}_{p}}\right\rbrack \), proving (2). Note that we have \( {\left( 1 + y\right) }^{\alpha } = {\exp }_{p}\left( {\alpha {\log }_{p}\left( {1 + y}\right) }\right) \), so that the \( {\mathbb{Z}}_{p} \) -structures are the same. (3). We have already proved this for \( i \geq z\left( \mathfrak{p}\right) \) . For \( i < z\left( \mathfrak{p}\right) \), we note that by Lemma 4.3.9, \( {U}_{z\left( \mathfrak{p}\right) } \) has finite index in \( {U}_{i} \) equal to a power of \( p \), hence \( {U}_{i} \) is finitely generated with the same rank. Thus by the structure theorem for finitely generated modules over the principal ideal domain \( {\mathbb{Z}}_{p} \) we deduce that \( {U}_{i} \simeq {T}_{i} \times {\mathbb{Z}}_{p}^{\left\lbrack {K}_{\mathfrak{p}} : {\mathbb{Q}}_{p}\right\rbrack } \), where \( {T}_{i} \) is the finite torsion subgroup of \( {U}_{i} \) . In particular, \( {T}_{i} \) is a finite subgroup of \( {K}_{\mathfrak{p}}^{ * } \) ; hence by Corollary 2.4.3, \( {T}_{i} \) is cyclic, hence contains only roots of unity congruent to 1 modulo \( {\mathfrak{p}}^{i} \) (and conversely such elements evidently belong to \( {T}_{i} \) ). Finally, since \( {T}_{i} \) is a finite \( {\mathbb{Z}}_{p} \) -module, its order must be a power of \( p \) .
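For \( {K}_{\mathfrak{p}} = {\mathbb{Q}}_{p} \) with \( p \) odd, Corollary 4.3.13 gives \( {U}_{1} \simeq {\mathbb{Z}}_{p} \) (free of rank 1, with no torsion since \( f\lfloor e/\left( {p - 1}\right) \rfloor = 0 \)). A finite-level shadow of this, sketched here purely as an illustration, is that \( 1 + p \) generates the 1-units of \( \mathbb{Z}/{p}^{k}\mathbb{Z} \), a cyclic group of order \( {p}^{k - 1} \):

```python
def mult_order(g: int, n: int) -> int:
    """Multiplicative order of g modulo n (g assumed coprime to n)."""
    k, x = 1, g % n
    while x != 1:
        x = (x * g) % n
        k += 1
    return k

p, k = 5, 4                                  # illustrative choices
modulus = p ** k
one_units = [u for u in range(1, modulus) if u % p == 1]
order = mult_order(1 + p, modulus)
print(order, len(one_units))                 # both are p^(k-1) = 125
```

The 1-units mod \( {p}^{k} \) number \( {p}^{k-1} \), and \( 1 + p \) has exactly that order, so it generates them.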
More precisely, since \( {U}_{z\left( \mathfrak{p}\right) } \) is torsion-free and \( \left\lbrack {{U}_{1} : {U}_{z\left( \mathfrak{p}\right) }}\right\rbrack = \mathcal{N}{\mathfrak{p}}^{z\left( \mathfrak{p}\right) - 1} \), a generator \( y \) of the cyclic group \( {T}_{i} \) satisfies \( {y}^{\mathcal{N}{\mathfrak{p}}^{z\left( \mathfrak{p}\right) - 1}} = 1 \), so that \( \left| {T}_{i}\right| \mid \mathcal{N}{\mathfrak{p}}^{z\left( \mathfrak{p}\right) - 1} \) as claimed. Corollary 4.3.14. As usual set \( e = e\left( {\mathfrak{p}/p}\right) \) and \( f = f\left( {\mathfrak{p}/p}\right) \) . As abelian groups we have the isomorphism \[ {K}_{\mathfrak{p}}^{ * } \simeq {\mu }_{\mathfrak{p}}^{\prime } \times \mathbb{Z} \times {\mathbb{Z}}_{p}^{ef}, \] where \( {\mu }_{\mathfrak{p}}^{\prime } \) is a cyclic group such that \[ \left| {\mu }_{\mathfrak{p}}^{\prime }\right| = \left( {\mathcal{N}\mathfrak{p} - 1}\right) {p}^{k}\;\text{ for some }k\text{ such that }\;0 \leq k \leq f\lfloor e/\left( {p - 1}\right) \rfloor . \] If in addition \( \mathfrak{p} \) is above 2 then \( k \geq 1 \) . Proof. By Corollary 4.3.7 and the above corollary we have \( {K}_{\mathfrak{p}}^{ * } \simeq {\mu }_{\mathfrak{p}} \times {\mu }_{\mathfrak{p},1} \times \mathbb{Z} \times {\mathbb{Z}}_{p}^{ef} \) and \( \left| {\mu }_{\mathfrak{p}}\right| = \mathcal{N}\mathfrak{p} - 1 \), while \( \left| {\mu }_{\mathfrak{p},1}\right| \mid \mathcal{N}{\mathfrak{p}}^{\lfloor e/\left( {p - 1}\right) \rfloor } \), so the result follows. If \( \mathfrak{p} \) is above 2 we have \( - 1 \equiv 1\left( {\;\operatorname{mod}\;\mathfrak{p}}\right) \), hence \( - 1 \in {\mu }_{\mathfrak{p},1} \), so \( 2 \mid \left| {\mu }_{\mathfrak{p},1}\right| \) and \( \left| {\mu }_{\mathfrak{p},1}\right| \mid \left| {\mu }_{\mathfrak{p}}^{\prime }\right| \) . Examples.
If \( p \geq 3 \), then \[ {\mathbb{Q}}_{p}^{ * } = {\mu }_{p} \times \left( {1 + p{\mathbb{Z}}_{p}}\right) \times {p}^{\mathbb{Z}}\;\text{ and }\;1 + p{\mathbb{Z}}_{p} = {\left( 1 + p\right) }^{{\mathbb{Z}}_{p}}, \] where as above, \( {\mu }_{p} \) is the group of \( \left( {p - 1}\right) \) st roots of unity in \( {\mathbb{Q}}_{p} \) . If \( p = 2 \), then \[ {\mathbb{Q}}_{2}^{ * } = \{ \pm 1\} \times \left( {1 + 4{\mathbb{Z}}_{2}}\right) \times {2}^{\mathbb{Z}}\;\text{ and }\;1 + 4{\mathbb{Z}}_{2} = {5}^{{\mathbb{Z}}_{2}}. \] ## 4.3.4 Study of the Group \( {U}_{1} \) We now want to determine explicitly a minimal system of generators for \( {U}_{1} \) . We begin with a very classical lemma, which is useful in many parts of algebra. Lemma 4.3.15 (Nakayama). Let \( M \) be a finitely generated \( {\mathbb{Z}}_{\mathfrak{p}} \) -module. The equality \( M = \mathfrak{p}M \) implies that \( M = 0 \) . In particular, a set \( \left( {x}_{i}\right) \) of elements
111_111_Three Dimensional Navier-Stokes Equations - James C. Robinson, José L. Rodrigo, Witold Sadowski
Definition 3.4
Definition 3.4 Let \( T \) be a densely defined symmetric operator on \( \mathcal{H} \) . The boundary form of \( {T}^{ * } \) is the sesquilinear form \( {\left\lbrack \cdot , \cdot \right\rbrack }_{{T}^{ * }} \) on \( \mathcal{D}\left( {T}^{ * }\right) \) defined by \[ {\left\lbrack x, y\right\rbrack }_{{T}^{ * }} = \left\langle {{T}^{ * }x, y}\right\rangle - \left\langle {x,{T}^{ * }y}\right\rangle ,\;x, y \in \mathcal{D}\left( {T}^{ * }\right) . \] (3.9) A linear subspace \( \mathcal{D} \) of \( \mathcal{D}\left( {T}^{ * }\right) \) is called symmetric if \( {\left\lbrack x, y\right\rbrack }_{{T}^{ * }} = 0 \) for all \( x, y \in \mathcal{D} \) . The reason for the latter terminology stems from the following simple fact which follows immediately from the corresponding definitions. Lemma 3.5 The symmetric extensions of a densely defined symmetric operator \( T \) are the restrictions of \( {T}^{ * } \) to symmetric subspaces \( \mathcal{D} \) of \( \mathcal{D}\left( {T}^{ * }\right) \) which contain \( \mathcal{D}\left( T\right) \) . Proposition 3.6 Let \( T \) be a densely defined closed symmetric operator, and let \( \mathcal{E} \) be a finite-dimensional linear subspace of \( \mathcal{D}\left( {T}^{ * }\right) \) . Define \[ \mathcal{D}\left( {T}_{\mathcal{E}}\right) \mathrel{\text{:=}} \mathcal{D}\left( T\right) + \mathcal{E}\;\text{ and }\;{T}_{\mathcal{E}} \mathrel{\text{:=}} {T}^{ * } \mid \mathcal{D}\left( {T}_{\mathcal{E}}\right) . \] Suppose that the linear subspace \( \mathcal{D}\left( {T}_{\mathcal{E}}\right) \) of \( \mathcal{D}\left( {T}^{ * }\right) \) is symmetric. Then \( {T}_{\mathcal{E}} \) is a closed symmetric operator. If \( \mathcal{E} \) has dimension \( k \) modulo \( \mathcal{D}\left( T\right) \), then \( {T}_{\mathcal{E}} \) has the deficiency indices \( \left( {{d}_{ + }\left( T\right) - k,{d}_{ - }\left( T\right) - k}\right) \) .
Proof Since \( \mathcal{G}\left( {T}_{\mathcal{E}}\right) \) is the sum of the closed subspace \( \mathcal{G}\left( T\right) \) and the finite-dimensional vector space \( \left\{ {\left( {x,{T}^{ * }x}\right) : x \in \mathcal{E}}\right\} \), \( \mathcal{G}\left( {T}_{\mathcal{E}}\right) \) is closed, and so is the operator \( {T}_{\mathcal{E}} \) . By Lemma 3.5, \( {T}_{\mathcal{E}} \) is symmetric, because \( \mathcal{D}\left( {T}_{\mathcal{E}}\right) \) is a symmetric subspace. It remains to prove the assertion concerning the deficiency indices of \( {T}_{\mathcal{E}} \) . Let \( \lambda \in \mathbb{C} \smallsetminus \mathbb{R} \) . Without loss of generality we can assume that \( \mathcal{D}\left( T\right) + \mathcal{E} \) is a direct sum and \( \dim \mathcal{E} = k \) . First we show that \( \mathcal{R}\left( {T - {\lambda I}}\right) + \left( {{T}^{ * } - {\lambda I}}\right) \mathcal{E} \) is also a direct sum and \( \dim \left( {{T}^{ * } - {\lambda I}}\right) \mathcal{E} = k \) . Indeed, assume that \( \left( {T - {\lambda I}}\right) x + \left( {{T}^{ * } - {\lambda I}}\right) u = 0 \) for \( x \in \mathcal{D}\left( T\right) \) and \( u \in \mathcal{E} \) . Then \( x + u \in \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) \) . But \( x + u \in \mathcal{D}\left( {T}_{\mathcal{E}}\right) \), so that \( x + u \in \mathcal{N}\left( {{T}_{\mathcal{E}} - {\lambda I}}\right) \) . Therefore, since \( {T}_{\mathcal{E}} \) is a symmetric operator and \( \lambda \) is not real, \( x + u = 0 \) by Lemma 3.4(i). Since the sum \( \mathcal{D}\left( T\right) + \mathcal{E} \) is direct, \( x = 0 \) and \( u = 0 \) . This proves also that \( \left( {{T}^{ * } - {\lambda I}}\right) \mid \mathcal{E} \) is injective. Hence, \( \dim \left( {{T}^{ * } - {\lambda I}}\right) \mathcal{E} = \dim \mathcal{E} = k \) . Obviously, \( \mathcal{R}{\left( {T}_{\mathcal{E}} - \lambda I\right) }^{ \bot } \subseteq \mathcal{R}{\left( T - \lambda I\right) }^{ \bot } \) .
Since \( \mathcal{R}\left( {{T}_{\mathcal{E}} - {\lambda I}}\right) \) is the direct sum of \( \mathcal{R}\left( {T - {\lambda I}}\right) \) and the \( k \) -dimensional space \( \left( {{T}^{ * } - {\lambda I}}\right) \mathcal{E} \), the subspace \( \mathcal{R}{\left( {T}_{\mathcal{E}} - \lambda I\right) }^{ \bot } \) has codimension \( k \) in \( \mathcal{R}{\left( T - \lambda I\right) }^{ \bot } \) . By Definition 3.3 this means that \( {d}_{\bar{\lambda }}\left( {T}_{\mathcal{E}}\right) = {d}_{\bar{\lambda }}\left( T\right) - k \) . \( ▱ \) ## 3.2 Self-adjoint Operators Self-adjointness is the most important notion on unbounded operators in this book. The main results about self-adjoint operators are the spectral theorem proved in Sect. 5.2 and the corresponding functional calculus based on it. A large effort is made in this book to prove that certain symmetric operators are self-adjoint or to extend symmetric operators to self-adjoint ones. Definition 3.5 A densely defined symmetric operator \( T \) on a Hilbert space \( \mathcal{H} \) is called self-adjoint if \( T = {T}^{ * } \) and essentially self-adjoint, briefly e.s.a., if \( \bar{T} \) is self-adjoint, or equivalently, if \( \bar{T} = {T}^{ * } \) . Let us state some simple consequences that will be often used without mention. A self-adjoint operator \( T \) is symmetric and closed, since \( {T}^{ * } \) is always closed. Let \( T \) be a densely defined symmetric operator. Since then \( T \subseteq {T}^{ * } \), \( T \) is self-adjoint if and only if \( \mathcal{D}\left( T\right) = \mathcal{D}\left( {T}^{ * }\right) \) . Likewise, \( T \) is essentially self-adjoint if and only if \( \mathcal{D}\left( \bar{T}\right) = \mathcal{D}\left( {T}^{ * }\right) \) . Any self-adjoint operator \( T \) on \( \mathcal{H} \) is maximal symmetric, that is, if \( S \) is a symmetric operator on \( \mathcal{H} \) such that \( T \subseteq S \), then \( T = S \) . Indeed, \( T \subseteq S \) implies that \( {S}^{ * } \subseteq {T}^{ * } \) .
Combined with \( S \subseteq {S}^{ * } \) and \( {T}^{ * } = T \), this yields \( S \subseteq T \), so that \( T = S \) . Some self-adjointness criteria follow easily from the next result. The nice direct sum decomposition (3.10) of the domain \( \mathcal{D}\left( {T}^{ * }\right) \) is called von Neumann’s formula. Proposition 3.7 Let \( T \) be a densely defined symmetric operator. Then \[ \mathcal{D}\left( {T}^{ * }\right) = \mathcal{D}\left( \bar{T}\right) \dot{ + }\mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) \dot{ + }\mathcal{N}\left( {{T}^{ * } - \bar{\lambda }I}\right) \;\text{ for }\lambda \in \mathbb{C} \smallsetminus \mathbb{R}, \] (3.10) \[ \dim \mathcal{D}\left( {T}^{ * }\right) /\mathcal{D}\left( \bar{T}\right) = {d}_{ + }\left( T\right) + {d}_{ - }\left( T\right) . \] (3.11) Proof Let us abbreviate \( {\mathcal{N}}_{\lambda } \mathrel{\text{:=}} \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) \) and \( {\mathcal{N}}_{\bar{\lambda }} \mathrel{\text{:=}} \mathcal{N}\left( {{T}^{ * } - \bar{\lambda }I}\right) \) . The inclusion \( \mathcal{D}\left( \bar{T}\right) + {\mathcal{N}}_{\lambda } + {\mathcal{N}}_{\bar{\lambda }} \subseteq \mathcal{D}\left( {T}^{ * }\right) \) is obvious. We prove that \( \mathcal{D}\left( {T}^{ * }\right) \subseteq \mathcal{D}\left( \bar{T}\right) + {\mathcal{N}}_{\lambda } + {\mathcal{N}}_{\bar{\lambda }} \) . Let \( x \in \mathcal{D}\left( {T}^{ * }\right) \) . By Corollary 2.2 we have \[ \mathcal{H} = \mathcal{R}\left( {\bar{T} - {\lambda I}}\right) \oplus {\mathcal{N}}_{\bar{\lambda }} \] (3.12) We apply (3.12) to the vector \( \left( {{T}^{ * } - {\lambda I}}\right) x \in \mathcal{H} \) . Then there exist \( {x}_{0} \in \mathcal{D}\left( \bar{T}\right) \) and \( {x}_{ - }^{\prime } \in {\mathcal{N}}_{\bar{\lambda }} \) such that \( \left( {{T}^{ * } - {\lambda I}}\right) x = \left( {\bar{T} - {\lambda I}}\right) {x}_{0} + {x}_{ - }^{\prime } \) . 
Since \( \lambda \notin \mathbb{R} \), we can set \( {x}_{ - } \mathrel{\text{:=}} \) \( {\left( \bar{\lambda } - \lambda \right) }^{-1}{x}_{ - }^{\prime } \) . The preceding can be rewritten as \( \left( {{T}^{ * } - {\lambda I}}\right) \left( {x - {x}_{0} - {x}_{ - }}\right) = 0 \), that is, \( {x}_{ + } \mathrel{\text{:=}} x - {x}_{0} - {x}_{ - } \) is in \( {\mathcal{N}}_{\lambda } \) . Hence, \( x = {x}_{0} + {x}_{ + } + {x}_{ - } \in \mathcal{D}\left( \bar{T}\right) + {\mathcal{N}}_{\lambda } + {\mathcal{N}}_{\bar{\lambda }} \) . To prove that the sum in (3.10) is a direct sum, we assume that \( {x}_{0} + {x}_{ + } + {x}_{ - } = 0 \) for \( {x}_{0} \in \mathcal{D}\left( \bar{T}\right) ,{x}_{ + } \in {\mathcal{N}}_{\lambda } \), and \( {x}_{ - } \in {\mathcal{N}}_{\bar{\lambda }} \) . Then \[ \left( {{T}^{ * } - {\lambda I}}\right) \left( {{x}_{0} + {x}_{ + } + {x}_{ - }}\right) = \left( {\bar{T} - {\lambda I}}\right) {x}_{0} + \left( {\bar{\lambda } - \lambda }\right) {x}_{ - } = 0. \] The vector \( \left( {\lambda - \bar{\lambda }}\right) {x}_{ - } = \left( {\bar{T} - {\lambda I}}\right) {x}_{0} \) is in \( \mathcal{R}\left( {\bar{T} - {\lambda I}}\right) \cap {\mathcal{N}}_{\bar{\lambda }} \) . Therefore, it follows from (3.12) that \( {x}_{ - } = 0 \) and \( \left( {\bar{T} - {\lambda I}}\right) {x}_{0} = 0 \) . Since \( \bar{T} \) is symmetric and \( \lambda \) is not real, the latter yields \( {x}_{0} = 0 \) by Lemma 3.4. Since \( {x}_{0} = {x}_{ - } = 0 \), we get \( {x}_{ + } = 0 \) . The preceding proves (3.10). (3.11) is an immediate consequence of (3.10). Recall that a densely defined symmetric operator \( T \) is essentially self-adjoint if and only if \( \mathcal{D}\left( \bar{T}\right) = \mathcal{D}\left( {T}^{ * }\right) \) . By (3.10) (or (3.11)) the latter is satisfied if and only if \( {d}_{ + }\left( T\right) = {d}_{ - }\left( T\right) = 0 \) . 
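A standard worked example (not taken from this text) is \( T = -\mathrm{i}\,d/dx \) with \( \mathcal{D}\left( T\right) = {C}_{c}^{\infty }\left( {0,1}\right) \subset {L}^{2}\left( {0,1}\right) \), where the deficiency spaces entering von Neumann's formula (3.10) can be computed by hand:

```latex
% T = -i d/dx on D(T) = C_c^\infty(0,1); then T* = -i d/dx on H^1(0,1).
% For u in D(T*):
%   (T* - iI)u = 0  <=>  u' = -u  <=>  u = c e^{-x},
%   (T* + iI)u = 0  <=>  u' =  u  <=>  u = c e^{x}.
% Both exponentials lie in L^2(0,1), so
\[
  \mathcal{N}\left( {T}^{ * } - iI\right) = \mathbb{C}\,{e}^{-x}, \qquad
  \mathcal{N}\left( {T}^{ * } + iI\right) = \mathbb{C}\,{e}^{x}, \qquad
  {d}_{ + }\left( T\right) = {d}_{ - }\left( T\right) = 1,
\]
% and (3.10) reads
\[
  \mathcal{D}\left( {T}^{ * }\right)
    = \mathcal{D}\left( \bar{T}\right)
      \dot{ + }\,\mathbb{C}\,{e}^{-x}
      \dot{ + }\,\mathbb{C}\,{e}^{x}.
\]
% Hence dim D(T*)/D(T-bar) = 2: T is symmetric but not essentially
% self-adjoint.  On L^2(0,\infty) instead, e^{x} fails to be square
% integrable and the indices become (1,0).
```

The same computation on the half-line shows how unequal indices arise, foreshadowing the criterion just stated.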
That is, using formulas (3.4)-(3.7), we obtain the following: Proposition 3.8 Let \( T \) be a densely defined symmetric operator on \( \mathcal{H} \), and let \( {\lambda }_{ + } \) and \( {\lambda }_{ - } \) be complex numbers such that \( \operatorname{Im}{\lambda
1042_(GTM203)The Symmetric Group
Definition 1.7.5
Definition 1.7.5 Given vector spaces \( V \) and \( W \), then their tensor product is the set \[ V \otimes W = \left\{ {\mathop{\sum }\limits_{{i, j}}{c}_{i, j}{\mathbf{v}}_{i} \otimes {\mathbf{w}}_{j} : {c}_{i, j} \in \mathbb{C},{\mathbf{v}}_{i} \in V,{\mathbf{w}}_{j} \in W}\right\} \] subject to the relations \[ \left( {{c}_{1}{\mathbf{v}}_{1} + {c}_{2}{\mathbf{v}}_{2}}\right) \otimes \mathbf{w} = {c}_{1}\left( {{\mathbf{v}}_{1} \otimes \mathbf{w}}\right) + {c}_{2}\left( {{\mathbf{v}}_{2} \otimes \mathbf{w}}\right) \] and \[ \mathbf{v} \otimes \left( {{d}_{1}{\mathbf{w}}_{1} + {d}_{2}{\mathbf{w}}_{2}}\right) = {d}_{1}\left( {\mathbf{v} \otimes {\mathbf{w}}_{1}}\right) + {d}_{2}\left( {\mathbf{v} \otimes {\mathbf{w}}_{2}}\right) . \] It is easy to see that \( V \otimes W \) is also a vector space. In fact, the reader can check that if \( \mathcal{B} = \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2},\ldots ,{\mathbf{v}}_{d}}\right\} \) and \( \mathcal{C} = \left\{ {{\mathbf{w}}_{1},{\mathbf{w}}_{2},\ldots ,{\mathbf{w}}_{f}}\right\} \) are bases for \( V \) and \( W \), respectively, then the set \[ \left\{ {{\mathbf{v}}_{i} \otimes {\mathbf{w}}_{j} : 1 \leq i \leq d,1 \leq j \leq f}\right\} \] is a basis for \( V \otimes W \) . This gives the connection with the definition of matrix tensor products: The algebra \( {\operatorname{Mat}}_{d} \) has as basis the set \[ \mathcal{B} = \left\{ {{E}_{i, j} : 1 \leq i, j \leq d}\right\} \] where \( {E}_{i, j} \) is the matrix of zeros with exactly one 1 in position \( \left( {i, j}\right) \) . 
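The identification of \( {E}_{i, j} \otimes {E}_{k, l} \) with the \( \left( {k, l}\right) \) position of block \( \left( {i, j}\right) \) is exactly what `numpy.kron` computes; a small illustrative check (0-indexed, not part of the text):

```python
import numpy as np

d, f = 2, 3   # illustrative sizes

def E(i, j, n):
    """Matrix unit E_{i,j} in Mat_n (0-indexed here)."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# E_{i,j} (x) E_{k,l} has its single 1 in position (k,l) of block (i,j),
# i.e. at row i*f + k and column j*f + l of the df x df matrix.
K = np.kron(E(0, 1, d), E(2, 0, f))
print(np.argwhere(K == 1.0))   # [[2 3]]: row 0*3 + 2, column 1*3 + 0
```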
So if \( X = \left( {x}_{i, j}\right) \in {\operatorname{Mat}}_{d} \) and \( Y = \left( {y}_{k, l}\right) \in {\operatorname{Mat}}_{f} \), then, by the fact that \( \otimes \) is linear, \[ X \otimes Y = \left( {\mathop{\sum }\limits_{{i, j = 1}}^{d}{x}_{i, j}{E}_{i, j}}\right) \otimes \left( {\mathop{\sum }\limits_{{k, l = 1}}^{f}{y}_{k, l}{E}_{k, l}}\right) \] \[ = \mathop{\sum }\limits_{{i, j = 1}}^{d}\mathop{\sum }\limits_{{k, l = 1}}^{f}{x}_{i, j}{y}_{k, l}\left( {{E}_{i, j} \otimes {E}_{k, l}}\right) . \] (1.14) But if \( {E}_{i, j} \otimes {E}_{k, l} \) represents the \( \left( {k, l}\right) \) th position of the \( \left( {i, j}\right) \) th block of a matrix, then equation (1.14) says that the corresponding entry for \( X \otimes Y \) should be \( {x}_{i, j}{y}_{k, l} \), agreeing with the matrix definition. We return from our brief detour to consider the center of \( \operatorname{Com}X \) . The center of an algebra \( A \) is \[ {Z}_{A} = \{ a \in A : {ab} = {ba}\text{ for all }b \in A\} . \] First we will compute the center of a matrix algebra. This result should be very reminiscent of Corollary 1.6.8 to Schur's lemma. Proposition 1.7.6 The center of \( {\operatorname{Mat}}_{d} \) is \[ {Z}_{{\text{Mat }}_{d}} = \left\{ {c{I}_{d} : c \in \mathbb{C}}\right\} \] Proof. Suppose that \( C \in {Z}_{{\text{Mat }}_{d}} \) . Then, in particular, \[ C{E}_{i, i} = {E}_{i, i}C \] (1.15) for all \( i \) . But \( C{E}_{i, i} \) (respectively, \( {E}_{i, i}C \) ) is all zeros except for the \( i \) th column (respectively, row), which is the same as \( C \) ’s. Thus (1.15) implies that all off-diagonal elements of \( C \) must be 0 . Similarly, if \( i \neq j \), then \[ C\left( {{E}_{i, j} + {E}_{j, i}}\right) = \left( {{E}_{i, j} + {E}_{j, i}}\right) C \] where the left (respectively, right) multiplication exchanges columns (respectively, rows) \( i \) and \( j \) of \( C \) .
It follows that all the diagonal elements must be equal and so \( C = c{I}_{d} \) for some \( c \in \mathbb{C} \) . Finally, all these matrices clearly commute with any other matrix, so we are done. \( \blacksquare \) Since we will be computing \( {Z}_{\text{Com }X} \) and the elements of the commutant algebra involve direct sums and tensor products, we will need to know how these operations behave under multiplication. Lemma 1.7.7 Suppose \( A, X \in {\operatorname{Mat}}_{d} \) and \( B, Y \in {\operatorname{Mat}}_{f} \) . Then 1. \( \left( {A \oplus B}\right) \left( {X \oplus Y}\right) = {AX} \oplus {BY} \) , 2. \( \left( {A \otimes B}\right) \left( {X \otimes Y}\right) = {AX} \otimes {BY} \) . Proof. Both assertions are easy to prove, so we will do only the second. Suppose \( A = \left( {a}_{i, j}\right) \) and \( X = \left( {x}_{i, j}\right) \) . Then \[ \left( {A \otimes B}\right) \left( {X \otimes Y}\right) = \left( {{a}_{i, j}B}\right) \left( {{x}_{i, j}Y}\right) \;\text{ (definition of } \otimes \text{ ) } \] \[ = \;\left( {\mathop{\sum }\limits_{k}{a}_{i, k}B\;{x}_{k, j}Y}\right) \;\text{(block multiplication)} \] \[ = \;\left( {\left( {\mathop{\sum }\limits_{k}{a}_{i, k}{x}_{k, j}}\right) {BY}}\right) \;\text{(distributivity)} \] \[ = {AX} \otimes {BY}.\;\text{(definition of} \otimes \text{)}\blacksquare \] Now consider \( C \in {Z}_{\text{Com }X} \), where \( X \) and \( \operatorname{Com}X \) are given by (1.12) and (1.13), respectively. So \[ {CT} = {TC}\text{for all}T \in \operatorname{Com}X\text{,} \] (1.16) where \( T = { \oplus }_{i = 1}^{k}\left( {{M}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) \) and \( C = { \oplus }_{i = 1}^{k}\left( {{C}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) \) .
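Lemma 1.7.7(2), the mixed-product property, can be spot-checked numerically before it is used below (an illustrative `numpy` sketch; the random matrices stand in for arbitrary elements of the two matrix algebras):

```python
import numpy as np

rng = np.random.default_rng(0)
A, X = rng.standard_normal((2, 4, 4))   # arbitrary A, X in Mat_4
B, Y = rng.standard_normal((2, 3, 3))   # arbitrary B, Y in Mat_3

# Mixed-product property (Lemma 1.7.7, item 2): (A(x)B)(X(x)Y) = AX (x) BY
lhs = np.kron(A, B) @ np.kron(X, Y)
rhs = np.kron(A @ X, B @ Y)
print(np.allclose(lhs, rhs))   # True
```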
Computing the left-hand side, we obtain \[ {CT} = \left( {{ \oplus }_{i = 1}^{k}{C}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) \left( {{ \oplus }_{i = 1}^{k}{M}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) \;\left( {\text{definition of}\;C\text{and}\;T}\right) \] \[ = \;{ \oplus }_{i = 1}^{k}\left( {{C}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) \left( {{M}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) \;\text{(Lemma 1.7.7, item 1)} \] \[ = { \oplus }_{i = 1}^{k}\left( {{C}_{{m}_{i}}{M}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) . \] Similarly, \[ {TC} = { \oplus }_{i = 1}^{k}\left( {{M}_{{m}_{i}}{C}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) \] Thus equation (1.16) holds if and only if \[ {C}_{{m}_{i}}{M}_{{m}_{i}} = {M}_{{m}_{i}}{C}_{{m}_{i}}\text{for all}{M}_{{m}_{i}} \in {\operatorname{Mat}}_{{m}_{i}}\text{.} \] But this just means that \( {C}_{{m}_{i}} \) is in the center of \( {\operatorname{Mat}}_{{m}_{i}} \), which, by Proposition 1.7.6, is equivalent to \[ {C}_{{m}_{i}} = {c}_{i}{I}_{{m}_{i}} \] for some \( {c}_{i} \in \mathbb{C} \) . Hence \[ C = { \oplus }_{i = 1}^{k}{c}_{i}{I}_{{m}_{i}} \otimes {I}_{{d}_{i}} \] \[ = { \oplus }_{i = 1}^{k}{c}_{i}{I}_{{m}_{i}{d}_{i}} \] \[ = \left( \begin{matrix} {c}_{1}{I}_{{m}_{1}{d}_{1}} & 0 & \cdots & 0 \\ 0 & {c}_{2}{I}_{{m}_{2}{d}_{2}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {c}_{k}{I}_{{m}_{k}{d}_{k}} \end{matrix}\right) , \] and all members of \( {Z}_{\text{Com }X} \) have this form. Note that \( \dim {Z}_{\text{Com }X} = k \) . For a concrete example, let \[ X = \left( \begin{matrix} {X}^{\left( 1\right) } & 0 & 0 \\ 0 & {X}^{\left( 1\right) } & 0 \\ 0 & 0 & {X}^{\left( 2\right) } \end{matrix}\right) = 2{X}^{\left( 1\right) } \oplus {X}^{\left( 2\right) }, \] where \( \deg {X}^{\left( 1\right) } = 3 \) and \( \deg {X}^{\left( 2\right) } = 4 \) . 
Then the matrices \( T \in \operatorname{Com}X \) look like \[ T = \left( \begin{matrix} a & 0 & 0 & b & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & a & 0 & 0 & b & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & a & 0 & 0 & b & 0 & 0 & 0 & 0 \\ c & 0 & 0 & d & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & c & 0 & 0 & d & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & c & 0 & 0 & d & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & x & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x \end{matrix}\right) , \] where \( a, b, c, d, x \in \mathbb{C} \) . The dimension is evidently \[ \dim \left( {\operatorname{Com}X}\right) = {m}_{1}^{2} + {m}_{2}^{2} = {2}^{2} + {1}^{2} = 5. \] The elements \( C \in {Z}_{\text{Com }X} \) are even simpler: \[ C = \left( \begin{matrix} a & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & a & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & a & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & a & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & a & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & a & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & x & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x \end{matrix}\right) , \] where \( a, x \in \mathbb{C} \) . Here the dimension is the number of different irreducible components of \( X \), in this case 2 . We summarize these results in the following theorem. Theorem 1.7.8 Let \( X \) be a matrix representation of \( G \) such that \[ X = {m}_{1}{X}^{\left( 1\right) } \oplus {m}_{2}{X}^{\left( 2\right) } \oplus \cdots \oplus {m}_{k}{X}^{\left( k\right) }, \] (1.17) where the \( {X}^{\left( i\right) } \) are inequivalent, irreducible and \( \deg {X}^{\left( i\right) } = {d}_{i} \) . Then 1. \( \deg X = {m}_{1}{d}_{1} + {m}_{2}{d}_{2} + \cdots + {m}_{k}{d}_{k} \) , 2.
\( \operatorname{Com}X = \left\{ {{ \oplus }_{i = 1}^{k}\left( {{M}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) : {M}_{{m}_{i}} \in {\operatorname{Mat}}_{{m}_{i}}\text{ for all }i}\right\} \) , 3. \( \dim \left( {\operatorname{Com}X}\right) = {m}_{1}^{2} + {m}_{2}^{2} + \cdots + {m}_{k}^{2} \) , 4. \( {Z}_{\text{Com }X} = \left\{ {{ \oplus }_{i = 1}^{k}{c}_{i}{I}_{{m}_{i}{d}_{i}} : {c}_{i} \in \mathbb{C}\text{ for all }i}\right\} \), and 5. \( \dim {Z}_{\operatorname{Com}X} = k \) . ∎ What happens if we try to apply Theorem 1.7.8 to a representation \( Y \) that is not decomposed into irreducibles? By the matrix version of Maschke's theorem (Corollary 1.5.4), \( Y \) is equivalent to a representation \( X \) of the form given in equation (1.17). But if \( Y = {RX}{R}^{-1} \) for some fixed matrix \( R \), then the map \[ T \rightarrow {RT}{R}^{-1} \] is an algebra isomorphism from \( \operatorname{Com}X \) to \( \operatorname{Com}Y \) . Once the commutant algebras are isomorphic, it is easy t
106_106_The Cantor function
Definition 3.2
Definition 3.2 Let \( A \subseteq P, p \in P \) . A proof of length \( n \) of \( p \) from \( A \) is a sequence \( {p}_{1},\ldots ,{p}_{n} \) of \( n \) elements of \( P \) such that \( {p}_{n} = p \), the sequence \( {p}_{1},\ldots ,{p}_{n - 1} \) is a proof of length \( n - 1 \) of \( {p}_{n - 1} \) from \( A \), and (a) \( {p}_{n} \in \mathcal{A} \cup A \), or (b) \( {p}_{i} = {p}_{j} \Rightarrow {p}_{n} \) for some \( i, j < n \), or (c) \( {p}_{n} = \left( {\forall x}\right) w\left( x\right) \) and some subsequence \( {p}_{{k}_{1}},\ldots ,{p}_{{k}_{r}} \) of \( {p}_{1},\ldots {p}_{n - 1} \) is a proof (of length \( < n \) ) of \( w\left( x\right) \) from a subset \( {A}_{0} \) of \( A \) such that \( x \notin \operatorname{var}\left( {A}_{0}\right) \) . This is an inductive definition of a proof in \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) . As for \( \operatorname{Prop}\left( X\right) \) , we require a proof to be a proof of finite length. The restriction \( x \notin \operatorname{var}\left( {A}_{0}\right) \) in (c) means that no special assumptions about \( x \) are used in proving \( w\left( x\right) \) , and is the formal analogue of the restriction on the use of Generalisation in our informal logic. As before, we write \( A \vdash p \) if there exists a proof of \( p \) from \( A \) . We denote by \( \operatorname{Ded}\left( A\right) \) the set of all \( p \) such that \( A \vdash p \) . We write \( \vdash p \) for \( \varnothing \vdash p \), and any \( p \) for which \( \vdash p \) is called a theorem of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) . Example 3.3. We show \( \{ \sim \left( {\exists x}\right) \left( { \sim p}\right) \} \vdash \left( {\forall x}\right) p \) for any element \( p \in P \) . (Recall that \( \left( {\exists x}\right) \) is an abbreviation for \( \sim \left( {\forall x}\right) \sim \) .) The following is a proof. 
\[ {p}_{1} = \sim \left( {\exists x}\right) \left( { \sim p}\right) \Rightarrow \left( {\forall x}\right) \left( { \sim \sim p}\right) ,\;\left( {{\mathcal{A}}_{3}\text{, since } \sim \left( {\exists x}\right) \left( { \sim p}\right) \text{ abbreviates } \sim \sim \left( {\forall x}\right) \left( { \sim \sim p}\right) }\right) \] \[ {p}_{2} = \sim \left( {\exists x}\right) \left( { \sim p}\right) ,\;\text{(assumption)} \] \[ {p}_{3} = \left( {\forall x}\right) \left( { \sim \sim p}\right) ,\;\left( {{p}_{1} = {p}_{2} \Rightarrow {p}_{3}}\right) \] \[ {p}_{4} = \left( {\forall x}\right) \left( { \sim \sim p\left( x\right) }\right) \Rightarrow \sim \sim p\left( y\right) , \] \( \left( {\mathcal{A}}_{5}\right) \) Note that by \( \left( {\mathcal{A}}_{5}\right) \), the \( y \) in \( {p}_{4} \) may be chosen to be any variable. To permit a subsequent use of Generalisation, \( y \) must not be in \( \operatorname{var}\left( { \sim \left( {\exists x}\right) \left( { \sim p\left( x\right) }\right) }\right) \) . A possible choice for \( y \) is the variable \( x \) itself. \[ {p}_{5} = \sim \sim p\left( y\right) ,\;\left( {{p}_{4} = {p}_{3} \Rightarrow {p}_{5}}\right) \] \[ {p}_{6} = \sim \sim p\left( y\right) \Rightarrow p\left( y\right) ,\;\left( {\mathcal{A}}_{3}\right) \] \[ {p}_{7} = p\left( y\right) ,\;\left( {{p}_{6} = {p}_{5} \Rightarrow {p}_{7}}\right) \] \[ {p}_{8} = \left( {\forall y}\right) p\left( y\right) \;\text{(Generalisation, }y \notin \operatorname{var}\left( { \sim \left( {\exists x}\right) \left( { \sim p\left( x\right) }\right) }\right) \text{)} \] ## Exercises 3.4. Show that every axiom of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) is valid. 3.5. Construct a proof in \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) of \( \left( {\forall x}\right) \left( {\forall y}\right) p\left( {x, y}\right) \) from \( \{ \left( {\forall y}\right) \left( {\forall x}\right) p\left( {x, y}\right) \} \) . ## §4 Properties of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) We have now constructed the logic \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) .
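The bookkeeping of Definition 3.2 lends itself to mechanical verification. The toy checker below is purely illustrative (invented encodings; it covers only clauses (a) and (b), omitting Generalisation): propositions are nested tuples, with \( p \Rightarrow q \) encoded as `('imp', p, q)`.

```python
def is_mp(step, earlier):
    """Clause (b): step follows from two earlier lines by modus ponens."""
    return any(w == ('imp', u, step) for u in earlier for w in earlier)

def check_proof(lines, axioms, assumptions):
    """Check each line by clause (a) or (b) of Definition 3.2."""
    for n, p in enumerate(lines):
        if p not in axioms and p not in assumptions and not is_mp(p, lines[:n]):
            return False
    return True

q, r = 'q', 'r'
proof = [('imp', q, r), q, r]   # from q => r and q, infer r
print(check_proof(proof, axioms={('imp', q, r)}, assumptions={q}))   # True
```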
Its algebra of propositions is the reduced first order algebra \( P\left( {V,\mathcal{R}}\right) \), its valuations are the valuations associated with the interpretations of \( P\left( {V,\mathcal{R}}\right) \) defined in \( §2 \), and its proofs are as defined in \( §3 \) . We can immediately inquire if there is a substitution theorem for this logic, corresponding to Theorem 4.11 of the Propositional Calculus. There, substitution was defined in terms of a homomorphism \( \varphi : {P}_{1} \rightarrow {P}_{2} \) of one algebra of propositions into another. If \( {P}_{1},{P}_{2} \) are first order algebras, then as the concept of a homomorphism from \( {P}_{1} \) to \( {P}_{2} \) requires these algebras to have the same set of operations, it follows that they must have the same set of individual variables. Even in this case, a homomorphism would be too restrictive for our purposes, for we would naturally want to be able to interchange two variables \( x, y \), so mapping elements \( p\left( x\right) \) of the algebra to \( \varphi \left( {p\left( x\right) }\right) = p\left( y\right) \), but unfortunately such a map is not a homomorphism. For if \( p\left( x\right) \in P \) is such that \( x \in \operatorname{var}\left( {p\left( x\right) }\right), y \notin \operatorname{var}\left( {p\left( x\right) }\right) \), then \[ \varphi \left( {\left( {\forall x}\right) p\left( x\right) }\right) = \left( {\forall y}\right) p\left( y\right) = \left( {\forall x}\right) p\left( x\right) , \] \[ \left( {\forall x}\right) \varphi \left( {p\left( x\right) }\right) = \left( {\forall x}\right) p\left( y\right) . \] Since \( y \in \operatorname{var}\left( {\left( {\forall x}\right) p\left( y\right) }\right) \) but \( y \notin \operatorname{var}\left( {\left( {\forall y}\right) p\left( y\right) }\right) \), these elements are distinct and \( \varphi \) is not a homomorphism. Definition 4.1. 
Let \( {P}_{1} = P\left( {{V}_{1},{\mathcal{R}}^{\left( 1\right) }}\right) \) and \( {P}_{2} = P\left( {{V}_{2},{\mathcal{R}}^{\left( 2\right) }}\right) \) . A semi-homomorphism \( \left( {\alpha ,\beta }\right) : \left( {{P}_{1},{V}_{1}}\right) \rightarrow \left( {{P}_{2},{V}_{2}}\right) \) is a pair of maps \( \alpha : {P}_{1} \rightarrow {P}_{2} \) , \( \beta : {V}_{1} \rightarrow {V}_{2} \) such that (a) \( \beta \left( {V}_{1}\right) \) is infinite, (b) \( \alpha \) is an \( \{ \mathrm{F}, \Rightarrow \} \) -homomorphism, and (c) \( \alpha \left( {\left( {\forall x}\right) p}\right) = \left( {\forall {x}^{\prime }}\right) \alpha \left( p\right) \), where \( {x}^{\prime } = \beta \left( x\right) \) . Lemma 4.2. Let \( \left( {\alpha ,\beta }\right) : \left( {{P}_{1},{V}_{1}}\right) \rightarrow \left( {{P}_{2},{V}_{2}}\right) \) be a semi-homomorphism. Let \( p \in {P}_{1} \) and suppose \( x \in {V}_{1} - \operatorname{var}\left( p\right) \) . Then \( \beta \left( x\right) \notin \operatorname{var}\left( {\alpha \left( p\right) }\right) \) . Proof: We observe first that if \( x \neq y \), then \( \left( {\forall x}\right) p = \left( {\forall y}\right) p \) if and only if neither \( x \) nor \( y \) is in \( \operatorname{var}\left( p\right) \) . Since \( \beta \left( {V}_{1}\right) \) is infinite, there is an element \( {y}^{\prime } \in \beta \left( {V}_{1}\right) \) such that \( {y}^{\prime } \neq \beta \left( x\right) \) and \( {y}^{\prime } \notin \beta \left( {\operatorname{var}\left( p\right) }\right) \) . Choosing \( y \in {V}_{1} \) so that \( \beta \left( y\right) = {y}^{\prime } \), it follows that \( \left( {\forall x}\right) p = \left( {\forall y}\right) p \) .
If \( {x}^{\prime } = \beta \left( x\right) \), then we have \[ \left( {\forall {x}^{\prime }}\right) \alpha \left( p\right) = \alpha \left( {\left( {\forall x}\right) p}\right) = \alpha \left( {\left( {\forall y}\right) p}\right) = \left( {\forall {y}^{\prime }}\right) \alpha \left( p\right) , \] and it follows again that \( {x}^{\prime } \notin \operatorname{var}\left( {\alpha \left( p\right) }\right) \) . Theorem 4.3. (The Substitution Theorem). Let \( \left( {\alpha ,\beta }\right) : \left( {{P}_{1},{V}_{1}}\right) \rightarrow \left( {{P}_{2},{V}_{2}}\right) \) be a semi-homomorphism. Let \( A \subseteq {P}_{1} \) and \( p \in {P}_{1} \) . (a) If \( A \vdash p \), then \( \alpha \left( A\right) \vdash \alpha \left( p\right) \) . (b) If \( A \vDash p \), then \( \alpha \left( A\right) \vDash \alpha \left( p\right) \) . Proof: (a) Let \( {p}_{1},\ldots ,{p}_{n} \) be a proof of \( p \) from \( A \) . We use induction over \( n \) to show that \( \alpha \left( {p}_{1}\right) ,\ldots ,\alpha \left( {p}_{n}\right) \) is a proof of \( \alpha \left( p\right) \) from \( \alpha \left( A\right) \) . If \( a = \left( {\left( {\forall x}\right) \left( {p \Rightarrow q}\right) }\right) \Rightarrow \left( {p \Rightarrow \left( {\forall x}\right) q}\right) \) is an axiom of type \( {A}_{4} \), then by Lemma 4.2, the condition \( x \notin \operatorname{var}\left( p\right) \) is preserved by the semi-homomorphism \( \left( {\alpha ,\beta }\right) \), and so \( \alpha \left( a\right) \) is again an axiom. In all other cases, it is clear that the image of an axiom is an axiom. Thus if \( p \in {\mathcal{A}}^{\left( 1\right) } \cup A \), then \( \alpha \left( p\right) \in {\mathcal{A}}^{\left( 2\right) } \cup \alpha \left( A\right) \) , where \( {\mathcal{A}}^{\left( i\right) } \) is the set of axioms of \( \operatorname{Pred}\left( {{V}_{i},{\mathcal{R}}^{\left( i\right) }}\right) \) . Hence our desired result holds for \( n = 1 \) .
For \( n > 1 \), we may suppose by induction that \( \alpha \left( {p}_{1}\right) ,\ldots ,\alpha \left( {p}_{n - 1}\right) \) is a proof of \( \alpha \left( {p}_{n - 1}\right) \) from \( \alpha \left( A\right) \) . If \( {p}_{i} = {p}_{j} \Rightarrow {p}_{n} \) for some \( i, j < n \), then \( \alpha \left( {p}_{i}\right) = \alpha \left( {p}_{j}\right) \Rightarrow \alpha \left( {p}_{n}\right) \) , and the result holds. It remains only to consider the case that \( {p}_{n} = \left( {\forall x}\right) q \) , where some subsequence \( {q}_{1},\ldots ,{q}_{k} \) of \( {p}_{1},\ldots ,{p}_{n - 1} \) is a proof of \( q \) from some subset \( {A}_{0} \subseteq A \) with \( x \notin \operatorname{var}\left( {A}_{0}\right) \) . By induction, \( \alpha \left( {
1057_(GTM217)Model Theory
Definition 8.2.2
Definition 8.2.2 Let \( X \subseteq {\mathbb{M}}^{n} \) be definable. We say that \( A \subset {\mathbb{M}}^{\text{eq }} \) is a canonical base for \( X \) if, for all \( \sigma \in \operatorname{Aut}\left( \mathbb{M}\right) \), \( \sigma \) fixes \( X \) setwise if and only if \( \sigma \) fixes \( A \) pointwise. If \( p \in {S}_{n}\left( \mathbb{M}\right) \), then \( A \subseteq {\mathbb{M}}^{\text{eq }} \) is a canonical base for \( p \) if, for all \( \sigma \in \operatorname{Aut}\left( \mathbb{M}\right) \), \( {\sigma p} = p \) if and only if \( \sigma \) fixes \( A \) pointwise. For any theory, we can find canonical bases for definable sets in \( {\mathbb{M}}^{\text{eq }} \) . Lemma 8.2.3 Suppose that \( X \subseteq \mathbb{M} \) is definable. There is \( \alpha \in {\mathbb{M}}^{\text{eq }} \) such that \( \alpha \) is a canonical base for \( X \) . Indeed, if \( X \) is \( A \) -definable, we can find a canonical base in \( {\operatorname{dcl}}^{\mathrm{{eq}}}\left( A\right) \) . Proof Suppose that \( X \) is defined by the formula \( \phi \left( {\bar{x},\bar{a}}\right) \) . Let \( E \) be the \( \varnothing \) -definable equivalence relation \[ \bar{a}E\bar{b} \Leftrightarrow \forall \bar{x}\left( {\phi \left( {\bar{x},\bar{a}}\right) \leftrightarrow \phi \left( {\bar{x},\bar{b}}\right) }\right) . \] Let \( \alpha = \bar{a}/E \in {\mathbb{M}}^{\text{eq }} \) . Then, \( \alpha \) is a canonical base for \( X \) . Next, we consider canonical bases for types. We first note that canonical bases are determined up to definable closure in \( {\mathbb{M}}^{\text{eq }} \) . This works equally well for definable sets instead of types. Lemma 8.2.4 If \( A \) is a canonical base for \( p \in {S}_{n}\left( \mathbb{M}\right) \), then \( B \) is a canonical base for \( p \) if and only if \( {\operatorname{dcl}}^{\mathrm{{eq}}}\left( A\right) = {\operatorname{dcl}}^{\mathrm{{eq}}}\left( B\right) \) . 
Proof Suppose that \( C \subset \mathbb{M} \) and \( \left| C\right| < \left| \mathbb{M}\right| \) ; let \( \operatorname{Aut}\left( {\mathbb{M}/C}\right) \) denote the automorphisms of \( \mathbb{M} \) fixing \( C \) pointwise. The proof of Proposition 4.3.25 generalized to \( {\mathbb{M}}^{\text{eq }} \) shows that \[ {\operatorname{dcl}}^{\mathrm{{eq}}}\left( C\right) = \left\{ {x \in {\mathbb{M}}^{\mathrm{{eq}}} : \sigma \left( x\right) = x\text{ for all }\sigma \in \operatorname{Aut}\left( {\mathbb{M}/C}\right) }\right\} . \] Suppose that \( B \) is a canonical base for \( p \) . If \( \sigma \) is an automorphism fixing \( B \) pointwise, then \( {\sigma p} = p \) and \( \sigma \) fixes \( A \) pointwise. Thus \( A \subseteq {\operatorname{dcl}}^{\mathrm{{eq}}}\left( B\right) \) . Similarly, if \( \tau \) is an automorphism fixing \( A \) pointwise, \( {\tau p} = p \) and \( \tau \) fixes \( B \) pointwise. Thus \( B \subseteq {\operatorname{dcl}}^{\text{eq }}\left( A\right) \) . Hence \( {\operatorname{dcl}}^{\text{eq }}\left( A\right) = {\operatorname{dcl}}^{\text{eq }}\left( B\right) \) . Conversely, suppose that \( {\operatorname{dcl}}^{\mathrm{{eq}}}\left( A\right) = {\operatorname{dcl}}^{\mathrm{{eq}}}\left( B\right) \) . If \( \sigma \in \operatorname{Aut}\left( \mathbb{M}\right) \), then \( \sigma \) fixes \( A \) pointwise if and only if \( \sigma \) fixes \( B \) pointwise. Because \( A \) is a canonical base for \( p \), so is \( B \) . Definition 8.2.5 If \( A \) is any canonical base for \( p \), let \( \operatorname{cb}\left( p\right) = {\operatorname{dcl}}^{\mathrm{{eq}}}\left( A\right) \) . By Lemma 8.2.4, this definition of \( \operatorname{cb}\left( p\right) \) does not depend on the choice of \( A \) . It is easy to see that \( \operatorname{cb}\left( p\right) \) is the largest possible choice of canonical base for \( p \) . Using definability of types, it is easy to find canonical bases in \( \omega \) -stable theories. 
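The \( E \)-class trick in the proof of Lemma 8.2.3 can be made concrete by brute force in a tiny finite structure. The sketch below is purely illustrative (a finite group is of course not a monster model, and the choice of structure, formula, and parameter is ours): in \( \mathbb{M} = (\mathbb{Z}/7\mathbb{Z}, +) \), whose automorphisms are exactly \( x \mapsto kx \) for \( k = 1, \ldots, 6 \), it checks that an automorphism fixes the set defined by \( \phi(x, a) \) setwise exactly when it fixes the class \( \alpha = a/E \).

```python
# Toy check of Lemma 8.2.3 in M = (Z/7Z, +); illustrative only, since a
# finite structure is not a monster model.
n = 7
a = 2
# X is defined by phi(x, a): x = a or x = -a.
X = {a % n, (-a) % n}

def E(b, c):
    """b E c iff phi(x, b) and phi(x, c) define the same subset of M."""
    return {b % n, (-b) % n} == {c % n, (-c) % n}

# The automorphisms of (Z/7Z, +) are exactly x -> k*x for k = 1, ..., 6.
for k in range(1, n):
    fixes_X_setwise = {(k * x) % n for x in X} == X
    fixes_alpha = E((k * a) % n, a)   # sigma fixes the E-class a/E in M^eq
    assert fixes_X_setwise == fixes_alpha
print("every automorphism fixes X setwise iff it fixes alpha = a/E")
```

Here only \( k = 1 \) and \( k = 6 \) (i.e. \( x \mapsto -x \)) fix \( X = \{2, 5\} \) setwise, and these are exactly the automorphisms fixing \( \alpha \).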
Lemma 8.2.6 Suppose that \( \mathbb{M} \) is \( \omega \) -stable and \( p \in {S}_{n}\left( \mathbb{M}\right) \) . Then, \( p \) has a canonical base in \( {\mathbb{M}}^{\text{eq }} \) . Proof For each \( \mathcal{L} \) -formula \( \phi \left( {\bar{v},\bar{w}}\right) \), let \( {X}_{\phi } = \{ \bar{a} \in \mathbb{M} : \phi \left( {\bar{v},\bar{a}}\right) \in p\} \) . By definability of types, \( {X}_{\phi } \) is definable. If \( \sigma \) is an automorphism of \( \mathbb{M} \), then \( {\sigma p} = p \) if and only if \( \sigma \) fixes each \( {X}_{\phi } \) setwise. Let \( {\alpha }_{\phi } \in {\mathbb{M}}^{\text{eq }} \) be a canonical base for \( {X}_{\phi } \), and let \( A = \left\{ {{\alpha }_{\phi } : \phi }\right. \) an \( \mathcal{L} \) -formula \( \} \) . Then, \( {\sigma p} = p \) if and only if \( \sigma \) fixes \( A \) pointwise. Thus, \( A \) is a canonical base for \( p \) . A more careful analysis shows that we can always find a finite canonical base in \( {\operatorname{acl}}^{\text{eq }}\left( A\right) \) for any \( A \) over which \( p \) does not fork. Theorem 8.2.7 Suppose that \( \mathbb{M} \) is \( \omega \) -stable and \( p \in {S}_{n}\left( \mathbb{M}\right) \) does not fork over \( A \subseteq \mathbb{M} \) . There is \( \alpha \in {\operatorname{acl}}^{\mathrm{{eq}}}\left( A\right) \), a canonical base for \( p \) . If \( p \mid A \) is stationary, then we can find a canonical base \( \alpha \in {\operatorname{dcl}}^{\mathrm{{eq}}}\left( A\right) \) . Proof Suppose that \( \phi \left( {\bar{v},\bar{w}}\right) \) is an \( \mathcal{L} \) -formula such that \( \phi \left( {\bar{v},\bar{a}}\right) \in p \), \( \operatorname{RM}\left( {\phi \left( {\bar{v},\bar{a}}\right) }\right) = \operatorname{RM}\left( p\right) \), and \( {\deg }_{\mathrm{M}}\left( {\phi \left( {\bar{v},\bar{a}}\right) }\right) = 1 \) ; the Morley degree condition will be used below. Let \( X = \{ \bar{b} : \phi \left( {\bar{v},\bar{b}}\right) \in p\} \) . By definability of types, \( X \) is definable. 
Indeed, by Theorem 6.3.9, \( X \) is definable over \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( A\right) \) and, if \( p \mid A \) is stationary, \( X \) is definable over \( A \) . Claim If \( \sigma \) is an automorphism of \( \mathbb{M} \), then \( {\sigma p} = p \) if and only if \( {\sigma X} = X \) . If \( {\sigma p} = p \), then \[ \bar{c} \in X \Leftrightarrow \phi \left( {\bar{v},\bar{c}}\right) \in p \Leftrightarrow \phi \left( {\bar{v},\bar{c}}\right) \in {\sigma p} \Leftrightarrow \bar{c} \in {\sigma X}. \] Thus \( {\sigma X} = X \) . Conversely, suppose \( {\sigma X} = X \) . Then, \( \bar{a} \in {\sigma X} \) and \( \phi \left( {\bar{v},\bar{a}}\right) \in {\sigma p} \) . Because \( \operatorname{RM}\left( p\right) = \operatorname{RM}\left( {\sigma \left( p\right) }\right) \) and \( {\deg }_{\mathrm{M}}\left( {\phi \left( {\bar{v},\bar{a}}\right) }\right) = 1,{\sigma p} = p \) . Thus, \( B \subset {\mathbb{M}}^{\text{eq }} \) is a canonical base for \( p \) if and only if \( B \) is a canonical base for \( X \) . Because \( X \) is \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( A\right) \) -definable, by Lemma 8.2.4, we can find a canonical base in \( {\operatorname{acl}}^{\mathrm{{eq}}}\left( A\right) \) . If \( p \mid A \) is stationary, then \( X \) is \( A \) -definable and we can find a canonical base in \( {\operatorname{dcl}}^{\text{eq }}\left( A\right) \) . If \( p \in {S}_{n}\left( \mathbb{M}\right) \), then \( p \) is definable over any canonical base. Thus, the converse to Theorem 8.2.7 holds as well. Corollary 8.2.8 Suppose \( \mathbb{M} \) is \( \omega \) -stable, then \( p \in {S}_{n}\left( \mathbb{M}\right) \) does not fork over \( A \) if and only if \( \operatorname{cb}\left( p\right) \subseteq {\operatorname{acl}}^{\mathrm{{eq}}}\left( A\right) \) . 
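As a standard illustration of Definition 8.2.5 and Corollary 8.2.8 (this worked example is ours, not part of the surrounding text), one can compute canonical bases for generic types of lines over a model of ACF:

```latex
% Canonical base of the generic type of a line in ACF (illustrative sketch).
Let $\mathbb{M} \models \mathrm{ACF}$, let $a, b \in \mathbb{M}$, and let
$p_{a,b} \in S_2(\mathbb{M})$ be the generic type of the line
$L_{a,b} = \{(x, y) : y = ax + b\}$, i.e.\ the type of $(t, at + b)$ for $t$
transcendental over $\mathbb{M}$.  For $\sigma \in \operatorname{Aut}(\mathbb{M})$
we have $\sigma p_{a,b} = p_{\sigma(a),\sigma(b)}$, and distinct pairs $(a,b)$
give distinct lines, so $\sigma p_{a,b} = p_{a,b}$ if and only if
$\sigma(a) = a$ and $\sigma(b) = b$.  Thus $(a, b)$ is a canonical base for
$p_{a,b}$ and $\operatorname{cb}(p_{a,b}) = \operatorname{dcl}^{\mathrm{eq}}(a, b)$.
By Corollary 8.2.8, $p_{a,b}$ does not fork over $A$ if and only if
$a, b \in \operatorname{acl}^{\mathrm{eq}}(A)$.
```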
We say that the canonical base for a set \( X \) (or a type \( p \) ) has rank \( \alpha \) if \( \alpha \) is least such that there is a canonical base \( b \in {\mathbb{M}}^{\text{eq }} \) with \( \operatorname{RM}\left( b\right) = \alpha \) . For strongly minimal theories, we can compute ranks in \( {\mathbb{M}}^{\text{eq }} \) using the following elimination of imaginaries result of Lascar and Pillay. We have already proved a special case in Lemma 3.2.19, and the proof of the general case is a straightforward generalization, which we leave as an exercise. Lemma 8.2.9 Let \( \mathbb{M} \) be a strongly minimal set and let \( X \subset \mathbb{M} \) be infinite. Suppose that \( E \) is an \( \varnothing \) -definable equivalence relation on \( {\mathbb{M}}^{m} \) . Let \( \overline{a} \in {\mathbb{M}}^{m} \) and \( \alpha = \bar{a}/E \) . There is a finite \( C \subset {\mathbb{M}}^{k} \) (for some \( k \) ) such that any automorphism of \( \mathbb{M} \) fixing \( X \) pointwise fixes \( \alpha \) if and only if it fixes \( C \) setwise. In particular, if \( {\mathbb{M}}_{X} \) is \( \mathbb{M} \) viewed as an \( {\mathcal{L}}_{X} \) -structure, then for every \( \alpha \in {\mathbb{M}}_{X}^{\text{eq }} \) there is \( \bar{d} \in \mathbb{M} \) such that \( {\operatorname{acl}}^{\text{eq }}\left( {\alpha, X}\right) = {\operatorname{acl}}^{\text{eq }}\left( {\bar{d}, X}\right) \) . Suppose that \( \mathbb{M} \) is strongly minimal and \( \alpha \in {\mathbb{M}}^{\text{eq }} \) . Let \( X \subset \mathbb{M} \) be infinite such that \( \alpha \downarrow X \) . Because \( \operatorname{RM}\left( \alpha \right) = \operatorname{RM}\left( {\alpha /X}\right) \), it suffices to calculate \( \operatorname{RM}\left( {\alpha /X}\right) \) . By Lemma 8.2.9, there is \( \bar{d} \in \mathbb{M} \) such that \( \alpha \) is interalgebraic with \( \bar{d} \) over \( X \) . 
Then, \( \operatorname{RM}\left( \alpha \right) \) is equal to \( \operatorname{RM}\left( {\bar{d}/X}\right) \) . ## Families of Plane Curves Using canonical bases, we can make precise the idea that in locally modular strongly minimal sets families of plane curves are "essentially one-dimensional." Definition 8.2.10 Suppose that \( D \subset {\mathbb{M}}^{n} \) is strongly minimal and \( \phi \) is the strongly minimal formula defining \( D \) . We say that \( D \) is linear if for all \(
1225_(Griffiths) Introduction to Algebraic Curves
Definition 4.7
Definition 4.7. Let \( C \) be a Riemann surface with \( \omega \in {K}^{1}\left( C\right), p \in C \) , \( {\gamma }_{p} \) a small circle around the point \( p \), and \( \omega \) having no poles other than \( p \) on the disc surrounded by \( {\gamma }_{p} \) ( \( p \) itself may or may not be a pole). Then we define the residue at the point \( p \) of \( \omega \) to be \[ {\operatorname{Res}}_{p}\left( \omega \right) = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}\omega \] From Stokes' Theorem, this definition is independent of the choice of the small circle \( {\gamma }_{p} \) . Moreover, for \( p \in {U}_{j},{\gamma }_{p} \subset {U}_{j} \), if \( \omega = {f}_{j}\left( {z}_{j}\right) d{z}_{j} \) in \( {U}_{j} \), then \[ {\operatorname{Res}}_{p}\left( \omega \right) = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}\omega = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}{f}_{j}\left( {z}_{j}\right) d{z}_{j} \] \[ = {\operatorname{Res}}_{p}\left( {{f}_{j}\left( {z}_{j}\right) d{z}_{j}}\right) \text{.} \] THEOREM 4.8 (RESIDUE THEOREM). Suppose \( C \) is a compact Riemann surface. For \( \omega \in {K}^{1}\left( C\right) \), we have \[ \mathop{\sum }\limits_{{p \in C}}{\operatorname{Res}}_{p}\left( \omega \right) = 0 \] Proof. Since \( C \) is compact, \( \omega \) can have only a finite number of poles on \( C,{p}_{1},{p}_{2},\ldots ,{p}_{m} \) . Now choose mutually disjoint small discs \( {\Delta }_{1},{\Delta }_{2},\ldots ,{\Delta }_{m} \) so that each contains a distinct \( {p}_{i} \) and each satisfies the conditions in Definition 4.7. 
Write \[ \Omega = C \smallsetminus \mathop{\bigcup }\limits_{i}{\Delta }_{i} \] Clearly, for a suitably chosen orientation, we have \[ \partial \Omega = - \mathop{\bigcup }\limits_{i}\partial {\Delta }_{i} \] and from Stokes' Theorem, we obtain \[ {2\pi i}\mathop{\sum }\limits_{{p \in C}}{\operatorname{Res}}_{p}\left( \omega \right) = {2\pi i}\mathop{\sum }\limits_{{i = 1}}^{m}{\operatorname{Res}}_{{p}_{i}}\left( \omega \right) \] \[ = \mathop{\sum }\limits_{{i = 1}}^{m}{\int }_{\partial {\Delta }_{i}}\omega \] \[ = - {\int }_{\partial \Omega }\omega = 0.\;\text{ Q.E.D. } \] THEOREM 4.9. Let \( C \) be a compact Riemann surface. If \( f \in K\left( C\right) \) is not a constant function, then \[ \mathop{\sum }\limits_{{p \in C}}{\nu }_{p}\left( f\right) = 0 \] This implies in particular that the number of zeroes of \( f \) is equal to the number of poles of \( f \) (counting multiplicity): \[ \# \{ \text{ zeroes of }f\} = \# \{ \text{ poles of }f\} . \] Proof. Choose \( \omega = {df}/f \) and apply the above residue theorem. Q.E.D. COROLLARY 4.10. If \( f \in K\left( C\right) \) is not constant, then for any \( a \in \mathbb{C} \), we have \[ \# {f}^{-1}\left( a\right) = \# \{ \text{ poles of }f\} . \] Here, \( \# {f}^{-1}\left( a\right) \) is the number of points \( p \) such that \( f\left( p\right) = a \), counting each such point with suitable multiplicity. This says that any complex number is a functional value of \( f \), and every functional value is assumed the same number of times (counting multiplicity). REMARK 4.11. If we think of \( f \in K\left( C\right) \) as being a holomorphic mapping from \( C \) onto the Riemann sphere \( S \), then the foregoing conclusion can be interpreted as follows: any point on \( S \) is an image point of \( f \), and the number of inverse image points of each \( q \in S \) does not depend on \( q \) (counting multiplicity). ## §5. 
Differential forms A complex number \( z = x + {iy} \in \mathbb{C} \) also represents a pair of real numbers \( \left( {x, y}\right) \in {\mathbb{R}}^{2} \) . Often, when one considers the complex structure of \( \mathbb{C} \), one also considers the corresponding \( {\mathbb{R}}^{2} \) . For example, a function \( f : \mathbb{C} \rightarrow \mathbb{C} \) not only may be thought of as a mapping \( {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{2} \) \[ u = u\left( {x, y}\right) \] \[ v = v\left( {x, y}\right) \] but also as a mapping \( {\mathbb{R}}^{2} \rightarrow \mathbb{C} \) \[ f = u\left( {x, y}\right) + {iv}\left( {x, y}\right) . \] If whenever we consider the mapping \( f : \mathbb{C} \rightarrow \mathbb{C} \) we also consider the corresponding mapping \( {\mathbb{R}}^{2} \rightarrow \mathbb{C} \), it will be convenient to fix the following notation: Notation. \[ {dz} = {dx} + {idy},\;d\bar{z} = {dx} - {idy}, \] \[ \frac{\partial f}{\partial z} = \frac{1}{2}\left( {\frac{\partial f}{\partial x} - i\frac{\partial f}{\partial y}}\right) \] \[ \frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left( {\frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y}}\right) \] \[ {df} = \frac{\partial f}{\partial x}{dx} + \frac{\partial f}{\partial y}{dy} = \frac{\partial f}{\partial z}{dz} + \frac{\partial f}{\partial \bar{z}}d\bar{z}. \] Clearly \( \partial f/\partial \bar{z} = 0 \) is equivalent to the Cauchy-Riemann equations: \[ \left\{ \begin{array}{l} \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \\ \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x} \end{array}\right. \] \[ f = u\left( {x, y}\right) + {iv}\left( {x, y}\right) . 
\] Consequently, \( f \) is holomorphic if and only if \[ \frac{\partial f}{\partial \bar{z}} = 0 \] For the elements of a given vector space \( V \), we can define a bilinear and antisymmetric exterior product \( \land \) \[ \left( {a{v}_{1}}\right) \land {v}_{2} = a\left( {{v}_{1} \land {v}_{2}}\right) \] \[ \left( {{v}_{1} + {v}_{1}^{\prime }}\right) \land {v}_{2} = {v}_{1} \land {v}_{2} + {v}_{1}^{\prime } \land {v}_{2} \] \[ {v}_{2} \land {v}_{1} = - {v}_{1} \land {v}_{2} \] Using the preceding notational convention, we have the following relationship between the exterior product of the real differentials \( {dx},{dy} \) and that of the complex differentials \( {dz}, d\bar{z} \) \[ {dz} \land d\bar{z} = - 2i\,{dx} \land {dy}. \] DEFINITION 5.1. Let \( C \) be a Riemann surface with holomorphic coordinate covering \( \left\{ \left( {{U}_{i},{z}_{i}}\right) \right\} \) . A differential one-form \( \lambda \) on \( C \) is given by a family of local expressions \[ {\lambda }_{i} = {f}_{i}\left( {{z}_{i},{\bar{z}}_{i}}\right) d{z}_{i} + {g}_{i}\left( {{z}_{i},{\bar{z}}_{i}}\right) d{\bar{z}}_{i}, \] where \( {f}_{i},{g}_{i} \) are smooth functions which obey the following transformation laws: \[ {f}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) ,\overline{{\varphi }_{ij}\left( {z}_{j}\right) }}\right) \frac{d{\varphi }_{ij}\left( {z}_{j}\right) }{d{z}_{j}} = {f}_{j}\left( {{z}_{j},{\bar{z}}_{j}}\right) , \] \[ {g}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) ,\overline{{\varphi }_{ij}\left( {z}_{j}\right) }}\right) \overline{\frac{d{\varphi }_{ij}\left( {z}_{j}\right) }{d{z}_{j}}} = {g}_{j}\left( {{z}_{j},{\bar{z}}_{j}}\right) . \] DEFINITION 5.2. We define the exterior derivative \( {d\lambda } \) of the differential one-form \( \lambda = \left\{ {{f}_{i}d{z}_{i} + {g}_{i}d{\bar{z}}_{i}}\right\} \) as \[ {d\lambda } = \left\{ {d{f}_{i} \land d{z}_{i} + d{g}_{i} \land d{\bar{z}}_{i}}\right\} = \left\{ {\left( {\frac{\partial {g}_{i}}{\partial {z}_{i}} - \frac{\partial {f}_{i}}{\partial {\bar{z}}_{i}}}\right) d{z}_{i} \land d{\bar{z}}_{i}}\right\} . \] This definition is easily seen to be well defined. Definition 5.3. The differential one-form \( \lambda \) on \( C \) is called a closed one-form if \( {d\lambda } = 0 \) . If there exists a smooth function \( f \) such that \[ \lambda = {df} = \frac{\partial f}{\partial z}{dz} + \frac{\partial f}{\partial \bar{z}}d\bar{z} \] then \( \lambda \) is called an exact one-form. REMARK 5.4. From \[ \frac{{\partial }^{2}f}{\partial z\partial \bar{z}} = \frac{{\partial }^{2}f}{\partial \bar{z}\partial z} \] we have \[ d\left( {df}\right) = d\left( {\frac{\partial f}{\partial z}{dz} + \frac{\partial f}{\partial \bar{z}}d\bar{z}}\right) = \frac{{\partial }^{2}f}{\partial \bar{z}\partial z}d\bar{z} \land {dz} + \frac{{\partial }^{2}f}{\partial z\partial \bar{z}}{dz} \land d\bar{z} = \left( {\frac{{\partial }^{2}f}{\partial z\partial \bar{z}} - \frac{{\partial }^{2}f}{\partial \bar{z}\partial z}}\right) {dz} \land d\bar{z} = 0, \] and thus any exact form is always closed. Theorem 5.5 (Stokes’ Theorem for differential forms). Suppose \( C \) is a Riemann surface, \( \Omega \) is an open set in \( C \) with \( \bar{\Omega } \) compact, \( \partial \Omega \) piecewise smooth, and \( \lambda \) is a differential one-form defined on an open set containing \( \bar{\Omega } \) . Then \[ {\int }_{\partial \Omega }\lambda = {\iint }_{\Omega }{d\lambda } \] The proof of this theorem can be found in any text containing a discussion of differentiable manifolds. COROLLARY 5.6. If \( \lambda \) is a differential one-form on a compact Riemann surface \( C \), then \[ {\iint }_{C}{d\lambda } = 0 \] ## §6. The Poincaré-Hopf formula Suppose \( C \) is a compact Riemann surface. 
In \( §4 \) we have already proven that for a nonconstant \( f \in K\left( C\right) \), we have \[ \mathop{\sum }\limits_{{p \in C}}{\nu }_{p}\left( f\right) = 0 \] It is then natural to consider a similar problem: for a nontrivial \( \omega \in \) \( {K}^{1}\left( C\right) \) \[ \mathop{\sum }\limits_{{p \in C}}{\nu }_{p}\left( \omega \right) = ? \] In order to discuss this problem, let us first make some preparations. Definition 6.1. Suppose \[ f : {S}^{1} \rightarrow {\mathbb{R}}^{2} \smallsetminus \{ O\} \] is a smooth mapping. Then its winding number is by definition \[ \frac{1}{2\pi }{\oint }_{{S}^{1}}{f}^{ * }\frac{{udv} - {vdu}}{{u}^{2} + {v}^{2}} = \frac{1}{2\pi }{\oint }_{{S}^{1}}d\arg f. \] The winding number of \( f \) counts the number of times \( f\left( z\right) \) winds around the origin \( O \in {\mathbb{R}}^{2} \) as \( z \) goes around \( {S}^{1} \) once. (This can be seen from the above definition.) From thi
1343_[鄂维南&李铁军&Vanden-Eijnden] Applied Stochastic Analysis -GSM199
Definition 1.30
Definition 1.30 (Convergence in \( {L}^{p} \) ). \( {X}_{n} \) converges to \( X \) in \( {L}^{p}(0 < p < \) \( \infty \) ) if (1.44) \[ \mathbb{E}{\left| {X}_{n} - X\right| }^{p} \rightarrow 0 \] For \( p = 1 \), this is also referred to as convergence in mean; for \( p = 2 \), this is referred to as convergence in mean square. Remark 1.31. Note that the power \( p \) does not need to be greater than 1 since it still gives a metric for the random variables. But only when \( p \geq 1 \) do we get a norm. The statements in this section hold when \( 0 < p < \infty \) . We have the following relations between different notions of convergence. Theorem 1.32. (i) Almost sure convergence implies convergence in probability. (ii) Convergence in probability implies almost sure convergence along a subsequence. (iii) If \( p < q \), then convergence in \( {L}^{q} \) implies convergence in \( {L}^{p} \) . (iv) Convergence in \( {L}^{p} \) implies convergence in probability. (v) Convergence in probability implies convergence in distribution. Proof. (i) Note that \[ \mathbb{P}\left( {\left| {{X}_{n}\left( \omega \right) - X\left( \omega \right) }\right| > \epsilon }\right) = {\int }_{\Omega }{\chi }_{\left\{ \left| {X}_{n} - X\right| > \epsilon \right\} }\left( \omega \right) \mathbb{P}\left( {d\omega }\right) \rightarrow 0 \] by the almost sure convergence and dominated convergence theorems. (ii) The proof will be deferred to Section 1.11. (iii) This is a consequence of the Hölder inequality: \[ \mathbb{E}{\left| {X}_{n} - X\right| }^{p} \leq {\left( \mathbb{E}{\left( {\left| {X}_{n} - X\right| }^{p}\right) }^{\frac{q}{p}}\right) }^{\frac{p}{q}} = {\left( \mathbb{E}{\left| {X}_{n} - X\right| }^{q}\right) }^{\frac{p}{q}},\;p < q. 
\] (iv) This is a consequence of the Chebyshev inequality: \[ \mathbb{P}\left( {\omega : \left| {{X}_{n}\left( \omega \right) - X\left( \omega \right) }\right| > \epsilon }\right) \leq \frac{1}{{\epsilon }^{p}}\mathbb{E}{\left| {X}_{n} - X\right| }^{p} \] for any \( \epsilon > 0 \) . (v) Argue by contradiction. Suppose there exist a bounded continuous function \( f\left( x\right) \) and a subsequence \( {X}_{{n}_{k}} \) such that (1.45) \[ \mathbb{E}f\left( {X}_{{n}_{k}}\right) \nrightarrow \mathbb{E}f\left( X\right) ,\;k \rightarrow \infty . \] From assertion (ii), there exists a further subsequence of \( \left\{ {X}_{{n}_{k}}\right\} \) (still denoted as \( \left. \left\{ {X}_{{n}_{k}}\right\} \right) \) such that \( {X}_{{n}_{k}} \) converges to \( X \) almost surely. This contradicts (1.45) by the dominated convergence theorem. ## 1.9. Characteristic Function The characteristic function of a random variable \( X \) is defined as (1.46) \[ f\left( \xi \right) = \mathbb{E}{e}^{i\xi X} = {\int }_{\mathbb{R}}{e}^{i\xi x}\mu \left( {dx}\right) . \] Proposition 1.33. The characteristic function has the following properties: (1) \( \forall \xi \in \mathbb{R},\left| {f\left( \xi \right) }\right| \leq 1, f\left( \xi \right) = \overline{f\left( {-\xi }\right) }, f\left( 0\right) = 1 \) ; (2) \( f \) is uniformly continuous on \( \mathbb{R} \) . Proof. The proof of the first statements is straightforward. For the second statement, we have \[ \left| {f\left( {\xi }_{1}\right) - f\left( {\xi }_{2}\right) }\right| = \left| {\mathbb{E}\left( {{e}^{i{\xi }_{1}X} - {e}^{i{\xi }_{2}X}}\right) }\right| = \left| {\mathbb{E}\left( {{e}^{i{\xi }_{1}X}\left( {1 - {e}^{i\left( {{\xi }_{2} - {\xi }_{1}}\right) X}}\right) }\right) }\right| \] \[ \leq \mathbb{E}\left| {1 - {e}^{i\left( {{\xi }_{2} - {\xi }_{1}}\right) X}}\right| . 
\] Since the integrand \( \left| {1 - {e}^{i\left( {{\xi }_{2} - {\xi }_{1}}\right) X}}\right| \) depends only on the difference between \( {\xi }_{1} \) and \( {\xi }_{2} \) and since it tends to 0 almost surely as the difference goes to 0 , uniform continuity follows immediately from the dominated convergence theorem. Example 1.34. Here are the characteristic functions of some typical distributions: (1) Bernoulli distribution. \[ f\left( \xi \right) = q + p{e}^{i\xi } \] (2) Binomial distribution \( B\left( {n, p}\right) \) . \[ f\left( \xi \right) = {\left( q + p{e}^{i\xi }\right) }^{n}. \] (3) Poisson distribution \( \mathcal{P}\left( \lambda \right) \) . \[ f\left( \xi \right) = {e}^{\lambda \left( {{e}^{i\xi } - 1}\right) }. \] (4) Exponential distribution \( \mathcal{E}\left( \lambda \right) \) . \[ f\left( \xi \right) = {\left( 1 - {\lambda }^{-1}i\xi \right) }^{-1}. \] (5) Normal distribution \( N\left( {\mu ,{\sigma }^{2}}\right) \) . (1.47) \[ f\left( \xi \right) = \exp \left( {{i\mu \xi } - \frac{{\sigma }^{2}{\xi }^{2}}{2}}\right) . \] (6) Multivariate normal distribution \( N\left( {\mathbf{\mu },\mathbf{\Sigma }}\right) \) . (1.48) \[ f\left( \mathbf{\xi }\right) = \exp \left( {i{\mathbf{\mu }}^{T}\mathbf{\xi } - \frac{1}{2}{\mathbf{\xi }}^{T}\mathbf{\Sigma }\mathbf{\xi }}\right) . \] The property (1.48) is also used to define degenerate multivariate Gaussian distributions. The following result gives an explicit characterization of the weak convergence of probability measures in terms of their characteristic functions. This is the key ingredient in the proof of the central limit theorem. Theorem 1.35 (Lévy’s continuity theorem). Let \( {\left\{ {\mu }_{n}\right\} }_{n \in \mathbb{N}} \) be a sequence of probability measures, and let \( {\left\{ {f}_{n}\right\} }_{n \in \mathbb{N}} \) be their corresponding characteristic functions. Assume that: (1) \( {f}_{n} \) converges everywhere on \( \mathbb{R} \) to a limiting function \( f \) . 
(2) \( f \) is continuous at \( \xi = 0 \) . Then there exists a probability distribution \( \mu \) such that \( {\mu }_{n}\overset{d}{ \rightarrow }\mu \) . Moreover \( f \) is the characteristic function of \( \mu \) . Conversely, if \( {\mu }_{n}\overset{d}{ \rightarrow }\mu \), where \( \mu \) is some probability distribution, then \( {f}_{n} \) converges to \( f \) uniformly on every finite interval, where \( f \) is the characteristic function of \( \mu \) . For a proof, see [Chu01]. As for Fourier transforms, one can also define the inverse transform \[ \rho \left( x\right) = \frac{1}{2\pi }{\int }_{\mathbb{R}}{e}^{-{i\xi x}}f\left( \xi \right) {d\xi } \] An interesting question arises as to when this gives the density of a probability measure. To address this, we introduce the following notion. Definition 1.36. A function \( f \) is called positive semidefinite if for any finite set of values \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{n}}\right\}, n \in \mathbb{N} \), the matrix \( {\left( f\left( {\xi }_{i} - {\xi }_{j}\right) \right) }_{i, j = 1}^{n} \) is positive semidefinite; i.e., (1.49) \[ \mathop{\sum }\limits_{{i, j}}f\left( {{\xi }_{i} - {\xi }_{j}}\right) {v}_{i}{\bar{v}}_{j} \geq 0 \] for every set of values \( {v}_{1},\ldots ,{v}_{n} \in \mathbb{C} \) . Theorem 1.37 (Bochner’s theorem). A function \( f \) is the characteristic function of a probability measure if and only if it is positive semidefinite and continuous at 0 with \( f\left( 0\right) = 1 \) . Proof. We only prove the necessity part. The other part is less trivial and readers may consult [Chu01]. Assume that \( f \) is a characteristic function with associated probability measure \( \mu \) . Then (1.50) \[ \mathop{\sum }\limits_{{i, j = 1}}^{n}f\left( {{\xi }_{i} - {\xi }_{j}}\right) {v}_{i}{\bar{v}}_{j} = {\int }_{\mathbb{R}}{\left| \mathop{\sum }\limits_{{i = 1}}^{n}{v}_{i}{e}^{i{\xi }_{i}x}\right| }^{2}\mu \left( {dx}\right) \geq 0. \] ## 1.10. 
Generating Function and Cumulants For a discrete random variable, its generating function is defined as (1.51) \[ G\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }P\left( {X = {x}_{k}}\right) {x}^{k}. \] One immediately has \[ P\left( {X = {x}_{k}}\right) = {\left. \frac{1}{k!}{G}^{\left( k\right) }\left( x\right) \right| }_{x = 0}. \] Definition 1.38. The convolution of two sequences \( \left\{ {a}_{k}\right\} ,\left\{ {b}_{k}\right\} ,\left\{ {c}_{k}\right\} = \) \( \left\{ {a}_{k}\right\} * \left\{ {b}_{k}\right\} \), is defined by (1.52) \[ {c}_{k} = \mathop{\sum }\limits_{{j = 0}}^{k}{a}_{j}{b}_{k - j} \] It is easy to show that the generating functions defined by \[ A\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }{a}_{k}{x}^{k},\;B\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }{b}_{k}{x}^{k},\;C\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }{c}_{k}{x}^{k} \] with \( \left\{ {c}_{k}\right\} = \left\{ {a}_{k}\right\} * \left\{ {b}_{k}\right\} \) satisfy \( C\left( x\right) = A\left( x\right) B\left( x\right) \) . The following result is more or less obvious. Theorem 1.39. Let \( X \) and \( Y \) be two independent random variables with probability distribution \[ P\left( {X = j}\right) = {a}_{j},\;P\left( {Y = k}\right) = {b}_{k}, \] respectively, and let \( A \) and \( B \) be the corresponding generating functions. Then the generating function of \( X + Y \) is \( C\left( x\right) = A\left( x\right) B\left( x\right) \) . This can be used to compute some generating functions. (a) Bernoulli distribution: \( G\left( x\right) = q + {px} \) . (b) Binomial distribution: \( G\left( x\right) = {\left( q + px\right) }^{n} \) . (c) Poisson distribution: \( G\left( x\right) = {e}^{-\lambda + {\lambda x}} \) . 
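Theorem 1.39 and the convolution formula (1.52) are easy to check numerically. The sketch below (plain Python; the helper name and the parameter values are ours) convolves three Bernoulli(\( p \)) pmfs and confirms that the result is the Binomial(3, \( p \)) pmf, i.e. the coefficients of \( G(x) = (q + px)^3 \).

```python
from math import comb

def convolve(a, b):
    """Convolution (1.52): c_k = sum_j a_j * b_{k-j} for finite sequences."""
    c = [0.0] * (len(a) + len(b) - 1)
    for j, aj in enumerate(a):
        for m, bm in enumerate(b):
            c[j + m] += aj * bm
    return c

p, q = 0.3, 0.7
bern = [q, p]            # pmf of Bernoulli(p): P(X = 0) = q, P(X = 1) = p
pmf = bern
for _ in range(2):       # pmf of a sum of three independent Bernoulli(p)'s
    pmf = convolve(pmf, bern)

# Theorem 1.39: this must agree with the Binomial(3, p) pmf, whose
# generating function is (q + p*x)**3.
binom3 = [comb(3, k) * p**k * q**(3 - k) for k in range(4)]
assert all(abs(u - v) < 1e-12 for u, v in zip(pmf, binom3))
```

The same loop with more factors reproduces Binomial(\( n, p \)) for any \( n \), mirroring the identity \( C(x) = A(x)B(x) \) for generating functions.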
The moment generating function of a random variable \( X \) is defined for all values of \( t \) by (1.53) \[ M\left( t\right) = \mathbb{E}{e}^{tX} = \left\{ \begin{array}{ll} \mathop{\sum }\limits_{x}p\left( x\right) {e}^{tx}, & X\text{ is discrete-valued,} \\ {\int }_{\mathbb{R}}p\left( x\right) {e}^{tx}{dx}, & X\text{ is continuous,} \end{array}\right. \] provided that \( {e}^{tX} \) is integrable. It is obvious that \( M\left( 0\right) = 1 \) . Once \( M\left( t\right) \) is defined, one can show \( M\left( t\right) \in {C}^{\infty } \) in its domain and its relation to the \( n \) th moment (1.54) \[ {M}^{\left( n\right) }\left( t\right) = \mathbb{E
1172_(GTM8)Axiomatic Set Theory
Definition 3.5
Definition 3.5. If \( \langle X, T\rangle \) is a topological space and \( A \subseteq X \), then \( A \) is a Borel set iff \( A \) belongs to the \( \sigma \) -subalgebra, generated by \( T \), of the natural Boolean algebra on \( \mathcal{P}\left( X\right) \) . Theorem 3.6. If \( \langle X, T\rangle \) is a topological space, if \[ {A}_{0} = T \cup \{ X - a \mid a \in T\} \] \[ {A}_{\alpha + 1} = \left\{ {a \mid \left( {\exists f \in {\left( {A}_{\alpha }\right) }^{\omega }}\right) \left\lbrack {\left\lbrack {a = \mathop{\bigcup }\limits_{{i < \omega }}f\left( i\right) }\right\rbrack \vee \left\lbrack {a = \mathop{\bigcap }\limits_{{i < \omega }}f\left( i\right) }\right\rbrack }\right\rbrack }\right\} \] \[ {A}_{\alpha } = \mathop{\bigcup }\limits_{{\beta < \alpha }}{A}_{\beta },\;\alpha \in {K}_{\mathrm{{II}}}, \] then \( A = \mathop{\bigcup }\limits_{{\alpha \in {\aleph }_{1}}}{A}_{\alpha } \) is the set of all Borel sets in \( X \) . Proof. Clearly each element in \( {A}_{0} \) is a Borel set. If \( {A}_{\alpha } \) is a collection of Borel sets then so is \( {A}_{\alpha + 1} \) . If for \( \beta < \alpha ,{A}_{\beta } \) is a collection of Borel sets then \[ \mathop{\bigcup }\limits_{{\beta < \alpha }}{A}_{\beta } \] is a collection of Borel sets. Therefore \( A \) is a collection of Borel sets. To prove that \( A \) contains all Borel sets it is sufficient to prove that \( A \) is a Boolean \( \sigma \) -algebra. Since \( {A}_{0} \subseteq A \) and \( \mathbf{0},\mathbf{1} \in {A}_{0} \) we have \( \mathbf{0},\mathbf{1} \in A \) . Since union and intersection are associative, commutative, and distributive we need only prove that \( A \) has the closure and \( \sigma \) -closure properties. We first note that \( \alpha < \beta \) implies \( {A}_{\alpha } \subseteq {A}_{\beta } \) . 
If \[ {b}_{0},{b}_{1},\ldots \] is an \( \omega \) -sequence of elements of \( A \) then there exists an \( \omega \) -sequence of ordinals \[ {\alpha }_{0},{\alpha }_{1},\ldots \] each less than \( {\aleph }_{1} \) and such that \( {b}_{0} \in {A}_{{\alpha }_{0}},{b}_{1} \in {A}_{{\alpha }_{1}},\ldots \) . Since \( \left\{ {{\alpha }_{0},{\alpha }_{1},\ldots }\right\} \) is a set it has a supremum that is also less than \( {\aleph }_{1} \) . Therefore \[ \left( {\exists \alpha < {\aleph }_{1}}\right) \left( {\forall i < \omega }\right) \left\lbrack {{b}_{i} \in {A}_{\alpha }}\right\rbrack . \] Then \[ \mathop{\sum }\limits_{{i < \omega }}{b}_{i} \in {A}_{\alpha + 1} \land \mathop{\prod }\limits_{{i < \omega }}{b}_{i} \in {A}_{\alpha + 1} \] Definition 3.7. If \( X \) is a topological space and \( A \subseteq X \) then 1. \( A \) is nowhere dense iff \( {A}^{-0} = 0 \) . 2. \( A \) is meager iff \( A \) is the union of countably many nowhere dense sets i.e., \( A = \mathop{\bigcup }\limits_{{i < \omega }}{A}_{i} \) where \( \forall i < \omega ,{A}_{i} \) is nowhere dense. Theorem 3.8. If \( X \) is a topological space and \( A \subseteq X \) then 1. \( A \) is open implies \( {A}^{ - } - A \) is meager. 2. \( A \) is closed implies \( A - {A}^{0} \) is meager. Proof. \[ \text{1.}{\left( {A}^{ - } - A\right) }^{0} = {\left\lbrack {A}^{ - } \cap \left( X - A\right) \right\rbrack }^{-0} \subseteq {A}^{-0} \cap {\left( X - A\right) }^{-0} \] \[ = {A}^{-0} \cap \left( {X - {A}^{0 - }}\right) \] since \( A \) is open \[ \subseteq {A}^{-0} \cap \left( {X - {A}^{-0}}\right) \] \[ = 0\text{.} \] 2. \( {\left( A - {A}^{0}\right) }^{-0} = {\left\lbrack A \cap \left( X - {A}^{0}\right) \right\rbrack }^{-0} \) \[ \subseteq {A}^{0} \cap {\left( X - {A}^{0}\right) }^{-0} \] since \( A \) is closed \[ = {A}^{0} \cap \left( {X - {A}^{0 - }}\right) \] \[ = 0\text{.} \] Theorem 3.9. 1. 
The collection of all meager sets in a topological space \( X \) is a proper \( \sigma \) -ideal in the natural algebra on \( \mathcal{P}\left( X\right) \) . 2. The collection of all meager Borel sets in a topological space \( X \) is a proper \( \sigma \) -ideal in the Boolean \( \sigma \) -algebra of Borel sets. Proof. Left to the reader. Theorem 3.10. If \( B \) is a Borel set of the topological space \( X \) then there exists an open set \( G \) and meager sets \( {N}_{1} \) and \( {N}_{2} \) such that \[ B = \left( {G + {N}_{1}}\right) - {N}_{2} \] i.e. every Borel set has the property of Baire. Proof. If \( B \) is open then \( B = \left( {B + 0}\right) - 0 \) . If \( B \) is closed \[ B = \left\lbrack {{B}^{0} + \left( {B - {B}^{0}}\right) }\right\rbrack - 0. \] Thus in the notation of Theorem 3.6, the result holds for each element of \( {A}_{0} \) . If it holds for each element of \( {A}_{\alpha } \) and if \( B \in {A}_{\alpha + 1} \) then there is an \( \omega \) -sequence \( {B}_{0},{B}_{1},\ldots \), of elements in \( {A}_{\alpha } \), such that \( B = \mathop{\sum }\limits_{{i < \omega }}{B}_{i} \) or \( B = - \mathop{\sum }\limits_{{i < \omega }}{B}_{i} \) . From our induction hypothesis there exist open sets \( {G}_{i} \) and meager sets \( {N}_{1}{}^{i} \) and \( {N}_{2}{}^{i} \) such that \[ {B}_{i} = \left( {{G}_{i} + {N}_{1}{}^{i}}\right) - {N}_{2}{}^{i}. \] If \( G = \mathop{\sum }\limits_{{i < \omega }}{G}_{i} \) then \( G \) is open. Furthermore if \[ {N}_{1} = B - G \land {N}_{2} = G - B \] then \[ {N}_{1} = B - G \subseteq \mathop{\sum }\limits_{{i < \omega }}{B}_{i} - \mathop{\sum }\limits_{{i < \omega }}{G}_{i} \subseteq \mathop{\sum }\limits_{{i < \omega }}\left( {{B}_{i} - {G}_{i}}\right) \subseteq \mathop{\sum }\limits_{{i < \omega }}{N}_{1}^{i} \] \[ {N}_{2} = G - B \subseteq \mathop{\sum }\limits_{{i < \omega }}\left( {{G}_{i} - {B}_{i}}\right) \subseteq \mathop{\sum }\limits_{{i < \omega }}{N}_{2}^{i}. 
\] Thus \( {N}_{1} \) and \( {N}_{2} \) are meager and \( B = \left( {G + {N}_{1}}\right) - {N}_{2} \) . If \( B = - C \) and \( C = \left( {G + {N}_{1}}\right) - {N}_{2} \) for \( G \) open and \( {N}_{1} \) and \( {N}_{2} \) meager, then \[ B = \left( {-G + {N}_{2}}\right) - \left( {{N}_{1} - {N}_{2}}\right) . \] Since \( {}^{ - }G \) is closed \( {}^{ - }G - {\left( {}^{ - }G\right) }^{0} \) is meager and hence \[ B = \left\lbrack {{\left( -G\right) }^{0} + \left( {-G - {\left( -G\right) }^{0}}\right) + {N}_{2}}\right\rbrack - \left( {{N}_{1} - {N}_{2}}\right) \] where \( \left( {-G - {\left( -G\right) }^{0}}\right) + {N}_{2} \) and \( {N}_{1} - {N}_{2} \) are meager. Corollary 3.11. If \( B \) is a Borel set of the topological space \( \langle X, T\rangle \) then there exists a regular open set \( G \) and meager sets \( {N}_{1} \) and \( {N}_{2} \) such that \[ B = \left( {G + {N}_{1}}\right) - {N}_{2} \] Proof. By Theorem 3.10 there exists an open set \( G \) and meager sets \( {N}_{1} \) and \( {N}_{2} \) such that \( B = \left( {G + {N}_{1}}\right) - {N}_{2} \) . But \[ G = {G}^{-0} - \left( {{G}^{-0} - G}\right) . \] Hence \[ B = \left( {{G}^{-0} + {N}_{1}}\right) - \left\lbrack {\left( {{G}^{-0} - G - {N}_{1}}\right) + {N}_{2}}\right\rbrack . \] Definition 3.12. \( A \) is a compact set in the topological space \( \langle X, T\rangle \) iff \( A \subseteq X \) and \[ \left( {\forall S \subseteq T}\right) \left\lbrack {A \subseteq \bigcup \left( S\right) \rightarrow \left( {\exists {S}^{\prime } \subseteq S}\right) \left\lbrack {{\operatorname{Fin}}^{ * }\left( {S}^{\prime }\right) \land A \subseteq \bigcup \left( {S}^{\prime }\right) }\right\rbrack }\right\rbrack . \] Definition 3.13. A topological space \( \langle X, T\rangle \) is 1. 
a Hausdorff space \[ \text{iff}\left( {\forall a, b \in X}\right) \left\lbrack {a \neq b \rightarrow \left( {\exists N\left( a\right) }\right) \left( {\exists {N}^{\prime }\left( b\right) }\right) \left\lbrack {N\left( a\right) \cap {N}^{\prime }\left( b\right) = 0}\right\rbrack }\right\rbrack \text{,} \] 2. a compact space iff \( X \) is a compact set, 3. a locally compact space iff \( \forall a \in X,\exists N\left( a\right), N{\left( a\right) }^{ - } \) is a compact set. Theorem 3.14. If the topological space \( \langle X, T\rangle \) is a Hausdorff space then \[ \left( {\forall a, b \in X}\right) \left\lbrack {a \neq b \rightarrow \left( {\exists N\left( a\right) }\right) \left\lbrack {b \notin N{\left( a\right) }^{ - }}\right\rbrack }\right\rbrack \text{.} \] Proof. By definition of a Hausdorff space \[ \left( {\exists N\left( a\right) }\right) \left( {\exists {N}^{\prime }\left( b\right) }\right) \left\lbrack {N\left( a\right) \cap {N}^{\prime }\left( b\right) = 0}\right\rbrack . \] Therefore \( b \notin N{\left( a\right) }^{ - } \) . Theorem 3.15. 1. Every compact set in a Hausdorff space is closed. 2. Every closed set in a compact space is compact. Proof. 1. Let \( A \) be a compact set in a Hausdorff space \( \langle X, T\rangle \) . If \( b \in {A}^{ - } - A \) then by Theorem 3.14 \[ \left( {\forall a \in A}\right) \left( {\exists N\left( a\right) }\right) \left\lbrack {b \notin N{\left( a\right) }^{ - }}\right\rbrack . \] Since \( A \subseteq \bigcup \left\{ {N\left( a\right) \mid a \in A \land b \notin N{\left( a\right) }^{ - }}\right\} \) and since \( A \) is compact, there exists a finite collection of elements of \( A \) \[ {a}_{1},\ldots ,{a}_{n} \] * Fin \( \left( S\right) \) means " \( S \) is finite". 
and a neighborhood of each such point \[ N\left( {a}_{1}\right) ,\ldots, N\left( {a}_{n}\right) \] such that \[ A \subseteq N\left( {a}_{1}\right) \cup \cdots \cup N\left( {a}_{n}\right) \] and \( b \notin N{\left( {a}_{1}\right) }^{ - }, b \notin N{\left( {a}_{2}\right) }^{ - },\cdots, b \notin N{\left( {a}_{n}\right) }^{ - } \) . Therefore \[ \left( {\exists {N}_{1}\left( b\right) }\right) \left\lbrack {{N}_{1}\left( b\right) \cap N\left( {a}_{1}\right) = 0}\right\rbrack \] \[ \left( {\exists {N}_{2}\left( b\right) }\right) \left\lbrack {{N}_{2}\left( b\right) \cap N\left( {a}_{2}\right) = 0}\right\rbrack \] \[ \vdots \] \[ \left( {\exists {N}_{n}\left( b\right) }\right) \left\lbrack {{N}_{n}\left( b\right) \cap N\left( {a}_{n}\right) = 0}\right\rbrack . \]
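On a finite space the transfinite construction of Theorem 3.6 stabilizes after finitely many stages, since countable unions and intersections reduce to finite ones. A small sketch (the four-point space and its topology are arbitrary choices, not from the text): starting from \( {A}_{0} = T \cup \{ X - a \mid a \in T\} \) and closing under unions and intersections yields the Borel algebra, here the \( {2}^{3} = 8 \)-element algebra generated by the atoms \( \{0\}, \{1\}, \{2,3\} \).

```python
# Finite sketch of the construction in Theorem 3.6 (arbitrary example):
# iterate closure under unions and intersections, starting from
# A_0 = T ∪ {X - a | a ∈ T}, until a fixed point is reached.
X = frozenset({0, 1, 2, 3})
T = {frozenset(), frozenset({0}), frozenset({0, 1}), X}  # a topology on X

stage = set(T) | {X - a for a in T}  # A_0: open sets and their complements
while True:
    nxt = set(stage)
    for a in stage:
        for b in stage:
            nxt.add(a | b)  # unions (countable = finite here)
            nxt.add(a & b)  # intersections
    if nxt == stage:        # the hierarchy has stabilized
        break
    stage = nxt

borel = stage
# the algebra generated by the atoms {0}, {1}, {2,3}: 2^3 = 8 sets
assert len(borel) == 8
# a Boolean sigma-algebra is closed under complement (De Morgan)
assert all(X - a in borel for a in borel)
print(len(borel))
```

Note that complements of the later stages never have to be added explicitly: as in the proof of Theorem 3.6, De Morgan's laws propagate complement-closure from \( {A}_{0} \) upward.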
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 2.2.6
Definition 2.2.6 (Subdifferential) Let \( f : X \rightarrow \mathbb{R} \cup \{ + \infty \} \) be a convex function on a vector space \( X \) . For \( {x}_{0} \in \operatorname{dom}\left( f\right) \), an element \( {x}^{ * } \in {X}^{ * } \) is called a subgradient of \( f \) at \( {x}_{0} \) if \[ \left\langle {{x}^{ * }, x - {x}_{0}}\right\rangle + f\left( {x}_{0}\right) \leq f\left( x\right) \;\forall x \in X. \] The set of all subgradients at \( {x}_{0} \) is called the subdifferential of \( f \) at \( {x}_{0} \), and is denoted by \( \partial f\left( {x}_{0}\right) \) . Geometrically, \( {x}^{ * } \in \partial f\left( {x}_{0}\right) \) if and only if the hyperplane \[ y = \left\langle {{x}^{ * }, x - {x}_{0}}\right\rangle + f\left( {x}_{0}\right) \] lies below the epigraph of \( f \), i.e., it is a support of \( \operatorname{epi}\left( f\right) \) . ![9e04f701-3152-4cf2-8818-a909bab05a37_93_0.jpg](images/9e04f701-3152-4cf2-8818-a909bab05a37_93_0.jpg) Fig. 2.1. Obviously, \( \partial f\left( {x}_{0}\right) \) may contain more than one point. The following propositions hold, if \( X \) is a Banach space. (1) \( \partial f\left( {x}_{0}\right) \) is a \( {\mathrm{w}}^{ * } \) -closed convex set. (2) If \( {x}_{0} \in \operatorname{int}\left( {\operatorname{dom}\left( f\right) }\right) \), then \( \partial f\left( {x}_{0}\right) \neq \varnothing \) . Proof. We apply the Hahn-Banach separation theorem to the convex set \( \operatorname{epi}\left( f\right) : \exists \left( {{x}^{ * },\lambda }\right) \in {X}^{ * } \times {\mathbb{R}}^{1} \smallsetminus \{ \left( {\theta ,0}\right) \} \) such that \[ \left\langle {{x}^{ * },{x}_{0}}\right\rangle + {\lambda f}\left( {x}_{0}\right) \geq \left\langle {{x}^{ * }, x}\right\rangle + {\lambda t}\;\forall \left( {x, t}\right) \in \operatorname{epi}\left( f\right) . \] Since \( \left( {{x}_{0}, f\left( {x}_{0}\right) + 1}\right) \in \operatorname{epi}\left( f\right) \), it follows that \( \lambda \leq 0 \) . However, \( \lambda \neq 0 \) . 
Otherwise, we would have \( \left\langle {{x}^{ * }, x - {x}_{0}}\right\rangle \leq 0 \) for all \( x \in \operatorname{dom}\left( f\right) \) . From \( {x}_{0} \in \operatorname{int}\left( {\operatorname{dom}\left( f\right) }\right) \), we conclude that \( {x}^{ * } = \theta \), which contradicts \( \left( {{x}^{ * },\lambda }\right) \neq \left( {\theta ,0}\right) \) . Setting \( {x}_{0}^{ * } = \frac{1}{-\lambda }{x}^{ * } \), we obtain \( {x}_{0}^{ * } \in \partial f\left( {x}_{0}\right) \) . (3) For all \( \lambda \geq 0 \), \( \partial \left( {\lambda f}\right) \left( {x}_{0}\right) = \lambda \partial f\left( {x}_{0}\right) \) . (4) If \( f, g : X \rightarrow {\mathbb{R}}^{1} \cup \{ + \infty \} \) are convex, then \( \forall {x}_{0} \in \operatorname{int}\left( {\operatorname{dom}\left( f\right) \cap \operatorname{dom}\left( g\right) }\right) \) \[ \partial \left( {f + g}\right) \left( {x}_{0}\right) = \partial f\left( {x}_{0}\right) + \partial g\left( {x}_{0}\right) . \] Proof. " \( \supset \) " is trivial; we are going to prove " \( \subset \) ". We may assume \( {x}_{0} = \theta \) , \( f\left( \theta \right) = g\left( \theta \right) = 0 \), and \( \theta \in \partial \left( {f + g}\right) \left( \theta \right) \) . We want to show that \( \exists {x}_{0}^{ * } \in \partial f\left( \theta \right) \) such that \( - {x}_{0}^{ * } \in \partial g\left( \theta \right) \) . The set \( C = \left\{ {\left( {x, t}\right) \in \operatorname{dom}\left( g\right) \times {\mathbb{R}}^{1} \mid t \leq - g\left( x\right) }\right\} \) is convex, and \( C \cap \operatorname{int}\left( {\operatorname{epi}\left( f\right) }\right) = \varnothing \), from the fact that \( f\left( x\right) + g\left( x\right) \geq 0 \) . 
According to the Hahn-Banach separation theorem, \( \exists \left( {{x}^{ * },\lambda }\right) \in {X}^{ * } \times {\mathbb{R}}^{1} \) such that \[ \left\langle {{x}^{ * }, x}\right\rangle + {\lambda f}\left( x\right) \geq 0 \geq \left\langle {{x}^{ * }, x}\right\rangle + {\lambda t}\;\forall \left( {x, t}\right) \in C. \] In the same manner as the proof of (2), we verify \( \lambda > 0 \) . Setting \( {x}_{0}^{ * } = \frac{-{x}^{ * }}{\lambda } \) , we have \[ \left\langle {{x}_{0}^{ * }, x}\right\rangle \leq f\left( x\right) , - \left\langle {{x}_{0}^{ * }, x}\right\rangle \leq g\left( x\right) \;\forall x \in X. \] i.e., \( {x}_{0}^{ * } \in \partial f\left( \theta \right) \), and \( - {x}_{0}^{ * } \in \partial g\left( \theta \right) \) . (5) If \( f : X \rightarrow {\mathbb{R}}^{1} \cup \{ + \infty \} \) is convex, and is G-differentiable at a point \( {x}_{0} \in \operatorname{int}\left( {\operatorname{dom}\left( f\right) }\right) \), then \( \partial f\left( {x}_{0}\right) \) is a single point \( {x}_{0}^{ * } \) satisfying \( \left\langle {{x}_{0}^{ * }, h}\right\rangle = \) \( {df}\left( {{x}_{0}, h}\right) \forall h \in X \) . Proof. We may assume \( {x}_{0} = \theta \) and \( f\left( \theta \right) = 0 \) . Define the functional on \( X \) : \[ L\left( h\right) = {df}\left( {\theta, h}\right) . \] It is homogeneous. From the convexity of \( f \), we have \[ L\left( {{h}_{1} + {h}_{2}}\right) \leq L\left( {h}_{1}\right) + L\left( {h}_{2}\right) ,\forall {h}_{1},{h}_{2} \in X. \] Then \( L \) is linear. Again by the convexity: \[ - f\left( {-h}\right) \leq L\left( h\right) \leq f\left( h\right) \;\forall h \in X. \] Combining with Theorem 2.2.1, \( L \) is continuous. Therefore \( \exists {x}_{0}^{ * } \in {X}^{ * } \) such that \[ L\left( h\right) = \left\langle {{x}_{0}^{ * }, h}\right\rangle \] It follows that \( {x}_{0}^{ * } \in \partial f\left( \theta \right) \) . 
Now, suppose \( {x}^{ * } \in \partial f\left( \theta \right) \), i.e., \( \left\langle {{x}^{ * }, h}\right\rangle \leq f\left( h\right) \forall h \in X \), and then \( \left\langle {{x}^{ * }, h}\right\rangle \leq \frac{1}{t}f\left( {th}\right) \forall h \in X,\forall t > 0 \) . Letting \( t \rightarrow {0}^{ + } \), it follows that \( \left\langle {{x}^{ * }, h}\right\rangle \leq \left\langle {{x}_{0}^{ * }, h}\right\rangle \forall h \in X \) . Then \( {x}^{ * } = {x}_{0}^{ * } \) . (6) If \( f : X \rightarrow {\mathbb{R}}^{1} \cup \{ + \infty \} \) is convex and attains its minimum at \( {x}_{0} \), then \( \theta \in \partial f\left( {x}_{0}\right) \) . Let us present a few examples in computing the subdifferentials of convex functions. Example 1. (Normalized duality map) Let \( X \) be a real Banach space, and let \( f\left( x\right) = \frac{1}{2}\parallel x{\parallel }^{2} \) . Then \[ \partial f\left( x\right) = F\left( x\right) \mathrel{\text{:=}} \left\{ {{x}^{ * } \in {X}^{ * } \mid \begin{Vmatrix}{x}^{ * }\end{Vmatrix} = \parallel x\parallel ,\left\langle {{x}^{ * }, x}\right\rangle = \parallel x{\parallel }^{2}}\right\} . \] It is called the normalized duality map. In the particular case where \( X \) is a real Hilbert space, it is known that \( {f}^{\prime }\left( x\right) = x \), and indeed, \( F\left( x\right) = \{ x\} \) . However, for Banach spaces, \( f \) may not be differentiable. 
We verify \( \partial f\left( x\right) = F\left( x\right) \) as follows: " \( \subset \) " \( \forall {x}^{ * } \in \partial f\left( x\right) \), we have \[ \left\langle {{x}^{ * }, y - x}\right\rangle \leq \frac{1}{2}\left( {\parallel y{\parallel }^{2} - \parallel x{\parallel }^{2}}\right) , \] and then \( \forall h \in X \) , \[ \left\langle {{x}^{ * }, h}\right\rangle \leq \frac{1}{2t}\left( {\parallel x + {th}{\parallel }^{2} - \parallel x{\parallel }^{2}}\right) \leq \parallel x\parallel \parallel h\parallel + \frac{t}{2}\parallel h{\parallel }^{2}, \] for \( t > 0 \) . It follows that \( \left\langle {{x}^{ * }, h}\right\rangle \leq \parallel x\parallel \parallel h\parallel \), and then \( \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \leq \parallel x\parallel \) . On the other hand, setting \( y = {\lambda x} \), we have \( \left( {\lambda - 1}\right) \left\langle {{x}^{ * }, x}\right\rangle \leq \frac{1}{2}\left( {{\lambda }^{2} - 1}\right) \parallel x{\parallel }^{2} \) . Dividing by \( \left( {\lambda - 1}\right) < 0 \) for \( \lambda \in \left( {0,1}\right) \), and then letting \( \lambda \rightarrow 1 \), we obtain \( \left\langle {{x}^{ * }, x}\right\rangle \geq \parallel x{\parallel }^{2} \) ; it follows that \( \parallel x\parallel \leq \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \) . Thus \( \parallel x\parallel = \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \), and \( \left\langle {{x}^{ * }, x}\right\rangle = \parallel x{\parallel }^{2} \) . " \( \supset \) " \( \forall {x}^{ * } \in F\left( x\right) \), one has \[ \left\langle {{x}^{ * }, y - x}\right\rangle = \left\langle {{x}^{ * }, y}\right\rangle - \parallel x{\parallel }^{2} \] \[ \leq \begin{Vmatrix}{x}^{ * }\end{Vmatrix}\parallel y\parallel - \parallel x{\parallel }^{2} \] \[ \leq \frac{1}{2}\left( {\parallel y{\parallel }^{2} - \parallel x{\parallel }^{2}}\right) \] \[ = f\left( y\right) - f\left( x\right) \] \( \forall y \in X \), i.e., \( {x}^{ * } \in \partial f\left( x\right) \) . Example 2. 
Let \( C \) be a convex subset of \( X \) . The support function of \( C \) is defined to be \( {S}_{C}\left( {x}^{ * }\right) = \mathop{\sup }\limits_{{x \in C}}\left\langle {{x}^{ * }, x}\right\rangle \) . Geometrically, \( {S}_{C}\left( {x}^{ * }\right) = \left\langle {{x}^{ * },{x}_{0}}\right\rangle \) if and only if \( \left\{ {x \in X \mid \left\langle {{x}^{ * }, x}\right\rangle = {S}_{C}\left( {x}^{ * }\right) }\right\} \) is a support hyperplane of \( C \) at \( {x}_{0} \) . We consider the subdifferential of the indicator function of \( C \) : \[ \partial {\chi }_{C}\left( {x}_{0}\right) = \left\{ \begin{array}{ll} \{ \theta \} & \text{ if }{x}_{0} \in \overset{ \circ }{C}, \\ \left\{ {{x}^{ * } \in {X}^{ * } \mid {S}_{C}\left( {x}^{ * }\right) = \left\langle {{x}^{ * },{x}_{0}}\right\rangle }\right\} & \text{ if }{x}_{0} \in \partial C, \\ \varnothing & \text{ if }{x}_{0} \notin C. \end{array}\right. \]
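Example 1 can be checked numerically in the Hilbert-space case \( X = {\mathbb{R}}^{n} \) with the Euclidean norm, where \( F\left( x\right) = \{ x\} \). A minimal sketch (the dimension and random test points are arbitrary choices): \( x \) itself satisfies the membership conditions \( \begin{Vmatrix}{x}^{ * }\end{Vmatrix} = \parallel x\parallel \), \( \left\langle {{x}^{ * }, x}\right\rangle = \parallel x{\parallel }^{2} \), and the subgradient inequality of Definition 2.2.6 holds at every test point \( y \).

```python
import random

# Check Example 1 in the Hilbert-space case X = R^n (Euclidean norm),
# where the normalized duality map is F(x) = {x}.
random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def nsq(u):          # squared norm ||u||^2
    return dot(u, u)

n = 5
x = [random.uniform(-1.0, 1.0) for _ in range(n)]
xstar = x  # candidate subgradient of f = (1/2)||.||^2 at x

# membership conditions: ||x*|| = ||x|| and <x*, x> = ||x||^2
assert abs(nsq(xstar) - nsq(x)) < 1e-12
assert abs(dot(xstar, x) - nsq(x)) < 1e-12

# subgradient inequality <x*, y - x> <= f(y) - f(x) at random points y
for _ in range(1000):
    y = [random.uniform(-5.0, 5.0) for _ in range(n)]
    lhs = dot(xstar, [yi - xi for yi, xi in zip(y, x)])
    rhs = 0.5 * (nsq(y) - nsq(x))
    assert lhs <= rhs + 1e-9
print("ok")
```

The inequality is exact here: the gap \( \frac{1}{2}\parallel y{\parallel }^{2} - \frac{1}{2}\parallel x{\parallel }^{2} - \left\langle {x, y - x}\right\rangle = \frac{1}{2}\parallel y - x{\parallel }^{2} \geq 0 \), so the assertions only tolerate floating-point slack.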
1112_(GTM267)Quantum Theory for Mathematicians
Definition 23.27
Definition 23.27 A submanifold \( R \) of \( N \) is said to be Lagrangian if \( \dim R = n \) and \( {T}_{z}R \) is a Lagrangian subspace of \( {T}_{z}N \) for each \( z \in R \) . A Lagrangian submanifold \( R \) of \( N \) is said to be Bohr-Sommerfeld (with respect to \( L \) ) if the holonomy in \( L \) of every loop in \( R \) is trivial. We may summarize the preceding discussion as follows. Conclusion 23.28 For a purely real polarization \( P \) with embedded leaves, a polarized section vanishes on every leaf of \( P \) that is not Bohr–Sommerfeld. Our next example suggests that when the leaves are compact, the Bohr-Sommerfeld leaves typically form a discrete set within the set of all leaves. Example 23.29 Let \( N = {S}^{1} \times \mathbb{R} \), equipped with the symplectic form \( \omega = {dx} \land {d\phi } \), where \( x \) is the linear coordinate on \( \mathbb{R} \) and \( \phi \) is the angular coordinate on \( {S}^{1} \) . Let \( L \) be the trivial line bundle on \( N \), with sections that are identified with smooth functions. Let \( \theta = x{d\phi } \) and define a connection \( \nabla \) on \( L \) by \( {\nabla }_{X} = X - \left( {i/\hslash }\right) \theta \left( X\right) \), and let \( P \) be the purely real polarization of \( N \) for which the leaves are the sets of the form \( {S}^{1} \times \{ x\} \), for \( x \in \mathbb{R} \) . Then a leaf \( {S}^{1} \times \{ x\} \) is Bohr-Sommerfeld if and only if \( x/\hslash \) is an integer. In particular, there are no nonzero, smooth polarized sections of \( L \) . Proof. If we define a section locally on a given leaf \( {S}^{1} \times \{ x\} \) as \[ s\left( \phi \right) = c{e}^{{ix\phi }/\hslash } \] for some nonzero constant \( c \), then it is easily verified that \( {\nabla }_{\partial /\partial \phi }s = 0 \) . After one trip around the circle, the value of this section will be the starting value times \( {e}^{{2\pi ix}/\hslash } \) . 
Thus, the holonomy around \( {S}^{1} \times \{ x\} \) is trivial if and only if \( x/\hslash \) is an integer. A polarized section, then, would have to vanish on all the leaves where \( x/\hslash \) is not an integer. Since such leaves form a dense subset of \( N \), any smooth polarized section must be identically zero. ∎ Even in cases, such as Example 23.29, where there are no smooth polarized sections, one may still consider "distributional" polarized sections that are supported on the Bohr-Sommerfeld leaves, as on pp. 251-252 of [45]. ## 23.5.3 The Complex Case In Proposition 22.8, we computed the space of polarized sections for a certain positive, translation-invariant polarization on \( {\mathbb{R}}^{2n} \), namely the one for which \( {P}_{z} \) is spanned by the vectors \( \partial /\partial {z}_{j} \) in (22.9). The situation here is better than that for the vertical polarization, in that there are nonzero polarized sections that are square integrable over \( {\mathbb{R}}^{2n} \) . Recall, however, that if we take our polarization to be spanned by the vectors \( \partial /\partial {\bar{z}}_{j} \) [see (22.13)], then there are no nonzero square-integrable polarized sections. This example indicates the importance of the positivity condition in Definition 23.19. For our next example, we consider the example of the unit disk \( D \) , equipped with the unique (up to a constant) symplectic form that is invariant under the group of fractional linear transformations that map \( D \) onto \( D \) . In this case, the quantum Hilbert space can be identified with a weighted Bergman space, that is, an \( {L}^{2} \) space of holomorphic functions on \( D \) with respect to a measure of the form \( {\left( 1 - {\left| z\right| }^{2}\right) }^{\nu }{dxdy} \) . 
Example 23.30 Let \( N \) be the unit disk \( D \subset {\mathbb{R}}^{2} \) equipped with the following symplectic form: \[ \omega = 4{\left( 1 - {\left| z\right| }^{2}\right) }^{-2}{dx} \land {dy} = 4{\left( 1 - {r}^{2}\right) }^{-2}{rdr} \land {d\phi }, \] where \( \left( {r,\phi }\right) \) are the usual polar coordinates. Let \( L \) be the trivial line bundle over \( D \) with connection \( {\nabla }_{X} = X - \left( {i/\hslash }\right) \theta \left( X\right) \), where \( \theta \) is the symplectic potential for \( \omega \) given by \[ \theta = 2\frac{{r}^{2}}{1 - {r}^{2}}{d\phi }. \] Define a complex polarization on \( D \) by letting \( {P}_{z} = \operatorname{Span}\left( {\partial /\partial z}\right) \), where \( z = x - {iy} \) . In that case, polarized sections \( s \) have the form \[ s\left( z\right) = F\left( z\right) {\left( 1 - {\left| z\right| }^{2}\right) }^{1/\hslash }, \] where \( F \) is holomorphic. The norm of such a section is computed as \[ \parallel s{\parallel }^{2} = 4{\int }_{D}{\left| F\left( z\right) \right| }^{2}{\left( 1 - {\left| z\right| }^{2}\right) }^{2/\hslash - 2}{dxdy}. \] As in the case of the plane, the seemingly unnatural definition \( z = x - {iy} \) is necessary to obtain a Kähler polarization. If we used \( z = x + {iy} \) instead, the polarized sections would have the form \( F\left( z\right) {\left( 1 - {\left| z\right| }^{2}\right) }^{-1/\hslash } \), in which case there would be no nonzero, square-integrable polarized sections. Proof. See Exercise 8. ∎ We now consider general purely complex polarizations. Recall that, by Proposition 23.18 and the Newlander-Nirenberg theorem, \( N \) has a unique complex structure for which \( {P}_{z} \) is the \( \left( {1,0}\right) \) -subspace of \( {T}_{z}^{\mathbb{C}}N \), for all \( z \in N \) . As in the purely real case, there always exist local polarized sections. Theorem 23.31 Suppose \( P \) is a purely complex polarization on \( N \) . 
Then for each \( {z}_{0} \in N \), there exists a \( P \) -polarized section \( s \) of \( L \), defined in a neighborhood of \( {z}_{0} \), such that \( s\left( {z}_{0}\right) \neq 0 \) . We defer the proof of Theorem 23.31 until the end of this subsection. Suppose \( s \) is as in the theorem and \( {s}^{\prime } \) is any other locally defined \( P \) - polarized section. Then \( {s}^{\prime } = {fs} \) for some unique complex-valued function \( f \) , and by the product rule for covariant derivatives, \( X\left( f\right) = 0 \) for all \( X \in {\bar{P}}_{z} \) . This means that \( f \) is holomorphic with respect to the complex structure on \( N \) for which \( P \) is the \( \left( {1,0}\right) \) -tangent space. Thus, we have a preferred family of local trivializations of \( L \) (the ones given by nonvanishing local polarized sections) such that the "ratio" of any two such trivializations is a holomorphic function. This means that we have given \( L \) the structure of a "holomorphic line bundle" over the complex manifold \( N \) in such a way that the holomorphic sections of \( L \) are precisely the polarized sections with respect to \( P \) . Arguing as in the proof of Proposition 14.15, it is not hard to show that for a purely complex polarization, the space of square-integrable polarized sections of \( L \) forms a closed subspace of the prequantum Hilbert space. For any \( z \in N \), if we choose a linear identification of the fiber of \( L \) over \( z \) with \( \mathbb{C} \), then the map \( s \mapsto s\left( z\right) \) is a linear functional on the quantum Hilbert space. It is not hard to show, as in the proof of Proposition 14.15, that this linear functional is continuous, and can therefore be represented as an inner product with a unique element of the quantum Hilbert space. Definition 23.32 Let \( P \) be a purely complex polarization on \( N \) . 
For each \( z \in N \), choose a linear identification of the fiber of \( L \) over \( z \) with \( \mathbb{C} \) . Then the coherent state \( {\chi }_{z} \) is the unique element of the quantum Hilbert space with respect to \( P \) such that \[ s\left( z\right) = \left\langle {{\chi }_{z}, s}\right\rangle \] for all \( s \) . Suppose \( N = {\mathbb{R}}^{2} \) with a polarization given by \( {P}_{z} = \operatorname{Span}\left( {\partial /\partial z}\right) \), where \( z = x - {i\alpha p} \) . If we use the symplectic potential \( \theta = \left( {{pdx} - {xdp}}\right) /2 \) , then, as in the proof of Proposition 22.14, the quantum Hilbert space is naturally identifiable with the Segal-Bargmann space. In this case, the coherent states can be read off from Proposition 14.17. It could happen that \( {\chi }_{z} = 0 \) for some \( z \in N \), or even for all \( z \in N \) , depending on the choice of \( P \) . Even if \( {\chi }_{z} \) is nonzero, \( {\chi }_{z} \) is only well defined up to multiplication by a constant, because we must choose an identification of \( {L}^{-1}\left( {\{ z\} }\right) \) with \( \mathbb{C} \) . But if \( {\chi }_{z} \neq 0 \), the one-dimensional subspace spanned by \( {\chi }_{z} \) is independent of this choice. That is to say, whenever \( {\chi }_{z} \neq 0 \), the span of \( {\chi }_{z} \) is a well-defined element of the projective space \( \mathcal{P}\left( \mathbf{H}\right) \), where \( \mathbf{H} \) is the quantum Hilbert space. Recall, meanwhile, that if \( \left( {L,\nabla }\right) \) is a Hermitian line bundle with connection having curvature \( \omega /\hslash \), then for any positive integer \( k \), there is a natural Hermitian connection on \( {L}^{\otimes k} \) having curvature \( {k\omega }/\hslash \) . 
This means that if \( L \) is a prequantum line bundle with one value \( {\hslash }_{0} \) of Planck’s constant, then \( {L}^{\otimes k} \) is a prequantum line bundle with Planck’s constant equal to \( {\hslash }_{0}/k \) . The following result shows that in the case of compact symplectic manifolds with Kähler polarizations, things behave nicely when \( k \) tends to infinity. Theorem 23.33 Assume \( N \) is compact and let \( P \) be a Kähler polarization on \( N \) . For each positive integer \( k \), let \( {\mathbf{H}}_{k} \) denote the space of polarized sections of \( {L}^{\otimes k} \) .
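The holonomy computation in Example 23.29 can be reproduced numerically: the polarized-section equation \( {\nabla }_{\partial /\partial \phi }s = 0 \) reads \( {s}^{\prime }\left( \phi \right) = \left( {ix/\hslash }\right) s\left( \phi \right) \), so one loop around \( {S}^{1} \times \{ x\} \) multiplies \( s \) by \( {e}^{{2\pi ix}/\hslash } \). A small sketch (with \( \hslash = 1 \), an arbitrary normalization):

```python
import cmath

# Numerical check of the holonomy in Example 23.29: parallel transport
# around the leaf S^1 x {x} solves s'(phi) = (i x / hbar) s(phi), so one
# loop multiplies s by exp(2*pi*i*x/hbar). The loop is trivial iff
# x/hbar is an integer. hbar = 1 is an arbitrary normalization.
hbar = 1.0

def holonomy(x, steps=2000):
    s = 1.0 + 0.0j
    dphi = 2.0 * cmath.pi / steps
    for _ in range(steps):
        s *= cmath.exp(1j * (x / hbar) * dphi)  # transport over one step
    return s

assert abs(holonomy(3.0) - 1.0) < 1e-9   # x/hbar integer: trivial holonomy
assert abs(holonomy(0.5) + 1.0) < 1e-9   # exp(i*pi) = -1: obstructed leaf
print("ok")
```

Only the leaves with trivial holonomy (integer \( x/\hslash \)) are Bohr–Sommerfeld, matching the discreteness observed in the example.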
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 5.81
Definition 5.81. A subset \( E \) of a normed space \( X \) is said to be moderately regular, or simply \( M \) -regular, at \( a \in \operatorname{cl}\left( E\right) \) if \( {T}^{M}\left( {E, a}\right) = {T}^{D}\left( {E, a}\right) \) . A function \( f : X \rightarrow \overline{\mathbb{R}} \) finite at \( x \in X \) is said to be moderately regular, or simply \( M \) -regular, at \( x \) if \( {f}^{M}\left( {x, \cdot }\right) = {f}^{D}\left( {x, \cdot }\right) \) . The dual requirements are equivalent to the respective primal properties, as one can show just as in the case of circa-regularity. Proposition 5.82. A set \( E \) is \( M \) -regular at \( a \in \operatorname{cl}\left( E\right) \) if and only if \( {N}_{M}\left( {E, a}\right) = {N}_{D}\left( {E, a}\right) \) . A function \( f \) is \( M \) -regular at \( x \in {f}^{-1}\left( \mathbb{R}\right) \) iff \( {\partial }_{M}f\left( x\right) = {\partial }_{D}f\left( x\right) \) and \( {\partial }_{M}^{\infty }f\left( x\right) = {\partial }_{D}^{\infty }f\left( x\right) \) . A convex set is \( M \) -regular at every point of its closure and a convex function is \( M \) -regular at every point of its domain. If a function \( f : X \rightarrow \overline{\mathbb{R}} \) is finite at \( x \in X \) and is Hadamard differentiable at \( x \), then one sees that \( f \) is \( M \) -regular at \( x \) and \( {\partial }_{M}f\left( x\right) = \left\{ {{f}^{\prime }\left( x\right) }\right\} ,{\partial }_{M}^{\infty }f\left( x\right) = {\partial }_{D}^{\infty }f\left( x\right) = \varnothing \) . Other examples arise from the calculus rules we present next. Their proofs, being similar to those given for the Clarke subdifferential, are left as exercises. Again, we take a geometric starting point. Proposition 5.83. Let \( X, Y \) be normed spaces, let \( W \) be an open subset of \( X \), let \( F \) (resp. \( G \) ) be a subset of \( X \) (resp. 
\( Y \) ), and let \( g : W \rightarrow Y \) be Hadamard differentiable at a point a of \( E \mathrel{\text{:=}} F \cap {g}^{-1}\left( G\right) \) . Suppose \( A\left( {{T}^{M}\left( {F, a}\right) }\right) \cap {H}^{M}\left( {G, b}\right) \neq \varnothing \), where \( A \mathrel{\text{:=}} \) \( {g}^{\prime }\left( a\right), b \mathrel{\text{:=}} g\left( a\right) \) . Then \( {T}^{M}\left( {F, a}\right) \cap {A}^{-1}\left( {{T}^{M}\left( {G, b}\right) }\right) \subset {T}^{M}\left( {E, a}\right) \) . If \( F \) and \( G \) are \( M \) - regular at \( a \) and \( b \) respectively, then equality holds and \( E \) is \( M \) -regular at \( a \) . A chain rule can be derived from Proposition 5.83 as for Clarke subdifferentials. Theorem 5.84. Let \( X, Y \) be normed spaces, let \( W \) be an open subset of \( X \), let \( h : Y \rightarrow \) \( \overline{\mathbb{R}} \) be finite at \( \bar{y} \in Y \), and let \( g : W \rightarrow Y \) be a mapping that is Hadamard differentiable at some point \( \bar{x} \) of \( X \) such that \( \bar{y} = g\left( \bar{x}\right) \) . Let \( f \mathrel{\text{:=}} h \circ g \) . Suppose there exists some \( u \in X \) such that \( {h}^{\diamond }\left( {\bar{y},{g}^{\prime }\left( \bar{x}\right) \left( u\right) }\right) < + \infty \) . Then one has \[ {f}^{M}\left( {\bar{x}, \cdot }\right) \leq {h}^{M}\left( {\bar{y}, \cdot }\right) \circ {g}^{\prime }\left( \bar{x}\right) \] (5.48) \[ {\partial }_{M}f\left( \bar{x}\right) \subset {g}^{\prime }{\left( \bar{x}\right) }^{\top }\left( {{\partial }_{M}h\left( \bar{y}\right) }\right) \mathrel{\text{:=}} {\partial }_{M}h\left( \bar{y}\right) \circ {g}^{\prime }\left( \bar{x}\right) . \] (5.49) If \( h \) is \( M \) -regular at \( \bar{y} \), these relations are equalities. Theorem 5.85. 
Let \( f, g : X \rightarrow \overline{\mathbb{R}} \) be two lower semicontinuous functions finite at \( \bar{x} \in X \) such that there exists some \( u \in \operatorname{dom}{f}^{M}\left( {\bar{x}, \cdot }\right) \cap \operatorname{dom}{g}^{\diamond }\left( {\bar{x}, \cdot }\right) \) . Then \[ {\left( f + g\right) }^{M}\left( {\bar{x}, \cdot }\right) \leq {f}^{M}\left( {\bar{x}, \cdot }\right) + {g}^{M}\left( {\bar{x}, \cdot }\right) , \] (5.50) \[ {\partial }_{M}\left( {f + g}\right) \left( \bar{x}\right) \subset {\partial }_{M}f\left( \bar{x}\right) + {\partial }_{M}g\left( \bar{x}\right) . \] (5.51) If \( f \) and \( g \) are \( M \) -regular at \( \bar{x} \), these relations are equalities. For separable functions, a sum rule does not require additional assumptions. Proposition 5.86. If \( X, Y \) are normed spaces, \( f : X \rightarrow {\mathbb{R}}_{\infty }, g : Y \rightarrow {\mathbb{R}}_{\infty } \), and if \( h \) is defined by \( h\left( {x, y}\right) \mathrel{\text{:=}} f\left( x\right) + g\left( y\right) \), then for every \( \left( {x, y}\right) \in \operatorname{dom}h,\left( {u, v}\right) \in X \times Y \) , one has \[ {h}^{M}\left( {\left( {x, y}\right) ,\left( {u, v}\right) }\right) \leq {f}^{M}\left( {x, u}\right) + {g}^{M}\left( {y, v}\right) \] (5.52) \[ {\partial }_{M}h\left( {x, y}\right) \subset {\partial }_{M}f\left( x\right) \times {\partial }_{M}g\left( y\right) \] (5.53) If \( f \) and \( g \) are \( M \) -regular at \( x \) and \( y \) respectively, these relations are equalities. Now let us state a moderate version of Proposition 5.58; its proof is similar. Proposition 5.87. Let \( A \in L\left( {V, W}\right) \) be a surjective continuous linear map between two normed spaces, let \( j : V \rightarrow \overline{\mathbb{R}}, p : W \rightarrow \overline{\mathbb{R}} \) be lower semicontinuous and such that \( p \circ A \leq j \) . 
Suppose that for some \( \bar{v} \in {j}^{-1}\left( \mathbb{R}\right) ,\bar{w} \mathrel{\text{:=}} A\bar{v} \in {p}^{-1}\left( \mathbb{R}\right) \) and every sequence \( \left( {\alpha }_{n}\right) \rightarrow {0}_{ + } \), one of the following assumptions is satisfied: (a) For every sequence \( \left( {w}_{n}\right) \rightarrow \bar{w} \) one can find a sequence \( \left( {v}_{n}\right) \rightarrow \bar{v} \) such that \( j\left( {v}_{n}\right) \leq p\left( {w}_{n}\right) + {\alpha }_{n} \) and \( A\left( {v}_{n}\right) = {w}_{n} \) for all \( n \in \mathbb{N} \) . (b) \( p = {d}_{S} \) for some closed subset \( S \) of \( W \) containing \( \bar{w} \) and for every sequence \( \left( {w}_{n}\right) { \rightarrow }_{S}\bar{w} \) one can find a sequence \( \left( {v}_{n}\right) \rightarrow \bar{v} \) such that \( j\left( {v}_{n}\right) \leq p\left( {w}_{n}\right) + {\alpha }_{n} \) and \( A\left( {v}_{n}\right) = {w}_{n} \) for all \( n \in \mathbb{N} \) . Then one has \[ {A}^{\top }\left( {{\partial }_{M}p\left( \bar{w}\right) }\right) \subset {\partial }_{M}j\left( \bar{v}\right) \] ## Exercises 1. Prove Theorem 5.84. 2. Prove Theorem 5.85. 3. Prove Proposition 5.86. 4. The Treiman subderivate of a function \( f : X \rightarrow \overline{\mathbb{R}} \) at \( x \in {f}^{-1}\left( \mathbb{R}\right) \) in the direction \( u \in X \) is the function \( {f}^{B}\left( {x, \cdot }\right) \) whose epigraph is \( {T}^{B}\left( {E, e}\right) \), where \( E \) is the epigraph of \( f \) and \( e \mathrel{\text{:=}} \left( {x, f\left( x\right) }\right) : \operatorname{epi}{f}^{B}\left( {x, \cdot }\right) = {T}^{B}\left( {E, e}\right) \) . The Treiman subdifferential of \( f \) : \( X \rightarrow \overline{\mathbb{R}} \) at \( x \in {f}^{-1}\left( \mathbb{R}\right) \) is the set \[ {\partial }_{B}f\left( x\right) \mathrel{\text{:=}} \left\{ {{x}^{ * } \in {X}^{ * } : {x}^{ * }\left( \cdot \right) \leq {f}^{B}\left( {x, \cdot }\right) }\right\} . 
\] Prove for this concept results similar to those of this section. (See [931,934].) 5. Suppose that for every subset \( E \) of a normed space \( X \) and all \( a \in \mathrm{{cl}}E \) one is given a set \( \gamma \left( {E, a}\right) \) of sequences \( {\left( \left( {t}_{n},{e}_{n}\right) \right) }_{n} \) converging to \( \left( {{0}_{ + }, a}\right) \) . Then define \( {T}^{\gamma }\left( {E, a}\right) \) as the set of \( v \in X \) such that for every \( {\left( \left( {t}_{n},{e}_{n}\right) \right) }_{n} \) in \( \gamma \left( {E, a}\right) \) there exists a sequence \( \left( {v}_{n}\right) \rightarrow v \) in \( X \) with \( {e}_{n} + {t}_{n}{v}_{n} \in E \) for every \( n \) . (a) Check that \( {T}^{\gamma }\left( {E, a}\right) = {T}^{I}\left( {E, a}\right) \) when \( {\left( \left( {t}_{n},{e}_{n}\right) \right) }_{n} \in \gamma \left( {E, a}\right) \) iff \( {e}_{n} = a \) for all \( n \) . (b) Check that \( {T}^{\gamma }\left( {E, a}\right) = {T}^{C}\left( {E, a}\right) \) when \( {\left( \left( {t}_{n},{e}_{n}\right) \right) }_{n} \in \gamma \left( {E, a}\right) \) iff \( {e}_{n} \in E \) for all \( n \) . (c) Give interpretations of \( {T}^{B}\left( {E, a}\right) \) and \( {T}^{M}\left( {E, a}\right) \) in terms of appropriate convergences \( \gamma \) . (d) Suppose that for every \( {\left( \left( {t}_{n},{e}_{n}\right) \right) }_{n} \in \gamma \left( {E, a}\right) \), every \( u \in X \), and every sequence \( \left( {u}_{n}\right) \rightarrow u \) satisfying \( {e}_{n} + {t}_{n}{u}_{n} \in E \) for all \( n \) one has \( {\left( \left( {t}_{n},{e}_{n} + {t}_{n}{u}_{n}\right) \right) }_{n} \in \gamma \left( {E, a}\right) \) . Show that \( {T}^{\gamma }\left( {E, a}\right) \) is convex in such a case. (See [790].) 6. Define a hypertangent cone associated with a convergence \( \gamma \) as in the preceding exercise and use it to display some calculus rules. 7. 
Let \( S \) be a subset of a Banach space \( X \) and let \( f : X \rightarrow \mathbb{R} \) be a Lipschitzian function that attains its minimum over \( S \) at \( \bar{x} \in S \) . Show that \( 0 \in {\partial }_{M}f\left( \bar{x}\right) + {N}_{M}\left( {S,\bar{x}}\right) \) . Compare this condition with the necessary condition using the Clarke subdifferential. [Hint: Use the penalization lemma and the sum rule of Theorem 5.85.] ## 5.6 Notes and Remarks The present chapter deals with notion
Definition 3.7. The theory \( \mathcal{T} \) is called categorical if all models of \( \mathcal{T} \) are isomorphic. ## Examples 3.8. Two models \( {G}_{1},{G}_{2} \) of elementary group theory are isomorphic as models if and only if they are isomorphic in the group-theoretic sense. Since there exist groups \( {G}_{1},{G}_{2} \) which are not isomorphic, elementary group theory is not categorical. 3.9. We form trivial group theory by adding the further axiom \( \left( {\forall x}\right) \mathcal{I}\left( {x, e}\right) \) to elementary group theory. A model of trivial group theory is a group of order 1. Any two such groups are isomorphic; thus trivial group theory is categorical. Observe that if \( {M}_{1},{M}_{2} \) are isomorphic models of the theory \( \mathcal{T} \) and if \( p \in \mathcal{L}\left( \mathcal{T}\right) \) is true for \( {M}_{1} \), then it is true for \( {M}_{2} \). The definition of categoricity, together with Theorem 3.4, immediately yields the next theorem. Theorem 3.10. If the theory \( \mathcal{T} \) is categorical, then \( \mathcal{T} \) is complete. We now generalise the concept of categoricity. We shall denote the cardinal of any set \( X \) by \( \left| X\right| \). Definition 3.11. The cardinal of a model \( M = \left( {M, v,\psi }\right) \) is the cardinal of its underlying set \( M \), and will be denoted by \( \left| M\right| \). Note that isomorphic models have the same cardinal. Definition 3.12. Let \( \chi \) be a cardinal number. The theory \( \mathcal{T} \) is called \( \chi \) -categorical, or categorical in cardinal \( \chi \), if all models of \( \mathcal{T} \) which have cardinal \( \chi \) are isomorphic. Example 3.13. Elementary group theory is categorical in cardinal 1. It is not categorical in cardinal 4, because there are two distinct isomorphism classes of groups of order 4. 
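The failure of categoricity in cardinal 4 asserted in Example 3.13 can be checked concretely. The sketch below (hypothetical code, not part of the text) writes down Cayley tables for representatives of the two isomorphism classes of groups of order 4, the cyclic group \( {\mathbf{Z}}_{4} \) and the Klein four-group, and compares their multisets of element orders, an invariant preserved by every group isomorphism:

```python
# Hypothetical illustration of Example 3.13 (not from the text): Z4 and
# the Klein four-group are both models of group theory of cardinal 4,
# yet their multisets of element orders differ, so no bijection between
# them preserves the multiplication.

def cyclic4():
    # Cayley table of Z4 = ({0,1,2,3}, + mod 4); identity is 0.
    return [[(a + b) % 4 for b in range(4)] for a in range(4)]

def klein4():
    # Cayley table of the Klein four-group: bitwise XOR on {0,1,2,3}
    # viewed as two-bit vectors; identity is 0.
    return [[a ^ b for b in range(4)] for a in range(4)]

def element_orders(table):
    """Sorted list of the orders of the elements; identity assumed at index 0."""
    orders = []
    for g in range(4):
        x, n = g, 1
        while x != 0:        # keep multiplying by g until we return to the identity
            x = table[x][g]
            n += 1
        orders.append(n)
    return sorted(orders)

print(element_orders(cyclic4()))  # [1, 2, 4, 4]
print(element_orders(klein4()))   # [1, 2, 2, 2]
```

Since the order multisets differ, the two groups are non-isomorphic models of the same cardinal, which is precisely the counterexample Example 3.13 invokes.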
Provided that \( \chi \) is a finite cardinal, there is in the language \( \mathcal{L}\left( \mathcal{T}\right) \) of any theory \( \mathcal{T} \) an element which specifies \( \chi \) as the cardinal of a model of \( \mathcal{T} \) . For if we denote by \( \operatorname{al}\left( n\right) \) the proposition \[ \left( {\exists {a}_{1}}\right) \cdots \left( {\exists {a}_{n}}\right) \left( { \sim \mathcal{I}\left( {{a}_{1},{a}_{2}}\right) \land \sim \mathcal{I}\left( {{a}_{1},{a}_{3}}\right) \land \cdots \land \sim \mathcal{I}\left( {{a}_{1},{a}_{n}}\right) }\right. \] \[ \left. {\land \sim \mathcal{I}\left( {{a}_{2},{a}_{3}}\right) \land \cdots \land \sim \mathcal{I}\left( {{a}_{n - 1},{a}_{n}}\right) }\right) , \] then any model of \( \mathcal{T} \) in which al \( \left( n\right) \) is true has at least \( n \) elements. Any model in which \( \operatorname{al}\left( n\right) \land \sim \operatorname{al}\left( {n + 1}\right) \) is true has exactly \( n \) elements. Theorem 3.14. Suppose the theory \( \mathcal{T} \) has models of arbitrarily large finite cardinal. Then \( \mathcal{T} \) has an infinite model. Proof: Let \( \mathcal{T} = \left( {\mathcal{R}, A, C}\right) \), and put \( {\mathcal{T}}^{\prime } = \left( {\mathcal{R},{A}^{\prime }, C}\right) \) where \( {A}^{\prime } = \) \( A \cup \left\{ {\operatorname{al}\left( n\right) \mid n \in {\mathbf{N}}^{ + }}\right\} \) . We show that \( {\mathcal{T}}^{\prime } \) is consistent. If \( {A}^{\prime }{ \vdash }_{\mathcal{I}}F \), then \( A \cup N{ \vdash }_{\mathcal{I}}F \) for some finite subset \( N \) of \( \left\{ {\left. {\operatorname{al}\left( n\right) }\right| \;n \in {\mathbf{N}}^{ + }}\right\} \) . Let \( {n}_{0} = \max \{ n \mid \operatorname{al}\left( n\right) \in N\} \) . By hypothesis, there exists a model \( M \) of \( \mathcal{T} \) with \( \left| M\right| \geq {n}_{0} \) . 
This \( M \) is a model of \( \left( {\mathcal{R}, A \cup N, C}\right) \), which contradicts the hypothesis \( A \cup N{ \vdash }_{\mathcal{I}}F \) . Hence \( {\mathcal{T}}^{\prime } \) is consistent, and so it has a model. Any model \( M \) of \( {\mathcal{T}}^{\prime } \) must satisfy \( \left| M\right| \geq n \) for all \( n \in {\mathbf{N}}^{ + } \), hence \( \left| M\right| \) is infinite. ## Exercises 3.15. \( R \) is a ring with 1 . Construct an elementary theory (i.e., one concerned with elements and not with subsets or maps) \( {\operatorname{Mod}}_{R} \), of unital (left) \( R \) -modules, such that the models of the theory are precisely all unital \( R \) - modules. (Hint: take each \( r \in R \) as a binary relation symbol, interpreting \( r\left( {{m}_{1},{m}_{2}}\right) \) as \( \left. {r{m}_{1} = {m}_{2}\text{.}}\right) \) 3.16. Construct an elementary theory of fields with constants 0,1 . In the language \( \mathcal{L}\left( \mathcal{F}\right) \) of this theory \( \mathcal{F} \), construct a proposition \( \operatorname{char}\left( n\right) \), which asserts that the characteristic divides \( n\left( {n \in {\mathbf{N}}^{ + }}\right) \) . Hence construct a theory \( {\mathcal{F}}_{0} \) of fields of characteristic 0 such that \( \mathcal{L}\left( {\mathcal{F}}_{0}\right) = \mathcal{L}\left( \mathcal{F}\right) \) and the set \( {A}_{0} \) of axioms of \( {\mathcal{F}}_{0} \) includes the set \( A \) of axioms of \( \mathcal{F} \) . Show that for each theorem \( p \) of \( {\mathcal{F}}_{0} \), there is a number \( {n}_{p} \in \mathbf{N} \) such that \( p \) is true for all fields of characteristic greater than \( {n}_{p} \) . Show that no set \( {A}_{1} \) of axioms, such that \( \mathcal{L}\left( \mathcal{F}\right) \supseteq {A}_{1} \supseteq A \) and \( {A}_{1} \) contains only finitely many elements of \( \mathcal{L}\left( \mathcal{F}\right) - A \), can axiomatise fields of characteristic 0 . Definition 3.17. 
The cardinal \( \left| \mathcal{T}\right| \) of the theory \( \mathcal{T} = \left( {\mathcal{R}, A, C}\right) \) is \( \left| {\mathcal{R} \cup A}\right| \) . \( \mathcal{T} \) is called finite if \( \mathcal{R} \cup A \) is finite. \( \mathcal{T} \) is called finitely axiomatised if \( A \) is finite. Since each element of \( C \) is a variable of some axiom, and since each axiom involves only finitely many variables, we have either that \( C \) and \( A \) are both finite, or that \( \left| C\right| \leq \left| A\right| \) . If \( \mathcal{R} \cup A \) is infinite, then \( \left| {\mathcal{L}\left( \mathcal{T}\right) }\right| = \left| {\mathcal{R} \cup A}\right| \) , while \( \mathcal{L}\left( \mathcal{T}\right) \) is countable if \( \mathcal{R} \cup A \) is finite. We remark that a relation symbol not occurring in any axiom would be of little interest, as it could be interpreted as any relation and so could occur in a theorem of \( \mathcal{T} \) only in an essentially trivial way. It would not be a serious restriction to require every relation symbol to appear in some axiom, in which case we would have either \( \mathcal{R} \) and \( A \) finite or \( \left| \mathcal{R}\right| \leq \left| A\right| = \left| {\mathcal{R} \cup A}\right| \) . When \( A \) is finite, the actual value of \( \left| A\right| \) is of no real interest, because an axiom set \( A = \left\{ {{a}_{1},\ldots ,{a}_{n}}\right\} \) can always be replaced by \( {A}^{\prime } = \left\{ {{a}_{1} \land \cdots \land {a}_{n}}\right\} \) without making any essential change in the theory. The following theorem is the main result of the present chapter, and is in fact the fundamental theorem of model theory. Theorem 3.18. (Löwenheim-Skolem Theorem). Let \( \mathcal{T} \) be a first-order theory of cardinal \( \chi \), and let \( \aleph \) be any infinite cardinal such that \( \aleph \geq \chi \) . Suppose \( \mathcal{T} \) has an infinite model. 
Then \( \mathcal{T} \) has a model of cardinal \( \aleph \) . Proof: Suppose \( \mathcal{T} = \left( {\mathcal{R}, A, C}\right) \) . Choose some set \( {V}_{0} \supset C \) such that \( \left| {{V}_{0} - C}\right| = \aleph \) . Then \( \left| {P\left( {{V}_{0},\mathcal{R}}\right) }\right| = \aleph \) . Put \[ {A}_{0}^{\prime } = I \cup A \cup \left\{ { \sim \mathcal{I}\left( {x, y}\right) \mid x, y \in {V}_{0} - C, x \neq y}\right\} . \] This gives a theory \( {\mathcal{T}}^{\prime } = \left( {\mathcal{R},{A}_{0}^{\prime },{V}_{0}}\right) \) which we prove consistent. If \( {\mathcal{T}}^{\prime } \) is inconsistent, then \( F \) is provable from \( A \) and some finite subset of \( \{ \sim \mathcal{I}\left( {x, y}\right) \mid \) \( \left. {x, y \in {V}_{0} - C, x \neq y}\right\} \), which contradicts the hypothesis that \( \mathcal{T} \) has an infinite model. Therefore \( {\mathcal{T}}^{\prime } \) is consistent. We follow the method used to prove the Satisfiability Theorem (cf Lemma 4.14 of Chapter IV), and construct inductively sets \( {V}_{n},{A}_{n}^{\prime } \) and \( {A}_{n} \) . We put \[ {V}_{n + 1} = {V}_{n} \cup \left\{ {{t}_{q}^{\left( n\right) } \mid q\left( x\right) \in P\left( {{V}_{n},\mathcal{R}}\right) ,\left( {\exists x}\right) q\left( x\right) \in {A}_{n}}\right\} \] \[ {A}_{n + 1}^{\prime } = {A}_{n} \cup \left\{ {q\left( {t}_{q}^{\left( n\right) }\right) \mid q\left( x\right) \in P\left( {{V}_{n},\mathcal{R}}\right) ,\left( {\exists x}\right) q\left( x\right) \in {A}_{n}}\right\} \] and take for \( {A}_{n + 1} \) a maximal consistent subset of \( P\left( {{V}_{n + 1},\mathcal{R}}\right) \) containing \( {A}_{n + 1}^{\prime } \) . We put \( {V}^{ * } = \mathop{\bigcup }\limits_{n}{V}_{n},{A}^{ * } = \mathop{\bigcup }\limits_{n}{A}_{n},{P}^{ * } = P\left( {{V}^{ * },\mathcal{R}}\right) = \mathop{\bigcup }\limits_{n}P\left( {{V}_{n},\mathcal{R}}\right) \) . 
Since \( {A}_{0} \) is a maximal consistent subset of \( P\left( {{V}_{0},\mathcal{R}}\right) \) containing \( {A}_{0}^{\prime } \), and since \( \left| {P\left( {{V}_{0},\mathcal{R}}\right) }\right| = \aleph \), we have \( \left| {A}_{0}\right| = \aleph \). Then, from \( \left| {P\left( {{V}_{n},\mathcal{R}}\right) }\right| = \left| {A}_{n}\right| = \aleph \), it follows that \( \left| {V}_{n + 1}\right| = \aleph \)