1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 7.3
Definition 7.3. A motion \( f\left( {p, t}\right) \) in Euclidean space is called recurrent if for any \( \varepsilon > 0 \), there exists a relatively dense set \( \left\{ {\tau }_{n}\right\} \) of shifts such that \[ \rho \left( {p, f\left( {p,{\tau }_{n}}\right) }\right) < \varepsilon . \] We consider the systems \[ \frac{du}{dt} = \frac{1}{F\left( {u, v}\right) },\;\frac{dv}{dt} = \frac{\mu }{F\left( {u, v}\right) }, \] (7.1) \[ \frac{du}{dt} = 1,\;\frac{dv}{dt} = f\left( {u, v}\right) , \] (7.2) \[ \frac{du}{dt} = g\left( {u, v}\right) ,\;\frac{dv}{dt} = f\left( {u, v}\right) , \] (7.3) and \[ \frac{du}{dt} = g\left( {u, v}\right) ,\;\frac{dv}{dt} = g\left( {u, v}\right) \cdot f\left( {u, v}\right) , \] (7.4) where \( f, g, F \in {C}^{n}\left( {n \geq 2}\right), F \neq 0, g \neq 0 \) ; \( F, f, g \) are doubly periodic functions of period 1, and the rotation number \( \mu \) is irrational. The theorems of Denjoy [10] and Siegel [11] imply that the above systems all have \( {T}^{2} \) as their minimal set. Let \( v = H\left( {u;p}\right) \) be a representation of the orbit of the systems (7.2) and (7.4) through the point \( p = \left( {\widetilde{u},\widetilde{v}}\right) \) . From [11], it follows that there exists a unique (in the mod 1 sense) continuous, monotonically increasing function \( h : R \rightarrow R \) such that \[ h\left( {v + 1}\right) = h\left( v\right) + 1, \] (7.5) \[ h \circ H\left( {1;\left( {0, v}\right) }\right) = \lambda + h\left( v\right) , \] (7.6) \[ H\left( {u;p}\right) = {\lambda u} + {h}^{-1} \circ H\left( {0;p}\right) + w\left( {u,{\lambda u} + {h}^{-1} \circ H\left( {0;p}\right) }\right) \] (7.7) for all \( u, v, p \), where the rotation number \( \lambda \) is irrational, and \( w \) is a doubly periodic function of period 1. In fact, \[ w\left( {y, z}\right) = H\left( {y;\left( {0, h\left( {z - {\lambda y}}\right) }\right) }\right) - z. \] LEMMA 7.1 [12]. 
Let \( {D}_{\delta }, D, A \) be subsets of the irrational numbers defined as follows: \( {D}_{\delta } = \{ \lambda \mid \) there exists \( C = C\left( \lambda \right) > 0 \) such that for any rational number \( p/q \) , \( p, q \in Z \) relatively prime, the inequality \( \left| {\lambda + p/q}\right| \geq C/{\left| q\right| }^{\delta } \) holds \( \} \) , \[ D = \mathop{\bigcap }\limits_{{\delta > 2}}{D}_{\delta }, \] \[ A = \left\{ {\lambda \mid \mathop{\lim }\limits_{{{N}_{2} \rightarrow \infty }}\mathop{\limsup }\limits_{{{N}_{1} \rightarrow + \infty }}\left\lbrack {\mathop{\sum }\limits_{\substack{{{a}_{i} \geq {N}_{2}} \\ {1 \leq i \leq {N}_{1}} }}\log \left( {1 + {a}_{i}}\right) /\mathop{\sum }\limits_{{1 \leq i \leq {N}_{1}}}\log \left( {1 + {a}_{i}}\right) }\right\rbrack = 0,}\right. \] where \( \left\lbrack {{a}_{0},{a}_{1},\ldots ,{a}_{n},\ldots }\right\rbrack \) is the infinite continued fraction \[ \text{representation of}\lambda \} \text{.} \] Then (i) \( A = D \), and the Lebesgue measure of \( R \smallsetminus A \) is zero. (ii) \( h \in {C}^{n - \delta } \) if \( n \geq 3,\delta > 1 \), and \( \mu \in A \) . LEMMA 7.2 [13]. Let \( M \) be compact. Then a necessary and sufficient condition for \( M \) to be the minimal set formed by an almost periodic orbit is that every orbit of the system is Liapunov stable with respect to \( M \) . Lemma 7.3. Let \( \delta > 1,\lambda \in {D}_{\delta }, k\left( \delta \right) = \left\lbrack \delta \right\rbrack + 1 + \left\lbrack {2\{ \delta \} }\right\rbrack \), where \( \left\lbrack \delta \right\rbrack \) and \( \{ \delta \} \) respectively denote the integer and fractional parts of \( \delta \) . Suppose \( F\left( {x, y}\right) \) is a doubly periodic function of period 1 in \( {C}^{k}\left( {k = k\left( \delta \right) }\right) \) . 
Then \[ \mathop{\lim }\limits_{{\left( \begin{array}{l} x \\ y \end{array}\right) \rightarrow \left( \begin{array}{l} {x}_{0} \\ {y}_{0} \end{array}\right) }}{\int }_{0}^{\theta }\left\lbrack {F\left( {x + s, y + {\lambda s}}\right) - F\left( {{x}_{0} + s,{y}_{0} + {\lambda s}}\right) }\right\rbrack {ds} = 0 \] (7.8) uniformly for all \( \theta \in R \) . Proof. From the definition of \( k \), we clearly have \( k \geq 2, 2\left( {\delta - k}\right) < - 1 \) . Hence, it follows that \[ \mathop{\sum }\limits_{{n \neq 0}}{\left| n\right| }^{2\left( {\delta - 1 - k}\right) } < + \infty \] (7.9) and \[ \mathop{\sum }\limits_{{m \neq 0}}\mathop{\sum }\limits_{{n \neq 0}}{\left| n\right| }^{2\left( {\delta - k}\right) } \cdot {\left| m\right| }^{-2} < + \infty . \] (7.10) Denote \[ {a}_{mn} = {\int }_{0}^{1}{\int }_{0}^{1}F\left( {x, y}\right) {e}^{-{2\pi }\imath \left( {{mx} + {ny}}\right) }{dxdy}. \] We then have \[ \mathop{\sum }\limits_{m}\mathop{\sum }\limits_{n}\left| {{a}_{mn}{m}^{l}{n}^{k - l}}\right| < + \infty ,\;l = 0,1,2,\ldots, k. 
\] (7.11) In particular, \[ \mathop{\sum }\limits_{n}{\left| {a}_{0n}{n}^{k}\right| }^{2} < + \infty \] (7.12) and \[ \mathop{\sum }\limits_{{m \neq 0}}{\left| {a}_{m0}{m}^{k}\right| }^{2} < + \infty . \] (7.13) Since \[ \mathop{\sum }\limits_{{m \neq 0}}\left| \frac{{a}_{m0}}{m}\right| = \mathop{\sum }\limits_{{m \neq 0}}\left| {{a}_{m0}{m}^{k}}\right| \cdot {\left| m\right| }^{-\left( {k + 1}\right) } \] \[ \leq {\left\lbrack \mathop{\sum }\limits_{{m \neq 0}}{\left| {a}_{m0}{m}^{k}\right| }^{2}\right\rbrack }^{1/2} \cdot {\left\lbrack \mathop{\sum }\limits_{{m \neq 0}}{\left| m\right| }^{-2\left( {k + 1}\right) }\right\rbrack }^{1/2}, \] \[ \mathop{\sum }\limits_{m}\mathop{\sum }\limits_{n}\left| {a}_{mn}\right| \cdot {\left| n\right| }^{\delta - 1} = \mathop{\sum }\limits_{m}\mathop{\sum }\limits_{{n \neq 0}}\left| {a}_{mn}\right| \cdot {\left| n\right| }^{\delta - 1} \] \[ = \mathop{\sum }\limits_{{n \neq 0}}\left| {a}_{0n}\right| {\left| n\right| }^{\delta - 1} + \mathop{\sum }\limits_{{m \neq 0}}\mathop{\sum }\limits_{{n \neq 0}}\left| {a}_{mn}\right| \cdot {\left| n\right| }^{\delta - 1} \] \[ \leq {\left\lbrack \mathop{\sum }\limits_{{n \neq 0}}{\left| {a}_{0n}{n}^{k}\right| }^{2}\right\rbrack }^{1/2}{\left\lbrack \mathop{\sum }\limits_{{n \neq 0}}{\left| n\right| }^{2\left( {\delta - 1 - k}\right) }\right\rbrack }^{1/2} \] \[ + {\left\lbrack \mathop{\sum }\limits_{{m \neq 0}}\mathop{\sum }\limits_{{n \neq 0}}{\left| {a}_{mn}m{n}^{k - 1}\right| }^{2}\right\rbrack }^{1/2}{\left\lbrack \mathop{\sum }\limits_{{m \neq 0}}\mathop{\sum }\limits_{{n \neq 0}}{\left| m\right| }^{-2}{\left| n\right| }^{2\left( {\delta - k}\right) }\right\rbrack }^{1/2}, \] and \[ \mathop{\sum }\limits_{{\left( \begin{matrix} m \\ n \end{matrix}\right) \neq \left( \begin{matrix} 0 \\ 0 \end{matrix}\right) }}\left| \frac{{a}_{mn}}{m + {\lambda n}}\right| = \mathop{\sum }\limits_{{m \neq 0}}\left| \frac{{a}_{m0}}{m}\right| + \mathop{\sum }\limits_{m}\mathop{\sum }\limits_{{n \neq 0}}\left| \frac{{a}_{mn}}{m + {\lambda n}}\right| 
\] \[ \leq \mathop{\sum }\limits_{{m \neq 0}}\left| \frac{{a}_{m0}}{m}\right| + \frac{1}{C\left( \lambda \right) }\mathop{\sum }\limits_{m}\mathop{\sum }\limits_{{n \neq 0}}\left| {a}_{mn}\right| \cdot {\left| n\right| }^{\delta - 1}. \] Consequently, from (7.13), we obtain \[ \mathop{\sum }\limits_{{m \neq 0}}\left| \frac{{a}_{m0}}{m}\right| < + \infty . \] (7.14) From (7.9)-(7.12), it follows that \[ \mathop{\sum }\limits_{m}\mathop{\sum }\limits_{n}\left| {a}_{mn}\right| \cdot {\left| n\right| }^{\delta - 1} < + \infty . \] (7.15) Moreover, from (7.14), (7.15), we have \[ \mathop{\sum }\limits_{{\left( \begin{matrix} m \\ n \end{matrix}\right) \neq \left( \begin{matrix} 0 \\ 0 \end{matrix}\right) }}\left| \frac{{a}_{mn}}{m + {\lambda n}}\right| < + \infty . \] (7.16) Hence \[ \left| {{\int }_{0}^{\theta }\left\lbrack {F\left( {x + s, y + {\lambda s}}\right) - F\left( {{x}_{0} + s,{y}_{0} + {\lambda s}}\right) }\right\rbrack {ds}}\right| \] \[ = \left| {\mathop{\sum }\limits_{m}\mathop{\sum }\limits_{n}{a}_{mn}\left\lbrack {{e}^{{2\pi }\imath \left( {{mx} + {ny}}\right) } - {e}^{{2\pi }\imath \left( {m{x}_{0} + n{y}_{0}}\right) }}\right\rbrack \cdot {\int }_{0}^{\theta }{e}^{{2\pi }\imath \left( {m + {\lambda n}}\right) s}{ds}}\right| \] \[ = \left| {\mathop{\sum }\limits_{{\left( \begin{matrix} m \\ n \end{matrix}\right) \neq \left( \begin{matrix} 0 \\ 0 \end{matrix}\right) }}\frac{{a}_{mn}}{{2\pi \imath }\left( {m + {\lambda n}}\right) }\left\lbrack {{e}^{{2\pi }\imath \left( {{mx} + {ny}}\right) } - {e}^{{2\pi }\imath \left( {m{x}_{0} + n{y}_{0}}\right) }}\right\rbrack \left\lbrack {{e}^{{2\pi }\imath \left( {m + {\lambda n}}\right) \theta } - 1}\right\rbrack }\right| \] \[ \leq \mathop{\sum }\limits_{{\left( \begin{matrix} m \\ n \end{matrix}\right) \neq \left( \begin{matrix} 0 \\ 0 \end{matrix}\right) }}\frac{1}{\pi }\left| \frac{{a}_{mn}}{m + {\lambda n}}\right| \cdot \left| {{e}^{{2\pi }\imath \left( {{mx} + {ny}}\right) } - {e}^{{2\pi }\imath \left( {m{x}_{0} + n{y}_{0}}\right) }}\right| . \] Since \[ \left| {{e}^{{2\pi }\imath \left( {{mx} + {ny}}\right) } - {e}^{{2\pi }\imath \left( {m{x}_{0} + n{y}_{0}}\right) }}\right| \leq 2 \] and \[ \mathop{\lim }\limits_{{\left( \begin{array}{l} x \\ y \end{array}\right) \rightarrow \left( \begin{array}{l} {x}_{0} \\ {y}_{0} \end{array}\right) }}\left\lbrack {{e}^{{2\pi }\imath \left( {{mx} + {ny}}\right) } - {e}^{{2\pi }\imath \left( {m{x}_{0} + n{y}_{0}}\right) }}\right\rbrack = 0, \] the conclusion of Lemma 7.3 follows from (7.16). THEOREM 7.1. For almost all \( \lambda \in R \) (with respect to the Lebesgue measure), all nonsingular \( {C}^{5} \) vector fields (7.4) on \( {T}^{2} \) satisfying \( \mu \left( f\right) = \lambda \) have \( {T}^{2} \) as the minimal set formed by an almost periodic orbit. Proof. Since the measure of \( R \smallsetminus A \) is zero (see Lemma 7.1), we only need to prove the theorem for the case \( \lambda \in A \) . Suppose \( \lambda \in A \) ; the theorems of Denjoy [10] and Siegel [11] imply that \( {T}^{2} \) is the minimal set for the above system. We now show that the system (7.4) has all its orbits Liapunov stable with respect to \( {T}^{2} \) .
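The small-divisor bound behind (7.16) can be observed numerically for a concrete Diophantine rotation number. The sketch below (an illustration, not part of the text; the cutoff 2000 and the threshold 0.38 are our choices) takes \( \lambda \) equal to the golden ratio, whose continued fraction is \( [1;1,1,\ldots] \), so that \( \lambda \in D_{\delta} \) for every \( \delta \geq 2 \); the quantity \( |m + \lambda n|\,|n| \) then stays bounded away from zero, which is the estimate \( |m + \lambda n| \geq C(\lambda)\,|n|^{1-\delta} \) with \( \delta = 2 \) used in the proof of Lemma 7.3.

```python
import math

# Sketch (not from the text): lambda = golden ratio, continued fraction
# [1; 1, 1, ...], a badly approximable number, hence in D_delta for all
# delta >= 2.  The proof of Lemma 7.3 bounds the small divisors in (7.16)
# by |m + lambda*n| >= C(lambda) * |n|^(1 - delta); with delta = 2 this
# means |m + lambda*n| * |n| is bounded away from 0.
lam = (1 + math.sqrt(5)) / 2

worst = min(
    abs(round(-lam * n) + lam * n) * n   # round(-lam*n) is the worst integer m
    for n in range(1, 2000)
)
print(worst)   # about 0.382 > 0: no small divisor degenerates
assert worst > 0.38
```

The infimum here is attained at \( n = 1 \) and is related to Hurwitz's constant \( 1/\sqrt{5} \approx 0.447 \), the limiting value along the Fibonacci denominators.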
1172_(GTM8)Axiomatic Set Theory
Definition 7.8
Definition 7.8. If \( A \) is a set then \[ {Df}\left( \mathbf{A}\right) \triangleq \left\{ {a \mid \exists {}^{\ulcorner }\varphi \left( x\right) {}^{\urcorner } \in {\operatorname{Fml}}^{1}\left( \mathbf{A}\right) \left\lbrack {a = \{ x \in A \mid \mathbf{A} \vDash \varphi \left( x\right) \} }\right\rbrack }\right\} . \] Remark. \( \;{Df}\left( \mathbf{A}\right) \) is the set of sets definable from \( \mathbf{A} \) (using elements from \( A \) as parameters). Here we need Theorem 7.6 to show that \( \{ x \in A \mid \mathbf{A} \vDash \varphi \left( x\right) \} \) is a set in \( {ZF} \) . Taking \( \varphi \left( x\right) \) as \( x = x \) we obtain the following. Theorem 7.9. \( A \in {Df}\left( \mathbf{A}\right) \) provided \( A \) is a set. Theorem 7.10. If \( A \) is a set then \( {Df}\left( \mathbf{A}\right) \) is a set definable in \( {\mathcal{L}}_{0} \) from \( A \) . If, in addition, \( M \) is a standard transitive model of \( {ZF} \) and \( \mathbf{A} \in M \) then \( {Df}\left( \mathbf{A}\right) \) is absolute with respect to \( M \) . Remark. For our general theory let \( \left\langle {{\mathbf{M}}_{\alpha } \mid \alpha \in {On}}\right\rangle \) be a sequence of transitive structures of \( \mathcal{L} \) such that \( {M}_{\alpha } = \left| {\mathbf{M}}_{\alpha }\right| \) is a set and the following three conditions are satisfied: 1. \( \alpha < \beta \rightarrow {\mathbf{M}}_{\alpha } \) is a substructure of \( {\mathbf{M}}_{\beta } \) . 2. \( {M}_{\alpha } = \mathop{\bigcup }\limits_{{\beta < \alpha }}{M}_{\beta } \) for \( \alpha \in {K}_{\mathrm{{II}}} \) . 3. \( {Df}\left( {\mathbf{M}}_{\alpha }\right) \subseteq {M}_{\alpha + 1} \) . Set \[ M = \mathop{\bigcup }\limits_{{\alpha \in {On}}}{M}_{\alpha },\;\mathbf{M} = \left\langle {M,{R}_{0}{}^{\mathbf{M}},\ldots }\right\rangle . \] Remark. Since each \( {M}_{\alpha },\alpha \in {On} \), is transitive, so is \( M \) . 
Moreover \( {R}_{i}{}^{\mathbf{M}} \) is defined by \[ {R}_{i}{}^{\mathbf{M}}\left( {{a}_{1},\ldots ,{a}_{{n}_{i}}}\right) \leftrightarrow {R}_{i}{}^{{\mathbf{M}}_{\alpha }}\left( {{a}_{1},\ldots ,{a}_{{n}_{i}}}\right) ,\;{a}_{1},\ldots ,{a}_{{n}_{i}} \in {M}_{\alpha }. \] In view of 1 this definition is unambiguous. Furthermore \( {\mathbf{M}}_{\alpha } \subseteq \mathbf{M} \) for each \( \alpha \in {On} \) . We now wish to prove that \( \mathbf{M} \) is a standard transitive model of \( {ZF} \) . Theorem 7.11. 1. \( {M}_{\alpha } \in {M}_{\alpha + 1} \) . 2. \( {M}_{\alpha } \in M \) . Proof. By Theorem 7.9 and condition 3 above. Remark. Since \( M \) is transitive, \( \mathbf{M} \) satisfies the axioms of extensionality and regularity. It is easy to check that the axioms of Pairing and Union hold in \( \mathbf{M} \) . Since \( \alpha \in {M}_{\alpha + 1} \), we have \( {On} \subseteq M \) and \( \omega \in M \) . Therefore \( \mathbf{M} \) satisfies the Axiom of Infinity. The main idea used for the proof of the remaining axioms is contained in the proof of the following proposition. Theorem 7.12. \( a \subseteq M \rightarrow \left( {\exists \alpha }\right) \left\lbrack {a \subseteq {M}_{\alpha }}\right\rbrack \) . Proof. From the Axiom Schema of Replacement it follows that \[ \left( {\exists \alpha }\right) \left\lbrack {\alpha = \mathop{\bigcup }\limits_{{x \in a}}{\mu }_{\beta }\left( {x \in {M}_{\beta }}\right) }\right\rbrack ; \] then \( a \subseteq {M}_{\alpha } \) . Remark. In order to prove the Axiom of Separation in \( \mathbf{M} \) we first prove a kind of reflection principle in \( \mathbf{M} \) . This proof requires several preliminary results. Definition 7.13. A function \( F : {On} \rightarrow {On} \) is semi-normal iff 1. \( \left( {\forall \alpha }\right) \left\lbrack {\alpha \leq F\left( \alpha \right) }\right\rbrack \) . 2. \( \left( {\forall \alpha ,\beta }\right) \left\lbrack {\alpha < \beta \rightarrow F\left( \alpha \right) \leq F\left( \beta \right) }\right\rbrack \) . 3. 
\( \left( {\forall \alpha \in {K}_{\mathrm{{II}}}}\right) \left\lbrack {F\left( \alpha \right) = \mathop{\bigcup }\limits_{{\beta < \alpha }}F\left( \beta \right) }\right\rbrack \) . Definition 7.14. A function \( F : {On} \rightarrow {On} \) is a normal function iff 1. \( \left( {\forall \alpha ,\beta }\right) \left\lbrack {\alpha < \beta \rightarrow F\left( \alpha \right) < F\left( \beta \right) }\right\rbrack \) . 2. \( \left( {\forall \alpha \in {K}_{\mathrm{{II}}}}\right) \left\lbrack {F\left( \alpha \right) = \mathop{\bigcup }\limits_{{\beta < \alpha }}F\left( \beta \right) }\right\rbrack \) . Remark. Every normal function is a strictly monotonic ordinal function. Since we have \( \alpha \leq F\left( \alpha \right) \) for every strictly monotonic ordinal function it follows that every normal function is also semi-normal. Theorem 7.15. If \( {F}_{1},\ldots ,{F}_{n} \) are semi-normal functions then \[ \left( {\forall \alpha }\right) \left( {\exists \beta > \alpha }\right) \left\lbrack {\beta = {F}_{1}\left( \beta \right) = \cdots = {F}_{n}\left( \beta \right) }\right\rbrack . \] Proof. We define an \( \omega \) -sequence \( \left\langle {{\alpha }_{m} \mid m \in \omega }\right\rangle \) by recursion: \[ {\alpha }_{1} = \alpha + 1,\;{\alpha }_{2} = {F}_{1}\left( {\alpha }_{1}\right) ,\ldots ,\;{\alpha }_{n + 1} = {F}_{n}\left( {\alpha }_{n}\right) , \] \[ {\alpha }_{k + 1} = {F}_{i}\left( {\alpha }_{k}\right) ,\;i \equiv k\left( {\;\operatorname{mod}\;n}\right), i = 1,\ldots, n. \] If \( \beta = \mathop{\bigcup }\limits_{{m \in \omega }}{\alpha }_{m} \) then \( \alpha < \beta \) and the sequence \( \left\langle {{\alpha }_{m} \mid m \in \omega }\right\rangle \) is nondecreasing. If \( \beta \in {K}_{\mathrm{I}} \) then for some \( m \in \omega \), \( \beta = {\alpha }_{m} = {\alpha }_{m + 1} = \cdots \), and hence \[ \beta = {F}_{1}\left( \beta \right) = \cdots = {F}_{n}\left( \beta \right) . 
\] If \( \beta \in {K}_{\mathrm{{II}}} \) then \[ {F}_{i}\left( \beta \right) = \mathop{\bigcup }\limits_{{k \in \omega }}{F}_{i}\left( {\alpha }_{k}\right) = \mathop{\bigcup }\limits_{{k \in \omega }}\left\{ {{\alpha }_{k + 1} \mid i \equiv k\left( {\;\operatorname{mod}\;n}\right) }\right\} \left( { = \mathop{\bigcup }\limits_{{k \in \omega }}{\alpha }_{{kn} + i + 1}}\right) = \beta . \] Theorem 7.16. If \( \varphi \left( {{a}_{0},\ldots ,{a}_{n}}\right) \) is a formula of \( \mathcal{L} \) then \[ \left( {\forall \alpha }\right) \left( {\exists \beta \geq \alpha }\right) \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\alpha }}\right) \left\lbrack {\mathbf{M} \vDash \left( {\exists x}\right) \varphi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \leftrightarrow \left( {\exists a \in {M}_{\beta }}\right) \left\lbrack {\mathbf{M} \vDash \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) }\right\rbrack }\right\rbrack . \] Proof. By Theorem 7.7, \[ \mathbf{M} \vDash \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) \] is expressible by a formula of \( {\mathcal{L}}_{0} \) . Using the fact that \( {M}_{\alpha } \) is a set we can then define \[ \beta = \mathop{\sup }\limits_{{{a}_{1},\ldots ,{a}_{n} \in {M}_{\alpha }}}{\mu }_{{\beta }^{\prime }}\left( {{\beta }^{\prime } \geq \alpha \land \left( {\exists a \in {M}_{{\beta }^{\prime }}}\right) \left\lbrack {\mathbf{M} \vDash \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) }\right\rbrack }\right) . \] This \( \beta \) has the desired properties. Theorem 7.17. For each formula \( \varphi \left( {{a}_{0},\ldots ,{a}_{n}}\right) \) of \( \mathcal{L} \) there exists a semi-normal function \( F \) such that \[ \left( {\forall \alpha }\right) \left( {\forall \beta }\right) \left\lbrack {\beta = F\left( \alpha \right) \rightarrow \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\alpha }}\right) \left\lbrack {\mathbf{M} \vDash \left( {\exists x}\right) \varphi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \leftrightarrow \left( {\exists a \in {M}_{\beta }}\right) \left\lbrack {\mathbf{M} \vDash \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) }\right\rbrack }\right\rbrack }\right\rbrack . \] Proof. From Theorem 7.16, \[ \left( {\exists \beta \geq \alpha }\right) \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\alpha }}\right) \left\lbrack {\mathbf{M} \vDash \left( {\exists x}\right) \varphi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \leftrightarrow \left( {\exists a \in {M}_{\beta }}\right) \left\lbrack {\mathbf{M} \vDash \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) }\right\rbrack }\right\rbrack ; \] therefore if \[ F\left( \alpha \right) = {\mu }_{\beta }\left( {\beta \geq \alpha \land \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\alpha }}\right) \left\lbrack {\mathbf{M} \vDash \left( {\exists x}\right) \varphi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \leftrightarrow \left( {\exists a \in {M}_{\beta }}\right) \left\lbrack {\mathbf{M} \vDash \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) }\right\rbrack }\right\rbrack }\right) \] then \( \alpha \leq F\left( \alpha \right) \) . Furthermore \( F \) is nondecreasing and continuous, hence semi-normal. Corollary 7.18. For each formula \( \varphi \left( {{a}_{0},\ldots ,{a}_{n}}\right) \) of \( \mathcal{L} \) there exists a semi-normal function \( F \) such that \[ \left( {\forall \beta }\right) \left\lbrack {\beta = F\left( \beta \right) \rightarrow \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\beta }}\right) \left\lbrack {\mathbf{M} \vDash \left( {\exists x}\right) \varphi \left( {x,{a}_{1},\ldots ,{a}_{n}}\right) \leftrightarrow \left( {\exists a \in {M}_{\beta }}\right) \left\lbrack {\mathbf{M} \vDash \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) }\right\rbrack }\right\rbrack }\right\rbrack . \] Theorem 7.19. 
For each formula \( \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) of \( \mathcal{L} \) there are finitely many semi-normal functions \( {F}_{1},\ldots ,{F}_{m} \) such that \[ \left( {\forall \beta }\right) \left\lbrack {\beta = {F}_{1}\left( \beta \right) = \cdots = {F}_{m}\left( \beta \right) \rightarrow \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\beta }}\right) \left\lbrack {\mathbf{M} \vDash \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) \leftrightarrow {\mathbf{M}}_{\beta } \vDash \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) }\right\rbrack }\right\rbrack . \]
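As a concrete illustration of Theorem 7.15 (an example of ours, not from the text), take \( n = 1 \) and the normal function \( F(\alpha) = \omega^{\alpha} \); the iteration in the proof, started at \( \alpha = 0 \), produces the first \( \varepsilon \)-number:

```latex
% Example (not from the text): n = 1, F(\alpha) = \omega^{\alpha}.
% The iteration from the proof of Theorem 7.15, starting at \alpha = 0:
\[
\alpha_{1} = 1,\quad
\alpha_{2} = \omega,\quad
\alpha_{3} = \omega^{\omega},\quad
\ldots,\quad
\beta = \bigcup_{m \in \omega} \alpha_{m}.
\]
% Here \beta \in K_{\mathrm{II}}, and by continuity of F at limits,
\[
F(\beta) = \omega^{\beta}
         = \bigcup_{k \in \omega} \omega^{\alpha_{k}}
         = \bigcup_{k \in \omega} \alpha_{k+1}
         = \beta ,
\]
% so \beta is a fixed point of F, namely \beta = \varepsilon_{0}.
```

This is exactly the mechanism of the proof: monotonicity makes the sequence nondecreasing, and continuity at the limit \( \beta \) turns the supremum into a common fixed point.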
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 5.5.7
Definition 5.5.7 A subset \( W \) of \( X \) is said to have the mean value property (MVP) with respect to the flow \( \varphi \) if, for all \( x \in X \) and all \( {t}_{1} < {t}_{2} \) with \( \varphi \left( {{t}_{i}, x}\right) \in W, i = 1,2 \), one has \( \varphi \left( {\left\lbrack {{t}_{1},{t}_{2}}\right\rbrack, x}\right) \subset W \) . We now introduce the following key concept: Definition 5.5.8 (Dynamically isolated critical set) Let \( \left( {M, f,\varphi }\right) \) be a pseudo-gradient flow. A subset \( S \) of the critical set \( K \) is said to be a dynamically isolated set if there exist a closed neighborhood \( \mathcal{O} \) of \( S \) and regular values \( \alpha < \beta \) of \( f \) such that \[ \mathcal{O} \subset {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \] and \[ \operatorname{cl}\left( \widetilde{\mathcal{O}}\right) \cap K \cap {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack = S, \] where \( \widetilde{\mathcal{O}} = \underset{t \in {R}^{1}}{ \cup }\varphi \left( {t,\mathcal{O}}\right) \) . We shall then say that \( \left( {\mathcal{O},\alpha ,\beta }\right) \) is an isolating triplet for \( S \) . Lemma 5.5.9 Let \( \left( {M, f,\varphi }\right) \) be a pseudo-gradient flow and \( K \) be the critical set of \( f \) . If \( U \) is a closed (MVP) neighborhood of \( S \) satisfying \( U \cap K = S \), then \( \left\lbrack S\right\rbrack = I\left( U\right) \) . If, further, there exist real numbers \( \alpha < \beta \) and a closed (MVP) set \( W \) satisfying \( U \subset W \subset {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \) and \( W \cap K = S \), then \( \exists T > 0 \) such that \( {G}^{T}\left( W\right) \subset \operatorname{int}\left( U\right) \) . Proof. 1. 
\( \left\lbrack S\right\rbrack = I\left( U\right) \) : " \( \subset \) " \( \forall x \in \left\lbrack S\right\rbrack \), by definition \( \omega \left( x\right) \cup {\omega }^{ * }\left( x\right) \subset S \); hence \( \exists {t}_{n}^{ \pm } \rightarrow \pm \infty \) such that \( \varphi \left( {{t}_{n}^{ \pm }, x}\right) \in U \), and then, by (MVP), \( \varphi \left( {\left\lbrack {{t}_{n}^{ - },{t}_{n}^{ + }}\right\rbrack, x}\right) \subset U \) . Since \( n \) is arbitrary, it follows that \( \varphi \left( {t, x}\right) \in U\;\forall t \in {R}^{1} \), i.e., \( x \in I\left( U\right) \) . " \( \supset \) " \( \forall x \in I\left( U\right) ,\varphi \left( {t, x}\right) \in U\;\forall t \in {R}^{1} \) . Since \( U \) is closed, \( \omega \left( x\right) \cup {\omega }^{ * }\left( x\right) \subset U \) . From Lemma 5.5.6, \( \omega \left( x\right) \cup {\omega }^{ * }\left( x\right) \subset K \), therefore \( \omega \left( x\right) \cup {\omega }^{ * }\left( x\right) \subset U \cap K = S \), i.e., \( x \in \left\lbrack S\right\rbrack \) . 2. \( {G}^{T}\left( W\right) \subset \operatorname{int}\left( U\right) \) : From the (PS) condition, \( \exists \delta \in \left( {0,1}\right) \) such that \( \operatorname{dist}\left( {x, K}\right) \geq \delta \) and \( \parallel {f}^{\prime }\left( x\right) \parallel \geq \delta \) for all \( x \in W \smallsetminus \operatorname{int}\left( U\right) \) . Set \( T > {\delta }^{-2}\left( {\beta - \alpha }\right) \) . We shall prove that \( \forall x \notin \operatorname{int}\left( U\right) \) \( \exists t \in \left\lbrack {-T, T}\right\rbrack \) such that \( \varphi \left( {t, x}\right) \notin W \) . We distinguish three cases: (a) \( x \notin W \) . By taking \( t = 0 \), we are done. (b) \( x \in W \smallsetminus \widetilde{\operatorname{int}\left( U\right) } \) . 
If \( \varphi \left( {\left\lbrack {-T, T}\right\rbrack, x}\right) \subset W \), then \[ f\left( {\varphi \left( {-T, x}\right) }\right) - f\left( {\varphi \left( {T, x}\right) }\right) = - {\int }_{-T}^{T}\left\langle {{f}^{\prime }\left( {\varphi \left( {s, x}\right) }\right) ,\dot{\varphi }\left( {s, x}\right) }\right\rangle {ds} \geq {2T}{\delta }^{2} > 2\left( {\beta - \alpha }\right) . \] The contradiction shows \( \varphi \left( {\left\lbrack {-T, T}\right\rbrack, x}\right) ⊄ W \) . (c) \( x \in \left( {\widetilde{\operatorname{int}\left( U\right) } \smallsetminus \operatorname{int}\left( U\right) }\right) \cap W = \widetilde{\operatorname{int}\left( U\right) } \cap W \smallsetminus \operatorname{int}\left( U\right) \) . Then either \( x \in \underset{t > 0}{ \cup }\varphi \left( {t,\operatorname{int}\left( U\right) }\right) \) or \( x \in \underset{t < 0}{ \cup }\varphi \left( {t,\operatorname{int}\left( U\right) }\right) \) . In the first case, by the (MVP) of \( U \), we have \( {t}_{1} \leq 0 \leq {t}_{2} \) such that \[ \varphi \left( {\left\lbrack {{t}_{1},{t}_{2}}\right\rbrack, x}\right) \subset \left( {\widetilde{\operatorname{int}\left( U\right) } \cap W}\right) \smallsetminus \operatorname{int}\left( U\right) \text{ and } \varphi \left( {{t}_{1} - \epsilon, x}\right) \in U,\;\varphi \left( {{t}_{2} + \epsilon, x}\right) \notin W \] for all \( \epsilon > 0 \) small. As above, we would have \( \beta - \alpha \geq {\delta }^{2}\left( {{t}_{2} - {t}_{1}}\right) \), so that \( {t}_{2} < T \) . Therefore \( \varphi \left( {\left\lbrack {-T, T}\right\rbrack, x}\right) ⊄ W \) . The second case is similar. Theorem 5.5.10 Let \( \left( {M, f,\varphi }\right) \) be a pseudo-gradient flow. If \( \left( {\mathcal{O},\alpha ,\beta }\right) \) is an isolating triplet for a dynamically isolated critical set \( S \) for \( f \), then \( \left\lbrack S\right\rbrack \) is an isolated invariant set. 
Moreover, any closed MVP neighborhood \( U \) of \( \left\lbrack S\right\rbrack \), satisfying \( U \subset {cl}\left( \widetilde{\mathcal{O}}\right) \cap \) \( {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \), is an isolating neighborhood for \( \left\lbrack S\right\rbrack \), and \( U \in \sum \) . Proof. 1. Applying Lemma 5.5.9, \( W = \operatorname{cl}\left( \widetilde{\mathcal{O}}\right) \cap {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \in \sum \), is an isolated invariant neighborhood of \( \left\lbrack S\right\rbrack \) . 2. To prove that \( U \in \sum \) is an isolating neighborhood for \( \left\lbrack S\right\rbrack \), it is sufficient to show that \( \left\lbrack S\right\rbrack = I\left( U\right) \subset {G}^{T}\left( U\right) \subset \operatorname{int}\left( U\right) \) for some \( T > 0 \) . Since \[ \left\lbrack S\right\rbrack = I\left( \left\lbrack S\right\rbrack \right) \subset I\left( U\right) \subset I\left( W\right) = \left\lbrack S\right\rbrack , \] where \( W = \operatorname{cl}\left( \widetilde{\mathcal{O}}\right) \cap {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \), we obtain \( \left\lbrack S\right\rbrack = I\left( U\right) \) . Again by Lemma 5.5.9, we have \( {G}^{T}\left( U\right) \subset {G}^{T}\left( W\right) \subset \operatorname{int}\left( U\right) \) . Example 1. If \( c \) is an isolated critical value, i.e., \( {K}_{c} = K \cap {f}^{-1}\left( c\right) \neq \varnothing \), and there is no critical point on the levels in \( \left\lbrack {c - \epsilon, c + \epsilon }\right\rbrack \smallsetminus \{ c\} \) for some \( \epsilon > 0 \), then the set \( {K}_{c} \) is a dynamically isolated critical set. Example 2. If \( {x}_{0} \) is an isolated critical point of \( f \), then \( S = \left\{ {x}_{0}\right\} \) is a dynamically isolated critical set. 
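The decay estimate behind case (b) of Lemma 5.5.9 can be watched numerically on Example 2. The sketch below is our illustration, not part of the text: the function \( f(x,y) = x^2 - y^2 \), the window \( [\alpha,\beta] = [-1,1] \), and all numerical values are assumptions of the example. Along the negative gradient flow, \( \frac{d}{dt} f(\varphi(t,x)) = -\| f'(\varphi)\|^2 \), so an orbit starting off the stable manifold of the isolated critical point \( (0,0) \) must leave \( f^{-1}[\alpha,\beta] \) in bounded time.

```python
import numpy as np

# Illustration (not from the text): f(x, y) = x^2 - y^2 has the isolated
# critical point (0, 0), so S = {(0,0)} is a dynamically isolated critical
# set.  Along the negative gradient flow, (d/dt) f(phi(t, x)) = -|f'|^2,
# the decay estimate used in case (b) of Lemma 5.5.9.

f = lambda p: p[0] ** 2 - p[1] ** 2

def grad_f(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def flow(p, t, dt=1e-3):
    """Forward Euler for the negative gradient flow  dphi/dt = -f'(phi)."""
    p = np.array(p, dtype=float)
    for _ in range(int(round(t / dt))):
        p -= dt * grad_f(p)
    return p

alpha, beta = -1.0, 1.0
p0 = (0.3, 0.3)                 # off the stable manifold {y = 0} of (0, 0)
p1 = flow(p0, 1.0)
print(f(p0), f(p1))             # 0.0  ->  about -4.9
assert f(p1) < alpha            # the orbit has left f^{-1}[alpha, beta]
```

Here \( f \) drops by roughly \( \|f'\|^2 \) per unit time once the orbit is away from the critical set, which is precisely why \( T > \delta^{-2}(\beta - \alpha) \) suffices in the lemma.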
## 5.5.2 Index Pair and Conley Index

As we have seen in Morse theory, isolating neighborhoods alone are not enough to characterize isolated critical points; the local dynamical behavior provides the necessary additional information. This leads us to: Definition 5.5.11 Let \( \left( {N, L}\right) \) be a pair of subspaces of \( X \) with \( L \subset N \) . The subset \( L \) is called positively invariant in \( N \) with respect to the flow \( \varphi \) if \( x \in L \) and \( \varphi \left( {\left\lbrack {0, t}\right\rbrack, x}\right) \subset N \) imply \( \varphi \left( {\left\lbrack {0, t}\right\rbrack, x}\right) \subset L \) . It is called an exit set of \( N \) if, for every \( x \in N \) for which \( \exists {t}_{1} > 0 \) with \( \varphi \left( {{t}_{1}, x}\right) \notin N \), there exists \( {t}_{0} \in \left\lbrack {0,{t}_{1}}\right) \) such that \( \varphi \left( {\left\lbrack {0,{t}_{0}}\right\rbrack, x}\right) \subset N \) and \( \varphi \left( {{t}_{0}, x}\right) \in L \) . Example. Let \( \left( {M, f,\varphi }\right) \) be a pseudo-gradient flow, and let \( \alpha < \beta < \gamma \) . Let \( N = {f}^{-1}\left\lbrack {\alpha ,\gamma }\right\rbrack \) and \( L = {f}^{-1}\left\lbrack {\alpha ,\beta }\right\rbrack \) . Then \( L \) is positively invariant in \( N \), and is also an exit set of \( N \) . To an isolated invariant set \( S \), we introduce: Definition 5.5.12 For \( U \in \sum \), let \( \left( {N, L}\right) \) be a pair of closed subsets of \( U \) with \( L \subset N \) . It is called an index pair relative to \( U \) if: (1) \( \overline{N \smallsetminus L} \in \sum \) , (2) \( L \) is positively invariant in \( N \) , (3) \( L \) is an exit set of \( N \) , (4) \( \overline{N \smallsetminus L} \subset U \) and \( \exists T > 0 \) such that \( {G}^{T}\left( U\right) \subset \overline{N \smallsetminus L} \) . 
According to the definition \( S \mathrel{\text{:=}} I\left( U\right) = I\left( {N \smallsetminus L}\right) \) is an isolated invariant set, and both \( U \) and \( \overline{N \smallsetminus L} \) are isolating neighborhoods of \( S \) . Example. Let \( \left( {M, f,\varphi }\right) \) be a pseudo-gradient flow. If \( \left( {\mathcal{O},\alpha ,\beta }\right) \) and \( \left( {{\mathcal{O}}^{\prime },{\alpha }^{\prime },{\beta }^{\prime }}\right) \) are two isolating triplets for a dynamically isolated critical set \( S \) for \( f \), with \( {\mathcal{O}}^{\prime } \subset \mathcal{O},\left\lbrack {{\alpha }^{\prime },{\beta }^{\prime }}\right\rbrack \subset \left\lbrack {\alpha ,\beta }\right\rbrack \), then \( \exists T > 0 \) such that \( N = {G}^{T}\left( W\right), L = \varphi \left( {-T,{W}_{ - }}\
1139_(GTM44)Elementary Algebraic Geometry
Definition 2.7
Definition 2.7. Let \( V \subset {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) be a nonempty variety and let \( P \) be any point of \( V \) . The dimension of \( V \) at \( P \), written \( {\dim }_{P}V \), is the dimension of any affine representative of \( V \) at the corresponding affine representative of \( P \) . Or, equivalently, \( {\dim }_{P}V = \left( {{\dim }_{Q}\mathrm{H}\left( V\right) }\right) - 1 \), where \( \mathrm{H}\left( V\right) \) is the homogeneous variety in \( {\mathbb{C}}^{n + 1} \) corresponding to \( V \), and \( Q \) is any nonzero point on the 1-subspace of \( {\mathbb{C}}^{n + 1} \) corresponding to \( P \) . The dimension of \( V \) , written \( \dim V \), is \( \mathop{\max }\limits_{{P \in V}}\left( {{\dim }_{P}V}\right) \) . (Again, we define \( \dim \varnothing \) to be -1 .) We then obviously have Theorem 2.8. If \( V \subset {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) is a projective variety, then \( V \) has a dimension at each point and a dimension. If \( V \) is irreducible, we can prove more: Theorem 2.9. If \( V \) is any irreducible variety of \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \), then every point of \( V \) has the same dimension. Corollary 2.10. Let \( V \) be a variety of \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \), and let \( P \) be any point of \( V \) . Then \( {\dim }_{P}V \) is the largest of the dimensions of those irreducible components of \( V \) which contain \( P;\dim V \) is the largest of the dimensions of \( V \) ’s irreducible components. Proof of Theorem 2.9. Since for a projective variety \( V \subset {\mathbb{P}}^{n}\left( \mathbb{C}\right) \), any two points of \( V \) lie in some one affine representative of \( V \), it clearly suffices to assume that \( V \) is affine. 
Thus let \( V \subset {\mathbb{C}}^{n} \), and let \( r = \mathop{\max }\limits_{{P \in V}}\left( {\operatorname{rank}\left( {J{\left( V\right) }_{P}}\right) }\right) \) . Then our theorem says that the set of points of \( V \) where the rank is \( r \), is dense in \( V \) . Now the set of points \( P \) where \( \operatorname{rank}\left( {J{\left( V\right) }_{P}}\right) \) is strictly less than \( r \) forms a proper subvariety of \( V \), since these points form the zero-set of the collection of \( \left( {r \times r}\right) \) minors of our " \( \left( {\infty \times n}\right) \) " array, and each such minor is a polynomial in \( {X}_{1},\ldots ,{X}_{n} \) . Hence it suffices to show that for any subvariety \( W \) of an irreducible variety \( V, V \smallsetminus W \) is dense in \( V \) ; or, equivalently, if a subvariety \( {V}^{\prime } \) of an irreducible variety \( V \) contains an open set of \( V \), then \( V = {V}^{\prime } \) . This follows at once from Theorem 2.11. Theorem 2.11 (Identity theorem for irreducible varieties). Let \( {V}_{1},{V}_{2} \) be irreducible varieties (in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or in \( {\mathbb{C}}^{n} \) ), and let \( U \) be any open set (in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) ). If \[ {V}_{1} \cap U = {V}_{2} \cap U \neq \varnothing \] then \( {V}_{1} = {V}_{2} \) . Proof of Theorem 2.11. Since any variety in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) may be represented by an affine variety in \( {\mathbb{C}}^{n + 1} \), we may without loss of generality consider only the affine case. For this, it suffices to prove that any polynomial in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) which is zero on an open subset of \( {V}_{1} \) is zero on all of \( {V}_{1} \), for then, likewise, it is zero on all of \( {V}_{2} \), hence \( {V}_{2} \subset {V}_{1} \) . 
Similarly, \( {V}_{1} \subset {V}_{2} \), so \( {V}_{1} \) would equal \( {V}_{2} \) . Since the value at any point in \( {V}_{1} \) of an arbitrary polynomial in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) coincides with the value of that polynomial \( \bmod \mathrm{J}\left( {V}_{1}\right) \), it is enough to show this: Let \( p \) be any element in \( {V}_{1} \) ’s coordinate ring \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathrm{J}\left( {V}_{1}\right) = \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) which vanishes on an open subset of \( {V}_{1} \) ; then \( p \) vanishes on all of \( {V}_{1} \) . Now from Theorem 2.4, any open set of \( {V}_{1} \) contains some point (0) of dimension \( d \), such that after renumbering coordinates if necessary, the part of \( {V}_{1} \) near (0) is the graph of a function analytic in a neighborhood of \( \left( 0\right) \in {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d}}\left( { \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}}}\right) \) . Hence the natural projection on \( {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d}} \) of the part of \( {V}_{1} \) near (0) is an open set of \( {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d}} \) . We want to show that \( p = p\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is the zero polynomial. A point \( \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) is in \( {V}_{1} \) iff \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \rightarrow \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) defines a \( \mathbb{C} \) -homomorphism of \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) ; by hypothesis, for each \( \left( {{a}_{1},\ldots ,{a}_{n}}\right) \in {V}_{1} \) near \( \left( 0\right) \), \( p\left( {{a}_{1},\ldots ,{a}_{n}}\right) = 0 \) ; that is, \( p \) is in the kernel of each such specialization of \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . 
It is easily seen that we may assume \( \left\{ {{x}_{1},\ldots ,{x}_{d}}\right\} \) is a transcendence base of \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . (Note that within some neighborhood \( N \) of \( \left( 0\right) \in {\mathbb{C}}^{n} \), above each point of \( N \cap {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d}} \) there \( {\pi }_{{X}_{d + 1},\ldots ,{X}_{n}} \) -lies just one point of \( {V}_{1} \) ; a higher transcendence degree would yield, for any \( N \), infinitely many points above most points of \( N \cap {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d}} \) .) If \( p \) were not the zero polynomial, it would satisfy a minimal equation \[ {q}_{0}{p}^{m} + \ldots + {q}_{m} = 0,\;\text{ where }\;{q}_{i} \in \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{d}}\right\rbrack ; \] (1) note that by minimality,

(2.12) \( {q}_{m} \) cannot be the zero polynomial.

Since \( p\left( {{a}_{1},\ldots ,{a}_{n}}\right) = 0 \), (1) implies that \( {q}_{m}\left( {{a}_{1},\ldots ,{a}_{d}}\right) = 0 \) . But since \( \left\{ {{x}_{1},\ldots ,{x}_{d}}\right\} \) is a transcendence base, \( \left( {{a}_{1},\ldots ,{a}_{d}}\right) \) may be arbitrarily chosen in this specialization, so \( {q}_{m} = 0 \) throughout some neighborhood of \( \left( 0\right) \in {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d}} \) . It is then easily proved that \( {q}_{m} \) is the zero polynomial, and this is a contradiction to (2.12). Therefore \( p \) is the zero polynomial in \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \), which is what we wanted to prove. Hence Theorem 2.11 is proved, and therefore also Theorem 2.9. We next translate dimension into purely algebraic terms, based on the transcendence degree of an affine irreducible variety's coordinate ring (Theorem 2.14). 
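The shape of the minimal equation (1) can be checked on a concrete irreducible variety. The sketch below (mine, not the book's) takes \( V = \mathrm{V}\left( {{X}^{2} + {Y}^{2} - 1}\right) \subset {\mathbb{C}}^{2} \), for which \( \left\{ x\right\} \) is a transcendence base of the coordinate ring and the element \( p = y \) satisfies \( {p}^{2} + \left( {{x}^{2} - 1}\right) = 0 \), with \( {q}_{m} = {x}^{2} - 1 \) not the zero polynomial; the relation is verified at sampled real points of \( V \):

```python
import math

# On V(X^2 + Y^2 - 1), the element p = y satisfies the minimal
# equation p^2 + (x^2 - 1) = 0, of the shape q_0 p^m + ... + q_m = 0
# in equation (1), with q_m(x) = x^2 - 1 not the zero polynomial.
for t in [0.1, 0.5, 1.3, 2.0]:
    a1, a2 = math.cos(t), math.sin(t)    # a point (a1, a2) of V
    q_m = a1**2 - 1                      # q_m evaluated at a1
    # the specialization x -> a1, y -> a2 kills the relation:
    assert abs(a2**2 + q_m) < 1e-12
```

If \( p \) also vanished on an open subset of \( V \), the same specialization argument would force \( {q}_{m} \) to vanish on an open set of the base, which is the contradiction the proof exploits.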
This characterization often yields simple proofs of dimensional properties, and extends naturally to a definition for varieties over an arbitrary field (where ideas like "smoothness" may not be so readily available). To prove Theorem 2.14, we use the following result (cf. Lemma 10.4 of Chapter II). Theorem 2.13. Let \( V \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}} \) be a nonempty irreducible variety. Suppose \( V \) ’s coordinate ring \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) has transcendence base \( \left\{ {{x}_{1},\ldots ,{x}_{d}}\right\} \) . Let (0) be a typical point of \( V \), and suppose that \( V \cap {\mathbb{C}}_{{X}_{d + 1},\ldots ,{X}_{n}} \) consists of only finitely many points. Recall that any product of disks is called a polydisk. Then for each polydisk \( {\Delta }^{n - d} \subset {\mathbb{C}}_{{X}_{d + 1},\ldots ,{X}_{n}} \) centered at (0), there is a polydisk \( {\Delta }^{d} \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d}} \) centered at (0) such that above each point \( a \) in \( {\Delta }^{d} \) there is a point of \( V \) in \( a \times {\Delta }^{n - d} \) . Proof. Note that if \( V \) is of codimension 1, the theorem follows immediately from Lemma 10.4 of Chapter II. For arbitrary codimension, the proof can easily be reduced to the codimension-one case, as follows: First, the standard proof of the theorem of the primitive element (as given, for instance, in [van der Waerden, Vol. I, Section 40]) shows that some \( \mathbb{C} \) -linear combination of \( {x}_{d + 1},\ldots ,{x}_{n} \) is a primitive element for the extension \( \mathbb{C}\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) over \( \mathbb{C}\left( {{x}_{1},\ldots ,{x}_{d}}\right) \) . Without loss of generality, assume that coordinates in \( {\mathbb{C}}_{{X}_{d + 1},\ldots ,{X}_{n}} \) have been chosen so that \( {x}_{d + 1} \) is such a primitive element. 
Then each of \( {x}_{d + 2},\ldots ,{x}_{n} \) is a rational function of \( {x}_{1},\ldots ,{x}_{d + 1} \) . If \( {V}^{\prime } \) is the variety in \( {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{d + 1}} \) with generic point \( \left( {{x}_{1},\ldots ,{x}_{d + 1}}\right) \), then over each point of \( {V}^{\prime } \) near (0) there \( {\pi }_{{X}_{d + 2},\ldots ,{X}_{n}} \) -lies just one point of \( V \) . We are thus led back to the codimension-one case. We may now prove Theorem 2.14, which translates dimension into purely algebraic terms. First, if \( R \) is any integral domain containing a field \( k \), th
1042_(GTM203)The Symmetric Group
Definition 3.8.2
Definition 3.8.2 Partial skew tableaux \( P \) and \( Q \) are dual equivalent, written \( P\overset{ * }{ \cong }Q \), if whenever we apply the same slide sequence to both \( P \) and \( Q \), we get resultant tableaux of the same shape. - Note that the empty slide sequence can be applied to any tableau, so \( P\overset{ * }{ \cong }Q \) implies that \( \operatorname{sh}P = \operatorname{sh}Q \) . The next result is proved directly from our definitions. Lemma 3.8.3 Let \( P\overset{ * }{ \cong }Q \) . If applying the same sequence of slides to both tableaux yields \( {P}^{\prime } \) and \( {Q}^{\prime } \), then \( {P}^{\prime }\overset{ * }{ \cong }{Q}^{\prime } \) . ∎ One tableau may be used to determine a sequence of slides for another as follows. Let \( P \) and \( Q \) have shapes \( \mu /\nu \) and \( \lambda /\mu \), respectively. Then the cells of \( Q \), taken in the order determined by \( Q \) ’s entries, are a sequence of backward slides on \( P \) . Let \( {j}_{Q}\left( P\right) \) denote the result of applying this slide sequence to \( P \) . Also let \( V = {v}_{Q}\left( P\right) \) stand for the tableau formed by the sequence of cells vacated during the construction of \( {j}_{Q}\left( P\right) \), i.e., \[ {V}_{i, j} = k\;\text{if}\left( {i, j}\right) \text{was vacated when filling the cell of}k \in Q \] (3.15) Displaying the elements of \( \mathbf{P} \) as boldface and those of \( Q \) in normal type, we can compute an example. [Figure omitted: the example computation.] With \( P \) and \( Q \) as before, the entries of \( P \) taken in reverse order define a sequence of forward slides on \( Q \) . Performing this sequence gives a tableau denoted by \( {j}^{P}\left( Q\right) \) . The vacating tableau \( V = {v}^{P}\left( Q\right) \) is still defined by equation (3.15), with \( P \) in place of \( Q \) . 
The reader can check that using our example tableaux we obtain \( {j}^{P}\left( Q\right) = {v}_{Q}\left( P\right) \) and \( {v}^{P}\left( Q\right) = {j}_{Q}\left( P\right) \) . This always happens, although it is not obvious. The interested reader should consult [Hai 92]. From these definitions it is clear that we can generalize equations (3.13) and (3.14). Letting \( V = {v}^{P}\left( Q\right) \) and \( W = {v}_{Q}\left( P\right) \), we have \[ {j}^{W}{j}_{Q}\left( P\right) = P\text{ and }{j}_{V}{j}^{P}\left( Q\right) = Q. \] (3.16) To show that dual equivalence and dual Knuth equivalence are really the same, we have to concentrate on small tableaux first. Definition 3.8.4 A partial tableau \( P \) is miniature if \( P \) has exactly three elements. - The miniature tableaux are used to model the dual Knuth relations of the first and second kinds. Proposition 3.8.5 Let \( P \) and \( Q \) be distinct miniature tableaux of the same shape \( \lambda /\mu \) and content. Then \[ P\overset{{K}^{ * }}{ \cong }Q \Leftrightarrow P\overset{ * }{ \cong }Q. \] Proof. Without loss of generality, let \( P \) and \( Q \) be standard. " \( \Rightarrow \) " By induction on the number of slides, it suffices to show the following. Let \( c \) be a cell for a slide on \( P, Q \) and let \( {P}^{\prime },{Q}^{\prime } \) be the resultant tableaux. Then we must have \[ \operatorname{sh}{P}^{\prime } = \operatorname{sh}{Q}^{\prime }\text{ and }{P}^{\prime }\overset{{K}^{ * }}{ \cong }{Q}^{\prime }. \] (3.17) This is a tedious case-by-case verification. First, we must write down all the skew shapes with 3 cells (up to those translations that do not affect slides, so the number of diagrams will be finite). Then we must find the possible tableau pairs for each shape (there will be at most two pairs, corresponding to \( \overset{{1}^{ * }}{ \cong } \) and \( \overset{{2}^{ * }}{ \cong } \) ). Finally, all possible slides must be tried on each pair. We leave the details to the reader. 
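The case analysis lends itself to mechanical checking. Below is a sketch (the data layout and function name are mine, not the book's) of a jeu de taquin backward slide on tableaux stored as dictionaries from (row, column) to entry; it verifies, for the unique pair of standard tableaux of shape (2,1), that each of the three possible slides produces tableaux of equal shape, the first half of condition (3.17):

```python
def backward_slide(T, c):
    """One jeu de taquin backward slide into the empty outer cell c.

    T maps (row, col) -> entry, rows increasing downward; returns the
    slid tableau and the inner cell finally vacated (cf. (3.15)).
    """
    T = dict(T)
    i, j = c
    while True:
        up, left = T.get((i - 1, j)), T.get((i, j - 1))
        if up is None and left is None:
            return T, (i, j)      # the hole has reached an inner corner
        # move the larger neighbor into the hole, keeping rows and
        # columns increasing (entries of a standard tableau are distinct)
        if left is None or (up is not None and up > left):
            T[(i, j)] = up
            del T[(i - 1, j)]
            i -= 1
        else:
            T[(i, j)] = left
            del T[(i, j - 1)]
            j -= 1

# the unique pair of standard tableaux of shape (2,1):
P = {(0, 0): 1, (0, 1): 2, (1, 0): 3}   # 1 2 / 3
Q = {(0, 0): 1, (0, 1): 3, (1, 0): 2}   # 1 3 / 2

# each of the three possible slides yields equal resulting shapes:
for c in [(0, 2), (1, 1), (2, 0)]:
    TP, _ = backward_slide(P, c)
    TQ, _ = backward_slide(Q, c)
    assert set(TP) == set(TQ)
```

The same routine, run over all three-cell skew shapes and all slide cells, would complete the verification left to the reader.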
However, we will do one of the cases as an illustration. Suppose that \( \lambda = \left( {2,1}\right) \) and \( \mu = \varnothing \) . Then the only pair of tableaux of this shape is \[ P = \begin{array}{ll} 1 & 2 \\ 3 & \end{array}\text{ and }Q = \begin{array}{ll} 1 & 3 \\ 2 & \end{array} \] or vice versa, and \( P\overset{{1}^{ * }}{ \cong }Q \) . The results of the three possible slides on \( P, Q \) are given in the following table, from which it is easy to verify that (3.17) holds. [Table omitted: the three slide results for \( P \) and \( Q \).] " \( \Leftarrow \) " Let \( N \) be a normal standard tableau of shape \( \mu \) . So \( {P}^{\prime } = {j}^{N}\left( P\right) \) and \( {Q}^{\prime } = {j}^{N}\left( Q\right) \) are normal miniature tableaux. Now \( P\overset{ * }{ \cong }Q \) implies that \( \operatorname{sh}{P}^{\prime } = \operatorname{sh}{Q}^{\prime } \) . This hypothesis also guarantees that \( {v}^{N}\left( P\right) = {v}^{N}\left( Q\right) = V \), say. Applying equation (3.16), \[ {j}_{V}\left( {P}^{\prime }\right) = P \neq Q = {j}_{V}\left( {Q}^{\prime }\right) \] which gives \( {P}^{\prime } \neq {Q}^{\prime } \) . Thus \( {P}^{\prime } \) and \( {Q}^{\prime } \) are distinct miniature tableaux of the same normal shape. The only possibility is, then, \[ \left\{ {{P}^{\prime },{Q}^{\prime }}\right\} = \left\{ {\begin{array}{ll} 1 & 2 \\ 3 & \end{array},\;\begin{array}{ll} 1 & 3 \\ 2 & \end{array}}\right\} . \] Since \( {P}^{\prime }\overset{{1}^{ * }}{ \cong }{Q}^{\prime } \), we have, by what was proved in the forward direction, \[ P = {j}_{V}\left( {P}^{\prime }\right) \overset{{K}^{ * }}{ \cong }{j}_{V}\left( {Q}^{\prime }\right) = Q.\blacksquare \] To make it more convenient to talk about miniature subtableaux of a larger tableau, we make the following definition. Definition 3.8.6 Let \( P \) and \( Q \) be standard skew tableaux with \[ \operatorname{sh}P = \mu /\nu \vdash m\;\text{ and }\;\operatorname{sh}Q = \lambda /\mu \vdash n. 
\] Then \( P \cup Q \) denotes the tableau of shape \( \lambda /\nu \vdash m + n \) such that \[ {\left( P \cup Q\right) }_{c} = \left\{ \begin{array}{ll} {P}_{c} & \text{ if }c \in \mu /\nu \\ {Q}_{c} + m & \text{ if }c \in \lambda /\mu \end{array}\right. \] Using the \( P \) and \( Q \) on page 118, we have \[ P \cup Q = \begin{array}{lll} 1 & 2 & 3 \\ 4 & 5 & 7 \\ 6 & & \end{array}. \] We need one more lemma before the main theorem of this section. Lemma 3.8.7 ([Hai 92]) Let \( V, W, P \), and \( Q \) be standard skew tableaux with \[ \operatorname{sh}V = \mu /\nu ,\;\operatorname{sh}P = \operatorname{sh}Q = \lambda /\mu ,\;\operatorname{sh}W = \kappa /\lambda . \] Then \[ P\overset{ * }{ \cong }Q \Rightarrow V \cup P \cup W\overset{ * }{ \cong }V \cup Q \cup W. \] Proof. Consider what happens in performing a single forward slide on \( V \cup \) \( P \cup W \), say into cell \( c \) . Because of the relative order of the elements in the \( V \), \( P \), and \( W \) portions of the tableau, the slide can be broken up into three parts. First of all, the slide travels through \( V \), creating a new tableau \( {V}^{\prime } = {j}_{c}\left( V\right) \) and vacating some inner corner \( d \) of \( \mu \) . Then \( P \) becomes \( {P}^{\prime } = {j}_{d}\left( P\right) \), vacating cell \( e \), and finally \( W \) is transformed into \( {W}^{\prime } = {j}_{e}\left( W\right) \) . Thus \( {j}_{c}\left( {V \cup P \cup W}\right) = \) \( {V}^{\prime } \cup {P}^{\prime } \cup {W}^{\prime } \) . Now perform the same slide on \( V \cup Q \cup W \) . Tableau \( V \) is replaced by \( {j}_{c}\left( V\right) = {V}^{\prime } \), vacating \( d \) . If \( {Q}^{\prime } = {j}_{d}\left( Q\right) \), then, since \( P\overset{ * }{ \cong }Q \), we have \( \operatorname{sh}{P}^{\prime } = \operatorname{sh}{Q}^{\prime } \) . So \( e \) is vacated as before, and \( W \) becomes \( {W}^{\prime } \) . 
Thus \( {j}_{c}\left( {V \cup Q \cup W}\right) = \) \( {V}^{\prime } \cup {Q}^{\prime } \cup {W}^{\prime } \) with \( {P}^{\prime }\overset{ * }{ \cong }{Q}^{\prime } \) by Lemma 3.8.3. The preceding argument also applies, mutatis mutandis, to backward slides. Hence applying the same slide to both \( V \cup P \cup W \) and \( V \cup Q \cup W \) yields tableaux of the same shape that still satisfy the hypotheses of the lemma. By induction, we are done. ∎ We can now show that Proposition 3.8.5 actually holds for all pairs of tableaux. Theorem 3.8.8 ([Hai 92]) Let \( P \) and \( Q \) be standard tableaux of the same shape \( \lambda /\mu \) . Then \[ P\overset{{K}^{ * }}{ \cong }Q \Leftrightarrow P\overset{ * }{ \cong }Q. \] Proof. " \( \Rightarrow \) " We need to consider only the case where \( P \) and \( Q \) differ by a single dual Knuth relation, say the first (the second is similar). Now \( Q \) is obtained from \( P \) by switching \( k + 1 \) and \( k + 2 \) for some \( k \) . So \[ P = V \cup {P}^{\prime } \cup W\text{ and }Q = V \cup {Q}^{\prime } \cup W, \] where \( {P}^{\prime } \) and \( {Q}^{\prime } \) are the miniature subtableaux of \( P \) and \( Q \), respectively, that contain \( k, k + 1 \), and \( k + 2 \) . By hypothesis, \( {P}^{\prime }\overset{{1}^{ * }}{ \cong }{Q}^{\prime } \), which implies \( {P}^{\prime }\overset{ * }{ \cong }{Q}^{\prime } \) by Proposition 3.8.5. But then the lemma just proved applies to show that \( P\overset{ * }{ \cong }Q \) . " \( \Leftarrow \) " Let tableau \( N \) be of normal shape \( \mu \) . Let \[ {P}^{\prime } = {j}^{N}\left( P\right) \text{ and }{Q}^{\prime } = {j}^{N}\left( Q\right) . \] Since \( P\overset{ * }{ \cong }Q \), we have \( {P}^{\prime }\overset{ * }{ \cong }{Q}^{\prime } \) (Lemma 3.8.3) and \( {v}^{N}\left( P\right) = {v}^{N}\left( Q\right) = V \) for some tableau \( V \) . 
Thus, in particular, \( \operatorname{sh}{P}^{\prime } = \operatorname{sh}{Q}^{\prime } \), so that \( {P}^{\prime } \) and \( {Q}^{\prime } \) are dual Knuth equivalent by Proposition 3.8.1. Now, by definition, we have a sequence of dual Knuth relations \[ {P}^{\prime } = {P}_{1}\overset{{i}^{\prime * }}{ \cong }{P}_{2}\overset{{j}^{\prime * }}{ \cong }\cdots \overset{{l}^{\prime * }}{ \cong }{P}_{k} = {Q}^{\prime }, \] where \( {i}^
1088_(GTM245)Complex Analysis
Definition 9.29
Definition 9.29. Let \( D \) be a domain in \( \mathbb{C}, u : D \rightarrow \mathbb{R} \) be a continuous function, and \( U \) be an open disc such that \( \operatorname{cl}U \subset D \) . The continuous function \( {u}_{U} \) defined in \( D \) by being harmonic in \( U \) and coinciding with \( u \) in \( D - U \) is called the harmonization of \( u \) in \( U \) . Lemma 9.30. Let \( D \) be a domain in \( \mathbb{C} \) and \( U \) be an open disc such that \( \operatorname{cl}U \subset D \) . If \( u \) is subharmonic in \( D \), then so is \( {u}_{U} \) . Proof. It suffices to show that \( {u}_{U} \) satisfies the mean value inequality (9.15) at every point \( c \) in \( \partial U \) . Since \( u \) is subharmonic in \( D, u\left( z\right) \leq {u}_{U}\left( z\right) \) for all \( z \) in \( D \) . But then \[ {u}_{U}\left( c\right) = u\left( c\right) \leq \frac{1}{2\pi }{\int }_{0}^{2\pi }u\left( {c + r{\mathrm{e}}^{i\theta }}\right) \mathrm{d}\theta \leq \frac{1}{2\pi }{\int }_{0}^{2\pi }{u}_{U}\left( {c + r{\mathrm{e}}^{i\theta }}\right) \mathrm{d}\theta \] for all small positive values of \( r \), and the result follows. Remark 9.31. We have shown that the family of subharmonic functions on a domain \( D \) is a cone (i.e., the family is closed under addition and multiplication by positive constants) in the vector space of continuous functions on \( D \) . It is also closed under maximization (by property 6 above) and harmonization (by Lemma 9.30). These last two properties are the key to progress. Definition 9.32. Let \( D \) be a domain in \( \mathbb{C} \) . A Perron family \( \mathcal{F} \) in \( D \) is a nonempty collection of subharmonic functions in \( D \) such that (a) If \( u, v \) are in \( \mathcal{F} \), then so is \( \max \{ u, v\} \) . (b) If \( u \) is in \( \mathcal{F} \), then so is \( {u}_{U} \) for every disc \( U \) with cl \( U \subset D \) . The following result, due to Perron, is useful for constructing harmonic functions. 
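Before stating it, the sub-mean-value inequality (9.15) that drives Lemma 9.30 can be sanity-checked numerically; the sketch below (mine, not the book's) averages the subharmonic function \( u\left( z\right) = {\left| z\right| }^{2} \) over a circle:

```python
import cmath
import math

# u(z) = |z|^2 is subharmonic (its Laplacian is 4 > 0), so its average
# over any circle should dominate its value at the center -- the
# mean value inequality (9.15).
c, r = 0.3 + 0.4j, 0.5
n = 10_000
avg = sum(abs(c + r * cmath.exp(1j * 2 * math.pi * k / n)) ** 2
          for k in range(n)) / n
assert avg >= abs(c) ** 2
# for this particular u the circle average is exactly |c|^2 + r^2
assert abs(avg - (abs(c) ** 2 + r ** 2)) < 1e-9
```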
Theorem 9.33 (Perron’s Principle). If \( \mathcal{F} \) is a uniformly bounded from above Perron family in \( D \), then the function defined for \( z \in D \) by \[ V\left( z\right) = \sup \{ u\left( z\right) : u \in \mathcal{F}\} \] (9.16) is harmonic in \( D \) . Proof. First note that by definition a Perron family is never empty. Since we are assuming that there exists a constant \( M \) such that \( u\left( z\right) < M \) for all \( z \) in \( D \) and all \( u \) in \( \mathcal{F} \), the function \( V \) is clearly well defined and real-valued. Let \( U \) be any disc such that \( \operatorname{cl}U \subset D \) . It is enough to show that \( V \) is harmonic in \( U \) . For any point \( {z}_{0} \) in \( U \), there exists a sequence \( \left\{ {{u}_{j} : j \in \mathbb{N}}\right\} \) of functions in \( \mathcal{F} \) such that \[ \mathop{\lim }\limits_{{j \rightarrow \infty }}{u}_{j}\left( {z}_{0}\right) = V\left( {z}_{0}\right) \] (9.17) Without loss of generality, we may assume \( {u}_{j + 1} \geq {u}_{j} \) for all \( j \) in \( \mathbb{N} \), since if \( \left\{ {u}_{j}\right\} \) is any sequence in \( \mathcal{F} \) satisfying (9.17), then the new sequence given by \( {v}_{1} = {u}_{1} \) and \( {v}_{j + 1} = \max \left\{ {{u}_{j + 1},{v}_{j}}\right\} \) for \( j \geq 1 \) is also contained in \( \mathcal{F} \), satisfies (9.17) (with \( {u}_{j} \) replaced by \( {v}_{j} \), of course), and is nondecreasing, as needed. The sequence \( \left\{ {{w}_{j} = {\left( {u}_{j}\right) }_{U}}\right\} \) of harmonizations of the \( {u}_{j} \) in \( U \) consists of subharmonic functions with the following properties: 1. \( {w}_{j} \geq {u}_{j} \) for all \( j \) 2. \( {w}_{j} \leq {w}_{j + 1} < M \) for all \( j \) , since the two inequalities clearly hold outside \( U \) and on the boundary of \( U \), from which it follows that they also hold in \( U \) . 
Thus the sequence \( \left\{ {w}_{j}\right\} \) lies in \( \mathcal{F} \), is nondecreasing, and satisfies \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{w}_{j}\left( {z}_{0}\right) = \) \( V\left( {z}_{0}\right) \) . It follows from Harnack’s convergence theorem (Theorem 9.14) that the function defined by \[ \Phi \left( z\right) = \mathop{\lim }\limits_{{j \rightarrow \infty }}{w}_{j}\left( z\right) = \sup \left\{ {{w}_{j}\left( z\right) : j \in \mathbb{N}}\right\} \] is harmonic in \( U \) . We will now show that \( \Phi = V \) in \( U \) . Let \( c \) denote any point in \( U \) . As before, we can find a nondecreasing sequence \( \left\{ {s}_{j}\right\} \) in \( \mathcal{F} \) such that \( V\left( c\right) = \mathop{\lim }\limits_{{j \rightarrow \infty }}{s}_{j}\left( c\right) \) . By setting \( {t}_{1} = \max \left\{ {{s}_{1},{w}_{1}}\right\} \) and \( {t}_{j + 1} = \max \left\{ {{s}_{j + 1},{w}_{j + 1},{t}_{j}}\right\} \) for all \( j \geq 1 \), we obtain a nondecreasing sequence \( \left\{ {t}_{j}\right\} \) in \( \mathcal{F} \) such that \( {t}_{j} \geq {w}_{j} \) for all \( j \), and such that \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{t}_{j}\left( z\right) = V\left( z\right) \) for \( z = c \) and \( z = {z}_{0}. \) The harmonizations of the \( {t}_{j} \) in \( U \) give a nondecreasing sequence \( \left\{ {{r}_{j} = {\left( {t}_{j}\right) }_{U}}\right\} \) in \( \mathcal{F} \) satisfying \( M > {r}_{j} \geq {t}_{j} \geq {w}_{j} \) for all \( j \) . As before, the function defined by \[ \Psi \left( z\right) = \sup \left\{ {{r}_{j}\left( z\right) : j \in \mathbb{N}}\right\} \] is harmonic in \( U \), and coincides with \( V \) at \( c \) and \( {z}_{0} \) . But \( \Psi \geq \Phi \), since \( {r}_{j} \geq {w}_{j} \) for all \( j \), and hence \( \Psi - \Phi \) is a nonnegative harmonic function in \( U \) . 
Since it is equal to zero at \( {z}_{0} \), by the minimum principle for harmonic functions, it is identically zero in \( U \), and the result follows. ## 9.8 The Dirichlet Problem (Revisited) This section has two parts. The first describes a method for obtaining the solution to the Dirichlet problem, provided it is solvable. In the second part, we offer a solution. Recall that the Dirichlet problem for a bounded region \( D \) in \( \mathbb{C} \) and a function \( f \in \) \( \mathbf{C}\left( {\partial D}\right) \) is to find a continuous function \( U \) on the closure of \( D \) whose restriction to \( D \) is harmonic and which agrees with \( f \) on the boundary of \( D \) . Under these conditions, let \( \mathcal{F} \) denote the family of all continuous functions \( u \) on cl \( D \) such that \( u \) is subharmonic in \( D \) and \( u \leq f \) on \( \partial D \) . Then \( \mathcal{F} \) is a Perron family of functions uniformly bounded from above. Note that the constant function \( u = \min \{ f\left( z\right) : z \in \partial D\} \) belongs to \( \mathcal{F} \), hence \( \mathcal{F} \) is nonempty. The other conditions for \( \mathcal{F} \) to be a Perron family are also easily verified. Therefore, by Theorem 9.33, the function \( V \) defined by (9.16) is harmonic in \( D \) . Now, if we assume that there is a solution \( U \) to the Dirichlet problem for \( D \) and \( f \), then we can show that \( U = V \) . Indeed, for each \( u \) in \( \mathcal{F} \) the function \( u - U \) is subharmonic in \( D \), and satisfies \( u - U = u - f \leq 0 \) on \( \partial D \), from which it follows that \( u - U \leq 0 \) in \( D \), and hence \( V \leq U \) in \( D \) . But \( U \) belongs to \( \mathcal{F} \), and it follows that \( U \leq V \), and therefore \( U = V \) . The Dirichlet problem does not always have a solution. 
A very simple example is given by considering the domain \( D = \{ 0 < \left| z\right| < 1\} \) and the function \[ f\left( z\right) = \left\{ \begin{array}{ll} 0, & \text{ if }\left| z\right| = 1 \\ 1, & \text{ if }z = 0 \end{array}\right. \] The corresponding function \( V \) given by Theorem 9.33 is harmonic in the punctured disc \( D \) . If the Dirichlet problem were solvable in our case, then \( V \) would extend to a continuous function on \( \left| z\right| \leq 1 \) that is harmonic in \( \left| z\right| < 1 \) (see Exercise 9.17). But then the maximum principle would imply that \( V \) is identically zero, a contradiction. To solve the Dirichlet problem, we start with a bounded domain \( D \subset \mathbb{C} \), with boundary \( \partial D \), and the following definition. Definition 9.34. A function \( \beta \) is a barrier at \( {z}_{0} \in \partial D \), and \( {z}_{0} \) is a regular point for the Dirichlet problem provided there exists an open neighborhood \( N \) of \( {z}_{0} \) in \( \mathbb{C} \) such that (1) \( \beta \in \mathbf{C}\left( {\operatorname{cl}D \cap N}\right) \) . (2) \( - \beta \) is subharmonic in \( D \cap N \) . (3) \( \beta \left( z\right) > 0 \) for \( z \neq {z}_{0},\beta \left( {z}_{0}\right) = 0 \) . (4) \( \beta \left( z\right) = 1 \) for \( z \notin N \) . Remark 9.35. A few observations are in order. 1. Condition (4) is easily satisfied by adjusting a function \( \beta \) that satisfies the other three conditions for being a barrier. To see this we may assume that \( N \) is relatively compact in \( \mathbb{C} \), and choose a smaller neighborhood \( {N}_{0} \) of \( {z}_{0} \) with cl \( {N}_{0} \subset N \) . 
Then let \[ m = \min \left\{ {\beta \left( z\right) ;z \in \operatorname{cl}\left( {N - {N}_{0}}\right) \cap \operatorname{cl}D}\right\} \] note that \( m > 0 \), and define \[ {\beta }_{1}\left( z\right) = \left\{ \begin{array}{ll} \min \{ m,\beta \left( z\right) \} & \text{ for }z \in N \cap D, \\ m & \text{ for }z \in \operatorname{cl}\left( {D - N}\right) . \end{array}\right. \] Finally set \( {\beta }_{2} = \frac{{\beta }_{1}}{m} \), and observe that \( {\beta }_{2} \) satisfies all the conditions for being a barrier at \( {z}_{0} \) . Thus, to prove the existence of a barrier, it suffices to produce a function that satisfies the first thr
1064_(GTM223)Fourier Analysis and Its Applications
Definition 8.5
Definition 8.5 The Fourier transform of \( f \in {\mathcal{S}}^{\prime } \) is the distribution \( \widehat{f} \) that is defined by the formula \[ \widehat{f}\left\lbrack \varphi \right\rbrack = f\left\lbrack \widehat{\varphi }\right\rbrack \;\text{ for all }\varphi \in \mathcal{S}. \] Just as in Chapter 7, we shall also write \( \widehat{f} = \mathcal{F}\left( f\right) \) (but now we use ordinary brackets instead of square ones, in order to avoid confusing it with the notation \( f\left\lbrack \varphi \right\rbrack \) ). Remark. The equality \( \int f\widehat{g} = \int \widehat{f}g \) is sometimes considered to be a variant of the polarized version of the Plancherel formula. We proceed to check that \( \widehat{f} \) is actually a tempered distribution. It is clear that it is linear: \[ \widehat{f}\left\lbrack {{c}_{1}{\varphi }_{1} + {c}_{2}{\varphi }_{2}}\right\rbrack = f\left\lbrack {\left( {c}_{1}{\varphi }_{1} + {c}_{2}{\varphi }_{2}\right) }^{ \land }\right\rbrack = f\left\lbrack {{c}_{1}\widehat{{\varphi }_{1}} + {c}_{2}\widehat{{\varphi }_{2}}}\right\rbrack \] \[ = {c}_{1}f\left\lbrack \widehat{{\varphi }_{1}}\right\rbrack + {c}_{2}f\left\lbrack \widehat{{\varphi }_{2}}\right\rbrack = {c}_{1}\widehat{f}\left\lbrack {\varphi }_{1}\right\rbrack + {c}_{2}\widehat{f}\left\lbrack {\varphi }_{2}\right\rbrack . \] The continuity is a simple consequence of the continuity of the Fourier transformation on \( \mathcal{S} \) : if \( {\varphi }_{j}\overset{\mathcal{S}}{ \rightarrow }\psi \), then \[ \widehat{f}\left\lbrack {\varphi }_{j}\right\rbrack = f\left\lbrack \widehat{{\varphi }_{j}}\right\rbrack \rightarrow f\left\lbrack \widehat{\psi }\right\rbrack = \widehat{f}\left\lbrack \psi \right\rbrack \] which tells us precisely that \( \widehat{f} \) is continuous, and thus a distribution. Let us compute the Fourier transforms of some distributions. 
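Before doing so, the identity \( \int f\widehat{g} = \int \widehat{f}g \) from the Remark above can be checked numerically for a pair of Gaussians. This sketch is mine, with the transform convention \( \widehat{g}\left( \omega \right) = \int g\left( x\right) {e}^{-{i\omega x}}{dx} \); both sides are computed by a Riemann sum:

```python
import math

# Numeric check of ∫ f ĝ = ∫ f̂ g for the Gaussian pair
#   f(x) = e^{-x^2/2},  f̂(ω) = sqrt(2π) e^{-ω^2/2}
#   g(x) = e^{-x^2},    ĝ(ω) = sqrt(π)  e^{-ω^2/4}
# (classical transforms, convention ĝ(ω) = ∫ g(x) e^{-iωx} dx)
h, N = 1e-3, 15000            # step and half-range: grid on [-15, 15]
xs = [k * h for k in range(-N, N + 1)]
lhs = sum(math.exp(-x * x / 2) * math.sqrt(math.pi) * math.exp(-x * x / 4)
          for x in xs) * h    # ∫ f ĝ
rhs = sum(math.sqrt(2 * math.pi) * math.exp(-x * x / 2) * math.exp(-x * x)
          for x in xs) * h    # ∫ f̂ g
assert abs(lhs - rhs) < 1e-9
# both sides evaluate in closed form to 2π/√3
assert abs(lhs - 2 * math.pi / math.sqrt(3)) < 1e-9
```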
We start with a few examples that are ordinary functions, but do not belong to \( {L}^{1}\left( \mathbf{R}\right) \) . Example 8.27. Let \( f\left( x\right) = 1 \) for all \( x \) . What is the Fourier transform \( \widehat{f} \) ? We should have \[ \widehat{f}\left\lbrack \varphi \right\rbrack = f\left\lbrack \widehat{\varphi }\right\rbrack = {\int }_{\mathbf{R}}f\left( x\right) \widehat{\varphi }\left( x\right) {dx} = {\int }_{\mathbf{R}}\widehat{\varphi }\left( x\right) {dx} \] \[ = {2\pi } \cdot \frac{1}{2\pi }{\int }_{\mathbf{R}}\widehat{\varphi }\left( x\right) {e}^{i0x}{dx} = {2\pi \varphi }\left( 0\right) = {2\pi \delta }\left\lbrack \varphi \right\rbrack . \] It follows that \( \widehat{f} = {2\pi \delta } \), or \( \widehat{f}\left( \omega \right) = {2\pi \delta }\left( \omega \right) \) if we want to stress the name of the independent variable. (Notice that the test functions and their transforms have reversed independent variables in this connection!) Example 8.28. Take \( f\left( x\right) = {x}^{n}\left( {n\text{integer} \geq 0}\right) \) . This function defines a tempered distribution, and its transform satisfies \[ \widehat{f}\left\lbrack \varphi \right\rbrack = f\left\lbrack \widehat{\varphi }\right\rbrack = {\int }_{\mathbf{R}}{x}^{n}\widehat{\varphi }\left( x\right) {dx}. \] By the ordinary rules for Fourier transforms we have that \( {\left( ix\right) }^{n}\widehat{\varphi }\left( x\right) \) is the transform of the function \( {\varphi }^{\left( n\right) } \), which means that \( {x}^{n}\widehat{\varphi }\left( x\right) \) is the transform of \( {\left( -i\right) }^{n}{\varphi }^{\left( n\right) } \) . The inversion formula then gives that \( \widehat{f}\left\lbrack \varphi \right\rbrack = {2\pi }{\left( -i\right) }^{n}{\varphi }^{\left( n\right) }\left( 0\right) \) . 
In the preceding section we saw that the \( n \) th derivative of \( \delta \) is described by \( {\delta }^{\left( n\right) }\left\lbrack \varphi \right\rbrack = {\left( -1\right) }^{n}{\varphi }^{\left( n\right) }\left( 0\right) \) . Thus we must have \( \widehat{f} = {2\pi }{i}^{n}{\delta }^{\left( n\right) } \) . Before giving further examples we present a list of rules of computation, which on the face look quite familiar, but now are in need of new proofs. To simplify the formulation, we introduce two new notations. First, if \( f \in {\mathcal{S}}^{\prime } \), then \( \check{f} \) is what could be symbolically written as \( f\left( {-x}\right) \) ; more precisely, we define \( \check{f}\left\lbrack \varphi \right\rbrack = \int f\left( x\right) \varphi \left( {-x}\right) {dx} \) . We say that \( f \) is even if \( \check{f} = f \), odd if \( \check{f} = - f \) . Secondly, if \( a \in \mathbf{R} \) and \( \varphi \in \mathcal{S} \), we define the translated function \( {\varphi }_{a} \) by \( {\varphi }_{a}\left( x\right) = \varphi \left( {x - a}\right) \) . For \( f \in {\mathcal{S}}^{\prime } \) the translated distribution \( {f}_{a} \) is \( f\left( {x - a}\right) \), which means \( {f}_{a}\left\lbrack \varphi \right\rbrack = \int f\left( x\right) \varphi \left( {x + a}\right) {dx} = f\left\lbrack {\varphi }_{-a}\right\rbrack \) . (This notation is a generalization of the notation \( {\delta }_{a} \) to arbitrary distributions.) Theorem 8.3 If \( f, g \in {\mathcal{S}}^{\prime } \), then (a) \( f \) is even/odd \( \Leftrightarrow \widehat{f} \) is even/odd. (b) \( \widehat{\widehat{f}} = {2\pi }\check{f} \) . (c) \( \widehat{{f}_{a}} = {e}^{-{ia\omega }}\widehat{f} \) . (d) \( \mathcal{F}\left( {{e}^{iax}f}\right) = {\left( \widehat{f}\right) }_{a} \) . (e) \( \widehat{{f}^{\prime }} = {i\omega }\widehat{f} \) . (f) \( \mathcal{F}\left( {xf}\right) = i{\left( \widehat{f}\right) }^{\prime } \) . 
Proving these formulae is an excellent exercise in the definitions of the notions involved. As examples, we perform the proofs of rules (d) and (e): (d): The effect of the left-hand member on a test function \( \varphi \) is rewritten: \[ \mathcal{F}\left( {{e}^{iax}f}\right) \left\lbrack \varphi \right\rbrack = \left( {{e}^{iax}f}\right) \left\lbrack \widehat{\varphi }\right\rbrack = f\left\lbrack {{e}^{iax}\widehat{\varphi }}\right\rbrack = f\left\lbrack {\mathcal{F}\left( {\varphi }_{-a}\right) }\right\rbrack = \widehat{f}\left\lbrack {\varphi }_{-a}\right\rbrack = {\left( \widehat{f}\right) }_{a}\left\lbrack \varphi \right\rbrack . \] Each equality sign corresponds to a definition or a theorem: the first one is the definition of the Fourier transform; the second one is the definition of the product of a function and a distribution; the third one is a rule for "classical" Fourier transforms; the fourth is again the definition of the Fourier transform; and the last one is the definition of the translate of a distribution. (e) is proved similarly; the reader is asked to identify the reason for each equality sign in the following formula: \[ \widehat{{f}^{\prime }}\left\lbrack \varphi \right\rbrack = {f}^{\prime }\left\lbrack \widehat{\varphi }\right\rbrack = - f\left\lbrack {\left( \widehat{\varphi }\right) }^{\prime }\right\rbrack = - f\left\lbrack {\mathcal{F}\left( {-{i\omega \varphi }}\right) }\right\rbrack = f\left\lbrack {\mathcal{F}\left( {i\omega \varphi }\right) }\right\rbrack = \widehat{f}\left\lbrack {i\omega \varphi }\right\rbrack \] \[ = \left( {{i\omega }\widehat{f}}\right) \left\lbrack \varphi \right\rbrack \] We proceed to give more examples of transforms of distributions. Example 8.29. Transform \( f = \) P.V. \( 1/x \) (Example 8.15, page 205): we have seen that \( {xf} = 1 \) . Transformation gives \( {iD}\widehat{f} = {2\pi \delta } = {2\pi }{H}^{\prime } \), which can be rewritten as \( {iD}\left( {\widehat{f} + {2\pi iH}}\right) = 0 \) . 
By Theorem 8.1 it follows that \( \widehat{f} + {2\pi iH} = c = \) constant, whence \( \widehat{f} = c - {2\pi iH} \) . To determine the constant we notice that \( f \) is odd, and thus \( \widehat{f} \) must also be odd. This gives \( c = {\pi i} \) and \( \widehat{f} = {\pi i}\left( {1 - {2H}}\right) \) . If we introduce the function \( \operatorname{sgn}x = x/\left| x\right| \) (the sign of \( x \) ), we can write the result as \( \widehat{f} = - {\pi i}\operatorname{sgn}\omega \) . Example 8.30. \( H = \) the Heaviside function. In Example 8.29 we saw that \[ \mathcal{F}\left( {\text{ P.V. }\frac{1}{x}}\right) \left( \omega \right) = {\pi i}\left( {1 - {2H}\left( \omega \right) }\right) . \] Since both sides are odd distributions, rule (b) gives \[ {\pi i}\mathcal{F}\left( {1 - {2H}}\right) \left( \omega \right) = {2\pi }{\left( \text{ P.V. }\frac{1}{\omega }\right) }^{ \vee } = - {2\pi }\text{ P.V. }\frac{1}{\omega }. \] On the other hand, \( \mathcal{F}\left( {1 - {2H}}\right) = \widehat{1} - 2\widehat{H} = {2\pi \delta } - 2\widehat{H} \) . From this we can solve \[ \widehat{H} = {\pi \delta }\left( \omega \right) - i\text{ P.V. }\frac{1}{\omega }. \] As a finale to this section we prove the following result. Theorem 8.4 If \( f \in {\mathcal{S}}^{\prime } \), then \( {xf}\left( x\right) \) is the zero distribution if and only if \( f = {A\delta } \) for some constant \( A \) . Proof. Transformation of the equation \( {xf}\left( x\right) = 0 \) gives \( i{\widehat{f}}^{\prime } = 0 \), and by Theorem 8.1 this means that \( \widehat{f} = C \), where \( C \) is a constant. Transform back again: since \( \widehat{1} = {2\pi \delta } \), we find that \( f \) must be a constant times \( \delta \), and the proof is complete. By translation of the situation in the theorem, it is seen that the following also holds: if \( f \in {\mathcal{S}}^{\prime } \) and \( \left( {x - a}\right) f\left( x\right) = 0 \), then \( f = A{\delta }_{a} \) for some constant \( A \) .
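The formula for \( \widehat{H} \) can be sanity-checked numerically (a sketch of mine, not from the text): regularize \( H\left( x\right) \) to \( H\left( x\right) {e}^{-{\varepsilon x}} \), whose classical transform is \( 1/\left( {\varepsilon + {i\omega }}\right) \) . For fixed \( \omega \neq 0 \) this tends to \( - i/\omega \) as \( \varepsilon \rightarrow 0 \), while the real part \( \varepsilon /\left( {{\varepsilon }^{2} + {\omega }^{2}}\right) \) carries total mass \( \pi \) : precisely the two terms \( - i \) P.V. \( 1/\omega \) and \( {\pi \delta } \) .

```python
import math

eps = 1e-3

# Transform of the regularization H(x)e^{-eps x} is 1/(eps + i*omega).
# Away from omega = 0 the delta term is invisible; we expect -i/omega:
omega = 2.0
val = 1 / complex(eps, omega)
assert abs(val - (-1j / omega)) < 1e-3

# The real part eps/(eps^2 + omega^2) integrates over [-L, L] to
# 2*arctan(L/eps), which tends to pi: the coefficient of delta at 0.
L = 50.0
mass = 2 * math.atan(L / eps)
assert abs(mass - math.pi) < 1e-3

print("consistent with H_hat = pi*delta - i P.V. 1/omega")
```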
This can be further generalized. If \( \chi \) is a multiplicator function that has a simple zero at the point \( x = a \), and \( {\chi f} = 0 \), then \( f = A{\delta }_{a} \) . The proof is built on writing \( \chi \left( x\right) = \left( {x - a}\right) \psi \left( x\right) \), where \( \psi \left( a\right) \neq 0 \), and th
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 12.4
Definition 12.4. An element \( \lambda \) of \( \mathfrak{t} \) is an analytically integral element if \[ \langle \lambda, H\rangle \in \mathbb{Z} \] for all \( H \) in \( \Gamma \) . An element \( \lambda \) of \( \mathfrak{t} \) is an algebraically integral element if \[ \left\langle {\lambda ,{H}_{\alpha }}\right\rangle = 2\frac{\langle \lambda ,\alpha \rangle }{\langle \alpha ,\alpha \rangle } \in \mathbb{Z} \] for each real root \( \alpha \) . An element \( \lambda \) of \( \mathfrak{t} \) is dominant if \[ \langle \lambda ,\alpha \rangle \geq 0 \] for all \( \alpha \in \Delta \) . Finally, if \( \Delta = \left\{ {{\alpha }_{1},\ldots ,{\alpha }_{r}}\right\} \) and \( \mu \) and \( \lambda \) are two elements of \( \mathfrak{t} \), we say that \( \mu \) is higher than \( \lambda \) if \[ \mu - \lambda = {c}_{1}{\alpha }_{1} + \cdots + {c}_{r}{\alpha }_{r} \] with \( {c}_{j} \geq 0 \) . We denote this relation by \( \mu \succcurlyeq \lambda \) . The notion of an algebraically integral element is essentially the one we used in Chapters 8 and 9 in the context of semisimple Lie algebras. Specifically, if \( \mathfrak{g} \mathrel{\text{:=}} {\mathfrak{k}}_{\mathbb{C}} \) is semisimple, then \( \lambda \in \mathfrak{t} \) is algebraically integral if and only if \( {i\lambda } \) is integral in the sense of Definition 8.34. We will see in Sect. 12.2 that every algebraically integral element is analytically integral, but not vice versa. In Chapter 13, we will show that when \( K \) is simply connected, the two notions of integrality coincide. Proposition 12.5. Let \( \left( {\Sigma, V}\right) \) be a representation of \( K \) and let \( \sigma \) be the associated representation of \( \mathfrak{k} \) . If \( \lambda \in \mathfrak{t} \) is a real weight of \( \sigma \), then \( \lambda \) is an analytically integral element. Proof.
If \( v \) is a weight vector with weight \( \lambda \) and \( H \) is an element of \( \Gamma \), then on the one hand, \[ \Sigma \left( {e}^{2\pi H}\right) v = {Iv} = v \] while on the other hand, \[ \Sigma \left( {e}^{2\pi H}\right) v = {e}^{{2\pi \sigma }\left( H\right) }v = {e}^{{2\pi i}\langle \lambda, H\rangle }v. \] Thus, \( {e}^{{2\pi i}\langle \lambda, H\rangle } = 1 \), which implies that \( \langle \lambda, H\rangle \in \mathbb{Z} \) . We are now ready to state the theorem of the highest weight for (finite-dimensional) representations of a connected compact group. Theorem 12.6 (Theorem of the Highest Weight). If \( K \) is a connected, compact matrix Lie group and \( T \) is a maximal torus in \( K \), the following results hold. 1. Every irreducible representation of \( K \) has a highest weight. 2. Two irreducible representations of \( K \) with the same highest weight are isomorphic. 3. The highest weight of each irreducible representation of \( K \) is dominant and analytically integral. 4. If \( \mu \) is a dominant, analytically integral element, there exists an irreducible representation of \( K \) with highest weight \( \mu \) . Let us suppose now that \( \mathfrak{g} \mathrel{\text{:=}} {\mathfrak{k}}_{\mathbb{C}} \) is semisimple. Even in this case, the set of analytically integral elements may not coincide with the set of algebraically integral elements, as we will see in several examples in Sect. 12.2. Thus, the theorem of the highest weight for \( \mathfrak{g} \) (Theorems 9.4 and 9.5) will, in general, have a different set of possible highest weights than the theorem of the highest weight for \( K \) . This discrepancy arises because a representation of \( \mathfrak{g} \) may not come from a representation of \( K \), unless \( K \) is simply connected. In the simply connected case, there is no such discrepancy; according to Corollary 13.20, when \( K \) is simply connected, every algebraically integral element is analytically integral.
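For \( K = \mathrm{{SU}}\left( 2\right) \) with the diagonal maximal torus, the integrality in Proposition 12.5 can be checked concretely (a small numerical sketch of mine, not from the text): here \( \mathfrak{t} \) consists of the matrices \( \operatorname{diag}\left( {{i\theta }, - {i\theta }}\right) \), the kernel \( \Gamma \) of \( H \mapsto {e}^{2\pi H} \) is generated by \( {H}_{1} = \operatorname{diag}\left( {i, - i}\right) \), and in the defining representation \( {e}_{1} \) is a weight vector with \( \sigma \left( {H}_{1}\right) {e}_{1} = i{e}_{1} \), so \( \left\langle {\lambda ,{H}_{1}}\right\rangle = 1 \in \mathbb{Z} \) .

```python
import cmath

# H1 = diag(i, -i) generates the kernel Gamma of H -> e^{2 pi H} in SU(2):
H1 = [1j, -1j]                     # diagonal entries of H1
exp_2pi_H1 = [cmath.exp(2 * cmath.pi * h) for h in H1]
assert all(abs(z - 1) < 1e-12 for z in exp_2pi_H1)   # e^{2 pi H1} = I

# In the defining representation sigma(H) = H, the basis vector e1 is a
# weight vector: sigma(H1) e1 = i * e1, i.e. <lambda, H1> = 1, an integer.
pairing = (H1[0] / 1j).real
assert pairing == 1.0
print("weight of e1 pairs integrally with Gamma:", pairing)
```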
We will develop the tools for proving Theorem 12.6 in Sects. 12.2-12.4, with the proof itself coming in Sect. 12.5. The hard part of the theorem is Point 4; this will be established by appealing to a completeness result for characters. ## 12.2 Analytically Integral Elements In this section, we establish some elementary properties of the set of analytically integral elements (Definition 12.4) and consider several examples. One additional key result that we will establish in Chapter 13 is Corollary 13.20, which says that when \( K \) is simply connected, the set of analytically integral elements coincides with the set of algebraically integral elements. Proposition 12.7. 1. The set of analytically integral elements is invariant under the action of the Weyl group. 2. Every analytically integral element is algebraically integral. 3. Every real root is analytically integral. We begin with an important lemma. Lemma 12.8. If \( \alpha \in \mathfrak{t} \) is a real root and \( {H}_{\alpha } = {2\alpha }/\langle \alpha ,\alpha \rangle \) is the associated real coroot, we have \[ {e}^{{2\pi }{H}_{\alpha }} = I \] in \( K \) . That is to say, \( {H}_{\alpha } \) belongs to the kernel \( \Gamma \) of the exponential map. Proof. By Corollary 7.20, there is a Lie algebra homomorphism \( \phi : \mathrm{{su}}\left( 2\right) \rightarrow \mathfrak{k} \) such that the element \( {iH} = \operatorname{diag}\left( {i, - i}\right) \) in \( \mathrm{{su}}\left( 2\right) \) maps to the real coroot \( {H}_{\alpha } \) . (In the notation of the corollary, \( {H}_{\alpha } = 2{E}_{1}^{\alpha } \) .) Since \( \mathrm{{SU}}\left( 2\right) \) is simply connected, there is (Theorem 5.6) a homomorphism \( \Phi : \mathrm{{SU}}\left( 2\right) \rightarrow K \) for which the associated Lie algebra homomorphism is \( \phi \) .
Now, the element \( {iH} \in \operatorname{su}\left( 2\right) \) satisfies \( {e}^{2\pi iH} = I \), and thus \[ I = \Phi \left( {e}^{2\pi iH}\right) = {e}^{{2\pi \phi }\left( {iH}\right) } = {e}^{{2\pi }{H}_{\alpha }}, \] as claimed. Proof of Proposition 12.7. For Point 1, we have already shown (following Definition 12.3) that \( \Gamma \) is invariant under the action of \( W \) . Thus, if \( \lambda \) is analytically integral and \( w \) is the Weyl group element represented by \( x \), we have \( \langle w \cdot \lambda, H\rangle = \) \( \left\langle {\lambda ,{w}^{-1} \cdot H}\right\rangle \in \mathbb{Z} \), since \( {w}^{-1} \cdot H \) is in \( \Gamma \) . For Point 2, note that for each \( \alpha \), we have \( {H}_{\alpha } \in \Gamma \) by Lemma 12.8. Thus, if \( \lambda \) is analytically integral, \( \left\langle {\lambda ,{H}_{\alpha }}\right\rangle \in \mathbb{Z} \) , showing that \( \lambda \) is algebraically integral. For Point 3, we note that the real roots are real weights of the adjoint representation, which is a representation of the group \( K \) , and not just its Lie algebra. Thus, by Proposition 12.5, the real roots are analytically integral. Proposition 12.9. If \( \lambda \) is an analytically integral element, there is a well-defined function \( {f}_{\lambda } : T \rightarrow \mathbb{C} \) such that \[ {f}_{\lambda }\left( {e}^{H}\right) = {e}^{i\langle \lambda, H\rangle } \] (12.2) for all \( H \in \mathfrak{t} \) . Conversely, for any \( \lambda \in \mathfrak{t} \), if there is a well-defined function on \( T \) given by the right-hand side of (12.2), then \( \lambda \) must be analytically integral. Proof. Replacing \( H \) by \( {2\pi H} \), we can equivalently write (12.2) as \[ {f}_{\lambda }\left( {e}^{2\pi H}\right) = {e}^{{2\pi i}\langle \lambda, H\rangle },\;H \in \mathfrak{t}. 
\] (12.3) Now, since \( T \) is connected and commutative, every \( t \in T \) can be written as \( t = \) \( {e}^{2\pi H} \) for some \( H \in \mathfrak{t} \) . Furthermore, \( {e}^{{2\pi }\left( {H + {H}^{\prime }}\right) } = {e}^{2\pi H} \) if and only if \( {e}^{{2\pi }{H}^{\prime }} = I \) , that is, if and only if \( {H}^{\prime } \in \Gamma \) . Thus, the right-hand side of (12.3) defines a function on \( T \) if and only if \( {e}^{{2\pi i}\left\langle {\lambda, H + {H}^{\prime }}\right\rangle } = {e}^{{2\pi i}\langle \lambda, H\rangle } \) for all \( {H}^{\prime } \in \Gamma \) . This happens if and only if \( {e}^{{2\pi i}\left\langle {\lambda ,{H}^{\prime }}\right\rangle } = 1 \) for all \( {H}^{\prime } \in \Gamma \), that is, if and only if \( \left\langle {\lambda ,{H}^{\prime }}\right\rangle \in \mathbb{Z} \) for all \( {H}^{\prime } \in \Gamma \) . Proposition 12.10. The exponentials \( {f}_{\lambda } \) in (12.2), as \( \lambda \) ranges over the set of analytically integral elements, are orthonormal with respect to the normalized volume form dt on \( T \) : \[ {\int }_{T}\overline{{f}_{\lambda }\left( t\right) }{f}_{{\lambda }^{\prime }}\left( t\right) {dt} = {\delta }_{\lambda ,{\lambda }^{\prime }} \] (12.4) Proof. Let us identify \( T \) with \( {\left( {S}^{1}\right) }^{k} \) for some \( k \), so that \( \mathfrak{t} \) is identified with \( {\mathbb{R}}^{k} \) and the scaled exponential map is given by \[ \left( {{\theta }_{1},\ldots ,{\theta }_{k}}\right) \mapsto \left( {{e}^{{2\pi i}{\theta }_{1}},\ldots ,{e}^{{2\pi i}{\theta }_{k}}}\right) . \] The kernel \( \Gamma \) of the exponential is the integer lattice inside \( {\mathbb{R}}^{k} \) . The lattice of analytically integral elements (points having integer inner product with each element of \( \Gamma \) ) is then also the integer lattice.
Thus, the exponentials in the proposition are the functions of the form \[ {f}_{\lambda }\left( {{e}^{i{\theta }_{1}},\ldots ,{e}^{i{\theta }_{k}}}\right) = {e}^{i{\lambda }_{1}{\theta }_{1}}\cdots {e}^{i{\lambda }_{k}{\theta }_{k}}, \] with \( \lambda = \left( {{\lambda }_{1},\ldots ,{\lambda }_{k}}\right) \in {\mathbb{Z}}^{k} \) . Meanwhile, if we use the coordinates \( {\theta }_{1},\ldots ,{\theta }_{k} \) on \( T \), then any \( k \) -form on \( T \) can be represented as a density \( \rho \) times \( d{\theta }_{1} \land \cdots \land d{\theta }_{k} \) . Since the vo
1105_(GTM260)Monomial.Ideals,Jurgen.Herzog(2010)
Definition 12.1.1
Definition 12.1.1. A polymatroid on the ground set \( \left\lbrack n\right\rbrack \) is a nonempty compact subset \( \mathcal{P} \) in \( {\mathbb{R}}_{ + }^{n} \), the set of independent vectors, such that (P1) every subvector of an independent vector is independent; (P2) if \( \mathbf{u},\mathbf{v} \in \mathcal{P} \) with \( \left| \mathbf{v}\right| > \left| \mathbf{u}\right| \), then there is a vector \( \mathbf{w} \in \mathcal{P} \) such that \[ \mathbf{u} < \mathbf{w} \leq \mathbf{u} \vee \mathbf{v}. \] A base of a polymatroid \( \mathcal{P} \subset {\mathbb{R}}_{ + }^{n} \) is a maximal independent vector of \( \mathcal{P} \) , i.e. an independent vector \( \mathbf{u} \in \mathcal{P} \) with \( \mathbf{u} < \mathbf{v} \) for no \( \mathbf{v} \in \mathcal{P} \) . Every base of \( \mathcal{P} \) has the same modulus \( \operatorname{rank}\left( \mathcal{P}\right) \), the rank of \( \mathcal{P} \) . In fact, if \( \mathbf{u} \) and \( \mathbf{v} \) are bases of \( \mathcal{P} \) with \( \left| \mathbf{u}\right| < \left| \mathbf{v}\right| \), then by (P2) there exists \( \mathbf{w} \in \mathcal{P} \) with \( \mathbf{u} < \mathbf{w} \leq \mathbf{u} \vee \mathbf{v} \) , contradicting the maximality of \( \mathbf{u} \) . Let \( \mathcal{P} \subset {\mathbb{R}}_{ + }^{n} \) be a polymatroid on the ground set \( \left\lbrack n\right\rbrack \) . Let \( {2}^{\left\lbrack n\right\rbrack } \) denote the set of all subsets of \( \left\lbrack n\right\rbrack \) . The ground set rank function of \( \mathcal{P} \) is a function \( \rho : {2}^{\left\lbrack n\right\rbrack } \rightarrow {\mathbb{R}}_{ + } \) defined by setting \[ \rho \left( A\right) = \max \{ \mathbf{v}\left( A\right) : \mathbf{v} \in \mathcal{P}\} \] for all \( \varnothing \neq A \subset \left\lbrack n\right\rbrack \) together with \( \rho \left( \varnothing \right) = 0 \) .
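A small computational illustration (the specific polytope is my own example, not from the text): the set \( \mathcal{P} = \left\{ {\mathbf{x} \in {\mathbb{R}}_{ + }^{2} : \mathbf{x}\left( 1\right) \leq 1,\;\mathbf{x}\left( 2\right) \leq 2,\;\mathbf{x}\left( 1\right) + \mathbf{x}\left( 2\right) \leq 2}\right\} \) is a polymatroid on \( \left\lbrack 2\right\rbrack \), and its ground set rank function can be read off from the vertices of \( \mathcal{P} \), since the maximum of the linear functional \( \mathbf{x} \mapsto \mathbf{x}\left( A\right) \) over a polytope is attained at a vertex. One can also check directly that \( \rho \) is nondecreasing and submodular, as Theorem 12.1.3 below asserts in general.

```python
from itertools import combinations

# Vertices of P = {x in R_+^2 : x(1) <= 1, x(2) <= 2, x(1) + x(2) <= 2}:
vertices = [(0, 0), (1, 0), (0, 2), (1, 1)]

def rho(A):
    # rho(A) = max{ x(A) : x in P }, attained at a vertex of P
    return max(sum(v[i - 1] for i in A) for v in vertices)

subsets = [frozenset(s) for r in range(3) for s in combinations((1, 2), r)]
print({tuple(sorted(A)): rho(A) for A in subsets})
# {(): 0, (1,): 1, (2,): 2, (1, 2): 2}

# rho is nondecreasing and submodular:
assert all(rho(A) <= rho(B) for A in subsets for B in subsets if A <= B)
assert all(rho(A) + rho(B) >= rho(A | B) + rho(A & B)
           for A in subsets for B in subsets)
```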
Given a vector \( \mathbf{x} \in {\mathbb{R}}_{ + }^{n} \), an independent vector \( \mathbf{u} \in \mathcal{P} \) is called a maximal independent subvector of \( \mathbf{x} \) if (i) \( \mathbf{u} \leq \mathbf{x} \) and (ii) \( \mathbf{u} < \mathbf{v} \leq \mathbf{x} \) for no \( \mathbf{v} \in \mathcal{P} \) . Since \( \mathcal{P} \) is compact, a maximal independent subvector of \( \mathbf{x} \in {\mathbb{R}}_{ + }^{n} \) exists. Moreover, if \( \mathbf{x} \in {\mathbb{R}}_{ + }^{n} \) and \( \mathbf{w} \in \mathcal{P} \) with \( \mathbf{w} \leq \mathbf{x} \), then, since \( \{ \mathbf{y} \in \mathcal{P} : \mathbf{w} \leq \mathbf{y}\} \) is compact, there is a maximal independent subvector \( \mathbf{u} \in \mathcal{P} \) with \( \mathbf{w} \leq \mathbf{u} \leq \mathbf{x} \) . If each of \( \mathbf{u} \) and \( {\mathbf{u}}^{\prime } \) is a maximal independent subvector of \( \mathbf{x} \in {\mathbb{R}}_{ + }^{n} \), then \( \left| \mathbf{u}\right| = \left| {\mathbf{u}}^{\prime }\right| \) . In fact, if \( \left| \mathbf{u}\right| < \left| {\mathbf{u}}^{\prime }\right| \), then by (P2) there is \( {\mathbf{u}}^{\prime \prime } \in \mathcal{P} \) with \( \mathbf{u} < {\mathbf{u}}^{\prime \prime } \leq \) \( \mathbf{u} \vee {\mathbf{u}}^{\prime } \leq \mathbf{x} \), contradicting the maximality of \( \mathbf{u} \) . For a vector \( \mathbf{x} \in {\mathbb{R}}_{ + }^{n} \), we define \( \xi \left( \mathbf{x}\right) = \left| \mathbf{u}\right| \), where \( \mathbf{u} \in \mathcal{P} \) is a maximal independent subvector of \( \mathbf{x} \) . Lemma 12.1.2. Let \( \mathbf{x},\mathbf{y} \in {\mathbb{R}}_{ + }^{n} \) . Then \[ \xi \left( \mathbf{x}\right) + \xi \left( \mathbf{y}\right) \geq \xi \left( {\mathbf{x} \vee \mathbf{y}}\right) + \xi \left( {\mathbf{x} \land \mathbf{y}}\right) \] Proof. Let \( \mathbf{a} \in \mathcal{P} \) be a maximal independent subvector of \( \mathbf{x} \land \mathbf{y} \) .
Since \( \mathbf{a} \leq \) \( \mathbf{x} \vee \mathbf{y} \), there exists a maximal independent subvector \( \mathbf{b} \in \mathcal{P} \) of \( \mathbf{x} \vee \mathbf{y} \) with \( \mathbf{a} \leq \mathbf{b} \leq \mathbf{x} \vee \mathbf{y} \) . Since \( \mathbf{b} \land \left( {\mathbf{x} \land \mathbf{y}}\right) \in \mathcal{P} \) and \( \mathbf{a} \leq \mathbf{b} \land \left( {\mathbf{x} \land \mathbf{y}}\right) \leq \mathbf{x} \land \mathbf{y} \), one has \( \mathbf{a} = \mathbf{b} \land \left( {\mathbf{x} \land \mathbf{y}}\right) \) . We claim \[ \mathbf{a} + \mathbf{b} = \mathbf{b} \land \mathbf{x} + \mathbf{b} \land \mathbf{y} \] In fact, since \( \mathbf{b} \leq \mathbf{x} \vee \mathbf{y} \), one has \( \mathbf{b}\left( i\right) \leq \max \{ \mathbf{x}\left( i\right) ,\mathbf{y}\left( i\right) \} \) for each \( i \in \left\lbrack n\right\rbrack \) . Let \( \mathbf{x}\left( i\right) \leq \mathbf{y}\left( i\right) \) . Then \( \mathbf{a}\left( i\right) = \min \{ \mathbf{b}\left( i\right) ,\mathbf{x}\left( i\right) \} \) and \( \mathbf{b}\left( i\right) = \min \{ \mathbf{b}\left( i\right) ,\mathbf{y}\left( i\right) \} \) . Thus \( \mathbf{a}\left( i\right) + \mathbf{b}\left( i\right) = \left( {\mathbf{b} \land \mathbf{x}}\right) \left( i\right) + \left( {\mathbf{b} \land \mathbf{y}}\right) \left( i\right) \), as required. Since \( \mathbf{b} \land \mathbf{x} \in \mathcal{P} \) is a subvector of \( \mathbf{x} \) and since \( \mathbf{b} \land \mathbf{y} \in \mathcal{P} \) is a subvector of \( \mathbf{y} \), it follows that \( \left| {\mathbf{b} \land \mathbf{x}}\right| \leq \xi \left( \mathbf{x}\right) \) and \( \left| {\mathbf{b} \land \mathbf{y}}\right| \leq \xi \left( \mathbf{y}\right) \) . 
Thus \[ \xi \left( {\mathbf{x} \land \mathbf{y}}\right) + \xi \left( {\mathbf{x} \vee \mathbf{y}}\right) = \left| \mathbf{a}\right| + \left| \mathbf{b}\right| \] \[ = \left| {\mathbf{b} \land \mathbf{x}}\right| + \left| {\mathbf{b} \land \mathbf{y}}\right| \] \[ \leq \xi \left( \mathbf{x}\right) + \xi \left( \mathbf{y}\right) \] as desired. We now come to an important result on polymatroids. Theorem 12.1.3. (a) Let \( \mathcal{P} \subset {\mathbb{R}}_{ + }^{n} \) be a polymatroid on the ground set \( \left\lbrack n\right\rbrack \) and \( \rho \) its ground set rank function. Then \( \rho \) is nondecreasing, i.e. if \( A \subset B \subset \left\lbrack n\right\rbrack \) , then \( \rho \left( A\right) \leq \rho \left( B\right) \), and is submodular, i.e. \[ \rho \left( A\right) + \rho \left( B\right) \geq \rho \left( {A \cup B}\right) + \rho \left( {A \cap B}\right) \] for all \( A, B \subset \left\lbrack n\right\rbrack \) . Moreover, \( \mathcal{P} \) coincides with the compact set \[ \left\{ {\mathbf{x} \in {\mathbb{R}}_{ + }^{n} : \mathbf{x}\left( A\right) \leq \rho \left( A\right), A \subset \left\lbrack n\right\rbrack }\right\} \] (12.1) (b) Conversely, let \( \rho : {2}^{\left\lbrack n\right\rbrack } \rightarrow {\mathbb{R}}_{ + } \) be a nondecreasing and submodular function with \( \rho \left( \varnothing \right) = 0 \) . Then the compact set (12.1) is a polymatroid on the ground set \( \left\lbrack n\right\rbrack \) with \( \rho \) its ground set rank function. Proof. (a) Clearly, \( \rho \) is nondecreasing. In general, for \( X \subset \left\lbrack n\right\rbrack \) and for \( \mathbf{y} \in {\mathbb{R}}_{ + }^{n} \), we define \( {\mathbf{y}}_{X} \in {\mathbb{R}}_{ + }^{n} \) by setting \( {\mathbf{y}}_{X}\left( i\right) = \mathbf{y}\left( i\right), i \in X \) and \( {\mathbf{y}}_{X}\left( i\right) = 0, i \in \left\lbrack n\right\rbrack \smallsetminus X \) .
Let \( r = \operatorname{rank}\left( \mathcal{P}\right) \) and \( \mathbf{a} = \left( {r, r,\ldots, r}\right) \in {\mathbb{R}}_{ + }^{n} \) . Thus \( \mathbf{w} \leq \mathbf{a} \) for all \( \mathbf{w} \in \mathcal{P} \) . We claim that, for each subset \( X \subset \left\lbrack n\right\rbrack \), one has \[ \rho \left( X\right) = \xi \left( {\mathbf{a}}_{X}\right) \] To see why this is true, write \( \mathbf{b} \) for a maximal independent subvector of \( {\mathbf{a}}_{X} \) . Then \( \xi \left( {\mathbf{a}}_{X}\right) = \left| \mathbf{b}\right| = \mathbf{b}\left( X\right) \leq \rho \left( X\right) \) . On the other hand, if \( \rho \left( X\right) = \mathbf{w}\left( X\right) \) , where \( \mathbf{w} \in \mathcal{P} \), then, since \( {\mathbf{w}}_{X} \leq {\mathbf{a}}_{X} \), one has \( \rho \left( X\right) = \mathbf{w}\left( X\right) = \left| {\mathbf{w}}_{X}\right| \leq \xi \left( {\mathbf{a}}_{X}\right) \) . Thus \( \rho \left( X\right) = \xi \left( {\mathbf{a}}_{X}\right) \) . Let \( A \) and \( B \) be subsets of \( \left\lbrack n\right\rbrack \) . Then \( \rho \left( {A \cup B}\right) = \xi \left( {\mathbf{a}}_{A \cup B}\right) = \xi \left( {{\mathbf{a}}_{A} \vee {\mathbf{a}}_{B}}\right) \) and \( \rho \left( {A \cap B}\right) = \xi \left( {\mathbf{a}}_{A \cap B}\right) = \xi \left( {{\mathbf{a}}_{A} \land {\mathbf{a}}_{B}}\right) \) . Thus the submodularity of \( \rho \) follows from Lemma 12.1.2. Let \( \mathcal{Q} \) denote the compact set (12.1). It follows from the definition of \( \rho \) that \( \mathcal{P} \subset \mathcal{Q} \) . We will show \( \mathcal{Q} \subset \mathcal{P} \) . Suppose that there exists \( \mathbf{v} \in \mathcal{Q} \) with \( \mathbf{v} \notin \mathcal{P} \) .
Let \( \mathbf{u} \in \mathcal{P} \) be a maximal independent subvector of \( \mathbf{v} \) which maximizes \( \left| {N\left( \mathbf{u}\right) }\right| \), where \[ N\left( \mathbf{u}\right) = \{ i \in \left\lbrack n\right\rbrack : \mathbf{u}\left( i\right) < \mathbf{v}\left( i\right) \} . \] Let \( \mathbf{w} = \left( {\mathbf{u} + \mathbf{v}}\right) /2 \in {\mathbb{R}}_{ + }^{n} \) and \( \mathbf{b} \in \mathcal{P} \) with \( \mathbf{b}\left( {N\left( \mathbf{u}\right) }\right) = \rho \left( {N\left( \mathbf{u}\right) }\right) \) . Since \( \mathbf{w} \in \mathcal{Q} \), it follows that \[ \mathbf{u}\left( {N\left( \mathbf{u}\right) }\right) < \mathbf{w}\left( {N\left( \mathbf{u}\right) }\right) \leq \rho \left( {N\left( \mathbf{u}\right) }\right) = \mathbf{b}\left( {N\left( \mathbf{u}\right) }\right) . \] Since \( \left| {\mathbf{u}}
113_Topological Groups
Definition 4.1
Definition 4.1. Throughout this chapter, by a word we shall understand a finite sequence of 0 ’s, 1’s, and 2’s. The empty word is admitted. A Markov algorithm is a matrix \( A \) of the form \[ \begin{array}{lll} {a}_{0} & {b}_{0} & {c}_{0} \\ {a}_{1} & {b}_{1} & {c}_{1} \\ \vdots & \vdots & \vdots \\ {a}_{m} & {b}_{m} & {c}_{m} \end{array} \] such that \( {a}_{0},\ldots ,{a}_{m},{b}_{0},\ldots ,{b}_{m} \) are words and \( {c}_{0},\ldots ,{c}_{m} \in \{ 0,1\} \) . A word \( a \) occurs in a word \( b \) if there are words \( c \) and \( d \) such that \( b = {cad} \) . Of course \( a \) may occur in \( b \) several times. An occurrence of \( a \) in \( b \) is a triple \( \left( {c, a, d}\right) \) such that \( b = {cad} \) . It is called the first occurrence of \( a \) in \( b \) if \( c \) has shortest length among all occurrences of \( a \) in \( b \) . An algorithmic step under \( A \) is a pair \( \left( {d, e}\right) \) of words with the following properties: (i) there is an \( i \leq m \) such that \( {a}_{i} \) occurs in \( d \) ; (ii) if \( i \leq m \) is minimum such that \( {a}_{i} \) occurs in \( d \), and if \( \left( {f,{a}_{i}, g}\right) \) is the first occurrence of \( {a}_{i} \) in \( d \), then \( e = f{b}_{i}g \) . Such an algorithmic step is said to be nonterminating, if with \( i \) as in (ii), \( {c}_{i} = 0 \) ; otherwise (i.e., with \( {c}_{i} = 1 \) ), it is called terminating. A computation under \( A \) is a finite sequence \( \left\langle {{d}_{0},\ldots ,{d}_{m}}\right\rangle \) of words such that for each \( i < \) \( m - 1,\left( {{d}_{i},{d}_{i + 1}}\right) \) is a nonterminating algorithmic step, while \( \left( {{d}_{m - 1},{d}_{m}}\right) \) is a terminating algorithmic step. 
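The mechanical procedure in Definition 4.1 is easy to simulate; the following sketch (interpreter and sample algorithm are mine, not from the text) runs the two-row algorithm \( \langle 0\rangle \rightarrow \langle 1\rangle \), \( \langle 1\rangle \rightarrow \cdot \langle 1\rangle \), which replaces every 0 by 1 and then terminates:

```python
def run(rows, word, max_steps=10_000):
    """rows: list of (a_i, b_i, c_i) with c_i in {0, 1}.  Returns the list
    of words produced by successive algorithmic steps starting from word."""
    comp = [word]
    for _ in range(max_steps):
        for a, b, c in rows:          # minimal i with a_i occurring in word
            pos = word.find(a)        # first occurrence of a_i, if any
            if pos != -1:
                word = word[:pos] + b + word[pos + len(a):]
                comp.append(word)
                if c == 1:            # terminating step: computation found
                    return comp
                break
        else:
            return comp               # no row applies; not a computation
    raise RuntimeError("no terminating step reached")

# The two-row algorithm 0 -> 1 (nonterminating), 1 -> . 1 (terminating):
alg = [("0", "1", 0), ("1", "1", 1)]
print(run(alg, "0010"))   # ['0010', '1010', '1110', '1111', '1111']
```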
Now an \( m \) -ary function \( f \) is algorithmic if there is a Markov algorithm \( A \) as above such that for any \( {x}_{0},\ldots ,{x}_{m - 1} \in \omega \) there is a computation \( \left\langle {{d}_{0},\ldots ,{d}_{n}}\right\rangle \) under \( A \) such that the following conditions hold: (iii) \( {d}_{0} = 0{1}^{\left( {x}_{0} + 1\right) }0\cdots 0{1}^{\left( {x}_{m - 1} + 1\right) }02 \) ; (iv) \( \langle 2\rangle \) occurs only once in \( {d}_{n} \) ; (v) \( 0{1}^{\left( f\left( {x}_{0},\ldots ,{x}_{m - 1}\right) + 1\right) }02 \) occurs in \( {d}_{n} \) . We then say that \( A \) computes \( f \) . A row \( {a}_{i}\;{b}_{i}\;0 \) in a Markov algorithm will be indicated \( {a}_{i} \rightarrow {b}_{i} \), while a row \( {a}_{i}\;{b}_{i}\;1 \) will be indicated \( {a}_{i} \rightarrow \cdot {b}_{i} \) . A Markov algorithm lists out finitely many substitutions of one word for another, and an algorithmic computation consists in just mechanically applying these substitutions until reaching a substitution of the form \( {a}_{i} \rightarrow \cdot {b}_{i} \) . Clearly, then, an algorithmic function is effective in the intuitive sense. Markov algorithms are related to Post systems and to formal grammars. Now we shall give some examples of algorithms, which we shall not number since they are not needed later. The algorithm \( {A}_{0} \) : \[ \langle 0\rangle \rightarrow \cdot \langle 0\rangle \] works as follows: any computation under \( {A}_{0} \) is of length 2 and simply repeats the word: \( \langle a, a\rangle \), where \( \langle 0\rangle \) occurs in \( a \) .
Consider the algorithm \( {A}_{1} \) : \[ \langle 0\rangle \rightarrow \cdot \langle {01}\rangle \] Some examples of computations under \( {A}_{1} \) are: (1) \( \langle \langle 0\rangle ,\langle {01}\rangle \rangle \) (2) \( \langle \langle {00}\rangle ,\langle {010}\rangle \rangle \) (3) \( \langle \langle {11010}\rangle ,\langle {110110}\rangle \rangle \) . Let \( {A}_{2} \) be the following algorithm: \[ \langle 0\rangle \rightarrow \langle 1\rangle \] \[ \langle 1\rangle \rightarrow \cdot \langle 1\rangle \text{.} \] The algorithm \( {A}_{2} \) takes any word and replaces all 0's by 1's, then stops. Let \( {A}_{3} \) be \[ \langle 1\rangle \rightarrow \langle {11}\rangle \] Clearly no computation under \( {A}_{3} \) exists. Starting with a word in which \( \langle 1\rangle \) occurs, \( {A}_{3} \) manufactures more and more 1's. Lemma 4.2. Every Turing computable function is algorithmic. Proof. Let \( f \) ( \( n \) -ary) be computed by a Turing machine \( M \), with notation as in 1.1 and 3.9. With each row \( {t}_{i} = \left( {{c}_{j\left( i\right) },{\varepsilon }_{i},{v}_{i},{d}_{i}}\right) \) of \( M \) \( \left( {1 \leq i \leq {2m}}\right) \) we shall associate one or more rows \( {t}^{\prime }\left( {i,0}\right) ,\ldots ,{t}^{\prime }\left( {i,{p}_{i}}\right) \) of a Markov algorithm, depending on \( {v}_{i} \) . Case 1. \( {v}_{i} = 0 \) or 1 . We associate the row \[ \left\langle {{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2}\right\rangle \rightarrow \left\langle {{v}_{i}\;2\;{1}^{\left( {d}_{i} + 1\right) }\;2}\right\rangle . \] Case 2. \( {v}_{i} = 2 \) .
We associate the rows (in order) \[ \left\langle {0\;{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2}\right\rangle \rightarrow \left\langle {0\;2\;{1}^{\left( {d}_{i} + 1\right) }\;2\;{\varepsilon }_{i}}\right\rangle \] \[ \left\langle {1\;{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2}\right\rangle \rightarrow \left\langle {1\;2\;{1}^{\left( {d}_{i} + 1\right) }\;2\;{\varepsilon }_{i}}\right\rangle \] \[ \left\langle {{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2}\right\rangle \rightarrow \left\langle {0\;2\;{1}^{\left( {d}_{i} + 1\right) }\;2\;{\varepsilon }_{i}}\right\rangle \] Case 3. \( {v}_{i} = 3 \) . We associate the rows (in order) \[ \left\langle {{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2\;0}\right\rangle \rightarrow \left\langle {{\varepsilon }_{i}\;0\;2\;{1}^{\left( {d}_{i} + 1\right) }\;2}\right\rangle \] \[ \left\langle {{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2\;1}\right\rangle \rightarrow \left\langle {{\varepsilon }_{i}\;1\;2\;{1}^{\left( {d}_{i} + 1\right) }\;2}\right\rangle \] \[ \left\langle {{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2}\right\rangle \rightarrow \left\langle {{\varepsilon }_{i}\;0\;2\;{1}^{\left( {d}_{i} + 1\right) }\;2}\right\rangle . \] Case 4. \( {v}_{i} = 4 \) . We associate the row \[ \left\langle {{\varepsilon }_{i}\;2\;{1}^{\left( {c}_{j\left( i\right) } + 1\right) }\;2}\right\rangle \rightarrow \cdot \left\langle {{\varepsilon }_{i}\;2}\right\rangle .
\] Now let \( A \) be the following Markov algorithm: \[ {t}^{\prime }\left( {1,0}\right) ,\ldots ,{t}^{\prime }\left( {{2m},{p}_{2m}}\right) \] \[ \langle 2\rangle \rightarrow \left\langle {2\;{1}^{\left( {c}_{1} + 1\right) }\;2}\right\rangle . \] We claim that \( A \) computes \( f \) . To see this, let \( {x}_{0},\ldots ,{x}_{n - 1} \in \omega \) . Since \( M \) computes \( f \), by 3.9 there is a computation \( \left\langle {\left( {F,{c}_{1},0}\right) ,\left( {{G}_{1},{a}_{1},{b}_{1}}\right) ,\ldots ,\left( {{G}_{q - 1},{a}_{q - 1},{b}_{q - 1}}\right) }\right\rangle \) of \( M \) with the following properties: (1) \( 0{1}^{\left( {x}_{0} + 1\right) }0\cdots 0{1}^{\left( {x}_{n - 1} + 1\right) } \) lies on \( F \) ending at \( -1 \), and \( F \) is 0 elsewhere; (2) \( {1}^{\left( {x}_{n - 1} + 1\right) }0{1}^{\left( f\left( {x}_{0},\ldots ,{x}_{n - 1}\right) + 1\right) }0 \) lies on \( {G}_{q - 1} \) ending at \( {b}_{q - 1} \) . Now let \( {G}_{0} = F,{a}_{0} = {c}_{1},{b}_{0} = 0 \) . Let \( {Q}_{-1} \) be the word \[ 0\;{1}^{\left( {x}_{0} + 1\right) }\;0\;\cdots \;0\;{1}^{\left( {x}_{n - 1} + 1\right) }\;0\;2. \] Now we define \( {N}_{i},{P}_{i},{Q}_{i} \) for \( i < q \) by induction. Let \( {N}_{0} \) be \( 0{1}^{\left( {x}_{0} + 1\right) }0\cdots 0{1}^{\left( {x}_{n - 1} + 1\right) }0 \), let \( {P}_{0} \) be the empty word, and let \( {Q}_{0} = 0{1}^{\left( {x}_{0} + 1\right) }0\cdots 0{1}^{\left( {x}_{n - 1} + 1\right) }02{1}^{\left( {c}_{1} + 1\right) }2 \) .
Suppose now that \( i + 1 < q \) and that \( {N}_{i},{P}_{i},{Q}_{i} \) have been defined so that the following conditions hold: (3) \( {N}_{i} \neq 0 \) ; (4) \( {N}_{i} \) lies on \( {G}_{i} \) ending at \( {b}_{i} \) ; (5) \( {P}_{i} \) lies on \( {G}_{i} \) beginning at \( {b}_{i} + 1 \) ; (6) \( {G}_{i} \) is 0 except for \( {N}_{i}{P}_{i} \) ; (7) exactly two 2's occur in \( {Q}_{i} \) ; (8) \( {N}_{i}2{1}^{\left( {a}_{i} + 1\right) }2{P}_{i} = {Q}_{i} \) ; (9) if \( i \neq 0 \), then \( \left( {{Q}_{i - 1},{Q}_{i}}\right) \) is a nonterminating algorithmic step under \( A \) . Clearly (3)-(9) hold for \( i = 0 \) . We now define \( {N}_{i + 1},{P}_{i + 1},{Q}_{i + 1} \) . Let the row of \( M \) beginning with \( {a}_{i},{G}_{i}{b}_{i} \) be \[ {a}_{i}\;{G}_{i}{b}_{i}\;v\;w. \] We now distinguish cases depending on \( v \) . Note that, since \( i < q - 1 \), \( v \neq 4 \) . In each case we define \( {N}_{i + 1},{P}_{i + 1},{Q}_{i + 1} \), and it will then be evident that (3)-(9) hold for \( i + 1 \) in that case. In each case, let \( {Q}_{i + 1} \) be defined by (8) for \( i + 1 \) . Case 1. \( v = 0 \) . Let \( {N}_{i + 1} \) be \( {N}_{i} \) with its last entry replaced by 0, and let \( {P}_{i + 1} = {P}_{i} \) . Case 2. \( v = 1 \) . Similarly. Case 3. \( v = 2 \) . Here we take two subcases: Subcase 1. \( {N}_{i} \) has length at least 2 . Write \( {N}_{i} = {N}_{i + 1}\varepsilon \), where \( \varepsilon
117_《微积分笔记》最终版_by零蛋大
Definition 4.6
Definition 4.6. Let \( \mathcal{A} \) be a finite sub-algebra of \( \mathcal{B} \) with \( \xi \left( \mathcal{A}\right) = \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} \) . The entropy of \( \mathcal{A} \) (or of \( \xi \left( \mathcal{A}\right) \) ) is the number \( H\left( \mathcal{A}\right) = H\left( {\xi \left( \mathcal{A}\right) }\right) = - \mathop{\sum }\limits_{{i = 1}}^{k}m\left( {A}_{i}\right) \) \( \log m\left( {A}_{i}\right) \) . As mentioned above, \( H\left( \mathcal{A}\right) \) is a measure of the uncertainty removed (or information gained) by performing the experiment with outcomes \( \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} \) . ## Remarks (1) If \( \mathcal{A} = \{ X,\phi \} \) then \( H\left( \mathcal{A}\right) = 0 \) . Here \( \mathcal{A} \) represents the outcomes of a "certain" experiment, so there is no uncertainty about the outcome. (2) If \( \xi \left( \mathcal{A}\right) = \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} \) where \( m\left( {A}_{i}\right) = 1/k \) for all \( i \), then \[ H\left( \mathcal{A}\right) = - \mathop{\sum }\limits_{{i = 1}}^{k}\frac{1}{k}\log \frac{1}{k} = \log k. \] We shall show later (Corollary 4.2.1) that \( \log k \) is the maximum value for the entropy of a partition with \( k \) sets. The greatest uncertainty about the outcome should occur when the outcomes are equally likely. (3) \( H\left( \mathcal{A}\right) \geq 0 \) . (4) If \( \mathcal{A} \doteq \mathcal{C} \) then \( H\left( \mathcal{A}\right) = H\left( \mathcal{C}\right) \) . (5) If \( T : X \rightarrow X \) is measure-preserving then \( H\left( {{T}^{-1}\mathcal{A}}\right) = H\left( \mathcal{A}\right) \) . Several properties of entropy are implied by the following elementary result. Theorem 4.2. The function \( \phi : \lbrack 0,\infty ) \rightarrow R \) defined by \[ \phi \left( x\right) = \left\{ \begin{array}{ll} 0 & \text{if }x = 0 \\ x \cdot \log x & \text{if }x \neq 0 \end{array}\right. 
\] is strictly convex, i.e., \( \phi \left( {{\alpha x} + {\beta y}}\right) \leq {\alpha \phi }\left( x\right) + {\beta \phi }\left( y\right) \) if \( x, y \in \lbrack 0,\infty ),\alpha ,\beta \geq 0 \) , \( \alpha + \beta = 1 \) ; with equality only when \( x = y \) or \( \alpha = 0 \) or \( \beta = 0 \) . ![59f762bd-21ea-4e62-bb94-788c5175630f_86_0.jpg](images/59f762bd-21ea-4e62-bb94-788c5175630f_86_0.jpg) By induction we get \[ \phi \left( {\mathop{\sum }\limits_{{i = 1}}^{k}{\alpha }_{i}{x}_{i}}\right) \leq \mathop{\sum }\limits_{{i = 1}}^{k}{\alpha }_{i}\phi \left( {x}_{i}\right) \] if \( {x}_{i} \in \lbrack 0,\infty ),{\alpha }_{i} \geq 0,\mathop{\sum }\limits_{{i = 1}}^{k}{\alpha }_{i} = 1 \) ; and equality holds only when all the \( {x}_{i} \) , corresponding to non-zero \( {\alpha }_{i} \), are equal. Proof. We have \[ {\phi }^{\prime }\left( x\right) = 1 + \log x \] \[ {\phi }^{\prime \prime }\left( x\right) = \frac{1}{x} > 0\;\text{ on }\left( {0,\infty }\right) . \] Fix \( \alpha ,\beta \) with \( \alpha > 0,\beta > 0 \) . Suppose \( y > x \) . By the mean value theorem \( \phi \left( y\right) - \phi \left( {{\alpha x} + {\beta y}}\right) = {\phi }^{\prime }\left( z\right) \alpha \left( {y - x}\right) \; \) for some \( z \) with \( {\alpha x} + {\beta y} < z < y \) and \( \phi \left( {{\alpha x} + {\beta y}}\right) - \phi \left( x\right) = {\phi }^{\prime }\left( w\right) \beta \left( {y - x}\right) \) for some \( w \) with \( x < w < {\alpha x} + {\beta y}. \) Since \( {\phi }^{\prime \prime } > 0 \), we have \( {\phi }^{\prime }\left( z\right) > {\phi }^{\prime }\left( w\right) \) and hence \[ \beta \left( {\phi \left( y\right) - \phi \left( {{\alpha x} + {\beta y}}\right) }\right) = {\phi }^{\prime }\left( z\right) {\alpha \beta }\left( {y - x}\right) > {\phi }^{\prime }\left( w\right) {\alpha \beta }\left( {y - x}\right) \] \[ = \alpha \left( {\phi \left( {{\alpha x} + {\beta y}}\right) - \phi \left( x\right) }\right) . 
\] Therefore \( \phi \left( {{\alpha x} + {\beta y}}\right) < {\alpha \phi }\left( x\right) + {\beta \phi }\left( y\right) \) if \( x, y > 0 \) . It clearly holds also if \( x, y \geq 0 \) and \( x \neq y \) . Corollary 4.2.1. If \( \xi = \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} \) then \( H\left( \xi \right) \leq \log k \), and \( H\left( \xi \right) = \log k \) only when \( m\left( {A}_{i}\right) = 1/k \) for all \( i \) . Proof. Put \( {\alpha }_{i} = 1/k \) and \( {x}_{i} = m\left( {A}_{i}\right) ,1 \leq i \leq k \), in the inequality of Theorem 4.2. ## §4.3 Conditional Entropy Conditional entropy is not required in order to give the definition of the entropy of a transformation. It is useful in deriving properties of entropy, and we discuss it now before we consider the entropy of a transformation. Let \( \mathcal{A},\mathcal{C} \) be finite sub-algebras of \( \mathcal{B} \) and \[ \xi \left( \mathcal{A}\right) = \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} ,\;\xi \left( \mathcal{C}\right) = \left\{ {{C}_{1},\ldots ,{C}_{p}}\right\} . \] The discussion in §4.2 suggests the following definition. Definition 4.7. The entropy of \( \mathcal{A} \) given \( \mathcal{C} \) is the number \[ H\left( {\xi \left( \mathcal{A}\right) /\xi \left( \mathcal{C}\right) }\right) = H\left( {\mathcal{A}/\mathcal{C}}\right) = - \mathop{\sum }\limits_{{j = 1}}^{p}m\left( {C}_{j}\right) \mathop{\sum }\limits_{{i = 1}}^{k}\frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \] \[ = - \mathop{\sum }\limits_{{i, j}}m\left( {{A}_{i} \cap {C}_{j}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \] omitting the \( j \) -terms when \( m\left( {C}_{j}\right) = 0 \) . 
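Definition 4.6 and Corollary 4.2.1 are easy to check numerically. A minimal sketch, using natural logarithms (the function name is mine):

```python
from math import log

def entropy(probs):
    """H = -sum_i m(A_i) log m(A_i), with the convention 0 * log 0 = 0."""
    return -sum(p * log(p) for p in probs if p > 0)

k = 4
uniform = [1 / k] * k              # Remark (2): equally likely outcomes
skewed = [0.7, 0.1, 0.1, 0.1]      # any other partition with k sets

assert abs(entropy(uniform) - log(k)) < 1e-12   # H = log k at the uniform partition
assert entropy(skewed) < log(k)                 # Corollary 4.2.1: log k is maximal
assert entropy([1.0]) == 0.0                    # Remark (1): the trivial partition
```

The strict inequality for `skewed` reflects the strict convexity of \( \phi \) in Theorem 4.2: equality forces all cells to have equal measure.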
So to get \( H\left( {\mathcal{A}/\mathcal{C}}\right) \) one considers \( {C}_{j} \) as a measure space with normalized measure \( m\left( \cdot \right) /m\left( {C}_{j}\right) \) and calculates the entropy of the partition of the set \( {C}_{j} \) induced by \( \xi \left( \mathcal{A}\right) \) (this gives \[ - \mathop{\sum }\limits_{{i = 1}}^{k}\frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) }\log \frac{m\left( {{A}_{i} \cap {C}_{j}}\right) }{m\left( {C}_{j}\right) } \] ) and then averages the answer taking into account the size of \( {C}_{j} \) . ( \( H\left( {\mathcal{A}/\mathcal{C}}\right) \) measures the uncertainty about the outcome of \( \mathcal{A} \) given that we will be told the outcome of \( \mathcal{C} \) .) Let \( \mathcal{N} \) denote the \( \sigma \) -field \( \{ \phi, X\} \) . Then \( H\left( {\mathcal{A}/\mathcal{N}}\right) = H\left( \mathcal{A}\right) \) . (Since \( \mathcal{N} \) represents the outcome of the trivial experiment, one gains nothing from knowledge of it.) ## Remarks (1) \( H\left( {\mathcal{A}/\mathcal{C}}\right) \geq 0 \) . (2) If \( \mathcal{A} \doteq \mathcal{D} \) then \( H\left( {\mathcal{A}/\mathcal{C}}\right) = H\left( {\mathcal{D}/\mathcal{C}}\right) \) . (3) If \( \mathcal{C} \doteq \mathcal{D} \) then \( H\left( {\mathcal{A}/\mathcal{C}}\right) = H\left( {\mathcal{A}/\mathcal{D}}\right) \) . Theorem 4.3. Let \( \left( {X,\mathcal{B}, m}\right) \) be a probability space. If \( \mathcal{A},\mathcal{C},\mathcal{D} \) are finite subalgebras of \( \mathcal{B} \) then: (i) \( H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) = H\left( {\mathcal{A}/\mathcal{D}}\right) + H\left( {\mathcal{C}/\mathcal{A} \vee \mathcal{D}}\right) \) . (ii) \( H\left( {\mathcal{A} \vee \mathcal{C}}\right) = H\left( \mathcal{A}\right) + H\left( {\mathcal{C}/\mathcal{A}}\right) \) . 
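Definition 4.7 and the chain rule (ii) can be spot-checked on a small joint distribution. A sketch (the helper names and the sample distribution are mine):

```python
from math import log

def H(ps):
    # entropy of a partition, with 0 log 0 = 0
    return -sum(p * log(p) for p in ps if p > 0)

def H_cond(joint):
    """H(A/C) = -sum_{i,j} m(A_i ∩ C_j) log( m(A_i ∩ C_j) / m(C_j) ),
    where joint[i][j] = m(A_i ∩ C_j) and columns index the cells C_j;
    the j-terms with m(C_j) = 0 are omitted."""
    col = [sum(row[j] for row in joint) for j in range(len(joint[0]))]
    return -sum(p * log(p / col[j])
                for row in joint for j, p in enumerate(row) if p > 0)

# a joint distribution of two partitions A = {A_1, A_2}, C = {C_1, C_2}
joint = [[0.4, 0.1], [0.2, 0.3]]
mA = [sum(row) for row in joint]                 # marginals m(A_i)
join_cells = [p for row in joint for p in row]   # cells of A ∨ C

# chain rule (ii): H(A ∨ C) = H(A) + H(C/A); transpose to condition on A
jointT = [[joint[i][j] for i in range(2)] for j in range(2)]
assert abs(H(join_cells) - (H(mA) + H_cond(jointT))) < 1e-12
```

Conditioning on the trivial algebra corresponds to a single column holding the marginals, and then `H_cond` reduces to `H`, matching \( H\left( {\mathcal{A}/\mathcal{N}}\right) = H\left( \mathcal{A}\right) \).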
(iii) \( \mathcal{A} \subseteq \mathcal{C} \Rightarrow H\left( {\mathcal{A}/\mathcal{D}}\right) \leq H\left( {\mathcal{C}/\mathcal{D}}\right) \) . (iv) \( \mathcal{A} \subseteq \mathcal{C} \Rightarrow H\left( \mathcal{A}\right) \leq H\left( \mathcal{C}\right) \) . (v) \( \mathcal{C} \subseteq \mathcal{D} \Rightarrow H\left( {\mathcal{A}/\mathcal{C}}\right) \geq H\left( {\mathcal{A}/\mathcal{D}}\right) \) . (vi) \( H\left( \mathcal{A}\right) \geq H\left( {\mathcal{A}/\mathcal{D}}\right) \) . (vii) \( H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) \leq H\left( {\mathcal{A}/\mathcal{D}}\right) + H\left( {\mathcal{C}/\mathcal{D}}\right) \) . (viii) \( H\left( {\mathcal{A} \vee \mathcal{C}}\right) \leq H\left( \mathcal{A}\right) + H\left( \mathcal{C}\right) \) . (ix) If \( T \) is measure-preserving then: \[ H\left( {{T}^{-1}\mathcal{A}/{T}^{-1}\mathcal{C}}\right) = H\left( {\mathcal{A}/\mathcal{C}}\right) \text{, and} \] (x) \( H\left( {{T}^{-1}\mathcal{A}}\right) = H\left( \mathcal{A}\right) \) . (The reader should think of the intuitive meaning of each statement. This enables one to remember these results easily.) Proof. Let \( \xi \left( \mathcal{A}\right) = \left\{ {A}_{i}\right\} ,\xi \left( \mathcal{C}\right) = \left\{ {C}_{j}\right\} ,\xi \left( \mathcal{D}\right) = \left\{ {D}_{k}\right\} \) and assume, without loss of generality, that all sets have strictly positive measure (since if \( \xi \left( \mathcal{A}\right) = \) \( \left\{ {{A}_{1},\ldots ,{A}_{k}}\right\} \) with \( m\left( {A}_{i}\right) > 0,1 \leq i \leq r \), and \( m\left( {A}_{i}\right) = 0 \) for \( r < i \leq k \), we can replace \( \xi \left( \mathcal{A}\right) \) by \( \left\{ {{A}_{1},\ldots ,{A}_{r - 1},{A}_{r} \cup {A}_{r + 1} \cup \cdots \cup {A}_{k}}\right\} \) ; see remarks (2), (3) above). 
\[ \text{(i)}H\left( {\mathcal{A} \vee \mathcal{C}/\mathcal{D}}\right) = - \mathop{\sum }\limits_{{i, j, k}}m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) \log \frac{m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) }\text{.} \] But \[ \frac{m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } = \frac{m\left( {{A}_{i} \cap {C}_{j} \cap {D}_{k}}\right) }{m\left( {{A}_{i} \cap {D}_{k}}\right) }\frac{m\left( {{A}_{i} \cap {D}_{k}}\right) }{m\left( {D}_{k}\right) } \] unless \( m\left( {{A}_{i} \cap {D}_{k}}\right) = 0 \) and then the left hand s
1097_(GTM253)Elementary Functional Analysis
Definition 1.32
Definition 1.32. An orthonormal basis for a Hilbert space \( \mathcal{H} \) is a maximal orthonormal set; that is, an orthonormal set that is not properly contained in any orthonormal set. It is easy to see that in the \( {\ell }^{2} \) example above, the set \( \left\{ {{e}_{j} : j \geq 1}\right\} \) is an orthonormal basis. Harder, but still true, is that \( \left\{ {{e}^{int} : n \in \mathbb{Z}}\right\} \), where \( \mathbb{Z} \) is the set of all integers and \( {e}^{int} = \cos \left( {nt}\right) + i\sin \left( {nt}\right) \), is an orthonormal basis for \( {L}^{2}\left( T\right) \) . This result is a consequence of Fejér's theorem; for a proof the reader is referred to [48]. Every Hilbert space has an orthonormal basis (see Exercise 3.1 in Chapter 3). The proof of this statement uses Zorn's lemma, which will be discussed in Section 3.1. The Hilbert spaces of principal interest to us will either have a finite or countably infinite orthonormal basis. A Hilbert space is also a vector space, and as such it has a linear (or Hamel) basis. We digress here briefly to recall some facts and terminology from linear algebra. Given a nonempty subset \( S \) in a vector space \( V \), by a linear combination of vectors in \( S \) we mean a finite sum of the form \[ \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{v}_{j} \] where the vectors \( {v}_{j} \) are in \( S \) and the coefficients \( {\alpha }_{j} \) are scalars. A set \( S \) spans \( V \) if every vector in \( V \) is a (necessarily finite) linear combination of vectors in \( S \) . A set \( S \) of vectors is said to be linearly independent if the only linear combination of vectors in \( S \) that is equal to the zero vector is the one whose scalar coefficients are all zero. A linear, or Hamel, basis for a vector space \( V \) is a subset of \( V \) that is both linearly independent and spans \( V \) . 
It is easy to see that a Hamel basis can be equivalently defined as a maximal linearly independent subset of \( V \) ; that is, a linearly independent set that is not properly contained in any linearly independent set. Every vector space has a Hamel basis, and for a given vector space, any two Hamel bases can be put in one-to-one correspondence; proofs of these can be provided by the reader, or found, for example, in [14]. Notice that the concept of a Hamel basis depends only on the linear structure of \( V \), and not the topological structure that comes when the vector space is endowed with a norm or inner product. It is for this reason that in a Hilbert space the concept of an orthonormal basis proves to be more central than that of a Hamel basis, so much so that in the context of Hilbert spaces the term "basis" will always mean "orthonormal basis," and "dimension" will always refer to the (common) cardinality of any orthonormal basis. In particular, a Hilbert space is said to be finite-dimensional if it has a finite orthonormal basis, and infinite-dimensional otherwise. This convention will not lead to any confusion because of the following two facts: A finite orthonormal set in a Hilbert space \( \mathcal{H} \) that is not properly contained in any orthonormal set is in fact a Hamel basis for \( \mathcal{H} \), and no Hilbert space with a finite Hamel basis can contain an infinite orthonormal set. See Exercise 1.21 for a further exploration of these and related ideas. Given a linearly independent sequence \( {\left\{ {f}_{n}\right\} }_{1}^{\infty } \) in a Hilbert space \( \mathcal{H} \), there always exists an orthonormal sequence \( {\left\{ {e}_{n}\right\} }_{1}^{\infty } \) such that \[ \operatorname{span}\left\{ {{f}_{1},{f}_{2},\ldots ,{f}_{k}}\right\} = \operatorname{span}\left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{k}}\right\} \] for each positive integer \( k \), where "span" denotes the set of linear combinations of the indicated set. 
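The orthonormalization just described, with \( \operatorname{span}\{ f_1,\ldots,f_k\} = \operatorname{span}\{ e_1,\ldots,e_k\} \) at every stage, is Gram–Schmidt. A minimal finite-dimensional sketch in \( \mathbb{R}^n \) (pure Python; the function name is mine, and the vectors are assumed linearly independent):

```python
def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors in R^n so that
    span{f_1,...,f_k} = span{e_1,...,e_k} for each k."""
    basis = []
    for f in vectors:
        # subtract the projection of f onto the span of the e_j found so far
        for e in basis:
            c = sum(fi * ei for fi, ei in zip(f, e))
            f = [fi - c * ei for fi, ei in zip(f, e)]
        norm = sum(fi * fi for fi in f) ** 0.5
        basis.append([fi / norm for fi in f])
    return basis

e1, e2 = gram_schmidt([[3.0, 0.0], [1.0, 2.0]])
print(e1, e2)   # [1.0, 0.0] [0.0, 1.0]
```

At each step the new vector is orthogonal to all earlier ones and of unit norm, so the output is an orthonormal set with the same span as the input.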
An inductive process for constructing the vectors \( {e}_{j} \), called Gram-Schmidt orthonormalization, is outlined in Exercise 1.19. The last topic of this section is motivated by the question: When is an orthonormal set in a Hilbert space an orthonormal basis? When \( \left\{ {e}_{k}\right\} \) is a finite or countably infinite orthonormal set in \( \mathcal{H} \), then for every vector \( h \in \mathcal{H} \) we have \[ \sum {\left| \left\langle h,{e}_{k}\right\rangle \right| }^{2} \leq \parallel h{\parallel }^{2} \] this is known as Bessel's inequality. It follows from the observation that the closest vector to \( h \) in the linear span of the orthonormal set \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{n}}\right\} \) is \( \mathop{\sum }\limits_{1}^{n}\left\langle {h,{e}_{k}}\right\rangle {e}_{k} \) (see Exercise 1.21), and the Pythagorean identity of Proposition 1.22. The identity in (e) of the next result is called Parseval's identity; it is the equality case of Bessel's inequality. Theorem 1.33. If \( {\left\{ {e}_{n}\right\} }_{1}^{\infty } \) is an orthonormal sequence in a Hilbert space \( \mathcal{H} \), then the following conditions are equivalent: (a) \( {\left\{ {e}_{n}\right\} }_{1}^{\infty } \) is an orthonormal basis. (b) If \( h \in \mathcal{H} \) and \( h \bot {e}_{n} \) for all \( n \), then \( h = 0 \) . (c) For every \( h \in \mathcal{H}, h = \mathop{\sum }\limits_{1}^{\infty }\left\langle {h,{e}_{n}}\right\rangle {e}_{n} \) ; equality here means the convergence in the norm of \( \mathcal{H} \) of the partial sums to \( h \) . (d) For every \( h \in \mathcal{H} \), there exist complex numbers \( {a}_{n} \) so that \( h = \mathop{\sum }\limits_{1}^{\infty }{a}_{n}{e}_{n} \) . (e) For every \( h \in \mathcal{H},\mathop{\sum }\limits_{1}^{\infty }{\left| \left\langle h,{e}_{n}\right\rangle \right| }^{2} = \parallel h{\parallel }^{2} \) . 
(f) For all \( h \) and \( g \) in \( \mathcal{H},\mathop{\sum }\limits_{1}^{\infty }\left\langle {h,{e}_{n}}\right\rangle \left\langle {{e}_{n}, g}\right\rangle = \langle h, g\rangle \) . Proof. The equivalence of (a) and (b) follows almost immediately from the definition, since if \( 0 \neq h \) and \( h \bot {e}_{n} \) for all \( n \), then \( {\left\{ {e}_{n}\right\} }_{1}^{\infty } \cup \{ h/\parallel h\parallel \} \) is an orthonormal set. Now assume (b) and suppose \( h \in \mathcal{H} \) and let \( {c}_{n} = \left\langle {h,{e}_{n}}\right\rangle \) . By Bessel’s inequality we have \( \mathop{\sum }\limits_{1}^{\infty }{\left| {c}_{n}\right| }^{2} < \infty \), so that the partial sums \( {s}_{k} = \mathop{\sum }\limits_{{n = 1}}^{k}{c}_{n}{e}_{n} \) form a Cauchy sequence in \( \mathcal{H} \) with \[ {\begin{Vmatrix}{s}_{k} - {s}_{m}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{n = m + 1}}^{k}{\left| {c}_{n}\right| }^{2} \] whenever \( k > m \) . By completeness these partial sums must converge in \( \mathcal{H} \) to some vector \( s \) . We claim \( s = h \), which will show (c). For each fixed \( n \) , \[ \left\langle {s,{e}_{n}}\right\rangle = \left\langle {\mathop{\lim }\limits_{{k \rightarrow \infty }}{s}_{k},{e}_{n}}\right\rangle = \mathop{\lim }\limits_{{k \rightarrow \infty }}\left\langle {{s}_{k},{e}_{n}}\right\rangle = \left\langle {h,{e}_{n}}\right\rangle , \] where we have used the continuity of the inner product, a consequence of the Cauchy-Schwarz inequality. Thus \( \left\langle {s - h,{e}_{n}}\right\rangle = 0 \) for all \( n \), and by (b), \( s = h \) . The reverse implication, \( \left( c\right) \Rightarrow \left( b\right) \), is easy, since if (c) holds, and \( h \bot {e}_{n} \) for all \( n \), then \( h = \mathop{\sum }\limits_{1}^{\infty }\left\langle {h,{e}_{n}}\right\rangle {e}_{n} \) implies that \( h = 0 \) . 
Clearly (c) implies (d) and the reverse implication follows from setting \( {f}_{k} = \) \( \mathop{\sum }\limits_{{j = 1}}^{k}{a}_{j}{e}_{j} \) and noting as above that \[ \left\langle {h,{e}_{n}}\right\rangle = \left\langle {\mathop{\lim }\limits_{{k \rightarrow \infty }}{f}_{k},{e}_{n}}\right\rangle = \mathop{\lim }\limits_{{k \rightarrow \infty }}\left\langle {{f}_{k},{e}_{n}}\right\rangle = {a}_{n}. \] Next we show that (c) implies (e). Continuity of the norm shows that if \( h = \) \( \mathop{\sum }\limits_{1}^{\infty }\left\langle {h,{e}_{n}}\right\rangle {e}_{n} \), then \( \begin{Vmatrix}{s}_{k}\end{Vmatrix} \rightarrow \parallel h\parallel \) where \( {s}_{k} \) is the partial sum \( \mathop{\sum }\limits_{{n = 1}}^{k}\left\langle {h,{e}_{n}}\right\rangle {e}_{n} \) and \( {\begin{Vmatrix}{s}_{k}\end{Vmatrix}}^{2} = \) \( \mathop{\sum }\limits_{1}^{\bar{k}}{\left| \left\langle h,{e}_{n}\right\rangle \right| }^{2} \) by the Pythagorean formula. Thus (e) holds. Clearly (f) implies (e) and the reverse implication can be obtained by using the polarization identity to write \( \langle h, g\rangle \) in terms of \( \parallel h + g{\parallel }^{2},\parallel h - g{\parallel }^{2},\parallel h + {ig}{\parallel }^{2} \) and \( \parallel h - {ig}{\parallel }^{2} \), expanding each of these norms using (e), and computing. Finally, if (e) holds, and \( h \bot {e}_{n} \) for all \( n \), then \( \parallel h{\parallel }^{2} = \mathop{\sum }\limits_{1}^{\infty }{\left| \left\langle h,{e}_{n}\right\rangle \right| }^{2} = 0 \), giving (b). When \( {\left\{ {e}_{n}\right\} }_{1}^{\infty } \) is an orthonormal basis for a Hilbert space \( \mathcal{H} \), and \( h \in \mathcal{H} \), the scalars \( \left\langle {h,{e}_{n}}\right\rangle \) are called the Fourier coefficients of \( h \) with respect to \( {\left\{ {e}_{n}\right\} }_{1}^{\infty } \) . 
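In a finite-dimensional stand-in for Theorem 1.33, conditions (c) and (e) are easy to verify numerically. A sketch in \( \mathbb{C}^2 \) (the basis and the vector below are my own choices, not from the text):

```python
def inner(u, v):
    # complex inner product <u, v> = sum u_i * conj(v_i)
    return sum(a * b.conjugate() for a, b in zip(u, v))

r = 2 ** -0.5
e = [[r, r], [r, -r]]          # an orthonormal basis of C^2
h = [1 + 2j, 3 - 1j]

coeffs = [inner(h, en) for en in e]                                  # <h, e_n>
recon = [sum(c * en[i] for c, en in zip(coeffs, e)) for i in range(2)]  # (c)
norm_sq = sum(abs(c) ** 2 for c in coeffs)                           # (e)

assert all(abs(recon[i] - h[i]) < 1e-12 for i in range(2))   # h = sum <h,e_n> e_n
assert abs(norm_sq - inner(h, h).real) < 1e-12               # Parseval's identity
```

Truncating the sum in (c) gives the partial sums whose Cauchy property, via Bessel's inequality, drove the proof above; in finite dimensions the "limit" is reached at the last term.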
In this case, the sum in (c) of the above theorem is referred to as the Fourier series of \( h \) , relative to the specified orthonormal basis. No countably infinite orthonormal basis can ever be a Hamel basis. Indeed, using Gram-Schmidt orthonormalization we can show something stronger. Suppose that \( \left\{ {{f}_{1},{f}_{2},\ldots }\right\} \) is a linearly independent sequence in a Hilbert space \( \mathcal{H} \) . We claim that there is a vector in \( \ma
1097_(GTM253)Elementary Functional Analysis
Definition 2.24
Definition 2.24. We say that \( A \in \mathcal{B}\left( {\mathcal{H},\mathcal{K}}\right) \) is bounded below if there is a \( \delta > 0 \) such that \( \parallel {Ah}\parallel \geq \delta \parallel h\parallel \) for all \( h \) in \( \mathcal{H} \) . Clearly, if \( A \) is bounded below, its kernel is \( \{ 0\} \) and hence \( A \) is one-to-one. However, \( A \) being one-to-one does not imply that \( A \) is bounded below; a diagonal operator with diagonal sequence \( \{ 1/n\} \) provides a counterexample. A weakening of the condition "A maps \( \mathcal{H} \) onto \( \mathcal{K} \) " is the requirement that the range of \( A \) is dense in \( \mathcal{K} \) ; i.e., the closure of the range of \( A \) should be all of \( \mathcal{K} \) . Theorem 2.25. If \( A \) is a bounded linear operator from a Hilbert space \( \mathcal{H} \) to a Hilbert space \( \mathcal{K} \), then \( A \) is invertible if and only if \( A \) is bounded below and has dense range. Proof. The "only if" direction is easy: \( A \) invertible guarantees that the range of \( A \) is equal to \( \mathcal{K} \), and moreover for any \( h \in \mathcal{H} \) , \[ \parallel h\parallel = \begin{Vmatrix}{{A}^{-1}{Ah}}\end{Vmatrix} \leq \begin{Vmatrix}{A}^{-1}\end{Vmatrix}\parallel {Ah}\parallel \] so that \[ \parallel {Ah}\parallel \geq \frac{1}{\begin{Vmatrix}{A}^{-1}\end{Vmatrix}}\parallel h\parallel \] and \( A \) is bounded below. The "if" direction is outlined in Exercise 2.12. When we ask why a particular operator fails to be invertible, it is sometimes more useful to see which of the properties "bounded below" and/or "dense range" it fails to have, rather than looking at the properties "one-to-one" and "onto." ## 2.3 Adjoints of Banach Space Operators So far we have defined \( {A}^{ * } \) when \( A \) is a bounded linear operator between Hilbert spaces, and our definition seems closely tied to the inner product structure. 
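The diagonal counterexample mentioned after Definition 2.24 can be made concrete on finitely supported sequences: the operator with diagonal \( \{ 1/n\} \) is one-to-one, yet no \( \delta > 0 \) works. A numerical sketch (finite truncation; the helper names are mine):

```python
def apply_diag(x):
    # diagonal operator A e_n = (1/n) e_n on a finitely supported sequence
    return [xi / (i + 1) for i, xi in enumerate(x)]

def norm(x):
    return sum(xi * xi for xi in x) ** 0.5

# On the basis vector e_n we get ||A e_n|| / ||e_n|| = 1/n -> 0, so no
# delta > 0 satisfies ||Ax|| >= delta ||x||, even though ker A = {0}.
for n in [1, 10, 100]:
    e_n = [0.0] * (n - 1) + [1.0]
    ratio = norm(apply_diag(e_n)) / norm(e_n)
    assert abs(ratio - 1 / n) < 1e-12
```

By Theorem 2.25 this operator therefore fails to be invertible for the "bounded below" reason, not for lack of injectivity.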
We pause briefly in this section to see if we can define the adjoint of a bounded linear operator between Banach spaces. For simplicity, we restrict attention to \( A \in \mathcal{B}\left( X\right) \), where \( X \) is a Banach space. Let us begin by rephrasing the defining property of the adjoint of a Hilbert space operator. If \( A \in \mathcal{B}\left( \mathcal{H}\right) \) where \( \mathcal{H} \) is a Hilbert space, then \( {A}^{ * } \) is the unique bounded linear operator on \( \mathcal{H} \) satisfying \[ \langle {Ax}, y\rangle = \left\langle {x,{A}^{ * }y}\right\rangle \] (2.3) for all \( x \) and \( y \) in \( \mathcal{H} \) . Let the linear functional \( x \rightarrow \langle x, y\rangle \) on \( \mathcal{H} \) (for fixed \( y \) ) be denoted by \( {\Lambda }_{y} \) . Thus we can rewrite Equation (2.3) as \[ {\Lambda }_{y}\left( {Ax}\right) = {\Lambda }_{{A}^{ * }y}\left( x\right) \] Even more suggestively, let us think of the map that associates the linear functional \( {\Lambda }_{y} \) to the linear functional \( {\Lambda }_{{A}^{ * }y} \) . We have \[ {\Lambda }_{{A}^{ * }y}\left( x\right) = \left\langle {x,{A}^{ * }y}\right\rangle = \langle {Ax}, y\rangle = {\Lambda }_{y}\left( {Ax}\right) = \left( {{\Lambda }_{y} \circ A}\right) \left( x\right) \] for \( x \) and \( y \) in \( \mathcal{H} \) . This suggests we try defining \( {A}^{ * } \) for \( A \in \mathcal{B}\left( X\right) \), where \( X \) is now a Banach space, as the map on the dual space \( {X}^{ * } \) that sends \( \Lambda \in {X}^{ * } \) to \( \Lambda \circ A \) : \[ {A}^{ * }\left( \Lambda \right) = \Lambda \circ A. \] It is easy to see that \( {A}^{ * }\left( \Lambda \right) \) is in \( {X}^{ * } \) and that the map \( {A}^{ * } : {X}^{ * } \rightarrow {X}^{ * } \) is linear. 
If we adopt this as the definition of \( {A}^{ * } \) when \( A \) is a bounded linear operator on the Banach space \( X \), how does it compare with our earlier definition in the case that \( X \) is actually a Hilbert space \( \mathcal{H} \) ? To answer this, let \( C : \mathcal{H} \rightarrow {\mathcal{H}}^{ * } \) be the surjective conjugate linear isometry sending \( y \) to \( {\Lambda }_{y} \) ; "conjugate linear" referring to the fact that \( C\left( {\alpha y}\right) = \bar{\alpha }C\left( y\right) \) for scalars \( \alpha \) . We claim that \( C{A}_{HS}^{ * } = {A}_{BS}^{ * }C \), as schematically illustrated below. Here the subscripts \( {HS} \) and \( {BS} \) indicate we are using the adjoint definition in, respectively, the Hilbert space setting or Banach space setting, so that \( {A}_{HS}^{ * } \) acts on \( \mathcal{H} \) while \( {A}_{BS}^{ * } \) acts on \( {\mathcal{H}}^{ * } \) . ![93fc48f6-6c56-41d5-a901-2e91c907d5c6_51_0.jpg](images/93fc48f6-6c56-41d5-a901-2e91c907d5c6_51_0.jpg) This is verified by observing that \( C{A}_{HS}^{ * }x \) is the bounded linear functional on \( \mathcal{H} \) given as inner product with \( {A}_{HS}^{ * }x \), that is, the bounded linear functional taking \( y \in \) \( \mathcal{H} \) to \( \left\langle {y,{A}_{HS}^{ * }x}\right\rangle = \langle {Ay}, x\rangle \) . On the other hand, \( {A}_{BS}^{ * }{Cx} \) is the bounded linear functional on \( \mathcal{H} \) taking \( y \) to \[ \left\lbrack {{A}_{BS}^{ * }\left( {Cx}\right) }\right\rbrack \left( y\right) = {Cx}\left( {Ay}\right) = \langle {Ay}, x\rangle . \] The conjugate linearity of \( C \) means that while \( {\left( \alpha {A}_{HS}\right) }^{ * } = \bar{\alpha }{A}_{HS}^{ * },{\left( \alpha A\right) }_{BS}^{ * } = \alpha {A}_{BS}^{ * } \) . It is pleasant to import the Hilbert space notation \( \langle \cdot , \cdot \rangle \) into the Banach space setting. 
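In finite dimensions the recipe \( {A}^{ * }\left( \Lambda \right) = \Lambda \circ A \) is just composition with \( A \), and it reproduces the familiar transpose of a matrix. A minimal sketch (all names and the sample matrix are mine):

```python
def mat_apply(A, x):
    # A given as a list of rows, acting on the column vector x
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def adjoint(A):
    # the Banach-space recipe: A*(L) = L ∘ A for a linear functional L
    return lambda L: (lambda x: L(mat_apply(A, x)))

A = [[1.0, 2.0], [0.0, 3.0]]
L = lambda x: 4.0 * x[0] - x[1]   # the functional represented by the row (4, -1)

# L ∘ A is represented by the row vector (4, -1) A, i.e. A^T applied to (4, -1):
assert adjoint(A)(L)([1.0, 0.0]) == 4.0
assert adjoint(A)(L)([0.0, 1.0]) == 5.0
```

Note there is no conjugation here, consistent with \( {\left( \alpha A\right) }_{BS}^{ * } = \alpha {A}_{BS}^{ * } \): the conjugate linearity in the Hilbert-space adjoint enters only through the identification \( C \) of \( \mathcal{H} \) with \( {\mathcal{H}}^{ * } \).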
For \( X \) a Banach space, denote a generic element of \( {X}^{ * } \) by \( {x}^{ * } \) and write \[ {x}^{ * }\left( x\right) \equiv \left\langle {x,{x}^{ * }}\right\rangle \] Notice, then, that if \( A \) is in \( \mathcal{B}\left( X\right) \) and \( {A}^{ * } \) is in \( \mathcal{B}\left( {X}^{ * }\right) \), we have \( \left\langle {{Ax},{x}^{ * }}\right\rangle = \left\langle {x,{A}^{ * }{x}^{ * }}\right\rangle \) since both are equal to \( {x}^{ * }\left( {Ax}\right) \) . Example 2.26. Consider the Banach space \( X = C\left\lbrack {0,1}\right\rbrack \) in the supremum norm. As in Exercise 2.3, let \( \varphi \) be a continuous map of \( \left\lbrack {0,1}\right\rbrack \) into \( \left\lbrack {0,1}\right\rbrack \) and define the bounded linear operator \( {C}_{\varphi } \) on \( X \) by \( {C}_{\varphi }\left( f\right) = f \circ \varphi \) . Fix a point \( p \in \left\lbrack {0,1}\right\rbrack \) and let \( {\Lambda }_{p} \) be the bounded linear functional of evaluation at \( p : {\Lambda }_{p}\left( f\right) = f\left( p\right) \) for all \( f \in X \) . We seek to identify \( {C}_{\varphi }^{ * }\left( {\Lambda }_{p}\right) \) as an element of \( {X}^{ * } \) . We have \[ {C}_{\varphi }^{ * }\left( {\Lambda }_{p}\right) \left( f\right) = {\Lambda }_{p}\left( {{C}_{\varphi }\left( f\right) }\right) = {\Lambda }_{p}\left( {f \circ \varphi }\right) = f\left( {\varphi \left( p\right) }\right) = {\Lambda }_{\varphi \left( p\right) }\left( f\right) \] for every \( f \) in \( X \) . Thus \( {C}_{\varphi }^{ * }\left( {\Lambda }_{p}\right) = {\Lambda }_{\varphi \left( p\right) } \) . In our "inner product" notation the relevant calculation looks like \[ \left\langle {f,{C}_{\varphi }^{ * }\left( {\Lambda }_{p}\right) }\right\rangle = \left\langle {f \circ \varphi ,{\Lambda }_{p}}\right\rangle = f\left( {\varphi \left( p\right) }\right) = \left\langle {f,{\Lambda }_{\varphi \left( p\right) }}\right\rangle . 
\] The concept of the adjoint operator had its beginnings in the work of Riesz in 1909 for operators on \( {L}^{p}\left\lbrack {a, b}\right\rbrack \) . Riesz used the terminology "Transponierte" or "transposed operator." By 1930 the idea had been extended by Banach and Juliusz Schauder to the general setting of a bounded linear operator between Banach spaces, with the terms "opération adjointe" and "opération conjuguée" being introduced. ## 2.4 Exercises 2.1. Let \( X \) and \( Y \) be normed linear spaces, and let \( \mathcal{B}\left( {X, Y}\right) \) denote the collection of all bounded linear operators from \( X \) into \( Y \) endowed with the operator norm. Show that \( \mathcal{B}\left( {X, Y}\right) \) is a normed linear space, and \( \mathcal{B}\left( {X, Y}\right) \) is a Banach space whenever \( Y \) is a Banach space. The vector operations in \( \mathcal{B}\left( {X, Y}\right) \) are to be defined pointwise: \( \left( {A + B}\right) \left( x\right) = {Ax} + {Bx} \), and \( \left( {\alpha A}\right) \left( x\right) = \alpha \left( {Ax}\right) \) . 2.2. Suppose \( M \) is a dense subspace in a Banach space \( X \) (meaning that the closure of \( M \) is all of \( X \) ) and suppose that \( T : M \rightarrow Y \) is linear, where \( Y \) is a Banach space, with \( \parallel {Tm}{\parallel }_{Y} \leq K\parallel m{\parallel }_{X} \) for some \( K < \infty \) and all \( m \in M \) . Show that \( T \) extends, in a unique way, to a bounded linear operator from \( X \) into \( Y \) . 2.3. Let \( X = \left\lbrack {0,1}\right\rbrack \) or more generally any compact Hausdorff space, and let \( \mathcal{Y} = \) \( C\left( X\right) \), the Banach space of continuous, complex-valued functions on \( X \), in the supremum norm. For any continuous function \( \varphi \) mapping \( X \) into itself, define the composition operator \( {C}_{\varphi } \) on \( \mathcal{Y} \) by \( {C}_{\varphi }\left( f\right) = f \circ \varphi \) . 
Prove that \( {C}_{\varphi } \) is a bounded linear operator on \( \mathcal{Y} \) . For which \( \varphi \) is \( {C}_{\varphi } \) invertible? 2.4. Compute the norm of the multiplication operator \( {M}_{z} \) (equivalently the Toeplitz operator \( {T}_{z} \) ) on \( {L}_{a}^{2}\left( \mathbb{D}\right) \) . 2.5. Show that the Toeplitz operator with symbol \( \varphi \) acting on the Bergman space \( {L}_{a}^{2}\left( \mathbb{D}\right) \) has adjo
1088_(GTM245)Complex Analysis
Definition 3.51
Definition 3.51. Let \( c \in \mathbb{C} \) . Assume that \[ f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{\left( z - c\right) }^{n}\text{ for all }\left| {z - c}\right| < \rho \text{ and for some }\rho > 0. \] If \( f \) is not identically zero, it follows from Theorem 3.45 that there exists an \( N \in \) \( {\mathbb{Z}}_{ \geq 0} \) such that \[ {a}_{N} \neq 0\text{and}{a}_{n} = 0\text{for all}n\text{such that}0 \leq n < N\text{.} \] Thus \[ f\left( z\right) = {\left( z - c\right) }^{N}\mathop{\sum }\limits_{{p = 0}}^{\infty }{a}_{N + p}{\left( z - c\right) }^{p} = {\left( z - c\right) }^{N}g\left( z\right) , \] with \( g \) having a power series expansion at \( c \) and \( g\left( c\right) \neq 0 \) . We define \[ N = {v}_{c}\left( f\right) = \text{ order }\left( \text{ of the zero }\right) \text{ of }f\text{ at }c. \] Note that \( N \geq 0 \), and \( N = 0 \) if and only if \( f\left( c\right) \neq 0 \) . If \( N = 1 \), then we say that \( f \) has a simple zero at \( c \) . Definition 3.52. (a) Let \( f \) be defined in a deleted neighborhood of \( c \in \mathbb{C} \) (see the Standard Terminology summary). We say that \[ \mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \infty \] if for all \( M > 0 \), there exists a \( \delta > 0 \) such that \[ 0 < \left| {z - c}\right| < \delta \Rightarrow \left| {f\left( z\right) }\right| > M. \] (b) Let \( \alpha \in \widehat{\mathbb{C}} \), and let \( f \) be defined in \( \left| z\right| > M \) for some \( M > 0 \) (equivalently, we say that \( f \) is defined in a deleted neighborhood of \( \infty \) in \( \widehat{\mathbb{C}} \) ). We say \[ \mathop{\lim }\limits_{{z \rightarrow \infty }}f\left( z\right) = \alpha \] provided \[ \mathop{\lim }\limits_{{z \rightarrow 0}}f\left( \frac{1}{z}\right) = \alpha \] (c) The above defines the concept of continuous maps between sets in the Riemann sphere \( \widehat{\mathbb{C}} \) . 
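The order \( {v}_{c}\left( f\right) \) of Definition 3.51 is simply the index of the first nonzero coefficient in the expansion at \( c \). A sketch on a finite coefficient list (the function name is mine):

```python
def order_of_zero(coeffs):
    """v_c(f): index of the first nonzero coefficient a_n in
    f(z) = sum a_n (z - c)^n; None if all listed coefficients vanish."""
    for n, a in enumerate(coeffs):
        if a != 0:
            return n
    return None

# f(z) = z^2 - z^3 has a zero of order 2 at c = 0
assert order_of_zero([0, 0, 1, -1]) == 2
# N = 0 exactly when f(c) != 0
assert order_of_zero([5, 1]) == 0
# N = 1 is a simple zero
assert order_of_zero([0, 7]) == 1
```

Factoring out \( {\left( z - c\right) }^{N} \) then corresponds to shifting the list left by \( N \) places, which leaves a series whose constant term \( g\left( c\right) = {a}_{N} \) is nonzero.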
(d) A function \( f \) defined in a neighborhood of \( \infty \) is holomorphic (has a power series expansion) at \( \infty \) if and only if \( g\left( z\right) = f\left( \frac{1}{z}\right) \) is holomorphic (has a power series expansion) at \( z = 0 \), where we define \( g\left( 0\right) = f\left( \infty \right) \) . Definition 3.53. Let \( U \subset \mathbb{C} \) be a neighborhood of a point \( c \) . A function \( f \) that is holomorphic in \( {U}^{\prime } = U - \{ c\} \), a deleted neighborhood of the point \( c \), has a removable singularity at \( c \) if there is a holomorphic function in \( U \) that agrees with \( f \) on \( {U}^{\prime } \) . Otherwise \( c \) is called a singularity of \( f \) . Note that all singularities are isolated points. Let us consider two functions \( f \) and \( g \) having power series expansions at each point of a domain \( D \) in \( \widehat{\mathbb{C}} \) . Assume that neither function vanishes identically on \( D \) and fix \( c \in D \cap \mathbb{C} \) . Let \[ F\left( z\right) = \frac{f\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( f\right) }}\text{ and }G\left( z\right) = \frac{g\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( g\right) }} \] for \( z \in D \) . Then the functions \( F \) and \( G \) have removable singularities at \( c \), do not vanish there, and have power series expansions at each point of \( D \) . Furthermore, we define a new function \( h \) on \( D \) by \[ h\left( z\right) = \frac{f}{g}\left( z\right) = \frac{{\left( z - c\right) }^{{v}_{c}\left( f\right) }F\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( g\right) }G\left( z\right) }\text{ for all }z \in D \] and fixed \( c \in D \cap \mathbb{C} \) . There are exactly three distinct possibilities for the behavior of the function \( h \) at \( z = c \), which lead to the following definitions. Definition 3.54. 
(I) If \( {v}_{c}\left( g\right) > {v}_{c}\left( f\right) \), then \( h\left( c\right) = \infty \) (this defines \( h\left( c\right) \), and the resulting function \( h \) is continuous at \( c \) ). We say that \( h \) has a pole of order \( {v}_{c}\left( g\right) - {v}_{c}\left( f\right) \) at \( c \) . If \( {v}_{c}\left( g\right) - {v}_{c}\left( f\right) = 1 \), we say that the pole is simple. (II) If \( {v}_{c}\left( g\right) = {v}_{c}\left( f\right) \), then the singularity of \( h \) at \( c \) is removable, and, by definition, \( h\left( c\right) = \frac{F\left( c\right) }{G\left( c\right) } \neq 0 \) . (III) If \( {v}_{c}\left( g\right) < {v}_{c}\left( f\right) \), then the singularity is again removable and in this case \( h\left( c\right) = 0 \) . In all cases we set \( {v}_{c}\left( h\right) = {v}_{c}\left( f\right) - {v}_{c}\left( g\right) \) and call it the order or multiplicity of \( h \) at \( c \) . In cases (II) and (III) of the definition, \( h \) has a power series expansion at \( c \) as a consequence of the following result. Theorem 3.55. If a function \( f \) has a power series expansion at \( c \) and \( f\left( c\right) \neq 0 \) , then \( \frac{1}{f} \) also has a power series expansion at \( c \) . Proof. Without loss of generality we assume \( c = 0 \) and \( f\left( 0\right) = 1 \) . Thus \[ f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{z}^{n},{a}_{0} = 1, \] and the radius of convergence of the series is nonzero. We want to find the reciprocal power series, that is, a series \( g \) with positive radius of convergence, that we write as \[ g\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{b}_{n}{z}^{n} \] and satisfies \[ \left( {\sum {a}_{n}{z}^{n}}\right) \left( {\sum {b}_{n}{z}^{n}}\right) = 1 \] The LHS and the RHS are both power series, where the RHS is a power series expansion whose coefficients are all equal to zero except for the first one. 
Equating the first two coefficients on both sides, we obtain \[ {a}_{0}{b}_{0} = 1\text{, from where}{b}_{0} = 1\text{, and} \] \[ {a}_{1}{b}_{0} + {a}_{0}{b}_{1} = 0,\;\text{ from where }{b}_{1} = - {a}_{1}{b}_{0} = - {a}_{1}. \] Similarly, using the \( n \) -th coefficient of the power series when expanded for the LHS, for \( n \geq 1 \), we obtain \[ {a}_{n}{b}_{0} + {a}_{n - 1}{b}_{1} + \cdots + {a}_{0}{b}_{n} = 0. \] Thus by induction we define \[ {b}_{n} = - \mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{b}_{j}{a}_{n - j}, n \geq 1. \] Since \( \rho > 0 \), we have \( \frac{1}{\rho } < + \infty \) . Since \( \mathop{\limsup }\limits_{n}{\left| {a}_{n}\right| }^{\frac{1}{n}} = \frac{1}{\rho } \), there exists a positive number \( k \) such that \( \left| {a}_{n}\right| \leq {k}^{n} \) . We show by the use of induction, once again, that \( \left| {b}_{n}\right| \leq {2}^{n - 1}{k}^{n} \) for all \( n \geq 1 \) . For \( n = 1 \), we have \( {b}_{1} = - {a}_{1} \) and hence \( \left| {b}_{1}\right| = \left| {a}_{1}\right| \leq k \) . Suppose the inequality holds for \( 1 \leq j \leq n \) for some \( n \geq 1 \) . Then \[ \left| {b}_{n + 1}\right| \leq \mathop{\sum }\limits_{{j = 0}}^{n}\left| {b}_{j}\right| \left| {a}_{n + 1 - j}\right| = \left| {a}_{n + 1}\right| + \mathop{\sum }\limits_{{j = 1}}^{n}\left| {b}_{j}\right| \left| {a}_{n + 1 - j}\right| \] \[ \leq {k}^{n + 1} + \mathop{\sum }\limits_{{j = 1}}^{n}{2}^{j - 1}{k}^{j}{k}^{n + 1 - j} \] \[ = {k}^{n + 1}\left( {1 + {2}^{n} - 1}\right) = {2}^{n}{k}^{n + 1} = {2}^{\left( {n + 1}\right) - 1}{k}^{n + 1}, \] which completes the induction. Thus there is a reciprocal series, with radius of convergence \( \sigma \) satisfying \[ \frac{1}{\sigma } = \mathop{\limsup }\limits_{n}{\left| {b}_{n}\right| }^{\frac{1}{n}} \leq \mathop{\lim }\limits_{n}\left( {2}^{1 - \frac{1}{n}}\right) k = {2k} \] and therefore nonzero. Corollary 3.56. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \) and \( f \) a function defined on \( D \) .
If \( f \) has a power series expansion at each point of \( D \) and \( f\left( z\right) \neq 0 \) for all \( z \in D \), then \( \frac{1}{f} \) has a power series expansion at each point of \( D \) . Definition 3.57. For each domain \( D \subseteq \widehat{\mathbb{C}} \), we define \( \mathbf{H}\left( D\right) = \{ f : D \rightarrow \mathbb{C};f \) has a power series expansion at each point of \( D\} . \) We will see in Chap. 5 that \( \mathbf{H}\left( D\right) \) is the set of holomorphic functions on \( D \) . Corollary 3.58. Assume that \( D \) is a domain in \( \widehat{\mathbb{C}} \) . The set \( \mathbf{H}\left( D\right) \) is an integral domain and an algebra over \( \mathbb{C} \) . Its units are the functions that never vanish on \( D \) . Definition 3.59. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \) . A function \( f : D \rightarrow \widehat{\mathbb{C}} \) is meromorphic on \( D \) if it is locally \( {}^{9} \) the ratio of two functions having power series expansions (with the denominator not identically zero). The set of meromorphic functions on \( D \) is denoted by \( \mathbf{M}\left( D\right) \) . Recall that, by our convention, \( \mathbf{M}{\left( D\right) }_{ \neq 0} \) is the set of meromorphic functions with the constant function 0 omitted, where \( 0\left( z\right) = 0 \) for all \( z \) in \( D \) . \( {}^{9} \) A property \( P \) is satisfied locally on an open set \( D \) if for each point \( c \in D \), there exists a neighborhood \( U \subset D \) of \( c \) such that \( P \) is satisfied in \( U \) . Corollary 3.60. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \), let \( c \) be any point in \( D \cap \mathbb{C} \), and let \( f \in \mathbf{M}{\left( D\right) }_{ \neq 0} \) . There exist a connected neighborhood \( U \) of \( c \) in \( D \), an integer \( n = {v}_{c}f \), and a unit \( g \in \mathbf{H}\left( U\right) \) such that \[ f\left( z\right) = {\left( z - c\right) }^{n}g\left( z\right) \text{ for all }z \in U. 
\] Remark 3.61. If \( \infty \in D \), an appropriate version of the above Corollary exists for \( c = \infty \); see Exercise 3.7. Corollary 3.62. If \( D \) is a domain in \( \widehat{\mathbb{C}} \), then the set \( \mathbf{M}\left( D
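The coefficient recursion in the proof of Theorem 3.55 is effectively an algorithm, and the formal identity it establishes can be checked numerically. A minimal sketch in Python; the function name and the sample coefficients are illustrative choices, not from the text:

```python
# Reciprocal power series: given f(z) = sum a_n z^n with a_0 = 1, the proof of
# Theorem 3.55 sets b_0 = 1 and b_n = -sum_{j=0}^{n-1} b_j a_{n-j} for n >= 1,
# so that (sum a_n z^n)(sum b_n z^n) = 1 as formal power series.

def reciprocal_coeffs(a, N):
    """First N coefficients b_0, ..., b_{N-1} of 1/f, assuming a[0] == 1."""
    b = [1.0]
    for n in range(1, N):
        b.append(-sum(b[j] * a[n - j] for j in range(n)))
    return b

# Illustrative check with f(z) = 1/(1 - z), i.e. a_n = 1 for all n:
# the reciprocal is 1 - z, so b = [1, -1, 0, 0, ...].
a = [1.0] * 8
b = reciprocal_coeffs(a, 8)

# The Cauchy product of the two series should have coefficients [1, 0, 0, ...]:
conv = [sum(a[j] * b[n - j] for j in range(n + 1)) for n in range(8)]
```

The bound \( \left| {b}_{n}\right| \leq {2}^{n - 1}{k}^{n} \) from the proof then guarantees that the computed series converges on a disk of radius at least \( 1/\left( {2k}\right) \).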
1056_(GTM216)Matrices
Definition 9.6
Definition 9.6 The elementary divisors of the matrix \( M \in {\mathbf{M}}_{n}\left( k\right) \) are the polynomials \( {q}_{k}^{\alpha \left( {j, k}\right) } \) for which the exponent \( \alpha \left( {j, k}\right) \) is nonzero. The multiplicity of an elementary divisor \( {q}_{k}^{m} \) is the number of solutions \( j \) of the equation \( \alpha \left( {j, k}\right) = m \) . The list of elementary divisors is the sequence of these polynomials, repeated with their multiplicities. Let us begin with the case of the companion matrix \( N \) of some polynomial \( p \) . Its similarity invariants are \( \left( {1,\ldots ,1, p}\right) \) (see above). Let \( {r}_{1},\ldots ,{r}_{t} \) be its elementary divisors (we observe that each has multiplicity one). We then have \( p = {r}_{1}\cdots {r}_{t} \), and the \( {r}_{l}\mathrm{\;s} \) are pairwise coprime. With each \( {r}_{l} \) we associate its companion matrix \( {N}_{l} \), and we form a block-diagonal matrix \( {N}^{\prime } \mathrel{\text{:=}} \operatorname{diag}\left( {{N}_{1},\ldots ,{N}_{t}}\right) \) . Each \( {N}_{l} - X{I}_{l} \) is equivalent to a diagonal matrix \[ \left( \begin{array}{lll} 1 & & \\ & \ddots & \\ & & 1 \\ & & {r}_{l} \end{array}\right) \] in \( {\mathbf{M}}_{n\left( l\right) }\left( {k\left\lbrack X\right\rbrack }\right) \), therefore the whole matrix \( {N}^{\prime } - X{I}_{n} \) is equivalent to \[ Q \mathrel{\text{:=}} \left( \begin{array}{lllll} 1 & & & & \\ & \ddots & & O & \\ & 1 & & & \\ & & {r}_{1} & & \\ & O & & \ddots & \\ & & & & {r}_{t} \end{array}\right) . \] Let us now compute the similarity invariants of \( {N}^{\prime } \), that is, the invariant factors of \( Q \) . It is enough to compute the greatest common divisor \( {D}_{n - 1} \) of the minors of size \( n - 1 \) . 
Taking into account the principal minors of \( Q \), we see that \( {D}_{n - 1} \) must divide every product of the form \[ \mathop{\prod }\limits_{{l \neq k}}{r}_{l},\;1 \leq k \leq t \] Because the \( {r}_{l}\mathrm{\;s} \) are pairwise coprime, this implies that \( {D}_{n - 1} = 1 \) . This means that the list of similarity invariants of \( {N}^{\prime } \) has the form \( \left( {1,\ldots ,1, \cdot }\right) \), where the last polynomial must be the characteristic polynomial of \( {N}^{\prime } \) . This polynomial is the product of the characteristic polynomials of the \( {N}_{l}\mathrm{\;s} \) . These being equal to the \( {r}_{l}\mathrm{\;s} \), the characteristic polynomial of \( {N}^{\prime } \) is \( p \) . Finally, \( N \) and \( {N}^{\prime } \) have the same similarity invariants and are therefore similar. Now let \( M \) be a general matrix in \( {\mathbf{M}}_{n}\left( k\right) \) . We apply the former reduction to every diagonal block \( {M}_{j} \) of its Frobenius canonical form. Each \( {M}_{j} \) is similar to a block-diagonal matrix whose diagonal blocks are companion matrices corresponding to the elementary divisors of \( M \) entering into the factorization of the \( j \) th invariant polynomial of \( M \) . We have thus proved the following statement. Theorem 9.10 Let \( {r}_{1},\ldots ,{r}_{s} \) be the elementary divisors of \( M \in {\mathbf{M}}_{n}\left( k\right) \) . Then \( M \) is similar to a block-diagonal matrix \( {M}^{\prime } \) whose diagonal blocks are companion matrices of the \( {r}_{l}s \) . The matrix \( {M}^{\prime } \) is called the second canonical form of \( M \) . ## Remark The exact computation of the second canonical form of a given matrix is impossible in general, in contrast to the case of the first form. 
Indeed, if there were an algorithmic construction, it would provide an algorithm for factorizing polynomials into irreducible factors via the formation of the companion matrix, a task known to be impossible if \( k = \mathbb{R} \) or \( \mathbb{C} \) . Recall that one of the most important results in Galois theory, known as Abel's theorem, states the impossibility of solving a general polynomial equation of degree at least five with complex coefficients, using only the basic operations and the extraction of roots of any order. ## 9.3.4 Jordan Form of a Matrix If the characteristic polynomial splits over \( k \), which holds for instance when the field \( k \) is algebraically closed, the elementary divisors have the form \( {\left( X - a\right) }^{r} \) for \( a \in k \) and \( r \geq 1 \) . In that case, the second canonical form can be greatly simplified by replacing the companion matrix of the monomial \( {\left( X - a\right) }^{r} \) by its Jordan block \[ J\left( {a;r}\right) \mathrel{\text{:=}} \left( \begin{matrix} a & 1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & \ddots & 1 \\ 0 & \cdots & \cdots & 0 & a \end{matrix}\right) . \] The characteristic polynomial of \( J\left( {a;r}\right) \) (an \( r \times r \) triangular matrix) is \( {\left( X - a\right) }^{r} \) , whereas the matrix \( X{I}_{r} - J\left( {a;r}\right) \) possesses an invertible minor of order \( r - 1 \), namely Exercises \[ \left( \begin{matrix} - 1 & 0 & \cdots & 0 \\ X - a & \ddots & \ddots & \vdots \\ & \ddots & \ddots & 0 \\ & & X - a & - 1 \end{matrix}\right) \] which we obtain by deleting the first column and the last row. Again, this shows that \( {D}_{n - 1}\left( {X{I}_{r} - J}\right) = 1 \), so that the invariant factors \( {d}_{1},\ldots ,{d}_{r - 1} \) are equal to 1 . Hence \( {d}_{r} = {D}_{r}\left( {X{I}_{r} - J}\right) = \det \left( {X{I}_{r} - J}\right) = {\left( X - a\right) }^{r} \) . 
Its invariant factors are thus \( 1,\ldots ,1,{\left( X - a\right) }^{r} \), as required, hence the following theorem. Theorem 9.11 When an elementary divisor of \( M \) is \( {\left( X - a\right) }^{r} \), one may, in the second canonical form of \( M \), replace its companion matrix by the Jordan block \( J\left( {a;r}\right) \) . Corollary 9.1 If the characteristic polynomial of \( M \) splits over \( k \), then \( M \) is similar to a block-diagonal matrix whose \( j \) th diagonal block is a Jordan block \( J\left( {{a}_{j};{r}_{j}}\right) \) . This form is unique, up to the order of blocks. Corollary 9.2 If \( k \) is algebraically closed (e.g., if \( k = \mathbb{C} \) ), then every square matrix M is similar to a block-diagonal matrix whose jth diagonal block is a Jordan block \( J\left( {{a}_{j};{r}_{j}}\right) \) . This form is unique, up to the order of blocks. ## Exercises 1. Show that every principal ideal domain is a unique factorization domain. 2. Verify that the characteristic polynomial of the companion matrix of a polynomial \( p \) is equal to \( p \) . 3. Let \( k \) be a field and \( M \in {\mathbf{M}}_{n}\left( k\right) \) . Show that \( M,{M}^{T} \) have the same rank and that in general, the rank of \( {M}^{T}M \) is less than or equal to that of \( M \) . Show that the equality of these ranks always holds if \( k = \mathbb{R} \), but that strict inequality is possible, for example with \( k = \mathbb{C} \) . 4. 
Compute the elementary divisors of the matrices \[ \left( \begin{matrix} {22} & {23} & {10} & - {98} \\ {12} & {18} & {16} & - {38} \\ - {15} & - {19} & - {13} & {58} \\ 6 & 7 & 4 & - {25} \end{matrix}\right) ,\;\left( \begin{matrix} 0 & - {21} & - {56} & - {96} \\ {18} & {36} & {52} & - 8 \\ - {12} & - {17} & - {16} & {38} \\ 3 & 2 & - 2 & - {20} \end{matrix}\right) \] and \[ \left( \begin{matrix} {44} & {89} & {120} & - {32} \\ 0 & - {12} & - {32} & - {56} \\ - {14} & - {20} & - {16} & {49} \\ 8 & {14} & {16} & - {16} \end{matrix}\right) \] in \( {\mathbf{M}}_{n}\left( \mathbb{C}\right) \) . What are their Jordan reductions? 5. (Lagrange's theorem.) Let \( K \) be a field and \( A \in {\mathbf{M}}_{n}\left( K\right) \) . Let \( X, Y \in {K}^{n} \) be vectors such that \( {X}^{T}{AY} \neq 0 \) . We normalize by \( {X}^{T}{AY} = 1 \) and define \[ B \mathrel{\text{:=}} A - \left( {AY}\right) \left( {{X}^{T}A}\right) . \] Show that in the factorization \[ {PAQ} = \left( \begin{matrix} {I}_{r} & 0 \\ 0 & {0}_{n - r} \end{matrix}\right) ,\;P, Q \in {\mathbf{{GL}}}_{n}\left( K\right) , \] one can choose \( Y \) as the first column of \( Q \) and \( {X}^{T} \) as the first row of \( P \) . Deduce that \( \operatorname{rk}B = \operatorname{rk}A - 1 \) . More generally, show that if \( X, Y \in {\mathbf{M}}_{n \times m}\left( K\right) \) are such that \( {X}^{T}{AY} \in {\mathbf{{GL}}}_{m}\left( K\right) \) , then the rank of \[ B \mathrel{\text{:=}} A - \left( {AY}\right) {\left( {X}^{T}AY\right) }^{-1}\left( {{X}^{T}A}\right) , \] equals \( \operatorname{rk}A - m \) . If \( A \in {\operatorname{Sym}}_{n}\left( \mathbb{R}\right) \) and if \( A \) is positive-semidefinite, and if \( X = Y \), show that \( B \) is also positive-semidefinite. 6. For \( A \in {\mathbf{M}}_{n}\left( \mathbb{C}\right) \), consider the linear differential equation in \( {\mathbb{C}}^{n} \) \[ \frac{dx}{dt} = {Ax} \] (9.1) a. 
Let \( P \in {\mathbf{{GL}}}_{n}\left( \mathbb{C}\right) \) and let \( t \mapsto x\left( t\right) \) be a solution of (9.1). What is the differential equation satisfied by \( t \mapsto {Px}\left( t\right) \) ? b. Let \( {\left( X - a\right) }^{m} \) be an elementary divisor of \( A \) . Show that for every \( k = \) \( 0,\ldots, m - 1 \) ,(9.1) possesses solutions of the form \( {\mathrm{e}}^{at}{Q}_{k}\left( t\right) \), where \( {Q}_{k} \) is a complex-valued polynomial map of degree \( k \) . 7. Consider the following differential equation of order \( n \) in \( \mathbb{C} \) : \[ {x}^{\left( n\right) }\left( t\right) + {a}_{1}{x}^{\left( n - 1\right) }\left( t\right) + \cdots + {a}_{n}x\left( t\right) = 0. \] (9.2) a. Define \( p\left( X\right) = {X}^{n} + {a}_{1}{X}^{n - 1} + \cdots + {a}_{n} \) and let \( M \) be the companion matrix of \( p \) . Let \[ p\left( X\right) = \mathop{\prod }\limits_{{a \in A}}{\left( X - a\right) }^{{n}_{a}} \] be the factorization of \( p \)
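The Jordan reduction of Corollary 9.2 (and the kind of computation asked for in Exercise 4) can be explored with a computer algebra system. A sketch using sympy, which is an assumed dependency; the sample matrix below is a standard textbook illustration, not one of the exercise matrices:

```python
# Corollary 9.2 in practice: when the characteristic polynomial splits (here
# already over Q), the matrix is similar to a block-diagonal matrix whose
# blocks are Jordan blocks J(a; r).
from sympy import Matrix

M = Matrix([[ 5,  4,  2,  1],
            [ 0,  1, -1, -1],
            [-1, -1,  3,  0],
            [ 1,  1, -1,  2]])   # characteristic polynomial (X-1)(X-2)(X-4)^2

# jordan_form returns P and J with M = P * J * P**(-1); since ker(M - 4I) is
# one-dimensional, J consists of the blocks J(1;1), J(2;1), and J(4;2), the
# last one carrying the single off-diagonal entry 1.
P, J = M.jordan_form()
```

Note that, as the Remark following Theorem 9.10 warns, such exact computations succeed only because the eigenvalues here are rational; in general the factorization step cannot be carried out algorithmically.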
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 5.1.10
Definition 5.1.10. Let \( V \) be an open subset of \( \Delta \left( A\right) \) and let \( f \in {A}^{ * } \) . Then \( f \) is said to vanish on \( V \) if \( f\left( x\right) = 0 \) for all \( x \in A \) for which supp \( \widehat{x} \) is compact and contained in \( V \) . Lemma 5.1.11. Let \( A \) be a semisimple and regular commutative Banach algebra. Given \( f \in {A}^{ * } \), there exists a largest open subset of \( \Delta \left( A\right) \) on which \( f \) vanishes. Proof. We first show that if \( f \) vanishes on finitely many open subsets \( {V}_{1},\ldots ,{V}_{n} \) of \( \Delta \left( A\right) \), then \( f \) vanishes on \( \mathop{\bigcup }\limits_{{j = 1}}^{n}{V}_{j} \) . To that end, let \( x \in A \) be such that supp \( \widehat{x} \) is compact and contained in \( \mathop{\bigcup }\limits_{{j = 1}}^{n}{V}_{j} \) . Since \( A \) is regular, by Corollary 4.2.12 there exist \( {u}_{1},\ldots ,{u}_{n} \in A \) so that \( \operatorname{supp}{\widehat{u}}_{j} \subseteq {V}_{j},1 \leq j \leq n \) , and \( \mathop{\sum }\limits_{{j = 1}}^{n}{\widehat{u}}_{j} = 1 \) on supp \( \widehat{x} \) . Because \( A \) is semisimple, it follows that \( x = \mathop{\sum }\limits_{{j = 1}}^{n}x{u}_{j} \), and since supp \( {\widehat{xu}}_{j} \subseteq {V}_{j} \) for \( 1 \leq j \leq n \), we conclude that \[ f\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}f\left( {x{u}_{j}}\right) = 0 \] because \( f \) vanishes on each \( {V}_{j} \) . Now, let \( \mathcal{V} \) be the collection of all open subsets of \( \Delta \left( A\right) \) on which \( f \) vanishes and let \( U = \bigcup \{ V : V \in \mathcal{V}\} \) . Then \( f \) vanishes on \( U \) .
Indeed, if \( x \in A \) is such that supp \( \widehat{x} \) is compact and contained in \( U \), then there exist \( {V}_{1},\ldots ,{V}_{n} \in \mathcal{V} \) with \( \operatorname{supp}\widehat{x} \subseteq \mathop{\bigcup }\limits_{{j = 1}}^{n}{V}_{j} \), and hence \( f\left( x\right) = 0 \) by the first part of the proof. Thus \( f \) vanishes on \( U \) and, by definition, \( U \) is the largest open subset of \( \Delta \left( A\right) \) on which \( f \) vanishes. Definition 5.1.12. Let \( f \in {A}^{ * } \) and let \( U \) be the largest open subset of \( \Delta \left( A\right) \) on which \( f \) vanishes (Lemma 5.1.11). The closed set \( \Delta \left( A\right) \smallsetminus U \) is called the support of \( f \) and denoted supp \( f \) . Now the characterisation of spectral sets in terms of \( {A}^{ * } \), announced above, is as follows. Proposition 5.1.13. Let \( E \) be a closed subset of \( \Delta \left( A\right) \) . Then \( E \) is a spectral set if and only if whenever \( f \in {A}^{ * } \) is such that \( \operatorname{supp}f \subseteq E \), then \( f\left( x\right) = 0 \) for all \( x \in k\left( E\right) \) . Proof. Suppose first that \( E \) is a set of synthesis and let \( f \in {A}^{ * } \) such that \( \operatorname{supp}f \subseteq E \) . Then \( f \) vanishes on \( \Delta \left( A\right) \smallsetminus E \) and hence \( f\left( x\right) = 0 \) for all \( x \in j\left( E\right) \) . Thus \( f\left( x\right) = 0 \) for all \( x \in \overline{j\left( E\right) } = k\left( E\right) \) . Conversely, if \( \overline{j\left( E\right) } \neq k\left( E\right) \), then by the Hahn-Banach theorem there exists \( f \in {A}^{ * } \) such that \( f\left( x\right) = 0 \) for all \( x \in \overline{j\left( E\right) } \), whereas \( f\left( y\right) \neq 0 \) for some \( y \in k\left( E\right) \) . Then \( f \) vanishes on \( \Delta \left( A\right) \smallsetminus E \) and hence \( \operatorname{supp}f \subseteq E \) . This finishes the proof. 
When proceeding to study the local membership principle, it is convenient to introduce the following notation. For \( a \in A \) and \( M \subseteq A \), let \( \Delta \left( {a, M}\right) \) denote the closed subset of \( \Delta \left( A\right) \) consisting of all \( \varphi \in \Delta \left( A\right) \) such that \( \widehat{a} \) does not belong locally to \( M \) at \( \varphi \) . Lemma 5.1.14. Let \( A \) be semisimple and regular and let \( I \) be a closed ideal of \( A \) . Let \( x \in A \) and let \( \varphi \) be an isolated point of \( \Delta \left( {x, I}\right) \) . In addition, suppose that \( \overline{j\left( \varphi \right) } \) possesses an approximate identity. Then \( \widehat{x} \) does not belong locally to \( \overline{j\left( \varphi \right) } \) at \( \varphi \) . Proof. Towards a contradiction, assume that \( \widehat{x} \) belongs locally to \( \overline{j\left( \varphi \right) } \) at \( \varphi \) , and let \( U \) be a neighbourhood of \( \varphi \) and \( y \in \overline{j\left( \varphi \right) } \) such that \( \widehat{x} = \widehat{y} \) on \( U \) . Then, because \( \varphi \) is an isolated point of \( \Delta \left( {x, I}\right) \), it is an isolated point of \( \Delta \left( {y, I}\right) \) and we can choose an open neighbourhood \( V \) of \( \varphi \) such that \( V \subseteq U \) and \( V \cap \Delta \left( {y, I}\right) = \{ \varphi \} \) . By Corollary 4.2.9, there exists \( z \in A \) such that \( \widehat{z} = 1 \) on some neighbourhood of \( \varphi \) and \( \operatorname{supp}\widehat{z} \subseteq V \) . Finally, since \( \overline{j\left( \varphi \right) } \) has an approximate identity, there exists a sequence \( {\left( {u}_{n}\right) }_{n} \) in \( j\left( \varphi \right) \) such that \( \begin{Vmatrix}{{u}_{n}y - y}\end{Vmatrix} \rightarrow 0 \) as \( n \rightarrow \infty \) . Now consider the elements \( {z}_{n} = {u}_{n}{zy}, n \in \mathbb{N} \), of \( A \) . 
Then \( \widehat{{z}_{n}} \) belongs locally to \( I \) at infinity and at every \( \psi \in \Delta \left( A\right) \smallsetminus V \) since supp \( \widehat{z} \subseteq V \) . Moreover, \( \widehat{{z}_{n}} \) belongs locally to \( I \) at \( \varphi \) since \( \widehat{{u}_{n}} \) vanishes in some neighbourhood of \( \varphi \), and also at every \( \psi \in V \smallsetminus \{ \varphi \} \) because \( V \cap \Delta \left( {y, I}\right) = \{ \varphi \} \) . It follows that \( {z}_{n} \in I \) and therefore \( {zy} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{z}_{n} \in I \) . Altogether, this means that \( \widehat{y} \) belongs locally to \( I \) at \( \varphi \) since \( \widehat{z} \) is identically one in some neighbourhood of \( \varphi \) . This contradicts \( \varphi \in \Delta \left( {y, I}\right) \) and finishes the proof. We conclude this section with the following proposition which is applied in the next section. Proposition 5.1.15. Let \( A \) be a regular and semisimple commutative Banach algebra. Let \( I \) be a closed ideal of \( A \) and let \( x \in A \) be such that \( h\left( I\right) \subseteq h\left( x\right) \) . Then (i) \( \Delta \left( {x, I}\right) \) is contained in \( h\left( I\right) \cap \partial \left( {h\left( x\right) }\right) = \partial \left( {h\left( I\right) }\right) \cap \partial \left( {h\left( x\right) }\right) \) . (ii) If singletons in \( \Delta \left( A\right) \) are Ditkin sets, then \( \Delta \left( {x, I}\right) \) has no isolated points. Proof. (i) By Lemma 5.1.3, \( x \) belongs locally to \( I \) at each point of \( h{\left( x\right) }^{0} \) and at each point of \( \Delta \left( A\right) \smallsetminus h\left( I\right) \) . Thus \[ \Delta \left( {x, I}\right) \subseteq h\left( I\right) \cap \left( {\Delta \left( A\right) \smallsetminus h{\left( x\right) }^{0}}\right) .
\] However, since \( h\left( I\right) \subseteq h\left( x\right) \) , \[ \partial \left( {h\left( I\right) }\right) \cap \partial \left( {h\left( x\right) }\right) = \left( {h\left( I\right) \cap \overline{\Delta \left( A\right) \smallsetminus h\left( I\right) }}\right) \cap \left( {h\left( x\right) \cap \overline{\Delta \left( A\right) \smallsetminus h\left( x\right) }}\right) \] \[ = h\left( I\right) \cap h\left( x\right) \cap \overline{\Delta \left( A\right) \smallsetminus h\left( x\right) } = h\left( I\right) \cap \partial \left( {h\left( x\right) }\right) \] \[ = h\left( I\right) \cap \left( {h\left( x\right) \cap \left( {\Delta \left( A\right) \smallsetminus h{\left( x\right) }^{0}}\right) }\right) \] \[ = h\left( I\right) \cap \left( {\Delta \left( A\right) \smallsetminus h{\left( x\right) }^{0}}\right) . \] So \( \Delta \left( {x, I}\right) \subseteq \partial \left( {h\left( I\right) }\right) \cap \partial \left( {h\left( x\right) }\right) \) . (ii) Assume that \( \Delta \left( {x, I}\right) \) has an isolated point \( \varphi \) . Because \( \{ \varphi \} \) is a Ditkin set, it follows from Lemma 5.1.14 that \( x \) does not belong locally to \( \overline{j\left( \varphi \right) } = k\left( \varphi \right) \) at \( \varphi \) . But \( \varphi \in h\left( I\right) \subseteq h\left( x\right) \), so that \( x \in k\left( \varphi \right) \) . This contradiction shows (ii). ## 5.2 Spectral sets and Ditkin sets Let \( A \) be a commutative Banach algebra. Our objective in this section is the naturally arising problem of which closed subsets of \( \Delta \left( A\right) \) are sets of synthesis or Ditkin sets and whether these classes of subsets of \( \Delta \left( A\right) \) are preserved under certain operations, such as forming finite unions. We begin with the latter question which allows a satisfactory answer for Ditkin sets, as the next two results show. Lemma 5.2.1. The union of two Ditkin sets is a Ditkin set. Proof. 
Let \( {E}_{1} \) and \( {E}_{2} \) be Ditkin sets in \( \Delta \left( A\right) \) and let \( E = {E}_{1} \cup {E}_{2} \) . We have to show that given \( x \in k\left( E\right) \) and \( \varepsilon > 0 \), there exists \( y \in j\left( E\right) \) such that \( \parallel x - {xy}\parallel \leq \varepsilon \) . Now, since \( x \in k\left( {E}_{1}\right) \) and \( {E}_{1} \) is a Ditkin set, there exists \( {y}_{1} \in j\left( {E}_{1}\right) \) such that \( \begin{Vmatrix}{x - x{y}_{1}}\end{Vmatrix} \leq \varepsilo
1056_(GTM216)Matrices
Definition 8.3
Definition 8.3 A matrix \( M \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) is said to be stochastic if \( M \geq 0 \) and if for every \( i = 1,\ldots, n \), one has \[ \mathop{\sum }\limits_{{j = 1}}^{n}{m}_{ij} = 1 \] One says that \( M \) is bistochastic (or doubly stochastic) if both \( M \) and \( {M}^{T} \) are stochastic. Denoting by \( \mathbf{e} \in {\mathbb{R}}^{n} \) the vector all of whose coordinates equal one, one sees that \( M \) is stochastic if and only if \( M \geq 0 \) and \( M\mathbf{e} = \mathbf{e} \) . Likewise, \( M \) is bistochastic if \( M \geq 0, M\mathbf{e} = \mathbf{e} \), and \( {\mathbf{e}}^{T}M = {\mathbf{e}}^{T} \) . If \( M \) is stochastic, one has \( \parallel {Mx}{\parallel }_{\infty } \leq \parallel x{\parallel }_{\infty } \) for every \( x \in {\mathbb{C}}^{n} \), and therefore \( \rho \left( M\right) \leq 1 \) . But because \( M\mathbf{e} = \mathbf{e} \), one has in fact \( \rho \left( M\right) = 1 \) . The stochastic matrices play an important role in the study of Markov chains. A special instance of a bistochastic matrix is a permutation matrix \( P\left( \sigma \right) \left( {\sigma \in {S}_{n}}\right) \) , whose entries are \[ {p}_{ij} = {\delta }_{\sigma \left( i\right) }^{j}. \] The following theorem enlightens the role of permutation matrices. Theorem 8.4 (Birkhoff) A matrix \( M \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) is bistochastic if and only if it is a center of mass (i.e., a barycenter with nonnegative weights) of permutation matrices. The fact that a center of mass of permutation matrices is a doubly stochastic matrix is obvious, because the set \( {\mathbf{{DS}}}_{n} \) of doubly stochastic matrices is convex. 
The interest of the theorem lies in the statement that if \( M \in {\mathbf{{DS}}}_{n} \), there exist permutation matrices \( {P}_{1},\ldots ,{P}_{r} \) and positive real numbers \( {\alpha }_{1},\ldots ,{\alpha }_{r} \) with \( {\alpha }_{1} + \cdots + {\alpha }_{r} = 1 \) such that \( M = {\alpha }_{1}{P}_{1} + \cdots + {\alpha }_{r}{P}_{r} \). Let us recall that a point \( x \) of a convex set \( C \) is an extremal point if \( x \in \left\lbrack {y, z}\right\rbrack \subset C \) implies \( x = y = z \) . The permutation matrices are extremal points of \( {\mathbf{M}}_{n}\left( \left\lbrack {0,1}\right\rbrack \right) \sim {\left\lbrack 0,1\right\rbrack }^{{n}^{2}} \), thus they are extremal points of the smaller convex set \( {\mathbf{{DS}}}_{n} \) . The Krein-Milman theorem (see [34], Theorem 3.23) says that a convex compact subset of \( {\mathbb{R}}^{n} \) is the convex hull, that is, the set of centers of mass of its extremal points. Because \( {\mathbf{{DS}}}_{n} \) is closed and bounded, hence compact, we may apply this statement. Theorem 8.4 thus amounts to saying that the extremal points of \( {\mathbf{{DS}}}_{n} \) are precisely the permutation matrices. Proof. Let \( M \in {\mathbf{{DS}}}_{n} \) be given. If \( M \) is not a permutation matrix, there exists an entry \( {m}_{{i}_{1}{j}_{1}} \in \left( {0,1}\right) \) . Inasmuch as \( M \) is stochastic, there also exists \( {j}_{2} \neq {j}_{1} \) such that \( {m}_{{i}_{1}{j}_{2}} \in \left( {0,1}\right) \) . Because \( {M}^{T} \) is stochastic, there exists \( {i}_{2} \neq {i}_{1} \) such that \( {m}_{{i}_{2}{j}_{2}} \in \left( {0,1}\right) \) . By this procedure one constructs a sequence \( \left( {{j}_{1},{i}_{1},{j}_{2},{i}_{2},\ldots }\right) \) such that \( {m}_{{i}_{\ell }{j}_{\ell }} \in \left( {0,1}\right) \) and \( {m}_{{i}_{\ell - 1}{j}_{\ell }} \in \left( {0,1}\right) \) .
The set of indices is finite, therefore it eventually happens that one of the indices (a row index or a column index) is repeated. Therefore, one can assume that the sequence \( \left( {{j}_{s},{i}_{s},\ldots ,{j}_{r},{i}_{r},{j}_{r + 1} = {j}_{s}}\right) \) has the above property, and that \( {j}_{s},\ldots ,{j}_{r} \) are pairwise distinct, as well as \( {i}_{s}\ldots ,{i}_{r} \) . Let us define a matrix \( B \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) by \( {b}_{{i}_{\ell }{j}_{\ell }} = 1,{b}_{{i}_{\ell }{j}_{\ell + 1}} = - 1,{b}_{ij} = 0 \) otherwise. By construction, \( B\mathbf{e} = 0 \) and \( {\mathbf{e}}^{T}B = 0 \) . If \( \alpha \in \mathbb{R} \), one therefore has \( \left( {M \pm {\alpha B}}\right) \mathbf{e} = \mathbf{e} \) and \( {\mathbf{e}}^{T}\left( {M \pm {\alpha B}}\right) = {\mathbf{e}}^{T} \) . If \( \alpha > 0 \) is small enough, \( M \pm {\alpha B} \) turns out to be nonnegative. Finally, \( M + {\alpha B} \) and \( M - {\alpha B} \) are bistochastic, and \[ M = \frac{1}{2}\left( {M - {\alpha B}}\right) + \frac{1}{2}\left( {M + {\alpha B}}\right) . \] Hence \( M \) is not an extremal point of \( {\mathbf{{DS}}}_{n} \) . Here is a nontrivial consequence (Stoer and Witzgall [36]): Corollary 8.1 Let \( \parallel \cdot \parallel \) be a norm on \( {\mathbb{R}}^{n} \), invariant under permutation of the coordinates. Then \( \parallel M\parallel = 1 \) for every bistochastic matrix (where as usual we have denoted \( \parallel \cdot \parallel \) the induced norm on \( {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) ). Proof. To begin with, \( \parallel P\parallel = 1 \) for every permutation matrix, by assumption. Because the induced norm is convex (true for every norm), one deduces from Birkhoff's theorem that \( \parallel M\parallel \leq 1 \) for every bistochastic matrix. 
Furthermore, \( M\mathbf{e} = \mathbf{e} \) implies \( \parallel M\parallel \geq \parallel M\mathbf{e}\parallel /\parallel \mathbf{e}\parallel = 1 \). This result applies, for instance, to the norm \( \parallel \cdot {\parallel }_{p} \), providing a nontrivial convex set on which the map \( 1/p \mapsto \log \parallel M{\parallel }_{p} \) is constant (compare with Theorem 7.2). The bistochastic matrices are intimately related to the relation \( \prec \) (see Section 6.5). In fact, we have the following theorem. Theorem 8.5 A matrix \( A \) is bistochastic if and only if \( {Ax} \succ x \) for every \( x \in {\mathbb{R}}^{n} \). Proof. If \( A \) is bistochastic, then \( \parallel {Ax}{\parallel }_{1} \leq \parallel A{\parallel }_{1}\parallel x{\parallel }_{1} = \parallel x{\parallel }_{1} \), because \( {A}^{T} \) is stochastic. Because \( A \) is stochastic, \( A\mathbf{e} = \mathbf{e} \). Applying the inequality to \( x - t\mathbf{e} \), one therefore has \( \parallel {Ax} - t\mathbf{e}{\parallel }_{1} \leq \parallel x - t\mathbf{e}{\parallel }_{1} \). Proposition 6.4 then shows that \( x \prec {Ax} \). Conversely, let us assume that \( x \prec {Ax} \) for every \( x \in {\mathbb{R}}^{n} \). Choosing \( x \) as the \( j \) th vector of the canonical basis, \( {\mathbf{e}}^{j} \), the inequality \( {s}_{1}\left( {\mathbf{e}}^{j}\right) \leq {s}_{1}\left( {A{\mathbf{e}}^{j}}\right) \) expresses that \( A \) is a nonnegative matrix, and \( {s}_{n}\left( {\mathbf{e}}^{j}\right) = {s}_{n}\left( {A{\mathbf{e}}^{j}}\right) \) yields \[ \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ij} = 1 \] (8.2) One then chooses \( x = \mathbf{e} \). The inequality \( {s}_{1}\left( \mathbf{e}\right) \leq {s}_{1}\left( {A\mathbf{e}}\right) \) expresses \( {}^{1} \) that \( A\mathbf{e} \geq \mathbf{e} \). Finally, \( {s}_{n}\left( \mathbf{e}\right) = {s}_{n}\left( {A\mathbf{e}}\right) \) and \( A\mathbf{e} \geq \mathbf{e} \) give \( A\mathbf{e} = \mathbf{e} \).
Hence, \( A \) is bistochastic. This statement is completed by the following. Theorem 8.6 Let \( x, y \in {\mathbb{R}}^{n} \) . Then \( x \prec y \) if and only if there exists a bistochastic matrix \( A \) such that \( y = {Ax} \) . Proof. From the previous theorem, it is enough to show that if \( x \prec y \), there exists \( A \), a bistochastic matrix, such that \( y = {Ax} \) . To do so, one applies Theorem 6.8: there exists an Hermitian matrix \( H \) whose diagonal and spectrum are \( y \) and \( x \), respectively. Let us diagonalize \( H \) by a unitary conjugation: \( H = {U}^{ * }{DU} \), with \( D = \operatorname{diag}\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) . Then \( y = {Ax} \), where \( {a}_{ij} = {\left| {u}_{ij}\right| }^{2} \) . Because \( U \) is unitary, \( A \) is bistochastic. \( {}^{2} \) ## Exercises 1. We consider the following three properties for a matrix \( M \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) . P1 \( M \) is nonnegative. P2 \( {M}^{T}e = e \), where \( e = {\left( 1,\ldots ,1\right) }^{T} \) . P3 \( \parallel M{\parallel }_{1} \leq 1 \) . a. Show that \( \mathbf{{P2}} \) and \( \mathbf{{P3}} \) imply \( \mathbf{{P1}} \) . b. Show that \( \mathbf{{P2}} \) and \( \mathbf{{P1}} \) imply \( \mathbf{{P3}} \) . c. Do P1 and P3 imply P2 ? 2. Here is another proof of the simplicity of \( \rho \left( A\right) \) in the Perron-Frobenius theorem, which does not require Lemma 12. We assume that \( A \) is irreducible and nonnegative, and we denote by \( x \) a positive eigenvector associated with the eigenvalue \( \rho \left( A\right) \) . a. Let \( K \) be the set of nonnegative eigenvectors \( y \) associated with \( \rho \left( A\right) \) such that \( \parallel y{\parallel }_{1} = 1 \) . Show that \( K \) is compact and convex. b. Show that the geometric multiplicity of \( \rho \left( A\right) \) equals 1. (Hint: Otherwise, \( K \) would contain a vector with at least one zero component.) c. 
Show that the algebraic multiplicity of \( \rho \left( A\right) \) equals 1. (Hint: Otherwise, there would be a nonnegative vector \( y \) such that \( {Ay} - \rho \left( A\right) y = x > 0 \) .) 3. Let \( M \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) be either strictly diagonally dominant or irreducible and strongly diagonally dominant. Assume that \( {m}_{jj} > 0 \) for every \( j = 1,\ldots, n \) and \( {m}_{ij} \leq 0 \) otherwise. Show that \( M \) is invertible and that the solution of \( {Mx} = b \) , when \( b \geq 0 \), satisfies \( x \geq 0 \) . Deduce that \( {M}^{-1} \geq 0 \) . ---
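Exercise 3 can be sanity-checked numerically. A sketch assuming `numpy`; the matrix below is a hypothetical strictly diagonally dominant example with the required sign pattern (positive diagonal, nonpositive off-diagonal entries):

```python
import numpy as np

# Strictly diagonally dominant: each |m_jj| exceeds the sum of the
# absolute values of the other entries in its row; m_ij <= 0 off-diagonal.
M = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])

Minv = np.linalg.inv(M)            # M is invertible ...
assert np.all(Minv >= -1e-12)      # ... and M^{-1} is (numerically) >= 0

b = np.array([1.0, 2.0, 0.5])      # a nonnegative right-hand side
x = np.linalg.solve(M, b)
assert np.all(x >= -1e-12)         # so the solution of Mx = b is >= 0
```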
1068_(GTM227)Combinatorial Commutative Algebra
Definition 18.44
Definition 18.44 \( {\mathcal{H}}_{\mathbb{k}\left\lbrack \mathbf{x}\right\rbrack }^{h} \) is called the multigraded Hilbert scheme, and the admissible family \( {\mathcal{U}}_{\mathbb{k}\left\lbrack \mathbf{x}\right\rbrack }^{h} \) over it is the universal admissible family. Theorem 18.43 implies that both the multigraded Hilbert scheme and the universal family over it are unique up to canonical isomorphism. Thus \( {\mathcal{H}}_{\mathbb{k}\left\lbrack \mathbf{x}\right\rbrack }^{h} \) really is the one best family. It parametrizes the admissible ideals in \( \mathbb{k}\left\lbrack \mathbf{x}\right\rbrack \) in the sense that (by Theorem 18.43) they are in bijection with the \( \mathbb{k} \) -points of \( {\mathcal{H}}_{\mathbb{k}\left\lbrack \mathbf{x}\right\rbrack }^{h} \), by which we mean the morphisms \( \operatorname{Spec}\left( \mathbb{k}\right) \rightarrow {\mathcal{H}}_{\mathbb{k}\left\lbrack \mathbf{x}\right\rbrack }^{h} \) . There is an explicit (but quite complicated) algorithm to derive polynomial equations that locally describe the scheme \( {\mathcal{H}}_{\mathbb{k}\left\lbrack \mathbf{x}\right\rbrack }^{h} \) [HS04]. The algorithm generalizes the derivation of the equations in the parameters \( {c}_{\mathbf{v}}^{\mathbf{u}} \) for the charts \( {U}_{\lambda } \) in the previous sections. The construction has the following important consequence. Recall from Definition 8.7 what it means for the grading of \( \mathbb{k}\left\lbrack \mathbf{x}\right\rbrack \) to be positive. Corollary 18.45 If the grading of the polynomial ring \( \mathbb{k}\left\lbrack \mathbf{x}\right\rbrack \) is positive then the multigraded Hilbert scheme \( {\mathcal{H}}_{\mathbb{k}\left\lbrack \mathbf{x}\right\rbrack }^{h} \) is projective over the ground ring \( \mathbb{k} \) . The remainder of this section is devoted to examples of multigraded Hilbert schemes, with the ground ring \( \mathbb{k} \) being the complex numbers \( \mathbb{C} \) . 
Based on the results of Section 18.2, we propose the following conjecture. Conjecture 18.46 The multigraded Hilbert scheme \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} \) is smooth and irreducible for any multigrading on \( \mathbb{C}\left\lbrack {x, y}\right\rbrack \) and any Hilbert function \( h \) . Example 18.47 The following examples illustrate the range of Conjecture 18.46. (i) If \( A = \{ 0\} \) is the one-element group, then \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} \) is the Hilbert scheme of \( h\left( 0\right) \) points in the affine plane \( {\mathbb{C}}^{2} \) . We saw in Theorem 18.7 that this Hilbert scheme is smooth and irreducible of dimension \( {2n} \) . (ii) Let \( A = \mathbb{Z} \) . If \( \deg \left( x\right) \) and \( \deg \left( y\right) \) are both positive integers and \( h \) has finite support, then \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} \) is an irreducible component in the fixed-point set of a \( {\mathbb{C}}^{ * } \) -action on the Hilbert scheme of points. It was proved by Evain [Eva02] that this scheme is smooth and irreducible. (iii) If \( A = \mathbb{Z},\deg \left( x\right) = \deg \left( y\right) = 1 \), and \( h\left( a\right) \equiv 1 \), then \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} = {\mathbb{P}}^{1} \) . (iv) More generally, set \( A = \mathbb{Z} \) and \( \deg \left( x\right) = \deg \left( y\right) = 1 \), but instead let \( h\left( a\right) = \min \left( {m, a + 1}\right) \) for some \( m \in \mathbb{N} \) . Then \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} \) is the Hilbert scheme of \( m \) points on \( {\mathbb{P}}^{1} \) . (v) If \( A = \mathbb{Z},\deg \left( x\right) = - \deg \left( y\right) = 1 \), and \( h\left( a\right) \equiv 1 \), then \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} = {\mathbb{C}}^{1} \) . 
(vi) If \( A = {\mathbb{Z}}^{2},\deg \left( x\right) = \left( {1,0}\right) \), and \( \deg \left( y\right) = \left( {0,1}\right) \), then \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} \) is either empty or a point. In the latter case it consists of one monomial ideal. (vii) If \( A = \mathbb{Z}/2\mathbb{Z},\deg \left( x\right) = \deg \left( y\right) = 1 \), and \( h\left( 0\right) = h\left( 1\right) = 1 \), then \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y}\right\rbrack }^{h} \) is isomorphic to the cotangent bundle of the projective line \( {\mathbb{P}}^{1} \). We have seen in Section 18.4 that the statement of Conjecture 18.46 does not extend to 3 variables. The counterexample from Iarrobino's Theorem is \( {\mathcal{H}}_{\mathbb{C}\left\lbrack {x, y, z}\right\rbrack }^{102} \), the Hilbert scheme of 102 points in affine 3-space. What follows is an example with only 9 points but using a two-dimensional grading. It is the smallest known example of a reducible multigraded Hilbert scheme. Example 18.48 Let \( n = 3 \), fix the \( {\mathbb{Z}}^{2} \) -grading in Example 18.41, and let \( h = {h}_{M} \) be the Hilbert function with Hilbert series (18.15). The multigraded Hilbert scheme \( {H}_{\mathbb{C}\left\lbrack {x, y, z}\right\rbrack }^{h} \) is the reduced union of two projective lines \( {\mathbb{P}}^{1} \) that intersect in the common torus fixed point \( M \). The universal family equals \[ \left\langle {{x}^{3}, x{y}^{2},{x}^{2}y,{y}^{3},{a}_{0}{x}^{2}z - {a}_{1}{xy},{b}_{0}{xyz} - {b}_{1}{y}^{2},{y}^{2}z,{z}^{2}}\right\rangle \;\text{ with }\;{a}_{1}{b}_{1} = 0. \] Here, \( \left( {{a}_{0} : {a}_{1}}\right) \) and \( \left( {{b}_{0} : {b}_{1}}\right) \) are coordinates on two projective lines. This Hilbert scheme has three torus fixed points, namely the three monomial ideals in the family. The ideal \( M \) in (18.14) is the singular point on \( {H}_{\mathbb{C}\left\lbrack {x, y, z}\right\rbrack }^{h} \).
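For monomial ideals, the Hilbert functions appearing in Example 18.47 can be checked by directly counting standard monomials. A truncated Python sketch (the helper function, generators, and degree bound are illustrative choices, not notation from the text):

```python
from itertools import product

def hilbert_function(gens, deg, bound=6):
    """Hilbert function of C[x,y]/I for a monomial ideal I in two variables.

    gens : exponent pairs (a, b) of the monomial generators of I
    deg  : (deg x, deg y) for a grading by A = Z
    Counts standard monomials x^i y^j with i, j <= bound, by degree.
    """
    h = {}
    for i, j in product(range(bound + 1), repeat=2):
        if any(i >= a and j >= b for a, b in gens):
            continue  # x^i y^j lies in I
        d = i * deg[0] + j * deg[1]
        h[d] = h.get(d, 0) + 1
    return h

# Example 18.47(iii): deg x = deg y = 1; the monomial ideal <x> has h(a) = 1.
assert all(hilbert_function([(1, 0)], (1, 1))[a] == 1 for a in range(7))

# Example 18.47(iv) with m = 2: <x^2> has h(a) = min(2, a + 1).
h = hilbert_function([(2, 0)], (1, 1))
assert all(h[a] == min(2, a + 1) for a in range(6))
```

The counting is truncated at the chosen bound, so only degrees well below it are reliable; for the ideals above this already reproduces the stated Hilbert functions.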
\( \diamond \) Example 18.49 In algebraic geometry, there are classical examples of Hilbert schemes with multiple components. Let \( n = 4 \) and take \( h\left( m\right) = \) \( {2m} + 2 \) for \( m \geq 1 \), but \( h\left( 0\right) = 1 \) . The corresponding Hilbert scheme has two components. A generic point of the first component corresponds to a pair of skew lines in projective space. A generic point of the second component corresponds to a conic in projective space and a point outside the plane of the conic. The two meet along a component, a generic point of which corresponds to two crossing lines in \( {\mathbb{P}}^{3} \) with some nonreduced scheme structure at the crossing point in the direction normal to the plane spanned by the lines. (There are several other types of ideals in this family as well: their schemes are double lines with nonreduced structure at one point and plane conics with the extra point in the plane of the conic and not reduced.) \( \diamond \) We will present two more classes of multigraded Hilbert schemes that have appeared in the commutative algebra literature. These are the classical Grothendieck Hilbert scheme and the toric Hilbert scheme. Example 18.50 Let \( A = \mathbb{Z} \) and give \( \mathbb{C}\left\lbrack \mathbf{x}\right\rbrack \) the standard grading with \( \deg \left( {x}_{i}\right) = 1 \) for \( i = 1,\ldots, n \) . Consider the following family of Hilbert functions \( h \) . Let \( p\left( t\right) \) be any univariate polynomial with \( p\left( \mathbb{N}\right) \subseteq \mathbb{N} \) . Fix a sufficiently large integer \( g \gg 0 \) . (For experts: the number \( g \) has to exceed the Gotzmann number.) These data define a Hilbert function \( h : A \rightarrow \mathbb{N} \) by \[ h\left( a\right) = \left( \begin{matrix} n + a - 1 \\ a \end{matrix}\right) \text{ if }a < g\;\text{ and }\;h\left( a\right) = p\left( a\right) \text{ if }a \geq g. 
\] The multigraded Hilbert scheme \( {H}_{\mathbb{C}\left\lbrack \mathbf{x}\right\rbrack }^{h} \) parametrizes all subschemes of projective space \( {\mathbb{P}}^{n - 1} \) with Hilbert polynomial \( p \). This is the classical Hilbert scheme due to Grothendieck. It is known to be connected [Har66a]. \( \diamond \) Example 18.51 Fix any grading by an abelian group \( A \) on the polynomial ring \( \mathbb{C}\left\lbrack \mathbf{x}\right\rbrack = \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \). The toric Hilbert scheme is defined as the multigraded Hilbert scheme \( {H}_{\mathbb{C}\left\lbrack \mathbf{x}\right\rbrack }^{\mathbf{1}} \), where \( \mathbf{1} \) denotes the characteristic function of the semigroup \( {A}_{ + } \). This means that \( \mathbf{1}\left( \mathbf{a}\right) = 1 \) if \( \mathbf{a} \in {A}_{ + } \) and \( \mathbf{1}\left( \mathbf{a}\right) = 0 \) if \( \mathbf{a} \in A \smallsetminus {A}_{ + } \). There is a distinguished point on the toric Hilbert scheme \( {H}_{\mathbb{C}\left\lbrack \mathbf{x}\right\rbrack }^{1} \), namely the lattice ideal \( {I}_{L} \) studied in Chapter 7, for \( L = \left\{ {\mathbf{u} \in {\mathbb{Z}}^{n} \mid \deg \left( \mathbf{u}\right) = \mathbf{0}}\right\} \). To see this, note that \( \mathbb{C}\left\lbrack \mathbf{x}\right\rbrack /{I}_{L} \) is isomorphic to the semigroup ring \( \mathbb{C}\left\lbrack {A}_{ + }\right\rbrack \), and obviously the Hilbert function of \( \mathbb{C}\left\lbrack {A}_{ + }\right\rbrack \) is the characteristic function 1 of \( {A}_{ + } \). The toric Hilbert scheme \( {H}_{\mathbb{C}\left\lbrack \mathbf{x}\right\rbrack }^{1} \) parametrizes all \( A \) -homogeneous ideals with the same Hilbert function as the lattice ideal \( {I}_{L} \). The toric Hilbert scheme is a combinatorial object whose study is closely related to triangulations of polytopes. Using this connection to polyhedral geometry, Santos recently established the following result [San04]. 
Theorem 18.52 (Santos) There exists a grading of the polynomial ring \( \mathbb{C}\left\lbrack \mathbf{x}\right\rbrack = \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{26}}\right\rbrack \) in 26
1068_(GTM227)Combinatorial Commutative Algebra
Definition 3.12
Definition 3.12 Let \( I = \left\langle {{m}_{1},\ldots ,{m}_{r}}\right\rangle \) and \( {I}_{\epsilon } = \left\langle {{m}_{\epsilon ,1},\ldots ,{m}_{\epsilon, r}}\right\rangle \) be monomial ideals in \( S \) and \( {S}_{\epsilon } \), respectively. Call \( {I}_{\epsilon } \) a strong deformation of \( I \) if the partial order on \( \{ 1,\ldots, r\} \) by \( x \) -degree of the \( {m}_{\epsilon, i} \) refines the partial order by \( x \) -degree of the \( {m}_{i} \), and the same holds for \( y \) and \( z \) . We also say that \( I \) is a specialization of \( {I}_{\epsilon } \) . Constructing a strong deformation \( {I}_{\epsilon } \) of any given monomial ideal \( I \) is easy: simply replace each generator \( {m}_{i} \) by a nearby generator \( {m}_{\epsilon, i} \) in such a way that \( \mathop{\lim }\limits_{{\epsilon \rightarrow 0}}{m}_{\epsilon, i} = {m}_{i} \) . The ideal \( {I}_{\epsilon } \) need not be strongly generic; however, it will be if the strong deformation is chosen randomly. Example 3.13 The ideal in \( {S}_{\epsilon } \) given by \[ \left\langle {{x}^{3},{x}^{2 + \epsilon }{y}^{1 + \epsilon },{x}^{2}{z}^{1},{x}^{1 + {2\epsilon }}{y}^{2},{x}^{1 + \epsilon }{y}^{1}{z}^{1 + \epsilon },{x}^{1}{z}^{2 + \epsilon },{y}^{3},{y}^{2 - \epsilon }{z}^{1 + {2\epsilon }},{y}^{1 + {2\epsilon }}{z}^{2},{z}^{3}}\right\rangle \] is one possible strongly generic deformation of the ideal \( \langle x, y, z{\rangle }^{3} \) in \( S \) . \( \diamond \) Proposition 3.14 Suppose \( I \) is a monomial ideal in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) and \( {I}_{\epsilon } \) is a strong deformation resolved by a planar map \( {G}_{\epsilon } \) . Specializing the vertices (hence also the edges and regions) of \( {G}_{\epsilon } \) yields a planar map resolution of \( I \) . Proof. Consider the minimal free resolution \( {\mathcal{F}}_{{G}_{\epsilon }} \) determined by the triangulation \( {G}_{\epsilon } \) as in (3.2). 
The specialization \( G \) of the labeled planar map \( {G}_{\epsilon } \) still gives a complex \( {\mathcal{F}}_{G} \) of free modules over \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \), and we need to demonstrate its exactness. Considering any fixed \( {\mathbb{N}}^{3} \) -degree \( \omega = \left( {a, b, c}\right) \) , we must demonstrate exactness of the complex of vector spaces over \( \mathbb{k} \) in the degree \( \omega \) part of \( {\mathcal{F}}_{G} \) . Define \( {\omega }_{\epsilon } \) as the exponent vector on \[ \operatorname{lcm}\left( {{m}_{\epsilon, i} \mid {m}_{i}\text{ divides }{x}^{a}{y}^{b}{z}^{c}}\right) . \] The summands contributing to the degree \( \omega \) part of \( {\mathcal{F}}_{G} \) are exactly those summands of \( {\mathcal{F}}_{{G}_{\epsilon }} \) contributing to its degree \( {\omega }_{\epsilon } \) part, which is exact. In the next section we will demonstrate how any planar map resolution can be made minimal by successively removing edges and joining adjacent regions. For now, we derive a sharp complexity bound from Proposition 3.14 using Euler’s formula, which states that \( v - e + f = 1 \) for any connected planar map with \( v \) vertices, \( e \) edges, and \( f \) bounded faces [Wes01, Theorem 6.1.21], plus its consequences for simple planar graphs with at least three vertices: \( e \leq {3v} - 6 \) [Wes01, Theorem 6.1.23] and \( f \leq {2v} - 5 \) . Corollary 3.15 An ideal \( I \) generated by \( r \geq 3 \) monomials in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) has at most \( {3r} - 6 \) minimal first syzygies and \( {2r} - 5 \) minimal second syzygies. These Betti number bounds are attained if \( I \) is artinian, strongly generic, and \( {xyz} \) divides all but three minimal generators. Proof. Choose a strong deformation \( {I}_{\epsilon } \) of \( I \) that is strongly generic. 
Proposition 3.14 implies that \( I \) has Betti numbers no larger than those of \( {I}_{\epsilon } \), so we need only prove the first sentence of the theorem for \( {I}_{\epsilon } \) . Theorem 3.11 implies that \( {I}_{\epsilon } \) is resolved by a planar map, so Euler’s formula and its consequences give the desired result. For the second statement, let \( {x}^{a},{y}^{b} \), and \( {z}^{c} \) be the three special generators of \( I \) . Every other minimal generator \( {x}^{i}{y}^{j}{z}^{k} \) satisfies \( i \geq 1, j \geq 1 \) , and \( k \geq 1 \), so that \( \left\{ {{x}^{a},{y}^{b}}\right\} ,\left\{ {{x}^{a},{z}^{c}}\right\} \), and \( \left\{ {{y}^{b},{z}^{c}}\right\} \) are edges in Buch \( \left( I\right) \) . By Proposition 3.9, Buch \( \left( I\right) \) is a triangulation of a triangle with \( r \) vertices such that \( r - 3 \) vertices lie in the interior. It follows from Euler’s formula and the easy equality \( {2e} = 3\left( {f + 1}\right) \) for any such triangulation that the number of edges is \( {3r} - 6 \) and the number of triangles is \( {2r} - 5 \) . The desired result is now immediate from the minimality in Theorem 3.11. ## 3.5 The planar resolution algorithm Our goal for the rest of this chapter is to demonstrate how the nonplanarity obstacles at the end of Section 3.3 can be overcome. Definition 3.16 A graph \( G \) with at least three vertices is 3-connected if deleting any pair of vertices along with all edges incident to them leaves a connected graph. Given a set \( \mathcal{V} \) of vertices in \( G \), define the suspension of \( G \) over \( \mathcal{V} \) by adding a new vertex to \( G \) and connecting it by edges to all vertices in \( \mathcal{V} \) . The graph \( G \) is almost 3-connected if it comes with a set \( \mathcal{V} \) of three distinguished vertices such that the suspension of \( G \) over \( \mathcal{V} \) is 3-connected. 
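The 3-connectivity condition of Definition 3.16 is easy to test by brute force on small graphs. A Python sketch, not from the text; graphs are given as vertex sets and edge lists:

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Check connectivity of an undirected graph by graph search."""
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == set(vertices)

def is_3_connected(vertices, edges):
    """Definition 3.16: deleting any pair of vertices leaves the rest connected."""
    if len(vertices) < 3:
        return False
    return all(
        is_connected(set(vertices) - {u, v},
                     [e for e in edges if u not in e and v not in e])
        for u, v in combinations(vertices, 2)
    )

# K4 is 3-connected; the 4-cycle is not (delete two opposite vertices).
V = {0, 1, 2, 3}
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert is_3_connected(V, K4)
assert not is_3_connected(V, C4)
```

Almost 3-connectivity can then be tested by forming the suspension (add one new vertex joined to the three distinguished vertices of \( \mathcal{V} \)) and calling `is_3_connected` on the result.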
The vertex sets of our graphs will be monomials minimally generating some ideal \( I \) inside \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) . Note that when \( I \) is artinian, such a vertex set contains a distinguished set \( \mathcal{V} \) of three vertices: the pure-power generators \( {x}^{a},{y}^{b} \), and \( {z}^{c} \) . Now we come to the main result in this chapter. Theorem 3.17 Every monomial ideal \( I \) in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) has a minimal free resolution by some planar map. If \( I \) is artinian then the graph \( G \) underlying any such planar map is almost 3-connected. The vertices, edges, and bounded regions of this planar map are labeled by their associated "staircase corners" as in the examples above. This determines a complex of free modules over \( S = \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) as in (3.2). Let us begin by presenting an algorithm for finding a planar map resolution as in Theorem 3.17 for artinian ideals. Given a deformation \( {I}_{\epsilon } \) of a monomial ideal \( I = \left\langle {{m}_{1},\ldots ,{m}_{r}}\right\rangle \) with \( {m}_{i} = {x}^{{a}_{i}}{y}^{{b}_{i}}{z}^{{c}_{i}} \), write the \( {i}^{\text{th }} \) deformed generator as \( {m}_{\epsilon, i} = {x}^{{a}_{\epsilon, i}}{y}^{{b}_{\epsilon, i}}{z}^{{c}_{\epsilon, i}} \) . Algorithm 3.18 requires a generic deformation satisfying the condition \[ \text{if}{a}_{i} = {a}_{j}\text{and}{c}_{i} < {c}_{j}\text{then}{a}_{\epsilon, i} < {a}_{\epsilon, j} \] (3.3) as well as its analogues via cyclic permutation of \( \left( {a, b, c}\right) \) . Observe that \( {c}_{i} < {c}_{j} \) is equivalent to \( {b}_{i} > {b}_{j} \) when the condition \( {a}_{i} = {a}_{j} \) is assumed; in other words, if two generators lie at the same distance in front of the \( {yz} \) - plane, then the lower one lies farther to the right (as seen from far out on the \( x \) -axis). 
Condition (3.3) says that among generators that start at the same distance from the \( {yz} \) -plane, the deformation pulls increasingly farther from the \( {yz} \) -plane as the generators move up and to the left. Keep in mind while reading the algorithm that its geometric content will be explained in the course of its proof of correctness. Algorithm 3.18 Fix an artinian monomial ideal \( I \) inside \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) . - initialize \( {I}_{\epsilon } = \) the strongly generic deformation of \( I \) in (3.3), and \( G = \operatorname{Buch}\left( {I}_{\epsilon }\right) \) - while \( {I}_{\epsilon } \neq I \) do - choose \( u \in \{ a, b, c\} \) and an index \( i \) such that \( {u}_{\epsilon, i} \) is minimal among the deformed \( u \) -coordinates satisfying \( {u}_{\epsilon, i} \neq {u}_{i} \) . Assume (for the sake of notation) that \( u = a \), by cyclic symmetry of \( \left( {a, b, c}\right) \) . - find the region of \( G \) whose monomial label \( {x}^{\alpha }{y}^{\beta }{z}^{\gamma } \) has \( \alpha = {a}_{\epsilon, i} \) and \( \gamma \) minimal. - find the generator \( {m}_{\epsilon, j} \) with the least \( x \) -degree among those with \( y \) -degree \( \beta \) and \( z \) -degree strictly less than \( \gamma \) . - redefine \( {I}_{\epsilon } \) and \( G \) by setting \( {a}_{\epsilon, i} = {a}_{i} \) and leaving all other generators alone. - if \( {a}_{j} = {a}_{i} \) then delete from \( G \) the edge labeled \( {x}^{{a}_{i}}{y}^{\beta }{z}^{\gamma } \), else leave \( G \) unchanged - output \( G \) ![9d852306-8a03-41f2-b2e7-a141e7b451e2_66_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_66_0.jpg) Figure 3.4: The geometry of Algorithm 3.18 The reason for choosing such a specific strongly generic deformation and then being so careful about how the specialization proceeds is that we need control over which syzygy degrees collide at any given stage. In particular, at most one edge should disappear at a time. 
Proof of correctness. If \( I \) is generic then the algorithm terminates immediately and correctly by Theorem 3.11. By induction on the number of passes through the while-do loop, assume that \( {I}_{\epsilon } \) at the beginning of the loop is minimally resolved by the regions, edges, and
1329_[肖梁] Abstract Algebra (2022F)
Definition 2.1.1
Definition 2.1.1. Let \( G \) be a group and \( A \) a subset. Write \( \langle A\rangle \) for the subgroup of \( G \) generated by \( A \). Explicitly, \[ \langle A\rangle = \left\{ {{a}_{1}^{{\epsilon }_{1}}{a}_{2}^{{\epsilon }_{2}}\cdots {a}_{r}^{{\epsilon }_{r}} \mid {a}_{1},\ldots ,{a}_{r} \in A,{\epsilon }_{1},\ldots ,{\epsilon }_{r} \in \{ \pm 1\} }\right\} \] It is also the same as the intersection of all subgroups \( H \) of \( G \) containing \( A \). Remark 2.1.2. When \( G \) is abelian and \( A = \left\{ {{a}_{1},\ldots ,{a}_{r}}\right\} \), we have \[ \langle A\rangle = \left\{ {{a}_{1}^{{d}_{1}}\cdots {a}_{r}^{{d}_{r}} \mid {d}_{1},\ldots ,{d}_{r} \in \mathbb{Z}}\right\} . \] 2.1.3. Lattices of subgroups. Sometimes it is helpful to list the subgroups of a group in a diagram encoding their inclusion relations, drawing a line between two subgroups when one contains the other (with the smaller subgroup below). For example: ( \( p \) is a prime number) \( {\mathbf{Z}}_{{p}^{n}} \) ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_11_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_11_0.jpg) Definition 2.1.4. Let \( G \) be a group and \( x \in G \) be an element. Define the order of \( x \) in \( G \), denoted by \( \left| x\right| \), to be - \( \infty \), if \( {x}^{n} \neq e \) for every \( n \in \mathbb{N} \) (in this case, \( \langle x\rangle \) is a subgroup of \( G \) isomorphic to \( \mathbb{Z} \) ); otherwise, - \( n \), the least positive integer such that \( {x}^{n} = e \) (in this case, \( \langle x\rangle < G \) is a subgroup isomorphic to \( {\mathbf{Z}}_{n} \) ). In all cases, \( \left| x\right| = \left| {\langle x\rangle }\right| \). 2.2. Cosets. Cosets may be viewed as analogues of affine subspaces in linear algebra. Definition 2.2.1. Let \( H \) be a subgroup of \( G \). A left coset is a set of the form (for some \( g \in G \) ) \[ {gH} \mathrel{\text{:=}} \left\{ {gh} \mid h \in H\right\} . \] In particular, if \( g \in H \), then \( {gH} = H \).
A right coset is a set of the form (for some \( g \in G \) ) \[ {Hg} \mathrel{\text{:=}} \{ {hg} \mid h \in H\} . \] Remark 2.2.2. Occasionally, people may abbreviate left cosets to simply cosets. (We will mostly work with left cosets throughout, but similar statements should also hold for right cosets.) When \( G \) is abelian, left cosets and right cosets are the same. Proposition 2.2.3. Two (left) cosets \( {g}_{1}H \) and \( {g}_{2}H \) are - either equal (which is equivalent to \( {g}_{1}^{-1}{g}_{2} \in H \) ); - or disjoint (which is equivalent to \( {g}_{1}^{-1}{g}_{2} \notin H \) ). In particular, \( G \) is the disjoint union of the left cosets of \( H \). Proof. We will prove that ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_12_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_12_0.jpg) (A) If \( {g}_{1}^{-1}{g}_{2} \in H \), then \( {g}_{1}H = {g}_{1}\left( {{g}_{1}^{-1}{g}_{2} \cdot H}\right) = {g}_{2}H \) (as for any element \( h \in H, h \cdot H = H \) as sets). (B) If \( {g}_{1}H \cap {g}_{2}H \neq \varnothing \), say \( x = {g}_{1}{h}_{1} = {g}_{2}{h}_{2} \) for \( {h}_{1},{h}_{2} \in H \). Then \( {g}_{1} = x{h}_{1}^{-1} \) and \( {g}_{2} = x{h}_{2}^{-1} \). So \[ {g}_{1}^{-1}{g}_{2} = {\left( x{h}_{1}^{-1}\right) }^{-1} \cdot x{h}_{2}^{-1} = {h}_{1}{x}^{-1}x{h}_{2}^{-1} = {h}_{1}{h}_{2}^{-1} \in H. \] 2.3. Lagrange Theorem. The first big theorem in group theory is Lagrange's theorem. Definition 2.3.1. Write \( G/H \mathrel{\text{:=}} \{ {gH} \mid g \in G\} \) for the set of left cosets. Similarly, \( H \smallsetminus G \mathrel{\text{:=}} \{ {Hg} \mid g \in G\} \). The above proposition says \( G = \mathop{\coprod }\limits_{{{gH} \in G/H}}{gH} \) is a disjoint union. We call \( \left\lbrack {G : H}\right\rbrack \mathrel{\text{:=}} \left| {G/H}\right| \) the index of \( H \) as a subgroup of \( G \). Theorem 2.3.2 (Lagrange). If \( G \) is a finite group and \( H < G \) is a subgroup, then \( \left| H\right| \) divides \( \left| G\right| \).
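Before the proof, the coset partition and the resulting count \( \left| G\right| = \left\lbrack {G : H}\right\rbrack \cdot \left| H\right| \) can be verified by brute force in a small example; a Python sketch with the illustrative choice \( G = \mathbb{Z}/{12} \) and \( H = \langle 3\rangle \) (any finite group works the same way):

```python
# Brute-force check of the coset partition and Lagrange's theorem
# in the additive group G = Z/12 with subgroup H = <3> = {0, 3, 6, 9}.
n = 12
G = set(range(n))
H = {h for h in G if h % 3 == 0}

# The distinct left cosets g + H (written additively).
cosets = {frozenset((g + h) % n for h in H) for g in G}

assert set().union(*cosets) == G                  # the cosets cover G ...
assert all(len(c) == len(H) for c in cosets)      # ... each of size |H| ...
assert len(G) == len(cosets) * len(H)             # ... so |G| = [G:H] * |H|
```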
Proof. As each coset of \( H \) has exactly \( \left| H\right| \) elements, we have \( \left| G\right| = \left\lbrack {G : H}\right\rbrack \cdot \left| H\right| \). Corollary 2.3.3. (1) If \( G \) is a finite group, then \( \left| x\right| \) divides \( \left| G\right| \). (2) If \( G \) is finite, then \( {x}^{\left| G\right| } = e \) for every element \( x \in G \). Proof. (1) This follows from Lagrange's theorem because \( \left| x\right| = \left| {\langle x\rangle }\right| \) and \( \langle x\rangle \) is a subgroup of \( G \). (2) follows from (1) immediately. Example 2.3.4. Following Example 0.2.3, fix a positive integer \( n \). Consider \( G = {\mathbb{Z}}_{n}^{ \times } \mathrel{\text{:=}} \{ \bar{a} = a{\;\operatorname{mod}\;n} \mid \gcd \left( {a, n}\right) = 1\} \), the group of residue classes modulo \( n \) that are coprime to \( n \). The operation for the group \( G \) is multiplication. We know \( \left| G\right| = \varphi \left( n\right) \), where \( \varphi \) is the Euler totient function. Then Lagrange's theorem for \( G \) says that if \( \gcd \left( {a, n}\right) = 1 \), then \( {\bar{a}}^{\varphi \left( n\right) } = \overline{1} \); or equivalently, \[ \gcd \left( {a, n}\right) = 1\; \Rightarrow \;{a}^{\varphi \left( n\right) } \equiv 1\;\left( {\;\operatorname{mod}\;n}\right) . \] This is known as Euler's theorem, which generalizes Fermat's little theorem. Corollary 2.3.5. If a group \( G \) has \( p \) elements with \( p \) a prime, then \( G \) is cyclic (and in particular abelian). Proof. Take an element \( x \in G \) such that \( x \neq e \). Then \( \left| x\right| \) divides \( \left| G\right| = p \). Yet \( \left| x\right| \neq 1 \); so \( \left| x\right| = p \), i.e. \( \left| {\langle x\rangle }\right| = p \). So \( G = \langle x\rangle \) is cyclic. ## 2.4. Conjugation, normal subgroups, and quotient groups. Definition 2.4.1. Let \( G \) be a group and \( a, g \in G \). We call \( {ga}{g}^{-1} \) the conjugate of \( a \) by \( g \). Lemma 2.4.2. 
If \( H \) is a subgroup of \( G \) and \( g \in G \), then \( {gH}{g}^{-1} \mathrel{\text{:=}} \left\{ {{gh}{g}^{-1} \mid h \in H}\right\} \) is a subgroup, called the conjugate of \( H \) by \( g \). Proof. Given \( {ga}{g}^{-1},{gb}{g}^{-1} \in {gH}{g}^{-1} \) with \( a, b \in H \), \[ \left( {{ga}{g}^{-1}}\right) {\left( gb{g}^{-1}\right) }^{-1} = {ga}{g}^{-1} \cdot g{b}^{-1}{g}^{-1} = {ga}{b}^{-1}{g}^{-1} \in {gH}{g}^{-1}. \] So \( {gH}{g}^{-1} \) is a subgroup of \( G \). Definition 2.4.3. A subgroup \( H \leq G \) is normal if for any \( g \in G, H = {gH}{g}^{-1} \); namely, all conjugates of \( H \) are just \( H \) itself. Note that this condition is also equivalent to \( {gH} = {Hg} \) for any \( g \in G \), namely the left coset \( {gH} \) is the same as the right coset \( {Hg} \) for each \( g \). We write \( H \trianglelefteq G \) to denote normal subgroups. For example, \( \{ 1\} \trianglelefteq G \) and \( G \trianglelefteq G \). Definition 2.4.4. Let \( H \trianglelefteq G \) be a normal subgroup. For \( a, b \in G \), we have \[ {aH} \cdot {bH} \mathrel{\text{:=}} \{ k\ell \mid k \in {aH},\ell \in {bH}\} = {abH} \cdot H = {abH} \] as subsets of \( G \). (This avoids the discussion of whether the product is well-defined.) This defines a group structure on \( G/H \). The identity is \( {eH} = H \); and the inverse of \( {aH} \) is \( {a}^{-1}H \). We call \( G/H \) the quotient group or the factor group of \( G \) by \( H \). Example 2.4.5. (1) Every subgroup of an abelian group is normal, because \( {gH}{g}^{-1} = H \) automatically holds. (2) A positive integer \( n \) defines a (normal) subgroup of \( \left( {\mathbb{Z}, + }\right) \) given by \( \langle n\rangle = \{ x \in \mathbb{Z} \mid n \text{ divides } x\} \). The quotient group \( \mathbb{Z}/\langle n\rangle \) has elements \( a + \langle n\rangle \) for \( a = 0,1,\ldots, n - 1 \). 
Then we have a natural isomorphism \[ \mathbb{Z}/\langle n\rangle \overset{ \cong }{ \rightarrow }{\mathbb{Z}}_{n} \] \[ a + \langle n\rangle \mapsto \bar{a} \] We often write this quotient as \( \mathbb{Z}/n\mathbb{Z} \) and thus \( {\mathbb{Z}}_{n} \cong \mathbb{Z}/n\mathbb{Z} \) (this may be viewed as the definition of \( {\mathbb{Z}}_{n} \) ). ## 2.5. Some technical results. Proposition 2.5.1. Let \( H \) and \( K \) be subgroups of a group \( G \) . Define \( {HK} \mathrel{\text{:=}} \{ {hk} \mid h \in H, k \in K\} \) . When \( G \) is finite, we have \[ \left| {HK}\right| = \frac{\left| H\right| \cdot \left| K\right| }{\left| H \cap K\right| } \] Proof. By Proposition 2.2.3, \( {HK} \) is a disjoint union of left cosets of \( K \), namely \[ {HK} = {h}_{1}K \sqcup {h}_{2}K \sqcup \cdots \sqcup {h}_{m}K. \] We claim that for the same \( {h}_{1},\ldots ,{h}_{m} \) , \[ H = {h}_{1}\left( {H \cap K}\right) \sqcup \cdots \sqcup {h}_{m}\left( {H \cap K}\right) . \] The claim implies the proposition as \[ \frac{\left| HK\right| }{\left| K\right| } = m = \frac{\left| H\right| }{\left| H \cap K\right| } \] Now, we prove the claim: for every \( h,{h}^{\prime } \in H \) , \[ \text{(2.5.1.1)}\;{hK} = {h}^{\prime }K \Leftrightarrow {h}^{-1}{h}^{\prime } \in K \Leftrightarrow {h}^{-1}{h}^{\prime } \in H \cap K \Leftrightarrow h\left( {H \cap K}\right) = {h}^{\prime }\left( {H \cap K}\right) \text{.} \] So we deduce that \[ {HK} = \mathop{\bigcup }\limits_{{h \in H}}{hK} = {h}_{1}K \sqcup \cdots \sqcup {h}_{m}K \] \[ H = \mathop{\bigcup }\limits_{{h \in H}}h\left( {H \cap K}\right) = {h}_{1}\left( {H \cap K}\right) \sqcup \cdots \sqcup {h}_{m}\left( {H \cap K}\right) . \] Here, the equalities mean to first write tautologically \( {HK} \) as the union of all left \( K \) -cosets (parameterized by \( h \in H \) ) and in a parallel way write \( H \) as the union of all left \( H \cap K \) - cosets (parameterized by \( h \in H \) ).
Then (2.5.1.1) tells us that two left \( K \) -cosets are the same if and only if the corresponding left \( H \cap K \) -cosets are the same. This proves the claim.
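Both counting statements of this section, Proposition 2.5.1 and Euler's theorem from Example 2.3.4, are easy to check by brute force on small examples. A minimal sketch (the encoding of \( {S}_{3} \) as permutation tuples on \( \{ 0,1,2\} \) is my own choice, not the notes'):

```python
from itertools import permutations
from math import gcd

def compose(p, q):
    # (p * q)(i) = p[q[i]]; permutations of {0, 1, 2} as tuples
    return tuple(p[q[i]] for i in range(len(q)))

G = set(permutations(range(3)))          # S_3
H = {(0, 1, 2), (1, 0, 2)}               # <(0 1)>, order 2
K = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # <(0 1 2)> = A_3, order 3

HK = {compose(h, k) for h in H for k in K}
# |HK| = |H| |K| / |H ∩ K|  (Proposition 2.5.1); here 2 * 3 / 1 = 6
assert len(HK) * len(H & K) == len(H) * len(K)
assert HK == G                           # in this example HK is all of S_3

# Euler's theorem: gcd(a, n) = 1  =>  a**phi(n) ≡ 1 (mod n)
def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

for n in range(2, 30):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1
```

Note that \( {HK} = {S}_{3} \) here even though \( K \trianglelefteq {S}_{3} \) while \( H \) is not normal; Proposition 2.5.1 needs no normality assumption.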
1329_[肖梁] Abstract Algebra (2022F)
Definition 16.1.1
Definition 16.1.1. The characteristic of a field \( F \), denoted by \( \operatorname{char}\left( F\right) \), is - the smallest positive integer \( p \), such that \( \underset{p}{\underbrace{1 + \cdots + 1}} = 0 \) if such \( p \) exists, and - 0 , otherwise. Remark 16.1.2. (1) If \( \operatorname{char}\left( F\right) > 0 \), it must be a prime number \( p \) . This is because if \( \operatorname{char}\left( F\right) = m \cdot n \) with \( 1 < m, n < {mn} \), we have \( {mn} = 0 \) in \( F \) ; as a field has no zero divisors, either \( m = 0 \) or \( n = 0 \) in \( F \), contradicting the minimality of \( \operatorname{char}\left( F\right) \) . (2) We sometimes also use the letter \( K \) to denote a field; this comes from the German word for field: Körper (body). Definition 16.1.3. The prime field of a field \( F \) is the smallest subfield of \( F \) containing \( {1}_{F} \) ; it is - \( {\mathbb{F}}_{p} \), if \( \operatorname{char}\left( F\right) = p > 0 \), or - \( \mathbb{Q} \), if \( \operatorname{char}\left( F\right) = 0 \) . 16.2. Field extensions. Notation 16.2.1. If \( F \) is a subfield of a field \( K \), we say that \( K \) is a field extension of \( F \) . Sometimes, we call \( F \) the base field. Any field \( E \) that sits as \( F \subseteq E \subseteq K \) is called an intermediate field. We often write ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_99_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_99_0.jpg) Note also that \( F \subseteq K \) makes \( K \) an \( F \) -vector space. Definition 16.2.2. The degree of the field extension \( K \) of \( F \) is \( \left\lbrack {K : F}\right\rbrack = {\dim }_{F}K \) . The extension is finite/infinite if \( \left\lbrack {K : F}\right\rbrack \) is. Theorem 16.2.3. Let \( F \subseteq E \subseteq K \) be field extensions. Then \( \left\lbrack {K : F}\right\rbrack = \left\lbrack {K : E}\right\rbrack \left\lbrack {E : F}\right\rbrack \) . Proof.
![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_99_1.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_99_1.jpg) Set \( \left\lbrack {K : E}\right\rbrack = m \) and \( \left\lbrack {E : F}\right\rbrack = n \) . Let \( \left\{ {{\alpha }_{1},\ldots ,{\alpha }_{m}}\right\} \) be an \( E \) -basis of \( K \) and \( \left\{ {{\beta }_{1},\ldots ,{\beta }_{n}}\right\} \) be an \( F \) -basis of \( E \) . Then every element \( x \) of \( K \) can be written as a sum \[ {c}_{1}{\alpha }_{1} + \cdots + {c}_{m}{\alpha }_{m}\;\text{ with each }{c}_{i} \in E \] and in turn each \( {c}_{i} \) can be written as a sum \[ {c}_{i} = {d}_{i1}{\beta }_{1} + \cdots + {d}_{in}{\beta }_{n}\;\text{ with each }{d}_{ij} \in F. \] Thus \( x \) can be written as \[ x = \mathop{\sum }\limits_{{i = 1}}^{m}\mathop{\sum }\limits_{{j = 1}}^{n}{d}_{ij}{\alpha }_{i}{\beta }_{j} \] This shows that \( \left\{ {{\alpha }_{i}{\beta }_{j} \mid i = 1,\ldots, m, j = 1,\ldots, n}\right\} \) generate \( K \) as an \( F \) -vector space. Next we show that these \( {\alpha }_{i}{\beta }_{j} \) ’s are \( F \) -linearly independent. Indeed, suppose that there exist \( {c}_{ij} \in F \) for \( i = 1,\ldots, m \) and \( j = 1,\ldots, n \) such that \[ \mathop{\sum }\limits_{{i = 1}}^{m}\mathop{\sum }\limits_{{j = 1}}^{n}{c}_{ij}{\alpha }_{i}{\beta }_{j} = 0 \] Then note that for each fixed \( i,\mathop{\sum }\limits_{{j = 1}}^{n}{c}_{ij}{\beta }_{j} \in E \) . As \( {\alpha }_{1},\ldots ,{\alpha }_{m} \) form an \( E \) -basis of \( K \), we must have \[ \text{for every}i\text{, the coefficient of}{\alpha }_{i}\text{, namely}\mathop{\sum }\limits_{{j = 1}}^{n}{c}_{ij}{\beta }_{j} = 0\text{.} \] Moreover, as \( {\beta }_{1},\ldots ,{\beta }_{n} \) form an \( F \) -basis of \( E \), we deduce that all \( {c}_{ij} = 0 \) . Example 16.2.4. ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_100_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_100_0.jpg) 16.3. Construction of field extensions. We start with an important fact.
Lemma 16.3.1. Let \( F \) and \( E \) be fields. A homomorphism \( \phi : F \rightarrow E \) must be injective. This then realizes \( E \) as an extension of \( \phi \left( F\right) \simeq F \) . Proof. As \( \ker \phi \) is an ideal of \( F \), it is either \( \{ 0\} \) or \( F \) . But our convention for ring homomorphisms requires sending \( {1}_{F} \) to \( {1}_{E} \) . So \( \ker \phi \neq F \), and thus \( \ker \phi = \{ 0\} \), i.e. \( \phi \) is injective. 16.3.2. Construction of field extensions. Let \( F \) be a field and let \( p\left( x\right) \in F\left\lbrack x\right\rbrack \) be an irreducible polynomial of degree \( n \) . Then \( \left( {p\left( x\right) }\right) \) is a prime ideal and hence a maximal ideal (as \( F\left\lbrack x\right\rbrack \) is a PID). This implies that \[ K \mathrel{\text{:=}} F\left\lbrack x\right\rbrack /\left( {p\left( x\right) }\right) \;\text{is a field.} \] We put \( \theta \mathrel{\text{:=}} x{\;\operatorname{mod}\;\left( {p\left( x\right) }\right) } \in K \) . Then \[ K = \left\{ {{a}_{0} + {a}_{1}\theta + \cdots + {a}_{n - 1}{\theta }^{n - 1} \mid {a}_{0},\ldots ,{a}_{n - 1} \in F}\right\} . \] (This follows from the fact that every polynomial \( a\left( x\right) \) can be uniquely written as \( a\left( x\right) = \) \( q\left( x\right) p\left( x\right) + r\left( x\right) \) with \( \deg r\left( x\right) < n \) ; then \( a\left( x\right) {\;\operatorname{mod}\;\left( {p\left( x\right) }\right) } = r\left( x\right) \) .) So \( K \) is an \( F \) -vector space of dimension \( n \) . Moreover, \( F \) embeds in \( K \) as constant polynomials. We say that \( K \) is the extension of \( F \) of degree \( n \) determined by \( p\left( x\right) \) . Lemma 16.3.3. The equation \( p\left( x\right) = 0 \) has a zero in \( K \) . Proof. Assume that \( p\left( x\right) = {p}_{0} + {p}_{1}x + \cdots + {p}_{n}{x}^{n} \) .
Then \[ p\left( \theta \right) = {p}_{0} + {p}_{1}\theta + \cdots + {p}_{n}{\theta }^{n} = {p}_{0} + {p}_{1}x + \cdots + {p}_{n}{x}^{n} + \left( {p\left( x\right) }\right) = 0 + \left( {p\left( x\right) }\right) . \] So \( \theta \) is a "tautological" zero of \( p\left( x\right) = 0 \) in \( K \) . Example 16.3.4. (1) Recall the natural isomorphism \[ \mathbb{R}\left\lbrack x\right\rbrack /\left( {{x}^{2} + 1}\right) \overset{ \simeq }{ \rightarrow }\mathbb{C} \] \[ a + {bx} \mapsto a + b\mathbf{i} \] But from now on, we will try to distinguish these two: \( \mathbb{R}\left\lbrack x\right\rbrack /\left( {{x}^{2} + 1}\right) \) is an abstractly constructed field extension of \( \mathbb{R} \) . It is isomorphic to \( \mathbb{C} \) . BUT there are TWO WAYS to make such an isomorphism \[ {\phi }_{1},{\phi }_{2} : \mathbb{R}\left\lbrack x\right\rbrack /\left( {{x}^{2} + 1}\right) \overset{ \simeq }{ \rightarrow }\mathbb{C} \] \[ {\phi }_{1}\left( {a + {bx}}\right) = a + b\mathbf{i}\;\text{ and }\;{\phi }_{2}\left( {a + {bx}}\right) = a - b\mathbf{i}. \] (2) For \( K = \mathbb{Q}\left\lbrack x\right\rbrack /\left( {{x}^{3} - 2}\right) \), we have three realizations: - Realization 1: given by \[ {\iota }_{1} : K \hookrightarrow \mathbb{R} \] \[ x \mapsto \sqrt[3]{2}\text{.} \] So \( K \simeq {\iota }_{1}\left( K\right) \) is a subfield of \( \mathbb{R} \) . - Realizations 2 and 3: given by \[ {\iota }_{2},{\iota }_{3} : K \hookrightarrow \mathbb{C} \] \[ x \mapsto \left\{ \begin{array}{l} {\iota }_{2}\left( x\right) = {e}^{{2\pi }\mathbf{i}/3}\sqrt[3]{2} \\ {\iota }_{3}\left( x\right) = {e}^{{4\pi }\mathbf{i}/3}\sqrt[3]{2} \end{array}\right. \] So \( {\iota }_{2}\left( K\right) \) and \( {\iota }_{3}\left( K\right) \) are subfields of \( \mathbb{C} \) different from \( {\iota }_{1}\left( K\right) \) . But they are abstractly isomorphic (for the purpose of algebraic operations).
(3) \( {\mathbb{F}}_{2}\left\lbrack x\right\rbrack /\left( {{x}^{2} + x + 1}\right) \) is a field extension of \( {\mathbb{F}}_{2} \) of degree 2 . This gives a field \( K \) of 4 elements. Definition 16.3.5. Let \( K \) be an extension of \( F \), and let \( {\alpha }_{1},\ldots ,{\alpha }_{n} \in K \) . (1) The field extension of \( F \) generated by \( {\alpha }_{1},\ldots ,{\alpha }_{n} \), denoted by \( F\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \), is the smallest subfield of \( K \) containing \( F \) and \( {\alpha }_{1},\ldots ,{\alpha }_{n} \) . (2) If \( K = F\left( \alpha \right) \) for some \( \alpha \in K \), then we say that \( K \) is a simple extension of \( F \) . (3) If \( K = F\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \), we say that \( K \) is a finitely generated extension of \( F \) . We remark that \( F\left( {{\alpha }_{1},{\alpha }_{2}}\right) = \left( {F\left( {\alpha }_{1}\right) }\right) \left( {\alpha }_{2}\right) \) . Theorem 16.3.6. Let \( K \) be a field extension of \( F \) and let \( \alpha \in K \) . We have a dichotomy: (1) either \( 1,\alpha ,{\alpha }^{2},\ldots \) are linearly independent over \( F \), in which case \( F\left( \alpha \right) \simeq F\left( x\right) = \operatorname{Frac}\left( {F\left\lbrack x\right\rbrack }\right) \), (2) or \( 1,\alpha ,{\alpha }^{2},\ldots \) are linearly dependent over \( F \), in which case, there exists a unique monic polynomial \( {m}_{\alpha }\left( x\right) = {m}_{\alpha, F}\left( x\right) \), called the minimal polynomial of \( \alpha \) over \( F \) , that is irreducible over \( F \) and \( {m}_{\alpha }\left( \alpha \right) = 0 \) . Moreover, \( F\left( \alpha \right) \simeq F\left\lbrack x\right\rbrack /\left( {{m}_{\alpha }\left( x\right) }\right) \) and \( \left\lbrack {F\left( \alpha \right) : F}\right\rbrack = \deg {m}_{\alpha }\left( x\right) \) . Proof.
Consider case (1): the condition implies that \[ \phi : F\left\lbrack x\right\rbrack \hookrightarrow K \] \[ f\left( x\right) \mapsto f\left( \alpha \right) \] is an injective homomorphism. This clearly extends to a homomorphism \[ \phi : F\left( x\right) \hookrightarrow K \] \[ f\left( x\right) /g\left( x\right) \mapsto f\left( \alpha \right) /g\left( \alpha \right) \] as \( g\left( \alpha \right) \neq 0 \) whenever \( g \neq 0 \) . This \( \phi \) must be injective by Lemma 16.3.1. Its image is \( \phi \left( {F\left( x\right) }\right) = F\left( \alpha \right) \), so \( F\left( \alpha \right) \simeq F\left( x\right) \) .
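The construction of 16.3.2 can be carried out by hand for Example 16.3.4 (3). A minimal sketch (the bitmask encoding of \( {\mathbb{F}}_{2} \) -polynomials is my own choice, not the notes') that builds \( {\mathbb{F}}_{2}\left\lbrack x\right\rbrack /\left( {{x}^{2} + x + 1}\right) \), exhibits the tautological zero \( \theta \) of Lemma 16.3.3, and checks that every nonzero element is invertible, so the quotient really is a field of 4 elements:

```python
# Elements of F_2[x] as bitmasks: bit i is the coefficient of x^i.
P = 0b111  # p(x) = x^2 + x + 1, irreducible over F_2

def mul(a, b, p=P, deg=2):
    """Multiply in F_2[x]/(p(x)): carry-less product, then reduce mod p."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
    while r.bit_length() > deg:          # polynomial division step
        r ^= p << (r.bit_length() - deg - 1)
    return r

K = list(range(4))   # {0, 1, theta, theta + 1}: an F_2-vector space of dim 2
theta = 0b10         # theta := x mod (p(x))

# theta is a zero of p(x): theta^2 + theta + 1 = 0 in K  (Lemma 16.3.3)
assert mul(theta, theta) ^ theta ^ 1 == 0

# every nonzero element has a multiplicative inverse, so K is a field
for a in K:
    if a:
        assert any(mul(a, b) == 1 for b in K)
```

For instance \( \theta \cdot \left( {\theta + 1}\right) = {\theta }^{2} + \theta = 1 \), so \( \theta \) and \( \theta + 1 \) are mutually inverse.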
1189_(GTM95)Probability-1
Definition 1
Definition 1. A sequence of random variables \( {\xi }_{1},\ldots ,{\xi }_{n} \) is called a martingale (with respect to the decompositions \( {\mathcal{D}}_{1} \preccurlyeq {\mathcal{D}}_{2} \preccurlyeq \cdots \preccurlyeq {\mathcal{D}}_{n} \) ) if (1) \( {\xi }_{k} \) is \( {\mathcal{D}}_{k} \) -measurable, (2) \( \mathrm{E}\left( {{\xi }_{k + 1} \mid {\mathcal{D}}_{k}}\right) = {\xi }_{k},1 \leq k \leq n - 1 \) . In order to emphasize the system of decompositions with respect to which the random variables \( \xi = \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \) form a martingale, we shall use the notation \[ \xi = {\left( {\xi }_{k},{\mathcal{D}}_{k}\right) }_{1 \leq k \leq n} \] (1) where for the sake of simplicity we often do not mention explicitly that \( 1 \leq k \leq n \) . When \( {\mathcal{D}}_{k} \) is induced by \( {\xi }_{1},\ldots ,{\xi }_{k} \), i.e., \[ {\mathcal{D}}_{k} = {\mathcal{D}}_{{\xi }_{1},\ldots ,{\xi }_{k}} \] instead of saying that \( \xi = \left( {{\xi }_{k},{\mathcal{D}}_{k}}\right) \) is a martingale, we simply say that the sequence \( \xi = \left( {\xi }_{k}\right) \) is a martingale. Here are some examples of martingales. Example 1. Let \( {\eta }_{1},\ldots ,{\eta }_{n} \) be independent Bernoulli random variables with \[ \mathrm{P}\left( {{\eta }_{k} = 1}\right) = \mathrm{P}\left( {{\eta }_{k} = - 1}\right) = \frac{1}{2}, \] \[ {S}_{k} = {\eta }_{1} + \cdots + {\eta }_{k}\;\text{ and }\;{\mathcal{D}}_{k} = {\mathcal{D}}_{{\eta }_{1},\ldots ,{\eta }_{k}}. 
\] We observe that the decompositions \( {\mathcal{D}}_{k} \) have a simple structure: \[ {\mathcal{D}}_{1} = \left\{ {{D}^{ + },{D}^{ - }}\right\} \] where \( {D}^{ + } = \left\{ {\omega : {\eta }_{1} = + 1}\right\} ,{D}^{ - } = \left\{ {\omega : {\eta }_{1} = - 1}\right\} \) ; \[ {\mathcal{D}}_{2} = \left\{ {{D}^{+ + },{D}^{+ - },{D}^{- + },{D}^{- - }}\right\} \] where \( {D}^{+ + } = \left\{ {\omega : {\eta }_{1} = + 1,{\eta }_{2} = + 1}\right\} ,\ldots ,{D}^{- - } = \left\{ {\omega : {\eta }_{1} = - 1,{\eta }_{2} = - 1}\right\} \) , etc. It is also easy to see that \( {\mathcal{D}}_{{\eta }_{1},\ldots ,{\eta }_{k}} = {\mathcal{D}}_{{S}_{1},\ldots ,{S}_{k}} \) . Let us show that \( {\left( {S}_{k},{\mathcal{D}}_{k}\right) }_{1 \leq k \leq n} \) form a martingale. In fact, \( {S}_{k} \) is \( {\mathcal{D}}_{k} \) -measurable, and by (12) and (18) of Sect. 8 \[ \mathrm{E}\left( {{S}_{k + 1} \mid {\mathcal{D}}_{k}}\right) = \mathrm{E}\left( {{S}_{k} + {\eta }_{k + 1} \mid {\mathcal{D}}_{k}}\right) \] \[ = \mathrm{E}\left( {{S}_{k} \mid {\mathcal{D}}_{k}}\right) + \mathrm{E}\left( {{\eta }_{k + 1} \mid {\mathcal{D}}_{k}}\right) = {S}_{k} + \mathrm{E}{\eta }_{k + 1} = {S}_{k}. \] If we put \( {S}_{0} = 0 \) and take \( {D}_{0} = \{ \Omega \} \), the trivial decomposition, then the sequence \( {\left( {S}_{k},{\mathcal{D}}_{k}\right) }_{0 \leq k \leq n} \) also forms a martingale. Example 2. Let \( {\eta }_{1},\ldots ,{\eta }_{n} \) be independent Bernoulli random variables with \( \mathrm{P}\left( {{\eta }_{i} = 1}\right) = p,\mathrm{P}\left( {{\eta }_{i} = - 1}\right) = q \) . If \( p \neq q \), each of the sequences \( \xi = \left( {\xi }_{k}\right) \) with \[ {\xi }_{k} = {\left( \frac{q}{p}\right) }^{{S}_{k}},\;{\xi }_{k} = {S}_{k} - k\left( {p - q}\right) ,\;\text{ where }\;{S}_{k} = {\eta }_{1} + \cdots + {\eta }_{k}, \] is a martingale. Example 3. 
Let \( \eta \) be a random variable, \( {\mathcal{D}}_{1} \preccurlyeq \cdots \preccurlyeq {\mathcal{D}}_{n} \), and \[ {\xi }_{k} = \mathrm{E}\left( {\eta \mid {\mathcal{D}}_{k}}\right) \] (2) Then the sequence \( \xi = {\left( {\xi }_{k},{\mathcal{D}}_{k}\right) }_{1 \leq k \leq n} \) is a martingale. In fact, it is evident that \( \mathrm{E}\left( {\eta \mid {\mathcal{D}}_{k}}\right) \) is \( {\mathcal{D}}_{k} \) -measurable, and by (20) of Sect. 8 \[ \mathrm{E}\left( {{\xi }_{k + 1} \mid {\mathcal{D}}_{k}}\right) = \mathrm{E}\left\lbrack {\mathrm{E}\left( {\eta \mid {\mathcal{D}}_{k + 1}}\right) \mid {\mathcal{D}}_{k}}\right\rbrack = \mathrm{E}\left( {\eta \mid {\mathcal{D}}_{k}}\right) = {\xi }_{k}. \] In this connection we notice that if \( \xi = \left( {{\xi }_{k},{\mathcal{D}}_{k}}\right) \) is any martingale, then by (20) of Sect. 8 \[ {\xi }_{k} = \mathrm{E}\left( {{\xi }_{k + 1} \mid {\mathcal{D}}_{k}}\right) = \mathrm{E}\left\lbrack {\mathrm{E}\left( {{\xi }_{k + 2} \mid {\mathcal{D}}_{k + 1}}\right) \mid {\mathcal{D}}_{k}}\right\rbrack \] \[ = \mathrm{E}\left( {{\xi }_{k + 2} \mid {\mathcal{D}}_{k}}\right) = \cdots = \mathrm{E}\left( {{\xi }_{n} \mid {\mathcal{D}}_{k}}\right) . \] (3) Consequently the set of martingales \( \xi = \left( {{\xi }_{k},{\mathcal{D}}_{k}}\right) \) is exhausted by the martingales of the form (2). (We note that for infinite sequences \( \xi = {\left( {\xi }_{k},{\mathcal{D}}_{k}\right) }_{k \geq 1} \) this is, in general, no longer the case; see Problem 6 in Sect. 1 of Chap. 7, Vol. 2.) Example 4. Let \( {\eta }_{1},\ldots ,{\eta }_{n} \) be a sequence of independent identically distributed random variables, \( {S}_{k} = {\eta }_{1} + \cdots + {\eta }_{k} \), and \( {\mathcal{D}}_{1} = {\mathcal{D}}_{{S}_{n}},{\mathcal{D}}_{2} = {\mathcal{D}}_{{S}_{n},{S}_{n - 1}},\ldots ,{\mathcal{D}}_{n} = \) \( {\mathcal{D}}_{{S}_{n},{S}_{n - 1},\ldots ,{S}_{1}} \) . 
Let us show that the sequence \( \xi = \left( {{\xi }_{k},{\mathcal{D}}_{k}}\right) \) with \[ {\xi }_{1} = \frac{{S}_{n}}{n},{\xi }_{2} = \frac{{S}_{n - 1}}{n - 1},\ldots ,{\xi }_{k} = \frac{{S}_{n + 1 - k}}{n + 1 - k},\ldots ,{\xi }_{n} = {S}_{1} \] is a martingale. In the first place, it is clear that \( {\mathcal{D}}_{k} \preccurlyeq {\mathcal{D}}_{k + 1} \) and \( {\xi }_{k} \) is \( {\mathcal{D}}_{k} \) -measurable. Moreover, we have by symmetry, for \( j \leq n - k + 1 \) , \[ \mathrm{E}\left( {{\eta }_{j} \mid {\mathcal{D}}_{k}}\right) = \mathrm{E}\left( {{\eta }_{1} \mid {\mathcal{D}}_{k}}\right) \] (4) (compare (26), Sect. 8). Therefore \[ \left( {n - k + 1}\right) \mathrm{E}\left( {{\eta }_{1} \mid {\mathcal{D}}_{k}}\right) = \mathop{\sum }\limits_{{j = 1}}^{{n - k + 1}}\mathrm{E}\left( {{\eta }_{j} \mid {\mathcal{D}}_{k}}\right) = \mathrm{E}\left( {{S}_{n - k + 1} \mid {\mathcal{D}}_{k}}\right) = {S}_{n - k + 1}, \] and consequently \[ {\xi }_{k} = \frac{{S}_{n - k + 1}}{n - k + 1} = \mathrm{E}\left( {{\eta }_{1} \mid {\mathcal{D}}_{k}}\right) \] and it follows from Example 3 that \( \xi = \left( {{\xi }_{k},{\mathcal{D}}_{k}}\right) \) is a martingale. Remark. From this martingale property of the sequence \( \xi = {\left( {\xi }_{k},{\mathcal{D}}_{k}\right) }_{1 \leq k \leq n} \), it is clear why we will sometimes say that the sequence \( {\left( {S}_{k}/k\right) }_{1 \leq k \leq n} \) forms a reversed martingale. (Compare Problem 5 in Sect. 1, Chap. 7, Vol. 2). Example 5. Let \( {\eta }_{1},\ldots ,{\eta }_{n} \) be independent Bernoulli random variables with \[ \mathrm{P}\left( {{\eta }_{i} = + 1}\right) = \mathrm{P}\left( {{\eta }_{i} = - 1}\right) = \frac{1}{2}, \] \( {S}_{k} = {\eta }_{1} + \cdots + {\eta }_{k} \) . Let \( A \) and \( B \) be integers, \( A < 0 < B \) . 
Then with \( 0 < \lambda < \pi /2 \) , the sequence \( \xi = \left( {{\xi }_{k},{\mathcal{D}}_{k}}\right) \) with \( {\mathcal{D}}_{k} = {\mathcal{D}}_{{S}_{1},\ldots ,{S}_{k}} \) and \[ {\xi }_{k} = {\left( \cos \lambda \right) }^{-k}\exp \left\{ {{i\lambda }\left( {{S}_{k} - \frac{B + A}{2}}\right) }\right\} ,\;1 \leq k \leq n, \] (5) is a complex martingale (i.e., the real and imaginary parts of \( {\xi }_{k},1 \leq k \leq n \), form martingales). 3. It follows from the definition of a martingale that the expectation \( \mathrm{E}{\xi }_{k} \) is the same for every \( k \) : \[ \mathrm{E}{\xi }_{k} = \mathrm{E}{\xi }_{1} \] It turns out that this property persists if time \( k \) is replaced by a stopping time. In order to formulate this property we introduce the following definition. Definition 2. A random variable \( \tau = \tau \left( \omega \right) \) that takes the values \( 1,2,\ldots, n \) is called a stopping time (with respect to decompositions \( {\left( {\mathcal{D}}_{k}\right) }_{1 \leq k \leq n},{\mathcal{D}}_{1} \preccurlyeq {\mathcal{D}}_{2} \preccurlyeq \cdots \preccurlyeq {\mathcal{D}}_{n} \) ) if, for any \( k = 1,\ldots, n \), the random variable \( {I}_{\{ \tau = k\} }\left( \omega \right) \) is \( {\mathcal{D}}_{k} \) -measurable. If we consider \( {\mathcal{D}}_{k} \) as the decomposition induced by observations for \( k \) steps (for example, \( {\mathcal{D}}_{k} = {\mathcal{D}}_{{\eta }_{1},\ldots ,{\eta }_{k}} \), the decomposition induced by the variables \( {\eta }_{1},\ldots ,{\eta }_{k} \) ), then the \( {\mathcal{D}}_{k} \) -measurability of \( {I}_{\{ \tau = k\} }\left( \omega \right) \) means that the realization or nonrealization of the event \( \{ \tau = k\} \) is determined only by observations for \( k \) steps (and is independent of the "future").
If \( {\mathcal{B}}_{k} = \alpha \left( {\mathcal{D}}_{k}\right) \), then the \( {\mathcal{D}}_{k} \) -measurability of \( {I}_{\{ \tau = k\} }\left( \omega \right) \) is equivalent to the assumption that \[ \{ \tau = k\} \in {\mathcal{B}}_{k} \] (6) We have already encountered specific examples of stopping times: the times \( {\tau }_{k}^{x} \) and \( {\sigma }_{2n} \) introduced in Sects. 9 and 10. Those times are special cases of stopping times of the form \[ {\tau }^{A} = \min \left\{ {0 < k \leq n : {\xi }_{k} \in A}\right\} \] (7) \[ {\sigma }^{A} = \min \left\{ {0 \leq k \leq n : {\xi }_{k} \in A}\right\} \] which are the times (respectively the first time after zero and the first time) for a sequence \( {\xi }_{0},{\xi }_{1},\ldots ,{\xi }_{n} \) to attain a point of the set \( A \) . 4. Theorem 1. Let \( \xi = {\left( {\xi }_{k},{\mathcal{D}}_{k}\right) }_{1 \leq k \leq n} \) be a martingale and \( \tau \) a stopping time with respect to the decompositions \( {\left( {\mathcal{D}}_{k}\right) }_{1 \leq k \leq n} \) . Then \[ \mathrm{E}{\xi }_{\tau } = \mathrm{E}{\xi }_{1}. \]
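For the Bernoulli walk of Example 1, both the martingale property and the constancy of expectations at a stopping time can be checked exhaustively for small \( n \) by summing over all \( {2}^{n} \) equally likely sign paths. A sketch of such an enumeration (my own; the stopping time is a capped version of \( {\tau }^{A} \) from (7) with \( A = \{ 1\} \) ):

```python
from itertools import product
from fractions import Fraction

n = 6
paths = list(product([1, -1], repeat=n))    # all 2^n equally likely paths

def S(path, k):                             # S_k = eta_1 + ... + eta_k
    return sum(path[:k])

# Martingale property: E(S_{k+1} | D_k) = S_k on every atom of D_k,
# i.e. averaging S_{k+1} over the paths sharing a given k-prefix gives S_k.
for k in range(n):
    atoms = {}
    for p in paths:
        atoms.setdefault(p[:k], []).append(p)
    for prefix, atom in atoms.items():
        avg = Fraction(sum(S(p, k + 1) for p in atom), len(atom))
        assert avg == S(atom[0], k)

# Stopping time tau = min{k : S_k = 1}, capped at n; {tau = k} depends only
# on the first k steps, so tau is a stopping time in the sense of Definition 2.
def tau(p):
    return next((k for k in range(1, n + 1) if S(p, k) == 1), n)

# E S_tau = E S_1 = 0, as Theorem 1 predicts (exact: integer sum over paths)
assert sum(S(p, tau(p)) for p in paths) == 0
```

Using `Fraction` keeps the conditional expectations exact rather than floating point.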
109_The rising sea Foundations of Algebraic Geometry
Definition 12.50
Definition 12.50. We denote by \( {\sum }_{d} \) the regular cell complex consisting of the cells \( {e}_{A} \) (where \( A \) ranges over the spherical simplices of \( \sum \) ), and we call \( {\sum }_{d} \) the dual of \( \sum \) . If \( \sum = \sum \left( {W, S}\right) \) for some Coxeter system \( \left( {W, S}\right) \), then we set \( {\sum }_{d} = {\sum }_{d}\left( {W, S}\right) \) and call it the dual Coxeter complex of \( \left( {W, S}\right) \) . Note that the correspondence \( A \mapsto {e}_{A} \) is order-reversing, so there is an order-preserving bijection between the (nonempty) cells of \( {\sum }_{d}\left( {W, S}\right) \) and the finite standard cosets in \( W \), ordered by inclusion. Remark 12.51. If we had wanted to rely on Appendix A, we could have simply applied Proposition A. 25 to the poset of finite standard cosets (ordered by inclusion). The reader might find it instructive to carry out this approach as an exercise. We repeat, as we already remarked above, that if \( \sum \) is finite then \( X = \) \( \left| {\sum }_{d}\right| \) is topologically the cone over \( \left| \sum \right| \) ; hence it is a ball. In fact, the whole space is equal to the unique top-dimensional cell \( {e}_{\varnothing } \) corresponding to the empty simplex. Its bounding sphere is the usual Coxeter complex, with the cell decomposition dual to the standard triangulation. ## 12.3.4 Properties Fix a Coxeter system \( \left( {W, S}\right) \) . We summarize for ease of reference some properties of \( {\sum }_{d} = {\sum }_{d}\left( {W, S}\right) \) . All are easy to verify right from the definition and/or from the correspondence between \( {\sum }_{d} \) and the finite standard cosets. Some of these properties were already stated in the course of the construction of \( {\sum }_{d} \) . (1) \( W \) operates on \( {\sum }_{d} \), and the cell stabilizers are the finite parabolic subgroups of \( W \) . 
[Warning: In contrast to the situation for the ordinary Coxeter complex, the cell stabilizers do not fix the cells pointwise. For example, the stabilizer of every edge is a group of order 2 whose generator flips the edge.] (2) The 1-skeleton of \( {\sum }_{d} \) is the Cayley graph of \( \left( {W, S}\right) \) . (3) There is a set of cells \( L \subseteq {\sum }_{d} \) such that the cell stabilizers \( {W}_{e} \) for \( e \in L \) are the finite standard parabolic subgroups of \( W \) . More precisely, the function \( e \mapsto {W}_{e} \) is an order-preserving bijection from \( L \) to the set of finite standard parabolic subgroups of \( W \), ordered by inclusion. Every cell in \( {\sum }_{d} \) is \( W \) -equivalent to a unique cell in \( L \) . (4) The bijection in (3) extends to an order-preserving, \( W \) -equivariant bijection from the cells of \( {\sum }_{d} \) to the set of finite standard cosets \( w{W}_{J} \) in \( W \) . (5) If \( W \) is finite, then \( \left| {\sum }_{d}\right| \) is a ball of dimension equal to the rank of \( \left( {W, S}\right) \) . (6) For any standard subgroup \( {W}_{J} \) of \( W\left( {J \subseteq S}\right) \), the dual Coxeter complex \( {\sum }_{d}^{J} \mathrel{\text{:=}} \sum \left( {{W}_{J}, J}\right) \) is a subcomplex of \( {\sum }_{d} \) . In case \( {W}_{J} \) is finite, the ball \( \left| {\sum }_{d}^{J}\right| \) is the cell of \( {\sum }_{d} \) corresponding to \( {W}_{J} \) under the bijection in (3). Note that \( L \) in (3) is simply the set of cells \( {e}_{A} \) such that \( A \) is a face of the fundamental chamber in \( \sum \) . The reader may find it helpful to explicitly work through the definition of \( {\sum }_{d} \) in the case of Example 12.48 (starting with the barycentric subdivision of \( \sum \) ) to see how it leads to the picture in Figure 12.8. Example 12.49 is also instructive; here one starts with the cone over the barycentric subdivision of \( \sum \) . 
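Property (2) can be made concrete in the smallest nonabelian case \( W = {S}_{3} \), \( S = \left\{ {\left( {12}\right) ,\left( {23}\right) }\right\} \): the Cayley graph, and hence the 1-skeleton of \( {\sum }_{d} \), is a hexagon, bounding the single top-dimensional cell as in the dihedral picture. A brute-force sketch (permutations encoded as tuples on \( \{ 0,1,2\} \); the encoding is mine, not the book's):

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

W = list(permutations(range(3)))   # S_3, |W| = 6
S = [(1, 0, 2), (0, 2, 1)]         # the adjacent transpositions (12), (23)

# Cayley graph: w adjacent to w*s for s in S (right multiplication);
# each s is an involution, so the graph is undirected.
adj = {w: {compose(w, s) for s in S} for w in W}

assert all(len(nbrs) == 2 for nbrs in adj.values())   # 2-regular

# connectivity check: a connected 2-regular graph on 6 vertices is a 6-cycle
seen, frontier = {W[0]}, [W[0]]
while frontier:
    w = frontier.pop()
    for v in adj[w]:
        if v not in seen:
            seen.add(v)
            frontier.append(v)
assert len(seen) == 6
```

Since the graph is connected and 2-regular on 6 vertices, it is a single 6-cycle, matching the hexagonal boundary of the rank-2 dual Coxeter complex.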
We close this subsection with one nontrivial property of the dual Coxeter complex. Proposition 12.52. The underlying space \( X = \left| {\sum }_{d}\right| \) is always contractible. Sketch of proof. As in Section 4.12, our proof will be complete except for (routine) homotopy-theoretic details. The proposition is trivial if \( \sum \) is finite, so we may assume that it is infinite and hence contractible (Theorem 4.127). Let \( {\sum }^{\prime } \) be the set of nonempty simplices. Then the flag complex \( K\left( {\sum }^{\prime }\right) \) is the barycentric subdivision of \( \sum \) and hence is also contractible. So it suffices to show that the inclusion of \( X = \left| {K\left( {\sum }_{f}\right) }\right| \) into \( \left| {K\left( {\sum }^{\prime }\right) }\right| \) is a homotopy equivalence. We can build the poset \( {\sum }^{\prime } \) from \( {\sum }_{f} \) by successively adjoining elements \( A \in {\sum }^{\prime } \smallsetminus {\sum }_{f} \) in order of increasing codimension of \( A \) as a simplex of \( \sum \) . Since \( {\sum }_{f} \) already contains all simplices of codimensions 0 and 1 (chambers and panels), this means that we start with the simplices \( A \) of codimension 2, then those of codimension 3, and so on. For fixed codimension, the adjunctions can be done in any order. Each time we adjoin an element \( A \) of \( {\sum }^{\prime } \smallsetminus {\sum }_{f} \), all \( B > A \) are already present, but no \( B \leq A \) is present. On the level of flag complexes, then, the effect of the adjunction is to cone off the subcomplex \( K\left( {\sum }_{ > A}\right) \) . But \( K\left( {\sum }_{ > A}\right) \) is the barycentric subdivision of \( {\operatorname{lk}}_{\sum }A \), which is an infinite Coxeter complex and hence is contractible. It follows that none of the adjunctions change the homotopy type. ## Exercises 12.53. What is \( L \) in Example 12.48? Locate the cells of \( L \) in Figure 12.9 and describe their stabilizers. 
12.54. If \( \left( {W, S}\right) \) has irreducible components \( \left( {{W}_{1},{S}_{1}}\right) ,\ldots ,\left( {{W}_{k},{S}_{k}}\right) \), describe \( {\sum }_{d}\left( {W, S}\right) \) in terms of the cell complexes \( {\sum }_{d}\left( {{W}_{i},{S}_{i}}\right) \) . Contrast this with the behavior of the ordinary (simplicial) Coxeter complex \( \sum \left( {W, S}\right) \) . ## 12.3.5 Remarks on the Spherical Case Let \( \left( {W, S}\right) \) be a Coxeter system with \( W \) finite. We outline here an alternative construction of \( {\sum }_{d} = {\sum }_{d}\left( {W, S}\right) \) based on the duality theory for polytopes as given, for instance, in Ziegler [286]. To apply this theory one needs to know that \( \sum \left( {W, S}\right) \) can be realized as the boundary of a polytope \( \widehat{\sum } \), whose facets cut across the chambers. We have not mentioned this before, but it is a general fact about the cell decomposition of the sphere induced by an essential hyperplane arrangement \( \mathcal{H} \) . A proof is given in Appendix A (see Section A.2.3). Other proofs can be found in [37, Example 4.1.7; 286, Corollary 7.18]. A simple illustration of this fact is given in Figure A.4. For a second example, consider the group of type \( {\mathrm{A}}_{3} \) discussed in Example 12.49, and imagine "flattening out" each spherical triangle in Figure 0.1 or 1.3, i.e., replacing it by the (Euclidean) convex hull of its vertices. The resulting polytope is \( \widehat{\sum } \) . By the well-known duality theory of polytopes, which we briefly review in Section A.2.2, \( \widehat{\sum } \) has a dual polytope \( P \), called the zonotope associated with \( \mathcal{H} \) . For the reflection arrangement of type \( {\mathrm{A}}_{3}, P \) is the permutahedron in Figure 12.10. See Ziegler [286, Example 0.10] for more information about the permutahedron and for further pictures.
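The permutahedron of type \( {\mathrm{A}}_{3} \) can be checked combinatorially: its vertices are the 24 permutations of a generic point, say \( \left( {1,2,3,4}\right) \), and it is the truncated octahedron, with 36 edges. A brute-force sketch, assuming the standard fact (not stated in the text) that two vertices span an edge exactly when they differ by transposing two coordinates with consecutive values:

```python
from itertools import permutations

verts = set(permutations((1, 2, 3, 4)))
assert len(verts) == 24          # W = S_4 acts simply transitively on vertices

def neighbors(v):
    """Swap the entries carrying consecutive values k and k+1 (k = 1, 2, 3)."""
    out = []
    for k in (1, 2, 3):
        i, j = v.index(k), v.index(k + 1)
        w = list(v)
        w[i], w[j] = w[j], w[i]
        out.append(tuple(w))
    return out

edges = {frozenset((v, w)) for v in verts for w in neighbors(v)}
assert all(len(neighbors(v)) == 3 for v in verts)   # 3-regular 1-skeleton
assert len(edges) == 36                             # 24 * 3 / 2
```

With \( V = {24}, E = {36} \), and the 8 hexagonal plus 6 square faces visible in Figure 12.10, Euler's formula \( V - E + F = 2 \) checks out.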
Returning now to the case of a general reflection arrangement, duality theory gives an order-reversing correspondence between the faces of \( \widehat{\sum } \) and those of \( P \), including the entire polytope and the empty face in both cases, so \( P \) provides a model for \( {\sum }_{d} \) . [Recall that a polytope is naturally the underlying space of a regular cell complex; see Section A.2.2.] In particular, the \( W \) -action is simply transitive on the vertices of \( P \) . So we can describe \( P \) as the convex hull of a generic \( W \) -orbit in the ambient space of the reflection representation of \( W \) . For definiteness, we could take the \( W \) -orbit of the point of the fundamental chamber at distance 1 from each wall. [Such a point exists and is unique because \( C \) is a simplicial cone.] Thus we could have dispensed with duality theory and simply taken this description of \( P \) as the definition of the dual Coxeter complex in the finite case. This approach is taken in Charney-Davis [79, Lemma 2.1.3]. ## 12.3.6 The Euclidean and Hyperbolic Cases If \( W \) is a Euclidean reflection group acting on a Euclidean space \( V \), then \( \left| {\sum }_{d}\right| \) can be identified with \( V \), decomposed into cells dual to the (nonempty) simplices of \( \sum \) . Here we can appeal to the theory of manifolds or simply to Proposition 10.13. Example 12.48 described a special case of this. The situation is similar if \( W \) is a hyperbolic reflection group (see Proposition 10.52), except that \( \left| {\sum }_{d}\right| \) is not the entire hyperbolic space if the fundamental polyhedron has vertices at infinity. The Coxeter group \( W = {\operatorname{PGL}}_{2}\left( \mathbb{Z}\right) \) , whose Coxeter complex was shown in Figure 2.4, provides an instructive example. If one draws the chamber graph on top of that picture, one sees a pattern of hyperbolic hexagons and squares. 
The cell complex \( {\sum }_{d} \) is obtained by filling these in. One can view it as a truncated hyperbolic plane obtained by cutting off neighborhoods of the cusps. The result is a "thickened tree," decomposed into solid hexagons and squares.
1096_(GTM252)Distributions and Operators
Definition 7.15
Definition 7.15. The symbol (defined modulo \( {S}^{-\infty }\left( \Omega \right) \) ) on the right-hand side of (7.41) is denoted \( p\left( {x,\xi }\right) \circ {p}^{\prime }\left( {x,\xi }\right) \) and is called the Leibniz product of \( p \) and \( {p}^{\prime } \). The symbol on the right-hand side of (7.40) is denoted \( {p}^{\circ * }\left( {x,\xi }\right) \). The rule for \( p \circ {p}^{\prime } \) generalizes the usual (Leibniz) rule for composition of differential operators with variable coefficients. The notation \( p\# {p}^{\prime } \) is also often used. Note that (7.41) shows \[ p\left( {x,\xi }\right) \circ {p}^{\prime }\left( {x,\xi }\right) \sim p\left( {x,\xi }\right) {p}^{\prime }\left( {x,\xi }\right) + r\left( {x,\xi }\right) \text{with} \] \[ r\left( {x,\xi }\right) \sim \mathop{\sum }\limits_{{\left| \alpha \right| \geq 1}}\frac{1}{\alpha !}{D}_{\xi }^{\alpha }p\left( {x,\xi }\right) {\partial }_{x}^{\alpha }{p}^{\prime }\left( {x,\xi }\right) , \] (7.43) where \( r \) is of order \( d + {d}^{\prime } - 1 \). Thus (7.4) has been obtained for these symbol classes with \( \operatorname{Op}\left( {pq}\right) \) of order \( d + {d}^{\prime } \) and \( \mathcal{R} \) of order \( d + {d}^{\prime } - 1 \). In calculations concerned with elliptic problems, this information is sometimes sufficient; one does not need the detailed information on the structure of \( r\left( {x,\xi }\right) \). But there are also applications where the terms in \( r \) are important, first of all the term of order \( d + {d}^{\prime } - 1 \), namely \( \mathop{\sum }\limits_{{j = 1}}^{n}{D}_{{\xi }_{j}}p\,{\partial }_{{x}_{j}}{p}^{\prime } \).
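The Leibniz product can be exercised concretely: for symbols polynomial in \( \xi \) (i.e., differential operators) the asymptotic series terminates and gives the composed symbol exactly. A minimal one-dimensional sketch with sympy, using two illustrative symbols of our own choosing:

```python
# 1-D check (n = 1) that the Leibniz product reproduces the exact
# composition of the corresponding differential operators.
import sympy as sp

x, xi = sp.symbols('x xi')
u = sp.Function('u')(x)

def op_apply(p, f):
    """Apply Op(p) for p polynomial in xi: xi^k acts as D^k = (-i d/dx)^k."""
    poly = sp.Poly(p, xi)
    return sum(c * (-sp.I)**k * sp.diff(f, x, k)
               for (k,), c in poly.terms())

def leibniz(a, b, kmax):
    """Partial Leibniz product: sum_{k<=kmax} (1/k!) D_xi^k a * d_x^k b."""
    return sum((-sp.I)**k / sp.factorial(k)
               * sp.diff(a, xi, k) * sp.diff(b, x, k)
               for k in range(kmax + 1))

p1 = xi**2 + x*xi      # illustrative order-2 symbol
p2 = x**2 * xi         # illustrative order-1 symbol

# for polynomial symbols the series is finite, so composition is exact:
lhs = op_apply(p1, op_apply(p2, u))
rhs = op_apply(sp.expand(leibniz(p1, p2, 3)), u)
assert sp.simplify(lhs - rhs) == 0
```

Truncating the sum at \( k = 1 \) instead isolates precisely the correction term of order \( d + {d}^{\prime } - 1 \) discussed above.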
For example, the commutator of \( \mathrm{{Op}}\left( p\right) \) and \( \mathrm{{Op}}\left( {p}^{\prime }\right) \) (for scalar operators) has the symbol \[ p \circ {p}^{\prime } - {p}^{\prime } \circ p \sim - i\mathop{\sum }\limits_{{j = 1}}^{n}\left( {{\partial }_{{\xi }_{j}}p{\partial }_{{x}_{j}}{p}^{\prime } - {\partial }_{{x}_{j}}p{\partial }_{{\xi }_{j}}{p}^{\prime }}\right) + {r}^{\prime }, \] with \( {r}^{\prime } \) of order \( d + {d}^{\prime } - 2 \) . The sum over \( j \) is called the Poisson bracket of \( p \) and \( {p}^{\prime } \) ; it plays a role in many considerations. Remark 7.16. We here sketch a certain spectral property. Let \( p\left( {x,\xi }\right) \in \) \( {S}^{0}\left( {\Omega ,{\mathbb{R}}^{n}}\right) \) and let \( \left( {{x}_{0},{\xi }_{0}}\right) \) be a point with \( {x}_{0} \in \Omega \) and \( \left| {\xi }_{0}\right| = 1 \) ; by translation and dilation we can obtain that \( {x}_{0} = 0 \) and \( B\left( {0,2}\right) \subset \Omega \) . The sequence \[ {u}_{k}\left( x\right) = {k}^{n/2}\chi \left( {kx}\right) {e}^{i{k}^{2}x \cdot {\xi }_{0}},\;k \in {\mathbb{N}}_{0}, \] (7.44) has the following properties: (i) \( {\begin{Vmatrix}{u}_{k}\end{Vmatrix}}_{0} = \parallel \chi {\parallel }_{0}\left( { \neq 0}\right) \) for all \( k \) , (ii) \( \left( {{u}_{k}, v}\right) \rightarrow 0 \) for \( k \rightarrow \infty \), all \( v \in {L}_{2}\left( {\mathbb{R}}^{n}\right) \) , (7.45) (iii) \( {\begin{Vmatrix}\chi \operatorname{Op}\left( p\right) {u}_{k} - {p}^{0}\left( 0,{\xi }_{0}\right) \cdot {u}_{k}\end{Vmatrix}}_{0} \rightarrow 0 \) for \( k \rightarrow \infty \) . Here (i) and (ii) imply that \( {u}_{k} \) has no convergent subsequence in \( {L}_{2}\left( {\mathbb{R}}^{n}\right) \) . 
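Properties (i) and (ii) of the sequence (7.44) can be observed numerically in one dimension. The tent-shaped cutoff and Gaussian test function below are illustrative stand-ins chosen by us, not the ones in the text:

```python
# Numerical 1-D illustration (n = 1) of (7.44): u_k(x) = k^{1/2} chi(kx) e^{i k^2 x}
# keeps a constant L2 norm while its pairing with a fixed v decays.
import numpy as np

xs = np.linspace(-2.0, 2.0, 400001)
h = xs[1] - xs[0]

def u(k):
    chi_k = np.maximum(0.0, 1.0 - np.abs(k * xs))   # tent cutoff chi(kx)
    return np.sqrt(k) * chi_k * np.exp(1j * k**2 * xs)

def norm2(f):                  # L2 norm via Riemann sum
    return np.sqrt(h * np.sum(np.abs(f)**2))

v = np.exp(-xs**2)             # a fixed test function

# (i): the L2 norm does not depend on k
assert abs(norm2(u(4)) - norm2(u(16))) < 1e-3
# (ii): the pairing (u_k, v) decays as k grows
ip = lambda k: abs(h * np.sum(u(k) * v))
assert ip(16) < ip(4) / 4
```

The two effects reinforce each other: the support of \( \chi \left( {kx}\right) \) shrinks while the oscillation \( {e}^{i{k}^{2}x{\xi }_{0}} \) speeds up, so \( {u}_{k} \rightharpoonup 0 \) weakly although \( {\begin{Vmatrix}{u}_{k}\end{Vmatrix}}_{0} \) stays fixed.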
It is used to show that if \( \operatorname{Op}\left( p\right) \) is continuous from \( {H}_{\text{comp }}^{0}\left( \Omega \right) \) to \( {H}_{\text{loc }}^{1}\left( \Omega \right) \), then \( {p}^{0}\left( {0,{\xi }_{0}}\right) \) must equal zero, for the compactness of the injection \( {H}^{1}\left( {B\left( {0,2}\right) }\right) \hookrightarrow {H}^{0}\left( {B\left( {0,2}\right) }\right) \) (cf. Section 8.2) then shows that \( \chi \operatorname{Op}\left( p\right) {u}_{k} \) has a convergent subsequence in \( {H}^{0}\left( {B\left( {0,2}\right) }\right) \), and then, unless \( {p}^{0}\left( {0,{\xi }_{0}}\right) = 0 \), (iii) will imply that \( {u}_{k} \) has a convergent subsequence in \( {L}_{2} \) in contradiction to (i) and (ii). Applying this argument to every point \( \left( {{x}_{0},{\xi }_{0}}\right) \) with \( \left| {\xi }_{0}\right| = 1 \), we conclude that if \( \mathrm{{Op}}\left( p\right) \) is of order 0 and maps \( {H}_{\text{comp }}^{0}\left( \Omega \right) \) continuously into \( {H}_{\text{loc }}^{1}\left( \Omega \right) \), its principal symbol must equal 0 . (The proof is found in [H67] and is sometimes called Hörmander's variant of Gohberg's lemma, referring to a version given by Gohberg in [G60].) The properties (i)-(iii) mean that \( {u}_{k} \) is a singular sequence for the operator \( {P}_{1} - a \) with \( {P}_{1} = \chi \operatorname{Op}\left( p\right) \) and \( a = {p}^{0}\left( 0,{\xi }_{0}\right) \) ; this implies that \( a \) belongs to the essential spectrum of \( {P}_{1} \), namely, the set \[ \operatorname{essspec}\left( {P}_{1}\right) = \bigcap \left\{ {\operatorname{spec}\left( {{P}_{1} + K}\right) \mid K\text{ compact in }{L}_{2}\left( \Omega \right) }\right\} . \] Since the operator norm is \( \geq \) the spectral radius, the operator norm of \( {P}_{1} \) (and of any \( {P}_{1} + K \) ) must be \( \geq \left| a\right| \) .
It follows that the operator norm in \( {L}_{2}\left( \Omega \right) \) of \( {\psi P} \) for any \( \psi \in {C}_{0}^{\infty } \) is \( \geq \mathop{\sup }\limits_{{x \in \Omega ,\left| \xi \right| = 1}}\left| {\psi \left( x\right) {p}^{0}\left( {x,\xi }\right) }\right| \). (So if we know that the norm of \( {\psi P} \) is \( \leq C \) for all \( \psi \) with \( \left| \psi \right| \leq 1 \), then also \( \sup \left| {p}^{0}\right| \leq C \); a remark that can be very useful.) By compositions with properly supported versions of \( \operatorname{Op}\left( {\langle \xi {\rangle }^{t}}\right) \) (for suitable \( t \) ), it is seen more generally that if \( P = \operatorname{Op}\left( {p\left( {x,\xi }\right) }\right) \) is of order \( d \) and maps \( {H}_{\text{comp }}^{s}\left( \Omega \right) \) into \( {H}_{\text{loc }}^{s - d + 1}\left( \Omega \right) \), then its principal symbol equals zero. In particular, if \( P = \operatorname{Op}\left( {r\left( {x,\xi }\right) }\right) \) where \( r \in {S}^{+\infty }\left( {\Omega ,{\mathbb{R}}^{n}}\right) \), and maps \( {\mathcal{E}}^{\prime }\left( \Omega \right) \) into \( {C}^{\infty }\left( \Omega \right) \), then all the homogeneous terms in each asymptotic series for \( r\left( {x,\xi }\right) \) are zero, i.e., \( r\left( {x,\xi }\right) \sim 0 \) (hence is in \( {S}^{-\infty }\left( {\Omega ,{\mathbb{R}}^{n}}\right) \) ). This gives a proof that a symbol in \( x \) -form is determined by the operator it defines, uniquely modulo \( {S}^{-\infty }\left( {\Omega ,{\mathbb{R}}^{n}}\right) \). ## 7.4 Elliptic pseudodifferential operators One of the most important applications of Theorem 7.13 is the construction of a parametrix (an almost-inverse) to an elliptic operator. We shall now define ellipticity, and here we include matrix-formed symbols and operators. The space of \( \left( {{N}^{\prime } \times N}\right) \) -matrices of symbols in \( {S}^{d}\left( {\Sigma ,{\mathbb{R}}^{n}}\right) \) (resp.
\( {S}_{1,0}^{d}\left( {\Sigma ,{\mathbb{R}}^{n}}\right) \) ) is denoted \( {S}^{d}\left( {\Sigma ,{\mathbb{R}}^{n}}\right) \otimes \mathcal{L}\left( {{\mathbb{C}}^{N},{\mathbb{C}}^{{N}^{\prime }}}\right) \) (resp. \( {S}_{1,0}^{d}\left( {\Sigma ,{\mathbb{R}}^{n}}\right) \otimes \mathcal{L}\left( {{\mathbb{C}}^{N},{\mathbb{C}}^{{N}^{\prime }}}\right) \) ) since complex \( \left( {{N}^{\prime } \times N}\right) \) -matrices can be identified with linear maps from \( {\mathbb{C}}^{N} \) to \( {\mathbb{C}}^{{N}^{\prime }} \) (i.e., elements of \( \mathcal{L}\left( {{\mathbb{C}}^{N},{\mathbb{C}}^{{N}^{\prime }}}\right) \) ). The symbols in these classes of course define \( \left( {{N}^{\prime } \times N}\right) \) -matrices of operators (notation: when \( p \in {S}_{1,0}^{d}\left( {\Omega \times \Omega ,{\mathbb{R}}^{n}}\right) \otimes \mathcal{L}\left( {{\mathbb{C}}^{N},{\mathbb{C}}^{{N}^{\prime }}}\right) \), then \( P = \operatorname{Op}\left( p\right) \) sends \( {C}_{0}^{\infty }{\left( \Omega \right) }^{N} \) into \( {C}^{\infty }{\left( \Omega \right) }^{{N}^{\prime }} \) ). Ellipticity is primarily defined for square matrices (the case \( N = {N}^{\prime } \) ), but has a natural extension to general matrices. Definition 7.17. \( {1}^{ \circ } \) Let \( p \in {S}^{d}\left( {\Omega ,{\mathbb{R}}^{n}}\right) \otimes \mathcal{L}\left( {{\mathbb{C}}^{N},{\mathbb{C}}^{N}}\right) \). Then \( p \), and \( P = \mathrm{{Op}}\left( p\right) \), and any ψdo \( {P}^{\prime } \) with \( {P}^{\prime } \sim P \), are said to be elliptic of order \( d \), when the principal symbol \( {p}_{d}\left( {x,\xi }\right) \) is invertible for all \( x \in \Omega \) and all \( \left| \xi \right| \geq 1 \). \( {2}^{ \circ } \) Let \( p \in {S}^{d}\left( {\Omega ,{\mathbb{R}}^{n}}\right) \otimes \mathcal{L}\left( {{\mathbb{C}}^{N},{\mathbb{C}}^{{N}^{\prime }}}\right) \) .
Then \( p \) (and \( \operatorname{Op}\left( p\right) \) and any \( {P}^{\prime } \sim \) \( \mathrm{{Op}}\left( p\right) \) ) is said to be injectively elliptic of order \( d \), resp. surjectively elliptic of order \( d \), when \( {p}_{d}\left( {x,\xi }\right) \) is injective, resp. surjective, from \( {\mathbb{C}}^{N} \) to \( {\mathbb{C}}^{{N}^{\prime }} \) , for all \( x \in \Omega \) and \( \left| \xi \right| \geq 1 \) . (In particular, \( {N}^{\prime } \geq N \) resp. \( {N}^{\prime } \leq N \) .) Note that since \( {p}_{d} \) is homogeneous of degree \( d \) for \( \left| \xi \right| \geq 1 \), it is only necessary to check the invertibility for \( \left| \xi \right| = 1 \) . The definition (and its usefulness) extends to the classes \( {S}_{\var
1068_(GTM227)Combinatorial Commutative Algebra
Definition 13.28
Definition 13.28 Suppose that \( {\mathcal{F}}_{ \bullet } \) is a free resolution of \( S/{I}_{\Delta } \) that has monomial matrices in which every row and column label is squarefree. The generalized Čech complex \( {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet } \) is the complex of localizations of \( S \) obtained by replacing every 1 in every row and column label with the symbol \( * \). This complex is to be considered as a cohomological complex as in Example 13.6. When \( {\mathcal{F}}_{ \bullet } \) is minimal, \( {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet } \) is called the canonical Čech complex of \( {I}_{\Delta } \) and we use \( {\check{\mathcal{C}}}_{\Delta }^{ \bullet } \) to denote it. Example 13.29 Start with the triangular, square, and pentagonal minimal cellular resolutions of \[ S/\langle a, b, c\rangle ,\;S/\langle {ab},{bc},{cd},{ad}\rangle ,\;\text{ and }\;S/\langle {abc},{bcd},{cde},{ade},{abe}\rangle \] for appropriate \( S \) in Example 4.12. The associated canonical Čech complexes have monomial matrices filled with the coboundary complexes of the following labeled cell complexes: ![9d852306-8a03-41f2-b2e7-a141e7b451e2_270_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_270_0.jpg) The empty set is labeled \( 0\cdots 0 \) in all three pictures. The triangle here gives monomial matrices for the usual Čech complex in Example 13.6, whereas the triangle in Example 4.12 is the Koszul complex in Example 1.27 (both with different sign conventions). This example works more generally for irrelevant ideals of smooth (or simplicial) projective toric varieties. The usual Čech complex is a generalized Čech complex. Proposition 13.30 Suppose that \( {I}_{\Delta } \) is generated by squarefree monomials \( {m}_{1},\ldots ,{m}_{r} \). If \( {\mathcal{F}}_{ \bullet } \) is the Taylor resolution on these generators, then \( {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet } = {\check{\mathcal{C}}}^{ \bullet }\left( {{m}_{1},\ldots ,{m}_{r}}\right) \) is the usual Čech complex.
This is a key point, and it follows immediately from the definitions. Now we come to the main result on generalized Čech complexes. Theorem 13.31 The local cohomology of \( M \) supported on \( {I}_{\Delta } \) is the cohomology of any generalized Čech complex tensored with \( M \) : \[ {H}_{{I}_{\Delta }}^{i}\left( M\right) = {H}^{i}\left( {M \otimes {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet }}\right) \] The proof, at the end of this section, relies on a construction that extends the construction in Definition 13.28 to arbitrary \( {\mathbb{Z}}^{n} \) -graded modules. Definition 13.32 The Čech hull of a \( {\mathbb{Z}}^{n} \) -graded module \( M \) is the \( {\mathbb{Z}}^{n} \) - graded module \( \check{C}M \) whose degree \( \mathbf{b} \) piece is \[ {\left( \check{C}M\right) }_{\mathbf{b}} = {M}_{{\mathbf{b}}_{ + }}\;\text{ where }\;{\mathbf{b}}_{ + } = \mathop{\sum }\limits_{{{b}_{i} \geq 0}}{b}_{i}{\mathbf{e}}_{i} \] and \( {\mathbf{e}}_{i} \) is the \( {i}^{\text{th }} \) standard basis vector of \( {\mathbb{Z}}^{n} \) . Equivalently, \[ \check{C}M = {\bigoplus }_{\mathbf{b} \in {\mathbb{N}}^{n}}{M}_{\mathbf{b}}{ \otimes }_{\mathbb{k}}\mathbb{k}\left\lbrack {{x}_{i}^{-1} \mid {b}_{i} = 0}\right\rbrack . \] The action of multiplication by \( {x}_{i} \) is \[ \cdot {x}_{i} : {\left( \check{C}M\right) }_{\mathbf{b}} \rightarrow {\left( \check{C}M\right) }_{{\mathbf{e}}_{i} + \mathbf{b}} = \left\{ \begin{array}{ll} \text{ identity } & \text{ if }{b}_{i} < 0 \\ \cdot {x}_{i} : {M}_{{\mathbf{b}}_{ + }} \rightarrow {M}_{{\mathbf{e}}_{i} + {\mathbf{b}}_{ + }} & \text{ if }{b}_{i} \geq 0. \end{array}\right. \] Note that \( {\mathbf{e}}_{i} + {\mathbf{b}}_{ + } = {\left( {\mathbf{e}}_{i} + \mathbf{b}\right) }_{ + } \) whenever \( {b}_{i} \geq 0 \) . 
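The degree rule \( {\left( \check{C}M\right) }_{\mathbf{b}} = {M}_{{\mathbf{b}}_{ + }} \) in Definition 13.32 is governed by the projection \( \mathbf{b} \mapsto {\mathbf{b}}_{ + } \) that zeroes out the negative coordinates of \( \mathbf{b} \); a tiny sketch (the helper below is ours, not from the text):

```python
# Illustrating the projection b -> b_+ = sum_{b_i >= 0} b_i e_i from
# Definition 13.32: the Cech hull copies M_a into every degree b with b_+ == a.
def b_plus(b):
    """Send a Z^n-degree b to b_+ in N^n by replacing negative entries with 0."""
    return tuple(bi if bi >= 0 else 0 for bi in b)

assert b_plus((-3, 2, 0)) == (0, 2, 0)

# consistent with the note e_i + b_+ == (e_i + b)_+ whenever b_i >= 0:
b, e0 = (1, -2, 0), (1, 0, 0)
add = lambda p, q: tuple(x + y for x, y in zip(p, q))
assert add(e0, b_plus(b)) == b_plus(add(e0, b))
```

In particular, along a coordinate direction with \( {b}_{i} < 0 \) the structure map is the identity, which is exactly the multiplication rule displayed above.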
The staircase diagram of \( \check{C}I \) for any ideal \( I \) (not necessarily squarefree) is obtained by pushing to negative infinity any point on the staircase diagram for \( I \) that touches the boundary of the positive orthant: ![9d852306-8a03-41f2-b2e7-a141e7b451e2_271_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_271_0.jpg) Heuristically, the first description of \( \check{C}M \) in the definition says that if you want to know what \( \check{C}M \) looks like in degree \( \mathbf{b} \in {\mathbb{Z}}^{n} \), then check what \( M \) looks like in the nonnegative degree closest to \( \mathbf{b} \); the second description says that the vector space \( {M}_{\mathbf{a}} \) for \( \mathbf{a} \in {\mathbb{N}}^{n} \) is copied into all degrees \( \mathbf{b} \) such that \( {\mathbf{b}}_{ + } = \mathbf{a} \). The Čech hull "forgets" everything about the original module that occurred in degrees outside \( {\mathbb{N}}^{n} \). The Čech hull can be applied to a homogeneous map of degree 0 between two modules, by copying the maps in the \( {\mathbb{N}}^{n} \) -graded degrees as prescribed. Checking \( {\mathbb{Z}}^{n} \) -degree by \( {\mathbb{Z}}^{n} \) -degree yields the following simple result. Lemma 13.33 The Čech hull takes exact sequences to exact sequences. Next we need to see how to recover the construction in Definition 13.28 using the Čech hull. Set \( \mathbf{1} = \left( {1,\ldots ,1}\right) \) and write \( {\omega }_{S} = S\left( {-\mathbf{1}}\right) \), the free module generated in degree \( \mathbf{1} \). Proposition 13.34 If \( {\mathcal{F}}_{ \bullet } \)
is a free resolution of \( S/{I}_{\Delta } \) with squarefree row and column labels, then the generalized Čech complex can be expressed as \[ {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet } = \left( {\check{\mathcal{C}}{\mathcal{F}}^{ \bullet }}\right) \left( \mathbf{1}\right) \] the \( {\mathbb{Z}}^{n} \) -graded translate down by \( \mathbf{1} \) of the Čech hull of \( {\mathcal{F}}^{ \bullet } = \underline{\operatorname{Hom}}\left( {{\mathcal{F}}_{ \bullet },{\omega }_{S}}\right) \). Proof. Every summand \( S\left( {-\sigma }\right) \) in \( {\mathcal{F}}_{ \bullet } \) becomes a summand \( S\left( {-\bar{\sigma }}\right) \) with generator of degree \( \bar{\sigma } = \mathbf{1} - \sigma \) in \( {\mathcal{F}}^{ \bullet } \). It is straightforward to check that \( \check{\mathcal{C}}\left( {S\left( {-\bar{\sigma }}\right) }\right) = \mathbb{k}\left\{ {{\mathbf{x}}^{\mathbf{b}} \mid {\mathbf{b}}_{ + } \succcurlyeq \bar{\sigma }}\right\} = S\left\lbrack {\mathbf{x}}^{-\sigma }\right\rbrack \left( {-\bar{\sigma }}\right) \). Consequently, the summand \( \check{\mathcal{C}}\left( {S\left( {-\bar{\sigma }}\right) }\right) \left( \mathbf{1}\right) \) of \( \check{\mathcal{C}}\left( {\mathcal{F}}^{ \bullet }\right) \left( \mathbf{1}\right) \) is the localization whose vector label has a \( * \) precisely where \( \sigma \) has a 1 . Proof of Theorem 13.31. Every squarefree resolution \( {\mathcal{F}}_{ \bullet } \) of \( S/{I}_{\Delta } \) contains a minimal free resolution. Applying \( \underline{\operatorname{Hom}}\left( {-,{\omega }_{S}}\right) \) produces a surjection from \( {\mathcal{F}}^{ \bullet } \) to the dual of the minimal free resolution, and this surjection induces an isomorphism on cohomology (which is \( {\underline{\operatorname{Ext}}}^{ \bullet }\left( {S/{I}_{\Delta },{\omega }_{S}}\right) \) in both cases).
By Proposition 13.34, taking Čech hulls and subsequently translating by \( \mathbf{1} \) yields a map \( {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet } \rightarrow {\check{\mathcal{C}}}_{\Delta }^{ \bullet } \), and Lemma 13.33 implies that it induces an isomorphism on cohomology. Since \( {\check{\mathcal{C}}}_{\Delta }^{ \bullet } \) and \( {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet } \) are both complexes of flat modules, a standard lemma from homological algebra (see [Mil00b, Lemma 6.11] for a proof) implies that the induced map \( M \otimes {\check{\mathcal{C}}}_{\mathcal{F}}^{ \bullet } \rightarrow M \otimes {\check{\mathcal{C}}}_{\Delta }^{ \bullet } \) is an isomorphism on cohomology. Therefore we need only show that \( {H}^{i}\left( {M \otimes {\check{\mathcal{C}}}_{\Delta }^{ \bullet }}\right) = {H}_{{I}_{\Delta }}^{i}\left( M\right) \). But this follows by taking \( {\mathcal{F}}_{ \bullet } \) above to be the Taylor resolution, by Proposition 13.30 and Theorem 13.7. The reader wishing to carry out algorithmic computation of local cohomology over \( S \) with monomial support should use a canonical Čech complex instead of the usual Čech complex, because the canonical Čech complex always has fewer summands - usually many fewer - and is shorter. ## 13.4 Cohen-Macaulay conditions The importance of a commutative ring or module being Cohen-Macaulay cannot be overstated. We have already seen the Cohen-Macaulay condition in the context of Alexander duality for resolutions (Section 5.5) and for generic monomial ideals (Section 6.2). In general, there are numerous equivalent ways to detect the Cohen-Macaulay condition for a module, and many of these fit nicely into the realm of combinatorial commutative algebra. Unfortunately, the equivalences of many of these criteria require homological methods from general - that is, not really combinatorial - commutative algebra, so it would take us too far astray to present a self-contained proof of them all.
That being said, the Cohen-Macaulay condition is so robust, comes up so often, and is so useful in combinatorics that we would be remiss were we not to at least present some of the equivalent conditions. This we shall do, with references to where missing parts of the proofs can be found. Afterward, we give some examples of how the criteria can be applied in combinatorial situations. A few of the Cohen-Macaulay criteria involve notions from commutative algebra that we have not yet seen in this book. Definition 13.35 Fix a positive multigrading of \( S = \mathbb{k}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) by \( {\mathbb{Z}}^{d} \) , and a graded ideal \( I \) . A sequence \( \mathbf{y} = {y}_{1},\ldots ,{y}_{r} \) of \( {\mathbb{Z}}^{d} \) -graded homogeneous elements in the graded maximal ideal of \( S/I \) is called a - system
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 4.4.1
Definition 4.4.1. Let \( G \) be a locally compact Abelian group and \( {C}^{b}\left( G\right) \) the space of all bounded continuous functions on \( G \) endowed with the supremum norm \( \parallel \cdot {\parallel }_{\infty } \). Let \( \lambda : f \rightarrow {\lambda }_{f} \) denote the regular representation of \( {L}^{1}\left( G\right) \) on \( {L}^{2}\left( G\right) \) as introduced in Section 2.7 and recall that the group \( {C}^{ * } \) -algebra, \( {C}^{ * }\left( G\right) \), is the closure of \( \lambda \left( {{L}^{1}\left( G\right) }\right) \) in \( \mathcal{B}\left( {{L}^{2}\left( G\right) }\right) \). Let \( {C}^{\infty }\left( G\right) \) be the set of all \( f \in {C}^{b}\left( G\right) \) such that there exist \( T \in {C}^{ * }\left( G\right) \) and a sequence \( {\left( {f}_{n}\right) }_{n} \) in \( {L}^{1}\left( G\right) \cap {C}^{b}\left( G\right) \) with the following properties. (i) \( {\lambda }_{{f}_{n}} \rightarrow T \) in \( \mathcal{B}\left( {{L}^{2}\left( G\right) }\right) \) as \( n \rightarrow \infty \). (ii) \( {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{\infty } \rightarrow 0 \) as \( n \rightarrow \infty \). Lemma 4.4.2. Given \( f \in {C}^{\infty }\left( G\right) \), the operator \( T \in {C}^{ * }\left( G\right) \) in Definition 4.4.1 is unique and is denoted \( {T}_{f} \). Proof. Let \( T, S \in {C}^{ * }\left( G\right) \) and suppose that \( {\left( {f}_{n}\right) }_{n} \) and \( {\left( {g}_{n}\right) }_{n} \) are sequences in \( {L}^{1}\left( G\right) \cap {C}^{b}\left( G\right) \) such that, as \( n \rightarrow \infty \) , \[ {\lambda }_{{f}_{n}} \rightarrow T,\;{\lambda }_{{g}_{n}} \rightarrow S\text{ and }{\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{\infty } \rightarrow 0,\;{\begin{Vmatrix}{g}_{n} - f\end{Vmatrix}}_{\infty } \rightarrow 0. \] Let \( Q = T - S \) .
Then, for arbitrary \( g \in {C}_{c}\left( G\right) \) , \[ {\begin{Vmatrix}\left( {f}_{n} - {g}_{n}\right) * g\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{f}_{n} - {g}_{n}\end{Vmatrix}}_{\infty }\parallel g{\parallel }_{1} \leq \parallel g{\parallel }_{1}\left( {{\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{\infty } + {\begin{Vmatrix}{g}_{n} - f\end{Vmatrix}}_{\infty }}\right) \rightarrow 0 \] and \[ {\begin{Vmatrix}\left( {f}_{n} - {g}_{n}\right) * g - Q\left( g\right) \end{Vmatrix}}_{2} \leq {\begin{Vmatrix}\left( {\lambda }_{{f}_{n}} - T\right) \left( g\right) \end{Vmatrix}}_{2} + {\begin{Vmatrix}\left( {\lambda }_{{g}_{n}} - S\right) \left( g\right) \end{Vmatrix}}_{2} \rightarrow 0. \] In particular, for every compact subset \( K \) of \( G \), \( {\left. \left( \left( {f}_{n} - {g}_{n}\right) * g\right) \right| }_{K} \rightarrow 0 \) uniformly and \( {\left. \left( \left( {f}_{n} - {g}_{n}\right) * g\right) \right| }_{K} \rightarrow {\left. Q\left( g\right) \right| }_{K} \) in \( {L}^{2}\left( K\right) \). Both facts together imply that \( {\left. Q\left( g\right) \right| }_{K} = 0 \) in \( {L}^{2}\left( K\right) \). This holds for all compact subsets \( K \) of \( G \). Thus it follows that \( Q\left( g\right) = 0 \). Since \( {C}_{c}\left( G\right) \) is dense in \( {L}^{2}\left( G\right) \), we conclude that \( Q = 0 \). Remark 4.4.3. Let \( f \in {L}^{1}\left( G\right) \cap {C}^{\infty }\left( G\right) \). Then, taking \( {f}_{n} = f \) for all \( n \in \mathbb{N} \) , we see that \( {T}_{f} = {\lambda }_{f} \). Hence the three Gelfand transforms \( {\widehat{T}}_{f},{\widehat{\lambda }}_{f} \), and \( \widehat{f} \) coincide on \( \widehat{G} \). Thus, defining \( \widehat{f} = {\widehat{T}}_{f} \) for \( f \in {C}^{\infty }\left( G\right) \), the assignment \( f \rightarrow \widehat{f} \) coincides on \( {L}^{1}\left( G\right) \cap {C}^{\infty }\left( G\right) \) with the Gelfand transformation of \( {L}^{1}\left( G\right) \) .
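The elementary convolution bound \( \parallel f * g{\parallel }_{\infty } \leq \parallel f{\parallel }_{\infty }\parallel g{\parallel }_{1} \) used in the proof above can be sanity-checked discretely, taking \( G = \mathbb{Z} \) (illustrative only; the data are random):

```python
# Discrete check on the group Z of ||f * g||_inf <= ||f||_inf * ||g||_1.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)   # stand-in for a bounded function
g = rng.standard_normal(48)   # stand-in for an L1 function

conv = np.convolve(f, g)      # full convolution on Z
assert np.abs(conv).max() <= np.abs(f).max() * np.abs(g).sum() + 1e-12
```

Each entry of the convolution is \( \mathop{\sum }\limits_{j}f\left( {k - j}\right) g\left( j\right) \), so the bound is just the triangle inequality; the \( {10}^{-12} \) slack only absorbs floating-point rounding.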
In addition, we have \( \parallel \widehat{f}{\parallel }_{\infty } = {\begin{Vmatrix}{\widehat{T}}_{f}\end{Vmatrix}}_{\infty } = \begin{Vmatrix}{T}_{f}\end{Vmatrix} \) . In preparation for the inversion formula and the Plancherel theorem we have to provide a series of technical lemmas. Lemma 4.4.4. The map \( f \rightarrow {T}_{f} \) from \( {C}^{\infty }\left( G\right) \) into \( {C}^{ * }\left( G\right) \) is linear and injective. Proof. Linearity of the map is obvious. Thus it remains to show that \( f = 0 \) whenever \( {T}_{f} = 0 \) . So suppose there exists a sequence \( {\left( {f}_{n}\right) }_{n} \) in \( {L}^{1}\left( G\right) \cap {C}^{b}\left( G\right) \) such that \( {\lambda }_{{f}_{n}} \rightarrow 0 \) and \( {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{\infty } \rightarrow 0 \) . Then, for any \( g \in {C}_{c}\left( G\right) ,{\begin{Vmatrix}{f}_{n} * g\end{Vmatrix}}_{2} \rightarrow 0 \) and \[ {\begin{Vmatrix}{f}_{n} * g - f * g\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{\infty }\parallel g{\parallel }_{1} \rightarrow 0. \] As in the proof of Lemma 4.4.2, it follows that \( f * g = 0 \) in \( {L}^{2}\left( G\right) \) . However, since \( f * g \) is continuous, we get that \( {\int }_{G}f\left( x\right) g\left( x\right) {dx} = 0 \) . Finally, since \( {C}_{c}\left( G\right) \) is dense in \( {L}^{1}\left( G\right) \) and \( f \in {L}^{\infty }\left( G\right) = {L}^{1}{\left( G\right) }^{ * } \), it follows that \( f = 0 \) . Lemma 4.4.5. Let \( S \in {C}^{ * }\left( G\right) \) and \( g, h \in {C}_{c}\left( G\right) \) . Then (i) \( g * S\left( h\right) \in {C}^{\infty }\left( G\right) \) and \( {T}_{g * S\left( h\right) } = S{\lambda }_{g * h} \) . (ii) \( S\left( g\right) * S{\left( g\right) }^{ * } \in {C}^{\infty }\left( G\right) \) and \( {T}_{S\left( g\right) * S{\left( g\right) }^{ * }} = S{S}^{ * }{\lambda }_{g * {g}^{ * }} \) . Proof. 
(i) We know that \( g * S\left( h\right) \in {C}_{0}\left( G\right) \subseteq {C}^{b}\left( G\right) \) . Let \( {\left( {f}_{n}\right) }_{n} \subseteq {C}_{c}\left( G\right) \) be such that \( {\lambda }_{{f}_{n}} \rightarrow S \) in \( {C}^{ * }\left( G\right) \) . Then, for every \( x \in G \) , \[ \left| {\left( {{f}_{n} * g * h}\right) \left( x\right) - \left( {g * S\left( h\right) }\right) \left( x\right) }\right| \leq {\int }_{G}\left| {g\left( y\right) }\right| \cdot \left| {{\lambda }_{{f}_{n}}\left( h\right) \left( {{y}^{-1}x}\right) - S\left( h\right) \left( {{y}^{-1}x}\right) }\right| {dy} \] \[ \leq \parallel g{\parallel }_{2} \cdot {\begin{Vmatrix}{R}_{x}\left( {\lambda }_{{f}_{n}}\left( h\right) \right) - {R}_{x}\left( S\left( h\right) \right) \end{Vmatrix}}_{2} \] \[ \leq \begin{Vmatrix}{{\lambda }_{{f}_{n}} - S}\end{Vmatrix} \cdot \parallel h{\parallel }_{2}\parallel g{\parallel }_{2} \] which tends to zero as \( n \rightarrow \infty \) . Moreover, for each \( u \in {C}_{c}\left( G\right) \) , \[ {\begin{Vmatrix}{\lambda }_{{f}_{n} * g * h}\left( u\right) - S{\lambda }_{g * h}\end{Vmatrix}}_{2} \leq \begin{Vmatrix}{{\lambda }_{{f}_{n}} - S}\end{Vmatrix} \cdot \parallel g * h * u{\parallel }_{2} \] \[ \leq \begin{Vmatrix}{{\lambda }_{{f}_{n}} - S}\end{Vmatrix} \cdot \parallel u{\parallel }_{2}\parallel g * h{\parallel }_{1} \] and hence \[ \begin{Vmatrix}{{\lambda }_{{f}_{n} * g * h} - S{\lambda }_{g * h}}\end{Vmatrix} \leq \begin{Vmatrix}{{\lambda }_{{f}_{n}} - S}\end{Vmatrix} \cdot \parallel g * h{\parallel }_{1} \rightarrow 0. \] This shows that \( g * S\left( h\right) \in {C}^{\infty }\left( G\right) \) and \( {T}_{g * S\left( h\right) } = S{\lambda }_{g * h} \) . (ii) Clearly, \( S\left( g\right) * S{\left( g\right) }^{ * } \in {C}_{0}\left( G\right) \subseteq {C}^{b}\left( G\right) \) . Let \( {\left( {f}_{n}\right) }_{n} \) be a sequence in \( {C}_{c}\left( G\right) \) with \( {\lambda }_{{f}_{n}} \rightarrow S \) . 
Then \[ \begin{Vmatrix}{{\lambda }_{{f}_{n} * {f}_{n}^{ * } * g * {g}^{ * }} - S{S}^{ * }{\lambda }_{g * {g}^{ * }}}\end{Vmatrix} = \begin{Vmatrix}{{\lambda }_{{f}_{n}}{\lambda }_{{f}_{n}}^{ * }{\lambda }_{g * {g}^{ * }} - S{S}^{ * }{\lambda }_{g * {g}^{ * }}}\end{Vmatrix}, \] which converges to zero as \( n \rightarrow \infty \) . Moreover, we have \( {f}_{n} * {f}_{n}^{ * } * g * {g}^{ * } \in \) \( {L}^{1}\left( G\right) \cap {C}^{b}\left( G\right) \) and \[ {\begin{Vmatrix}{f}_{n} * {f}_{n}^{ * } * g * {g}^{ * } - S\left( g\right) * S{\left( g\right) }^{ * }\end{Vmatrix}}_{\infty } = {\begin{Vmatrix}{\lambda }_{{f}_{n}}\left( g\right) * {\lambda }_{{f}_{n}}{\left( g\right) }^{ * } - S\left( g\right) * S{\left( g\right) }^{ * }\end{Vmatrix}}_{\infty } \] \[ \leq \parallel {\lambda }_{{f}_{n}}\left( g\right) {\parallel }_{2}\parallel {\lambda }_{{f}_{n}}{\left( g\right) }^{ * } - S{\left( g\right) }^{ * }{\parallel }_{2} \] \[ + {\begin{Vmatrix}S{\left( g\right) }^{ * }\end{Vmatrix}}_{2}{\begin{Vmatrix}{\lambda }_{{f}_{n}}\left( g\right) - S\left( g\right) \end{Vmatrix}}_{2} \] \[ \leq \begin{Vmatrix}{{\lambda }_{{f}_{n}} - S}\end{Vmatrix} \cdot \parallel g{\parallel }_{2}\left( {{\begin{Vmatrix}{\lambda }_{{f}_{n}}\left( g\right) \end{Vmatrix}}_{2} + \parallel S\left( g\right) {\parallel }_{2}}\right) , \] which also converges to 0 as \( n \rightarrow \infty \) . This proves (ii). Lemma 4.4.6. Let \( f \in {C}^{\infty }\left( G\right), x \in G \) and \( \alpha \in \widehat{G} \) . Then (i) \( {f}^{ * } \in {C}^{\infty }\left( G\right) \) and \( \widehat{{f}^{ * }} = \overline{\widehat{f}} \) . (ii) \( {L}_{x}f \in {C}^{\infty }\left( G\right) \) and \( \widehat{{L}_{x}f}\left( \alpha \right) = \overline{\alpha \left( x\right) }\widehat{f}\left( \alpha \right) \) . (iii) \( {\alpha f} \in {C}^{\infty }\left( G\right) \) and \( \widehat{\alpha f} = {L}_{\alpha }\widehat{f} \) . Proof. 
Let \( {\left( {f}_{n}\right) }_{n} \subseteq {L}^{1}\left( G\right) \cap {C}^{b}\left( G\right) \) and \( T \in {C}^{ * }\left( G\right) \) such that \( {\lambda }_{{f}_{n}} \rightarrow T \) and \( {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{\infty } \rightarrow 0. \) (i) Because \( {\begin{Vmatrix}{f}_{n}^{ * } - {f}
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 4.5.1
Definition 4.5.1 Let \( \Omega \subset {\mathbb{R}}^{n} \) be open. A function \( u \in {L}^{1}\left( \Omega \right) \) is said to be of bounded variation if its total variation on \( \Omega \) is finite: \[ \parallel {Du}\parallel \left( \Omega \right) \mathrel{\text{:=}} \sup \left\{ {{\int }_{\Omega }u\operatorname{div}\phi \,{dx} \mid \phi \in {C}_{0}^{1}\left( {\Omega ,{\mathbb{R}}^{n}}\right) ,\parallel \phi \left( x\right) \parallel \leq 1,\text{ for a.e. }x \in \Omega }\right\} < \infty . \] The space \( {BV}\left( \Omega \right) \) consists of all functions of bounded variation on \( \Omega \), with norm \[ \parallel u{\parallel }_{BV} = \parallel u{\parallel }_{{L}^{1}} + \parallel {Du}\parallel \left( \Omega \right) . \] Since \( \parallel \cdot {\parallel }_{{\mathbb{R}}^{n}} \) and \( \parallel \cdot {\parallel }_{\infty } \) are equivalent, for \( u \in {BV}\left( \Omega \right) \) the total variation \( \parallel {Du}\parallel \) can be regarded as a measure \( \mu \) : \[ \frac{1}{\sqrt{n}}\mu \left( \Omega \right) \leq \parallel {Du}\parallel \left( \Omega \right) \leq \sqrt{n}\mu \left( \Omega \right) . \] (4.49) Moreover, the distributional derivative \( {Du} \) makes sense; it is related to the Radon measure \( \mu \) and a measurable vector-valued function \( \nu \left( x\right) \). Example 4.5.2 If \( u \in {W}^{1,1}\left( \Omega \right) \), then \( u \in {BV}\left( \Omega \right) \), and \[ \parallel u{\parallel }_{BV} = \parallel u{\parallel }_{{W}^{1,1}}. \] In fact, we shall verify that \( \parallel {Du}\parallel \left( \Omega \right) = {\int }_{\Omega }\left| {\nabla u}\right| {dx} \) .
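Before the verification, a one-dimensional numerical sanity check of this identity (not part of the proof; the sample function \( u\left( x\right) = \sin {2\pi x} \) on \( \left( {0,1}\right) \), with total variation 4, is our illustrative choice):

```python
# 1-D check that the total variation of a smooth u agrees with int |u'| dx.
import numpy as np

xs = np.linspace(0.0, 1.0, 20001)
u = np.sin(2 * np.pi * xs)

# total variation of the samples: sum of |increments|
tv = np.sum(np.abs(np.diff(u)))

# midpoint-rule approximation of int_0^1 |u'(x)| dx with u' = 2*pi*cos(2*pi*x)
mid = 0.5 * (xs[:-1] + xs[1:])
integral = np.sum(np.abs(2 * np.pi * np.cos(2 * np.pi * mid))) * (xs[1] - xs[0])

assert abs(tv - 4.0) < 1e-3 and abs(integral - 4.0) < 1e-3
```

On each monotone piece the increments telescope, which is why the sampled total variation captures \( {\int }_{0}^{1}\left| {u}^{\prime }\right| {dx} \) so accurately here.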
On the one hand, \[ \left| {{\int }_{\Omega }u\operatorname{div}{\phi dx}}\right| = \left| {{\int }_{\Omega }\nabla u \cdot {\phi dx}}\right| \leq {\int }_{\Omega }\left| {\nabla u}\right| {dx} \] for all \( \phi \in {C}_{0}^{1}\left( {\Omega ,{\mathbb{R}}^{n}}\right) \) with \( \parallel \phi \left( x\right) \parallel \leq 1 \) for all \( x \in \Omega \); it follows that \( \parallel {Du}\parallel \left( \Omega \right) \leq {\int }_{\Omega }\left| {\nabla u}\right| {dx} \). On the other hand, for every \( u \in {W}^{1,1}\left( \Omega \right) \) and every \( \epsilon > 0 \) there exists \( {\phi }_{\epsilon } \in {C}_{0}^{1}\left( {\Omega ,{\mathbb{R}}^{n}}\right) \) with \( \begin{Vmatrix}{{\phi }_{\epsilon }\left( x\right) }\end{Vmatrix} \leq 1 \) for all \( x \in \Omega \) satisfying \[ {\int }_{\Omega }\left| {\nabla u}\right| {dx} \leq {\int }_{\Omega }\nabla u \cdot {\phi }_{\epsilon }{dx} + \epsilon = {\int }_{\Omega }u \cdot \operatorname{div}{\phi }_{\epsilon }{dx} + \epsilon \leq \parallel {Du}\parallel \left( \Omega \right) + \epsilon . \] Since \( \epsilon > 0 \) is arbitrary, we obtain \( {\int }_{\Omega }\left| {\nabla u}\right| {dx} \leq \parallel {Du}\parallel \left( \Omega \right) \). Example 4.5.3 Let \( S \subset {\mathbb{R}}^{n} \) be a \( {C}^{\infty } \) compact \( \left( {n - 1}\right) \) -dimensional hypersurface with the induced metric, and let \( {H}^{n - 1} \) be the \( \left( {n - 1}\right) \) -dimensional Hausdorff measure in \( {\mathbb{R}}^{n} \). The area of \( S \) is \( {H}^{n - 1}\left( S\right) \). Let \( \Omega \) be the body bounded by \( S \) , and let \( {\chi }_{\Omega } \) be the characteristic function of \( \Omega \). Then \( {\chi }_{\Omega } \in {BV}\left( {\mathbb{R}}^{n}\right) \), and \[ \begin{Vmatrix}{D{\chi }_{\Omega }}\end{Vmatrix}\left( {\mathbb{R}}^{n}\right) = {H}^{n - 1}\left( S\right) . 
\] Indeed, by the Gauss formula, for all \( \phi \in {C}_{0}^{1}\left( {{\mathbb{R}}^{n},{\mathbb{R}}^{n}}\right) \) \[ {\int }_{{\mathbb{R}}^{n}}{\chi }_{\Omega }\operatorname{div}{\phi dx} = {\int }_{\Omega }\operatorname{div}{\phi dx} = {\int }_{S}\mathrm{n}\left( x\right) \cdot \phi \left( x\right) d{\mathrm{H}}^{n - 1}, \] where \( \mathrm{n}\left( x\right) \) is the unit exterior normal. Thus \[ \begin{Vmatrix}{D{\chi }_{\Omega }}\end{Vmatrix}\left( {\mathbb{R}}^{n}\right) \leq {\mathrm{H}}^{n - 1}\left( S\right) . \] On the other hand, one extends \( \mathrm{n} \) to a \( {C}^{\infty } \) vector field \( V \) over \( {\mathbb{R}}^{n} \) with \( \parallel V\left( x\right) \parallel \leq 1 \) for all \( x \in {\mathbb{R}}^{n} \) ; this can be done by a partition of unity. Then, for every \( \rho \in {C}_{0}^{\infty }\left( {{\mathbb{R}}^{n},{\mathbb{R}}^{1}}\right) \) with \( \left| {\rho \left( x\right) }\right| \leq 1 \) for all \( x \in {\mathbb{R}}^{n} \), setting \( \phi = {\rho V} \) we obtain \[ {\int }_{{\mathbb{R}}^{n}}{\chi }_{\Omega }\operatorname{div}{\phi dx} = {\int }_{S}{\rho d}{\mathrm{H}}^{n - 1}.
\] Thus, \[ \begin{Vmatrix}{D{\chi }_{\Omega }}\end{Vmatrix}\left( {\mathbb{R}}^{n}\right) \] \[ = \sup \left\{ {{\int }_{{\mathbb{R}}^{n}}{\chi }_{\Omega }\operatorname{div}{\phi dx} \mid \phi \in {C}_{0}^{\infty }\left( {{\mathbb{R}}^{n},{\mathbb{R}}^{n}}\right) ,\text{ with }\parallel \phi \left( x\right) \parallel \leq 1,\forall x \in {\mathbb{R}}^{n}}\right\} \] \[ \geq \sup \left\{ {{\int }_{S}{\rho d}{\mathrm{H}}^{n - 1} \mid \rho \in {C}_{0}^{\infty }\left( {{\mathbb{R}}^{n},{\mathbb{R}}^{1}}\right) ,\left| {\rho \left( x\right) }\right| \leq 1,\forall x \in {\mathbb{R}}^{n}}\right\} \] \[ = {\mathrm{H}}^{n - 1}\left( S\right) \text{.} \] These two examples show that \( {W}^{1,1}\left( \Omega \right) \) is strictly contained in \( {BV}\left( \Omega \right) \) : for \( n = 1 \), the characteristic function \( {\chi }_{\left\lbrack 0,1\right\rbrack } \) belongs to \( {BV}\left( {\mathbb{R}}^{1}\right) \) but not to \( {W}^{1,1}\left( {\mathbb{R}}^{1}\right) \) . This leads us to extend the definition of the codimension-one area to the boundary of more general domains. Definition 4.5.4 Let \( E \) be a Borel set in an open domain \( \Omega \subset {\mathbb{R}}^{n} \) . We call \[ \parallel \partial E\parallel \left( \Omega \right) = \begin{Vmatrix}{D{\chi }_{E}}\end{Vmatrix}\left( \Omega \right) \] the perimeter of \( E \) in \( \Omega \) . With the norm \( \parallel u{\parallel }_{BV} = \parallel u{\parallel }_{{L}^{1}} + \parallel {Du}\parallel \left( \Omega \right) \), the space \( {BV}\left( \Omega \right) \) is a Banach space; only the completeness remains to be verified. Lemma 4.5.5 (Lower semi-continuity) If \( \left\{ {u}_{j}\right\} \subset {BV}\left( \Omega \right) \) and \( {u}_{j} \rightarrow u \) in \( {L}^{1} \), then for every open \( U \subset \Omega \) \[ \parallel {Du}\parallel \left( U\right) \leq \mathop{\liminf }\limits_{{j \rightarrow \infty }}\begin{Vmatrix}{D{u}_{j}}\end{Vmatrix}\left( U\right) .
\] \( \left( {4.50}\right) \) If, further, \( \sup \left\{ {\begin{Vmatrix}{D{u}_{j}}\end{Vmatrix}\left( \Omega \right) \mid j \in \mathbb{N}}\right\} < \infty \), then \( u \in {BV}\left( \Omega \right) \) . Proof. For every \( \phi \in {C}_{0}^{1}\left( {U,{\mathbb{R}}^{n}}\right) \) with \( \parallel \phi \left( x\right) \parallel \leq 1 \), one has \[ {\int }_{U}u\operatorname{div}{\phi dx} = \mathop{\lim }\limits_{{j \rightarrow \infty }}{\int }_{U}{u}_{j}\operatorname{div}{\phi dx} \leq \mathop{\liminf }\limits_{{j \rightarrow \infty }}\begin{Vmatrix}{D{u}_{j}}\end{Vmatrix}\left( U\right) . \] Taking the supremum over all such \( \phi \) proves (4.50). Theorem 4.5.6 \( {BV}\left( \Omega \right) \) is complete. Proof. For a Cauchy sequence \( \left\{ {u}_{j}\right\} \) in the BV norm, it is obvious that \( {u}_{j} \rightarrow u \) in \( {L}^{1} \) . By the previous lemma, \( \parallel {Du}\parallel \left( \Omega \right) < \infty \), and hence \( u \in {BV}\left( \Omega \right) \) . It remains to show that \( \begin{Vmatrix}{D\left( {{u}_{j} - u}\right) }\end{Vmatrix}\left( \Omega \right) \rightarrow 0 \) . Again from the lower semi-continuity lemma, for every \( \epsilon > 0 \) there exists \( {j}_{0} \in \mathbb{N} \) such that \[ \begin{Vmatrix}{D\left( {{u}_{j} - u}\right) }\end{Vmatrix}\left( \Omega \right) \leq \mathop{\liminf }\limits_{{k \rightarrow \infty }}\begin{Vmatrix}{D\left( {{u}_{j} - {u}_{k}}\right) }\end{Vmatrix}\left( \Omega \right) < \epsilon ,\text{ as }j \geq {j}_{0}. \] Now we consider the possibility of \( {C}^{\infty } \) approximation of BV functions. Since the \( {W}^{1,1} \) norm equals the BV norm for \( {C}^{1} \) functions, and \( {C}^{\infty } \) is dense in \( {W}^{1,1} \) but \( {W}^{1,1} \subsetneq {BV} \), smooth functions cannot be dense in the BV norm; the best one can have is: Theorem 4.5.7 Let \( \Omega \) be an open domain of \( {\mathbb{R}}^{n} \) . Then for every \( u \in {BV}\left( \Omega \right) \) there exist \( {u}_{j} \in {BV}\left( \Omega \right) \cap {C}^{\infty }\left( \Omega \right) \) such that 1.
\( {u}_{j} \rightarrow u \) in \( {L}^{1}\left( \Omega \right) \) , 2. \( \begin{Vmatrix}{D{u}_{j}}\end{Vmatrix} \rightarrow \parallel {Du}\parallel \) in the sense of Radon measures. In particular, \( \begin{Vmatrix}{D{u}_{j}}\end{Vmatrix}\left( \Omega \right) \rightarrow \parallel {Du}\parallel \left( \Omega \right) \) . We omit the proof, but refer to Giusti [Gi], p. 14. Theorem 4.5.8 (Compactness) Let \( \Omega \) be a bounded open domain of \( {\mathbb{R}}^{n} \) . Any sequence \( \left\{ {u}_{j}\right\} \subset {BV}\left( \Omega \right) \) with \( {\begin{Vmatrix}{u}_{j}\end{Vmatrix}}_{BV} \leq M < \infty \) possesses a convergent subsequence in the \( {L}^{1} \) norm, and the limit \( u \in {BV}\left( \Omega \right) \), with \( \parallel u{\parallel }_{BV} \leq M \) . Proof. We take a sequence \( \left\{ {v}_{j}\right\} \subset {BV}\left( \Omega \right) \cap {C}^{\infty }\left( \Omega \right) \) such that \( {\begin{Vmatrix}{{u}_{j} - {v}_{j}}\end{Vmatrix}}_{{L}^{1}} < \frac{1}{j} \) and \( \begin{Vmatrix}{D{v}_{j}}\end{Vmatrix}\left( \Omega \right) \leq M + \frac{1}{j} \) . From Example 4.5.2, we have \( {\begin{Vmatrix}{v}_{j}\end{Vmatrix}}_{{W}^{1,1}} = {\begin{Vmatrix}{v}_{j}\end{Vmatrix}}_{BV} \) . According to the Rellich-Kondrachov compactness theorem, there is a subsequence \( \left\{ {v}_{{j}_{k}}\right\} \) which \( {L}^{1} \) -converges to some \( u \) ; since \( {\begin{Vmatrix}{{u}_{j} - {v}_{j}}\end{Vmatrix}}_{{L}^{1}} < \frac{1}{j} \), the subsequence \( \left\{ {u}_{{j}_{k}}\right\} \) also \( {L}^{1} \) -converges to \( u \) . From the lower semi-continuity lemma, \( \parallel {Du}\parallel \left( \Omega \right) \leq M \), and \( u \in {BV}\left( \Omega \right) \) . Obviou
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 5.19
Definition 5.19. If \( G \) is a matrix Lie group with Lie algebra \( \mathfrak{g} \), then \( H \subset G \) is a connected Lie subgroup of \( G \) if the following conditions are satisfied: 1. \( H \) is a subgroup of \( G \) . 2. The Lie algebra \( \mathfrak{h} \) of \( H \) is a Lie subalgebra of \( \mathfrak{g} \) . 3. Every element of \( H \) can be written in the form \( {e}^{{X}_{1}}{e}^{{X}_{2}}\cdots {e}^{{X}_{m}} \), with \( {X}_{1},\ldots ,{X}_{m} \in \mathfrak{h} \) . Connected Lie subgroups are also called analytic subgroups. Note that any group \( H \) as in the definition is path connected, since each element of \( H \) can be connected to the identity in \( H \) by a path of the form \[ t \mapsto {e}^{\left( {1 - t}\right) {X}_{1}}{e}^{\left( {1 - t}\right) {X}_{2}}\cdots {e}^{\left( {1 - t}\right) {X}_{m}}. \] The group \( {H}_{0} \) in (5.22) is a connected Lie subgroup of \( \mathrm{{GL}}\left( {2;\mathbb{C}}\right) \) whose Lie algebra is the algebra \( \mathfrak{h} \) in (5.21). We are now ready to state the main result of this section, which is our second major application of the Baker-Campbell-Hausdorff formula. Theorem 5.20. Let \( G \) be a matrix Lie group with Lie algebra \( \mathfrak{g} \) and let \( \mathfrak{h} \) be a Lie subalgebra of \( \mathfrak{g} \) . Then there exists a unique connected Lie subgroup \( H \) of \( G \) with Lie algebra \( \mathfrak{h} \) . If \( \mathfrak{h} \) is the subalgebra of \( \mathfrak{{gl}}\left( {2;\mathbb{C}}\right) \) in (5.21), then the connected Lie subgroup \( H \) is the group \( {H}_{0} \) in (5.22), which is not closed. In practice, Theorem 5.20 is most useful in those cases where the connected Lie subgroup \( H \) turns out to be closed. See Proposition 5.24 and Exercises 10, 13, and 14 for conditions under which this is the case. We now begin working toward the proof of Theorem 5.20. 
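The failure of closedness for the irrational-line group can be seen numerically. The sketch below assumes, as in the standard example, that \( {H}_{0} = \left\{ {\operatorname{diag}\left( {{e}^{it},{e}^{iat}}\right) \mid t \in \mathbb{R}}\right\} \) with \( a = \sqrt{2} \) irrational: taking \( t = 2\pi k \) produces matrices \( \operatorname{diag}\left( {1,{e}^{2\pi iak}}\right) \) that come arbitrarily close to \( \operatorname{diag}\left( {1, - 1}\right) \), which does not belong to \( {H}_{0} \) (it would force \( \sqrt{2}k - m = 1/2 \) for integers \( k, m \)).

```python
import cmath
import math

# Sketch: the "irrational line" H0 = {diag(e^{it}, e^{i*a*t})} with a = sqrt(2)
# (assumed form of the group in (5.22)) is not closed in GL(2; C).
# At t = 2*pi*k the first diagonal entry is exactly 1; the second entry
# e^{2*pi*i*a*k} never equals -1, yet comes arbitrarily close, so the
# matrix diag(1, -1) is a limit point of H0 that H0 does not contain.
a = math.sqrt(2)
dist = min(abs(cmath.exp(2j * math.pi * a * k) + 1) for k in range(1, 20_000))
print(dist)  # strictly positive, but very small
```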
Since \( G \) is assumed to be a matrix Lie group, we may as well assume that \( G = \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . After all, if \( G \) is a closed subgroup of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) and \( H \) is a connected Lie subgroup of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) whose Lie algebra \( \mathfrak{h} \) is contained in \( \mathfrak{g} \), then \( H \) is also a connected Lie subgroup of \( G \) . We now let \[ H = \left\{ {{e}^{{X}_{1}}{e}^{{X}_{2}}\cdots {e}^{{X}_{N}} \mid {X}_{1},\ldots ,{X}_{N} \in \mathfrak{h}}\right\} , \] (5.23) which is a subgroup of \( G \) . The key issue is to prove that the Lie algebra of \( H \), in the sense of Definition 5.18, is \( \mathfrak{h} \) . Once we know that \( \operatorname{Lie}\left( H\right) = \mathfrak{h} \), we will immediately conclude that \( H \) is a connected Lie subgroup with Lie algebra \( \mathfrak{h} \), the remaining properties in Definition 5.19 being true by definition. Note that for the claim \( \operatorname{Lie}\left( H\right) = \mathfrak{h} \) to be true, it is essential that \( \mathfrak{h} \) be a subalgebra of \( \mathfrak{{gl}}\left( {n;\mathbb{C}}\right) \), and not merely a subspace; compare Exercise 11. As in the proof of Theorem 3.42, we think of \( \mathfrak{{gl}}\left( {n;\mathbb{C}}\right) \) as \( {\mathbb{R}}^{2{n}^{2}} \) and we decompose \( \mathfrak{{gl}}\left( {n;\mathbb{C}}\right) \) as the direct sum of \( \mathfrak{h} \) and \( D \), where \( D \) is the orthogonal complement of \( \mathfrak{h} \) with respect to the usual inner product on \( {\mathbb{R}}^{2{n}^{2}} \) .
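The role played by \( \mathfrak{h} \) being a subalgebra can be illustrated concretely. The sketch below is a hypothetical example, not part of the proof: for the strictly upper-triangular subalgebra of \( \mathfrak{{gl}}\left( {3;\mathbb{R}}\right) \), which is 2-step nilpotent, the exponential and logarithm are finite sums, and one checks directly that \( \log \left( {{e}^{X}{e}^{Y}}\right) = X + Y + \frac{1}{2}\left\lbrack {X, Y}\right\rbrack \) lies again in \( \mathfrak{h} \).

```python
# h = strictly upper-triangular 3x3 matrices (2-step nilpotent), so
# exp and log are finite sums and Baker-Campbell-Hausdorff truncates:
#     log(e^X e^Y) = X + Y + (1/2)[X, Y]  ∈  h.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(A, B, s=1.0):            # A + s*B
    return [[A[i][j] + s * B[i][j] for j in range(3)] for i in range(3)]

I3 = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]

def exp_nil(X):                  # X^3 = 0, so e^X = I + X + X^2/2 exactly
    return add(add(I3, X), mul(X, X), 0.5)

def log_unipotent(M):            # M = I + K with K^3 = 0: log M = K - K^2/2
    K = add(M, I3, -1.0)
    return add(K, mul(K, K), -0.5)

X = [[0, 1.0, 0.3], [0, 0, 2.0], [0, 0, 0]]
Y = [[0, -0.5, 1.0], [0, 0, 0.7], [0, 0, 0]]
Z = log_unipotent(mul(exp_nil(X), exp_nil(Y)))
bch = add(add(X, Y), add(mul(X, Y), mul(Y, X), -1.0), 0.5)  # X+Y+[X,Y]/2

print(Z)        # strictly upper triangular, equal to bch
```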
Then, as shown in the proof of Theorem 3.42, there exist neighborhoods \( U \) and \( V \) of the origin in \( \mathfrak{h} \) and \( D \) , respectively, and a neighborhood \( W \) of \( I \) in \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) with the following properties: Each \( A \in W \) can be written uniquely as \[ A = {e}^{X}{e}^{Y},\;X \in U, Y \in V, \] (5.24) in such a way that \( X \) and \( Y \) depend continuously on \( A \) . We think of the decomposition in (5.24) as our local coordinates in a neighborhood of the identity in \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . If \( X \) is a small element of \( \mathfrak{h} \), the decomposition of \( {e}^{X} \) is just \( {e}^{X}{e}^{0} \) . If we take the product of two elements of the form \( {e}^{{X}_{1}}{e}^{{X}_{2}} \), with \( {X}_{1} \) and \( {X}_{2} \) small elements of \( \mathfrak{h} \) , then since \( \mathfrak{h} \) is a subalgebra, if we combine the exponentials as \( {e}^{{X}_{1}}{e}^{{X}_{2}} = {e}^{{X}_{3}} \) by means of the Baker-Campbell-Hausdorff formula, \( {X}_{3} \) will again be in \( \mathfrak{h} \) . Thus, if we take a small number of products as in (5.23) with the \( {X}_{j} \) ’s being small elements of \( \mathfrak{h} \), we will move from the identity in the \( X \) -direction in the decomposition (5.24). Globally, however, \( H \) may wind around and come back to points in \( W \) of the form (5.24) with \( Y \neq 0 \) . (See Figure 5.3.) Indeed, as the example of the "irrational line" in (5.22) shows, there may be elements of \( H \) in \( W \) with arbitrarily small nonzero values of \( Y \) . Nevertheless, we will see that the set of \( Y \) values that occurs is at most countable. Lemma 5.21. Decompose \( \mathfrak{{gl}}\left( {n;\mathbb{C}}\right) \) as \( \mathfrak{h} \oplus D \) and let \( V \) be a neighborhood of the origin in \( D \) as in (5.24). 
If \( E \subset V \) is defined by \[ E = \left\{ {Y \in V \mid {e}^{Y} \in H}\right\} \] then \( E \) is at most countable. Assuming the lemma, we may now prove Theorem 5.20. ![a7bfd4a7-7795-4350-a407-6ad11be11f96_141_0.jpg](images/a7bfd4a7-7795-4350-a407-6ad11be11f96_141_0.jpg) Fig. 5.3 The black lines indicate the portion of \( H \) in the set \( W \) . The group \( H \) intersects \( {e}^{V} \) in at most countably many points Proof of Theorem 5.20. As we have already observed, it suffices to show that the Lie algebra of \( H \) is \( \mathfrak{h} \) . Let \( {\mathfrak{h}}^{\prime } \) be the Lie algebra of \( H \), which clearly contains \( \mathfrak{h} \) . For \( Z \in {\mathfrak{h}}^{\prime } \), we may write, for all sufficiently small \( t \) , \[ {e}^{tZ} = {e}^{X\left( t\right) }{e}^{Y\left( t\right) } \] where \( X\left( t\right) \in U \subset \mathfrak{h} \) and \( Y\left( t\right) \in V \subset D \) and where \( X\left( t\right) \) and \( Y\left( t\right) \) are continuous functions of \( t \) . Since \( Z \) is in the Lie algebra of \( H \), we have \( {e}^{tZ} \in H \) for all \( t \) . Since, also, \( {e}^{X\left( t\right) } \) is in the group \( H \), we conclude that \( {e}^{Y\left( t\right) } \) is in \( H \) for all sufficiently small \( t \) . If \( Y\left( t\right) \) were not constant, then it would take on uncountably many values, which would mean that \( E \) is uncountable, violating Lemma 5.21. So, \( Y\left( t\right) \) must be constant, and since \( Y\left( 0\right) = 0 \), this means that \( Y\left( t\right) \) is identically equal to zero. Thus, for small \( t \), we have \( {e}^{tZ} = {e}^{X\left( t\right) } \) and, therefore, \( {tZ} = X\left( t\right) \in \mathfrak{h} \) . This means \( Z \in \mathfrak{h} \) and we conclude that \( {\mathfrak{h}}^{\prime } \subset \mathfrak{h} \) . Before proving Lemma 5.21, we prove another lemma. Lemma 5.22. 
Pick a basis for \( \mathfrak{h} \) and call an element of \( \mathfrak{h} \) rational if its coefficients with respect to this basis are rational. Then for every \( \delta > 0 \) and every \( A \in H \), there exist rational elements \( {R}_{1},\ldots ,{R}_{m} \) of \( \mathfrak{h} \) such that \[ A = {e}^{{R}_{1}}{e}^{{R}_{2}}\cdots {e}^{{R}_{m}}{e}^{X} \] where \( X \) is in \( \mathfrak{h} \) and \( \parallel X\parallel < \delta \) . Suppose we take \( \delta \) small enough that the ball of radius \( \delta \) in \( \mathfrak{h} \) is contained in \( U \) . Then since there are only countably many \( m \) -tuples of the form \( \left( {{R}_{1},\ldots ,{R}_{m}}\right) \) with \( {R}_{j} \) rational, the lemma tells us that \( H \) can be covered by countably many translates of the set \( {e}^{U} \) . Proof. Choose \( \varepsilon > 0 \) so that for all \( X, Y \in \mathfrak{h} \) with \( \parallel X\parallel < \varepsilon \) and \( \parallel Y\parallel < \varepsilon \), the Baker-Campbell-Hausdorff formula holds for \( X \) and \( Y \) . Let \( C\left( {\cdot , \cdot }\right) \) denote the right-hand side of the formula, so that \[ {e}^{X}{e}^{Y} = {e}^{C\left( {X, Y}\right) } \] whenever \( \parallel X\parallel ,\parallel Y\parallel < \varepsilon \) . It is not hard to see that \( C\left( {\cdot , \cdot }\right) \) is continuous. Now, if the lemma holds for some \( \delta \), it also holds for any \( {\delta }^{\prime } > \delta \) . Thus, it is harmless to assume \( \delta \) is less than \( \varepsilon \) and small enough that if \( \parallel X\parallel ,\parallel Y\parallel < \delta \), we have \( \parallel C\left( {X, Y}\right) \parallel < \varepsilon \) . Since \( {e}^{X} = {\left( {e}^{X/k}\right) }^{k} \), every element \( A \) of \( H \) can be written as \[ A = {e}^{{X}_{1}}\cdots {e}^{{X}_{N}} \] (5.25) with \( {X}_{j} \in \mathfrak{h} \) and \( \begin{Vmatrix}{X}_{j}\end{Vmatrix} < \delta \) . We now proceed by induction on \( N \) .
If \( N = 0 \), then \( A = I = {e}^{0} \), and there is nothing to prove. Assume the lemma for \( A \) ’s that can be expressed as in (5.25) for some integer \( N \), and consider \( A \) of the form \[ A = {e}^{{X}_{1}}\cdots {e}^{{X}_{N}}{e}^{{X}_{N + 1}} \] (5.26) with \( {X}_{j} \in \mathfrak{h} \) and \( \begin{Vmatrix}{X}_{j}\end{Vmatrix} < \delta \) . Applying our induction hypothesis to \( {e}^{{X}_{1}}\cdots {e}^{{X}_{N}} \), we obtain \[ A = {e}^{{R}_{1}}\cdots {e}^{{R}_{m}}{e}^{X}{e}^{{X}_{N + 1}} \] \[ = {e}^{{R}_{1}}\cdots {e}^{{R}_{m}}{e}^{C\left( {X,{X}_{N + 1}}\right) }. \] where the \( {R}_{j} \) ’s are rational and \( \begin{Vmatrix}
1281_[张恭庆] Lecture Notes on Calculus of Variations
Definition 11.1
Definition 11.1 A function \( L \) is said to be quasi-convex if \( \forall A \in {\mathbb{R}}^{nN} \) (an \( n \times N \) matrix), \( \forall \) hypercube \( D \subset {\mathbb{R}}^{n},\forall v \in {W}_{0}^{1,\infty }\left( {D,{\mathbb{R}}^{N}}\right) \), we have \[ \operatorname{mes}\left( D\right) L\left( A\right) \leq {\int }_{D}L\left( {A + \nabla v\left( x\right) }\right) {dx}. \] The importance of quasi-convexity is that it ensures the weak sequential lower semi-continuity of a functional. The proof of the following theorem is rather lengthy; we refer the interested readers to \( \left\lbrack \mathrm{{Da}}\right\rbrack \), pp. 156-167. Theorem 11.5 (Morrey-Acerbi-Fusco) When \( 1 \leq p < \infty \), if \[ I\left( u\right) = {\int }_{\Omega }L\left( {\nabla u}\right) {dx} \] is weakly sequentially lower semi-continuous on \( {W}^{1, p}\left( {\Omega ,{\mathbb{R}}^{N}}\right) \) (or when \( p = \infty \), \( I \) is weakly-* sequentially lower semi-continuous), then \( L \) is quasi-convex. Conversely, if we add the growth conditions: \[ \left\{ \begin{array}{ll} \left| {L\left( A\right) }\right| \leq \alpha \left( {1 + \left| A\right| }\right) & p = 1 \\ - \alpha \left( {1 + {\left| A\right| }^{q}}\right) \leq L\left( A\right) \leq \alpha \left( {1 + {\left| A\right| }^{p}}\right) & 1 \leq q < p < \infty \\ \left| {L\left( A\right) }\right| \leq \eta \left( \left| A\right| \right) & p = \infty \end{array}\right. \] where \( \alpha > 0 \) is a constant and \( \eta \) is a non-decreasing continuous function, and if \( L \) is quasi-convex, then when \( 1 \leq p < \infty \), \( I \) is weakly sequentially lower semi-continuous on \( {W}^{1, p}\left( {\Omega ,{\mathbb{R}}^{N}}\right) \) (or when \( p = \infty \), \( I \) is weakly-* sequentially lower semi-continuous). What kind of functions are quasi-convex?
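Before answering, Definition 11.1 can be tested numerically for a concrete integrand. The sketch below takes \( n = N = 1 \), \( D = \left( {0,1}\right) \), the convex \( L\left( A\right) = {A}^{2} \), and the compactly supported perturbation \( v\left( x\right) = {\sin }^{2}\left( {\pi x}\right) \) (all choices arbitrary); since \( {\int }_{0}^{1}{v}^{\prime } = 0 \), the right-hand side equals \( {A}^{2} + {\pi }^{2}/2 \) exactly, consistent with the inequality \( \operatorname{mes}\left( D\right) L\left( A\right) \leq {\int }_{D}L\left( {A + {v}^{\prime }\left( x\right) }\right) {dx} \).

```python
import math

# Quasi-convexity inequality (Definition 11.1) for the convex L(A) = A^2,
# with n = N = 1, D = (0,1), v(x) = sin(pi*x)^2, so v(0) = v(1) = 0 and
# v'(x) = pi*sin(2*pi*x).  Exact right-hand side: A^2 + pi^2/2.
A = 0.7
N = 100_000
rhs = sum((A + math.pi * math.sin(2 * math.pi * (i + 0.5) / N)) ** 2
          for i in range(N)) / N        # midpoint rule for ∫ L(A + v') dx
lhs = A * A                              # mes(D) * L(A), with mes(D) = 1
print(lhs, rhs)                          # lhs <= rhs
```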
Suppose \( f : {\mathbb{R}}^{1} \rightarrow {\mathbb{R}}^{1} \) is convex and \( u \in {L}^{1}\left( \Omega \right) \) ; then from Jensen’s inequality \[ f\left( {\frac{1}{\operatorname{mes}\left( \Omega \right) }{\int }_{\Omega }u\left( x\right) {dx}}\right) \leq \frac{1}{\operatorname{mes}\left( \Omega \right) }{\int }_{\Omega }f\left( {u\left( x\right) }\right) {dx} \] we see that if \( L \) is convex, then \( L \) must be quasi-convex. In fact, if \( p \mapsto L\left( p\right) \) is convex, then \[ L\left( p\right) = L\left( {\operatorname{mes}{\left( D\right) }^{-1}{\int }_{D}\left( {p + \nabla \varphi \left( x\right) }\right) {dx}}\right) \] \[ \leq \operatorname{mes}{\left( D\right) }^{-1}{\int }_{D}L\left( {p + \nabla \varphi \left( x\right) }\right) {dx},\;\forall \varphi \in {W}_{0}^{1,\infty }\left( {D,{\mathbb{R}}^{N}}\right) . \] It is worth noting that when \( n = 1 \) or \( N = 1 \), quasi-convexity and convexity coincide. We first verify this statement by showing quasi-convexity implies convexity for \( n = 1 \) . Let \( \xi ,\eta \in {\mathbb{R}}^{N} \) and \( \lambda \in \left\lbrack {0,1}\right\rbrack \) . Let \[ {\xi }_{1} = \xi + \left( {1 - \lambda }\right) \eta \] \[ {\xi }_{2} = \xi - {\lambda \eta } \] Then \[ \xi = \lambda {\xi }_{1} + \left( {1 - \lambda }\right) {\xi }_{2} \] \[ \eta = {\xi }_{1} - {\xi }_{2} \] Define \[ \varphi \left( t\right) = \eta \left\{ \begin{array}{ll} t\left( {1 - \lambda }\right) , & t \in \lbrack 0,\lambda ) \\ \left( {1 - t}\right) \lambda , & t \in \lbrack \lambda ,1) \end{array}\right.
\] and substituting it into the quasi-convexity assumption, \[ L\left( \xi \right) \leq {\int }_{0}^{1}L\left( {\xi + {\varphi }^{\prime }\left( t\right) }\right) {dt} \] \[ = {\int }_{0}^{\lambda }L\left( {\xi }_{1}\right) {dt} + {\int }_{\lambda }^{1}L\left( {\xi }_{2}\right) {dt} \] \[ = {\lambda L}\left( {\xi }_{1}\right) + \left( {1 - \lambda }\right) L\left( {\xi }_{2}\right) \] which shows that \( p \mapsto L\left( p\right) \) is convex. Next, we verify quasi-convexity implies convexity for \( N = 1, n > 1 \) . That is, \( \forall {\xi }_{1},{\xi }_{2} \in {\mathbb{R}}^{n} \), we want to show \[ L\left( {\lambda {\xi }_{1} + \left( {1 - \lambda }\right) {\xi }_{2}}\right) \leq {\lambda L}\left( {\xi }_{1}\right) + \left( {1 - \lambda }\right) L\left( {\xi }_{2}\right) \] Take a hypercube \( D \subset \Omega \) ; without loss of generality, we may assume \( D = {\left\lbrack 0,1\right\rbrack }^{n} \) and (after an orthogonal change of the \( x \) -variable) that \( \eta = {\xi }_{1} - {\xi }_{2} \) points in the \( {x}_{1} \) -direction. Write \( \varphi \left( t\right) = \eta {\varphi }_{0}\left( t\right) \), where \( {\varphi }_{0} \) is the scalar profile in the definition of \( \varphi \) above. For \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \in D \), let \[ {u}_{k}\left( x\right) = \eta {k}^{-1}{\varphi }_{0}\left( {k{x}_{1}}\right) \] then \[ \nabla {u}_{k}\left( x\right) = \eta \left\{ \begin{array}{ll} \left( {1 - \lambda }\right) , & \left\{ {k{x}_{1}}\right\} \in \lbrack 0,\lambda ) \\ - \lambda , & \left\{ {k{x}_{1}}\right\} \in \lbrack \lambda ,1) \end{array}\right. \] where \( \{ y\} \) represents the fractional part of \( y \in {\mathbb{R}}^{1} \) . Furthermore, let \[ {v}_{k}\left( x\right) = \eta \min \left\{ {{k}^{-1}{\varphi }_{0}\left( {k{x}_{1}}\right) ,\operatorname{dist}\left( {x,\partial D}\right) }\right\} \] where \( \operatorname{dist}\left( {x,\partial D}\right) = \inf \left\{ {\mathop{\sup }\limits_{{1 \leq i \leq n}}\left| {{x}_{i} - {y}_{i}}\right| \mid y = \left( {{y}_{1},\ldots ,{y}_{n}}\right) \in \partial D}\right\} \) . Thus \( {\left. {v}_{k}\right| }_{\partial D} = 0 \), and \( {v}_{k} \) is Lipschitz: there exists a constant \( K > 0 \) such that \( \left| {{v}_{k}\left( x\right) - {v}_{k}\left( y\right) }\right| \leq K\parallel x - y\parallel \), whence \( {v}_{k} \in {W}_{0}^{1, p}\left( D\right) ,1 < p < \infty \) . In addition, \[ \operatorname{mes}\left\{ {x \in D \mid \nabla {u}_{k}\left( x\right) \neq \nabla {v}_{k}\left( x\right) }\right\} \rightarrow 0,\;k \rightarrow \infty . \] If we take \( {D}_{1} = \left\{ {x \in D \mid \nabla {u}_{k}\left( x\right) = \left( {1 - \lambda }\right) \eta }\right\} ,{D}_{2} = \left\{ {x \in D \mid \nabla {u}_{k}\left( x\right) = - {\lambda \eta }}\right\} \), then \( D = {D}_{1} \cup {D}_{2} \) . Since \( L \) is quasi-convex, \[ \operatorname{mes}\left( D\right) L\left( {\lambda {\xi }_{1} + \left( {1 - \lambda }\right) {\xi }_{2}}\right) \leq {\int }_{D}L\left( {\lambda {\xi }_{1} + \left( {1 - \lambda }\right) {\xi }_{2} + \nabla {v}_{k}\left( x\right) }\right) {dx}. \] Let \( \xi = \lambda {\xi }_{1} + \left( {1 - \lambda }\right) {\xi }_{2} \) ; then by the absolute continuity of integrals, we have \[ \lim {\int }_{D}L\left( {\xi + \nabla {v}_{k}\left( x\right) }\right) {dx} = \lim {\int }_{D}L\left( {\xi + \nabla {u}_{k}\left( x\right) }\right) {dx} \] \[ = {\int }_{{D}_{1}}L\left( {\xi }_{1}\right) {dx} + {\int }_{{D}_{2}}L\left( {\xi }_{2}\right) {dx} \] \[ = \left( {{\lambda L}\left( {\xi }_{1}\right) + \left( {1 - \lambda }\right) L\left( {\xi }_{2}\right) }\right) \operatorname{mes}\left( D\right) . \] This proves our assertion. However, there are quasi-convex functions which are not convex. For example, the determinant function \[ A \mapsto \det \left( A\right) \text{.} \] We only verify this for the case \( N = n = 2 \) . Given a matrix \( A = \left( {a}_{ij}\right) \), let \( u = \left( {{u}_{1},{u}_{2}}\right), P = \left( {p}_{j}^{i}\right) \), and \( L\left( P\right) = \det \left( P\right) \) .
Since \[ \det \left( {\nabla \varphi }\right) = {\partial }_{1}\left( {{\varphi }_{1}{\partial }_{2}{\varphi }_{2}}\right) - {\partial }_{2}\left( {{\varphi }_{1}{\partial }_{1}{\varphi }_{2}}\right) \] we have: \[ {\int }_{D}\det \left( {\nabla \varphi }\right) {dx} = 0,\;\forall \varphi \in {W}_{0}^{1,\infty }\left( {D,{\mathbb{R}}^{2}}\right) . \] Consequently, \[ \operatorname{mes}{\left( D\right) }^{-1}{\int }_{D}\det \left( {A + \nabla \varphi \left( x\right) }\right) {dx} \] \[ = \operatorname{mes}{\left( D\right) }^{-1}{\int }_{D}\left\lbrack {\det \left( A\right) + \det \left( {\nabla \varphi \left( x\right) }\right) + {a}_{11}{\partial }_{2}{\varphi }_{2} + {a}_{22}{\partial }_{1}{\varphi }_{1} - {a}_{12}{\partial }_{1}{\varphi }_{2} - {a}_{21}{\partial }_{2}{\varphi }_{1}}\right\rbrack {dx} \] \[ = \det \left( A\right) \text{.} \] Recall the Legendre-Hadamard condition from Lecture 6 in determining whether \( {u}_{0} \) is a weak minimum, \[ {L}_{{p}_{\alpha }^{j}{p}_{\beta }^{k}}\left( {x,{u}_{0}\left( x\right) ,\nabla {u}_{0}\left( x\right) }\right) {\pi }_{\alpha }^{j}{\pi }_{\beta }^{k} \geq 0,\;\forall \pi \in {\mathbb{R}}^{n \times N},\operatorname{rank}\left( \pi \right) = 1. \] This is related to convexity. In fact, if \( \forall \left( {x, u}\right) \in \Omega \times {\mathbb{R}}^{N}, p \mapsto L\left( {x, u, p}\right) \) is convex, and if \( L \in {C}^{2} \), then for any \( u \), the Legendre-Hadamard condition holds. However, the Legendre-Hadamard condition does not require \( L \) to be convex for all \( n \times N \) matrices; it only requires \( L \) to be convex along rank 1 directions.
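The identity \( {\int }_{D}\det \left( {\nabla \varphi }\right) {dx} = 0 \) used above can be checked numerically. In the sketch below \( D = {\left( 0,1\right) }^{2} \) and \( \varphi = \left( {\sin {\pi x}\sin {\pi y},\sin {\pi x}\sin 2{\pi y}}\right) \) is an arbitrarily chosen test field vanishing on \( \partial D \); the integral of \( \det \left( {\nabla \varphi }\right) \), computed with central differences and the midpoint rule, vanishes up to discretization error.

```python
import math

# det is a null Lagrangian (n = N = 2): for phi = 0 on the boundary of
# D = (0,1)^2, the integral of det(grad phi) over D vanishes.
# Test field (arbitrary choice): phi1 = sin(pi x) sin(pi y),
#                                phi2 = sin(pi x) sin(2 pi y).
def phi1(x, y): return math.sin(math.pi * x) * math.sin(math.pi * y)
def phi2(x, y): return math.sin(math.pi * x) * math.sin(2 * math.pi * y)

N = 200
h = 1.0 / N
total = 0.0
for i in range(N):
    for j in range(N):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        d1p1 = (phi1(x + h, y) - phi1(x - h, y)) / (2 * h)   # ∂1 phi1
        d2p1 = (phi1(x, y + h) - phi1(x, y - h)) / (2 * h)   # ∂2 phi1
        d1p2 = (phi2(x + h, y) - phi2(x - h, y)) / (2 * h)   # ∂1 phi2
        d2p2 = (phi2(x, y + h) - phi2(x, y - h)) / (2 * h)   # ∂2 phi2
        total += (d1p1 * d2p2 - d2p1 * d1p2) * h * h         # det(grad phi)
print(total)   # ≈ 0
```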
We shall call the Lagrangian \( L \) rank 1 convex (for brevity, we omit \( \left( {x, u}\right) \in \Omega \times {\mathbb{R}}^{N} \) ) if it satisfies the following conditions: \[ L\left( {{\lambda B} + \left( {1 - \lambda }\right) C}\right) \leq {\lambda L}\left( B\right) + \left( {1 - \lambda }\right) L\left( C\right) , \] \[ \forall \lambda \in \left\lbrack {0,1}\right\rbrack ,\;\forall B, C \in {M}^{n \times N},\;\operatorname{rank}\left( {B - C}\right) = 1 \] In summary, convex \( \Rightarrow \) quasi-convex \( \Rightarrow \) rank \( 1 \) convex \( \overset{\text{ for }n = 1\text{ or }N = 1}{ \Rightarrow } \) convex. Returning to the existence problem, with the assistance of the Morrey-Acerbi-Fusco theorem, we have a more general existence theorem. Theorem 11.6 Suppose \( L : {M}^{n \times N}\overset{C}{ \rightarrow }{\mathbb{R}}^{1} \) is quasi-convex and there exist constants \( {C}_{2} > {C}_{1} > 0 \) such that \[ {C}_{1}{\left| A\right| }^{p} \leq L\left( A\right) \leq {C}_{2}\left( {1 + {\l
1329_[肖梁] Abstract Algebra (2022F)
Definition 6.3.3
Definition 6.3.3. Let \( p \) be a prime number. A finite group \( G \) is called a \( p \) -group if \( \left| G\right| \) is a power of \( p \) . Proposition 6.3.4. For a nontrivial p-group \( G, Z\left( G\right) \) is nontrivial. Proof. We use the class equation for the \( p \) -group \( G \) : ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_41_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_41_0.jpg) From this, we see that \( p \) divides \( \left| {Z\left( G\right) }\right| \), so \( Z\left( G\right) \) is nontrivial. Extended Readings after Section 6 6.4. Automorphisms of a group. Let \( G \) be a group. Recall that \[ \operatorname{Aut}\left( G\right) = \{ \phi : G\overset{ \simeq }{ \rightarrow }G\text{ isomorphism }\} \] Recall that the conjugation gives a homomorphism \[ \text{Ad} : G \rightarrow \operatorname{Aut}\left( G\right) \] \[ g \mapsto \left( {{\operatorname{Ad}}_{g} : h \mapsto {gh}{g}^{-1}}\right) . \] We have shown in Properties 6.1.6(2) that \( \ker \left( \mathrm{{Ad}}\right) = Z\left( G\right) \) . Definition 6.4.1. The subgroup \( \operatorname{Ad}\left( G\right) \subseteq \operatorname{Aut}\left( G\right) \), denoted by \( \operatorname{Inn}\left( G\right) \), is called the group of inner automorphisms. The following lemma will show that \( \operatorname{Inn}\left( G\right) \) is a normal subgroup of \( \operatorname{Aut}\left( G\right) \) . The quotient \[ \operatorname{Out}\left( G\right) \mathrel{\text{:=}} \operatorname{Aut}\left( G\right) /\operatorname{Inn}\left( G\right) \] is called the group of outer automorphisms of \( G \) . Lemma 6.4.2. For a group \( G,\operatorname{Inn}\left( G\right) \vartriangleleft \operatorname{Aut}\left( G\right) \) . Proof. We need to show that, if \( \sigma : G\overset{ \simeq }{ \rightarrow }G \) is an automorphism, then \( \sigma \operatorname{Inn}\left( G\right) {\sigma }^{-1} = \operatorname{Inn}\left( G\right) \) .
(In fact, it suffices to prove " \( \subseteq \) ", and " \( \supseteq \) " follows from the inclusion \( \subseteq \) for \( {\sigma }^{-1} \) .) For this, take \( {\operatorname{Ad}}_{g} \in \operatorname{Inn}\left( G\right) \) for \( g \in G \) . We claim that \[ \sigma \circ {\operatorname{Ad}}_{g} \circ {\sigma }^{-1} : G \rightarrow G \] as an automorphism is equal to \( {\operatorname{Ad}}_{\sigma \left( g\right) } \), so it belongs to \( \operatorname{Inn}\left( G\right) \) ; and thus proving \( \sigma \operatorname{Inn}\left( G\right) {\sigma }^{-1} \subseteq \) \( \operatorname{Inn}\left( G\right) \) . Indeed, we have \[ \sigma \circ {\operatorname{Ad}}_{g} \circ {\sigma }^{-1}\left( h\right) = \sigma \left( {{\operatorname{Ad}}_{g}\left( {{\sigma }^{-1}\left( h\right) }\right) }\right) = \sigma \left( {g{\sigma }^{-1}\left( h\right) {g}^{-1}}\right) \] \[ = \sigma \left( g\right) \sigma \left( {{\sigma }^{-1}\left( h\right) }\right) \sigma {\left( g\right) }^{-1} = \sigma \left( g\right) {h\sigma }{\left( g\right) }^{-1} = {\operatorname{Ad}}_{\sigma \left( g\right) }\left( h\right) . \] Example 6.4.3. Consider \( G = {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) \), the conjugation action gives \( \mathrm{{Ad}} : {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) \rightarrow \) \( \operatorname{Aut}\left( G\right) \), then \[ \ker \left( \mathrm{{Ad}}\right) = Z\left( {{\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) }\right) = \left\{ {A \in {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) \mid {AB} = {BA},\text{ for all }B \in {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) }\right\} \] \[ = \left\{ {a \cdot {I}_{n} \mid a \in {\mathbb{Q}}^{ \times }}\right\} \cong {\mathbb{Q}}^{ \times }. \] Thus we have \( \operatorname{Inn}\left( G\right) \cong {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) /{\mathbb{Q}}^{ \times } = : {\mathrm{{PGL}}}_{n}\left( \mathbb{Q}\right) \) (the projective general linear group). What about automorphisms that are not inner? 
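Before turning to that question, the computation in Lemma 6.4.2 can be verified mechanically on a small group. The sketch below represents elements of \( {S}_{3} \) as permutation tuples and checks \( \sigma \circ {\operatorname{Ad}}_{g} \circ {\sigma }^{-1} = {\operatorname{Ad}}_{\sigma \left( g\right) } \) for every \( g \) and every \( \sigma \) of the form \( {\operatorname{Ad}}_{s} \) (inner automorphisms suffice for illustration here, since every automorphism of \( {S}_{3} \) is inner).

```python
from itertools import permutations

# Verify sigma ∘ Ad_g ∘ sigma^{-1} = Ad_{sigma(g)} in G = S_3.
# A permutation is a tuple p sending i to p[i].
def comp(p, q):                       # composition (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inv(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def Ad(g):                            # inner automorphism h -> g h g^{-1}
    return lambda h: comp(comp(g, h), inv(g))

G = list(permutations(range(3)))
# sigma = Ad_s, sigma^{-1} = Ad_{s^{-1}}, sigma(g) = Ad_s(g)
ok = all(Ad(s)(Ad(g)(Ad(inv(s))(h))) == Ad(Ad(s)(g))(h)
         for s in G for g in G for h in G)
print(ok)  # True
```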
The automorphism \[ \psi : {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) \rightarrow {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) \] \[ A \mapsto {}^{t}{A}^{-1} \] satisfies \( \psi \left( {AB}\right) = \psi \left( A\right) \psi \left( B\right) \) . Together with the inner automorphisms, this gives a subgroup \[ {\operatorname{PGL}}_{n}\left( \mathbb{Q}\right) \rtimes \{ 1,\psi \} \subseteq \operatorname{Aut}\left( {{\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) }\right) . \] We remark that if we confine to automorphisms of \( G \) that are given by polynomials, then \[ \operatorname{Aut}{\left( {\mathrm{{GL}}}_{n}\left( \mathbb{Q}\right) \right) }^{\mathrm{{alg}}} \cong {\mathrm{{PGL}}}_{n}\left( \mathbb{Q}\right) \rtimes \{ 1,\psi \} . \] Example 6.4.4. For \( G = {S}_{n} \), the conjugation action \( \operatorname{Ad} : {S}_{n} \rightarrow \operatorname{Aut}\left( {S}_{n}\right) \) is injective. Here is an interesting fact: If \( n \neq 6 \), \( \operatorname{Ad} : {S}_{n} \rightarrow \operatorname{Aut}\left( {S}_{n}\right) \) is an isomorphism, i.e. all automorphisms of \( {S}_{n} \) are "inner". When \( n = 6 \), there exists an automorphism \( \psi : {S}_{6}\overset{ \simeq }{ \rightarrow }{S}_{6} \) that is not inner, given by \[ \psi \left( \left( {12}\right) \right) = \left( {12}\right) \left( {34}\right) \left( {56}\right) ,\;\psi \left( \left( {23}\right) \right) = \left( {14}\right) \left( {25}\right) \left( {36}\right) ,\;\psi \left( \left( {34}\right) \right) = \left( {13}\right) \left( {24}\right) \left( {56}\right) , \] \[ \psi \left( \left( {45}\right) \right) = \left( {12}\right) \left( {36}\right) \left( {45}\right) ,\;\text{ and }\;\psi \left( \left( {56}\right) \right) = \left( {14}\right) \left( {23}\right) \left( {56}\right) . \] In fact, one can prove that \( \operatorname{Aut}\left( {S}_{6}\right) = {S}_{6} \rtimes \{ 1,\psi \} \) . ## 6.5. Characteristic subgroups. Definition 6.5.1. Let \( G \) be a group.
We say that a subgroup \( H \) is characteristic, denoted as \( H\operatorname{char}G \), if for any automorphism \( \sigma \) of \( G,\sigma \left( H\right) = H \) . Properties 6.5.2. (1) If \( H \leq G \) is the unique subgroup of that order, then \( H \) is characteristic. For example, in \( {\mathbf{Z}}_{n} \), for every \( d \mid n,\langle d\rangle \subseteq {\mathbf{Z}}_{n} \) is characteristic. (2) Characteristic subgroups are normal. Indeed, if \( H\operatorname{char}G \) and \( g \in G,{\operatorname{Ad}}_{g} : G \rightarrow G \) is an automorphism. The definition of characteristic subgroups gives: \[ {\operatorname{Ad}}_{g}\left( H\right) = H \Rightarrow H \trianglelefteq G. \] (3) If \( K\operatorname{char}H \) and \( H \trianglelefteq G \), then \( K \trianglelefteq G \) . If \( K\operatorname{char}H \) and \( H\operatorname{char}G \), then \( K\operatorname{char}G \), i.e. characteristic subgroup is transitive. (Proving the first statement: given any \( g \in G \), as \( H \trianglelefteq G \), we have \( {gH}{g}^{-1} = H \) . Thus \[ {\operatorname{Ad}}_{g} : H \rightarrow H \] \[ h \mapsto {gh}{g}^{-1} \] is an automorphism. The property of characteristic subgroups implies that \( {\operatorname{Ad}}_{g}\left( K\right) = \) \( K \) . ## 7. Sylow's THEOREMS In this lecture, we focus on one of the most important tools in the study of finite groups: Sylow's theorems. In some sense, the main motivation of Sylow's theorem is to find abstract ways to - study groups in a way similar to, for \( n = {p}_{1}^{{\alpha }_{1}}\cdots {p}_{r}^{{\alpha }_{r}} \), having \( {\mathbf{Z}}_{n} = {\mathbf{Z}}_{{p}_{1}^{{\alpha }_{1}}} \times \cdots {\mathbf{Z}}_{{p}_{r}^{{\alpha }_{r}}} \) , and hence reducing the study of \( {\mathbf{Z}}_{n} \) to each of prime \( p \) ; (namely to separating the structure to the part for each prime \( p \) ) and - consider the group \( G \) acting on some natural set \( X \) (as a way to re-present the group). ## 7.1. 
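Property (1) admits a quick brute-force illustration in \( {\mathbf{Z}}_{n} \): every automorphism of \( {\mathbf{Z}}_{n} \) is multiplication by a unit mod \( n \), and each subgroup \( \langle d\rangle \) is carried to itself. A minimal sketch (the choice \( n = {12} \) is arbitrary):

```python
from math import gcd

n = 12
# Aut(Z_n) is exactly multiplication by the units mod n
units = [u for u in range(1, n) if gcd(u, n) == 1]

for d in range(1, n + 1):
    H = {d * k % n for k in range(n)}          # the subgroup <d> of Z_n
    for u in units:
        assert {u * h % n for h in H} == H     # sigma_u(<d>) = <d>
```

Since \( \langle d\rangle = \langle \gcd \left( {d, n}\right) \rangle \), the loop over all \( d \) covers every subgroup of \( {\mathbf{Z}}_{n} \).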
Statement of Sylow's theorems. Definition 7.1.1. Fix a prime number \( p \) . (1) A \( p \) -group is a finite group whose order is a power of \( p \) . (2) If \( G \) is a finite group of order \( \left| G\right| = {p}^{r}m \) with \( r, m \in \mathbb{N} \) and \( p \nmid m \), a subgroup \( H \) of \( G \) of order exactly \( {p}^{r} \) is called a Sylow \( p \) -subgroup or \( p \) -Sylow subgroup. Write \[ {\operatorname{Syl}}_{p}\left( G\right) \mathrel{\text{:=}} \{ \text{ Sylow }p\text{-subgroups of }G\} \;\text{ and }\;{n}_{p} \mathrel{\text{:=}} \left| {{\operatorname{Syl}}_{p}\left( G\right) }\right| . \] Theorem 7.1.2 (Sylow’s theorem). Let \( G \) be a finite group with \( \left| G\right| = {p}^{r}m \) with \( r, m \in \mathbb{N} \) and \( p \nmid m \) . - (First Sylow Theorem) Sylow p-subgroups exist. - (Second Sylow Theorem) If \( P \) is a Sylow p-subgroup of \( G \), and \( Q \leq G \) is a subgroup of p-power order, then there exists \( g \in G \) such that \( Q \leq {gP}{g}^{-1} \) (note \( {gP}{g}^{-1} \) is also a Sylow p-subgroup). In other words, we have - all Sylow p-subgroups are conjugate; and - every subgroup of p-power order is contained in a Sylow p-subgroup. - (Third Sylow Theorem) The number \( {n}_{p} = \left| {{\operatorname{Syl}}_{p}\left( G\right) }\right| \) satisfies (1) \( {n}_{p} \equiv 1{\;\operatorname{mod}\;p} \), and (2) \( {n}_{p} \mid m \) . ## 7.2. Proof of Sylow's theorems and their corollaries. 7.2.1. Proof of First Sylow Theorem. We use induction on \( \left| G\right| \) . When \( \left| G\right| = 1 \), there is nothing to prove. Suppose that the First Sylow Theorem has been proved for finite groups of order \( < n \) . Let \( G \) be a finite group of order \( n = {p}^{r}m \) with \( r, m \in \mathbb{N} \) and \( p \nmid m \) . Case 1: \( p \nmid \left| G\right| \), i.e. \( r = 0 \) . Then \( \{ 1\} \subseteq G \) is the Sylow \( p \) -subgroup of \( G \) . 
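All three statements can be checked by brute force in a small example. The sketch below (an illustrative computation, not part of the text; it takes \( G = {S}_{4} \), of order \( {24} = {2}^{3} \cdot 3 \)) constructs one Sylow subgroup per prime and counts its conjugates, which by the Second Sylow Theorem is exactly \( {n}_{p} \):

```python
from itertools import permutations

def mul(p, q):                 # (p*q)(i) = p(q(i)); permutations as 0-indexed tuples
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    r = [0] * 4
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def closure(gens):             # subgroup generated by gens
    H = {tuple(range(4))} | set(gens)
    while True:
        new = {mul(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

G = list(permutations(range(4)))              # S4, |G| = 24 = 2^3 * 3
P2 = closure([(1, 2, 3, 0), (2, 1, 0, 3)])    # <4-cycle, diagonal flip>: dihedral, order 8
P3 = closure([(1, 2, 0, 3)])                  # <3-cycle>: order 3
assert len(P2) == 8 and len(P3) == 3          # Sylow subgroups exist (First Theorem)

def n_p(P):                                   # number of distinct conjugates g P g^{-1}
    return len({frozenset(mul(mul(g, h), inv(g)) for h in P) for g in G})

assert n_p(P2) == 3 and 3 % 2 == 1 and 3 % 3 == 0    # n_2 = 3: ≡ 1 mod 2, divides m = 3
assert n_p(P3) == 4 and 4 % 3 == 1 and 8 % 4 == 0    # n_3 = 4: ≡ 1 mod 3, divides m = 8
```

The four Sylow 3-subgroups are the cyclic groups generated by the eight 3-cycles (two generators each), matching the count produced by the conjugation orbit.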
Case 2: If \( p \) divides \( \left| {Z\left( G\right) }\right| \), then \( Z\left( G\right) \) is a finitely generated abelian group; so \[ Z\left( G\right) = \underset{p\text{-part }}{\underbrace{{\mathbf{Z}}_{{p}^{{r}_{1}}} \times \cdots \times {\mathbf{Z}}_{{p}^
1092_(GTM249)Classical Fourier Analysis
Definition 2.1.9
Definition 2.1.9. Given a function \( g \) on \( {\mathbf{R}}^{n} \) and \( \varepsilon > 0 \), we denote by \( {g}_{\varepsilon } \) the following function: \[ {g}_{\varepsilon }\left( x\right) = {\varepsilon }^{-n}g\left( {{\varepsilon }^{-1}x}\right) . \] (2.1.7) As observed in Example 1.2.17, if \( g \) is an integrable function with integral equal to 1 , then the family defined by (2.1.7) is an approximate identity. Therefore, convolution with \( {g}_{\varepsilon } \) is an averaging operation. The Hardy-Littlewood maximal function \( \mathcal{M}\left( f\right) \) is obtained as the supremum of the averages of a function \( f \) with respect to the dilates of the kernel \( k = {v}_{n}^{-1}{\chi }_{B\left( {0,1}\right) } \) in \( {\mathbf{R}}^{n} \) ; here \( {v}_{n} \) is the volume of the unit ball \( B\left( {0,1}\right) \) . Indeed, we have \[ \mathcal{M}\left( f\right) \left( x\right) = \mathop{\sup }\limits_{{\varepsilon > 0}}\frac{1}{{v}_{n}{\varepsilon }^{n}}{\int }_{{\mathbf{R}}^{n}}\left| {f\left( {x - y}\right) }\right| {\chi }_{B\left( {0,1}\right) }\left( \frac{y}{\varepsilon }\right) {dy} \] \[ = \mathop{\sup }\limits_{{\varepsilon > 0}}\left( {\left| f\right| * {k}_{\varepsilon }}\right) \left( x\right) . \] Note that the function \( k = {v}_{n}^{-1}{\chi }_{B\left( {0,1}\right) } \) has integral equal to 1, and convolving with \( {k}_{\varepsilon } \) is an averaging operation. It turns out that the Hardy-Littlewood maximal function controls the averages of a function with respect to any radially decreasing \( {L}^{1} \) function. Recall that a function \( f \) on \( {\mathbf{R}}^{n} \) is called radial if \( f\left( x\right) = f\left( y\right) \) whenever \( \left| x\right| = \left| y\right| \) . Note that a radial function \( f \) on \( {\mathbf{R}}^{n} \) has the form \( f\left( x\right) = \varphi \left( \left| x\right| \right) \) for some function \( \varphi \) on \( {\mathbf{R}}^{ + } \) . We have the following result. Theorem 2.1.10. 
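In dimension one the definition reads \( \mathcal{M}\left( f\right) \left( x\right) = \mathop{\sup }\limits_{{\varepsilon > 0}}\frac{1}{2\varepsilon }{\int }_{x - \varepsilon }^{x + \varepsilon }\left| f\right| {dy} \). A numeric sketch (illustrative assumptions: \( f = {\chi }_{\left\lbrack 0,1\right\rbrack } \), and the supremum is taken over a finite grid of radii) shows the two regimes: \( \mathcal{M}f = 1 \) inside the support, and a best radius balancing mass against normalization far away.

```python
def avg(x, eps):
    # average of chi_[0,1] over [x - eps, x + eps], computed exactly
    overlap = max(0.0, min(1.0, x + eps) - max(0.0, x - eps))
    return overlap / (2 * eps)

def maximal(x, radii):
    # discrete stand-in for sup over eps > 0
    return max(avg(x, e) for e in radii)

radii = [k / 1000 for k in range(1, 8001)]     # eps in (0, 8]
assert abs(maximal(0.5, radii) - 1.0) < 1e-9   # small balls inside [0,1] give average 1
assert abs(maximal(2.0, radii) - 0.25) < 1e-9  # best radius eps = 2 captures all the mass
```

At \( x = 2 \) the average is \( \min \left( {1,\varepsilon - 1}\right) /\left( {2\varepsilon }\right) \) for \( \varepsilon \geq 1 \), maximized at \( \varepsilon = 2 \) with value \( 1/4 \); this is the \( \sim 1/\left( {2\left| x\right| }\right) \) decay typical of \( \mathcal{M} \) applied to compactly supported data.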
Let \( k \geq 0 \) be a function on \( \lbrack 0,\infty ) \) that is continuous except at a finite number of points. Suppose that \( K\left( x\right) = k\left( \left| x\right| \right) \) is an integrable function on \( {\mathbf{R}}^{n} \) that satisfies \[ K\left( x\right) \geq K\left( y\right) ,\;\text{ whenever }\left| x\right| \leq \left| y\right| , \] (2.1.8) i.e., \( k \) is decreasing. Then the following estimate is true: \[ \mathop{\sup }\limits_{{\varepsilon > 0}}\left( {\left| f\right| * {K}_{\varepsilon }}\right) \left( x\right) \leq \parallel K{\parallel }_{{L}^{1}}\mathcal{M}\left( f\right) \left( x\right) \] (2.1.9) for all locally integrable functions \( f \) on \( {\mathbf{R}}^{n} \) . Proof. We prove (2.1.9) when \( K \) is radial, satisfies (2.1.8), and is compactly supported and continuous. When this case is established, select a sequence \( {K}_{j} \) of radial, compactly supported, continuous functions that increase to \( K \) as \( j \rightarrow \infty \) . This is possible, since the function \( k \) is continuous except at a finite number of points. If (2.1.9) holds for each \( {K}_{j} \), passing to the limit implies that (2.1.9) also holds for \( K \) . Next, we observe that it suffices to prove (2.1.9) for \( x = 0 \) . When this case is established, replacing \( f\left( t\right) \) by \( f\left( {t + x}\right) \) implies that (2.1.9) holds for all \( x \) . Let us now fix a radial, continuous, and compactly supported function \( K \) with support in the ball \( B\left( {0, R}\right) \), satisfying (2.1.8). Also fix an \( f \in {L}_{\text{loc }}^{1} \) and take \( x = 0 \) . Let \( {e}_{1} \) be the vector \( \left( {1,0,0,\ldots ,0}\right) \) on the unit sphere \( {\mathbf{S}}^{n - 1} \) . 
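A small numeric aside (the tent kernel and the grid sizes are arbitrary choices): the normalization in (2.1.7) makes every dilate \( {K}_{\varepsilon } \) have the same \( {L}^{1} \) norm as \( K \), which is why the bound \( \parallel K{\parallel }_{{L}^{1}}\mathcal{M}\left( f\right) \) in (2.1.9) can be uniform in \( \varepsilon \).

```python
def K(x):                      # tent kernel: radial, decreasing, support [-1, 1], integral 1
    return max(0.0, 1.0 - abs(x))

def l1_norm(g, lo, hi, steps=20000):
    # midpoint-rule approximation of the integral of |g| over [lo, hi]
    h = (hi - lo) / steps
    return sum(abs(g(lo + (k + 0.5) * h)) for k in range(steps)) * h

for eps in (0.5, 1.0, 2.0):
    K_eps = lambda x, e=eps: K(x / e) / e      # n = 1 dilation from (2.1.7)
    assert abs(l1_norm(K_eps, -eps, eps) - l1_norm(K, -1, 1)) < 1e-6
```

The invariance is just the change of variables \( y = {\varepsilon }^{-1}x \); the code merely confirms the bookkeeping of the \( {\varepsilon }^{-n} \) factor.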
Polar coordinates give \[ {\int }_{{\mathbf{R}}^{n}}\left| {f\left( y\right) }\right| {K}_{\varepsilon }\left( {-y}\right) {dy} = {\int }_{0}^{\infty }{\int }_{{\mathbf{S}}^{n - 1}}\left| {f\left( {r\theta }\right) }\right| {K}_{\varepsilon }\left( {r{e}_{1}}\right) {r}^{n - 1}{d\theta dr}. \] (2.1.10) Define functions \[ F\left( r\right) = {\int }_{{\mathbf{S}}^{n - 1}}\left| {f\left( {r\theta }\right) }\right| {d\theta } \] \[ G\left( r\right) = {\int }_{0}^{r}F\left( s\right) {s}^{n - 1}{ds} \] where \( {d\theta } \) denotes surface measure on \( {\mathbf{S}}^{n - 1} \) . Using these functions,(2.1.10), and integration by parts, we obtain \[ {\int }_{{\mathbf{R}}^{n}}\left| {f\left( y\right) }\right| {K}_{\varepsilon }\left( y\right) {dy} = {\int }_{0}^{\varepsilon R}F\left( r\right) {r}^{n - 1}{K}_{\varepsilon }\left( {r{e}_{1}}\right) {dr} \] \[ = G\left( {\varepsilon R}\right) {K}_{\varepsilon }\left( {{\varepsilon R}{e}_{1}}\right) - G\left( 0\right) {K}_{\varepsilon }\left( 0\right) - {\int }_{0}^{\varepsilon R}G\left( r\right) d{K}_{\varepsilon }\left( {r{e}_{1}}\right) \] \[ = {\int }_{0}^{\infty }G\left( r\right) d\left( {-{K}_{\varepsilon }\left( {r{e}_{1}}\right) }\right) \] (2.1.11) where two of the integrals are of Lebesgue-Stieltjes type and we used our assumptions that \( G\left( 0\right) = 0,{K}_{\varepsilon }\left( 0\right) < \infty, G\left( {\varepsilon R}\right) < \infty \), and \( {K}_{\varepsilon }\left( {{\varepsilon R}{e}_{1}}\right) = 0 \) . Let \( {v}_{n} \) be the volume of the unit ball in \( {\mathbf{R}}^{n} \) . 
Since \[ G\left( r\right) = {\int }_{0}^{r}F\left( s\right) {s}^{n - 1}{ds} = {\int }_{\left| y\right| \leq r}\left| {f\left( y\right) }\right| {dy} \leq \mathcal{M}\left( f\right) \left( 0\right) {v}_{n}{r}^{n}, \] it follows that the expression in (2.1.11) is dominated by \[ \mathcal{M}\left( f\right) \left( 0\right) {v}_{n}{\int }_{0}^{\infty }{r}^{n}d\left( {-{K}_{\varepsilon }\left( {r{e}_{1}}\right) }\right) = \mathcal{M}\left( f\right) \left( 0\right) {\int }_{0}^{\infty }n{v}_{n}{r}^{n - 1}{K}_{\varepsilon }\left( {r{e}_{1}}\right) {dr} \] \[ = \mathcal{M}\left( f\right) \left( 0\right) \parallel K{\parallel }_{{L}^{1}}. \] Here we used integration by parts and the fact that the surface measure of the unit sphere \( {\mathbf{S}}^{n - 1} \) is equal to \( n{v}_{n} \) . See Appendix A.3. The theorem is now proved. Remark 2.1.11. Theorem 2.1.10 can be generalized as follows. If \( K \) is an \( {L}^{1} \) function on \( {\mathbf{R}}^{n} \) such that \( \left| {K\left( x\right) }\right| \leq {k}_{0}\left( \left| x\right| \right) = {K}_{0}\left( x\right) \), where \( {k}_{0} \) is a nonnegative decreasing function on \( \lbrack 0,\infty ) \) that is continuous except at a finite number of points, then (2.1.9) holds with \( \parallel K{\parallel }_{{L}^{1}} \) replaced by \( {\begin{Vmatrix}{K}_{0}\end{Vmatrix}}_{{L}^{1}} \) . Such a \( {K}_{0} \) is called a radial decreasing majorant of \( K \) . This observation is formulated as the following corollary. Corollary 2.1.12. If a function \( \varphi \) has an integrable radially decreasing majorant \( \Phi \) , then the estimate \[ \mathop{\sup }\limits_{{t > 0}}\left| {\left( {f * {\varphi }_{t}}\right) \left( x\right) }\right| \leq \parallel \Phi {\parallel }_{{L}^{1}}\mathcal{M}\left( f\right) \left( x\right) \] is valid for all locally integrable functions \( f \) on \( {\mathbf{R}}^{n} \) . Example 2.1.13. 
Let \[ P\left( x\right) = \frac{{c}_{n}}{{\left( 1 + {\left| x\right| }^{2}\right) }^{\frac{n + 1}{2}}}, \] where \( {c}_{n} \) is a constant such that \[ {\int }_{{\mathbf{R}}^{n}}P\left( x\right) {dx} = 1 \] The function \( P \) is called the Poisson kernel. We define \( {L}^{1} \) dilates \( {P}_{t} \) of the Poisson kernel \( P \) by setting \[ {P}_{t}\left( x\right) = {t}^{-n}P\left( {{t}^{-1}x}\right) \] for \( t > 0 \) . It is straightforward to verify that when \( n \geq 2 \) , \[ \frac{{d}^{2}}{d{t}^{2}}{P}_{t} + \mathop{\sum }\limits_{{j = 1}}^{n}{\partial }_{j}^{2}{P}_{t} = 0 \] that is, \( {P}_{t}\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is a harmonic function of the variables \( \left( {{x}_{1},\ldots ,{x}_{n}, t}\right) \) . Therefore, for \( f \in {L}^{p}\left( {\mathbf{R}}^{n}\right) ,1 \leq p < \infty \), the function \[ u\left( {x, t}\right) = \left( {f * {P}_{t}}\right) \left( x\right) \] is harmonic in \( {\mathbf{R}}_{ + }^{n + 1} \) and converges to \( f\left( x\right) \) in \( {L}^{p}\left( {dx}\right) \) as \( t \rightarrow 0 \), since \( {\left\{ {P}_{t}\right\} }_{t > 0} \) is an approximate identity. If we knew that \( f * {P}_{t} \) converged to \( f \) a.e. as \( t \rightarrow 0 \), then we could say that \( u\left( {x, t}\right) \) solves the Dirichlet problem \[ {\partial }_{t}^{2}u + \mathop{\sum }\limits_{{j = 1}}^{n}{\partial }_{j}^{2}u = 0\;\text{ on }{\mathbf{R}}_{ + }^{n + 1}, \] (2.1.12) \[ u\left( {x,0}\right) = f\left( x\right) \;\text{ a.e. on }{\mathbf{R}}^{n}. \] Solving the Dirichlet problem (2.1.12) motivates the study of the almost everywhere convergence of the expressions \( f * {P}_{t} \) . Let us now compute the value of the constant \( {c}_{n} \) . Denote by \( {\omega }_{n - 1} \) the surface area of \( {\mathbf{S}}^{n - 1} \) . 
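The closed form \( {c}_{n} = \Gamma \left( \frac{n + 1}{2}\right) /{\pi }^{\frac{n + 1}{2}} \) obtained from the radial integral can be cross-checked numerically. A sketch (the midpoint rule and step count are arbitrary choices) that follows the same substitution \( r = \tan \varphi \):

```python
from math import pi, gamma, sin

def c(n, steps=20000):
    # 1/c_n = omega_{n-1} * \int_0^{pi/2} sin^{n-1}(phi) dphi   (substitution r = tan(phi))
    h = (pi / 2) / steps
    integral = sum(sin((k + 0.5) * h) ** (n - 1) for k in range(steps)) * h
    omega = 2 * pi ** (n / 2) / gamma(n / 2)    # surface area of S^{n-1}
    return 1.0 / (omega * integral)

for n in (1, 2, 3, 4):
    closed = gamma((n + 1) / 2) / pi ** ((n + 1) / 2)
    assert abs(c(n) - closed) < 1e-6 * closed
```

For \( n = 1 \) this recovers \( {c}_{1} = 1/\pi \), the familiar normalization of the one-dimensional Poisson kernel \( \frac{1}{\pi }\frac{1}{1 + {x}^{2}} \).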
Using polar coordinates, we obtain \[ \frac{1}{{c}_{n}} = {\int }_{{\mathbf{R}}^{n}}\frac{dx}{{\left( 1 + {\left| x\right| }^{2}\right) }^{\frac{n + 1}{2}}} \] \[ = {\omega }_{n - 1}{\int }_{0}^{\infty }\frac{{r}^{n - 1}}{{\left( 1 + {r}^{2}\right) }^{\frac{n + 1}{2}}}{dr} \] \[ = {\omega }_{n - 1}{\int }_{0}^{\pi /2}{\left( \sin \varphi \right) }^{n - 1}{d\varphi }\;\left( {r = \tan \varphi }\right) \] \[ = \frac{2{\pi }^{\frac{n}{2}}}{\Gamma \left( \frac{n}{2}\right) }\frac{1}{2}\frac{\Gamma \left( \frac{n}{2}\right) \Gamma \left( \frac{1}{2}\right) }{\Gamma \left( \frac{n + 1}{2}\right) } \] \[ = \frac{{\pi }^{\frac{n + 1}{2}}}{\Gamma \left( \frac{n + 1}{2}\right) } \] where we used the formula for \( {\omega }_{n - 1} \) in Appendix A. 3 and an identity in Appendix A.4. We conclude that \[ {c}_{n} = \frac{\Gamma \left( \frac{n + 1}{2}\right) }{{\pi }^{\frac{n + 1}{2}}} \] and that the Poisson kernel on \( {
109_The rising sea Foundations of Algebraic Geometry
Definition 5.15
Definition 5.15. We call two chambers \( C, D \in \mathcal{C} \) adjacent if they are \( s \) -adjacent for some \( s \in S \) . A sequence \( \Gamma : {C}_{0},\ldots ,{C}_{n} \) of \( n + 1 \) chambers such that \( {C}_{i - 1} \) and \( {C}_{i} \) are adjacent for all \( 1 \leq i \leq n \) is called a gallery of length \( n \) . We say that \( {C}_{0} \) and \( {C}_{n} \) are connected by \( \Gamma \) . If there is no gallery of length \( < n \) connecting \( {C}_{0} \) and \( {C}_{n} \), then we say that the gallery distance between \( {C}_{0} \) and \( {C}_{n} \) is \( n \) and write \( d\left( {{C}_{0},{C}_{n}}\right) = n \) . The gallery \( \Gamma \) is called minimal if \( d\left( {{C}_{0},{C}_{n}}\right) = n \) . If \( {s}_{i} = \delta \left( {{C}_{i - 1},{C}_{i}}\right) \) for \( 1 \leq i \leq n \), then \( \mathbf{s}\left( \Gamma \right) \mathrel{\text{:=}} \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) is called the type of the gallery \( \Gamma \) . We can now derive some basic properties of galleries from the WD axioms. Lemma 5.16. Let \( C \) and \( D \) be chambers and let \( w \mathrel{\text{:=}} \delta \left( {C, D}\right) \) . (1) If \( \Gamma \) is a gallery of type \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) connecting \( C \) and \( D \), then there exists a subword \( \left( {{s}_{{i}_{1}},\ldots ,{s}_{{i}_{m}}}\right) \) of \( \mathbf{s} \) such that \( w = {s}_{{i}_{1}}\cdots {s}_{{i}_{m}} \), where \( 0 \leq \) \( m \leq n \) and \( 1 \leq {i}_{1} < \cdots < {i}_{m} \leq n \) . If, in addition, \( \mathbf{s} \) is reduced in the sense of Section 2.3.1, then \( w = {s}_{1}\cdots {s}_{n} \), and \( \Gamma \) is minimal. (2) If \( w = {s}_{1}\cdots {s}_{n} \) with \( {s}_{1},\ldots ,{s}_{n} \in S \), then there exists a gallery \( \Gamma \) of type \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) connecting \( C \) and \( D \) . 
If, in addition, \( \mathbf{s} \) is reduced, then this gallery \( \Gamma \) is uniquely determined and minimal. Proof. In both parts of the lemma, we proceed by induction on \( n \) . (1) Let \( \Gamma : C = {C}_{0},{C}_{1},\ldots ,{C}_{n} = D \) be the given gallery. Let \( {w}^{\prime } = \) \( \delta \left( {{C}_{1}, D}\right) \), and apply (WD2) to the triangle ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_239_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_239_0.jpg) to deduce that \( w \in \left\{ {{s}_{1}{w}^{\prime },{w}^{\prime }}\right\} \) . By induction, we may assume that \( {w}^{\prime } \) is the product of the elements occurring in a subword of \( \left( {{s}_{2},\ldots ,{s}_{n}}\right) \) . This immediately yields our first claim about \( w \) . Note that this part of the proof already implies \( d\left( {C, D}\right) \geq l\left( w\right) \), since \( l\left( w\right) \leq m \leq n \), where \( n \) is the length of an arbitrary gallery connecting \( C \) and \( D \) . Now assume that in addition, \( \mathbf{s} \) is reduced. Then also \( {\mathbf{s}}^{\prime } \mathrel{\text{:=}} \left( {{s}_{2},\ldots ,{s}_{n}}\right) \) is reduced. So the induction hypothesis yields \( {w}^{\prime } = \delta \left( {{C}_{1}, D}\right) = {s}_{2}\cdots {s}_{n} \) in this case. Hence \( l\left( {{s}_{1}{w}^{\prime }}\right) = l\left( {{s}_{1}\cdots {s}_{n}}\right) = n = l\left( {w}^{\prime }\right) + 1 \) . The application of (WD2) to the triangle above therefore yields \( w = {s}_{1}{w}^{\prime } = {s}_{1}\cdots {s}_{n} \) . In particular, \( n = l\left( w\right) \), and by the previous paragraph there is no gallery of length \( < l\left( w\right) \) connecting \( C \) and \( D \) . Hence \( \Gamma \) is minimal. (2) Assume \( w = {s}_{1}\cdots {s}_{n} \) with \( n > 0 \) . 
Applying (WD3), we obtain a chamber \( {C}_{1} \) that is \( {s}_{1} \) -adjacent to \( C \) with \( \delta \left( {{C}_{1}, D}\right) = {s}_{1}w = {s}_{2}\cdots {s}_{n} \) : ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_239_1.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_239_1.jpg) By the induction hypothesis there exists a gallery \( {\Gamma }^{\prime } : {C}_{1},\ldots ,{C}_{n} = D \) of type \( {\mathbf{s}}^{\prime } \mathrel{\text{:=}} \left( {{s}_{2},\ldots ,{s}_{n}}\right) \) connecting \( {C}_{1} \) and \( D \) . Hence \( \Gamma : C = {C}_{0},{C}_{1},\ldots ,{C}_{n} = D \) is a gallery of type \( \mathbf{s} \) connecting \( C \) and \( D \) . Now assume additionally that \( \mathbf{s} \) is reduced. Then \( \Gamma \) is minimal by part (1). We want to show that \( \Gamma \) is the only gallery of type \( \mathbf{s} \) connecting \( C \) and \( D \) . Note that the chamber \( {C}_{1} \) following \( C = {C}_{0} \) in \( \Gamma \) has to satisfy \( \delta \left( {C,{C}_{1}}\right) = {s}_{1} \) as well as (by part (1)) \( \delta \left( {{C}_{1}, D}\right) = {s}_{2}\cdots {s}_{n} = {s}_{1}w \) . Since \( l\left( {{s}_{1}w}\right) = n - 1 = l\left( w\right) - 1 \) , Lemma 5.5 implies that \( {C}_{1} \) is uniquely determined. Then by induction the gallery \( {\Gamma }^{\prime } \) of reduced type \( {\mathbf{s}}^{\prime } \) connecting \( {C}_{1} \) and \( D \) is also uniquely determined. Hence \( \Gamma \) is unique. Corollary 5.17. For any two chambers \( C, D \in \mathcal{C} \), we have: (1) \( d\left( {C, D}\right) = l\left( {\delta \left( {C, D}\right) }\right) \) . (2) \( \delta \left( {D, C}\right) = \delta {\left( C, D\right) }^{-1} \) . Proof. Set \( w \mathrel{\text{:=}} \delta \left( {C, D}\right) \) and choose a reduced decomposition \( w = {s}_{1}\cdots {s}_{n} \) of \( w \) . 
By Lemma 5.16(2), there exists a gallery \( \Gamma : C = {C}_{0},\ldots ,{C}_{n} = D \) of type \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) connecting \( C \) and \( D \) . It is minimal since \( \mathbf{s} \) is reduced, so \( d\left( {C, D}\right) = n = l\left( w\right) \) . This proves (1). Now \( D = {C}_{n},\ldots ,{C}_{0} = C \) is a gallery of reduced type \( \left( {{s}_{n},\ldots ,{s}_{1}}\right) \) connecting \( D \) and \( C \), so Lemma \( {5.16}\left( 1\right) \) implies \( \delta \left( {D, C}\right) = {s}_{n}\cdots {s}_{1} = {w}^{-1} = \delta {\left( C, D\right) }^{-1} \), proving (2). Remark 5.18. Using Corollary 5.17(2), we can deduce the following analogues of (WD2) and (WD3), thereby removing the asymmetry in those axioms: \( \left( {\mathbf{{WD2}}}^{\prime }\right) \) If \( {D}^{\prime } \in \mathcal{C} \) satisfies \( \delta \left( {D,{D}^{\prime }}\right) = s \in S \) and \( \delta \left( {C, D}\right) = w \), then \( \delta \left( {C,{D}^{\prime }}\right) = {ws} \) or \( w \) . If, in addition, \( l\left( {ws}\right) = l\left( w\right) + 1 \), then \( \delta \left( {C,{D}^{\prime }}\right) = {ws} \) . \( \left( {\mathbf{{WD3}}}^{\prime }\right) \) If \( \delta \left( {C, D}\right) = w \), then for any \( s \in S \) there is a chamber \( {D}^{\prime } \in \mathcal{C} \) such that \( \delta \left( {D,{D}^{\prime }}\right) = s \) and \( \delta \left( {C,{D}^{\prime }}\right) = {ws} \) . We also have, as in Lemma 5.5, that the \( {D}^{\prime } \) in \( \left( {\mathrm{{WD}}{3}^{\prime }}\right) \) is uniquely determined if \( l\left( {ws}\right) = l\left( w\right) - 1 \) . And, as in Remark 5.6, one can interpret this unique \( {D}^{\prime } \) as the chamber closest to \( C \) in the \( s \) -panel containing \( D \) . ## Exercises ## 5.19. Prove Remark 5.18. 5.20. 
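Corollary 5.17 can be tested in the simplest example, the thin building of type \( \left( {W, S}\right) \): the chambers are the elements of \( W \), two chambers are \( s \) -adjacent when they differ by right multiplication by \( s \), and \( \delta \left( {C, D}\right) = {C}^{-1}D \). A sketch (this model is standard; the particular choice \( W = {S}_{3} \), \( S = \{ \left( {12}\right) ,\left( {23}\right) \} \) is illustrative):

```python
from itertools import permutations
from collections import deque

def mul(p, q):                      # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

S = [(1, 0, 2), (0, 2, 1)]          # s1 = (12), s2 = (23)
W = list(permutations(range(3)))    # chambers of the thin building

def dist_from(C):                   # gallery distance: BFS over s-adjacency
    d = {C: 0}
    q = deque([C])
    while q:
        v = q.popleft()
        for s in S:
            u = mul(v, s)
            if u not in d:
                d[u] = d[v] + 1
                q.append(u)
    return d

length = dist_from(tuple(range(3))) # word length l(w) = distance from the identity

for C in W:
    dC = dist_from(C)
    for D in W:
        delta = mul(inv(C), D)
        assert dC[D] == length[delta]        # Corollary 5.17(1): d(C,D) = l(delta(C,D))
        assert mul(inv(D), C) == inv(delta)  # Corollary 5.17(2): delta(D,C) = delta(C,D)^{-1}
```

The longest element (the order-reversing permutation) sits at gallery distance 3 from the identity chamber, matching its reduced decomposition \( {s}_{1}{s}_{2}{s}_{1} \).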
(a) If \( \Gamma \) is a gallery of type \( \mathbf{s} \) in a building of type \( \left( {W, S}\right) \), show that \( \Gamma \) is minimal if and only if \( \mathbf{s} \) is reduced. (b) Given \( C, D \in \mathcal{C} \), show that there is a 1-1 correspondence between minimal galleries from \( C \) to \( D \) and reduced decompositions of \( w \mathrel{\text{:=}} \delta \left( {C, D}\right) \) . ## 5.2 Buildings as Chamber Systems The central idea of our new approach to buildings is to encode properties of galleries in a Weyl-group-valued distance function \( \delta \) . In Definition 5.1 we did this by requiring certain algebraic properties of \( \delta \), which enabled us to define adjacency and galleries. There is a slightly different but closely related way of achieving the same goal using "chamber systems." This approach was introduced by Tits [255], and a slight variant of it was taken as the definition of "building" in the books by Ronan [200] and Weiss [281]. In this section we will show that the definition of Ronan and Weiss is equivalent to the one we gave in Section 5.1. First of all we have to define the notion of a chamber system. This is done in Section A.1.4 in the context of chamber complexes. Here we do not presuppose any simplicial structure, so we just consider a chamber system as a set together with a family of equivalence relations. Definition 5.21. A chamber system over a set \( S \) is a nonempty set \( \mathcal{C} \) (whose elements are called chambers) together with a family of equivalence relations \( {\left( { \sim }_{s}\right) }_{s \in S} \) on \( \mathcal{C} \) indexed by \( S \) . The equivalence classes with respect to \( { \sim }_{s} \) are called \( s \) -panels. A panel is an \( s \) -panel for some \( s \in S \) . Two distinct chambers \( C \) and \( D \) are called \( s \) -adjacent if they are contained in the same \( s \) -panel and adjacent if they are \( s \) -adjacent for some \( s \in S \) . 
A gallery of length \( n \) connecting \( {C}_{0} \) and \( {C}_{n} \) is a sequence \( \Gamma : {C}_{0},\ldots ,{C}_{n} \) of \( n + 1 \) chambers such that \( {C}_{i - 1} \) and \( {C}_{i} \) are adjacent for all \( 1 \leq i \leq n \) . If \( {C}_{i - 1} \) and \( {C}_{i} \) are \( {s}_{i} \) -adjacent with \( {s}_{i} \in S \) for all \( i \), then we say that \( \Gamma \) is a gallery of type \( \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) . A chamber system can be viewed as a graph with colored edges; the vertices are the chambers, and \( s \) -adjacent chambers are connected by an edge of color \( s \) . See Exercise 5.25 for m
1172_(GTM8)Axiomatic Set Theory
Definition 8.12
Definition 8.12. If \( t \) is a term then \( {\operatorname{Ord}}^{1}\left( t\right) \triangleq g\left( t\right) ,\;{\operatorname{Ord}}^{2}\left( t\right) \triangleq 0,\;{\operatorname{Ord}}^{3}\left( t\right) \triangleq 0,\;\operatorname{Ord}\left( t\right) \triangleq {\omega }^{2} \cdot g\left( t\right) . \) Theorem 8.13. 1. \( \operatorname{Ord}\left( t\right) < \operatorname{Ord}\left( {P\left( t\right) }\right) \) 2. \( \max \left( {\operatorname{Ord}\left( {t}_{1}\right) ,\operatorname{Ord}\left( {t}_{2}\right) }\right) < \operatorname{Ord}\left( {{t}_{1} \in {t}_{2}}\right) \) . Remark. The preceding theorems provide a basis for the definition of a denotation operator \( D \) defined on terms and closed limited formulas. The definition is by recursion on Ord \( \left( t\right) \) and Ord \( \left( \varphi \right) \) . Definition 8.14 1. \( D\left( \underline{k}\right) \triangleq k, k \in K \) . 2. \( D\left( {{\widehat{x}}^{\alpha }\varphi \left( {x}^{\alpha }\right) }\right) \overset{\Delta }{ = }\left\{ {D\left( t\right) \mid t \in {T}_{\alpha } \land D\left( {\varphi \left( t\right) }\right) }\right\} \) . 3. \( D\left( {\neg \varphi }\right) \overset{\Delta }{ \leftrightarrow }\neg D\left( \varphi \right) \) . 4. \( D\left( {\varphi \land \psi }\right) \overset{\Delta }{ \leftrightarrow }D\left( \varphi \right) \land D\left( \psi \right) \) . 5. \( D\left( {\left( {\forall {x}^{\alpha }}\right) \varphi \left( {x}^{\alpha }\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\forall t \in {T}_{\alpha }}\right) D\left( {\varphi \left( t\right) }\right) \) . 6. \( D\left( {{t}_{1} \in {t}_{2}}\right) \overset{\Delta }{ \leftrightarrow }D\left( {t}_{1}\right) \in D\left( {t}_{2}\right) ,{t}_{1},{t}_{2} \in T \) \( D\left( {K\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }D\left( t\right) \in K, t \in T \) \( D\left( {F\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }D\left( t\right) \in F, t \in T. \) Remark. 
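The recursion in Definition 8.14 is an ordinary structural recursion once the well-ordering of Theorem 8.13 is in place. Purely as an illustrative toy (a hypothetical miniature that collapses the term/denotation distinction: hereditarily finite sets stand in for both terms and their \( D \) -values, and only clauses 3-6 are modeled, with \( {T}_{\alpha } \) replaced by an explicit finite domain), the shape of the clauses can be mirrored as follows.

```python
# formulas: ('in', t1, t2) | ('not', f) | ('and', f, g) | ('all', domain, body)
# where body is a function t -> formula, mirroring clause 5 of Definition 8.14

def D(phi):
    tag = phi[0]
    if tag == 'in':                              # clause 6: D(t1 in t2) iff D(t1) in D(t2)
        return phi[1] in phi[2]
    if tag == 'not':                             # clause 3
        return not D(phi[1])
    if tag == 'and':                             # clause 4
        return D(phi[1]) and D(phi[2])
    if tag == 'all':                             # clause 5: (forall t in domain) D(phi(t))
        return all(D(phi[2](t)) for t in phi[1])

# von Neumann numerals 0, 1, 2 as hereditarily finite sets
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})

assert D(('in', zero, one))                      # 0 in 1
assert D(('all', two, lambda t: ('in', t, two))) # every element of 2 is in 2
assert not D(('in', one, one))                   # 1 not in 1
```

The recursion terminates because each clause strictly decreases the formula, just as \( \operatorname{Ord}\left( \varphi \right) \) decreases in the real definition.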
More exactly, \( D \) should be defined on \( {}^{\Gamma }{\varphi }^{1} \) and \( D \) restricted to closed limited formulas \( {}^{\Gamma }{\varphi }^{1} \) should be regarded as a function onto 2 . It should be noted that this recursive definition is permissible since \( {T}_{\alpha } \) and the class of closed limited formulas \( \varphi \) such that \( \operatorname{Ord}\left( \varphi \right) < \alpha \) are sets. Theorem 8.15. \( D \) is definable in \( {\mathcal{L}}_{0}\left( {K\left( \right), F\left( \right) }\right) \) . Definition 8.16. \( D \) can be extended to an operator \( \widetilde{D} \) defined for all closed unlimited formulas of \( R\left( {K, F}\right) \) by adding \[ \widetilde{D}\left( {\left( {\forall x}\right) \varphi \left( x\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\forall t \in T}\right) \widetilde{D}\left( {\varphi \left( t\right) }\right) . \] Remark. Since \( T \) is a proper class, \( \widetilde{D} \) is no longer definable in the language \( {\mathcal{L}}_{0}\left( {K\left( \right), F\left( \right) }\right) \) simultaneously for all unlimited formulas. As in the case of truth definitions, \( \widetilde{D}\left( \varphi \right) \) is definable in \( {\mathcal{L}}_{0}\left( {K\left( \right), F\left( \right) }\right) \) for any particular formula \( \varphi \), or indeed for any set of formulas \( \varphi \) with fewer than \( n \) quantifiers, \( n \) a fixed natural number. Finally we relate the method of this section to the concepts introduced in \( §7 \) by proving the following theorem. Theorem 8.17. \( L\left\lbrack {K;F}\right\rbrack = \{ D\left( t\right) \mid t \in T\} \) where \( F \subseteq K \) and \( K \) is transitive. Remark. For the proof we need the following. Definition 8.18. 
A limited formula \( \varphi \) is of rank \( \leq \alpha \) iff every quantifier in \( \varphi \) is of the form \( \forall {x}^{\beta } \), for some \( \beta \leq \alpha \), and every constant term occurring in \( \varphi \) is an element of \( {T}_{\alpha } \) . We now define an operator \( {D}_{\alpha } \) for closed limited formulas of rank \( \leq \alpha \) : 1. \( {D}_{\alpha }\left( {{t}_{1} \in {t}_{2}}\right) \overset{\Delta }{ \leftrightarrow }D\left( {t}_{1}\right) \in D\left( {t}_{2}\right) \) . 2. \( {D}_{\alpha }\left( {K\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }D\left( t\right) \in K \) . 3. \( {D}_{\alpha }\left( {F\left( t\right) }\right) \overset{\Delta }{ \leftrightarrow }D\left( t\right) \in F \) . 4. \( {D}_{\alpha }\left( {\left( {\forall {x}^{\alpha }}\right) \varphi \left( {x}^{\alpha }\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\forall x \in {T}_{\alpha }}\right) {D}_{\alpha }\left( {\varphi \left( x\right) }\right) \) . 5. \( {D}_{\alpha }\left( {\varphi \land \psi }\right) \overset{\Delta }{ \leftrightarrow }{D}_{\alpha }\left( \varphi \right) \land {D}_{\alpha }\left( \psi \right) \) . 6. \( {D}_{\alpha }\left( {\neg \varphi }\right) \overset{\Delta }{ \leftrightarrow }\neg {D}_{\alpha }\left( \varphi \right) \) . 7. \( {D}_{\alpha }\left( {\left( {\forall {x}^{\gamma }}\right) \varphi \left( {x}^{\gamma }\right) }\right) \overset{\Delta }{ \leftrightarrow }\left( {\forall x \in {B}_{\gamma }^{\prime }}\right) {D}_{\alpha }\left( {\varphi \left( x\right) }\right) ,\gamma < \alpha ,{B}_{\gamma }^{\prime } = \left\{ {D\left( t\right) \mid t \in {T}_{\gamma }}\right\} \) . Remark. Then \( {\bar{K}}_{\alpha } \subseteq {B}_{\alpha }^{\prime } \) and \( \alpha < \beta \rightarrow {B}_{\alpha }^{\prime } \in {B}_{\beta }^{\prime } \) since \( D\left( {{\widehat{x}}^{\alpha }\left( {{x}^{\alpha } \simeq {x}^{\alpha }}\right) }\right) = {B}_{\alpha }^{\prime } \) . 
Set \( {\mathbf{B}}_{\alpha }^{\prime } \triangleq \left\langle {{B}_{\alpha }^{\prime },{\bar{K}}_{\alpha },{\bar{F}}_{\alpha }}\right\rangle \) . We then prove by induction on \( \alpha \) that i) \( {B}_{\alpha } = {B}_{\alpha }^{\prime } \) and ii) \( D\left( \varphi \right) \leftrightarrow {\mathbf{B}}_{\alpha } \vDash {D}_{\alpha }\left( \varphi \right) \) for \( \varphi \) of rank \( \leq \alpha \) . We need only consider the case \( \alpha \notin {K}_{\mathrm{{II}}} \) . If i) holds for \( \alpha \leq \beta \) and \( t \in {T}_{\beta } \) then \[ D\left( {K\left( t\right) }\right) \leftrightarrow D\left( t\right) \in K \] \[ \leftrightarrow D\left( t\right) \in K \cap {B}_{\beta }^{\prime } \] \[ \leftrightarrow D\left( t\right) \in {\bar{K}}_{\beta }\;\text{ (by i) for }\alpha = \beta \text{ ) } \] \[ \leftrightarrow {\mathbf{B}}_{\beta } \vDash K\left( {D\left( t\right) }\right) \] Similarly, we can prove \[ D\left( {F\left( t\right) }\right) \leftrightarrow {\mathbf{B}}_{\beta } \vDash F\left( {D\left( t\right) }\right) . \] We next prove ii) for \( \alpha = \beta \) by induction on \( {\operatorname{Ord}}^{3}\left( \varphi \right) \) . Since all other cases are trivial or obtained from i) we need only prove a. \( D\left( {\left( {\forall {x}^{\beta }}\right) \varphi \left( {x}^{\beta }\right) }\right) \leftrightarrow {\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\left( {\forall {x}^{\beta }}\right) \varphi \left( {x}^{\beta }\right) }\right) \) and b. \( D\left( {\left( {\forall {x}^{\gamma }}\right) \varphi \left( {x}^{\gamma }\right) }\right) \leftrightarrow {\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\left( {\forall {x}^{\gamma }}\right) \varphi \left( {x}^{\gamma }\right) }\right) ,\gamma < \beta \) assuming ii) holds for all \( \varphi \left( t\right) \) with \( t \in {T}_{\beta } \) . 
\[ D\left( {\left( {\forall {x}^{\beta }}\right) \varphi \left( {x}^{\beta }\right) }\right) \leftrightarrow \left( {\forall t \in {T}_{\beta }}\right) D\left( {\varphi \left( t\right) }\right) \] \[ \leftrightarrow \left( {\forall t \in {T}_{\beta }}\right) \left\lbrack {{\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\varphi \left( {D\left( t\right) }\right) }\right) }\right\rbrack \;\text{ (by the induction hypothesis) } \] \[ \leftrightarrow \left( {\forall a \in {B}_{\beta }^{\prime }}\right) \left\lbrack {{\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\varphi \left( a\right) }\right) }\right\rbrack \] \[ \leftrightarrow \left( {\forall a \in {B}_{\beta }}\right) \left\lbrack {{\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\varphi \left( a\right) }\right) }\right\rbrack \] \[ \leftrightarrow {\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\left( {\forall {x}^{\beta }}\right) \varphi \left( {x}^{\beta }\right) }\right) \] \( \mathrm{b} \) is proved similarly. We now show that \( {B}_{\beta + 1} = {B}_{\beta + 1}^{\prime } \) \[ D\left( {{\widehat{x}}^{\beta }\varphi \left( {x}^{\beta }\right) }\right) = \left\{ {D\left( t\right) \mid t \in {T}_{\beta } \land D\left( {\varphi \left( t\right) }\right) }\right\} \] \[ = \left\{ {D\left( t\right) \mid t \in {T}_{\beta } \land {\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\varphi \left( {D\left( t\right) }\right) }\right) }\right\} \] \[ = \left\{ {a \in {B}_{\beta } \mid {\mathbf{B}}_{\beta } \vDash {D}_{\beta }\left( {\varphi \left( a\right) }\right) }\right\} \text{.} \] Thus if \( t = {\widehat{x}}^{\gamma }\varphi \left( {x}^{\gamma }\right) \) for some \( \gamma \leq \beta \), then \[ D\left( t\right) \in {B}_{\beta + 1}^{\prime } \rightsquigarrow D\left( t\right) \in {Df}\left( {\mathbf{B}}_{\beta }\right) . 
\] Furthermore \[ k = D\left( k\right) \in {B}_{\beta + 1}^{\prime } \leftrightarrow \operatorname{rank}\left( k\right) \leq \beta \land k \in K \] \[ \leftrightarrow k \in K \cap R\left( {\beta + 1}\right) \] \[ \leftrightarrow k \in {\bar{K}}_{\beta + 1}\text{.} \] Thus \( {B}_{\beta + 1} = {B}_{\beta + 1}^{\prime } \) . Remark. The ramified language and the operator \( D \) are very useful in the sense that the definition of \( D \) is carried out by using \( K, F \) and transfinite recursion, i.e., without using any knowledge about \( V \) other than the theory of ordinal numbers.
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 2.2.1
Definition 2.2.1 If \( V \) is a vector space over \( k \), an R-lattice \( L \) in \( V \) is a finitely generated \( R \) -module contained in \( V \) . Furthermore, \( L \) is a complete \( \underline{R\text{-}{lattice}} \) if \( L{ \otimes }_{R}k \cong V \) . Lemma 2.2.2 Let \( L \) be a complete lattice in \( V \) and \( M \) an \( R \) -submodule of \( V \) . Then \( M \) is a complete \( R \) -lattice if and only if there exists \( a \in R \) such that \( {aL} \subset M \subset {a}^{-1}L \) . (See Exercise 2.2, No. 1.) Definition 2.2.3 Let \( A \) be a quaternion algebra over \( k \) . An element \( \alpha \in A \) is an integer (over \( R \) ) if \( R\left\lbrack \alpha \right\rbrack \) is an \( R \) -lattice in \( A \) . Lemma 2.2.4 An element \( \alpha \in A \) is an integer if and only if the reduced trace \( \operatorname{tr}\left( \alpha \right) \) and the reduced norm \( n\left( \alpha \right) \) lie in \( R \) . Proof: Any \( \alpha \) in \( A \) satisfies the polynomial \[ {x}^{2} - \operatorname{tr}\left( \alpha \right) x + n\left( \alpha \right) = 0. \] Thus if the trace and norm lie in \( R \), then \( \alpha \) is clearly an integer in \( A \) . Suppose conversely that \( \alpha \) is an integer in \( A \) . If \( \alpha \in k \), then, since \( \alpha \) is integral over \( R \), it will lie in \( R \) . Thus \( \operatorname{tr}\left( \alpha \right), n\left( \alpha \right) \in R \) . Now suppose that \( \alpha \in A \smallsetminus k \) . If \( k\left( \alpha \right) \) is an integral domain, necessarily the case when \( A \) is a division algebra, then \( k\left( \alpha \right) \) is a quadratic field extension \( L \) of \( k \), as in Lemma 2.1.6. Note that \( \bar{\alpha } \) is the field extension conjugate of \( \alpha \) . Now \( \alpha ,\bar{\alpha } \in {R}_{L} \), the integral closure of \( R \) in \( L \), which is also a Dedekind domain. 
However, then \( \operatorname{tr}\left( \alpha \right), n\left( \alpha \right) \in {R}_{L} \cap k = R \) . If \( k\left( \alpha \right) \) is not an integral domain, then \( A \cong {M}_{2}\left( k\right) \) and \( \alpha \) is conjugate in \( {M}_{2}\left( k\right) \) to a matrix of the form \( \left( \begin{array}{ll} a & b \\ 0 & c \end{array}\right) \) , \( a, b, c \in k \) . But then \( {\alpha }^{n} = \left( \begin{matrix} {a}^{n} & * \\ 0 & {c}^{n} \end{matrix}\right) \) . Thus since \( \alpha \) is an integer in \( A \), then \( a, c \in R \) and the result follows. \( ▱ \) In contrast to the case of integers in number fields, it is not always true that the sum and product of a pair of integers in a quaternion algebra are necessarily integers. For example, if we take \( A = \left( \frac{-1,3}{\mathbb{Q}}\right) \) with standard basis \( \{ 1, i, j,{ij}\} \), then \( \alpha = j \) and \( \beta = \left( {{3j} + {4ij}}\right) /5 \) are integers, but neither \( \alpha + \beta \) nor \( {\alpha \beta } \) are integers. The role played by the ring of integers \( R \) in a number field is replaced by that of an order in a quaternion algebra. ## Definition 2.2.5 - An ideal \( I \) in \( A \) is a complete \( R \) -lattice. - An order \( \mathcal{O} \) in \( A \) is an ideal which is also a ring with 1 . - An order \( \mathcal{O} \) is maximal if it is maximal with respect to inclusion. ## Examples 2.2.6 1. If \( \left\{ {{x}_{1},{x}_{2},{x}_{3},{x}_{4}}\right\} \) is any \( k \) -base of \( A \), then the free module \( R\left\lbrack {{x}_{1},{x}_{2},{x}_{3},{x}_{4}}\right\rbrack \) is an ideal in \( A \) . 2. If \( A \cong \left( \frac{a, b}{k}\right) \), then by adjoining squares, if necessary, we can assume that \( a, b \in R \) . The free module \( R\left\lbrack {1, i, j,{ij}}\right\rbrack \), where \( \{ 1, i, j,{ij}\} \) is a standard basis, is an order in \( A \) . 3. The module \( {M}_{2}\left( R\right) \) is an order in \( {M}_{2}\left( k\right) \) . 
Indeed it is a maximal order. If not, then there exists an order \( \mathcal{O} \) containing \( {M}_{2}\left( R\right) \) and an element \( \left( \begin{array}{ll} x & y \\ z & w \end{array}\right) \in \mathcal{O} \) where at least one of the entries is not in \( R \) . By suitably multiplying and adding elements of \( {M}_{2}\left( R\right) \), it is easy to see that \( \mathcal{O} \) must contain an element \( \alpha = \left( \begin{array}{ll} a & 0 \\ 0 & 1 \end{array}\right) \), where \( a \notin R \) . However, \( R\left\lbrack \alpha \right\rbrack \) then fails to be an \( R \) -lattice, which as submodule of an \( R \) -lattice, is impossible. 4. If \( I \) is an ideal in \( A \), then the order on the left of \( I \) and the order on the right of \( I \), defined respectively by \[ {\mathcal{O}}_{\ell }\left( I\right) = \{ \alpha \in A \mid {\alpha I} \subset I\} ,\;{\mathcal{O}}_{r}\left( I\right) = \{ \alpha \in A \mid {I\alpha } \subset I\} \] (2.4) are orders in \( A \) . (See Exercise 2.2, No.2.) ## Lemma 2.2.7 1. \( \mathcal{O} \) is an order in \( A \) if and only if \( \mathcal{O} \) is a ring of integers in \( A \) which contains \( R \) and is such that \( k\mathcal{O} = A \) . 2. Every order is contained in a maximal order. Proof: Let \( \alpha \in \mathcal{O} \), where \( \mathcal{O} \) is an order in \( A \) . Since \( \mathcal{O} \) is an \( R \) -lattice, \( R\left\lbrack \alpha \right\rbrack \) will be an \( R \) -lattice and so \( \alpha \) is an integer. The other properties are immediate. For the converse, choose a basis \( \left\{ {{x}_{1},{x}_{2},{x}_{3},{x}_{4}}\right\} \) of \( A \) such that each \( {x}_{i} \in \) \( \mathcal{O} \) . Now the reduced trace defines a non-singular symmetric bilinear form on \( A \) (see Exercise 2.3, No. 1). Thus \( d = \det \left( {\operatorname{tr}\left( {{x}_{i}{x}_{j}}\right) }\right) \neq 0 \) . Let \( L = \left\{ {\sum {a}_{i}{x}_{i} \mid }\right. \) \( \left. {{a}_{i} \in R}\right\} \) . 
Thus \( L \subset \mathcal{O} \) . Now suppose \( \alpha \in \mathcal{O} \) so that \( \alpha = \sum {b}_{i}{x}_{i} \) with \( {b}_{i} \in k \) . For each \( j,\alpha {x}_{j} \in \mathcal{O} \) and so \( \operatorname{tr}\left( {\alpha {x}_{j}}\right) = \sum {b}_{i}\operatorname{tr}\left( {{x}_{i}{x}_{j}}\right) \in R \) . Thus \( {b}_{i} \in \left( {1/d}\right) R \) and \( \mathcal{O} \subset \left( {1/d}\right) L \) . Thus \( \mathcal{O} \) is finitely generated and the result follows. Using a Zorn's Lemma argument, the above characterisation shows that every order is contained in a maximal order. \( ▱ \) Let us consider the special cases where \( A = {M}_{2}\left( k\right) \) . If \( V \) is a two-dimensional space over \( k \), then \( A \) can be identified with \( \operatorname{End}\left( V\right) \) . If \( L \) is a complete \( R \) -lattice in \( V \), define \[ \operatorname{End}\left( L\right) = \{ \sigma \in \operatorname{End}\left( V\right) \mid \sigma \left( L\right) \subset L\} \] If \( V \) has basis \( \left\{ {{e}_{1},{e}_{2}}\right\} \) giving the identification of \( {M}_{2}\left( k\right) \) with \( \operatorname{End}\left( V\right) \), then \( {L}_{0} = R{e}_{1} + R{e}_{2} \) is a complete \( R \) -lattice and \( \operatorname{End}\left( {L}_{0}\right) \) is identified with the maximal order \( {M}_{2}\left( R\right) \) . For any complete \( R \) -lattice \( L \), there exists \( a \in R \) such that \( a{L}_{0} \subset L \subset {a}^{-1}{L}_{0} \) . It follows that \( {a}^{2}\operatorname{End}\left( {L}_{0}\right) \subset \operatorname{End}\left( L\right) \subset \) \( {a}^{-2}\operatorname{End}\left( {L}_{0}\right) \) . Thus each \( \operatorname{End}\left( L\right) \) is an order. Lemma 2.2.8 Let \( \mathcal{O} \) be an order in \( \operatorname{End}\left( V\right) \) . Then \( \mathcal{O} \subset \operatorname{End}\left( L\right) \) for some complete \( R \) -lattice \( L \) in \( V \) . 
Proof: Let \( L = \left\{ {\ell \in {L}_{0} \mid \mathcal{O}\ell \subset {L}_{0}}\right\} \) . Then \( L \) is an \( R \) -submodule of \( {L}_{0} \) . Also, if \( a\operatorname{End}\left( {L}_{0}\right) \subset \mathcal{O} \subset {a}^{-1}\operatorname{End}\left( {L}_{0}\right) \), then \( a{L}_{0} \subset L \) . Thus \( L \) is a complete \( R \) -lattice and \( \mathcal{O} \subset \operatorname{End}\left( L\right) \) . A simple description of these orders \( \operatorname{End}\left( L\right) \) can be given by obtaining a simple description of the complete \( R \) -lattices in \( V \) . Theorem 2.2.9 Let \( L \) be a complete \( R \) -lattice in \( V \) . Then there exists a basis \( \{ x, y\} \) of \( V \) and a fractional ideal \( J \) such that \( L = {Rx} + {Jy} \) . Proof: For any non-zero element \( y \in V, L \cap {ky} = {I}_{y}y \), where \( {I}_{y} = \{ \alpha \in \) \( k \mid {\alpha y} \in L\} \) . Since \( L \) is a complete \( R \) -lattice, there exists \( \beta \in R \) such that \( \beta {I}_{y} \subset R \) so that \( {I}_{y} \) is a fractional ideal. We first show that there is a basis \( \{ x, y\} \) of \( V \) such that \( L = {Ix} + {I}_{y}y \) for some fractional ideal \( I \) . Let \( \left\{ {{e}_{1},{e}_{2}}\right\} \) be a basis of \( V \) and define \[ I = \left\{ {\alpha \in k \mid \alpha {e}_{1} \in L + k{e}_{2}}\right\} \] Again it is easy to see that \( I \) is a fractional ideal. Since \( I{I}^{-1} = R \), there exist \( {\alpha }_{i} \in I \) and \( {\beta }_{i} \in {I}^{-1} \) such that \( 1 = \sum {\alpha }_{i}{\beta }_{i} \) . Now \( {\alpha }_{i}{e}_{1} = {\ell }_{i} + {\gamma }_{i}{e}_{2} \) where \( {\ell }_{i} \in L,{\gamma }_{i} \in k \) . Thus \( {e}_{1} = \sum {\beta }_{i}{\ell }_{i} + \gamma {e}_{2} \), where \( \gamma = \sum {\beta }_{i}{\gamma }_{i} \) . Let \( x = {e}_{1} - \gamma {e}_{2}, y = {e}_{2} \) . I claim that \( L = {Ix} + {I}_{y}y \) . 
First note that \[ {Ix} + {I}_{y}y = I\left( {{e}_{1} - \gamma {e}_{2}}\right) + L \cap {ky} = I\left( {\sum {\beta }_{i}{\ell }_{i}}\right) + L \cap {ky} \subset L. \] Conversely, suppose
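Stepping back from the proof: the integrality criterion of Lemma 2.2.4 and the counterexample following it (for \( A = \left( \frac{-1,3}{\mathbb{Q}}\right) \)) can be checked mechanically. Below is a minimal stdlib sketch; the multiplication table of a standard basis \( \{ 1, i, j, {ij}\} \) is hard-coded, and the helper names (`mul`, `nrd`, `is_integer`) are ours, not the book's.

```python
from fractions import Fraction as F

# Arithmetic in the quaternion algebra (a,b / Q) with a = -1, b = 3:
# an element x = x0 + x1*i + x2*j + x3*ij, where i^2 = a, j^2 = b, ij = -ji,
# is stored as a 4-tuple of Fractions.
a, b = F(-1), F(3)

def mul(x, y):
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    # Standard-basis multiplication table; (ij)^2 = -ab.
    return (x0*y0 + a*x1*y1 + b*x2*y2 - a*b*x3*y3,
            x0*y1 + x1*y0 - b*x2*y3 + b*x3*y2,
            x0*y2 + x2*y0 + a*x1*y3 - a*x3*y1,
            x0*y3 + x3*y0 + x1*y2 - x2*y1)

def add(x, y):
    return tuple(u + v for u, v in zip(x, y))

def tr(x):    # reduced trace: x + conj(x)
    return 2 * x[0]

def nrd(x):   # reduced norm: x * conj(x)
    x0, x1, x2, x3 = x
    return x0*x0 - a*x1*x1 - b*x2*x2 + a*b*x3*x3

def is_integer(x):
    # Lemma 2.2.4: x is an integer over Z iff tr(x) and nrd(x) lie in Z.
    return tr(x).denominator == 1 and nrd(x).denominator == 1

alpha = (F(0), F(0), F(1), F(0))         # j
beta  = (F(0), F(0), F(3, 5), F(4, 5))   # (3j + 4ij)/5
```

Here `is_integer(alpha)` and `is_integer(beta)` hold, while `is_integer(add(alpha, beta))` and `is_integer(mul(alpha, beta))` fail, matching the text's remark that sums and products of quaternion integers need not be integers.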
1139_(GTM44)Elementary Algebraic Geometry
Definition 6.16
Definition 6.16. Let pure-dimensional varieties \( {V}_{1} \) and \( {V}_{2} \) in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) intersect properly; the fixed number in Theorem 6.15 is the degree of intersection of \( {V}_{1} \) and \( {V}_{2} \), written as \( \deg \left( {{V}_{1} \cdot {V}_{2}}\right) \) . Remark 6.17. Note that \( \deg \left( {{V}_{1} \cdot {V}_{2}}\right) \) is not in general the same as \( \deg \left( {{V}_{1} \cap {V}_{2}}\right) \), in Definition 6.3. See Example 6.26. The notation \( \deg \left( {{V}_{1} \cdot {V}_{2}}\right) \) will be further illuminated in Remark 6.25. Theorem 6.18. Let \( {V}_{1} \) and \( {V}_{2} \) in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) be of pure dimensions \( r \) and \( s \) , respectively, and let \( {L}^{\left( {n - r}\right) + \left( {n - s}\right) } = L \) be linear of dimension \( {2n} - r - s \) . If \( {V}_{1},{V}_{2} \), and \( L \) intersect properly at a point \( P \), then for almost every transform \( {V}_{1}{}^{T} \) near \( {V}_{1},{V}_{2}{}^{{T}^{\prime }} \) near \( {V}_{2} \), and \( {L}^{{T}^{\prime \prime }} \) near \( L \), there is a common, fixed number of distinct points of \( {V}_{1}{}^{T} \cap {V}_{2}{}^{{T}^{\prime }} \cap {L}^{{T}^{\prime \prime }} \) near \( P \) . Proof. The proof is entirely analogous to that of Theorem 6.15, except we use Theorem 6.10 instead of Theorem 11.1.1 of Chapter III. Definition 6.19. The fixed number of Theorem 6.18 is the intersection multiplicity, or multiplicity of intersection, of \( {V}_{1},{V}_{2} \), and \( L \) at \( P \) ; it is denoted by \( i\left( {{V}_{1},{V}_{2}, L;P}\right) \) . Theorem 6.20. Let \( {V}_{1} \) and \( {V}_{2} \) in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) be of pure dimensions \( r \) and \( s \) , respectively. 
If they intersect properly at a point \( P \), then for almost every \( P \) -containing transform \( {L}^{\prime } \) of a linear variety \( {L}^{\left( {n - r}\right) + \left( {n - s}\right) }, i\left( {{V}_{1},{V}_{2},{L}^{\prime };P}\right) \) is defined and has a common, fixed value. Proof. The proof is similar to that of Theorem 6.8.1. The \( V \) used in that proof is now \[ {V}^{\dagger \dagger } \cap \left( {{\mathbb{C}}^{3{\left( n + 1\right) }^{2}} \times \mathrm{V}\left( {{X}_{n + 1} - 1}\right) }\right) , \] which is algebraic over a copy of \( {\mathbb{C}}^{3{\left( n + 1\right) }^{2}} \) . As noted just before the statement of Theorem 6.8, there is a proper subspace \( S \) of \( {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \) parametrizing the \( P \) -containing transforms of \( {L}^{\left( {n - r}\right) + \left( {n - s}\right) } \) ; we again take \( W \) to be the translate \( S \times \{ P\} \times \{ 1\} \) . The proof may now be completed, making the obvious changes in the proof of Theorem 6.8.1. Definition 6.21. The fixed number in Theorem 6.20 is the intersection multiplicity, or multiplicity of intersection, of \( {V}_{1} \) and \( {V}_{2} \) at \( P \) ; it is denoted by \( i\left( {{V}_{1},{V}_{2};P}\right) \) . Theorem 6.22. Let \( {V}_{1} \) and \( {V}_{2} \) in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) be of pure dimension, and suppose they intersect properly. If \( C \) is an irreducible component of \( {V}_{1} \cap {V}_{2} \), then at almost every point \( P \in C, i\left( {{V}_{1},{V}_{2};P}\right) \) has a common, fixed value. Proof. The proof is like that of Theorem 6.20; assume \( C \) is affine, and in place of \( S \times \{ P\} \times \{ 1\} \) for \( W \), use \( S \times C \times \{ 1\} \) . This gives the "almost all" statement over \( C \) instead of just at \( P \) . Definition 6.23. 
The fixed number in Theorem 6.22 is the multiplicity of intersection of \( {V}_{1} \) and \( {V}_{2} \) along \( C \), and is denoted by \( i\left( {{V}_{1},{V}_{2};C}\right) \) . Definition 6.24. Let \( {V}_{1} \) and \( {V}_{2} \) be two properly-intersecting pure-dimensional varieties in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) or \( {\mathbb{C}}^{n} \) . The formal sum \( \mathop{\sum }\limits_{{j = 1}}^{n}i\left( {{V}_{1},{V}_{2};{C}_{j}}\right) {C}_{j} \) of the distinct irreducible components \( {C}_{1},\ldots ,{C}_{n} \) of \( {V}_{1} \cap {V}_{2} \) is called the intersection product of \( {V}_{1} \) and \( {V}_{2} \), and is denoted by \( {V}_{1} \cdot {V}_{2} \) . Remark 6.25. It is natural to define the degree of \( {V}_{1} \cdot {V}_{2} \) to be \( \mathop{\sum }\limits_{{j = 1}}^{n}i\left( {{V}_{1},{V}_{2};{C}_{j}}\right) \cdot \deg {C}_{j} \) . If we do this, then we see that the symbol "deg \( \left( {{V}_{1} \cdot {V}_{2}}\right) \) " in Definition 6.16, is in fact what its notation indicates-it is the degree of \( {V}_{1} \cdot {V}_{2} \) . EXAMPLE 6.26. Rotating the circle \( \mathrm{V}\left( {{\left( X - r\right) }^{2} + {Z}^{2} - {s}^{2}}\right) \subset {\mathbb{R}}_{XZ}(r, s \in \mathbb{R} \) , \( r > s > 0) \) about \( {\mathbb{R}}_{Z} \) in \( {\mathbb{R}}_{XYZ} \) describes a real torus defined by the fourth-degree polynomial \[ p\left( {X, Y, Z}\right) = {\left( {X}^{2} + {Y}^{2} + {Z}^{2} + {r}^{2} - {s}^{2}\right) }^{2} - 4{r}^{2}\left( {{X}^{2} + {Y}^{2}}\right) \] it "rests on a tabletop" in the sense that it is tangent to \( \mathbf{V}\left( {Z - s}\right) \subset {\mathbb{R}}_{XYZ} \) . The corresponding complex varieties \( \mathbf{V}\left( p\right) \) and \( \mathbf{V}\left( {Z - s}\right) \) in \( {\mathbb{C}}_{XYZ} \) are surfaces of degree 4 and 1, respectively. 
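The tangency along a whole circle can already be seen at the level of polynomial identities: substituting \( Z = s \) into \( p \) gives \( {\left( {X}^{2} + {Y}^{2} - {r}^{2}\right) }^{2} \), a perfect square. The following stdlib snippet (our code, verifying only this algebraic identity, not the book's perturbation method) checks it at random rational points:

```python
from fractions import Fraction as F
from random import randint

def p(X, Y, Z, r, s):
    # Quartic defining the torus obtained by rotating the circle
    # (X - r)^2 + Z^2 = s^2 in the XZ-plane about the Z-axis.
    return (X*X + Y*Y + Z*Z + r*r - s*s)**2 - 4*r*r*(X*X + Y*Y)

# On the "tabletop" plane Z = s the torus polynomial becomes a perfect
# square, p(X, Y, s) = (X^2 + Y^2 - r^2)^2, so V(p) meets V(Z - s)
# doubly along the circle X^2 + Y^2 = r^2.
for _ in range(100):
    X = F(randint(-9, 9), randint(1, 9))
    Y = F(randint(-9, 9), randint(1, 9))
    r, s = F(randint(1, 9)), F(randint(1, 9))
    assert p(X, Y, s, r, s) == (X*X + Y*Y - r*r)**2
```

The squared factor is exactly what the intersection multiplicity \( 2 \) along \( C \) records in the example.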
The variety \( C = \mathbf{V}\left( p\right) \cap V\left( {Z - s}\right) \) has degree 2 (since it is a circle), and \( i\left( {\mathrm{\;V}\left( p\right) ,\mathrm{V}\left( {Z - s}\right) ;P}\right) = 2 \) at each \( P \in C \), so \( i\left( {\mathrm{\;V}\left( p\right) ,\mathrm{V}\left( {Z - s}\right) ;C}\right) = 2 \) . Thus \[ \mathrm{V}\left( p\right) \cdot \mathrm{V}\left( {Z - s}\right) = {2C} \] and \[ \deg \left( {\mathbf{V}\left( p\right) \cdot \mathbf{V}\left( {Z - s}\right) }\right) = 2\deg \left( {\mathbf{V}\left( p\right) \cap \mathbf{V}\left( {Z - s}\right) }\right) = 4. \] Note that \( C \) and \( \mathbf{V}\left( p\right) \) are nonsingular. Thus \( m\left( {C;P}\right) = 1 \) for each \( P \in C \) , \( m\left( {\mathbf{V}\left( p\right) ;Q}\right) = 1 \) for each \( Q \in \mathbf{V}\left( p\right) \), and \( m\left( {\mathbf{V}\left( p\right) ;C}\right) = 1 \) . ## Exercises 6.1 With notation as in the proof of Theorem 6.2, show that for almost every point \( P \in {\mathbb{C}}^{{\left( n + 1\right) }^{2}} \), there lie above \( P \) only finitely many points of the variety in (15). 6.2 Prove Theorem 6.12 for any irreducible variety \( {V}^{s} \subset {\mathbb{C}}^{n} \) . 6.3 Let \( V \) and \( W \) be properly-intersecting varieties in \( {\mathbb{C}}^{n} \), and let \( P \) be any point of \( V \cap W \) . If \( T \) is a nonsingular linear transformation of \( {\mathbb{C}}^{n} \), show that \( i\left( {V, W;P}\right) = \) \( i\left( {T\left( V\right), T\left( W\right) ;T\left( P\right) }\right) \) . 6.4 For any variety \( V \) in \( {\mathbb{C}}^{n} \) or \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \), show that \( V \) is nonsingular at \( P \in V \) iff \( m\left( {V;P}\right) = 1 \) . Generalize to the case where \( P \) is replaced by an irreducible subvariety of \( V \) . 
6.5 Let \( V \subset {\mathbb{C}}^{n} \) be irreducible of dimension \( r \geq n/2 \), let \( L \subset {\mathbb{C}}^{n} \) be a linear variety of dimension \( r \) properly intersecting \( V \), and let \( P \in V \cap L \) be a nonsingular point of \( V \) . Show that \( L \) is the tangent space to \( V \) at \( P \) iff \( i\left( {V, L;P}\right) > 1 \) (cf. Exercise 4.5). 6.6 (a) Let \( {V}_{1} \) and \( {V}_{2} \) be varieties in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) such that \( \dim {V}_{1} + \dim {V}_{2} \geq n \) . Show that for almost every linear transform \( {V}_{1}^{T} \) of \( {V}_{1},{V}_{1}^{T} \) and \( {V}_{2} \) intersect properly. (b) State and prove an analogous result in the affine setting. 6.7 The class of perturbations considered in this section is not the only one that can be used to arrive at multiplicity and multiplicity of intersection. For instance, if \( V = \mathrm{V}\left( p\right) \subset {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}} = {\mathbb{C}}^{n} \) is a hypersurface, it turns out that one can use the one-dimensional family of level surfaces \( \left\{ {\mathrm{V}\left( {p\left( X\right) - c}\right) \mid c \in {\mathbb{C}}_{Z}}\right\} \) to replace the set of linear transforms of \( V \) . Assume and use this fact in (a) and (b) below. (a) Show that the multiplicity of intersection at \( \left( {0,0}\right) \) of \( \mathbf{V}\left( {XY}\right) \subset {\mathbb{C}}_{XY} \) with any 1-subspace other than \( {\mathbb{C}}_{X} \) or \( {\mathbb{C}}_{Y} \), is two. (b) Let \( \left( X\right) = \left( {{X}_{1},\ldots ,{X}_{n}}\right) \), let \( q\left( X\right) \in \mathbb{C}\left\lbrack X\right\rbrack \smallsetminus \mathbb{C} \), and let a linear variety \( L \subset {\mathbb{C}}_{X} \) properly intersect \( \mathbf{V}\left( q\right) \) . 
Show that at an arbitrary point \( \left( 0\right) \in \mathbf{V}\left( q\right) \cap L \), intersecting the level curves of the hypersurface \( \mathrm{V}\left( {Y - q\left( X\right) }\right) \subset {\mathbb{C}}_{XY} \) with \( L \) yields the order with respect to \( L \) of \( q \) at \( \left( 0\right) \) . 6.8 Let \( C \) and \( \mathrm{V}\left( p\right) \left( {p \in \mathbb{C}\left\lbrack {X, Y}\right\rbrack \smallsetminus \mathbb{C}}\right) \) be two properly-intersecting curves in \( {\mathbb{C}}_{XY} = {\mathbb{C}}^{2} \) . For almost every \( c \in \mathbb{C} \vee \left( {p - c}\right) \cap C \) is a finite set \( {A}_{c} \) ; the number of points in \( {A}_{c} \) cl
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 2.1.9
Definition 2.1.9. Let \( A \) be a commutative Banach algebra. The radical of \( A,\operatorname{rad}\left( A\right) \), is defined by \[ \operatorname{rad}\left( A\right) = \bigcap \{ M : M \in \operatorname{Max}\left( A\right) \} = \bigcap \{ \ker \varphi : \varphi \in \Delta \left( A\right) \} \] where \( \operatorname{rad}\left( A\right) \) is understood to be \( A \) if \( \Delta \left( A\right) = \varnothing \) . Clearly, \( \operatorname{rad}\left( A\right) \) is a closed ideal of \( A \) . The algebra \( A \) is called semisimple if \( \operatorname{rad}\left( A\right) = \{ 0\} \) and radical if \( \operatorname{rad}\left( A\right) = A \) . In Examples 2.1.6 and 2.1.7 we have already seen examples of radical Banach algebras with nontrivial multiplication. On the other hand, it will follow from Theorem 2.2.5 in the next section that \( A \) is semisimple if and only if for every \( x \in A,{r}_{A}\left( x\right) = 0 \) implies that \( x = 0 \) . Because the spectral radius is subadditive and submultiplicative, this means that \( A \) is semisimple if and only if \( {r}_{A} \) is an algebra norm on \( A \) . Returning to the existence of nonzero multiplicative linear functionals, assume that \( A \) is a commutative Banach algebra with identity. Then the proper ideal \( \{ 0\} \) is contained in some maximal ideal which, by Theorem 2.1.8, is the kernel of a homomorphism from \( A \) onto \( \mathbb{C} \) . Thus \( \Delta \left( A\right) \neq \varnothing \) in this case. We continue with a number of interesting applications of Lemma 2.1.5. Corollary 2.1.10. Let \( \phi \) be a homomorphism from a commutative Banach algebra \( A \) into a semisimple commutative Banach algebra \( B \) . Then \( \phi \) is continuous. Proof. By the closed graph theorem it suffices to show that if \( {x}_{n} \in A, n \in \mathbb{N} \) , are such that \( {x}_{n} \rightarrow 0 \) and \( \phi \left( {x}_{n}\right) \rightarrow b \) for some \( b \in B \), then \( b = 0 \) . 
Let \( \varphi \in \Delta \left( B\right) \) . Then \( \varphi \circ \phi \in \Delta \left( A\right) \cup \{ 0\} \) and hence both \( \varphi \) and \( \varphi \circ \phi \) are continuous by Lemma 2.1.5. It follows that \[ \varphi \left( b\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\varphi \left( {\phi \left( {x}_{n}\right) }\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\left( {\varphi \circ \phi }\right) \left( {x}_{n}\right) = 0. \] Since this holds for all \( \varphi \in \Delta \left( B\right) \) and \( B \) is semisimple, we get \( b = 0 \) . Corollary 2.1.11. On a semisimple commutative Banach algebra all Banach algebra norms are equivalent. Proof. Suppose \( A \) is a semisimple commutative Banach algebra, and let \( \parallel \cdot {\parallel }_{1} \) and \( \parallel \cdot {\parallel }_{2} \) be two Banach algebra norms on \( A \) . The statement follows by applying Corollary 2.1.10 with \( \phi \) the identity mappings \( \left( {A,\parallel \cdot {\parallel }_{1}}\right) \rightarrow \left( {A,\parallel \cdot {\parallel }_{2}}\right) \) and \( \left( {A,\parallel \cdot {\parallel }_{2}}\right) \rightarrow \left( {A,\parallel \cdot {\parallel }_{1}}\right) \) . Corollary 2.1.12. Every involution on a semisimple commutative Banach algebra \( A \) is continuous. Proof. Let \( \parallel \cdot \parallel \) be the given norm on \( A \) . We define a new norm \( \left| \cdot \right| \) on \( A \) by \( \left| x\right| = \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \) . It is clear that \( \left| \cdot \right| \) is submultiplicative. If \( {x}_{n} \in A, n \in \mathbb{N} \), form a Cauchy sequence for \( \left| \cdot \right| \), then \( {\left( {x}_{n}^{ * }\right) }_{n} \) is a Cauchy sequence for \( \parallel \cdot \parallel \) . Consequently, \( \begin{Vmatrix}{{x}_{n}^{ * } - x}\end{Vmatrix} \rightarrow 0 \) for some \( x \in A \), and hence \( \left| {{x}_{n} - {x}^{ * }}\right| \rightarrow 0 \) . 
This shows that \( \left( {A,\left| \cdot \right| }\right) \) is complete. By Corollary 2.1.11 there exists \( c > 0 \) such that \[ \begin{Vmatrix}{x}^{ * }\end{Vmatrix} = \left| x\right| \leq c\parallel x\parallel \] for all \( x \in A \), as was to be shown. Let \( {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) denote the algebra of all infinitely many times differentiable functions on \( \left\lbrack {0,1}\right\rbrack \) . Corollary 2.1.13. The algebra \( {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) admits no Banach algebra norm. Proof. Suppose there is a Banach algebra norm \( \parallel \cdot \parallel \) on \( {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) . Applying Corollary 2.1.10 to the identity mapping from \( {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) into \( C\left\lbrack {0,1}\right\rbrack \) we see that there exists \( c > 0 \) such that \[ \parallel f{\parallel }_{\infty } \leq c\parallel f\parallel \] for all \( f \in {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) . Using this inequality, we prove that the differentiation mapping \( D : f \rightarrow {f}^{\prime } \) from \( {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) into itself is continuous. Thus, let \( {f}_{n} \in \) \( {C}^{\infty }\left\lbrack {0,1}\right\rbrack, n \in \mathbb{N} \), be such that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\begin{Vmatrix}{f}_{n}\end{Vmatrix} = 0\text{ and }\mathop{\lim }\limits_{{n \rightarrow \infty }}\begin{Vmatrix}{{f}_{n}^{\prime } - g}\end{Vmatrix} = 0 \] for some \( g \in {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) . Then \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{f}_{n}\end{Vmatrix}}_{\infty } = 0\text{ and }\mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}{f}_{n}^{\prime } - g\end{Vmatrix}}_{\infty } = 0. 
\] Since for each \( x, y \in \left\lbrack {0,1}\right\rbrack \) , \[ \left| {{\int }_{x}^{y}g\left( t\right) {dt}}\right| \leq \left| {{f}_{n}\left( y\right) - {f}_{n}\left( x\right) }\right| + \left| {{\int }_{x}^{y}\left( {{f}_{n}^{\prime }\left( t\right) - g\left( t\right) }\right) {dt}}\right| \] \[ \leq 2{\begin{Vmatrix}{f}_{n}\end{Vmatrix}}_{\infty } + \left| {y - x}\right| \cdot {\begin{Vmatrix}{f}_{n}^{\prime } - g\end{Vmatrix}}_{\infty } \] it follows that \( {\int }_{x}^{y}g\left( t\right) {dt} = 0 \) . Hence \( g = 0 \) because \( x \) and \( y \) are arbitrary. By the closed graph theorem, \( D \) is continuous. Thus there exists \( d > 0 \) such that \[ \begin{Vmatrix}{f}^{\prime }\end{Vmatrix} \leq d\parallel f\parallel \] for all \( f \in {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) . Now, let \( f\left( t\right) = {e}^{2dt}, t \in \left\lbrack {0,1}\right\rbrack \) . Then \[ {2d}\parallel f\parallel = \begin{Vmatrix}{f}^{\prime }\end{Vmatrix} \leq d\parallel f\parallel . \] This contradiction shows that there cannot exist a Banach algebra norm on \( {C}^{\infty }\left\lbrack {0,1}\right\rbrack \) . ## 2.2 The Gelfand representation In this section we develop the basic elements of Gelfand's theory which represents a (semisimple) commutative Banach algebra as an algebra of continuous functions on a locally compact Hausdorff space. Definition 2.2.1. Let \( A \) be a commutative Banach algebra and, as before, \( \Delta \left( A\right) \) the set of all nonzero (hence surjective) algebra homomorphisms from \( A \) to \( \mathbb{C} \) . We endow \( \Delta \left( A\right) \) with the weakest topology with respect to which all the functions \[ \Delta \left( A\right) \rightarrow \mathbb{C},\;\varphi \rightarrow \varphi \left( x\right) ,\;x \in A, \] are continuous. 
A neighbourhood basis at \( {\varphi }_{0} \in \Delta \left( A\right) \) is then given by the collection of sets \[ U\left( {{\varphi }_{0},{x}_{1},\ldots ,{x}_{n},\epsilon }\right) = \left\{ {\varphi \in \Delta \left( A\right) : \left| {\varphi \left( {x}_{i}\right) - {\varphi }_{0}\left( {x}_{i}\right) }\right| < \epsilon ,1 \leq i \leq n}\right\} , \] where \( \epsilon > 0, n \in \mathbb{N} \), and \( {x}_{1},\ldots ,{x}_{n} \) are arbitrary elements of \( A \) . This topology on \( \Delta \left( A\right) \) is called the Gelfand topology. There are several names in use for the space \( \Delta \left( A\right) \), equipped with the Gelfand topology: The structure space, the spectrum or Gelfand space of \( A \), and the maximal ideal space, the latter notion being justified through the bijective correspondence between \( \Delta \left( A\right) \) and \( \operatorname{Max}\left( A\right) \) (Theorem 2.1.8). Remark 2.2.2. We have seen in Lemma 2.1.5 that \( \Delta \left( A\right) \) is contained in the unit ball of \( {A}^{ * } \) . The Gelfand topology obviously coincides with the relative \( {w}^{ * } \) -topology of \( {A}^{ * } \) on \( \Delta \left( A\right) \) . When adjoining an identity \( e \) to \( A,\Delta \left( {A}_{e}\right) = \) \( \Delta \left( A\right) \cup \left\{ {\varphi }_{\infty }\right\} \) (Remark 2.1.3) and according to the following theorem the topology on \( \Delta \left( A\right) \) is the one induced from \( \Delta \left( {A}_{e}\right) \) . Theorem 2.2.3. Let \( A \) be a commutative Banach algebra. Then (i) \( \Delta \left( A\right) \) is a locally compact Hausdorff space. (ii) \( \Delta \left( {A}_{e}\right) = \Delta \left( A\right) \cup \left\{ {\varphi }_{\infty }\right\} \) is the one-point compactification of \( \Delta \left( A\right) \) . (iii) \( \Delta \left( A\right) \) is compact if \( A \) has an identity. Proof. It is easy to see that \( \Delta \left( A\right) \) is a Hausdorff space. 
Indeed, if \( {\varphi }_{1} \) and \( {\varphi }_{2} \) are distinct elements of \( \Delta \left( A\right) \), then for some \( x \in A,\delta = \frac{1}{2}\left| {{\varphi }_{1}\left( x\right) - {\varphi }_{2}\left( x\right) }\right| > 0 \) and hence \[ U\left( {{\varphi }_{1}, x,\delta }\right) \cap U\left( {{\varphi }_{2}, x,\delta }\right) = \varnothing . \] To prove that \( \Delta \left( A\right) \) is compact if \( A \) has a
1075_(GTM233)Topics in Banach Space Theory
Definition 9.2.1
Definition 9.2.1. A basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of a Banach space \( X \) is symmetric if \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is equivalent to \( {\left( {e}_{\pi \left( n\right) }\right) }_{n = 1}^{\infty } \) for every permutation \( \pi \) of \( \mathbb{N} \) . Symmetric bases are in particular unconditional. They also have the property of being equivalent to all their (infinite) subsequences, as the next lemma states. Lemma 9.2.2. Suppose \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a symmetric basis of a Banach space \( X \) . Then there exists a constant \( D \) such that \[ {D}^{-1}\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{N}{a}_{i}{e}_{{j}_{i}}}\end{Vmatrix} \leq \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{N}{a}_{i}{e}_{{k}_{i}}}\end{Vmatrix} \leq D\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{N}{a}_{i}{e}_{{j}_{i}}}\end{Vmatrix} \] for every \( N \in \mathbb{N} \), every choice of scalars \( {\left( {a}_{i}\right) }_{i = 1}^{N} \), and every two families of distinct natural numbers \( \left\{ {{j}_{1},\ldots ,{j}_{N}}\right\} \) and \( \left\{ {{k}_{1},\ldots ,{k}_{N}}\right\} \) . Proof. It is enough to prove the lemma for the basic sequence \( {\left( {e}_{n}\right) }_{n \geq {n}_{0}} \) for some \( {n}_{0} \) . 
If it is false, then for every \( {n}_{0} \) we can build a strictly increasing sequence of natural numbers \( {\left( {p}_{n}\right) }_{n = 0}^{\infty } \) with \( {p}_{0} = 0 \), natural numbers \( {m}_{n} \leq {p}_{n} - {p}_{n - 1} \), scalars \( {\left( {a}_{n, i}\right) }_{n = 1, i = 1}^{\infty ,{m}_{n}} \) , and families \( \left\{ {{j}_{n,1},\ldots ,{j}_{n,{m}_{n}}}\right\} ,\left\{ {{k}_{n,1},\ldots ,{k}_{n,{m}_{n}}}\right\} \) such that for all \( n = 1,2,\ldots \) we have \[ {p}_{n - 1} + 1 \leq {j}_{n, i},{k}_{n, i} \leq {p}_{n},\;1 \leq i \leq {m}_{n}, \] \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{{m}_{n}}{a}_{n, i}{e}_{{j}_{n, i}}}\end{Vmatrix} < {2}^{-n} \] and \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{{m}_{n}}{a}_{n, i}{e}_{{k}_{n, i}}}\end{Vmatrix} > {2}^{n} \] Now one can make a permutation \( \pi \) of \( \mathbb{N} \) such that \( \pi \left\lbrack {{p}_{n - 1} + 1,{p}_{n}}\right\rbrack = \left\lbrack {{p}_{n - 1} + 1,{p}_{n}}\right\rbrack \) and \( \pi \left( {j}_{n, i}\right) = {k}_{n, i} \), and this will contradict the equivalence of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) and \( {\left( {e}_{\pi \left( n\right) }\right) }_{n = 1}^{\infty } \) . Remark 9.2.3. The converse of Lemma 9.2.2 need not be true. In fact, the summing basis of \( {c}_{0} \) is equivalent to all its subsequences and is not even unconditional. Definition 9.2.4. A basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) of a Banach space \( X \) is subsymmetric provided it is unconditional and for every increasing sequence of integers \( {\left\{ {n}_{i}\right\} }_{i = 1}^{\infty } \), the subbasis \( {\left( {e}_{{n}_{i}}\right) }_{i = 1}^{\infty } \) is equivalent to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . Lemma 9.2.2 yields that symmetric bases are subsymmetric. However, these two concepts do not coincide, as shown by the following example, due to Garling [98]. Example 9.2.5. A subsymmetric basis that is not symmetric. 
Let \( X \) be the space of all sequences of scalars \( \xi = {\left( {\xi }_{n}\right) }_{n = 1}^{\infty } \) for which \[ \parallel \xi \parallel = \sup \mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{\left| {\xi }_{{n}_{k}}\right| }{\sqrt{k}} < \infty , \] the supremum being taken over all increasing sequences of integers \( {\left( {n}_{k}\right) }_{k = 1}^{\infty } \) . We leave it to the reader to check that \( X \), endowed with the norm defined above, is a Banach space whose unit vectors \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) form a subsymmetric basis that is not symmetric. Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be a symmetric basis in a Banach space \( X \) . For every permutation \( \pi \) of \( \mathbb{N} \) and every sequence of signs \( \epsilon = {\left( {\epsilon }_{n}\right) }_{n = 1}^{\infty } \), there is an automorphism \[ {T}_{\pi ,\epsilon } : X \rightarrow X,\;x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n} \mapsto {T}_{\pi ,\epsilon }\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\epsilon }_{n}{a}_{n}{e}_{\pi \left( n\right) }. \] The uniform boundedness principle yields a number \( K \) such that \[ \mathop{\sup }\limits_{{\pi ,\epsilon }}\begin{Vmatrix}{T}_{\pi ,\epsilon }\end{Vmatrix} \leq K, \] i.e., the estimate \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{\epsilon }_{n}{a}_{n}{e}_{\pi \left( n\right) }}\end{Vmatrix} \leq K\begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n}}\end{Vmatrix} \] (9.10) holds for all choices of signs \( \left( {\epsilon }_{n}\right) \) and all permutations \( \pi \) . The smallest constant \( 1 \leq K \) in (9.10) is called the symmetric constant of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) and will be denoted by \( {\mathrm{K}}_{\mathrm{s}} \) . We then say that \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is \( K \) -symmetric whenever \( {\mathrm{K}}_{\mathrm{s}} \leq K \) .
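Returning to Example 9.2.5: for a finitely supported sequence the supremum defining the Garling norm is a finite maximum over increasing index subsequences, and can be computed by a small dynamic program (an illustrative sketch, not from the text). Already on two coordinates the norm fails to be permutation-invariant, which is the heart of the non-symmetry:

```python
def garling_norm(xi):
    """sup over increasing n_1 < n_2 < ... of sum_k |xi[n_k]| / sqrt(k),
    for a finitely supported sequence xi.

    dp[k] holds the best value of a subsequence of length k chosen from
    the entries processed so far; scanning left to right preserves the
    required increasing order of the indices."""
    NEG = float("-inf")
    dp = [0.0] + [NEG] * len(xi)
    for x in xi:
        for k in range(len(dp) - 1, 0, -1):  # downwards: use each entry once
            if dp[k - 1] != NEG:
                dp[k] = max(dp[k], dp[k - 1] + abs(x) / k ** 0.5)
    return max(v for v in dp if v != NEG)

# The norm is not invariant under permuting the coefficients:
print(garling_norm([0.1, 1.0]))  # best: take only the second entry -> 1.0
print(garling_norm([1.0, 0.1]))  # 1 + 0.1/sqrt(2) ~ 1.0707
```

Exchanging the large and the small coefficient changes the norm, and one can check that over longer and longer vectors the discrepancy between a vector and a permuted copy is unbounded, so the basis is not symmetric; subsymmetry, by contrast, holds because the weights \( 1/\sqrt{k} \) do not depend on which increasing set of indices was chosen.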
For every \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n} \in X \), put \[ \parallel \left| x\right| \parallel = \sup \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{\infty }{\epsilon }_{n}{a}_{n}{e}_{\pi \left( n\right) }}\end{Vmatrix}, \] (9.11) the supremum being taken over all choices of signs \( \left( {\epsilon }_{n}\right) \) and all permutations \( \pi \) of the natural numbers. Equation (9.11) defines a new norm on \( X \) equivalent to \( \parallel \cdot \parallel \), since \( \parallel x\parallel \leq \parallel \left| x\right| \parallel \leq K\parallel x\parallel \) for all \( x \in X \) . With respect to this norm, \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a 1-symmetric basis of \( X \) . Theorem 9.2.6. Let \( X \) be a Banach space with normalized 1-symmetric basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . Suppose that \( {\left( {u}_{n}\right) }_{n = 1}^{\infty } \) is a normalized constant-coefficient block basic sequence. Then the subspace \( \left\lbrack {u}_{n}\right\rbrack \) is complemented in \( X \) by a norm-one projection. Proof. For each \( k = 1,2,\ldots \), let \( {u}_{k} = {c}_{k}\mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j} \), where \( {\left( {A}_{k}\right) }_{k = 1}^{\infty } \) is a sequence of mutually disjoint subsets of \( \mathbb{N} \) (notice that since \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is 1-symmetric, the blocks of the basis need not be in increasing order). For every fixed \( n \in \mathbb{N} \), let \( {\Pi }_{n} \) denote the set of all permutations \( \pi \) of \( \mathbb{N} \) such that for each \( 1 \leq k \leq n \), \( \pi \) restricted to \( {A}_{k} \) acts as a cyclic permutation of the elements of \( {A}_{k} \) (in particular, \( \pi \left( {A}_{k}\right) = {A}_{k} \) ), and \( \pi \left( j\right) = j \) for all \( j \notin { \cup }_{k = 1}^{n}{A}_{k} \) .
Every \( \pi \in {\Pi }_{n} \) has an associated operator on \( X \) defined for \( x = \mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{j} \) as \[ {T}_{n,\pi }\left( {\mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{j}}\right) = \mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{\pi \left( j\right) }. \] Notice that due to the 1-symmetry of \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \), we have \( \begin{Vmatrix}{{T}_{n,\pi }\left( x\right) }\end{Vmatrix} = \parallel x\parallel \) . Let us define an operator on \( X \) by averaging over all possible choices of permutations \( \pi \in {\Pi }_{n} \) . Given \( x = \mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{j} \) , \[ {T}_{n}\left( x\right) = \frac{1}{\left| {\Pi }_{n}\right| }\mathop{\sum }\limits_{{\pi \in {\Pi }_{n}}}{T}_{n,\pi }\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{n}\left( {\frac{1}{\left| {A}_{k}\right| }\mathop{\sum }\limits_{{j \in {A}_{k}}}{a}_{j}}\right) \mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j} + \mathop{\sum }\limits_{{j \notin { \cup }_{k = 1}^{n}{A}_{k}}}{a}_{j}{e}_{j}. \] Then, \[ \begin{Vmatrix}{{T}_{n}\left( x\right) }\end{Vmatrix} = \begin{Vmatrix}{\frac{1}{\left| {\Pi }_{n}\right| }\mathop{\sum }\limits_{{\pi \in {\Pi }_{n}}}{T}_{n,\pi }\left( x\right) }\end{Vmatrix} \leq \frac{1}{\left| {\Pi }_{n}\right| }\mathop{\sum }\limits_{{\pi \in {\Pi }_{n}}}\begin{Vmatrix}{{T}_{n,\pi }\left( x\right) }\end{Vmatrix} = \parallel x\parallel . \] Therefore, for each \( n \in \mathbb{N} \) the operator \[ {P}_{n}\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{n}\left( {\frac{1}{\left| {A}_{k}\right| }\mathop{\sum }\limits_{{j \in {A}_{k}}}{a}_{j}}\right) \mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j},\;x \in X, \] is a norm-one projection onto \( {\left\lbrack {u}_{k}\right\rbrack }_{k = 1}^{n} \) .
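A quick numerical check of the averaging construction (the blocks \( A_k \) and the vector below are toy choices, not from the text): in \( \ell_2 \), whose unit vector basis is 1-symmetric, the operator that replaces each block of coordinates by its average and kills the remaining coordinates is idempotent and does not increase the norm:

```python
import math

def block_average_projection(x, blocks):
    """P_n: replace the coordinates in each block A_k by their average and
    zero out the coordinates outside the blocks."""
    y = [0.0] * len(x)
    for A in blocks:
        avg = sum(x[j] for j in A) / len(A)
        for j in A:
            y[j] = avg
    return y

norm2 = lambda v: math.sqrt(sum(t * t for t in v))

blocks = [[0, 2], [1, 3, 4]]          # disjoint, not necessarily intervals
x = [1.0, -2.0, 3.0, 0.5, 4.0, 7.0]   # coordinate 5 lies outside the blocks

Px = block_average_projection(x, blocks)
assert Px == block_average_projection(Px, blocks)  # idempotent: a projection
assert norm2(Px) <= norm2(x) + 1e-12               # does not increase the norm
```

In \( \ell_2 \) the norm bound is simply the fact that averaging is an orthogonal projection on each block; the content of the theorem is that 1-symmetry of the basis alone suffices.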
Now it readily follows that \[ P\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {\frac{1}{\left| {A}_{k}\right| }\mathop{\sum }\limits_{{j \in {A}_{k}}}{a}_{j}}\right) \underset{{c}_{k}^{-1}{u}_{k}}{\underbrace{\mathop{\sum }\limits_{{j \in {A}_{k}}}{e}_{j}}} \] is a well-defined projection from \( X \) onto \( \left\lbrack {u}_{k}\right\rbrack \) with \( \parallel P\parallel = 1 \) . ## 9.3 Uniqueness of Unconditional Basis Zippin's theorem (Theorem 9.1.8) has a number of very elegant applications. We give a couple in this section. The first relates to the theorem of Lindenstrauss and Pelczyński proved in Section 8.3. There we saw that the normalized unconditional bases of the three spaces \( {c}_{0},{\ell }_{1} \), and \( {\ell }_{2} \) are unique (up to equivalence); we also saw that in contrast, the spaces \( {\ell }_{p} \) for \( p \neq 1,2 \) have at least two nonequivalent normalized unconditional bases. In 1969, Lindenstrauss a
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 4.4.7
Definition 4.4.7 The \( {w}^{ * } \) -measurable map \( \nu : \Omega \rightarrow \mathcal{M}\left( {\mathbb{R}}^{N}\right) \), defined in Theorem 4.4.6, is called a Young measure generated by the sequence \( \left\{ {z}_{{k}_{j}}\right\} \subset {L}^{\infty }\left( {\Omega ,{\mathbb{R}}^{N}}\right) \) . Remark 4.4.8 The conclusion (2) has a probability interpretation: In the limit, \( g\left( {z}_{{k}_{j}}\right) \) takes the value \( g\left( y\right) \) with probability \( {\nu }_{x}\left( y\right) \) at \( x \) . Therefore the Young measure can be used to describe the local phase proportions in an infinitesimally fine mixture. Corollary 4.4.9 Assume that \( \nu \) is the Young measure associated with the sequence \( \left\{ {z}_{k}\right\} \) and that \( {z}_{k} \rightarrow z \) in measure. Then \( {\nu }_{x} = {\delta }_{z\left( x\right) } \) a.e. Proof. \( \forall g \in {C}_{0}\left( {\mathbb{R}}^{N}\right), g\left( {z}_{k}\right) \rightarrow g\left( z\right) \) in measure. But by Theorem 4.4.6, \( g\left( {z}_{k}\right) {}^{ * } \rightharpoonup \bar{g} = \langle \nu, g\rangle \) . Therefore, \[ \left\langle {{\nu }_{x}, g}\right\rangle = g\left( {z\left( x\right) }\right) = \left\langle {{\delta }_{z\left( x\right) }, g}\right\rangle ,\text{ a.e.} \] Remark 4.4.10 In Theorem 4.4.6, the \( {L}^{\infty } \) -bounded sequence \( \left\{ {z}_{k}\right\} \subset {L}^{\infty }\left( {\Omega ,{\mathbb{R}}^{N}}\right) \) can be replaced by the \( {L}^{1} \) -bounded sequence \( \left\{ {z}_{k}\right\} \subset {L}^{1}\left( {\Omega ,{\mathbb{R}}^{N}}\right) \) . In this case, \( {\nu }_{x} \) is \( {\mathcal{L}}^{n}\lfloor \Omega \) a.e. a probability measure with \[ {\int }_{\Omega }{\int }_{{\mathbb{R}}^{N}}\left| y\right| d{\nu }_{x}\left( y\right) {dx} \leq \mathop{\liminf }\limits_{{k \rightarrow \infty }}{\begin{Vmatrix}{z}_{k}\end{Vmatrix}}_{{L}^{1}}. \] Again we claim that \( \sigma \) is the restriction of the Lebesgue measure on \( \Omega \) .
In fact, only (4.45) should be modified: \( \forall R > 0 \) \[ \sigma \left( K\right) \geq \mu \left( {K \times \overline{{B}_{R}\left( \theta \right) }}\right) \] \[ \geq \mathop{\limsup }\limits_{{k \rightarrow \infty }}{\mu }_{k}\left( {K \times \overline{{B}_{R}\left( \theta \right) }}\right) \] \[ \geq \mathop{\limsup }\limits_{{k \rightarrow \infty }}{\mathcal{L}}^{n}\left( \left\{ {x \in K \mid \left| {{z}_{k}\left( x\right) }\right| \leq R}\right\} \right) \] \[ \geq {\mathcal{L}}^{n}\left( K\right) - \frac{1}{R}\mathop{\sup }\limits_{k}{\begin{Vmatrix}{z}_{k}\end{Vmatrix}}_{{L}^{1}}. \] As \( R \rightarrow \infty \), it follows that \( \sigma \left( K\right) \geq {\mathcal{L}}^{n}\left( K\right) \) . Finally, we use Remark 4.4.5, and choose a sequence of positive functions \( {f}_{k}\left( {x, y}\right) \uparrow \left| y\right| \) pointwise; the monotone convergence theorem implies the conclusion. Corollary 4.4.11 Assume that \( \left\{ {z}_{k}\right\} \subset {L}^{\infty }\left( {\Omega ,{\mathbb{R}}^{N}}\right) \) satisfies \( {z}_{k} \rightarrow z \) a.e., and that \( \left\{ {w}_{k}\right\} \subset {L}^{\infty }\left( {\Omega ,{\mathbb{R}}^{M}}\right) \) generates the Young measure \( \nu \) . Then \( \left\{ {{z}_{k},{w}_{k}}\right\} : \Omega \rightarrow {\mathbb{R}}^{N + M} \) generates the Young measure \( {\delta }_{z\left( x\right) } \otimes {\nu }_{x}, x \in \Omega \) . Proof.
\( \forall \varphi \in {C}_{0}\left( {\mathbb{R}}^{N}\right) ,\forall \psi \in {C}_{0}\left( {\mathbb{R}}^{M}\right) ,\forall \eta \in {L}^{1}\left( \Omega \right) \), by definition and the Lebesgue dominated convergence theorem, we have \[ \varphi \left( {z}_{k}\right) \rightarrow \varphi \left( z\right) \text{, a.e.,} \] \[ {\eta \varphi }\left( {z}_{k}\right) \rightarrow {\eta \varphi }\left( z\right) \text{ in }{L}^{1}\left( \Omega \right) \] \[ \psi \left( {w}_{k}\right) {}^{ * } \rightharpoonup \bar{\psi } = \langle \nu ,\psi \rangle \text{ in }{L}^{\infty }\left( \Omega \right) . \] This implies that \[ {\int }_{\Omega }\eta \left( {\varphi \bigotimes \psi }\right) \left( {{z}_{k},{w}_{k}}\right) {dx} = {\int }_{\Omega }{\eta \varphi }\left( {z}_{k}\right) \psi \left( {w}_{k}\right) {dx} \rightarrow {\int }_{\Omega }\eta \left( x\right) \varphi \left( {z\left( x\right) }\right) \left\langle {{\nu }_{x},\psi }\right\rangle {dx}. \] Then, \[ \left( {\varphi \bigotimes \psi }\right) {\left( {z}_{k},{w}_{k}\right) }^{ * } \rightharpoonup \left\langle {{\delta }_{z}\bigotimes \nu ,\varphi \bigotimes \psi }\right\rangle \text{ in }{L}^{\infty }\left( \Omega \right) . \] Since \( {C}_{0}\left( {\mathbb{R}}^{N}\right) \bigotimes {C}_{0}\left( {\mathbb{R}}^{M}\right) \) is dense in \( {C}_{0}\left( {\mathbb{R}}^{N + M}\right) \), it follows that \[ f\left( {{z}_{k},{w}_{k}}\right) {}^{ * } \rightharpoonup \left\langle {{\delta }_{z}\bigotimes \nu, f}\right\rangle \text{ in }{L}^{\infty }\left( \Omega \right) , \] for all \( f \in {C}_{0}\left( {\mathbb{R}}^{N + M}\right) \) . Example 4.4.12 Let \( \varphi \) be the function defined in Corollary 4.3.3, and \( {\varphi }_{k}\left( x\right) = \varphi \left( {kx}\right) ,\forall k \in \mathbb{N} \) . It is known that \[ {\varphi }_{k}{}^{ * } \rightharpoonup {\int }_{0}^{1}\varphi \left( x\right) {dx} = {\lambda \alpha } + \left( {1 - \lambda }\right) \beta .
\] Similarly, \( \forall f \in C\left( {\mathbb{R}}^{1}\right) \), we have \[ f \circ {\varphi }_{k}{}^{ * } \rightharpoonup {\int }_{0}^{1}f\left( {\varphi \left( x\right) }\right) {dx} = {\lambda f}\left( \alpha \right) + \left( {1 - \lambda }\right) f\left( \beta \right) . \] Therefore the sequence \( \left\{ {\varphi }_{k}\right\} \) is associated with a Young measure: \( \nu = \lambda {\delta }_{\alpha } + \left( {1 - \lambda }\right) {\delta }_{\beta } \) ; i.e., \( {\nu }_{x} = {\lambda \delta }\left( {x - \alpha }\right) + \left( {1 - \lambda }\right) \delta \left( {x - \beta }\right) ,\forall x \in {\mathbb{R}}^{1} \) . Example 4.4.13 Let \( \left\{ {u}_{n}\right\} \) be the sequence of sawtooth functions: \[ {u}_{n}\left( x\right) = \left\{ \begin{array}{l} x - \frac{k}{n}\;\text{ if }x \in \left\lbrack {\frac{k}{n},\frac{{2k} + 1}{2n}}\right\rbrack \\ - x + \frac{k + 1}{n}\;\text{ if }x \in \left\lbrack {\frac{{2k} + 1}{2n},\frac{k + 1}{n}}\right\rbrack , \end{array}\right. \] which is a minimizing sequence of the functional \( J \) (see equation 4.32). Let \[ {z}_{n}\left( x\right) = {u}_{n}^{\prime }\left( x\right) = \left\{ \begin{array}{ll} 1 & \text{ if }x \in \left\lbrack {\frac{k}{n},\frac{{2k} + 1}{2n}}\right\rbrack , \\ - 1 & \text{ if }x \in \left\lbrack {\frac{{2k} + 1}{2n},\frac{k + 1}{n}}\right\rbrack . \end{array}\right. \] Let \( \nu \) be the Young measure associated with the sequence \( \left\{ {z}_{n}\right\} \) . Conclusion: \( \nu = \frac{1}{2}\left( {{\delta }_{-1} + {\delta }_{+1}}\right) \) . In fact, \[ g\left( {{z}_{n}\left( x\right) }\right) {}^{ * } \rightharpoonup \left\langle {{\nu }_{x}, g}\right\rangle , \] for all \( g \in {C}_{0}\left( {\mathbb{R}}^{1}\right) \) . Let us take \( g\left( y\right) = \min \left\{ {{\left( {y}^{2} - 1\right) }^{2},1}\right\} \) . Since \( J\left( {u}_{n}\right) \rightarrow 0 \) , \( \left\langle {{\nu }_{x}, g}\right\rangle = 0 \) a.e.
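The vanishing averages can also be seen numerically (a sketch with a discretization of my own, taking the sawtooth to switch slope at the midpoints \( (2k+1)/(2n) \) of the periods): \( z_n \) equals \( +1 \) on the first half of each period and \( -1 \) on the second half, so averages of \( g(z_n) \) over \( [0,1] \) tend to \( \frac{1}{2}\left(g(-1)+g(1)\right) \):

```python
from math import floor

def z(n, x):
    """Derivative of the n-th sawtooth: +1 on the first half of each period
    [k/n, (k+1)/n], -1 on the second half."""
    frac = n * x - floor(n * x)
    return 1.0 if frac < 0.5 else -1.0

def average(fun, n, samples=10_000):
    # midpoint rule for the integral of fun(z_n) over [0, 1]
    return sum(fun(z(n, (i + 0.5) / samples)) for i in range(samples)) / samples

g = lambda y: min((y * y - 1.0) ** 2, 1.0)   # the test function of the example
assert abs(average(g, n=7)) < 1e-12          # g(-1) = g(1) = 0

g1 = lambda y: y   # agrees with a C_0 cutoff on a neighborhood of [-1, 1]
assert abs(average(g1, n=7)) < 1e-2          # halves cancel: lambda(x) = 1/2
```

Both averages vanish, consistent with \( \langle \nu_x, g\rangle = 0 \) a.e. and with the half-half mixture \( \nu_x = \frac{1}{2}(\delta_{-1}+\delta_{+1}) \) derived in the text.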
This implies that \( \operatorname{supp}\left( {\nu }_{x}\right) \subset \{ - 1,1\} \), i.e., \( {\nu }_{x} = \lambda \left( x\right) {\delta }_{-1} + \left( {1 - \lambda \left( x\right) }\right) {\delta }_{+1} \) . Also, \( {z}_{n}{}^{ * } \rightharpoonup 0 \) in \( {L}^{\infty }\left( {\mathbb{R}}^{1}\right) \) and the relation \( {g}_{1}\left( {{z}_{n}\left( x\right) }\right) {}^{ * } \rightharpoonup \left\langle {{\nu }_{x},{g}_{1}}\right\rangle \) in \( {L}^{\infty }\left( {\mathbb{R}}^{1}\right) \) holds for all \( {g}_{1} \in {C}_{0}\left( {\mathbb{R}}^{1}\right) \) with \( {g}_{1}\left( y\right) = y \) for \( \left| y\right| < 2 \) . Thus \[ \left\langle {\lambda \left( x\right) {\delta }_{-1} + \left( {1 - \lambda \left( x\right) }\right) {\delta }_{+1},{g}_{1}}\right\rangle = 0. \] This implies that \( 1 - {2\lambda }\left( x\right) = 0 \), or \( \lambda \left( x\right) = \frac{1}{2} \) . Example 4.4.14 Let \( B, C \in {M}^{n \times N} \) satisfy \( \operatorname{rank}\left( {B - C}\right) = 1 \) . Assume \( \exists \lambda \in \left( {0,1}\right) \) such that \( {\lambda B} + \left( {1 - \lambda }\right) C = 0 \) . It is known that without loss of generality, one may assume \( B = \left( {1 - \lambda }\right) a\bigotimes {e}_{1} \), and \( C = - {\lambda a}\bigotimes {e}_{1} \) for some \( a \in {\mathbb{R}}^{N} \) and \( {e}_{1} \in {\mathbb{R}}^{n} \) . Let \[ \varphi \left( t\right) = \left\{ \begin{array}{l} \left( {1 - \lambda }\right) t\;\text{ if }t \in \left\lbrack {0,\lambda }\right\rbrack , \\ - \lambda \left( {t - 1}\right) \;\text{ if }t \in \left\lbrack {\lambda ,1}\right\rbrack . \end{array}\right. \] \[ \forall x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \in D \mathrel{\text{:=}} {\left\lbrack 0,1\right\rbrack }^{n},\text{ let } \] \[ {u}_{k}\left( x\right) = a{k}^{-1}\varphi \left( {k{x}_{1}}\right) ,\;k = 1,2,\ldots ,\text{ and }{z}_{k} = \nabla {u}_{k}.
\] Then \( \left\{ {z}_{k}\right\} \) is associated with the Young measure \( \nu = \lambda {\delta }_{B} + \left( {1 - \lambda }\right) {\delta }_{C} \) , where \( {\delta }_{B} \) and \( {\delta }_{C} \) are the probability measures concentrated at the matrices \( B \) and \( C \), respectively. In fact, \[ {z}_{k}\left( x\right) = \left\{ \begin{array}{ll} B & \text{ if }\left\{ {k{x}_{1}}\right\} \in \left( {0,\lambda }\right) \\ C & \text{ if }\left\{ {k{x}_{1}}\right\} \in \left( {\lambda ,1}\right) \end{array}\right. \] so \( \operatorname{dist}\left( {{z}_{k},\{ B, C\} }\right) = 0 \) . By Theorem 4.4.6, the probability measure \( \nu \) satisfies \( \operatorname{supp}\left( {\nu }_{x}\right) \subset \{ B, C\} \) a.e. Therefore, \( \exists \mu \left(
11_2022-An_Analogy_of_the_Carleson-Hunt_Theorem_with_Respe
Definition 5.13
Definition 5.13. Let \( X \) be a dense Borel subset of a compact metric space \( \bar{X} \), with a probability measure \( \mu \) defined on the restriction of the Borel \( \sigma \) -algebra \( \mathcal{B} \) to \( X \) . The resulting probability space \( \left( {X,\mathcal{B},\mu }\right) \) is a Borel probability space. For a compact metric space \( X \), the space \( \mathcal{M}\left( X\right) \) of Borel probability measures on \( X \) itself carries the structure of a compact metric space with respect to the weak*-topology. In particular, we can define the Borel \( \sigma \) - algebra \( {\mathcal{B}}_{\mathcal{M}\left( X\right) } \) on the space \( \mathcal{M}\left( X\right) \) in the usual way. If \( X \) is a Borel subset of a compact metric space \( \bar{X} \), then we define \[ \mathcal{M}\left( X\right) = \{ \mu \in \mathcal{M}\left( \bar{X}\right) \mid \mu \left( {\bar{X} \smallsetminus X}\right) = 0\} , \] and we will see in Lemma 5.23 that \( \mathcal{M}\left( X\right) \) is a Borel subset of \( \mathcal{M}\left( \bar{X}\right) \) . We are now in a position to state and prove the main result of this chapter. A set is called conull if it is the complement of a null set. For \( \sigma \) -algebras \( \mathcal{C},{\mathcal{C}}^{\prime } \) the relation \[ \mathcal{C}\underset{\mu }{\underline{ \subseteq }}{\mathcal{C}}^{\prime } \] means that for any \( A \in \mathcal{C} \) there is a set \( {A}^{\prime } \in {\mathcal{C}}^{\prime } \) with \( \mu \left( {A\bigtriangleup {A}^{\prime }}\right) = 0 \) . We also define \[ \mathcal{C}\underset{\mu }{=}{\mathcal{C}}^{\prime } \] to mean that \( \mathcal{C}\underset{\mu }{\underline{ \subseteq }}{\mathcal{C}}^{\prime } \) and \( {\mathcal{C}}^{\prime }\underset{\mu }{\underline{ \subseteq }}\mathcal{C} \) .
A \( \sigma \) -algebra \( \mathcal{A} \) on \( X \) is countably-generated if there exists a countable set \( \left\{ {{A}_{1},{A}_{2},\ldots }\right\} \) of subsets of \( X \) with the property that \( \mathcal{A} = \sigma \left( \left\{ {{A}_{1},{A}_{2},\ldots }\right\} \right) \) is the smallest \( \sigma \) -algebra containing the sets \( {A}_{1},{A}_{2},\ldots \) (that is, the intersection of every \( \sigma \) -algebra containing them). Theorem 5.14. Let \( \left( {X,\mathcal{B},\mu }\right) \) be a Borel probability space, and \( \mathcal{A} \subseteq \mathcal{B} \) a \( \sigma \) -algebra. Then there exists an \( \mathcal{A} \) -measurable conull set \( {X}^{\prime } \subseteq X \) and a system \( \left\{ {{\mu }_{x}^{\mathcal{A}} \mid x \in {X}^{\prime }}\right\} \) of measures on \( X \), referred to as conditional measures, with the following properties. (1) \( {\mu }_{x}^{\mathcal{A}} \) is a probability measure on \( X \) with \[ E\left( {f \mid \mathcal{A}}\right) \left( x\right) = \int f\left( y\right) \mathrm{d}{\mu }_{x}^{\mathcal{A}}\left( y\right) \] (5.3) almost everywhere for all \( f \in {\mathcal{L}}^{1}\left( {X,\mathcal{B},\mu }\right) \) . In other words, for any function\( {}^{ * } \) \( f \in {\mathcal{L}}^{1}\left( {X,\mathcal{B},\mu }\right) \) we have that \( \int f\left( y\right) \mathrm{d}{\mu }_{x}^{\mathcal{A}}\left( y\right) \) exists for all \( x \) belonging to a conull set in \( \mathcal{A} \), that on this set \[ x \mapsto \int f\left( y\right) \mathrm{d}{\mu }_{x}^{\mathcal{A}}\left( y\right) \] depends \( \mathcal{A} \) -measurably on \( x \), and that \[ {\int }_{A}\int f\left( y\right) \mathrm{d}{\mu }_{x}^{\mathcal{A}}\left( y\right) \mathrm{d}\mu \left( x\right) = {\int }_{A}f\mathrm{\;d}\mu \] for all \( A \in \mathcal{A} \) .
(2) If \( \mathcal{A} \) is countably-generated, then \( {\mu }_{x}^{\mathcal{A}}\left( {\left\lbrack x\right\rbrack }_{\mathcal{A}}\right) = 1 \) for all \( x \in {X}^{\prime } \), where \[ {\left\lbrack x\right\rbrack }_{\mathcal{A}} = \mathop{\bigcap }\limits_{{x \in A \in \mathcal{A}}}A \] is the atom of \( \mathcal{A} \) containing \( x \) ; moreover \( {\mu }_{x}^{\mathcal{A}} = {\mu }_{y}^{\mathcal{A}} \) for \( x, y \in {X}^{\prime } \) whenever \( {\left\lbrack x\right\rbrack }_{\mathcal{A}} = {\left\lbrack y\right\rbrack }_{\mathcal{A}} \) . (3) Property (1) uniquely determines \( {\mu }_{x}^{\mathcal{A}} \) for a.e. \( x \in X \) . In fact, property (1) for a dense countable set of functions in \( C\left( \bar{X}\right) \) uniquely determines \( {\mu }_{x}^{\mathcal{A}} \) for a.e. \( x \in X \) . (4) If \( \widetilde{\mathcal{A}} \) is any \( \sigma \) -algebra with \( \mathcal{A}\underset{\mu }{=}\widetilde{\mathcal{A}} \), then \( {\mu }_{x}^{\mathcal{A}} = {\mu }_{x}^{\widetilde{\mathcal{A}}} \) almost everywhere. * Notice that we are forced to work with genuine functions in \( {\mathcal{L}}^{1} \) in order that the right-hand side of (5.3) is defined. As we said before, \( {\mu }_{x}^{\mathcal{A}} \) may be singular to \( \mu \) . Remark 5.15. Theorem 5.14 is rather technical but quite powerful, so we assemble here some comments that will be useful both in the proof and in situations where the results are applied. (a) For a countably generated \( \sigma \) -algebra \( \mathcal{A} = \sigma \left( \left\{ {{A}_{1},{A}_{2},\ldots }\right\} \right) \) the atom in (2) is given by \[ {\left\lbrack x\right\rbrack }_{\mathcal{A}} = \mathop{\bigcap }\limits_{{x \in {A}_{i}}}{A}_{i} \cap \mathop{\bigcap }\limits_{{x \notin {A}_{i}}}X \smallsetminus {A}_{i} \] (5.4) and hence is \( \mathcal{A} \) -measurable (see Exercise 5.3.1). In fact \( {\left\lbrack x\right\rbrack }_{\mathcal{A}} \) is the smallest element of \( \mathcal{A} \) containing \( x \) .
(b) If \( N \subseteq X \) is a null set for \( \mu \), then \( {\mu }_{x}^{\mathcal{A}}\left( N\right) = 0 \) almost everywhere. In other words, for a \( \mu \) -null set \( N \), the set \( N \) is also a \( {\mu }_{x}^{\mathcal{A}} \) -null set for \( \mu \) -almost every \( x \) . This follows from property (1) applied to the function \( f = {\chi }_{N} \) . In many interesting cases, the atoms \( {\left\lbrack x\right\rbrack }_{\mathcal{A}} \) are null sets with respect to \( \mu \) , and so \( {\mu }_{x}^{\mathcal{A}} \) is singular to \( \mu \) . (c) The conditional measures constructed in Theorem 5.14 are sometimes said to give a disintegration of the measure \( \mu \) . (d) Notice that the uniqueness in property (3) (and similarly for (4)) may require switching to smaller conull sets. That is, if \( {\mu }_{x}^{\mathcal{A}} \) for \( x \in {X}^{\prime } \subseteq X \) and \( {\widetilde{\mu }}_{x}^{\mathcal{A}} \) for \( x \in {\widetilde{X}}^{\prime } \subseteq X \) are two systems of measures as in (1), then the claim is that there exists a conull subset \( {X}^{\prime \prime } \subseteq {X}^{\prime } \cap {\widetilde{X}}^{\prime } \) with \( {\mu }_{x}^{\mathcal{A}} = {\widetilde{\mu }}_{x}^{\mathcal{A}} \) for all \( x \in {X}^{\prime \prime } \) . (e) We only ever talk about atoms for countably generated \( \sigma \) -algebras. The first reason for this is that for a general \( \sigma \) -algebra the expression defined in Theorem 5.14(2) by an uncountable intersection may not be measurable (let alone \( \mathcal{A} \) -measurable). Moreover, even in those cases where the expression happens to be \( \mathcal{A} \) -measurable, the definition cannot be used to prove the stated assertions. We also note that it is not true that any sub- \( \sigma \) -algebra of a countably-generated \( \sigma \) -algebra is countably generated (but see Lemma 5.17 for a more positive statement). 
For example, the \( \sigma \) -algebra of null sets in \( \mathbb{T} \) with respect to Lebesgue measure is not countably-generated (but there are more interesting examples, see Exercise 6.1.2). Example 5.16. Let \( X = {\left\lbrack 0,1\right\rbrack }^{2} \) and \( \mathcal{A} = \mathcal{B} \times \{ \varnothing ,\left\lbrack {0,1}\right\rbrack \} \) as in Example 5.3. In this case Theorem 5.14 claims that any Borel probability measure \( \mu \) on \( X \) can be decomposed into "vertical components": the conditional measures \( {\mu }_{\left( {x}_{1},{x}_{2}\right) }^{\mathcal{A}} \) are defined on the vertical line segments \( \left\{ {x}_{1}\right\} \times \left\lbrack {0,1}\right\rbrack \), and these sets are precisely the atoms of \( \mathcal{A} \) . Moreover, \[ \mu \left( B\right) = {\int }_{X}{\mu }_{\left( {x}_{1},{x}_{2}\right) }^{\mathcal{A}}\left( B\right) \mathrm{d}\mu \left( {{x}_{1},{x}_{2}}\right) . \] (5.5) In this example \( {\mu }_{\left( {x}_{1},{x}_{2}\right) }^{\mathcal{A}} = {\nu }_{{x}_{1}} \) does not depend on \( {x}_{2} \), so (5.5) may be written as \[ \mu \left( B\right) = {\int }_{\left\lbrack 0,1\right\rbrack }{\nu }_{{x}_{1}}\left( B\right) \mathrm{d}\bar{\mu }\left( {x}_{1}\right) \] (5.6) where \( \bar{\mu } = {\pi }_{ * }\mu \) is the measure on \( \left\lbrack {0,1}\right\rbrack \) obtained by the projection \[ \pi : {\left\lbrack 0,1\right\rbrack }^{2} \rightarrow \left\lbrack {0,1}\right\rbrack \] \[ \left( {{x}_{1},{x}_{2}}\right) \mapsto {x}_{1} \] While (5.6) looks simpler than (5.5), in order to arrive at it a quotient space and a quotient measure have to be constructed (see Sect. 5.4). For simplicity we will often work with expressions like (5.5) in the general context. Once \( \mu \) is known explicitly, the measures \( {\mu }_{\left( {x}_{1},{x}_{2}\right) }^{\mathcal{A}} \) can often be computed.
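When \( \mathcal{A} \) is generated by a finite partition, the disintegration is elementary: \( \mu_x^{\mathcal{A}} \) is just \( \mu \) conditioned on the atom containing \( x \). A hypothetical finite example (my own, not from the text) verifying property (1) of Theorem 5.14 directly:

```python
from fractions import Fraction as F

# A hypothetical six-point probability space and a partition generating A.
mu = {1: F(1, 12), 2: F(1, 6), 3: F(1, 4), 4: F(1, 4), 5: F(1, 12), 6: F(1, 6)}
atoms = [{1, 2}, {3, 4, 5}, {6}]

def atom_of(x):
    return next(A for A in atoms if x in A)

def cond_measure(x):
    """mu_x^A: mu conditioned on the atom [x]_A (supported on the atom)."""
    A = atom_of(x)
    total = sum(mu[y] for y in A)
    return {y: mu[y] / total for y in A}

f = {1: 2, 2: -1, 3: 5, 4: 0, 5: 3, 6: 7}   # an arbitrary function on X

# Property (1): integrating the conditional integrals over any atom A in A
# recovers the integral of f over A.
for A in atoms:
    lhs = sum(sum(f[y] * m for y, m in cond_measure(x).items()) * mu[x] for x in A)
    rhs = sum(f[x] * mu[x] for x in A)
    assert lhs == rhs
```

The continuous examples work the same way, with the sums replaced by integrals over the atoms and the conull set \( X' \) absorbing the atoms of measure zero.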
For example, if \( \mu \) is defined by \[ \int f\mathrm{\;d}\mu = \frac{1}{3}\int f\left( {s, s}\right) \mathrm{d}s + {\int }_{0}^{1}{\int }_{0}^{\sqrt{s}}f\left( {s, t}\right) \mathrm{d}t\mathrm{\;d}s \] then \[ {\mu }_{\left( {x}_{1},{x}_{2}\right) }^{\mathcal{A}} = \frac{1}{\sqrt{{x}_{1}} + 1/3}{\delta }_{{x}_{1}} \times \left( {\frac{1}{3}{\delta }_{{x}_{1}} + {m}_{\left\lbrack 0,\sqrt{{x}_{1}}\right\rbrack }}\right) . \] To see that this equation holds, the reader should use Theorem 5.14(3). However, the real force of Theorem 5.14 lies in the fact that it allows an unknown measure to be decomposed into components which are often easier to wor
109_The rising sea Foundations of Algebraic Geometry
Definition 5.216
Definition 5.216. Given \( J \subseteq S \), we say that two chambers \( C, D \in \mathcal{C} \) are \( J \) -equivalent if there is a gallery of type \( \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) connecting \( C \) and \( D \) with \( {s}_{i} \in J \) for all \( 1 \leq i \leq n \) . The equivalence classes are called \( J \) -residues, or residues of type \( J \), and the \( J \) -residue containing a given chamber \( C \) is denoted by \( {R}_{J}\left( C\right) \) . A subset \( \mathcal{R} \subseteq \mathcal{C} \) is called a residue if it is a \( J \) -residue for some \( J \subseteq S \) . In the present generality of arbitrary chamber systems, it is not true that a residue has a well-defined type \( J \), or even a well-defined rank \( \left| J\right| \) . Nevertheless, we will allow ourselves to say " \( \mathcal{R} \) is a residue of rank \( m \) " as shorthand for " \( \mathcal{R} = {R}_{J}\left( C\right) \) for some \( C \in \mathcal{C} \) and some \( J \subseteq S \) such that \( \left| J\right| = m \) ." We will also say that a residue \( \mathcal{R} \) is spherical if it is a \( J \) -residue for some spherical subset \( J \subseteq S \), where \( J \) is said to be spherical if \( {W}_{J} \) is finite. [Note that this notion makes sense only because we have a fixed Coxeter system in mind.] For example, panels are spherical residues of rank 1 . The chamber system \( \mathcal{C} \) is called connected if \( \mathcal{C} = {R}_{S}\left( C\right) \) for some (and hence any) \( C \in \mathcal{C} \) . Finally, note that a residue \( {R}_{J}\left( C\right) \) can be viewed as a chamber system in its own right, over the set \( J \) . Definition 5.217. 
A morphism between two chamber systems \( {\mathcal{C}}^{\prime } \) and \( \mathcal{C} \) over \( S \) is a map \( \kappa : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \) such that \[ {C}^{\prime }{ \sim }_{s}{D}^{\prime } \Rightarrow \kappa \left( {C}^{\prime }\right) { \sim }_{s}\kappa \left( {D}^{\prime }\right) \] for any \( {C}^{\prime },{D}^{\prime } \in {\mathcal{C}}^{\prime } \) and \( s \in S \) . A morphism \( \kappa \) is called an isomorphism if it is bijective and the inverse \( {\kappa }^{-1} \) is also a morphism; in other words, \[ \kappa \left( {C}^{\prime }\right) { \sim }_{s}\kappa \left( {D}^{\prime }\right) \Rightarrow {C}^{\prime }{ \sim }_{s}{D}^{\prime } \] (5.21) for any \( {C}^{\prime },{D}^{\prime } \in {\mathcal{C}}^{\prime } \) and \( s \in S \) . The implication (5.21) says that the bijection \( \kappa \) maps every \( s \) -panel in \( {\mathcal{C}}^{\prime } \) onto an \( s \) -panel in \( \mathcal{C} \) . This leads naturally to our next definition. Definition 5.218. Given two chamber systems \( {\mathcal{C}}^{\prime } \) and \( \mathcal{C} \) over \( S \) and a natural number \( m \), we call a morphism \( \kappa : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \) an \( m \) -covering if for every spherical subset \( J \subseteq S \) of cardinality \( \left| J\right| \leq m \), every \( J \) -residue of \( {\mathcal{C}}^{\prime } \) is mapped bijectively onto a \( J \) -residue of \( \mathcal{C} \) . More briefly, the definition says that \( \kappa \) maps every spherical residue of rank at most \( m \) bijectively onto a (spherical) residue of the same rank. But we have spelled this out carefully in order to avoid ambiguity. Remarks 5.219. (a) Note that any \( m \) -covering maps \( s \) -panels of \( {\mathcal{C}}^{\prime } \) bijectively onto \( s \) -panels of \( \mathcal{C} \), since the natural number \( m \), by convention, is at least 1. 
It follows that the bijections between \( J \) -residues in the definition are actually isomorphisms of chamber systems over \( J \) . (b) Some of the literature uses a more restrictive notion of \( m \) -covering, in which \( J \) is allowed to be an arbitrary subset of \( S \) with \( \left| J\right| \leq m \) . We have chosen the definition that will be useful for us in what follows. Note that it makes use of our standing assumption that \( S \) is the set of distinguished generators of a Coxeter group \( W \) . The first observation about covering maps is that they have a "path-lifting" property that should look familiar to anyone who has studied covering maps in topology: Lemma 5.220. Let \( \kappa : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \) be a morphism of chamber systems over \( S \) such that for all \( s \in S,\kappa \) maps every \( s \) -panel of \( {\mathcal{C}}^{\prime } \) onto an \( s \) -panel of \( \mathcal{C} \) . Let \( {C}^{\prime } \) be a chamber in \( {\mathcal{C}}^{\prime } \), let \( \mathbf{s} \) be an \( S \) -word, and let \( \Gamma \) be a gallery in \( \mathcal{C} \) of type \( \mathbf{s} \) starting at \( \kappa \left( {C}^{\prime }\right) \) . Then there is a gallery \( {\Gamma }^{\prime } \) in \( {\mathcal{C}}^{\prime } \) of type \( \mathbf{s} \) starting at \( {C}^{\prime } \) such that \( \kappa \left( {\Gamma }^{\prime }\right) = \Gamma \) . Consequently, \( \kappa \) maps every \( J \) -residue of \( {\mathcal{C}}^{\prime } \) onto a \( J \) -residue of \( \mathcal{C} \) for all \( J \subseteq S \) . Proof. This is immediate from the definitions. Lemma 5.221. Let \( \kappa : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \) be a 1-covering, where \( {\mathcal{C}}^{\prime } \) is a chamber system over \( S \) and \( \mathcal{C} \) is a building of type \( \left( {W, S}\right) \) . Assume that any two chambers \( {C}^{\prime },{D}^{\prime } \in {\mathcal{C}}^{\prime } \) can be connected by a gallery of reduced type. 
Then \( {\mathcal{C}}^{\prime } \) is a building of type \( \left( {W, S}\right) \) and \( \kappa \) is an isomorphism. Proof. (a) \( \kappa \) is surjective. Choose an arbitrary \( {C}^{\prime } \in {\mathcal{C}}^{\prime } \) . Then \( \kappa \) maps \( {R}_{S}\left( {C}^{\prime }\right) \) onto \( {R}_{S}\left( {\kappa \left( {C}^{\prime }\right) }\right) \) by the last assertion of Lemma 5.220. Since \( \mathcal{C} \), being a building, is connected, \( {R}_{S}\left( {\kappa \left( {C}^{\prime }\right) }\right) = \mathcal{C}. \) (b) \( \kappa \) is an isomorphism. Let \( {C}^{\prime } \) and \( {D}^{\prime } \) be distinct chambers in \( {\mathcal{C}}^{\prime } \) . We have to show that \( \kappa \left( {C}^{\prime }\right) \neq \) \( \kappa \left( {D}^{\prime }\right) \) and that \[ \kappa \left( {C}^{\prime }\right) { \sim }_{s}\kappa \left( {D}^{\prime }\right) \Rightarrow {C}^{\prime }{ \sim }_{s}{D}^{\prime } \] for any \( {C}^{\prime },{D}^{\prime } \in {\mathcal{C}}^{\prime } \) and \( s \in S \) . By assumption, there exists a gallery \( {\Gamma }^{\prime } \) of reduced type \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) in \( {\mathcal{C}}^{\prime } \) connecting \( {C}^{\prime } \) and \( {D}^{\prime } \) . Since \( \kappa \) sends \( s \) -panels in \( {\mathcal{C}}^{\prime } \) bijectively onto \( s \) -panels in \( \mathcal{C},\kappa \left( {\Gamma }^{\prime }\right) \) is a gallery in \( \mathcal{C} \) of the same reduced type \( \mathbf{s} \) ; hence \( \delta \left( {\kappa \left( {C}^{\prime }\right) ,\kappa \left( {D}^{\prime }\right) }\right) = {s}_{1}\cdots {s}_{n} \) by condition (G) in Section 5.2. In particular, \( \delta \left( {\kappa \left( {C}^{\prime }\right) ,\kappa \left( {D}^{\prime }\right) }\right) \neq 1 \), so \( \kappa \left( {C}^{\prime }\right) \neq \kappa \left( {D}^{\prime }\right) \) . 
If we now assume that \( \kappa \left( {C}^{\prime }\right) { \sim }_{s}\kappa \left( {D}^{\prime }\right) \) for some \( s \in S \), then \( \delta \left( {\kappa \left( {C}^{\prime }\right) ,\kappa \left( {D}^{\prime }\right) }\right) = s \) and \( \mathbf{s} \) is a reduced decomposition of \( s \) . Hence \( n = 1 \) and \( \mathbf{s} = \left( s\right) \), so \( {C}^{\prime }{ \sim }_{s}{D}^{\prime } \) . (c) \( \left( {{\mathcal{C}}^{\prime },{\left( { \sim }_{s}\right) }_{s \in S}}\right) \) is a building of type \( \left( {W, S}\right) \) . Since we are working in the context of buildings as chamber systems, what we mean here is that there exists a function \( {\delta }^{\prime } : {\mathcal{C}}^{\prime } \times {\mathcal{C}}^{\prime } \rightarrow W \) such that the conditions of Proposition 5.23 are satisfied. This follows at once from (b) if we set \( {\delta }^{\prime }\left( {{C}^{\prime },{D}^{\prime }}\right) \mathrel{\text{:=}} \delta \left( {\kappa \left( {C}^{\prime }\right) ,\kappa \left( {D}^{\prime }\right) }\right) \) for \( {C}^{\prime },{D}^{\prime } \in {\mathcal{C}}^{\prime } \) . Remark 5.222. Suppose the building \( \mathcal{C} \) in Lemma 5.221 is spherical and of rank 2. Viewing it as a graph with colored edges, we can form its universal cover \( \kappa : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \), which is a 1-covering (see Exercise 5.224 below) but not an isomorphism. This shows that the assumption we made in Lemma 5.221 is not vacuous. However, it is a characteristic feature of buildings that we only have to check spherical rank-2 residues in order to make sure that a covering is an isomorphism: Proposition 5.223. Let \( \kappa : {\mathcal{C}}^{\prime } \rightarrow \mathcal{C} \) be a 2-covering, where \( {\mathcal{C}}^{\prime } \) is a connected chamber system over \( S \) and \( \mathcal{C} \) is a building of type \( \left( {W, S}\right) \) . 
Then \( {\mathcal{C}}^{\prime } \) is a building of type \( \left( {W, S}\right) \), and \( \kappa \) is an isomorphism. Proof. We will verify the assumption of Lemma 5.221. Since \( {\mathcal{C}}^{\prime } \) is connected, it suffices to show that any minimal gallery in \( {\mathcal{C}}^{\prime } \) has reduced type. For this we first observe the following: (*) Let \( {\Gamma }^{\prime } \) be a gallery of type \( \mathbf{s} \) in \( {\mathcal{C}}^{\prime } \) connecting \( {C}^{\prime } \) and \( {D}^{\prime } \), and let \( \mathbf{t} \) be an \( S \) -word homotopic t
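The gallery/residue formalism of Definition 5.216 is easy to make concrete in a toy case. The following sketch is our own illustration (not a construction from the text): we take the thin chamber system \( \mathcal{C} = W \) for \( W = S_3 \) with \( S = \{ s, t\} \), where \( C \sim_s D \) iff \( C = D \) or \( C = D s \), and compute \( J \)-residues by breadth-first search over \( s \)-adjacency.

```python
from itertools import permutations

# Toy illustration of Definition 5.216 (our own example, not from the
# text): the thin chamber system C = W for W = S_3 with S = {s, t},
# where C ~_s D iff C = D or C = D*s.  A J-residue R_J(C) is the set of
# chambers reachable from C by galleries whose types stay inside J.

def compose(p, q):
    # (p*q)(i) = p(q(i)) for permutations of {0, 1, 2}
    return tuple(p[q[i]] for i in range(3))

gens = {'s': (1, 0, 2), 't': (0, 2, 1)}   # the two simple reflections
chambers = list(permutations(range(3)))   # six chambers

def residue(C, J):
    """R_J(C): all chambers reachable from C by a gallery of type in J."""
    seen, frontier = {C}, [C]
    while frontier:
        D = frontier.pop()
        for name in J:
            E = compose(D, gens[name])    # the unique adjacent chamber here
            if E not in seen:
                seen.add(E)
                frontier.append(E)
    return seen

C0 = (0, 1, 2)
panel = residue(C0, {'s'})            # a rank-1 residue (an s-panel)
everything = residue(C0, {'s', 't'})
print(len(panel), len(everything))    # 2 6
```

Since \( s \) and \( t \) generate \( W \), the \( \{ s, t\} \)-residue of any chamber is all of \( \mathcal{C} \), matching the definition of connectedness above.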
1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space
Definition 2.6.12
Definition 2.6.12. The abstract lattice \( \mathcal{L} \) is attainable if there exists a bounded linear operator \( A \) on an infinite-dimensional separable complex Hilbert space such that Lat \( A \) is order-isomorphic to \( \mathcal{L} \) . Surprisingly little is known about which lattices are attainable. The invariant subspace problem, the question whether there is an \( A \) whose only invariant subspaces are \( \{ 0\} \) and \( \mathcal{H} \), can be rephrased as: is the totally ordered lattice with two elements attainable? For \( U \) the unilateral shift, Lat \( U \) is a very complicated and rich lattice, as Theorem 2.6.7 indicates. Some of its sublattices will be of the form Lat \( A \) for suitable operators \( A \) . Recall that given two subspaces \( \mathcal{M} \) and \( \mathcal{N} \) of a Hilbert space \( \mathcal{H} \), the subspace \( \mathcal{N} \ominus \mathcal{M} \) is defined to be \( \mathcal{N} \cap {\mathcal{M}}^{ \bot } \) . The next theorem shows that an "interval" of an attainable lattice is an attainable lattice. Theorem 2.6.13. Let \( A \) be a bounded operator on an infinite-dimensional separable Hilbert space. Suppose that \( \mathcal{M} \) and \( \mathcal{N} \) are in Lat \( A \) and \( \mathcal{M} \subset \mathcal{N} \) . If \( \mathcal{N} \ominus \mathcal{M} \) is infinite-dimensional, then the lattice \[ \{ \mathcal{L} : \mathcal{L} \in \operatorname{Lat}A\text{ and }\mathcal{M} \subset \mathcal{L} \subset \mathcal{N}\} \] is attainable. Proof. Let \( P \) be the projection onto the subspace \( \mathcal{N} \ominus \mathcal{M} \) . Define the bounded linear operator \( B \) on \( \mathcal{N} \ominus \mathcal{M} \) as \( B = {\left. PA\right| }_{\mathcal{N} \ominus \mathcal{M}} \) . We will show that Lat \( B \) is order-isomorphic to the lattice of the theorem. Let \( \mathcal{K} \in \operatorname{Lat}B \) . We show that \( \mathcal{M} \oplus \mathcal{K} \) is in the lattice of the theorem.
First of all, since \( \mathcal{K} \subset \mathcal{N} \ominus \mathcal{M} \), it is clear that \( \mathcal{M} \subset \mathcal{M} \oplus \mathcal{K} \subset \mathcal{N} \) . Let \( m + k \in \mathcal{M} \oplus \mathcal{K} \) . Then \[ A\left( {m + k}\right) = {Am} + \left( {I - P}\right) {Ak} + {PAk}. \] Since \( m \in \mathcal{M} \) and \( \mathcal{M} \in \operatorname{Lat}A \), it follows that \( {Am} \in \mathcal{M} \) . Since \( k \in \mathcal{K} \subset \mathcal{N} \) and \( \mathcal{N} \in \operatorname{Lat}A \), we have that \( {Ak} \in \mathcal{N} \) . Thus, since \( I - P \) is the projection onto \( {\mathcal{N}}^{ \bot } \oplus \mathcal{M} \), it follows that \( \left( {I - P}\right) {Ak} \in \mathcal{M} \) . Lastly, \( {PAk} = {Bk} \) and since \( k \in \mathcal{K} \) and \( \mathcal{K} \in \operatorname{Lat}B \) we must have that \( {Bk} \in \mathcal{K} \) . Thus \[ A\left( {m + k}\right) = \left( {{Am} + \left( {I - P}\right) {Ak}}\right) + {Bk} \in \mathcal{M} \oplus \mathcal{K}. \] Hence \( \mathcal{M} \oplus \mathcal{K} \) is a member of the lattice of the theorem. Now suppose that \( \mathcal{L} \) is a member of the given lattice. Define \( \mathcal{K} = \mathcal{L} \ominus \mathcal{M} \) . Clearly \( \mathcal{K} \subset \mathcal{N} \ominus \mathcal{M} \) . We will prove that \( \mathcal{K} \in \operatorname{Lat}B \) . Let \( k \in \mathcal{K} \) . Since \( \mathcal{K} \subset \mathcal{L} \) and \( \mathcal{L} \in \operatorname{Lat}A \), we have \( {Ak} \in \mathcal{L} \) . Write \( {Ak} \) as \( {Ak} = f + g \) with \( f \in \mathcal{L} \ominus \mathcal{M} \) and \( g \in \mathcal{M} \) . Then \( {PAk} = f \) since \( P \) is the projection onto \( \mathcal{N} \ominus \mathcal{M} \) and \( \mathcal{L} \ominus \mathcal{M} \subset \mathcal{N} \ominus \mathcal{M} \) . Thus \( {Bk} = {PAk} = f \in \mathcal{L} \ominus \mathcal{M} = \mathcal{K} \), so \( \mathcal{K} \in \operatorname{Lat}B \) . 
Since \( \mathcal{M} \subset \mathcal{L},\mathcal{K} = \mathcal{L} \ominus \mathcal{M} \) is equivalent to \( \mathcal{L} = \mathcal{M} \oplus \mathcal{K} \) . Thus \( \mathcal{K} \in \operatorname{Lat}B \) if and only if \( \mathcal{L} = \mathcal{M} \oplus \mathcal{K} \) is in the lattice in the statement of the theorem, which establishes the isomorphism. The invariant subspace lattice of the unilateral shift has interesting "intervals", including the ordinary closed unit interval. Example 2.6.14. Let \[ \phi \left( z\right) = \exp \left( \frac{z + 1}{z - 1}\right) \] and let \( \mathcal{M} = {\left( \phi {\mathbf{H}}^{2}\right) }^{ \bot } \) . Then Lat \( \left( {\left. {U}^{ * }\right| }_{\mathcal{M}}\right) \) is order-isomorphic to the closed unit interval \( \left\lbrack {0,1}\right\rbrack \) with its standard ordering. Proof. The function \( \phi \) is a singular inner function. The measure \( \mu \) defined by \( \mu \left( A\right) = {2\pi } \) for any Borel set \( A \) containing 0 and \( \mu \left( B\right) = 0 \) for Borel sets \( B \) that do not contain zero is the measure provided by Theorem 2.6.5. It follows from Beurling’s theorem (Theorem 2.2.12) that \( \phi {\mathbf{H}}^{2} \in \operatorname{Lat}U \) . By Theorem 1.2.20, \( \mathcal{M} \in \operatorname{Lat}{U}^{ * } \) . Let \( \mathcal{N} \in \operatorname{Lat}\left( {\left. {U}^{ * }\right| }_{\mathcal{M}}\right) \) . This is equivalent to \( {\mathcal{M}}^{ \bot } \subset {\mathcal{N}}^{ \bot } \) and \( {\mathcal{N}}^{ \bot } \in \operatorname{Lat}U \) . But this means that \( {\mathcal{N}}^{ \bot } = {\phi }_{a}{\mathbf{H}}^{2} \) for some inner function \( {\phi }_{a} \) such that \( {\phi }_{a} \) divides \( \phi \) . Since \( \phi \) is singular, Theorem 2.6.7 implies that \( {\phi }_{a} \) is also singular, and the singular measure \( {\mu }_{a} \) corresponding to \( {\phi }_{a} \) must be less than or equal to \( \mu \) .
This is the same as saying that \[ {\phi }_{a}\left( z\right) = \exp \left( {a\frac{z + 1}{z - 1}}\right) \] for some \( a \in \left\lbrack {0,1}\right\rbrack \) . Thus Lat \( \left( {\left. {U}^{ * }\right| }_{\mathcal{M}}\right) \cong \left\lbrack {0,1}\right\rbrack \) as lattices. ## 2.7 Outer Functions The structure of the outer functions can be explicitly described. First we need to establish the following technical lemma. Lemma 2.7.1. If \( f \in {\mathbf{H}}^{2} \) and \( f \) is not identically 0, then \( \log \left| {\widetilde{f}\left( {e}^{i\theta }\right) }\right| \) is in \( {\mathbf{L}}^{1}\left( {{S}^{1},{d\theta }}\right) \) . Proof. First of all, write the function log as the difference of its positive and negative parts; that is, \( \log x = {\log }^{ + }x - {\log }^{ - }x \) . Let \( B \) be the Blaschke product formed from the zeros of \( f \) . Then \( f = {Bg} \) for some \( g \in {\mathbf{H}}^{2} \) that never vanishes. Since \( \left| {\widetilde{f}\left( {e}^{i\theta }\right) }\right| = \left| {\widetilde{g}\left( {e}^{i\theta }\right) }\right| \) a.e., it suffices to show that \[ \log \left| {\widetilde{g}\left( {e}^{i\theta }\right) }\right| \in {\mathbf{L}}^{1}\left( {{S}^{1},{d\theta }}\right) . \] Since \( g \) never vanishes in \( \mathbb{D} \), we can write \( g\left( z\right) = {e}^{h\left( z\right) } \) for some function \( h \) analytic in \( \mathbb{D} \) . Then \[ \left| {g\left( z\right) }\right| = \exp \left( {\operatorname{Re}h\left( z\right) }\right) \] Dividing by a constant if necessary, we may assume that \( \left| {g\left( 0\right) }\right| = 1 \), or, equivalently, that \( \operatorname{Re}h\left( 0\right) = 0 \) . Fix \( r \in \left( {0,1}\right) \) . 
Then \[ \frac{1}{2\pi }{\int }_{0}^{2\pi }\log \left| {g\left( {r{e}^{i\theta }}\right) }\right| {d\theta } = \frac{1}{2\pi }{\int }_{0}^{2\pi }\operatorname{Re}h\left( {r{e}^{i\theta }}\right) {d\theta } = \operatorname{Re}h\left( 0\right) = 0, \] since \( \frac{1}{2\pi }{\int }_{0}^{2\pi }h\left( {r{e}^{i\theta }}\right) {d\theta } = h\left( 0\right) \) by Cauchy’s integral formula (Theorem 1.1.19) applied to the function \( {h}_{r}\left( z\right) = h\left( {rz}\right) \), which is analytic on a neighborhood of \( \overline{\mathbb{D}} \) . This implies that \[ \frac{1}{2\pi }{\int }_{0}^{2\pi }{\log }^{ + }\left| {g\left( {r{e}^{i\theta }}\right) }\right| {d\theta } = \frac{1}{2\pi }{\int }_{0}^{2\pi }{\log }^{ - }\left| {g\left( {r{e}^{i\theta }}\right) }\right| {d\theta } \] and thus that \[ \frac{1}{2\pi }{\int }_{0}^{2\pi }\left| {\log \left| {g\left( {r{e}^{i\theta }}\right) }\right| }\right| {d\theta } = 2\frac{1}{2\pi }{\int }_{0}^{2\pi }{\log }^{ + }\left| {g\left( {r{e}^{i\theta }}\right) }\right| {d\theta } \] \[ = \frac{1}{2\pi }{\int }_{0}^{2\pi }{\log }^{ + }{\left| g\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta } \] Since \( {\log }^{ + }x \leq x \) for all \( x > 0 \) and \( g \in {\mathbf{H}}^{2} \), we then have \[ \frac{1}{2\pi }{\int }_{0}^{2\pi }\left| {\log \left| {g\left( {r{e}^{i\theta }}\right) }\right| }\right| {d\theta } \leq \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| g\left( r{e}^{i\theta }\right) \right| }^{2}{d\theta } \leq \parallel g{\parallel }^{2}. \] Also, we can choose an increasing sequence \( \left\{ {r}_{n}\right\} \) of positive numbers with \( \left\{ {r}_{n}\right\} \rightarrow 1 \) such that \( \left\{ {g\left( {{r}_{n}{e}^{i\theta }}\right) }\right\} \rightarrow g\left( {e}^{i\theta }\right) \) a.e. (by Corollary 1.1.11). From the above, for each \( n \) , \[ \frac{1}{2\pi }{\int }_{0}^{2\pi }\left| {\log \left| {g\left( {{r}_{n}{e}^{i\theta }}\right) }\right| }\right| {d\theta } \leq \parallel g{\parallel }^{2}. 
\] A straightforward application of Fatou's lemma on convergence of Lebesgue integrals [47, p. 23] implies that \[ \frac{1}{2\pi }{\int }_{0}^{2\pi }\left| {\log \left| {\widetilde{g}\left( {e}^{i\theta }\right) }\right| }\right| {d\theta } \leq \parallel g{\parallel }^{2} \] which is the desired result. Notice that this gives another proof of the F. and M. Riesz theorem (proven in Theorem 2.
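The two identities driving the proof of Lemma 2.7.1 — the circular means of \( \log \left| g\right| \) equal \( \operatorname{Re}h\left( 0\right) = 0 \), and hence the mean of \( \left| {\log \left| g\right| }\right| \) is twice the mean of \( {\log }^{ + }\left| g\right| \) — can be sanity-checked numerically. A sketch (ours, not the book's) with the arbitrary choice \( h\left( z\right) = z \), so \( g = {e}^{z} \) is zero-free with \( \left| {g\left( 0\right) }\right| = 1 \):

```python
import numpy as np

# Numerical sanity check of the mean-value step in the proof of
# Lemma 2.7.1.  With g = e^h, h(z) = z, we have log|g(z)| = Re z.
r = 0.7
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
log_abs_g = np.real(r * np.exp(1j * theta))     # log|g(re^{iθ})| = r cos θ

mean_log  = log_abs_g.mean()                    # should equal Re h(0) = 0
mean_abs  = np.abs(log_abs_g).mean()            # mean of |log|g||
mean_plus = np.maximum(log_abs_g, 0.0).mean()   # mean of log⁺|g|

print(mean_log)                  # ≈ 0
print(mean_abs - 2 * mean_plus)  # ≈ 0, since the mean of log|g| vanishes
```

The second line is exactly the step in the proof: once the mean of \( \log \left| g\right| \) is 0, the positive and negative parts have equal means, so \( \left| {\log \left| g\right| }\right| \) averages to twice \( {\log }^{ + }\left| g\right| \).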
1129_(GTM35)Several Complex Variables and Banach Algebras
Definition 18.2
Definition 18.2. \( {H}_{1} \) is the space of all real-valued functions \( u \) on \( \Gamma \) such that \( u \) is absolutely continuous, \( u \in {L}^{2}\left( \Gamma \right) \) and \( \dot{u} \in {L}^{2}\left( \Gamma \right) \) . For \( u \in {H}_{1} \), we put \[ \parallel u{\parallel }_{1} = \parallel u{\parallel }_{{L}^{2}} + \parallel \dot{u}{\parallel }_{{L}^{2}}. \] Normed with \( \parallel \cdot {\parallel }_{1} \), \( {H}_{1} \) is a Banach space. Fix \( u \in {H}_{1} \) and write \[ u = {a}_{0} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\cos {n\theta } + {b}_{n}\sin {n\theta }. \] Since \( \dot{u} \in {L}^{2} \), \( \mathop{\sum }\limits_{1}^{\infty }{n}^{2}\left( {{a}_{n}^{2} + {b}_{n}^{2}}\right) < \infty \) and so \( \mathop{\sum }\limits_{1}^{\infty }\left( {\left| {a}_{n}\right| + \left| {b}_{n}\right| }\right) < \infty \) . Definition 18.3. For \( u \) as above, \[ {Tu} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\sin {n\theta } - {b}_{n}\cos {n\theta }. \] Observe the following facts: (2) If \( u, v \in {H}_{1} \), then \( u + {iv} \) is a boundary function provided \( u = - {Tv} \) . (3) If \( u \in {H}_{1} \), then \( {Tu} \in {H}_{1} \) and \( \parallel {Tu}{\parallel }_{1} \leq \parallel u{\parallel }_{1} \) . Definition 18.4. Let \( {w}_{2},\ldots ,{w}_{n} \) be smooth boundary functions and put \( w = \left( {{w}_{2},\ldots ,{w}_{n}}\right) \) . Then \( w \) is a map of \( \Gamma \) into \( {\mathbb{C}}^{n - 1} \) . For \( x \in {H}_{1} \) , \[ {A}_{w}x = - T\{ h\left( {x, w}\right) \} \] where \( h \) is as in (1). \( {A}_{w} \) is thus a map of \( {H}_{1} \) into \( {H}_{1} \) . Let \( U \) be as in Theorem 18.3 and choose \( \delta > 0 \) such that the point described by (1) with parameters \( {x}_{1} \) and \( w \) lies in \( U \) provided \( \left| {x}_{1}\right| < \delta \) and \( \left| {w}_{j}\right| < \delta \), \( 2 \leq j \leq n \) . Lemma 18.4.
Let \( {w}_{2},\ldots ,{w}_{n} \) be smooth boundary functions with \( \left| {w}_{j}\right| < \delta \) for all \( j \) and such that \( {w}_{2} \) is schlicht, i.e., its analytic extension is one-one in \( \left| \zeta \right| \leq 1 \) . Put \( A = {A}_{w} \) . Suppose \( {x}^{ * } \in {H}_{1} \), \( \left| {x}^{ * }\right| < \delta \) on \( \Gamma \) and \( A{x}^{ * } = {x}^{ * } \) . Then \( \exists \) analytic disk \( E \) with \( \partial E \) contained in \( U \) . Proof. Since \( A{x}^{ * } = {x}^{ * } \), \( {x}^{ * } = - T\left\{ {h\left( {{x}^{ * }, w}\right) }\right\} \), and so \( {x}^{ * } + {ih}\left( {{x}^{ * }, w}\right) \) is a boundary function by (2). Let \( \psi \) be the analytic extension of \( {x}^{ * } + {ih}\left( {{x}^{ * }, w}\right) \) to \( \left| \zeta \right| < 1 \) . The set defined for \( \left| \zeta \right| \leq 1 \) by \( {z}_{1} = \psi \left( \zeta \right) ,{z}_{2} = {w}_{2}\left( \zeta \right) ,\ldots ,{z}_{n} = {w}_{n}\left( \zeta \right) \) is an analytic disk \( E \) in \( {\mathbb{C}}^{n} \) . \( \partial E \) is defined for \( \left| \zeta \right| = 1 \) by \( {z}_{1} = {x}^{ * }\left( \zeta \right) + {ih}\left( {{x}^{ * }\left( \zeta \right), w\left( \zeta \right) }\right) \), \( {z}_{2} = {w}_{2}\left( \zeta \right) ,\ldots ,{z}_{n} = {w}_{n}\left( \zeta \right) \), and so by (1) lies on \( {\Sigma }^{{2n} - 1} \) . Since by hypothesis \( \left| {x}^{ * }\right| < \delta \) and \( \left| {w}_{j}\right| < \delta \) for all \( j \), \( \partial E \subset U \) . In view of the preceding, to prove Theorem 18.3, it suffices to show that \( A = {A}_{w} \) has a fix-point \( {x}^{ * } \) in \( {H}_{1} \) with \( \left| {x}^{ * }\right| < \delta \) for prescribed small \( w \) . To produce this fix-point, we shall use the following well-known Lemma on metric spaces. Lemma 18.5.
Let \( K \) be a complete metric space with metric \( \rho \) and \( \Phi \) a map of \( K \) into \( K \) which satisfies \[ \rho \left( {\Phi \left( x\right) ,\Phi \left( y\right) }\right) \leq {\alpha \rho }\left( {x, y}\right) ,\;\text{ all }x, y \in K, \] where \( \alpha \) is a constant with \( 0 < \alpha < 1 \) . Then \( \Phi \) has a fix-point in \( K \) . We leave the proof as Exercise 18.1. As complete metric space we shall use the ball in \( {H}_{1} \) of radius \( M \), \( {B}_{M} = \left\{ {x \in {H}_{1} : \parallel x{\parallel }_{1} \leq M}\right\} \) . We shall show that for small \( M \), if \( \left| w\right| \) is sufficiently small and \( A = {A}_{w} \), then (4) \( A \) maps \( {B}_{M} \) into \( {B}_{M} \) . (5) \( \exists \alpha \), \( 0 < \alpha < 1 \), such that \[ \parallel {Ax} - {Ay}{\parallel }_{1} \leq \alpha \parallel x - y{\parallel }_{1}\;\text{ for all }x, y \in {B}_{M}. \] Hence Lemma 18.5 will apply to \( A \) . We need some notation. Fix \( N \) and let \( x = \left( {{x}_{1},\ldots ,{x}_{N}}\right) \) be a map of \( \Gamma \) into \( {\mathbb{R}}^{N} \) such that \( {x}_{i} \in {H}_{1} \) for each \( i \) . \[ \dot{x} = \left( {{\dot{x}}_{1},\ldots ,{\dot{x}}_{N}}\right) ,\;\left| x\right| = \sqrt{\mathop{\sum }\limits_{{i = 1}}^{N}{\left| {x}_{i}\right| }^{2}}, \] \[ \parallel x{\parallel }_{1} = \sqrt{{\int }_{\Gamma }{\left| x\right| }^{2}{d\theta }} + \sqrt{{\int }_{\Gamma }{\left| \dot{x}\right| }^{2}{d\theta }}, \] \[ \parallel x{\parallel }_{\infty } = \sup \left| x\right| \text{, taken over }\Gamma \text{.} \] Observe that \( \parallel x{\parallel }_{\infty } \leq C\parallel x{\parallel }_{1} \), where \( C \) is a constant depending only on \( N \) . In the following two Exercises, \( h \) is a smooth function on \( {\mathbb{R}}^{N} \) which vanishes at 0 of order \( \geq 2 \) . *EXERCISE 18.2.
\( \exists \) constant \( K \) depending only on \( h \) such that for every map \( x \) of \( \Gamma \) into \( {\mathbb{R}}^{N} \) with \( \parallel x{\parallel }_{\infty } \leq 1 \) , \[ \parallel h\left( x\right) {\parallel }_{1} \leq K{\left( \parallel x{\parallel }_{1}\right) }^{2}. \] *EXERCISE 18.3. \( \exists \) constant \( K \) depending only on \( h \) such that for every pair of maps \( x, y \) of \( \Gamma \) into \( {\mathbb{R}}^{N} \) with \( \parallel x{\parallel }_{\infty } \leq 1,\parallel y{\parallel }_{\infty } \leq 1 \) . \[ \parallel h\left( x\right) - h\left( y\right) {\parallel }_{1} < K\parallel x - y{\parallel }_{1}\left( {\parallel x{\parallel }_{1} + \parallel y{\parallel }_{1}}\right) . \] Fix boundary functions \( {w}_{2},\ldots ,{w}_{n} \) as earlier and put \( w = \left( {{w}_{2},\ldots ,{w}_{n}}\right) \) . Then \( w \) is a map of \( \Gamma \) into \( {\mathbb{C}}^{n - 1} = {\mathbb{R}}^{{2n} - 2} \) . Lemma 18.6. For all sufficiently small \( M > 0 \) the following holds: if \( \parallel w{\parallel }_{1} < \) \( M \) and \( A = {A}_{w} \), then \( A \) maps \( {B}_{M} \) into \( {B}_{M} \) and \( \exists \alpha ,0 < \alpha < 1 \), such that \( \parallel {Ax} - {Ay}{\parallel }_{1} \leq \alpha \parallel x - y{\parallel }_{1} \) for all \( x, y \in {B}_{M}. \) Proof. Fix \( M \) and choose \( w \) with \( \parallel w{\parallel }_{1} < M \) and choose \( x \in {B}_{M} \) . The map \( \left( {x, w}\right) \) takes \( \Gamma \) into \( \mathbb{R} \times {\mathbb{C}}^{n - 1} = {\mathbb{R}}^{{2n} - 1} \) . If \( M \) is small, \( \parallel \left( {x, w}\right) {\parallel }_{\infty } \leq 1 \) . Since \( \left( {x, w}\right) = \left( {x,0}\right) + \left( {0, w}\right) , \) \[ \parallel \left( {x, w}\right) {\parallel }_{1} \leq \parallel \left( {x,0}\right) {\parallel }_{1} + \parallel \left( {0, w}\right) {\parallel }_{1} = \parallel x{\parallel }_{1} + \parallel w{\parallel }_{1}. 
\] By Exercise 18.2, \[ \parallel h\left( {x, w}\right) {\parallel }_{1} < K{\left( \parallel \left( x, w\right) {\parallel }_{1}\right) }^{2} < K{\left( \parallel x{\parallel }_{1} + \parallel w{\parallel }_{1}\right) }^{2} < K{\left( M + M\right) }^{2} = 4{M}^{2}K, \] so \[ \parallel {Ax}{\parallel }_{1} = \parallel T\{ h\left( {x, w}\right) \} {\parallel }_{1} \leq \parallel h\left( {x, w}\right) {\parallel }_{1} < 4{M}^{2}K. \] Hence if \( M < 1/{4K} \), \( \parallel {Ax}{\parallel }_{1} \leq M \) . So for \( M < 1/{4K} \), \( A \) maps \( {B}_{M} \) into \( {B}_{M} \) . Next fix \( M < 1/{4K} \) and \( w \) with \( \parallel w{\parallel }_{1} < M \) and fix \( x, y \in {B}_{M} \) . If \( M \) is small, \( \parallel \left( {x, w}\right) {\parallel }_{\infty } \leq 1 \) and \( \parallel \left( {y, w}\right) {\parallel }_{\infty } \leq 1 \) . \[ {Ax} - {Ay} = T\{ h\left( {y, w}\right) - h\left( {x, w}\right) \} . \] Hence by (3) and Exercise 18.3, \[ \parallel {Ax} - {Ay}{\parallel }_{1} \leq \parallel h\left( {y, w}\right) - h\left( {x, w}\right) {\parallel }_{1} \leq K\parallel \left( {x, w}\right) - \left( {y, w}\right) {\parallel }_{1}\left( {\parallel \left( {x, w}\right) {\parallel }_{1} + \parallel \left( {y, w}\right) {\parallel }_{1}}\right) \leq K\parallel x - y{\parallel }_{1}\left( {\parallel x{\parallel }_{1} + \parallel y{\parallel }_{1} + 2\parallel w{\parallel }_{1}}\right) \leq {4MK}\parallel x - y{\parallel }_{1}. \] Put \( \alpha = {4MK} \) . Then \( \alpha < 1 \) and we are done. Proof of Theorem 18.3. Choose \( M \) by Lemma 18.6, choose \( w \) with \( \parallel w{\parallel }_{1} < M \) and put \( A = {A}_{w} \) . In view of Lemmas 18.5 and 18.6, \( A \) has a fix-point \( {x}^{ * } \) in \( {B}_{M} \) .
Since for \( x \in {H}_{1},\parallel x{\parallel }_{\infty } \leq C\parallel x{\parallel }_{1} \), where \( C \) is a constant, for given \( \delta > 0\exists M \) such that \( {x}^{ * } \in {B}_{M} \) implies \( \left| {x}^{ * }\right| < \delta \) on \( \Gamma \) . By Lemma 18.4 it follows that the desired analytic disk exists. So Theorem 18.3 is proved. We now consider the general case of a smooth \( k \) -dimensional submanifold \( \mathop{\sum }\limits^{k} \) of \( {\mathbb{C}}^{n} \) with \( k > n \) . Assume \( 0 \in \mathop{\sum }\limits^{k} \) . Denote by \( P
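Lemma 18.5, which drives this proof, is the classical contraction mapping principle, and its mechanism is easy to see numerically. A minimal sketch (our illustration, not the text's): the map \( \Phi \) below is an arbitrary contraction on \( K = \mathbb{R} \) with \( \alpha = 1/2 \), standing in for the operator \( A = {A}_{w} \) on \( {B}_{M} \).

```python
import math

# Numerical illustration of Lemma 18.5 (contraction mapping principle).
# phi(x) = cos(x)/2 is a hypothetical contraction on K = R: |phi'| <= 1/2,
# so the Lemma applies with alpha = 1/2 and phi has a unique fix-point.

def phi(x):
    return 0.5 * math.cos(x)

def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Picard iteration x_{n+1} = f(x_n); converges for a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x_star = fixed_point(phi, 0.0)
print(abs(phi(x_star) - x_star))            # residual ≈ 0: x* is a fix-point
print(abs(fixed_point(phi, 3.0) - x_star))  # ≈ 0: same limit from another start
```

The proof of the Lemma is exactly this iteration: \( \rho \left( {{x}_{n + 1},{x}_{n}}\right) \leq {\alpha }^{n}\rho \left( {{x}_{1},{x}_{0}}\right) \), so the iterates form a Cauchy sequence in the complete space \( K \) and their limit is the fix-point.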
1329_[肖梁] Abstract Algebra (2022F)
Definition 1.6.1
Definition 1.6.1. Two groups \( \left( {G, * }\right) \) and \( \left( {H, \star }\right) \) are isomorphic if there exists a bijection \( \phi : G\overset{ \sim }{ \rightarrow }H \) such that, for any \( g, h \in G \) , (1) \( \phi \left( {g * h}\right) = \phi \left( g\right) \star \phi \left( h\right) \) ; (2) \( \phi \left( {e}_{G}\right) = {e}_{H} \) ; (3) \( \phi \left( {g}^{-1}\right) = \phi {\left( g\right) }^{-1} \) . We write \( G \simeq H \) ; such a map \( \phi \) is called an isomorphism. (In fact, we will see in the next lecture that condition (1) implies (2) and (3).) Example 1.6.2. (1) \( \exp : \left( {\mathbb{R}, + }\right) \rightarrow \left( {{\mathbb{R}}_{ > 0}, \cdot }\right) \) is an isomorphism. (2) The following is an isomorphism. \[ {\mathbf{Z}}_{n}\overset{ \cong }{ \rightarrow }{\mu }_{n} = \{ \text{ all }n\text{ th roots of unity in }\mathbb{C}\} \] \[ a \mapsto {\zeta }_{n}^{a} = {e}^{{2\pi i}\frac{a}{n}}. \] Remark 1.6.3. In group theory, isomorphic groups are considered "same". Basic question in group theory: classify groups with certain properties. For example, all groups of order 6 are either isomorphic to \( {\mathbf{Z}}_{6} \) or to \( {S}_{3} \) . In particular, this says that \( {D}_{6} \simeq {S}_{3} \) (by identifying the symmetry of a regular triangle with the symmetry of the three vertices). Yet \( {\mathbf{Z}}_{6} ≄ {S}_{3} \) because \( {S}_{3} \) is not commutative. 1.7. Important examples of groups III: Cyclic groups. Definition 1.7.1. A group \( H \) is called cyclic if it can be generated by one element, i.e. there exists \( x \in H \), such that \( H = \left\{ {{x}^{n} \mid n \in \mathbb{Z}}\right\} \) . Sometimes we write \( H = \langle x\rangle \) . The following is clear. Lemma 1.7.2. There are two kinds of cyclic groups \( H = \langle x\rangle \) (up to isomorphism): (1) If there exists a positive integer \( n \) such that \( {x}^{n} = e \), then take \( n \) to be the minimal such number. 
Then \( H = \left\{ {1, x,{x}^{2},\ldots ,{x}^{n - 1}}\right\} \) and \( H \) is isomorphic to \( {\mathbf{Z}}_{n} \) through \( \phi : H\overset{ \simeq }{ \rightarrow }{\mathbf{Z}}_{n} \) given by \( \phi \left( {x}^{a}\right) = a \) . In particular, \( \left| H\right| = n \) . (2) Suppose the positive integer \( n \) in (1) does not exist. Then \( H \simeq \mathbb{Z} \) and in particular, \( \left| H\right| = \infty \) . Example 1.7.3. The generators of the cyclic group \( {\mathbf{Z}}_{n} \) are precisely the elements in \( {\mathbf{Z}}_{n}^{ \times } = \{ \bar{a} \mid \gcd \left( {a, n}\right) = 1\} \) . 1.8. Subgroups. We hope to develop the theory of groups in parallel with that of vector spaces: <table><thead><tr><th>Vector spaces</th><th>Groups</th></tr></thead><tr><td>Direct sums</td><td>Direct products \( \sqrt{} \)</td></tr><tr><td>Linear isomorphisms</td><td>Isomorphisms \( \sqrt{} \)</td></tr><tr><td>Subspaces</td><td>Subgroups</td></tr></table> Definition 1.8.1. A subset \( H \) of a group \( G \) is called a subgroup, denoted by \( H < G \), if (1) \( e \in H \) ; (2) for any \( a, b \in H,{ab} \in H \) ; (3) for any \( a \in H,{a}^{-1} \in H \) . There is an alternative definition: a nonempty subset \( H \subseteq G \) is a subgroup if and only if \[ a, b \in H \Rightarrow a{b}^{-1} \in H\text{.} \] (Note that taking \( a = b \) implies \( e \in H \) ; taking \( a = e \) implies that \( {b}^{-1} \in H \) ; and then \( a{\left( {b}^{-1}\right) }^{-1} = {ab} \in H \) .) The subset \( \{ e\} \) and the entire group \( G \) are subgroups of \( G \) ; they are called the trivial subgroups of \( G \) . ## Extended readings after Section 1 1.1 Matrix groups. Let \( F \) be a field (or just think of \( F = \mathbb{Q},\mathbb{R},\mathbb{C} \) or a finite field \( {\mathbb{F}}_{p} \) with \( p \) a prime number).
We may define the general linear group with coefficients in \( F \) : \[ {\mathrm{{GL}}}_{n}\left( F\right) \mathrel{\text{:=}} \left\{ {A \in {\mathrm{M}}_{n \times n}\left( F\right) \mid A\text{ is an invertible matrix }}\right\} . \] The group structure is given by matrix multiplication. This group admits natural (interesting) subgroups: \[ {B}_{n}\left( F\right) \mathrel{\text{:=}} \left\{ {A \in {\mathrm{{GL}}}_{n}\left( F\right) \mid A\text{ is upper triangular }}\right\} , \] \[ {N}_{n}\left( F\right) \mathrel{\text{:=}} \left\{ {A \in {\mathrm{{GL}}}_{n}\left( F\right) \mid A\text{ is upper triangular with all diagonal entries equal to }1}\right\} . \] It is an interesting exercise to see that when \( F = {\mathbb{F}}_{p} \) , \[ \left| {{\mathrm{{GL}}}_{n}\left( {\mathbb{F}}_{p}\right) }\right| = \left( {{p}^{n} - 1}\right) \left( {{p}^{n} - p}\right) \left( {{p}^{n} - {p}^{2}}\right) \cdots \left( {{p}^{n} - {p}^{n - 1}}\right) ,\;\left| {{B}_{n}\left( {\mathbb{F}}_{p}\right) }\right| = {\left( p - 1\right) }^{n}{p}^{\left( {{n}^{2} - n}\right) /2}. \] (We will see later, by Lagrange's theorem, that the order of a subgroup always divides the order of the big group. One sees above that \( \left| {{B}_{n}\left( {\mathbb{F}}_{p}\right) }\right| \) divides \( \left| {{\mathrm{{GL}}}_{n}\left( {\mathbb{F}}_{p}\right) }\right| \), in a quite non-trivial way.) 1.2 The quaternion group. The quaternion group \( {Q}_{8} \) is the group given by \[ {Q}_{8} = \{ 1, - 1, i, - i, j, - j, k, - k\} , \] subject to the relations \[ i \cdot i = j \cdot j = k \cdot k = - 1,\; i \cdot j = k,\; j \cdot k = i,\; k \cdot i = j. \] The name comes from the quaternion algebra (or the Hamiltonian algebra) \( \mathbb{H} \mathrel{\text{:=}} \mathbb{R} \oplus \mathbb{R} \cdot i \oplus \mathbb{R} \cdot j \oplus \mathbb{R} \cdot k \), subject to the same relations. This \( \mathbb{H} \) is a non-commutative ring, in which every nonzero element admits an inverse. (Such rings are called division rings or skew fields.)
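The consistency of the \( {Q}_{8} \) relations can be checked mechanically in the standard \( 2 \times 2 \) complex matrix model of the quaternions. This is our own verification sketch (the text defines \( {Q}_{8} \) abstractly):

```python
import numpy as np

# Verify the defining relations of Q_8 via the standard faithful
# representation i ↦ diag(i, -i), j ↦ [[0, 1], [-1, 0]] inside GL_2(C).
I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj                                   # k := i·j

assert np.allclose(qi @ qi, -I2)               # i·i = -1
assert np.allclose(qj @ qj, -I2)               # j·j = -1
assert np.allclose(qk @ qk, -I2)               # k·k = -1
assert np.allclose(qj @ qk, qi)                # j·k = i
assert np.allclose(qk @ qi, qj)                # k·i = j

# The eight matrices ±1, ±i, ±j, ±k are closed under multiplication,
# so they form a group of order 8 isomorphic to Q_8.
elements = [sign * m for sign in (1, -1) for m in (I2, qi, qj, qk)]
closed = all(any(np.allclose(a @ b, e) for e in elements)
             for a in elements for b in elements)
print(len(elements), closed)   # 8 True
```

The matrix model also exhibits the embedding of \( {Q}_{8} \) into the unit quaternions mentioned below.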
The quaternion group embeds into \( {\mathbb{H}}^{ \times } \) as a subgroup. 1.3 Subgroups of cyclic groups. We show that every subgroup of a cyclic group is cyclic. More precisely, let \( G = \langle x\rangle \) be a cyclic group. Assume that \( n \) is the minimal positive integer such that \( {x}^{n} = e \) if such \( n \) exists, and \( n = 0 \) otherwise. Let \( H \) be a subgroup of \( G \) . Assume that \( H \neq \left\{ {e}_{G}\right\} \) (otherwise the statement is trivial). Let \( m \) be the minimal positive integer such that \( {x}^{m} \in H \) . We claim that \( H = \left\langle {x}^{m}\right\rangle \), namely, every element of \( H \) is of the form \( {x}^{mr} \) for some \( r \in \mathbb{Z} \) . Suppose not, say \( {x}^{a} \in H \) with \( a \in \mathbb{Z} \) not divisible by \( m \) . Write \( a = {mr} + s \) with \( r, s \in \mathbb{Z} \) and \( s \in \{ 1,\ldots, m - 1\} \) . Then \( {x}^{s} = {x}^{a}{\left( {x}^{m}\right) }^{-r} \in H \), contradicting the minimality of \( m \) . This completes the proof of: every subgroup of a cyclic group is cyclic. More precisely, (1) When \( G \simeq {\mathbf{Z}}_{n} \), any subgroup is of the form \( \left\langle {x}^{m}\right\rangle \) for the minimal such \( m \) . Since \( {x}^{n} = e \in H \), it is not hard to see that \( {x}^{\gcd \left( {m, n}\right) } \in H \) ; so by minimality such \( m \) must be a divisor of \( n \) . And for any divisor \( m \) of \( n \), \( \left\langle {x}^{m}\right\rangle \) is a subgroup of \( {\mathbf{Z}}_{n} \) of order \( \frac{n}{m} \) . (2) When \( G \simeq \mathbb{Z} \), any subgroup is of the form \( \left\langle {x}^{m}\right\rangle \) for some \( m \in \mathbb{N} \) . It is again infinite cyclic. We continue with the parallel development of vector spaces versus groups. 
<table><thead><tr><th>Vector spaces</th><th>Groups</th></tr></thead><tr><td>Direct sums</td><td>Direct products \( \sqrt{} \)</td></tr><tr><td>Linear isomorphisms</td><td>Isomorphisms \( \sqrt{} \)</td></tr><tr><td>Subspaces</td><td>Subgroups \( \sqrt{} \)</td></tr><tr><td>Affine subsets \( v + W \)</td><td>Cosets</td></tr><tr><td>Quotient spaces</td><td>Quotient groups</td></tr><tr><td>Linear maps</td><td>Homomorphisms</td></tr></table> ## 2.1. Representing subgroups. Definition 2.1.1. Let \( G \) be a group and \( A \) a subset. Write \( \langle A\rangle \) for the subgroup of \( G \) generated by \( A \) . Explicitly, \[ \langle A\rangle = \left\{ {{a}_{1}^{{\epsilon }_{1}}{a}_{2}^{{\epsilon }_{2}}\cdots {a}_{r}^{{\epsilon }_{r}} \mid {a}_{1},\ldots ,{a}_{r} \in A,{\epsilon }_{1},\ldots ,{\epsilon }_{r} \in \{ \pm 1\} }\right\} . \] It is also the same as the intersection of those subgroups \( H \) of \( G \) containing \( A \) . Remark 2.1.2. When \( G \) is abelian and \( A = \left\{ {{a}_{1},\ldots ,{a}_{r}}\right\} \), we have \[ \langle A\rangle = \left\{ {{a}_{1}^{{d}_{1}}\cdots {a}_{r}^{{d}_{r}} \mid {d}_{1},\ldots ,{d}_{r} \in \mathbb{Z}}\right\} . \] 2.1.3. Lattices of subgroups. Sometimes it is helpful to display the subgroups of a group in a diagram encoding their inclusion relations, drawing a line between two subgroups when one contains the other (with the smaller subgroup below). For example: ( \( p \) is a prime number) \( {\mathbf{Z}}_{{p}^{n}} \) ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_11_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_11_0.jpg) Definition 2.1.4. Let \( G \) be a group and \( x \in G \) be an element. 
Define the order of \( x \) in \( G \) , denoted by \( \left| x\right| \), to be - \( \infty \), if \( {x}^{n} \neq e \) for every \( n \in \mathbb{N} \) (in this case, \( \langle x\rangle \) is a subgroup of \( G \) isomorphic to \( \mathbb{Z} \) ); otherwise, - \( n \) for the least positive integer \( n \) such that \( {x}^{n} = e \) (in this case, \( \langle x\rangle < G \) is a subgroup isomorphic to \( {\mathbf{Z}}_{n} \) ). In all cases, \( \left| x\right| = \left| {\langle x\rangle }\right| \) . 2.2. Cosets. Cosets may be viewed as analogues of affine subsets in linear algebra. Definition 2.2.1. Let \( H \) be a subgroup of \( G \) . A left coset is a set of the form (for some \( g \in G \) ) \[ {gH} \mathrel{\text{:=}} \{ {gh} \mid h \in H\} . \]
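A quick numerical illustration (our own, in additive notation, where the coset \( {gH} \) becomes \( g + H \)): the cosets of \( H = \langle 3\rangle \) in \( {\mathbf{Z}}_{12} \) partition the group into \( {12}/4 = 3 \) classes.

```python
n = 12
H = {0, 3, 6, 9}                       # the subgroup <3> of Z_12
cosets = {frozenset((g + h) % n for h in H) for g in range(n)}

print(len(cosets))                             # 3 distinct cosets
print(set().union(*cosets) == set(range(n)))   # True: the cosets cover Z_12
```

Distinct cosets are automatically disjoint, which is why the set comprehension collapses the twelve translates \( g + H \) down to three.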
1079_(GTM237)An Introduction to Operators on the Hardy-Hilbert Space
Definition 4.4.10
Definition 4.4.10. The bounded operator \( A \) is hyponormal if \( \parallel {Af}\parallel \geq \begin{Vmatrix}{{A}^{ * }f}\end{Vmatrix} \) for every vector \( f \in \mathcal{H} \) . There are also very few hyponormal Hankel operators. Theorem 4.4.11. Every hyponormal Hankel operator is normal. Proof. By Theorem 4.4.2, it follows that \( \begin{Vmatrix}{{H}^{ * }{f}^{ * }}\end{Vmatrix} = \parallel {Hf}\parallel \) for every \( f \in {\widetilde{\mathbf{H}}}^{2} \) , where \( {f}^{ * } \) is the vector whose coefficients are the conjugates of those of \( f \) (Notation 4.4.1). Applying this to \( {f}^{ * } \) yields \( \begin{Vmatrix}{{H}^{ * }f}\end{Vmatrix} = \begin{Vmatrix}{H{f}^{ * }}\end{Vmatrix} \) . If \( H \) is hyponormal, then \( \parallel {Hf}\parallel \geq \begin{Vmatrix}{{H}^{ * }f}\end{Vmatrix} \) for every \( f \) . Therefore, \( \begin{Vmatrix}{H{f}^{ * }}\end{Vmatrix} \geq \) \( \begin{Vmatrix}{{H}^{ * }{f}^{ * }}\end{Vmatrix} \) . By the above equations, this yields \( \begin{Vmatrix}{{H}^{ * }f}\end{Vmatrix} \geq \parallel {Hf}\parallel \) . But \( \parallel {Hf}\parallel \geq \) \( \begin{Vmatrix}{{H}^{ * }f}\end{Vmatrix} \), since \( H \) is hyponormal. Hence \( \begin{Vmatrix}{{H}^{ * }f}\end{Vmatrix} = \parallel {Hf}\parallel \) for all \( f \), and \( H \) is normal. ## 4.5 Relations Between Hankel and Toeplitz Operators There are some interesting relations between the Hankel and Toeplitz operators with symbols \( \phi ,\psi \), and \( {\phi \psi } \) . One consequence of these formulas is a precise determination of when a Hankel and a Toeplitz operator commute with each other. Theorem 4.5.1. Let \( \phi \) and \( \psi \) be in \( {\mathbf{L}}^{\infty } \) . Then \[ {H}_{{e}^{i\theta }\breve{\phi }}{H}_{{e}^{i\theta }\psi } = {T}_{\phi \psi } - {T}_{\phi }{T}_{\psi } \] Proof. 
The flip operator, \( J \), and the projection onto \( {\widetilde{\mathbf{H}}}^{2}, P \), satisfy the following equation: \[ {JPJ} = {M}_{{e}^{i\theta }}\left( {I - P}\right) {M}_{{e}^{-{i\theta }}}. \] (This can easily be verified by applying each side to the basis vectors \( \left\{ {e}^{in\theta }\right\} \) .) Thus \[ {H}_{{e}^{i\theta }\breve{\phi }}{H}_{{e}^{i\theta }\psi } = \left( {{PJ}{M}_{{e}^{i\theta }\breve{\phi }}}\right) \left( {{PJ}{M}_{{e}^{i\theta }\psi }}\right) \] \[ = P\left( {{M}_{{e}^{-{i\theta }}\phi }J}\right) \left( {{PJ}{M}_{{e}^{i\theta }\psi }}\right) \;\text{ since }J{M}_{{e}^{i\theta }\breve{\phi }} = {M}_{{e}^{-{i\theta }}\phi }J \] \[ = P{M}_{\phi }{M}_{{e}^{-{i\theta }}}\left( {JPJ}\right) {M}_{{e}^{i\theta }}{M}_{\psi } \] \[ = P{M}_{\phi }{M}_{{e}^{-{i\theta }}}\left( {{M}_{{e}^{i\theta }}\left( {I - P}\right) {M}_{{e}^{-{i\theta }}}}\right) {M}_{{e}^{i\theta }}{M}_{\psi } \] \[ = P{M}_{\phi }\left( {I - P}\right) {M}_{\psi } \] \[ = \left( {P{M}_{\phi }{M}_{\psi }}\right) - \left( {P{M}_{\phi }}\right) \left( {P{M}_{\psi }}\right) \] \[ = {T}_{\phi \psi } - {T}_{\phi }{T}_{\psi }. \] It is easy to rephrase the above theorem to express the product of any two Hankel operators in terms of Toeplitz operators. Corollary 4.5.2. If \( \phi \) and \( \psi \) are in \( {\mathbf{L}}^{\infty } \), then \[ {H}_{\phi }{H}_{\psi } = {T}_{\breve{\phi }\psi } - {T}_{{e}^{i\theta }\breve{\phi }}{T}_{{e}^{-{i\theta }}\psi }. \] Proof. By the previous theorem, \[ {H}_{{e}^{i\theta }\breve{\alpha }}{H}_{{e}^{i\theta }\beta } = {T}_{\alpha \beta } - {T}_{\alpha }{T}_{\beta } \] for \( \alpha \) and \( \beta \) in \( {\mathbf{L}}^{\infty } \) . Let \( \alpha = {e}^{i\theta }\breve{\phi } \) and \( \beta = {e}^{-{i\theta }}\psi \) . Making this substitution in the equation above gives the result. One consequence of this corollary is another proof of the following (cf. Corollary 4.4.6). Corollary 4.5.3. 
If the product of two Hankel operators is Toeplitz, then at least one of the Hankel operators is 0 . Proof. If \( {H}_{\phi }{H}_{\psi } \) is a Toeplitz operator, then since the sum of two Toeplitz operators is Toeplitz, it follows from the previous corollary that \( {T}_{{e}^{i\theta }\breve{\phi }}{T}_{{e}^{-{i\theta }}\psi } \) is a Toeplitz operator. Thus either \( {e}^{-{i\theta }}\psi \) is analytic or \( {e}^{i\theta }\breve{\phi } \) is coanalytic (Theorem 3.2.11), so \( {H}_{\psi } = 0 \) or \( {H}_{\phi } = 0 \) . It should also be noted that Theorem 4.5.1 shows that the following facts, which we previously obtained independently, are equivalent to each other: - If the product of two Hankel operators is zero, then one of them is zero (Corollary 4.4.7). - If \( {T}_{\phi \psi } = {T}_{\phi }{T}_{\psi } \), then either \( \phi \) is coanalytic or \( \psi \) is analytic (Theorem 3.2.11). Another important equation relating Hankel and Toeplitz operators is the following. Theorem 4.5.4. Let \( \phi \) and \( \psi \) be in \( {\mathbf{L}}^{\infty } \) . Then \[ {T}_{\breve{\phi }}{H}_{{e}^{i\theta }\psi } + {H}_{{e}^{i\theta }\phi }{T}_{\psi } = {H}_{{e}^{i\theta }{\phi \psi }}. \] Proof. This follows from a computation similar to that in the proof of Theorem 4.5.1. 
Using \( J{M}_{\phi }J = {M}_{\breve{\phi }} \) and \( {JPJ} = {M}_{{e}^{i\theta }}\left( {I - P}\right) {M}_{{e}^{-{i\theta }}} \), we get \[ {T}_{\breve{\phi }}{H}_{{e}^{i\theta }\psi } = \left( {P{M}_{\breve{\phi }}}\right) \left( {{PJ}{M}_{{e}^{i\theta }\psi }}\right) \] \[ = P\left( {J{M}_{\phi }J}\right) \left( {{PJ}{M}_{{e}^{i\theta }}{M}_{\psi }}\right) \] \[ = {PJ}{M}_{\phi }\left( {JPJ}\right) {M}_{{e}^{i\theta }}{M}_{\psi } \] \[ = {PJ}{M}_{\phi }\left( {{M}_{{e}^{i\theta }}\left( {I - P}\right) {M}_{{e}^{-{i\theta }}}}\right) {M}_{{e}^{i\theta }}{M}_{\psi } \] \[ = {PJ}{M}_{{e}^{i\theta }\phi }\left( {I - P}\right) {M}_{\psi } \] \[ = \left( {{PJ}{M}_{{e}^{i\theta }\phi }{M}_{\psi }}\right) - \left( {{PJ}{M}_{{e}^{i\theta }\phi }}\right) \left( {P{M}_{\psi }}\right) \] \[ = {H}_{{e}^{i\theta }{\phi \psi }} - {H}_{{e}^{i\theta }\phi }{T}_{\psi } \] Under certain circumstances, the product of a Hankel operator and a Toeplitz operator is a Hankel operator. Corollary 4.5.5. (i) If \( \psi \) is in \( {\widetilde{\mathbf{H}}}^{\infty } \), then \( {H}_{\phi }{T}_{\psi } = {H}_{\phi \psi } \) . (ii) If \( \psi \) is in \( {\widetilde{\mathbf{H}}}^{\infty } \), then \( {T}_{\breve{\psi }}{H}_{\phi } = {H}_{\psi \phi } \) . Proof. Recall from the previous theorem that, for \( \alpha \) and \( \beta \) in \( {\mathbf{L}}^{\infty } \) , \[ {T}_{\breve{\alpha }}{H}_{{e}^{i\theta }\beta } + {H}_{{e}^{i\theta }\alpha }{T}_{\beta } = {H}_{{e}^{i\theta }{\alpha \beta }}. \] Taking \( \alpha = {e}^{-{i\theta }}\phi \) and \( \beta = \psi \) gives \( \left( i\right) \), since \( {H}_{{e}^{i\theta }\beta } = 0 \) . Taking \( \alpha = \psi \) and \( \beta = {e}^{-{i\theta }}\phi \) we obtain (ii), since \( {H}_{{e}^{i\theta }\alpha } = 0 \) . We have seen that Toeplitz operators rarely commute with each other (Theorem 3.2.13) and that Hankel operators rarely commute with each other (Theorem 4.4.8). 
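As a numerical aside, the hyponormality condition of Definition 4.4.10 can be tested on finite matrices: \( \parallel {Af}\parallel \geq \begin{Vmatrix}{{A}^{ * }f}\end{Vmatrix} \) for every \( f \) is equivalent to the self-commutator \( {A}^{ * }A - A{A}^{ * } \) being positive semidefinite, since \( \parallel {Af}{\parallel }^{2} - \begin{Vmatrix}{{A}^{ * }f}\end{Vmatrix}^{2} = \left\langle {\left( {{A}^{ * }A - A{A}^{ * }}\right) f, f}\right\rangle \) . A finite-dimensional sketch (our own illustration; the operators of this chapter are of course infinite matrices):

```python
import numpy as np

def is_hyponormal(A, tol=1e-10):
    """||Af|| >= ||A*f|| for all f  iff  A*A - AA* is positive semidefinite."""
    C = A.conj().T @ A - A @ A.conj().T        # the self-commutator
    return float(np.min(np.linalg.eigvalsh(C))) >= -tol

U = np.array([[0, 1], [1, 0]], dtype=complex)  # unitary, hence normal
J = np.array([[0, 1], [0, 0]], dtype=complex)  # nilpotent Jordan block
print(is_hyponormal(U), is_hyponormal(J))      # True False
```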
We now consider the question of determining when a Hankel operator commutes with a Toeplitz operator. This is also quite rare. Theorem 4.5.6. Suppose that neither the Toeplitz operator \( {T}_{\phi } \) nor the Hankel operator \( H \) is a multiple of the identity. Then \( H{T}_{\phi } = {T}_{\phi }H \) if and only if \( H \) is a multiple of \( {H}_{{e}^{i\theta }\phi } \) and both of the functions \( \phi + \breve{\phi } \) and \( \phi \breve{\phi } \) are constant functions. Proof. First suppose that \( \phi + \breve{\phi } = c \) and \( \phi \breve{\phi } = d \) for complex numbers \( c \) and \( d \) . Theorem 4.5.4 states that \[ {T}_{\phi }{H}_{{e}^{i\theta }\phi } + {H}_{{e}^{i\theta }\breve{\phi }}{T}_{\phi } = {H}_{{e}^{i\theta }\phi \breve{\phi }}. \] Since \( \phi \breve{\phi } = d \), it follows that \[ {T}_{\phi }{H}_{{e}^{i\theta }\phi } + {H}_{{e}^{i\theta }\breve{\phi }}{T}_{\phi } = {H}_{{e}^{i\theta }\phi \breve{\phi }} = {H}_{d{e}^{i\theta }}. \] Since \( d{e}^{i\theta } \) is in \( {e}^{i\theta }{\widetilde{\mathbf{H}}}^{2} \), it follows that \( {H}_{d{e}^{i\theta }} = 0 \), and thus that \[ {T}_{\phi }{H}_{{e}^{i\theta }\phi } + {H}_{{e}^{i\theta }\breve{\phi }}{T}_{\phi } = 0. \] Also, since \( \phi + \breve{\phi } = c \), we have \( {H}_{{e}^{i\theta }\breve{\phi }} = {H}_{c{e}^{i\theta } - {e}^{i\theta }\phi } \), and therefore \[ {T}_{\phi }{H}_{{e}^{i\theta }\phi } + {H}_{c{e}^{i\theta }}{T}_{\phi } - {H}_{{e}^{i\theta }\phi }{T}_{\phi } = 0. \] Since \( c{e}^{i\theta } \) is in \( {e}^{i\theta }{\widetilde{\mathbf{H}}}^{2},{H}_{c{e}^{i\theta }} = 0 \) . Therefore \[ {T}_{\phi }{H}_{{e}^{i\theta }\phi } = {H}_{{e}^{i\theta }\phi }{T}_{\phi }. \] It follows that if \( H \) is a multiple of \( {H}_{{e}^{i\theta }\phi } \), then \( H \) commutes with \( {T}_{\phi } \) . To prove the converse, suppose that \( H{T}_{\phi } = {T}_{\phi }H \) . 
Multiplying \( {T}_{\phi }H \) on the right by \( U \), using the fact that \( {U}^{ * }H = {HU} \), noticing that \( {H}_{{e}_{0}} = {PJ}{M}_{{e}_{0}} = \) \( {PJ} = {e}_{0} \otimes {e}_{0} \), and using Theorem 4.5.1, we get \[ {T}_{\phi }{HU} = {T}_{\phi }{U}^{ * }H \] \[ = {T}_{\phi }{T}_{{e}^{-{i\theta }}}H \] \[ = \left( {{T}_{\phi {e}^{-{i\theta }}} - {H}_{{e}^{i\theta }\breve{\phi }}{H}_{{e}^{i\theta }{e}^{-{i\theta }}}}\right) H \] \[ = \left( {{T}_{{e}^{-{i\theta }}}{T}_{\phi } - {H}_{{e}^{i\theta }\check{\phi }}{H}_{{e}_{0}}}\right) H \] \[ = {U}^{ * }{T}_{\phi }H - {H}_{{e}^{i\theta }\breve{\phi }}\left( {{e}_{0} \otimes {e}_{0}}\right) H \] \[ = {U}^{ * }{T}_{\phi }H - \left( {{H}_{{e}^{i\theta }\breve{\phi }}{e}_{0}}\right) \otimes \left( {{H}^{ * }{e}_{0}}\right) . \] Performing similar computations beginning with \( H{T}_{\phi } \) yields \[ H{T}_{\phi }U = H{T}_{\phi }{T}_{{e}^{i\theta }} \] \[ = H{T}_{{e}^{i\theta }\phi } \] \[ = H\left( {{T}_{{e}^{i\theta }}{T}_{\phi } + {H}_{{e}^{i\theta
106_106_The Cantor function
Definition 2.2
Definition 2.2. An instantaneous description of a Turing machine \( M \) with alphabet \( \mathfrak{S} \) and set \( \mathfrak{Q} \) of internal states is a finite string \( {s}_{{\alpha }_{1}}{s}_{{\alpha }_{2}}\cdots {s}_{{\alpha }_{r}}q{s}_{{\beta }_{1}}{s}_{{\beta }_{2}}\cdots {s}_{{\beta }_{t}} \) , where \( {s}_{{\alpha }_{i}},{s}_{{\beta }_{j}} \in \mathfrak{S} \) and \( q \in \mathfrak{Q} \) . The strings \( {s}_{{\alpha }_{1}}\cdots {s}_{{\alpha }_{r}} \) and \( {s}_{{\beta }_{1}}\cdots {s}_{{\beta }_{t}} \) are often denoted by single symbols such as \( \sigma ,\tau \) . An instantaneous description \( d = {s}_{{\alpha }_{1}}{s}_{{\alpha }_{2}}\cdots {s}_{{\alpha }_{r}}q{s}_{{\beta }_{1}}{s}_{{\beta }_{2}}\cdots {s}_{{\beta }_{t}} \) is then written simply as \( d = {\sigma q\tau } \) . Each of \( \sigma ,\tau \) may be the empty string. Since we are interested in the state of \( M \), rather than in descriptions of the state of \( M \), we need to know when two descriptions determine the same state. The previous discussion shows that the only freedom in the definition of description is in the choice of \( a \) and \( b \) . Thus two descriptions \( d = {\sigma q\tau } \) and \( {d}^{\prime } = {\sigma }^{\prime }{q}^{\prime }{\tau }^{\prime } \) describe the same state if and only if \( q = {q}^{\prime },{\sigma }^{\prime } \) is obtainable from \( \sigma \) by adding or deleting a number of symbols \( {s}_{0} \) on the left, and \( {\tau }^{\prime } \) is obtainable from \( \tau \) by adding or deleting a number of symbols \( {s}_{0} \) on the right. Descriptions related in this way are called equivalent, and the equivalence class containing the description \( d \) is denoted by \( \left\lbrack d\right\rbrack \) and called the state described by \( d \) . 
For each state \( \left\lbrack d\right\rbrack \), there is a unique description \( d = {\sigma q\tau } \) such that the first symbol (if any) of \( \sigma \) and the last symbol (if any) of \( \tau \) are distinct from \( {s}_{0} \) . This description is called the shortest description of \( \left\lbrack d\right\rbrack \) . Definition 2.3. The Turing machine \( M \) takes the state \( \left\lbrack d\right\rbrack \) into the state \( \left\lbrack {d}^{\prime }\right\rbrack \), written \( \left\lbrack d\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}^{\prime }\right\rbrack \), if for some representatives \( d = {\sigma q\tau } \) and \( {d}^{\prime } = {\sigma }^{\prime }{q}^{\prime }{\tau }^{\prime } \) , where \( \tau = {s}_{\alpha }{\tau }_{1} \), either (i) \( \left( {q,{s}_{\alpha },{s}_{{\alpha }^{\prime }},{q}^{\prime }}\right) \in M \) and \( {\sigma }^{\prime } = \sigma ,{\tau }^{\prime } = {s}_{{\alpha }^{\prime }}{\tau }_{1} \), or (ii) \( \left( {q,{s}_{\alpha }, R,{q}^{\prime }}\right) \in M \) and \( {\sigma }^{\prime } = \sigma {s}_{\alpha },{\tau }^{\prime } = {\tau }_{1} \), or (iii) \( \left( {q,{s}_{\alpha }, L,{q}^{\prime }}\right) \in M \) and \( \sigma = {\sigma }^{\prime }{s}_{\beta },{\tau }^{\prime } = {s}_{\beta }\tau \) for some \( {s}_{\beta } \in \mathfrak{S} \) . Exercise 2.4. Prove that there is at most one state \( \left\lbrack {d}^{\prime }\right\rbrack \) such that \( \left\lbrack d\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}^{\prime }\right\rbrack \) . When \( \left\lbrack {d}^{\prime }\right\rbrack \) exists, show that to each \( d \in \left\lbrack d\right\rbrack \), there corresponds a \( {d}^{\prime } \in \left\lbrack {d}^{\prime }\right\rbrack \) so that \( d \) and \( {d}^{\prime } \) are related as in (i), (ii) or (iii) of the definition (appropriately modified if \( \sigma \) or \( \tau \) is empty). Definition 2.5. A state \( \left\lbrack {\sigma q\tau }\right\rbrack \) is called initial if \( q = {q}_{0} \) . 
A state \( \left\lbrack {{\sigma q}{s}_{\alpha }{\tau }_{1}}\right\rbrack \) is called terminal if there is no quadruple \( \left( {q,{s}_{\alpha }, c, d}\right) \) in \( M \) . Exercise 2.6. Show that \( \left\lbrack d\right\rbrack \) is terminal if and only if there does not exist a state \( \left\lbrack {d}^{\prime }\right\rbrack \) such that \( \left\lbrack d\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}^{\prime }\right\rbrack \) . Definition 2.7. A computation by the machine \( M \) is a finite sequence \( \left\lbrack {d}_{0}\right\rbrack ,\left\lbrack {d}_{1}\right\rbrack ,\ldots ,\left\lbrack {d}_{p}\right\rbrack \) of states such that \( \left\lbrack {d}_{0}\right\rbrack \) is initial, \( \left\lbrack {d}_{p}\right\rbrack \) is terminal and \( \left\lbrack {d}_{i}\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}_{i + 1}\right\rbrack \) for \( i = 0,1,\ldots, p - 1 \) . Computations are by definition finite. Given \( M \) and \( \left\lbrack d\right\rbrack \), there is no guarantee that \( M \), started in state \( \left\lbrack d\right\rbrack \) and allowed to operate, will ever stop (i.e., will execute a computation). Definition 2.8. We say that \( M \) fails for the input \( \left\lbrack {d}_{0}\right\rbrack \) if there is no computation by \( M \) beginning with the state \( \left\lbrack {d}_{0}\right\rbrack \) . For each non-terminal state \( \left\lbrack {d}_{i}\right\rbrack \), there is a unique \( \left\lbrack {d}_{i + 1}\right\rbrack \) such that \( \left\lbrack {d}_{i}\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}_{i + 1}\right\rbrack \) . Hence failure of \( M \) for the input \( \left\lbrack {d}_{0}\right\rbrack \) means that the sequence of states taken by \( M \) and beginning with \( \left\lbrack {d}_{0}\right\rbrack \) is infinite, i.e., the machine never stops. Henceforth, the state \( \left\lbrack d\right\rbrack \) will be denoted simply by some description \( d \) . 
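The step relation of Definition 2.3, terminal states, and failure (Definition 2.8) can be sketched as a tiny simulator; the machine below is our own toy example, with the blank symbol \( {s}_{0} \) written '0':

```python
def run(quads, tape, q, head=0, max_steps=1000):
    """Simulate a quadruple Turing machine.  quads maps (state, symbol)
    to (action, new_state); action is a symbol to print, or 'L'/'R'
    (reserved for moves in this toy encoding).  Returns the terminal
    configuration (q, head, tape), or None if no computation occurs
    within max_steps (the machine apparently fails)."""
    tape = dict(tape)                   # absent squares hold the blank '0'
    for _ in range(max_steps):
        s = tape.get(head, '0')
        if (q, s) not in quads:         # terminal state: no quadruple applies
            return q, head, tape
        act, q = quads[(q, s)]
        if act == 'R':
            head += 1
        elif act == 'L':
            head -= 1
        else:
            tape[head] = act            # print a symbol on the scanned square
    return None

# Toy machine: scan right over a block of 1s, then print one more 1
# (unary successor), halting in state q1.
quads = {('q0', '1'): ('R', 'q0'), ('q0', '0'): ('1', 'q1')}
q, head, tape = run(quads, {0: '1', 1: '1'}, 'q0')
print(q, sorted(k for k, v in tape.items() if v == '1'))   # q1 [0, 1, 2]
```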
The context will make clear the sense in which symbols such as \( d,{d}_{i} \) are being used. ## Exercises 2.9. A stereo-Turing machine \( M \) has its tape divided into two parallel tracks. The symbols on a pair of squares (one above the other) are read simultaneously. Show that there is a (mono-)Turing machine \( {M}^{\prime } \) which will perform essentially the same computations as \( M \) . 2.10. The operator of the Turing machine \( M \) has been asked to record the output of \( M \) (i.e., the symbols printed on the tape) at the end of each computation by \( M \) . Does the operator have any problems? Show that a machine \( {M}^{\prime } \) can be designed so as to perform essentially the same computations as \( M \), and which in addition will place marker symbols (not in the alphabet of \( M \) ) either at the furthest out points of the tape used in each computation, or alternatively at the nearest points such that the stopping position of \( {M}^{\prime } \), and all non-blank symbols, lie between them. 2.11. A dual-Turing machine \( M \) with alphabet \( \mathfrak{S} \) has two tapes which can move independently. Show that there is a Turing machine with alphabet \( \mathfrak{S} \times \mathfrak{S} \) which will, when given an initial state corresponding to the pair of initial states of a computation by \( M \), perform a computation whose terminal state corresponds to the pair of terminal states of \( M \) . 2.12. \( {M}_{1} \) and \( {M}_{2} \) are Turing machines with the same alphabet \( \mathfrak{S} \) . A computation by \( {M}_{1} \) and \( {M}_{2} \) consists of a computation by each of \( {M}_{1} \) and \( {M}_{2} \) such that, if \( \sigma {q}_{i}\tau \) is the output of \( {M}_{1} \), then \( \sigma {q}_{0}\tau \) is the input for \( {M}_{2} \) . 
Show that there is a Turing machine \( M \), whose alphabet contains \( \mathfrak{S} \), such that if \( M \) is started in an initial state of a computation by \( {M}_{1} \) and \( {M}_{2} \) with terminal state \( \sigma {q}_{j}\tau \), then \( M \) executes a computation with terminal state \( \sigma {q}_{k}\tau \) for some \( {q}_{k} \), while \( M \) fails if started in any other initial state. 2.13. \( {M}_{1},\ldots ,{M}_{n} \) are Turing machines with the same alphabet. An algorithm requires that at each step, exactly one of \( {M}_{1},\ldots ,{M}_{n} \) be applied to the result of the previous step. The Turing machine \( M \), applied to the output of any step, determines which of \( {M}_{1},\ldots ,{M}_{n} \) is to be applied for the next step. Show that there is a single Turing machine which can execute the algorithm and give the same ultimate output. 2.14. Most digital computers can read and write on magnetic tape. The tapes are finite, but the operator can replace them if they run out. Show that such computers can be regarded as Turing machines. In fact, the most sophisticated computers can be regarded as Turing machines. (This is not a mathematical exercise. The reader is asked to review his experience of computers and to see that the definitions given so far are broad enough to embrace the computational features of the computers he has used.) ## §3 Recursive Functions Let \( M \) be a Turing machine with alphabet \( \mathfrak{S} \) . We show how to use \( M \) to associate with each pair \( \left( {k,\ell }\right) \) of natural numbers a subset \( {U}_{M}^{\left( k,\ell \right) } \) of \( {\mathbf{N}}^{k} \) and a function \( {\Psi }_{M}^{\left( k,\ell \right) } : {U}_{M}^{\left( k,\ell \right) } \rightarrow {\mathbf{N}}^{\ell } \) . 
For \( \left( {{n}_{1},\ldots ,{n}_{k}}\right) \in {\mathbf{N}}^{k} \), put \[ \operatorname{code}\left( {{n}_{1},\ldots ,{n}_{k}}\right) = {s}_{1}^{{n}_{1}}{s}_{0}{s}_{1}^{{n}_{2}}{s}_{0}\cdots {s}_{1}^{{n}_{k - 1}}{s}_{0}{s}_{1}^{{n}_{k}}, \] where the notation \( {s}^{n} \) denotes a string of \( n \) consecutive symbols \( s \) . There may or may not be a computation by \( M \) whose initial state is the state \( {d}_{0} = \) \( {q}_{0} \) code \( \left( {{n}_{1},\ldots ,{n}_{k}}\right) \) . If there is, let \( {d}_{t} = {\sigma q\tau } \) be its (uniquely determined) terminal state. Choose a description \( {d}_{t} \) of this terminal state which has at
1269_[姜明] Image Reconstruction, Processing and Analysis
Definition 5.1.1
Definition 5.1.1. Let \( {x}_{k} \in {\mathbf{R}}^{n}, k = 0,1,2,\cdots \) . Then the sequence \( \left\{ {x}_{k}\right\} \) is said to converge to a point \( {x}_{ * } \in {\mathbf{R}}^{n} \) if for every \( i \), the \( i \) -th component \( {\left( {x}_{k} - {x}_{ * }\right) }_{i} \) satisfies \[ \mathop{\lim }\limits_{{k \rightarrow \infty }}{\left( {x}_{k} - {x}_{ * }\right) }_{i} = 0. \] (5.1) If for some vector norm \( \parallel \cdot \parallel \), there exist \( K \geq 0 \) and \( \alpha \in \lbrack 0,1) \) such that for all \( k \geq K \) , \[ \begin{Vmatrix}{{x}_{k + 1} - {x}_{ * }}\end{Vmatrix} \leq \alpha \begin{Vmatrix}{{x}_{k} - {x}_{ * }}\end{Vmatrix}, \] (5.2) then \( \left\{ {x}_{k}\right\} \) is said to converge q-linearly to \( {x}_{ * } \) in the norm \( \parallel \cdot \parallel \) . If for some sequence of scalars \( {\alpha }_{k} \) that converge to 0, \[ \begin{Vmatrix}{{x}_{k + 1} - {x}_{ * }}\end{Vmatrix} \leq {\alpha }_{k}\begin{Vmatrix}{{x}_{k} - {x}_{ * }}\end{Vmatrix}, \] (5.3) then \( \left\{ {x}_{k}\right\} \) is said to converge q-superlinearly to \( {x}_{ * } \) . If \( \left\{ {x}_{k}\right\} \) converges to \( {x}_{ * } \) and there exist \( K \geq 0, p > 0 \), and \( \alpha > 0 \) such that for all \( k \geq K \) , \[ \begin{Vmatrix}{{x}_{k + 1} - {x}_{ * }}\end{Vmatrix} \leq \alpha {\begin{Vmatrix}{{x}_{k} - {x}_{ * }}\end{Vmatrix}}^{p}, \] (5.4) then \( \left\{ {x}_{k}\right\} \) is said to converge to \( {x}_{ * } \) with \( q \) -order at least \( p \) . If \( p = 2 \), then the convergence is called q-quadratic. Note that q-order 1 is not the same as q-linear convergence, and that a sequence may be q-linearly convergent in one norm but not in another, but superlinear and q-order \( p > 1 \) convergence are independent of the choice of norm on \( {\mathbf{R}}^{n} \) . 
Most methods for continuously differentiable optimization problems are locally q-superlinearly or locally q-quadratically convergent, meaning that they converge to the solution \( {x}_{ * } \) with a superlinear or quadratic rate if they are started sufficiently close to \( {x}_{ * } \) . In practice, local quadratic convergence is quite fast, as it implies that the number of significant digits in \( {x}_{k} \) as an approximation to \( {x}_{ * } \) roughly doubles at each iteration once \( {x}_{k} \) is near \( {x}_{ * } \) . Locally superlinearly convergent methods that occur in optimization are also often fast in practice. Linear convergence, however, can be quite slow, especially if the constant \( \alpha \) depends upon the problem, as it usually does. Thus linearly convergent methods are avoided wherever possible in optimization unless it is known that \( \alpha \) is acceptably small. It is easy to define rates of convergence higher than quadratic, e.g., cubic, but they play virtually no role in practical algorithms for multi-variable optimization problems. The prefix q preceding the words linear, superlinear, or quadratic stands for ’quotient’ rates of convergence. This notation, commonly used in the optimization literature, is used to contrast with r (’root’) rates of convergence, which are a weaker form of convergence. A sequence is r-linearly convergent, for example, if the errors \( \begin{Vmatrix}{{x}_{k} - {x}_{ * }}\end{Vmatrix} \) are bounded by a sequence of scalars \( \left\{ {b}_{k}\right\} \) which converges q-linearly to 0 . The bisection algorithm for solving scalar nonlinear equations is a perfect example of an r-linear method, since the midpoints of the intervals containing a root can be taken as the iterates, and half the lengths of the intervals form a sequence of error bounds that converges q-linearly to 0 . Similar definitions apply to other r-rates of convergence; for further detail see [Ortega and Rheinboldt, 1970]. 
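The contrast between these rates is easy to see numerically. A sketch: Newton's method on \( f\left( x\right) = {x}^{2} - 2 \) is locally q-quadratically convergent, and here the constant in Eq. (5.4) is \( \alpha = 1/\left( {2\sqrt{2}}\right) < 1 \), so each error is bounded by the square of the previous one:

```python
import math

def newton_errors(x, steps=4):
    """Errors |x_k - sqrt(2)| for Newton's method on f(x) = x^2 - 2."""
    errs = []
    for _ in range(steps):
        x = x - (x * x - 2.0) / (2.0 * x)       # Newton step
        errs.append(abs(x - math.sqrt(2.0)))
    return errs

errs = newton_errors(2.0)
# q-quadratic: the number of correct digits roughly doubles each iteration.
for e0, e1 in zip(errs, errs[1:]):
    print(f"{e1:.3e} <= {e0:.3e}^2 = {e0*e0:.3e}: {e1 <= e0*e0}")
```

By contrast, bisection halves its error bound once per step, gaining only a fixed fraction of a digit per iteration.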
Throughout these lecture notes, if no prefix precedes the words linear, superlinear, or quadratic, q-order convergence is assumed. ## Chapter 6 ## Iterative Image Reconstruction Methods In this chapter, we present several iterative reconstruction algorithms. ## 6.1 Iterative Algorithm ## 6.1.1 Introduction Although the FBP method has been the method of choice of CT manufacturers, efforts are being made to revisit iterative methods [Censor, 1983; Censor and Herman, 1987; Hanke and Hansen; Censor and Zenios, 1997; Bertero and Boccacci, 1998; Wang et al., 1999; Saad and van der Vorst, 2000]. Relative to closed-form solutions such as the filtered back-projection algorithm, the iterative approach has a major potential to achieve superior performance in handling incomplete, noisy, and dynamic data. Although the iterative approach is generally slow, computing technology is coming to the point that commercial implementation of iterative methods becomes practical for important radiological applications. Many imaging systems can be modeled by the equation \[ {Ax} = b \] (6.1) where the observed data is \( b = {\left( {b}^{1}\cdots {b}^{M}\right) }^{\mathrm{T}} \in {\mathbf{K}}^{M} \) and the original image is \( x = {\left( {x}_{1}\cdots {x}_{N}\right) }^{\mathrm{T}} \in \) \( {\mathbf{K}}^{N} \) . \( \mathbf{K} \) can be the real number field \( \mathbf{R} \) or the complex number field \( \mathbf{C} \) . \( A = \left( {A}_{i, j}\right) \) is a nonzero \( M \times N \) matrix. For example, computerized tomography (CT) and magnetic resonance imaging (MRI), two important modalities in medical imaging, can be modeled by Eq. (6.1) [Herman, 1980; Natterer, 2001; Haacke et al., 1999; Liang and Lauterbur, 2000]. Many iterative methods have been proposed for the numerical solution of discrete models in tomography. However, only the EM (expectation-maximization) and ART-like (algebraic reconstruction technique) methods have found widespread use. 
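As a preview of the ART family, here is a minimal cyclic Kaczmarz iteration for Eq. (6.1); each step projects the current iterate onto the hyperplane determined by one row of the system (a sketch on a toy problem, not the exact variants treated later):

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps=50):
    """Cyclic Kaczmarz (ART) iteration for Ax = b: each step projects the
    current iterate onto the hyperplane <a_i, x> = b_i."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b, np.zeros(2))
print(np.round(x, 6))     # the consistent system has solution (1, 1)
```

For consistent systems the iterates converge to a solution; the speed depends on the angles between the row hyperplanes.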
Thus, we concentrate on these two methods. ## 6.2 EM algorithm ## 6.2.1 Introduction The expectation-maximization method of Dempster, Laird and Rubin [Dempster et al., 1977] is a general approach for formulating recursive algorithms that can be used to determine the ML (maximum likelihood) or MAP (maximum a posteriori) estimate of a parameter \( \theta \) in terms of some observed data \( y \), see \( §{3.1} \), and Eqs. (3.4) and (3.5). Note we have not used the notation \( x \) for the observed data; \( x \) is used for another important quantity introduced later. In statistical terms, \( y \) is a sample from a random variable \( Y \) with a distribution or density function depending on a parameter \( \theta \) . \( \theta \) is a parameter vector residing in a subset \( \Theta \) of the \( p \) -dimensional space \( {\mathbf{R}}^{p} \) . For imaging problems, \( \theta \) is composed of the image to be estimated and other parameters of interest, and \( y \) is composed of the data values produced by the imaging system. Let \( \mathbf{p}\left( {y \mid \theta }\right) \) be the (conditional) probability density function of the observed data (given the parameter \( \theta \) ). Let \[ L\left( {y \mid \theta }\right) = \log \mathbf{p}\left( {y \mid \theta }\right) \] (6.2) be the log-likelihood given the observed data \( y \) . The ML estimate of the parameter \( \theta \) is then equal to, see Eq. (3.4), \[ \widehat{\theta } = \underset{\theta \in \Theta }{\arg \max }L\left( {y \mid \theta }\right) \] (6.3) The MAP estimate is equal to, see Eq. (3.5), \[ \widehat{\theta } = \underset{\theta \in \Theta }{\arg \max }L\left( {y \mid \theta }\right) - P\left( \theta \right) \] (6.4) where \( L\left( {y \mid \theta }\right) \) is the log-likelihood function Eq. (6.2) and \( P\left( \theta \right) = - \log \mathbf{p}\left( \theta \right) \) . Let \[ \Phi \left( \theta \right) = L\left( {y \mid \theta }\right) - P\left( \theta \right) . 
\] (6.5) \( \Phi \left( \theta \right) \) is sometimes called the penalized likelihood. In the following, we discuss the EM approach for the ML and MAP estimates and then some of its variants and recent developments. ## 6.2.2 General EM Algorithm Without Prior The EM method begins by selecting some hypothetical data, \( x \), called the hidden data or complete data, itself a sample from some random variable \( X \) depending on \( \theta \), such that - there is a function \( T \) such that the observed data \( y \), also called incomplete data, can be recovered from the hidden data, \[ y = T\left( x\right) \] (6.6) which is called the admissible equation; - the log-likelihood function \[ L\left( {x \mid \theta }\right) = \log \mathbf{p}\left( {x \mid \theta }\right) \] (6.7) of the hidden data can be formulated and the required analytical calculations can be accomplished, where \( \mathbf{p}\left( {x \mid \theta }\right) \) is the conditional probability density function of the hidden data given the parameter \( \theta \) . We introduce two formulas needed in the following. Let \[ {\delta }_{a = b} = \left\{ \begin{array}{ll} 1, & \text{ if }a = b \\ 0, & \text{ otherwise. } \end{array}\right. \] (6.8) By the admissible equation Eq. 
(6.6), \[ \Pr \left( {y \mid x,\theta }\right) = \frac{\Pr \left( {y, x \mid \theta }\right) }{\Pr \left( {x \mid \theta }\right) } = \frac{\Pr \left( {x \mid \theta }\right) }{\Pr \left( {x \mid \theta }\right) }{\delta }_{y = T\left( x\right) } \] (6.9) \[ = {\delta }_{y = T\left( x\right) } \] (6.10) and \[ \Pr \left( {x \mid y,\theta }\right) = \frac{\Pr \left( {y \mid x,\theta }\right) \Pr \left( {x \mid \theta }\right) }{\Pr \left( {y \mid \theta }\right) } \] (6.11) \[ = \frac{\Pr \left( {x \mid \theta }\right) }{\Pr \left( {y \mid \theta }\right) }{\delta }_{T\left( x\right) = y} \] (6.12) There is considerable flexibility in making this selection, and making a good choice has largely been based on experience drawn from familiarity with the physical problem at hand and its mathematical model. The choice can influence the behavior of the recursive algorithm that results, such as
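To make the hidden-data machinery concrete, here is a minimal sketch (a toy example of my own, not from the text): a two-component Gaussian mixture with known unit variances, where the complete data \( x = (y, z) \) augments each observation with its component label \( z \), so the admissible equation \( y = T(x) \) is just the projection that discards \( z \). The E-step evaluates \( \Pr(z \mid y, \theta) \) in the spirit of Eq. (6.12); the M-step maximizes the expected complete-data log-likelihood.

```python
import math
import random

def em_gaussian_mixture(y, iters=200, sigma=1.0):
    # Hidden data x = (y_i, z_i) with z_i in {0, 1} the component label; T(x) = y.
    K = 2
    mu = [min(y), max(y)]                    # crude initialization
    pi = [1.0 / K] * K
    for _ in range(iters):
        # E-step: responsibilities Pr(z_i = k | y_i, theta)
        resp = []
        for yi in y:
            w = [pi[k] * math.exp(-(yi - mu[k]) ** 2 / (2 * sigma ** 2))
                 for k in range(K)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: maximize the expected complete-data log-likelihood
        for k in range(K):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * yi for r, yi in zip(resp, y)) / nk
            pi[k] = nk / len(y)
    return mu, pi

random.seed(0)
y = ([random.gauss(-2.0, 1.0) for _ in range(300)]
     + [random.gauss(3.0, 1.0) for _ in range(300)])
mu, pi = em_gaussian_mixture(y)
print(sorted(mu))
```

The recovered means should land near the true values \(-2\) and \(3\); the component labels never enter the observed data, which is exactly the "incompleteness" the EM iteration works around.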
113_Topological Groups
Definition 29.27
Definition 29.27. Let \( \mathcal{L} \) be a many-sorted language as above. An \( \mathcal{L} \)-structure is a triple \( \mathfrak{A} = \left( {A, f, R}\right) \) such that: (i) \( A \) is a function which assigns to each \( s \in \mathcal{S} \) a nonempty set \( {A}_{s} \); (ii) \( f \) is a function whose domain is the set of operation symbols of \( \mathcal{L} \); if \( \mathbf{O} \) is an operation symbol of rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \), then \( {f}_{\mathbf{O}} : {A}_{{s}_{0}} \times \cdots \times {A}_{{s}_{m - 1}} \rightarrow {A}_{{s}_{m}} \); (iii) \( R \) is a function whose domain is the set of relation symbols of \( \mathcal{L} \); if \( \mathbf{R} \) is a relation symbol of rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \), then \( {R}_{\mathbf{R}} \subseteq {A}_{{s}_{0}} \times \cdots \times {A}_{{s}_{m}} \).

Let \( \mathcal{L} \) and \( \mathfrak{A} \) be as in 29.27. Given \( x \in {\mathrm{P}}_{s \in \mathcal{S}}{}^{\omega }{A}_{s} \), it is clear how to define the notions \( {\sigma }^{\mathfrak{A}}x \) and \( \mathfrak{A} \vDash \varphi \left\lbrack x\right\rbrack \). This, then, defines the fundamental notions for many-sorted logic. There is a natural way of relating many-sorted logic to ordinary logic. Let \( \mathcal{L} \) be a many-sorted language, as above. With \( \mathcal{L} \) we associate an ordinary first-order language \( {\mathcal{L}}^{ * } \) as follows. The language \( {\mathcal{L}}^{ * } \) is to have the relation and operation symbols of \( \mathcal{L} \), and additional unary relation symbols \( {\mathbf{P}}_{s} \) for \( s \in \mathcal{S} \). If \( \mathbf{O} \) is an operation symbol of \( \mathcal{L} \) of rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \), then as a symbol of \( {\mathcal{L}}^{ * } \) the operation symbol \( \mathbf{O} \) will have rank \( m \).
For a relation symbol \( \mathbf{R} \) of rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \), the symbol \( \mathbf{R} \) will have rank \( m + 1 \) . We shall treat the variables \( {v}_{i}^{s} \) as the variables of \( {\mathcal{L}}^{ * } \) also. Note that this is, strictly speaking, impossible when \( \left| \mathcal{S}\right| > {\aleph }_{0} \), but for almost all logical purposes it makes no difference. Now with each formula \( \varphi \) of \( \mathcal{L} \) we associate the formula \( {\varphi }^{ * } \) of \( \mathcal{L} * \) obtained by replacing " \( \forall {v}_{i}^{s} \) " by " \( \forall {v}_{i}^{s}\left( {{\mathbf{P}}_{s}{v}_{i}^{s} \rightarrow }\right. \) " throughout \( \varphi \) . Let \( \Gamma \) be the set of all of the following sentences of \( {\mathcal{L}}^{ * } \) : \[ \exists {v}_{0}^{s}{\mathbf{P}}_{s}{v}_{0}^{s}\;\text{ for each }s \in \mathcal{S}; \] \[ \forall {v}_{0}^{s0}\forall {v}_{1}^{s1}\cdots \forall {v}_{m - 1}^{s\left( {m - 1}\right) }\left\lbrack {\mathop{\bigwedge }\limits_{{i < m}}{\mathbf{P}}_{si}{v}_{i}^{si} \rightarrow {\mathbf{P}}_{sm}\mathbf{O}{v}_{0}^{s0}\cdots {v}_{m - 1}^{s\left( {m - 1}\right) }}\right\rbrack \] for each operation symbol \( \mathbf{O} \) of \( \mathcal{L} \) of rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \) . Next, let \( \mathfrak{A} \) be any \( \mathcal{L} \) -structure. Fix \( a \in A \) . We convert it into an \( \mathcal{L} * \) -structure \( {\mathfrak{A}}_{a}^{ * } \) as follows. 
Set
\[ {A}^{ * } = \mathop{\bigcup }\limits_{{s \in \mathcal{S}}}{A}_{s}, \]
\[ {\mathbf{R}}^{{\mathfrak{A}}^{ * }} = {\mathbf{R}}^{\mathfrak{A}}, \]
\[ {\mathbf{O}}^{{\mathfrak{A}}^{ * }}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) = {\mathbf{O}}^{\mathfrak{A}}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) \;\text{ if }{a}_{i} \in {A}_{{s}_{i}}\text{ for each }i < m, \]
\[ {\mathbf{O}}^{{\mathfrak{A}}^{ * }}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) = a\;\text{ otherwise,} \]
where \( \mathbf{O} \) has rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \). The following proposition is then easy to establish.

Proposition 29.28. Let \( \mathcal{L} \) be a many-sorted language, \( {\mathcal{L}}^{ * } \) the associated first-order language, and \( \mathfrak{A} \) an \( \mathcal{L} \)-structure. Then \( {\mathfrak{A}}^{ * } \) is a model of \( \Gamma \) above. Furthermore, let \( x \in {\mathrm{P}}_{s \in \mathcal{S}}{}^{\omega }{A}_{s} \). Then \( {\sigma }^{\mathfrak{A}}x = {\sigma }^{{\mathfrak{A}}^{ * }}x \), and \( \mathfrak{A} \vDash \varphi \left\lbrack x\right\rbrack \) iff \( {\mathfrak{A}}^{ * } \vDash {\varphi }^{ * }\left\lbrack x\right\rbrack \).

Corollary 29.29. For any many-sorted sentence \( \varphi \), \( \vDash \varphi \) iff \( \Gamma \vDash {\varphi }^{ * } \).

Corollary 29.30. The set \( \{ \varphi : \varphi \) is a many-sorted sentence and \( \vDash \varphi \} \) is r.e.

Again, explicit axiom systems for many-sorted logic have been developed. The model theory for many-sorted logic is rather well developed. Some of the details are outlined in the exercises.

## BIBLIOGRAPHY

These references form a more extensive introduction to logic without equality, description operators, \( \varepsilon \)-operators, and many-sorted logic, respectively.

1. Church, A. Introduction to Mathematical Logic. Princeton: Princeton Univ. Press (1956).
2. Kalish, D. and Montague, R. Remarks on descriptions and natural deduction. Arch. Math. Logik u. Grundl., 3 (1957), 50-73.
3. Leisenring, A. C.
Mathematical Logic and Hilbert's \( \varepsilon \)-Symbol. New York: Gordon and Breach (1969).
4. Mal'cev, A. I. Model correspondences. In The Metamathematics of Algebraic Systems: Collected Papers 1936-1967. North-Holland (1971), 66-94.

## EXERCISES

29.31. Let \( \mathcal{L} \) be a relational language without equality. Let \( \left\langle {{\mathfrak{A}}_{i} : i \in I}\right\rangle \) be a system of \( \mathcal{L} \)-structures. Let \( F \) be an ultrafilter on \( I \). We define a structure \( \mathfrak{B} = \mathop{\bigcap }\limits_{{i \in I}}^{F}{\mathfrak{A}}_{i} \), which serves the role of ultraproducts in logic without equality. Its universe is \( B = \mathop{\mathrm{P}}\limits_{{i \in I}}{A}_{i} \). Given an \( m \)-ary relation symbol \( \mathbf{R} \), we set
\[ {R}^{\mathfrak{B}} = \left\{ {x \in {}^{m}B : \left\{ {i \in I : \left( {{x}_{0i},\ldots ,{x}_{m - 1, i}}\right) \in {\mathbf{R}}^{{\mathfrak{A}}_{i}}}\right\} \in F}\right\} . \]
Prove the following version of the fundamental theorem on ultraproducts: If \( x \in {}^{\omega }B \) and \( \varphi \) is a formula of \( \mathcal{L} \), then the following two conditions are equivalent: (i) \( \mathop{\bigcap }\limits_{{i \in I}}^{F}{\mathfrak{A}}_{i} \vDash \varphi \left\lbrack x\right\rbrack \); (ii) \( \left\{ {i \in I : {\mathfrak{A}}_{i} \vDash \varphi \left\lbrack {{\mathrm{pr}}_{i} \cdot x}\right\rbrack }\right\} \in F \).

29.32. Find a set \( \Gamma \) of sentences in a suitable language without equality such that \( \Gamma \) has no finite model.

29.33. Let \( \mathcal{L} \) be a relational language without equality, \( \mathfrak{A} \) an \( \mathcal{L} \)-structure. A relation \( E \subseteq A \times A \) is a congruence relation in \( \mathfrak{A} \) if it is an equivalence relation on \( A \), and for any \( m \)-ary relation symbol \( \mathbf{R} \) of \( \mathcal{L} \), if \( a, b \in {}^{m}A \), \( {a}_{i}E{b}_{i} \) for all \( i < m \), and \( a \in {\mathbf{R}}^{\mathfrak{A}} \), then \( b \in {\mathbf{R}}^{\mathfrak{A}} \).
Given such an \( E \), we let \( \mathfrak{A}/E \) be the \( \mathcal{L} \) -structure with universe \( A/E \) and with \( {\mathbf{R}}^{\mathfrak{A}/E} = \left\{ {x \in {}^{m}\left( {A/E}\right) \text{ : there exists }a \in {\mathbf{R}}^{\mathfrak{A}}\text{ with }{a}_{\mathfrak{i}} \in {x}_{\mathfrak{i}}\text{ for all }i < m}\right\} . \) Show that the mapping \( f \) such that \( {fa} = {\left\lbrack a\right\rbrack }_{E} \) for all \( a \in A \) is a two-way homomorphism from \( \mathfrak{A} \) onto \( \mathfrak{A}/E \) . 29.34. With \( \mathcal{L} \) and \( \mathfrak{A} \) as in 29.33, there is a maximal congruence on \( \mathfrak{A} \) . 29.35. For any relational language \( \mathcal{L} \) without equality there is a denumerable \( \mathcal{L} \) -structure \( \mathfrak{A} \) and a congruence \( E \) on \( \mathfrak{A} \) such that \( \left| {A/E}\right| = 1 \) . 29.36. Let \( \mathcal{L} \) be a relational language without equality. An \( \mathcal{L} \) -structure \( \mathfrak{A} \) is primitive if there is no congruence on \( \mathfrak{A} \) except the identity. If \( \left| A\right| > 1 \) and \( \mathfrak{A} \) is elementarily equivalent (without equality) to a one-element \( \mathcal{L} \) - structure, then \( \mathfrak{A} \) is not primitive. 29.37. Let \( \mathcal{L} \) be a relational language without equality and with only finitely many non logical constants. Suppose that \( \mathfrak{A} \) is a finite \( \mathcal{L} \) -structure, \( \mathfrak{A} \) is elementarily equivalent (without equality) to \( \mathfrak{B} \), and \( \left| A\right| < \left| B\right| \) . Then \( \mathfrak{B} \) is not primitive. 29.38. Formulate Definition 29.18 more precisely. Hint: let Trmfmla be the set of all pairs \( \left( {\tau ,0}\right) ,\left( {\varphi ,1}\right) \) with \( \tau \) a term and \( \varphi \) a formula, and define Trmfmla in a standard set-theoretic way. 29.39. Let \( \left( {\mathcal{L},\tau ,\mathbf{0}}\right) \) be a descriptive triple. 
Take as logical axioms the schemes 10.23(1)-(5) as well as the following two schemes: (6) \( \forall {v}_{i}\left( {{v}_{i} \equiv {v}_{j} \leftrightarrow \varphi }\right) \rightarrow {v}_{j} \equiv \tau {v}_{i}\varphi \), if \( j \) is minimum such that \( {v}_{j} \) does not occur in \( \varphi \); (7) \( \neg \exists {v}_{j}\forall {v}_{i}\left( {{v}_{i} \equiv {v}_{j} \leftrightarrow \varphi }\right) \rightarrow \tau {v}_{i}\varphi \equiv \mathbf{0} \), with \( j \) as in (6). This gives rise to a notion \( \Gamma {\vdash }_{\mathrm{d}}\varphi \) and the attendant notions, such as d-consistency. Prove the usual versions of the completeness theorem.
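The relativization \( \varphi \mapsto {\varphi }^{ * } \) described before Proposition 29.28 is purely syntactic and easy to mechanize. The following sketch uses a hypothetical tuple encoding of formulas (my own convention, not from the text) and guards each sorted quantifier with the corresponding sort predicate \( {\mathbf{P}}_{s} \):

```python
# Hypothetical tuple encoding of formulas (not from the text):
# ('forall', var, sort, body), ('rel', name, args), ('imp', a, b),
# ('and', a, b), ('not', a).
def star(phi):
    """The *-translation: replace each sorted quantifier 'forall v^s'
    by 'forall v (P_s(v) -> ...)' throughout the formula."""
    tag = phi[0]
    if tag == 'forall':
        _, var, sort, body = phi
        return ('forall', var,
                ('imp', ('rel', 'P_' + sort, (var,)), star(body)))
    if tag in ('imp', 'and'):
        return (tag, star(phi[1]), star(phi[2]))
    if tag == 'not':
        return ('not', star(phi[1]))
    return phi                     # atomic formulas are left unchanged

# forall x^s0 . forall y^s1 . R(x, y)
phi = ('forall', 'x', 's0', ('forall', 'y', 's1', ('rel', 'R', ('x', 'y'))))
print(star(phi))
```

Note that in the translated formula the quantifiers are unsorted; the sort information survives only in the guards \( {\mathbf{P}}_{s} \), exactly as in the definition of \( \Gamma \).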
1065_(GTM224)Metric Structures in Differential Geometry
Definition 6.1
Definition 6.1. Let \( c \) be a geodesic in \( M \) . A vector field \( Y \) along \( c \) is called a Jacobi field along \( c \) if \[ {Y}^{\prime \prime } + R\left( {Y,\dot{c}}\right) \dot{c} = 0. \] Notice that the collection \( {\mathcal{J}}_{c} \) of Jacobi fields along \( c \) is a vector space that contains \( \dot{c} \) . The space of Jacobi fields orthogonal to \( \dot{c} \) is the one of interest to us: If \( X \) and \( Y \) are Jacobi, then \[ \left\langle {{Y}^{\prime \prime }, X}\right\rangle = - \langle R\left( {Y,\dot{c}}\right) \dot{c}, X\rangle = - \langle R\left( {X,\dot{c}}\right) \dot{c}, Y\rangle = \left\langle {{X}^{\prime \prime }, Y}\right\rangle \] by Proposition 3.1. Thus, \( \left\langle {{X}^{\prime \prime }, Y}\right\rangle - \left\langle {{Y}^{\prime \prime }, X}\right\rangle = 0 \), and \( \left\langle {{X}^{\prime }, Y}\right\rangle - \left\langle {{Y}^{\prime }, X}\right\rangle \) must be constant. In particular, \( \langle Y,\dot{c}{\rangle }^{\prime } = \left\langle {{Y}^{\prime },\dot{c}}\right\rangle = \left\langle {{Y}^{\prime },\dot{c}}\right\rangle - \left\langle {Y,{\dot{c}}^{\prime }}\right\rangle \) is constant, so that for a normal geodesic, the tangential component \( {Y}^{T} \) of \( Y \) is given by \[ {Y}^{T} = \langle Y,\dot{c}\rangle \dot{c} = \left( {a + {bt}}\right) \dot{c},\;a = \langle Y,\dot{c}\rangle \left( 0\right) ,\;b = \langle Y,\dot{c}{\rangle }^{\prime }\left( 0\right) , \] and satisfies the Jacobi equation. It follows that the component \( {Y}^{ \bot } = Y - {Y}^{T} \) of \( Y \) orthogonal to \( \dot{c} \) is also a Jacobi field. Proposition 6.1. Let \( c : I \rightarrow M \) be a geodesic, \( {t}_{0} \in I \) . For any \( v, w \in \) \( {M}_{c\left( {t}_{0}\right) } \) there exists a unique Jacobi field \( Y \) along \( c \) with \( Y\left( {t}_{0}\right) = v \) and \( {Y}^{\prime }\left( {t}_{0}\right) = \) \( w \) . Proof. 
Let \( {X}_{1},\ldots ,{X}_{n} \) be parallel fields along \( c \) such that \( {X}_{1}\left( {t}_{0}\right) ,\ldots ,{X}_{n - 1}\left( {t}_{0}\right) \) form an orthonormal basis of \( \dot{c}{\left( {t}_{0}\right) }^{ \bot } \), and \( {X}_{n} = \dot{c} \) . Any vector field \( Y \) along \( c \) can then be expressed as \[ Y = \mathop{\sum }\limits_{i}{f}^{i}{X}_{i},\;{f}^{i} = \left\{ \begin{array}{ll} \left\langle {Y,{X}_{i}}\right\rangle , & \text{ for }i \leq n - 1, \\ \left\langle {Y,\frac{{X}_{n}}{{\left| {X}_{n}\right| }^{2}}}\right\rangle , & \text{ for }i = n. \end{array}\right. \] Since \( {X}_{i} \) is parallel, \( {Y}^{\prime \prime } = \sum {f}^{i\prime \prime }{X}_{i} \) . Furthermore, \( R\left( {{X}_{i},\dot{c}}\right) \dot{c} = \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{h}_{i}^{j}{X}_{j} \) , where \( {h}_{i}^{j} = \left\langle {R\left( {{X}_{i},\dot{c}}\right) \dot{c},{X}_{j}}\right\rangle \), so that \( R\left( {Y,\dot{c}}\right) \dot{c} = \mathop{\sum }\limits_{{i, j = 1}}^{{n - 1}}{f}^{i}{h}_{i}^{j}{X}_{j} \) . The Jacobi equation then reads \[ \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}\left( {{f}^{j\prime \prime } + \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}{f}^{i}{h}_{i}^{j}}\right) {X}_{j} = 0,\;{f}^{n\prime \prime } = 0, \] or equivalently, \[ {f}^{j\prime \prime } + \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}{f}^{i}{h}_{i}^{j} = 0,\;j = 1,\ldots, n, \] if we set \( {h}_{i}^{n} = \left\langle {R\left( {{X}_{i},\dot{c}}\right) \dot{c},\dot{c}}\right\rangle = 0 \) . 
This is a homogeneous system of \( \mathrm{n} \) linear second-order equations, which has a unique solution for given initial values \( {f}^{j}\left( {t}_{0}\right) = \left\langle {v,{X}_{j}\left( {t}_{0}\right) }\right\rangle ,{f}^{j\prime }\left( {t}_{0}\right) = \left\langle {w,{X}_{j}\left( {t}_{0}\right) }\right\rangle \left( {j < n}\right) ,{f}^{n}\left( {t}_{0}\right) = \left\langle {v,\left( {\dot{c}/{\left| \dot{c}\right| }^{2}}\right) \left( {t}_{0}\right) }\right\rangle , \) and \( {f}^{n\prime }\left( {t}_{0}\right) = \left\langle {w,\left( {\dot{c}/{\left| \dot{c}\right| }^{2}}\right) \left( {t}_{0}\right) }\right\rangle \) . Proposition 6.1 implies that the space \( {\mathcal{J}}_{c} \) of Jacobi fields along \( c \) is \( {2n} \) - dimensional, since the map \[ {\mathcal{J}}_{c} \rightarrow {M}_{c\left( {t}_{0}\right) } \times {M}_{c\left( {t}_{0}\right) } \] \[ Y \mapsto \left( {Y\left( {t}_{0}\right) ,{Y}^{\prime }\left( {t}_{0}\right) }\right) \] is an isomorphism. EXAMPLE 6.1. Let \( {M}^{n} \) be a space of constant curvature \( \kappa \), and let \( {c}_{\kappa },{s}_{\kappa } \) denote the solutions of the differential equation \[ {f}^{\prime \prime } + {\kappa f} = 0 \] with \( {c}_{\kappa }\left( 0\right) = 1,{c}_{\kappa }^{\prime }\left( 0\right) = 0,{s}_{\kappa }\left( 0\right) = 0,{s}_{\kappa }^{\prime }\left( 0\right) = 1 \) . For example, \( {c}_{1} = \cos \), and \( {s}_{1} = \sin \) . Consider a normal geodesic \( c : \left\lbrack {0, b}\right\rbrack \rightarrow M \) . 
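The functions \( {c}_{\kappa },{s}_{\kappa } \) just introduced are easy to evaluate numerically; here is a quick sanity check (a sketch of my own using classical RK4 with stdlib only; the step count is an arbitrary choice):

```python
import math

def solve_cs(kappa, t, n=20000):
    """Integrate f'' + kappa * f = 0 on [0, t] with classical RK4, returning
    (c_kappa(t), s_kappa(t)) from the initial data c(0)=1, c'(0)=0 and
    s(0)=0, s'(0)=1."""
    def integrate(f, fp):
        h = t / n
        for _ in range(n):
            # first-order system y = (f, f'), y' = (f', -kappa * f)
            k1 = (fp, -kappa * f)
            k2 = (fp + h / 2 * k1[1], -kappa * (f + h / 2 * k1[0]))
            k3 = (fp + h / 2 * k2[1], -kappa * (f + h / 2 * k2[0]))
            k4 = (fp + h * k3[1], -kappa * (f + h * k3[0]))
            f += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            fp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        return f
    return integrate(1.0, 0.0), integrate(0.0, 1.0)

c1, s1 = solve_cs(1.0, 1.2)    # kappa = 1 should reproduce cos and sin
print(c1, s1)
```

For \( \kappa = 1 \) this recovers \( \cos \) and \( \sin \); for \( \kappa = -1 \) it recovers \( \cosh \) and \( \sinh \), matching the closed-form solutions of \( {f}^{\prime \prime } + {\kappa f} = 0 \).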
Given \( v, w \in {M}_{c\left( 0\right) } \) orthogonal to \( \dot{c}\left( 0\right) \), the Jacobi field \( Y \) along \( c \) with \( Y\left( 0\right) = v \) and \( {Y}^{\prime }\left( 0\right) = w \) is given by
\[ Y = {c}_{\kappa }E + {s}_{\kappa }F, \]
where \( E \) and \( F \) are the parallel fields along \( c \) with \( E\left( 0\right) = v \) and \( F\left( 0\right) = w \): Indeed, \( {Y}^{\prime \prime } = {c}_{\kappa }^{\prime \prime }E + {s}_{\kappa }^{\prime \prime }F = - {\kappa Y} = - R\left( {Y,\dot{c}}\right) \dot{c} \), so that \( Y \) is a Jacobi field, and clearly satisfies the initial conditions at 0.

Jacobi fields essentially arise out of variations of geodesics: If \( c : \left\lbrack {a, b}\right\rbrack \rightarrow M \) is a curve, and \( I \) is an interval containing 0, a variation of \( c \) is a smooth homotopy \( V : \left\lbrack {a, b}\right\rbrack \times I \rightarrow M \) with \( V\left( {t,0}\right) = c\left( t\right) \) for \( t \in \left\lbrack {a, b}\right\rbrack \). Notice that \( {V}_{ * }{D}_{1}\left( {t,0}\right) = \dot{c}\left( t\right) \); the variational vector field \( Y \) along \( c \) is defined by \( Y\left( t\right) = {V}_{ * }{D}_{2}\left( {t,0}\right) \).

Proposition 6.2. Let \( c : \left\lbrack {0, b}\right\rbrack \rightarrow M \) be a geodesic. If \( V \) is a variation of \( c \) through geodesics, meaning that \( t \mapsto V\left( {t, s}\right) \) is a geodesic for each \( s \), then the variational vector field \( t \mapsto {V}_{ * }{D}_{2}\left( {t,0}\right) \) is Jacobi along \( c \). Conversely, let \( Y \) be a Jacobi field along \( c \). Then there exists a variation \( V \) of \( c \) through geodesics whose variational vector field equals \( Y \).

Proof. Given a variation \( V \) of \( c \) through geodesics, define vector fields \( \widetilde{X} \) and \( \widetilde{Y} \) along \( V \) by \( \widetilde{X} = {V}_{ * }{D}_{1},\widetilde{Y} = {V}_{ * }{D}_{2} \).
By assumption, \( {\nabla }_{{D}_{1}}\widetilde{X} = 0 \), so that \[ R\left( {\widetilde{Y},\widetilde{X}}\right) \widetilde{X} = {\nabla }_{{D}_{2}}{\nabla }_{{D}_{1}}\widetilde{X} - {\nabla }_{{D}_{1}}{\nabla }_{{D}_{2}}\widetilde{X} = - {\nabla }_{{D}_{1}}{\nabla }_{{D}_{2}}\widetilde{X} \] \[ = - {\nabla }_{{D}_{1}}{\nabla }_{{D}_{2}}{V}_{ * }{D}_{1} = - {\nabla }_{{D}_{1}}{\nabla }_{{D}_{1}}{V}_{ * }{D}_{2} \] \[ = - {\nabla }_{{D}_{1}}{\nabla }_{{D}_{1}}\widetilde{Y} \] When \( s = 0 \), the above expression becomes \( R\left( {Y,\dot{c}}\right) \dot{c} = - {Y}^{\prime \prime } \), and \( Y \) is Jacobi. Conversely, suppose \( Y \) is a Jacobi field along \( c \), and \( v \mathrel{\text{:=}} Y\left( 0\right), w \mathrel{\text{:=}} {Y}^{\prime }\left( 0\right) \) . Let \( \gamma \) be a curve with \( \dot{\gamma }\left( 0\right) = v \), and \( X, W \) parallel fields along \( \gamma \) with \( X\left( 0\right) = \) \( \dot{c}\left( 0\right), W\left( 0\right) = w \) . Choose \( \epsilon > 0 \) small enough so that \( t\left( {X\left( s\right) + {sW}\left( s\right) }\right) \) belongs to the domain of \( {\exp }_{\gamma \left( s\right) } \) for \( \left( {t, s}\right) \in \left\lbrack {0, b}\right\rbrack \times \left( {-\epsilon ,\epsilon }\right) \), and consider the variation \[ V : \left\lbrack {0, b}\right\rbrack \times \left( {-\epsilon ,\epsilon }\right) \rightarrow M \] \[ \left( {t, s}\right) \mapsto {\exp }_{\gamma \left( s\right) }t\left( {X\left( s\right) + {sW}\left( s\right) }\right) \] of \( c \) . Since the curves \( t \mapsto V\left( {t, s}\right) \) are geodesics, the variational vector field \( Z \) is Jacobi along \( c \) . Moreover, \( V\left( {0, s}\right) = \gamma \left( s\right) \), so that \( Z\left( 0\right) = \dot{\gamma }\left( 0\right) = v \) . 
Finally, \[ {Z}^{\prime }\left( 0\right) = {\nabla }_{{D}_{1}\left( {0,0}\right) }{V}_{ * }{D}_{2} = {\nabla }_{{D}_{2}\left( {0,0}\right) }{V}_{ * }{D}_{1} = W\left( 0\right) = w, \] because \( {V}_{ * }{D}_{1}\left( {0, s}\right) = X\left( s\right) + {sW}\left( s\right) \), and \( X, W \) are parallel along \( \gamma \) . By Proposition 6.1, \( Z = Y \) . In the special case when \( Y\left( 0\right) = 0 \), the variation from Proposition 6.2 becomes \( V\left( {t, s}\right) = {\exp }_{c\left( 0\right) }t\left( {\dot{c}\left( 0\right) + {sw}}\right) \) ; the Jacobi field \( Y \) with initial conditions \( Y\left( 0\right) = 0,{Y}^{\prime }\left( 0\right) = w \) is given by (6.1) \[ Y\left( t\right) = {\exp }_{c\left( 0\right) * }\left( {t{\mathcal{J}}_{t\dot{c}\left( 0\right) }w}\right) . \] One can interpret this as follows: Let \( p = c\left( 0\right), v = \dot{c}\left( 0\right) \), and consider the manifold \( {M}_{p} \) with the canonical Riemannian metric induced by the inner product on \( {M}_{p} \) . Then \( t \mapsto t{\mathcal{J}}_{tv}w \) is the Jacobi field \( F \) along the geodesic \( t \mapsto {tv} \) in \( {M}_{p} \) with \( F\left( 0\right) = 0,{F}^{\prime }\left( 0\right) = {\mathcal
1016_(GTM181)Numerical Analysis
Definition 7.21
Definition 7.21 An \( n \times n \) matrix \( B = \left( {b}_{jk}\right) \) is called a Hessenberg matrix if \( {b}_{jk} = 0 \) for \( 1 \leq k \leq j - 2, j = 3,\ldots, n \) ; i.e., in the lower triangular part of a Hessenberg matrix only the elements of the first subdiagonal can be different from zero. We proceed by showing that each matrix \( A \) can be transformed into Hessenberg form by unitary transformations using Householder matrices. We start with generating zeros in the first column by multiplying \( A \) from the left by a Householder matrix \( {H}_{1} \) . We write \[ A = \left( \begin{matrix} {a}_{11} & * \\ {\widetilde{a}}_{1} & \widetilde{A} \end{matrix}\right) \] where \( \widetilde{A} \) is an \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix and \( {\widetilde{a}}_{1} \) an \( \left( {n - 1}\right) \) vector. Then considering a Householder matrix \( {H}_{1} \) of the form \[ {H}_{1} = \left( \begin{matrix} 1 & 0 \\ 0 & {\widetilde{H}}_{1} \end{matrix}\right) \] where \( {\widetilde{H}}_{1} = I - 2{\widetilde{v}}_{1}{\widetilde{v}}_{1}^{ * } \) is an \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) Householder matrix, we have \[ A{H}_{1}^{ * } = \left( \begin{matrix} {a}_{11} & * \\ {\widetilde{a}}_{1} & \widetilde{A}{\widetilde{H}}_{1}^{ * } \end{matrix}\right) \] and \[ {H}_{1}A{H}_{1}^{ * } = \left( \begin{matrix} {a}_{11} & * \\ {\widetilde{H}}_{1}{\widetilde{a}}_{1} & {\widetilde{H}}_{1}\widetilde{A}{\widetilde{H}}_{1}^{ * } \end{matrix}\right) . \] As shown in the proof of Theorem 2.13, choosing \[ {\widetilde{v}}_{1} = \frac{{u}_{1}}{\sqrt{{u}_{1}^{ * }{u}_{1}}} \] where \[ {u}_{1} = {\widetilde{a}}_{1} \mp \sigma {\left( 1,0,\ldots ,0\right) }^{T} \] and \[ \sigma = \left\{ \begin{array}{ll} \frac{{a}_{21}}{\left| {a}_{21}\right| }\sqrt{{\widetilde{a}}_{1}^{ * }{\widetilde{a}}_{1}}, & {a}_{21} \neq 0, \\ \sqrt{{\widetilde{a}}_{1}^{ * }{\widetilde{a}}_{1}}, & {a}_{21} = 0, \end{array}\right. 
\]
eliminates all elements of \( {\widetilde{a}}_{1} \) with the exception of the first component. Hence the first column of the transformed matrix is of the required form. Now assume that \( {A}_{k} \) is an \( n \times n \) matrix of the form
\[ {A}_{k} = \left( \begin{matrix} {B}_{k} & * \\ 0\;{\widetilde{a}}_{k} & {\widetilde{A}}_{n - k} \end{matrix}\right) \]
where \( {B}_{k} \) is a \( k \times k \) Hessenberg matrix, \( {\widetilde{A}}_{n - k} \) an \( \left( {n - k}\right) \times \left( {n - k}\right) \) matrix, \( {\widetilde{a}}_{k} \) an \( \left( {n - k}\right) \) vector, and 0 the \( \left( {n - k}\right) \times \left( {k - 1}\right) \) zero matrix. Then for a Householder transformation of the form
\[ {H}_{k} = \left( \begin{matrix} {I}_{k} & 0 \\ 0 & {\widetilde{H}}_{n - k} \end{matrix}\right) \]
where \( {I}_{k} \) denotes the \( k \times k \) identity matrix and \( {\widetilde{H}}_{n - k} \) is an \( \left( {n - k}\right) \times \left( {n - k}\right) \) Householder matrix, it follows that
\[ {A}_{k}{H}_{k}^{ * } = \left( \begin{matrix} {B}_{k} & * \\ 0\;{\widetilde{a}}_{k} & {\widetilde{A}}_{n - k}{\widetilde{H}}_{n - k}^{ * } \end{matrix}\right) \]
and
\[ {H}_{k}{A}_{k}{H}_{k}^{ * } = \left( \begin{matrix} {B}_{k} & * & \\ 0 & {\widetilde{H}}_{n - k}{\widetilde{a}}_{k} & {\widetilde{H}}_{n - k}{\widetilde{A}}_{n - k}{\widetilde{H}}_{n - k}^{ * } \end{matrix}\right) . \]
Now, proceeding as above, we can choose \( {\widetilde{H}}_{n - k} \) such that all elements of \( {\widetilde{H}}_{n - k}{\widetilde{a}}_{k} \) vanish with the exception of the first component. This procedure reduces a further column into Hessenberg form. We can summarize our analysis in the following theorem.

Theorem 7.22 To each \( n \times n \) matrix \( A \) there exist \( n - 2 \) Householder matrices \( {H}_{1},\ldots ,{H}_{n - 2} \) such that for \( Q = {H}_{n - 2}\cdots {H}_{1} \) the matrix
\[ B = {QA}{Q}^{ * } \]
is a Hessenberg matrix.
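A real-arithmetic sketch of this reduction (my own implementation, not the book's; the sign of \( \sigma \) is chosen as in the text to avoid cancellation, and for real matrices \( {H}^{ * } = {H}^{T} \)):

```python
import math
import random

def hessenberg(A):
    """Reduce a real square matrix to upper Hessenberg form by Householder
    similarity transformations H_k A H_k^T; returns a new matrix."""
    n = len(A)
    A = [row[:] for row in A]
    for k in range(n - 2):
        a = [A[i][k] for i in range(k + 1, n)]     # entries to eliminate
        sigma = math.sqrt(sum(x * x for x in a))
        if sigma == 0.0:
            continue
        if a[0] < 0:
            sigma = -sigma
        u = a[:]
        u[0] += sigma                              # u = a + sign(a_1) |a| e_1
        unorm2 = sum(x * x for x in u)
        # apply H = I - 2 u u^T / (u^T u) from the left (rows k+1 .. n-1)
        for j in range(n):
            s = sum(u[i - k - 1] * A[i][j] for i in range(k + 1, n))
            t = 2.0 * s / unorm2
            for i in range(k + 1, n):
                A[i][j] -= t * u[i - k - 1]
        # and from the right (columns k+1 .. n-1)
        for i in range(n):
            s = sum(A[i][j] * u[j - k - 1] for j in range(k + 1, n))
            t = 2.0 * s / unorm2
            for j in range(k + 1, n):
                A[i][j] -= t * u[j - k - 1]
    return A

random.seed(1)
A = [[random.random() for _ in range(5)] for _ in range(5)]
B = hessenberg(A)
# entries below the first subdiagonal should now vanish
print(max(abs(B[i][j]) for i in range(2, 5) for j in range(i - 1)))
```

Since each step is a similarity transformation, the spectrum (and in particular the trace) of \( A \) is preserved.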
For a Hessenberg matrix the value of the characteristic polynomial and its derivative at a point \( \lambda \in \mathbb{C} \) can be computed easily without computing the coefficients of the polynomial. These two quantities are required for employing Newton's method for approximating the eigenvalues as the zeros of the characteristic polynomial. We first consider the case of a symmetric Hessenberg matrix. Example 7.23 Let \[ A = \left( \begin{matrix} {a}_{1} & {c}_{2} & & & & \\ {c}_{2} & {a}_{2} & {c}_{3} & & & \\ & {c}_{3} & {a}_{3} & {c}_{4} & & \\ & & \cdot & \cdot & \cdot & \\ & & & {c}_{n - 1} & {a}_{n - 1} & {c}_{n} \\ & & & & {c}_{n} & {a}_{n} \end{matrix}\right) \] be a symmetric tridiagonal matrix. Denote by \( {A}_{k} \) the \( k \times k \) submatrix consisting of the first \( k \) rows and columns of \( A \), and let \( {p}_{k} \) denote the characteristic polynomial of \( {A}_{k} \) . Then we have the recurrence relations \[ {p}_{k}\left( \lambda \right) = \left( {{a}_{k} - \lambda }\right) {p}_{k - 1}\left( \lambda \right) - {c}_{k}^{2}{p}_{k - 2}\left( \lambda \right) ,\;k = 2,\ldots, n, \] (7.25) and \[ {p}_{k}^{\prime }\left( \lambda \right) = \left( {{a}_{k} - \lambda }\right) {p}_{k - 1}^{\prime }\left( \lambda \right) - {c}_{k}^{2}{p}_{k - 2}^{\prime }\left( \lambda \right) - {p}_{k - 1}\left( \lambda \right) ,\;k = 2,\ldots, n, \] (7.26) starting with \( {p}_{0}\left( \lambda \right) = 1 \) and \( {p}_{1}\left( \lambda \right) = {a}_{1} - \lambda \) . Proof. The recursion (7.25) follows by expanding \( \det \left( {{A}_{k} - {\lambda I}}\right) \) with respect to the last column, and (7.26) is obtained by differentiating (7.25). 
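The recurrences (7.25) and (7.26) translate directly into code. The following sketch (mine, not the book's) runs Newton's method for the smallest eigenvalue of the tridiagonal matrix with diagonal 2 and off-diagonal \(-1\) treated in Example 7.24 below:

```python
import math

def charpoly_and_deriv(a, c, lam):
    """Evaluate p_n(lam) and p_n'(lam) for the symmetric tridiagonal matrix
    with diagonal a_1..a_n (stored in a[0..n-1]) and off-diagonal c_2..c_n
    (stored in c[1..n-1]), via the recurrences (7.25) and (7.26)."""
    n = len(a)
    p_prev, p = 1.0, a[0] - lam            # p_0 and p_1
    dp_prev, dp = 0.0, -1.0                # p_0' and p_1'
    for k in range(1, n):
        p_new = (a[k] - lam) * p - c[k] ** 2 * p_prev
        dp_new = (a[k] - lam) * dp - c[k] ** 2 * dp_prev - p
        p_prev, p = p, p_new
        dp_prev, dp = dp, dp_new
    return p, dp

n = 10
a = [2.0] * n
c = [0.0] + [-1.0] * (n - 1)               # c[0] is an unused placeholder
lam = 0.0                                  # left Gerschgorin bound as start
for _ in range(50):                        # Newton iteration for lambda_min
    p, dp = charpoly_and_deriv(a, c, lam)
    lam -= p / dp
print(lam)
```

Starting from the left Gerschgorin bound \( \lambda = 0 \), the iteration converges monotonically to \( {\lambda }_{\min } = 4{\sin }^{2}\left( {\pi /{22}}\right) \approx {0.08101405} \).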
Example 7.24 The \( n \times n \) tridiagonal matrix \[ A = \left( \begin{array}{rrrrrr} 2 & - 1 & & & & \\ - 1 & 2 & - 1 & & & \\ & - 1 & 2 & - 1 & & \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ & & & - 1 & 2 & - 1 \\ & & & & - 1 & 2 \end{array}\right) \] has the eigenvalues \[ {\lambda }_{j} = 4{\sin }^{2}\frac{j\pi }{2\left( {n + 1}\right) },\;j = 1,\ldots, n \] (see Example 4.17). Table 7.1 gives the results of the Newton iteration using (7.25) and (7.26) for computing the smallest eigenvalue \( {\lambda }_{\min } = {\lambda }_{1} \) and the largest eigenvalue \( {\lambda }_{\max } = {\lambda }_{n} \) for \( n = {10} \) . The starting values are obtained from the Gerschgorin estimates \( \left| {\lambda - 2}\right| \leq 2 \) following from Theorem 7.7. \( ▱ \) TABLE 7.1. Hessenberg method for Example 7.24 <table><thead><tr><th>\( {\lambda }_{\max } \)</th><th>\( {\lambda }_{\min } \)</th></tr></thead><tr><td>4.00000000</td><td>0.00000000</td></tr><tr><td>3.95000000</td><td>0.05000000</td></tr><tr><td>3.92542110</td><td>0.07457890</td></tr><tr><td>3.91933549</td><td>0.08066451</td></tr><tr><td>3.91898705</td><td>0.08101295</td></tr><tr><td>3.91898595</td><td>0.08101405</td></tr><tr><td>3.91898595</td><td>0.08101405</td></tr></table> We conclude this section by describing the computation of the quotient of the value of the characteristic polynomial \( p\left( \lambda \right) = \det \left( {B - {\lambda I}}\right) \) and its derivative for a general Hessenberg matrix \( B = \left( {b}_{jk}\right) \) . We assume that \( {b}_{j, j - 1} \neq 0 \) for \( j = 2,\ldots, n \) ; i.e., \( B \) is irreducible (see Problem 7.15). 
For a given \( \lambda \) we determine
\[ \xi = \xi \left( \lambda \right) = {\left( {\xi }_{1},\ldots ,{\xi }_{n}\right) }^{T} \]
and \( \alpha = \alpha \left( \lambda \right) \) such that
\[ \left( {{b}_{11} - \lambda }\right) {\xi }_{1} + \;{b}_{12}{\xi }_{2} + \;\cdots \; + \;{b}_{1n}{\xi }_{n} = \alpha , \]
\[ {b}_{21}{\xi }_{1} + \left( {{b}_{22} - \lambda }\right) {\xi }_{2} + \cdots + {b}_{2n}{\xi }_{n} = 0, \]
\[ \vdots \]
\[ {b}_{n, n - 1}{\xi }_{n - 1} + \left( {{b}_{nn} - \lambda }\right) {\xi }_{n} = 0, \]
and \( {\xi }_{n} = 1 \). This is an \( n \times n \) upper triangular linear system for the \( n \) unknowns \( \alpha ,{\xi }_{1},\ldots ,{\xi }_{n - 1} \), and it can be solved by backward substitution. Setting
\[ C = \left( \begin{matrix} {b}_{11} - \lambda & {b}_{12} & \cdots & \cdots & {b}_{1, n - 1} & \alpha \\ {b}_{21} & {b}_{22} - \lambda & \cdots & \cdots & {b}_{2, n - 1} & 0 \\ & & \ddots & \ddots & \vdots & \vdots \\ & & & & {b}_{n, n - 1} & 0 \end{matrix}\right) , \]
by Cramer's rule we have that
\[ 1 = {\xi }_{n} = \frac{\det C}{\det \left( {B - {\lambda I}}\right) } = \frac{{\left( -1\right) }^{n - 1}{b}_{21}\cdots {b}_{n, n - 1}\alpha }{\det \left( {B - {\lambda I}}\right) }, \]
that is,
\[ p\left( \lambda \right) = {\left( -1\right) }^{n - 1}{b}_{21}\cdots {b}_{n, n - 1}\alpha \left( \lambda \right) .
\] Differentiating the last equation yields \[ {p}^{\prime }\left( \lambda \right) = {\left( -1\right) }^{n - 1}{b}_{21}\cdots {b}_{n, n - 1}{\alpha }^{\prime }\left( \lambda \right) \] and therefore \[ \frac{p\left( \lambda \right) }{{p}^{\prime }\left( \lambda \right) } = \frac{\alpha \left( \lambda \right) }{{\alpha }^{\prime }\left( \lambda \right) } \] By differentiating the above linear system with respect to \( \lambda \) we obtain the linear system \[ \left( {{b}_{11} - \lambda }\right) {\eta }_{1} + \;{b}_{12}{\eta }_{2} + \cdots + {b}_{1, n - 1}{\eta }_{n - 1} = {\xi }_{1} + \beta \] \[ {b}_{21}{\eta }_{1} + \left( {{b}_{22} - \lambda }\right) {\eta }_{2} + \cdots + {b}_{2, n - 1}{\eta }_{n - 1} = {\xi }_{2} \] ... \[ {b}_{n, n - 1}{\eta }_{n - 1} = {\xi }_{n} \] for the derivatives \( \beta = {\alpha }^{\prime },{\eta }_{1} = {\xi }_{1}^{\prime },\ldots ,{\eta }_{n - 1} = {\xi }_{n - 1}^{\prime } \) . This linear system again can be solved by backward substitution for the \( n \) unknowns \( \beta ,{\eta }_{1},\ldots ,{\eta }_{n - 1} \) . Thus we have proven the following theorem. Theorem 7.25 Let \( B = \left( {b}_{jk}\right) \) be an irreducible Hessenberg matrix and let \( \lambda \in \mathbb{C} \) . Starting from \( {\xi }_{n} = 1,{\eta }_{n} = 0 \), compute recursively \[ {\xi }_{n - k} = \frac{1}{{b}_{n - k + 1, n - k}}\left\
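The two backward substitutions just described, first for \( \xi \) and \( \alpha \), then for the derivatives \( \eta = {\xi }^{\prime } \) and \( \beta = {\alpha }^{\prime } \), yield the Newton quotient \( p\left( \lambda \right) /{p}^{\prime }\left( \lambda \right) = \alpha \left( \lambda \right) /{\alpha }^{\prime }\left( \lambda \right) \) directly. A sketch (my own implementation, real arithmetic assumed):

```python
import math

def char_quotient(B, lam):
    """Return p(lam)/p'(lam) = alpha(lam)/alpha'(lam) for an irreducible
    Hessenberg matrix B, via the two backward substitutions above."""
    n = len(B)
    M = [[B[i][j] - (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    xi = [0.0] * n
    xi[n - 1] = 1.0                        # normalization xi_n = 1
    for i in range(n - 1, 0, -1):          # rows n, n-1, ..., 2 of the system
        xi[i - 1] = -sum(M[i][j] * xi[j] for j in range(i, n)) / B[i][i - 1]
    alpha = sum(M[0][j] * xi[j] for j in range(n))
    eta = [0.0] * n                        # eta_n = xi_n' = 0
    for i in range(n - 1, 0, -1):          # differentiated system
        eta[i - 1] = (xi[i] - sum(M[i][j] * eta[j] for j in range(i, n))) / B[i][i - 1]
    beta = sum(M[0][j] * eta[j] for j in range(n)) - xi[0]
    return alpha / beta

# Newton iteration for the smallest eigenvalue of the n = 4 model matrix
B = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(4)]
     for i in range(4)]
lam = 0.0
for _ in range(60):
    lam -= char_quotient(B, lam)
print(lam)
```

For the \( 4 \times 4 \) matrix with diagonal 2 and off-diagonal \(-1\) this converges to \( 4{\sin }^{2}\left( {\pi /{10}}\right) \), in agreement with the eigenvalue formula of Example 7.24.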
1083_(GTM240)Number Theory II
Definition 9.8.3
Definition 9.8.3. The functions \( {J}_{\nu }\left( x\right) \) and \( {Y}_{\nu }\left( x\right) \) are called the Bessel functions of the first and second kind respectively, and the functions \( {I}_{\nu }\left( x\right) \) and \( {K}_{\nu }\left( x\right) \) the modified Bessel functions of the first and second kind.

Remarks. (1) As will become clear from the asymptotic expansions given below, the reader should think of the functions \( J\left( x\right) \) and \( Y\left( x\right) \) as the functions \( \cos \left( x\right) \) and \( \sin \left( x\right) \) respectively, and of the functions \( I\left( x\right) \) and \( K\left( x\right) \) as the functions \( {e}^{x} \) and \( {e}^{-x} \). In particular, the functions \( J, Y \), and \( K \) often occur in expansions, but almost never the function \( I \), since it is exponentially large.

(2) The normalization of the functions \( J \) and \( I \) is natural. That of the functions \( K \) is canonical up to multiplication by a constant, since it is the only solution of the differential equation that tends exponentially fast to zero at infinity. On the other hand, the normalization of the function \( Y \) (which is sometimes denoted by \( N \)) is less natural, but we have chosen the one occurring in the literature.

Proposition 9.8.4.
We have \[ {J}_{\nu - 1}\left( x\right) + {J}_{\nu + 1}\left( x\right) = \frac{2\nu }{x}{J}_{\nu }\left( x\right) ,\;{J}_{\nu - 1}\left( x\right) - {J}_{\nu + 1}\left( x\right) = 2{J}_{\nu }^{\prime }\left( x\right) , \] \[ {Y}_{\nu - 1}\left( x\right) + {Y}_{\nu + 1}\left( x\right) = \frac{2\nu }{x}{Y}_{\nu }\left( x\right) ,\;{Y}_{\nu - 1}\left( x\right) - {Y}_{\nu + 1}\left( x\right) = 2{Y}_{\nu }^{\prime }\left( x\right) , \] \[ {I}_{\nu - 1}\left( x\right) - {I}_{\nu + 1}\left( x\right) = \frac{2\nu }{x}{I}_{\nu }\left( x\right) ,\;{I}_{\nu - 1}\left( x\right) + {I}_{\nu + 1}\left( x\right) = 2{I}_{\nu }^{\prime }\left( x\right) , \] \[ {K}_{\nu + 1}\left( x\right) - {K}_{\nu - 1}\left( x\right) = \frac{2\nu }{x}{K}_{\nu }\left( x\right) ,\;{K}_{\nu - 1}\left( x\right) + {K}_{\nu + 1}\left( x\right) = - 2{K}_{\nu }^{\prime }\left( x\right) . \] Proof. Immediate from the series expansions and the definitions of \( Y \) and \( K \), using \( \Gamma \left( {\nu + k + 1}\right) = \left( {\nu + k}\right) \Gamma \left( {\nu + k}\right) \), and left to the reader (Exercise 116). Proposition 9.8.5. When \( \nu \in \left( {1/2}\right) + \mathbb{Z} \) the four Bessel functions are elementary functions. More precisely: (1) We have \[ {J}_{1/2}\left( x\right) = {Y}_{-1/2}\left( x\right) = \sqrt{\frac{2}{\pi x}}\sin \left( x\right) ,\;{J}_{-1/2}\left( x\right) = - {Y}_{1/2}\left( x\right) = \sqrt{\frac{2}{\pi x}}\cos \left( x\right) , \] \[ {I}_{1/2}\left( x\right) = \sqrt{\frac{2}{\pi x}}\sinh \left( x\right) ,\;{I}_{-1/2}\left( x\right) = \sqrt{\frac{2}{\pi x}}\cosh \left( x\right) , \] \[ {K}_{1/2}\left( x\right) = {K}_{-1/2}\left( x\right) = \sqrt{\frac{\pi }{2x}}{e}^{-x}. 
\] (2) More generally, there exist polynomials \( {P}_{n}\left( X\right) \) and \( {Q}_{n}\left( X\right) \) satisfying \( \deg \left( {P}_{n}\right) = \deg \left( {Q}_{n}\right) = n,{P}_{n}\left( {-X}\right) = {\left( -1\right) }^{n}{P}_{n}\left( X\right) ,{Q}_{n}\left( {-X}\right) = {\left( -1\right) }^{n}{Q}_{n}\left( X\right) \) , and such that for \( k \in {\mathbb{Z}}_{ \geq 0} \) we have \[ {J}_{k + 1/2}\left( x\right) = \sqrt{\frac{2}{\pi x}}\left( {{P}_{k}\left( {1/x}\right) \sin \left( x\right) - {Q}_{k - 1}\left( {1/x}\right) \cos \left( x\right) }\right) , \] \[ {J}_{-k - 1/2}\left( x\right) = {\left( -1\right) }^{k}\sqrt{\frac{2}{\pi x}}\left( {{P}_{k}\left( {1/x}\right) \cos \left( x\right) + {Q}_{k - 1}\left( {1/x}\right) \sin \left( x\right) }\right) , \] \[ {Y}_{k + 1/2}\left( x\right) = {\left( -1\right) }^{k - 1}{J}_{-k - 1/2}\left( x\right) ,\;{Y}_{-k - 1/2} = {\left( -1\right) }^{k}{J}_{k + 1/2}\left( x\right) , \] \[ {I}_{k + 1/2}\left( x\right) = \sqrt{\frac{2}{\pi x}}\left( {{i}^{k}{P}_{k}\left( {i/x}\right) \sinh \left( x\right) + {i}^{k - 1}{Q}_{k - 1}\left( {i/x}\right) \cosh \left( x\right) }\right) , \] \[ {I}_{-k - 1/2}\left( x\right) = \sqrt{\frac{2}{\pi x}}\left( {{i}^{k}{P}_{k}\left( {i/x}\right) \cosh \left( x\right) + {i}^{k - 1}{Q}_{k - 1}\left( {i/x}\right) \sinh \left( x\right) }\right) , \] \[ {K}_{k + 1/2}\left( x\right) = {K}_{-k - 1/2}\left( x\right) = \sqrt{\frac{\pi }{2x}}\left( {{i}^{k}{P}_{k}\left( {1/\left( {ix}\right) }\right) + {i}^{k - 1}{Q}_{k - 1}\left( {1/\left( {ix}\right) }\right) }\right) {e}^{-x}. \] Proof. The formulas for \( {J}_{\pm 1/2}\left( x\right) \) and \( {I}_{\pm 1/2}\left( x\right) \) follow immediately from the power series expansion, using the formula \[ \Gamma \left( {k + 3/2}\right) = \left( {{2k} + 1}\right) !\sqrt{\pi }/\left( {k!{2}^{{2k} + 1}}\right) , \] which is an immediate consequence of the duplication formula of the gamma function. 
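Both the recurrences of Proposition 9.8.4 and the closed forms in (1) can be sanity-checked numerically straight from the defining power series \( {J}_{\nu }\left( x\right) = \mathop{\sum }\limits_{{k \geq 0}}{\left( -1\right) }^{k}{\left( x/2\right) }^{\nu + {2k}}/\left( {k!\Gamma \left( {\nu + k + 1}\right) }\right) \) (and the same series without the sign for \( {I}_{\nu } \)). The following pure-Python sketch is mine; the helper names `J` and `I` are not from the text:

```python
import math

def J(nu, x, terms=40):
    """Truncated power series for J_nu(x)."""
    return sum((-1) ** k * (x / 2) ** (nu + 2 * k)
               / (math.factorial(k) * math.gamma(nu + k + 1)) for k in range(terms))

def I(nu, x, terms=40):
    """Same series without the (-1)^k factor: the modified Bessel function I_nu(x)."""
    return sum((x / 2) ** (nu + 2 * k)
               / (math.factorial(k) * math.gamma(nu + k + 1)) for k in range(terms))

x = 1.7
c = math.sqrt(2 / (math.pi * x))
assert abs(J(0.5, x) - c * math.sin(x)) < 1e-12    # J_{1/2}(x) = sqrt(2/(pi x)) sin x
assert abs(J(-0.5, x) - c * math.cos(x)) < 1e-12   # J_{-1/2}(x) = sqrt(2/(pi x)) cos x
assert abs(I(0.5, x) - c * math.sinh(x)) < 1e-12   # I_{1/2}(x) = sqrt(2/(pi x)) sinh x

# three-term recurrences of Proposition 9.8.4:
nu = 1.5
assert abs(J(nu - 1, x) + J(nu + 1, x) - (2 * nu / x) * J(nu, x)) < 1e-12
assert abs(I(nu - 1, x) - I(nu + 1, x) - (2 * nu / x) * I(nu, x)) < 1e-12
```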
The formulas for \( Y \) and \( K \) then follow from the definition. Finally, the assertions of (2) follow from (1) and the recurrences of Proposition 9.8.4. The details are left to the reader (Exercise 117). ## 9.8.2 Integral Representations and Applications Apart from the power series expansions around \( x = 0 \), which are readily found, the only results that we need are given in the following propositions. Proposition 9.8.6. We have the integral representations \[ {J}_{\nu }\left( x\right) = \frac{1}{\pi }{\int }_{0}^{\pi }\cos \left( {x\sin \left( t\right) - {\nu t}}\right) {dt} - \frac{\sin \left( {\pi \nu }\right) }{\pi }{\int }_{0}^{\infty }{e}^{-x\sinh \left( t\right) - {\nu t}}{dt} \] \[ {Y}_{\nu }\left( x\right) = \frac{1}{\pi }{\int }_{0}^{\pi }\sin \left( {x\sin \left( t\right) - {\nu t}}\right) {dt} - \frac{1}{\pi }{\int }_{0}^{\infty }{e}^{-x\sinh \left( t\right) }\left( {{e}^{\nu t} + \cos \left( {\pi \nu }\right) {e}^{-{\nu t}}}\right) {dt} \] \[ {I}_{\nu }\left( x\right) = \frac{1}{\pi }{\int }_{0}^{\pi }{e}^{x\cos \left( t\right) }\cos \left( {\nu t}\right) {dt} - \frac{\sin \left( {\nu \pi }\right) }{\pi }{\int }_{0}^{\infty }{e}^{-x\cosh \left( t\right) - {\nu t}}{dt} \] \[ {K}_{\nu }\left( x\right) = {\int }_{0}^{\infty }{e}^{-x\cosh \left( t\right) }\cosh \left( {\nu t}\right) {dt}. \] Proof. We first prove the formula for \( {J}_{\nu }\left( x\right) \) . By Proposition 9.8.1 we have \[ {J}_{\nu }\left( x\right) = \mathop{\sum }\limits_{{k \geq 0}}\frac{{\left( -1\right) }^{k}{\left( x/2\right) }^{\nu + {2k}}}{k!\Gamma \left( {\nu + 1 + k}\right) }. \] On the other hand, by Exercise 99, for all \( z \in \mathbb{C} \) and all \( \varepsilon > 0 \) we have \[ \frac{1}{\Gamma \left( z\right) } = \frac{1}{2i\pi }{\int }_{C}{t}^{-z}{e}^{t}{dt} \] where \( C \) is any contour coming from \( - \infty \), turning in the positive direction around 0, and going back to \( - \infty \) . 
Since the radius of convergence of the series is infinite, we deduce that \[ {J}_{\nu }\left( x\right) = \frac{{\left( x/2\right) }^{\nu }}{2i\pi }{\int }_{C}\mathop{\sum }\limits_{{k \geq 0}}\frac{{\left( -1\right) }^{k}{\left( x/2\right) }^{2k}{t}^{-\nu - k - 1}}{k!}{e}^{t}{dt} \] \[ = \frac{{\left( x/2\right) }^{\nu }}{2i\pi }{\int }_{C}{t}^{-\nu - 1}{e}^{t - {x}^{2}/\left( {4t}\right) }{dt} \] so setting \( t = \left( {x/2}\right) u \) we obtain \[ {J}_{\nu }\left( x\right) = \frac{1}{2i\pi }{\int }_{{C}^{\prime }}{u}^{-\nu - 1}{e}^{\left( {x/2}\right) \left( {u - 1/u}\right) }{du} \] for some other contour \( {C}^{\prime } \) of the same type. We now make the change of variable \( u = {e}^{w} \) . We choose as contour \( {C}_{1} \) the rectangular contour with vertices \( \infty - {i\pi }, - {i\pi },{i\pi },\infty + {i\pi } \) . It is clear that as \( w \) goes along this contour, \( u = {e}^{w} \) goes from \( - \infty \) to -1, around the trigonometric circle back to -1, and then returns to \( - \infty \), hence is (the limit of) a suitable contour \( C \) . Thus \[ {J}_{\nu }\left( x\right) = \frac{1}{2i\pi }{\int }_{{C}_{1}}{e}^{-{\nu w}}{e}^{x\sinh \left( w\right) }{dw} \] which gives the desired integral representations after splitting the contour \( {C}_{1} \) into its three sides and making the evident necessary changes of variable. 
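The representation of \( {J}_{\nu }\left( x\right) \) just derived can be compared against the power series numerically. The sketch below is mine (composite Simpson rule; since \( {e}^{-x\sinh \left( t\right) } \) decays doubly exponentially, the tail integral is truncated at \( t = {12} \)):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def J_series(nu, x, terms=40):
    return sum((-1) ** k * (x / 2) ** (nu + 2 * k)
               / (math.factorial(k) * math.gamma(nu + k + 1)) for k in range(terms))

def J_integral(nu, x):
    """The representation of Proposition 9.8.6, with the tail truncated at t = 12."""
    osc = simpson(lambda t: math.cos(x * math.sin(t) - nu * t), 0.0, math.pi)
    tail = simpson(lambda t: math.exp(-x * math.sinh(t) - nu * t), 0.0, 12.0)
    return osc / math.pi - (math.sin(math.pi * nu) / math.pi) * tail

for nu, x in [(0.5, 1.0), (1.5, 2.0)]:
    assert abs(J_integral(nu, x) - J_series(nu, x)) < 1e-6
```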
It is now immediate to deduce the integral for \( {Y}_{\nu }\left( x\right) \) from the definition: we have \[ \sin \left( {\nu \pi }\right) {Y}_{\nu }\left( x\right) = \cos \left( {\nu \pi }\right) {J}_{\nu }\left( x\right) - {J}_{-\nu }\left( x\right) = \frac{{I}_{1}}{\pi } - \frac{\sin \left( {\nu \pi }\right) }{\pi }{I}_{2}, \] where \[ {I}_{1} = \cos \left( {\nu \pi }\right) {\int }_{0}^{\pi }\cos \left( {x\sin \left( t\right) - {\nu t}}\right) {dt} - {\int }_{0}^{\pi }\cos \left( {x\sin \left( t\right) + {\nu t}}\right) {dt}\;\text{ and } \] \[ {I}_{2} = {\int }_{0}^{\infty }{e}^{-x\sinh \left( t\right) }\left( {\cos \left( {\nu \pi }\right) {e}^{-{\nu t}} + {e}^{\nu t}}\right) {dt}. \] Now since \[ \cos \left( {\nu \pi }\right) \cos \left( {x\sin \left( t\right) - {\nu t}}\right) = \cos \left( {x\sin \left( t\right) + \nu \left( {\pi - t}\right) }\right) + \sin \left( {\nu \pi }\right) \sin \left( {x\sin \left( t\right) - {\nu t}}\right) \] and \[ {\int }_{0}^{\pi }\cos \left( {x\sin \left( t\right) + \nu \left( {\pi - t}\right) }\right) {dt} = {\int }_{0}^{\pi }\cos \left( {x\sin \left( t\right) + {\nu t}}\right) {dt} \] we have \[ {I}_{1} = \sin \left( {\nu \pi }\right) {\int }_{0}^{\pi }\sin \left( {x\sin \left( t\right) - {\nu t}}\right) {dt} \] Combining this with the formula for \( {I}_{2} \) we obtain the integral representation of \( {Y}_{\nu } \) for \( \nu \notin \mathbb{Z} \), hence for all \( \nu \) by continuity. For \( {I}_{\nu }\left( x\right) \) the proof is identical to that of \( {J}_{\nu }\left( x\right) \) since the series expansion is obtained by removing the factor \( {\left( -1\right) }^{k} \), so that \[ {I}_{
18_Algebra Chapter 0
Definition 2.13
Definition 2.13. An \( R \) -module \( N \) is flat if the functor \( \_ { \otimes }_{R}N \) is exact. In the exercises the reader will explore easy properties of this notion and useful equivalent formulations in particular cases. We have already checked that \( \mathbb{Z}/2\mathbb{Z} \) is not a flat \( \mathbb{Z} \) -module, while free modules are flat. Flat modules are hugely important: in algebraic geometry, 'flatness' is the condition expressing the fact that the objects in a family vary 'continuously', preserving certain key invariants. Example 2.14. Consider the affine algebraic set \( \mathcal{V}\left( {xy}\right) \) in the plane \( {\mathbb{A}}^{2} \) (over a fixed field \( k \) ) and the ’projection on the first coordinate’ \( \mathcal{V}\left( {xy}\right) \rightarrow {\mathbb{A}}^{1},\left( {x, y}\right) \mapsto x \) : ![23387543-548b-40c2-8595-200756212a0f_531_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_531_0.jpg) In terms of coordinate rings (cf. §VII.2.3), this map corresponds to the homomorphism of \( k \) -algebras: \[ k\left\lbrack x\right\rbrack \rightarrow \frac{k\left\lbrack {x, y}\right\rbrack }{\left( xy\right) } \] defined by mapping \( x \) to the coset \( x + \left( {xy}\right) \) (this will be completely clear to the reader who has worked out Exercise VII.2.12). 
This homomorphism defines a \( k\left\lbrack x\right\rbrack \) - module structure on \( k\left\lbrack {x, y}\right\rbrack /\left( {xy}\right) \), and we can wonder whether the latter is flat in the sense of Definition 2.13. From the geometric point of view, clearly something ’not flat’ is going on over the point \( x = 0 \), so we consider the inclusion of the ideal \( \left( x\right) \) in \( k\left\lbrack x\right\rbrack \) : \[ k\left\lbrack x\right\rbrack \overset{\cdot x}{ \hookrightarrow }k\left\lbrack x\right\rbrack \] Tensoring by \( k\left\lbrack {x, y}\right\rbrack /\left( {xy}\right) \), we obtain \[ \frac{k\left\lbrack {x, y}\right\rbrack }{\left( xy\right) }\overset{\cdot x}{ \rightarrow }\frac{k\left\lbrack {x, y}\right\rbrack }{\left( xy\right) } \] which is not injective, because it sends to zero the nonzero coset \( y + \left( {xy}\right) \) . Therefore \( k\left\lbrack {x, y}\right\rbrack /\left( {xy}\right) \) is not flat as a \( k\left\lbrack x\right\rbrack \) -module. The term flat was inspired precisely by such 'geometric' examples. 2.4. The Tor functors. The ’failure of exactness’ of the functor \( \_ { \otimes }_{R}N \) is measured by another functor \( R \) -Mod \( \rightarrow R \) -Mod, called \( {\operatorname{Tor}}_{1}^{R}\left( {\_, N}\right) \) : if \( N \) is flat (for example, if it is free), then \( {\operatorname{Tor}}_{1}^{R}\left( {M, N}\right) = 0 \) for all modules \( M \) . 
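The computation in Example 2.14 can be simulated mechanically: modulo \( \left( {xy}\right) \) every monomial divisible by \( {xy} \) vanishes, so multiplication by \( x \) annihilates \( y \). A toy Python model (the encoding and the names are mine, not the text's):

```python
def reduce_mod_xy(poly):
    """A polynomial in k[x,y]/(xy), stored as {(i, j): coeff} for x^i y^j.
    Any monomial with both i > 0 and j > 0 is divisible by xy, hence zero."""
    return {m: c for m, c in poly.items() if not (m[0] > 0 and m[1] > 0) and c != 0}

def mul_x(poly):
    """Multiplication by the coset x + (xy) in the quotient ring."""
    return reduce_mod_xy({(i + 1, j): c for (i, j), c in poly.items()})

y = {(0, 1): 1}
assert reduce_mod_xy(y) == y   # y is a nonzero coset in k[x,y]/(xy)
assert mul_x(y) == {}          # but x*y = 0: multiplication by x is not injective,
                               # so the tensored inclusion fails to stay injective
```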
In fact (amazingly) if \[ 0 \rightarrow A \rightarrow B \rightarrow C \rightarrow 0 \] is an exact sequence of \( R \) -modules, one obtains a new exact sequence after tensoring by any \( N \) : \[ {\operatorname{Tor}}_{1}^{R}\left( {C, N}\right) \rightarrow A{ \otimes }_{R}N \rightarrow B{ \otimes }_{R}N \rightarrow C{ \otimes }_{R}N \rightarrow 0, \] so if \( {\operatorname{Tor}}_{1}^{R}\left( {C, N}\right) = 0 \), then the module on the left vanishes; thus every short exact sequence ending in \( C \) remains exact after tensoring by \( N \) in this case. In fact (astonishingly) for all \( N \) one can continue this sequence with more Tor-modules, obtaining a longer exact complex: \[ {\operatorname{Tor}}_{1}^{R}\left( {A, N}\right) \rightarrow {\operatorname{Tor}}_{1}^{R}\left( {B, N}\right) \rightarrow {\operatorname{Tor}}_{1}^{R}\left( {C, N}\right) \rightarrow A{ \otimes }_{R}N \rightarrow B{ \otimes }_{R}N \rightarrow C{ \otimes }_{R}N \rightarrow 0. \] This is not the end of the story: the complex may be continued even further by invoking new functors \( {\operatorname{Tor}}_{2}^{R}\left( {\_, N}\right) ,{\operatorname{Tor}}_{3}^{R}\left( {\_, N}\right) \), etc. These are the derived functors of tensor. To 'compute' these functors, one may apply the following procedure: given an \( R \) -module \( M \), find a free resolution (§VI.4.2) \[ \cdots \rightarrow {R}^{\oplus {S}_{2}} \rightarrow {R}^{\oplus {S}_{1}} \rightarrow {R}^{\oplus {S}_{0}} \rightarrow M \rightarrow 0 \] throw \( M \) away, and tensor the free part by \( N \), obtaining a complex \( {M}_{ \bullet }{ \otimes }_{R}N \) : \[ \cdots \rightarrow {N}^{\oplus {S}_{2}} \rightarrow {N}^{\oplus {S}_{1}} \rightarrow {N}^{\oplus {S}_{0}} \rightarrow 0 \] (recall again that tensor commutes with colimits, hence with direct sums, therefore \( {R}^{\oplus m}{ \otimes }_{R}N \cong {N}^{\oplus m} \) ); then take the homology of this complex (cf. §III.3). 
Astoundingly, this will not depend (up to isomorphism) on the chosen free resolution, so we can define \[ {\operatorname{Tor}}_{i}^{R}\left( {M, N}\right) \mathrel{\text{:=}} {H}_{i}\left( {{M}_{ \bullet } \otimes N}\right) . \] For example, according to this definition \( {\operatorname{Tor}}_{0}^{R}\left( {M, N}\right) \cong M{ \otimes }_{R}N \) (Exercise 2.14), and \( {\operatorname{Tor}}_{i}^{R}\left( {M, N}\right) = 0 \) for all \( i > 0 \) and all \( M \) if \( N \) is flat (because then tensoring by \( N \) is an exact functor, so tensoring the resolution of \( M \) returns an exact sequence, thus with no homology). In fact, this proves a remarkable property of the Tor functors: if \( {\operatorname{Tor}}_{1}^{R}\left( {M, N}\right) = 0 \) for all \( M \), then \( {\operatorname{Tor}}_{i}^{R}\left( {M, N}\right) = 0 \) for all \( i > 0 \) for all modules \( M \) . Indeed, \( N \) is then flat. At this point you may feel that something is a little out of balance: why focus on the functor \( \_ { \otimes }_{R}N \), rather than \( M{ \otimes }_{R}\_ \) ? Since \( M{ \otimes }_{R}N \) is canonically isomorphic to \( N{ \otimes }_{R}M \) (in the commutative case; cf. Example 2.2), we could expect the same to apply to every \( {\operatorname{Tor}}_{i}^{R} : {\operatorname{Tor}}_{i}^{R}\left( {M, N}\right) \) ought to be canonically isomorphic to \( {\operatorname{Tor}}_{i}^{R}\left( {N, M}\right) \) for all \( i \) . Equivalently, we should be able to compute \( {\operatorname{Tor}}_{i}^{R}\left( {M, N}\right) \) as the homology of \( M{ \otimes }_{R}{N}_{ \bullet } \), where \( {N}_{ \bullet } \) is a free resolution of \( N \) . This is indeed the case. In due time (§IX.7 and 8) we will prove this and all the other wonderful facts I have stated in this subsection. 
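The recipe above can be carried out completely for \( R = \mathbb{Z} \) : tensoring the resolution \( 0 \rightarrow \mathbb{Z}\overset{\cdot n}{ \rightarrow }\mathbb{Z} \rightarrow \mathbb{Z}/n\mathbb{Z} \rightarrow 0 \) with \( \mathbb{Z}/m\mathbb{Z} \) leaves the two-term complex \( \mathbb{Z}/m\mathbb{Z}\overset{\cdot n}{ \rightarrow }\mathbb{Z}/m\mathbb{Z} \), whose homology gives \( {\operatorname{Tor}}_{1} \) as the kernel and \( {\operatorname{Tor}}_{0} \) as the cokernel, both cyclic of order \( \gcd \left( {n, m}\right) \) . A brute-force check (the function name is mine):

```python
from math import gcd

def tor_Z(n, m):
    """Sizes of Tor_1 and Tor_0 of (Z/n, Z/m) over Z: kernel and cokernel of
    multiplication by n on Z/m, i.e. the homology of the tensored resolution."""
    image = {(n * a) % m for a in range(m)}
    kernel = [a for a in range(m) if (n * a) % m == 0]
    return len(kernel), m // len(image)   # (|Tor_1|, |Tor_0|)

for n, m in [(4, 6), (2, 3), (6, 10), (12, 8)]:
    assert tor_Z(n, m) == (gcd(n, m), gcd(n, m))
```

In particular \( {\operatorname{Tor}}_{0} \) recovers \( \mathbb{Z}/n\mathbb{Z}{ \otimes }_{\mathbb{Z}}\mathbb{Z}/m\mathbb{Z} \cong \mathbb{Z}/\gcd \left( {n, m}\right) \mathbb{Z} \), consistent with \( {\operatorname{Tor}}_{0}^{R}\left( {M, N}\right) \cong M{ \otimes }_{R}N \).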
For now, I am asking the reader to believe that the Tor functors can be defined as I have indicated, and the facts reviewed here will suffice for simple computations (see for example Exercises 2.15 and 2.17) and applications. In fact, we know enough about finitely generated modules over PIDs to get a preliminary sense of what is involved in proving such general facts. Recall that we have been able to establish that every finitely generated module \( M \) over a PID \( R \) has a free resolution of length 1 : \[ 0 \rightarrow {R}^{\oplus {m}_{1}} \rightarrow {R}^{\oplus {m}_{0}} \rightarrow M \rightarrow 0. \] This property characterizes PIDs (Proposition VI.5.4). If \[ 0 \rightarrow A \rightarrow B \rightarrow C \rightarrow 0 \] is an exact sequence of \( R \) -modules, it is not hard to see that one can produce 'compatible' resolutions, in the sense that the rows of the following diagram will be exact as well as the columns: ![23387543-548b-40c2-8595-200756212a0f_533_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_533_0.jpg) (This will be proven in gory detail in §IX.7.) Tensor the two ’free’ rows by \( N \) ; they remain exact (tensoring commutes with direct sums): ![23387543-548b-40c2-8595-200756212a0f_533_1.jpg](images/23387543-548b-40c2-8595-200756212a0f_533_1.jpg) Now the columns (preceded and followed by 0 ) are precisely the complexes \( {A}_{ \bullet }{ \otimes }_{R}N \) , \( {B}_{ \bullet }{ \otimes }_{R}N,{C}_{ \bullet }{ \otimes }_{R}N \) whose homology ’computes’ the Tor modules. Applying the snake lemma (Lemma III.7.8, cf. 
Remark III.7.10) gives the exact sequence ![23387543-548b-40c2-8595-200756212a0f_533_2.jpg](images/23387543-548b-40c2-8595-200756212a0f_533_2.jpg) which is precisely the sequence of Tor modules conjured up above, \[ 0 \rightarrow {\operatorname{Tor}}_{1}^{R}\left( {A, N}\right) \rightarrow {\operatorname{Tor}}_{1}^{R}\left( {B, N}\right) \rightarrow {\operatorname{Tor}}_{1}^{R}\left( {C, N}\right) \rightarrow A{ \otimes }_{R}N \rightarrow B{ \otimes }_{R}N \rightarrow C{ \otimes }_{R}N \rightarrow 0, \] with a 0 on the left for good measure (due to the fact that \( {\operatorname{Tor}}_{2}^{R} \) vanishes if \( R \) is a PID; cf. Exercise 2.17). Note that \( {\operatorname{Tor}}_{i}^{k} \) vanishes for \( i > 0 \) if \( k \) is a field, as vector spaces are flat, and \( {\operatorname{Tor}}_{i}^{R} \) vanishes for \( i > 1 \) if \( R \) is a PID (Exercise 2.17). These facts are not surprising, in view of the procedure described above for computing Tor and of the considerations at the end of §VI.5.2: a bound on the length of free resolutions for modules over a ring \( R \) will imply a bound on nonzero Tor’s. For particularly nice rings (such as the rings corresponding to 'smooth' points in algebraic geometry) this bound agrees with the Krull dimension; but precise results of this sort are beyond the scope of this book. ## Exercises \( R \) denotes a fixed commutative ring. 2.1. \( \vartriangleright \) Let \( M, N \) be \( R \) -modules, and assume that \( N \) is cyclic. Prove that every element of \( M{ \otimes }_{R}N \) may be written as a pure tensor. [9.1] 2.2. \( \vartriangleright \) Prove ’by hand’ (that is, without appealing to the right-exactness of tensor) that \( \mathbb{Z}/n\mathbb{Z}{ \otimes }_{\mathbb{Z}}\mathbb{Z}/m\mathbb{Z} \cong 0 \) if \( m, n \) a
1167_(GTM73)Algebra
Definition 1.1
Definition 1.1. A (left) module A over a ring \( \mathrm{R} \) is simple (or irreducible) provided \( \mathrm{{RA}} \neq 0 \) and \( \mathrm{A} \) has no proper submodules. A ring \( \mathrm{R} \) is simple if \( {\mathrm{R}}^{2} \neq 0 \) and \( \mathrm{R} \) has no proper (two-sided) ideals. REMARKS. (i) Every simple module [ring] is nonzero. (ii) Every simple module over a ring with identity is unitary (Exercise IV.1.17). A unitary module \( A \) over a ring \( R \) with identity has \( {RA} \neq 0 \), whence \( A \) is simple if and only if \( A \) has no proper submodules. (iii) Every simple module \( A \) is cyclic; in fact, \( A = {Ra} \) for every nonzero \( a \in A \) . [Proof: both \( {Ra}\left( {a\varepsilon A}\right) \) and \( B = \{ c \in A \mid {Rc} = 0\} \) are submodules of \( A \), whence each is either 0 or \( A \) by simplicity. But \( {RA} \neq 0 \) implies \( B \neq A \) . Consequently \( B = 0 \) , whence \( {Ra} = A \) for all nonzero \( a \in A \) .] However a cyclic module need not be simple (for example, the cyclic \( \mathbf{Z} \) -module \( {Z}_{6} \) ). (iv) The definitions of "simple" for groups, modules, and rings can be subsumed into one general definition, which might be roughly stated as: an algebraic object \( C \) that is nontrivial in some reasonable sense (for example, \( {RA} \neq 0 \) or \( {R}^{2} \neq 0 \) ) is simple, provided that every homomorphism with domain \( C \) has kernel 0 or \( C \) . The point here is that the absence of nontrivial kernels is equivalent to the absence of proper normal subgroups of a group or proper submodules of a module or proper ideals of a ring as the case may be. EXAMPLE. Every division ring is a simple ring and a simple \( D \) -module (see the Remarks preceding Theorem III.2.2). EXAMPLE. Let \( D \) be a division ring and let \( R = {\operatorname{Mat}}_{n}D\left( {n > 1}\right) \) . 
For each \( k\left( {1 \leq k \leq n}\right) ,{I}_{k} = \left\{ {\left( {a}_{ij}\right) {\varepsilon R} \mid {a}_{ij} = 0\text{ for }j \neq k}\right\} \) is a simple left \( R \) -module (see the proof of Corollary VIII.1.12). EXAMPLE. The preceding example shows that \( {\operatorname{Mat}}_{n}D \) ( \( D \) a division ring) is not a simple left module over itself if \( n > 1 \) . However, the ring \( {\operatorname{Mat}}_{n}D\left( {n \geq 1}\right) \) is simple by Exercise III.2.9. Thus by Theorem VII.1.4 the endomorphism ring of any finite dimensional vector space over a division ring is a simple ring. EXAMPLE. A left ideal \( I \) of a ring \( R \) is said to be a minimal left ideal if \( I \neq 0 \) and for every left ideal \( J \) such that \( 0 \subset J \subset I \), either \( J = 0 \) or \( J = I \) . A left ideal \( I \) of \( R \) such that \( {RI} \neq 0 \) is a simple left \( R \) -module if and only if \( I \) is a minimal left ideal. EXAMPLE. Let \( F \) be a field of characteristic zero and \( R \) the additive group of polynomials \( F\left\lbrack {x, y}\right\rbrack \) . Define multiplication in \( R \) by requiring that multiplication be distributive and that \( {xy} = {yx} + 1 \) and \( {ax} = {xa},{ay} = {ya} \) for \( {a\varepsilon F} \) . Then \( R \) is a well-defined simple ring that has no zero divisors and is not a division ring (Exercise 1). Let \( A = {Ra} \) be a cyclic \( R \) -module. The map \( \theta : R \rightarrow A \) defined by \( r \mapsto {ra} \) is an \( R \) -module epimorphism whose kernel \( I \) is a left ideal (submodule) of \( R \) (Theorem IV.1.5). By the First Isomorphism Theorem IV.1.7 \( R/I \) is isomorphic to \( A \) . By Theorem IV.1.10 every submodule of \( R/I \) is of the form \( J/I \), where \( J \) is a left ideal of \( R \) that contains \( I \) . Consequently \( R/I \) (and hence \( A \) ) has no proper submodules if and only if \( I \) is a maximal left ideal of \( R \) . 
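Two of the examples above lend themselves to brute-force verification: the cyclic but non-simple \( \mathbf{Z} \) -module \( {Z}_{6} \) of Remark (iii), and the fact that \( V = {\mathbb{F}}_{2}^{2} \) has no proper submodules over \( {\operatorname{Mat}}_{2}\left( {\mathbb{F}}_{2}\right) \) because \( {Ru} = V \) for every nonzero \( u \) (a small finite stand-in for the division-ring setting; all names in this sketch are mine):

```python
from itertools import product

def submodules(n):
    """Submodules (= subgroups) of Z/n: one, {0, d, 2d, ...}, per divisor d of n."""
    return [set(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

proper = [S for S in submodules(6) if S != {0} and len(S) != 6]
assert proper == [{0, 2, 4}, {0, 3}]   # Z_6 is cyclic but not simple
assert [S for S in submodules(5) if S != {0} and len(S) != 5] == []   # Z_5 is simple

# V = F_2^2 as a left module over R = Mat_2(F_2): Ru = V for every nonzero u.
def matvec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) % 2 for i in range(2))

V = set(product((0, 1), repeat=2))
mats = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]
for u in V - {(0, 0)}:
    assert {matvec(A, u) for A in mats} == V   # so V has no proper submodules
```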
Since every simple \( R \) -module is cyclic by Remark (iii) above, every simple \( R \) -module is isomorphic to \( R/I \) for some maximal left ideal \( I \) . Conversely, if \( I \) is a maximal left ideal of \( R, R/I \) will be simple provided \( R\left( {R/I}\right) \neq 0 \) . A condition that guarantees that \( R\left( {R/I}\right) \neq 0 \) is given by Definition 1.2. A left ideal I in a ring \( \mathrm{R} \) is regular (or modular) if there exists \( \mathrm{e}\varepsilon \mathrm{R} \) such that \( \mathrm{r} - \mathrm{{re}} \in \mathrm{I} \) for every \( \mathrm{r} \in \mathrm{R} \) . Similarly, a right ideal \( \mathrm{J} \) is regular if there exists \( \mathrm{e}\varepsilon \mathrm{R} \) such that \( \mathrm{r} - \mathrm{{er}}\varepsilon \mathrm{J} \) for every \( \mathrm{r}\varepsilon \mathrm{R} \) . REMARK. Every left ideal in a ring \( R \) with identity is regular (let \( e = {1}_{R} \) ). Theorem 1.3. A left module \( \mathrm{A} \) over a ring \( \mathrm{R} \) is simple if and only if \( \mathrm{A} \) is isomorphic to \( \mathrm{R}/\mathrm{I} \) for some regular maximal left ideal \( \mathrm{I} \) . REMARKS. If \( R \) has an identity, the theorem is an immediate consequence of the discussion above. The theorem is true if "left" is replaced by "right" throughout. PROOF OF 1.3. The discussion preceding Definition 1.2 shows that if \( A \) is simple, then \( A = {Ra} \cong R/I \) where the maximal left ideal \( I \) is the kernel of \( \theta \) . Since \( A = {Ra}, a = {ea} \) for some \( {e\varepsilon R} \) . Consequently, for any \( {r\varepsilon R},{ra} = {rea} \) or \( \left( {r - {re}}\right) a = 0 \), whence \( r - {re\varepsilon }\operatorname{Ker}\theta = I \) . Therefore \( I \) is regular. Conversely let \( I \) be a regular maximal left ideal of \( R \) such that \( A \cong R/I \) . In view of the discussion preceding Definition 1.2 it suffices to prove that \( R\left( {R/I}\right) \neq 0 \) . 
If this is not the case, then for all \( {r\varepsilon R} \) we have \( r\left( {e + I}\right) = I \), whence \( {re\varepsilon I} \) . Since \( r - {re\varepsilon I} \), we have \( r \in I \) . Thus \( R = I \), contradicting the maximality of \( I \) . Having developed the necessary facts about simplicity we now turn to primitivity. In order to define primitive rings we need: Theorem 1.4. Let \( \mathrm{B} \) be a subset of a left module \( \mathrm{A} \) over a ring \( \mathrm{R} \) . Then \( \mathcal{Q}\left( \mathrm{B}\right) = \{ \mathrm{r}\varepsilon \mathrm{R} \mid \mathrm{{rb}} = 0 \) for all \( \mathrm{b}\varepsilon \mathrm{B}\} \) is a left ideal of \( \mathrm{R} \) . If \( \mathrm{B} \) is a submodule of \( \mathrm{A} \), then \( \mathcal{Q}\left( \mathrm{B}\right) \) is an ideal. \( \mathcal{Q}\left( B\right) \) is called the (left) annihilator of \( B \) . The right annihilator of a right module is defined analogously. SKETCH OF PROOF OF 1.4. It is easy to verify that \( \mathcal{Q}\left( B\right) \) is a left ideal. Let \( B \) be a submodule. If \( r \in R \) and \( s \in \mathcal{Q}\left( B\right) \), then for every \( b \in B \) we have \( \left( {sr}\right) b = s\left( {rb}\right) = 0 \) since \( {rb} \in B \) . Consequently, \( {sr} \in \mathcal{Q}\left( B\right) \), whence \( \mathcal{Q}\left( B\right) \) is also a right ideal. Definition 1.5. A (left) module A is faithful if its (left) annihilator \( \mathcal{Q}\left( \mathrm{A}\right) \) is \( 0 \) . A ring \( \mathrm{R} \) is (left) primitive if there exists a simple faithful left R-module. Right primitive rings are defined analogously. There do exist right primitive rings that are not left primitive (see G. Bergman [58]). Hereafter "primitive" will always mean "left primitive." However, all results proved for left primitive rings are true, mutatis mutandis, for right primitive rings. EXAMPLE. 
Let \( V \) be a (possibly infinite dimensional) vector space over a division ring \( D \) and let \( R \) be the endomorphism ring \( {\operatorname{Hom}}_{D}\left( {V, V}\right) \) of \( V \) . Recall that \( V \) is a left \( R \) -module with \( {\theta v} = \theta \left( v\right) \) for \( v \in V,\theta \in R \) (Exercise IV.1.7). If \( u \) is a nonzero vector in \( V \), then there is a basis of \( V \) that contains \( u \) (Theorem IV.2.4). If \( v \in V \), then there exists \( {\theta }_{v} \in R \) such that \( {\theta }_{v}u = v \) (just define \( {\theta }_{v}\left( u\right) = v \) and \( {\theta }_{v}\left( w\right) = 0 \) for all other basis elements \( w \) ; then \( {\theta }_{v}{\varepsilon R} \) by Theorems IV.2.1 and IV.2.4). Therefore \( {Ru} = V \) for any nonzero \( u \in V \), whence \( V \) has no proper \( R \) -submodules. Since \( R \) has an identity, \( {RV} \neq 0 \) . Thus \( V \) is a simple \( R \) -module. If \( {\theta V} = 0\left( {\theta \in R}\right) \), then clearly \( \theta = 0 \), whence \( \mathcal{Q}\left( V\right) = 0 \) and \( V \) is a faithful \( R \) -module. Therefore, \( R \) is primitive. If \( V \) is finite dimensional over \( D \), then \( R \) is simple by Exercise III.2.9 and Theorem VII.1.4. But if \( V \) is infinite dimensional over \( D \), then \( R \) is not simple: the set of all \( \theta \in R \) such that \( \operatorname{Im}\theta \) is a finite-dimensional subspace of \( V \) is a proper ideal of \( R \) (Exercise 3). The next two results provide other examples of primitive rings. Proposition 1.6. A simple ring \( \mathrm{R} \) with identity is primitive. PROOF. \( R \) contains a maximal left ideal \( I \) by Theorem III.2.18. Since \( R \) has an identity \( I \) is regular, whence \( R/I \) is a simple \( R \) -module by Theorem 1.3. Since \( \mathcal{Q}\left( {R/I}\right) \) is an ideal of \( R \) that does not contain \( {1}_{R} \), \( \mathcal{Q}\left( {R/I}\right) = 0 \) by simplicity. 
Therefore \( R/I \) is faithful. Proposition 1.7. A commutative ring \( \mathrm{R} \) is primitive if and only if \( \mathrm{R} \) is a field. PROOF. A field is primitive by Proposition 1.6. Conversely, let \( A \) be a faithful simple left \( R \) -module. Then \( A \cong R/I \) for some regular maximal left
18_Algebra Chapter 0
Definition 5.4
Definition 5.4. Let \( M \) be an \( R \) -module. The dual \( {M}^{ \vee } \) of \( M \) is the \( R \) -module \( {\operatorname{Hom}}_{R}\left( {M, R}\right) \) . One use of the dual module is to translate Hom computations into \( \otimes \) computations, at least in special cases. Proposition 5.5. Let \( M \) be any \( R \) -module, and let \( F \) be a free \( R \) -module of finite rank. Then \[ {\operatorname{Hom}}_{R}\left( {M, F}\right) \cong {M}^{ \vee }{ \otimes }_{R}F. \] Proof. By hypothesis \( F \cong {R}^{\oplus n} \cong {R}^{n} \) ; hence \( {\operatorname{Hom}}_{R}\left( {M, F}\right) \cong {\operatorname{Hom}}_{R}\left( {M,{R}^{n}}\right) \cong {\operatorname{Hom}}_{R}{\left( M, R\right) }^{n} \cong {\operatorname{Hom}}_{R}\left( {M, R}\right) { \otimes }_{R}{R}^{n} \cong {M}^{ \vee }{ \otimes }_{R}F, \) by Corollary 5.3. In fact, note that for all \( R \) -modules \( M, N \) there is a natural ’evaluation’ map \[ \epsilon : {M}^{ \vee }{ \otimes }_{R}N \rightarrow {\operatorname{Hom}}_{R}\left( {M, N}\right) \] defined on pure tensors by mapping \( f \otimes n \) (for \( f \in {\operatorname{Hom}}_{R}\left( {M, R}\right) \) and \( n \in N \) ) to the \( R \) -module homomorphism \( M \rightarrow N \) given by \[ m \mapsto f\left( m\right) n\text{.} \] The reader will check (Exercise 5.5) that \( \epsilon \) is an isomorphism if \( N \) is free of finite rank; this gives a more precise version of Proposition 5.5. 5.3. Duals of free modules. By definition, the ’duality’ functor \( M \mapsto {M}^{ \vee } \) is a particular case of the contravariant flavor of Hom; hence it is itself contravariant and commutes with limits. Here is a first, immediate consequence: Lemma 5.6. For every family \( \left\{ {M}_{i}\right\} \) of R-modules, \( {\left( {\bigoplus }_{i}{M}_{i}\right) }^{ \vee } \cong \mathop{\prod }\limits_{i}{M}_{i}^{ \vee } \) . 
Specializing to the case in which \( {M}_{i} = R \) for all \( i \) determines the dual of a free module: Corollary 5.7. The dual of a free module is isomorphic to a product of copies of \( R \) : \[ {\left( {R}^{\oplus S}\right) }^{ \vee } \cong {R}^{S} \] In particular, \( {\left( {R}^{n}\right) }^{ \vee } \cong {R}^{n} \) : if \( F \) is a free \( R \) -module of finite rank, then \( {F}^{ \vee } \cong F \) . Proof. This follows from Lemma 5.6 and the fact that \( {R}^{ \vee } = {\operatorname{Hom}}_{R}\left( {R, R}\right) \) is isomorphic to \( R \) . Carefully note the magically disappearing \( \oplus \) from the left-hand side to the right-hand side in the statement of Corollary 5.7: direct sums become direct products through the contravariant Hom (Corollary 5.3). However, a direct product of finitely many modules happens to be isomorphic to their direct sum (as we have known for a long time: Proposition III.6.1); hence the second part of the statement. Remark 5.8. If \( S \) is infinite, \( {R}^{S} \) is ’much larger’ than \( {R}^{\oplus S} \) : the first module consists of all functions \( S \rightarrow R \), while the second only retains those that are zero for all but finitely many \( s \in S \) . Remark 5.9. Note that the set \( S \) is not determined by the isomorphism class of a free module \( F = {R}^{\oplus S} \) . We proved in Corollary VI.1.11 (if \( R \) is an integral domain) that the cardinality of \( S \) is determined by the isomorphism class of \( F \), but of course \( S \) itself, say as a subset of \( F \), is not. In fact, recall (§VI.1.2) that the choice of a specific isomorphism \( F \cong {R}^{\oplus S} \) is equivalent to the choice of a basis of \( F \) . Now, the isomorphism appearing in Corollary 5.7 requires knowledge of \( S \) ; indeed, we will verify in a moment (Example 5.11) that this isomorphism, even in the finite case, does depend on the choice of a basis. It is not canonical! 
The reader should endeavor to remember this slogan: a finite-rank free module (for example, a finite-dimensional vector space) is isomorphic to its dual, but not canonically. This remark can be clarified by means of the following notion. Definition 5.10. Consider the standard basis \( \left( {{\mathbf{e}}_{1},\ldots ,{\mathbf{e}}_{n}}\right) \) of \( {R}^{n} \) . The dual basis of \( {\left( {R}^{n}\right) }^{ \vee } \cong {R}^{n} \) consists of \( \left( {{\check{\mathbf{e}}}_{1},\ldots ,{\check{\mathbf{e}}}_{n}}\right) \), where \( {\check{\mathbf{e}}}_{i} \in {\left( {R}^{n}\right) }^{ \vee } = {\operatorname{Hom}}_{R}\left( {{R}^{n}, R}\right) \) is determined by \[ {\check{\mathbf{e}}}_{i}\left( {\mathbf{e}}_{j}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }j = i \\ 0 & \text{ otherwise. } \end{array}\right. \] The isomorphism \( {\left( {R}^{n}\right) }^{ \vee } \cong {R}^{n} \) of Corollary 5.7 is obtained precisely by matching \( {\mathbf{e}}_{i} \) and \( {\check{\mathbf{e}}}_{i} \) : this will be crystal clear to the reader who works out Exercise 5.7. In particular, the vectors \( {\check{\mathbf{e}}}_{i} \) do form a basis of \( {\left( {R}^{n}\right) }^{ \vee } \) . Example 5.11. To see that these isomorphisms do depend on the choice of the basis, consider the standard basis \( \left( {{\mathbf{e}}_{1},{\mathbf{e}}_{2}}\right) \) of \( {R}^{2} \) and the corresponding dual basis \( \left( {{\check{\mathbf{e}}}_{1},{\check{\mathbf{e}}}_{2}}\right) \) of \( {\left( {R}^{2}\right) }^{ \vee } \), and then choose a different basis \( \left( {{\mathbf{e}}_{1}^{\prime },{\mathbf{e}}_{2}^{\prime }}\right) \) for \( {R}^{2} \), where \( {\mathbf{e}}_{1}^{\prime } = {\mathbf{e}}_{1} \) and \( {\mathbf{e}}_{2}^{\prime } = {\mathbf{e}}_{1} + {\mathbf{e}}_{2} \), and let \( \left( {{\check{\mathbf{e}}}_{1}^{\prime },{\check{\mathbf{e}}}_{2}^{\prime }}\right) \) be the corresponding dual basis.
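Before checking this by hand, note that the computation can be mechanized: writing functionals on \( {R}^{2} \) as row vectors, the dual basis of the basis given by the columns of an invertible matrix \( B \) consists of the rows of \( {B}^{-1} \) . A minimal Python sketch (the helper `inv2` and the matrix `B` are illustrative, not from the text; integer arithmetic suffices here since \( \det B = 1 \) ):

```python
# Dual basis in R^2: a functional is a row vector (a, b) acting on a column
# vector (x, y) by a*x + b*y.  If the columns of B are a basis, the rows of
# B^{-1} form the dual basis, since (row i of B^{-1}) @ (column j of B) = delta_ij.

def inv2(B):
    """Inverse of a 2x2 integer matrix with determinant +-1."""
    (a, b), (c, d) = B
    det = a * d - b * c
    assert det in (1, -1)
    return [[d // det, -b // det], [-c // det, a // det]]

# Columns of B are e1' = e1 and e2' = e1 + e2, as in Example 5.11.
B = [[1, 1],
     [0, 1]]
dual = inv2(B)

print(dual[0])            # the functional corresponding to the first dual vector
print(dual[0] == [1, 0])  # False: it differs from the standard dual of e1
```

The first row of `dual` is \( \left( {1, - 1}\right) \), not \( \left( {1,0}\right) \) : the dual basis has changed even though the first basis vector has not, matching the hand computation that follows.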
By definition \[ {\check{\mathbf{e}}}_{1}^{\prime }\left( {\mathbf{e}}_{2}^{\prime }\right) = 0 \] while \[ {\check{\mathbf{e}}}_{1}\left( {\mathbf{e}}_{2}^{\prime }\right) = {\check{\mathbf{e}}}_{1}\left( {{\mathbf{e}}_{1} + {\mathbf{e}}_{2}}\right) = 1 + 0 = 1. \] Therefore, \( {\check{\mathbf{e}}}_{1}^{\prime } \neq {\check{\mathbf{e}}}_{1} \) even if \( {\mathbf{e}}_{1}^{\prime } = {\mathbf{e}}_{1} \) : the two isomorphisms \( {R}^{2} \cong {\left( {R}^{2}\right) }^{ \vee } \) determined by the two bases are different. The fact that duality leads to noncanonical isomorphisms is somewhat unpleasant, but something magical will happen soon (§5.5): we will recover a canonical isomorphism (that is, one independent of any choice) on finite-rank free modules by applying the duality functor twice. Whatever twist is introduced by the duality functor will be untwisted by applying duality again. 5.4. Duality and exactness. Before seeing this, let us contemplate another immediate consequence of the fact that duality is a particular case of the contravariant Hom: Lemma 5.12. The duality functor is left-exact: every exact sequence \[ L \rightarrow M \rightarrow N \rightarrow 0 \] of \( R \) -modules induces an exact sequence \[ 0 \rightarrow {N}^{ \vee } \rightarrow {M}^{ \vee } \rightarrow {L}^{ \vee }. \] Proof. This is an immediate consequence of the left-exactness of Hom. The duality functor is not exact in general: for example, taking duals in the exact sequence of abelian groups (a.k.a. \( \mathbb{Z} \) -modules) \[ 0 \rightarrow \mathbb{Z}\xrightarrow[]{\; \cdot 2\;}\mathbb{Z} \rightarrow \mathbb{Z}/2\mathbb{Z} \rightarrow 0 \] gives the sequence \( \left( *\right) \) \[ 0 \rightarrow 0 \rightarrow {\mathbb{Z}}^{ \vee }\overset{\gamma }{ \rightarrow }{\mathbb{Z}}^{ \vee } \rightarrow 0 \] since the dual of \( \mathbb{Z}/2\mathbb{Z} \) is zero (Exercise 5.6). The map \( \gamma \) in this sequence is the dual of the multiplication by 2.
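Identifying \( {\mathbb{Z}}^{ \vee } = {\operatorname{Hom}}_{\mathbb{Z}}\left( {\mathbb{Z},\mathbb{Z}}\right) \) with \( \mathbb{Z} \) via \( f \mapsto f\left( 1\right) \), the map \( \gamma \) itself becomes multiplication by 2, and its failure to be surjective can be seen concretely. A small Python sketch under this identification (the function name `gamma` is ours):

```python
# A homomorphism f: Z -> Z is determined by k = f(1), i.e. f(n) = k*n.
# Dualizing multiplication-by-2 sends f to the composite n |-> f(2n) = (2k)*n,
# so on Z^v ~ Z the map gamma is k |-> 2k.

def gamma(k):
    return 2 * k

image = {gamma(k) for k in range(-5, 6)}     # a window into the image of gamma
print(all(m % 2 == 0 for m in image))        # True: gamma only produces even values

# The identity map Z -> Z corresponds to k = 1, which is odd:
print(1 in image)                            # False: gamma is not surjective
```
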
If \( f : \mathbb{Z} \rightarrow \mathbb{Z} \) is an element of \( {\mathbb{Z}}^{ \vee } \), then \( \gamma \left( f\right) \) is obtained by composition: ![23387543-548b-40c2-8595-200756212a0f_563_0.jpg](images/23387543-548b-40c2-8595-200756212a0f_563_0.jpg) By linearity, \( \gamma \left( f\right) \left( n\right) = f\left( {2n}\right) = {2f}\left( n\right) \) is even for all \( n \in \mathbb{Z} \) . In particular, the identity \( \mathbb{Z} \rightarrow \mathbb{Z} \) (as an element of \( {\mathbb{Z}}^{ \vee } \) ) is not in the image of \( \gamma \) . Thus \( \gamma \) is not surjective, and the sequence (*) is not exact. There are situations in which duality is exact; we will understand this better after digesting [6], but the following observation will suffice for now. Proposition 5.13. Let \[ 0 \rightarrow M\xrightarrow[]{\;\mu \;}N\xrightarrow[]{\;\nu \;}P \rightarrow 0 \] be an exact sequence of \( R \) -modules, with \( P \) free. Then the induced sequence \[ 0 \rightarrow {P}^{ \vee }\xrightarrow[]{{\nu }^{ \vee }}{N}^{ \vee }\xrightarrow[]{{\mu }^{ \vee }}{M}^{ \vee } \rightarrow 0 \] is exact. This is also not new to the diligent readers, as it is a particular case of Exercise III 7.8 and, further, it is a consequence of Exercise 2.23. But here is a direct argument: Proof. Lemma 5.12 takes care of all but the surjectivity of the map \( {N}^{ \vee } \rightarrow {M}^{ \vee } \) induced from \( M \rightarrow N \) : ![23387543-548b-40c2-8595-200756212a0f_563_1.jpg](images/23387543-548b-40c2-8595-200756212a0f_563_1.jpg) The question is whether every \( R \) -linear \( f : M \rightarrow R \) can be extended to an \( R \) -linear map \( g : N \rightarrow R \) so that \( f = g \circ \mu \), that is, \( f = {\mu }^{ \vee }\left( g\right) \) . As \( P \) is free, \( P \cong {F}^{R}\left( S\right) \cong {R}^{\oplus S} \) for some set \( S \) . Choosing (arbitrarily!)
preimages in \( N \) of the standard basis vectors \( {\mathbf{e}}_{s} \in {R}^{\oplus S} \) gives a set-function \( S \rightarrow N \) , extending to an \( R \) -linear map \( \rho : P \rightarrow N \) by the universal property of free modules: \[ 0 \rightarrow M\xrightarrow[]{\mu }N\xrightarrow[\overleftarrow{\rho }]{\nu }P \rightarrow 0. \]
1065_(GTM224)Metric Structures in Differential Geometry
Definition 4.2
Definition 4.2. The tangent bundle (resp. cotangent bundle) of \( M \) is the set \( {TM} = { \cup }_{p \in M}{M}_{p} \) (resp. \( {T}^{ * }M = { \cup }_{p \in M}{M}_{p}^{ * } \) ). The bundle projections are the maps \( \pi : {TM} \rightarrow M \) and \( \widetilde{\pi } : {T}^{ * }M \rightarrow M \) which map a tangent or cotangent vector to its footpoint. Proposition 4.2. The differentiable structure \( \mathcal{D} \) on \( {M}^{n} \) induces in a natural way \( {2n} \) -dimensional differentiable structures on the tangent and cotangent bundles of \( M \) . Proof. For each chart \( \left( {U, x}\right) \) of \( M \), define a chart \( \left( {{\pi }^{-1}\left( U\right) ,\bar{x}}\right) \) of \( {TM} \) , where \( \bar{x} : {\pi }^{-1}\left( U\right) \rightarrow {\mathbb{R}}^{2n} \) is given by \[ \bar{x}\left( v\right) = \left( {x \circ \pi \left( v\right), d{x}^{1}\left( {\pi \left( v\right) }\right) v,\ldots, d{x}^{n}\left( {\pi \left( v\right) }\right) v}\right) . \] Similarly, define \( \widetilde{x} : {\widetilde{\pi }}^{-1}\left( U\right) \rightarrow {\mathbb{R}}^{2n} \) by \[ \widetilde{x}\left( \alpha \right) = \left( {x \circ \widetilde{\pi }\left( \alpha \right) ,\alpha \left( {\partial /\partial {x}^{1}\left( {\widetilde{\pi }\left( \alpha \right) }\right) }\right) ,\ldots ,\alpha \left( {\partial /\partial {x}^{n}\left( {\widetilde{\pi }\left( \alpha \right) }\right) }\right) }\right) . \] One checks that the collection \( \left\{ {{\bar{x}}^{-1}\left( V\right) \mid \left( {U, x}\right) \in \mathcal{D}, V\text{open in}{\mathbb{R}}^{2n}}\right\} \) forms a basis for a second countable Hausdorff topology on \( {TM} \) . A similar argument, using \( \widetilde{x} \) instead of \( \bar{x} \), works for \( {T}^{ * }M \) . Let \( \mathcal{A} = \left\{ {\left( {{\pi }^{-1}\left( U\right) ,\bar{x}}\right) \mid \left( {U, x}\right) \in \mathcal{D}}\right\} \) . 
We claim that \( \mathcal{A} \) is an atlas for \( {TM} \) : clearly, each \( \bar{x} : {\pi }^{-1}\left( U\right) \rightarrow x\left( U\right) \times {\mathbb{R}}^{n} \) is a homeomorphism. Furthermore, if \( \left( {V, y}\right) \) is another chart of \( M \), and \( \left( {a, b}\right) \in x\left( {U \cap V}\right) \times {\mathbb{R}}^{n} \), then \[ \bar{y} \circ {\bar{x}}^{-1}\left( {a, b}\right) = \left( {y \circ {x}^{-1}\left( a\right), D\left( {y \circ {x}^{-1}}\right) \left( a\right) \left( b\right) }\right) . \] To see this, write \( b = \sum {b}_{i}{\mathbf{e}}_{i} \) ; then \[ {\bar{x}}^{-1}\left( {a, b}\right) = \mathop{\sum }\limits_{i}{b}_{i}\frac{\partial }{\partial {x}^{i}}\left( {{x}^{-1}\left( a\right) }\right) = \mathop{\sum }\limits_{{i, j}}{b}_{i}\frac{\partial {y}^{j}}{\partial {x}^{i}}\left( {{x}^{-1}\left( a\right) }\right) \frac{\partial }{\partial {y}^{j}}\left( {{x}^{-1}\left( a\right) }\right) , \] so that \[ \left( {\bar{y} \circ {\bar{x}}^{-1}}\right) \left( {a, b}\right) = \left( {y \circ {x}^{-1}\left( a\right) ,\mathop{\sum }\limits_{{i, j}}{b}_{i}\frac{\partial {y}^{j}}{\partial {x}^{i}}\left( {{x}^{-1}\left( a\right) }\right) {\mathbf{e}}_{j}}\right) \] \[ = \left( {y \circ {x}^{-1}\left( a\right), D\left( {y \circ {x}^{-1}}\right) \left( a\right) \left( b\right) }\right) . \] For example, the bundle projection \( \pi : {TM} \rightarrow M \) is differentiable, since for any pair \( \left( {U, x}\right) ,\left( {{\pi }^{-1}\left( U\right) ,\bar{x}}\right) \) of related charts, \( x \circ \pi \circ {\bar{x}}^{-1} : x\left( U\right) \times {\mathbb{R}}^{n} \rightarrow x\left( U\right) \) is the projection onto the first factor. Any differentiable \( f : M \rightarrow N \) induces a differentiable map \( {f}_{ * } : {TM} \rightarrow {TN} \), called the derivative of \( f \) : for \( v \in {M}_{p} \), set \( {f}_{ * }v \mathrel{\text{:=}} {f}_{*p}v \) .
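The chart-transition formula \( \bar{y} \circ {\bar{x}}^{-1}\left( {a, b}\right) = \left( {y \circ {x}^{-1}\left( a\right), D\left( {y \circ {x}^{-1}}\right) \left( a\right) \left( b\right) }\right) \) can be sanity-checked numerically. The Python sketch below uses a hypothetical pair of charts on \( {\mathbb{R}}^{2} \) (not from the text): \( x \) the identity and the shear \( y\left( {u, v}\right) = \left( {u, v + {u}^{2}}\right) \), comparing the exact Jacobian action with a finite-difference quotient:

```python
# Charts on R^2: x = identity, y(u, v) = (u, v + u^2), so y o x^{-1} = y.
# Transition on TM in these charts: (a, b) |-> (y(a), Dy(a) b).

def y(p):
    u, v = p
    return (u, v + u * u)

def transition(a, b):
    # Exact Jacobian of y at a is [[1, 0], [2*a0, 1]].
    J = [[1.0, 0.0], [2.0 * a[0], 1.0]]
    Jb = (J[0][0] * b[0] + J[0][1] * b[1],
          J[1][0] * b[0] + J[1][1] * b[1])
    return y(a), Jb

a, b = (1.0, 2.0), (3.0, 4.0)
point, vec = transition(a, b)
print(point)  # (1.0, 3.0)
print(vec)    # (3.0, 10.0)

# Finite-difference check: Dy(a) b is approximately (y(a + h*b) - y(a)) / h.
h = 1e-6
fd = tuple((y((a[0] + h * b[0], a[1] + h * b[1]))[i] - y(a)[i]) / h
           for i in range(2))
print(all(abs(fd[i] - vec[i]) < 1e-4 for i in range(2)))  # True
```
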
Differentiability follows from the easily checked identity: \[ \left( {\bar{y} \circ {f}_{ * } \circ {\bar{x}}^{-1}}\right) \left( {a, b}\right) = \left( {y \circ f \circ {x}^{-1}\left( a\right), D\left( {y \circ f \circ {x}^{-1}}\right) \left( a\right) b}\right) . \] EXERCISE 7. Show that if \( M \) is connected, then any two points of \( M \) can be joined by a smooth curve. EXERCISE 8. (a) Prove that \( {\mathcal{J}}_{v} : {\mathbb{R}}^{n} \rightarrow {\left( {\mathbb{R}}^{n}\right) }_{v} \) from Examples and Remarks 4.1(iv) satisfies \( {\mathcal{J}}_{v}w\left( f\right) = {D}_{w}f\left( v\right) = {\left( f \circ c\right) }^{\prime }\left( 0\right) \), where \( c \) is any curve with \( c\left( 0\right) = v,{c}^{\prime }\left( 0\right) = w \) . (b) Show that any \( v \in {TM} \) equals \( \dot{c}\left( 0\right) \) for some curve \( c \) in \( M \) . EXERCISE 9. For positive \( \rho ,\sigma \), consider the helix \( c : \mathbb{R} \rightarrow {\mathbb{R}}^{3} \), given by \( c\left( t\right) = \left( {\rho \cos t,\rho \sin t,{\sigma t}}\right) \) . Express \( \dot{c}\left( t\right) \) in terms of the standard basis of \( {\mathbb{R}}_{c\left( t\right) }^{3} \) . EXERCISE 10. Let \( M \) be connected, \( f : M \rightarrow N \) a differentiable map. Show that if \( {f}_{*p} = 0 \) for all \( p \) in \( M \), then \( f \) is a constant map. EXERCISE 11. Fill in the details of the argument for the cotangent bundle in the proof of Proposition 4.2. ## 5. The Inverse and Implicit Function Theorems Let \( U \) be an open set in \( M, f : U \rightarrow N \) a differentiable map. The rank of \( f \) at \( p \in U \) is the rank of the linear map \( {f}_{*p} : {M}_{p} \rightarrow {N}_{f\left( p\right) } \), that is, the dimension of the space \( {f}_{ * }\left( {M}_{p}\right) \) . Recall the following theorem from calculus: THEOREM 5.1 (Inverse Function Theorem). 
Let \( U \) be an open set in \( {\mathbb{R}}^{n} \) , \( f : U \rightarrow {\mathbb{R}}^{n} \) a differentiable map. If \( f \) has maximal rank \( \left( { = n}\right) \) at \( p \in U \), then there exists a neighborhood \( V \) of \( p \) such that the restriction \( f : V \rightarrow f\left( V\right) \) is a diffeomorphism. The inverse function theorem immediately generalizes to manifolds: THEOREM 5.2 (Inverse Function Theorem for Manifolds). Let \( M \) and \( N \) be manifolds of dimension \( n \), and \( f : U \rightarrow N \) a smooth map, where \( U \) is open in \( M \) . If \( f \) has maximal rank at \( p \in U \), then there exists a neighborhood \( V \) of \( p \) such that the restriction \( f : V \rightarrow f\left( V\right) \) is a diffeomorphism. Proof. Consider coordinate maps \( x \) at \( p, y \) at \( f\left( p\right) \), and apply Theorem 5.1 to \( y \circ f \circ {x}^{-1} \) . Conclude by observing that \( x \) and \( y \) are diffeomorphisms. We now use the inverse function theorem to derive the Euclidean version of one of the essential tools in differential geometry: THEOREM 5.3 (Implicit Function Theorem). Let \( U \) be a neighborhood of 0 in \( {\mathbb{R}}^{n}, f : U \rightarrow {\mathbb{R}}^{k} \) a smooth map with \( f\left( 0\right) = 0 \) . For \( n \leq k \), let \( \imath : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{k} \) denote the inclusion \( \iota \left( {{a}_{1},\ldots ,{a}_{n}}\right) = \left( {{a}_{1},\ldots ,{a}_{n},0,\ldots ,0}\right) \), and for \( n \geq k \), let \( \pi : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{k} \) denote the projection \( \pi \left( {{a}_{1},\ldots ,{a}_{k},\ldots ,{a}_{n}}\right) = \left( {{a}_{1},\ldots ,{a}_{k}}\right) \) . (1) If \( n \leq k \) and \( f \) has maximal rank \( \left( { = n}\right) \) at 0, then there exists a coordinate map \( g \) of \( {\mathbb{R}}^{k} \) around 0 such that \( g \circ f = \imath \) in a neighborhood of \( 0 \in {\mathbb{R}}^{n} \) . 
(2) If \( n \geq k \) and \( f \) has maximal rank \( \left( { = k}\right) \) at 0, then there exists a coordinate map \( h \) of \( {\mathbb{R}}^{n} \) around 0 such that \( f \circ h = \pi \) in a neighborhood of \( 0 \in {\mathbb{R}}^{n} \) . Proof. In order to prove (1), observe that the \( k \times n \) matrix \( \left( {{D}_{j}{f}^{i}\left( 0\right) }\right) \) has rank \( n \) . By rearranging the component functions \( {f}^{i} \) of \( f \) if necessary (which amounts to composing \( f \) with an invertible transformation, hence a diffeomorphism of \( {\mathbb{R}}^{k} \) ), we may assume that the \( n \times n \) submatrix \( {\left( {D}_{j}{f}^{i}\left( 0\right) \right) }_{1 \leq i, j \leq n} \) is invertible. Define \( F : U \times {\mathbb{R}}^{k - n} \rightarrow {\mathbb{R}}^{k} \) by \[ F\left( {{a}_{1},\ldots ,{a}_{n},{a}_{n + 1},\ldots ,{a}_{k}}\right) \mathrel{\text{:=}} f\left( {{a}_{1},\ldots ,{a}_{n}}\right) + \left( {0,\ldots ,0,{a}_{n + 1},\ldots ,{a}_{k}}\right) . \] Then \( F \circ \imath = f \), and the Jacobian matrix of \( F \) at 0 is \[ \left( \begin{matrix} {\left( {D}_{j}{f}^{i}\left( 0\right) \right) }_{1 \leq i \leq n} & 0 \\ {\left( {D}_{j}{f}^{i}\left( 0\right) \right) }_{n + 1 \leq i \leq k} & {1}_{{\mathbb{R}}^{k - n}} \end{matrix}\right) , \] which has nonzero determinant. Consequently, \( F \) has a local inverse \( g \), and \( g \circ f = g \circ F \circ \imath = \imath \) . This establishes (1). Similarly, in (2), we may assume that the \( k \times k \) submatrix \( {\left( {D}_{j}{f}^{i}\left( 0\right) \right) }_{1 \leq i, j \leq k} \) is invertible. Define \( F : U \rightarrow {\mathbb{R}}^{n} \) by \[ F\left( {{a}_{1},\ldots ,{a}_{n}}\right) \mathrel{\text{:=}} \left( {f\left( {{a}_{1},\ldots ,{a}_{n}}\right) ,{a}_{k + 1},\ldots ,{a}_{n}}\right) . 
\] Then \( f = \pi \circ F \), and the Jacobian of \( F \) at 0 is \[ \left( \begin{matrix} {\left( {D}_{j}{f}^{i}\left( 0\right) \right) }_{1 \leq j \leq k} & {\left( {D}_{j}{f}^{i}\left( 0\right) \right) }_{k + 1 \leq j \leq n} \\ 0 & {1}_{{\mathbb{R}}^{n - k}} \end{matrix}\right) , \] which is invertible. Thus, \( F \) has a local inverse \( h \), and \( f \circ h = \pi \circ F \circ h = \pi \) . ## 6. Submanifolds The implicit function theorem enables us to
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 1.20
Definition 1.20. A Lie group is a smooth manifold \( G \) which is also a group and such that the group product \[ G \times G \rightarrow G \] and the inverse map \( G \rightarrow G \) are smooth. Example 1.21. Let \[ G = \mathbb{R} \times \mathbb{R} \times {S}^{1} = \left\{ {\left( {x, y, u}\right) \mid x \in \mathbb{R}, y \in \mathbb{R}, u \in {S}^{1} \subset \mathbb{C}}\right\} , \] equipped with the group product given by \[ \left( {{x}_{1},{y}_{1},{u}_{1}}\right) \cdot \left( {{x}_{2},{y}_{2},{u}_{2}}\right) = \left( {{x}_{1} + {x}_{2},{y}_{1} + {y}_{2},{e}^{i{x}_{1}{y}_{2}}{u}_{1}{u}_{2}}\right) . \] Then \( G \) is a Lie group. Proof. It is easily checked that this operation is associative; the product of three elements with either grouping is \[ \left( {{x}_{1} + {x}_{2} + {x}_{3},{y}_{1} + {y}_{2} + {y}_{3},{e}^{i\left( {{x}_{1}{y}_{2} + {x}_{1}{y}_{3} + {x}_{2}{y}_{3}}\right) }{u}_{1}{u}_{2}{u}_{3}}\right) . \] There is an identity element in \( G \), namely \( e = \left( {0,0,1}\right) \) and each element \( \left( {x, y, u}\right) \) has an inverse given by \( \left( {-x, - y,{e}^{ixy}{u}^{-1}}\right) \) . Thus, \( G \) is, in fact, a group. Furthermore, both the group product and the map that sends each element to its inverse are clearly smooth, showing that \( G \) is a Lie group. Although there is nothing about matrices in the definition of the group \( G \) in Example 1.21, we may still ask whether \( G \) is isomorphic to some matrix Lie group. This turns out to be false. As shown in Sect. 4.8, there is no continuous, injective homomorphism of \( G \) into any \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . We conclude, then, that not every Lie group is isomorphic to a matrix Lie group. Nevertheless, most of the interesting examples of Lie groups are matrix Lie groups. Let us now think briefly about how we might show that every matrix Lie group is a Lie group. We will prove in Sect. 
3.7 that every matrix Lie group is an "embedded submanifold" of \( {M}_{n}\left( \mathbb{C}\right) \cong {\mathbb{R}}^{2{n}^{2}} \) . The operations of matrix multiplication and inversion are smooth on \( {M}_{n}\left( \mathbb{C}\right) \) (after restricting to the open subset of invertible matrices in the case of inversion). Thus, the restriction of these operations to a matrix Lie group \( G \subset {M}_{n}\left( \mathbb{C}\right) \) is also smooth, making \( G \) into a Lie group. It is customary to call a map \( \Phi \) between two Lie groups a Lie group homomorphism if \( \Phi \) is a group homomorphism and \( \Phi \) is smooth, whereas we have (in Definition 1.18) required only that \( \Phi \) be continuous. We will show, however, that every continuous homomorphism between matrix Lie groups is automatically smooth, so that there is no conflict of terminology. See Corollary 3.50 to Theorem 3.42. Finally, we note that since every matrix Lie group \( G \) is a manifold, \( G \) must be locally path connected. It then follows by a standard topological argument that \( G \) is connected if and only if it is path connected. ## 1.6 Exercises 1. Let \( {\left\lbrack \cdot , \cdot \right\rbrack }_{n, k} \) be the symmetric bilinear form on \( {\mathbb{R}}^{n + k} \) defined in (1.5). Let \( g \) be the \( \left( {n + k}\right) \times \left( {n + k}\right) \) diagonal matrix with first \( n \) diagonal entries equal to one and last \( k \) diagonal entries equal to minus one: \[ g = \left( \begin{array}{rr} {I}_{n} & 0 \\ 0 & - {I}_{k} \end{array}\right) \] Show that for all \( x, y \in {\mathbb{R}}^{n + k} \) , \[ {\left\lbrack x, y\right\rbrack }_{n, k} = \langle x,{gy}\rangle . \] Show that a \( \left( {n + k}\right) \times \left( {n + k}\right) \) real matrix \( A \) belongs to \( \mathrm{O}\left( {n;k}\right) \) if and only if \( g{A}^{tr}g = {A}^{-1} \) . 2. Let \( \omega \) be the skew-symmetric bilinear form on \( {\mathbb{R}}^{2n} \) given by (1.7). 
Let \( \Omega \) be the \( {2n} \times {2n} \) matrix \[ \Omega = \left( \begin{array}{rr} 0 & I \\ - I & 0 \end{array}\right) \] Show that for all \( x, y \in {\mathbb{R}}^{2n} \), we have \[ \omega \left( {x, y}\right) = \langle x,{\Omega y}\rangle \] Show that a \( {2n} \times {2n} \) matrix \( A \) belongs to \( \operatorname{Sp}\left( {n;\mathbb{R}}\right) \) if and only if \( - \Omega {A}^{tr}\Omega = \) \( {A}^{-1} \) . Note: A similar analysis applies to \( \operatorname{Sp}\left( {n;\mathbb{C}}\right) \) . 3. Show that the symplectic group \( \operatorname{Sp}\left( {1;\mathbb{R}}\right) \subset \mathrm{{GL}}\left( {2;\mathbb{R}}\right) \) is equal to \( \mathrm{{SL}}\left( {2;\mathbb{R}}\right) \) . Show that \( \mathrm{{Sp}}\left( {1;\mathbb{C}}\right) = \mathrm{{SL}}\left( {2;\mathbb{C}}\right) \) and that \( \mathrm{{Sp}}\left( 1\right) = \mathrm{{SU}}\left( 2\right) \) . 4. Show that a matrix \( R \) belongs to \( \mathrm{{SO}}\left( 2\right) \) if and only if it can be expressed in the form \[ \left( \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array}\right) \] for some \( \theta \in \mathbb{R} \) . Show that a matrix \( R \) belongs to \( \mathrm{O}\left( 2\right) \) if and only if it is of one of the two forms: \[ A = \left( \begin{matrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{matrix}\right) \;\text{ or }\;A = \left( \begin{matrix} \cos \theta & \sin \theta \\ \sin \theta & - \cos \theta \end{matrix}\right) . \] Hint: Recall that for \( A \) to be in \( \mathrm{O}\left( 2\right) \), the columns of \( A \) must be orthonormal. 5. Show that if \( \alpha \) and \( \beta \) are arbitrary complex numbers satisfying \( {\left| \alpha \right| }^{2} + {\left| \beta \right| }^{2} = 1 \) , then the matrix \[ A = \left( \begin{array}{rr} \alpha & - \bar{\beta } \\ \beta & \bar{\alpha } \end{array}\right) \] is in \( \mathrm{{SU}}\left( 2\right) \) . 
Show that every \( A \in \mathrm{{SU}}\left( 2\right) \) can be expressed in this form for a unique pair \( \left( {\alpha ,\beta }\right) \) satisfying \( {\left| \alpha \right| }^{2} + {\left| \beta \right| }^{2} = 1 \) . 6. Suppose \( U \) belongs to \( {M}_{2n}\left( \mathbb{C}\right) \) and \( U \) has an orthonormal basis of eigenvectors satisfying the conditions in Theorem 1.6. Show that \( U \) belongs to \( \operatorname{Sp}\left( n\right) \) . Hint: Start by showing that \( U \) is unitary. Then show that \( \omega \left( {{Uz},{Uw}}\right) = \omega \left( {z, w}\right) \) if \( z \) and \( w \) belong to the basis \( {u}_{1},\ldots ,{u}_{n},{v}_{1},\ldots ,{v}_{n} \) . 7. Using Theorem 1.6, show that \( \operatorname{Sp}\left( n\right) \) is connected and that every element of \( \operatorname{Sp}\left( n\right) \) has determinant 1 . 8. Determine the center \( Z\left( H\right) \) of the Heisenberg group \( H \) . Show that the quotient group \( H/Z\left( H\right) \) is commutative. 9. Suppose \( a \) is an irrational real number. Show that the set \( {E}_{a} \) of numbers of the form \( {e}^{2\pi ina}, n \in \mathbb{Z} \), is dense in the unit circle \( {S}^{1} \) . Hint: Show that if we divide \( {S}^{1} \) into \( N \) equally sized "bins" of length \( {2\pi }/N \) , there is at least one bin that contains infinitely many elements of \( {E}_{a} \) . Then use the fact that \( {E}_{a} \) is a subgroup of \( {S}^{1} \) . 10. Let \( a \) be an irrational real number and let \( G \) be the following subgroup of \( \mathrm{{GL}}\left( {2;\mathbb{C}}\right) \) : \[ G = \left\{ {\left. \left( \begin{matrix} {e}^{it} & 0 \\ 0 & {e}^{ita} \end{matrix}\right) \right| \;t \in \mathbb{R}}\right\} . \] Show that \[ \bar{G} = \left\{ {\left. 
\left( \begin{matrix} {e}^{i\theta } & 0 \\ 0 & {e}^{i\phi } \end{matrix}\right) \right| \;\theta ,\phi \in \mathbb{R}}\right\} , \] where \( \bar{G} \) denotes the closure of the set \( G \) inside the space of \( 2 \times 2 \) matrices. Hint: Use Exercise 9. 11. A subset \( E \) of a matrix Lie group \( G \) is called discrete if for each \( A \) in \( E \) there is a neighborhood \( U \) of \( A \) in \( G \) such that \( U \) contains no point in \( E \) except for \( A \) . Suppose that \( G \) is a connected matrix Lie group and \( N \) is a discrete normal subgroup of \( G \) . Show that \( N \) is contained in the center of \( G \) . 12. This problem gives an alternative proof of Proposition 1.11, namely that \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) is connected. Suppose \( A \) and \( B \) are invertible \( n \times n \) matrices. Show that there are only finitely many complex numbers \( \lambda \) for which \( \det \left( {{\lambda A} + \left( {1 - \lambda }\right) B}\right) = 0 \) . Show that there exists a continuous path \( A\left( t\right) \) of the form \( A\left( t\right) = \lambda \left( t\right) A + \left( {1 - \lambda \left( t\right) }\right) B \) connecting \( A \) to \( B \) and such that \( A\left( t\right) \) lies in \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) . Here, \( \lambda \left( t\right) \) is a continuous path in the plane with \( \lambda \left( 0\right) = 0 \) and \( \lambda \left( 1\right) = 1 \) . 13. Show that \( \mathrm{{SO}}\left( n\right) \) is connected, using the following outline. For the case \( n = 1 \), there is nothing to show, since a \( 1 \times 1 \) matrix with determinant one must be [1]. Assume, then, that \( n \geq 2 \) . Let \( {e}_{1} \) denote the unit vector with entries \( 1,0,\ldots ,0 \) in \( {\mathbb{R}}^{n} \) . 
For every unit vector \( v \in {\mathbb{R}}^{n} \), show that there exists a continuous path \( R\left( t\right) \) in \( \mathrm{{SO}}\left( n\right) \) with \( R\left( 0\right) = I \) and \( R\left( 1\right) v = {e}_{1} \) . (Thus, any unit vector can be "continuously rotated" to \( {e}_{1} \) .) Now, show that any element \( R \) of \( \mathrm{{SO}}\left( n\right) \) can be connected to a block-diagonal matrix of the form \[ \left( \begin{array}{ll} 1 & \\ & {R}_{1} \end{array}\right) \] with \( {R}_{1} \in \mathrm{{SO}}\left( {n - 1}\right) \) .
1009_(GTM175)An Introduction to Knot Theory
Definition 6.2
Definition 6.2. Suppose that \( M \) is a module over a commutative ring \( R \), having an \( m \times n \) presentation matrix \( A \) . The \( {r}^{th} \) elementary ideal \( {\mathcal{E}}_{r} \) of \( M \) is the ideal of \( R \) generated by all the \( \left( {m - r + 1}\right) \times \left( {m - r + 1}\right) \) minors of \( A \) . Of course, an \( \left( {m - r + 1}\right) \times \left( {m - r + 1}\right) \) minor is the determinant of the matrix that remains after the removal from \( A \) of \( \left( {r - 1}\right) \) rows and \( \left( {n - m + r - 1}\right) \) columns. The standard elementary properties of determinants, together with the above theorem, show that the elementary ideals are independent of the presentation matrix chosen to evaluate them. Note that \( {\mathcal{E}}_{r - 1} \subseteq {\mathcal{E}}_{r} \) . By convention, \( {\mathcal{E}}_{r} = R \) when \( r > m \) and \( {\mathcal{E}}_{r} = 0 \) if \( r \leq 0 \) . Note that if \( n = m \), the matrix \( A \) is square. Then there is only one \( m \times m \) minor, and \( {\mathcal{E}}_{1} \) is the principal ideal of \( R \) generated by \( \det A \) . A standard example is obtained by observing that a finite abelian group \( G \) is a \( \mathbb{Z} \) -module, it does have a square presentation matrix, and \( {\mathcal{E}}_{1} \) is the ideal of \( \mathbb{Z} \) generated by \( \left| G\right| \), the order of the group \( G \) . Returning to geometric things, consider the first homology group, with integer coefficients, of an orientable, compact, connected surface \( F \) of genus \( g \) with \( n \) boundary components. Any elementary homology theory (simplicial or singular homology, for example), or just basic intuition, asserts that \( {H}_{1}\left( {F;\mathbb{Z}}\right) = { \oplus }_{{2g} + n - 1}\mathbb{Z} \), generated by \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \), where the \( {f}_{i} \) are the oriented simple closed curves shown in Figure 6.1.
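The rank \( {2g} + n - 1 \) can also be recovered from the Euler characteristic: \( \chi \left( F\right) = 2 - {2g} - n \), and a connected compact surface with non-empty boundary deformation retracts onto a wedge of \( 1 - \chi \) circles, so \( \operatorname{rank}{H}_{1}\left( {F;\mathbb{Z}}\right) = 1 - \chi = {2g} + n - 1 \) . A quick arithmetic check in Python (the function name is ours):

```python
def h1_rank(g, n):
    """Rank of H_1 of a compact orientable genus-g surface with n > 0 boundary circles."""
    assert n > 0
    chi = 2 - 2 * g - n      # Euler characteristic of the surface
    return 1 - chi           # the surface retracts onto a wedge of (1 - chi) circles

print(h1_rank(0, 1))  # 0: a disc has trivial H_1
print(h1_rank(0, 2))  # 1: an annulus
print(h1_rank(1, 1))  # 2: a once-punctured torus
print(all(h1_rank(g, n) == 2 * g + n - 1
          for g in range(5) for n in range(1, 5)))  # True
```
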
There follows now a consideration of what happens when \( F \) is embedded in \( {S}^{3} \), probably with the "bands" of Figure 6.1 twisted, linked and knotted. The next result can be regarded as an instance of Alexander duality. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_61_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_61_0.jpg) Figure 6.1 Proposition 6.3. Suppose that \( F \) is a connected, compact, orientable surface with non-empty boundary, piecewise linearly contained in \( {S}^{3} \) . Then the homology groups \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \) and \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) are isomorphic, and there is a unique nonsingular bilinear form \[ \beta : {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \times {H}_{1}\left( {F;\mathbb{Z}}\right) \rightarrow \mathbb{Z} \] with the property that \( \beta \left( {\left\lbrack c\right\rbrack ,\left\lbrack d\right\rbrack }\right) = \operatorname{lk}\left( {c, d}\right) \) for any oriented simple closed curves \( c \) and \( d \) in \( {S}^{3} - F \) and \( F \) respectively. Proof. The surface \( F \) is now embedded in \( {S}^{3} \) . As before, \( {H}_{1}\left( {F;\mathbb{Z}}\right) = \) \( {\bigoplus }_{{2g} + n - 1}\mathbb{Z} \) generated by \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \) . Let \( V \) be a regular neighbourhood of \( F \) in \( {S}^{3} \), so that \( V \) is just a 3-ball with \( \left( {{2g} + n - 1}\right) 1 \) -handles attached. The inclusion of \( F \) in \( V \) is a homotopy equivalence, and \( {H}_{1}\left( {\partial V;\mathbb{Z}}\right) = \left( {{\bigoplus }_{{2g} + n - 1}\mathbb{Z}}\right) \oplus \left( {{\bigoplus }_{{2g} + n - 1}\mathbb{Z}}\right) \) . 
For this, generators \( \left\{ {\left\lbrack {f}_{i}^{\prime }\right\rbrack : 1 \leq i \leq {2g} + n - 1}\right\} \) and \( \left\{ {\left\lbrack {e}_{i}\right\rbrack : 1 \leq i \leq {2g} + n - 1}\right\} \) can be chosen so that each \( {e}_{i} \) is the boundary of a small disc in \( V \) that meets \( {f}_{i} \) at one point, and the inclusion \( \partial V \subset V \) induces on homology a map sending \( \left\lbrack {f}_{i}^{\prime }\right\rbrack \) to \( \left\lbrack {f}_{i}\right\rbrack \) and \( \left\lbrack {e}_{i}\right\rbrack \) to zero. Furthermore, the orientations of the \( \left\{ {e}_{i}\right\} \) can be chosen so that \( \operatorname{lk}\left( {{e}_{i},{f}_{j}}\right) = {\delta }_{ij} \) (the Kronecker delta). This all relates to the homology of the standard inclusion of \( F \) in a standard handlebody \( V \) ; it is \( {S}^{3} - F \) that is of interest. Now, if \( {V}^{\prime } \) is the closure of \( {S}^{3} - V \), then the inclusion of \( {V}^{\prime } \) in \( {S}^{3} - F \) is a homotopy equivalence. The Mayer-Vietoris theorem for \( {S}^{3} \) expressed as the union of \( V \) and \( {V}^{\prime } \) asserts that the following sequence is exact: \[ {H}_{2}\left( {{S}^{3};\mathbb{Z}}\right) \rightarrow {H}_{1}\left( {\partial V;\mathbb{Z}}\right) \rightarrow {H}_{1}\left( {V;\mathbb{Z}}\right) \oplus {H}_{1}\left( {{V}^{\prime };\mathbb{Z}}\right) \rightarrow {H}_{1}\left( {{S}^{3};\mathbb{Z}}\right) . \] As the first and last groups in this sequence are zero, the map in the middle, induced by inclusion maps, is an isomorphism. Thus \( {H}_{1}\left( {{V}^{\prime };\mathbb{Z}}\right) \) (which is isomorphic to \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \)) is isomorphic to \( {\bigoplus }_{{2g} + n - 1}\mathbb{Z} \) and is generated by \( \left\{ {\left\lbrack {e}_{i}\right\rbrack : 1 \leq i \leq {2g} + n - 1}\right\} \) .
Now define \[ \beta : {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \times {H}_{1}\left( {F;\mathbb{Z}}\right) \rightarrow \mathbb{Z} \] by \( \beta \left( {\left\lbrack {e}_{i}\right\rbrack ,\left\lbrack {f}_{j}\right\rbrack }\right) = {\delta }_{ij} \), and extend linearly. Suppose now that \( c \) and \( d \) are any oriented simple closed curves in \( {S}^{3} - F \) and \( F \) respectively, where \( \left\lbrack c\right\rbrack = \mathop{\sum }\limits_{i}{\lambda }_{i}\left\lbrack {e}_{i}\right\rbrack \) and \( \left\lbrack d\right\rbrack = \mathop{\sum }\limits_{i}{\mu }_{i}\left\lbrack {f}_{i}\right\rbrack \) . Then \( \beta \left( {\left\lbrack c\right\rbrack ,\left\lbrack d\right\rbrack }\right) = \mathop{\sum }\limits_{i}{\lambda }_{i}{\mu }_{i} \) . However, \( \operatorname{lk}\left( {c,{f}_{j}}\right) \) is determined by the class \( \left\lbrack c\right\rbrack = \mathop{\sum }\limits_{i}{\lambda }_{i}\left\lbrack {e}_{i}\right\rbrack \in {H}_{1}\left( {{S}^{3} - {f}_{j};\mathbb{Z}}\right) \), together with \( \operatorname{lk}\left( {{e}_{i},{f}_{j}}\right) = {\delta }_{ij} \) . Thus \( \operatorname{lk}\left( {c,{f}_{j}}\right) = {\lambda }_{j} \) . Similarly, \( \operatorname{lk}\left( {d, c}\right) \) is determined by \( \left\lbrack d\right\rbrack = \mathop{\sum }\limits_{i}{\mu }_{i}\left\lbrack {f}_{i}\right\rbrack \in {H}_{1}\left( {{S}^{3} - c;\mathbb{Z}}\right) \), so \( \operatorname{lk}\left( {d, c}\right) = \mathop{\sum }\limits_{i}{\mu }_{i}\operatorname{lk}\left( {{f}_{i}, c}\right) \), which by the above is \( \mathop{\sum }\limits_{i}{\lambda }_{i}{\mu }_{i} \) . Hence, as required, \( \beta \left( {\left\lbrack c\right\rbrack ,\left\lbrack d\right\rbrack }\right) = \operatorname{lk}\left( {c, d}\right) \) . Note that, whereas the above proof uses bases, \( \beta \) is characterised by linking numbers and is independent of bases.
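The base-changing argument behind \( \beta \) -dual bases is a matrix identity: if the basis \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \) is changed by an invertible integer matrix \( P \), then changing \( \left\{ \left\lbrack {e}_{i}\right\rbrack \right\} \) by \( {\left( {P}^{-1}\right) }^{T} \) keeps the two bases \( \beta \) -dual, since the new matrix of \( \beta \) is \( {\left( {P}^{-1}\right) }^{T}I{P}^{T} = I \) . A small Python sketch of this identity (the particular \( P \) and the helper functions are illustrative):

```python
# beta-duality under base change: starting from beta(e_i, f_j) = delta_ij,
# change the f-basis by P and the e-basis by Q = (P^{-1})^T; the new Gram
# matrix of beta is then Q * I * P^T, which is the identity again.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(A):
    """Inverse of a 2x2 integer matrix with determinant +-1."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det in (1, -1)
    return [[d // det, -b // det], [-c // det, a // det]]

P = [[1, 1], [0, 1]]        # a hypothetical base change on H_1(F; Z)
Q = transpose(inv2(P))      # the induced change on the beta-dual basis

gram = matmul(Q, transpose(P))   # new Gram matrix Q * I * P^T
print(gram)  # [[1, 0], [0, 1]]: the new bases are again beta-dual
```
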
Note, too, that the bases used are mutually dual with respect to \( \beta \) in the sense that \( \beta \left( {\left\lbrack {e}_{i}\right\rbrack ,\left\lbrack {f}_{j}\right\rbrack }\right) = {\delta }_{ij} \), and so, using standard base changing arguments, corresponding to any base for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) there is a \( \beta \) -dual base for \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \) and vice versa. Now suppose that \( F \) is a Seifert surface for an oriented link \( L \) in \( {S}^{3} \), so that \( \partial F = L \) . Let \( N \) be a regular neighbourhood of \( L \), a disjoint union of solid tori that "fatten" the components of \( L \) . Let \( X \) be the closure of \( {S}^{3} - N \) . Then \( F \cap X \) is \( F \) with a (collar) neighbourhood of \( \partial F \) removed. Thus \( F \cap X \) is just a copy of \( F \) and, just to simplify notation, it will be regarded as actually being \( F \) . This \( F \) has a regular neighbourhood \( F \times \left\lbrack {-1,1}\right\rbrack \) in \( X \), with \( F \) identified with \( F \times 0 \) and the notation chosen so that the meridian of every component of \( L \) enters the neighbourhood at \( F \times - 1 \) and leaves it at \( F \times 1 \) . Let \( {i}^{ \pm } \) be the two embeddings \( F \rightarrow {S}^{3} - F \) defined by \( {i}^{ \pm }\left( x\right) = x \times \pm 1 \) and, if \( c \) is an oriented simple closed curve in \( F \), let \( {c}^{ \pm } = {i}^{ \pm }c \) . Definition 6.4. Associated to the Seifert surface \( F \) for an oriented link \( L \) is the Seifert form \[ \alpha : {H}_{1}\left( {F;\mathbb{Z}}\right) \times {H}_{1}\left( {F;\mathbb{Z}}\right) \rightarrow \mathbb{Z} \] defined by \( \alpha \left( {x, y}\right) = \beta \left( {{\left( {i}^{ - }\right) }_{ \star }x, y}\right) \) . 
Note that, from Proposition \( {6.3},\alpha \) is defined and bilinear, and if \( a \) and \( b \) are simple closed oriented curves in \( F \), then \( \alpha \left( {\left\lbrack a\right\rbrack ,\left\lbrack b\right\rbrack }\right) = \operatorname{lk}\left( {{a}^{ - }, b}\right) \) . Further, by sliding with respect to the second coordinate of \( F \times \left\lbrack {-1,1}\right\rbrack \), this is equal to \( \operatorname{lk}\left( {a,{b}^{ + }}\right) \) . Taking a basis \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \) for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) with a \( \beta \) -dual basis \( \left\{ \left\lbrack {e}_{i}\right\rbrack \right\} \) for \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \) as before, \( \alpha \) is represented by the Seifert matrix \( A \), where \[ {A}_{ij} = \alpha \left( {\left\lbrack {f}_{i}\right\rbra
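To make this concrete: for the standard genus-1 Seifert surface of the trefoil, one common choice of basis \( \{[f_1],[f_2]\} \) and sign convention gives the Seifert matrix \( A = \left(\begin{smallmatrix} -1 & 1 \\ 0 & -1 \end{smallmatrix}\right) \) (the matrix is taken here as an assumed input, not derived in this text). A quick sympy check recovers the trefoil's Alexander polynomial \( \det(A - tA^T) = t^2 - t + 1 \), a standard invariant extracted from a Seifert matrix:

```python
import sympy as sp

t = sp.symbols('t')
# Seifert matrix of the trefoil for one common basis/sign convention
A = sp.Matrix([[-1, 1],
               [0, -1]])

# The (unnormalised) Alexander polynomial is det(A - t*A^T)
alex = sp.expand((A - t * A.T).det())
print(alex)  # t**2 - t + 1
```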
1225_(Griffiths) Introduction to Algebraic Curves
Definition 3.8
Definition 3.8. Suppose \( C,{C}^{\prime } \) are Riemann surfaces with \( \left\{ \left( {{U}_{i},{z}_{i}}\right) \right\} \) and \( \left\{ \left( {{U}_{a}^{\prime },{z}_{a}^{\prime }}\right) \right\} \) as their holomorphic coordinate coverings, respectively. Then a holomorphic mapping \( f : C \rightarrow {C}^{\prime } \) is by definition a family of continuous mappings \[ {f}_{i} : {U}_{i} \rightarrow {C}^{\prime } \] \( \left( {\forall i}\right) \) such that a) \( {f}_{i} = {f}_{j} \) on \( {U}_{i} \cap {U}_{j} \) for \( {U}_{i} \cap {U}_{j} \neq \varnothing \) ; b) \( {z}_{a}^{\prime } \circ {f}_{i} \circ {z}_{i}^{-1} \) is a holomorphic function on \( {f}^{-1}\left( {U}_{a}^{\prime }\right) \cap {U}_{i} \) whenever \( {f}^{-1}\left( {U}_{a}^{\prime }\right) \cap {U}_{i} \neq \varnothing \) . REMARK 3.9. Clearly, any meromorphic function \( f \) on the Riemann surface \( C \) is a holomorphic mapping into the Riemann sphere \( S \) . EXERCISE 3.3. Suppose \( C,{C}^{\prime } \) are Riemann surfaces and \( f : C \rightarrow {C}^{\prime } \) is a holomorphic mapping with \( p \in C, q \in {C}^{\prime } \) and \( f\left( p\right) = q \) . Prove that there exist local coordinates \( z \) in a neighborhood of the point \( p \) on \( C \) and \( w \) in a neighborhood of the point \( q \) on \( {C}^{\prime } \) which satisfy \( z\left( p\right) = 0 \) and \( w\left( q\right) = 0 \), such that relative to \( z \) and \( w, f \) has the local representation \( w = {z}^{\mu },\mu \in {\mathbb{Z}}^{ + } \) . Moreover, relative to local coordinates which satisfy the above conditions, the value of the above \( \mu \) is always the same. We call \( \left( {\mu - 1}\right) \) the ramification index of \( f \) at the point \( p \) . ## §4. Holomorphic and meromorphic differentials In this section we shall discuss differentials on a Riemann surface \( C \) . Definition 4.1. Suppose \( C \) is a Riemann surface. 
Then a holomorphic differential (respectively, meromorphic differential) \( \omega \) is by definition a family \( \left\{ \left( {{U}_{i},{z}_{i},{\omega }_{i}}\right) \right\} \) such that: a) \( \left\{ \left( {{U}_{i},{z}_{i}}\right) \right\} \) is a holomorphic coordinate covering of \( C \), and \[ {\omega }_{i} = {f}_{i}\left( {z}_{i}\right) d{z}_{i} \] where \( {f}_{i} \in \mathcal{O}\left( {U}_{i}\right) \) (resp. \( K\left( {U}_{i}\right) \) ); b) if \( {z}_{i} = {\varphi }_{ij}\left( {z}_{j}\right) \) is the coordinate transformation on \( {U}_{i} \cap {U}_{j}\left( { \neq \varnothing }\right) \) , then \[ {f}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) }\right) \frac{d{\varphi }_{ij}\left( {z}_{j}\right) }{d{z}_{j}} = {f}_{j}\left( {z}_{j}\right) , \] i.e., the local representation of the differential changes according to the chain rule \[ {f}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) }\right) d{\varphi }_{ij}\left( {z}_{j}\right) = {f}_{j}\left( {z}_{j}\right) d{z}_{j}. \] We use \( {\Omega }^{1}\left( C\right) \) (respectively, \( {K}^{1}\left( C\right) \) ) to denote the set of holomorphic differentials (respectively, meromorphic differentials) on \( C \) . The following result is obvious: Proposition 4.2. Suppose \( {\omega }_{0},{\omega }_{1} \) are meromorphic differentials on \( C \) , \( {\omega }_{0} ≢ 0 \) . Then \( {\omega }_{1}/{\omega }_{0} \) defines a meromorphic function on \( C \) . EXAMPLE 1. Suppose \( r\left( z\right) \) is a rational function on \( \mathbb{C} \) ; then \( r\left( z\right) {dz} \) is a meromorphic differential on the Riemann sphere \( S \) . EXERCISE 4.1. Prove that all \( \omega \in {K}^{1}\left( S\right) \) are of the above form. Suggestion. Fix a rational function \( {r}_{0}\left( z\right) \) and let \[ {\omega }_{0} = {r}_{0}\left( z\right) {dz}; \] then \( \omega /{\omega }_{0} \in K\left( S\right) \) for any \( \omega \in {K}^{1}\left( S\right) \), and we have determined \( K\left( S\right) \) above.
EXAMPLE 2. Suppose \[ \Lambda = \left\{ {{m}_{1}{w}_{1} + {m}_{2}{w}_{2} \mid {m}_{1},{m}_{2} \in \mathbb{Z}}\right\} \] is a lattice in \( \mathbb{C} \), and \( f \) is a doubly periodic meromorphic function with period \( \left( {{w}_{1},{w}_{2}}\right) \), then \( f\left( w\right) {dw} \) is a meromorphic differential defined on \( C = \mathbb{C}/\Lambda \) . EXERCISE 4.2. Verify that \( \omega = {dw} \) yields a holomorphic differential on \( C = \mathbb{C}/\Lambda \) . EXAMPLE 3. Suppose \( C \) is a Riemann surface, with \[ f = \left\{ \left( {{U}_{i},{z}_{i},{f}_{i}\left( {z}_{i}\right) }\right) \right\} \in K\left( C\right) . \] Then the following expression defines a meromorphic differential \( {df} \in \) \( {K}^{1}\left( C\right) \) \[ {df} = \left\{ \left( {{U}_{i},{z}_{i}, d{f}_{i}\left( {z}_{i}\right) = \frac{d{f}_{i}\left( {z}_{i}\right) }{d{z}_{i}}d{z}_{i}}\right) \right\} . \] Definition 4.3. We call df, as defined above, the differential of the meromorphic function \( f \) . DEFINITION 4.4. Suppose \( C \) is a Riemann surface with \[ \omega = \left\{ \left( {{U}_{i},{z}_{i},{f}_{i}\left( {z}_{i}\right) d{z}_{i}}\right) \right\} \in {K}^{1}\left( C\right) ,\;p \in {U}_{i} \cap {U}_{j}. \] Then \[ {\nu }_{p}\left( {f}_{i}\right) = {\nu }_{p}\left( {{f}_{i}\left( {{\varphi }_{ij}\left( {z}_{j}\right) }\right) \frac{d{\varphi }_{ij}\left( {z}_{j}\right) }{d{z}_{j}}}\right) = {\nu }_{p}\left( {f}_{j}\right) , \] and thus we can define \[ {\nu }_{p}\left( \omega \right) = {\nu }_{p}\left( {f}_{i}\right) ,\;p \in {U}_{i}. \] If \( {\nu }_{p}\left( \omega \right) > 0 \), then \( p \) is called a zero of \( \omega \) ; if \( {\nu }_{p}\left( \omega \right) < 0 \), then \( p \) is called a pole of \( \omega \) . DEFINITION 4.5. 
Suppose \( C \) is a Riemann surface with \[ \omega = \left\{ \left( {{U}_{i},{z}_{i},{f}_{i}\left( {z}_{i}\right) d{z}_{i}}\right) \right\} \in {K}^{1}\left( C\right) , \] \( \gamma \) is a piecewise smooth curve on \( C \) not containing the poles of \( \omega \), and \( \gamma = \mathop{\bigcup }\limits_{i}{\gamma }_{i} \) is any partition of \( \gamma \) satisfying \( {\gamma }_{i} \subset {U}_{i} \) . We define the integral \[ {\int }_{\gamma }\omega = \mathop{\sum }\limits_{i}{\int }_{{\gamma }_{i}}{f}_{i}\left( {z}_{i}\right) d{z}_{i}. \] This definition is well defined, for suppose \( \gamma = \mathop{\bigcup }\limits_{j}{\gamma }_{j}^{\prime } \) were another partition of \( \gamma \) satisfying \( {\gamma }_{j}^{\prime } \subset {U}_{j} \) . Then by the change of variables formula: \[ {\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{i}\left( {z}_{i}\right) d{z}_{i} = {\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{j}\left( {z}_{j}\right) d{z}_{j}, \] so that \[ \mathop{\sum }\limits_{i}{\int }_{{\gamma }_{i}}{f}_{i}\left( {z}_{i}\right) d{z}_{i} = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{i}\left( {z}_{i}\right) d{z}_{i} \] \[ = \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{i}{\int }_{{\gamma }_{i} \cap {\gamma }_{j}^{\prime }}{f}_{j}\left( {z}_{j}\right) d{z}_{j} \] \[ = \mathop{\sum }\limits_{j}{\int }_{{\gamma }_{j}^{\prime }}{f}_{j}\left( {z}_{j}\right) d{z}_{j}. \] Theorem 4.6 (Stokes’ Theorem for holomorphic differentials). Suppose \( C \) is a Riemann surface with \( \Omega \subset C \) an open set, \( \bar{\Omega } \) compact, \( \partial \Omega = \gamma \) a piecewise smooth curve, and \( \omega \) a holomorphic differential defined on an open set containing \( \bar{\Omega } \) . Then \[ {\int }_{\partial \Omega }\omega = 0. \] Proof.
Suitably subdivide \( \Omega \) into a disjoint union \( \Omega = \mathop{\bigcup }\limits_{i}{\Omega }_{i} \) such that for each \( i \) , \[ {\Omega }_{i} \subset {U}_{i} \] and \( \partial {\Omega }_{i} \) is a piecewise smooth curve. By using local coordinate representations and applying Cauchy's theorems, we get \[ {\int }_{\partial {\Omega }_{i}}\omega = 0 \] whence \[ {\int }_{\partial \Omega }\omega = \mathop{\sum }\limits_{i}{\int }_{\partial {\Omega }_{i}}\omega = 0 \] since the contributions of the boundaries \( \partial {\Omega }_{i} \) cancel out other than those arcs contributing to \( \partial \Omega \) . Q.E.D. EXERCISE 4.3. Prove that there are no holomorphic differentials on the Riemann sphere \( S \) except the trivial one. Suggestion. Let \( \omega \) be a holomorphic differential, and fix \( q \in S \) . Consider \[ f\left( p\right) = {\int }_{q}^{p}\omega ,\;p \in S, \] which we know from Stokes' theorem to be well defined (why?). It is easily seen that \( f \) is a holomorphic function and thus constant. Hence \( \omega = {df} = 0. \) Definition 4.7. Let \( C \) be a Riemann surface with \( \omega \in {K}^{1}\left( C\right), p \in C \) , \( {\gamma }_{p} \) a small circle around the point \( p \), and \( \omega \) having no poles other than \( p \) on the disc surrounded by \( {\gamma }_{p} \) ( \( p \) itself may or may not be a pole). Then we define the residue at the point \( p \) of \( \omega \) to be \[ {\operatorname{Res}}_{p}\left( \omega \right) = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}\omega \] From Stokes' Theorem, this definition is independent of the choice of the small circle \( {\gamma }_{p} \) .
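Definition 4.7 can be checked numerically: parametrising \( {\gamma }_{p} \) as a circle \( z = r{e}^{i\theta } \), the residue becomes a uniform Riemann sum of \( f\left( z\right) \,dz/d\theta \) over \( \left\lbrack {0,{2\pi }}\right\rbrack \), divided by \( {2\pi i} \); for smooth periodic integrands this rule converges extremely fast. A minimal numpy sketch (the helper `residue_at_zero` is a hypothetical name introduced here, not from the text):

```python
import numpy as np

def residue_at_zero(f, n=2000, radius=0.5):
    """Approximate Res_0(f(z) dz) by integrating over the circle
    |z| = radius with a uniform rule, which is spectrally accurate
    for smooth periodic integrands."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = radius * np.exp(1j * theta)
    dz = 1j * z  # dz/dtheta
    # (1 / 2*pi*i) * integral = mean of integrand / i
    return np.mean(f(z) * dz) / 1j

print(residue_at_zero(lambda z: 1.0 / z))        # ~ 1
print(residue_at_zero(lambda z: 1.0 / z**2))     # ~ 0
print(residue_at_zero(lambda z: np.exp(z) / z))  # ~ 1
```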
Moreover, for \( p \in {U}_{j},{\gamma }_{p} \subset {U}_{j} \), if \( \omega = {f}_{j}\left( {z}_{j}\right) d{z}_{j} \) in \( {U}_{j} \), then \[ {\operatorname{Res}}_{p}\left( \omega \right) = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}\omega = \frac{1}{2\pi i}{\oint }_{{\gamma }_{p}}{f}_{j}\left( {z}_{j}\right) d{z}_{j} \] \[ = {\operatorname{Res}}_{p}\left( {{f}_{j}\left( {z}_{j}\right) d{z}_{j}}\right) \text{.} \] THEOREM 4.8 (RESIDUE THEOREM). Suppose \( C \) is a compact Riemann surface. For \( \omega \in {K}^{1}\left( C\right) \), we have \[ \mathop{\sum }\limits_{{p \in C}}{\operatorname{Res}}_{p}\left( \omega \right) = 0 \] Proof. Since \( C \) is compact, \( \omega \) can have onl
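For \( C = S \), Theorem 4.8 can be verified symbolically. The only subtlety on the sphere is the point at infinity: in the coordinate \( w = 1/z \), the differential \( f\left( z\right) {dz} \) becomes \( -f\left( {1/w}\right) {dw}/{w}^{2} \). A sympy sketch (the particular rational differential below is an illustrative choice, not from the text):

```python
import sympy as sp

z, w = sp.symbols('z w')
f = (2*z + 3) / (z * (z - 1))  # omega = f(z) dz

# Residues at the finite poles z = 0 and z = 1
res_finite = sp.residue(f, z, 0) + sp.residue(f, z, 1)

# At infinity, substitute z = 1/w: f(z) dz = -f(1/w)/w**2 dw
res_inf = sp.residue(-f.subs(z, 1/w) / w**2, w, 0)

print(res_finite, res_inf)                # 2, -2
print(sp.simplify(res_finite + res_inf))  # 0, as Theorem 4.8 predicts
```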
1358_[陈松蹊&张慧铭] A Course in Fixed and High-dimensional Multivariate Analysis (2020)
Definition 1.2.8
Definition 1.2.8 If a \( p \) -dimensional random vector \( \mathbf{Y} \) has a characteristic function \( {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) = {e}^{i{\mathbf{t}}^{T}\mu }\phi \left( {{\mathbf{t}}^{T}\mathbf{\Lambda }\mathbf{t}}\right) \) for a constant vector \( {\mu }_{p \times 1} \) and a constant matrix \( {\mathbf{\Lambda }}_{p \times p} \geq 0 \), then \( \mathbf{Y} \) is said to be elliptically contoured (EC) distributed with parameters \( \mu ,\mathbf{\Lambda } \) and \( \phi \), denoted as \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) . Theorem 1.2.15. A univariate function \( \phi \left( \cdot \right) \) can be used to define an elliptically contoured (EC) distribution \( E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) for every \( \mu \in {\mathbb{R}}^{p} \) and \( \mathbf{\Lambda } \geq 0 \) with \( \operatorname{rank}\left( \mathbf{\Lambda }\right) = k \) if and only if \( \phi \in {\Phi }_{k} = \left\{ {\phi : \phi \left( {{t}_{1}^{2} + \cdots + {t}_{k}^{2}}\right) \text{ is a characteristic function}}\right\} \) . Proof : To see the " \( \Rightarrow \) " part, we choose \( \mu = \mathbf{0},\mathbf{\Lambda } = \left( \begin{matrix} {\mathbf{I}}_{k} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{matrix}\right) \) for \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) . Hence, \( {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) = \phi \left( {{\mathbf{t}}^{T}\mathbf{\Lambda }\mathbf{t}}\right) = \phi \left( {{t}_{1}^{2} + \cdots + {t}_{k}^{2}}\right) \) is a characteristic function. Thus, \( \phi \in {\Phi }_{k} \) . For the sufficient part, since \( \phi \in {\Phi }_{k} \), the function \( \phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{t}}\right) \) is a characteristic function, so we may define a spherically distributed random vector \( \mathbf{X} \in {\mathbf{S}}_{k}\left( \phi \right) \) such that \( {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) = \phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{t}}\right) \) for \( \mathbf{t} \in {\mathbb{R}}^{k} \) .
For any \( \mathbf{\Lambda } \geq 0 \) with \( \operatorname{rank}\left( \mathbf{\Lambda }\right) = k \), there exists a \( k \times p \) matrix \( \mathbf{A} \) such that \( \mathbf{\Lambda } = {\mathbf{A}}^{\mathbf{T}}\mathbf{A} \) . Let \( \mathbf{Y} = \mu + {\mathbf{A}}^{\mathbf{T}}\mathbf{X} \) . The characteristic function of \( \mathbf{Y} \) is \[ {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) = {e}^{i{\mathbf{t}}^{\mathbf{T}}\mu } \cdot \phi \left( {{\mathbf{t}}^{\mathbf{T}}{\mathbf{A}}^{\mathbf{T}}\mathbf{A}\mathbf{t}}\right) = {e}^{i{\mathbf{t}}^{\mathbf{T}}\mu }\phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{\Lambda }\mathbf{t}}\right) \] for any \( \mathbf{t} \in {\mathbb{R}}^{p} \) . Hence, \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) and \( \phi \) can be used to define an elliptically contoured distribution. Corollary 1.2.7 \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) with \( \operatorname{rank}\left( \mathbf{\Lambda }\right) = k \) if and only if \( \mathbf{Y}\overset{d}{ = }\mu + R{\mathbf{A}}^{T}{U}^{\left( k\right) } \), where \( R \) and \( {U}^{\left( k\right) } \) are independent, \( R \) has a distribution function \( F \), \( \phi \left( t\right) = {\int }_{0}^{\infty }{\Omega }_{k}\left( {t{r}^{2}}\right) {dF}\left( r\right) \), and \( \mathbf{A} \) satisfies \( \mathbf{\Lambda } = {\mathbf{A}}^{\mathbf{T}}\mathbf{A} \) . Proof: For " \( \Leftarrow \) ", we let \( \mathbf{X} = R{U}^{\left( k\right) } \in {S}_{k}\left( \phi \right) \), whose characteristic function is \( {\phi }_{\mathbf{X}}\left( \mathbf{t}\right) = \phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{t}}\right) \) . Then let \( \mathbf{Y} = \mu + {\mathbf{A}}^{\mathbf{T}}\mathbf{X} \), whose c.f.
is \( {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) = {e}^{i{\mathbf{t}}^{\mathbf{T}}\mu }\phi \left( {{\mathbf{t}}^{\mathbf{T}}{\mathbf{A}}^{\mathbf{T}}\mathbf{A}\mathbf{t}}\right) = {e}^{i{\mathbf{t}}^{\mathbf{T}}\mu }\phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{\Lambda }\mathbf{t}}\right) \) . Therefore, \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) . For the " \( \Rightarrow \) " part, \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) implies \( {\phi }_{\mathbf{Y}}\left( \mathbf{t}\right) = {e}^{i{\mathbf{t}}^{\mathbf{T}}\mu }\phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{\Lambda }\mathbf{t}}\right) \) . Suppose that \( \phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{t}}\right) \) is the characteristic function of \( R{U}^{\left( k\right) } \) . Note that the characteristic function of \( \mu + R{\mathbf{A}}^{\mathbf{T}}{U}^{\left( k\right) } \), where \( {\mathbf{A}}^{\mathbf{T}}\mathbf{A} = \mathbf{\Lambda } \), is also \( {e}^{i{\mathbf{t}}^{\mathbf{T}}\mu }\phi \left( {{\mathbf{t}}^{\mathbf{T}}\mathbf{\Lambda }\mathbf{t}}\right) \) . Thus \( \mathbf{Y}\overset{d}{ = }\mu + R{\mathbf{A}}^{\mathbf{T}}{U}^{\left( k\right) } \) . Corollary 1.2.16. If \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \), then \( {\mathbf{r}}_{m \times 1} + {\mathbf{B}}_{m \times p}\mathbf{Y}\overset{d}{ \sim }E{C}_{m}\left( {\mathbf{r} + \mathbf{B}\mu ,\mathbf{B}\mathbf{\Lambda }{\mathbf{B}}^{\mathbf{T}},\phi }\right) \) . Hence the EC family is closed under linear transformations, like the Gaussian family shown in Lemma 1.2.1. Suppose \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) ,\mathbf{\Lambda } > 0 \) .
If \( \mathbf{Y} \) admits a density, it must be of the form \( {\left| \mathbf{\Lambda }\right| }^{-1/2}f\left( {{\left( \mathbf{Y} - \mu \right) }^{T}{\mathbf{\Lambda }}^{-1}\left( {\mathbf{Y} - \mu }\right) }\right) \), where \( f \) qualifies for a density in \( {S}_{p}\left( \phi \right) \), i.e. \( f \geq 0 \) and \( \int f\left( {{y}^{T}y}\right) {dy} = 1 \) . If \( \mathbf{Y}\overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \) with \( \operatorname{rank}\left( \mathbf{\Lambda }\right) = k \), then from Corollary 1.2.7, \( \mathbf{Y}\overset{d}{ = }\mu + R{\mathbf{A}}^{\mathbf{T}}{U}^{\left( k\right) } \) such that \( \mathbf{\Lambda } = {\mathbf{A}}^{\mathbf{T}}\mathbf{A} \) and \( R \) and \( {U}^{\left( k\right) } \) are independent. Then we have \( E\left( \mathbf{Y}\right) = \mu \), as \( E\left( {U}^{\left( k\right) }\right) = \mathbf{0} \) . Furthermore, \[ \mathbf{\Sigma } := \operatorname{Var}\left( \mathbf{Y}\right) = E\left( {{R}^{2}{\mathbf{A}}^{\mathbf{T}}{U}^{\left( k\right) }{U}^{{\left( k\right) }^{T}}\mathbf{A}}\right) = E\left( {R}^{2}\right) {\mathbf{A}}^{\mathbf{T}}\operatorname{Var}\left( {U}^{\left( k\right) }\right) \mathbf{A} = E\left( {R}^{2}\right) \frac{\mathbf{\Lambda }}{\operatorname{rank}\left( \mathbf{\Lambda }\right) }, \] provided \( E\left( {R}^{2}\right) < \infty \) . Hence \( \mathbf{\Lambda } \) does not fully determine \( \mathbf{\Sigma } \), the variance of \( \mathbf{Y} \) ; there is a contribution from \( R \) as well.
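The moment formulas \( E\left( \mathbf{Y}\right) = \mu \) and \( \operatorname{Var}\left( \mathbf{Y}\right) = E\left( {R}^{2}\right) \mathbf{\Lambda }/\operatorname{rank}\left( \mathbf{\Lambda }\right) \) are easy to check by simulating from the stochastic representation \( \mathbf{Y} = \mu + R{\mathbf{A}}^{\mathbf{T}}{U}^{\left( k\right) } \). A numpy sketch (the specific \( \mathbf{A} \), \( \mu \) and law of \( R \) below are illustrative choices, not from the text); with \( {R}^{2} \sim {\chi }_{k}^{2} \) we have \( E\left( {R}^{2}\right) = k \), so \( \operatorname{Var}\left( \mathbf{Y}\right) = \mathbf{\Lambda } \) and \( \mathbf{Y} \) is in fact Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000
k, p = 2, 3

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0]])  # k x p, so Lam = A^T A has rank k
Lam = A.T @ A
mu = np.array([1.0, -1.0, 0.5])

# U^(k): uniform on the unit sphere in R^k; R: independent, R^2 ~ chi^2_k
G = rng.standard_normal((n, k))
U = G / np.linalg.norm(G, axis=1, keepdims=True)
R = np.sqrt(rng.chisquare(df=k, size=n))

Y = mu + (R[:, None] * U) @ A  # row-wise Y = mu + R A^T U^(k)

print(np.round(Y.mean(axis=0), 2))           # ~ mu
print(np.round(np.cov(Y, rowvar=False), 2))  # ~ E(R^2) Lam / k = Lam
```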
## Marginal Distributions To obtain a marginal distribution of an EC distribution, consider the following partitions \[ \mathbf{Y} = \left( \begin{matrix} {\mathbf{Y}}_{q \times 1}^{\left( \mathbf{1}\right) } \\ {\mathbf{Y}}_{\left( {p - q}\right) \times 1}^{\left( \mathbf{2}\right) } \end{matrix}\right) \overset{d}{ \sim }E{C}_{p}\left( {\mu ,\mathbf{\Lambda },\phi }\right) \text{ where } \] \[ \mu = \left( \begin{matrix} {\mu }_{q \times 1}^{\left( 1\right) } \\ {\mu }_{\left( {p - q}\right) \times 1}^{\left( 2\right) } \end{matrix}\right) \text{ and }\mathbf{\Lambda } = \left( \begin{array}{ll} {\mathbf{\Lambda }}_{11} & {\mathbf{\Lambda }}_{12} \\ {\mathbf{\Lambda }}_{21} & {\mathbf{\Lambda }}_{22} \end{array}\right) . \] From Corollary 1.2.16, \[ {\mathbf{Y}}^{\left( \mathbf{1}\right) } = \left( {{\mathbf{I}}_{q},{\mathbf{0}}_{q \times \left( {p - q}\right) }}\right) \mathbf{Y}\overset{d}{ \sim }E{C}_{q}\left( {{\mu }^{\left( 1\right) },{\mathbf{\Lambda }}_{11},\phi }\right) \;\text{ and } \] \[ {\mathbf{Y}}^{\left( \mathbf{2}\right) } = \left( {{\mathbf{0}}_{\left( {p - q}\right) \times q},{\mathbf{I}}_{\left( {p - q}\right) \times \left( {p - q}\right) }}\right) \mathbf{Y}\overset{d}{ \sim }E{C}_{p - q}\left( {{\mu }^{\left( 2\right) },{\mathbf{\Lambda }}_{22},\phi }\right) . \] Hence, \( \mathbf{Y} \) ’s marginal distributions are also EC. We note in passing that, if \( \phi \in {\Phi }_{p} \), then \( \phi \in {\Phi }_{p - q} \) for all \( q < p \), which then implies \( {\Phi }_{p} \subseteq {\Phi }_{q} \) . Suppose \( \mathbf{\Lambda } > 0 \) and \( \mathbf{Y} \) has a density of the form \[ {\left| \mathbf{\Lambda }\right| }^{-1/2}f\left( {{\left( \mathbf{Y} - \mu \right) }^{T}{\mathbf{\Lambda }}^{-1}\left( {\mathbf{Y} - \mu }\right) }\right) . \] Partition the covariance such that \[ \operatorname{Var}\left( \mathbf{Y}\right) = \mathbf{\Sigma } = \left( \begin{array}{ll} {\mathbf{\Sigma }}_{11} & {\mathbf{\Sigma }}_{12} \\ {\mathbf{\Sigma }}_{21} & {\mathbf{\Sigma }}_{22} \end{array}\right) , \] where \( {\mathbf{\Sigma }}_{ij} = \frac{E\left( {R}^{2}\right) }{\operatorname{rank}\left( \mathbf{\Lambda }\right) }{\mathbf{\Lambda }}_{ij} \) . Let \( {\mathbf{Z}}^{\left( \mathbf{1}\right) } = {\mathbf{Y}}^{\left( \mathbf{1}\right) } - {\mathbf{\Lambda }}_{12}{\mathbf{\Lambda }}_{22}^{-1}{\mathbf{Y}}^{\left( \mathbf{2}\right) } \) and \( {\mathbf{Z}}^{\left( \mathbf{2}\right) } = {\mathbf{Y}}^{\left( \mathbf{2}\right) } \), that is, \[ \left( \begin{matrix} {\mathbf{Y}}^{\left( \mathbf{1}\right) } \\ {\mathbf{Y}}^{\left( \mathbf{2}\right) } \end{matrix}\right) = \left( \begin{matrix} \mathbf{I} & {\mathbf{\Lambda }}_{12}{\mathbf{\Lambda }}_{22}^{-1} \\ \mathbf{0} & \mathbf{I} \end{matrix}\right) \left( \begin{matrix} {\mathbf{Z}}^{\left( \mathbf{1}\right) } \\ {\mathbf{Z}}^{\left( \mathbf{2}\right) } \end{matrix}\right) .
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 3.1.3
Definition 3.1.3. A generalized homology theory is a theory satisfying Axioms 1 through 6 . When we wish to stress the difference between it and a generalized homology theory, we will refer to a homology theory as an ordinary homology theory. Our results in this chapter will apply to any generalized homology theory, so in this chapter we let \( {H}_{i}\left( X\right) \) or \( {H}_{i}\left( {X, A}\right) \) denote a generalized homology group. We are being careful by denoting the induced maps on homology by \( {f}_{i} : {H}_{i}\left( X\right) \rightarrow \) \( {H}_{i}\left( Y\right) \), for example, but it is common practice to denote all these maps by \( {f}_{ * } \), so that \( {f}_{ * } : {H}_{i}\left( X\right) \rightarrow {H}_{i}\left( Y\right) \), and we shall sometimes follow this practice. (Sometimes the profusion of indices creates confusion rather than clarity.) We now introduce, for later reference, a property that a generalized homology theory may have. Definition 3.1.4. A generalized homology theory is compactly supported if for any pair \( \left( {X, A}\right) \) and any element \( \alpha \in {H}_{i}\left( {X, A}\right) \) there is a compact pair \( \left( {{X}_{0},{A}_{0}}\right) \subseteq \left( {X, A}\right) \) with \( \alpha = {j}_{ * }\left( {\alpha }_{0}\right) \) for some element \( {\alpha }_{0} \in {H}_{i}\left( {{X}_{0},{A}_{0}}\right) \), where \( j : \left( {{X}_{0},{A}_{0}}\right) \rightarrow \left( {X, A}\right) \) is the inclusion. ## 3.2 Consequences of the Axioms We first list some consequences of the axioms. Again, \( i \) is allowed to be arbitrary. Lemma 3.2.1. (i) \( {H}_{i}\left( \varnothing \right) = 0 \) . (ii) Let \( f : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) be a homotopy equivalence. Then \( {f}_{i} : {H}_{i}\left( {X, A}\right) \rightarrow \) \( {H}_{i}\left( {Y, B}\right) \) is an isomorphism. (iii) Suppose that \( A \) is a retract of \( X \) .
Let \( j : A \rightarrow X \) be the inclusion and \( r : X \rightarrow A \) be a retraction. Then \( {j}_{i} : {H}_{i}\left( A\right) \rightarrow {H}_{i}\left( X\right) \) is an injection and \( {r}_{i} : {H}_{i}\left( X\right) \rightarrow {H}_{i}\left( A\right) \) is a surjection. Furthermore, \[ {H}_{i}\left( X\right) \cong {H}_{i}\left( A\right) \oplus {H}_{i}\left( {X, A}\right) \] and \[ {H}_{i}\left( {X, A}\right) \cong \operatorname{Ker}\left( {r}_{i}\right) . \] (iv) Let \( X \) be a space and let \( {X}_{1} \) and \( {X}_{2} \) be unions of components of \( X \) . Let \( {j}^{1} : {X}_{1} \rightarrow \) \( X \) and \( {j}^{2} : {X}_{2} \rightarrow X \) be the inclusions. Then \( {j}_{i}^{1} + {j}_{i}^{2} : {H}_{i}\left( {X}_{1}\right) \oplus {H}_{i}\left( {X}_{2}\right) \rightarrow {H}_{i}\left( X\right) \) is an isomorphism. Lemma 3.2.1(ii) has the following useful generalization. Theorem 3.2.2. Let \( f : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) be a map of pairs and suppose that both \( f : X \rightarrow Y \) and \( f \mid A : A \rightarrow B \) are homotopy equivalences. Then \( {f}_{i} : {H}_{i}\left( {X, A}\right) \rightarrow {H}_{i}\left( {Y, B}\right) \) is an isomorphism for all \( i \) . Proof. We have the commutative diagram of exact sequences: ![21ef530b-1e09-406a-b041-cf4539af5c14_37_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_37_0.jpg) The first, second, fourth, and fifth vertical arrows are isomorphisms. Hence, by Lemma A.1.8, so is the third. It is convenient to formulate the following: Definition 3.2.3. Let \( X \) be a nonempty space. Let \( f : X \rightarrow * \) (the space consisting of a single point). The reduced homology group is \[ {\widetilde{H}}_{i}\left( X\right) = \operatorname{Ker}\left( {{f}_{i} : {H}_{i}\left( X\right) \rightarrow {H}_{i}\left( *\right) }\right) . \] \( \diamond \) Theorem 3.2.4. Let \( X \) be a nonempty space and let \( {x}_{0} \) be an arbitrary point of \( X \) .
Then for each \( i \) , (i) \( {\widetilde{H}}_{i}\left( X\right) \cong {H}_{i}\left( {X,{x}_{0}}\right) \) , (ii) \( {H}_{i}\left( X\right) \cong {H}_{i}\left( {x}_{0}\right) \oplus {\widetilde{H}}_{i}\left( X\right) \) . Proof. As \( {x}_{0} \) is a retract of \( X \), this is a special case of Lemma 3.2.1. Lemma 3.2.5. Let \( f : X \rightarrow Y \) be a map. Then \( f \) induces well-defined maps \( {\widetilde{f}}_{i} : {\widetilde{H}}_{i}\left( X\right) \rightarrow {\widetilde{H}}_{i}\left( Y\right) \) for every \( i \), where \( {\widetilde{f}}_{i} = {f}_{i} \mid {\widetilde{H}}_{i}\left( X\right) \) . Proof. This follows immediately from the commutativity of the diagram ![21ef530b-1e09-406a-b041-cf4539af5c14_37_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_37_1.jpg) Remark 3.2.6. Note that \( f : X \rightarrow Y \) does not induce a map of pairs \( f : \left( {X,{x}_{0}}\right) \rightarrow \left( {Y,{y}_{0}}\right) \) unless \( {y}_{0} = f\left( {x}_{0}\right) \), so we must be careful with the isomorphisms in Theorem 3.2.4. If \( X \) and \( Y \) are both path connected the situation is not so bad. For if \( Z \) is a path-connected space and \( {z}_{0} \) and \( {z}_{1} \) are any two points in \( Z \), the maps \( {j}_{0} : \{ * \} \rightarrow \left\{ {z}_{0}\right\} \subseteq Z \) and \( {j}_{1} : \{ * \} \rightarrow \left\{ {z}_{1}\right\} \subseteq Z \) are homotopic, so the maps they induce on homology agree, i.e., \( {H}_{i}\left( {z}_{0}\right) \) and \( {H}_{i}\left( {z}_{1}\right) \) are the same subgroup of \( {H}_{i}\left( Z\right) \) . But if \( Z \) is not path connected, this need not be the case (see Remark 4.1.2). We recall that if the closure of \( U \) is contained in the interior of \( A \), then the inclusion \( \left( {X - U, A - U}\right) \rightarrow \left( {X, A}\right) \) is excisive. It is easy to see that this cannot be weakened to the closure of \( U \) contained in \( A \), and we give examples in Example 4.1.11.
It also cannot be weakened to \( U \) contained in the interior of \( A \) . Examples are harder to construct, but we give two in Example 5.2.8. In practice, we often are in the situation where \( A \) is a closed set and we wish to excise the interior of \( A \), i.e., we wish to have the inclusion \( \left( {X - \operatorname{int}\left( A\right) ,\partial A}\right) \rightarrow \left( {X, A}\right) \) be excisive. This is true if \( \partial A \) sits "nicely" in \( X \) . To be precise, we have the following very useful result. Theorem 3.2.7. (1) Let \( A \) be a nonempty closed subset of \( X \) . Suppose that \( \partial A \) has an open neighborhood \( C \) in \( A \) such that the inclusions \( \left( {X - A}\right) \cup C \rightarrow X - \operatorname{int}\left( A\right) \) and \( \partial A \rightarrow C \) are both homotopy equivalences. Then \( \left( {X - \operatorname{int}\left( A\right) ,\partial A}\right) \rightarrow \left( {X, A}\right) \) is excisive. (2) Let \( A \) be a nonempty closed subset of \( X \) . Suppose that \( \partial A \) has an open neighborhood \( C \) in \( X - \operatorname{int}\left( A\right) \) such that the inclusions \( A \rightarrow C \cup A \) and \( \partial A \rightarrow C \) are both homotopy equivalences. Then \( \left( {X - \operatorname{int}\left( A\right) ,\partial A}\right) \rightarrow \left( {X, A}\right) \) is excisive. Proof. (1) Let \( V = A - C \) . Then \( V \) is a closed set in the interior of \( A \), so \( (X - V, A - \) \( V) \rightarrow \left( {X, A}\right) \) is excisive. But \( X - V = \left( {X - A}\right) \cup C \) and \( A - V = C \) . By hypothesis the first of these is homotopy equivalent to \( X - \operatorname{int}\left( A\right) \) and the second of these is homotopy equivalent to \( \partial A \), so by Theorem 3.2.2 \( \left( {X - \operatorname{int}\left( A\right) ,\partial A}\right) \rightarrow \left( {X, A}\right) \) is excisive. 
(2) Note that \( \operatorname{int}\left( A\right) \) is a set whose closure \( A \) is contained in the interior of \( C \cup A \) . Thus \( \left( {X - \operatorname{int}\left( A\right) ,\left( {C \cup A}\right) - \operatorname{int}\left( A\right) }\right) \rightarrow \left( {X, C \cup A}\right) \) is excisive. But \( \left( {C \cup A}\right) - \operatorname{int}\left( A\right) = C \) . By hypothesis \( C \) is homotopy equivalent to \( \partial A \) and \( C \cup A \) to \( A \), so again by Theorem 3.2.2 \( \left( {X - \operatorname{int}\left( A\right) ,\partial A}\right) \rightarrow \left( {X, A}\right) \) is excisive. Remark 3.2.8. It is easiest to visualize this theorem by considering the following pictures, where in (1) \( C \) is a "collar" of \( \partial A \) inside \( A \), and in (2) \( C \) is a "collar" of \( A \) outside \( A \) . ![21ef530b-1e09-406a-b041-cf4539af5c14_38_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_38_0.jpg) The way these situations most often arise is when \( \partial A \) is a strong deformation retract of \( C \) . Note that in (2), when \( C \) is outside \( A \), this is equivalent to \( W = C \cup A \) being an open neighborhood of \( A \) having \( A \) as a strong deformation retract. \( \diamond \) This theorem allows us to give two interpretations of relative homology. Recall we defined \( {cA} \), the cone on \( A \), in Definition 1.2.10. The identification space \( X{ \cup }_{A}{cA} \) is the quotient space of the disjoint union of \( X \) and \( {cA} \) under the identification of \( a \in A \subseteq X \) with \( \left( {a,0}\right) \in {cA} \) . We let \( * \) denote the "cone point", i.e., the point to which \( A \times \{ 1\} \) is identified. Also, in the second half of the theorem, \( X/A \) is the quotient space of \( X \) obtained by identifying \( A \) to a point, and we let \( * \) denote the point \( A/A \) . Theorem 3.2.9. (1) Let \( A \) be a nonempty subset of \( X \) .
Then for each \( i,{H}_{i}\left( {X, A}\right) \) is isomorphic to the reduced homology group \( {\widetilde{H}}_{i}\left( {X \cup {cA}}\right) \) . (2) Let \
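As a concrete illustration of part (1) of this theorem (a standard example, not taken from the text): for the pair \( \left( {X, A}\right) = \left( {{D}^{n},{S}^{n - 1}}\right) \) the cone on \( A \) is itself an \( n \) -disk, so \( X \cup {cA} \) is two disks glued along their boundary spheres, i.e., \( {S}^{n} \) :

```latex
% Standard illustration of the isomorphism H_i(X, A) = \widetilde{H}_i(X \cup cA)
% for (X, A) = (D^n, S^{n-1}).  Here cS^{n-1} \cong D^n, so
% D^n \cup_{S^{n-1}} cS^{n-1} \cong S^n, and therefore
H_i\left(D^n, S^{n-1}\right) \cong \widetilde{H}_i\left(S^n\right) \cong
\begin{cases}
  \mathbb{Z}, & i = n, \\
  0,          & i \neq n.
\end{cases}
```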
108_The Joys of Haar Measure
Definition 3.5.6
Definition 3.5.6. Let \( K \) be a commutative field, let \( A \) and \( B \) be two polynomials in \( K\left\lbrack X\right\rbrack \) with respective leading terms a and \( b \), and let \( {\alpha }_{i} \) and \( {\beta }_{j} \) be the roots of \( A \) and \( B \) in some algebraic closure of \( K \), repeated with multiplicity. We define the resultant \( R\left( {A, B}\right) \) of \( A \) and \( B \) by one of the following equivalent formulas: \[ R\left( {A, B}\right) = {a}^{\deg \left( B\right) }\mathop{\prod }\limits_{i}B\left( {\alpha }_{i}\right) = {\left( -1\right) }^{\deg \left( A\right) \deg \left( B\right) }{b}^{\deg \left( A\right) }\mathop{\prod }\limits_{j}A\left( {\beta }_{j}\right) \] \[ = {a}^{\deg \left( B\right) }{b}^{\deg \left( A\right) }\mathop{\prod }\limits_{{i, j}}\left( {{\alpha }_{i} - {\beta }_{j}}\right) . \] It is clear that \( R\left( {A, B}\right) = 0 \) if and only if \( A \) and \( B \) have a common root, hence if and only if \( \gcd \left( {A, B}\right) \) is not constant. Furthermore, the resultant is clearly multiplicative in \( A \) and \( B \), in other words, for instance \( R\left( {{A}_{1}{A}_{2}, B}\right) = \) \( R\left( {{A}_{1}, B}\right) R\left( {{A}_{2}, B}\right) \) . We give without proof the following slightly less trivial proposition. Proposition 3.5.7. Let \( \mathcal{O} \) be a subring of \( K \), and assume that \( A \) and \( B \) are in \( \mathcal{O}\left\lbrack X\right\rbrack \) . (1) We have \( R\left( {A, B}\right) \in \mathcal{O} \) . (2) There exist polynomials \( U\left( X\right) \) and \( V\left( X\right) \) in \( \mathcal{O}\left\lbrack X\right\rbrack \) such that \[ U\left( X\right) A\left( X\right) + V\left( X\right) B\left( X\right) = R\left( {A, B}\right) . \] Note that the second statement of the proposition does not simply follow from the extended Euclidean algorithm. Proposition 3.5.8. Let \( 1 \leq m < n \) . 

Then \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) = 1 \), unless \( m \mid n \) and \( n/m = {p}^{a} \) is a power of a prime \( p \), in which case \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) = {p}^{\phi \left( m\right) } \) . Proof. Assume first that \( \gcd \left( {m, n}\right) = 1 \) and that \( m > 1 \) . Thus \( {\Phi }_{m}\left( X\right) \mid \) \( \left( {{X}^{m} - 1}\right) /\left( {X - 1}\right) \), so by multiplicativity of the resultant \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) \mid R\left( {{X}^{m} - }\right. \) \( \left. {1,{\Phi }_{n}}\right) /R\left( {X - 1,{\Phi }_{n}}\right) \) . By definition of the resultant we have \[ R\left( {{X}^{m} - 1,{\Phi }_{n}}\right) = \pm {\mathcal{N}}_{\mathbb{Q}\left( {\zeta }_{n}\right) /\mathbb{Q}}\left( {{\zeta }_{n}^{m} - 1}\right) . \] Since \( m \) and \( n \) are coprime, \( {\zeta }_{n}^{m} - 1 \) is a conjugate of \( {\zeta }_{n} - 1 \), hence \[ R\left( {{X}^{m} - 1,{\Phi }_{n}}\right) = \pm {\mathcal{N}}_{\mathbb{Q}\left( {\zeta }_{n}\right) /\mathbb{Q}}\left( {{\zeta }_{n} - 1}\right) = \pm R\left( {X - 1,{\Phi }_{n}}\right) , \] proving that \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) = \pm 1 \) when \( \gcd \left( {m, n}\right) = 1 \) and \( m > 1 \) . If \( \gcd \left( {m, n}\right) = 1 \) and \( m = 1 \), we simply have \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) = R\left( {X - 1,{\Phi }_{n}}\right) = \) \( {\Phi }_{n}\left( 1\right) \), and by Proposition 3.5.4 this is equal to 1 unless \( n = {p}^{k} \) for some prime \( p \) and \( k \geq 1 \), in which case it is equal to \( p \) . Consider now the general case, and set \( d = \gcd \left( {m, n}\right) \) . 
By Lemma 3.5.3 we know that \( {\Phi }_{m}\left( X\right) \mid {\Phi }_{m/d}\left( {X}^{d}\right) \) and \( {\Phi }_{n}\left( X\right) \mid {\Phi }_{n/d}\left( {X}^{d}\right) \), and since the resultant is multiplicative it follows that \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) \mid R\left( {{\Phi }_{m/d}\left( {X}^{d}\right) ,{\Phi }_{n/d}\left( {X}^{d}\right) }\right) \) . In addition, it is clear that we also have \( R\left( {{\Phi }_{m/d}\left( {X}^{d}\right) ,{\Phi }_{n/d}\left( {X}^{d}\right) }\right) \mid R{\left( {\Phi }_{m/d},{\Phi }_{n/d}\right) }^{d} \) . Since \( \gcd \left( {m/d, n/d}\right) = 1 \), by what we have shown in the coprime case we deduce that \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) = 1 \), except perhaps when \( m/d = 1 \) (in other words \( m \mid n) \), and \( n/d = n/m = {p}^{k} \) is a prime power. We leave to the reader the proof that in this case \( R\left( {{\Phi }_{m},{\Phi }_{n}}\right) = \pm {p}^{\phi \left( m\right) } \), and that the signs of the resultants are always positive (see Exercise 22). ## 3.5.2 Field-Theoretic Properties of \( \mathbb{Q}\left( {\zeta }_{n}\right) \) Definition 3.5.9. Let \( n \geq 1 \) and let \( {\zeta }_{n} \) be a primitive \( n \) th root of unity. The \( n \) th cyclotomic field is the field \( {K}_{n} = \mathbb{Q}\left( {\zeta }_{n}\right) \) . Since \( {\zeta }_{{4m} + 2} = - {\zeta }_{{2m} + 1} \), we have \( {K}_{{4m} + 2} = {K}_{{2m} + 1} \), so we will in general assume that \( n ≢ 2\left( {\;\operatorname{mod}\;4}\right) \) . The basic field-theoretic properties of cyclotomic fields are summarized in the following proposition. Proposition 3.5.10. (1) The polynomial \( {\Phi }_{n}\left( X\right) \) is irreducible in \( \mathbb{Q}\left\lbrack X\right\rbrack \) ; in other words, \( {\Phi }_{n}\left( X\right) \) is the minimal polynomial of \( {\zeta }_{n} \) over \( \mathbb{Q} \), and \( \left\lbrack {\mathbb{Q}\left( {\zeta }_{n}\right) }\right. 
\) : \( \mathbb{Q}\rbrack = \phi \left( n\right) \), Euler’s phi function. (2) The extension \( \mathbb{Q}\left( {\zeta }_{n}\right) /\mathbb{Q} \) is a Galois extension, the Galois group \( G \) being canonically isomorphic to \( {\left( \mathbb{Z}/n\mathbb{Z}\right) }^{ * } \) by the map that sends \( a \in {\left( \mathbb{Z}/n\mathbb{Z}\right) }^{ * } \) to the automorphism \( {\sigma }_{a} \in G \) such that \( {\sigma }_{a}\left( {\zeta }_{n}\right) = {\zeta }_{n}^{a} \) . Proof. (I thank J.-F. Jaulent for the following proof of this very classical result.) Let \( P \) be an irreducible factor of \( {\Phi }_{n} \) in \( \mathbb{Q}\left\lbrack X\right\rbrack \), let \( \zeta \) be a root of \( P \) , and let \( p \) be a prime number such that \( p \nmid n \) . I claim that \( {\zeta }^{p} \) is also a root of \( P \) . Indeed, assume otherwise, and let \( Q \) be the minimal monic polynomial of \( {\zeta }^{p} \) over \( \mathbb{Q} \), so that \( Q \in \mathbb{Z}\left\lbrack X\right\rbrack \) . Since \( {\zeta }^{p} \) is not a root of \( P \) we have \( P \neq Q \) ; hence the irreducible polynomials \( P \) and \( Q \) are coprime. On the other hand, \( P\left( X\right) \left| {{\Phi }_{n}\left( X\right) }\right| \left( {{X}^{n} - 1}\right) \) by assumption, and since \( {\left( {\zeta }^{p}\right) }^{n} = 1 \) we also have \( Q\left( X\right) \mid \left( {{X}^{n} - 1}\right) \), hence \( P\left( X\right) Q\left( X\right) \mid \left( {{X}^{n} - 1}\right) \) since \( P \) and \( Q \) are coprime. On the other hand, \( \zeta \) is a root of \( Q\left( {X}^{p}\right) \), so that \( P\left( X\right) \mid Q\left( {X}^{p}\right) \), and using a bar to denote reduction modulo \( p \) we deduce that \( \bar{P}\left( X\right) \mid \bar{Q}\left( {X}^{p}\right) = \bar{Q}{\left( X\right) }^{p} \) , since \( \bar{Q} \in {\mathbb{F}}_{p}\left\lbrack X\right\rbrack \) . 
In particular, if \( \bar{R} \) is an irreducible factor of \( \bar{P} \) in \( {\mathbb{F}}_{p}\left\lbrack X\right\rbrack \) then \( \bar{R} \mid \bar{Q} \) . But since \( P\left( X\right) Q\left( X\right) \mid \left( {{X}^{n} - 1}\right) \) it follows that \( {\bar{R}}^{2} \mid {X}^{n} - \overline{1} \) in \( {\mathbb{F}}_{p}\left\lbrack X\right\rbrack \), in other words that \( {X}^{n} - \overline{1} \) is not coprime to its derivative (i.e., is not separable), which is absurd since its derivative is \( n{X}^{n - 1} \) and \( p \nmid n \), proving my claim. By induction on the number of prime divisors of \( k \) counted with multiplicity, it follows from my claim that for any \( k \) coprime with \( n,{\zeta }^{k} \) is a root of \( P \) . Since \( \zeta \) is a root of \( {\Phi }_{n} \), hence a primitive \( n \) th root of unity, it follows that all primitive \( n \) th roots of unity are roots of \( P \), hence that \( {\Phi }_{n} \mid P \), so that \( {\Phi }_{n} = P \), proving that \( {\Phi }_{n} \) is irreducible. The other statements of (1) have been proved in passing, and (2) is an immediate consequence and left to the reader. Note that the irreducibility of \( {\Phi }_{{p}^{k}}\left( X\right) \) follows immediately from the Eisenstein criterion (see Corollary 4.1.36). Corollary 3.5.11. There is a one-to-one correspondence between subfields of \( \mathbb{Q}\left( {\zeta }_{n}\right) \) and subgroups of \( {\left( \mathbb{Z}/n\mathbb{Z}\right) }^{ * } \) . Proof. This is simply Galois theory. Corollary 3.5.12. The subgroup of roots of unity of \( \mathbb{Q}\left( {\zeta }_{n}\right) \) is the group of \( \pm {\zeta }_{n}^{i} \) for \( 0 \leq i < n \), or equivalently, the subgroup of order \( {2n} \) generated by \( {\zeta }_{2n} = - {\zeta }_{n} \) when \( n \) is odd, or the subgroup of order \( n \) generated by \( {\zeta }_{n} \) when \( 4 \mid n \) . Proof. 
Let \( {\zeta }_{m} \) be an \( m \) th root of unity in \( \mathbb{Q}\left( {\zeta }_{n}\right) \) . We thus have \( \mathbb{Q}\left( {\zeta }_{m}\right) \subset \) \( \mathbb{Q}\left( {\zeta }_{n}\right) \), hence by the proposition \( \phi \left( m\right) \mid \phi \left( n\right) \) . Since \( \phi \left( m\right) \) tends to infinity with \( m \), it follows that \( m \) is bounded as a function of \( n \) ; hence the group of roots of unity in \( \mathbb{Q}\left( {\zeta }_{n}\right) \) is finite (this is of course true in any number field). By Corollary 2.4.3 it follows that it is a cyclic subgroup \( \left\langle {\zeta }_{m}\ri
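Proposition 3.5.8 is easy to check numerically. The sketch below is my own (not part of the text); it uses sympy's `cyclotomic_poly`, `resultant`, and `totient` to verify a few instances of both cases of the proposition:

```python
# Spot-check of Proposition 3.5.8: R(Phi_m, Phi_n) = p^phi(m) when m | n
# and n/m is a power of the prime p, and R(Phi_m, Phi_n) = 1 otherwise.
from sympy import symbols, resultant, cyclotomic_poly, totient

x = symbols('x')

def R(m, n):
    """Resultant of the m-th and n-th cyclotomic polynomials."""
    return resultant(cyclotomic_poly(m, x), cyclotomic_poly(n, x), x)

# m | n with n/m = p^a a prime power:
assert R(3, 9) == 3**totient(3)      # n/m = 3,  R = 3^2 = 9
assert R(2, 6) == 3**totient(2)      # n/m = 3,  R = 3^1 = 3
assert R(4, 12) == 3**totient(4)     # n/m = 3,  R = 3^2 = 9
assert R(1, 5) == 5**totient(1)      # R = Phi_5(1) = 5

# all remaining cases give 1:
assert R(3, 4) == 1                  # gcd(3, 4) = 1
assert R(6, 15) == 1                 # gcd = 3, but 6 does not divide 15
```

Here `resultant` computes the standard Sylvester resultant, which agrees with the first formula of Definition 3.5.6; the checks also confirm the claim of Exercise 22 that the signs are positive.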
1225_(Griffiths) Introduction to Algebraic Curves
Definition 3.1
Definition 3.1. Suppose \( C \) is a compact Riemann surface of genus \( g \geq 2 \) and \( \left\{ {{\omega }_{1},\ldots ,{\omega }_{g}}\right\} \) is a basis of \( {\Omega }^{1}\left( C\right) \) . Then the mapping \[ {\varphi }_{K} : C \rightarrow {\mathbb{P}}^{g - 1} \] \[ p \mapsto \left\lbrack {{\omega }_{1}\left( p\right) ,\ldots ,{\omega }_{g}\left( p\right) }\right\rbrack \] is called the canonical map of \( C \) . REMARK 3.2. The precise interpretation of this definition is as follows: if \( z \) is a local coordinate near \( p \), suppose near \( p \) that \[ {\omega }_{\alpha } = {f}_{\alpha }\left( z\right) {dz}\;\left( {\alpha = 1,\ldots, g}\right) , \] then \[ {\varphi }_{K}\left( p\right) = \left\lbrack {{f}_{1}\left( {z\left( p\right) }\right) ,\ldots ,{f}_{g}\left( {z\left( p\right) }\right) }\right\rbrack . \] EXERCISE 3.1. Prove that the above definition of \( {\varphi }_{K}\left( p\right) \) is independent of the choice of the local coordinate near \( p \) . REMARK 3.3. We should point out that Definition 3.1 is well defined, that is to say, for any \( p \in C,{\omega }_{1}\left( p\right) ,\ldots ,{\omega }_{g}\left( p\right) \) are not all zero. In fact, if for a certain \( p \in C,{\omega }_{\alpha }\left( p\right) = 0\left( {\forall \alpha = 1,\ldots, g}\right) \), then \( {\Omega }^{1}\left( p\right) = {\Omega }^{1}\left( C\right) \) and hence \[ i\left( p\right) = \dim {\Omega }^{1}\left( C\right) = g. \] By the Riemann-Roch theorem, we have \[ l\left( p\right) = 1 - g + g + 1 = 2, \] which implies that there exists a nonconstant meromorphic function \( f \) in \( \mathcal{L}\left( p\right) \), with \( {f}^{-1}\left( \infty \right) = p \) . Therefore \[ \deg f = 1\text{.} \] In other words, \[ f : C \rightarrow {\mathbb{P}}^{1} \] is an isomorphism. However \( C \) has genus \( g > 1 \), whereas \( {\mathbb{P}}^{1} \) has genus 0 , which is a contradiction. The canonical map possesses the following properties. Proposition 3.4. 
\( {\varphi }_{K} \) is nondegenerate. (See Theorem 10.1 in Chapter I.) Proof. Let us assume that \( {\varphi }_{K} \) is degenerate. Then there exist \( {\lambda }_{\alpha } \in \mathbb{C} \) \( \left( {\alpha = 1,2,\ldots, g}\right) \), not all zero, such that for all \( p \in C \) we have \[ \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\lambda }_{\alpha }{\omega }_{\alpha }\left( p\right) = 0 \] Then \[ \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\lambda }_{\alpha }{\omega }_{\alpha } = 0 \] which contradicts the fact that \( \left\{ {{\omega }_{1},\ldots ,{\omega }_{g}}\right\} \) is a basis of \( {\Omega }^{1}\left( C\right) \) . Q.E.D. Proposition 3.5. If \[ {\varphi }_{K}\left( p\right) = {\varphi }_{K}\left( q\right) \] where \( p, q \in C, p \neq q \), then there exists a holomorphic mapping of degree 2 from \( C \) into \( {\mathbb{P}}^{1} \) . Proof. \( {\varphi }_{K}\left( p\right) = {\varphi }_{K}\left( q\right) \) implies that \[ {\omega }_{\alpha }\left( p\right) = \lambda {\omega }_{\alpha }\left( q\right) \;\left( {\alpha = 1,\ldots, g}\right) , \] for some \( 0 \neq \lambda \in \mathbb{C} \) . (Note: This means that in terms of suitable local coordinates \( z, w \) around \( p, q \) respectively, if \( {\omega }_{\alpha } = {f}_{\alpha }\left( z\right) {dz} \) near \( p \) and \( {\omega }_{\alpha } = {g}_{\alpha }\left( w\right) {dw} \) near \( q \), then \( {f}_{\alpha }\left( z\right) = \lambda {g}_{\alpha }\left( w\right) \) for all \( \alpha \) .) Choose \( D = p + q \) and calculate \( i\left( D\right) \) . 
First, we have \[ {\Omega }^{1}\left( D\right) = {\Omega }^{1}\left( p\right) \] In fact, for any \( \omega \in {\Omega }^{1}\left( C\right) \) if \[ \omega = \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\mu }_{\alpha }{\omega }_{\alpha } \] where \( {\mu }_{\alpha } \in \mathbb{C},\left( {\alpha = 1,\ldots, g}\right) \), then \[ \omega \left( p\right) = 0 \Leftrightarrow \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\mu }_{\alpha }{\omega }_{\alpha }\left( p\right) = 0 \] \[ \Leftrightarrow \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\mu }_{\alpha }\lambda {\omega }_{\alpha }\left( q\right) = 0 \] \[ \Leftrightarrow \omega \left( q\right) = 0\text{.} \] So \( {\Omega }^{1}\left( p\right) = {\Omega }^{1}\left( {p + q}\right) = {\Omega }^{1}\left( D\right) \) . Thus to calculate \( i\left( D\right) \), it suffices to calculate \( i\left( p\right) \) . We consider the linear equation with variables \( {\lambda }_{1},\ldots ,{\lambda }_{g} \) : \[ {\lambda }_{1}{\omega }_{1}\left( p\right) + {\lambda }_{2}{\omega }_{2}\left( p\right) + \cdots + {\lambda }_{g}{\omega }_{g}\left( p\right) = 0. \] From Remark 3.3 we see that the \( {\omega }_{\alpha }\left( p\right) \;\left( {\alpha = 1,\ldots, g}\right) \) are not all zero. Hence this linear equation has \( \left( {g - 1}\right) \) linearly independent solution vectors \( \left( {{\lambda }_{1},\ldots ,{\lambda }_{g}}\right) \) in \( {\mathbb{C}}^{g} \) which then give rise to \( \left( {g - 1}\right) \) linearly independent elements in \( {\Omega }^{1}\left( p\right) \) . Thus we have \[ i\left( D\right) = i\left( p\right) = g - 1. \] We apply the Riemann-Roch theorem to \( D = p + q \) to get \[ l\left( D\right) = 2 - g + \left( {g - 1}\right) + 1 = 2, \] and therefore there exists a nonconstant meromorphic function \( f \in \) \( \mathcal{L}\left( {p + q}\right) \) . 
Consider \( f \) to be a holomorphic mapping from \( C \) into \( {\mathbb{P}}^{1} \) ; then \[ {f}^{-1}\left( \infty \right) = p + q \] \( \left( {{f}^{-1}\left( \infty \right) }\right. \) cannot be a single point \( p \) (or \( q \) ), otherwise we would have \( C \cong {\mathbb{P}}^{1} \) which contradicts the fact that \( g \geq 2 \) ). Hence \[ \deg f = 2\text{. Q.E.D.}{}^{\left( 1\right) } \] \( {}^{\left( 1\right) } \) This argument may be shortened using Brill-Noether reciprocity. We can divide the set of compact Riemann surfaces of genus \( g > 1 \) into two classes, depending on whether or not there exists a holomorphic mapping of degree 2 from the Riemann surface into \( {\mathbb{P}}^{1} \) . DEFINITION 3.6. A compact Riemann surface \( C \) of genus \( g > 1 \) is called hyperelliptic if there exists a holomorphic mapping of degree 2 from \( C \) into \( {\mathbb{P}}^{1} \) ; otherwise \( C \) is called nonhyperelliptic. Below we shall first consider the canonical maps on nonhyperelliptic compact Riemann surfaces. The case of hyperelliptic compact Riemann surfaces will be discussed in detail in the next section. DEFINITION 3.7. Suppose \( C \) is a nonhyperelliptic compact Riemann surface of genus \( g \geq 2 \), and \[ {\varphi }_{K} : C \rightarrow {\mathbb{P}}^{g - 1} \] is its canonical map. Then \[ {\varphi }_{K}\left( C\right) \subset {\mathbb{P}}^{g - 1} \] is called a canonical curve. We may change \( g \geq 2 \) to \( g \geq 3 \) in this definition, for in \( §5 \) of this chapter we shall prove that all compact Riemann surfaces of \( g = 2 \) are hyperelliptic. Proposition 3.8. If \( C \) is a nonhyperelliptic compact Riemann surface, then the canonical map \( {\varphi }_{K} \) is injective, and \( {\varphi }_{K}\left( C\right) \) is smooth. Proof. Proposition 3.5 implies that \( {\varphi }_{K} \) is injective. 
In order to prove that \( {\varphi }_{K}\left( C\right) \) is smooth, we only need to prove that the differential \( {\left( {\varphi }_{K}\right) }_{ * } \) is everywhere nonsingular (because \( {\varphi }_{K} \) is already an injective mapping). Suppose \( p \in C \) . During the course of proving Proposition 3.5, we already proved that \[ \dim {\Omega }^{1}\left( p\right) = g - 1 \] We now claim that \[ \dim {\Omega }^{1}\left( {2p}\right) = g - 2. \] In fact, suppose \( z \) is a local coordinate near the point \( p \) with \( z\left( p\right) = 0 \) . Then near \( p \) we have \[ {\omega }_{\alpha } = {f}_{\alpha }\left( z\right) {dz}\;\left( {\alpha = 1,\ldots, g}\right) , \] where \( {\omega }_{1},\ldots ,{\omega }_{g} \) represent the basis of \( {\Omega }^{1}\left( C\right) \) as usual. For any \( \omega \in \) \( {\Omega }^{1}\left( {2p}\right) \), suppose \[ \omega = \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\lambda }_{\alpha }{\omega }_{\alpha } \] where \( {\lambda }_{\alpha } \in \mathbb{C}\left( {\alpha = 1,\ldots, g}\right) \) . We then have \[ \left\{ \begin{array}{l} \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\lambda }_{\alpha }{f}_{\alpha }\left( 0\right) = 0 \\ \mathop{\sum }\limits_{{\alpha = 1}}^{g}{\lambda }_{\alpha }{f}_{\alpha }^{\prime }\left( 0\right) = 0 \end{array}\right. \] It therefore suffices to prove that these two linear equations in \( {\lambda }_{\alpha }(\alpha = \) \( 1,\ldots, g) \), are linearly independent. Suppose not, then \[ {\Omega }^{1}\left( p\right) = {\Omega }^{1}\left( {2p}\right) \] and by the Riemann-Roch theorem, we have \[ l\left( {2p}\right) = 2 - g + \left( {g - 1}\right) + 1 = 2. \] It then follows that there exists a nonconstant meromorphic function \( f \) with second-order pole \( p \) in \( \mathcal{L}\left( {2p}\right) \) (as before, \( f \) cannot have \( p \) as first-order pole, for then we would have \( C \cong {\mathbb{P}}^{1} \), which would contradict the fact that \( g \geq 2 \) ). 
We look at \( f \) as a mapping from \( C \) into \( {\mathbb{P}}^{1} \) . Then \[ \deg f = 2 \] and this contradicts the nonhyperellipticity of \( C \) . Thus, we have \[ \dim \left( {{\Omega }^{1}\left( C\right) /{\Omega }^{1}\left( p\right) }\right) = 1,\;\dim \left( {{\Omega }^{1}\left( p\right) /{\Omega }^{1}\left( {2p}\right) }\right) = 1, \] as well as \[ \dim {\Omega }^{1}\left( {2p}\right) = g - 2 \] So we can then choose a basis \( {\omega }_{1},\ldots ,{\omega }_{g} \) of \( {\Omega }^{1}\left( C\right) \) such that near the point \( p \) we have \[ \left\{ \begin{array}{l} {\omega }_{1} = {h}_{1}\left( z\right) {dz}, \\ {\omega }_{2} = z{h}_{2}\left( z\right) {dz}, \\ {\
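A standard low-genus example of the canonical curves just defined (not from the text): the canonical divisor has degree \( {2g} - 2 \), so for a nonhyperelliptic surface of genus \( g = 3 \) the canonical map embeds \( C \) in \( {\mathbb{P}}^{2} \) as a smooth plane quartic:

```latex
% Degree of the canonical curve (a standard fact): \deg K = 2g - 2.
% For nonhyperelliptic g = 3 this gives
\varphi_K : C \hookrightarrow \mathbb{P}^{2},
\qquad \deg \varphi_K(C) = 2g - 2 = 4,
% i.e. the canonical curve is a smooth plane quartic; conversely, every
% smooth plane quartic is a canonically embedded curve of genus 3.
```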
1126_(GTM32)Lectures in Abstract Algebra III. Theory of Fields and Galois Theory
Definition 1
Definition 1. Let \( \mathrm{P} \) be an abelian extension of a field \( \Phi \) . Then \( \mathrm{P}/\Phi \) is called a Kummer m-extension if the Galois group of \( \mathrm{P}/\Phi \) is of exponent \( m \) and \( \Phi \) contains \( m \) distinct \( m \) -th roots of 1 . We shall now suppose that \( \Phi \) is a given field which contains \( m \) distinct \( m \) -th roots of 1. The field \( \Phi \) and the integer \( m \) will be fixed throughout our discussion. We are interested in obtaining a survey of the Kummer \( {m}^{\prime } \) -extensions \( \mathrm{P}/\Phi \) where \( {m}^{\prime } \mid m \) . We recall that the condition that \( \Phi \) contain \( m \) distinct \( m \) -th roots of 1 implies that the characteristic is not a divisor of \( m \) (§ 2.2). If \( \mathrm{P}/\Phi \) is a Kummer \( {m}^{\prime } \) -extension where \( {m}^{\prime } \mid m \), then \( \left\lbrack {\mathrm{P} : \Phi }\right\rbrack = \left( {G : 1}\right) \) and, since the exponent and order of a finite commutative group are divisible by the same primes, we see that the characteristic is not a divisor of \( \left\lbrack {\mathrm{P} : \Phi }\right\rbrack \) . Let \( \Phi \) and \( m \) be as indicated and let \( \mathrm{P}/\Phi \) be a Kummer \( {m}^{\prime } \) -extension, \( {m}^{\prime } \mid m \) . Let \( {\mathrm{P}}^{ * } \) and \( {\Phi }^{ * } \) be the multiplicative groups of non-zero elements of \( \mathrm{P} \) and \( \Phi \) respectively. For \( {\rho \varepsilon }{\mathrm{P}}^{ * } \), the mapping \( \rho \rightarrow {\rho }^{m} \) is an endomorphism of \( {\mathrm{P}}^{ * } \) which maps \( {\Phi }^{ * } \) into itself. The kernel of \( \rho \rightarrow {\rho }^{m} \) is \( Z\left( m\right) \), the group of order \( m \) of \( m \) -th roots of 1, and \( Z\left( m\right) \subseteq {\Phi }^{ * } \) . 
Let (7) \[ M\left( \mathrm{P}\right) = \left\{ {{\rho \varepsilon }{\mathrm{P}}^{ * } \mid {\rho }^{m}\varepsilon {\Phi }^{ * }}\right\} \] (8) \[ N\left( \mathrm{P}\right) = \left\{ {{\rho }^{m} \mid {\rho \varepsilon M}\left( \mathrm{P}\right) }\right\} . \] Thus \( M\left( \mathrm{P}\right) \) consists of the \( m \) -th roots in \( \mathrm{P} \) of the elements of \( {\Phi }^{ * } \) and \( N\left( \mathrm{P}\right) \) is the set of elements of \( {\Phi }^{ * } \) which are \( m \) -th powers of elements of \( \mathrm{P} \) . It is clear that \( M\left( \mathrm{P}\right) \) is a subgroup of \( {\mathrm{P}}^{ * } \) containing \( {\Phi }^{ * } \) and \( N\left( \mathrm{P}\right) \) is a subgroup of \( {\Phi }^{ * } \) containing \( {\Phi }^{*m} = \) \( \left\{ {{\alpha }^{m} \mid {\alpha \varepsilon }{\Phi }^{ * }}\right\} \) . Let \( {\rho \varepsilon M}\left( \mathrm{P}\right) \) and set \( {\chi }_{\rho }\left( s\right) = {\rho }^{s}{\rho }^{-1},{s\varepsilon G} \) . Since \( {\rho }^{m} = {\alpha \varepsilon \Phi } \) , \( {\left( {\rho }^{s}\right) }^{m} = \alpha \) so \( {\rho }^{s}{\rho }^{-1}\varepsilon Z\left( m\right) \) . Moreover, since \( Z\left( m\right) \subseteq \Phi \) , \[ {\chi }_{\rho }\left( {st}\right) = {\rho }^{st}{\rho }^{-1} = {\left( {\rho }^{s}{\rho }^{-1}\right) }^{t}\left( {{\rho }^{t}{\rho }^{-1}}\right) = {\chi }_{\rho }\left( s\right) {\chi }_{\rho }\left( t\right) . \] Thus we see that \( {\chi }_{\rho }\varepsilon \) Hom \( \left( {G, Z}\right), Z = Z\left( m\right) \), which is a character group of the finite commutative group \( G \) since the exponent of \( G \) is a divisor of \( m \) . Conversely, let \( \chi \) be any element of Hom \( \left( {G, Z}\right) \) . Then we have \( \chi \left( {st}\right) = \chi \left( s\right) \chi \left( t\right) = \chi {\left( s\right) }^{t}\chi \left( t\right) \), so Noether’s equations are satisfied. Consequently, by Noether's theorem (Th. 
1.19), there exists a non-zero element \( {\rho \varepsilon }\mathrm{P} \) such that \( \chi \left( s\right) = \) \( {\rho }^{s}{\rho }^{-1} \) . Since \( {\rho }^{s}{\rho }^{-1}\varepsilon Z \) we have \( {\left( {\rho }^{s}\right) }^{m} = {\rho }^{m} \) or \( {\left( {\rho }^{m}\right) }^{s} = {\rho }^{m} \) for every \( {s\varepsilon G} \) . This implies that \( {\rho }^{m}{\varepsilon \Phi } \) and so \( {\rho \varepsilon M}\left( \mathrm{P}\right) \) . We have therefore shown that every element of the character group Hom \( \left( {G, Z}\right) \) is of the form \( \chi \left( s\right) = {\rho }^{s}{\rho }^{-1},\rho \) in \( M\left( \mathrm{P}\right) \) . If \( {\rho }_{1},{\rho }_{2}{\varepsilon M}\left( \mathrm{P}\right) \) and \( {\chi }_{{\rho }_{1}},{\chi }_{{\rho }_{2}} \) are the corresponding characters of \( G \), then \( {\chi }_{{\rho }_{1}{\rho }_{2}}\left( s\right) = \) \( {\left( {\rho }_{1}{\rho }_{2}\right) }^{s}{\left( {\rho }_{1}{\rho }_{2}\right) }^{-1} = {\rho }_{1}{}^{s}{\rho }_{1}{}^{-1}{\rho }_{2}{}^{s}{\rho }_{2}{}^{-1} = {\chi }_{{\rho }_{1}}\left( s\right) {\chi }_{{\rho }_{2}}\left( s\right) \) . Hence the mapping \( \rho \rightarrow {\chi }_{\rho } \) is a homomorphism of \( M\left( \mathrm{P}\right) \) onto Hom \( \left( {G, Z}\right) \) . The kernel of this homomorphism is the set of elements \( {\rho \varepsilon M}\left( \mathrm{P}\right) \) such that \( {\rho }^{s}{\rho }^{-1} = 1,{s\varepsilon G} \) . This is just the set of elements satisfying \( {\rho }^{s} = \rho ,{s\varepsilon G},\rho \neq 0 \) and so it is \( {\Phi }^{ * } \) . It is convenient to state the result which we have just obtained on the homomorphism of \( M\left( \mathrm{P}\right) \) onto Hom \( \left( {G, Z}\right) \) as a result on exact sequences of group homomorphisms. 
If \( {G}_{1},{G}_{2},\cdots ,{G}_{k} \) are groups and \( {\eta }_{i} \) is a homomorphism of \( {G}_{i} \) into \( {G}_{i + 1} \), then we say that the sequence \[ {G}_{1}\underset{{\eta }_{1}}{ \rightarrow }{G}_{2}\underset{{\eta }_{2}}{ \rightarrow }\cdots \rightarrow {G}_{k - 1}\underset{{\eta }_{k - 1}}{ \rightarrow }{G}_{k} \] is exact if for each \( i = 1,2,\cdots, k - 2 \) the image of \( {G}_{i} \) under \( {\eta }_{i} \) coincides with the kernel of \( {\eta }_{i + 1} \) . If 1 denotes the group consisting of 1 alone then the only homomorphism of 1 into any group \( G \) is \( 1 \rightarrow 1 \) . It follows from this and the definition of exactness that \( 1 \rightarrow {G}_{1}\underset{\eta }{ \rightarrow }{G}_{2} \) is exact if and only if \( \eta \) is \( 1 - 1 \) and \( {G}_{1}\underset{\eta }{ \rightarrow }{G}_{2} \rightarrow 1 \) is exact if and only if \( \eta \) is surjective. Using this terminology we can state the following theorem. Theorem 7. Let \( \Phi \) be a field containing \( m \) distinct \( m \) -th roots of 1 and let \( \mathrm{P}/\Phi \) be a Kummer \( {m}^{\prime } \) -extension where \( {m}^{\prime } \mid m \) . Let \( M\left( \mathrm{P}\right) \) be defined by (7) where \( {\mathrm{P}}^{ * } \) is the multiplicative group of \( \mathrm{P} \) and \( {\Phi }^{ * } \) is the multiplicative group of \( \Phi \) . Then we have the exact sequence of multiplicative groups \[ 1 \rightarrow {\Phi }^{ * } \rightarrow M\left( \mathrm{P}\right) \rightarrow \operatorname{Hom}\left( {G, Z}\right) \rightarrow 1 \] where the homomorphism of \( {\Phi }^{ * } \) is the inclusion mapping and that of \( M\left( \mathrm{P}\right) \) is \( \rho \rightarrow {\chi }_{\rho },{\chi }_{\rho }\left( s\right) = {\rho }^{s}{\rho }^{-1} \) . The factor group \( M\left( \mathrm{P}\right) /{\Phi }^{ * } \) is finite and isomorphic to \( G \) . 
We have \( \mathrm{P} = \Phi \left( {M\left( \mathrm{P}\right) }\right) \) and \( \mathrm{P} = \) \( \Phi \left( {{\rho }_{1},{\rho }_{2},\cdots ,{\rho }_{r}}\right) ,{\rho }_{i} \) in \( M\left( \mathrm{P}\right) \), if and only if the cosets \( {\rho }_{i}{\Phi }^{ * } \) generate \( M\left( \mathrm{P}\right) /{\Phi }^{ * } \) . Proof. The first statement on the exactness of the displayed sequence means that \( {\Phi }^{ * } \) is the kernel of the mapping \( \rho \rightarrow {\chi }_{\rho } \) and this mapping is surjective on Hom \( \left( {G, Z}\right) \) . Both of these facts were established above. Consequently, we have Hom \( \left( {G, Z}\right) \cong \) \( M\left( \mathrm{P}\right) /{\Phi }^{ * } \) . Since Hom \( \left( {G, Z}\right) \cong G \), by Theorem 6, we have \( M\left( \mathrm{P}\right) /{\Phi }^{ * } \cong G \) . This proves the second statement. Now let \( {\rho }_{1},\cdots ,{\rho }_{r} \) be elements of \( M\left( \mathrm{P}\right) \) such that the cosets \( {\rho }_{i}{\Phi }^{ * } \) generate the finite group \( M\left( \mathrm{P}\right) /{\Phi }^{ * } \) . Clearly the homomorphism \( \rho \rightarrow {\chi }_{\rho } \) of \( M\left( \mathrm{P}\right) \) gives the isomorphism \( \rho {\Phi }^{ * } \rightarrow {\chi }_{\rho } \) of \( M\left( \mathrm{P}\right) /{\Phi }^{ * } \) onto Hom \( \left( {G, Z}\right) \) . Hence we see that the characters \( {\chi }_{{\rho }_{i}} \) generate Hom \( \left( {G, Z}\right) \) . Now let \( {\mathrm{P}}^{\prime } = \Phi \left( {{\rho }_{1},{\rho }_{2},\cdots ,{\rho }_{r}}\right) \) and let \( H \) be the subgroup of \( G \) corresponding to \( {\mathrm{P}}^{\prime } \) (the Galois group of \( \mathrm{P}/{\mathrm{P}}^{\prime } \) ). If \( {t\varepsilon H} \), we have \( {\rho }_{i}{}^{t} = {\rho }_{i},1 \leq i \leq r \), so \( {\chi }_{{\rho }_{i}}\left( t\right) = 1 \) . This implies that \( \chi \left( t\right) = 1 \) for every \( \chi \) e Hom \( \left( {G, Z}\right) \) . 
It follows from Corollary 1 to Theorem 6 that \( t = 1 \) . Thus \( H = 1 \) which implies that \( {\mathrm{P}}^{\prime } = \Phi \left( {{\rho }_{1},\cdots ,{\rho }_{r}}\right) \) \( = \mathrm{P} \) and \( \mathrm{P} = \Phi \left( {M\left( \mathrm{P}\right) }\right) \) . Conversely, let \( {\rho }_{1},\cdots ,{\rho }_{r}{\varepsilon M}\left( \mathrm{P}\right) \) satisfy \( \Phi \left( {{\rho }_{1},\cdots ,{\rho }_{r}}\right) = \mathrm{P} \) and let \( {s\varepsilon G} \) . Then \( {\rho }_{i}{}^{s} = {\rho }_{i},1 \le
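The theory is easy to see in miniature over a finite field. The sketch below is my own illustration (the text works over a general field \( \Phi \) ): \( \Phi = {\mathbb{F}}_{13} \) contains \( m = 3 \) distinct cube roots of 1 since \( 3 \mid {13} - 1 \), and adjoining a cube root \( \rho \) of any non-cube \( a \) in \( {\Phi }^{ * } \) gives a Kummer 3-extension \( \mathrm{P} = \Phi \left( \rho \right) \) with \( \left\lbrack {\mathrm{P} : \Phi }\right\rbrack = 3 \) :

```python
# Kummer 3-extensions of F_13 in miniature (illustrative sketch, not the
# text's general setting): Phi*^m has index m in Phi*, and x^3 - a
# generates a degree-3 extension exactly when a is not in Phi*^m.
from sympy import symbols, Poly

x = symbols('x')
p, m = 13, 3                                  # 3 | 13 - 1, so F_13 has 3 cube roots of 1
cubes = {pow(a, m, p) for a in range(1, p)}   # the subgroup Phi*^m
assert len(cubes) == (p - 1) // m             # index m in Phi*

for a in range(1, p):
    # x^m - a is irreducible over F_13 iff a is not an m-th power, i.e.
    # iff rho with rho^m = a generates a Kummer m-extension of degree m.
    irreducible = Poly(x**m - a, x, modulus=p).is_irreducible
    assert irreducible == (a not in cubes)
```

Here the cosets \( a{\Phi }^{*m} \) play the role of \( M\left( \mathrm{P}\right) /{\Phi }^{ * } \) in Theorem 7.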
1172_(GTM8)Axiomatic Set Theory
Definition 10.5
Definition 10.5. \( \langle M,\mathbf{P}\rangle \) is a setting for forcing iff 1. \( M \) is a standard transitive model of \( {ZF} \) , 2. \( \mathbf{P} \) is a partially ordered structure and \( \mathbf{P} \in M \) , 3. \( M \) is countable. Remark. Under these assumptions we know that for each \( p \in P \) there is a \( G \) that is \( \mathbf{P} \) -generic over \( M \) and \( p \in G \) . In fact 1 and 3 could be weakened. In particular, it would be sufficient to require instead of 3 \( {3}^{\prime }.\mathcal{P}\left( \mathbf{P}\right) \cap M \) is countable. The following theorem is a kind of completeness theorem for forcing. Theorem 10.6. If \( \langle M,\mathbf{P}\rangle \) is a setting for forcing then \[ p \Vdash \varphi \leftrightarrow \left( {\forall {G}^{\prime }}\right) \left\lbrack {{G}^{\prime }\text{ is }\mathbf{P}\text{-generic over }M \land p \in {G}^{\prime } \rightarrow M\left\lbrack {G}^{\prime }\right\rbrack \vDash \varphi }\right\rbrack . \] Proof. Using the one-to-one correspondence between P-generic sets over \( M \) and \( M \) -complete homomorphisms from \( \mathbf{B} \) into 2 we need only show: \( p \Vdash \varphi \leftrightarrow \left( {\forall {h}^{\prime }}\right) \left\lbrack {{h}^{\prime } : \left| \mathbf{B}\right| \rightarrow \left| \mathbf{2}\right| }\right. \) is an \( M \) -complete homomorphism \[ \left. {\land {h}^{\prime }\left( {\left\lbrack p\right\rbrack }^{-0}\right) = \mathbf{1} \rightarrow {h}^{\prime }\left( {\llbracket \varphi \rrbracket }\right) = \mathbf{1}}\right\rbrack \text{.} \] But the right-hand side is equivalent to \[ {\left\lbrack p\right\rbrack }^{-0} \leq \llbracket \varphi \rrbracket \] which in turn is equivalent to each of the following: \[ p \in \llbracket \varphi \rrbracket \] \[ p \Vdash \varphi \text{.} \] Remark. We could also define forcing either (1) by using the recursive conditions of Theorem 10.4 or (2) by the equivalence of Theorem 10.6. 
On the other hand, the definition of forcing by (1) allows us to define a B-valued interpretation \( \llbracket \rrbracket \) by \[ \llbracket \varphi \rrbracket = \{ p \in P \mid p \Vdash \varphi \} . \] Theorem 10.7. \( q \leq p \land p \Vdash \varphi \rightarrow q \Vdash \varphi \) . Proof. Obvious from the definition. Corollary 10.8. \( \neg \left\lbrack {p \Vdash \varphi \land p \Vdash \neg \varphi }\right\rbrack \) . Remark. On the other hand, we need not have \( p \Vdash \varphi \vee p \Vdash \neg \varphi \) . Theorem 10.9. If \( G \) is \( \mathbf{P} \) -generic over \( M \) and \( S = \{ p \in P \mid p \Vdash \varphi \} \) is dense then \( M\left\lbrack G\right\rbrack \vDash \varphi \) . Proof. \( S = \llbracket \varphi \rrbracket \) is regular open in \( \mathbf{P} \) since \( \llbracket \varphi \rrbracket \in B \) . Then, since \( S \) is dense, \( S = {S}^{-0} = 1 \) . Therefore \( \llbracket \varphi \rrbracket = 1 \) . Definition 10.10. For each \( S \subseteq P, S \) is dense beneath \( p \) iff \( \left\lbrack p\right\rbrack \subseteq {S}^{ - } \) . Theorem 10.11. If \( G \) is P-generic over \( M \), if \( p \in G \) and if \( S \in M \) is dense beneath \( p \) then \( G \cap S \neq 0 \) . Proof. Under the given hypothesis if \[ {S}^{\prime } = S \cup \{ q \in P \mid \neg \operatorname{Comp}\left( {p, q}\right) \} \] then \( {S}^{\prime } \in M \) and \( {S}^{\prime } \) is dense, hence \( G \cap {S}^{\prime } \neq 0 \) . But any two elements of \( G \) are compatible, hence \( G \cap S \neq 0 \) . Theorem 10.12. If \( G \) is \( \mathbf{P} \) -generic over \( M \) and \( p \in G \) then \[ p \Vdash \left( {\exists x}\right) \varphi \left( x\right) \rightarrow \left( {\exists q \leq p}\right) \left( {\exists t \in T}\right) \left( {q \in G \land q \Vdash \varphi \left( t\right) }\right) . \] Proof. 
\[ p \Vdash \left( {\exists x}\right) \varphi \left( x\right) \leftrightarrow p \in \mathop{\sum }\limits_{{t \in T}}\llbracket \varphi \left( t\right) \rrbracket = {\left( \mathop{\bigcup }\limits_{{t \in T}}\llbracket \varphi \left( t\right) \rrbracket \right) }^{-0} \] \[ \leftrightarrow \left\lbrack p\right\rbrack \subseteq {\left( \mathop{\bigcup }\limits_{{t \in T}}\llbracket \varphi \left( t\right) \rrbracket \right) }^{ - }. \] So \( p \Vdash \left( {\exists x}\right) \varphi \left( x\right) \) implies that \( \mathop{\bigcup }\limits_{{t \in T}}\llbracket \varphi \left( t\right) \rrbracket \) is dense beneath \( p \), and the same holds for \( {S}^{\prime } = \left\lbrack p\right\rbrack \cap \mathop{\bigcup }\limits_{{t \in T}}\llbracket \varphi \left( t\right) \rrbracket \) . Also \( {S}^{\prime } \in M \) since \( B \in M \) . Therefore by Theorem 10.11 \[ p \Vdash \left( {\exists x}\right) \varphi \left( x\right) \rightarrow \left( {\exists q \in G}\right) \left\lbrack {q \leq p \land q \in \mathop{\bigcup }\limits_{{t \in T}}\llbracket \varphi \left( t\right) \rrbracket }\right\rbrack \] \[ \rightarrow \left( {\exists q}\right) \left( {\exists t \in T}\right) \left\lbrack {q \leq p \land q \in G \land q \Vdash \varphi \left( t\right) }\right\rbrack . \] ## 11. The Independence of \( V = L \) and the \( {CH} \) Cohen's technique of forcing was created for the specific purpose of proving the independence of several axioms of set theory from those of general set theory. In this section we will use Cohen's method to prove the independence of \( V = L \) and the \( {CH} \) from the axioms of \( {ZF} \) . Let \( M \) be a countable standard transitive model of \( {ZF} + V = L \) . Definition 11.1. \[ P \triangleq \left\{ {\left\langle {{p}_{1},{p}_{2}}\right\rangle \mid {p}_{1} \subseteq \omega \land {p}_{2} \subseteq \omega \land {\overline{\bar{p}}}_{1} < \omega \land {\overline{\bar{p}}}_{2} < \omega \land {p}_{1} \cap {p}_{2} = 0}\right\} . 
\] \[ \left\langle {{p}_{1},{p}_{2}}\right\rangle \leq \left\langle {{p}_{1}^{\prime },{p}_{2}^{\prime }}\right\rangle \overset{\Delta }{ \leftrightarrow }{p}_{1}^{\prime } \subseteq {p}_{1} \land {p}_{2}^{\prime } \subseteq {p}_{2}. \] \[ \mathbf{P} \triangleq \langle P, \leq \rangle \] \[ \forall G \subseteq P,\widetilde{a}\left( G\right) \triangleq \left\{ {n \in \omega \mid \left( {\exists {p}_{1},{p}_{2}}\right) \left\lbrack {n \in {p}_{1} \land \left\langle {{p}_{1},{p}_{2}}\right\rangle \in G}\right\rbrack }\right\} . \] \[ \forall a \subseteq \omega ,\widetilde{G}\left( a\right) \triangleq \left\{ {\left\langle {{p}_{1},{p}_{2}}\right\rangle \mid {p}_{1} \subseteq a \land {p}_{2} \subseteq \omega - a \land \left\langle {{p}_{1},{p}_{2}}\right\rangle \in P}\right\} . \] Exercise. Prove that the partial order structure \( \mathbf{P} \) is fine in the sense of Definition 5.21. Remark. \( P \in M,\widetilde{a}\left( G\right) \subseteq \omega \) and \( \widetilde{G}\left( a\right) \subseteq P \) . Lemma 1. \( {a}_{1} \subseteq \omega \land {a}_{2} \subseteq \omega \land {a}_{1} \neq {a}_{2} \rightarrow \widetilde{G}\left( {a}_{1}\right) \neq \widetilde{G}\left( {a}_{2}\right) \) . Proof. Without loss of generality we may assume \[ \left( {\exists n \in \omega }\right) \left\lbrack {n \in {a}_{1} \land n \notin {a}_{2}}\right\rbrack . \] Then \( \langle \{ n\} ,0\rangle \in \widetilde{G}\left( {a}_{1}\right) \) but \( \langle \{ n\} ,0\rangle \notin \widetilde{G}\left( {a}_{2}\right) \) . Lemma 2. If \( G \) is \( \mathbf{P} \) -generic over \( M \) then \( \widetilde{G}\left( {\widetilde{a}\left( G\right) }\right) = G \) . Proof. If \( G \) is \( \mathbf{P} \) -generic over \( M \) and \( p = \left\langle {{p}_{1},{p}_{2}}\right\rangle \in G \) then \( {p}_{1} \subseteq \widetilde{a}\left( G\right) \) and \( {p}_{2} \subseteq \omega - \widetilde{a}\left( G\right) \) . 
For if \( n \in {p}_{2} \) and \( n \in \widetilde{a}\left( G\right) \) then \( \exists q = \left\langle {{q}_{1},{q}_{2}}\right\rangle \in G, n \in {q}_{1} \) . Since \( p, q \in G \) , \( \exists r = \left\langle {{r}_{1},{r}_{2}}\right\rangle \in G, r \leq p \land r \leq q \) . Since \( r \leq p \land n \in {p}_{2} \) we have \( n \in {r}_{2} \) . But also \( n \in {r}_{1} \) since \( r \leq q \land n \in {q}_{1} \) . But \( r \in P \) and hence \( {r}_{1} \cap {r}_{2} = 0 \) . This is a contradiction. Hence \( G \subseteq \widetilde{G}\left( {\widetilde{a}\left( G\right) }\right) \) . \[ \text{If}p = \left\langle {{p}_{1},{p}_{2}}\right\rangle \in \widetilde{G}\left( {\widetilde{a}\left( G\right) }\right) \text{then}{p}_{1} \subseteq \widetilde{a}\left( G\right) \land {p}_{2} \subseteq \omega - \widetilde{a}\left( G\right) \text{. If} \] \[ {p}_{1} = \left\{ {{n}_{1},\ldots ,{n}_{k}}\right\} \] then since \( {p}_{1} \subseteq \widetilde{a}\left( G\right) ,\exists {q}^{i} \in G,{q}^{i} = \left\langle {{q}_{1}{}^{i},{q}_{2}{}^{i}}\right\rangle, i = 1,2 \), such that \[ {n}_{1} \in {q}_{1}^{1} \land {n}_{2} \in {q}_{1}^{2} \] \( \exists r \in G, r \leq {q}^{1} \land r \leq {q}^{2}, r = \left\langle {{r}_{1},{r}_{2}}\right\rangle \) . Then \( {n}_{1},{n}_{2} \in {r}_{1} \) . Thus, by induction \( \exists q = \left\langle {{q}_{1},{q}_{2}}\right\rangle \in G,{p}_{1} \subseteq {q}_{1} \), i.e., \( q \leq \left\langle {{p}_{1},0}\right\rangle \) . Let \( {p}_{2} = \left\{ {{m}_{1},\ldots ,{m}_{l}}\right\} \) and let \( S = \mathop{\bigcup }\limits_{{i = 1}}^{l}\left\lbrack \left\langle {\left\{ {m}_{i}\right\} ,0}\right\rangle \right\rbrack \cup \left\lbrack \left\langle {0,{p}_{2}}\right\rangle \right\rbrack \) . Then \( S \) is dense and hence \( S \cap G \neq 0 \) . Let \( {q}^{\prime } \) be in \( S \cap G \) . 
Since \( {p}_{2} \subset \omega - \widetilde{a}\left( G\right) ,{q}_{1}^{\prime } \cap {p}_{2} = 0 \) where \( {q}^{\prime } = \left\langle {{q}_{1}^{\prime },{q}_{2}^{\prime }}\right\rangle \) . Therefore \( {q}^{\prime } \leq \left\langle {0,{p}_{2}}\right\rangle \) . Since \( q,{q}^{\prime } \in G,\exists r \in G, r \leq q \land r \leq {q}^{\prime } \) . So \( r \leq \left\langle {{p}_{1},{p}_{2}}\
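The finite conditions of Definition 11.1 and the maps \( \widetilde{a} \) and \( \widetilde{G} \) can be exercised concretely. The sketch below truncates \( \omega \) to \( \{ 0,\ldots ,7\} \) purely to keep every set finite (an assumption made for illustration only); it checks that \( a \) is recovered via \( \widetilde{a}\left( {\widetilde{G}\left( a\right) }\right) = a \), in the spirit of Lemmas 1 and 2:

```python
from itertools import combinations

N = 8                                  # omega truncated to {0,...,7}

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def leq(p, q):                         # p extends q: coordinatewise superset
    return q[0] <= p[0] and q[1] <= p[1]

def a_tilde(G):                        # collect the left coordinates of G
    return set().union(*(p1 for p1, p2 in G))

def G_tilde(a):                        # conditions consistent with the set a
    return {(p1, p2) for p1 in subsets(a)
            for p2 in subsets(set(range(N)) - a)}

a = {1, 3, 4}
G = G_tilde(a)
print(a_tilde(G) == a)                 # True: a is recovered from G~(a)
```

Any two \( p, q \in \widetilde{G}\left( a\right) \) have the common extension \( \left\langle {{p}_{1} \cup {q}_{1},{p}_{2} \cup {q}_{2}}\right\rangle \), mirroring the compatibility step in the proof of Lemma 2.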
1347_[陈亚浙&吴兰成] Second Order Elliptic Equations and Elliptic Systems
Definition 6.1
Definition 6.1. Let \( E \subset {\mathbb{R}}^{n},0 \leq k < + \infty ,0 < \delta \leq + \infty \), and let \( \left\{ {F}_{j}\right\} \) be a family of open sets in \( {\mathbb{R}}^{n} \) . Set (6.1) \[ {\mathcal{H}}_{k}^{\delta }\left( E\right) = {\omega }_{k}{2}^{-k}\inf \left\{ {\mathop{\sum }\limits_{j}{\left( \operatorname{diam}{F}_{j}\right) }^{k} \mid \mathop{\bigcup }\limits_{j}{F}_{j} \supset E,\operatorname{diam}{F}_{j} < \delta }\right\} , \] and (6.2) \[ {\mathcal{H}}_{k}\left( E\right) = \mathop{\lim }\limits_{{\delta \rightarrow 0}}{\mathcal{H}}_{k}^{\delta }\left( E\right) = \mathop{\sup }\limits_{{\delta > 0}}{\mathcal{H}}_{k}^{\delta }\left( E\right) \] where \( {\omega }_{k} = {\Gamma }^{k}\left( {1/2}\right) /\Gamma \left( {1 + k/2}\right) \) . \( {\mathcal{H}}_{k}\left( E\right) \) is said to be the \( k \) -dimensional Hausdorff measure of \( E \) . Notice that if \( 0 < {\delta }_{1} < {\delta }_{2} \), then \( {\mathcal{H}}_{k}^{{\delta }_{1}}\left( E\right) \geq {\mathcal{H}}_{k}^{{\delta }_{2}}\left( E\right) \) ; therefore the limit in (6.2) exists (it may be equal to \( + \infty \) ). Example 6.1. Let \( E \) be a Lebesgue measurable set in \( {\mathbb{R}}^{n} \) . Then \( {\mathcal{H}}_{n}\left( E\right) = {\mathcal{L}}_{n}\left( E\right) \), where \( {\mathcal{L}}_{n}\left( E\right) \) denotes the Lebesgue measure of \( E \) . Example 6.2. Let \( E \) be a set consisting of finitely many points \( {P}_{i}\left( {i = 1,\cdots, m}\right) \) . Then \( {\mathcal{H}}_{0}\left( E\right) = m \), while \( {\mathcal{H}}_{1}\left( E\right) = 0 \) . Example 6.3. Let \( E \) be a rectifiable curve \( l \) in \( {\mathbb{R}}^{n}\left( {n \geq 2}\right) \) . Then \( {\mathcal{H}}_{1}\left( E\right) = \) the length of \( l \), while \( {\mathcal{H}}_{2}\left( E\right) = 0 \) . Example 6.4. Let \( E \) be a rectifiable surface \( S \) in \( {\mathbb{R}}^{n}\left( {n \geq 3}\right) \) . 
Then \( {\mathcal{H}}_{2}\left( E\right) = \) the area of \( S \), while \( {\mathcal{H}}_{3}\left( E\right) = 0 \) . From Definition 6.1, we easily deduce Theorem 6.1. If \( {\mathcal{H}}_{k}\left( E\right) < + \infty \), then for any \( \varepsilon > 0 \) , \[ {\mathcal{H}}_{k + \varepsilon }\left( E\right) = 0 \] We now introduce the following definition: Definition 6.2. The real number \[ \inf \left\{ {k \in {\mathbb{R}}^{ + } \mid {\mathcal{H}}_{k}\left( E\right) = 0}\right\} \] is said to be the Hausdorff dimension of the set \( E \) and is denoted by \( {\dim }_{\mathcal{H}}E \) . In the above examples, we have \( {\dim }_{\mathcal{H}}\left( {\mathop{\bigcup }\limits_{{i = 1}}^{m}{P}_{i}}\right) = 0,{\dim }_{\mathcal{H}}\left( l\right) = 1 \) , \( {\dim }_{\mathcal{H}}S = 2 \) . ## 6.2. Estimate for the Hausdorff dimension of the singular set. Now we estimate the Hausdorff dimension of the singular set of a weak solution of the elliptic system (3.1) or (5.1). We have shown in Theorem 3.1 (or Theorem 5.1, respectively) that a weak solution of the elliptic system (3.1) (or (5.1)) is locally Hölder continuous in an open subset \( {\Omega }_{0} \) of \( \Omega \) such that (6.3) \[ {\Omega }_{0} = \left\{ {x \in \Omega \left| {\;\mathop{\liminf }\limits_{{\rho \rightarrow 0}}{\rho }^{2 - n}{\int }_{{B}_{\rho }\left( x\right) }{\left| Du\right| }^{2}{dz} = 0}\right. }\right\} , \] i.e., \( \Omega \smallsetminus {\Omega }_{0} \subset \sum \), where (6.4) \[ \sum = \left\{ {x \in \Omega \left| {\;\mathop{\liminf }\limits_{{\rho \rightarrow 0}}{\rho }^{2 - n}{\int }_{{B}_{\rho }\left( x\right) }{\left| Du\right| }^{2}{dz} > 0}\right. }\right\} . \] Theorem \( {4.3}^{\prime } \) implies that \( \left| {Du}\right| \in {L}_{loc}^{p}\left( \Omega \right) \) for some \( p > 2 \) . 
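Before continuing the estimate, the scaling mechanism behind Definitions 6.1 and 6.2 and Theorem 6.1 is visible numerically on the middle-thirds Cantor set: generation \( m \) covers it by \( {2}^{m} \) intervals of diameter \( {3}^{-m} \), so (ignoring the normalizing factor \( {\omega }_{k}{2}^{-k} \)) the sums in (6.1) equal \( {2}^{m}{3}^{-{km}} \). These blow up for \( k < \log 2/\log 3 \), vanish for \( k > \log 2/\log 3 \), and the critical exponent is the Hausdorff dimension. (Such covers only bound \( {\mathcal{H}}_{k} \) from above; the matching lower bound needs a separate mass-distribution argument.) A sketch:

```python
from math import log

def cover_sum(k, m):
    """Sum of (diam F_j)^k over the 2^m generation-m Cantor intervals."""
    return (2 ** m) * (3.0 ** -m) ** k

k_crit = log(2) / log(3)          # the Hausdorff dimension of the Cantor set
for k in (k_crit - 0.1, k_crit, k_crit + 0.1):
    print(k, cover_sum(k, 30))
# below k_crit the sums grow without bound, above it they tend to 0,
# and at k_crit they equal 1 for every generation m
```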
By Hölder’s inequality, \[ {\left\{ {\rho }^{2 - n}{\int }_{{B}_{\rho }\left( x\right) }{\left| Du\left( z\right) \right| }^{2}dz\right\} }^{1/2} \leq {\left\{ {\rho }^{p - n}{\int }_{{B}_{\rho }\left( x\right) }{\left| Du\left( z\right) \right| }^{p}dz\right\} }^{1/p}. \] Therefore \( \sum \subset {E}_{n - p} \), where (6.5) \[ {E}_{n - p} = \left\{ {x \in \Omega \left| {\;\mathop{\limsup }\limits_{{\rho \rightarrow 0}}{\rho }^{p - n}{\int }_{{B}_{\rho }\left( x\right) }{\left| Du\left( z\right) \right| }^{p}{dz} > 0}\right. }\right\} . \] In order to estimate the Hausdorff dimension of \( \Omega \smallsetminus {\Omega }_{0} \), it suffices to estimate the Hausdorff dimension of \( {E}_{n - p} \) . We first prove the following lemma: Lemma 6.2 (Covering lemma). Let \( G \) be a bounded set in \( {\mathbb{R}}^{n} \) . If \( r : x \mapsto r\left( x\right) \) is a function defined on \( G \) such that \( 0 < r\left( x\right) < 1 \), then there exists a sequence of points \( \left\{ {x}_{i}\right\} ,{x}_{i} \in G\left( {i = 1,2,\cdots }\right) \), such that (6.6) \[ B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \cap B\left( {{x}_{j}, r\left( {x}_{j}\right) }\right) = \varnothing ,\;i \neq j, \] (6.7) \[ \mathop{\bigcup }\limits_{i}B\left( {{x}_{i},{3r}\left( {x}_{i}\right) }\right) \supset G \] Proof. Consider the following family of balls: \[ {B}_{{2}^{-k},{2}^{-k - 1}} = \left\{ {B\left( {x, r\left( x\right) }\right) \mid x \in G,{2}^{-k - 1} \leq r\left( x\right) < {2}^{-k}}\right\} ,\;k = 0,1,2,\cdots .
\] Since \( G \) is bounded, we can choose finitely many (say \( {n}_{1} \) ) balls which comprise a subfamily of \( {B}_{1,1/2} \), given by \[ {\widetilde{B}}_{1,1/2} = \left\{ {B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \mid {x}_{i} \in G,\frac{1}{2} \leq r\left( {x}_{i}\right) < 1, i = 1,\cdots ,{n}_{1}}\right\} , \] such that (1) \( B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \cap B\left( {{x}_{j}, r\left( {x}_{j}\right) }\right) = \varnothing, i \neq j, i, j = 1,\cdots ,{n}_{1} \) ; (2) every ball in \( {B}_{1,1/2} \) must intersect with at least one ball in \( {\widetilde{B}}_{1,1/2} \) . Next, we choose a subfamily of \( {B}_{1/2,1/4} \), given by \[ {\widetilde{B}}_{1/2,1/4} = \left\{ {B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \mid {x}_{i} \in G,\frac{1}{4} \leq r\left( {x}_{i}\right) < \frac{1}{2}, i = {n}_{1} + 1,\cdots ,{n}_{2}}\right\} ,\;{n}_{2} \geq {n}_{1}, \] such that (1) \( B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \cap B\left( {{x}_{j}, r\left( {x}_{j}\right) }\right) = \varnothing, i \neq j, i, j = 1,\cdots ,{n}_{2} \) ; (2) every ball in \( {B}_{1/2,1/4} \) must intersect with at least one ball in \[ {\widetilde{B}}_{1,1/2} \cup {\widetilde{B}}_{1/2,1/4} = \left\{ {B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \mid {x}_{i} \in G,\;i = 1,\cdots ,{n}_{2}}\right\} . \] (It could happen that \( {n}_{2} = {n}_{1} \) ; in this case \( {\widetilde{B}}_{1/2,1/4} = \varnothing \) .) Continuing this process, if \( {x}_{1},\cdots ,{x}_{{n}_{k}} \) are chosen, then we choose a subfamily of \( {B}_{{2}^{-k},{2}^{-k - 1}} \), given by \[ {\widetilde{B}}_{{2}^{-k},{2}^{-k - 1}} = \left\{ {B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \mid {x}_{i} \in G,{2}^{-k - 1} \leq r\left( {x}_{i}\right) < {2}^{-k},}\right. \] \[ \left. 
{i = {n}_{k} + 1,\cdots ,{n}_{k + 1}}\right\} ,\;{n}_{k + 1} \geq {n}_{k}, \] with the following properties: (1) \( B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \cap B\left( {{x}_{j}, r\left( {x}_{j}\right) }\right) = \varnothing, i \neq j, i, j = 1,\cdots ,{n}_{k + 1} \) ; (2) every ball in \( {B}_{{2}^{-k},{2}^{-k - 1}} \) must intersect with at least one ball in \[ \mathop{\bigcup }\limits_{{j = 0}}^{k}{\widetilde{B}}_{{2}^{-j},{2}^{-j - 1}} = \left\{ {B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \mid {x}_{i} \in G,\;i = 1,\cdots ,{n}_{k + 1}}\right\} . \] (It could happen that \( {n}_{k + 1} = {n}_{k} \) ; in this case \( {\widetilde{B}}_{{2}^{-k},{2}^{-k - 1}} = \varnothing \) .) By our selection, it is obvious that (6.6) is satisfied. We next prove (6.7). In fact, for \( x \in G \), we can find some \( {x}_{i} \) such that \[ B\left( {x, r\left( x\right) }\right) \cap B\left( {{x}_{i}, r\left( {x}_{i}\right) }\right) \neq \varnothing , \] and \( {2r}\left( {x}_{i}\right) \geq r\left( x\right) \) . Thus \[ \left| {x - {x}_{i}}\right| \leq r\left( x\right) + r\left( {x}_{i}\right) \leq {3r}\left( {x}_{i}\right) \] i.e., \( x \in B\left( {{x}_{i},{3r}\left( {x}_{i}\right) }\right) \) . The proof is complete. Lemma 6.3. Let \( \Omega \) be an open set in \( {\mathbb{R}}^{n}, v \in {L}_{loc}^{1}\left( \Omega \right) \), and \( 0 \leq \alpha < n \) . For the set \[ {E}_{\alpha } = \left\{ {x \in \Omega \left| {\;\mathop{\limsup }\limits_{{\rho \rightarrow 0}}{\rho }^{-\alpha }{\int }_{{B}_{\rho }\left( x\right) }\left| {v\left( z\right) }\right| {dz} > 0}\right. }\right\} , \] we have \( {\mathcal{H}}_{\alpha }\left( {E}_{\alpha }\right) = 0 \) . Proof. 
It suffices to prove that for any compact \( K \subset \Omega \) , \[ {\mathcal{H}}_{\alpha }\left( {{E}_{\alpha } \cap K}\right) = 0 \] Setting \( F = {E}_{\alpha } \cap K \) and \[ {F}^{\left( s\right) } = \left\{ {x \in F\left| {\;\mathop{\limsup }\limits_{{\rho \rightarrow 0}}{\rho }^{-\alpha }{\int }_{{B}_{\rho }\left( x\right) }\left| {v\left( z\right) }\right| {dz} > \frac{1}{s}}\right. }\right\} \;\left( {s = 1,2,\cdots }\right) ; \] we then have \( {F}^{\left( 1\right) } \subset {F}^{\left( 2\right) } \subset \cdots \) and \[ F = \mathop{\bigcup }\limits_{{s = 1}}^{\infty }{F}^{\left( s\right) } \] Therefore it suffices to show that \( {\mathcal{H}}_{\alpha }\left( {F}^{\left( s\right) }\right) = 0 \) for each positive integer \( s \) . Let \( Q \) be a bounded open set satisfying \( K \subset Q \subset \bar{Q} \subset \Omega \) . For any \( x \in {F}^{\left( s\right) } \) and any \( \delta \) , \( 0 < \delta < \operatorname{dist}\left( {K,\partial Q}\right) \land 1 \), we deduce from the definition of \( {F}^{\left( s\right) } \) that there exists \( r\left( x\right) ,0 < r\left( x\right) < \delta \), such that (6.8) \[ {r}^{-\alpha }\left
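For finitely many balls, the dyadic bookkeeping in the proof of Lemma 6.2 collapses to a greedy selection: sort by decreasing radius and keep each ball that is disjoint from all balls kept so far. Every discarded ball then meets a kept ball of radius at least its own, so tripling the kept radii recovers (6.7). A sketch (the random input is illustrative only):

```python
import random

def greedy_disjoint(balls):
    """Select pairwise disjoint balls, largest radius first; each input
    ball meets a kept ball of radius at least its own, so the kept balls,
    tripled, cover all the input centers (the finite analogue of (6.7))."""
    kept = []
    for (x, y), r in sorted(balls, key=lambda b: -b[1]):
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (r + cr) ** 2
               for (cx, cy), cr in kept):
            kept.append(((x, y), r))
    return kept

random.seed(0)
balls = [((random.random(), random.random()),
          0.05 + 0.10 * random.random()) for _ in range(200)]
kept = greedy_disjoint(balls)
covered = all(any((x - cx) ** 2 + (y - cy) ** 2 <= (3 * cr) ** 2
                  for (cx, cy), cr in kept) for (x, y), r in balls)
print(len(kept), covered)          # covered is always True
```

The dyadic classes in the lemma are needed only because an infinite family of radii may have no largest element; sorting plays their role in the finite case.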
1167_(GTM73)Algebra
Definition 1.1
Definition 1.1. Let \( \mathrm{R} \) be a ring. A (left) \( \mathbf{R} \) -module is an additive abelian group \( \mathrm{A} \) together with a function \( \mathrm{R} \times \mathrm{A} \rightarrow \mathrm{A} \) (the image of \( \left( {\mathrm{r},\mathrm{a}}\right) \) being denoted by \( \mathrm{{ra}} \) ) such that for all \( \mathrm{r},\mathrm{s} \in \mathrm{R} \) and \( \mathrm{a},\mathrm{b} \in \mathrm{A} \) : (i) \( r\left( {a + b}\right) = {ra} + {rb} \) . (ii) \( \left( {r + s}\right) a = {ra} + {sa} \) . (iii) \( r\left( {sa}\right) = \left( {rs}\right) a \) . If \( \mathrm{R} \) has an identity element \( {1}_{\mathrm{R}} \) and (iv) \( {1}_{\mathrm{R}}\mathrm{a} = \mathrm{a} \) for all \( \mathrm{a}\varepsilon \mathrm{A} \) , then \( \mathrm{A} \) is said to be a unitary \( \mathrm{R} \) -module. If \( \mathrm{R} \) is a division ring, then a unitary \( \mathrm{R} \) -module is called a (left) vector space. A (unitary) right \( R \) -module is defined similarly via a function \( A \times R \rightarrow A \) denoted \( \left( {a, r}\right) \mapsto {ar} \) and satisfying the obvious analogues of (i)-(iv). From now on, unless specified otherwise, " \( R \) -module" means "left \( R \) -module" and it is understood that all theorems about left \( R \) -modules also hold, mutatis mutandis, for right \( R \) -modules. A given group \( A \) may have many different \( R \) -module structures (both left and right). If \( R \) is commutative, it is easy to verify that every left \( R \) -module \( A \) can be given the structure of a right \( R \) -module by defining \( {ar} = {ra} \) for \( r \in R, a \in A \) (commutativity is needed for (iii); for a generalization of this idea to arbitrary rings, see Exercise 16). Unless specified otherwise, every module \( A \) over a commutative ring \( R \) is assumed to be both a left and a right module with \( {ar} = {ra} \) for all \( r \in R, a \in A \) . 
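As a finite instance of Definition 1.1, the sketch below takes the illustrative choice \( R = A = {\mathbf{Z}}_{6} \) with the action \( r \cdot a = {ra}\bmod 6 \) and verifies axioms (i)-(iv) exhaustively:

```python
# R = A = Z_6 (an illustrative choice), with ra = (r * a) mod 6
M = 6
R = A = range(M)
act = lambda r, a: (r * a) % M         # the module action R x A -> A
add = lambda a, b: (a + b) % M         # addition in the abelian group A

assert all(act(r, add(a, b)) == add(act(r, a), act(r, b))      # (i)
           for r in R for a in A for b in A)
assert all(act((r + s) % M, a) == add(act(r, a), act(s, a))    # (ii)
           for r in R for s in R for a in A)
assert all(act(r, act(s, a)) == act((r * s) % M, a)            # (iii)
           for r in R for s in R for a in A)
assert all(act(1, a) == a for a in A)                          # (iv)
```

Since \( {\mathbf{Z}}_{6} \) is commutative, the same table also gives the right-module structure \( {ar} = {ra} \) described above.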
If \( A \) is a module with additive identity element \( {0}_{A} \) over a ring \( R \) with additive identity \( {0}_{R} \), then it is easy to show that for all \( r \in R, a \in A \) : \[ r{0}_{A} = {0}_{A}\text{ and }{0}_{R}a = {0}_{A}. \] In the sequel \( {0}_{A},{0}_{R},0 \in \mathbf{Z} \) and the trivial module \( \{ 0\} \) will all be denoted 0 . It also is easy to verify that for all \( r \in R, n \in \mathbf{Z} \) and \( a \in A \) : \[ \left( {-r}\right) a = - \left( {ra}\right) = r\left( {-a}\right) \text{ and }n\left( {ra}\right) = r\left( {na}\right) , \] where \( {na} \) has its usual meaning for groups (Definition I.1.8, additive notation). EXAMPLE. Every additive abelian group \( G \) is a unitary \( \mathbf{Z} \) -module, with \( {na}\left( {n \in \mathbf{Z}, a \in G}\right) \) given by Definition I.1.8. EXAMPLE. If \( S \) is a ring and \( R \) is a subring, then \( S \) is an \( R \) -module (but not vice versa!) with \( {ra}\left( {r \in R, a \in S}\right) \) being multiplication in \( S \) . In particular, the rings \( R\left\lbrack {{x}_{1},\ldots ,{x}_{m}}\right\rbrack \) and \( R\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) are \( R \) -modules. EXAMPLES. If \( I \) is a left ideal of a ring \( R \), then \( I \) is a left \( R \) -module with \( {ra}\left( {r \in R, a \in I}\right) \) being the ordinary product in \( R \) . In particular,0 and \( R \) are \( R \) -modules. Furthermore, since \( I \) is an additive subgroup of \( R, R/I \) is an (abelian) group. \( R/I \) is an \( R \) -module with \( r\left( {{r}_{1} + I}\right) = r{r}_{1} + I.R/I \) need not be a ring, however, unless \( I \) is a two-sided ideal. EXAMPLE. Let \( R \) and \( S \) be rings and \( \varphi : R \rightarrow S \) a ring homomorphism. Then every \( S \) -module \( A \) can be made into an \( R \) -module by defining \( {rx}\left( {x \in A}\right) \) to be \( \varphi \left( r\right) x \) . 
One says that the \( R \) -module structure of \( A \) is given by pullback along \( \varphi \) . EXAMPLE. Let \( A \) be an abelian group and End \( A \) its endomorphism ring (see p. 116). Then \( A \) is a unitary (End \( A \) )-module, with \( {fa} \) defined to be \( f\left( a\right) \) (for \( a \in A \) , \( {f\varepsilon } \) End \( A \) ). EXAMPLE. If \( R \) is a ring, every abelian group can be made into an \( R \) -module with trivial module structure by defining \( {ra} = 0 \) for all \( {r\varepsilon R} \) and \( {a\varepsilon A} \) . Definition 1.2. Let \( \mathrm{A} \) and \( \mathrm{B} \) be modules over a ring \( \mathrm{R} \) . A function \( \mathrm{f} : \mathrm{A} \rightarrow \mathrm{B} \) is an R-module homomorphism provided that for all a, c ε A and r ε R : \[ f\left( {a + c}\right) = f\left( a\right) + f\left( c\right) \text{ and }f\left( {ra}\right) = {rf}\left( a\right) . \] If \( \mathrm{R} \) is a division ring, then an \( \mathrm{R} \) -module homomorphism is called a linear transformation. When the context is clear \( R \) -module homomorphisms are called simply homomorphisms. Observe that an \( R \) -module homomorphism \( f : A \rightarrow B \) is necessarily a homomorphism of additive abelian groups. Consequently the same terminology is used: \( f \) is an R-module monomorphism [resp. epimorphism, isomorphism] if it is injective [resp. surjective, bijective] as a map of sets. The kernel of \( f \) is its kernel as a homomorphism of abelian groups, namely \( \operatorname{Ker}f = \{ a \in A \mid f\left( a\right) = 0\} \) . Similarly the image of \( f \) is the set \( \operatorname{Im}f = \{ b \in B \mid b = f\left( a\right) \) for some \( a \in A\} \) . 
Finally, Theorem I.2.3 implies: (i) \( f \) is an \( R \) -module monomorphism if and only if \( \operatorname{Ker}f = 0 \) ; (ii) \( f : A \rightarrow B \) is an \( R \) -module isomorphism if and only if there is an \( R \) -module homomorphism \( g : B \rightarrow A \) such that \( {gf} = {1}_{A} \) and \( {fg} = {1}_{B} \) . EXAMPLES. For any modules the zero map \( 0 : A \rightarrow B \) given by \( a \mapsto 0\left( {a\varepsilon A}\right) \) is a module homomorphism. Every homomorphism of abelian groups is a \( \mathbf{Z} \) -module homomorphism. If \( R \) is a ring, the map \( R\left\lbrack x\right\rbrack \rightarrow R\left\lbrack x\right\rbrack \) given by \( f \mapsto {xf} \) (for example, \( \left. {\left( {{x}^{2} + 1}\right) \mapsto x\left( {{x}^{2} + 1}\right) }\right) \) is an \( R \) -module homomorphism, but not a ring homomorphism. REMARK. For a given ring \( R \) the class of all \( R \) -modules [resp. unitary \( R \) -modules] and \( R \) -module homomorphisms clearly forms a (concrete) category. In fact, one can define epimorphisms and monomorphisms strictly in categorical terms (objects and morphisms only - no elements); see Exercise 2. Definition 1.3. Let \( \mathrm{R} \) be a ring, \( \mathrm{A} \) an \( \mathrm{R} \) -module and \( \mathrm{B} \) a nonempty subset of \( \mathrm{A} \) . \( \mathrm{B} \) is a submodule of A provided that \( \mathrm{B} \) is an additive subgroup of \( \mathrm{A} \) and \( \mathrm{{rb}}\varepsilon \mathrm{B} \) for all \( \mathrm{r}\varepsilon \mathrm{R} \) , \( \mathrm{b}\varepsilon \) B. A submodule of a vector space over a division ring is called a subspace. Note that a submodule is itself a module. Also a submodule of a unitary module over a ring with identity is necessarily unitary. EXAMPLES. If \( R \) is a ring and \( f : A \rightarrow B \) is an \( R \) -module homomorphism, then Ker \( f \) is a submodule of \( A \) and Im \( f \) is a submodule of \( B \) . 
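The example above of \( f \mapsto {xf} \) on \( R\left\lbrack x\right\rbrack \) can be checked concretely. In the sketch below, polynomials over \( R = \mathbf{Z} \) are represented as coefficient lists with constant term first (a representation chosen for illustration); the shift respects addition and the \( R \) -action but not multiplication, so it is an \( R \) -module homomorphism without being a ring homomorphism:

```python
def xshift(f):                         # multiplication by x: shift coefficients
    return [0] + f

def padd(f, g):                        # polynomial addition
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def pmul(f, g):                        # polynomial multiplication
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

f, g, r = [1, 0, 1], [2, 3], 5         # f = 1 + x^2, g = 2 + 3x
assert xshift(padd(f, g)) == padd(xshift(f), xshift(g))          # additive
assert xshift([r * c for c in f]) == [r * c for c in xshift(f)]  # R-linear
assert xshift(pmul(f, g)) != pmul(xshift(f), xshift(g))          # not a ring map
```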
If \( C \) is any submodule of \( B \) , then \( {f}^{-1}\left( C\right) = \{ a \in A \mid f\left( a\right) \in C\} \) is a submodule of \( A \) . EXAMPLE. Let \( I \) be a left ideal of the ring \( R, A \) an \( R \) -module and \( S \) a nonempty subset of \( A \) . Then \( {IS} = \left\{ {\mathop{\sum }\limits_{{i = 1}}^{n}{r}_{i}{a}_{i} \mid {r}_{i} \in I;{a}_{i} \in S;n \in {\mathbf{N}}^{ * }}\right\} \) is a submodule of \( A \) (Exercise 3). Similarly if \( a \in A \), then \( {Ia} = \{ {ra} \mid r \in I\} \) is a submodule of \( A \) . EXAMPLE. If \( \left\{ {{B}_{i} \mid i \in I}\right\} \) is a family of submodules of a module \( A \), then \( \mathop{\bigcap }\limits_{{i \in I}}{B}_{i} \) is easily seen to be a submodule of \( A \) . Definition 1.4. If \( \mathrm{X} \) is a subset of a module \( \mathrm{A} \) over a ring \( \mathrm{R} \), then the intersection of all submodules of A containing \( \mathrm{X} \) is called the submodule generated by \( \mathbf{X} \) (or spanned by \( \mathrm{X} \) ). If \( X \) is finite, and \( X \) generates the module \( B, B \) is said to be finitely generated. If \( X = \varnothing \), then \( X \) clearly generates the zero module. If \( X \) consists of a single element, \( X = \{ a\} \), then the submodule generated by \( X \) is called the cyclic (sub)module generated by \( a \) . Finally, if \( \left\{ {{B}_{i} \mid i \in I}\right\} \) is a family of submodules of \( A \), then the submodule generated by \( X = \mathop{\bigcup }\limits_{{i \in I}}{B}_{i} \) is called the sum of the modules \( {B}_{i} \) . If the index set \( I \) is finite, the sum of \( {B}_{1},\ldots ,{B}_{n} \) is denoted \( {B}_{1} + {B}_{2} + \cdots + {B}_{n} \) . Theorem 1.5. 
Let \( \mathrm{R} \) be a ring, \( \mathrm{A} \) an \( \mathrm{R} \) -module, \( \mathrm{X} \) a subset of \( \mathrm{A},\left\{ {{\mathrm{B}}_{\mathrm{i}} \mid \mathrm{i} \in \mathrm{I}}\right\} \) a family of submodules of \( \mathrm{A} \) and \( \mathrm{a}\varepsilon \mathrm{A} \) . Let \( \mathrm{{Ra}} = \{ \mathrm{{ra}} \mid \mathrm{r}\varepsilon \mathrm{R}\} \) . (i) \( \mathrm{{Ra}} \) is a submodule of \( \mathrm{A} \) and the map \( \mathrm{R} \rightarrow \mathrm{{Ra}} \) given by \( \mathrm{r} \mapsto \mathrm{{ra}} \) is an \( \mathrm{R} \) -module epimorphism. (ii) The cyclic submodule \( \mathrm{C} \) generated by \( \mathrm{a} \) is \( \{ \mathrm{{ra}} + \mathrm{{na}} \mid
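Part (i) of Theorem 1.5 is easy to instantiate. The sketch below takes the illustrative case \( R = A = {\mathbf{Z}}_{12} \) and \( a = 8 \), where \( \mathrm{{Ra}} \) is the cyclic subgroup generated by \( \gcd \left( {8,12}\right) = 4 \):

```python
M, a = 12, 8                           # R = A = Z_12, a = 8 (illustrative)
Ra = sorted({(r * a) % M for r in range(M)})
print(Ra)                              # [0, 4, 8]

# Ra is a submodule: an additive subgroup closed under the R-action
S = set(Ra)
assert all((x + y) % M in S for x in S for y in S)
assert all((-x) % M in S for x in S)
assert all((r * x) % M in S for r in range(M) for x in S)
```

The map \( r \mapsto {ra} \) of part (i) is onto \( \mathrm{{Ra}} \) by construction; here its kernel is \( \{ 0,3,6,9\} \) .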
1347_[陈亚浙&吴兰成] Second Order Elliptic Equations and Elliptic Systems
Definition 1.4
Definition 1.4. Let \( \Omega \subset {\mathbb{R}}^{n},{x}_{0} \in \Omega \) . Let \( w \) be the function such that its graph is a cone surface with vertex at \( \left( {{x}_{0},\lambda }\right) \) and base \( \Omega \) (see Fig. 1). We denote the image of \( \Omega \) under the normal mapping of \( w \) by (1.9) \[ \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack = {\chi }_{w}\left( \Omega \right) \] ![ea7682a1-65ee-483d-b2fe-4cbf0ebf6e25_96_0.jpg](images/ea7682a1-65ee-483d-b2fe-4cbf0ebf6e25_96_0.jpg) Fig. 1 Lemma 1.2. Let \( u \in C\left( \Omega \right) \) . Then (1) for any \( y \in {\Gamma }_{u} \) , (1.10) \[ \left| p\right| \leq \frac{2\sup \left| u\right| }{\operatorname{dist}\{ y,\partial \Omega \} },\;\forall p \in \chi \left( y\right) \] (2) the normal mapping maps any compact subset of \( \Omega \) to a closed set in \( {\mathbb{R}}^{n} \) . Proof. For \( y \in {\Gamma }_{u} \) , (1.11) \[ u\left( y\right) + p \cdot \left( {x - y}\right) \geq u\left( x\right) ,\;\forall x \in \Omega . \] The ray starting at \( y \) with direction \( - p \) intersects \( \partial \Omega \) at \( {x}_{0} \), i.e., (1.12) \[ {x}_{0} = y - \frac{1}{\left| p\right| }\left| {{x}_{0} - y}\right| p \] Using compact subsets of \( \Omega \) to approximate \( \Omega \) if necessary, we may assume without loss of generality that \( u \) is continuous on \( \bar{\Omega } \) . Choosing \( x \) to be \( {x}_{0} \) in (1.11), we obtain \[ u\left( y\right) - \left| {{x}_{0} - y}\right| \left| p\right| \geq u\left( {x}_{0}\right) \] and therefore \[ \left| p\right| \leq \frac{2\sup \left| u\right| }{\left| {x}_{0} - y\right| } \leq \frac{2\sup \left| u\right| }{\operatorname{dist}\{ y,\partial \Omega \} }. \] Now we prove (2). Let \( F \) be a compact subset of \( \Omega \) . Suppose that \( \left\{ {p}_{n}\right\} \subset \chi \left( F\right) \) and \( {p}_{n} \rightarrow {p}_{0}\left( {n \rightarrow \infty }\right) \) . 
We want to show that \( {p}_{0} \in \chi \left( F\right) \) . Since \( {p}_{n} \in \chi \left( F\right) \), there exists \( {y}_{n} \in F \) such that \( {p}_{n} \in \chi \left( {y}_{n}\right) \) . From the definition of a normal mapping, \[ u\left( {y}_{n}\right) + {p}_{n} \cdot \left( {x - {y}_{n}}\right) \geq u\left( x\right) ,\;\forall x \in \Omega . \] Since \( F \) is compact, a subsequence \( \left\{ {y}_{{n}_{k}}\right\} \) converges to some \( {y}_{0} \in F \) as \( k \rightarrow \infty \) . Letting \( n = {n}_{k} \rightarrow \infty \) in the above inequality, we easily see that \( {p}_{0} \in \chi \left( {y}_{0}\right) \) . Lemma 1.3. Suppose that \( \Omega, A \) are open domains in \( {\mathbb{R}}^{n} \) . (1) If \( \Omega \subset A \), then for \( {x}_{0} \in \Omega \) , \[ \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack \supset A\left\lbrack {{x}_{0},\lambda }\right\rbrack \] (2) If the diameter of \( \Omega \) is \( d \), then (1.13) \[ \left| {\Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| \geq {\left( \frac{\lambda }{d}\right) }^{n}{\omega }_{n} \] where \( \left| \cdot \right| \) denotes the measure of the set and \( {\omega }_{n} \) is the volume of an \( n \) -dimensional unit ball. Proof. (1) is obvious. We prove (2). Clearly, \( {B}_{d}\left( {x}_{0}\right) \supset \Omega \) . Let \( A = {B}_{d}\left( {x}_{0}\right) \) . By (1) and (1.8), \[ \left| {\Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| \geq \left| {A\left\lbrack {{x}_{0},\lambda }\right\rbrack }\right| = \left| {{B}_{\lambda /d}\left( 0\right) }\right| = {\omega }_{n}{\left( \frac{\lambda }{d}\right) }^{n}, \] where we use Lemma 1.2 (2) to deduce that \( \Omega \left\lbrack {{x}_{0},\lambda }\right\rbrack \) and \( A\left\lbrack {{x}_{0},\lambda }\right\rbrack \) are measurable sets. Lemma 1.4. Suppose that \( u \in {C}^{2}\left( \Omega \right), g \in C\left( \bar{\Omega }\right), g \geq 0 \), and \( E \) is a measurable subset of \( {\Gamma }_{u} \) . 
Then (1.14) \[ {\int }_{{Du}\left( E\right) }g\left( {\xi \left( p\right) }\right) {dp} \leq {\int }_{E}g\left( x\right) \det \left( {-{D}^{2}u}\right) {dx} \] where \( \xi \left( p\right) = {\left( Du\right) }^{-1}\left( p\right) \) is well defined and continuous on \( {Du}\left( E\right) \) except on a set of measure zero. Proof. Let \( J\left( x\right) = \det \left( {-{D}^{2}u}\right), S = \{ x \in \Omega \mid J\left( x\right) = 0\} \) . By Sard’s theorem (cf. Appendix 2), \( \left| {{Du}\left( S\right) }\right| = 0 \) . First we assume that \( E \) is open. Then \( E \smallsetminus S \) is an open set. Thus there exist cubes \( {\left\{ {C}_{l}\right\} }_{l = 1}^{\infty } \) with disjoint interiors and sides parallel to the coordinate axes such that \( E \smallsetminus S = \mathop{\bigcup }\limits_{{l = 1}}^{\infty }{C}_{l} \) . We assume without loss of generality that the \( {C}_{l} \) are small enough that \( {Du} : {C}_{l} \rightarrow {Du}\left( {C}_{l}\right) \) is a diffeomorphism. Then \[ {\int }_{{Du}\left( {C}_{l}\right) }g\left( {\xi \left( p\right) }\right) {dp} = {\int }_{{C}_{l}}g\left( x\right) J\left( x\right) {dx}. \] It follows that \[ {\int }_{{Du}\left( {E \smallsetminus S}\right) }g\left( {\xi \left( p\right) }\right) {dp} \leq \mathop{\sum }\limits_{l}{\int }_{{Du}\left( {C}_{l}\right) }g\left( {\xi \left( p\right) }\right) {dp} \] \[ = \mathop{\sum }\limits_{l}{\int }_{{C}_{l}}g\left( x\right) J\left( x\right) {dx} = {\int }_{E \smallsetminus S}g\left( x\right) J\left( x\right) {dx}. \] Using the definition of \( S \) and the fact that \( \left| {{Du}\left( S\right) }\right| = 0 \), we obtain (1.14). Now we assume that \( E \) is a measurable subset of \( {\Gamma }_{u} \) . Then there exists an open set \( G \supset E \smallsetminus S \) such that \( J\left( x\right) > 0 \) on \( G \) . 
Since \( E \smallsetminus S \) is a measurable set, there exist open sets \( {\left\{ {O}_{l}\right\} }_{l = 1}^{\infty } \) such that \( E \smallsetminus S \subset {O}_{l} \) and \( \left| {{O}_{l} \smallsetminus \left( {E \smallsetminus S}\right) }\right| \rightarrow 0\left( {l \rightarrow \infty }\right) \) . Using the above results, we derive \[ {\int }_{{Du}\left( E\right) }g\left( {\xi \left( p\right) }\right) {dp} \leq {\int }_{{Du}\left( {G \cap {O}_{l}}\right) }g\left( {\xi \left( p\right) }\right) {dp} \leq {\int }_{G \cap {O}_{l}}g\left( x\right) J\left( x\right) {dx}. \] Letting \( l \rightarrow \infty \) in the above inequality, we get (1.14). Lemma 1.5. Suppose that \( u \in C\left( \bar{\Omega }\right), u \leq 0 \) on \( \partial \Omega ,{x}_{0} \in \Omega \), and \( u\left( {x}_{0}\right) > 0 \) . Then (1.15) \[ \Omega \left\lbrack {{x}_{0}, u\left( {x}_{0}\right) }\right\rbrack \subset \chi \left( {\Gamma }_{u}^{ + }\right) \] where \( {\Gamma }_{u}^{ + } = {\Gamma }_{u} \cap \{ u \geq 0\} \) . Proof. This lemma is obvious from the geometric picture. However, here we shall give a rigorous analytical proof. Let \( p \in \Omega \left\lbrack {{x}_{0}, u\left( {x}_{0}\right) }\right\rbrack \) . From Definition 1.4, (1.16) \[ u\left( {x}_{0}\right) + p \cdot \left( {x - {x}_{0}}\right) \geq 0,\;\forall x \in \Omega . \] Define \[ {\lambda }_{0} = \inf \left\{ {\lambda \mid \lambda + p \cdot \left( {x - {x}_{0}}\right) \geq u\left( x\right) ,\;\forall x \in \Omega }\right\} . \] Since \( u \) is continuous, (1.17) \[ {\lambda }_{0} + p \cdot \left( {x - {x}_{0}}\right) \geq u\left( x\right) ,\;\forall x \in \Omega \] furthermore, there exists \( \xi \in \bar{\Omega } \) such that (1.18) \[ {\lambda }_{0} + p \cdot \left( {\xi - {x}_{0}}\right) = u\left( \xi \right) \] We consider two cases: Case (i): \( {\lambda }_{0} = u\left( {x}_{0}\right) \) . 
Then (1.17) implies that \( {x}_{0} \in {\Gamma }_{u}^{ + } \) and \( p \in \chi \left( {\Gamma }_{u}^{ + }\right) \) . Case (ii): \( {\lambda }_{0} > u\left( {x}_{0}\right) \) . We claim that \( \xi \notin \partial \Omega \) . In fact, (1.18) and (1.16) imply \[ u\left( \xi \right) > u\left( {x}_{0}\right) + p \cdot \left( {\xi - {x}_{0}}\right) \geq 0. \] Since \( {\left. u\right| }_{\partial \Omega } \leq 0 \), we must have \( \xi \notin \partial \Omega \) . We subtract (1.18) from (1.17) to obtain \[ u\left( \xi \right) + p \cdot \left( {x - \xi }\right) \geq u\left( x\right) ,\;\forall x \in \Omega . \] It follows that \( \xi \in {\Gamma }_{u}^{ + } \) and \( p \in \chi \left( {\Gamma }_{u}^{ + }\right) \) . Now we are ready to prove an Alexandroff-Bakelman-Pucci type of maximum principle. Lemma 1.6. Suppose that \( u \in {C}^{2}\left( \bar{\Omega }\right) \) and \( u \leq 0 \) on \( \partial \Omega \) . Then (1.19) \[ \mathop{\sup }\limits_{\Omega }u \leq \frac{d}{\sqrt[n]{{\omega }_{n}}}{\left\lbrack {\int }_{{\Gamma }_{u}^{ + }}\det \left( -{D}^{2}u\right) dx\right\rbrack }^{1/n} \] where \( d = \operatorname{diam}\Omega \) . Proof. For \( {x}_{0} \in \Omega \) with \( u\left( {x}_{0}\right) > 0 \), we can derive from (1.15) and (1.13) that \[ \left| {\chi \left( {\Gamma }_{u}^{ + }\right) }\right| \geq \left| {\Omega \left\lbrack {{x}_{0}, u\left( {x}_{0}\right) }\right\rbrack }\right| \geq {\omega }_{n}{\left\lbrack \frac{u\left( {x}_{0}\right) }{d}\right\rbrack }^{n} \] where \( \chi \left( {\Gamma }_{u}^{ + }\right) \) is a measurable set, by Lemma 1.2 (2). It follows that \[ u\left( {x}_{0}\right) \leq \frac{d}{\sqrt[n]{{\omega }_{n}}}{\left| \chi \left( {\Gamma }_{u}^{ + }\right) \right| }^{1/n}. \] By Lemma 1.4, \[ \left| {\chi \left( {\Gamma }_{u}^{ + }\right) }\right| = {\int }_{\chi \left( {\Gamma }_{u}^{ + }\right) }{dp} \leq {\int }_{{\Gamma }_{u}^{ + }}\det \left( {-{D}^{2}u}\right) {dx}. \] Combining the above two inequalities, we get (1.19). 
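As a quick sanity check (our own illustration, not from the text), inequality (1.19) can be tested numerically in dimension \( n = 1 \), where \( \det \left( {-{D}^{2}u}\right) = - {u}^{\prime \prime } \) and \( {\omega }_{1} = 2 \). For the concave, positive test function \( u\left( x\right) = 1 - {x}^{2} \) on \( \Omega = \left( {-1,1}\right) \), the contact set \( {\Gamma }_{u}^{ + } \) is all of \( \Omega \):

```python
# Numerical sanity check of the ABP-type bound (1.19) in dimension n = 1
# for the test function u(x) = 1 - x^2 on Omega = (-1, 1).
# u is concave and positive on Omega, so Gamma_u^+ = Omega and
# det(-D^2 u) reduces to -u''.
n = 1
d = 2.0                    # diam(Omega)
omega_n = 2.0              # length of the unit ball [-1, 1] in R^1

u = lambda x: 1.0 - x * x

N = 10_000
h = d / N
xs = [-1.0 + (i + 0.5) * h for i in range(N)]   # midpoint grid on Omega

def neg_second_derivative(x):
    # central finite difference for -u''(x); exact up to rounding for quadratics
    return -(u(x - h) - 2.0 * u(x) + u(x + h)) / (h * h)

integral = sum(neg_second_derivative(x) * h for x in xs)   # integral of -u'' over Gamma_u^+
sup_u = max(u(x) for x in xs)
rhs = (d / omega_n ** (1.0 / n)) * integral ** (1.0 / n)

assert sup_u <= rhs        # the bound (1.19) holds
```

The bound is far from sharp in this example: \( \mathop{\sup }\limits_{\Omega }u = 1 \), while the right-hand side evaluates to about 4.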
We now relax the smoothness assumption on \( u \) in the above lemma. Theorem 1.7. If \( u \in C\left( \bar{\Omega }\right) \cap {W}_{loc}^{2, n}\left( \Omega \right) \), then (1.20) \[ \mathop{\sup }\limits_{\Omega }u \leq \mathop{\sup }\limits
1167_(GTM73)Algebra
Definition 2.1. Let \( \mathrm{G} \) and \( \mathrm{H} \) be semigroups. A function \( \mathrm{f} : \mathrm{G} \rightarrow \mathrm{H} \) is a homomorphism provided \[ \mathrm{f}\left( \mathrm{{ab}}\right) = \mathrm{f}\left( \mathrm{a}\right) \mathrm{f}\left( \mathrm{b}\right) \;\text{ for all }\mathrm{a},\mathrm{b}\varepsilon \mathrm{G}. \] If \( \mathrm{f} \) is injective as a map of sets, \( \mathrm{f} \) is said to be a monomorphism. If \( \mathrm{f} \) is surjective, \( \mathrm{f} \) is called an epimorphism. If \( \mathrm{f} \) is bijective, \( \mathrm{f} \) is called an isomorphism. In this case \( \mathrm{G} \) and \( \mathrm{H} \) are said to be isomorphic (written \( \mathrm{G} \cong \mathrm{H} \) ). A homomorphism \( \mathrm{f} : \mathrm{G} \rightarrow \mathrm{G} \) is called an endomorphism of \( \mathrm{G} \) and an isomorphism \( \mathrm{f} : \mathrm{G} \rightarrow \mathrm{G} \) is called an automorphism of \( \mathrm{G} \) . If \( f : G \rightarrow H \) and \( g : H \rightarrow K \) are homomorphisms of semigroups, it is easy to see that \( {gf} : G \rightarrow K \) is also a homomorphism. Likewise the composition of monomorphisms is a monomorphism; similarly for epimorphisms, isomorphisms and automorphisms. If \( G \) and \( H \) are groups with identities \( {e}_{G} \) and \( {e}_{H} \) respectively and \( f : G \rightarrow H \) is a homomorphism, then \( f\left( {e}_{G}\right) = {e}_{H} \) ; however, this is not true for monoids (Exercise 1). Furthermore \( f\left( {a}^{-1}\right) = f{\left( a\right) }^{-1} \) for all \( a \in G \) (Exercise 1). EXAMPLE. The map \( f : \mathbf{Z} \rightarrow {Z}_{m} \) given by \( x \mapsto \bar{x} \) (that is, each integer is mapped onto its equivalence class in \( {Z}_{m} \) ) is an epimorphism of additive groups. \( f \) is called the canonical epimorphism of \( \mathbf{Z} \) onto \( {Z}_{m} \) . 
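The defining properties of the canonical epimorphism are easy to verify mechanically. The sketch below (our own, with the illustrative choice \( m = {12} \)) checks the homomorphism identity, surjectivity, and the failure of injectivity:

```python
# Sketch: f(x) = x mod m defines an epimorphism of additive groups Z -> Z_m.
# The modulus m = 12 is an illustrative choice.
m = 12
f = lambda x: x % m   # Python's % returns a representative in {0, ..., m-1}

# homomorphism property: f(a + b) = f(a) + f(b) in Z_m
for a in range(-50, 50):
    for b in range(-50, 50):
        assert f(a + b) == (f(a) + f(b)) % m

# surjectivity: every class of Z_m is hit
assert {f(x) for x in range(-50, 50)} == set(range(m))

# f is not a monomorphism: distinct integers can share an image
assert f(3) == f(3 + m)
```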
Similarly, the map \( g : \mathbf{Q} \rightarrow \mathbf{Q}/\mathbf{Z} \) given by \( r \mapsto \bar{r} \) is also an epimorphism of additive groups. EXAMPLE. If \( A \) is an abelian group, then the map given by \( a \mapsto {a}^{-1} \) is an automorphism of \( A \) . The map given by \( a \mapsto {a}^{2} \) is an endomorphism of \( A \) . EXAMPLE. Let \( 1 < m,{k\varepsilon }{\mathbf{N}}^{ * } \) . The map \( g : {Z}_{m} \rightarrow {Z}_{mk} \) given by \( x \mapsto \overline{kx} \) is a monomorphism. EXAMPLE. Given groups \( G \) and \( H \), there are four homomorphisms: \( G\underset{{\pi }_{1}}{\overset{{\iota }_{1}}{ \rightleftarrows }}G \times H\underset{{\pi }_{2}}{\overset{{\iota }_{2}}{ \rightleftarrows }}H \), given by \( {\iota }_{1}\left( g\right) = \left( {g, e}\right) ;{\iota }_{2}\left( h\right) = \left( {e, h}\right) ;{\pi }_{1}\left( {g, h}\right) = g;{\pi }_{2}\left( {g, h}\right) = h \) . \( {\iota }_{i} \) is a monomorphism and \( {\pi }_{j} \) is an epimorphism \( \left( {i, j = 1,2}\right) \) . Definition 2.2. Let \( \mathrm{f} : \mathrm{G} \rightarrow \mathrm{H} \) be a homomorphism of groups. The kernel of \( \mathrm{f} \) (denoted Ker \( \;\mathrm{f})\; \) is \( \;\{ \mathrm{a}\;\varepsilon \;\mathrm{G} \mid \mathrm{f}\left( \mathrm{a}\right) = \mathrm{e}\;\varepsilon \;\mathrm{H}\} .\; \) If \( \mathrm{A}\; \) is a subset of \( \;\mathrm{G} \), then \( \;\mathrm{f}\left( \mathrm{A}\right) = \{ \mathrm{b}\;\varepsilon \;\mathrm{H}\;|\;\mathrm{b} = \mathrm{f}\left( \mathrm{a}\right) \) for some \( \mathrm{a} \in \mathrm{A}\} \) is the image of \( \mathrm{A} \) . \( \mathrm{f}\left( \mathrm{G}\right) \) is called the image of \( \mathrm{f} \) and denoted \( \operatorname{Im}\mathrm{f} \) . If \( \mathrm{B} \) is a subset of \( \mathrm{H},{\mathrm{f}}^{-1}\left( \mathrm{\;B}\right) = \{ \mathrm{a}\varepsilon \mathrm{G} \mid \mathrm{f}\left( \mathrm{a}\right) \varepsilon \mathrm{B}\} \) is the inverse image of \( \mathrm{B} \) . Theorem 2.3. 
Let \( \mathrm{f} : \mathrm{G} \rightarrow \mathrm{H} \) be a homomorphism of groups. Then (i) \( \mathrm{f} \) is a monomorphism if and only if \( \operatorname{Ker}\mathrm{f} = \{ \mathrm{e}\} \) ; (ii) \( \mathrm{f} \) is an isomorphism if and only if there is a homomorphism \( {\mathrm{f}}^{-1} : \mathrm{H} \rightarrow \mathrm{G} \) such that \( {\mathrm{{ff}}}^{-1} = {1}_{\mathrm{H}} \) and \( {\mathrm{f}}^{-1}\mathrm{f} = {1}_{\mathrm{G}} \) . PROOF. (i) If \( f \) is a monomorphism and \( a \in \operatorname{Ker}f \), then \( f\left( a\right) = {e}_{H} = f\left( e\right) \), whence \( a = e \) and \( \operatorname{Ker}f = \{ e\} \) . If \( \operatorname{Ker}f = \{ e\} \) and \( f\left( a\right) = f\left( b\right) \), then \( {e}_{H} = f\left( a\right) f{\left( b\right) }^{-1} = f\left( a\right) f\left( {b}^{-1}\right) = f\left( {a{b}^{-1}}\right) \) so that \( a{b}^{-1}\varepsilon \operatorname{Ker}f \) . Therefore, \( a{b}^{-1} = e \) (that is, \( a = b \) ) and \( f \) is a monomorphism. (ii) If \( f \) is an isomorphism, then by (13) of Introduction, Section 3 there is a map of sets \( {f}^{-1} : H \rightarrow G \) such that \( {f}^{-1}f = {1}_{G} \) and \( f{f}^{-1} = {1}_{H} \) . \( {f}^{-1} \) is easily seen to be a homomorphism. The converse is an immediate consequence of (13) of Introduction, Section 3 and Definition 2.1. Let \( G \) be a semigroup and \( H \) a nonempty subset of \( G \) . If for every \( a, b\;\varepsilon \;H \) we have \( {ab}\;\varepsilon \;H \), we say that \( H \) is closed under the product in \( G \) . This amounts to saying that the binary operation on \( G \), when restricted to \( H \), is in fact a binary operation on \( H \) . Definition 2.4. Let \( \mathrm{G} \) be a group and \( \mathrm{H} \) a nonempty subset that is closed under the product in \( \mathrm{G} \) . If \( \mathrm{H} \) is itself a group under the product in \( \mathrm{G} \), then \( \mathrm{H} \) is said to be a subgroup of \( \mathrm{G} \) . 
This is denoted by \( \mathrm{H} < \mathrm{G} \) . Two examples of subgroups of a group \( G \) are \( G \) itself and the trivial subgroup \( \langle e\rangle \) consisting only of the identity element. A subgroup \( H \) such that \( H \neq G, H \neq \langle e\rangle \) is called a proper subgroup. EXAMPLE. The set of all multiples of some fixed integer \( n \) is a subgroup of \( \mathbf{Z} \) , which is isomorphic to \( \mathbf{Z} \) (Exercise 7). EXAMPLE. In \( {S}_{n} \), the group of all permutations of \( \{ 1,2,\ldots, n\} \), the set of all permutations that leave \( n \) fixed forms a subgroup isomorphic to \( {S}_{n - 1} \) (Exercise 8). EXAMPLE. In \( {Z}_{6} = \{ 0,1,2,3,4,5\} \), both \( \{ 0,3\} \) and \( \{ 0,2,4\} \) are subgroups under addition. If \( p \) is prime, \( \left( {{Z}_{p}, + }\right) \) has no proper subgroups. EXAMPLE. If \( f : G \rightarrow H \) is a homomorphism of groups, then Ker \( f \) is a subgroup of \( G \) . If \( A \) is a subgroup of \( G, f\left( A\right) \) is a subgroup of \( H \) ; in particular Im \( f \) is a subgroup of \( H \) . If \( B \) is a subgroup of \( H,{f}^{-1}\left( B\right) \) is a subgroup of \( G \) (Exercise 9). EXAMPLE. If \( G \) is a group, then the set Aut \( G \) of all automorphisms of \( G \) is a group, with composition of functions as binary operation (Exercise 15). By Theorem 1.2 the identity element of any subgroup \( H \) is the identity element of \( G \) and the inverse of \( a \in H \) is the inverse \( {a}^{-1} \) of \( a \) in \( G \) . Theorem 2.5. Let \( \mathrm{H} \) be a nonempty subset of a group \( \mathrm{G} \) . Then \( \mathrm{H} \) is a subgroup of \( \mathrm{G} \) if and only if \( {\mathrm{{ab}}}^{-1}\varepsilon \mathrm{H} \) for all \( \mathrm{a},\mathrm{b}\varepsilon \mathrm{H} \) . PROOF. ( \( \Leftarrow \) ) There exists \( {a\varepsilon H} \) and hence \( e = a{a}^{-1}{\varepsilon H} \) . 
Thus for any \( b\;\varepsilon \;H,{b}^{-1} = e{b}^{-1}\;\varepsilon \;H \) . If \( a, b\;\varepsilon \;H \), then \( {b}^{-1}\;\varepsilon \;H \) and hence \( {ab} = a{\left( {b}^{-1}\right) }^{-1}\;\varepsilon \;H \) . The product in \( H \) is associative since \( G \) is a group. Therefore \( H \) is a (sub)group. The converse is trivial. Corollary 2.6. If \( \mathrm{G} \) is a group and \( \left\{ {{\mathrm{H}}_{\mathrm{i}} \mid \mathrm{i}\varepsilon \mathrm{I}}\right\} \) is a nonempty family of subgroups, then \( \mathop{\bigcap }\limits_{{i \in I}}{\mathrm{H}}_{\mathrm{i}} \) is a subgroup of \( \mathrm{G} \) . PROOF. Exercise. Definition 2.7. Let \( \mathrm{G} \) be a group and \( \mathrm{X} \) a subset of \( \mathrm{G} \) . Let \( \left\{ {{\mathrm{H}}_{\mathrm{i}} \mid \mathrm{i}\varepsilon \mathrm{I}}\right\} \) be the family of all subgroups of \( \mathrm{G} \) which contain \( \mathrm{X} \) . Then \( \mathop{\bigcap }\limits_{{i \in I}}{\mathrm{H}}_{\mathrm{i}} \) is called the subgroup of \( \mathrm{G} \) generated by the set \( \mathrm{X} \) and denoted \( \langle \mathrm{X}\rangle \) . The elements of \( X \) are the generators of the subgroup \( \langle X\rangle \), which may also be generated by other subsets (that is, we may have \( \langle X\rangle = \langle Y\rangle \) with \( X \neq Y \) ). If \( X = \left\{ {{a}_{1},\ldots ,{a}_{n}}\right\} \), we write \( \left\langle {{a}_{1},\ldots ,{a}_{n}}\right\rangle \) in place of \( \langle X\rangle \) . If \( G = \left\langle {{a}_{1},\ldots ,{a}_{n}}\right\rangle ,\left( {{a}_{i} \in G}\right) \), \( G \) is said to be finitely generated. If \( a\;\varepsilon \;G \), the subgroup \( \langle a\rangle \) is called the cyclic (sub)group generated by \( a \) . Theorem 2.8. 
If \( \mathrm{G} \) is a group and \( \mathrm{X} \) is a nonempty subset of \( \mathrm{G} \), then the subgroup \( \langle \mathrm{X}\rangle \) generated by \( \mathrm{X} \) consists of all finite products \( {\mathrm{a}}_{1}{}^{{\mathrm{n}}_{1}}{\mathrm{a}}_{2}{}^{{\mathrm{n}}_{2}}\cdots {\mathrm{a}}_{\mathrm{t}}{}^{{\mathrm{n}}_{\mathrm{t}}}\left( {{\mathrm{a}}_{\mathrm{i}} \in \mathrm{X};{\mathrm{n}}_{\mathrm{i}} \in \mathbf{Z}}\rig
109_The rising sea Foundations of Algebraic Geometry
Definition 6.105. The unitary group \( \mathrm{U}\left( V\right) \) is the group of \( K \) -linear automorphisms of \( V \) that preserve \( \langle - , - \rangle \) . The special unitary group \( \mathrm{{SU}}\left( V\right) \leq U\left( V\right) \) is the subgroup consisting of automorphisms of determinant 1 . The groups depend on the particular conjugation on \( K \), which we have omitted from the notation. If \( V = {K}^{m} \) with the standard form, we will write \( {\mathrm{U}}_{m}\left( K\right) \) and \( {\mathrm{{SU}}}_{m}\left( K\right) \) . The construction of a BN-pair in \( \mathrm{U}\left( V\right) \) (or \( \mathrm{{SU}}\left( V\right) \) ) now proceeds as in the case of the orthogonal groups, except that it is easier. In particular, \( U\left( V\right) \) works as well as \( \mathrm{{SU}}\left( V\right) \), and they both yield the same thick building of type \( {\mathrm{C}}_{n} \), where \( n \) is the Witt index, again defined to be the common dimension of the maximal totally isotropic subspaces. (In the case of the standard form, the Witt index is the same number \( n \) that occurred in the definition of the form.) The building can be identified with the flag complex of totally isotropic subspaces of \( V \) . Details are left to the interested reader. Finally, we remark that one can use the simplicity theorems of Section 6.2.7 to deduce that \( \operatorname{PSU}\left( V\right) \), which is defined to be \( \mathrm{{SU}}\left( V\right) \) modulo its center [consisting of multiples of the identity by scalars of norm 1], is simple if \( \dim V \geq 2 \) and the Witt index is \( > 0 \), except for \( {\operatorname{PSU}}_{2}\left( {\mathbb{F}}_{4}\right) ,{\operatorname{PSU}}_{2}\left( {\mathbb{F}}_{9}\right) \), and \( {\operatorname{PSU}}_{3}\left( {\mathbb{F}}_{4}\right) \) . As usual, one must first show that \( \operatorname{PSU}\left( V\right) \) is perfect; see [24, Section 22; 123, Chapter 11]. See also Remark 6.100. 
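As a small computational sketch (our own illustration, not from the text), one can check over \( K = {\mathbb{F}}_{4} \), with conjugation the Frobenius \( x \mapsto {x}^{2} \), that the standard Hermitian form on \( {K}^{2} \) has nonzero isotropic vectors, so its Witt index is 1. The encoding of \( {\mathbb{F}}_{4} \) below is an implementation choice:

```python
# The standard Hermitian form <u, v> = u1*conj(v1) + u2*conj(v2) on K^2
# for K = F_4, with conjugation the Frobenius x -> x^2.
# Elements of F_4 are encoded as integers 0..3 (bit i = coefficient of t^i),
# multiplication is modulo the irreducible polynomial t^2 + t + 1.

def mul(x, y):
    # carry-less (GF(2)) polynomial product, then reduce via t^2 = t + 1
    r = 0
    for i in range(2):
        if (y >> i) & 1:
            r ^= x << i
    if r & 0b100:
        r ^= 0b111
    return r

def conj(x):          # Frobenius: x -> x^2
    return mul(x, x)

def herm(u, v):       # addition in F_4 is XOR
    return mul(u[0], conj(v[0])) ^ mul(u[1], conj(v[1]))

vectors = [(a, b) for a in range(4) for b in range(4) if (a, b) != (0, 0)]
isotropic = [v for v in vectors if herm(v, v) == 0]

# <v, v> = v1^3 + v2^3, and every nonzero cube in F_4* is 1, so the form
# vanishes exactly when both coordinates are nonzero: 3 * 3 = 9 vectors.
assert len(isotropic) == 9
```

Since \( \dim V = 2 \), a maximal totally isotropic subspace here is a line, so the existence of these vectors pins the Witt index at 1.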
## 6.9 Example: \( {\mathrm{{SL}}}_{n} \) over a Field with Discrete Valuation Up to now, all of our examples of BN-pairs have had finite Weyl groups (and hence spherical buildings). It turns out that many of the same groups that occurred in those examples admit a second BN-pair structure whenever the ground field comes equipped with a discrete valuation. This was first noticed by Iwahori and Matsumoto [137] and was later generalized to a much larger class of groups by Bruhat and Tits [59]. The Weyl group for this second BN-pair is an infinite Euclidean reflection group, and the associated building has apartments that are Euclidean spaces. We will illustrate this by treating the groups \( {\mathrm{{SL}}}_{n} \) . But first we must review discrete valuations. ## 6.9.1 Discrete Valuations Let \( K \) be a field and \( {K}^{ * } \) its multiplicative group of nonzero elements. Definition 6.106. A discrete valuation on \( K \) is a surjective homomorphism \( v : {K}^{ * } \rightarrow \mathbb{Z} \) satisfying the following inequality: \[ v\left( {x + y}\right) \geq \min \{ v\left( x\right), v\left( y\right) \} \] for all \( x, y \in {K}^{ * } \) with \( x + y \neq 0 \) . It is convenient to extend \( v \) to a function defined on all of \( K \) by setting \( v\left( 0\right) = + \infty \) ; the inequality then remains valid for all \( x, y \in K \) . Note that we necessarily have \( v\left( {-1}\right) = 0 \), since \( \mathbb{Z} \) is torsion-free; hence \( v\left( {-x}\right) = v\left( x\right) \) . It follows from this and the inequality above that the set \( A \mathrel{\text{:=}} \{ x \in K \mid v\left( x\right) \geq 0\} \) is a subring of \( K \) ; it is called the valuation ring associated to \( v \) . And any ring \( A \) that arises in this way from a discrete valuation is called a discrete valuation ring. The group \( {A}^{ * } \) of units of \( A \) is precisely the kernel \( {v}^{-1}\left( 0\right) \) of \( v \) . 
So if we pick an element \( \pi \in K \) with \( v\left( \pi \right) = 1 \), then every element \( x \in {K}^{ * } \) is uniquely expressible in the form \( x = {\pi }^{n}u \) with \( n \in \mathbb{Z} \) and \( u \in {A}^{ * } \) . In particular, \( K \) is the field of fractions of \( A \) . The principal ideal \( {\pi A} \) generated by \( \pi \) can be described in terms of \( v \) as \( \{ x \in K \mid v\left( x\right) > 0\} \) . It is a maximal ideal, since every element of \( A \) not in \( {\pi A} \) is a unit. The quotient ring \( k \mathrel{\text{:=}} A/{\pi A} \) is therefore a field, called the residue field associated to the valuation \( v \) . Example 6.107. Let \( K \) be the field \( \mathbb{Q} \) of rational numbers, and let \( p \) be a prime number. The p-adic valuation on \( \mathbb{Q} \) is defined by setting \( v\left( x\right) \) equal to the exponent of \( p \) in the prime factorization of \( x \) . More precisely, given \( x \in {\mathbb{Q}}^{ * } \), write \( x = {p}^{n}u \), where \( n \) is a (possibly negative) integer and \( u \) is a rational number whose numerator and denominator are not divisible by \( p \) ; then \( v\left( x\right) = n \) . The valuation ring \( A \) is the ring of fractions \( a/b \) with \( a, b \in \mathbb{Z} \) and \( b \) not divisible by \( p \) . [The ring \( A \) happens to be the localization of \( \mathbb{Z} \) at \( p \) , but we will not make any use of this.] The residue field \( k \) is the field \( {\mathbb{F}}_{p} \) of integers \( {\;\operatorname{mod}\;p} \) ; one sees this by using the homomorphism \( A \rightarrow {\mathbb{F}}_{p} \) given by \( a/b \mapsto \left( {a{\;\operatorname{mod}\;p}}\right) {\left( b{\;\operatorname{mod}\;p}\right) }^{-1} \), where \( a \) and \( b \) are as above. The valuation ring \( A \) in this example can be described informally as the largest subring of \( \mathbb{Q} \) on which reduction mod \( p \) makes sense. 
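The \( p \) -adic valuation is easy to implement directly. The sketch below (our own; `vp` is an assumed helper name) checks the homomorphism property and the defining inequality of Definition 6.106 on sample rationals, as well as membership in the valuation ring \( A \):

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x = p^n * u, p not dividing u."""
    x = Fraction(x)
    assert x != 0
    n, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; n += 1
    while den % p == 0:
        den //= p; n -= 1
    return n

p = 5
assert vp(50, p) == 2                    # 50 = 5^2 * 2
assert vp(Fraction(3, 125), p) == -3     # 3/125 = 5^(-3) * 3

samples = [Fraction(a, b) for a in range(-9, 10) for b in range(1, 10) if a != 0]
for x in samples:
    for y in samples:
        assert vp(x * y, p) == vp(x, p) + vp(y, p)          # v is a homomorphism
        if x + y != 0:
            assert vp(x + y, p) >= min(vp(x, p), vp(y, p))  # ultrametric inequality

# the valuation ring A = {x : vp(x) >= 0} contains 1/2 but not 1/5
assert vp(Fraction(1, 2), p) >= 0 and vp(Fraction(1, 5), p) < 0
```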
It is thus the natural ring to introduce if one wants to relate the field \( \mathbb{Q} \) to the field \( {\mathbb{F}}_{p} \) . This illustrates our point of view toward valuations: We will be interested in studying objects (such as matrix groups) defined over a field \( K \), and we wish to "reduce" to a simpler field \( k \) as an aid in this study; a discrete valuation makes this possible by providing us with a ring \( A \) to serve as intermediary between \( K \) and \( k \) : ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_369_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_369_0.jpg) Examples 6.108. Let \( K = k\left( t\right) \), the field of rational functions in one variable over a field \( k \) . (a) Let \( v = {v}_{0} \) be the valuation on \( K \) that gives the order of vanishing at 0 of a rational function. In other words, if \( f\left( t\right) = {t}^{n}g\left( t\right) \), where \( t \) does not divide the numerator or denominator of \( g\left( t\right) \), then \( v\left( f\right) = n \) . This is the analogue of the \( p \) -adic valuation, with the polynomial ring \( k\left\lbrack t\right\rbrack \) playing the role of \( \mathbb{Z} \) and \( t \) playing the role of \( p \) . The valuation ring \( A \) is the set of rational functions such that \( f\left( 0\right) \) is defined, the maximal ideal is generated by \( \pi \mathrel{\text{:=}} t \), and the residue field \( A/{\pi A} \) can be identified with the original field \( k \) via \( f \mapsto f\left( 0\right) \) for \( f \in A \) . (b) Now take \( v = {v}_{\infty } \), the order of vanishing of a rational function at infinity. In other words, if \( f\left( t\right) = g\left( t\right) /h\left( t\right) \) with \( g \) and \( h \) polynomials, then \[ {v}_{\infty }\left( f\right) \mathrel{\text{:=}} \deg h - \deg g. \] Note that \( {v}_{\infty }\left( {f\left( t\right) }\right) = {v}_{0}\left( {f\left( {1/t}\right) }\right) \) . 
The valuation ring \( A \) is the set of rational functions such that \( f\left( \infty \right) \) is defined, i.e., the degree of the numerator is less than or equal to the degree of the denominator. We can take \( \pi = 1/t \), and we can identify the residue field with \( k \) via \( f \mapsto f\left( \infty \right) \) . Returning now to the general theory, we note that the study of the arithmetic of \( A \) (e.g., ideals and prime factorization) is fairly trivial: Proposition 6.109. A discrete valuation ring \( A \) is a principal ideal domain, and every nonzero ideal is generated by \( {\pi }^{n} \) for some \( n \geq 0 \) . In particular, \( {\pi A} \) is the unique nonzero prime ideal of \( A \) . Proof. Let \( I \) be a nonzero ideal and let \( n \mathrel{\text{:=}} \min \{ v\left( a\right) \mid a \in I\} \) . Then \( I \) contains \( {\pi }^{n} \), and every element of \( I \) is divisible by \( {\pi }^{n} \) ; hence \( I = {\pi }^{n}A \) . One consequence of this is that we can apply the basic facts about modules over a principal ideal domain (e.g., a submodule of a free module is free). Let's recall some of these facts, in the form in which we'll need them later. Definition 6.110. Let \( V \) be the vector space \( {K}^{n} \) . By a lattice (or A-lattice) in \( V \) we will mean an \( A \) -submodule \( L < V \) of the form \( L = A{e}_{1} \oplus \cdots \oplus A{e}_{n} \) for some basis \( {e}_{1},\ldots ,{e}_{n} \) of \( V \) . In particular, \( L \) is a free \( A \) -module of rank \( n \) . If we take \( {e}_{1},\ldots ,{e}_{n} \) to be the standard basis of \( V \), then the resulting lattice is \( {A}^{n} \), which we call the standard lattice. If \( {L}^{\prime } \) is a second lattice in \( V \), then we can choose our basis \( {e}_{1},\ldots ,{e}_{n} \) for \( L \) in such a way that \( {L}^{\prime } \) admits a basis of the form \( {\lambda }_{1}{e}_{1},\ldots ,{\lambda }_{n}{e}_{n} \) for some scalars \( {\lambda }_{i} \in {K}^{ * } \) . 
This fact should be familiar for the case \( {L}^{\prime } \leq L \), and the general case follows easily. [Choose a large integer \( M \) such that \( {\pi }^{M}{L}^{\prime } \leq L
11_2022-An_Analogy_of_the_Carleson-Hunt_Theorem_with_Respe
Definition 2.42. The map \( {T}_{A} : A \rightarrow A \) defined (almost everywhere) by \[ {T}_{A}\left( x\right) = {T}^{{r}_{A}\left( x\right) }\left( x\right) \] is called the transformation induced by \( T \) on the set \( A \) . Notice that both \( {r}_{A} : X \rightarrow \mathbb{N} \) and \( {T}_{A} : A \rightarrow A \) are measurable by the following argument. For \( n \geq 1 \), write \( {A}_{n} = \left\{ {x \in A \mid {r}_{A}\left( x\right) = n}\right\} \) . Then the sets \[ {A}_{1} = A \cap {T}^{-1}A \] \[ {A}_{2} = A \cap {T}^{-2}A \smallsetminus {A}_{1} \] \[ \vdots \] \[ {A}_{n} = A \cap {T}^{-n}A \smallsetminus \mathop{\bigcup }\limits_{{i < n}}{A}_{i} \] are all measurable, as is \[ {T}^{n}{A}_{n} = A \cap {T}^{n}A \smallsetminus \left( {{TA} \cup {T}^{2}A \cup \cdots \cup {T}^{n - 1}A}\right) , \] since \( T \) is invertible by assumption. Lemma 2.43. The induced transformation \( {T}_{A} \) is a measure-preserving transformation on the space \( \left( {A,{\left. \mathcal{B}\right| }_{A},{\mu }_{A} = {\left. \frac{1}{\mu \left( A\right) }\mu \right| }_{A},{T}_{A}}\right) \) . If \( T \) is ergodic with respect to \( \mu \) then \( {T}_{A} \) is ergodic with respect to \( {\mu }_{A} \) . The notation means that the \( \sigma \) -algebra consists of \( {\left. \mathcal{B}\right| }_{A} = \{ B \cap A \mid B \in \mathcal{B}\} \) and the measure is defined for \( B \in {\left. \mathcal{B}\right| }_{A} \) by \( {\mu }_{A}\left( B\right) = \frac{1}{\mu \left( A\right) }\mu \left( B\right) \) . The effect of \( {T}_{A} \) is seen in the Kakutani skyscraper Fig. 2.2. The original transformation \( T \) sends any point with a floor above it to the point immediately above on the next floor, and any point on a top floor is moved somewhere to the base floor \( A \) . 
The induced transformation \( {T}_{A} \) is the map defined almost everywhere on the bottom floor by sending each point to the point obtained by going through all the floors above it and returning to \( A \) . ![8dbccc5c-1b75-4e41-a357-96decb48997c_81_0.jpg](images/8dbccc5c-1b75-4e41-a357-96decb48997c_81_0.jpg) Fig. 2.2 The induced transformation \( {T}_{A} \) Proof of Lemma 2.43. If \( B \subseteq A \) is measurable, then \( B = \mathop{\bigsqcup }\limits_{{n \geq 1}}B \cap {A}_{n} \) is a disjoint union so \[ {\mu }_{A}\left( B\right) = \frac{1}{\mu \left( A\right) }\mathop{\sum }\limits_{{n \geq 1}}\mu \left( {B \cap {A}_{n}}\right) \] (2.37) Now \[ {T}_{A}\left( B\right) = \mathop{\bigsqcup }\limits_{{n \geq 1}}{T}_{A}\left( {B \cap {A}_{n}}\right) = \mathop{\bigsqcup }\limits_{{n \geq 1}}{T}^{n}\left( {B \cap {A}_{n}}\right) \] so \[ {\mu }_{A}\left( {{T}_{A}\left( B\right) }\right) = \frac{1}{\mu \left( A\right) }\mathop{\sum }\limits_{{n \geq 1}}\mu \left( {{T}^{n}\left( {B \cap {A}_{n}}\right) }\right) \] \[ = \frac{1}{\mu \left( A\right) }\mathop{\sum }\limits_{{n \geq 1}}\mu \left( {B \cap {A}_{n}}\right) \;\left( {\text{ since }T\text{ preserves }\mu }\right) \] \[ = \mu \left( B\right) \] by (2.37). If \( {T}_{A} \) is not ergodic, then there is a \( {T}_{A} \) -invariant measurable set \( B \subseteq A \) with \( 0 < \mu \left( B\right) < \mu \left( A\right) \) ; it follows that \( \mathop{\bigcup }\limits_{{n \geq 1}}\mathop{\bigcup }\limits_{{j = 0}}^{{n - 1}}{T}^{j}\left( {B \cap {A}_{n}}\right) \) is a nontrivial \( T \) -invariant set, showing that \( T \) is not ergodic. Poincaré recurrence (Theorem 2.11) says that for any measure-preserving system \( \left( {X,\mathcal{B},\mu, T}\right) \) and set \( A \) of positive measure, almost every point on the ground floor of the associated Kakutani skyscraper returns to the ground floor at some point. 
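Return times to the ground floor are easy to observe numerically. The sketch below (our own illustration, assuming the irrational circle rotation \( T\left( x\right) = x + \alpha {\;\operatorname{mod}\;1} \) is ergodic for Lebesgue measure) records successive return times to \( A = \lbrack 0,1/{10}) \) along one orbit and finds their average close to \( 1/\mu \left( A\right) = {10} \):

```python
import math

# Return times for the circle rotation T(x) = x + alpha (mod 1), which is
# ergodic for Lebesgue measure when alpha is irrational.
# Illustrative choices: alpha = sqrt(2) - 1 and A = [0, 0.1), so mu(A) = 1/10.
alpha = math.sqrt(2) - 1
mu_A = 0.1

x = 0.05                       # start on the "ground floor" A
visits = []
for step in range(1, 200_000):
    x = (x + alpha) % 1.0
    if x < mu_A:               # the orbit is back in A
        visits.append(step)

# successive differences are the return times r_A along this orbit
gaps = [b - a for a, b in zip([0] + visits, visits)]
mean_gap = sum(gaps) / len(gaps)

# the average return time is close to 1/mu(A) = 10
assert abs(mean_gap - 10.0) < 0.5
```

The average observed here is exactly what the quantitative recurrence theorem of Kac predicts for the expected return time.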
Ergodicity strengthens this statement to say that almost every point of the entire space \( X \) lies on some floor of the skyscraper. This enables a quantitative version of Poincaré recurrence to be found, a result due to Kac [168]. Theorem 2.44 (Kac). Let \( \left( {X,\mathcal{B},\mu, T}\right) \) be an ergodic measure-preserving system and let \( A \in \mathcal{B} \) have \( \mu \left( A\right) > 0 \) . Then the expected return time to \( A \) is \( \frac{1}{\mu \left( A\right) } \) ; equivalently \[ {\int }_{A}{r}_{A}\mathrm{\;d}\mu = 1 \] \( {\text{Proof}}^{\left( {35}\right) } \) . Referring to Fig. 2.2, each column \[ {A}_{n} \sqcup T\left( {A}_{n}\right) \sqcup \cdots \sqcup {T}^{n - 1}\left( {A}_{n}\right) \] comprises \( n \) disjoint sets each of measure \( \mu \left( {A}_{n}\right) \), and the entire skyscraper contains almost all of \( X \) by ergodicity and Proposition 2.14(3) applied to the transformation \( {T}^{-1} \) . It follows that \[ 1 = \mu \left( X\right) = \mathop{\sum }\limits_{{n \geq 1}}{n\mu }\left( {A}_{n}\right) = {\int }_{A}{r}_{A}\mathrm{\;d}\mu \] by the monotone convergence theorem (Theorem A.16), since \( {r}_{A} \) is the increasing limit of the functions \( \mathop{\sum }\limits_{{k = 1}}^{n}k{\chi }_{{A}_{k}} \) as \( n \rightarrow \infty \) . Kakutani skyscrapers are a powerful tool in ergodic theory. A simple application is to prove the Kakutani-Rokhlin lemma (Lemma 2.45) proved by Kakutani [172] and Rokhlin [315]. Lemma 2.45 (Kakutani-Rokhlin). Let \( \left( {X,\mathcal{B},\mu, T}\right) \) be an invertible ergodic measure-preserving system and assume that \( \mu \) is non-atomic (that is, \( \mu \left( {\{ x\} }\right) = 0 \) for all \( x \in X \) ). 
Then for any \( n \geq 1 \) and \( \varepsilon > 0 \) there is a set \( B \in \mathcal{B} \) with the property that \[ B, T\left( B\right) ,\ldots ,{T}^{n - 1}\left( B\right) \] are disjoint sets, and \[ \mu \left( {B \sqcup T\left( B\right) \sqcup \cdots \sqcup {T}^{n - 1}\left( B\right) }\right) > 1 - \varepsilon . \] As the proof will show, the lemma uses only division (constructing a quotient and remainder) and the Kakutani skyscraper. Proof of Lemma 2.45. Let \( A \) be a measurable set with \( 0 < \mu \left( A\right) < \varepsilon /n \) (such a set exists by the assumption that \( \mu \) is non-atomic) and form the Kakutani skyscraper over \( A \) . Then \( X \) decomposes into a union of disjoint columns of the form \[ {A}_{k} \sqcup T\left( {A}_{k}\right) \sqcup \cdots \sqcup {T}^{k - 1}\left( {A}_{k}\right) \] for \( k \geq 1 \), as in Fig. 2.2. Now let \[ B = \mathop{\bigsqcup }\limits_{{k \geq n}}\mathop{\bigsqcup }\limits_{{j = 0}}^{{\lfloor k/n\rfloor - 1}}{T}^{jn}\left( {A}_{k}\right) \] the set obtained by grouping together that part of the ground floor made up of the sets \( {A}_{k} \) with \( k \geq n \) together with every \( n \) th floor above that part of the ground floor (stopping before the top of the skyscraper). By construction the sets \( B, T\left( B\right) ,\ldots ,{T}^{n - 1}\left( B\right) \) are disjoint, and together they cover all of \( X \) apart from a set comprising no more than \( n \) of the floors in each of the towers, which therefore has measure no more than \( n\mathop{\sum }\limits_{{k = 1}}^{\infty }\mu \left( {A}_{k}\right) \leq {n\mu }\left( A\right) < \varepsilon \) . One often refers to the structure given by Lemma 2.45 as a Rokhlin tower of height \( n \) with base \( B \) and residual set of size \( \varepsilon \) . ## Exercises for Sect. 2.9 Exercise 2.9.1. Show that the inducing construction can be reversed in the following sense. 
Let \( \left( {X,\mathcal{B},\mu, T}\right) \) be a measure-preserving system, and let \( r : X \rightarrow \mathbb{N} \) be a map in \( {L}_{\mu }^{1} \) . The suspension defined by \( r \) is the system \( \left( {{X}^{\left( r\right) },{\mathcal{B}}^{\left( r\right) },{\mu }^{\left( r\right) },{T}^{\left( r\right) }}\right) \), where: - \( {X}^{\left( r\right) } = \{ \left( {x, n}\right) \mid 0 \leq n < r\left( x\right) \} \) ; - \( {\mathcal{B}}^{\left( r\right) } \) is the product \( \sigma \) -algebra of \( \mathcal{B} \) and the Borel \( \sigma \) -algebra on \( \mathbb{N} \) (which comprises all subsets); - \( {\mu }^{\left( r\right) } \) is defined by \( {\mu }^{\left( r\right) }\left( {A \times N}\right) = \frac{1}{\int r\mathrm{\;d}\mu }\mu \left( A\right) \cdot \left| N\right| \) for \( A \in \mathcal{B} \) and \( N \subseteq \mathbb{N} \) ; and \[ \text{-}{T}^{\left( r\right) }\left( {x, n}\right) = \left\{ \begin{array}{ll} \left( {x, n + 1}\right) & \text{ if }n + 1 < r\left( x\right) ; \\ \left( {T\left( x\right) ,0}\right) & \text{ if }n + 1 = r\left( x\right) . \end{array}\right. \] (a) Verify that this defines a finite measure-preserving system. (b) Show that the induced map on the set \( A = \{ \left( {x,0}\right) \mid x \in X\} \) is isomorphic to the original system \( \left( {X,\mathcal{B},\mu, T}\right) \) . Exercise 2.9.2. \( {}^{\left( {36}\right) } \) The hypothesis of ergodicity in Lemma 2.45 can be weakened as follows. An invertible measure-preserving system \( \left( {X,\mathcal{B},\mu, T}\right) \) is called aperiodic if \( \mu \left( \left\{ {x \in X \mid {T}^{k}\left( x\right) = x}\right\} \right) = 0 \) for all \( k \in \mathbb{Z} \smallsetminus \{ 0\} \) . (a) Show that an ergodic transformation on a non-atomic space is aperiodic. (b) Find an example of an aperiodic transformation on a non-atomic space that is not ergodic. (c) Prove Lemma 2.45 for an invertible aperiodic transformation on a non-atomic space. Exercise 2.9.3.
\( {}^{\left( {37}\right) } \) Show that the Kakutani-Rokhlin lemma (Lemma 2.45) does not hold for arbitrary sequences of iterates of the map \( T \) . Specifically, show that for an ergodic measure-preserving system \( \left( {X,\mathcal{B},\mu, T}\right) \), a sequence \( {a}_{1},\ldots ,{a}_{n} \) of distinct integers, and \( \varepsilon > 0 \), it is not always possible to find a measurable set \( A \) with the properties that \( {T}^{{a}_{1}}\left( A\right) ,\ldots ,{T}^{{a}_{n}}\left( A\right) \) are disjoint and \( \mu \left( {\mathop{\bigcup }\limits_{{i = 1}}^{n}{T}^{{a}_{i}}\left( A\right) }\right) > \varepsilon \) .
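Both Kac's formula and the tower construction in the proof of Lemma 2.45 can be checked exactly on a finite model: the cyclic shift \( T\left( x\right) = x + 1 \) on \( \mathbb{Z}/N\mathbb{Z} \) with uniform measure is ergodic. A minimal Python sketch (the choices of \( N, A \), and \( n \) are our own):

```python
# Finite model: T(x) = x + 1 (mod N) on Z/NZ with uniform measure is ergodic.
N = 101
A = {0, 23, 57}

def r(x):
    # first-return time of x in A back to A under T
    y, n = (x + 1) % N, 1
    while y not in A:
        y, n = (y + 1) % N, n + 1
    return n

ret = {x: r(x) for x in A}
# Kac: the integral of r_A over A is 1, i.e. the return times sum to N
assert sum(ret.values()) == N

# Rokhlin tower of height n, following the proof of Lemma 2.45:
# B is the union over k >= n of T^{jn}(A_k) for 0 <= j < floor(k/n),
# where A_k = {x in A : r_A(x) = k}
n = 5
B = {(x + j * n) % N for x, k in ret.items() if k >= n for j in range(k // n)}
floors = [{(b + i) % N for b in B} for i in range(n)]  # B, T(B), ..., T^{n-1}(B)
union = set().union(*floors)
assert len(union) == n * len(B)     # the n floors are pairwise disjoint
assert len(union) > N - n * len(A)  # residual set has measure < n * mu(A)
```

With these choices the return times are 23, 34, and 44, the tower has base of size 18, and the five floors cover 90 of the 101 points, more than the \( 1 - n\mu \left( A\right) \) bound from the proof.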
1359_[陈省身] Lectures on Differential Geometry
Definition 4.6
Definition 4.6. Suppose \( D \) is a connection on a holomorphic vector bundle \( \left( {E, M,\pi }\right) \) . If for any holomorphic local frame field \( S \), the connection matrix \( \omega \) is composed of differential \( \left( {1,0}\right) \) -forms on \( M \) with respect to the canonical almost complex structure on \( M \), then \( D \) is called a type \( \left( {1,0}\right) \) connection. The above definition is meaningful, since the fact that \( \omega \) is a type \( \left( {1,0}\right) \) matrix is independent of the choice of the local frame field \( S \) . Indeed, under a holomorphic transformation (4.55) of local frame fields, \( A \) is a matrix of holomorphic functions. Hence \( \bar{\partial }A = 0 \) by Theorem 3.6. Thus \[ {\omega }^{\prime } = {dA} \cdot {A}^{-1} + A \cdot \omega \cdot {A}^{-1} \] \[ = \partial A \cdot {A}^{-1} + A \cdot \omega \cdot {A}^{-1}. \] Thus, if \( \omega \) is a type \( \left( {1,0}\right) \) matrix, then so is \( {\omega }^{\prime } \) . The converse is also true. If \( E \) is a Hermitian holomorphic vector bundle, then there is a unique type \( \left( {1,0}\right) \) compatible connection on \( E \) . In fact, the condition for \( \omega \) to be a compatible connection is \[ {dH} = \omega \cdot H + H \cdot {}^{t}\bar{\omega } \] If \( \omega \) is a type \( \left( {1,0}\right) \) matrix, then \( \bar{\omega } \) is a type \( \left( {0,1}\right) \) matrix. Thus \[ \partial H = \omega \cdot H. \] (4.56) From this we see that the type \( \left( {1,0}\right) \) compatible connection \( \omega \) must be given by \[ \omega = \partial H \cdot {H}^{-1} \] (4.57) It is easy to verify that the above equation actually gives a connection on \( E \) .
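In the rank-one case \( H \) is a single positive function \( h \), and (4.56)-(4.57) reduce to \( \omega = \partial h \cdot {h}^{-1} = \partial \log h \), with the compatibility condition \( {dh} = {\omega h} + h\bar{\omega } \) . This can be checked numerically; a sketch using central-difference Wirtinger derivatives, with \( h = 1 + {\left| z\right| }^{2} \) as our own example metric:

```python
# Rank-one check of (4.56)-(4.57) for the line bundle over C with
# Hermitian metric h(z) = 1 + |z|^2, written as h(x, y) with z = x + iy.
EPS = 1e-6

def h(x, y):
    return 1 + x * x + y * y

def d_z(f, x, y):
    # Wirtinger derivative d/dz = (d/dx - i d/dy)/2, by central differences
    fx = (f(x + EPS, y) - f(x - EPS, y)) / (2 * EPS)
    fy = (f(x, y + EPS) - f(x, y - EPS)) / (2 * EPS)
    return (fx - 1j * fy) / 2

def d_zbar(f, x, y):
    # d/dzbar = (d/dx + i d/dy)/2
    fx = (f(x + EPS, y) - f(x - EPS, y)) / (2 * EPS)
    fy = (f(x, y + EPS) - f(x, y - EPS)) / (2 * EPS)
    return (fx + 1j * fy) / 2

x0, y0 = 0.7, -0.4
z = x0 + 1j * y0
omega = d_z(h, x0, y0) / h(x0, y0)  # connection coefficient, (4.57)
# closed form: d_z h = conj(z), so omega = conj(z) / (1 + |z|^2)
assert abs(omega - z.conjugate() / (1 + abs(z) ** 2)) < 1e-6
# compatibility dH = omega*H + H*conj(omega), checked coefficient-wise:
assert abs(d_z(h, x0, y0) - omega * h(x0, y0)) < 1e-9             # dz-part, (4.56)
assert abs(d_zbar(h, x0, y0) - h(x0, y0) * omega.conjugate()) < 1e-6  # dzbar-part
```

The \( dz \) - and \( d\bar{z} \) -coefficients are compared separately, which is how the matrix identity splits by type in the text.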
The curvature matrix for a type \( \left( {1,0}\right) \) compatible connection on a Hermitian holomorphic vector bundle \( E \) is \[ \Omega = d\left( {\partial H \cdot {H}^{-1}}\right) - \left( {\partial H \cdot {H}^{-1}}\right) \land \left( {\partial H \cdot {H}^{-1}}\right) \] (4.58) \[ = - \partial \bar{\partial }H \cdot {H}^{-1} + \partial H \cdot {H}^{-1} \land \bar{\partial }H \cdot {H}^{-1}. \] Hence \( \Omega \) is a matrix of differential \( \left( {1,1}\right) \) -forms. ## §7-5 Hermitian Manifolds and Kählerian Manifolds Definition 5.1. Suppose \( M \) is an \( m \) -dimensional complex manifold. If a Hermitian structure \( H \) is given on the tangent bundle of \( M \), then \( M \) is called a Hermitian manifold. For a local complex coordinate system \( \left( {U;{z}^{1},\ldots ,{z}^{m}}\right) \), the corresponding local frame field of the tangent bundle is \[ {s}_{i} = \frac{\partial }{\partial {z}^{i}} \] (5.1) In this section we restrict the values assumed by indices as follows: \[ 1 \leq i, j, k, l \leq m \] and adopt the Einstein summation convention, omitting the summation symbol. Let \[ {h}_{i\bar{k}} = {h}_{\bar{k}i} = H\left( {\frac{\partial }{\partial {z}^{i}},\frac{\partial }{\partial {z}^{k}}}\right) . \] (5.2) Then \[ {\bar{h}}_{k\bar{i}} = {h}_{\bar{k}i} \] (5.3) and the matrix \( H = \left( {h}_{i\bar{k}}\right) \) is positive definite. If \( \xi ,\eta \) are two type \( \left( {1,0}\right) \) tangent vector fields on \( U \), then they can be expressed as \[ \xi = {\xi }^{i}\frac{\partial }{\partial {z}^{i}},\;\eta = {\eta }^{k}\frac{\partial }{\partial {z}^{k}}, \] (5.4) where \( {\xi }^{i},{\eta }^{k} \) are smooth complex-valued functions on \( U \) . Thus \[ \begin{matrix} H\left( {\xi ,\eta }\right) = {h}_{i\overline{k}}{\xi }^{i}{\overline{\eta }}^{k}.
\end{matrix} \] (5.5) The Kählerian form on \( M \) , \[ \widehat{H} = \frac{i}{2}{h}_{i\bar{k}}d{z}^{i} \land d{\bar{z}}^{k} \] (5.6) is a real-valued differential \( \left( {1,1}\right) \) -form. A connection on the tangent bundle has a torsion matrix. Suppose the coframe field dual to the local frame field \( S = {}^{t}\left( {{s}_{1},\ldots ,{s}_{m}}\right) \) is \( \sigma = \) \( \left( {{\sigma }^{1},\ldots ,{\sigma }^{m}}\right) \) . If there is another local frame field \[ {S}^{\prime } = A \cdot S \] (5.7) then its dual coframe field is \[ {\sigma }^{\prime } = \sigma \cdot {A}^{-1} \] or \[ \sigma = {\sigma }^{\prime } \cdot A. \] (5.8) Exteriorly differentiating (5.8) we obtain \[ {d\sigma } = d{\sigma }^{\prime } \cdot A - {\sigma }^{\prime } \land {dA} \] \[ = \;\left( {d{\sigma }^{\prime } - {\sigma }^{\prime } \land {\omega }^{\prime }}\right) A + \sigma \land \omega , \] that is \[ \tau = {\tau }^{\prime } \cdot A \] (5.9) where \[ \tau = {d\sigma } - \sigma \land \omega ,\;{\tau }^{\prime } = d{\sigma }^{\prime } - {\sigma }^{\prime } \land {\omega }^{\prime }. \] \( \left( {5.10}\right) \) \( \tau \) is a \( \left( {1 \times m}\right) \) matrix of complex-valued exterior differential 2-forms, called the torsion matrix of a connection on the tangent bundle of a complex manifold. Theorem 5.1. Suppose \( M \) is a Hermitian manifold. A necessary and sufficient condition for \( D \) to be a type \( \left( {1,0}\right) \) connection on the tangent bundle of \( M \) is that its torsion matrix is composed of differential \( \left( {2,0}\right) \) -forms. Proof. Suppose \( \sigma = \left( {{\sigma }^{1},\ldots ,{\sigma }^{m}}\right) \) is the coframe field dual to the holomorphic frame field \( S \) .
Then every \( {\sigma }^{i} \) is a holomorphic differential \( \left( {1,0}\right) \) -form, that is, for the complex local coordinate system \( {z}^{i},{\sigma }^{i} \) can be expressed as \[ {\sigma }^{i} = {a}_{j}^{i}d{z}^{j} \] where the \( {a}_{j}^{i} \) are holomorphic functions. Thus \[ \bar{\partial }\sigma = 0. \] (5.11) The connection matrix \( \omega \) can be decomposed uniquely as \[ \omega = {\omega }_{1} + {\omega }_{2} \] (5.12) where \( {\omega }_{1},{\omega }_{2} \) are matrices of differential \( \left( {1,0}\right) \) -forms and \( \left( {0,1}\right) \) -forms, respectively. Thus the torsion matrix \( \tau \) can be expressed as \[ \tau = {d\sigma } - \sigma \land \omega \] (5.13) \[ = \;\left( {\partial \sigma - \sigma \land {\omega }_{1}}\right) - \sigma \land {\omega }_{2}. \] The right hand side is already decomposed into a sum of a type \( \left( {2,0}\right) \) matrix and a type \( \left( {1,1}\right) \) matrix. Thus we see that a necessary and sufficient condition for the torsion matrix to be composed of differential \( \left( {2,0}\right) \) -forms is \[ \sigma \land {\omega }_{2} = 0. \] (5.14) Assume that \( {\omega }_{2} = \left( {\theta }_{j}^{k}\right) \) ; then (5.14) becomes \[ {\sigma }^{j} \land {\theta }_{j}^{k} = 0. \] \( \left( {5.15}\right) \) By Cartan's Lemma we have \[ {\theta }_{j}^{k} = {a}_{ij}^{k}{\sigma }^{i} \] (5.16) where the \( {a}_{ij}^{k} \) are smooth complex-valued functions. Since \( {\sigma }^{j} \) is a differential \( \left( {1,0}\right) \) -form and \( {\theta }_{j}^{k} \) is a differential \( \left( {0,1}\right) \) -form, condition \( \left( {5.14}\right) \) is equivalent to \[ {a}_{ij}^{k} = 0,\;\text{ that is,}\;{\omega }_{2} = 0, \] \( \left( {5.17}\right) \) which means that \( \omega \) is a matrix of \( \left( {1,0}\right) \) -forms.
In our definition of type \( \left( {1,0}\right) \) connections we used holomorphic frame fields; yet the criterion given in Theorem 5.1 requires only the smoothness of local frame fields. This is because (5.9) implies that a change of local frame fields as in (5.7) does not change the type of the torsion forms \( {\tau }^{i} \), i.e., the type of the torsion matrix. This is a convenient fact in the study of Hermitian manifolds. In fact, a unitary frame field \( \left\{ {{s}_{i};H\left( {{s}_{i},{s}_{j}}\right) = {\delta }_{ij}}\right\} \) is smooth, but not necessarily holomorphic. According to our discussion in the last paragraph of the previous section, there exists a unique type \( \left( {1,0}\right) \) compatible connection on the tangent bundle of a Hermitian manifold. Its torsion matrix must be of \( \left( {2,0}\right) \) -type, and its curvature matrix \( \left( {1,1}\right) \) -type. We usually call this connection the Hermitian connection. Definition 5.2. If the Kählerian form \( \widehat{H} \) of a Hermitian manifold \( M \) is a closed exterior differential form, that is, \[ d\widehat{H} = 0 \] (5.18) then we call \( M \) a Kählerian manifold. Theorem 5.2. A necessary and sufficient condition for a Hermitian manifold \( M \) to be a Kählerian manifold is that the torsion matrix of the Hermitian connection on \( M \) is zero. Proof. Obviously, the two conditions mentioned in this theorem are both independent of the choice of frame fields. Hence we only need to prove the theorem with respect to the natural frame field (5.1). Suppose the coframe field dual to \( \left( {5.1}\right) \) is \[ \sigma = \left( {d{z}^{1},\ldots, d{z}^{m}}\right) \] Then \( {d\sigma } = 0 \) .
Since the Hermitian connection is \( \omega = \partial H \cdot {H}^{-1} \), where \( H \) is given by (5.2), a necessary and sufficient condition for the torsion matrix \( \tau \) to be zero is: \[ \sigma \land \partial H = 0, \] (5.19) or \[ \mathop{\sum }\limits_{{i, j}}\frac{\partial {h}_{i\bar{k}}}{\partial {z}^{j}}d{z}^{j} \land d{z}^{i} = 0 \] \( \left( {5.20}\right) \) that is, \[ \frac{\partial {h}_{i\bar{k}}}{\partial {z}^{j}} - \frac{\partial {h}_{j\bar{k}}}{\partial {z}^{i}}\; = \;0,\;1 \leq i, j, k \leq m. \] But the exterior derivative of the Kählerian form \( \widehat{H} \) is \[ d\widehat{H} = \frac{i}{2}\left( {\frac{\partial {h}_{i\bar{k}}}{\partial {z}^{j}}d{z}^{j} + \frac{\partial {h}_{i\bar{k}}}{\partial {\bar{z}}^{j}}d{\bar{z}}^{j}}\right) \land d{z}^{i} \land d{\bar{z}}^{k} \] (5.21) \[ = \frac{i}{2}\left\{ {\frac{\partial {h}_{i\bar{k}}}{\partial {z}^{j}}d{z}^{j} \land d{z}^{i} \land d{\bar{z}}^{k} - \overline{\frac{\partial {h}_{\bar{i}k}}{\partial {z}^{j}}d{z}^{j} \land d{z}^{k} \land d{\bar{z}}^{i}}}\right\} . \]
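Any metric arising from a potential, \( {h}_{i\bar{k}} = {\partial }^{2}\Phi /\partial {z}^{i}\partial {\bar{z}}^{k} \), satisfies the symmetry in (5.20), since mixed partial derivatives commute. A numerical sketch for the Fubini-Study-type potential \( \Phi = \log s \) with \( s = 1 + {\left| {z}^{1}\right| }^{2} + {\left| {z}^{2}\right| }^{2} \), whose components are \( {h}_{i\bar{k}} = {\delta }_{ik}/s - {\bar{z}}^{i}{z}^{k}/{s}^{2} \) (the example and all function names are our own):

```python
# Check the Kaehler symmetry (5.20) numerically for the Fubini-Study-type
# metric on C^2: h_{i kbar} = delta_{ik}/s - conj(z^i) z^k / s^2.
EPS = 1e-5

def s(z):
    return 1 + abs(z[0]) ** 2 + abs(z[1]) ** 2

def h(i, k):
    # component function z -> h_{i kbar}(z)
    def f(z):
        d = 1.0 if i == k else 0.0
        return d / s(z) - z[i].conjugate() * z[k] / s(z) ** 2
    return f

def d(f, j):
    # Wirtinger derivative d/dz^j = (d/dx^j - i d/dy^j)/2, central differences
    def g(z):
        zp, zm = list(z), list(z); zp[j] += EPS; zm[j] -= EPS
        fx = (f(zp) - f(zm)) / (2 * EPS)
        zp, zm = list(z), list(z); zp[j] += EPS * 1j; zm[j] -= EPS * 1j
        fy = (f(zp) - f(zm)) / (2 * EPS)
        return (fx - 1j * fy) / 2
    return g

z0 = [0.3 + 0.1j, -0.2 + 0.4j]
for k in (0, 1):
    # (5.20) with 0-based indices: d h_{1 kbar}/dz^2 = d h_{2 kbar}/dz^1
    assert abs(d(h(0, k), 1)(z0) - d(h(1, k), 0)(z0)) < 1e-6
```

So this metric is Kählerian, consistent with Theorem 5.2 and the closedness of \( \widehat{H} \) .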
1068_(GTM227)Combinatorial Commutative Algebra
Definition 2.5
Definition 2.5 Suppose that \( I = \left\langle {{f}_{1},\ldots ,{f}_{r}}\right\rangle \) . The set \( \left\{ {{f}_{1},\ldots ,{f}_{r}}\right\} \) of generators constitutes a Gröbner basis if the initial terms of \( {f}_{1},\ldots ,{f}_{r} \) generate the initial ideal of \( I \) ; that is, if \( \operatorname{in}\left( I\right) = \left\langle {\operatorname{in}\left( {f}_{1}\right) ,\ldots ,\operatorname{in}\left( {f}_{r}\right) }\right\rangle \) . Every ideal in \( S \) has a (finite) Gröbner basis for every term order, because \( \operatorname{in}\left( I\right) \) is finitely generated by Hilbert’s basis theorem. Note that there is no need to mention any ideals when we say, "The set \( \left\{ {{f}_{1},\ldots ,{f}_{r}}\right\} \) is a Gröbner basis," as the set must be a Gröbner basis for the ideal \( I = \left\langle {{f}_{1},\ldots ,{f}_{r}}\right\rangle \) it generates. On the other hand, most ideals have many different Gröbner bases for a fixed term order. This uniqueness issue can be resolved by considering a reduced Gröbner basis \( \left\{ {{f}_{1},\ldots ,{f}_{r}}\right\} \), which means that \( \operatorname{in}\left( {f}_{i}\right) \) has coefficient 1 for each \( i = 1,\ldots, r \), and that the only monomial appearing anywhere in \( \left\{ {{f}_{1},\ldots ,{f}_{r}}\right\} \) that is divisible by the initial term \( \operatorname{in}\left( {f}_{i}\right) \) is \( \operatorname{in}\left( {f}_{i}\right) \) itself; see Exercise 2.5. In the proof of the next lemma, we will use a general tool due to Weispfenning [Wei92] for establishing finiteness results in Gröbner basis theory. Suppose that \( \mathbf{y} \) is a set of variables different from \( {x}_{1},\ldots ,{x}_{n} \), and let \( J \) be an ideal in \( S\left\lbrack \mathbf{y}\right\rbrack \), which is the polynomial ring over \( \mathbb{k} \) in the variables \( \mathbf{x} \) and \( \mathbf{y} \) .
Every \( \mathbb{k} \) -algebra homomorphism \( \phi : \mathbb{k}\left\lbrack \mathbf{y}\right\rbrack \rightarrow \mathbb{k} \) determines a homomorphism \( {\phi }_{S} : S\left\lbrack \mathbf{y}\right\rbrack \rightarrow S \) that sends the \( \mathbf{y} \) variables to constants. The image \( {\phi }_{S}\left( J\right) \) is an ideal in \( S \) . Given a fixed term order \( < \) on \( S \) (not on \( S\left\lbrack \mathbf{y}\right\rbrack \) ), Weispfenning proves that \( J \) has a comprehensive Gröbner basis, meaning a finite set \( \mathcal{C} \) of polynomials \( p\left( {\mathbf{x},\mathbf{y}}\right) \in J \) such that for every homomorphism \( \phi : \mathbb{k}\left\lbrack \mathbf{y}\right\rbrack \rightarrow \mathbb{k} \), the specialized set \( {\phi }_{S}\left( \mathcal{C}\right) \) is a Gröbner basis for the specialized ideal \( {\phi }_{S}\left( J\right) \) in \( S \) with respect to the term order \( < \) . Returning to group actions on \( S \), every matrix \( g \in G{L}_{n}\left( \mathbb{k}\right) \) determines the initial monomial ideal in \( \left( {g \cdot I}\right) \) . After fixing a term order, we call two matrices \( g \) and \( {g}^{\prime } \) equivalent if \[ \operatorname{in}\left( {g \cdot I}\right) = \operatorname{in}\left( {{g}^{\prime } \cdot I}\right) . \] The resulting partition of the group \( G{L}_{n}\left( \mathbb{k}\right) \) into equivalence classes is a geometrically well-behaved stratification, as we shall now see. To explain the geometry, we need a little terminology. Let \( \mathbf{g} = \left( {g}_{ij}\right) \) be an \( n \times n \) matrix of indeterminates, so that the algebra \( \mathbb{k}\left\lbrack \mathbf{g}\right\rbrack \) consists of (some of the) polynomial functions on \( G{L}_{n}\left( \mathbb{k}\right) \) . The term Zariski closed set inside of \( G{L}_{n}\left( \mathbb{k}\right) \) or \( {\mathbb{k}}^{n} \) refers to the zero set of an ideal in \( \mathbb{k}\left\lbrack \mathbf{g}\right\rbrack \) or \( S \) . 
If \( V \) is a Zariski closed set, then a Zariski open subset of \( V \) refers to the complement of a Zariski closed subset of \( V \) . Lemma 2.6 For a fixed ideal \( I \) and term order \( < \), the number of equivalence classes in \( G{L}_{n}\left( \mathbb{k}\right) \) is finite. One of these classes is a nonempty Zariski open subset \( U \) inside of \( G{L}_{n}\left( \mathbb{k}\right) \) . Proof. Consider the polynomial ring \( S\left\lbrack {{g}_{11},\ldots ,{g}_{nn}}\right\rbrack = \mathbb{k}\left\lbrack {\mathbf{g},\mathbf{x}}\right\rbrack \) in \( {n}^{2} + n \) unknowns. Suppose that \( {p}_{1}\left( \mathbf{x}\right) ,\ldots ,{p}_{r}\left( \mathbf{x}\right) \) are generators of the given ideal \( I \) in \( S \) . Let \( J \) be the ideal generated by the elements \( \mathbf{g} \cdot {p}_{1}\left( \mathbf{x}\right) ,\ldots ,\mathbf{g} \cdot {p}_{r}\left( \mathbf{x}\right) \) in \( \mathbb{k}\left\lbrack {\mathbf{g},\mathbf{x}}\right\rbrack \), and fix a comprehensive Gröbner basis \( \mathcal{C} \) for \( J \) . The equivalence classes in \( G{L}_{n}\left( \mathbb{k}\right) \) can be read off from the coefficients of the polynomials in \( \mathcal{C} \) . These coefficients are polynomials in \( \mathbb{k}\left\lbrack \mathbf{g}\right\rbrack \) . By requiring that \( \det \left( \mathbf{g}\right) \neq 0 \) and by imposing the conditions " \( = 0 \) " and " \( \neq 0 \) " on these coefficient polynomials in all possible ways, we can read off all possible initial ideals \( \operatorname{in}\left( {g \cdot I}\right) \) . Since \( \mathcal{C} \) is finite, there are only finitely many possibilities, and hence the number of distinct ideals \( \operatorname{in}\left( {g \cdot I}\right) \) as \( g \) runs over \( G{L}_{n}\left( \mathbb{k}\right) \) is finite. 
The unique Zariski open equivalence class \( U \) can be specified by imposing the condition " \( \neq 0 \) " on all the leading coefficients of the polynomials in the comprehensive Gröbner basis \( \mathcal{C} \) . The previous lemma tells us that the next definition makes sense. Definition 2.7 Fix a term order \( < \) on \( S \) . The initial ideal \( {\operatorname{in}}_{ < }\left( {g \cdot I}\right) \) that, as a function of \( g \), is constant on a Zariski open subset \( U \) of \( G{L}_{n} \) is called the generic initial ideal of \( I \) for the term order \( < \) . It is denoted by \[ {\operatorname{gin}}_{ < }\left( I\right) = {\operatorname{in}}_{ < }\left( {g \cdot I}\right) \] Example 2.8 Let \( n = 2 \) and consider the ideal \( I = \left\langle {{x}_{1}^{2},{x}_{2}^{2}}\right\rangle \), where \( < \) is the lexicographic order with \( {x}_{1} > {x}_{2} \) . For this term order, the ideal \( J \) defined in the proof of Lemma 2.6 has the comprehensive Gröbner basis \[ \mathcal{C} = \left\{ {{g}_{11}^{2}{x}_{1}^{2} + 2{g}_{11}{g}_{12}{x}_{1}{x}_{2} + {g}_{12}^{2}{x}_{2}^{2},{g}_{21}^{2}{x}_{1}^{2} + 2{g}_{21}{g}_{22}{x}_{1}{x}_{2} + {g}_{22}^{2}{x}_{2}^{2},}\right. \] \[ 2{g}_{21}{g}_{11}\left( {{g}_{22}{g}_{11} - {g}_{21}{g}_{12}}\right) {x}_{1}{x}_{2} + \left( {{g}_{22}{g}_{11} - {g}_{21}{g}_{12}}\right) \left( {{g}_{21}{g}_{12} + {g}_{22}{g}_{11}}\right) {x}_{2}^{2}, \] \[ \left. 
{{\left( {g}_{22}{g}_{11} - {g}_{21}{g}_{12}\right) }^{3}{x}_{2}^{3}}\right\} \text{.} \] The group \( G{L}_{2}\left( \mathbb{k}\right) \) decomposes into only two equivalence classes in this case: \[ \text{-}{\operatorname{in}}_{ < }\left( {g \cdot I}\right) = \left\langle {{x}_{1}^{2},{x}_{2}^{2}}\right\rangle \text{ if }{g}_{11}{g}_{21} = 0 \] \[ \text{-}{\operatorname{in}}_{ < }\left( {g \cdot I}\right) = \left\langle {{x}_{1}^{2},{x}_{1}{x}_{2},{x}_{2}^{3}}\right\rangle \text{ if }{g}_{11}{g}_{21} \neq 0 \] The second ideal is the generic initial ideal: \( \operatorname{gin}\left( I\right) = \left\langle {{x}_{1}^{2},{x}_{1}{x}_{2},{x}_{2}^{3}}\right\rangle \) . The punch line is the result of Galligo, Bayer, and Stillman describing a general procedure to turn arbitrary ideals into Borel-fixed ideals. Theorem 2.9 The generic initial ideal \( {\operatorname{gin}}_{ < }\left( I\right) \) is Borel-fixed. Proof. We refer to Eisenbud's commutative algebra textbook, where this result appears as [Eis95, Theorem 15.20]. A complete proof is given there. \( \square \) It is important to note that the generic initial ideal \( {\operatorname{gin}}_{ < }\left( I\right) \) depends heavily on the choice of the term order \( < \) . Two extreme examples of term orders are the purely lexicographic term order, denoted \( { < }_{\mathrm{{lex}}} \), and the reverse lexicographic term order, denoted \( { < }_{\text{revlex }} \) . For two monomials \( {\mathbf{x}}^{\mathbf{a}} \) and \( {\mathbf{x}}^{\mathbf{b}} \) of the same degree, we have \( {\mathbf{x}}^{\mathbf{a}}{ > }_{\text{lex }}{\mathbf{x}}^{\mathbf{b}} \) if the leftmost nonzero entry of the vector \( \mathbf{a} - \mathbf{b} \) is positive, whereas \( {\mathbf{x}}^{\mathbf{a}}{ > }_{\text{revlex }}{\mathbf{x}}^{\mathbf{b}} \) if the rightmost nonzero entry of the vector \( \mathbf{a} - \mathbf{b} \) is negative.
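Both comparisons are mechanical on exponent vectors; a small Python sketch (the function names are our own):

```python
def lex_greater(a, b):
    # x^a >_lex x^b : the leftmost nonzero entry of a - b is positive
    for ai, bi in zip(a, b):
        if ai != bi:
            return ai > bi
    return False

def revlex_greater(a, b):
    # for monomials of the same degree, x^a >_revlex x^b :
    # the rightmost nonzero entry of a - b is negative
    for ai, bi in zip(reversed(a), reversed(b)):
        if ai != bi:
            return ai < bi
    return False

# in k[x1, x2, x3, x4]: compare x1*x3 and x2^2, both of degree 2
a, b = (1, 0, 1, 0), (0, 2, 0, 0)
assert lex_greater(a, b)        # x1*x3 >_lex    x2^2
assert revlex_greater(b, a)     # x2^2  >_revlex x1*x3
```

The example shows how the two orders can disagree even on monomials of the same degree, which is why the two generic initial ideals listed next differ.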
Example 2.10 Let \( f, g \in \mathbb{k}\left\lbrack {{x}_{1},{x}_{2},{x}_{3},{x}_{4}}\right\rbrack \) be generic forms of degrees \( d \) and \( e \), respectively. Considering the three smallest nontrivial cases, we list the generic initial ideal of \( I = \langle f, g\rangle \) for both the lexicographic order and the reverse lexicographic order. The ideals \( J = {\operatorname{gin}}_{\text{lex }}\left( I\right) \) are: \[ \left( {d, e}\right) = \left( {2,2}\right) \;J = \left\langle {{x}_{2}^{4},{x}_{1}{x}_{3}^{2},{x}_{1}{x}_{2},{x}_{1}^{2}}\right\rangle \] \[ = \left\langle {{x}_{1},{x}_{2}^{4}}\right\rangle \cap \left\langle {{x}_{1}^{2},{x}_{2},{x}_{3}^{2}}\right\rangle \] \[ \left( {d, e}\right) = \left( {2,3}\right) \;J = \left\langle {{x}_{2}^{6},{x}_{1}{x}_{3}^{6},{x}_{1}{x}_{2}{x}_{4}^{4},{x}_{1}{x}_{2}{x}_{3}{x}_{4}^{2},{x}_{1}{x}_{2}{x}_{3}^{2},{x}_{1}{x}_{2}^{2},{x}_{1}^{2}}\right\rangle \] \[ = \langle {x}_{1},{x}_{2}^{6}\rangle \cap \langle {x}_{1}^{2},{x}_{2},{x}_{3}^{6
106_106_The Cantor function
Definition 1.6
Definition 1.6. The (reduced) first-order algebra \( P\left( {V,\mathcal{R}}\right) \) on \( \left( {V,\mathcal{R}}\right) \) is the factor algebra of \( \widetilde{P}\left( {V,\mathcal{R}}\right) \) by the congruence relation \( \approx \) . The elements of \( P = P\left( {V,\mathcal{R}}\right) \) are the congruence classes. If \( w \in \widetilde{P} \) and \( \left\lbrack w\right\rbrack \) is the congruence class of \( w \), then \[ \left( {\forall x}\right) \left\lbrack w\right\rbrack = \left\lbrack {\left( {\forall x}\right) w}\right\rbrack \] and \[ \left\lbrack {w}_{1}\right\rbrack \Rightarrow \left\lbrack {w}_{2}\right\rbrack = \left\lbrack {{w}_{1} \Rightarrow {w}_{2}}\right\rbrack \] Definition 1.7. Let \( w \in P \) . We define the set \( \operatorname{var}\left( w\right) \) of (free) variables of \( w \) by putting \( \operatorname{var}\left( w\right) = \operatorname{var}\left( \widetilde{w}\right) \), where \( \widetilde{w} \in \widetilde{P} \) is some representative of the congruence class \( w \), and where \( \operatorname{var}\left( \widetilde{w}\right) \) is defined inductively by (i) \( \operatorname{var}\left( F\right) = \varnothing \) , (ii) \( \operatorname{var}\left( {r\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) = \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) for \( r \in \mathcal{R},{x}_{1},\ldots ,{x}_{n} \in V \) , (iii) \( \operatorname{var}\left( {{\widetilde{w}}_{1} \Rightarrow {\widetilde{w}}_{2}}\right) = \operatorname{var}\left( {\widetilde{w}}_{1}\right) \cup \operatorname{var}\left( {\widetilde{w}}_{2}\right) \) , (iv) \( \operatorname{var}\left( {\left( {\forall x}\right) \widetilde{w}}\right) = \operatorname{var}\left( \widetilde{w}\right) - \{ x\} \) . Definition 1.8. Let \( A \subseteq P \) . Put \[ \operatorname{var}\left( A\right) = \mathop{\bigcup }\limits_{{p \in A}}\operatorname{var}\left( p\right) \] ## Exercises 1.9. 
Show that if \( {\widetilde{w}}_{1} \approx {\widetilde{w}}_{2} \), then \( \operatorname{var}\left( {\widetilde{w}}_{1}\right) = \operatorname{var}\left( {\widetilde{w}}_{2}\right) \), and conclude that \( \operatorname{var}\left( w\right) \) is defined for \( w \in P \) . 1.10. Show that for any \( w \in P \), there is a representative \( \widetilde{w} \) of \( w \) such that no variable \( x \in V \) appears in \( \widetilde{w} \) more than once in a quantifier \( \left( {\forall x}\right) \) , and no \( x \in \operatorname{var}\left( w\right) \) appears at all in a quantifier (i.e., \( \widetilde{w} \) has no repeated dummy variables, and no free variables also appear as dummies). We assume henceforth that any \( w \in P \) is represented by a \( \widetilde{w} \in \widetilde{P} \) having the form described in Exercise 1.10. We shall also usually abuse notation and not distinguish between \( p \in \widetilde{P} \) and \( \left\lbrack p\right\rbrack \in P \) . ## §2 Interpretations We want to think of the elements of \( V \) as names of objects, and the elements of \( \mathcal{R} \) as relations among those objects. If we take a non-empty set \( U \) , and a function \( \varphi : V \rightarrow U \), then we can think of \( x \in V \) as a name for the element \( \varphi \left( x\right) \in U \) . Of course, not every element \( u \in U \) need have a name, while some elements \( u \) may well have more than one name. Next we take a function \( \psi \) , from \( \mathcal{R} \) into the set of all relations on \( U \), such that if \( r \in {\mathcal{R}}_{n} \), then \( \psi \left( r\right) \) is an \( n \) -ary relation. It will be convenient to write simply \( {\varphi x} \) for \( \varphi \left( x\right) \), and \( {\psi r} \) for \( \psi \left( r\right) \) . As for valuations, these again should be functions \( v : P \rightarrow {\mathbb{Z}}_{2} \) which will correspond to our intuitive notion of truth. 
Since our interpretation of the element \( r\left( {{x}_{1},\ldots ,{x}_{n}}\right) \in P \) in terms of \( U,\varphi ,\psi \) must obviously be the statement that \( \left( {\varphi {x}_{1},\ldots ,\varphi {x}_{n}}\right) \in {\psi r} \), we shall require of \( v \) that (a) if \( r \in {\mathcal{R}}_{n} \) and \( {x}_{1},\ldots ,{x}_{n} \in V \), then \( v\left( {r\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) = 1 \) if \( \left( {\varphi {x}_{1},\ldots ,\varphi {x}_{n}}\right) \in {\psi r} \), and is 0 otherwise, while we still require that (b) \( v \) is a homomorphism of \( \{ F, \Rightarrow \} \) -algebras. It remains for us to define truth for a proposition of the form \( \left( {\forall x}\right) p\left( x\right) \) in terms of our understanding of it for \( p\left( x\right) \), and so we use an induction over the depth of quantification. Let \( {P}_{k}\left( {V,\mathcal{R}}\right) \) be the set of all elements \( p \) of \( P\left( {V,\mathcal{R}}\right) \) with \( d\left( p\right) \leq k \) . If we take some new variable \( t \), then intuitively, we consider \( \left( {\forall x}\right) p\left( x\right) \left( { = \left( {\forall t}\right) p\left( t\right) }\right) \) to be true if \( p\left( t\right) \) is true no matter how we choose to interpret \( t \) . This leads to a further requirement for \( v \), namely: \( \left( {c}_{k}\right) \) Suppose \( p = \left( {\forall x}\right) q\left( x\right) \) has depth \( k \) . Put \( {V}^{\prime } = V \cup \{ t\} \) where \( t \notin V \) .
If for every extension \( {\varphi }^{\prime } : {V}^{\prime } \rightarrow U \) of \( \varphi \) and for every \( {v}_{k - 1}^{\prime } : {P}_{k - 1}\left( {{V}^{\prime },\mathcal{R}}\right) \rightarrow {\mathbb{Z}}_{2} \) such that \( \left( {{\varphi }^{\prime },\psi ,{v}_{k - 1}^{\prime }}\right) \) satisfy (a),(b) and \( \left( {c}_{i}\right) \) for all \( i < k \), we have \( {v}_{k - 1}^{\prime }\left( {q\left( t\right) }\right) = 1 \), then \( v\left( p\right) = 1 \), otherwise \( v\left( p\right) = 0 \) . Exercise 2.1. Given \( U,\varphi ,\psi \), prove that there is one and only one function \( v : P \rightarrow {\mathbb{Z}}_{2} \) satisfying (a),(b) and \( \left( {c}_{i}\right) \) for all \( i \) . Briefly, the above exposition of the components of an interpretation of \( P\left( {V,\mathcal{R}}\right) \) can be expressed as follows. Definition 2.2. An interpretation of \( P = P\left( {V,\mathcal{R}}\right) \) in the domain \( U \) is a quadruple \( \left( {U,\varphi ,\psi, v}\right) \) satisfying the conditions (a),(b) and \( \left( {c}_{k}\right) \) for all \( k \) . As before, we write \( A \vDash p \) if \( A \subseteq P, p \in P \) and \( v\left( p\right) = 1 \) for every interpretation of \( P \) for which \( v\left( A\right) \subseteq \{ 1\} \) . We denote by \( \operatorname{Con}\left( A\right) \) the set of all \( p \) such that \( A \vDash p \) . We write \( \vDash p \) for \( \varnothing \vDash p \), and any \( p \) for which \( \vDash p \) is called valid or a tautology. ## Exercises 2.3. Let \( w\left( {{u}_{1},\ldots ,{u}_{n}}\right) \) be any tautology of \( \operatorname{Prop}\left( \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \right) \) . Let \( {p}_{1},\ldots ,{p}_{n} \in P\left( {V,\mathcal{R}}\right) \) . Prove that \( \vDash w\left( {{p}_{1},\ldots ,{p}_{n}}\right) \) . 2.4. Suppose \( A \subseteq P\left( {V,\mathcal{R}}\right) \) and \( p\left( x\right) \in A \) for all \( x \in V \) .
Does it follow that \( A \vDash \left( {\forall x}\right) p\left( x\right) ? \) ## §3 Proof in \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) To complete the construction of the logic called the First-Order Predicate Calculus on \( \left( {V,\mathcal{R}}\right) \), and henceforth denoted by \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \), we have to define a proof in \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) . Definition 3.1. The set of axioms of \( \operatorname{Pred}\left( {V,\mathcal{R}}\right) \) is the set \( \mathcal{A} = \) \( {\mathcal{A}}_{1} \cup \cdots \cup {\mathcal{A}}_{5} \), where \[ {\mathcal{A}}_{1} = \{ p \Rightarrow \left( {q \Rightarrow p}\right) \mid p, q \in P\left( {V,\mathcal{R}}\right) \} \] \[ {\mathcal{A}}_{2} = \{ \left( {p \Rightarrow \left( {q \Rightarrow r}\right) }\right) \Rightarrow \left( {\left( {p \Rightarrow q}\right) \Rightarrow \left( {p \Rightarrow r}\right) }\right) \mid p, q, r \in P\left( {V,\mathcal{R}}\right) \} , \] \[ {\mathcal{A}}_{3} = \{ \sim \sim p \Rightarrow p \mid p \in P\left( {V,\mathcal{R}}\right) \} \] \[ {\mathcal{A}}_{4} = \{ \left( {\forall x}\right) \left( {p \Rightarrow q}\right) \Rightarrow \left( {p \Rightarrow \left( {\left( {\forall x}\right) q}\right) }\right) \mid p, q \in P\left( {V,\mathcal{R}}\right), x \notin \operatorname{var}\left( p\right) \} , \] \[ {\mathcal{A}}_{5} = \{ \left( {\forall x}\right) p\left( x\right) \Rightarrow p\left( y\right) \mid p\left( x\right) \in P\left( {V,\mathcal{R}}\right), y \in V\} . \] We remind the reader that these axioms are stated in terms of elements of the reduced predicate algebra. In \( {\mathcal{A}}_{5} \), for example, the substitution of \( y \) for \( x \) in \( p\left( x\right) \) implies that we have chosen a representative of \( \left\lbrack {\left( {\forall x}\right) p\left( x\right) }\right\rbrack \) in which \( \left( {\forall y}\right) \) does not appear. 
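Before working with proofs, note that on a finite domain \( U \) the semantic conditions (a), (b), \( \left( {c}_{k}\right) \) of §2, together with the free-variable recursion of Definition 1.7, can be evaluated mechanically: "for every extension \( {\varphi }^{\prime } \)" becomes a loop over \( U \) . A minimal Python sketch; the tuple encoding of formulas and all names are our own, and we rebind the quantified variable directly instead of introducing a fresh \( t \):

```python
# Formulas as nested tuples (our own encoding):
#   ('F',)                   -- the false proposition F
#   ('rel', r, x1, ..., xn)  -- r(x1, ..., xn)
#   ('imp', w1, w2)          -- w1 => w2
#   ('all', x, w)            -- (forall x) w

def var(w):
    # free variables, following Definition 1.7 (i)-(iv)
    tag = w[0]
    if tag == 'F':
        return set()
    if tag == 'rel':
        return set(w[2:])
    if tag == 'imp':
        return var(w[1]) | var(w[2])
    if tag == 'all':
        return var(w[2]) - {w[1]}

def v(w, U, phi, psi):
    # truth value in the interpretation (U, phi, psi): conditions (a), (b), (c_k)
    tag = w[0]
    if tag == 'F':
        return 0
    if tag == 'rel':
        return 1 if tuple(phi[x] for x in w[2:]) in psi[w[1]] else 0
    if tag == 'imp':
        return 0 if v(w[1], U, phi, psi) == 1 and v(w[2], U, phi, psi) == 0 else 1
    if tag == 'all':
        return 1 if all(v(w[2], U, {**phi, w[1]: u}, psi) for u in U) else 0

# (forall x)(r(x) => s(x)) over U = {0, 1, 2}, with psi(r) contained in psi(s)
w = ('all', 'x', ('imp', ('rel', 'r', 'x'), ('rel', 's', 'x')))
U = {0, 1, 2}
psi = {'r': {(0,), (1,)}, 's': {(0,), (1,), (2,)}}
assert var(w) == set()
assert v(w, U, {}, psi) == 1
```

For infinite domains this brute-force loop is unavailable, which is exactly why the deductive apparatus of this section is needed.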
In addition to Modus Ponens, we shall use one further rule of inference, which will enable us to formalise the following commonly occurring argument: we have proved \( p\left( x\right) \), but \( x \) was any element, and therefore \( \left( {\forall x}\right) p\left( x\right) \) . The rule of inference called Generalisation allows us to deduce \( \left( {\forall x}\right) p\left( x\right) \) from \( p\left( x\right) \) provided \( x \) is general. The restriction on the use of Generalisation needs to be stated carefully. Definition 3.2 Let \( A \subseteq P, p \in P \) . A proof of length \( n \) of \( p \) from \( A \) is a sequence \( {p}_{1},\ldots ,{p}_{n} \) of \( n \) elements of \( P \) such that \( {p
1069_(GTM228)A First Course in Modular Forms
Definition 9.6.1
Definition 9.6.1. An irreducible Galois representation \[ \rho : {G}_{\mathbb{Q}} \rightarrow {\mathrm{{GL}}}_{2}\left( {\mathbb{Q}}_{\ell }\right) \] such that \( \det \rho = {\chi }_{\ell } \) is modular if there exists a newform \( f \in {\mathcal{S}}_{2}\left( {{\Gamma }_{0}\left( {M}_{f}\right) }\right) \) such that \( {\mathbb{K}}_{f,\lambda } = {\mathbb{Q}}_{\ell } \) for some maximal ideal \( \lambda \) of \( {\mathcal{O}}_{{\mathbb{K}}_{f}} \) lying over \( \ell \) and such that \( {\rho }_{f,\lambda } \sim \rho \) . In particular, if \( E \) is an elliptic curve over \( \mathbb{Q} \) and \( \ell \) is prime then the Galois representation \( {\rho }_{E,\ell } \) is a candidate to be modular since it is irreducible and its determinant is \( {\chi }_{\ell } \) by the proof of Theorem 9.4.1. Theorem 9.6.2 (Modularity Theorem, Version \( R \) ). Let \( E \) be an elliptic curve over \( \mathbb{Q} \) . Then \( {\rho }_{E,\ell } \) is modular for some \( \ell \) . This is the version that was proved, for semistable curves in [Wil95] and [TW95] and then for all curves in [BCDT01]. We will explain how Version \( R \) of Modularity leads to Version \( {a}_{p} \) from Chapter 8, which in turn implies a stronger Version \( R \) , Theorem 9.6.3 (Modularity Theorem, strong Version \( R \) ). Let \( E \) be an elliptic curve over \( \mathbb{Q} \) with conductor \( N \) . Then for some newform \( f \in \) \( {\mathcal{S}}_{2}\left( {{\Gamma }_{0}\left( N\right) }\right) \) with number field \( {\mathbb{K}}_{f} = \mathbb{Q} \) , \[ {\rho }_{f,\ell } \sim {\rho }_{E,\ell }\;\text{ for all }\ell \] Given Version \( R \) of Modularity, let \( E \) be an elliptic curve over \( \mathbb{Q} \) with conductor \( N \) . 
Then there exists a newform \( f \in {\mathcal{S}}_{2}\left( {{\Gamma }_{0}\left( {M}_{f}\right) }\right) \) as in Definition 9.6.1, so \( {\rho }_{f,\lambda } \sim {\rho }_{E,\ell } \) for some suitable maximal ideal \( \lambda \) of its number field \( {\mathbb{K}}_{f} \) . Thus \( {\rho }_{E,\ell }\left( {\operatorname{Frob}}_{\mathfrak{p}}\right) \) satisfies the polynomial \( {x}^{2} - {a}_{p}\left( f\right) x + p \) for any absolute Frobenius element \( {\operatorname{Frob}}_{\mathfrak{p}} \) where \( \mathfrak{p} \) lies over any prime \( p \nmid \ell {M}_{f} \), since \( {\rho }_{f,\lambda } \) does. But the characteristic polynomial of \( {\rho }_{E,\ell }\left( {\operatorname{Frob}}_{\mathfrak{p}}\right) \) for any \( {\operatorname{Frob}}_{\mathfrak{p}} \) where \( p \nmid \ell N \) is \( {x}^{2} - {a}_{p}\left( E\right) x + p \) . Therefore \( {a}_{p}\left( f\right) = {a}_{p}\left( E\right) \) for all but finitely many \( p \) . The work of Carayol mentioned at the end of Chapter 8 shows equality for all \( p \) and shows that \( {M}_{f} = N \) . This is Version \( {a}_{p} \) . On the other hand, given Version \( {a}_{p} \) of Modularity, again let \( E \) be an elliptic curve over \( \mathbb{Q} \) with conductor \( N \) . There exists a newform \( f \in {\mathcal{S}}_{2}\left( {{\Gamma }_{0}\left( N\right) }\right) \) such that \( {a}_{p}\left( f\right) = {a}_{p}\left( E\right) \) for all \( p \) . Since \( {a}_{p}\left( f\right) \in \mathbb{Z} \) for all \( p \), the number field of \( f \) is \( \mathbb{Q} \) and the Abelian variety \( {A}_{f} \) is an elliptic curve. Consider the representations \( {\rho }_{f,\ell } = {\rho }_{{A}_{f},\ell } \) and \( {\rho }_{E,\ell } \) for any \( \ell \) . 
The respective characteristic polynomials of \( {\rho }_{f,\ell }\left( {\operatorname{Frob}}_{\mathfrak{p}}\right) \) and \( {\rho }_{E,\ell }\left( {\operatorname{Frob}}_{\mathfrak{p}}\right) \) are \( {x}^{2} - {a}_{p}\left( f\right) x + p \) and \( {x}^{2} - {a}_{p}\left( E\right) x + p \) for all but finitely many \( p \) . Thus the characteristic polynomials are equal on a dense subset of \( {G}_{\mathbb{Q}} \), and since trace and determinant are continuous this makes the characteristic polynomials equal. Consequently the representations are equivalent (Exercise 9.6.1). Since \( \ell \) is arbitrary, this is the strong Version \( R \) . The reasoning from Version \( R \) of Modularity to Version \( {a}_{p} \) and then back to the strong Version \( R \) (or see Exercise 9.6.2) proves Proposition 9.6.4. Let \( E \) be an elliptic curve over \( \mathbb{Q} \) . If \( {\rho }_{E,\ell } \) is modular for some \( \ell \), then \( {\rho }_{E,\ell } \) is modular for all \( \ell \) . Theorem 9.5.4 generalizes to a result that Galois representations are associated to modular forms of weights other than 2. Theorem 9.6.5. Let \( f \in {\mathcal{S}}_{k}\left( {N,\chi }\right) \) be a normalized eigenform with number field \( {\mathbb{K}}_{f} \) . Let \( \ell \) be prime. For each maximal ideal \( \lambda \) of \( {\mathcal{O}}_{{\mathbb{K}}_{f}} \) lying over \( \ell \) there is an irreducible 2-dimensional Galois representation \[ {\rho }_{f,\lambda } : {G}_{\mathbb{Q}} \rightarrow {\mathrm{{GL}}}_{2}\left( {\mathbb{K}}_{f,\lambda }\right) \] This representation is unramified at all primes \( p \nmid \ell N \) . For any \( \mathfrak{p} \subset \overline{\mathbb{Z}} \) lying over such \( p \), the characteristic equation of \( {\rho }_{f,\lambda }\left( {\mathrm{{Frob}}}_{\mathfrak{p}}\right) \) is \[ {x}^{2} - {a}_{p}\left( f\right) x + \chi \left( p\right) {p}^{k - 1} = 0. 
\] Similarly to the remark after Theorem 9.4.1, note that the characteristic equation is independent of \( \lambda \) . The set \( \left\{ {\rho }_{f,\lambda }\right\} \) (for fixed \( f \) and varying \( \lambda \) ) forms what is called a compatible system of \( \ell \) -adic Galois representations. The characteristic equation shows that \[ \det {\rho }_{f,\lambda } = \chi {\chi }_{\ell }^{k - 1} \] where the Dirichlet character \( \chi \) is being identified with the Galois representation \( {\rho }_{\chi } \) from Section 9.3 (Exercise 9.6.3). It follows that the representation \( {\rho }_{f,\lambda } \) satisfies \[ \det {\rho }_{f,\lambda }\left( \operatorname{conj}\right) = - 1 \] where, as before, conj denotes complex conjugation. Indeed, \( \chi \left( \operatorname{conj}\right) = \chi \left( {-1}\right) \) as noted in Section 9.3, and necessarily \( \chi \left( {-1}\right) = {\left( -1\right) }^{k} \) for \( {\mathcal{S}}_{k}\left( {N,\chi }\right) \) to be nontrivial. Since \( {\chi }_{\ell }\left( \operatorname{conj}\right) = - 1 \) from Section 9.3 as well, the result follows. In general, a Galois representation \( \rho \) such that \( \det \rho \left( \operatorname{conj}\right) = - 1 \) is called \( {odd} \) . Section 9.5 constructed \( {\rho }_{f,\lambda } \) when \( k = 2 \) but didn’t prove that \( {\rho }_{f,\lambda } \) is irreducible or that the equation is the characteristic equation. The construction for \( k > 2 \), due to Deligne [Del71], is similar but uses more sophisticated machinery. The dual of \( {\mathcal{S}}_{k}\left( {{\Gamma }_{1}\left( N\right) }\right) \) contains a lattice \( L \) that is stable under the action of \( {\mathbb{T}}_{\mathbb{Z}} \) and such that \( L{ \otimes }_{\mathbb{Q}}{\mathbb{Q}}_{\ell } \) admits a compatible action of \( {G}_{\mathbb{Q}} \) . To define the Galois action and generalize the Eichler-Shimura Relation, Deligne used étale cohomology. 
The construction for \( k = 1 \), due to Deligne and Serre [DS74], is different. It uses congruences between \( f \) and modular forms of higher weight (Exercise 9.6.4 will illustrate how to produce such congruences), and it produces a single representation with finite image, \[ {\rho }_{f} : {G}_{\mathbb{Q}} \rightarrow {\mathrm{{GL}}}_{2}\left( {\mathbb{K}}_{f}\right) \] that gives rise to all the \( {\rho }_{f,\lambda } \) . By embedding \( {\mathbb{K}}_{f} \) in \( \mathbb{C} \) we can view \( {\rho }_{f} \) as a complex representation \( {G}_{\mathbb{Q}} \rightarrow {\mathrm{{GL}}}_{2}\left( \mathbb{C}\right) \), a phenomenon unique to \( k = 1 \) . For example the representation \( \rho \) from the end of Section 9.1 is \( {\rho }_{{\theta }_{\chi }} \) . Theorem 9.5.4 also generalizes to Eisenstein series and reducible representations. Recall the Eisenstein series \( {E}_{k}^{\psi ,\varphi } \in {\mathcal{E}}_{k}\left( {N,\chi }\right) \) from Chapter 4, where \( \psi \) and \( \varphi \) are primitive Dirichlet characters modulo \( u \) and \( v \) where \( {uv} \mid N \) and \( {\psi \varphi } = \chi \) at level \( N \) . The Fourier coefficients given in Theorem 4.5.1 for \( n \geq 1 \) are \[ {a}_{n}\left( {E}_{k}^{\psi ,\varphi }\right) = 2{\sigma }_{k - 1}^{\psi ,\varphi }\left( n\right) ,\text{ where }{\sigma }_{k - 1}^{\psi ,\varphi }\left( n\right) = \mathop{\sum }\limits_{\substack{{m \mid n} \\ {m > 0} }}\psi \left( {n/m}\right) \varphi \left( m\right) {m}^{k - 1}. \] Recall also the Eisenstein series \[ {E}_{k}^{\psi ,\varphi, t}\left( \tau \right) = \left\{ \begin{array}{ll} {E}_{k}^{\psi ,\varphi }\left( {t\tau }\right) & \text{ unless }k = 2,\psi = \varphi = \mathbf{1}, \\ {E}_{2}^{\mathbf{1},\mathbf{1}}\left( \tau \right) - t{E}_{2}^{\mathbf{1},\mathbf{1}}\left( {t\tau }\right) & \text{ if }k = 2,\psi = \varphi = \mathbf{1}. \end{array}\right. \] Here \( t \) is a positive integer and \( {tuv} \mid N \) . 
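The generalized power sum \( {\sigma }_{k - 1}^{\psi ,\varphi }\left( n\right) \) in the coefficient formula above is directly computable once the characters are given as functions. A sketch, where the character `chi4` (the nontrivial character mod 4) is merely an illustrative choice:

```python
def sigma(n, k, psi, phi):
    """sigma_{k-1}^{psi,phi}(n): sum over positive divisors m of n
    of psi(n//m) * phi(m) * m**(k-1)."""
    return sum(psi(n // m) * phi(m) * m ** (k - 1)
               for m in range(1, n + 1) if n % m == 0)

triv = lambda n: 1                     # trivial character mod 1

# Nontrivial Dirichlet character mod 4: 1, 0, -1, 0 on 1, 2, 3, 0 (mod 4).
chi4 = lambda n: [0, 1, 0, -1][n % 4]

# With psi = phi = 1 this is the ordinary divisor power sum sigma_{k-1}(n):
print(sigma(6, 4, triv, triv))   # 1 + 8 + 27 + 216 = 252
print(sigma(5, 2, triv, chi4))   # chi4(1)*1 + chi4(5)*5 = 6
```

The Fourier coefficient of \( {E}_{k}^{\psi ,\varphi } \) is then twice this value, per Theorem 4.5.1.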
These form a basis of \( {\mathcal{E}}_{k}\left( {N,\chi }\right) \) as the triples \( \left( {\psi ,\varphi, t}\right) \) run through the elements of a set \( {A}_{N, k} \) such that \( {\psi \varphi } = \chi \) , cf. Theorem 4.5.2, Theorem 4.6.2, and Theorem 4.8.1. By Proposition 5.2.3, \( {E}_{k}^{\psi ,\varphi, t} \) for such a triple is an eigenform if \( {uv} = N \) or if \( k = 2 \) and \( \psi = \varphi = \mathbf{1} \) and \( t \) is prime and \( N \) is a power of \( t \) . Theorem 9.6.6. Let \( f = \frac{1}{2}{E}_{k}^{\psi ,\varphi, t} \in {\mathcal{E}}_{k}\left( {N,\chi }\right) \) where \( {E}_{k}^{\psi ,\varphi, t} \) is an eigenform as just described. Let \( \ell \) be
1139_(GTM44)Elementary Algebraic Geometry
Definition 5.9
Definition 5.9. Let \( T \) be a topological space, and let \( P \) be an abstract point not in \( T \) . The one-point compactification \( {T}^{ * } \) of \( T \) is the space described as follows: (5.9.1) The underlying set of \( {T}^{ * } \) is \( T \cup \{ P\} \) ; (5.9.2) A basis for the open sets is given by: (a) the open sets of \( T \) ; (b) subsets \( U \) of \( T \cup \{ P\} \) such that \( \left( {T\cup \{ P\} }\right) \smallsetminus U \) is a closed compact set of \( T \) . EXAMPLE 5.10 (5.10.1) The one-point compactification \( {\mathbb{R}}^{ * } \) of \( \mathbb{R} \) (with the usual topology) is a real circle (that is, the topological space \( {\mathbb{P}}^{1}\left( \mathbb{R}\right) \) ). (5.10.2) \( {\left( {\mathbb{R}}^{2}\right) }^{ * } = \) sphere. (5.10.3) The one-point compactification of a sphere with finitely many points \( {P}_{1},\ldots ,{P}_{n} \) missing is the sphere with \( {P}_{1},\ldots ,{P}_{n} \) all identified to one point. (5.10.4) The one-point compactification of a compact set \( T \) is \( T \) together with an extra closed, isolated point. Lemma 5.11. Let \( T \) be a compact Hausdorff space, and let \( P \) be any point of \( T \) . Then \[ {\left( T\smallsetminus \{ P\} \right) }^{ * } = T \] The proof is a straightforward exercise and is left to the reader. One can now see the following: If \( C \) is any curve in \( {\mathbb{P}}^{2}\left( \mathbb{C}\right) \) and if \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) is any subspace of \( {\mathbb{P}}^{2}\left( \mathbb{C}\right) \), then \( C \) is either a near \( s \) -sheeted covering of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \), or the one-point compactification of such a covering. Remark 5.12. 
If we dehomogenize \( {\mathbb{P}}^{2}\left( \mathbb{C}\right) \) at any 1-subspace through \( {P}_{\infty } \) and choose linear coordinates \( X, Y \) in the resulting affine space, then the part of \( C \) in this \( {\mathbb{C}}_{XY} \) is \( \mathbf{V}\left( p\right) \), for some \( p\left( {X, Y}\right) \in \mathbb{C}\left\lbrack {X, Y}\right\rbrack \) . If \( {\deg }_{Y}p = n \), then \( \left( {C,{\mathbb{C}}_{X},{\pi }_{Y}}\right) \) is a near \( s \) -sheeted cover, where \( s \leq n \) . If \( {\deg }_{Y}p = 0 \), then \( p\left( {X, Y}\right) \) is in \( \mathbb{C}\left\lbrack X\right\rbrack \), and \( C \) is simply the completion of finitely many parallel lines \( X = \) a constant in \( {\mathbb{C}}_{XY} \) . Now if we are given any such representation of \( C \) as a near cover, and if we know the nature of \( C \) about each of the finitely many exceptional (discriminant) points, then in practice it is fairly easy to determine the topology of the whole curve. We now illustrate this with a few specific examples; in Section II,10 we use sphere coverings to obtain a more general result, the topological nature of an important class of curves. Example 5.13. We first reconsider from this new viewpoint the circle \( C \subset {\mathbb{P}}^{2}\left( \mathbb{C}\right) \) defined by \( \mathrm{V}\left( {{X}^{2} + {Y}^{2} - {Z}^{2}}\right) \subset {\mathbb{C}}_{XYZ} \) . Let \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) and \( {P}_{\infty } \) be represented in \( {\mathbb{C}}_{XYZ} \) by \( {\mathbb{C}}_{XZ} \) and \( {\mathbb{C}}_{Y} \), respectively. (Hence relative to the affine part \( {\mathbb{C}}_{XY},{\mathbb{P}}^{1}\left( \mathbb{C}\right) \) contains \( {\mathbb{C}}_{X} \subset {\mathbb{C}}_{XY} \) and \( {P}_{\infty } \) completes \( {\mathbb{C}}_{Y} \) .) Now \( {X}^{2} + {Y}^{2} - {Z}^{2} \) evaluated at \( \left( {0,1,0}\right) \in {\mathbb{C}}_{Y} \subset {\mathbb{C}}_{XYZ} \) is nonzero, so \( {P}_{\infty } \notin C \) . 
And by looking at affine representatives of \( C \) in dehomogenizations at \( Z \) and \( X \), we see that \( C \) is a near 2-covering of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) . There are two exceptional points of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) above which there are fewer than two points of \( C \smallsetminus \left\{ {P}_{\infty }\right\} \) ; these are both in \( {\mathbb{C}}_{X} \), at \( X = \pm 1 \) . What is the nature of \( C \) above each of these two points? Let us first expand \( {X}^{2} + {Y}^{2} - 1 \) about the point \( X = 1, Y = 0 \), or, what is the same, set \( {X}^{\prime } = X - 1 \) and \( {Y}^{\prime } = Y \) and expand about \( {X}^{\prime } = 0,{Y}^{\prime } = 0 \) . This gives \( {\left( {X}^{\prime } + 1\right) }^{2} + {\left( {Y}^{\prime }\right) }^{2} - 1 = 0 \), or \[ {\left( {Y}^{\prime }\right) }^{2} = - {X}^{\prime }\left( {2 + {X}^{\prime }}\right) . \] What is the effect of going once around a small circle in \( {\mathbb{C}}_{{X}^{\prime }} \) centered at \( {X}^{\prime } = 0 \) ? Set \( {X}^{\prime } = r{e}^{i\theta } \), \( r \) small. Then \[ {Y}^{\prime } = \pm \sqrt{r}{e}^{{i\theta }/2}{\left( 2 + r{e}^{i\theta }\right) }^{1/2} \] as \( \theta \) increases from 0 to \( 2\pi \), \( {e}^{{i\theta }/2} \) changes from +1 to -1, while for \( r \) sufficiently small, the factor \( {\left( 2 + r{e}^{i\theta }\right) }^{1/2} \) returns to its original value. Hence one circuit about a circle of small radius \( r \) cannot lead us from one zero of \( {\left( {Y}^{\prime }\right) }^{2} + r\left( {2 + r}\right) \) (when \( \theta = 0 \) ) back to itself, so one circuit must lead to a different zero. However two circuits obviously do lead back to the original zero. Thus the part of \( C \) near \( \left( {{X}^{\prime },{Y}^{\prime }}\right) = \left( {0,0}\right) \) behaves like \( {\left( {Y}^{\prime }\right) }^{2} = - 2{X}^{\prime } \), and one gets a 2-ramp about \( \left( {X, Y}\right) = \left( {1,0}\right) \) . 
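The interchange of the two sheets can also be checked numerically: analytically continue one root of \( {\left( {Y}^{\prime }\right) }^{2} = - {X}^{\prime }\left( {2 + {X}^{\prime }}\right) \) as \( {X}^{\prime } = r{e}^{i\theta } \) runs once around the circle, and it lands on the negative of its starting value. A sketch; the radius \( r = 0.1 \) and step count are arbitrary choices:

```python
import cmath

def track_branch(r=0.1, steps=2000):
    """Continue one root of (Y')^2 = -X'(2+X') along X' = r*exp(i*theta),
    theta from 0 to 2*pi, choosing at each step the square root closer
    to the previous value (so the branch varies continuously)."""
    x = r
    y = cmath.sqrt(-x * (2 + x))          # starting branch at theta = 0
    y0 = y
    for n in range(1, steps + 1):
        x = r * cmath.exp(2j * cmath.pi * n / steps)
        root = cmath.sqrt(-x * (2 + x))
        y = root if abs(root - y) < abs(-root - y) else -root
    return y0, y

y0, y1 = track_branch()
print(abs(y1 + y0))   # ~0: one circuit lands on the OTHER root
```

Running the loop twice (4000 steps over \( 0 \leq \theta \leq 4\pi \)) returns the branch to its starting value, matching the two-circuit observation in the text.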
Similarly, there is another such 2-ramp about \( \left( {X, Y}\right) = \left( {-1,0}\right) \) . One can construct a double covering of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) having 2-ramps above any two distinct points \( {P}_{1},{P}_{2} \in {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) as follows: Take two concentric spheres and make two slits, one above the other. Let the edges of the cut inner sphere be \( {E}_{1} \) and \( {E}_{2} \), and of the outer sphere be \( {E}_{3} \) and \( {E}_{4} \), where \( {E}_{3} \) lies above \( {E}_{1} \) , and \( {E}_{4} \) above \( {E}_{2} \) . Now sew \( {E}_{1} \) to \( {E}_{4} \) and \( {E}_{2} \) to \( {E}_{3} \) . (This amounts to first switching the edges, then sewing.) This construction gives us a 2-ramp at each of \( {P}_{1} \) and \( {P}_{2} \) . At the top of Figure 19 we have separated the two cut spheres. We may easily see the topology of our curve \( C \) if we perform the sewing as indicated in the rest of Figure 19. We thus see from this new viewpoint that the complex circle \( C \) is topologically a sphere. ![9396b131-9501-41be-b2cf-577fd90ab693_80_0.jpg](images/9396b131-9501-41be-b2cf-577fd90ab693_80_0.jpg) Figure 19 Example 5.14. The representation of a given curve \( C \) as a near covering can change markedly as we vary \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) and \( {P}_{\infty } \) . For instance in Example 5.13, one might choose for \( {P}_{\infty } \) a point in the circle. This can be done, for example, by picking coordinates in \( {\mathbb{C}}_{XYZ} \) so that dehomogenizing at \( Z \) gives, in affine space \( {\mathbb{C}}_{XY} \), the complex parabola \( \mathbf{V}\left( {Y - {X}^{2}}\right) \) . 
Let \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) be the 1-subspace containing \( {\mathbb{C}}_{X} \subset {\mathbb{C}}_{XY} \), and let \( {P}_{\infty } \) be the point of \( {\mathbb{P}}^{2}\left( \mathbb{C}\right) \) completing \( {\mathbb{C}}_{Y}\left( { \subset {\mathbb{C}}_{XY}}\right) \) . Then \( \left( {0,1,0}\right) \) is a point in the 1-space \( {\mathbb{C}}_{Y} \) of \( {\mathbb{C}}_{XYZ}\left( {\mathbb{C}}_{Y}\right. \) represents \( \left. {P}_{\infty }\right) \), and \( {YZ} - {X}^{2} \) evaluated at \( \left( {0,1,0}\right) \) is zero, so \( {P}_{\infty } \in C \) . Now \( \mathrm{V}\left( {Y - {X}^{2}}\right) \) is a near 1-covering of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) and a 1-covering of \( {\mathbb{C}}_{X} \) since there is exactly one point of \( C \) over each point of \( {\mathbb{C}}_{X} \) . Thus \( C \) is topologically the one-point compactification of a 1-sheeted covering of \( {\mathbb{C}}_{X} \) . It is easily seen that a 1-sheeted cover of \( {\mathbb{C}}_{X} \) is itself homeomorphic to \( {\mathbb{C}}_{X} \) ; since the one-point compactification of \( \mathbb{C} \) is a sphere, we again end up with a sphere as underlying topological space of \( C \) . Example 5.15. Choosing the same \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) and \( {P}_{\infty } \) as in Example 5.14 but writing the parabola as \( \mathrm{V}\left( {{Y}^{2} - X}\right) \) again represents a change in the relative position of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) ,{P}_{\infty } \), and the curve. The variety \( V \) now describes a near 2-sheeted cover of \( {\mathbb{C}}_{X} \) ; there are two distinct points above each point of \( {\mathbb{C}}_{X} \) except at 0 . The homogenization \( {Y}^{2} - {XZ} \) evaluated at \( \left( {0,1,0}\right) \) is nonzero, so \( {P}_{\infty } \notin V \) . The graph in \( {\mathbb{C}}_{XY} \) of \( {Y}^{2} = X \) near \( \left( {0,0}\right) \) is, of course, a 2-ramp. 
What about above the infinite point \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \smallsetminus {\mathbb{C}}_{X} \) ? Dehomogenizing \( {Y}^{2} - {XZ} \) at \( X = 1 \) places this point at the origin, the new affine representative being given by \( {Y}^{2} - Z = 0 \) ; in \( {\mathbb{C}}_{YZ} \) it is \( {\mathbb{C}}_{Z} \) whose completion is \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) . Then \( {Y}^{2} = Z \) describes another 2-ramp (a "ramp at infinity" from the viewpoint of our original \( {\mathbb{C}}_{XY} \) ). We thus have a near 2-sheeted covering of a sphere, with two ramps at the exceptional points,
106_106_The Cantor function
Definition 2.3
Definition 2.3. The Turing machine \( M \) takes the state \( \left\lbrack d\right\rbrack \) into the state \( \left\lbrack {d}^{\prime }\right\rbrack \), written \( \left\lbrack d\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}^{\prime }\right\rbrack \), if for some representatives \( d = {\sigma q\tau } \) and \( {d}^{\prime } = {\sigma }^{\prime }{q}^{\prime }{\tau }^{\prime } \) , where \( \tau = {s}_{\alpha }{\tau }_{1} \), either (i) \( \left( {q,{s}_{\alpha },{s}_{{\alpha }^{\prime }},{q}^{\prime }}\right) \in M \) and \( {\sigma }^{\prime } = \sigma ,{\tau }^{\prime } = {s}_{{\alpha }^{\prime }}{\tau }_{1} \), or (ii) \( \left( {q,{s}_{\alpha }, R,{q}^{\prime }}\right) \in M \) and \( {\sigma }^{\prime } = \sigma {s}_{\alpha },{\tau }^{\prime } = {\tau }_{1} \), or (iii) \( \left( {q,{s}_{\alpha }, L,{q}^{\prime }}\right) \in M \) and \( \sigma = {\sigma }^{\prime }{s}_{\beta },{\tau }^{\prime } = {s}_{\beta }\tau \) for some \( {s}_{\beta } \in S \) . Exercise 2.4. Prove that there is at most one state \( \left\lbrack {d}^{\prime }\right\rbrack \) such that \( \left\lbrack d\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}^{\prime }\right\rbrack \) . When \( \left\lbrack {d}^{\prime }\right\rbrack \) exists, show that to each \( d \in \left\lbrack d\right\rbrack \), there corresponds a \( {d}^{\prime } \in \left\lbrack {d}^{\prime }\right\rbrack \) so that \( d \) and \( {d}^{\prime } \) are related as in (i),(ii) or (iii) of the definition (appropriately modified if \( \sigma \) or \( \tau \) is empty). Definition 2.5. A state \( \left\lbrack {\sigma q\tau }\right\rbrack \) is called initial if \( q = {q}_{0} \) . A state \( \left\lbrack {{\sigma q}{s}_{\alpha }{\tau }_{1}}\right\rbrack \) is called terminal if there is no quadruple \( \left( {q,{s}_{\alpha }, c, d}\right) \) in \( M \) . Exercise 2.6. 
Show that \( \left\lbrack d\right\rbrack \) is terminal if and only if there does not exist a state \( \left\lbrack {d}^{\prime }\right\rbrack \) such that \( \left\lbrack d\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}^{\prime }\right\rbrack \) . Definition 2.7. A computation by the machine \( M \) is a finite sequence \( \left\lbrack {d}_{0}\right\rbrack ,\left\lbrack {d}_{1}\right\rbrack ,\ldots ,\left\lbrack {d}_{p}\right\rbrack \) of states such that \( \left\lbrack {d}_{0}\right\rbrack \) is initial, \( \left\lbrack {d}_{p}\right\rbrack \) is terminal and \( \left\lbrack {d}_{i}\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}_{i + 1}\right\rbrack \) for \( i = 0,1,\ldots, p - 1 \) . Computations are by definition finite. Given \( M \) and \( \left\lbrack d\right\rbrack \), there is no guarantee that \( M \), started in state \( \left\lbrack d\right\rbrack \) and allowed to operate, will ever stop (i.e., will execute a computation). Definition 2.8. We say that \( M \) fails for the input \( \left\lbrack {d}_{0}\right\rbrack \) if there is no computation by \( M \) beginning with the state \( \left\lbrack {d}_{0}\right\rbrack \) . For each state \( \left\lbrack {d}_{i}\right\rbrack \), there is a unique \( \left\lbrack {d}_{i + 1}\right\rbrack \) such that \( \left\lbrack {d}_{i}\right\rbrack \overset{M}{ \rightarrow }\left\lbrack {d}_{i + 1}\right\rbrack \) . Hence failure of \( M \) for the input \( \left\lbrack {d}_{0}\right\rbrack \) means that the sequence of states taken by \( M \) and beginning with \( \left\lbrack {d}_{0}\right\rbrack \) is infinite-i.e., the machine never stops. Henceforth, the state \( \left\lbrack d\right\rbrack \) will be denoted simply by some description \( d \) . The context will make clear the sense in which symbols such as \( d,{d}_{i} \) are being used. ## Exercises 2.9. A stereo-Turing machine \( M \) has its tape divided into two parallel tracks. 
The symbols on a pair of squares (one above the other) are read simultaneously. Show that there is a (mono-)Turing machine \( {M}^{\prime } \) which will perform essentially the same computations as \( M \) . 2.10. The operator of the Turing machine \( M \) has been asked to record the output of \( M \) (i.e., the symbols printed on the tape) at the end of each computation by \( M \) . Does the operator have any problems? Show that a machine \( {M}^{\prime } \) can be designed so as to perform essentially the same computations as \( M \), and which in addition will place marker symbols (not in the alphabet of \( M \) ) either at the furthest out points of the tape used in each computation, or alternatively at the nearest points such that the stopping position of \( {M}^{\prime } \), and all non-blank symbols, lie between them. 2.11. A dual-Turing machine \( M \) with alphabet \( \mathfrak{S} \) has two tapes which can move independently. Show that there is a Turing machine with alphabet \( \mathfrak{S} \times \mathfrak{S} \) which will, when given an initial state corresponding to the pair of initial states of a computation by \( M \), perform a computation whose terminal state corresponds to the pair of terminal states of \( M \) . 2.12. \( {M}_{1} \) and \( {M}_{2} \) are Turing machines with the same alphabet \( \mathfrak{S} \) . A computation by \( {M}_{1} \) and \( {M}_{2} \) consists of a computation by each of \( {M}_{1} \) and \( {M}_{2} \) such that, if \( \sigma {q}_{i}\tau \) is the output of \( {M}_{1} \), then \( \sigma {q}_{0}\tau \) is the input for \( {M}_{2} \) . 
Show that there is a Turing machine \( M \), whose alphabet contains \( \mathfrak{S} \), such that if \( M \) is started in an initial state of a computation by \( {M}_{1} \) and \( {M}_{2} \) with terminal state \( \sigma {q}_{j}\tau \), then \( M \) executes a computation with terminal state \( \sigma {q}_{k}\tau \) for some \( {q}_{k} \), while \( M \) fails if started in any other initial state. 2.13. \( {M}_{1},\ldots ,{M}_{n} \) are Turing machines with the same alphabet. An algorithm requires that at each step, exactly one of \( {M}_{1},\ldots ,{M}_{n} \) be applied to the result of the previous step. The Turing machine \( M \), applied to the output of any step, determines which of \( {M}_{1},\ldots ,{M}_{n} \) is to be applied for the next step. Show that there is a single Turing machine which can execute the algorithm and give the same ultimate output. 2.14. Most digital computers can read and write on magnetic tape. The tapes are finite, but the operator can replace them if they run out. Show that such computers can be regarded as Turing machines. In fact, the most sophisticated computers can be regarded as Turing machines. (This is not a mathematical exercise. The reader is asked to review his experience of computers and to see that the definitions given so far are broad enough to embrace the computational features of the computers he has used.) ## §3 Recursive Functions Let \( M \) be a Turing machine with alphabet \( \mathfrak{S} \) . We show how to use \( M \) to associate with each pair \( \left( {k,\ell }\right) \) of natural numbers a subset \( {U}_{M}^{\left( k,\ell \right) } \) of \( {\mathbf{N}}^{k} \) and a function \( {\Psi }_{M}^{\left( k,\ell \right) } : {U}_{M}^{\left( k,\ell \right) } \rightarrow {\mathbf{N}}^{\ell } \) . 
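The quadruple semantics of Definition 2.3 and the unary coding used below can be sketched as a small simulator. In this sketch the successor machine `succ`, which appends one extra \( {s}_{1} \) to a unary block and so computes \( n \mapsto n + 1 \), is an illustrative choice of ours, not a machine from the text:

```python
def run(machine, tape, state="q0", max_steps=10**6):
    """Simulate a quadruple Turing machine (Definition 2.3).
    `machine` maps (state, scanned symbol) -> (action, new state), where
    action is a symbol to print, or "R"/"L" to move the scanned square.
    `tape` is a dict position -> symbol; blank symbol "s0" elsewhere."""
    pos = 0
    for _ in range(max_steps):
        key = (state, tape.get(pos, "s0"))
        if key not in machine:            # no quadruple applies: terminal
            return tape, state, pos
        action, state = machine[key]
        if action == "R":
            pos += 1
        elif action == "L":
            pos -= 1
        else:                             # print a symbol in place
            tape[pos] = action
    raise RuntimeError("machine did not halt: it fails for this input")

def code(*ns):
    """code(n1,...,nk): unary blocks of s1 separated by single s0's."""
    syms = []
    for i, n in enumerate(ns):
        if i:
            syms.append("s0")
        syms += ["s1"] * n
    return {i: s for i, s in enumerate(syms)}

# Illustrative successor machine: walk right over the s1-block,
# print one more s1 on the first blank, then halt (no quadruple for q1).
succ = {
    ("q0", "s1"): ("R", "q0"),
    ("q0", "s0"): ("s1", "q1"),
}

tape, _, _ = run(succ, code(3))
print(sum(1 for s in tape.values() if s == "s1"))   # 4, i.e. Psi(3) = 4
```

Since the terminal state is reached after finitely many steps for every input, this machine defines a total recursive function in the sense of Definition 3.1.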
For \( \left( {{n}_{1},\ldots ,{n}_{k}}\right) \in {\mathbf{N}}^{k} \), put \[ \operatorname{code}\left( {{n}_{1},\ldots ,{n}_{k}}\right) = {s}_{1}^{{n}_{1}}{s}_{0}{s}_{1}^{{n}_{2}}{s}_{0}\cdots {s}_{1}^{{n}_{k - 1}}{s}_{0}{s}_{1}^{{n}_{k}}, \] where the notation \( {s}^{n} \) denotes a string of \( n \) consecutive symbols \( s \) . There may or may not be a computation by \( M \) whose initial state is the state \( {d}_{0} = {q}_{0}\operatorname{code}\left( {{n}_{1},\ldots ,{n}_{k}}\right) \) . If there is, let \( {d}_{t} = {\sigma q\tau } \) be its (uniquely determined) terminal state. Choose a description \( {d}_{t} \) of this terminal state which has at least \( \ell \) occurrences of \( {s}_{0} \) in \( \tau \), and determine \( \left( {{a}_{1},\ldots ,{a}_{\ell }}\right) \in {\mathbf{N}}^{\ell } \) by defining \( {a}_{1} \) to be the number of times \( {s}_{1} \) occurs in \( \tau \) before the first occurrence of \( {s}_{0} \), and \( {a}_{i} \) (for \( 2 \leq i \leq \ell \) ) to be the number of times \( {s}_{1} \) occurs in \( \tau \) between the \( \left( {i - 1}\right) \) th and the \( i \) th occurrences of \( {s}_{0} \) . Let \( {U}_{M}^{\left( k,\ell \right) } \) be the subset of \( {\mathbf{N}}^{k} \) consisting of all \( \left( {{n}_{1},\ldots ,{n}_{k}}\right) \in {\mathbf{N}}^{k} \) for which there exists a computation by \( M \) with initial state \( {q}_{0}\operatorname{code}\left( {{n}_{1},\ldots ,{n}_{k}}\right) \), and so for which an element \( \left( {{a}_{1},\ldots ,{a}_{\ell }}\right) \in {\mathbf{N}}^{\ell } \) is defined. The function \( {\Psi }_{M}^{\left( k,\ell \right) } \), with domain \( {U}_{M}^{\left( k,\ell \right) } \), is defined by the rule \[ {\Psi }_{M}^{\left( k,\ell \right) }\left( {{n}_{1},\ldots ,{n}_{k}}\right) = \left( {{a}_{1},\ldots ,{a}_{\ell }}\right) . \] Definition 3.1. A function \( {\Psi }_{M}^{\left( k,\ell \right) } \) defined as above in terms of a Turing machine \( M \) is called a partial recursive function \( {}^{2} \) . 
The function \( {\Psi }_{M}^{\left( k,\ell \right) } \) is called a (total) recursive function if \( {U}_{M}^{\left( k,\ell \right) } = {\mathbf{N}}^{k} \) . --- \( {}^{2} \) These functions are usually called Turing computable functions, with a different definition being given for recursive functions. The equivalence of the two definitions is a significant result, but the proof is tedious. The reader is referred to \( §1 \) of Chapter \( \mathrm{X} \) for further information, and to [10], pp. 120-121, 207-237 for full details. --- ## Exercises 3.2. \( f : U \rightarrow {\mathbf{N}}^{\ell } \) is a partial recursive function with domain \( U \subseteq {\mathbf{N}}^{k} \) . Show that there is a Turing machine \( M \) such that \( {\Psi }_{M}^{\left( k,\ell \right) } = f \) and such that,
1126_(GTM32)Lectures in Abstract Algebra III. Theory of Fields and Galois Theory
Definition 6
Definition 6. An ordered (commutative) group \( G \) is a commutative group \( G \) together with a subset \( H \) satisfying the three conditions: 1) \( 1 \notin H \) ; 2) if \( {a\varepsilon G} \), either \( {a\varepsilon H} \), \( a = 1 \), or \( {a}^{-1}{\varepsilon H} \) ; 3) \( H \) is closed under the multiplication in \( G \) . If \( \left( {G, H}\right) \) is an ordered group, then we let \( {H}^{-1} = \left\{ {{b}^{-1} \mid {b\varepsilon H}}\right\} \) . Then condition 2 states that \( G = H \cup \{ 1\} \cup {H}^{-1} \) . Moreover, these sets are non-overlapping. This is assumed for \( H \) and \( \{ 1\} \) in condition 1 and it follows for \( {H}^{-1} \) and \( \{ 1\} \) on observing that, if \( 1\varepsilon {H}^{-1} \), then \( {1\varepsilon H} \) contrary to condition 1. Finally, if \( {a\varepsilon H} \cap {H}^{-1} \), then \( {a}^{-1}{\varepsilon H} \) and \( 1 = a{a}^{-1}{\varepsilon H} \) by condition 3. This again contradicts condition 1. The positive reals form an ordered group if we take \( H \) to be the set of elements \( < 1 \) . We can take \( H \) equally well to be the set of elements \( > 1 \) . In fact, if \( G \) is any ordered group, then \( {H}^{-1} \) is closed under multiplication and satisfies conditions 1 and 2 of Definition 6, so we can obtain another ordered group on replacing \( H \) by \( {H}^{-1} \) . In any ordered group \( G \) we define \( a < b \) to mean that \( a{b}^{-1}{\varepsilon H} \) . This defines a linear ordering in \( G \), that is, we have the following properties: 1. \( a < b, b < c \) implies \( a < c \) . 2. For any pair \( \left( {a, b}\right) \), \( a, b\varepsilon G \), one and only one of the following holds: \( a < b \) , \( a = b, b < a \) (as usual we write \( b > a \) for \( a < b \) ). The order in \( G \) is invariant under multiplication, that is, we have: 3. If \( a < b \) , then \( {ac} < {bc} \) . 
Conversely, if a relation \( a < b \) is defined in a group \( G \) so that properties 1, 2, and 3 hold, then \( G \) is ordered by the subset \( H = \{ a \mid a < 1\} \) . Clearly condition 1 of Definition 6 holds for \( H \) . To prove conditions 2 and 3 we note first that, if \( a < b \) and \( c < d \), then \( {ac} < {bc} < {bd} \), so \( {ac} < {bd} \) ; hence \( a < b \) if and only if \( {a}^{-1} > {b}^{-1} \) (by property 3, multiply both sides by \( {a}^{-1}{b}^{-1} \) ). In particular, \( a < 1 \) if and only if \( {a}^{-1} > 1 \) . Since any \( a \) satisfies one of the conditions \( a < 1 \), \( a = 1 \), \( a > 1 \), it is clear that condition 2 of Definition 6 holds. Finally, \( a < 1 \), \( b < 1 \) imply \( {ab} < 1 \), so \( H \) is closed under the multiplication in \( G \) . We remark also that the ordering defined by \( H \) in the manner indicated, \( a < b \) if \( a{b}^{-1} \in H \), is the same as the original ordering, since \( a{b}^{-1} \in H \) means \( a{b}^{-1} < 1 \) and this holds if and only if \( a < b \) . If \( {G}_{1} \) is a subgroup of an ordered group \( G \) ordered by the set \( H = \{ a \mid a \in G, a < 1\} \), then \( {G}_{1} \) has an induced ordering defined by \( {H}_{1} = {G}_{1} \cap H \) . This can be verified directly, or it can be seen by noting that the relation \( < \) defined in \( G \) gives a relation in \( {G}_{1} \) which satisfies the conditions stated before. If \( G \) is ordered by \( H \) and \( {G}^{\prime } \) is a second ordered group, ordered by \( {H}^{\prime } \), then an isomorphism \( \eta \) of \( G \) into \( {G}^{\prime } \) is called an order-isomorphism if \( {H\eta } \subseteq {H}^{\prime } \) . Also, \( G \) and \( {G}^{\prime } \) are order-isomorphic if there exists an order-isomorphism \( \eta \) of \( G \) onto \( {G}^{\prime } \) ; in this case one necessarily has \( {H\eta } = {H}^{\prime } \) .
For example, the group of positive reals under multiplication with \( H \) defined as before is order-isomorphic to the additive group of all the real numbers ordered by the set \( {H}^{\prime } \) of negative reals: the mapping \( a \rightarrow \log a \) (natural logarithm) is an order-isomorphism of the first group onto the second one. If \( G \) is an ordered group, \( G \) contains no elements \( \neq 1 \) of finite order; for, if \( a < 1 \) (resp. \( a > 1 \) ), then \( {a}^{n} < 1 \) (resp. \( {a}^{n} > 1 \) ), so \( {a}^{n} \neq 1 \) for every positive integer \( n \) . A consequence of this property of \( G \) is that for any fixed integer \( n \neq 0 \) the mapping \( x \rightarrow {x}^{n} \) of \( G \) is an isomorphism of \( G \) onto a subgroup of \( G \), which is order-preserving if \( n \geq 1 \) . To define general valuations we shall need to consider ordered groups \( V \) with 0. We define such a system to be an ordered group \( G \) to which a 0 element has been adjoined: \( V = G \cup \{ 0\} \) . The ordering in \( G \) is extended to \( V \) by defining \( 0 < a \) for every \( a \in G \), and we define \( {a0} = 0 \) for all \( a \) . We can now give the following Definition 7. Let \( \Phi \) be a field and let \( V \) be an ordered (commutative) group with 0. A mapping \( \varphi : \alpha \rightarrow \varphi \left( \alpha \right) \) of \( \Phi \) into \( V \) is called a valuation if (i) \( \varphi \left( \alpha \right) = 0 \) if and only if \( \alpha = 0 \) ; (ii) \( \varphi \left( {\alpha \beta }\right) = \varphi \left( \alpha \right) \varphi \left( \beta \right) \) ; (iii) \( \varphi \left( {\alpha + \beta }\right) \leq \max \left( {\varphi \left( \alpha \right) ,\varphi \left( \beta \right) }\right) \) . The exact sweep of this definition will become apparent soon. At this point it is clear that real non-archimedean valuations are a special case in which \( V \) is the set of non-negative real numbers.
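As a small illustration of Definition 7 with \( V \) the non-negative reals (a sketch of my own, not from the text): the 2-adic valuation \( \varphi \left( a\right) = {c}^{{v}_{2}\left( a\right) } \) on \( \mathbb{Q} \), with \( 0 < c < 1 \), satisfies (i)–(iii):

```python
from fractions import Fraction

# My own sketch (not from the text): the 2-adic valuation on Q as an instance
# of Definition 7, with V the non-negative reals and phi(a) = c**v_2(a).
def v2(n):
    """Exponent of 2 in the nonzero integer n."""
    k, n = 0, abs(n)
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def phi(a, c=0.5):
    if a == 0:
        return 0.0                       # (i): phi(a) = 0 iff a = 0
    a = Fraction(a)
    return c ** (v2(a.numerator) - v2(a.denominator))

a, b = Fraction(3, 4), Fraction(8, 5)
assert phi(a * b) == phi(a) * phi(b)                 # (ii)
for x, y in [(a, b), (a, -a), (Fraction(2), Fraction(6))]:
    assert phi(x + y) <= max(phi(x), phi(y))         # (iii)
```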
On the other hand, it should be noted that the real archimedean valuations are not valuations in the present sense. This inconsistency in terminology will cause no real difficulty. We shall now give an example of a valuation for which \( V \) is not the non-negative reals. Example. In this example we shall find it convenient to use the additive notation in the group \( G \) . The modifications in Definition 7 which are necessitated by this change are obvious, so we shall not write these down. The group \( G \) we shall consider is the additive group of integer pairs \( \left( {k, l}\right) \) . We introduce the lexicographic order in \( G \), that is, we define \( \left( {k, l}\right) < \left( {{k}^{\prime },{l}^{\prime }}\right) \) if either \( k < {k}^{\prime } \), or \( k = {k}^{\prime } \) and \( l < {l}^{\prime } \) . One checks that this is a linear ordering preserved under addition; hence \( G \) is an ordered (additive) group. We let \( V = G \cup \{ \infty \} \), where the ordering is extended to \( V \) by setting \( \infty > \left( {k, l}\right) \) for every \( \left( {k, l}\right) \in G \) . Also we define \( \left( {k, l}\right) + \infty = \infty \) . Now let \( \mathrm{P} = \Phi \left( {\xi ,\eta }\right) \) be a purely transcendental extension of a field \( \Phi \), where \( \{ \xi ,\eta \} \) is a transcendency basis for \( \mathrm{P} \) over \( \Phi \) . If \( a \in \mathrm{P} \) and \( a \neq 0 \), we can write \( a = {\xi }^{m}{\eta }^{n}p\left( {\xi ,\eta }\right) q{\left( \xi ,\eta \right) }^{-1} \), where \( p\left( {\xi ,\eta }\right) \) and \( q\left( {\xi ,\eta }\right) \) are polynomials in \( \xi ,\eta \) with non-zero constant terms, and \( m \) and \( n \) are integers. Then we define \( \varphi \left( a\right) = \left( {m, n}\right) \), and we set \( \varphi \left( 0\right) = \infty \) . Then (i) holds.
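This valuation can be spot-checked numerically. The sketch below (my own code, in the spirit of the example) represents a polynomial by its set of exponent pairs and takes \( \varphi \) to be the lexicographically least pair, which agrees with \( \left( {m, n}\right) \) whenever the factorization of the example applies:

```python
# My own sketch, in the spirit of the example: represent a (Laurent)
# polynomial in xi, eta as a dict {(m, n): coefficient}, and take phi to be
# the lexicographically least exponent pair (Python tuples compare lex).
def phi(f):
    return min(f)

def mul(f, g):
    h = {}
    for (a, b), c in f.items():
        for (d, e), k in g.items():
            h[(a + d, b + e)] = h.get((a + d, b + e), 0) + c * k
    return {key: v for key, v in h.items() if v != 0}

def add(f, g):
    h = dict(f)
    for key, v in g.items():
        h[key] = h.get(key, 0) + v
    return {key: v for key, v in h.items() if v != 0}

# f = xi^2 eta^{-1} (1 + xi),  g = eta^3 (2 - eta)
f = {(2, -1): 1, (3, -1): 1}
g = {(0, 3): 2, (0, 4): -1}
assert phi(mul(f, g)) == (2, 2)               # phi(fg) = phi(f) + phi(g)
assert phi(add(f, g)) >= min(phi(f), phi(g))  # the ultrametric inequality
```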
It is easy to check that \( \varphi \left( {ab}\right) = \varphi \left( a\right) + \varphi \left( b\right) \) and \( \varphi \left( {a + b}\right) \geq \min \left( {\varphi \left( a\right) ,\varphi \left( b\right) }\right) \) . The first of these is (ii) in the additive notation, and the second can be changed to (iii) by reversing the ordering (writing \( > \) for \( < \) ). Hence our function is essentially a valuation. ## EXERCISES 1. Let \( G \) be the additive ordered group of integer pairs \( \left( {k, l}\right) \) given in the foregoing example. Let \( c \) and \( e \) be real numbers such that \( 0 < c < 1 \) and \( e \) is positive and irrational. Show that the mapping \( \left( {k, l}\right) \rightarrow {c}^{k + {el}} \) is an isomorphism of \( G \) into the ordered multiplicative group \( P \) of positive real numbers. Show that \( G \) is not order-isomorphic to a subgroup of \( P \) . 2. Let \( \mathrm{P} = \Phi \left( {\xi ,\eta }\right) \) and \( a = {\xi }^{m}{\eta }^{n}p\left( {\xi ,\eta }\right) q{\left( \xi ,\eta \right) }^{-1} \), where \( p \) and \( q \) are polynomials in \( \xi ,\eta \) with non-zero constant terms, as in the example above. Define \( \psi \left( a\right) = {c}^{m + {en}} \), where \( c \) and \( e \) are real numbers, \( 0 < c < 1 \), \( e \) positive irrational. Show that \( \psi \) is a non-archimedean real valuation which is not discrete. 3. Define a valuation \( \varphi \) of an integral domain \( \mathfrak{o} \) by replacing the field \( \Phi \) in Definition 7 by the integral domain \( \mathfrak{o} \) . Show that any valuation \( \psi \) of \( \mathfrak{o} \) into \( V \) has a unique extension to a valuation of the field of fractions \( \Phi \) of \( \mathfrak{o} \) . 4. Let \( G \) be an arbitrary (commutative) ordered group and let \( \mathfrak{o} = {\Phi }_{0}\left( G\right) \) be the group ring of \( G \) over a field \( {\Phi }_{0} \) (Vol. I, ex. 2, p. 95). Show that \( \mathfrak{o} \) is an integral domain.
If \( a = \mathop{\sum }\limits_{1}^{r}{\alpha }_{i}{g}_{i} \), with \( {\alpha }_{i} \neq 0 \) in \( {\Phi }_{0} \) and \( {g}_{i} \in G \), define \( \varphi \left( a\right) = \min {g}_{i} \) (in the ordering \( < \) defined in \( G \) ). Define \( \varphi \left( 0\right) = 0 \) . Show that \( \varphi \) is a valuation of \( \mathfrak{o} \) . Use exs. 3 and 4 to show that if \( V \) is any ordered group with 0, then there exists a field \( \Phi \) with a valuation \( \varphi \) of \( \Phi \) i
1359_[陈省身] Lectures on Differential Geometry
Definition 1.2
Definition 1.2. Suppose \( M \) is an \( m \) -dimensional manifold. If a given set of coordinate charts \( \mathcal{A} = \left\{ {\left( {U,{\varphi }_{U}}\right) ,\left( {V,{\varphi }_{V}}\right) ,\left( {W,{\varphi }_{W}}\right) ,\cdots }\right\} \) on \( M \) satisfies the following conditions, then we call \( \mathcal{A} \) a \( {C}^{r} \) -differentiable structure on \( M \) : 1) \( \{ U, V, W,\ldots \} \) is an open covering of \( M \) ; 2) any two coordinate charts in \( \mathcal{A} \) are \( {C}^{r} \) -compatible; 3) \( \mathcal{A} \) is maximal, i.e., if a coordinate chart \( \left( {\widetilde{U},{\varphi }_{\widetilde{U}}}\right) \) is \( {C}^{r} \) -compatible with all coordinate charts in \( \mathcal{A} \), then \( \left( {\widetilde{U},{\varphi }_{\widetilde{U}}}\right) \in \mathcal{A} \) . If a \( {C}^{r} \) -differentiable structure is given on \( M \), then \( M \) is called a \( {C}^{r} \) -differentiable manifold. A coordinate chart in a given differentiable structure is called a compatible (admissible) coordinate chart of \( M \) . From now on, a local coordinate system of a point \( p \) on a differentiable manifold \( M \) refers to a coordinate system obtained from an admissible coordinate chart containing \( p \) . Remark 1. Conditions 1) and 2) in Definition 1.2 are primary. It is not hard to show that if a set \( {\mathcal{A}}^{\prime } \) of coordinate charts satisfies 1) and 2), then for any positive integer \( s \) with \( 0 < s \leq r \), there exists a unique \( {C}^{s} \) -differentiable structure \( \mathcal{A} \) such that \( {\mathcal{A}}^{\prime } \subset \mathcal{A} \) . In fact, if \( \mathcal{A} \) denotes the set of all coordinate charts which are \( {C}^{s} \) -compatible with every coordinate chart in \( {\mathcal{A}}^{\prime } \), then \( \mathcal{A} \) is a \( {C}^{s} \) -differentiable structure uniquely determined by \( {\mathcal{A}}^{\prime } \) .
Hence, to construct a differentiable manifold, we need only choose a covering by compatible charts. Remark 2. In this book, we also assume that any manifold \( M \) is a second countable topological space, i.e., \( M \) has a countable topological basis (see footnote on page 2). Remark 3. If a \( {C}^{\infty } \) -differentiable structure is given on \( M \), then \( M \) is called a smooth manifold. If \( M \) has a \( {C}^{\omega } \) -differentiable structure, then \( M \) is called an analytic manifold. In this book, we are mostly interested in smooth manifolds. When there is no confusion, the term manifold will mean smooth manifold. Example 1. For \( M = {\mathbb{R}}^{m} \), let \( U = M \) and \( {\varphi }_{U} \) be the identity map. Then \( \left\{ \left( {U,{\varphi }_{U}}\right) \right\} \) is a coordinate covering of \( {\mathbb{R}}^{m} \) . This provides a smooth differentiable structure on \( {\mathbb{R}}^{m} \), called the standard differentiable structure of \( {\mathbb{R}}^{m} \) . Example 2. Consider the \( m \) -dimensional unit sphere \[ {S}^{m} = \left\{ {x \in {\mathbb{R}}^{m + 1}\left| {\;{\left( {x}^{1}\right) }^{2} + \cdots + {\left( {x}^{m + 1}\right) }^{2} = 1}\right. }\right\} . \] For \( m = 1 \), take the following four coordinate charts: \[ \left\{ \begin{array}{l} {U}_{1} = \left\{ {x \in {S}^{1} \mid {x}^{2} > 0}\right\} ,\;{\varphi }_{{U}_{1}}\left( x\right) = {x}^{1}, \\ {U}_{2} = \left\{ {x \in {S}^{1} \mid {x}^{2} < 0}\right\} ,\;{\varphi }_{{U}_{2}}\left( x\right) = {x}^{1}, \\ {V}_{1} = \left\{ {x \in {S}^{1} \mid {x}^{1} > 0}\right\} ,\;{\varphi }_{{V}_{1}}\left( x\right) = {x}^{2}, \\ {V}_{2} = \left\{ {x \in {S}^{1} \mid {x}^{1} < 0}\right\} ,\;{\varphi }_{{V}_{2}}\left( x\right) = {x}^{2}. \end{array}\right. \] (1.8) Figure 2. Obviously, \( \left\{ {{U}_{1},{U}_{2},{V}_{1},{V}_{2}}\right\} \) is an open covering of \( {S}^{1} \) .
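The compatibility of these charts on an overlap can also be spot-checked numerically; the following sketch (my own code, not from the text) evaluates the two charts on sample points of \( {U}_{1} \cap {V}_{2} \) and verifies the transition functions (cf. (1.9)):

```python
import math

# My own numeric spot-check (not from the text) of chart compatibility on the
# overlap U1 ∩ V2 = {x in S^1 : x^2 > 0, x^1 < 0}.
def phi_U1(x):
    return x[0]          # the chart on U1 projects to x^1

def phi_V2(x):
    return x[1]          # the chart on V2 projects to x^2

for t in [1.7, 2.0, 2.5]:                  # angles with cos t < 0 < sin t
    x = (math.cos(t), math.sin(t))
    u, v = phi_U1(x), phi_V2(x)
    # the transition functions, smooth on the overlap:
    assert math.isclose(v, math.sqrt(1 - u * u))
    assert math.isclose(u, -math.sqrt(1 - v * v))
```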
In the intersection \( {U}_{1} \cap {V}_{2} \), we have (see Figure 2) \[ \left\{ \begin{array}{l} {x}^{2} = \sqrt{1 - {\left( {x}^{1}\right) }^{2}} > 0, \\ {x}^{1} = - \sqrt{1 - {\left( {x}^{2}\right) }^{2}} < 0. \end{array}\right. \] (1.9) These are both \( {C}^{\infty } \) functions, thus \( \left( {{U}_{1},{\varphi }_{{U}_{1}}}\right) \) and \( \left( {{V}_{2},{\varphi }_{{V}_{2}}}\right) \) are \( {C}^{\infty } \) -compatible. Similarly, any other pair of the given coordinate charts is \( {C}^{\infty } \) -compatible. Hence these coordinate charts suffice to make \( {S}^{1} \) a 1-dimensional smooth manifold. For \( m > 1 \), the smooth structure on \( {S}^{m} \) can be defined similarly. Example 3. The \( m \) -dimensional projective space \( {P}^{m} \) . Define a relation \( \sim \) in \( {\mathbb{R}}^{m + 1} - \{ 0\} \) as follows: for \( x, y \in {\mathbb{R}}^{m + 1} - \{ 0\} \), \( x \sim y \) if and only if there exists a nonzero real number \( a \) such that \( x = {ay} \) . Obviously, \( \sim \) is an equivalence relation. For \( x \in {\mathbb{R}}^{m + 1} - \{ 0\} \), denote the equivalence class of \( x \) by \[ \left\lbrack x\right\rbrack = \left\lbrack {{x}^{1},\ldots ,{x}^{m + 1}}\right\rbrack . \] The \( m \) -dimensional projective space is the quotient space \[ {P}^{m} = \left( {{\mathbb{R}}^{m + 1}-\{ 0\} }\right) / \sim \] \( \left( {1.10}\right) \) \[ = \left\{ {\left\lbrack x\right\rbrack \mid x \in {\mathbb{R}}^{m + 1}-\{ 0\} }\right\} . \] The numbers of the \( \left( {m + 1}\right) \) -tuple \( \left( {{x}^{1},\ldots ,{x}^{m + 1}}\right) \) are called the homogeneous coordinates of \( \left\lbrack x\right\rbrack \) . They are determined by \( \left\lbrack x\right\rbrack \) up to a nonzero factor. \( {P}^{m} \) is thus the space of all straight lines in \( {\mathbb{R}}^{m + 1} \) which pass through the origin. Let \[ \begin{cases} {U}_{i} & = \left\{ {\left. 
\left\lbrack {{x}^{1},\ldots ,{x}^{m + 1}}\right\rbrack \right| \;{x}^{i} \neq 0}\right\} , \\ {\varphi }_{i}\left( \left\lbrack x\right\rbrack \right) & = \left( {{}_{i}{\xi }_{1},\ldots ,{}_{i}{\xi }_{i - 1},{}_{i}{\xi }_{i + 1},\ldots ,{}_{i}{\xi }_{m + 1}}\right) , \end{cases} \] (1.11) where \( 1 \leq i \leq m + 1 \) and \( {}_{i}{\xi }_{h} = {x}^{h}/{x}^{i}\left( {h \neq i}\right) \) . Obviously, \( \left\{ {{U}_{i},1 \leq i \leq m + 1}\right\} \) forms an open covering of \( {P}^{m} \) . On \( {U}_{i} \cap {U}_{j}, i \neq j \), the change of coordinates is given by \[ \left\{ \begin{array}{l} {}_{j}{\xi }_{h} = \frac{{}_{i}{\xi }_{h}}{{}_{i}{\xi }_{j}},\;h \neq i, j, \\ {}_{j}{\xi }_{i} = \frac{1}{{}_{i}{\xi }_{j}}. \end{array}\right. \] (1.12) Hence \( {\left\{ \left( {U}_{i},{\varphi }_{i}\right) \right\} }_{1 \leq i \leq m + 1} \) suffices to generate a smooth structure on \( {P}^{m} \) . Remark. In each of the above examples, the respective coordinate charts given are in fact \( {C}^{\omega } \) -compatible also, and so provide the structures for \( {\mathbb{R}}^{m} \), \( {S}^{m} \), and \( {P}^{m} \) as analytic manifolds. Example 4 (Milnor’s Exotic Sphere). There may exist distinct differentiable structures on a single topological manifold. J. Milnor gave a famous example (Milnor 1956), which shows that there exist nonisomorphic smooth structures on homeomorphic topological manifolds (see the discussion following the remark to Definition 1.3 below). Hence a differentiable structure is more than a topological structure. A complete understanding of the Milnor sphere is outside the scope of this text. Here we will give only a brief description of the main ideas. [A more recent example is the existence of distinct smooth structures on \( {\mathbb{R}}^{4} \) discovered by S. K. Donaldson (see Donaldson and Kronheimer 1991).] Choose two antipodal points \( A \) and \( B \) in \( {S}^{4} \) . Let \[ {U}_{1} = {S}^{4} - \{ A\} ,\;{U}_{2} = {S}^{4} - \{ B\} . 
\] (1.13) Then \( {U}_{1} \) and \( {U}_{2} \) form an open covering of \( {S}^{4} \) . We wish to paste the trivial sphere bundles \( {U}_{1} \times {S}^{3} \) and \( {U}_{2} \times {S}^{3} \) together to get a 3-sphere bundle \( {\Sigma }^{7} \) over \( {S}^{4} \) . Under the stereographic projection, \( {U}_{1} \) and \( {U}_{2} \) are both homeomorphic to \( {\mathbb{R}}^{4} \), and \( {U}_{1} \cap {U}_{2} \) is homeomorphic to \( {\mathbb{R}}^{4} - \{ 0\} \) . Identify the elements of \( {\mathbb{R}}^{4} - \{ 0\} \) as quaternions, and choose an odd number \( \kappa \) with \( {\kappa }^{2} - 1 ≢ 0{\;\operatorname{mod}\;7} \) . Consider the map \( \tau : \left( {{\mathbb{R}}^{4}-\{ 0\} }\right) \times {S}^{3} \rightarrow \left( {{\mathbb{R}}^{4}-\{ 0\} }\right) \times {S}^{3} \) such that for every \( \left( {u, v}\right) \in \left( {{\mathbb{R}}^{4}-\{ 0\} }\right) \times {S}^{3} \) we have \[ \tau \left( {u, v}\right) = \left( {\frac{u}{\parallel u{\parallel }^{2}},\frac{{u}^{h}v{u}^{j}}{\parallel u\parallel }}\right) , \] (1.14) where \[ h = \frac{\kappa + 1}{2},\;j = \frac{1 - \kappa }{2}, \] (1.15) and in (1.14) the multiplication and the norm \( \parallel \cdot \parallel \) are in the sense of quaternions. Obviously \( \tau \) is a smooth map. We can thus paste \( {U}_{1} \times {S}^{3} \) and \( {U}_{2} \times {S}^{3} \) together using \( \tau \) . It can be proved that the \( {\Sigma }^{7} \) constructed in this way is homeomorphic to the 7-dimensional unit sphere \( {S}^{7} \), but its differentiable structure is different from the standard differentiable structure of \( {S}^{7} \) (Example 2). On a smooth manifold, the concept of a smooth function is well-defined. Let \( f \) be a real-valued function defined on an \( m \) -dimensional smooth manifold \( M \) . 
If \( p \in M \), and \( \left( {U,{\varphi }_{U}}\right) \) is a compatible coordinate chart containing \( p \), then \( f \circ {\varphi }_{U}^{-1} \) is a real-valued function defined on the open subset \( {\varphi }_{U}\left( U\right) \) of the Euclidean space \( {\mathbb{R}}^{m} \) . If \( f \circ {\varphi }_{
1329_[肖梁] Abstract Algebra (2022F)
Definition 11.1.1
Definition 11.1.1. A ring \( R \) is a set together with two binary operations + and \( \cdot \), satisfying (1) \( \left( {R, + }\right) \) is an abelian group under "addition" (with 0 as the additive unit); (2) the "multiplication" \( \cdot \) is associative, i.e. \( \left( {a \cdot b}\right) \cdot c = a \cdot \left( {b \cdot c}\right) \) for any \( a, b, c \in R \) ; (3) the distributive law holds in \( R \), i.e., for all \( a, b, c \in R \) , \[ \left( {a + b}\right) \cdot c = a \cdot c + b \cdot c\;\text{ and }\;a \cdot \left( {b + c}\right) = a \cdot b + a \cdot c; \] (4) \( R \) is unital, i.e. there exists an element \( 1 \in R \) such that \( 1 \neq 0 \) and that \[ 1 \cdot a = a \cdot 1 = a,\text{ for all }a \in R. \] In this course, all rings are assumed to be unital with \( 1 \neq 0 \), i.e. condition (4) above holds. We say that a ring \( R \) is commutative if \( a \cdot b = b \cdot a \) for all \( a, b \in R \) . Definition 11.1.2. A ring is called a division ring or a skew field if every nonzero element \( a \in R \) has a multiplicative inverse. A commutative division ring is called a field. Example 11.1.3. (1) \( \left( {\mathbb{Z},+, \cdot }\right) \) and \( \left( {{\mathbf{Z}}_{n},+, \cdot }\right) \) are rings. (2) \( \mathbb{Q},\mathbb{R} \), and \( \mathbb{C} \) are fields. (3) \( \mathbb{Z}\left\lbrack \frac{1}{N}\right\rbrack = \left\{ {\left. \frac{a}{{N}^{r}}\right| \;a \in \mathbb{Z}, r \in {\mathbb{Z}}_{ \geq 0}}\right\} \) is a subring of \( \mathbb{Q} \) . (4) If \( R \) is a ring, then \( R\left\lbrack x\right\rbrack = \left\{ {\left. {\mathop{\sum }\limits_{{n \geq 0}}{a}_{n}{x}^{n}}\right| \;{a}_{n} \in R}\right\} \) is a ring, called the polynomial ring over \( R \) . More generally, we may define \( R\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) similarly as the ring of polynomials in several variables with coefficients in \( R \) . In this construction, we often require \( R \) to be commutative. 
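As a concrete sketch of the polynomial ring in example (4) (my own code, not from the notes): multiplication in \( R\left\lbrack x\right\rbrack \) is convolution of coefficient sequences, here with \( R = {\mathbf{Z}}_{6} \), together with a spot-check of distributivity:

```python
# My own sketch of example (4) (not from the notes): polynomials over
# R = Z_n as coefficient lists, multiplied by convolution mod n.
def poly_mul(f, g, n):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % n
    return h

def poly_add(f, g, n):
    m = max(len(f), len(g))
    f, g = f + [0] * (m - len(f)), g + [0] * (m - len(g))
    return [(a + b) % n for a, b in zip(f, g)]

# spot-check distributivity f(g + h) = fg + fh in Z_6[x]
n, f, g, h = 6, [1, 2], [3, 0, 4], [5, 1]
lhs = poly_mul(f, poly_add(g, h, n), n)
rhs = poly_add(poly_mul(f, g, n), poly_mul(f, h, n), n)
m = max(len(lhs), len(rhs))
assert lhs + [0] * (m - len(lhs)) == rhs + [0] * (m - len(rhs))
```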
(5) If \( R \) is a ring, then the set \( {\operatorname{Mat}}_{n \times n}\left( R\right) \) of all \( n \times n \) matrices with entries in \( R \) is a ring. (6) \( \mathbb{H} \mathrel{\text{:=}} \{ a + {bi} + {cj} + {dk} \mid a, b, c, d \in \mathbb{R}\} \) is called the ring of Hamilton quaternions. The multiplication is given by the rules: \[ {i}^{2} = {j}^{2} = {k}^{2} = - 1,{ij} = - {ji} = k,{jk} = - {kj} = i,{ki} = - {ik} = j. \] There are additional structures: for \( z = a + {bi} + {cj} + {dk} \in \mathbb{H} \) , conjugation : \( \bar{z} = \overline{a + {bi} + {cj} + {dk}} \mathrel{\text{:=}} a - {bi} - {cj} - {dk},\; \) norm map : \( \operatorname{Nm}\left( z\right) \mathrel{\text{:=}} z\bar{z} = {a}^{2} + {b}^{2} + {c}^{2} + {d}^{2} \in {\mathbb{R}}_{ \geq 0} \) . It can be seen that if \( z \neq 0 \), then \( {z}^{-1} = \bar{z}/\operatorname{Nm}\left( z\right) \) is a multiplicative inverse. So \( \mathbb{H} \) is a division ring. (See the extended readings for more discussion.) (7) (Group rings) Let \( R \) be a commutative ring and \( G \) a group. Define the associated group ring of \( G \) over \( R \) to be \[ R\left\lbrack G\right\rbrack \mathrel{\text{:=}} \left\{ {\text{ finite sums }\mathop{\sum }\limits_{{g \in G}}{a}_{g}g \mid {a}_{g} \in R}\right\} . \] The multiplication is given by \[ \left( {\mathop{\sum }\limits_{{g \in G}}{a}_{g}g}\right) \left( {\mathop{\sum }\limits_{{h \in G}}{b}_{h}h}\right) = \mathop{\sum }\limits_{{g, h \in G}}{a}_{g}{b}_{h}{gh}. \] The multiplicative unit in \( R\left\lbrack G\right\rbrack \) is \( 1 \cdot {e}_{G} \) (where \( {e}_{G} \) is the unit in \( G \) ). For example, when \( G = {\mathbf{Z}}_{n} = \left\langle {\sigma \mid {\sigma }^{n} = 1}\right\rangle \), we have \[ R\left\lbrack G\right\rbrack = \left\{ {{a}_{0} + {a}_{1}\sigma + \cdots + {a}_{n - 1}{\sigma }^{n - 1} \mid {a}_{i} \in R}\right\} , \] subject to the rule that \( {\sigma }^{n} = 1 \) . 
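The cyclic group ring \( R\left\lbrack {\mathbf{Z}}_{n}\right\rbrack \) just described can be sketched in a few lines (my own code, not from the notes): coefficient lists are multiplied with exponents added modulo \( n \), i.e. subject to \( {\sigma }^{n} = 1 \):

```python
# My own sketch of the cyclic group ring R[Z_n] for R = Z (not from the
# notes): coefficient lists (a_0, ..., a_{n-1}) multiplied subject to
# sigma^n = 1, i.e. exponents add modulo n.
def group_ring_mul(a, b, n):
    c = [0] * n
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[(i + j) % n] += x * y
    return c

n = 3
one = [1, 0, 0]                  # the unit 1 * e_G
sigma = [0, 1, 0]
assert group_ring_mul(sigma, sigma, n) == [0, 0, 1]          # sigma^2
assert group_ring_mul(sigma, [0, 0, 1], n) == one            # sigma^3 = 1
assert group_ring_mul([1, 1, 0], [1, 1, 1], n) == [2, 2, 2]  # (1+s)(1+s+s^2)
assert group_ring_mul([1, 1, 0], one, n) == [1, 1, 0]        # e_G is a unit
```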
For another example, if \( G = \mathbb{Z} = \langle \sigma \rangle \), then \[ R\left\lbrack G\right\rbrack = R\left\lbrack {x}^{\pm 1}\right\rbrack = \left\{ {\text{ finite sums }\mathop{\sum }\limits_{{n \in \mathbb{Z}}}{a}_{n}{x}^{n} \mid {a}_{n} \in R}\right\} . \] Definition 11.1.4. Let \( R \) and \( S \) be rings. (1) A ring homomorphism is a map \( \phi : R \rightarrow S \) satisfying (a) \( \phi \left( {a + b}\right) = \phi \left( a\right) + \phi \left( b\right) \) for all \( a, b \in R \) (which in particular implies that \( \phi \left( 0\right) = 0 \) ), (b) \( \phi \left( {ab}\right) = \phi \left( a\right) \phi \left( b\right) \) for all \( a, b \in R \) , (c) \( \phi \left( 1\right) = 1 \) . (2) The kernel of \( \phi \) is \( \ker \phi = {\phi }^{-1}\left( 0\right) \) . In particular, \( \phi \) is injective if and only if \( \ker \phi = \{ 0\} \) . (3) A homomorphism \( \phi \) is called an isomorphism if it is bijective. Remark 11.1.5. In some books, a ring is not assumed to contain 1 and a ring homomorphism need not send 1 to 1 . We impose both conditions, as this is the case we almost always encounter in the future. Example 11.1.6. (1) \( \phi : \mathbb{Z} \rightarrow {\mathbf{Z}}_{n} \) sending \( a \) to \( a{\;\operatorname{mod}\;n} \) is a homomorphism. (2) \( \phi : R \rightarrow S \) sending all elements of \( R \) to \( 0 \in S \) is not a homomorphism (under our definition). (3) \( \phi : \mathbb{Z} \rightarrow \mathbb{Z} \) defined by \( \phi \left( x\right) = {nx} \) is not a homomorphism unless \( n = 1 \) . (4) If \( R \) is a commutative ring, then for any \( r \in R \), there is a natural evaluation homomorphism: \[ {\phi }_{r} : R\left\lbrack x\right\rbrack \rightarrow R \] \[ f\left( x\right) \mapsto f\left( r\right) . \] Note that it is crucial to assume \( R \) to be commutative here! Definition 11.1.7. Let \( R \) be a ring. 
(1) A nonzero element \( a \in R \) is called a zero-divisor if there exists a nonzero element \( b \in R \) such that either \( {ab} = 0 \) or \( {ba} = 0 \) . (2) \( u \in R \) is called a unit in \( R \) if there exists \( v \in R \) such that \[ {uv} = {vu} = 1\text{.} \] The set of units in \( R \) is \( {R}^{ \times } \) . They form a group under multiplication. A commutative ring \( R \) containing no zero-divisor is called an integral domain. Example 11.1.8. (1) \( {\mathbf{Z}}_{n}^{ \times } = \{ a{\;\operatorname{mod}\;n} \mid \gcd \left( {a, n}\right) = 1\} \) . The zero divisors in \( {\mathbf{Z}}_{n} \) are \( \{ a{\;\operatorname{mod}\;n} \neq 0{\;\operatorname{mod}\;n} \mid \gcd \left( {a, n}\right) \neq 1\} \) . (2) If \( R \) is an integral domain, then so is \( R\left\lbrack x\right\rbrack \) . This is because if one has two nonzero polynomials \( f\left( x\right) = {a}_{m}{x}^{m} + \cdots + {a}_{0} \) and \( g\left( x\right) = {b}_{n}{x}^{n} + \cdots + {b}_{0} \) (with \( {a}_{m} \neq 0 \) and \( {b}_{n} \neq 0 \) ), then \( f\left( x\right) g\left( x\right) \) has leading term \( {a}_{m}{b}_{n}{x}^{m + n} \) . But \( {a}_{m}{b}_{n} \neq 0 \), so \( f\left( x\right) g\left( x\right) \neq 0 \) . This shows that \( R\left\lbrack x\right\rbrack \) is an integral domain. Lemma 11.1.9. A finite integral domain \( R \) is a field. Proof. For any nonzero element \( a \in R \), we need to find its inverse. Consider the following homomorphism of additive groups \[ \begin{matrix} {\phi }_{a} : \left( {R, + }\right) \rightarrow \left( {R, + }\right) \\ x \mapsto {ax} \end{matrix} \] Then \( \ker {\phi }_{a} = \{ x \in R \mid {ax} = 0\} = \{ 0\} \) as \( R \) is an integral domain. Thus \( {\phi }_{a} \) is injective, and hence an isomorphism by counting the number of elements. In particular, \( {a}^{-1} \mathrel{\text{:=}} {\phi }_{a}^{-1}\left( 1\right) \) is a multiplicative inverse of \( a \) . Definition 11.1.10. 
For an integral domain \( R \), we define its fraction field or quotient field to be \[ \operatorname{Frac}\left( R\right) \mathrel{\text{:=}} \{ \left( {a, b}\right) \in R \times \left( {R\smallsetminus \{ 0\} }\right) \} /\left( {\left( {a, b}\right) \sim \left( {c, d}\right) \text{ if and only if }{ad} = {bc}}\right) . \] In particular, \( \operatorname{Frac}\left( R\right) \) is a field. Example 11.1.11. (1) \( \operatorname{Frac}\left( \mathbb{Z}\right) = \mathbb{Q} \) . (2) For a field \( k \), \( \operatorname{Frac}\left( {k\left\lbrack x\right\rbrack }\right) = k\left( x\right) \), the field of rational functions in \( x \) (with coefficients in \( k \) ). 11.2. Ideals. To develop the story for rings in a way parallel to that of groups, we now introduce the analogue of normal subgroups in the theory of rings. Definition 11.2.1. A subset \( I \subseteq R \) is called a left ideal if (1) for any \( a, b \in I, a - b \in I \) (so that \( I \) is a subgroup of \( \left( {R, + }\right) \) ); (2) for any \( a \in I \) and \( x \in R \), we have \( {xa} \in I \) . We say that \( I \) is a right ideal if it satisfies the above conditions with (2) replaced by \( {ax} \in I \) . We say that \( I \) is an ideal (or a two-sided ideal) if it is a left ideal and a right ideal at the same time. We say that \( I \) is a proper ideal if \( I \neq R \) . Remark 11.2.2. (1) For commutative rings, there is no difference between left, right, or two-sided ideals. (2) An ideal of a ring is (usually) not a ring, because \( 1 \notin I \) . \( \left( {1 \in I\text{ implies that }I = R\text{.}}\right) \) Definition 11.2.3. Let \( R \) be a ring and \( I \) a two-sided ideal such that \( I \neq R \) . We define the quotient ring \( R/I \mathrel{\text{:=}} \{ x + I \mid x \in R\} \) (quotient as an additive group) with operations: \[ \left( {x + I}\right) + \left( {y + I}\right) = \left( {x + y}\right) + I\text{ and }\left( {x + I}\right) \cdot \left( {y + I}\right) = \left( {xy}\right) + I. 
\] We check that the multiplication is well-defined: if \( {x}^{\prime } = x + a \) and \( {y}^{\prime } = y + b \) with \( a, b \in I \) , then \[ {x}^{\prime }{y}^{\p
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 2.34
Definition 2.34. Given normed spaces \( X, Y \) and an open subset \( W \) of \( X \), a map \( f : W \rightarrow Y \) is said to be of class \( {D}^{1} \) at \( \bar{w} \) (resp. on \( W \) ) if it is Hadamard differentiable around \( \bar{w} \) (resp. on \( W \) ) and if \( {df} : W \times X \rightarrow Y \) is continuous at \( \left( {\bar{w}, v}\right) \) for all \( v \in X \) (resp. on \( W \times X \) ). We say that \( f \) is of class \( {D}^{k} \) with \( k \in \mathbb{N}, k > 1 \), if \( f \) is of class \( {D}^{1} \) and if \( {df} \) is of class \( {D}^{k - 1} \) . We denote by \( {D}^{1}\left( {W, Y}\right) \) the space of maps of class \( {D}^{1} \) from \( W \) to \( Y \) and by \( B{D}^{1}\left( {W, Y}\right) \) the space of maps \( f \in {D}^{1}\left( {W, Y}\right) \) that are bounded and such that \( {f}^{\prime } \) is bounded from \( W \) to \( L\left( {X, Y}\right) \) . Let us note the following two properties. Proposition 2.35. For every \( f \in {D}^{1}\left( {W, Y}\right) \) the map \( {f}^{\prime } : w \mapsto {Df}\left( w\right) \mathrel{\text{:=}} {df}\left( {w, \cdot }\right) \) is locally bounded. Proof. Suppose, to the contrary, that there exist \( w \in W \) and a sequence \( \left( {w}_{n}\right) \rightarrow w \) such that \( \left( {r}_{n}\right) \mathrel{\text{:=}} \left( \begin{Vmatrix}{{Df}\left( {w}_{n}\right) }\end{Vmatrix}\right) \rightarrow + \infty \) . For each \( n \in \mathbb{N} \) one can pick some unit vector \( {u}_{n} \in X \) such that \( \begin{Vmatrix}{{df}\left( {{w}_{n},{u}_{n}}\right) }\end{Vmatrix} > {r}_{n} - 1 \) . Setting (for \( n \in \mathbb{N} \) large) \( {x}_{n} \mathrel{\text{:=}} {r}_{n}^{-1}{u}_{n} \), we see that \( \left( \left( {{w}_{n},{x}_{n}}\right) \right) \rightarrow \left( {w,0}\right) \) but \( \left( \begin{Vmatrix}{{df}\left( {{w}_{n},{x}_{n}}\right) }\end{Vmatrix}\right) \rightarrow 1 \), a contradiction. Corollary 2.36. 
Let \( f : W \rightarrow Y \) be a Hadamard (or Gâteaux) differentiable function. Then \( f \) is of class \( {D}^{1} \) if and only if \( {f}^{\prime } \) is locally bounded and for all \( u \in X \) the map \( x \mapsto {f}^{\prime }\left( x\right) u \) is continuous. In particular, if \( Y = \mathbb{R} \) and if \( f \in {D}^{1}\left( {W,\mathbb{R}}\right) \), the derivative is continuous when \( {X}^{ * } \) is provided with the topology of uniform convergence on compact sets (the bw* topology). Proof. The necessary condition stems from the preceding proposition. The sufficient condition follows from the inequalities \[ \begin{Vmatrix}{{f}^{\prime }\left( w\right) v - {f}^{\prime }\left( x\right) u}\end{Vmatrix} \leq \begin{Vmatrix}{{f}^{\prime }\left( w\right) \left( {v - u}\right) }\end{Vmatrix} + \begin{Vmatrix}{{f}^{\prime }\left( w\right) u - {f}^{\prime }\left( x\right) u}\end{Vmatrix} \leq {m\varepsilon }/\left( {2m}\right) + \varepsilon /2 = \varepsilon , \] valid when, for some \( m > 0 \) and a given \( \varepsilon > 0 \), one can find a neighborhood \( V \) of \( x \) in \( W \) such that \( \begin{Vmatrix}{{f}^{\prime }\left( w\right) }\end{Vmatrix} \leq m \) for \( w \in V \) and \( \begin{Vmatrix}{{f}^{\prime }\left( w\right) u - {f}^{\prime }\left( x\right) u}\end{Vmatrix} \leq \varepsilon /2 \) for \( w \in V \), \( v \in B\left( {u,\varepsilon /{2m}}\right) \) . Proposition 2.37. If \( X, Y, Z \) are normed spaces, if \( U \) and \( V \) are open subsets of \( X \) and \( Y \) respectively, and if \( f \in {D}^{1}\left( {U, Y}\right), g \in {D}^{1}\left( {V, Z}\right) \), then \( h \mathrel{\text{:=}} g \circ f \in {D}^{1}\left( {W, Z}\right) \) for \( W \mathrel{\text{:=}} {f}^{-1}\left( V\right) \) . Proof. This conclusion is an immediate consequence of the formula \( {dh}\left( {w, x}\right) = \) \( {dg}\left( {f\left( w\right) ,{df}\left( {w, x}\right) }\right) \) for all \( \left( {w, x}\right) \in W \times X \) . 
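The chain-rule formula \( {dh}\left( {w, x}\right) = {dg}\left( {f\left( w\right) ,{df}\left( {w, x}\right) }\right) \) can be illustrated numerically in finite dimensions; the sketch below (my own code, with made-up sample maps \( f, g \)) compares both sides by central differences:

```python
# My own numeric sketch of dh(w, x) = dg(f(w), df(w, x)) for h = g ∘ f,
# with made-up sample maps f : R^2 -> R^2 and g : R^2 -> R.
def f(w):
    return (w[0] ** 2 + w[1], w[0] * w[1])

def g(y):
    return y[0] * y[1] + y[1] ** 2

def ddir(F, w, x, t=1e-6):
    """Directional derivative of F at w in direction x (central difference)."""
    wp = tuple(a + t * b for a, b in zip(w, x))
    wm = tuple(a - t * b for a, b in zip(w, x))
    Fp, Fm = F(wp), F(wm)
    if isinstance(Fp, tuple):
        return tuple((p - m) / (2 * t) for p, m in zip(Fp, Fm))
    return (Fp - Fm) / (2 * t)

def h(v):
    return g(f(v))

w, x = (1.0, 2.0), (0.5, -0.5)
lhs = ddir(h, w, x)                      # direct derivative of the composite
rhs = ddir(g, f(w), ddir(f, w, x))       # chain-rule right-hand side
assert abs(lhs - rhs) < 1e-4
```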
Under a differentiability assumption, convex functions, integral functionals, and Nemitskii operators are important examples of maps of class \( {D}^{1} \). Example (Nemitskii operators). Let \( \left( {S,\mathcal{F},\mu }\right) \) be a measure space, let \( X, Y \) be Banach spaces, let \( f : S \times X \rightarrow Y \) be a measurable map of class \( {D}^{1} \) in its second variable and such that \( g : \left( {s, x, v}\right) \mapsto d{f}_{s}\left( {x, v}\right) \) is measurable, \( {f}_{s} \) being the map \( x \mapsto f\left( {s, x}\right) \). If, for some \( p, q \in \lbrack 1, + \infty ) \), the Nemitskii operator \( F : {L}_{p}\left( {S, X}\right) \rightarrow {L}_{q}\left( {S, Y}\right) \) given by \( F\left( u\right) \mathrel{\text{:=}} f\left( {\cdot, u\left( \cdot \right) }\right) \) for \( u \in {L}_{p}\left( {S, X}\right) \) is well defined and Gâteaux differentiable, with derivative given by \( {D}_{r}F\left( u\right) \left( v\right) = {df}\left( {\cdot, u\left( \cdot \right), v\left( \cdot \right) }\right) \), then \( F \) is of class \( {D}^{1} \). This follows from the following result applied to \( g \mathrel{\text{:=}} {df}\left( {\cdot, u\left( \cdot \right), v\left( \cdot \right) }\right) \) (see [37]). Lemma 2.38 (Krasnoselskii’s theorem). Let \( \left( {S,\mathcal{F},\mu }\right) \) be a measure space, let \( W, Z \) be Banach spaces, and let \( g : S \times W \rightarrow Z \) be a measurable map such that for all \( s \in S \smallsetminus N \), where \( N \) has null measure, the map \( g\left( {s, \cdot }\right) \) is continuous. 
If for some \( p, q \in \lbrack 1, + \infty ) \) and all \( u \in {L}_{p}\left( {S, W}\right) \) the map \( g\left( {\cdot, u\left( \cdot \right) }\right) \) belongs to \( {L}_{q}\left( {S, Z}\right) \), then the Nemitskii operator \( G : {L}_{p}\left( {S, W}\right) \rightarrow {L}_{q}\left( {S, Z}\right) \) given by \( G\left( u\right) \mathrel{\text{:=}} g\left( {\cdot, u\left( \cdot \right) }\right) \) for \( u \in {L}_{p}\left( {S, W}\right) \) is continuous. ## Exercises 1. Let \( X, Y \) be normed spaces and let \( W \) be an open subset of \( X \). Prove that \( f : W \rightarrow Y \) is Hadamard differentiable at \( \bar{x} \) if and only if there exists a continuous linear map \( \ell : X \rightarrow Y \) such that the map \( {q}_{t} \) given by \( {q}_{t}\left( v\right) \mathrel{\text{:=}} \left( {1/t}\right) \left( {f\left( {\bar{x} + {tv}}\right) - f\left( \bar{x}\right) }\right) \) converges to \( \ell \) as \( t \rightarrow {0}_{ + } \), uniformly on compact subsets of \( X \). Deduce another proof of Proposition 2.50 below from this characterization. 2. Prove that if \( f : W \rightarrow Y \) is radially differentiable at \( \bar{x} \) in the direction \( u \) and if \( f \) is directionally steady at \( \bar{x} \) in the direction \( u \) in the sense that \( \left( {1/t}\right) \left( {f\left( {\bar{x} + {tv}}\right) - f\left( {\bar{x} + {tu}}\right) }\right) \rightarrow 0 \) as \( \left( {t, v}\right) \rightarrow \left( {{0}_{ + }, u}\right) \), then \( f \) is directionally differentiable at \( \bar{x} \) in the direction \( u \). Give an example showing that this criterion is more general than the Lipschitz condition of Proposition 2.25. 3. Let \( f : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) be given by \( f\left( {r, s}\right) \mathrel{\text{:=}} {r}^{2}s{\left( {r}^{2} + {s}^{2}\right) }^{-1} \) for \( \left( {r, s}\right) \in {\mathbb{R}}^{2} \smallsetminus \{ \left( {0,0}\right) \} \), \( f\left( {0,0}\right) = 0 \). 
Show that \( f \) has a radial derivative (which is in fact a bilateral derivative) but is not Gâteaux differentiable at \( \left( {0,0}\right) \) . 4. Let \( E \) be a Hilbert space and let \( X \mathrel{\text{:=}} {D}^{1}\left( {T, E}\right) \), where \( T \mathrel{\text{:=}} \left\lbrack {0,1}\right\rbrack \) . Endow \( X \) with the norm \( \parallel x\parallel \mathrel{\text{:=}} \mathop{\sup }\limits_{{t \in T}}\parallel x\left( t\right) \parallel + \mathop{\sup }\limits_{{t \in T}}\begin{Vmatrix}{{x}^{\prime }\left( t\right) }\end{Vmatrix} \) . Define the length of a curve \( x : \left\lbrack {0,1}\right\rbrack \rightarrow E \) by \[ \ell \left( x\right) \mathrel{\text{:=}} {\int }_{0}^{1}\begin{Vmatrix}{{x}^{\prime }\left( t\right) }\end{Vmatrix}\mathrm{d}t \] (a) Show that \( \ell \) is a continuous sublinear functional on \( X \) with Lipschitz rate 1 . (b) Let \( W \) be the set of \( x \in X \) such that \( {x}^{\prime }\left( t\right) \neq 0 \) for all \( t \in \left\lbrack {0,1}\right\rbrack \) . Show that \( W \) is an open subset of \( X \) and that \( \ell \) is Gâteaux differentiable on \( W \) . (c) Show that \( \ell \) is of class \( {D}^{1} \) on \( W \) [Hint: Use convergence results for integrals.] In order to prove that \( \ell \) is of class \( {C}^{1} \) one may use the following questions. (d) Let \( {E}_{0} \mathrel{\text{:=}} E \smallsetminus \{ 0\} \) and let \( D : {E}_{0} \rightarrow E \) be given by \( D\left( v\right) \mathrel{\text{:=}} \parallel v{\parallel }^{-1}v \) . Given \( u, v \in {E}_{0} \) show that \( \parallel D\left( u\right) - D\left( v\right) \parallel \leq 2\parallel u{\parallel }^{-1}\parallel u - v\parallel \) . (e) Deduce from the preceding inequality that \( {\ell }^{\prime } \) is continuous. 5. 
Prove the assertion following Corollary 2.31, defining the geodesic distance \( {d}_{W}\left( {x,{x}^{\prime }}\right) \) between two points \( x,{x}^{\prime } \) of \( W \) as the infimum of the lengths of curves joining \( x \) to \( {x}^{\prime } \) . 6. Prove that if \( f : W \rightarrow Y \) has a directional derivative at some point \( \bar{x} \) of the open subset \( W \) of \( X \), then its derivative \( {Df}\left( \bar{x}\right) : u \mapsto {df}\left( {\bar{x}, u}\right) \) is continuous if it is linear. 7. Prove Proposition 2.29 by deducing it from the classical mean value theorem (Lemma 2.6) for real-valued functions, using the Hahn-Banach theorem. [Hint: Take \( {y}^{ * } \) with norm one such that \(
1099_(GTM255)Symmetry, Representations, and Invariants
Definition 2.5.10
Definition 2.5.10. The Killing form of \( \mathfrak{g} \) is the bilinear form \( B\left( {X, Y}\right) = \operatorname{tr}\left( {\operatorname{ad}X\operatorname{ad}Y}\right) \) for \( X, Y \in \mathfrak{g} \) . Recall that \( \mathfrak{g} \) is semisimple if it is the direct sum of simple Lie algebras. We now obtain Cartan's criterion for semisimplicity. Theorem 2.5.11. The Lie algebra \( \mathfrak{g} \) is semisimple if and only if its Killing form is nondegenerate. Proof. Assume that \( \mathfrak{g} \) is semisimple. Since the adjoint representation of a simple Lie algebra is faithful, the same is true for a semisimple Lie algebra. Hence a semisimple Lie algebra \( \mathfrak{g} \) is isomorphic to a Lie subalgebra of \( \operatorname{End}\left( \mathfrak{g}\right) \) . Let \[ \mathfrak{g} = {\mathfrak{g}}_{1} \oplus \cdots \oplus {\mathfrak{g}}_{r} \] (Lie algebra direct sum), where each \( {\mathfrak{g}}_{i} \) is a simple Lie algebra. If \( \mathfrak{m} \) is an abelian ideal in \( \mathfrak{g} \), then \( \mathfrak{m} \cap {\mathfrak{g}}_{i} \) is an abelian ideal in \( {\mathfrak{g}}_{i} \), for each \( i \), and hence is zero. Thus \( \mathfrak{m} = 0 \) . Hence \( B \) is nondegenerate by Corollary 2.5.8. Conversely, suppose the Killing form is nondegenerate. Then the adjoint representation is faithful. To show that \( \mathfrak{g} \) is semisimple, it suffices by Corollary 2.5.8 to show that \( \mathfrak{g} \) has no nonzero abelian ideals. Suppose \( \mathfrak{a} \) is an ideal in \( \mathfrak{g}, X \in \mathfrak{a} \), and \( Y \in \mathfrak{g} \) . Then ad \( X \) ad \( Y \) maps \( \mathfrak{g} \) into \( \mathfrak{a} \) and leaves \( \mathfrak{a} \) invariant. Hence \[ B\left( {X, Y}\right) = \operatorname{tr}\left( {{\left. \operatorname{ad}X\right| }_{\mathfrak{a}}\operatorname{ad}{\left. Y\right| }_{\mathfrak{a}}}\right) . \] (2.43) If \( \mathfrak{a} \) is an abelian ideal, then \( {\left. \operatorname{ad}X\right| }_{\mathfrak{a}} = 0 \) . 
Since \( B \) is nondegenerate,(2.43) implies that \( X = 0 \) . Thus \( \mathfrak{a} = 0 \) . Corollary 2.5.12. Suppose \( \mathfrak{g} \) is a semisimple Lie algebra and \( D \in \operatorname{Der}\left( \mathfrak{g}\right) \) . Then there exists \( X \in \mathfrak{g} \) such that \( D = \operatorname{ad}X \) . Proof. The derivation property \( D\left( \left\lbrack {Y, Z}\right\rbrack \right) = \left\lbrack {D\left( Y\right), Z}\right\rbrack + \left\lbrack {Y, D\left( Z\right) }\right\rbrack \) can be expressed as the commutation relation \[ \left\lbrack {D,\operatorname{ad}Y}\right\rbrack = \operatorname{ad}D\left( Y\right) \;\text{ for all }Y \in \mathfrak{g}. \] (2.44) Consider the linear functional \( Y \mapsto \operatorname{tr}\left( {D\operatorname{ad}Y}\right) \) on \( \mathfrak{g} \) . Since the Killing form is nondegenerate, there exists \( X \in \mathfrak{g} \) such that \( \operatorname{tr}\left( {D\operatorname{ad}Y}\right) = B\left( {X, Y}\right) \) for all \( Y \in \mathfrak{g} \) . Take \( Y, Z \in \mathfrak{g} \) and use the invariance of \( B \) to obtain \[ B\left( {\operatorname{ad}X\left( Y\right), Z}\right) = B\left( {X,\left\lbrack {Y, Z}\right\rbrack }\right) = \operatorname{tr}\left( {D\operatorname{ad}\left\lbrack {Y, Z}\right\rbrack }\right) = \operatorname{tr}\left( {D\left\lbrack {\operatorname{ad}Y,\operatorname{ad}Z}\right\rbrack }\right) \] \[ = \operatorname{tr}\left( {D\operatorname{ad}Y\operatorname{ad}Z}\right) - \operatorname{tr}\left( {D\operatorname{ad}Z\operatorname{ad}Y}\right) = \operatorname{tr}\left( {\left\lbrack {D,\operatorname{ad}Y}\right\rbrack \operatorname{ad}Z}\right) . \] Hence (2.44) and the nondegeneracy of \( B \) give ad \( X = D \) . 
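Cartan's criterion (Theorem 2.5.11) can be checked concretely on \( \mathfrak{sl}_2 \). The sketch below (an illustration, not from the book) builds the adjoint matrices in the basis \( \left( {h, e, f}\right) \) with \( \left\lbrack {h, e}\right\rbrack = 2e \), \( \left\lbrack {h, f}\right\rbrack = -2f \), \( \left\lbrack {e, f}\right\rbrack = h \), computes the Killing form \( B\left( {X, Y}\right) = \operatorname{tr}\left( {\operatorname{ad}X\operatorname{ad}Y}\right) \) as a Gram matrix, and verifies that it is nondegenerate, as the criterion predicts for a semisimple algebra.

```python
import numpy as np

# ad X is the matrix of [X, .] in the basis (h, e, f);
# columns are the images of h, e, f.
ad_h = np.array([[0, 0, 0],
                 [0, 2, 0],
                 [0, 0, -2]], dtype=float)
ad_e = np.array([[0, 0, 1],
                 [-2, 0, 0],
                 [0, 0, 0]], dtype=float)
ad_f = np.array([[0, -1, 0],
                 [0, 0, 0],
                 [2, 0, 0]], dtype=float)
ads = [ad_h, ad_e, ad_f]

# Sanity check that ad is a representation: [ad h, ad e] = ad [h,e] = 2 ad e.
assert np.allclose(ad_h @ ad_e - ad_e @ ad_h, 2 * ad_e)

# Killing form as a Gram matrix: B[i, j] = tr(ad X_i ad X_j).
B = np.array([[np.trace(a @ b) for b in ads] for a in ads])

# Classical values B(h,h) = 8, B(e,f) = B(f,e) = 4, all other entries 0.
assert B[0, 0] == 8 and B[1, 2] == 4 and B[2, 1] == 4

# Nondegeneracy, in line with Cartan's criterion (det B = -128 here).
assert abs(np.linalg.det(B)) > 1e-9
```

The same Gram-matrix computation applies to any Lie algebra given by structure constants; a singular \( B \) would signal a nonzero radical.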
For the next result we need the following formula, valid for any elements \( Y, Z \) in a Lie algebra \( \mathfrak{g} \), any \( D \in \operatorname{Der}\left( \mathfrak{g}\right) \), and any scalars \( \lambda ,\mu \) : \[ {\left( D - \left( \lambda + \mu \right) \right) }^{k}\left\lbrack {Y, Z}\right\rbrack = \mathop{\sum }\limits_{r}\left( \begin{array}{l} k \\ r \end{array}\right) \left\lbrack {{\left( D - \lambda \right) }^{r}Y,{\left( D - \mu \right) }^{k - r}Z}\right\rbrack . \] (2.45) (The proof is by induction on \( k \) using the derivation property and the inclusion-exclusion identity for binomial coefficients.) Corollary 2.5.13. Let \( \mathfrak{g} \) be a semisimple Lie algebra. If \( X \in \mathfrak{g} \) and \( \operatorname{ad}X = S + N \) is the additive Jordan decomposition in \( \operatorname{End}\left( \mathfrak{g}\right) \) (with \( S \) semisimple, \( N \) nilpotent, and \( \left\lbrack {S, N}\right\rbrack = 0) \), then there exist \( {X}_{s},{X}_{n} \in \mathfrak{g} \) such that \( \operatorname{ad}{X}_{s} = S \) and \( \operatorname{ad}{X}_{n} = N \) . Proof. Let \( \lambda \in \mathbb{C} \) and set \[ {\mathfrak{g}}_{\lambda }\left( X\right) = \mathop{\bigcup }\limits_{{k \geq 1}}\operatorname{Ker}{\left( \operatorname{ad}X - \lambda \right) }^{k} \] (the generalized \( \lambda \) eigenspace of \( \operatorname{ad}X \) ). The Jordan decomposition of \( \operatorname{ad}X \) then gives a direct-sum decomposition \[ \mathfrak{g} = {\bigoplus }_{\lambda }{\mathfrak{g}}_{\lambda }\left( X\right) \] and \( S \) acts by \( \lambda \) on \( {\mathfrak{g}}_{\lambda }\left( X\right) \) . Taking \( D = \operatorname{ad}X, Y \in {\mathfrak{g}}_{\lambda }\left( X\right), Z \in {\mathfrak{g}}_{\mu }\left( X\right) \), and \( k \) sufficiently large in (2.45), we see that \[ \left\lbrack {{\mathfrak{g}}_{\lambda }\left( X\right) ,{\mathfrak{g}}_{\mu }\left( X\right) }\right\rbrack \subset {\mathfrak{g}}_{\lambda + \mu }\left( X\right) . 
\] (2.46) Hence \( S \) is a derivation of \( \mathfrak{g} \) . By Corollary 2.5.12 there exists \( {X}_{s} \in \mathfrak{g} \) such that ad \( {X}_{s} = \) S. Set \( {X}_{n} = X - {X}_{s} \) . ## 2.5.2 Root Space Decomposition In this section we shall show that every semisimple Lie algebra has a root space decomposition with the properties that we established in Section 2.4 for the Lie algebras of the classical groups. We begin with the following Lie algebra generalization of a familiar property of nilpotent linear transformations: Theorem 2.5.14 (Engel). Let \( V \) be a nonzero finite-dimensional vector space and let \( \mathfrak{g} \subset \operatorname{End}\left( V\right) \) be a Lie algebra. Assume that every \( X \in \mathfrak{g} \) is a nilpotent linear transformation. Then there exists a nonzero vector \( {v}_{0} \in V \) such that \( X{v}_{0} = 0 \) for all \( X \in \mathfrak{g} \) . Proof. For \( X \in \operatorname{End}\left( V\right) \) write \( {L}_{X} \) and \( {R}_{X} \) for the linear transformations of \( \operatorname{End}\left( V\right) \) given by left and right multiplication by \( X \), respectively. Then ad \( X = {L}_{X} - {R}_{X} \) and \( {L}_{X} \) commutes with \( {R}_{X} \) . Hence \[ {\left( \operatorname{ad}X\right) }^{k} = \mathop{\sum }\limits_{j}\left( \begin{array}{l} k \\ j \end{array}\right) {\left( -1\right) }^{k - j}{\left( {L}_{X}\right) }^{j}{\left( {R}_{X}\right) }^{k - j} \] by the binomial expansion. If \( X \) is nilpotent on \( V \) then \( {X}^{n} = 0 \), where \( n = \dim V \) . Thus \( {\left( {L}_{X}\right) }^{j}{\left( {R}_{X}\right) }^{{2n} - j} = 0 \) if \( 0 \leq j \leq {2n} \) . Hence \( {\left( \operatorname{ad}X\right) }^{2n} = 0 \), so \( \operatorname{ad}X \) is nilpotent on \( \operatorname{End}\left( V\right) \) . We prove the theorem by induction on \( \dim \mathfrak{g} \) (when \( \dim \mathfrak{g} = 1 \) the theorem is clearly true). 
Take a proper subalgebra \( \mathfrak{h} \subset \mathfrak{g} \) of maximal dimension. Then \( \mathfrak{h} \) acts on \( \mathfrak{g}/\mathfrak{h} \) by the adjoint representation. This action is by nilpotent linear transformations, so the induction hypothesis implies that there exists \( Y \notin \mathfrak{h} \) such that \[ \left\lbrack {X, Y}\right\rbrack \equiv 0{\;\operatorname{mod}\;\mathfrak{h}}\;\text{ for all }X \in \mathfrak{h}. \] Thus \( \mathbb{C}Y + \mathfrak{h} \) is a Lie subalgebra of \( \mathfrak{g} \), since \( \left\lbrack {Y,\mathfrak{h}}\right\rbrack \subset \mathfrak{h} \) . But \( \mathfrak{h} \) was chosen maximal, so we must have \( \mathfrak{g} = \mathbb{C}Y + \mathfrak{h} \) . Set \[ W = \{ v \in V : {Xv} = 0\text{ for all }X \in \mathfrak{h}\} . \] By the induction hypothesis we know that \( W \neq 0 \) . If \( v \in W \) then \[ {XYv} = {YXv} + \left\lbrack {X, Y}\right\rbrack v = 0 \] for all \( X \in \mathfrak{h} \), since \( \left\lbrack {X, Y}\right\rbrack \in \mathfrak{h} \) . Thus \( W \) is invariant under \( Y \), so there exists a nonzero vector \( {v}_{0} \in W \) such that \( Y{v}_{0} = 0 \) . It follows that \( \mathfrak{g}{v}_{0} = 0 \) . Corollary 2.5.15. There exists a basis for \( V \) in which the elements of \( \mathfrak{g} \) are represented by strictly upper-triangular matrices. Proof. This follows by repeated application of Theorem 2.5.14, replacing \( V \) by \( V/\mathbb{C}{v}_{0} \) at each step. Corollary 2.5.16. Suppose \( \mathfrak{g} \) is a semisimple Lie algebra. Then there exists a nonzero element \( X \in \mathfrak{g} \) such that \( \operatorname{ad}X \) is semisimple. Proof. We argue by contradiction. If \( \mathfrak{g} \) contained no nonzero elements \( X \) with ad \( X \) semisimple, then Corollary 2.5.13 would imply that \( \operatorname{ad}X \) is nilpotent for all \( X \in \mathfrak{g} \) . 
Hence Corollary 2.5.15 would furnish a basis for \( \mathfrak{g} \) such that \( \operatorname{ad}X \) is strictly upper triangular. But then ad \( X \) ad \( Y \) would also be strictly upper triangular for all \( X, Y \in \mathfrak{g} \) , and hence the Killing form would be zero, contradicting Theorem 2.5.11. For the rest of this section we let \( \mathfrak{g} \) be a semisimple Lie algebra. We call a subalgebra \( \mat
18_Algebra Chapter 0
Definition 3.12
Definition 3.12. The Euler characteristic of \( {V}_{ \bullet } \) is the integer \[ \chi \left( {V}_{ \bullet }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\dim \left( {V}_{i}\right) \] The original motivation for the introduction of this number is topological: with suitable positions, this Euler characteristic equals the Euler characteristic obtained by triangulating a manifold and then computing the number of vertices of the triangulation, minus the number of edges, plus the number of faces, etc. The following simple result is then a straightforward (and very useful) generalization of Proposition 3.11. Proposition 3.13. With notation as above, \[ \chi \left( {V}_{ \bullet }\right) = \mathop{\sum }\limits_{{i = 0}}^{N}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }\right) }\right) . \] In particular, if \( {V}_{ \bullet } \) is exact, then \( \chi \left( {V}_{ \bullet }\right) = 0 \) . Proof. There is nothing to show for \( N = 0 \), and the result follows directly from Proposition 3.11 if \( N = 1 \) (Exercise 3.15). Arguing by induction, given a complex \[ {V}_{ \bullet } : \;0 \rightarrow {V}_{N}\overset{{\alpha }_{N}}{ \rightarrow }{V}_{N - 1}\overset{{\alpha }_{N - 1}}{ \rightarrow }\cdots \overset{{\alpha }_{2}}{ \rightarrow }{V}_{1}\overset{{\alpha }_{1}}{ \rightarrow }{V}_{0} \rightarrow 0, \] we may assume that the result is known for 'shorter' complexes. Consider then the truncation \[ {V}_{ \bullet }^{\prime } : \;0 \rightarrow {V}_{N - 1}\overset{{\alpha }_{N - 1}}{ \rightarrow }\cdots \overset{{\alpha }_{2}}{ \rightarrow }{V}_{1}\overset{{\alpha }_{1}}{ \rightarrow }{V}_{0} \rightarrow 0. 
\] Then \[ \chi \left( {V}_{ \bullet }\right) = \chi \left( {V}_{ \bullet }^{\prime }\right) + {\left( -1\right) }^{N}\dim \left( {V}_{N}\right) \] and \[ {H}_{i}\left( {V}_{ \bullet }\right) = {H}_{i}\left( {V}_{ \bullet }^{\prime }\right) \;\text{ for }0 \leq i \leq N - 2, \] while \[ {H}_{N - 1}\left( {V}_{ \bullet }^{\prime }\right) = \ker \left( {\alpha }_{N - 1}\right) ,\;{H}_{N - 1}\left( {V}_{ \bullet }\right) = \frac{\ker \left( {\alpha }_{N - 1}\right) }{\operatorname{im}\left( {\alpha }_{N}\right) },\;{H}_{N}\left( {V}_{ \bullet }\right) = \ker \left( {\alpha }_{N}\right) . \] By Proposition 3.11 (cf. Claim 3.10), \[ \dim \left( {V}_{N}\right) = \dim \left( {\operatorname{im}\left( {\alpha }_{N}\right) }\right) + \dim \left( {\ker \left( {\alpha }_{N}\right) }\right) \] and \[ \dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }\right) }\right) = \dim \left( {\ker \left( {\alpha }_{N - 1}\right) }\right) - \dim \left( {\operatorname{im}\left( {\alpha }_{N}\right) }\right) \] therefore \[ \dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }^{\prime }\right) }\right) - \dim \left( {V}_{N}\right) = \dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }\right) }\right) - \dim \left( {{H}_{N}\left( {V}_{ \bullet }\right) }\right) . 
\] Putting all of this together with the induction hypothesis, \[ \chi \left( {V}_{ \bullet }^{\prime }\right) = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }^{\prime }\right) }\right) \] gives \[ \chi \left( {V}_{ \bullet }\right) = \chi \left( {V}_{ \bullet }^{\prime }\right) + {\left( -1\right) }^{N}\dim \left( {V}_{N}\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }^{\prime }\right) }\right) + {\left( -1\right) }^{N}\dim \left( {V}_{N}\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{N - 2}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }^{\prime }\right) }\right) + {\left( -1\right) }^{N - 1}\left( {\dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }^{\prime }\right) }\right) - \dim \left( {V}_{N}\right) }\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{{N - 2}}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }\right) }\right) + {\left( -1\right) }^{N - 1}\left( {\dim \left( {{H}_{N - 1}\left( {V}_{ \bullet }\right) }\right) - \dim \left( {{H}_{N}\left( {V}_{ \bullet }\right) }\right) }\right) \] \[ = \mathop{\sum }\limits_{{i = 0}}^{N}{\left( -1\right) }^{i}\dim \left( {{H}_{i}\left( {V}_{ \bullet }\right) }\right) \] as needed. In terms of the topological motivation recalled above, Proposition 3.13 tells us that the Euler characteristic of a manifold may be computed as the alternating sum of the ranks of its homology, that is, of its Betti numbers. Having come this far, I cannot refrain from mentioning the next, equally simpleminded, generalization. 
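Proposition 3.13 is easy to verify mechanically on a small complex. The Python sketch below (a hypothetical example, not from the text) builds a two-step complex with nontrivial homology and checks that the alternating sum of the \( \dim {V}_{i} \) equals the alternating sum of the \( \dim {H}_{i} \).

```python
import numpy as np

# A short complex 0 -> V2 --a2--> V1 --a1--> V0 -> 0 over k = R
# (a hypothetical example), with a1 @ a2 = 0.
a2 = np.array([[1.0], [-1.0]])   # a2 : V2 = k^1 -> V1 = k^2
a1 = np.zeros((1, 2))            # a1 : V1 = k^2 -> V0 = k^1 (zero map)
assert np.allclose(a1 @ a2, 0)   # the complex condition

dims = [1, 2, 1]                 # dim V0, dim V1, dim V2
r1 = np.linalg.matrix_rank(a1)
r2 = np.linalg.matrix_rank(a2)

# H0 = V0 / im a1, H1 = ker a1 / im a2, H2 = ker a2,
# so dim H_i follows from rank-nullity (Proposition 3.11).
h0 = dims[0] - r1
h1 = (dims[1] - r1) - r2
h2 = dims[2] - r2

chi_dims = dims[0] - dims[1] + dims[2]   # alternating sum of dim V_i
chi_hom = h0 - h1 + h2                   # alternating sum of dim H_i
assert (h0, h1, h2) == (1, 1, 0)         # homology is nontrivial here
assert chi_dims == chi_hom               # Proposition 3.13
```

Replacing `a1` by a surjective map makes the complex exact and both sums drop to zero, matching the "in particular" clause of the proposition.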
The reader has surely noticed that the only tool used in the proof of Proposition 3.13 was the 'additivity' property of dimension, established in Proposition 3.11: if \[ 0 \rightarrow U \rightarrow V \rightarrow W \rightarrow 0 \] is exact, then \[ \dim \left( V\right) = \dim \left( U\right) + \dim \left( W\right) \] Proposition 3.13 is a formal consequence of this one property of \( \dim \) . With this in mind, we can reinterpret what we have just done in the following curious way. Consider the category \( k \) -Vect \( {}^{f} \) of finite-dimensional \( k \) -vector spaces. Each object \( V \) of \( k - {\operatorname{Vect}}^{f} \) determines an isomorphism class \( \left\lbrack V\right\rbrack \) . Let \( F\left( {k - {\operatorname{Vect}}^{f}}\right) \) be the free abelian group on the set of these isomorphism classes; further, let \( E \) be the subgroup generated by the elements \[ \left\lbrack V\right\rbrack - \left\lbrack U\right\rbrack - \left\lbrack W\right\rbrack \] for all short exact sequences \[ 0 \rightarrow U \rightarrow V \rightarrow W \rightarrow 0 \] in \( k \) -Vect \( {}^{f} \) . The quotient group \[ K\left( {k - {\operatorname{Vect}}^{f}}\right) \mathrel{\text{:=}} \frac{F\left( {k - {\operatorname{Vect}}^{f}}\right) }{E} \] is called the Grothendieck group of the category \( k \) -Vect \( {}^{f} \) . The element determined by \( V \) in the Grothendieck group is still denoted \( \left\lbrack V\right\rbrack \) . More generally, a Grothendieck group may be defined for any category admitting a notion of exact sequence. Every complex \( {V}_{ \bullet } \) determines an element in \( K\left( {k - {\operatorname{Vect}}^{f}}\right) \), namely \[ {\chi }_{K}\left( {V}_{ \bullet }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\left\lbrack {V}_{i}\right\rbrack \in K\left( {k - {\operatorname{Vect}}^{f}}\right) . \] Claim 3.14. 
With notation as above, we have the following: - \( {\chi }_{K} \) ’is an Euler characteristic’, in the sense that it satisfies the formula given in Proposition 3.13: \[ {\chi }_{K}\left( {V}_{ \bullet }\right) = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\left\lbrack {{H}_{i}\left( {V}_{ \bullet }\right) }\right\rbrack \] - \( {\chi }_{K} \) is a ’universal Euler characteristic’, in the following sense. Let \( G \) be an abelian group, and let \( \delta \) be a function associating an element of \( G \) to each finite-dimensional vector space, such that \( \delta \left( V\right) = \delta \left( {V}^{\prime }\right) \) if \( V \cong {V}^{\prime } \) and \( \delta \left( {V/U}\right) = \delta \left( V\right) - \delta \left( U\right) \) . For \( {V}_{ \bullet } \) a complex, define \[ {\chi }_{G}\left( {V}_{ \bullet }\right) = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\delta \left( {V}_{i}\right) \] Then \( \delta \) induces a (unique) group homomorphism \[ K\left( {k{\text{-Vect }}^{f}}\right) \rightarrow G \] mapping \( {\chi }_{K}\left( {V}_{ \bullet }\right) \) to \( {\chi }_{G}\left( {V}_{ \bullet }\right) \) . - In particular, \( \delta = \dim \) induces a group homomorphism \[ K\left( {k{\text{-Vect }}^{f}}\right) \rightarrow \mathbb{Z} \] such that \( {\chi }_{K}\left( {V}_{ \bullet }\right) \mapsto \chi \left( {V}_{ \bullet }\right) \) . - This is in fact an isomorphism. The best way to convince the reader that this impressive claim is completely trivial is to leave its proof to the reader (Exercise 3.16). The first point is proved by adapting the proof of Proposition 3.13; the second point is a harmless mixture of universal properties; the third point follows from the second; and the last point follows from the fact that \( \dim \left( k\right) = 1 \) and the IBN property. The last point is, in fact, rather anticlimactic: if the impressively abstract Grothendieck group turns out to just be a copy of the integers, why bother defining it? 
The answer is, of course, that this only stresses how special the category \( k{\text{-Vect}}^{f} \) is. The definition of the Grothendieck group can be given in any context in which complexes and a notion of exactness are available (for example, in the category of finitely generated modules over any ring). The formal arguments proving Claim 3.14 will go through in any such context and provide us with a useful notion of 'universal Euler characteristic'. We will come back to complexes and homology in Chapter IX. ## Exercises 3.1. Use Gaussian elimination to find all integer solutions of the system of equations \[ \begin{cases} {7x} - {36y} + {12z} & = 1 \\ - {8x} + {42y} - {14z} & = 2 \end{cases} \] 3.2. \( \vartriangleright \) Provide details for the proof of Lemma 3.2. [3.2] 3.3. Redo Exercise II 8.8. 3.4. Formalize the discussion of 'universal identities': by what cocktail of universal properties is it true that if an identity holds in \( \mathbb{Z}\left\lbrack {{x}_{1},\ldots ,{x}_{r}}\right\rbrack \), then it holds over every commutative ring \( R \), for every choice of \( {x}_{i} \in R \) ? (Is the commutativity of \( R \) necessary?) 3.5. \( \vartriangleright \) Let \( A \) be an \( n \times n \) square invertible matrix with entries in a field, and consider the \( n \times \left( {2n}\right) \) matrix \( B = \left( {A \mid {I}_{n}}\right) \) obtained by placing the identity matrix to the side of \( A \) . Perform elementary row operations on
1068_(GTM227)Combinatorial Commutative Algebra
Definition 9.23
Definition 9.23 A Laurent monomial module \( M \) in \( T = \mathbb{k}\left\lbrack {{x}_{1}^{\pm 1},\ldots ,{x}_{n}^{\pm 1}}\right\rbrack \) is called generic if all its minimal first syzygies \( {\mathbf{x}}^{\mathbf{u}}{\mathbf{e}}_{i} - {\mathbf{x}}^{\mathbf{v}}{\mathbf{e}}_{j} \) have full support. This condition means that every variable \( {x}_{\ell } \) appears either in \( {\mathbf{x}}^{\mathbf{u}} \) or in \( {\mathbf{x}}^{\mathbf{v}} \) . This definition is the essence behind genericity for monomial ideals, although for ideals there are "boundary effects" coming from the fact that \( {\mathbb{N}}^{n} \) is a special subset of \( {\mathbb{Z}}^{n} \) . To be precise, the genericity condition on the minimal first syzygies \( {\mathbf{x}}^{\mathbf{u}}{\mathbf{e}}_{i} - {\mathbf{x}}^{\mathbf{v}}{\mathbf{e}}_{j} \) of an ideal requires only that \( \operatorname{supp}\left( {\mathbf{x}}^{\mathbf{u} + \mathbf{v}}\right) = \operatorname{supp}\left( {\operatorname{lcm}\left( {{m}_{i},{m}_{j}}\right) }\right) \), as opposed to \( \operatorname{supp}\left( {\mathbf{x}}^{\mathbf{u} + \mathbf{v}}\right) = \{ 1,\ldots, n\} \) for Laurent monomial modules. This definition allows us to treat the boundary exponent 0 differently than the strictly positive exponents coming from the interior of \( {\mathbb{N}}^{n} \) . Just like the hull complex, the Scarf complex defined earlier for monomial ideals makes sense for Laurent monomial modules, too, as does the theorem on free resolutions of generic objects. Theorem 9.24 For generic Laurent monomial modules \( M \), the following coincide: 1. The Scarf complex of \( M \) 2. The hull resolution of \( M \) 3. The minimal free resolution of \( M \) Proof. The proof of Theorem 6.13 carries over from monomial ideals to Laurent monomial modules. A lattice \( L \) in \( {\mathbb{Z}}^{n} \) is called generic if its associated lattice module \( {M}_{L} \) is generic. 
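The support condition of Definition 9.23, in its form for lattice ideals (every variable appears in \( {\mathbf{x}}^{\mathbf{u}} \) or in \( {\mathbf{x}}^{\mathbf{v}} \)), is straightforward to test on exponent vectors. Below is a small Python sketch (an illustration, not from the book) applied to the three generators of the ideal \( {I}_{L} \) of Example 9.21, which is discussed later in this section.

```python
# Full-support test for a binomial x^u - x^v: it passes when every
# variable occurs in x^u or in x^v, i.e. supp(u + v) = {1, ..., n}.
def full_support(u, v):
    return all(ui + vi > 0 for ui, vi in zip(u, v))

# Exponent vectors of the generators of I_L from Example 9.21:
# x1*x2^2 - x3^2, x1*x3 - x2^3, x2*x3 - x1^2.
gens = [((1, 2, 0), (0, 0, 2)),
        ((1, 0, 1), (0, 3, 0)),
        ((0, 1, 1), (2, 0, 0))]

# Every generator has full support, consistent with genericity.
assert all(full_support(u, v) for u, v in gens)

# A non-example in three variables: x1 - x2 omits x3 from both monomials.
assert not full_support((1, 0, 0), (0, 1, 0))
```

For an ideal (rather than a Laurent monomial module) the condition would be weakened to \( \operatorname{supp}\left( {\mathbf{x}}^{\mathbf{u} + \mathbf{v}}\right) = \operatorname{supp}\left( {\operatorname{lcm}\left( {{m}_{i},{m}_{j}}\right) }\right) \) on syzygies, as noted above; the full-support test on generators is the practical screen used here.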
Equivalently, the lattice \( L \) is generic if the lattice ideal \( {I}_{L} \) is generated by binomials \( {\mathbf{x}}^{\mathbf{u}} - {\mathbf{x}}^{\mathbf{v}} \) with full support, so every variable \( {x}_{\ell } \) appears in every minimal generator of \( {I}_{L} \) . Applying Corollary 9.18 to the minimal free resolution in Theorem 9.24, we get the following result. Corollary 9.25 The minimal free resolution of a generic lattice ideal \( {I}_{L} \) is its Scarf complex, which is the image under \( \pi \) of the Scarf complex of \( {M}_{L} \) . The lattice \( L \) in Example 9.21 is generic because all three generators of \[ {I}_{L} = \left\langle {{x}_{1}{x}_{2}^{2} - {x}_{3}^{2},{x}_{1}{x}_{3} - {x}_{2}^{3},{x}_{2}{x}_{3} - {x}_{1}^{2}}\right\rangle \] have full support. The Scarf complex of \( {M}_{L} \) coincides with the hull complex depicted in Fig. 9.1. The Scarf complex of \( {I}_{L} \) is a minimal free resolution. Geometrically, it is a subdivision of the torus with two triangles. Example 9.26 Things become much more complicated in four dimensions. The smallest codimension 1 generic lattice module in four variables is determined by the lattice \( L = \ker \left( \left\lbrack \begin{array}{llll} {20} & {24} & {25} & {31} \end{array}\right\rbrack \right) \subset {\mathbb{Z}}^{4} \) . The lattice ideal \( {I}_{L} \) is the ideal of the monomial curve \( t \mapsto \left( {{t}^{20},{t}^{24},{t}^{25},{t}^{31}}\right) \) in affine 4-space. The group algebra is \( S\left\lbrack L\right\rbrack = \mathbb{k}\left\lbrack {a, b, c, d}\right\rbrack \left\lbrack {{\mathbf{z}}^{\mathbf{v}} \mid \mathbf{v} \in L}\right\rbrack \), and \[ {M}_{L} = S\left\lbrack L\right\rbrack /\left\langle {{a}^{4} - {bcd}{\mathbf{z}}^{ * },{a}^{3}{c}^{2} - {b}^{2}{d}^{2}{\mathbf{z}}^{ * },{a}^{2}{b}^{3} - {c}^{2}{d}^{2}{\mathbf{z}}^{ * }, a{b}^{2}c - {d}^{3}{\mathbf{z}}^{ * },}\right. 
\] \[ {b}^{4} - {a}^{2}{cd}{\mathbf{z}}^{ * },{b}^{3}{c}^{2} - {a}^{3}{d}^{2}{\mathbf{z}}^{ * },{c}^{3} - {abd}{\mathbf{z}}^{ * }\rangle \] where, for instance, the * in \( {a}^{4} - {bcd}{\mathbf{z}}^{ * } \) is the vector in \( L \) that is 4 times the first generator minus 1 times each of the second, third, and fourth generators. The hull \( = \) Scarf \( = \) minimal resolution of \( S/{I}_{L} \) has the form \[ 0 \leftarrow S \leftarrow {S}^{7} \leftarrow {S}^{12} \leftarrow {S}^{6} \leftarrow 0. \] Up to the action of \( L \), there are 6 tetrahedra corresponding to the second syzygies and 12 triangles corresponding to the first syzygies. In Theorem 6.26 we described what it means for a monomial ideal to be generic. Similar equivalences hold for monomial modules \( M \) . In particular, \( M \) is generic if and only if its Scarf complex is unchanged by arbitrary deformations. It would be nice to make a similar statement for deformations in the subclass of lattice modules. Here, the situation is more complicated, but it is the case that generic lattices deserve to be called "generic" among all lattices: they are "abundant" in a sense that we are about to make precise. Consider the set \( {\mathcal{S}}_{d, n} \) of all rational \( d \times n \) matrices \( \mathbf{L} \) such that the row span of \( \mathbf{L} \) meets \( {\mathbb{N}}^{n} \) only in the origin. Each such matrix \( \mathbf{L} \) defines a rank \( d \) sublattice \( L = {\operatorname{rowspan}}_{\mathbb{Q}}\left( \mathbf{L}\right) \cap {\mathbb{Z}}^{n} \) . Let \( {\mathcal{T}}_{d, n} \) be the subset of all matrices \( \mathbf{L} \) in \( {\mathcal{S}}_{d, n} \) such that the corresponding lattice \( L \) is not generic. Theorem 9.27 (Barany and Scarf) The closure of \( {\mathcal{T}}_{d, n} \) has measure zero in the closure of \( {\mathcal{S}}_{d, n} \) in \( {\mathbb{R}}^{d \times n} \) . Proof. 
Condition (A3) in the article [BaS96] by Barany and Scarf describes an open set of matrices \( \mathbf{L} \) that represent generic lattices. Theorem 1 in [BaS96] shows that the set of all generic lattices with a fixed Scarf complex is an open polyhedral cone. The union of these cones is a dense subset in the closure of \( {\mathcal{S}}_{d, n} \) . Theorem 9.27 means in practice that if the rational matrix \( \mathbf{L} \) is chosen at random, with respect to any reasonable distribution on rational matrices, then the corresponding lattice ideal will be generic. What is puzzling is that virtually all lattice ideals one encounters in commutative algebra seem to be nongeneric; i.e., they lie in the measure zero subset \( {\mathcal{T}}_{d, n} \) . The deterministic construction of generic lattice ideals with prescribed properties (such as Betti numbers) is an open problem that appears to be difficult. It is also not known how to "deform" a lattice ideal to a "nearby" generic lattice ideal. ## Exercises 9.1 Let \( Q \) be the affine semigroup in \( {\mathbb{Z}}^{d} \) spanned by the vectors \( {\mathbf{e}}_{i} + {\mathbf{e}}_{j} \), where \( 1 \leq i < j \leq d \) . In other words, \( Q \) is spanned by all zero-one vectors with precisely two ones. Determine the \( K \) -polynomial \( {\mathcal{K}}_{Q}\left( {{t}_{1},\ldots ,{t}_{d}}\right) \) of the semigroup \( Q \) . 9.2 Let \( M \) be the Laurent monomial module generated by \( \left\{ {{x}^{u}{y}^{v}{z}^{w} \mid u + v + w = 0}\right. \) and not all three coordinates of \( \left( {u, v, w}\right) \) are even \( \} \) . Draw a picture of \( M \) . Find a cellular minimal free resolution of \( M \) over \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) . 9.3 Let \( L \) be the kernel of the matrix \( \left\lbrack \begin{array}{llll} 3 & 2 & 1 & 0 \\ 0 & 1 & 2 & 3 \end{array}\right\rbrack \) . Show that the hull resolution of the Laurent monomial module \( {M}_{L} \) is minimal. 
What happens modulo the action by the lattice \( L \) ? Answer: Depicted after the last exercise in this chapter. 9.4 What projective dimensions are possible for ideals \( {I}_{L} \) of pointed affine semi-groups spanned by six vectors in \( {\mathbb{Z}}^{3} \) ? Give an explicit example for each value. 9.5 Compute the hull resolution for the ideal of \( 2 \times 2 \) minors in a \( 2 \times 4 \) matrix. 9.6 Consider the lattice ideal generated by all the \( 2 \times 2 \) minors of a generic \( 4 \times 4 \) matrix, and compute its minimal free resolution. Classify all syzygies up to symmetry, and determine the corresponding simplicial complexes \( {\Delta }_{\mathbf{b}} \) . 9.7 Compute the hull resolution of the ideal of \( 2 \times 2 \) minors of a generic \( 3 \times 3 \) matrix, and compare it with the minimal free resolution of that same ideal. 9.8 Compute the hull complex hull \( \left( {M}_{L}\right) \) of the sublattice of \( {\mathbb{Z}}^{5} \) spanned by three vectors \( \left( {1, - 2,1,0,0}\right) ,\left( {0,1, - 2,1,0}\right) \), and \( \left( {0,0,1, - 2,1}\right) \) . 9.9 Let \( Q \) be the subsemigroup of \( {\mathbb{Z}}^{3} \) generated by the six vectors \( \left( {1,0,0}\right) ,\left( {0,1,0}\right) \) , \( \left( {0,0,1}\right) ,\left( {-1,1,0}\right) ,\left( {-1,0,1}\right) \), and \( \left( {0, - 1,1}\right) \) . Determine the corresponding lattice \( L \) and show that it is unimodular. Then compute (a) generators for the Lawrence ideal \( {I}_{\Lambda \left( L\right) } \) ,(b) the three-dimensional cell complex \( {\mathcal{H}}_{L}/L \) as in Example 9.22, and (c) the minimal free resolution of \( {I}_{\Lambda \left( L\right) } \) . 9.10 Determine the \( K \) -polynomial \( {\mathcal{K}}_{Q}\left( t\right) \) of the semigroup \( Q = \mathbb{N}\{ {20},{24},{25},{31}\} \) in Example 9.16. 9.11 Find an explicit generic lattice \( L \) of codimension 1 in \( {\mathbb{Z}}^{5} \) . 
List the faces of the Scarf complex of your lattice and describe the minimal free resolution of \( {I}_{L} \) . Answer to Exercise 9.3 Translates of the left picture by \( L \) constitute hull \( \left( {M}_{L}\right) \) : ![9d852306-8a03-41f2-b2e7-a141e7b451e2_200_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_200_0.jpg) Opposite e
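The lattice in Exercise 9.3 can be exhibited concretely. The following is an illustrative sketch (not part of the book's answer) that uses sympy to produce integer vectors spanning the kernel of the given matrix; note that clearing denominators of a rational nullspace basis yields in general only a finite-index sublattice of \( L \), which suffices here to exhibit kernel vectors.

```python
from sympy import Matrix, lcm, zeros

# Exercise 9.3: L is the lattice of integer vectors in the kernel of A.
A = Matrix([[3, 2, 1, 0],
            [0, 1, 2, 3]])

# Take a rational nullspace basis and clear denominators to land in Z^4.
basis = []
for v in A.nullspace():
    d = lcm([entry.q for entry in v])   # lcm of the entries' denominators
    basis.append([int(x) for x in d * v])

assert len(basis) == 2                  # rank L = 4 - rank A = 2
for b in basis:
    assert A * Matrix(4, 1, b) == zeros(2, 1)
```

For instance, \( (1,-2,1,0) \) and \( (0,1,-2,1) \) are integer kernel vectors of this matrix, as a direct check of the two rows confirms.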
1112_(GTM267)Quantum Theory for Mathematicians
Definition 7.18
Definition 7.18 Suppose the following structures are given: (1) a \( \sigma \) -finite measure space \( \left( {X,\Omega ,\mu }\right) \) ,(2) a collection \( {\left\{ {\mathbf{H}}_{\lambda }\right\} }_{\lambda \in X} \) of separable Hilbert spaces for which the dimension function is measurable, and (3) a measurability structure on \( {\left\{ {\mathbf{H}}_{\lambda }\right\} }_{\lambda \in X} \) . Then the direct integral of the \( {\mathbf{H}}_{\lambda } \) ’s with respect to \( \mu \), denoted \[ {\int }_{X}^{ \oplus }{\mathbf{H}}_{\lambda }{d\mu }\left( \lambda \right) \] is the space of equivalence classes of almost-everywhere-equal measurable sections \( s \) for which \[ \parallel s{\parallel }^{2} \mathrel{\text{:=}} {\int }_{X}\langle s\left( \lambda \right), s\left( \lambda \right) {\rangle }_{\lambda }{d\mu }\left( \lambda \right) < \infty . \] The inner product \( \left\langle {{s}_{1},{s}_{2}}\right\rangle \) of two such sections \( {s}_{1} \) and \( {s}_{2} \) is given by the formula \[ \left\langle {{s}_{1},{s}_{2}}\right\rangle \mathrel{\text{:=}} {\int }_{X}{\left\langle {s}_{1}\left( \lambda \right) ,{s}_{2}\left( \lambda \right) \right\rangle }_{\lambda }{d\mu }\left( \lambda \right) . \] To see that the integral defining the inner product of two finite-norm sections is finite, note that \( \left| {\left\langle {s}_{1}\left( \lambda \right) ,{s}_{2}\left( \lambda \right) \right\rangle }_{\lambda }\right| \leq {\begin{Vmatrix}{s}_{1}\left( \lambda \right) \end{Vmatrix}}_{\lambda }{\begin{Vmatrix}{s}_{2}\left( \lambda \right) \end{Vmatrix}}_{\lambda } \) . By assumption, \( {\begin{Vmatrix}{s}_{j}\left( \lambda \right) \end{Vmatrix}}_{\lambda } \) is a square-integrable function of \( \lambda \) for \( j = 1,2 \), and the product of two square-integrable functions is integrable. Thus, the integrand in the definition of \( \left\langle {{s}_{1},{s}_{2}}\right\rangle \) is also integrable. 
It is not hard to show, using an argument similar to the proof of completeness of \( {L}^{2} \) spaces, that a direct integral of Hilbert spaces is a Hilbert space. Let us think of two important special cases of the direct integral construction. First, if each of the \( {\mathbf{H}}_{\lambda } \) ’s is simply \( \mathbb{C} \), then the direct integral (with the obvious measurability structure) is simply \( {L}^{2}\left( {X,\mu }\right) \) . Second, suppose that \( X = \left\{ {{\lambda }_{1},{\lambda }_{2},\ldots }\right\} \) is countable, \( \Omega \) is the \( \sigma \) -algebra of all subsets of \( X \), and \( \mu \) is the counting measure on \( X \) . Then the direct integral is the Hilbert space direct sum (Definition A.45). Given a direct integral, suppose we have some \( {\lambda }_{0} \in X \) for which \( \left\{ {\lambda }_{0}\right\} \) is measurable and such that \( c \mathrel{\text{:=}} \mu \left( \left\{ {\lambda }_{0}\right\} \right) > 0 \) . Then we can embed \( {\mathbf{H}}_{{\lambda }_{0}} \) isometrically into the direct integral by mapping each \( \psi \in {\mathbf{H}}_{{\lambda }_{0}} \) to the section \( s \) given by \[ s\left( \lambda \right) = \left\{ {\begin{matrix} \frac{1}{\sqrt{c}}\psi ,\;\lambda = {\lambda }_{0} \\ 0,\;\lambda \neq {\lambda }_{0} \end{matrix}.}\right. \] Even if \( \mu \left( \left\{ {\lambda }_{0}\right\} \right) = 0 \), we may still think that \( {\mathbf{H}}_{{\lambda }_{0}} \) is a sort of "generalized subspace" of the direct integral. 
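The counting-measure special case, together with the isometric embedding \( \psi \mapsto \frac{1}{\sqrt{c}}\psi \) just described, can be modeled concretely when \( X \) is finite. A minimal NumPy sketch (the three-point space, the weights, and the fiber dimensions are illustrative assumptions):

```python
import numpy as np

# Model: X = {1, 2, 3} with atomic measure mu, and fibers H_1 = C^2,
# H_2 = C^3, H_3 = C.  A section s picks a vector in each fiber, and
# ||s||^2 = sum over lambda of mu({lambda}) * ||s(lambda)||^2.
mu = {1: 2.0, 2: 0.5, 3: 1.0}
s = {1: np.array([1.0, 0.0]),
     2: np.array([0.0, 2.0, 0.0]),
     3: np.array([3.0])}

norm_sq = sum(mu[lam] * np.vdot(s[lam], s[lam]).real for lam in mu)
assert norm_sq == 2.0 + 2.0 + 9.0

# Isometric embedding of H_{lambda_0} when c = mu({lambda_0}) > 0:
# psi goes to the section equal to psi / sqrt(c) at lambda_0, 0 elsewhere.
lam0 = 1
c = mu[lam0]
psi = np.array([3.0, 4.0])
emb = {lam: psi / np.sqrt(c) if lam == lam0 else np.zeros_like(s[lam])
       for lam in mu}
emb_norm_sq = sum(mu[lam] * np.vdot(emb[lam], emb[lam]).real for lam in mu)
assert abs(emb_norm_sq - np.vdot(psi, psi).real) < 1e-12   # isometry
```

With all weights equal to 1 this is exactly the Hilbert space direct sum; with all fibers equal to \( \mathbb{C} \) it is \( \ell^2 \) on three points.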
Theorem 7.19 (Spectral Theorem, Second Form) If \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint, then there exists a \( \sigma \) -finite measure \( \mu \) on \( \sigma \left( A\right) \), a direct integral \[ {\int }_{\sigma \left( A\right) }^{ \oplus }{\mathbf{H}}_{\lambda }{d\mu }\left( \lambda \right) \] and a unitary map \( U \) between \( \mathbf{H} \) and the direct integral such that \[ \left\lbrack {{UA}{U}^{-1}\left( s\right) }\right\rbrack \left( \lambda \right) = {\lambda s}\left( \lambda \right) \] (7.20) for all sections \( s \) in the direct integral. The proof of Theorem 7.19 is given in the next chapter, along with the proof of our first version of the spectral theorem. In the meantime, let us think about what this version of the spectral theorem is saying. We may think that the unitary map \( U \) is an identification of our original Hilbert space \( \mathbf{H} \) with a certain direct integral over the spectrum of \( A \) . Under this identification, the self-adjoint operator \( A \) becomes the operator of multiplication by \( \lambda \), that is, the map sending the section \( s\left( \lambda \right) \) to \( {\lambda s}\left( \lambda \right) \) . Roughly speaking, then, the operator \( A \) acts (under our identification) as \( {\lambda I} \) on each space \( {\mathbf{H}}_{\lambda } \) . Thus, we may think of \( {\mathbf{H}}_{\lambda } \) as being something like an "eigenspace" for \( A \), for each element \( \lambda \) of the spectrum of \( A \) . Of course, unless \( \mu \left( {\{ \lambda \} }\right) > 0 \), the Hilbert space \( {\mathbf{H}}_{\lambda } \) is not actually contained in \( \mathbf{H} \) . Nevertheless, we may think of elements of a given \( {\mathbf{H}}_{\lambda } \) as "generalized eigenvectors" for the operator \( A \) . The direct integral formulation of the spectral theorem leads readily to a classification result for bounded self-adjoint operators. See Proposition 7.24 later in this section. 
Meanwhile, as we noted earlier in this section, the method of proof for Theorem 7.19 also yields a version of the spectral theorem involving multiplication operators on ordinary \( {L}^{2} \) spaces. Theorem 7.20 (Spectral Theorem, Multiplication Operator Form) Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint. Then there exists a \( \sigma \) -finite measure space \( \left( {X,\mu }\right) \), a bounded, measurable, real-valued function \( h \) on \( X \), and a unitary map \( U : \mathbf{H} \rightarrow {L}^{2}\left( {X,\mu }\right) \) such that \[ \left\lbrack {{UA}{U}^{-1}\left( \psi \right) }\right\rbrack \left( \lambda \right) = h\left( \lambda \right) \psi \left( \lambda \right) \] for all \( \psi \in {L}^{2}\left( {X,\mu }\right) \) . We return now to a discussion of the direct integral version of the spectral theorem. This version gives a simple description of the functional calculus. Proposition 7.21 Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint and \( U \) is a unitary map as in Theorem 7.19. Then for any bounded measurable function \( f \) on \( \sigma \left( A\right) \), we have \[ \left\lbrack {{Uf}\left( A\right) {U}^{-1}\left( s\right) }\right\rbrack \left( \lambda \right) = f\left( \lambda \right) s\left( \lambda \right) . \] Thus, roughly speaking, \( f\left( A\right) \) is defined to be \( f\left( \lambda \right) I \) on each "generalized eigenspace" \( {\mathbf{H}}_{\lambda } \) . Proposition 7.21 follows directly from (7.20) if \( f \) is a polynomial; the result for continuous \( f \) then follows by taking uniform limits. The result for general \( f \) is then easily established by using the limiting arguments of Chap. 8, especially Exercise 3. Let us now consider what sort of uniqueness there should be in the second version of the spectral theorem. There is a "trivial" source of nonuniqueness coming from the possibility that some of the \( {\mathbf{H}}_{\lambda } \) ’s may have dimension 0 . 
Let \( {E}_{0} \) denote the set of \( \lambda \) for which \( \dim {\mathbf{H}}_{\lambda } = 0 \) . Even if \( \mu \left( {E}_{0}\right) > 0 \), the set \( {E}_{0} \) makes no contribution to the norm of a section, since every section is automatically zero on \( {E}_{0} \) . Thus, we may define a new measure \( \widetilde{\mu } \) by setting \( \widetilde{\mu }\left( E\right) = \mu \left( {E \cap {E}_{0}^{c}}\right) \), so that \( \widetilde{\mu } \) agrees with \( \mu \) on \( {E}_{0}^{c} \) but is zero on \( {E}_{0} \) . Then the direct integrals of the \( {\mathbf{H}}_{\lambda } \) ’s with respect to \( \mu \) and with respect to \( \widetilde{\mu } \) are "indistinguishable." Thus, we can always modify a direct integral so as to assume that \( \dim {\mathbf{H}}_{\lambda } > 0 \) for almost every \( \lambda \) . Meanwhile, unlike the projection-valued measure \( {\mu }^{A} \) in Theorem 7.12, the measure \( \mu \) in Theorem 7.19 is not unique, but only unique up to equivalence, where two \( \sigma \) -finite measures on a given measurable space are equivalent if they have precisely the same sets of measure zero. For a given measure \( \mu \), the Hilbert spaces \( {\mathbf{H}}_{\lambda } \) are unique only up to unitary equivalence, meaning that only the dimension of the spaces is uniquely determined. Even the dimension of \( {\mathbf{H}}_{\lambda } \) is uniquely determined only up to a set of \( \mu \) -measure zero. As it turns out, the sources of nonuniqueness in this paragraph and the previous paragraph are all that exist. Proposition 7.22 (Uniqueness in Theorem 7.19) Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is self-adjoint and consider two different direct integrals as in Theorem 7.19, one with measure \( {\mu }^{\left( 1\right) } \) and Hilbert spaces \( {\mathbf{H}}_{\lambda }^{\left( 1\right) } \) and the other with measure \( {\mu }^{\left( 2\right) } \) and Hilbert spaces \( {\mathbf{H}}_{\lambda }^{\left( 2\right) } \) . 
If \( \dim {\mathbf{H}}_{\lambda }^{\left( j\right) } > 0 \) for \( {\mu }^{\left( j\right) } \) -almost every \( \lambda \) \( \left( {j = 1,2}\right) \), then \( {\mu }^{\left( 1\right) } \) and \( {\mu }^{\left( 2\right) } \) are mutually absolutely continuous and \[ \dim {\mathbf{H}}_{\lambda }^{\left( 1\right) } = \dim {\mathbf{H}}_{\lambda }^{\left( 2\right) } \] for \( {\mu }^{\left( j\right) } \) -almost every \( \lambda \left( {j = 1,2}\right) \) . See the end of the next chapter for a sketch of the proof of this uniqueness result.
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 4.1
Definition 4.1. The critical point \( O\left( {0,0}\right) \) for the system of equations is called a stable (unstable) attractor if there exists a \( \delta > 0 \) such that any solution \( x = x\left( t\right), y = y\left( t\right) \) with initial conditions satisfying \( {x}^{2}\left( {t}_{0}\right) + {y}^{2}\left( {t}_{0}\right) < \delta \) will have the property \[ \mathop{\lim }\limits_{{t \rightarrow \infty }}\left\lbrack {{x}^{2}\left( t\right) + {y}^{2}\left( t\right) }\right\rbrack = 0\;\left( {\mathop{\lim }\limits_{{t \rightarrow - \infty }}\left\lbrack {{x}^{2}\left( t\right) + {y}^{2}\left( t\right) }\right\rbrack = 0}\right) . \] THEOREM 4.1. Suppose that \( O\left( {0,0}\right) \) is a stable (unstable) attractor for system (4.1), and the additional terms in system (4.2) satisfy hypothesis [H1]. Then \( O\left( {0,0}\right) \) is also a stable (unstable) attractor for (4.2). Proof. If \( O\left( {0,0}\right) \) is a stable (unstable) attractor for (4.1), then \( q = \) \( {ad} - {bc} > 0, p = - \left( {a + d}\right) > 0\;\left( { < 0}\right) \) . Let \[ V\left( {x, y}\right) = \left( {{ad} - {bc}}\right) \left( {{x}^{2} + {y}^{2}}\right) + {\left( ay - cx\right) }^{2} + {\left( by - dx\right) }^{2}. \] Then \( V\left( {0,0}\right) = 0, V\left( {x, y}\right) > 0 \) if \( {x}^{2} + {y}^{2} \neq 0 \) . Further, \[ {\left. \frac{dV}{dt}\right| }_{\left( {4.1}\right) } = 2\left( {a + d}\right) \left( {{ad} - {bc}}\right) \left( {{x}^{2} + {y}^{2}}\right) < 0\left( { > 0}\right) ,\;{x}^{2} + {y}^{2} \neq 0, \] \[ {\left. \frac{dV}{dt}\right| }_{\left( {4.2}\right) } = 2\left( {a + d}\right) \left( {{ad} - {bc}}\right) \left( {{x}^{2} + {y}^{2}}\right) + o\left( {{x}^{2} + {y}^{2}}\right) \;\text{ as }{x}^{2} + {y}^{2} \rightarrow 0. \] Thus when \( 0 < r \ll 1,{\left. dV/dt\right| }_{\left( {4.2}\right) } < 0\left( { > 0}\right) \) . Consequently, \( O\left( {0,0}\right) \) is also an attractor for system (4.2) and the stability is unchanged. 
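The derivative computation in the proof of Theorem 4.1 can be verified symbolically. A sketch with sympy, assuming (4.1) is the linear system \( \dot{x} = ax + by \), \( \dot{y} = cx + dy \) (an assumption on our part, but the one consistent with \( p = -(a+d) \) and \( q = ad - bc \) above):

```python
import sympy as sp

a, b, c, d, x, y = sp.symbols('a b c d x y')
xdot, ydot = a*x + b*y, c*x + d*y          # assumed form of system (4.1)

q = a*d - b*c
V = q*(x**2 + y**2) + (a*y - c*x)**2 + (b*y - d*x)**2

# dV/dt along solutions of (4.1):
dVdt = sp.diff(V, x)*xdot + sp.diff(V, y)*ydot
claimed = 2*(a + d)*q*(x**2 + y**2)
assert sp.simplify(dVdt - claimed) == 0    # matches the proof's formula
```

When \( q > 0 \) and \( a + d < 0 \), this identity shows \( dV/dt < 0 \) away from the origin, which is exactly the Lyapunov argument used in the proof.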
THEOREM 4.2. Suppose that \( O\left( {0,0}\right) \) is a stable (unstable) focus of system (4.1), and hypothesis \( \left\lbrack {\mathrm{H}1}\right\rbrack \) is satisfied. Then \( O\left( {0,0}\right) \) is also a stable (unstable) focus of system (4.2). Proof. System (4.2) can be transformed into \[ \frac{d{x}^{\prime }}{dt} = {\mu }_{1}{x}^{\prime } + {\mu }_{2}{y}^{\prime } + {\Phi }^{\prime }\left( {{x}^{\prime },{y}^{\prime }}\right) ,\;\frac{d{y}^{\prime }}{dt} = - {\mu }_{2}{x}^{\prime } + {\mu }_{1}{y}^{\prime } + {\Psi }^{\prime }\left( {{x}^{\prime },{y}^{\prime }}\right) , \] (4.3) by a nonsingular linear transformation. Here \( {\Phi }^{\prime },{\Psi }^{\prime } \) are linear combinations of \( \Phi \) and \( \Psi \), and thus also satisfy hypothesis [H1]. The characteristic equation of (4.3), \( G\left( \theta \right) \equiv - {\mu }_{2} = 0 \), does not have any real root, and is thus classified as the definite sign case. By Theorem 3.2, \( O\left( {0,0}\right) \) has to be a center, center focus, or focus for system (4.3) (i.e., (4.2)). Moreover, Theorem 4.1 implies that \( O\left( {0,0}\right) \) must also be a stable (or unstable) attractor for system (4.2). Consequently, \( O\left( {0,0}\right) \) must be a stable (or unstable) focus for (4.2). If \( O\left( {0,0}\right) \) is a saddle point, node, or proper node of (4.1), then a nonsingular linear transform can transform (4.2) into \[ \frac{d{x}^{\prime }}{dt} = {\lambda }_{1}{x}^{\prime } + {\Phi }^{\prime }\left( {{x}^{\prime },{y}^{\prime }}\right) ,\;\frac{d{y}^{\prime }}{dt} = {\lambda }_{2}{y}^{\prime } + {\Psi }^{\prime }\left( {{x}^{\prime },{y}^{\prime }}\right) . \] (4.4) Here \( {\Phi }^{\prime },{\Psi }^{\prime } \) are linear combinations of \( \Phi \) and \( \Psi \) ; and the symbols \( {\Phi }^{\prime },{\Psi }^{\prime } \) are used again as before for simplicity. 
If \( \Phi ,\Psi \) satisfy hypotheses [H1] or [H2], then \( {\Phi }^{\prime },{\Psi }^{\prime } \) will respectively satisfy hypotheses [H1] or [H2]. THEOREM 4.3. Let \( O\left( {0,0}\right) \) be a node for system (4.1) with eigenvalues \( \left| {\lambda }_{2}\right| > \left| {\lambda }_{1}\right| \) . Then, (i) if \( \left\lbrack {\mathrm{H}1}\right\rbrack \) is satisfied, all solution orbits of (4.4) near the characteristic point must tend to the critical point \( O \) along characteristic directions \( \theta = \) \( 0,\frac{\pi }{2},\pi ,\frac{3\pi }{2}; \) (ii) if \( \left\lbrack {\mathrm{H}2}\right\rbrack \) is satisfied, there is one solution orbit of (4.4) tending to the critical point along direction \( \theta = \frac{\pi }{2},\frac{3\pi }{2} \) . Proof. By Theorem 4.1, the point \( O\left( {0,0}\right) \) is also an attractor for system (4.2) without any change in stability. Moreover, for system (4.4), we have \[ G\left( \theta \right) = \frac{{\lambda }_{2} - {\lambda }_{1}}{2}\sin {2\theta },\;H\left( \theta \right) = {\lambda }_{2}{\sin }^{2}\theta + {\lambda }_{1}{\cos }^{2}\theta . \] The characteristic equation \( G\left( \theta \right) = 0 \) has real roots \( \theta = 0,\frac{\pi }{2},\pi \) , \( \frac{3\pi }{2} \), which are all simple. Further, \[ H\left( 0\right) {G}^{\prime }\left( 0\right) = H\left( \pi \right) {G}^{\prime }\left( \pi \right) = {\lambda }_{1}\left( {{\lambda }_{2} - {\lambda }_{1}}\right) > 0, \] \[ H\left( \frac{\pi }{2}\right) {G}^{\prime }\left( \frac{\pi }{2}\right) = H\left( \frac{3\pi }{2}\right) {G}^{\prime }\left( \frac{3\pi }{2}\right) = {\lambda }_{2}\left( {{\lambda }_{1} - {\lambda }_{2}}\right) < 0. \] By Theorems 3.4 and 3.5, inside \( S\left( {0,{r}_{1}}\right) \) we can construct normal regions of the first type \( {T}_{1},{T}_{2} \) respectively near \( \theta = 0,\pi \), and normal regions of the second type \( {T}_{3},{T}_{4} \) respectively near \( \theta = \frac{\pi }{2},\frac{3\pi }{2} \) . 
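The four products \( H\left( \theta \right) {G}^{\prime }\left( \theta \right) \) displayed in the proof of Theorem 4.3 can be checked directly; a brief sympy verification:

```python
import sympy as sp

th, l1, l2 = sp.symbols('theta lambda1 lambda2')
G = (l2 - l1) / 2 * sp.sin(2*th)
H = l2*sp.sin(th)**2 + l1*sp.cos(th)**2
HGp = H * sp.diff(G, th)           # G'(theta) = (lambda2 - lambda1) cos(2 theta)

# H(0)G'(0) = H(pi)G'(pi) = lambda1 (lambda2 - lambda1)
for t in (0, sp.pi):
    assert sp.simplify(HGp.subs(th, t) - l1*(l2 - l1)) == 0
# H(pi/2)G'(pi/2) = H(3pi/2)G'(3pi/2) = lambda2 (lambda1 - lambda2)
for t in (sp.pi/2, 3*sp.pi/2):
    assert sp.simplify(HGp.subs(th, t) - l2*(l1 - l2)) == 0
```

For a node with \( \left| {\lambda }_{2}\right| > \left| {\lambda }_{1}\right| \) and both eigenvalues of the same sign, the first product is positive and the second negative, as asserted in the proof.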
Within each sector inside \( S\left( {0,{r}_{1}}\right) \smallsetminus \mathop{\bigcup }\limits_{{l = 1}}^{4}{T}_{l}, G\left( \theta \right) \) is of definite sign; and it follows from Theorems 3.1, 3.4, and 3.5 that the first assertion (i) of this theorem is valid. As to part (ii), we note that [H1] and [H2] imply that \[ \frac{\partial {\Phi }^{\prime }}{\partial \theta },\;\frac{\partial {\Psi }^{\prime }}{\partial \theta } = o\left( r\right) ,\;\text{ as }r \rightarrow 0, \] and the conditions (3.17), (3.18) in Theorem 3.7 are satisfied. Consequently, Theorem 3.7 implies that along each direction \( \theta = \frac{\pi }{2},\frac{3\pi }{2} \), there is a unique solution tending to the critical point \( O \) . THEOREM 4.4. Suppose that \( O\left( {0,0}\right) \) is a saddle point for system (4.1) with \( {\lambda }_{1} > 0 > {\lambda }_{2} \) . (i) If \( \left\lbrack {\mathrm{H}1}\right\rbrack \) is satisfied, then system (4.4) has solution orbit(s) tending to the critical point \( O \) along \( \theta = \frac{\pi }{2},\frac{3\pi }{2} \) as \( t \rightarrow + \infty \), and solution orbit(s) tending to the critical point \( O \) along \( \theta = 0,\pi \) as \( t \rightarrow - \infty \) . Moreover, there exists \( {r}_{2} > 0 \) such that for any \( P \in S\left( {0,{r}_{2}}\right) \), the orbit \( \overrightarrow{f}\left( {P, I}\right) \) leaves \( S\left( {0,{r}_{2}}\right) \) in both directions, unless it is an orbit with the above property. (ii) If \( \left\lbrack {\mathrm{H}2}\right\rbrack \) is satisfied, then there exists a unique orbit tending to the critical point \( O \) along \( \theta = 0,\frac{\pi }{2},\pi ,\frac{3\pi }{2} \) respectively. Proof. 
Note that in this case, the characteristic equation \( G\left( \theta \right) = 0 \) has only four simple roots \( \theta = 0,\frac{\pi }{2},\pi ,\frac{3\pi }{2} \) and \[ H\left( 0\right) {G}^{\prime }\left( 0\right) = H\left( \pi \right) {G}^{\prime }\left( \pi \right) = {\lambda }_{1}\left( {{\lambda }_{2} - {\lambda }_{1}}\right) < 0, \] \[ H\left( \frac{\pi }{2}\right) {G}^{\prime }\left( \frac{\pi }{2}\right) = H\left( \frac{3\pi }{2}\right) {G}^{\prime }\left( \frac{3\pi }{2}\right) = {\lambda }_{2}\left( {{\lambda }_{1} - {\lambda }_{2}}\right) < 0. \] From Theorem 3.5, we can construct normal regions of the second type \( {T}_{1},{T}_{2} \) , \( {T}_{3} \), and \( {T}_{4} \) in \( S\left( {0,\delta }\right) \) near \( \theta = 0,\frac{\pi }{2},\pi \), and \( \frac{3\pi }{2} \) respectively. Further, since \( H\left( 0\right) = H\left( \pi \right) = {\lambda }_{1} > 0 \), there exist orbits tending to the critical point \( O \) along \( \theta = 0,\pi \) as \( t \rightarrow - \infty \) ; and since \( H\left( \frac{\pi }{2}\right) = H\left( \frac{3\pi }{2}\right) = {\lambda }_{2} < 0 \), there exist orbits tending to the critical point \( O \) along \( \theta = \frac{\pi }{2},\frac{3\pi }{2} \) as \( t \rightarrow + \infty \) . For \( {r}_{2} < {r}_{1}, G\left( \theta \right) \) is of definite sign in each sector of \( S\left( {0,{r}_{2}}\right) \smallsetminus \mathop{\bigcup }\limits_{{l = 1}}^{4}{T}_{l} \) . From Theorem 3.1, when \( {r}_{2} \) is sufficiently small, an orbit \( \overrightarrow{f}\left( {P, I}\right) \) through any \( P \in S\left( {0,{r}_{2}}\right) \smallsetminus \mathop{\bigcup }\limits_{{l = 1}}^{4}{T}_{l} \) must intersect the boundary segment of some \( {T}_{l} \) and then leave \( S\left( {0,{r}_{2}}\right) \), as \( t \) is continued in both the positive and negative directions. This proves part (i). The proof of part (ii) is similar to that for Theorem 4.3. THEOREM 4.5. 
Suppose that \( O\left( {0,0}\right) \) is a proper node for system (4.1), and the additional terms for system (4.2) satisfy [H2] and the hypothesis \( \left\lbrack {\mathrm{H}{1}^{ * }}\right\rbrack \) : \( \Phi \left( {x, y}\right) ,\Psi \left( {x, y}\right) = o\left( {r}^{1 + \varepsilon }\right) \) as \( r \rightarrow 0 \), where \( \varepsilon > 0 \) is an arbitrarily small positive number. Then there is a unique orbit for system (4.2) tending to the critical point \( O \)
1114_(GTM269)Locally Convex Spaces
Definition 3.38
Definition 3.38. An LF-space is a vector space \( X \) over \( \mathbb{R} \) or \( \mathbb{C} \) for which \[ X = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{X}_{n} \] where each \( {X}_{n} \) is a subspace of \( X \) equipped with a Hausdorff, locally convex topology making each \( {X}_{n} \) into a Fréchet space, subject to the following three constraints: 1. Each \( {X}_{n} \subset {X}_{n + 1} \) . 2. The topology \( {X}_{n + 1} \) induces on \( {X}_{n} \) is its Fréchet space topology. 3. \( {X}_{n} \neq X \) for all \( n \) . The LF-topology on \( X \) is defined via Theorem 3.2, using the base: \[ {\mathcal{B}}_{0} = \left\{ {B \subset X : B}\right. \text{is convex and balanced, and}B\bigcap {X}_{n}\text{is a} \] neighborhood of 0 in the Fréchet topology of \( {X}_{n} \) . \( \} \) Observe that since each \( {X}_{n} \) is complete, it is closed (Proposition 1.30), so thanks to constraint 3, an LF-space is always first category. One more definition, which we return to later and in Sect. 4.3: an LB-space is an LF-space in which the subspaces \( {X}_{n} \) are actually Banach spaces. Examples of LF-Spaces I. \( \mathbb{R}\left\lbrack x\right\rbrack \) or \( \mathbb{C}\left\lbrack x\right\rbrack \) . (Polynomials.) Here, \( {X}_{n} \) consists of polynomials of degree \( \leq n \) . The topology on \( {X}_{n} \) can be taken as its Euclidean topology, since \( {X}_{n} \approx {\mathbb{F}}^{n + 1} \) : \[ \mathop{\sum }\limits_{{j = 0}}^{n}{a}_{j}{x}^{j} \leftrightarrow \left( \begin{matrix} {a}_{0} \\ {a}_{1} \\ \vdots \\ {a}_{n} \end{matrix}\right) \] II. \( {C}_{c}\left( H\right) \), the continuous functions with compact support on a noncompact, locally compact, \( \sigma \) -compact Hausdorff space. \( H \) can be written as \[ H = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{K}_{n} \] where each \( {K}_{n} \) is compact, and each \( {K}_{n} \subset \operatorname{int}\left( {K}_{n + 1}\right) \) . 
Set \[ {X}_{n} = \left\{ {f \in {C}_{c}\left( H\right) : \operatorname{supp}\left( f\right) \subset {K}_{n}}\right\} . \] The topology on \( {X}_{n} \) is given by \( \parallel f\parallel = \max \left| {f\left( {K}_{n}\right) }\right| \) . III. \( {C}_{c}^{\infty }\left( {\mathbb{R}}^{m}\right) \), the space of \( {C}^{\infty } \) functions on \( {\mathbb{R}}^{m} \) with compact support. Letting \( {B}_{r}^{ - } \) denote the closed ball of radius \( r \) , \[ {X}_{n} = \left\{ {f \in {C}_{c}^{\infty }\left( {\mathbb{R}}^{m}\right) : \operatorname{supp}\left( f\right) \subset {B}_{n}^{ - }}\right\} . \] The Fréchet topology on \( {X}_{n} \) is the one it gets as a (closed) subspace of \( {C}^{\infty }\left( {\mathbb{R}}^{m}\right) \) . This example can be expanded to a \( {C}^{\infty } \) manifold that is \( \sigma \) -compact but not compact. Examples I and II above are LB-spaces, while Example III is not. Example III is so important that it is conceivable that LF-spaces, as a class of locally convex spaces, would be defined even if Example III were the only example. One other feature stands out about the examples: The inclusions \( {X}_{n} \hookrightarrow {X}_{n + 1} \) are actually isometries, when one uses the metrics associated with the [semi]norm[s] used in the previous section. It turns out that this can always be arranged (Exercise 14), but this fact is not particularly useful. It is immediately evident that the topology given in Definition 3.38 works: \( {\mathcal{B}}_{0} \) is evidently closed under intersections and multiplication by \( \frac{1}{2} \) . However, anything else will take some work. There is a fundamental construction that needs a lot of discussion. 
This construction basically works between \( {X}_{n} \) and \( {X}_{n + 1} \) ; it will be referred to as the "link" construction (not a standard term), because it will form a link in the chain of containments \[ {X}_{k} \subset {X}_{k + 1} \subset {X}_{k + 2} \subset \cdots \subset {X}_{n} \subset {X}_{n + 1} \subset \cdots \] \[ \frac{ \uparrow }{\text{ here }} \] Given: a convex, balanced neighborhood \( {U}_{n} \) of 0 in the Fréchet space \( {X}_{n} \), and another convex, balanced neighborhood \( {U}_{n + 1} \) of 0 in the Fréchet space \( {X}_{n + 1} \) , subject to the condition: \( {U}_{n + 1} \cap {X}_{n} \subset {U}_{n} \) . Set \[ L\left( {{U}_{n},{U}_{n + 1}}\right) = \left\{ {{tx} + \left( {1 - t}\right) y : x \in {U}_{n}, y \in {U}_{n + 1}, t \in \lbrack 0,1)}\right\} . \] The basic properties here are easily established, and will be immediately used in the next proposition. These results will subsequently be referred to by their "Fact number." 1. \( L\left( {{U}_{n},{U}_{n + 1}}\right) \) is convex and balanced, and \( L\left( {{U}_{n},{U}_{n + 1}}\right) \supset {U}_{n + 1} \) . \( L\left( {{U}_{n},{U}_{n + 1}}\right) \) is convex by Proposition 2.14, while if \( x \in {U}_{n}, y \in {U}_{n + 1} \) , \( t \in \lbrack 0,1) \), and \( \left| c\right| \leq 1 \), then \[ c\left( {{tx} + \left( {1 - t}\right) y}\right) = t\left( {cx}\right) + \left( {1 - t}\right) \left( {cy}\right) \in L\left( {{U}_{n},{U}_{n + 1}}\right) . \] Finally, taking \( t = 0, y \in L\left( {{U}_{n},{U}_{n + 1}}\right) \) . 2. \( \lbrack 0,1){U}_{n} \) is contained in the interior in \( {X}_{n + 1} \) of \( L\left( {{U}_{n},{U}_{n + 1}}\right) \) : If \( 0 \leq t < 1 \), then for all \( x \in {U}_{n},{tx} + \left( {1 - t}\right) {U}_{n + 1} \) is a neighborhood of \( {tx} \) in \( {X}_{n + 1} \), and \( {tx} + \left( {1 - t}\right) {U}_{n + 1} \subset L\left( {{U}_{n},{U}_{n + 1}}\right) \) . 3. 
\( L\left( {{U}_{n},{U}_{n + 1}}\right) \cap {X}_{n} \subset {U}_{n} \) : If \( 0 \leq t < 1, x \in {U}_{n} \), and \( y \in {U}_{n + 1} \), with \( z = {tx} + \left( {1 - t}\right) y \in {X}_{n} \), then \( y = {\left( 1 - t\right) }^{-1}\left( {z - {tx}}\right) \in {X}_{n} \), so \( y \in {U}_{n + 1} \cap {X}_{n} \subset {U}_{n} \) . Hence \( z \in {U}_{n} \) since \( {U}_{n} \) is convex. 4. If \( {U}_{n} \) is open in \( {X}_{n} \), then \( L\left( {{U}_{n},{U}_{n + 1}}\right) \cap {X}_{n} = {U}_{n} \) . From facts 2 and 3, \( \lbrack 0,1){U}_{n} \subset L\left( {{U}_{n},{U}_{n + 1}}\right) \bigcap {X}_{n} \subset {U}_{n} \) . But \( {U}_{n} = \lbrack 0,1){U}_{n} \) when \( {U}_{n} \) is open (Theorem 2.15). 5. If \( {U}_{n} \) is open in \( {X}_{n} \), then \( L\left( {{U}_{n},{U}_{n + 1}}\right) = \operatorname{con}\left( {{U}_{n}\bigcup {U}_{n + 1}}\right) \), the convex hull of \( {U}_{n} \) and \( {U}_{n + 1} \) : \( L\left( {{U}_{n},{U}_{n + 1}}\right) \) is convex (Fact 1), and everything in \( L\left( {{U}_{n},{U}_{n + 1}}\right) \) lies in \( \operatorname{con}\left( {{U}_{n} \cup {U}_{n + 1}}\right) \) by Proposition 2.12. On the other hand, \( {U}_{n} \subset L\left( {{U}_{n},{U}_{n + 1}}\right) \) when \( {U}_{n} \) is open, and \( {U}_{n + 1} \subset L\left( {{U}_{n},{U}_{n + 1}}\right) \) by Fact 1, so \( {U}_{n}\bigcup {U}_{n + 1} \subset \) \( L\left( {{U}_{n},{U}_{n + 1}}\right) \), giving \( \operatorname{con}\left( {{U}_{n} \cup {U}_{n + 1}}\right) \subset L\left( {{U}_{n},{U}_{n + 1}}\right) \) . 6. If \( {U}_{n + 1} \) is open in \( {X}_{n + 1} \), then \( L\left( {{U}_{n},{U}_{n + 1}}\right) \) is open in \( {X}_{n + 1} \) : \[ L\left( {{U}_{n},{U}_{n + 1}}\right) = \mathop{\bigcup }\limits_{{0 \leq t < 1}}\mathop{\bigcup }\limits_{{x \in {U}_{n}}}\left( {{tx} + \left( {1 - t}\right) {U}_{n + 1}}\right) . 
\] It seems clear at this point that the link construction works best with open sets, and for the full chain construction we shall make that restriction. Suppose we have a sequence of convex, balanced sets \( \left\langle {U}_{n}\right\rangle \), starting at some \( k \) and going to \( \infty \), where each \( {U}_{n} \) is open in \( {X}_{n} \), with \( {U}_{n} \supset {X}_{n} \cap {U}_{n + 1} \) . Recursively define \[ {V}_{k} = {U}_{k};{V}_{n + 1} = L\left( {{V}_{n},{U}_{n + 1}}\right) . \] The "chain" is \( \left\langle {V}_{n}\right\rangle \), and will be so referred to in the next few "facts." The next two make our construction legitimate. 7. For all \( n \geq k,{U}_{n} \subset {V}_{n} \) : True when \( n = k \) by definition; true for \( n + 1 \) by Fact 1 . (This is not an induction.) 8. For all \( n \geq k,{U}_{n + 1} \cap {X}_{n} \subset {V}_{n} \) : \( {U}_{n + 1} \cap {X}_{n} \subset {U}_{n} \) by assumption, so that \( {U}_{n + 1} \cap {X}_{n} \subset {U}_{n} \subset {V}_{n} \) by Fact 7. 9. For all \( n \geq k,{V}_{n} \) is open in \( {X}_{n} \) . True when \( n = k \) by definition; true for \( n + 1 \) by Fact 6 . (This is not an induction, either.) 10. For all \( n \geq k,{V}_{n + 1} \cap {X}_{n} = {V}_{n} \) : True by Fact 4, which applies by Fact 9. 11. For all \( n \geq m \geq k,{V}_{n} \cap {X}_{m} = {V}_{m} \) : True when \( n = m \) by definition; if \( {V}_{n} \cap {X}_{m} = {V}_{m} \), then \( {V}_{n + 1} \cap {X}_{m} = \) \( {V}_{n + 1} \cap {X}_{n} \cap {X}_{m} = {V}_{n} \cap {X}_{m} = {V}_{m} \) by Fact 10. (This is an induction, on \( n \) .) 12. \( V = \mathop{\bigcup }\limits_{{n = k}}^{\infty }{V}_{n} \) is a convex, balanced, \( {LF} \) -neighborhood of 0 in \( X \), for which \( V \cap {X}_{m} = {V}_{m} \) for each \( m \geq k \) : This is an ascending (Fact 10) union of convex balanced sets, so it is convex and balanced. 
Since it is ascending, for any \( m \geq k \) : \[ V\bigcap {X}_{m} = \left( {\mathop{\bigcup }\limits_{{n = m}}^{\infty }{V}_{n}}\right) \bigcap {X}_{m} = \mathop{\bigcup }\limits_{{n = m}}^{\infty }\left( {{V}_{n}\bigcap {X}_{m}}\right) = {V}_{m}. \] It follows that \( V \) belongs to our LF base. We are now ready for our first main result. Proposition 3.39. Suppose \( X \) is an LF-space, constructed from an ascending union of Fréchet subspaces \( \left\langle {X}_{n}\right\rangle \) . Then: (a) The LF topology on \( X \) induces the (original) Fréchet topology on each \( {X}_{k} \) . (b) \( X \) is Hausdorff. (c
108_The Joys of Haar Measure
Definition 4.1.1
Definition 4.1.1. Let \( K \) be any field. An absolute value is a map \( \parallel \parallel \) from \( K \) to \( \mathbb{R} \) satisfying the following properties: (1) (Definiteness.) For all \( x \in K \) we have \( \parallel x\parallel \geq 0 \), and \( \parallel x\parallel = 0 \) if and only if \( x = 0 \) . (2) (Multiplicativity.) For all \( x \) and \( y \) in \( K \) we have \( \parallel {xy}\parallel = \parallel x\parallel \parallel y\parallel \) . (3) (Generalized triangle inequality.) There exists \( a > 0 \) such that for all \( x \) and \( y \) in \( K \) we have \( \parallel x + y{\parallel }^{a} \leq \parallel x{\parallel }^{a} + \parallel y{\parallel }^{a} \) . Before going any further, it is important that the reader keep in mind the following three examples: - If \( K \) is a subfield of \( \mathbb{R} \), the ordinary absolute value \( \left| x\right| \) is clearly an absolute value in the above sense, with \( a = 1 \) . More generally, if \( K \) is a subfield of \( \mathbb{C} \) the modulus \( \left| x\right| \) is also an absolute value with \( a = 1 \) . - If \( K \) is a subfield of \( \mathbb{C} \), the square of the modulus \( {\left| x\right| }^{2} \) is clearly an absolute value in the above sense, with \( a = 1/2 \) . - If \( K = \mathbb{Q} \) and \( p \) is a prime number, the map defined by \( {\left| 0\right| }_{p} = 0 \) and \( {\left| x\right| }_{p} = {C}^{-{v}_{p}\left( x\right) } \), where \( {v}_{p}\left( x\right) \) is the exponent of \( p \) in the decomposition of \( x \) into a product of prime powers and \( C > 1 \) is a fixed constant, is an absolute value called a \( p \) -adic absolute value. It is clear that an absolute value \( \parallel \parallel \) makes \( K \) into a metric space, the distance being defined by \( d\left( {x, y}\right) = \parallel x - y{\parallel }^{a} \) . This metric depends on \( a \), but it is immediate that the induced topology does not. Definition 4.1.2. 
Two absolute values on a field \( K \) are called equivalent if they induce the same topology on \( K \) . Lemma 4.1.3. Two absolute values \( \parallel {\parallel }_{1} \) and \( \parallel {\parallel }_{2} \) are equivalent if and only if there exists \( c > 0 \) such that \( \parallel x{\parallel }_{2} = \parallel x{\parallel }_{1}^{c} \) for all \( x \in K \) . Proof. If \( \parallel x{\parallel }_{2} = \parallel x{\parallel }_{1}^{c} \), for each \( R > 0 \) we have \( {B}_{1}\left( {0, R}\right) = {B}_{2}\left( {0,{R}^{c}}\right) \) , where \( {B}_{i}\left( {0, R}\right) \) denotes the open ball of radius \( R \) for the metric induced by \( \parallel {\parallel }_{i} \) ; hence the topologies are equivalent. Conversely, assume that the topologies are equivalent. Since for any absolute value, \( \parallel x\parallel < 1 \) if and only if \( \begin{Vmatrix}{x}^{n}\end{Vmatrix} \rightarrow 0 \) as \( n \rightarrow \infty \) if and only if \( {x}^{n} \rightarrow 0 \) as \( n \rightarrow \infty \), it follows that \( \parallel x{\parallel }_{1} < 1 \) if and only if \( \parallel x{\parallel }_{2} < 1 \) . We may assume that there exists \( {x}_{0} \neq 0 \) satisfying this; otherwise, we would have \( \parallel x{\parallel }_{1} = \parallel x{\parallel }_{2} = 1 \) for all \( x \neq 0 \) . We define \( c \) by the equality \( {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{2} = {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{1}^{c} \), so that by what we have said we have \( c > 0 \) . If \( x \in {K}^{ * } \) is such that \( \parallel x{\parallel }_{1} < 1 \), we can define a real number \( \lambda > 0 \) by the equality \( \parallel x{\parallel }_{1} = {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{1}^{\lambda } \) . 
If \( m \) and \( n \) are positive integers such that \( m/n > \lambda \), then \( {\begin{Vmatrix}{x}_{0}^{m}/{x}^{n}\end{Vmatrix}}_{1} = {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{1}^{m - {n\lambda }} < 1 \), whence \( {\begin{Vmatrix}{x}_{0}^{m}/{x}^{n}\end{Vmatrix}}_{2} < 1 \) ; in other words, \( \parallel x{\parallel }_{2} > {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{2}^{m/n} \) . If we let \( m/n \) tend to \( \lambda \) keeping \( m/n > \lambda \), then since \( {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{2} < 1 \), by continuity we have \( \parallel x{\parallel }_{2} \geq {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{2}^{\lambda } \) . Using a sequence of rational numbers \( m/n \) such that \( m/n < \lambda \) and tending to \( \lambda \), we obtain in the same way \( \parallel x{\parallel }_{2} \leq {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{2}^{\lambda } \), so that finally \[ \parallel x{\parallel }_{2} = {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{2}^{\lambda } = {\begin{Vmatrix}{x}_{0}\end{Vmatrix}}_{1}^{c\lambda } = \parallel x{\parallel }_{1}^{c}, \] proving the lemma. As we have done in the above proof, from now on when we speak of absolute values we will implicitly assume that we exclude the trivial absolute value \( \parallel x\parallel = 1 \) for \( x \neq 0 \), which induces the discrete topology on \( K \) . Definition 4.1.4. We say that an absolute value \( \parallel \parallel \) on a field \( K \) is Archimedean if \( K \) has characteristic 0 and if there exists \( m \in \mathbb{Z} \) such that \( \parallel m\parallel > 1 \) . Otherwise, it is said to be non-Archimedean. The important result of this section, called Ostrowski's theorem, gives a complete description of all absolute values on a number field. We begin by describing the Archimedean ones. ## 4.1.2 Archimedean Absolute Values Lemma 4.1.5. Let \( K \) be a number field. 
The Archimedean absolute values are given by \( \parallel \alpha \parallel = {\left| \sigma \left( \alpha \right) \right| }^{c} \), where \( c > 0 \) and \( \sigma \) is any fixed embedding from \( K \) to \( \mathbb{C} \) . Proof. It is clear that the maps defined in this way are Archimedean absolute values; hence conversely, let \( \parallel \parallel \) be an Archimedean absolute value. Since any element of \( K \) may be written as \( x/y \) with \( x \) and \( y \) in \( {\mathbb{Z}}_{K} \), it is enough to prove the lemma for elements of \( {\mathbb{Z}}_{K} \) . In addition, replacing if necessary \( \parallel \parallel \) by \( \parallel {\parallel }^{a} \), we may assume that \( a = 1 \) in Definition 4.1.1, in other words that we have the ordinary triangle inequality \( \parallel x + y\parallel \leq \parallel x\parallel + \parallel y\parallel \) . We first prove the following claim: Claim. There exists \( c > 0 \) such that \( \parallel m\parallel = {\left| m\right| }^{c} \) for all \( m \in \mathbb{Z} \) . Indeed, by multiplicativity we know that \( \parallel 1\parallel = 1 \) and \( \parallel \pm 1{\parallel }^{2} = \parallel 1\parallel = 1 \) ; hence again by multiplicativity we may assume that \( m > 1 \) . For the moment fix some \( {m}_{0} > 1 \), and for any positive \( m \) and \( N \) write \( {m}^{N} \) in base \( {m}_{0} \), i.e., in the form \[ {m}^{N} = \mathop{\sum }\limits_{{0 \leq n \leq N\log \left( m\right) /\log \left( {m}_{0}\right) }}{a}_{n}{m}_{0}^{n}\text{ with }0 \leq {a}_{n} < {m}_{0}. \] Let \( A \) be an upper bound for all the \( \parallel a\parallel \) with \( 0 \leq a < {m}_{0} \) . 
By the triangle inequality, if \( \begin{Vmatrix}{m}_{0}\end{Vmatrix} \leq 1 \) we would have for all positive \( m \) and \( N \) \[ \parallel m{\parallel }^{N} \leq A\left( {1 + N\log \left( m\right) /\log \left( {m}_{0}\right) }\right) ; \] hence by taking \( N \) th roots and letting \( N \rightarrow \infty \) we would obtain \( \parallel m\parallel \leq 1 \) for all \( m \), in contradiction with the Archimedean assumption. Since this is true for all \( {m}_{0} > 1 \), it follows that for all \( m > 1 \) we have \( \parallel m\parallel > 1 \) . Applying again the triangle inequality, but using \( \begin{Vmatrix}{m}_{0}\end{Vmatrix} > 1 \), we obtain \[ \parallel m{\parallel }^{N} \leq A\left( {1 + N\log \left( m\right) /\log \left( {m}_{0}\right) }\right) {\begin{Vmatrix}{m}_{0}\end{Vmatrix}}^{N\log \left( m\right) /\log \left( {m}_{0}\right) }, \] hence again by taking \( N \) th roots and letting \( N \rightarrow \infty \) we now obtain \( \parallel m\parallel \leq {\begin{Vmatrix}{m}_{0}\end{Vmatrix}}^{\log \left( m\right) /\log \left( {m}_{0}\right) } \), in other words \( \parallel m{\parallel }^{1/\log \left( m\right) } \leq {\begin{Vmatrix}{m}_{0}\end{Vmatrix}}^{1/\log \left( {m}_{0}\right) } \) . Exchanging \( m \) and \( {m}_{0} \), we deduce that we have equality, in other words that \( C = \parallel m{\parallel }^{1/\log \left( m\right) } \) is independent of \( m > 1 \) . Thus \[ \parallel m\parallel = {C}^{\log \left( m\right) } = \exp \left( {\log \left( C\right) \log \left( m\right) }\right) = {m}^{\log \left( C\right) } = {m}^{c} \] with \( c = \log \left( C\right) > 0 \), proving our claim. We now fix this value of \( c \) . For any nonzero \( \alpha \in {\mathbb{Z}}_{K} \), order the complex embeddings \( {\sigma }_{i} \) so that \( \left| {{\sigma }_{i}\left( \alpha \right) }\right| \geq \left| {{\sigma }_{i + 1}\left( \alpha \right) }\right| \) (ordinary modulus in \( \mathbb{C} \) ). This ordering of course depends on \( \alpha \) . 
Let \( N \) be a positive integer, and write \[ \mathop{\prod }\limits_{\sigma }\left( {X - \sigma \left( {\alpha }^{N}\right) }\right) = {X}^{n} + {a}_{n - 1}{X}^{n - 1} + \cdots + {a}_{0}, \] and for all \( m \leq n \) define \( {P}_{m} = \mathop{\prod }\limits_{{1 \leq i \leq m}}\left| {{\sigma }_{i}\left( {\alpha }^{N}\right) }\right| \) . The \( \pm {a}_{n - m} \) are the elementary symmetric functions of the \( \sigma \left( {\alpha }^{N}\right) \), and are equal to a sum of \( \left( \begin{matrix} n \\ m \end{matrix}\right) \) monomials in the \( \sigma \left( {\alpha }^{N}\right) \) . Because of our ordering, the largest modulus of these monomials is equal to \( \pm {P}_{m} \) ; hence \( \left| {a}_{n - m}\right| \leq \left( \begin{array}{l} n \\ m \end{array}\ri
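Stepping back to the start of this section, the \( p \)-adic example of Definition 4.1.1 and the exponent of Lemma 4.1.3 are easy to check numerically. A minimal Python sketch; the constant choices \( C = 2 \) and \( C = 7 \) and all helper names are illustrative, not from the text:

```python
from fractions import Fraction
import math

def v_p(x, p):
    """Exponent of the prime p in the factorization of the nonzero rational x."""
    x = Fraction(x)
    num, den, n = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

def abs_p(x, p, C=2.0):
    """p-adic absolute value |x|_p = C^(-v_p(x)), with |0|_p = 0."""
    return 0.0 if x == 0 else C ** (-v_p(x, p))

# Definition 4.1.1 holds with a = 1; in fact the inequality is ultrametric:
assert abs_p(12, 2) == 0.25                            # v_2(12) = 2
assert abs_p(12, 2) * abs_p(20, 2) == abs_p(240, 2)    # multiplicativity
assert abs_p(12 + 20, 2) <= max(abs_p(12, 2), abs_p(20, 2))

# Lemma 4.1.3: the same valuation with constants C1, C2 gives equivalent
# absolute values, related by the exponent c = log(C2)/log(C1).
C1, C2 = 2.0, 7.0
c = math.log(C2) / math.log(C1)
for n in (2, 6, 12, 40, 96):
    assert abs(abs_p(n, 2, C2) - abs_p(n, 2, C1) ** c) < 1e-12
```

The ultrametric check illustrates why the \( p \)-adic absolute values are non-Archimedean in the sense of Definition 4.1.4: \( {\left| m\right| }_{p} \leq 1 \) for every integer \( m \).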
109_The rising sea Foundations of Algebraic Geometry
Definition 1.95
Definition 1.95. The matrix \( M = {\left( m\left( s, t\right) \right) }_{s, t \in S} \) is called the Coxeter matrix associated to \( W \) . More precisely, \( M \) is associated to \( W \) together with a choice of fundamental chamber. It is an \( n \times n \) matrix whose rows and columns are indexed by the set \( S \) of fundamental reflections. The short explanation of Corollary 1.94 is that the Coxeter matrix determines the Gram matrix and the Gram matrix determines \( \left( {W, V}\right) \) . Note that we have the following explicit formula for \( s \) in terms of the inner product, and hence in terms of the Coxeter matrix: \[ s\left( x\right) = x - 2\left\langle {{e}_{s}, x}\right\rangle {e}_{s}. \] (1.23) This is simply formula (1.1) specialized to the case that \( \alpha \) is a unit vector. Remark 1.96. The Coxeter matrix has the following formal properties: It is a symmetric matrix of integers \( m\left( {s, t}\right) \), with \( m\left( {s, s}\right) = 1 \) and \( m\left( {s, t}\right) \geq 2 \) for \( s \neq t \) . But not every such matrix can be the Coxeter matrix of a finite reflection group. A further necessary (and, as we will see in Section 2.5.4, sufficient) condition is that the matrix \( A \mathrel{\text{:=}} {\left( -\cos \left( \pi /m\left( s, t\right) \right) \right) }_{s, t \in S} \) must be positive definite. This fact, together with Corollary 1.94, is the basis for the classification result stated in Section 1.3. Indeed, the proof of that result in Bourbaki [44], Grove-Benson [124], and Humphreys [133] consists in analyzing the possibilities for \( M \), given that \( A \) is positive definite. Exercise 1.97. What happens to \( M \) if we change the choice of \( C \) ? ## 1.5.6 The Coxeter Diagram Instead of working directly with the Coxeter matrix \( M \), one usually works with a diagram called the Coxeter diagram, which encodes all the information in \( M \) . 
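Before turning to the diagram, note that formula (1.23) is concrete enough to test directly. A minimal Python sketch; the vectors below are arbitrary examples, not from the text:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(e, x):
    """Formula (1.23): s(x) = x - 2 <e_s, x> e_s, for a unit vector e."""
    c = 2 * dot(e, x)
    return [xi - c * ei for xi, ei in zip(x, e)]

e = [1.0, 0.0]                       # unit normal of the hyperplane x_1 = 0
assert reflect(e, [3.0, 4.0]) == [-3.0, 4.0]              # reflects across H_s
assert reflect(e, reflect(e, [3.0, 4.0])) == [3.0, 4.0]   # s is an involution
assert reflect(e, [0.0, 5.0]) == [0.0, 5.0]               # fixes e_s-perp pointwise
```

The three assertions are exactly the defining properties of the reflection \( s \) with respect to the wall \( {H}_{s} = {e}_{s}^{ \bot } \).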
The diagram has \( n \) vertices, one for each \( s \in S \), and the vertices corresponding to distinct elements \( s, t \) are connected by an edge if and only if \( m\left( {s, t}\right) \geq 3 \) . If \( m\left( {s, t}\right) \geq 4 \), then there is more than one convention in the literature as to how to indicate this in the diagram; the one we will follow is simply to label the edge with the number \( m\left( {s, t}\right) \) . In summary, a labeled edge (with label necessarily at least 4) indicates the value of the corresponding \( m\left( {s, t}\right) \) ; an unlabeled edge indicates that \( m\left( {s, t}\right) = 3 \) ; and the lack of an edge joining \( s \) and \( t \) indicates that \( m\left( {s, t}\right) = 2 \) . The Coxeter diagrams for all of the irreducible finite reflection groups are shown in Table 1.1. Based on the examples we have given (and Exercise 1.92), the reader should be able to check that the diagrams are correct for the cases \( {\mathrm{A}}_{n},{\mathrm{C}}_{n},{\mathrm{D}}_{n},{\mathrm{G}}_{2},{\mathrm{H}}_{3} \), and \( {\mathrm{I}}_{2}\left( m\right) \) . Remarks 1.98. (a) Note that the diagrams that occur in this table are very special. For example, the graphs are all trees; there is very little branching in these trees; and the edge labels are rarely necessary (i.e., the numbers \( m\left( {s, t}\right) \) are rarely bigger than 3). One does not need the full force of the classification theorem in order to know these properties; in fact, these properties are among the first few observations that occur in the proof of the classification theorem given in the cited references. (b) Readers who have studied Lie theory will be familiar with the Dynkin diagram of a root system. 
The Dynkin diagram is similar to the Coxeter diagram, but it contains slightly more information; in particular, it contains enough information to distinguish the root system of type \( {\mathrm{B}}_{n} \) from that of type \( {\mathrm{C}}_{n} \) for \( n \geq 3 \), even though these root systems have the same Weyl group. (c) In the diagrams corresponding to root systems (all but the last three diagrams in Table 1.1), the only edge labels that occur are 4 and 6. According to a common convention different from the one we have adopted, one omits these labels and instead draws a double bond (two parallel edges) when \( m\left( {s, t}\right) = 4 \) and a triple bond (three parallel edges) when \( m\left( {s, t}\right) = 6 \) . ## Exercises 1.99. Compute the Coxeter diagrams for the reflection groups of type \( {\mathrm{D}}_{2} \) and \( {\mathrm{D}}_{3} \) . Why aren’t these listed in the table? 1.100. Show that an essential finite reflection group \( \left( {W, V}\right) \) is irreducible if and only if the graph underlying its Coxeter diagram is connected. Deduce, in the reducible case, a canonical decomposition \[ \left( {W, V}\right) \cong \left( {{W}_{1} \times \cdots \times {W}_{k},{V}_{1} \oplus \cdots \oplus {V}_{k}}\right) \] ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_71_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_71_0.jpg) Table 1.1. Coxeter diagrams of the irreducible finite reflection groups. into "irreducible components," one for each connected component of the Cox-eter diagram. 1.101. Let \( \left( {W, V}\right) \) be an essential irreducible finite reflection group. The purpose of this exercise is to show that \( \left( {W, V}\right) \) is also irreducible in the sense of representation theory, i.e., the only \( W \) -invariant subspaces of \( V \) are \( \{ 0\} \) and \( V \) . Let \( {V}^{\prime } \) be a \( W \) -invariant subspace. 
(a) For each \( s \in S \), show that either \( {V}^{\prime } \) contains \( {e}_{s} \) or \( {V}^{\prime } \) is contained in the hyperplane \( {H}_{s} \mathrel{\text{:=}} {e}_{s}{}^{ \bot } \) . (b) If \( {V}^{\prime } \) contains \( {e}_{s} \) for some \( s \in S \), show that \( {V}^{\prime } \) contains \( {e}_{s} \) for all \( s \in S \) . (c) Deduce from (a) and (b) that \( {V}^{\prime } = \{ 0\} \) or \( V \) . Thus the action of \( W \) on \( V \) is irreducible in the sense of representation theory. (d) Show that the only linear endomorphisms of \( V \) that commute with all elements of \( W \) are the scalar-multiplication operators. [This implies that the action of \( W \) on \( V \) is absolutely irreducible.] 1.102. Use Exercise 1.101 to give a new proof of Corollary 1.91(2): The center of an essential, irreducible finite reflection group \( W \) is trivial unless \( W \) contains -1, in which case the center is \( \{ \pm 1\} \) . ## 1.5.7 Fundamental Domain and Stabilizers When studying the action of a group on a set, one wants to know how many orbits there are and what the stabilizers are at typical points of these orbits. Both of these questions have extremely simple answers in the case of \( W \) acting on \( V \) . We need one bit of terminology. Definition 1.103. If a group \( G \) acts on a space \( X \), then we call a subset \( Y \subseteq X \) a strict fundamental domain if \( Y \) is closed and is a set of representatives for the \( G \) -orbits in \( X \) . Theorem 1.104. Let \( \left( {W, V}\right) \) be a finite reflection group, \( C \) a chamber, and \( S \) the set of reflections with respect to the walls of \( C \) . Then \( \bar{C} \) is a strict fundamental domain for the action of \( W \) on \( V \) . Moreover, the stabilizer \( {W}_{x} \) of a point \( x \in \bar{C} \) is the subgroup \( \left\langle {S}_{x}\right\rangle \) generated by \( {S}_{x} \mathrel{\text{:=}} \{ s \in S \mid {sx} = x\} \) . 
In particular, \( {W}_{x} \) fixes every point of \( \bar{A} \), where \( A \) is the cell containing \( x \) . Proof. Since \( W \) is transitive on the chambers, it is clear that every point of \( V \) is \( W \) -equivalent to a point of \( \bar{C} \) . Everything else in the theorem will follow if we prove the following claim: For \( x, y \in \bar{C} \) and \( w \in W \), if \( {wx} = y \) then \( x = y \) and \( w \in \left\langle {S}_{x}\right\rangle \) . We argue by induction on the length \( l \mathrel{\text{:=}} l\left( w\right) \) of \( w \) with respect to \( S \) . If \( l = 0 \) there is nothing to prove, so assume \( l > 0 \) and choose a reduced decomposition \( w = {s}_{1}\cdots {s}_{l} \) . Since the corresponding gallery from \( C \) to \( {wC} \) is minimal (Corollary 1.75), we know that \( C \) and \( {wC} \) are separated by the wall \( {H}_{1} \) fixed by \( {s}_{1} \) . We therefore have \[ {wx} = y \in \bar{C} \cap w\bar{C} \subseteq {H}_{1}. \] So if we apply \( {s}_{1} \) to both sides of the equation \( {wx} = y \), we obtain \[ {w}^{\prime }x = {s}_{1}y = y \] where \( {w}^{\prime } \mathrel{\text{:=}} {s}_{1}w = {s}_{2}\cdots {s}_{l} \) . By the induction hypothesis, it follows that \( x = y \) [whence \( {s}_{1} \in {S}_{x} \) ] and that \( {w}^{\prime } \in \left\langle {S}_{x}\right\rangle \) . So \( w = {s}_{1}{w}^{\prime } \) is also in \( \left\langle {S}_{x}\right\rangle \), and the proof is complete. Corollary 1.105. For any cell \( A \), the stabilizer \( {W}_{A} \) of \( A \) (as a set) fixes \( A \) pointwise. Proof. We may assume that \( A \) is a face of the fundamental chamber and hence that \( A \subseteq \bar{C} \) . Then no two distinct points of \( A \) are \( W \) -equivalent, and the result follows at once. Exercise 1.106. Let \( A \) and \( B \) be cells, and let \( {AB} \) be their product (Section 1.4.6). Show that \( {W}_{AB} = {W}_{A} \cap {W}_{B} \) . 
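Looking back at Section 1.5.6, the edge conventions of the Coxeter diagram are purely mechanical. A minimal Python sketch; the \( {\mathrm{A}}_{3} \) and \( {\mathrm{C}}_{3} \) matrices below are standard examples supplied here for illustration, not copied from Table 1.1:

```python
def coxeter_edges(M):
    """Edges of the Coxeter diagram of the matrix M = (m(s,t)):
    s, t are joined iff m(s,t) >= 3; the edge carries a label
    only when m(s,t) >= 4 (None means an unlabeled edge)."""
    n = len(M)
    return [(s, t, M[s][t] if M[s][t] >= 4 else None)
            for s in range(n) for t in range(s + 1, n) if M[s][t] >= 3]

A3 = [[1, 3, 2],
      [3, 1, 3],
      [2, 3, 1]]
C3 = [[1, 4, 2],
      [4, 1, 3],
      [2, 3, 1]]
assert coxeter_edges(A3) == [(0, 1, None), (1, 2, None)]  # unlabeled path
assert coxeter_edges(C3) == [(0, 1, 4), (1, 2, None)]     # one edge labeled 4
```

Both diagrams come out as paths (trees with no branching), in line with Remark 1.98(a).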
## 1.5.8 The Poset \( \sum \) as a Simplicial Complex The fact that every chamber is a simplicial cone in the essential case suggests that the hyperplanes in \( \mathcal{H} \) cut the unit sphere in \( V \) into (spherical) simplices. Thus it seems intuitively clear that the poset \( \sum \mathrel{\text{:=}} \sum \left( {W, V}\right) \) of cells can be identified with the poset of simplices of a simplicial complex that triangulates a sphere of dimension \( \operatorname{rank}\left( {W, V}\right) -
1139_(GTM44)Elementary Algebraic Geometry
Definition 7.5
Definition 7.5. An ideal in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \) is closed if it is the intersection of some set of prime ideals in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \) . For any ideal \( \mathfrak{a} \subset \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \), the map \( \mathfrak{a} \rightarrow \sqrt{\mathfrak{a}} = \mathop{\bigcap }\limits_{{\mathfrak{P} \supset \mathfrak{a}}}\mathfrak{P} \) is a closure map. Thus we see (from Lemma 2.6) that for closed ideals \( {\mathfrak{c}}_{1},{\mathfrak{c}}_{2} \subset \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \), defining \( + \) by \( {\mathfrak{c}}_{1} + {\mathfrak{c}}_{2} = \sqrt{{\mathfrak{c}}_{1} + {\mathfrak{c}}_{2}} \) (the radical of the ordinary ideal sum) makes the set \( \mathcal{J} \) of closed ideals into a lattice \( \left( {\mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) ,\cap , + }\right) \) . That these closed ideals do indeed correspond under \( {h}_{\mathfrak{p}}{}^{-1} \) to the closed ideals of \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) containing \( \mathfrak{p} \) is shown by Lemma 7.6. \( {h}_{\mathfrak{p}}{}^{-1} \) defines a natural lattice-embedding of \[ \left( {\mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) ,\cap , + }\right) \] into \( \left( {\mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) ,\cap , + }\right) \) . Proof. Clearly \( {h}_{\mathfrak{p}}{}^{-1} \) defines a set-injection of \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) into \( \mathcal{I}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \) . 
That this injection is actually into \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \) , i.e., that \( {h}_{\mathfrak{p}}{}^{-1} \) embeds \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) into \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \), may be seen as follows: First note that for any ideal \( \mathfrak{a} \supset \mathfrak{p} \) in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack ,{h}_{\mathfrak{p}}{}^{-1} \) induces a 1:1-onto map \( \mathfrak{P} \rightarrow {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{P}\right) \) from the set of prime ideals \( \mathfrak{P} \) of \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \) containing \( \mathfrak{a}/\mathfrak{p} \), to the set of prime ideals of \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) containing a. (Note that \( {h}_{\mathfrak{p}}{}^{-1} \) preserves primality of ideals in \[ \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \] as does \( {h}_{\mathfrak{p}} \) for those ideals of \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) containing \( \mathfrak{p} \) .) 
Now let \( \mathfrak{c} \) be any ideal in \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) ; then \[ {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{c}\right) = {h}_{\mathfrak{p}}{}^{-1}\left( {\mathop{\bigcap }\limits_{{\mathfrak{P} \supset \mathfrak{c}}}\mathfrak{P}}\right) = \mathop{\bigcap }\limits_{{\mathfrak{P} \supset \mathfrak{c}}}\left( {{h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{P}\right) }\right) = \mathop{\bigcap }\limits_{{{\mathfrak{P}}^{\prime } \supset {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{c}\right) }}{\mathfrak{P}}^{\prime } \] \[ = \sqrt{{h}_{\mathfrak{p}}^{-1}\left( \mathfrak{c}\right) } \in \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) . \] It is easy to see that this embedding preserves lattice structure, for \( {h}_{\mathfrak{p}}{}^{-1} \) preserves intersection (from (11)); it also preserves \( + \), that is, for closed ideals \( {\mathfrak{c}}_{1},{\mathfrak{c}}_{2} \) \[ {h}_{\mathfrak{p}}{}^{-1}\left( \sqrt{{\mathfrak{c}}_{1} + {\mathfrak{c}}_{2}}\right) = \sqrt{{h}_{\mathfrak{p}}{}^{-1}\left( {\mathfrak{c}}_{1}\right) + {h}_{\mathfrak{p}}{}^{-1}\left( {\mathfrak{c}}_{2}\right) }. \] This follows since \( {h}_{\mathfrak{p}}{}^{-1} \) preserves sum (from (12)) and radical. Now that we have shown that \( {h}_{\mathfrak{p}}{}^{-1} \) induces lattice embeddings, it is natural to ask if it likewise preserves decomposition of ideals into irreducibles. It does indeed. Since any homomorphic image of a Noetherian ring is Noetherian, the p.o. set \( \mathcal{I}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) satisfies the a.c.c., so a fortiori \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) does too. Hence any element in either of these sets has an irredundant decomposition into irreducibles. 
This decomposition is unique in the case of \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) since it is distributive. It is isomorphic to a sublattice of \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \), which itself is isomorphic to the distributive lattice of subvarieties \( \mathcal{V}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \) . Now if \( \mathfrak{a} \subset \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \) is irreducible, then \( {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{a}\right) \subset \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) is too; this is obvious from the definition of irreducibility. Therefore if \( \mathfrak{a} = {\mathfrak{a}}_{1} \cap \ldots \cap {\mathfrak{a}}_{r} \) is a decomposition of \( \mathfrak{a} \in \mathcal{I}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) into irreducibles, then \[ {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{a}\right) = {h}_{\mathfrak{p}}{}^{-1}\left( {\mathfrak{a}}_{1}\right) \cap \ldots \cap {h}_{\mathfrak{p}}{}^{-1}\left( {\mathfrak{a}}_{r}\right) \] is a decomposition of \( {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{a}\right) \) into irreducibles in \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) . And if \( \mathfrak{a} \in \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \), then since \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \) is distributive, this decomposition is unique. 
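The closure map \( \mathfrak{a} \rightarrow \sqrt{\mathfrak{a}} = \mathop{\bigcap }\limits_{{\mathfrak{P} \supset \mathfrak{a}}}\mathfrak{P} \) has a familiar toy analogue in \( \mathbb{Z} \) (not the coordinate-ring setting of these lemmas, but the same formal behavior): the radical of \( \left( n\right) \) is generated by the product of the distinct primes dividing \( n \). A minimal Python sketch:

```python
def rad(n):
    """Product of the distinct primes dividing n >= 1:
    the generator of sqrt((n)) in Z."""
    r, p = 1, 2
    while n > 1:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r

# Closure-map behavior: extensive ((n) is contained in its radical,
# i.e. rad(n) divides n) and idempotent.
for n in (12, 360, 49, 30):
    assert n % rad(n) == 0
    assert rad(rad(n)) == rad(n)

# "Closed = intersection of primes": sqrt((12)) = (2) ∩ (3) = (6).
assert rad(12) == 6
```

Here \( \left( {12}\right) \) is not closed, while its closure \( \left( 6\right) \) is the irredundant intersection of the two primes \( \left( 2\right) \) and \( \left( 3\right) \) containing it.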
Notice that in the proofs of Lemmas 7.4 and 7.6, no use was made of the fact that one of the rings was of the specific form \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) ; as the reader can easily verify, the proofs go through verbatim using any coordinate ring \( R \) and the natural homomorphism \( {h}_{\mathfrak{p}} \) to \( R/\mathfrak{p} \) . The same comments apply to unique decomposition of \( {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{a}\right) \), for any ideal \( \mathfrak{a} \) in \( R/\mathfrak{p} \) . Since we refer later to the more general forms of these lemmas, we state them explicitly: Lemma 7.7. Let \( R \) be any coordinate ring, \( \mathfrak{p} \) any prime ideal of \( R \), and \( {h}_{\mathfrak{p}} \) the natural homomorphism \( {h}_{\mathfrak{p}} : R \rightarrow R/\mathfrak{p} \) . Then \( {h}_{\mathfrak{p}}{}^{-1} \) induces a natural lattice-embedding \( \mathfrak{a} \rightarrow {h}_{\mathfrak{p}}{}^{-1}\left( \mathfrak{a}\right) \) of \( \left( {\mathcal{I}\left( {R/\mathfrak{p}}\right) ,\cap , + }\right) \) into \( \left( {\mathcal{I}\left( R\right) ,\cap , + }\right) \) . Lemma 7.8. \( {h}_{\mathfrak{p}}{}^{-1} \) above defines a natural lattice-embedding of \( \left( {\mathcal{J}\left( {R/\mathfrak{p}}\right) ,\cap , + }\right) \) into \( \left( {\mathcal{J}\left( R\right) ,\cap , + }\right) \) . 
In this section we have so far made no mention of a lattice \[ \text{ “ }\mathcal{V}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p},\cap , \cup }\right) \text{.” } \] Since \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) is embedded in \( \mathcal{J}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \), it is natural to ask if in some sense we can make a statement like " \( \mathcal{V}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) is embedded in \( \mathcal{V}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack }\right) \) ." For this we need an appropriate definition of \( \mathcal{V}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) . As we saw at the beginning of this section, \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \) may be looked at as a ring of functions on the subvariety \( \mathrm{V}\left( \mathfrak{p}\right) \) in \( {\mathbb{C}}_{{X}_{1},\ldots ,{X}_{n}};\mathcal{V}\left( {\mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p}}\right) \) then becomes the lattice of subvarieties of \( V\left( p\right) \) . In this way the embedding statement would indeed hold. But there arises a question: We were able to produce the above \( V\left( p\right) \) because the coordinate ring was presented in the particular form \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack /\mathfrak{p} \) . However there are many ways of writing a given coordinate ring \( R \) as a quotient ring of some \( \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{m}}\right\rbrack \), and for each
106_106_The Cantor function
Definition 6.1
Definition 6.1. Let \( \mathcal{T} = \left( {\mathcal{R}, A, C}\right) \) and \( {\mathcal{T}}^{\prime } = \left( {{\mathcal{R}}^{\prime },{A}^{\prime },{C}^{\prime }}\right) \) be first-order theories. We say \( {\mathcal{T}}^{\prime } \) extends \( \mathcal{T} \), and write \( {\mathcal{T}}^{\prime } \supseteq \mathcal{T} \), if \( {\mathcal{R}}^{\prime } \supseteq \mathcal{R},{A}^{\prime } \supseteq A \) and \( {C}^{\prime } \supseteq C \) . - Definition 6.2. Let \( {\mathcal{T}}^{\prime } \supseteq \mathcal{T} \), and let \( M = \left( {M, v,\psi }\right) \) be a model of \( \mathcal{T} \) . We say that \( M \) extends to a model of \( {\mathcal{T}}^{\prime } \) if there exist \( {v}^{\prime } : {C}^{\prime } \rightarrow M \) and \( {\psi }^{\prime } : {\mathcal{R}}^{\prime } \rightarrow \operatorname{rel}\left( M\right) \), extending \( v,\psi \) respectively, such that \( \left( {M,{v}^{\prime },{\psi }^{\prime }}\right) \) is a model of \( {\mathcal{T}}^{\prime } \) . Definition 6.3. Let \( \mathcal{T} = \left( {\mathcal{R}, A, C}\right) \supseteq {\mathcal{N}}_{0} \) and have \( \mathbf{N} \) as model. Let \( f : U \rightarrow \mathbf{N} \) be a function defined on some subset \( U \) of \( \mathbf{N} \) . We say that \( f \) is strongly definable in \( \mathcal{T} \) if there is an element \( p\left( {x, y}\right) \in P\left( {V,\mathcal{R}}\right) \) such that, for all \( m, n \in \mathbf{N},\mathcal{T} \vdash p\left( {m, n}\right) \) if and only if \( m \in U \) and \( f\left( m\right) = n \) . The definition is extended in the obvious way for functions of several variables. The key result we intend to prove is that if \( \mathcal{T} \supseteq {\mathcal{N}}_{0} \) and has \( \mathbf{N} \) as model, then every partial recursive function is strongly definable in \( \mathcal{T} \) . The proof is tedious, although the idea is simple - we build up descriptions of the state of a given Turing machine as a function of the input and the number of steps performed. 
Definition 6.4. The state function corresponding to the state \( \left\lbrack {{s}_{{\beta }_{\ell }}\cdots {s}_{{\beta }_{1}}{q}_{i}{s}_{{\alpha }_{1}}\cdots {s}_{{\alpha }_{k}}}\right\rbrack \) is the function \( f : \mathbf{N} \rightarrow \mathbf{N} \) given by
\[ f\left( 0\right) = G\left( {q}_{i}\right) , \]
\[ f\left( {{2j} + 1}\right) = G\left( {s}_{{\alpha }_{j + 1}}\right) ,\;0 \leq j \leq k - 1, \]
\[ f\left( {{2j} + 1}\right) = G\left( {s}_{0}\right) ,\;j \geq k, \]
\[ f\left( {{2j} + 2}\right) = G\left( {s}_{{\beta }_{j + 1}}\right) ,\;0 \leq j \leq \ell - 1, \]
\[ f\left( {{2j} + 2}\right) = G\left( {s}_{0}\right) ,\;j \geq \ell . \]
State functions are always strongly definable, as they take the value \( G\left( {s}_{0}\right) \) except on a finite set. For a given Turing machine, it is easy to construct a description of the state function \( {f}_{1} \) produced from an initial state \( f \) after one step of the computation. Continuing, one can produce, for any \( n \), a description of the state function \( {f}_{n} \) after \( n \) steps. The difficulty in this approach is that the complexity of the description so obtained increases with \( n \), whereas we need a single description of \( {f}_{x}\left( y\right) \) as a function of the two variables \( x, y \). Fortunately, there is a trick which allows us to give bounded definitions of arbitrary finite sequences.

Lemma 6.5 (The Sequence Number Lemma). There exists a strongly definable function \( \operatorname{seq} : {\mathbf{N}}^{ + } \times \mathbf{N} \rightarrow \mathbf{N} \) such that, for any \( n \) and \( {a}_{0},{a}_{1},\ldots ,{a}_{n} \in \mathbf{N} \), there exists \( b \in {\mathbf{N}}^{ + } \) with the property that \( \operatorname{seq}\left( {b, r}\right) = {a}_{r} \) for \( r = 0,\ldots, n \).

Proof. Let \( T\left( n\right) \) denote the \( n \) th triangular number:
\[ T\left( n\right) = 1 + 2 + \cdots + n = \frac{1}{2}n\left( {n + 1}\right) .
\]
For each \( z > 0 \), there is a unique \( n \) such that
\[ T\left( n\right) < z \leq T\left( {n + 1}\right) = T\left( n\right) + n + 1. \]
Thus \( z \) is uniquely expressible as \( z = T\left( n\right) + y \) with \( 0 < y \leq n + 1 \). (We choose this range for \( y \) because later we shall need \( y \neq 0 \).) Put \( x = n + 2 - y \). Then \( x, y \) are uniquely determined functions of \( z \), which we denote by \( L\left( z\right) \), \( R\left( z\right) \) respectively. Put \( P\left( {x, y}\right) = T\left( {x + y - 2}\right) + y \). Then \( P, L, R \) are strongly definable functions, for we may regard \( z = P\left( {x, y}\right) \) as an abbreviation for
\[ \left( {x > 0}\right) \land \left( {y > 0}\right) \land \left( {{2z} = \left( {x + y - 2}\right) \left( {x + y - 1}\right) + {2y}}\right) , \]
\( x = L\left( z\right) \) as one for
\[ \left( {x > 0}\right) \land \left( {z > 0}\right) \land \left( {\exists y}\right) \left( {\left( {y > 0}\right) \land \left( {{2z} = \left( {x + y - 2}\right) \left( {x + y - 1}\right) + {2y}}\right) }\right) , \]
and \( y = R\left( z\right) \) as one for
\[ \left( {y > 0}\right) \land \left( {z > 0}\right) \land \left( {\exists x}\right) \left( {\left( {x > 0}\right) \land \left( {{2z} = \left( {x + y - 2}\right) \left( {x + y - 1}\right) + {2y}}\right) }\right) . \]
The function \( \operatorname{seq}\left( {b, r}\right) \) is defined to be the remainder on division of \( L\left( b\right) \) by \( 1 + \left( {r + 1}\right) R\left( b\right) \). This is strongly definable, the relation \( z = \operatorname{seq}\left( {x, y}\right) \) being given by
\[ \left( {x > 0}\right) \land \left( {z < 1 + \left( {y + 1}\right) R\left( x\right) }\right) \land \left( {\exists t}\right) \left( {L\left( x\right) = t\left( {1 + \left( {y + 1}\right) R\left( x\right) }\right) + z}\right) .
\]
Finally, given \( {a}_{0},{a}_{1},\ldots ,{a}_{n} \in \mathbf{N} \), we have to find \( b \in {\mathbf{N}}^{ + } \) such that \( \operatorname{seq}\left( {b, r}\right) = {a}_{r} \) for \( 0 \leq r \leq n \). Pick \( c \in \mathbf{N} \) such that \( c > {a}_{r} \) for \( 0 \leq r \leq n \) and such that \( c \) is divisible by each of \( 1,2,\ldots, n \). Put \( {m}_{r} = 1 + \left( {r + 1}\right) c \), \( r = 0,\ldots, n \). Then \( {m}_{r} \) and \( {m}_{s} \) are relatively prime for every pair \( r, s \) with \( 0 \leq r < s \leq n \) : if \( d \) is a common divisor of \( {m}_{r} \) and \( {m}_{s} \), then \( d \) also divides \( \left( {s + 1}\right) {m}_{r} - \left( {r + 1}\right) {m}_{s} = s - r \). Hence \( d \) divides \( c \), and the definition of \( {m}_{r} \) now shows that \( d = 1 \). We may therefore apply the Chinese Remainder Theorem (see [10], p. 135) to the system of congruences
\[ x \equiv {a}_{r}{\;\operatorname{mod}\;{m}_{r}}\;\left( {r = 0,\ldots, n}\right) . \]
Let \( e \) be a positive solution to this system, and put \( b = P\left( {e, c}\right) \). Then \( e = L\left( b\right) \), \( c = R\left( b\right) \), \( L\left( b\right) \equiv {a}_{r}{\;\operatorname{mod}\;\left( {1 + \left( {r + 1}\right) R\left( b\right) }\right) } \), and \( {a}_{r} < c < 1 + \left( {r + 1}\right) R\left( b\right) \), showing that \( {a}_{r} = \operatorname{seq}\left( {b, r}\right) \) for \( r = 0,\ldots, n \).

## Exercises

6.6. Given \( m, n, r \in \mathbf{N} \) such that \( m + n = r \), prove that \( {\mathcal{N}}_{0} \vdash a\left( {m, n, r}\right) \). Hence show that if \( \mathcal{T} \supseteq {\mathcal{N}}_{0} \) and has \( \mathbf{N} \) as a model, then \( \mathcal{T} \vdash a\left( {m, n, r}\right) \) implies \( {\mathcal{N}}_{0} \vdash a\left( {m, n, r}\right) \) for \( m, n, r \in \mathbf{N} \). Do the same thing for multiplication.

6.7.
For \( m, n, r \in \mathbf{N} \) and \( \mathcal{T} \supseteq {\mathcal{N}}_{0} \) with \( \mathbf{N} \) as model, show that \( \mathcal{T} \vdash \operatorname{seq}\left( {m, n}\right) = r \) if and only if \( \operatorname{seq}\left( {m, n}\right) = r \). (This shows that the formula given above as a definition in \( \mathcal{T} \) of seq indeed strongly defines seq.)

The sequence number function defined in Lemma 6.5 enables us to give definitions in \( \mathcal{T} \) of various functions describing a computation by a Turing machine \( M \). We give the definitions and leave the reader to verify them. If \( M \) has a quadruple \( \left( {{q}_{\alpha },{s}_{\beta }, a, b}\right) \), we define \( {M}_{\alpha ,\beta }\left( {x, y, z}\right) \in P\left( {V,\mathcal{R}}\right) \) as follows. We have \( b = {q}_{\gamma } \) for some \( \gamma \), and \( a = {s}_{{\beta }^{\prime }} \) (for some \( {\beta }^{\prime } \) ) or \( a = L \) or \( a = R \). Put
\[ {M}_{\alpha ,\beta }\left( {x, y, z}\right) = \left( {\operatorname{seq}\left( {x,0}\right) = G\left( {q}_{\alpha }\right) }\right) \land \left( {\operatorname{seq}\left( {x,1}\right) = G\left( {s}_{\beta }\right) }\right) \land \left( {y = 0 \Rightarrow z = G\left( {q}_{\gamma }\right) }\right) \land K\left( {x, y, z}\right) , \]
where
\[ K\left( {x, y, z}\right) = \left( {y = 1 \Rightarrow z = G\left( {s}_{{\beta }^{\prime }}\right) }\right) \land \left( {y > 1 \Rightarrow z = \operatorname{seq}\left( {x, y}\right) }\right) \;\text{ if }\;a = {s}_{{\beta }^{\prime }}, \]
\[ K\left( {x, y, z}\right) = \left\lbrack {\left( {\left( {\exists k}\right) \left( {y = {2k} + 1}\right) }\right) \Rightarrow z = \operatorname{seq}\left( {x, y + 2}\right) }\right\rbrack \land \left( {y = 2 \Rightarrow z = \operatorname{seq}\left( {x,1}\right) }\right) \land \left\lbrack {\left( {\left( {\exists k}\right) \left( {y = {2k} + 4}\right) }\right) \Rightarrow z = \operatorname{seq}\left( {x, y - 2}\right) }\right\rbrack \;\text{ if }\;a = R,
\] \[ \begin{matrix} K\left( {x, y, z}\right) = (y = 1 \Rightarrow z = \mathrm{{seq}}\left( {x,2}\right) \land \left\lbrack {\left( {\left( {\exists k}\right) \left( {y = {2k} + 3}\right) }\right) \Rightarrow z = \mathrm{{seq}}\left( {x, y - 2}\right) }\right\
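The pairing and sequence-coding machinery of Lemma 6.5 is entirely computable and can be checked numerically. The following sketch (an illustration added here; the function names are ours) implements \( T, P, L, R \), \( \operatorname{seq} \), and the Chinese Remainder construction of \( b \) from the proof:

```python
from math import isqrt, lcm

def T(n):
    """n-th triangular number."""
    return n * (n + 1) // 2

def P(x, y):
    """Pairing function of the proof: z = T(x + y - 2) + y, for x, y >= 1."""
    return T(x + y - 2) + y

def LR(z):
    """Inverse of P: the unique (x, y) with P(x, y) = z, i.e. (L(z), R(z))."""
    n = (isqrt(8 * z) - 1) // 2          # near the n with T(n) < z <= T(n+1)
    while T(n) >= z:
        n -= 1
    while T(n + 1) < z:
        n += 1
    y = z - T(n)
    return n + 2 - y, y                  # x = n + 2 - y

def seq(b, r):
    """Remainder of L(b) on division by 1 + (r + 1) R(b)."""
    x, y = LR(b)
    return x % (1 + (r + 1) * y)

def encode(a):
    """Find b with seq(b, r) = a[r] for all r, following the proof's CRT step."""
    n = len(a) - 1
    c = lcm(max(a) + 1, *range(1, n + 1))   # c > every a_r, divisible by 1..n
    e, M = 0, 1                             # build e with e = a_r mod 1+(r+1)c
    for r, ar in enumerate(a):
        m = 1 + (r + 1) * c                 # the pairwise coprime moduli m_r
        e += ((ar - e) * pow(M, -1, m)) % m * M
        M *= m
    return P(e if e else M, c)              # then e = L(b), c = R(b)

a = [3, 1, 4, 1, 5]
b = encode(a)
print([seq(b, r) for r in range(len(a))])   # [3, 1, 4, 1, 5]
```

Note how the verification of the lemma is mirrored line by line: `encode` returns \( b = P(e, c) \), so \( L(b) = e \equiv a_r \pmod{m_r} \) and \( a_r < c < m_r \), whence \( \operatorname{seq}(b, r) = a_r \).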
## 1044_(GTM205)Rational Homotopy Theory
Definition 1. The grade, \( {\operatorname{grade}}_{A}\left( M\right) \), of a right \( A \) -module is the least integer \( k \) such that \( {\operatorname{Ext}}_{A}^{k}\left( {M, A}\right) \neq 0 \). (If \( {\operatorname{Ext}}_{A}^{ * }\left( {M, A}\right) = 0 \) we say \( {\operatorname{grade}}_{A}\left( M\right) = \infty \).) The projective grade, \( \operatorname{proj}{\operatorname{grade}}_{A}\left( M\right) \), is the least integer \( k \) (or \( \infty \) ) such that \( {\operatorname{Ext}}_{A}^{k}\left( {M, P}\right) \neq 0 \) for some \( A \) -projective module \( P \).

2. If \( A = {\left\{ {A}_{i}\right\} }_{i \geq 0} \) and \( {A}_{0} = \mathbb{k} \) then the depth of \( A \), depth \( A \), is the grade of the trivial \( A \) -module \( \mathbb{k} \). The global dimension of \( A \), gldim \( A \), is the greatest integer \( k \) (or \( \infty \) ) such that \( {\operatorname{Ext}}_{A}^{k}\left( {\mathbb{k}, - }\right) \neq 0 \).

The main theorem of this section provides a connection between LS category and the homological notion of grade. Recall from \( §2\left( \mathrm{c}\right) \) that if
\[ f : X \rightarrow Y \]
is a continuous map then a holonomy action of the loop space \( {\Omega Y} \) is determined in the homotopy fibre \( F \) of \( f \). This makes \( {H}_{ * }\left( F\right) \) into a right \( {H}_{ * }\left( {\Omega Y}\right) \) -module. We shall prove:

- If \( X \) is normal and \( \left( {Y,{y}_{0}}\right) \) is well based, and if \( {H}_{ * }\left( F\right) \) and \( {H}_{ * }\left( {\Omega Y}\right) \) are \( \mathbb{k} \) -free then
\[ {\operatorname{projgrade}}_{{H}_{ * }\left( {\Omega Y}\right) }{H}_{ * }\left( F\right) \leq \operatorname{cat}f. \]
Moreover, if equality holds then
\[ {\operatorname{projgrade}}_{{H}_{ * }\left( {\Omega Y}\right) }{H}_{ * }\left( F\right) = \operatorname{cat}f = {\operatorname{projdim}}_{{H}_{ * }\left( {\Omega Y}\right) }{H}_{ * }\left( F\right) .
\]
When each \( {H}_{i}\left( F\right) \) and \( {H}_{j}\left( {\Omega Y}\right) \) have finite \( \mathbb{k} \) -bases we will replace projgrade by grade in this theorem. The special case \( f = \operatorname{id} : X \rightarrow X \) is sufficiently important that we restate the theorem for it:

- If \( X \) is path connected, well based and normal and if \( {H}_{ * }\left( {\Omega X}\right) \) is \( \mathbb{k} \) -free with a finite basis in each degree then
\[ \operatorname{depth}{H}_{ * }\left( {\Omega X}\right) \leq \operatorname{cat}X. \]
If equality holds then \( \operatorname{depth}{H}_{ * }\left( {\Omega X}\right) = \operatorname{cat}X = \operatorname{gldim}{H}_{ * }\left( {\Omega X}\right) \).

The depth theorem for topological spaces was originally deduced, for the case that \( \mathbb{k} = \mathbb{Q} \) and \( {H}_{ * }\left( X\right) \) has finite type, from a theorem on Sullivan algebras established in a joint paper with Jacobsson and Löfwall [54]. Subsequently it was extended for spaces to \( \mathbb{k} = {\mathbb{F}}_{p} \) (with \( {H}_{ * }\left( X\right) \) still of finite type) in a joint paper with Lemaire [85] using a different but similar approach. However the theorem for Sullivan models in [54] is for a coefficient field of any characteristic, and is applied as such to the Ext-algebra of a local commutative ring. The proof given here (for the more general grade theorem) is different in form, although based on the same underlying idea. We shall, however, also sketch the proof of the Sullivan algebra theorem in characteristic zero, since it provides an interesting application of the material in \( §{29} \) :

- If \( L \) is the homotopy Lie algebra of a minimal Sullivan algebra \( \left( {{\Lambda V}, d}\right) \) and if \( V = {\left\{ {V}^{i}\right\} }_{i \geq 2} \) is a graded vector space of finite type then
\[ \operatorname{depth}{UL} \leq \operatorname{cat}\left( {{\Lambda V}, d}\right) \leq \operatorname{gldim}{UL}.
\]
Moreover, if \( \operatorname{depth}{UL} = \operatorname{cat}\left( {{\Lambda V}, d}\right) \) then \( \operatorname{cat}\left( {{\Lambda V}, d}\right) = \operatorname{gldim}{UL} \).

Note: When \( \left( {{\Lambda V}, d}\right) \) is the Sullivan model of a simply connected space \( X \), then \( {UL} \cong {H}_{ * }\left( {\Omega X}\right) \) and \( {\operatorname{cat}}_{0}X = \operatorname{cat}\left( {{\Lambda V}, d}\right) \) (Theorem 21.5, Proposition 29.4), and so the two results coincide.

This section is organized into the following topics:

(a) Complexes of finite length.
(b) \( {\Omega Y} \) -spaces and \( {C}_{ * }\left( {\Omega Y}\right) \) -modules.
(c) The Milnor resolution of \( \mathbb{k} \).
(d) The grade theorem for a homotopy fibre.
(e) The depth of \( {H}_{ * }\left( {\Omega X}\right) \).
(f) The depth of \( {UL} \).
(g) The depth theorem for Sullivan algebras.

Both the grade theorem and the Sullivan algebra theorem have their roots in an elementary theorem about \( H\left( {{\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{ * }}\right) }\right) \), where \( A \) is a graded algebra, \( {P}_{ * } \) is an \( A \) -projective resolution of an \( A \) -module \( M \) and \( {Q}_{ * } \) is a complex of free \( A \) -modules of finite length. Topics (b)-(e) are then devoted to the grade theorem, which is proved in a topological setting with no reference to Sullivan algebras or graded Lie algebras. This material can be read with only Part I, Part III and \( §{27} \) as background. Topics (f) and (g) deal with the Sullivan algebra result.

## (a) Complexes of finite length.

Let \( A \) be a graded algebra (over \( \mathbb{k} \) ). As in \( §{34} \) we denote by \( {P}_{ * } = \left\{ {P}_{i}\right\} \) a chain complex of \( A \) -modules \( {P}_{i} = {P}_{i, * } \) of the form
\[ 0 \leftarrow {P}_{0, * }\overset{d}{ \leftarrow }{P}_{1, * } \leftarrow \cdots \]
in which \( {\left( {P}_{i, * }\right) }_{j} = {P}_{i, j - i} \).
If \( {Q}_{ * } \) is a second such chain complex then \( {\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{ * }}\right) \) will denote the bigraded complex of \( \mathbb{k} \) -modules given by
\[ {\operatorname{Hom}}_{A}{\left( {P}_{ * },{Q}_{ * }\right) }_{i, * } = \mathop{\prod }\limits_{j}{\operatorname{Hom}}_{A}\left( {{P}_{j},{Q}_{j + i}}\right) , \]
with \( {df} = d \circ f - {\left( -1\right) }^{\deg f}f \circ d \). Key to the proof of the grade theorem is

Lemma 35.1 Suppose \( {P}_{ * }\overset{ \simeq }{ \rightarrow }M \) is an \( A \) -projective resolution of an \( A \) -module \( M \) and suppose \( {Q}_{ * } = {\left\{ {Q}_{i}\right\} }_{0 \leq i \leq m} \) is a complex of free \( A \) -modules. Then
\[ {H}_{i, * }\left( {{\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{ * }}\right) }\right) = 0,\;i > m - {\operatorname{projgrade}}_{A}M. \]
proof: Set \( {Q}_{ * }^{\prime } = {\left\{ {Q}_{i}\right\} }_{0 \leq i \leq m - 1} \). Because the \( {P}_{i} \) are \( A \) -projective the sequence
\[ 0 \rightarrow {\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{ * }^{\prime }}\right) \rightarrow {\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{ * }}\right) \rightarrow {\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{m}}\right) \rightarrow 0 \]
is exact. Since \( {Q}_{m} \) is \( A \) -free, \( {H}_{i, * }\left( {{\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{m}}\right) }\right) = {\operatorname{Ext}}_{A}^{m - i}\left( {M,{Q}_{m}}\right) = 0 \) for \( i > m - {\operatorname{projgrade}}_{A}M \). By induction on \( m \), \( {H}_{i, * }\left( {{\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{ * }^{\prime }}\right) }\right) = 0 \) for \( i > m - {\operatorname{projgrade}}_{A}M \). The lemma follows.

Suppose next that an \( A \) -free module \( N \) has an \( A \) -basis \( \left\{ {x}_{\alpha }\right\} \) with only finitely many elements in each (ordinary) degree and that \( A \) and \( N \) are concentrated in degrees \( \geq 0 \).
It is then immediate that \[ {\operatorname{Hom}}_{A}\left( {-, N}\right) = \mathop{\prod }\limits_{\alpha }{x}_{\alpha } \cdot {\operatorname{Hom}}_{A}\left( {-, A}\right) , \] whence \[ {\operatorname{Ext}}_{A}\left( {-, N}\right) = \mathop{\prod }\limits_{\alpha }{x}_{\alpha } \cdot {\operatorname{Ext}}_{A}\left( {-, A}\right) . \] Thus we have Lemma 35.2 Suppose in the situation of Lemma 35.1 that \( A \) and \( {Q}_{i} \) are concentrated in nonnegative degrees, and that each \( {Q}_{i} \) has an \( A \) -basis with finitely many elements in each degree. Then \[ {H}_{i, * }\left( {{\operatorname{Hom}}_{A}\left( {{P}_{ * },{Q}_{ * }}\right) }\right) = 0,\;i > m - {\operatorname{grade}}_{A}M. \] (b) \( {\Omega Y} \) -spaces and \( {C}_{ * }\left( {\Omega Y}\right) \) -modules. Let \( \left( {Y,{y}_{0}}\right) \) be a based path connected topological space. Multiplication in the loop space \( {\Omega Y} \) makes it into a topological monoid \( \left( {§2\left( \mathrm{\;b}\right) }\right) \) . A topological space \( X \) equipped with a right \( {\Omega Y} \) -action will be called an \( {\Omega Y} \) -space and a map of \( {\Omega Y} \) -spaces is a continuous map that preserves the action. If \( X \) and \( Z \) are \( {\Omega Y} \) -spaces then \( {\Omega Y} \) acts diagonally on \( X \times Z \) via \( \left( {x, z}\right) \cdot \gamma = \left( {x \cdot \gamma, z \cdot \gamma }\right) \) . Next recall \( \left( {§8\left( \mathrm{a}\right) }\right) \) that multiplication in \( {\Omega Y} \) makes \( {C}_{ * }\left( {\Omega Y}\right) \) into a chain algebra via the Eilenberg-Zilber equivalence, \[ {C}_{ * }\left( {\Omega Y}\right) \otimes {C}_{ * }\left( {\Omega Y}\right) \overset{\mathrm{{EZ}}}{ \rightarrow }{C}_{ * }\left( {{\Omega Y} \times {\Omega Y}}\right) \xrightarrow[]{{C}_{ * }\left( \mathrm{{mult}}\right) }{C}_{ * }\left( {\Omega Y}\right) . 
\] In the same way, if \( X \) is any \( {\Omega Y} \) -space then the action defines a \( {C}_{ * }\left( {\Omega Y}\right) \) -module structure in \( {C}_{ * }\left( X\right) \) . For example, the constant map \( {\Omega Y} \rightarrow {pt} \) defines
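As a worked illustration of the depth statements above (an example added here, not part of the text): take \( X = {S}^{3} \) and \( \mathbb{k} = \mathbb{Q} \), so that \( {H}_{ * }\left( {\Omega {S}^{3};\mathbb{Q}}\right) \cong \mathbb{Q}\left\lbrack x\right\rbrack \) with \( \deg x = 2 \). The minimal free resolution of the trivial module \( \mathbb{Q} \) over \( A = \mathbb{Q}\left\lbrack x\right\rbrack \) is \( 0 \rightarrow A\overset{x}{ \rightarrow }A \rightarrow \mathbb{Q} \rightarrow 0 \), and applying \( {\operatorname{Hom}}_{A}\left( {-, A}\right) \) gives

```latex
\operatorname{Ext}_{A}^{0}(\mathbb{Q},A)
  = \ker\bigl(x\cdot\colon A \to A\bigr) = 0,
\qquad
\operatorname{Ext}_{A}^{1}(\mathbb{Q},A)
  = A/xA \cong \mathbb{Q} \neq 0 ,
```

so \( \operatorname{depth}{H}_{ * }\left( {\Omega {S}^{3}}\right) = 1 \), and indeed \( 1 = \operatorname{cat}{S}^{3} = \operatorname{gldim}\mathbb{Q}\left\lbrack x\right\rbrack \) : the equality case of the depth theorem.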
## 1033_(GTM196)Basic Homological Algebra
Definition 9.6 If \( B \in {}_{R}\mathbf{M} \), then an injective envelope of \( B \) is an injective essential extension of \( B \) . From Lemma 9.5, any injective envelope will contain an isomorphic copy of any other essential extension. We're getting close to the main theorem for injective envelopes. There is one more result needed; it constructs injective envelopes. Proposition 9.7 Suppose \( E \) is an injective left \( R \) -module, and \( B \) is a submodule of \( E \) . Let \( C \) be any maximal essential extension of \( B \) in \( E \) . Then \( C \) is an injective envelope of \( B \) . Proof: First, observe that \( C \) has no nontrivial essential extensions \( {C}^{\prime } \) in \( E \), since \( {C}^{\prime } \) would then be an essential extension of \( B \) (contradicting maximality): \( \;0 \neq A \subset {C}^{\prime } \Rightarrow 0 \neq A \cap C \Rightarrow 0 \neq \left( {A \cap C}\right) \cap B = A \cap B. \) Now if \( D \) is any essential extension of \( C \) at all, then by Lemma 9.5, \( E \) contains an isomorphic copy \( {D}^{\prime } \) of \( D \), from which \( {D}^{\prime } = C \) and then \( D = C \) . That is, \( C \) has no nontrivial essential extensions, so \( C \) is injective by Proposition 9.4. Since \( C \) is by definition an essential extension of \( B, C \) is an injective envelope of \( B \) . Corollary 9.8 Any \( B \in {}_{R}\mathbf{M} \) has an injective envelope. Proof: In Proposition 9.7, \( E \) and \( C \) exist by the enough injectives theorem and Lemma 9.1. The main theorem reads as follows. Theorem 9.9 Suppose \( B \in {}_{R}\mathbf{M} \) . Then a) \( B \) has an injective envelope, and any two injective envelopes of \( B \) are isomorphic. 
b) If \( E\left( B\right) \) (respectively, \( E\left( C\right) \) ) is an injective envelope of \( B \) (respectively, \( C \) ), then any \( \sigma \in \operatorname{Hom}\left( {B, C}\right) \) has an extension \( \tau \in \operatorname{Hom}(E\left( B\right) \) , \( E\left( C\right) ) \) . Furthermore, if \( \sigma \) is one-to-one or bijective, then so are all such \( \tau \) . c) Any injective envelope \( E\left( B\right) \) of \( B \) is a largest essential extension of \( B \) in that \( E\left( B\right) \) contains an isomorphic copy of any other essential extension of \( B \) . d) Any injective envelope \( E\left( B\right) \) of \( B \) is a smallest injective extension of \( B \) in that any other injective extension of \( B \) contains an isomorphic \( \operatorname{copy} \) of \( E\left( B\right) \) . Proof: First, the quick deductions. (c) follows directly from Lemma 9.5, as remarked following the proof. (d) follows from the uniqueness part of (a) since any injective extension of \( B \) contains, via Lemma 9.1 and Proposition 9.7, an injective envelope. Finally, (a) follows from Lemma 9.5 and the "bijective" part of (b), by setting \( C = B \) and \( \sigma = {i}_{B} \) . There remains (b). \( \tau \) is constructed as a filler: ![9de6df55-e4f3-42f3-808c-b3f170e8f76c_295_0.jpg](images/9de6df55-e4f3-42f3-808c-b3f170e8f76c_295_0.jpg) and may not be unique. As in the proof of Lemma 9.5, \( \tau \) is one-to-one when \( \sigma \) is, since \( \ker \sigma = \ker \tau \cap B \) so that \( \ker \sigma = 0 \Rightarrow \ker \tau = 0 \), since \( E\left( B\right) \) is an essential extension of \( B \) . 
If additionally \( \sigma \) is bijective, then \( \tau \left( {E\left( B\right) }\right) \approx E\left( B\right) \), so \( \tau \left( {E\left( B\right) }\right) \) is an injective submodule of \( E\left( C\right) \), forcing \( E\left( C\right) = \tau \left( {E\left( B\right) }\right) \oplus A \) for some \( A \), since \( \tau \left( {E\left( B\right) }\right) \) is injective and so is an absolute direct summand. But now \( A \cap C = A \cap \sigma \left( B\right) = A \cap \tau \left( B\right) \subset \) \( A \cap \tau \left( {E\left( B\right) }\right) = 0 \) so that \( A = 0 \) (and \( \tau \left( {E\left( B\right) }\right) = E\left( C\right) \) ), since \( E\left( C\right) \) is an essential extension of \( C \) . The possibility acknowledged in (b) actually happens, and the extensions ( \( \tau \) ’s) are not unique. This prevents \( B \mapsto E\left( B\right) \) from being used to define a functor, and is much more serious than the easily solved "For each \( B \) , choose a particular injective envelope and call it \( E\left( B\right) \) ." Example 32 The injective envelope of \( \mathbb{Z} \) is \( \mathbb{Q} \) (exercise). The injective envelope of \( {\mathbb{Z}}_{p} \) is the \( p \) -quasicyclic group \( {\mathbf{C}}_{p} = \mathbb{Z}\left\lbrack {1/p}\right\rbrack /\mathbb{Z}p \) (exercise). The natural map \( n \mapsto n + \mathbb{Z}p \) has lots of extensions to \( \mathbb{Q} \) . For instance \( 1/p \) may go to \( 1/p + \mathbb{Z}p \), or it may go to \( 1 + 1/p + \mathbb{Z}p \), or \( 2 + 1/p + \mathbb{Z}p \), or \( \ldots \) So much for injectives. How about projectives? What happens if we reverse the arrows? To see what is needed, it is best to find first an interpretation of "essential extension" that involves only categorical concepts. The first thing to go is " \( B \) is a submodule of \( C \) ," which is replaced by " \( \iota : B \rightarrow C \) is one-to-one." 
The \( A \subset C \) for which we have " \( A \neq 0 \Rightarrow A \cap \iota \left( B\right) \neq 0 \Leftrightarrow {\iota }^{-1}\left( A\right) \neq 0 \) " is replaced by an arrow \( f : C \rightarrow D \) with kernel \( A \) : " \( f \) is not one-to-one \( \Rightarrow {f\iota } \) is not one-to-one," or " \( {f\iota } \) is one-to-one \( \Rightarrow f \) is one-to-one." The opposite notion to an essential extension can now be defined. A homomorphism \( \pi : C \rightarrow B \) is called a cover if \( \pi \) is onto, and for all \( f \in \operatorname{Hom}\left( {D, C}\right) \), \( {\pi f} \) is onto \( \Rightarrow f \) is onto. The analog of an injective envelope is then a projective cover. If one exists, it will have properties similar to injective envelopes: any projective which maps onto our module will factor through the projective cover, and so on. The problem is existence. We shall return to all this in Section 9.6, since one situation where they do exist is for finitely generated modules over quasilocal rings. In general, however, the problem is with Lemma 9.1. The analogous construction for quotients does not Zornify. \( {}^{4} \)

Example 33 \( {\mathbb{Z}}_{2} \in {}_{\mathbb{Z}}\mathbf{M} \) does not have a projective cover. Suppose \( P \) is projective and \( \pi : P \rightarrow {\mathbb{Z}}_{2} \) is a cover. ( \( P \) is free, but we don’t need that.) If \( g : \mathbb{Z} \rightarrow {\mathbb{Z}}_{2} \) is the usual map, then any filler \( f \) :

![9de6df55-e4f3-42f3-808c-b3f170e8f76c_296_0.jpg](images/9de6df55-e4f3-42f3-808c-b3f170e8f76c_296_0.jpg)

would have to be onto, that is, \( P \approx \mathbb{Z} \) ( \( {\mathbb{Z}}_{n} \) is not allowed since \( {\mathbb{Z}}_{n} \) is not projective), so that \( f \) will be an isomorphism. But then \( \widetilde{f}\left( n\right) = {3f}\left( n\right) \) is a filler which isn't onto.

## 9.2 Universal Coefficients

This section requires material discussed in Chapters 1, 2, 3, and 4.
The universal coefficient theorems in algebraic topology are results that allow the computation of cohomology from homology, as well as allowing the coefficient group in homology to change. There are two universal coefficient theorems, one of which is a corollary to the Künneth theorem in the next section. The other is the subject of this section. Both are really homological algebra results in disguise. The usual starting situation is a chain complex
\[ \cdots \rightarrow {P}_{n + 1}\overset{{d}_{n + 1}}{ \rightarrow }{P}_{n}\overset{{d}_{n}}{ \rightarrow }{P}_{n - 1} \rightarrow \cdots \]
consisting of projective modules over a left hereditary ring \( R \). That is, the ring \( R \) has left global dimension less than or equal to one, so that submodules of projective modules are projective, and quotient modules of injective modules are injective. In the application to algebraic topology, \( R \) is usually \( \mathbb{Z} \), but the result is quite general. In fact, it comes from assembling even more general results. The proof here is modeled on the one in Massey [54, pp. 269-273 and 314-315]. Set
\[ {B}_{n} = \operatorname{im}\left( {d}_{n + 1}\right) , \]
\[ {Z}_{n} = \ker \left( {d}_{n}\right) , \]
\[ {H}_{n} = {Z}_{n}/{B}_{n} = \text{ homology at }{P}_{n}. \]
This notation will be used throughout this section. The first thing to observe is that \( {H}_{n} \) can be computed in another way. We have the "standard picture"

![9de6df55-e4f3-42f3-808c-b3f170e8f76c_297_0.jpg](images/9de6df55-e4f3-42f3-808c-b3f170e8f76c_297_0.jpg)

which has the virtue that everything is exact.

---
\( {}^{4} \) I first heard this lovely verb from George Seligman, when I was a graduate student.
---
We also have (since \( {Z}_{n} = \) ker \( {d}_{n} \) ) an injection \( {\bar{d}}_{n} : {P}_{n}/{Z}_{n} \rightarrow {P}_{n - 1} \), yielding the "unstandard picture" ![9de6df55-e4f3-42f3-808c-b3f170e8f76c_297_1.jpg](images/9de6df55-e4f3-42f3-808c-b3f170e8f76c_297_1.jpg) The universal coefficient theorem "computes" the result if \( \operatorname{Hom}\left( {\bullet, C}\right) \) is applied to the complex \( \left\langle {{P}_{i},{d}_{i}}\right\rangle \) and homology is then taken. One such result can be obtained directly without any assumptions on \( R \) or on the \( {P}_{i} \) . Proposition 9.10 Suppose \( R \) is any ring for which \( \left\langle {{P}_{i},{d}_{i}}\right\rangle \) is a chain complex in \( {}_{R}\mathbf{M} \) . Denote the homology at \( {P}_{n} \) by \( {H}_{n} \) . Suppose \( C \) is an injective left \( R \) -module. Then the homology of \( \left\langle {\operatorname{Hom}\left( {{P}_{i}, C}\right) ,{d}_{i}^{ * }}\right\rangle \) at \( \operatorname
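The notation \( {B}_{n},{Z}_{n},{H}_{n} \) can be exercised on a small example. The sketch below (ours, not the book's) computes \( \dim {H}_{n} = \dim \ker {d}_{n} - \operatorname{rank}{d}_{n + 1} \) for a complex of finite-dimensional \( \mathbb{Q} \) -vector spaces, using the triangle model of the circle. Since \( \operatorname{rank}A = \operatorname{rank}{A}^{T} \), applying \( \operatorname{Hom}\left( { \bullet ,\mathbb{Q}}\right) \) (which transposes each \( {d}_{n} \) ) leaves the dimensions unchanged: the simplest case of universal coefficients, where all the Ext terms vanish because \( \mathbb{Q} \) is a field.

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix over Q, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    rk = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rk, len(m)) if m[r][c] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][c] != 0:
                f = m[r][c] / m[rk][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def homology_dims(dims, d):
    """dims[n] = dim P_n; d[n] = matrix of d_n : P_n -> P_{n-1}.
    Over a field, dim H_n = dim ker d_n - rank d_{n+1}
                          = dims[n] - rank d_n - rank d_{n+1}."""
    rk = {n: rank(mat) for n, mat in d.items()}
    return [dims[n] - rk.get(n, 0) - rk.get(n + 1, 0) for n in range(len(dims))]

# the triangle, a simplicial model of the circle: 3 vertices, 3 edges
d1 = [[-1,  0,  1],          # rows index vertices, columns index edges
      [ 1, -1,  0],
      [ 0,  1, -1]]
print(homology_dims([3, 3], {1: d1}))                            # [1, 1]
# transposing the boundary matrices (i.e. dualizing) preserves all ranks
print(homology_dims([3, 3], {1: [list(r) for r in zip(*d1)]}))   # [1, 1]
```

Both printed lists are \( \left\lbrack {1,1}\right\rbrack \) : \( {H}_{0} \cong {H}_{1} \cong \mathbb{Q} \) for the circle, with the same dimensions after dualizing.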
## 113_Topological Groups
Definition 29.23. Let \( \left( {\mathcal{L},\mathbf{\varepsilon },\mathbf{O}}\right) \) be a choice triple, and let \( {\mathcal{L}}^{\prime } \) be a Skolem expansion of \( \mathcal{L} \), with notation as in 11.33. With terms \( \sigma \) and formulas \( \varphi \) of \( \left( {\mathcal{L},\mathbf{\varepsilon },\mathbf{O}}\right) \) we shall associate terms \( {\sigma }^{ * } \) and formulas \( {\varphi }^{ * } \) of \( {\mathcal{L}}^{\prime } \), by recursion: \[ {v}_{i}^{ * } \equiv {v}_{i};{\left( \mathbf{O}{\sigma }_{0}\cdots {\sigma }_{m - 1}\right) }^{ * } = \mathbf{O}{\sigma }_{0}^{ * }\cdots {\sigma }_{m - 1}^{ * }; \] \[ {\left( \sigma \equiv \rho \right) }^{ * } = {\sigma }^{ * } \equiv {\rho }^{ * };{\left( \mathbf{R}{\sigma }_{0}\cdots {\sigma }_{m - 1}\right) }^{ * } = \mathbf{R}{\sigma }_{0}^{ * }\cdots {\sigma }_{m - 1}^{ * }; \] \[ {\left( \neg \varphi \right) }^{ * } = \neg {\varphi }^{ * },{\left( \varphi \vee \psi \right) }^{ * } = {\varphi }^{ * } \vee {\psi }^{ * },{\left( \varphi \land \psi \right) }^{ * } = {\varphi }^{ * } \land {\psi }^{ * }, \] \[ {\left( \forall {v}_{i}\varphi \right) }^{ * } = \forall {v}_{i}{\varphi }^{ * } \] the only nontrivial part of the definition is in dealing with \( \varepsilon {v}_{i}\varphi \) . Let Fv \( \exists {v}_{i}\varphi = \left\{ {{v}_{j0},\ldots ,{v}_{j\left( {m - 1}\right) }}\right\} \) with \( {j}_{0} < \cdots < {j}_{m - 1} \) . Choose \( k \) minimal such that \( {\varphi }^{ * } \) is a formula of \( {\mathcal{L}}_{k} \) . Then we set \[ {\left( \mathbf{\varepsilon }{v}_{i}\varphi \right) }^{ * } = {S}_{\exists {vi}{\varphi }^{ * }}^{k}{v}_{j0}\cdots {v}_{j\left( {m - 1}\right) }. \] Next, let \( \mathfrak{A} \) be an \( \mathcal{L} \) -structure, and let \( f \) be a choice function for nonempty subsets of \( A \) . We shall define an \( {\mathcal{L}}^{\prime } \) -structure \( {\mathfrak{B}}_{\mathfrak{A}f} \) which is to be an expansion of \( \mathfrak{A} \) . 
Thus we must interpret in \( {\mathfrak{B}}_{\mathfrak{A}f} \) all of the new operation symbols \( {S}_{\exists {\alpha \psi }}^{k} \). For each term \( \mathbf{\varepsilon }{v}_{i}\varphi \) of \( \left( {\mathcal{L},\mathbf{\varepsilon },\mathbf{O}}\right) \) with notation as above, and for each \( {a}_{0},\ldots ,{a}_{m - 1} \in A \), let \( x \in {}^{\omega }A \) be any sequence with \( {x}_{jt} = {a}_{t} \) for each \( t < m \) and set
\[ {S}_{\exists {v}_{i}{\varphi }^{ * }}^{k,{\mathfrak{B}}_{\mathfrak{A}f}}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) = f\left\{ {a : \left( {\mathfrak{A}, f}\right) \vDash \varphi \left\lbrack {x}_{a}^{i}\right\rbrack }\right\} \;\text{ if this is nonempty,} \]
\[ {S}_{\exists {v}_{i}{\varphi }^{ * }}^{k,{\mathfrak{B}}_{\mathfrak{A}f}}\left( {{a}_{0},\ldots ,{a}_{m - 1}}\right) = {\mathbf{O}}^{\mathfrak{A}}\;\text{ otherwise. } \]
Since our mapping \( * \) is clearly one-one, this is possible. For \( {S}_{\exists {\alpha \psi }}^{k} \) not of the above form, let \( {S}_{\exists {\alpha \psi }}^{k,{\mathfrak{B}}_{\mathfrak{A}f}} \) be the constant function with value \( {\mathbf{O}}^{\mathfrak{A}} \).

Proposition 29.24. Let the notation be as above. Suppose that \( \sigma \) is a term of \( \left( {\mathcal{L},\mathbf{\varepsilon },\mathbf{O}}\right) \), \( \varphi \) is a formula of \( \left( {\mathcal{L},\mathbf{\varepsilon },\mathbf{O}}\right) \), and \( x \in {}^{\omega }A \). Then \( {\sigma }^{\left( {\mathfrak{A}, f}\right) }x = {\sigma }^{ * {\mathfrak{B}}_{\mathfrak{A}f}}x \) and \( \left( {\mathfrak{A}, f}\right) \vDash \varphi \left\lbrack x\right\rbrack \) iff \( {\mathfrak{B}}_{\mathfrak{A}f} \vDash {\varphi }^{ * }\left\lbrack x\right\rbrack \).

Proof. The simultaneous inductive proof on \( \sigma \) and \( \varphi \) is straightforward.

From this proposition we again obtain a weak completeness theorem:

Theorem 29.25. The set \( \{ \varphi : \varphi \) is a sentence of \( \left( {\mathcal{L},\mathbf{\varepsilon },\mathbf{O}}\right) \) and \( \vDash \varphi \} \) is recursively enumerable.
Proof. We claim that \( \vDash \varphi \) iff \( \Gamma \vDash {\varphi }^{ * } \), using the notation of 29.23, where \( \Gamma \) is the following set of sentences of \( {\mathcal{L}}^{\prime } \) : the Skolem set of \( {\mathcal{L}}^{\prime } \) over \( \mathcal{L} \) ; \( {\chi }_{ij\varphi \psi } \) for \( i, j < \omega \) and \( \varphi ,\psi \) formulas of \( {\mathcal{L}}^{\prime } \) , where \( {\chi }_{ij\omega \psi } \) is formed as follows. Let \( \mathrm{{Fv}}\exists {v}_{i}\varphi = \left\{ {{v}_{k0},\ldots ,{v}_{k\left( {m - 1}\right) }}\right\} \) with \( {k}_{0} < \) \( \cdots < {k}_{m - 1} \), and \( \operatorname{Fv}\exists {v}_{j}\psi \left\{ {{v}_{l0},\ldots ,{v}_{l\left( {n - 1}\right) }}\right\} \) with \( {l}_{0} < \cdots < {l}_{n - 1} \) . Choose \( s \) minimum such that \( \varphi \) is a formula of \( {\mathcal{L}}_{s} \), and \( t \) minimum such that \( \psi \) is a formula of \( {\mathcal{L}}_{t} \) . Let \( {v}_{u0},\ldots ,{v}_{u\left( {m + n}\right) } \) be the first \( m + n + 1 \) distinct variables not occurring in \( \exists {v}_{i}\varphi \) or \( \exists {v}_{j}\psi \), and set \[ {\varphi }^{\prime } = {\operatorname{Subf}}_{vu0}^{vk0}\cdots {\operatorname{Subf}}_{{vu}\left( {m - 1}\right) }^{{vk}\left( {m - 1}\right) }\varphi , \] \[ {\psi }^{\prime } = {\operatorname{Subf}}_{vum}^{vl0}\cdots {\operatorname{Subf}}_{{vu}\left( {m + n - 1}\right) }^{{vl}\left( {n - 1}\right) }\psi . \] Then let \( {\chi }_{ij\varphi \psi } \) be the sentence \[ \forall {v}_{u0}\cdots \forall {v}_{u\left( {m + n - 1}\right) }\left\lbrack {\forall {v}_{u\left( {m + n}\right) }\left( {{\operatorname{Subf}}_{{vu}\left( {m + n}\right) }^{vt}{\varphi }^{\prime } \leftrightarrow }\right. }\right. 
\] \[ {\operatorname{Subf}}_{{vu}\left( {m + n}\right) }^{vj}{\psi }^{\prime }) \rightarrow {S}_{\exists {v}_{i}\varphi }^{s}{v}_{u0}\cdots {v}_{u\left( {m - 1}\right) } = {S}_{\exists {v}_{j}\psi }^{t}{v}_{um}\cdots {v}_{u\left( {m + n - 1}\right) }\rbrack \] Our claim can now be routinely checked. Since \( \Gamma \) is clearly effective, the theorem follows. As in the case of description operators, Theorem 29.25 implies the possibility of developing a proof theory based upon the \( \varepsilon \) -operator. This has been done, and the completeness theorem has been proved for the resulting notion \( { \vdash }_{\varepsilon } \) . Two of the major results here are: (1) ("second \( \varepsilon \) -theorem") if \( \Gamma \cup \{ \varphi \} \) is an \( \varepsilon \) -free set of sentences and \( \Gamma { \vdash }_{\varepsilon }\varphi \), then \( \Gamma \vDash \varphi \) ; (2) ("first \( \varepsilon \) -theorem") a formulation of Herbrand’s theorem in the \( \varepsilon \) -language. Note that for any formula \( \varphi \) we have \( \vDash \exists {v}_{0}\varphi \leftrightarrow \varphi \left( {\varepsilon {v}_{0}\varphi }\right) \) . This gives rise to the possibility of founding logic using the \( \varepsilon \) -symbol and no quantifiers, taking the above as a definition of \( \exists \) ; this has been carefully worked out. Another interesting use of the \( \varepsilon \) -calculus is in axiomatic set theory. Although the second \( \varepsilon \) -theorem above implies that nothing is gained by introducing the \( \varepsilon \) -symbol after setting out the usual axioms for \( \mathrm{{ZF}} \) (set theory without the axiom of choice), the situation is different if \( \varepsilon \) -formulas are allowed in the schema of set formation of ZF. Then the axiom of choice in its usual formulation becomes provable. This is worked out carefully in Bourbaki's treatment of set theory.
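The schema \( \vDash \exists {v}_{0}\varphi \leftrightarrow \varphi \left( {\varepsilon {v}_{0}\varphi }\right) \) can be mimicked directly over a finite structure: interpret an \( \varepsilon \)-term by applying a fixed choice function to the set of witnesses when that set is nonempty, and by a default element playing the role of \( \mathbf{O} \) otherwise. The sketch below is only an illustration; the names `eps` and `choice` and the particular domain are assumptions, not from the text.

```python
# Interpret an epsilon-term over a finite domain: pick a witness of phi via
# a fixed choice function if one exists, else return the default O.
# All names here (eps, choice, O) are illustrative.

def eps(domain, phi, choice, O):
    """choice({a in domain : phi(a)}) if nonempty, else O."""
    witnesses = [a for a in domain if phi(a)]
    return choice(witnesses) if witnesses else O

domain, O = range(5), -1
even = lambda a: a % 2 == 0

val = eps(domain, even, min, O)               # some witness of "a is even"
none = eps(domain, lambda a: a > 10, min, O)  # no witness, so O

# the schema  (exists v0) phi  <->  phi(eps v0 phi)  holds in this setting:
for phi in (even, lambda a: a > 10, lambda a: a == 3):
    assert any(phi(a) for a in domain) == phi(eps(domain, phi, min, O))
```

With `choice = min` the operator is deterministic, but any choice function on nonempty subsets works equally well, mirroring the arbitrariness of \( f \) in the construction above.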
## Many-Sorted Logic

This variant of first-order logic is considerably more important than the ones above. The basic idea is to allow several universes in the structure instead of only one. As far as syntax is concerned, this is expressed in the following definition: Definition 29.26. A many-sorted language \( \mathcal{L} \) is determined by specifying the following. There is a nonempty set \( \mathcal{S} \) of sorts. For each \( s \in \mathcal{S} \) we have individual variables of sort \( s \) : \[ {v}_{0}^{s},{v}_{1}^{s},\ldots \] The logical symbols are the usual ones: \( \neg , \vee , \land ,\forall , = \) . The nonlogical constants are some relation and operation symbols. Each relation symbol and each operation symbol has a rank which is a finite non-empty sequence of members of \( \mathcal{S} \) . Given such a language, we define terms and formulas as follows: (i) \( {v}_{i}^{s} \) is a term of sort \( s \) ; (ii) if \( \mathbf{O} \) is an operation symbol of rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \) and \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \) are terms of sorts \( {s}_{0},\ldots ,{s}_{m - 1} \) respectively, then \( \mathbf{O}{\sigma }_{0}\cdots {\sigma }_{m - 1} \) is a term of sort \( {s}_{m} \) ; (iii) if \( \sigma \) and \( \tau \) are terms of the same sort, then \( \sigma = \tau \) is a formula; (iv) if \( \mathbf{R} \) is a relation symbol of rank \( \left( {{s}_{0},\ldots ,{s}_{m}}\right) \) and \( {\sigma }_{0},\ldots ,{\sigma }_{m} \) are terms of sorts \( {s}_{0},\ldots ,{s}_{m} \) respectively, then \( \mathbf{R}{\sigma }_{0}\cdots {\sigma }_{m} \) is a formula; (v) if \( \varphi \) and \( \psi \) are formulas and \( \alpha \) is a variable, all of the following are formulas: \( \neg \varphi ,\varphi \vee \psi ,\varphi \land \psi ,\forall {\alpha \varphi } \) ; (vi) terms and formulas can only be formed in these ways. The notion of an \( \mathcal{L} \) -structure for such a language is clear: Definition 29.27.
Let \( \mathcal{L} \) be a many sorted language as above. An \( \mathcal{L} \) - structure is a triple \( \mathfrak{A} = \left( {A, f, R}\right) \) such that: (i) \( A \) is a function which assigns to each \( s \in \mathcal{S} \) a nonempty set \( {A}_{s} \) ; (ii) \( f \) is a function whose domain is the set of operation symbols of \( \m
1088_(GTM245)Complex Analysis
Definition 4.19
Definition 4.19. A region \( R \subseteq {\mathbb{R}}^{2} \) is called \( \left( {xy}\right) \) -simple if it is bounded by a pdp and has the property that any horizontal or vertical line which has nonempty intersection with \( R \) intersects it in an interval. Further, the set of values \( a \in \mathbb{R} \) for which the vertical line \( x = a \) has nonempty intersection with \( R \) is an interval, and the set of values \( c \in \mathbb{R} \) for which the horizontal line \( y = c \) has nonempty intersection with \( R \) is also an interval. Here an interval may consist of a single point. In particular, there exist real numbers \( c < d \) and functions \( {h}_{1} \) and \( {h}_{2} \) defined on the interval \( \left\lbrack {c, d}\right\rbrack \) such that the region \( R \) may be described as follows: \[ R = \left\{ {\left( {x, y}\right) ;c \leq y \leq d,{h}_{1}\left( y\right) \leq x \leq {h}_{2}\left( y\right) }\right\} . \] A similar description may be given interchanging the roles of the two variables \( x \) and \( y \) . Open discs and the interiors of rectangles and triangles are examples of \( \left( {xy}\right) \) -simple regions. We recall \( {}^{4} \) and establish a form of a theorem that will help us to further distinguish closed from exact differentials. Theorem 4.20 (Green’s Theorem). Let \( R \) be an \( \left( {xy}\right) \) -simple region and let \( \gamma \) denote its boundary oriented counterclockwise (this means that \( R \) lies to the left of the oriented curves on its boundary). Consider a \( {\mathbf{C}}^{1} \) -form \( \omega = P\mathrm{\;d}x + Q\mathrm{\;d}y \) on a region \( D \supset R \cup \gamma \) . Then \[ {\iint }_{R}\left( {\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}}\right) \mathrm{d}x\mathrm{\;d}y = {\int }_{\gamma }P\mathrm{\;d}x + Q\mathrm{\;d}y = {\int }_{\gamma }\omega . \] \( {}^{4} \) From calculus courses. Proof.
Using the notation introduced in the definition of \( \left( {xy}\right) \) -simple regions, we have \[ {\iint }_{R}\frac{\partial Q}{\partial x}\mathrm{\;d}x\mathrm{\;d}y = {\int }_{c}^{d}{\int }_{{h}_{1}\left( y\right) }^{{h}_{2}\left( y\right) }\frac{\partial Q}{\partial x}\mathrm{\;d}x\mathrm{\;d}y \] \[ = {\int }_{c}^{d}\left\lbrack {Q\left( {{h}_{2}\left( y\right), y}\right) - Q\left( {{h}_{1}\left( y\right), y}\right) }\right\rbrack \mathrm{d}y \] \[ = {\int }_{c}^{d}Q\left( {{h}_{2}\left( y\right), y}\right) \mathrm{d}y + {\int }_{d}^{c}Q\left( {{h}_{1}\left( y\right), y}\right) \mathrm{d}y \] \[ = {\int }_{\gamma }Q\mathrm{\;d}y \] Similarly, \[ {\iint }_{R} - \frac{\partial P}{\partial y}\mathrm{\;d}x\mathrm{\;d}y = - {\iint }_{R}\frac{\partial P}{\partial y}\mathrm{\;d}y\mathrm{\;d}x = {\int }_{\gamma }P\mathrm{\;d}x. \] Remark 4.21. (1) The theorem can be easily extended to any region that may be divided into a finite union of \( \left( {xy}\right) \) -simple regions and their boundaries (by cancelation of integrals over common boundaries oppositely oriented); for instance, the interior of a compact convex set. (2) In terms of complex derivatives, the theorem can be restated as \[ {\iint }_{R}\left( {\frac{\partial Q}{\partial z} - \frac{\partial P}{\partial \bar{z}}}\right) \mathrm{d}z\mathrm{\;d}\bar{z} = {\int }_{\gamma }P\mathrm{\;d}z + Q\mathrm{\;d}\bar{z} \] where \( \mathrm{d}z\mathrm{\;d}\bar{z} = - {2\iota }\mathrm{d}x\mathrm{\;d}y \) . (3) In most real analysis courses and books (see, e.g., Theorem 5.12 of G.B. Folland Advanced Calculus, Prentice Hall, 2002), Green's theorem is given in the following form (we are now using complex notation). Theorem 4.22 (Green’s Theorem, Version 2). Let \( K \) be a compact set in \( \mathbb{C} \) which is the closure of its interior, with piecewise smooth positively oriented boundary \( \partial K \) . 
If \( f \) and \( g \) are \( {\mathbf{C}}^{1} \) -functions on a neighborhood of \( K \), then \[ {\iint }_{K}\left( {{g}_{z} - {f}_{\bar{z}}}\right) \mathrm{d}z\mathrm{\;d}\bar{z} = {\int }_{\partial K}f\left( z\right) \mathrm{d}z + g\left( z\right) \mathrm{d}\bar{z} \] We can now characterize the closed \( {\mathbf{C}}^{1} \) -forms. Theorem 4.23. Suppose that \( \omega = P\mathrm{\;d}x + Q\mathrm{\;d}y \) is a \( {\mathbf{C}}^{1} \) -differential form on a domain \( D \) . If \( \omega \) is closed, then \( {P}_{y} = {Q}_{x} \) . Conversely, if \( D \) is an open disc, \( P \) and \( Q \) are \( {\mathbf{C}}^{1} \) -functions on \( D \), and \( {P}_{y} = {Q}_{x} \), then \( \omega = P\mathrm{\;d}x + Q\mathrm{\;d}y \) is closed (hence exact) on the disc. Proof. If \( \omega \) is closed in the domain \( D \), then near every point in \( D \) there exists a function \( F \) such that \( \omega = \mathrm{d}F = {F}_{x}\mathrm{\;d}x + {F}_{y}\mathrm{\;d}y \) . But \( \omega \) is \( {\mathbf{C}}^{1} \) and thus \( F \) is \( {\mathbf{C}}^{2} \) ; therefore \( {P}_{y} = {F}_{xy} = {F}_{yx} = {Q}_{x} \) . For the converse on a disc \( D \), by Theorem 4.16 and Corollary 4.17, we need only show that \( {\int }_{\gamma }\omega = 0 \) for all paths \( \gamma \) in \( D \) that are boundaries of rectangles \( R \) with sides parallel to the coordinate axes and such that \( R \cup \gamma \subset D \) . But \[ {\int }_{\gamma }\omega = {\iint }_{R}\left( {{Q}_{x} - {P}_{y}}\right) \mathrm{d}x\mathrm{\;d}y = 0. \] Corollary 4.24. If \( \omega = P\mathrm{\;d}x + Q\mathrm{\;d}y \) is a \( {\mathbf{C}}^{1} \) -form on a domain \( D \), then \( \omega \) is closed on \( D \) if and only if \( {P}_{y} = {Q}_{x} \) in \( D \) . Proof. For any point in \( D \), consider an open disc \( U \) centered at that point and contained in \( D \), and apply the previous theorem to \( \omega \) restricted to \( U \) . Remark 4.25.
Recall that \[ f\left( z\right) \mathrm{d}z = \left( {u + {\iota v}}\right) \left( {\mathrm{d}x + \iota \mathrm{d}y}\right) = \left( {u\mathrm{\;d}x - v\mathrm{\;d}y}\right) + \iota \left( {u\mathrm{\;d}y + v\mathrm{\;d}x}\right) = {\omega }_{1} + \iota {\omega }_{2}, \] with \( {\omega }_{1} \) and \( {\omega }_{2} \) real differentials. Thus \[ {\int }_{\gamma }f\left( z\right) \mathrm{d}z = {\int }_{\gamma }{\omega }_{1} + \imath {\int }_{\gamma }{\omega }_{2} \] Further, \( f\left( z\right) \mathrm{d}z \) is closed (respectively exact) if and only if both \( {\omega }_{1} \) and \( {\omega }_{2} \) are, and \( {F}_{j} \) is a primitive for \( {\omega }_{j}\left( {j = 1,2}\right) \) if and only if \( {F}_{1} + \iota {F}_{2} \) is a primitive for \( f\left( z\right) \mathrm{d}z \) . Lemma 4.26. Let \( f\left( z\right) \mathrm{d}z \) be of class \( {\mathbf{C}}^{1} \) on a domain \( D \) . Then \( f\left( z\right) \mathrm{d}z \) is closed on \( D \) if and only if \( f \) is holomorphic in \( D \) . Proof. By the above remarks and previous Corollary, \( f\left( z\right) \mathrm{d}z \) is a closed form on \( D \) if and only if \( {u}_{y} = - {v}_{x} \) and \( {v}_{y} = {u}_{x} \) if and only if \( u \) and \( v \) satisfy CR if and only if \( f \) is holomorphic. Lemma 4.27. A \( {\mathbf{C}}^{1} \) -function \( F \) is a primitive for \( f\left( z\right) \mathrm{d}z \) if and only if \( {F}^{\prime } = f \) . Proof. The function \( F \) is a primitive for \( f\left( z\right) \mathrm{d}z \) if and only if \( \mathrm{d}F = {F}_{z}\mathrm{\;d}z + {F}_{\bar{z}}\mathrm{\;d}\bar{z} = \) \( f\left( z\right) \mathrm{d}z \) if and only if \( {F}_{\bar{z}} = 0 \) and \( {F}_{z} = {F}^{\prime } = f \) . We have now proven the following result that gives a preliminary characterization of certain closed forms. Theorem 4.28. 
The differential form \( f\left( z\right) \mathrm{d}z \) is closed on a domain \( D \) if and only if \( {\int }_{\gamma }f\left( z\right) \mathrm{d}z = 0 \), for all boundaries \( \gamma \) of rectangles \( R \) contained in \( D \) with sides parallel to the coordinate axes. If \( f \in {\mathbf{C}}^{1}\left( D\right) \), then \( f\left( z\right) \mathrm{d}z \) is closed if and only if \( f \) is holomorphic on \( D \) . Remark 4.29. We shall see that the \( {\mathbf{C}}^{1} \) assumption in the last part of the theorem is not needed. Example 4.30. Not every closed form is exact. Let \( D = {\mathbb{C}}_{ \neq 0} \) and \( \omega = \frac{\mathrm{d}z}{z} \) . (a) If \( \gamma \left( t\right) = {\mathrm{e}}^{2\pi \iota t} \) for \( t \in \left\lbrack {0,1}\right\rbrack \), then \( {\int }_{\gamma }\omega = {2\pi \iota } \) . Thus \( \omega \) is not exact on \( D \) . (b) Since \( f\left( z\right) = \frac{1}{z} \) is holomorphic and \( {\mathbf{C}}^{1} \) on \( D \), \( \omega \) is closed on \( D \) . Note that locally (in \( D \) ) we have \( \omega = \mathrm{d}F \), where \( F \) is a branch of the logarithm, and that we have just proved that there is no branch of the logarithm globally defined on \( D \) . We have produced two real forms on \( D = {\mathbb{C}}_{ \neq 0} \), the real and imaginary parts of \( \omega \) : \[ \frac{\mathrm{d}z}{z} = \frac{x\mathrm{\;d}x + y\mathrm{\;d}y}{{x}^{2} + {y}^{2}} + \imath \frac{-y\mathrm{\;d}x + x\mathrm{\;d}y}{{x}^{2} + {y}^{2}} = \mathrm{d}\log \left| z\right| + \imath \mathrm{d}\arg z = \mathrm{d}\log z. \] The first of the two real forms is exact, the second closed but not exact on \( D \) . Note that \( \mathrm{d}\arg z = \mathrm{d}\arctan \frac{y}{x} \) (for \( x \neq 0 \) ). Note also that \( \arg z \) and \( \arctan \frac{y}{x} \) are multivalued functions, whose differentials agree and are single-valued.
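Example 4.30(a) and the rectangle criterion of Theorem 4.28 can both be checked numerically for \( f\left( z\right) = 1/z \). The sketch below is only an illustration; the discretization (chord midpoints, the step count `N`, and the particular rectangle) is an arbitrary choice, not from the text.

```python
import cmath
import math

# Numerical illustration for f(z) = 1/z on D = C \ {0}: the integral over
# the unit circle is 2*pi*i (so dz/z is not exact on D), while over the
# boundary of a rectangle avoiding 0 it vanishes, consistent with dz/z
# being closed on D.

def integrate(f, points):
    """Integrate f(z) dz along the polygonal path through 'points',
    evaluating f at the midpoint of each segment."""
    total = 0 + 0j
    for z0, z1 in zip(points, points[1:]):
        total += f((z0 + z1) / 2) * (z1 - z0)
    return total

f = lambda z: 1 / z
N = 20_000

# (a) unit circle gamma(t) = e^{2 pi i t}: the integral is 2*pi*i
circle = [cmath.exp(2j * math.pi * k / N) for k in range(N + 1)]
loop = integrate(f, circle)
assert abs(loop - 2j * math.pi) < 1e-6

# boundary of the rectangle with corners 1, 2, 2+i, 1+i (0 lies outside):
# the integral is 0, as Theorem 4.28 predicts for a closed form
corners = [1, 2, 2 + 1j, 1 + 1j, 1]
rect = []
for a, b in zip(corners, corners[1:]):
    rect += [a + (b - a) * k / N for k in range(N)]
rect.append(corners[-1])
assert abs(integrate(f, rect)) < 1e-6
```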
It is important to observe that if we change the domain \( D \) then \( \omega \) may become exact; for instance, on the domain \( \mathbb{C} - ( - \infty ,0\rbrack \) the same form \( \omega = \frac{\mathrm{d}z}{z} \) is exact because \( \omega = \mathrm{{dLog}} \), where Log denotes the principal branch of the logarithm in this domain. We have been working with the integral of any differential form over any pdp. We want to extend the definition to the integral ove
1172_(GTM8)Axiomatic Set Theory
Definition 20.1
Definition 20.1. B satisfies the \( \left( {\omega ,\omega }\right) \) -weak distributive law \( \left( {\left( {\omega ,\omega }\right) \text{-WDL}}\right) \) iff for every family \( \left\{ {{b}_{nm} \mid n, m \in \omega }\right\} \subseteq B \) \[ \mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{m < \omega }}{b}_{nm} = \mathop{\sum }\limits_{{f \in {\omega }^{\omega }}}\mathop{\prod }\limits_{{n \in \omega }}\mathop{\sum }\limits_{{m \leq f\left( n\right) }}{b}_{nm}. \] Similarly, if \( {\omega }_{\alpha } \) is not cofinal with \( \omega \), \( \mathbf{B} \) satisfies the \( \left( {\omega ,{\omega }_{\alpha }}\right) \) -weak distributive law iff for every family \( \left\{ {{b}_{n\xi } \mid n \in \omega \land \xi < {\omega }_{\alpha }}\right\} \subseteq B \) 1. \( \mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}{b}_{n\xi } = \mathop{\sum }\limits_{{f \in {\omega }_{\alpha }{}^{\omega }}}\mathop{\prod }\limits_{{n \in \omega }}\mathop{\sum }\limits_{{\xi \leq f\left( n\right) }}{b}_{n\xi } \) . Remark. If \( {cf}\left( {\omega }_{\alpha }\right) > \omega \), the right-hand side of 1 is equal to \[ \mathop{\sum }\limits_{{\eta < {\omega }_{\alpha }}}\mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi \leq \eta }}{b}_{n\xi } \] Theorem 20.2. If \( \mathbf{B} \) satisfies the c.c.c. and \( {cf}\left( {\omega }_{\alpha }\right) > \omega \), then \( \mathbf{B} \) satisfies the \( \left( {\omega ,{\omega }_{\alpha }}\right) \) -WDL. Proof. Let \( \left\{ {{b}_{n\xi } \mid n < \omega \land \xi < {\omega }_{\alpha }}\right\} \subseteq B \) .
Then by the c.c.c., for each \( n \in \omega \) there exists a countable set \( {C}_{n} \subseteq B \) such that \[ \mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}{b}_{n\xi } = \sup {C}_{n} \] Define \( {\eta }_{0} = \sup \left\{ {\xi < {\omega }_{\alpha } \mid \left( {\exists n \in \omega }\right) \left\lbrack {{b}_{n\xi } \in {C}_{n}}\right\rbrack }\right\} \) . Since \( {cf}\left( {\omega }_{\alpha }\right) > \omega ,{\eta }_{0} < {\omega }_{\alpha } \) and \[ \left( {\forall n \in \omega }\right) \left\lbrack {\mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}{b}_{n\xi } = \mathop{\sum }\limits_{{\xi \leq {\eta }_{0}}}{b}_{n\xi }}\right\rbrack \] hence \[ \mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}{b}_{n\xi } = \mathop{\sum }\limits_{{\eta < {\omega }_{\alpha }}}\mathop{\prod }\limits_{{n \in \omega }}\mathop{\sum }\limits_{{\xi \leq \eta }}{b}_{n\xi } \] Theorem 20.3. If \( {cf}\left( {\omega }_{\alpha }\right) > \omega \), then \( \mathbf{B} \) satisfies \( \left( {\omega ,{\omega }_{\alpha }}\right) \) -WDL iff \[ \llbracket {cf}{\left( {\omega }_{\alpha }\right) }^{ \smile } > \breve{\omega }\rrbracket = \mathbf{1}. \] Proof. Assume that \( \mathbf{B} \) satisfies the \( \left( {\omega ,{\omega }_{\alpha }}\right) \) -WDL. 
Let \( f \in {V}^{\left( \mathbf{B}\right) } \) and \( b = \llbracket f : \check{\omega } \rightarrow {\left( {\omega }_{\alpha }\right) }^{ \vee }\rrbracket \), i.e., \[ b = \llbracket \left( {\forall x \in \check{\omega }}\right) \left( {\exists y \in {\left( {\omega }_{\alpha }\right) }^{ \vee }}\right) \left( {\forall z}\right) \left\lbrack {\langle x, z\rangle \in f \leftrightarrow z = y}\right\rbrack \rrbracket \] \[ = \mathop{\prod }\limits_{{n \in \omega }}\mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}\llbracket \left( {\forall z}\right) \left\lbrack {\langle \check{n}, z\rangle \in f \leftrightarrow z = \check{\xi }}\right\rbrack \rrbracket . \] Define \( {b}_{n\xi } = \llbracket f\left( \check{n}\right) = \check{\xi }\rrbracket \), which should be understood as \[ \llbracket \left( {\forall z}\right) \left\lbrack {\langle \check{n}, z\rangle \in f \leftrightarrow z = \check{\xi }}\right\rbrack \rrbracket . \] Then \[ b = \mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}{b}_{n\xi } \] \[ = \mathop{\sum }\limits_{{\eta < {\omega }_{\alpha }}}\mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi \leq \eta }}{b}_{n\xi }\;\text{ by the }\left( {\omega ,{\omega }_{\alpha }}\right) \text{-WDL } \] \[ = \llbracket \left( {\exists \eta < {\left( {\omega }_{\alpha }\right) }^{ \vee }}\right) \left( {\forall n < \check{\omega }}\right) \left\lbrack {f\left( n\right) \leq \eta }\right\rbrack \rrbracket .
\] Since \( \llbracket \operatorname{cf}\left( {\left( {\omega }_{\alpha }\right) }^{ \smile }\right) > \breve{\omega }\rrbracket = \llbracket \left( {\forall f}\right) \lbrack \) if \( f : \breve{\omega } \rightarrow {\left( {\omega }_{\alpha }\right) }^{ \smile } \) then \( \left( {\exists \eta < {\left( {\omega }_{\alpha }\right) }^{ \smile }}\right) \left( {\forall n < \breve{\omega }}\right) \left\lbrack {f\left( n\right) \leq \eta }\right\rbrack \rbrack \rbrack , \) this proves \( \llbracket {cf}\left( {\left( {\omega }_{\alpha }\right) }^{ \smile }\right) > \breve{\omega }\rrbracket = \mathbf{1} \) . To prove the converse, let \( \left\{ {{b}_{n\xi } \mid n < \omega \land \xi < {\omega }_{\alpha }}\right\} \subseteq B \) and assume \( \llbracket {cf}\left( {\left( {\omega }_{\alpha }\right) }^{ \smile }\right) > \breve{\omega }\rrbracket = 1 \) . Define \[ f \in {V}^{\left( \mathbf{B}\right) }\text{ by }\mathcal{D}\left( f\right) = \left\{ {\langle \check{n},\check{\xi }{\rangle }^{\left( \mathbf{B}\right) } \mid n \in \omega \land \xi < {\omega }_{\alpha }}\right\} , \] \[ \left( {\forall n \in \omega }\right) \left( {\forall \xi < {\omega }_{\alpha }}\right) \left\lbrack {f\left( {\langle \check{n},\check{\xi }{\rangle }^{\left( \mathbf{B}\right) }}\right) = {b}_{n\xi }}\right\rbrack . 
\] Then again \[ \text{i)}\;\llbracket f : \check{\omega } \rightarrow {\left( {\omega }_{\alpha }\right) }^{ \vee }\rrbracket = \mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}{b}_{n\xi } \] and \[ \text{ii)}\;\llbracket f : \check{\omega } \rightarrow {\left( {\omega }_{\alpha }\right) }^{ \vee }\rrbracket \cdot \llbracket \left( {\exists \eta < {\left( {\omega }_{\alpha }\right) }^{ \vee }}\right) \left( {\forall n < \check{\omega }}\right) \left\lbrack {f\left( n\right) \leq \eta }\right\rbrack \rrbracket = \mathop{\sum }\limits_{{\eta < {\omega }_{\alpha }}}\mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi \leq \eta }}{b}_{n\xi }. \] But, since \( \llbracket {cf}\left( {\left( {\omega }_{\alpha }\right) }^{ \vee }\right) > \check{\omega }\rrbracket = \mathbf{1} \), \[ \llbracket f : \check{\omega } \rightarrow {\left( {\omega }_{\alpha }\right) }^{ \vee }\rrbracket \leq \llbracket \left( {\exists \eta < {\left( {\omega }_{\alpha }\right) }^{ \vee }}\right) \left( {\forall n < \check{\omega }}\right) \left\lbrack {f\left( n\right) \leq \eta }\right\rbrack \rrbracket . \] Therefore, by i) and ii), \[ \mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi < {\omega }_{\alpha }}}{b}_{n\xi } = \mathop{\sum }\limits_{{\eta < {\omega }_{\alpha }}}\mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{\xi \leq \eta }}{b}_{n\xi } \] Remark. Next we interpret the \( \left( {\omega ,\omega }\right) \) -WDL: Theorem 20.4.
B satisfies the \( \left( {\omega ,\omega }\right) \) -WDL iff \[ \llbracket \left( {\forall g}\right) \left\lbrack {\text{if}g : \check{\omega } \rightarrow \check{\omega }\text{then}\left( {\exists f \in {\left( {\omega }^{\omega }\right) }^{ \smile }}\right) \left( {\forall n \in \omega }\right) \left\lbrack {g\left( n\right) \leq f\left( n\right) }\right\rbrack }\right\rbrack \rrbracket = \mathbf{1}\text{,} \] i.e., if we define a partial ordering \( \prec \) for the number theoretic functions by \( f \prec g \leftrightarrow \left( {\forall n < \omega }\right) \left\lbrack {f\left( n\right) \leq g\left( n\right) }\right\rbrack \) for \( f, g \in {\omega }^{\omega } \), then in \( {\mathbf{V}}^{\left( \mathbf{B}\right) } \), the standard number theoretic functions (elements of \( {\left( {\omega }^{\omega }\right) }^{ \smile } \) ) are cofinal in the set of all number theoretic functions. Proof. Assume that \( \mathbf{B} \) satisfies the \( \left( {\omega ,\omega }\right) \) -WDL. Let \( g \in {V}^{\left( \mathbf{B}\right) } \) and define \[ b = \llbracket g : \check{\omega } \rightarrow \check{\omega }\rrbracket \] \[ {b}_{nm} = \llbracket g\left( \breve{n}\right) = \breve{m}\rrbracket \] for \( n, m < \omega \) as in the previous proof. 
Then \[ b = \mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{m < \omega }}{b}_{nm} \] \[ = \mathop{\sum }\limits_{{f \in {\omega }^{\omega }}}\mathop{\prod }\limits_{{n < \omega }}\mathop{\sum }\limits_{{m \leq f\left( n\right) }}{b}_{nm}\;\text{ by the }\left( {\omega ,\omega }\right) \text{-WDL } \] \[ = \mathop{\sum }\limits_{{f \in {\omega }^{\omega }}}\mathop{\prod }\limits_{{n < \omega }}\llbracket g\left( \check{n}\right) \leq \check{f}\left( \check{n}\right) \rrbracket \] \[ = \llbracket \left( {\exists f \in {\left( {\omega }^{\omega }\right) }^{ \vee }}\right) \left( {\forall n < \check{\omega }}\right) \left\lbrack {g\left( n\right) \leq f\left( n\right) }\right\rbrack \rrbracket . \] The converse is proved similarly. Definition 20.5. A Boolean \( \sigma \) -algebra \( \mathbf{B} \) is a measure algebra iff there exists a strictly positive \( \sigma \) -measure \( m \) on \( \mathbf{B} \), i.e., a function from \( \left| \mathbf{B}\right| \) into \( \left\lbrack {0,1}\right\rbrack \), the closed interval of real numbers between 0 and 1, such that \[ \left( {\forall b \in B}\right) \left\lbrack {b \neq \mathbf{0} \rightarrow m\left( b\right) > 0}\right\rbrack \land m\left( \mathbf{1}\right) = 1 \] and \[ \left( {\forall b \in {B}^{\infty }}\right) \left\lbrack {\left( {\forall i, j < \omega }\right) \left\lbrack {i \neq j \rightarrow {b}_{i} \c
106_106_The Cantor function
Definition 2.1
Definition 2.1. Let \( X \) be any set, let \( F \) be a \( T \) -algebra and let \( \sigma : X \rightarrow F \) be a function. We say that \( F \) (more strictly \( \left( {F,\sigma }\right) \) ) is a free T-algebra on the set \( X \) of free generators if, for every \( T \) -algebra \( A \) and function \( \tau : X \rightarrow A \) , there exists a unique homomorphism \( \varphi : F \rightarrow A \) such that \( {\varphi \sigma } = \tau \) : ![aa35aa61-3413-461b-96f9-006ca0282e6b_13_0.jpg](images/aa35aa61-3413-461b-96f9-006ca0282e6b_13_0.jpg) Observe that if \( \left( {F,\sigma }\right) \) is free, then \( \sigma \) is injective. For it is easily seen that there exists a \( T \) -algebra with more than one element, and hence if \( {x}_{1},{x}_{2} \) are distinct elements of \( X \), then for some \( A \) and \( \tau \) we have \( \tau \left( {x}_{1}\right) \neq \tau \left( {x}_{2}\right) \), which implies \( \sigma \left( {x}_{1}\right) \neq \sigma \left( {x}_{2}\right) \) . The next theorem asserts the existence of a free \( T \) -algebra on a set \( X \), and the proof is constructive. Informally, one could describe the free \( T \) -algebra on \( X \) as the collection of all formal expressions that can be formed from \( X \) and \( T \) by using only finitely many elements of \( X \) and \( T \) in any one expression. But to say precisely what is meant by a formal expression in the elements of \( X \) using the operations of \( T \) is tantamount to constructing the free algebra. Theorem 2.2. For any set \( X \) and any type \( T \), there exists a free \( T \) -algebra on \( X \) . This free T-algebra on \( X \) is unique up to isomorphism. Proof. (a) Uniqueness. We show first that if \( \left( {F,\sigma }\right) \) is free on \( X \), and if \( \varphi : F \rightarrow F \) is a homomorphism such that \( {\varphi \sigma } = \sigma \), then \( \varphi = {1}_{F} \), the identity map on \( F \) . 
To show this, we take \( A = F \) and \( \tau = \sigma \) in the defining condition. Then \( {1}_{F} : F \rightarrow F \) has the required property for \( \varphi \), and hence by its uniqueness is the only such map. Now let \( \left( {F,\sigma }\right) \) and \( \left( {{F}^{\prime },{\sigma }^{\prime }}\right) \) be free on \( X \) . ![aa35aa61-3413-461b-96f9-006ca0282e6b_14_0.jpg](images/aa35aa61-3413-461b-96f9-006ca0282e6b_14_0.jpg) Since \( \left( {F,\sigma }\right) \) is free, there exists a homomorphism \( \varphi : F \rightarrow {F}^{\prime } \) such that \( {\varphi \sigma } = {\sigma }^{\prime } \) . Since \( \left( {{F}^{\prime },{\sigma }^{\prime }}\right) \) is free, there exists a homomorphism \( {\varphi }^{\prime } : {F}^{\prime } \rightarrow F \) such that \( {\varphi }^{\prime }{\sigma }^{\prime } = \sigma \) . Hence \( {\varphi }^{\prime }{\varphi \sigma } = {\varphi }^{\prime }{\sigma }^{\prime } = \sigma \), and by the result above, \( {\varphi }^{\prime }\varphi = {1}_{F} \) . Similarly, \( \varphi {\varphi }^{\prime } = {1}_{{F}^{\prime }} \) . Thus \( \varphi ,{\varphi }^{\prime } \) are mutually inverse isomorphisms, and so uniqueness is proved. (b) Existence. An algebra \( F \) will be constructed as a union of sets \( {F}_{n} \) \( \left( {n \in \mathbf{N}}\right) \), which are defined inductively as follows. (i) \( {F}_{0} \) is the disjoint union of \( X \) and \( {T}_{0} \) . (ii) Assume \( {F}_{r} \) is defined for \( 0 \leq r < n \) . Then define \[ {F}_{n} = \left\{ {\left( {t,{a}_{1},\ldots ,{a}_{k}}\right) \mid t \in T,\operatorname{ar}\left( t\right) = k,{a}_{i} \in {F}_{{r}_{i}},\mathop{\sum }\limits_{{i = 1}}^{k}{r}_{i} = n - 1}\right\} . \] (iii) Put \( F = \mathop{\bigcup }\limits_{{n \in \mathbb{N}}}{F}_{n} \) . The set \( F \) is now given. To make it into a \( T \) -algebra, we must specify the action of the operations \( t \in T \) . 
(iv) If \( t \in {T}_{k} \) and \( {a}_{1},\ldots ,{a}_{k} \in F \), put \( t\left( {{a}_{1},\ldots ,{a}_{k}}\right) = \left( {t,{a}_{1},\ldots ,{a}_{k}}\right) \) . In particular, if \( t \in {T}_{0} \), then \( {t}_{F} \) is the element \( t \) of \( {F}_{0} \) . This makes \( F \) into a \( T \) -algebra. To complete the construction, we must give the map \( \sigma : X \rightarrow F \) . (v) For each \( x \in X \), put \( \sigma \left( x\right) = x \in {F}_{0} \) . Finally, we have to prove that \( F \) is free on \( X \), i.e., we must show that if \( A \) is any \( T \) -algebra and \( \tau : X \rightarrow A \) any map of \( X \) into \( A \), then there exists a unique homomorphism \( \varphi : F \rightarrow A \) such that \( {\varphi \sigma } = \tau \) . We do this by constructing inductively the restriction \( {\varphi }_{n} \) of \( \varphi \) to \( {F}_{n} \) and by showing that \( {\varphi }_{n} \) is completely determined by \( \tau \) and the \( {\varphi }_{k} \) for \( k < n \) . We have \( {F}_{0} = {T}_{0} \cup X \) . The homomorphism condition requires \( {\varphi }_{0}\left( {t}_{F}\right) = \) \( {t}_{A} \) for \( t \in {T}_{0} \), while for \( x \in X \) we require \( {\varphi \sigma }\left( x\right) = \tau \left( x\right) \), and so we must have \( {\varphi }_{0}\left( x\right) = \tau \left( x\right) \) . Thus \( {\varphi }_{0} : {F}_{0} \rightarrow A \) is defined, and is uniquely determined by the conditions to be satisfied by \( \varphi \) . Suppose that \( {\varphi }_{k} \) is defined and uniquely determined for \( k < n \) . An element of \( {F}_{n}\left( {n > 0}\right) \) is of the form \( \left( {t,{a}_{1}\ldots ,{a}_{k}}\right) \), where \( t \in {T}_{k},{a}_{i} \in {F}_{{r}_{i}} \) and \( \mathop{\sum }\limits_{{i = 1}}^{k}{r}_{i} = n - 1 \) . Thus \( {\varphi }_{{r}_{i}}\left( {a}_{i}\right) \) is already uniquely defined for \( i = 1,\ldots, k \) . 
Furthermore, since \( \left( {t,{a}_{1},\ldots ,{a}_{k}}\right) = t\left( {{a}_{1},\ldots ,{a}_{k}}\right) \), and since the homomorphism property of \( \varphi \) requires that \[ \varphi \left( {t,{a}_{1},\ldots ,{a}_{k}}\right) = t\left( {\varphi \left( {a}_{1}\right) ,\ldots ,\varphi \left( {a}_{k}\right) }\right) \] we must define \[ {\varphi }_{n}\left( {t,{a}_{1},\ldots ,{a}_{k}}\right) = t\left( {{\varphi }_{{r}_{1}}\left( {a}_{1}\right) ,\ldots ,{\varphi }_{{r}_{k}}\left( {a}_{k}\right) }\right) . \] This determines \( {\varphi }_{n} \) uniquely, and as each element of \( F \) belongs to exactly one subset \( {F}_{n} \), on putting \( \varphi \left( \alpha \right) = {\varphi }_{n}\left( \alpha \right) \) for \( \alpha \in {F}_{n}\left( {n \geq 0}\right) \), we see that \( \varphi \) is a homomorphism from \( F \) to \( A \) satisfying \( {\varphi \sigma }\left( x\right) = {\varphi }_{0}\left( x\right) = \tau \left( x\right) \) for all \( x \in X \) as required, and that \( \varphi \) is the only such homomorphism. The above inductive construction of the free \( T \) -algebra \( F \) fits in with its informal description-each \( {F}_{n} \) is a collection of " \( T \) -expressions", increasing in complexity with \( n \) . The notion of a \( T \) -expression is useful for an arbitrary \( T \) -algebra, so we shall formalise it, making use of free \( T \) -algebras to do so. Let \( A \) be any \( T \) -algebra, and let \( F \) be the free \( T \) -algebra on the set \( {X}_{n} = \) \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) . For any (not necessarily distinct) elements \( {a}_{1},\ldots ,{a}_{n} \in A \) , there exists a unique homomorphism \( \varphi : F \rightarrow A \) with \( \varphi \left( {x}_{i}\right) = {a}_{i}\left( {i = 1,\ldots, n}\right) \) . If \( w \in F \), then \( \varphi \left( w\right) \) is an element of \( A \) which is uniquely determined by \( {a}_{1},\ldots ,{a}_{n} \) . 
Hence we may define a function \( {w}_{A} : {A}^{n} \rightarrow A \) by putting \( {w}_{A}\left( {{a}_{1},\ldots }\right. \) , \( \left. {a}_{n}\right) = \varphi \left( w\right) \) . We omit the subscript \( A \) and write simply \( w\left( {{a}_{1},\ldots ,{a}_{n}}\right) \) . If in particular we take \( A = F \) and \( {a}_{i} = {x}_{i}\left( {i = 1,\ldots, n}\right) \), then \( \varphi \) is the identity and \( w\left( {{x}_{1},\ldots ,{x}_{n}}\right) = w \) . Definition 2.3. A \( T \) -word in the variables \( {x}_{1},\ldots ,{x}_{n} \) is an element of the free \( T \) -algebra on the set \( {X}_{n} = \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) of free generators. Definition 2.4. A word in the elements \( {a}_{1},\ldots ,{a}_{n} \) of a \( T \) -algebra \( A \) is an element \( w\left( {{a}_{1},\ldots ,{a}_{n}}\right) \in A \), where \( w \) is a \( T \) -word in the variables \( {x}_{1},\ldots ,{x}_{n} \) . We have used and even implicitly defined the term "variable" in the above definitions. In normal usage, a variable is "defined" as a symbol for which any element of the appropriate kind may be substituted. We give a formal definition of variable, confirming that our variables have this usual property. Definition 2.5. A T-algebra variable is an element of the free generating set of a free \( T \) -algebra. Among the words in the variables \( {x}_{1},\ldots ,{x}_{n} \) are the words \( {x}_{i}\left( {i = 1,\ldots, n}\right) \) , having the property that \( {x}_{i}\left( {{a}_{1},\ldots ,{a}_{n}}\right) = {a}_{i} \) . Thus variables may also be regarded as coordinate functions. The concept of a coordinate function certainly provides the most convenient definition of variable for use in analysis. 
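The evaluation \( {w}_{A}\left( {{a}_{1},\ldots ,{a}_{n}}\right) = \varphi \left( w\right) \) is easy to implement: representing the elements \( \left( {t,{a}_{1},\ldots ,{a}_{k}}\right) \) of \( F \) as nested tuples and the free generators as bare names, the unique homomorphism \( \varphi \) determined by \( \tau \) is structural recursion. The sketch below makes illustrative choices not found in the text: a type \( T \) with one binary symbol `mul` and one nullary symbol `e`, and two particular target algebras.

```python
# Sketch of evaluating a T-word in a T-algebra, following the inductive
# construction of phi in Theorem 2.2: elements of the free algebra F are
# nested tuples (t, a_1, ..., a_k), free generators are bare names, and
# phi is defined by structural recursion.  Signature and algebras are
# illustrative assumptions.

def phi(w, ops, tau):
    """The unique homomorphism F -> A with phi(x) = tau[x] on generators,
    where 'ops' gives the interpretation of each operation symbol in A."""
    if isinstance(w, tuple):                  # w = (t, a_1, ..., a_k)
        t, *args = w
        return ops[t](*(phi(a, ops, tau) for a in args))
    return tau[w]                             # w is a free generator

# type T: one binary symbol 'mul' and one nullary symbol 'e'
w = ('mul', ('mul', 'x', 'y'), ('e',))        # the T-word (x * y) * e

# evaluate w in the algebra (Z, +, 0): words become sums
ops = {'mul': lambda a, b: a + b, 'e': lambda: 0}
assert phi(w, ops, {'x': 3, 'y': 4}) == 7

# evaluate w in strings under concatenation: phi is again a homomorphism
ops2 = {'mul': lambda a, b: a + b, 'e': lambda: ''}
assert phi(w, ops2, {'x': 'ab', 'y': 'c'}) == 'abc'
```

The same `w` evaluated in two different algebras illustrates that \( {w}_{A} \) depends on the algebra while \( w \) itself is a purely formal expression.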
For example, when we speak of a function \( f\left( {x, y}\right) \) as a function of two real variables \( x, y \), we have a function \( f \) defined on some subset of \( \mathbf{R} \times \mathbf{R} \), together with coordinate projections \( x\left( {a, b}\right) = a, y\left( {a, b}\right) = b \) \( \left( {a, b \in \mathbf{R}}\right) \), and \( f\left( {x, y}\right) \) is in fact the composite function \( \left( {a, b}\right) \mapsto f\left( {x\left( {a, b}\right), y\left( {a, b}\right) }\right) \). ## Exercises 2.6. \( T \) consists
1172_(GTM8)Axiomatic Set Theory
Definition 9.1
Definition 9.1. \( \;\mathbf{M} \) is a \( \mathbf{B} \) -valued substructure of \( {\mathbf{M}}^{\prime } \) iff 1. \( M \subseteq {M}^{\prime } \) , 2. For each \( n \) -ary predicate symbol \( R \) of \( \mathcal{L} \), including \( = \) and \( \in \) , \[ \left( {\forall {a}_{1},\ldots ,{a}_{n} \in M}\right) \left\lbrack {{\left\lbrack R\left( {a}_{1},\ldots ,{a}_{n}\right) \right\rbrack }_{\mathbf{M}} = {\left\lbrack R\left( {a}_{1},\ldots ,{a}_{n}\right) \right\rbrack }_{{\mathbf{M}}^{\prime }}}\right\rbrack \] 3. \( {c}^{\mathbf{M}} = {c}^{{\mathbf{M}}^{\prime }} \) for each individual constant \( c \) of \( \mathcal{L} \) . Remark. Most of the conditions 1-3 of page 68 can be easily generalized to the B-valued case. It is, however, more difficult to find an adequate condition corresponding to the requirement that \( {\mathbf{M}}_{\alpha } \) be transitive. Definition 9.2. If \( \mathbf{M} \) is a B-valued structure for \( \mathcal{L} \) and \( {M}^{\prime } \subseteq M \), then an element \( b \in M \) is defined over \( {M}^{\prime } \) iff \[ \left( {\forall x \in M}\right) \left\lbrack {\llbracket x \in b\rrbracket = \mathop{\sum }\limits_{{{x}^{\prime } \in {M}^{\prime }}}\left\lbrack {x = {x}^{\prime }}\right\rbrack \left\lbrack {{x}^{\prime } \in b}\right\rbrack }\right\rbrack . \] Remark. Thus, in order to calculate the value of \( \llbracket x \in b\rrbracket \), if \( b \) is defined over \( {M}^{\prime } \), we need only know the values \( \llbracket {x}^{\prime } \in b\rrbracket \) for \( {x}^{\prime } \in {M}^{\prime } \) . We now wish to formulate conditions analogous to 1-3 of page 68. Let \( \left\langle {{\mathbf{M}}_{\alpha } \mid \alpha \in {On}}\right\rangle \) be a sequence of \( \mathbf{B} \) -valued structures for the language \( \mathcal{L} \) such that \( {M}_{\alpha } \) is a nonempty set except for \( {M}_{0} \) , 1. 
\( {\mathbf{M}}_{\alpha } \) is a B-valued substructure of \( {\mathbf{M}}_{\beta } \) for \( \alpha < \beta \), and 2. \( {M}_{\alpha } = \mathop{\bigcup }\limits_{{\beta < \alpha }}{M}_{\beta } \) for \( \alpha \in {K}_{\mathrm{II}} \). Then \( M \triangleq \mathop{\bigcup }\limits_{{\alpha \in {On}}}{M}_{\alpha } \). Again we can define \( \mathbf{M} \) such that \( \left| \mathbf{M}\right| = M \), \( \mathbf{M} \) is a B-valued structure, and \( {\mathbf{M}}_{\alpha } \) is a \( \mathbf{B} \)-valued substructure of \( \mathbf{M} \) for all \( \alpha \); \( \mathbf{M} \) is uniquely determined by these conditions. \( \llbracket \varphi \rrbracket \) stands for \( \llbracket \varphi {\rrbracket }_{\mathbf{M}} \). Furthermore we require the following conditions. 3. \( {\mathbf{M}}_{\alpha } \) satisfies the Axiom of Extensionality. 4. For each \( b \in {M}_{\alpha + 1} \), \( b \) is defined over \( {M}_{\alpha } \). 5. For each formula \( \varphi \) of \( {\mathcal{L}}_{0} \), \[ \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\alpha }}\right) \left( {\exists b \in {M}_{\alpha + 1}}\right) \left( {\forall a \in {M}_{\alpha }}\right) \left\lbrack {\llbracket \varphi \left( {a,{a}_{1},\ldots ,{a}_{n}}\right) {\rrbracket }_{{\mathbf{M}}_{\alpha }} = \llbracket a \in b\rrbracket }\right\rbrack . \] Condition 4 replaces the requirement that \( {M}_{\alpha } \) be transitive for all \( \alpha \) in the 2-valued case. Note that 5 is just condition 3 of page 68, i.e., \[ {Df}\left( {\mathbf{M}}_{\alpha }\right) \subseteq {M}_{\alpha + 1}\;\text{ for }\;\mathbf{B} = \mathbf{2}. \] Since \( {\mathbf{M}}_{\alpha } \) is a B-valued substructure of \( \mathbf{M} \), \[ \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {M}_{\alpha }}\right) \left\lbrack {\llbracket \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) {\rrbracket }_{{\mathbf{M}}_{\alpha }} = \llbracket \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) \rrbracket }\right\rbrack \] if \( \varphi \) contains no quantifiers.
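The computation prescribed by Definition 9.2 can be tried out numerically in a small powerset Boolean algebra. The following is a purely illustrative sketch of mine (the names `eq`, `mem`, the elements `p, q, r`, and the index set are invented, not from the text): Boolean values are subsets of \( \{ 0,1\} \), sums are unions, products are intersections.

```python
# Boolean values are taken in the powerset algebra of {0, 1}: sums are
# unions (least upper bounds) and products are intersections.
ONE = frozenset({0, 1})

def bsum(values):
    # The sum of a family of Boolean values in a powerset algebra is the union.
    return frozenset().union(*values) if values else frozenset()

def membership_value(x, b, M_prime, eq, mem):
    # Definition 9.2: if b is defined over M', then
    #   [[x in b]] = SUM over x' in M' of [[x = x']] . [[x' in b]].
    return bsum([eq[(x, xp)] & mem[(xp, b)] for xp in M_prime])

# A toy table of values (purely illustrative):
eq = {("r", "p"): frozenset({0}), ("r", "q"): frozenset({1})}
mem = {("p", "b"): ONE, ("q", "b"): frozenset({1})}

print(membership_value("r", "b", ["p", "q"], eq, mem))  # frozenset({0, 1})
```

The point of the remark following Definition 9.2 is visible here: only the finitely many values \( \llbracket {x}^{\prime } \in b\rrbracket \) for \( {x}^{\prime } \in {M}^{\prime } \) enter the computation of \( \llbracket x \in b\rrbracket \).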
The following three theorems are proved just as in the case \( \mathbf{B} = \mathbf{2} \). Theorem 9.3. If \( \mathbf{A} \) is a \( \mathbf{B} \)-valued structure for \( \mathcal{L} \) and \( {c}^{\mathbf{A}} \in A \) for every individual constant \( c \) of \( \mathcal{L} \), then there exists a unique B-valued substructure \( \mathbf{C} \) of \( \mathbf{A} \) such that \( \left| \mathbf{C}\right| = A \). Theorem 9.4. If \( \mathbf{A} \) is a \( \mathbf{B} \)-valued structure for \( \mathcal{L} \) and \( \left| \mathbf{A}\right| \) is a set, then there exists a formula \( \Phi \) of \( {\mathcal{L}}_{0} \) such that for all closed formulas \( \varphi \) of \( \mathcal{L}\left( {C\left( A\right) }\right) \) \[ \llbracket \varphi {\rrbracket }_{\mathbf{A}} = b \leftrightarrow \Phi \left( {\mathbf{A},\mathbf{B},\ulcorner \varphi \urcorner, b}\right) . \] Theorem 9.5. If \( \mathbf{A} \) is a \( \mathbf{B} \)-valued structure for \( \mathcal{L} \), where \( A = \left| \mathbf{A}\right| \) may be a proper class, then for each formula \( \varphi \) of \( {\mathcal{L}}_{0} \) there exists a formula \( \psi \) of \( {\mathcal{L}}_{0} \) such that \[ \left( {\forall {a}_{1},\ldots ,{a}_{n} \in A}\right) \left\lbrack {\llbracket \varphi \left( {{a}_{1},\ldots ,{a}_{n}}\right) {\rrbracket }_{\mathbf{A}} = b \leftrightarrow \psi \left( {\mathbf{A},\mathbf{B},{a}_{1},\ldots ,{a}_{n}, b}\right) }\right\rbrack . \] Theorem 9.6. \( \left( {\forall a \in {M}_{\alpha }}\right) \left\lbrack {\llbracket \left( {\exists x \in a}\right) \varphi \left( x\right) \rrbracket = \mathop{\sum }\limits_{{x \in {M}_{\alpha }}}\llbracket x \in a\rrbracket \llbracket \varphi \left( x\right) \rrbracket }\right\rbrack \). Proof. If \( a \in {M}_{\alpha } \) then \( a \in {M}_{\alpha + 1} \) and hence \( a \) is defined over \( {M}_{\alpha } \).
\[ \llbracket \left( {\exists x \in a}\right) \varphi \left( x\right) \rrbracket = \mathop{\sum }\limits_{{x \in M}}\llbracket x \in a\rrbracket \llbracket \varphi \left( x\right) \rrbracket \] \[ = \mathop{\sum }\limits_{{x \in M}}\mathop{\sum }\limits_{{{x}^{\prime } \in {M}_{\alpha }}}\llbracket x = {x}^{\prime }\rrbracket \llbracket {x}^{\prime } \in a\rrbracket \llbracket \varphi \left( x\right) \rrbracket \;\text{ (Since }a\text{ is defined over }{M}_{\alpha }\text{) } \] \[ \leq \mathop{\sum }\limits_{{x \in M}}\mathop{\sum }\limits_{{{x}^{\prime } \in {M}_{\alpha }}}\llbracket x = {x}^{\prime }\rrbracket \llbracket {x}^{\prime } \in a\rrbracket \llbracket \varphi \left( {x}^{\prime }\right) \rrbracket \;\text{ (Axiom of Equality) } \] \[ \leq \mathop{\sum }\limits_{{{x}^{\prime } \in {M}_{\alpha }}}\llbracket {x}^{\prime } \in a\rrbracket \llbracket \varphi \left( {x}^{\prime }\right) \rrbracket \] \[ \leq \mathop{\sum }\limits_{{x \in M}}\llbracket x \in a\rrbracket \llbracket \varphi \left( x\right) \rrbracket \] \[ = \llbracket \left( {\exists x \in a}\right) \varphi \left( x\right) \rrbracket . \] Theorem 9.7. \( \left( {\forall a \in {M}_{\alpha }}\right) \left\lbrack {\llbracket \left( {\forall x \in a}\right) \varphi \left( x\right) \rrbracket = \mathop{\prod }\limits_{{x \in {M}_{\alpha }}}\left( {\llbracket x \in a\rrbracket \Rightarrow \llbracket \varphi \left( x\right) \rrbracket }\right) }\right\rbrack \). Remark. Theorem 9.7 follows from duality. The preceding results enable us to cope with bounded quantifiers. As an application we have the following. Theorem 9.8. \( \mathbf{M} \) satisfies the Axiom of Extensionality, i.e., \[ \left( {\forall a, b \in M}\right) \left\lbrack {\llbracket \left( {\forall x}\right) \left\lbrack {x \in a \leftrightarrow x \in b}\right\rbrack \rightarrow a = b\rrbracket = 1}\right\rbrack . \] Proof.
If \( a, b \in M \) then \( \left( {\exists \alpha }\right) \left\lbrack {a \in {M}_{\alpha } \land b \in {M}_{\alpha }}\right\rbrack \). Then from Theorem 9.7, \[ \llbracket \left( {\forall x}\right) \left\lbrack {x \in a \leftrightarrow x \in b}\right\rbrack \rrbracket = \mathop{\prod }\limits_{{x \in {M}_{\alpha }}}\llbracket x \in a \leftrightarrow x \in b\rrbracket \] \[ = \mathop{\prod }\limits_{{x \in {M}_{\alpha }}}\llbracket x \in a \leftrightarrow x \in b{\rrbracket }_{{\mathbf{M}}_{\alpha }} \] \[ \leq \llbracket a = b{\rrbracket }_{{\mathbf{M}}_{\alpha }}\;\text{(by 3 above)} \] \[ = \llbracket a = b\rrbracket . \] Theorem 9.9. \( \mathbf{M} \) satisfies the Axiom of Unions, i.e., \[ \left( {\forall a \in M}\right) \left\lbrack {\llbracket \left( {\exists b}\right) \left( {\forall x}\right) \left\lbrack {x \in b \leftrightarrow \left( {\exists y \in a}\right) \left\lbrack {x \in y}\right\rbrack }\right\rbrack \rrbracket = 1}\right\rbrack . \] Proof. If \( a \in {M}_{\alpha } \) then there is \( b \in {M}_{\alpha + 1} \) such that \[ \left( {\forall {x}^{\prime } \in {M}_{\alpha }}\right) \left\lbrack {\llbracket \left( {\exists y \in a}\right) \left\lbrack {{x}^{\prime } \in y}\right\rbrack {\rrbracket }_{{\mathbf{M}}_{\alpha }} = \llbracket {x}^{\prime } \in b\rrbracket }\right\rbrack .
\] Since \( b \) is defined over \( {M}_{\alpha } \), \[ \llbracket x \in b\rrbracket = \mathop{\sum }\limits_{{{x}^{\prime } \in {M}_{\alpha }}}\llbracket x = {x}^{\prime }\rrbracket \llbracket \left( {\exists y \in a}\right) \left\lbrack {{x}^{\prime } \in y}\right\rbrack {\rrbracket }_{{\mathbf{M}}_{\alpha }} \] \[ = \mathop{\sum }\limits_{{{x}^{\prime } \in {M}_{\alpha }}}\llbracket x = {x}^{\prime }\rrbracket \mathop{\sum }\limits_{{y \in {M}_{\alpha }}}\llbracket y \in a\rrbracket \llbracket {x}^{\prime } \in y\rrbracket \] \[ = \mathop{\sum }\limits_{{y \in {M}_{\alpha }}}\llbracket y \in a\rrbracket \mathop{\sum }\limits_{{{x}^{\prime } \in {M}_{\alpha }}}\llbracket x = {x}^{\prime }\rrbracket \llbracket {x}^{\prime } \in y\rrbracket \] \[ = \mathop{\sum }\limits_{{y \in {M}_{\alpha }}}\llbracket y \in a\rrbracket \llbracket x \in y\rrbracket \;\left( {\text{Since }y\text{ is defined over }{M}_{\alpha }}\right) \] \[ = \llbracket \left( {\exists y \in a}\right) \left\lbrack {x \in y}\right\rbrack \rrbracket . \]
18_Algebra Chapter 0
Definition 4.7
Definition 4.7. An \( R \)-module \( M \) is finitely presented if for some positive integers \( m, n \) there is an exact sequence \[ {R}^{n}\overset{\varphi }{ \rightarrow }{R}^{m} \rightarrow M \rightarrow 0 \] Such a sequence is called a presentation of \( M \). In other words, finitely presented modules are cokernels (cf. III 6.2) of homomorphisms between finitely generated free modules. Everything about \( M \) must be encoded in the homomorphism \( \varphi \); therefore, we should be able to describe the module \( M \) by studying the matrix corresponding to \( \varphi \). There is a gap between finitely presented modules and finitely generated modules, but on reasonable rings the two notions coincide: (\( {}^{21} \)In context the exactness of a sequence of \( R \)-modules will be understood, so the displayed sequence is a way to denote the fact that there exists a surjective homomorphism of \( R \)-modules from \( R \) to \( M \); cf. Example III 7.2. Also note the convention of denoting \( R \) by \( {R}^{1} \) when it is viewed as a module over itself.) Lemma 4.8. If \( R \) is a Noetherian ring, then every finitely generated \( R \)-module is finitely presented. Proof. If \( M \) is a finitely generated module, there is an exact sequence \[ {R}^{m}\overset{\pi }{ \rightarrow }M \rightarrow 0 \] for some \( m \). Since \( R \) is Noetherian, \( {R}^{m} \) is Noetherian as an \( R \)-module (Corollary III 6.8). Thus \( \ker \pi \) is finitely generated; that is, there is an exact sequence \[ {R}^{n} \rightarrow \ker \pi \rightarrow 0 \] for some \( n \). Putting together the two sequences gives a presentation of \( M \). Once we have gone one step to obtain generators and two steps to get a presentation, we should hit upon the idea to keep going: Definition 4.9.
A resolution of an \( R \)-module \( M \) by finitely generated free modules is an exact complex \[ \ldots \rightarrow {R}^{{m}_{3}} \rightarrow {R}^{{m}_{2}} \rightarrow {R}^{{m}_{1}} \rightarrow {R}^{{m}_{0}} \rightarrow M \rightarrow 0. \] Iterating the argument proving Lemma 4.8 shows that if \( R \) is Noetherian, then every finitely generated module has a resolution as in Definition 4.9. It is an important conceptual step to realize that \( M \) may be studied by studying an exact complex of free modules \[ \ldots \rightarrow {R}^{{m}_{3}} \rightarrow {R}^{{m}_{2}} \rightarrow {R}^{{m}_{1}} \rightarrow {R}^{{m}_{0}} \] resolving \( M \), that is, such that \( M \) is the cokernel of the last map. The \( {R}^{{m}_{0}} \) piece keeps track of the generators of \( M \); \( {R}^{{m}_{1}} \) accounts for the relations among these generators; \( {R}^{{m}_{2}} \) records relations among the relations; and so on. Developing this idea in full generality would take us too far for now: for example, we would have to deal with the fact that every module admits many different resolutions (for example, we can bump up every \( {m}_{i} \) by one by direct-summing each term in the complex with a copy of \( {R}^{1} \), sent to itself by the maps in the complex). We will do this very carefully later on, in Chapter IX. However, we can already learn something by considering coarse questions, such as 'how long' a resolution can be. A priori, there is no reason to expect a free resolution to be 'finite', that is, such that \( {m}_{i} = 0 \) for \( i \gg 0 \). Such finiteness conditions tell us something special about the base ring \( R \). The first natural question of this type is: for which rings \( R \) is it the case that every finitely generated \( R \)-module \( M \) has a free resolution 'of length 0', that is, stopping at \( {m}_{0} \)? That would mean that there is an exact sequence \[ 0 \rightarrow {R}^{{m}_{0}} \rightarrow M \rightarrow 0.
\] Therefore, \( M \) itself must be free. What does this say about \( R \)? Proposition 4.10. Let \( R \) be an integral domain. Then \( R \) is a field if and only if every finitely generated \( R \)-module is free. Proof. If \( R \) is a field, then every \( R \)-module is free, by Proposition 1.7. For the converse, assume that every finitely generated \( R \)-module is free; in particular, every cyclic module is free and hence torsion-free. But then \( R \) is a field, by Lemma 4.5. The next natural question concerns rings for which finitely generated modules admit free resolutions of length 1. It is convenient to phrase the question in stronger terms, that is, to require that for every finitely generated \( R \)-module \( M \) and every beginning of a free resolution \[ {R}^{{m}_{0}}\overset{\pi }{ \rightarrow }M \rightarrow 0 \] the resolution can be completed to a length 1 free resolution. This would amount to demanding that there exist an integer \( {m}_{1} \) and an \( R \)-module homomorphism \( {R}^{{m}_{1}} \rightarrow {R}^{{m}_{0}} \) such that the sequence \[ 0 \rightarrow {R}^{{m}_{1}} \rightarrow {R}^{{m}_{0}}\xrightarrow[]{\pi }M \rightarrow 0 \] is exact. Equivalently, this condition requires that the module \( \ker \pi \) of relations among the \( {m}_{0} \) generators necessarily be free. Claim 4.11. Let \( R \) be an integral domain satisfying this property. Then \( R \) is a PID. Proof. Let \( I \) be an ideal of \( R \), and apply the condition to \( M = R/I \). Since we have an epimorphism \[ {R}^{1}\overset{\pi }{ \rightarrow }R/I \rightarrow 0 \] the condition says that \( \ker \pi \) is free; that is, \( I \) is free. Since \( I \) is a free submodule of \( R \), which is free of rank 1, \( I \) must be free of rank \( \leq 1 \) by Proposition 1.9. Therefore \( I \) is generated by one element, as needed.
The classification result for finitely generated modules over PIDs (Theorem 5.6), which I keep bringing up, will essentially be a converse to Claim 4.11: the mysterious condition requiring free resolutions of finitely generated modules to have length at most 1 turns out to be a characterization of PIDs, just as the length 0 condition is a characterization of fields (as proved in Proposition 4.10). We will work this out in §5.2. 4.3. Reading a presentation. Let us return to the brilliant idea of studying a finitely presented module \( M \) by studying a homomorphism of free modules \( \left( *\right) \) \[ \varphi : {R}^{n} \rightarrow {R}^{m} \] such that \( M = \operatorname{coker}\varphi \). As we know, we can describe \( \varphi \) completely by considering a matrix \( A \) representing it, and therefore we can describe any finitely presented module by giving a matrix corresponding to (a homomorphism corresponding to) it. In many cases, judicious use of the material developed in §2 allows us to determine the module \( M \) explicitly. For example, take \[ \left( \begin{array}{ll} 1 & 3 \\ 2 & 3 \\ 5 & 9 \end{array}\right) ; \] this matrix corresponds to a homomorphism \( {\mathbb{Z}}^{2} \rightarrow {\mathbb{Z}}^{3} \), hence to a \( \mathbb{Z} \)-module, that is, a finitely generated abelian group \( G \). The reader should figure out what \( G \) is more explicitly (in terms of the classification of §IV.6, cf. Exercise 2.19) before reading on. In the rest of this section I will simply tie up loose ends into a more concrete recipe to perform these operations. Incidentally, a number of software packages can perform sophisticated operations on modules (say, over polynomial rings); a personal favorite is Macaulay2. These packages rely precisely on the correspondence between modules and matrices: with due care, every operation on modules (such as direct sums, tensors, quotients, etc.) can be executed on the corresponding matrices. For example, Lemma 4.12.
Let \( A, B \) be matrices with entries in an integral domain \( R \), and let \( M, N \) denote the corresponding \( R \)-modules. Then \( M \oplus N \) corresponds to the block matrix \[ \left( \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right) \] Proof. This follows immediately from Exercise 4.16. Coming back to (*), note that the module \( M \) cannot know which bases we have chosen for \( {R}^{n} \) or \( {R}^{m} \); that is, \( M = \operatorname{coker}\varphi \) really depends on the homomorphism \( \varphi \), not on the specific matrix representation we have chosen for \( \varphi \). This is an issue that we have already encountered, and treated rather thoroughly, in §2.2 and following: 'equivalent' matrices represent the same homomorphism and hence the same module. In the context we are exploring now, Proposition 2.5 tells us that two matrices \( A, B \) represent the same module \( M \) if there exist invertible matrices \( P, Q \) such that \( B = {PAQ} \). But this is not the whole story. Two different homomorphisms \( {\varphi }_{1},{\varphi }_{2} \) may have isomorphic cokernels, even if they act between different modules: the extreme case being any isomorphism \[ {R}^{m} \rightarrow {R}^{m} \] whose cokernel is 0 (regardless of the isomorphism and no matter what \( m \) is). Therefore, if a matrix \( {A}^{\prime } \) corresponds to a module \( M \), then (by Lemma 4.12) so does the block matrix \[ A = \left( \begin{matrix} {I}_{r} & 0 \\ 0 & {A}^{\prime } \end{matrix}\right) \] where \( {I}_{r} \) is the \( r \times r \) identity matrix (and \( r \) is any nonnegative integer); in fact, \( {I}_{r} \) could be replaced here by any invertible matrix. The following proposition attempts to formalize these observations. Proposition 4.13.
Let \( A \) be a matrix with entries in an integral domain \( R \), and let \( B \) be obtained from \( A \) by any sequence of the following operations: - switch two rows or two columns; - add to one row (resp., column) a multiple of another row (resp., column); - multiply all entries in one row (or column) by a unit of \( R \) ; - if a unit is the only nonzero entry in a row (or column), remove the row and column containing that entry. Then \( B \) represents the same \( R \
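The row and column operations just listed are enough to diagonalize an integer matrix and read off its cokernel. A rough Python sketch of mine (function names are invented; this produces a diagonal form, not the canonical invariant-factor form, which is all one needs to identify the group up to isomorphism):

```python
def diagonalize(A):
    """Diagonalize an integer matrix using only row/column swaps and
    subtraction of integer multiples of rows/columns; returns the
    diagonal entries."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    for s in range(min(m, n)):
        while True:
            # pick a nonzero entry of least absolute value as the pivot
            entries = [(abs(A[i][j]), i, j)
                       for i in range(s, m) for j in range(s, n) if A[i][j]]
            if not entries:
                break
            _, i, j = min(entries)
            A[s], A[i] = A[i], A[s]                  # row swap
            for row in A:                            # column swap
                row[s], row[j] = row[j], row[s]
            cleared = True
            for r in range(s + 1, m):                # clear column s
                q = A[r][s] // A[s][s]
                for c in range(s, n):
                    A[r][c] -= q * A[s][c]
                cleared = cleared and A[r][s] == 0
            for c in range(s + 1, n):                # clear row s
                q = A[s][c] // A[s][s]
                for r in range(s, m):
                    A[r][c] -= q * A[r][s]
                cleared = cleared and A[s][c] == 0
            if cleared:                              # remainders shrink, so
                break                                # this loop terminates
    return [A[k][k] for k in range(min(m, n))]

def cokernel(A):
    """Describe coker(Z^n -> Z^m) as (free rank, torsion orders)."""
    d = [abs(x) for x in diagonalize(A)]
    rank = sum(1 for x in d if x)
    return len(A) - rank, sorted(x for x in d if x > 1)

# The 3x2 matrix considered earlier: its cokernel G is Z + Z/3.
print(cokernel([[1, 3], [2, 3], [5, 9]]))  # (1, [3])
```

On the example matrix the diagonal form is \( \operatorname{diag}\left( {1,3}\right) \) padded with a zero row, so \( G \cong \mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z} \), consistent with the classification of finitely generated abelian groups.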
1057_(GTM217)Model Theory
Definition 2.4.12
Definition 2.4.12 Let \( \mathcal{L} \) be a language and \( \kappa \) be an infinite cardinal. The formulas of the infinitary logic \( {\mathcal{L}}_{\kappa ,\omega } \) are defined inductively as follows: i) Every atomic \( \mathcal{L} \) -formula is a formula of \( {\mathcal{L}}_{\kappa ,\omega } \) . ii) If \( X \) is a set of formulas of \( {\mathcal{L}}_{\kappa ,\omega } \) such that all of the free variables come from a fixed finite set and \( \left| X\right| < \kappa \), then \[ \mathop{\bigwedge }\limits_{{\phi \in X}}\phi \text{ and }\mathop{\bigvee }\limits_{{\phi \in X}}\phi \] are formulas of \( {\mathcal{L}}_{\kappa ,\omega } \) . iii) If \( \phi \) is a formula of \( {\mathcal{L}}_{\kappa ,\omega } \), then so are \( \neg \phi ,\forall {v\phi } \), and \( \exists {v\phi } \) . We say that \( \phi \) is a formula of \( {\mathcal{L}}_{\infty ,\omega } \) if it is an \( {\mathcal{L}}_{\kappa ,\omega } \) -formula for some infinite cardinal \( \kappa \) . When \( \kappa = {\aleph }_{1} \), it is traditional to write \( {\mathcal{L}}_{{\omega }_{1},\omega } \) . Intuitively, \( {\mathcal{L}}_{{\omega }_{1},\omega } \) is the language where we allow countable conjunctions and countable disjunctions. As in Definition 1.1.6, we can define satisfaction for formulas of \( {\mathcal{L}}_{\infty ,\omega } \) . The only difference is that \( \mathop{\bigwedge }\limits_{{\phi \in X}}\phi \) is true if all of the \( \phi \in X \) are true and \( \bigvee \phi \) is true if at least one of the formulas \( \phi \in X \) is true. 
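A standard illustration of the extra expressive power (an example of mine, not from this text): the class of torsion abelian groups is axiomatized by a single \( {\mathcal{L}}_{{\omega }_{1},\omega } \) sentence using one countable disjunction, although a routine compactness argument shows that no first-order theory axiomatizes it.

```latex
% In the language of groups, "every element has finite order" is the
% L_{omega_1, omega} sentence
\forall x \; \bigvee_{1 \le n < \omega} \;
    \underbrace{x \cdot x \cdots x}_{n \text{ factors}} = e
```

The disjunction is countable and involves only the free variable \( x \), so clause ii) of Definition 2.4.12 applies with \( \kappa = {\aleph }_{1} \).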
If \( \mathcal{L} \) is any first-order language and \( \mathcal{M} \) is an \( \mathcal{L} \)-structure we define a sequence of \( {\mathcal{L}}_{\infty ,\omega } \)-formulas \( {\phi }_{\bar{a},\alpha }^{\mathcal{M}}\left( \bar{v}\right) \), where \( \bar{a} \in {M}^{l} \) and \( \alpha \) is an ordinal, as follows: \[ {\phi }_{\bar{a},0}^{\mathcal{M}}\left( \bar{v}\right) = \mathop{\bigwedge }\limits_{{\psi \in X}}\psi \left( \bar{v}\right) \] where \( X = \{ \psi : \mathcal{M} \vDash \psi \left( \bar{a}\right) \) and \( \psi \) is atomic or the negation of an atomic \( \mathcal{L} \)-formula \( \} \). If \( \alpha \) is a limit ordinal, then \[ {\phi }_{\bar{a},\alpha }^{\mathcal{M}}\left( \bar{v}\right) = \mathop{\bigwedge }\limits_{{\beta < \alpha }}{\phi }_{\bar{a},\beta }^{\mathcal{M}}\left( \bar{v}\right) \] If \( \alpha = \beta + 1 \), then \[ {\phi }_{\bar{a},\alpha }^{\mathcal{M}}\left( \bar{v}\right) = \mathop{\bigwedge }\limits_{{b \in M}}\exists w{\phi }_{\bar{a}b,\beta }^{\mathcal{M}}\left( {\bar{v}, w}\right) \land \forall w\mathop{\bigvee }\limits_{{b \in M}}{\phi }_{\bar{a}b,\beta }^{\mathcal{M}}\left( {\bar{v}, w}\right) . \] Lemma 2.4.13 Let \( \mathcal{M} \) and \( \mathcal{N} \) be \( \mathcal{L} \)-structures, \( \bar{a} \in {M}^{l} \), and \( \bar{b} \in {N}^{l} \). Then, \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\alpha }\left( {\mathcal{N},\bar{b}}\right) \) if and only if \( \mathcal{N} \vDash {\phi }_{\bar{a},\alpha }^{\mathcal{M}}\left( \bar{b}\right) \). Proof We prove this by induction on \( \alpha \) (see Appendix A). Because \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{0}\left( {\mathcal{N},\bar{b}}\right) \) if and only if they satisfy the same atomic formulas, the lemma holds for \( \alpha = 0 \). Suppose that \( \gamma \) is a limit ordinal and the lemma is true for all \( \alpha < \gamma \).
Then \[ \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\gamma }\left( {\mathcal{N},\bar{b}}\right) \; \Leftrightarrow \;\left( {\mathcal{M},\bar{a}}\right) { \sim }_{\alpha }\left( {\mathcal{N},\bar{b}}\right) \text{ for all }\alpha < \gamma \] \[ \Leftrightarrow \mathcal{N} \vDash {\phi }_{\bar{a},\alpha }^{\mathcal{M}}\left( \bar{b}\right) \text{ for all }\alpha < \gamma \] \[ \Leftrightarrow \mathcal{N} \vDash {\phi }_{\bar{a},\gamma }^{\mathcal{M}}\left( \bar{b}\right) \] Suppose that the lemma is true for \( \alpha \) . First, suppose that \( \mathcal{N} \vDash {\phi }_{\bar{a},\alpha + 1}^{\mathcal{M}}\left( \bar{b}\right) \) . Let \( c \in M \) . Because \[ \mathcal{N} \vDash \mathop{\bigwedge }\limits_{{x \in M}}\exists w{\phi }_{\bar{a}x,\alpha }^{\mathcal{M}}\left( {\bar{b}, w}\right) \] there is \( d \in N \) such that \( \mathcal{N} \vDash {\phi }_{\bar{a}c,\alpha }^{\mathcal{M}}\left( {\bar{b}, d}\right) \) . By induction, \( \left( {\mathcal{M},\bar{a}, c}\right) { \sim }_{\alpha } \) \( \left( {\mathcal{N},\bar{b}, d}\right) \) . If \( d \in N \), then because \[ \mathcal{N} \vDash \forall w\mathop{\bigvee }\limits_{{c \in M}}{\phi }_{\bar{a}c,\alpha }^{\mathcal{M}}\left( {\bar{b}, w}\right) \] there is \( c \in M \) such that \( \mathcal{N} \vDash {\phi }_{\bar{a}c,\alpha }^{\mathcal{M}}\left( {\bar{b}, d}\right) \) and \( \left( {\mathcal{M},\bar{a}, c}\right) { \sim }_{\alpha }\left( {\mathcal{N},\bar{b}, d}\right) \) . Thus \( \left( {\mathcal{M},\overline{a}}\right) { \sim }_{\alpha + 1}\left( {\mathcal{N},\overline{b}}\right) . \) Suppose, on the other hand, that \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\alpha + 1}\left( {\mathcal{N},\bar{b}}\right) \) . Suppose that \( c \in M \), then there is \( d \in N \) such that \( \left( {\mathcal{M},\bar{a}, c}\right) { \sim }_{\alpha }\left( {\mathcal{N},\bar{b}, d}\right) \) and \( \mathcal{N} \vDash {\phi }_{\bar{a}c,\alpha }^{\mathcal{M}}\left( {\bar{b}, d}\right) \) . 
Similarly, if \( d \in N \), then there is \( c \in M \) such that \( \mathcal{N} \vDash {\phi }_{\bar{a}c,\alpha }^{\mathcal{M}}\left( {\bar{b}, d}\right) \). Thus, \( \mathcal{N} \vDash {\phi }_{\bar{a},\alpha + 1}^{\mathcal{M}}\left( \bar{b}\right) \), as desired. Lemma 2.4.14 For any infinite \( \mathcal{L} \)-structure \( \mathcal{M} \), there is an ordinal \( \alpha < {\left| M\right| }^{ + } \) such that if \( \bar{a},\bar{b} \in {M}^{l} \) and \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\alpha }\left( {\mathcal{M},\bar{b}}\right) \), then \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\beta }\left( {\mathcal{M},\bar{b}}\right) \) for all \( \beta \). We call the least such \( \alpha \) the Scott rank of \( \mathcal{M} \). Proof Let \( {\Gamma }_{\alpha } = \left\{ {\left( {\bar{a},\bar{b}}\right) : \bar{a},\bar{b} \in {M}^{l}}\right. \) for some \( l = 0,1,\ldots \) and \( \left( {\mathcal{M},\bar{a}}\right) {\not\sim }_{\alpha }\left( {\mathcal{M},\bar{b}}\right) \} \). Clearly, \( {\Gamma }_{\alpha } \subseteq {\Gamma }_{\beta } \) for \( \alpha < \beta \). Claim 1 If \( {\Gamma }_{\alpha } = {\Gamma }_{\alpha + 1} \), then \( {\Gamma }_{\alpha } = {\Gamma }_{\beta } \) for all \( \beta > \alpha \). We prove this by induction on \( \beta \). If \( \beta \) is a limit ordinal and the claim holds for all \( \gamma < \beta \), then it also holds for \( \beta \). Suppose that the claim is true for \( \beta > \alpha \) and we want to show that it holds for \( \beta + 1 \). Suppose that \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\beta }\left( {\mathcal{M},\bar{b}}\right) \) and \( c \in M \). Because \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\alpha + 1}\left( {\mathcal{M},\bar{b}}\right) \), there is \( d \in M \) such that \( \left( {\mathcal{M},\bar{a}, c}\right) { \sim }_{\alpha }\left( {\mathcal{M},\bar{b}, d}\right) \).
By our inductive assumption, \( \left( {\mathcal{M},\bar{a}, c}\right) { \sim }_{\beta }\left( {\mathcal{M},\bar{b}, d}\right) \). Similarly, if \( d \in M \), then there is \( c \in M \) such that \( \left( {\mathcal{M},\bar{a}, c}\right) { \sim }_{\beta }\left( {\mathcal{M},\bar{b}, d}\right) \). Thus, \( \left( {\mathcal{M},\bar{a}}\right) { \sim }_{\beta + 1}\left( {\mathcal{M},\bar{b}}\right) \) as desired. Claim 2 There is an ordinal \( \alpha < {\left| M\right| }^{ + } \) such that \( {\Gamma }_{\alpha } = {\Gamma }_{\alpha + 1} \). Suppose not. Then, for each \( \alpha < {\left| \mathcal{M}\right| }^{ + } \), choose \( \left( {{\bar{a}}_{\alpha },{\bar{b}}_{\alpha }}\right) \in {\Gamma }_{\alpha + 1} \smallsetminus {\Gamma }_{\alpha } \). Because \( {\Gamma }_{\alpha } \subseteq {\Gamma }_{\beta } \) for \( \alpha < \beta \), the function \( \alpha \mapsto \left( {{\bar{a}}_{\alpha },{\bar{b}}_{\alpha }}\right) \) is one-to-one. Because there are only \( \left| M\right| \) finite sequences from \( M \), this is impossible. We conclude this section with Scott's Isomorphism Theorem: every countable \( \mathcal{L} \)-structure is described up to isomorphism by a single \( {\mathcal{L}}_{{\omega }_{1},\omega } \)-sentence. Let \( \mathcal{M} \) be an infinite \( \mathcal{L} \)-structure of cardinality \( \kappa \), and let \( \alpha \) be the Scott rank of \( \mathcal{M} \). Let \( {\Phi }^{\mathcal{M}} \) be the sentence \[ {\phi }_{\varnothing ,\alpha }^{\mathcal{M}} \land \mathop{\bigwedge }\limits_{{l = 0}}^{\infty }\mathop{\bigwedge }\limits_{{\bar{a} \in {M}^{l}}}\forall \bar{v}\left( {{\phi }_{\bar{a},\alpha }^{\mathcal{M}}\left( \bar{v}\right) \rightarrow {\phi }_{\bar{a},\alpha + 1}^{\mathcal{M}}\left( \bar{v}\right) }\right) .
\] Because all of the conjunctions and disjunctions in \( {\phi }_{\bar{a},\beta }^{\mathcal{M}} \) are of size \( \kappa \), \( {\phi }_{\bar{a},\beta }^{\mathcal{M}} \in {\mathcal{L}}_{{\kappa }^{ + },\omega } \) for all ordinals \( \beta < {\kappa }^{ + } \). Thus \( {\Phi }^{\mathcal{M}} \) is an \( {\mathcal{L}}_{{\kappa }^{ + },\omega } \)-sentence. We call \( {\Phi }^{\mathcal{M}} \) the Scott sentence of \( \mathcal{M} \). If \( \mathcal{M} \) is countable, then \( {\Phi }^{\mathcal{M}} \in {\mathcal{L}}_{{\omega }_{1},\omega } \). Theorem 2.4.15 (Scott's Isomorphism Theorem) Let \( \mathcal{M} \) be a countable \( \mathcal{L} \)-structure, and let \( {\Phi }^{\mathcal{M}} \in {\mathcal{L}}_{{\omega }_{1},\omega } \) be the Scott sentence of \( \mathcal{M} \). Then, for every countable \( \mathcal{L} \)-structure \( \mathcal{N} \), \( \mathcal{N} \cong \mathcal{M} \) if and only if \( \mathcal{N} \vDash {\Phi }^{\mathcal{M}} \).
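For finite structures the relations \( { \sim }_{\alpha } \) of Lemma 2.4.13 can be computed by brute force directly from the back-and-forth definition. A sketch of mine (the encoding is illustrative: finite linear orders stand in for arbitrary \( \mathcal{L} \)-structures, and \( { \sim }_{0} \) is sameness of atomic type in \( = \) and \( < \)):

```python
# Back-and-forth equivalence (M, abar) ~_alpha (N, bbar) for finite
# linear orders, computed directly from the definition.

def same_atomic_type(abar, bbar):
    # ~_0: the tuples satisfy the same atomic formulas (=, <).
    k = len(abar)
    return all((abar[i] < abar[j]) == (bbar[i] < bbar[j]) and
               (abar[i] == abar[j]) == (bbar[i] == bbar[j])
               for i in range(k) for j in range(k))

def equiv(M, N, abar, bbar, alpha):
    if alpha == 0:
        return same_atomic_type(abar, bbar)
    # forth: every extension of abar is matched by some extension of bbar
    forth = all(any(equiv(M, N, abar + (c,), bbar + (d,), alpha - 1)
                    for d in N) for c in M)
    # back: and conversely
    back = all(any(equiv(M, N, abar + (c,), bbar + (d,), alpha - 1)
                   for c in M) for d in N)
    return forth and back

two, three = [0, 1], [0, 1, 2]
print(equiv(two, three, (), (), 1))  # True: indistinguishable at depth 1
print(equiv(two, three, (), (), 2))  # False: depth 2 sees the middle point
```

The two-element and three-element orders agree at level 1 but are separated at level 2: the middle point of the larger order has elements both below and above it, which no point of the smaller order can match.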
1172_(GTM8)Axiomatic Set Theory
Definition 1.10
Definition 1.10. A Boolean algebra \( \langle B, + , \cdot , - ,\mathbf{0},\mathbf{1}\rangle \) is complete iff \[ \left( {\forall A \subseteq B}\right) \left( {\exists b,{b}^{\prime } \in B}\right) \left\lbrack {b = \mathop{\sum }\limits_{{a \in A}}a \land {b}^{\prime } = \mathop{\prod }\limits_{{a \in A}}a}\right\rbrack . \] Example. If \( a \neq 0 \) then the Boolean algebra \( \langle \mathcal{P}\left( a\right) , \cup , \cap , - ,0, a\rangle \) is complete. Indeed if \( A \subseteq \mathcal{P}\left( a\right) \) and \( A \neq 0 \), then \[ \mathop{\sum }\limits_{{b \in A}}b = \bigcup \left( A\right) \land \mathop{\prod }\limits_{{b \in A}}b = \bigcap \left( A\right) . \] Theorem 1.11. If \( \langle B, + , \cdot , - ,0,1\rangle \) is a Boolean algebra and \( A \subseteq B \) then 1. \( - \mathop{\sum }\limits_{{a \in A}}a = \mathop{\prod }\limits_{{a \in A}}\left( {-a}\right) \) . 2. \( - \mathop{\prod }\limits_{{a \in A}}a = \mathop{\sum }\limits_{{a \in A}}\left( {-a}\right) \) . Proof. 1. Since \( \left( {\forall b \in A}\right) \left\lbrack {b \leq \mathop{\sum }\limits_{{a \in A}}a}\right\rbrack \) we have \( - \mathop{\sum }\limits_{{a \in A}}a \leq - b \) and hence \[ - \mathop{\sum }\limits_{{a \in A}}a \leq \mathop{\prod }\limits_{{a \in A}}\left( {-a}\right) \] Also \( \left( {\forall b \in A}\right) \left\lbrack {\mathop{\prod }\limits_{{a \in A}}\left( {-a}\right) \leq - b}\right\rbrack \) . Therefore \( b \leq - \mathop{\prod }\limits_{{a \in A}}\left( {-a}\right) \), hence \[ \mathop{\sum }\limits_{{a \in A}}a \leq - \mathop{\prod }\limits_{{a \in A}}\left( {-a}\right) \] i.e., \[ \mathop{\prod }\limits_{{a \in A}}\left( {-a}\right) \leq - \mathop{\sum }\limits_{{a \in A}}a \] 2. Left to the reader. Theorem 1.12. If \( \langle B, + , \cdot , - ,0,1\rangle \) is a Boolean algebra, if \( b, c \in B \) , \( A \subseteq B \), and \[ b = \mathop{\sum }\limits_{{a \in A}}a \] then \[ {cb} = \mathop{\sum }\limits_{{a \in A}}{ca}. \] Proof. 
If \( a \in A \) then by Definition 1.9, \( a \leq b \) and hence \( {ca} \leq {cb} \). If for each \( a \in A \), \( {ca} \leq d \), then since \( a = \left( {-c + c}\right) a = \left( {-c}\right) a + {ca} \leq - c + d \), it follows from Definition 1.9 that \( b \leq - c + d \). Hence \( {cb} \leq d \) and again from Definition 1.9, \( \mathop{\sum }\limits_{{a \in A}}{ca} = {cb} \).

Remark. Having now reviewed the basic properties of Boolean algebras we turn to the problem of characterizing complete Boolean algebras. As a first step in this direction we will show that the collection of regular open sets of a topological space is the universe of a Boolean algebra that is almost a natural algebra.

Definition 1.13. The structure \( \langle X, T\rangle \) is a topological space iff \( X \neq 0 \) and 1. \( T \subseteq \mathcal{P}\left( X\right) \land 0 \in T \land X \in T \). 2. \( A \subseteq T \rightarrow \bigcup \left( A\right) \in T \). 3. \( \left( {\forall N,{N}^{\prime } \in T}\right) \left\lbrack {N \cap {N}^{\prime } \in T}\right\rbrack \). \( T \) is a topology on \( X \) iff \( \langle X, T\rangle \) is a topological space. If \( a \in X \) and \( N \in T \) then \( N \) is a neighborhood of \( a \) iff \( a \in N \). If \( N \) is a neighborhood of \( a \) we write \( N\left( a\right) \).

Theorem 1.14. \( \mathcal{P}\left( X\right) \) is a topology on \( X \). Proof. Left to the reader.

Definition 1.15. \( T \) is the discrete topology on \( X \) iff \( T = \mathcal{P}\left( X\right) \).

Definition 1.16. If \( T \) is a topology on \( X \) and \( A \subseteq X \) then 1. \( {A}^{0} \triangleq \{ x \in A \mid \left( {\exists N\left( x\right) }\right) \left\lbrack {N\left( x\right) \subseteq A}\right\rbrack \} \). 2. \( {A}^{ - } \triangleq \{ x \in X \mid \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap A \neq 0}\right\rbrack \} \).

Theorem 1.17. If \( T \) is a topology on \( X \) and \( A \subseteq X \) then \( {A}^{0} \in T \).
Proof. If \( B = \{ N \in T \mid N \subseteq A\} \) then \( B \subseteq T \) . Furthermore \[ x \in {A}^{0} \leftrightarrow \exists N\left( x\right) \subseteq A \] \[ \leftrightarrow \exists N\left( x\right) \in B \] \[ \leftrightarrow x \in \cup \left( B\right) \text{.} \] Then \( {A}^{0} = \bigcup \left( B\right) \in T \) . Definition 1.18. \( {T}^{\prime } \) is a base for the topology \( T \) on \( X \) iff 1. \( {T}^{\prime } \subseteq T \) . 2. \( \left( {\forall A \subseteq X}\right) \left\lbrack {A = {A}^{0} \rightarrow \left( {\exists B \subseteq {T}^{\prime }}\right) \left\lbrack {A = \bigcup \left( B\right) }\right\rbrack }\right\rbrack \) . Theorem 1.19. If \( X \neq 0 \), if \( {T}^{\prime } \) is a collection of subsets of \( X \) with the properties 1. \( \left( {\forall a \in X}\right) \left( {\exists A \in {T}^{\prime }}\right) \left\lbrack {a \in A}\right\rbrack \) . 2. \( \left( {\forall a \in X}\right) \left( {\forall {A}_{1},{A}_{2} \in {T}^{\prime }}\right) \left\lbrack {a \in {A}_{1} \cap {A}_{2} \rightarrow }\right. \) \( \left( {\exists {A}_{3} \in {T}^{\prime }}\right) \left\lbrack {a \in {A}_{3} \land {A}_{3} \subseteq {A}_{1} \cap {A}_{2}}\right\rbrack \rbrack . \) Then \( {T}^{\prime } \) is a base for a topology on \( X \) . Proof. If \( T = \left\{ {B \subseteq X \mid \left( {\exists C \subseteq {T}^{\prime }}\right) \left\lbrack {B = \bigcup \left( C\right) }\right\rbrack }\right\} \) then \( 0 = \bigcup \left( 0\right) \in T \) and from property \( 1, X = \bigcup \left( {T}^{\prime }\right) \in T \) . This establishes property 1 of Definition 1.13. To prove 2 of Definition 1.13 we wish to show that \( \bigcup \left( S\right) \in T \) whenever \( S \subseteq T \) . 
From the definition of \( T \) it is clear that if \( S \subseteq T \) then \( \forall B \in S,\exists C \subseteq {T}^{\prime } \) \[ B = \bigcup \left( C\right) \] If \[ {C}_{B} = \left\{ {A \in {T}^{\prime } \mid A \subseteq B}\right\} \] then \[ B = \bigcup \left( {C}_{B}\right) \] and \[ \mathop{\bigcup }\limits_{{B \in S}}B = \mathop{\bigcup }\limits_{{B \in S}} \cup \left( {C}_{B}\right) \] \[ = \bigcup \left( {\mathop{\bigcup }\limits_{{B \in S}}{C}_{B}}\right) \] Since \( \mathop{\bigcup }\limits_{{B \in S}}{C}_{B} \subseteq {T}^{\prime },\bigcup \left( S\right) \in T \) . If \( {B}_{1},{B}_{2} \in T \) then \( \exists {C}_{1},{C}_{2} \subseteq {T}^{\prime } \) \[ {B}_{1} = \bigcup \left( {C}_{1}\right) \land {B}_{2} = \bigcup \left( {C}_{2}\right) \] Therefore \[ {B}_{1} \cap {B}_{2} = \left( {\mathop{\bigcup }\limits_{{{A}_{1} \in {C}_{1}}}{A}_{1}}\right) \cap \left( {\mathop{\bigcup }\limits_{{{A}_{2} \in {C}_{2}}}{A}_{2}}\right) \] \[ = \mathop{\bigcup }\limits_{\substack{{{A}_{1} \in {C}_{1}} \\ {{A}_{2} \in {C}_{2}} }}\left( {{A}_{1} \cap {A}_{2}}\right) \] \[ = \mathop{\bigcup }\limits_{\substack{{{A}_{1} \in {C}_{1}} \\ {{A}_{2} \in {C}_{2}} \\ {{A}_{3} \subseteq {A}_{1} \cap {A}_{2}} }}{A}_{3} \] (By 2). Then \( {B}_{1} \cap {B}_{2} \in T \) ; hence \( T \) is a topology on \( X \) . Clearly \( {T}^{\prime } \) is a base for \( T \) . Definition 1.20. If \( T \) is a topology on \( X \) and \( A \subseteq X \) then 1. \( A \) is open iff \( A = {A}^{0} \) . 2. \( A \) is regular open iff \( A = {A}^{-0} \) . 3. \( A \) is closed iff \( A = {A}^{ - } \) . 4. \( A \) is clopen iff \( A \) is both open and closed. 5. \( A \) is dense in \( X \) iff \( {A}^{ - } = X \) . Remark. From Theorem 1.17 we see that if \( T \) is a topology on \( X \) then \( T \) is the collection of open sets in that topology. A base for a topology is simply a collection of open sets from which all other open sets can be generated by unions. 
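Theorem 1.19 is constructive when \( X \) is finite: the topology generated by a base is exactly the family of all unions of subfamilies of the base. The following Python sketch (the three-point space and the base are hypothetical illustrations, not taken from the text) builds \( T \) this way and checks the closure properties of Definition 1.13:

```python
from itertools import combinations

def topology_from_base(base):
    """All unions of subfamilies of the base (Theorem 1.19); for a finite
    base this enumeration is exhaustive.  The empty subfamily yields 0."""
    base = [frozenset(B) for B in base]
    opens = set()
    for r in range(len(base) + 1):
        for fam in combinations(base, r):
            opens.add(frozenset().union(*fam))
    return opens

# Hypothetical example: X = {0,1,2} with base {{0}, {1,2}, X}.
# The base covers X and satisfies the refinement condition 2 of Theorem 1.19.
X = frozenset({0, 1, 2})
T = topology_from_base([{0}, {1, 2}, X])

assert frozenset() in T and X in T                           # Definition 1.13, property 1
assert all(A | B in T and A & B in T for A in T for B in T)  # properties 2 and 3
```

Here \( T = \{ 0,\{ 0\} ,\{ 1,2\} , X\} \): the base elements themselves, their unions, and the empty union.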
For the set of real numbers \( R \) the intervals \( \left( {a, b}\right) \triangleq \{ x \in R \mid a < x < b\} \) form a base for what is called the natural topology on \( R \) . In this topology \( \left( {0,1}\right) \), and indeed every interval \( \left( {a, b}\right) \), is not only open but regular open. \( \left\lbrack {a, b}\right\rbrack \triangleq \{ x \in R \mid a \leq x \leq b\} = {\left( a, b\right) }^{ - } \) . Thus for example \( \left\lbrack {1,2}\right\rbrack \) is closed. Furthermore \( \left( {0,1}\right) \cup \left( {1,2}\right) \) is open but not regular open. The set of all rationals is dense in \( R \) . In this topology there are exactly two clopen sets 0 and \( R \) . Theorem 1.21. 1. In any topology on \( X \) both 0 and \( X \) are clopen. 2. In the discrete topology on \( X \) every set is clopen and the collection of singleton sets is a base. Proof. Left to the reader. Remark. The next few theorems deal with properties that are true in every topological space \( \langle X, T\rangle \) . In discussing properties that depend upon \( X \) but are independent of the topology \( T \), it is conventional to suppress reference to \( T \) and to speak simply of a topological space \( X \) . Hereafter we will use this convention. Theorem 1.22. If \( A \subseteq X \) and if \( B \subseteq X \) then 1. \( {A}^{0} \subseteq A \subseteq {A}^{ - } \) . 2. \( {A}^{00} = {A}^{0} \land {A}^{- - } = {A}^{ - } \) . 3. \( A \subseteq B \rightarrow {A}^{0} \subseteq {B}^{0} \land {A}^{ - } \subseteq {B}^{ - } \) . 4. \( {\left( X - A\right) }^{ - } = X - {A}^{0} \land {\left( X - A\right) }^{0} = X - {A}^{ - } \) . Proof. 1. \( x \in {A}^{0} \rightarrow \exists N\left( x\right) \subseteq A \) \[ \rightarrow x \in A \] \[ x \in A \rightarrow \left( {\forall N\left( x\right) }\right) \left\lbrack {N\left( x\right) \cap A \neq 0}\right\rbrack \] \[ \rightarrow x \in {A}^{ - }\text{.} \] 2. 
\( x \in {A}^{0} \rightarrow \exists N\left( x\right) \subseteq A \)
\[ \rightarrow \left( {\exists N\left( x\right) }\right) \left\lbrack {x \in \left( {N\left( x\right) \cap {A}^{0}}\right) \land \left( {N\left( x\right) \cap {A}^{0}}\right) \in T \land \left( {N\left( x\right) \cap {A}^{0}}\right) \subseteq {A}^{0}}\right\rbrack \]
\[ \rightarrow x \in {A}^{00}. \]
Hence \( {A}^{0} \subseteq {A}^{00} \), and since \( {A}^{00} \subseteq {A}^{0} \) is immediate from 1, we conclude \( {A}^{00} = {A}^{0} \). The remaining parts are left to the reader.
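When \( X \) is finite, Definition 1.16 and Theorem 1.22 can be checked by direct computation. The sketch below (the three-point space and its topology are made up for illustration) computes the interior as the union of open subsets, as in the proof of Theorem 1.17, obtains the closure from the duality of Theorem 1.22, part 4, and tests regular openness per Definition 1.20:

```python
def interior(A, T):
    # A0: the union of all open sets contained in A (cf. Theorem 1.17)
    return frozenset().union(*(N for N in T if N <= A))

def closure(A, X, T):
    # A- = X - (X - A)0, the duality of Theorem 1.22, part 4
    return X - interior(X - A, T)

def regular_open(A, X, T):
    # Definition 1.20.2: A is regular open iff A = A-0
    return A == interior(closure(A, X, T), T)

# Hypothetical three-point space with topology T = {0, {0}, {1,2}, X}
X = frozenset({0, 1, 2})
T = [frozenset(), frozenset({0}), frozenset({1, 2}), X]

A = frozenset({0, 1})
print(interior(A, T))    # frozenset({0}): the largest open subset of A
print(closure(A, X, T))  # frozenset({0, 1, 2}): A is dense in this space
assert regular_open(frozenset({0}), X, T)  # {0} equals the interior of its closure
assert not regular_open(A, X, T)           # A is not even open here
```

In this space \( \{ 0\} \) and \( \{ 1,2\} \) are clopen, mirroring the remark that in a given topology the clopen sets may be scarce or plentiful.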
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 6.45
Definition 6.45 ([813, Definition 3.2]). A finite family \( {\left( {S}_{i}\right) }_{i \in I} \) of closed subsets of a normed space \( X \) with \( I \mathrel{\text{:=}} {\mathbb{N}}_{k} \) is said to be synergetic at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if whenever \( \left( {x}_{n, i}\right) \rightarrow \bar{x} \) and \( \left( {x}_{n, i}^{ * }\right) \overset{ * }{ \rightarrow }0 \) with \( {x}_{n, i} \in {S}_{i} \), \( {x}_{n, i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) for all \( \left( {n, i}\right) \in \mathbb{N} \times I \) and \( \left( {{x}_{n,1}^{ * } + \cdots + {x}_{n, k}^{ * }}\right) \rightarrow 0 \), one has \( \left( {x}_{n, i}^{ * }\right) \rightarrow 0 \) for all \( i \in I \).

Two subsets are synergetic at some point \( \bar{z} \) of their intersection whenever one of them is normally compact at \( \bar{z} \). However, it may happen that they are synergetic at \( \bar{z} \) while none of them is normally compact at \( \bar{z} \). This happens for \( A \times B \) and \( C \times D \) with \( \bar{z} \mathrel{\text{:=}} \left( {\bar{x},\bar{y}}\right) \), \( A \) (resp. \( D \) ) being normally compact at \( \bar{x} \) (resp. \( \bar{y} \) ) while \( B \) and \( C \) are arbitrary (for instance singletons in infinite-dimensional spaces).

The preceding notion can be related to alliedness with the help of the following normal qualification condition (NQC):
\[ {x}_{i}^{ * } \in {N}_{L}\left( {{S}_{i},\bar{x}}\right) ,\;{x}_{1}^{ * } + \cdots + {x}_{k}^{ * } = 0 \Rightarrow {x}_{1}^{ * } = \cdots = {x}_{k}^{ * } = 0. \] (6.15)

Proposition 6.46. A finite family \( {\left( {S}_{i}\right) }_{i \in I}\left( {I \mathrel{\text{:=}} {\mathbb{N}}_{k}}\right) \) of closed subsets of an Asplund space \( X \) is allied at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if and only if it is synergetic at \( \bar{x} \) and the normal qualification condition (6.15) holds.
In particular, if \( X \) is finite-dimensional, (6.15) implies alliedness and (6.13), (6.14).

Proof. The necessity condition ("only if" assertion) is obvious (see (6.11)). Conversely, suppose \( {\left( {S}_{i}\right) }_{i \in I} \) is synergetic and (NQC) holds. Let \( {x}_{n, i} \in {S}_{i} \) and let \( {x}_{n, i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) for \( \left( {n, i}\right) \in \mathbb{N} \times I \) be such that \( {\left( {x}_{n, i}\right) }_{n} \rightarrow \bar{x} \) and \( {\left( \begin{Vmatrix}{x}_{n,1}^{ * } + \cdots + {x}_{n, k}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0 \). We may assume that \( {r}_{n} \mathrel{\text{:=}} \max \left( {\begin{Vmatrix}{x}_{n,1}^{ * }\end{Vmatrix},\ldots ,\begin{Vmatrix}{x}_{n, k}^{ * }\end{Vmatrix}}\right) \) is positive for all \( n \). Let \( {w}_{n, i}^{ * } \mathrel{\text{:=}} {x}_{n, i}^{ * }/{r}_{n} \). Let \( r \) be a limit point of \( \left( {r}_{n}\right) \) in \( {\overline{\mathbb{R}}}_{ + } \mathrel{\text{:=}} \left\lbrack {0, + \infty }\right\rbrack \). Taking subsequences, we may assume that \( \left( {r}_{n}\right) \) converges to \( r \) and that \( {\left( {w}_{n, i}^{ * }\right) }_{n} \) weak\( {}^{ * } \) converges to some \( {w}_{i}^{ * } \in {B}_{{X}^{ * }} \) for all \( i \in I \). Then \( {w}_{i}^{ * } \in {N}_{L}\left( {{S}_{i},\bar{x}}\right) \), and if \( r \neq 0 \), one has \( {w}_{1}^{ * } + \cdots + {w}_{k}^{ * } = 0 \). The (NQC) condition implies that \( {w}_{i}^{ * } = 0 \) for all \( i \), contradicting the synergy of the family \( {\left( {S}_{i}\right) }_{i \in I} \) together with the fact that there is some \( j \in I \) such that \( \begin{Vmatrix}{w}_{n, j}^{ * }\end{Vmatrix} = 1 \) for infinitely many \( n \in \mathbb{N} \). Thus \( r = 0 \). Since \( r \) is an arbitrary limit point of \( \left( {r}_{n}\right) \), one gets \( \left( {r}_{n}\right) \rightarrow 0 \).

## Exercises

1.
Check that the inclusion \( {N}_{L}\left( {F \cup G,\bar{x}}\right) \subset {N}_{L}\left( {F,\bar{x}}\right) \cap {N}_{L}\left( {G,\bar{x}}\right) \) for \( F, G \subset X,\bar{x} \in \) \( F \cap G \) is not satisfied for \( X \mathrel{\text{:=}} {\mathbb{R}}^{2}, F \mathrel{\text{:=}} \mathbb{R} \times \{ 0\}, G \mathrel{\text{:=}} \{ 0\} \times \mathbb{R},\bar{x} \mathrel{\text{:=}} \left( {0,0}\right) \) . 2. A family \( {\left( {S}_{i}\right) }_{i \in I} \) of closed subsets of a normed space \( X \), with \( I \mathrel{\text{:=}} {\mathbb{N}}_{k} \), is said to satisfy the limiting qualification condition (LQC) at \( \bar{x} \in S \) if whenever \( \left( {x}_{n, i}\right) \rightarrow \bar{x} \) , \( \left( {x}_{n, i}^{ * }\right) \overset{ * }{ \rightarrow }{x}_{i}^{ * } \) with \( {x}_{n, i} \in {S}_{i},{x}_{n, i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) for all \( \left( {n, i}\right) \in \mathbb{N} \times I \) and \( \left( {{x}_{n,1}^{ * } + \cdots + }\right. \) \( \left. {x}_{n, k}^{ * }\right) \rightarrow 0 \), one has \( {x}_{i}^{ * } = 0 \) for all \( i \in I \) . Note that this condition is a consequence of the normal qualification condition (NQC), hence is also a consequence of alliedness. Show that \( {\left( {S}_{i}\right) }_{i \in I} \) is allied at \( \bar{x} \) if and only if it is synergetic at \( \bar{x} \) and (LQC) holds. 3. For \( \varepsilon > 0 \), the \( \varepsilon \) -plastering of a cone \( P \) of a normed space \( Z \) is the set \[ {P}_{\varepsilon } \mathrel{\text{:=}} \{ z \in Z : d\left( {z, P}\right) < \varepsilon \parallel z\parallel \} \cup \{ 0\} . 
\] Two cones \( P, Q \) of \( Z \) are said to be apart if \( \operatorname{gap}\left( {P \cap {S}_{Z}, Q \cap {S}_{Z}}\right) > 0 \), where \( {S}_{Z} \) is the unit sphere in \( Z \) and for two subsets \( C, D \) of \( Z \), the gap between \( C \) and \( D \) is defined by \( \operatorname{gap}\left( {C, D}\right) \mathrel{\text{:=}} \inf \{ \parallel x - y\parallel : x \in C, y \in D\} \) . Show that \( P, Q \) are apart if and only if for some \( \varepsilon > 0 \) one has \( {P}_{\varepsilon } \cap {Q}_{\varepsilon } = \{ 0\} \), if and only if for some \( \alpha > 0 \) one has \( {P}_{\alpha } \cap Q = \{ 0\} \) . 4. Show that a pair \( \left( {F, G}\right) \) of closed subsets of a normed space \( Z \) is allied at \( \bar{x} \in F \cap \) \( G \) if and only if it satisfies the following local uniform alliedness (LUA) property: there exists \( \varepsilon > 0 \) such that for all \( y \in F \cap B\left( {\bar{x},\varepsilon }\right), z \in G \cap B\left( {\bar{x},\varepsilon }\right) \) the cones \( {N}_{F}\left( {F, y}\right) \) and \( {N}_{F}\left( {G, z}\right) \) are apart. 5. Show that the (LUA) property at \( \bar{x} \in F \cap G \) is equivalent to the fuzzy qualification condition (FQC) at \( \bar{x} \) : there exists \( \gamma \in \left( {0,1}\right) \) such that for all \( y \in F \cap B\left( {\bar{x},\gamma }\right), z \in \) \( G \cap B\left( {\bar{x},\gamma }\right) \) one has \[ \left( {{N}_{F}\left( {F, y}\right) + \gamma {B}_{{Z}^{ * }}}\right) \cap \left( {-{N}_{F}\left( {G, z}\right) + \gamma {B}_{{Z}^{ * }}}\right) \cap {B}_{{Z}^{ * }} \subset \left( {1 - \gamma }\right) {B}_{{Z}^{ * }}. \] (6.16) 6. Let \( F \mathrel{\text{:=}} {\mathbb{R}}_{ + } \times {\mathbb{R}}_{ + }, G \mathrel{\text{:=}} {\mathbb{R}}_{ + } \times {\mathbb{R}}_{ - } \) in \( X \mathrel{\text{:=}} {\mathbb{R}}^{2},\bar{x} = \left( {0,0}\right) \) . 
Then \( \{ 0\} \times {\mathbb{R}}_{ - } \subset \) \( {N}_{L}\left( {F,\bar{x}}\right) \cap \left( {-{N}_{L}\left( {G,\bar{x}}\right) }\right) \), so that conditions (6.15),(6.16) are not satisfied, whereas for all \( x \in X \smallsetminus \left( {F \cap G}\right), y, z \) close enough to \( \bar{x} \) and \( {y}^{ * } \in {\partial }_{F}{d}_{F}\left( y\right) ,{z}^{ * } \in {\partial }_{F}{d}_{G}\left( z\right) \) one has \( \begin{Vmatrix}{{y}^{ * } + {z}^{ * }}\end{Vmatrix} \geq 1 \) and relation (6.11) holds. 7. Check with an example that the metric estimate (6.13) of the linear coherence condition is a more general property than alliedness or synergy. [Hint: Take an infinite-dimensional Banach space \( W \), endow \( X \mathrel{\text{:=}} W \times \mathbb{R} \) with the sum norm, consider \( F \mathrel{\text{:=}} \{ 0\} \times {\mathbb{R}}_{ - }, G \mathrel{\text{:=}} \{ 0\} \times {\mathbb{R}}_{ + } \), and show that \( d\left( {\cdot, F \cap G}\right) \leq d\left( {\cdot, F}\right) + \) \( d\left( {\cdot, G}\right) \) but that \( F, G \) are not allied at \( \left( {0,0}\right) \) and that conditions (6.15),(6.16) are not satisfied.] ## 6.3.2 Coderivative to an Intersection of Multimaps Now let us pass to multimaps. Since the graph of a multimap is a subset of a product space, the preceding concepts can be adapted to such a product structure in order to get refined conditions. For the sake of simplicity of notation, we limit our study to families of two members and we identify a multimap with its graph. Definition 6.47. Two multimaps \( F, G : X \rightrightarrows Y \) are said to be range-allied (resp. 
source-allied) at \( \bar{z} \in F \cap G \) if whenever \( \left( {w}_{n}\right) \rightarrow \bar{z} \) in \( F \), \( \left( {z}_{n}\right) \rightarrow \bar{z} \) in \( G \), and \( \left( {w}_{n}^{ * }\right) ,\left( {z}_{n}^{ * }\right) \) in \( {X}^{ * } \times {Y}^{ * } \) satisfy \( {w}_{n}^{ * } \mathrel{\text{:=}} \left( {{u}_{n}^{ * },{v}_{n}^{ * }}\right) \in {N}_{F}\left( {F,{w}_{n}}\right) \), \( {z}_{n}^{ * } = \left( {{x}_{n}^{ * },{y}_{n}^{ * }}\right) \in {N}_{F}\left( {G,{z}_{n}}\right) \) for all \( n \in \mathbb{N} \) and \( \left( {{w}_{n}^{ * } + {z}_{n}^{ * }}\right) \rightarrow 0 \), one has \( \left( {v}_{n}^{ * }\right) \rightarrow 0 \) (resp. \( \left( {u}_{n}^{ * }\right) \rightarrow 0 \) ). Clearly, if \( F \mathrel{\text{:=}} B \times C \), \( G \mathrel{\text{:=}} D \times E \), where \( C \) and \( E \) are allied at \( \bar{y} \in C \cap E \), then \( F \) and \( G \) are range-allied at \( \bar{z} \mathrel{\text{:=}} \left( {\bar{x},\bar{y}}\right) \) for any \( \bar{x} \in B \cap D \).
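The failure of the normal qualification condition (6.15) in Exercise 6 can be checked concretely. The Python sketch below is a hedged illustration: it relies on the standard fact (assumed here, not stated in the text) that the limiting normal cone to a closed convex cone in \( {\mathbb{R}}^{2} \) at the origin is its polar cone, and the helper `in_polar` is hypothetical.

```python
def in_polar(v, generators, tol=1e-12):
    """Test whether v lies in the polar cone K° of K = cone(generators) in R^2,
    i.e. <v, g> <= 0 for every generator g (hypothetical helper)."""
    return all(v[0] * g[0] + v[1] * g[1] <= tol for g in generators)

# Exercise 6 data: F := R+ x R+, G := R+ x R-, with base point (0,0).
F_gen = [(1.0, 0.0), (0.0, 1.0)]
G_gen = [(1.0, 0.0), (0.0, -1.0)]

# A nonzero element of {0} x R- and its opposite:
x_star = (0.0, -1.0)
y_star = (0.0, 1.0)

assert in_polar(x_star, F_gen)  # x* lies in N_L(F, (0,0)) = F°
assert in_polar(y_star, G_gen)  # y* lies in N_L(G, (0,0)) = G°
# x* + y* = 0 although x* != 0, so condition (6.15) fails for this pair.
assert (x_star[0] + y_star[0], x_star[1] + y_star[1]) == (0.0, 0.0)
```

This exhibits the inclusion \( \{ 0\} \times {\mathbb{R}}_{ - } \subset {N}_{L}\left( {F,\bar{x}}\right) \cap \left( {-{N}_{L}\left( {G,\bar{x}}\right) }\right) \) claimed in Exercise 6: nonzero normals cancel, so (6.15) and (6.16) fail even though, as the exercise notes, relation (6.11) still holds.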