## 1074_(GTM232)An Introduction to Number Theory: Definition 4.12
Definition 4.12. A nonzero ideal \( I \neq R \) in a commutative ring \( R \) is called maximal if for any ideal \( J \), \( J \mid I \) implies that \( J = I \) or \( J = R \). An ideal \( P \) is prime if \( P \mid {IJ} \) implies that \( P \mid I \) or \( P \mid J \).

Exercise 4.12. In a commutative ring \( R \), let \( M \) and \( P \) denote ideals. (a) Show that \( M \) is maximal if and only if the quotient ring \( R/M \) is a field. (b) Show that \( P \) is prime if and only if \( R/P \) is an integral domain (that is, in \( R/P \) the equation \( {ab} = 0 \) forces either \( a \) or \( b \) to be 0). (c) Deduce that every maximal ideal is prime.

Theorem 4.13. [Fundamental Theorem of Arithmetic for Ideals] Any nonzero proper ideal in \( {O}_{\mathbb{K}} \) can be written as a product of prime ideals, and that factorization is unique up to order.

Proof. If \( I \) is not maximal, it can be written as a product of two nontrivial ideals. Comparing norms shows that these factors must have norms strictly smaller than that of \( I \). Repeating the argument, the sequence of norms is strictly decreasing, so the process must terminate, resulting in a finite factorization of \( I \) into maximal ideals. By Exercise 4.12, every maximal ideal is prime, so all that remains is to demonstrate that the resulting factorization is unique. This uniqueness follows from Corollary 4.9, which allows cancellation of nonzero ideals common to two products.

## 4.4 The Ideal Class Group

In this section, we are going to see how the nineteenth-century mathematicians interpreted Exercise 3.32 on p. 81 in terms of quadratic fields. The major result we will present is that ideals in \( {O}_{\mathbb{K}} \), for a quadratic field \( \mathbb{K} \), can be described using a finite list of representatives \( {I}_{1},\ldots ,{I}_{h} \): any nontrivial ideal \( I \) can be written \( {I}_{i}P \), where \( 1 \leq i \leq h \) and \( P \) is a principal ideal.
Thus \( h \), known as the class number, measures the extent to which \( {O}_{\mathbb{K}} \) fails to be a principal ideal domain. This result was later extended to arbitrary algebraic number fields and proved influential in the way number theory developed in the twentieth century.

Given two ideals \( I \) and \( J \) in \( {O}_{\mathbb{K}} \), define a relation \( \sim \) by
\[ I \sim J\text{ if and only if }I = {\lambda J}\text{ for some }\lambda \in {\mathbb{K}}^{ * }. \]

Exercise 4.13. Show that \( \sim \) is an equivalence relation.

We are going to outline a proof of the following important theorem.

Theorem 4.14. There are only finitely many equivalence classes of ideals in \( {O}_{\mathbb{K}} \) under \( \sim \).

One class is easy to spot, namely the one consisting of all principal ideals. Of course, \( {O}_{\mathbb{K}} \) is a principal ideal domain if and only if there is only one class under the relation. One can define a multiplication on classes: If \( \left\lbrack I\right\rbrack \) denotes the class containing \( I \), then one can show that the multiplication defined by
\[ \left\lbrack I\right\rbrack \left\lbrack J\right\rbrack = \left\lbrack {IJ}\right\rbrack \] (4.2)
is independent of the representatives chosen.

Corollary 4.15. The set of classes under \( \sim \) forms a finite Abelian group.

The group in Corollary 4.15 is known as the ideal class group of \( \mathbb{K} \) (or just the class group).

Proof of Corollary 4.15. In the class group, associativity of multiplication is inherited from \( {O}_{\mathbb{K}} \). The element \( \left\lbrack {O}_{\mathbb{K}}\right\rbrack \) acts as the identity. Finally, given any nonzero ideal \( I \), the relation \( I{I}^{ * } = \left( {N\left( I\right) }\right) \) shows that the inverse of the class \( \left\lbrack I\right\rbrack \) is \( \left\lbrack {I}^{ * }\right\rbrack \).

Lemma 4.16.
Given a square-free integer \( d \neq 1 \), there is a constant \( {C}_{d} \), depending only on \( d \), such that for any nonzero ideal \( I \) of \( {O}_{\mathbb{K}} \), \( \mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \), there is a nonzero element \( \alpha \in I \) with \( \left| {N\left( \alpha \right) }\right| \leq {C}_{d}N\left( I\right) \).

Exercise 4.14. *Prove Lemma 4.16. The basic idea is a technique similar to that used in the proof of Theorem 3.21, showing that a lattice point must exist in a region constrained by various inequalities.

Since the original proof, considerable effort has gone into decreasing the constant \( {C}_{d} \) for practical applications. The best techniques use the geometry of numbers, a theory initiated by Minkowski.

Proof of Theorem 4.14. First show that every class contains an ideal whose norm is bounded by \( {C}_{d} \). Given a class \( \left\lbrack I\right\rbrack \), apply Lemma 4.16 with \( {I}^{ * } \) in place of \( I \), obtaining a nonzero \( \alpha \in {I}^{ * } \). Now \( \left( \alpha \right) \subseteq {I}^{ * } \), so we can write \( \left( \alpha \right) = {I}^{ * }J \) for some ideal \( J \). This gives a relation \( \left\lbrack {I}^{ * }\right\rbrack \left\lbrack J\right\rbrack = \left\lbrack \left( \alpha \right) \right\rbrack \) in the class group, which means that \( \left\lbrack J\right\rbrack \) is the inverse of \( \left\lbrack {I}^{ * }\right\rbrack \). However, we remarked earlier that \( \left\lbrack I\right\rbrack \) and \( \left\lbrack {I}^{ * }\right\rbrack \) are mutual inverses in the class group. Hence \( \left\lbrack I\right\rbrack = \left\lbrack J\right\rbrack \). Now
\[ \left| {N\left( \alpha \right) }\right| = N\left( \left( \alpha \right) \right) = N\left( {I}^{ * }\right) N\left( J\right). \]
Since the left-hand side is bounded by \( {C}_{d}N\left( {I}^{ * }\right) \), we can cancel \( N\left( {I}^{ * }\right) \) to obtain \( N\left( J\right) \leq {C}_{d} \).
Now the theorem follows easily: For any given integer \( k \geq 1 \), there are only finitely many ideals of norm \( k \); this is because any such ideal must be a product of prime ideals of norm \( p \) or \( {p}^{2} \), where \( p \) runs through the prime factors of \( k \). There are only finitely many such prime ideals, and hence there are only finitely many ideals of norm \( k \). Now apply this to the integers \( k \leq {C}_{d} \) to deduce that there are only finitely many ideals of norm bounded by \( {C}_{d} \). Since each class contains an ideal whose norm is thus bounded, by the first part of the proof, it follows that there are only finitely many classes.

Exercise 4.15. Investigate the relationship between quadratic forms and ideals in quadratic fields. In particular, show that Exercise 3.32 on p. 81 is equivalent to Theorem 4.14. (Hint: If \( I \) denotes an ideal with basis \( \{ \alpha ,\beta \} \), show that for \( x, y \in \mathbb{Z} \), \( N\left( {{x\alpha } + {y\beta }}\right) /N\left( I\right) \) is a (binary) integral quadratic form. How does a change of basis for \( I \) relate to the form? What effect does multiplying \( I \) by a principal ideal have on the form?)

## 4.4.1 Prime Ideals

To better understand prime ideals, we close with an exercise that links up the various trains of thought in this chapter and shows that ideal theory better explains the various phenomena encountered in Chapter 3.

Exercise 4.16. Factorize the ideal (6) into prime ideals in \( \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \), expressing each prime factor in the form \( \left( {a, b + c\sqrt{-5}}\right) \).

Exercise 4.17. Let \( {O}_{\mathbb{K}} \) denote the ring of algebraic integers in the quadratic field \( \mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \) for a square-free integer \( d \). (a) If \( P \) is a prime ideal in \( {O}_{\mathbb{K}} \), show that \( P \mid \left( p\right) \) for some integer prime \( p \in \mathbb{Z} \).
(b) Show that there are only three possibilities for the factorization of the ideal \( \left( p\right) \) in \( {O}_{\mathbb{K}} \): \( \left( p\right) = {P}_{1}{P}_{2} \), where \( {P}_{1} \) and \( {P}_{2} \) are distinct prime ideals in \( {O}_{\mathbb{K}} \) (\( p \) splits); \( \left( p\right) = P \), where \( P \) is a prime ideal in \( {O}_{\mathbb{K}} \) (\( p \) is inert); \( \left( p\right) = {P}^{2} \), where \( P \) is a prime ideal in \( {O}_{\mathbb{K}} \) (\( p \) is ramified). This should be compared with the possible primes in \( \mathbb{Z}\left\lbrack i\right\rbrack \) described in Theorem 2.8(3).

The following exercise gives a complete description of splitting types in terms of the Legendre symbol.

Exercise 4.18. Let \( {O}_{\mathbb{K}} \) denote the ring of algebraic integers in the quadratic field \( \mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \) for a square-free integer \( d \). Let \( D = d \) if \( d \equiv 1 \) modulo 4 and let \( D = {4d} \) otherwise. Show that an odd prime \( p \) is inert, ramified, or split according as the Legendre symbol \( \left( \frac{D}{p}\right) \) is \( -1 \), 0, or \( +1 \), respectively. What are the possibilities when \( p = 2 \)?

We should say something about the terminology. Splitting and inertia are fairly obvious, the latter signifying that the prime \( p \) remains prime in this bigger ring, just as primes \( p \equiv 3 \) modulo 4 remain prime in \( \mathbb{Z}\left\lbrack i\right\rbrack \). The term "ramify" means literally to branch, and we see here something of an overlap with the theory of functions. A function such as \( y = \sqrt{x} \) really consists of two possible branches. This notion was borrowed deliberately to name the phenomenon seen in number theory, where a prime in \( \mathbb{Z} \) becomes a power of a prime in a larger ring.

We end this chapter with a definition because it is going to appear again in Chapter 11.

Definition 4.17.
Let \( \mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \) denote a quadratic field, where \( d \) is a square-free integer. Define \( D \) by
\[ D = \left\{ \begin{array}{ll} d & \text{ if }d \equiv 1{\;\operatorname{modulo}\;4}\text{, and } \\ {4d} & \text{ otherwise. } \end{array}\right. \] (4.3)
Then \( D \) is called the discriminant of the quadratic field \( \mathbb{K} = \mathbb{Q}\left( \sqrt{d}\right) \).
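The correspondence with binary quadratic forms hinted at in Exercise 4.15 makes Theorem 4.14 concretely computable for imaginary quadratic fields: for a discriminant \( D < 0 \), the class number \( h \) equals the number of reduced primitive forms \( a{x}^{2} + {bxy} + c{y}^{2} \) with \( {b}^{2} - {4ac} = D \). A minimal counting sketch under that assumption (the function name is ours, not the book's):

```python
from math import gcd

def class_number(D):
    """Count reduced primitive binary quadratic forms a*x^2 + b*x*y + c*y^2
    of discriminant D = b^2 - 4ac < 0 (D must be 0 or 1 mod 4).
    Reduced means |b| <= a <= c, with b >= 0 whenever |b| = a or a = c."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    b = D % 2                    # b must have the same parity as D
    while 3 * b * b <= -D:       # reduction forces 3*b^2 <= |D|
        ac = (b * b - D) // 4    # the product a*c is then determined by b
        a = max(b, 1)
        while a * a <= ac:       # enforce b <= a <= c
            if ac % a == 0:
                c = ac // a
                if gcd(gcd(a, b), c) == 1:   # primitive form (a, b, c)
                    h += 1
                    if 0 < b < a < c:        # (a, -b, c) is also reduced
                        h += 1
            a += 1
        b += 2
    return h
```

For example, `class_number(-20)` returns 2 for \( \mathbb{K} = \mathbb{Q}\left( \sqrt{-5}\right) \) (the forms \( {x}^{2} + 5{y}^{2} \) and \( 2{x}^{2} + {2xy} + 3{y}^{2} \)), consistent with \( \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \) failing to be a principal ideal domain, while `class_number(-4)` returns 1 for \( \mathbb{Z}\left\lbrack i\right\rbrack \).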
## 1139_(GTM44)Elementary Algebraic Geometry: Definition 3.5
Definition 3.5. If \( R \) is the coordinate ring of an irreducible variety \( V \subset {\mathbb{C}}^{n} \), and if \( \mathfrak{p} = \mathrm{J}\left( W\right) \) is the prime ideal of an irreducible subvariety \( W \) of \( V \), then the local ring \( {R}_{\mathfrak{p}} \) is called the localization of \( V \) at \( W \) (or along \( W \)), or the local ring of \( V \) at \( W \); in this case \( {R}_{\mathfrak{p}} \) is also denoted by \( \mathfrak{o}\left( {W;V}\right) \).

Definition 3.6. Let \( V \subset {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) be irreducible, and let \( {K}_{V} \) be \( V \)'s function field, that is, the set of quotients of equal-degree forms in \( {x}_{1},\ldots ,{x}_{n + 1} \), where \( \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n + 1}}\right\rbrack = \mathbb{C}\left\lbrack {{X}_{1},\ldots ,{X}_{n + 1}}\right\rbrack /\mathrm{J}\left( V\right) \). If \( W \) is an irreducible subvariety of \( V \), then the set of all elements of \( {K}_{V} \) which can be written as \( p/q \), where \( p \) and \( q \) are forms in \( {x}_{1},\ldots ,{x}_{n + 1} \) of the same degree and where \( q \) is not identically zero on \( W \), forms a subring of \( {K}_{V} \); it is called the local ring of \( V \) at \( W \) and is denoted by \( \mathfrak{o}\left( {W;V}\right) \).

Remark 3.7.
If \( W \subset V \) are irreducible varieties in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \), and if \( R \) is the coordinate ring of any dehomogenization \( \mathrm{D}\left( V\right) \) of \( V \) (where \( W \) is not contained in the hyperplane at infinity), then \( \mathfrak{o}\left( {W;V}\right) \) is the localization \( {R}_{\mathfrak{p}} = \mathfrak{o}\left( {\mathrm{D}\left( W\right) ;\mathrm{D}\left( V\right) }\right) \) of \( R \) at \( \mathrm{D}\left( W\right) = \mathrm{V}\left( \mathfrak{p}\right) \); this follows from the fact that if we without loss of generality dehomogenize at \( {X}_{n + 1} \), then
\[ \frac{p\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) }{q\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) } = \frac{p\left( {{x}_{1}/{x}_{n + 1},\ldots ,1}\right) }{q\left( {{x}_{1}/{x}_{n + 1},\ldots ,1}\right) }. \]
The left-hand side is an element of \( \mathfrak{o}\left( {W;V}\right) \), while the right-hand side belongs to \( {R}_{\mathfrak{p}} \).

Many of the basic algebraic and geometric relations between \( R \) and \( {R}_{\mathfrak{p}} \) may be compactly expressed using a double sequence, as in Diagrams 2 and 3 of Chapter III. We explore this next. Again, for expository purposes we select a fixed variety \( V \subset {\mathbb{C}}_{{x}_{1},\ldots ,{x}_{n}} \) having \( R = \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) as coordinate ring, and we let \( W = \mathbf{V}\left( \mathfrak{p}\right) \) be an arbitrary, fixed irreducible subvariety of \( V \). Our sequence is given in Diagram 1.

Diagram 1.
In this diagram, \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) denotes the lattice \( \left( {\mathcal{I}\left( {R}_{\mathfrak{p}}\right) , \subset ,\cap , + }\right) \) of ideals of \( {R}_{\mathfrak{p}} \), and \( \mathcal{J}\left( {R}_{\mathfrak{p}}\right) \) denotes the lattice \( \left( {\mathcal{J}\left( {R}_{\mathfrak{p}}\right) , \subset ,\cap , + }\right) \) of closed ideals of \( {R}_{\mathfrak{p}} \). Closure in \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) is with respect to the radical of Definition 1.1 of Chapter III; by Lemma 5.7 of Chapter III, the radical of an ideal \( \mathfrak{a} \) in \( {R}_{\mathfrak{p}} \) will be seen to be the intersection of all prime ideals of \( {R}_{\mathfrak{p}} \) which contain \( \mathfrak{a} \), since \( {R}_{\mathfrak{p}} \) is Noetherian (Lemma 3.9). This radical is not in general the intersection of the \( \mathfrak{a} \)-containing maximal ideals of \( {R}_{\mathfrak{p}} \), since \( {R}_{\mathfrak{p}} \) has but one maximal ideal.

Continuing the explanation of symbols in Diagram 1, \( \mathcal{G}\left( {R}_{\mathfrak{p}}\right) \) denotes the lattice \( \left( {\mathcal{G}\left( {R}_{\mathfrak{p}}\right) , \subset ,\cap , \cup }\right) \) of all \( {V}_{W} \), where \( V \in \mathcal{I} \) and \( W \) is fixed, with \( \subset , \cap \), and \( \cup \) as in Definition 3.3. The letter \( \mathcal{G} \) reminds us that these ordered pairs \( {V}_{W} \) are identified with germs. (We remark that there exists an analogous sequence at the analytic level, where one uses germs instead of representatives, since there is not in general a canonical representative of each "analytic germ," as is the case with algebraic varieties, where there is a unique smallest algebraic variety representing a given "algebraic germ." One can even push certain aspects to the differential level.)
It is easily seen that \( \mathcal{G}\left( {R}_{\mathfrak{p}}\right) \) actually is a lattice, using Definition 3.3 together with the fact that \( \varnothing \) and the subvarieties of \( V \) containing \( \mathbf{V}\left( \mathfrak{p}\right) \) form a lattice.

As for the various maps, \( {\left( \;\right) }^{c} \) and \( {\left( \;\right) }^{e} \) are just contraction and extension of ideals. Since \( R \rightarrow {R}_{\mathfrak{p}} \) is an embedding, \( {\left( \;\right) }^{c} \) reduces to intersection with \( R \). In contrast to extension in Section III.10, we shall see that \( {\left( \;\right) }^{e} \) maps closed ideals in \( \mathcal{I}\left( R\right) \) to closed ideals in \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \). The map \( {\left( \;\right) }_{W} \) sends \( V \) into \( {V}_{W} \), and \( i \) assigns to each \( {V}_{W} \) the variety \( i\left( {V}_{W}\right) = {V}_{\left( W\right) } \). (Thus \( i \) simply removes from \( {V}_{W} \) reference to the "center" \( W \).) Finally, the bottom horizontal maps \( {i}^{ * } \) and \( \sqrt{} \) are the embedding and radical maps; \( {G}^{ * } \) and \( {J}^{ * } \) will be defined in terms of the other maps and will turn out to be mutually inverse lattice-reversing isomorphisms.

In establishing properties of these maps, extension and contraction between \( \mathcal{I}\left( R\right) \) and \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \) play a basic part; we look at them first.

\( {\left( \;\right) }^{e} : \mathcal{I}\left( R\right) \rightarrow \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \). This map is onto \( \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \); in particular, each ideal \( {\mathfrak{a}}^{ * } \subset {R}_{\mathfrak{p}} \) comes from the ideal \( {\mathfrak{a}}^{*c} \subset R \); that is, for each ideal \( {\mathfrak{a}}^{ * } \) of \( {R}_{\mathfrak{p}} \),
\[ {\mathfrak{a}}^{ * } = {\mathfrak{a}}^{*{ce}}. \] (8)
Proof.
That \( {\mathfrak{a}}^{*{ce}} \subset {\mathfrak{a}}^{ * } \) is obvious, since \( {a}^{ * } \in {\mathfrak{a}}^{*{ce}} \) implies that \( {a}^{ * } = a/m \) for some \( a \in {\mathfrak{a}}^{*c} \) and some \( m \in R \smallsetminus \mathfrak{p} \). To show \( {\mathfrak{a}}^{ * } \subset {\mathfrak{a}}^{*{ce}} \), let \( {a}^{ * } \in {\mathfrak{a}}^{ * } \). Then \( {a}^{ * } \in {R}_{\mathfrak{p}} \), which implies \( {a}^{ * } = a/m \) for some \( a \in R \) and \( m \in R \smallsetminus \mathfrak{p} \); also \( a = m{a}^{ * } \), so \( a \in {\mathfrak{a}}^{ * } \), which means \( a \in {\mathfrak{a}}^{ * } \cap R = {\mathfrak{a}}^{*c} \). Hence \( {\mathfrak{a}}^{ * } = a/m \in {\mathfrak{a}}^{*{ce}} \).

Next note that \( {\left( \;\right) }^{e} \) is not necessarily \( 1 : 1 \), since
\[ {\mathfrak{a}}^{e} = {R}_{\mathfrak{p}}\text{ for every ideal }\mathfrak{a} ⊄ \mathfrak{p}. \] (9)
(\( \mathfrak{a} ⊄ \mathfrak{p} \) implies that there is an \( m \in \mathfrak{a} \cap \left( {R \smallsetminus \mathfrak{p}}\right) \), hence \( m/m = 1 \in {\mathfrak{a}}^{e} \).) However,

(3.8) \( {\left( \;\right) }^{e} \) is \( 1 : 1 \) on the set of contracted ideals of \( \mathcal{I}\left( R\right) \).

For if \( \mathfrak{a} = {\mathfrak{a}}^{*c} \) and \( \mathfrak{b} = {\mathfrak{b}}^{*c} \), and if \( {\mathfrak{a}}^{e} = {\mathfrak{a}}^{*{ce}} = {\mathfrak{b}}^{e} = {\mathfrak{b}}^{*{ce}} \), then \( {\mathfrak{a}}^{ * } = {\mathfrak{b}}^{ * } \), so \( \mathfrak{a} = {\mathfrak{a}}^{*c} = \mathfrak{b} = {\mathfrak{b}}^{*c} \).

\( {\left( \;\right) }^{c} : \mathcal{I}\left( {R}_{\mathfrak{p}}\right) \rightarrow \mathcal{I}\left( R\right) \). This map is not necessarily onto, because \( {\mathfrak{a}}^{*c} \) is either \( R \) or is contained in \( \mathfrak{p} \). (If \( {\mathfrak{a}}^{*c} \) is not contained in \( \mathfrak{p} \), then \( {\mathfrak{a}}^{ * } = {\mathfrak{a}}^{*{ce}} = {R}_{\mathfrak{p}} \) by (9), whence \( {\mathfrak{a}}^{*c} = R \).)
Next note that \( {\left( \;\right) }^{c} \) is \( 1 : 1 \), for if \( {\mathfrak{a}}^{*c} = {\mathfrak{b}}^{*c} \), then \( {\mathfrak{a}}^{ * } = {\mathfrak{a}}^{*{ce}} = {\mathfrak{b}}^{*{ce}} = {\mathfrak{b}}^{ * } \).

In general \( \mathfrak{a} \neq {\mathfrak{a}}^{ec} \), but we always have
\[ \mathfrak{a} \subset {\mathfrak{a}}^{ec}. \] (10)
(Theorem 3.14 will supply geometric meaning to (10), and also to Theorem 3.10 below.) The following characterization of \( {\mathfrak{a}}^{ec} \) is useful:
\[ {\mathfrak{a}}^{ec} = \{ a \in R \mid {am} \in \mathfrak{a}\text{ for some }m \in R \smallsetminus \mathfrak{p}\}. \] (11)

Proof. \( \subset \): Each element of \( {\mathfrak{a}}^{e} \) is a sum of quotients of elements in \( \mathfrak{a} \) by elements in \( R \smallsetminus \mathfrak{p} \); obviously such a sum is itself such a quotient. Hence an element \( a \) is in \( {\mathfrak{a}}^{ec} \) iff it is in \( R \) and is of the form \( a = {a}^{\prime }/m \), where \( {a}^{\prime } \in \mathfrak{a} \) and \( m \in R \smallsetminus \mathfrak{p} \). Hence \( {am} = {a}^{\prime } \in \mathfrak{a} \), proving the inclusion.

\( \supset \): Any \( a \) on the right-hand side of (11) satisfies \( {am} \in \mathfrak{a} \) for some \( m \in R \smallsetminus \mathfrak{p} \), so \( a = \left( {am}\right) /m \in {\mathfrak{a}}^{e} \); since also \( a \in R \), we get \( a \in {\mathfrak{a}}^{ec} \).
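The behavior of \( {\left( \;\right) }^{e} \) and \( {\left( \;\right) }^{c} \) can be made concrete in the simplest case \( R = \mathbb{Z} \), \( \mathfrak{p} = \left( p\right) \): extending an ideal \( \left( n\right) \) to the localization and contracting back keeps only the \( p \)-part of \( n \), so (9), (10), and (11) can all be checked directly. A small sketch under that assumption (the function name is ours):

```python
def p_part(n, p):
    """Generator of (n)^{ec} for R = Z localized at the prime p.
    By (11), (n)^{ec} = {a : a*m in (n) for some m not divisible by p},
    which is the ideal generated by the largest power of p dividing n."""
    if n == 0:
        return 0          # the zero ideal contracts and extends to itself
    n = abs(n)
    g = 1
    while n % p == 0:     # peel off the p-part of n
        g *= p
        n //= p
    return g
```

For example, `p_part(12, 2)` returns 4: the ideal \( \left( {12}\right) \subset \mathbb{Z} \) satisfies \( \left( {12}\right) \subsetneq {\left( 12\right) }^{ec} = \left( 4\right) \), a strict instance of (10); while `p_part(12, 5)` returns 1, i.e. \( {\left( 12\right) }^{e} \) is the whole localized ring, illustrating (9) since \( \left( {12}\right) ⊄ \left( 5\right) \).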
## 18_Algebra Chapter 0: Definition 1.8
Definition 1.8. An element \( a \) in a ring \( R \) is a left-zero-divisor if there exists an element \( b \neq 0 \) in \( R \) for which \( {ab} = 0 \).

The reader will have no difficulty figuring out what a right-zero-divisor should be. The element 0 is a zero-divisor in all nonzero rings \( R \); the zero ring is the only ring without zero-divisors(!).

Proposition 1.9. In a ring \( R \), \( a \in R \) is not a left- (resp., right-) zero-divisor if and only if left (resp., right) multiplication by \( a \) is an injective function \( R \rightarrow R \). In other words, \( a \) is not a left- (resp., right-) zero-divisor if and only if multiplicative left- (resp., right-) cancellation by the element \( a \) holds in \( R \).

Proof. Let's verify the 'left' statement (the 'right' statement is of course entirely analogous). Assume \( a \) is not a left-zero-divisor and \( {ab} = {ac} \) for \( b, c \in R \). Then, by distributivity,
\[ a\left( {b - c}\right) = {ab} - {ac} = 0, \]
and this implies \( b - c = 0 \) since \( a \) is not a left-zero-divisor; that is, \( b = c \). This proves that left-multiplication is injective in this case. Conversely, if \( a \) is a left-zero-divisor, then \( \exists b \neq 0 \) such that \( {ab} = 0 = a \cdot 0 \); this shows that left-multiplication is not injective in this case, concluding the proof.

Rings such as \( \mathbb{Z} \), \( \mathbb{Q} \), etc., are commutative rings without (nonzero) zero-divisors. Such rings are very special, but very important, and they deserve their own terminology:

Definition 1.10. An integral domain is a nonzero commutative ring \( R \) (with 1) such that
\[ \left( {\forall a, b \in R}\right) : \;{ab} = 0 \Rightarrow a = 0\text{ or }b = 0. \]

Chapter V will be entirely devoted to integral domains. An element which is not a zero-divisor is called a non-zero-divisor. Thus, integral domains are those nonzero commutative rings in which every nonzero element is a non-zero-divisor.
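In a finite commutative ring such as \( \mathbb{Z}/n\mathbb{Z} \), these definitions can be checked exhaustively. A small sketch (function names ours, not the book's):

```python
def zero_divisors(n):
    """Nonzero zero-divisors of Z/nZ (commutative, so left = right):
    classes a with a*b = 0 in Z/nZ for some nonzero b."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

def is_integral_domain(n):
    """Z/nZ is an integral domain iff it is nonzero (n >= 2)
    and has no nonzero zero-divisors."""
    return n >= 2 and not zero_divisors(n)
```

For instance, `zero_divisors(6)` returns `[2, 3, 4]` (since \( 2 \cdot 3 = 0 \) and \( 4 \cdot 3 = 0 \) in \( \mathbb{Z}/6\mathbb{Z} \)), so \( \mathbb{Z}/6\mathbb{Z} \) is not an integral domain, while `zero_divisors(5)` returns `[]`.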
By Proposition 1.9, multiplicative cancellation by nonzero elements holds in integral domains. The rings \( \mathbb{Z},\mathbb{Q},\mathbb{R},\mathbb{C} \) are all integral domains. As we have seen, some \( \mathbb{Z}/n\mathbb{Z} \) are not integral domains. Here is one of those places where the reader can do him/herself a great favor by pausing a moment and figuring something out: answer the question, which \( \mathbb{Z}/n\mathbb{Z} \) are integral domains? This is entirely within reach, given what the reader knows already. Don't read ahead before figuring this out - this question will be answered within a few short paragraphs, spoiling all the fun.

There are even subtler reasons why \( \mathbb{Z} \) is a very special ring: we will see in due time that it is a 'UFD' (unique factorization domain); in fact, it is a 'PID' (principal ideal domain); in fact, it is more special still, as it is a 'Euclidean domain'. All of this will be discussed in Chapter V, particularly §V.2. However, \( \mathbb{Q},\mathbb{R},\mathbb{C} \) are more special than all of that and then some, since they are fields.

Definition 1.11. An element \( u \) of a ring \( R \) is a left-unit if \( \exists v \in R \) such that \( {uv} = 1 \); it is a right-unit if \( \exists v \in R \) such that \( {vu} = 1 \). Units are two-sided units.

Proposition 1.12. In a ring \( R \):
- \( u \) is a left- (resp., right-) unit if and only if left- (resp., right-) multiplication by \( u \) is a surjective function \( R \rightarrow R \);
- if \( u \) is a left- (resp., right-) unit, then right- (resp., left-) multiplication by \( u \) is injective; that is, \( u \) is not a right- (resp., left-) zero-divisor;
- the inverse of a two-sided unit is unique;
- two-sided units form a group under multiplication.

Proof. These assertions are all straightforward. For example, denote by \( {\rho }_{u} : R \rightarrow R \) right-multiplication by \( u \), so that \( {\rho }_{u}\left( r\right) = {ru} \).
If \( u \) is a right-unit, let \( v \in R \) be such that \( {vu} = 1 \); then \( \forall r \in R \)
\[ {\rho }_{u} \circ {\rho }_{v}\left( r\right) = {\rho }_{u}\left( {rv}\right) = \left( {rv}\right) u = r\left( {vu}\right) = r{1}_{R} = r. \]
That is, \( {\rho }_{v} \) is a right-inverse to \( {\rho }_{u} \), and therefore \( {\rho }_{u} \) is surjective (Proposition I.2.1). Conversely, if \( {\rho }_{u} \) is surjective, then there exists a \( v \) such that \( {1}_{R} = {\rho }_{u}\left( v\right) = {vu} \), so that \( u \) is a right-unit. This checks the first statement, for right-units.

For the second statement, denote by \( {\lambda }_{u} : R \rightarrow R \) left-multiplication by \( u \): \( {\lambda }_{u}\left( r\right) = {ur} \). Assume \( u \) is a right-unit, and let \( v \) be such that \( {vu} = {1}_{R} \); then \( \forall r \in R \)
\[ {\lambda }_{v} \circ {\lambda }_{u}\left( r\right) = {\lambda }_{v}\left( {ur}\right) = v\left( {ur}\right) = \left( {vu}\right) r = {1}_{R}r = r. \]
That is, \( {\lambda }_{v} \) is a left-inverse to \( {\lambda }_{u} \), so \( {\lambda }_{u} \) is injective (Proposition I.2.1 again). The rest of the proof is left to the reader (Exercise 1.9).

Since the inverse of a two-sided unit \( u \) is unique, we can give it a name; of course we denote it by \( {u}^{-1} \). The reader should keep in mind that inverses of left- or right-units are not unique in general, so the 'inverse notation' is not appropriate for them.

Definition 1.13. A division ring is a ring in which every nonzero element is a two-sided unit.

We will mostly be concerned with the commutative case, which has its own name:

Definition 1.14. A field is a nonzero commutative ring \( R \) (with 1) in which every nonzero element is a unit.

The whole of Chapter VII will be devoted to studying fields.
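In \( \mathbb{Z}/n\mathbb{Z} \) (commutative, so the left/right distinctions collapse), Proposition 1.12's characterization of units via surjective multiplication can be tested by direct search. A sketch (function names ours):

```python
def units(n):
    """Classes m in Z/nZ whose multiplication map r -> r*m is surjective,
    i.e. the units of Z/nZ (these turn out to be the m with gcd(m, n) = 1)."""
    return [m for m in range(1, n)
            if {(r * m) % n for r in range(n)} == set(range(n))]

def inverse(u, n):
    """The (unique, by Proposition 1.12) inverse of a unit u in Z/nZ,
    found by brute-force search."""
    for v in range(1, n):
        if (u * v) % n == 1:
            return v
    raise ValueError(f"{u} is not a unit modulo {n}")
```

For example, `units(8)` returns `[1, 3, 5, 7]` and `inverse(3, 8)` returns 3 (since \( 3 \cdot 3 = 9 \equiv 1 \) modulo 8); one can also check closure, confirming that the units form a group under multiplication.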
By Proposition 1.12 (second part), every field is an integral domain, but not conversely: indeed, \( \mathbb{Z} \) is an integral domain, but it is not a field. Remember:
\[ \text{field} \Rightarrow \text{integral domain,} \]
\[ \text{integral domain} \nRightarrow \text{field.} \]
There is a situation, however, in which the two notions coincide:

Proposition 1.15. Assume \( R \) is a finite commutative ring; then \( R \) is an integral domain if and only if it is a field.

Proof. One implication holds for all rings, as pointed out above; thus we only have to verify that if \( R \) is a finite integral domain, then it is a field. This amounts to verifying that if \( a \) is a non-zero-divisor in a finite (commutative) ring \( R \), then it is a unit in \( R \). Now, if \( a \) is a non-zero-divisor, then multiplication by \( a \) in \( R \) is injective (Proposition 1.9); hence it is surjective, as the ring is finite, by the pigeon-hole principle; hence \( a \) is a unit, by Proposition 1.12.

Remark 1.16. A little surprisingly, the hypothesis of commutativity in Proposition 1.15 is actually superfluous: a theorem known as Wedderburn's little theorem shows that finite division rings are necessarily commutative. The reader will prove this fact in a distant future (Exercise VII.5.14).

Example 1.17.
The group of units in the ring \( \mathbb{Z}/n\mathbb{Z} \) is precisely the group \( {\left( \mathbb{Z}/n\mathbb{Z}\right) }^{ * } \) introduced in §II.2.3: indeed, a class \( {\left\lbrack m\right\rbrack }_{n} \) is a unit if and only if (right-) multiplication by \( {\left\lbrack m\right\rbrack }_{n} \) is surjective (by Proposition 1.12), if and only if the map \( a \mapsto a{\left\lbrack m\right\rbrack }_{n} \) is surjective, if and only if \( {\left\lbrack m\right\rbrack }_{n} \) generates \( \mathbb{Z}/n\mathbb{Z} \), if and only if \( \gcd \left( {m, n}\right) = 1 \) (Corollary II.2.5), if and only if \( {\left\lbrack m\right\rbrack }_{n} \in {\left( \mathbb{Z}/n\mathbb{Z}\right) }^{ * } \).

In particular, those \( n \) for which all nonzero elements of \( \mathbb{Z}/n\mathbb{Z} \) are units (that is, for which \( \mathbb{Z}/n\mathbb{Z} \) is a field) are precisely those \( n \in \mathbb{Z} \) for which \( \gcd \left( {m, n}\right) = 1 \) for all \( m \) that are not multiples of \( n \); this is the case if and only if \( n \) is prime. Putting this together with Proposition 1.15, we get the pretty classification (for integers \( p \neq 0 \))
\[ \mathbb{Z}/p\mathbb{Z}\text{ integral domain } \Leftrightarrow \mathbb{Z}/p\mathbb{Z}\text{ field } \Leftrightarrow p\text{ prime,} \]
which the reader is well advised to remember firmly.

Example 1.18. The rings \( \mathbb{Z}/p\mathbb{Z} \), with \( p \) prime, are not the only finite fields. In fact, for every prime \( p \) and every integer \( r > 0 \) there is a (unique, in a suitable sense) multiplication on the product group
\[ \underset{r\text{ times }}{\underbrace{\mathbb{Z}/p\mathbb{Z} \times \cdots \times \mathbb{Z}/p\mathbb{Z}}} \]
making it into a field. A discussion of these fields will have to wait until we have accumulated much more material (cf. §VII.5.1), but the reader could already try to construct small examples 'by hand' (cf. Exercise 1.11).

1.3. Polynomial rings.
We will study polynomial rings in some depth, especially over fields; they are another class of examples that is to some extent already familiar to our reader. I will capitalize on this familiarity and avoid a truly formal (and truly tedious) definition.

Definition 1.19. Let \( R \) be a ring. A polynomial \( f\left( x\right) \) in the indeterminate \( x \) and with coefficients in \( R \) is a finite linear combination of nonnegative 'powers' of \( x \) with coefficients in \( R \):
\[ f\left( x\right) = \mathop{\sum }\limits_{{i \geq 0}}{a}_{i}{x}^{i} = {a}_{0} + {a}_{1}x + {a}_{2}{x}^{2} + \cdots , \]
where all \( {a}_{i} \) are elements of \( R \) (the coefficients) and we require \( {a}_{i} = 0 \) for \( i \gg 0 \). Two polynomials are taken to be equal if all the coefficients are equal:
\[ \mathop{\sum }\limits_{{i \geq 0}}{a}_{i}{x}^{i} = \mathop{\sum }\limits_{{i \geq 0}}{b}_{i}{x}^{i} \Leftrightarrow \left( {\forall i \geq 0}\right) : {a}_{i} = {b}_{i}. \]
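The "finitely many nonzero coefficients" convention suggests a concrete representation: store the coefficient sequence and strip trailing zeros, so that equality of polynomials becomes equality of normal forms. A minimal sketch (names ours):

```python
def poly(coeffs):
    """Normal form of a polynomial given as coefficients (a0, a1, a2, ...):
    drop trailing zero coefficients, since a_i = 0 for i >> 0."""
    c = list(coeffs)
    while c and c[-1] == 0:
        c.pop()
    return tuple(c)

def poly_eq(p, q):
    """Two polynomials are equal iff all their coefficients agree."""
    return poly(p) == poly(q)
```

For example, `poly_eq([1, 2, 0, 0], [1, 2])` returns `True`, and the empty tuple `poly([])` represents the zero polynomial.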
## 108_The Joys of Haar Measure: Definition 7.1.7
Definition 7.1.7. Let \( E \) and \( {E}^{\prime } \) be two elliptic curves with identity elements \( \mathcal{O} \) and \( {\mathcal{O}}^{\prime } \), respectively. An isogeny \( \phi \) from \( E \) to \( {E}^{\prime } \) is a morphism of algebraic curves from \( E \) to \( {E}^{\prime } \) such that \( \phi \left( \mathcal{O}\right) = {\mathcal{O}}^{\prime } \). A nonconstant isogeny is one such that there exists \( P \in E \) with \( \phi \left( P\right) \neq {\mathcal{O}}^{\prime } \). We say that \( E \) and \( {E}^{\prime } \) are isogenous if there exists a nonconstant isogeny from \( E \) to \( {E}^{\prime } \).

We will implicitly assume that our isogenies are nonconstant. By the above theorem, an isogeny \( \phi \) preserves the group law; in other words, \( \phi \left( {P + {P}^{\prime }}\right) = \phi \left( P\right) + \phi \left( {P}^{\prime }\right) \), where addition on the left is on the curve \( E \) and on the right is on the curve \( {E}^{\prime } \). The following results summarize the main properties of isogenies; see [Sil1] for details and proofs.

Theorem 7.1.8. Let \( \phi \) be a nonconstant isogeny from \( E \) to \( {E}^{\prime } \) defined over an algebraically closed field \( K \). Then (1) The map \( \phi \) is surjective. (2) \( \phi \) is a finite map; in other words, the fiber over any point of \( {E}^{\prime } \) is finite, of cardinality independent of the point.

From these properties it is easy to see that \( \phi \) induces an injective map from the function field of \( {E}^{\prime } \) to that of \( E \) over some algebraic closure of the base field. The degree of the corresponding field extension is finite and is called the degree of \( \phi \). If this field extension is separable, the degree of \( \phi \) is also equal to the cardinality of a fiber, in other words to \( \left| {\operatorname{Ker}\left( \phi \right) }\right| \), but this is not true in general.
Thus, as algebraic curves, or equivalently, over an algebraically closed field extension of the base field, a nonconstant isogeny induces an isomorphism from \( E/\operatorname{Ker}\left( \phi \right) \) to \( {E}^{\prime } \), where \( E/\operatorname{Ker}\left( \phi \right) \) must be suitably defined as an elliptic curve. If there exists a nonconstant isogeny \( \phi \) from \( E \) to \( {E}^{\prime } \) of degree \( m \), we say that \( E \) and \( {E}^{\prime } \) are \( m \) -isogenous. Conversely, we have the following: Proposition 7.1.9. If \( G \) is a finite subgroup of \( E \) there exists a natural elliptic curve \( {E}^{\prime } \) and an isogeny \( \phi \) from \( E \) to \( {E}^{\prime } \) whose kernel (over some algebraic closure) is equal to \( G \) . The elliptic curve \( {E}^{\prime } \) is well defined up to isomorphism and is denoted by \( E/G \) . Note that the equation of \( {E}^{\prime } \) can be given explicitly by formulas due to Vélu [Vel]. Two isogenous elliptic curves are very similar, but are in general not isomorphic. For instance, Theorem 8.1.3 tells us that two elliptic curves defined over \( \mathbb{Q} \) that are isogenous over \( \mathbb{Q} \) have the same rank and the same \( L \) -function. However, they do not necessarily have the same torsion subgroup: for instance, it follows from Proposition 8.4.3 that the elliptic curves \( {y}^{2} = {x}^{3} + 1 \) and \( {y}^{2} = {x}^{3} - {27} \) are 3-isogenous, but it is easily shown using, for instance, the Nagell-Lutz Theorem 8.1.10 that the torsion subgroup of the former has order 6, while the torsion subgroup of the latter has order 2. Proposition 7.1.10. Let \( \phi \) be a nonconstant isogeny from an elliptic curve \( E \) to \( {E}^{\prime } \) of degree \( m \) . 
There exists an isogeny \( \psi \) from \( {E}^{\prime } \) to \( E \), called the dual isogeny of \( \phi \), such that \[ \psi \circ \phi = {\left\lbrack m\right\rbrack }_{E}\;\text{ and }\;\phi \circ \psi = {\left\lbrack m\right\rbrack }_{{E}^{\prime }}, \] where \( \left\lbrack m\right\rbrack \) denotes the multiplication-by-m map on the corresponding curve. An isogeny of degree \( m \) will also be called an \( m \) -isogeny. We define the degree of the constant isogeny to be 0 . We will see several examples of isogenies in the next chapter, for instance in Section 8.2 on rational 2-descent, where the basic tools are 2-isogenies. ## 7.2 Transformations into Weierstrass Form ## 7.2.1 Statement of the Problem In this section, we explain how to transform the most commonly encountered equations of elliptic curves into Weierstrass form (simple or not, since it is trivial to transform into simple Weierstrass form by completing the square or the cube, if the characteristic permits). We will usually assume that the characteristic of the base field is different from 2 and 3 , although some of the transformations are valid in more general cases. Recall that a birational transformation is a rational map with rational inverse, outside of a finite number of poles. Although not entirely trivial, it can be shown that in the case of curves (but not of higher-dimensional varieties), two curves are isomorphic if and only if they are birationally equivalent, in other words if there exists a birational transformation from one to the other. It will be slightly simpler to work in projective coordinates instead of affine ones. Thus whenever projective coordinates \( \left( {x, y, z}\right) \) appear, it is always implicit that \( \left( {x, y, z}\right) \neq \left( {0,0,0}\right) \) . 
Apart from simple or generalized Weierstrass equations, an elliptic curve can be given in the following ways, among others: (1) \( f\left( {x, y, z}\right) = 0 \), where \( f \) is a homogeneous cubic polynomial whose three partial derivatives do not vanish simultaneously, together with a known rational point \( \left( {{x}_{0},{y}_{0},{z}_{0}}\right) \) . (2) \( {y}^{2}{z}^{2} = f\left( {x, z}\right) \), where \( f\left( {x, z}\right) \) is a homogeneous polynomial of degree 4 such that \( f\left( {1,0}\right) \neq 0 \) and without multiple roots, together with a known rational point \( \left( {{x}_{0},{y}_{0},{z}_{0}}\right) \) (this type of equation is called a hyperelliptic quartic). Note that in this case the point at infinity \( \left( {0,1,0}\right) \) is a singular point with distinct tangents, and if the given point is at infinity we ask that the slopes of the tangents be rational. This is equivalent to the fact that \( f\left( {x,1}\right) \) is a fourth-degree polynomial whose leading coefficient is a square. (3) \( {f}_{1}\left( {x, y, z, t}\right) = {f}_{2}\left( {x, y, z, t}\right) = 0 \), where \( {f}_{1} \) and \( {f}_{2} \) are two homogeneous quadratic polynomials together with a common projective rational solution \( \left( {{x}_{0},{y}_{0},{z}_{0},{t}_{0}}\right) \), and additional conditions to ensure that the corresponding curve is nonsingular and of genus 1 . We first explain how to transform each of the above equations into Weierstrass form. More precisely, we will show how (3) and (2) transform into (1), and explain how to transform (1) into Weierstrass form. In fact we will see that (2) can also be directly transformed into Weierstrass form. 
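For curves already in simple Weierstrass form \( {y}^{2} = {x}^{3} + {ax} + b \), the group law is given by the chord-and-tangent formulas. The following sketch (my own illustration, not from the text; all function names are mine) implements the formulas exactly over \( \mathbb{Q} \) and uses a brute-force Nagell–Lutz-style search over small integral points, together with Mazur's bound of 12 on the order of a rational torsion point, to confirm the torsion orders 6 and 2 of the 3-isogenous curves \( {y}^{2} = {x}^{3} + 1 \) and \( {y}^{2} = {x}^{3} - {27} \) mentioned in Section 7.1.

```python
from fractions import Fraction
from math import isqrt

def add(P, Q, a):
    """Chord-and-tangent addition on y^2 = x^3 + a*x + b over Q.
    None encodes the identity element O; b does not enter the formulas."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return None                           # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) / (2 * y1)    # tangent slope
    else:
        lam = (y2 - y1) / (x2 - x1)           # chord slope
    x3 = lam * lam - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

def is_torsion(P, a, bound=12):
    """Check whether P has finite order (at most Mazur's bound 12)."""
    Q = P
    for _ in range(bound):
        if Q is None:
            return True
        Q = add(Q, P, a)
    return Q is None

def torsion_order(a, b, xbound=10):
    """Count torsion points by searching small integral points; by
    Nagell-Lutz, rational torsion points have integral coordinates."""
    count = 1                                 # the point at infinity O
    for x in range(-xbound, xbound + 1):
        rhs = x**3 + a * x + b
        if rhs < 0:
            continue
        y = isqrt(rhs)
        if y * y != rhs:
            continue
        for s in {y, -y}:
            if is_torsion((Fraction(x), Fraction(s)), a):
                count += 1
    return count

assert torsion_order(0, 1) == 6       # y^2 = x^3 + 1
assert torsion_order(0, -27) == 2     # y^2 = x^3 - 27
```

Exact rational arithmetic matters here: with floating point, the repeated slope computations would quickly lose the equality tests that detect the identity element.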
## 7.2.2 Transformation of the Intersection of Two Quadrics Assume that we are given the homogeneous quadratic equations \( {f}_{1}\left( {x, y, z, t}\right) = \) \( {f}_{2}\left( {x, y, z, t}\right) = 0 \) with common projective rational solution \( \left( {{x}_{0},{y}_{0},{z}_{0},{t}_{0}}\right) \), and assume that the intersection of the corresponding quadrics is nonsingular and of genus 1 . For \( i = 1 \) and 2 write \[ {f}_{i}\left( {x, y, z, t}\right) = {A}_{i}{t}^{2} + {L}_{i}\left( {x, y, z}\right) t + {Q}_{i}\left( {x, y, z}\right) , \] where \( {A}_{i} \) is a constant, \( {L}_{i} \) is linear, and \( {Q}_{i} \) quadratic. By making a linear coordinate change, we may send the rational solution to the projective point \( \left( {0,0,0,1}\right) \), so that in the new coordinates we have \( {A}_{i} = 0 \) ; hence the equations take the form \( t{L}_{i}\left( {x, y, z}\right) + {Q}_{i}\left( {x, y, z}\right) = 0 \) . I claim that the linear forms \( {L}_{1} \) and \( {L}_{2} \) are linearly independent: indeed, otherwise we could replace one of the equations, \( {f}_{1} \) say, by a suitable linear combination of \( {f}_{1} \) and \( {f}_{2} \) to make the \( {L}_{1} \) term disappear, so that the equations would read \( {Q}_{1}\left( {x, y, z}\right) = 0 \) and \( t{L}_{2}\left( {x, y, z}\right) + {Q}_{2}\left( {x, y, z}\right) = 0 \) . This second equation expresses \( t \) rationally in terms of \( x, y \), and \( z \), and the first is a conic, which is of genus 0, a contradiction that proves my claim. Eliminating \( t \) between the two equations \( t{L}_{i}\left( {x, y, z}\right) + {Q}_{i}\left( {x, y, z}\right) = 0 \), we thus have a new equation \( C\left( {x, y, z}\right) = 0 \) with \( C = {L}_{1}{Q}_{2} - {L}_{2}{Q}_{1} \) . 
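Since the elimination above rests on the polynomial identity \( {L}_{1}{f}_{2} - {L}_{2}{f}_{1} = {L}_{1}{Q}_{2} - {L}_{2}{Q}_{1} \) (the terms in \( t \) cancel), it can be sanity-checked by exact evaluation at random integer values. A sketch of mine, not from the text:

```python
import random

rng = random.Random(1)

def check_once():
    # random linear forms L_i and quadratic forms Q_i, evaluated at a
    # random integer point (x, y, z) and a random value of t
    x, y, z, t = (rng.randint(-9, 9) for _ in range(4))
    l = [rng.randint(-9, 9) for _ in range(6)]
    q = [rng.randint(-9, 9) for _ in range(12)]
    L1 = l[0]*x + l[1]*y + l[2]*z
    L2 = l[3]*x + l[4]*y + l[5]*z
    Q1 = q[0]*x*x + q[1]*y*y + q[2]*z*z + q[3]*x*y + q[4]*y*z + q[5]*z*x
    Q2 = q[6]*x*x + q[7]*y*y + q[8]*z*z + q[9]*x*y + q[10]*y*z + q[11]*z*x
    f1 = t*L1 + Q1
    f2 = t*L2 + Q2
    C = L1*Q2 - L2*Q1
    # t has dropped out: the cubic C vanishes on the intersection f1 = f2 = 0
    assert L1*f2 - L2*f1 == C

for _ in range(100):
    check_once()
```

In particular, on the intersection \( {f}_{1} = {f}_{2} = 0 \) the cubic \( C \) vanishes automatically, which is exactly what the elimination of \( t \) asserts.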
This is a homogeneous cubic equation with a projective rational point obtained by solving the homogeneous system of linear equations \( {L}_{1}\left( {x, y, z}\right) = {L}_{2}\left( {x, y, z}\right) = 0 \) , which has a unique projective solution since the \( {L}_{i} \) are independent. This shows how (3) can be transformed into (1). ## 7.2.3 Transformation of a Hyperelliptic Quartic Assume now that we are given the equation \( {y}^{2}{z}^{2} = f\left( {x, z}\right) \) with \( f\left( {x, z}\right) \) a homogeneous polynomial of degree 4, and a rational point \( \left( {{x}_{0},{y}_{0},{z}_{0}}\right) \), assumed to have rational tangents if \( {z}_{0} = 0 \) . If \( {z}_{0} = 0 \), we do nothing. Otherwise, by a translation \( x \mapsto x + {kz} \) for a suitable \( k \in K \), we may assume that \( {x}_{0} = 0 \) , so that the equation is \( {y}^{2}{z}^{2} = f\left( {x, z}\right) \) with \( f\left( {0, z}\right) = \left( {{y}_{0}^{2}/{z}_{0}^{2}}\right) {z}^{4} \) .
1359_[陈省身] Lectures on Differential Geometry
Definition 2.1
Definition 2.1. Suppose \( C : {u}^{i} = {u}^{i}\left( t\right) \) is a parametrized curve on \( M \), and \( X\left( t\right) \) is a tangent vector field defined on \( C \) given by \[ X\left( t\right) = {x}^{i}\left( t\right) {\left( \frac{\partial }{\partial {u}^{i}}\right) }_{C\left( t\right) }. \] (2.17) We say that \( X\left( t\right) \) is parallel along \( C \) if its absolute differential along \( C \) is zero, i.e., if \[ \frac{DX}{dt} = 0 \] (2.18) If the tangent vectors of a curve \( C \) are parallel along \( C \), then we call \( C \) a self-parallel curve, or a geodesic. Equation (2.18) is equivalent to \[ \frac{d{x}^{i}}{dt} + {x}^{j}{\Gamma }_{jk}^{i}\frac{d{u}^{k}}{dt} = 0 \] \( \left( {2.19}\right) \) This is a system of first-order ordinary differential equations. Thus a given tangent vector \( X \) at any point on \( C \) gives rise to a parallel tangent vector field, called the parallel displacement of \( X \) along the curve \( C \) . By the general discussion in \( §4 - 1 \), we see that a parallel displacement along \( C \) establishes an isomorphism between the tangent spaces at any two points on \( C \) . If \( C \) is a geodesic, then its tangent vector \[ X\left( t\right) = \frac{d{u}^{i}\left( t\right) }{dt}{\left( \frac{\partial }{\partial {u}^{i}}\right) }_{C\left( t\right) } \] is parallel along \( C \) . Therefore a geodesic curve \( C \) should satisfy: \[ \frac{{d}^{2}{u}^{i}}{d{t}^{2}} + {\Gamma }_{jk}^{i}\frac{d{u}^{j}}{dt}\frac{d{u}^{k}}{dt} = 0 \] \( \left( {2.20}\right) \) This is a system of second-order ordinary differential equations. Thus there exists a unique geodesic through a given point of \( M \) which is tangent to a given tangent vector at that point. We now discuss the curvature matrix \( \Omega \) of an affine connection. 
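Before turning to curvature, note that the geodesic equation (2.20) is a second-order ODE system that is easy to integrate numerically once the \( {\Gamma }_{jk}^{i} \) are known. A sketch of mine (not from the text), using the standard Christoffel symbols of the Poincaré upper half-plane (an assumption imported from Riemannian geometry, not derived here): with coordinates \( \left( {x, y}\right) \), \( y > 0 \), the nonzero symbols are \( {\Gamma }_{12}^{1} = {\Gamma }_{21}^{1} = - 1/y \), \( {\Gamma }_{11}^{2} = 1/y \), \( {\Gamma }_{22}^{2} = - 1/y \), and the geodesic through \( \left( {0,1}\right) \) with velocity \( \left( {1,0}\right) \) is the Euclidean unit semicircle.

```python
def geodesic_rhs(state):
    # state = (x, y, vx, vy); equation (2.20) with the half-plane
    # Christoffel symbols written out explicitly:
    x, y, vx, vy = state
    ax = 2 * vx * vy / y            # x'' = (2/y) x' y'
    ay = (vy * vy - vx * vx) / y    # y'' = (y'^2 - x'^2) / y
    return (vx, vy, ax, ay)

def rk4_step(state, h):
    # one classical fourth-order Runge-Kutta step
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs(tuple(s + h/2 * k for s, k in zip(state, k1)))
    k3 = geodesic_rhs(tuple(s + h/2 * k for s, k in zip(state, k2)))
    k4 = geodesic_rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h/6 * (a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.0, 1.0, 1.0, 0.0)        # start at (0, 1) with tangent (1, 0)
for _ in range(1000):
    state = rk4_step(state, 1e-3)

x, y = state[0], state[1]
# the trajectory stays on the Euclidean unit semicircle x^2 + y^2 = 1
assert abs(x*x + y*y - 1.0) < 1e-9
```

The exact solution here is \( x = \tanh t \), \( y = \operatorname{sech}t \), so the invariant \( {x}^{2} + {y}^{2} = 1 \) gives a sharp check on the integrator.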
Since \[ {\omega }_{i}^{j} = {\Gamma }_{ik}^{j}d{u}^{k} \] (2.21) we have \[ d{\omega }_{i}^{j} - {\omega }_{i}^{h} \land {\omega }_{h}^{j} = \frac{\partial {\Gamma }_{ik}^{j}}{\partial {u}^{l}}d{u}^{l} \land d{u}^{k} - {\Gamma }_{il}^{h}{\Gamma }_{hk}^{j}d{u}^{l} \land d{u}^{k} \] \[ = \frac{1}{2}\left( {\frac{\partial {\Gamma }_{il}^{j}}{\partial {u}^{k}} - \frac{\partial {\Gamma }_{ik}^{j}}{\partial {u}^{l}} + {\Gamma }_{il}^{h}{\Gamma }_{hk}^{j} - {\Gamma }_{ik}^{h}{\Gamma }_{hl}^{j}}\right) d{u}^{k} \land d{u}^{l}. \] Therefore \[ {\Omega }_{i}^{j} = \frac{1}{2}{R}_{ikl}^{j}d{u}^{k} \land d{u}^{l} \] (2.22) where \[ {R}_{ikl}^{j} = \frac{\partial {\Gamma }_{il}^{j}}{\partial {u}^{k}} - \frac{\partial {\Gamma }_{ik}^{j}}{\partial {u}^{l}} + {\Gamma }_{il}^{h}{\Gamma }_{hk}^{j} - {\Gamma }_{ik}^{h}{\Gamma }_{hl}^{j}. \] (2.23) If \( \left( {W;{w}^{i}}\right) \) is another coordinate system of \( M \), then the local frame field on \( W,{S}^{\prime } = {}^{t}\left( {\frac{\partial }{\partial {w}^{1}},\ldots ,\frac{\partial }{\partial {w}^{m}}}\right) \), is related to \( S \) on \( U \cap W \) by (2.2). By (1.29) we have \[ {\Omega }^{\prime } = {J}_{WU} \cdot \Omega \cdot {J}_{WU}^{-1} \] (2.24) where \( {\Omega }^{\prime } \) is the curvature matrix of the connection \( D \) under the coordinate system \( \left( {W;{w}^{i}}\right) \) . Componentwise the above equation can be written \[ {\Omega }^{\prime j}{}_{i} = {\Omega }_{p}^{q}\frac{\partial {u}^{p}}{\partial {w}^{i}}\frac{\partial {w}^{j}}{\partial {u}^{q}}. 
\] Thus \[ {R}^{\prime }{}_{ikl}^{j} = {R}_{prs}^{q}\frac{\partial {w}^{j}}{\partial {u}^{q}}\frac{\partial {u}^{p}}{\partial {w}^{i}}\frac{\partial {u}^{r}}{\partial {w}^{k}}\frac{\partial {u}^{s}}{\partial {w}^{l}}, \] \( \left( {2.25}\right) \) where \( {R}^{\prime j}{}_{ikl} \) is determined by \[ {\Omega }_{i}^{\prime j} = \frac{1}{2}{R}_{ikl}^{\prime j}d{w}^{k} \land d{w}^{l} \] Comparing (2.25) with (2.9) of Chapter 2 we observe that \( {R}_{ikl}^{j} \) satisfies the transformation rule for the components of type- \( \left( {1,3}\right) \) tensors. Therefore \[ R = {R}_{ikl}^{j}\frac{\partial }{\partial {u}^{j}} \otimes d{u}^{i} \otimes d{u}^{k} \otimes d{u}^{l} \] (2.26) is independent of the choice of local coordinates, and is called the curvature tensor of the affine connection \( D \) . For any two smooth tangent vector fields \( X, Y \) on \( M \) we have the curvature operator \( R\left( {X, Y}\right) \) [see (1.30)] which maps a tangent vector field on \( M \) to another tangent vector field. By Theorem 1.3, \( R\left( {X, Y}\right) \) can be written \[ R\left( {X, Y}\right) = {D}_{X}{D}_{Y} - {D}_{Y}{D}_{X} - {D}_{\left\lbrack X, Y\right\rbrack }. \] \( \left( {2.27}\right) \) Now we can express \( R\left( {X, Y}\right) \) in terms of the curvature tensor. Suppose \( X, Y \) , \( Z \) are tangent vector fields with local expressions \[ X = {X}^{i}\frac{\partial }{\partial {u}^{i}},\;Y = {Y}^{i}\frac{\partial }{\partial {u}^{i}},\;Z = {Z}^{i}\frac{\partial }{\partial {u}^{i}}. \] (2.28) Then \[ R\left( {X, Y}\right) Z = {Z}^{i}\left\langle {X \land Y,{\Omega }_{i}^{j}}\right\rangle \frac{\partial }{\partial {u}^{j}} \] (2.29) \[ = {{R}^{j}}_{ikl}{Z}^{i}{X}^{k}{Y}^{l}\frac{\partial }{\partial {u}^{j}}. \] Thus \[ {R}_{ikl}^{j} = \left\langle {R\left( {\frac{\partial }{\partial {u}^{k}},\frac{\partial }{\partial {u}^{l}}}\right) \frac{\partial }{\partial {u}^{i}}, d{u}^{j}}\right\rangle . 
\] \( \left( {2.30}\right) \) We know that the connection coefficients \( {\Gamma }_{ik}^{j} \) do not satisfy the transformation rule for tensors. But if we define \[ {T}_{ik}^{j} = {\Gamma }_{ki}^{j} - {\Gamma }_{ik}^{j} \] (2.31) then (2.5) implies \[ {T}^{\prime }{}_{ik}^{j} = {T}_{pr}^{q}\frac{\partial {w}^{j}}{\partial {u}^{q}}\frac{\partial {u}^{p}}{\partial {w}^{i}}\frac{\partial {u}^{r}}{\partial {w}^{k}}. \] (2.32) Hence \( {T}_{ik}^{j} \) satisfies the transformation rule for the components of \( \left( {1,2}\right) \) -type tensors. Thus \[ T = {T}_{ik}^{j}\frac{\partial }{\partial {u}^{j}} \otimes d{u}^{i} \otimes d{u}^{k} \] (2.33) is a \( \left( {1,2}\right) \) -type tensor, called the torsion tensor of the affine connection \( D \) . By (2.31) the components of the torsion tensor \( T \) are skew-symmetric with respect to the lower indices, that is, \[ {T}_{ik}^{j} = - {T}_{ki}^{j} \] \( \left( {2.34}\right) \) Being a \( \left( {1,2}\right) \) -type tensor, \( T \) can be viewed as a map from \( \Gamma \left( {T\left( M\right) }\right) \times \Gamma \left( {T\left( M\right) }\right) \) to \( \Gamma \left( {T\left( M\right) }\right) \) . Suppose \( X, Y \) are any two tangent vector fields on \( M \) . Then \( T\left( {X, Y}\right) \) is a tangent vector field on \( M \) with local expression \[ T\left( {X, Y}\right) = {T}_{ij}^{k}{X}^{i}{Y}^{j}\frac{\partial }{\partial {u}^{k}}. \] \( \left( {2.35}\right) \) The reader should verify that \[ T\left( {X, Y}\right) = {D}_{X}Y - {D}_{Y}X - \left\lbrack {X, Y}\right\rbrack . \] (2.36) Definition 2.2. If the torsion tensor of an affine connection \( D \) is zero, then the connection is said to be torsion-free. A torsion-free affine connection always exists. 
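In coordinates, the skew-symmetry (2.34) and the fact that only the symmetric part of \( {\Gamma }_{ik}^{j} \) survives contraction against \( {\dot{u}}^{j}{\dot{u}}^{k} \) (which is why a connection and its torsion-free part share geodesics) can be checked numerically. A sketch of mine, assuming numpy is available and using the index convention \( \mathtt{G}\left\lbrack {j, i, k}\right\rbrack = {\Gamma }_{ik}^{j} \):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
G = rng.normal(size=(m, m, m))           # G[j, i, k] = Gamma^j_{ik}

T = G.transpose(0, 2, 1) - G             # T^j_{ik} = Gamma^j_{ki} - Gamma^j_{ik}
Gsym = 0.5 * (G + G.transpose(0, 2, 1))  # symmetrized (torsion-free) part

# (2.34): the torsion components are skew in the lower indices
assert np.allclose(T.transpose(0, 2, 1), -T)
# the connection decomposes as Gamma = -T/2 + Gamma-tilde
assert np.allclose(G, Gsym - 0.5 * T)
# only the symmetric part contributes to Gamma^i_{jk} v^j v^k,
# so the geodesic equation is unchanged by dropping the torsion
v = rng.normal(size=m)
assert np.allclose(np.einsum('ijk,j,k->i', G, v, v),
                   np.einsum('ijk,j,k->i', Gsym, v, v))
```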
In fact, if the coefficients of a connection \( D \) are \( {\Gamma }_{ik}^{j} \), then set \[ {\widetilde{\Gamma }}_{ik}^{j} = \frac{1}{2}\left( {{\Gamma }_{ik}^{j} + {\Gamma }_{ki}^{j}}\right) \] (2.37) Obviously, \( {\widetilde{\Gamma }}_{ik}^{j} \) is symmetric with respect to the lower indices and satisfies (2.5) under a local change of coordinates. Therefore the \( {\widetilde{\Gamma }}_{ik}^{j} \) are the coefficients of some connection \( \widetilde{D} \), and \( \widetilde{D} \) is torsion-free. Any connection can be decomposed into a sum of a multiple of its torsion tensor and a torsion-free connection. In fact, (2.31) and (2.37) give \[ {\Gamma }_{ik}^{j} = - \frac{1}{2}{T}_{ik}^{j} + {\widetilde{\Gamma }}_{ik}^{j} \] (2.38) that is, \[ {D}_{X}Z = \frac{1}{2}T\left( {X, Z}\right) + {\widetilde{D}}_{X}Z \] (2.39) The geodesic equation (2.20) is equivalent to \[ \frac{{d}^{2}{u}^{i}}{d{t}^{2}} + {\widetilde{\Gamma }}_{jk}^{i}\frac{d{u}^{j}}{dt}\frac{d{u}^{k}}{dt} = 0 \] \( \left( {2.40}\right) \) Thus a connection \( D \) and the corresponding torsion-free connection \( \widetilde{D} \) have the same geodesics. The following two theorems indicate that torsion-free affine connections have relatively desirable properties. Theorem 2.1. Suppose \( D \) is a torsion-free affine connection on \( M \) . Then for any point \( p \in M \) there exists a local coordinate system \( {u}^{i} \) such that the corresponding connection coefficients \( {\Gamma }_{ik}^{j} \) vanish at \( p \) . Proof. Suppose \( \left( {W;{w}^{i}}\right) \) is a local coordinate system at \( p \) with connection coefficients \( {\Gamma }^{\prime j}{}_{ik} \) . Let \[ {u}^{i} = {w}^{i} + \frac{1}{2}{\Gamma }^{\prime }{}_{jk}^{i}\left( p\right) \left( {{w}^{j} - {w}^{j}\left( p\right) }\right) \left( {{w}^{k} - {w}^{k}\left( p\right) }\right) . \] (2.41) Then \[ {\left. \frac{\partial {u}^{i}}{\partial {w}^{j}}\right| }_{p} = {\delta }_{j}^{i},{\left. 
\;\frac{{\partial }^{2}{u}^{i}}{\partial {w}^{j}\partial {w}^{k}}\right| }_{p} = {\Gamma }^{\prime }{}_{jk}^{i}\left( p\right) . \] \( \left( {2.42}\right) \) Thus the matrix \( \left( \frac{\partial {u}^{i}}{\partial {w}^{j}}\right) \) is nondegenerate near \( p \), and (2.41) provides for a change of local coordinates in a neighborhood of \( p \) . From (2.5) we see that the connection coefficients \( {\Gamma }_{ik}^{j} \) in the new coordinate system \( {u}^{i} \) satisfy \[ {\Gamma }_{ik}^{j}\left( p\right) = 0,\;1 \leq i, j, k \leq m. \] Theorem 2.2. Suppose \( D \) is a torsion-free affine connection on \( M \) . Then we have the Bianchi identity: \[ {R}_{{ikl}, h}^{j} + {R}_{{ilh}, k}^{j} + {R}_{{ihk}, l}^{j} = 0. \] (2.43) Proof. From Theorem 1.4 we have \[ d{\Omega }_{i}^{j} = {\omega }_{i}^{k} \land {\Omega }_{k}^{j} - {\Omega }_{i}^{k} \land {\omega }_{k}^{j} \] that is, \[ \fra
1139_(GTM44)Elementary Algebraic Geometry
Definition 8.9
Definition 8.9. Let \( \left( {A, M,\pi }\right) \) be a near cover (Definition 5.2). A connected open set \( \mathcal{O} \subset M \) is said to be liftable to \( A \) if there is an open set \( \mathcal{Q} \subset A \) such that \( \pi \mid \mathcal{Q} \) is a homeomorphism from \( \mathcal{Q} \) to \( \mathcal{O};\mathcal{Q} \) is then a lifting of \( \mathcal{O} \) . If \( P \in \mathcal{Q} \) we say \( \mathcal{Q} \) is a lifting through \( P \) , and that \( \mathcal{Q} \) lifts \( \mathcal{O} \) through \( P \) . A chain \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) in \( M \) is liftable to \( A \) if there is a chain \( \left( {{\mathcal{Q}}_{1},\ldots ,{\mathcal{Q}}_{m}}\right) \) in \( A \) such that each \( {\mathcal{Q}}_{i} \) is a lifting of \( {\mathcal{O}}_{i} \) . Then \( \left( {{\mathcal{Q}}_{1},\ldots ,{\mathcal{Q}}_{m}}\right) \) is called a lifting of \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \), and a lifting through \( P \) if \( P \in {\mathcal{Q}}_{1} \cup \ldots \cup {\mathcal{Q}}_{m} \) . Definition 8.10. Let \( \mathcal{O} \) be a connected open subset of \( \mathbb{C} = {\mathbb{C}}_{X} \) . The graph in \( \mathcal{O} \times {\mathbb{C}}_{Y} \) of a function single-valued and complex-analytic on \( \mathcal{O} \), is called an analytic function element. Note that an analytic function element describes in a natural way a lifting of \( \mathcal{O} \) ; we therefore write \( \mathcal{Q} \) to denote such a function element. If \( P \in \mathcal{Q} \), then \( \mathcal{Q} \) is an analytic function element through \( P \) . 
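A numerical illustration of my own (not from the text): for the curve \( \mathrm{V}\left( {{y}^{2} - x}\right) \), the two function elements over a neighborhood of \( x = 1 \) are the two branches of \( \sqrt{x} \); continuing one of them along a chain of small overlapping neighborhoods encircling the branch point \( x = 0 \) returns the lifting belonging to the other element.

```python
import cmath
import math

def continue_element(path, y0):
    """Continue the function element y (with y^2 = x) along a path of
    sample points, choosing at each step the square root closest to the
    previous value -- a discrete stand-in for a chain of liftings."""
    y = y0
    for x in path:
        r = cmath.sqrt(x)
        y = r if abs(r - y) <= abs(-r - y) else -r
    return y

# a closed polygonal loop around the branch point x = 0
loop = [cmath.exp(2j * math.pi * k / 100) for k in range(101)]
y_end = continue_element(loop, 1.0)   # start on the sheet with sqrt(1) = 1
assert abs(y_end - (-1.0)) < 1e-9     # we return on the other sheet
```

Going around the loop twice restores the original element, reflecting that the two liftings over a neighborhood of \( x = 1 \) are exchanged by a single circuit of the branch point.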
A chain \( \left( {{\mathcal{Q}}_{1},\ldots ,{\mathcal{Q}}_{m}}\right) \) of analytic function elements lifting a chain \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) of \( \mathbb{C} \) is called the analytic continuation from \( {\mathcal{Q}}_{1} \) to \( {\mathcal{Q}}_{m} \) along \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) , or the analytic continuation of \( {\mathcal{Q}}_{1} \) along \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) ; if \( P \in {\mathcal{Q}}_{1} \) and \( {P}^{\prime } \in {\mathcal{Q}}_{m} \) , \( \left( {{\mathcal{Q}}_{1},\ldots ,{\mathcal{Q}}_{m}}\right) \) is the analytic continuation from \( P \) to \( {P}^{\prime } \) along \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{m}}\right) \) . Relative to the cover of special interest to us, namely \( \left( {\mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) ,\mathbb{C} \smallsetminus \mathcal{D},{\pi }_{Y}}\right) \), there is about each point of \( \mathbb{C} \smallsetminus \mathcal{D} \) a connected open neighborhood \( \mathcal{O} \) which has a lifting \( \mathcal{Q} \) . Any such lifting is the graph of a function analytic on \( \mathcal{O} \) (from Theorem 3.6), that is, any such \( \mathcal{Q} \) is an analytic function element. When considering chains in our proof of Theorem 8.5, it will be of technical convenience to restrict our attention to connected open sets \( {\mathcal{O}}_{i} \) of \( \mathbb{C} \smallsetminus \mathcal{D} \) which are liftable through each point of \( {\pi }_{Y}{}^{-1}\left( {\mathcal{O}}_{i}\right) \) (which means that \( {\pi }_{Y}{}^{-1}\left( {\mathcal{O}}_{i}\right) \) consists of \( n\left( { = {\deg }_{Y}p}\right) \) function elements). Note that there is such an \( \mathcal{O} \) about each point of \( \mathbb{C} \smallsetminus \mathcal{D} \) . Definition 8.11. 
Relative to \( \left( {\mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) ,\mathbb{C} \smallsetminus \mathcal{D},{\pi }_{Y}}\right) \), any connected open set \( \mathcal{O} \) of \( \mathbb{C} \smallsetminus \mathcal{D} \) which lifts through each point of \( {\pi }_{Y}{}^{-1}\left( \mathcal{O}\right) \) is an allowable set. Any chain of allowable open sets is an allowable chain. Lemma 8.13 below is used in our proof of Theorem 8.5 and gives an important class of allowable open sets. Definition 8.12. An open set \( \Omega \subset \mathbb{C} \) is simply connected if it is homeomorphic to an open disk. Examples are: \( \mathbb{C} \) itself; \( \mathbb{C} \smallsetminus \) (nonnegative real axis); \( \mathbb{C} \smallsetminus \Phi \), where \( \Phi \) is any closed, non-self-intersecting polygonal path that goes out to the infinite point of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \) (see Figure 20). ![9396b131-9501-41be-b2cf-577fd90ab693_97_0.jpg](images/9396b131-9501-41be-b2cf-577fd90ab693_97_0.jpg) Figure 20 Lemma 8.13. Relative to \( \left( {\mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) ,\mathbb{C} \smallsetminus \mathcal{D},{\pi }_{Y}}\right) \), any simply connected open subset of \( \mathbb{C} \smallsetminus \mathcal{D} \) is allowable. This is an immediate consequence of the familiar "monodromy theorem" (proved in most standard texts on elementary complex analysis). To state it, we use the following ideas: First, let \( U \) be a nonempty open subset of \( \mathbb{C} \) . A polygonal path in \( U \) is the union of closed line segments \( \overline{{P}_{i},{P}_{i + 1}} \subset U \) \( \left( {i = 0,\ldots, r - 1}\right) \) connecting finitely many ordered points \( \left( {{P}_{0},\ldots ,{P}_{r}}\right) \left( {{P}_{i} \neq {P}_{i + 1}}\right) \) in \( U \) . Now suppose \( \mathcal{Q} \) is an analytic function element which is a lifting of a connected open set \( \mathcal{O} \subset U \) . 
We say \( \mathcal{Q} \) can be continued along a polygonal path \( \overline{{P}_{0},{P}_{1}} \cup \ldots \cup \overline{{P}_{r - 1},{P}_{r}} \) in \( U \) if \( {P}_{0} \in \mathcal{O} \) and if there is a chain \( \mathcal{O},{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r} \) in \( U \) such that \( \overline{{P}_{i},{P}_{i + 1}} \subseteq {\mathcal{O}}_{i + 1} \) \( \left( {i = 0,\ldots, r - 1}\right) \), and such that there is an analytic continuation of \( \mathcal{Q} \) along \( \mathcal{O},{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r} \) . Theorem 8.14 (Monodromy theorem). Let \( \Omega \) be a simply connected open set in \( \mathbb{C} \), and suppose an analytic function element \( \mathcal{Q} \) is a lifting of a connected open set \( \mathcal{O} \subset \Omega \) . If \( \mathcal{Q} \) can be analytically continued along any polygonal path in \( \Omega \) , then \( \mathcal{Q} \) has a unique extension to a (single-valued) function which is analytic at each point of \( \Omega \) . For a proof of Theorem 8.14, see, e.g., [Ahlfors, Chapter VI, Theorem 2]. To prove Lemma 8.13, we need only verify that in our case, the hypothesis of Theorem 8.14 is satisfied, i.e., that for any simply connected open subset \( \Omega \) of \( \mathbb{C} \smallsetminus \mathcal{D} \), we can analytically continue any analytic function element along any polygonal path in \( \Omega \) . The argument is easy, and may be left to the exercises (Exercise 8.1). We now prove that \( \mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) \) is chainwise connected by contradiction. Suppose \( P \) and \( Q \) are two points of \( \mathrm{V}\left( p\right) \smallsetminus {\pi }_{Y}{}^{-1}\left( \mathcal{D}\right) \) such that there is no analytic continuation from \( P \) to \( Q \) along any allowable chain in \( \mathbb{C} \smallsetminus \mathcal{D} \) . 
Choose a non-self-intersecting polygonal path \( \Phi \) in \( \mathbb{C} \) connecting the finitely many points of \( \mathcal{D} \), and the infinite point of \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) = {\mathbb{C}}_{X} \cup \{ \infty \} \), as suggested by Figure 20. We can obviously choose \( \Phi \) so it does not go through \( {\pi }_{Y}\left( P\right) \) or \( {\pi }_{Y}\left( Q\right) \) . The "slit sphere" \( {\mathbb{P}}^{1}\left( \mathbb{C}\right) \smallsetminus \Phi \) is then topologically an open disk of \( \mathbb{C} \), and is therefore simply connected. Now each point of \( C \) above any point of \( \mathbb{C} \smallsetminus \Phi \) is contained in an analytic function element, and by Lemma 8.13 each such function element extends to an analytic function on \( \mathbb{C} \smallsetminus \Phi \) . Since there are \( n \) points of \( C \) above each point of \( \mathbb{C} \smallsetminus \Phi \), there are just \( n \) such functions \( {f}_{i} \) on \( \mathbb{C} \smallsetminus \Phi \) . Call their graphs \( {F}_{1},\ldots ,{F}_{n} \) . Suppose, to be specific, that \( P \in {F}_{1} \) and \( Q \in {F}_{n} \) . Now let \( P \) and \( {P}^{\prime } \) be two points of \( C \) lying over \( \mathbb{C} \smallsetminus \Phi \), and suppose that we can analytically continue from \( P \) to \( {P}^{\prime } \) along some allowable chain \( \left( {{\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r}}\right) \) in \( \mathbb{C} \smallsetminus \mathcal{D} \) . Choose open sets such that \( {\mathcal{O}}_{0},{\mathcal{O}}_{r + 1} \subset \mathbb{C} \smallsetminus \Phi, P \in {\mathcal{O}}_{0} \subset {\mathcal{O}}_{1} \) and \( {P}^{\prime } \in {\mathcal{O}}_{r + 1} \supset {\mathcal{O}}_{r} \) . Since \( \mathbb{C} \smallsetminus \Phi \) is simply connected, it is allowable by Lemma 8.13. 
Hence its subsets \( {\mathcal{O}}_{0},{\mathcal{O}}_{r + 1} \) are also allowable, and therefore \( (\mathbb{C} \smallsetminus \Phi ,{\mathcal{O}}_{0} \) , \( {\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{r},{\mathcal{O}}_{r + 1},\mathbb{C} \smallsetminus \Phi \) ) is an allowable chain. Thus we may assume without loss of generality that any such analytic continuation in \( \mathbb{C} \smallsetminus \mathcal{D} \) from a point \( P \in C \) to any other point \( {P}^{\prime } \in C \), where \( {\pi }_{Y}\left( P\right) \) and \( {\pi }_{Y}\left( {P}^{\prime }\right) \in \mathbb{C} \smallsetminus \Phi \), is the lifting of some allowable chain from \( \mathbb{C} \smallsetminus \Phi \) to \( \mathbb{C} \smallsetminus \Phi \) . If \( P \in {F}_{i} \) and \( {P}^{\prime } \in {F}_{j} \), then this same chain also defines a continuation from any point in \( {F}_{i}
18_Algebra Chapter 0
Definition 2.7
Definition 2.7. A Euclidean valuation on an integral domain \( R \) is a valuation satisfying the following property \( {}^{8} \) : for all \( a \in R \) and all nonzero \( b \in R \) there exist \( q, r \in R \) such that \[ a = {qb} + r \] with either \( r = 0 \) or \( v\left( r\right) < v\left( b\right) \) . An integral domain \( R \) is a Euclidean domain if it admits a Euclidean valuation. \( {}^{6} \) This fact is known as the fundamental theorem of arithmetic. \( {}^{7} \) Entire libraries have been written on the subject of valuations, studying a more precise notion than what is needed here. \( {}^{8} \) It is not uncommon to also require that \( v\left( {ab}\right) \geq v\left( b\right) \) for all nonzero \( a, b \in R \) ; but this is not needed in the considerations that follow, and cf. Exercise 2.15. We say that \( q \) is the quotient of the division and \( r \) is the remainder. Division with remainder in \( \mathbb{Z} \) and in \( k\left\lbrack x\right\rbrack \) (where \( k \) is a field) provide examples, so that \( \mathbb{Z} \) and \( k\left\lbrack x\right\rbrack \) are Euclidean domains. Proposition 2.8. Let \( R \) be a Euclidean domain. Then \( R \) is a PID. The proof is modeled after the instances encountered for \( \mathbb{Z} \) (Proposition 1114.4) and \( k\left\lbrack x\right\rbrack \) (which the reader has hopefully worked out in Exercise 11114.4). Proof. Let \( I \) be an ideal of \( R \) ; we have to prove that \( I \) is principal. If \( I = \{ 0\} \) , there is nothing to show; therefore, assume \( I \neq \{ 0\} \) . The valuation maps the nonzero elements of \( I \) to a subset of \( {\mathbb{Z}}^{ \geq 0} \) ; let \( b \in I \) be an element with the smallest valuation. Then I claim that \( I = \left( b\right) \) ; therefore \( I \) is principal, as needed. Since clearly \( \left( b\right) \subseteq I \), we only need to verify that \( I \subseteq \left( b\right) \) . 
For this, let \( a \in I \) and apply division with remainder: we have \[ a = {qb} + r \] for some \( q, r \) in \( R \), with \( r = 0 \) or \( v\left( r\right) < v\left( b\right) \) . But \[ r = a - {qb} \in I : \] by the minimality of \( v\left( b\right) \) among nonzero elements of \( I \), we cannot have \( v\left( r\right) < v\left( b\right) \) . Therefore \( r = 0 \), showing that \( a = {qb} \in \left( b\right) \), as needed. Proposition 2.8 justifies one more feature of the picture at the beginning of the chapter: the class of Euclidean domains is contained in the class of principal ideal domains. This inclusion is proper, as suggested in the picture. Producing an explicit example of a PID which is not a Euclidean domain is not so easy, but the gap between PIDs and Euclidean domains can in fact be described very sharply: PIDs may be characterized as domains satisfying a weaker requirement than 'division with remainder'. More precisely, a ’Dedekind-Hasse valuation’ is a valuation \( v \) such that \( \forall a, b \) , either \( \left( {a, b}\right) = \left( b\right) \) (that is, \( b \) divides \( a \) ) or there exists \( r \in \left( {a, b}\right) \) such that \( v\left( r\right) < v\left( b\right) \) . This latter condition amounts to requiring that there exist \( q, s \in R \) such that \( {as} = {bq} + r \) with \( v\left( r\right) < v\left( b\right) \) ; hence a Euclidean valuation (for which we may in fact choose \( s = 1 \) ) is a Dedekind-Hasse valuation. It is not hard to show that an integral domain is a PID if and only if it admits a Dedekind-Hasse valuation (Exercise 2.21). For example, this can be used to show that the ring \( \mathbb{Z}\left\lbrack {\left( {1 + \sqrt{-{19}}}\right) /2}\right\rbrack \) is a PID: the norm considered in Exercise 2.18 in order to prove that this ring is not a Euclidean domain turns out to be a Dedekind-Hasse valuation \( {}^{9} \) . Thus, this ring gives an example of a PID that is not a Euclidean domain. 
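Division with remainder in \( k\left\lbrack x\right\rbrack \), cited above as a model example, can be made concrete for \( k = \mathbb{Q} \). A sketch of mine (not the book's; polynomials are coefficient lists in increasing degree, and the valuation is the degree):

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Division with remainder in Q[x].  [c0, c1, ...] is c0 + c1*x + ...
    Returns (q, r) with a = q*b + r and r = 0 or deg r < deg b."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b and b[-1] == 0:
        b.pop()
    if not b:
        raise ZeroDivisionError("division by the zero polynomial")
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 0)
    r = a
    while r and r[-1] == 0:
        r.pop()
    while len(r) >= len(b):
        c = r[-1] / b[-1]      # kill the leading term of r
        k = len(r) - len(b)
        q[k] = c
        for i, bc in enumerate(b):
            r[i + k] -= c * bc
        while r and r[-1] == 0:
            r.pop()
    while q and q[-1] == 0:
        q.pop()
    return q, r

# x^2 + 1 = (x + 1)(x - 1) + 2 in Q[x]
q, r = poly_divmod([1, 0, 1], [-1, 1])
assert q == [1, 1] and r == [2]
```

Each pass through the loop strictly decreases the degree of \( r \), which is exactly the descent that the Euclidean property requires.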
\( {}^{9} \) This boils down to a case-by-case analysis, which I am happily leaving to my most patient readers. One excellent feature of Euclidean domains, and the one giving them their names, is the presence of an effective algorithm computing greatest common divisors: the Euclidean algorithm. As Euclidean domains are PIDs, and hence UFDs, we know that they do have greatest common divisors. However, the 'algorithm' obtained by distilling the proof of Lemma 2.3 is highly impractical: if we had to factor two integers \( a, b \) in order to compute their gcd, this would make it essentially impossible (with current technologies and factorization algorithms) for integers of a few hundred digits. The Euclidean algorithm bypasses the necessity of factorization: greatest common divisors of thousand-digit integers may be computed in a fraction of a second. The key lemma on which the algorithm is based is the following trivial general fact: Lemma 2.9. Let \( a = {bq} + r \) in a ring \( R \) . Then \( \left( {a, b}\right) = \left( {b, r}\right) \) . Proof. Indeed, \( r = a - {bq} \in \left( {a, b}\right) \), proving \( \left( {b, r}\right) \subseteq \left( {a, b}\right) \) ; and \( a = {bq} + r \in \left( {b, r}\right) \) , proving \( \left( {a, b}\right) \subseteq \left( {b, r}\right) \) . In particular, \[ \left( {\forall c \in R}\right) ,\;\left( {a, b}\right) \subseteq \left( c\right) \Leftrightarrow \left( {b, r}\right) \subseteq \left( c\right) ; \] that is, the set of common divisors of \( a, b \) and the set of common divisors of \( b, r \) coincide. Therefore, Corollary 2.10. Assume \( a = {bq} + r \) . Then \( a, b \) have a gcd if and only if \( b, r \) have a gcd, and in this case \( \gcd \left( {a, b}\right) = \gcd \left( {b, r}\right) \) . Of course ’ \( \gcd \left( {a, b}\right) = \gcd \left( {b, r}\right) \) ’ means that the two classes of associate elements coincide. 
These considerations hold over any integral domain; assume now that \( R \) is a Euclidean domain. Then we can use division with remainder to gain some control over the remainders \( r \) . Given two elements \( a, b \) in \( R \), with \( b \neq 0 \), we can apply division with remainder repeatedly: \[ a = b{q}_{1} + {r}_{1} \] \[ b = {r}_{1}{q}_{2} + {r}_{2} \] \[ {r}_{1} = {r}_{2}{q}_{3} + {r}_{3} \] \( \ldots \) as long as the remainder \( {r}_{i} \) is nonzero. Claim 2.11. This process terminates: that is, \( {r}_{N} = 0 \) for some \( N \) . Proof. Each line in the table is a division with remainder. If no \( {r}_{i} \) were zero, we would have an infinite decreasing sequence \[ v\left( b\right) > v\left( {r}_{1}\right) > v\left( {r}_{2}\right) > v\left( {r}_{3}\right) > \cdots \] of nonnegative integers, which is nonsense. Thus the table of divisions with remainders must be as follows: letting \( {r}_{0} = b \) , \[ a = {r}_{0}{q}_{1} + {r}_{1} \] \[ {r}_{0} = {r}_{1}{q}_{2} + {r}_{2} \] \[ {r}_{1} = {r}_{2}{q}_{3} + {r}_{3} \] \[ \text{...} \] \[ {r}_{N - 3} = {r}_{N - 2}{q}_{N - 1} + {r}_{N - 1} \] \[ {r}_{N - 2} = {r}_{N - 1}{q}_{N} \] with \( {r}_{N - 1} \neq 0 \) . Proposition 2.12. With notation as above, \( {r}_{N - 1} \) is a gcd of \( a, b \) . Proof. By Corollary 2.10, \[ \gcd \left( {a, b}\right) = \gcd \left( {b,{r}_{1}}\right) = \gcd \left( {{r}_{1},{r}_{2}}\right) = \cdots = \gcd \left( {{r}_{N - 2},{r}_{N - 1}}\right) . \] But \( {r}_{N - 2} = {r}_{N - 1}{q}_{N} \) gives \( {r}_{N - 2} \in \left( {r}_{N - 1}\right) \) ; hence \( \left( {{r}_{N - 2},{r}_{N - 1}}\right) = \left( {r}_{N - 1}\right) \) . Therefore \( {r}_{N - 1} \) is a gcd for \( {r}_{N - 2} \) and \( {r}_{N - 1} \), hence for \( a \) and \( b \), as needed. The ring of integers and the polynomial ring over a field are both Euclidean domains.
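In \( \mathbb{Z} \) (with \( v = \) absolute value) the table above becomes a few lines of code. A minimal sketch, with names of our own choosing, that records the remainders \( {r}_{0},{r}_{1},\ldots ,{r}_{N} \) and returns \( {r}_{N - 1} \):

```python
def euclid(a, b):
    """Euclidean algorithm over Z (valuation = absolute value).
    Returns (g, remainders) where g = r_{N-1} is a gcd of a and b,
    and remainders = [r_0, r_1, ..., r_N] with r_0 = b and r_N = 0."""
    remainders = [b]              # r_0 = b
    while b != 0:
        a, b = b, a % b           # gcd(a, b) = gcd(b, r), by Corollary 2.10
        remainders.append(b)
    return a, remainders

# 1071 = 462*2 + 147, 462 = 147*3 + 21, 147 = 21*7: the gcd is 21
g, rs = euclid(1071, 462)
```

Here `g == 21` and `rs == [462, 147, 21, 0]`; no factorization of either input is ever needed.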
Fields are Euclidean domains (as represented in the picture at the beginning of the chapter), but not for a very interesting reason: the remainder of the division by a nonzero element in a field is always zero, so every function qualifies as a 'Euclidean valuation' for trivial reasons. We will study another interesting Euclidean domain later in this chapter (§6.2). ## Exercises ## 2.1. \( \vartriangleright \) Prove Lemma 2.1 [92.1 2.2. Let \( R \) be a UFD, and let \( a, b, c \) be elements of \( R \) such that \( a \mid {bc} \) and \( \gcd \left( {a, b}\right) = \) 1. Prove that \( a \) divides \( c \) . 2.3. Let \( n \) be a positive integer. Prove that there is a one-to-one correspondence preserving multiplicities between the irreducible factors of \( n \) (as an integer) and the composition factors of \( \mathbb{Z}/n\mathbb{Z} \) (as a group). (In fact, the Jordan-Hölder theorem may be used to prove that \( \mathbb{Z} \) is a UFD.) 2.4. \( \vartriangleright \) Consider the elements \( x, y \) in \( \mathbb{Z}\left\lbrack {x, y}\right\rbrack \) . Prove that 1 is a gcd of \( x \) and \( y \), and yet 1 is not a linear combination of \( x \) and \( y \) . (Cf. Exercise 112.13) [42.1,32.3] 2.5. \( \vartriangleright \) Let \( R \) be the subring of \( \mathbb{Z}\left\lbrack t\right\rbrack \) consisting of polynomials with no term of degree 1: \( {a}_{0} + {a}_{2}{t}^{2} + \cdots + {a}_{d}{t}^{d} \) . - Prove that \( R \) is indeed a subring of \( \mathbb{Z}\left\lbrack t\right\rbrack \), and conclude that \( R \) is an integral domain. - List all common divisors of \( {t}^{5} \) and \( {t}^{6} \) in \( R \) . - Prove that \( {t}^{5} \) and \( {t}^{6} \) have no gcd in \( R \) . ## [82.1] 2.6. Let \( R \) be a domain with the property that the intersection of any family of principal ideals in \( R \) is necessarily a principal ideal. - Show that greatest common divisors exist in \( R \) . - Show that UFDs satisfy this property. 2.7. 
\( \vartriangleright \) Let \( R \) be a Noetherian domain, and assume that for all nonzero \( a, b \) in \( R \) , the greatest common divisors of \( a \) and \( b \) are linear combinations of \( a \) and \( b \) . Prove that \( R \) is a PID. [2.3] 2.8. Let \( R \) be a UFD, and let \
1009_(GTM175)An Introduction to Knot Theory
Definition 3.4
Definition 3.4. The writhe \( w\left( D\right) \) of a diagram \( D \) of an oriented link is the sum of the signs of the crossings of \( D \), where each crossing has sign +1 or -1 as defined (by convention) in Figure 1.11. Note that this definition of \( w\left( D\right) \) uses the orientation of the plane and that of the link. Note, too, that \( w\left( D\right) \) does not change if \( D \) is changed under a Type II or Type III Reidemeister move. However, \( w\left( D\right) \) does change by +1 or -1 if \( D \) is changed by a Type I Reidemeister move. It is thought that nineteenth-century knot tabulators believed that the writhe of a diagram was a knot invariant, at least when no reduction in the number of crossings by a Type I move was possible in a diagram. That led to the famous error of the inclusion, in the early knot tables, of both a knot and its reflection, listed as \( {10}_{161} \) and \( {10}_{162} \) (an error detected by K. Perko in the 1970s). See Figure 3.1. The writhes of the diagrams are -8 and 10, respectively; yet, modulo reflection, these diagrams represent the same knot. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_35_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_35_0.jpg) Figure 3.1 The writhe of an oriented link diagram and the bracket polynomial of the diagram with orientation neglected are, then, both invariant under Reidemeister moves of Types II and III, and both behave in a predictable way under Type I moves. This leads to the following result, which is essentially a statement of the existence of the Jones invariant. Theorem 3.5. Let \( D \) be a diagram of an oriented link \( L \) . Then the expression \[ {\left( -A\right) }^{-{3w}\left( D\right) }\langle D\rangle \] is an invariant of the oriented link \( L \) . Proof.
It follows from Lemma 3.3 that the given expression is unchanged by Reidemeister moves of Types II and III; Lemma 3.2 and the above remarks on \( w\left( D\right) \) show it is unchanged by a Type I move. As any two diagrams of two equivalent links are related by a sequence of such moves, the result follows at once. Definition 3.6. The Jones polynomial \( V\left( L\right) \) of an oriented link \( L \) is the Laurent polynomial in \( {t}^{1/2} \), with integer coefficients, defined by \[ V\left( L\right) = {\left( {\left( -A\right) }^{-{3w}\left( D\right) }\langle D\rangle \right) }_{{t}^{1/2} = {A}^{-2}} \in \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack , \] where \( D \) is any oriented diagram for \( L \) . Here \( {t}^{1/2} \) is just an indeterminate the square of which is \( t \) . In fact, links with an odd number of components, including knots, have polynomials consisting of only integer powers of \( t \) . It is easy to show, by induction on the number of crossings in a diagram, that the given expression does indeed belong to \( \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \) . Note that by Theorem 3.5, the Jones polynomial invariant is well defined and that \( V \) (unknot) \( = 1 \) . At the time of writing, it is unknown whether there is a nontrivial knot \( K \) with \( V\left( K\right) = 1 \) ; finding such a \( K \), or proving that none exists, is thought to be an important problem. The following table gives the Jones polynomial of knots with diagrams of at most eight crossings. It does not take very long to calculate such a table directly from the definition. It is clear that if the orientation of every component of a link is changed, then the sign of each crossing does not change. Thus the Jones polynomial of a knot does not depend upon the orientation chosen for the knot.
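As a sanity check of Definition 3.6, the substitution can be carried out symbolically. Here we take \( \langle D\rangle = {A}^{-7} - {A}^{-3} - {A}^{5} \) for a writhe-3 diagram of the right-handed trefoil; this value is an assumption of the sketch (it is the one forced by the trefoil computation quoted later in the chapter), not something computed here:

```python
import sympy as sp

A, t = sp.symbols('A t')

bracket = A**-7 - A**-3 - A**5   # assumed <D> for a writhe-3 trefoil diagram
w = 3                            # writhe of that diagram

# form (-A)^{-3w} <D>, then substitute t^{1/2} = A^{-2}, i.e. A = t^{-1/4}
jones_in_A = sp.expand((-A)**(-3 * w) * bracket)
jones = sp.expand(jones_in_A.subs(A, t**sp.Rational(-1, 4)))
# jones is -t**4 + t**3 + t, a Laurent polynomial in integer powers of t
```

Note that the half-integer powers cancel, as they must for a knot.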
It is easy to check that if the oriented link \( {L}^{ * } \) is obtained from the oriented link \( L \) by reversing the orientation of one component \( K \), then \( V\left( {L}^{ * }\right) = {t}^{-3\operatorname{lk}\left( {K, L - K}\right) }V\left( L\right) \) . Thus the Jones polynomial depends on orientations in a very elementary way. Displayed in Table 3.1 are the coefficients of the Jones polynomials of the knots shown in Chapter 1. A bold entry in the table is a coefficient of \( {t}^{0} \) . For example, \[ V\left( {6}_{1}\right) = {t}^{-4} - {t}^{-3} + {t}^{-2} - 2{t}^{-1} + 2 - t + {t}^{2}. \]

TABLE 3.1. Jones Polynomial Table (coefficients of the Jones polynomials of knots with diagrams of at most eight crossings; the table itself is not recoverable from this copy).

The bracket polynomial of a diagram can be regarded as an invariant of framed unoriented links. For the moment, regard a framed link as a link \( L \) with an integer assigned to each component. Let \( D \) be a diagram for \( L \) with the property that for each component \( K \) of \( L \), the part of \( D \) corresponding to \( K \) has as its writhe the integer assigned to \( K \) . Then \( \langle D\rangle \) is an invariant of the framed link. Note that any diagram for \( L \) can be adjusted by moves of Type I (or its reflection) to achieve any given framing. The Jones polynomial is characterised by the following proposition, which follows easily from the above definition (though historically it preceded that definition). Proposition 3.7.
The Jones polynomial invariant is a function \[ V : \left\{ {\text{ Oriented links in }{S}^{3}}\right\} \rightarrow \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \] such that (i) \( V \) (unknot) \( = 1 \) , (ii) whenever three oriented links \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are the same, except in the neighbourhood of a point where they are as shown in Figure 3.2, then \[ {t}^{-1}V\left( {L}_{ + }\right) - {tV}\left( {L}_{ - }\right) + \left( {{t}^{-1/2} - {t}^{1/2}}\right) V\left( {L}_{0}\right) = 0. \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_38_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_38_0.jpg) Figure 3.2 Proof. Let \( {L}_{\infty } \) denote the diagram obtained by the other smoothing of the crossing, so that the defining relations of the bracket give \[ \left\langle {L}_{ + }\right\rangle = A\left\langle {L}_{0}\right\rangle + {A}^{-1}\left\langle {L}_{\infty }\right\rangle \] \[ \left\langle {L}_{ - }\right\rangle = {A}^{-1}\left\langle {L}_{0}\right\rangle + A\left\langle {L}_{\infty }\right\rangle . \] Multiplying the first equation by \( A \), the second by \( {A}^{-1} \), and subtracting gives \[ A\left\langle {L}_{ + }\right\rangle - {A}^{-1}\left\langle {L}_{ - }\right\rangle = \left( {{A}^{2} - {A}^{-2}}\right) \left\langle {L}_{0}\right\rangle . \] Thus, for the oriented links with diagrams as shown, using the fact that in those diagrams \( w\left( {L}_{ + }\right) - 1 = w\left( {L}_{0}\right) = w\left( {L}_{ - }\right) + 1 \), it follows that \[ - {A}^{4}V\left( {L}_{ + }\right) + {A}^{-4}V\left( {L}_{ - }\right) = \left( {{A}^{2} - {A}^{-2}}\right) V\left( {L}_{0}\right) . \] The substitution \( {t}^{1/2} = {A}^{-2} \) gives the required answer. Working from Proposition 3.7, a straightforward exercise shows that if \( {L}^{\prime } \) is \( L \) together with an additional trivial (unknotted, unlinked) component, then its Jones polynomial is given by \( V\left( {L}^{\prime }\right) = \left( {-{t}^{-1/2} - {t}^{1/2}}\right) V\left( L\right) \) . Proposition 3.7 characterises the invariant in that using it allows the Jones polynomial of any oriented link to be calculated.
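That calculation scheme is easy to run symbolically. Working with \( s = {t}^{1/2} \), solving (ii) for \( V\left( {L}_{ + }\right) \) gives \( V\left( {L}_{ + }\right) = {t}^{2}V\left( {L}_{ - }\right) + \left( {{t}^{3/2} - {t}^{1/2}}\right) V\left( {L}_{0}\right) \) . A sketch (function names are ours) computing the positive Hopf link and the right-handed trefoil from the unknot:

```python
import sympy as sp

s = sp.symbols('s')   # s stands for t^(1/2)
t = s**2

def positive_crossing(V_minus, V_zero):
    """Relation (ii) solved for V(L+):
    V(L+) = t^2 V(L-) + (t^{3/2} - t^{1/2}) V(L0)."""
    return sp.expand(t**2 * V_minus + (s**3 - s) * V_zero)

V_unknot = sp.Integer(1)
V_unlink2 = -1/s - s      # two-component unlink, from the remark above

# positive Hopf link: a crossing change gives the 2-unlink, the smoothing the unknot
V_hopf = positive_crossing(V_unlink2, V_unknot)     # -s**5 - s

# right trefoil: a crossing change gives the unknot, the smoothing the Hopf link
V_trefoil = positive_crossing(V_unknot, V_hopf)     # -s**8 + s**6 + s**2
```

In terms of \( t \) this gives \( V\left( \text{trefoil}\right) = - {t}^{4} + {t}^{3} + t \), agreeing with the value obtained from the bracket polynomial for the right-handed trefoil.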
This follows from the fact that any link can be changed to an unlink of \( c \) unknots (for which the Jones polynomial is \( {\left( -{t}^{-1/2} - {t}^{1/2}\right) }^{c - 1} \) ) by changing crossings in some diagram; formula (ii) of Proposition 3.7 relates the polynomials before and after such a change to that of a link diagram with fewer crossings (which has a known polynomial by induction). The Jones polynomial of the sum of two knots is just the product of their Jones polynomials, that is, \[ V\left( {{K}_{1} + {K}_{2}}\right) = V\left( {K}_{1}\right) V\left( {K}_{2}\right) . \] This follows at once by considering a calculation of the polynomial of \( {K}_{1} + {K}_{2} \) and operating firstly on the crossings of just one summand. The same formula is true for links, but the sum of two links is not well defined; the result depends on which two components are fused together in the summing operation. That fact can easily be used, in a straightforward exercise, to produce two distinct links with the same Jones polynomial. If an oriented link has a diagram \( D \), its reflection has \( \bar{D} \) as a diagram; of course, \( w\left( D\right) = - w\left( \bar{D}\right) \) . As \( \langle \bar{D}\rangle = \overline{\langle D\rangle } \), this means that if \( \bar{L} \) is the reflection of the oriented link \( L \), then \( V\left( \bar{L}\right) \) is obtained from \( V\left( L\right) \) by interchanging \( {t}^{-1/2} \) and \( {t}^{1/2} \) . The bracket polynomial of a diagram, of writhe equal to 3 , for the right-handed trefoil knot \( {3}_{1} \) has already been calculated, and that at once determines that \( -{t}^{4} + {t}^{3} + t \) is the Jones polynomial of the right-hand trefoil knot. Thus its reflection, the left-hand trefoil knot, has Jones polynomial \( -{t}^{-4} + {t}^{-3} + {t}^{-1} \).
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 6.2.25
Definition 6.2.25. Let \( \left\{ {\bar{\varphi }}_{y}\right\} \) be an orientation of int \( \left( M\right) \), i.e., a compatible system of local orientations. The induced orientation of \( \partial M \) is the compatible system of local orientations \( \left\{ {\bar{\varphi }}_{x}\right\} \) obtained as follows: \( {\bar{\varphi }}_{x} \) is the composition of the isomorphisms, and inverses of isomorphisms, in the chain \[ \mathbb{Z}\xrightarrow[]{{\bar{\varphi }}_{y}}{H}_{n}\left( {M, M - y}\right) \longleftarrow {H}_{n}\left( {{\varphi }_{\alpha }\left( B\right) ,{\varphi }_{\alpha }\left( B\right) - y}\right) \xleftarrow[]{{\left( {\varphi }_{\alpha }\right) }_{ * }}{H}_{n}\left( {B, B - q}\right) \] \[ \xrightarrow[]{\partial }{H}_{n - 1}\left( {C, C - p}\right) \xrightarrow[]{{\left( {\varphi }_{\alpha }\right) }_{ * }}{H}_{n - 1}\left( {{\varphi }_{\alpha }\left( C\right) ,{\varphi }_{\alpha }\left( C\right) - x}\right) \longrightarrow {H}_{n - 1}\left( {\partial M,\partial M - x}\right) \] Here \( {\varphi }_{\alpha } : {\mathbb{R}}_{ + }^{n} \rightarrow {U}_{\alpha } \subseteq M \) is a coordinate patch with \( {\varphi }_{\alpha }\left( p\right) = x \) and \( {\varphi }_{\alpha }\left( q\right) = y \) . The maps labelled \( {\left( {\varphi }_{\alpha }\right) }_{ * } \) are both restrictions of \( {\varphi }_{\alpha } \) to the respective domains, the unlabelled maps are excision isomorphisms, and \( \partial \) is the boundary isomorphism connecting the two relative homology groups. Theorem 6.2.26. Let \( M \) be an oriented manifold with boundary. Then \( \partial M \) has a well-defined induced orientation given by the construction in Definition 6.2.25. In particular, \( \partial M \) is orientable. Proof. This is simply a matter of checking that the local orientations \( \left\{ {\bar{\varphi }}_{x}\right\} \) are indeed compatible, and that they are independent of the choice of coordinate patches \( \left( {{U}_{\alpha },{\varphi }_{\alpha }}\right) \) used in the construction. Remark 6.2.27.
If \( M \) is not orientable, then \( \partial M \) may or may not be orientable (i.e., both possibilities may arise). In practice, we often also want to consider (co)homology with coefficients in a field. In this regard we have the following result. Lemma 6.2.28. Let \( G = \mathbb{F} \) be a field of characteristic 0 or odd characteristic. Then a manifold \( M \) is \( G \) -orientable if and only if it is orientable. If \( G = \mathbb{F} \) is a field of characteristic 2, then every manifold \( M \) is \( G \) -orientable. Proof. We do the more interesting case of a field \( \mathbb{F} \) of characteristic \( \neq 2 \) . Consider the diagram in Definition 6.2.3. If we let \( V = {\varphi }_{x}\left( D\right) \) and replace \( G \) in that diagram by \( {H}_{n}\left( {M, M - V;G}\right) \) and the two vertical maps by the isomorphisms induced by inclusions, we obtain a commutative diagram. This is true whether we use \( \mathbb{F} \) coefficients or \( \mathbb{Z} \) coefficients. But we also have the commutative diagram with the horizontal maps induced by the map \( \mathbb{Z} \rightarrow \mathbb{F} \) of coefficients ![21ef530b-1e09-406a-b041-cf4539af5c14_114_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_114_0.jpg) Now if \( M \) is \( \mathbb{Z} \) -orientable it has a compatible collection of local \( \mathbb{Z} \) -orientations \( \left\{ {\bar{\varphi }}_{x}\right\} \) and then \( \left\{ {{\bar{\varphi }}_{x} \otimes 1}\right\} \) is a compatible collection of local \( \mathbb{F} \) -orientations. On the other hand, suppose we have a compatible collection \( \left\{ {\bar{\psi }}_{x}\right\} \) of local \( \mathbb{F} \) - orientations. Fix a point \( x \in M \) . Then \( {\bar{\psi }}_{x}\left( 1\right) \in {H}_{n}\left( {M, M - x;\mathbb{F}}\right) \) is a generator, i.e., a nonzero element. This element may not be in the image of \( {H}_{n}\left( {M, M - x;\mathbb{Z}}\right) \) . 
But there is a nonzero element \( f \) of \( \mathbb{F} \) (in fact, exactly two such) such that \( f{\bar{\psi }}_{x}\left( 1\right) \) is the image in \( {H}_{n}\left( {M, M - x;\mathbb{F}}\right) \) of a generator of \( {H}_{n}\left( {M, M - x;\mathbb{Z}}\right) \) . By the commutativity of the above diagram that implies the same is true for \( f{\bar{\psi }}_{y}\left( 1\right) \) for every \( y \in M \) . Hence \( \left\{ {f{\bar{\psi }}_{x}}\right\} \) is a compatible system of local \( \mathbb{Z} \) -orientations of \( M \), where \( x \) varies over \( M \) . The proof of this lemma also shows how to obtain \( \mathbb{F} \) -orientations. Definition 6.2.29. Let \( M \) be orientable and let \( \mathbb{F} \) be an arbitrary field. An \( \mathbb{F} \) -orientation of \( M \) is a compatible system of local \( \mathbb{F} \) -orientations of the form \( \left\{ {{\bar{\varphi }}_{x} \otimes 1}\right\} \) where \( \left\{ {\bar{\varphi }}_{x}\right\} \) is a compatible system of local \( \mathbb{Z} \) -orientations of \( M \) . Let \( M \) be arbitrary and let \( \mathbb{F} \) be a field of characteristic 2. An \( \mathbb{F} \) -orientation of \( M \) is a compatible system of local \( \mathbb{F} \) -orientations of the form \( \left\{ {{\bar{\varphi }}_{x} \otimes 1}\right\} \) where \( \left\{ {\bar{\varphi }}_{x}\right\} \) is a compatible system of local \( \mathbb{Z}/2\mathbb{Z} \) -orientations of \( M \) . (In the characteristic 2 case compatibility is automatic.) It is easy to check that the two parts of this definition agree when \( M \) is orientable and \( \mathbb{F} \) has characteristic 2. Orientability has very important homological implications, given by the following theorem. We state this theorem for manifolds with boundary, which includes the case of manifolds by taking \( \partial M = \varnothing \) . The hypothesis that \( M \) be connected is not essentially restrictive, as otherwise we could consider each component of \( M \) separately. 
Theorem 6.2.30. Let \( M \) be a compact connected \( n \) -manifold with boundary. Let \( G = \mathbb{Z}/2\mathbb{Z} \) or \( \mathbb{Z} \) . If \( M \) is \( G \) -oriented, suppose that \( \left\{ {\bar{\varphi }}_{x}\right\} \) is a compatible system of local \( G \) -orientations giving the \( G \) -orientation of \( M \) . In this case, there is a unique homology class \( \left\lbrack {M,\partial M}\right\rbrack \in {H}_{n}\left( {M,\partial M;G}\right) \) with \( {i}_{ * }\left( \left\lbrack {M,\partial M}\right\rbrack \right) = {\bar{\varphi }}_{x}\left( 1\right) \in {H}_{n}\left( {M, M - x;G}\right) \) for every \( x \in M \), where \( i : \left( {M,\partial M}\right) \rightarrow \left( {M, M - x}\right) \) is the inclusion of pairs. Furthermore, \( \left\lbrack {M,\partial M}\right\rbrack \) is a generator of \( {H}_{n}\left( {M,\partial M;G}\right) \) . If \( M \) is not \( G \) -orientable, then \( {H}_{n}\left( {M,\partial M;G}\right) = 0 \) . Since this theorem is so important, we will explicitly state one of its immediate consequences. Corollary 6.2.31. Let \( M \) be a compact connected \( n \) -manifold with boundary. (1) For any such \( M,{H}_{n}\left( {M,\partial M;\mathbb{Z}/2\mathbb{Z}}\right) \cong \mathbb{Z}/2\mathbb{Z} \) and \( {H}^{n}\left( {M,\partial M;\mathbb{Z}/2\mathbb{Z}}\right) \cong \mathbb{Z}/2\mathbb{Z} \) . (2) If \( M \) is orientable, then \( {H}_{n}\left( {M,\partial M;\mathbb{Z}}\right) \cong \mathbb{Z} \) and \( {H}^{n}\left( {M,\partial M;\mathbb{Z}}\right) \cong \mathbb{Z} \) . If \( M \) is not orientable, then \( {H}_{n}\left( {M,\partial M;\mathbb{Z}}\right) = 0 \) and \( {H}^{n}\left( {M,\partial M;\mathbb{Z}}\right) = 0 \) . Proof. The statements on homology are a direct consequence of Theorems 6.2.6 and 6.2.30. The statements for cohomology then follow from the universal coefficient theorem and Theorem 6.1.12. Example 6.2.32. We computed the homology of \( \mathbb{R}{P}^{n} \) in Theorem 4.3.4.
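The cited computation is easy to reproduce by machine: \( \mathbb{R}{P}^{n} \) has a CW structure with one cell in each dimension \( 0,\ldots, n \), and the cellular boundary map in degree \( k \) is multiplication by \( 1 + {\left( -1\right) }^{k} \) . A sketch in our own notation:

```python
def homology_RPn(n):
    """H_k(RP^n; Z) for k = 0..n, from the cellular chain complex
    Z <- Z <- ... <- Z whose boundary maps are alternately 0 and *2."""
    def d(k):                 # boundary map out of degree k, as an integer
        return 1 + (-1) ** k if 1 <= k <= n else 0
    groups = []
    for k in range(n + 1):
        if d(k) != 0:
            groups.append('0')        # kernel of d_k is trivial
        elif d(k + 1) == 0:
            groups.append('Z')        # kernel Z, image 0
        else:
            groups.append('Z/2')      # kernel Z, image 2Z
    return groups
```

The top group `homology_RPn(n)[-1]` is \( \mathbb{Z} \) exactly when \( n \) is odd, in line with the orientability conclusion of Example 6.2.32.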
Combining that result with Corollary 6.2.31, we see that \( \mathbb{R}{P}^{n} \) is orientable for \( n \) odd and nonorientable for \( n \) even. Definition 6.2.33. Let \( M \) be a compact connected \( G \) -oriented \( n \) -manifold, \( G = \mathbb{Z}/2\mathbb{Z} \) or \( \mathbb{Z} \) . The homology class \( \left\lbrack {M,\partial M}\right\rbrack \in {H}_{n}\left( {M,\partial M;G}\right) \) as in Theorem 6.2.30 is called the fundamental homology class (or simply fundamental class) of \( \left( {M,\partial M}\right) \) . Its dual \( \{ M,\partial M\} \) in \( {H}^{n}\left( {M,\partial M;G}\right) \), i.e., the cohomology class with \( e\left( {\{ M,\partial M\} ,\left\lbrack {M,\partial M}\right\rbrack }\right) = 1 \), is the fundamental cohomology class of \( \left( {M,\partial M}\right) \) . If \( M \) is oriented and \( G \) is any coefficient group, the image of \( \left\lbrack {M,\partial M}\right\rbrack \) in \( {H}_{n}\left( {M,\partial M;G}\right) \) under the coefficient map \( \mathbb{Z} \rightarrow G \) is also called a fundamental homology class, and similarly the image of \( \{ M,\partial M\} \) in \( {H}^{n}\left( {M,\partial M;G}\right) \) under the same coefficient map is also called a fundamental cohomology class. Remark 6.2.34. We need to be careful about the reference in Definition 6.2.33 to the dual of a homology class. In general, if \( V \) is a free abelian group (or a vector space), it does not make sense to speak of the dual of an element \( v \) of \( V \) . But it does make sense here. Suppose that \( M \) is connected. Then \( {H}_{n}\left( {M;G}\right) \) is free of rank 1 (in case \( M \) is orientable and \( G = \mathbb{Z} \) ) or is a 1-dimensional vector space over \( \mathbb{Z}/2\mathbb{Z} \) (for \( M \) arbitrary and \( G = \mathbb{Z}/2\mathbb{Z} \) ) and we have the pairing \( e : {H}^{n}\left( {M;G}\right) \otimes {H}_{n}\left( {M;G}\right) \rightarrow G \) .
In this situation, given a generator \( v \) of \( {H}_{n}\left( {M;G}\right) \), there is a unique element (also a generator) \( {v}^{ * } \) in \( {H}^{n}\left( {M;G}\right) \) with \( e\left( {{v}^{ * }, v}\right) = 1 \), and \( {v}^{ * } \) is w
1088_(GTM245)Complex Analysis
Definition 9.10
Definition 9.10. A harmonic conjugate of a real-valued harmonic function \( u \) is any real-valued function \( v \) such that \( u + \imath v \) is holomorphic. Harmonic conjugates always exist locally, and globally on simply connected domains. They are unique up to additive real constants. In fact, it is easy to see that they are given locally as follows. Proposition 9.11. If \( g \) is harmonic and real-valued in \( \left| z\right| < \rho \) for some \( \rho > 0 \), then the harmonic conjugate of \( g \) vanishing at the origin is given by \[ \frac{1}{{2\pi }\imath }{\int }_{0}^{2\pi }g\left( {r{\mathrm{e}}^{\iota \theta }}\right) \cdot \frac{r{\mathrm{e}}^{-{\iota \theta }}z - r{\mathrm{e}}^{\iota \theta }\bar{z}}{{\left| r{\mathrm{e}}^{\iota \theta } - z\right| }^{2}}\mathrm{\;d}\theta ,\;\text{ for }\left| z\right| < r < \rho . \] The following result is interesting and useful. Theorem 9.12 (Harnack’s Inequalities). If \( g \) is a positive harmonic function on \( \left| z\right| < r \) that is continuous on \( \left| z\right| \leq r \), then \[ \frac{r - \left| z\right| }{r + \left| z\right| } \cdot g\left( 0\right) \leq g\left( z\right) \leq \frac{r + \left| z\right| }{r - \left| z\right| } \cdot g\left( 0\right) ,\text{ for all }\left| z\right| < r. \] Proof. Our starting point is (9.3). We use elementary estimates for the Poisson kernel: \[ \frac{r - \left| z\right| }{r + \left| z\right| } = \frac{{r}^{2} - {\left| z\right| }^{2}}{{\left( r + \left| z\right| \right) }^{2}} \leq \frac{{r}^{2} - {\left| z\right| }^{2}}{{\left| r{\mathrm{e}}^{\iota \theta } - z\right| }^{2}} \leq \frac{{r}^{2} - {\left| z\right| }^{2}}{{\left( r - \left| z\right| \right) }^{2}} = \frac{r + \left| z\right| }{r - \left| z\right| }.
\] Multiplying these inequalities by the positive number \( g\left( w\right) = g\left( {r{\mathrm{e}}^{\iota \theta }}\right) \) and then averaging the resulting function over the circle \( \left| w\right| = r \), we obtain \[ \frac{r - \left| z\right| }{r + \left| z\right| } \cdot \frac{1}{2\pi }{\int }_{0}^{2\pi }g\left( {r{\mathrm{e}}^{\iota \theta }}\right) \mathrm{d}\theta \leq \frac{1}{2\pi }{\int }_{0}^{2\pi }g\left( {r{\mathrm{e}}^{\iota \theta }}\right) \cdot \frac{{r}^{2} - {\left| z\right| }^{2}}{{\left| r{\mathrm{e}}^{\iota \theta } - z\right| }^{2}}\mathrm{\;d}\theta \] \[ \leq \frac{r + \left| z\right| }{r - \left| z\right| } \cdot \frac{1}{2\pi }{\int }_{0}^{2\pi }g\left( {r{\mathrm{e}}^{\iota \theta }}\right) \mathrm{d}\theta . \] The middle term in the above inequalities is \( g\left( z\right) \) as a consequence of (9.3), while the extreme averages are equal to \( g\left( 0\right) \) by the MVP. Remark 9.13. Exercise 9.6 gives a remarkable consequence of Harnack's inequalities that we use in establishing our next result. Theorem 9.14 (Harnack's Convergence Theorem). Let \( D \) be a domain and let \( \left\{ {u}_{j}\right\} \) be a nondecreasing sequence of real-valued harmonic functions on \( D \) . Then (a) either \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{u}_{j}\left( z\right) = + \infty \) for all \( z \in D \), or (b) the function on \( D \) defined by \( U\left( z\right) = \mathop{\lim }\limits_{{j \rightarrow \infty }}{u}_{j}\left( z\right) \) is harmonic in \( D \) . Proof. Since a nondecreasing sequence of real numbers converges if and only if it is bounded, the assumption that \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{u}_{j}\left( z\right) \) is not \( + \infty \) for all \( z \in D \) allows us to conclude that there exist \( {z}_{0} \) in \( D \) and a real number \( M \) such that \( {u}_{j}\left( {z}_{0}\right) < M \) for all \( j \) .
Then \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{u}_{j}\left( {z}_{0}\right) \) exists, and it equals the value of the series \[ {u}_{1}\left( {z}_{0}\right) + \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\lbrack {{u}_{n + 1}\left( {z}_{0}\right) - {u}_{n}\left( {z}_{0}\right) }\right\rbrack \] which is therefore convergent. Let \( K \) denote a compact subset of \( D \) . By enlarging \( K \) if necessary, we may assume that \( {z}_{0} \in K \) . It follows from Harnack’s inequalities (see Exercise 9.6) that there exists a real constant \( c \) such that \[ 0 \leq {u}_{n + 1}\left( z\right) - {u}_{n}\left( z\right) \leq c\left\lbrack {{u}_{n + 1}\left( {z}_{0}\right) - {u}_{n}\left( {z}_{0}\right) }\right\rbrack \] for all \( z \) in \( K \) and all \( n \) in \( \mathbb{N} \) . It follows immediately that the series \( {u}_{1}\left( z\right) + \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\lbrack {{u}_{n + 1}\left( z\right) - {u}_{n}\left( z\right) }\right\rbrack \) converges uniformly on \( K \) ; that is, \( {u}_{j} \) converges uniformly to a function \( U \) on compact subsets of \( D \) . It is now easy to show that \( U \) is harmonic in \( D \) . ## 9.3 The Dirichlet Problem Let \( D \) be a bounded region in \( \mathbb{C} \) and let \( f \in \mathbf{C}\left( {\partial D}\right) \) . The Dirichlet problem is to find a continuous function \( u \) defined on the closure of \( D \) that agrees with \( f \) on the boundary of \( D \) and whose restriction to \( D \) is harmonic. We will consider, for the moment, only the special case where \( D \) is a disc; without loss of generality we may assume that the disc has radius one and center at zero. 
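The disc case can be previewed numerically: the solution is obtained by integrating the boundary data against the Poisson kernel \( \left( {1 - {\left| z\right| }^{2}}\right) /{\left| {\mathrm{e}}^{\iota \theta } - z\right| }^{2} \), the operator studied in the rest of this section. A sketch using trapezoidal quadrature, with boundary data \( \cos \theta \) whose harmonic extension is \( \operatorname{Re}z \) ; the function name and discretization are our own:

```python
import cmath, math

def poisson_extend(f, z, n=4096):
    """Approximate (1/2pi) * INT_0^{2pi} f(e^{i th}) (1-|z|^2)/|e^{i th}-z|^2 dth
    by the trapezoidal rule (spectrally accurate for smooth periodic data)."""
    total = 0.0
    for k in range(n):
        w = cmath.exp(2j * math.pi * k / n)   # point on the unit circle
        total += f(w) * (1 - abs(z) ** 2) / abs(w - z) ** 2
    return total / n

# boundary data cos(theta); its harmonic extension to the disc is Re z
u = poisson_extend(lambda w: w.real, 0.3 + 0.4j)
# u is numerically Re(0.3 + 0.4j) = 0.3
```

Taking \( f \equiv 1 \) returns 1 at every interior point, which is property 5 of the operator \( P \) below.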
For a piecewise continuous function \( u \) on \( {S}^{1} \) and \( z \in \mathbb{C} \) with \( \left| z\right| < 1 \), we define (compare with (9.3)) \[ P\left\lbrack u\right\rbrack \left( z\right) = \frac{1}{2\pi }{\int }_{0}^{2\pi }u\left( {\mathrm{e}}^{\iota \theta }\right) \cdot \Re \left( \frac{{\mathrm{e}}^{\iota \theta } + z}{{\mathrm{e}}^{\iota \theta } - z}\right) \mathrm{d}\theta \] (9.7) or, equivalently, \[ P\left\lbrack u\right\rbrack \left( z\right) = \frac{1}{2\pi }{\int }_{0}^{2\pi }u\left( {\mathrm{e}}^{\iota \theta }\right) \cdot \frac{1 - {\left| z\right| }^{2}}{{\left| {\mathrm{e}}^{\iota \theta } - z\right| }^{2}}\mathrm{\;d}\theta . \] (9.8) ![a50267de-c956-4a7f-8c2e-850adafcee65_251_0.jpg](images/a50267de-c956-4a7f-8c2e-850adafcee65_251_0.jpg) Fig. 9.1 \( {u}_{1} \) and \( {u}_{2} \) . (a) The function \( {u}_{1} \) . (b) The function \( {u}_{2} \) . The following properties of the operator \( P \) are easily established: 1. \( P\left\lbrack u\right\rbrack \) is a well-defined function on the open unit disc. Hence we view \( P \) as an operator that assigns the function \( P\left\lbrack u\right\rbrack \) on the open unit disc to each piecewise continuous function \( u \) on the unit circle. 2. \( P\left\lbrack {u + v}\right\rbrack = P\left\lbrack u\right\rbrack + P\left\lbrack v\right\rbrack \) and \( P\left\lbrack {cu}\right\rbrack = c \cdot P\left\lbrack u\right\rbrack \) for all piecewise continuous functions \( u \) and \( v \) on \( {S}^{1} \) and every constant \( c \) (thus \( P \) is a linear operator). 3. If \( u \) is a real nonnegative piecewise continuous function on \( {S}^{1} \), then \( P\left\lbrack u\right\rbrack \) is a real-valued nonnegative function on the open unit disc. 4. \( P\left\lbrack u\right\rbrack \) is harmonic in the open unit disc. To establish this claim we may assume (by linearity of the operator \( P \) ) that \( u \) is real-valued.
In this case, \( P\left\lbrack u\right\rbrack \) is obviously the real part of an analytic function on the disc. 5. For all constants \( c \) , \[ P\left\lbrack c\right\rbrack = c, \] as follows from (9.6) (or directly because constant functions are harmonic). 6. Properties 5 and 3 imply that any bound on \( u \) yields the same bound on \( P\left\lbrack u\right\rbrack \) . For example, for a real-valued function \( u \) satisfying \( m \leq u \leq M \) for some real constants \( m \) and \( M \), we have \( m \leq P\left\lbrack u\right\rbrack \leq M \) . We now establish the solvability of the Dirichlet problem for discs. Theorem 9.15 (H. A. Schwarz). If \( u \) is a piecewise continuous function on the unit circle \( {S}^{1} \), then the function \( P\left\lbrack u\right\rbrack \) is harmonic on \( \{ \left| z\right| < 1\} \) ; furthermore, for \( {\theta }_{0} \in \mathbb{R} \), its limit as \( z \) approaches \( {\mathrm{e}}^{\iota {\theta }_{0}} \) is \( u\left( {\mathrm{e}}^{\iota {\theta }_{0}}\right) \) provided \( u \) is continuous at \( {\mathrm{e}}^{\iota {\theta }_{0}} \) . In particular, the Dirichlet problem is solvable for discs. Proof. We only have to study the boundary values for \( P\left\lbrack u\right\rbrack \) . Let \( {C}_{1} \) and \( {C}_{2} \) be complementary arcs on the unit circle. Let \( {u}_{1} \) be the function which coincides with \( u \) on \( {C}_{1} \) and vanishes on \( {C}_{2} \) ; let \( {u}_{2} \) be the corresponding function for \( {C}_{2} \) (see Fig. 9.1). Clearly \( P\left\lbrack u\right\rbrack = P\left\lbrack {u}_{1}\right\rbrack + P\left\lbrack {u}_{2}\right\rbrack \) . The function \( P\left\lbrack {u}_{1}\right\rbrack \) can be regarded as an integral over the arc \( {C}_{1} \) ; hence it is harmonic on \( \mathbb{C} - {C}_{1} \) .
The expression \[ \Re \left( \frac{{\mathrm{e}}^{\iota \theta } + z}{{\mathrm{e}}^{\iota \theta } - z}\right) = \frac{1 - {\left| z\right| }^{2}}{{\left| {\mathrm{e}}^{\iota \theta } - z\right| }^{2}} \] vanishes on \( \left| z\right| = 1 \) for \( z \neq {\mathrm{e}}^{\iota \theta } \) . It follows that \( P\left\lbrack {u}_{1}\right\rbrack \) is zero on the one-dimensional interior of the arc \( {C}_{2} \) . By continuity \( P\left\lbrack {u}_{1}\right\rbrack \left( z\right) \) approaches zero as \( z \) approaches a point in the interior of \( {C}_{2} \) . In proving that \( P\left\lbrack u\right\rbrack \) has limit \( u\left( {\mathrm{e}}^{\iota {\theta }_{0}}\right) \) at \( {\mathrm{e}}^{\iota {\theta }_{0}} \), we may assume that \( u\left( {\mathrm{e}}^{\iota {\theta }_{0}}\right) = 0 \) (if not replace \( u \) by \( u - u\left( {\mathrm{e}}^{\iota {\theta }_{0}}\right) \) ). Under this assumption, given an \( \epsilon > 0
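The Poisson integral (9.8) is easy to probe numerically. The sketch below is not from the text; it is an illustration using only the Python standard library, approximating \( P[u](z) \) by a uniform Riemann sum. It checks property 5 (that \( P[c] = c \)) and the boundary data \( u(\mathrm{e}^{\iota\theta}) = \cos\theta \), whose harmonic extension to the disc is \( \Re z \).

```python
import cmath
import math

def poisson(u, z, n=2000):
    """Approximate P[u](z) via (9.8) with a uniform Riemann sum on [0, 2*pi]."""
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        w = cmath.exp(1j * theta)                     # the boundary point e^{i theta}
        total += u(w) * (1 - abs(z) ** 2) / abs(w - z) ** 2
    return total / n

z = 0.3 + 0.4j
print(poisson(lambda w: 1.0, z))      # property 5: P[1] = 1, so ≈ 1.0
print(poisson(lambda w: w.real, z))   # boundary cos(theta) extends to Re z, so ≈ 0.3
```

Because the integrand is smooth and periodic for \( |z| < 1 \), the uniform sum converges very quickly, so a few thousand sample points already give many correct digits.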
1042_(GTM203)The Symmetric Group
Definition 4.4.1
Definition 4.4.1 Given a partition \( \lambda \), the associated Schur function is \[ {s}_{\lambda }\left( \mathbf{x}\right) = \mathop{\sum }\limits_{T}{\mathbf{x}}^{T} \] where the sum is over all semistandard \( \lambda \) -tableaux \( T \) . By way of illustration, if \( \lambda = \left( {2,1}\right) \), then some of the possible tableaux are \[ T : \begin{array}{ll} 1 & 1 \\ 2 & \end{array},\;\begin{array}{ll} 1 & 2 \\ 2 & \end{array},\;\begin{array}{ll} 1 & 1 \\ 3 & \end{array},\;\begin{array}{ll} 1 & 3 \\ 3 & \end{array},\;\begin{array}{ll} 1 & 2 \\ 3 & \end{array},\;\begin{array}{ll} 1 & 3 \\ 2 & \end{array},\ldots , \] so \[ {s}_{\left( 2,1\right) }\left( \mathbf{x}\right) = {x}_{1}^{2}{x}_{2} + {x}_{1}{x}_{2}^{2} + {x}_{1}^{2}{x}_{3} + {x}_{1}{x}_{3}^{2} + \cdots + 2{x}_{1}{x}_{2}{x}_{3} + 2{x}_{1}{x}_{2}{x}_{4} + \cdots . \] Note that if \( \lambda = \left( n\right) \), then a one-rowed tableau is just a weakly increasing sequence of \( n \) positive integers, i.e., a partition with \( n \) parts (written backward), so \[ {s}_{\left( n\right) }\left( \mathbf{x}\right) = {h}_{n}\left( \mathbf{x}\right) \] (4.9) If we have only one column, then the entries must increase from top to bottom, so the partition must have distinct parts and thus \[ {s}_{\left( {1}^{n}\right) } = {e}_{n}\left( \mathbf{x}\right) \] (4.10) Finally, if \( \lambda \vdash n \) is arbitrary, then \[ \left\lbrack {{x}_{1}{x}_{2}\cdots {x}_{n}}\right\rbrack {s}_{\lambda }\left( \mathbf{x}\right) = {f}^{\lambda } \] since pulling out this coefficient merely considers the standard tableaux. Before we can show that the \( {s}_{\lambda } \) are a basis for \( {\Lambda }^{n} \), we must verify that they are indeed symmetric functions. 
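Definition 4.4.1 can be checked by brute force for small shapes. The sketch below is my own illustration (names such as `schur_monomials` are ad hoc): it enumerates the semistandard tableaux of shape \( (2,1) \) with entries at most 3, tallies the resulting monomials by content, and recovers the coefficients in the expansion of \( s_{(2,1)} \) above.

```python
from itertools import product
from collections import Counter

def schur_monomials(shape, n):
    """Tally x^T over all semistandard tableaux of the given shape with
    entries in 1..n; keys are content vectors (m_1, ..., m_n)."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    coeffs = Counter()
    for fill in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, fill))
        rows_ok = all(T[r, c] <= T[r, c + 1] for r, c in cells if (r, c + 1) in T)
        cols_ok = all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            coeffs[tuple(fill.count(i) for i in range(1, n + 1))] += 1
    return coeffs

s21 = schur_monomials((2, 1), 3)
print(s21[2, 1, 0])   # coefficient of x1^2 x2 -> 1
print(s21[1, 1, 1])   # coefficient of x1 x2 x3 -> 2
```

The symmetry proved next is already visible here: permuting a content vector, say \( (2,1,0) \mapsto (0,1,2) \), does not change the coefficient.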
We give two proofs of this fact, one based on our results from representation theory and one combinatorial (the latter being due to Knuth [Knu 70]). Proposition 4.4.2 The function \( {s}_{\lambda }\left( \mathbf{x}\right) \) is symmetric. Proof 1. By definition of the Schur functions and Kostka numbers, \[ {s}_{\lambda } = \mathop{\sum }\limits_{\mu }{K}_{\lambda \mu }{\mathbf{x}}^{\mu } \] (4.11) where the sum is over all compositions \( \mu \) of \( n \) . Thus it is enough to show that \[ {K}_{\lambda \mu } = {K}_{\lambda \widetilde{\mu }} \] (4.12) for any rearrangement \( \widetilde{\mu } \) of \( \mu \) . But in this case \( {M}^{\mu } \) and \( {M}^{\widetilde{\mu }} \) are isomorphic modules. Thus they have the same decomposition into irreducibles, and (4.12) follows from Young's rule (Theorem 2.11.2). Proof 2. It suffices to show that \[ \left( {i, i + 1}\right) {s}_{\lambda }\left( \mathbf{x}\right) = {s}_{\lambda }\left( \mathbf{x}\right) \] for each adjacent transposition. To this end, we describe an involution on semistandard \( \lambda \) -tableaux \[ T \rightarrow {T}^{\prime } \] such that the numbers of \( i \) ’s and \( \left( {i + 1}\right) \) ’s are exchanged when passing from \( T \) to \( {T}^{\prime } \) (with all other multiplicities staying the same). Given \( T \), each column contains either an \( i, i + 1 \) pair; exactly one of \( i, i + 1 \) ; or neither. Call the pairs fixed and all other occurrences of \( i \) or \( i + 1 \) free. In each row switch the number of free \( i \) ’s and \( \left( {i + 1}\right) \) ’s; i.e., if the row consists of \( k \) free \( i \) ’s followed by \( l \) free \( \left( {i + 1}\right) \) ’s then replace them by \( l \) free \( i \) ’s followed by \( k \) free \( \left( {i + 1}\right) \) ’s. 
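The swap just described can be written out directly in code. The sketch below is my own illustration, not the book's: it marks an \( i \) as fixed when \( i+1 \) sits directly below it (and an \( i+1 \) as fixed when \( i \) sits directly above), then redistributes the free entries row by row.

```python
def bender_knuth(T, i):
    """Swap the multiplicities of i and i+1 in a semistandard tableau T
    (a list of rows), leaving all other multiplicities unchanged."""
    def entry(r, c):
        return T[r][c] if 0 <= r < len(T) and 0 <= c < len(T[r]) else None
    new = [row[:] for row in T]
    for r, row in enumerate(T):
        # columns in this row holding a free i or a free i+1
        free = [c for c, v in enumerate(row)
                if (v == i and entry(r + 1, c) != i + 1)
                or (v == i + 1 and entry(r - 1, c) != i)]
        k = sum(1 for c in free if row[c] == i)   # number of free i's
        # replace k free i's, l free (i+1)'s by l free i's, k free (i+1)'s
        for j, c in enumerate(free):
            new[r][c] = i if j < len(free) - k else i + 1
    return new

T = [[1, 2, 2], [3]]
print(bender_knuth(T, 2))                         # [[1, 3, 3], [2]]
print(bender_knuth(bender_knuth(T, 2), 2) == T)   # an involution: True
```

Within each row the free entries are necessarily consecutive (free \( i \)'s then free \( (i+1) \)'s), which is why the rewritten row is still weakly increasing and the result stays semistandard.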
To illustrate, if \( i = 2 \) and \[ T = \begin{array}{llllllllll} 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 3 \\ 2 & 2 & 3 & 3 & 3 & 3 & & & & \end{array}, \] then the twos and threes in columns 2 through 4 and 7 through 10 are free. So \[ {T}^{\prime } = \begin{array}{llllllllll} 1 & 1 & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 3 \\ 2 & 2 & 2 & 3 & 3 & 3 & & & & \end{array}. \] The new tableau \( {T}^{\prime } \) is still semistandard by the definition of free. Since the fixed \( i \) ’s and \( \left( {i + 1}\right) \) ’s come in pairs, this map has the desired exchange property. It is also clearly an involution. - Using the ideas in the proof of Theorem 4.3.7, part 1, the following result guarantees that the \( {s}_{\lambda } \) are a basis. Proposition 4.4.3 We have \[ {s}_{\lambda } = \mathop{\sum }\limits_{{\mu \leq \lambda }}{K}_{\lambda \mu }{m}_{\mu } \] where the sum is over partitions \( \mu \) (rather than compositions) and \( {K}_{\lambda \lambda } = 1 \) . Proof. By equation (4.11) and the symmetry of the Schur functions, we have \[ {s}_{\lambda } = \mathop{\sum }\limits_{\mu }{K}_{\lambda \mu }{m}_{\mu } \] where the sum is over all partitions \( \mu \) . We can prove that \[ {K}_{\lambda \mu } = \left\{ \begin{array}{ll} 0 & \text{ if }\lambda \ntrianglerighteq \mu \\ 1 & \text{ if }\lambda = \mu \end{array}\right. \] in two different ways. One is to appeal again to Young's rule and Corollary 2.4.7. The other is combinatorial. If \( {K}_{\lambda \mu } \neq 0 \), then consider a \( \lambda \) -tableau \( T \) of content \( \mu \) . Since \( T \) is column-strict, all occurrences of the numbers \( 1,2,\ldots, i \) are in rows 1 through \( i \) . This implies that for all \( i \) , \[ {\mu }_{1} + {\mu }_{2} + \cdots + {\mu }_{i} \leq {\lambda }_{1} + {\lambda }_{2} + \cdots + {\lambda }_{i} \] i.e., \( \mu \trianglelefteq \lambda \) . 
Furthermore, if \( \lambda = \mu \), then by the same reasoning there is only one tableau of shape and content \( \lambda \), namely, the one where row \( i \) contains all occurrences of \( i \) . (Some authors call this tableau superstandard.) - Corollary 4.4.4 The set \( \left\{ {{s}_{\lambda } : \lambda \vdash n}\right\} \) is a basis for \( {\Lambda }^{n} \) . ∎ ## 4.5 The Jacobi-Trudi Determinants The determinantal formula (Theorem 3.11.1) calculated the number of standard tableaux, \( {f}^{\lambda } \) . Analogously, the Jacobi-Trudi determinants provide another expression for \( {s}_{\lambda } \) in terms of elementary and complete symmetric functions. Jacobi [Jac 41] was the first to obtain this result, and his student Trudi [Tru 64] subsequently simplified it. We have already seen the special \( 1 \times 1 \) case of these determinants in equations (4.9) and (4.10). The general result is as follows. Any symmetric function with a negative subscript is defined to be zero. Theorem 4.5.1 (Jacobi-Trudi Determinants) Let \( \lambda = \left( {{\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{l}}\right) \) . We have \[ {s}_{\lambda } = \left| {h}_{{\lambda }_{i} - i + j}\right| \] and \[ {s}_{{\lambda }^{\prime }} = \left| {e}_{{\lambda }_{i} - i + j}\right| \] where \( {\lambda }^{\prime } \) is the conjugate of \( \lambda \) and both determinants are \( l \times l \) . Proof. We prove this theorem using a method of Lindström [Lin 73] that was independently discovered and exploited by Gessel [Ges um] and Gessel-Viennot [G-V 85, G-V ip]. (See also Karlin [Kar 88].) The crucial insight is that one can view both tableaux and determinants as lattice paths. Consider the plane \( \mathbb{Z} \times \mathbb{Z} \) of integer lattice points. We consider (possibly infinite) paths in this plane \[ p = {s}_{1},{s}_{2},{s}_{3},\ldots \] where each step \( {s}_{i} \) is of unit length northward \( \left( N\right) \) or eastward \( \left( E\right) \) . 
Such a path is shown in the figure accompanying the text (omitted here). Label the eastward steps of \( p \) using one of two labelings. The e-labeling assigns to each eastward \( {s}_{i} \) the label \[ L\left( {s}_{i}\right) = i \] The \( h \) -labeling gives \( {s}_{i} \) the label \[ \check{L}\left( {s}_{i}\right) = \left( \text{the number of northward } {s}_{j} \text{ preceding } {s}_{i}\right) + 1. \] Intuitively, in the \( h \) -labeling all the eastward steps on the line through the origin of \( p \) are labeled 1, all those on the line one unit above are labeled 2, and so on. Labeling our example path with each of the two possibilities yields a pair of diagrams, the \( e \)-labeling and the \( h \)-labeling (figures omitted here). It is convenient to extend \( \mathbb{Z} \times \mathbb{Z} \) by the addition of some points at infinity. Specifically, for each \( x \in \mathbb{Z} \), add a point \( \left( {x,\infty }\right) \) above every point on the vertical line with coordinate \( x \) . We assume that a path can reach \( \left( {x,\infty }\right) \) only by ending with an infinite number of consecutive northward steps along this line. If \( p \) starts at a vertex \( u \) and ends at a vertex \( v \) (which may be a point at infinity), then we write \( u\overset{p}{ \rightarrow }v \) . There are two weightings of paths corresponding to the two labelings. If \( p \) has only a finite number of eastward steps, define \[ {\mathbf{x}}^{p} = \mathop{\prod }\limits_{{{s}_{i} \in p}}{x}_{L\left( {s}_{i}\right) } \] and \[ {\check{\mathbf{x}}}^{p} = \mathop{\prod }\limits_{{{s}_{i} \in p}}{x}_{\check{L}\left( {s}_{i}\right) } \] where each product is taken over the eastward \( {s}_{i} \) in \( p \) . Note that \( {\mathbf{x}}^{p} \) is always square-free and \( {\check{\mathbf{x}}}^{p} \) can be any monomial. 
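Before following the path argument further, the determinants of Theorem 4.5.1 are themselves easy to test numerically. The sketch below is my own check with ad hoc helper names: it evaluates both Jacobi-Trudi determinants for \( \lambda = (2,1) \), which is self-conjugate, at the point \( (1,2,3) \), where the tableau definition gives \( s_{(2,1)}(1,2,3) = 60 \).

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def h(n, x):
    """Complete homogeneous symmetric polynomial h_n at the point x."""
    return 0.0 if n < 0 else float(sum(prod(c) for c in combinations_with_replacement(x, n)))

def e(n, x):
    """Elementary symmetric polynomial e_n at the point x."""
    return 0.0 if n < 0 else float(sum(prod(c) for c in combinations(x, n)))

x = (1.0, 2.0, 3.0)
# lambda = (2,1): |h_{lambda_i - i + j}| = h2*h1 - h3*h0
h_det = h(2, x) * h(1, x) - h(3, x) * h(0, x)
# lambda' = (2,1) as well (the shape is self-conjugate)
e_det = e(2, x) * e(1, x) - e(3, x) * e(0, x)
print(h_det, e_det)   # both 60.0 = s_{(2,1)}(1,2,3)
```

Here \( h_2 = 25 \), \( h_1 = 6 \), \( h_3 = 90 \) and \( e_2 = 11 \), \( e_1 = 6 \), \( e_3 = 6 \), so both determinants equal 60, as they must.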
So we have \[ {e}_{n}\left( \mathbf{x}\right) = \mathop{\sum }\limits_{p}{\mathbf{x}}^{p} \] and \[ {h}_{n}\left( \mathbf{x}\right) = \mathop{\sum }\limits_{p}{\check{\mathbf{x}}}^{p} \] where both sums are over all paths \( \left( {a, b}\right) \overset{p}{ \rightarrow }\left( {a + n,\infty }\right) \) for any fixed initial vertex \( \left( {a, b}\right) \) . Just as all paths between one pair of points describe a lone elementary or complete symmetric
1074_(GTM232)An Introduction to Number Theory
Definition 2.2
Definition 2.2. A commutative ring \( R \) is Euclidean if there is a function \[ N : R \smallsetminus \{ 0\} \rightarrow \mathbb{N} \] with the following properties: (1) \( N\left( {ab}\right) = N\left( a\right) N\left( b\right) \) for all \( a, b \in R \), and (2) for all \( a, b \in R \), if \( b \neq 0 \), then there exist \( q, r \in R \) such that \[ a = {bq} + r\text{ and }r = 0\text{ or }N\left( r\right) < N\left( b\right) . \] Such a function is called a norm on \( R \) . Much of what follows can be done with weaker conditions. In particular, one does not need such a strong property as (1). However, in many cases, the norm does have this property, so we assume it to allow a speedier and more natural development of the argument. Example 2.3. The following are examples of Euclidean rings. (1) Let \( R = \mathbb{Z}\left\lbrack \mathrm{i}\right\rbrack \) denote the Gaussian integers, so \[ R = \{ x + \mathrm{i}y \mid x, y \in \mathbb{Z}\} \] where \( {\mathrm{i}}^{2} = - 1 \) . Setting \( N\left( {x + \mathrm{i}y}\right) = {x}^{2} + {y}^{2} \) shows that \( R \) is a Euclidean ring. (2) Let \( \mathbb{F} \) denote any field and let \( R = \mathbb{F}\left\lbrack x\right\rbrack \) be the ring of polynomials with coefficients in \( \mathbb{F} \) . Define \( N\left( f\right) = {2}^{\deg \left( f\right) } \), where \( \deg \left( f\right) \) is the degree of \( f \) in \( \mathbb{F}\left\lbrack x\right\rbrack \), which is defined for all nonzero elements of \( R \) . We prove the first of these; the second is an exercise. Proof that \( \mathbb{Z}\left\lbrack \mathrm{i}\right\rbrack \) is Euclidean. Condition (1) of Definition 2.2 is easily verified by direct computation. For property (2), let \( a, b \in R \) with \( b \neq 0 \) and write \( a{b}^{-1} = p + \mathrm{i}q \) with \( p, q \in \mathbb{Q} \) . Now define \( m, n \in \mathbb{Z} \) by \[ m \in \lbrack p - 1/2, p + 1/2), n \in \lbrack q - 1/2, q + 1/2). \] Take \( m + \mathrm{i}n \in R \) as the quotient and set \( r = a - b\left( {m + \mathrm{i}n}\right) \) . 
For \( r \neq 0 \) , \[ N\left( r\right) = N\left( {\left( {a{b}^{-1} - m - \mathrm{i}n}\right) b}\right) \] \[ = N\left( {p + \mathrm{i}q - m - \mathrm{i}n}\right) N\left( b\right) \] \[ = N\left( {p - m + \mathrm{i}\left( {q - n}\right) }\right) N\left( b\right) \leq \left( {\frac{1}{4} + \frac{1}{4}}\right) N\left( b\right) < N\left( b\right) , \] showing property (2). Exercise 2.3. When \( R = \mathbb{Z} \), for any fixed \( a \) and \( b \), the values of \( q \) and \( r \) in Definition 2.2(2) are uniquely determined. Is the same true when \( R = \mathbb{Z}\left\lbrack \mathrm{i}\right\rbrack \) ? In any ring, we define greatest common divisors in exactly the same way as before. A greatest common divisor is defined up to multiplication by units (invertible elements). In any Euclidean ring, the function \( N \) can be used to define a Euclidean Algorithm, which can be used to find the greatest common divisor just as for the integers. Definition 2.4. In a ring \( R \) , (1) \( \alpha \) divides \( \beta \), written \( \alpha \mid \beta \), if there is an element \( \gamma \in R \) with \( \beta = {\alpha \gamma } \) ; (2) \( u \) is a unit if \( u \) divides 1 ; (3) \( \pi \) (not equal to zero nor to a unit) is prime if for all \( \alpha ,\beta \in R \) , \[ \pi \mid {\alpha \beta } \Rightarrow \pi \mid \alpha \text{ or }\pi \mid \beta \] (4) a non-unit \( \mu \) is irreducible if \[ \mu = {\alpha \beta } \Rightarrow \alpha \text{ or }\beta \text{ is a unit. } \] Notice that \( u \in R \) is a unit if and only if there is some \( \mu \) with \( {u\mu } = 1 \) . We write \( U\left( R\right) \) or \( {R}^{ * } \) for the units in the commutative ring \( R \) ; this is an Abelian group under multiplication. If the recent clutch of definitions is new to you, we recommend the following exercise. Exercise 2.4. (a) Show that, in any commutative ring, every prime element is irreducible. 
(b) Show that, in a Euclidean ring, \( u \) is a unit if and only if \( N\left( u\right) = 1 \) . (c) Show that there are infinitely many units in \( \mathbb{Z}\left\lbrack \sqrt{3}\right\rbrack \) . (d) Show that \( 3 + \sqrt{-2} \) is an irreducible element of \( \mathbb{Z}\left\lbrack \sqrt{-2}\right\rbrack \) . (e) Let \( \xi = \frac{-1 + \sqrt{-3}}{2} \) and \( R = \mathbb{Z}\left\lbrack \xi \right\rbrack \) . Prove that \( R \) is a Euclidean domain with respect to the norm \( N\left( {a + {b\xi }}\right) = {a}^{2} - {ab} + {b}^{2} = \left( {a + {b\xi }}\right) \left( {a + b\bar{\xi }}\right) \) and find all the units in \( R \) . Exercise 2.5. Prove the Remainder Theorem: For a polynomial \( f \in \mathbb{F}\left\lbrack x\right\rbrack \), \( \mathbb{F} \) a field, \( f\left( a\right) = 0 \) if and only if \( \left( {x - a}\right) \mid f\left( x\right) \) . Exercise 2.6. Give a different proof of Lemma 1.17 on p. 31 using group theory by considering the multiplicative group of units \( U\left( {\mathbb{Z}/{F}_{n}\mathbb{Z}}\right) = {\left( \mathbb{Z}/{F}_{n}\mathbb{Z}\right) }^{ * } \) . Exercise 2.7. Prove that \( \mathbb{Z}\left\lbrack x\right\rbrack \) does not have a Euclidean Algorithm by showing that the equation \( {2f}\left( x\right) + {xg}\left( x\right) = 1 \) has no solution for \( f, g \in \mathbb{Z}\left\lbrack x\right\rbrack \), but the only common divisors of 2 and \( x \) in \( \mathbb{Z}\left\lbrack x\right\rbrack \) are the units \( \pm 1 \) . Despite the conclusion of Exercise 2.7, the ring \( \mathbb{Z}\left\lbrack x\right\rbrack \) does have unique factorization into irreducibles. We will say that a ring has the Fundamental Theorem of Arithmetic if either of the following properties holds. (FTA1) Every irreducible element is prime. (FTA2) Every nonzero non-unit can be factorized uniquely up to order and multiplication by units. Theorem 2.5. Every Euclidean ring has the Fundamental Theorem of Arithmetic. Proof. 
Clearly, every irreducible \( \mu \) has \( N\left( \mu \right) \geq 2 \) . Arguing as we did in \( \mathbb{Z} \) shows we cannot keep factorizing into irreducibles forever, so the existence part is easy. To complete the argument, we just need to show that every irreducible is prime. This follows easily from Theorem 1.23. Let \( \mu \) be an irreducible and suppose that \( \mu \) divides \( {\alpha \beta } \) but \( \mu \) does not divide \( \alpha \) . Clearly, the greatest common divisor of \( \mu \) and \( \alpha \) is 1 because \( \mu \) admits only itself and units as divisors and \( \mu \) does not divide \( \alpha \), so we can write \[ {\mu x} + {\alpha y} = 1 \] for some \( x, y \in R \) by Theorem 1.23. Multiply through by \( \beta \) to obtain \[ {\mu x\beta } + {\alpha \beta y} = \beta . \] Since \( \mu \) divides both terms on the left-hand side, it must divide the right-hand side, and this completes the proof. ## 2.3 Sums of Squares The resolution of the Pythagorean equation (Equation (2.1)) is an elementary and well-known result. We are now going to show how the Fundamental Theorem of Arithmetic in other contexts can yield solutions to less tractable Diophantine equations. Consider the following problem: Which integers can be represented as the sum of two squares? That is, what are the solutions to the Diophantine problem \[ n = {x}^{2} + {y}^{2}? \] When \( n \) is a prime, experimenting with a few small values suggests the following. Theorem 2.6. The prime \( p \) can be written as the sum of two squares if and only if \( p = 2 \) or \( p \) is congruent to 1 modulo 4 . To prove this, we are going to use the Fundamental Theorem of Arithmetic in the ring of Gaussian integers \( R = \mathbb{Z}\left\lbrack \mathrm{i}\right\rbrack \) with norm function \( N : R \rightarrow \mathbb{N} \) defined by \( N\left( {x + \mathrm{i}y}\right) = {x}^{2} + {y}^{2} \) as in Example 2.3(1). Lemma 2.7. 
If \( p \) is 2 or a prime congruent to 1 modulo 4, then the congruence \[ {T}^{2} + 1 \equiv 0\;\left( {\;\operatorname{mod}\;p}\right) \] is solvable in integers. Proof. This is clear for \( p = 2 \), so suppose \( p = {4n} + 1 \) for some integer \( n > 0 \) . Using al-Haytham's Theorem (Theorem 1.19), \[ \left( {p - 1}\right) ! = \left( {p - 1}\right) \left( {p - 2}\right) \cdots 3 \cdot 2 \cdot 1 \equiv - 1\;\left( {\;\operatorname{mod}\;p}\right) . \] Now \[ {4n} = p - 1 \equiv - 1\;\left( {\;\operatorname{mod}\;p}\right) \] \[ {4n} - 1 = p - 2 \equiv - 2\;\left( {\;\operatorname{mod}\;p}\right) \] \[ \vdots \] \[ {2n} + 1 = p - {2n} \equiv - {2n}\;\left( {\;\operatorname{mod}\;p}\right) . \] It follows that \[ \left( {-1}\right) \left( {-2}\right) \cdots \left( {-{2n}}\right) \left( {2n}\right) \left( {{2n} - 1}\right) \cdots 3 \cdot 2 \cdot 1 = {\left( \left( {2n}\right) !\right) }^{2}{\left( -1\right) }^{2n} \equiv - 1\;\left( {\;\operatorname{mod}\;p}\right) . \] Thus \( T = \left( {2n}\right) ! \) has \( {T}^{2} + 1 \equiv 0 \) modulo \( p \), proving the lemma. Proof of Theorem 2.6. The case \( p = 2 \) is trivial. The case when \( p \) is congruent to 3 modulo 4 is also dealt with easily; no integer that is congruent to 3 modulo 4 can be the sum of two squares because squares are 0 or 1 modulo 4. Assume that \( p \) is a prime congruent to 1 modulo 4 . By Lemma 2.7, we can write \[ {cp} = {T}^{2} + 1 = \left( {T + \mathrm{i}}\right) \left( {T - \mathrm{i}}\right) \text{ in }R = \mathbb{Z}\left\lbrack \mathrm{i}\right\rbrack \] for some integers \( T \) and \( c \) . Suppose (for a contradiction) that \( p \) is irreducible in \( R \) . Then since \( \mathbb{Z}\left\lbrack \mathrm{i}\right\rbrack \) has the Fundamental Theorem of Arithmetic, \( p \) is prime. Hence \( p \) must divide one of \( T \pm \mathrm{i} \) in \( R \) since it divides their product, and this is impossible because \( p \) does not divide the coefficient of i. 
It follows that \( p \) cannot be irreducible in \( R \) , so \[ p = {\mu \nu } \] is a product of two non-units in \( R \) . Taking the norm of both sides shows that \[ {p}^{2} = N\left( {\mu \nu
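The argument above is effectively an algorithm: compute \( T = (2n)! \bmod p \), then extract a factor of \( p \) in \( \mathbb{Z}[\mathrm{i}] \) as \( \gcd(p, T + \mathrm{i}) \), using the division with remainder from the proof that \( \mathbb{Z}[\mathrm{i}] \) is Euclidean. The sketch below is my own illustration, not the book's; it models Gaussian integers with Python's `complex` type, which is exact only for small inputs like these.

```python
def gdiv(a, b):
    """Division with remainder in Z[i]: round a/b to the nearest lattice point,
    so the remainder r = a - b*q satisfies N(r) <= N(b)/2 < N(b)."""
    w = a / b
    q = complex(round(w.real), round(w.imag))
    return q, a - b * q

def ggcd(a, b):
    """Euclidean algorithm in Z[i], as enabled by gdiv."""
    while b != 0:
        a, b = b, gdiv(a, b)[1]
    return a

p = 13                        # a prime congruent to 1 modulo 4
n = (p - 1) // 4
T = 1
for k in range(1, 2 * n + 1):
    T = T * k % p             # T = (2n)! mod p, so T**2 % p == p - 1
g = ggcd(complex(p, 0), complex(T, 1))   # gcd(p, T + i) in Z[i]
x, y = abs(int(g.real)), abs(int(g.imag))
print(x * x + y * y)          # 13, from 13 = 3^2 + 2^2
```

The gcd is only determined up to the units \( \pm 1, \pm\mathrm{i} \), but every associate has the same norm, so \( x^2 + y^2 = p \) regardless of which associate is returned.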
1359_[陈省身] Lectures on Differential Geometry
Definition 2.2
Definition 2.2. Suppose \( M \) is a connected Riemannian manifold, and \( p, q \) are two arbitrary points in \( M \) . Let \[ \rho \left( {p, q}\right) = \inf \overset{⏜}{pq} \] \( \left( {2.47}\right) \) where \( \overset{⏜}{pq} \) denotes the arc length of a curve connecting \( p \) and \( q \) with measurable arc length. Then \( \rho \left( {p, q}\right) \) is called the distance between points \( p \) and \( q \) . Because \( M \) is connected, there always exists a curve connecting \( p \) and \( q \) with measurable arc length. Therefore (2.47) is always meaningful, and defines a real function on \( M \times M \) . Theorem 2.6. The function \( \rho : M \times M \rightarrow \mathbb{R} \) has the following properties: 1) for any \( p, q \in M,\rho \left( {p, q}\right) \geq 0 \), and the equality holds only when \( p = q \) ; 2) \( \rho \left( {p, q}\right) = \rho \left( {q, p}\right) \) ; 3) for any three points \( p, q, r \in M \) we have \[ \rho \left( {p, q}\right) + \rho \left( {q, r}\right) \geq \rho \left( {p, r}\right) \] Therefore \( \rho \) becomes a distance function on \( M \) and makes \( M \) a metric space. The topology of \( M \) as a metric space and the original topology of \( M \) as a manifold are equivalent. Proof. According to definition (2.47), the above properties are obvious. We need only show that \( \rho \left( {p, q}\right) > 0 \) whenever \( p \neq q \) . Suppose \( p, q \) are any two points in \( M, p \neq q \) . Since \( M \) is a Hausdorff space, there exists a neighborhood \( U \) of \( p \) such that \( q \notin U \) . By Theorem 2.4, there must exist a normal coordinate neighborhood \( W \subset U \) of \( p \) such that its normal coordinates are \( {u}^{i} = {\alpha }^{i}s \), where \( \mathop{\sum }\limits_{{i = 1}}^{m}{\left( {\alpha }^{i}\right) }^{2} = 1 \) and \( 0 \leq s \leq {s}_{0} \) . Choose \( \delta \) such that \( 0 < \delta < {s}_{0} \) . 
Then the hypersurface \( {\sum }_{\delta } \subset W \) . Suppose \( \gamma \) is a curve with measurable arc length connecting \( p \) and \( q \) . Then the length of \( \gamma \) is at least \( \delta \), that is \[ \rho \left( {p, q}\right) \geq \delta > 0 \] By Theorem 2.5, the interior of \( {\sum }_{\delta } \) is precisely the set \[ \{ q \in M \mid \rho \left( {p, q}\right) < \delta \} \] that is, the interior of \( {\sum }_{\delta } \) is a \( \delta \) -ball neighborhood of \( p \) when \( M \) is viewed as a metric space. Thus the topology of \( M \) viewed as a metric space and the original topology of \( M \) are equivalent. We note that if \( W \) is a ball-shaped normal coordinate neighborhood at the point \( O \) constructed as in Theorem 2.4, then for any point \( p \in W \) the unique geodesic curve connecting \( O \) and \( p \) in \( W \) has length \( \rho \left( {O, p}\right) \) . Theorem 2.7. There exists an \( \eta \) -ball neighborhood \( W \) at any point \( p \) in a Riemannian manifold \( M \), where \( \eta \) is a sufficiently small positive number, such that any two points in \( W \) can be connected by a unique geodesic curve. Any neighborhood satisfying the above property is called a geodesic convex neighborhood. Thus the theorem states that there exists a geodesic convex neighborhood at every point in a Riemannian manifold. Proof. Suppose \( p \in M \) . By Theorem 2.4 there exists a ball-shaped normal coordinate neighborhood \( U \) of \( p \) with radius \( \epsilon \) such that for any point \( q \) in \( U \) there is a normal coordinate neighborhood \( {V}_{q} \) that contains \( U \) . We may assume that \( \epsilon \) also satisfies the requirements of Theorem 2.5. Choose a positive number \( \eta \leq \frac{1}{4}\epsilon \) . Then the \( \eta \) -ball neighborhood \( W \) of \( p \) is a geodesic convex neighborhood of \( p \) . Choose any \( {q}_{1},{q}_{2} \in W \) . 
Then \[ \rho \left( {{q}_{1},{q}_{2}}\right) \leq \rho \left( {p,{q}_{1}}\right) + \rho \left( {p,{q}_{2}}\right) < {2\eta } \leq \frac{\epsilon }{2}. \] (2.48) Suppose \( U\left( {{q}_{1};\epsilon /2}\right) \) is an \( \epsilon /2 \) -ball neighborhood of \( {q}_{1} \) . Then the above formula indicates that \( {q}_{2} \in U\left( {{q}_{1};\epsilon /2}\right) \) . For any \( q \in U\left( {{q}_{1};\epsilon /2}\right) \) we have \[ \rho \left( {p, q}\right) \leq \rho \left( {p,{q}_{1}}\right) + \rho \left( {{q}_{1}, q}\right) < \frac{3\epsilon }{4}. \] Hence \[ U\left( {{q}_{1};\frac{\epsilon }{2}}\right) \subset U \subset {V}_{{q}_{1}} \] (2.49) that is, the \( \epsilon /2 \) -ball neighborhood of \( {q}_{1} \) is contained in the normal coordinate neighborhood of \( {q}_{1} \) . By Theorem 2.4 and the statement immediately following the proof of Theorem 2.6, there exists a unique geodesic curve \( \gamma \) in \( U\left( {{q}_{1};\epsilon /2}\right) \) connecting \( {q}_{1} \) and \( {q}_{2} \), whose length is precisely \( \rho \left( {{q}_{1},{q}_{2}}\right) \) . In particular, if \( r \in \gamma \) , then \[ \rho \left( {{q}_{1}, r}\right) \leq \rho \left( {{q}_{1},{q}_{2}}\right) \] \( \left( {2.50}\right) \) Finally we prove that the geodesic curve \( \gamma \) lies inside \( W \) . Since \( \gamma \subset \) \( U\left( {{q}_{1};\epsilon /2}\right) \subset U \), the function \( \rho \left( {p, q}\right) \left( {q \in \gamma }\right) \) is bounded. If \( \gamma \) does not lie inside \( W \) completely, and \( {q}_{1},{q}_{2} \in W \), then the function \( \rho \left( {p, q}\right) \left( {q \in \gamma }\right) \) must attain its maximum at an interior point \( {q}_{0} \) of \( \gamma \) . Let \( \delta = \rho \left( {p,{q}_{0}}\right) \) . Then \( \delta < \epsilon \), and the hypersphere \( {\sum }_{\delta } \) is tangent to \( \gamma \) at \( {q}_{0} \) . 
By Theorem 2.5, \( \gamma \) lies completely outside \( {\sum }_{\delta } \) near \( {q}_{0} \), which contradicts the fact that \( \rho \left( {p, q}\right) \left( {q \in \gamma }\right) \) attains its maximum at \( {q}_{0} \) . Hence \( \gamma \subset W \) . ## §5-3 Sectional Curvature Suppose \( M \) is an \( m \) -dimensional Riemannian manifold whose curvature tensor \( R \) is a covariant tensor of rank 4, and \( {u}^{i} \) is a local coordinate system in \( M \) . Then \( R \) can be expressed as \[ R = {R}_{ijkl}d{u}^{i} \otimes d{u}^{j} \otimes d{u}^{k} \otimes d{u}^{l}, \] (3.1) where \( {R}_{ijkl} \) is defined as in (1.50). A covariant tensor of rank 4 can be viewed as a linear function on the space of contravariant tensors of rank 4 (see \( §2 - 2 \) ), so at every point \( p \in M \) we have a multilinear function \( R : {T}_{p}\left( M\right) \times {T}_{p}\left( M\right) \times \) \( {T}_{p}\left( M\right) \times {T}_{p}\left( M\right) \rightarrow \mathbb{R} \), defined by \[ R\left( {X, Y, Z, W}\right) = \langle X \otimes Y \otimes Z \otimes W, R\rangle , \] (3.2) where the notation \( \langle \;,\;\rangle \) is defined as in (2.17) of Chapter 2. If we let \[ X = {X}^{i}\frac{\partial }{\partial {u}^{i}},\;Y = {Y}^{i}\frac{\partial }{\partial {u}^{i}},\;Z = {Z}^{i}\frac{\partial }{\partial {u}^{i}},\;W = {W}^{i}\frac{\partial }{\partial {u}^{i}}, \] (3.3) then \[ R\left( {X, Y, Z, W}\right) = {R}_{ijkl}{X}^{i}{Y}^{j}{Z}^{k}{W}^{l}. \] (3.4) In particular, \[ {R}_{ijkl} = R\left( {\frac{\partial }{\partial {u}^{i}},\frac{\partial }{\partial {u}^{j}},\frac{\partial }{\partial {u}^{k}},\frac{\partial }{\partial {u}^{l}}}\right) . 
\] (3.5) In \( §4 - 2 \), we have already interpreted the curvature tensor of a connection \( D \) as a curvature operator: for any given \( Z, W \in {T}_{p}\left( M\right), R\left( {Z, W}\right) \) is a linear map from \( {T}_{p}\left( M\right) \) to \( {T}_{p}\left( M\right) \) defined by \[ R\left( {Z, W}\right) X = {R}_{ikl}^{j}{X}^{i}{Z}^{k}{W}^{l}\frac{\partial }{\partial {u}^{j}}. \] (3.6) If \( D \) is the Levi-Civita connection of a Riemannian manifold \( M \), then we have \[ R\left( {X, Y, Z, W}\right) = \left( {R\left( {Z, W}\right) X}\right) \cdot Y \] (3.7) where the notation "." on the right hand side is the inner product defined by (1.4). By Theorem 1.4, the 4-linear function \( R\left( {X, Y, Z, W}\right) \) has the following properties: 1) \( R\left( {X, Y, Z, W}\right) = - R\left( {X, Y, W, Z}\right) = - R\left( {Y, X, Z, W}\right) \) ; 2) \( R\left( {X, Y, Z, W}\right) + R\left( {X, Z, W, Y}\right) + R\left( {X, W, Y, Z}\right) = 0 \) ; 3) \( R\left( {X, Y, Z, W}\right) = R\left( {Z, W, X, Y}\right) \) . Using the fundamental tensor \( G \) of \( M \), we can also define a 4-linear function as follows: \[ G\left( {X, Y, Z, W}\right) = G\left( {X, Z}\right) G\left( {Y, W}\right) - G\left( {X, W}\right) G\left( {Y, Z}\right) . \] (3.8) Obviously the function defined above is linear with respect to every variable, and also has the same properties 1)-3) as \( R\left( {X, Y, Z, W}\right) \) . If \( X, Y \in {T}_{p}\left( M\right) \), then \[ G\left( {X, Y, X, Y}\right) = {\left| X\right| }^{2} \cdot {\left| Y\right| }^{2} - {\left( X \cdot Y\right) }^{2} = {\left| X\right| }^{2} \cdot {\left| Y\right| }^{2} \cdot {\sin }^{2}\angle \left( {X, Y}\right) . \] (3.9) Therefore, when \( X, Y \) are linearly independent, \( G\left( {X, Y, X, Y}\right) \) is precisely the square of the area of the parallelogram determined by the tangent vectors \( X \) and \( Y \) . Hence \( G\left( {X, Y, X, Y}\right) \neq 0 \) . 
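In the Euclidean case, where the metric is the ordinary dot product, \( G(X,Y,X,Y) \) in (3.8) is just the Gram determinant of \( X \) and \( Y \). The sketch below is my own numerical check with ad hoc names: it evaluates (3.8) for vectors in \( \mathbb{R}^3 \), compares \( G(X,Y,X,Y) \) with the squared parallelogram area \( |X \times Y|^2 \) as in (3.9), and spot-checks the antisymmetry property 1).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def G4(X, Y, Z, W):
    # Equation (3.8) with G(X, Z) = X . Z (the Euclidean metric)
    return dot(X, Z) * dot(Y, W) - dot(X, W) * dot(Y, Z)

X, Y = (1.0, 2.0, 0.0), (0.0, 1.0, 2.0)
area2 = G4(X, Y, X, Y)        # |X|^2 |Y|^2 - (X.Y)^2 = 25 - 4 = 21
cross = (X[1]*Y[2] - X[2]*Y[1], X[2]*Y[0] - X[0]*Y[2], X[0]*Y[1] - X[1]*Y[0])
print(area2, dot(cross, cross))          # both 21.0: squared parallelogram area

Z, W = (2.0, 0.0, 1.0), (1.0, 0.0, 0.0)
print(G4(X, Y, Z, W) == -G4(X, Y, W, Z))  # property 1): True
```

Since \( X \) and \( Y \) here are linearly independent, the computed value is nonzero, in line with the remark that \( G(X,Y,X,Y) \neq 0 \) for independent vectors.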
Suppose \( {X}^{\prime },{Y}^{\prime } \) are another two linearly independent tangent vectors at the point \( p \), and that they span the same 2-dimensional tangent subspace \( E \) as that spanned by \( X \) and \( Y \) . Then we may assume that \[ {X}^{\prime } = {aX} + {bY},\;{Y}^{\prime } = {cX} + {dY} \] where \( {ad} - {bc} \neq 0 \) . By properties 1)-3) we have \[ R\left( {{X}^{\prime },{Y}^{\prime },{X}^{\prime },{Y}^{\prime }}\right) = {\left( ad - bc\right) }^{2}R\left( {X, Y, X, Y}\right) , \] \[ G\
1088_(GTM245)Complex Analysis
Definition 9.32
Definition 9.32. Let \( D \) be a domain in \( \mathbb{C} \) . A Perron family \( \mathcal{F} \) in \( D \) is a nonempty collection of subharmonic functions in \( D \) such that (a) If \( u, v \) are in \( \mathcal{F} \), then so is \( \max \{ u, v\} \) . (b) If \( u \) is in \( \mathcal{F} \), then so is \( {u}_{U} \) for every disc \( U \) with cl \( U \subset D \) . The following result, due to Perron, is useful for constructing harmonic functions. Theorem 9.33 (Perron’s Principle). If \( \mathcal{F} \) is a Perron family in \( D \) that is uniformly bounded from above, then the function defined for \( z \in D \) by \[ V\left( z\right) = \sup \{ u\left( z\right) : u \in \mathcal{F}\} \] (9.16) is harmonic in \( D \) . Proof. First note that by definition a Perron family is never empty. Since we are assuming that there exists a constant \( M \) such that \( u\left( z\right) < M \) for all \( z \) in \( D \) and all \( u \) in \( \mathcal{F} \), the function \( V \) is clearly well defined and real-valued. Let \( U \) be any disc such that \( \operatorname{cl}U \subset D \) . It is enough to show that \( V \) is harmonic in \( U \) . For any point \( {z}_{0} \) in \( U \), there exists a sequence \( \left\{ {{u}_{j} : j \in \mathbb{N}}\right\} \) of functions in \( \mathcal{F} \) such that \[ \mathop{\lim }\limits_{{j \rightarrow \infty }}{u}_{j}\left( {z}_{0}\right) = V\left( {z}_{0}\right) \] (9.17) Without loss of generality, we may assume \( {u}_{j + 1} \geq {u}_{j} \) for all \( j \) in \( \mathbb{N} \), since if \( \left\{ {u}_{j}\right\} \) is any sequence in \( \mathcal{F} \) satisfying (9.17), then the new sequence given by \( {v}_{1} = {u}_{1} \) and \( {v}_{j + 1} = \max \left\{ {{u}_{j + 1},{v}_{j}}\right\} \) for \( j \geq 1 \) is also contained in \( \mathcal{F} \), satisfies (9.17) (with \( {u}_{j} \) replaced by \( {v}_{j} \), of course), and is nondecreasing, as needed. 
The sequence \( \left\{ {{w}_{j} = {\left( {u}_{j}\right) }_{U}}\right\} \) of harmonizations of the \( {u}_{j} \) in \( U \) consists of subharmonic functions with the following properties: 1. \( {w}_{j} \geq {u}_{j} \) for all \( j \); 2. \( {w}_{j} \leq {w}_{j + 1} < M \) for all \( j \), since the two inequalities clearly hold outside \( U \) and on the boundary of \( U \), from which it follows that they also hold in \( U \). Thus the sequence \( \left\{ {w}_{j}\right\} \) lies in \( \mathcal{F} \), is nondecreasing, and satisfies \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{w}_{j}\left( {z}_{0}\right) = V\left( {z}_{0}\right) \). It follows from Harnack's convergence theorem (Theorem 9.14) that the function defined by \[ \Phi \left( z\right) = \mathop{\lim }\limits_{{j \rightarrow \infty }}{w}_{j}\left( z\right) = \sup \left\{ {{w}_{j}\left( z\right) : j \in \mathbb{N}}\right\} \] is harmonic in \( U \). We will now show that \( \Phi = V \) in \( U \). Let \( c \) denote any point in \( U \). As before, we can find a nondecreasing sequence \( \left\{ {s}_{j}\right\} \) in \( \mathcal{F} \) such that \( V\left( c\right) = \mathop{\lim }\limits_{{j \rightarrow \infty }}{s}_{j}\left( c\right) \). By setting \( {t}_{1} = \max \left\{ {{s}_{1},{w}_{1}}\right\} \) and \( {t}_{j + 1} = \max \left\{ {{s}_{j + 1},{w}_{j + 1},{t}_{j}}\right\} \) for all \( j \geq 1 \), we obtain a nondecreasing sequence \( \left\{ {t}_{j}\right\} \) in \( \mathcal{F} \) such that \( {t}_{j} \geq {w}_{j} \) for all \( j \), and such that \( \mathop{\lim }\limits_{{j \rightarrow \infty }}{t}_{j}\left( z\right) = V\left( z\right) \) for \( z = c \) and \( z = {z}_{0} \). The harmonizations of the \( {t}_{j} \) in \( U \) give a nondecreasing sequence \( \left\{ {{r}_{j} = {\left( {t}_{j}\right) }_{U}}\right\} \) in \( \mathcal{F} \) satisfying \( M > {r}_{j} \geq {t}_{j} \geq {w}_{j} \) for all \( j \). 
As before, the function defined by \[ \Psi \left( z\right) = \sup \left\{ {{r}_{j}\left( z\right) : j \in \mathbb{N}}\right\} \] is harmonic in \( U \), and coincides with \( V \) at \( c \) and \( {z}_{0} \) . But \( \Psi \geq \Phi \), since \( {r}_{j} \geq {w}_{j} \) for all \( j \), and hence \( \Psi - \Phi \) is a nonnegative harmonic function in \( U \) . Since it is equal to zero at \( {z}_{0} \), by the minimum principle for harmonic functions, it is identically zero in \( U \), and the result follows. ## 9.8 The Dirichlet Problem (Revisited) This section has two parts. The first describes a method for obtaining the solution to the Dirichlet problem, provided it is solvable. In the second part, we offer a solution. Recall that the Dirichlet problem for a bounded region \( D \) in \( \mathbb{C} \) and a function \( f \in \) \( \mathbf{C}\left( {\partial D}\right) \) is to find a continuous function \( U \) on the closure of \( D \) whose restriction to \( D \) is harmonic and which agrees with \( f \) on the boundary of \( D \) . Under these conditions, let \( \mathcal{F} \) denote the family of all continuous functions \( u \) on cl \( D \) such that \( u \) is subharmonic in \( D \) and \( u \leq f \) on \( \partial D \) . Then \( \mathcal{F} \) is a Perron family of functions uniformly bounded from above. Note that the constant function \( u = \min \{ f\left( z\right) : z \in \) \( \partial D\} \) belongs to \( \mathcal{F} \), hence \( \mathcal{F} \) is nonempty. The other conditions for \( \mathcal{F} \) to be a Perron family are also easily verified. Therefore, by Theorem 9.33, the function \( V \) defined by (9.16) is harmonic in \( D \) . Now, if we assume that there is a solution \( U \) to the Dirichlet problem for \( D \) and \( f \), then we can show that \( U = V \) . 
Indeed, for each \( u \) in \( \mathcal{F} \) the function \( u - U \) is subharmonic in \( D \), and satisfies \( u - U = u - f \leq 0 \) on \( \partial D \), from which it follows that \( u - U \leq 0 \) in \( D \), and hence \( V \leq U \) in \( D \). But \( U \) belongs to \( \mathcal{F} \), and it follows that \( U \leq V \), and therefore \( U = V \). The Dirichlet problem does not always have a solution. A very simple example is given by considering the domain \( D = \{ 0 < \left| z\right| < 1\} \) and the function \[ f\left( z\right) = \left\{ \begin{array}{ll} 0, & \text{ if }\left| z\right| = 1, \\ 1, & \text{ if }z = 0. \end{array}\right. \] The corresponding function \( V \) given by Theorem 9.33 is harmonic in the punctured disc \( D \). If the Dirichlet problem were solvable in our case, then \( V \) would extend to a continuous function on \( \left| z\right| \leq 1 \) that is harmonic in \( \left| z\right| < 1 \) (see Exercise 9.17). But then the maximum principle would imply that \( V \) is identically zero, a contradiction. To solve the Dirichlet problem, we start with a bounded domain \( D \subset \mathbb{C} \), with boundary \( \partial D \), and the following definition. Definition 9.34. A function \( \beta \) is a barrier at \( {z}_{0} \in \partial D \), and \( {z}_{0} \) is a regular point for the Dirichlet problem, provided there exists an open neighborhood \( N \) of \( {z}_{0} \) in \( \mathbb{C} \) such that (1) \( \beta \in \mathbf{C}\left( {\operatorname{cl}D \cap N}\right) \). (2) \( - \beta \) is subharmonic in \( D \cap N \). (3) \( \beta \left( z\right) > 0 \) for \( z \neq {z}_{0} \), and \( \beta \left( {z}_{0}\right) = 0 \). (4) \( \beta \left( z\right) = 1 \) for \( z \notin N \). Remark 9.35. A few observations are in order. 1. Condition (4) is easily satisfied by adjusting a function \( \beta \) that satisfies the other three conditions for being a barrier. 
To see this we may assume that \( N \) is relatively compact in \( \mathbb{C} \), and choose a smaller neighborhood \( {N}_{0} \) of \( {z}_{0} \) with cl \( {N}_{0} \subset N \) . Then let \[ m = \min \left\{ {\beta \left( z\right) ;z \in \operatorname{cl}\left( {N - {N}_{0}}\right) \cap \operatorname{cl}D}\right\} \] note that \( m > 0 \), and define \[ {\beta }_{1}\left( z\right) = \left\{ \begin{array}{ll} \min \{ m,\beta \left( z\right) \} & \text{ for }z \in N \cap D, \\ m & \text{ for }z \in \operatorname{cl}\left( {D - N}\right) . \end{array}\right. \] Finally set \( {\beta }_{2} = \frac{{\beta }_{1}}{m} \), and observe that \( {\beta }_{2} \) satisfies all the conditions for being a barrier at \( {z}_{0} \) . Thus, to prove the existence of a barrier, it suffices to produce a function that satisfies the first three conditions. 2. The existence of barriers is a local property. If a point \( {z}_{0} \in \partial D \) can be reached by an analytic arc (a curve that is the image of \( \left\lbrack {0,1}\right\rbrack \) under an injective analytic map defined in a neighborhood of \( \left\lbrack {0,1}\right\rbrack ) \) with no points in common with cl \( D - \) \( \left\{ {z}_{0}\right\} \), then a barrier exists at this point. To establish this we may, without loss of generality, assume that \( {z}_{0} = 0 \), that the closure of \( D \) lies in the right half plane, and that the analytic arc consists of the negative real axis including the origin. Using polar coordinates \( z = r{\mathrm{e}}^{\iota \theta } \), we see that \( \beta \left( z\right) = {r}^{\frac{1}{2}}\cos \frac{\theta }{2}, - \pi < \theta < \pi \) , satisfies the first three conditions for a barrier function. Definition 9.36. Let \( D \) be a nonempty domain in \( \mathbb{C} \) . 
A solution \( u \) to the Dirichlet problem for \( f \in {\mathbf{C}}_{\mathbb{R}}\left( {\partial D}\right) \) is proper provided \[ \inf \{ f\left( w\right) ;w \in \partial D\} \leq u\left( z\right) \leq \sup \{ f\left( w\right) ;w \in \partial D\} \] for all \( z \) in \( D \). A far-reaching generalization of Schwarz's Theorem 9.15 is provided by our next result. Theorem 9.37. Let \( D \) be a nonempty domain in \( \mathbb{C} \) . There exists a proper solution to the Dirichlet problem for \( D \) for every bounded continuous real-valued function on \( \partial D \) if and
113_Topological Groups
Definition 8.22
Definition 8.22. Let \( \mathcal{P} = \left( {n, c, P}\right) \) be a sentential language. Members of \( {}^{P}2 \) are called models of \( \mathcal{P} \). (Intuitively, 0 means falsity, 1 means truth, and a function \( f \in {}^{P}2 \) is just an assignment of a truth value to each sentence of \( P \).) Using the recursion principle for sentences, we can associate with each \( f \in {}^{P}2 \) a function \( {f}^{ + } : \operatorname{Sent}\mathcal{P} \rightarrow 2 \) such that for any \( s \in P \) and any \( \varphi ,\psi \in {\text{Sent}}_{\mathcal{P}} \) \[ {f}^{ + }\langle s\rangle = {fs}, \] \[ {f}^{ + }\neg \varphi = 1\;\text{ if }{f}^{ + }\varphi = 0, \] \[ {f}^{ + }\neg \varphi = 0\;\text{ if }{f}^{ + }\varphi = 1, \] \[ {f}^{ + }\left( {\varphi \rightarrow \psi }\right) = 0\;\text{ iff }{f}^{ + }\varphi = 1\text{ and }{f}^{ + }\psi = 0. \] ( \( {f}^{ + } \) intuitively tells us about the truth or falsity of any sentence of \( \mathcal{P} \), given the truth or falsity of members of \( P \).) We say that \( f \) is a model of \( \varphi \) iff \( {f}^{ + }\varphi = 1 \); \( f \) is a model of a set \( \Gamma \) of sentences iff \( {f}^{ + }\varphi = 1 \) for all \( \varphi \in \Gamma \). We write \( \Gamma { \vDash }_{\mathcal{P}}\varphi \) iff every model of \( \Gamma \) is a model of \( \varphi \), and we write \( { \vDash }_{\mathcal{P}}\varphi \) instead of \( 0{ \vDash }_{\mathcal{P}}\varphi \). Sentences \( \varphi \) with \( { \vDash }_{\mathcal{P}}\varphi \) are called tautologies. Whether or not a sentence \( \varphi \) is a tautology can be decided by the familiar truth table method: one writes in rows all possible \( f \in {}^{P}2 \) and for each such \( f \) calculates \( {f}^{ + }\varphi \) from the inside out. Of course, instead of all \( f \in {}^{P}2 \) it suffices to list only the \( f \in {}^{Q}2 \), where \( Q \) is the set of \( s \in P \) which occur in \( \varphi \). 
For example, the following table shows that \( \left\langle {s}_{1}\right\rangle \rightarrow \left( {\left\langle {s}_{2}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle }\right) \) is a tautology: <table><thead><tr><th>\( {s}_{1} \)</th><th>\( {s}_{2} \)</th><th>\( \left\langle {s}_{2}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle \)</th><th>\( \left\langle {s}_{1}\right\rangle \rightarrow \left( {\left\langle {s}_{2}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle }\right) \)</th></tr></thead><tr><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>1</td><td>0</td><td>1</td><td>1</td></tr><tr><td>0</td><td>1</td><td>0</td><td>1</td></tr><tr><td>0</td><td>0</td><td>1</td><td>1</td></tr></table> The following table shows that \( \neg \left\langle {s}_{1}\right\rangle \rightarrow \left( {\neg \left\langle {s}_{1}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle }\right) \) is not a tautology: <table><thead><tr><th>\( {s}_{1} \)</th><th>\( \neg \left\langle {s}_{1}\right\rangle \)</th><th>\( \neg \left\langle {s}_{1}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle \)</th><th>\( \neg \left\langle {s}_{1}\right\rangle \rightarrow \left( {\neg \left\langle {s}_{1}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle }\right) \)</th></tr></thead><tr><td>1</td><td>0</td><td>1</td><td>1</td></tr><tr><td>0</td><td>1</td><td>0</td><td>0</td></tr></table> Clearly this truth table procedure provides an effective procedure for determining whether or not a sentence is a tautology. This statement could be made precise for sentential languages \( \mathcal{P} = \left( {n, c, P}\right) \) with \( P \) countable by the usual procedure of Gödel numbering. (See 10.19-10.22, where this is done in detail for first-order languages.) In practice, to check that a statement is or is not a tautology it is frequently better to argue informally, assuming the given sentence is not true and trying to infer a contradiction from this. 
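The truth-table procedure lends itself to a short program. The sketch below is our own illustration, not from the text: sentences are encoded as nested tuples (a hypothetical encoding), and the checker enumerates every \( f \in {}^{Q}2 \) over the set \( Q \) of symbols occurring in \( \varphi \), computing \( {f}^{+}\varphi \) from the inside out.

```python
from itertools import product

# Sentences are encoded as nested tuples (an illustrative encoding):
# a string "s" is a sentential symbol, ("not", phi) is a negation, and
# ("->", phi, psi) is an implication.

def evaluate(phi, f):
    """Compute f+(phi) for a truth assignment f (a dict: symbol -> 0 or 1)."""
    if isinstance(phi, str):
        return f[phi]
    if phi[0] == "not":
        return 1 - evaluate(phi[1], f)
    _, antecedent, consequent = phi  # implication
    # f+(phi -> psi) = 0 iff f+(phi) = 1 and f+(psi) = 0
    return 0 if evaluate(antecedent, f) == 1 and evaluate(consequent, f) == 0 else 1

def symbols(phi):
    """The set Q of sentential symbols occurring in phi."""
    if isinstance(phi, str):
        return {phi}
    return set().union(*(symbols(part) for part in phi[1:]))

def is_tautology(phi):
    """Check f+(phi) = 1 for every assignment f in ^Q 2."""
    Q = sorted(symbols(phi))
    return all(
        evaluate(phi, dict(zip(Q, bits))) == 1
        for bits in product((0, 1), repeat=len(Q))
    )

# The two sentences from the tables above:
a1 = ("->", "s1", ("->", "s2", "s1"))                    # a tautology
a2 = ("->", ("not", "s1"), ("->", ("not", "s1"), "s1"))  # not a tautology
print(is_tautology(a1), is_tautology(a2))  # True False
```

Running it on the two example sentences reproduces the verdicts of the tables above.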
For example, if \( \left\langle {s}_{1}\right\rangle \rightarrow \left( {\left\langle {s}_{2}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle }\right) \) is false, then \( {s}_{1} \) is true and \( \left\langle {s}_{2}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle \) is false; but this is impossible; \( \left\langle {s}_{2}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle \) is true since \( {s}_{1} \) is true. Thus \( \left\langle {s}_{1}\right\rangle \rightarrow \) \( \left( {\left\langle {s}_{2}\right\rangle \rightarrow \left\langle {s}_{1}\right\rangle }\right) \) is a tautology. We are going to show shortly that the relations \( \vdash \) and \( \vDash \) are identical. To do this we need some preliminary statements. Lemma 8.23. If \( \Gamma \vdash \varphi \), then \( \Gamma \vDash \varphi \) . Proof. Let \( \Delta = \{ \varphi \) : every model of \( \Gamma \) is a model of \( \varphi \} \) . It is easy to check, using truth tables for the logical axioms, that \( \Gamma \subseteq \Delta \), every logical axiom is in \( \Delta \), and \( \Delta \) is closed under detachment. Hence all \( \Gamma \) -theorems are in \( \Delta \) . The lemma follows. Definition 8.24. \( \Gamma \) is consistent iff \( \Gamma \nvdash \varphi \) for some \( \varphi \) . Theorem 8.25. The following conditions are equivalent: (i) \( \Gamma \) is inconsistent. (ii) \( \Gamma \vdash \neg \left( {\varphi \rightarrow \varphi }\right) \) for every sentence \( \varphi \) . (iii) \( \Gamma \vdash \neg \left( {\varphi \rightarrow \varphi }\right) \) for some sentence \( \varphi \) . Proof. Obviously \( \left( i\right) \Rightarrow \left( {ii}\right) \Rightarrow \left( {iii}\right) \) . Now suppose \( \Gamma \vdash \neg \left( {\varphi \rightarrow \varphi }\right) \) for a certain sentence \( \varphi \) . Let \( \psi \) be any sentence. 
By A1, \( \Gamma \vdash \left( {\varphi \rightarrow \varphi }\right) \rightarrow \left\lbrack {\neg \psi \rightarrow \left( {\varphi \rightarrow \varphi }\right) }\right\rbrack \); from 8.10 we infer that \( \Gamma \vdash \neg \psi \rightarrow \left( {\varphi \rightarrow \varphi }\right) \), and then 8.17 yields \( \Gamma \vdash \neg \left( {\varphi \rightarrow \varphi }\right) \rightarrow \neg \neg \psi \). Hence \( \Gamma \vdash \neg \neg \psi \). So by 8.16, \( \Gamma \vdash \psi \); \( \psi \) being any sentence, \( \Gamma \) is inconsistent. Theorem 8.26. \( \Gamma \cup \{ \varphi \} \) is inconsistent iff \( \Gamma \vdash \neg \varphi \). Proof. \( \Rightarrow \): Since \( \Gamma \cup \{ \varphi \} \vdash \psi \) for any sentence \( \psi \), we have \( \Gamma \cup \{ \varphi \} \vdash \neg \varphi \), so by the deduction theorem \( \Gamma \vdash \varphi \rightarrow \neg \varphi \). By 8.19, \( \Gamma \vdash \neg \varphi \). \( \Leftarrow \): \( \Gamma \cup \{ \varphi \} \vdash \neg \varphi \) and \( \Gamma \cup \{ \varphi \} \vdash \varphi \), so by 8.14, \( \Gamma \cup \{ \varphi \} \vdash \psi \) for any sentence \( \psi \). Theorem 8.27. 0 is consistent. Proof. Since \( \neg \left( {\varphi \rightarrow \varphi }\right) \) always receives the value 0 under any model, for any sentence \( \varphi \), by 8.23 we have not \( \left( { \vdash \neg \left( {\varphi \rightarrow \varphi }\right) }\right) \). Theorem 8.28 (Extended completeness theorem). Every consistent set of sentences has a model. Proof. Let \( \Gamma \) be a consistent set of sentences. Let \( \mathcal{A} = \{ \Delta : \Gamma \subseteq \Delta ,\Delta \) is consistent \( \} \). Since \( \Gamma \in \mathcal{A} \), \( \mathcal{A} \) is nonempty. Suppose \( \mathcal{B} \) is a subset of \( \mathcal{A} \) simply ordered by inclusion, \( \mathcal{B} \neq 0 \). Then \( \Gamma \subseteq \bigcup \mathcal{B} \). 
Also, \( \bigcup \mathcal{B} \) is consistent, for, if not, there would be, by 8.25, a sentence \( \varphi \) such that \( \bigcup \mathcal{B} \vdash \neg \left( {\varphi \rightarrow \varphi }\right) \) . Then by \( {8.9},\left\{ {{\psi }_{0},\ldots ,{\psi }_{m - 1}}\right\} \vdash \neg \left( {\varphi \rightarrow \varphi }\right) \) for some finite subset \( \left\{ {{\psi }_{0},\ldots ,{\psi }_{m - 1}}\right\} \) of \( \bigcup \mathcal{B} \) . Say \( {\psi }_{0} \in {\Delta }_{0} \in \mathcal{B},\ldots ,{\psi }_{m - 1} \in {\Delta }_{m - 1} \in \mathcal{B} \) . Since \( \mathcal{B} \) is simply ordered, there is an \( i < m \) such that \( {\Delta }_{j} \subseteq {\Delta }_{i} \) for all \( j < m \) . Thus \( {\psi }_{0} \in {\Delta }_{i},\ldots ,{\psi }_{m - 1} \in {\Delta }_{i} \) , so \( {\Delta }_{i} \vdash \neg \left( {\varphi \rightarrow \varphi }\right) \) . Thus \( {\Delta }_{i} \) is inconsistent by 8.25, contradicting \( {\Delta }_{i} \in \mathcal{B} \) . Thus \( \cup \mathcal{B} \) is consistent. Hence we may apply Zorn’s lemma to obtain a member \( \Delta \) of \( \mathcal{A} \) maximal under inclusion. Now we establish some important properties of \( \Delta \) . (1) \[ \Delta \vdash \varphi \text{implies that}\varphi \in \Delta \text{.} \] For, if \( \Delta \vdash \varphi \) and \( \varphi \notin \Delta \), then \( \Delta \cup \{ \varphi \} \) is inconsistent, so by \( {8.26},\Delta \vdash \neg \varphi \) . Then by \( {8.14},\Delta \) is inconsistent, contradiction. (2) \[ \text{if}\varphi \in \text{Sent, then}\varphi \in \Delta \text{or}\neg \varphi \in \Delta \text{.} \] For, suppose \( \varphi \notin \Delta \) . Then \( \Delta \cup \{ \varphi \} \) is inconsistent, so by \( {8.26\Delta } \vdash \neg \varphi \), and (1) yields \( \neg \varphi \in \Delta \) . (3) \[ \varphi \rightarrow \psi \in \Delta \text{iff}\neg \varphi \in \Delta \text{or}\psi \in \Delta \text{.} \] To prove this, first suppose \( \neg \varphi \in \Delta \) . 
By 8.15 and (1), \( \varphi \rightarrow \psi \in \Delta \) . If \( \psi \in \Delta \) , then \( \varphi \rightarrow \psi \in \Delta \) by A1 and (1). Thus \( \Leftarrow \) in (3) holds. Now suppose \( \neg \varphi \notin \Delta \) and \( \psi \notin \Delta \) . By (2)
1074_(GTM232)An Introduction to Number Theory
Definition 10.7
Definition 10.7. Let \( G \) be a finite Abelian group. A character of \( G \) is a homomorphism \[ \chi : G \rightarrow \left( {{\mathbb{C}}^{ * }, \cdot }\right) \] The multiplicative group \( {\mathbb{C}}^{ * } \) is \( \mathbb{C} \smallsetminus \{ 0\} \) equipped with the usual multiplication. By convention, we will write all finite groups multiplicatively in this section - hence the identity will be written as \( {1}_{G} \) or 1 . For any group, the map \[ {\chi }_{0} : G \rightarrow {\mathbb{C}}^{ * },{\chi }_{0}\left( g\right) = 1, \] is a character called the trivial character. Lemma 10.8. Let \( G \) be a finite Abelian group, and let \( \chi \) be a character of \( G \) . Then \( \chi \left( {1}_{G}\right) = 1 \) and \( \chi \left( g\right) \) is a root of unity for any \( g \in G \) . In particular, \( \left| {\chi \left( g\right) }\right| = 1 \) . Thus \( \chi \left( g\right) \) lies on the unit circle in \( \mathbb{C} \) . Proof. Clearly \[ \chi \left( {1}_{G}\right) = \chi \left( {{1}_{G} \cdot {1}_{G}}\right) = \chi \left( {1}_{G}\right) \chi \left( {1}_{G}\right) \] so \( \chi \left( {1}_{G}\right) = 1 \) since \( \chi \left( {1}_{G}\right) \neq 0 \) . As to the second statement, we use the fact that for every \( g \in G \) there exists \( n \in \mathbb{N} \) such that \( {g}^{n} = {1}_{G} \) . This implies that \[ \chi {\left( g\right) }^{n} = \chi \left( {g}^{n}\right) = \chi \left( {1}_{G}\right) = 1. \] Example 10.9. Let \( G = {C}_{k} = \langle g\rangle \), a cyclic group of order \( k \) . Now \[ {g}^{k} = 1 \] so \[ \chi {\left( g\right) }^{k} = 1 \] and therefore \( \chi \left( g\right) \) must be a \( k \) th root of unity. Any of the \( k \) different \( k \) th roots of unity can occur as \( \chi \left( g\right) \), and of course \( \chi \left( g\right) \) determines all the values of \( \chi \) on \( G \) since \( G \) is generated by \( g \), so there are \( k \) distinct characters of \( G \) . 
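The count of characters in Example 10.9 can be checked numerically. The following sketch is our own illustration (the names `chi` and `values_at_g` are hypothetical): it represents \( {C}_{k} = \langle g\rangle \) by exponents and verifies that each \( {\chi }_{j}\left( {g}^{m}\right) = {e}^{{2\pi }\mathrm{i}{jm}/k} \) is a homomorphism with values on the unit circle, and that the \( k \) characters are pairwise distinct.

```python
import cmath

k = 6  # order of the cyclic group C_k = <g>; elements are written as g^m

def chi(j):
    """The character chi_j of C_k determined by chi_j(g) = e^(2 pi i j / k)."""
    return lambda m: cmath.exp(2j * cmath.pi * j * m / k)  # value at g^m

for j in range(k):
    c = chi(j)
    for a in range(k):
        # chi_j is a homomorphism: chi_j(g^a g^b) = chi_j(g^a) chi_j(g^b) ...
        for b in range(k):
            assert abs(c((a + b) % k) - c(a) * c(b)) < 1e-9
        # ... and its values are k-th roots of unity, hence of modulus 1.
        assert abs(abs(c(a)) - 1) < 1e-9

# The characters chi_0, ..., chi_{k-1} are pairwise distinct: they already
# differ at the generator g, where chi_j(g) runs through the k-th roots of unity.
values_at_g = [chi(j)(1) for j in range(k)]
assert all(abs(values_at_g[i] - values_at_g[j]) > 1e-9
           for i in range(k) for j in range(i + 1, k))
print(f"C_{k} has exactly {k} distinct characters")
```

Since a character is determined by its value at the generator, the \( k \) roots of unity exhaust all possibilities, matching the count in the example.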
We can label the characters of \( G \) with labels \( 0,1,\ldots, k - 1 \) as follows: \( {\chi }_{j} \) is determined by \( {\chi }_{j}\left( g\right) = {e}^{{2\pi }\mathrm{i}j/k} \), so \( {\chi }_{j}\left( {g}^{m}\right) = {e}^{{2\pi }\mathrm{i}{jm}/k} \). Theorem 10.10. Let \( G \) be a finite Abelian group. Then the characters of \( G \) form a group with respect to the multiplication \[ \left( {\chi \cdot \psi }\right) \left( g\right) = \chi \left( g\right) \psi \left( g\right) , \] denoted \( \widehat{G} \). The identity in \( \widehat{G} \) is the trivial character. The group \( \widehat{G} \) is isomorphic to \( G \). In particular, any finite Abelian group \( G \) of order \( n \) has exactly \( n \) distinct characters. This theorem is the first intimation of an entire dual world, a mirror image to the familiar world of finite Abelian groups. This duality extends to a larger class of Abelian groups and in that wider class takes subgroups to quotient groups, quotient groups to subgroups, and products to sums. Exercise 10.3. What happens if the same construction is made for other groups? (a) Describe the group \[ \widehat{\mathbb{Z}} = \left\{ {\text{ homomorphisms }\mathbb{Z} \rightarrow {\mathbb{S}}^{1}}\right\} . \] (b) For nondiscrete groups \( G \), we need to restrict to continuous characters. Find \[ \widehat{{\mathbb{S}}^{1}} = \left\{ {\text{continuous homomorphisms }{\mathbb{S}}^{1} \rightarrow {\mathbb{S}}^{1}}\right\} . \] (c)* A more challenging problem is to describe the group \( \widehat{\mathbb{Q}} \). Proof of Theorem 10.10. 
Use the structure theorem for finite Abelian groups, which says that \( G \) is isomorphic to a product of cyclic groups, \[ G \cong \mathop{\prod }\limits_{{j = 1}}^{k}{C}_{{n}_{j}} \] Choose a generator \( {g}_{j} \) for each of the factors \( {C}_{{n}_{j}} \) and define characters on \( G \) by \[ {\chi }^{\left( j\right) }\left( {*,\ldots ,*,{g}_{j},*,\ldots , * }\right) = {e}^{{2\pi }\mathrm{i}/{n}_{j}}, \] that is, ignore all entries except the \( j \) th, and there use the same definition as in Example 10.9. Then the characters \( {\chi }^{\left( 1\right) },\ldots ,{\chi }^{\left( k\right) } \) generate a subgroup of \( \widehat{G} \) that is isomorphic to \( G \) : Each \( {\chi }^{\left( j\right) } \) generates a cyclic group of order \( {n}_{j} \) , and this group has a trivial intersection with the span of all the other \( {\chi }^{\left( i\right) }\mathrm{s} \) since all characters in the latter have value 1 at \( {g}_{j} \) . Likewise, for any given character of \( G \), it is easy to write down a product of powers of the \( {\chi }^{\left( j\right) } \) that coincides with \( \chi \) on the generators \( {g}_{j} \) and hence on all of \( G \) . Corollary 10.11. Let \( G \) be a finite Abelian group. For any \( 1 \neq g \in G \), there exists \( \chi \in \widehat{G} \) such that \( \chi \left( g\right) \neq 1 \) . Proof. Looking again at the proof of Theorem 10.10, we may write \[ g = \left( {*,\ldots ,*,{g}_{j}^{r},*,\ldots , * }\right) \] with some entry \( {g}_{j}^{r} \neq 1,0 < r < {n}_{j} \) . Then \( {\chi }^{\left( j\right) }\left( g\right) = {e}^{{2\pi }\mathrm{i}r/{n}_{j}} \neq 1 \) . Theorem 10.12. Let \( G \) be a finite Abelian group. Then, for any element \( h \in \) \( G \) and any character \( \psi \in \widehat{G} \) , \[ \mathop{\sum }\limits_{{g \in G}}\psi \left( g\right) = \left\{ \begin{matrix} \left| G\right| & \text{ if }\psi = {\chi }_{0} \\ 0 & \text{ if }\psi \neq {\chi }_{0} \end{matrix}\right. 
\] (10.13) \[ \mathop{\sum }\limits_{{\chi \in \widehat{G}}}\chi \left( h\right) = \left\{ \begin{matrix} \left| G\right| & \text{ if }h = 1 \\ 0 & \text{ if }h \neq 1 \end{matrix}\right. \] (10.14) These identities are known as the orthogonality relations for finite Abelian group characters. Proof. Consider Equation (10.13) first. The case \( \psi = {\chi }_{0} \) is trivial, so assume \( \psi \neq {\chi }_{0} \) . There is an element \( h \in G \) such that \( \psi \left( h\right) \neq 1 \) . Then \[ \psi \left( h\right) \mathop{\sum }\limits_{{g \in G}}\psi \left( g\right) = \mathop{\sum }\limits_{{g \in G}}\psi \left( {gh}\right) = \mathop{\sum }\limits_{{g \in G}}\psi \left( g\right) \] because multiplication by \( h \) only permutes the summands. This equation can only be true if \( \mathop{\sum }\limits_{{g \in G}}\psi \left( g\right) = 0 \) . For Equation (10.14), assume \( h \neq 1 \) . By Corollary 10.11, there exists some character \( \psi \in \widehat{G} \) such that \( \psi \left( h\right) \neq 1 \) . We now use the dual of the argument above, \[ \psi \left( h\right) \mathop{\sum }\limits_{{\chi \in \widehat{G}}}\chi \left( h\right) = \mathop{\sum }\limits_{{\chi \in \widehat{G}}}\left( {\psi \cdot \chi }\right) \left( h\right) = \mathop{\sum }\limits_{{\chi \in \widehat{G}}}\chi \left( h\right) \] since multiplication by \( \psi \) only permutes the elements of \( \widehat{G} \), and again this can only be true if \( \mathop{\sum }\limits_{{\chi \in \widehat{G}}}\chi \left( h\right) = 0 \) . Corollary 10.13. For all \( g, h \in G \), we have \[ \mathop{\sum }\limits_{{\chi \in \widehat{G}}}\chi \left( g\right) \overline{\chi \left( h\right) } = \left\{ \begin{array}{ll} \left| G\right| & \text{ if }g = h \\ 0 & \text{ if }g \neq h \end{array}\right. \] Proof. Note that \[ \chi \left( {h}^{-1}\right) = \chi {\left( h\right) }^{-1} = \overline{\chi \left( h\right) } \] since \( \chi \left( h\right) \) is on the unit circle in \( \mathbb{C} \) . 
Then use Theorem 10.12 with \( g{h}^{-1} \) in place of \( h \). This is the gadget in its ultimate form. Character theory allows us to construct functions that will extract any desired residue class. As an example, take \( G = U\left( {\mathbb{Z}/5\mathbb{Z}}\right) \cong {C}_{4} \). Table 10.1 shows all the characters on \( G \). Table 10.1. Characters on \( U\left( {\mathbb{Z}/5\mathbb{Z}}\right) \). \[ \begin{array}{c|cccc} & {\chi }_{0} & {\chi }_{1} & {\chi }_{2} & {\chi }_{3} \\ \hline 1 & 1 & 1 & 1 & 1 \\ 2 & 1 & i & -1 & -i \\ 4 & 1 & -1 & 1 & -1 \\ 3 & 1 & -i & -1 & i \end{array} \] Note that we have written the elements of \( U\left( {\mathbb{Z}/5\mathbb{Z}}\right) \) in Table 10.1 in an unusual ordering \( {2}^{0},{2}^{1},{2}^{2},{2}^{3} \), adapted to the generator 2. The character values behave likewise. Note also \( {\chi }_{1}^{2} = {\chi }_{2} \) and \( {\chi }_{1}^{3} = {\chi }_{3} = {\chi }_{1}^{-1} \). We used earlier \[ {\chi }_{0}\left( n\right) + {\chi }_{1}\left( n\right) + {\chi }_{2}\left( n\right) + {\chi }_{3}\left( n\right) = 4{\mathrm{c}}_{1}\left( n\right) , \] which is just the case \( h = 1 \) of Corollary 10.13. We asked then, "What about \( {\mathrm{c}}_{2}\left( n\right) \), which is 1 if \( n \) is congruent to 2 and 0 otherwise?" The corollary suggests that we take \( h = 2 \), and we get \[ {\chi }_{0}\left( n\right) - i{\chi }_{1}\left( n\right) - {\chi }_{2}\left( n\right) + i{\chi }_{3}\left( n\right) = 4{\mathrm{c}}_{2}\left( n\right) . \] This can be checked simply by going through the possible cases. If you compare the ideas used here with Fourier analysis, much is familiar. The expression \[ \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{h \in G}}\mathrm{f}\left( h\right) \overline{\mathrm{g}\left( h\right) } \] is an inner product on the vector space of all functions on \( G \), and the characters form a complete orthonormal set. There are no difficulties about convergence because the group is finite. 
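Going through the possible cases, as suggested, is easy to automate. The sketch below is our own illustration (the names `elements` and `chi` are hypothetical): it rebuilds Table 10.1 from \( {\chi }_{j}\left( {2}^{m}\right) = {e}^{{2\pi }\mathrm{i}{jm}/4} \), checks the orthogonality relation of Corollary 10.13 on \( U\left( {\mathbb{Z}/5\mathbb{Z}}\right) \), and confirms the identity \( {\chi }_{0}\left( n\right) - i{\chi }_{1}\left( n\right) - {\chi }_{2}\left( n\right) + i{\chi }_{3}\left( n\right) = 4{\mathrm{c}}_{2}\left( n\right) \).

```python
import cmath

# U(Z/5Z) is cyclic of order 4, generated by 2: its elements in the
# ordering of Table 10.1 are 2^0, 2^1, 2^2, 2^3 = 1, 2, 4, 3 (mod 5).
elements = [pow(2, m, 5) for m in range(4)]  # [1, 2, 4, 3]

def chi(j, n):
    """chi_j(n): write the unit n as 2^m (mod 5), then chi_j(2^m) = e^(2 pi i j m / 4)."""
    m = elements.index(n % 5)
    return cmath.exp(2j * cmath.pi * j * m / 4)

# Orthogonality (Corollary 10.13): the sum over chi of chi(g) * conj(chi(h))
# equals |G| = 4 when g = h and 0 otherwise.
for g in elements:
    for h in elements:
        total = sum(chi(j, g) * chi(j, h).conjugate() for j in range(4))
        assert abs(total - (4 if g == h else 0)) < 1e-9

# The extraction identity for the residue class of 2:
# chi_0(n) - i chi_1(n) - chi_2(n) + i chi_3(n) = 4 c_2(n).
for n in elements:
    s = chi(0, n) - 1j * chi(1, n) - chi(2, n) + 1j * chi(3, n)
    assert abs(s - (4 if n % 5 == 2 else 0)) < 1e-9

print("orthogonality and the c_2(n) identity hold")
```

The coefficients \( 1, -i, -1, i \) in the second loop are exactly \( \overline{{\chi }_{j}\left( 2\right) } \), read off the second row of Table 10.1.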
In particular, any complex function on \( G \) can be written as a linear combination of the characters. ## 10.4 Dirichlet Characters and L-Functions Definition 10.14. Given \( 1 < q \in \mathbb{N} \), let \( G = U\left( {\mathbb{Z}/q\mathbb{Z}}\right) \) and fix a character \( \chi \) in \( \widehat{G} \) . Extend \( \chi \) to a function \( X \) on \( \mathbb{N} \) by setting \[ X\left( n\right) = \lef
1112_(GTM267)Quantum Theory for Mathematicians
Definition 23.21
Definition 23.21 A smooth, complex-valued function \( f \) on \( N \) is quantizable with respect to \( P \) if \( {Q}_{\text{pre }}\left( f\right) \) preserves the space of smooth sections that are polarized with respect to \( P \) . The following definition will provide a natural geometric condition guaranteeing quantizability of a function. Definition 23.22 A possibly complex vector field \( X \) preserves a polarization \( P \) if for every vector field \( Y \) lying in \( P \), the vector field \( \left\lbrack {X, Y}\right\rbrack \) also lies in \( P \) . Note that if \( X \) lies in \( P \), then \( X \) preserves \( P \), by the integrability assumption on \( P \) . There will typically be, however, many vector fields that do not lie in \( P \) but nevertheless preserve \( P \) . If \( X \) is a real vector field, then \( \left\lbrack {X, Y}\right\rbrack \) is the same as the Lie derivative \( {\mathcal{L}}_{X}\left( Y\right) \) . It is then not hard to show that \( X \) preserves \( P \) if and only if the flow generated by \( X \) preserves \( P \), that is, if and only if \( {\left( {\Phi }_{t}\right) }_{ * }\left( {P}_{z}\right) = {P}_{{\Phi }_{t}\left( z\right) } \) for all \( z \) and \( t \), where \( \Phi \) is the flow of \( X \) . Furthermore, if \( X \) is real, then \( X \) preserves \( P \) if and only if \( X \) preserves \( \bar{P} \) . Example 23.23 If \( N = {T}^{ * }M \) for some manifold \( M \) and \( P \) is the vertical polarization on \( N \), then a Hamiltonian vector field \( {X}_{f} \) preserves \( P \) if and only if \( f = {f}_{1} + {f}_{2} \), where \( {f}_{1} \) is constant on each fiber and \( {f}_{2} \) is linear on each fiber. Proof. In local coordinates \( \left\{ {{x}_{j},{p}_{j}}\right\} \), a vector field \( X \) lying in \( P \) has the form \( X = {g}_{j}\partial /\partial {p}_{j} \) . 
Thus, \[ \left\lbrack {{X}_{f}, X}\right\rbrack = \left\lbrack {\frac{\partial f}{\partial {p}_{j}}\frac{\partial }{\partial {x}_{j}},{g}_{k}\frac{\partial }{\partial {p}_{k}}}\right\rbrack - \left\lbrack {\frac{\partial f}{\partial {x}_{j}}\frac{\partial }{\partial {p}_{j}},{g}_{k}\frac{\partial }{\partial {p}_{k}}}\right\rbrack . \] This commutator will consist of three "good" terms, which involve only \( p \) -derivatives, along with the following "bad" term: \[ - {g}_{k}\frac{{\partial }^{2}f}{\partial {p}_{k}\partial {p}_{j}}\frac{\partial }{\partial {x}_{j}}. \] If \( {\partial }^{2}f/\partial {p}_{k}\partial {p}_{j} \) is 0 for all \( j \) and \( k \), then the bad term vanishes and \( \left\lbrack {{X}_{f}, X}\right\rbrack \) again lies in \( P \) . Conversely, if we want the bad term to vanish for each choice of the coefficient functions \( {g}_{j} \), we must have \( {\partial }^{2}f/\partial {p}_{k}\partial {p}_{j} = 0 \) for all \( j \) and \( k \) . Thus, for each fixed value of \( x, f \) must contain only terms that are independent of \( p \) and terms that are linear in \( p \) . ∎ We now identify the condition for quantizability of functions. Theorem 23.24 For any smooth, complex-valued function \( f \) on \( N \), if the Hamiltonian vector field \( {X}_{f} \) preserves \( \bar{P} \), then \( f \) is quantizable. Since we do not assume that \( f \) is real-valued, the condition that \( {X}_{f} \) preserve \( \bar{P} \) is not equivalent to the condition that \( {X}_{f} \) preserve \( P \) . Proof. Given a polarized section \( s \), we apply \( {Q}_{\text{pre }}\left( f\right) \) to \( s \) and then test whether \( {Q}_{\text{pre }}\left( f\right) s \) is still polarized, by applying \( {\nabla }_{X} \) for some vector field \( X \) lying in \( \bar{P} \) . 
To this end, it is useful to compute the commutator of \( {\nabla }_{X} \) and \( {Q}_{\text{pre }}\left( f\right) \), as follows: \[ \left\lbrack {{\nabla }_{X},{Q}_{\mathrm{{pre}}}\left( f\right) }\right\rbrack = i\hslash \left\lbrack {{\nabla }_{X},{\nabla }_{{X}_{f}}}\right\rbrack + \left\lbrack {{\nabla }_{X}, f}\right\rbrack \] \[ = i\hslash \left( {{\nabla }_{\left\lbrack X,{X}_{f}\right\rbrack } - \frac{i}{\hslash }\omega \left( {X,{X}_{f}}\right) }\right) + X\left( f\right) \] \[ = i\hslash {\nabla }_{\left\lbrack X,{X}_{f}\right\rbrack } \] (23.12) where we have used that \[ \omega \left( {X,{X}_{f}}\right) = - \omega \left( {{X}_{f}, X}\right) = - {df}\left( X\right) = - X\left( f\right) , \] by Definition 21.6. Since \( {X}_{f} \) preserves \( \bar{P} \), the vector field \( \left\lbrack {X,{X}_{f}}\right\rbrack \) again lies in \( \bar{P} \) and, thus, \[ {\nabla }_{X}\left( {{Q}_{\mathrm{{pre}}}\left( f\right) s}\right) = {Q}_{\mathrm{{pre}}}\left( f\right) {\nabla }_{X}s + i\hslash {\nabla }_{\left\lbrack X,{X}_{f}\right\rbrack }s = 0, \] for every polarized section \( s \), showing that \( {Q}_{\text{pre }}\left( f\right) s \) is again polarized. ∎ The converse of Theorem 23.24 is false in general. After all, as we will see in the following subsections, for a given polarization, there may not be any nonzero globally defined polarized sections, in which case, any function is quantizable. On the other hand, it can be shown that if \( {Q}_{\text{pre }}\left( f\right) \) preserves the space of locally defined polarized sections, then the Hamiltonian flow generated by \( f \) must preserve \( \bar{P} \) . This result follows by the same reasoning as in the proof of Theorem 23.24, once we know that there are sufficiently many locally defined polarized sections. 
We will establish such an existence result for purely real and purely complex polarizations in the following subsections; for the general case, see the discussion following Definition 9.1.1 in [45]. A special case of Theorem 23.24 is provided by "polarized functions," that is, functions \( f \) for which \( X\left( f\right) = 0 \) for all vector fields \( X \) lying in \( \bar{P} \) . For such an \( f \), the action of \( {Q}_{\text{pre }}\left( f\right) \) on the quantum space is simply multiplication by \( f \), as we anticipated in the introductory discussion in Sect. 23.4. Proposition 23.25 If \( f \) is a smooth, complex-valued function on \( N \) and the derivatives of \( f \) in the \( \bar{P} \) directions are zero, then \( {Q}_{\mathrm{{pre}}}\left( f\right) \) preserves the space of \( P \)-polarized sections, and the restriction of \( {Q}_{\mathrm{{pre}}}\left( f\right) \) to this space is simply multiplication by \( f \) . We have already seen special cases of this result in the \( {\mathbb{R}}^{2n} \) case; see the discussion following Proposition 22.11. Proof. If the derivatives of \( f \) in the direction of \( \bar{P} \) are zero, then for \( X \in \bar{P} \) , we have \[ 0 = X\left( f\right) = {df}\left( X\right) = \omega \left( {{X}_{f}, X}\right) \] meaning that \( {X}_{f} \) is in the \( \omega \) -orthogonal complement of \( \bar{P} \) . But since \( \bar{P} \) is Lagrangian, this complement is just \( \bar{P} \) . Thus, \( {X}_{f} \) belongs to \( \bar{P} \) and, in particular, \( {X}_{f} \) preserves \( \bar{P} \), so that \( f \) is quantizable, by Theorem 23.24. Furthermore, \( {\nabla }_{{X}_{f}}s = 0 \) for any \( P \)-polarized section \( s \), leaving only the \( {fs} \) term in the formula for \( {Q}_{\text{pre }}\left( f\right) s \) . ## 23.5.2 The Real Case In the \( {\mathbb{R}}^{2n} \) case, we have already computed the space of polarized sections for the vertical polarization in Proposition 22.8. 
As we observed there, there are no nonzero polarized sections that are square integrable over \( {\mathbb{R}}^{2n} \) . The same difficulty is easily seen to arise for the vertical polarization on any cotangent bundle \( N = {T}^{ * }M \) . In Sect. 23.6, we will introduce half-forms to deal with this failure of square integrability. We now examine properties of general real polarizations. We will see that polarized sections always exist locally, but not always globally. Proposition 23.26 If \( P \) is a purely real polarization on \( N \), then for any \( {z}_{0} \in N \), there exist a neighborhood \( U \) of \( {z}_{0} \) and a \( P \) -polarized section \( s \) of \( L \) defined over \( U \) such that \( s\left( {z}_{0}\right) \neq 0 \) . Proof. According to the local form of the Frobenius theorem, we can find a neighborhood \( U \) of \( {z}_{0} \) and a diffeomorphism \( \Phi \) of \( U \) with a neighborhood \( V \) of the origin in \( {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \) such that under \( \Phi \), the polarization \( P \) looks like the vertical polarization. That is to say, for each \( z \in U \), the image of \( {P}_{z} \) under \( {\Phi }_{ * }\left( z\right) \) is just the span of the vectors \( \partial /\partial {y}_{1},\ldots ,\partial /\partial {y}_{n} \), where the \( y \) ’s are the coordinates on the second copy of \( {\mathbb{R}}^{n} \) . By shrinking \( U \) if necessary, we can assume that \( L \) can be trivialized over \( U \) and that the open set \( V \) is the product of a ball \( {B}_{1} \) centered at the origin in the first copy of \( {\mathbb{R}}^{n} \) with a ball \( {B}_{2} \) centered at the origin in the second copy of \( {\mathbb{R}}^{n} \) . Let \( \theta \) be the connection 1 -form for an isometric trivialization of \( L \) over \( U \) and let \( \widetilde{\theta } = {\left( {\Phi }^{-1}\right) }^{ * }\left( \theta \right) \) . 
Since the subspaces \( {P}_{z} \) are Lagrangian, the restriction of \( \widetilde{\theta } \) to each set of the form \( \{ \mathbf{x}\} \times {B}_{2} \) is closed. Since \( {B}_{2} \) is simply connected, there exists, for each \( \mathbf{x} \in {B}_{1} \), a function \( {f}_{\mathbf{x}} \) on \( {B}_{2} \) such that the restriction of \( \widetilde{\theta } \) to \( \{ \mathbf{x}\} \times {B}_{2} \) equals \( d{f}_{\mathbf{x}} \) . If we assume that \( {f}_{\mathbf{x}}\left( 0\right) = 0 \), then \( {f}_{\mathbf{x}}\left( \mathbf{y}\right) \) will be smooth as a function of \( \left( {\mathbf{x},\mathbf{y}}\right) \) .
1143_(GTM48)General Relativity for Mathematicians
Definition 6.4.4
Definition 6.4.4. A simple cosmological model is a cosmological model \( \left( {M,\mathcal{M}, z}\right) \) such that: (a) \( \left( {M, g}\right) \) is a simple cosmological spacetime; (b) \( M = {\mathbb{R}}^{3} \times \left( {0,\infty }\right) \) and \( R \rightarrow 0 \) as \( {u}^{4} \rightarrow 0 \) ; (c) \( {u}^{4}z = {10}^{10} \) years. Here the motivations for, and limitations of, assumption (a) have already been outlined (Sections 6.2.3 and 6.2.8). In (b), taking \( \mathcal{F} = \left( {0, a}\right), a \in (0,\infty \rbrack \) , involves no further loss of generality given (a) and the assumptions (Section 6.2.3) of a cosmological model (Exercise 6.2.13b). The remaining parts of (b) are that \( a = \infty \) and that \( R \rightarrow 0 \) as \( {u}^{4} \rightarrow 0 \) . Both of these can usually be proved once a sufficiently detailed matter model is chosen (cf. Proposition 6.2.7 and Exercise 6.2.11 for two examples). Thus (b) in effect acts as an extremely general assumption on \( \mathcal{M} \), which replaces the very specific requirement that \( \mathcal{M} \) be a dust. For a perfect fluid that obeys the requirements of Lemma 6.4.3, our assumption (b) implies that the energy density \( \rho \rightarrow \infty \) as \( {u}^{4} \rightarrow 0 \) . For this (and other) reasons we can again regard \( {u}^{4} \rightarrow 0 \) as approach to a big bang and expect that, qualitatively speaking, matter is becoming ever denser in this limit. Then, in view of Exercise 6.2.13, \( {u}^{4} \) can again be interpreted as maximum proper time since the big bang. Thus the motivations for, and limitations of, the final assumption \( {u}^{4}z = {10}^{10} \) years are just as before (Section 6.2.9). Let \( \left( {M,\mathcal{M}, z}\right) \) be a simple cosmological model. We again designate \( {u}^{4} \) as the cosmological time. 
Exercise 6.2.13 shows that the comoving reference frame is \( {\partial }_{4} \), that \( {\partial }_{4} \) is expanding, and that \( \dot{R} > 0 \) . We can now use a simple cosmological model to cure the particular disease indicated by Proposition 6.4.2. The resulting model, detailed in Proposition 6.4.5, is superior to the Einstein-de Sitter model or the modified Einstein-de Sitter model of Section 6.4.1 in that the microwave radiation is built into this model from the beginning in the form of a rest-mass zero perfect fluid. In particular, as a consequence of the influence of this rest-mass zero perfect fluid on spacetime (via the Einstein field equation), the resulting spacetime is no longer the Einstein-de Sitter spacetime. Of course, even this model has its own limitations (Sections 6.2.1a, b and 6.6). Proposition 6.4.5. Let \( \left( {M,\mathcal{M}, z}\right) \) be a simple cosmological model such that: (a) \( \mathcal{M} \) is a superposition of a dust \( {\mathcal{M}}_{1} = \left( {{\rho }_{1},{\partial }_{4}}\right) \) and a rest-mass zero perfect fluid \( {\mathcal{M}}_{2} = \left( {{\rho }_{2},{\rho }_{2}/3,{\partial }_{4}}\right) \) ; (b) the stress-energy tensor \( {\widehat{T}}_{2} \) of \( {\mathcal{M}}_{2} \) obeys \( \operatorname{div}{\widehat{\mathbf{T}}}_{2} = 0 \) ; (c) \( {\partial }_{\mu }{\rho }_{2} = 0 \), for \( \mu = 1,2,3 \) . Let a be the positive number defined by \( {\rho }_{2}z = a{\rho }_{1}z \) . 
Then, up to equivalence (Exercise 6.0.18) the following hold: (d) \( R \) is the function determined implicitly by \( u = {\left\lbrack R\left( u\right) + b\right\rbrack }^{1/2} \times \) \( \left\lbrack {R\left( u\right) - {2b}}\right\rbrack + 2{b}^{3/2} \), where \( b = a{\left\{ \left( {u}^{4}z\right) /\left\lbrack \left( 1 - 2a\right) {\left( a + 1\right) }^{1/2} + 2{a}^{3/2}\right\rbrack \right\} }^{2/3} \) ; (e) \( {\rho }_{1} = \left( {4{R}^{-3}/3}\right) \circ {u}^{4} \) ; and (f) \( {\rho }_{2} = \left( {{4b}{R}^{-4}/3}\right) \circ {u}^{4} \) . 6.4.6 Remarks Before giving the proof we make a few comments. (a) The interpretation of interest here is that \( {\mathcal{M}}_{1} \) models the matter in galaxies and \( {\mathcal{M}}_{2} \), as mentioned above, models the microwave photons. In this context, \( a \) is about \( {10}^{-4} \) and all three assumptions in Proposition 6.4.5 have been motivated earlier in this section. However, the proposition has other applications (cf. Section 6.6.4). (b) Note that conditions 6.4.5d-f determine \( \left( {M,\mathcal{M}, z}\right) \) uniquely if it exists. We are thus presenting our model in the form of a uniqueness theorem; Exercise 6.4.10 states the corresponding existence result and gives hints on its proof. (c) Note that as \( a \rightarrow 0 \), one has \( b \rightarrow 0,{\rho }_{2} \rightarrow 0 \), and \( R\left( u\right) \rightarrow {u}^{2/3} \) ; thus the whole model approaches the Einstein-de Sitter model, as one would expect. Proof of Proposition 6.4.5. Let \( \left( {M,\mathcal{M}, z}\right) \) be a simple cosmological model such that Assumptions 6.4.5a-c hold. The Einstein field equation \( G = {T}_{1} + {T}_{2} \) implies div \( {\widehat{T}}_{1} = 0 \), because of (b) and div \( \widehat{G} = 0 \) ; it also implies \( {\partial }_{\mu }{\rho }_{1} = 0 \) for all \( \mu \in \{ 1,2,3\} \), because of (c) and Lemma 6.2.6b. 
Thus Lemma 6.4.3 implies both \( {\rho }_{1} = \left( {4{b}_{1}/3}\right) {\left( R \circ {u}^{4}\right) }^{-3} \) and \( {\rho }_{2} = \) \( \left( {4/3}\right) b{\left( R \circ {u}^{4}\right) }^{-4} \) for some \( {b}_{1}, b \in \left( {0,\infty }\right) \) . Using the diffeomorphism determined by \( {u}^{4} \rightarrow {u}^{4},{u}^{\mu } \rightarrow {\left( {b}_{1}\right) }^{-1/3}{u}^{\mu } \) for \( \mu = 1,2,3 \), we can and shall assume \( {b}_{1} = 1 \) without loss of generality (cf. Exercise 6.0.18a). Thus (e) and (f) hold. Using Lemma 6.2.6b again we now get \( 3{\left( \dot{R}/R\right) }^{2} = \) \( \left( {4/3}\right) \left( {{R}^{-3} + b{R}^{-4}}\right) \) . Since \( R \rightarrow 0 \) as \( {u}^{4} \rightarrow 0 \) we can integrate to obtain \( {u}^{4} = \left( {3/2}\right) {\int }_{0}^{R}{vdv}/{\left( v + b\right) }^{1/2} = {\left( R + b\right) }^{1/2}\left( {R - {2b}}\right) + 2{b}^{3/2} \) . To complete the proof of (d), we note from (e) and (f) that \( {\rho }_{2}z = a{\rho }_{1}z \) iff \( b = {aR}\left( {{u}^{4}z}\right) \) iff \( b = a{\left\{ {10}^{10}/\left\lbrack {\left( 1 + a\right) }^{1/2}\left( 1 - 2a\right) + 2{a}^{3/2}\right\rbrack \right\} }^{2/3}. \) Now that we have a model (Proposition 6.4.5) that takes into account the influence of the microwave photons on spacetime, we must next make sure that, if we abandon the Einstein-de Sitter model, we are not throwing out the baby with the bath water: that for the observations analyzed in Section 6.3 the model (Proposition 6.4.5) is no worse than the Einstein-de Sitter model. In fact such is the case. The observations mentioned in Section 6.3 concern only red shifts less than nine, in most cases very much less. 
Thus we are concerned with at most the \( {u}^{4} \) range determined by \( \left( {1/{10}}\right) R\left( {{u}^{4}z}\right) \leq R\left( {u}^{4}\right) \leq \) \( R\left( {{u}^{4}z}\right) \) (Exercise 6.0.17); here \( R \) is given by Proposition 6.4.5d with \( a \cong {10}^{-4} \) and \( {u}^{4}z = {10}^{10} \) years. Numerical estimates (using Taylor series) show that throughout this entire \( {u}^{4} \) range \( R\left( {u}^{4}\right) \) differs from \( {\left( {u}^{4}\right) }^{2/3} \) (the Einstein-de Sitter behavior) by less than \( 1\% \) . The same then holds for all the predicted effects in Section 6.3 and such small changes are negligible compared to the empirical uncertainties. As an example, note from Proposition 6.4.5d that \( a = {10}^{-4} \) gives \( b \cong {10}^{-4} \times {10}^{{20}/3} \) and thus \( R\left( {10}^{10}\right) \cong \left( {1 + {10}^{-4}}\right) {10}^{{20}/3} \) . The predicted value of the Hubble time is now \( t = \left( {R/\dot{R}}\right) \left( {10}^{10}\right) \) (Exercise 6.3.17c). The model gives \( R/\dot{R} = R\left( {d{u}^{4}/{dR}}\right) = \left( {3/2}\right) {R}^{2}{\left( R + b\right) }^{-1/2} \) (Proposition 6.4.5d). Thus \( t \cong \left( {3/2}\right) \left( {1 + \frac{3}{2} \cdot {10}^{-4}}\right) \times {10}^{10} \) ; that is, the predicted Hubble time differs from that predicted by the Einstein-de Sitter model by a bit more than \( {0.01}\% \) . Compared to the \( {15}\% \) observational inaccuracy of the Hubble constant (Section 6.1.7), and the still larger uncertainty in choosing \( {u}^{4}z = {10}^{10} \) years (Section 6.2.9), a discrepancy of \( \left( {3/2}\right) \times {10}^{-4} \) is grotesquely small. Thus the model (Proposition 6.4.5) shares the crude but important virtues of the Einstein-de Sitter model without having the disease diagnosed at the start of this section. Why didn't we use it ab initio? 
Because it is much clumsier than the Einstein-de Sitter model, is no better near here-now and, as we shall see shortly, has diseases of its own. ## EXERCISE 6.4.7. THE BIG BANG Let \( \left( {M,\mathcal{M}, z}\right) \) be a simple cosmological model such that \( \mathcal{M} \) is a superposition of perfect fluids \( \left\{ {\left( {{\rho }_{A},{p}_{A},{\partial }_{A}}\right) \mid A = 1,\ldots, N}\right\} \) . (a) Generalize Lemma 6.4.3 by showing \( \mathop{\sum }\limits_{{A = 1}}^{N}{\rho }_{A} \rightarrow \infty \) as \( {u}^{4} \rightarrow 0 \) . (Intuitively: "overall, matter gets denser as we approach the big bang".) (b) Use Example 3.12.1 to construct a case where \( N = 2 \) and \( {\rho }_{2} \rightarrow 0 \) as \( {u}^{4} \rightarrow 0 \) . Part (b) shows that not every matter component need be present at the big bang " \( {u}^{4} = 0 \) ": some may be made later. In fact probably helium and deuterium are made later (Section 6.5). There is some indication that, when quantum models are used for very early times in some general models, (a) can fail: conceivably all forms of matter were made, by quantum processes, after the big bang.
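The numerical claims above about Proposition 6.4.5 can be spot-checked by solving the implicit equation of 6.4.5d directly. The following Python sketch (a bisection solver; the constants \( a = {10}^{-4} \) and \( {u}^{4}z = {10}^{10} \) and the tolerances are taken from the discussion above) verifies that \( R\left( {u}^{4}\right) \) stays within \( 1\% \) of the Einstein-de Sitter behavior \( {\left( {u}^{4}\right) }^{2/3} \) over the range \( \left( {1/{10}}\right) R\left( {{u}^{4}z}\right) \leq R \leq R\left( {{u}^{4}z}\right) \) :

```python
def R_of_u(u, b, lo=0.0, hi=1e12):
    """Solve u = (R+b)^(1/2) (R-2b) + 2 b^(3/2) for R by bisection;
    the right-hand side is 0 at R = 0 and increasing in R."""
    for _ in range(200):
        mid = (lo + hi) / 2
        val = (mid + b)**0.5 * (mid - 2*b) + 2*b**1.5
        if val < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a = 1e-4        # photon-to-dust density ratio rho_2 z / rho_1 z
uz = 1e10       # u^4 z in years
denom = (1 - 2*a) * (1 + a)**0.5 + 2*a**1.5
b = a * (uz / denom)**(2/3)          # the constant b of 6.4.5d

Rz = R_of_u(uz, b)                   # R at the present time u^4 z
# Over (1/10) R(u^4 z) <= R <= R(u^4 z), R differs from the
# Einstein-de Sitter behavior u^(2/3) by less than 1%.
for frac in (0.1, 0.5, 1.0):
    R_target = frac * Rz
    u = (R_target + b)**0.5 * (R_target - 2*b) + 2*b**1.5
    assert abs(R_target - u**(2/3)) / u**(2/3) < 0.01
```

As expected from Remark 6.4.6c, taking \( a \) smaller drives \( b \rightarrow 0 \) and \( R\left( u\right) \rightarrow {u}^{2/3} \) .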
1056_(GTM216)Matrices
Definition 5.2
Definition 5.2 A square matrix \( M \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) is - Symmetric if \( {M}^{T} = M \) - Skew-symmetric if \( {M}^{T} = - M \) - Orthogonal if \( {M}^{T} = {M}^{-1} \) We denote by \( {\mathbf{H}}_{n} \) the set of Hermitian matrices in \( {\mathbf{M}}_{n}\left( \mathbb{C}\right) \) . It is an \( \mathbb{R} \) -linear subspace of \( {\mathbf{M}}_{n}\left( \mathbb{C}\right) \), but not a \( \mathbb{C} \) -linear subspace, because \( {iM} \) is skew-Hermitian when \( M \) is Hermitian. If \( M \in {\mathbf{M}}_{n}\left( \mathbb{C}\right) \), the matrices \( M + {M}^{ * }, i\left( {{M}^{ * } - M}\right), M{M}^{ * } \), and \( {M}^{ * }M \) are Hermitian. One sometimes calls \( \frac{1}{2}\left( {M + {M}^{ * }}\right) \) the real part of \( M \) and denotes it \( \Re M \) . Likewise, \( \frac{1}{2i}\left( {M - {M}^{ * }}\right) \) is the imaginary part of \( M \) and is denoted \( \Im M \) . Both are Hermitian and we have \[ M = \Re M + \mathrm{i}\Im M \] This terminology anticipates Chapter 10. A matrix \( M \) is unitary if \( {u}_{M} \) is an isometry, that is \( \langle {Mx},{My}\rangle \equiv \langle x, y\rangle \) . This is equivalent to saying that \( \parallel {Mx}\parallel \equiv \parallel x\parallel \) . The set of unitary matrices in \( {\mathbf{M}}_{n}\left( \mathbb{C}\right) \) forms a multiplicative group, denoted by \( {\mathbf{U}}_{n} \) . Unitary matrices satisfy \( \left| {\det M}\right| = 1 \), because \( \det {M}^{ * }M = {\left| \det M\right| }^{2} \) for every matrix \( M \) and \( {M}^{ * }M = {I}_{n} \) when \( M \) is unitary. The set of unitary matrices whose determinant equals 1, denoted by \( {\mathbf{{SU}}}_{n} \), is obviously a normal subgroup of \( {\mathbf{U}}_{n} \) . A matrix with real entries is orthogonal (respectively, symmetric, skew-symmetric) if and only if it is unitary (respectively, Hermitian, skew-Hermitian). 
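These identities are easy to confirm numerically. A small numpy sketch (the random matrix \( M \) and the QR-derived unitary matrix are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Real and imaginary parts: M = Re M + i Im M, both Hermitian
ReM = (M + M.conj().T) / 2
ImM = (M - M.conj().T) / (2j)
assert np.allclose(ReM, ReM.conj().T)
assert np.allclose(ImM, ImM.conj().T)
assert np.allclose(M, ReM + 1j * ImM)

# M M* is Hermitian for any M
assert np.allclose(M @ M.conj().T, (M @ M.conj().T).conj().T)

# A unitary matrix (here: the Q factor of a QR factorization)
# satisfies Q* Q = I and |det Q| = 1
Q, _ = np.linalg.qr(M)
assert np.allclose(Q.conj().T @ Q, np.eye(3))
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
```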
## 5.1.3 Matrices and Sesquilinear Forms Given a matrix \( M \in {\mathbf{M}}_{n}\left( \mathbb{C}\right) \), the map \[ \left( {x, y}\right) \mapsto \langle x, y{\rangle }_{M} \mathrel{\text{:=}} \mathop{\sum }\limits_{{j, k}}{m}_{jk}\overline{{x}_{j}}{y}_{k} = {x}^{ * }{My} \] defined on \( {\mathbb{C}}^{n} \times {\mathbb{C}}^{n} \), is a sesquilinear form. When \( M = {I}_{n} \), this is nothing but the scalar product. It is Hermitian if and only if \( M \) is Hermitian. It follows that \( M \mapsto \) \( \langle \cdot , \cdot {\rangle }_{M} \) is an isomorphism between \( {\mathbf{H}}_{n} \) and the set of Hermitian forms over \( {\mathbb{C}}^{n} \) . We say that a Hermitian matrix \( H \) is degenerate (respectively, nondegenerate) if the form \( \langle \cdot , \cdot {\rangle }_{H} \) is so. Nondegeneracy amounts to saying that \( x \mapsto {Hx} \) is one-to-one. In other words, we have the following. Proposition 5.2 A Hermitian matrix \( H \) is degenerate (respectively, nondegenerate) if and only if \( \det H = 0 \) (respectively, \( \neq 0 \) ). We say that the Hermitian matrix \( H \) is positive-definite if \( \langle \cdot , \cdot {\rangle }_{H} \) is so. Then \( \sqrt{\langle \cdot , \cdot {\rangle }_{H}} \) is a norm over \( {\mathbb{C}}^{n} \) . If \( - H \) is positive-definite, we say that \( H \) is negative-definite. We denote by \( {\mathbf{{HPD}}}_{n} \) the set of positive-definite Hermitian matrices. If \( H \) and \( K \) are positive-definite, and if \( \lambda \) is a positive real number, then \( {\lambda H} + K \) is positive-definite. Therefore \( {\mathbf{{HPD}}}_{n} \) is a convex cone in \( {\mathbf{H}}_{n} \) . This cone turns out to be open. The Hermitian matrices \( H \) for which \( \langle \cdot , \cdot {\rangle }_{H} \) is a positive-semidefinite Hermitian form over \( {\mathbb{C}}^{n} \) are called positive-semidefinite Hermitian matrices. They also form a convex cone \( {\mathbf{H}}_{n}^{ + } \) . 
If \( H \in {\mathbf{H}}_{n}^{ + } \) and \( \varepsilon \) is a positive real number, then \( H + \varepsilon {I}_{n} \) is positive-definite. Because \( H + \varepsilon {I}_{n} \) tends to \( H \) as \( \varepsilon \rightarrow {0}^{ + } \), we see that the closure of \( {\mathbf{{HPD}}}_{n} \) is \( {\mathbf{H}}_{n}^{ + } \) . One defines similarly, among the real symmetric matrices, the positive-definite, respectively, positive-semidefinite, ones. Again, the positive-definite real symmetric matrices form an open cone in \( {\operatorname{Sym}}_{n}\left( \mathbb{R}\right) \), denoted by \( {\mathbf{{SPD}}}_{n} \), whose closure \( {\operatorname{Sym}}_{n}^{ + } \) is made of positive-semidefinite ones. The cone \( {\mathbf{{HPD}}}_{n} \) defines an order over \( {\mathbf{H}}_{n} \) : we write \( K > H \) when \( K - H \in {\mathbf{{HPD}}}_{n} \), and more generally \( K \geq H \) if \( K - H \) is positive-semidefinite. The fact that \[ \left( {K \geq H \geq K}\right) \Rightarrow \left( {K = H}\right) \] follows from the next lemma. Lemma 6. Let \( H \) be Hermitian. If \( {x}^{ * }{Hx} = 0 \) for every \( x \in {\mathbb{C}}^{n} \), then \( H = {0}_{n} \) . Proof. Using (1.1), we have \( {y}^{ * }{Hx} = 0 \) for every \( x, y \in {\mathbb{C}}^{n} \) . Therefore \( {Hx} = 0 \) for every \( x \), which means \( H = {0}_{n} \) . We likewise define an ordering on real symmetric matrices, referring to the ordering on real-valued quadratic forms.\( {}^{1} \)
\( {}^{1} \) We warn the reader that another order, a completely different one, although still denoted by the same symbol \( \geq \), is defined in Chapter 8. The latter concerns general \( n \times m \) real-valued matrices, whereas the present one deals only with symmetric matrices. In practice, the context is never ambiguous.
If \( U \) is unitary, the matrix \( {U}^{ * }{MU} \) is similar to \( M \), and we say that they are unitarily similar. If \( M \) is normal, Hermitian, skew-Hermitian, or unitary, and if \( U \) is unitary, then \( {U}^{ * }{MU} \) is still normal, Hermitian, skew-Hermitian, or unitary. When \( O \in {\mathbf{O}}_{n}\left( \mathbb{R}\right) \) and \( M \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \), we again say that \( {O}^{T}{MO} \) and \( M \) are orthogonally similar. We notice that Lemma 6 implies the following stronger result. Proposition 5.3 Let \( M \in {\mathbf{M}}_{n}\left( \mathbb{C}\right) \) be given. If \( {x}^{ * }{Mx} = 0 \) for every \( x \in {\mathbb{C}}^{n} \), then \( M = {0}_{n} \) . Proof. We decompose \( M = H + \mathrm{i}K \) into its real and imaginary parts. Recall that \( H, K \) are Hermitian. Then \[ {x}^{ * }{Mx} = {x}^{ * }{Hx} + \mathrm{i}{x}^{ * }{Kx} \] is the decomposition of a complex number into real and imaginary parts. From the assumption, we therefore have \( {x}^{ * }{Hx} = 0 \) and \( {x}^{ * }{Kx} = 0 \) for every \( x \) . Then Lemma 6 tells us that \( H = K = {0}_{n} \) . ## 5.2 Eigenvalues of Real- and Complex-Valued Matrices Let us recall that \( \mathbb{C} \) is algebraically closed. Therefore the characteristic polynomial of a complex-valued square matrix has roots if \( n \geq 1 \), and hence every endomorphism of a nontrivial \( \mathbb{C} \) -vector space possesses eigenvalues. A real-valued square matrix may have no eigenvalues in \( \mathbb{R} \), but it has at least one in \( \mathbb{C} \) . If \( n \) is odd, \( M \in {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) has at least one real eigenvalue, because \( {P}_{M} \) is real of odd degree. ## 5.2.1 Unitary Trigonalization If \( K = \mathbb{C} \), one sharpens Theorem 3.5. Theorem 5.1 (Schur) If \( M \in {\mathbf{M}}_{n}\left( \mathbb{C}\right) \), there exists a unitary matrix \( U \) such that \( {U}^{ * }{MU} \) is upper-triangular. One also says that every matrix with complex entries is unitarily trigonalizable. Proof. 
We proceed by induction over the size \( n \) of the matrices. The statement is trivial if \( n = 1 \) . Let us assume that it is true in \( {\mathbf{M}}_{n - 1}\left( \mathbb{C}\right) \), with \( n \geq 2 \) . Let \( M \in {\mathbf{M}}_{n}\left( \mathbb{C}\right) \) be a matrix. Because \( \mathbb{C} \) is algebraically closed, \( M \) has at least one eigenvalue \( \lambda \) . Let \( X \) be an eigenvector associated with \( \lambda \) . By dividing \( X \) by \( \parallel X\parallel \), we can assume that \( X \) is a unit vector. One can then find an orthonormal basis \( \left\{ {{X}^{1},{X}^{2},\ldots ,{X}^{n}}\right\} \) of \( {\mathbb{C}}^{n} \) whose first element is \( X \) . Let us consider the matrix \( V \mathrel{\text{:=}} \left( {{X}^{1}\mid {X}^{2}\mid \cdots \mid {X}^{n}}\right) \) whose columns are these basis vectors (so \( {X}^{1} = X \) ), which is unitary, and let us form the matrix \( {M}^{\prime } \mathrel{\text{:=}} {V}^{ * }{MV} \) . Because \[ V{M}^{\prime }{\mathbf{e}}^{1} = {MV}{\mathbf{e}}^{1} = {MX} = {\lambda X} = {\lambda V}{\mathbf{e}}^{1}, \] one obtains \( {M}^{\prime }{\mathbf{e}}^{1} = \lambda {\mathbf{e}}^{1} \) . In other words, \( {M}^{\prime } \) has the block-triangular form: \[ {M}^{\prime } = \left( \begin{matrix} \lambda & \cdots \\ {0}_{n - 1} & N \end{matrix}\right) \] where \( N \in {\mathbf{M}}_{n - 1}\left( \mathbb{C}\right) \) . Applying the induction hypothesis, there exists \( W \in {\mathbf{U}}_{n - 1} \) such that \( {W}^{ * }{NW} \) is upper-triangular. Let us denote by \( \widehat{W} \) the (block-diagonal) matrix \( \operatorname{diag}\left( {1, W}\right) \in {\mathbf{U}}_{n} \) . Then \( {\widehat{W}}^{ * }{M}^{\prime }\widehat{W} \) is upper-triangular. Hence, \( U = V\widehat{W} \) satisfies the conditions of the theorem. A useful consequence of Theorem 5.1 is the following. Corollary 5.1 The set of diagonalizable matrices is a dense subset in \( {\mathbf{M}}_{n}\left( \mathbb{C}\right) \) . 
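Schur's theorem is implemented in standard numerical libraries. A minimal check with scipy.linalg.schur (its complex output mode corresponds to the unitary trigonalization above; the random matrix is an illustrative choice):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Schur factorization: M = U T U* with U unitary, T upper-triangular
T, U = schur(M, output='complex')
assert np.allclose(U.conj().T @ U, np.eye(4))   # U is unitary
assert np.allclose(U.conj().T @ M @ U, T)       # U* M U = T
assert np.allclose(T, np.triu(T))               # T is upper-triangular
```

The diagonal of \( T \) carries the eigenvalues of \( M \), which is one way to see why triangularizability is useful in practice.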
## Remark The set of real matrices diagonalizable within \( {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) is not dense in \( {\mathbf{M}}_{n}\left( \mathbb{R}\right) \) . For instance, the matrix \[ \left( \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right) \] has eigenvalues \( \pm \mathrm{i} \) ; since the eigenvalues depend continuously on the entries, every real matrix sufficiently close to it also has non-real eigenvalues, and therefore cannot be diagonalized within \( {\mathbf{M}}_{2}\left( \mathbb{R}\right) \) .
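The remark's example matrix \( \left( \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right) \) can be probed numerically: its eigenvalues are \( \pm \mathrm{i} \), and small real perturbations keep them non-real (a numpy sketch; the perturbation size \( {10}^{-3} \) is an arbitrary illustrative choice):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Eigenvalues of J are +/- i: purely imaginary, so no real eigenvalues
w = np.linalg.eigvals(J)
assert np.allclose(sorted(w.imag), [-1.0, 1.0])
assert np.allclose(w.real, 0.0)

# Small real perturbations of J still have non-real eigenvalues,
# so J is not a limit of real-diagonalizable matrices
rng = np.random.default_rng(2)
for _ in range(100):
    wp = np.linalg.eigvals(J + 1e-3 * rng.standard_normal((2, 2)))
    assert np.any(np.abs(wp.imag) > 0.5)
```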
1167_(GTM73)Algebra
Definition 6.1
Definition 6.1. An algebra A with identity over a field \( \mathbf{K} \) is said to be central simple if A is a simple \( \mathbf{K} \) -algebra and the center of \( \mathbf{A} \) is precisely \( \mathbf{K} \) . EXAMPLE. Let \( D \) be a division ring and let \( K \) be the center of \( D \) . It is easy to verify that if \( d \) is a nonzero element of \( K \), then \( {d}^{-1} \in K \) . Consequently \( K \) is a field. Clearly \( D \) is an algebra over \( K \) (with \( K \) acting by ordinary multiplication in \( D \) ). Furthermore since \( D \) is a simple ring with identity, it is also simple as an algebra. Thus \( D \) is a central simple algebra over \( K \) . Recall that if \( A \) and \( B \) are \( K \) -algebras with identities, then so is their tensor product \( A{\bigotimes }_{K}B \) (Theorem IV.7.4). The product of \( a \otimes b \) and \( {a}_{1} \otimes {b}_{1} \) is \( a{a}_{1} \otimes b{b}_{1} \) . Here and below we shall denote the set \( \left\{ {{1}_{A} \otimes b \mid b \in B}\right\} \) by \( {1}_{A}{\bigotimes }_{K}B \) and \( \left\{ {a \otimes {1}_{B} \mid a \in A}\right\} \) by \( A{ \otimes }_{K}{1}_{B} \) . Note that \( A{ \otimes }_{K}B = \left( {A{ \otimes }_{K}{1}_{B}}\right) \left( {{1}_{A}{ \otimes }_{K}B}\right) \) ; see p. 124. Theorem 6.2. If \( \mathrm{A} \) is a central simple algebra over a field \( \mathrm{K} \) and \( \mathrm{B} \) is a simple \( \mathrm{K} \) -algebra with identity, then \( \mathrm{A}{\bigotimes }_{\mathrm{K}}\mathrm{B} \) is a simple \( \mathrm{K} \) -algebra. PROOF. Since \( B \) is a vector space over \( K \), it has a basis \( Y \) and by Theorem IV.5.11 every element \( u \) of \( A{\bigotimes }_{K}B \) can be written \( \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i} \otimes {y}_{i} \), with \( {y}_{i} \in Y \) and the \( {a}_{i} \) unique. 
If \( U \) is any nonzero ideal of \( A{\bigotimes }_{K}B \), choose a nonzero \( u \in U \) such that \( u = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i} \otimes {y}_{i} \), with all \( {a}_{i} \neq 0 \) and \( n \) minimal. Since \( A \) is simple with identity and \( A{a}_{1}A \) is a nonzero ideal, \( A{a}_{1}A = A \) . Consequently there are elements \( {r}_{1},\ldots \) , \( {r}_{t},{s}_{1},\ldots ,{s}_{t} \in A \) such that \( {1}_{A} = \mathop{\sum }\limits_{{j = 1}}^{t}{r}_{j}{a}_{1}{s}_{j} \) . Since \( U \) is an ideal, the element \( v = \) \( \mathop{\sum }\limits_{{j = 1}}^{t}\left( {{r}_{j} \otimes {1}_{B}}\right) u\left( {{s}_{j} \otimes {1}_{B}}\right) \) is in \( U \) . Now \[ v = \mathop{\sum }\limits_{j}\left( {{r}_{j} \otimes {1}_{B}}\right) \left( {\mathop{\sum }\limits_{i}{a}_{i} \otimes {y}_{i}}\right) \left( {{s}_{j} \otimes {1}_{B}}\right) = \mathop{\sum }\limits_{i}\left( {\mathop{\sum }\limits_{j}{r}_{j}{a}_{i}{s}_{j}}\right) \otimes {y}_{i} \] \[ = \mathop{\sum }\limits_{j}{r}_{j}{a}_{1}{s}_{j} \otimes {y}_{1} + \mathop{\sum }\limits_{{i = 2}}^{n}\left( {\mathop{\sum }\limits_{j}{r}_{j}{a}_{i}{s}_{j}}\right) \otimes {y}_{i} = {1}_{A} \otimes {y}_{1} + \mathop{\sum }\limits_{{i = 2}}^{n}{\bar{a}}_{i} \otimes {y}_{i}, \] where \( {\bar{a}}_{i} = \mathop{\sum }\limits_{{j = 1}}^{t}{r}_{j}{a}_{i}{s}_{j} \) . By the minimality of \( n,{\bar{a}}_{i} \neq 0 \) for all \( i \geq 2 \) . If \( a \in A \), then the element \( w = \left( {a \otimes {1}_{B}}\right) v - v\left( {a \otimes {1}_{B}}\right) \) is in \( U \) and \[ w = \left( {a \otimes {y}_{1} + \mathop{\sum }\limits_{{i = 2}}^{n}a{\bar{a}}_{i} \otimes {y}_{i}}\right) - \left( {a \otimes {y}_{1} + \mathop{\sum }\limits_{{i = 2}}^{n}{\bar{a}}_{i}a \otimes {y}_{i}}\right) \] \[ = \mathop{\sum }\limits_{{i = 2}}^{n}\left( {a{\bar{a}}_{i} - {\bar{a}}_{i}a}\right) \otimes {y}_{i} \] By the minimality of \( n, w = 0 \) and \( a{\bar{a}}_{i} - {\bar{a}}_{i}a = 0 \) for all \( i \geq 2 \) . 
Thus \( a{\bar{a}}_{i} = {\bar{a}}_{i}a \) for all \( a \in A \) and each \( {\bar{a}}_{i} \) is in the center of \( A \), which by assumption is precisely \( K \) . Therefore \[ v = {1}_{A} \otimes {y}_{1} + \mathop{\sum }\limits_{{i = 2}}^{n}{\bar{a}}_{i} \otimes {y}_{i} = {1}_{A} \otimes {y}_{1} + \mathop{\sum }\limits_{{i = 2}}^{n}{1}_{A} \otimes {\bar{a}}_{i}{y}_{i} = {1}_{A} \otimes b, \] where \( b = {y}_{1} + {\bar{a}}_{2}{y}_{2} + \cdots + {\bar{a}}_{n}{y}_{n} \in B \) . Since each \( {\bar{a}}_{i} \neq 0 \) and the \( {y}_{i} \) are linearly independent over \( K \), \( b \neq 0 \) . Thus, since \( B \) has an identity, the ideal \( {BbB} \) is precisely \( B \) by simplicity. Therefore, \[ {1}_{A}{ \otimes }_{K}B = {1}_{A} \otimes {BbB} = \left( {{1}_{A}{ \otimes }_{K}B}\right) \left( {{1}_{A} \otimes b}\right) \left( {{1}_{A}{ \otimes }_{K}B}\right) \] \[ = \left( {{1}_{A}{ \otimes }_{K}B}\right) v\left( {{1}_{A}{ \otimes }_{K}B}\right) \subset U. \] Consequently, \[ A{\bigotimes }_{K}B = \left( {A{\bigotimes }_{K}{1}_{B}}\right) \left( {{1}_{A}{\bigotimes }_{K}B}\right) \subset \left( {A{\bigotimes }_{K}{1}_{B}}\right) U \subset U. \] Therefore \( U = A{ \otimes }_{K}B \) and there is only one nonzero ideal of \( A{ \otimes }_{K}B \) . Since \( A{\bigotimes }_{K}B \) has an identity \( {1}_{A}\bigotimes {1}_{B},{\left( A{\bigotimes }_{K}B\right) }^{2} \neq 0 \), whence \( A{\bigotimes }_{K}B \) is simple. We now consider division rings. If \( D \) is a division ring and \( F \) is a subring of \( D \) containing \( {1}_{D} \) that is a field, \( F \) is called a subfield of \( D \) . Clearly \( D \) is a vector space over any subfield \( F \) . A subfield \( F \) of \( D \) is said to be a maximal subfield if it is not properly contained in any other subfield of \( D \) . Maximal subfields always exist (Exercise 4). 
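For matrix algebras, the multiplication rule \( \left( {a \otimes b}\right) \left( {{a}_{1} \otimes {b}_{1}}\right) = a{a}_{1} \otimes b{b}_{1} \) used in the proof of Theorem 6.2 is realized concretely by the Kronecker product. A numpy sketch (the choices \( A = {\mathbf{M}}_{2}\left( \mathbb{R}\right) \) and \( B = {\mathbf{M}}_{3}\left( \mathbb{R}\right) \) and the random elements are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
a, a1 = rng.standard_normal((2, 2, 2))   # two elements of A = M_2(R)
b, b1 = rng.standard_normal((2, 3, 3))   # two elements of B = M_3(R)

# Mixed-product property: (a (x) b)(a1 (x) b1) = (a a1) (x) (b b1),
# with (x) realized as the Kronecker product np.kron
lhs = np.kron(a, b) @ np.kron(a1, b1)
rhs = np.kron(a @ a1, b @ b1)
assert np.allclose(lhs, rhs)

# A (x) 1_B and 1_A (x) B commute elementwise, as used in the proof
assert np.allclose(np.kron(a, np.eye(3)) @ np.kron(np.eye(2), b),
                   np.kron(np.eye(2), b) @ np.kron(a, np.eye(3)))
```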
Every maximal subfield \( F \) of \( D \) contains the center \( K \) of \( D \) (otherwise \( F \) and \( K \) would generate a subfield of \( D \) properly containing \( F \) ; Exercise 3). It is easy to see that \( F \) is actually a simple \( K \) -algebra. The maximal subfields of a division ring strongly influence the structure of the division ring itself, as the following theorems indicate. Theorem 6.3. Let \( \mathrm{D} \) be a division ring with center \( \mathrm{K} \) and let \( \mathrm{F} \) be a maximal subfield of D. Then D \( {\bigotimes }_{\mathrm{K}}\mathrm{F} \) is isomorphic (as a \( \mathrm{K} \) -algebra) to a dense subalgebra of \( {\operatorname{Hom}}_{\mathrm{F}}\left( {\mathrm{D},\mathrm{D}}\right) \), where \( \mathrm{D} \) is considered as a vector space over \( \mathrm{F} \) . PROOF. \( {\operatorname{Hom}}_{F}\left( {D, D}\right) \) is an \( F \) -algebra (third example after Definition IV.7.1) and hence a \( K \) -algebra. For each \( a \in D \) let \( {\alpha }_{a} : D \rightarrow D \) be defined by \( {\alpha }_{a}\left( x\right) = {xa} \) . For each \( c \in F \) let \( {\beta }_{c} : D \rightarrow D \) be defined by \( {\beta }_{c}\left( x\right) = {cx} \) . Verify that \( {\alpha }_{a},{\beta }_{c} \in {\operatorname{Hom}}_{F}\left( {D, D}\right) \) and that \( {\alpha }_{a}{\beta }_{c} = {\beta }_{c}{\alpha }_{a} \) for all \( a \in D, c \in F \) . Verify that the map \( D \times F \rightarrow {\operatorname{Hom}}_{F}\left( {D, D}\right) \) given by \( \left( {a, c}\right) \mapsto {\alpha }_{a}{\beta }_{c} \) is \( K \) -bilinear. By Theorem IV.5.6 this map induces a \( K \) -module homomorphism \( \theta : D{\bigotimes }_{K}F \rightarrow {\operatorname{Hom}}_{F}\left( {D, D}\right) \) such that \[ \theta \left( {\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i} \otimes {c}_{i}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{\alpha }_{{a}_{i}}{\beta }_{{c}_{i}}\;\left( {{a}_{i} \in D,{c}_{i} \in F}\right) . 
\] Verify that \( \theta \) is a \( K \) -algebra homomorphism, which is not zero (since \( \theta \left( {{1}_{D} \otimes {1}_{D}}\right) \) is the identity map on \( D \) ). Since \( D \) is a central simple \( K \) -algebra and \( F \) a simple \( K \) -algebra, \( D{ \otimes }_{K}F \) is simple by Theorem 6.2. Since \( \theta \neq 0 \) and \( \operatorname{Ker}\theta \) is an algebra ideal, \( \operatorname{Ker}\theta = 0 \), whence \( \theta \) is a monomorphism. Therefore \( D{ \otimes }_{K}F \) is isomorphic to the \( K \) -subalgebra \( \operatorname{Im}\theta \) of \( {\operatorname{Hom}}_{F}\left( {D, D}\right) \) . We must show that \( A = \operatorname{Im}\theta \) is dense in \( {\operatorname{Hom}}_{F}\left( {D, D}\right) \) . \( D \) is clearly a left module over \( {\operatorname{Hom}}_{F}\left( {D, D}\right) \) with \( {fd} = f\left( d\right) \) \( \left( {f \in {\operatorname{Hom}}_{F}\left( {D, D}\right), d \in D}\right) \) . Consequently \( D \) is a left module over \( A = \operatorname{Im}\theta \) . If \( d \) is a nonzero element of \( D \), then since \( D \) is a division ring, \[ {Ad} = \left\{ {\theta \left( u\right) \left( d\right) \mid u \in D{ \otimes }_{K}F}\right\} = \left\{ {\mathop{\sum }\limits_{i}{c}_{i}d{a}_{i} \mid i \in {\mathbf{N}}^{ * };{c}_{i} \in F;{a}_{i} \in D}\right\} = D. \] Consequently, \( D \) has no nontrivial \( A \) -submodules, whence \( D \) is a simple \( A \) -module. Furthermore \( D \) is a faithful \( A \) -module since the zero map is the only element \( f \) of \( {\operatorname{Hom}}_{F}\left( {D, D}\right) \) such that \( {fD} = 0 \) . Therefore, by the Density Theorem 1.12, \( A \) is isomorphic to a dense subring of \( {\operatorname{Hom}}_{\Delta }\left( {D, D}\right) \), where \( \Delta \) is the division ring \( {\operatorname{Hom}}_{A}\left( {D, D}\right) \) and \( D \) is a left \( \Delta \) -vector space.
Under the monomorphism \( A \rightarrow {\operatorname{Hom}}_{\Delta }\left( {D, D}\right) \) the image of \( f \in A \) is \( f \) considered as an element of \( {\operatorname{Hom}}_{\Delta }\left( {D, D}\right) \) . We now construct an isomorphism of rings \( F \cong \Delta \) . Let \( \beta : F \rightarrow \Delta = {\operatorname{Hom}}_{A}\left( {D, D}\right) \) be given by \( c \mapsto {\beta
1065_(GTM224)Metric Structures in Differential Geometry
Definition 1.1
Definition 1.1. A function \( f : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) is said to be symmetric if for any permutation \( \sigma \) of \( \{ 1,\ldots, n\}, f\left( {{\lambda }_{\sigma \left( 1\right) },\ldots ,{\lambda }_{\sigma \left( n\right) }}\right) = f\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \) for all \( {\lambda }_{i} \in \mathbb{R} \) . The elementary symmetric functions \( {s}_{1},\ldots ,{s}_{n} : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) are defined by \[ {s}_{k}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) = \mathop{\sum }\limits_{{{i}_{1} < \cdots < {i}_{k}}}{\lambda }_{{i}_{1}}{\lambda }_{{i}_{2}}\cdots {\lambda }_{{i}_{k}},\;1 \leq k \leq n. \] For example, \( {s}_{1}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) = \mathop{\sum }\limits_{i}{\lambda }_{i} \), and \( {s}_{n}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) = \mathop{\prod }\limits_{i}{\lambda }_{i} \) . A straightforward induction argument shows that \[ \left( {x - {\lambda }_{1}}\right) \cdots \left( {x - {\lambda }_{n}}\right) = {x}^{n} - {s}_{1}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) {x}^{n - 1} + \cdots + {\left( -1\right) }^{n}{s}_{n}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) . \] The polynomials \( {s}_{i} \) may be extended to \( {\mathbb{C}}^{n} \) . Notice that they are algebraically independent over the reals; i.e., if \( p : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) is a real polynomial such that \[ p\left( {{s}_{1}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) ,\ldots ,{s}_{n}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) }\right) = 0 \] for all \( {\lambda }_{i} \) with \( {s}_{j}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \in \mathbb{R} \), then \( p \equiv 0 \) . 
To see this, let \( {a}_{1},\ldots ,{a}_{n} \in \mathbb{R} \) , and denote by \( {\lambda }_{1},\ldots ,{\lambda }_{n} \in \mathbb{C} \) the roots of the equation (1.1) \[ {x}^{n} - {a}_{1}{x}^{n - 1} + \cdots + {\left( -1\right) }^{n}{a}_{n} = 0. \] Then \( {a}_{i} = {s}_{i}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \), and by assumption, \( p\left( {{a}_{1},\ldots ,{a}_{n}}\right) = 0 \) . Definition 1.2. A polynomial of degree \( k \) on a vector space \( V \) is a function \( p : V \rightarrow \mathbb{R} \) such that if \( {\omega }^{1},\ldots ,{\omega }^{n} \) is a basis of \( {V}^{ * } \), then there exist \( {a}_{{i}_{1}\cdots {i}_{k}} \in \mathbb{R} \) with \( p\left( v\right) = \sum {a}_{{i}_{1}\cdots {i}_{k}}{\omega }^{{i}_{1}}\left( v\right) \cdots {\omega }^{{i}_{k}}\left( v\right) \) for all \( v \in V \) . The coefficients of \( p \) may, and will, be assumed to be symmetric in the indices. The space of these polynomials will be denoted \( {P}_{k}\left( V\right) \), and \( P\left( V\right) \mathrel{\text{:=}} \) \( { \oplus }_{k = 0}^{\infty }{P}_{k}\left( V\right) \) is then an algebra with the usual product of functions. For example, \( {s}_{k} \in {P}_{k}\left( {\mathbb{R}}^{n}\right) \) . In fact, any symmetric polynomial \( f \) on \( {\mathbb{R}}^{n} \) is a function of \( {s}_{1},\ldots ,{s}_{n} \) : Given \( {a}_{1},\ldots ,{a}_{n} \in \mathbb{R} \), let \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) denote the corresponding roots of (1.1), and define \( F : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) by \( F\left( {{a}_{1},\ldots ,{a}_{n}}\right) = f\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \) . Then \[ f\left( {{x}_{1},\ldots ,{x}_{n}}\right) = F\left( {{s}_{1}\left( {{x}_{1},\ldots ,{x}_{n}}\right) ,\ldots ,{s}_{n}\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\right) . \] It can be shown that \( F \) may be chosen to be a polynomial. 
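Both the displayed coefficient identity for \( \left( {x - {\lambda }_{1}}\right) \cdots \left( {x - {\lambda }_{n}}\right) \) and the claim that a symmetric polynomial is a function of \( {s}_{1},\ldots ,{s}_{n} \) are easy to check numerically. A minimal sketch (the sample identity \( \sum {x}_{i}^{2} = {s}_{1}^{2} - 2{s}_{2} \) is a standard instance, not taken from the text):

```python
from itertools import combinations
from math import prod
from random import randint, seed

def s(k, lam):
    """Elementary symmetric function s_k(lam_1, ..., lam_n)."""
    return sum(prod(c) for c in combinations(lam, k))

seed(1)
lam = [randint(-5, 5) for _ in range(4)]
n = len(lam)

# Expand (x - lam_1)...(x - lam_n); poly[i] holds the coefficient of x^i.
poly = [1]
for l in lam:
    new = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        new[i + 1] += c       # contribution of x * (c x^i)
        new[i] -= l * c       # contribution of -l * (c x^i)
    poly = new

# The coefficient of x^(n-k) is (-1)^k s_k, as in the displayed identity.
assert all(poly[n - k] == (-1) ** k * s(k, lam) for k in range(n + 1))

# A symmetric polynomial expressed through the s_i: sum x_i^2 = s_1^2 - 2 s_2.
assert sum(x * x for x in lam) == s(1, lam) ** 2 - 2 * s(2, lam)
print("identities verified")
```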
When \( V \) is the Lie algebra \( \mathfrak{g} \) of a group \( G \), a polynomial \( p \) in \( P\left( \mathfrak{g}\right) \) is said to be invariant if \( p\left( {{\operatorname{Ad}}_{g}v}\right) = p\left( v\right) \) for all \( v \in V, g \in G \) . The collection \( {P}_{G} \) of invariant polynomials on \( \mathfrak{g} \) is a subalgebra of \( P\left( \mathfrak{g}\right) \) . EXAMPLE 1.1. For \( A \in \mathfrak{{gl}}\left( n\right) \), define \( {f}_{i}\left( A\right) = {s}_{i}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \), where \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) are the eigenvalues of \( A \) ; equivalently, \( {f}_{i} \) is determined by the equation \[ \det \left( {{xI} - A}\right) = {x}^{n} - {f}_{1}\left( A\right) {x}^{n - 1} + \cdots + {\left( -1\right) }^{n}{f}_{n}\left( A\right) . \] Then \( {f}_{i} \) is an invariant polynomial of degree \( i \) on \( \mathfrak{{gl}}\left( n\right) \) . Instead of working with \( {P}_{k}\left( \mathfrak{g}\right) \), it is often more convenient to deal with the space \( {S}_{k}\left( \mathfrak{g}\right) \) of symmetric tensors of type \( \left( {0, k}\right) \) on \( \mathfrak{g} \) : The polarization of a polynomial \( p = \sum {a}_{{i}_{1}\ldots {i}_{k}}{\omega }^{{i}_{1}}\cdots {\omega }^{{i}_{k}} \in {P}_{k}\left( \mathfrak{g}\right) \) is \( \operatorname{pol}\left( p\right) \mathrel{\text{:=}} \sum {a}_{{i}_{1}\ldots {i}_{k}}{\omega }^{{i}_{1}} \otimes \cdots \otimes \) \( {\omega }^{{i}_{k}} \in {S}_{k}\left( \mathfrak{g}\right) \) . pol : \( {P}_{k}\left( \mathfrak{g}\right) \rightarrow {S}_{k}\left( \mathfrak{g}\right) \) has inverse \( {\operatorname{pol}}^{-1}\left( T\right) \left( v\right) = T\left( {v,\ldots, v}\right) \) for \( T \in {S}_{k}\left( \mathfrak{g}\right), v \in \mathfrak{g} \) . 
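Returning to Example 1.1, the invariance \( {f}_{i}\left( {{\operatorname{Ad}}_{g}A}\right) = {f}_{i}\left( A\right) \) can be verified directly for \( \mathfrak{{gl}}\left( 3\right) \), since conjugation preserves the characteristic polynomial. A sketch in exact integer arithmetic (the specific matrices are illustrative choices, with \( g \) unimodular so that \( {g}^{-1} \) is also integral):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det3(M):                 # f_3 = determinant = s_3 of the eigenvalues
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def f1(M):                   # f_1 = trace = s_1 of the eigenvalues
    return M[0][0] + M[1][1] + M[2][2]

def f2(M):                   # f_2 = sum of principal 2x2 minors = s_2
    return sum(M[i][i] * M[j][j] - M[i][j] * M[j][i]
               for i in range(3) for j in range(i + 1, 3))

A = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]
g = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]           # unimodular
g_inv = [[1, -1, 1], [0, 1, -1], [0, 0, 1]]     # its integer inverse

B = matmul(matmul(g, A), g_inv)                 # B = Ad_g A = g A g^{-1}
assert (f1(A), f2(A), det3(A)) == (f1(B), f2(B), det3(B))
print(f1(A), f2(A), det3(A))  # 6 11 7
```

Here \( \det \left( {{xI} - A}\right) = {x}^{3} - 6{x}^{2} + {11x} - 7 \), and the coefficients are unchanged by conjugation.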
If we define multiplication in \( S\left( \mathfrak{g}\right) \mathrel{\text{:=}} { \oplus }_{k = 0}^{\infty }{S}_{k}\left( \mathfrak{g}\right) \) by \[ \left( {ST}\right) \left( {{v}_{1},\ldots ,{v}_{k + l}}\right) = \frac{1}{\left( {k + l}\right) !}\mathop{\sum }\limits_{{\sigma \in {P}_{k + l}}}S\left( {{v}_{\sigma \left( 1\right) },\ldots {v}_{\sigma \left( k\right) }}\right) \cdot T\left( {{v}_{\sigma \left( {k + 1}\right) },\ldots ,{v}_{\sigma \left( {k + l}\right) }}\right) \] for \( S \in {S}_{k}\left( \mathfrak{g}\right), T \in {S}_{l}\left( \mathfrak{g}\right) \), then the natural extension of pol to \( P\left( \mathfrak{g}\right) \) is an algebra isomorphism pol : \( P\left( \mathfrak{g}\right) \rightarrow S\left( \mathfrak{g}\right) \) . \( T \in {S}_{k}\left( \mathfrak{g}\right) \) is said to be invariant if \( T\left( {{\operatorname{Ad}}_{g}{v}_{1},\ldots ,{\operatorname{Ad}}_{g}{v}_{k}}\right) = T\left( {{v}_{1},\ldots ,{v}_{k}}\right) \) for \( g \in G,{v}_{i} \in \mathfrak{g} \) . The subalgebra \( {S}_{G} \) of invariant symmetric tensors is the isomorphic image of \( {P}_{G} \) via pol. We are now ready to carry these concepts over to bundles: Recall that if \( \xi = \pi : E \rightarrow M \) is a vector bundle over \( M \), then \( \operatorname{End}\left( \xi \right) = \operatorname{Hom}\left( {\xi ,\xi }\right) \) is the bundle over \( M \) with fiber \( \mathfrak{{gl}}\left( {E}_{p}\right) \) over \( p \) . Let us denote by \( {\operatorname{End}}_{k}\left( \xi \right) \) the \( k \) - fold tensor product \( \operatorname{End}\left( \xi \right) \otimes \cdots \otimes \operatorname{End}\left( \xi \right) \), and set \( \otimes \operatorname{End}\left( \xi \right) \mathrel{\text{:=}} { \oplus }_{k \geq 0}{\operatorname{End}}_{k}\left( \xi \right) \) . 
Since the fiber of this bundle is an algebra, we may define the product \( \alpha \otimes \) \( \beta \in {A}_{k + l}\left( {M, \otimes \operatorname{End}\left( \xi \right) }\right) \) of \( \otimes \operatorname{End}\left( \xi \right) \) -valued forms \( \alpha \in {A}_{k}\left( {M, \otimes \operatorname{End}\left( \xi \right) }\right) \) and \( \beta \in {A}_{l}\left( {M, \otimes \operatorname{End}\left( \xi \right) }\right) \) by \[ \left( {\alpha \otimes \beta }\right) \left( {{x}_{1},\ldots ,{x}_{k + l}}\right) = \frac{1}{k!l!}\mathop{\sum }\limits_{{\sigma \in {P}_{k + l}}}\alpha \left( {{x}_{\sigma \left( 1\right) },\ldots ,{x}_{\sigma \left( k\right) }}\right) \otimes \beta \left( {{x}_{\sigma \left( {k + 1}\right) },\ldots ,{x}_{\sigma \left( {k + l}\right) }}\right) . \] By Examples and Remarks 2.1(v) in Chapter 4, a connection on \( \xi \) induces one on \( \otimes \operatorname{End}\left( \xi \right) \) . Since multiplication in the algebra bundle is parallel, an argument similar to the one for the trivial line bundle \( {\epsilon }^{1} \) over \( M \) shows that the exterior covariant derivative operator satisfies (1.2) \[ {d}^{\nabla }\left( {\alpha \otimes \beta }\right) = {d}^{\nabla }\alpha \otimes \beta + {\left( -1\right) }^{k}\alpha \otimes {d}^{\nabla }\beta \] for \( \alpha ,\beta \) as above. Proposition 1.1. Let \( T \) denote a symmetric \( \left( {0, k}\right) \) tensor on \( \mathfrak{{gl}}\left( n\right) \), and \( \xi \) a rank \( n \) bundle over \( M \) with total space \( E \) and connection \( \nabla \) . If \( T \) is invariant, then it induces a parallel section \( \bar{T} \) of \( {\operatorname{End}}_{k}{\left( \xi \right) }^{ * } \) . Proof. 
Given \( p \in M \), choose an isomorphism \( b : {\mathbb{R}}^{n} \rightarrow {E}_{p} \), and define \[ \bar{T}\left( p\right) \left( {{L}_{1} \otimes \cdots \otimes {L}_{k}}\right) = T\left( {{b}^{-1} \circ {L}_{1} \circ b,\ldots ,{b}^{-1} \circ {L}_{k} \circ b}\right) ,\;{L}_{i} \in \mathfrak{{gl}}\left( {E}_{p}\right) . \] The argument used in the last section to show that \( \operatorname{Tr} \) is a well-defined parallel section of \( \operatorname{End}{\left( \xi \right) }^{ * } \) applies equally well to \( \bar{T} \) . If \( R \) is the curvature tensor of the connection on \( \xi \) and \( T \in {S}_{k}\left( {\mathfrak{{gl}}\left( n\right) }\right) \) is invariant, then the \( k \) -fold product \( {R}^{k} = R \otimes \cdots \otimes R \in {A}_{2k}\left( {M,{\operatorname{End}}_{k}\left( \xi \right) }\right) \), and \( \bar{T}\left( {R}^{k}\right) \) is an ordinary \( {2k} \) -form on \( M \) . THEOREM 1.1 (Weil). Let \( \xi \) denote a rank \
1083_(GTM240)Number Theory II
Definition 11.3.13
Definition 11.3.13. For \( k \in \mathbb{Z} \smallsetminus \{ 0\} \) we define the p-adic \( \chi \) -Bernoulli numbers and \( \chi \) -Euler constant by \[ {B}_{k, p}\left( \chi \right) = \mathop{\lim }\limits_{{r \rightarrow \infty }}{B}_{\phi \left( {p}^{r}\right) + k}\left( \chi \right) = - k{L}_{p}\left( {\chi {\omega }^{k},1 - k}\right) ,\text{ and } \] \[ {\gamma }_{p}\left( \chi \right) = - \mathop{\lim }\limits_{{r \rightarrow \infty }}\frac{{B}_{\phi \left( {p}^{r}\right) }\left( \chi \right) - \left( {1 - 1/p}\right) \delta \left( \chi \right) }{\phi \left( {p}^{r}\right) } \] \[ = \mathop{\lim }\limits_{{s \rightarrow 1}}\left( {{L}_{p}\left( {\chi, s}\right) - \frac{\left( {1 - 1/p}\right) \delta \left( \chi \right) }{s - 1}}\right) . \] In addition, we set \( {B}_{k, p} = {B}_{k, p}\left( {\chi }_{0}\right) \) and \( {\gamma }_{p} = {\gamma }_{p}\left( {\chi }_{0}\right) \) . Note that \( {\gamma }_{p} \) is the \( p \) -adic analogue of Euler’s constant, and that when \( \chi \neq {\chi }_{0} \) we evidently have \( {\gamma }_{p}\left( \chi \right) = {L}_{p}\left( {\chi ,1}\right) \), so that the notation \( {\gamma }_{p} \) is really useful only for \( \chi = {\chi }_{0} \) . Proposition 11.3.14. Assume that the conductor of \( \chi \) is a power of \( p \) (which is true in particular when \( \chi = {\chi }_{0} \) ). Then for \( k \in \mathbb{Z} \smallsetminus \{ 0\} \) we have \[ {B}_{k, p}\left( \chi \right) = \mathop{\lim }\limits_{{r \rightarrow \infty }}\frac{1}{{p}^{r}}\mathop{\sum }\limits_{{0 \leq n < {p}^{r}}}^{\left( p\right) }\chi \left( n\right) {n}^{k} = {\int }_{{\mathbb{Z}}_{p}^{ * }}\chi \left( t\right) {t}^{k}{dt}\;\text{ and } \] \[ {\gamma }_{p}\left( \chi \right) = - \mathop{\lim }\limits_{{r \rightarrow \infty }}\frac{1}{{p}^{r}}\mathop{\sum }\limits_{{0 \leq n < {p}^{r}}}^{\left( p\right) }\chi \left( n\right) {\log }_{p}\left( n\right) = - {\int }_{{\mathbb{Z}}_{p}^{ * }}\chi \left( t\right) {\log }_{p}\left( t\right) {dt}. 
\] Proof. This is a restatement of Corollary 11.3.11. From the definition it is immediate to deduce the following results. Proposition 11.3.15. (1) If \( \chi \left( {-1}\right) = {\left( -1\right) }^{k - 1} \) we have \( {B}_{k, p}\left( \chi \right) = 0 \), and if \( \chi \left( {-1}\right) = - 1 \) we have \( {\gamma }_{p}\left( \chi \right) = 0 \) . (2) If \( k \geq 1 \) we have \( {B}_{k, p}\left( \chi \right) = \left( {1 - \chi \left( p\right) {p}^{k - 1}}\right) {B}_{k}\left( \chi \right) \) . (3) As usual let \( m \) be a common multiple of \( f \) and \( {q}_{p} \), and set \( {H}_{n}\left( \chi \right) = \) \( \mathop{\sum }\limits_{{1 \leq a \leq m}}^{\left( p\right) }\chi \left( a\right) /{a}^{n} \) . If \( k \geq 1 \) and \( \chi \left( {-1}\right) = {\left( -1\right) }^{k} \) we have \[ {B}_{-k, p}\left( \chi \right) = k{L}_{p}\left( {\chi {\omega }^{-k}, k + 1}\right) = \mathop{\sum }\limits_{{j \geq 0}}{\left( -1\right) }^{j}\left( \begin{matrix} k + j - 1 \\ k - 1 \end{matrix}\right) {m}^{j - 1}{B}_{j}{H}_{k + j}\left( \chi \right) , \] and \[ {\gamma }_{p}\left( \chi \right) = \mathop{\lim }\limits_{{s \rightarrow 1}}\left( {{L}_{p}\left( {\chi, s}\right) - \frac{\left( {1 - 1/p}\right) \delta \left( \chi \right) }{s - 1}}\right) \] \[ = - \frac{1}{m}\mathop{\sum }\limits_{{0 \leq a < m}}^{\left( p\right) }\chi \left( a\right) {\log }_{p}\left( {\langle a\rangle }\right) + \mathop{\sum }\limits_{{j \geq 1}}\frac{{\left( -1\right) }^{j}}{j}{m}^{j - 1}{B}_{j}{H}_{j}\left( \chi \right) . \] (4) For all \( k \neq 0 \) we have \( {v}_{p}\left( {{B}_{k, p}\left( \chi \right) }\right) \geq - 1 \) . (5) If \( \chi \) is p-adically tame (see Definition 11.3.17 below) then \( {\gamma }_{p}\left( \chi \right) \) is p-integral, and in all cases \( {v}_{p}\left( {{\gamma }_{p}\left( \chi \right) }\right) \geq - 1 \) . Note that we will give stronger integrality statements later in Corollary 11.4.8. Proof. 
All the statements except the last two are clear from the definitions and Proposition 11.3.9. By Lemma 9.5.11 we know that \( {v}_{p}\left( {{B}_{k}\left( \chi \right) }\right) \geq - 1 \), and since \( {B}_{k, p}\left( \chi \right) = \mathop{\lim }\limits_{{r \rightarrow \infty }}{B}_{\phi \left( {p}^{r}\right) + k}\left( \chi \right) \) we also have \( {v}_{p}\left( {{B}_{k, p}\left( \chi \right) }\right) \geq - 1 \), proving (4). This also follows from (1), (2), and (3). For (5), since \( {\gamma }_{p}\left( \chi \right) = 0 \) if \( \chi \) is odd, we may assume that \( \chi \) is even. Since \( {q}_{p} \mid m \) we have \( {v}_{p}\left( {{m}^{j - 1}/j}\right) \geq 1 \) for all \( j \geq 2 \), so by the ordinary Clausen–von Staudt theorem \( {v}_{p}\left( {{m}^{j - 1}{B}_{j}/j}\right) \geq 0 \) for \( j \geq 2 \), and for \( j = 1 \) we have \( {m}^{j - 1}{B}_{j} = - 1/2 \), which has nonnegative valuation if \( p \neq 2 \) . Since \( \chi \) is an even character, for \( p = 2 \) we have \[ {H}_{1}\left( \chi \right) = \mathop{\sum }\limits_{{1 \leq a \leq m}}^{\left( p\right) }\frac{\chi \left( a\right) }{a} = \mathop{\sum }\limits_{{1 \leq a \leq m/2}}^{\left( p\right) }\chi \left( a\right) \left( {\frac{1}{a} + \frac{1}{m - a}}\right) = m\mathop{\sum }\limits_{{1 \leq a \leq m/2}}^{\left( p\right) }\frac{\chi \left( a\right) }{a\left( {m - a}\right) }, \] so \( {v}_{p}\left( {{H}_{1}\left( \chi \right) }\right) \geq {v}_{p}\left( m\right) \geq {v}_{p}\left( {q}_{p}\right) = 2 \), proving that the valuation of the sum is nonnegative in all cases. Finally, the first sum \( \left( {1/m}\right) \mathop{\sum }\limits_{{0 \leq a < m}}^{\left( p\right) }\chi \left( a\right) {\log }_{p}\left( {\langle a\rangle }\right) \) will be studied in Theorem 11.3.19 below, which tells us that its valuation is also nonnegative if \( \chi \) is \( p \) -adically tame, and that otherwise it is greater than or equal to \( -1 \) . For future reference, note the following corollary.
Corollary 11.3.16. Let \( k \geq 2 \) be an even integer. (1) We have \[ {B}_{-k, p} \equiv \frac{1}{p}\mathop{\sum }\limits_{{1 \leq a \leq p - 1}}\frac{1}{{a}^{k}}\left( {{\;\operatorname{mod}\;{p}^{v}}{\mathbb{Z}}_{p}}\right) , \] where \( v = 1 \) if \( 5 \leq p \leq k + 3 \), and \( v = 2 \) for \( p \geq k + 5 \) . (2) If \( p \geq k + 3 \) we have \[ {B}_{-k, p} \equiv \frac{k}{k + 1}{B}_{p - 1 - k}\left( {{\;\operatorname{mod}\;p}{\mathbb{Z}}_{p}}\right) . \] Proof. Immediate consequence of the proposition and of the Kummer congruences, and left to the reader; see Exercise 50. For example for \( p \geq 7 \) we have \( {B}_{-2, p} \equiv \left( {1/p}\right) \mathop{\sum }\limits_{{1 \leq a \leq p - 1}}1/{a}^{2}\left( {{\;\operatorname{mod}\;{p}^{2}}{\mathbb{Z}}_{p}}\right) \), and for \( p \geq 5 \) we have \( {B}_{-2, p} \equiv \left( {2/3}\right) {B}_{p - 3}\left( {{\;\operatorname{mod}\;p}{\mathbb{Z}}_{p}}\right) \) . The corresponding congruences for \( p \leq 5 \) can be read off from the table that we give below. Proposition 11.3.15 gives a practical way of computing the constants \( {B}_{-k, p}\left( \chi \right) \) and \( {\gamma }_{p}\left( \chi \right) \), since the definition as a limit of Bernoulli numbers converges much more slowly. For the convenience of the reader, we give a small table for \( \chi = {\chi }_{0} \), where as usual the \( p \) -adic digits are written from right to left, and the digits from 10 to 18 are coded with the letters \( A \) to \( I \) .
<table><thead><tr><th>\( p \)</th><th>\( {\gamma }_{p} \)</th><th>\( {B}_{-2, p} \)</th><th>\( {B}_{-4, p} \)</th></tr></thead><tbody>
<tr><td>2</td><td>\( \cdots {110110001100111} \)</td><td>\( \cdots {00000101000110.1} \)</td><td>\( \cdots {00111101111100.1} \)</td></tr>
<tr><td>3</td><td>\( \cdots {112010222121220} \)</td><td>\( \cdots {01001000002212.2} \)</td><td>\( \cdots {11210011021012.2} \)</td></tr>
<tr><td>5</td><td>\( \cdots {321122143203010} \)</td><td>\( \cdots {214004103314334} \)</td><td>\( \cdots {00341201131120.4} \)</td></tr>
<tr><td>7</td><td>\( \cdots {025121026026425} \)</td><td>\( \cdots {113431404032362} \)</td><td>\( \cdots {362564350404462} \)</td></tr>
<tr><td>11</td><td>\( \cdots {9317447545512A1} \)</td><td>\( \cdots {8682761505A028A} \)</td><td>\( \cdots {4349913A6604674} \)</td></tr>
<tr><td>13</td><td>\( \cdots {1893BC946787040} \)</td><td>\( \cdots {087B14A2BC94ACC} \)</td><td>\( \cdots {78C4809C3055B95} \)</td></tr>
<tr><td>17</td><td>\( \cdots {132AE449B942425} \)</td><td>\( \cdots {60294D387222D3E} \)</td><td>\( \cdots {539496A1G54488A} \)</td></tr>
<tr><td>19</td><td>\( \cdots {90489H87B72FHD2} \)</td><td>\( \cdots {G7GIF9767A0HGDA} \)</td><td>\( \cdots {F47AB7GDEB7E956} \)</td></tr>
</tbody></table>
<table><thead><tr><th>\( p \)</th><th>\( {B}_{-6, p} \)</th><th>\( {B}_{-8, p} \)</th><th>\( {B}_{-{10}, p} \)</th></tr></thead><tbody>
<tr><td>2</td><td>\( \cdots {10111110100010.1} \)</td><td>\( \cdots {00101010111000.1} \)</td><td>\( \cdots {10110110111110.1} \)</td></tr>
<tr><td>3</td><td>\( \cdots {00010112021000.2} \)</td><td>\( \cdots {22111112220122.2} \)</td><td>\( \cdots {01101110202202.2} \)</td></tr>
<tr><td>5</td><td>\( \cdots {241123000012322} \)</td><td>\( \cdots {20240200211300.4} \)</td><td>\( \cdots {330344030340240} \)</td></tr>
<tr><td>7</td><td>\( \cdots {54261355252232.6} \)</td><td>\( \cdots {033431442506531} \)</td><td>\( \cdots {506040436364625} \)</td></tr>
<tr><td>11</td><td>\( \cdots {8A7A967A8664645} \)</td><td>\( \cdots {625244199481503} \)</td><td>\( \cdots {17A273A506351A.A} \)</td></tr>
<tr><td>13</td><td>\( \cdots {0578730584B3284} \)</td><td>\( \cdots {B4AC3B114A10797} \)</td><td>\( \cdots {A3C140B38800A3A} \)</td></tr>
<tr><td>17</td><td>\( \cdots {AE16BFA8D998D2A} \)</td><td>\( \cdots {2G0ABEGEC44B3E4} \)</td><td>\( \cdots {EC1E4BCEE3E3G49} \)</td></tr>
<tr><td>19</td><td>\( \cdots {6EB027DEB2099B1} \)</td><td>\( \cdots {12ED0C4C01GE318} \)</td><td>\( \cdots {D958H1DE7004BG4} \)</td></tr>
</tbody></table>
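The congruences of Corollary 11.3.16 for \( k = 2 \) are easy to test numerically. A sketch in pure Python (computing Bernoulli numbers by the standard recurrence; not code from the text) checking that \( \left( {1/p}\right) \mathop{\sum }\limits_{{1 \leq a \leq p - 1}}1/{a}^{2} \) and \( \left( {2/3}\right) {B}_{p - 3} \) agree modulo \( p \) for several primes:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_n via the recurrence sum_{j=0}^{m-1} C(m+1, j) B_j = -(m+1) B_m."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B[n]

def lhs_mod_p(p):
    """(1/p) sum_{1<=a<=p-1} 1/a^2 reduced mod p, working mod p^3."""
    T = sum(pow(a, -2, p**3) for a in range(1, p)) % p**3
    assert T % p == 0        # the sum has p-adic valuation >= 1 for p >= 5
    return (T // p) % p

def rhs_mod_p(p):
    """(2/3) B_{p-3} mod p, the rational Bernoulli number reduced mod p."""
    B = bernoulli(p - 3)
    return (2 * B.numerator * pow(3 * B.denominator, -1, p)) % p

# both sides are congruent to B_{-2,p} mod p, hence to each other
for p in [5, 7, 11, 13, 17, 19]:
    assert lhs_mod_p(p) == rhs_mod_p(p)
print(lhs_mod_p(5), lhs_mod_p(7))  # 4 2
```

The printed values 4 and 2 match the last \( p \) -adic digits of \( {B}_{-2,5} \) and \( {B}_{-2,7} \) in the table above.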
1068_(GTM227)Combinatorial Commutative Algebra
Definition 10.5
Definition 10.5 The affine GIT quotient of \( {\mathbb{C}}^{n} \) modulo \( G \) is the affine toric variety \( \operatorname{Spec}\left( {S}^{G}\right) \) whose coordinate ring is the invariant ring \( {S}^{G} \) : \[ {\mathbb{C}}^{n}//G \mathrel{\text{:=}} \operatorname{Spec}\left( {S}^{G}\right) = \operatorname{Spec}\left( {S}_{\mathbf{0}}\right) = \operatorname{Spec}\left( {\mathbb{C}\left\lbrack Q\right\rbrack }\right) . \] where \( Q = {\mathbb{N}}^{n} \cap L \) is the saturated pointed semigroup in degree \( \mathbf{0} \) . The acronym GIT stands for Geometric Invariant Theory. Officially, the spectrum \( \operatorname{Spec}\left( {S}^{G}\right) \) of the ring \( {S}^{G} \) is the set of all prime ideals in \( {S}^{G} \) together with the Zariski topology on this set. However, since \( {S}^{G} \) is an integral domain that is generated as a \( \mathbb{C} \) -algebra by a finite set of monomials, namely those corresponding to the Hilbert basis \( {\mathcal{H}}_{Q} \) of \( Q \), we can identify \( \operatorname{Spec}\left( {S}^{G}\right) \) with the closure of the variety parametrized by those monomials. In particular, \( \operatorname{Spec}\left( {S}^{G}\right) \) is an irreducible affine subvariety of a complex vector space whose basis is in bijection with the Hilbert basis \( {\mathcal{H}}_{Q} \) . Observe that by Proposition 7.20, every saturated affine semigroup \( Q \) can be expressed as \( Q = L \cap {\mathbb{N}}^{n} \), so the spectrum of every normal affine semigroup ring \( \mathbb{C}\left\lbrack Q\right\rbrack \) is an affine toric variety. This construction of the quotient \( {\mathbb{C}}^{n}//G \) is fully satisfactory when \( G \) is a finite group. Note that in this case, the two groups \( \left( {G, * }\right) \) and \( \left( {A, + }\right) \) are actually isomorphic. Indeed, every cyclic group is isomorphic to its character group, and this property is preserved under taking direct sums. 
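As an illustration of computing the Hilbert basis \( {\mathcal{H}}_{Q} \) for a finite cyclic action (a brute-force sketch, not from the text): for \( G = \mathbb{Z}/p\mathbb{Z} \) acting by scaling every coordinate by a \( p \) th root of unity, the degree-zero semigroup is \( Q = \left\{ {v \in {\mathbb{N}}^{n} \mid p \text{ divides } {v}_{1} + \cdots + {v}_{n}}\right\} \), and an element of \( Q \) belongs to \( {\mathcal{H}}_{Q} \) exactly when it is not a sum of two nonzero elements of \( Q \) :

```python
from itertools import product

def hilbert_basis(n, p):
    """Brute-force Hilbert basis of Q = {v in N^n : p | v_1 + ... + v_n},
    searched inside the box [0, p]^n (generators here have coordinate sum p)."""
    Q = [v for v in product(range(p + 1), repeat=n)
         if any(v) and sum(v) % p == 0]
    Qset = set(Q)
    basis = []
    for v in Q:
        # v is a generator iff it is not a sum of two nonzero elements of Q
        decomposable = any(
            all(v[i] >= w[i] for i in range(n))
            and tuple(v[i] - w[i] for i in range(n)) in Qset
            for w in Q if w != v)
        if not decomposable:
            basis.append(v)
    return sorted(basis)

print(hilbert_basis(2, 2))  # [(0, 2), (1, 1), (2, 0)]
```

For \( n = p = 2 \) this recovers the monomial generators \( {y}^{2},{xy},{x}^{2} \) of the invariant ring of \( \left( {x, y}\right) \mapsto \left( {-x, - y}\right) \) .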
Let us work out an important family of examples of cyclic group actions. Example 10.6 (Veronese rings) Fix a positive integer \( p \) and let \( L \) denote the sublattice of \( {\mathbb{Z}}^{n} \) consisting of all vectors whose coordinate sum is divisible by \( p \) . Then \( A = {\mathbb{Z}}^{n}/L \) is isomorphic to the cyclic group \( \mathbb{Z}/p\mathbb{Z} \), and the grading of \( S = \mathbb{C}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) is given by total degree modulo \( p \) . The multiplicative group \( G \cong \mathbb{Z}/p\mathbb{Z} \) acts on \( {\mathbb{C}}^{n} \) via \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \mapsto \left( {\zeta {x}_{1},\ldots ,\zeta {x}_{n}}\right) \), where \( \zeta \) is a primitive \( {p}^{\text{th }} \) root of unity. The invariant ring \( {S}^{G} = {S}_{\mathbf{0}} \) is the \( \mathbb{C} \) -linear span of all monomials \( {x}_{1}^{{i}_{1}}{x}_{2}^{{i}_{2}}\cdots {x}_{n}^{{i}_{n}} \) with the property that \( p \) divides \( {i}_{1} + {i}_{2} + \cdots + {i}_{n} \) . It is minimally generated as a \( \mathbb{C} \) -algebra by those monomials with \( {i}_{1} + \cdots + {i}_{n} = p \) . Equivalently, the Hilbert basis of \( Q = L \cap {\mathbb{N}}^{n} \) is \[ {\mathcal{H}}_{Q} = \left\{ {\left( {{i}_{1},{i}_{2},\ldots ,{i}_{n}}\right) \in {\mathbb{N}}^{n} \mid {i}_{1} + {i}_{2} + \cdots + {i}_{n} = p}\right\} . \] The ring \( {S}^{G} \) is the \( {p}^{\text{th }} \) Veronese subring of the polynomial ring \( S \) . ## 10.2 Projective quotients A major drawback of the affine GIT quotient is that \( {\mathbb{C}}^{n}//G \) is often only a point. Indeed, the spectrum of \( {S}^{G} \) is a point if and only if \( {S}^{G} \) consists just of the ground field \( \mathbb{C} \), or equivalently, when the only polynomials constant along all \( G \) -orbits are the constant polynomials. In view of our characterization of positive gradings in Theorem 8.6, we reach the following conclusion.
Corollary 10.7 The A-grading is positive if and only if \( {\mathbb{C}}^{n}//G \) is a point. To fix this problem, we now introduce projective GIT quotients. These quotients are toric varieties that are not affine, so their description is a bit more tricky. In particular, more data are needed than simply the action of \( G \) on \( {\mathbb{C}}^{n} \) : we must fix an element \( \mathbf{a} \) in the grading group \( A \) . Consider the graded components \( {S}_{r\mathbf{a}} \) where \( r \) runs over all nonnegative integers, and take their (generally infinite) direct sum \[ {S}_{\left( \mathbf{a}\right) } = {S}_{\mathbf{0}} \oplus {S}_{\mathbf{a}} \oplus {S}_{2\mathbf{a}} \oplus {S}_{3\mathbf{a}} \oplus \cdots . \] (10.3) This graded \( {S}_{\mathbf{0}} \) -module, each of whose graded pieces \( {S}_{r\mathbf{a}} \) is finitely generated over \( {S}_{\mathbf{0}} \) by Proposition 8.4, is actually an \( {S}_{\mathbf{0}} \) -subalgebra of \( S \) . Indeed, the product of an element in \( {S}_{r\mathbf{a}} \) and an element in \( {S}_{{r}^{\prime }\mathbf{a}} \) lies in \( {S}_{\left( {r + {r}^{\prime }}\right) \mathbf{a}} \) by definition. Of course, every \( {S}_{\mathbf{0}} \) -algebra is automatically a \( \mathbb{C} \) -algebra as well. In what follows it will be crucial to distinguish the \( {S}_{\mathbf{0}} \) -algebra structure on \( {S}_{\left( \mathbf{a}\right) } \) from its \( \mathbb{C} \) -algebra structure. The \( {S}_{\mathbf{0}} \) -algebra structure carries a natural \( \mathbb{N} \) -grading, which we emphasize by introducing an auxiliary grading variable \( \gamma \) that allows us to write \[ {S}_{\left( \mathbf{a}\right) } = {\bigoplus }_{r = 0}^{\infty }{\gamma }^{r}{S}_{r\mathbf{a}} = {S}_{\mathbf{0}} \oplus \gamma {S}_{\mathbf{a}} \oplus {\gamma }^{2}{S}_{2\mathbf{a}} \oplus {\gamma }^{3}{S}_{3\mathbf{a}} \oplus \cdots .
\] (10.4) Definition 10.8 The projective GIT quotient of \( {\mathbb{C}}^{n} \) modulo \( G \) at a is the projective spectrum \( {\mathbb{C}}^{n}/{/}_{\mathbf{a}}G \) of the \( \mathbb{N} \) -graded \( {S}_{\mathbf{0}} \) -algebra \( {S}_{\left( \mathbf{a}\right) } \) : \[ {\mathbb{C}}^{n}//\mathbf{a}G = \operatorname{Proj}\left( {S}_{\left( \mathbf{a}\right) }\right) = \operatorname{Proj}\left( {{\bigoplus }_{r = 0}^{\infty }{\gamma }^{r}{S}_{r\mathbf{a}}}\right) . \] Officially, the toric variety \( \operatorname{Proj}\left( {S}_{\left( \mathbf{a}\right) }\right) \) consists of all prime ideals in \( {S}_{\left( \mathbf{a}\right) } \) homogeneous with respect to \( \gamma \) and not containing the irrelevant ideal \[ {S}_{\left( \mathbf{a}\right) }^{ + } = {\bigoplus }_{r = 1}^{\infty }{\gamma }^{r}{S}_{r\mathbf{a}} = \gamma {S}_{\mathbf{a}} \oplus {\gamma }^{2}{S}_{2\mathbf{a}} \oplus {\gamma }^{3}{S}_{3\mathbf{a}} \oplus \cdots . \] If \( P \) is such a homogeneous prime ideal in \( {S}_{\left( \mathbf{a}\right) } \), then \( P \cap {S}_{\mathbf{0}} \) is a prime ideal in \( {S}_{0} \) . This statement is more commonly phrased in geometric language. Proposition 10.9 The map \( P \mapsto P \cap {S}_{0} \) defines a projective morphism from the projective GIT quotient \( {\mathbb{C}}^{n}/{/}_{\mathbf{a}}G \) to the affine GIT quotient \( {\mathbb{C}}^{n}//G \) . \( {\mathbb{C}}^{n}/{/}_{\mathbf{a}}G \) is a projective toric variety if and only if \( S \) is positively graded by \( A \) . Proof. The canonical map from the projective spectrum of an \( \mathbb{N} \) -graded ring to the spectrum of its \( \mathbb{N} \) -graded degree zero part is a projective morphism by definition, proving the first statement. For the second, a complex variety is projective over \( \mathbb{C} \) if and only if it admits a projective morphism to the point \( \operatorname{Spec}\left( \mathbb{C}\right) \) . Thus the "if" direction is a consequence of Theorem 8.6 and Corollary 10.7. 
For the "only if" direction, note that \( {\mathbb{C}}^{n}//\mathbf{a}G \rightarrow {\mathbb{C}}^{n}//G \) is a surjective morphism to \( \operatorname{Spec}\left( {S}_{0}\right) \) . Since projective varieties admit only constant maps to affine varieties, the affine variety \( \operatorname{Spec}\left( {S}_{0}\right) \) must be a point. \( ▱ \) The ring \( {S}_{\left( \mathbf{a}\right) } \) and the quotient \( {\mathbb{C}}^{n}//\mathbf{a}G \) can be computed using the algorithm in the proof of Proposition 8.4: compute the Hilbert basis \( \mathcal{H} \) for the saturated semigroup \( {L}_{\mathbf{a}} \cap {\mathbb{N}}^{n + 1} \), where \( {L}_{\mathbf{a}} \) is the kernel of \( {\mathbb{Z}}^{n + 1} \rightarrow A \) under the morphism sending \( \left( {\mathbf{v}, r}\right) \) to \( \left( {\mathbf{v}\left( {\;\operatorname{mod}\;L}\right) }\right) - r \cdot \mathbf{a} \) . Let \( {\mathcal{H}}_{0} \) be the set of elements in \( \mathcal{H} \) having last coordinate zero, and set \( {\mathcal{H}}_{ + } = \mathcal{H} \smallsetminus {\mathcal{H}}_{0} \) . Proposition 10.10 The \( {S}_{\mathbf{0}} \) -algebra \( {S}_{\left( \mathbf{a}\right) } \) is minimally generated over \( {S}_{\mathbf{0}} \) by the monomials \( {\mathbf{x}}^{\mathbf{u}}{\gamma }^{r} \), where \( \left( {\mathbf{u}, r}\right) \) runs over all vectors in \( {\mathcal{H}}_{ + } \) . Proof. In the proof of Proposition 8.4, we saw that \( {S}_{\mathbf{0}} \) is minimally generated as a \( \mathbb{C} \) -algebra by the monomials \( {\mathbf{x}}^{\mathbf{u}} \) for \( \mathbf{u} \) in \( {\mathcal{H}}_{0} \) . Likewise, the ring \( {S}_{\left( \mathbf{a}\right) } \) is minimally generated as a \( \mathbb{C} \) -algebra by the monomials \( {\mathbf{x}}^{\mathbf{u}}{\gamma }^{r} \) for \( \left( {\mathbf{u}, r}\right) \) in \( \mathcal{H} \) . 
It follows that the monomials \( {\mathbf{x}}^{\mathbf{u}}{\gamma }^{r} \) with \( \left( {\mathbf{u}, r}\right) \in {\mathcal{H}}_{ + } \) generate \( {S}_{\left( \mathbf{a}\right) } \) as an \( {S}_{\mathbf{0}} \)-algebra. None of these monomials can be omitted. The toric variety \( {\mathbb{C}}^{n}/{/}_{\mathbf{a}}G \) is covered by affine open subsets
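The Hilbert-basis recipe above can be tried on a tiny example. The following Python sketch is illustrative only (the brute-force search, the weights \( (1,-1) \), and the choice \( \mathbf{a} = 1 \) are assumptions, not data from the text): it computes \( \mathcal{H} \) for \( G = \mathbb{C}^{*} \) acting on \( \mathbb{C}^{2} \) and splits it into \( \mathcal{H}_{0} \) and \( \mathcal{H}_{+} \).

```python
from itertools import product

def hilbert_basis(in_semigroup, bound):
    """Brute-force Hilbert basis of a pointed semigroup in N^3: keep the
    nonzero points (up to `bound` in each coordinate) that are not a sum
    of two nonzero semigroup elements.  Only a toy: it is correct when
    every generator has all coordinates <= bound."""
    pts = [p for p in product(range(bound + 1), repeat=3)
           if any(p) and in_semigroup(p)]
    def reducible(p):
        return any(all(pi >= qi for pi, qi in zip(p, q)) and q != p
                   and in_semigroup(tuple(pi - qi for pi, qi in zip(p, q)))
                   for q in pts)
    return sorted(p for p in pts if not reducible(p))

# G = C* acting on C^2 with weights (1, -1), and a = 1.  Then L_a is the
# kernel of Z^3 -> Z sending (u1, u2, r) to u1 - u2 - r, and we intersect
# with N^3 as in the proof of Proposition 8.4.
member = lambda p: p[0] - p[1] - p[2] == 0
H = hilbert_basis(member, 4)
H0 = [p for p in H if p[2] == 0]     # last coordinate zero: generates S_0
Hplus = [p for p in H if p[2] > 0]   # minimal S_0-algebra generators
print(H0, Hplus)                     # [(1, 1, 0)] [(1, 0, 1)]
```

In this toy case \( S_{\mathbf{0}} = \mathbb{C}[x_{1}x_{2}] \), and, matching the pattern of Proposition 10.10, \( S_{(\mathbf{a})} \) is generated over it by the single monomial \( x_{1}\gamma \).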
1282_[张恭庆] Methods in Nonlinear Analysis
Definition 4.8.6
Definition 4.8.6 Let \( X \) be a Banach space. Let \( Q \subset X \) be a compact manifold with boundary \( \partial Q \) and let \( S \subset X \) be a closed subset of \( X \). \( \partial Q \) is said to link with \( S \) if 1. \( \partial Q \cap S = \varnothing \), 2. \( \forall \varphi : Q \rightarrow X \) continuous with \( {\left. \varphi \right| }_{\partial Q} = {\left. \mathrm{{id}}\right| }_{\partial Q} \), we have \( \varphi \left( Q\right) \cap S \neq \varnothing \). Example 4.8.7 (Mountain pass) Let \( \Omega ,{x}_{0},{x}_{1} \) be as in Theorem 4.8.5. Set \( Q = \) the segment \( \left\{ {\lambda {x}_{0} + \left( {1 - \lambda }\right) {x}_{1} \mid \lambda \in \left\lbrack {0,1}\right\rbrack }\right\} \) and \( S = \partial \Omega \). Then \( \partial Q = \left\{ {{x}_{0},{x}_{1}}\right\} \) and \( S \) link. Example 4.8.8 Let \( {X}_{1} \) be a finite-dimensional linear subspace of the Banach space \( X \), and let \( {X}_{2} \) be a closed complement: \( X = {X}_{1} \oplus {X}_{2} \). Let \( Q = {B}_{R} \cap {X}_{1} \), \( S = {X}_{2} \), where \( {B}_{R} \) is the ball with radius \( R > 0 \) centered at \( \theta \). Then \( \partial Q \) and \( S \) link. Indeed, \( \forall \varphi : Q \rightarrow X \) continuous with \( {\left. \varphi \right| }_{\partial Q} = {\left. \mathrm{{id}}\right| }_{\partial Q} \), we want to show: \( \varphi \left( Q\right) \cap {X}_{2} \neq \varnothing \). Let \( P \) be the projection onto \( {X}_{1} \) along \( {X}_{2} \). It is sufficient to show that \( P \circ \varphi : Q \rightarrow {X}_{1} \) has a zero. Since \( \partial Q \subset {X}_{1} \), we have \( P \circ \varphi = \mathrm{{id}} \) on \( \partial Q \), and therefore \[ \deg \left( {P \circ \varphi, Q,\theta }\right) = \deg \left( {\mathrm{{id}}, Q,\theta }\right) = 1. \] The conclusion follows from Brouwer degree theory. Theorem 4.8.9 Let \( X \) be a Banach space, and let \( f \in {C}^{1}\left( {X,{\mathbb{R}}^{1}}\right) \).
Assume that \( Q \subset X \) is a compact manifold with boundary \( \partial Q \) which links with a closed subset \( S \subset X \). Set \[ \Gamma = \left\{ {\varphi \in C\left( {Q, X}\right) \mid {\left. \varphi \right| }_{\partial Q} = {\left. \operatorname{id}\right| }_{\partial Q}}\right\} \] and \[ c = \mathop{\inf }\limits_{{\varphi \in \Gamma }}\mathop{\max }\limits_{{\xi \in Q}}f \circ \varphi \left( \xi \right) . \] If \( \exists \alpha < \beta \) such that \[ \mathop{\sup }\limits_{{x \in \partial Q}}f\left( x\right) \leq \alpha < \beta \leq \mathop{\inf }\limits_{{x \in S}}f\left( x\right) \] and if \( {\left( PS\right) }_{c} \) holds, then \( c\left( { \geq \beta }\right) \) is a critical value. Proof. Let \( d \) be the distance on \( C\left( {Q, X}\right) \). Then \( \left( {\Gamma, d}\right) \) is a metric space. Let \[ J\left( \varphi \right) = \mathop{\max }\limits_{{\xi \in Q}}f \circ \varphi \left( \xi \right) . \] Since \( \partial Q \) and \( S \) link, \( \varphi \left( Q\right) \cap S \neq \varnothing \) for every \( \varphi \in \Gamma \), so \( J \geq \mathop{\inf }\limits_{S}f \) on \( \Gamma \). Moreover, \( J \) is locally Lipschitzian, since \[ J\left( {\varphi }_{1}\right) - J\left( {\varphi }_{2}\right) \leq \mathop{\max }\limits_{{\xi \in Q}}\left\lbrack {f \circ {\varphi }_{1}\left( \xi \right) - f \circ {\varphi }_{2}\left( \xi \right) }\right\rbrack \leq \mathop{\max }\limits_{{\theta \in \left\lbrack {0,1}\right\rbrack ,\xi \in Q}}\begin{Vmatrix}{{f}^{\prime }\left( {\theta {\varphi }_{1}\left( \xi \right) + \left( {1 - \theta }\right) {\varphi }_{2}\left( \xi \right) }\right) }\end{Vmatrix}\,d\left( {{\varphi }_{1},{\varphi }_{2}}\right) . \] It follows that \[ \left| {J\left( {\varphi }_{1}\right) - J\left( {\varphi }_{2}\right) }\right| \leq {Cd}\left( {{\varphi }_{1},{\varphi }_{2}}\right) , \] where \( C \) is a constant depending on \( {\varphi }_{1} \) and \( {\varphi }_{2} \).
We apply the Ekeland variational principle to \( J \), and obtain a sequence \( \left\{ {\varphi }_{n}\right\} \subset \Gamma \) satisfying \[ c \leq J\left( {\varphi }_{n}\right) < c + \frac{1}{n} \] (4.74) and \[ J\left( \varphi \right) \geq J\left( {\varphi }_{n}\right) - \frac{1}{n}d\left( {\varphi ,{\varphi }_{n}}\right) ,\;\forall \varphi \in \Gamma , \] (4.75) \( n = 1,2,3,\ldots \) Set \[ \mathcal{M}\left( \varphi \right) = \{ \xi \in Q \mid f \circ \varphi \left( \xi \right) = J\left( \varphi \right) \} . \] Obviously \( \mathcal{M}\left( \varphi \right) \) is compact. We claim that \( \mathcal{M}\left( \varphi \right) \subset \operatorname{int}\left( Q\right) \). Indeed, since \( \partial Q \) and \( S \) link, if \( \exists {\xi }_{0} \in \mathcal{M}\left( \varphi \right) \cap \partial Q \), then \[ f \circ \varphi \left( {\xi }_{0}\right) = \mathop{\max }\limits_{{\xi \in Q}}f \circ \varphi \left( \xi \right) \geq \mathop{\inf }\limits_{S}f \geq \beta . \] But \[ f \circ \varphi \left( {\xi }_{0}\right) = f\left( {\xi }_{0}\right) \leq \mathop{\sup }\limits_{{x \in \partial Q}}f\left( x\right) \leq \alpha . \] This is a contradiction. Set \[ {\Gamma }_{0} = \left\{ {\psi \in C\left( {Q, X}\right) \mid {\left. \psi \right| }_{\partial Q} = \theta }\right\} . \] It is a closed linear subspace of \( C\left( {Q, X}\right) \). Let \( \parallel \cdot \parallel \) be the norm of \( C\left( {Q, X}\right) \). \( \forall h \in {\Gamma }_{0} \) with \( \parallel h\parallel = 1,\forall {\lambda }_{j} \downarrow 0,\forall {\xi }_{j} \in \mathcal{M}\left( {{\varphi }_{n} + {\lambda }_{j}h}\right) \), we have \[ {\lambda }_{j}^{-1}\left\lbrack {f \circ \left( {{\varphi }_{n} + {\lambda }_{j}h}\right) \left( {\xi }_{j}\right) - f \circ {\varphi }_{n}\left( {\xi }_{j}\right) }\right\rbrack \geq - \frac{1}{n}, \] (4.76) from (4.75).
Since \( \left\{ {\xi }_{j}\right\} \subset Q \) and \( Q \) is compact, we obtain a convergent subsequence \( {\xi }_{j} \rightarrow {\eta }_{n}^{ * } \in \mathcal{M}\left( {\varphi }_{n}\right) \), which depends on \( {\varphi }_{n},{\lambda }_{j} \) and \( h \). After taking limits, we have \[ \left\langle {{f}^{\prime } \circ {\varphi }_{n}\left( {\eta }_{n}^{ * }\right), h\left( {\eta }_{n}^{ * }\right) }\right\rangle \geq - \frac{1}{n}. \] (4.77) We want to show that \( \exists {\eta }_{n} \in \mathcal{M}\left( {\varphi }_{n}\right) \) such that \[ \left\langle {{f}^{\prime } \circ {\varphi }_{n}\left( {\eta }_{n}\right), u}\right\rangle \geq - \frac{1}{n} \] (4.78) \( \forall u \in X \) with \( \parallel u\parallel = 1 \). If not, \( \forall \eta \in \mathcal{M}\left( {\varphi }_{n}\right) ,\exists {v}_{\eta } \in X \) with \( \begin{Vmatrix}{v}_{\eta }\end{Vmatrix} = 1 \) satisfying \[ \left\langle {{f}^{\prime } \circ {\varphi }_{n}\left( \eta \right) ,{v}_{\eta }}\right\rangle < - \frac{1}{n}; \] then there exists a neighborhood \( {O}_{\eta } \subset \operatorname{int}\left( Q\right) \) of \( \eta \) such that \[ \left\langle {{f}^{\prime } \circ {\varphi }_{n}\left( \xi \right) ,{v}_{\eta }}\right\rangle < - \frac{1}{n},\;\forall \xi \in {O}_{\eta }. \] Since \( \mathcal{M}\left( {\varphi }_{n}\right) \) is compact, finitely many such neighborhoods cover it. Let \( m \) be the least number of sets in such a covering: \( \mathop{\bigcup }\limits_{{i = 1}}^{m}{O}_{{\eta }_{i}} \supset \mathcal{M}\left( {\varphi }_{n}\right) \). We obtain the associated vectors \( {\left\{ {v}_{{\eta }_{i}}\right\} }_{1}^{m},\begin{Vmatrix}{v}_{{\eta }_{i}}\end{Vmatrix} = 1 \), satisfying \[ \left\langle {{f}^{\prime } \circ {\varphi }_{n}\left( \xi \right) ,{v}_{{\eta }_{i}}}\right\rangle < - \frac{1}{n}\;\forall \xi \in {O}_{{\eta }_{i}}, \] \( i = 1,2,\ldots, m \).
Construct a partition of unity subordinate to \( {\left\{ {O}_{{\eta }_{i}}\right\} }_{1}^{m} \): \( 0 \leq {\varrho }_{i} \leq 1 \), \( \operatorname{supp}{\varrho }_{i} \subset {O}_{{\eta }_{i}}, i = 1,\ldots, m \), and \[ \mathop{\sum }\limits_{{i = 1}}^{m}{\varrho }_{i}\left( \xi \right) \equiv 1,\;\forall \xi \in \mathcal{M}\left( {\varphi }_{n}\right) . \] We set \[ v = v\left( \xi \right) = \mathop{\sum }\limits_{{i = 1}}^{m}{\varrho }_{i}\left( \xi \right) {v}_{{\eta }_{i}}. \] Thus \( v \in {\Gamma }_{0} \) and \( \parallel v\parallel \leq 1 \). But by the minimality of \( m \), \( \exists {\xi }^{ * } \in \mathcal{M}\left( {\varphi }_{n}\right) \) such that there is only one \( {i}_{0} \) satisfying \( {\xi }^{ * } \in {O}_{{\eta }_{{i}_{0}}} \). Therefore \( \parallel v\parallel = 1 \). We obtain \[ \left\langle {{f}^{\prime } \circ {\varphi }_{n}\left( \xi \right), v\left( \xi \right) }\right\rangle < - \frac{1}{n},\;\forall \xi \in \mathcal{M}\left( {\varphi }_{n}\right) . \] This contradicts (4.77). Thus (4.78) holds. Setting \( {x}_{n} = {\varphi }_{n}\left( {\eta }_{n}\right) \), we have \[ {f}^{\prime }\left( {x}_{n}\right) \rightarrow \theta . \] Combining this with (4.74), we also have \[ f\left( {x}_{n}\right) \rightarrow c. \] The \( {\left( PS\right) }_{c} \) condition implies that \( c \) is a critical value. The above proof is based on Shi [Shi] and Aubin and Ekeland [AE]. The following example, due to Brezis and Nirenberg [BN 3], shows that the \( {\left( PS\right) }_{c} \) condition is crucial in Theorem 4.5.5. Example. The function \( \varphi \left( {x, y}\right) = {x}^{2} + {\left( 1 - x\right) }^{3}{y}^{2} \) defined on \( {\mathbb{R}}^{2} \) does have a mountain surrounding \( \left( {0,0}\right) \): \( \varphi \left( {x, y}\right) \geq c > 0 \) on the circle \( {x}^{2} + {y}^{2} = \frac{1}{4} \), \( \varphi \left( {0,0}\right) = 0 \), and \( \varphi \left( {4,1}\right) = - {11} \). But by direct computation it has only one critical point: \( \left( {0,0}\right) \).
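The claims in this example are easy to confirm numerically. The sketch below is illustrative (the sampling grid is an assumption, and the symbolic argument lives in the comments): it checks the mountain geometry around the origin and that the gradient vanishes only there.

```python
import math

def phi(x, y):
    return x**2 + (1 - x)**3 * y**2

def grad(x, y):
    # d(phi)/dx = 2x - 3(1-x)^2 y^2,  d(phi)/dy = 2(1-x)^3 y
    return (2*x - 3*(1 - x)**2 * y**2, 2*(1 - x)**3 * y)

# The "mountain": on the circle x^2 + y^2 = 1/4 we have |x| <= 1/2, so
# (1-x)^3 >= 1/8 and phi >= (x^2 + y^2)/8 = 1/32 > 0 there.
ring_min = min(phi(0.5 * math.cos(t), 0.5 * math.sin(t))
               for t in (2 * math.pi * k / 1000 for k in range(1000)))
print(ring_min > 0, phi(0, 0), phi(4, 1))   # True 0 -11

# Yet grad(phi) = 0 forces y = 0 or x = 1; x = 1 gives d(phi)/dx = 2 != 0,
# so y = 0 and then x = 0: the origin is the ONLY critical point, and the
# mountain-pass value is not attained because (PS)_c fails.
print(grad(0, 0))   # (0, 0)
```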
## 4.8.3 Applications

The mountain pass lemma and the related minimax principles are widely used in the study of differential equations. We content ourselves with a few examples. Example 1. We study the following forced resonance problem. Let \( h \in {L}^{1}\left( \left\lbrack {0,\pi }\right\rbrack \right) \) satisfy \[ {\int }_{0}^{\pi }h\left( t\right) \sin t\,{dt} = 0. \] (4.79) Assume that \( g \in C\left( {\mathbb{R}}^{1}\right) \) is \( T \)-periodic, \( T > 0 \), and satisfies \[ {\int }_{0}^{T}g\left( t\right) {dt} = 0. \] (4.80) Find a solution of the nonlinear BVP:
1088_(GTM245)Complex Analysis
Definition 8.25
Definition 8.25. We define the hyperbolic length of a piecewise differentiable curve \( \gamma \) in \( D \) by \[ {l}_{D}\left( \gamma \right) = {\int }_{\gamma }{\lambda }_{D}\left( z\right) \left| {\mathrm{d}z}\right| \] and if \( {z}_{1} \) and \( {z}_{2} \) are any two points in \( D \), the hyperbolic (or Poincaré) distance between them by \[ {\rho }_{D}\left( {{z}_{1},{z}_{2}}\right) = \inf \left\{ {{l}_{D}\left( \gamma \right) ;\gamma \text{ is a pdp in }D\text{ from }{z}_{1}\text{ to }{z}_{2}}\right\} . \] (8.9) We leave to the reader (Exercise 8.13) the verification that \( {\rho }_{D} \) defines a metric on \( D \). An isometry from one metric space to another is a distance-preserving map between them. It follows from Proposition 8.24 that for every conformal map \( f \) defined on \( D \) and every pdp \( \gamma \) in \( D \), \[ {l}_{f\left( D\right) }\left( {f \circ \gamma }\right) = {l}_{D}\left( \gamma \right) \] and \[ {\rho }_{f\left( D\right) }\left( {f\left( {z}_{1}\right), f\left( {z}_{2}\right) }\right) = {\rho }_{D}\left( {{z}_{1},{z}_{2}}\right) \text{ for all }{z}_{1}\text{ and }{z}_{2} \in D; \] that is, \( \rho \) is conformally invariant and \( f \) is an isometry between \( D \) and \( f\left( D\right) \) with respect to the appropriate hyperbolic metrics. In particular, every element of \( \operatorname{Aut}\left( D\right) \) is an isometry for the hyperbolic metric on \( D \).

## 8.4.2 Upper Half-plane Model

We know from Remark 8.23 that in \( {\mathbb{H}}^{2} \) we have \( \mathrm{d}s = \frac{\left| \mathrm{d}z\right| }{\Im \left( z\right) } \). The hyperbolic length of an arbitrary curve \( \gamma \) in \( {\mathbb{H}}^{2} \) and the hyperbolic distance between two points in \( {\mathbb{H}}^{2} \) may be hard to calculate directly from their definitions; an indirect approach is technically less complicated.
We show that given any two distinct points in \( {\mathbb{H}}^{2} \) , they lie on either a unique Euclidean circle centered on the real axis or on a unique straight line perpendicular to the real axis. The corresponding portion of the circle or straight line lying in \( {\mathbb{H}}^{2} \) is called a hyperbolic line or geodesic; the unique portion of the geodesic between the two points is called a geodesic path or geodesic segment. The name is justified by showing that the hyperbolic length of a geodesic segment realizes the hyperbolic distance between its two end points. A straight line in \( \mathbb{C} \) is a circle in \( \mathbb{C} \cup \{ \infty \} \) passing through infinity (see Exercise 3.21). It is not useful, in general, to assign centers to these circles. However, if such a line intersects \( \mathbb{R} \) in one point and is perpendicular to \( \mathbb{R} \) at that point, we consider that point to be the center of the circle. In the current context, we shall be interested only in lines perpendicular to \( \mathbb{R} \) and use the related fact that a Euclidean circle with center on the real axis is perpendicular to the real axis. Definition 8.26. For a circle \( C \) in \( \mathbb{C} \cup \{ \infty \} \) centered on the real axis, the part of \( C \) lying in the upper half plane is called a hyperbolic line or a geodesic in \( {\mathbb{H}}^{2} \) . The reason for the terminology will shortly become clear. The following lemma establishes the existence of a geodesic path between two points; the proof of its uniqueness follows. ![a50267de-c956-4a7f-8c2e-850adafcee65_230_0.jpg](images/a50267de-c956-4a7f-8c2e-850adafcee65_230_0.jpg) Fig. 8.2 Unique circles (in \( \widehat{\mathbb{C}} \) ) perpendicular to \( \mathbb{R} \) through two pairs of points in the upper half plane Lemma 8.27. 
For every pair \( z \) and \( w \) of distinct points in \( {\mathbb{H}}^{2} \), there exists a unique circle centered on the real axis passing through them, and hence a unique geodesic in \( {\mathbb{H}}^{2} \) passing through them. Proof. If \( \Re \left( z\right) = \Re \left( w\right) \), take \( \widetilde{C} \) to be the Euclidean line through \( z \) and \( w \). Otherwise, let \( L \) be the perpendicular bisector of the Euclidean line segment connecting \( z \) and \( w \). If \( c \) is the point where \( L \) intersects the real line, we take \( \widetilde{C} \) to be the circle with center \( c \) passing through \( z \) and \( w \). See Fig. 8.2. The portion \( C \) of \( \widetilde{C} \) in \( {\mathbb{H}}^{2} \) gives the sought geodesic. Definition 8.28. Let \( z \) and \( w \) be two distinct points in \( {\mathbb{H}}^{2} \). The arc of the unique geodesic determined by \( z \) and \( w \) between them is the geodesic segment or geodesic path joining \( z \) and \( w \). The next two lemmas compute the hyperbolic length of the geodesic segment between two points in \( {\mathbb{H}}^{2} \). Lemma 8.29. Let \( P \) and \( Q \) be two points in \( {\mathbb{H}}^{2} \) lying on a Euclidean circle \( C \) centered on the real axis, and let \( \gamma \) be the arc of \( C \) in \( {\mathbb{H}}^{2} \) between \( P \) and \( Q \). Assume further that the radii from the center of \( C \) to \( P \) and \( Q \) make respective angles \( \alpha \) and \( \beta \) with the positive real axis. Then \[ {l}_{{\mathbb{H}}^{2}}\left( \gamma \right) = \left| {\log \frac{\csc \left( \beta \right) - \cot \left( \beta \right) }{\csc \left( \alpha \right) - \cot \left( \alpha \right) }}\right| . \] Proof. Assume the circle \( C \) has radius \( r \) and is centered at \( c \) (see Fig. 8.3).
Let \( z = \left( {x, y}\right) \) be an arbitrary point on \( \gamma \) and let \( t \) be the angle that the radius from \( z \) to the center of \( C \) makes with the positive real axis; then \( x = c + r\cos t \) and ![a50267de-c956-4a7f-8c2e-850adafcee65_231_0.jpg](images/a50267de-c956-4a7f-8c2e-850adafcee65_231_0.jpg) Fig. 8.3 Two points on a circle centered on \( \mathbb{R} \) \( y = r\sin t \). Therefore \( \mathrm{d}x = - r\sin t\,\mathrm{d}t \) and \( \mathrm{d}y = r\cos t\,\mathrm{d}t \), so \( \left| {\mathrm{d}z}\right| = \sqrt{\mathrm{d}{x}^{2} + \mathrm{d}{y}^{2}} = r\,\mathrm{d}t \). Thus \( {l}_{{\mathbb{H}}^{2}}\left( \gamma \right) = \left| {{\int }_{\alpha }^{\beta }\csc t\,\mathrm{d}t}\right| \) and the result follows. Similarly one establishes Lemma 8.30. Let \( P = {x}_{P} + \imath {y}_{P} \) and \( Q = {x}_{P} + \imath {y}_{Q} \) be two points in \( {\mathbb{H}}^{2} \) lying on a straight line \( C \) perpendicular to the real axis, and let \( \gamma \) be the segment of \( C \) in \( {\mathbb{H}}^{2} \) between \( P \) and \( Q \). Then \[ {l}_{{\mathbb{H}}^{2}}\left( \gamma \right) = \left| {\log \frac{{y}_{P}}{{y}_{Q}}}\right| . \] The next definition and the following two lemmas allow us to prove that the hyperbolic length of a geodesic segment minimizes the hyperbolic lengths of all pdp's joining two distinct points in \( {\mathbb{H}}^{2} \); they will also provide an explicit formula for the hyperbolic distance in \( {\mathbb{H}}^{2} \). Definition 8.31. We have shown in Lemma 8.27 that any two distinct points \( z \) and \( w \) in \( {\mathbb{H}}^{2} \) lie on a unique circle \( \widetilde{C} \) centered on the real axis, and on a unique geodesic \( C \). If \( \Re z \neq \Re w \), \( C \) is the portion in \( {\mathbb{H}}^{2} \) of a Euclidean circle \( \widetilde{C} \) centered on the real axis; we let \( {z}^{ * } \) and \( {w}^{ * } \) denote the points on \( \widetilde{C} \cap \mathbb{R} \) closest to \( z \) and \( w \), respectively.
If \( \Re \left( z\right) = \Re \left( w\right) \), \( C \) is the portion in \( {\mathbb{H}}^{2} \) of a straight line perpendicular to \( \mathbb{R} \); if \( \Im z < \Im w \) we let \( {z}^{ * } = \Re \left( z\right) \) and \( {w}^{ * } = \infty \); if \( \Im z > \Im w \) we let \( {z}^{ * } = \infty \) and \( {w}^{ * } = \Re \left( w\right) \). See Fig. 8.2. Lemma 8.32. Let \( z \) and \( w \) be distinct points in \( {\mathbb{H}}^{2} \). There exists a unique \( T \in \operatorname{Aut}\left( {\mathbb{H}}^{2}\right) \) such that \( T\left( {z}^{ * }\right) = 0 \), \( T\left( z\right) = \imath \), \( T\left( w\right) = \imath y \) with \( y > 1 \), and \( T\left( {w}^{ * }\right) = \infty \), where \( {z}^{ * } \) and \( {w}^{ * } \) are as in Definition 8.31. Proof. Consider the unique circle \( \widetilde{C} \) centered on the real axis and passing through \( z \) and \( w \). Since the Möbius group is triply transitive, there exists a unique Möbius transformation \( T \) that maps \( {z}^{ * }, z,{w}^{ * } \) to \( 0,\imath ,\infty \), respectively. Since Möbius transformations map circles to circles, \( T \) maps \( \widetilde{C} \) onto the imaginary axis union \( \{ \infty \} \), and hence \( T\left( w\right) = \imath y \) for some real \( y \). Since Möbius transformations preserve orthogonality of curves, \( T \) maps \( \mathbb{R} \cup \{ \infty \} \) onto itself. Since \( T \) maps \( z \in {\mathbb{H}}^{2} \) to \( \imath \in {\mathbb{H}}^{2} \), \( T \) is in \( \operatorname{Aut}\left( {\mathbb{H}}^{2}\right) \), and because it is orientation preserving, \( y > 1 \). Lemma 8.33. If \( z \) and \( w \) are two distinct points in \( {\mathbb{H}}^{2} \), then the hyperbolic length of the geodesic segment \( \widetilde{\gamma } \) joining them is shorter than the hyperbolic length of any other pdp \( \gamma \) in \( {\mathbb{H}}^{2} \) joining them. Proof. Write \( z = {x}_{z} + \imath {y}_{z} \) and \( w = {x}_{w} + \imath {y}_{w} \).
First consider the case \( {x}_{z} = {x}_{w} \). Assume the curve \( \gamma \) is parameterized by the closed interval \( \left\lbrack {a, b}\right\rbrack \subset \mathbb{R} \) and \( \gamma \left( t\right) = x\left( t\right) + \imath y\left( t\right) \). Then \( x \) and \( y \) are differentiable functions except at finitely many points, and \[ {l}_{{\mathbb{H}}^{2}}\left( \gamma \right) = \left| {{\int }_{a}^{b}\frac{\sqrt{{x}^{\prime }{\left( t\right) }^{2} + {y}^{\prime }{\left( t\right) }^{2}}}{y\left( t\right) }\mathrm{d}t}\right| \geq {\int }_{a}^{b}\frac{\left| {{y}^{\prime }\left( t\right) }\right| }{y\left( t\right) }\mathrm{d}t \geq \left| {{\int }_{a}^{b}\frac{{y}^{\prime }\left( t\right) }{y\left( t\right) }\mathrm{d}t}\right| = \left| {\log \frac{{y}_{w}}{{y}_{z}}}\right| = {l}_{{\mathbb{H}}^{2}}\left( \widetilde{\gamma }\right) , \] by Lemma 8.30.
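Lemma 8.29 lends itself to a numerical sanity check. In the Python sketch below (illustrative; the radius and angle range are arbitrary choices), the arc length is computed by integrating \( \mathrm{d}s = |\mathrm{d}z|/\Im(z) \) directly with the midpoint rule and compared with the closed form; the answer is independent of the center and radius, as the formula predicts.

```python
import math

def arc_length_H2(r, alpha, beta, n=100_000):
    """Midpoint-rule integration of ds = |dz| / Im(z) along the arc
    z(t) = c + r*exp(i*t) of a circle centered at c on the real axis.
    There |dz| = r dt and Im(z) = r*sin(t), so the integrand is csc(t),
    independent of c and r."""
    h = (beta - alpha) / n
    return sum((r * h) / (r * math.sin(alpha + (k + 0.5) * h))
               for k in range(n))

def closed_form(alpha, beta):
    # Lemma 8.29: | log( (csc b - cot b) / (csc a - cot a) ) |
    g = lambda t: (1 - math.cos(t)) / math.sin(t)   # csc t - cot t
    return abs(math.log(g(beta) / g(alpha)))

a, b = math.pi / 4, 3 * math.pi / 4
print(abs(arc_length_H2(2.0, a, b) - closed_form(a, b)) < 1e-8)   # True
```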
1189_(GTM95)Probability-1
Definition 2
Definition 2. A sequence of probability measures \( \left\{ {\mathrm{P}}_{n}\right\} \) converges weakly to the probability measure \( \mathrm{P} \) (notation: \( {\mathrm{P}}_{n}\overset{w}{ \rightarrow }\mathrm{P} \) ) if \[ {\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) \rightarrow {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) \] (9) for every function \( f = f\left( x\right) \) in the class \( C\left( E\right) \) of continuous bounded functions on \( E \). Definition 3. A sequence of probability measures \( \left\{ {\mathrm{P}}_{n}\right\} \) converges in general to the probability measure \( \mathrm{P} \) (notation: \( {\mathrm{P}}_{n} \Rightarrow \mathrm{P} \) ) if \[ {\mathrm{P}}_{n}\left( A\right) \rightarrow \mathrm{P}\left( A\right) \] (10) for every set \( A \) of \( \mathcal{E} \) for which \[ \mathrm{P}\left( {\partial A}\right) = 0. \] (11) (Here \( \partial A \) denotes the boundary of \( A \): \[ \partial A = \left\lbrack A\right\rbrack \cap \left\lbrack \bar{A}\right\rbrack , \] where \( \left\lbrack A\right\rbrack \) is the closure of \( A \) and \( \bar{A} \) is the complement of \( A \).) The following fundamental theorem shows the equivalence of the concepts of weak convergence and convergence in general for probability measures, and contains still other equivalent statements. Theorem 1. The following statements are equivalent: (I) \( {\mathrm{P}}_{n}\overset{w}{ \rightarrow }\mathrm{P} \); (II) \( \limsup {\mathrm{P}}_{n}\left( A\right) \leq \mathrm{P}\left( A\right) \), \( A \) closed; (III) \( \liminf {\mathrm{P}}_{n}\left( A\right) \geq \mathrm{P}\left( A\right) \), \( A \) open; (IV) \( {\mathrm{P}}_{n} \Rightarrow \mathrm{P} \). Proof. (I) \( \Rightarrow \) (II).
Let \( A \) be closed and \[ {f}_{A}^{\varepsilon }\left( x\right) = {\left\lbrack 1 - \frac{\rho \left( {x, A}\right) }{\varepsilon }\right\rbrack }^{ + },\;\varepsilon > 0, \] where \[ \rho \left( {x, A}\right) = \inf \{ \rho \left( {x, y}\right) : y \in A\} ,\;{\left\lbrack x\right\rbrack }^{ + } = \max \left\lbrack {0, x}\right\rbrack . \] Let us also put \[ {A}^{\varepsilon } = \{ x : \rho \left( {x, A}\right) < \varepsilon \} \] and observe that \( {A}^{\varepsilon } \downarrow A \) as \( \varepsilon \downarrow 0 \) . Since \( {f}_{A}^{\varepsilon }\left( x\right) \) is bounded, continuous, and satisfies \[ {\mathrm{P}}_{n}\left( A\right) = {\int }_{E}{I}_{A}\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) \leq {\int }_{E}{f}_{A}^{\varepsilon }\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) , \] we have \[ \mathop{\limsup }\limits_{n}{\mathrm{P}}_{n}\left( A\right) \leq \mathop{\limsup }\limits_{n}{\int }_{E}{f}_{A}^{\varepsilon }\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) \] \[ = {\int }_{E}{f}_{A}^{\varepsilon }\left( x\right) \mathrm{P}\left( {dx}\right) \leq \mathrm{P}\left( {A}^{\varepsilon }\right) \downarrow \mathrm{P}\left( A\right) ,\;\varepsilon \downarrow 0, \] which establishes the required implication. The implications (II) \( \Rightarrow \) (III) and (III) \( \Rightarrow \) (II) become obvious if we take the complements of the sets concerned. (III) \( \Rightarrow \) (IV). Let \( {A}^{0} = A \smallsetminus \partial A \) be the interior, and \( \left\lbrack A\right\rbrack \) the closure, of \( A \) . 
Then from (II), (III), and the hypothesis \( \mathrm{P}\left( {\partial A}\right) = 0 \), we have \[ \mathop{\limsup }\limits_{n}{\mathrm{P}}_{n}\left( A\right) \leq \mathop{\limsup }\limits_{n}{\mathrm{P}}_{n}\left( \left\lbrack A\right\rbrack \right) \leq \mathrm{P}\left( \left\lbrack A\right\rbrack \right) = \mathrm{P}\left( A\right) , \] \[ \mathop{\liminf }\limits_{n}{\mathrm{P}}_{n}\left( A\right) \geq \mathop{\liminf }\limits_{n}{\mathrm{P}}_{n}\left( {A}^{0}\right) \geq \mathrm{P}\left( {A}^{0}\right) = \mathrm{P}\left( A\right) , \] and therefore \( {\mathrm{P}}_{n}\left( A\right) \rightarrow \mathrm{P}\left( A\right) \) for every \( A \) such that \( \mathrm{P}\left( {\partial A}\right) = 0 \). (IV) \( \Rightarrow \) (I). Let \( f = f\left( x\right) \) be a bounded continuous function with \( \left| {f\left( x\right) }\right| \leq M \). We put \[ D = \{ t \in R : \mathrm{P}\{ x : f\left( x\right) = t\} \neq 0\} \] and consider a decomposition \( {T}_{k} = \left( {{t}_{0},{t}_{1},\ldots ,{t}_{k}}\right) \) of \( \left\lbrack {-M, M}\right\rbrack \): \[ - M = {t}_{0} < {t}_{1} < \cdots < {t}_{k} = M,\;k \geq 1, \] with \( {t}_{i} \notin D, i = 0,1,\ldots, k \). (Observe that \( D \) is at most countable, since the sets \( {f}^{-1}\{ t\} \) are disjoint and \( \mathrm{P} \) is finite.) Let \( {B}_{i} = \left\{ {x : {t}_{i} \leq f\left( x\right) < {t}_{i + 1}}\right\} \). Since \( f \) is continuous, the set \( {f}^{-1}\left( {{t}_{i},{t}_{i + 1}}\right) \) is open, and hence \( \partial {B}_{i} \subseteq {f}^{-1}\left\{ {t}_{i}\right\} \cup {f}^{-1}\left\{ {t}_{i + 1}\right\} \).
The points \( {t}_{i},{t}_{i + 1} \notin D \); therefore \( \mathrm{P}\left( {\partial {B}_{i}}\right) = 0 \) and, by (IV), \[ \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}{\mathrm{P}}_{n}\left( {B}_{i}\right) \rightarrow \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}\mathrm{P}\left( {B}_{i}\right) . \] (12) But \[ \left| {{\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) - {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) }\right| \leq \left| {{\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) - \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}{\mathrm{P}}_{n}\left( {B}_{i}\right) }\right| \] \[ + \left| {\mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}{\mathrm{P}}_{n}\left( {B}_{i}\right) - \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}\mathrm{P}\left( {B}_{i}\right) }\right| \] \[ + \left| {\mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}\mathrm{P}\left( {B}_{i}\right) - {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) }\right| \] \[ \leq 2\mathop{\max }\limits_{{0 \leq i \leq k - 1}}\left( {{t}_{i + 1} - {t}_{i}}\right) \] \[ + \left| {\mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}{\mathrm{P}}_{n}\left( {B}_{i}\right) - \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{t}_{i}\mathrm{P}\left( {B}_{i}\right) }\right| , \] whence, by (12), since the \( {T}_{k}\left( {k \geq 1}\right) \) are arbitrary, \[ \mathop{\lim }\limits_{n}{\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) = {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) . \] This completes the proof of the theorem. Remark 1. The functions \( f\left( x\right) = {I}_{A}\left( x\right) \) and \( {f}_{A}^{\varepsilon }\left( x\right) \) that appear in the proof that (I) \( \Rightarrow \) (II) are respectively upper semicontinuous and uniformly continuous.
Hence it is easy to show that each of the conditions of the theorem is equivalent to one of the following: (V) \( {\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) \rightarrow {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) \) for all bounded uniformly continuous \( f\left( x\right) \) ; (VI) \( {\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) \rightarrow {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) \) for all bounded functions satisfying the Lipschitz condition (see Lemma 2 in Sect. 7); (VII) \( \lim \sup {\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) \leq {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) \) for all bounded \( f\left( x\right) \) that are upper semicontinuous \( \left( {\lim \sup f\left( {x}_{n}\right) \leq f\left( x\right) ,{x}_{n} \rightarrow x}\right) \) ; (VIII) \( \liminf {\int }_{E}f\left( x\right) {\mathrm{P}}_{n}\left( {dx}\right) \geq {\int }_{E}f\left( x\right) \mathrm{P}\left( {dx}\right) \) for all bounded \( f\left( x\right) \) that are lower semicontinuous \( \left( {\lim \mathop{\inf }\limits_{n}f\left( {x}_{n}\right) \geq f\left( x\right) ,{x}_{n} \rightarrow x}\right) \) . Remark 2. Theorem 1 admits a natural generalization to the case when the probability measures \( \mathrm{P} \) and \( {\mathrm{P}}_{n} \) defined on \( \left( {E,\mathcal{E},\rho }\right) \) are replaced by arbitrary (not necessarily probability) finite measures \( \mu \) and \( {\mu }_{n} \) . 
For such measures we can introduce weak convergence \( {\mu }_{n}\overset{w}{ \rightarrow }\mu \) and convergence in general \( {\mu }_{n} \Rightarrow \mu \) and, just as in Theorem 1, we can establish the equivalence of the following conditions: \( \left( {\mathrm{I}}^{ * }\right) \) \( {\mu }_{n}\overset{w}{ \rightarrow }\mu \); \( \left( {\mathrm{{II}}}^{ * }\right) \) \( \limsup {\mu }_{n}\left( A\right) \leq \mu \left( A\right) \), where \( A \) is closed, and \( {\mu }_{n}\left( E\right) \rightarrow \mu \left( E\right) \); \( \left( {\mathrm{{III}}}^{ * }\right) \) \( \liminf {\mu }_{n}\left( A\right) \geq \mu \left( A\right) \), where \( A \) is open, and \( {\mu }_{n}\left( E\right) \rightarrow \mu \left( E\right) \); \( \left( {\mathrm{{IV}}}^{ * }\right) \) \( {\mu }_{n} \Rightarrow \mu \). Each of these is equivalent to any of \( \left( {\mathrm{V}}^{ * }\right) \)-\( \left( {\mathrm{{VIII}}}^{ * }\right) \), which are (V)-(VIII) with \( {\mathrm{P}}_{n} \) and \( \mathrm{P} \) replaced by \( {\mu }_{n} \) and \( \mu \). 4. Let \( \left( {R,\mathcal{B}\left( R\right) }\right) \) be the real line with the system \( \mathcal{B}\left( R\right) \) of Borel sets generated by the Euclidean metric \( \rho \left( {x, y}\right) = \left| {x - y}\right| \) (compare Remark 2 in Sect. 2, Chapter 2). Let \( P \) and \( {P}_{n}, n \geq 1 \), be probability measures on \( \left( {R,\mathcal{B}\left( R\right) }\right) \) and let \( F \) and \( {F}_{n}, n \geq 1 \), be the corresponding distribution functions. Theorem 2. The following conditions are equivalent: (1) \( {P}_{n}\overset{w}{ \rightarrow }P \); (2) \( {P}_{n} \Rightarrow P \); (3) \( {F}_{n}\overset{w}{ \rightarrow }F \); (4) \( {F}_{n} \Rightarrow F \). Proof.
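The role of condition (11) can be seen in a toy computation (an illustrative sketch, not from the text): take \( \mathrm{P}_n \) to be the point mass at \( 1/n \) and \( \mathrm{P} \) the point mass at \( 0 \). Then \( \mathrm{P}_n\overset{w}{\rightarrow}\mathrm{P} \), yet \( \mathrm{P}_n(A) \nrightarrow \mathrm{P}(A) \) for \( A = (0,\infty) \), whose boundary \( \{0\} \) carries \( \mathrm{P} \)-mass.

```python
import math

ns = range(1, 10001)

# (9): for a bounded continuous f, the integral of f dP_n is f(1/n) -> f(0).
f = math.atan
gap = abs(f(1.0 / 10000) - f(0.0))
print(gap)                     # tends to 0 as n grows

# (10) fails for A = (0, inf): P_n(A) = 1 for all n, but P(A) = 0.
# This does not contradict Theorem 1, because dA = {0} and P({0}) = 1,
# so A violates condition (11) and Definition 3 simply does not test it.
Pn_A = [1.0 if 1.0 / n > 0 else 0.0 for n in ns]
print(set(Pn_A))               # {1.0}
```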
1075_(GTM233)Topics in Banach Space Theory
Definition 1.1.1
Definition 1.1.1. A sequence of elements \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) in an infinite-dimensional Banach space \( X \) is said to be a basis of \( X \) if for each \( x \in X \) there is a unique sequence of scalars \( {\left( {a}_{n}\right) }_{n = 1}^{\infty } \) such that \[ x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n} \] This means that we require that the sequence \( {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{a}_{n}{e}_{n}\right) }_{N = 1}^{\infty } \) converge to \( x \) in the norm topology of \( X \) . It is clear from the definition that a basis consists of linearly independent, and in particular nonzero, vectors. If \( X \) has a basis \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) then its closed linear span, \( \left\lbrack {e}_{n}\right\rbrack \) , coincides with \( X \) and therefore \( X \) is separable (the rational finite linear combinations of \( \left( {e}_{n}\right) \) will be dense in \( X \) ). Let us stress that the order of the basis is important; if we permute the elements of the basis then the new sequence can very easily fail to be a basis. We will discuss this phenomenon in much greater detail later, in Chapter 3. The reader should not confuse the notion of basis in an infinite-dimensional Banach space with the purely algebraic concept of Hamel basis or vector space basis. A Hamel basis \( {\left( {e}_{i}\right) }_{i \in \mathcal{I}} \) for \( X \) is a collection of linearly independent vectors in \( X \) such that each \( x \) in \( X \) is uniquely representable as a finite linear combination of \( {e}_{i} \) . From the Baire category theorem it is easy to deduce that if \( {\left( {e}_{i}\right) }_{i \in \mathcal{I}} \) is a Hamel basis for an infinite-dimensional Banach space \( X \) then \( {\left( {e}_{i}\right) }_{i \in \mathcal{I}} \) must be uncountable. Henceforth, whenever we refer to a basis for an infinite-dimensional Banach space \( X \) it will be in the sense of Definition 1.1.1. 
We also note that if \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a basis of a Banach space \( X \), the maps \( x \mapsto {a}_{n} \) are linear functionals on \( X \) . Let us write, for the time being, \( {e}_{n}^{\# }\left( x\right) = {a}_{n} \) . However, it is by no means immediate that the linear functionals \( {\left( {e}_{n}^{\# }\right) }_{n = 1}^{\infty } \) are actually continuous. Let us make the following definition: Definition 1.1.2. Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be a sequence in a Banach space \( X \) . Suppose there is a sequence \( {\left( {e}_{n}^{ * }\right) }_{n = 1}^{\infty } \) in \( {X}^{ * } \) such that (i) \( {e}_{k}^{ * }\left( {e}_{j}\right) = 1 \) if \( j = k \), and \( {e}_{k}^{ * }\left( {e}_{j}\right) = 0 \) otherwise, for every \( k \) and \( j \) in \( \mathbb{N} \) , (ii) \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{e}_{n}^{ * }\left( x\right) {e}_{n} \) for each \( x \in X \) . Then \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is called a Schauder basis for \( X \) and the functionals \( {\left( {e}_{n}^{ * }\right) }_{n = 1}^{\infty } \) are called the biorthogonal functionals (or the coordinate functionals) associated to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) . If \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a Schauder basis for \( X \) and \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{e}_{n}^{ * }\left( x\right) {e}_{n} \in X \), the support of \( x \) is the subset of integers \( n \) such that \( {e}_{n}^{ * }\left( x\right) \neq 0 \) . We denote it by supp \( \left( x\right) \) . If \( \left| {\operatorname{supp}\left( x\right) }\right| < \infty \) we say that \( x \) is finitely supported. The name Schauder in the previous definition is in honor of J. Schauder, who first introduced the concept of a basis in 1927 [279]. 
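For a concrete sketch, consider the canonical unit vector basis of \( {\ell }^{2} \), where the coordinate functional \( {e}_{n}^{ * } \) simply reads off the \( n \) -th coordinate. The following Python snippet (a finite truncation of \( {\ell }^{2} \); the helper names `coordinate_functional` and `partial_sum` are ours) illustrates the norm convergence \( {S}_{k}x \rightarrow x \) required in Definition 1.1.2.

```python
import math

# Unit vector basis (e_n) of l^2, truncated to N coordinates.
# e_n^*(x) reads off the n-th coordinate, and the partial sums
# S_k x = sum_{n<=k} e_n^*(x) e_n converge to x in norm.

def norm(x):
    return math.sqrt(sum(t * t for t in x))

N = 1000
x = [1.0 / n for n in range(1, N + 1)]  # a vector in l^2: a_n = 1/n

def coordinate_functional(x, n):
    # e_n^*(x) = a_n for the unit vector basis
    return x[n - 1]

def partial_sum(x, k):
    # keep the first k coordinates, zero out the rest
    return [coordinate_functional(x, n) if n <= k else 0.0
            for n in range(1, len(x) + 1)]

# ||x - S_k x|| decreases to 0 as k grows
errs = [norm([a - b for a, b in zip(x, partial_sum(x, k))])
        for k in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2] == 0.0
```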
In practice, nevertheless, every basis of a Banach space is a Schauder basis, and the concepts are not distinct (the distinction is important, however, in more general locally convex spaces). The proof of the equivalence between the concepts of basis and Schauder basis is an early application of the closed graph theorem [18, p. 111]. Although this result is a very nice use of some of the basic principles of functional analysis, it has to be conceded that it is essentially useless in the sense that in all practical situations we are able to prove that \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a basis only by showing the formally stronger conclusion that it is already a Schauder basis. Thus the reader can safely skip the next theorem. Theorem 1.1.3. Let \( X \) be a (separable) Banach space. A sequence \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) in \( X \) is a Schauder basis for \( X \) if and only if \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a basis for \( X \) . Proof. Let us assume that \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is a basis for \( X \) and introduce the partial sum projections \( {\left( {S}_{n}\right) }_{n = 0}^{\infty } \) associated to \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) defined by \( {S}_{0} = 0 \) and for \( n \geq 1 \) , \[ {S}_{n}\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{n}{e}_{k}^{\# }\left( x\right) {e}_{k} \] Of course, we do not yet know that these operators are bounded! Let us consider a new norm on \( X \) defined by the formula \[ \parallel \left| x\right| \parallel = \mathop{\sup }\limits_{{n \geq 1}}\begin{Vmatrix}{{S}_{n}x}\end{Vmatrix} \] Since \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\begin{Vmatrix}{x - {S}_{n}x}\end{Vmatrix} = 0 \) for each \( x \in X \), it follows that \( \parallel \left| \cdot \right| \parallel \geq \parallel \cdot \parallel \) . We will show that \( \left( {X,\parallel \left| \cdot \right| \parallel }\right) \) is complete. 
Suppose that \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a Cauchy sequence in \( \left( {X,\parallel \left| \cdot \right| \parallel }\right) \) . Of course, \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is convergent to some \( x \in X \) in the original norm. Our goal is to prove that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\parallel \left| {{x}_{n} - x}\right| \parallel = 0 \) . Notice that for each fixed \( k \) the sequence \( {\left( {S}_{k}{x}_{n}\right) }_{n = 1}^{\infty } \) is convergent in the original norm to some \( {y}_{k} \in X \), and note also that \( {\left( {S}_{k}{x}_{n}\right) }_{n = 1}^{\infty } \) is contained in the finite-dimensional subspace \( \left\lbrack {{e}_{1},\ldots ,{e}_{k}}\right\rbrack \) . Certainly, the functionals \( {e}_{j}^{\# } \) are continuous on every finite-dimensional subspace; hence if \( 1 \leq j \leq k \) we have \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{e}_{j}^{\# }\left( {x}_{n}\right) = {e}_{j}^{\# }\left( {y}_{k}\right) \mathrel{\text{:=}} {a}_{j} \] Next we argue that \( \mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}{e}_{j} = x \) for the original norm. Given \( \epsilon > 0 \), pick an integer \( n \) such that if \( m \geq n \) then \( \parallel \left| {{x}_{m} - {x}_{n}}\right| \parallel \leq \frac{1}{3}\epsilon \), and take \( {k}_{0} \) such that \( k \geq {k}_{0} \) implies \( \begin{Vmatrix}{{x}_{n} - {S}_{k}{x}_{n}}\end{Vmatrix} \leq \frac{1}{3}\epsilon \) . Then for \( k \geq {k}_{0} \) we have \[ \begin{Vmatrix}{{y}_{k} - x}\end{Vmatrix} \leq \mathop{\lim }\limits_{{m \rightarrow \infty }}\begin{Vmatrix}{{S}_{k}{x}_{m} - {S}_{k}{x}_{n}}\end{Vmatrix} + \begin{Vmatrix}{{S}_{k}{x}_{n} - {x}_{n}}\end{Vmatrix} + \mathop{\lim }\limits_{{m \rightarrow \infty }}\begin{Vmatrix}{{x}_{m} - {x}_{n}}\end{Vmatrix} \leq \epsilon . 
\] Thus \( \mathop{\lim }\limits_{{k \rightarrow \infty }}\begin{Vmatrix}{{y}_{k} - x}\end{Vmatrix} = 0 \) and, by the uniqueness of the expansion of \( x \) with respect to the basis, \( {S}_{k}x = {y}_{k} \) . Now, \[ \parallel \left| {{x}_{n} - x}\right| \parallel = \mathop{\sup }\limits_{{k \geq 1}}\begin{Vmatrix}{{S}_{k}{x}_{n} - {S}_{k}x}\end{Vmatrix} \leq \mathop{\limsup }\limits_{{m \rightarrow \infty }}\mathop{\sup }\limits_{{k \geq 1}}\begin{Vmatrix}{{S}_{k}{x}_{n} - {S}_{k}{x}_{m}}\end{Vmatrix}, \] so \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\parallel \left| {{x}_{n} - x}\right| \parallel = 0 \) and \( \left( {X,\parallel \left| \cdot \right| \parallel }\right) \) is complete. By the closed graph theorem (or the open mapping theorem), the identity map \( \left( {X,\parallel \cdot \parallel }\right) \rightarrow \left( {X,\parallel \left| \cdot \right| \parallel }\right) \) is bounded, i.e., there exists \( K \) such that \( \parallel \left| x\right| \parallel \leq K\parallel x\parallel \) for \( x \in X \) . This implies that \[ \begin{Vmatrix}{{S}_{n}x}\end{Vmatrix} \leq K\parallel x\parallel ,\;x \in X, n \in \mathbb{N}. \] In particular, \[ \left| {{e}_{n}^{\# }\left( x\right) }\right| \begin{Vmatrix}{e}_{n}\end{Vmatrix} = \begin{Vmatrix}{{S}_{n}x - {S}_{n - 1}x}\end{Vmatrix} \leq {2K}\parallel x\parallel \] hence \( {e}_{n}^{\# } \in {X}^{ * } \) and \( \begin{Vmatrix}{e}_{n}^{\# }\end{Vmatrix} \leq {2K}{\begin{Vmatrix}{e}_{n}\end{Vmatrix}}^{-1} \) . Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be a basis for a Banach space \( X \) . The preceding theorem tells us that \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) is actually a Schauder basis; hence we use \( {\left( {e}_{n}^{ * }\right) }_{n = 1}^{\infty } \) for the biorthogonal functionals. 
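A toy finite-dimensional computation illustrates the norm \( \parallel \left| x\right| \parallel = \mathop{\sup }\limits_{n}\begin{Vmatrix}{{S}_{n}x}\end{Vmatrix} \) and the fact that the constant \( K \) can exceed 1. In \( {\mathbb{R}}^{2} \) with the nonorthogonal basis \( {f}_{1} = \left( {1,0}\right) ,{f}_{2} = \left( {1,1}\right) \) (our choice of example, not from the text), the partial-sum projection \( {S}_{1} \) can strictly increase the Euclidean norm:

```python
import math

# Sketch of |||x||| = sup_n ||S_n x|| in R^2 with basis f1=(1,0), f2=(1,1).
# S_1(a1 f1 + a2 f2) = a1 f1 can be larger than x, so K > 1 is possible.

def euclid(v):
    return math.hypot(v[0], v[1])

def coords(x):
    # Solve x = a1*(1,0) + a2*(1,1): a2 = x[1], a1 = x[0] - x[1]
    return (x[0] - x[1], x[1])

def S1(x):
    a1, _ = coords(x)
    return (a1, 0.0)

x = (-1.0, 1.0)                               # x = -2 f1 + f2
triple_norm = max(euclid(S1(x)), euclid(x))   # sup over n = 1, 2 (S_2 = id)
assert euclid(S1(x)) == 2.0                   # ||S_1 x|| = 2
assert abs(euclid(x) - math.sqrt(2)) < 1e-12  # ||x|| = sqrt(2)
assert triple_norm >= euclid(x)               # |||x||| >= ||x||
```

Here \( \begin{Vmatrix}{{S}_{1}x}\end{Vmatrix}/\parallel x\parallel = \sqrt{2} \), so any admissible \( K \) for this basis is at least \( \sqrt{2} \).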
As above, we consider the partial sum operators \( {S}_{n} : X \rightarrow X \), given by \( {S}_{0} = 0 \) and, for \( n \geq 1 \) , \[ {S}_{n}\left( {\mathop{\sum }\limits_{{k = 1}}^{\infty }{e}_{k}^{ * }\left( x\right) {e}_{k}}\right) = \mathop{\sum }\limits_{{k = 1}}^{n}{e}_{k}^{ * }\left( x\right) {e}_{k} \] \( {S}_{n} \) is a continuous linear operator, since each \( {e}_{k}^{ * } \) is continuous. That the operators \( {\
18_Algebra Chapter 0
Definition 1.12. If \( G \) is finite as a set, its order \( \left| G\right| \) is the number of its elements; we write \( \left| G\right| = \infty \) if \( G \) is infinite. Cancellation implies that \( \left| g\right| \leq \left| G\right| \) for all \( g \in G \) . Indeed, this is vacuously true if \( \left| G\right| = \infty \) ; if \( G \) is finite, consider the \( \left| G\right| + 1 \) powers \[ {g}^{0} = e, g,{g}^{2},{g}^{3},\ldots ,{g}^{\left| G\right| } \] of \( g \) . These cannot all be distinct; hence \[ \left( {\exists i, j}\right) \;0 \leq i < j \leq \left| G\right| \;\text{ such that }{g}^{i} = {g}^{j}. \] By cancellation (that is, multiplying on the right by \( {g}^{-i} \) ) \[ {g}^{j - i} = e \] showing \( \left| g\right| \leq \left( {j - i}\right) \leq \left| G\right| \) . We will soon be able to formulate a much more precise statement concerning the relation between the order of a group and the order of its elements: if \( g \in G \) and \( \left| G\right| \) is finite, then the order of \( g \) divides the order of \( G \) . This will be an immediate consequence of Lagrange's theorem; cf. Example 8.15. Another general remark concerning orders is that their behavior with respect to the operation of the group is not always predictable: it may very well happen that \( g, h \) have finite order in a group \( G \), and yet \( \left| {gh}\right| = \infty \), or \( \left| {gh}\right| = \) your favorite positive integer: work out Exercise 1.12 and Exercise 2.6 if you don't believe it. On the other hand, the situation is more constrained if \( g \) and \( h \) commute. In the extreme case in which \( g = h \), it is easy to obtain a very precise statement: Proposition 1.13. Let \( g \in G \) be an element of finite order. Then \( {g}^{m} \) has finite order \( \forall m \geq 0 \), and in fact \[ \left| {g}^{m}\right| = \frac{\operatorname{lcm}\left( {m,\left| g\right| }\right) }{m} = \frac{\left| g\right| }{\gcd \left( {m,\left| g\right| }\right) }. 
\] Proof. The equality of the two numbers \( \frac{\operatorname{lcm}\left( {m,\left| g\right| }\right) }{m} \) and \( \frac{\left| g\right| }{\gcd \left( {m,\left| g\right| }\right) } \) follows from elementary properties of \( \gcd \) and \( \operatorname{lcm} : \operatorname{lcm}\left( {a, b}\right) = {ab}/\gcd \left( {a, b}\right) \) for all \( a \) and \( b \) . So we only need to prove that \( \left| {g}^{m}\right| = \frac{\operatorname{lcm}\left( {m,\left| g\right| }\right) }{m} \) . The order of \( {g}^{m} \) is the least positive \( d \) for which \[ {g}^{md} = e \] that is (by Corollary 1.11) for which \( {md} \) is a multiple of \( \left| g\right| \) . In other words, \( m\left| {g}^{m}\right| \) is the smallest multiple of \( m \) which is also a multiple of \( \left| g\right| \) : \[ m\left| {g}^{m}\right| = \operatorname{lcm}\left( {m,\left| g\right| }\right) . \] The stated formula follows immediately from this. In general, for commuting elements, Proposition 1.14. If \( {gh} = {hg} \), then \( \left| {gh}\right| \) divides \( \operatorname{lcm}\left( {\left| g\right| ,\left| h\right| }\right) \) . \( {}^{8} \) The notation lcm stands for ’least common multiple’. I am also assuming that the reader is familiar with simple properties of gcd and lcm. Proof. Let \( \left| g\right| = m,\left| h\right| = n \) . If \( N \) is any common multiple of \( m \) and \( n \), then \( {g}^{N} = {h}^{N} = e \) by Corollary 1.11. Since \( g \) and \( h \) commute, \[ {\left( gh\right) }^{N} = \underset{N\text{ times }}{\underbrace{\left( {gh}\right) \left( {gh}\right) \cdots \cdots \left( {gh}\right) }} = \underset{N\text{ times }}{\underbrace{{gg}\cdots \cdots g}} \cdot \underset{N\text{ times }}{\underbrace{{hh}\cdots \cdots h}} = {g}^{N}{h}^{N} = e. \] As this holds for every common multiple \( N \) of \( m \) and \( n \), in particular \[ {\left( gh\right) }^{\operatorname{lcm}\left( {m, n}\right) } = e. 
\] The statement then follows from Lemma 1.10. One cannot say more about \( \left| {gh}\right| \) in general, even if \( g \) and \( h \) commute (Exercise 1.13). But see Exercise 1.14 for an important special case. ## Exercises 1.1. \( \vartriangleright \) Write a careful proof that every group is the group of isomorphisms of a groupoid. In particular, every group is the group of automorphisms of some object in some category. [8.1] 1.2. \( \vartriangleright \) Consider the ’sets of numbers’ listed in §1.1, and decide which are made into groups by conventional operations such as + and \( \cdot \) . Even if the answer is negative (for example, \( \left( {\mathbb{R}, \cdot }\right) \) is not a group), see if variations on the definition of these sets lead to groups (for example, \( \left( {{\mathbb{R}}^{ * }, \cdot }\right) \) is a group; cf. (1.4)). [9.2] 1.3. Prove that \( {\left( gh\right) }^{-1} = {h}^{-1}{g}^{-1} \) for all elements \( g, h \) of a group \( G \) . 1.4. Suppose that \( {g}^{2} = e \) for all elements \( g \) of a group \( G \) ; prove that \( G \) is commutative. 1.5. The 'multiplication table' of a group is an array compiling the results of all multiplications \( g \bullet h \) : <table><tr><td>·</td><td>\( e \)</td><td>...</td><td>\( h \)</td><td>...</td></tr><tr><td>\( e \)</td><td>\( e \)</td><td>...</td><td>\( h \)</td><td>...</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr><tr><td>\( g \)</td><td>\( g \)</td><td>. . .</td><td>\( g \bullet h \)</td><td>. . .</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr></table> (Here \( e \) is the identity element. Of course the table depends on the order in which the elements are listed in the top row and leftmost column.) Prove that every row and every column of the multiplication table of a group contains all elements of the group exactly once (like Sudoku diagrams!). 1.6. 
\( \neg \) Prove that there is only one possible multiplication table for \( G \) if \( G \) has exactly \( 1,2 \), or 3 elements. Analyze the possible multiplication tables for groups with exactly 4 elements, and show that there are two distinct tables, up to reordering the elements of \( G \) . Use these tables to prove that all groups with \( \leq 4 \) elements are commutative. (You are welcome to analyze groups with 5 elements using the same technique, but you will soon know enough about groups to be able to avoid such brute-force approaches.) [2.19] 1.7. Prove Corollary 1.11. 1.8. \( \neg \) Let \( G \) be a finite abelian group with exactly one element \( f \) of order 2 . Prove that \( \mathop{\prod }\limits_{{g \in G}}g = f \) . [4.16] 1.9. Let \( G \) be a finite group, of order \( n \), and let \( m \) be the number of elements \( g \in G \) of order exactly 2 . Prove that \( n - m \) is odd. Deduce that if \( n \) is even, then \( G \) necessarily contains elements of order 2 . 1.10. Suppose the order of \( g \) is odd. What can you say about the order of \( {g}^{2} \) ? 1.11. Prove that for all \( g, h \) in a group \( G,\left| {gh}\right| = \left| {hg}\right| \) . (Hint: Prove that \( \left| {{ag}{a}^{-1}}\right| = \left| g\right| \) for all \( a, g \) in \( G \) .) 1.12. \( \vartriangleright \) In the group of invertible \( 2 \times 2 \) matrices, consider \[ g = \left( \begin{array}{rr} 0 & - 1 \\ 1 & 0 \end{array}\right) ,\;h = \left( \begin{array}{rr} 0 & 1 \\ - 1 & - 1 \end{array}\right) . \] Verify that \( \left| g\right| = 4,\left| h\right| = 3 \), and \( \left| {gh}\right| = \infty \) . [9.6] 1.13. \( \vartriangleright \) Give an example showing that \( \left| {gh}\right| \) is not necessarily equal to \( \operatorname{lcm}\left( {\left| g\right| ,\left| h\right| }\right) \) , even if \( g \) and \( h \) commute. [§1.6, 1.14] 1.14. 
\( \vartriangleright \) As a counterpoint to Exercise 1.13, prove that if \( g \) and \( h \) commute and \( \gcd \left( {\left| g\right| ,\left| h\right| }\right) = 1 \), then \( \left| {gh}\right| = \left| g\right| \left| h\right| \) . (Hint: Let \( N = \left| {gh}\right| \) ; then \( {g}^{N} = {\left( {h}^{-1}\right) }^{N} \) . What can you say about this element?) [9.6, 1.15, §IV 2.5] 1.15. \( \neg \) Let \( G \) be a commutative group, and let \( g \in G \) be an element of maximal finite order, that is, such that if \( h \in G \) has finite order, then \( \left| h\right| \leq \left| g\right| \) . Prove that in fact if \( h \) has finite order in \( G \), then \( \left| h\right| \) divides \( \left| g\right| \) . (Hint: Argue by contradiction. If \( \left| h\right| \) is finite but does not divide \( \left| g\right| \), then there is a prime integer \( p \) such that \( \left| g\right| = {p}^{m}r,\left| h\right| = {p}^{n}s \), with \( r \) and \( s \) relatively prime to \( p \) and \( m < n \) . Use Exercise 1.14 to compute the order of \( {g}^{{p}^{m}}{h}^{s} \) .) [9.1, 4.11, §IV 6.15] ## 2. Examples of groups 2.1. Symmetric groups. In §I.4.1 we have already observed that every object \( A \) of every category \( \mathrm{C} \) determines a group, called \( {\operatorname{Aut}}_{\mathrm{C}}\left( A\right) \), namely the group of automorphisms of \( A \) . In a somewhat artificial sense it is clear that every group arises in this fashion (cf. Exercise 1.1); this fact is true in more 'meaningful' ways, which will become apparent when we discuss group actions (§9): cf. especially Theorem 9.5 and Exercise 9.17. In any case, this observation provides the reader with an infinite class of very important examples: Definition 2.1. Let \( A \) be a set. The symmetric group, or group of permutations of \( A \), denoted \( {S}_{A} \), is the group \( {\operatorname{Aut}}_{\text{Set }}\left( A\right) \) . 
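These groups are easy to experiment with. A small Python sketch (encoding permutations of \( \{ 0,\ldots, n - 1\} \) as tuples; the helper names `compose`, `order`, `power` are ours) composes bijections, counts \( \left| {S}_{5}\right| = 5! = {120} \), and checks the formula \( \left| {g}^{m}\right| = \left| g\right| /\gcd \left( {m,\left| g\right| }\right) \) of Proposition 1.13 on a 5-cycle.

```python
from itertools import permutations
from math import gcd

def compose(g, h):
    # (g o h)(i) = g(h(i)) for permutations given as tuples
    return tuple(g[h[i]] for i in range(len(g)))

def order(g):
    # smallest k >= 1 with g^k = identity
    e = tuple(range(len(g)))
    p, k = g, 1
    while p != e:
        p, k = compose(p, g), k + 1
    return k

def power(g, m):
    acc = tuple(range(len(g)))  # start from the identity
    for _ in range(m):
        acc = compose(acc, g)
    return acc

n = 5
assert len(list(permutations(range(n)))) == 120   # |S_5| = 5!
g = (1, 2, 3, 4, 0)                               # a 5-cycle, |g| = 5
for m in range(1, 6):
    # Proposition 1.13: |g^m| = |g| / gcd(m, |g|)
    assert order(power(g, m)) == order(g) // gcd(m, order(g))
```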
The group of permutations of the set \( \{ \mathbf{1},\ldots ,\mathbf{n}\} \) is denoted by \( {S}_{n} \) . The terminology is easily justified: the automorphisms of a set \( A \) are the set-isomorphisms, that is, the bijections, from \( A \) to itself; applying such a
1329_[肖梁] Abstract Algebra (2022F)
Definition 3.3.1. A (finite or infinite) group \( G \) is called simple if \( \left| G\right| > 1 \) and the only normal subgroups of \( G \) are \( \{ 1\} \) and \( G \) . Example 3.3.2. (1) \( {\mathbf{Z}}_{p} \) for a prime number \( p \) . (These are all the abelian simple groups.) (2) Alternating group \( {A}_{n} \) for \( n \geq 5 \) (a subgroup of \( {S}_{n} \) we introduce later). (3) There are infinite simple groups, but they are not so easy to define. So the Hölder program consists of two steps: - Step I: classify all finite simple groups; - Step II: find all ways of "putting simple groups together" to form other groups. The following is considered the most important achievement of group theory. Theorem 3.3.3 (Classification of finite simple groups). Every finite simple group is isomorphic to one in - 18 (infinite) families of simple groups, or - 26 sporadic simple groups. The list of families of finite simple groups includes - \( {\mathbf{Z}}_{p} \) with \( p \) a prime; - \( {A}_{n}\left( {n \geq 5}\right) \) ; - \( {\operatorname{PSL}}_{n}\left( \mathbb{F}\right) = {\mathrm{{SL}}}_{n}\left( \mathbb{F}\right) /Z\left( {{\mathrm{{SL}}}_{n}\left( \mathbb{F}\right) }\right) \) with \( n \geq 2 \) and \( \mathbb{F} \) a finite field (e.g. \( {\mathbb{F}}_{p} \) ). (Here \( Z\left( {{\mathrm{{SL}}}_{n}\left( \mathbb{F}\right) }\right) \) is the group of scalar matrices with coefficients in \( {\mathbb{F}}^{ \times } \) whose determinant is 1.) There are other families of finite simple groups, mostly associated to "Lie groups of finite type". For the interest of readers, we only mention the following. Theorem 3.3.4 (Feit-Thompson theorem). If \( G \) is a simple group of odd order, then \( G \cong {\mathbf{Z}}_{p} \) for some odd prime \( p \) . Certainly, in this abstract course, we will only touch the very basics of group theory. We hope to learn some tools that frequently appear in applications to future questions in algebra. 3.4. Composition series. 
Inspired by Hölder's program, we make the following definition. Definition 3.4.1. In a group \( G \), a sequence of subgroups \[ \{ 1\} = {N}_{0} \leq {N}_{1} \leq \cdots \leq {N}_{k} = G \] is called a composition series if \( {N}_{i - 1} \trianglelefteq {N}_{i} \) and \( {N}_{i}/{N}_{i - 1} \) is a simple group for each \( 1 \leq i \leq k \) . In this case, the factor groups \( {N}_{i}/{N}_{i - 1} \) are called composition factors or Jordan-Hölder factors of \( G \) . Example 3.4.2. For the dihedral group \( {D}_{8} = \left\langle {r, s \mid {r}^{4} = {s}^{2} = 1,{srs} = {r}^{-1}}\right\rangle \), the following are two composition series (and there are more): - \( \{ 1\} \vartriangleleft \langle s\rangle \vartriangleleft \left\langle {s,{r}^{2}}\right\rangle \vartriangleleft {D}_{8} \) , - \( \{ 1\} \vartriangleleft \left\langle {r}^{2}\right\rangle \vartriangleleft \langle r\rangle \vartriangleleft {D}_{8} \) . Theorem 3.4.3 (Jordan-Hölder). Let \( G \) be a nontrivial finite group. Then (1) \( G \) has a composition series, and (2) the composition factors are unique up to permutation, i.e. if we have two composition series \[ \{ 1\} = {A}_{0} \vartriangleleft {A}_{1} \vartriangleleft \cdots \vartriangleleft {A}_{m} = G\;\text{ and }\;\{ 1\} = {B}_{0} \vartriangleleft {B}_{1} \vartriangleleft \cdots \vartriangleleft {B}_{n} = G, \] then \( m = n \), and there exists a bijection \( \sigma : \{ 1,\ldots, m\} \rightarrow \{ 1,\ldots, n\} \) such that, for \( i = 1,\ldots, m \) , \[ {A}_{i}/{A}_{i - 1} \simeq {B}_{\sigma \left( i\right) }/{B}_{\sigma \left( i\right) - 1} \] Proof of (1). This is because if \( G \) is simple, then \( \{ 1\} \vartriangleleft G \) itself forms a composition series. 
If \( G \) has a nontrivial normal subgroup \( N \), then we may immediately reduce to \( N \) and \( G/N \) as follows: writing \( \pi : G \rightarrow G/N \) for the quotient map and given composition series \[ \{ 1\} = {C}_{0} \vartriangleleft {C}_{1} \vartriangleleft \cdots \vartriangleleft {C}_{r} = N\;\text{ and }\;\{ N\} = {D}_{0} \vartriangleleft {D}_{1} \vartriangleleft \cdots \vartriangleleft {D}_{s} = G/N, \] we may "combine" them using the Fourth Isomorphism Theorem as \[ \{ 1\} = {C}_{0} \vartriangleleft {C}_{1} \vartriangleleft \cdots \vartriangleleft {C}_{r} = N = {\pi }^{-1}\left( {D}_{0}\right) \vartriangleleft {\pi }^{-1}\left( {D}_{1}\right) \vartriangleleft \cdots \vartriangleleft {\pi }^{-1}\left( {D}_{s}\right) = G. \] (Here we make essential use of the Fourth Isomorphism Theorem, in particular, \( {\pi }^{-1}\left( {D}_{i}\right) /{\pi }^{-1}\left( {D}_{i - 1}\right) \cong {D}_{i}/{D}_{i - 1} \) for \( i = 1,\ldots, s \) .) The proof of (2) will be given in the next lecture. Definition 3.4.4. A group \( G \) is called solvable if there exists a chain of subgroups \[ \{ 1\} = {G}_{0} \vartriangleleft {G}_{1} \vartriangleleft \cdots \vartriangleleft {G}_{s} = G \] such that \( {G}_{i}/{G}_{i - 1} \) is abelian for \( i = 1,\ldots, s \) . Corollary 3.4.5. For a finite group \( G, G \) is solvable if and only if all of the composition factors of \( G \) are of the form \( {\mathbf{Z}}_{p} \) . Example 3.4.6. The group of upper triangular invertible matrices is solvable. \[ G = \left\{ {\left( \begin{matrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \end{matrix}\right) \in {\mathrm{{GL}}}_{3}\left( \mathbb{C}\right) }\right\} \supseteq N = \left\{ {\left( \begin{matrix} 1 & * & * \\ 0 & 1 & * \\ 0 & 0 & 1 \end{matrix}\right) \in {\mathrm{{GL}}}_{3}\left( \mathbb{C}\right) }\right\} \supseteq {N}^{\prime } = \left\{ {\left( \begin{matrix} 1 & 0 & * \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \in {\mathrm{{GL}}}_{3}\left( \mathbb{C}\right) }\right\} \] The subquotients are \[ G/N \cong {\left( {\mathbb{C}}^{ \times }, \cdot \right) }^{3},\;N/{N}^{\prime } \cong {\left( \mathbb{C}, + \right) }^{2},\;{N}^{\prime } \cong \left( {\mathbb{C}, + }\right) . \] 4. Jordan-Hölder theorem, simplicity of \( {A}_{n} \), and direct product groups ## 4.1. Jordan-Hölder theorem. Theorem 4.1.1 (Jordan-Hölder). Assume that a group \( G \) has the following two composition series \[ \{ 1\} = {A}_{0} \vartriangleleft {A}_{1} \vartriangleleft \cdots \vartriangleleft {A}_{m} = G\;\text{ and }\;\{ 1\} = {B}_{0} \vartriangleleft {B}_{1} \vartriangleleft \cdots \vartriangleleft {B}_{n} = G, \] then \( m = n \), and there exists a bijection \( \sigma : \{ 1,\ldots, m\} \rightarrow \{ 1,\ldots, n\} \) such that \[ {A}_{\sigma \left( i\right) }/{A}_{\sigma \left( i\right) - 1} \simeq {B}_{i}/{B}_{i - 1}. \] 4.1.2. Toy model. A set-theoretic version. Let \( X \) be a set with two filtrations. \[ \varnothing = {A}_{0} \subseteq {A}_{1} \subseteq \cdots \subseteq {A}_{m} = X,\;\varnothing = {B}_{0} \subseteq {B}_{1} \subseteq \cdots \subseteq {B}_{n} = X. \] We use the following picture to explain the situation. ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_23_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_23_0.jpg) Then we must have for every \( i, j \) , (4.1.2.1) \[ \left( {{A}_{i - 1} \cup \left( {{A}_{i} \cap {B}_{j}}\right) }\right) \smallsetminus \left( {{A}_{i - 1} \cup \left( {{A}_{i} \cap {B}_{j - 1}}\right) }\right) = \left( {{B}_{j - 1} \cup \left( {{A}_{i} \cap {B}_{j}}\right) }\right) \smallsetminus \left( {{B}_{j - 1} \cup \left( {{A}_{i - 1} \cap {B}_{j}}\right) }\right) . \] Here \( {B}_{j - 1} \cup \left( {{A}_{i - 1} \cap {B}_{j}}\right) \) is the blue part and \( {A}_{i - 1} \cup \left( {{A}_{i} \cap {B}_{j - 1}}\right) \) is the brown part. 
The equality can be seen as both parts represent the shaded red area. To make the proof a bit more effective, we can first show that both sides are the same as \[ \left( {{A}_{i} \cap {B}_{j}}\right) \smallsetminus \left( {\left( {{A}_{i} \cap {B}_{j - 1}}\right) \cup \left( {{A}_{i - 1} \cap {B}_{j}}\right) }\right) , \] where the latter set is the green shaded area. (Indeed, to identify the above complement with the left hand side of (4.1.2.1), we may intersect both terms with \( {B}_{j} \) ; and to identify the above complement with the right hand side of (4.1.2.1), we may intersect both terms with \( {A}_{i} \) .) The above proof is of course trivial, but we will see quickly how it helps us understand the proof of the Jordan-Hölder theorem. 4.1.3. Proof of Theorem 4.1.1. We prove a slightly stronger version: let \( G \) be a group. Suppose that we are given two chains of subgroups \[ \{ 1\} = {A}_{0} \trianglelefteq {A}_{1} \trianglelefteq \cdots \trianglelefteq {A}_{m} = G,\;\{ 1\} = {B}_{0} \trianglelefteq {B}_{1} \trianglelefteq \cdots \trianglelefteq {B}_{n} = G. \] Then we have (1) \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) \) is a normal subgroup of the group \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) ; (2) \( {B}_{j - 1}\left( {{A}_{i - 1} \cap {B}_{j}}\right) \) is a normal subgroup of the group \( {B}_{j - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) ; (3) and we have an isomorphism (4.1.3.1) \[ \frac{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{A}_{i - 1}\left( {{A}_{i} \cap {B}_{j - 1}}\right) } \cong \frac{{B}_{j - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }{{B}_{j - 1}\left( {{A}_{i - 1} \cap {B}_{j}}\right) }. \] This in particular shows that one may refine both chains of subgroups into (setting \( {A}_{ij}^{\prime } = \) \( {A}_{i - 1}\left( {{A}_{i} \cap {B}_{j}}\right) \) and \( \left. 
{{B}_{ij}^{\prime } = {B}_{j - 1}\left( {{A}_{i} \cap {B}_{j}}\right) }\right) \) \[ \{ 1\} = {A}_{0} = {A}_{00}^{\prime } \trianglelefteq {A}_{01}^{\prime } \trianglelefteq \cdots \trianglelefteq {A}_{0n}^{\prime } = {A}_{1} = {A}_{10}^{\prime } \trianglelefteq \cdots \trianglelefteq {A}_{1n}^{\prime } = {A}_{2} = {A}_{20}^{\prime } \trianglelefteq \cdots \trianglelefteq {A}_{m - 1, n}^{\prime } = {A}_{n} = G, \] \[ \{ 1\} = {B}_{0} = {B}_{00}^{\prime } \trianglelefteq {B}_{10}^{\prime } \t
Definition 12.3.1. A partial order on a nonempty set \( A \) is a relation \( \preccurlyeq \) on \( A \) satisfying for all \( x, y, z \in A \) , (1) (reflexive) \( x \preccurlyeq x \) ; (2) (antisymmetric) if \( x \preccurlyeq y \) and \( y \preccurlyeq x \), then \( x = y \) ; (3) (transitive) if \( x \preccurlyeq y \) and \( y \preccurlyeq z \), then \( x \preccurlyeq z \) . (Sometimes we say \( A \) is a poset.) A chain is a subset \( B \subseteq A \) where for any \( x, y \in B \), either \( x \preccurlyeq y \) or \( y \preccurlyeq x \) . Axiom 12.3.2 (Zorn’s Lemma). If \( A \) is a partially ordered set in which every chain \( B \) has an upper bound, i.e. an element \( m \in A \) such that \( m \succcurlyeq b \) for every \( b \in B \), then \( A \) has a maximal element \( x \), i.e. an element such that no \( y \succ x \) . Zorn's Lemma is independent of the Zermelo-Fraenkel axiom system and is equivalent to the Axiom of Choice and the Well-ordering Principle. See for example Dummit-Foote's Appendix A.2 or Munkres' Topology for more discussion. In this lecture, we assume that Zorn's lemma holds. ## 12.4. Maximal ideals. Definition 12.4.1. If \( R \) is a ring, a (two-sided) ideal \( \mathfrak{m} \subseteq R \) is called maximal if \( \mathfrak{m} \neq R \) and the only (two-sided) ideals containing \( \mathfrak{m} \) are \( \mathfrak{m} \) and \( R \) . The existence of such maximal ideals relies on Zorn's lemma. Proposition 12.4.2. Every proper (two-sided) ideal \( I \varsubsetneq R \) is contained in a maximal ideal of \( R \) . Proof. Put \( \mathcal{S} \mathrel{\text{:=}} \{ \) proper ideals of \( R \) containing \( I\} \), partially ordered by inclusion. To apply Zorn’s lemma, we need to check that every increasing chain \( {J}_{1} \subseteq {J}_{2} \subseteq \cdots \) of ideals in \( \mathcal{S} \) has an upper bound. Indeed, \( J = \mathop{\bigcup }\limits_{i}{J}_{i} \) is an ideal; yet \( 1 \notin J \) ; so \( J \) is a proper ideal containing \( I \) . 
So by Zorn’s lemma, \( \mathcal{S} \) admits a maximal element, namely, the maximal ideal needed. The following is an important criterion for maximal ideals. Proposition 12.4.3. Let \( R \) be a commutative ring. An ideal \( \mathfrak{m} \subseteq R \) is maximal if and only if the quotient \( R/\mathfrak{m} \) is a field. Proof. By the lattice isomorphism theorem, \( \mathfrak{m} \) is maximal if and only if \( \bar{R} \mathrel{\text{:=}} R/\mathfrak{m} \) has only two ideals (0) and (1). We claim that the latter statement is equivalent to \( \bar{R} \) being a field. If \( \bar{R} \) is a field, then clearly it has only two ideals \( \left( 0\right) \) and \( \left( 1\right) \) . Conversely, if \( \bar{R} \) has only two ideals (0) and (1), then for any nonzero element \( a \in \bar{R} \), the ideal \( \left( a\right) \neq \left( 0\right) \) . Thus \( \left( a\right) = \left( 1\right) \) , namely there exists \( {a}^{\prime } \in \bar{R} \) such that \( a{a}^{\prime } = 1 \) . This implies that \( a \in {\bar{R}}^{ \times } \) . So \( \bar{R} \) is a field. Remark 12.4.4. If \( R \) is non-commutative, then \( R/\mathfrak{m} \) being a skew field implies that \( \mathfrak{m} \) is maximal. But the converse is not correct. For example, \( R = {\operatorname{Mat}}_{n \times n}\left( \mathbb{C}\right) \) has only two two-sided ideals: (0) and \( R \) . Yet \( R \) is not a skew field. 12.4.5. (1) When \( R = \mathbb{Z} \), for each prime number \( p,\left( p\right) = p\mathbb{Z} \) is a maximal ideal of \( \mathbb{Z} \) . (2) For \( R = \mathbb{Z}\left\lbrack x\right\rbrack ,\left( p\right) = p\mathbb{Z}\left\lbrack x\right\rbrack \) is not a maximal ideal. But \( \left( {p, x}\right) \), or \( \left( {p, x + 1}\right) \), or \( \left( {p, f\left( x\right) }\right) \) for any polynomial \( f\left( x\right) \) irreducible modulo \( p \), is a maximal ideal. 
(3) For \( G \) a finite group and \( R = \mathbb{C}\left\lbrack G\right\rbrack \) the group ring, the augmentation ideal \( {I}_{G} = \langle \left\lbrack g\right\rbrack - 1 \mid g \in G\rangle \) is a maximal (two-sided) ideal, and we have \( \mathbb{C}\left\lbrack G\right\rbrack /{I}_{G} \cong \mathbb{C} \) . 12.5. Prime ideals. Assume from now on that \( R \) is commutative. Definition 12.5.1. A proper ideal \( \mathfrak{p} \varsubsetneq R \) is called a prime ideal if \[ \text{for any }a, b \in R,\;{ab} \in \mathfrak{p} \Rightarrow a \in \mathfrak{p}\text{ or }b \in \mathfrak{p}\text{.} \] Example 12.5.2. For \( p \) a prime number, \( p\mathbb{Z} \) is a prime ideal of \( \mathbb{Z} \) and \( p\mathbb{Z}\left\lbrack x\right\rbrack \subset \mathbb{Z}\left\lbrack x\right\rbrack \) is also a prime ideal. The following is an analogue of Proposition 12.4.3 for prime ideals. This is also quite useful in application. Proposition 12.5.3. An ideal \( \mathfrak{p} \subset R \) is a prime ideal if and only if \( R/\mathfrak{p} \) is an integral domain. Proof. Consider the natural quotient \[ \pi : R \rightarrow R/\mathfrak{p} \] \[ a \mapsto \bar{a}\text{.} \] If \( R/\mathfrak{p} \) is an integral domain, then for \( a, b \in R \) with \( {ab} \in \mathfrak{p} \), we must have \( \overline{ab} = 0 \), or equivalently \( \bar{a}\bar{b} = 0 \) . This means that either \( \bar{a} = 0 \) or \( \bar{b} = 0 \), i.e. either \( a \in \mathfrak{p} \) or \( b \in \mathfrak{p} \) . Conversely, suppose that \( R/\mathfrak{p} \) is not an integral domain. Then there exist nonzero elements \( \bar{a},\bar{b} \in R/\mathfrak{p} \) such that \( \bar{a}\bar{b} = 0 \) . This is equivalent to saying that there exist \( a, b \in R \smallsetminus \mathfrak{p} \) such that \( {ab} \in \mathfrak{p} \) . Thus \( \mathfrak{p} \) is not a prime ideal in this case. Corollary 12.5.4. A maximal ideal is always a prime ideal. 
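For \( R = \mathbb{Z} \) these criteria can be checked by brute force: \( \mathbb{Z}/n\mathbb{Z} \) is a field exactly when it is an integral domain, and for \( n > 1 \) both happen exactly when \( n \) is prime. A small Python sketch (the function names are ours, and the equivalence of the two tests reflects the special fact that nonzero prime ideals of \( \mathbb{Z} \) are maximal):

```python
from math import gcd

# For R = Z: nZ is prime iff Z/nZ is an integral domain (Prop. 12.5.3),
# and maximal iff Z/nZ is a field (Prop. 12.4.3).

def is_integral_domain(n):
    # Z/nZ has no zero divisors
    return all((a * b) % n != 0 for a in range(1, n) for b in range(1, n))

def is_field(n):
    # every nonzero residue is invertible, i.e. coprime to n
    return all(gcd(a, n) == 1 for a in range(1, n))

for n in range(2, 30):
    # in Z, maximal <=> prime among the nonzero ideals nZ, n > 1
    assert is_field(n) == is_integral_domain(n)
```

The implication "field \( \Rightarrow \) integral domain" holds in any commutative ring (Corollary 12.5.4); the converse direction here is particular to \( \mathbb{Z} \).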
The concepts of maximal ideals and prime ideals are extremely important in the study of commutative algebra. 12.6. Principal ideal domains. The initial study of rings was modeled on properties of \( \mathbb{Z} \), trying to generalize various aspects of \( \mathbb{Z} \) to other rings, such as certain quadratic rings, e.g. \( \mathbb{Z}\left\lbrack i\right\rbrack = \) \( \{ a + {bi} \mid a, b \in \mathbb{Z}\} \) . Definition 12.6.1. A principal ideal domain, PID for short, is an integral domain in which every ideal is principal. Example 12.6.2. (1) The ring of integers, \( \mathbb{Z} \), is a PID. All of its ideals are of the form \( n\mathbb{Z} \) for some \( n \) . (2) For \( k \) a field, the ring of polynomials in one variable \( k\left\lbrack x\right\rbrack \) is a PID. (3) We will prove later that the ring \( \mathbb{Z}\left\lbrack i\right\rbrack \) is a PID. This is a very important example. (4) (Non-example) The ring \( \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \) is not a principal ideal domain. For example, \( \left( {3,1 + 2\sqrt{-5}}\right) \) is not a principal ideal. Proposition 12.6.3. Every nonzero prime ideal in a PID is a maximal ideal. Proof. Let \( \left( p\right) \) be a nonzero prime ideal in a PID \( R \), and let \( \mathfrak{m} = \left( m\right) \supseteq \left( p\right) \) be a maximal ideal containing \( \left( p\right) \) . Then \( p = {mn} \) for some \( n \in R \) . Since \( {mn} = p \in \left( p\right) \) and \( \left( p\right) \) is prime, either \( m \) or \( n \) belongs to \( \left( p\right) \) . - If \( m \in \left( p\right) \), then \( \left( m\right) \subseteq \left( p\right) \) . This says that \( \left( p\right) = \left( m\right) = \mathfrak{m} \) is maximal. - If \( n \in \left( p\right) \), then \( n = {ps} \) for some \( s \in R \) . Thus we have \( p = {mn} = {mps} \), and thus \( 1 = {ms} \) . This implies that \( m \) is a unit. So \( \mathfrak{m} = \left( 1\right) \), contradicting the assumption that \( \mathfrak{m} \) is a maximal ideal.
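Returning to the non-example \( \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \) of 12.6.2(4): if \( \left( {3,1 + 2\sqrt{-5}}\right) \) were principal with generator \( \alpha \), then the norm \( N\left( {a + b\sqrt{-5}}\right) = {a}^{2} + 5{b}^{2} \) of \( \alpha \) would divide both \( N\left( 3\right) = 9 \) and \( N\left( {1 + 2\sqrt{-5}}\right) = 21 \), hence would divide \( \gcd \left( {9,21}\right) = 3 \); one checks separately that the ideal is proper, which rules out norm 1. A short enumeration (a sketch; the range suffices since \( {a}^{2} + 5{b}^{2} = 3 \) already forces \( \left| a\right| ,\left| b\right| \leq 1 \)) shows that no element of norm 3 exists:

```python
# Norms of elements of Z[sqrt(-5)]: N(a + b*sqrt(-5)) = a^2 + 5b^2.
norms = {a * a + 5 * b * b for a in range(-10, 11) for b in range(-10, 11)}

assert 3 not in norms                 # no element of norm 3: no generator exists
assert 9 in norms and 21 in norms     # the norms of the two given generators
```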
For this entire lecture, \( R \) will be an integral domain. Our goal is summarized in the following picture ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_81_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_81_0.jpg) Here the purple colored statements were proved in previous lectures; we will prove the blue colored statements in this lecture, and the red colored statements in the following lectures. 13.1. Euclidean domains. There is a question we hope to answer: how does one prove that an integral domain is a PID? Classically, one verifies this by exhibiting a division algorithm on \( R \) that produces generators of ideals. Definition 13.1.1. An integral domain \( R \) is said to be a Euclidean domain if there is a norm \( \mathrm{{Nm}} : R \rightarrow {\mathbb{Z}}^{ + } \cup \{ 0\} \) such that (1) \( \mathrm{{Nm}}\left( 0\right) = 0 \) ; (2) for any \( a, b \in R \) with \( b \neq 0 \), there exist a "quotient" \( q \in R \) and a "remainder" \( r \in R \) such that \[ a = {bq} + r\;\text{ and either }r = 0\text{ or }\operatorname{Nm}\left( r\right) < \operatorname{Nm}\left( b\right) . \] It is important to point out that we do not require \( q \) and \( r \) to be unique. Remark 13.1.2. Recall that the Euclidean algorithm can be used to find the gcd of two integers. A Euclidean domain is a ring in which such an algorithm is valid. Example 13.1.3. (1) For any field \( F \), we may take \( N : F \rightarrow {\mathbb{Z}}_{ \geq 0} \) with \( N\left( a\right) = 0 \) for all \( a \) ; division in a field is exact, so we may always take \( r = 0 \) . (2) For a field \( F \) and \( R = F\left\lbrack x\right\rbrack \) a polynomial ring, define \( \operatorname{Nm}\left( {f\left( x\right) }\right) = \deg \left( f\right) \) . (3) The ring of Gaussian integers \( R = \mathbb{Z}\left\lbrack i\right\rbrack \) admits a norm \[ \operatorname{Nm}\left( {x + {yi}}\right) = {x}^{2} + {y}^{2} = {\left| x + yi\right| }^{2}.
\] When \( a, b \in \mathbb{Z}\left\lbrack i\right\rbrack \) with \( b \neq 0 \), let \( q \in \mathbb{Z}\left\lbrack i\right\rbrack \) be taken so that \( \left| {\operatorname{Re}\left( {q - \frac{a}{b}}\right) }\right| \leq \frac{1}{2} \) and \( \left| {\operatorname{Im}\left( {q - \frac{a}{b}}\right) }\right| \leq \frac{1}{2} \) ; then \[ \operatorname{Nm}\left( {a - {bq}}\right) = \operatorname{Nm}\left( b\right) \cdot {\left| \frac{a}{b} - q\right| }^{2} \leq \operatorname{Nm}\left( b\right) \left( {\frac{1}{4} + \frac{1}{4}}\right) = \frac{1}{2}\operatorname{Nm}\left( b\right) < \operatorname{Nm}\left( b\right) , \] so \( r = a - {bq} \) is an admissible remainder.
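The rounding recipe above can be sketched in Python, using the built-in `complex` type with integer real and imaginary parts as a stand-in for \( \mathbb{Z}\left\lbrack i\right\rbrack \) (the helper names `gauss_divmod` and `nm` are ours; floating-point division is fine for small inputs):

```python
# Division step in Z[i]: round the exact quotient a/b to the nearest
# Gaussian integer q; then r = a - b*q satisfies Nm(r) <= Nm(b)/2 < Nm(b).

def gauss_divmod(a, b):
    """Return (q, r) with a = b*q + r and Nm(r) < Nm(b), for b != 0."""
    z = a / b
    q = complex(round(z.real), round(z.imag))   # nearest Gaussian integer
    r = a - b * q
    return q, r

def nm(z):
    """The norm Nm(x + yi) = x^2 + y^2 for a Gaussian integer z."""
    return round(z.real) ** 2 + round(z.imag) ** 2

a, b = complex(27, 23), complex(8, 1)
q, r = gauss_divmod(a, b)
assert a == b * q + r
assert nm(r) < nm(b)
```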
[Source: 1167_(GTM73)Algebra, Definition 8.3]
Definition 8.3. Let \( \alpha \) and \( \beta \) be cardinal numbers. The sum \( \alpha + \beta \) is defined to be the cardinal number \( \left| {A \cup B}\right| \), where \( A \) and \( B \) are disjoint sets such that \( \left| A\right| = \alpha \) and \( \left| B\right| = \beta \) . The product \( {\alpha \beta } \) is defined to be the cardinal number \( \left| {A \times B}\right| \) . It is not actually necessary for \( A \) and \( B \) to be disjoint in the definition of the product \( {\alpha \beta } \) (Exercise 4). By the definition of a cardinal number \( \alpha \) there always exists a set \( A \) such that \( \left| A\right| = \alpha \) . It is easy to verify that disjoint sets, as required for the definition of \( \alpha + \beta \), always exist and that the sum \( \alpha + \beta \) and product \( {\alpha \beta } \) are independent of the choice of the sets \( A, B \) (Exercise 4). Addition and multiplication of cardinals are associative and commutative, and the distributive laws hold (Exercise 5). Furthermore, addition and multiplication of finite cardinals agree with addition and multiplication of the nonnegative integers with which they are identified; for if \( A \) has \( m \) elements, \( B \) has \( n \) elements and \( A \cap B = \varnothing \), then \( A \cup B \) has \( m + n \) elements and \( A \times B \) has \( {mn} \) elements (for more precision, see Exercise 6). Definition 8.4. Let \( \alpha ,\beta \) be cardinal numbers and \( A,B \) sets such that \( \left| A\right| = \alpha ,\left| B\right| = \beta \) . \( \alpha \) is less than or equal to \( \beta \), denoted \( \alpha \leq \beta \) or \( \beta \geq \alpha \), if \( A \) is equipollent with a subset of \( B \) (that is, there is an injective map \( A \rightarrow B \) ).
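For finite cardinals, Definition 8.3 reduces to ordinary counting. A quick Python check with concrete disjoint sets:

```python
# Finite instance of Definition 8.3: for disjoint A, B with |A| = m, |B| = n,
# |A u B| = m + n and |A x B| = m*n.

A = {"a1", "a2", "a3"}          # |A| = 3
B = {"b1", "b2"}                # |B| = 2, disjoint from A

union = A | B
product = {(x, y) for x in A for y in B}

assert len(union) == len(A) + len(B)        # 3 + 2 = 5
assert len(product) == len(A) * len(B)      # 3 * 2 = 6
```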
\( \alpha \) is strictly less than \( \beta \), denoted \( \alpha < \beta \) or \( \beta > \alpha \), if \( \alpha \leq \beta \) and \( \alpha \neq \beta \) . It is easy to verify that the definition of \( \leq \) does not depend on the choice of \( A \) and \( B \) (Exercise 7). It is shown in Theorem 8.7 that the class of all cardinal numbers is linearly ordered by \( \leq \) . For finite cardinals \( \leq \) agrees with the usual ordering of the nonnegative integers (Exercise 1). The fact that there is no largest cardinal number is an immediate consequence of Theorem 8.5. If \( A \) is a set and \( P\left( A\right) \) its power set, then \( \left| A\right| < \left| {P\left( A\right) }\right| \) . SKETCH OF PROOF. The assignment \( a \mapsto \{ a\} \) defines an injective map \( A \rightarrow P\left( A\right) \) so that \( \left| A\right| \leq \left| {P\left( A\right) }\right| \) . If there were a bijective map \( f : A \rightarrow P\left( A\right) \), then for some \( {a}_{0} \in A, f\left( {a}_{0}\right) = B \), where \( B = \{ a \in A \mid a \notin f\left( a\right) \} \subset A \) . But this yields a contradiction: \( {a}_{0} \in B \) if and only if \( {a}_{0} \notin B \) . Therefore \( \left| A\right| \neq \left| {P\left( A\right) }\right| \) and hence \( \left| A\right| < \left| {P\left( A\right) }\right| \) . REMARK. By Theorem 8.5, \( {\aleph }_{0} = \left| \mathbf{N}\right| < \left| {P\left( \mathbf{N}\right) }\right| \) . It can be shown that \( \left| {P\left( \mathbf{N}\right) }\right| = \left| \mathbf{R}\right| \), where \( \mathbf{R} \) is the set of real numbers. The conjecture that there is no cardinal number \( \beta \) such that \( {\aleph }_{0} < \beta < \left| {P\left( \mathbf{N}\right) }\right| = \left| \mathbf{R}\right| \) is called the Continuum Hypothesis. It has been proved to be independent of the Axiom of Choice and of the other basic axioms of set theory; see P. J. Cohen [59].
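The diagonal argument in the proof of Theorem 8.5 can be verified exhaustively for a small finite set: for every one of the \( {8}^{3} \) maps \( f : A \rightarrow P\left( A\right) \) with \( \left| A\right| = 3 \), the set \( B = \{ a \in A \mid a \notin f\left( a\right) \} \) is missed by \( f \). A sketch:

```python
# Cantor's diagonal set B = {a in A | a not in f(a)} is never in the image
# of f : A -> P(A).  Check every such map on a 3-element set.

from itertools import product

A = [0, 1, 2]
subsets = [frozenset(s) for s in
           [[], [0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2]]]

for images in product(subsets, repeat=len(A)):   # all 8^3 maps f : A -> P(A)
    f = dict(zip(A, images))
    B = frozenset(a for a in A if a not in f[a])
    assert B not in f.values()   # B is never hit, exactly as in the proof
```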
The remainder of this section is devoted to developing certain facts that will be needed at several points in the sequel (see the first paragraph of this section). Theorem 8.6. (Schroeder-Bernstein) If \( A \) and \( B \) are sets such that \( \left| A\right| \leq \left| B\right| \) and \( \left| B\right| \leq \left| A\right| \), then \( \left| A\right| = \left| B\right| \) . SKETCH OF PROOF. By hypothesis there are injective maps \( f : A \rightarrow B \) and \( g : B \rightarrow A \) . We shall use \( f \) and \( g \) to construct a bijection \( h : A \rightarrow B \) . This will imply that \( A \sim B \) and hence \( \left| A\right| = \left| B\right| \) . If \( a \in A \), then since \( g \) is injective the set \( {g}^{-1}\left( a\right) \) is either empty (in which case we say that \( a \) is parentless) or consists of exactly one element \( b \in B \) (in which case we write \( {g}^{-1}\left( a\right) = b \) and say that \( b \) is the parent of \( a \) ). Similarly for \( b \in B \), we have either \( {f}^{-1}\left( b\right) = \varnothing \) ( \( b \) is parentless) or \( {f}^{-1}\left( b\right) = {a}^{\prime } \in A \) ( \( {a}^{\prime } \) is the parent of \( b \) ). If we continue to trace back the "ancestry" of an element \( a \in A \) in this manner, one of three things must happen. Either we reach a parentless element in \( A \) (an ancestor of \( a \in A \) ), or we reach a parentless element in \( B \) (an ancestor of \( a \) ), or the ancestry of \( a \in A \) can be traced back forever (infinite ancestry). Now define three subsets of \( A \) [resp.
B] as follows: \[ {A}_{1} = \{ a \in A \mid a\text{ has a parentless ancestor in }A\} ; \] \[ {A}_{2} = \{ a \in A \mid a\text{ has a parentless ancestor in }B\} ; \] \[ {A}_{3} = \{ a \in A \mid a\text{ has infinite ancestry }\} ; \] \[ {B}_{1} = \{ b \in B \mid b\text{ has a parentless ancestor in }A\} ; \] \[ {B}_{2} = \{ b \in B \mid b\text{ has a parentless ancestor in }B\} ; \] \[ {B}_{3} = \{ b \in B \mid b\text{ has infinite ancestry }\} . \] Verify that the \( {A}_{i} \) [resp. \( {B}_{i} \) ] are pairwise disjoint, that their union is \( A \) [resp. \( B \) ]; that \( f \mid {A}_{i} \) is a bijection \( {A}_{i} \rightarrow {B}_{i} \) for \( i = 1,3 \) ; and that \( g \mid {B}_{2} \) is a bijection \( {B}_{2} \rightarrow {A}_{2} \) . Consequently the map \( h : A \rightarrow B \) given as follows is a well-defined bijection: \[ h\left( a\right) = \left\{ \begin{array}{lll} f\left( a\right) & \text{ if } & a \in {A}_{1} \cup {A}_{3}; \\ {g}^{-1}\left( a\right) & \text{ if } & a \in {A}_{2}. \end{array}\right. \] Theorem 8.7. The class of all cardinal numbers is linearly ordered by \( \leq \) . If \( \alpha \) and \( \beta \) are cardinal numbers, then exactly one of the following is true: \[ \alpha < \beta ;\;\alpha = \beta ;\;\beta < \alpha \;\text{ (Trichotomy Law). } \] SKETCH OF PROOF. It is easy to verify that \( \leq \) is a partial ordering. Let \( \alpha ,\beta \) be cardinals and \( A, B \) be sets such that \( \left| A\right| = \alpha ,\left| B\right| = \beta \) . We shall show that \( \leq \) is a linear ordering (that is, either \( \alpha \leq \beta \) or \( \beta \leq \alpha \) ) by applying Zorn’s Lemma to the set \( \mathcal{F} \) of all pairs \( \left( {f, X}\right) \), where \( X \subset A \) and \( f : X \rightarrow B \) is an injective map.
Verify that \( \mathcal{F} \neq \varnothing \) and that the ordering of \( \mathcal{F} \) given by \( \left( {{f}_{1},{X}_{1}}\right) \leq \left( {{f}_{2},{X}_{2}}\right) \) if and only if \( {X}_{1} \subset {X}_{2} \) and \( {f}_{2} \mid {X}_{1} = {f}_{1} \) is a partial ordering of \( \mathcal{F} \) . If \( \left\{ {\left( {{f}_{i},{X}_{i}}\right) \mid i \in I}\right\} \) is a chain in \( \mathcal{F} \), let \( X = \mathop{\bigcup }\limits_{{i \in I}}{X}_{i} \) and define \( f : X \rightarrow B \) by \( f\left( x\right) = {f}_{i}\left( x\right) \) for \( x \in {X}_{i} \) . Show that \( f \) is a well-defined injective map, and that \( \left( {f, X}\right) \) is an upper bound in \( \mathcal{F} \) of the given chain. Therefore by Zorn’s Lemma there is a maximal element \( \left( {g, X}\right) \) of \( \mathcal{F} \) . We claim that either \( X = A \) or \( \operatorname{Im}g = B \) . For if both of these statements were false we could find \( a \in A - X \) and \( b \in B - \operatorname{Im}g \) and define an injective map \( h : X \cup \{ a\} \rightarrow B \) by \( h\left( x\right) = g\left( x\right) \) for \( x \in X \) and \( h\left( a\right) = b \) . Then \( \left( {h, X\cup \{ a\} }\right) \in \mathcal{F} \) and \( \left( {g, X}\right) < \left( {h, X\cup \{ a\} }\right) \), which contradicts the maximality of \( \left( {g, X}\right) \) . Therefore either \( X = A \) so that \( \left| A\right| \leq \left| B\right| \) or \( \operatorname{Im}g = B \) in which case the injective map \( B\overset{{g}^{-1}}{ \rightarrow }X \subset A \) shows that \( \left| B\right| \leq \left| A\right| \) . Use these facts, the Schroeder-Bernstein Theorem 8.6 and Definition 8.4 to prove the Trichotomy Law. REMARKS. A family of functions partially ordered as in the proof of Theorem 8.7 is said to be ordered by extension. The proof of the theorem is a typical example of the use of Zorn's Lemma.
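The ancestry construction in the proof of the Schroeder-Bernstein Theorem 8.6 can be traced concretely for \( A = B = \mathbf{N} \) with \( f\left( n\right) = g\left( n\right) = n + 1 \) (both injective, neither surjective): every ancestry chain descends \( n, n - 1, n - 2,\ldots \) to the parentless element 0, ending on side \( A \) when \( n \) is even (so \( n \in {A}_{1} \), use \( f \)) and on side \( B \) when \( n \) is odd (so \( n \in {A}_{2} \), use \( {g}^{-1} \)). A sketch with our own function names:

```python
# Schroeder-Bernstein for A = B = N, f(n) = g(n) = n + 1: trace ancestry to
# decide which branch of h to use.  Both inverse maps are m -> m - 1 here.

def h(n):
    side, x = "A", n
    while True:
        if x == 0:                  # 0 has no f- or g-preimage: parentless
            break
        x -= 1                      # pass to the parent on the other side
        side = "B" if side == "A" else "A"
    # parentless ancestor in A (A1): use f; parentless ancestor in B (A2): g^{-1}
    return n + 1 if side == "A" else n - 1

values = [h(n) for n in range(10)]
assert values == [1, 0, 3, 2, 5, 4, 7, 6, 9, 8]   # h swaps 2k and 2k + 1
assert len(set(values)) == 10                      # injective on this range
```

So \( h \) interleaves the two injections into a genuine bijection of \( \mathbf{N} \) with itself, even though neither \( f \) nor \( g \) is surjective.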
The details of similar arguments in the sequel will frequently be abbreviated. Theorem 8.8. Every infinite set has a denumerable subset. In particular, \( {\aleph }_{0} \leq \alpha \) for every infinite cardinal number \( \alpha \) . SKETCH OF PROOF. If \( B \) is a finite subset of the infinite set \( A \), then \( A - B \) is nonempty. For each finite subset \( B \) of \( A \), choose an element \( {x}_{B} \in A - B \) (Axiom of Choice). Let \( F \) be the set of all finite subsets of \( A \) and define a map \( f : F \rightarrow F \) by \( f\left( B\right)
[Source: 1329_[肖梁] Abstract Algebra (2022F), Definition 16.3.5]
Definition 16.3.5. Let \( K \) be an extension of \( F \), and let \( {\alpha }_{1},\ldots ,{\alpha }_{n} \in K \) . (1) The field extension of \( F \) generated by \( {\alpha }_{1},\ldots ,{\alpha }_{n} \), denoted by \( F\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \), is the smallest subfield of \( K \) containing \( F \) and \( {\alpha }_{1},\ldots ,{\alpha }_{n} \) . (2) If \( K = F\left( \alpha \right) \) for some \( \alpha \in K \), then we say that \( K \) is a simple extension of \( F \) . (3) If \( K = F\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \), we say that \( K \) is a finitely generated extension of \( F \) . We remark that \( F\left( {{\alpha }_{1},{\alpha }_{2}}\right) = \left( {F\left( {\alpha }_{1}\right) }\right) \left( {\alpha }_{2}\right) \) . Theorem 16.3.6. Let \( K \) be a field extension of \( F \) and let \( \alpha \in K \) . We have a dichotomy: (1) either \( 1,\alpha ,{\alpha }^{2},\ldots \) are linearly independent over \( F \), in which case \( F\left( \alpha \right) \simeq F\left( x\right) = \) \( \operatorname{Frac}\left( {F\left\lbrack x\right\rbrack }\right) \) , (2) or \( 1,\alpha ,{\alpha }^{2},\ldots \) are linearly dependent over \( F \), in which case there exists a unique monic polynomial \( {m}_{\alpha }\left( x\right) = {m}_{\alpha, F}\left( x\right) \), called the minimal polynomial of \( \alpha \) over \( F \), that is irreducible over \( F \) and satisfies \( {m}_{\alpha }\left( \alpha \right) = 0 \) . Moreover, \( F\left( \alpha \right) \simeq F\left\lbrack x\right\rbrack /\left( {{m}_{\alpha }\left( x\right) }\right) \) and \( \left\lbrack {F\left( \alpha \right) : F}\right\rbrack = \deg {m}_{\alpha }\left( x\right) \) . Proof. Consider case (1): the condition implies that \[ \phi : F\left\lbrack x\right\rbrack \hookrightarrow K \] \[ f\left( x\right) \mapsto f\left( \alpha \right) \] is an injective homomorphism.
This clearly extends to a homomorphism \[ \phi : F\left( x\right) \hookrightarrow K \] \[ f\left( x\right) /g\left( x\right) \mapsto f\left( \alpha \right) /g\left( \alpha \right) \] as \( g\left( \alpha \right) \neq 0 \) . This \( \phi \) must be injective by Lemma 16.3.1. Its image is \( \phi \left( {F\left( x\right) }\right) = F\left( \alpha \right) \) . Consider case (2): In this case, \[ \phi : F\left\lbrack x\right\rbrack \rightarrow K \] \[ f\left( x\right) \mapsto f\left( \alpha \right) \] is not injective. Then \( \ker \phi = \left( {p\left( x\right) }\right) \) is a nonzero prime ideal (the image of \( \phi \) is a subring of the field \( K \), hence an integral domain) and therefore a maximal ideal, since \( F\left\lbrack x\right\rbrack \) is a PID. We may take \( p\left( x\right) \) to be monic. This \( p\left( x\right) \) is the minimal polynomial of \( \alpha \), as it is the monic nonzero polynomial of minimal degree in \( \ker \phi \) . Thus \[ F\left( \alpha \right) = \operatorname{Im}\left( \phi \right) \simeq F\left\lbrack x\right\rbrack /\left( {p\left( x\right) }\right) . \] Definition 16.3.7. Keep the setup in the above theorem. - In case (1), we call \( \alpha \) transcendental over \( F \) . - In case (2), we call \( \alpha \) algebraic over \( F \) . We say that the extension \( K \) over \( F \) is algebraic if every element \( \alpha \) of \( K \) is algebraic over \( F \) (or equivalently \( \left\lbrack {F\left( \alpha \right) : F}\right\rbrack \) is finite). 16.4. Finite vs. algebraic extensions. Example 16.4.1. A typical algebraic yet not finite field extension is \( \mathbb{Q}\left( {\sqrt{2},\sqrt{3},\sqrt{5},\sqrt{7},\ldots }\right) \) over \( \mathbb{Q} \) . Theorem 16.4.2. The following are equivalent for a field extension \( K \) of \( F \) : (1) \( K \) is a finite extension of \( F \) . (2) \( K \) is finitely generated and algebraic over \( F \) . Proof. \( \left( 1\right) \Rightarrow \left( 2\right) \) . If \( K \) is a finite extension of \( F \), then \( K \) is generated over \( F \) by the basis elements (of \( K \) over \( F \) ).
For any \( \alpha \in K \), \( \left\lbrack {F\left( \alpha \right) : F}\right\rbrack \leq \left\lbrack {K : F}\right\rbrack \) is finite; so \( \alpha \) is algebraic over \( F \) . \( \left( 2\right) \Rightarrow \left( 1\right) \) . We will prove this after some preparation. Corollary 16.4.3. If \( K \) is a finite extension of \( F \) and \( \alpha \in K \), then \( \left\lbrack {F\left( \alpha \right) : F}\right\rbrack \mid \left\lbrack {K : F}\right\rbrack \) . Lemma 16.4.4. Given field extensions \( K/E/F \) and \( \alpha \in K \), we have \[ {m}_{\alpha, E}\left( x\right) \mid {m}_{\alpha, F}\left( x\right) \] as polynomials in \( E\left\lbrack x\right\rbrack \) . In particular, \[ \deg \left( {{m}_{\alpha, E}\left( x\right) }\right) \leq \deg \left( {{m}_{\alpha, F}\left( x\right) }\right) . \] Proof. This is because \( {m}_{\alpha, F}\left( \alpha \right) = 0 \) . So viewing \( {m}_{\alpha, F}\left( x\right) \) in the polynomial ring \( E\left\lbrack x\right\rbrack \), we have \( {m}_{\alpha, F}\left( x\right) \in \left( {{m}_{\alpha, E}\left( x\right) }\right) \) . This implies that \( {m}_{\alpha, E}\left( x\right) \mid {m}_{\alpha, F}\left( x\right) \) in \( E\left\lbrack x\right\rbrack \) and thus \( \deg \left( {{m}_{\alpha, E}\left( x\right) }\right) \leq \) \( \deg \left( {{m}_{\alpha, F}\left( x\right) }\right) \) . Corollary 16.4.5. Given field extensions \( K/E/F \) and \( \alpha \in K \), we have \[ \left\lbrack {E\left( \alpha \right) : E}\right\rbrack \leq \left\lbrack {F\left( \alpha \right) : F}\right\rbrack \] Proof. This is because \( \left\lbrack {E\left( \alpha \right) : E}\right\rbrack = \deg {m}_{\alpha, E}\left( x\right) \) and \( \left\lbrack {F\left( \alpha \right) : F}\right\rbrack = \deg {m}_{\alpha, F}\left( x\right) \) . Definition 16.4.6. Let \( K \) be a finite extension of \( F \) and \( F \subseteq {K}_{i} \subseteq K \) for intermediate fields \( {K}_{1} \) and \( {K}_{2} \) .
Define the composite of \( {K}_{1} \) and \( {K}_{2} \) to be \[ {K}_{1}{K}_{2} \mathrel{\text{:=}} \text{minimal field that contains both}{K}_{1}\text{and}{K}_{2}\text{.} \] Example 16.4.7. Inside \( \mathbb{C} \), the composite of \( \mathbb{Q}\left( \sqrt{2}\right) \) and \( \mathbb{Q}\left( \sqrt{3}\right) \) is \[ \mathbb{Q}\left( {\sqrt{2},\sqrt{3}}\right) = \{ a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6} \mid a, b, c, d \in \mathbb{Q}\} . \] Corollary 16.4.8. Let \( {K}_{1} \) and \( {K}_{2} \) be two intermediate fields in the field extension \( K \) over \( F \) such that \( \left\lbrack {{K}_{i} : F}\right\rbrack < + \infty \) . Then \[ \left\lbrack {{K}_{1}{K}_{2} : F}\right\rbrack \leq \left\lbrack {{K}_{1} : F}\right\rbrack \cdot \left\lbrack {{K}_{2} : F}\right\rbrack . \] Proof. As \( {K}_{1} \) is a finite extension of \( F \), we may write \( {K}_{1} = F\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \) . We consider the following tower ![97650b70-8b1b-4cc6-91b2-de9112f1d8bc_103_0.jpg](images/97650b70-8b1b-4cc6-91b2-de9112f1d8bc_103_0.jpg) Applying Corollary 16.4.5 to each parallelogram, we have \[ \left\lbrack {{K}_{2}\left( {\alpha }_{1}\right) : {K}_{2}}\right\rbrack \leq \left\lbrack {F\left( {\alpha }_{1}\right) : F}\right\rbrack \] \[ \left\lbrack {{K}_{2}\left( {{\alpha }_{1},{\alpha }_{2}}\right) : {K}_{2}\left( {\alpha }_{1}\right) }\right\rbrack \leq \left\lbrack {F\left( {{\alpha }_{1},{\alpha }_{2}}\right) : F\left( {\alpha }_{1}\right) }\right\rbrack \] \[ \ldots \] Taking the product of these inequalities gives \( \left\lbrack {{K}_{1}{K}_{2} : {K}_{2}}\right\rbrack \leq \left\lbrack {{K}_{1} : F}\right\rbrack \) . Multiplying both sides by \( \left\lbrack {{K}_{2} : F}\right\rbrack \) gives the inequality of this corollary. 16.4.9. Continuing the proof of Theorem 16.4.2. \( \left( 2\right) \Rightarrow \left( 1\right) \) . Assume that \( K \) is a finitely generated algebraic extension of \( F \) .
We may then write \( K = F\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) = F\left( {\alpha }_{1}\right) F\left( {\alpha }_{2}\right) \cdots F\left( {\alpha }_{n}\right) \) with each \( {\alpha }_{i} \) algebraic over \( F \) . Then Corollary 16.4.8 implies that \[ \left\lbrack {K : F}\right\rbrack \leq \left\lbrack {F\left( {\alpha }_{1}\right) : F}\right\rbrack \cdots \left\lbrack {F\left( {\alpha }_{n}\right) : F}\right\rbrack \] is finite. Corollary 16.4.10. Let \( K \) be a field extension of \( F \) and let \( \alpha ,\beta \in K \) be elements algebraic over \( F \) . Then \( \alpha \pm \beta ,{\alpha \beta } \), and \( \alpha /\beta \) (when \( \beta \neq 0 \) ) are all algebraic over \( F \) . Proof. This is because \( \alpha \pm \beta ,{\alpha \beta } \), and \( \alpha /\beta \) all belong to the field \( F\left( {\alpha ,\beta }\right) \) which is a finite extension of \( F \) . Definition 16.4.11. Let \( K \) be a field extension of \( F \) . It follows from the above corollary that \[ \{ \alpha \in K \mid \alpha \text{ is algebraic over }F\} \] is a subfield of \( K \), called the algebraic closure of \( F \) in \( K \) . Example 16.4.12. Consider the field extension \( \mathbb{C} \) of \( \mathbb{Q} \) . We put \[ \overline{\mathbb{Q}} \mathrel{\text{:=}} \{ \alpha \in \mathbb{C}\;|\;\alpha \text{ is algebraic over }\mathbb{Q}\} \] the algebraic closure of \( \mathbb{Q} \) in \( \mathbb{C} \) . Note that the condition \( \alpha \) being algebraic over \( \mathbb{Q} \) is equivalent to the condition that \( \alpha \) is a zero of a monic polynomial \( f\left( x\right) \in \mathbb{Q}\left\lbrack x\right\rbrack \) . Theorem 16.4.13. If \( L/K \) and \( K/F \) are both algebraic extensions, then \( L/F \) is algebraic. Proof. Let \( \alpha \in L \) . 
Its minimal polynomial \( {m}_{\alpha }\left( x\right) = {x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{0} \) over \( K \) involves only finitely many elements of \( K \), each of them being algebraic over \( F \) . So we see that \( F\left( \alpha \right) \) is contained in the field extension \[ F\left( {{a}_{0},{a}_{1},\ldots ,{a}_{n - 1}}\right) \left( \alpha \right) \] over \( F \), which is finite. So \( L \) is algebraic over \( F \) . ## 17.1. Splitting fields. Definition 17.1.1. Given a field \( F \) and a polynomial \( f\lef
[Source: 113_Topological Groups, Definition 24.8]
Definition 24.8. Let \( \mathbf{K} \) be a class of \( \mathcal{L} \) -structures, where \( \mathcal{L} \) is not necessarily algebraic. We set \( \mathbf{{HK}} = \{ \mathfrak{A} : \mathfrak{A} \) is a homomorphic image of some \( \mathfrak{B} \in \mathbf{K}\} ; \) \( \mathbf{{UpK}} = \{ \mathfrak{A} : \mathfrak{A} \) is isomorphic to an ultraproduct of members of \( \mathbf{K}\} \) . Note that all of the classes SK, PK, HK and UpK are closed under taking isomorphic images. Proposition 24.9. If \( \mathbf{K} \) is a variety, then \( \mathbf{K} = \mathbf{{HK}} \) . Proof. Obvious from 24.4. Thus a variety is closed under all three operators \( \mathbf{H},\mathbf{S} \), and \( \mathbf{P} \) . The main theorem of this chapter is that the converse holds. Before beginning the proof of this, we want to develop the basics of general algebra and equational logic a little further. The following proposition is easy to prove. Proposition 24.10. (i) \( \mathrm{{SHK}} \subseteq \mathrm{{HSK}} \) . (ii) \( \mathrm{{PSK}} \subseteq \mathrm{{SPK}} \) . (iii) \( \mathbf{{PHK}} \subseteq \mathbf{{HPK}} \) . Easy examples can be given to show that none of the inclusions in 24.10 can be replaced by equalities, in general; see Exercises 24.25-24.27. These inclusions lead to the following useful equivalences. Proposition 24.11. The following conditions are equivalent: (i) \( \mathbf{K} = \mathbf{{SK}} = \mathbf{{HK}} = \mathbf{{PK}} \) . (ii) \( \mathbf{K} = \mathbf{{HSPK}} \) . (iii) \( \mathbf{K} = \mathbf{{HSPL}} \) for some \( \mathbf{L} \) . Proof. Obviously \( \left( i\right) \Rightarrow \left( {ii}\right) \Rightarrow \left( {iii}\right) \) . Now assume (iii). Clearly \( \mathbf{{HK}} = \mathbf{K} \) and \( \mathbf{K} \subseteq \mathbf{{SK}},\mathbf{K} \subseteq \mathbf{{PK}} \) .
Next, \[ \mathrm{{SK}} = \mathrm{{SHSPL}} \subseteq \mathrm{{HSSPL}}\;\text{by 24.10(i)} \] \[ = \mathrm{{HSPL}} = \mathbf{K}. \] Finally, \[ \mathbf{{PK}} = \mathbf{{PHSPL}} \subseteq \mathbf{{HPSPL}} \subseteq \mathbf{{HSPPL}}\;\text{by 24.10(ii),(iii)} \] \[ = \mathrm{{HSPL}} = \mathbf{K}. \] To prove our main theorem, we need the notion of an absolutely free algebra. As will be seen, this notion is very analogous to the construction in 11.12 of an "internal" model for a consistent set of sentences. It is really just a part of that construction. Definition 24.12. Let \( \mathcal{L} \) be an algebraic language. We construct an \( \mathcal{L} \) - structure \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) which will be called the absolutely free \( \mathcal{L} \) -algebra. Its universe is \( {\operatorname{Trm}}_{\mathcal{L}} \), and for any operation symbol \( \mathbf{O} \) of \( \mathcal{L} \) , \[ {\mathbf{O}}^{{\mathfrak{{Fr}}}_{\mathcal{L}}}\left( {{\sigma }_{0},\ldots ,{\sigma }_{m - 1}}\right) = \mathbf{O}{\sigma }_{0}\cdots {\sigma }_{m - 1}. \] Now the definition of satisfaction yields the following basic fact about \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) : Proposition 24.13. If \( \mathcal{L} \) is algebraic, \( \mathfrak{A} \) is an \( \mathcal{L} \) -structure, and \( x \in {}^{\omega }A \), then \( \left\langle {{\sigma }^{\mathfrak{A}}x : \sigma \in {\operatorname{Trm}}_{\mathcal{L}}}\right\rangle \) is a homomorphism of \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) into \( \mathfrak{A} \) . A very useful congruence relation on \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) is introduced in the following definition. Definition 24.14. If \( \mathcal{L} \) is an algebraic language and \( \Gamma \) is a set of equations of \( \mathcal{L} \), we let \[ { \equiv }_{\Gamma } = \left\{ {\left( {\sigma ,\tau }\right) : \sigma ,\tau \in {\operatorname{Trm}}_{\mathcal{L}}\text{ and }\Gamma \vDash \sigma = \tau }\right\} . \] Proposition 24.15.
Under the assumptions of 24.14, \( { \equiv }_{\Gamma } \) is a congruence relation on \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) . The following simple proposition is proved by induction on \( \sigma \) : Proposition 24.16. If \( g \) is a homomorphism from \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) into \( \mathfrak{A} \), and if \( {x}_{i} = g{v}_{i} \) for every \( i < \omega \), then \( {\sigma }^{\mathfrak{A}}x = {g\sigma } \) for every term \( \sigma \) . Recall that, by the completeness theorem, the model-theoretic condition \( \Gamma \vDash \varphi \) is equivalent to the proof-theoretic condition \( \Gamma \vdash \varphi \) . If we apply this fact when \( \varphi \) is an equation and \( \Gamma \) is a set of equations, it seems a little unsatisfactory, since the proof-theoretic condition involves the logical axioms and hence the whole apparatus of first order logic. We now describe a proof-theoretic condition in which only equations appear: no quantifiers and not even any sentential connectives. Definition 24.17. Let \( \Gamma \) be a set of equations in an algebraic language.
Then \( \Gamma \) -eqthm is the intersection of all sets \( \Delta \) of equations such that the following conditions hold: (i) \( \Gamma \subseteq \Delta \) ; (ii) \( {v}_{0} = {v}_{0} \in \Delta \) ; (iii) if \( \varphi \in \Delta, i < \omega ,\sigma \) is a term, and \( \psi \) is obtained from \( \varphi \) by replacing \( {v}_{i} \) throughout \( \varphi \) by \( \sigma \), then \( \psi \in \Delta \) ; (iv) if \( \sigma = \tau \in \Delta \) and \( \rho = \tau \in \Delta \), then \( \sigma = \rho \in \Delta \) ; (v) if \( \sigma = \tau \in \Delta \) , \( \mathbf{O} \) is an operation symbol, say of rank \( m, i < m \), and \( {\alpha }_{0},\ldots ,{\alpha }_{m - 2} \) are variables, then the following equation is in \( \Delta \) : \( \mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},\sigma ,{\alpha }_{i},\ldots ,{\alpha }_{m - 2}}\right) \) \[ = \mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},\tau ,{\alpha }_{i},\ldots ,{\alpha }_{m - 2}}\right) . \] We write \( \Gamma { \vdash }_{\mathrm{{eq}}}\varphi \) instead of \( \varphi \in \Gamma \) -eqthm. We shall prove an analog of the completeness theorem for this notion. First a technical lemma: Lemma 24.18. Let \( \Gamma \) be a set of equations in an algebraic language. 
Then for any terms \( \sigma ,\tau ,\rho \) , (i) \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \sigma \) ; (ii) if \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \tau \), then \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \sigma \) ; (iii) if \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \tau \) and \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \rho \), then \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \rho \) ; (iv) if \( \mathbf{O} \) is an operation symbol, say of rank \( m \), and if \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \) , \( {\tau }_{0},\ldots ,{\tau }_{m - 1} \) are terms such that \( \Gamma { \vdash }_{\mathrm{{eq}}}{\sigma }_{i} = {\tau }_{i} \) for each \( i < m \), then \( \Gamma { \vdash }_{\mathrm{{eq}}}\mathbf{O}{\sigma }_{0}\cdots {\sigma }_{m - 1} = \mathbf{O}{\tau }_{0}\cdots {\tau }_{m - 1} \) . Proof. Condition \( \left( i\right) \) is obvious from 24.17(ii) and 24.17(iii). For (ii): assume that \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \tau \) . We also have \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \tau \) by \( \left( i\right) \), so \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \sigma \) by 24.17(iv). Condition (iii) clearly follows from (ii) and 24.17(iv). To prove (iv), let \( {\alpha }_{0},\ldots ,{\alpha }_{m - 1} \) be distinct variables not occurring in any of \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \) , \( {\tau }_{0},\ldots ,{\tau }_{m - 1} \) . Then \[ \Gamma { \vdash }_{\mathrm{{eq}}}\mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},{\sigma }_{i},{\alpha }_{i + 1},\ldots ,{\alpha }_{m - 1}}\right) = \mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},{\tau }_{i},{\alpha }_{i + 1},\ldots ,{\alpha }_{m - 1}}\right) \] for each \( i < m \) .
Applications of 24.17(iii) then give, for each \( i < m \) , \[ \Gamma { \vdash }_{\mathrm{{eq}}}\mathbf{O}\left( {{\tau }_{0},\ldots ,{\tau }_{i - 1},{\sigma }_{i},{\sigma }_{i + 1},\ldots ,{\sigma }_{m - 1}}\right) \] \[ = \mathbf{O}\left( {{\tau }_{0},\ldots ,{\tau }_{i - 1},{\tau }_{i},{\sigma }_{i + 1},\ldots ,{\sigma }_{m - 1}}\right) . \] Now several applications of (iii) give the desired result. Theorem 24.19 (Completeness theorem for equational logic). Let \( \Gamma \cup \{ \varphi \} \) be a set of equations in an algebraic language. Then \( \Gamma \vDash \varphi \) iff \( \Gamma { \vdash }_{\mathrm{{eq}}}\varphi \) . Proof. It is easily checked that \( \Gamma { \vdash }_{\text{eq }}\varphi \Rightarrow \Gamma \vDash \varphi \) . Now suppose that not \( \left( {\Gamma { \vdash }_{\text{eq }}\varphi }\right) \) ; we shall construct a model of \( \Gamma \cup \{ \neg \left\lbrack \left\lbrack \varphi \right\rbrack \right\rbrack \} \) . In fact, the desired model is simply \( \mathfrak{A} = \mathfrak{F}\mathfrak{r}/{ \equiv }_{\Gamma } \) . To show that \( \mathfrak{A} \) is a model of \( \Gamma \), let \( \sigma = \tau \) be an arbitrary element of \( \Gamma \), and let \( x \in {}^{\omega }{\operatorname{Trm}}_{\mathcal{L}} \) ; we want to show that \( {\sigma }^{\mathfrak{A}}\left( {\left\lbrack \right\rbrack \circ x}\right) = {\tau }^{\mathfrak{A}}\left( {\left\lbrack \right\rbrack \circ x}\right) \) . To do this, for any \( \rho \in {\operatorname{Trm}}_{\mathcal{L}} \) let \( {S\rho } \) be the result of simultaneously replacing \( {v}_{i} \) by \( {x}_{i} \) in \( \rho \), for each \( i < \omega \) . Then (1) \[ \Gamma { \vdash }_{\mathrm{{eq}}}{S\sigma } = {S\tau }. \] To prove (1), let \( {v}_{i0},\ldots ,{v}_{i\left( {m - 1}\right) } \) be all of the variables occurring in the equation \( \sigma = \tau \) . 
Let \( {\beta }_{0},\ldots ,{\beta }_{m - 1} \) be new variables, not appearing in \( \sigma = \tau \) , different from \( {v}_{i0},\ldots ,{v}_{i\left( {m - 1}\right) } \) and not appearing in \( {x}_{i0},\ldots ,{x}_{i\left( {m - 1}\right) } \) . Let \( {\sigma
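For ground (variable-free) equations, the relation \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \tau \) can be decided mechanically by congruence closure: saturate an equivalence relation on subterms under reflexivity, symmetry/transitivity, and the congruence rule of Lemma 24.18(iv). The following is a minimal sketch of that special case; the term encoding and all function names are ours, not from the text.

```python
# Minimal congruence closure over ground terms, encoded as nested tuples
# such as ("f", ("c",)).  Decides Gamma |-_eq s = t for ground equations.

def subterms(t, acc=None):
    # collect every subterm of t (each term is (symbol, arg1, arg2, ...))
    acc = set() if acc is None else acc
    acc.add(t)
    for arg in t[1:]:
        subterms(arg, acc)
    return acc

def entails(gamma, s, t):
    terms = set().union(*(subterms(u) | subterms(v) for u, v in gamma))
    terms |= subterms(s) | subterms(t)
    parent = {u: u for u in terms}          # union-find: symmetry+transitivity
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    def union(u, v):
        parent[find(u)] = find(v)
    for u, v in gamma:                      # the axioms of Gamma
        union(u, v)
    changed = True
    while changed:                          # congruence rule 24.18(iv), to a fixed point
        changed = False
        ts = list(terms)
        for i, u in enumerate(ts):
            for v in ts[i + 1:]:
                if (u[0] == v[0] and len(u) == len(v)
                        and all(find(a) == find(b) for a, b in zip(u[1:], v[1:]))
                        and find(u) != find(v)):
                    union(u, v)
                    changed = True
    return find(s) == find(t)

c = ("c",)
f = lambda x: ("f", x)
gamma = [(f(f(c)), c)]                       # axiom: f(f(c)) = c
print(entails(gamma, f(f(f(f(c)))), c))      # True: f^4(c) = c follows
print(entails(gamma, f(c), c))               # False: f(c) = c does not
```

For equations with variables one would first instantiate as in rule 24.17(iii); the ground case above already exhibits the closure conditions at work.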
## 1063_(GTM222)Lie Groups, Lie Algebras, and Representations: Definition 1.16
Definition 1.16. The real projective space of dimension \( n \), denoted \( \mathbb{R}{P}^{n} \), is the set of lines through the origin in \( {\mathbb{R}}^{n + 1} \) . Since each line through the origin intersects the unit sphere exactly twice, we may think of \( \mathbb{R}{P}^{n} \) as the unit sphere \( {S}^{n} \) with "antipodal" points \( u \) and \( - u \) identified. Using the second description, we think of points in \( \mathbb{R}{P}^{n} \) as pairs \( \{ u, - u\} \), with \( u \in {S}^{n} \) . There is a natural map \( \pi : {S}^{n} \rightarrow \mathbb{R}{P}^{n} \), given by \[ \pi \left( u\right) = \{ u, - u\} \] We may define a distance function on \( \mathbb{R}{P}^{n} \) by defining \[ d\left( {\{ u, - u\} ,\{ v, - v\} }\right) = \min \left( {d\left( {u, v}\right), d\left( {u, - v}\right), d\left( {-u, v}\right), d\left( {-u, - v}\right) }\right) \] \[ = \min \left( {d\left( {u, v}\right), d\left( {u, - v}\right) }\right) \text{.} \] (The second equality holds because \( d\left( {x, y}\right) = d\left( {-x, - y}\right) \) .) With this metric, \( \mathbb{R}{P}^{n} \) is locally isometric to \( {S}^{n} \), since if \( u \) and \( v \) are nearby points in \( {S}^{n} \), we have \( d\left( {\{ u, - u\} ,\{ v, - v\} }\right) = d\left( {u, v}\right) . \) It is known that \( \mathbb{R}{P}^{n} \) is not simply connected. (See, for example, Example 1.43 in [Hat].) Indeed, suppose \( u \) is any unit vector in \( {\mathbb{R}}^{n + 1} \) and \( B\left( t\right) \) is any path in \( {S}^{n} \) connecting \( u \) to \( - u \) . Then \[ A\left( t\right) \mathrel{\text{:=}} \pi \left( {B\left( t\right) }\right) \] is a loop in \( \mathbb{R}{P}^{n} \), and this loop cannot be shrunk continuously to a point in \( \mathbb{R}{P}^{n} \) . To prove this claim, suppose that a map \( A\left( {s, t}\right) \) as in Definition 1.14 exists. 
Then \( A\left( {s, t}\right) \) can be "lifted" to a continuous map \( B\left( {s, t}\right) \) into \( {S}^{n} \) such that \( B\left( {0, t}\right) = \) \( B\left( t\right) \) and such that \( A\left( {s, t}\right) = \pi \left( {B\left( {s, t}\right) }\right) \) . (See Proposition 1.30 in [Hat].) Since \( A\left( {s,0}\right) = A\left( {s,1}\right) \) for all \( s \), we must have \( B\left( {s,0}\right) = \pm B\left( {s,1}\right) \) . But by construction, \( B\left( {0,0}\right) = - B\left( {0,1}\right) \) . In order for \( B\left( {s, t}\right) \) to be continuous in \( s \), we must then have \( B\left( {s,0}\right) = - B\left( {s,1}\right) \) for all \( s \) . It follows that \( B\left( {1, t}\right) \) is a nonconstant path in \( {S}^{n} \) . It is then easily verified that \( A\left( {1, t}\right) = \pi \left( {B\left( {1, t}\right) }\right) \) cannot be constant, contradicting our assumption about \( A\left( {s, t}\right) \) . Let \( {D}^{n} \) denote the closed upper hemisphere in \( {S}^{n} \), that is, the set of points \( u \in {S}^{n} \) with \( {u}_{n + 1} \geq 0 \) . Then \( \pi \) maps \( {D}^{n} \) onto \( \mathbb{R}{P}^{n} \), since at least one of \( u \) and \( - u \) is in \( {D}^{n} \) . The restriction of \( \pi \) to \( {D}^{n} \) is injective except on the equator, that is, the set of \( u \in {S}^{n} \) with \( {u}_{n + 1} = 0 \) . If \( u \) is in the equator, then \( - u \) is also in the equator, and \( \pi \left( {-u}\right) = \pi \left( u\right) \) . Thus, we may also think of \( \mathbb{R}{P}^{n} \) as the upper hemisphere \( {D}^{n} \), with antipodal points on the equator identified (Figure 1.2). We may now make one last identification using the projection \( P \) of \( {\mathbb{R}}^{n + 1} \) onto \( {\mathbb{R}}^{n} \) . (That is to say, \( P \) is the map sending \( \left( {{x}_{1},\ldots ,{x}_{n},{x}_{n + 1}}\right) \) to \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) .) 
The restriction of \( P \) to \( {D}^{n} \) is a continuous bijection between \( {D}^{n} \) and the closed unit ball \( {B}^{n} \) in \( {\mathbb{R}}^{n} \), with the equator in \( {D}^{n} \) mapping to the boundary of the ball.

![a7bfd4a7-7795-4350-a407-6ad11be11f96_33_0.jpg](images/a7bfd4a7-7795-4350-a407-6ad11be11f96_33_0.jpg)

Fig. 1.2 The space \( \mathbb{R}{P}^{n} \) is the upper hemisphere with antipodal points on the equator identified. The indicated path from \( u \) to \( - u \) corresponds to a loop in \( \mathbb{R}{P}^{n} \) that cannot be shrunk to a point

Thus, our last model of \( \mathbb{R}{P}^{n} \) is the closed unit ball \( {B}^{n} \subset {\mathbb{R}}^{n} \), with antipodal points on the boundary of \( {B}^{n} \) identified. We now turn to a topological analysis of \( \mathrm{{SO}}\left( 3\right) \) . Proposition 1.17. There is a continuous bijection between \( \mathrm{{SO}}\left( 3\right) \) and \( \mathbb{R}{P}^{3} \) . Since \( \mathbb{R}{P}^{3} \) is not simply connected, it follows that \( \mathrm{{SO}}\left( 3\right) \) is not simply connected, either. Proof. If \( v \) is a unit vector in \( {\mathbb{R}}^{3} \), let \( {R}_{v,\theta } \) be the element of \( \mathrm{{SO}}\left( 3\right) \) consisting of a "right-handed" rotation by angle \( \theta \) in the plane orthogonal to \( v \) . That is to say, let \( {v}^{ \bot } \) denote the plane orthogonal to \( v \) and choose an orthonormal basis \( \left( {{u}_{1},{u}_{2}}\right) \) for \( {v}^{ \bot } \) in such a way that the linear map taking the orthonormal basis \( \left( {{u}_{1},{u}_{2}, v}\right) \) to the standard basis \( \left( {{e}_{1},{e}_{2},{e}_{3}}\right) \) has positive determinant. We use the basis \( \left( {{u}_{1},{u}_{2}}\right) \) to identify \( {v}^{ \bot } \) with \( {\mathbb{R}}^{2} \), and the rotation is then in the counterclockwise direction in \( {\mathbb{R}}^{2} \) . It is easily seen that \( {R}_{-v,\theta } \) is the same as \( {R}_{v, - \theta } \) . 
It is also not hard to show (Exercise 14) that every element of \( \mathrm{{SO}}\left( 3\right) \) can be expressed as \( {R}_{v,\theta } \), for some \( v \) and \( \theta \) with \( - \pi \leq \theta \leq \pi \) . Furthermore, we can arrange that \( 0 \leq \theta \leq \pi \) by replacing \( v \) with \( - v \) if necessary. If \( R = I \), then \( R = {R}_{v,0} \) for any unit vector \( v \) . If \( R \) is a rotation by angle \( \pi \) about some axis \( v \), then \( R \) can be expressed both as \( {R}_{v,\pi } \) and as \( {R}_{-v,\pi } \) . It is not hard to see that if \( R \neq I \) and \( R \) is not a rotation by angle \( \pi \), then \( R \) has a unique representation as \( {R}_{v,\theta } \) with \( 0 < \theta < \pi \) . Now let \( {B}^{3} \) denote the closed ball of radius \( \pi \) in \( {\mathbb{R}}^{3} \) and consider the map \( \Phi \) : \( {B}^{3} \rightarrow \mathrm{{SO}}\left( 3\right) \) given by \[ \Phi \left( u\right) = {R}_{\widehat{u},\parallel u\parallel },\;u \neq 0, \] \[ \Phi \left( 0\right) = I\text{.} \] Here, \( \widehat{u} = u/\parallel u\parallel \) is the unit vector in the \( u \) -direction. The map \( \Phi \) is continuous, even at \( I \), since \( {R}_{v,\theta } \) approaches the identity as \( \theta \) approaches zero, regardless of how \( v \) is behaving. The discussion in the preceding paragraph shows that \( \Phi \) maps \( {B}^{3} \) onto \( \mathrm{{SO}}\left( 3\right) \) . The map \( \Phi \) is injective except that "antipodal" points on the boundary of \( {B}^{3} \) have the same image: \( {R}_{v,\pi } = {R}_{-v,\pi } \) . Thus, \( \Phi \) descends to a continuous, injective map of \( \mathbb{R}{P}^{3} \) onto \( \mathrm{{SO}}\left( 3\right) \) . Since both \( \mathbb{R}{P}^{3} \) and \( \mathrm{{SO}}\left( 3\right) \) are compact, Theorem 4.17 in [Rud1] tells us that the inverse map is also continuous, meaning that \( \mathrm{{SO}}\left( 3\right) \) is homeomorphic to \( \mathbb{R}{P}^{3} \) . 
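The map \( \Phi \) in the proof is easy to realize concretely. Rodrigues' formula is one standard expression for \( {R}_{v,\theta } \) (the text does not fix a formula, so this realization and the helper names are our choice); the sketch below checks numerically that antipodal boundary points of \( {B}^{3} \) give the same rotation.

```python
import math

def rotation(v, theta):
    # R_{v,theta}: right-handed rotation by theta about the unit vector v,
    # via Rodrigues' formula  R = I + sin(theta) K + (1 - cos(theta)) K^2,
    # where K is the cross-product matrix of v.
    x, y, z = v
    K = [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    s, c = math.sin(theta), 1.0 - math.cos(theta)
    return [[I[i][j] + s * K[i][j] + c * K2[i][j] for j in range(3)]
            for i in range(3)]

def close(A, B, eps=1e-12):
    return all(abs(a - b) < eps for ra, rb in zip(A, B) for a, b in zip(ra, rb))

v = (0.0, 0.0, 1.0)
neg_v = (0.0, 0.0, -1.0)
# antipodal boundary points of the ball of radius pi give the same rotation:
print(close(rotation(v, math.pi), rotation(neg_v, math.pi)))   # True
# and R_{-v,theta} = R_{v,-theta} in general, as noted in the proof:
print(close(rotation(neg_v, 0.8), rotation(v, -0.8)))          # True
```

With `rotation` in hand, \( \Phi \left( u\right) = {R}_{\widehat{u},\parallel u\parallel } \) for \( u \neq 0 \) and \( \Phi \left( 0\right) = I \) is a one-line wrapper.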
For a different approach to proving Proposition 1.17, see the discussion following Proposition 1.19. ## 1.4 Homomorphisms We now look at the notion of homomorphisms for matrix Lie groups. Definition 1.18. Let \( G \) and \( H \) be matrix Lie groups. A map \( \Phi \) from \( G \) to \( H \) is called a Lie group homomorphism if (1) \( \Phi \) is a group homomorphism and (2) \( \Phi \) is continuous. If, in addition, \( \Phi \) is one-to-one and onto and the inverse map \( {\Phi }^{-1} \) is continuous, then \( \Phi \) is called a Lie group isomorphism. The condition that \( \Phi \) be continuous should be regarded as a technicality, in that it is very difficult to give an example of a group homomorphism between two matrix Lie groups which is not continuous. In fact, if \( G = \mathbb{R} \) and \( H = {\mathbb{C}}^{ * } \), then any group homomorphism from \( G \) to \( H \) which is even measurable (a very weak condition) must be continuous. (See Exercise 17 in Chapter 9 of [Rud2].) Note that the inverse of a Lie group isomorphism is continuous (by definition) and a group homomorphism (by elementary group theory), and thus a Lie group isomorphism. If \( G \) and \( H \) are matrix Lie groups and there exists a Lie group isomorphism from \( G \) to \( H \), then \( G \) and \( H \) are said to be isomorphic, and we write \( G \cong H \) . The simplest interesting example of a Lie group homomorphism is the determinant, which is a homomorphism of \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) into \( {\mathbb{C}}^{ * } \) . Another simple example is the map \( \Phi : \mathbb{R} \rightarrow \mathrm{{SO}}\left( 2\right) \) given by \[ \Phi \left( \theta \right) = \left( \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array}\right) \] This map is clearly continuous, and calculation (using standard trigonometric identities) shows that it is a homomorphism. 
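The homomorphism property of the map \( \Phi : \mathbb{R} \rightarrow \mathrm{{SO}}\left( 2\right) \) above amounts to the angle-addition identities for sine and cosine. A quick numerical check (a sketch; the helper `mul` is ours):

```python
import math

def Phi(theta):
    # the homomorphism R -> SO(2) from the text
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.7, 1.9
lhs, rhs = Phi(a + b), mul(Phi(a), Phi(b))
# homomorphism property Phi(a+b) = Phi(a)Phi(b):
print(all(abs(l - r) < 1e-12
          for lr, rr in zip(lhs, rhs) for l, r in zip(lr, rr)))   # True
# the image lies in SO(2): determinant 1
det = Phi(a)[0][0] * Phi(a)[1][1] - Phi(a)[0][1] * Phi(a)[1][0]
print(abs(det - 1.0) < 1e-12)                                     # True
```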
An important topic for us will be the relationship between the groups \( \mathrm{{SU}}\left( 2\right) \) and \( \mathrm{{SO}}\left( 3\right) \), which are almost, but not quite, isomorphic. Specifically, we now show that there exists a Lie group homomorphism \( \Phi : \mathrm{{SU}}\left( 2\right) \rightarrow \mathrm{{SO}}\left( 3\right) \) that is two-to-one and onto. 
## 1167_(GTM73)Algebra: Definition 1.3
Definition 1.3. A module A is said to satisfy the maximum condition [resp. minimum condition] on submodules if every nonempty set of submodules of A contains a maximal [resp. minimal] element (with respect to set theoretic inclusion). Theorem 1.4. A module A satisfies the ascending [resp. descending] chain condition on submodules if and only if \( \mathrm{A} \) satisfies the maximum [resp. minimum] condition on submodules. PROOF. Suppose \( A \) satisfies the minimum condition on submodules and \( {A}_{1} \supset {A}_{2} \supset \cdots \) is a chain of submodules. Then the set \( \left\{ {{A}_{i} \mid i \geq 1}\right\} \) has a minimal element, say \( {A}_{n} \) . Consequently, for \( i \geq n \) we have \( {A}_{n} \supset {A}_{i} \) by hypothesis and \( {A}_{n} \subset {A}_{i} \) by minimality, whence \( {A}_{i} = {A}_{n} \) for each \( i \geq n \) . Therefore, \( A \) satisfies the descending chain condition. Conversely suppose \( A \) satisfies the descending chain condition, and \( S \) is a nonempty set of submodules of \( A \) . Then there exists \( {B}_{0} \in S \) . If \( S \) has no minimal element, then for each submodule \( B \) in \( S \) there exists at least one submodule \( {B}^{\prime } \) in \( S \) such that \( B\underset{ \neq }{ \supset }{B}^{\prime } \) . For each \( B \) in \( S \), choose one such \( {B}^{\prime } \) (Axiom of Choice). This choice then defines a function \( f : S \rightarrow S \) by \( B \mapsto {B}^{\prime } \) . By the Recursion Theorem 6.2 of the Introduction (with \( f = {f}_{n} \) for all \( n \) ) there is a function \( \varphi : \mathbf{N} \rightarrow S \) such that \[ \varphi \left( 0\right) = {B}_{0}\text{ and }\varphi \left( {n + 1}\right) = f\left( {\varphi \left( n\right) }\right) = \varphi {\left( n\right) }^{\prime }. 
\] Thus if \( {B}_{n}{\varepsilon S} \) denotes \( \varphi \left( n\right) \), then there is a sequence \( {B}_{0},{B}_{1},\ldots \) such that \( {B}_{0}\underset{ \neq }{ \supset }{B}_{1}\underset{ \neq }{ \supset } \) \( {B}_{2} \supsetneq \cdots \) . This contradicts the descending chain condition. Therefore, \( S \) must have a minimal element, whence \( A \) satisfies the minimum condition. The proof for the ascending chain and maximum conditions is analogous. Theorem 1.5. Let \( 0 \rightarrow \mathrm{A}\overset{\mathrm{f}}{ \rightarrow }\mathrm{B}\overset{\mathrm{g}}{ \rightarrow }\mathrm{C} \rightarrow 0 \) be a short exact sequence of modules. Then B satisfies the ascending [resp. descending] chain condition on submodules if and only if A and \( \mathrm{C} \) satisfy it. SKETCH OF PROOF. If \( B \) satisfies the ascending chain condition, then so does its submodule \( f\left( A\right) \) . By exactness \( A \) is isomorphic to \( f\left( A\right) \), whence \( A \) satisfies the ascending chain condition. If \( {C}_{1} \subset {C}_{2} \subset \cdots \) is a chain of submodules of \( C \), then \( {g}^{-1}\left( {C}_{1}\right) \subset {g}^{-1}\left( {C}_{2}\right) \subset \cdots \) is a chain of submodules of \( B \) . Therefore, there is an \( n \) such that \( {g}^{-1}\left( {C}_{i}\right) = {g}^{-1}\left( {C}_{n}\right) \) for all \( i \geq n \) . Since \( g \) is an epimorphism by exactness, it follows that \( {C}_{i} = {C}_{n} \) for all \( i \geq n \) . Therefore, \( C \) satisfies the ascending chain condition. Suppose \( A \) and \( C \) satisfy the ascending chain condition and \( {B}_{1} \subset {B}_{2} \subset \cdots \) is a chain of submodules of \( B \) . For each \( i \) let \[ {A}_{i} = {f}^{-1}\left( {f\left( A\right) \cap {B}_{i}}\right) \text{ and }{C}_{i} = g\left( {B}_{i}\right) . \] Let \( {f}_{i} = f \mid {A}_{i} \) and \( {g}_{i} = g \mid {B}_{i} \) . 
Verify that for each \( i \) the following sequence is exact: \[ 0 \rightarrow {A}_{i}\overset{{f}_{i}}{ \rightarrow }{B}_{i}\overset{{g}_{i}}{ \rightarrow }{C}_{i} \rightarrow 0. \] Verify that \( {A}_{1} \subset {A}_{2} \subset \cdots \) and \( {C}_{1} \subset {C}_{2} \subset \cdots \) . By hypothesis there exists an integer \( n \) such that \( {A}_{i} = {A}_{n} \) and \( {C}_{i} = {C}_{n} \) for all \( i \geq n \) . For each \( i \geq n \) there is a commutative diagram with exact rows: ![a635b0ec-463f-4a06-bfab-0631c5cb2124_393_0.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_393_0.jpg) where \( \alpha \) and \( \gamma \) are the respective identity maps and \( {\beta }_{i} \) is the inclusion map. The Short Five Lemma IV.1.17 implies that \( {\beta }_{i} \) is an isomorphism, whence \( {B}_{i} = {B}_{n} \) for all \( i \geq n \) and \( B \) satisfies the ascending chain condition. The proof for descending chain condition is analogous. Corollary 1.6. If \( \mathrm{A} \) is a submodule of a module \( \mathrm{B} \) , then \( \mathrm{B} \) satisfies the ascending [resp. descending] chain condition if and only if \( \mathrm{A} \) and \( \mathrm{B}/\mathrm{A} \) satisfy it. PROOF. Apply Theorem 1.5 to the sequence \( 0 \rightarrow A\overset{ \subset }{ \rightarrow }B \rightarrow B/A \rightarrow 0 \) . Corollary 1.7. If \( {\mathrm{A}}_{1},\ldots ,{\mathrm{A}}_{\mathrm{n}} \) are modules, then the direct sum \( {\mathrm{A}}_{1} \oplus {\mathrm{A}}_{2} \oplus \cdots \oplus {\mathrm{A}}_{\mathrm{n}} \) satisfies the ascending [resp. descending] chain condition on submodules if and only if each \( {\mathrm{A}}_{\mathrm{i}} \) satisfies it. SKETCH OF PROOF. Use induction on \( n \) . If \( n = 2 \), apply Theorem 1.5 to the sequence \( 0 \rightarrow {A}_{1}\overset{{\iota }_{1}}{ \rightarrow }{A}_{1} \oplus {A}_{2}\overset{{\pi }_{2}}{ \rightarrow }{A}_{2} \rightarrow 0 \) . Theorem 1.8. If \( \mathrm{R} \) is a left Noetherian [resp. 
Artinian] ring with identity, then every finitely generated unitary left R-module A satisfies the ascending [resp. descending] chain condition on submodules. An analogous statement is true with "left" replaced by "right." PROOF OF 1.8. If \( A \) is finitely generated, then by Corollary IV.2.2 there is a free \( R \) -module \( F \) with a finite basis and an epimorphism \( \pi : F \rightarrow A \) . Since \( F \) is a direct sum of a finite number of copies of \( R \) by Theorem IV.2.1, \( F \) is left Noetherian [resp. Artinian] by Corollary 1.7. Therefore \( A \cong F/\operatorname{Ker}\pi \) is Noetherian [resp. Artinian] by Corollary 1.6. Here is a characterization of the ascending chain condition that has no analogue for the descending chain condition. Theorem 1.9. A module A satisfies the ascending chain condition on submodules if and only if every submodule of \( \mathrm{A} \) is finitely generated. In particular, a commutative ring \( \mathrm{R} \) is Noetherian if and only if every ideal of \( \mathrm{R} \) is finitely generated. PROOF. \( \left( \Rightarrow \right) \) If \( B \) is a submodule of \( A \), let \( S \) be the set of all finitely generated submodules of \( B \) . Since \( S \) is nonempty \( \left( {0\varepsilon S}\right), S \) contains a maximal element \( C \) by Theorem 1.4. \( C \) is finitely generated by \( {c}_{1},{c}_{2},\ldots ,{c}_{n} \) . For each \( b \in B \) let \( {D}_{b} \) be the submodule of \( B \) generated by \( b,{c}_{1},{c}_{2},\ldots ,{c}_{n} \) . Then \( {D}_{b} \in S \) and \( C \subset {D}_{b} \) . Since \( C \) is maximal, \( {D}_{b} = C \) for every \( {b\varepsilon B} \), whence \( {b\varepsilon }{D}_{b} = C \) for every \( {b\varepsilon B} \) and \( B \subset C \) . Since \( C \subset B \) by construction, \( B = C \) and thus \( B \) is finitely generated. 
\( \left( \Leftarrow \right) \) Given a chain of submodules \( {A}_{1} \subset {A}_{2} \subset {A}_{3} \subset \cdots \), then it is easy to verify that \( \mathop{\bigcup }\limits_{{i \geq 1}}{A}_{i} \) is also a submodule of \( A \) and therefore finitely generated, say by \( {a}_{1},\ldots ,{a}_{k} \) . Since each \( {a}_{i} \) is an element of some \( {A}_{j} \), there is an index \( n \) such that \( {a}_{i} \in {A}_{n} \) for \( i = 1,2,\ldots, k \) . Consequently, \( \bigcup {A}_{i} \subset {A}_{n} \), whence \( {A}_{i} = {A}_{n} \) for \( i \geq n \) . We close this section by carrying over to modules the principal results of Section II. 8 on subnormal series for groups. This material is introduced in order to prove Corollary 1.12, which will be useful in Chapter IX. We begin with a host of definitions, most of which are identical to those given for groups in Section II.8. A normal series for a module \( A \) is a chain of submodules: \( A = {A}_{0} \supset {A}_{1} \supset \) \( {A}_{2} \supset \cdots \supset {A}_{n} \) . The factors of the series are the quotient modules \[ {A}_{i}/{A}_{i + 1}\;\left( {i = 0,1,\ldots, n - 1}\right) . \] The length of the series is the number of proper inclusions \( ( = \) number of nontrivial factors). A refinement of the normal series \( {A}_{0} \supset {A}_{1} \supset \cdots \supset {A}_{n} \) is a normal series obtained by inserting a finite number of additional submodules between the given ones. A proper refinement is one which has length larger than the original series. Two normal series are equivalent if there is a one-to-one correspondence between the nontrivial factors such that corresponding factors are isomorphic modules. Thus equivalent series necessarily have the same length. 
A composition series for \( A \) is a normal series \( A = {A}_{0} \supset {A}_{1} \supset {A}_{2} \supset \cdots \supset {A}_{n} = 0 \) such that each factor \( {A}_{k}/{A}_{k + 1} \) \( \left( {k = 0,1,\ldots, n - 1}\right) \) is a nonzero module with no proper submodules. \( {}^{1} \) The various results in Section II. 8 carry over readily to modules. For example, a composition series has no proper refinements and therefore is equivalent to any of its refinements (see Theorems IV.1.10 and II.8.4 and Lemma II.8.8). Theorems of Schreier, Zassenhaus, and Jordan-Hölder are valid for modules: Theorem 1.10. Any two normal series of a module A have refinements that are equivalent. Any two composition series of \( \mathrm{A} \) are equivalent. 
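Both themes of this section are concrete for \( \mathbb{Z} \)-modules: every ideal of \( \mathbb{Z} \) is generated by the gcd of any generating set, so \( \mathbb{Z} \) is Noetherian in line with Theorem 1.9, and the composition factors of \( \mathbb{Z}/n\mathbb{Z} \) are the simple groups \( \mathbb{Z}/p \) for the prime divisors \( p \) of \( n \), counted with multiplicity, as the Jordan-Hölder theorem predicts. A sketch in Python (helper names are ours):

```python
from math import gcd
from functools import reduce

def principal_generator(gens):
    # In Z the ideal (g1, ..., gk) equals (d) for d = gcd(g1, ..., gk):
    # every ideal of Z is singly generated, so Z is Noetherian (Theorem 1.9).
    return reduce(gcd, gens, 0)

def composition_factor_orders(n):
    # Z/nZ = A_0 > A_1 > ... > A_k = 0 with A_{i+1} = p * A_i for a prime p | n;
    # each factor A_i / A_{i+1} is simple of prime order, and the multiset of
    # factor orders is the multiset of prime divisors of n (Jordan-Holder).
    out, p = [], 2
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)
    return sorted(out)

print(principal_generator([12, 18, 30]))    # 6: (12, 18, 30) = (6) in Z
print(composition_factor_orders(12))        # [2, 2, 3]: every composition
                                            # series of Z/12Z has length 3
```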
## 1088_(GTM245)Complex Analysis: Definition 8.12
Definition 8.12. A circle in \( \widehat{\mathbb{C}} \) is either a Euclidean (ordinary) circle in \( \mathbb{C} \), or a straight line in \( \mathbb{C} \) together with \( \infty \) (this is a circle passing through \( \infty \) ). See Exercise 3.21 for a justification for the name.

![a50267de-c956-4a7f-8c2e-850adafcee65_219_0.jpg](images/a50267de-c956-4a7f-8c2e-850adafcee65_219_0.jpg)

Fig. 8.1 The cross ratio arguments. (a) On a circle. (b) Not on a circle

Proposition 8.13. The cross ratio of four distinct points in \( \widehat{\mathbb{C}} \) is a real number if and only if the four points lie on a circle in \( \widehat{\mathbb{C}} \) . Proof. This is an elementary geometric argument that goes as follows. It is clear that \[ \arg \left( {{z}_{1},{z}_{2},{z}_{3},{z}_{4}}\right) = \arg \frac{{z}_{1} - {z}_{3}}{{z}_{1} - {z}_{4}} - \arg \frac{{z}_{2} - {z}_{3}}{{z}_{2} - {z}_{4}}. \] It is also clear from the geometry of the situation (see Fig. 8.1 and Exercise 8.3) that the two quantities on the right-hand side differ by \( {\pi n} \), with \( n \in \mathbb{Z} \), if and only if the four points lie on a circle in \( \widehat{\mathbb{C}} \) . Theorem 8.14. A Möbius transformation maps circles in \( \widehat{\mathbb{C}} \) to circles in \( \widehat{\mathbb{C}} \) . Proof. This follows immediately from Propositions 8.11 and 8.13. We use the following standard notation in the rest of this chapter: \( \mathbb{D} \) denotes the unit disc \( \{ z \in \mathbb{C};\left| z\right| < 1\} \) and \( {\mathbb{H}}^{2} \) the upper half plane \( \{ z \in \mathbb{C};\Im z > 0\} \) . Note that both \( \mathbb{D} \) and \( {\mathbb{H}}^{2} \) should be regarded as discs in \( \widehat{\mathbb{C}} \), since they are bounded by circles in \( \widehat{\mathbb{C}} \) : the unit circle \( {S}^{1} \) and the extended real line \( \widehat{\mathbb{R}} = \mathbb{R} \cup \{ \infty \} \), respectively. 
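Proposition 8.13 is easy to probe numerically with the cross-ratio convention appearing in its proof (the function name is ours): four points on a circle give a real value, and moving one point off the circle does not.

```python
import cmath

def cross_ratio(z1, z2, z3, z4):
    # the convention matching the argument identity in the proof above:
    # (z1, z2, z3, z4) = ((z1 - z3)/(z1 - z4)) / ((z2 - z3)/(z2 - z4))
    return ((z1 - z3) / (z1 - z4)) / ((z2 - z3) / (z2 - z4))

on_circle = [cmath.exp(1j * t) for t in (0.3, 1.1, 2.5, 4.0)]   # four points of S^1
print(abs(cross_ratio(*on_circle).imag) < 1e-12)                # True: real value

off_circle = on_circle[:3] + [1.7 * on_circle[3]]               # last point moved off S^1
print(abs(cross_ratio(*off_circle).imag) > 1e-6)                # True: not real
```

Since Möbius transformations preserve the cross ratio, the same test applied before and after a transformation also illustrates Theorem 8.14.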
The next result shows that these two discs in \( \widehat{\mathbb{C}} \) are conformally equivalent. Corollary 8.15. If \( w\left( z\right) = \frac{z - \iota }{z + \iota } \) for \( z \in {\mathbb{H}}^{2} \), then \( w \) is a conformal map of \( {\mathbb{H}}^{2} \) onto \( \mathbb{D} \) . Proof. All Möbius transformations, in particular \( w \), are conformal. A calculation shows that \( w \) maps \( \widehat{\mathbb{R}} = \mathbb{R} \cup \{ \infty \} \) onto \( {S}^{1} \) (the unit circle centered at 0 ) and \( w\left( \iota \right) = \) 0 . By connectivity considerations, it follows that \( w\left( {\mathbb{H}}^{2}\right) = \mathbb{D} \) . ## 8.2 Aut \( \left( D\right) \) for \( D = \widehat{\mathbb{C}},\mathbb{C},\mathbb{D} \), and \( {\mathbb{H}}^{2} \) Theorem 8.16. A function \( f : \mathbb{C} \rightarrow \mathbb{C} \) belongs to \( \operatorname{Aut}\left( \mathbb{C}\right) \) if and only if there exist \( a \) and \( b \) in \( \mathbb{C}, a \neq 0 \), such that \( f\left( z\right) = {az} + b \) for all \( z \in \mathbb{C} \) . Proof. The if part is trivial. For the only if part, note that \( f \) is an entire function, and we can use its Taylor series at zero to conclude that \[ f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{z}^{n}\text{ for all }z \in \mathbb{C}. \] If \( \infty \) were an essential singularity of \( f \), then \( f\left( {\left| z\right| > 1}\right) \) would be dense in \( \mathbb{C} \) . But \[ f\left( {\left| z\right| > 1}\right) \cap f\left( {\left| z\right| < 1}\right) \text{is empty} \] since \( f \) is injective. Thus \( \infty \) is either a removable singularity or a pole of \( f \) ; in any case, there is a nonnegative integer \( N \) such that \( {a}_{n} = 0 \) for all \( n > N \) and \( {a}_{N} \neq 0 \) ; that is, \( f \) is a polynomial of degree \( N \) . If \( N \) were bigger than one or equal to zero, then \( f \) would not be injective. Theorem 8.17. 
Aut \( \left( \widehat{\mathbb{C}}\right) \cong \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) . Thus the last arrow in the exact sequence (8.3) corresponds to a surjective map. Proof. We need only show that \( \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \) is contained in the Möbius group. Let \( f \) be an element of \( \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \) . If \( f\left( \infty \right) = \infty \), then \( f \) is a Möbius transformation by Theorem 8.16. If \( f\left( \infty \right) = c \neq \infty \), then consider the Möbius transformation \( A\left( z\right) = \) \( \frac{1}{z - c} \) and conclude that \( B = A \circ f \) is in \( \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \) and fixes \( \infty \) ; therefore \( B \) is a Möbius transformation. But then so is \( f = {A}^{-1} \circ B \) . We now provide a characterization of the elements of \( \operatorname{Aut}\left( \mathbb{D}\right) \) ; it shows that they form a subgroup of \( \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \) . Another useful characterization is given in Exercise 8.5. Theorem 8.18. A function \( B \) defined on \( \mathbb{D} \) is in \( \operatorname{Aut}\left( \mathbb{D}\right) \) if and only if there exist \( a \) and \( b \) in \( \mathbb{C} \) such that \( {\left| a\right| }^{2} - {\left| b\right| }^{2} = 1 \) and \[ B\left( z\right) = \frac{{az} + b}{\bar{b}z + \bar{a}} \] for all \( z \in \mathbb{D} \) . Proof. The if part: Assume that \( B \) is of the above form and observe that \( a \) is different from zero. We show that \( B \in \operatorname{Aut}\left( \mathbb{D}\right) \) . This follows from the following easy to prove facts: (1) Mappings \( B \) of the given form constitute a group under composition. 
In particular, if \( B\left( z\right) = \frac{{az} + b}{\bar{b}z + \bar{a}} \) with \( {\left| a\right| }^{2} - {\left| b\right| }^{2} = 1 \), then \( {B}^{-1}\left( z\right) = \frac{\bar{a}z - b}{-\bar{b}z + a} \) has the same form as \( B \) . (2) \( \left| z\right| = 1 \) if and only if \( \left| {B\left( z\right) }\right| = 1 \) . (3) \( \left| {B\left( 0\right) }\right| = \left| \frac{b}{a}\right| < 1 \) . (4) \( B\left( \mathbb{D}\right) \) is connected. Thus, from (2), either \( B\left( \mathbb{D}\right) \) is contained in \( \mathbb{D} \) or \( B\left( \mathbb{D}\right) \cap \mathbb{D} \) is empty. From (3) we see that \( B\left( \mathbb{D}\right) \subseteq \mathbb{D} \) . It follows from (1) that \( {B}^{-1}\left( \mathbb{D}\right) \subseteq \mathbb{D} \) . (5) Obviously \( \mathbb{D} = B \circ {B}^{-1}\left( \mathbb{D}\right) \), which implies that \( B\left( \mathbb{D}\right) = \mathbb{D} \) . The only if part: Let \( f \in \operatorname{Aut}\left( \mathbb{D}\right) \) and \( w = f\left( z\right) \) . Then \( {f}^{-1} \in \operatorname{Aut}\left( \mathbb{D}\right) \) and \( z = \) \( {f}^{-1}\left( w\right) \) . (6) If \( f\left( 0\right) = 0 \), then it follows from Schwarz's lemma applied first to \( {f}^{-1} \) and then to \( f \) that \[ \left| z\right| = \left| {{f}^{-1}\left( w\right) }\right| \leq \left| w\right| = \left| {f\left( z\right) }\right| \leq \left| z\right| \text{ for all }z \in \mathbb{D}. \] The same lemma implies that there exists a \( \theta \in \mathbb{R} \) such that \( f\left( z\right) = {\mathrm{e}}^{\iota \theta }z \) for all \( z \in \mathbb{D} \) . So we can take \( a = {\mathrm{e}}^{\iota \frac{\theta }{2}} \) and \( b = 0 \) to conclude that \( f \) has the required form. (7) If \( f\left( 0\right) = c \neq 0 \), then \( 0 < \left| c\right| < 1 \) and we set \( C\left( z\right) = \frac{z - c}{1 - \bar{c}z} \) . 
It follows from Exercise 8.5 (see also Exercise 2.2) that the Möbius transformation \( C \) belongs to \( \operatorname{Aut}\left( \mathbb{D}\right) \) . Since \( C \circ f \) fixes the origin, it follows from (6) that \( C \circ f \) is of the required form, and therefore so is \( f = {C}^{-1} \circ \left( {C \circ f}\right) \) by (1). Just as in Sect. 8.1 we defined \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) as the quotient of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) by \( \pm I \) and then proved that it is isomorphic to the group \( \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \), we can define the group \( \operatorname{PSL}\left( {2,\mathbb{R}}\right) = \mathrm{{SL}}\left( {2,\mathbb{R}}\right) /\{ \pm I\} \) of appropriate matrices with real coefficients modulo plus or minus the identity matrix and obtain the following description: Theorem 8.19. Aut \( \left( {\mathbb{H}}^{2}\right) \cong \operatorname{PSL}\left( {2,\mathbb{R}}\right) \) . Proof. Consider the conformal map \( w : {\mathbb{H}}^{2} \rightarrow \mathbb{D} \) given in Corollary 8.15. Then \[ \operatorname{Aut}\left( {\mathbb{H}}^{2}\right) = {w}^{-1}\operatorname{Aut}\left( \mathbb{D}\right) w. \] By the preceding theorem, any element \( f \) of \( \operatorname{Aut}\left( \mathbb{D}\right) \) may be written as \[ f\left( z\right) = \frac{{az} + b}{\bar{b}z + \bar{a}} \] with \( {\left| a\right| }^{2} - {\left| b\right| }^{2} = 1 \) . Denote \( a = {a}_{1} + \imath {a}_{2}, b = {b}_{1} + \imath {b}_{2} \) . 
Then \[ \left( {{w}^{-1} \circ f \circ w}\right) \left( z\right) = \frac{-\left( {{a}_{1} + {b}_{1}}\right) z + {b}_{2} - {a}_{2}}{\left( {{a}_{2} + {b}_{2}}\right) z + {b}_{1} - {a}_{1}} \] with \( - \left( {{a}_{1} + {b}_{1}}\right) \left( {{b}_{1} - {a}_{1}}\right) + \left( {{a}_{2} + {b}_{2}}\right) \left( {{a}_{2} - {b}_{2}}\right) = {\left| a\right| }^{2} - {\left| b\right| }^{2} = 1 \) ; thus we have associated to any element \( {w}^{-1} \circ f \circ w \) of \( \operatorname{Aut}\left( {\mathbb{H}}^{2}\right) \) the image in \( \operatorname{PSL}\left( {2,\mathbb{R}}\right) \) of the \( \operatorname{matrix}\left\lbrack \begin{matrix} - \left( {{a}_{1} + {b}_{1}}\right) & {b}_{2} - {a}_{2} \\ {a}_{2} + {b}_{2} & {b}_{1} - {a}_{1} \end{matrix}\right\rbrack \) in \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) . Conversely, every matrix \( \left\lbrack \begin{array}{ll} a & b \\ c & d \end{array}\right\rbrack \) in \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) defines an element \( z \mapsto \frac{{az} + b}{{cz} + d} \) of \( \operatorname{Aut}\left( {\mathbb{H}}^{2}\right) \) . 
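The normalizations in Theorem 8.18 and Corollary 8.15 can be checked numerically. The sketch below (all names ours) verifies that a map \( B\left( z\right) = \left( {{az} + b}\right) /\left( {\bar{b}z + \bar{a}}\right) \) with \( {\left| a\right| }^{2} - {\left| b\right| }^{2} = 1 \) preserves the unit disc and the unit circle, and that \( w\left( z\right) = \left( {z - \iota }\right) /\left( {z + \iota }\right) \) sends \( {\mathbb{H}}^{2} \) into \( \mathbb{D} \) with \( w\left( \iota \right) = 0 \).

```python
import cmath

def B(a, b):
    # B(z) = (a z + b) / (conj(b) z + conj(a)), with |a|^2 - |b|^2 = 1
    return lambda z: (a * z + b) / (b.conjugate() * z + a.conjugate())

def w(z):
    # the map of Corollary 8.15, taking H^2 onto the unit disc D
    return (z - 1j) / (z + 1j)

# cosh/sinh give an easy normalization: |a|^2 - |b|^2 = cosh^2 - sinh^2 = 1
a = cmath.cosh(0.4) * cmath.exp(0.7j)
b = cmath.sinh(0.4) * cmath.exp(-0.2j)
f = B(a, b)

print(abs(f(0.3 + 0.4j)) < 1)                    # True: B maps D into D
print(abs(abs(f(cmath.exp(2.1j))) - 1) < 1e-12)  # True: B maps S^1 to S^1
print(abs(w(2 + 3j)) < 1 and w(1j) == 0)         # True: w(H^2) in D, w(i) = 0
```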
1098_(GTM254)Algebraic Function Fields and Codes
Definition 4.2.2
Definition 4.2.2. (a) A valued field \( T \) is said to be complete if every Cauchy sequence in \( T \) is convergent. (b) Suppose that \( \left( {T, v}\right) \) is a valued field. A completion of \( T \) is a valued field \( \left( {\widehat{T},\widehat{v}}\right) \) with the following properties: (1) \( T \subseteq \widehat{T} \), and \( v \) is the restriction of \( \widehat{v} \) to \( T \) . (2) \( \widehat{T} \) is complete with respect to the valuation \( \widehat{v} \) . (3) \( T \) is dense in \( \widehat{T} \) ; i.e., for each \( z \in \widehat{T} \) there is a sequence \( {\left( {x}_{n}\right) }_{n \geq 0} \) in \( T \) with \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} = z \) . Proposition 4.2.3. For each valued field \( \left( {T, v}\right) \) there exists a completion \( \left( {\widehat{T},\widehat{v}}\right) \) . It is unique in the following sense: If \( \left( {\widetilde{T},\widetilde{v}}\right) \) is another completion of \( \left( {T, v}\right) \) then there is a unique isomorphism \( f : \widehat{T} \rightarrow \widetilde{T} \) such that \( \widehat{v} = \widetilde{v} \circ f \) . Hence \( \left( {\widehat{T},\widehat{v}}\right) \) is called the completion of \( \left( {T, v}\right) \) . Proof. We give only a sketch of the proof; the tedious details are left to the reader. First of all, we consider the set \[ R \mathrel{\text{:=}} \left\{ {{\left( {x}_{n}\right) }_{n \geq 0} \mid {\left( {x}_{n}\right) }_{n \geq 0}\text{ is a Cauchy sequence in }T}\right\} . \] This is a ring if addition and multiplication are defined in the obvious manner via \( \left( {x}_{n}\right) + \left( {y}_{n}\right) \mathrel{\text{:=}} \left( {{x}_{n} + {y}_{n}}\right) \) and \( \left( {x}_{n}\right) \cdot \left( {y}_{n}\right) \mathrel{\text{:=}} \left( {{x}_{n}{y}_{n}}\right) \) . 
The set \[ I \mathrel{\text{:=}} \left\{ {{\left( {x}_{n}\right) }_{n \geq 0} \mid {\left( {x}_{n}\right) }_{n \geq 0}\text{ converges to }0}\right\} \] is an ideal in \( R \) ; actually \( I \) is a maximal ideal of \( R \) . Therefore the residue class ring \[ \widehat{T} \mathrel{\text{:=}} R/I \] is a field. For \( x \in T \) let \( \varrho \left( x\right) \mathrel{\text{:=}} \left( {x, x,\ldots }\right) \in R \) be the constant sequence and \( \nu \left( x\right) \mathrel{\text{:=}} \varrho \left( x\right) + I \in \widehat{T} \) . It is obvious that \( \nu : T \rightarrow \widehat{T} \) is an embedding, and we can consider \( T \) as a subfield of \( \widehat{T} \) via this embedding. Now we construct a valuation \( \widehat{v} \) on \( \widehat{T} \) as follows. If \( {\left( {x}_{n}\right) }_{n \geq 0} \) is a Cauchy sequence in \( T \), either \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}v\left( {x}_{n}\right) = \infty \] (in this case, \( {\left( {x}_{n}\right) }_{n \geq 0} \in I \) ), or there is an integer \( {n}_{0} \geq 0 \) such that \[ v\left( {x}_{n}\right) = v\left( {x}_{m}\right) \text{ for all }m, n \geq {n}_{0}. \] This follows easily from the Strict Triangle Inequality. In any case the limit \( \mathop{\lim }\limits_{{n \rightarrow \infty }}v\left( {x}_{n}\right) \) exists in \( \mathbb{Z} \cup \{ \infty \} \) . Moreover, if \( \left( {x}_{n}\right) - \left( {y}_{n}\right) \in I \) then we have \( \mathop{\lim }\limits_{{n \rightarrow \infty }}v\left( {x}_{n}\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}v\left( {y}_{n}\right) \) . Hence we can define the function \( \widehat{v} : \widehat{T} \rightarrow \) \( \mathbb{Z} \cup \{ \infty \} \) by \[ \widehat{v}\left( {{\left( {x}_{n}\right) }_{n \geq 0} + I}\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{{n \rightarrow \infty }}v\left( {x}_{n}\right) . 
\] Using the corresponding properties of \( v \), it is easily verified that \( \widehat{v} \) is a valuation of \( \widehat{T} \) and \( \widehat{v}\left( x\right) = v\left( x\right) \) for \( x \in T \) . Next we consider a Cauchy sequence \( {\left( {z}_{m}\right) }_{m \geq 0} \) in \( \widehat{T} \), say \[ {z}_{m} = {\left( {x}_{mn}\right) }_{n \geq 0} + I\text{ with }{\left( {x}_{mn}\right) }_{n \geq 0} \in R. \] Then the diagonal sequence \( {\left( {x}_{nn}\right) }_{n \geq 0} \) is a Cauchy sequence in \( T \) and \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{z}_{n} = {\left( {x}_{nn}\right) }_{n \geq 0} + I \in \widehat{T} \] Thus \( \widehat{T} \) is complete with respect to \( \widehat{v} \) . Now let \( z = {\left( {x}_{n}\right) }_{n \geq 0} + I \) be an element of \( \widehat{T} \) . Upon checking, one finds that \( z = \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} \), hence \( T \) is dense in \( \widehat{T} \) . Thus far we have shown that a completion \( \left( {\widehat{T},\widehat{v}}\right) \) of \( \left( {T, v}\right) \) exists. Suppose that \( \left( {\widetilde{T},\widetilde{v}}\right) \) is another completion of \( \left( {T, v}\right) \) . For the moment, we denote by \( \widehat{v} \) -lim (resp. \( \widetilde{v} \) -lim) the limit of a sequence in \( \widehat{T} \) (resp. \( \widetilde{T} \) ). Then we can construct a mapping \( f : \widehat{T} \rightarrow \widetilde{T} \) as follows: if \( z \in \widehat{T} \) is represented as \[ z = \widehat{v} - \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n}\;\text{ with }\;{x}_{n} \in T, \] we define \[ f\left( z\right) \mathrel{\text{:=}} \widetilde{v} - \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} \] It turns out that \( f \) is a well-defined isomorphism of \( \widehat{T} \) onto \( \widetilde{T} \) with the additional property \( \widehat{v} = \widetilde{v} \circ f \) . It is often more convenient to consider convergent series instead of sequences. 
Let \( {\left( {z}_{n}\right) }_{n \geq 0} \) be a sequence in a valued field \( \left( {T, v}\right) \) and \( {s}_{m} \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 0}}^{m}{z}_{i} \) . We say that the infinite series \( \mathop{\sum }\limits_{{i = 0}}^{\infty }{z}_{i} \) is convergent if the sequence of its partial sums \( {\left( {s}_{m}\right) }_{m \geq 0} \) is convergent; in this case we write, as usual, \[ \mathop{\sum }\limits_{{i = 0}}^{\infty }{z}_{i} \mathrel{\text{:=}} \mathop{\lim }\limits_{{m \rightarrow \infty }}{s}_{m} \] In a complete field there is a very simple criterion for convergence of an infinite series. Lemma 4.2.4. Let \( {\left( {z}_{n}\right) }_{n \geq 0} \) be a sequence in a complete valued field \( \left( {T, v}\right) \) . Then we have: The infinite series \( \mathop{\sum }\limits_{{i = 0}}^{\infty }{z}_{i} \) is convergent if and only if the sequence \( {\left( {z}_{n}\right) }_{n \geq 0} \) converges to 0 . Proof. Suppose that \( {\left( {z}_{n}\right) }_{n \geq 0} \) converges to 0 . Consider the \( m \) -th partial sum \( {s}_{m} \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 0}}^{m}{z}_{i} \) . For \( n > m \) we have \[ v\left( {{s}_{n} - {s}_{m}}\right) = v\left( {\mathop{\sum }\limits_{{i = m + 1}}^{n}{z}_{i}}\right) \geq \min \left\{ {v\left( {z}_{i}\right) \mid m < i \leq n}\right\} \geq \min \left\{ {v\left( {z}_{i}\right) \mid i > m}\right\} . \] Since \( v\left( {z}_{i}\right) \rightarrow \infty \) for \( i \rightarrow \infty \), this shows that the sequence \( {\left( {s}_{n}\right) }_{n \geq 0} \) is a Cauchy sequence in \( T \), hence convergent. The converse statement is easy; its proof is the same as in analysis. Now we specialize the above results to the case of an algebraic function field \( F/K \) . Definition 4.2.5. Let \( P \) be a place of \( F/K \) . The completion of \( F \) with respect to the valuation \( {v}_{P} \) is called the \( P \) -adic completion of \( F \) . 
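Lemma 4.2.4 can be illustrated in the rationals with the \( p \)-adic valuation (a hypothetical example of mine, not from the text): the terms \( {z}_{i} = {p}^{i} \) converge to 0 since \( {v}_{p}\left( {p}^{i}\right) = i \rightarrow \infty \), so \( \mathop{\sum }{p}^{i} \) converges, and its limit \( x \) satisfies \( \left( {1 - p}\right) x = 1 \) because \( {v}_{p}\left( {\left( {1 - p}\right) {s}_{m} - 1}\right) = m + 1 \rightarrow \infty \).

```python
# Illustration (my own example): Lemma 4.2.4 for Q with the p-adic valuation
# v_p. The terms z_i = p^i tend to 0, so the series sum p^i converges; the
# partial sums s_m satisfy (1 - p) * s_m = 1 - p^(m+1), so
# v_p((1 - p) * s_m - 1) = m + 1 grows without bound.

def v_p(x, p):
    """p-adic valuation of a nonzero integer x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

p = 5
s = 0
for m in range(10):
    s += p ** m                      # partial sum s_m = 1 + p + ... + p^m
    assert v_p((1 - p) * s - 1, p) == m + 1
print("(1 - p) * sum p^i -> 1 in the p-adic valuation")
```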
We denote this completion by \( {\widehat{F}}_{P} \) and the valuation of \( {\widehat{F}}_{P} \) by \( {v}_{P} \) . Theorem 4.2.6. Let \( P \in {\mathbb{P}}_{F} \) be a place of degree one and let \( t \in F \) be a P-prime element. Then every element \( z \in {\widehat{F}}_{P} \) has a unique representation of the form \[ z = \mathop{\sum }\limits_{{i = n}}^{\infty }{a}_{i}{t}^{i}\;\text{ with }\;n \in \mathbb{Z}\;\text{ and }\;{a}_{i} \in K. \] (4.14) This representation is called the \( P \) -adic power series expansion of \( z \) with respect to \( t \) . Conversely, if \( {\left( {c}_{i}\right) }_{i \geq n} \) is a sequence in \( K \), then the series \( \mathop{\sum }\limits_{{i = n}}^{\infty }{c}_{i}{t}^{i} \) converges in \( {\widehat{F}}_{P} \), and we have \[ {v}_{P}\left( {\mathop{\sum }\limits_{{i = n}}^{\infty }{c}_{i}{t}^{i}}\right) = \min \left\{ {i \mid {c}_{i} \neq 0}\right\} . \] Proof. First we prove the existence of a representation of the form in (4.14). Given \( z \in {\widehat{F}}_{P} \) we choose \( n \in \mathbb{Z} \) with \( n \leq {v}_{P}\left( z\right) \) . There is an element \( y \in F \) with \( {v}_{P}\left( {z - y}\right) > n \) (since \( F \) is dense in \( {\widehat{F}}_{P} \) ). By the Triangle Inequality it follows that \( {v}_{P}\left( y\right) \geq n \), hence \( {v}_{P}\left( {y{t}^{-n}}\right) \geq 0 \) . As \( P \) is a place of degree one, there is an element \( {a}_{n} \in K \) with \( {v}_{P}\left( {y{t}^{-n} - {a}_{n}}\right) > 0 \), and \[ {v}_{P}\left( {z - {a}_{n}{t}^{n}}\right) = {v}_{P}\left( {\left( {z - y}\right) + \left( {y - {a}_{n}{t}^{n}}\right) }\right) > n. \] In the same manner, we find \( {a}_{n + 1} \in K \) such that \[ {v}_{P}\left( {z - {a}_{n}{t}^{n} - {a}_{n + 1}{t}^{n + 1}}\right) > n + 1. 
\] Iterating this construction, we obtain an infinite sequence \( {a}_{n},{a}_{n + 1},{a}_{n + 2},\ldots \) in \( K \) such that \[ {v}_{P}\left( {z - \mathop{\sum }\limits_{{i = n}}^{m}{a}_{i}{t}^{i}}\right) > m \] for all \( m \geq n \) . This shows that \[ z = \mathop{\sum }\limits_{{i = n}}^{\infty }{a}_{i}{t}^{i} \] In order to prove uniqueness we consider another sequence \( {\left( {b}_{i}\right) }_{i \geq m} \) in \( K \) which satisfies \[ z = \mathop{\sum }\limits_{{i = n}}^{\infty }{a}_{i}{t}
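The digit-peeling in the proof of Theorem 4.2.6 can be carried out concretely for the rational function field \( F = \mathbb{Q}\left( t\right) \) at the place \( t = 0 \), with prime element \( t \). The toy implementation below (my own sketch, not the book's) expands \( z = f/g \) with \( g\left( 0\right) \neq 0 \) as \( \mathop{\sum }{a}_{i}{t}^{i} \) by repeatedly extracting the constant term and dividing by \( t \).

```python
# Toy version (my own illustration) of the digit-peeling in the proof of
# Theorem 4.2.6, for Q(t) at the place t = 0 with prime element t:
# expand z = f/g (with g(0) != 0) as a power series sum a_i t^i.
from fractions import Fraction

def t_adic_digits(f, g, k):
    """First k coefficients a_0..a_{k-1} of f/g, where f, g are coefficient
    lists (lowest degree first) and g[0] != 0."""
    f = [Fraction(c) for c in f] + [Fraction(0)] * (k + len(g))
    g = [Fraction(c) for c in g]
    digits = []
    for _ in range(k):
        a = f[0] / g[0]              # a_i := value of current remainder at t = 0
        digits.append(a)
        for j, c in enumerate(g):    # subtract a * g, so remainder vanishes at 0
            f[j] -= a * c
        f = f[1:]                    # divide the remainder by t
    return digits

# 1/(1 - t) = 1 + t + t^2 + ...
assert t_adic_digits([1], [1, -1], 6) == [1, 1, 1, 1, 1, 1]
# (1 + t)/(1 - t) = 1 + 2t + 2t^2 + ...
assert t_adic_digits([1, 1], [1, -1], 5) == [1, 2, 2, 2, 2]
print("t-adic expansions match")
```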
1096_(GTM252)Distributions and Operators
Definition 3.10
Definition 3.10. Let \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) . \( {1}^{ \circ } \) We say that \( u \) is 0 on the open subset \( \omega \subset \Omega \) when \[ \langle u,\varphi \rangle = 0\;\text{ for all }\varphi \in {C}_{0}^{\infty }\left( \omega \right) . \] (3.32) \( {2}^{ \circ } \) The support of \( u \) is defined as the set \[ \operatorname{supp}u = \Omega \smallsetminus \left( {\bigcup \{ \omega \mid \omega \text{ open } \subset \Omega, u\text{ is }0\text{ on }\omega \} }\right) . \] (3.33) Observe for example that the support of the nontrivial distribution \( {\partial }_{j}{1}_{\Omega } \) defined in (3.24) is contained in \( \partial \Omega \) (a deeper analysis will show that \( \operatorname{supp}{\partial }_{j}{1}_{\Omega } = \partial \Omega \) ). Since the support of \( {\partial }_{j}{1}_{\Omega } \) is a null-set in \( {\mathbb{R}}^{n} \), and 0 is the only \( {L}_{1,\mathrm{{loc}}} \) -function with support in a null-set, \( {\partial }_{j}{1}_{\Omega } \) cannot be a function in \( {L}_{1,\text{ loc }}\left( {\mathbb{R}}^{n}\right) \) (see also the discussion after Lemma 3.2). Lemma 3.11. Let \( {\left( {\omega }_{\lambda }\right) }_{\lambda \in \Lambda } \) be a family of open subsets of \( \Omega \) . If \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) is 0 on \( {\omega }_{\lambda } \) for each \( \lambda \in \Lambda \), then \( u \) is 0 on the union \( \mathop{\bigcup }\limits_{{\lambda \in \Lambda }}{\omega }_{\lambda } \) . Proof. Let \( \varphi \in {C}_{0}^{\infty }\left( \Omega \right) \) with support \( K \subset \mathop{\bigcup }\limits_{{\lambda \in \Lambda }}{\omega }_{\lambda } \) ; we must show that \( \langle u,\varphi \rangle = 0 \) . The compact set \( K \) is covered by a finite system of the \( {\omega }_{\lambda } \) ’s, say \( {\omega }_{1},\ldots ,{\omega }_{N} \) . 
According to Theorem 2.17, there exist \( {\psi }_{1},\ldots ,{\psi }_{N} \in {C}_{0}^{\infty }\left( \Omega \right) \) with \( {\psi }_{1} + \cdots + {\psi }_{N} = 1 \) on \( K \) and \( \operatorname{supp}{\psi }_{j} \subset {\omega }_{j} \) for each \( j \) . Now let \( {\varphi }_{j} = {\psi }_{j}\varphi \) ; then \( \varphi = \mathop{\sum }\limits_{{j = 1}}^{N}{\varphi }_{j} \), and \( \langle u,\varphi \rangle = \mathop{\sum }\limits_{{j = 1}}^{N}\left\langle {u,{\varphi }_{j}}\right\rangle = 0 \) by assumption. Because of this lemma, we can also describe the support as the complement of the largest open set where \( u \) is 0 . An interesting subset of \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) is the set of distributions with compact support in \( \Omega \) . It is usually denoted \( {\mathcal{E}}^{\prime }\left( \Omega \right) \) , \[ {\mathcal{E}}^{\prime }\left( \Omega \right) = \left\{ {u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \mid \operatorname{supp}u\text{ is compact } \subset \Omega }\right\} . \] (3.34) When \( u \in {\mathcal{E}}^{\prime }\left( \Omega \right) \), there is a \( j \) such that \( \operatorname{supp}u \subset {K}_{j - 1} \subset {K}_{j}^{ \circ } \) (cf. (2.4)). Since \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \), there exist \( {c}_{j} \) and \( {N}_{j} \) so that \[ \left| {\langle u,\psi \rangle }\right| \leq {c}_{j}\sup \left\{ {\left| {{D}^{\alpha }\psi \left( x\right) }\right| \mid x \in {K}_{j},\left| \alpha \right| \leq {N}_{j}}\right\} , \] for all \( \psi \) with support in \( {K}_{j} \) . Choose a function \( \eta \in {C}_{0}^{\infty }\left( \Omega \right) \) which is 1 on a neighborhood of \( {K}_{j - 1} \) and has support in \( {K}_{j}^{ \circ } \) (cf. Corollary 2.14). 
An arbitrary test function \( \varphi \in {C}_{0}^{\infty }\left( \Omega \right) \) can then be written as \[ \varphi = {\eta \varphi } + \left( {1 - \eta }\right) \varphi \] where \( \operatorname{supp}{\eta \varphi } \subset {K}_{j}^{ \circ } \) and \( \operatorname{supp}\left( {1 - \eta }\right) \varphi \subset \complement {K}_{j - 1} \) . Since \( u \) is 0 on \( \complement {K}_{j - 1} \) , \( \langle u,\left( {1 - \eta }\right) \varphi \rangle = 0 \), so that \[ \left| {\langle u,\varphi \rangle }\right| = \left| {\langle u,{\eta \varphi }\rangle }\right| \leq {c}_{j}\sup \left\{ {\left| {{D}^{\alpha }\left( {\eta \left( x\right) \varphi \left( x\right) }\right) }\right| \mid x \in {K}_{j},\left| \alpha \right| \leq {N}_{j}}\right\} \] (3.35) \[ \leq {c}^{\prime }\sup \left\{ {\left| {{D}^{\alpha }\varphi \left( x\right) }\right| \mid x \in \operatorname{supp}\varphi ,\left| \alpha \right| \leq {N}_{j}}\right\} , \] where \( {c}^{\prime } \) depends on the derivatives of \( \eta \) up to order \( {N}_{j} \) (by the Leibniz formula, cf. also (2.18)). Since \( \varphi \) was arbitrary, this shows that \( u \) has order \( {N}_{j} \) (it shows even more: that we can use the same constant \( {c}^{\prime } \) on all compact sets \( \left. {{K}_{m} \subset \Omega }\right) \) . We have shown: Theorem 3.12. When \( u \in {\mathcal{E}}^{\prime }\left( \Omega \right) \), there is an \( N \in {\mathbb{N}}_{0} \) so that \( u \) has order \( N \) . Let us also observe that when \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) has compact support, then \( \langle u,\varphi \rangle \) can be given a sense also for \( \varphi \in {C}^{\infty }\left( \Omega \right) \) (since it is only the behavior of \( \varphi \) on a neighborhood of the support of \( u \) that in reality enters in the expression). 
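The remark after Definition 3.10, that \( {\partial }_{j}{1}_{\Omega } \) is supported on \( \partial \Omega \) and is not an \( {L}_{1,\mathrm{loc}} \) function, has a concrete one-dimensional analogue (my own illustration, not from the text): for \( \Omega = \left( {0,\infty }\right) \subset \mathbb{R} \), the distribution \( {\left( {1}_{\Omega }\right) }^{\prime } \) pairs with a test function as \( \left\langle {{\left( {1}_{\Omega }\right) }^{\prime },\varphi }\right\rangle = - {\int }_{0}^{\infty }{\varphi }^{\prime } = \varphi \left( 0\right) \), i.e., it is the Dirac \( \delta \) at the boundary point 0, a distribution of order 0.

```python
# One-dimensional illustration (not from the text): for Omega = (0, inf),
# the distributional derivative of 1_Omega acts by
#   <(1_Omega)', phi> = -integral_0^inf phi'(x) dx = phi(0),
# i.e. it is delta_0, supported on the boundary point {0}.
import math

def phi(x):
    """A smooth compactly supported bump on (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def dphi(x, h=1e-5):
    """Central-difference approximation to phi'."""
    return (phi(x + h) - phi(x - h)) / (2 * h)

# -integral_0^1 phi'(x) dx by the midpoint rule (phi vanishes beyond 1)
n = 20000
integral = -sum(dphi((i + 0.5) / n) / n for i in range(n))
assert abs(integral - phi(0.0)) < 1e-4
print("<(1_Omega)', phi> =", integral, "which equals phi(0) =", phi(0.0))
```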
The space \( {\mathcal{E}}^{\prime }\left( \Omega \right) \) may in fact be identified with the space of continuous functionals on \( {C}^{\infty }\left( \Omega \right) \) (which is sometimes denoted \( \mathcal{E}\left( \Omega \right) \) ; this explains the terminology \( {\mathcal{E}}^{\prime }\left( \Omega \right) \) for the dual space). See Exercise 3.11. Remark 3.13. When \( {\Omega }^{\prime } \) is an open subset of \( \Omega \) with \( \overline{{\Omega }^{\prime }} \) compact \( \subset \Omega \), and \( K \) is compact with \( \overline{{\Omega }^{\prime }} \subset {K}^{ \circ } \subset K \subset \Omega \), then an arbitrary distribution \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) can be written as the sum of a distribution supported in \( K \) and a distribution which is 0 on \( {\Omega }^{\prime } \) : \[ u = {\zeta u} + \left( {1 - \zeta }\right) u \] (3.36) where \( \zeta \in {C}_{0}^{\infty }\left( {K}^{ \circ }\right) \) is chosen to be 1 on \( \overline{{\Omega }^{\prime }} \) (such functions exist according to Corollary 2.14). The distribution \( {\zeta u} \) has support in \( K \) since \( {\zeta \varphi } = 0 \) for \( \operatorname{supp}\varphi \subset \Omega \smallsetminus K \) ; and \( \left( {1 - \zeta }\right) u \) is 0 on \( {\Omega }^{\prime } \) since \( \left( {1 - \zeta }\right) \varphi = 0 \) for \( \operatorname{supp}\varphi \subset {\Omega }^{\prime } \) . In this connection we shall also consider restrictions of distributions, and describe how distributions are glued together. When \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) and \( {\Omega }^{\prime } \) is an open subset of \( \Omega \), we define the restriction of \( u \) to \( {\Omega }^{\prime } \) as the element \( {\left. u\right| }_{{\Omega }^{\prime }} \in {\mathcal{D}}^{\prime }\left( {\Omega }^{\prime }\right) \) defined by \[ {\left\langle {\left. 
u\right| }_{{\Omega }^{\prime }},\varphi \right\rangle }_{{\Omega }^{\prime }} = \langle u,\varphi {\rangle }_{\Omega }\;\text{ for }\varphi \in {C}_{0}^{\infty }\left( {\Omega }^{\prime }\right) . \] (3.37) (For the sake of precision, we here indicate the duality between \( {\mathcal{D}}^{\prime }\left( \omega \right) \) and \( {C}_{0}^{\infty }\left( \omega \right) \) by \( \langle \) , \( {\rangle }_{\omega } \), when \( \omega \) is an open set.) When \( {u}_{1} \in {\mathcal{D}}^{\prime }\left( {\Omega }_{1}\right) \) and \( {u}_{2} \in {\mathcal{D}}^{\prime }\left( {\Omega }_{2}\right) \), and \( \omega \) is an open subset of \( {\Omega }_{1} \cap {\Omega }_{2} \) , we say that \( {u}_{1} = {u}_{2} \) on \( \omega \), when \[ {\left. {u}_{1}\right| }_{\omega } - {\left. {u}_{2}\right| }_{\omega } = 0\;\text{ as an element of }{\mathcal{D}}^{\prime }\left( \omega \right) . \] (3.38) The following theorem is well-known for continuous functions and for \( {L}_{1,\text{ loc }} \) - functions. Theorem 3.14 (Gluing distributions together). Let \( {\left( {\omega }_{\lambda }\right) }_{\lambda \in \Lambda } \) be an arbitrary system of open sets in \( {\mathbb{R}}^{n} \) and let \( \Omega = \mathop{\bigcup }\limits_{{\lambda \in \Lambda }}{\omega }_{\lambda } \) . Assume that there is given a system of distributions \( {u}_{\lambda } \in {\mathcal{D}}^{\prime }\left( {\omega }_{\lambda }\right) \) with the property that \( {u}_{\lambda } \) equals \( {u}_{\mu } \) on \( {\omega }_{\lambda } \cap {\omega }_{\mu } \), for each pair of indices \( \lambda ,\mu \in \Lambda \) . Then there exists one and only one distribution \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) such that \( {\left. u\right| }_{{\omega }_{\lambda }} = {u}_{\lambda } \) for all \( \lambda \in \Lambda \) . Proof. Observe to begin with that there is at most one solution \( u \) . Namely, if \( u \) and \( v \) are solutions, then \( {\left. 
\left( u - v\right) \right| }_{{\omega }_{\lambda }} = 0 \) for all \( \lambda \) . This implies that \( u - v = 0 \), by Lemma 3.11. We construct \( u \) as follows: Let \( {\left( {K}_{l}\right) }_{l \in \mathbb{N}} \) be a sequence of compact sets as in (2.4) and consider a fixed \( l \) . Since \( {K}_{l} \) is compact, it is covered by a finite subfamily \( {\left( {\Omega }_{j}\right) }_{j = 1,\ldots, N} \) of the sets \( {\left( {\omega }_{\lambda }\right) }_{\lambda \in \Lambda } \) ; we denote \( {u}_{j} \) the associated distributions given in \( {\mathcal{D}}^{\prime }\left( {\Omega }_{j}\right) \), respe
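The gluing construction of Theorem 3.14 already makes sense for ordinary functions, and a partition of unity is exactly the tool that assembles the local pieces. The sketch below (my own toy example; the cover, the weights, and the local pieces are all invented for illustration) glues two locally defined functions that agree on the overlap.

```python
# Sketch (my own illustration) of the gluing in Theorem 3.14 for ordinary
# functions: local pieces u_1 on omega_1 = (0, 1.2) and u_2 on
# omega_2 = (0.8, 2) agreeing on the overlap are glued with a
# partition of unity psi_1 + psi_2 = 1 subordinate to the cover.

def psi1(x):
    # continuous, equal to 1 on (0, 0.8], 0 on [1.2, 2); supported in omega_1
    return min(1.0, max(0.0, (1.2 - x) / 0.4))

def psi2(x):
    return 1.0 - psi1(x)       # supported in omega_2

u1 = lambda x: x * x           # local piece on omega_1
u2 = lambda x: x * x           # local piece on omega_2; agrees on the overlap

def u(x):
    """The glued function on (0, 2): u = psi1*u1 + psi2*u2."""
    total = 0.0
    if x < 1.2:                # only evaluate u1 where it is defined
        total += psi1(x) * u1(x)
    if x > 0.8:                # only evaluate u2 where it is defined
        total += psi2(x) * u2(x)
    return total

for x in (0.1, 0.9, 1.0, 1.5, 1.9):
    assert abs(u(x) - x * x) < 1e-12
print("glued function agrees with the local pieces")
```

For distributions the same formula \( u = \sum {\psi }_{j}{u}_{j} \) is used locally, with the compatibility on overlaps guaranteeing that the result does not depend on the chosen cover.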
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 2.2.1
Definition 2.2.1. Let \( X \) and \( Y \) be arbitrary spaces, and let \( p : Y \rightarrow X \) be a map. Then \( Y \) is a covering space of \( X \), and \( p \) is a covering projection, if every \( x \in X \) has an open neighborhood \( U \) with \( {p}^{-1}\left( U\right) = \left\{ {V}_{i}\right\} \) a union of open sets in \( Y \), and with \( p \mid {V}_{i} : {V}_{i} \rightarrow U \) a homeomorphism for each \( i \) . Such a set \( U \) is said to be evenly covered by \( p \) . \( \diamond \) Lemma 2.2.2. Let \( p : Y \rightarrow X \) be a covering projection. Then (i) For every \( x \in X \), \( {p}^{-1}\left( x\right) \) is a discrete subset of \( Y \) . (ii) \( p \) is a local homeomorphism. (iii) The topology on \( X \) is the quotient topology it inherits from \( Y \) via the map \( p \) . We have defined covering spaces in complete generality. But in order to obtain a relationship between covering spaces and the fundamental group, we shall have to assume that both \( X \) and \( Y \) are path connected. Indeed, to get the best relationship we shall have to restrict our attention even further. But for now we continue in general. Example 2.2.3. (i) Let \( D \) be any discrete space. Then \( X \times D \) is a covering space of \( X \), with the covering projection \( p \) being a projection onto the first factor. (iia) Let \( p : \mathbb{R} \rightarrow {S}^{1} \) be defined by \( p\left( t\right) = {e}^{2\pi it} \) . Then \( p \) is a covering projection. (iib) Let \( n \) be a positive integer and let \( p : {S}^{1} \rightarrow {S}^{1} \) be defined by \( p\left( z\right) = {z}^{n} \) . Then \( p \) is a covering projection. (iii) Let \( G \) be any topological group and let \( H \) be any discrete subgroup of \( G \) . Then the projection \( p : G \rightarrow H \smallsetminus G \) (the space of left cosets, with the quotient topology) is a covering projection. 
(iv) Let \( Y \) be any Hausdorff topological space and let \( G \) be any finite group that acts freely on \( Y \), i.e., with the property that if \( g\left( y\right) = y \) for any \( g \in G \) and any \( y \in Y \), then \( g \) is the identity element of \( G \) . Let \( G \smallsetminus Y \) be the quotient space under this action, i.e., \( {y}_{1},{y}_{2} \in Y \) are identified in \( G \smallsetminus Y \) if there is an element \( g \) of \( G \) with \( g\left( {y}_{1}\right) = {y}_{2} \), with the quotient topology. Note we are assuming here that \( G \) acts on \( Y \) on the left. Then \( p : Y \rightarrow G \smallsetminus Y \) is a covering projection. More generally, let \( Y \) be any topological space and let \( G \) be any group acting properly discontinuously on \( Y \), i.e., with the property that every \( y \in Y \) has a neighborhood \( U \) such that if \( g\left( U\right) \cap U \neq \varnothing \), then \( g \) is the identity element of \( G \) . (Note that such an action must be free.) Then \( p : Y \rightarrow G \smallsetminus Y \) is a covering projection. Note that (ii) is a special case of (iii), which is in turn a special case of (iv). \( \diamond \) A covering projection \( p : Y \rightarrow X \) has two important properties. Theorem 2.2.4 (Unique path lifting). Let \( p : Y \rightarrow X \) be a covering projection. Let \( {x}_{0} \in X \) be arbitrary and let \( {y}_{0} \in Y \) be any point with \( p\left( {y}_{0}\right) = {x}_{0} \) . Let \( f : I \rightarrow X \) be an arbitrary map with \( f\left( 0\right) = {x}_{0} \) . 
Then \( f \) has a unique lifting \( \widetilde{f} : I \rightarrow Y \) with \( \widetilde{f}\left( 0\right) = {y}_{0} \), i.e., there is a unique \( \widetilde{f} : I \rightarrow Y \) with \( \widetilde{f}\left( 0\right) = {y}_{0} \) making the following diagram commute: ![21ef530b-1e09-406a-b041-cf4539af5c14_20_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_20_0.jpg) Theorem 2.2.5 (Homotopy lifting property). Let \( p : Y \rightarrow X \) be a covering projection. Let \( E \) be an arbitrary space and let \( F : E \times I \rightarrow X \) be an arbitrary map. Suppose there is a map \( \widetilde{f} : E \times \{ 0\} \rightarrow Y \) such that \( p\widetilde{f}\left( {e,0}\right) = F\left( {e,0}\right) \) for every \( e \in E \) . Then \( \widetilde{f} \) extends to a map \( \widetilde{F} : E \times I \rightarrow Y \) making the following diagram commute: ![21ef530b-1e09-406a-b041-cf4539af5c14_20_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_20_1.jpg) The homotopy lifting property is sometimes also called the covering homotopy property. As a consequence of these two theorems, we can now compute some fundamental groups. Theorem 2.2.6. Let \( Y \) be a path connected and simply connected space and let the group \( G \) act properly discontinuously on \( Y \) on the left. Let \( X \) be the quotient space \( X = G \smallsetminus Y \) . Then for any \( {x}_{0} \in X,{\pi }_{1}\left( {X,{x}_{0}}\right) \) is isomorphic to \( G \) . Proof. Let \( p : Y \rightarrow X \) be the quotient map. As we have observed, \( p \) is a covering projection. Choose \( {y}_{0} \in Y \) with \( p\left( {y}_{0}\right) = {x}_{0} \) . Note that \( g \mapsto {y}_{g} = g\left( {y}_{0}\right) \) gives a 1-1 correspondence between the elements \( g \) of \( G \) and \( F = \left\{ {y \in Y \mid p\left( y\right) = {x}_{0}}\right\} \), and under this correspondence \( {y}_{e} = {y}_{0} \), where \( e \) is the identity element of \( G \) . 
For each \( g \in G \), define \( {\widetilde{f}}_{g} : I \rightarrow Y \) with \( {\widetilde{f}}_{g}\left( 0\right) = {y}_{0} \) and \( {\widetilde{f}}_{g}\left( 1\right) = {y}_{g} \) . Note that such a map \( {\widetilde{f}}_{g} \) always exists, as we are assuming \( Y \) is path connected. Then \( {f}_{g} = p\left( {\widetilde{f}}_{g}\right) \) is a map \( {f}_{g} : I \rightarrow X \) with \( {f}_{g}\left( 0\right) = {f}_{g}\left( 1\right) = {x}_{0} \), so represents an element \( \left\lbrack {f}_{g}\right\rbrack \) of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \) . We claim the map \( \varphi : G \rightarrow {\pi }_{1}\left( {X,{x}_{0}}\right) \) by \( \varphi \left( g\right) = \left\lbrack {f}_{g}\right\rbrack \) is an isomorphism. (i) \( \varphi \) is well-defined. Suppose we have another map \( {\widetilde{f}}_{g}^{\prime } : I \rightarrow Y \) with \( {\widetilde{f}}_{g}^{\prime }\left( 0\right) = {y}_{0} \) and \( {\widetilde{f}}_{g}^{\prime }\left( 1\right) = {y}_{g} \) . Since \( Y \) is simply connected, \( {\widetilde{f}}_{g} \) and \( {\widetilde{f}}_{g}^{\prime } \) are homotopic rel \( \{ 0,1\} \), so taking the image of this homotopy under \( p \) gives a homotopy between \( {f}_{g} \) and \( {f}_{g}^{\prime } \) rel \( \{ 0,1\} \), so \( \left\lbrack {f}_{g}\right\rbrack = \left\lbrack {f}_{g}^{\prime }\right\rbrack \in {\pi }_{1}\left( {X,{x}_{0}}\right) \) . (ii) \( \varphi \) is onto. Let \( \alpha \in {\pi }_{1}\left( {X,{x}_{0}}\right) \) be arbitrary and let \( f : \left( {I,\{ 0,1\} }\right) \rightarrow \left( {X,{x}_{0}}\right) \) represent \( \alpha \) . Then by Theorem 2.2.4 \( f \) lifts to \( \widetilde{f} : I \rightarrow Y \) with \( \widetilde{f}\left( 0\right) = {y}_{0} \) and \( p\left( {\widetilde{f}\left( 1\right) }\right) = {x}_{0} \), i.e., \( \widetilde{f}\left( 1\right) = {y}_{g} \) for some element \( g \) of \( G \), and then by (i) we may take \( {\widetilde{f}}_{g} = \widetilde{f} \), so \( \varphi \left( g\right) = \alpha \) . 
(iii) \( \varphi \) is one-to-one. Suppose that for some \( g \in G,{f}_{g} = p\left( {\widetilde{f}}_{g}\right) \) represents the trivial element of \( {\pi }_{1}\left( {X,{x}_{0}}\right) \), i.e., it is null-homotopic rel \( \{ 0,1\} \) . Then there is a map \( F : I \times I \rightarrow X \) with \( F\left( {s,0}\right) = {f}_{g}\left( s\right) = p{\widetilde{f}}_{g}\left( s\right) \) for every \( s \in I, F\left( {0, t}\right) = F\left( {1, t}\right) = \) \( {x}_{0} \) for every \( t \in I \) and \( F\left( {s,1}\right) = {x}_{0} \) for every \( s \in I \) . By Theorem 2.2.5, \( F \) lifts to \( \widetilde{F} : I \times I \rightarrow Y \) with \( \widetilde{F}\left( {s,0}\right) = {\widetilde{f}}_{g}\left( s\right) \) for every \( s \in I \), and \( p\widetilde{F}\left( {0, t}\right) = p\widetilde{F}\left( {1, t}\right) = {x}_{0} \) . Now \( {\widetilde{h}}_{0} : I \rightarrow Y \) by \( {\widetilde{h}}_{0}\left( t\right) = \widetilde{F}\left( {0, t}\right) \) is a path in \( Y \), and \( {\widetilde{h}}_{0}\left( 0\right) = \widetilde{F}\left( {0,0}\right) = {\widetilde{f}}_{g}\left( 0\right) = {y}_{0} \) . Also, \( p{\widetilde{h}}_{0}\left( t\right) = {x}_{0} \) for every \( t \), i.e., \( {\widetilde{h}}_{0}\left( t\right) \in {p}^{-1}\left( {x}_{0}\right) \) for every \( t \in I \) . But \( {p}^{-1}\left( {x}_{0}\right) \) is a discrete space, so we must have \( {\widetilde{h}}_{0}\left( t\right) = {\widetilde{h}}_{0}\left( 0\right) \) for every \( t \), and in particular \( {\widetilde{h}}_{0}\left( 1\right) = {\widetilde{h}}_{0}\left( 0\right) = {y}_{0} \) . By exactly the same logic, if \( {\widetilde{h}}_{1} : I \rightarrow Y \) by \( {\widetilde{h}}_{1}\left( t\right) = \widetilde{F}\left( {1, t}\right) \) , we must have \( {\widetilde{h}}_{1}\left( 1\right) = {\widetilde{h}}_{1}\left( 0\right) \) . Now \( {\widetilde{h}}_{1}\left( 0\right) = \widetilde{F}\left( {1,0}\right) = {\widetilde{f}}_{g}\left( 1\right) = {y}_{g} \) . 
On the other hand, \( F\left( {s,1}\right) \) is a constant path starting at \( {x}_{0} \), so by Theorem 2.2.4 lifts to a unique path starting at \( \widetilde{F}\left( {0,1}\right) = {y}_{0} \) . Obviously, the constant path at \( {y}_{0} \) is such a lifting, so must be the only lifting, i.e., \( \widetilde{F}\left( {s,1}\right) = {y}_{0} \) for every \( s \) . In particular \( \widetilde{F}\left( {1,1}\right) = {\widetilde{h}}_{1}\left( 1\right) = {y}_{0} \) . But then \( {y}_{0} = {y}_{g} = g\left( {y}_{0}\right) \), and so \( g \) is the identity
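Theorem 2.2.6 applied to Example 2.2.3 (iia), with \( Y = \mathbb{R} \), \( G = \mathbb{Z} \) acting by translation, gives \( {\pi }_{1}\left( {S}^{1}\right) \cong \mathbb{Z} \): the lift of a loop ends at an integer, its winding number. The sketch below (my own numerical illustration, not from the book) lifts a sampled loop through \( p\left( t\right) = {e}^{2\pi it} \) by accumulating small argument increments, exactly the local-homeomorphism mechanism behind unique path lifting.

```python
# Numerical sketch (not from the book) of unique path lifting for the
# covering p: R -> S^1, p(t) = e^{2 pi i t} (Example 2.2.3 (iia)).
# Lifting a loop f step by step recovers f~(1) - f~(0), the integer
# (winding number) identifying the class of f in pi_1(S^1) = Z.
import cmath, math

def winding_number(loop, samples=1000):
    """Lift a loop in S^1 through p and return f~(1) - f~(0)."""
    lift = 0.0
    prev = loop(0.0)
    for k in range(1, samples + 1):
        cur = loop(k / samples)
        # choose the unique nearby lift increment: p is a local homeomorphism
        lift += cmath.phase(cur / prev) / (2 * math.pi)
        prev = cur
    return round(lift)

assert winding_number(lambda s: cmath.exp(2j * math.pi * 3 * s)) == 3
assert winding_number(lambda s: cmath.exp(-2j * math.pi * s)) == -1
print("winding numbers recovered by path lifting")
```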
1234_[丁一文] Number Theory 1
Definition 4.3.7
Definition 4.3.7. (1) A function \( f : {\left\lbrack 0,1\right\rbrack }^{n - 1} \rightarrow {\mathbb{R}}^{n} \) is called Lipschitz if the ratio \( \frac{\left| f\left( x\right) - f\left( y\right) \right| }{\left| x - y\right| } \) is uniformly bounded for \( x \neq y \in {\left\lbrack 0,1\right\rbrack }^{n - 1} \) . (2) Let \( B \) be a bounded region in \( {\mathbb{R}}^{n} \) . We say that \( \partial B = \bar{B} \smallsetminus {B}^{o} \) is \( \left( {n - 1}\right) \) -Lipschitz parametrizable if it is covered by the images of finitely many Lipschitz functions \( f : {\left\lbrack 0,1\right\rbrack }^{n - 1} \rightarrow {\mathbb{R}}^{n} \) . Lemma 4.3.8. Let \( B \) be a bounded region in \( {\mathbb{R}}^{n} \) such that \( \partial B \) is \( \left( {n - 1}\right) \) -Lipschitz parametrizable, and let \( \Lambda \subset {\mathbb{R}}^{n} \) be a complete lattice. Then for \( a > 1 \), we have \[ \# \left( {\Lambda \cap {aB}}\right) = \frac{\mu \left( B\right) }{\operatorname{Vol}\left( {{\mathbb{R}}^{n}/\Lambda }\right) }{a}^{n} + O\left( {a}^{n - 1}\right) . \] Proof. We give a sketch of the proof. First, the map \( x \mapsto {ax} \) induces a bijection from \( \left( {\frac{1}{a}\Lambda }\right) \cap B \) to \( \Lambda \cap {aB} \) . Let \( {e}_{1},\cdots ,{e}_{n} \) be a basis of \( \Lambda \), and \( \Omega \mathrel{\text{:=}} \left\{ {\mathop{\sum }\limits_{i}{x}_{i}{e}_{i} \mid - \frac{1}{2a} < {x}_{i} \leq \frac{1}{2a}}\right\} \) . Since \( \partial B \) is \( \left( {n - 1}\right) \) -Lipschitz parametrizable, for any \( r > 0 \) there exist \( {M}_{r} \) points \( \left\{ {x}_{i}\right\} \) with \( {M}_{r} = O\left( {a}^{n - 1}\right) \) such that for any \( x \in \partial B \) there exists \( {x}_{i} \) with \( \left| {x - {x}_{i}}\right| < \frac{r}{a} \) . 
We can then choose \( r \) (independent of \( a \) ) and \( M = O\left( {a}^{n - 1}\right) \) points \( \left\{ {x}_{i}\right\} \) such that if \( \left( {\alpha + \Omega }\right) \cap \partial B \neq \varnothing \), then \( \left( {\alpha + \Omega }\right) \subset B\left( {{x}_{i}, r/a}\right) \) for some \( {x}_{i} \) . Similarly as in Example 4.3.6, we have \[ \frac{\mu \left( {B \smallsetminus { \cup }_{i}B\left( {{x}_{i}, r/a}\right) }\right) }{\mu \left( \Omega \right) } \leq \# \left( {\left( {\frac{1}{a}\Lambda }\right) \cap B}\right) \leq \frac{\mu \left( {B \cup \left( {{ \cup }_{i}B\left( {{x}_{i}, r/a}\right) }\right) }\right) }{\mu \left( \Omega \right) }. \] Using \( \frac{\mu \left( B\right) }{\mu \left( \Omega \right) } = \frac{\mu \left( B\right) }{\operatorname{Vol}\left( {{\mathbb{R}}^{n}/\Lambda }\right) }{a}^{n} \) and \( \frac{\mu \left( {{ \cup }_{i}B\left( {{x}_{i}, r/a}\right) }\right) }{\mu \left( \Omega \right) } = O\left( {a}^{n - 1}\right) \), the lemma follows. We now calculate \( \# \left( {\left( {J \cap \{ \alpha \in K \mid N\left( \alpha \right) \leq {tN}\left( J\right) \} }\right) /{\mathcal{O}}_{K}^{ \times }}\right) \) . Let \( \lambda : K \hookrightarrow {\mathbb{R}}^{r} \times {\mathbb{C}}^{s} \) , \( x \mapsto \left( {{\sigma }_{1}\left( x\right) ,\cdots ,{\sigma }_{r}\left( x\right) ,{\sigma }_{r + 1}\left( x\right) ,\cdots ,{\sigma }_{r + s}\left( x\right) }\right) \) . Let \( {\varepsilon }_{1},\cdots ,{\varepsilon }_{r + s - 1} \) be a set of fundamental units (whose classes form a basis of \( {\mathcal{O}}_{K}^{ \times }/{\mu }_{K} \) ). 
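Lemma 4.3.8 can be tested in the simplest case (my own example, not from the text): \( \Lambda = {\mathbb{Z}}^{2} \) with covolume 1 and \( B \) the unit disk, whose boundary is Lipschitz parametrizable and whose area is \( \pi \), so \( \# \left( {{\mathbb{Z}}^{2} \cap {aB}}\right) = \pi {a}^{2} + O\left( a\right) \).

```python
# Quick check (my own example) of Lemma 4.3.8 with Lambda = Z^2 (covolume 1)
# and B the unit disk (mu(B) = pi):  #(Z^2 intersect aB) = pi a^2 + O(a).
import math

def disk_count(a):
    """Number of integer points (x, y) with x^2 + y^2 <= a^2."""
    r = int(a)
    return sum(1 for x in range(-r, r + 1) for y in range(-r, r + 1)
               if x * x + y * y <= a * a)

for a in (20, 50, 100):
    err = abs(disk_count(a) - math.pi * a * a)
    assert err < 8 * a          # generous O(a) bound for these sizes
print("lattice count matches pi a^2 up to O(a)")
```

(The true error in this example, Gauss's circle problem, is far smaller than \( O\left( a\right) \), but the Lipschitz-boundary bound is all the lemma claims.)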
Consider ![86ca4626-7ae6-496f-b170-b6817ee2a1e9_72_0.jpg](images/86ca4626-7ae6-496f-b170-b6817ee2a1e9_72_0.jpg) where \( \log \left( {{x}_{1},\cdots ,{x}_{r + s}}\right) \mathrel{\text{:=}} \left( {\log \left| {x}_{1}\right| ,\cdots ,\log \left| {x}_{r}\right| ,2\log \left| {x}_{r + 1}\right| ,\cdots ,2\log \left| {x}_{r + s}\right| }\right) \), and where \( H \) is the hyperplane of \( {\mathbb{R}}^{r + s} \) defined by \( \mathop{\sum }\limits_{{i = 1}}^{{r + s}}{x}_{i} = 0 \). Recall that \( \ell \left( {\mathcal{O}}_{K}^{ \times }\right) \) is a complete lattice in \( H \). Let \( {X}_{t} \mathrel{\text{:=}} J \cap \{ \alpha \in K \mid N\left( \alpha \right) \leq {tN}\left( J\right) \} \), and \( {X}_{t}^{ * } \mathrel{\text{:=}} {X}_{t} \smallsetminus \{ 0\} \). The image of the region \( {X}_{t}^{ * } \) under the morphism Log is the region \( {X}_{t}^{\text{Log }} \mathrel{\text{:=}} \left\{ {\left( {x}_{i}\right) \in {\mathbb{R}}^{r + s} \mid \mathop{\sum }\limits_{{i = 1}}^{{r + s}}{x}_{i} \leq \log \left( {{tN}\left( J\right) }\right) }\right\} \). The natural \( {\mathcal{O}}_{K}^{ \times } \)-action on \( {X}_{t}^{ * } \) transfers to an \( \ell \left( {\mathcal{O}}_{K}^{ \times }\right) \)-action on \( {X}_{t}^{\text{Log }} \) by addition. Put \( v \mathrel{\text{:=}} \frac{1}{r + s}\left( {1,\cdots ,1}\right) \in {\mathbb{R}}^{r + s} \); then \( \left( {v,\ell \left( {\varepsilon }_{1}\right) ,\cdots ,\ell \left( {\varepsilon }_{r + s - 1}\right) }\right) \) form a basis of \( {\mathbb{R}}^{r + s} \). Let \( {D}_{t}^{\text{Log }} \mathrel{\text{:=}} \) \( \left\{ {{t}_{0}v + \mathop{\sum }\limits_{{i = 1}}^{{r + s - 1}}{t}_{i}\ell \left( {\varepsilon }_{i}\right) \mid {t}_{0} \in \left( {-\infty ,\log \left( {{tN}\left( J\right) }\right) }\right) ,{t}_{i} \in \lbrack 0,1)}\right\} . \) Lemma 4.3.9.
For any \( x \in {X}_{t}^{\text{Log }} \), there exists a unique \( y \in {D}_{t}^{\text{Log }} \) such that \( x - y \in \ell \left( {\mathcal{O}}_{K}^{ \times }\right) \). Proof. The lemma follows easily from the fact that \( \left\{ {\mathop{\sum }\limits_{{i = 1}}^{{r + s - 1}}{t}_{i}\ell \left( {\varepsilon }_{i}\right) \mid {t}_{i} \in \lbrack 0,1)}\right\} \) is a fundamental mesh of the lattice \( \oplus \mathbb{Z}\ell \left( {\varepsilon }_{i}\right) \) in \( H \). Let \( {D}_{t} \subset {X}_{t}^{ * } \) be the inverse image of \( {D}_{t}^{\text{Log }} \) under the morphism Log. By Lemma 4.3.9, we have Lemma 4.3.10. For any \( x \in {X}_{t}^{ * } \), there exists a unique \( y \in {D}_{t} \) such that \( x{y}^{-1} \in \mathop{\prod }\limits_{{i = 1}}^{{r + s - 1}}{\varepsilon }_{i}^{\mathbb{Z}} \subset {\mathcal{O}}_{K}^{ \times } \). Consequently, we have a bijection \[ \left( {J \cap \left\{ {\alpha \in K \mid N\left( \alpha \right) \leq {tN}\left( J\right) }\right\} }\right) /{\mathcal{O}}_{K}^{ \times } \leftrightarrow \left( {\lambda \left( J\right) \cap {D}_{t}}\right) /{\mu }_{K} \] We have \( {D}_{t} = {t}^{\frac{1}{n}}{D}_{1} \) and \( \partial {D}_{1} \) is \( \left( {n - 1}\right) \)-Lipschitz parametrizable. By Lemma 4.3.8, we have \[ {N}_{C}\left( t\right) = \frac{\mu \left( {D}_{1}\right) }{\left| {\mu }_{K}\right| \operatorname{Vol}\left( {{\mathbb{R}}^{n}/\lambda \left( J\right) }\right) }t + O\left( {t}^{1 - \frac{1}{n}}\right) . \] We have \( \operatorname{Vol}\left( {{\mathbb{R}}^{n}/\lambda \left( J\right) }\right) = {2}^{-s}\sqrt{\left| {\Delta }_{K}\right| }N\left( J\right) \). Now we calculate \( \mu \left( {D}_{1}\right) \): \[ \mu \left( {D}_{1}\right) = {\int }_{{D}_{1}}d{y}_{1}\cdots d{y}_{r}\,d{z}_{1}\cdots d{z}_{s}.
\] Writing \( {z}_{j} = {\rho }_{j}{e}^{i{\theta }_{j}} \), so that \( {x}_{i} = \log \left| {y}_{i}\right| \) for \( 1 \leq i \leq r \) and \( {x}_{i} = 2\log {\rho }_{i - r} \) for \( r + 1 \leq i \leq r + s \), we have \[ \mu \left( {D}_{1}\right) = {\int }_{{D}_{1}}d{y}_{1}\cdots d{y}_{r}\,{\rho }_{1}\cdots {\rho }_{s}\,d{\rho }_{1}\cdots d{\rho }_{s}\,d{\theta }_{1}\cdots d{\theta }_{s} = {2}^{r}{\pi }^{s}{\int }_{{D}_{1}^{\text{Log }}}{e}^{\mathop{\sum }\limits_{{i = 1}}^{{r + s}}{x}_{i}}d{x}_{1}\cdots d{x}_{r + s}. \] Let \( {t}_{0},\cdots ,{t}_{r + s - 1} \) be the new variables such that \[ {\left( {x}_{1},\cdots ,{x}_{r + s}\right) }^{T} = \left( {v,\ell \left( {\varepsilon }_{1}\right) ,\cdots ,\ell \left( {\varepsilon }_{r + s - 1}\right) }\right) {\left( {t}_{0},\cdots ,{t}_{r + s - 1}\right) }^{T}; \] then, since \( \mathop{\sum }\limits_{i}{x}_{i} = {t}_{0} \) (the coordinates of \( v \) sum to 1 while each \( \ell \left( {\varepsilon }_{i}\right) \) lies in \( H \)), we have \[ \mu \left( {D}_{1}\right) = {2}^{r}{\pi }^{s}\left| {\det \left( {v,\ell \left( {\varepsilon }_{1}\right) ,\cdots ,\ell \left( {\varepsilon }_{r + s - 1}\right) }\right) }\right| {\int }_{-\infty }^{\log \left( {N\left( J\right) }\right) }{e}^{{t}_{0}}d{t}_{0}\mathop{\prod }\limits_{{i = 1}}^{{r + s - 1}}{\int }_{0}^{1}d{t}_{i}. \] We define \( {R}_{K} \mathrel{\text{:=}} \left| {\det \left( {v,\ell \left( {\varepsilon }_{1}\right) ,\cdots ,\ell \left( {\varepsilon }_{r + s - 1}\right) }\right) }\right| \), called the regulator of \( K \). We see that \( {R}_{K} \) is independent of the choice of the fundamental units, and is unchanged if \( v \) is replaced by any vector \( \left( {{x}_{1},\cdots ,{x}_{r + s}}\right) \in {\mathbb{R}}^{r + s} \) such that \( \mathop{\sum }\limits_{{i = 1}}^{{r + s}}{x}_{i} = 1 \). Thus \( \mu \left( {D}_{1}\right) = {2}^{r}{\pi }^{s}{R}_{K}N\left( J\right) \) and the theorem follows.

## 4.4 Dirichlet \( L \)-functions

We first discuss characters of finite groups. Let \( G \) be a finite abelian group, and let \( \widehat{G} \mathrel{\text{:=}} \left\{ {\chi : G \rightarrow {\mathbb{C}}^{ \times }}\right\} \) be the set of characters of \( G \). Then \( \widehat{G} \) has a natural (abelian) group structure: \( \left( {{\chi }_{1}{\chi }_{2}}\right) \left( g\right) = {\chi }_{1}\left( g\right) {\chi }_{2}\left( g\right) \). Lemma 4.4.1. (1) We have a (non-canonical) isomorphism \( \widehat{G} \cong G \). (2) Given an exact sequence \( 1 \rightarrow H \rightarrow G \rightarrow G/H \rightarrow 1 \), the induced sequence \( 1 \rightarrow \widehat{G/H} \rightarrow \widehat{G} \rightarrow \widehat{H} \rightarrow 1 \) is exact. (3) The morphism \( G \rightarrow \widehat{\widehat{G}}, g \mapsto \left( {\chi \mapsto \chi \left( g\right) }\right) \) is an isomorphism. Proof. (1) By the structure theorem of finite abelian groups, it suffices to show \( \widehat{\mathbb{Z}/n\mathbb{Z}} \cong \mathbb{Z}/n\mathbb{Z} \). However, it is clear that \( \chi \in \widehat{\mathbb{Z}/n\mathbb{Z}} \) is determined by \( \chi \left( 1\right) \in {\mu }_{n}\left( \mathbb{C}\right) \).
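Part (1) of Lemma 4.4.1 is concrete enough to verify directly in a small case. A Python sketch (the labels \( \chi_k \) are our notation) for \( G = \mathbb{Z}/6\mathbb{Z} \): the characters are \( \chi_k(a) = e^{2\pi i k a / n} \), they multiply exactly like \( \mathbb{Z}/n\mathbb{Z} \), and each character is pinned down by the \( n \)-th root of unity \( \chi(1) \):

```python
import cmath

n = 6

def chi(k):
    # The character of Z/nZ sending a to e^{2 pi i k a / n}.
    return lambda a: cmath.exp(2j * cmath.pi * k * a / n)

# The dual group law: chi_j * chi_k = chi_{j+k}, so the dual is again cyclic of order n.
for j in range(n):
    for k in range(n):
        for a in range(n):
            assert abs(chi(j)(a) * chi(k)(a) - chi((j + k) % n)(a)) < 1e-9

# Each character is determined by chi(1), which must be an n-th root of unity.
for k in range(n):
    assert abs(chi(k)(1) ** n - 1) < 1e-9

print("Z/6Z has exactly", n, "distinct characters, so its dual is isomorphic to Z/6Z")
```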
1167_(GTM73)Algebra
Definition 7.7
Definition 7.7. Let \( F \) be an object in a concrete category \( \mathcal{C} \), \( X \) a nonempty set, and \( i : X \rightarrow F \) a map (of sets). \( F \) is free on the set \( X \) provided that for any object \( A \) of \( \mathcal{C} \) and map (of sets) \( f : X \rightarrow A \), there exists a unique morphism of \( \mathcal{C} \), \( \bar{f} : F \rightarrow A \), such that \( \bar{f}i = f \) (as a map of sets \( X \rightarrow A \)). The essential fact about a free object \( F \) is that in order to define a morphism with domain \( F \), it suffices to specify the image of the subset \( i\left( X\right) \), as is seen in the following examples. EXAMPLES. Let \( G \) be any group and \( g \in G \). Then the map \( \bar{f} : \mathbf{Z} \rightarrow G \) defined by \( \bar{f}\left( n\right) = {g}^{n} \) is easily seen to be the unique homomorphism \( \mathbf{Z} \rightarrow G \) such that \( 1 \mapsto g \). Consequently, if \( X = \{ 1\} \) and \( i : X \rightarrow \mathbf{Z} \) is the inclusion map, then \( \mathbf{Z} \) is free on \( X \) in the category of groups (given \( f : X \rightarrow G \), let \( g = f\left( 1\right) \) and define \( \bar{f} \) as above). In other words, to determine a unique homomorphism from \( \mathbf{Z} \) to \( G \) we need only specify the image of \( 1 \in \mathbf{Z} \) (that is, the image of \( i\left( X\right) \)). The (additive) group \( \mathbf{Q} \) of rational numbers does not have this property. It is not difficult to show that there is no nontrivial homomorphism \( \mathbf{Q} \rightarrow {S}_{3} \).
Thus for any set \( X \), function \( i : X \rightarrow \mathbf{Q} \) and function \( f : X \rightarrow {S}_{3} \) with \( f\left( {x}_{1}\right) \neq \left( 1\right) \) for some \( {x}_{1} \in X \), there is no homomorphism \( \bar{f} : \mathbf{Q} \rightarrow {S}_{3} \) with \( \bar{f}i = f \) . Theorem 7.8. If \( \mathcal{C} \) is a concrete category, \( \mathbf{F} \) and \( {\mathbf{F}}^{\prime } \) are objects of \( \mathcal{C} \) such that \( \mathbf{F} \) is free on the set \( \mathrm{X} \) and \( {\mathrm{F}}^{\prime } \) is free on the set \( {\mathrm{X}}^{\prime } \) and \( \left| \mathrm{X}\right| = \left| {\mathrm{X}}^{\prime }\right| \), then \( \mathrm{F} \) is equivalent to \( {\mathrm{F}}^{\prime } \) . Note that the hypotheses are satisfied when \( F \) and \( {F}^{\prime } \) are both free on the same set \( X \) . PROOF OF 7.8. Since \( F,{F}^{\prime } \) are free and \( \left| X\right| = \left| {X}^{\prime }\right| \), there is a bijection \( f : X \rightarrow {X}^{\prime } \) and maps \( i : X \rightarrow F \) and \( j : {X}^{\prime } \rightarrow {F}^{\prime } \) . Consider the map \( {jf} : X \rightarrow {F}^{\prime } \) . Since \( F \) is free, there is a morphism \( \varphi : F \rightarrow {F}^{\prime } \) such that the diagram: ![a635b0ec-463f-4a06-bfab-0631c5cb2124_75_0.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_75_0.jpg) is commutative. Similarly, since the bijection \( f \) has an inverse \( {f}^{-1} : {X}^{\prime } \rightarrow X \) and \( {F}^{\prime } \) is free, there is a morphism \( \psi : {F}^{\prime } \rightarrow F \) such that: ![a635b0ec-463f-4a06-bfab-0631c5cb2124_75_1.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_75_1.jpg) is commutative. Combining these gives a commutative diagram: ![a635b0ec-463f-4a06-bfab-0631c5cb2124_75_2.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_75_2.jpg) Hence \( \left( {\psi \circ \varphi }\right) i = i{1}_{X} = i \) . But \( {1}_{F}i = i \) . 
Thus by the uniqueness property of free objects we must have \( \psi \circ \varphi = {1}_{F} \) . A similar argument shows that \( \varphi \circ \psi = {1}_{{F}^{\prime }} \) . Therefore \( F \) is equivalent to \( {F}^{\prime } \) . Products, coproducts, and free objects are all defined via universal mapping properties (that is, in terms of the existence of certain uniquely determined morphisms). We have also seen that any two products (or coproducts) for a given family of objects are actually equivalent (Theorems 7.3 and 7.5). Likewise two free objects on the same set are equivalent (Theorem 7.8). Furthermore there is a distinct similarity between the proofs of Theorems 7.3 and 7.8. Consequently it is not surprising that all of the notions just mentioned are in fact special cases of a single concept. Definition 7.9. An object \( I \) in a category \( \mathcal{C} \) is said to be universal (or initial) if for each object \( C \) of \( \mathcal{C} \) there exists one and only one morphism \( I \rightarrow C \) . An object \( T \) of \( \mathcal{C} \) is said to be couniversal (or terminal) if for each object \( C \) of \( \mathcal{C} \) there exists one and only one morphism \( C \rightarrow T \) . We shall show below that products, coproducts, and free objects may be considered as (co)universal objects in suitably chosen categories. However, this characterization is not needed in the sequel. Since universal objects will not be mentioned again (except in occasional exercises) until Sections III.4, III.5, and IV.5, the reader may wish to omit the following material for the present. Theorem 7.10. Any two universal [resp. couniversal] objects in a category \( \mathcal{C} \) are equivalent. PROOF. Let \( I \) and \( J \) be universal objects in \( \mathcal{C} \) . Since \( I \) is universal, there is a unique morphism \( f : I \rightarrow J \) .
Similarly, since \( J \) is universal, there is a unique morphism \( g : J \rightarrow I \) . The composition \( g \circ f : I \rightarrow I \) is a morphism of \( \mathcal{C} \) . But \( {1}_{I} : I \rightarrow I \) is also a morphism of \( \mathcal{C} \) . The universality of \( I \) implies that there is a unique morphism \( I \rightarrow I \), whence \( g \circ f = {1}_{I} \) . Similarly the universality of \( J \) implies that \( f \circ g = {1}_{J} \) . Therefore \( f : I \rightarrow J \) is an equivalence. The proof for couniversal objects is analogous. EXAMPLE. The trivial group \( \langle e\rangle \) is both universal and couniversal in the category of groups. EXAMPLE. Let \( F \) be a free object on the set \( X \) (with \( i : X \rightarrow F \) ) in a concrete category \( \mathcal{C} \) . Define a new category \( \mathfrak{D} \) as follows. The objects of \( \mathfrak{D} \) are all maps of sets \( f : X \rightarrow A \), where \( A \) is (the underlying set of) an object of \( \mathcal{C} \) . A morphism in \( \mathfrak{D} \) from \( f : X \rightarrow A \) to \( g : X \rightarrow B \) is defined to be a morphism \( h : A \rightarrow B \) of \( \mathcal{C} \) such that the diagram: ![a635b0ec-463f-4a06-bfab-0631c5cb2124_76_0.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_76_0.jpg) is commutative (that is, \( {hf} = g \) ). Verify that \( {1}_{A} : A \rightarrow A \) is the identity morphism from \( f \) to \( f \) in \( \mathfrak{D} \) and that \( h \) is an equivalence in \( \mathfrak{D} \) if and only if \( h \) is an equivalence in \( \mathcal{C} \) . Since \( F \) is free on the set \( X \), there is for each map \( f : X \rightarrow A \) a unique morphism \( \bar{f} : F \rightarrow A \) such that \( \bar{f}i = f \) . This is precisely the statement that \( i : X \rightarrow F \) is a universal object in the category \( \mathfrak{D} \) . EXAMPLE. Let \( \left\{ {{A}_{i} \mid i \in I}\right\} \) be a family of objects in a category \( \mathcal{C} \) .
Define a category \( \mathcal{E} \) whose objects are all pairs \( \left( {B,\left\{ {{f}_{i} \mid i \in I}\right\} }\right) \), where \( B \) is an object of \( \mathcal{C} \) and for each \( i,{f}_{i} : B \rightarrow {A}_{i} \) is a morphism of \( \mathcal{C} \) . A morphism in \( \mathcal{E} \) from \( \left( {B,\left\{ {{f}_{i} \mid i \in I}\right\} }\right) \) to \( \left( {D,\left\{ {{g}_{i} \mid i \in I}\right\} }\right) \) is defined to be a morphism \( h : B \rightarrow D \) of \( \mathcal{C} \) such that \( {g}_{i} \circ h = {f}_{i} \) for every \( i \in I \) . Verify that \( {1}_{B} \) is the identity morphism from \( \left( {B,\left\{ {f}_{i}\right\} }\right) \) to \( \left( {B,\left\{ {f}_{i}\right\} }\right) \) in \( \mathcal{E} \) and that \( h \) is an equivalence in \( \mathcal{E} \) if and only if \( h \) is an equivalence in \( \mathcal{C} \) . If a product exists in \( \mathcal{C} \) for the family \( \left\{ {{A}_{i} \mid i \in I}\right\} \) (with maps \( {\pi }_{k} : \prod {A}_{i} \rightarrow {A}_{k} \) for each \( k \in I \) ), then for every \( \left( {B,\left\{ {f}_{i}\right\} }\right) \) in \( \mathcal{E} \) there exists a unique morphism \( f : B \rightarrow \prod {A}_{i} \) such that \( {\pi }_{i} \circ f = {f}_{i} \) for every \( i \in I \) . But this says that \( \left( {\prod {A}_{i},\left\{ {{\pi }_{i} \mid i \in I}\right\} }\right) \) is a couniversal object in the category \( \mathcal{E} \) . Similarly the coproduct of a family of objects in \( \mathcal{C} \) may be considered as a universal object in an appropriately constructed category. Since a product \( \prod {A}_{i} \) of a family \( \left\{ {{A}_{i} \mid i \in I}\right\} \) in a category may be considered as a couniversal object in a suitable category, it follows immediately from Theorem 7.10 that \( \prod {A}_{i} \) is uniquely determined up to equivalence. Analogous results hold for coproducts and free objects.

## EXERCISES

1.
A pointed set is a pair \( \left( {S, x}\right) \) with \( S \) a set and \( x \in S \) . A morphism of pointed sets \( \left( {S, x}\right) \rightarrow \left( {{S}^{\prime },{x}^{\prime }}\right) \) is a triple \( \left( {f, x,{x}^{\prime }}\right) \), where \( f : S \rightarrow {S}^{\prime } \) is a function such that \( f\left( x\right) = {x}^{\prime } \) .
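The first EXAMPLE above (\( \mathbf{Z} \) is free on \( \{1\} \) in the category of groups) can be watched concretely: once \( g \in S_3 \) is chosen, the homomorphism \( \bar{f}(n) = g^n \) is forced. A minimal Python sketch, with permutations of \( \{0, 1, 2\} \) as tuples:

```python
# Permutations of {0, 1, 2} as tuples; compose(p, q) applies q first, then p.
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

e = (0, 1, 2)  # identity of S_3

def fbar(g, n):
    # The unique homomorphism Z -> S_3 with 1 |-> g, namely n |-> g^n.
    result, base = e, (g if n >= 0 else inverse(g))
    for _ in range(abs(n)):
        result = compose(result, base)
    return result

g = (1, 2, 0)  # a 3-cycle
# Homomorphism property: fbar(m + n) = fbar(m) fbar(n) for all integers m, n.
for m in range(-4, 5):
    for n in range(-4, 5):
        assert fbar(g, m + n) == compose(fbar(g, m), fbar(g, n))
print("n |-> g^n is a homomorphism Z -> S_3, determined by the image of 1")
```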
109_The rising sea Foundations of Algebraic Geometry
Definition 3.64
Definition 3.64. A simplicial complex \( \Sigma \) is called a Coxeter complex if it is isomorphic to \( \Sigma \left( {W, S}\right) \) for some Coxeter system \( \left( {W, S}\right) \) . It is called a spherical Coxeter complex if it is finite. This differs from our previous use of the term "Coxeter complex" in that we do not assume that \( \Sigma = \Sigma \left( {W, S}\right) \) . In fact, we do not assume that we are given a specific isomorphism \( \Sigma \rightarrow \Sigma \left( {W, S}\right) \) as part of the structure of \( \Sigma \) . In particular, no chamber of \( \Sigma \) has been singled out as "fundamental." The following theorem of Tits says, roughly speaking, that Coxeter complexes can be characterized as the thin chamber complexes with "enough" roots. Theorem 3.65. A thin chamber complex \( \Sigma \) is a Coxeter complex if and only if every pair of adjacent chambers is separated by a wall. (We can restate the condition of the theorem as follows: For every ordered pair \( C,{C}^{\prime } \) of adjacent chambers, there is a folding \( \phi \) of \( \Sigma \) with \( \phi \left( {C}^{\prime }\right) = C \) . We do not need to specify here that \( \phi \) is reversible; for this follows, as we saw above in the case of \( \Sigma \left( {W, S}\right) \), from the existence of a folding taking \( {C}^{\prime } \) to \( C \) .) Proof of Theorem 3.65 (start). We have already proven the "only if" part. For the converse, assume that every pair of adjacent chambers is separated by a wall. Choose an arbitrary chamber \( C \), called the fundamental chamber, and let \( S \) be the set of reflections determined by the panels of \( C \) . Let \( W \leq \operatorname{Aut}\Sigma \) be the subgroup generated by \( S \) . We will prove that \( \left( {W, S}\right) \) is a Coxeter system and that \( \Sigma \cong \Sigma \left( {W, S}\right) \) . We could simply repeat, essentially verbatim, the arguments that led to the analogous results for finite reflection groups in Chapter 1.
For the sake of variety, however, we will use a different method. This is actually a little longer, but it adds some geometric insight that we would not get by repeating the previous arguments. In particular, it gives a simple geometric explanation of the deletion condition. We now proceed with a sequence of lemmas, after which we can complete the proof. Lemma 3.66. \( W \) acts transitively on the chambers of \( \Sigma \) . Proof. This is identical to the proof given in Chapter 1 for finite reflection groups (Theorem 1.69). Lemma 3.67. \( \Sigma \) is colorable. Proof. Let \( \bar{C} \) be the subcomplex \( {\Sigma }_{ \leq C} \) . It suffices to show that \( \bar{C} \) is a retract of \( \Sigma \) . The idea for showing this is to construct a retraction \( \rho \) by folding and folding and folding…, until the whole complex \( \Sigma \) has been folded up onto \( \bar{C} \) . To make this precise, let \( {C}_{1},\ldots ,{C}_{n} \) be the chambers adjacent to \( C \), and let \( {\phi }_{1},\ldots ,{\phi }_{n} \) be the foldings such that \( {\phi }_{i}\left( {C}_{i}\right) = C \) . Let \( \psi \) be the composite \( {\phi }_{n} \circ \cdots \circ {\phi }_{1} \) . We claim that \( d\left( {C,\psi \left( D\right) }\right) < d\left( {C, D}\right) \) for any chamber \( D \neq C \) . To prove this, let \( \Gamma : C,{C}^{\prime },\ldots, D \) be a minimal gallery from \( C \) to \( D \) ; we will show that \( \psi \left( \Gamma \right) \) has a repetition. If \( {\phi }_{1}\left( \Gamma \right) \) has a repetition, we are done. Otherwise, the standard uniqueness argument shows that \( {\phi }_{1} \) fixes all the chambers of \( \Gamma \) pointwise. In this case, repeat the argument with \( {\phi }_{2} \), and so on. Eventually we will be ready to apply the folding \( {\phi }_{i} \) that takes \( {C}^{\prime } \) to \( C \) .
If the previous foldings did not already produce a repetition in \( \Gamma \), then they have fixed \( \Gamma \) pointwise, and the application of \( {\phi }_{i} \) yields a pregallery with a repetition. This proves the claim. It follows that for any chamber \( D,{\psi }^{k}\left( D\right) = C \) for \( k \) sufficiently large. Since \( \psi \) fixes \( C \) pointwise, this implies that the "infinite iterate" \( \rho \mathrel{\text{:=}} \mathop{\lim }\limits_{{k \rightarrow \infty }}{\psi }^{k} \) is a well-defined chamber map that retracts \( \Sigma \) onto \( \bar{C} \) . It will be convenient to choose a fixed type function \( \tau \) with \( S \) as the set of types, analogous to the canonical type function that we used earlier in the chapter. To this end we assign types to the vertices of the fundamental chamber \( C \) by declaring that the panel fixed by the reflection \( s \in S \) is an \( s \) -panel. We then extend this to all of \( \Sigma \) by means of a retraction \( \rho \) of \( \Sigma \) onto \( \bar{C} \) . Note that this type function \( \tau \) has a property that by now should be very familiar: For any \( s \in S \), the chambers \( C \) and \( {sC} \) are \( s \) -adjacent. Lemma 3.68. Foldings and reflections are type-preserving; hence all elements of \( W \) are type-preserving. Consequently, \( {wC} \) and \( {wsC} \) are \( s \) -adjacent for any \( w \in W \) and \( s \in S \) . Proof. A folding \( \phi \) fixes at least one chamber pointwise, so the type-change map \( {\phi }_{ * } \) is the identity (see Proposition A.14). This proves that foldings are type-preserving, and everything else follows from this. If \( \Gamma : {C}_{0},\ldots ,{C}_{d} \) is a gallery and \( {H}_{i} \) is the wall separating \( {C}_{i - 1} \) from \( {C}_{i} \), then, as usual, we will say that \( {H}_{1},\ldots ,{H}_{d} \) are the walls crossed by \( \Gamma \) . Lemma 3.69.
If \( \Gamma : {C}_{0},\ldots ,{C}_{d} \) is a minimal gallery, then the walls crossed by \( \Gamma \) are distinct and are precisely the walls separating \( {C}_{0} \) from \( {C}_{d} \) . Hence the distance between two chambers is equal to the number of walls separating them. Proof. Suppose \( H \) is a wall separating \( {C}_{0} \) from \( {C}_{d} \) . Let \( \pm \alpha \) be the corresponding roots, say with \( {C}_{0} \in \alpha \) and \( {C}_{d} \in - \alpha \) . Then there must be some \( i \) with \( 1 \leq i \leq d \) such that \( {C}_{i - 1} \in \alpha \) and \( {C}_{i} \in - \alpha \) . Since \( \alpha \) and \( - \alpha \) are convex (Lemma 3.44), it follows that we have \( {C}_{0},\ldots ,{C}_{i - 1} \in \alpha \) and \( {C}_{i},\ldots ,{C}_{d} \in - \alpha \) . In other words, \( \Gamma \) crosses \( H \) exactly once. Now suppose \( H \) is a wall that does not separate \( {C}_{0} \) from \( {C}_{d} \) . Then \( {C}_{0} \) and \( {C}_{d} \) are both in the same root \( \alpha \), so the convexity of \( \alpha \) implies that \( \Gamma \) does not cross \( H \) . The crux of the proof of Lemma 3.69, obviously, is the convexity of roots, which in turn was based on the idea of using foldings to shorten galleries. We can now use this same idea to prove a geometric analogue of the deletion condition. The statement uses the notion of type of a gallery (Definition 3.22). Lemma 3.70. Let \( \Gamma \) be a gallery of type \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{d}}\right) \) . If \( \Gamma \) is not minimal, then there is a gallery \( {\Gamma }^{\prime } \) with the same extremities as \( \Gamma \) such that \( {\Gamma }^{\prime } \) has type \( {\mathbf{s}}^{\prime } = \left( {{s}_{1},\ldots ,{\widehat{s}}_{i},\ldots ,{\widehat{s}}_{j},\ldots ,{s}_{d}}\right) \) for some \( i < j \) . Proof. Since \( \Gamma \) is not minimal, Lemma 3.69 implies that the number of walls separating \( {C}_{0} \) from \( {C}_{d} \) is less than \( d \) . 
Hence the walls crossed by \( \Gamma \) cannot all be distinct; for if a wall is crossed exactly once by \( \Gamma \), then it certainly separates \( {C}_{0} \) from \( {C}_{d} \) . We can therefore find a root \( \alpha \) and indices \( i, j \), with \( 1 \leq i < j \leq d \), such that \( {C}_{i - 1} \) and \( {C}_{j} \) are in \( \alpha \) but \( {C}_{k} \in - \alpha \) for \( i \leq k < j \) ; see Figure 3.6. Let \( \phi \) be the folding with image \( \alpha \) . If we modify \( \Gamma \) by applying \( \phi \) to the portion \( {C}_{i},\ldots ,{C}_{j - 1} \), we obtain a pregallery with the same extremities that has exactly two repetitions: \[ {C}_{0},\ldots ,{C}_{i - 1},\phi \left( {C}_{i}\right) ,\ldots ,\phi \left( {C}_{j - 1}\right) ,{C}_{j},\ldots ,{C}_{d}. \] So we can delete \( {C}_{i - 1} \) and \( {C}_{j} \) to obtain a gallery \( {\Gamma }^{\prime } \) of length \( d - 2 \) . The type \( {\mathbf{s}}^{\prime } \) of \( {\Gamma }^{\prime } \) is \( \left( {{s}_{1},\ldots ,{\widehat{s}}_{i},\ldots ,{\widehat{s}}_{j},\ldots ,{s}_{d}}\right) \) because \( \phi \) is type-preserving. Lemma 3.71. The action of \( W \) is simply transitive on the chambers of \( \Sigma \) . ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_160_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_160_0.jpg) Fig. 3.6. A geometric proof of the deletion condition. Proof. We have already noted that the action is transitive. To prove that the stabilizer of \( C \) is trivial, note that if \( {wC} = C \) then \( w \) fixes \( C \) pointwise, since \( w \) is type-preserving. But then \( w = 1 \) by the standard uniqueness argument. It follows from Lemma 3.71 that we have a bijection \( W \rightarrow \mathcal{C}\left( \Sigma \right) \) given by \( w \mapsto {wC} \) .
This yields the familiar \( 1 - 1 \) correspondence between galleries starting at \( C \) and words \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{d}}\right) \), where the gallery \( \left( {C}_{i}\right) \) corresponding to \( \mathbf{s} \) is given by \( {C}_{i} \mathrel{\text{:=}} {s}_{1}\cdots {s}_{i}C \) for \( i = 0,\ldots, d \) . In view of Lemma 3.68, the type of this gallery is the word \( \mathbf{s} \) that we started with. So a direct translation of Lemma 3.70 into the language of group theory yi
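The group-theoretic content of Lemma 3.70 is the deletion condition, and it can be checked by brute force in a small Coxeter group. A Python sketch (our example) with \( W = S_3 \), simple reflections \( s_1, s_2 \), and the non-reduced word \( (s_1, s_2, s_1, s_2, s_1) \), which spells the length-one element \( s_2 \):

```python
from itertools import combinations

def compose(p, q):
    # Permutations of {0, 1, 2} as tuples; apply q first, then p.
    return tuple(p[q[i]] for i in range(3))

def prod(word):
    result = (0, 1, 2)
    for s in word:
        result = compose(result, s)
    return result

s1, s2 = (1, 0, 2), (0, 2, 1)        # the simple reflections of W = S_3

word = (s1, s2, s1, s2, s1)          # five letters, but the product is s2 (length 1)
target = prod(word)
assert target == s2

# Deletion condition: some pair of letters can be dropped without changing the product.
found = [(i, j) for i, j in combinations(range(len(word)), 2)
         if prod(word[:i] + word[i + 1:j] + word[j + 1:]) == target]
assert found
print("deletable pairs (i, j):", found)
```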
1069_(GTM228)A First Course in Modular Forms
Definition 8.2.1
Definition 8.2.1. Let \( C \) be a projective curve over \( {\overline{\mathbb{F}}}_{p} \) . The Frobenius map on \( C \) is \[ {\sigma }_{p} : C \rightarrow {C}^{{\sigma }_{p}},\;\left\lbrack {{x}_{0},{x}_{1},\ldots ,{x}_{n}}\right\rbrack \mapsto \left\lbrack {{x}_{0}^{p},{x}_{1}^{p},\ldots ,{x}_{n}^{p}}\right\rbrack . \] For example, the Frobenius map on \( {\mathbb{P}}^{1}\left( {\overline{\mathbb{F}}}_{p}\right) \) is \( {\sigma }_{p}\left( t\right) = {t}^{p} \) on the affine part, and so the induced map \( {\sigma }_{p}^{ * } \) gives the extension of function fields \( \mathbb{K}/\mathbf{k} \) where \[ \mathbb{K} = {\mathbb{F}}_{p}\left( t\right) ,\;\mathbf{k} = {\mathbb{F}}_{p}\left( s\right) ,\;s = {t}^{p}. \] Thus \( \mathbb{K} = \mathbf{k}\left( t\right) \) . The minimal polynomial of \( t \) over \( \mathbf{k} \) is \( {x}^{p} - s \), so the function field extension degree is \( p \) even though the Frobenius map bijects and we therefore might expect its degree to be 1 . Since \( {x}^{p} - s = {\left( x - t\right) }^{p} \) over \( \mathbb{K} \), the extension is generated by a \( p \) th root that repeats \( p \) times as the root of its minimal polynomial. Similarly the Frobenius map on an elliptic curve over \( {\mathbb{F}}_{p} \) is \( {\sigma }_{p}\left( {u, v}\right) = \) \( \left( {{u}^{p},{v}^{p}}\right) \) on the affine part, so that \( {\sigma }_{p}^{ * } \) gives the extension of function fields \( \mathbb{K}/\mathbf{k} \) where \[ \mathbb{K} = {\mathbb{F}}_{p}\left( u\right) \left\lbrack v\right\rbrack /\langle E\left( {u, v}\right) \rangle ,\;\mathbf{k} = {\mathbb{F}}_{p}\left( s\right) \left\lbrack t\right\rbrack /\langle E\left( {s, t}\right) \rangle ,\;s = {u}^{p}, t = {v}^{p}. \] Thus \( \mathbb{K} = \mathbf{k}\left( {u, v}\right) \) . The minimal polynomial of \( u \) in \( \mathbf{k}\left\lbrack x\right\rbrack \) is \( {x}^{p} - s \), and this factors as \( {\left( x - u\right) }^{p} \) over \( \mathbf{k}\left( u\right) \) . 
The minimal polynomial of \( v \) in \( \mathbf{k}\left( u\right) \left\lbrack y\right\rbrack \) divides \( E\left( {u, y}\right) \), a quadratic polynomial in \( y \) . So \[ \left\lbrack {\mathbf{k}\left( u\right) : \mathbf{k}}\right\rbrack = p\;\text{ and }\;\left\lbrack {\mathbf{k}\left( {u, v}\right) : \mathbf{k}\left( u\right) }\right\rbrack \in \{ 1,2\} . \] A similar argument shows that \[ \left\lbrack {\mathbf{k}\left( v\right) : \mathbf{k}}\right\rbrack = p\;\text{ and }\;\left\lbrack {\mathbf{k}\left( {u, v}\right) : \mathbf{k}\left( v\right) }\right\rbrack \in \{ 1,3\} . \] Therefore \( \mathbb{K} = \mathbf{k}\left( u\right) = \mathbf{k}\left( v\right) \) and \( \left\lbrack {\mathbb{K} : \mathbf{k}}\right\rbrack = p \) . Again the function field extension degree is \( p \) even though the Frobenius map is a bijection, and again the extension is generated by a \( p \) th root that repeats \( p \) times as the root of its minimal polynomial. Definition 8.2.2. An algebraic extension of fields \( \mathbb{K}/\mathbf{k} \) is separable if for every element \( u \) of \( \mathbb{K} \) the minimal polynomial of \( u \) in \( \mathbf{k}\left\lbrack x\right\rbrack \) has distinct roots in \( \overline{\mathbf{k}} \) . Otherwise the extension is inseparable. A field extension obtained by adjoining a succession of \( p \) th roots that repeat \( p \) times as the root of their minimal polynomials is called purely inseparable. Since a polynomial has a multiple root if and only if it shares a root with its derivative, every algebraic extension of fields of characteristic 0 is separable. Every algebraic extension \( \mathbb{K}/\mathbf{k} \) where \( \mathbf{k} \) is a finite field is separable as well. The examples given before Definition 8.2.2 are purely inseparable. Let \( h : C \rightarrow {C}^{\prime } \) be a surjective morphism over \( {\overline{\mathbb{F}}}_{p} \) of nonsingular projective curves over \( {\overline{\mathbb{F}}}_{p} \) . 
Let \( \mathbf{k} = {\overline{\mathbb{F}}}_{p}\left( {C}^{\prime }\right) \) and \( \mathbb{K} = {\overline{\mathbb{F}}}_{p}\left( C\right) \), so that the induced \( {\overline{\mathbb{F}}}_{p} \) -injection of function fields is \( {h}^{ * } : \mathbf{k} \rightarrow \mathbb{K} \) . The field extension \( \mathbb{K}/{h}^{ * }\left( \mathbf{k}\right) \) takes the form \[ {h}^{ * }\left( \mathbf{k}\right) \subset {\mathbf{k}}_{\text{sep }} \subset \mathbb{K} \] where \( {\mathbf{k}}_{\text{sep }}/{h}^{ * }\left( \mathbf{k}\right) \) is the maximal separable subextension of \( \mathbb{K}/{h}^{ * }\left( \mathbf{k}\right) \) (this exists since the composite of separable extensions is again separable) and thus \( \mathbb{K}/{\mathbf{k}}_{\text{sep }} \) is purely inseparable. Factoring \( {h}^{ * } : \mathbf{k} \rightarrow \mathbb{K} \) as \[ \mathbf{k}\xrightarrow[]{{h}_{\text{sep }}^{ * }}{\mathbf{k}}_{\text{sep }}\xrightarrow[]{{h}_{\text{ins }}^{ * }}\mathbb{K} \] gives a corresponding factorization of \( h : C \rightarrow {C}^{\prime } \) , \[ C\xrightarrow[]{{h}_{\text{ins }}}{C}_{\text{sep }}\xrightarrow[]{{h}_{\text{sep }}}{C}^{\prime } \] where the first map is \( {h}_{\text{ins }} = {\sigma }_{p}^{e} \) with \( {p}^{e} = \left\lbrack {\mathbb{K} : {\mathbf{k}}_{\text{sep }}}\right\rbrack \) . That is, \[ h = {h}_{\text{sep }} \circ {\sigma }_{p}^{e} \] The morphism \( h \) is called separable, inseparable, or purely inseparable according to whether the field extension \( \mathbb{K}/{h}^{ * }\left( \mathbf{k}\right) \) is separable (i.e., \( e = 0 \) ), inseparable \( \left( {e > 0}\right) \), or purely inseparable \( \left( {{h}_{\text{sep }} = 1}\right) \) . 
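The purely inseparable phenomenon is the "freshman's dream" in characteristic \( p \): the middle binomial coefficients \( \binom{p}{k} \) are all divisible by \( p \), so \( x^p - s = (x - t)^p \). A quick Python check with \( p = 7 \) (a sample prime), expanding \( (x - c)^p \) over \( \mathbb{F}_p \) for each \( c \), where \( c^p = c \) by Fermat's little theorem:

```python
from math import comb

p = 7
# The middle binomial coefficients vanish mod p; this drives pure inseparability.
assert all(comb(p, k) % p == 0 for k in range(1, p))

def x_minus_c_to_the_p(c, p):
    # Coefficients of (x - c)^p over F_p, lowest degree first.
    coeffs = [1]
    for _ in range(p):
        # Multiply the current polynomial by (x - c), reducing mod p.
        coeffs = [(a - c * b) % p for a, b in zip([0] + coeffs, coeffs + [0])]
    return coeffs

for c in range(p):
    # (x - c)^p should equal x^p - c^p = x^p - c in F_p[x].
    expected = [(-c) % p] + [0] * (p - 1) + [1]
    assert x_minus_c_to_the_p(c, p) == expected
print("(x - c)^7 = x^7 - c^7 in F_7[x] for every c")
```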
As in Chapter 7, the degree of \( h \) is \[ \deg \left( h\right) = \left\lbrack {\mathbb{K} : {h}^{ * }\left( \mathbf{k}\right) }\right\rbrack . \] The separable and inseparable degrees of \( h \) are \[ {\deg }_{\text{sep }}\left( h\right) = \deg \left( {h}_{\text{sep }}\right) = \left\lbrack {{\mathbf{k}}_{\text{sep }} : {h}^{ * }\left( \mathbf{k}\right) }\right\rbrack , \] \[ {\deg }_{\text{ins }}\left( h\right) = \deg \left( {h}_{\text{ins }}\right) = \left\lbrack {\mathbb{K} : {\mathbf{k}}_{\text{sep }}}\right\rbrack , \] so that \[ \deg \left( h\right) = {\deg }_{\text{sep }}\left( h\right) {\deg }_{\text{ins }}\left( h\right) . \] In characteristic \( p \) the degree formula from Chapter 7 remains \[ \mathop{\sum }\limits_{{P \in {h}^{-1}\left( Q\right) }}{e}_{P}\left( h\right) = \deg \left( h\right) \;\text{ for any }Q \in {C}^{\prime }. \] In particular, \[ \mathop{\sum }\limits_{{P \in {h}_{\text{sep }}^{-1}\left( Q\right) }}{e}_{P}\left( {h}_{\text{sep }}\right) = {\deg }_{\text{sep }}\left( h\right) \;\text{ for any }Q \in {C}^{\prime }, \] and the ramification index \( {e}_{P}\left( {h}_{\text{sep }}\right) \in {\mathbb{Z}}^{ + } \) is 1 at all but finitely many points. Since \( {h}_{\text{ins }} \) bijects, it follows that \( {\deg }_{\text{sep }}\left( h\right) = \left| {{h}_{\text{sep }}^{-1}\left( Q\right) }\right| = \left| {{h}^{-1}\left( Q\right) }\right| \) for all but finitely many \( Q \in {C}^{\prime } \) . This description applies even when \( h \) is not surjective (in which case \( h \) maps to a single point and has degree 0), and it shows that separable degree is multiplicative: if \( {h}^{\prime } : {C}^{\prime } \rightarrow {C}^{\prime \prime } \) is another morphism then \( {\deg }_{\text{sep }}\left( {{h}^{\prime } \circ h}\right) = {\deg }_{\text{sep }}\left( {h}^{\prime }\right) {\deg }_{\text{sep }}\left( h\right) \) . Consequently inseparable degree is multiplicative as well, since total degree clearly is.
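The fiber count for a purely inseparable map can be checked by brute force in a small case: the Frobenius map \( x \mapsto {x}^{p} \) is a bijection on every finite field, so each of its fibers is a single point, matching \( {\deg }_{\text{sep }}\left( {\sigma }_{p}\right) = 1 \) even though \( \deg \left( {\sigma }_{p}\right) = p \) . A minimal sketch over \( {\mathbb{F}}_{9} = {\mathbb{F}}_{3}\left\lbrack t\right\rbrack /\left( {{t}^{2} + 1}\right) \) ; the modulus \( {t}^{2} + 1 \) is a choice made only for this illustration.

```python
# Frobenius x -> x^p on F_9 = F_3[t]/(t^2 + 1): a bijection of degree p.

p = 3

def mul(a, b):
    # multiply (a0 + a1 t)(b0 + b1 t) in F_3[t]/(t^2 + 1), where t^2 = -1
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def frob(a):
    # a^p, computed by p successive multiplications
    r = (1, 0)
    for _ in range(p):
        r = mul(r, a)
    return r

field = [(i, j) for i in range(p) for j in range(p)]
image = [frob(a) for a in field]

assert sorted(image) == sorted(field)    # every fiber has exactly one point
assert any(frob(a) != a for a in field)  # yet Frobenius is not the identity
```

So on points the map is invisible, exactly as in the discussion above; the inseparable degree \( p \) is carried by the function-field extension, not by the fibers.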
We have seen that the Frobenius map \( {\sigma }_{p} \) is purely inseparable of degree \( p \) on \( {\mathbb{P}}^{1}\left( {\overline{\mathbb{F}}}_{p}\right) \) and on elliptic curves \( E \) over \( {\mathbb{F}}_{p} \), and in fact this holds on all curves. On elliptic curves, where there is a group law, the map \( {\sigma }_{p} - 1 \) makes sense, and it is separable: for otherwise \( {\sigma }_{p} - 1 = f \circ {\sigma }_{p} \) where \( f : E \rightarrow E \) is a morphism, so \( 1 = g \circ {\sigma }_{p} \) where \( g = 1 - f \), giving a contradiction because \( {\sigma }_{p} \) is not an isomorphism. The map \( \left\lbrack p\right\rbrack \) is an isogeny of degree \( {p}^{2} \) on elliptic curves over \( {\overline{\mathbb{F}}}_{p} \) . (This statement will be justified later by Theorem 8.5.10.) As a special case of the degree formula, if \( \varphi : E \rightarrow {E}^{\prime } \) is any isogeny of elliptic curves then \[ {\deg }_{\text{sep }}\left( \varphi \right) = \left| {\ker \left( \varphi \right) }\right| \] (8.13) By this formula and by the structure of \( \ker \left( \left\lbrack p\right\rbrack \right) = E\left\lbrack p\right\rbrack \) as \( \left\{ {0}_{E}\right\} \) or \( \mathbb{Z}/p\mathbb{Z} \) , \( {\deg }_{\text{sep }}\left( \left\lbrack p\right\rbrack \right) \) is 1 or \( p \) . Thus \( \left\lbrack p\right\rbrack \) is not separable, and so it takes the form \( \left\lbrack p\right\rbrack = \) \( f \circ {\sigma }_{p} \) for some rational \( f \) . This shows that although the inverse \( {\sigma }_{p}^{-1} \) of the Frobenius map is not rational, the dual isogeny \( {\widehat{\sigma }}_{p} = f \) is. 
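Formula (8.13) can be verified by hand in a small case. The sketch below takes \( \varphi = \left\lbrack 2\right\rbrack \) on the sample curve \( E : {y}^{2} = {x}^{3} + x \) over \( {\mathbb{F}}_{5} \) (a hypothetical choice of curve made only for this illustration); since \( p = 5 \) is odd, \( \left\lbrack 2\right\rbrack \) is separable of degree 4, and because \( {x}^{3} + x \) splits over \( {\mathbb{F}}_{5} \) all of \( \ker \left\lbrack 2\right\rbrack = E\left\lbrack 2\right\rbrack \) is already rational, namely \( {0}_{E} \) together with the points with \( y = 0 \) .

```python
# Check deg_sep(phi) = |ker(phi)| for the separable isogeny phi = [2]
# on the sample curve E: y^2 = x^3 + x over F_5.

p = 5

# affine points of E(F_5)
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x**3 + x)) % p == 0]

# an affine P satisfies 2P = O exactly when P = -P, i.e. when y = 0
two_torsion_affine = [(x, y) for (x, y) in points if y == 0]

kernel_size = len(two_torsion_affine) + 1   # + the point at infinity O
assert kernel_size == 4                      # matches deg_sep([2]) = 4
```

The count 4 agrees with \( {\deg }_{\text{sep }}\left( \left\lbrack 2\right\rbrack \right) = \deg \left( \left\lbrack 2\right\rbrack \right) = 4 \), since \( \left\lbrack 2\right\rbrack \) is separable away from characteristic 2.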
If \( E\left\lbrack p\right\rbrack = \left\{ {0}_{E}\right\} \) then \( \left\lbrack p\right\rbrack \) is purely inseparable of degree \( {p}^{2} \) and so \( {\widehat{\sigma }}_{p} = i \circ {\sigma }_{p} \) where \( i \) is an automorphism of \( E \), while if \( E\left\lbrack p\right\rbrack \cong \mathbb{Z}/p\mathbb{Z} \) then \( \left\lbrack p\right\rbrack \) has separable and inseparable degrees \( p \) and so \( {\widehat{\sigma }}_{p} \) is separable of degree \( p \) . This section ends by deriving commutativity properties of the induced forward and reverse maps of the Frobenius map. Let \( C \) be a projective curve over \( {\mathbb{F}}_{p}
## 113_Topological Groups: Definition 1.11
Definition 1.11. If \( a \) is any set and \( m \in \omega \), let \( {a}^{\left( m\right) } \) be the unique element of \( {}^{m}\{ a\} \) . Thus \( {a}^{\left( m\right) } \) is an \( m \) -termed sequence of \( a \) ’s, \( {a}^{\left( m\right) } = \langle a, a,\ldots, a\rangle \) ( \( m \) times). If \( x \) and \( y \) are finite sequences, say \( x = \left\langle {{x}_{0},\ldots ,{x}_{m - 1}}\right\rangle \) and \( y = \left\langle {{y}_{0},\ldots ,{y}_{n - 1}}\right\rangle \), we let \( {xy} = \left\langle {{x}_{0},\ldots ,{x}_{m - 1},{y}_{0},\ldots ,{y}_{n - 1}}\right\rangle \) . Frequently we write \( a \) for \( \langle a\rangle \) . Definition 1.12. \( {T}_{l\text{ seek }0} \) is the following machine: \[ \begin{array}{llll} 0 & 0 & 2 & 1 \end{array} \] \[ \begin{array}{llll} 0 & 1 & 2 & 1 \end{array} \] \[ \begin{array}{llll} 1 & 0 & 4 & 1 \end{array} \] \[ \begin{array}{llll} 1 & 1 & 1 & 0 \end{array} \] A computation with \( {T}_{l\text{ seek }0} \) can be indicated as follows, where we use an obvious notation: ![57474f65-18c7-4127-acaf-c92c2d62e43e_25_0.jpg](images/57474f65-18c7-4127-acaf-c92c2d62e43e_25_0.jpg) Thus \( {T}_{l\text{ seek }0} \) finds the first 0 to the left of the square it first looks at and stops at that 0 . In this and future cases we shall not formulate an exact theorem describing such a fact; we now feel the reader can in principle translate such informal statements as the above into a rigorous form. Definition 1.13. \( {T}_{r\text{ seek }0} \) is the following machine: \[ \begin{array}{llll} 0 & 0 & 3 & 1 \end{array} \] \[ \begin{array}{llll} 0 & 1 & 3 & 1 \end{array} \] \[ \begin{array}{llll} 1 & 0 & 4 & 1 \end{array} \] \[ \begin{array}{llll} 1 & 1 & 1 & 0 \end{array} \] \( {T}_{r\text{ seek }0} \) finds the first 0 to the right of the square it first looks at and stops at that 0 . Definition 1.14. 
\( {T}_{l\text{ seek }1} \) is the following machine: \[ \begin{array}{llll} 0 & 0 & 2 & 1 \end{array} \] \[ \begin{array}{llll} 0 & 1 & 2 & 1 \end{array} \] \[ \begin{array}{llll} 1 & 0 & 0 & 0 \end{array} \] \[ \begin{array}{llll} 1 & 1 & 4 & 1 \end{array} \] \( {T}_{l\text{ seek }1} \) finds the first 1 to the left of the square it first looks at and stops at that 1 . It may be that no such 1 exists; then the machine continues forever, and no computation exists. Definition 1.15. \( {T}_{r\text{ seek }1} \) is the following machine: \[ \begin{array}{llll} 0 & 0 & 3 & 1 \end{array} \] \[ \begin{array}{llll} 0 & 1 & 3 & 1 \end{array} \] \[ \begin{array}{llll} 1 & 0 & 0 & 0 \end{array} \] \[ \begin{array}{llll} 1 & 1 & 4 & 1 \end{array} \] \( {T}_{r\text{ seek }1} \) finds the first 1 to the right of the square it first looks at and stops at that 1. But again, it may be that no such 1 exists. Definition 1.16. Suppose \( M, N \), and \( P \) are Turing machines with pairwise disjoint sets of states. By \( M \rightarrow N \) we mean the machine obtained by writing down \( N \) after \( M \), after first replacing all rows of \( M \) of the forms \( \left( \begin{array}{llll} c & 0 & 4 & d \end{array}\right) \) or \( \left( \begin{array}{llll} {c}^{\prime } & 1 & 4 & {d}^{\prime } \end{array}\right) \) by the rows \( \left( \begin{array}{llll} c & 0 & 0 & e \end{array}\right) \) or \( \left( \begin{array}{llll} {c}^{\prime } & 1 & 1 & e \end{array}\right) \) respectively, where \( e \) is the initial state of \( N \) . 
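The machines above can be run mechanically. The sketch below assumes each row is read as (state, scanned symbol, action, next state), with actions 0 and 1 writing that symbol, 2 and 3 moving left and right, and 4 stopping; this convention is inferred from the described behavior of \( {T}_{r\text{ seek }0} \) and is not spelled out in this excerpt.

```python
# A small simulator for the row-table machines above.

def run(rows, tape, pos, state=0, max_steps=1000):
    table = {(s, sym): (act, nxt) for s, sym, act, nxt in rows}
    tape = dict(enumerate(tape))            # blank squares read as 0
    for _ in range(max_steps):
        act, nxt = table[(state, tape.get(pos, 0))]
        if act == 4:
            return pos                      # the machine stops here
        if act in (0, 1):
            tape[pos] = act                 # write the symbol
        elif act == 2:
            pos -= 1                        # move left
        else:                               # act == 3: move right
            pos += 1
        state = nxt
    raise RuntimeError("no computation (machine ran too long)")

T_r_seek_0 = [(0, 0, 3, 1), (0, 1, 3, 1), (1, 0, 4, 1), (1, 1, 1, 0)]

# starting on the leftmost 1 of 1 1 1 0 1, the machine should stop on
# the first 0 strictly to the right, i.e. at square 3
assert run(T_r_seek_0, [1, 1, 1, 0, 1], pos=0) == 3
```

Note that the machine moves before testing, so starting on a 0 it still finds the first 0 strictly to the right, as claimed in Definition 1.13.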
By ![57474f65-18c7-4127-acaf-c92c2d62e43e_26_0.jpg](images/57474f65-18c7-4127-acaf-c92c2d62e43e_26_0.jpg) we mean the machine obtained by writing down \( M \), then \( N \), then \( P \), after first replacing all rows of \( M \) of the forms \( \left( \begin{array}{llll} c & 0 & 4 & d \end{array}\right) \) or \( \left( \begin{array}{llll} {c}^{\prime } & 1 & 4 & {d}^{\prime } \end{array}\right) \) by the rows \( \left( \begin{array}{llll} c & 0 & 0 & e \end{array}\right) \) or \( \left( \begin{array}{llll} {c}^{\prime } & 1 & 1 & {e}^{\prime } \end{array}\right) \) respectively, where \( e \) is the initial state of \( N \) and \( {e}^{\prime } \) is the initial state of \( P \) . Obviously we can change the states of a Turing machine by a one-one mapping without affecting what it does to a tape description. Hence we can apply the notation just introduced to machines even if they do not have pairwise disjoint sets of states. Furthermore, the above notation can be combined into large "flow charts" in an obvious way. Definition 1.17. \( {T}_{\text{seek }1} \) is the following machine: (flow chart built from \( {T}_{\text{right }} \) and \( {T}_{r\text{ seek }1} \), each with an exit \( \overset{\text{ if }1}{ \rightarrow } \) Stop). (Here by \( {T}_{\text{right }}\overset{\text{ if }1}{ \rightarrow } \) Stop we mean that the row \( \left( \begin{array}{llll} 1 & 1 & 4 & 1 \end{array}\right) \) of \( {T}_{\text{right }} \) is not to be changed.) This machine just finds a 1 and stops there. It must look both left and right to find such a 1; 1’s are written (but later erased) to keep track of how far the search has gone, so that the final tape description is the same as the initial one. If the tape is blank initially the computation continues forever. Since this is a rather complicated procedure we again indicate in detail a computation using \( {T}_{\text{seek }1} \) . 
First we have two trivial cases: ![57474f65-18c7-4127-acaf-c92c2d62e43e_27_0.jpg](images/57474f65-18c7-4127-acaf-c92c2d62e43e_27_0.jpg) In the nontrivial case we start with \( - {0}^{\left( m\right) }\underset{\Lambda }{0}{0}^{\left( n\right) } - \) , \( m > 0 \) (with \( \Lambda \) marking the scanned square):
\[ {0}^{\left( m - 1\right) }\;\underset{\Lambda }{1}\;{0}^{\left( n + 1\right) } \]
\[ {0}^{\left( m - 1\right) }\;1\;\underset{\Lambda }{0}\;{0}^{\left( n\right) } \]
\[ \vdots \]
\[ {0}^{\left( m - i\right) }\;1\;{0}^{\left( 2i - 2\right) }\;\underset{\Lambda }{0}\;{0}^{\left( n - i + 1\right) } \]
\[ {0}^{\left( m - i\right) }\;1\;{0}^{\left( 2i - 2\right) }\;\underset{\Lambda }{1}\;{0}^{\left( n - i + 1\right) } \]
\[ {0}^{\left( m - i\right) }\;\underset{\Lambda }{0}\;{0}^{\left( 2i - 2\right) }\;1\;{0}^{\left( n - i + 1\right) } \]
\[ {0}^{\left( m - i - 1\right) }\;\underset{\Lambda }{0}\;{0}^{\left( 2i - 1\right) }\;1\;{0}^{\left( n - i + 1\right) } \]
\[ {0}^{\left( m - i - 1\right) }\;\underset{\Lambda }{1}\;{0}^{\left( 2i - 1\right) }\;1\;{0}^{\left( n - i + 1\right) } \]
\[ {0}^{\left( m - i - 1\right) }\;1\;{0}^{\left( 2i - 1\right) }\;\underset{\Lambda }{1}\;{0}^{\left( n - i + 1\right) } \]
\[ {0}^{\left( m - i - 1\right) }\;1\;{0}^{\left( 2i - 1\right) }\;\underset{\Lambda }{0}\;{0}^{\left( n - i + 1\right) } \]
Here \( i = 1 \) initially, and the portion beyond \( {0}^{\left( m - 1\right) }1{0}^{\left( 2i - 2\right) }0{0}^{\left( n - i + 1\right) } \) takes place only if \( i < m \) and \( i \leq n \) . 
Thus, if we start with \( - 1{0}^{\left( m\right) }{}_{\Lambda }{0}^{\left( n\right) } - \), and \( n + 1 \geq m \), we end as follows (setting \( i = m \) ):
\[ 1\;1\;{0}^{\left( 2m - 2\right) }\;\underset{\Lambda }{0}\;{0}^{\left( n - m + 1\right) } \]
\[ 1\;1\;{0}^{\left( 2m - 2\right) }\;\underset{\Lambda }{1}\;{0}^{\left( n - m + 1\right) } \]
\[ 1\;1\;{0}^{\left( 2m - 2\right) }\;1\;{0}^{\left( n - m + 1\right) } \]
\[ 1\;0\;{0}^{\left( 2m - 2\right) }\;1\;{0}^{\left( n - m + 1\right) } \]
\[ 1\;{0}^{\left( 2m - 1\right) }\;1\;{0}^{\left( n - m + 1\right) } \]
\[ 1\;{0}^{\left( 2m - 1\right) }\;\underset{\Lambda }{1}\;{0}^{\left( n - m + 1\right) } \]
On the other hand, if we start with \( - {0}^{\left( m\right) }\underset{\Lambda }{0}{0}^{\left( n\right) }1 - \), and \( n + 1 < m \), we end as follows (setting \( i = n + 1 \) ):
\[ {0}^{\left( m - n - 1\right) }\;1\;{0}^{\left( 2n\right) }\;\underset{\Lambda }{0}\;1 \]
\[ {0}^{\left( m - n - 1\right) }\;1\;{0}^{\left( 2n\right) }\;\underset{\Lambda }{1}\;1 \]
\[ {0}^{\left( m - n - 1\right) }\;1\;{0}^{\left( 2n\right) }\;1\;1 \]
\[ {0}^{\left( m - n - 1\right) }\;0\;{0}^{\left( 2n\right) }\;1\;1 \]
\[ {0}^{\left( m - n - 2\right) }\;0\;{0}^{\left( 2n + 1\right) }\;1\;1 \]
\[ {0}^{\left( m - n - 2\right) }\;1\;{0}^{\left( 2n + 1\right) }\;1\;1 \]
\[ {0}^{\left( m - n - 2\right) }\;1\;{0}^{\left( 2n + 1\right) }\;\underset{\Lambda }{1}\;1 \]
\[ {0}^{\left( m - n - 2\right) }\;1\;{0}^{\left( 2n + 1\right) }\;\underset{\Lambda }{0}\;1 \]
\[ {0}^{\left( m - n - 2\right) }\;1\;{0}^{\left( 2n + 2\right) }\;\underset{\Lambda }{1} \]
\[ {0}^{\left( m - n - 2\right) }\;\underset{\Lambda }{1}\;{0}^{\left( 2n + 2\right) }\;1 \]
\[ {0}^{\left( m - n - 2\right) }\;\underset{\Lambda }{0}\;{0}^{\left( 2n + 2\right) }\;1 \]
\[ {0}^{\left( m + n + 1\right) }\;\underset{\Lambda }{1} \]
Definition 1.18. \( {T}_{l\text{ end }} \) is the following machine:
\[ \text{ Start } \rightarrow {T}_{l\text{ seek }0} \rightarrow {T}_{\text{right }}\overset{\text{ if }0}{ \rightarrow }{T}_{\text{left }}, \]
where the \( \overset{\text{ if }1}{ \rightarrow } \) exit of \( {T}_{\text{right }} \) returns to \( {T}_{l\text{ seek }0} \) . \( {T}_{l\text{ end }} \) moves the tape to the right until finding 00, and stops on the rightmost of these two zeros. \( {T}_{l\text{ end }} \) does not start counting zeros until moving the tape. Definition 1.19. \( {T}_{r\text{ end }} \) is the following machine:
\[ \text{ Start } \rightarrow {T}_{r\text{ seek }0} \rightarrow {T}_{\text{left }}\overset{\text{ if }0}{ \rightarrow }{T}_{\text{right }}, \]
where the \( \overset{\text{ if }1}{ \rightarrow } \) exit of \( {T}_{\text{left }} \) returns to \( {T}_{r\text{ seek }0} \) . \( {T}_{r\text{ end }} \) moves the tape to the left until finding 00, and stops on the leftmost of these two zeros. \( {T}_{r\text{ end }} \) does not start counting zeros until moving the tape.
## 1077_(GTM235)Compact Lie Groups: Definition 1.36
Definition 1.36. (1) Let \( {\mathcal{C}}_{n}^{ + }\left( \mathbb{R}\right) \) be the subalgebra of \( {\mathcal{C}}_{n}\left( \mathbb{R}\right) \) spanned by all products of an even number of elements of \( {\mathbb{R}}^{n} \) . (2) Let \( {\mathcal{C}}_{n}^{ - }\left( \mathbb{R}\right) \) be the subspace of \( {\mathcal{C}}_{n}\left( \mathbb{R}\right) \) spanned by all products of an odd number of elements of \( {\mathbb{R}}^{n} \) so \( {\mathcal{C}}_{n}\left( \mathbb{R}\right) = {\mathcal{C}}_{n}^{ + }\left( \mathbb{R}\right) \oplus {\mathcal{C}}_{n}^{ - }\left( \mathbb{R}\right) \) as a vector space. (3) Let the automorphism \( \alpha \), called the main involution, of \( {\mathcal{C}}_{n}\left( \mathbb{R}\right) \) act as multiplication by \( \pm 1 \) on \( {\mathcal{C}}_{n}^{ \pm }\left( \mathbb{R}\right) \) . (4) Conjugation, an anti-involution on \( {\mathcal{C}}_{n}\left( \mathbb{R}\right) \), is defined by \[ {\left( {x}_{1}{x}_{2}\cdots {x}_{k}\right) }^{ * } = {\left( -1\right) }^{k}{x}_{k}\cdots {x}_{2}{x}_{1} \] for \( {x}_{i} \in {\mathbb{R}}^{n} \) . The next definition makes sense for \( n \geq 1 \) . However, because of Equation 1.25, we are really only interested in the case of \( n \geq 3 \) (see Exercise 1.34 for details when \( n = 1,2) \) . Definition 1.37. (1) Let \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) = \left\{ {g \in {\mathcal{C}}_{n}^{ + }\left( \mathbb{R}\right) \mid g{g}^{ * } = 1}\right. \) and \( {gx}{g}^{ * } \in {\mathbb{R}}^{n} \) for all \( \left. {x \in {\mathbb{R}}^{n}}\right\} \) . (2) Let \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) = \left\{ {g \in {\mathcal{C}}_{n}\left( \mathbb{R}\right) \mid g{g}^{ * } = 1}\right. \) and \( \left. {\alpha \left( g\right) x{g}^{ * } \in {\mathbb{R}}^{n}\text{for all}x \in {\mathbb{R}}^{n}}\right\} \) . Note \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \subseteq {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) . 
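Definition 1.37 can be probed numerically in a small Clifford algebra. The sketch below implements \( {\mathcal{C}}_{3}\left( \mathbb{R}\right) \) with the conventions \( {e}_{i}^{2} = - 1 \) and \( {e}_{i}{e}_{j} = - {e}_{j}{e}_{i} \) for \( i \neq j \), and checks that the generator \( x = {e}_{1} \) satisfies \( x{x}^{ * } = 1 \) and that \( \mathcal{A}x \) acts on \( {\mathbb{R}}^{3} \) as the reflection \( {r}_{{e}_{1}} \), the fact used in the proof of Lemma 1.38. The dictionary representation of elements is an implementation choice, not notation from the text.

```python
# C_3(R) with basis blades indexed by sorted tuples of {1, 2, 3}.

def blade_mul(a, b):
    # multiply basis blades e_a * e_b, returning (sign, blade)
    idx = list(a) + list(b)
    sign = 1
    # bubble sort, flipping the sign once per transposition (e_i e_j = -e_j e_i)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    # cancel adjacent repeated indices using e_i^2 = -1
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            sign = -sign
            k += 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

def mul(u, v):
    w = {}
    for a, ca in u.items():
        for b, cb in v.items():
            s, blade = blade_mul(a, b)
            w[blade] = w.get(blade, 0) + s * ca * cb
    return {k: c for k, c in w.items() if c != 0}

def conj(u):
    # (x_1...x_k)* = (-1)^k x_k...x_1, which on a k-blade is the sign (-1)^{k(k+1)/2}
    return {a: c * (-1) ** (len(a) * (len(a) + 1) // 2) for a, c in u.items()}

def alpha(u):
    # main involution: -1 on the odd-degree part
    return {a: c * (-1) ** len(a) for a, c in u.items()}

e1, e2 = {(1,): 1}, {(2,): 1}
assert mul(e1, conj(e1)) == {(): 1}                     # x x* = 1
assert mul(mul(alpha(e1), e1), conj(e1)) == {(1,): -1}  # A(x): e1 -> -e1
assert mul(mul(alpha(e1), e2), conj(e1)) == {(2,): 1}   # A(x): e2 -> e2
```

So \( \mathcal{A}{e}_{1} \) negates \( {e}_{1} \) and fixes the orthogonal directions, which is exactly the reflection \( {r}_{{e}_{1}} \).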
(3) For \( g \in {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) and \( x \in {\mathbb{R}}^{n} \), define the homomorphism \( \mathcal{A} : {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \rightarrow \) \( {GL}\left( {n,\mathbb{R}}\right) \) by \( \left( {\mathcal{A}g}\right) x = \alpha \left( g\right) x{g}^{ * } \) . Note \( \left( {\mathcal{A}g}\right) x = {gx}{g}^{ * } \) when \( g \in {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) . Viewing left multiplication by \( v \in {\mathcal{C}}_{n}\left( \mathbb{R}\right) \) as an element of \( \operatorname{End}\left( {{\mathcal{C}}_{n}\left( \mathbb{R}\right) }\right) \), use of the determinant shows that the set of invertible elements of \( {\mathcal{C}}_{n}\left( \mathbb{R}\right) \) is an open subgroup of \( {\mathcal{C}}_{n}\left( \mathbb{R}\right) \) . It follows fairly easily that the set of invertible elements is a Lie group. As both \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) and \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) are closed subgroups of this Lie group, Corollary 1.8 implies that \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) and \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) are Lie groups as well. Lemma 1.38. \( \mathcal{A} \) is a covering map of \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) onto \( O\left( n\right) \) with \( \ker \mathcal{A} = \{ \pm 1\} \), so there is an exact sequence \[ \{ 1\} \rightarrow \{ \pm 1\} \rightarrow {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \overset{\mathcal{A}}{ \rightarrow }O\left( n\right) \rightarrow \{ I\} . \] Proof. \( \mathcal{A} \) maps \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) into \( O\left( n\right) \) : Let \( g \in {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) and \( x \in {\mathbb{R}}^{n} \) . 
Using Equation 1.27 and the fact that conjugation on \( {\mathbb{R}}^{n} \) is multiplication by -1, we calculate \[ {\left| \left( \mathcal{A}g\right) x\right| }^{2} = - {\left( \alpha \left( g\right) x{g}^{ * }\right) }^{2} = - \left( {\alpha \left( g\right) x{g}^{ * }}\right) \left( {\alpha \left( g\right) x{g}^{ * }}\right) = \alpha \left( g\right) x{g}^{ * }{\left( \alpha \left( g\right) x{g}^{ * }\right) }^{ * } \] \[ = \alpha \left( g\right) x{g}^{ * }g{x}^{ * }\alpha {\left( g\right) }^{ * } = \alpha \left( g\right) x{x}^{ * }\alpha {\left( g\right) }^{ * } = - \alpha \left( g\right) {x}^{2}\alpha {\left( g\right) }^{ * } = {\left| x\right| }^{2}\alpha \left( g\right) \alpha {\left( g\right) }^{ * } \] \[ = {\left| x\right| }^{2}\alpha \left( {g{g}^{ * }}\right) = {\left| x\right| }^{2}. \] Thus \( \mathcal{A}g \in O\left( n\right) \) . \( \mathcal{A} \) maps \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) onto \( O\left( n\right) \) : It is well known (Exercise 1.32) that each orthogonal matrix is a product of reflections. Thus it suffices to show that each reflection lies in the image of \( \mathcal{A} \) . Let \( x \in {S}^{n - 1} \) be any unit vector in \( {\mathbb{R}}^{n} \) and write \( {r}_{x} \) for the reflection across the plane perpendicular to \( x \) . Observe \( x{x}^{ * } = - {x}^{2} = {\left| x\right| }^{2} = 1 \) . Thus \( \alpha \left( x\right) x{x}^{ * } = - {xx}{x}^{ * } = - x \) . If \( y \in {\mathbb{R}}^{n} \) and \( \left( {x, y}\right) = 0 \), then Equation 1.28 says \( {xy} = - {yx} \) so that \( \alpha \left( x\right) y{x}^{ * } = {xyx} = - {x}^{2}y = y \) . Hence \( x \in {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) and \( \mathcal{A}x = {r}_{x} \) . \( \ker \mathcal{A} = \{ \pm 1\} : \) Since \( \mathbb{R} \cap {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) = \{ \pm 1\} \) and both elements are clearly in \( \ker \mathcal{A} \), it suffices to show that \( \ker \mathcal{A} \subseteq \mathbb{R} \) . 
So suppose \( g \in {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) with \( \mathcal{A}g = I \) . As \( {g}^{ * } = {g}^{-1} \), \( \alpha \left( g\right) x = {xg} \) for all \( x \in {\mathbb{R}}^{n} \) . Expanding \( g \) with respect to the standard basis from Equation 1.29, we may uniquely write \( g = {e}_{1}a + b \), where \( a, b \) are linear combinations of 1 and monomials in \( {e}_{2},{e}_{3},\ldots ,{e}_{n} \) . Looking at the special case of \( x = {e}_{1} \), we have \( \alpha \left( {{e}_{1}a + b}\right) {e}_{1} = {e}_{1}\left( {{e}_{1}a + b}\right) \) so that \( - {e}_{1}\alpha \left( a\right) {e}_{1} + \alpha \left( b\right) {e}_{1} = - a + {e}_{1}b \) . Since \( a \) and \( b \) contain no \( {e}_{1} \) ’s, \( \alpha \left( a\right) {e}_{1} = {e}_{1}a \) and \( \alpha \left( b\right) {e}_{1} = {e}_{1}b \) . Thus \( a + {e}_{1}b = - a + {e}_{1}b \), which implies that \( a = 0 \), so that \( g \) contains no \( {e}_{1} \) . Induction similarly shows that \( g \) contains no \( {e}_{k} \), \( 1 \leq k \leq n \), and so \( g \in \mathbb{R} \) . \( \mathcal{A} \) is a covering map: Write \( \pi = \mathcal{A} \) . From Theorem 1.10, \( \pi \) has constant rank with \( N = \operatorname{rk}\pi = \dim {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) since \( \ker \pi = \{ \pm 1\} \) . For any \( g \in {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \), the Rank Theorem from differential geometry ([8]) says there exist cubical charts \( \left( {U,\varphi }\right) \) of \( g \) and \( \left( {V,\psi }\right) \) of \( \pi \left( g\right) \) so that \( \psi \circ \pi \circ {\varphi }^{-1}\left( {{x}_{1},\ldots ,{x}_{N}}\right) = \left( {{x}_{1},\ldots ,{x}_{N},0,\ldots ,0}\right) \) with \( \dim O\left( n\right) - N \) zeros. Using the second countability of \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) and the Baire category theorem, surjectivity of \( \pi \) implies \( \dim O\left( n\right) = N \) . 
In particular, \( \pi \) restricted to \( U \) is a diffeomorphism onto \( V \) . Since \( \ker \pi = \{ \pm 1\} \), \( \pi \) is also a diffeomorphism of \( - U \) onto \( V \) . Finally, injectivity of \( \pi \) on \( U \) implies that \( \left( {-U}\right) \cap U = \varnothing \), so that the connected components of \( {\pi }^{-1}\left( V\right) \) are \( U \) and \( - U \) . Lemma 1.39. \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) and \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) are compact Lie groups with \[ {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) = \left\{ {{x}_{1}\cdots {x}_{k} \mid {x}_{i} \in {S}^{n - 1}\text{ for }1 \leq k \leq {2n}}\right\} , \] \[ {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) = \left\{ {{x}_{1}{x}_{2}\cdots {x}_{2k} \mid {x}_{i} \in {S}^{n - 1}\text{ for }2 \leq {2k} \leq {2n}}\right\} , \] and \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) = {\mathcal{A}}^{-1}\left( {{SO}\left( n\right) }\right) \) . Proof. We know from the proof of Lemma 1.38 that \( \mathcal{A}x = {r}_{x} \) for each \( x \in {S}^{n - 1} \subseteq {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) . Since elements of \( O\left( n\right) \) are products of at most \( {2n} \) reflections and \( \mathcal{A} \) is surjective with kernel \( \{ \pm 1\} \), this implies that \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) = \left\{ {{x}_{1}\cdots {x}_{k} \mid {x}_{i} \in {S}^{n - 1}\text{ for }1 \leq k \leq {2n}}\right\} \) . The equality \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) = {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \cap {\mathcal{C}}_{n}^{ + }\left( \mathbb{R}\right) \) then implies \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) = \left\{ {{x}_{1}{x}_{2}\cdots {x}_{2k} \mid {x}_{i} \in {S}^{n - 1}\text{ for }2 \leq {2k} \leq {2n}}\right\} \) . In particular, \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) and \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) \) are compact. 
Moreover because \( \det {r}_{x} = - 1 \), the last equality is equivalent to the equality \( {\operatorname{Spin}}_{n}\left( \mathbb{R}\right) = {\mathcal{A}}^{-1}\left( {{SO}\left( n\right) }\right) \) . Theorem 1.40. (I) \( {\operatorname{Pin}}_{n}\left( \mathbb{R}\right) \) has two connected \( \left( {n \geq 2}\right) \) components with \( {\o
## 113_Topological Groups: Definition 10.6
Definition 10.6. We introduce some operations on expressions \( \varphi ,\psi \) : (i) \( \neg \varphi = \left\langle {L}_{0}\right\rangle \varphi \) , (ii) \( \varphi \vee \psi = \left\langle {L}_{1}\right\rangle {\varphi \psi } \) , (iii) \( \varphi \land \psi = \left\langle {L}_{2}\right\rangle {\varphi \psi } \) , (iv) \( \forall {\alpha \varphi } = \left\langle {L}_{3}\right\rangle \langle \alpha \rangle \varphi \), where \( \alpha \) is any individual variable, (v) \( \varphi = \psi = \left\langle {L}_{4}\right\rangle {\varphi \psi } \) , (vi) \( \varphi \rightarrow \psi = \neg \varphi \vee \psi \) , (vii) \( \varphi \leftrightarrow \psi = \left( {\varphi \rightarrow \psi }\right) \land \left( {\psi \rightarrow \varphi }\right) \) , (viii) \( \exists {\alpha \varphi } = \neg \forall \alpha \neg \varphi \), where \( \alpha \) is any individual variable. Now we introduce analogous operations on \( \omega \) . For the definition of Cat, see 3.30. For \( m, n \in \omega \), let

\( {\left( i\right) }^{\prime } \) \( {\neg }^{\prime }m = \operatorname{Cat}\left( {{2}^{g{L}_{0} + 1}, m}\right) \) ,

\( {\left( ii\right) }^{\prime } \) \( m{ \vee }^{\prime }n = \operatorname{Cat}\left( {\operatorname{Cat}\left( {{2}^{g{L}_{1} + 1}, m}\right), n}\right) \) ,

\( {\left( iii\right) }^{\prime } \) \( m{ \land }^{\prime }n = \operatorname{Cat}\left( {\operatorname{Cat}\left( {{2}^{g{L}_{2} + 1}, m}\right), n}\right) \) ,

\( {\left( iv\right) }^{\prime } \) \( {\forall }^{\prime }{mn} = \operatorname{Cat}\left( {\operatorname{Cat}\left( {{2}^{g{L}_{3} + 1}, m}\right), n}\right) \) ,

\( {\left( v\right) }^{\prime } \) \( m{ = }^{\prime }n = \operatorname{Cat}\left( {\operatorname{Cat}\left( {{2}^{g{L}_{4} + 1}, m}\right), n}\right) \) ,

\( {\left( vi\right) }^{\prime } \) \( m{ \rightarrow }^{\prime }n = {\neg }^{\prime }m{ \vee }^{\prime }n \) ,

\( {\left( vii\right) }^{\prime } \) \( m{ \leftrightarrow }^{\prime }n = \left( {m{ \rightarrow }^{\prime }n}\right) { \land }^{\prime }\left( {n{ \rightarrow }^{\prime }m}\right) \) ,

\( {\left( viii\right) }^{\prime } \) \( {\exists }^{\prime }{mn} = {\neg }^{\prime }{\forall }^{\prime }m{\neg }^{\prime }n \) .

In addition we use \( \mathop{\bigvee }\limits_{{i < n}}{\varphi }_{i} \) and \( \mathop{\bigwedge }\limits_{{i < n}}{\varphi }_{i} \) for the finite iterations of \( \vee \) and \( \land \) : \[ \mathop{\bigvee }\limits_{{i < 1}}{\varphi }_{i} = {\varphi }_{0},\;\mathop{\bigvee }\limits_{{i < m + 1}}{\varphi }_{i} = \mathop{\bigvee }\limits_{{i < m}}{\varphi }_{i} \vee {\varphi }_{m}\text{ for }m > 0; \] \[ \mathop{\bigwedge }\limits_{{i < 1}}{\varphi }_{i} = {\varphi }_{0},\;\mathop{\bigwedge }\limits_{{i < m + 1}}{\varphi }_{i} = \mathop{\bigwedge }\limits_{{i < m}}{\varphi }_{i} \land {\varphi }_{m}\text{ for }m > 0. \] Strictly speaking, all of the operations in 10.6 are relative to \( \mathcal{L} \) . We might indicate this sometimes with a subscript, e.g., \( {\neg }_{\mathcal{L}}\varphi \) , \( { \rightarrow }_{\mathcal{L}}^{\prime } \) . Note that, as for sentential languages, our languages do not have parentheses but yet we can use ordinary notation as in 10.6. The actual expressions of a language will rarely be written. That is, we will usually prefer to write an expression in the form \( \forall {v}_{0}\left( {{v}_{0} = {v}_{1} \rightarrow {v}_{1} = {v}_{0}}\right) \), for example, rather than in the equal form \[ \left\langle {{L}_{3},{v}_{0},{L}_{1},{L}_{0},{L}_{4},{v}_{0},{v}_{1},{L}_{4},{v}_{1},{v}_{0}}\right\rangle . \] We use the boldface symbols for operations \( \rightarrow \) , \( = \), etc. on expressions to distinguish them from the intuitive symbols. Note that the operations on \( \omega \) given in 10.6 act on Gödel numbers of expressions just like the corresponding operations act on expressions. Thus, for example, \( {g}^{ + }\neg \varphi = {\neg }^{\prime }{g}^{ + }\varphi \) , \( {g}^{ + }\forall {\alpha \varphi } = {\forall }^{\prime }{2}^{{g\alpha } + 1}{g}^{ + }\varphi \), etc. The following proposition is obvious. 
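The parallel between operations on expressions and the primed operations on Gödel numbers can be illustrated with a toy implementation. The sketch below assumes the coding of 3.30 (not reproduced in this excerpt): a finite sequence \( \left\langle {{a}_{0},\ldots ,{a}_{n - 1}}\right\rangle \) is coded as \( \mathop{\prod }\limits_{i}{\mathrm{p}}_{i}^{{a}_{i} + 1} \) and Cat concatenates two such codes; the symbol code \( g{L}_{0} = 0 \) is a hypothetical choice made only for this illustration.

```python
# A toy model of Cat and the operation not'(m) = Cat(2^(gL0+1), m).

def primes(k):
    # first k primes, by trial division
    ps = []
    n = 2
    while len(ps) < k:
        if all(n % q for q in ps):
            ps.append(n)
        n += 1
    return ps

def code(seq):
    n = 1
    for p, a in zip(primes(len(seq)), seq):
        n *= p ** (a + 1)
    return n

def decode(n):
    seq = []
    for p in primes(20):          # enough primes for short demo sequences
        if n % p:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        seq.append(e - 1)
    return seq

def cat(m, n):                    # Cat: concatenation of coded sequences
    return code(decode(m) + decode(n))

gL0 = 0                           # assumed code of the symbol L0

def neg(m):                       # not'(m) = Cat(2^(gL0 + 1), m)
    return cat(2 ** (gL0 + 1), m)

phi = [4, 7, 7]                   # stand-in symbol sequence for an expression
assert decode(neg(code(phi))) == [gL0] + phi
```

So \( {\neg }^{\prime } \) acts on codes exactly as prefixing \( \left\langle {L}_{0}\right\rangle \) acts on expressions, which is the content of \( {g}^{ + }\neg \varphi = {\neg }^{\prime }{g}^{ + }\varphi \) .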
Proposition 10.7. The operations \( {\neg }^{\prime },{ \vee }^{\prime },{ \land }^{\prime },{\forall }^{\prime },{ = }^{\prime },{ \rightarrow }^{\prime },{ \leftrightarrow }^{\prime },{\exists }^{\prime } \) are recursive. ## Definition 10.8 (i) For \( m \in \omega \sim 1 \) we define an \( m \) -ary operation \( {\operatorname{Con}}_{m} \) on \( \omega \) by induction on \( m \) . For any \( x,{y}_{0},\ldots ,{y}_{m} \in \omega \) , \[ {\operatorname{Con}}_{1}x = x, \] \[ {\operatorname{Con}}_{m + 1}\left( {{y}_{0},\ldots ,{y}_{m}}\right) = \operatorname{Cat}\left( {{\operatorname{Con}}_{m}\left( {{y}_{0},\ldots ,{y}_{m - 1}}\right) ,{y}_{m}}\right) . \] Also, we define \( {\operatorname{Con}}^{\prime } \) by primitive recursion. For any \( x, y \in \omega \) , \[ {\operatorname{Con}}^{\prime }\left( {x,0}\right) = {\left( x\right) }_{0}, \] \[ {\operatorname{Con}}^{\prime }\left( {x, y + 1}\right) = \operatorname{Cat}\left( {{\operatorname{Con}}^{\prime }\left( {x, y}\right) ,{\left( x\right) }_{y + 1}}\right) . \] Finally, we set \( {\operatorname{Con}}^{\prime \prime }x = {\operatorname{Con}}^{\prime }\left( {x,\mathrm{l}x}\right) \) for any \( x \in \omega \) . (ii) \( {\operatorname{Trm}}_{\mathcal{L}} \), the collection of terms of \( \mathcal{L} \), is the intersection of all classes \( \Gamma \) of expressions such that the following conditions hold: (1) \( \left\langle {v}_{m}\right\rangle \in \Gamma \) for each \( m \in \omega \) ; (2) if \( \mathbf{O} \in \operatorname{Dmn}\mathcal{O} \), say with \( \mathcal{O}\mathbf{O} = m \), and if \( {\psi }_{0},\ldots ,{\psi }_{m - 1} \in \Gamma \), then \( \langle \mathbf{O}\rangle {\psi }_{0}\cdots {\psi }_{m - 1} \in \Gamma \) . Note, with regard to \( {10.8}\left( {ii}\right) \), that if \( \mathcal{O}\mathbf{O} = 0 \), then the condition merely says that \( \langle \mathbf{O}\rangle \in \Gamma \) . 
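The closure clauses in 10.8(ii) amount to Polish (prefix) notation, so termhood can be decided by a simple recursive recognizer, in the spirit of the effectiveness claim of Proposition 10.10 below. The sketch uses a made-up arity function with one binary and one nullary operation symbol; the strings 'v0', 'v1', ... stand for the variables \( \left\langle {v}_{m}\right\rangle \) .

```python
# A recognizer for terms, read as prefix sequences of symbols.

arity = {'+': 2, '0': 0}          # hypothetical operation symbols

def parse_term(expr, i=0):
    # return the index just past one term starting at position i, or None
    if i >= len(expr):
        return None
    s = expr[i]
    if s.startswith('v'):         # a variable <v_m> is a term
        return i + 1
    if s in arity:                # <O> t_0 ... t_{m-1} is a term
        j = i + 1
        for _ in range(arity[s]):
            j = parse_term(expr, j)
            if j is None:
                return None
        return j
    return None

def is_term(expr):
    return parse_term(expr) == len(expr)

assert is_term(['v0'])
assert is_term(['+', 'v0', '+', '0', 'v1'])   # + v0 (+ 0 v1)
assert not is_term(['+', 'v0'])               # too few arguments
assert not is_term(['v0', 'v1'])              # trailing symbols
```

The fact that this left-to-right parse never backtracks is the computational face of unique readability (Proposition 10.12).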
The number-theoretic functions in (i) are introduced so that the following properties of Gödel numbers will hold. If \( {\varphi }_{0},\ldots ,{\varphi }_{m - 1} \) are expressions, \( m > 0 \), then \[ {\operatorname{Con}}_{m}\left( {{g}^{ + }{\varphi }_{0},\ldots ,{g}^{ + }{\varphi }_{m - 1}}\right) = {g}^{ + }\left( {{\varphi }_{0}\cdots {\varphi }_{m - 1}}\right) ; \] and if \( x = \mathop{\prod }\limits_{{i < m}}{\mathrm{p}}_{i}^{{g}^{ + }{\varphi }_{i}} \), then \[ {\operatorname{Con}}^{\prime \prime }x = {g}^{ + }\left( {{\varphi }_{0}\cdots {\varphi }_{m - 1}}\right) . \] The notion of term in 10.8 is the generalization to arbitrary first-order languages of the common mathematical notion of a polynomial. In case the first-order language is a language for rings (see above after 10.1), then we obtain exactly the ordinary notion of a polynomial with integer coefficients. The following construction property of terms is easily established. Proposition 10.9. An expression \( \sigma \) is a term iff there is a finite sequence \( \left\langle {{\tau }_{0},\ldots ,{\tau }_{n - 1}}\right\rangle \) of expressions with \( {\tau }_{n - 1} = \sigma \) such that for each \( i < n \) one of the following conditions holds: (i) \( {\tau }_{i} = \left\langle {v}_{m}\right\rangle \) for some \( m \in \omega \) , (ii) there is an \( \mathbf{O} \in \operatorname{Dmn}\mathcal{O} \), say with \( \mathcal{O}\mathbf{O} = m \), and there are \( {j}_{0},\ldots ,{j}_{m - 1} < i \) such that \( {\tau }_{i} = \langle \mathbf{O}\rangle {\tau }_{{j}_{0}}\cdots {\tau }_{{j}_{m - 1}} \) . There is an effective procedure for recognizing when an expression is a term: Proposition 10.10. The functions \( {\operatorname{Con}}_{m},{\operatorname{Con}}^{\prime } \), and \( {\operatorname{Con}}^{\prime \prime } \) are recursive. The set \( {g}^{ + * }\operatorname{Trm} \) is recursive. Proof. The first statement is obvious. 
To prove the second, the reader should check the following statement, using 10.9. For any \( x \in \omega \), \( x \in {g}^{ + * }\operatorname{Trm} \) iff \( x > 1 \) and there is a \( y \leq {\mathrm{p}}_{\mathrm{l}x}^{x \cdot \mathrm{l}x} \) such that \( {\left( y\right) }_{\mathrm{l}y} = x \) and for each \( i \leq \mathrm{l}y \) one of the following conditions holds: (1) there is an \( m < {\left( y\right) }_{i} \) such that \( {\left( y\right) }_{i} = {2}^{g{v}_{m} + 1} \) ; (2) there is a \( k \leq x \) such that \( k \in {g}^{ * }\operatorname{Dmn}\mathcal{O} \), \( \mathcal{O}{g}^{-1}k = 0 \), and \( {\left( y\right) }_{i} = {2}^{k + 1} \) ; (3) there exist \( k \leq x \) and \( z \leq {\mathrm{p}}_{\mathrm{l}x}^{x \cdot \mathrm{l}x} \) such that \( k \in {g}^{ * }\operatorname{Dmn}\mathcal{O} \), \( \mathcal{O}{g}^{-1}k > 0 \), \( \mathrm{l}z = \mathcal{O}{g}^{-1}k - 1 \), for each \( j \leq \mathrm{l}z \) there is an \( s < i \) such that \( {\left( z\right) }_{j} = {\left( y\right) }_{s} \), and \( {\left( y\right) }_{i} = \operatorname{Cat}\left( {{2}^{k + 1},{\operatorname{Con}}^{\prime \prime }z}\right) \) . The following proposition follows directly from the definition of terms and is frequently of use. Proposition 10.11 (Induction on terms). If \( \left\langle {v}_{m}\right\rangle \in \Gamma \) for all \( m \in \omega \), and \( \langle \mathbf{O}\rangle {\sigma }_{0}\cdots {\sigma }_{m - 1} \in \Gamma \) whenever \( \mathbf{O} \in \operatorname{Dmn}\mathcal{O} \), \( \mathcal{O}\mathbf{O} = m \), and \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \in \Gamma \), then \( \operatorname{Trm} \subseteq \Gamma \) . Proposition 10.12 (Unique readability) (i) Every term is nonempty. 
(ii) If \( \sigma \) is a term, then either \( \sigma = \left\langle {v}_{m}\right\rangle \) for some \( m \in \omega \), or else there exist \( \mathbf{O} \in \operatorname{Dmn}\mathcal{O} \), say with \( \mathcal{O}\mathbf{O} = m \), and \( {\tau }_{0},\ldots ,{\tau }_{m - 1} \in \operatorname{Trm} \) such that \( \sigma = \) \( \langle \mathbf{O}\rangle {\tau }_{0}\cdots {\tau }_{m - 1}. \) (iii) If \( \sigma \) is a term and \( i < \operatorname{Dmn}\sigma \), then \( \left\langle {{\sigma }_{0},\ldots ,{\sigma
109_The rising sea Foundations of Algebraic Geometry
Definition 11.3
Definition 11.3. A metric space \( X \) is called a \( \operatorname{CAT}\left( 0\right) \) space if for any \( x, y \) in \( X \) there is a geodesic \( \left\lbrack {x, y}\right\rbrack \) with the following property: For all \( p \in \left\lbrack {x, y}\right\rbrack \) and all \( z \in X \), we have \[ {d}_{X}\left( {z, p}\right) \leq {d}_{{\mathbb{R}}^{2}}\left( {\bar{z},\bar{p}}\right) \] (11.1) with \( \bar{z} \) and \( \bar{p} \) as above. This should be thought of intuitively as being related to nonpositive curvature, as we have tried to suggest in Figure 11.1. The intuition is justified by a theorem in differential geometry that says that a Riemannian manifold has sectional curvature \( \leq 0 \) if and only if it is locally a \( \operatorname{CAT}\left( 0\right) \) space; a proof can be found in Bridson-Haefliger [48, Appendix to Chapter II.1]. See also Exercise 11.22 below. The "CAT" terminology is explained in [48, p. 159; 119, Section 2.4]; we will explain the " 0 " below. We have used one of several possible equivalent definitions of \( \operatorname{CAT}\left( 0\right) \) . For example, there is a variant of the definition that does not explicitly mention the comparison triangle. Such a definition exists because there is a formula for the distance from a vertex of a Euclidean triangle to any point on the opposite side: Given \( x, y, z \in {\mathbb{R}}^{2} \) and \( t \in \left\lbrack {0,1}\right\rbrack \), let \( {p}_{t} = \left( {1 - t}\right) x + {ty} \) ; then the formula is \[ {d}^{2}\left( {z,{p}_{t}}\right) = \left( {1 - t}\right) {d}^{2}\left( {z, x}\right) + t{d}^{2}\left( {z, y}\right) - t\left( {1 - t}\right) {d}^{2}\left( {x, y}\right) . \] (11.2) This is best known when \( t = 1/2 \), in which case it can be found in Pappus [185, Book VII, Proposition 122].* It is essentially the parallelogram law in that case, so we will call (11.2) the generalized parallelogram law. To prove it, we may assume that \( z = 0 \) . 
Then \[ {d}^{2}\left( {z,{p}_{t}}\right) = {\begin{Vmatrix}{p}_{t}\end{Vmatrix}}^{2} = {\left( 1 - t\right) }^{2}\parallel x{\parallel }^{2} + {t}^{2}\parallel y{\parallel }^{2} + {2t}\left( {1 - t}\right) \langle x, y\rangle ; \] one obtains (11.2) from this by using the formula \[ {d}^{2}\left( {x, y}\right) = \parallel x{\parallel }^{2} + \parallel y{\parallel }^{2} - 2\langle x, y\rangle \] to eliminate \( \langle x, y\rangle \) . The following reformulation of the \( \operatorname{CAT}\left( 0\right) \) property is now immediate: Proposition 11.4. A metric space \( X \) is a \( \operatorname{CAT}\left( 0\right) \) space if and only if for any \( x, y \in X \) there is a geodesic \( \left\lbrack {x, y}\right\rbrack \) with the following property: For any point \( p = {p}_{t} \in \left\lbrack {x, y}\right\rbrack \) and any \( z \in X \) , \[ {d}^{2}\left( {z, p}\right) \leq \left( {1 - t}\right) {d}^{2}\left( {z, x}\right) + t{d}^{2}\left( {z, y}\right) - t\left( {1 - t}\right) {d}^{2}\left( {x, y}\right) . \] (11.3) Note that both versions of the definition allow for the possibility that there is more than one geodesic joining two given points. But we can quickly deduce that in fact this does not happen: Proposition 11.5. For any two points \( x, y \) in a \( \operatorname{CAT}\left( 0\right) \) space \( X \), there is a unique geodesic \( \left\lbrack {x, y}\right\rbrack \) joining them. It is characterized by \[ \left\lbrack {x, y}\right\rbrack = \{ z \in X \mid d\left( {x, y}\right) = d\left( {x, z}\right) + d\left( {z, y}\right) \} . \] (11.4) Proof. Any geodesic joining \( x \) and \( y \) is contained in the set on the right side of (11.4). It therefore suffices to show that the right side is contained in the left, with \( \left\lbrack {x, y}\right\rbrack \) as in the definition of "CAT(0) space." 
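The generalized parallelogram law (11.2) is easy to sanity-check numerically in the Euclidean plane. The sketch below is our own (the function names, point ranges, and trial count are arbitrary choices, not from the text); it samples random triangles and points \( {p}_{t} \) on \( \left\lbrack {x, y}\right\rbrack \) and measures the worst discrepancy between the two sides of (11.2).

```python
import math
import random

def d(a, b):
    # Euclidean distance in R^2
    return math.hypot(a[0] - b[0], a[1] - b[1])

def max_deviation(trials=200, seed=0):
    # largest |LHS - RHS| of (11.2) over random x, y, z in R^2 and t in [0, 1]
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        x, y, z = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(3)]
        t = rng.random()
        p = ((1 - t) * x[0] + t * y[0], (1 - t) * x[1] + t * y[1])
        lhs = d(z, p) ** 2
        rhs = (1 - t) * d(z, x) ** 2 + t * d(z, y) ** 2 - t * (1 - t) * d(x, y) ** 2
        worst = max(worst, abs(lhs - rhs))
    return worst
```

Up to floating-point rounding, the deviation is zero, as the algebraic proof above guarantees.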
Suppose \( z \) satisfies \( d\left( {x, y}\right) = d\left( {x, z}\right) + d\left( {z, y}\right) \), and let \( p = {p}_{t} \in \left\lbrack {x, y}\right\rbrack \), where \( t \mathrel{\text{:=}} d\left( {x, z}\right) /d\left( {x, y}\right) \) , so that \( d\left( {x, p}\right) = d\left( {x, z}\right) \) and \( d\left( {p, y}\right) = d\left( {z, y}\right) \) . Then the comparison triangle in \( {\mathbb{R}}^{2} \) with vertices \( \bar{x},\bar{y},\bar{z} \) degenerates to the line segment \( \left\lbrack {\bar{x},\bar{y}}\right\rbrack \), and \( \bar{z} = \bar{p} \) . Hence \( d\left( {z, p}\right) = 0 \) by (11.1), so \( z = p \in \left\lbrack {x, y}\right\rbrack \) . (Alternatively, check that the right side of (11.3) is equal to 0 under our assumptions.) Remarks 11.6. (a) Our definition of "CAT(0)" is similar to one of several equivalent conditions given in Bridson-Haefliger [48, I.1.7(2)], but it is superficially different from the latter. Where we stated "there is a geodesic \( \left\lbrack {x, y}\right\rbrack \) " with a certain property, they require the property to hold for all geodesics. But the two definitions are in fact equivalent, since our definition implies uniqueness of geodesics. * Pappus’s version is \( {d}^{2}\left( {z, x}\right) + {d}^{2}\left( {z, y}\right) = 2\left( {{d}^{2}\left( {z, m}\right) + {d}^{2}\left( {m, y}\right) }\right) \), where \( m \mathrel{\text{:=}} {p}_{1/2} \) is the midpoint of \( \left\lbrack {x, y}\right\rbrack \) . (b) The " 0 " in "CAT(0)" refers to the fact that the comparison space \( {\mathbb{R}}^{2} \) has curvature 0 . More generally, there is a notion of \( \operatorname{CAT}\left( \kappa \right) \) space for any real number \( \kappa \), where \( {\mathbb{R}}^{2} \) is replaced by the complete simply connected 2-manifold of constant curvature \( \kappa \) . 
This is the sphere of radius \( 1/\sqrt{\kappa } \) if \( \kappa > 0 \), and it is the hyperbolic plane with metric scaled by \( 1/\sqrt{-\kappa } \) if \( \kappa < 0 \) . In case \( \kappa > 0 \) , some care is needed in formulating the definition because comparison triangles in the sphere will exist only if the points \( x, y, z \) are sufficiently close together. When \( \kappa = 1 \), for example, one assumes in the definition that \( d\left( {x, y}\right) < \pi \) , and one considers only points \( z \) such that \( d\left( {x, y}\right) + d\left( {y, z}\right) + d\left( {z, x}\right) < {2\pi } \) . In particular, \( X \) is not required to be a geodesic metric space; we require only that geodesics exist between points at distance \( < \pi \) from one another. [And these geodesics then turn out to be unique.] We will use Proposition 11.4 to prove that geodesic segments in a \( \operatorname{CAT}\left( 0\right) \) space vary continuously with their endpoints; see [48, Proposition II.1.4] for a different proof. In the precise statement of the result, we denote by \( {p}_{t}\left( {x, y}\right) \) the point \( {p}_{t} \) as above on the unique geodesic \( \left\lbrack {x, y}\right\rbrack \) . Proposition 11.7. Let \( X \) be a \( \operatorname{CAT}\left( 0\right) \) space. Then the map \( \left( {x, y, t}\right) \mapsto \) \( {p}_{t}\left( {x, y}\right) \) is continuous as a function of \( x, y, t \) . In particular, \( X \) is contractible. Proof. Fix \( x, y, t \) and apply the inequality (11.3) to \( z = {p}_{{t}^{\prime }}\left( {{x}^{\prime },{y}^{\prime }}\right) \) for \( \left( {{x}^{\prime },{y}^{\prime },{t}^{\prime }}\right) \) close to \( \left( {x, y, t}\right) \) . 
Since \( d\left( {z, x}\right) \) is close to \( d\left( {z,{x}^{\prime }}\right) = {t}^{\prime }d\left( {{x}^{\prime },{y}^{\prime }}\right) \), it is clear that \( d\left( {z, x}\right) \rightarrow {td}\left( {x, y}\right) \) as \( \left( {{x}^{\prime },{y}^{\prime },{t}^{\prime }}\right) \rightarrow \left( {x, y, t}\right) \) ; hence the first term on the right side of (11.3) approaches \( \left( {1 - t}\right) {t}^{2}{d}^{2}\left( {x, y}\right) \) . Similarly, the second term approaches \( t{\left( 1 - t\right) }^{2}{d}^{2}\left( {x, y}\right) \) . The right side of (11.3) therefore approaches \[ \left( {1 - t}\right) {t}^{2}{d}^{2}\left( {x, y}\right) + t{\left( 1 - t\right) }^{2}{d}^{2}\left( {x, y}\right) - t\left( {1 - t}\right) {d}^{2}\left( {x, y}\right) = 0, \] whence \( d\left( {z,{p}_{t}\left( {x, y}\right) }\right) \rightarrow 0 \) . Next, we return to Euclidean geometry and note a simple qualitative consequence of the generalized parallelogram law (11.2): If we move \( z \) so that it gets closer to \( x \) and \( y \), then it also gets closer to \( p \) . More precisely: Proposition 11.8. Consider two triangles in the Euclidean plane, with vertices \( x, y, z \) and \( x, y,{z}^{\prime } \) . Let \( p \) be an arbitrary point on the common side \( \left\lbrack {x, y}\right\rbrack \) . If \( d\left( {{z}^{\prime }, x}\right) \leq d\left( {z, x}\right) \) and \( d\left( {{z}^{\prime }, y}\right) \leq d\left( {z, y}\right) \), then \( d\left( {{z}^{\prime }, p}\right) \leq d\left( {z, p}\right) \) . See Figure 11.2 for an illustration, and see Exercise 11.11 for an alternative proof that is slightly longer but provides better intuition as to why the result is true. Finally, we record the special case \( t = 1/2 \) of the inequality (11.3), which suffices for many purposes. 
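Proposition 11.8 can likewise be probed by random sampling. This is a sketch under our own conventions (names and parameters are ours): it draws quadruples \( x, y, z,{z}^{\prime } \) satisfying the hypotheses \( d\left( {{z}^{\prime }, x}\right) \leq d\left( {z, x}\right) \) and \( d\left( {{z}^{\prime }, y}\right) \leq d\left( {z, y}\right) \), and checks the conclusion at a random point \( p \) of \( \left\lbrack {x, y}\right\rbrack \).

```python
import random

def d2(a, b):
    # squared Euclidean distance in R^2
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def check_closer_point(trials=200, seed=1):
    # sample (x, y, z, z') satisfying the hypotheses of Proposition 11.8,
    # then verify d(z', p) <= d(z, p) for a random p on [x, y]
    rng = random.Random(seed)
    done = 0
    while done < trials:
        x, y, z, zp = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(4)]
        if d2(zp, x) <= d2(z, x) and d2(zp, y) <= d2(z, y):
            t = rng.random()
            p = ((1 - t) * x[0] + t * y[0], (1 - t) * x[1] + t * y[1])
            if d2(zp, p) > d2(z, p) + 1e-9:
                return False
            done += 1
    return True
```

Each check is an instance of comparing the two applications of the generalized parallelogram law, which is exactly how the proposition follows from (11.2).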
Letting \( m \) be the midpoint \( {p}_{1/2} \) of \( \left\lbrack {x, y}\right\rbrack \), we can write this special case as \[ {d}^{2}\left( {z, m}\right) \leq \frac{1}{2}\left( {{d}^{2}\left( {z, x}\right) + {d}^{2}\left( {z, y}\right) }\right) - \frac{1}{4}{d}^{2}\left( {x, y}\right) . \] (NC) ["(NC)" is intended to suggest nonpositive curvature.] Thus every \( \operatorname{CAT}\left( 0\right) \) space has the following property, first introduced by Bruhat-Tits [59]: [Fig. 11.2. Two triangles in the Euclidean plane.] (NC) For any two points \( x, y \in X \) there is a point \( m \in X \) such that inequality (NC) holds for all \( z \in X \) . Conversely, p
1167_(GTM73)Algebra
Definition 3.1
Definition 3.1. A nonzero element a of a commutative ring \( \mathrm{R} \) is said to divide an element \( \mathrm{b}\varepsilon \mathrm{R} \) (notation: \( \mathrm{a} \mid \mathrm{b} \) ) if there exists \( \mathrm{x}\varepsilon \mathrm{R} \) such that \( \mathrm{{ax}} = \mathrm{b} \) . Elements \( \mathrm{a},\mathrm{b} \) of \( \mathrm{R} \) are said to be associates if \( \mathrm{a} \mid \mathrm{b} \) and \( \mathrm{b} \mid \mathrm{a} \) . Virtually all statements about divisibility may be phrased in terms of principal ideals as we now see. Theorem 3.2. Let \( \mathrm{a},\mathrm{b} \) and \( \mathrm{u} \) be elements of a commutative ring \( \mathrm{R} \) with identity. (i) \( \mathrm{a} \mid \mathrm{b} \) if and only if \( \left( \mathrm{b}\right) \subset \left( \mathrm{a}\right) \) . (ii) \( \mathrm{a} \) and \( \mathrm{b} \) are associates if and only if \( \left( \mathrm{a}\right) = \left( \mathrm{b}\right) \) . (iii) \( \mathrm{u} \) is a unit if and only if \( \mathrm{u} \mid \mathrm{r} \) for all \( \mathrm{r}\varepsilon \mathrm{R} \) . (iv) \( \mathrm{u} \) is a unit if and only if \( \left( \mathrm{u}\right) = \mathrm{R} \) . (v) The relation "a is an associate of b" is an equivalence relation on \( \mathbf{R} \) . (vi) If \( \mathrm{a} = \mathrm{{br}} \) with \( \mathrm{r}\varepsilon \mathrm{R} \) a unit, then \( \mathrm{a} \) and \( \mathrm{b} \) are associates. If \( \mathrm{R} \) is an integral domain, the converse is true. PROOF. Exercise; Theorem 2.5(v) may be helpful for (i) and (ii). Definition 3.3. Let \( \mathrm{R} \) be a commutative ring with identity. An element \( \mathrm{c} \) of \( \mathrm{R} \) is irreducible provided that: (i) \( \mathrm{c} \) is a nonzero nonunit; (ii) \( \mathrm{c} = \mathrm{{ab}} \Rightarrow \) a or \( \mathrm{b} \) is a unit. 
An element \( \mathrm{p} \) of \( \mathrm{R} \) is prime provided that: (i) \( \mathrm{p} \) is a nonzero nonunit; (ii) \( \mathrm{p} \mid \mathrm{{ab}} \Rightarrow \mathrm{p} \mid \mathrm{a} \) or \( \mathrm{p} \mid \mathrm{b} \) . EXAMPLES. If \( p \) is an ordinary prime integer, then both \( p \) and \( - p \) are irreducible and prime in \( \mathbf{Z} \) in the sense of Definition 3.3. In the ring \( {Z}_{6},2 \) is easily seen to be a prime. However \( {2\varepsilon }{Z}_{6} \) is not irreducible since \( 2 = 2 \cdot 4 \) and neither 2 nor 4 are units in \( {Z}_{6} \) (indeed they are zero divisors). For an example of an irreducible element which is not prime, see Exercise 3. There is a close connection between prime [resp. irreducible] elements in a ring \( R \) and prime [resp. maximal] principal ideals in \( R \) . ## Theorem 3.4. Let \( \mathrm{p} \) and \( \mathrm{c} \) be nonzero elements in an integral domain \( \mathrm{R} \) . (i) \( \mathrm{p} \) is prime if and only if \( \left( \mathrm{p}\right) \) is nonzero prime ideal; (ii) \( \mathrm{c} \) is irreducible if and only if (c) is maximal in the set \( \mathrm{S} \) of all proper principal ideals of \( R \) . (iii) Every prime element of \( \mathrm{R} \) is irreducible. (iv) If \( \mathrm{R} \) is a principal ideal domain, then \( \mathrm{p} \) is prime if and only if \( \mathrm{p} \) is irreducible. (v) Every associate of an irreducible [resp. prime] element of \( \mathrm{R} \) is irreducible [resp. prime]. (vi) The only divisors of an irreducible element of \( \mathrm{R} \) are its associates and the units of \( \mathrm{R} \) . REMARK. Several parts of Theorem 3.4 are true for any commutative ring with identity, as is seen in the following proof. SKETCH OF PROOF OF 3.4. (i) Use Definition 3.3 and Theorem 2.15. (ii) If \( c \) is irreducible then \( \left( c\right) \) is a proper ideal of \( R \) by Theorem 3.2. If \( \left( c\right) \subset \left( d\right) \), then \( c = {dx} \) . 
Since \( c \) is irreducible either \( d \) is a unit (whence \( \left( d\right) = R \) ) or \( x \) is a unit (whence \( \left( c\right) = \left( d\right) \) by Theorem 3.2). Hence (c) is maximal in \( S \) . Conversely if \( \left( c\right) \) is maximal in \( S \), then \( c \) is a (nonzero) nonunit in \( R \) by Theorem 3.2. If \( c = {ab} \), then \( \left( c\right) \subset \left( a\right) \) , whence \( \left( c\right) = \left( a\right) \) or \( \left( a\right) = R \) . If \( \left( a\right) = R \), then \( a \) is a unit (Theorem 3.2). If \( \left( c\right) = \left( a\right) \) , then \( a = {cy} \) and hence \( c = {ab} = {cyb} \) . Since \( R \) is an integral domain \( 1 = {yb} \), whence \( b \) is a unit. Therefore, \( c \) is irreducible. (iii) If \( p = {ab} \), then \( p\left| {a\text{or}p}\right| b \) ; say \( p \mid a \) . Then \( {px} = a \) and \( p = {ab} = {pxb} \), which implies that \( 1 = {xb} \) . Therefore, \( b \) is a unit. (iv) If \( p \) is irreducible, use (ii), Theorem 2.19 and (i) to show that \( p \) is prime. (v) If \( c \) is irreducible and \( d \) is an associate of \( c \), then \( c = {du} \) with \( u \in R \) a unit (Theorem 3.2). If \( d = {ab} \), then \( c = {abu} \), whence \( a \) is a unit or \( {bu} \) is a unit. But if \( {bu} \) is a unit, so is \( b \) . Hence \( d \) is irreducible. (vi) If \( c \) is irreducible and \( a \mid c \), then \( \left( c\right) \subset \left( a\right) \), whence \( \left( c\right) = \left( a\right) \) or \( \left( a\right) = R \) by (ii). Therefore, \( a \) is either an associate of \( c \) or a unit by Theorem 3.2. We have now developed the analogues in an arbitrary integral domain of the concepts of divisibility and prime integers in the ring \( \mathbf{Z} \) . Recall that every element in \( \mathbf{Z} \) is a product of a finite number of irreducible elements (prime integers or their negatives) according to the Fundamental Theorem of Arithmetic (Introduction, Theorem 6.7). 
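The \( {Z}_{6} \) example given after Definition 3.3 can be verified by brute force, since the ring is finite. The sketch below is ours (helper names are hypothetical): it computes the units of \( {Z}_{6} \) and checks the two clauses of Definition 3.3 directly.

```python
n = 6  # work in Z_6

# units of Z_n: elements with a multiplicative inverse
units = [a for a in range(n) if any(a * b % n == 1 for b in range(n))]

def divides(a, b):
    # a | b in Z_n: there exists x with a*x = b
    return any(a * x % n == b for x in range(n))

def is_prime_element(p):
    # Definition 3.3: p is a nonzero nonunit and p | ab implies p | a or p | b
    if p % n == 0 or p in units:
        return False
    return all(not divides(p, a * b % n) or divides(p, a) or divides(p, b)
               for a in range(n) for b in range(n))

def is_irreducible(c):
    # Definition 3.3: c is a nonzero nonunit and c = ab forces a or b to be a unit
    if c % n == 0 or c in units:
        return False
    return all(a in units or b in units
               for a in range(n) for b in range(n) if a * b % n == c)
```

Running these checks confirms the text: \( 2 \) is prime in \( {Z}_{6} \), yet \( 2 = 2 \cdot 4 \) with neither factor a unit, so \( 2 \) is not irreducible.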
Furthermore this factorization is essentially unique (except for the order of the irreducible factors). Consequently, \( \mathbf{Z} \) is an example of: ## Definition 3.5. An integral domain \( \mathrm{R} \) is a unique factorization domain provided that: (i) every nonzero nonunit element a of \( \mathrm{R} \) can be written \( \mathrm{a} = {\mathrm{c}}_{1}{\mathrm{c}}_{2}\cdots {\mathrm{c}}_{\mathrm{n}} \), with \( {\mathrm{c}}_{1},\ldots ,{\mathrm{c}}_{\mathrm{n}} \) irreducible. (ii) If \( \mathrm{a} = {\mathrm{c}}_{1}{\mathrm{c}}_{2}\cdots {\mathrm{c}}_{\mathrm{n}} \) and \( \mathrm{a} = {\mathrm{d}}_{1}{\mathrm{\;d}}_{2}\cdots {\mathrm{d}}_{\mathrm{m}}\left( {{\mathrm{c}}_{\mathrm{i}},{\mathrm{d}}_{\mathrm{i}}}\right. \) irreducible \( ) \), then \( \mathrm{n} = \mathrm{m} \) and for some permutation \( \sigma \) of \( \{ 1,2,\ldots ,\mathrm{n}\} ,{\mathrm{c}}_{\mathrm{i}} \) and \( {\mathrm{d}}_{\sigma \left( \mathrm{i}\right) } \) are associates for every \( \mathrm{i} \) . REMARK. Every irreducible element in a unique factorization domain is necessarily prime by (ii). Consequently, irreducible and prime elements coincide by Theorem 3.4 (iii). Definition 3.5 is nontrivial in the sense that there are integral domains in which every element is a finite product of irreducible elements, but this factorization is not unique (that is, Definition 3.5 (ii) fails to hold); see Exercise 4. Indeed one of the historical reasons for introducing the concept of ideal was to obtain some sort of unique factorization theorems (for ideals) in rings of algebraic integers in which factorization of elements was not necessarily unique; see Chapter VIII. In view of the relationship between irreducible elements and principal ideals (Theorem 3.4) and the example of the integers, it seems plausible that every principal ideal domain is a unique factorization domain. In order to prove that this is indeed the case we need: Lemma 3.6. 
If \( \mathrm{R} \) is a principal ideal ring and \( \left( {\mathrm{a}}_{1}\right) \subset \left( {\mathrm{a}}_{2}\right) \subset \cdots \) is a chain of ideals in \( \mathrm{R} \), then for some positive integer \( \mathrm{n},\left( {\mathrm{a}}_{\mathrm{j}}\right) = \left( {\mathrm{a}}_{\mathrm{n}}\right) \) for all \( \mathrm{j} \geq \mathrm{n} \) . PROOF. Let \( A = \mathop{\bigcup }\limits_{{i \geq 1}}\left( {a}_{i}\right) \) . We claim that \( A \) is an ideal. If \( b, c \in A \), then \( b \in \left( {a}_{i}\right) \) and \( c \in \left( {a}_{j}\right) \) . Either \( i \leq j \) or \( i \geq j \) ; say \( i \geq j \) . Consequently \( \left( {a}_{j}\right) \subset \left( {a}_{i}\right) \) and \( b, c \in \left( {a}_{i}\right) \) . Since \( \left( {a}_{i}\right) \) is an ideal \( b - c \in \left( {a}_{i}\right) \subset A \) . Similarly if \( r \in R \) and \( b \in A \), then \( b \in \left( {a}_{i}\right) \) , whence \( {rb\varepsilon }\left( {a}_{i}\right) \subset A \) and \( {br\varepsilon }\left( {a}_{i}\right) \subset A \) . Therefore, \( A \) is an ideal by Theorem 2.2. By hypothesis \( A \) is principal, say \( A = \left( a\right) \) . Since \( a \in A = \bigcup \left( {a}_{i}\right), a \in \left( {a}_{n}\right) \) for some \( n \) . By Definition 2.4 \( \left( a\right) \subset \left( {a}_{n}\right) \) . Therefore, for every \( j \geq n,\left( a\right) \subset \left( {a}_{n}\right) \subset \left( {a}_{j}\right) \subset A = \) (a), whence \( \left( {a}_{j}\right) = \left( {a}_{n}\right) \) . Theorem 3.7. Every principal ideal domain \( \mathrm{R} \) is a unique factorization domain. REMARK. The converse of Theorem 3.7 is false. For example the polynomial ring \( \mathbf{Z}\left\lbrack x\right\rbrack \) can be shown to be a unique factorization domain (Theorem 6.14 below), but \( \mathbf{Z}\left\lbrack x\right\rbrack \) is not a principal ideal domain (Exercise 6.1). SKETCH OF PROOF OF 3.7. 
Let \( S \) be the set of all nonzero nonunit elements of \( R \) which cannot be factored as a finite product of irreducible elements. We shall first show that \( S \) is empty, whence every nonzero nonunit element of \( R \) has at least one factorization as a finite product of irreducibles
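In \( \mathbf{Z} \), the prototype of Definition 3.5, such a factorization can actually be computed. A minimal sketch by trial division (our own helper, not from the text):

```python
def factor(n):
    # write an integer n > 1 as a product of irreducibles (positive primes) in Z,
    # illustrating Definition 3.5(i) for the prototype Z
    assert n > 1
    fs = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)  # the remaining factor is prime
    return fs
```

Uniqueness in the sense of Definition 3.5(ii) then says that any two such factorizations of the same integer agree up to order (and, in \( \mathbf{Z} \), up to signs).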
1029_(GTM192)Elements of Functional Analysis
Definition 2.4
Definition 2.4 We say that a family of closed subsets \( {F}_{1},\ldots ,{F}_{n} \) of \( {\mathbb{R}}^{d} \) satisfies condition (C) if, for every compact subset \( K \) of \( {\mathbb{R}}^{d} \), the set \[ \left\{ {\left( {{x}^{1},\ldots ,{x}^{n}}\right) \in {F}_{1} \times \cdots \times {F}_{n} : {x}^{1} + \cdots + {x}^{n} \in K}\right\} \] is a compact subset of \( {\left( {\mathbb{R}}^{d}\right) }^{n} \) . Obviously, we could have written this condition with "bounded" instead of "compact". Let's first give some examples and simple properties. Most of the proofs are left as exercises. 1. Suppose \( \left( {{F}_{1},\ldots ,{F}_{n}}\right) \) is a family of nonempty closed sets that satisfies condition (C). Then every family of closed sets \( \left( {{\widetilde{F}}_{1},\ldots ,{\widetilde{F}}_{p}}\right) \), where \( 1 \leq \) \( p \leq n \) and \( {\widetilde{F}}_{j} \subset {F}_{j} \) for all \( j \in \{ 1,\ldots, p\} \), also satisfies (C). 2. Clearly, every family of compact subsets satisfies condition (C). 3. If \( \left( {{F}_{1},\ldots ,{F}_{n}}\right) \) satisfies condition (C), so does the family \( \left( {{F}_{1},\ldots ,{F}_{n}, L}\right) \) , for every compact \( L \) in \( {\mathbb{R}}^{d} \) . Indeed, if \( K \) is a compact subset of \( {\mathbb{R}}^{d} \) , \[ \left\{ {\left( {{x}^{1},\ldots ,{x}^{n},{x}^{n + 1}}\right) \in {F}_{1} \times \cdots \times {F}_{n} \times L : {x}^{1} + \cdots + {x}^{n} + {x}^{n + 1} \in K}\right\} \] \[ \subset \left\{ {\left( {{x}^{1},\ldots ,{x}^{n}}\right) \in {F}_{1} \times \cdots \times {F}_{n} : {x}^{1} + \cdots + {x}^{n} \in K - L}\right\} \times L, \] and the set \( K - L \) is compact. It follows by induction that a family of closed sets all or all but one of which are compact satisfies property (C). 4. Let \( F \) be a closed subset of \( {\mathbb{R}}^{d} \) containing a one-dimensional subspace \( \mathbb{R}u \) of \( {\mathbb{R}}^{d} \), where \( u \neq 0 \) . 
Then the family \( \left( {F, F}\right) \) does not satisfy condition (C). Indeed, the set \[ \left\{ {\left( {{x}^{1},{x}^{2}}\right) \in \mathbb{R}u \times \mathbb{R}u : {x}^{1} + {x}^{2} = 0}\right\} = \{ \left( {{tu}, - {tu}}\right) : t \in \mathbb{R}\} \] is unbounded. 5. If \( a, b \in \mathbb{R} \), the family \( \left( {( - \infty, a\rbrack ,\lbrack b, + \infty }\right) ) \) does not satisfy condition (C). By Example 1 and because \( \mathbb{R} \supset ( - \infty ,0\rbrack \), neither does the pair \( \left( {\mathbb{R},{\mathbb{R}}^{ + }}\right) \) . By contrast, for every \( {a}_{1},\ldots ,{a}_{n} \in \mathbb{R} \), the family \[ \left( {\left\lbrack {{a}_{1}, + \infty }\right) ,\ldots ,\left\lbrack {{a}_{n}, + \infty }\right) }\right) \] satisfies (C). In particular, \( \left( {{\mathbb{R}}^{ + },\ldots ,{\mathbb{R}}^{ + }}\right) \) satisfies (C). For a generalization to dimension \( d \), see Exercise 4 on page 335. 6. If \( \left( {{F}_{1},\ldots ,{F}_{n}}\right) \) satisfies condition (C), the set \( {F}_{1} + \cdots + {F}_{n} \) is closed. (Recall that, in general, the sum \( {F}_{1} + {F}_{2} \) of closed sets \( {F}_{1} \) and \( {F}_{2} \) need not be closed.) 7. If \( \left( {{F}_{1},\ldots ,{F}_{n}}\right) \) satisfies condition (C) and if \( \left( {I, J}\right) \) is a partition of the set \( \{ 1,\ldots, n\} \) (that is, \( I \cap J = \varnothing \) and \( I \cup J = \{ 1,\ldots, n\} \) ), then the family \( \left( {{F}_{I},{F}_{J}}\right) \) satisfies \( \left( \mathrm{C}\right) \), with \( {F}_{I} = \mathop{\sum }\limits_{{k \in I}}{F}_{k} \) and \( {F}_{J} = \mathop{\sum }\limits_{{k \in J}}{F}_{k} \) . The next step in the construction consists in extending the bracket. If \( \varphi \in \mathcal{E}\left( {\mathbb{R}}^{d}\right) \), the expression \( \langle T,\varphi \rangle \) has so far been defined only when \( T \) is a distribution with compact support on \( {\mathbb{R}}^{d} \) (see Proposition 3.3 on page 282). 
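The positive claim in Example 5 can be made quantitative: for \( {F}_{i} = \left\lbrack {{a}_{i}, + \infty }\right) \) and \( K = \left\lbrack {-M, M}\right\rbrack \), the constraint \( {x}_{1} + \cdots + {x}_{n} \in K \) with \( {x}_{i} \geq {a}_{i} \) forces \( {x}_{i} \leq M - \mathop{\sum }\limits_{{j \neq i}}{a}_{j} \), so the set in Definition 2.4 is contained in a box. The sketch below (our own names and parameters) computes that box and tests it on random samples.

```python
import random

def coordinate_box(a, M):
    # For F_i = [a_i, +inf) and K = [-M, M]: if x_i >= a_i and sum x_i <= M,
    # then x_i <= M - sum_{j != i} a_j, so the constraint set is bounded.
    s = sum(a)
    return [(ai, M - (s - ai)) for ai in a]

def check_box(a, M, trials=2000, seed=2):
    # randomly sample points of F_1 x ... x F_n whose coordinate sum lies in K
    # and confirm they fall inside the computed box
    rng = random.Random(seed)
    box = coordinate_box(a, M)
    for _ in range(trials):
        x = [rng.uniform(ai, ai + M) for ai in a]
        if abs(sum(x)) <= M:
            if not all(lo - 1e-9 <= xi <= hi + 1e-9
                       for xi, (lo, hi) in zip(x, box)):
                return False
    return True
```

This is precisely why the family \( \left( {\left\lbrack {{a}_{1}, + \infty }\right) ,\ldots ,\left\lbrack {{a}_{n}, + \infty }\right) }\right) \) satisfies (C): each lower bound turns the constraint on the sum into an upper bound on every coordinate.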
The next proposition allows us to extend this definition to the case where \( \operatorname{Supp}T \cap \operatorname{Supp}\varphi \) is compact. Proposition 2.5 Let \( \Omega \) be open in \( {\mathbb{R}}^{d} \) . Let \( T \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) and \( \varphi \in \mathcal{E}\left( \Omega \right) \) be such that \( \operatorname{Supp}T \cap \operatorname{Supp}\varphi \) is compact. Then, if \( \rho \in \mathcal{D}\left( \Omega \right) \) is a function taking the value 1 on an open set containing \( \operatorname{Supp}T \cap \operatorname{Supp}\varphi \), the value of \( \langle T,{\rho \varphi }\rangle \) does not depend on \( \rho \) . This value is denoted by \( \langle T,\varphi \rangle \) . Proof. Take \( \rho \in \mathcal{D}\left( \Omega \right) \) such that \( \rho = 0 \) on an open set containing \( \operatorname{Supp}T \cap \) Supp \( \varphi \) . Then the support of \( \rho \) is contained in the complement of Supp \( T \cap \) Supp \( \varphi \), and therefore \[ \operatorname{Supp}{\rho \varphi } \subset \operatorname{Supp}\varphi \cap \left( {{\mathbb{R}}^{d} \smallsetminus \left( {\operatorname{Supp}T \cap \operatorname{Supp}\varphi }\right) }\right) = \operatorname{Supp}\varphi \cap \left( {{\mathbb{R}}^{d} \smallsetminus \operatorname{Supp}T}\right) , \] which implies that \( \langle T,{\rho \varphi }\rangle = 0 \) . Consequently, if \( \rho \) and \( \widetilde{\rho } \) are functions in \( \mathcal{D}\left( \Omega \right) \) that coincide on an open set containing \( \operatorname{Supp}T \cap \operatorname{Supp}\varphi \), we have \( \langle T,{\rho \varphi }\rangle = \langle T,\widetilde{\rho }\varphi \rangle \) . Naturally, if \( T \in {\mathcal{E}}^{\prime }\left( \Omega \right) \) and \( \varphi \in \mathcal{E}\left( \Omega \right) \), we recover the meaning of the brackets defined in Proposition 3.3 on page 282. 
If \( T \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) and \( \varphi \in \) \( \mathcal{D}\left( \Omega \right) \), we recover the usual meaning of the brackets. Note that we can define similarly the value of \( \langle T,\varphi \rangle \) for \( T \in {\mathcal{D}}^{\prime m}\left( \Omega \right) \) and \( \varphi \in {\mathcal{E}}^{m}\left( \Omega \right) \) if \( \operatorname{Supp}T \cap \operatorname{Supp}\varphi \) is compact. We can now define the convolution product of a family of distributions whose supports satisfy condition (C) of Definition 2.4. We will say from now on that such a family of distributions itself satisfies condition (C). Proposition 2.6 Let \( \left( {{T}_{1},\ldots ,{T}_{n}}\right) \) be a family of distributions on \( {\mathbb{R}}^{d} \) satisfying condition \( \left( \mathrm{C}\right) \) . 1. If \( \varphi \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \), we define a function \( \widehat{\varphi } \) on \( {\left( {\mathbb{R}}^{d}\right) }^{n} \) by \[ \widehat{\varphi }\left( {{x}^{1},\ldots ,{x}^{n}}\right) = \varphi \left( {{x}^{1} + \cdots + {x}^{n}}\right) . \] Then \( \widehat{\varphi } \in \mathcal{E}\left( {\left( {\mathbb{R}}^{d}\right) }^{n}\right) \) and \( \operatorname{Supp}\left( {{T}_{1} \otimes \cdots \otimes {T}_{n}}\right) \cap \operatorname{Supp}\widehat{\varphi } \) is compact. The map defined on \( \mathcal{D}\left( {\mathbb{R}}^{d}\right) \) by \[ \varphi \mapsto \left\langle {{T}_{1} \otimes \cdots \otimes {T}_{n},\widehat{\varphi }}\right\rangle \] is a distribution on \( {\mathbb{R}}^{d} \), denoted \( {T}_{1} * \cdots * {T}_{n} \) and called the convolution of \( {T}_{1},\ldots ,{T}_{n} \) . 2. For each \( l > 0 \), let \( {\rho }_{l} \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \) be such that \( {\rho }_{l} = 1 \) on \( \bar{B}\left( {0, l}\right) \) . 
For every open bounded set \( \Omega \) in \( {\mathbb{R}}^{d} \), there exists a real number \( l > 0 \) such that the restrictions of \( {T}_{1} * \cdots * {T}_{n} \) and of \( \left( {{\rho }_{{l}^{\prime }}{T}_{1}}\right) * \cdots * \left( {{\rho }_{{l}^{\prime }}{T}_{n}}\right) \) to \( \Omega \) coincide for every \( {l}^{\prime } \geq l \) . In particular, \[ {T}_{1} * \cdots * {T}_{n} = \mathop{\lim }\limits_{{l \rightarrow + \infty }}\left( {{\rho }_{l}{T}_{1}}\right) * \cdots * \left( {{\rho }_{l}{T}_{n}}\right) \] in \( {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{d}\right) \) . In the preceding statement we have \( {\rho }_{l}{T}_{j} \in {\mathcal{E}}^{\prime }\left( {\mathbb{R}}^{d}\right) \), so the convolution \( \left( {{\rho }_{l}{T}_{1}}\right) * \cdots * \left( {{\rho }_{l}{T}_{n}}\right) \) is defined in the sense of Section 2A. Indeed, the preceding definition coincides with Definition 2.1 when all distributions have compact support. Proof. Let \( \Omega \) be a bounded open set in \( {\mathbb{R}}^{d} \) and \( \varphi \) an element of \( \mathcal{D}\left( \Omega \right) \) . We know that \( \operatorname{Supp}\left( {{T}_{1} \otimes \cdots \otimes {T}_{n}}\right) = \operatorname{Supp}{T}_{1} \times \cdots \times \operatorname{Supp}{T}_{n} \), so \( \operatorname{Supp}\left( {{T}_{1} \otimes \cdots \otimes {T}_{n}}\right) \cap \operatorname{Supp}\widehat{\varphi } \) \[ \subset \left\{ {\left( {{x}^{1},\ldots ,{x}^{n}}\right) \in \operatorname{Supp}{T}_{1} \times \cdots \times \operatorname{Supp}{T}_{n} : {x}^{1} + \cdots + {x}^{n} \in \bar{\Omega }}\right\} . \] By condition (C), we deduce that \( \operatorname{Supp}\left( {{T}_{1} \otimes \cdots \otimes {T}_{n}}\right) \cap \operatorname{Supp}\widehat{\varphi } \) is a compact subset of \( {\left( {\mathbb{R}}^{d}\right) }^{n} \) contained in a compact \( {K}_{\Omega } \) that depends only on \( \Omega \), not on \( \varphi \) . 
Thus, \( \left\langle {{T}_{1} \otimes \cdots \otimes {T}_{n},\widehat{\varphi }}\right\rangle \) is well defined and coincides with \( \left\langle {{T}_{1} \otimes \cdots \otimes {T}_{n}}\right. \) , \( \left. {\left. {\left( {{\rho }_{l} \otimes \cdots \otimes {\rho }_{l}}\right) \widehat{\varphi }}\right\rangle \text{ if }{K}_{\Omega }
18_Algebra Chapter 0
Definition 1.1
Definition 1.1. A field extension \( k \subseteq F \) is finite, of degree \( n \), if \( F \) has (finite) dimension \( \dim F = n \) as a vector space over \( k \) . The extension is infinite otherwise. The degree of a finite extension \( k \subseteq F \) is denoted by \( \left\lbrack {F : k}\right\rbrack \) (and we write \( \left\lbrack {F : k}\right\rbrack = \infty \) if the extension is infinite). \( {}^{2} \) They do interact in other ways, however; for example, \( \mathbb{Z} \) is initial in Ring, so \( \mathbb{Z} \) maps to all fields. \( {}^{3} \) I am going to denote the field \( \mathbb{Z}/p\mathbb{Z} \) by \( {\mathbb{F}}_{p} \) in this chapter, to stress its role as a field (rather than 'just' as a group). In §V.5.2 we encountered a prototypical example of a finite field extension: a procedure starting from an irreducible polynomial \( f\left( x\right) \in k\left\lbrack x\right\rbrack \) with coefficients in a field and producing an extension \( K \) of \( k \) in which \( f\left( t\right) \) has a root. Explicitly, \[ K = \frac{k\left\lbrack t\right\rbrack }{\left( f\left( t\right) \right) } \] is such an extension (Proposition V.5.7). The quotient is a field because \( k\left\lbrack t\right\rbrack \) is a PID; hence irreducible elements generate maximal ideals: prime because of the UFD property (cf. Theorem V.2.5) and then maximal because nonzero prime ideals are maximal in a PID (cf. Proposition III.4.13). The coset of \( t \) is a root of \( f\left( x\right) \) when this is viewed as an element of \( K\left\lbrack x\right\rbrack \) . The degree of the extension \( k \subseteq K \) equals the degree of the polynomial \( f\left( x\right) \) (this is hopefully clear at this point and was checked carefully in Proposition III.4.6). The reader may wonder whether perhaps all finite extensions are of this type. This is unfortunately not the case, but, as it happens, it is the case in a large class of examples.
As I often do, I promise that the situation will clarify itself considerably once we have accumulated more general knowledge; in this case, the relevant fact will be Proposition 5.19. Also recall that we proved that these extensions are almost universal with respect to the problem of extending \( k \) so that the polynomial \( f\left( t\right) \) acquires a root. I pointed out, however, that the 'uni' in 'universal' is missing; that is, if \( k \subseteq F \) is an extension of \( k \) in which \( f\left( t\right) \) has a root, there may be many different ways to put \( K \) in between: \[ k \subseteq K \subseteq F\text{.} \] Such questions are central to the study of extensions, so we begin by giving a second look at this situation. 1.2. Simple extensions. Let \( k \subseteq F \) be a field extension, and let \( \alpha \in F \) . The smallest subfield of \( F \) containing both \( k \) and \( \alpha \) is denoted \( k\left( \alpha \right) \) ; that is, \( k\left( \alpha \right) \) is the intersection of all subfields of \( F \) containing \( k \) and \( \alpha \) . Definition 1.2. A field extension \( k \subseteq F \) is simple if there exists an element \( \alpha \in F \) such that \( F = k\left( \alpha \right) \) . The extensions recalled above are of this kind: if \( K = k\left\lbrack t\right\rbrack /\left( {f\left( t\right) }\right) \) and \( \alpha \) denotes the coset of \( t \), then \( K = k\left( \alpha \right) \) : indeed, if a subfield of \( K \) contains the coset of \( t \) , then it must contain (the coset of) every polynomial expression in \( t \), and hence it must be the whole of \( K \) . I have always found the notation \( k\left( \alpha \right) \) somewhat unfortunate, since it suggests that all such extensions are in some way isomorphic and possibly all isomorphic to the field \( k\left( t\right) \) of rational functions in one indeterminate \( t \) (cf. Definition V.4.13).
This is not true, although it is clear that every element of \( k\left( \alpha \right) \) may be written as a rational function in \( \alpha \) with coefficients in \( k \) (Exercise 1.3). In any case, it is easy to classify simple extensions: they are either isomorphic to \( k\left( t\right) \) or they are of the prototypical kind recalled above. Here is the precise statement. Proposition 1.3. Let \( k \subseteq k\left( \alpha \right) \) be a simple extension. Consider the evaluation map \( \epsilon : k\left\lbrack t\right\rbrack \rightarrow k\left( \alpha \right) \), defined by \( f\left( t\right) \mapsto f\left( \alpha \right) \) . Then we have the following: - \( \epsilon \) is injective if and only if \( k \subseteq k\left( \alpha \right) \) is an infinite extension. In this case, \( k\left( \alpha \right) \) is isomorphic to the field of rational functions \( k\left( t\right) \) . - \( \epsilon \) is not injective if and only if \( k \subseteq k\left( \alpha \right) \) is finite. In this case there exists a unique monic irreducible nonconstant polynomial \( p\left( t\right) \in k\left\lbrack t\right\rbrack \) of degree \( n = \) \( \left\lbrack {k\left( \alpha \right) : k}\right\rbrack \) such that \[ k\left( \alpha \right) \cong \frac{k\left\lbrack t\right\rbrack }{\left( p\left( t\right) \right) } \] Via this isomorphism, \( \alpha \) corresponds to the coset of \( t \) . The polynomial \( p\left( t\right) \) is the monic polynomial of smallest degree in \( k\left\lbrack t\right\rbrack \) such that \( p\left( \alpha \right) = 0 \) in \( k\left( \alpha \right) \) . The polynomial \( p\left( t\right) \) appearing in this statement is called the minimal polynomial of \( \alpha \) over \( k \) . Of course the minimal polynomial of an element \( \alpha \) of a (’large’) field depends on the base (’small’) field \( k \) . 
For example, \( \sqrt{2} \in \mathbb{C} \) has minimal polynomial \( {t}^{2} - 2 \) over \( \mathbb{Q} \), but \( t - \sqrt{2} \) over \( \mathbb{R} \) . Proof. Let \( F = k\left( \alpha \right) \) . By the ’first isomorphism theorem’, the image of \( \epsilon : k\left\lbrack t\right\rbrack \rightarrow F \) is isomorphic to \( k\left\lbrack t\right\rbrack /\ker \left( \epsilon \right) \) . Since \( F \) is an integral domain, so is \( k\left\lbrack t\right\rbrack /\ker \left( \epsilon \right) \) ; hence \( \ker \left( \epsilon \right) \) is a prime ideal in \( k\left\lbrack t\right\rbrack \) . - Assume \( \ker \left( \epsilon \right) = 0 \) ; that is, \( \epsilon \) is an injective map from the integral domain \( k\left\lbrack t\right\rbrack \) to the field \( F \) . By the universal property of fields of fractions (cf. §V.4.2), \( \epsilon \) extends to a unique homomorphism \[ k\left( t\right) \rightarrow F\text{.} \] The (isomorphic) image of \( k\left( t\right) \) in \( F \) is a field containing \( k \) and \( \alpha \) ; hence it equals \( F \) by definition of simple extension. Since \( \epsilon \) is injective, the powers \( {\alpha }^{0} = 1,\alpha ,{\alpha }^{2},{\alpha }^{3},\ldots \) (that is, the images \( \epsilon \left( {t}^{i}\right) \) ) are all distinct and linearly independent over \( k \) (because the powers \( 1, t,{t}^{2},\ldots \) are linearly independent over \( k \) ); therefore the extension \( k \subseteq F \) is infinite in this case. - If \( \ker \left( \epsilon \right) \neq 0 \), then \( \ker \left( \epsilon \right) = \left( {p\left( t\right) }\right) \) for a unique monic irreducible nonconstant polynomial \( p\left( t\right) \), which has smallest degree (cf. Exercise III.4.4) among all nonzero polynomials in \( \ker \left( \epsilon \right) \) .
As \( \left( {p\left( t\right) }\right) \) is then maximal in \( k\left\lbrack t\right\rbrack \), the image of \( \epsilon \) is a subfield of \( F \) containing \( \alpha = \epsilon \left( t\right) \) . By definition of simple extension, \( F = \) the image of \( \epsilon \) ; that is, the induced homomorphism \[ \frac{k\left\lbrack t\right\rbrack }{\left( p\left( t\right) \right) } \rightarrow F \] is an isomorphism. In this case \( \left\lbrack {F : k}\right\rbrack = \deg p\left( t\right) \), as recalled in §1.1, and in particular the extension is finite, as claimed. The alert reader will have noticed that the proof of Proposition 1.3 is essentially a rehash of the argument proving the 'versality' part of Proposition V.5.7. Example 1.4. Consider the extension \( \mathbb{Q} \subseteq \mathbb{R} \) . The polynomial \( {x}^{2} - 2 \in \mathbb{Q}\left\lbrack x\right\rbrack \) has roots in \( \mathbb{R} \) ; therefore, by Proposition V.5.7, there exists a homomorphism (hence a field extension) \[ \bar{\epsilon } : \;\frac{\mathbb{Q}\left\lbrack t\right\rbrack }{\left( {t}^{2} - 2\right) } \hookrightarrow \mathbb{R} \] such that the image of (the coset of) \( t \) is a root \( \alpha \) of \( {x}^{2} - 2 \) . Proposition 1.3 simply identifies the image of this homomorphism with \( \mathbb{Q}\left( \alpha \right) \subseteq \mathbb{R} \) . This is hopefully crystal clear; however, note that even this simple example shows that the induced morphism \( \bar{\epsilon } \) is not unique (hence the ’lack of uni’): there is more than one root of \( {x}^{2} - 2 \) in \( \mathbb{R} \) . Concretely, there are two possible choices for \( \alpha : \alpha = + \sqrt{2} \) and \( \alpha = - \sqrt{2} \) . The choice of \( \alpha \) determines the evaluation map \( \epsilon \) and therefore the specific realization of \( \mathbb{Q}\left\lbrack t\right\rbrack /\left( {{t}^{2} - 2}\right) \) as a subfield of \( \mathbb{R} \) .
The reader may be misled by one feature of this example: clearly \( \mathbb{Q}\left( \sqrt{2}\right) = \) \( \mathbb{Q}\left( {-\sqrt{2}}\right) \), and this may seem to compensate for the lack of uniqueness lamented above. The morphism may not be unique, but the image of the morphism surely is? No. The reader will check that there are three distinct subfields of \( \mathbb{C} \) isomorphic to \( \mathbb{Q}\left\lbrack t\right\rbrack /\left( {{t}^{3} - 2}\right) \) (Exercise 1.5).
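The minimal-polynomial side of Proposition 1.3 can be illustrated computationally. Below is a small sketch using the sympy library (an assumption of this illustration, not part of the text), computing the minimal polynomial of \( \alpha = \sqrt{2} \) over \( \mathbb{Q} \), i.e. the monic generator of \( \ker(\epsilon) \):

```python
from sympy import Symbol, sqrt, minimal_polynomial

t = Symbol('t')

# Minimal polynomial of alpha = sqrt(2) over Q: the monic generator of
# ker(eps), where eps : Q[t] -> Q(alpha) is evaluation at alpha.
p = minimal_polynomial(sqrt(2), t)
print(p)                            # t**2 - 2

# [Q(alpha) : Q] = deg p(t), and p(alpha) = 0 in Q(alpha).
print(p.as_poly(t).degree())        # 2
print(p.subs(t, sqrt(2)).expand())  # 0
```

Over \( \mathbb{R} \) the same element has minimal polynomial \( t - \sqrt{2} \), matching the remark above that the minimal polynomial depends on the base field.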
1042_(GTM203)The Symmetric Group
Definition 3.10.1
Definition 3.10.1 If \( v = \left( {i, j}\right) \) is a node in the diagram of \( \lambda \), then it has hook \[ {H}_{v} = {H}_{i, j} = \left\{ {\left( {i,{j}^{\prime }}\right) : {j}^{\prime } \geq j}\right\} \cup \left\{ {\left( {{i}^{\prime }, j}\right) : {i}^{\prime } \geq i}\right\} \] with corresponding hooklength \[ {h}_{v} = {h}_{i, j} = \left| {H}_{i, j}\right| \text{. ∎} \] To illustrate, if \( \lambda = \left( {{4}^{2},{3}^{3},1}\right) \), then the dotted cells in ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_137_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_137_0.jpg) are the hook \( {H}_{2,2} \) with hooklength \( {h}_{2,2} = 6 \) . It is now easy to state the hook formula of Frame, Robinson, and Thrall. Theorem 3.10.2 (Hook Formula [FRT 54]) If \( \lambda \vdash n \), then \[ {f}^{\lambda } = \frac{n!}{\mathop{\prod }\limits_{{\left( {i, j}\right) \in \lambda }}{h}_{i, j}}. \] Before proving this theorem, let us pause for an example and an anecdote. Suppose we wish to calculate the number of standard Young tableaux of shape \( \lambda = \left( {2,2,1}\right) \vdash 5 \) . The hooklengths are given in the array ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_137_1.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_137_1.jpg) where \( {h}_{i, j} \) is placed in cell \( \left( {i, j}\right) \) . Thus \[ {f}^{\left( 2,2,1\right) } = \frac{5!}{4 \cdot 3 \cdot 2 \cdot {1}^{2}} = 5. \] This result can be verified by listing all possible tableaux: \[ \begin{array}{ccccc} 1\,2 & 1\,2 & 1\,3 & 1\,3 & 1\,4 \\ 3\,4 & 3\,5 & 2\,4 & 2\,5 & 2\,5 \\ 5 & 4 & 5 & 4 & 3 \end{array} \] The tale of how the hook formula was born is an amusing one. One Thursday in May of 1953, Robinson was visiting Frame at Michigan State University. Discussing the work of Staal [Sta 50] (a student of Robinson), Frame was led to conjecture the hook formula.
At first Robinson could not believe that such a simple formula existed, but after trying some examples he became convinced, and together they proved the identity. On Saturday they went to the University of Michigan, where Frame presented their new result after a lecture by Robinson. This surprised Thrall, who was in the audience, because he had just proved the same result on the same day! There are many different bijective proofs of the hook formula. Franzblau and Zeilberger [F-Z 82] were the first to come up with a (complicated) bijection. Later, Zeilberger [Zei 84] gave a bijective version of a probabilistic proof of Greene, Nijenhuis, and Wilf [GNW 79] (see Exercise 17). But the map was still fairly complex. Also, Remmel [Rem 82] used the Garsia-Milne involution principle [G-M 81] to produce a bijection as a composition of maps. It was not until the 1990s that a truly straightforward one-to-one correspondence was found (even though the proof that it is correct is still somewhat involved), and that is the one we will present here. The algorithm was originally outlined by Pak and Stoyanovskii [PS 92], and then a complete proof was given by these two authors together with Novelli [NPS 97]. (A generalization of this method can be found in the work of Krattenthaler [Kra pr].) To get the hook formula in a form suitable for bijective proof, we rewrite it as \[ n! = {f}^{\lambda }\mathop{\prod }\limits_{{\left( {i, j}\right) \in \lambda }}{h}_{i, j} \] So it suffices to find a bijection \[ T\overset{\mathrm{N} - \mathrm{P} - \mathrm{S}}{ \leftrightarrow }\left( {P, J}\right) \] where \( \operatorname{sh}T = \operatorname{sh}P = \operatorname{sh}J = \lambda, T \) is an arbitrary Young tableau, \( P \) is a standard tableau, and \( J \) is an array such that the number of choices for the entry \( {J}_{i, j} \) is \( {h}_{i, j} \) . 
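The hook formula itself translates directly into code. The following sketch (in Python; function names are mine, not from the text) computes the hook lengths of a partition and the number \( f^{\lambda} \) of standard Young tableaux, reproducing \( f^{(2,2,1)} = 5 \) and \( h_{2,2} = 6 \) from the examples above:

```python
from math import factorial

def hook_lengths(shape):
    """Hook lengths h_{i,j} = arm length + leg length + 1 for each cell
    (i, j) of the partition `shape` (weakly decreasing row lengths);
    cells are 0-indexed here."""
    col = [sum(1 for r in shape if r > j) for j in range(shape[0])]  # column lengths
    return {(i, j): (shape[i] - j - 1) + (col[j] - i - 1) + 1
            for i in range(len(shape)) for j in range(shape[i])}

def num_standard_tableaux(shape):
    """f^lambda = n! / prod h_{i,j}  (hook formula of Frame-Robinson-Thrall)."""
    prod = 1
    for h in hook_lengths(shape).values():
        prod *= h
    return factorial(sum(shape)) // prod

print(num_standard_tableaux([2, 2, 1]))          # 5, as listed above
print(hook_lengths([4, 4, 3, 3, 3, 1])[(1, 1)])  # 6 = h_{2,2} for (4^2,3^3,1)
```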
More specifically, define the arm and leg of \( {H}_{i, j} \) to be \[ {A}_{i, j} = \left\{ {\left( {i,{j}^{\prime }}\right) : {j}^{\prime } > j}\right\} \;\text{ and }\;{L}_{i, j} = \left\{ {\left( {{i}^{\prime }, j}\right) : {i}^{\prime } > i}\right\} , \] respectively, with corresponding arm length and leg length \[ a{l}_{i, j} = \left| {A}_{i, j}\right| \;\text{ and }\;l{l}_{i, j} = \left| {L}_{i, j}\right| . \] In the previous example with \( \lambda = \left( {{4}^{2},{3}^{3},1}\right) \) we have \[ {A}_{2,2} = \{ \left( {2,3}\right) ,\left( {2,4}\right) \} ,\;a{l}_{2,2} = 2; \] \[ {L}_{2,2} = \{ \left( {3,2}\right) ,\left( {4,2}\right) ,\left( {5,2}\right) \} ;\;l{l}_{2,2} = 3. \] Note that \( {h}_{i, j} = a{l}_{i, j} + l{l}_{i, j} + 1 \) . So our requirement on the array \( J \) will be \[ - l{l}_{i, j} \leq {J}_{i, j} \leq a{l}_{i, j} \] (3.18) and we will call \( J \) a hook tableau. " \( T\overset{\mathrm{N} - \mathrm{P} - \mathrm{S}}{ \rightarrow }\left( {P, J}\right) \) " The basic idea behind this map is simple. We will use a modified jeu de taquin to unscramble the numbers in \( T \) so that rows and columns increase to form \( P \) . The hook tableau, \( J \), will keep track of the unscrambling process so that \( T \) can be reconstructed from \( P \) . We first need to totally order the cells of \( \lambda \) by defining \[ \left( {i, j}\right) \leq \left( {{i}^{\prime },{j}^{\prime }}\right) \text{ if and only if }j > {j}^{\prime }\text{ or }j = {j}^{\prime }\text{ and }i \geq {i}^{\prime }. 
\] Label the cells of \( \lambda \) in the given order \[ {c}_{1} < {c}_{2} < \ldots < {c}_{n} \] So, for example, if \( \lambda = \left( {3,3,2}\right) \), then the ordering is <table><tr><td>\( {c}_{8} \)</td><td>\( {c}_{5} \)</td><td>\( {c}_{2} \)</td><td rowspan="3">.</td></tr><tr><td>\( {c}_{7} \)</td><td>\( {c}_{4} \)</td><td>\( {c}_{1} \)</td></tr><tr><td>\( {c}_{6} \)</td><td>\( {c}_{3} \)</td><td></td></tr></table> Now if \( T \) is a \( \lambda \) -tableau and \( c \) is a cell, we let \( {T}^{ \leq c} \) (respectively, \( {T}^{ < c} \) ) be the tableau containing all cells \( b \) of \( T \) with \( b \leq c \) (respectively, \( b < c \) ). Continuing our example, if \[ T = \begin{array}{lll} 3 & 2 & 7 \\ 6 & 1 & 8 \\ 4 & 5 & \end{array},\;\text{ then }\;{T}^{ \leq {c}_{6}} = \begin{array}{ll} 2 & 7 \\ 1 & 8 \\ 4 & 5 \end{array}. \] Given \( T \), we will construct a sequence of pairs \[ \left( {{T}_{1},{J}_{1}}\right) = \left( {T,0}\right) ,\left( {{T}_{2},{J}_{2}}\right) ,\left( {{T}_{3},{J}_{3}}\right) ,\ldots ,\left( {{T}_{n},{J}_{n}}\right) = \left( {P, J}\right) , \] (3.19) where 0 is the tableau of all zeros and for all \( k \) we have that \( {T}_{k}^{ \leq {c}_{k}} \) is standard, and that \( {J}_{k} \) is a hook tableau which is zero outside \( {J}_{k}^{ \leq {c}_{k}} \) . If \( T \) is a tableau and \( c \) is a cell of \( T \), then we perform a modified forward slide to form a new tableau \( {j}^{c}\left( T\right) \) as follows. NPS1 Pick \( c \) such that \( {T}^{ < c} \) is standard. NPS2 While \( {T}^{ \leq c} \) is not standard do NPSa If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \min \left\{ {{T}_{i + 1, j},{T}_{i, j + 1}}\right\} \) . NPSb Exchange \( {T}_{c} \) and \( {T}_{{c}^{\prime }} \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) .
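The modified forward slide NPS1-NPS2 is short to implement. The sketch below (Python; tableaux stored as dictionaries from 1-indexed cells \( (i,j) \) to entries; all names are mine, not from the text) follows NPSa-NPSb, stopping once the moving entry has no smaller neighbor below or to the right:

```python
def forward_slide(T, c):
    """Modified forward slide j^c(T) (NPS1-NPS2).  T is a dict mapping
    1-indexed cells (i, j) to entries; returns (new tableau, path of c).
    Along the slide path, T^{<=c} fails to be standard exactly when the
    entry at c exceeds a neighbor below or to the right, so we may stop
    as soon as no such neighbor exists."""
    T = dict(T)                                   # leave the input intact
    path = [c]
    while True:
        i, j = path[-1]
        nbrs = [b for b in ((i + 1, j), (i, j + 1))
                if b in T and T[b] < T[(i, j)]]
        if not nbrs:                              # T^{<=c} is now standard
            return T, path
        c2 = min(nbrs, key=T.get)                 # NPSa: cell of the min entry
        T[(i, j)], T[c2] = T[c2], T[(i, j)]       # NPSb: exchange and move on
        path.append(c2)

# The example from the text: shape (4, 4, 1), starting cell c = (1, 2).
T = {(1, 1): 9, (1, 2): 6, (1, 3): 1, (1, 4): 5,
     (2, 1): 8, (2, 2): 2, (2, 3): 3, (2, 4): 7,
     (3, 1): 4}
T2, p = forward_slide(T, (1, 2))
print(p)   # [(1, 2), (1, 3), (2, 3)], the path computed in the text
```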
It is easy to see that the algorithm will terminate since \( {T}^{ \leq c} \) will eventually become standard, in the worst case when \( c \) reaches an inner corner of \( \lambda \) . We call the sequence, \( p \), of cells that \( c \) passes through the path of \( c \) . To illustrate, if \[ T = \begin{array}{llll} 9 & 6 & 1 & 5 \\ 8 & 2 & 3 & 7 \\ 4 & & & \end{array} \] and \( c = \left( {1,2}\right) \), then here is the computation of \( {j}^{c}\left( T\right) \), where the moving element is in boldface: \[ T = \begin{array}{llll} 9 & 6 & 1 & 5 \\ 8 & 2 & 3 & 7 \\ 4 & & & \end{array},\;\begin{array}{llll} 9 & 1 & 6 & 5 \\ 8 & 2 & 3 & 7 \\ 4 & & & \end{array},\;\begin{array}{llll} 9 & 1 & 3 & 5 \\ 8 & 2 & 6 & 7 \\ 4 & & & \end{array} = {j}^{c}\left( T\right) . \] So in this case the path of \( c \) is \[ p = \left( {\left( {1,2}\right) ,\left( {1,3}\right) ,\left( {2,3}\right) }\right) . \] We can now construct the sequence (3.19). The initial pair \( \left( {{T}_{1},{J}_{1}}\right) \) is already defined. Assuming that we have \( \left( {{T}_{k - 1},{J}_{k - 1}}\right) \) satisfying the conditions after (3.19), then let \[ {T}_{k} = {j}^{{c}_{k}}\left( {T}_{k - 1}\right) \] Furthermore, if \( {j}^{{c}_{k}} \) starts at \( {c}_{k} = \left( {i, j}\right) \) and ends at \( \left( {{i}^{\prime },{j}^{\prime }}\right) \), then \( {J}_{k} = {J}_{k - 1} \) except for the values \[ {\left( {J}_{k}\right) }_{h, j} = \left\{ \begin{array}{ll} {\left( {J}_{k - 1}\right) }_{h + 1, j} - 1 & \text{ for }i \leq h < {i}^{\prime }, \\ {j}^{\prime } - j & \text{ for }h = {i}^{\prime }. \end{array}\right. \] It is not hard to see that if \( {J}_{k - 1} \) was a hook tableau, then \( {J}_{k} \) will still be one. Starting with the tableau \[ T = \begin{array}{ll} 6 & 2 \\ 4 & 3 \\ 5 & 1 \end{array} \] here is the whole algorithm. 
\[ {T}_{k} : \;\begin{array}{ll} 6 & 2 \\ 4 & 3 \\ 5 & 1 \end{array}\;,\;\begin{array}{ll} 6 & 2 \\ 4 & 1 \\ 5 & 3 \end{array}\;,\;\begin{array}{ll} 6 & 1 \\ 4 & 2 \\ 5 & 3 \end{array}\;,\;\begin{array}{ll} 6 & 1 \\ 4 & 2 \\ 3 & 5 \end{array}\;,\;\begin{array}{ll} 6 & 1 \\ 2 & 4 \\ 3 & 5 \end{array}\;,\;\begin{array}{ll} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{array} = P, \] \[ {J}_{k} : \;\begin{array}{rr} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{array}\;,\;\begin{array}{rr} 0 & 0 \\ 0 & -1 \\ 0 & 0 \end{array}\;,\;\begin{array}{rr} 0 & -2 \\ 0 & 0 \\ 0 & 0 \end{array}\;,\;\begin{array}{rr} 0 & -2 \\ 0 & 0 \\ 1 & 0 \end{array}\;,\;\begin{array}{rr} 0 & -2 \\ 1 & 0 \\ 1 & 0 \end{array}\;,\;\begin{array}{rr} 0 & -2 \\ 0 & 0 \\ 1 & 0 \end{array} = J. \] Theorem 3.10.3 ([NPS 97]) For fixed \( \lambda \), the map \[ T\overset{\mathrm{N} - \mathrm{P} - \mathrm{S}}{ \rightarrow }\left( {P, J}\right) \] just defined is a bijection between tableaux \( T \) and pairs \( \left( {P, J}\right) \) with \( P \) a standard tableau and \( J \) a hook tableau such that \( \operatorname{sh}T = \operatorname{sh}P = \operatorname{sh}J = \lambda \) .
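The whole construction of the sequence (3.19) fits in a few lines. A self-contained sketch (Python; cell encoding and names are mine) that reproduces the run above, ending with \( P = 1\,4\,/\,2\,5\,/\,3\,6 \) and \( J = 0\;{-2}\,/\,0\;0\,/\,1\;0 \):

```python
def nps_map(T):
    """Sketch of the full N-P-S correspondence T -> (P, J).  T is a dict
    {(i, j): entry} with 1-indexed cells; returns P (standard) and the
    hook tableau J recording the slide paths, as described above."""
    P, J = dict(T), {c: 0 for c in T}
    # Total order c_1 < c_2 < ... : rightmost columns first, bottom to top.
    for c in sorted(T, key=lambda b: (-b[1], -b[0])):
        path = [c]
        while True:                               # modified forward slide j^c
            a, b = path[-1]
            nbrs = [d for d in ((a + 1, b), (a, b + 1))
                    if d in P and P[d] < P[(a, b)]]
            if not nbrs:
                break
            d = min(nbrs, key=P.get)
            P[(a, b)], P[d] = P[d], P[(a, b)]
            path.append(d)
        (i, j), (i2, j2) = path[0], path[-1]
        for h in range(i, i2):                    # update column j of J,
            J[(h, j)] = J[(h + 1, j)] - 1         # reading pre-update values
        J[(i2, j)] = j2 - j
    return P, J

# The running example T = 6 2 / 4 3 / 5 1 of shape (2, 2, 2):
T = {(1, 1): 6, (1, 2): 2, (2, 1): 4, (2, 2): 3, (3, 1): 5, (3, 2): 1}
P, J = nps_map(T)
# P = 1 4 / 2 5 / 3 6 and J = 0 -2 / 0 0 / 1 0, as in the run above.
```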
1116_(GTM270)Fundamentals of Algebraic Topology
Definition 5.1.2
Definition 5.1.2. Let \( {I}^{n} \) be the standard \( n \) -cube. Its boundary is given by \[ \partial {I}^{0} = 0 \] and \[ \partial {I}^{n} = \mathop{\sum }\limits_{{i = 1}}^{n}{\left( -1\right) }^{i}\left( {{A}_{i} - {B}_{i}}\right) \] for \( n > 0 \) . \( \diamond \) In this definition, \( \partial {I}^{n} \) is considered to be an element in the free abelian group generated by \( \left\{ {{A}_{i},{B}_{i} \mid i = 1,\ldots, n}\right\} \) . We then have the following basic lemma: Lemma 5.1.3. For any \( n,\partial \left( {\partial {I}^{n}}\right) = 0 \) . Proof. For \( n \leq 1 \) this is clear. For \( n \geq 2,\partial \left( {\partial {I}^{n}}\right) \) is an element in the free abelian group generated by the \( \left( {n - 2}\right) \) - faces of \( {I}^{n} \), i.e., by the subsets, for each \( i \neq j \) and each \( {\varepsilon }_{i} = 0 \) or \( 1,{\varepsilon }_{j} = 0 \) or 1, \[ \left\{ {x \in {I}^{n} \mid {x}_{i} = {\varepsilon }_{i},{x}_{j} = {\varepsilon }_{j}}\right\} . \] Geometrically, each \( \left( {n - 2}\right) \) -face of \( {I}^{n} \) is a face of two \( \left( {n - 1}\right) \) -faces, and the signs are chosen in Definition 5.1.2 so that they cancel. This is a routine but tedious calculation. Definition 5.1.4. Let \( X \) be a topological space. A singular \( n \) -cube of \( X \) is a map \( \Phi : {I}^{n} \rightarrow X \) . \( \diamond \) We let \( {\alpha }_{i} : {I}^{n - 1} \rightarrow {I}^{n} \) be the inclusion of the \( i \) -th front face and \( {\beta }_{i} : {I}^{n - 1} \rightarrow {I}^{n} \) be the inclusion of the \( i \) -th back face, i.e. \[ {\alpha }_{i}\left( {{x}_{1},\ldots ,{x}_{n - 1}}\right) = \left( {{x}_{1},\ldots ,{x}_{i - 1},0,{x}_{i},\ldots ,{x}_{n - 1}}\right) \] \[ {\beta }_{i}\left( {{x}_{1},\ldots ,{x}_{n - 1}}\right) = \left( {{x}_{1},\ldots ,{x}_{i - 1},1,{x}_{i},\ldots ,{x}_{n - 1}}\right) . \] Definition 5.1.5.
For \( n \geq 1 \), a singular \( n \) -cube \( \Phi : {I}^{n} \rightarrow X \) is degenerate if the value of \( \Phi \) is independent of at least one coordinate \( {x}_{i} \), i.e., if there is a singular \( \left( {n - 1}\right) \) - cube \( \Psi : {I}^{n - 1} \rightarrow X \) with \[ \Phi \left( {{x}_{1},\ldots ,{x}_{n}}\right) = \Psi \left( {{x}_{1},\ldots ,{x}_{i - 1},{x}_{i + 1},\ldots ,{x}_{n}}\right) . \] A singular 0-cube is always non-degenerate. Observe that in the degenerate case we have, in particular, \[ \Phi {\alpha }_{i} = \Phi {\beta }_{i} = \Psi : {I}^{n - 1} \rightarrow X,\;\text{ for }n \geq 1. \] Definition 5.1.6. Let \( X \) be a topological space. The group \( {Q}_{n}\left( X\right) \) is the free abelian group generated by the singular \( n \) -cubes of \( X \) . The subgroup \( {D}_{n}\left( X\right) \) is the free abelian group generated by the degenerate singular \( n \) -cubes. (In particular, \( {D}_{0}\left( X\right) = \) \( \{ 0\} \) .) For each \( n \geq 0 \), the boundary map \( {\partial }_{n}^{Q} = {\partial }^{Q} \) is defined by \( {\partial }^{Q} = 0 \) if \( n = 0 \), and if \( n \geq 1 \), and \( \Phi : {I}^{n} \rightarrow X \) is a singular \( n \) -cube, \[ {\partial }^{Q}\Phi = \mathop{\sum }\limits_{{i = 1}}^{n}{\left( -1\right) }^{i}\left( {{A}_{i}\Phi - {B}_{i}\Phi }\right) \in {Q}_{n - 1}\left( X\right) \] where \( {A}_{i}\Phi = \Phi {\alpha }_{i} : {I}^{n - 1} \rightarrow X \) and \( {B}_{i}\Phi = \Phi {\beta }_{i} : {I}^{n - 1} \rightarrow X \) . \( \diamond \) Lemma 5.1.7. (1) \( {\partial }_{n - 1}^{Q}{\partial }_{n}^{Q} : {Q}_{n}\left( X\right) \rightarrow {Q}_{n - 2}\left( X\right) \) is the 0 map. (2) \( {\partial }^{Q}\left( {{D}_{n}\left( X\right) }\right) \subseteq {D}_{n - 1}\left( X\right) \) . Corollary 5.1.8. Let \( {C}_{n}\left( X\right) = {Q}_{n}\left( X\right) /{D}_{n}\left( X\right) \) .
Then \( {\partial }_{n}^{Q} : {Q}_{n}\left( X\right) \rightarrow {Q}_{n - 1}\left( X\right) \) induces \( {\partial }_{n} : {C}_{n}\left( X\right) \rightarrow {C}_{n - 1}\left( X\right) \) with \( {\partial }_{n - 1}{\partial }_{n} = 0 \) . Definition 5.1.9. The chain complex \( C\left( X\right) \) : \[ \cdots \rightarrow {C}_{2}\left( X\right) \overset{{\partial }_{2}}{ \rightarrow }{C}_{1}\left( X\right) \overset{{\partial }_{1}}{ \rightarrow }{C}_{0}\left( X\right) \overset{{\partial }_{0}}{ \rightarrow }0 \rightarrow 0 \rightarrow \cdots \] is the singular chain complex of \( X \) . \( \diamond \) Definition 5.1.10. The homology of the singular chain complex of \( X \) is the singular homology of \( X \) . To quote the definition of the homology of a chain complex from Sect. A.2: \( {Z}_{n}\left( X\right) = \operatorname{Ker}\left( {{\partial }_{n} : {C}_{n}\left( X\right) \rightarrow {C}_{n - 1}\left( X\right) }\right) , \) the group of singular \( n \) -cycles, \( {B}_{n}\left( X\right) = \operatorname{Im}\left( {{\partial }_{n + 1} : {C}_{n + 1}\left( X\right) \rightarrow {C}_{n}\left( X\right) }\right) , \) the group of singular \( n \) -boundaries, \( {H}_{n}\left( X\right) = {Z}_{n}\left( X\right) /{B}_{n}\left( X\right) \), the \( n \) -th singular homology group of \( X \) . We now define the homology of a pair. Definition 5.1.11. Let \( \left( {X, A}\right) \) be a pair. The relative singular chain complex \( C\left( {X, A}\right) \) is the chain complex \[ \cdots \rightarrow {C}_{2}\left( X\right) /{C}_{2}\left( A\right) \overset{\partial }{ \rightarrow }{C}_{1}\left( X\right) /{C}_{1}\left( A\right) \overset{\partial }{ \rightarrow }{C}_{0}\left( X\right) /{C}_{0}\left( A\right) \overset{\partial }{ \rightarrow }0 \rightarrow 0 \rightarrow \cdots . \] Its homology is the singular homology of the pair \( \left( {X, A}\right) \) . Finally, we define the induced map on homology of a map of spaces, or of pairs. Lemma 5.1.12.
Let \( f : X \rightarrow Y \) be a map. Then \( f \) induces a chain map \( \left\{ {{f}_{n} : {C}_{n}\left( X\right) \rightarrow {C}_{n}\left( Y\right) }\right\} \), defined as follows. Let \( \Phi : {I}^{n} \rightarrow X \) be a singular \( n \) -cube. Then \( {f}_{n}\Phi = {f\Phi } : {I}^{n} \rightarrow Y \), where \( {f\Phi } \) denotes the composition. Similarly \( f : \left( {X, A}\right) \rightarrow \) \( \left( {Y, B}\right) \) induces a map \( {f}_{n} : {C}_{n}\left( {X, A}\right) \rightarrow {C}_{n}\left( {Y, B}\right) \) by composition. This chain map induces a map on singular homology \( \left\{ {{f}_{n} : {H}_{n}\left( X\right) \rightarrow {H}_{n}\left( Y\right) }\right\} \) and similarly \( \left\{ {{f}_{n} : {H}_{n}\left( {X, A}\right) \rightarrow {H}_{n}\left( {Y, B}\right) }\right\} \) . Proof. This would be immediate if we were dealing with \( {Q}_{n}\left( X\right) \) and \( {Q}_{n}\left( Y\right) \) . But since \( {f\Phi } \) is degenerate whenever \( \Phi \) is, it is just about immediate for \( {C}_{n}\left( X\right) \) and \( {C}_{n}\left( Y\right) \) . Then the fact that we have maps on homology is a direct consequence of Lemma A.2.7. Definition 5.1.13. The above maps \( \left\{ {{f}_{n} : {H}_{n}\left( X\right) \rightarrow {H}_{n}\left( Y\right) }\right\} \) or \( \left\{ {{f}_{n} : {H}_{n}\left( {X, A}\right) \rightarrow }\right. \) \( \left. {{H}_{n}\left( {Y, B}\right) }\right\} \) are the maps on singular homology induced by the map \( f : X \rightarrow Y \) or the map \( f : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) . We now verify that singular homology satisfies the Eilenberg-Steenrod axioms. Theorem 5.1.14. Singular homology satisfies Axioms 1 and 2. Proof. Immediate from the definition of the induced map on singular cubes as composition. Theorem 5.1.15. Singular homology satisfies Axiom 3. Proof.
Immediate from the definition of the boundary map on singular cubes and from the definition of the induced map on singular cubes as composition. Theorem 5.1.16. Singular homology satisfies Axiom 4. Proof. We have defined \( {C}_{n}\left( {X, A}\right) = {C}_{n}\left( X\right) /{C}_{n}\left( A\right) \) . Thus for every \( n \), we have a short exact sequence \[ 0 \rightarrow {C}_{n}\left( A\right) \rightarrow {C}_{n}\left( X\right) \rightarrow {C}_{n}\left( {X, A}\right) \rightarrow 0. \] In other words, we have a short exact sequence of chain complexes \[ 0 \rightarrow {C}_{ * }\left( A\right) \rightarrow {C}_{ * }\left( X\right) \rightarrow {C}_{ * }\left( {X, A}\right) \rightarrow 0. \] But then we have a long exact sequence in homology by Theorem A.2.10. Theorem 5.1.17. Singular homology satisfies Axiom 5. Proof. For simplicity we consider the case of homotopic maps of spaces \( f : X \rightarrow Y \) and \( g : X \rightarrow Y \) (rather than maps of pairs). Then by definition, setting \( {f}_{0} = f \) and \( {f}_{1} = g \), there is a map \( F : X \times I \rightarrow Y \) with \( F\left( {x,0}\right) = {f}_{0}\left( x\right) \) and \( F\left( {x,1}\right) = {f}_{1}\left( x\right) \) . Define a map \( \widetilde{F} : {C}_{n}\left( X\right) \rightarrow {C}_{n + 1}\left( Y\right) \) as follows. Let \( \Phi : {I}^{n} \rightarrow X \) be a singular \( n \) -cube. Then \( \widetilde{F}\Phi : {I}^{n + 1} \rightarrow Y \) is defined by \[ \widetilde{F}\mathbf{\Phi }\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) = F\left( {\mathbf{\Phi }\left( {{x}_{1},\ldots ,{x}_{n}}\right) ,{x}_{n + 1}}\right) . 
\] Then it is routine (but lengthy) to check that \( \widetilde{F} \) provides a chain homotopy between \( {f}_{ * } : {C}_{ * }\left( X\right) \rightarrow {C}_{ * }\left( Y\right) \) and \( {g}_{ * } : {C}_{ * }\left( X\right) \rightarrow {C}_{ * }\left( Y\right) \), so that \( {f}_{ * } = {g}_{ * } : {H}_{ * }\left( X\right) \rightarrow \) \( {H}_{ * }\left( Y\right) \) by Lemma A.2.9. Theorem 5.1.18. Singular homology satisfies Axiom 6. We shall not prove that singular homology satisfies Axiom 6, the excision axiom. This proof is quite involved. We merely give a quick sketch of the proof.
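Lemma 5.1.3 and the sign convention of Definition 5.1.2 can be verified mechanically for small \( n \). In the sketch below (Python; the encoding is mine: a face of \( I^n \) is the set of its fixed coordinates with their values), taking the boundary fixes one more free coordinate, with front faces \( A_i \) getting \( x_i = 0 \) and back faces \( B_i \) getting \( x_i = 1 \):

```python
from collections import Counter

def boundary(face, n):
    """Boundary of a face of I^n as a formal sum (Counter) of faces with
    one more fixed coordinate, following Definition 5.1.2:
    sum_i (-1)^i (A_i - B_i), i running over the free coordinates in order."""
    fixed = {c for c, _ in face}
    free = [c for c in range(1, n + 1) if c not in fixed]
    out = Counter()
    for i, c in enumerate(free, start=1):
        out[face | {(c, 0)}] += (-1) ** i   # front face A_i: x_c = 0
        out[face | {(c, 1)}] -= (-1) ** i   # back face  B_i: x_c = 1
    return out

def boundary_of_chain(chain, n):
    """Extend `boundary` linearly to formal sums of faces."""
    out = Counter()
    for face, coeff in chain.items():
        for g, sign in boundary(face, n).items():
            out[g] += coeff * sign
    return out

for n in (2, 3, 4):
    d = boundary(frozenset(), n)              # d(I^n): 2n faces with signs +-1
    dd = boundary_of_chain(d, n)
    assert all(c == 0 for c in dd.values())   # Lemma 5.1.3: d(d(I^n)) = 0
print("d(d(I^n)) = 0 checked for n = 2, 3, 4")
```

Each \( (n-2) \)-face shows up along two slide-in orders, and the index shift of the remaining free coordinates produces opposite signs, which is exactly the cancellation the proof of Lemma 5.1.3 describes.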
18_Algebra Chapter 0
Definition 1.2
Definition 1.2. A field extension \( k \subseteq F \) is simple if there exists an element \( \alpha \in F \) such that \( F = k\left( \alpha \right) \) . The extensions recalled above are of this kind: if \( K = k\left\lbrack t\right\rbrack /\left( {f\left( t\right) }\right) \) and \( \alpha \) denotes the coset of \( t \), then \( K = k\left( \alpha \right) \) : indeed, if a subfield of \( K \) contains the coset of \( t \) , then it must contain (the coset of) every polynomial expression in \( t \), and hence it must be the whole of \( K \) . I have always found the notation \( k\left( \alpha \right) \) somewhat unfortunate, since it suggests that all such extensions are in some way isomorphic and possibly all isomorphic to the field \( k\left( t\right) \) of rational functions in one indeterminate \( t \) (cf. Definition V.4.13). This is not true, although it is clear that every element of \( k\left( \alpha \right) \) may be written as a rational function in \( \alpha \) with coefficients in \( k \) (Exercise 1.3). In any case, it is easy to classify simple extensions: they are either isomorphic to \( k\left( t\right) \) or they are of the prototypical kind recalled above. Here is the precise statement. Proposition 1.3. Let \( k \subseteq k\left( \alpha \right) \) be a simple extension. Consider the evaluation map \( \epsilon : k\left\lbrack t\right\rbrack \rightarrow k\left( \alpha \right) \), defined by \( f\left( t\right) \mapsto f\left( \alpha \right) \) . Then we have the following: - \( \epsilon \) is injective if and only if \( k \subseteq k\left( \alpha \right) \) is an infinite extension. In this case, \( k\left( \alpha \right) \) is isomorphic to the field of rational functions \( k\left( t\right) \) . - \( \epsilon \) is not injective if and only if \( k \subseteq k\left( \alpha \right) \) is finite.
In this case there exists a unique monic irreducible nonconstant polynomial \( p\left( t\right) \in k\left\lbrack t\right\rbrack \) of degree \( n = \) \( \left\lbrack {k\left( \alpha \right) : k}\right\rbrack \) such that \[ k\left( \alpha \right) \cong \frac{k\left\lbrack t\right\rbrack }{\left( p\left( t\right) \right) } \] Via this isomorphism, \( \alpha \) corresponds to the coset of \( t \) . The polynomial \( p\left( t\right) \) is the monic polynomial of smallest degree in \( k\left\lbrack t\right\rbrack \) such that \( p\left( \alpha \right) = 0 \) in \( k\left( \alpha \right) \) . The polynomial \( p\left( t\right) \) appearing in this statement is called the minimal polynomial of \( \alpha \) over \( k \) . Of course the minimal polynomial of an element \( \alpha \) of a (’large’) field depends on the base (’small’) field \( k \) . For example, \( \sqrt{2} \in \mathbb{C} \) has minimal polynomial \( {t}^{2} - 2 \) over \( \mathbb{Q} \), but \( t - \sqrt{2} \) over \( \mathbb{R} \) . Proof. Let \( F = k\left( \alpha \right) \) . By the ’first isomorphism theorem’, the image of \( \epsilon : k\left\lbrack t\right\rbrack \rightarrow F \) is isomorphic to \( k\left\lbrack t\right\rbrack /\ker \left( \epsilon \right) \) . Since \( F \) is an integral domain, so is \( k\left\lbrack t\right\rbrack /\ker \left( \epsilon \right) \) ; hence \( \ker \left( \epsilon \right) \) is a prime ideal in \( k\left\lbrack t\right\rbrack \) . - Assume \( \ker \left( \epsilon \right) = 0 \) ; that is, \( \epsilon \) is an injective map from the integral domain \( k\left\lbrack t\right\rbrack \) to the field \( F \) . By the universal property of fields of fractions (cf. §V.4.2), \( \epsilon \) extends to a unique homomorphism \[ k\left( t\right) \rightarrow F\text{.} \] The (isomorphic) image of \( k\left( t\right) \) in \( F \) is a field containing \( k \) and \( \alpha \) ; hence it equals \( F \) by definition of simple extension.
Since \( \epsilon \) is injective, the powers \( {\alpha }^{0} = 1,\alpha ,{\alpha }^{2},{\alpha }^{3},\ldots \) (that is, the images \( \epsilon \left( {t}^{i}\right) \) ) are all distinct and linearly independent over \( k \) (because the powers \( 1, t,{t}^{2},\ldots \) are linearly independent over \( k \) ); therefore the extension \( k \subseteq F \) is infinite in this case. - If \( \ker \left( \epsilon \right) \neq 0 \), then \( \ker \left( \epsilon \right) = \left( {p\left( t\right) }\right) \) for a unique monic irreducible nonconstant polynomial \( p\left( t\right) \), which has smallest degree (cf. Exercise III 4.4) among all nonzero polynomials in \( \ker \left( \epsilon \right) \) . As \( \left( {p\left( t\right) }\right) \) is then maximal in \( k\left\lbrack t\right\rbrack \), the image of \( \epsilon \) is a subfield of \( F \) containing \( \alpha = \epsilon \left( t\right) \) . By definition of simple extension, \( F = \) the image of \( \epsilon \) ; that is, the induced homomorphism \[ \frac{k\left\lbrack t\right\rbrack }{\left( p\left( t\right) \right) } \rightarrow F \] is an isomorphism. In this case \( \left\lbrack {F : k}\right\rbrack = \deg p\left( t\right) \), as recalled in [1.1], and in particular the extension is finite, as claimed. The alert reader will have noticed that the proof of Proposition 1.3 is essentially a rehash of the argument proving the 'versality' part of Proposition V15.7. Example 1.4. Consider the extension \( \mathbb{Q} \subseteq \mathbb{R} \) . The polynomial \( {x}^{2} - 2 \in \mathbb{Q}\left\lbrack x\right\rbrack \) has roots in \( \mathbb{R} \); therefore, by Proposition V15.7 there exists a homomorphism (hence a field extension) \[ \bar{\epsilon } : \;\frac{\mathbb{Q}\left\lbrack t\right\rbrack }{\left( {t}^{2} - 2\right) } \hookrightarrow \mathbb{R} \] such that the image of (the coset of) \( t \) is a root \( \alpha \) of \( {x}^{2} - 2 \) . 
Proposition 1.3 simply identifies the image of this homomorphism with \( \mathbb{Q}\left( \alpha \right) \subseteq \mathbb{R} \) . This is hopefully crystal clear; however, note that even this simple example shows that the induced morphism \( \bar{\epsilon } \) is not unique (hence the ’lack of uni’), because there is more than one root of \( {x}^{2} - 2 \) in \( \mathbb{R} \) . Concretely, there are two possible choices for \( \alpha : \alpha = + \sqrt{2} \) and \( \alpha = - \sqrt{2} \) . The choice of \( \alpha \) determines the evaluation map \( \epsilon \) and therefore the specific realization of \( \mathbb{Q}\left\lbrack t\right\rbrack /\left( {{t}^{2} - 2}\right) \) as a subfield of \( \mathbb{R} \) . The reader may be misled by one feature of this example: clearly \( \mathbb{Q}\left( \sqrt{2}\right) = \) \( \mathbb{Q}\left( {-\sqrt{2}}\right) \), and this may seem to compensate for the lack of uniqueness lamented above. The morphism may not be unique, but the image of the morphism surely is? No. The reader will check that there are three distinct subfields of \( \mathbb{C} \) isomorphic to \( \mathbb{Q}\left\lbrack t\right\rbrack /\left( {{t}^{3} - 2}\right) \) (Exercise 1.5). Ay, there's the rub. One of our main goals in this chapter will be to single out a condition on an extension \( k \subseteq F \) that guarantees that no matter how we embed \( F \) in a larger extension, the images of these (possibly many different) embeddings will all coincide. Up to further technicalities, this is what makes an extension Galois. Thus, \( \mathbb{Q} \subseteq \mathbb{Q}\left( \sqrt{2}\right) \) will be a Galois extension, while \( \mathbb{Q} \subseteq \mathbb{Q}\left( \sqrt[3]{2}\right) \) will not be a Galois extension. But Galois extensions will have to wait until 16, and the reader can put this issue aside for now. One way to recover uniqueness is to incorporate the choice of the root in the data. 
This leads to the following refinement of the (uni)versality statement, adapted for future applications. Proposition 1.5. Let \( {k}_{1} \subseteq {F}_{1} = {k}_{1}\left( {\alpha }_{1}\right) ,{k}_{2} \subseteq {F}_{2} = {k}_{2}\left( {\alpha }_{2}\right) \) be two finite simple extensions. Let \( {p}_{1}\left( t\right) \in {k}_{1}\left\lbrack t\right\rbrack \), resp., \( {p}_{2}\left( t\right) \in {k}_{2}\left\lbrack t\right\rbrack \), be the minimal polynomials of \( {\alpha }_{1} \) , resp., \( {\alpha }_{2} \) . Let \( i : {k}_{1} \rightarrow {k}_{2} \) be an isomorphism, such that \[ i\left( {{p}_{1}\left( t\right) }\right) = {p}_{2}\left( t\right) \] Then there exists a unique isomorphism \( j : {F}_{1} \rightarrow {F}_{2} \) agreeing with \( i \) on \( {k}_{1} \) and such that \( j\left( {\alpha }_{1}\right) = {\alpha }_{2} \) . Proof. Since every element of \( {k}_{1}\left( {\alpha }_{1}\right) \) is a linear combination of powers of \( {\alpha }_{1} \) with coefficients in \( {k}_{1}, j \) is determined by its action on \( {k}_{1} \) (which agrees with \( i \) ) and by \( j\left( {\alpha }_{1}\right) \), which is prescribed to be \( {\alpha }_{2} \) . Thus an isomorphism \( j \) as in the statement is uniquely determined. To see that the isomorphism \( j \) exists, note that as \( i \) maps \( {p}_{1}\left( t\right) \) to \( {p}_{2}\left( t\right) \), it induces an isomorphism \[ \frac{{k}_{1}\left\lbrack t\right\rbrack }{\left( {p}_{1}\left( t\right) \right) }\overset{ \sim }{ \rightarrow }\frac{{k}_{2}\left\lbrack t\right\rbrack }{\left( {p}_{2}\left( t\right) \right) } \] \( {}^{4} \) Of course any homomorphism of rings \( f : R \rightarrow S \) induces a unique ring homomorphism \( f : R\left\lbrack t\right\rbrack \rightarrow S\left\lbrack t\right\rbrack \) sending \( t \) to \( t \), named in the same way by a harmless abuse of language. 
Composing with the isomorphisms found in Proposition 1.3 gives \( j \) : \[ j : {k}_{1}\left( {\alpha }_{1}\right) \overset{ \sim }{ \rightarrow }\frac{{k}_{1}\left\lbrack t\right\rbrack }{\left( {p}_{1}\left( t\right) \right) }\overset{ \sim }{ \rightarrow }\frac{{k}_{2}\left\lbrack t\right\rbrack }{\left( {p}_{2}\left( t\right) \right) }\overset{ \sim }{ \rightarrow }{k}_{2}\left( {\alpha }_{2}\right) , \]
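The concrete model \( k\left( \alpha \right) \cong k\left\lbrack t\right\rbrack /\left( {p\left( t\right) }\right) \) of Proposition 1.3 is easy to experiment with. As a minimal sketch (the pair encoding and the helper names `mul`, `inv` are illustrative choices of mine, not notation from the text), one can realize \( \mathbb{Q}\left( \sqrt{2}\right) \cong \mathbb{Q}\left\lbrack t\right\rbrack /\left( {{t}^{2} - 2}\right) \) by storing \( a + b\sqrt{2} \) as a pair of rationals and reducing \( {t}^{2} \) to 2 during multiplication:

```python
from fractions import Fraction as F

# Model Q(√2) ≅ Q[t]/(t² − 2): the pair (a, b) stands for the coset of a + b·t,
# i.e. the element a + b√2.  (Illustrative encoding, not the book's notation.)

def mul(x, y):
    # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2: multiply two linear
    # polynomials and reduce modulo the minimal polynomial t² − 2.
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    # (a + b√2)⁻¹ = (a − b√2)/(a² − 2b²).  The denominator is nonzero for any
    # (a, b) ≠ (0, 0) because √2 is irrational; this is the computational face
    # of the fact that (t² − 2) is maximal in Q[t], so the quotient is a field.
    a, b = x
    n = a * a - 2 * b * b
    return (a / n, -b / n)

alpha = (F(0), F(1))                        # the coset of t, playing the role of √2
assert mul(alpha, alpha) == (F(2), F(0))    # p(α) = α² − 2 = 0 in k(α)
```

Since \( {t}^{2} - 2 \) has degree 2, every element of the quotient is represented by a unique linear polynomial in \( \alpha \), matching \( \left\lbrack {\mathbb{Q}\left( \sqrt{2}\right) : \mathbb{Q}}\right\rbrack = 2 \).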
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 10.35
Definition 10.35. Let \( \Lambda \) denote the set of integral elements. If \( \mu \) is a dominant integral element, define a set \( {S}_{\mu } \subset \Lambda \) as follows: \[ {S}_{\mu } = \{ \eta \in \Lambda \mid \langle \eta + \delta ,\eta + \delta \rangle = \langle \mu + \delta ,\mu + \delta \rangle \} . \] Note that \( {S}_{\mu } \) is the intersection of the integral lattice \( \Lambda \) with the sphere of radius \( \parallel \mu + \delta \parallel \) centered at \( - \delta \) ; see Figure 10.10. ![a7bfd4a7-7795-4350-a407-6ad11be11f96_308_0.jpg](images/a7bfd4a7-7795-4350-a407-6ad11be11f96_308_0.jpg) Fig. 10.10 The set \( {S}_{\mu } \) (black dots) consists of integral elements \( \lambda \) for which \( \parallel \lambda + \delta \parallel = \parallel \mu + \delta \parallel \) Since there are only finitely many elements of \( \Lambda \) in any bounded region, the set \( {S}_{\mu } \) is finite, for any fixed \( \mu \) . Our strategy for the proof of Proposition 10.33 is as discussed at the beginning of this section. We will decompose the formal character of each Verma module \( {W}_{\eta } \) with \( \eta \in {S}_{\mu } \) as a finite sum of formal characters of irreducible representations \( {V}_{\gamma } \), with each \( \gamma \) also belonging to \( {S}_{\mu } \) . This expansion turns out to be of an "upper triangular with ones on the diagonal" form, allowing us to invert the expansion to express the formal character of irreducible representations in terms of formal characters of Verma modules. In particular, the character of the finite-dimensional representation \( {V}_{\mu } \) will be expressed as a linear combination of formal characters of Verma modules \( {W}_{\eta } \) with \( \eta \in {S}_{\mu } \) . When we multiply both sides of this formula by the Weyl denominator and use the character formula for the Verma module (Proposition 10.31), we obtain the claimed form for \( q{\chi }^{\mu } \) . 
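For \( \mathfrak{g} = \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \) the set \( {S}_{\mu } \) can be enumerated directly. The sketch below assumes the standard rank-one normalization, which this passage does not spell out: integral elements are identified with the integers, \( \delta = 1 \), and the inner product is a positive multiple of the ordinary product of integers (the scale is irrelevant to the defining equality).

```python
def S(mu, search_radius=100):
    """Enumerate S_mu for sl(2, C).  With weights identified with integers and
    delta = 1, the condition <eta + delta, eta + delta> = <mu + delta, mu + delta>
    reads (eta + 1)**2 == (mu + 1)**2.  S_mu is finite (it lies on a sphere),
    so scanning a bounded window around 0 suffices for small mu."""
    return sorted(eta for eta in range(-search_radius, search_radius + 1)
                  if (eta + 1) ** 2 == (mu + 1) ** 2)
```

For a dominant \( \mu \geq 0 \) this returns \( \left\{ { - \mu - 2,\mu }\right\} \): the sphere of radius \( \mu + 1 \) centered at \( - 1 \) meets the integer lattice in exactly two points. The promised unitriangular expansion then has just two terms, and inverting it gives the familiar \( \operatorname{ch}{V}_{\mu } = \operatorname{ch}{W}_{\mu } - \operatorname{ch}{W}_{-\mu - 2} \) for \( \mathrm{{sl}}\left( {2;\mathbb{C}}\right) \).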
Of course, a key point in the argument is to verify that whenever the character of an irreducible representation \( {V}_{\gamma } \) appears in the expansion of the character of \( {W}_{\eta },\eta \in {S}_{\mu } \), the highest weight \( \gamma \) is also in \( {S}_{\mu } \) . This claim holds because any subrepresentation \( {V}_{\gamma } \) occurring in the decomposition of \( {W}_{\eta } \) must have the same eigenvalue of the Casimir as \( {W}_{\eta } \) . In light of Proposition 10.34, this means that \( \langle \gamma + \delta ,\gamma + \delta \rangle \) must equal \( \langle \eta + \delta ,\eta + \delta \rangle \), which is assumed to be equal to \( \langle \mu + \delta ,\mu + \delta \rangle \) . Proposition 10.36. For each \( \eta \) in \( {S}_{\mu } \), the formal character of the Verma module \( {W}_{\eta } \) can be expressed as a linear combination of formal characters of irreducible representations \( {V}_{\gamma } \) with \( \gamma \) in \( {S}_{\mu } \) and \( \gamma \preccurlyeq \eta \) : \[ {Q}_{{W}_{\eta }} = \mathop{\sum }\limits_{\substack{{\gamma \in {S}_{\mu }} \\ {\gamma \preccurlyeq \eta } }}{a}_{\gamma }^{\eta }{Q}_{{V}_{\gamma }} \] (10.35) Furthermore, the coefficient \( {a}_{\eta }^{\eta } \) of \( {Q}_{{V}_{\eta }} \) in this decomposition is equal to 1 . See Figure 10.11. ![a7bfd4a7-7795-4350-a407-6ad11be11f96_309_0.jpg](images/a7bfd4a7-7795-4350-a407-6ad11be11f96_309_0.jpg) Fig. 10.11 For \( \eta \in {S}_{\mu } \), the character of \( {W}_{\eta } \) is a linear combination of characters of irreducible representations \( {V}_{\gamma } \) with highest weights \( \gamma \preccurlyeq \eta \) in \( {S}_{\mu } \) Lemma 10.37. Let \( \left( {\pi, V}\right) \) be a representation of \( \mathfrak{g} \), possibly infinite dimensional, that decomposes as direct sum of weight spaces of finite multiplicity, and let \( U \) be a nonzero invariant subspace of \( V \) . 
Then both \( U \) and the quotient representation \( V/U \) decompose as a direct sum of weight spaces. Furthermore, the multiplicity of any weight in \( V \) is the sum of its multiplicity in \( U \) and its multiplicity in \( V/U \) . Proof. Let \( u \) be an element of \( U \) . By assumption, we can decompose \( u \) as \( u = \) \( {v}_{1} + \cdots + {v}_{j} \), where the \( {v}_{k} \) ’s belong to weight spaces in \( V \) corresponding to distinct weights \( {\lambda }_{1},\ldots ,{\lambda }_{j} \) . We wish to show that each \( {v}_{k} \) actually belongs to \( U \) . If \( j = 1 \) , there is nothing to prove. If \( j > 1 \), then \( {\lambda }_{j} \neq {\lambda }_{1} \), which means that there is some \( H \in \mathfrak{h} \) for which \( \left\langle {{\lambda }_{j}, H}\right\rangle \neq \left\langle {{\lambda }_{1}, H}\right\rangle \) . Then apply to \( u \) the operator \( \pi \left( H\right) - \) \( \left\langle {{\lambda }_{1}, H}\right\rangle I \) \[ \left( {\pi \left( H\right) - \left\langle {{\lambda }_{1}, H}\right\rangle I}\right) u = \mathop{\sum }\limits_{{k = 1}}^{j}\left( {\left\langle {{\lambda }_{k}, H}\right\rangle - \left\langle {{\lambda }_{1}, H}\right\rangle }\right) {v}_{k}. \] (10.36) Since the coefficient of \( {v}_{1} \) is zero, the vector in (10.36) is the sum of fewer than \( j \) weight vectors. Thus, by induction on \( j \), we can assume that each term on the righthand side of (10.36) belongs to \( U \) . In particular, a nonzero multiple of \( {v}_{j} \) belongs to \( U \), which means \( {v}_{j} \) itself belongs to \( U \) . Now, if \( {v}_{j} \) is in \( U \), then \( u - {v}_{j} = {v}_{1} + \) \( \cdots + {v}_{j - 1} \) is also in \( U \) . Thus, using induction again, we see that each of \( {v}_{1},\ldots ,{v}_{j - 1} \) belongs to \( U \) . We conclude that the sum of the weight spaces in \( U \) is all of \( U \) . 
Since weight vectors with distinct weights are linearly independent (Proposition A.17), the sum must be direct. We turn, then, to the quotient space \( V/U \) . It is evident that the images of the weight spaces in \( V \) are weight spaces in \( V/U \) with the same weight. Thus, the sum of the weight spaces in \( V/U \) is all of \( V/U \) and, again, the sum must be direct. Finally, consider a fixed weight \( \lambda \) occurring in \( V \), and let \( {V}_{\lambda } \) be the associated weight space. Let \( {q}_{\lambda } \) be the restriction to \( {V}_{\lambda } \) of the quotient map \( q : V \rightarrow V/U \) . The kernel of \( {q}_{\lambda } \) consists precisely of the weight vectors with weight \( \lambda \) in \( U \) . Thus, the dimension of the image of \( {q}_{\lambda } \), which is the weight space in \( V/U \) with weight \( \lambda \), is equal to \( \dim {V}_{\lambda } - \dim \left( {{V}_{\lambda } \cap U}\right) \) . The claim about multiplicities in \( V, U \), and \( V/U \) follows. Proof of Proposition 10.36. We actually prove a stronger result, that the formal character of any highest-weight cyclic representation \( {U}_{\eta } \) with \( \eta \in {S}_{\mu } \) can be decomposed as in (10.35). As in the proof of Proposition 6.11, any such \( {U}_{\eta } \) decomposes as a direct sum of weight spaces with weights lower than \( \eta \) and with the multiplicity of the weight \( \eta \) being 1. For any such \( {U}_{\eta } \), let \[ M = \mathop{\sum }\limits_{{\gamma \in {S}_{\mu }}}\operatorname{mult}\left( \gamma \right) \] Our proof will be by induction on \( M \) . We first argue that if \( M = 1 \), then \( {U}_{\eta } \) must be irreducible. If not, \( {U}_{\eta } \) would have a nontrivial invariant subspace \( X \), and this subspace would, by Lemma 10.37, decompose as a direct sum of weight spaces, all of which are lower than \( \eta \) . 
Thus, \( X \) would have to contain a weight vector \( w \) that is annihilated by each raising operator \( \pi \left( {X}_{\alpha }\right) ,\alpha \in {R}^{ + } \) . Thus, \( X \) would contain a highest weight cyclic subspace \( {X}^{\prime } \) with some highest weight \( \gamma \) . By Proposition 10.34, the Casimir would act as the scalar \( \langle \gamma + \delta ,\gamma + \delta \rangle - \langle \delta ,\delta \rangle \) in \( {X}^{\prime } \) . On the other hand, since \( {X}^{\prime } \) is contained in \( {U}_{\eta } \), the Casimir has to act as \( \langle \eta + \delta ,\eta + \delta \rangle - \langle \delta ,\delta \rangle \) in \( {X}^{\prime } \), which, since \( \eta \in {S}_{\mu } \), is equal to \( \langle \mu + \delta ,\mu + \delta \rangle - \langle \delta ,\delta \rangle \) . Thus, \( \gamma \) must belong to \( {S}_{\mu } \) . Meanwhile, \( \gamma \) cannot equal \( \eta \) or else \( {X}^{\prime } \) would be all of \( {U}_{\eta } \) . Thus, both \( \eta \) and \( \gamma \) would have to have positive multiplicities and \( M \) would have to be at least 2 . Thus, when \( M = 1 \), the representation \( {U}_{\eta } \) is irreducible, in which case, Proposition 10.36 holds trivially. Assume now that the proposition holds for highest weight cyclic representations with \( M \leq {M}_{0} \), and consider a representation \( {U}_{\eta } \) with \( M = {M}_{0} + 1 \) . If \( {U}_{\eta } \) is irreducible, there is nothing to prove. If not, then as we argued in the previous paragraph, \( {U}_{\eta } \) must contain a nontrivial invariant subspace \( {X}^{\prime } \) that is highest weight cyclic with some highest weight \( \gamma \) that belongs to \( {S}_{\mu } \) and is strictly lower than \( \eta \) . We can then form the quotient vector space \( {U}_{\eta }/{X}^{\prime } \), which will still be highest weight cyclic with highest weight \( \eta \) . 
By Lemma 10.37, the multiplicity of \( \xi \) in \( {U}_{\eta } \) is the sum of the multiplicities of \( \xi \) in \( {X}^{\prime } \) and in \( {U}_{\eta }/{X}^{\prime } \) . Thus, \[ {Q}_{{U}_{\eta }} = {Q}_{{X}^{\prime }} + {Q}_{{U}_{\eta }/{X}^{\prime }} \] Now, both \( {X}^{\prime } \) and \( {U}_{\eta }/{X}^{\prime } \) contain at least one weight in \( {S}_{\mu } \) with nonzero multiplicity, namel
1042_(GTM203)The Symmetric Group
Definition 3.6.4
Definition 3.6.4 The ith skeleton of \( \pi \in {\mathcal{S}}_{n},{\pi }^{\left( i\right) } \), is defined inductively by \( {\pi }^{\left( 1\right) } = \pi \) and \[ {\pi }^{\left( i\right) } = \begin{array}{llll} {k}_{1} & {k}_{2} & \cdots & {k}_{m} \\ {l}_{1} & {l}_{2} & \cdots & {l}_{m} \end{array} \] where \( \left( {{k}_{1},{l}_{1}}\right) ,\ldots ,\left( {{k}_{m},{l}_{m}}\right) \) are the northeast corners of the shadow diagram of \( {\pi }^{\left( i - 1\right) } \) listed in lexicographic order. The shadow lines for \( {\pi }^{\left( i\right) } \) are denoted by \( {L}_{j}^{\left( i\right) } \) . ∎ The next theorem should be clear, given Corollary 3.6.3 and the discussion surrounding Lemma 3.6.2. Theorem 3.6.5 ([Vie 76]) Suppose \( \pi \overset{\mathrm{R} - \mathrm{S}}{ \rightarrow }\left( {P, Q}\right) \) . Then \( {\pi }^{\left( i\right) } \) is a partial permutation such that \[ {\pi }^{\left( i\right) }\overset{\mathrm{R} - \mathrm{S}}{ \rightarrow }\left( {{P}^{\left( i\right) },{Q}^{\left( i\right) }}\right) \] where \( {P}^{\left( i\right) } \) (respectively, \( {Q}^{\left( i\right) } \) ) consists of the rows \( i \) and below of \( P \) (respectively, \( Q \) ). Furthermore, \[ {P}_{i, j} = {y}_{{L}_{j}^{\left( i\right) }}\;\text{ and }\;{Q}_{i, j} = {x}_{{L}_{j}^{\left( i\right) }} \] for all \( i, j \) . ∎ It is now trivial to demonstrate Schützenberger's theorem. Theorem 3.6.6 ([Scü 63]) If \( \pi \in {\mathcal{S}}_{n} \), then \[ P\left( {\pi }^{-1}\right) = Q\left( \pi \right) \;\text{ and }\;Q\left( {\pi }^{-1}\right) = P\left( \pi \right) . \] Proof. Taking the inverse of a permutation corresponds to reflecting the shadow diagram in the line \( y = x \) . The theorem now follows from Theorem 3.6.5. ∎ As an application of Theorem 3.6.6 we can find those transpositions that, when applied to \( \pi \in {\mathcal{S}}_{n} \), leave \( Q\left( \pi \right) \) invariant. Dual to our definition of \( P \) -equivalence is the following. 
Definition 3.6.7 Two permutations \( \pi ,\sigma \in {\mathcal{S}}_{n} \) are said to be \( Q \) -equivalent, written \( \pi \overset{Q}{ \cong }\sigma \), if \( Q\left( \pi \right) = Q\left( \sigma \right) \) . ∎ For example, \[ Q\left( \begin{array}{lll} 2 & 1 & 3 \end{array}\right) = Q\left( \begin{array}{lll} 3 & 1 & 2 \end{array}\right) = \begin{array}{ll} 1 & 3 \\ 2 & \end{array}\;\text{ and }\;Q\left( \begin{array}{lll} 1 & 3 & 2 \end{array}\right) = Q\left( \begin{array}{lll} 2 & 3 & 1 \end{array}\right) = \begin{array}{ll} 1 & 2 \\ 3 & \end{array}, \] so \[ {213}\overset{Q}{ \cong }{312}\;\text{ and }\;{132}\overset{Q}{ \cong }{231}. \] (3.12) We also have a dual notion for the Knuth relations. Definition 3.6.8 Permutations \( \pi ,\sigma \in {\mathcal{S}}_{n} \) differ by a dual Knuth relation of the first kind, written \( \pi \overset{{1}^{ * }}{ \cong }\sigma \), if for some \( k \) , 1. \( \pi = \ldots k + 1\ldots k\ldots k + 2\ldots \) and \( \sigma = \ldots k + 2\ldots k\ldots k + 1\ldots \) or vice versa. They differ by a dual Knuth relation of the second kind, written \( \pi \overset{{2}^{ * }}{ \cong }\sigma \), if for some \( k \) , 2. \( \pi = \ldots k\ldots k + 2\ldots k + 1\ldots \) and \( \sigma = \ldots k + 1\ldots k + 2\ldots k\ldots \) or vice versa. The two permutations are dual Knuth equivalent, written \( \pi \overset{{K}^{ * }}{ \cong }\sigma \), if there is a sequence of permutations such that \[ \pi = {\pi }_{1}\overset{{i}^{ * }}{ \cong }{\pi }_{2}\overset{{j}^{ * }}{ \cong }\cdots \overset{{l}^{ * }}{ \cong }{\pi }_{k} = \sigma \] where \( i, j,\ldots, l \in \{ 1,2\} \) . ∎ Note that the only two nontrivial dual Knuth relations in \( {S}_{3} \) are \[ {213}\overset{{1}^{ * }}{ \cong }{312}\text{ and }{132}\overset{{2}^{ * }}{ \cong }{231}\text{. } \] These correspond exactly to (3.12). The following lemma is obvious from the definitions. 
In fact, the definition of the dual Knuth relations was concocted precisely so that this result should hold. Lemma 3.6.9 If \( \pi ,\sigma \in {\mathcal{S}}_{n} \), then \[ \pi \overset{K}{ \cong }\sigma \Leftrightarrow {\pi }^{-1}\overset{{K}^{ * }}{ \cong }{\sigma }^{-1}\text{. ∎} \] Now it is an easy matter to derive the dual version of Knuth's theorem about \( P \) -equivalence (Theorem 3.4.3). Theorem 3.6.10 If \( \pi ,\sigma \in {\mathcal{S}}_{n} \), then \[ \pi \overset{{K}^{ * }}{ \cong }\sigma \Leftrightarrow \pi \overset{Q}{ \cong }\sigma . \] Proof. We have the following string of equivalences: \[ \pi \overset{{K}^{ * }}{ \cong }\sigma \; \Leftrightarrow \;{\pi }^{-1}\overset{K}{ \cong }{\sigma }^{-1}\;\text{ (Lemma 3.6.9) } \] \[ \Leftrightarrow \;P\left( {\pi }^{-1}\right) = P\left( {\sigma }^{-1}\right) \;\text{(Theorem 3.4.3)} \] \[ \Leftrightarrow \;Q\left( \pi \right) = Q\left( \sigma \right) .\;\text{ (Theorem 3.6.6) } \blacksquare \] ## 3.7 Schützenberger’s Jeu de Taquin The jeu de taquin (or "teasing game") of Schützenberger [Scü 76] is a powerful tool. It can be used to give alternative descriptions of both the \( P \) - and \( Q \) - tableaux of the Robinson-Schensted algorithm (Theorems 3.7.7 and 3.9.4) as well as the ordinary and dual Knuth relations (Theorems 3.7.8 and 3.8.8). To get the full-strength version of these concepts, we must generalize to skew tableaux. Definition 3.7.1 If \( \mu \subseteq \lambda \) as Ferrers diagrams, then the corresponding skew diagram, or skew shape, is the set of cells \[ \lambda /\mu = \{ c : c \in \lambda \text{ and }c \notin \mu \} . \] A skew diagram is normal if \( \mu = \varnothing \) . ∎ If \( \lambda = \left( {3,3,2,1}\right) \) and \( \mu = \left( {2,1,1}\right) \), then we have the skew diagram \[ \lambda /\mu = \] Of course, normal shapes are the left-justified ones we have been considering all along. The definitions of skew tableaux, standard skew tableaux, and so on, are all as expected. 
In particular, the definition of the row word of a tableau still makes sense in this setting. Thus we can say that two skew partial tableaux \( P, Q \) are Knuth equivalent, written \( P\overset{K}{ \cong }Q \), if \[ {\pi }_{P}\overset{K}{ \cong }{\pi }_{Q} \] Similar definitions hold for the other equivalence relations that we have introduced. Note that if \( \pi = {x}_{1}{x}_{2}\ldots {x}_{n} \), then we can make \( \pi \) into a skew tableau by putting \( {x}_{i} \) in the cell \( \left( {n - i + 1, i}\right) \) for all \( i \) . This object is called the antidiagonal strip tableau associated with \( \pi \) and is also denoted by \( \pi \) . For example, if \( \pi = {3142} \) (a good approximation, albeit without the decimal point), then ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_126_0.jpg) So \( \pi \overset{K}{ \cong }\sigma \) as permutations if and only if \( \pi \overset{K}{ \cong }\sigma \) as tableaux. We now come to the definition of a jeu de taquin slide, which is essential to all that follows. Definition 3.7.2 Given a partial tableau \( P \) of shape \( \lambda /\mu \), we perform a forward slide on \( P \) from cell \( c \) as follows. F1 Pick \( c \) to be an inner corner of \( \mu \) . F2 While \( c \) is not an inner corner of \( \lambda \) do Fa If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \min \left\{ {{P}_{i + 1, j},{P}_{i, j + 1}}\right\} \) . Fb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) . If only one of \( {P}_{i + 1, j},{P}_{i, j + 1} \) exists in step Fa, then the minimum is taken to be that single value. We denote the resulting tableau by \( {j}^{c}\left( P\right) \) . Similarly, a backward slide on \( P \) from cell \( c \) produces a tableau \( {j}_{c}\left( P\right) \) as follows. B1 Pick \( c \) to be an outer corner of \( \lambda \) . 
B2 While \( c \) is not an outer corner of \( \mu \) do Ba If \( c = \left( {i, j}\right) \), then let \( {c}^{\prime } \) be the cell of \( \max \left\{ {{P}_{i - 1, j},{P}_{i, j - 1}}\right\} \) . Bb Slide \( {P}_{{c}^{\prime }} \) into cell \( c \) and let \( c \mathrel{\text{:=}} {c}^{\prime } \) . ∎ By way of illustration, let \[ P = \begin{array}{lllll} & & & 6 & 8 \\ & 2 & 4 & 5 & 9 \\ 1 & 3 & 7 & & \end{array} \] We let a dot indicate the position of the empty cell as we perform a forward slide from \( c = \left( {1,3}\right) \) . <table><tr><td></td><td></td><td>·</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td><td></td><td></td><td>4</td><td>6</td><td>8</td></tr><tr><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>·</td><td>5</td><td>9</td><td></td><td>2</td><td>5</td><td>·</td><td>9</td><td></td><td>2</td><td>5</td><td>9</td><td>·</td></tr><tr><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td><td>1</td><td>3</td><td>7</td><td></td><td></td></tr></table> Thus \[ {j}^{c}\left( P\right) = \begin{array}{llll} & 4 & 6 & 8 \\ 2 & 5 & 9 & \\ 1 & 3 & 7 & \end{array} \] A backward slide from \( c = \left( {3,4}\right) \) looks like the following. 
<table><tr><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td><td></td><td></td><td></td><td>6</td><td>8</td></tr><tr><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>4</td><td>5</td><td>9</td><td></td><td>2</td><td>·</td><td>5</td><td>9</td><td></td><td>·</td><td>2</td><td>5</td><td>9</td></tr><tr><td>1</td><td>3</td><td>7</td><td>·</td><td></td><td>1</td><td>3</td><td>·</td><td>7</td><td></td><td>1</td><td>3</td><td>4</td><td>7</td><td></td><td>1</td><td>3</td><td>4</td><td>7</td><td></td></tr></table> So ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_127_0.jpg) Note that a slide is a
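The slides of Definition 3.7.2 can be sketched directly in code. In the illustration below (my own encoding, not the book's), a skew partial tableau is a dictionary sending 1-indexed cells \( \left( {i, j}\right) \) to entries; termination is detected by the empty cell having no candidate neighbor, which for the shapes in these examples amounts to its having reached an inner corner of \( \lambda \), respectively an outer corner of \( \mu \).

```python
def forward_slide(P, c):
    """Forward slide j^c(P) from an inner corner c of mu (steps F1-F2):
    repeatedly move min(P[i+1,j], P[i,j+1]) into the empty cell."""
    P = dict(P)                      # leave the input tableau untouched
    i, j = c
    while True:
        below, right = P.get((i + 1, j)), P.get((i, j + 1))
        if below is None and right is None:
            break                    # the empty cell reached an inner corner of lambda
        if right is None or (below is not None and below < right):
            nxt = (i + 1, j)
        else:
            nxt = (i, j + 1)
        P[(i, j)] = P.pop(nxt)       # step Fb: slide the entry, move the hole
        i, j = nxt
    return P

def backward_slide(P, c):
    """Backward slide j_c(P) from an outer corner c of lambda (steps B1-B2):
    repeatedly move max(P[i-1,j], P[i,j-1]) into the empty cell."""
    P = dict(P)
    i, j = c
    while True:
        above, left = P.get((i - 1, j)), P.get((i, j - 1))
        if above is None and left is None:
            break                    # the empty cell reached an outer corner of mu
        if left is None or (above is not None and above > left):
            nxt = (i - 1, j)
        else:
            nxt = (i, j - 1)
        P[(i, j)] = P.pop(nxt)
        i, j = nxt
    return P

# The skew tableau P from the text, of shape (5, 5, 3)/(3, 1):
P = {(1, 4): 6, (1, 5): 8,
     (2, 2): 2, (2, 3): 4, (2, 4): 5, (2, 5): 9,
     (3, 1): 1, (3, 2): 3, (3, 3): 7}
```

A forward slide from \( c = \left( {1,3}\right) \) reproduces the \( {j}^{c}\left( P\right) \) displayed above, and a backward slide from \( c = \left( {3,4}\right) \) reproduces the second illustration.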
1088_(GTM245)Complex Analysis
Definition 8.2
Definition 8.2. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \) . Aut \( D \) is defined as the group (under composition) of conformal automorphisms (or conformal bijections) of \( D \) ; that is, it consists of the conformal maps from \( D \) onto itself. There are two naturally related problems: Problem I. Describe Aut \( D \) for a given \( D \) . Problem II. Given two domains \( D \) and \( {D}^{\prime } \), determine when they are conformally equivalent. We solve Problem I for \( D = \widehat{\mathbb{C}}, D = \mathbb{C} \), and \( D = \mathbb{D} \) (the unit disc \( \{ z \in \mathbb{C};\left| z\right| < 1\} \) in \( \widehat{\mathbb{C}} \) ), and Problem II for \( D \) and \( {D}^{\prime } \) any pair of simply connected domains in \( \widehat{\mathbb{C}} \) . ## 8.1 Fractional Linear (Möbius) Transformations We describe the (orientation preserving) Möbius group, and show that for the domains \( D = \widehat{\mathbb{C}},\mathbb{C} \), a disc or a half plane, the group Aut \( D \) is a subgroup of this group. Definition 8.3. A fractional linear transformation (or Möbius transformation) is a meromorphic function \( A : \widehat{\mathbb{C}} \rightarrow \widehat{\mathbb{C}} \) of the form \[ z \mapsto A\left( z\right) = \frac{{az} + b}{{cz} + d} \] (8.1) where \( a, b, c \), and \( d \) are complex numbers such that \( {ad} - {bc} \neq 0 \) . Specifically, \[ A\left( z\right) = \left\{ \begin{array}{ll} \frac{{az} + b}{{cz} + d} & \text{ if }c \neq 0, z \neq \infty \text{ and }z \neq - \frac{d}{c}, \\ \frac{a}{c} & \text{ if }c \neq 0\text{ and }z = \infty , \\ \infty & \text{ if }c \neq 0\text{ and }z = - \frac{d}{c}, \\ \frac{a}{d}z + \frac{b}{d} & \text{ if }c = 0\text{ and }z \neq \infty , \\ \infty & \text{ if }c = 0\text{ and }z = \infty . \end{array}\right. \] (8.2) From now on, the abbreviated notation (8.1) will be interpreted as the expanded version (8.2). 
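The case analysis (8.2) translates directly into code. In the minimal sketch below, the sentinel INF standing for the point at infinity is my own convention, not anything from the text:

```python
INF = float('inf')   # sentinel for the point at infinity on the Riemann sphere

def mobius(a, b, c, d, z):
    """Evaluate A(z) = (az + b)/(cz + d) on C ∪ {INF}, following the case
    analysis (8.2).  The coefficients may be complex; ad − bc must be nonzero."""
    if a * d - b * c == 0:
        raise ValueError("ad - bc must be nonzero")
    if c != 0:
        if z == INF:
            return a / c             # A(∞) = a/c
        if c * z + d == 0:
            return INF               # the pole z = −d/c is sent to ∞
        return (a * z + b) / (c * z + d)
    # c == 0: an affine map, which fixes the point at infinity
    if z == INF:
        return INF
    return (a / d) * z + (b / d)
```

Composition of such maps corresponds to multiplication of the coefficient matrices, which is one way of seeing the group structure of Remark 8.4; e.g. \( \left\lbrack \begin{array}{ll} 1 & 2 \\ 3 & 4 \end{array}\right\rbrack \left\lbrack \begin{array}{ll} 2 & 1 \\ 1 & 1 \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} 4 & 3 \\ {10} & 7 \end{array}\right\rbrack \), and the corresponding evaluations agree.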
Without loss of generality we assume subsequently that \( {ad} - {bc} = 1 \) (the reader should prove that there is really no loss of generality in this assumption; that is, establish Exercise 8.1). Also, whenever convenient we will multiply each of the four constants \( a, b, c \), and \( d \) by -1, since this does not alter the Möbius transformation’s action on \( \widehat{\mathbb{C}} \) nor the condition \( {ad} - {bc} = 1 \) . Remark 8.4. A Möbius transformation is an element of the group Aut \( \left( \widehat{\mathbb{C}}\right) \), and the set of all Möbius transformations is a group under composition, the Möbius group. We will soon see that these two groups coincide. Remark 8.5. Other related groups are the matrix group \[ \mathrm{{SL}}\left( {2,\mathbb{C}}\right) = \left\{ {\left\lbrack \begin{array}{ll} a & b \\ c & d \end{array}\right\rbrack ;a, b, c, d \in \mathbb{C},{ad} - {bc} = 1}\right\} , \] the corresponding quotient group \[ \operatorname{PSL}\left( {2,\mathbb{C}}\right) = \operatorname{SL}\left( {2,\mathbb{C}}\right) /\{ \pm I\} \] where \( I = \left\lbrack \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right\rbrack \) is the identity matrix, and the extended Möbius group of orientation preserving and reversing transformations, consisting of the maps \[ z \mapsto \frac{{az} + b}{{cz} + d}\text{ and }z \mapsto \frac{a\bar{z} + b}{c\bar{z} + d},\text{ with }{ad} - {bc} = 1. \] Here orientation reversing means that angles are preserved in magnitude but reversed in sense (as the map \( z \rightarrow \bar{z} \) does). 
It is clear that \[ 1 \rightarrow \{ \pm I\} \rightarrow \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \rightarrow \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \] (8.3) is an exact sequence, where the first two arrows denote inclusion, and by the last arrow, a matrix \( \left\lbrack \begin{array}{ll} a & b \\ c & d \end{array}\right\rbrack \) in \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) is sent to the element of \( \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \) given by (8.1). An exact sequence is, of course, one where for any pair of consecutive maps in the sequence, the kernel of the second map coincides with the image of the first one. It is also clear that the image of the last arrow in the sequence (8.3) is precisely the Möbius group, and, therefore, that it is isomorphic to \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \), the quotient of \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) by \( \pm I \) as defined above. It is natural to ask whether the last arrow is surjective; that is, whether the Möbius group coincides with \( \operatorname{Aut}\left( \widehat{\mathbb{C}}\right) \) . We will see that this is the case in Theorem 8.17. Let \( A \) be an element of \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) . The square of the trace of a preimage of \( A \) in \( \mathrm{{SL}}\left( {2,\mathbb{C}}\right) \) is the same for both of the two preimages of \( A \) . Thus even though the trace of an element in the Möbius group is not well defined, the trace squared of an element in \( \operatorname{PSL}\left( {2,\mathbb{C}}\right) \) is. Thus it makes sense to have the following: Definition 8.6. For \( A \) in the Möbius group, given by (8.1) with \( {ad} - {bc} = 1 \), we define \( {\operatorname{tr}}^{2}A = {\left( a + d\right) }^{2} \) . ## 8.1.1 Fixed Points of Möbius Transformations Let \( A \) be any element of the Möbius group different from the identity map. 
We are interested in the fixed points of \( A \) in \( \widehat{\mathbb{C}} \) ; that is, those \( z \in \widehat{\mathbb{C}} \) with \( A\left( z\right) = z \) . If \( A\left( z\right) = \frac{{az} + b}{{cz} + d} \) with \( {ad} - {bc} = 1 \), then for a fixed point \( z \) of \( A \) we have either \( z = \infty \), or \( z \in \mathbb{C} \) and \( c{z}^{2} + \left( {d - a}\right) z - b = 0 \) . We consider two cases: Case 1: \( c = 0 \) . In this case \( \infty \) is a fixed point of \( A \) and \( {ad} = 1 \) . If \( d = a \) then \( A\left( z\right) = z + \frac{b}{a} \) with \( {ab} \neq 0 \) ( \( b \neq 0 \) because \( A \) is not the identity map), and \( A \) has no other fixed point. If \( d \neq a \), then \( A\left( z\right) = \frac{a}{d}z + \frac{b}{d} \), and \( A \) has one more fixed point, at \( \frac{b}{d - a} \) in \( \mathbb{C} \) . We note that in this case \( A \) has precisely one fixed point if and only if \( {\operatorname{tr}}^{2}A = 4 \) . Case 2: \( c \neq 0 \) . In this case \( \infty \) is not fixed by \( A \), and the fixed points of \( A \) are given by \[ \frac{a - d \pm \sqrt{{\left( a - d\right) }^{2} + {4bc}}}{2c} = \frac{\left( {a - d}\right) \pm \sqrt{{\operatorname{tr}}^{2}A - 4}}{2c}. \] We have thus proved: Proposition 8.7. If \( A \) is a Möbius transformation different from the identity map, then \( A \) has either one or two fixed points in \( \widehat{\mathbb{C}} \) . It has exactly one if and only if \( {\operatorname{tr}}^{2}A = 4 \) . ## 8.1.2 Cross Ratios Proposition 8.8. Given three distinct points \( {z}_{2},{z}_{3},{z}_{4} \) in \( \widehat{\mathbb{C}} \), there exists a unique Möbius transformation \( S \) with \( S\left( {z}_{2}\right) = 1, S\left( {z}_{3}\right) = 0 \), and \( S\left( {z}_{4}\right) = \infty \) . Proof. The proof has two parts. 
Uniqueness: If \( {S}_{1} \) and \( {S}_{2} \) are Möbius transformations that solve our problem, then \( {S}_{1} \circ {S}_{2}^{-1} \) is a Möbius transformation that fixes 1,0 and \( \infty \) and hence, by Proposition 8.7, it is the identity map. Existence: If the \( {z}_{i} \) are complex numbers, then \[ S\left( z\right) = \frac{z - {z}_{3}}{z - {z}_{4}}\frac{{z}_{2} - {z}_{4}}{{z}_{2} - {z}_{3}} \] is the required map. If one of the \( {z}_{i} \) equals \( \infty \), use a limiting procedure to obtain \[ S\left( z\right) = \left\{ \begin{array}{l} \frac{z - {z}_{3}}{z - {z}_{4}},\text{ if }{z}_{2} = \infty , \\ \frac{{z}_{2} - {z}_{4}}{z - {z}_{4}},\text{ if }{z}_{3} = \infty , \\ \frac{z - {z}_{3}}{{z}_{2} - {z}_{3}},\text{ if }{z}_{4} = \infty , \end{array}\right. \] respectively. Corollary 8.9. If \( \left\{ {z}_{i}\right\} \) and \( \left\{ {w}_{i}\right\} \left( {i = 2,3,4}\right) \) are two triples of distinct points in \( \widehat{\mathbb{C}} \), then there exists a unique Möbius transformation \( S \) with \( S\left( {z}_{i}\right) = {w}_{i} \) ; thus the Möbius group is uniquely triply transitive on \( \widehat{\mathbb{C}} \) . Definition 8.10. The cross ratio \( \left( {{z}_{1},{z}_{2},{z}_{3},{z}_{4}}\right) \) of four distinct points in \( \widehat{\mathbb{C}} \) is the image of \( {z}_{1} \) under the Möbius transformation taking \( {z}_{2} \) to \( 1,{z}_{3} \) to 0, and \( {z}_{4} \) to \( \infty \) ; that is, \[ \left( {{z}_{1},{z}_{2},{z}_{3},{z}_{4}}\right) = \frac{{z}_{1} - {z}_{3}}{{z}_{1} - {z}_{4}}\frac{{z}_{2} - {z}_{4}}{{z}_{2} - {z}_{3}} \] if the four points are finite, with the corresponding limiting values if one of the \( {z}_{i} \) equals \( \infty \) . 
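The explicit formula for \( S \) in the existence part of Proposition 8.8 can be sanity-checked numerically; in this sketch the three distinct points are arbitrary choices.

```python
# S sends z2 to 1, z3 to 0, and z4 to infinity (checked here by letting z
# approach z4 and watching |S(z)| blow up).

def S(z, z2, z3, z4):
    return (z - z3) / (z - z4) * (z2 - z4) / (z2 - z3)

z2, z3, z4 = 1 + 1j, -2 + 0j, 0.5j
assert abs(S(z2, z2, z3, z4) - 1) < 1e-12    # S(z2) = 1
assert abs(S(z3, z2, z3, z4)) < 1e-12        # S(z3) = 0
assert abs(S(z4 + 1e-9, z2, z3, z4)) > 1e6   # pole at z4
```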
As we will see in the next proposition, it is useful to view the cross ratio as a Möbius transformation (a function of \( {z}_{1} \) ) \( S = {S}_{{z}_{2},{z}_{3},{z}_{4}} \) that takes the four distinct ordered points \( {z}_{1},{z}_{2},{z}_{3},{z}_{4} \) to the four distinct ordered points \( {w}_{1} = S\left( {z}_{1}\right) = \) \( \left( {{z}_{1},{z}_{2},{z}_{3},{z}_{4}}\right) ,{w}_{2} = 1,{w}_{3} = 0 \), and \( {w}_{4} = \infty \) . It therefore makes sense to allow one repetition among the four points \( {z}_{j} \), so that \( S \) is defined on all of \( \widehat{\mathbb{C}} \) and we may conclude, for example, that \( \left( {{z}_{2},{z}_{2},{z}_{3},{z}_{4}}\right) = 1 \) . This point of view will be used from now on when needed. Proposition 8.11. If \( {z}_{1},{z}_{2},{z}_{3},{z}_{4} \) are four distinct points in \( \widehat{\mathbb{C}} \), and \( T \) is any Möbius transformation, then \[ \left( {T\left( {z}_{1}\right), T\left( {z}_{2}\right), T\left( {z}_{3}\right), T\left( {z}_{4}\right) }\right) = \left( {{z}_{1},{z}_{2},{z}_{3},{z}_{4}}\right) . \]
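Proposition 8.11 asserts that the cross ratio is invariant under every Möbius transformation; a brief numerical sketch, in which the map \( T \) and the four points are arbitrary choices.

```python
# Invariance of the cross ratio under an arbitrary Moebius map T.

def cross_ratio(z1, z2, z3, z4):
    return (z1 - z3) / (z1 - z4) * (z2 - z4) / (z2 - z3)

T = lambda z: (2 * z + 1) / (z + 1)          # ad - bc = 2*1 - 1*1 = 1

pts = (0.2 + 0.1j, 1 + 1j, -2 + 0j, 3j)
before = cross_ratio(*pts)
after = cross_ratio(*(T(z) for z in pts))
assert abs(before - after) < 1e-9
```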
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 2.1.3
Definition 2.1.3 Let \( {A}_{0} \) be the subspace of \( A \) spanned by the vectors \( i, j \) and \( k \) . Then the elements of \( {A}_{0} \) are the pure quaternions in \( A \) . This definition does not depend on the choice of basis. For, let \( x = {a}_{0} + \) \( {a}_{1}i + {a}_{2}j + {a}_{3}k \) . Then \[ {x}^{2} = \left( {{a}_{0}^{2} + a{a}_{1}^{2} + b{a}_{2}^{2} - {ab}{a}_{3}^{2}}\right) + 2{a}_{0}\left( {{a}_{1}i + {a}_{2}j + {a}_{3}k}\right) . \] Lemma 2.1.4 \( x \in A\left( {x \neq 0}\right) \) is a pure quaternion if and only if \( x \notin Z\left( A\right) \) and \( {x}^{2} \in Z\left( A\right) \) . Thus each \( x \in A \) has a unique decomposition as \( x = a + \alpha \), where \( a \in \) \( Z\left( A\right) = F \) and \( \alpha \in {A}_{0} \) . Define the conjugate \( \bar{x} \) of \( x \) by \( \bar{x} = a - \alpha \) . This defines an anti-involution of the algebra such that \( \overline{\left( x + y\right) } = \bar{x} + \bar{y},\overline{xy} = \bar{y}\bar{x} \) , \( \overline{\bar{x}} = x \) and \( \overline{rx} = r\bar{x} \) for \( r \in F \) . On a matrix algebra \( {M}_{2}\left( F\right) \) , \[ \overline{\left( \begin{array}{ll} a & b \\ c & d \end{array}\right) } = \left( \begin{matrix} d & - b \\ - c & a \end{matrix}\right) \] Definition 2.1.5 For \( x \in A \), the (reduced) norm and (reduced) trace of \( x \) lie in \( F \) and are defined by \( n\left( x\right) = x\bar{x} \) and \( \operatorname{tr}\left( x\right) = x + \bar{x} \), respectively. Thus on a matrix algebra, these coincide with the notions of determinant and trace. The norm map \( n : A \rightarrow F \) is multiplicative, as \( n\left( {xy}\right) = \left( {xy}\right) \overline{\left( xy\right) } = \) \( {xy}\bar{y}\bar{x} = n\left( x\right) n\left( y\right) \) . Thus the invertible elements of \( A \) are precisely those such that \( n\left( x\right) \neq 0 \), with the inverse of such an \( x \) being \( \bar{x}/n\left( x\right) \) . 
Thus if we let \( {A}^{ * } \) denote the invertible elements of \( A \), and \[ {A}^{1} = \{ x \in A \mid n\left( x\right) = 1\} \] then \( {A}^{1} \subset {A}^{ * } \) . This reduced norm \( n \) is related to field norms (see also Exercise 2.1, No. 7). An element \( w \) of the quaternion algebra \( A \) satisfies the quadratic \[ {x}^{2} - \operatorname{tr}\left( w\right) x + n\left( w\right) = 0 \] (2.3) with \( \operatorname{tr}\left( w\right), n\left( w\right) \in F \) . Let \( F\left( w\right) \) be the smallest subalgebra of \( A \) which contains \( {F1} \) and \( w \), so that \( F\left( w\right) \) is commutative. If \( A \) is a division algebra, then the polynomial (2.3) is reducible over \( F \) if and only if \( w \in Z\left( A\right) \) . Thus for \( w \notin Z\left( A\right), F\left( w\right) = E \) is a quadratic field extension \( E \mid F \) . Then \( {\left. {N}_{E \mid F} = n\right| }_{E} \) . Lemma 2.1.6 If the quaternion algebra \( A \) over \( F \) is a division algebra and \( w \notin Z\left( A\right) \), then \( E = F\left( w\right) \) is a quadratic field extension of \( F \) and \( {\left. n\right| }_{E} = {N}_{E \mid F} \) If \( A = \left( \frac{a, b}{F}\right) \) and \( x = {a}_{0} + {a}_{1}i + {a}_{2}j + {a}_{3}k \), then \[ n\left( x\right) = {a}_{0}^{2} - a{a}_{1}^{2} - b{a}_{2}^{2} + {ab}{a}_{3}^{2}. \] In the case of Hamilton’s quaternions \( \left( \frac{-1, - 1}{\mathbb{R}}\right) ,\;n\left( x\right) = {a}_{0}^{2} + {a}_{1}^{2} + {a}_{2}^{2} + {a}_{3}^{2} \) so that every non-zero element is invertible and \( \mathcal{H} \) is a division algebra. The matrix algebras \( {M}_{2}\left( F\right) \) are, of course, not division algebras. That these matrix algebras are the only non-division algebras among quaternion algebras is a consequence of Wedderburn's Theorem. 
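The multiplication rules of \( \left( \frac{a, b}{F}\right) \), the anti-involution, the norm formula, and the quadratic (2.3) can all be verified mechanically. The following plain-Python sketch encodes a quaternion \( {x}_{0} + {x}_{1}i + {x}_{2}j + {x}_{3}k \) as a 4-tuple; the structure constants \( a, b \) and the sample elements are arbitrary choices.

```python
# Model of (a, b / F) with i^2 = a, j^2 = b, ij = -ji = k, k^2 = -ab.
a, b = 2, 3

def mul(x, y):
    """Product in (a, b / F), expanded from the defining relations."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return (
        x0*y0 + a*x1*y1 + b*x2*y2 - a*b*x3*y3,
        x0*y1 + x1*y0 - b*x2*y3 + b*x3*y2,
        x0*y2 + x2*y0 + a*x1*y3 - a*x3*y1,
        x0*y3 + x3*y0 + x1*y2 - x2*y1,
    )

def conj(x):
    """Conjugate x-bar = x0 - (x1 i + x2 j + x3 k)."""
    return (x[0], -x[1], -x[2], -x[3])

def n(x):
    """Reduced norm n(x) = x0^2 - a x1^2 - b x2^2 + ab x3^2."""
    return x[0]**2 - a*x[1]**2 - b*x[2]**2 + a*b*x[3]**2

def tr(x):
    """Reduced trace tr(x) = x + x-bar = 2 x0."""
    return 2 * x[0]

x, y = (1, 2, -1, 3), (0, 1, 4, -2)
assert conj(mul(x, y)) == mul(conj(y), conj(x))     # anti-involution
assert mul(x, conj(x)) == (n(x), 0, 0, 0)           # x x-bar = n(x)
assert n(mul(x, y)) == n(x) * n(y)                  # norm is multiplicative
# the quadratic (2.3): x^2 - tr(x) x + n(x) = 0
x_sq = mul(x, x)
assert x_sq[0] == tr(x) * x[0] - n(x)
assert x_sq[1:] == tuple(tr(x) * xi for xi in x[1:])
```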
From Wedderburn's Structure Theorem for finite-dimensional simple algebras (see Theorem 2.9.6), a quaternion algebra \( A \) is isomorphic to a full matrix algebra \( {M}_{n}\left( D\right) \), where \( D \) is a division algebra, with \( n \) and \( D \) uniquely determined by \( A \) . The \( F \) -dimension of \( {M}_{n}\left( D\right) \) is \( m{n}^{2} \), where \( m = {\dim }_{F}\left( D\right) \) and, so, for the four-dimensional quaternion algebras there are only two possibilities: \( m = 4, n = 1;m = 1, n = 2 \) . Theorem 2.1.7 If \( A \) is a quaternion algebra over \( F \), then \( A \) is either a division algebra or \( A \) is isomorphic to \( {M}_{2}\left( F\right) \) . We now use the Skolem Noether Theorem (see Theorem 2.9.8) to show that quaternion algebras can be characterised algebraically as follows: Theorem 2.1.8 Every four-dimensional central simple algebra over a field \( F \) of characteristic \( \neq 2 \) is a quaternion algebra. Proof: Let \( A \) be a four-dimensional central simple algebra over \( F \) . If \( A \) is isomorphic to \( {M}_{2}\left( F\right) \), it is a quaternion algebra, so by Theorem 2.1.7 we can assume that \( A \) is a division algebra. For \( w \notin Z\left( A\right) \), the subalgebra \( F\left( w\right) \) will be commutative. As a subring of \( A, F\left( w\right) \) is an integral domain and since \( A \) is finite-dimensional, \( w \) will satisfy an \( F \) -polynomial. Thus \( F\left( w\right) \) is a field. Since \( A \) is central, \( F\left( w\right) \neq A \) . Pick \( {w}^{\prime } \in A \smallsetminus F\left( w\right) \) . Now the elements \( 1, w,{w}^{\prime } \) and \( w{w}^{\prime } \) are necessarily independent over \( F \) and so form a basis of A. Thus \[ {w}^{2} = {a}_{0} + {a}_{1}w + {a}_{2}{w}^{\prime } + {a}_{3}w{w}^{\prime },\;{a}_{i} \in F. \] Since \( {w}^{\prime } \notin F\left( w\right) \), it follows that \( {w}^{2} = {a}_{0} + {a}_{1}w \) . 
Thus \( F\left( w\right) = E \) is a quadratic extension of \( F \) . Choose \( y \in E \) such that \( {y}^{2} = a \in F \) and \( E = F\left( y\right) \) . The automorphism on \( E \) induced by \( y \rightarrow - y \) will be induced by conjugation in \( A \) by an invertible element \( z \) of \( A \) by the Skolem Noether Theorem (see Theorem 2.9.8). Thus \( {zy}{z}^{-1} = - y \) . Clearly \( z \notin E \) and \( 1, y, z \) and \( {yz} \) are linearly independent over \( F \) . Also \( {z}^{2}y{z}^{-2} = y \) so that \( {z}^{2} \in Z\left( A\right) \) (i.e., \( {z}^{2} = b \in F \) ). However, \( \{ 1, y, z,{yz}\} \) is then a standard basis of \( A \) and \( A \cong \left( \frac{a, b}{F}\right) \) . \( ▱ \) Corollary 2.1.9 Let \( A \) be a quaternion division algebra over \( F \) . If \( w \in \) \( A \smallsetminus F \) and \( E = F\left( w\right) \), then \( A{ \otimes }_{F}E \cong {M}_{2}\left( E\right) \) . Proof: As in the above theorem, \( E \) is a quadratic extension field of \( F \) . Furthermore, there exists a standard basis \( \{ 1, y, z,{yz}\} \) of \( A \) with \( E = F\left( y\right) \) and \( {y}^{2} = a \in F \) . Thus there exists \( x \in A{ \otimes }_{F}E \) with \( {x}^{2} = 1 \) and \( x \neq \pm 1 \) (for instance, \( x = y \otimes {y}^{-1} \) ), so that \( \left( {x - 1}\right) \left( {x + 1}\right) = 0 \) . Hence \( A{ \otimes }_{F}E \) cannot be a division algebra and so must be isomorphic to \( {M}_{2}\left( E\right) \) . Deciding for a given quaternion algebra \( \left( \frac{a, b}{F}\right) \) whether or not it is isomorphic to \( {M}_{2}\left( F\right) \) is an important problem and, as will be seen later in our applications, has topological implications. For a given \( a \) and \( b \), the problem can be re-expressed in terms of quadratic forms, as will be shown in \( §{2.3} \) . ## Exercise 2.1 1.
Let \( A \) be a four-dimensional central algebra over the field \( F \) such that there is a two-dimensional separable subalgebra \( L \) over \( F \) and an element \( c \in {F}^{ * } \) with \( A = L + {Lu} \) for some \( u \in A \) with \[ {u}^{2} = c\;\text{ and }\;{um} = \bar{m}u \] where \( m \in L \) and \( m \mapsto \bar{m} \) is the non-trivial \( F \) -automorphism of \( L \) . Prove that if \( F \) has characteristic \( \neq 2 \), then \( A \) is a quaternion algebra. Indeed, this is a definition of a quaternion algebra valid for any characteristic. Show that, under this definition, conjugation can be defined as: that \( F \) - endomorphism of \( A \), denoted \( x \mapsto \bar{x} \), such that \( \bar{u} = - u \) and restricted to \( L \) is the non-trivial automorphism. Prove also that Theorem 2.1.8 is valid in any characteristic. 2. Show that the ring of Hamilton’s quaternions \( \mathcal{H} = \left( \frac{-1, - 1}{\mathbb{R}}\right) \) is isomorphic to the \( \mathbb{R} \) -subalgebra \[ \left\{ {\left. \left( \begin{matrix} \alpha & \beta \\ - \bar{\beta } & \bar{\alpha } \end{matrix}\right) \right| \;\alpha ,\beta \in \mathbb{C}}\right\} . \] Hence show that \( {\mathcal{H}}^{1} = \{ h \in \mathcal{H} \mid n\left( h\right) = 1\} \) is isomorphic to \( \mathrm{{SU}}\left( 2\right) \) . 3. Let \[ A = \left\{ {\left. \left( \begin{matrix} \alpha & \sqrt{2}\beta \\ \sqrt{2}\bar{\beta } & \bar{\alpha } \end{matrix}\right) \right| \;\alpha ,\beta \in \mathbb{Q}\left( i\right) }\right\} . \] Prove that \( A \) is a quaternion algebra over \( \mathbb{Q} \) . Prove that it is isomorphic to \( {M}_{2}\left( \mathbb{Q}\right) \) (cf. Exercise 2.7, No.1). 4. Let \( A \) be a quaternion algebra over a number field \( k \) . 
Show that there exists a quadratic extension field \( L \mid k \) such that \( A \) has a faithful representation \( \rho \) in \( {M}_{2}\left( L\right) \), such that \( \rho \left( \bar{x}\right) = \overline{\rho \left( x\right) } \) for all \( x \in A \) . 5. Let \( F \) be a finite field of characteristic \( \neq 2 \) . If \( A \) is a quaternion algebra over \( F \), prove that \( A \cong {M}_{2}\left( F\right) \) . 6. For any quaternion algebra \( A \) and \( x \in A \), show that \[ \operatorname{tr}\left( {x}^{2}\right) = \operatorname{tr}{\left( x\right) }^{2} - {2n}\left( x\right) \] 7. Let \( \lambda \) denote the left regular representation of a quaternion algebra \( A \) . Pr
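The matrix model of Exercise 2 and the identity of Exercise 6 lend themselves to a quick numerical check. In this sketch (sample coordinates arbitrary), \( {x}_{0} + {x}_{1}i + {x}_{2}j + {x}_{3}k \) is represented by \( \left( \begin{matrix} \alpha & \beta \\ - \bar{\beta } & \bar{\alpha } \end{matrix}\right) \) with \( \alpha = {x}_{0} + {x}_{1}i,\beta = {x}_{2} + {x}_{3}i \), so the reduced norm and trace are read off as determinant and matrix trace.

```python
# Matrix model of Hamilton's quaternions (Exercise 2.1 No. 2) and the trace
# identity tr(x^2) = tr(x)^2 - 2 n(x) (Exercise 2.1 No. 6).

def rep(x0, x1, x2, x3):
    """Matrix ((alpha, beta), (-conj(beta), conj(alpha))) of x0 + x1 i + x2 j + x3 k."""
    alpha, beta = complex(x0, x1), complex(x2, x3)
    return ((alpha, beta), (-beta.conjugate(), alpha.conjugate()))

def mat_mul(M, N):
    return tuple(
        tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def trace(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# the representation respects the defining relations
i_m, j_m, k_m = rep(0, 1, 0, 0), rep(0, 0, 1, 0), rep(0, 0, 0, 1)
assert mat_mul(i_m, j_m) == k_m                   # ij = k
assert mat_mul(i_m, i_m) == rep(-1, 0, 0, 0)      # i^2 = -1

# reduced norm = determinant; Exercise 6: tr(x^2) = tr(x)^2 - 2 n(x)
x = rep(1, 2, -1, 3)
assert abs(det(x) - (1 + 4 + 1 + 9)) < 1e-12      # n(x) = sum of squares
x2 = mat_mul(x, x)
assert abs(trace(x2) - (trace(x) ** 2 - 2 * det(x))) < 1e-12
```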
1065_(GTM224)Metric Structures in Differential Geometry
Definition 11.2
Definition 11.2. A tensor field of type \( \left( {r, s}\right) \) on \( M \) is a (smooth) map \( T : M \rightarrow {T}_{r, s}\left( M\right) \) such that \( \pi \circ T = {1}_{M} \) . A differential \( k \) -form on \( M \) is a map \( \alpha : M \rightarrow {\Lambda }_{k}^{ * }\left( M\right) \) such that \( \pi \circ \alpha = {1}_{M} \) . Notice that a vector field on \( M \) is just a tensor field of type \( \left( {1,0}\right) \) . The following proposition is proved in the same way as we did for vector fields: Proposition 11.2. \( T \) is a tensor field of type \( \left( {r, s}\right) \) on \( M \) iff for any chart \( \left( {U, x}\right) \) of \( M \) , \[ {T}_{\mid U} = \sum {T}_{{i}_{1},\ldots ,{i}_{r};{j}_{1},\ldots ,{j}_{s}}\frac{\partial }{\partial {x}^{{i}_{1}}} \otimes \cdots \otimes \frac{\partial }{\partial {x}^{{i}_{r}}} \otimes d{x}^{{j}_{1}} \otimes \cdots \otimes d{x}^{{j}_{s}} \] for smooth functions \( {T}_{{i}_{1},\ldots ,{i}_{r};{j}_{1},\ldots ,{j}_{s}} \) on \( U \) . Similarly, \( \alpha \) is a differential \( k \) -form on \( M \) iff \[ {\alpha }_{\mid U} = \mathop{\sum }\limits_{{1 \leq {i}_{1} < \cdots < {i}_{k} \leq n}}{\alpha }_{{i}_{1}\ldots {i}_{k}}d{x}^{{i}_{1}} \land \cdots \land d{x}^{{i}_{k}} \] for smooth functions \( {\alpha }_{{i}_{1}\ldots {i}_{k}} \) on \( U \) . Observe that the functions in Proposition 11.2 satisfy \[ {T}_{{i}_{1},\ldots ,{i}_{r};{j}_{1},\ldots ,{j}_{s}} = T\left( {d{x}^{{i}_{1}},\ldots, d{x}^{{i}_{r}},\frac{\partial }{\partial {x}^{{j}_{1}}},\ldots ,\frac{\partial }{\partial {x}^{{j}_{s}}}}\right) , \] \[ {\alpha }_{{i}_{1},\ldots ,{i}_{k}} = \alpha \left( {\frac{\partial }{\partial {x}^{{i}_{1}}},\ldots ,\frac{\partial }{\partial {x}^{{i}_{k}}}}\right) . \] EXAMPLES AND REMARKS 11.1. (i) Let \( \left( {U, x}\right) \) and \( \left( {V, y}\right) \) be charts of \( {M}^{n} \) . 
Then on \( U \cap V \) , \[ d{y}^{1} \land \cdots \land d{y}^{n} = \det D\left( {y \circ {x}^{-1}}\right) d{x}^{1} \land \cdots \land d{x}^{n} \] because \( d{y}^{1} \land \cdots \land d{y}^{n}\left( {\partial /\partial {x}^{1},\ldots ,\partial /\partial {x}^{n}}\right) = \det \left( {d{y}^{i}\left( {\partial /\partial {x}^{j}}\right) }\right) = \det {D}_{j}\left( {{y}^{i} \circ {x}^{-1}}\right) \) . (ii) A Riemannian metric on a manifold \( M \) is a tensor field \( g \) of type \( \left( {0,2}\right) \) on \( M \) such that for every \( p \in M, g\left( p\right) \) is an inner product on \( {M}_{p} \) . We denote by \( {A}_{k}\left( M\right) \) the set of all differential \( k \) -forms on \( M \), and by \( A\left( M\right) \) the collection of all forms. Given \( \alpha ,\beta \in A\left( M\right), a \in \mathbb{R} \), define \( {a\alpha },\alpha + \beta ,\alpha \land \beta \in \) \( A\left( M\right) \) by setting \[ \left( {a\alpha }\right) \left( p\right) = {a\alpha }\left( p\right) ,\left( {\alpha + \beta }\right) \left( p\right) = \alpha \left( p\right) + \beta \left( p\right) ,\left( {\alpha \land \beta }\right) \left( p\right) = \alpha \left( p\right) \land \beta \left( p\right) ,\;p \in M. \] \( A\left( M\right) \) then becomes a graded algebra with these operations. Since \( {\Lambda }_{0}^{ * }\left( M\right) = { \cup }_{p \in M}{\Lambda }_{0}\left( {M}_{p}^{ * }\right) = { \cup }_{p \in M}\mathbb{R} = M \times \mathbb{R},{A}_{0}\left( M\right) \) is naturally isomorphic to \( \mathcal{F}\left( M\right) \) if we identify \( \alpha \in {A}_{0}\left( M\right) \) with \( f = {\pi }_{2} \circ \alpha \), where \( {\pi }_{2} \) : \( M \times \mathbb{R} \rightarrow \mathbb{R} \) is projection. For \( f \in {A}_{0}\left( M\right) \), we write \( {f\alpha } \) instead of \( f \land \alpha \) . Thus, \( A\left( M\right) \) is a module over \( \mathcal{F}\left( M\right) \) . 
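The transformation law of Example 11.1 (i) can be illustrated pointwise. The sketch below works at a single hypothetical point in dimension 2: covectors are component tuples, the wedge of two 1-forms is evaluated by the standard alternating formula, and the top-degree form picks up the Jacobian determinant.

```python
# At a point: dy^i has components given by the rows of the Jacobian J of
# y o x^{-1}, and (dy^1 ^ dy^2)(d/dx^1, d/dx^2) = det J.

def ev(cov, v):
    """Pair a covector with a vector."""
    return sum(c * x for c, x in zip(cov, v))

def wedge(alpha, beta):
    """The 2-form alpha ^ beta as a function of two vectors."""
    return lambda X, Y: ev(alpha, X) * ev(beta, Y) - ev(alpha, Y) * ev(beta, X)

J = ((2.0, 1.0), (0.5, 3.0))          # Jacobian at the point (arbitrary choice)
dy1, dy2 = J                          # dy^i(d/dx^j) = J[i][j]
e1, e2 = (1.0, 0.0), (0.0, 1.0)       # the coordinate fields d/dx^1, d/dx^2

det_J = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert abs(wedge(dy1, dy2)(e1, e2) - det_J) < 1e-12

# the wedge is alternating, as a 2-form must be
X, Y = (1.0, 2.0), (0.0, 1.0)
w = wedge(dy1, dy2)
assert w(X, Y) == -w(Y, X) and w(X, X) == 0.0
```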
Any \( \alpha \in {A}_{k}\left( M\right) \) is an alternating multilinear map \[ \alpha : \mathfrak{X}\left( M\right) \times \cdots \times \mathfrak{X}\left( M\right) \rightarrow \mathcal{F}\left( M\right) ,\;\alpha \left( {{X}_{1},\ldots ,{X}_{k}}\right) \left( p\right) = \alpha \left( p\right) \left( {{X}_{1 \mid p},\ldots ,{X}_{k \mid p}}\right) . \] Moreover, \( \alpha \) is linear over \( \mathcal{F}\left( M\right) \) ; i.e., \[ \alpha \left( {{X}_{1},\ldots, f{X}_{i},\ldots ,{X}_{k}}\right) = {f\alpha }\left( {{X}_{1},\ldots ,{X}_{i},\ldots ,{X}_{k}}\right) . \] The converse is also true: Proposition 11.3. A multilinear map \( T : \mathfrak{X}\left( M\right) \times \cdots \times \mathfrak{X}\left( M\right) \rightarrow \mathcal{F}\left( M\right) \) is a tensor field iff it is linear over \( \mathcal{F}\left( M\right) \) . Proof. The condition is necessary by the above remark. Conversely, suppose \( T \) is multilinear and linear over \( \mathcal{F}\left( M\right) \) . We claim that \( T \) "lives pointwise": If \( {X}_{i \mid p} = {Y}_{i \mid p} \) for all \( i \), then \( T\left( {{X}_{1},\ldots ,{X}_{k}}\right) \left( p\right) = T\left( {{Y}_{1},\ldots ,{Y}_{k}}\right) \left( p\right) \) . To see this, assume for simplicity that \( k = 1 \), the general case being analogous. It suffices to establish that if \( {X}_{p} = 0 \), then \( T\left( X\right) \left( p\right) = 0 \) . Consider a chart \( \left( {U, x}\right) \) around \( p \), and write \( {X}_{\mid U} = \sum {f}_{i}\partial /\partial {x}^{i} \) with \( {f}_{i}\left( p\right) = 0 \) . Let \( V \) be a neighborhood of \( p \) whose closure is contained in \( U \), and \( \phi \) a nonnegative function with support in \( U \) which equals 1 on the closure of \( V \) . Define vector fields \( {X}_{i} \) on \( M \) by setting them equal to \( \phi \partial /\partial {x}^{i} \) on \( U \) and to 0 outside \( U \) .
Similarly, let \( {g}_{i} \) be the functions that equal \( \phi {f}_{i} \) on \( U \) and 0 outside \( U \) . Then \( X = {\phi }^{2}X + \left( {1 - {\phi }^{2}}\right) X = \sum {g}_{i}{X}_{i} + \left( {1 - {\phi }^{2}}\right) X \), and \[ \left( {TX}\right) \left( p\right) = \sum {g}_{i}\left( p\right) \left( {T{X}_{i}}\right) \left( p\right) + \left( {1 - {\phi }^{2}\left( p\right) }\right) \left( {TX}\right) \left( p\right) = 0, \] establishing the claim. We may therefore define for each \( p \in M \) an element \( {T}_{p} \in {T}_{0, k}\left( {M}_{p}\right) \) by \[ {T}_{p}\left( {{u}_{1},\ldots ,{u}_{k}}\right) \mathrel{\text{:=}} T\left( {{X}_{1},\ldots ,{X}_{k}}\right) \left( p\right) \] for any vector fields \( {X}_{i} \) with \( {X}_{i \mid p} = {u}_{i} \) . The map \( p \mapsto {T}_{p} \) is clearly smooth. EXAMPLES AND REMARKS 11.2. (i) Given \( \alpha ,\beta \in {A}_{1}\left( M\right) \) and \( X, Y \in \) \( \mathfrak{X}\left( M\right) \) , \[ \left( {\alpha \land \beta }\right) \left( {X, Y}\right) = \alpha \left( X\right) \beta \left( Y\right) - \alpha \left( Y\right) \beta \left( X\right) . \] (ii) Recall that for \( f \in \mathcal{F}\left( M\right) \) and \( p \in M,{df}\left( p\right) \) is the element of \( {M}_{p}^{ * } \) given by \( {df}\left( p\right) u = u\left( f\right) \) for \( u \in {M}_{p} \) . The assignment \( p \mapsto {df}\left( p\right) \) defines a differential 1-form \( {df} \), since in a chart \( \left( {U, x}\right), d{f}_{\mid U} = \mathop{\sum }\limits_{i}\left( {\partial f/\partial {x}^{i}}\right) d{x}^{i} \) with \( \partial f/\partial {x}^{i} \in \mathcal{F}\left( U\right) \) . \( {df} \) is called the differential of \( f \) . Observe that \( d : {A}_{0}\left( M\right) \rightarrow {A}_{1}\left( M\right) \) and that \( d\left( {fg}\right) = {fdg} + {gdf}. \) THEOREM 11.1. 
There is a unique linear map \( d : A\left( M\right) \rightarrow A\left( M\right) \), called the exterior derivative operator such that (i) \( d : {A}_{k}\left( M\right) \rightarrow {A}_{k + 1}\left( M\right) ,\;k \in \mathbb{N} \) ; (ii) \( d\left( {\alpha \land \beta }\right) = {d\alpha } \land \beta + {\left( -1\right) }^{k}\alpha \land {d\beta },\;\alpha \in {A}_{k}\left( M\right) ,\;\beta \in A\left( M\right) \) ; (iii) \( {d}^{2} = 0 \) ; and (iv) for \( f \in {A}_{0}\left( M\right) \), df is the differential of \( f \) . Proof. We will first define \( d \) locally in terms of charts, and then show that the definition is independent of the chosen chart. An invariant formula for \( d \) will be given later on. Given \( p \in M \), and a chart \( \left( {U, x}\right) \) around \( p \), any form \( \alpha \) defined on a neighborhood of \( p \) may be locally written as \( \alpha = \sum {\alpha }_{I}d{x}^{I} \), where \( I \) ranges over subsets of \( \{ 1,\ldots, n\}, d{x}^{I} = d{x}^{{i}_{1}} \land \cdots \land d{x}^{{i}_{k}} \) if \( I = \left\{ {{i}_{1},\ldots ,{i}_{k}}\right\} \) with \( {i}_{1} < \cdots < {i}_{k} \) (or \( d{x}^{I} = 1 \) if \( I = \varnothing \) ), and the \( {\alpha }_{I} \) are smooth functions on a neighborhood of \( p \) . 
Define \[ {d\alpha }\left( p\right) = \sum d{\alpha }_{I}\left( p\right) \land d{x}^{I}\left( p\right) \] We first check that \( d \) satisfies the following properties at \( p \) : Given \( \alpha \in {A}_{k}\left( M\right) \) , (1) \( {d\alpha }\left( p\right) \in {\Lambda }_{k + 1}\left( {M}_{p}^{ * }\right) \) (2) if \( \alpha = \beta \) on a neighborhood of \( p \), then \( {d\alpha }\left( p\right) = {d\beta }\left( p\right) \) ; (3) \( d\left( {{a\alpha } + {b\beta }}\right) \left( p\right) = {ad\alpha }\left( p\right) + {bd\beta }\left( p\right) ,\;a, b \in \mathbb{R},\;\beta \in A\left( M\right) \) ; (4) \( d\left( {\alpha \land \beta }\right) \left( p\right) = {d\alpha } \land \beta \left( p\right) + {\left( -1\right) }^{k}\alpha \land {d\beta }\left( p\right) \) ; (5) \( d\left( {df}\right) \left( p\right) = 0,\;f \in {A}_{0}\left( M\right) \) . Properties (1)-(3) are immediate. To establish (4), we may, by (3), assume that \( \alpha = {fd}{x}^{I} \) and \( \beta = {gd}{x}^{J} \) . In case \( I \) and/or \( J \) are empty, the statement is clear from the definition and the fact that (4) is true for functions. Otherwise, \[
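Property (iii) of Theorem 11.1 can be seen concretely in the simplest case: for \( f \in {A}_{0}\left( {\mathbb{R}}^{2}\right) \), the local definition of \( d \) gives

```latex
\[
d({df}) = d\left( \frac{\partial f}{\partial x}\,{dx} + \frac{\partial f}{\partial y}\,{dy}\right)
 = d\left( \frac{\partial f}{\partial x}\right) \wedge {dx}
 + d\left( \frac{\partial f}{\partial y}\right) \wedge {dy}
 = \frac{{\partial }^{2}f}{\partial y\,\partial x}\,{dy} \wedge {dx}
 + \frac{{\partial }^{2}f}{\partial x\,\partial y}\,{dx} \wedge {dy} = 0,
\]
```

since \( {dy} \wedge {dx} = - {dx} \wedge {dy} \) and mixed partial derivatives commute.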
1089_(GTM246)A Course in Commutative Banach Algebras
Definition 2.10.12
Definition 2.10.12. The topological group \( G \) is said to be almost periodic if the homomorphism \( \phi : G \rightarrow \Delta \left( {{AP}\left( G\right) }\right) \) is injective. Even though in general \( \phi \) need not be injective, \( \Delta \left( {{AP}\left( G\right) }\right) \) is called the Bohr or almost periodic compactification of \( G \) and usually denoted \( b\left( G\right) \) . In the remainder of this section we use the notation \( b\left( G\right) \) in order to emphasize the fact that \( b\left( G\right) \) is a compact group rather than just the structure space of the algebra \( {AP}\left( G\right) \) . We now turn to locally compact Abelian groups. Of course, in that case a major portion of the analysis in this section is superfluous. However, for such \( G \), considerably more can be said about \( {AP}\left( G\right) \), and \( b\left( G\right) \) can be identified in terms of \( G \) only. Let \( T\left( G\right) \) denote the linear subspace of \( {C}^{b}\left( G\right) \) consisting of all finite linear combinations of characters of \( G \) . Functions in \( T\left( G\right) \) are called trigonometric polynomials. Since \( \widehat{G} \) is a group, \( T\left( G\right) \) is a subalgebra of \( {C}^{b}\left( G\right) \) . For \( \chi \in \widehat{G} \) and \( x \in G \) we have \( {L}_{x}\chi \left( y\right) = \overline{\chi \left( x\right) }\chi \left( y\right) \) . Thus \[ {C}_{\chi } = \{ \chi \left( x\right) \chi : x \in G\} \subseteq \mathbb{T} \cdot \chi \] which is a compact subset of \( {C}^{b}\left( G\right) \) . This implies that \( T\left( G\right) \subseteq {AP}\left( G\right) \) . Theorem 2.10.13. Let \( G \) be a locally compact Abelian group. The Gelfand isomorphism \( f \rightarrow \widehat{f} \) from \( {AP}\left( G\right) \) onto \( C\left( {b\left( G\right) }\right) \) maps \( \widehat{G} \) onto \( \widehat{b\left( G\right) } \) and hence \( T\left( G\right) \) onto \( T\left( {b\left( G\right) }\right) \) . 
Moreover, \( T\left( G\right) \) is norm dense in \( {AP}\left( G\right) \) . Proof. It suffices to show that if \( \gamma \in \widehat{G} \), then \( \widehat{\gamma } \in \widehat{b\left( G\right) } \), and that every character of \( b\left( G\right) \) arises in this way. For \( x, y \in G \), we have \[ \widehat{\gamma }\left( {{\varphi }_{x}{\varphi }_{y}}\right) = \widehat{\gamma }\left( {\varphi }_{xy}\right) = \gamma \left( {xy}\right) = \gamma \left( x\right) \gamma \left( y\right) = \widehat{\gamma }\left( {\varphi }_{x}\right) \widehat{\gamma }\left( {\varphi }_{y}\right) . \] Since \( \widehat{\gamma } \) is continuous on \( b\left( G\right) \) and \( \phi \left( G\right) \) is dense in \( b\left( G\right) \), we conclude that \( \widehat{\gamma } \in \widehat{b\left( G\right) } \) . Conversely, if \( \chi \in \widehat{b\left( G\right) } \) then \( \chi \circ \phi \in \widehat{G} \) since \( \phi \) is a continuous homomorphism from \( G \) into \( b\left( G\right) \) . By the first part of the proof \( \widehat{\chi \circ \phi } \in \widehat{b\left( G\right) } \) . The two characters \( \chi \) and \( \widehat{\chi \circ \phi } \) of \( b\left( G\right) \) agree on the dense subset \( \phi \left( G\right) \), whence \( \chi = \widehat{\chi \circ \phi } \) . The Gelfand homomorphism of \( {AP}\left( G\right) \) onto \( C\left( {b\left( G\right) }\right) \) is isometric and, as we have just seen, maps \( T\left( G\right) \) onto \( T\left( {b\left( G\right) }\right) \) . Thus for the last statement of the theorem it is enough to observe that \( T\left( {b\left( G\right) }\right) \) is norm dense in \( C\left( {b\left( G\right) }\right) \) . Now, if \( H \) is a compact Abelian group, then \( T\left( H\right) \) is a \( * \) -subalgebra of \( C\left( H\right) \) which strongly separates the points of \( H \) . Thus \( T\left( H\right) \) is dense in \( C\left( H\right) \) by the Stone-Weierstrass theorem. Corollary 2.10.14.
Let \( G \) be a locally compact Abelian group, and let \( {\widehat{G}}_{d} \) denote the algebraic group \( \widehat{G} \) endowed with the discrete topology. Then the discrete dual group \( \widehat{b\left( G\right) } \) of \( b\left( G\right) \) is isomorphic to \( {\widehat{G}}_{d} \) . Proof. Being the dual group of the compact group \( b\left( G\right) ,\widehat{b\left( G\right) } \) is discrete. By Theorem 2.10.13, the Gelfand homomorphism of \( {AP}\left( G\right) \) maps \( \widehat{G} \) onto \( \widehat{b\left( G\right) } \) and this map is obviously a group isomorphism. Thus \( {\widehat{G}}_{d} \) is isomorphic to \( \widehat{b\left( G\right) } \) . Employing the Pontryagin duality theorem for locally compact Abelian groups, Corollary 2.10.14 can be rephrased as follows. The group \( b\left( G\right) \) is topologically isomorphic to the dual group of \( {\widehat{G}}_{d} \), since by Pontryagin duality \( b\left( G\right) \) is topologically isomorphic to the dual group of \( \widehat{b\left( G\right) } \cong {\widehat{G}}_{d} \) . ## 2.11 Structure spaces of tensor products The purpose of this section is to determine the structure space of the tensor product of two commutative Banach algebras and to investigate its semisimplicity. For the basic theory of tensor products of Banach algebras we refer to Section 1.5. We remind the reader that \( \epsilon \) denotes the injective tensor norm. Lemma 2.11.1. Let \( A \) and \( B \) be commutative Banach algebras and let \( \gamma \) be an algebra cross-norm on \( A \otimes B \) such that \( \gamma \geq \epsilon \) . Given \( \varphi \in \Delta \left( A\right) \) and \( \psi \in \Delta \left( B\right) \), there is a unique element of \( \Delta \left( {A{\widehat{ \otimes }}_{\gamma }B}\right) \), denoted \( \varphi {\widehat{ \otimes }}_{\gamma }\psi \), such that \[ \left( {\varphi {\widehat{ \otimes }}_{\gamma }\psi }\right) \left( {x \otimes y}\right) = \varphi \left( x\right) \psi \left( y\right) \] for all \( x \in A \) and \( y \in B \) .
Furthermore, the mapping \[ \Delta \left( A\right) \times \Delta \left( B\right) \rightarrow \Delta \left( {A{\widehat{ \otimes }}_{\gamma }B}\right) ,\left( {\varphi ,\psi }\right) \rightarrow \varphi {\widehat{ \otimes }}_{\gamma }\psi \] is a bijection. Proof. Let \( \varphi \in \Delta \left( A\right) \) and \( \psi \in \Delta \left( B\right) \) and recall first that there is a unique homomorphism \( \omega : A \otimes B \rightarrow \mathbb{C} \) such that \( \omega \left( {x \otimes y}\right) = \varphi \left( x\right) \psi \left( y\right) \) for all \( x \in A \) and \( y \in B \) . By definition of \( \epsilon \) and since \( \gamma \geq \epsilon \), for any \( {x}_{1},\ldots ,{x}_{n} \in A \) and \( {y}_{1},\ldots ,{y}_{n} \in B \), we have \[ \left| {\omega \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j} \otimes {y}_{j}}\right) }\right| = \left| {\mathop{\sum }\limits_{{j = 1}}^{n}\varphi \left( {x}_{j}\right) \psi \left( {y}_{j}\right) }\right| \leq \epsilon \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j} \otimes {y}_{j}}\right) \leq \gamma \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j} \otimes {y}_{j}}\right) . \] Thus \( \omega \) is continuous with respect to \( \gamma \) and therefore extends uniquely to an element of \( \Delta \left( {A{ \otimes }_{\gamma }B}\right) \), denoted \( \varphi {\widehat{ \otimes }}_{\gamma }\psi \) . The mapping \( \left( {\varphi ,\psi }\right) \rightarrow \varphi {\widehat{ \otimes }}_{\gamma }\psi \) is injective. To verify this, let \( {\varphi }_{1},{\varphi }_{2} \in \Delta \left( A\right) \) and \( {\psi }_{1},{\psi }_{2} \in \Delta \left( B\right) \) such that \( {\varphi }_{1}{\widehat{ \otimes }}_{\gamma }{\psi }_{1} = {\varphi }_{2}{\widehat{ \otimes }}_{\gamma }{\psi }_{2} \) . Fix \( b \in B \) such that \( {\psi }_{1}\left( b\right) = 1 \) . 
Then, for all \( x \in A \) , \[ {\varphi }_{1}\left( x\right) = {\varphi }_{1}\left( x\right) {\psi }_{1}\left( b\right) = \left( {{\varphi }_{1} \otimes {\psi }_{1}}\right) \left( {x \otimes b}\right) = \left( {{\varphi }_{2} \otimes {\psi }_{2}}\right) \left( {x \otimes b}\right) = {\varphi }_{2}\left( x\right) {\psi }_{2}\left( b\right) . \] Now, since \( {\varphi }_{1} \) and \( {\varphi }_{2} \) are non-zero homomorphisms, this equation implies that \( {\psi }_{2}\left( b\right) = 1 \) . Hence \( {\varphi }_{1} = {\varphi }_{2} \), and this in turn yields that \( {\psi }_{1} = {\psi }_{2} \) . It remains to show that given \( \rho \in \Delta \left( {A{\widehat{ \otimes }}_{\gamma }B}\right) \), there exist \( \varphi \in \Delta \left( A\right) \) and \( \psi \in \Delta \left( B\right) \) such that \( \rho \left( {x \otimes y}\right) = \varphi \left( x\right) \psi \left( y\right) \) for all \( x \in A \) and \( y \in B \) . Choose \( a \in A \) and \( b \in B \) such that \( \rho \left( {a \otimes b}\right) = 1 \), and define \( \varphi : A \rightarrow \mathbb{C} \) and \( \psi : B \rightarrow \mathbb{C} \) by \[ \varphi \left( x\right) = \rho \left( {{xa} \otimes b}\right) \text{ and }\psi \left( y\right) = \rho \left( {a \otimes {yb}}\right) . \] Clearly, \( \varphi \) and \( \psi \) are linear maps and \[ \varphi \left( x\right) \psi \left( y\right) = \rho \left( {x{a}^{2} \otimes y{b}^{2}}\right) = \rho \left( {x \otimes y}\right) \rho \left( {{a}^{2} \otimes {b}^{2}}\right) = \rho \left( {x \otimes y}\right) \] for all \( x \in A \) and \( y \in B \) . In particular, both \( \varphi \) and \( \psi \) are nonzero. 
Finally, for \( {x}_{1},{x}_{2} \in A \) \[ \varphi \left( {{x}_{1}{x}_{2}}\right) = \rho \left( {{x}_{1}{x}_{2}a \otimes b}\right) = \rho \left( {{x}_{1}{x}_{2}a \otimes b}\right) \rho \left( {a \otimes b}\right) \] \[ = \rho \left( {\left( {{x}_{1}a \otimes b}\right) \left( {{x}_{2}a \otimes b}\right) }\right) = \rho \left( {{x}_{1}a \otimes b}\right) \rho \left( {{x}_{2}a \otimes b}\right) \] \[ = \varphi \left( {x}_{1}\right) \varphi \left( {x}_{2}\right) \] and similarly, \( \psi \left( {{y}_{1}{y}_{2}}\right) = \psi \left( {y}_{1}\right) \psi \left( {y}_{2}\right) \) for all \( {y}_{1},{y}_{2} \in B \), so that \( \varphi \in \Delta \left( A\right) \) and \( \psi \in \Delta \left( B\right) \) .
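The bijection \( \Delta \left( A\right) \times \Delta \left( B\right) \rightarrow \Delta \left( {A{\widehat{ \otimes }}_{\gamma }B}\right) \) can be made concrete in a toy finite-dimensional model (our own illustration, not the Banach-algebra setting of the theorem): take \( A = {\mathbb{C}}^{n} \) and \( B = {\mathbb{C}}^{m} \) with pointwise multiplication, so characters are the coordinate evaluations and elementary tensors are realized by Kronecker coordinates.

```python
def kron(x, y):
    """Coordinates of the elementary tensor x (x) y inside C^(n*m)."""
    return [a * b for a in x for b in y]

def pointwise(u, v):
    """Multiplication in C^k with the pointwise product."""
    return [a * b for a, b in zip(u, v)]

x1, x2 = [1 + 2j, -1, 0.5], [2, 1j, 3]
y1, y2 = [1, -2, 0.25, 1j], [3, 1, -1, 2]
m = len(y1)

# (x1 (x) y1)(x2 (x) y2) = (x1 x2) (x) (y1 y2): the product of A (x) B.
lhs = pointwise(kron(x1, y1), kron(x2, y2))
rhs = kron(pointwise(x1, x2), pointwise(y1, y2))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))

# The character phi_i (x) psi_j is evaluation at coordinate i*m + j, and on
# an elementary tensor it equals phi_i(x) * psi_j(y) = x[i] * y[j].
i, j = 1, 2
assert abs(kron(x1, y1)[i * m + j] - x1[i] * y1[j]) < 1e-12
```

Distinct index pairs \( \left( {i, j}\right) \) give distinct coordinate evaluations, mirroring the injectivity argument in the proof.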
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 0.3.5
Definition 0.3.5 If \( I \) is a non-zero ideal of \( {R}_{k} \), define the norm of \( I \) by \[ N\left( I\right) = \left| {{R}_{k}/I}\right| \] The unique factorisation enables the determination of the norm of ideals to be reduced to the determination of norms of prime ideals. This reduction firstly requires the use of the Chinese Remainder Theorem in this context: Lemma 0.3.6 Let \( {\mathcal{Q}}_{1},{\mathcal{Q}}_{2},\ldots ,{\mathcal{Q}}_{r} \) be ideals in \( {R}_{k} \) such that \( {\mathcal{Q}}_{i} + {\mathcal{Q}}_{j} = {R}_{k} \) for \( i \neq j \) . Then \[ {\mathcal{Q}}_{1}{\mathcal{Q}}_{2}\cdots {\mathcal{Q}}_{r} = { \cap }_{i = 1}^{r}{\mathcal{Q}}_{i}\text{ and }{R}_{k}/{\mathcal{Q}}_{1}\cdots {\mathcal{Q}}_{r} \cong \oplus \mathop{\sum }\limits_{i}{R}_{k}/{\mathcal{Q}}_{i}. \] For distinct prime ideals \( {\mathcal{P}}_{1},{\mathcal{P}}_{2} \), the condition \( {\mathcal{P}}_{1}^{a} + {\mathcal{P}}_{2}^{b} = {R}_{k} \) can be shown to hold for any positive integers \( a, b \) (see Exercise 0.3, No. 3). Secondly, the ring \( {R}_{k}/{\mathcal{P}}^{a} \) has ideals \( {\mathcal{P}}^{b}/{\mathcal{P}}^{a} \) for \( 0 \leq b \leq a \), and each successive quotient \( {\mathcal{P}}^{c}/{\mathcal{P}}^{c + 1} \) can be shown to be a one-dimensional vector space over the field \( {R}_{k}/\mathcal{P} \) . Thus if \[ I = {\mathcal{P}}_{1}^{{a}_{1}}{\mathcal{P}}_{2}^{{a}_{2}}\cdots {\mathcal{P}}_{r}^{{a}_{r}}, \] then \[ N\left( I\right) = \mathop{\prod }\limits_{{i = 1}}^{r}{\left( N\left( {\mathcal{P}}_{i}\right) \right) }^{{a}_{i}} \] \( \left( {0.13}\right) \) and \( N \) is multiplicative, so that \[ N\left( {IJ}\right) = N\left( I\right) N\left( J\right) \] \( \left( {0.14}\right) \) The unique factorisation thus requires that the prime ideals in \( {R}_{k} \) be investigated. If \( \mathcal{P} \) is a prime ideal of \( {R}_{k} \), then \( {R}_{k}/\mathcal{P} \) is a finite field and so has order of the form \( {p}^{f} \) for some prime number \( p \) .
Note that \( \mathcal{P} \cap \mathbb{Z} \) is a prime ideal \( {p}^{\prime }\mathbb{Z} \) of \( \mathbb{Z} \) and that \( \mathbb{Z}/{p}^{\prime }\mathbb{Z} \) embeds in \( {R}_{k}/\mathcal{P} \) . Thus \( {p}^{\prime } = p \) and \[ p{R}_{k} = {\mathcal{P}}_{1}^{{e}_{1}}{\mathcal{P}}_{2}^{{e}_{2}}\cdots {\mathcal{P}}_{g}^{{e}_{g}} \] \( \left( {0.15}\right) \) where, for each \( i,{R}_{k}/{\mathcal{P}}_{i} \) is a field of order \( {p}^{{f}_{i}} \) for some \( {f}_{i} \geq 1 \) . The primes \( {\mathcal{P}}_{i} \) are said to lie over or above \( p \), or \( p\mathbb{Z} \) . Note that \( {f}_{i} \) is the degree of the extension of finite fields \( \left\lbrack {{R}_{k}/{\mathcal{P}}_{i} : \mathbb{Z}/p\mathbb{Z}}\right\rbrack \) . If \( \left\lbrack {k : \mathbb{Q}}\right\rbrack = d \), then \( N\left( {p{R}_{k}}\right) = {p}^{d} \) and so \[ d = \mathop{\sum }\limits_{{i = 1}}^{g}{e}_{i}{f}_{i} \] (0.16) Definition 0.3.7 The prime number \( p \) is said to be ramified in the extension \( k \mid \mathbb{Q} \) if, in the decomposition at (0.15), some \( {e}_{i} > 1 \) . Otherwise, \( p \) is unramified. The following theorem of Dedekind connects ramification with the discriminant. Theorem 0.3.8 A prime number \( p \) is ramified in the extension \( k \mid \mathbb{Q} \) if and only if \( p \mid {\Delta }_{k} \) . There are thus only finitely many rational primes which ramify in the extension \( k \mid \mathbb{Q} \) . If \( \mathcal{P} \) is a prime ideal in \( {R}_{k} \) with \( \left| {{R}_{k}/\mathcal{P}}\right| = q\left( { = {p}^{m}}\right) \), and \( \ell \mid k \) is a finite extension, then a similar analysis to that given above holds. Thus in \( {R}_{\ell } \) , \[ \mathcal{P}{R}_{\ell } = {\mathcal{Q}}_{1}^{{e}_{1}}{\mathcal{Q}}_{2}^{{e}_{2}}\cdots {\mathcal{Q}}_{g}^{{e}_{g}} \] \( \left( {0.17}\right) \) where, for each \( i,{R}_{\ell }/{\mathcal{Q}}_{i} \) is a field of order \( {q}^{{f}_{i}} \) . 
The \( {e}_{i},{f}_{i} \) then satisfy (0.16) where \( \left\lbrack {\ell : k}\right\rbrack = d \) . Dedekind’s Theorem 0.3.8 also still holds when \( {\Delta }_{k} \) is replaced by the relative discriminant, and, of course, in this case, the ideal \( \mathcal{P} \) must divide the ideal \( {\delta }_{\ell \mid k} \) . Now consider the cases of quadratic extensions \( \mathbb{Q}\left( \sqrt{d}\right) \mid \mathbb{Q} \) in some detail. Denote the ring of integers in \( \mathbb{Q}\left( \sqrt{d}\right) \) by \( {O}_{d} \) . Note that from (0.16), there are exactly three possibilities and it is convenient to use some special terminology to describe these. 1. \( p{O}_{d} = {\mathcal{P}}^{2} \) (i.e., \( g = 1,{e}_{1} = 2 \) and so \( {f}_{1} = 1 \) ). Thus \( p \) is ramified in \( \mathbb{Q}\left( \sqrt{d}\right) \mid \mathbb{Q} \) and this will occur if \( p \mid d \) when \( d \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) and if \( p \mid {4d} \) when \( d ≢ 1\left( {\;\operatorname{mod}\;4}\right) \) . Note also in this case that \( {O}_{d}/\mathcal{P} \cong {\mathbb{F}}_{p} \), so that \( N\left( \mathcal{P}\right) = p \) 2. \( p{O}_{d} = {\mathcal{P}}_{1}{\mathcal{P}}_{2} \) (i.e., \( g = 2,{e}_{1} = {e}_{2} = {f}_{1} = {f}_{2} = 1 \) ). In this case, we say that \( p \) decomposes in \( \mathbb{Q}\left( \sqrt{d}\right) \mid \mathbb{Q} \) . In this case \( N\left( {\mathcal{P}}_{1}\right) = N\left( {\mathcal{P}}_{2}\right) = p \) . 3. \( p{O}_{d} = \mathcal{P} \) (i.e., \( \left. {g = 1,{e}_{1} = 1,{f}_{1} = 2}\right) \) . In this case, we say that \( p \) is inert in the extension. Note that \( N\left( \mathcal{P}\right) = {p}^{2} \) . The deductions here are particularly simple since the degree of the extension is 2 . To determine how the prime ideals of \( {R}_{k} \) lie over a given rational prime \( p \) can often be decided by the result below, which is particularly useful in computations. We refer to this result as Kummer's Theorem. 
(It is not clear to us that this is a correct designation, and in algebraic number theory, it is not a unique designation. However, in this book, it will uniquely pick out this result.) Theorem 0.3.9 Let \( {R}_{k} = \mathbb{Z}\left\lbrack \theta \right\rbrack \) for some \( \theta \in {R}_{k} \) with minimum polynomial \( h \) . Let \( p \) be a (rational) prime. Suppose, over \( {\mathbb{F}}_{p} \), that \[ \bar{h} = {\bar{h}}_{1}^{{e}_{1}}{\bar{h}}_{2}^{{e}_{2}}\cdots {\bar{h}}_{r}^{{e}_{r}}, \] where each \( {h}_{i} \in \mathbb{Z}\left\lbrack x\right\rbrack \) is monic of degree \( {f}_{i} \), the \( {\bar{h}}_{i} \) are distinct and irreducible over \( {\mathbb{F}}_{p} \), and the overbar denotes the natural map \( \mathbb{Z}\left\lbrack x\right\rbrack \rightarrow {\mathbb{F}}_{p}\left\lbrack x\right\rbrack . \) Then \( {\mathcal{P}}_{i} = p{R}_{k} + {h}_{i}\left( \theta \right) {R}_{k} \) is a prime ideal, \( N\left( {\mathcal{P}}_{i}\right) = {p}^{{f}_{i}} \) and \[ p{R}_{k} = {\mathcal{P}}_{1}^{{e}_{1}}{\mathcal{P}}_{2}^{{e}_{2}}\cdots {\mathcal{P}}_{r}^{{e}_{r}} \] There is also a relative version of this theorem applying to an extension \( \ell \mid k \) with \( {R}_{\ell } = {R}_{k}\left\lbrack \theta \right\rbrack \) and \( \mathcal{P} \) a prime ideal in \( {R}_{k} \) . As noted earlier, such extensions may not have integral bases. Even in the absolute case of \( k \mid \mathbb{Q} \), it is not always possible to find a \( \theta \in {R}_{k} \) such that \( \left\{ {1,\theta ,{\theta }^{2},\ldots ,{\theta }^{d - 1}}\right\} \) is an integral basis. Thus the theorem as stated is not always applicable. There are further versions of this theorem which apply in a wider range of cases. Once again we consider quadratic extensions, which always have such a basis as required by Kummer’s Theorem, with \( \theta = \sqrt{d} \) if \( d ≢ 1\left( {\;\operatorname{mod}\;4}\right) \) and \( \theta = \left( {1 + \sqrt{d}}\right) /2 \) if \( d \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) .
In the first case, \( p \) is ramified if \( p \mid {4d} \) . For other values of \( p,{x}^{2} - \bar{d} \in {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \) factorises if and only if there exists \( a \in \mathbb{Z} \) such that \( {a}^{2} \equiv d\left( {\;\operatorname{mod}\;p}\right) \) [i.e. if and only if \( \left( \frac{d}{p}\right) = 1 \) ]. In the second case, if \( p \) is odd and \( p \nmid d \), then \( {x}^{2} - x + \left( {1 - \bar{d}}\right) /4 \in {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \) factorises if and only if \( {\left( 2x - 1\right) }^{2} - \bar{d} \in {\mathbb{F}}_{p}\left\lbrack x\right\rbrack \) factorises [i.e. if and only if \( \left( \frac{d}{p}\right) = 1 \) ]. If \( p = 2 \), then \[ {x}^{2} - x + \frac{1 - d}{4} = \left\{ \begin{array}{ll} {x}^{2} + x \in {\mathbb{F}}_{2}\left\lbrack x\right\rbrack & \text{ if }d \equiv 1\left( {\;\operatorname{mod}\;8}\right) \\ {x}^{2} + x + 1 \in {\mathbb{F}}_{2}\left\lbrack x\right\rbrack & \text{ if }d \equiv 5\left( {\;\operatorname{mod}\;8}\right) \end{array}\right. \] Thus using Kummer's Theorem, we have the following complete picture of prime ideals in the ring of integers of a quadratic extension of \( \mathbb{Q} \) . Lemma 0.3.10 In the quadratic extension, \( \mathbb{Q}\left( \sqrt{d}\right) \mid \mathbb{Q} \), where the integer \( d \) is square-free and \( p \) a prime, the following hold: 1. Let \( p \) be odd. (a) If \( p \mid d, p \) is ramified. (b) If \( \left( \frac{d}{p}\right) = 1, p \) decomposes. (c) If \( \left( \frac{d}{p}\right) = - 1, p \) is inert. 2. Let \( p = 2 \) . (a) If \( d ≢ 1\left( {\;\operatorname{mod}\;4}\right) ,2 \) is ramified. (b) If \( d \equiv 1\left( {\;\operatorname{mod}\;8}\right) ,2 \) decomposes. (c) If \( d \equiv 5\left( {\;\operatorname{mod}\;8}\right) ,2 \) is inert. ## Examples 0.3.11 1. The examples treated at the end of the preceding section will be considered further here. 
Thus let \( k = \mathbb{Q}\left( t\right) \) where \( t \) satisfies \( {x}^{3} + x + 1 \) . This polynomial is irreducible mod 2, so by Kummer’s Theorem there is one prime ideal \( {\mathcal{P}}_{2} = 2{R}_{k} \) lying over 2, with \( N\left( {\mathcal{P}}_{2}\right) = {2}^{3} = 8 \) .
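Lemma 0.3.10 is easy to turn into a short computation. The sketch below (function names are ours) classifies the behaviour of a rational prime \( p \) in \( \mathbb{Q}\left( \sqrt{d}\right) \), computing the Legendre symbol by Euler's criterion \( {d}^{\left( {p - 1}\right) /2} \equiv \left( \frac{d}{p}\right) \left( {\;\operatorname{mod}\;p}\right) \).

```python
def legendre(d, p):
    """Legendre symbol (d/p) for an odd prime p, via Euler's criterion."""
    r = pow(d % p, (p - 1) // 2, p)   # r is 0, 1, or p - 1
    return -1 if r == p - 1 else r

def splitting(d, p):
    """Behaviour of p in Q(sqrt(d)), d square-free: a direct transcription
    of Lemma 0.3.10."""
    if p == 2:
        if d % 4 != 1:
            return "ramified"
        return "decomposes" if d % 8 == 1 else "inert"
    if d % p == 0:
        return "ramified"
    return "decomposes" if legendre(d, p) == 1 else "inert"

# In Q(sqrt(-1)) = Q(i): 2 is ramified, 5 decomposes (5 = (2+i)(2-i)),
# and 3 is inert.
assert splitting(-1, 2) == "ramified"
assert splitting(-1, 5) == "decomposes"
assert splitting(-1, 3) == "inert"
# In Q(sqrt(5)): d = 5 ≡ 5 (mod 8), so 2 is inert, while 5 is ramified.
assert splitting(5, 2) == "inert"
assert splitting(5, 5) == "ramified"
```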
1343_[鄂维南&李铁军&Vanden-Eijnden] Applied Stochastic Analysis -GSM199
Definition 1.11
Definition 1.11. The distribution of the random variable \( X \) is a probability measure \( \mu \) on \( \mathbb{R} \), defined for any set \( B \in \mathcal{R} \) by (1.11) \[ \mu \left( B\right) = \mathbb{P}\left( {X \in B}\right) = \mathbb{P} \circ {X}^{-1}\left( B\right) . \] In particular, we define the distribution function \( F\left( x\right) = \mathbb{P}\left( {X \leq x}\right) \) when \( B = ( - \infty, x\rbrack \) . If there exists an integrable function \( \rho \left( x\right) \) such that (1.12) \[ \mu \left( B\right) = {\int }_{B}\rho \left( x\right) {dx} \] for any \( B \in \mathcal{R} \), then \( \rho \) is called the probability density function (PDF) of \( X \) . Here \( \rho \left( x\right) = {d\mu }/{dm} \) is the Radon-Nikodym derivative of \( \mu \left( {dx}\right) \) with respect to the Lebesgue measure \( m\left( {dx}\right) \) if \( \mu \left( {dx}\right) \) is absolutely continuous with respect to \( m\left( {dx}\right) \) ; i.e., for any set \( B \in \mathcal{R} \), if \( m\left( B\right) = 0 \), then \( \mu \left( B\right) = 0 \) (see also Section C of the appendix) [Bil79]. In this case, we write \( \mu \ll m \) . Definition 1.12. The expectation of a random variable \( X \) is defined as (1.13) \[ \mathbb{E}X = {\int }_{\Omega }X\left( \omega \right) \mathbb{P}\left( {d\omega }\right) = {\int }_{\mathbb{R}}{x\mu }\left( {dx}\right) \] if the integrals are well-defined. The variance of \( X \) is defined as (1.14) \[ \operatorname{Var}\left( X\right) = \mathbb{E}{\left( X - \mathbb{E}X\right) }^{2} \] For two random variables \( X \) and \( Y \), we can define their covariance as \( \left( {1.15}\right) \) \[ \operatorname{Cov}\left( {X, Y}\right) = \mathbb{E}\left( {X - \mathbb{E}X}\right) \left( {Y - \mathbb{E}Y}\right) \] \( X \) and \( Y \) are called uncorrelated if \( \operatorname{Cov}\left( {X, Y}\right) = 0 \) . 
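A quick Monte Carlo sketch (our own illustration, with sample sizes and seed chosen arbitrarily) of the covariance in (1.15), showing that uncorrelated does not imply independent: for symmetric \( X \), the variable \( Y = {X}^{2} \) satisfies \( \operatorname{Cov}\left( {X, Y}\right) = \mathbb{E}{X}^{3} = 0 \) even though \( Y \) is a deterministic function of \( X \).

```python
import random

random.seed(1)
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x * x for x in xs]          # Y = X^2 is fully determined by X

mean = lambda v: sum(v) / len(v)
mx, my = mean(xs), mean(ys)
cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

# Cov(X, Y) = E X^3 - (E X)(E X^2) = 0 by the symmetry of X, so the sample
# covariance should be near zero despite the complete dependence.
assert abs(cov) < 0.05
```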
All of the above definitions can be extended to the vectorial case in which \( \mathbf{X} = {\left( {X}_{1},{X}_{2},\ldots ,{X}_{d}\right) }^{T} \in {\mathbb{R}}^{d} \) is a random vector and each component \( {X}_{k} \) is a random variable. In this case, the covariance matrix of \( \mathbf{X} \) is defined as (1.16) \[ \operatorname{Cov}\left( \mathbf{X}\right) = \mathbb{E}\left( {\mathbf{X} - \mathbb{E}\mathbf{X}}\right) {\left( \mathbf{X} - \mathbb{E}\mathbf{X}\right) }^{T}. \] Definition 1.13. For any \( p \geq 1 \), the space \( {L}^{p}\left( \Omega \right) \) (or \( {L}_{\omega }^{p} \) ) consists of random variables whose \( p \) th-order moment is finite: (1.17) \[ {L}^{p}\left( \Omega \right) = \left\{ {\mathbf{X}\left( \omega \right) : \mathbb{E}{\left| \mathbf{X}\right| }^{p} < \infty }\right\} . \] For \( \mathbf{X} \in {L}^{p}\left( \Omega \right) \), let (1.18) \[ \parallel \mathbf{X}{\parallel }_{p} = {\left( \mathbb{E}{\left| \mathbf{X}\right| }^{p}\right) }^{1/p},\;p \geq 1. \] Theorem 1.14. (i) Minkowski inequality. \[ \parallel \mathbf{X} + \mathbf{Y}{\parallel }_{p} \leq \parallel \mathbf{X}{\parallel }_{p} + \parallel \mathbf{Y}{\parallel }_{p},\;p \geq 1,\mathbf{X},\mathbf{Y} \in {L}^{p}\left( \Omega \right) \] (ii) Hölder inequality. \( \mathbb{E}\left| \left( {\mathbf{X},\mathbf{Y}}\right) \right| \leq \parallel \mathbf{X}{\parallel }_{p}\parallel \mathbf{Y}{\parallel }_{q},\;p > 1,1/p + 1/q = 1,\mathbf{X} \in {L}^{p}\left( \Omega \right) ,\mathbf{Y} \in {L}^{q}\left( \Omega \right) , \) where \( \left( {\mathbf{X},\mathbf{Y}}\right) \) denotes the standard scalar product in \( {\mathbb{R}}^{d} \) . (iii) Schwarz inequality. \[ \mathbb{E}\left| \left( {\mathbf{X},\mathbf{Y}}\right) \right| \leq \parallel \mathbf{X}{\parallel }_{2}\parallel \mathbf{Y}{\parallel }_{2} \] Obviously the Schwarz inequality is a special case of the Hölder inequality with \( p = q = 2 \) .
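The inequalities of Theorem 1.14 can be spot-checked numerically. The sketch below (our own; it treats a vector of \( d \) equally weighted values as a random variable on \( \{ 1,\ldots, d\} \), so the expectation is a plain average) verifies the Hölder, Minkowski, and Schwarz inequalities for the conjugate pair \( p = 3, q = 3/2 \).

```python
import random

random.seed(0)
d = 50
X = [random.uniform(-1.0, 1.0) for _ in range(d)]
Y = [random.uniform(-1.0, 1.0) for _ in range(d)]

# Expectation = uniform average over the d coordinates; ||X||_p as in (1.18).
E = lambda v: sum(v) / d
norm = lambda v, r: E([abs(t) ** r for t in v]) ** (1.0 / r)

p, q = 3.0, 1.5  # conjugate exponents: 1/p + 1/q = 1

# Hölder: E|XY| <= ||X||_p ||Y||_q
assert E([abs(x * y) for x, y in zip(X, Y)]) <= norm(X, p) * norm(Y, q) + 1e-12
# Minkowski: ||X + Y||_p <= ||X||_p + ||Y||_p
assert norm([x + y for x, y in zip(X, Y)], p) <= norm(X, p) + norm(Y, p) + 1e-12
# Schwarz: the p = q = 2 case of Hölder
assert E([abs(x * y) for x, y in zip(X, Y)]) <= norm(X, 2) * norm(Y, 2) + 1e-12
```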
The proof of these inequalities can be found, for example, in Chapter 2 of [Shi96]. It also follows that \( \parallel \cdot {\parallel }_{p} \) is a norm. One can further prove that \( {L}^{p}\left( \Omega \right) \) is a Banach space and \( {L}^{2}\left( \Omega \right) \) is a Hilbert space with inner product (1.19) \[ {\left( \mathbf{X},\mathbf{Y}\right) }_{{L}_{\omega }^{2}} = \mathbb{E}\left( {\mathbf{X},\mathbf{Y}}\right) \] Lemma 1.15 (Chebyshev’s inequality). Let \( \mathbf{X} \) be a random variable such that \( \mathbb{E}{\left| \mathbf{X}\right| }^{p} < \infty \) for some \( p > 0 \) . Then \( \left( {1.20}\right) \) \[ \mathbb{P}\{ \left| \mathbf{X}\right| \geq \lambda \} \leq \frac{1}{{\lambda }^{p}}\mathbb{E}{\left| \mathbf{X}\right| }^{p} \] for any positive constant \( \lambda \) . Proof. For any \( \lambda > 0 \) , \[ \mathbb{E}{\left| \mathbf{X}\right| }^{p} = {\int }_{{\mathbb{R}}^{d}}{\left| \mathbf{x}\right| }^{p}\mu \left( {d\mathbf{x}}\right) \geq {\int }_{\left| \mathbf{x}\right| \geq \lambda }{\left| \mathbf{x}\right| }^{p}\mu \left( {d\mathbf{x}}\right) \geq {\lambda }^{p}{\int }_{\left| \mathbf{x}\right| \geq \lambda }\mu \left( {d\mathbf{x}}\right) = {\lambda }^{p}\mathbb{P}\left( {\left| \mathbf{X}\right| \geq \lambda }\right) . \] It is straightforward to generalize the above estimate to any nonnegative increasing function \( f\left( x\right) \), which gives \( \mathbb{P}\left( {\left| \mathbf{X}\right| \geq \lambda }\right) \leq \mathbb{E}f\left( \left| \mathbf{X}\right| \right) /f\left( \lambda \right) \) if \( f\left( \lambda \right) > \) 0. Lemma 1.16 (Jensen’s inequality). Let \( \mathbf{X} \) be a random variable such that \( \mathbb{E}\left| \mathbf{X}\right| < \infty \) and \( \phi : \mathbb{R} \rightarrow \mathbb{R} \) is a convex function such that \( \mathbb{E}\left| {\phi \left( \mathbf{X}\right) }\right| < \infty \) . 
Then (1.21) \[ \mathbb{E}\phi \left( \mathbf{X}\right) \geq \phi \left( {\mathbb{E}\mathbf{X}}\right) . \] This follows directly from the definition of convex functions. Readers can also refer to [Chu01] for the details. Below we list some typical continuous distributions. Example 1.17 (Uniform distribution). The uniform distribution on a domain \( B \) (in \( {\mathbb{R}}^{d} \) ) is defined by the probability density function: \[ \rho \left( \mathbf{x}\right) = \left\{ \begin{array}{ll} \frac{1}{\operatorname{vol}\left( B\right) }, & \text{ if }\mathbf{x} \in B, \\ 0, & \text{ otherwise. } \end{array}\right. \] In one dimension, if \( B = \left\lbrack {0,1}\right\rbrack \) (denoted as \( \mathcal{U}\left\lbrack {0,1}\right\rbrack \) later), this reduces to \[ \rho \left( x\right) = \left\{ \begin{array}{ll} 1, & \text{ if }x \in \left\lbrack {0,1}\right\rbrack , \\ 0, & \text{ otherwise. } \end{array}\right. \] For the uniform distribution on \( \left\lbrack {0,1}\right\rbrack \), we have \[ \mathbb{E}X = \frac{1}{2},\;\operatorname{Var}\left( X\right) = \frac{1}{12}. \] Example 1.18 (Exponential distribution). The exponential distribution \( \mathcal{E}\left( \lambda \right) \) is defined by the probability density function: \[ \rho \left( x\right) = \left\{ \begin{array}{ll} 0, & \text{ if }x < 0, \\ \lambda {e}^{-{\lambda x}}, & \text{ if }x \geq 0. \end{array}\right. \] The mean and variance of \( \mathcal{E}\left( \lambda \right) \) are (1.22) \[ \mathbb{E}X = \frac{1}{\lambda },\;\operatorname{Var}\left( X\right) = \frac{1}{{\lambda }^{2}}. \] As an example, the waiting time of a Poisson process with rate \( \lambda \) is exponentially distributed with parameter \( \lambda \) . Example 1.19 (Normal distribution).
The one-dimensional normal distribution (also called Gaussian distribution) \( N\left( {\mu ,{\sigma }^{2}}\right) \) is defined by the probability density function: (1.23) \[ \rho \left( x\right) = \frac{1}{\sqrt{{2\pi }{\sigma }^{2}}}\exp \left( {-\frac{1}{2{\sigma }^{2}}{\left( x - \mu \right) }^{2}}\right) \] with mean \( \mu \) and variance \( {\sigma }^{2} \) . If \( \mathbf{\Sigma } \) is an \( n \times n \) symmetric positive definite matrix and \( \mathbf{\mu } \) is a vector in \( {\mathbb{R}}^{n} \), we can also define the \( n \) -dimensional normal distribution \( N\left( {\mathbf{\mu },\mathbf{\Sigma }}\right) \) through the density (1.24) \[ \rho \left( \mathbf{x}\right) = \frac{1}{{\left( 2\pi \right) }^{n/2}{\left( \det \mathbf{\Sigma }\right) }^{1/2}}\exp \left( {-\frac{1}{2}{\left( \mathbf{x} - \mathbf{\mu }\right) }^{T}{\mathbf{\Sigma }}^{-1}\left( {\mathbf{x} - \mathbf{\mu }}\right) }\right) . \] In this case, we have \[ \mathbb{E}\mathbf{X} = \mathbf{\mu },\;\operatorname{Cov}\left( \mathbf{X}\right) = \mathbf{\Sigma }. \] The normal distribution is the most important probability distribution, and random variables with normal distribution are also called Gaussian random variables. In the degenerate case, i.e., when the covariance matrix \( \mathbf{\Sigma } \) is not invertible (which corresponds to some components lying in the subspace spanned by the others), we need to define the Gaussian distribution via characteristic functions (see Section 1.9). Example 1.20 (Gibbs distribution). In equilibrium statistical mechanics, we are concerned with a probability distribution \( \pi \) over a state space \( S \) .
In the case of an \( n \) -particle system with continuous states, we have \( \mathbf{x} = \) \( \left( {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{n},{\mathbf{p}}_{1},\ldots ,{\mathbf{p}}_{n}}\right) \in S = {\mathbb{R}}^{6n} \), where \( {\mathbf{x}}_{k} \) and \( {\mathbf{p}}_{k} \) are the position and momentum of the \( k \) th particle, respectively. The PDF \( \pi \left( \mathbf{x}\right) \), called the Gibbs distribution, has a specific form: (1.25) \[ \pi \left( \mathbf{x}\right) = \frac{1}{Z}{e}^{-{\beta H}\left( \mathbf{x}\right) },\;\mathbf{x} \in {\mathbb{R}}^{6n},\beta = {\left( {k}_{B}T\right) }^{-1}, \] where \( H \) is the energy of the considered system, \( T \) is the absolute temperature, \( {k}_{B} \) is the Boltzmann constant, and (1.26) \[ Z = {\int }_{{\mathbb{R}}^{6n}}{e}^{-{\beta H}\left( \mathbf{x}\right) }d\mathbf{x} \] is the normalization constant, known as the partition function.
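Since the normalizing constant \( Z \) in (1.25) is typically intractable, Gibbs distributions are usually sampled by methods that use only ratios \( \pi \left( {\mathbf{x}}^{\prime }\right) /\pi \left( \mathbf{x}\right) \), in which \( Z \) cancels. Below is a minimal Metropolis sketch (the function name, proposal, and parameters are our assumptions, not from the text), run with \( H\left( x\right) = {x}^{2}/2 \) and \( \beta = 1 \), for which \( \pi \) is the standard normal density.

```python
import math
import random

random.seed(0)

def metropolis_gibbs(H, beta, n_steps, step=1.0, x0=0.0):
    """Sample pi(x) ~ exp(-beta * H(x)) / Z by the Metropolis rule; Z is
    never needed, since the acceptance probability
    min(1, exp(-beta * (H(x') - H(x)))) depends only on energy differences."""
    x, samples = x0, []
    for _ in range(n_steps):
        xp = x + random.uniform(-step, step)   # symmetric proposal
        if random.random() < math.exp(-beta * max(0.0, H(xp) - H(x))):
            x = xp                             # accept the move
        samples.append(x)
    return samples

# For H(x) = x^2/2 and beta = 1, pi is N(0, 1), so the sample variance of
# the chain should be close to 1.
xs = metropolis_gibbs(lambda x: 0.5 * x * x, beta=1.0, n_steps=100_000)
m = sum(xs) / len(xs)
var = sum((x - m) ** 2 for x in xs) / len(xs)
assert abs(var - 1.0) < 0.15
```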
1569_混合相依变量的极限理论(陆传荣)
Definition 6.2.1
Definition 6.2.1. The random field \( \left\{ {{\xi }_{n,\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}}\right\} \) is said to be \( \alpha \) -mixing if \( \alpha \left( {nx}\right) \rightarrow 0 \) as \( n \rightarrow \infty \), where \[ \alpha \left( {nx}\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathcal{J}}_{n}} \\ {d\left( {I, J}\right) \geq x} }}\mathop{\sup }\limits_{\substack{{A \in \sigma \left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in I}\right) } \\ {B \in \sigma \left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in J}\right) } }}\left| {P\left( {AB}\right) - P\left( A\right) P\left( B\right) }\right| . \] Definition 6.2.2. The random field \( \left\{ {{\xi }_{n,\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}}\right\} \) is said to be \( \rho \) -mixing if \( \rho \left( {nx}\right) \rightarrow 0 \) as \( n \rightarrow \infty \), where \[ \rho \left( {nx}\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathcal{J}}_{n}} \\ {d\left( {I, J}\right) \geq x} }}\mathop{\sup }\limits_{\substack{{X \in {L}_{2}\left( {\sigma \left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in I}\right) }\right) } \\ {Y \in {L}_{2}\left( {\sigma \left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in J}\right) }\right) } }}\frac{\left| \operatorname{Cov}\left( {X, Y}\right) \right| }{\sqrt{\operatorname{Var}X\operatorname{Var}Y}} \] and \( {L}_{2}\left( \mathcal{F}\right) \) is the set of \( {L}_{2} \) random variables measurable with respect to \( \mathcal{F} \) . Definition 6.2.3. The random field \( \left\{ {{\xi }_{n,\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}}\right\} \) is said to be symmetric \( \varphi \) -mixing if \( \varphi \left( {nx}\right) \rightarrow 0 \) as \( n \rightarrow \infty \), where \[ \varphi \left( {nx}\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathcal{J}}_{n}} \\ {d\left( {I, J}\right) \geq x} }}\mathop{\sup }\limits_{\substack{{A \in \sigma \left( {{\xi }_{n,\mathbf{j}};\mathbf{j} \in I}\right) } \\ {B \in \sigma \left( {{\xi }_{n,\mathbf{j}};\mathbf{j} \in J}\right) } }}\max \left( {\left| {P\left( {A \mid B}\right) - P\left( A\right) }\right| ,\left| {P\left( {B \mid A}\right) - P\left( B\right) }\right| }\right) . \] Definition 6.2.4.
The random field \( \left\{ {{\xi }_{n,\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}}\right\} \) is said to be absolutely regular if \( \beta \left( {nx}\right) \rightarrow 0 \) as \( n \rightarrow \infty \), where \[ \beta \left( {nx}\right) = \mathop{\sup }\limits_{{I, J \subset {\mathcal{J}}_{n}, d\left( {I, J}\right) \geq x}}\parallel \mathcal{L}\left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in I \cup J}\right) \] \[ - \mathcal{L}\left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in I}\right) \times \mathcal{L}\left( {{\xi }_{n,\mathbf{j}},\mathbf{j} \in J}\right) {\parallel }_{\mathrm{{Var}}}, \] where \( \mathcal{L}\left( {\xi \left( \cdot \right) }\right) \) is the distribution law of \( \{ \xi \left( \cdot \right) \} \) and \( \parallel \cdot {\parallel }_{\mathrm{{Var}}} \) is the variation norm. It is clear that \[ \alpha \left( {nx}\right) \leq \rho \left( {nx}\right) \leq {2\varphi }\left( {nx}\right) ,\;\alpha \left( {nx}\right) \leq \beta \left( {nx}\right) \leq \varphi \left( {nx}\right) . \] \( \left( {6.2.2}\right) \) Next, we introduce the metric entropy condition. We say that Borel sets \( A, B \) in \( {\mathcal{B}}^{d} \) are equivalent if \( \left| {A\bigtriangleup B}\right| = 0 \), and denote the set of equivalence classes by \( \mathcal{E} \) . Define \( {d}_{L}\left( {A, B}\right) = \left| {A\bigtriangleup B}\right| \) ; it can be proved that \( {d}_{L}\left( {\cdot , \cdot }\right) \) is a metric on \( \mathcal{E} \), and that \( \mathcal{E} \) forms a complete metric space under \( {d}_{L} \) . Definition 6.2.5. A subset \( \mathcal{A} \) of \( \mathcal{E} \) is called totally bounded with inclusion if for every \( \delta > 0 \) there is a finite set \( {\mathcal{A}}_{\delta } \subset \mathcal{E} \) such that for every \( A \in \mathcal{A} \) there exist \( {A}^{ + },{A}^{ - } \in {\mathcal{A}}_{\delta } \) with \( {A}^{ - } \subset A \subset {A}^{ + } \) and \( \left| {{A}^{ + } \smallsetminus {A}^{ - }}\right| \leq \delta .
\) Note that \( {\mathcal{A}}_{\delta } \) is a \( \delta \) -net with respect to \( {d}_{L} \) for \( \mathcal{A} \) . Let \( \mathcal{A} \) be a totally bounded subset of \( \mathcal{E} \) . Its closure \( \overline{\mathcal{A}} \) is complete and totally bounded, hence compact. Let \( C\left( \overline{\mathcal{A}}\right) \) be the space of continuous functions on \( \overline{\mathcal{A}} \) with the sup norm \( \parallel \cdot \parallel \) . Because \( \overline{\mathcal{A}} \) is compact, \( C\left( \overline{\mathcal{A}}\right) \) is separable. Thus \( C\left( \overline{\mathcal{A}}\right) \) is a complete, separable metric space. Let \( {CA}\left( \overline{\mathcal{A}}\right) \) be the set of everywhere additive elements of \( C\left( \overline{\mathcal{A}}\right) \), namely, elements \( f \) such that \( \;f\left( {A \cup B}\right) = f\left( A\right) + f\left( B\right) - f\left( {A \cap B}\right) \; \) whenever \( \;A, B, A \cup B, A \cap B \in \overline{\mathcal{A}}. \) It can be shown that for fixed \( \omega ,{Z}_{n}\left( \cdot \right) \in {CA}\left( \overline{\mathcal{A}}\right) \), i.e. \( {Z}_{n} \) are random elements of \( {CA}\left( \overline{\mathcal{A}}\right) \) . A standard Wiener process on \( \overline{\mathcal{A}} \) is a random element \( W \) of \( {CA}\left( \overline{\mathcal{A}}\right) \) whose finite dimensional laws are Gaussian with \( {EW}\left( A\right) = \) \( 0,{EW}\left( A\right) W\left( B\right) = \left| {A \cap B}\right| \) . In order that \( W \) should exist it is necessary (see Dudley 1973) that \( \mathcal{A} \) satisfies a metric entropy condition. Definition 6.2.6. Let \( \mathcal{A} \) be a totally bounded subset of \( \mathcal{E},{\mathcal{A}}_{\delta } \) be the smallest \( \delta \) -net of \( \mathcal{A} \) . Denote \[ N\left( {\delta ,\mathcal{A}}\right) = \operatorname{Card}{\mathcal{A}}_{\delta },\;H\left( \delta \right) = \log N\left( {\delta ,\mathcal{A}}\right).
\] \( \mathcal{A} \) is said to satisfy a metric entropy condition, or a convergent entropy integral, if \[ {\int }_{0}^{1}{\left( \frac{H\left( \delta \right) }{\delta }\right) }^{1/2}{d\delta } < \infty \] \( \left( {6.2.3}\right) \) Define the exponent of metric entropy of \( \mathcal{A} \) as \( r \mathrel{\text{:=}} \inf \left\{ {s > 0 : H\left( \delta \right) = O\left( {\delta }^{-s}\right) \text{ as }\delta \rightarrow 0}\right\} \) . If \( r < 1 \), then (6.2.3) holds. Remark 6.2.1. Some examples of classes of sets which satisfy the metric entropy condition are as follows: If \( {\mathcal{C}}^{d} \) denotes the convex subsets of \( {\left\lbrack 0,1\right\rbrack }^{d} \), then \( r = \left( {d - 1}\right) /2 \) (see Dudley 1974). If \( {\mathcal{I}}^{d} = \left\{ {(\mathbf{a},\mathbf{b}\rbrack : \mathbf{a},\mathbf{b} \in {\left\lbrack 0,1\right\rbrack }^{d}}\right\} \) as above, then \( r = 0 \) . If \( {\mathcal{P}}^{d, m} \) denotes the family of all polygonal regions of \( {\left\lbrack 0,1\right\rbrack }^{d} \) with no more than \( m \) vertices, then \( r = 0 \) (see Erickson 1981). If \( {\mathcal{E}}^{d} \) denotes the sets of all ellipsoidal regions in \( {\left\lbrack 0,1\right\rbrack }^{d} \), then \( r = 0 \) (see Gaenssler 1983). For the Vapnik-Červonenkis class \( \mathcal{V} \), which includes the above three examples, it is known that \( N\left( {\delta ,\mathcal{V},{d}_{L}}\right) = \operatorname{Card}{\mathcal{V}}_{\delta } \leq c{\delta }^{-v} \) for some \( c \) and \( v > 0 \) (Dudley 1978). When \( \left\{ {{\xi }_{\mathbf{t}},\mathbf{t} \in {\mathbb{Z}}^{d}}\right\} \) is independent, the weak convergence of \( {Z}_{n} \) to \( W \) has been studied by Bass and Pyke (1984, 1985), Alexander and Pyke (1986), Lu (1992), etc. For the mixing random field \( \left\{ {{\xi }_{n,\mathbf{j}},\mathbf{j} \in {\mathcal{J}}_{n}, n \geq }\right.
\) \( 1\} \), the weak convergence of \( {Z}_{n} \) to \( W \) was first discussed by Goldie and Greenwood (1986a, b). They proved the following theorem. Theorem 6.2.1. Assume that \( E{\xi }_{n,\mathbf{j}} = 0 \), and (i) for some \( s > 2,\left\{ {{\left| {n}^{d/2}{\xi }_{n,\mathbf{j}}\right| }^{s},\mathbf{j} \in {\mathcal{J}}_{n}, n \geq 1}\right\} \) is uniformly integrable; (ii) the exponent \( r \) of metric entropy (with inclusion) of \( \mathcal{A} \) satisfies \( r < 1 \) ; (iii) \( \beta \left( {nx}\right) = O\left( {\left( nx\right) }^{-b}\right) \) as \( {nx} \rightarrow \infty \), where the exponent \( b \) of absolute regularity satisfies \( b \geq {ds}/\left( {s - 2}\right) \) and \( b > d\left( {1 + r}\right) /\left( {1 - r}\right) \) ; (iv) the symmetric \( \varphi \) -mixing coefficients satisfy \[ \mathop{\sup }\limits_{{n \geq 1}}\mathop{\sum }\limits_{{j = 1}}^{\infty }{\varphi }^{1/2}\left( {{2}^{j}{n}^{-1}}\right) < \infty \] (v) for any null family \( \left\{ {{D}_{h},0 < h < {h}_{0}}\right\} \) in \( {\mathcal{I}}^{d} \) (a null family is a collection such that \( {D}_{h} \subseteq {D}_{{h}^{\prime }} \) for \( h \leq {h}^{\prime } \) and \( \left| {D}_{h}\right| = h \) for each \( h \) ), \[ \mathop{\lim }\limits_{{h \downarrow 0}}\mathop{\limsup }\limits_{{n \rightarrow \infty }}\left| {\frac{E{Z}_{n}^{2}\left( {D}_{h}\right) }{\left| {D}_{h}\right| } - 1}\right| = 0. \] Then \( {Z}_{n} \) converges weakly in \( {CA}\left( \overline{\mathcal{A}}\right) \) to \( W \) .
For a random field \( \left\{ {{\xi }_{\mathbf{t}},\mathbf{t} \in {\mathbb{Z}}^{d}}\right\} \), the versions of Definitions 6.2.1, 6.2.2, 6.2.3 and 6.2.4 are as follows: \[ \alpha \left( x\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathbb{Z}}^{d}} \\ {d\left( {I, J}\right) \geq x} }}\mathop{\sup }\limits_{\substack{{A \in \sigma \left( {{\xi }_{\mathbf{i}},\mathbf{i} \in I}\right) } \\ {B \in \sigma \left( {{\xi }_{\mathbf{j}},\mathbf{j} \in J}\right) } }}\left| {P\left( {AB}\right) - P\left( A\right) P\left( B\right) }\right| , \] \[ \rho \left( x\right) = \mathop{\sup }\limits_{\substack{{I, J \subset {\mathbb{Z}}^{d}} \\ {d\left( {I, J}\right) \geq x} }}\mathop{\sup }\limits_{\substack{{X \in {L}_{2}\left( {\sigma \left( {{\xi }_{\mathbf{i}},\mathbf{i} \in I}\right) }\right) } \\ {Y \in {L}_{2}\left( {\sigma \left( {{\xi }_{\mathbf{j}},\mathbf{j} \in J}\right) }\right) } }}\frac{\left| \operatorname{Cov}\left( {X, Y}\right) \right| }{\sqrt{\operatorname{Var}X\operatorname{Var}Y}}, \] with \( \varphi \left( x\right) \) and \( \beta \left( x\right) \) defined analogously.
1167_(GTM73)Algebra
Definition 7.3
Definition 7.3. Let \( \mathrm{K} \) be a commutative ring with identity and \( \mathrm{A},\mathrm{B} \) \( \mathrm{K} \) -algebras. (i) \( A \) subalgebra of \( \mathrm{A} \) is a subring of \( \mathrm{A} \) that is also a \( \mathrm{K} \) -submodule of \( \mathrm{A} \) . (ii) \( A \) (left, right, two-sided) algebra ideal of \( \mathrm{A} \) is a (left, right, two-sided) ideal of the ring \( \mathrm{A} \) that is also a \( \mathrm{K} \) -submodule of \( \mathrm{A} \) . (iii) A homomorphism [resp. isomorphism] of \( \mathrm{K} \) -algebras \( \mathrm{f} : \mathrm{A} \rightarrow \mathrm{B} \) is a ring homomorphism [isomorphism] that is also a \( \mathrm{K} \) -module homomorphism [isomorphism]. REMARKS. If \( A \) is a \( K \) -algebra, an ideal of the ring \( A \) need not be an algebra ideal of \( A \) (Exercise 4). If, however, \( A \) has an identity, then for all \( k \in K \) and \( a \in A \) \[ {ka} = k\left( {{1}_{A}a}\right) = \left( {k{1}_{A}}\right) a\text{ and }{ka} = \left( {ka}\right) {1}_{A} = a\left( {k{1}_{A}}\right) , \] with \( k{1}_{A}{\varepsilon A} \) . Consequently, for a left [resp. right] ideal \( J \) in the ring \( A \) , \[ {kJ} = \left( {k{1}_{A}}\right) J \subset J\;\left\lbrack {\text{ resp. }{kJ} = J\left( {k{1}_{A}}\right) \subset J}\right\rbrack . \] Therefore, if \( A \) has an identity, every (left, right, two-sided) ideal is also a (left, right, two-sided) algebra ideal. The quotient algebra of a \( K \) -algebra \( A \) by an algebra ideal \( I \) is now defined in the obvious way, as are the direct product and direct sum of a family of \( K \) -algebras. Tensor products furnish another way to manufacture new algebras. We first observe that if \( A \) and \( B \) are \( K \) -modules, then there is a \( K \) -module isomorphism \( \alpha : A{ \otimes }_{K}B \rightarrow B{ \otimes }_{K}A \) such that \( \alpha \left( {a \otimes b}\right) = b \otimes a\left( {{a\varepsilon A},{b\varepsilon B}}\right) \) ; see Exercise 2. Theorem 7.4.
Let \( \mathrm{A} \) and \( \mathrm{B} \) be algebras [with identity] over a commutative ring \( \mathrm{K} \) with identity. Let \( \pi \) be the composition \[ \left( {\mathrm{A}{\bigotimes }_{\mathrm{K}}\mathrm{B}}\right) {\bigotimes }_{\mathrm{K}}\left( {\mathrm{A}{\bigotimes }_{\mathrm{K}}\mathrm{B}}\right) \xrightarrow[]{{1}_{\mathrm{A}}\bigotimes \alpha \bigotimes {1}_{\mathrm{B}}}\left( {\mathrm{A}{\bigotimes }_{\mathrm{K}}\mathrm{A}}\right) {\bigotimes }_{\mathrm{K}}\left( {\mathrm{B}{\bigotimes }_{\mathrm{K}}\mathrm{B}}\right) \xrightarrow[]{{\pi }_{\mathrm{A}}\bigotimes {\pi }_{\mathrm{B}}}\mathrm{A}{\bigotimes }_{\mathrm{K}}\mathrm{B}, \] where \( {\pi }_{\mathrm{A}},{\pi }_{\mathrm{B}} \) are the product maps of \( \mathrm{A} \) and \( \mathrm{B} \) respectively. Then \( \mathrm{A}{\bigotimes }_{\mathrm{K}}\mathrm{B} \) is a \( \mathrm{K} \) - algebra [with identity] with product map \( \pi \) . PROOF. Exercise; note that for generators \( a \otimes b \) and \( {a}_{1} \otimes {b}_{1} \) of \( A{ \otimes }_{K}B \) the product is defined to be \[ \left( {a \otimes b}\right) \left( {{a}_{1} \otimes {b}_{1}}\right) = \pi \left( {a \otimes b \otimes {a}_{1} \otimes {b}_{1}}\right) = a{a}_{1} \otimes b{b}_{1}. \] Thus if \( A \) and \( B \) have identities \( {1}_{A},{1}_{B} \) respectively, then \( {1}_{A} \otimes {1}_{B} \) is the identity in \( A{ \otimes }_{K}B \) . The \( K \) -algebra \( A{\bigotimes }_{K}B \) of Theorem 7.4 is called the tensor product of the K-algebras \( A \) and \( B \) . Tensor products of algebras are useful in studying the structure of division algebras over a field \( K \) (Section IX.6). ## EXERCISES Note: \( K \) is always a commutative ring with identity. 1. Let \( \mathcal{C} \) be the category whose objects are all commutative \( K \) -algebras with identity and whose morphisms are all \( K \) -algebra homomorphisms \( f : A \rightarrow B \) such that \( f\left( {1}_{A}\right) = {1}_{B} \) . 
Then any two \( K \) -algebras \( A, B \) of \( \mathcal{C} \) have a coproduct. [Hint: consider \( A \rightarrow A{\bigotimes }_{K}B \leftarrow B \), where \( a \mapsto a\bigotimes {1}_{B} \) and \( b \mapsto {1}_{A}\bigotimes b \) .] 2. If \( A \) and \( B \) are unitary \( K \) -modules [resp. \( K \) -algebras], then there is an isomorphism of \( K \) -modules [resp. \( K \) -algebras] \( \alpha : A{\bigotimes }_{K}B \rightarrow B{\bigotimes }_{K}A \) such that \( \alpha \left( {a\bigotimes b}\right) \) \( = b \otimes a \) for all \( a \in A, b \in B \) . 3. Let \( A \) be a ring with identity. Then \( A \) is a \( K \) -algebra with identity if and only if there is a ring homomorphism of \( K \) into the center of \( A \) such that \( {1}_{K} \mapsto {1}_{A} \) . 4. Let \( A \) be a one-dimensional vector space over the rational field \( \mathbf{Q} \) . If we define \( {ab} = 0 \) for all \( a,{b\varepsilon A} \), then \( A \) is a \( \mathbf{Q} \) -algebra. Every proper additive subgroup of \( A \) is an ideal of the ring \( A \), but not an algebra ideal. 5. Let \( \mathcal{C} \) be the category of Exercise 1. If \( X \) is the set \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \), then the polynomial algebra \( K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) is a free object on the set \( X \) in the category \( \mathcal{C} \) . [Hint: Given an algebra \( A \) in \( \mathcal{C} \) and a map \( g : \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \rightarrow A \), apply Theorem III.5.5 to the unit map \( I : K \rightarrow A \) and the elements \( g\left( {x}_{1}\right) ,\ldots, g\left( {x}_{n}\right) \in A \) .] # FIELDS AND GALOIS THEORY The first principal theme of this chapter is the structure theory of fields. We shall study a field \( F \) in terms of a specified subfield \( K \) ( \( F \) is said to be an extension field of \( K \) ). 
The basic facts about field extensions are developed in Section 1, in particular, the distinction between algebraic and transcendental extensions. For the most part we deal only with algebraic extensions in this chapter. Arbitrary field extensions are considered in Chapter VI. The structure of certain fields and field extensions is thoroughly analyzed: simple extensions (Section 1); splitting fields (normal extensions) and algebraic closures (Section 3); finite fields (Section 5); and separable algebraic extensions (Sections 3 and 6). The Galois theory of field extensions (the other main theme of this chapter) had its historical origin in a classical problem in the theory of equations, which is discussed in detail in Sections 4 and 9. Various results of Galois theory have important applications, especially in the study of algebraic numbers (see E. Artin [48]) and algebraic geometry (see S. Lang [54]). The key idea of Galois theory is to relate a field extension \( K \subset F \) to the group of all automorphisms of \( F \) that fix \( K \) elementwise (the Galois group of the extension). A Galois field extension may be defined in terms of its Galois group (Section 2) or in terms of the internal structure of the extension (Section 3). The Fundamental Theorem of Galois theory (Section 2) states that there is a one-to-one correspondence between the intermediate fields of a (finite dimensional) Galois field extension and the subgroups of the Galois group of the extension. This theorem allows us to translate properties and problems involving fields, polynomials, and field extensions into group theoretic terms. Frequently, the corresponding problem in groups has a solution, whence the original problem in field theory can be solved. This is the case, for instance, with the classical problem in the theory of equations mentioned in the previous paragraph. 
We shall characterize those Galois field extensions whose Galois groups are finite cyclic (Section 7) or solvable (Section 9). The approximate interdependence of the sections of this chapter is as follows: ![a635b0ec-463f-4a06-bfab-0631c5cb2124_250_0.jpg](images/a635b0ec-463f-4a06-bfab-0631c5cb2124_250_0.jpg) A broken arrow \( A \rightarrow B \) indicates that an occasional result from section \( A \) is used in section \( B \), but that section \( B \) is essentially independent of section \( A \) . See page xviii for a description of a short basic course in fields and Galois theory. ## 1. FIELD EXTENSIONS The basic facts needed for the study of field extensions are presented first, followed by a discussion of simple extensions. Finally a number of essential properties of algebraic extensions are proved. In the appendix, which is not used in the sequel, several famous geometric problems of antiquity are settled, such as the trisection of an angle by ruler and compass constructions. Definition 1.1. A field \( \mathrm{F} \) is said to be an extension field of \( \mathrm{K} \) (or simply an extension of \( \mathrm{K} \) ) provided that \( \mathrm{K} \) is a subfield of \( \mathrm{F} \) . If \( F \) is an extension field of \( K \), then it is easy to see that \( {1}_{K} = {1}_{F} \) . Furthermore, \( F \) is a vector space over \( K \) (Definition IV.1.1). Throughout this chapter the dimension of the \( K \) -vector space \( F \) will be denoted by \( \left\lbrack {F : K}\right\rbrack \) rather than \( {\dim }_{K}F \) as previously. \( F \) is said to be a finite dimensional extension or infinite dimensional extension of \( K \) according as \( \left\lbrack {F : K}\right\rbrack \) is finite or infinite. Theorem 1.2. Let \( \mathrm{F} \) be an extension field of \( \mathrm{E} \) and \( \mathrm{E} \) an extension field of \( \mathrm{K} \) . 
Then \( \left\lbrack {\mathrm{F} : \mathrm{K}}\right\rbrack = \left\lbrack {\mathrm{F} : \mathrm{E}}\right\rbrack \left\lbrack {\mathrm{E} : \mathrm{K}}\right\rbrack \) . Furthermore \( \left\lbrack {\mathrm{F} : \mathrm{K}}\right\rbrack \) is finite if and only if \( \left\lbrack {\mathrm{F} : \mathrm{E}}\right\rbrack \) and \( \left\lbrack {\mathrm{E} : \mathrm{K}}\right\rbrack \) are finite.
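A concrete instance of Theorem 1.2 takes \( K = \mathbb{Q} \), \( E = \mathbb{Q}\left( \sqrt{2}\right) \), and \( F = \mathbb{Q}\left( {\sqrt{2}, i}\right) \). The following SymPy sketch is ours, not part of the text; it uses the standard fact that \( \sqrt{2} + i \) is a primitive element of \( F \):

```python
from sympy import I, Symbol, degree, minimal_polynomial, sqrt

x = Symbol('x')

# [E : K] = degree of the minimal polynomial of sqrt(2) over Q
deg_E = degree(minimal_polynomial(sqrt(2), x), x)       # 2

# sqrt(2) + i is a primitive element of F = Q(sqrt(2), i), so
# [F : K] = degree of its minimal polynomial over Q
deg_F = degree(minimal_polynomial(sqrt(2) + I, x), x)   # 4

# i satisfies x**2 + 1 over E and lies outside E, so [F : E] = 2,
# consistent with [F : K] = [F : E][E : K] = 2 * 2
assert deg_F == 2 * deg_E
print(minimal_polynomial(sqrt(2) + I, x))               # x**4 - 2*x**2 + 9
```

The same pattern checks the multiplicativity of degrees for any finite tower in which a primitive element of the top field is known.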
109_The rising sea Foundations of Algebraic Geometry
Definition 4.116
Definition 4.116. Given an arbitrary collection of simplices containing at least one chamber, their convex hull is the intersection of all convex chamber subcomplexes containing them. The convex hull is itself a convex chamber subcomplex containing the given simplices, so it is the smallest one. We will also call the convex hull the combinatorial convex hull in any context where the term "convex hull" might be ambiguous. An important example is the convex hull \( \Gamma \left( {A, C}\right) \) of a simplex \( A \) and a chamber \( C \) . Since apartments are convex, \( \Gamma \left( {A, C}\right) \) is contained in any apartment \( \sum \) containing \( A \) and \( C \) ; hence \( \Gamma \left( {A, C}\right) \) coincides with the convex hull of \( A \) and \( C \) in \( \sum \) . In particular, the convex hull \( \Gamma \left( {C, D}\right) \) of two chambers \( C, D \) is a convex chamber subcomplex whose chambers are precisely the chambers that can occur in a minimal gallery between \( C \) and \( D \) (see Example 3.133(a)). A special case of this played an important role in Section 4.7. The following example gives another interesting special case, involving roots. Here a root of \( \Delta \) is a subcomplex that is contained in some apartment \( \sum \) and is a root in \( \sum \) . It then follows easily from the building axioms that \( \alpha \) is a root in every apartment containing it and that \( \alpha \) has a well-defined boundary \( \partial \alpha \), equal to its boundary in any apartment containing it; see Exercise 4.61. Example 4.117. Suppose \( \alpha \) is a root of a spherical building \( \Delta \) . Let \( P \) be a panel in \( \partial \alpha \), let \( {P}^{\prime } \) be the panel in \( \partial \alpha \) opposite \( P \), and let \( {C}^{\prime } \) be the chamber of \( \alpha \) having \( {P}^{\prime } \) as a face. Then \( \Gamma \left( {P,{C}^{\prime }}\right) = \alpha \) . 
To see this, choose an apartment \( \sum \) containing \( \alpha \) . As we noted above, \( \Gamma \left( {P,{C}^{\prime }}\right) \) coincides with the convex hull of \( P \) and \( {C}^{\prime } \) in \( \sum \) . Our assertion now follows from Example 3.133(d). As another illustration of convex hulls, we give a characterization of the set of apartments containing a given root of a spherical building. This result will be used in Chapter 7. We need some notation. Given a root \( \alpha \) of \( \Delta \) and a panel \( P \in \partial \alpha \), set \[ \mathcal{C}\left( {P,\alpha }\right) \mathrel{\text{:=}} {\mathcal{C}}_{P} \smallsetminus \{ C\} \] where \( {\mathcal{C}}_{P} = \mathcal{C}{\left( \Delta \right) }_{ \geq P} \) and \( C \) is the unique chamber in \( \alpha \) having \( P \) as a face. One can visualize \( \overline{\mathcal{C}}\left( {P,\alpha }\right) \) as the set of chambers of \( \Delta \) that are "attached to \( \alpha \) along \( P \) ." We denote by \( \mathcal{A}\left( \alpha \right) \) the set of apartments of \( \Delta \) containing \( \alpha \) . Lemma 4.118. Let \( \alpha \) be a root in a spherical building \( \Delta \), and let \( P \) be a panel in \( \partial \alpha \) . Then there is a canonical bijection from \( \mathcal{C}\left( {P,\alpha }\right) \) to \( \mathcal{A}\left( \alpha \right) \) . It associates to any chamber \( D \in \mathcal{C}\left( {P,\alpha }\right) \) the convex hull of \( D \) and \( \alpha \) . Proof. Let \( C \) be the chamber in \( \alpha \) having \( P \) as a face, let \( {P}^{\prime } \) be the panel opposite \( P \) in \( \alpha \), and let \( {C}^{\prime } \) be the chamber in \( \alpha \) having \( {P}^{\prime } \) as a face. By Proposition 4.103, \( C \) is the unique chamber in \( {\mathcal{C}}_{P} \) that is not opposite \( {C}^{\prime } \) . 
In particular, every chamber \( D \in \mathcal{C}\left( {P,\alpha }\right) \) is opposite \( {C}^{\prime } \) ; hence the convex hull \( \Gamma \left( {D,{C}^{\prime }}\right) \) is an apartment \( {\sum }^{\prime } \) . Now \( \alpha = \Gamma \left( {P,{C}^{\prime }}\right) \) by Example 4.117, so \( {\sum }^{\prime } \) contains \( \alpha \) and hence is the convex hull of \( D \) and \( \alpha \) . We therefore have a map \( \mathcal{C}\left( {P,\alpha }\right) \rightarrow \mathcal{A}\left( \alpha \right) \) that sends a chamber \( D \in \mathcal{C}\left( {P,\alpha }\right) \) to the convex hull \( {\sum }^{\prime } \) of \( D \) and \( \alpha \) . It is easily seen to be a bijection; the inverse associates to an apartment \( {\sum }^{\prime } \in \mathcal{A}\left( \alpha \right) \) the unique chamber \( D \in {\sum }^{\prime } \) opposite \( {C}^{\prime } \) . Exercise 4.119. Let \( \Delta \) be a spherical building of diameter \( d \) . Show that the roots of \( \Delta \) are precisely the convex hulls \( \Gamma \left( {C, D}\right) \), where \( C \) and \( D \) are chambers such that \( d\left( {C, D}\right) = d - 1 \) . ## *4.11.2 General Subcomplexes The results of this optional subsection will not be needed later, so we will be brief. As in Section 3.6.6, we can extend the notion of convexity to arbitrary subcomplexes: Definition 4.120. Let \( {\Delta }^{\prime } \) be a subcomplex of \( \Delta \) . We say that \( {\Delta }^{\prime } \) is a convex subcomplex if it is closed under products. See Remark 3.132 for the intuition behind this definition. Exercise 4.124 below provides further motivation. There is a smallest convex subcomplex containing any given collection of simplices, called their convex hull. As above, the convex hull \( \Gamma \left( {A, B}\right) \) of two simplices \( A, B \) is the same as their convex hull in any apartment containing them. We record the one useful fact about general convex subcomplexes: Proposition 4.121. 
Every convex subcomplex \( {\Delta }^{\prime } \) of \( \Delta \) is a chamber complex. Proof. Let \( A \) and \( B \) be maximal simplices of \( {\Delta }^{\prime } \), choose an apartment \( \sum \) containing them, and let \( {\sum }^{\prime } \mathrel{\text{:=}} \sum \cap {\Delta }^{\prime } \) . Then \( {\sum }^{\prime } \) is a convex subcomplex of \( \sum \) ; hence it is a chamber complex by Proposition 3.136. Since \( A \) and \( B \) are maximal in \( {\sum }^{\prime } \), they have the same dimension and are connected by a gallery in \( {\sum }^{\prime } \) ; this is also a gallery in \( {\Delta }^{\prime } \) . Remark 4.122. Our definition of convexity seems to us to be the most useful and intuitive one, for reasons mentioned above. But the reader should be aware that there is a different definition in the literature, due to Tits [247, 1.5], according to which a subcomplex is called convex if it is an intersection of convex chamber subcomplexes. In view of Proposition 4.115, convexity in the sense of Tits (or "T-convexity" for short) implies convexity in our sense. And Proposition 3.137 says that T-convexity is equivalent to convexity if \( \Delta \) consists of a single apartment. But convexity in our sense does not imply T-convexity in general. [The point is that we have no tools in general for constructing "enough" convex chamber subcomplexes. In the case of a single apartment, however, we can use roots.] In fact, H. Van Maldeghem has pointed out to us that there are infinitely many counterexamples. The remainder of this section is devoted to a detailed description of the smallest of these. In Exercise 4.24 we constructed a generalized quadrangle \( Q \) whose associated building \( \Delta \) of type \( {\mathrm{C}}_{2} \) has the following properties: (a) Every vertex is a face of exactly 3 edges. (b) \( \Delta \) contains 5 pairwise opposite vertices. 
Clearly any set \( X \) of pairwise opposite vertices forms a 0 -dimensional convex subcomplex of \( \Delta \), since \( {uv} = u \) if \( u \) and \( v \) are opposite (see part (3) of Proposition 3.112). But the following proposition shows that \( X \) is not T-convex if \( \left| X\right| \) is at least 4 . Proposition 4.123. With \( \Delta \) as above, let \( X \) be a set of 4 pairwise opposite vertices. Then the only convex chamber subcomplex of \( \Delta \) containing \( X \) is \( \Delta \) itself. Proof. Let \( {\Delta }^{\prime } \) be a convex chamber subcomplex containing \( X \) . We will show that \( {\Delta }^{\prime } \) is a thick subbuilding of \( \Delta \), from which it follows (by (a) above) that \( {\Delta }^{\prime } = \Delta \) . The first step is to show that \( {\Delta }^{\prime } \) contains an apartment of \( \Delta \), so that \( {\Delta }^{\prime } \) is a subbuilding by Exercise 4.125 below. The reader is advised to draw a picture as we proceed; the final result is in Figure 4.7. Assume for definiteness that the elements of \( X \) are points of the quadrangle \( Q \), and let \( x,{x}^{\prime } \) be two of these points. It follows easily from the convexity assumption that \( {\Delta }^{\prime } \) contains a geodesic from \( x \) to \( {x}^{\prime } \), i.e., a path of length 4 (see Exercise 4.48). Denote the vertices along this path by \( \left( {x, L, p, M,{x}^{\prime }}\right) \) . The vertex \( p \) is at distance 2 from at most 3 elements of \( X \), since each of the 3 lines of \( Q \) containing \( p \) can be incident with at most one element of \( X \) . So, since \( \left| X\right| = 4 \), there is a \( y \in X \) that is opposite \( p \) . Using \( d\left( {-, - }\right) \) to denote the graph distance between two vertices, we have \( d\left( {y, M}\right) = 3 \) and hence, by the exercise just cited, \( {\Delta }^{\prime } \) contains the geodesic \( \left( {y,{L}^{\prime }, q, M}\right) \) from \( y \) to \( M \) . 
Note that \( q \) is different from \( p \) and \( {x}^{\prime } \) because \( p \) and \( {x}^{\prime } \) are both opposite \( y \) ; in the case of \( {x}^{\prime } \), we are using here the assumption that any two elements of \( X \) are opposite. For the same reason, \( {L}^{\prime } \neq M \) . It follows tha
1099_(GTM255)Symmetry, Representations, and Invariants
Definition 9.3.16
Definition 9.3.16. A semistandard skew tableau of shape \( \lambda /\mu \) and weight \( v \in {\mathbb{N}}^{n} \) is an assignment of positive integers to the boxes of the skew Ferrers diagram \( \lambda /\mu \) such that 1. if \( {v}_{j} \neq 0 \) then the integer \( j \) occurs in \( {v}_{j} \) boxes for \( j = 1,\ldots, n \) , 2. the integers in each row are nondecreasing from left to right, and 3. the integers in each column are strictly increasing from top to bottom. By condition (1), the weight \( v \) of a semistandard skew tableau of shape \( \lambda /\mu \) satisfies \( \left| v\right| + \left| \mu \right| = \left| \lambda \right| \) . Let \( \operatorname{SSTab}\left( {\lambda /\mu, v}\right) \) denote the set of semistandard skew tableaux of shape \( \lambda /\mu \) and weight \( v \) . There is the following stability property for the weights of semistandard skew tableaux: if \( p > n \) and we set \( {v}^{\prime } = \left\lbrack {{v}_{1},\ldots ,{v}_{n},0,\ldots ,0}\right\rbrack \) (with \( p - n \) trailing zeros adjoined), then \( \operatorname{SSTab}\left( {\lambda /\mu, v}\right) = \operatorname{SSTab}\left( {\lambda /\mu ,{v}^{\prime }}\right) \) (this is clear from condition (1) in the definition). For example, there are two semistandard skew tableaux of shape \( \left\lbrack {3,2}\right\rbrack /\left\lbrack {2,1}\right\rbrack \) and weight \( \left\lbrack {1,1}\right\rbrack \), namely ![5eec4b33-2d74-487e-bbfe-c2de39f4bff7_438_0.jpg](images/5eec4b33-2d74-487e-bbfe-c2de39f4bff7_438_0.jpg) (9.40) When \( \mu = 0 \) we set \( \lambda /\mu = \lambda \) . A filling of the Ferrers diagram of \( \lambda \) that satisfies conditions (1), (2), and (3) in Definition 9.3.16 is called a semistandard tableau of shape \( \lambda \) and weight \( v \) . For example, take \( \lambda = \left\lbrack {2,1}\right\rbrack \in \operatorname{Par}\left( 3\right) \) . 
If \( v = \left\lbrack {2,1}\right\rbrack \), then \( \operatorname{SSTab}\left( {\lambda, v}\right) \) consists of the single tableau with two 1's in its first row and a 2 in its second row, while if \( v = \left\lbrack {1,1,1}\right\rbrack \), then \( \operatorname{SSTab}\left( {\lambda, v}\right) \) consists of the two standard tableaux ![5eec4b33-2d74-487e-bbfe-c2de39f4bff7_438_1.jpg](images/5eec4b33-2d74-487e-bbfe-c2de39f4bff7_438_1.jpg) In general, for any \( \lambda \in \operatorname{Par}\left( k\right) \), if \( v = \lambda \) then the set \( \operatorname{SSTab}\left( {\lambda ,\lambda }\right) \) consists of a single tableau with the number \( j \) in all the boxes of row \( j \), for \( 1 \leq j \leq \operatorname{depth}\left( \lambda \right) \) . At the other extreme, if \( v = {1}^{k} \in {\mathbb{N}}^{k} \) (the weight \( {\det }_{k} \) ) then every \( A \in \operatorname{SSTab}\left( {\lambda, v}\right) \) is a standard tableau, since all the entries in \( A \) are distinct. Thus in this case we have \( \operatorname{SSTab}\left( {\lambda ,{\det }_{k}}\right) = \operatorname{STab}\left( \lambda \right) . \) We already encountered semistandard tableaux in Section 8.1.2, where each \( n \)-fold branching pattern was encoded by a semistandard tableau arising from the \( {\mathbf{{GL}}}_{n} \rightarrow {\mathbf{{GL}}}_{n - 1} \) branching law. Let \( \lambda \in \operatorname{Par}\left( {k, n}\right) \) . By Corollary 8.1.7 the irreducible \( \mathbf{{GL}}\left( {n,\mathbb{C}}\right) \) module \( {F}_{n}^{\lambda } \) has a basis \( \left\{ {{u}_{A} : v \in {\mathbb{N}}^{n}, A \in \operatorname{SSTab}\left( {\lambda, v}\right) }\right\} \), and \( {u}_{A} \) has weight \( v \) for the diagonal torus of \( \mathbf{{GL}}\left( {n,\mathbb{C}}\right) \) . Thus the Kostka coefficients (weight multiplicities) are \( {K}_{\lambda v} = \left| {\operatorname{SSTab}\left( {\lambda, v}\right) }\right| \) . We now introduce the second new concept needed for the Littlewood-Richardson rule. 
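These small counts can be confirmed by exhaustive search. The following Python sketch (our own helper, feasible only for tiny shapes) enumerates all distinct fillings and keeps the semistandard ones, computing \( {K}_{\lambda v} = \left| {\operatorname{SSTab}\left( {\lambda, v}\right) }\right| \):

```python
from itertools import permutations

def kostka(shape, weight):
    """Count semistandard tableaux of the given shape and weight by
    brute-force enumeration of all distinct fillings."""
    letters = [j + 1 for j, m in enumerate(weight) for _ in range(m)]
    boxes = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    count = 0
    for filling in set(permutations(letters)):
        T = dict(zip(boxes, filling))
        rows_weak = all(T[(r, c)] <= T[(r, c + 1)]      # rows nondecreasing
                        for (r, c) in boxes if (r, c + 1) in T)
        cols_strict = all(T[(r, c)] < T[(r + 1, c)]     # columns increasing
                          for (r, c) in boxes if (r + 1, c) in T)
        count += rows_weak and cols_strict
    return count

print(kostka([2, 1], [2, 1]))     # 1: the unique tableau of weight lambda
print(kostka([2, 1], [1, 1, 1]))  # 2: the two standard tableaux
```

The stability property is also visible here: appending trailing zeros to the weight leaves the count unchanged.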
Call an ordered string \( w = {x}_{1}{x}_{2}\cdots {x}_{r} \) of positive integers \( {x}_{j} \) a word, and the integers \( {x}_{j} \) the letters in \( w \) . If \( T \) is a semistandard skew tableau with \( n \) rows, then the row word of \( T \) is the juxtaposition \( {w}_{\text{row }}\left( T\right) = {R}_{n}\cdots {R}_{1} \), where \( {R}_{j} \) is the word formed by the entries in the \( j \) th row of \( T \) . Definition 9.3.17. A word \( w = {x}_{1}{x}_{2}\cdots {x}_{r} \) is a reverse lattice word if when \( w \) is read from right to left from the end \( {x}_{r} \) to any letter \( {x}_{s} \), the sequence \( {x}_{r},{x}_{r - 1},\ldots ,{x}_{s} \) contains at least as many 1’s as it does 2’s, at least as many 2’s as 3’s, and so on for all positive integers. A semistandard skew tableau \( T \) is an \( L - R \) skew tableau if \( {w}_{\text{row }}\left( T\right) \) is a reverse lattice word. For example, the two skew tableaux (9.40) have row words 21 and 12, respectively. The first is a reverse lattice word, but the second is not. Littlewood-Richardson Rule: The L-R coefficient \( {c}_{\mu \nu }^{\lambda } \) is the number of L-R skew tableaux of shape \( \lambda /\mu \) and weight \( v \) . See Macdonald [106], Sagan [128], or Fulton [51] for a proof of the correctness of the L-R rule. The representation-theoretic portion of the proof (based on the branching law) is outlined in the exercises. Note that from their representation-theoretic definition the L-R coefficients have the symmetry \( {c}_{\mu v}^{\lambda } = {c}_{v\mu }^{\lambda } \) ; however, this symmetry is not obvious from the L-R rule. In applying the rule, it is natural to take \( \left| \mu \right| \geq \left| v\right| \) . ## Examples 1. Pieri's rule (Corollary 9.2.4) is a direct consequence of the L-R rule. To see this, take a Ferrers diagram \( \mu \) of depth at most \( n - 1 \) and a diagram \( v \) of depth one. 
Let \( \lambda \) be a diagram that contains \( \mu \) and has \( \left| \mu \right| + \left| v\right| \) boxes. If \( T \in \operatorname{SSTab}\left( {\lambda /\mu, v}\right) \), then 1 has to occur in each of the boxes of \( T \), since \( {v}_{j} = 0 \) for \( j > 1 \) . In this case \( {w}_{\text{row }}\left( T\right) = 1\cdots 1 \) (with \( \left| v\right| \) occurrences of 1) is a reverse lattice word, and so \( T \) is an L-R skew tableau. Since the entries in the columns of \( T \) are strictly increasing, each box of \( T \) must be in a different column. In particular, \( \lambda \) has depth at most \( n \) . Thus to each skew Ferrers diagram \( \lambda /\mu \) with at most one box in each column there is a unique L-R skew tableau \( T \) of shape \( \lambda /\mu \) and weight \( v \), and hence \( {c}_{\mu, v}^{\lambda } = 1 \) . If \( \lambda /\mu \) has a column with more than one box, then \( {c}_{\mu, v}^{\lambda } = 0 \) . This is Pieri’s rule. 2. Let \( \mu = \left\lbrack {2,1}\right\rbrack \) and \( v = \left\lbrack {1,1}\right\rbrack \) . Consider the decomposition of the tensor product representation \( {F}_{n}^{\mu } \otimes {F}_{n}^{v} \) of \( \mathbf{{GL}}\left( {n,\mathbb{C}}\right) \) for \( n \geq 4 \) . The irreducible representations \( {F}_{n}^{\lambda } \) that occur have \( \left| \lambda \right| = \left| \mu \right| + \left| v\right| = 5 \) and \( \lambda \supset \mu \) . The highest (Cartan) component has \( \lambda = \mu + v = \left\lbrack {3,2}\right\rbrack \) and occurs with multiplicity one by Proposition 5.5.19. This can also be seen from the L-R rule: The semistandard skew tableaux of shape \( \left\lbrack {3,2}\right\rbrack /\left\lbrack {2,1}\right\rbrack \) and weight \( v \) are shown in (9.40) and only one of them is an L-R tableau. 
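The multiplicities in this example can also be reproduced by machine. Here is a brute-force sketch (our own helper, practical only for tiny shapes) that counts L-R skew tableaux of shape \( \lambda /\mu \) and weight \( v \) directly from the definitions:

```python
from itertools import permutations

def lr_coeff(lam, mu, nu):
    """Littlewood-Richardson coefficient: count the L-R skew tableaux
    of shape lam/mu and weight nu by brute-force enumeration."""
    mu = mu + [0] * (len(lam) - len(mu))
    letters = [j + 1 for j, m in enumerate(nu) for _ in range(m)]
    boxes = [(r, c) for r in range(len(lam)) for c in range(mu[r], lam[r])]
    count = 0
    for filling in set(permutations(letters)):
        T = dict(zip(boxes, filling))
        semistandard = (
            all(T[(r, c)] <= T[(r, c + 1)]      # rows weakly increase
                for (r, c) in boxes if (r, c + 1) in T) and
            all(T[(r, c)] < T[(r + 1, c)]       # columns strictly increase
                for (r, c) in boxes if (r + 1, c) in T))
        if not semistandard:
            continue
        # row word: rows read bottom to top, each row left to right
        word = [T[b] for b in sorted(boxes, key=lambda b: (-b[0], b[1]))]
        counts = [0] * (len(nu) + 1)
        reverse_lattice = True
        for entry in reversed(word):            # read right to left
            counts[entry] += 1
            if entry > 1 and counts[entry] > counts[entry - 1]:
                reverse_lattice = False
                break
        count += reverse_lattice
    return count

for lam in ([3, 2], [3, 1, 1], [2, 2, 1], [2, 1, 1, 1]):
    print(lam, lr_coeff(lam, [2, 1], [1, 1]))   # multiplicity 1 in each case
```

All four coefficients come out to 1, and swapping \( \mu \) and \( v \) returns the same values, illustrating the symmetry \( {c}_{\mu v}^{\lambda } = {c}_{v\mu }^{\lambda } \) that is not obvious from the rule itself.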
The other possible highest weights \( \lambda \) are \( \left\lbrack {3,1,1}\right\rbrack ,\left\lbrack {2,2,1}\right\rbrack \), and \( \left\lbrack {2,1,1,1}\right\rbrack \) (since we are assuming \( n \geq 4 \) ). The corresponding semistandard skew tableaux of shape \( \lambda /\mu \) and weight \( v \) are as follows: ![5eec4b33-2d74-487e-bbfe-c2de39f4bff7_439_0.jpg](images/5eec4b33-2d74-487e-bbfe-c2de39f4bff7_439_0.jpg) In each case exactly one of the skew tableaux satisfies the L-R condition. Hence the \( \mathrm{L} - \mathrm{R} \) rule implies that \[ {F}_{n}^{\left\lbrack 2,1\right\rbrack } \otimes {F}_{n}^{\left\lbrack 1,1\right\rbrack } = {F}_{n}^{\left\lbrack 3,2\right\rbrack } \oplus {F}_{n}^{\left\lbrack 3,1,1\right\rbrack } \oplus {F}_{n}^{\left\lbrack 2,2,1\right\rbrack } \oplus {F}_{n}^{\left\lbrack 2,1,1,1\right\rbrack }. \] (9.41) In particular, the multiplicities are all one. As a check, we can calculate the dimension of each of these representations by the Weyl dimension formula (7.18). This yields \[ \dim {F}_{n}^{\left\lbrack 2,1\right\rbrack } \cdot \dim {F}_{n}^{\left\lbrack 1,1\right\rbrack } = \left( {\left( {n + 1}\right) n\left( {n - 1}\right) /3}\right) \cdot \left( {n\left( {n - 1}\right) /2}\right) , \] \[ \dim {F}_{n}^{\left\lbrack 3,2\right\rbrack } = \left( {n + 2}\right) \left( {n + 1}\right) {n}^{2}\left( {n - 1}\right) /{24}, \] \[ \dim {F}_{n}^{\left\lbrack 3,1,1\right\rbrack } = \left( {n + 2}\right) \left( {n + 1}\right) n\left( {n - 1}\right) \left( {n - 2}\right) /{20}, \] \[ \dim {F}_{n}^{\left\lbrack 2,2,1\right\rbrack } = \left( {n + 1}\right) {n}^{2}\left( {n - 1}\right) \left( {n - 2}\right) /{24} \] \[ \dim {F}_{n}^{\left\lbrack 2,1,1,1\right\rbrack } = \left( {n + 1}\right) n\left( {n - 1}\right) \left( {n - 2}\right) \left( {n - 3}\right) /{30}. \] These formulas imply that both sides of (9.41) have the same dimension. ## 9.3.6 Exercises 1. 
Let \( \lambda \in \operatorname{Par}\left( k\right) \) and let \( {\lambda }^{t} \) be the transposed partition. Prove that \( {\sigma }^{{\lambda }^{t}} \cong \operatorname{sgn} \otimes {\sigma }^{\lambda } \) (HINT: Observe that \( \operatorname{Row}\left( A\right) = \operatorname{Col}\left( {A}^{t}\right) \) for \( A \in \operatorname{Tab}\left( \lambda \right) \) ; now apply Lemma 9.3.
109_The rising sea Foundations of Algebraic Geometry
Definition 3.86
Definition 3.86. Let \( \left( {W, S}\right) \) be a Coxeter system with Coxeter matrix \( M \) . We say that a Coxeter complex \( \sum \) is of type \( \left( {W, S}\right) \) (or of type \( M \) ) if \( \sum \) comes equipped with a type function having values in \( S \) such that the Coxeter matrix of \( \sum \) is \( M \) or, equivalently, such that there is a type-preserving isomorphism \( \sum \rightarrow \sum \left( {W, S}\right) \) . We can then identify \( W \) with the Weyl group \( {W}_{M} \) of \( \sum \) . We now wish to define a function \( \delta : \mathcal{C}\left( \sum \right) \times \mathcal{C}\left( \sum \right) \rightarrow {W}_{M} \), called the Weyl distance function, such that \[ d\left( {{C}_{1},{C}_{2}}\right) = l\left( {\delta \left( {{C}_{1},{C}_{2}}\right) }\right) \] (3.5) for any two chambers \( {C}_{1},{C}_{2} \) . Intuitively, \( \delta \left( {{C}_{1},{C}_{2}}\right) \) is something like a vector pointing from \( {C}_{1} \) to \( {C}_{2} \) ; it tells us the distance from \( {C}_{1} \) to \( {C}_{2} \) as well as what "direction" to go in to get from \( {C}_{1} \) to \( {C}_{2} \) . To define \( \delta \left( {{C}_{1},{C}_{2}}\right) \), choose an arbitrary gallery from \( {C}_{1} \) to \( {C}_{2} \), let \( \left( {{s}_{1},{s}_{2},\ldots ,{s}_{d}}\right) \) be its type, and set \[ \delta \left( {{C}_{1},{C}_{2}}\right) \mathrel{\text{:=}} {s}_{1}{s}_{2}\cdots {s}_{d} \in {W}_{M}. \] (3.6) To see that the right-hand side is independent of the choice of gallery, we may assume that \( \sum = \sum \left( {W, S}\right) \) with its canonical type function. Then we can identify \( \mathcal{C}\left( \sum \right) \) with \( W \), and a gallery of type \( \left( {{s}_{1},\ldots ,{s}_{d}}\right) \) from a chamber \( {w}_{1} \) to a chamber \( {w}_{2} \) has the form \( {w}_{1},{w}_{1}{s}_{1},\ldots ,{w}_{1}{s}_{1}\cdots {s}_{d} = {w}_{2} \) . 
Hence the righthand side of (3.6) is equal to \( {w}_{1}^{-1}{w}_{2} \), which is indeed independent of the choice of gallery. See Figure 3.2 for an example, where \( \delta \left( {C, D}\right) = {utstu} = {ustsu} \) . This discussion gives us a concrete interpretation of \( \delta \) as a "difference" map \( W \times W \rightarrow W \), sending \( \left( {{w}_{1},{w}_{2}}\right) \) to \( {w}_{1}^{-1}{w}_{2} \), when \( \sum = \sum \left( {W, S}\right) \) . Equivalently, \[ \delta \left( {{w}_{1}C,{w}_{2}C}\right) = {w}_{1}^{-1}{w}_{2}, \] (3.7) where \( C \) is the fundamental chamber. In particular, \[ \delta \left( {C,{wC}}\right) = w. \] (3.8) One can also deduce from the discussion that for arbitrary \( \sum \), galleries from \( {C}_{1} \) to \( {C}_{2} \) are in \( 1 - 1 \) correspondence with decompositions of \( \delta \left( {{C}_{1},{C}_{2}}\right) \), and minimal galleries correspond to reduced decompositions. Finally, returning to an arbitrary Coxeter complex of type \( \left( {W, S}\right) \) we show that \( \delta \) extends in a natural way to a function on arbitrary pairs of simplices. Let \( A \) be a simplex of cotype \( J \) and let \( B \) be a simplex of cotype \( K\left( {J, K \subset S}\right) \) . Consider the set of elements \( \delta \left( {C, D}\right) \), where \( C \) and \( D \) are chambers with \( C \geq A \) and \( D \geq B \) . We claim that this set is a double coset \( {W}_{J}w{W}_{K} \) . To see this, we may assume \( \sum = \sum \left( {W, S}\right) \) with its canonical type function. Thus \( A \) is a coset \( {w}_{1}{W}_{J}, B \) is a coset \( {w}_{2}{W}_{K}, C \) corresponds to an arbitrary element of \( A \) , and \( D \) corresponds to an arbitrary element of \( B \) . 
The set of elements \( \delta \left( {C, D}\right) \) is then the set of differences \( {A}^{-1}B \mathrel{\text{:=}} \left\{ {{a}^{-1}b \mid a \in A, b \in B}\right\} \), which is the double coset \( {\left( {w}_{1}{W}_{J}\right) }^{-1}\left( {{w}_{2}{W}_{K}}\right) = {W}_{J}{w}_{1}^{-1}{w}_{2}{W}_{K} \), whence the claim. We can now define \( \delta \left( {A, B}\right) \) to be the element of minimal length in the double coset (see Proposition 2.23). Note that pairs \( C, D \) with \( C \geq A, D \geq B \) , and \( \delta \left( {C, D}\right) = \delta \left( {A, B}\right) \) are precisely those pairs such that there is a minimal gallery from \( A \) to \( B \) of the form \[ C = {C}_{0},\ldots ,{C}_{l} = D. \] We have proved the following: Proposition 3.87. Let \( \sum \) be a Coxeter complex of type \( \left( {W, S}\right) \), and let \( A \) and \( B \) be arbitrary simplices. Then there is an element \( \delta \left( {A, B}\right) \in W \) such that \[ \delta \left( {A, B}\right) = \delta \left( {{C}_{0},{C}_{l}}\right) \] for any minimal gallery \( {C}_{0},\ldots ,{C}_{l} \) from \( A \) to \( B \) . In particular, \[ d\left( {A, B}\right) = l\left( {\delta \left( {A, B}\right) }\right) . \] Note that reduced decompositions of \( \delta \left( {A, B}\right) \) are not necessarily in 1-1 correspondence with minimal galleries from \( A \) to \( B \), since in general there is more than one possible \( {C}_{0} \) that can start such a gallery. The reader can easily find examples of this in Figure 3.2. We will get a clearer understanding of this phenomenon in the next section. Exercise 3.88. Prove the following strong version of the triangle inequality: Given three chambers \( {C}_{1},{C}_{2},{C}_{3} \), we have \( \delta \left( {{C}_{1},{C}_{3}}\right) = \delta \left( {{C}_{1},{C}_{2}}\right) \delta \left( {{C}_{2},{C}_{3}}\right) \) . 
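The difference-map description of \( \delta \) is easy to verify computationally. The following is a minimal Python sketch of ours, taking \( W = {S}_{3} \) with \( S \) the two adjacent transpositions and identifying \( \mathcal{C}\left( \sum \right) \) with \( W \):

```python
from itertools import permutations, product

def compose(p, q):
    """Group product pq; permutations are tuples of 0-based images."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def length(p):
    """Coxeter length in S_n = number of inversions of p."""
    n = len(p)
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))

def delta(w1, w2):
    """Weyl distance between the chambers w1 and w2 of Sigma(W,S)."""
    return compose(inverse(w1), w2)

W = list(permutations(range(3)))       # chambers of Sigma(S_3, {s,t})
e, s, t = (0, 1, 2), (1, 0, 2), (0, 2, 1)

# delta(C, wC) = w, and the gallery (e, s, st, sts) of reduced type
# (s, t, s) realizes d(C, wC) = l(delta(C, wC)) = 3
w = compose(compose(s, t), s)
assert delta(e, w) == w and length(w) == 3

# the strong triangle inequality of Exercise 3.88
assert all(delta(w1, w3) == compose(delta(w1, w2), delta(w2, w3))
           for w1, w2, w3 in product(W, repeat=3))
print("checks pass")
```

The triangle identity is immediate from the difference-map formula, since \( {w}_{1}^{-1}{w}_{2} \cdot {w}_{2}^{-1}{w}_{3} = {w}_{1}^{-1}{w}_{3} \); the code merely confirms the bookkeeping.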
## 3.6 Products and Convexity This section gives analogues for Coxeter complexes of some of the results of Chapter 1 on hyperplane arrangements. The results should all be believable because of this analogy, but there are technicalities. The reader anxious to get to buildings may want to skip ahead to Chapter 4 and return to the present section as needed. Throughout this section, \( \Sigma \) denotes an arbitrary Coxeter complex in the sense of Definition 3.64, and \( \mathcal{H} \) denotes its set of walls (Definition 3.50). For each pair \( \pm \alpha \) of opposite roots, we arbitrarily declare one of them to be positive and the other negative. The most common convention is to choose a "fundamental chamber" \( C \) and declare a root to be positive if it contains \( C \) . ## 3.6.1 Sign Sequences Let \( A \) be a simplex of \( \Sigma \) and let \( H \) be a wall with its associated pair of roots \( \pm \alpha \), where \( \alpha \) is the positive one. We have three possibilities: \( A \) is in \( \alpha \) but not \( - \alpha, A \) is in \( - \alpha \) but not \( \alpha \), or \( A \) is in \( H \) . We set \( {\sigma }_{H}\left( A\right) = + , - \), or 0, accordingly. The resulting family \[ \sigma \left( A\right) \mathrel{\text{:=}} {\left( {\sigma }_{H}\left( A\right) \right) }_{H \in \mathcal{H}} \] is the sign sequence of \( A \) . Remark 3.89. If \( \Sigma = \Sigma \left( {W, S}\right) \), the sign sequence just defined can be identified with the sign sequence introduced in our study of the Tits cone (Section 2.6). The first observation is that the sign sequence determines the face relation in the expected way. As in Definition 1.20, we order sign sequences coordinatewise, with the convention that \( 0 < + \) and \( 0 < - \) . Proposition 3.90. Given simplices \( A, B \in \Sigma \), we have \( B \leq A \) if and only if \( \sigma \left( B\right) \leq \sigma \left( A\right) \) .
In particular, \( A = B \) if and only if \( \sigma \left( A\right) = \sigma \left( B\right) \), i.e., a simplex is uniquely determined by its sign sequence. Proof. If one wants to use the Tits cone, the result is already contained in Section 2.6 (see the paragraph following Definition 2.79). But here is a purely combinatorial proof. Suppose \( B \leq A \) . Then \( \sigma \left( B\right) \leq \sigma \left( A\right) \), since every root containing \( A \) must contain \( B \) (roots are subcomplexes). Conversely, suppose \( \sigma \left( B\right) \leq \sigma \left( A\right) \) . Then Proposition 3.78 implies that \( d\left( {A, B}\right) = 0 \) . In other words, there is a chamber \( C \) having both \( A \) and \( B \) as faces. We now show that every vertex \( v \) of \( B \) is also a vertex of \( A \) . Let \( P \) be the panel of \( C \) not containing \( v \), and let \( H \) be the wall containing \( P \) . Then \( v \notin H \) (Exercise 3.61), so \( {\sigma }_{H}\left( B\right) \neq 0 \) and hence \( {\sigma }_{H}\left( A\right) \neq 0 \) . Thus \( A \) is not a face of the panel \( P \), which means that \( v \) is a vertex of \( A \) . Exercise 3.91. Show that a simplex \( A \) is a chamber if and only if \( {\sigma }_{H}\left( A\right) \neq 0 \) for all \( H \in \mathcal{H} \) . We will make extensive use of sign sequences, primarily in connection with products (Section 3.6.4 below). But first we pause to give two easier applications, the first involving convexity and the second involving supports. ## 3.6.2 Convex Sets of Chambers Definition 3.92. Let \( \Delta \) be a chamber complex, and let \( \mathcal{C} \mathrel{\text{:=}} \mathcal{C}\left( \Delta \right) \) be its set of chambers. A subset \( \mathcal{D} \subseteq \mathcal{C} \) is called convex if it is nonempty and for all \( D,{D}^{\prime } \in \mathcal{D} \), every minimal gallery in \( \mathcal{C} \) from \( D \) to \( {D}^{\prime } \) is contained in \( \mathcal{D} \) . 
For example, \( \mathcal{C}\left( \alpha \right) \) is a convex subset of \( \mathcal{C}\left( \Sigma \right) \) for any root \( \alpha \) of our Coxeter complex \( \Sigma \) (Lemma 3.44). We can get further examples from this one, since an intersection of convex sets is convex (if it is nonempty). For example: Proposition 3.93. For any simplex \( A \in \Sigma \), the set \( \mathcal{C}{\left( \Sigma \right) }_{ \geq A} \) of chambers having \( A \) as a face is convex. More concisely, residues are convex. Proof. Proposition 3.90 implies that \( \mathca
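Proposition 3.90 and Exercise 3.91 can also be verified exhaustively in the rank-2 example \( \Sigma = \Sigma(S_3, \{s, t\}) \), realized via the braid arrangement \( x_i = x_j \) in \( \mathbb{R}^3 \): simplices correspond to ordered set partitions of \( \{0,1,2\} \) and chambers to the six permutations. The set-partition bookkeeping below is our own encoding, not the book's.

```python
from itertools import permutations

# faces of the Coxeter complex of S_3 = ordered set partitions of {0,1,2}
faces = set()
for p in permutations(range(3)):
    for cuts in range(4):                      # cut or not after positions 0, 1
        blocks, cur = [], [p[0]]
        for i in (0, 1):
            if cuts >> i & 1:
                blocks.append(frozenset(cur))
                cur = []
            cur.append(p[i + 1])
        blocks.append(frozenset(cur))
        faces.add(tuple(blocks))
assert len(faces) == 13                        # 1 + 6 + 6 ordered set partitions

def sig(A):                                    # sign sequence over the walls x_i = x_j
    pos = {e: k for k, blk in enumerate(A) for e in blk}
    return tuple('0' if pos[i] == pos[j] else ('+' if pos[i] < pos[j] else '-')
                 for i in range(3) for j in range(i + 1, 3))

def is_face(B, A):                             # B <= A: A's blocks refine B's, in order
    k = 0
    for blk in B:
        acc = set()
        while acc != set(blk):
            if k >= len(A) or not A[k] <= blk:
                return False
            acc |= A[k]
            k += 1
    return k == len(A)

leq = lambda sb, sa: all(x == '0' or x == y for x, y in zip(sb, sa))

# Proposition 3.90: B <= A if and only if sigma(B) <= sigma(A)
assert all(is_face(B, A) == leq(sig(B), sig(A)) for A in faces for B in faces)
# Exercise 3.91: chambers are exactly the faces with no zero in their sign sequence
assert all((len(A) == 3) == ('0' not in sig(A)) for A in faces)
print("sign-sequence checks pass")
```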
1068_(GTM227)Combinatorial Commutative Algebra
Definition 11.18
Definition 11.18 An (injective) monomial matrix is a matrix of constants \( {\lambda }_{qp} \in \mathbb{k} \) such that: 1. Each row is labeled by a vector \( {\mathbf{a}}_{q} \in {\mathbb{Z}}^{d} \) and a face \( {F}^{q} \) of \( Q \) . 2. Each column is labeled by a vector \( {\mathbf{a}}_{p} \in {\mathbb{Z}}^{d} \) and a face \( {F}^{p} \) of \( Q \) . 3. \( {\lambda }_{qp} = 0 \) unless \( {F}^{p} \subseteq {F}^{q} \) and \( {\mathbf{a}}_{p} \in {\mathbf{a}}_{q} + {F}^{q} - Q \) . Sometimes we use monomial labels \( {\mathbf{t}}^{{\mathbf{a}}_{q}} \) and \( {\mathbf{t}}^{{\mathbf{a}}_{p}} \) in place of the vector labels \( {\mathbf{a}}_{q} \) and \( {\mathbf{a}}_{p} \) . Theorem 11.19 Monomial matrices represent maps of injective modules: [Display: a monomial matrix whose rows carry the labels \( {\mathbf{a}}_{q},{F}^{q} \) and whose columns carry the labels \( {\mathbf{a}}_{p},{F}^{p} \), representing a map between the corresponding direct sums of indecomposable injectives.] Two monomial matrices represent the same map of injectives (with fixed direct sum decompositions) if and only if (i) their scalar entries are equal, (ii) the corresponding faces \( {F}^{r} \) are equal, where \( r = p, q \), and (iii) the corresponding vectors \( {\mathbf{a}}_{r} \) are congruent modulo \( \mathbb{Z}{F}^{r} \) . Proof. Proposition 11.17 immediately implies the first sentence. The second sentence is the content of Proposition 11.9. Definition 11.18 really does constitute an extension of the notion of monomial matrix from Section 1.4. All that we have done here is added face labels to the data of the row and column labels and changed the condition for \( {\lambda }_{qp} \) to be nonzero accordingly. The reader should check that when \( Q = {\mathbb{N}}^{n} \) and \( {F}^{q} = {F}^{p} = \{ \mathbf{0}\} \) for all \( q \) and \( p \), the only surviving condition on \( {\lambda }_{qp} \) is \( {\mathbf{a}}_{q} \succcurlyeq {\mathbf{a}}_{p} \), and this is precisely the condition on \( - {\mathbf{a}}_{q} \) and \( - {\mathbf{a}}_{p} \) stipulated by Definition 1.23.
(The negatives on \( {\mathbf{a}}_{q} \) and \( {\mathbf{a}}_{p} \) stem from Matlis duality.) As with cellular monomial matrices for complexes of free modules, cellular injective monomial matrices can be specified simply by labeling the cell complex with the appropriate face and vector labels. Example 11.20 Resume the notation from Example 11.8. The following sequence of maps is cellular, supported on a line segment. The vector labels are all zero. The vertices have face labels \( X \) and \( Y \), the interior has face label \( {\mathbb{N}}^{2} \), and the empty set has face label \( \varnothing \) . \[ 0 \rightarrow \mathbb{k}\{ {\mathbb{Z}}^{2}\} \xrightarrow{\left\lbrack \begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right\rbrack }\mathbb{k}\{ X - {\mathbb{N}}^{2}\} \oplus \mathbb{k}\{ Y - {\mathbb{N}}^{2}\} \xrightarrow{\left\lbrack \begin{smallmatrix} -1 & 1 \end{smallmatrix}\right\rbrack }\mathbb{k}\{ \varnothing - {\mathbb{N}}^{2}\} \rightarrow 0, \] where the first monomial matrix has its column labeled by the face \( {\mathbb{N}}^{2} \) and its rows by the faces \( X \) and \( Y \), and the second has its columns labeled by \( X \) and \( Y \) and its row by \( \varnothing \) . This sequence of maps is actually a complex, and it would be exact except that the kernel of the first map \( \mathbb{k}\left\{ {\mathbb{Z}}^{2}\right\} \rightarrow \mathbb{k}\left\{ {X - {\mathbb{N}}^{2}}\right\} \oplus \mathbb{k}\left\{ {Y - {\mathbb{N}}^{2}}\right\} \) is isomorphic to \( \mathbb{k}\left\{ {\left( {1,1}\right) + {\mathbb{N}}^{2}}\right\} \) . The same cell complex also supports a completely different complex of injectives. Here, monomials \( {\mathbf{t}}^{\mathbf{a}} \) replace the vector labels \( \mathbf{a} \) : [Display: the same labeled line segment, now carrying monomial labels \( {\mathbf{t}}^{\mathbf{a}} \), supporting a second complex of injectives.] This complex is also exact except at the left, where the kernel is just \( \mathbb{k} \) in \( {\mathbb{Z}}^{2} \) -graded degree \( \mathbf{0} \) .
In fact, this is just the Matlis dual of the Koszul complex in two variables (Definition 1.26). ## 11.4 Essential properties of injectives In more general commutative algebraic settings, injectives are important because of their simple homological behavior, in analogy with free modules. Definition 11.21 A graded \( \mathbb{k}\left\lbrack Q\right\rbrack \) -module \( J \) is called homologically injective if \( M \mapsto {\underline{\operatorname{Hom}}}_{\mathbb{k}\left\lbrack Q\right\rbrack }\left( {M, J}\right) \) takes exact sequences to exact sequences. In other words, if \( 0 \rightarrow M \rightarrow N \rightarrow P \rightarrow 0 \) is exact, then so is \[ 0 \leftarrow \underline{\operatorname{Hom}}\left( {M, J}\right) \leftarrow \underline{\operatorname{Hom}}\left( {N, J}\right) \leftarrow \underline{\operatorname{Hom}}\left( {P, J}\right) \leftarrow 0. \] For (10.2) in Chapter 10 we exploited this valuable property in the context of (ungraded) \( \mathbb{Z} \) -modules, otherwise known as abelian groups: divisible groups, such as \( {\mathbb{C}}^{ * } \), are homologically injective. In general, only the surjectivity of \( \underline{\operatorname{Hom}}\left( {M, J}\right) \leftarrow \underline{\operatorname{Hom}}\left( {N, J}\right) \) can fail, even for arbitrary \( J \) . The surjectivity for homologically injective \( J \) can be read equivalently as follows. Lemma 11.22 \( J \) is homologically injective if whenever \( M \subseteq N \) and \( \phi \) : \( M \rightarrow J \) are given, some map \( \psi : N \rightarrow J \) extends \( \phi \) ; that is, \( {\left. \psi \right| }_{M} = \phi \) . Judging from what we have already called the modules \( \mathbb{k}\{ F - Q\} \) and their direct sums in Definition 11.10, we had better reconcile our combinatorial definition of injective module with the usual homological one. The goal of this section is to accomplish just that, in Theorem 11.30.
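The extension criterion of Lemma 11.22 is the familiar Baer-style characterization. As a finite toy analogue (ours, with ungraded \( \mathbb{Z}/N \)-modules in place of \( {\mathbb{Z}}^{d} \)-graded \( \mathbb{k}[Q] \)-modules), the ring \( \mathbb{Z}/N \) is injective over itself: every homomorphism from a submodule \( M = d\mathbb{Z}/N \) into \( \mathbb{Z}/N \) extends to all of \( \mathbb{Z}/N \). This can be checked exhaustively:

```python
def extends(N, d):
    """Every hom phi: dZ/N -> Z/N extends to some psi: Z/N -> Z/N."""
    order = N // d                         # additive order of the generator d
    for c in range(N):                     # candidate value phi(d) = c
        if (order * c) % N:                # c must respect the relation (N/d)*d = 0,
            continue                       # otherwise phi is not well defined on M
        # an extension psi is determined by a = psi(1) with a*d = c (mod N)
        if not any((a * d) % N == c for a in range(N)):
            return False
    return True

# check all cyclic submodules dZ/N of Z/N for small N
assert all(extends(N, d) for N in range(2, 40)
           for d in range(1, N) if N % d == 0)
print("Z/N is self-injective: every submodule hom extends")
```

The point of the sketch is only the shape of the argument: well-definedness of \( \phi \) is a divisibility condition, and the same divisibility guarantees a solution for \( \psi(1) \).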
Recall that a module \( N \) is flat if tensoring any exact sequence with \( N \) yields another exact sequence. The examples of flat modules to keep in mind are the localizations \( \mathbb{k}\left\lbrack {Q - F}\right\rbrack \) . In fact, localizations are pretty much the only examples that can come up in the context of graded modules over affine semigroup rings (cf. the next lemma and Theorem 11.30). Lemma 11.23 \( N \) is flat if and only if \( {N}^{ \vee } \) is homologically injective. Proof. \( M \mapsto M \otimes N \) is exact if and only if \( M \mapsto {\left( M \otimes N\right) }^{ \vee } \) is. Now use the equality \( {\left( M \otimes N\right) }^{ \vee } = \underline{\operatorname{Hom}}\left( {M,{N}^{ \vee }}\right) \) of Lemma 11.16. Thus "flat" and "injective" are Matlis dual conditions. Heuristically, a module \( \mathbb{k}\{ T\} \) is flat if \( T \) is an intersection of positive half-spaces for facets of \( Q \), whereas \( \mathbb{k}\left\lbrack T\right\rbrack \) is injective if \( T \) is an intersection of negative half-spaces. Proposition 11.24 Indecomposable injectives are homologically injective. Proof. Since \( \mathbb{k}{\left\lbrack Q - F\right\rbrack }^{ \vee } = \mathbb{k}\{ F - Q\} \), this follows from Lemma 11.23. For any \( {\mathbb{Z}}^{d} \) -graded module \( M \), the Matlis dual can be expressed as \( {M}^{ \vee } = \) \( {\underline{\operatorname{Hom}}}_{\mathbb{k}\left\lbrack Q\right\rbrack }\left( {M,\mathbb{k}{\left\lbrack Q\right\rbrack }^{ \vee }}\right) \) by Lemma 11.16 with \( N = \mathbb{k}\left\lbrack Q\right\rbrack \) . Proposition 11.24 says in this case that Matlis duality is exact, which is obvious from the fact that \( \mathbb{k} \) is a field, because taking vector space duals is exact. 
Taking Hom into \( \mathbb{k}{\left\lbrack Q\right\rbrack }^{ \vee } \) (= the injective hull of \( \mathbb{k} \) ) provides a better algebraic formulation of Matlis duality than Definition 11.15, by avoiding degree-by-degree vector space duals. It should convince you that dualization with respect to injective modules can have concrete combinatorial interpretations. Homological injectivity behaves very well with respect to (categorical) direct products of modules. Unfortunately, the usual product of infinitely many \( {\mathbb{Z}}^{d} \) -graded modules \( {\left( {M}^{p}\right) }_{p \in P} \) is not necessarily \( {\mathbb{Z}}^{d} \) -graded. Indeed, there may be sequences \( {\left( {y}_{p}\right) }_{p \in P} \in \mathop{\prod }\limits_{{p \in P}}{M}^{p} \) of homogeneous elements that have distinct degrees, in which case \( \mathop{\prod }\limits_{{p \in P}}{M}^{p} \) fails to be the direct sum of its graded components. Such poor behavior occurs even in the simplest of cases, in the presence of only one variable \( x \) (so \( Q = \mathbb{N} \) ): the product \( \mathop{\prod }\limits_{{i = 0}}^{\infty }\mathbb{k}\left\lbrack x\right\rbrack \) of infinitely many copies of \( \mathbb{k}\left\lbrack x\right\rbrack \) has an element \( \left( {1, x,{x}^{2},\ldots }\right) \) that is not expressible as a finite sum of homogeneous elements. The remedy is to take the largest \( {\mathbb{Z}}^{d} \) -graded submodule of the usual product. Definition 11.25 The \( {\mathbb{Z}}^{d} \) -graded product \( {}^{ * }\mathop{\prod }\limits_{{p \in P}}{M}^{p} \) is the submodule of the usual product generated by arbitrary products of homogeneous elements of the same degree. Explicitly, this is the module that has \[ {\left( {}^{ * }\mathop{\prod }\limits_{{p \in P}}{M}^{p}\right) }_{\mathbf{b}} = \mathop{\prod }\limits_{{p \in P}}{M}_{\mathbf{b}}^{p} \] as its component in \( {\mathbb{Z}}^{d} \) -graded degree \( \mathbf{b} \) .
Lemma 11.26 Arbitrary \( {\mathbb{Z}}^{d} \) -graded products of homologically injective modules are homologically injective. Proof. The natural map \( \operatorname{Hom}\left( {N,{}^{ * }\mathop{\prod }\limits_{{p \in P}}{M}^{p}}\right) \rightarrow {}^{ * }\mathop{\prod }\limits_{{p \in P}}\operatorname{Hom}\left( {N,{M}^{p}}\right) \) is an isomorphism (write out carefully what it means to be a homogeneous element of degree a on each side). App
108_The Joys of Haar Measure
Definition 3.3.1
Definition 3.3.1. Let \( A \in \mathbb{Z}\left\lbrack X\right\rbrack \) be a nonzero polynomial. We define the content of \( A \) and denote by \( c\left( A\right) \) the GCD of all the coefficients of \( A \) . Proposition 3.3.2 (Gauss’s lemma). If \( A \) and \( B \) are two nonzero polynomials in \( \mathbb{Z}\left\lbrack X\right\rbrack \), we have \( c\left( {AB}\right) = c\left( A\right) c\left( B\right) \) . Proof. Let us say that a polynomial \( A \in \mathbb{Z}\left\lbrack X\right\rbrack \) is primitive if its content is equal to 1 . Since \( A = c\left( A\right) {A}_{1} \) with \( {A}_{1} \) primitive, it is clear that the proposition is equivalent to the statement that the product of two primitive polynomials \( A \) and \( B \) is primitive. Assume the contrary, so that there exists a prime number \( p \) that divides all the coefficients of \( {AB} \) ; in other words \( \overline{AB} = 0 \) where, for any \( P \in \mathbb{Z}\left\lbrack X\right\rbrack ,\bar{P} \in \left( {\mathbb{Z}/p\mathbb{Z}}\right) \left\lbrack X\right\rbrack \) denotes the polynomial obtained by reducing the coefficients of \( P \) modulo \( p \) . Since we evidently have \( \overline{AB} = \bar{A}\bar{B} \) and since \( \left( {\mathbb{Z}/p\mathbb{Z}}\right) \left\lbrack X\right\rbrack \) is an integral domain, it follows that \( \bar{A} = 0 \) or \( \bar{B} = 0 \) , in other words that \( p \) divides all the coefficients of \( A \) or all the coefficients of \( B \), in contradiction with the fact that \( A \) and \( B \) are primitive. Corollary 3.3.3. Let \( C \in \mathbb{Z}\left\lbrack X\right\rbrack \) be a monic polynomial and assume that \( A \in \mathbb{Q}\left\lbrack X\right\rbrack \) is a monic polynomial such that \( A \mid C \) in \( \mathbb{Q}\left\lbrack X\right\rbrack \) . Then in fact \( A \in \mathbb{Z}\left\lbrack X\right\rbrack \) . Proof. Write \( C = {AB} \) with \( B \in \mathbb{Q}\left\lbrack X\right\rbrack \) . 
Let \( {d}_{A} \) (respectively \( {d}_{B} \) ) be the smallest integer such that \( {d}_{A}A \) (respectively \( {d}_{B}B \) ) is in \( \mathbb{Z}\left\lbrack X\right\rbrack \), in other words, the LCM of the denominators of the coefficients of \( A \) (respectively \( B \) ). We can write \( {d}_{A}{d}_{B}C = \left( {{d}_{A}A}\right) \left( {{d}_{B}B}\right) \) . By the minimality assumption, we have \( c\left( {{d}_{A}A}\right) = c\left( {{d}_{B}B}\right) = 1 \), hence by Gauss’s lemma \( c\left( {{d}_{A}{d}_{B}C}\right) = 1 \), and in particular \( {d}_{A} = 1 \), hence \( A \in \mathbb{Z}\left\lbrack X\right\rbrack \) . ## 3.3.2 Algebraic Integers We begin with the following basic proposition. Proposition 3.3.4. Let \( \alpha \) be an algebraic number. The following four properties are equivalent. (1) The number \( \alpha \) is a root of a monic polynomial with coefficients in \( \mathbb{Z} \) . (2) The minimal monic polynomial of \( \alpha \) has coefficients in \( \mathbb{Z} \) . (3) The ring \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) of polynomials in \( \alpha \) with integer coefficients is a finitely generated \( \mathbb{Z} \) -module. (4) There exists a commutative ring with unit \( R \) that is a finitely generated \( \mathbb{Z} \) -module and such that \( \alpha \in R \) . Proof. (1) \( \Rightarrow \) (2): Assume that \( P\left( \alpha \right) = 0 \) with \( P \in \mathbb{Z}\left\lbrack X\right\rbrack \) monic, and let \( T \) be the minimal monic polynomial of \( \alpha \) . By definition, \( T \) divides \( P \) in \( \mathbb{Q}\left\lbrack X\right\rbrack \), and \( T \) is monic, so we conclude by Corollary 3.3.3. (2) \( \Rightarrow \) (3): Let \( T\left( \alpha \right) = 0 \), where \( T \) is the minimal monic polynomial of \( \alpha \), hence with integral coefficients, and set \( n = \deg \left( T\right) \) . 
If \( L \) is the \( \mathbb{Z} \) -module generated by \( 1,\alpha ,\ldots ,{\alpha }^{n - 1} \), then by assumption \( {\alpha }^{n} \in L \) ; hence by induction \( {\alpha }^{k} \in L \) also for any \( k \geq n \) . Thus \( L = \mathbb{Z}\left\lbrack \alpha \right\rbrack \), so that the elements \( 1,\alpha ,\ldots ,{\alpha }^{n - 1} \) form a generating set of \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) ; hence \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) is a finitely generated \( \mathbb{Z} \) -module. (3) \( \Rightarrow \) (4): Simply choose \( R = \mathbb{Z}\left\lbrack \alpha \right\rbrack \) . (4) \( \Rightarrow \) (1): This is the only really amusing part of the proof. Since \( R \) is a finitely generated \( \mathbb{Z} \) -module, there exist \( {\omega }_{1},\ldots ,{\omega }_{n} \) that generate \( R \) as a \( \mathbb{Z} \) -module. Since \( R \) is a ring and \( \alpha \in R \), there exist \( {a}_{i, j} \in \mathbb{Z} \) such that for \( 1 \leq j \leq n \) we have \( \alpha {\omega }_{j} = \mathop{\sum }\limits_{{1 \leq i \leq n}}{a}_{i, j}{\omega }_{i} \) . If \( A = {\left( {a}_{i, j}\right) }_{1 \leq i, j \leq n} \) is the matrix of the \( {a}_{i, j} \), if we set \( M = \alpha {I}_{n} - A \) with \( {I}_{n} \) the \( n \times n \) identity matrix, and finally if \( B = \left( {{\omega }_{1},\ldots ,{\omega }_{n}}\right) \) is the row vector of the \( {\omega }_{j} \), then this can be written \( {BM} = 0 \) . If \( M \) were invertible as a matrix with coefficients in the field \( \mathbb{Q}\left( \alpha \right) \), then multiplying by \( {M}^{-1} \), we would obtain \( B = 0 \), hence \( R = \{ 0\} \), contradicting the fact that \( 1 \in R \) (unless \( \alpha = 0 \), but in that case the implication is trivial). Thus \( M \) is not invertible, so that \( \det \left( M\right) = 0 \) .
This means that \( \alpha \) is a root of \( \det \left( {X{I}_{n} - A}\right) \), the characteristic polynomial of the matrix \( A \), and this is clearly a monic polynomial with integral coefficients. Note that we could not use directly the Cayley-Hamilton theorem since \( R \) is not necessarily a free \( \mathbb{Z} \) -module, and even so the \( {\omega }_{j} \) are not necessarily \( \mathbb{Z} \) -linearly independent. Definition 3.3.5. (1) An algebraic number satisfying one of the above equivalent properties is called an algebraic integer. (2) A nonzero algebraic integer whose inverse is also an algebraic integer is called a unit. By Proposition 3.3.4, when \( \alpha \) is not an algebraic integer, \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) is not finitely generated. The simplest example is with \( \alpha = 1/2 \) : the ring \( \mathbb{Z}\left\lbrack {1/2}\right\rbrack \) is the subring of elements of \( \mathbb{Q} \) whose denominator is a power of 2 . This ring is also not free (although it has no torsion), since two rational numbers are always \( \mathbb{Z} \) -linearly dependent. Proposition 3.3.6. If \( \alpha \) and \( \beta \) are algebraic integers, then so are \( \alpha + \beta \) and \( {\alpha \beta } \) . In other words, algebraic integers belonging to a fixed algebraic closure of \( \mathbb{Q} \) form a ring. Proof. Consider \( R = \mathbb{Z}\left\lbrack {\alpha ,\beta }\right\rbrack \), the ring of polynomials in \( \alpha \) and \( \beta \) . Since \( \alpha \) and \( \beta \) are algebraic integers, of respective degree \( m \) and \( n \), say, it is clear that the \( {\left( {\alpha }^{i}{\beta }^{j}\right) }_{0 \leq i < m,0 \leq j < n} \) form a finite set that generates \( R \) as a \( \mathbb{Z} \) -module, and since \( \alpha + \beta \) and \( {\alpha \beta } \) belong to \( R \) we conclude by Proposition 3.3.4. 
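The proof of (4) \( \Rightarrow \) (1) is effective: a monic integer polynomial annihilating \( \alpha \) is the characteristic polynomial of the matrix of multiplication by \( \alpha \) on a generating set. The sketch below (ours) carries this out for \( \alpha = \sqrt{2} + \sqrt{3} \), in the spirit of Proposition 3.3.6, using the generators \( 1,\sqrt{2},\sqrt{3},\sqrt{6} \) of \( \mathbb{Z}[\sqrt{2},\sqrt{3}] \); the characteristic polynomial comes out as \( X^4 - 10X^2 + 1 \).

```python
from fractions import Fraction

# column j = coordinates of (sqrt2 + sqrt3) * omega_j in the basis
# (1, sqrt2, sqrt3, sqrt6); e.g. (sqrt2 + sqrt3) * sqrt2 = 2 + sqrt6.
A = [[0, 2, 3, 0],
     [1, 0, 0, 3],
     [1, 0, 0, 2],
     [0, 1, 1, 0]]

def charpoly(A):
    """Faddeev-LeVerrier: coefficients [1, c1, ..., cn] of det(X*I - A)."""
    n = len(A)
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    cs = [Fraction(1)]
    for k in range(1, n + 1):
        AM = [[sum(A[i][l] * M[l][j] for l in range(n)) for j in range(n)]
              for i in range(n)]
        c = -sum(AM[i][i] for i in range(n)) / k
        cs.append(c)
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
    return cs

cs = charpoly(A)
assert cs == [1, 0, -10, 0, 1]       # alpha is a root of X^4 - 10*X^2 + 1
alpha = 2 ** 0.5 + 3 ** 0.5          # numerical sanity check
assert abs(sum(float(c) * alpha ** (4 - i) for i, c in enumerate(cs))) < 1e-9
print("sqrt(2) + sqrt(3) is a root of X^4 - 10*X^2 + 1")
```

The Faddeev-LeVerrier recursion is used only to stay dependency-free; any integer characteristic-polynomial routine would do.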
It is possible to give a direct (but less elegant) proof of this proposition that directly uses the fact that \( \alpha \) and \( \beta \) are roots of monic integral polynomials. This uses the notion of resultant, and gives an algorithm for computing the minimal polynomials of \( \alpha + \beta \) and of \( {\alpha \beta } \) ; see Exercise 12. Proposition 3.3.7. Let \( P\left( X\right) \) be a monic polynomial whose coefficients are algebraic integers, and let \( \alpha \) be such that \( P\left( \alpha \right) = 0 \) . Then \( \alpha \) is an algebraic integer. Proof. Write \( P\left( X\right) = {X}^{n} + \mathop{\sum }\limits_{{1 \leq i \leq n - 1}}{\beta }_{i}{X}^{i} \) . Since the \( {\beta }_{i} \) are algebraic integers, it follows that \( \mathbb{Z}\left\lbrack {{\beta }_{1},\ldots ,{\beta }_{n - 1}}\right\rbrack \) is a finitely generated \( \mathbb{Z} \) -module. Let \( {\gamma }_{1},\ldots ,{\gamma }_{N} \) be a finite generating set, and let \( R = \mathbb{Z}\left\lbrack {\alpha ,{\beta }_{1},\ldots ,{\beta }_{n - 1}}\right\rbrack \) . As in the proof of the implication (2) \( \Rightarrow \) (3) of Proposition 3.3.4, it is clear that the \( {\gamma }_{i}{\alpha }^{j} \) for \( 1 \leq i \leq N \) and \( 0 \leq j \leq n - 1 \) form a finite generating set for \( R \) . We conclude by Proposition 3.3.4. When an algebraic number \( \alpha \) is not necessarily an algebraic integer, we can still obtain finitely generated free \( \mathbb{Z} \) -modules as follows. Proposition 3.3.8 (Dedekind). Let \( \alpha \) be an algebraic number and let \( T \in \) \( \mathbb{Z}\left\lbrack X\right\rbrack \) be a nonzero polynomial such that \( T\left( \alpha \right) = 0 \) . Write \( T\left( X\right) = {a}_{n}{X}^{n} + \) \( {a}_{n - 1}{X}^{n - 1} + \cdots + {a}_{1}X + {a}_{0} \), let \( {\omega }_{0} = 1 \), and for \( 1 \leq j \leq n - 1 \), set \[ {\omega }_{j} = {a}_{n}{\alpha }^{j} + {a}_{n - 1}{\alpha }^{j - 1} + \cdots + {a}_{n - j}. 
\] The finitely generated \( \mathbb{Z} \) -module \( R \) generated by the \( {\omega }_{j} \) for \( 0 \leq j \leq n - 1 \) is a subring of \( \mathbb{Z}\left\lbrack \alpha \right\rbrack \) . In particular, the \( {\omega }_{j} \) are algebraic integers for all \( j \) . Note that this proposition does not claim that \( \alpha \in R \) (otherwise \( \alpha \) would be an algebraic integer). Proof. If we define \( {\omega }
113_Topological Groups
Definition 24.12
Definition 24.12. Let \( \mathcal{L} \) be an algebraic language. We construct an \( \mathcal{L} \) - structure \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) which will be called the absolutely free \( \mathcal{L} \) -algebra. Its universe is \( {\operatorname{Trm}}_{\mathcal{L}} \), and for any operation symbol \( \mathbf{O} \) of \( \mathcal{L} \) , \[ {\mathbf{O}}^{{\mathfrak{{Fr}}}_{\mathcal{L}}}\left( {{\sigma }_{0},\ldots ,{\sigma }_{m - 1}}\right) = \mathbf{O}{\sigma }_{0}\cdots {\sigma }_{m - 1}. \] Now the definition of satisfaction yields the following basic fact about \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) : Proposition 24.13. If \( \mathcal{L} \) is algebraic, \( \mathfrak{A} \) is an \( \mathcal{L} \) -structure, and \( x \in {}^{\omega }A \), then \( \left\langle {{\sigma }^{\mathfrak{A}}x : \sigma \in {\operatorname{Trm}}_{\mathcal{L}}}\right\rangle \) is a homomorphism of \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) into \( \mathfrak{A} \) . A very useful congruence relation on \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) is introduced in the following definition. Definition 24.14. If \( \mathcal{L} \) is an algebraic language and \( \Gamma \) is a set of equations of \( \mathcal{L} \), we let \[ { \equiv }_{\Gamma } = \left\{ {\left( {\sigma ,\tau }\right) : \sigma ,\tau \in {\operatorname{Trm}}_{\mathcal{L}}\text{ and }\Gamma \vDash \sigma = \tau }\right\} . \] Proposition 24.15. Under the assumptions of 24.14, \( { \equiv }_{\Gamma } \) is a congruence relation on \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) . The following simple proposition is proved by induction on \( \sigma \) : Proposition 24.16. If \( g \) is a homomorphism from \( {\mathfrak{{Fr}}}_{\mathcal{L}} \) into \( \mathfrak{A} \), and if \( {x}_{i} = g{v}_{i} \) for every \( i < \omega \), then \( {\sigma }^{\mathfrak{A}}x = {g\sigma } \) for every term \( \sigma \) .
Recall that, by the completeness theorem, the model-theoretic condition \( \Gamma \vDash \varphi \) is equivalent to the proof-theoretic condition \( \Gamma \vdash \varphi \) . If we apply this fact when \( \varphi \) is an equation and \( \Gamma \) is a set of equations, it seems a little unsatisfactory, since the proof-theoretic condition involves the logical axioms and hence the whole apparatus of first order logic. We now describe a proof-theoretic condition in which only equations appear-no quantifiers and not even any sentential connectives. Definition 24.17. Let \( \Gamma \) be a set of equations in an algebraic language. Then \( \Gamma \) -eqthm is the intersection of all sets \( \Delta \) of equations such that the following conditions hold: (i) \( \Gamma \subseteq \Delta \) ; (ii) \( {v}_{0} = {v}_{0} \in \Delta \) ; (iii) if \( \varphi \in \Delta, i < \omega ,\sigma \) is a term, and \( \psi \) is obtained from \( \varphi \) by replacing \( {v}_{i} \) throughout \( \varphi \) by \( \sigma \), then \( \psi \in \Delta \) ; (iv) if \( \sigma = \tau \in \Delta \) and \( \rho = \tau \in \Delta \), then \( \sigma = \rho \in \Delta \) ; (v) if \( \sigma = \tau \in \Delta \) , \( \mathbf{O} \) is an operation symbol, say of rank \( m, i < m \), and \( {\alpha }_{0},\ldots ,{\alpha }_{m - 2} \) are variables, then the following equation is in \( \Delta \) : \( \mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},\sigma ,{\alpha }_{i},\ldots ,{\alpha }_{m - 2}}\right) \) \[ = \mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},\tau ,{\alpha }_{i},\ldots ,{\alpha }_{m - 2}}\right) . \] We write \( \Gamma { \vdash }_{\mathrm{{eq}}}\varphi \) instead of \( \varphi \in \Gamma \) -eqthm. We shall prove an analog of the completeness theorem for this notion. First a technical lemma: Lemma 24.18. Let \( \Gamma \) be a set of equations in an algebraic language. 
Then for any terms \( \sigma ,\tau ,\rho \) , (i) \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \sigma \) ; (ii) if \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \tau \), then \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \sigma \) ; (iii) if \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \tau \) and \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \rho \), then \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \rho \) ; (iv) if \( \mathbf{O} \) is an operation symbol, say of rank \( m \), and if \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \) , \( {\tau }_{0},\ldots ,{\tau }_{m - 1} \) are terms such that \( \Gamma { \vdash }_{\mathrm{{eq}}}{\sigma }_{i} = {\tau }_{i} \) for each \( i < m \), then \( \Gamma { \vdash }_{\mathrm{{eq}}}\mathbf{O}{\sigma }_{0}\cdots {\sigma }_{m - 1} = \mathbf{O}{\tau }_{0}\cdots {\tau }_{m - 1}. \) Proof. Condition \( \left( i\right) \) is obvious from 24.17(ii) and 24.17(iii). For (ii): assume that \( \Gamma { \vdash }_{\mathrm{{eq}}}\sigma = \tau \) . We also have \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \tau \) by \( \left( i\right) \), so \( \Gamma { \vdash }_{\mathrm{{eq}}}\tau = \sigma \) by 24.17(iv). Condition (iii) clearly follows from (ii) and 24.17(iv). To prove (iv), let \( {\alpha }_{0},\ldots ,{\alpha }_{m - 1} \) be distinct variables not occurring in any of \( {\sigma }_{0},\ldots ,{\sigma }_{m - 1} \) , \( {\tau }_{0},\ldots ,{\tau }_{m - 1} \) . Then \[ \Gamma { \vdash }_{\mathrm{{eq}}}\mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},{\sigma }_{i},{\alpha }_{i + 1},\ldots ,{\alpha }_{m - 1}}\right) \] \[ = \mathbf{O}\left( {{\alpha }_{0},\ldots ,{\alpha }_{i - 1},{\tau }_{i},{\alpha }_{i + 1},\ldots ,{\alpha }_{m - 1}}\right) \] for each \( i < m \) .
Applications of 24.17(iii) then give, for each \( i < m \) , \[ \Gamma { \vdash }_{\mathrm{{eq}}}\mathbf{O}\left( {{\tau }_{0},\ldots ,{\tau }_{i - 1},{\sigma }_{i},{\sigma }_{i + 1},\ldots ,{\sigma }_{m - 1}}\right) \] \[ = \mathbf{O}\left( {{\tau }_{0},\ldots ,{\tau }_{i - 1},{\tau }_{i},{\sigma }_{i + 1},\ldots ,{\sigma }_{m - 1}}\right) . \] Now several applications of (iii) give the desired result. Theorem 24.19 (Completeness theorem for equational logic). Let \( \Gamma \cup \{ \varphi \} \) be a set of equations in an algebraic language. Then \( \Gamma \vDash \varphi \) iff \( \Gamma { \vdash }_{\mathrm{{eq}}}\varphi \) . Proof. It is easily checked that \( \Gamma { \vdash }_{\text{eq }}\varphi \Rightarrow \Gamma \vDash \varphi \) . Now suppose that not \( \left( {\Gamma { \vdash }_{\text{eq }}\varphi }\right) \) ; we shall construct a model of \( \Gamma \cup \{ \neg \varphi \} \) . In fact, the desired model is simply \( \mathfrak{A} = {\mathfrak{{Fr}}}_{\mathcal{L}}/{ \equiv }_{\Gamma } \) . To show that \( \mathfrak{A} \) is a model of \( \Gamma \), let \( \sigma = \tau \) be an arbitrary member of \( \Gamma \), and let \( x \in {}^{\omega }{\operatorname{Trm}}_{\mathcal{L}} \) ; we want to show that \( {\sigma }^{\mathfrak{A}}\left( {\left\lbrack \right\rbrack \circ x}\right) = {\tau }^{\mathfrak{A}}\left( {\left\lbrack \right\rbrack \circ x}\right) \) . To do this, for any \( \rho \in {\operatorname{Trm}}_{\mathcal{L}} \) let \( {S\rho } \) be the result of simultaneously replacing \( {v}_{i} \) by \( {x}_{i} \) in \( \rho \), for each \( i < \omega \) . Then (1) \[ \Gamma { \vdash }_{\mathrm{{eq}}}{S\sigma } = {S\tau }. \] To prove (1), let \( {v}_{i0},\ldots ,{v}_{i\left( {m - 1}\right) } \) be all of the variables occurring in the equation \( \sigma = \tau \) .
Let \( {\beta }_{0},\ldots ,{\beta }_{m - 1} \) be new variables, not appearing in \( \sigma = \tau \) , different from \( {v}_{i0},\ldots ,{v}_{i\left( {m - 1}\right) } \) and not appearing in \( {x}_{i0},\ldots ,{x}_{i\left( {m - 1}\right) } \) . Let \( {\sigma }^{\prime } = {\tau }^{\prime } \) be obtained from \( \sigma = \tau \) by replacing in succession \( {v}_{i0} \) throughout by \( {\beta }_{0},{v}_{i1} \) throughout by \( {\beta }_{1},\ldots ,{v}_{i\left( {m - 1}\right) } \) throughout by \( {\beta }_{m - 1} \) . By \( {24.17}\left( {iii}\right) \) we have \( \Gamma { \vdash }_{\text{eq }}{\sigma }^{\prime } = {\tau }^{\prime } \) . Since \( {\beta }_{0},\ldots ,{\beta }_{m - 1} \) do not occur in \( \sigma = \tau \) and they are different from \( {v}_{i0},\ldots ,{v}_{i\left( {m - 1}\right) } \), the equation \( {\sigma }^{\prime } = {\tau }^{\prime } \) is also obtainable from \( \sigma = \tau \) by simultaneously replacing \( {v}_{i0},\ldots ,{v}_{i\left( {m - 1}\right) } \) by \( {\beta }_{0},\ldots ,{\beta }_{m - 1} \) respectively. Now by replacing \( {\beta }_{0} \) throughout by \( {x}_{i0} \), then \( {\beta }_{1} \) throughout by \( {x}_{i1},\ldots ,{\beta }_{m - 1} \) throughout by \( {x}_{i\left( {m - 1}\right) } \) we obtain \( {S\sigma } = {S\tau } \) and (1) follows. Note that \( {\beta }_{0},\ldots ,{\beta }_{m - 1} \) are introduced to reduce simultaneous substitution to a sequence of simple substitutions. Next, note that (2) \[ \text{for any term } \rho \text{ we have } {\rho }^{\mathfrak{A}}\left( {\left\lbrack \right\rbrack \circ x}\right) = \left\lbrack {S\rho }\right\rbrack . \] Condition (2) is easily proven by induction on \( \rho \) . From (1) and (2) we obtain \[ {\sigma }^{\mathfrak{A}}\left( {\left\lbrack \right\rbrack \circ x}\right) = \left\lbrack {S\sigma }\right\rbrack = \left\lbrack {S\tau }\right\rbrack = {\tau }^{\mathfrak{A}}\left( {\left\lbrack \right\rbrack \circ x}\right) . 
\] Thus \( \sigma = \tau \) holds in \( \mathfrak{A} \), so \( \mathfrak{A} \) is a model of \( \Gamma \) . However, \( \mathfrak{A} \) is not a model of \( \varphi \) . For, say \( \varphi \) is the equation \( {\sigma }_{0} = {\tau }_{0} \) . Let \( {y}_{i} = \left\lbrack {v}_{i}\right\rbrack \) for each \( i \in \omega \) . Then by 24.16 we obtain \( {\sigma }_{0}^{\mathfrak{A}}y = \left\lbrack {\sigma }_{0}\right\rbrack \) and \( {\tau }_{0}^{\mathfrak{A}}y = \left\lbrack {\tau }_{0}\right\rbrack \) . Since \( \operatorname{not}\left( {\Gamma { \vdash }_{\mathrm{{eq}}}{\sigma }_{0} = {\tau }_{0}}\right) \), we have \( \left\lbrack {\sigma }_{0}\right\rbrack \neq \left\lbrack {\tau }_{0}\right\rbrack \), so \( {\sigma }_{0}^{\mathfrak{A}}y \neq {\tau }_{0}^{\mathfrak{A}}y \) and \( \varphi \) fails in \( \mathfrak{A} \) . 
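The device in the proof of (1), reducing a simultaneous substitution to a sequence of simple substitutions by routing through the fresh variables \( {\beta }_{0},\ldots ,{\beta }_{m - 1} \), can be made concrete. The sketch below is an illustration (not from the text), encoding terms as nested tuples, with variables as strings; it also shows why the naive sequential substitution fails.

```python
# A minimal sketch (not from the text) of the beta-variable trick:
# simultaneous substitution is reduced to a sequence of simple
# (one-variable-at-a-time) substitutions via fresh variables.
# Terms are nested tuples: a variable is a string, f(s, t) is ('f', s, t).

def subst(term, var, repl):
    """Simple substitution: replace every occurrence of `var` in `term`."""
    if isinstance(term, str):
        return repl if term == var else term
    return (term[0],) + tuple(subst(arg, var, repl) for arg in term[1:])

def simultaneous(term, pairs):
    """Simultaneously replace v_i by x_i, via fresh beta_i variables."""
    fresh = [f"beta{i}" for i in range(len(pairs))]  # assumed not to occur
    for (v, _), b in zip(pairs, fresh):              # v_i -> beta_i, in turn
        term = subst(term, v, b)
    for (_, x), b in zip(pairs, fresh):              # beta_i -> x_i, in turn
        term = subst(term, b, x)
    return term

t = ('f', 'v0', 'v1')
# Swapping v0 and v1 simultaneously via the fresh variables:
print(simultaneous(t, [('v0', 'v1'), ('v1', 'v0')]))  # ('f', 'v1', 'v0')
# Naive sequential substitution collides and gets it wrong:
print(subst(subst(t, 'v0', 'v1'), 'v1', 'v0'))        # ('f', 'v0', 'v0')
```

The fresh variables play exactly the role of the \( {\beta }_{i} \) in the proof: because they occur nowhere in the term or in the replacements, the chain of simple substitutions cannot interfere with itself.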
1059_(GTM219)The Arithmetic of Hyperbolic 3-Manifolds
Definition 2.7.1
Definition 2.7.1 If \( A \) is a quaternion algebra over the number field \( k \), let \( {A}_{v} \) (resp. \( {A}_{\mathcal{P}} \) ) denote the quaternion algebra \( A{ \otimes }_{k}{k}_{v} \) (resp. \( A{ \otimes }_{k}{k}_{\mathcal{P}} \) ) over \( {k}_{v} \) (resp \( {k}_{\mathcal{P}} \) ). Then \( A \) is said to be ramified at \( v \) (resp. at \( \mathcal{P} \) ) if \( {A}_{v} \) (resp. \( \left. {A}_{\mathcal{P}}\right) \) is the unique division algebra over \( {k}_{v} \) (resp. \( {k}_{\mathcal{P}} \) ) (assuming that \( v \) is not a complex embedding). Otherwise, \( A \) splits at \( v \) or \( \mathcal{P} \) . The local-global result which we now present follows directly from the Hasse-Minkowski Theorem on quadratic forms. Theorem 2.7.2 Let \( A \) be a quaternion algebra over a number field \( k \) . Then \( A \) splits over \( k \) if and only if \( A{ \otimes }_{k}{k}_{v} \) splits over \( {k}_{v} \) for all places \( v \) . Proof: Let \( A = \left( \frac{a, b}{k}\right) \) . Then, by Theorem 2.3.1, \( A \) splits over \( k \) if and only if \( a{x}^{2} + b{y}^{2} = 1 \) has a solution in \( k \) . By the Hasse-Minkowski Theorem (see Corollary 0.9.9), \( a{x}^{2} + b{y}^{2} = 1 \) has a solution in \( k \) if and only if it has a solution in \( {k}_{v} \) for all places \( v \) . However, \( a{x}^{2} + b{y}^{2} = 1 \) has a solution in \( {k}_{v} \) if and only if \( A{ \otimes }_{k}{k}_{v} \) splits over \( {k}_{v} \) . \( ▱ \) The finiteness of the set of places at which \( a{x}^{2} + b{y}^{2} = 1 \) fails to have a solution, given in Hilbert's Reciprocity Law, will follow from Theorem 2.6.6. For any \( a \) and \( b \), which we can assume lie in \( {R}_{k} \), there are only finitely many prime ideals so that \( a \) or \( b \in \mathcal{P} \) . Thus \( \left( \frac{a, b}{k}\right) \) splits at all but a finite number of non-dyadic places. 
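Over \( k = \mathbb{Q} \), whether \( \left( \frac{a, b}{\mathbb{Q}}\right) \) splits at a place \( v \) is decided by the Hilbert symbol \( {\left( a, b\right) }_{v} \), which is \( +1 \) exactly when \( a{x}^{2} + b{y}^{2} = 1 \) has a solution in \( {\mathbb{Q}}_{v} \). The sketch below is an assumption of this edit rather than part of the text: it implements the classical closed-form formulas for \( {\left( a, b\right) }_{v} \) over \( \mathbb{Q} \) (as in Serre's *A Course in Arithmetic*, Ch. III) and recovers the ramification set of \( \left( \frac{-1, - 1}{\mathbb{Q}}\right) \).

```python
def hilbert(a, b, p):
    """Hilbert symbol (a,b)_p over Q: +1 iff a x^2 + b y^2 = 1 has a
    solution in Q_p.  p is an odd prime, 2, or the string 'inf'.
    Formulas as in Serre, A Course in Arithmetic, Ch. III."""
    if p == 'inf':
        return -1 if (a < 0 and b < 0) else 1

    def split(n):                      # write n = p^alpha * u, u a p-unit
        alpha = 0
        while n % p == 0:
            n //= p
            alpha += 1
        return alpha, n

    alpha, u = split(a)
    beta, v = split(b)
    if p == 2:
        eps = lambda n: ((n - 1) // 2) % 2        # (n-1)/2 mod 2
        omega = lambda n: ((n * n - 1) // 8) % 2  # (n^2-1)/8 mod 2
        e = eps(u) * eps(v) + alpha * omega(v) + beta * omega(u)
        return -1 if e % 2 else 1
    # odd p: (a,b)_p = (-1)^{alpha beta (p-1)/2} (u|p)^beta (v|p)^alpha
    legendre = lambda n: 1 if pow(n % p, (p - 1) // 2, p) == 1 else -1
    e = alpha * beta * ((p - 1) // 2)
    return ((-1) ** e) * legendre(u) ** beta * legendre(v) ** alpha

# (-1,-1/Q) is ramified exactly at 2 and the real place (Examples 2.7.6(1)):
ram = [p for p in [2, 3, 5, 7, 11, 'inf'] if hilbert(-1, -1, p) == -1]
print(ram)  # [2, 'inf']
```

The resulting ramification set has even cardinality, as Hilbert reciprocity (Theorem 2.7.3 below) predicts, and the same routine confirms that \( \left( \frac{3, - 2}{\mathbb{Q}}\right) \) of Exercise 2.7 splits at every place checked.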
As there are only finitely many Archimedean places and finitely many dyadic places, then \( \left( \frac{a, b}{k}\right) \) splits at all but a finite number of places. Hilbert's Reciprocity Theorem 0.9.10 further implies that the number of places at which \( A \) is ramified is of even cardinality. Theorem 2.7.3 Let \( A \) be a quaternion algebra over the number field \( k \) . The number of places \( v \) on \( k \) such that \( A \) is ramified at \( v \) is of even cardinality. Although the quaternion algebra \( A \) does not uniquely determine the pair \( a, b \) when \( A \cong \left( \frac{a, b}{k}\right) \), the set of places at which \( A \) is ramified clearly depends only on the isomorphism class of \( A \) . Indeed the set of places at which \( A \) is ramified determines the isomorphism class of \( A \), as will be shown in Theorem 2.7.5. Definition 2.7.4 The finite set of places at which \( A \) is ramified will be denoted by \( \operatorname{Ram}\left( A\right) \), the subset of Archimedean ones by \( {\operatorname{Ram}}_{\infty }\left( A\right) \) and the non-Archimedean ones by \( {\operatorname{Ram}}_{f}\left( A\right) \) . The places \( v \in {\operatorname{Ram}}_{f}\left( A\right) \) correspond to prime ideals \( \mathcal{P} \), and the (reduced) discriminant of \( A,\Delta \left( A\right) \), is the ideal defined by \[ \Delta \left( A\right) = \mathop{\prod }\limits_{{\mathcal{P} \in {\operatorname{Ram}}_{f}\left( A\right) }}\mathcal{P} \] (2.9) Theorem 2.7.5 Let \( A \) and \( {A}^{\prime } \) be quaternion algebras over a number field \( k \) . Then \( A \cong {A}^{\prime } \) if and only if \( \operatorname{Ram}\left( A\right) = \operatorname{Ram}\left( {A}^{\prime }\right) \) . Proof: By Theorem 2.3.4, \( A \) and \( {A}^{\prime } \) are isomorphic if and only if the quadratic spaces \( {A}_{0} \) and \( {A}_{0}^{\prime } \) are isometric. 
However, by Theorem 0.9.12, \( {A}_{0} \) and \( {A}_{0}^{\prime } \) are isometric if and only if \( {\left( {A}_{0}\right) }_{v} \) and \( {\left( {A}_{0}^{\prime }\right) }_{v} \) are isometric over \( {k}_{v} \) for all places \( v \) on \( k \) . Now, since \( {\left( {A}_{0}\right) }_{v} = {\left( {A}_{v}\right) }_{0} \), it follows that \( A \) and \( {A}^{\prime } \) are isomorphic if and only if \( {A}_{v} \) and \( {A}_{v}^{\prime } \) are isomorphic for all \( v \) . For each complex Archimedean place \( v,{A}_{v} \cong {A}_{v}^{\prime } \) and for all other \( v \), there are precisely two possibilities by Theorem 2.5.1 and Corollary 2.6.4. However, \( \operatorname{Ram}\left( A\right) = \operatorname{Ram}\left( {A}^{\prime }\right) \) shows that \( {A}_{v} \cong {A}_{v}^{\prime } \) for all \( v.▱ \) Thus the isomorphism class of a quaternion algebra over a number field is determined by its ramification set. By Theorem 2.7.3, this ramification set is finite of even cardinality. To complete the classification theorem of quaternion algebras over number fields \( k \), it will be shown that for each set of places on \( k \) of even cardinality, excluding the complex Archimedean ones, there is a quaternion algebra with precisely that set as its ramification set. This will be carried out in Chapter 7. ## Examples 2.7.6 1. Let \( A \cong \left( \frac{-1, - 1}{\mathbb{Q}}\right) \) . Then \( \Delta \left( A\right) = 2\mathbb{Z} \) because \( A \) splits at all the odd primes by Theorem 2.6.6. It is established in Exercise 2.6, No. 3 that \( A \) is ramified at the prime 2. Alternatively, \( A \) is ramified at the Archimedean place by Theorem 2.5.1 and so by Theorem 2.7.3 must also be ramified at the prime 2. 2. Let \( t = \sqrt{3 - 2\sqrt{5}}, k = \mathbb{Q}\left( t\right) \) and \( A \cong \left( \frac{-1, t}{k}\right) \) . We want to determine \( \Delta \left( A\right) \) . Recall some information on \( k \) from \( §{0.2} \) . 
Thus \( k = \mathbb{Q}\left( u\right) \), where \( u = \left( {1 + t}\right) /2 \) and so \( u \) satisfies \( {x}^{4} - 2{x}^{3} + x - 1 = 0 \) . Also \( {R}_{k} = \mathbb{Z}\left\lbrack u\right\rbrack \) . Now \( k \) has two real places and since \( t \) is positive at one and negative at the other, \( A \) is ramified at just one of these Archimedean places. Further \( {N}_{k \mid \mathbb{Q}}\left( t\right) = - {11} \) so that \( t{R}_{k} = \mathcal{P} \) is a prime ideal. The quadratic form \( - {x}^{2} + t{y}^{2} = 1 \) has no solution in \( {k}_{\mathcal{P}} \) since \( \left( \frac{-1}{11}\right) = - 1 \) . This follows from Theorem 0.9.5. Finally, modulo 2, the polynomial \( {x}^{4} - 2{x}^{3} + x - 1 \) is irreducible, so, by Kummer’s Theorem, there is just one prime in \( k \) lying over 2. Thus \( \Delta \left( A\right) = t{R}_{k} \) by Theorem 2.7.3. ## Exercise 2.7 1. Show that the following quaternion algebras split: \[ \left( \frac{3, - 2}{\mathbb{Q}}\right) ,\;\left( \frac{-1 + \sqrt{5},\left( {1 - 3\sqrt{5}}\right) /2}{\mathbb{Q}\left( \sqrt{5}\right) }\right) . \] 2. Let \( A \) be a quaternion division algebra over \( \mathbb{Q} \) . Show that there are infinitely many quadratic fields \( k \) such that \( A{ \otimes }_{\mathbb{Q}}k \) is still a division algebra over \( k \) . 3. Let \( k = \mathbb{Q}\left( t\right) \), where \( t \) satisfies \( {x}^{3} + x + 1 = 0 \) . Let \( A = \left( \frac{t,1 + {2t}}{k}\right) \) . Show that \( \Delta \left( A\right) = 2{R}_{k} \) . 4. (Norm Theorem for quadratic extensions) Let \( L \mid k \) be a quadratic extension and let \( a \in {k}^{ * } \) . Prove that \( a \in N\left( L\right) \) if and only if \( a \in N\left( {L{ \otimes }_{k}{k}_{v}}\right) \) for all places \( v \) . 5. Let \( k \) be a number field and let \( L \mid k \) be an extension of degree 2. 
Let \( {\mathcal{P}}_{1} \) be an ideal in \( {R}_{k} \) which decomposes in the extension \( L \mid k \) and let \( {\mathcal{P}}_{2} \) be an ideal in \( {R}_{k} \) which is inert in the extension. Let \( {\mathcal{Q}}_{1} \) lie over \( {\mathcal{P}}_{1} \) and let \( {\mathcal{Q}}_{2} \) lie over \( {\mathcal{P}}_{2} \) . Let \( A \) be a quaternion algebra over \( k \) and let \( B = L{ \otimes }_{k}A \) . Prove that \( A \) is ramified at \( {\mathcal{P}}_{1} \) if and only if \( B \) is ramified at \( {\mathcal{Q}}_{1} \) . Prove that \( B \) cannot be ramified at \( {\mathcal{Q}}_{2} \) . ## 2.8 Central Simple Algebras In our discussion of quaternion algebras so far in this chapter, two crucial results on central simple algebras have been used. These are Wedderburn's Structure Theorem and the Skolem Noether Theorem. The latter result, in particular, will play a critical role in the arithmetic applications to Kleinian groups later in the book. Thus, in this section and the next, an introduction to central simple algebras will be given, sufficient to deduce these two results. These sections are independent of the results so far in this chapter. Let \( F \) denote a field. Unless otherwise stated, all module and vector space actions will be on the right. Definition 2.8.1 An \( F \) -algebra \( A \) is a vector space over \( F \), which is a ring with 1 satisfying \[ \left( {ab}\right) x = a\left( {bx}\right) = \left( {ax}\right) b\;\forall a, b \in A, x \in F. \] Throughout, all algebras will be finite-dimensional. If \( {A}^{\prime } \) is a subalgebra of \( A \), then the centraliser of \( {A}^{\prime } \) \[ {C}_{A}\left( {A}^{\prime }\right) = \left\{ {a \in A \mid a{a}^{\prime } = {a}^{\prime }a\;\forall {a}^{\prime } \in {A}^{\prime }}\right\} \] is also a subalgebra. In particular, the centre \( Z\left( A\right) = {C}_{A}\left( A\right) \) is a subalgebra. Furthermore, \( F \) embeds as \( {1}_{A}F \) as a subset of \( Z\left( A\right) \) . 
If \( M \) is an \( A \) -module, then \( {\operatorname{End}}_{A}\left( M\right) \) is the set of \( A \) -module endomorphisms \( \phi : M \rightarrow M \) . Under composition of mappings, \( {\operatorname{End}}_{A}\left( M\right) \) is also an \( F \) -algebra with the identity mapping as 1. Lemma 2.8.2 The left regular representation
1088_(GTM245)Complex Analysis
Definition 4.6
Definition 4.6. Let \( \gamma : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{C} \) be a continuous path. We say that \( \gamma \) is a piecewise differentiable path (henceforth abbreviated \( {pdp} \) ) if there exists a partition of \( \left\lbrack {a, b}\right\rbrack \) of the form given in (4.1) such that each of the paths defined by (4.2) is differentiable. Then we use (4.3) to define the path integral \( {\int }_{\gamma }\omega \) . Remark 4.7. The path integral over a pdp is well defined (independent of the partition) and agrees with our earlier definition for differentiable paths. The verification of these facts is left as an exercise. Remark 4.8. There are three pictures in \( {\mathbb{R}}^{2} \) that are naturally associated to each path \( \gamma = x + {\iota y} \) : the picture of the range of the curve and the graphs of the functions \( x \) and \( y \) . Figure 4.1 illustrates this for a curve whose image is the boundary of the rectangle with vertices \( \left( {c, e}\right) ,\left( {d, e}\right) ,\left( {d, f}\right) \), and \( \left( {c, f}\right) \) . Definition 4.9. Let \( \gamma \) denote a pdp parameterized by \( \left\lbrack {a, b}\right\rbrack \) in a domain \( D \) . Then the path \( \gamma \) traversed backward is defined by \[ {\gamma }_{ - }\left( t\right) = \gamma \left( {-t + b + a}\right) \text{ for all }t \in \left\lbrack {a, b}\right\rbrack . \] It follows that \[ {\int }_{{\gamma }_{ - }}\omega = - {\int }_{\gamma }\omega \] for all differential forms \( \omega \) defined in a neighborhood of the range of \( \gamma \) . Definition 4.10. 
If \( {\gamma }_{1} \) and \( {\gamma }_{2} \) are pdp’s in \( D \) parameterized by \( \left\lbrack {0,1}\right\rbrack \), with \( {\gamma }_{1}\left( 1\right) = \) \( {\gamma }_{2}\left( 0\right) \), then a new pdp \( {\gamma }_{1} * {\gamma }_{2} \) in \( D \) may be defined by first traversing \( {\gamma }_{1} \) and then continuing with \( {\gamma }_{2} \), as follows: \[ {\gamma }_{1} * {\gamma }_{2}\left( t\right) = \left\{ \begin{array}{l} {\gamma }_{1}\left( {2t}\right) ,\;\text{ for }0 \leq t \leq \frac{1}{2} \\ {\gamma }_{2}\left( {{2t} - 1}\right) ,\text{ for }\frac{1}{2} \leq t \leq 1 \end{array}\right. \] ![a50267de-c956-4a7f-8c2e-850adafcee65_105_0.jpg](images/a50267de-c956-4a7f-8c2e-850adafcee65_105_0.jpg) Fig. 4.1 Three figures for a curve (the boundary of a rectangle). (a) Picture of the curve. (b) The graph of \( x \) . (c) The graph of \( y \) Thus \[ {\int }_{{\gamma }_{1} * {\gamma }_{2}}\omega = {\int }_{{\gamma }_{1}}\omega + {\int }_{{\gamma }_{2}}\omega \] for all differential forms \( \omega \) defined in a neighborhood of the range of \( {\gamma }_{1} * {\gamma }_{2} \) . Lemma 4.11. If \( D \) is a domain in \( \mathbb{C} \), then any two points in \( D \) can be joined by a pdp in \( D \) . Proof. Fix \( c \in D \) and let \[ E = \{ z \in D;z\text{ can be joined to }c\text{ by a pdp in }D\} . \] The set \( E \) is open in \( D \), because if \( z \) denotes any point in \( E \), then, since \( D \) is an open set, there is a small disc \( U \) with center at \( z \) contained in \( D \), and any point \( w \) in \( U \) may be joined to \( z \) by a radial segment in \( U \) ; the pdp consisting of this segment followed by the pdp joining \( z \) to \( c \) gives that \( w \) in \( E \) (see Definition 4.10). Similarly, \( D - E \) is also open in \( D \) . Since \( D \) is connected and \( c \in E \), we conclude that \( E = D \) . Definition 4.12. 
Although the concepts introduced here were used in previous chapters without explanation, this is a good place for formalizing them. Let \( D \) be a domain in \( \mathbb{C} \) . (1) We recall that a function \( f \) defined on \( D \) is of class \( {\mathbf{C}}^{p} \) on \( D \), with \( p \in {\mathbb{Z}}_{ \geq 0} \) , if \( f \) has partial derivatives (with respect to \( x \) and \( y \) ) up to and including order \( p \), and these are continuous on \( D \) . We also say that \( f \) is a \( {\mathbf{C}}^{p} \) -function on \( D \) . Of course \( p = 0 \) just means that \( f \) is continuous on \( D \) . The vector space of complex-valued functions of class \( {\mathbf{C}}^{p} \) on \( D \) is denoted by \( {\mathbf{C}}^{p}\left( D\right) \) . We also say that \( f \) is of class \( {\mathbf{C}}^{\infty } \) or smooth in \( D \) if it is of class \( {\mathbf{C}}^{p} \) for all \( p \in {\mathbb{Z}}_{ \geq 0} \) . In this case we also write \( f \in {\mathbf{C}}^{\infty }\left( D\right) \) . (2) A differential form \( \omega = P\mathrm{\;d}x + Q\mathrm{\;d}y \) is of class \( {\mathbf{C}}^{p} \) on \( D \) if and only if \( P \) and \( Q \) are. (3) For a given function \( f \), we have the (real) partial derivatives \( {f}_{x} \) and \( {f}_{y} \) as well as the formal (complex) partial derivatives \( {f}_{z} \) and \( {f}_{\bar{z}} \) introduced in Definition 2.37. Similarly, we consider the differentials \[ \mathrm{d}z = \mathrm{d}x + \imath \mathrm{d}y\text{ and }\mathrm{d}\bar{z} = \mathrm{d}x - \imath \mathrm{d}y. \] It follows that every differential form can be written in the following two ways: \[ P\mathrm{\;d}x + Q\mathrm{\;d}y = \frac{1}{2}\left\lbrack {\left( {P - \imath Q}\right) \mathrm{d}z + \left( {P + \imath Q}\right) \mathrm{d}\bar{z}}\right\rbrack . \] Remark 4.13. 
We recommend that all concepts and definitions that are formulated in terms of \( x \) and \( y \) be reformulated by the reader in terms of \( z \) and \( \bar{z} \) (and vice versa). (4) If \( f \) is a \( {\mathbf{C}}^{1} \) -function on \( D \), then we define \( \mathrm{d}f \), the total differential of \( f \), by either of the two equivalent formulae: \[ \mathrm{d}f = {f}_{x}\mathrm{\;d}x + {f}_{y}\mathrm{\;d}y = {f}_{z}\mathrm{\;d}z + {f}_{\bar{z}}\mathrm{\;d}\bar{z}. \] In addition to the differential operator \( d \), we have two other important differential operators \( \partial \) and \( \bar{\partial } \) defined by \[ \partial f = {f}_{z}\mathrm{\;d}z\text{ and }\bar{\partial }f = {f}_{\bar{z}}\mathrm{\;d}\bar{z}, \] as well as the formula \[ d = \partial + \bar{\partial } \] We have defined the three differential operators on spaces of \( {\mathbf{C}}^{1} \) -functions. They can be also defined on spaces of \( {\mathbf{C}}^{1} \) -differential forms, and it follows from these definitions that, for example, on \( {\mathbf{C}}^{2} \) -functions the equality \( {d}^{2} = 0 \) holds. We shall not need these extended definitions but will outline some facts concerning the exterior differential calculus in the first appendix to this chapter. (5) A differential form \( \omega \) is called exact if there exists a \( {\mathbf{C}}^{1} \) -function \( F \) on \( D \) (called a primitive for \( \omega \) ) such that \( \omega = \mathrm{d}F \) . A primitive (if it exists) is unique up to addition of a constant, because \( \mathrm{d}F = \) 0 means \( {F}_{x} = {F}_{y} = 0 \), and this implies that \( F \) is constant on the connected set \( D \) . By abuse of language we also say that a function \( F \) is a primitive for a function \( f \) if \( F \) is a primitive for the differential form \( \omega = f\left( z\right) \mathrm{d}z \) . 
(6) A differential form \( \omega \) on \( D \) is closed if it is locally exact; that is, if for each \( c \in D \) there exists a neighborhood \( U \) of \( c \) in \( D \) such that \( {\left. \omega \right| }_{U} \) is exact. ## 4.2 The Precise Difference Between Closed and Exact Forms While the definitions of exact and closed forms are straightforward, as is the fact that every exact differential is closed, an intuitive sense of the difference between the two properties may not immediately present itself. This is because these differences arise from the topology of the domain where the differential form is defined and from the behavior of the differential form along certain paths in that domain. A closed but not exact differential is given in Example 4.30. We will see that on a disc the two properties are equivalent, but situations where they are not equivalent are especially significant. To understand this difference, we study the pairing that associates the complex number \[ \langle \gamma ,\omega \rangle = {\int }_{\gamma }\omega \] to a pdp \( \gamma \) in a domain \( D \) and a differential form \( \omega \) on \( D \) (when the integral exists). Lemma 4.14. Let \( \omega \) be a differential form on a domain \( D \) . Then \( \omega \) is exact on \( D \) if and only if \( {\int }_{\gamma }\omega = 0 \) for all closed pdps \( \gamma \) in \( D \) . Proof. Assume that \( \omega \) is exact. Then there exists a \( {\mathbf{C}}^{1} \) -function \( F \) on \( D \) with \[ \omega = {F}_{x}\mathrm{\;d}x + {F}_{y}\mathrm{\;d}y. 
\] If \( \gamma \) is a pdp parameterized by \( \left\lbrack {a, b}\right\rbrack \) joining two points \( {P}_{1} \) to \( {P}_{2} \) in \( D \), then \[ {\int }_{\gamma }\omega = {\int }_{a}^{b}\left( {{F}_{x}\frac{\mathrm{d}x}{\mathrm{\;d}t} + {F}_{y}\frac{\mathrm{d}y}{\mathrm{\;d}t}}\right) \mathrm{d}t = {\int }_{a}^{b}\frac{\mathrm{d}F}{\mathrm{\;d}t}\mathrm{\;d}t = F\left( {P}_{2}\right) - F\left( {P}_{1}\right) , \] which equals zero if \( {P}_{1} = {P}_{2} \), as happens for every closed curve. To prove the converse, let \( {Z}_{0} = \left( {{x}_{0},{y}_{0}}\right) \) be a fixed point in \( D \) and let \( Z = \left( {x, y}\right) \) be an arbitrary point in \( D \) . Let \( \gamma \) be a pdp in \( D \) joining \( {Z}_{0} \) to \( Z \) and define \[ F\left( {x, y}\right) = {\int }_{\gamma }\omega . \] To see that the function \( F \) is well defined on \( D \), note that if \( {\gamma }_{2} \) is another pdp in \( D \) joining \( {Z}_{0} \) to \( Z \), then \( {\gamma }_{2} * {\gamma }_{ - } \) is a closed pdp in \( D \), and it follows from the hypothesis and from Definitions 4.9 and 4.10 that \( {\int }_{{\gamma }_{2}}\omega = {\int }_{\gamma }\omega \) . 
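The closed-versus-exact distinction behind Lemma 4.14 can be previewed numerically. The following sketch is an illustration (not from the text): it approximates \( {\int }_{\gamma }P\mathrm{\;d}x + Q\mathrm{\;d}y \) over the unit circle for an exact form \( \mathrm{d}F \) with \( F = {x}^{2} + {y}^{2} \), where the integral vanishes, and for the angle form \( \left( {-y\mathrm{\;d}x + x\mathrm{\;d}y}\right) /\left( {{x}^{2} + {y}^{2}}\right) \), which is closed but not exact on \( \mathbb{C} \smallsetminus \{ 0\} \) and integrates to \( {2\pi } \).

```python
import math

def path_integral(P, Q, gamma, dgamma, n=20000):
    """Approximate the path integral of P dx + Q dy over t in [0, 1]
    by the midpoint rule, given gamma(t) and its derivative dgamma(t)."""
    total, h = 0.0, 1.0 / n
    for k in range(n):
        t = (k + 0.5) * h
        x, y = gamma(t)
        dx, dy = dgamma(t)
        total += (P(x, y) * dx + Q(x, y) * dy) * h
    return total

# The unit circle, a closed pdp parameterized by [0, 1]:
circle = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
dcircle = lambda t: (-2 * math.pi * math.sin(2 * math.pi * t),
                     2 * math.pi * math.cos(2 * math.pi * t))

# Exact form dF for F = x^2 + y^2: the integral over a closed path is 0.
exact = path_integral(lambda x, y: 2 * x, lambda x, y: 2 * y,
                      circle, dcircle)

# The angle form (-y dx + x dy)/(x^2 + y^2): closed on C \ {0}, not exact.
angle = path_integral(lambda x, y: -y / (x**2 + y**2),
                      lambda x, y: x / (x**2 + y**2),
                      circle, dcircle)
print(round(exact, 6), round(angle, 6))  # 0.0 and 2*pi = 6.283185
```

The nonzero value for the angle form over a closed path certifies, by Lemma 4.14, that it admits no primitive on the punctured plane, anticipating Example 4.30.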
1288_[张芷芬&丁同仁&黄文灶&董镇喜] Qualitative Theory of Differential Equations
Definition 4.1
Definition 4.1. Let \( l \subset {R}^{2} \) be a smooth Jordan curve, and let \( G \) be the region enclosed by \( l \) . Suppose at the point \( P \in l \), the tangent to \( l \) coincides with the field vector determined by (4.1). If there exists a sufficiently small orbit arc \( r\left( P\right) \) for equation (4.1) through \( P \) such that \( r\left( P\right) \smallsetminus P \subset G \) (or \( r\left( P\right) \smallsetminus P \subset {R}^{2} \smallsetminus \bar{G} \) ), then \( P \) is called an interior (or exterior) tangent point with respect to \( l \) . THEOREM 4.1. Let \( Q \) be an isolated critical point for (4.1). Suppose \( l \) is a smooth Jordan curve enclosing the region \( G \), with \( Q \in G \) and there is no other critical point of (4.1) in \( \bar{G} \) except \( Q \) . Then the critical point index \( J\left( Q\right) \) satisfies \[ J\left( Q\right) = \frac{1}{2}\mathop{\sum }\limits_{{P \in l}}I\left( P\right) + 1 \] ( * ) where \[ I\left( P\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\mathrm{P}\text{ is an interior tangent point with respect to }l, \\ - 1 & \text{ if }\mathrm{P}\text{ is an exterior tangent point with respect to }l, \\ 0 & \text{ otherwise. } \end{array}\right. \] Proof. Let \( \delta > 0 \) be sufficiently small, such that \( {S}_{\delta }\left( Q\right) \) satisfies Theorem 6.3 in \( §6 \) of Chapter II. In \( §6 \) of Chapter III, we prove that an isolated critical point \( Q \) satisfies the Bendixson formula \[ J\left( Q\right) = 1 + \frac{e - h}{2} \] where \( h \) and \( e \) are respectively the number of hyperbolic (including hyperbolic-elliptic) sectors and the number of elliptic sectors which have points in common with \( \partial {S}_{\delta }\left( Q\right) \) . 
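The index \( J\left( Q\right) \) appearing here can also be computed directly as the winding number of the field vector around a small circle enclosing \( Q \). The sketch below is a numerical illustration (not from the text); its three test fields recover the values the Bendixson formula \( J\left( Q\right) = 1 + \left( {e - h}\right) /2 \) predicts.

```python
import math

def index(field, radius=1.0, n=3600):
    """Winding number of the vector field along a circle about the origin:
    the total change of the field's angle, divided by 2*pi."""
    total, prev = 0.0, None
    for k in range(n + 1):
        t = 2 * math.pi * k / n
        vx, vy = field(radius * math.cos(t), radius * math.sin(t))
        ang = math.atan2(vy, vx)
        if prev is not None:
            d = ang - prev
            # unwrap jumps across the branch cut of atan2
            if d > math.pi:
                d -= 2 * math.pi
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

print(index(lambda x, y: (x, y)))              # 1: node, e = h = 0
print(index(lambda x, y: (x, -y)))             # -1: saddle, h = 4, e = 0
print(index(lambda x, y: (x*x - y*y, 2*x*y)))  # 2: e = 2, h = 0
```

The third field is \( {z}^{2} \) viewed as a plane vector field; its two elliptic sectors give \( 1 + \left( {2 - 0}\right) /2 = 2 \), matching the winding-number computation.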
Clearly \( \delta > 0 \) can be chosen sufficiently small, and a nonsingular continuous transformation can be made homotopic to the original vector field on \( \partial {S}_{\delta }\left( Q\right) \) such that the resulting homotopic vector field does not have any interior or exterior tangent point with respect to \( \partial {S}_{\delta }\left( Q\right) \) in the parabolic or parabolic-elliptic sectors. Further, the resulting vector field has a unique interior tangent point with respect to \( \partial {S}_{\delta } \) in each elliptic sector, and a unique exterior tangent point with respect to \( \partial {S}_{\delta } \) in each hyperbolic or hyperbolic-elliptic sector. Since homotopic vector fields have the same rotation number, the Bendixson formula implies the validity of \( \left( *\right) \) THEOREM 4.2. The index of an isolated critical point for a continuous vector field (4.1) on the plane is invariant under diffeomorphisms (differentiable homeomorphisms). Proof. Let \( Q \) be an isolated critical point for (4.1), and \( l \) be a smooth Jordan curve enclosing \( Q \), with \( l \subset D \subset {R}^{2} \) . Assume that in the bounded domain \( D \) there is no critical point other than \( Q \) . Let \[ \varphi \left( x\right) : x \in D \rightarrow y = \varphi \left( x\right) \in \varphi \left( D\right) \] be a diffeomorphism on \( D \), where \( \varphi \left( l\right) \) is clearly a smooth Jordan curve enclosing \( \varphi \left( Q\right) \) ; and \( \varphi \left( l\right) \subset \varphi \left( D\right) \), with \( \varphi \left( D\right) \) containing no critical point other than \( \varphi \left( Q\right) \) . 
Since \( \varphi \) is a diffeomorphism, we have the relation: \[ \begin{matrix} T\left( l\right) & \overset{d\varphi }{ \rightarrow } & T\left( {\varphi \left( l\right) }\right) \\ \uparrow & & \uparrow \\ l & \underset{\varphi }{ \rightarrow } & \varphi \left( l\right) , \end{matrix} \] where \( T\left( l\right) \) and \( T\left( {\varphi \left( l\right) }\right) \) denote the tangent vector field on \( l \) and \( \varphi \left( l\right) \) respectively, as indicated in the above diagram. The tangent map \( {d\varphi } \), the derivative of \( \varphi \), maps each tangent vector at any point \( P \) on \( l \) to a tangent vector at a corresponding point \( \varphi \left( P\right) \) on \( \varphi \left( l\right) \), and vice versa. We have the following commutative diagram: \[ \begin{matrix} T\left( D\right) & \overset{d\varphi }{ \rightarrow } & T\left( {\varphi \left( D\right) }\right) \\ X\left( x\right) \uparrow & & \uparrow Y\left( y\right) \\ D & \underset{\varphi }{ \rightarrow } & \varphi \left( D\right) . \end{matrix} \] Under the transformation \( \varphi \), equation (4.1) becomes \[ \dot{y} = \frac{\partial \varphi }{\partial x}\dot{x} = \frac{\partial \varphi }{\partial x}X\left( {{\varphi }^{-1}\left( y\right) }\right) = Y\left( y\right) = {d\varphi } \cdot X \cdot {\varphi }^{-1}\left( y\right) . \] Hence, at points where the original vector field is tangent to \( l \), the vector field after the transformation will be tangent to \( \varphi \left( l\right) \) at the corresponding points, and vice versa. Moreover, an interior or exterior tangent point will respectively become an interior or exterior tangent point after the transformation. 
Consequently, formula \( \left( *\right) \) in Theorem 4.1 implies that \[ J\left( {\varphi \left( Q\right) }\right) = \frac{1}{2}\mathop{\sum }\limits_{{\varphi \left( P\right) \in \varphi \left( l\right) }}I\left( {\varphi \left( P\right) }\right) + 1 \] \[ = \frac{1}{2}\mathop{\sum }\limits_{{P \in l}}I\left( P\right) + 1 = J\left( Q\right) \] where \( I\left( {\varphi \left( P\right) }\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\varphi \left( P\right) \text{ is an interior tangent point with respect to }\varphi \left( l\right) , \\ - 1 & \text{ if }\varphi \left( P\right) \text{ is an exterior tangent point with respect to }\varphi \left( l\right) , \\ 0 & \text{ otherwise. } \end{array}\right. \) This completes the proof. Theorem 4.2 shows that the index of a critical point is independent of the choice of coordinates, and therefore we can proceed to define the index of a critical point for a continuous vector field on a two-dimensional surface \( M \) . We will not give a detailed description here of the definition of continuous vector fields or flows on \( M \) . Intuitively, they are mappings of families of trajectories on \( {R}^{2} \) through diffeomorphisms to families of trajectories on certain open subsets of \( M \), and are then patched together to form a flow on \( M \) . For example, in early discussions we patch the three coordinate planes \( \alpha ,{\alpha }^{ * } \), and \( \widehat{\alpha } \) to form a flow on the projective plane \( {P}_{2} \) . Suppose that a continuous vector field is defined on \( M \) . Let \( Q \in M \) be an isolated critical point. There exists \( \left( {{U}_{Q},\varphi }\right) \), where \( {U}_{Q} \) is an open subset of \( M, Q \in {U}_{Q},{\bar{U}}_{Q} \) contains no critical point except \( Q \), and \( \varphi \) is a diffeomorphism \[ \varphi : {U}_{Q} \rightarrow \varphi \left( {U}_{Q}\right) \subset {R}^{2}. \] \( \varphi \left( {U}_{Q}\right) \) is an open region in \( {R}^{2} \) . 
\( \varphi \) maps the orbits in \( {U}_{Q} \) to orbits in \( \varphi \left( {U}_{Q}\right) \) . The tangent map, \( {d\varphi } \), maps the vector field \( {V}_{Q} \) on \( {U}_{Q} \) to the vector field \( {d\varphi }\left( {V}_{Q}\right) \) on \( \varphi \left( {U}_{Q}\right) \) . Indeed, the flow on \( M \) near \( Q \) is recovered via \( {\varphi }^{-1} \), which maps the orbits in \( \varphi \left( {U}_{Q}\right) \) back to \( {U}_{Q} \) . Definition 4.2. The index of an isolated critical point \( Q \in M \) is defined as \[ {J}_{M}\left( Q\right) = J\left( {\varphi \left( Q\right) }\right) \] where \( J\left( {\varphi \left( Q\right) }\right) \) is the index of the critical point \( \varphi \left( Q\right) \) of the vector field \( {d\varphi }\left( {V}_{Q}\right) \) . Theorem 4.2 implies that \( J\left( {\varphi \left( Q\right) }\right) \) is independent of the choice of \( \varphi \) . Hence, this definition is meaningful. THEOREM 4.3. Any continuous vector field on \( {S}^{2} \) with only isolated critical points must have the sum of indices equal to 2 (denoted as \( \chi \left( {S}^{2}\right) = 2 \) ). Proof. Since there are only a finite number of critical points on \( {S}^{2} \), we can choose a regular point \( P \in {S}^{2} \) and a neighborhood \( U\left( P\right) \subset {S}^{2} \) such that ![bea09977-be18-4815-a30e-4fa2fe3b219c_366_0.jpg](images/bea09977-be18-4815-a30e-4fa2fe3b219c_366_0.jpg) FIGURE 5.21 \( \bar{U}\left( P\right) \) contains no critical point. The tangent map \( {d\sigma } \) of the diffeomorphism \( \sigma : U\left( P\right) \rightarrow {R}^{2} \) maps a vector \( V\left( P\right) \) on \( U\left( P\right) \) to a vector \( {d\sigma }\left( {V\left( P\right) }\right) \) on \( \sigma \left( {U\left( P\right) }\right) \) . 
Choose a closed Jordan curve \( s \) surrounding \( \sigma \left( P\right) \), with \( D \) as the region surrounded by \( s, D \subset \sigma \left( {U\left( P\right) }\right) \), such that there are only two exterior tangent points \( {P}^{\left( 1\right) },{P}^{\left( 2\right) } \) on \( s \) with respect to the field vectors \( {d\sigma }\left( {V\left( P\right) }\right) \) . (Such a choice of \( s \) is possible, since a vector field is roughly parallel near a regular point). Use \( P \) as the north pole for a stereographic projection \( \varphi \), and choose an open set \( {G}_{1} \) in \( {S}^{2} \) such that \( P \in {G}_{1} \subset {\sigma }^{-1}\left( D\right) = G \) . We have \( \varphi \left( {{S}^{2} \smallsetminus {G}_{1}}\right) \subset {R}^{2} \), and \( \varphi \) is a diffeomorphism on \( {S}^{2} \smallsetminus {G}_{1} \) . All the critical points on \( {S}^{2} \) are mapped into the bounded region \( \varphi \left( {{S}^{2} \smallsetminus \bar{G}}\right) \) . Let \( l = {\sigma }^{-1}\left( s\right) \) and \( \varphi \left( l\right) \) be the bou
110_The Schwarz Function and Its Generalization to Higher Dimensions
Definition 9.2
Definition 9.2. Let \( {x}^{ * } \) be a feasible point of \( \left( P\right) \) in (9.1). A tangent direction of \( \mathcal{F}\left( P\right) \) at \( {x}^{ * } \) is called a feasible direction of \( \left( P\right) \) at \( {x}^{ * } \) . We denote the set of feasible directions of \( \left( P\right) \) at \( {x}^{ * } \) by \( \mathcal{F}\mathcal{D}\left( {x}^{ * }\right) \) . A vector \( d \in {\mathbb{R}}^{n} \) is called a descent direction for \( f \) at \( {x}^{ * } \) if there exists a sequence of points \( {x}_{n} \rightarrow {x}^{ * } \) in \( {\mathbb{R}}^{n} \) (not necessarily feasible) with tangent direction \( d \) such that \( f\left( {x}_{n}\right) \leq f\left( {x}^{ * }\right) \) for all \( n \) . If \( f\left( {x}_{n}\right) < f\left( {x}^{ * }\right) \) for all \( n \), we call \( d \) a strict descent direction for \( f \) at \( {x}^{ * } \) . We denote the set of strict descent directions at \( {x}^{ * } \) by \( \mathcal{{SD}}\left( {f;{x}^{ * }}\right) \) . Lemma 9.3. If \( {x}^{ * } \in \mathcal{F}\left( P\right) \) is a local minimum of \( \left( P\right) \), then \[ \mathcal{F}\mathcal{D}\left( {x}^{ * }\right) \cap \mathcal{S}\mathcal{D}\left( {f;{x}^{ * }}\right) = \varnothing . \] Proof. The lemma is obvious: if the intersection is not empty, then there exists a sequence of feasible points \( {x}_{n} \rightarrow {x}^{ * } \) such that \( f\left( {x}_{n}\right) < f\left( {x}^{ * }\right) \), which contradicts our assumption that \( {x}^{ * } \) is a local minimizer of \( \left( P\right) \) . Although this lemma is very important from a conceptual point of view, it is hard to extract meaningful results from it, since the sets \( \mathcal{F}\mathcal{D}\left( {x}^{ * }\right) \) and \( \mathcal{{SD}}\left( {f;{x}^{ * }}\right) \) are difficult to describe in a useful way, unless the functions \( f,{g}_{i} \) , \( {h}_{j} \) appearing in \( \left( P\right) \) have additional useful properties, such as differentiability. 
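Definition 9.2 can be illustrated with a small numerical check (an illustration of this edit, not from the text): for \( f\left( {x, y}\right) = {x}^{2} + {y}^{2} \) at \( {x}^{ * } = \left( {1,0}\right) \), the sequence \( {x}_{n} = {x}^{ * } + d/n \) converges to \( {x}^{ * } \) with tangent direction \( d \), and \( d = \left( {-1,0}\right) \) is a strict descent direction while \( d = \left( {0,1}\right) \) is not.

```python
# Illustration of Definition 9.2 (not from the text): test whether a
# direction d is a strict descent direction for f(x, y) = x^2 + y^2 at
# x* = (1, 0), along the sample sequence x_n = x* + d/n.
f = lambda x, y: x * x + y * y
xstar = (1.0, 0.0)

def is_strict_descent(d, terms=50):
    # x_n -> x* with tangent direction d
    xs = [(xstar[0] + d[0] / n, xstar[1] + d[1] / n)
          for n in range(1, terms + 1)]
    return all(f(*p) < f(*xstar) for p in xs)

print(is_strict_descent((-1.0, 0.0)))  # True: f decreases toward the origin
print(is_strict_descent((0.0, 1.0)))   # False: f(1, 1/n) = 1 + 1/n^2 > 1
```

One sample sequence cannot prove membership in \( \mathcal{{SD}}\left( {f;{x}^{ * }}\right) \) in general, but here \( f\left( {{x}^{ * } + {td}}\right) \) is monotone in \( t \) along each tested direction, so the check reflects the definition faithfully.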
## 9.1 First-Order Necessary Conditions (Fritz John Optimality Conditions) The Fritz John (FJ) conditions are first-order necessary conditions for a local minimizer of the nonlinear program \( \left( P\right) \) in (9.1) when all the functions involved, \( f,{g}_{i},{h}_{j} \), are continuously differentiable in an open neighborhood of the feasible region \( \mathcal{F}\left( P\right) \) . Let us define the linearized versions of the feasible and strict descent directions defined above, \[ \mathcal{L}\mathcal{F}\mathcal{D}\left( {x}^{ * }\right) \mathrel{\text{:=}} \left\{ {d : \left\langle {\nabla {g}_{i}\left( {x}^{ * }\right), d}\right\rangle < 0,\;i = 1,\ldots, r,}\right. \] \[ \left. {\left\langle {\nabla {h}_{j}\left( {x}^{ * }\right), d}\right\rangle = 0,\;j = 1,\ldots, m}\right\} , \] \[ \mathcal{L}\mathcal{S}\mathcal{D}\left( {f;{x}^{ * }}\right) \mathrel{\text{:=}} \left\{ {d : \left\langle {\nabla f\left( {x}^{ * }\right), d}\right\rangle < 0}\right\} . \] To motivate these definitions, let \( F \) be a differentiable function defined on an open set in \( {\mathbb{R}}^{n} \) . If \( d \in \mathcal{{LSD}}\left( {F;x}\right) \), then \[ F\left( {x + {td}}\right) = F\left( x\right) + t\left\lbrack {\langle \nabla F\left( x\right), d\rangle + \frac{o\left( t\right) }{t}}\right\rbrack < F\left( x\right) , \] for all \( t > 0 \) small, since \( \langle \nabla F\left( x\right), d\rangle < 0 \) and \( \mathop{\lim }\limits_{{t \rightarrow 0}}o\left( t\right) /t = 0 \), so that the term inside the brackets is negative. Thus, if \( d \in \mathcal{L}\mathcal{F}\mathcal{D}\left( {x}^{ * }\right) \cap \mathcal{L}\mathcal{S}\mathcal{D}\left( {f;{x}^{ * }}\right) \) and \( t > 0 \) is small enough, then \( f\left( {{x}^{ * } + {td}}\right) < f\left( {x}^{ * }\right) \) and \( {g}_{i}\left( {{x}^{ * } + {td}}\right) < {g}_{i}\left( {x}^{ * }\right) = 0 \) for an active constraint function \( {g}_{i} \) .
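The descent property just derived can be checked numerically. The following is my own sketch (not from the text): for \( f(x) = x_1^2 + x_2^2 \) and any direction \( d \) with \( \langle \nabla f(x^*), d\rangle < 0 \), the value \( f(x^* + td) \) must fall below \( f(x^*) \) for all sufficiently small \( t > 0 \).

```python
# Sketch (my own illustration): f(x) = x1^2 + x2^2, x* = (1, 0), and a
# direction d with <grad f(x*), d> < 0, i.e. d lies in LSD(f; x*).
# Then f(x* + t*d) < f(x*) for every sufficiently small t > 0.

def f(x):
    return x[0] ** 2 + x[1] ** 2

def grad_f(x):
    return (2.0 * x[0], 2.0 * x[1])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x_star = (1.0, 0.0)
d = (-1.0, 0.5)                      # dot(grad_f(x_star), d) = -2 < 0

assert dot(grad_f(x_star), d) < 0    # d is a linearized strict descent direction
for t in (1e-1, 1e-2, 1e-3):
    x_t = tuple(xi + t * di for xi, di in zip(x_star, d))
    assert f(x_t) < f(x_star)        # strict descent for small t > 0
print("descent along d confirmed")
```

Here \( f(x^* + td) = 1 - 2t + 1.25t^2 \), so the decrease is visible for any \( t \in (0, 1.6) \), consistent with the first-order expansion above.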
The requirement that \( \left\langle {\nabla {h}_{j}\left( x\right), d}\right\rangle = 0 \) is more delicate and will be handled below. Theorem 9.4. (Fritz John) If a point \( {x}^{ * } \) is a local minimizer of \( \left( P\right) \) , then there exist multipliers \( \left( {\lambda ,\mu }\right) \mathrel{\text{:=}} \left( {{\lambda }_{0},{\lambda }_{1},\ldots ,{\lambda }_{r},{\mu }_{1},\ldots ,{\mu }_{m}}\right) \), not all zero, \( \left( {{\lambda }_{0},{\lambda }_{1},\ldots ,{\lambda }_{r}}\right) \geq 0 \), such that \[ {\lambda }_{0}\nabla f\left( {x}^{ * }\right) + \mathop{\sum }\limits_{{i = 1}}^{r}{\lambda }_{i}\nabla {g}_{i}\left( {x}^{ * }\right) + \mathop{\sum }\limits_{{j = 1}}^{m}{\mu }_{j}\nabla {h}_{j}\left( {x}^{ * }\right) = 0, \] (9.2) \[ {\lambda }_{i} \geq 0,{g}_{i}\left( {x}^{ * }\right) \leq 0,{\lambda }_{i}{g}_{i}\left( {x}^{ * }\right) = 0, i = 1,\ldots, r. \] (9.3) Proof. Because of the "complementarity conditions" (9.3), we may write (9.2) in the form \[ {\lambda }_{0}\nabla f\left( {x}^{ * }\right) + \mathop{\sum }\limits_{{j \in I\left( {x}^{ * }\right) }}{\lambda }_{j}\nabla {g}_{j}\left( {x}^{ * }\right) + \mathop{\sum }\limits_{{j = 1}}^{m}{\mu }_{j}\nabla {h}_{j}\left( {x}^{ * }\right) = 0. \] If the vectors \( {\left\{ \nabla {h}_{i}\left( {x}^{ * }\right) \right\} }_{1}^{m} \) are linearly dependent, then there exist multipliers \( \mu \mathrel{\text{:=}} \left( {{\mu }_{1},\ldots ,{\mu }_{m}}\right) \neq 0 \) such that \( \mathop{\sum }\limits_{{j = 1}}^{m}{\mu }_{j}\nabla {h}_{j}\left( {x}^{ * }\right) = 0 \) . Then setting \( \lambda \mathrel{\text{:=}} \) \( \left( {{\lambda }_{0},\ldots ,{\lambda }_{r}}\right) = 0 \), we see that the theorem holds with the multipliers \( \left( {\lambda ,\mu }\right) \neq 0 \) . Assume now that \( {\left\{ \nabla {h}_{j}\left( {x}^{ * }\right) \right\} }_{j = 1}^{m} \) are linearly independent. 
We claim that \[ \left\{ {d : \left\langle {\nabla f\left( {x}^{ * }\right), d}\right\rangle < 0,\left\langle {\nabla {g}_{i}\left( {x}^{ * }\right), d}\right\rangle < 0, i \in I\left( {x}^{ * }\right) ,}\right. \] (9.4) \[ \left. {\left\langle {\nabla {h}_{j}\left( {x}^{ * }\right), d}\right\rangle = 0, j = 1,\ldots, m}\right\} = \varnothing . \] Suppose that (9.4) is false, and pick a direction \( d,\parallel d\parallel = 1 \), in the set above. Since \( {\left\{ \nabla {h}_{j}\left( {x}^{ * }\right) \right\} }_{1}^{m} \) is linearly independent, it follows from Lyusternik’s theorem (see Theorem 2.29 or Theorem 3.23) that there exists a sequence \( {x}_{n} \rightarrow {x}^{ * } \) that has tangent direction \( d \) and satisfies the equations \( {h}_{j}\left( {x}_{n}\right) = 0 \) , \( j = 1,\ldots, m \) . We also have \[ f\left( {x}_{n}\right) = f\left( {x}^{ * }\right) + \left\lbrack {\left\langle {\nabla f\left( {x}^{ * }\right) ,\frac{{x}_{n} - {x}^{ * }}{\begin{Vmatrix}{x}_{n} - {x}^{ * }\end{Vmatrix}}}\right\rangle + \frac{o\left( {{x}_{n} - {x}^{ * }}\right) }{\begin{Vmatrix}{x}_{n} - {x}^{ * }\end{Vmatrix}}}\right\rbrack \cdot \begin{Vmatrix}{{x}_{n} - {x}^{ * }}\end{Vmatrix}, \] where \( \left( {{x}_{n} - {x}^{ * }}\right) /\begin{Vmatrix}{{x}_{n} - {x}^{ * }}\end{Vmatrix} \rightarrow d \), and \( o\left( {{x}_{n} - {x}^{ * }}\right) /\begin{Vmatrix}{{x}_{n} - {x}^{ * }}\end{Vmatrix} \rightarrow 0 \) as \( n \rightarrow \infty \) . It follows that the term inside the brackets is negative and thus \( f\left( {x}_{n}\right) < f\left( {x}^{ * }\right) \) for sufficiently large \( n \) . The same arguments show that if \( {g}_{i} \) is an active constraint function at \( {x}^{ * } \), then \( {g}_{i}\left( {x}_{n}\right) < {g}_{i}\left( {x}^{ * }\right) = 0 \) for sufficiently large \( n \) . 
We conclude that \( {\left\{ {x}_{n}\right\} }_{1}^{\infty } \) is a feasible sequence for \( \left( P\right) \) such that \( f\left( {x}_{n}\right) < f\left( {x}^{ * }\right) \) for large enough \( n \) . This contradicts our assumption that \( {x}^{ * } \) is a local minimizer of \( \left( P\right) \), and proves (9.4). The theorem follows immediately from (9.4) using the homogeneous version of Motzkin's transposition theorem; see Theorem 3.15, Theorem 7.17, or Theorem A.3. Definition 9.5. The function \[ L\left( {x;\lambda ,\mu }\right) \mathrel{\text{:=}} {\lambda }_{0}f\left( x\right) + \mathop{\sum }\limits_{{i = 1}}^{r}{\lambda }_{i}{g}_{i}\left( x\right) + \mathop{\sum }\limits_{{j = 1}}^{m}{\mu }_{j}{h}_{j}\left( x\right) \;\left( {{\lambda }_{i} \geq 0, i = 0,\ldots, r}\right) \] is called the weak Lagrangian function for \( \left( P\right) \) . If \( {\lambda }_{0} > 0 \), then we may assume without loss of generality that \( {\lambda }_{0} = 1 \), and the resulting function, \[ L\left( {x,\lambda ,\mu }\right) = f\left( x\right) + \mathop{\sum }\limits_{{i = 1}}^{r}{\lambda }_{i}{g}_{i}\left( x\right) + \mathop{\sum }\limits_{{j = 1}}^{m}{\mu }_{j}{h}_{j}\left( x\right) ,\;{\lambda }_{i} \geq 0, i = 1,\ldots, r, \] is called the Lagrangian function. The Lagrangian function is named in honor of Lagrange, who first introduced an analogue of the function \( L \) in the eighteenth century in order to investigate optimality conditions in calculus of variations problems. We remark that the equality (9.2) in the FJ conditions can be written as \[ {\nabla }_{x}L\left( {{x}^{ * },\lambda ,\mu }\right) = 0. \] The conditions expressed in (9.3) are called complementarity conditions, since \[ {\lambda }_{i}{g}_{i}\left( {x}^{ * }\right) = 0,\;{\lambda }_{i} \geq 0,\;{g}_{i}\left( {x}^{ * }\right) \leq 0, \] imply that either \( {\lambda }_{i} = 0 \) or \( {g}_{i}\left( {x}^{ * }\right) = 0 \) .
In particular, if \( {g}_{i}\left( {x}^{ * }\right) < 0 \), that is, \( {g}_{i} \) is inactive at \( {x}^{ * } \), then \( {\lambda }_{i} = 0 \) . It is possible that a constraint is active and the corresponding multiplier is zero. For example, in Exercise 17 on page 245, this happens at every KKT point. Otherwise, we say that strict complementarity holds at \( {x}^{ * } \) .
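The FJ conditions (9.2)-(9.3) can be verified on a concrete problem. The following is my own example (not from the text): minimize \( x^2 + y^2 \) subject to \( g(x,y) = 1 - x - y \leq 0 \), whose minimizer is \( x^* = (1/2, 1/2) \) with multipliers \( \lambda_0 = \lambda_1 = 1 \).

```python
# Hedged sketch (my own example): check the FJ conditions (9.2)-(9.3) for
#   min x^2 + y^2   s.t.   g(x, y) = 1 - x - y <= 0,
# at the minimizer x* = (1/2, 1/2) with lambda_0 = 1, lambda_1 = 1.

def grad_f(x, y):
    return (2 * x, 2 * y)

def grad_g(x, y):
    return (-1.0, -1.0)

def g(x, y):
    return 1.0 - x - y

x_star = (0.5, 0.5)
lam0, lam1 = 1.0, 1.0

# (9.2): lambda_0 * grad f(x*) + lambda_1 * grad g(x*) = 0
residual = tuple(lam0 * gf + lam1 * gg
                 for gf, gg in zip(grad_f(*x_star), grad_g(*x_star)))
assert residual == (0.0, 0.0)

# (9.3): lambda_1 >= 0, g(x*) <= 0, lambda_1 * g(x*) = 0  (complementarity)
assert lam1 >= 0 and g(*x_star) <= 0 and lam1 * g(*x_star) == 0
print("FJ conditions hold at x* with (lambda_0, lambda_1) = (1, 1)")
```

Since \( \lambda_1 > 0 \) and \( g(x^*) = 0 \), strict complementarity holds at this point.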
1068_(GTM227)Combinatorial Commutative Algebra
Definition 3.10
Definition 3.10 A planar map is a graph \( G \) together with an embedding of \( G \) into a surface homeomorphic to the plane \( {\mathbb{R}}^{2} \) . That being said, we refer to the planar map simply as \( G \) if its embedding is given. We require the surface to be homeomorphic rather than equal to \( {\mathbb{R}}^{2} \) to encourage the drawing of planar maps in staircase surfaces. Indeed, the proof of Proposition 3.9 endows the Buchberger graph of a generic monomial ideal with a canonical embedding in its staircase surface. Theorem 3.11 says that planar maps encode minimal free resolutions insofar as they organize into single diagrams the syzygies and their interrelations. The free resolution given by a planar map \( G \) with \( v \) vertices, \( e \) edges, and \( f \) faces, all labeled by monomials, has the form \[ {\mathcal{F}}_{G} : \;0 \leftarrow S \leftarrow {S}^{v}\overset{{\partial }_{E}}{ \leftarrow }{S}^{e}\overset{{\partial }_{F}}{ \leftarrow }{S}^{f} \leftarrow 0. \] (3.2) If we express the differentials by monomial matrices as in Chapter 1, then the scalar entries are precisely those coming from the usual differentials on a planar map (after choosing orientations on the edges), but with monomial row and column labels. For instance, the matrix for \( {\partial }_{F} \) has the edge monomials for row labels and the face monomials for column labels, while its scalar entries take each face to the signed sum of the oriented edges on its boundary. To express the differentials in ungraded notation, we write \( {m}_{ij} = \operatorname{lcm}\left( {{m}_{i},{m}_{j}}\right) \) for each edge \( \{ i, j\} \) of \( G \), and \( {m}_{R} \) for the least common multiple of the monomial labels on the edges in each region \( R \) . 
Then \[ {\partial }_{E}\left( {\mathbf{e}}_{ij}\right) = \frac{{m}_{ij}}{{m}_{j}} \cdot {\mathbf{e}}_{j} - \frac{{m}_{ij}}{{m}_{i}} \cdot {\mathbf{e}}_{i} \] if an edge oriented toward \( {m}_{j} \) joins the vertices labeled \( {m}_{i} \) and \( {m}_{j} \), whereas \[ {\partial }_{F}\left( {\mathbf{e}}_{R}\right) = \mathop{\sum }\limits_{\substack{\text{ edges } \\ {\{ i, j\} \subset R} }} \pm \frac{{m}_{R}}{{m}_{ij}} \cdot {\mathbf{e}}_{ij} \] for each region \( R \), where the sign is positive precisely when the edge \( \{ i, j\} \) is oriented counterclockwise around \( R \) . The construction of these differentials in arbitrary dimensions is the subject of Chapter 4. The rigorous \( n \) - dimensional proof of the next result will follow even later, in Theorem 6.13. Theorem 3.11 Given a strongly generic monomial ideal \( I \) in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \), the planar map \( \operatorname{Buch}\left( I\right) \) provides a minimal free resolution of \( I \) . Sketch of proof. Begin by throwing high powers \( {x}^{a},{y}^{b} \), and \( {z}^{c} \) into \( I \) . What results is still strongly generic, but now artinian. If we are given a minimal free resolution of this new ideal by a planar map, then deleting all edges and regions incident to one or more of \( \left\{ {{x}^{a},{y}^{b},{z}^{c}}\right\} \) leaves a minimal free resolution of \( I \) . Indeed, these deletions have no effect on the \( {\mathbb{N}}^{3} \) -graded components of degree \( \preccurlyeq \left( {a - 1, b - 1, c - 1}\right) \), which remain exact, and \( I \) has no syzygies in any other degree. Therefore we assume that \( I \) is artinian. Each triangle in \( \operatorname{Buch}\left( I\right) \) contains a unique "mountain peak" in the surface of the staircase, located at the outside corner \( \operatorname{lcm}\left( {m,{m}^{\prime },{m}^{\prime \prime }}\right) \) . 
That peak is surrounded by three "mountain passes" \( \operatorname{lcm}\left( {m,{m}^{\prime }}\right) ,\operatorname{lcm}\left( {m,{m}^{\prime \prime }}\right) \) , and \( \operatorname{lcm}\left( {{m}^{\prime },{m}^{\prime \prime }}\right) \), each of which represents a minimal first syzygy of \( I \) by Theorem 1.34 (check that the simplicial complex \( {K}^{\mathbf{b}}\left( I\right) \) from Definition 1.33 is disconnected precisely when a mountain pass sits in degree b). The mountain peak represents a second syzygy relating these three first syzygies by the identity in the proof of Proposition 3.5, and all minimal second syzygies arise this way by Theorem 1.34. Next we show how to approximate arbitrary monomial ideals by strongly generic ones. The idea is to add small rational numbers to the exponents on the generators of \( I \) without reversing any strict inequalities between the degrees in \( x, y \), or \( z \) of any two generators. This process occurs inside a polynomial ring \( {S}_{\epsilon } = \mathbb{k}\left\lbrack {{x}^{\epsilon },{y}^{\epsilon },{z}^{\epsilon }}\right\rbrack \), where \( \epsilon = 1/N \) for some large positive integer \( N \), which contains \( S = \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) as a subring. Equalities among \( x \) -, \( y \) -, and \( z \) -degrees can turn into strict inequalities potentially going either way. Definition 3.12 Let \( I = \left\langle {{m}_{1},\ldots ,{m}_{r}}\right\rangle \) and \( {I}_{\epsilon } = \left\langle {{m}_{\epsilon ,1},\ldots ,{m}_{\epsilon, r}}\right\rangle \) be monomial ideals in \( S \) and \( {S}_{\epsilon } \), respectively. Call \( {I}_{\epsilon } \) a strong deformation of \( I \) if the partial order on \( \{ 1,\ldots, r\} \) by \( x \) -degree of the \( {m}_{\epsilon, i} \) refines the partial order by \( x \) -degree of the \( {m}_{i} \), and the same holds for \( y \) and \( z \) . We also say that \( I \) is a specialization of \( {I}_{\epsilon } \) . 
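Returning to the differentials \( \partial_E \) and \( \partial_F \) defined before Theorem 3.11, a small computational sanity check (my own sketch, with monomials encoded as exponent vectors) confirms that they compose to zero on the Buchberger graph of the artinian ideal \( \langle x^2, y^2, z^2\rangle \), a single triangle on the three generators.

```python
# Sketch (my own illustration): for I = <x^2, y^2, z^2>, Buch(I) is one
# triangle. Verify d_E(d_F(e_R)) = 0 with the formulas for d_E and d_F,
# encoding each monomial as its exponent vector.
from collections import defaultdict

def lcm(a, b):
    return tuple(max(ai, bi) for ai, bi in zip(a, b))

def quot(a, b):   # exponent vector of a / b, assuming b divides a
    return tuple(ai - bi for ai, bi in zip(a, b))

def mul(a, b):    # monomial product = exponentwise sum
    return tuple(ai + bi for ai, bi in zip(a, b))

m = {1: (2, 0, 0), 2: (0, 2, 0), 3: (0, 0, 2)}     # generators x^2, y^2, z^2
edges = [(1, 2), (1, 3), (2, 3)]                   # edges oriented i -> j
m_edge = {e: lcm(m[e[0]], m[e[1]]) for e in edges}
m_R = (2, 2, 2)                                    # label of the one region

def d_E(i, j):
    """d_E(e_ij) = (m_ij/m_j) e_j - (m_ij/m_i) e_i, as {vertex: (sign, monomial)}."""
    mij = m_edge[(i, j)]
    return {j: (+1, quot(mij, m[j])), i: (-1, quot(mij, m[i]))}

# d_F(e_R): counterclockwise boundary 1 -> 2 -> 3 -> 1 gives e_12 - e_13 + e_23
d_F = {(1, 2): +1, (1, 3): -1, (2, 3): +1}

total = defaultdict(int)
for (i, j), sgn in d_F.items():
    outer = quot(m_R, m_edge[(i, j)])              # coefficient m_R / m_ij
    for v, (s, inner) in d_E(i, j).items():
        total[(v, mul(outer, inner))] += sgn * s   # collect like terms

assert all(c == 0 for c in total.values())         # d_E after d_F vanishes
print("d_E . d_F = 0 on the Buchberger triangle of <x^2, y^2, z^2>")
```

Each composite term is \( \pm (m_R/m_v)\,\mathbf{e}_v \), and the three pairs cancel exactly, as the resolution (3.2) requires.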
Constructing a strong deformation \( {I}_{\epsilon } \) of any given monomial ideal \( I \) is easy: simply replace each generator \( {m}_{i} \) by a nearby generator \( {m}_{\epsilon, i} \) in such a way that \( \mathop{\lim }\limits_{{\epsilon \rightarrow 0}}{m}_{\epsilon, i} = {m}_{i} \) . The ideal \( {I}_{\epsilon } \) need not be strongly generic; however, it will be if the strong deformation is chosen randomly. Example 3.13 The ideal in \( {S}_{\epsilon } \) given by \[ \left\langle {{x}^{3},{x}^{2 + \epsilon }{y}^{1 + \epsilon },{x}^{2}{z}^{1},{x}^{1 + {2\epsilon }}{y}^{2},{x}^{1 + \epsilon }{y}^{1}{z}^{1 + \epsilon },{x}^{1}{z}^{2 + \epsilon },{y}^{3},{y}^{2 - \epsilon }{z}^{1 + {2\epsilon }},{y}^{1 + {2\epsilon }}{z}^{2},{z}^{3}}\right\rangle \] is one possible strongly generic deformation of the ideal \( \langle x, y, z{\rangle }^{3} \) in \( S \) . \( \diamond \) Proposition 3.14 Suppose \( I \) is a monomial ideal in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) and \( {I}_{\epsilon } \) is a strong deformation resolved by a planar map \( {G}_{\epsilon } \) . Specializing the vertices (hence also the edges and regions) of \( {G}_{\epsilon } \) yields a planar map resolution of \( I \) . Proof. Consider the minimal free resolution \( {\mathcal{F}}_{{G}_{\epsilon }} \) determined by the triangulation \( {G}_{\epsilon } \) as in (3.2). The specialization \( G \) of the labeled planar map \( {G}_{\epsilon } \) still gives a complex \( {\mathcal{F}}_{G} \) of free modules over \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \), and we need to demonstrate its exactness. Considering any fixed \( {\mathbb{N}}^{3} \) -degree \( \omega = \left( {a, b, c}\right) \) , we must demonstrate exactness of the complex of vector spaces over \( \mathbb{k} \) in the degree \( \omega \) part of \( {\mathcal{F}}_{G} \) . 
Define \( {\omega }_{\epsilon } \) as the exponent vector on \[ \operatorname{lcm}\left( {{m}_{\epsilon, i} \mid {m}_{i}\text{ divides }{x}^{a}{y}^{b}{z}^{c}}\right) . \] The summands contributing to the degree \( \omega \) part of \( {\mathcal{F}}_{G} \) are exactly those summands of \( {\mathcal{F}}_{{G}_{\epsilon }} \) contributing to its degree \( {\omega }_{\epsilon } \) part, which is exact. In the next section we will demonstrate how any planar map resolution can be made minimal by successively removing edges and joining adjacent regions. For now, we derive a sharp complexity bound from Proposition 3.14 using Euler’s formula, which states that \( v - e + f = 1 \) for any connected planar map with \( v \) vertices, \( e \) edges, and \( f \) bounded faces [Wes01, Theorem 6.1.21], plus its consequences for simple planar graphs with at least three vertices: \( e \leq {3v} - 6 \) [Wes01, Theorem 6.1.23] and \( f \leq {2v} - 5 \) . Corollary 3.15 An ideal \( I \) generated by \( r \geq 3 \) monomials in \( \mathbb{k}\left\lbrack {x, y, z}\right\rbrack \) has at most \( {3r} - 6 \) minimal first syzygies and \( {2r} - 5 \) minimal second syzygies. These Betti number bounds are attained if \( I \) is artinian, strongly generic, and \( {xyz} \) divides all but three minimal generators. Proof. Choose a strong deformation \( {I}_{\epsilon } \) of \( I \) that is strongly generic. Proposition 3.14 implies that \( I \) has Betti numbers no larger than those of \( {I}_{\epsilon } \), so we need only prove the first sentence of the theorem for \( {I}_{\epsilon } \) . Theorem 3.11 implies that \( {I}_{\epsilon } \) is resolved by a planar map, so Euler’s formula and its consequences give the desired result. For the second statement, let \( {x}^{a},{y}^{b} \), and \( {z}^{c} \) be the three special generators of \( I \) . 
Every other minimal generator \( {x}^{i}{y}^{j}{z}^{k} \) satisfies \( i \geq 1, j \geq 1 \) , and \( k \geq 1 \), so that \( \left\{ {{x}^{a},{y}^{b}}\right\} ,\left\{ {{x}^{a},{z}^{c}}\right\} \), and \( \left\{ {{y}^{b},{z}^{c}}\right\} \) are edges in Buch \( \left( I\right) \) . By Proposition 3.9, Buch \( \left( I\right) \) is a triangulation of a triangle with \( r \) vertices such that \( r - 3 \) vertices lie in the interior. It follows from Euler’s formula and the easy equality \( {2e} = 3\left( {f + 1}\right) \) for any such triangulation that the number of edges is \( {3r} - 6 \) and the number of triangles is \( {2r} - 5 \) .
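The counting in the proof of Corollary 3.15 is elementary enough to restate computationally. This is my own sketch: Euler's formula \( v - e + f = 1 \) (with \( f \) bounded faces) together with \( 2e = 3(f+1) \) for a triangulation of a triangle forces \( e = 3r - 6 \) and \( f = 2r - 5 \), matching the bounds on first and second syzygies.

```python
# Sketch (my own restatement of the counts in Corollary 3.15): for a
# triangulation of a triangle with r vertices, solve the two linear
# relations and confirm they are consistent for every r >= 3.

def syzygy_bounds(r):
    assert r >= 3
    e = 3 * r - 6                    # maximal number of minimal first syzygies
    f = 2 * r - 5                    # maximal number of minimal second syzygies
    assert r - e + f == 1            # Euler's formula, f bounded faces
    assert 2 * e == 3 * (f + 1)      # all faces (plus the outer one) are triangles
    return e, f

for r in range(3, 50):
    syzygy_bounds(r)
print(syzygy_bounds(10))             # r = 10 generators: at most (24, 15)
```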
1105_(GTM260)Monomial.Ideals,Jurgen.Herzog(2010)
Definition 6.1.1
Definition 6.1.1. The numerical function \[ H\left( {M, - }\right) : \mathbb{Z} \rightarrow \mathbb{Z},\;i \mapsto H\left( {M, i}\right) \mathrel{\text{:=}} {\dim }_{K}{M}_{i} \] is called the Hilbert function of \( M \) . The formal Laurent series \( {H}_{M}\left( t\right) = \) \( \mathop{\sum }\limits_{{i \in \mathbb{Z}}}H\left( {M, i}\right) {t}^{i} \) is called the Hilbert series of \( M \) . Example 6.1.2. Let \( S = K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) be the polynomial ring in \( n \) variables. The monomials of degree \( i \) in \( S \) form a \( K \) -basis of \( {S}_{i} \) . It follows that \[ H\left( {S, i}\right) = \left( \begin{matrix} n + i - 1 \\ i \end{matrix}\right) = \left( \begin{matrix} n + i - 1 \\ n - 1 \end{matrix}\right) \;\text{ and }\;{H}_{S}\left( t\right) = \frac{1}{{\left( 1 - t\right) }^{n}}. \] Note that \( H\left( {S, - }\right) \) is a polynomial function of degree \( n - 1 \) and that \( {H}_{S}\left( t\right) \) is a rational function with exactly one pole at \( t = 1 \) . For an arbitrary graded \( R \) -module the Hilbert function and the Hilbert series are of the same nature as in the special case described in the example. Theorem 6.1.3 (Hilbert). Let \( K \) be a field, \( R \) a standard graded \( K \) -algebra and \( M \) a nonzero finitely generated graded \( R \) -module of dimension \( d \) . Then (a) there exists a Laurent-polynomial \( {Q}_{M}\left( t\right) \in \mathbb{Z}\left\lbrack {t,{t}^{-1}}\right\rbrack \) with \( {Q}_{M}\left( 1\right) > 0 \) such that \[ {H}_{M}\left( t\right) = \frac{{Q}_{M}\left( t\right) }{{\left( 1 - t\right) }^{d}} \] (b) there exists a polynomial \( {P}_{M}\left( x\right) \in \mathbb{Q}\left\lbrack x\right\rbrack \) of degree \( d - 1 \) (called the Hilbert polynomial of \( M \) ) such that \[ H\left( {M, i}\right) = {P}_{M}\left( i\right) \;\text{ for all }\;i > \deg {Q}_{M} - d. \] Proof. (a) After a base field extension we may assume that \( K \) is infinite. 
We proceed by induction on \( \dim M \) . If \( \dim M = 0 \), then \( {M}_{i} = 0 \) for \( i \gg 0 \), and the assertion is trivial. Suppose now that \( d = \dim M > 0 \) . We choose \( y \in {R}_{1} \) such that \( y \in \) \( \mathfrak{m} \smallsetminus \mathop{\bigcup }\limits_{{P \in \operatorname{Ass}\left( M\right) \smallsetminus \{ \mathfrak{m}\} }}P \), where \( \mathfrak{m} = {\bigoplus }_{i > 0}{R}_{i} \) . Then \( y \) is almost regular on \( M \) (cf. the proof of Lemma 4.3.1), and \( \dim M/{yM} = d - 1 \) since \( y \) does not belong to any minimal prime ideal of \( M \) . The exact sequence \[ 0 \rightarrow N \rightarrow M\left( {-1}\right) \overset{y}{ \rightarrow }M \rightarrow M/{yM} \rightarrow 0 \] with \( N = \left( {0{ : }_{M}y}\right) \left( {-1}\right) \) yields the identity \[ {H}_{M/{yM}}\left( t\right) - {H}_{M}\left( t\right) + t{H}_{M}\left( t\right) - {H}_{N}\left( t\right) = 0. \] In other words we have \[ {H}_{M}\left( t\right) = \frac{{H}_{M/{yM}}\left( t\right) - {H}_{N}\left( t\right) }{1 - t}. \] By our induction hypothesis there exists a Laurent polynomial \( {Q}_{M/{yM}}\left( t\right) \) with \( {Q}_{M/{yM}}\left( 1\right) > 0 \) such that \[ {H}_{M/{yM}}\left( t\right) = \frac{{Q}_{M/{yM}}\left( t\right) }{{\left( 1 - t\right) }^{d - 1}}. \] Thus we see that \[ {H}_{M}\left( t\right) = \frac{{Q}_{M/{yM}}\left( t\right) /{\left( 1 - t\right) }^{d - 1} - {H}_{N}\left( t\right) }{1 - t} = \frac{{Q}_{M}\left( t\right) }{{\left( 1 - t\right) }^{d}} \] with \[ {Q}_{M}\left( t\right) = {Q}_{M/{yM}}\left( t\right) - {H}_{N}\left( t\right) {\left( 1 - t\right) }^{d - 1}. \] (6.1) Since \( {H}_{N}\left( t\right) \) is a Laurent polynomial it follows from (6.1) that \( {Q}_{M}\left( t\right) \) is a Laurent polynomial. Equation (6.1) also implies that \( {Q}_{M}\left( 1\right) = {Q}_{M/{yM}}\left( 1\right) > 0 \) if \( d > 1 \) . 
Thus it remains to be shown that \( {Q}_{M}\left( 1\right) > 0 \) if \( d = 1 \), or equivalently that \( \ell \left( {0{ : }_{M}y}\right) < \ell \left( {M/{yM}}\right) \) if \( \dim M = 1 \) . Observe that \( M \) is a finitely generated \( A = K\left\lbrack y\right\rbrack \) -module, since \( M/{yM} \) has finite length. Since \( A \) is a principal ideal domain, and since \( M \) is a graded \( A \) -module of dimension 1, it follows that \( M = {A}^{r} \oplus {\bigoplus }_{i = 1}^{s}A/\left( {y}^{{a}_{i}}\right) \) with \( r > 0 \) . Thus we see that \( \ell \left( {M/{yM}}\right) = r + s > s = \ell \left( {0{ : }_{M}y}\right) \) . (b) Let \( {Q}_{M}\left( t\right) = \mathop{\sum }\limits_{{i = r}}^{s}{h}_{i}{t}^{i} \) . Then \[ {H}_{M}\left( t\right) = \left( {\mathop{\sum }\limits_{{i = r}}^{s}{h}_{i}{t}^{i}}\right) /{\left( 1 - t\right) }^{d} = \mathop{\sum }\limits_{{i = r}}^{s}{h}_{i}{t}^{i}\mathop{\sum }\limits_{{j \geq 0}}\left( \begin{matrix} d + j - 1 \\ d - 1 \end{matrix}\right) {t}^{j}. \] By using the convention that \( \left( \begin{array}{l} a \\ i \end{array}\right) = 0 \) for \( a < i \), we deduce from the preceding equation that \[ H\left( {M, i}\right) = \mathop{\sum }\limits_{{j = r}}^{s}{h}_{j}\left( \begin{matrix} d + \left( {i - j}\right) - 1 \\ d - 1 \end{matrix}\right) . \] (6.2) In particular, if we set \( {P}_{M}\left( x\right) = \mathop{\sum }\limits_{{j = r}}^{s}{h}_{j}\left( \begin{matrix} x + d - j - 1 \\ d - 1 \end{matrix}\right) \), then \( {P}_{M}\left( x\right) \) is a polynomial of degree \( d - 1 \) with \( H\left( {M, i}\right) = {P}_{M}\left( i\right) \) for \( i > s - d \) . Theorem 6.1.3 implies that the Krull dimension \( d \) of \( M \) is the pole order of the rational function \( {H}_{M}\left( t\right) \) at \( t = 1 \) . The multiplicity \( e\left( M\right) \) of \( M \) is defined to be the positive number \( {Q}_{M}\left( 1\right) \) . It follows from (6.2) that \( e\left( M\right) /\left( {d - 1}\right) \) ! 
is the leading coefficient of the Hilbert polynomial \( {P}_{M}\left( x\right) \) of \( M \) . The \( a \) -invariant is the degree of the Hilbert series \( {H}_{M}\left( t\right) \), that is, the number \( \deg {Q}_{M}\left( t\right) - d \) . Let \( {Q}_{M}\left( t\right) = \mathop{\sum }\limits_{{i = r}}^{s}{h}_{i}{t}^{i} \) . The coefficient vector \( \left( {{h}_{r},{h}_{r + 1},\ldots ,{h}_{s}}\right) \) of \( {Q}_{M}\left( t\right) \) is called the \( h \) -vector of \( M \) . ## 6.1.2 Hilbert functions and initial ideals Let \( K \) be a field, \( S = K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) the polynomial ring in \( n \) variables and \( I \subset S \) an ideal. For a given monomial order \( < \) there is a natural monomial \( K \) -basis of the residue class ring \( S/I \) . We denote by \( \operatorname{Mon}\left( {{\operatorname{in}}_{ < }\left( I\right) }\right) \) the set of monomials in \( {\operatorname{in}}_{ < }\left( I\right) \) . Theorem 6.1.4 (Macaulay). The set of monomials \( \operatorname{Mon}\left( S\right) \smallsetminus \operatorname{Mon}\left( {{\operatorname{in}}_{ < }\left( I\right) }\right) \) form a \( K \) -basis of \( S/I \) . Proof. Let \( \mathcal{G} = \left\{ {{g}_{1},\ldots ,{g}_{r}}\right\} \) be a Gröbner basis of \( I \), and let \( f \in S \) . Then by Lemma 2.2.3, \( f \) has a unique remainder \( {f}^{\prime } \) with respect to \( \mathcal{G} \) . The residue class of \( f \) modulo \( I \) is the same as that of \( {f}^{\prime } \), and no monomial in the support of \( {f}^{\prime } \) is divided by any of the monomials \( {\operatorname{in}}_{ < }\left( {g}_{i}\right) \) . This shows that \( \operatorname{Mon}\left( S\right) \smallsetminus \) \( \operatorname{Mon}\left( {{\operatorname{in}}_{ < }\left( I\right) }\right) \) is a system of generators of the \( K \) -vector space \( S/I \) . 
Assume there exists a set \( \left\{ {{u}_{1},\ldots ,{u}_{s}}\right\} \subset \operatorname{Mon}\left( S\right) \smallsetminus \operatorname{Mon}\left( {{\operatorname{in}}_{ < }\left( I\right) }\right) \) and \( {a}_{i} \in \) \( K \smallsetminus \{ 0\} \) such that \( h = \mathop{\sum }\limits_{{i = 1}}^{s}{a}_{i}{u}_{i} \in I \) . We may assume that \( {u}_{1} = \operatorname{in}\left( h\right) \) . Then \( {u}_{1} = {\operatorname{in}}_{ < }\left( h\right) \in \operatorname{Mon}\left( {{\operatorname{in}}_{ < }\left( I\right) }\right) \), a contradiction. As an immediate consequence we obtain the following important result Corollary 6.1.5. Let \( I \subset S \) be a graded ideal and \( < \) a monomial order on \( S \) . Then \( S/I \) and \( S/{\operatorname{in}}_{ < }\left( I\right) \) have the same Hilbert function, i.e. \( H\left( {S/I, i}\right) = \) \( H\left( {S/{\operatorname{in}}_{ < }\left( I\right), i}\right) \) for all \( i \) . We also obtain a Gröbner basis criterion. Corollary 6.1.6. Let \( \mathcal{G} = \left\{ {{g}_{1},\ldots ,{g}_{r}}\right\} \) be a homogeneous system of generators of \( I \), and let \( J = \left( {{\operatorname{in}}_{ < }\left( {g}_{1}\right) ,\ldots ,{\operatorname{in}}_{ < }\left( {g}_{r}\right) }\right) \) . Then \( \mathcal{G} \) is a Gröbner basis of \( I \) if and only if \( S/I \) and \( S/J \) have the same Hilbert function. Proof. We have \( J \subset {\operatorname{in}}_{ < }\left( I\right) \), so that \( H\left( {S/J, i}\right) \geq H\left( {S/{\operatorname{in}}_{ < }\left( I\right), i}\right) = H\left( {S/I, i}\right) \) for all \( i \) . Equality holds if and only if \( J = {\operatorname{in}}_{ < }\left( I\right) \) . ## 6.1.3 Hilbert functions and resolutions Let \( K \) be a field, \( S = K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) the polynomial ring in \( n \) variables and \( M \) a finitely generated graded \( S \) -module. 
Let \[ \mathbb{F} : 0 \rightarrow {F}_{p} \rightarrow {F}_{p - 1} \rightarrow \cdots \rightarrow {F}_{1} \rightarrow {F}_{0} \rightarrow M \rightarrow 0 \] be a graded minimal free \( S \) -resolution of \( M \) with \( {F}_{i} = {\bigoplus }_{j}S{\left( -j\right) }^{{\beta }_{ij}} \) .
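Macaulay's theorem (6.1.4) and Corollary 6.1.5 can be illustrated by direct counting. The following is my own sketch: for the monomial ideal \( I = \langle x^2, xy\rangle \) in \( S = k[x,y,z] \), the standard monomials (those outside \( I \)) are counted degree by degree, giving the Hilbert function \( H(S/I, i) \); the full ring's Hilbert function \( \binom{n+i-1}{n-1} \) from Example 6.1.2 serves as an upper bound.

```python
# Sketch (my own example): standard monomials of I = <x^2, x*y> in k[x,y,z]
# form a K-basis of S/I (Macaulay), so counting them gives H(S/I, i).
from itertools import product
from math import comb

n = 3
gens = [(2, 0, 0), (1, 1, 0)]                  # exponent vectors of x^2, x*y

def in_ideal(e):
    # e is in I iff some generator divides it (exponentwise comparison)
    return any(all(ei >= gi for ei, gi in zip(e, g)) for g in gens)

def hilbert(i):
    mons = (e for e in product(range(i + 1), repeat=n) if sum(e) == i)
    return sum(1 for e in mons if not in_ideal(e))

assert hilbert(0) == 1
assert [hilbert(i) for i in range(1, 6)] == [3, 4, 5, 6, 7]
# Example 6.1.2: H(S, i) = C(n + i - 1, n - 1) bounds H(S/I, i) from above.
assert all(comb(n + i - 1, n - 1) >= hilbert(i) for i in range(6))
print("H(S/I, i) for i = 0..5:", [hilbert(i) for i in range(6)])
```

Here \( H(S/I, i) = i + 2 \) for \( i \geq 1 \), consistent with a Hilbert series of pole order 2 at \( t = 1 \), i.e. \( \dim S/I = 2 \).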
1185_(GTM91)The Geometry of Discrete Groups
Definition 7.2.3
Definition 7.2.3. Let \( E \) be a subset of the hyperbolic plane. Then (i) \( \widetilde{E} \) denotes the closure of \( E \) relative to the hyperbolic plane; (ii) \( \bar{E} \) denotes the closure of \( E \) relative to the closed hyperbolic plane. Of course, \( \bar{E} \) is also the closure of \( E \) in \( \widehat{\mathbb{C}} \) . ## EXERCISE 7.2 1. Let \( L \) be the set of points \( x + {iy} \) in \( {H}^{2} \) where \( x = y \) . Find where \[ \inf \left\{ {\rho \left( {z, w}\right) : z \in L}\right\} \;\left( {w \in {H}^{2}}\right) \] is attained and describe this point in geometric terms. 2. Suppose that \( {x}_{1} < {x}_{2} < {x}_{3} < {x}_{4} \) . Let the semi-circle in \( {H}^{2} \) with diameter \( \left\lbrack {{x}_{1},{x}_{3}}\right\rbrack \) meet the line \( x = {x}_{2} \) at the point \( {z}_{3} \) . Similarly, let \( {z}_{4} \) be the intersection of this line and the semi-circle with diameter \( \left\lbrack {{x}_{1},{x}_{4}}\right\rbrack \) . Prove that \[ \rho \left( {{z}_{3},{z}_{4}}\right) = \frac{1}{2}\log \left\lbrack {{x}_{2},{x}_{3},{x}_{4},\infty }\right\rbrack . \] 3. Show that if \( \sigma \) is a metric on a set \( X \) then \( \tanh \sigma \) is also a metric on \( X \) . Deduce that \[ {\rho }_{0}\left( {z, w}\right) = \left| \frac{z - w}{z - \bar{w}}\right| \] is a metric on \( {H}^{2} \) . Show that \[ {\rho }_{0}\left( {u, v}\right) = {\rho }_{0}\left( {u, w}\right) + {\rho }_{0}\left( {w, v}\right) \] if and only if \( w = u \) or \( w = v \) . 4. Show that \( \left( {{H}^{2},\rho }\right) \) is complete but not compact. ## §7.3. The Geodesics We begin by defining a hyperbolic line or, more briefly, an h-line to be the intersection of the hyperbolic plane with a Euclidean circle or straight line which is orthogonal to the circle at infinity. With this definition, the following facts are easily established. (1) There is a unique h-line through any two distinct points of the hyperbolic plane. 
(2) Two distinct h-lines intersect in at most one point in the hyperbolic plane. (3) The reflection in an h-line is a \( \rho \) -isometry (see Section 3.3). (4) Given any two h-lines \( {L}_{1} \) and \( {L}_{2} \), there is a \( \rho \) -isometry \( g \) such that \( g\left( {L}_{1}\right) \) \( = {L}_{2} \) (see the proof of Theorem 7.2.1). Given any \( w \) in \( {H}^{2} \), it is clear that \[ \left\{ {z \in {H}^{2} : \left| z\right| = \left| w\right| }\right\} \] is the unique h-line which contains \( w \) and which is orthogonal to the positive imaginary axis (an h-line). As the isometry in (4) can be taken to be a Möbius transformation we obtain: (5) given any h-line and any point \( w \), there is a unique h-line through \( w \) and orthogonal to \( L \) . Without going into the details, the reader should be aware that an essential feature of axiomatic geometry is the notion of "between" on a line. In our case, this notion can be described in terms of the metric. Given two distinct points \( z \) and \( w \) on an h-line \( L \), the set \( L - \{ z, w\} \) has three components exactly one of which has a compact closure (relative to the hyperbolic plane). This component is the open segment \( \left( {z, w}\right) \) and \( \zeta \) is between \( z \) and \( w \) if and only if \( \zeta \in \left( {z, w}\right) \) . The closed segment \( \left\lbrack {z, w}\right\rbrack \) and segments \( \left\lbrack {z, w),(z, w}\right\rbrack \) are defined in the obvious way. The discussion preceding (7.2.3) shows that a curve \( \gamma \) joining ip to iq satisfies \[ \parallel \gamma \parallel = \rho \left( {{ip},{iq}}\right) \] if and only if \( \gamma \) is a parametrization of \( \left\lbrack {{ip},{iq}}\right\rbrack \) as a simple curve. Clearly, this can be phrased in an invariant form as follows. Theorem 7.3.1. Let \( z \) and \( w \) be any points in the hyperbolic plane. 
A curve \( \gamma \) joining \( z \) to \( w \) satisfies \[ \parallel \gamma \parallel = \rho \left( {z, w}\right) \] if and only if \( \gamma \) is a parametrization of \( \left\lbrack {z, w}\right\rbrack \) as a simple curve. It is for this reason that we refer to h-lines as geodesics (that is, curves of shortest length). Now consider any three points \( z, w \) and \( \zeta \) . It is clear from the special case (7.2.3) that if \( \zeta \) is between \( z \) and \( w \), then \[ \rho \left( {z, w}\right) = \rho \left( {z,\zeta }\right) + \rho \left( {\zeta, w}\right) . \] Equally clearly, if \( \zeta \) is not between \( z \) and \( w \) then the curve \( \gamma \) consisting of the segments \( \left\lbrack {z,\zeta }\right\rbrack \) and \( \left\lbrack {\zeta, w}\right\rbrack \) satisfies (by Theorem 7.3.1) \[ \parallel \gamma \parallel > \rho \left( {z, w}\right) \text{.} \] Thus we obtain the next result. Theorem 7.3.2. Let \( z \) and \( w \) be distinct points in the hyperbolic plane. Then \[ \rho \left( {z, w}\right) = \rho \left( {z,\zeta }\right) + \rho \left( {\zeta, w}\right) \] if and only if \( \zeta \in \left\lbrack {z, w}\right\rbrack \) . ![32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_148_0.jpg](images/32ff4eba-fdcc-4eb0-a03c-f403959f1f6d_148_0.jpg) Figure 7.3.1 We end this section with more terminology. First, the points \( {z}_{1},{z}_{2},\ldots \) are collinear if they lie on a single geodesic. Each geodesic has two end-points, each on the circle at infinity. It is natural to extend the notation for a segment so as to include geodesics: thus \( \left( {\alpha ,\beta }\right) \) denotes the geodesic segment with end-points \( \alpha \) and \( \beta \) even if these are on the circle at infinity.
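Theorem 7.3.2 can be checked numerically on the positive imaginary axis, where (7.2.3) gives \( \rho(ip, iq) = \left|\log(q/p)\right| \). The sketch below is my own; it uses the standard \( H^2 \) distance formula \( \cosh \rho(z,w) = 1 + |z-w|^2/(2\operatorname{Im} z \operatorname{Im} w) \), which this excerpt does not derive.

```python
# Sketch (my own check, using the standard distance formula for H^2, an
# assumption not derived in this excerpt):
#   cosh rho(z, w) = 1 + |z - w|^2 / (2 Im z Im w).
import math

def rho(z, w):
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

z, zeta, w = 2j, 3j, 5j                   # zeta between z and w on an h-line
lhs = rho(z, w)
rhs = rho(z, zeta) + rho(zeta, w)
assert abs(lhs - math.log(5 / 2)) < 1e-12   # agrees with (7.2.3)
assert abs(lhs - rhs) < 1e-12               # additivity along the geodesic

off = 3 + 3j                                 # a point not on [z, w]
assert rho(z, off) + rho(off, w) > rho(z, w)  # strict detour inequality
print("rho(2i, 5i) =", lhs)
```

The strict inequality for the off-segment point is exactly the "only if" half of Theorem 7.3.2.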
A ray from \( z \) is a segment \( \lbrack z,\alpha ) \) where \( \alpha \) lies on the circle at infinity: each geodesic \( \left( {\alpha ,\beta }\right) \) through \( z \) determines exactly two rays from \( z \), namely \( \lbrack z,\alpha ) \) and \( \lbrack z,\beta ) \) . Definition 7.3.3. Let \( {L}_{1} \) and \( {L}_{2} \) be distinct geodesics. We say that \( {L}_{1} \) and \( {L}_{2} \) are parallel if and only if they have exactly one end-point in common. If \( {L}_{1} \) and \( {L}_{2} \) have no end-points in common, then they are intersecting when \( {L}_{1} \cap {L}_{2} \neq \varnothing \) and disjoint when \( {L}_{1} \cap {L}_{2} = \varnothing \) . Warning. This terminology is not standard and the terms are illustrated in the model \( \Delta \) in Figure 7.3.1. Much of the geometry is based on a discussion of these three mutually exclusive possibilities (parallel, intersecting and disjoint) and for this reason we prefer a particularly descriptive terminology. ## EXERCISE 7.3 1. Let \( w = u + {iv},{w}^{\prime } = {iv} \) and \( z = {ri} \) be points in \( {H}^{2} \) . Prove that \[ \rho \left( {w, z}\right) \geq \rho \left( {{w}^{\prime }, z}\right) \] with equality if and only if \( w = {w}^{\prime } \) . Deduce Theorem 7.3.2. ## §7.4. The Isometries The objective here is to identify all isometries of the hyperbolic plane. Let \( z, w \) and \( \zeta \) be distinct points in \( {H}^{2} \) with \( \zeta \) between \( z \) and \( w \) . It is an immediate consequence of Theorem 7.3.2 that for any isometry \( \phi \), the point \( \phi \left( \zeta \right) \) is between \( \phi \left( z\right) \) and \( \phi \left( w\right) \) . Thus \( \phi \) maps the segment \( \left\lbrack {z, w}\right\rbrack \) onto the segment \( \left\lbrack {\phi \left( z\right) ,\phi \left( w\right) }\right\rbrack \) : because of this, \( \phi \) maps h-lines to h-lines. 
Given any isometry \( \phi \), there is an isometry \[ g\left( z\right) = \frac{{az} + b}{{cz} + d}\;\left( {{ad} - {bc} > 0}\right) , \] such that \( {g\phi } \) leaves the positive imaginary axis \( L \) invariant (simply choose \( g \) to map \( \phi \left( L\right) \) to \( L \) ). By applying the isometries \( z \rightarrow {kz}\left( {k > 0}\right) \) and \( z \rightarrow - 1/z \) as necessary, we may assume that \( {g\phi } \) fixes \( i \) and leaves invariant the rays \( \left( {i,\infty }\right) ,\left( {0, i}\right) \) . It is now an immediate consequence of (7.2.3) that \( {g\phi } \) fixes each point of \( L \) . Now select any \( z \) in \( {H}^{2} \) and write \[ z = x + {iy},\;{g\phi }\left( z\right) = u + {iv}. \] For all positive \( t \) , \[ \rho \left( {z,{it}}\right) = \rho \left( {{g\phi }\left( z\right) ,{g\phi }\left( {it}\right) }\right) = \rho \left( {u + {iv},{it}}\right) \] and so, by Theorem 7.2.1(iii), \[ \left\lbrack {{x}^{2} + {\left( y - t\right) }^{2}}\right\rbrack v = \left\lbrack {{u}^{2} + {\left( v - t\right) }^{2}}\right\rbrack y. \] As this holds for all positive \( t \) we have \( y = v \) and \( {x}^{2} = {u}^{2} \) : thus \[ {g\phi }\left( z\right) = z\text{ or } - \bar{z}. \] A straightforward continuity argument (isometries are necessarily continuous) shows that one of these equations holds for all \( z \) in \( {H}^{2} \) : for example, the set of \( z \) in the open first quadrant with \( {g\phi }\left( z\right) = z \) is both open and closed in that quadrant. This proves the next result. Theorem 7.4.1. The group of isometries of \( \left( {{H}^{2},\rho }\right) \) is precisely the group of maps of the form \[ z \mapsto \frac{{az} + b}{{cz} + d},\;z \mapsto \frac{a\left( {-\bar{z}}\right) + b}{c\left( {-\bar{z}}\right) + d}, \] where \( a, b, c \) and \( d \) are real and \( {ad} - {bc} > 0 \) . Further, the group of isometries is generated by reflections in \( h \) -lines. 
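The invariance of \( \rho \) under the Möbius maps of Theorem 7.4.1 can be spot-checked numerically. A sketch, with \( \rho \) computed from the cosh form of Theorem 7.2.1(iii); the helper names and sample points are mine:

```python
import math

def rho(z, w):
    # cosh rho(z, w) = 1 + |z - w|^2 / (2 Im z Im w)  (Theorem 7.2.1(iii))
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

def mobius(a, b, c, d):
    """z -> (az + b)/(cz + d) with a, b, c, d real and ad - bc > 0,
    which maps the upper half-plane onto itself."""
    assert a * d - b * c > 0
    return lambda z: (a * z + b) / (c * z + d)

g = mobius(2.0, 1.0, 1.0, 3.0)   # ad - bc = 5 > 0
pts = [1j, 2j, 1 + 1j, -2 + 0.5j]
for z in pts:
    for w in pts:
        if z != w:
            # g preserves hyperbolic distance, up to rounding error.
            assert abs(rho(g(z), g(w)) - rho(z, w)) < 1e-9
```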
A similar development holds for the model \( \Delta \) : here, the isometries are \[ z \mapsto \frac{{az} + \bar{c}}{{cz} + \bar{a}},\;z \mapsto \frac{a\bar{z} + \bar{c}}{c\bar{z} + \bar{a}}, \] where \( {\left| a\right| }^{2} - {\left| c\right| }^{2} = 1 \) . Note that if \[ g\left( z\right) = \frac{{az} + \bar{c}}{{cz} + \bar{a}}\;{\left| a\right| }^{2} - {\left| c\right| }^{2} = 1, \] then from (7.2.4) we obtain the useful expressions \[ \left| c\right|
## 113_Topological Groups: Definition 6.26
Definition 6.26. A set \( A \subseteq \omega \) is simple if \( A \) is r.e., \( \omega \sim A \) is infinite, and \( B \cap A \neq 0 \) whenever \( B \) is an infinite r.e. set. Theorem 6.27. A simple set is neither recursive nor creative. Proof. If \( A \) is simple and recursive, then \( \omega \sim A \) is an infinite r.e. set and \( A \cap \left( {\omega \sim A}\right) = 0 \), contradiction. If \( A \) is simple and creative, by 6.20 choose \( B \) infinite recursive such that \( B \subseteq \omega \sim A \) . Contradiction. Theorem 6.28. Simple sets exist. Proof. Let \( g \) be a recursive function universal for unary primitive recursive functions (see Lemma 3.5). For any \( e \in \omega \) let \[ {fe} \simeq {\left( \mu y\left\lbrack g\left( {e,{\left( y\right) }_{0}}\right) = {\left( y\right) }_{1}\text{ and }{\left( y\right) }_{1} > {2e}\right\rbrack \right) }_{1}. \] Thus \( f \) is partial recursive. For each \( e \in \omega \) let \( {\psi }_{e}x = g\left( {e, x}\right) \) for all \( x \in \omega \) . Clearly for any \( e \in \omega \) , (1) \[ \text{if }e \in \operatorname{Dmn}f\text{, then }{fe} \in \operatorname{Rng}{\psi }_{e}\text{ and }{fe} > {2e}; \] (2) if \( \operatorname{Rng}{\psi }_{e} \) is infinite then \( e \in \operatorname{Dmn}f \) . Now \( \operatorname{Rng}f \) is simple. For, it is obviously r.e. Suppose \( B \) is any infinite r.e. set. By choice of \( g \), choose \( e \in \omega \) so that \( \operatorname{Rng}{\psi }_{e} = B \) . By (2) and (1), \( {fe} \in \operatorname{Rng}{\psi }_{e} \) . Thus \( B \cap \operatorname{Rng}f \neq 0 \) . Finally, to show that \( \omega \sim \operatorname{Rng}f \) is infinite, note (3) \[ \text{if }n \in \omega \text{, then }{2n} \cap \operatorname{Rng}f \subseteq {f}^{ * }n. \] For, let \( i \in {2n} \cap \operatorname{Rng}f \) . Say \( i = {fj} \) . By (1), \( {2j} < {fj} \), so \( {2j} < i < {2n} \) . Thus \( j < n \), so \( i \in {f}^{ * }n \) . 
Since (3) holds, \( \left| {{2n} \cap \operatorname{Rng}f}\right| \leq n \), hence \( \left| {{2n} \sim \operatorname{Rng}f}\right| \geq n \), for any \( n \in \omega \) . Thus \( \omega \sim \operatorname{Rng}f \) is infinite. ## BIBLIOGRAPHY 1. Malcev, A. I. Algorithms and Recursive Functions. Groningen: Wolters-Noordhoff (1970). 2. Rogers, H. Theory of Recursive Functions and Effective Computability. New York: McGraw-Hill (1967). 3. Smullyan, R. M. Theory of Formal Systems. Princeton: Princeton University Press (1961). ## EXERCISES 6.29. Let \( f : \omega \rightarrow \omega \) . Then the following conditions are equivalent: (1) \( f \) is recursive; (2) \( \{ \left( {x,{fx}}\right) : x \in \omega \} \) is an r.e. relation; (3) \( \{ \left( {x,{fx}}\right) : x \in \omega \} \) is a recursive relation. 6.30. Prove that the class of r.e. sets is closed under union and intersection using the argument following 6.8, but rigorously. 6.31. Show that if \( A \) is a \( {\sum }_{n} \) -set, \( n > 0 \), and \( f \) is partial recursive, then \( {f}^{ * }A \) is \( {\sum }_{n} \) . 6.32. If \( A \) and \( B \) are r.e. sets, then there exist r.e. sets \( C, D \) such that \( C \subseteq A \) , \( D \subseteq B, C \cup D = A \cup B \), and \( C \cap D = 0 \) . 6.33. Suppose that \( f \) and \( g \) are unary recursive functions, \( g \) is one-one, \( \operatorname{Rng}g \) is recursive, and \( \forall x\left( {{fx} \geq {gx}}\right) \) . Show that \( \operatorname{Rng}f \) is recursive. 6.34. For each of the following determine if the set in question is recursive, r.e., or has an r.e. complement: (1) \( \{ x \) : there are at least \( x \) consecutive 7’s in the decimal representation of \( \pi \} \) ; (2) \( \{ x \) : there is a run of exactly \( x \) consecutive 7’s in the decimal representation of \( \pi \} \) ; (3) \( \left\{ {x : {\varphi }_{x}^{1}\text{ is total}}\right\} \) ; (4) \( \left\{ {x : \operatorname{Dmn}{\varphi }_{x}^{1}\text{ is recursive}}\right\} \) . 6.35. There are \( {\aleph }_{0} \) r.e. sets which are not recursive. 6.36. There is a recursive set \( A \) such that \( \mathop{\bigcap }\limits_{{x \in A}}\operatorname{Dmn}{\varphi }_{x}^{1} \) is not r.e. 6.37. If \( A \) is productive, then so is \( \left\{ {e : \operatorname{Dmn}{\varphi }_{e}^{1} \subseteq A}\right\} \) . 6.38. There are \( {2}^{{\aleph }_{0}} \) productive sets. Hint: Let \( A = \left\{ {e : \operatorname{Dmn}{\varphi }_{e}^{1} \subseteq \omega \sim \mathrm{K}}\right\} \) . Show that \( A \subseteq \omega \sim \mathrm{K},\left( {\omega \sim \mathrm{K}}\right) \sim A \) is infinite, and any set \( P \) with \( A \subseteq P \subseteq \omega \sim \mathrm{K} \) is productive. 6.39. Any infinite r.e. set is the disjoint union of a creative set and a productive set. Hint: say \( \operatorname{Rng}f = A \) . Let \( {gn} = {f\mu i}\left( {{fi} \neq {gj}\text{ for all }j < n}\right) \) . Show that \( {g}^{ * }\mathrm{\;K} \) is creative and \( A \sim {g}^{ * }\mathrm{\;K} \) is productive. 6.40. If \( B \) is r.e. and \( A \cap B \) is productive, then \( A \) is productive. 6.41. There is an r.e. set which is neither recursive, simple, nor creative. Hint: let \( A \) be simple and set \( B = \left\{ {x : {\left( x\right) }_{0} \in A}\right\} \) . 6.42. For \( A \subseteq \omega \) the following are equivalent: (1) \( A \) is recursive and \( A \neq 0 \) ; (2) there is a recursive function \( f \) with \( \operatorname{Rng}f = A \) and \( \forall x \in \omega \left( {{fx} \leq f\left( {x + 1}\right) }\right) \) . 6.43. For \( A \subseteq \omega \) the following are equivalent: (1) \( A \) is productive; (2) there is a partial recursive function \( f \) such that \( \forall e \in \omega \) (if Dmn \( {\varphi }_{e}^{1} \subseteq A \) then \( {fe} \) is defined and \( {fe} \in A \sim \operatorname{Dmn}{\varphi }_{e}^{1} \) ). 6.44. 
If \( A \) is creative, \( B \) is r.e., and \( A \cap B = 0 \), then \( A \cup B \) is creative. 6.45. There is a set \( A \) such that both \( A \) and \( \omega \sim A \) are productive. 6.46. If \( A \) is productive and \( B \) is simple, then \( A \cap B \) is productive. 6.47. Two sets \( A \) and \( B \) are strongly recursively inseparable if \( A \cap B = 0,\omega \sim \left( {A \cup B}\right) \) is infinite, and for every r.e. set \( C, C \sim A \) infinite \( \Rightarrow C \cap B \neq 0 \) , \( C \sim B \) infinite \( \Rightarrow C \cap A \neq 0 \) . Show that if \( A \) and \( B \) are r.e. and strongly recursively inseparable, then: (1) \( A \) and \( B \) are recursively inseparable. (2) \( A \cup B \) is simple. (3) neither \( A \) nor \( B \) is creative. (4) \( A \) and \( B \) are not effectively inseparable. 6.48. Show that there exist two r.e. strongly recursively inseparable sets. Hint: let \( E = \left\{ {\left( {e, x}\right) : \exists y\left( {\left( {e, x, y}\right) \in {\mathbf{T}}_{1}}\right) }\right\} \) . Show that there exist recursive functions \( f, g \) such that \[ E = \{ \left( {{fi},{gi}}\right) : i < \omega \} . \] Show that there exist recursive functions \( h, k \) such that \[ {h0} = {\mu i}\left( {{gi} > {3fi}}\right) ; \] \[ {k0} = {\mu i}\left( {{gi} > {3fi}\text{ and }{gi} \neq {gh0}}\right) ; \] \[ h\left( {n + 1}\right) = {\mu i}\left( {{gi} > {3fi}\& \forall j \leq n\left( {{gi} \neq {gkj}}\right) \& \forall j \leq n\left( {{fi} \neq {fhj}}\right) \& \forall j \leq n\left( {{gi} \neq {ghj}}\right) }\right) ; \] \[ k\left( {n + 1}\right) = {\mu i}\left( {{gi} > {3fi}\& \forall j \leq n + 1\left( {{gi} \neq {ghj}}\right) \& \forall j \leq n\left( {{fi} \neq {fkj}}\right) \& \forall j \leq n\left( {{gi} \neq {gkj}}\right) }\right) . \] Let \( A = \operatorname{Rng}\left( {g \circ h}\right), B = \operatorname{Rng}\left( {g \circ k}\right) \) . 
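The decision procedure behind direction (2) \( \Rightarrow \) (1) of Exercise 6.42 can be made concrete when the monotone function is unbounded (a bounded nondecreasing function has a finite, hence recursive, range, though not uniformly so). A sketch, with names of my own choosing:

```python
def in_range_of_monotone(f, x):
    """Decide x in Rng f for a nondecreasing, unbounded recursive f.

    Search f(0), f(1), ... until the values pass x; since f is
    nondecreasing and unbounded, the loop always terminates.
    """
    n = 0
    while f(n) <= x:
        if f(n) == x:
            return True
        n += 1
    return False

square = lambda n: n * n          # nondecreasing and unbounded
assert in_range_of_monotone(square, 49)       # 49 = 7^2 is in the range
assert not in_range_of_monotone(square, 50)   # 50 is not a square
```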
## Recursion Theory We have developed recursion theory as much as we need for our later purposes in logic. But in this chapter we want to survey, without proofs, some further topics. Most of these topics are also frequently useful in logical investigations. ## Turing Degrees Let \( g \) be a function mapping \( \omega \) into \( \omega \) . Imagine a Turing machine equipped with an oracle, an impenetrable black box, which gives the answer \( {gx} \) when presented with \( x \) . The function \( g \) may be nonrecursive, so that the oracle is not an effective device. Rigorously, one defines a \( g \) -Turing machine just as Turing machines were defined in 1.1, except that \( {v}_{1},\ldots ,{v}_{2m} \) are arbitrary members of \( \{ 0,1,2,3,4,5\} \) . And one adds one more stipulation in 1.2: If \( w = 5 \), and \( F\left( {e - 1}\right) = 0 \) or \( {Fe} = 1 \), then \( {F}^{\prime } = F,{d}^{\prime } = f \) , \( {e}^{\prime } = e \), while if \( w = 5 \) and \( {01}^{\left( x + 1\right) }0 \) lies on \( F \) ending at \( e \), then \( {01}^{\left( x + 1\right) }{01}^{\left( gx + 1\right) }0 \) lies on \( {F}^{\prime } \) ending at \( {e}^{\prime },{e}^{\prime } = e + {gx} + 2 \) , \( {F}^{\prime } \) is otherwise like \( F \) and \( {d}^{\prime } = f \) . Then the notion of \( g \) -Turing computable function is easily defined. One can also define \( g \) -recursive function: in 3.1, each class \( A \) is required to have \( g \) as a member. These two notions, \( g \) -Turing computable and \( g \) -recursive function, are shown equivalent just as in Chapter 3. In fact, most considerations of Chapters 1 through 6 carry over to this situation. If \( h \) is \( g \) -recursive, we also say that \( h \) is recursive in \( g \) . One can extend the notion in an obvious way to a set \( F \) of functions, arriving at the notion of a function being recursive in \( F \) . At present we restrict ourselves to the simpler notion. 
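The idea of a function being recursive in \( g \) can be modeled informally by passing the oracle as a black-box callable. This is an illustrative sketch only (it does not model tapes or machine states, and all names are mine):

```python
def relative_to(g):
    """Return a function h that is 'recursive in g': h may query the
    oracle g (any total function on the naturals, possibly nonrecursive)
    finitely often per input, but is otherwise an ordinary algorithm."""
    def h(x):
        # h(x) = g(g(x)) + x: two oracle queries, then recursive arithmetic.
        return g(g(x)) + x
    return h

# With a recursive oracle, h is itself recursive; with a nonrecursive
# oracle it need not be, but the reduction procedure is the same.
h = relative_to(lambda n: 2 * n)
assert h(3) == 15   # g(g(3)) + 3 = 12 + 3
```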
We say that \( h \) and \( g \) are Turing equivalent if each is recursive in the other. This establishes an equivalence relation on the set of all functions
## 1112_(GTM267)Quantum Theory for Mathematicians: Definition 10.19
Definition 10.19 A bounded operator \( A \) on \( \mathbf{H} \) is normal if \( A \) commutes with its adjoint: \( A{A}^{ * } = {A}^{ * }A \) . Every bounded self-adjoint operator is obviously normal. Other examples of normal operators are skew-self-adjoint operators \( \left( {{A}^{ * } = - A}\right) \) and unitary operators \( \left( {U{U}^{ * } = {U}^{ * }U = I}\right) \) . The spectrum of a bounded normal operator need not be contained in \( \mathbb{R} \), but can be an arbitrary closed, bounded, nonempty subset of \( \mathbb{C} \) . On the other hand, if \( U \) is unitary, then the spectrum of \( U \) is contained in the unit circle (Exercise 6 in Chap. 7). In this section, we consider the spectral theorem for a bounded normal operator \( A \) . The statements of the two versions of the theorem are precisely the same as in the self-adjoint case, except that \( \sigma \left( A\right) \) is no longer necessarily contained in the real line. Almost all of the proofs of these results are the same as in the self-adjoint case; we will, therefore, consider only those steps where some modification in the argument is required. Theorem 10.20 Suppose \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is normal. Then there exists a unique projection-valued measure \( {\mu }^{A} \) on the Borel \( \sigma \) -algebra in \( \sigma \left( A\right) \), with values in \( \mathcal{B}\left( \mathbf{H}\right) \), such that \[ {\int }_{\sigma \left( A\right) }{\lambda d}{\mu }^{A}\left( \lambda \right) = A \] Furthermore, for any measurable set \( E \subset \sigma \left( A\right) \), Range \( \left( {{\mu }^{A}\left( E\right) }\right) \) is invariant under \( A \) and \( {A}^{ * } \) . 
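In finite dimensions, Theorem 10.20 reduces to the fact that a normal matrix is unitarily diagonalizable, with \( {\mu }^{A}\left( {\{ \lambda \} }\right) \) the orthogonal projection onto the \( \lambda \) -eigenspace. A numerical sketch (NumPy; the example matrices are mine), which also previews the norm/spectral-radius equality of Proposition 10.21 below:

```python
import numpy as np

# A unitary (hence normal) 2x2 matrix: rotation by 90 degrees.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A commutes with A*

# Finite-dimensional spectral theorem: A = sum of eigenvalue * eigenprojection.
eigvals, eigvecs = np.linalg.eig(A)
resolved = sum(lam * np.outer(v, v.conj())
               for lam, v in zip(eigvals, eigvecs.T))
assert np.allclose(resolved, A)

# The projections mu^A({lam}) are mutually orthogonal and sum to I.
projs = [np.outer(v, v.conj()) for v in eigvecs.T]
assert np.allclose(sum(projs), np.eye(2))

# For a normal matrix, operator norm equals spectral radius...
assert np.isclose(np.linalg.norm(A, 2), max(abs(eigvals)))

# ...which fails for the non-normal Jordan block [[0,1],[0,0]].
J = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.isclose(np.linalg.norm(J, 2), 1.0)
assert np.isclose(max(abs(np.linalg.eig(J)[0])), 0.0)
```

Here the eigenvectors returned by `np.linalg.eig` are automatically orthogonal because the eigenvalues \( \pm i \) are distinct and \( A \) is normal.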
Once we have the projection-valued measure \( {\mu }^{A} \), we can define a functional calculus for \( A \), as in the self-adjoint case, by setting \[ f\left( A\right) = {\int }_{\sigma \left( A\right) }f\left( \lambda \right) d{\mu }^{A}\left( \lambda \right) \] for any bounded measurable function \( f \) on \( \sigma \left( A\right) \) . We can also define spectral subspaces, as in the self-adjoint case, by setting \[ {V}_{E} \mathrel{\text{:=}} \operatorname{Range}\left( {{\mu }^{A}\left( E\right) }\right) \] for each Borel set \( E \subset \sigma \left( A\right) \) . These spectral subspaces have precisely the same properties (with the same proofs) as in Proposition 7.15, with the following two exceptions. First, the assertion that \( {V}_{E} \) is invariant under \( A \) should be replaced by the assertion that \( {V}_{E} \) is invariant under \( A \) and \( {A}^{ * } \) . Second, in Point 2 of the proposition, the condition \( E \subset \left\lbrack {{\lambda }_{0} - \varepsilon ,{\lambda }_{0} + \varepsilon }\right\rbrack \) should be replaced by \( E \subset \overline{D\left( {{\lambda }_{0},\varepsilon }\right) } \), where \( D\left( {z, r}\right) \) denotes the disk of radius \( r \) in \( \mathbb{C} \) centered at \( z \) . Meanwhile, the spectral theorem in its direct integral and multiplication operator versions also holds for a bounded normal operator \( A \) . The statements are identical to the self-adjoint case, except that we no longer assume \( \sigma \left( A\right) \subset \mathbb{R} \) and we no longer assume that the function \( h \) in the multiplication operator version is real valued. Let us recall the two stages in the proof of the spectral theorem (first version) for bounded self-adjoint operators. The first stage is the construction of the continuous functional calculus. 
The steps in this construction are (1) the equality of the norm and spectral radius for self-adjoint operators, (2) the spectral mapping theorem, and (3) the Stone-Weierstrass theorem. The second stage is a sort of operator-valued Riesz representation theorem, which we prove by reducing it to the ordinary Riesz representation theorem using quadratic forms. In generalizing from bounded self-adjoint to bounded normal operators, the second stage of the proof is precisely the same as in the self-adjoint case. In the first stage, however, there are some additional ideas needed in each step of the argument. There is a relatively simple argument that reduces the equality of norm and spectral radius for normal operators to the self-adjoint case. Meanwhile, since the spectral mapping theorem, as stated in Chap. 8, already holds for arbitrary bounded operators, it appears that no change is needed in this step. We must think, however, about the proper notion of "polynomial." For a general normal operator \( A \), the spectrum of \( A \) is not contained in \( \mathbb{R} \), and, thus, powers of \( \lambda \) are complex-valued functions on \( \sigma \left( A\right) \) . We must, therefore, use the complex-valued version of the Stone-Weierstrass theorem (Appendix A.3.1), which requires that our algebra of functions be closed under complex-conjugation. This means that we need to consider polynomials in \( \lambda \) and \( \bar{\lambda } \), that is, linear combinations of functions of the form \( {\lambda }^{m}{\bar{\lambda }}^{n} \) . What we need, then, is a form of the spectral mapping theorem that applies to this sort of polynomial. On the operator side, the natural counterpart to the complex conjugate of a function is the adjoint of an operator. Thus, applying the function \( {\lambda }^{m}{\bar{\lambda }}^{n} \) to a normal operator \( A \) should give \( {A}^{m}{\left( {A}^{ * }\right) }^{n} \) . 
The desired "spectral mapping theorem" is then the following: If \( p \) is a polynomial in two variables, and \( A \) is a bounded normal operator, then \[ \sigma \left( {p\left( {A,{A}^{ * }}\right) }\right) = \{ p\left( {\lambda ,\bar{\lambda }}\right) \mid \lambda \in \sigma \left( A\right) \} . \] (10.21) This statement is true (Theorem 10.23), but its proof is not nearly as simple as the proof of the ordinary spectral mapping theorem. One way to prove (10.21) is to use the theory of commutative \( {C}^{ * } \) -algebras, as in [33]. (See Theorem 11.19 in [33] along with the assertion on p. 321 that the spectrum of an element is independent of the algebra containing that element.) Another approach is the direct argument found in Bernau [3], which uses no fancy machinery but which is long and not easily motivated. A third approach is to use the spectral theorem for bounded self-adjoint operators to help us prove (10.21); this is the approach we will follow. We begin with the equality of norm and spectral radius and then turn to \( \left( {10.21}\right) \) . Proposition 10.21 If \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is normal, then \[ \parallel A\parallel = R\left( A\right) . \] Lemma 10.22 If \( A \) and \( B \) are commuting elements of \( \mathcal{B}\left( \mathbf{H}\right) \), then \[ R\left( {AB}\right) \leq R\left( A\right) R\left( B\right) \] Proof. 
If \( A \) is any bounded operator, the proof of Lemma 8.1 shows that for any real number \( T \) with \( T > R\left( A\right) \), we have \[ \mathop{\lim }\limits_{{m \rightarrow \infty }}\frac{\begin{Vmatrix}{A}^{m}\end{Vmatrix}}{{T}^{m}} = 0. \] If \( A \) and \( B \) are two commuting bounded operators and \( S \) and \( T \) are two real numbers, with \( S > R\left( A\right) \) and \( T > R\left( B\right) \), then \[ \frac{\begin{Vmatrix}{\left( AB\right) }^{m}\end{Vmatrix}}{{S}^{m}{T}^{m}} = \frac{\begin{Vmatrix}{A}^{m}{B}^{m}\end{Vmatrix}}{{S}^{m}{T}^{m}} \leq \frac{\begin{Vmatrix}{A}^{m}\end{Vmatrix}\begin{Vmatrix}{B}^{m}\end{Vmatrix}}{{S}^{m}{T}^{m}}. \] Thus, \[ \mathop{\lim }\limits_{{m \rightarrow \infty }}\frac{\begin{Vmatrix}{\left( AB\right) }^{m}\end{Vmatrix}}{{S}^{m}{T}^{m}} = 0. \] \( \left( {10.22}\right) \) Meanwhile, if we apply the expression for the resolvent in the proof of Lemma 8.1 to \( {AB} \), we obtain \[ {\left( AB - \lambda \right) }^{-1} = - \mathop{\sum }\limits_{{m = 0}}^{\infty }\frac{{A}^{m}{B}^{m}}{{\lambda }^{m + 1}} \] \( \left( {10.23}\right) \) since \( A \) and \( B \) commute. For any \( {\lambda }_{1} \) with \( \left| {\lambda }_{1}\right| > R\left( A\right) R\left( B\right) \), take \( {\lambda }_{2} \) with \( \left| {\lambda }_{1}\right| > \left| {\lambda }_{2}\right| > R\left( A\right) R\left( B\right) \) . The terms in (10.23) with \( \lambda = {\lambda }_{2} \) tend to zero by (10.22), which means that (10.23) converges with \( \lambda = {\lambda }_{1} \) . Thus, \( {\lambda }_{1} \) is in the resolvent set of \( {AB} \) . ∎ Proof of Proposition 10.21. For any bounded operator, \( \parallel A\parallel \geq R\left( A\right) \) (Proposition 7.5). To get the inequality in the other direction, recall (Proposition 7.2) that \( \parallel A{\parallel }^{2} = \begin{Vmatrix}{{A}^{ * }A}\end{Vmatrix} \) . Note also that \( {A}^{ * }A \) is self-adjoint, since its adjoint is \( {A}^{ * }{A}^{* * } = {A}^{ * }A \) . 
Thus, if \( A \) and \( {A}^{ * } \) commute, we have \[ \parallel A{\parallel }^{2} = \begin{Vmatrix}{{A}^{ * }A}\end{Vmatrix} = R\left( {{A}^{ * }A}\right) \leq R\left( {A}^{ * }\right) R\left( A\right) \] \[ \leq \begin{Vmatrix}{A}^{ * }\end{Vmatrix}R\left( A\right) = \parallel A\parallel R\left( A\right) . \] Here we have used Lemmas 8.1 and 10.22 and the general inequality between norm and spectral radius. Dividing by \( \parallel A\parallel \) gives \( \parallel A\parallel \leq R\left( A\right) \), unless \( \parallel A\parallel = 0 \), in which case the desired inequality is trivially satisfied. Theorem 10.23 If \( A \in \mathcal{B}\left( \mathbf{H}\right) \) is normal, then for any polynomial \( p \) in two variables, we have \[ \sigma \left( {p\left( {A,{A}^{ * }}\right) }\right) = \{ p\left( {\lambda ,\bar{\lambda }}\right) \mid \lambda \in \sigma \left( A\right) \} . \] If, for example, \( p\left( {\lambda ,\bar{\lambda }}\right) = {\lambda }^{2}{\bar{\lambda }}^{3} \), then \( p\left( {A,{A}^{ * }}\right) = {A}^{2}{\left( {A}^{ * }\right) }^{3} \) . Note that since \( A \) and \( {A}^{ * } \
## 1112_(GTM267)Quantum Theory for Mathematicians: Definition 9.6
Definition 9.6 An unbounded operator \( A \) on \( \mathbf{H} \) is said to be closed if the graph of \( A \) is a closed subset of \( \mathbf{H} \times \mathbf{H} \) . An unbounded operator \( A \) on \( \mathbf{H} \) is said to be closable if the closure in \( \mathbf{H} \times \mathbf{H} \) of the graph of \( A \) is the graph of a function. If \( A \) is closable, then the closure \( {A}^{cl} \) of \( A \) is the operator with graph equal to the closure of the graph of \( A \) . To be more explicit, an operator \( A \) is closed if and only if the following condition holds: Suppose a sequence \( {\psi }_{n} \) belongs to \( \operatorname{Dom}\left( A\right) \) and suppose that there exist vectors \( \psi \) and \( \phi \) in \( \mathbf{H} \) with \( {\psi }_{n} \rightarrow \psi \) and \( A{\psi }_{n} \rightarrow \phi \) . Then \( \psi \) belongs to \( \operatorname{Dom}\left( A\right) \) and \( {A\psi } = \phi \) . Regarding closability, an operator \( A \) is not closable if there exist two elements in the closure of the graph of \( A \) of the form \( \left( {\phi ,\psi }\right) \) and \( \left( {\phi ,\chi }\right) \), with \( \psi \neq \chi \) . Another way of putting it is to say that an operator \( A \) is closable if there exists some closed extension of it, in which case the closure of \( A \) is the smallest closed extension of \( A \) . The notion of the closure of a (closable) operator is useful because it sweeps away some of the arbitrariness in the choice of a domain of an operator. 
If we consider, for example, the operator \( A = - i\hslash d/{dx} \) as an unbounded operator on \( {L}^{2}\left( \mathbb{R}\right) \), there are many different reasonable choices for \( \operatorname{Dom}\left( A\right) \), including (1) the space of \( {C}^{\infty } \) functions of compact support, (2) the Schwartz space (Definition A.15), and (3) the space of continuously differentiable functions \( \psi \) for which both \( \psi \) and \( {\psi }^{\prime } \) belong to \( {L}^{2}\left( \mathbb{R}\right) \) . As it turns out, each of these three choices for \( \operatorname{Dom}\left( A\right) \) leads to the same operator \( {A}^{cl} \) . Note that we are not claiming that every choice for \( \operatorname{Dom}\left( A\right) \) leads to the same closure; nevertheless, it is often the case that many reasonable choices do lead to the same closure. Definition 9.7 An unbounded operator \( A \) on \( \mathbf{H} \) is said to be essentially self-adjoint if \( A \) is symmetric and closable and \( {A}^{cl} \) is self-adjoint. Actually, as we shall see in the next section, a symmetric operator is always closable. Many symmetric operators fail to be even essentially selfadjoint. We will see examples of such operators in Sects. 9.6 and 9.10. Section 9.5 gives some reasonably simple criteria for determining when a symmetric operator is essentially self-adjoint. ## 9.3 Elementary Properties of Adjoints and Closed Operators In this section, we spell out some of the most basic and useful properties of adjoints and closures of unbounded operators. In Sect. 9.5, we will draw on these results to prove some more substantial results. In what follows, if we say that two operators "coincide," it means that they have the same domain and that they are equal on that common domain. Proposition 9.8 1. 
If \( A \) is an unbounded operator on \( \mathbf{H} \), then the graph of the operator \( {A}^{ * } \) (which may or may not be densely defined) is closed in \( \mathbf{H} \times \mathbf{H} \) . 2. A symmetric operator is always closable. Proof. Suppose \( {\psi }_{n} \) is a sequence in the domain of \( {A}^{ * } \) that converges to some \( \psi \in \mathbf{H} \) . Suppose also that \( {A}^{ * }{\psi }_{n} \) converges to some \( \phi \in \mathbf{H} \) . Then \( \left\langle {{\psi }_{n}, A \cdot }\right\rangle = \left\langle {{A}^{ * }{\psi }_{n}, \cdot }\right\rangle \) and for any \( \chi \in \operatorname{Dom}\left( A\right) \), we have \[ \langle \psi ,{A\chi }\rangle = \mathop{\lim }\limits_{{n \rightarrow \infty }}\left\langle {{\psi }_{n},{A\chi }}\right\rangle = \mathop{\lim }\limits_{{n \rightarrow \infty }}\left\langle {{A}^{ * }{\psi }_{n},\chi }\right\rangle = \langle \phi ,\chi \rangle . \] This shows that \( \psi \) belongs to the domain of \( {A}^{ * } \) and that \( {A}^{ * }\psi = \phi \), establishing that the graph of \( {A}^{ * } \) is closed. If \( A \) is symmetric, \( {A}^{ * } \) is an extension of \( A \) . Since, as we have just proved, \( {A}^{ * } \) is closed, \( A \) has a closed extension and is therefore closable. ∎ Corollary 9.9 If \( A \) is a symmetric operator with \( \operatorname{Dom}\left( A\right) = \mathbf{H} \), then \( A \) is bounded. Proof. Since \( A \) is symmetric, it is closable by Proposition 9.8. But since the domain of \( A \) is already all of \( \mathbf{H} \), the closure of \( A \) must coincide with \( A \) itself. (The closure of \( A \) always agrees with \( A \) on \( \operatorname{Dom}\left( A\right) \), which in this case is all of \( \mathbf{H} \) .) Thus, \( A \) is a closed operator defined on all of \( \mathbf{H} \), and the closed graph theorem (Theorem A.39) implies that \( A \) is bounded. 
Proposition 9.10 If \( A \) is a closable operator on \( \mathbf{H} \), then the adjoint of \( {A}^{cl} \) coincides with the adjoint of \( A \) . Proof. Suppose that for some \( \psi \in \mathbf{H} \) there exists a \( \phi \) such that \( \left\langle {\psi ,{A}^{cl}\chi }\right\rangle = \langle \phi ,\chi \rangle \) for all \( \chi \in \operatorname{Dom}\left( {A}^{cl}\right) \) . Since \( {A}^{cl} \) is an extension of \( A \), it follows that \( \langle \psi ,{A\chi }\rangle = \langle \phi ,\chi \rangle \) for all \( \chi \in \operatorname{Dom}\left( A\right) \) . This shows that \( \operatorname{Dom}\left( {A}^{ * }\right) \supset \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \) and that \( {A}^{ * } \) agrees with \( {\left( {A}^{cl}\right) }^{ * } \) on \( \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \) . In the other direction, suppose for some \( \psi \in \mathbf{H} \) there exists a \( \phi \) such that \( \langle \psi ,{A\chi }\rangle = \langle \phi ,\chi \rangle \) for all \( \chi \in \operatorname{Dom}\left( A\right) \) . Suppose now \( \xi \in \operatorname{Dom}\left( {A}^{cl}\right) \) with \( {A}^{cl}\xi = \eta \) . Then there exists a sequence \( {\chi }_{n} \) in \( \operatorname{Dom}\left( A\right) \) with \( {\chi }_{n} \rightarrow \xi \) and \( A{\chi }_{n} \rightarrow \eta \), and we have \[ \left\langle {\psi, A{\chi }_{n}}\right\rangle = \left\langle {\phi ,{\chi }_{n}}\right\rangle \] for all \( n \) . Letting \( n \) tend to infinity, we obtain \( \langle \psi ,\eta \rangle = \langle \phi ,\xi \rangle \), or \( \left\langle {\psi ,{A}^{cl}\xi }\right\rangle = \langle \phi ,\xi \rangle \) . This shows that \( \psi \in \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \) and \( {\left( {A}^{cl}\right) }^{ * }\psi = \phi \) . Thus, \( \operatorname{Dom}\left( {A}^{ * }\right) \subset \operatorname{Dom}\left( {\left( {A}^{cl}\right) }^{ * }\right) \) . 
∎ Proposition 9.11 If \( A \) is essentially self-adjoint, then \( {A}^{cl} \) is the unique self-adjoint extension of \( A \) . Proof. Suppose \( B \) is a self-adjoint extension of \( A \) . Since \( B = {B}^{ * }, B \) is closed and is, therefore, an extension of \( {A}^{cl} \) . It then follows from the definition of the adjoint that \( \operatorname{Dom}\left( {B}^{ * }\right) \subset \operatorname{Dom}\left( {A}^{cl}\right) \) . Thus, we have \[ \operatorname{Dom}\left( {B}^{ * }\right) \subset \operatorname{Dom}\left( {A}^{cl}\right) \subset \operatorname{Dom}\left( B\right) . \] Since \( B \) is self-adjoint, all three of the above sets must be equal, so actually \( B = {A}^{cl} \) . Proposition 9.12 If \( A \) is an unbounded operator on \( \mathbf{H} \), then \[ {\left( \operatorname{Range}\left( A\right) \right) }^{ \bot } = \ker \left( {A}^{ * }\right) . \] Proof. First assume that \( \psi \in {\left( \operatorname{Range}\left( A\right) \right) }^{ \bot } \) . Then for all \( \phi \in \operatorname{Dom}\left( A\right) \) we have \[ \langle \psi ,{A\phi }\rangle = 0. \] That is to say, the linear functional \( \langle \psi, A \cdot \rangle \) is bounded (in fact, zero) on \( \operatorname{Dom}\left( A\right) \) . Thus, from the definition of the adjoint, we conclude that \( \psi \in \operatorname{Dom}\left( {A}^{ * }\right) \) and \( {A}^{ * }\psi = 0 \) . Meanwhile, suppose that \( \psi \) is in \( \operatorname{Dom}\left( {A}^{ * }\right) \) and that \( {A}^{ * }\psi = 0 \) . The only way this can happen is if the linear functional \( \langle \psi, A \cdot \rangle \) is zero on \( \operatorname{Dom}\left( A\right) \) , which means that \( \psi \) is orthogonal to the image of \( A \) . Proposition 9.13 Suppose \( A \) is an unbounded operator on \( \mathbf{H} \) and that \( B \) is a bounded operator defined on all of \( \mathbf{H} \) . 
Let \( A + B \) denote the operator with \( \operatorname{Dom}\left( {A + B}\right) = \operatorname{Dom}\left( A\right) \) and given by \( \left( {A + B}\right) \psi = {A\psi } + {B\psi } \) for all \( \psi \in \operatorname{Dom}\left( A\right) \) . Then \( {\left( A + B\right) }^{ * } \) has the same domain as \( {A}^{ * } \) and \( {\left( A + B\right) }^{ * }\psi = {A}^{ * }\psi + {B}^{ * }\psi \) for all \( \psi \in \operatorname{Dom}\left( {A}^{ * }\right) \) . In particular, the sum of an unbounded self-adjoint operator and a bounded self-adjoint operator (defined on all of \( \mathbf{H} \) ) is self-adjoint on the domain of the unbounded operator. Proof. See Exercise 3. ∎ The sum of two unbounded self-adjoint operators is not, in general, self-adjoint. See Sect. 9.9 for more information about this issue. Proposition 9.14 Let \( A \) be a closed operator and \( \
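Proposition 9.12 can be sanity-checked in finite dimensions, where every operator is everywhere defined and the adjoint of a real matrix is its transpose. The sketch below is our own illustration, not from the text: the matrix and helper names are hypothetical, and the grid search simply confirms that a vector is orthogonal to the range of \( A \) exactly when the transpose kills it.

```python
# Finite-dimensional check of (Range A)^perp = ker(A*): for a real matrix
# the adjoint is the transpose.  Illustrative example, not from the book.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

A = [[1, 0], [1, 0], [2, 0]]   # maps R^2 -> R^3; Range(A) = span{(1,1,2)}
At = transpose(A)

def in_range_perp(v):
    # v is orthogonal to Range(A) iff it is orthogonal to A e_j for each j
    basis = [[1, 0], [0, 1]]
    return all(sum(x * y for x, y in zip(matvec(A, e), v)) == 0 for e in basis)

def in_ker_adjoint(v):
    return all(c == 0 for c in matvec(At, v))

grid = range(-2, 3)
vectors = [[a, b, c] for a in grid for b in grid for c in grid]
assert all(in_range_perp(v) == in_ker_adjoint(v) for v in vectors)
```

The grid is of course no substitute for the unbounded case, where the domain issues handled in the proof above are the whole point.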
1112_(GTM267)Quantum Theory for Mathematicians
Definition 21.3
Definition 21.3 If \( f \) and \( g \) are smooth functions on \( N \), define the Poisson bracket \( \{ f, g\} \) of \( f \) and \( g \) by \[ \{ f, g\} = - {\omega }^{-1}\left( {{df},{dg}}\right) \] In particular, if \( \mathbf{1} \) denotes the constant function on \( N \), then \( \{ \mathbf{1}, f\} = \) \( \{ f,\mathbf{1}\} = 0 \) for all smooth functions \( f \) . Example 21.4 If \( \omega \) is the canonical 2-form on \( {T}^{ * }M \), then the associated Poisson bracket may be computed in standard coordinates as \[ \{ f, g\} = \frac{\partial f}{\partial {x}_{j}}\frac{\partial g}{\partial {p}_{j}} - \frac{\partial f}{\partial {p}_{j}}\frac{\partial g}{\partial {x}_{j}} \] for all smooth functions \( f \) and \( g \) on \( {T}^{ * }M \) . Proof. The linear functional \[ \omega \left( {\frac{\partial }{\partial {x}_{j}}, \cdot }\right) \] has a value of -1 on the vector \( \partial /\partial {p}_{j} \) and a value of 0 on all the other basic partial derivatives. This means that \( \omega \left( {\partial /\partial {x}_{j}, \cdot }\right) = - d{p}_{j} \) . Similarly, \( \omega \left( {\partial /\partial {p}_{j}, \cdot }\right) = d{x}_{j} \) . We may thus compute, for example, that \[ - 1 = \omega \left( {\frac{\partial }{\partial {x}_{j}},\frac{\partial }{\partial {p}_{j}}}\right) \] \[ = {\omega }^{-1}\left( {-d{p}_{j}, d{x}_{j}}\right) \] \[ = {\omega }^{-1}\left( {d{x}_{j}, d{p}_{j}}\right) \] Meanwhile, \( {\omega }^{-1}\left( {d{x}_{j}, d{x}_{k}}\right) = {\omega }^{-1}\left( {d{p}_{j}, d{p}_{k}}\right) = 0 \) and \( {\omega }^{-1}\left( {d{p}_{j}, d{x}_{k}}\right) = 0 \) when \( j \neq k \) . 
Thus, we compute that \[ \{ f, g\} = - {\omega }^{-1}\left( {\frac{\partial f}{\partial {x}_{j}}d{x}_{j} + \frac{\partial f}{\partial {p}_{j}}d{p}_{j},\frac{\partial g}{\partial {x}_{k}}d{x}_{k} + \frac{\partial g}{\partial {p}_{k}}d{p}_{k}}\right) \] \[ = \frac{\partial f}{\partial {x}_{j}}\frac{\partial g}{\partial {p}_{k}}{\delta }_{jk} - \frac{\partial f}{\partial {p}_{j}}\frac{\partial g}{\partial {x}_{k}}{\delta }_{jk} \] which reduces to the claimed expression. ∎ Proposition 21.5 For any smooth functions \( f, g, h \) on \( N \), we have \[ \{ g, f\} = - \{ f, g\} \] and \[ \{ f,{gh}\} = \{ f, g\} h + g\{ f, h\} . \] Proof. Since \( \omega \) is skew-symmetric on the tangent space to \( N \) at each point and \( {\omega }^{-1} \) is obtained from \( \omega \) by means of an isomorphism of tangent and cotangent space, \( {\omega }^{-1} \) is a skew-symmetric form on the cotangent space. The skew-symmetry of the Poisson bracket follows. The second relation follows from the Leibniz product rule for \( d\left( {gh}\right) \) together with the bilinearity of \( {\omega }^{-1} \) . ∎ Definition 21.6 If \( f \) is a smooth function on \( N \), let \( {X}_{f} \) be the unique vector field on \( N \) such that \[ {df} = \omega \left( {{X}_{f}, \cdot }\right) \] (21.8) We call \( {X}_{f} \) the Hamiltonian vector field associated to \( f \) . That is to say, \( {X}_{f} \) corresponds to \( {df} \) under the isomorphism between tangent and cotangent spaces established by \( \omega \) . Proposition 21.7 For all \( f \) and \( g \), \[ {X}_{f}\left( g\right) = \{ f, g\} = - {X}_{g}\left( f\right) . \] Furthermore, \[ \omega \left( {{X}_{f},{X}_{g}}\right) = - \{ f, g\} . \] Proof. For each \( z \in N \), we are using \( \omega \) to identify \( {T}_{z}N \) with \( {T}_{z}^{ * }N \) . Equation (21.8) says that under this identification, \( {X}_{f} \) is identified with \( {df} \) . 
Thus, \[ - {\omega }^{-1}\left( {{df},{dg}}\right) = - \omega \left( {{X}_{f},{X}_{g}}\right) = - {df}\left( {X}_{g}\right) = - {X}_{g}\left( f\right) . \] Thus, \( \{ f, g\} = - {X}_{g}\left( f\right) \), as claimed. A similar argument with the roles of \( f \) and \( g \) reversed gives the claimed relationship between \( {X}_{f}\left( g\right) \) and \( \{ g, f\} \) . Finally, \[ \omega \left( {{X}_{f},{X}_{g}}\right) = {df}\left( {X}_{g}\right) = {X}_{g}\left( f\right) = - \{ f, g\} , \] as claimed. ∎ Definition 21.8 For any smooth function \( f \) on \( N \), the Hamiltonian flow generated by \( f \), denoted \( {\Phi }^{f} \), is the flow generated by the vector field \( - {X}_{f} \) . In the case \( N = {T}^{ * }{\mathbb{R}}^{n} \cong {\mathbb{R}}^{2n} \), this definition agrees with our notation in Sect. 2.5. Proposition 21.9 For any smooth function \( f \) on \( N \), the Hamiltonian flow \( {\Phi }^{f} \) preserves \( \omega \) . Proof. In general, a flow \( \Phi \) preserves a differential form \( \alpha \) if and only if the Lie derivative \( {\mathcal{L}}_{X}\alpha = 0 \), where \( X \) is the vector field generating \( \Phi \) . In our case, since \( \omega \) is closed, we have, by (21.7), \[ {\mathcal{L}}_{{X}_{f}}\omega = d\left\lbrack {{i}_{{X}_{f}}\omega }\right\rbrack = {d}^{2}f = 0, \] since \( {i}_{{X}_{f}}\omega \) is, by the definition of \( {X}_{f} \), equal to \( {df} \) . ∎ Proposition 21.10 For any smooth functions \( f, g, h \) on \( N \), the Jacobi identity holds: \[ \{ f,\{ g, h\} \} + \{ g,\{ h, f\} \} + \{ h,\{ f, g\} \} = 0. \] This result shows that the space of smooth functions on \( N \) forms a Lie algebra under the Poisson bracket. The proof of Proposition 21.10 relies on Proposition 21.9, which in turn relies on the fact that \( \omega \) is closed. Proof. 
Since the Hamiltonian flow \( {\Phi }^{f} \) preserves \( \omega \), it also preserves \( {\omega }^{-1} \) and thus \[ {\omega }^{-1}\left( {d\left( {g \circ {\Phi }_{t}^{f}}\right), d\left( {h \circ {\Phi }_{t}^{f}}\right) }\right) = {\omega }^{-1}\left( {{dg},{dh}}\right) \circ {\Phi }_{t}^{f}, \] or, equivalently, \[ \left\{ {g \circ {\Phi }_{t}^{f}, h \circ {\Phi }_{t}^{f}}\right\} = \{ g, h\} \circ {\Phi }_{t}^{f}. \] Differentiating this relation with respect to \( t \) at \( t = 0 \) gives \[ \left\{ {-{X}_{f}\left( g\right), h}\right\} + \left\{ {g, - {X}_{f}\left( h\right) }\right\} = - {X}_{f}\left( {\{ g, h\} }\right) , \] or, equivalently, \[ - \{ \{ f, g\}, h\} - \{ g,\{ f, h\} \} = - \{ f,\{ g, h\} \} . \] After moving \( - \{ f,\{ g, h\} \} \) to the left-hand side of the equation and using the skew-symmetry of the Poisson bracket, we obtain the Jacobi identity. ∎ Proposition 21.11 For any smooth functions \( f \) and \( g \) on \( N \), the Hamiltonian vector fields \( {X}_{f} \) and \( {X}_{g} \) satisfy \[ \left\lbrack {{X}_{f},{X}_{g}}\right\rbrack = {X}_{\{ f, g\} } \] Proof. See Exercise 3. ∎ ## 21.2.3 Hamiltonian Flows and Conserved Quantities We have seen (Proposition 21.9) that if \( f \) is a smooth function, then the flow generated by \( {X}_{f} \) preserves \( \omega \) . We have the following partial converse to this result. Proposition 21.12 Suppose \( \Phi \) is the flow generated by a vector field \( - X \) on \( N \) . If \( \Phi \) preserves \( \omega \), then \( X \) can be represented locally in the form \( X = {X}_{f} \) for some smooth function \( f \) on \( N \) . If \( N \) is simply connected, the function \( f \) exists globally on \( N \) . Proof. The statement that \( \Phi \) preserves \( \omega \) can be expressed infinitesimally as \[ {\mathcal{L}}_{X}\omega = 0. \] Since also \( \omega \) is closed, (21.7) tells us that \[ d\left( {{i}_{X}\omega }\right) = 0. 
\] Since \( {i}_{X}\omega \) is closed, this 1-form can be expressed locally as \( {i}_{X}\omega = {df} \) for some smooth function \( f \), which says precisely that \( X = {X}_{f} \) . If \( N \) is simply connected, then every closed 1-form can be expressed globally as \( {df} \), for some smooth function \( f \) . ∎ A flow of the sort in Proposition 21.12 is said to be locally Hamiltonian. Such a flow is said to be (globally) Hamiltonian if the function \( f \) in the proposition can be defined on all of \( N \) . (Compare Definition 21.8.) If \( \Phi \) is a Hamiltonian flow, the function \( f \) such that \( \Phi = {\Phi }^{f} \) is called a Hamiltonian generator of \( \Phi \) . If \( N \) is connected, then any two Hamiltonian generators of \( \Phi \) must differ by a constant. To see that, in general, \( f \) is only defined locally, consider the symplectic manifold \( {S}^{1} \times \mathbb{R} \), with symplectic form \( \omega = {d\phi } \land {dx} \), where \( \phi \) is the angular coordinate on \( {S}^{1} \) and \( x \) is the linear coordinate on \( \mathbb{R} \) . Note that the 1-form \( {d\phi } \) is independent of the choice of a local angle variable on \( {S}^{1} \), since any two such angle functions differ by a constant (an integer multiple of \( {2\pi } \) ). Thus, \( {d\phi } \) is a globally defined, smooth 1-form, even though there is no globally defined, smooth angle function \( \phi \) . Define a flow \( \Phi \) by \[ {\Phi }_{t}\left( {\phi, x}\right) = \left( {\phi, x + t}\right) \] This flow certainly preserves \( \omega \), since \( {dx} \) is invariant under translations. The flow \( \Phi \) is generated by the vector field \( - X = \partial /\partial x \), and \[ \omega \left( {-\partial /\partial x, \cdot }\right) = {d\phi } \] As we have already noted, however, there is no globally defined function \( \phi \) whose differential is \( {d\phi } \) . 
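Proposition 21.10 can be checked exactly in one degree of freedom, where Example 21.4 reduces the bracket to \( \{ f, g\} = \partial f/\partial x\,\partial g/\partial p - \partial f/\partial p\,\partial g/\partial x \). The sketch below is our own illustration (the dict-based polynomial encoding is an assumption, not from the text): it verifies skew-symmetry and the Jacobi identity for polynomial observables in exact rational arithmetic.

```python
# Exact check of the canonical Poisson bracket and the Jacobi identity for
# polynomials in (x, p).  A polynomial is a dict {(i, j): coeff} meaning
# sum coeff * x^i * p^j.  Illustrative sketch, not the book's notation.
from fractions import Fraction

def add(f, g):
    h = dict(f)
    for k, c in g.items():
        h[k] = h.get(k, 0) + c
    return {k: c for k, c in h.items() if c != 0}

def scale(f, s):
    return {k: c * s for k, c in f.items() if c * s != 0}

def mul(f, g):
    h = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            h[(i + k, j + l)] = h.get((i + k, j + l), 0) + a * b
    return {k: c for k, c in h.items() if c != 0}

def d_dx(f):
    return {(i - 1, j): c * i for (i, j), c in f.items() if i > 0}

def d_dp(f):
    return {(i, j - 1): c * j for (i, j), c in f.items() if j > 0}

def pb(f, g):  # {f, g} = f_x g_p - f_p g_x
    return add(mul(d_dx(f), d_dp(g)), scale(mul(d_dp(f), d_dx(g)), -1))

f = {(2, 0): Fraction(1)}        # x^2
g = {(0, 2): Fraction(1, 2)}     # p^2 / 2
h = {(1, 1): Fraction(3)}        # 3 x p

jacobi = add(pb(f, pb(g, h)), add(pb(g, pb(h, f)), pb(h, pb(f, g))))
assert jacobi == {}                      # Jacobi identity holds exactly
assert pb(f, g) == scale(pb(g, f), -1)   # skew-symmetry
```

Since every operation here is exact, the empty dict really is the zero polynomial; no floating-point tolerance is involved.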
Although any smooth function on a symplectic manifold \( N \) generates a Hamiltonian flow, in physical examples there is usually one distinguished function with a Hamiltonian flow that is thought of as "the" time-evolution of the system. Definition 21.13 A Hamiltonian system is a symplectic manifold \( N \) together with a distinguished Hamiltonian flow \( {\Phi }^{H} \), generated by a smooth function \( H \) on \( N \), called the Hamiltonian of the system. A function \( f \) is called a conserved quantity for a Hamiltonian system \( \left( {N,{\Phi }^{H}}\right) \) if \( f\left( {{\Phi }_{t}^{H}\left( x\right) }\right) \) is independent of \( t \) for each fixed \( x \in N \) . As in the \( {\mathb
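Definition 21.13 can be illustrated with the harmonic-oscillator Hamiltonian \( H = \left( {x}^{2} + {p}^{2}\right) /2 \) on \( {T}^{ * }\mathbb{R} \): Hamilton's equations \( \dot{x} = p \), \( \dot{p} = - x \) are solved by a rotation of the phase plane, and \( H \) itself is a conserved quantity along the flow. A minimal numerical sketch (our example, not from the text):

```python
import math

def flow(t, x, p):
    # Exact Hamiltonian flow for H = (x^2 + p^2)/2: Hamilton's equations
    # x' = p, p' = -x are solved by a clockwise rotation of the phase plane.
    return (x * math.cos(t) + p * math.sin(t),
            p * math.cos(t) - x * math.sin(t))

H = lambda x, p: 0.5 * (x * x + p * p)

x0, p0 = 1.0, 0.5
for t in [0.1 * k for k in range(100)]:
    x, p = flow(t, x0, p0)
    # H is conserved: the flow moves points along level sets of H
    assert abs(H(x, p) - H(x0, p0)) < 1e-12
```

Conservation here is just the statement that rotations preserve \( {x}^{2} + {p}^{2} \); the assertion tolerance only absorbs floating-point roundoff.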
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 6.50
Definition 6.50. The multimaps \( {F}_{i} : X \rightrightarrows {Y}_{i} \) are said to be cooperative (resp. coordinated) at \( \left( {\bar{x},{\bar{y}}_{1},\ldots ,{\bar{y}}_{k}}\right) \) if the graphs of the multimaps \( {M}_{i} : X \rightrightarrows Y \mathrel{\text{:=}} {Y}_{1} \times \cdots \times {Y}_{k} \) given by \( {M}_{1}\left( x\right) \mathrel{\text{:=}} {F}_{1}\left( x\right) \times {Y}_{2} \times \cdots \times {Y}_{k} \), \( {M}_{i}\left( x\right) \mathrel{\text{:=}} {Y}_{1} \times \cdots \times {Y}_{i - 1} \times {F}_{i}\left( x\right) \times {Y}_{i + 1} \times \cdots \times {Y}_{k} \) for \( i = 2,\ldots, k - 1 \), \( {M}_{k}\left( x\right) \mathrel{\text{:=}} {Y}_{1} \times \cdots \times {Y}_{k - 1} \times {F}_{k}\left( x\right) \) are allied (resp. synergetic) at \( \left( {\bar{x},{\bar{y}}_{1},\ldots ,{\bar{y}}_{k}}\right) \) . It is easy to see that \( {F}_{1},\ldots ,{F}_{k} \) are cooperative (resp. coordinated) whenever all but one of the \( {F}_{i} \) ’s are coderivatively bounded around \( \left( {\bar{x},{\bar{y}}_{i},0}\right) \) (resp. coderivatively compact at \( \left( {\bar{x},{\bar{y}}_{i}}\right) \) ). Corollary 6.51. Let \( X,{Y}_{1},{Y}_{2} \) be Asplund spaces and let the multimaps \( {F}_{1} : X \rightrightarrows {Y}_{1} \), \( {F}_{2} : X \rightrightarrows {Y}_{2} \) have closed graphs. If they are cooperative at \( \left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \), then for every \( \left( {{\bar{y}}_{1}^{ * },{\bar{y}}_{2}^{ * }}\right) \in {Y}_{1}^{ * } \times {Y}_{2}^{ * } \), one has \[ {D}_{L}^{ * }\left( {{F}_{1},{F}_{2}}\right) \left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \left( {{\bar{y}}_{1}^{ * },{\bar{y}}_{2}^{ * }}\right) \subset {D}_{L}^{ * }{F}_{1}\left( {\bar{x},{\bar{y}}_{1}}\right) \left( {\bar{y}}_{1}^{ * }\right) + {D}_{L}^{ * }{F}_{2}\left( {\bar{x},{\bar{y}}_{2}}\right) \left( {\bar{y}}_{2}^{ * }\right) . 
\] (6.23) The same relation holds if \[ \left( {-{D}_{M}^{ * }{F}_{1}\left( {\bar{x},{\bar{y}}_{1}}\right) \left( 0\right) }\right) \cap {D}_{M}^{ * }{F}_{2}\left( {\bar{x},{\bar{y}}_{2}}\right) \left( 0\right) = \{ 0\} \] (6.24) and if \( {F}_{1},{F}_{2} \) are coordinated at \( \left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \), in particular, if either \( {F}_{1} \) or \( {F}_{2} \) is coderivatively compact at \( \left( {\bar{x},{\bar{y}}_{1}}\right) \) or \( \left( {\bar{x},{\bar{y}}_{2}}\right) \) respectively. Proof. Let \( F \mathrel{\text{:=}} \left( {{F}_{1},{F}_{2}}\right) \) and let \( {M}_{1} \) and \( {M}_{2} \) be defined as above, so that \( F = {M}_{1} \cap {M}_{2} \) and one has the relations \[ {D}_{L}^{ * }{F}_{1}\left( {\bar{x},{\bar{y}}_{1}}\right) \left( {\bar{y}}_{1}^{ * }\right) = {D}_{L}^{ * }{M}_{1}\left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \left( {{\bar{y}}_{1}^{ * },0}\right) , \] \[ {D}_{L}^{ * }{F}_{2}\left( {\bar{x},{\bar{y}}_{2}}\right) \left( {\bar{y}}_{2}^{ * }\right) = {D}_{L}^{ * }{M}_{2}\left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \left( {0,{\bar{y}}_{2}^{ * }}\right) , \] and similar ones in which the limiting coderivatives are replaced with mixed coderivatives. Expressing \( {D}_{L}^{ * }{M}_{1}\left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) ▱{D}_{L}^{ * }{M}_{2}\left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \), the first assertion is a consequence of Theorem 6.44. The proof of the second one is similar to the proof of case (f) of Proposition 6.48, observing that here we can dispense with the condition that \( {M}_{1} \) or \( {M}_{2} \) is coderivatively compact at \( \bar{z} \mathrel{\text{:=}} \left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \in {M}_{1} \cap {M}_{2} \) . The details are left as an exercise. ∎ Corollary 6.51 can be generalized to a finite family of multimaps in an obvious way. We just state an application to the case of a map with values in \( {\mathbb{R}}^{k} \) . 
Corollary 6.52. Let \( X \) be an Asplund space and let \( f \mathrel{\text{:=}} \left( {{f}_{1},\ldots ,{f}_{k}}\right) : X \rightarrow {\mathbb{R}}^{k} \) . Suppose \( \left( {\operatorname{epi}{f}_{1},\ldots ,\operatorname{epi}{f}_{k}}\right) \) is cooperative at \( \left( {\bar{x},\bar{y}}\right) \mathrel{\text{:=}} \left( {\bar{x},{\bar{y}}_{1},\ldots ,{\bar{y}}_{k}}\right) \mathrel{\text{:=}} \left( {\bar{x},{f}_{1}\left( \bar{x}\right) ,\ldots ,{f}_{k}\left( \bar{x}\right) }\right) \) . Then for all \( \left( {{\bar{y}}_{1}^{ * },\ldots ,{\bar{y}}_{k}^{ * }}\right) \in {\mathbb{R}}^{k} \) one has \[ {D}_{L}^{ * }f\left( {\bar{x},\bar{y}}\right) \left( {{\bar{y}}_{1}^{ * },\ldots ,{\bar{y}}_{k}^{ * }}\right) \subset {D}_{L}^{ * }{f}_{1}\left( {\bar{x},{\bar{y}}_{1}}\right) \left( {\bar{y}}_{1}^{ * }\right) + \cdots + {D}_{L}^{ * }{f}_{k}\left( {\bar{x},{\bar{y}}_{k}}\right) \left( {\bar{y}}_{k}^{ * }\right) . \] The versatility of set-valued analysis can be experienced through the following statement, whose proof consists in taking inverses. Corollary 6.53. Let \( {X}_{1},{X}_{2}, Y \) be Asplund spaces, let \( {G}_{1} : {X}_{1} \rightrightarrows Y \), \( {G}_{2} : {X}_{2} \rightrightarrows Y \) be multimaps with closed graphs and let \( G : {X}_{1} \times {X}_{2} \rightrightarrows Y \) be defined by \( G\left( {{x}_{1},{x}_{2}}\right) \mathrel{\text{:=}} {G}_{1}\left( {x}_{1}\right) \cap {G}_{2}\left( {x}_{2}\right) \) for \( \left( {{x}_{1},{x}_{2}}\right) \in X \mathrel{\text{:=}} {X}_{1} \times {X}_{2} \) . 
If \( {G}_{1}^{-1} \) and \( {G}_{2}^{-1} \) are cooperative at \( \left( {\bar{y},{\bar{x}}_{1},{\bar{x}}_{2}}\right) \) then for every \( {y}^{ * } \in {Y}^{ * } \) one has \[ {D}_{L}^{ * }G\left( {{\bar{x}}_{1},{\bar{x}}_{2},\bar{y}}\right) \left( {y}^{ * }\right) \subset {D}_{L}^{ * }{G}_{1}\left( {{\bar{x}}_{1},\bar{y}}\right) \left( {y}^{ * }\right) \times {D}_{L}^{ * }{G}_{2}\left( {{\bar{x}}_{2},\bar{y}}\right) \left( {y}^{ * }\right) . \] The same conclusion holds when \( {G}_{1}^{-1} \) and \( {G}_{2}^{-1} \) are coordinated at \( \left( {\bar{y},{\bar{x}}_{1},{\bar{x}}_{2}}\right) \) and \[ \left( {-{D}_{M}^{ * }{G}_{1}^{-1}\left( {\bar{y},{\bar{x}}_{1}}\right) \left( 0\right) }\right) \cap {D}_{M}^{ * }{G}_{2}^{-1}\left( {\bar{y},{\bar{x}}_{2}}\right) \left( 0\right) = \{ 0\} . \] (6.25) Proof. One has \( y \in G\left( {{x}_{1},{x}_{2}}\right) \) if and only if \( \left( {{x}_{1},{x}_{2}}\right) \in {F}_{1}\left( y\right) \times {F}_{2}\left( y\right) \) for \( {F}_{1} \mathrel{\text{:=}} {G}_{1}^{-1} \) , \( {F}_{2} \mathrel{\text{:=}} {G}_{2}^{-1} \) . Thus, the result stems from Corollary 6.51 when coderivatives are rewritten in terms of normal cones (exercise). ## Exercises 1. Show that the multimaps \( {F}^{\prime } : X \rightrightarrows {Y}^{\prime },{F}^{\prime \prime } : X \rightrightarrows {Y}^{\prime \prime } \) are cooperative (resp. coordinated) at \( \left( {\bar{x},{\bar{y}}^{\prime },{\bar{y}}^{\prime \prime }}\right) \) iff for all sequences \( \left( \left( {{x}_{n}^{\prime },{y}_{n}^{\prime }}\right) \right) \rightarrow \left( {\bar{x},{\bar{y}}^{\prime }}\right) \) in \( {F}^{\prime },\left( \left( {{x}_{n}^{\prime \prime },{y}_{n}^{\prime \prime }}\right) \right) \rightarrow \) \( \left( {\bar{x},{\bar{y}}^{\prime \prime }}\right) \) in \( {F}^{\prime \prime },\left( {x}_{n}^{\prime * }\right) ,\left( {x}_{n}^{\prime \prime * }\right) \) in \( {X}^{ * } \) (resp. 
\( \left( {x}_{n}^{\prime * }\right) ,\left( {x}_{n}^{\prime \prime * }\right) \overset{ * }{ \rightarrow }0 \) ), \( \left( {y}_{n}^{\prime * }\right) ,\left( {y}_{n}^{\prime \prime * }\right) \rightarrow 0 \) such that \( \left( {{x}_{n}^{\prime * } + {x}_{n}^{\prime \prime * }}\right) \rightarrow 0 \) and \( {x}_{n}^{\prime * } \in {D}_{F}^{ * }{F}^{\prime }\left( {{x}_{n}^{\prime },{y}_{n}^{\prime }}\right) \left( {y}_{n}^{\prime * }\right) ,{x}_{n}^{\prime \prime * } \in {D}_{F}^{ * }{F}^{\prime \prime }\left( {{x}_{n}^{\prime \prime },{y}_{n}^{\prime \prime }}\right) \left( {y}_{n}^{\prime \prime * }\right) \) for all \( n \), one has \( \left( {x}_{n}^{\prime * }\right) \rightarrow 0 \) (and \( \left( {x}_{n}^{\prime \prime * }\right) \rightarrow 0 \) ). 2. (a) Check that if \( {F}_{1} \) is coderivatively bounded around \( \left( {\bar{x},{\bar{y}}_{1}}\right) \) or if \( {F}_{2} \) is coderivatively bounded around \( \left( {\bar{x},{\bar{y}}_{2}}\right) \), then \( {F}_{1} \) and \( {F}_{2} \) are cooperative at \( \left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \) . (b) Check that if \( {F}_{1} \) is coderivatively compact at \( \left( {\bar{x},{\bar{y}}_{1}}\right) \) (or if \( {F}_{2} \) is coderivatively compact at \( \left. \left( {\bar{x},{\bar{y}}_{2}}\right) \right) \), then \( {F}_{1} \) and \( {F}_{2} \) are coordinated at \( \left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \) . 3. Show that two subsets \( {S}_{1},{S}_{2} \) of a normed space \( X \) are allied (resp. synergetic) at \( \bar{x} \in {S}_{1} \cap {S}_{2} \) if and only if the multimaps \( {F}_{1},{F}_{2} : X \rightrightarrows Y \mathrel{\text{:=}} \{ 0\} \) with graphs \( {S}_{1} \times \{ 0\} \) and \( {S}_{2} \times \{ 0\} \) respectively are cooperative (resp. coordinated) at \( \left( {\bar{x},0,0}\right) \) . 4. 
With the notation of Definition 6.50, prove that \( {F}_{1} \) and \( {F}_{2} \) are cooperative at \( \left( {\bar{x},{\bar{y}}_{1},{\bar{y}}_{2}}\right) \) if and only if \( {M}_{1} \) and \( {M}_{2} \) are source-allied at \( \left( {\bar{x},\left( {{\bar{y}}_{1},{\bar{y}}_{2}}\right) }\right) \) . 5. (Restriction of a multimap) Let \( F : X \rightrightarrows Y \) be a multimap and let \( C \subset X \) . Denote by \( {F}_{C} \) the multimap defined by \( {F}_{C}\left( x\right) \mathrel{\text{:=}} F\left( x\right) \) for \( x \in C,{F}_{C}\left( x\right) = \varnothing \) for \( x \in X \smallsetminus C \) . Check that \( {F}_{C} = F \cap G \) with \( G \mathrel{\text{:=}} C \times Y \) . Describe alliedness of
1129_(GTM35)Several Complex Variables and Banach Algebras
Definition 4.6
Definition 4.6. A \( k \) -form \( {\omega }^{k} \) on \( \Omega \) is a map \( {\omega }^{k} \) assigning to each \( x \) in \( \Omega \) an element of \( { \land }^{k}\left( {T}_{x}\right) \) . \( k \) -forms form a module over the algebra of scalar functions on \( \Omega \) in a natural way. Let \( {\tau }^{k} \) and \( {\sigma }^{l} \) be, respectively, a \( k \) -form and an \( l \) -form. For \( x \in \Omega \), put \[ {\tau }^{k} \land {\sigma }^{l}\left( x\right) = {\tau }^{k}\left( x\right) \land {\sigma }^{l}\left( x\right) \in { \land }^{k + l}\left( {T}_{x}\right) . \] In particular, since \( d{x}_{1},\ldots, d{x}_{N} \) are 1-forms, \[ d{x}_{{i}_{1}} \land d{x}_{{i}_{2}} \land \cdots \land d{x}_{{i}_{k}} \] is a \( k \) -form for each choice of \( \left( {{i}_{1},\ldots ,{i}_{k}}\right) \) . Because of Lemma 4.5, \[ d{x}_{j} \land d{x}_{j} = 0\text{ for each }j. \] Hence \( d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}} = 0 \) unless the \( {i}_{v} \) are distinct. Lemma 4.7. Let \( {\omega }^{k} \) be any \( k \) -form on \( \Omega \) . Then there exist (unique) scalar functions \( {C}_{{i}_{1}\cdots {i}_{k}} \) on \( \Omega \) such that \[ {\omega }^{k} = \mathop{\sum }\limits_{{{i}_{1} < {i}_{2} < \cdots < {i}_{k}}}{C}_{{i}_{1}\cdots {i}_{k}}d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}}. \] Definition 4.7. \( { \land }^{k}\left( \Omega \right) \) consists of all \( k \) -forms \( {\omega }^{k} \) such that the functions \( {C}_{{i}_{1}\cdots {i}_{k}} \) occurring in Lemma 4.7 lie in \( {C}^{\infty } \) . We set \( { \land }^{0}\left( \Omega \right) = {C}^{\infty } \) . Recall now the map \( f \rightarrow {df} \) from \( {C}^{\infty } \rightarrow { \land }^{1}\left( \Omega \right) \) . We wish to extend \( d \) to a linear map \( { \land }^{k}\left( \Omega \right) \rightarrow { \land }^{k + 1}\left( \Omega \right) \), for all \( k \) . Definition 4.8. 
Let \( {\omega }^{k} \in { \land }^{k}\left( \Omega \right), k = 0,1,2,\ldots \) Then \[ {\omega }^{k} = \mathop{\sum }\limits_{{{i}_{1} < \cdots < {i}_{k}}}{C}_{{i}_{1}\cdots {i}_{k}}d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}}. \] Define \[ d{\omega }^{k} = \mathop{\sum }\limits_{{{i}_{1} < \cdots < {i}_{k}}}d{C}_{{i}_{1}\cdots {i}_{k}} \land d{x}_{{i}_{1}} \land \cdots \land d{x}_{{i}_{k}}. \] Note that \( d \) maps \( { \land }^{k}\left( \Omega \right) \rightarrow { \land }^{k + 1}\left( \Omega \right) \) . We call \( d{\omega }^{k} \) the exterior derivative of \( {\omega }^{k} \) . For \( \omega \in { \land }^{1}\left( \Omega \right) \), \[ \omega = \mathop{\sum }\limits_{{i = 1}}^{N}{C}_{i}d{x}_{i} \] \[ {d\omega } = \mathop{\sum }\limits_{{i, j}}\frac{\partial {C}_{i}}{\partial {x}_{j}}d{x}_{j} \land d{x}_{i} = \mathop{\sum }\limits_{{i < j}}\left( {\frac{\partial {C}_{j}}{\partial {x}_{i}} - \frac{\partial {C}_{i}}{\partial {x}_{j}}}\right) d{x}_{i} \land d{x}_{j}. \] It follows that for \( f \in {C}^{\infty } \), \[ d\left( {df}\right) = d\left( {\mathop{\sum }\limits_{{i = 1}}^{N}\frac{\partial f}{\partial {x}_{i}}d{x}_{i}}\right) \] \[ = \mathop{\sum }\limits_{{i < j}}\left( {\frac{\partial }{\partial {x}_{i}}\left( \frac{\partial f}{\partial {x}_{j}}\right) - \frac{\partial }{\partial {x}_{j}}\left( \frac{\partial f}{\partial {x}_{i}}\right) }\right) d{x}_{i} \land d{x}_{j} = 0 \] or \( {d}^{2} = 0 \) on \( {C}^{\infty } \) . More generally, Lemma 4.8. \( {d}^{2} = 0 \) for every \( k \) ; i.e., if \( {\omega }^{k} \in { \land }^{k}\left( \Omega \right) \), \( k \) arbitrary, then \( d\left( {d{\omega }^{k}}\right) = 0 \) . To prove Lemma 4.8, it is useful to prove first Lemma 4.9. Let \( {\omega }^{k} \in { \land }^{k}\left( \Omega \right) ,{\omega }^{l} \in { \land }^{l}\left( \Omega \right) \) . 
Then \[ d\left( {{\omega }^{k} \land {\omega }^{l}}\right) = d{\omega }^{k} \land {\omega }^{l} + {\left( -1\right) }^{k}{\omega }^{k} \land d{\omega }^{l}. \] NOTES For an exposition of the material in this section, see, e.g., I. M. Singer and J. A. Thorpe, Lecture Notes on Elementary Topology and Geometry, Scott, Foresman, Glenview, Ill., 1967, Chap. V. ## 5 The \( \bar{\partial } \) -Operator Note. As in the preceding section, the proofs in this section are left as exercises. Let \( \Omega \) be an open subset of \( {\mathbb{C}}^{n} \) . The complex coordinate functions \( {z}_{1},\ldots ,{z}_{n} \) as well as their conjugates \( {\bar{z}}_{1},\ldots ,{\bar{z}}_{n} \) lie in \( {C}^{\infty }\left( \Omega \right) \) . Hence the forms \[ d{z}_{1},\ldots, d{z}_{n},\;d{\bar{z}}_{1},\ldots, d{\bar{z}}_{n} \] all belong to \( { \land }^{1}\left( \Omega \right) \) . Fix \( x \in \Omega \) . Note that \( { \land }^{1}\left( {T}_{x}\right) = {T}_{x}^{ * } \) has dimension \( {2n} \) over \( \mathbb{C} \), since \( {\mathbb{C}}^{n} = {\mathbb{R}}^{2n} \) . If \( {x}_{j} = \operatorname{Re}\left( {z}_{j}\right) \) and \( {y}_{j} = \operatorname{Im}\left( {z}_{j}\right) \), then \[ {\left( d{x}_{1}\right) }_{x},\ldots ,{\left( d{x}_{n}\right) }_{x},\;{\left( d{y}_{1}\right) }_{x},\ldots ,{\left( d{y}_{n}\right) }_{x} \] form a basis for \( {T}_{x}^{ * } \) . Since \( d{x}_{j} = 1/2\left( {d{z}_{j} + d{\bar{z}}_{j}}\right) \) and \( d{y}_{j} = 1/{2i}\left( {d{z}_{j} - d{\bar{z}}_{j}}\right) \), \[ {\left( d{z}_{1}\right) }_{x},\ldots ,{\left( d{z}_{n}\right) }_{x},\;{\left( d{\bar{z}}_{1}\right) }_{x},\ldots ,{\left( d{\bar{z}}_{n}\right) }_{x} \] also form a basis for \( {T}_{x}^{ * } \) . In fact, Lemma 5.1. If \( \omega \in { \land }^{1}\left( \Omega \right) \), then \[ \omega = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}d{z}_{j} + {b}_{j}d{\bar{z}}_{j} \] where \( {a}_{j},{b}_{j} \in {C}^{\infty } \) . Fix \( f \in {C}^{\infty } \) . 
Since \( \left( {{x}_{1},\ldots ,{x}_{n},{y}_{1},\ldots ,{y}_{n}}\right) \) are real coordinates in \( {\mathbb{C}}^{n} \), \[ {df} = \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial f}{\partial {x}_{j}}d{x}_{j} + \frac{\partial f}{\partial {y}_{j}}d{y}_{j} \] \[ = \mathop{\sum }\limits_{{j = 1}}^{n}\left( {\frac{\partial f}{\partial {x}_{j}} \cdot \frac{1}{2} + \frac{\partial f}{\partial {y}_{j}} \cdot \frac{1}{2i}}\right) d{z}_{j} + \left( {\frac{\partial f}{\partial {x}_{j}} \cdot \frac{1}{2} - \frac{1}{2i}\frac{\partial f}{\partial {y}_{j}}}\right) d{\bar{z}}_{j}. \] Definition 5.1. We define operators on \( {C}^{\infty } \) as follows: \[ \frac{\partial }{\partial {z}_{j}} = \frac{1}{2}\left( {\frac{\partial }{\partial {x}_{j}} - i\frac{\partial }{\partial {y}_{j}}}\right) ,\;\frac{\partial }{\partial {\bar{z}}_{j}} = \frac{1}{2}\left( {\frac{\partial }{\partial {x}_{j}} + i\frac{\partial }{\partial {y}_{j}}}\right) . \] Then for \( f \in {C}^{\infty } \), (1) \[ {df} = \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial f}{\partial {z}_{j}}d{z}_{j} + \frac{\partial f}{\partial {\bar{z}}_{j}}d{\bar{z}}_{j}. \] Definition 5.2. We define two maps \( \partial \) and \( \bar{\partial } \) from \( {C}^{\infty } \rightarrow { \land }^{1}\left( \Omega \right) \) . For \( f \in {C}^{\infty } \), \[ \partial f = \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial f}{\partial {z}_{j}}d{z}_{j},\;\bar{\partial }f = \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial f}{\partial {\bar{z}}_{j}}d{\bar{z}}_{j}. \] Note. \( \partial f + \bar{\partial }f = {df} \), if \( f \in {C}^{\infty } \) . We need some notation. Let \( I \) be any \( r \) -tuple of integers, \( I = \left( {{i}_{1},{i}_{2},\ldots ,{i}_{r}}\right) \), \( 1 \leq {i}_{j} \leq n \), all \( j \) . Put \[ d{z}_{I} = d{z}_{{i}_{1}} \land \cdots \land d{z}_{{i}_{r}} \] Thus \( d{z}_{I} \in { \land }^{r}\left( \Omega \right) \) . 
Let \( J \) be any \( s \) -tuple \( \left( {{j}_{1},\ldots ,{j}_{s}}\right) \), \( 1 \leq {j}_{k} \leq n \), all \( k \), and put \[ d{\bar{z}}_{J} = d{\bar{z}}_{{j}_{1}} \land \cdots \land d{\bar{z}}_{{j}_{s}} \] So \( d{\bar{z}}_{J} \in { \land }^{s}\left( \Omega \right) \) . Then \[ d{z}_{I} \land d{\bar{z}}_{J} \in { \land }^{r + s}\left( \Omega \right) . \] For \( I \) as above, put \( \left| I\right| = r \) . Then \( \left| J\right| = s \) . Definition 5.3. Fix integers \( r, s \geq 0 \) . \( { \land }^{r, s}\left( \Omega \right) \) is the space of all \( \omega \in { \land }^{r + s}\left( \Omega \right) \) such that \[ \omega = \mathop{\sum }\limits_{{I, J}}{a}_{IJ}d{z}_{I} \land d{\bar{z}}_{J}, \] the sum being extended over all \( I, J \) with \( \left| I\right| = r,\left| J\right| = s \), and with each \( {a}_{IJ} \in {C}^{\infty } \) . An element of \( { \land }^{r, s}\left( \Omega \right) \) is called a form of type \( \left( {r, s}\right) \) . We now have a direct sum decomposition of each \( { \land }^{k}\left( \Omega \right) \) : Lemma 5.2. \[ { \land }^{k}\left( \Omega \right) = { \land }^{0, k}\left( \Omega \right) \oplus { \land }^{1, k - 1}\left( \Omega \right) \oplus { \land }^{2, k - 2}\left( \Omega \right) \oplus \cdots \oplus { \land }^{k,0}\left( \Omega \right) . \] We extend the definition of \( \partial \) and \( \bar{\partial } \) (see Definition 5.2) to maps from \( { \land }^{k}\left( \Omega \right) \rightarrow { \land }^{k + 1}\left( \Omega \right) \) for each \( k \), as follows: Definition 5.4. 
Choose \( {\omega }^{k} \) in \( { \land }^{k}\left( \Omega \right) \), \[ {\omega }^{k} = \mathop{\sum }\limits_{{I, J}}{a}_{IJ}d{z}_{I} \land d{\bar{z}}_{J}, \] \[ \partial {\omega }^{k} = \mathop{\sum }\limits_{{I, J}}\partial {a}_{IJ} \land d{z}_{I} \land d{\bar{z}}_{J}, \] and \[ \bar{\partial }{\omega }^{k} = \mathop{\sum }\limits_{{I, J}}\bar{\partial }{a}_{IJ} \land d{z}_{I} \land d{\bar{z}}_{J}. \] Observe that, by (1), if \( {\omega }^{k} \) is as above, \[ \bar{\partial }{\omega }^{k} + \partial {\omega }^{k} = \mathop{\sum }\limits_{{I, J}}d{a}_{IJ} \land d{z}_{I} \land d{\bar{z}}_{J} = d{\omega }^{k}, \] so we have (2) \[ \bar{\partial } + \partial = d \] as operators from \( { \land }^{k}\left( \Omega \right) \rightarrow { \land }^{k + 1}\left( \Omega \right) \)
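The operators of Definition 5.1 can be checked numerically on a holomorphic function: for \( f\left( z\right) = {z}^{2} \) one should find \( \partial f/\partial \bar{z} = 0 \) (the Cauchy-Riemann equations) and \( \partial f/\partial z = {2z} \). The sketch below is our illustration, not from the text; the test function and step size are assumptions, and central differences happen to be exact for this quadratic up to roundoff.

```python
# Numerical Wirtinger derivatives d/dz = (d/dx - i d/dy)/2 and
# d/dzbar = (d/dx + i d/dy)/2, applied to f(z) = z^2 with z = x + iy.

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def d_dz(f, x, y):
    return 0.5 * (d_dx(f, x, y) - 1j * d_dy(f, x, y))

def d_dzbar(f, x, y):
    return 0.5 * (d_dx(f, x, y) + 1j * d_dy(f, x, y))

f = lambda x, y: (x + 1j * y) ** 2   # holomorphic test function

x0, y0 = 0.7, -1.3
z0 = x0 + 1j * y0
assert abs(d_dzbar(f, x0, y0)) < 1e-6        # Cauchy-Riemann: dbar f = 0
assert abs(d_dz(f, x0, y0) - 2 * z0) < 1e-6  # d/dz z^2 = 2z
```

Replacing `f` by a non-holomorphic function such as \( \bar{z} \) makes \( \bar{\partial }f \) nonzero, which is exactly the failure of the Cauchy-Riemann equations.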
1329_[肖梁] Abstract Algebra (2022F)
Definition 8.1.1
Definition 8.1.1. For \( x, y \in G \), define \( \left\lbrack {x, y}\right\rbrack \mathrel{\text{:=}} {x}^{-1}{y}^{-1}{xy} \), the commutator of \( x \) and \( y \) . Let \( {G}^{\text{der }} = {G}^{\prime } \mathrel{\text{:=}} \langle \left\lbrack {x, y}\right\rbrack \mid x, y \in G\rangle \) be the subgroup of \( G \) generated by all commutators; this is called the commutator subgroup or the derived subgroup of \( G \) . Note: It is NOT true that every element of \( {G}^{\prime } \) is a commutator itself. Properties 8.1.2. The commutator of \( x, y \in G \) enjoys the following properties: (1) if \( {xy} = {yx} \), then \( \left\lbrack {x, y}\right\rbrack = 1 \) ; (2) for \( g \in G \), we have \( g\left\lbrack {x, y}\right\rbrack {g}^{-1} = \left\lbrack {{gx}{g}^{-1},{gy}{g}^{-1}}\right\rbrack \) ; (3) \( {G}^{\prime } \) is a normal subgroup of \( G \) and \( G/{G}^{\prime } \) is abelian. Proof. (1) is clear. For (2), we compute directly \[ g\left\lbrack {x, y}\right\rbrack {g}^{-1} = g{x}^{-1}{y}^{-1}{xy}{g}^{-1} = g{x}^{-1}{g}^{-1} \cdot g{y}^{-1}{g}^{-1} \cdot {gx}{g}^{-1} \cdot {gy}{g}^{-1} = \left\lbrack {{gx}{g}^{-1},{gy}{g}^{-1}}\right\rbrack . \] (3) For \( g \in G, g{G}^{\prime }{g}^{-1} \) is generated by elements of the form \( g\left\lbrack {x, y}\right\rbrack {g}^{-1} \), which are the same as \( \left\lbrack {{gx}{g}^{-1},{gy}{g}^{-1}}\right\rbrack \) by (2). So \( g{G}^{\prime }{g}^{-1} = {G}^{\prime } \) . Moreover, for \( x, y \in G \), we have \[ x{G}^{\prime } \cdot y{G}^{\prime } = y{G}^{\prime } \cdot x{G}^{\prime } \Leftrightarrow {x}^{-1}{y}^{-1}{xy}{G}^{\prime } = {G}^{\prime }. \] So \( x{G}^{\prime } \) and \( y{G}^{\prime } \) commute with each other in \( G/{G}^{\prime } \) ; it is abelian. Proposition 8.1.3. If \( A \) is an abelian group and \( \phi : G \rightarrow A \) is a homomorphism, then \( {G}^{\prime } \subseteq \ker \phi \) . 
Moreover, \( \phi \) factors as the composition of (8.1.3.1) \[ G\overset{\pi }{ \rightarrow }G/{G}^{\prime }\overset{\bar{\phi }}{ \rightarrow }A \] \[ g \mapsto g{G}^{\prime } \mapsto \phi \left( g\right) . \] In particular, we have a bijection for every abelian group \( A \) : \[ {\operatorname{Hom}}_{\mathrm{{gp}}}\left( {G, A}\right) \xrightarrow[]{ \cong }{\operatorname{Hom}}_{\mathrm{{gp}}}\left( {G/{G}^{\prime }, A}\right) \] \[ \phi \mapsto \left( {\bar{\phi } : g{G}^{\prime } \mapsto \phi \left( g\right) }\right) . \] In other words, if we want to Hom a group out to an abelian group, it is enough to Hom out from \( G/{G}^{\prime } \) . Proof. Note that for any \( x, y \in G,\phi \left( \left\lbrack {x, y}\right\rbrack \right) = \phi \left( {{x}^{-1}{y}^{-1}{xy}}\right) = \phi {\left( x\right) }^{-1}\phi {\left( y\right) }^{-1}\phi \left( x\right) \phi \left( y\right) = 1 \) because \( A \) is abelian. So \( {G}^{\prime } \subseteq \ker \phi \) . It follows that \( \phi \) must factor through \( G/{G}^{\prime } \) as shown in (8.1.3.1). In particular this gives the map \( {\operatorname{Hom}}_{\mathrm{{gp}}}\left( {G, A}\right) \rightarrow {\operatorname{Hom}}_{\mathrm{{gp}}}\left( {G/{G}^{\prime }, A}\right) \) . The inverse map is easier: it sends a homomorphism \( \psi : G/{G}^{\prime } \rightarrow A \) to the composition \[ G\overset{\pi }{ \rightarrow }G/{G}^{\prime }\overset{\psi }{ \rightarrow }A. \] Example 8.1.4. For \( G = {D}_{2n} = \left\langle {r, s \mid {r}^{n} = {s}^{2} = 1,{srs} = {r}^{-1}}\right\rangle \), compute all homomorphisms \( {\operatorname{Hom}}_{\mathrm{{gp}}}\left( {G,{\mathbb{C}}^{ \times }}\right) \) . First note that \( {G}^{\prime } \) contains \( {sr}{s}^{-1}{r}^{-1} = {r}^{-2} \) . We separate two cases. - If \( n \) is odd, then \( \langle r\rangle = \left\langle {r}^{-2}\right\rangle \subseteq {G}^{\prime } \) . We claim that \( {G}^{\prime } = \langle r\rangle \) . 
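Before giving the slick argument, the claim (and its even-\( n \) counterpart) can be verified by brute force for small \( n \), realizing \( {D}_{2n} \) as permutations of the \( n \) vertices of a regular \( n \)-gon. A sketch in Python; the permutation representation and helper names are choices made here, not part of the text:

```python
def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations stored as tuples of images."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def generate(gens):
    """Subgroup generated by gens, computed by closure under composition."""
    elems = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        elems |= frontier
        frontier = {compose(a, b) for a in elems for b in elems} - elems
    return elems

def derived_subgroup(G):
    """G' = subgroup generated by all commutators [x, y] = x^-1 y^-1 x y."""
    comms = {compose(compose(inverse(x), inverse(y)), compose(x, y))
             for x in G for y in G}
    return generate(sorted(comms))

for n in (3, 4, 5, 6, 7):
    r = tuple((i + 1) % n for i in range(n))   # rotation
    s = tuple((-i) % n for i in range(n))      # reflection, s r s = r^-1
    G = generate((r, s))                       # D_{2n}, order 2n
    expected = generate((r,)) if n % 2 else generate((compose(r, r),))
    assert derived_subgroup(G) == expected
```

For odd \( n \) the computed \( {G}^{\prime } \) equals \( \langle r\rangle \), and for even \( n \) it equals \( \left\langle {r}^{2}\right\rangle \), matching the two cases of the example.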
Instead of checking every pair of elements, we use the following argument: construct a map \[ \psi : G \rightarrow \{ \pm 1\} \] \[ {r}^{i} \mapsto 1 \] \[ s{r}^{i} \mapsto - 1\text{.} \] One checks that \( \psi {\left( r\right) }^{n} = \psi {\left( s\right) }^{2} = 1 \) and \( \psi \left( s\right) \psi \left( r\right) \psi \left( s\right) = \psi {\left( r\right) }^{-1} \) ; so \( \psi \) is a well-defined homomorphism. By Proposition 8.1.3 above, we have \( {G}^{\prime } \subseteq \ker \psi = \langle r\rangle \) . So \( {G}^{\prime } = \langle r\rangle \) and \( G/{G}^{\prime } \cong \{ \pm 1\} \) . - If \( n \) is even, then \( \left\langle {r}^{-2}\right\rangle = \left\langle {r}^{2}\right\rangle \subseteq {G}^{\prime } \) . Similarly to the above, we define \[ \psi : G \rightarrow \{ \pm 1\} \times \{ \pm 1\} \] \[ {r}^{i} \mapsto \left( {{\left( -1\right) }^{i},1}\right) \] \[ s \mapsto \left( {1, - 1}\right) \text{.} \] Once again, we check the relations \( \psi {\left( r\right) }^{n} = \left( {{\left( -1\right) }^{n},1}\right) = \left( {1,1}\right) \) as \( n \) is even, \( \psi {\left( s\right) }^{2} = \left( {1,1}\right) \), and \( \psi \left( s\right) \psi \left( r\right) \psi \left( s\right) = \left( {-1,1}\right) = \psi {\left( r\right) }^{-1} \) . So \( \psi \) is a well-defined homomorphism. Now Proposition 8.1.3 implies that \( {G}^{\prime } \subseteq \ker \psi = \left\langle {r}^{2}\right\rangle \) . This implies that \( {G}^{\prime } = \left\langle {r}^{2}\right\rangle \) and \( G/{G}^{\prime } \cong \{ \pm 1\} \times \{ \pm 1\} \) . Now we apply Proposition 8.1.3 to get: - when \( n \) is odd, \[ {\operatorname{Hom}}_{\mathrm{{gp}}}\left( {{D}_{2n},{\mathbb{C}}^{ \times }}\right) \cong {\operatorname{Hom}}_{\mathrm{{gp}}}\left( {\{ \pm 1\} ,{\mathbb{C}}^{ \times }}\right) \cong \{ \operatorname{triv},\psi \} . \] There are two such homomorphisms: the trivial one, and the \( \psi \) above, given by sending \( r \) to 1 and \( s \) to \( -1 \) . 
- when \( n \) is even, the bijection \[ {\operatorname{Hom}}_{\mathrm{{gp}}}\left( {G,{\mathbb{C}}^{ \times }}\right) \xrightarrow[]{ \cong }{\operatorname{Hom}}_{\mathrm{{gp}}}\left( {\{ \pm 1\} \times \{ \pm 1\} ,{\mathbb{C}}^{ \times }}\right) \] matches \( \psi : G \rightarrow {\mathbb{C}}^{ \times } \) with \( \psi \left( r\right) = \lambda \) and \( \psi \left( s\right) = \mu \) to \( \bar{\psi } \) with \( \bar{\psi }\left( {-1,1}\right) = \lambda \) and \( \bar{\psi }\left( {1, - 1}\right) = \mu \), where \( \lambda ,\mu \in \{ \pm 1\} \subset {\mathbb{C}}^{ \times } \) . So there are four such homomorphisms. 8.2. Solvable groups. We recall from Definition 3.4.4 that a group \( G \) is called solvable if there exists a chain of subgroups \( 1 = {G}_{0} \leq {G}_{1} \leq \cdots \leq {G}_{r} = G \) such that \( {G}_{i - 1} \trianglelefteq {G}_{i} \) and \( {G}_{i}/{G}_{i - 1} \) is abelian, for every \( i = 1,\ldots, r \) . (When \( G \) is finite, this is equivalent to the existence of such a chain of subgroups in which each \( {G}_{i}/{G}_{i - 1} \) is isomorphic to \( {\mathbf{Z}}_{{p}_{i}} \) for some prime number \( {p}_{i} \) .) In particular, all abelian groups are solvable. A good way to test solvability is through the following. Definition 8.2.1. For any group \( G \), define the following sequence of subgroups inductively: \[ {G}^{\left( 0\right) } = G,\;{G}^{\left( 1\right) } = \left\lbrack {G, G}\right\rbrack ,\;{G}^{\left( i + 1\right) } = \left\lbrack {{G}^{\left( i\right) },{G}^{\left( i\right) }}\right\rbrack \text{ for any }i. \] This is called the derived or commutator series of \( G \) . A typical and instructive example of a commutator series is the following. Example 8.2.2. 
Consider \[ G = \left( \begin{matrix} {\mathbb{C}}^{ \times } & \mathbb{C} & \mathbb{C} \\ 0 & {\mathbb{C}}^{ \times } & \mathbb{C} \\ 0 & 0 & {\mathbb{C}}^{ \times } \end{matrix}\right) \supseteq {G}^{\left( 1\right) } = \left( \begin{matrix} 1 & \mathbb{C} & \mathbb{C} \\ 0 & 1 & \mathbb{C} \\ 0 & 0 & 1 \end{matrix}\right) \supseteq {G}^{\left( 2\right) } = \left( \begin{matrix} 1 & 0 & \mathbb{C} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \supseteq {G}^{\left( 3\right) } = \left\{ {I}_{3}\right\} . \] Proposition 8.2.3. A group \( G \) is solvable if and only if \( {G}^{\left( n\right) } = \{ 1\} \) for some finite \( n \in \mathbb{N} \) . Proof. " \( \Leftarrow \) " Note that each \( {G}^{\left( i\right) } \) is a normal subgroup of \( {G}^{\left( i - 1\right) } \) and the quotient \( {G}^{\left( i - 1\right) }/{G}^{\left( i\right) } \) is abelian. So \[ \{ 1\} = {G}^{\left( n\right) } \trianglelefteq {G}^{\left( n - 1\right) } \trianglelefteq \cdots \trianglelefteq {G}^{\left( 1\right) } \trianglelefteq G \] is the required chain of subgroups with abelian subquotients. " \( \Rightarrow \) " As \( G \) is solvable, there exists a chain of subgroups \[ \{ 1\} = {H}_{0} \leq {H}_{1} \leq {H}_{2} \leq \cdots \leq {H}_{r} = G \] such that \( {H}_{i - 1} \trianglelefteq {H}_{i} \) and \( {H}_{i}/{H}_{i - 1} \) is abelian. It follows that \( \left\lbrack {{H}_{i},{H}_{i}}\right\rbrack \subseteq {H}_{i - 1} \) . This implies that \[ {G}^{\left( 1\right) } = \left\lbrack {G, G}\right\rbrack \subseteq {H}_{r - 1}, \] \[ {G}^{\left( 2\right) } = \left\lbrack {{G}^{\left( 1\right) },{G}^{\left( 1\right) }}\right\rbrack \subseteq \left\lbrack {{H}_{r - 1},{H}_{r - 1}}\right\rbrack \subseteq {H}_{r - 2}, \] and in general \( {G}^{\left( i\right) } \subseteq {H}_{r - i} \) . Eventually, we obtain \( {G}^{\left( r\right) } \subseteq {H}_{0} = \{ 1\} \) . Remark 8.2.4. 
(1) The derived series is the "fastest-decreasing" series whose subquotients are abelian. (2) The smallest \( n \in {\mathbb{Z}}_{ \geq 0} \) for which \( {G}^{\left( n\right) } = \{ 1\} \) is called the solvable length of \( G \) . Lemma 8.2.5. Each \( {G}^{\left( i\right) } \) is a normal subgroup of \( G \) ; in fact, each is a characteristic subgroup of \( G \) . Proof. Recall that
1172_(GTM8)Axiomatic Set Theory
Definition 13.1
Definition 13.1. Let \( \mathbf{B} = \langle B, + , \cdot , - ,\mathbf{0},\mathbf{1}\rangle \) be a complete Boolean algebra. Then \( {V}_{\alpha }^{\left( \mathbf{B}\right) } \) is defined by recursion with respect to \( \alpha \) as follows: \[ {V}_{0}^{\left( \mathbf{B}\right) } \triangleq 0, \] \[ {V}_{\alpha }^{\left( \mathbf{B}\right) } \triangleq \left\{ {u \mid \left\lbrack {u : \mathcal{D}\left( u\right) \rightarrow B}\right\rbrack \land \left( {\exists \xi < \alpha }\right) \left\lbrack {\mathcal{D}\left( u\right) \subseteq {V}_{\xi }^{\left( \mathbf{B}\right) }}\right\rbrack }\right\} ,\;\alpha > 0, \] \[ {V}^{\left( \mathbf{B}\right) } \triangleq \mathop{\bigcup }\limits_{{\alpha \in {On}}}{V}_{\alpha }^{\left( \mathbf{B}\right) }. \] Remark. Elements of \( {V}^{\left( \mathbf{B}\right) } \) are called \( \mathbf{B} \) -valued sets; these are functions \( u \) from their domain, \( \mathcal{D}\left( u\right) \), into \( B \), where \( \mathcal{D}\left( u\right) \) itself consists of \( \mathbf{B} \) -valued sets. Theorem 13.2. \( \alpha \in {K}_{\mathrm{{II}}} \rightarrow {V}_{\alpha }^{\left( \mathbf{B}\right) } = \mathop{\bigcup }\limits_{{\xi < \alpha }}{V}_{\xi }^{\left( \mathbf{B}\right) } \) . Remark. In order to obtain a \( \mathbf{B} \) -valued structure \( {\mathbf{V}}^{\left( \mathbf{B}\right) } = \left\langle {{V}^{\left( \mathbf{B}\right) },\bar{ = },\bar{ \in }}\right\rangle \), we define \( \bar{ = } \) and \( \bar{ \in } \) in the following way. Definition 13.3. For \( u, v \in {V}^{\left( \mathbf{B}\right) } \) , 1. \( \llbracket u\bar{ \in }v\rrbracket \triangleq \mathop{\sum }\limits_{{y \in \mathcal{D}\left( v\right) }}\left( {v\left( y\right) \cdot \llbracket u\bar{ = }y\rrbracket }\right) \) . 2. 
\( \llbracket u\bar{ = }v\rrbracket \triangleq \mathop{\prod }\limits_{{x \in \mathcal{D}\left( u\right) }}\left\lbrack {u\left( x\right) \Rightarrow \llbracket x\bar{ \in }v\rrbracket }\right\rbrack \cdot \mathop{\prod }\limits_{{y \in \mathcal{D}\left( v\right) }}\left\lbrack {v\left( y\right) \Rightarrow \llbracket y\bar{ \in }u\rrbracket }\right\rbrack \) . Remark. Thus \( \bar{ = } \) and \( \bar{ \in } \) are defined simultaneously by recursion. Hereafter we will write \( = \) and \( \in \) for \( \bar{ = } \) and \( \bar{ \in } \) respectively. There are several ways to check that 1 and 2 really constitute a definition by recursion: 1. The definition of \( \llbracket u \in v\rrbracket \) and \( \llbracket u = v\rrbracket \) is recursive with respect to the well-founded relation \[ \left\{ {\left\langle {\left\langle {u, v}\right\rangle ,\left\langle {{u}^{\prime },{v}^{\prime }}\right\rangle }\right\rangle \mid \operatorname{rank}\left( u\right) \sharp \operatorname{rank}\left( v\right) < \operatorname{rank}\left( {u}^{\prime }\right) \sharp \operatorname{rank}\left( {v}^{\prime }\right) }\right\} \] where \( \alpha \sharp \beta \) is the natural sum of \( \alpha \) and \( \beta \) . (For a definition and elementary properties of the natural sum of ordinals the reader may consult one of the following monographs: H. Bachmann: Transfinite Zahlen, Ergebnisse der Math. Vol. 1 (1955), pp. 102f, or A. A. Fraenkel: Abstract Set Theory (1953), p. 297.) 2. Alternatively, we could use Gödel’s pairing function \( {J}_{0} \), which is a one-to-one correspondence between \( {On} \times {On} \) and \( {On} \) with the following property. 
\[ {J}_{0}\left( {\alpha ,\beta }\right) < {J}_{0}\left( {{\alpha }^{\prime },{\beta }^{\prime }}\right) \leftrightarrow \max \left( {\alpha ,\beta }\right) < \max \left( {{\alpha }^{\prime },{\beta }^{\prime }}\right) \] \[ \vee \left\lbrack {\max \left( {\alpha ,\beta }\right) = \max \left( {{\alpha }^{\prime },{\beta }^{\prime }}\right) \land \left\lbrack {\beta < {\beta }^{\prime } \vee \left\lbrack {\beta = {\beta }^{\prime } \land \alpha < {\alpha }^{\prime }}\right\rbrack }\right\rbrack }\right\rbrack . \] If we assign to \( \llbracket u \in v\rrbracket \) the ordinal \( {J}_{0}\left( {\operatorname{rank}\left( v\right) ,\operatorname{rank}\left( u\right) }\right) \) and to \( \llbracket u = v\rrbracket \) the ordinal \( \max \left( {{J}_{0}\left( {\operatorname{rank}\left( u\right) ,\operatorname{rank}\left( v\right) }\right) ,{J}_{0}\left( {\operatorname{rank}\left( v\right) ,\operatorname{rank}\left( u\right) }\right) }\right) \), it is easy to see that \( \llbracket u \in v\rrbracket \) and \( \llbracket u = v\rrbracket \) in 1 and 2 respectively are reduced to values \( \llbracket {u}^{\prime } = {v}^{\prime }\rrbracket \) and \( \llbracket {u}^{\prime } \in {v}^{\prime }\rrbracket \) in such a way that the associated ordinals are reduced to lower ordinals. 3. 
We could also eliminate \( \in \) in 2 by substituting the definition 1 : \[ \llbracket u = v\rrbracket = \mathop{\prod }\limits_{{x \in \mathcal{D}\left( u\right) }}\left\lbrack {u\left( x\right) \Rightarrow \mathop{\sum }\limits_{{y \in \mathcal{D}\left( v\right) }}\left\lbrack {v\left( y\right) \cdot \llbracket x = y\rrbracket }\right\rbrack }\right\rbrack \] \[ \cdot \mathop{\prod }\limits_{{y \in \mathcal{D}\left( v\right) }}\left\lbrack {v\left( y\right) \Rightarrow \mathop{\sum }\limits_{{x \in \mathcal{D}\left( u\right) }}\left\lbrack {u\left( x\right) \cdot \llbracket y = x\rrbracket }\right\rbrack }\right\rbrack , \] which is a definition by recursion with respect to the well-founded relation \( \left\{ {\left\langle {\left\langle {u, v}\right\rangle ,\left\langle {{u}^{\prime },{v}^{\prime }}\right\rangle }\right\rangle \mid \max \left( {\operatorname{rank}\left( u\right) ,\operatorname{rank}\left( v\right) }\right) < \max \left( {\operatorname{rank}\left( {u}^{\prime }\right) ,\operatorname{rank}\left( {v}^{\prime }\right) }\right) }\right\} \) . Then 1 becomes an explicit definition in terms of \( = \) . Next we prove that the Axioms of Equality hold in \( {\mathbf{V}}^{\left( \mathbf{B}\right) } \) (see Definition 6.5). Theorem 13.4. For \( u, v \in {V}^{\left( \mathbf{B}\right) } \) , 1. \( \llbracket u = v\rrbracket = \llbracket v = u\rrbracket \) . 2. \( \llbracket u = u\rrbracket = 1 \) . 3. \( x \in \mathcal{D}\left( u\right) \rightarrow u\left( x\right) \leq \llbracket x \in u\rrbracket \) . Proof. 1. The definition of \( \llbracket u = v\rrbracket \) is symmetric in \( u \) and \( v \) . 2 and 3 are proved by induction on \( \operatorname{rank}\left( u\right) \) . Let \( x \in \mathcal{D}\left( u\right) \) . 
Then \[ \llbracket x \in u\rrbracket = \mathop{\sum }\limits_{{y \in \mathcal{D}\left( u\right) }}u\left( y\right) \cdot \llbracket x = y\rrbracket , \] hence, for \( x \in \mathcal{D}\left( u\right) \), \[ u\left( x\right) \cdot \llbracket x = x\rrbracket \leq \llbracket x \in u\rrbracket , \] so \[ u\left( x\right) \leq \llbracket x \in u\rrbracket \] by the induction hypothesis \( \llbracket x = x\rrbracket = 1 \) . Therefore \[ \left( {\forall x \in \mathcal{D}\left( u\right) }\right) \left\lbrack {\left\lbrack {u\left( x\right) \Rightarrow \llbracket x \in u\rrbracket }\right\rbrack = 1}\right\rbrack \] and \[ \llbracket u = u\rrbracket = \mathop{\prod }\limits_{{x \in \mathcal{D}\left( u\right) }}\left\lbrack {u\left( x\right) \Rightarrow \llbracket x \in u\rrbracket }\right\rbrack = 1. \] Theorem 13.5. Let \( u,{u}^{\prime }, v,{v}^{\prime }, w \in {V}^{\left( \mathbf{B}\right) } \) . Then for each \( \alpha \) 1. \( \left\lbrack {\operatorname{rank}\left( u\right) < \alpha }\right\rbrack \land \left\lbrack {\operatorname{rank}\left( {u}^{\prime }\right) < \alpha }\right\rbrack \land \left\lbrack {\operatorname{rank}\left( v\right) \leq \alpha }\right\rbrack \) \[ \rightarrow \llbracket u = {u}^{\prime }\rrbracket \cdot \llbracket u \in v\rrbracket \leq \llbracket {u}^{\prime } \in v\rrbracket . \] 2. \( \left\lbrack {\operatorname{rank}\left( u\right) < \alpha }\right\rbrack \land \left\lbrack {\operatorname{rank}\left( v\right) \leq \alpha }\right\rbrack \land \left\lbrack {\operatorname{rank}\left( {v}^{\prime }\right) \leq \alpha }\right\rbrack \) \[ \rightarrow \llbracket u \in v\rrbracket \cdot \llbracket v = {v}^{\prime }\rrbracket \leq \llbracket u \in {v}^{\prime }\rrbracket . \] 3. 
\( \left\lbrack {\operatorname{rank}\left( u\right) \leq \alpha }\right\rbrack \land \left\lbrack {\operatorname{rank}\left( v\right) \leq \alpha }\right\rbrack \land \left\lbrack {\operatorname{rank}\left( w\right) \leq \alpha }\right\rbrack \) \[ \rightarrow \llbracket u = v\rrbracket \cdot \llbracket v = w\rrbracket \leq \llbracket u = w\rrbracket . \] Proof. (By induction on \( \alpha \) .) 1. If \( \operatorname{rank}\left( u\right) < \alpha ,\operatorname{rank}\left( {u}^{\prime }\right) < \alpha \), and \( \operatorname{rank}\left( v\right) \leq \alpha \), then \[ \llbracket u = {u}^{\prime }\rrbracket \cdot \llbracket u \in v\rrbracket = \mathop{\sum }\limits_{{y \in \mathcal{D}\left( v\right) }}v\left( y\right) \cdot \llbracket y = u\rrbracket \cdot \llbracket u = {u}^{\prime }\rrbracket \] \[ \leq \mathop{\sum }\limits_{{y \in \mathcal{D}\left( v\right) }}v\left( y\right) \cdot \llbracket y = {u}^{\prime }\rrbracket \;\text{ by the induction hypothesis for }3 \] \[ = \llbracket {u}^{\prime } \in v\rrbracket . \] 2. If \( \operatorname{rank}\left( u\right) < \alpha ,\operatorname{rank}\left( v\right) \leq \alpha \), and \( \operatorname{rank}\left( {v}^{\prime }\right) \leq \alpha \), then for \( y \in \mathcal{D}\left( v\right) \) \[ \llbracket u = y\rrbracket \cdot v\left( y\right) \cdot \llbracket v = {v}^{\prime }\rrbracket \leq \llbracket u = y\rrbracket \cdot v\left( y\right) \cdot \left( {v\left( y\right) \Rightarrow \llbracket y \in {v}^{\prime }\rrbracket }\right) \] \[ \leq \llbracket u = y\rrbracket \cdot \llbracket y \in {v}^{\prime }\rrbracket \] \[ \leq \llbracket u \in {v}^{\prime }\rrbracket \;\text{ by 1 .} \] Therefore taking the sup over all \( y \in \mathcal{D}\left( v\right) \) \[ \llbracket u \in v\rrbracket \cdot
109_The rising sea Foundations of Algebraic Geometry
Definition 7.2
Definition 7.2. A family \( {\left( {X}_{\alpha }\right) }_{\alpha \in \Phi } \) of subgroups of \( {\operatorname{Aut}}_{0}\Delta \) is called a system of pre-root groups if it satisfies the following three conditions: (1) \( {X}_{\alpha } \) fixes \( \alpha \) pointwise for each \( \alpha \in \Phi \) . (2) For each \( \alpha \in \Phi \) and each panel \( P \in \partial \alpha \), the action of \( {X}_{\alpha } \) on \( \mathcal{C}\left( {P,\alpha }\right) \) is transitive. (3) For each \( \alpha \in \Phi \) there is an element \( {n}_{\alpha } \) in the subgroup \( \left\langle {{X}_{\alpha },{X}_{-\alpha }}\right\rangle \) such that \( {n}_{\alpha }\left( \alpha \right) = - \alpha \) and \( {n}_{\alpha }\left( {-\alpha }\right) = \alpha \) . In other words, \( {n}_{\alpha } \) stabilizes \( \Sigma \) and acts on \( \Sigma \) as the reflection \( {s}_{\alpha } \) . We say that \( \Delta \) is a pre-Moufang building if it admits a system of pre-root groups. Remark 7.3. Pre-Moufang buildings have some (but not all) of the features of Moufang buildings, which will be defined in Section 7.3 along with root groups. The root groups in a (spherical) Moufang building satisfy conditions (1)-(3) above; this explains the terminology "pre-root groups." Conditions (2) and (3) take a little time to digest. We begin by reformulating (2) in the spherical case. For any root \( \alpha \) of \( \Delta \), we denote by \( \mathcal{A}\left( \alpha \right) \) the set of apartments of \( \Delta \) containing \( \alpha \) . Recall from Lemma 4.118 that for any panel \( P \in \partial \alpha \), there is a canonical bijection between \( \mathcal{C}\left( {P,\alpha }\right) \) and \( \mathcal{A}\left( \alpha \right) \) . This leads immediately to the following result: Lemma 7.4. Assume that \( \Delta \) is spherical and that \( {\left( {X}_{\alpha }\right) }_{\alpha \in \Phi } \) is a system of subgroups of \( {\operatorname{Aut}}_{0}\Delta \) satisfying condition (1) of Definition 7.2. 
Then the following conditions are equivalent: (i) The system \( {\left( {X}_{\alpha }\right) }_{\alpha \in \Phi } \) satisfies condition (2). (ii) For each \( \alpha \in \Phi \) there exists a panel \( P \in \partial \alpha \) such that the action of \( {X}_{\alpha } \) on \( \mathcal{C}\left( {P,\alpha }\right) \) is transitive. (iii) For each \( \alpha \in \Phi \), the action of \( {X}_{\alpha } \) on \( \mathcal{A}\left( \alpha \right) \) is transitive. Proof. Let \( \alpha \) be a root and let \( P \) be a panel in \( \partial \alpha \) . Then Lemma 4.118 implies that the action of \( {X}_{\alpha } \) is transitive on \( \mathcal{C}\left( {P,\alpha }\right) \) if and only if it is transitive on \( \mathcal{A}\left( \alpha \right) \) . Thus (ii) \( \Rightarrow \) (iii) \( \Rightarrow \) (i), and the implication (i) \( \Rightarrow \) (ii) is trivial. Next, we attempt to demystify condition (3) in Definition 7.2 by showing that in the spherical case, it actually follows from (1) and (2). Moreover, we will see that it has some geometric content. Our standing assumption that \( \Delta \) is thick is crucial here. Lemma 7.5. If \( \Delta \) is spherical and \( {\left( {X}_{\alpha }\right) }_{\alpha \in \Phi } \) is a system of subgroups of \( {\operatorname{Aut}}_{0}\Delta \) satisfying (1) and (2), then it also satisfies (3). Proof. Choose a panel \( P \in \partial \alpha \), and let \( C \) and \( D \) be the chambers of \( \Sigma \) having \( P \) as a face, with \( C \in \alpha \) and \( D \in - \alpha \) . By thickness and condition (2), we can find an element \( x \in {X}_{\alpha } \) such that \( {xD} \neq D \) ; see Figure 7.1. We ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_393_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_393_0.jpg) Fig. 7.1. Constructing \( {n}_{\alpha } \) ; step 1. 
claim now that there are elements \( {x}^{\prime },{x}^{\prime \prime } \in {X}_{-\alpha } \) such that the composite \( m\left( x\right) \mathrel{\text{:=}} {x}^{\prime }x{x}^{\prime \prime } \) interchanges \( C \) and \( D \) . Indeed, it suffices to choose, by (2), an element \( {x}^{\prime \prime } \in {X}_{-\alpha } \) such that \( {x}^{\prime \prime }C = {x}^{-1}D \) [so that \( x{x}^{\prime \prime }C = D \) ] and an element \( {x}^{\prime } \in {X}_{-\alpha } \) such that \( {x}^{\prime }\left( {xD}\right) = C \) . The claim now follows from the fact that \( {x}^{\prime } \) and \( {x}^{\prime \prime } \) fix \( D \) . See Figure 7.2. ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_393_1.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_393_1.jpg) Fig. 7.2. Constructing \( {n}_{\alpha } \) ; step 2. Note that we have not yet used the assumption that \( \Delta \) is spherical, and indeed, without that assumption there is no reason to expect \( m\left( x\right) \) to interchange \( \pm \alpha \) just because it interchanges \( C \) and \( D \) . In the presence of sphericity, however, we know that \( \alpha \) is the convex hull of \( C \) and \( \partial \alpha \) (see Example 3.133(d)), and similarly \( - \alpha \) is the convex hull of \( D \) and \( \partial \alpha \) . Since \( m\left( x\right) \) fixes \( \partial \alpha \), it must therefore interchange \( \pm \alpha \), so it is the desired element \( {n}_{\alpha } \in \left\langle {{X}_{\alpha },{X}_{-\alpha }}\right\rangle \) . For future reference, we record the following corollary of the proof: Corollary 7.6. Under the hypotheses of Lemma 7.5, let \( \alpha \) be a root of \( \Sigma \), let \( D \in - \alpha \) be a chamber with a panel in \( \partial \alpha \), and let \( x \in {X}_{\alpha } \) be an element such that \( {xD} \neq D \) . Then there is an element \( m\left( x\right) \in {X}_{-\alpha }x{X}_{-\alpha } \) such that \( m\left( x\right) \) interchanges \( \pm \alpha \) . Remark 7.7. 
It often happens that the action of \( {X}_{\alpha } \) on \( \mathcal{C}\left( {P,\alpha }\right) \) not only is transitive, but is in fact simply transitive for each \( \alpha \) . In this case the elements \( {x}^{\prime },{x}^{\prime \prime } \) in the proof of the lemma are uniquely determined by \( x \) ; hence so is \( {x}^{\prime }x{x}^{\prime \prime } \) . We used the notation \( m\left( x\right) \) in order to emphasize this dependence on \( x \), which will be important later. With Lemmas 7.4 and 7.5 at our disposal, we can now show that systems of pre-root groups are extremely common: Proposition 7.8. If \( \Delta \) is spherical and \( G \) is a strongly transitive group of type-preserving automorphisms of \( \Delta \), then \( G \) contains a system of pre-root groups. Proof. For each \( \alpha \in \Phi \), let \( {X}_{\alpha } \) be the pointwise fixer of \( \alpha \) in \( G \), i.e., \[ {X}_{\alpha } = {\operatorname{Fix}}_{G}\left( \alpha \right) \mathrel{\text{:=}} \{ g \in G \mid {gA} = A\text{ for all }A \in \alpha \} . \] Then \( {X}_{\alpha } \) acts transitively on \( \mathcal{A}\left( \alpha \right) \) by Corollary 6.7, so the proposition follows from Lemmas 7.4 and 7.5. The main goal of the rest of the section is to prove a converse of Proposition 7.8, i.e., to construct a strongly transitive action from a system of pre-root groups. This works in general; \( \Delta \) does not have to be spherical. The following lemma is the crucial step. Assume that we have a system \( {\left( {X}_{\alpha }\right) }_{\alpha \in \Phi } \) of pre-root groups in \( {\operatorname{Aut}}_{0}\Delta \), and set \[ U \mathrel{\text{:=}} \left\langle {{X}_{\alpha } \mid \alpha \in {\Phi }_{ + }}\right\rangle \] (7.1) Lemma 7.9. \( \Delta = \mathop{\bigcup }\limits_{{u \in U}}{u\Sigma } \) . Proof. It suffices to show that the union on the right contains every chamber \( C \in \mathcal{C}\left( \Delta \right) \) . 
We argue by induction on \( d\left( {C,\mathcal{C}\left( \Sigma \right) }\right) \), which may be assumed \( > 0 \) . Choose a gallery \( {D}_{0},\ldots ,{D}_{l} = C \) of minimal length \( l \mathrel{\text{:=}} d\left( {C,\mathcal{C}\left( \Sigma \right) }\right) \) with \( {D}_{0} \in \Sigma \) . Let \( P \mathrel{\text{:=}} {D}_{0} \cap {D}_{1} \), let \( {D}_{0}^{\prime } \) be the chamber of \( \Sigma \) adjacent to \( {D}_{0} \) along \( P \), and let \( \alpha \) be the root of \( \Sigma \) containing \( {D}_{0} \) but not \( {D}_{0}^{\prime } \) ; see Figure 7.3. If \( \alpha \) is a positive root, choose \( x \in {X}_{\alpha } \) such that \( x{D}_{1} = {D}_{0}^{\prime } \) . Otherwise choose \( x \in {X}_{-\alpha } \) such that \( x{D}_{1} = {D}_{0} \) . In either case we have \( x \in U \) with \( x{D}_{1} \in \Sigma \) , and the gallery \( x{D}_{1},\ldots, x{D}_{l} = {xC} \) shows that \( d\left( {{xC},\mathcal{C}\left( \Sigma \right) }\right) < l \) . So we may apply the induction hypothesis to find \( {u}^{\prime } \in U \) with \( {u}^{\prime }{xC} \in \Sigma \) ; hence \( C \) is in \( \mathop{\bigcup }\limits_{{u \in U}}{u\Sigma } \) . ![85b011f4-34bf-48b4-8882-cd79e6f4beb0_395_0.jpg](images/85b011f4-34bf-48b4-8882-cd79e6f4beb0_395_0.jpg) Fig. 7.3. A gallery leaving \( \Sigma \) . We can now prove the main result of this section. Suppose we have a subgroup \( G \leq {\operatorname{Aut}}_{0}\Delta \) containing a system of pre-root groups \( {\left( {X}_{\alpha }\right) }_{\alpha \in \Phi } \) . Let \( B \) be the stabilizer in \( G \) of the fundamental chamber \( {C}_{0} \), and set \[ N \mathrel{\text{:=}} \left\langle {{n}_{\alpha } \mid \alpha \in \Phi }\right\rangle \] for some choice of elements \( {n}_{\alpha } \) as in Definition 7.2. Theorem 7.10. (1) The action of \( G \) on \( \Delta \) is strongly transitive with respect to the apartment system \( {G\Sigma } \mathrel{\text{:=}} \{ {g\Sigma } \mid g \in G\} \) . 
(2) The subgroups \( B \) and \( N \) defined above form a BN-pair in \( G \), and \( \Delta \cong \Delta \left( {G, B}\right) \) . Proof. Note that \( N \) stabilizes \( \Sigma \) and acts transitively on the chambers of \( \Sigma \) . Note further that
1063_(GTM222)Lie Groups, Lie Algebras, and Representations
Definition 3.40
Definition 3.40. If \( G \) is a matrix Lie group with Lie algebra \( \mathfrak{g} \), then the exponential map for \( G \) is the map \[ \exp : \mathfrak{g} \rightarrow G\text{.} \] That is to say, the exponential map for \( G \) is the matrix exponential restricted to the Lie algebra \( \mathfrak{g} \) of \( G \) . We have shown (Theorem 2.10) that every matrix in \( \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) is the exponential of some \( n \times n \) matrix. Nevertheless, if \( G \subset \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) is a closed subgroup, there may exist \( A \) in \( G \) such that there is no \( X \) in the Lie algebra \( \mathfrak{g} \) of \( G \) with \( \exp X = A \) . Example 3.41. There does not exist a matrix \( X \in \operatorname{sl}\left( {2;\mathbb{C}}\right) \) with \[ {e}^{X} = \left( \begin{array}{rr} - 1 & 1 \\ 0 & - 1 \end{array}\right) \] (3.18) even though the matrix on the right-hand side of (3.18) is in \( \mathrm{{SL}}\left( {2;\mathbb{C}}\right) \) . Proof. If \( X \in \operatorname{sl}\left( {2;\mathbb{C}}\right) \) has distinct eigenvalues, then \( X \) is diagonalizable and \( {e}^{X} \) will also be diagonalizable, unlike the matrix on the right-hand side of (3.18). If \( X \in \operatorname{sl}\left( {2;\mathbb{C}}\right) \) has a repeated eigenvalue, this eigenvalue must be 0 or the trace of \( X \) would not be zero. Thus, there is a nonzero vector \( v \) with \( {Xv} = 0 \), from which it follows that \( {e}^{X}v = {e}^{0}v = v \) . We conclude that \( {e}^{X} \) has 1 as an eigenvalue, unlike the matrix on the right-hand side of (3.18). We see, then, that the exponential map for a matrix Lie group \( G \) does not necessarily map \( \mathfrak{g} \) onto \( G \) . Furthermore, the exponential map may not be one-toone on \( \mathfrak{g} \), as may be seen, for example, from the case \( \mathfrak{g} = \mathrm{{su}}\left( 2\right) \) . 
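Example 3.41 can be illustrated numerically. Writing the matrix in (3.18) as \( A = {e}^{i\pi }\left( {I + M}\right) \) with \( M \) nilpotent gives one logarithm \( X = {i\pi I} + M \) ; its trace is \( {2\pi i} \neq 0 \), and the example shows that no traceless logarithm exists. A sketch in Python/NumPy; the series-based `expm` below is a simple stand-in for a library matrix exponential, not a routine from the text:

```python
import numpy as np

def expm(X, terms=40):
    """Matrix exponential via truncated power series (adequate for small norms)."""
    result = np.eye(X.shape[0], dtype=complex)
    term = np.eye(X.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ X / k          # accumulates X^k / k!
        result = result + term
    return result

A = np.array([[-1.0, 1.0],
              [0.0, -1.0]])          # the matrix in (3.18)

# A = e^{i*pi} (I + M) with M = [[0, -1], [0, 0]] nilpotent,
# so one logarithm of A is X = i*pi*I + M:
X = np.array([[1j * np.pi, -1.0],
              [0.0, 1j * np.pi]])

assert np.allclose(expm(X), A)       # X really is a logarithm of A
print(np.trace(X))                   # 2*pi*i, nonzero: X is not in sl(2, C)
```

Any other logarithm of \( A \) has trace in \( {2\pi i}\mathbb{Z} \) (since \( {e}^{\operatorname{tr}X} = \det A = 1 \) ), and the argument in Example 3.41 rules out trace zero.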
Nevertheless, it provides a crucial mechanism for passing information between the group and the Lie algebra. Indeed, we will see (Corollary 3.44) that the exponential map is locally one-to-one and onto, a result that will be essential later. Theorem 3.42. For \( 0 < \varepsilon < \log 2 \), let \( {U}_{\varepsilon } = \left\{ {X \in {M}_{n}\left( \mathbb{C}\right) \mid \parallel X\parallel < \varepsilon }\right\} \) and let \( {V}_{\varepsilon } = \exp \left( {U}_{\varepsilon }\right) \) . Suppose \( G \subset \mathrm{{GL}}\left( {n;\mathbb{C}}\right) \) is a matrix Lie group with Lie algebra \( \mathfrak{g} \) . Then there exists \( \varepsilon \in \left( {0,\log 2}\right) \) such that for all \( A \in {V}_{\varepsilon }, A \) is in \( G \) if and only if \( \log A \) is in \( \mathfrak{g} \) . The condition \( \varepsilon < \log 2 \) guarantees (Theorem 2.8) that for all \( X \in {U}_{\varepsilon },\log \left( {e}^{X}\right) \) is defined and equal to \( X \) . Note that if \( X = \log A \) is in \( \mathfrak{g} \), then \( A = {e}^{X} \) is in \( G \) . Thus, the content of the theorem is that for some \( \varepsilon \), having \( A \) in \( {V}_{\varepsilon } \cap G \) implies that \( \log A \) must be in \( \mathfrak{g} \) . See Figure 3.1. We begin with a lemma. Lemma 3.43. Suppose \( {B}_{m} \) are elements of \( G \) and that \( {B}_{m} \rightarrow I \) . Let \( {Y}_{m} = \log {B}_{m} \) , which is defined for all sufficiently large \( m \) . Suppose that \( {Y}_{m} \) is nonzero for all \( m \) and that \( {Y}_{m}/\begin{Vmatrix}{Y}_{m}\end{Vmatrix} \rightarrow Y \in {M}_{n}\left( \mathbb{C}\right) \) . Then \( Y \) is in \( \mathfrak{g} \) . Proof. For any \( t \in \mathbb{R} \), we have \( \left( {t/\begin{Vmatrix}{Y}_{m}\end{Vmatrix}}\right) {Y}_{m} \rightarrow {tY} \) . Note that since \( {B}_{m} \rightarrow I \), we have \( \begin{Vmatrix}{Y}_{m}\end{Vmatrix} \rightarrow 0 \) . 
Thus, we can find integers \( {k}_{m} \) such that \( {k}_{m}\begin{Vmatrix}{Y}_{m}\end{Vmatrix} \rightarrow t \) . We have, then, \[ {e}^{{k}_{m}{Y}_{m}} = \exp \left\lbrack {\left( {{k}_{m}\begin{Vmatrix}{Y}_{m}\end{Vmatrix}}\right) \frac{{Y}_{m}}{\begin{Vmatrix}{Y}_{m}\end{Vmatrix}}}\right\rbrack \rightarrow {e}^{tY}. \] ![a7bfd4a7-7795-4350-a407-6ad11be11f96_81_0.jpg](images/a7bfd4a7-7795-4350-a407-6ad11be11f96_81_0.jpg) Fig. 3.1 If \( A \in {V}_{\varepsilon } \) belongs to \( G \), then \( \log A \) belongs to \( \mathfrak{g} \) ![a7bfd4a7-7795-4350-a407-6ad11be11f96_82_0.jpg](images/a7bfd4a7-7795-4350-a407-6ad11be11f96_82_0.jpg) Fig. 3.2 The points \( {k}_{m}{Y}_{m} \) are converging to \( {tY} \) However, \[ {e}^{{k}_{m}{Y}_{m}} = {\left( {e}^{{Y}_{m}}\right) }^{{k}_{m}} = {\left( {B}_{m}\right) }^{{k}_{m}} \in G, \] and since \( G \) is closed, we conclude that \( {e}^{tY} \in G \) . This shows that \( Y \in \mathfrak{g} \) . (See Figure 3.2.) Proof of Theorem 3.42. Let us think of \( {M}_{n}\left( \mathbb{C}\right) \) as \( {\mathbb{C}}^{{n}^{2}} \cong {\mathbb{R}}^{2{n}^{2}} \) and let \( D \) denote the orthogonal complement of \( \mathfrak{g} \) with respect to the usual inner product on \( {\mathbb{R}}^{2{n}^{2}} \) . Consider the map \( \Phi : {M}_{n}\left( \mathbb{C}\right) \rightarrow {M}_{n}\left( \mathbb{C}\right) \) given by \[ \Phi \left( Z\right) = {e}^{X}{e}^{Y}, \] where \( Z = X + Y \) with \( X \in \mathfrak{g} \) and \( Y \in D \) . Since (Proposition 2.16) the exponential is continuously differentiable, the map \( \Phi \) is also continuously differentiable, and we may compute that \[ {\left. \frac{d}{dt}\Phi \left( tX\right) \right| }_{t = 0} = X, \] \[ {\left. \frac{d}{dt}\Phi \left( tY\right) \right| }_{t = 0} = Y. \] This calculation shows that the derivative of \( \Phi \) at the point \( 0 \in {\mathbb{R}}^{2{n}^{2}} \) is the identity. 
(Recall that the derivative at a point of a function from \( {\mathbb{R}}^{2{n}^{2}} \) to itself is a linear map of \( {\mathbb{R}}^{2{n}^{2}} \) to itself.) Since the derivative of \( \Phi \) at the origin is invertible, the inverse function theorem says that \( \Phi \) has a continuous local inverse, defined in a neighborhood of \( I \) . We need to prove that for some \( \varepsilon \), if \( A \in {V}_{\varepsilon } \cap G \), then \( \log A \in \mathfrak{g} \) . If this were not the case, we could find a sequence \( {A}_{m} \) in \( G \) such that \( {A}_{m} \rightarrow I \) as \( m \rightarrow \infty \) and such that for all \( m,\log {A}_{m} \notin \mathfrak{g} \) . Using the local inverse of the map \( \Phi \), we can write \( {A}_{m} \) (for all sufficiently large \( m \) ) as \[ {A}_{m} = {e}^{{X}_{m}}{e}^{{Y}_{m}},\;{X}_{m} \in \mathfrak{g},{Y}_{m} \in D, \] with \( {X}_{m} \) and \( {Y}_{m} \) tending to zero as \( m \) tends to infinity. We must have \( {Y}_{m} \neq 0 \), since otherwise we would have \( \log {A}_{m} = {X}_{m} \in \mathfrak{g} \) . Since \( {e}^{{X}_{m}} \) and \( {A}_{m} \) are in \( G \), we see that \[ {B}_{m} \mathrel{\text{:=}} {e}^{-{X}_{m}}{A}_{m} = {e}^{{Y}_{m}} \] is in \( G \) . Since the unit sphere in \( D \) is compact, we can choose a subsequence of the \( {Y}_{m} \) ’s (still called \( {Y}_{m} \) ) so that \( {Y}_{m}/\begin{Vmatrix}{Y}_{m}\end{Vmatrix} \) converges to some \( Y \in D \), with \( \parallel Y\parallel = 1 \) . Then, by the lemma, \( Y \in \mathfrak{g} \) . This is a contradiction, because \( D \) is the orthogonal complement of \( \mathfrak{g} \) . Thus, there must be some \( \varepsilon \) such that \( \log A \in \mathfrak{g} \) for all \( A \) in \( {V}_{\varepsilon } \cap G \) . ## 3.8 Consequences of Theorem 3.42 In this section, we derive several consequences of the main result of the last section, Theorem 3.42. Corollary 3.44. 
If \( G \) is a matrix Lie group with Lie algebra \( \mathfrak{g} \), there exists a neighborhood \( U \) of \( 0 \) in \( \mathfrak{g} \) and a neighborhood \( V \) of \( I \) in \( G \) such that the exponential map takes \( U \) homeomorphically onto \( V \) . Proof. Let \( \varepsilon \) be such that Theorem 3.42 holds and set \( U = {U}_{\varepsilon } \cap \mathfrak{g} \) and \( V = \) \( {V}_{\varepsilon } \cap G \) . The theorem implies that exp takes \( U \) onto \( V \) . Furthermore, exp is a homeomorphism of \( U \) onto \( V \), since there is a continuous inverse map, namely, the restriction of the matrix logarithm to \( V \) . Corollary 3.45. Let \( G \) be a matrix Lie group with Lie algebra \( \mathfrak{g} \) and let \( k \) be the dimension of \( \mathfrak{g} \) as a real vector space. Then \( G \) is a smooth embedded submanifold of \( {M}_{n}\left( \mathbb{C}\right) \) of dimension \( k \) and hence a Lie group. It follows from the corollary that \( G \) is locally path connected: every point in \( G \) has a neighborhood \( U \) that is homeomorphic to a ball in \( {\mathbb{R}}^{k} \) and hence path connected. It then follows that \( G \) is connected (in the usual topological sense) if and only if it is path connected. (See, for example, Proposition 3.4.25 of [Run].) Proof. Let \( \varepsilon \in \left( {0,\log 2}\right) \) be such that Theorem 3.42 holds. Then for any \( {A}_{0} \in G \) , consider the neighborhood \( {A}_{0}{V}_{\varepsilon } \) of \( {A}_{0} \) in \( {M}_{n}\left( \mathbb{C}\right) \) . Note that \( A \in {A}_{0}{V}_{\varepsilon } \) if and only if \( {A}_{0}^{-1}A \in {V}_{\varepsilon } \) . Define a local coordinate system on \( {A}_{0}{V}_{\varepsilon } \) by writing each \( A \in {A}_{0}{V}_{\varepsilon } \) as \( A = {A}_{0}{e}^{X} \), for \( X \in {U}_{\varepsilon } \subset {M}_{n}\left( \mathbb{C}\right) \) . 
It follows from Theorem 3.42 that (for \( A \in {A}_{0}{V}_{\varepsilon })A \in G \) if and only if \( X \in \mathfrak{g} \) . Thus, in this local coordinate system defined near \( {A}_{0} \), the group \( G \) looks like the subspace \( \mathfrak{g} \) of \( {M}_{n}\left( \mathbb{C}\right) \) . Since we can find such local coordinates near any point \( {A}_{0} \) in \( G \), we conclude that \
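Corollary 3.44 says that exp is a local homeomorphism between a neighborhood of \( 0 \) in \( \mathfrak{g} \) and a neighborhood of \( I \) in \( G \). A minimal numerical sketch (not from the text; pure-Python power series for \( 2 \times 2 \) exp and log) checks this for \( G = \mathrm{SO}(2) \): the logarithm of a rotation near \( I \) lands back in the Lie algebra of skew-symmetric matrices and recovers the original element.

```python
import math

# 2x2 matrix helpers (lists of lists)
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_axpy(S, c, P):  # S + c*P
    return [[S[i][j] + c * P[i][j] for j in range(2)] for i in range(2)]

def expm(X, terms=30):
    # exp(X) = sum_k X^k / k!
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[v / k for v in row] for row in mat_mul(P, X)]
        S = mat_axpy(S, 1.0, P)
    return S

def logm(A, terms=80):
    # log(A) = sum_k (-1)^(k+1) (A - I)^k / k, valid for ||A - I|| < 1
    M = [[A[i][j] - (1.0 if i == j else 0.0) for j in range(2)] for i in range(2)]
    S = [[0.0, 0.0], [0.0, 0.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = mat_mul(P, M)
        S = mat_axpy(S, (-1.0) ** (k + 1) / k, P)
    return S

t = 0.3
X = [[0.0, -t], [t, 0.0]]   # element of so(2): skew-symmetric
A = expm(X)                 # rotation by angle t, an element of SO(2)
assert abs(A[0][0] - math.cos(t)) < 1e-12 and abs(A[1][0] - math.sin(t)) < 1e-12
Y = logm(A)                 # log A should land back in so(2)
assert abs(Y[0][0]) < 1e-10 and abs(Y[0][1] + Y[1][0]) < 1e-10   # skew-symmetric
assert abs(Y[1][0] - t) < 1e-10                                  # and equal to X
```

Shrinking \( t \) keeps \( A \) inside \( V_{\varepsilon} \); once \( \| A - I \| \geq 1 \) the log series above no longer converges, reflecting the purely local nature of the correspondence.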
111_Three Dimensional Navier-Stokes Equations-James_C._Robinson,_Jos_L._Rodrigo,_Witold_Sadows(z-lib.org
Definition 6.42
Definition 6.42 ([813]). A finite family \( {\left( {S}_{i}\right) }_{i \in I}\left( {I \mathrel{\text{:=}} {\mathbb{N}}_{k}}\right) \) of closed subsets of a normed space \( X \) is said to be allied at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if whenever \( {x}_{n, i}^{ * } \in \) \( {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) with \( {\left( {x}_{n, i}\right) }_{n} \in {S}_{i} \) for \( \left( {n, i}\right) \in \mathbb{N} \times I,{\left( {x}_{n, i}\right) }_{n} \rightarrow \bar{x}, \) \[ {\left( \begin{Vmatrix}{x}_{n,1}^{ * } + \cdots + {x}_{n, k}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0 \Rightarrow \forall i \in I\;{\left( \begin{Vmatrix}{x}_{n, i}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0. \] This property can be reformulated as follows: there exist \( \rho > 0, c > 0 \) such that \[ \forall {x}_{i} \in {S}_{i} \cap B\left( {\bar{x},\rho }\right) ,{x}_{i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{i}}\right) ,\;c\mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{x}_{i}^{ * }\end{Vmatrix} \leq \begin{Vmatrix}{{x}_{1}^{ * } + \cdots + {x}_{k}^{ * }}\end{Vmatrix}. \] (6.11) This reformulation follows by homogeneity from the fact that one can find \( \rho > 0 \) and \( c > 0 \) such that for \( {x}_{i} \in {S}_{i} \cap B\left( {\bar{x},\rho }\right) ,{x}_{i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{i}}\right) \) with \( \begin{Vmatrix}{{x}_{1}^{ * } + \cdots + {x}_{k}^{ * }}\end{Vmatrix} < c \) one has \( \mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{x}_{i}^{ * }\end{Vmatrix} < 1 \) or, equivalently, \( \mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{x}_{i}^{ * }\end{Vmatrix} \geq 1 \Rightarrow \begin{Vmatrix}{{x}_{1}^{ * } + \cdots + {x}_{k}^{ * }}\end{Vmatrix} \geq c \) . The result that follows reduces alliedness to an easier requirement. Proposition 6.43. 
A finite family \( {\left( {S}_{i}\right) }_{i \in I}\left( {I \mathrel{\text{:=}} {\mathbb{N}}_{k}}\right) \) of closed subsets of a normed space \( X \) is allied at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if and only if given \( {x}_{n, i} \in {S}_{i},{x}_{n, i}^{ * } \in {\partial }_{F}{d}_{{S}_{i}}\left( {x}_{n, i}\right) \) for \( \left( {n, i}\right) \in \mathbb{N} \times I \), with \( {\left( {x}_{n, i}\right) }_{n} \rightarrow \bar{x} \), one has \[ {\left( \begin{Vmatrix}{x}_{n,1}^{ * } + \cdots + {x}_{n, k}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0 \Rightarrow {\left( \mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{x}_{n, i}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0. \] (6.12) Proof. Since \( {\partial }_{F}{d}_{{S}_{i}}\left( {x}_{n, i}\right) \subset {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) for all \( {x}_{n, i} \in {S}_{i} \) and all \( \left( {n, i}\right) \), condition (6.12) follows from alliedness. Conversely, suppose condition (6.12) is satisfied. Let \( {\left( {x}_{n, i}\right) }_{n} \rightarrow \bar{x} \) in \( {S}_{i},{\left( {x}_{n, i}^{ * }\right) }_{n} \) in \( {X}^{ * } \) be sequences satisfying \( {\left( \begin{Vmatrix}{x}_{n,1}^{ * } + \cdots + {x}_{n, k}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0 \) and \( {x}_{n, i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) for all \( \left( {n, i}\right) \in \mathbb{N} \times I \) . Let \( {r}_{n} \mathrel{\text{:=}} \mathop{\max }\limits_{{i \in I}}\left( \begin{Vmatrix}{x}_{n, i}^{ * }\end{Vmatrix}\right) \) . If \( \left( {r}_{n}\right) \) is bounded, setting \( {w}_{n, i}^{ * } \mathrel{\text{:=}} {x}_{n, i}^{ * }/r \in N\left( {{S}_{i},{x}_{n, i}}\right) \cap {B}_{{X}^{ * }} = {\partial }_{F}{d}_{{S}_{i}}\left( {x}_{n, i}\right) \) with \( r > \mathop{\sup }\limits_{n}{r}_{n} \), we get that \( {\left( {w}_{n, i}^{ * }\right) }_{n} \rightarrow 0 \), hence \( {\left( {x}_{n, i}^{ * }\right) }_{n} \rightarrow 0 \) . 
It remains to discard the case in which \( \left( {r}_{n}\right) \) is unbounded. Taking a subsequence, we may suppose \( \left( {r}_{n}\right) \rightarrow + \infty \) . Setting \( {u}_{n, i}^{ * } \mathrel{\text{:=}} {x}_{n, i}^{ * }/{r}_{n} \), so that \( {\left( \begin{Vmatrix}{u}_{n,1}^{ * } + \cdots + {u}_{n, k}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0 \), we obtain from our assumption that \( {\left( \begin{Vmatrix}{u}_{n, i}^{ * }\end{Vmatrix}\right) }_{n} \rightarrow 0 \) for all \( i \in I \), a contradiction to \( \mathop{\max }\limits_{{i \in I}}\begin{Vmatrix}{u}_{n, i}^{ * }\end{Vmatrix} = 1 \) for all \( n \in \mathbb{N} \) . We have seen in Proposition 4.81 that alliedness implies linear coherence in Fréchet smooth spaces or in Asplund spaces. Let us give another proof. Theorem 6.44. Let \( {\left( {S}_{i}\right) }_{i \in I}\left( {I \mathrel{\text{:=}} {\mathbb{N}}_{k}}\right) \) be a finite family of closed subsets of an Asplund space \( X \) that is allied at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) . Then there exist \( c,\rho > 0 \) such that the linear coherence condition \[ \forall x \in B\left( {\bar{x},\rho }\right) ,\;d\left( {x, S}\right) \leq {cd}\left( {x,{S}_{1}}\right) + \cdots + {cd}\left( {x,{S}_{k}}\right) , \] (6.13) is satisfied, whence \[ {N}_{L}\left( {S,\bar{x}}\right) \subset {N}_{L}\left( {{S}_{1},\bar{x}}\right) + \cdots + {N}_{L}\left( {{S}_{k},\bar{x}}\right) . \] (6.14) Proof. Relation (6.11) yields some \( \gamma ,\rho \in \left( {0,1}\right) \) such that for all \( {x}_{i} \in {S}_{i} \cap B\left( {\bar{x},{5\rho }}\right) \) , \( {x}_{i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{i}}\right) \) for \( i \in I \) satisfying \( \begin{Vmatrix}{{x}_{1}^{ * } + \cdots + {x}_{k}^{ * }}\end{Vmatrix} < {3\gamma } \) one has \( \mathop{\sup }\limits_{{i \in I}}\begin{Vmatrix}{x}_{i}^{ * }\end{Vmatrix} < 1/2 \) . 
It follows from Lemma 6.7 that for all \( {w}_{i} \in B\left( {\bar{x},{2\rho }}\right) ,{w}_{i}^{ * } \in {\partial }_{F}d\left( {\cdot ,{S}_{i}}\right) \left( {w}_{i}\right) \) for \( i \in I \) satisfying \( \begin{Vmatrix}{{w}_{1}^{ * } + \cdots + {w}_{k}^{ * }}\end{Vmatrix} < {2\gamma } \) one has \( \mathop{\sup }\limits_{{i \in I}}\begin{Vmatrix}{w}_{i}^{ * }\end{Vmatrix} < 1 \), since we can find \( {x}_{i} \in {S}_{i} \) , \( {x}_{i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{i}}\right) \) such that \( \begin{Vmatrix}{{x}_{i}^{ * } - {w}_{i}^{ * }}\end{Vmatrix} < \gamma /k < 1/2 \), and \( \begin{Vmatrix}{{x}_{i} - {w}_{i}}\end{Vmatrix} < d\left( {{w}_{i},{S}_{i}}\right) + \rho \leq \) \( {3\rho } \), and hence \( {x}_{i} \in B\left( {\bar{x},{5\rho }}\right) \) . Let \( f : X \rightarrow \mathbb{R} \) be given by \( f\left( x\right) \mathrel{\text{:=}} d\left( {x,{S}_{1}}\right) + \cdots + \) \( d\left( {x,{S}_{k}}\right) \) . Let \( x \in B\left( {\bar{x},\rho }\right) \smallsetminus S \) and \( {x}^{ * } \in {\partial }_{F}f\left( x\right) \) . Since the \( {S}_{i} \) ’s are closed, we have \( {\delta }_{j} \mathrel{\text{:=}} d\left( {x,{S}_{j}}\right) > 0 \) for some \( j \in I \) . Let \( \delta \in \left( {0,{\delta }_{j}}\right) \cap \left( {0,\rho }\right) \) . The fuzzy sum rule yields \( {w}_{i} \in B\left( {x,\delta }\right) \) and \( {w}_{i}^{ * } \in {\partial }_{F}d\left( {\cdot ,{S}_{i}}\right) \) for \( i \in I \) such that \( \begin{Vmatrix}{{w}_{1}^{ * } + \cdots + {w}_{k}^{ * } - {x}^{ * }}\end{Vmatrix} < \gamma \) . Since \( \delta < {\delta }_{j} \), we have \( {w}_{j} \in X \smallsetminus {S}_{j} \), hence \( \begin{Vmatrix}{w}_{j}^{ * }\end{Vmatrix} = 1 \) . Thus \( \begin{Vmatrix}{{w}_{1}^{ * } + \cdots + {w}_{k}^{ * }}\end{Vmatrix} \geq {2\gamma } \) and \( \begin{Vmatrix}{x}^{ * }\end{Vmatrix} \geq \gamma \) . 
It follows from Theorems 1.114 and 4.80 that \( d\left( {x, S}\right) \leq \left( {1/\gamma }\right) f\left( x\right) \) for all \( x \in B\left( {\bar{x},\rho }\right) \) . A weaker notion of nice joint behavior can be given (it is weaker because a weak* convergence assumption is added). It is always satisfied in finite-dimensional spaces.

Definition 6.45 ([813, Definition 3.2]). A finite family \( {\left( {S}_{i}\right) }_{i \in I} \) of closed subsets of a normed space \( X \) with \( I \mathrel{\text{:=}} {\mathbb{N}}_{k} \) is said to be synergetic at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if whenever \( \left( {x}_{n, i}\right) \rightarrow \bar{x} \) and \( \left( {x}_{n, i}^{ * }\right) \overset{ * }{ \rightarrow }0 \) are such that \( {x}_{n, i} \in {S}_{i},{x}_{n, i}^{ * } \in {N}_{F}\left( {{S}_{i},{x}_{n, i}}\right) \) for all \( \left( {n, i}\right) \in \mathbb{N} \times I \) and \( \left( {{x}_{n,1}^{ * } + \cdots + {x}_{n, k}^{ * }}\right) \rightarrow 0 \), one has \( \left( {x}_{n, i}^{ * }\right) \rightarrow 0 \) for all \( i \in I \) .

Two subsets are synergetic at some point \( \bar{z} \) of their intersection whenever one of them is normally compact at \( \bar{z} \) . However, it may happen that they are synergetic at \( \bar{z} \) while none of them is normally compact at \( \bar{z} \) . This happens for \( A \times B \) and \( C \times D \) with \( \bar{z} \mathrel{\text{:=}} \left( {\bar{x},\bar{y}}\right), A \) (resp. \( D \) ) being normally compact at \( \bar{x} \) (resp. \( \bar{y} \) ) while \( B \) and \( C \) are arbitrary (for instance singletons in infinite-dimensional spaces). The preceding notion can be related to alliedness with the help of the following normal qualification condition (NQC): \[ {x}_{i}^{ * } \in {N}_{L}\left( {{S}_{i},\bar{x}}\right) ,{x}_{1}^{ * } + \cdots + {x}_{k}^{ * } = 0 \Rightarrow {x}_{1}^{ * } = \cdots = {x}_{k}^{ * } = 0. \] (6.15) Proposition 6.46.
A finite family \( {\left( {S}_{i}\right) }_{i \in I}\left( {I \mathrel{\text{:=}} {\mathbb{N}}_{k}}\right) \) of closed subsets of an Asplund space \( X \) is allied at \( \bar{x} \in S \mathrel{\text{:=}} {S}_{1} \cap \cdots \cap {S}_{k} \) if and only if it is synergetic at \( \bar{x} \) and the normal qualification condition (6.15) holds. In particular, if \( X \) is finite-dimensional, (6.15) implies a
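For closed linear subspaces, every Fréchet normal cone \( {N}_{F}\left( {S}_{i}, x \right) \) is simply the orthogonal complement \( {S}_{i}^{\perp} \), so condition (6.11) becomes a uniform bound over the complements. A small illustrative check (the subspaces and the constant \( c = 1 \) are my own choices, not from the text): the coordinate axes in \( {\mathbb{R}}^{2} \) are allied, while a repeated axis admits cancelling nonzero normals, violating the qualification condition (6.15).

```python
import math
import random

random.seed(0)

# S1 = x-axis, S2 = y-axis in R^2.
# Normal cones: N(S1, .) = S1-perp = y-axis, N(S2, .) = S2-perp = x-axis.
for _ in range(1000):
    a = random.uniform(-5.0, 5.0)      # x1* = (0, a) in S1-perp
    b = random.uniform(-5.0, 5.0)      # x2* = (b, 0) in S2-perp
    sum_norm = math.hypot(b, a)        # ||x1* + x2*|| = sqrt(a^2 + b^2)
    # condition (6.11) with c = 1: the normals cannot nearly cancel
    assert max(abs(a), abs(b)) <= sum_norm + 1e-12

# Degenerate family S1 = S2 = x-axis: nonzero normals cancel exactly,
# so (6.15) fails and the family is not allied.
x1, x2 = (0.0, 1.0), (0.0, -1.0)
assert (x1[0] + x2[0], x1[1] + x2[1]) == (0.0, 0.0)
```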
1097_(GTM253)Elementary Functional Analysis
Definition 1.12
Definition 1.12. Let \( X \) be a normed linear space. If \( X \) is complete in the metric \( d \) defined from the norm by \( d\left( {x, y}\right) = \parallel x - y\parallel \), we call \( X \) a Banach space.

All of the above examples of normed linear spaces are Banach spaces. We will not stop to prove this now, but we do make a couple of observations. The statement that the space \( {L}^{p}\left( {Y,\mu }\right) \) is complete for any \( 1 \leq p \leq \infty \) and any positive measure space \( \left( {Y,\mu }\right) \) goes by the name of the Riesz-Fischer theorem. In its full generality it is a deep result of real analysis (see also the discussion in Section 1.5 below and in Section A.3 of the Appendix). Notice that this general class of examples includes the \( {\ell }^{p} \) spaces and weighted sequence spaces as special cases (see Exercise 1.6 for a more elementary approach), as well as the finite-dimensional spaces in Examples 1.2 and 1.3. In Exercise 1.2 the reader is asked to provide a proof of completeness for the spaces in Example 1.4, and a similar argument can be used for the space \( {H}^{\infty }\left( \Omega \right) \) of Example 1.8. You can get an example of a normed linear space which is not a Banach space by taking a nonclosed subspace of a Banach space; see for example Exercise 1.3. (A subspace of a vector space \( V \) is a subset of \( V \) which is itself a vector space under the same addition and scalar multiplication operations.)

Banach spaces are named in honor of the Polish mathematician Stefan Banach, a dominating figure in the birth of functional analysis, who wrote a fundamentally important book called Opérations Linéaires in 1932. In this book (which had its beginnings in Banach's 1920 doctoral thesis) many of the properties of complete normed linear spaces are developed.
Banach calls these spaces "spaces of type (B)," perhaps in the hope they would eventually be known as "Banach spaces."\( {}^{1} \) This is precisely what happened, with the terminology "Banach space" making its formal appearance in Fréchet's text Les Espaces Abstraits [13]. Hugo Steinhaus, Banach's teacher and collaborator, writes in a 1963 memoir of Banach that Banach's axiomatic definition of a complete normed linear space provided precisely the right level of generality; broad enough to encompass a wide variety of natural examples, but not so general as to permit only uninteresting theorems:

His foreign competitors in the theory of linear operations either dealt with spaces that were too general, and that is why they either obtained only trivial results, or assumed too much about those spaces, which restricted the extent of the applications to a few and artificial examples - Banach's genius reveals itself in finding the golden mean. This ability of hitting the mark proves that Banach was born a high class mathematician ([44], p. 12).

In fact, a few months after Banach set down the axioms for a normed linear space, the American Norbert Wiener independently gave nearly the same definition, and for a short while the terminology "Banach-Wiener spaces" was used. However, as Wiener's interest in the area did not continue, these spaces, in Wiener's words, became "quite justly named after Banach alone ([46], p. 60)." Hilbert spaces, which we turn to now, are Banach spaces with some additional structure, coming from the presence of an inner product.

Definition 1.13. Let \( X \) be a vector space over \( \mathbb{C} \) .
An inner product is a map \( \langle \cdot , \cdot \rangle \) : \( X \times X \rightarrow \mathbb{C} \) satisfying, for \( x, y \), and \( z \) in \( X \) and scalars \( \alpha \in \mathbb{C} \) ,

(1) \( \langle x, y\rangle = \overline{\langle y, x\rangle } \) for all \( x, y \) in \( X \) ,
(2) \( \langle x, x\rangle \geq 0 \), with \( \langle x, x\rangle = 0 \) (if and) only if \( x = 0 \) ,
(3) \( \langle x + y, z\rangle = \langle x, z\rangle + \langle y, z\rangle \), and
(4) \( \langle {\alpha x}, y\rangle = \alpha \langle x, y\rangle \) .

Some comments on this definition are in order. The bar in (1) denotes complex conjugation. Property (2) is referred to as "positive-definiteness," and the adjective "Hermitian" is used for property (1). The parenthetical "if" statement in (2) need not be included in the definition, as it follows from the other parts since \( \langle 0,0\rangle = \langle 2 \cdot 0,0\rangle = 2\langle 0,0\rangle \) . An inner product is linear in the first slot and conjugate linear in the second \( \left( {\langle x,{\alpha y} + z\rangle = \bar{\alpha }\langle x, y\rangle +\langle x, z\rangle }\right) \), so the defining properties are encapsulated by saying that an inner product is a Hermitian, positive definite, sesquilinear form (sesquilinear from the Latin for " \( 1\frac{1}{2} \) " linear). The reader is cautioned that some authors (in physics, for example) define the inner product to be linear in the second slot, and conjugate linear in the first.

\( {}^{1} \) Though this interpretation of Banach’s choice of notation is widely repeated, V.D. Milman, in writing about Banach, says, "In his book...Banach denotes operators by the letter \( A \) . These were the initial objects of study, and the complete normed spaces on which they operated were denoted by the Latin letter \( B \) . That was natural, and there is no indication that he was ’hinting’ at his own name by using that letter" ([32], p. 228).
A standard example is to define an inner product on \( {L}^{2}\left( {X,\mu }\right) \) for a positive measure space \( \left( {X,\mu }\right) \) by \[ \langle f, g\rangle = {\int }_{X}f\bar{g}{d\mu } \] This general framework includes, as special cases, the example \( {\mathbb{C}}^{n} \) with \[ \left\langle {\left( {{z}_{1},{z}_{2},\ldots ,{z}_{n}}\right) ,\left( {{w}_{1},{w}_{2},\ldots ,{w}_{n}}\right) }\right\rangle = \mathop{\sum }\limits_{{j = 1}}^{n}{z}_{j}\overline{{w}_{j}}, \] the example \( {\ell }^{2} \) of all square summable sequences with \[ \left\langle {\left( {{z}_{1},{z}_{2},\ldots }\right) ,\left( {{w}_{1},{w}_{2},\ldots }\right) }\right\rangle = \mathop{\sum }\limits_{{j = 1}}^{\infty }{z}_{j}\overline{{w}_{j}}, \] and the weighted analogues \( {\ell }_{\beta }^{2} \) with \[ \left\langle {\left( {{z}_{0},{z}_{1},\ldots }\right) ,\left( {{w}_{0},{w}_{1},\ldots }\right) }\right\rangle = \mathop{\sum }\limits_{{j = 0}}^{\infty }{z}_{j}\overline{{w}_{j}}\beta {\left( j\right) }^{2}. \] The first two are obtained by taking \( X \) to be, respectively, \( \{ 1,2,\ldots, n\} \) or \( \mathbb{N} \), with \( \mu \) equal to counting measure. In the case of weighted sequence spaces, \( X = {\mathbb{N}}_{0} \) and \( \mu \) assigns mass \( \beta {\left( n\right) }^{2} \) to the set \( \{ n\} \) . Any inner product satisfies an important inequality, called the Cauchy-Schwarz inequality, which we describe next. Proposition 1.14. If \( \langle \cdot , \cdot \rangle \) is an inner product on a vector space \( X \), then for all \( x \) and y in \( X \) we have \[ {\left| \langle x, y\rangle \right| }^{2} \leq \langle x, x\rangle \langle y, y\rangle \] In this general form, the Cauchy-Schwarz inequality is due to John von Neumann (1930), who is often credited with the "axiomatization" of Hilbert spaces (defined below). 
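As a quick numerical sanity check of Proposition 1.14 (a sketch, not from the text; the weight \( \beta \) and the vectors are arbitrary choices), one can verify the Cauchy-Schwarz inequality, together with the triangle inequality it yields via Proposition 1.15, for a finitely truncated inner product of \( {\ell }_{\beta }^{2} \) type:

```python
import math

def inner(z, w, beta):
    # truncated weighted inner product: <z, w> = sum_j z_j * conj(w_j) * beta(j)^2
    return sum(zj * wj.conjugate() * beta(j) ** 2 for j, (zj, wj) in enumerate(zip(z, w)))

beta = lambda j: 1.0 + j / 2.0          # an arbitrary positive weight
z = [1 + 2j, 0.5 - 1j, 3.0 + 0j, -2j]
w = [2 - 1j, 1 + 1j, -0.5 + 0j, 4.0 + 0j]

# Cauchy-Schwarz: |<z, w>|^2 <= <z, z><w, w>
lhs = abs(inner(z, w, beta)) ** 2
rhs = (inner(z, z, beta) * inner(w, w, beta)).real   # both factors are real and >= 0
assert lhs <= rhs

# Triangle inequality for the induced norm ||x|| = <x, x>^(1/2)
norm = lambda v: math.sqrt(inner(v, v, beta).real)
zw = [a + b for a, b in zip(z, w)]
assert norm(zw) <= norm(z) + norm(w) + 1e-12
```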
Earlier versions of Proposition 1.14, for specific settings, go back to Cauchy, Bunyakowsky, and Schwarz, and the Cauchy-Schwarz inequality is sometimes referred to as the Cauchy-Bunyakowsky-Schwarz inequality. One particularly simple proof of Proposition 1.14 is outlined in Exercise 1.7. As an important application of Proposition 1.14, we show next how any inner product defines a norm. Proposition 1.15. If \( \langle \cdot , \cdot \rangle \) is an inner product on a vector space \( X \), then \[ \parallel x\parallel \equiv \langle x, x{\rangle }^{\frac{1}{2}} \] is a norm on \( X \) . Proof. We will check the triangle inequality, and leave the verification of the other norm properties to the reader. Using the linearity of the inner product we have \[ \parallel x + y{\parallel }^{2} = \langle x + y, x + y\rangle = \langle x, x\rangle + \langle y, x\rangle + \langle x, y\rangle + \langle y, y\rangle \] \[ = \parallel x{\parallel }^{2} + 2\operatorname{Re}\langle x, y\rangle + \parallel y{\parallel }^{2} \] \[ \leq \parallel x{\parallel }^{2} + 2\left| {\langle x, y\rangle }\right| + \parallel y{\parallel }^{2} \] \[ \leq \parallel x{\parallel }^{2} + 2\parallel x\parallel \parallel y\parallel + \parallel y{\parallel }^{2} \] \[ = {\left( \parallel x\parallel + \parallel y\parallel \right) }^{2} \] where \( \operatorname{Re}z \) denotes the real part of a complex number \( z \), and we have used the Cauchy-Schwarz inequality in the penultimate step. Definition 1.16. A (complex) Hilbert space \( \mathcal{H} \) is a vector space over \( \mathbb{C} \) with an inner product such that \( \mathcal{H} \) is complete in the metric \[ d\left( {x, y}\right) = \parallel x - y\parallel = \langle x - y, x - y{\rangle }^{\frac{1}{2}}. 
\] Any space \( {L}^{2}\left( {X,\mu }\right) \) as described above is thus an example of a Hilbert space, since we have already observed that \( {L}^{2}\left( {X,\mu }\right) \) is a Banach space under the norm \( \parallel f{\parallel }_{2} = {\left( {\int }_{X}{\left| f\right| }^{2}d\mu \right) }^{\frac{1}{2}} \) which we recognize as \( \langle f, f{\rangle }^{\frac{1}{2}} \) . There are various anecdotes, of dubious validity, about David Hilbert and the terminology "Hilbert space." Steve Krantz, writing in Mathematical Apocrypha [27] says It is said that, late in his life, Hilbert was reading a paper and got stuck at one point. He went to his colleague in the office next door and queried, "What is a Hilbert space?" (p. 89) Another version is given by Laurence Young [47]: When Weyl presented a proof of the Riesz-Fischer theorem in a Göttingen colloquium, Hilbert went up to the speaker afterward to s
1112_(GTM267)Quantum Theory for Mathematicians
Definition 23.52
Definition 23.52 If \( f \) is a function on \( N \) for which \( {X}_{f} \) preserves \( \bar{P} \), let \( Q\left( f\right) \) be the operator on the half-form Hilbert space of \( P \) given by \[ Q\left( f\right) s = \left( {{Q}_{\text{pre }}\left( f\right) \mu }\right) \otimes \nu - i\hslash \mu \otimes {\mathcal{L}}_{{X}_{f}}\nu \] where \( s \) is decomposed locally as \( s = \mu \otimes \nu \), with \( \mu \) being a section of \( L \) and \( \nu \) a section of \( {\delta }_{P} \) . These operators satisfy \( \left\lbrack {Q\left( f\right), Q\left( g\right) }\right\rbrack /\left( {i\hslash }\right) = Q\left( {\{ f, g\} }\right) \) on the space of smooth polarized sections of \( L \otimes {\delta }_{P} \), with the proof of this result being identical to the proof of Theorem 23.46 in the real case. If \( f \) is real-valued and \( {X}_{f} \) preserves \( \bar{P} \), then \( Q\left( f\right) \) will be at least symmetric, assuming we can find a dense subspace of the half-form Hilbert space consisting of "nice" functions. (Finding dense subspaces is more difficult in the holomorphic case than in the real case.) A proof of this claim is sketched in Exercise 18. Example 23.53 Consider \( {\mathbb{R}}^{2} \cong {T}^{ * }\mathbb{R} \) with the Kähler polarization \( P \) given by the global complex coordinate \( z = \left( {x - {ip}/\left( {m\omega }\right) }\right) \), for some positive number \( \omega \) . Take \( {\delta }_{P} \) to be trivial with trivializing section \( \sqrt{dz} \) . Consider also the harmonic oscillator Hamiltonian \( H \mathrel{\text{:=}} \left( {{p}^{2} + {\left( m\omega x\right) }^{2}}\right) /\left( {2m}\right) \) . 
Then \( {X}_{H} \) preserves \( P \) and the operator \( Q\left( H\right) \) on the half-form Hilbert space has spectrum consisting of numbers of the form \( \left( {n + 1/2}\right) \hslash \omega \), where \( n = 0,1,2,\ldots \) In this example, \( \omega \) is the frequency of the oscillator and not the canonical 2-form.

Proof. The calculation is the same as in the proof of Proposition 22.14, except for the addition of the Lie derivative term. A simple calculation shows that \( {\mathcal{L}}_{{X}_{H}}\left( {dz}\right) = {i\omega dz} \), from which it follows that \( {\mathcal{L}}_{{X}_{H}}\sqrt{dz} = \left( {{i\omega }/2}\right) \sqrt{dz} \) . It is then easy to see that the elements of the form \( {e}^{-{m\omega }{\left| \operatorname{Im}z\right| }^{2}/\left( {2\hslash }\right) }{z}^{n} \otimes \sqrt{dz} \) form an orthonormal basis of eigenvectors for \( Q\left( H\right) \), with eigenvalues \( \left( {n + 1/2}\right) \hslash \omega \) . ∎

## 23.8 Pairing Maps

Pairing maps are designed to allow us to compare the results of quantizing with respect to two different polarizations. We consider mainly the case of two "transverse" real polarizations; the case of two complex polarizations or one real and one complex polarization can be treated with minor modifications. Suppose that \( P \) and \( {P}^{\prime } \) are two purely real polarizations and that the associated leaf spaces \( {\Xi }_{1} \) and \( {\Xi }_{2} \) are oriented manifolds. Suppose also that \( P \) and \( {P}^{\prime } \) are transverse at each point \( z \in N \), meaning that \( {P}_{z} \cap {P}_{z}^{\prime } = \{ 0\} \) . If \( \alpha \) and \( \beta \) are polarized sections of \( {\mathcal{K}}_{P} \) and \( {\mathcal{K}}_{{P}^{\prime }} \), respectively, the transversality assumption is easily shown to imply that \( \alpha \land \beta \) is a nowhere-vanishing \( {2n} \) -form on \( N \) .
Thus, for any point \( z \in N \), we can define a bilinear "pairing" from \( {\delta }_{P, z} \times {\delta }_{{P}^{\prime }, z} \rightarrow \mathbb{R} \) by \[ \left( {{\nu }_{1},{\nu }_{2}}\right) = {\left( \frac{\left( {{\nu }_{1} \otimes {\nu }_{1}}\right) \land \left( {{\nu }_{2} \otimes {\nu }_{2}}\right) }{\lambda }\right) }^{1/2}. \] (23.42) (Recall Notation 23.49.) We can extend this pairing to a pairing \( {\delta }_{P, z}^{\mathbb{C}} \times {\delta }_{{P}^{\prime }, z}^{\mathbb{C}} \rightarrow \mathbb{C} \) that is conjugate linear in the first factor and linear in the second factor. Finally, we extend to a pairing of \( \left( {{L}_{z} \otimes {\delta }_{P, z}^{\mathbb{C}}}\right) \times \left( {{L}_{z} \otimes {\delta }_{{P}^{\prime }, z}^{\mathbb{C}}}\right) \rightarrow \mathbb{C} \) by setting \( \left( {{\mu }_{1} \otimes {\nu }_{1},{\mu }_{2} \otimes {\nu }_{2}}\right) \) equal to \( \left( {{\mu }_{1},{\mu }_{2}}\right) \left( {{\nu }_{1},{\nu }_{2}}\right) \), where \( \left( {{\mu }_{1},{\mu }_{2}}\right) \) is computed with respect to the Hermitian structure on \( L \) . Let \( {\mathbf{H}}_{1} \) and \( {\mathbf{H}}_{2} \) denote the half-form Hilbert spaces for \( P \) and \( {P}^{\prime } \), respectively. Given \( {s}_{1} \in {\mathbf{H}}_{1} \) and \( {s}_{2} \in {\mathbf{H}}_{2} \), we define the pairing of \( {s}_{1} \) and \( {s}_{2} \) by \[ {\left\langle {s}_{1},{s}_{2}\right\rangle }_{P,{P}^{\prime }} \mathrel{\text{:=}} c{\int }_{N}\left( {{s}_{1},{s}_{2}}\right) \lambda \] provided that the integral is absolutely convergent. Here \( \left( {{s}_{1},{s}_{2}}\right) \) is the pointwise pairing of \( {s}_{1} \) and \( {s}_{2} \) defined in the previous paragraph and \( c \) is a certain "universal" constant, depending only on \( \hslash \) and the dimension \( n \), that can be chosen to make certain examples work out nicely.
We now look for a pairing map \( {\Lambda }_{P,{P}^{\prime }} : {\mathbf{H}}_{1} \rightarrow {\mathbf{H}}_{2} \) with the property that \[ {\left\langle {s}_{1},{s}_{2}\right\rangle }_{P,{P}^{\prime }} = {\left\langle {\Lambda }_{P,{P}^{\prime }}{s}_{1},{s}_{2}\right\rangle }_{{\mathbf{H}}_{2}}. \] (23.43) If the pairing is bounded (i.e., it satisfies \( \left| {\left\langle {s}_{1},{s}_{2}\right\rangle }_{P,{P}^{\prime }}\right| \leq C\begin{Vmatrix}{s}_{1}\end{Vmatrix}\begin{Vmatrix}{s}_{2}\end{Vmatrix} \) for some constant \( C \) ), there is a unique bounded operator \( {\Lambda }_{P,{P}^{\prime }} \) satisfying (23.43). Even if the pairing is unbounded, we may be able to define \( {\Lambda }_{P,{P}^{\prime }} \) as an unbounded operator. If we were optimistic, we might hope that the pairing map for any two transverse polarizations would be unitary, or at least a constant multiple of a unitary map. If this were the case, it would suggest that quantization is independent of the choice of polarization, in the sense that there would be a natural unitary map between the Hilbert spaces for two different polarizations. As it turns out, however, the typical pairing map is not a constant multiple of a unitary map. Nevertheless, there are certain special cases where the pairing map is unitary (up to a constant), including the case of translation-invariant polarizations on \( {\mathbb{R}}^{2n} \) . See also [20] for an example of a pairing map between a real and a complex polarization that is a constant multiple of a unitary map. We compute just one very special case of the pairing map between two real polarizations. Example 23.54 Consider \( N = {\mathbb{R}}^{2} \cong {T}^{ * }\mathbb{R} \) and take \( L \) to be trivial with connection 1-form \( \theta = {pdx} \) . 
Let \( P \) be the vertical polarization, spanned at each point by \( \partial /\partial p \), and let \( {P}^{\prime } \) be the horizontal polarization, spanned at each point by \( \partial /\partial x \) . Then elements \( {s}_{1} \) of the half-form space for \( P \) have the form \[ {s}_{1}\left( {x, p}\right) = \phi \left( x\right) \otimes \sqrt{dx} \] (23.44) and elements \( {s}_{2} \) of the half-form space for \( {P}^{\prime } \) have the form \[ {s}_{2}\left( {x, p}\right) = \psi \left( p\right) {e}^{{ixp}/\hslash } \otimes \sqrt{dp} \] (23.45) where \( \phi \) and \( \psi \) are functions on \( \mathbb{R} \) . If \( c = 1 \), the pairing is computed as \[ {\left\langle {s}_{1},{s}_{2}\right\rangle }_{P,{P}^{\prime }} = - {\int }_{{\mathbb{R}}^{2}}\overline{\phi \left( x\right) }\psi \left( p\right) {e}^{{ixp}/\hslash }{dxdp}. \] (23.46) If \( {s}_{1} \) has the form (23.44), then \( {\Lambda }_{P,{P}^{\prime }}\left( {s}_{1}\right) \) has the form (23.45), where \[ \psi \left( p\right) = - {\int }_{\mathbb{R}}\phi \left( x\right) {e}^{-{ixp}/\hslash }{dx}. \] Thus, \( {\Lambda }_{P,{P}^{\prime }} \) is a scaled version of the Fourier transform and is, in particular, a constant multiple of a unitary map. The pairing should be defined initially on some dense subspace of the Hilbert spaces, such as the subspaces where \( \phi \) and \( \psi \) are Schwartz functions. The pairing map can also be defined initially on the Schwartz space, recognized as being unitary (up to a constant), and then extended by continuity to all of \( {\mathbf{H}}_{1} \) . Once the pairing map is extended to \( {\mathbf{H}}_{1} \), the pairing itself can be defined for all \( {s}_{1} \in {\mathbf{H}}_{1} \) and \( {s}_{2} \in {\mathbf{H}}_{2} \) by taking (23.43) as the definition of \( {\left\langle {s}_{1},{s}_{2}\right\rangle }_{P,{P}^{\prime }} \) . 
Even though it is possible, as just described, to extend the pairing to all of \( {\mathbf{H}}_{1} \times {\mathbf{H}}_{2} \), the integral in (23.46) is not always absolutely convergent. Proof. The forms (23.44) and (23.45) are obtained by a simple modification of the argument in the proof of Proposition 22.8. We can compute that the pointwise pairing of \( \sqrt{dx} \) and \( \sqrt{dp} \) is \( -1 \), which gives the indicated form of the pairing in (23.46). The pairing may be rewritten as \[ {\int }_{\mathbb{R}}\overline{{\int }_{\mathbb{R}}\phi \left( x\right) {e}^{-{ixp}/\hslash }{dx}}\psi \left( p\right) {dp}, \] which gives the indicated form of the pairing map. ∎ ## 23.9 Exercises 1. Let \( L \) be a line bundle with connection \( \nabla \) over \( N \) . Let \( s \) be a section of \( L \)
## Algebra Chapter 0, Definition 2.1
Definition 2.1. Let \( A \) be a set. The symmetric group, or group of permutations of \( A \), denoted \( {S}_{A} \), is the group \( {\operatorname{Aut}}_{\text{Set }}\left( A\right) \) . The group of permutations of the set \( \{ \mathbf{1},\ldots ,\mathbf{n}\} \) is denoted by \( {S}_{n} \) . The terminology is easily justified: the automorphisms of a set \( A \) are the set-isomorphisms, that is, the bijections, from \( A \) to itself; applying such a bijection amounts precisely to permuting ('scrambling') the elements of \( A \) . This operation may be viewed as a transformation of \( A \) which does not change it (as a set), hence a 'symmetry'. The groups \( {S}_{A} \) are famously large: as the reader checked in Exercise 1.2.1, \( \left| {S}_{n}\right| = n! \) . For example, \( \left| {S}_{70}\right| > {10}^{100} \), which is substantially larger than the estimated number of elementary particles in the observable universe. Potentially confusing point: the various conventions clash in the way the operation in \( {S}_{A} \) should be written. From the 'automorphism' point of view, elements of \( {S}_{A} \) are functions and should be composed as such; thus, if \( f, g \in {S}_{A} = {\operatorname{Aut}}_{\text{Set }}\left( A\right) \) , then the 'product' of \( f \) and \( g \) should be written \( g \circ f \) and should act as follows: \[ \left( {\forall p \in A}\right) : \;g \circ f\left( p\right) = g\left( {f\left( p\right) }\right) . \] But the prevailing style of notation in group theory would write this element as \( {fg} \), apparently reversing the order in which the operation is performed. Everything would fall back into agreement if we adopted the convention of writing functions after the elements on which they act rather than before: \( \left( p\right) f \) rather than \( f\left( p\right) \) . 
But one cannot change century-old habits, so we have no alternative but to live with both conventions and to state carefully which one we are using at any given time. Contemplating the groups \( {S}_{n} \) for small values of \( n \) is an exercise of inestimable value. Of course \( {S}_{1} \) is a trivial group; \( {S}_{2} \) consists of the two possible permutations: \[ \left\{ {\begin{array}{l} 1 \mapsto 1 \\ 2 \mapsto 2 \end{array}\;\text{ and }\;\left\{ \begin{array}{l} 1 \mapsto 2 \\ 2 \mapsto 1 \end{array}\right. }\right. \] which we could call \( e \) (identity) and \( f \) (flip), with operation \[ {ee} = {ff} = e,\;{ef} = {fe} = f. \] In practice we cannot give a new name to every different element of every permutation group, so we have to develop a more flexible notation. There are in fact several possible choices for this; for the time being, we will indicate an element \( \sigma \in {S}_{n} \) by listing the effect of applying \( \sigma \) underneath the list \( 1,\ldots, n \), as a matrix\( {}^{9} \). Thus the elements \( e, f \) in \( {S}_{2} \) may be denoted by \[ e = \left( \begin{array}{ll} 1 & 2 \\ 1 & 2 \end{array}\right) \;,\;f = \left( \begin{array}{ll} 1 & 2 \\ 2 & 1 \end{array}\right) . \] \( {}^{9} \) This is only a notational device - these matrices should not be confused with the matrices appearing in linear algebra. In the same notational style, \( {S}_{3} \) consists of \[ \left\{ {\left( \begin{array}{lll} 1 & 2 & 3 \\ 1 & 2 & 3 \end{array}\right) ,\left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array}\right) ,\left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 2 & 1 \end{array}\right) ,\left( \begin{array}{lll} 1 & 2 & 3 \\ 1 & 3 & 2 \end{array}\right) ,\left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) ,\left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 3 & 1 \end{array}\right) }\right\} . 
\] For the multiplication, I will adopt the sensible (but not very standard) convention mentioned above and have permutations act 'on the right': thus, for example, \[ \mathbf{1}\left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array}\right) \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) = \mathbf{2}\left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) = \mathbf{1} \] and similarly \[ \mathbf{2}\left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array}\right) \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) = \mathbf{3},\;\mathbf{3}\left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array}\right) \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) = \mathbf{2}. \] That is, \[ \left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array}\right) \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) = \left( \begin{array}{lll} 1 & 2 & 3 \\ 1 & 3 & 2 \end{array}\right) \] since the permutations on both sides of the equal sign act in the same way on \( \mathbf{1},\mathbf{2},\mathbf{3} \) . The reader should now check that \[ \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) \left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array}\right) = \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 2 & 1 \end{array}\right) . \] That is, letting \[ x = \left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array}\right) ,\;y = \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array}\right) , \] then \[ {yx} \neq {xy} \] showing that the operation in \( {S}_{3} \) does not satisfy the commutative axiom. Thus, \( {S}_{3} \) is a noncommutative group; the reader will immediately realize that in fact \( {S}_{n} \) is noncommutative for all \( n \geq 3 \) . While the commutation relation does not hold, other interesting relations do hold in \( {S}_{3} \) . 
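This right-action rule is easy to machine-check. Below is a small sketch (illustrative, not from the book) in which a permutation \( \sigma \in {S}_{n} \) is stored as the list of images \( \left\lbrack {\sigma \left( 1\right) ,\ldots ,\sigma \left( n\right) }\right\rbrack \) ; the product \( {fg} \) applies \( f \) first and then \( g \), matching the computations just carried out:

```python
# Right-action convention: k(fg) = (kf)g, so apply f first, then g.
# A permutation is the second row of its two-row matrix, as a list.
def compose(f, g):
    return [g[f[k] - 1] for k in range(len(f))]

x = [2, 1, 3]   # second row of the first matrix above
y = [3, 1, 2]   # second row of the second matrix above

print(compose(x, y))  # [1, 3, 2], as computed in the text
print(compose(y, x))  # [3, 2, 1], confirming yx != xy
```

The two printed products reproduce the displayed equalities, including the failure of commutativity.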
For example, \[ {x}^{2} = e,\;{y}^{3} = e, \] showing that \( {S}_{3} \) contains elements of order 1 (the identity \( e \) ), 2 (the element \( x \) ), and 3 (the element \( y \) ); cf. Exercise 2.2. (Incidentally, this shows that the result of Exercise 1.15 does require the commutativity hypothesis.) Also, \[ {yx} = \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 2 & 1 \end{array}\right) = x{y}^{2} \] as the reader may check. Using these relations, we see that every product of any assortment of \( x \) and \( y \), \( {x}^{{i}_{1}}{y}^{{i}_{2}}{x}^{{i}_{3}}{y}^{{i}_{4}}\cdots \), may be reduced to a product \( {x}^{i}{y}^{j} \) with \( 0 \leq i \leq 1,0 \leq j \leq 2 \), that is, to one of the six elements \[ e,\;y,\;{y}^{2},\;x,\;{xy},\;x{y}^{2} : \] for example, \[ {y}^{7}{x}^{13}{y}^{5} = {\left( {y}^{3}\right) }^{2}y{\left( {x}^{2}\right) }^{6}x{y}^{3}{y}^{2} = \left( {yx}\right) {y}^{2} = \left( {x{y}^{2}}\right) {y}^{2} = x{y}^{3}y = {xy}. \] On the other hand, these six elements are all distinct - this may be checked by cancellation and order considerations\( {}^{10} \). For example, if we had \( x{y}^{2} = y \), then we would get \( x = {y}^{-1} \) by cancellation, and this cannot be since the relations tell us that \( x \) has order 2 and \( {y}^{-1} \) has order 3 . The conclusion is that the six products displayed above must be all six elements of \( {S}_{3} \) : \[ {S}_{3} = \left\{ {e, x, y,{xy},{y}^{2}, x{y}^{2}}\right\} \] In the process we have verified that \( {S}_{3} \) may also be described as the group 'generated' by two elements \( x \) and \( y \), with the 'relations' \( {x}^{2} = e,{y}^{3} = e,{yx} = x{y}^{2} \) . More generally, a subset \( A \) of a group \( G \) 'generates' \( G \) if every element of \( G \) may be written as a product of elements of \( A \) and of inverses of elements of \( A \) . We will deal with this notion more formally in §6.3, and with descriptions of groups in terms of generators and relations in §8.2.

2.2. Dihedral groups. 
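The relations and the sample reduction above can likewise be verified mechanically. In this sketch (illustrative, not from the book), permutations are lists of images and the product acts on the right, as in the text:

```python
# Verify the relations x^2 = e, y^3 = e, yx = xy^2 in S_3, and that
# y^7 x^13 y^5 reduces to xy, using the right-action product k(fg) = (kf)g.
def compose(f, g):
    return [g[f[k] - 1] for k in range(len(f))]

def power(f, n):
    r = list(range(1, len(f) + 1))  # identity permutation
    for _ in range(n):
        r = compose(r, f)
    return r

e = [1, 2, 3]
x = [2, 1, 3]
y = [3, 1, 2]

assert power(x, 2) == e and power(y, 3) == e
assert compose(y, x) == compose(x, power(y, 2))   # yx = xy^2

# y^7 x^13 y^5 = xy
lhs = compose(compose(power(y, 7), power(x, 13)), power(y, 5))
assert lhs == compose(x, y)

# the six products e, y, y^2, x, xy, xy^2 are pairwise distinct
six = [e, y, power(y, 2), x, compose(x, y), compose(x, power(y, 2))]
assert len({tuple(s) for s in six}) == 6
print("all relations check out")
```

The final assertion confirms that the six reduced words exhaust \( {S}_{3} \) .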
A 'symmetry' is a transformation which preserves a structure. This is of course just a loose way to talk about automorphisms, when we may be too lazy to define rigorously the relevant category. As automorphisms of objects of a category, symmetries will naturally form groups. One context in which this notion may be visualized vividly is that of 'geometric figures' such as polygons in the plane or polyhedra in space. The relevant category could be defined as follows: let the objects be subsets of an ordinary plane \( {\mathbb{R}}^{2} \) and let morphisms between two subsets \( A, B \) consist of the 'rigid motions' of the plane (such as translations, rotations, or reflections about a line) which map \( A \) to a subset of \( B \) . A rigorous treatment of these notions would be too distracting at this point, so I will appeal to the intuition of the reader, as I do every now and then. From this perspective, the 'symmetries' of a subset of the plane are the rigid motions which map it onto itself; they clearly form a group. The dihedral groups may be defined as these groups of symmetries for the regular polygons. Placing the polygon so that it is centered at the origin (thereby excluding translations as possible symmetries), we see that the dihedral group for a regular \( n \) -sided polygon consists of the \( n \) rotations by multiples of \( {2\pi }/n \) radians about the origin and the \( n \) distinct reflections about lines through the origin and a vertex or a midpoint of a side. Thus, the dihedral group for a regular \( n \) -sided polygon consists of \( {2n} \) elements; I will denote\( {}^{11} \) this group by the symbol \( {D}_{2n} \) . Again, contemplating these groups, at least for small values of \( n \), is a wonderful exercise. There is a simple way to relate the dihedral groups to the symmetric groups of §2.1, capturing the fact that a symmetry of a regular polygon \( P \) is determined by the fate of the vertices of \( P \) . 
For example, label the vertices of an equilateral triangle clockwise by \( \mathbf{1},\mathbf{2},\mathbf{3} \) ; then a counterclockwise rotation by an angle of \( {2\pi }/3 \) sends vertex \( \mathbf{1} \) to vertex \( \mathbf{3},\mathbf{3} \) to \( \mathbf{2} \), and \( \mathbf{2} \) to \( \mathbf{1} \), and no other symmetry of the triangle does the same. \( {}^{10} \) It may of course also be checked by explicit computation of the correspo
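To see concretely that the symmetries of the equilateral triangle realize all of \( {S}_{3} \), one can generate the closure of the rotation and one reflection under composition. This is an illustrative sketch (not from the book); the vertex labels and the right-action product follow the conventions above:

```python
# Realize the symmetries of an equilateral triangle as permutations of
# its vertices 1, 2, 3 and check they close up into a group of 6
# elements -- all of S_3.
from itertools import permutations

def compose(f, g):
    return tuple(g[f[k] - 1] for k in range(3))

r = (3, 1, 2)   # rotation by 2*pi/3: sends 1 -> 3, 3 -> 2, 2 -> 1
s = (1, 3, 2)   # reflection through vertex 1: swaps 2 and 3

# generate the closure of {e, r, s} under composition
group = {(1, 2, 3), r, s}
while True:
    new = {compose(a, b) for a in group for b in group} - group
    if not new:
        break
    group |= new

print(len(group))                              # 6 = |D_6|
print(group == set(permutations((1, 2, 3))))   # every permutation arises
```

The closure has exactly \( 6 = \left| {D}_{6}\right| \) elements and coincides with \( {S}_{3} \), matching the observation that a symmetry is determined by its action on the vertices.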
## (GTM 73) Algebra, Definition 3.10
Definition 3.10. Let \( \mathrm{X} \) be a nonempty subset of a commutative ring \( \mathrm{R} \) . An element \( \mathrm{d}\varepsilon \mathrm{R} \) is a greatest common divisor of \( \mathrm{X} \) provided: (i) \( \mathrm{d} \mid \mathrm{a} \) for all \( \mathrm{a}\varepsilon \mathrm{X} \) ; (ii) \( \mathrm{c} \mid \mathrm{a} \) for all \( \mathrm{a}\varepsilon \mathrm{X} \Rightarrow \mathrm{c} \mid \mathrm{d} \) . Greatest common divisors do not always exist. For example, in the ring \( E \) of even integers 2 has no divisors at all, whence 2 and 4 have no (greatest) common divisor. Even when a greatest common divisor of \( {a}_{1},\ldots ,{a}_{n} \) exists, it need not be unique. However, any two greatest common divisors of \( X \) are clearly associates by (ii). Furthermore any associate of a greatest common divisor of \( X \) is easily seen to be a greatest common divisor of \( X \) . If \( R \) has an identity and \( {a}_{1},{a}_{2},\ldots ,{a}_{n} \) have \( {1}_{R} \) as a greatest common divisor, then \( {a}_{1},{a}_{2},\ldots {a}_{n} \) are said to be relatively prime. Theorem 3.11. Let \( {\mathrm{a}}_{1},\ldots ,{\mathrm{a}}_{\mathrm{n}} \) be elements of a commutative ring \( \mathrm{R} \) with identity. 
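The even-integer example can be confirmed by brute force. In this sketch (the search window of \( \pm {100} \) is an arbitrary illustrative bound; for even nonzero \( a, b \) one always has \( \left| {ab}\right| \geq 4 \), so the window loses nothing), no even \( a, b \) satisfy \( {ab} = 2 \) :

```python
# In the ring E of even integers, 2 has no divisors at all: a product
# of two nonzero even integers has absolute value at least 4.  A small
# search confirms this, so 2 and 4 have no common divisor in E.
evens = [n for n in range(-100, 101, 2) if n != 0]
divisors_of_2 = [a for a in evens if any(a * b == 2 for b in evens)]
print(divisors_of_2)  # [] -- no even number divides 2 inside E
```

Since nothing in \( E \) divides 2, the set \( \{ 2,4\} \) has no common divisor at all, let alone a greatest one.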
(i) \( \mathrm{d}\varepsilon \mathrm{R} \) is a greatest common divisor of \( \left\{ {{\mathrm{a}}_{1},\ldots ,{\mathrm{a}}_{\mathrm{n}}}\right\} \) such that \( \mathrm{d} = {\mathrm{r}}_{1}{\mathrm{a}}_{1} \) \( + \cdots + {\mathrm{r}}_{\mathrm{n}}{\mathrm{a}}_{\mathrm{n}} \) for some \( {\mathrm{r}}_{\mathrm{i}} \in \mathrm{R} \) if and only if \( \left( \mathrm{d}\right) = \left( {\mathrm{a}}_{1}\right) + \left( {\mathrm{a}}_{2}\right) + \cdots + \left( {\mathrm{a}}_{\mathrm{n}}\right) \) ; (ii) if \( \mathrm{R} \) is a principal ideal ring, then a greatest common divisor of \( {\mathrm{a}}_{1},\ldots ,{\mathrm{a}}_{\mathrm{n}} \) exists and every one is of the form \( {\mathrm{r}}_{1}{\mathrm{a}}_{1} + \cdots + {\mathrm{r}}_{\mathrm{n}}{\mathrm{a}}_{\mathrm{n}}\left( {{\mathrm{r}}_{\mathrm{i}}\varepsilon \mathrm{R}}\right) \) ; (iii) if \( \mathrm{R} \) is a unique factorization domain, then there exists a greatest common divisor of \( {\mathrm{a}}_{1},\ldots ,{\mathrm{a}}_{\mathrm{n}} \) . REMARK. Theorem 3.11(i) does not state that every greatest common divisor of \( {a}_{1},\ldots ,{a}_{n} \) is expressible as a linear combination of \( {a}_{1},\ldots ,{a}_{n} \) . In general this is not the case (Exercise 6.15). See also Exercise 12. SKETCH OF PROOF OF 3.11. (i) Use Definition 3.10 and Theorem 2.5. (ii) follows from (i). (iii) Each \( {a}_{i} \) has a factorization: \( {a}_{i} = {c}_{1}^{{m}_{i1}}{c}_{2}^{{m}_{i2}} \cdot \cdot \cdot {c}_{t}^{{m}_{it}} \) with \( {c}_{1},\ldots ,{c}_{t} \) distinct irreducible elements and each \( {m}_{ij} \geq 0 \) . Show that \( d = {c}_{1}{}^{{k}_{1}}{c}_{2}{}^{{k}_{2}}\cdots {c}_{t}{}^{{k}_{t}} \) is a greatest common divisor of \( {a}_{1},\ldots ,{a}_{n} \), where \( {k}_{j} = \min \left\{ {{m}_{1j},{m}_{2j},{m}_{3j},\ldots ,{m}_{nj}}\right\} \) . ## EXERCISES 1. A nonzero ideal in a principal ideal domain is maximal if and only if it is prime. 2. 
An integral domain \( R \) is a unique factorization domain if and only if every nonzero prime ideal in \( R \) contains a nonzero principal ideal that is prime. 3. Let \( R \) be the subring \( \{ a + b\sqrt{10} \mid a, b \in \mathbf{Z}\} \) of the field of real numbers. (a) The map \( N : R \rightarrow \mathbf{Z} \) given by \( a + b\sqrt{10} \mapsto \left( {a + b\sqrt{10}}\right) \left( {a - b\sqrt{10}}\right) \) \( = {a}^{2} - {10}{b}^{2} \) is such that \( N\left( {uv}\right) = N\left( u\right) N\left( v\right) \) for all \( u, v \in R \) and \( N\left( u\right) = 0 \) if and only if \( u = 0 \) . (b) \( u \) is a unit in \( R \) if and only if \( N\left( u\right) = \pm 1 \) . (c) \( 2,3,4 + \sqrt{10} \) and \( 4 - \sqrt{10} \) are irreducible elements of \( R \) . (d) \( 2,3,4 + \sqrt{10} \) and \( 4 - \sqrt{10} \) are not prime elements of \( R \) . [Hint: \( 3 \cdot 2 = 6 \) \( = \left( {4 + \sqrt{10}}\right) \left( {4 - \sqrt{10}}\right) \) .] 4. Show that in the integral domain of Exercise 3 every element can be factored into a product of irreducibles, but this factorization need not be unique (in the sense of Definition 3.5 (ii)). 5. Let \( R \) be a principal ideal domain. (a) Every proper ideal is a product \( {P}_{1}{P}_{2}\cdots {P}_{n} \) of maximal ideals, which are uniquely determined up to order. (b) An ideal \( P \) in \( R \) is said to be primary if \( {ab\varepsilon P} \) and \( a \notin P \) imply \( {b}^{n}{\varepsilon P} \) for some \( n \) . Show that \( P \) is primary if and only if for some \( n, P = \left( {p}^{n}\right) \), where \( p \in R \) is prime \( \left( { = \text{irreducible}}\right) \) or \( p = 0 \) . (c) If \( {P}_{1},{P}_{2},\ldots ,{P}_{n} \) are primary ideals such that \( {P}_{i} = \left( {p}_{i}^{{n}_{i}}\right) \) and the \( {p}_{i} \) are distinct primes, then \( {P}_{1}{P}_{2}\cdots {P}_{n} = {P}_{1} \cap {P}_{2} \cap \cdots \cap {P}_{n} \) . 
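The computations behind Exercises 3 and 4 are easy to script. The sketch below (illustrative, not part of the text) represents \( a + b\sqrt{10} \) as the pair \( \left( {a, b}\right) \), checks the multiplicativity of the norm on sample elements, and verifies that \( \pm 2 \) and \( \pm 3 \) are not squares mod 10, the key step in showing that \( 2,3,4 \pm \sqrt{10} \) are irreducible:

```python
# R = Z[sqrt(10)] with norm N(a + b*sqrt(10)) = a^2 - 10 b^2.
# 6 = 2 * 3 = (4 + sqrt(10)) * (4 - sqrt(10)) gives two factorizations.
def norm(a, b):
    return a * a - 10 * b * b

def mul(u, v):  # (a + b*sqrt(10)) * (c + d*sqrt(10))
    (a, b), (c, d) = u, v
    return (a * c + 10 * b * d, a * d + b * c)

assert mul((4, 1), (4, -1)) == (6, 0)        # (4+sqrt10)(4-sqrt10) = 6
assert norm(2, 0) == 4 and norm(3, 0) == 9
assert norm(4, 1) == 6 and norm(4, -1) == 6

# the norm is multiplicative on a sample pair
assert norm(*mul((4, 1), (2, 3))) == norm(4, 1) * norm(2, 3)

# a^2 - 10 b^2 is congruent to a^2 mod 10, and +-2, +-3 reduce to
# 2, 3, 7, 8 mod 10, none of which is a square mod 10
squares_mod_10 = {(a * a) % 10 for a in range(10)}
assert not ({2, 3, 7, 8} & squares_mod_10)
print("no element of Z[sqrt(10)] has norm +-2 or +-3")
```

Any factorization of, say, \( 2 \) into non-units would need a factor of norm \( \pm 2 \), which the mod-10 obstruction rules out.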
(d) Every proper ideal in \( R \) can be expressed (uniquely up to order) as the intersection of a finite number of primary ideals. 6. (a) If \( a \) and \( n \) are integers, \( n > 0 \), then there exist integers \( q \) and \( r \) such that \( a = {qn} + r \), where \( \left| r\right| \leq n/2 \) . (b) The Gaussian integers \( \mathbf{Z}\left\lbrack i\right\rbrack \) form a Euclidean domain with \( \varphi \left( {a + {bi}}\right) \) \( = {a}^{2} + {b}^{2} \) . [Hint: to show that Definition 3.8(ii) holds, first let \( y = a + {bi} \) and assume \( x \) is a positive integer. By part (a) there are integers such that \( a = {q}_{1}x + {r}_{1} \) and \( b = {q}_{2}x + {r}_{2} \), with \( \left| {r}_{1}\right| \leq x/2,\left| {r}_{2}\right| \leq x/2 \) . Let \( q = {q}_{1} + {q}_{2}i \) and \( r = {r}_{1} + {r}_{2}i \) ; then \( y = {qx} + r \), with \( r = 0 \) or \( \varphi \left( r\right) < \varphi \left( x\right) \) . In the general case, observe that for \( x = c + {di} \neq 0 \) and \( \bar{x} = c - {di}, x\bar{x} > 0 \) . There are \( q,{r}_{0}\varepsilon \mathbf{Z}\left\lbrack i\right\rbrack \) such that \( y\bar{x} = q\left( {x\bar{x}}\right) + {r}_{0} \), with \( {r}_{0} = 0 \) or \( \varphi \left( {r}_{0}\right) < \varphi \left( {x\bar{x}}\right) \) . Let \( r = y - {qx} \) ; then \( y = {qx} + r \) and \( r = 0 \) or \( \varphi \left( r\right) < \varphi \left( x\right) \) .] 7. What are the units in the ring of Gaussian integers \( \mathbf{Z}\left\lbrack i\right\rbrack \) ? 8. Let \( R \) be the following subring of the complex numbers: \( R = \{ a + b\left( {1 + \sqrt{19}i}\right) /2 \mid a, b \in \mathbf{Z}\} \) . Then \( R \) is a principal ideal domain that is not a Euclidean domain. 9. Let \( R \) be a unique factorization domain and \( d \) a nonzero element of \( R \) . There are only a finite number of distinct principal ideals that contain the ideal (d). [Hint: \( \left( d\right) \subset \left( k\right) \Rightarrow k \mid d \) .] 10. 
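The division step in the hint of Exercise 6(b) amounts to rounding the exact complex quotient to the nearest Gaussian integer. A minimal sketch (the sample values \( y = {27} + {23i} \) and \( x = 8 + i \) are arbitrary):

```python
# Division in Z[i]: round the exact quotient y/x to the nearest
# Gaussian integer q; the remainder r = y - qx then has phi(r) < phi(x).
def phi(z):  # phi(a + bi) = a^2 + b^2
    return z.real ** 2 + z.imag ** 2

def divmod_gauss(y, x):
    w = y / x
    q = complex(round(w.real), round(w.imag))  # nearest Gaussian integer
    return q, y - q * x

y, x = complex(27, 23), complex(8, 1)
q, r = divmod_gauss(y, x)
assert y == q * x + r
assert r == 0 or phi(r) < phi(x)
print(q, r)
```

Rounding both coordinates leaves each component of \( r/x \) at most \( 1/2 \) in absolute value, so \( \varphi \left( r\right) \leq \varphi \left( x\right) /2 < \varphi \left( x\right) \), which is exactly Definition 3.8(ii).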
If \( R \) is a unique factorization domain and \( a, b \in R \) are relatively prime and \( a \mid {bc} \) , then \( a \mid c \) . 11. Let \( R \) be a Euclidean ring and \( {a\varepsilon R} \) . Then \( a \) is a unit in \( R \) if and only if \( \varphi \left( a\right) = \varphi \left( {1}_{R}\right) \) . 12. Every nonempty set of elements (possibly infinite) in a commutative principal ideal ring with identity has a greatest common divisor. 13. (Euclidean algorithm). Let \( R \) be a Euclidean domain with associated function \( \varphi : R - \{ 0\} \rightarrow \mathbf{N} \) . If \( a, b \in R \) and \( b \neq 0 \), here is a method for finding the greatest common divisor of \( a \) and \( b \) . By repeated use of Definition 3.8(ii) we have: \[ a = {q}_{0}b + {r}_{1},\;\text{ with }\;{r}_{1} = 0\;\text{ or }\;\varphi \left( {r}_{1}\right) < \varphi \left( b\right) ; \] \[ b = {q}_{1}{r}_{1} + {r}_{2},\;\text{ with }\;{r}_{2} = 0\;\text{ or }\;\varphi \left( {r}_{2}\right) < \varphi \left( {r}_{1}\right) ; \] \[ {r}_{1} = {q}_{2}{r}_{2} + {r}_{3},\;\text{ with }\;{r}_{3} = 0\;\text{ or }\;\varphi \left( {r}_{3}\right) < \varphi \left( {r}_{2}\right) ; \] \[ \vdots \] \[ {r}_{k} = {q}_{k + 1}{r}_{k + 1} + {r}_{k + 2},\;\text{ with }\;{r}_{k + 2} = 0\;\text{ or }\;\varphi \left( {r}_{k + 2}\right) < \varphi \left( {r}_{k + 1}\right) ; \] Let \( {r}_{0} = b \) and let \( n \) be the least integer such that \( {r}_{n + 1} = 0 \) (such an \( n \) exists since the \( \varphi \left( {r}_{k}\right) \) form a strictly decreasing sequence of nonnegative integers). Show that \( {r}_{n} \) is the greatest common divisor of \( a \) and \( b \) . ## 4. RINGS OF QUOTIENTS AND LOCALIZATION In the first part of this section the familiar construction of the field of rational numbers from the ring of integers is considerably generalized. The rings of quotients so constructed from any commutative ring are characterized by a universal mapping property (Theorem 4.5). 
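Specialized to the Euclidean domain \( \mathbf{Z} \) with \( \varphi \left( n\right) = \left| n\right| \), the algorithm of Exercise 13 is a few lines; the last nonzero remainder is a greatest common divisor. (The sample inputs are arbitrary.)

```python
# Euclidean algorithm in Z, with phi(n) = |n|: repeatedly replace
# (a, b) by (b, r) where a = q*b + r and phi(r) < phi(b).
def euclid(a, b):
    while b != 0:
        a, b = b, a % b   # a = q*b + r with phi(r) < phi(b)
    return a

print(euclid(252, 198))  # 18
```

The loop terminates because the remainders have strictly decreasing \( \varphi \) -values, exactly as noted in the exercise.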
The last part of this section, which is referred to only occasionally in the sequel, deals with the (prime) ideal structure of rings of quotients and introduces localization at a prime ideal. Definition 4.1. A nonempty subset \( \mathrm{S} \) of a ring \( \mathrm{R} \) is multiplicative provided that \[ \mathrm{a},\mathrm{b}\varepsilon \mathrm{S} \Rightarrow \mathrm{{ab}}\varepsilon \mathrm{S}\text{.} \] EXAMPLES. The set \( S \) of all elements in a nonzero ring with identity that are not zero divisors is multiplicative. In particular, the set of all nonzero elements in an integral domain is multiplicative. The set of units in any ring with identity is a multiplicative set. If \( P \) is a prime ideal in a commutative ring \( R \), then both \( P \) and \( S = R - P \) are multiplicative sets by Theorem 2.15. The motivation for what follows may be seen most easily in the ring \( \mathbf{Z}
## (GTM 8) Axiomatic Set Theory, Definition 13.8
Definition 13.8. \( {\mathbf{V}}_{\alpha }^{\left( \mathbf{B}\right) } = \left\langle {{V}_{\alpha }^{\left( \mathbf{B}\right) }, \equiv ,\bar{ \in }}\right\rangle \) is defined by \[ \llbracket u = v{\rrbracket }_{\alpha } \triangleq \llbracket u = v\rrbracket \] \[ \llbracket u \in v{\rrbracket }_{\alpha } \triangleq \llbracket u \in v\rrbracket \] for \( u, v \in {V}_{\alpha }^{\left( \mathbf{B}\right) } \) . (We write \( \llbracket {\rrbracket }_{\alpha } \) for \( \llbracket {\rrbracket }_{{V}_{\alpha }\left( \mathbf{B}\right) } \) .) Remark. Thus \( {\mathbf{V}}_{\alpha }{}^{\left( \mathbf{B}\right) } \) is a \( \mathbf{B} \) -valued structure for \( {\mathcal{L}}_{0} \) . Next we shall prove that this sequence of structures satisfies the conditions specified in \( §9 \) . (See Remark following Definition 9.2) Obviously \( {\mathbf{V}}_{\alpha }{}^{\left( \mathbf{B}\right) } \) satisfies 1 and 2 . We will now show that \( {\mathbf{V}}_{\alpha }{}^{\left( \mathbf{B}\right) } \) also satisfies 3,4, and 5 . Theorem 13.9. \( {V}_{\alpha }^{\left( \mathbf{B}\right) } \) satisfies the Axiom of Extensionality. Proof. Let \( u, v \in {V}_{\alpha }^{\left( \mathbf{B}\right) } \) . 
Then \[ \llbracket \left( {\forall x}\right) \left\lbrack {x \in u \leftrightarrow x \in v}\right\rbrack {\rrbracket }_{\alpha } = \mathop{\prod }\limits_{{x \in {V}_{\alpha }\left( \mathbf{B}\right) }}\llbracket x \in u \rightarrow x \in v{\rrbracket }_{\alpha } \cdot \mathop{\prod }\limits_{{x \in {V}_{\alpha }\left( \mathbf{B}\right) }}\llbracket x \in v \rightarrow x \in u{\rrbracket }_{\alpha } \] \[ \leq \mathop{\prod }\limits_{{x \in \mathcal{D}\left( u\right) }}\left( {u\left( x\right) \Rightarrow \llbracket x \in v\rrbracket }\right) \cdot \mathop{\prod }\limits_{{x \in \mathcal{D}\left( v\right) }}\left( {v\left( x\right) \Rightarrow \llbracket x \in u\rrbracket }\right) \] \[ \text{by Theorem 13.4(3)} \] \[ = \llbracket u = v\rrbracket = \llbracket u = v{\rrbracket }_{\alpha }. \] Theorem 13.10. If \( u \in {V}_{\alpha + 1}^{\left( \mathbf{B}\right) } \) then \( u \) is defined over \( {V}_{\alpha }^{\left( \mathbf{B}\right) } \), i.e., \[ \left( {\forall v \in {V}_{\alpha + 1}^{\left( \mathbf{B}\right) }}\right) \left\lbrack {\llbracket v \in u\rrbracket = \mathop{\sum }\limits_{{x \in {V}_{\alpha }\left( \mathbf{B}\right) }}\llbracket v = x\rrbracket \llbracket x \in u\rrbracket }\right\rbrack . \] Proof. Let \( u \) and \( v \) be in \( {V}_{\alpha + 1}^{\left( \mathbf{B}\right) } \) . 
\[ \llbracket v \in u\rrbracket = \mathop{\sum }\limits_{{x \in \mathcal{D}\left( u\right) }}u\left( x\right) \cdot \llbracket v = x\rrbracket \] \[ \leq \mathop{\sum }\limits_{{x \in \mathcal{D}\left( u\right) }}\llbracket x \in u\rrbracket \cdot \llbracket x = v\rrbracket \;\text{ by Theorem }{13.4}\left( 3\right) \] \[ \leq \mathop{\sum }\limits_{{x \in {V}_{\alpha }\left( \mathbf{B}\right) }}\llbracket x \in u\rrbracket \cdot \llbracket x = v\rrbracket \;\text{ since }\;\mathcal{D}\left( u\right) \subseteq {V}_{\alpha }^{\left( \mathbf{B}\right) } \] \[ \leq \llbracket v \in u\rrbracket \;\text{by Theorem 13.6.} \] Therefore \[ \llbracket v \in u\rrbracket = \mathop{\sum }\limits_{{x \in {V}_{\alpha }\left( \mathbf{B}\right) }}\llbracket v = x\rrbracket \cdot \llbracket x \in u\rrbracket . \] Theorem 13.11. For every formula \( \varphi \) of \( {\mathcal{L}}_{0} \) , \[ \left( {\forall {a}_{1},\ldots ,{a}_{n} \in {V}_{\alpha }^{\left( \mathbf{B}\right) }}\right) \left( {\exists b \in {V}_{\alpha + 1}^{\left( \mathbf{B}\right) }}\right) \left( {\forall a \in {V}_{\alpha }^{\left( \mathbf{B}\right) }}\right) \left\lbrack {{\left\lbrack \varphi \left( a,{a}_{1},\ldots ,{a}_{n}\right) \right\rbrack }_{\alpha } = \llbracket a \in b\rrbracket }\right\rbrack . \] Proof. Let \( {a}_{1},\ldots ,{a}_{n} \in {V}_{\alpha }^{\left( \mathbf{B}\right) } \) and define \( b : {V}_{\alpha }^{\left( \mathbf{B}\right) } \rightarrow B \) by \[ b\left( a\right) = {\left\lbrack \varphi \left( a,{a}_{1},\ldots ,{a}_{n}\right) \right\rbrack }_{\alpha }\;\text{ for }\;a \in {V}_{\alpha }^{\left( \mathbf{B}\right) }. 
\] Then \( b \in {V}_{\alpha + 1}^{\left( \mathbf{B}\right) } \) and \[ \llbracket a \in b\rrbracket = \mathop{\sum }\limits_{{{a}^{\prime } \in {V}_{\alpha }\left( \mathbf{B}\right) }}{\left\lbrack \varphi \left( {a}^{\prime },{a}_{1},\ldots ,{a}_{n}\right) \right\rbrack }_{\alpha } \cdot \left\lbrack {{a}^{\prime } = a}\right\rbrack \] \[ \leq {\left\lbrack \varphi \left( a,{a}_{1},\ldots ,{a}_{n}\right) \right\rbrack }_{\alpha }\;\text{by the Axioms of Equality.} \] On the other hand, for \( a \in {V}_{\alpha }{}^{\left( \mathbf{B}\right) } \) \[ \llbracket a \in b\rrbracket \geq {\left\lbrack \varphi \left( a,{a}_{1},\ldots ,{a}_{n}\right) \right\rbrack }_{\alpha } \] by Theorem 13.4(3). Remark. Since the conditions of \( §9 \) are satisfied by \( \left\langle {{\mathbf{V}}_{\alpha }{}^{\left( \mathbf{B}\right) } \mid \alpha \in {On}}\right\rangle \), we have, by Theorem 9.26, the following result: Theorem 13.12. \( {\mathbf{V}}^{\left( \mathbf{B}\right) } \) is a \( \mathbf{B} \) -valued model of \( {ZF} \) . Remark. The following theorem is very useful. Theorem 13.13. For \( u \in {V}^{\left( \mathbf{B}\right) } \) , 1. \( \llbracket \left( {\exists x \in u}\right) \varphi \left( x\right) \rrbracket = \mathop{\sum }\limits_{{x \in \mathcal{D}\left( u\right) }}u\left( x\right) \cdot \llbracket \varphi \left( x\right) \rrbracket \) . 2. \( \llbracket \left( {\forall x \in u}\right) \varphi \left( x\right) \rrbracket = \mathop{\prod }\limits_{{x \in \mathcal{D}\left( u\right) }}\left( {u\left( x\right) \Rightarrow \llbracket \varphi \left( x\right) \rrbracket }\right) \) . Proof. 
For \( u \in {V}^{\left( \mathbf{B}\right) } \) , \[ \llbracket \left( {\exists x \in u}\right) \varphi \left( x\right) \rrbracket = \mathop{\sum }\limits_{{{x}^{\prime } \in {V}^{\left( \mathrm{B}\right) }}}\llbracket {x}^{\prime } \in u\rrbracket \cdot \llbracket \varphi \left( {x}^{\prime }\right) \rrbracket \] \[ = \mathop{\sum }\limits_{{{x}^{\prime } \in {V}^{\left( \mathbf{B}\right) }}}\mathop{\sum }\limits_{{x \in \mathcal{D}\left( u\right) }}u\left( x\right) \cdot \left\lbrack {x = {x}^{\prime }}\right\rbrack \cdot \left\lbrack {\varphi \left( {x}^{\prime }\right) }\right\rbrack \] \[ \leq \mathop{\sum }\limits_{{x \in \mathcal{D}\left( u\right) }}u\left( x\right) \cdot \left\lbrack {\varphi \left( x\right) }\right\rbrack \;\text{by the Axioms of Equality} \] \[ \leq \mathop{\sum }\limits_{{{x}^{\prime } \in {V}^{\left( \mathbf{B}\right) }}}\left\lbrack {{x}^{\prime } \in u}\right\rbrack \cdot \left\lbrack {\varphi \left( {x}^{\prime }\right) }\right\rbrack \;\text{ by Theorem }{13.4}\left( 3\right) . \] This proves 1, and 2 follows by duality. Definition 13.14. Let \( {\mathbf{B}}^{\prime } \) be a complete Boolean algebra. Then \( \mathbf{B} \) is a complete subalgebra of \( {\mathbf{B}}^{\prime } \) iff \( \mathbf{B} \) is a subalgebra of \( {\mathbf{B}}^{\prime },\mathbf{B} \) is complete, but in addition, for each \( A \subseteq \left| \mathbf{B}\right| \) \[ \prod {}^{\mathbf{B}}A = \prod {}^{{\mathbf{B}}^{\prime }}A \] and \[ \mathop{\sum }\limits^{\mathbf{B}}A = \mathop{\sum }\limits^{{\mathbf{B}}^{\prime }}A \] that is, a class \( A \subseteq \left| \mathbf{B}\right| \) has the same sup and inf relative to \( \mathbf{B} \) that it has relative to \( {\mathbf{B}}^{\prime } \) . Remark. Next we shall show how \( V \) can be embedded in \( {V}^{\left( \mathbf{B}\right) } \) . As preparation we prove the following. Theorem 13.15. Let \( \mathbf{B} \) be a complete subalgebra of the complete Boolean algebra \( {\mathbf{B}}^{\prime } \) . Then 1. 
\( {V}^{\left( \mathbf{B}\right) } \subseteq {V}^{\left( {\mathbf{B}}^{\prime }\right) } \) . 2. \( u, v \in {V}^{\left( \mathbf{B}\right) } \rightarrow \llbracket u \in v{\rrbracket }^{\left( \mathbf{B}\right) } = \llbracket u \in v{\rrbracket }^{\left( {\mathbf{B}}^{\prime }\right) } \land \llbracket u = v{\rrbracket }^{\left( \mathbf{B}\right) } = \llbracket u = v{\rrbracket }^{\left( {\mathbf{B}}^{\prime }\right) } \) . Proof. (By induction) 1. Obvious, since any function into \( B \) is also a function into \( {B}^{\prime } \) . 2. Follows from the fact that \( \Pi \) and \( \sum \), over values in \( \mathbf{B} \), are the same in \( \mathbf{B} \) and \( {\mathbf{B}}^{\prime } \) respectively. Remark. Since any (standard) set \( u \) may be identified with the function \( {f}_{u} \) having domain \( u \) and assuming the constant value 1 on \( u \), we expect that \( V \) can be identified with some part of \( {V}^{\left( \mathbf{B}\right) } \) . The corresponding mapping is defined in the following way. Definition 13.16. For \( y \in V,\check{y} \triangleq \{ \langle \check{x},1\rangle \mid x \in y\} \) is defined by recursion with respect to the well-founded \( \epsilon \) -relation. Remark. Obviously, \( \breve{y} \in {V}^{\left( 2\right) } \) . Theorem 13.17. For \( x, y \in V \) , 1. \( x \in y \leftrightarrow \llbracket \check{x} \in \check{y}\rrbracket = \mathbf{1} \land x \notin y \leftrightarrow \llbracket \check{x} \in \check{y}\rrbracket = \mathbf{0} \) , 2. \( x = y \leftrightarrow \llbracket \check{x} = \check{y}\rrbracket = \mathbf{1} \land x \neq y \leftrightarrow \llbracket \check{x} = \check{y}\rrbracket = \mathbf{0} \) , 3. \( \left( {\forall u \in {V}^{\left( 2\right) }}\right) \left( {\exists !v \in V}\right) \left\lbrack {\llbracket u = \check{v}\rrbracket = 1}\right\rbrack \) . Proof. 1 and 2 are proved simultaneously by induction from Definition 13.3. Proving this is in fact a very good exercise that we leave to the reader. 
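For the two-element Boolean algebra \( \mathbf{B} = \mathbf{2} \), the recursions defining \( \llbracket u = v\rrbracket \) and \( \llbracket u \in v\rrbracket \) can be run directly on check-names, which are in effect hereditarily finite sets; Theorem 13.17(1)-(2) then says they compute ordinary extensional equality and membership. A small sketch (illustrative, not from the text; frozensets stand in for names taking the constant value 1):

```python
# Over B = 2, the values [[u = v]] and [[u in v]] for check-names
# reduce to extensional equality and membership of hereditarily
# finite sets, here modelled as nested frozensets.
def eq(u, v):
    # [[u = v]]: every element of u is in v, and conversely
    return int(all(mem(x, v) for x in u) and all(mem(y, u) for y in v))

def mem(u, v):
    # [[u in v]]: some element of v equals u
    return int(any(eq(x, u) for x in v))

empty = frozenset()
one = frozenset({empty})          # the check-name of {0}
two = frozenset({empty, one})     # the check-name of {0, {0}}

assert eq(one, frozenset({frozenset()})) == 1
assert mem(empty, two) == 1 and mem(one, two) == 1
assert mem(two, two) == 0 and eq(one, two) == 0
print("check-name semantics is two-valued and extensional")
```

Over a general complete Boolean algebra the same recursions return arbitrary elements of \( B \), not just \( \mathbf{0} \) and \( \mathbf{1} \) ; the point of Theorem 13.17 is that check-names always land in \( \{ \mathbf{0},\mathbf{1}\} \) .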
In order to prove 3, let \( u \in {V}^{\left( 2\right) } \) and assume as induction hypothesis (i) \( \left( {\forall x \in \mathcal{D}\left( u
## (GTM 44) Elementary Algebraic Geometry, Definition 3.1
Definition 3.1. Let \( R \) be a ring and suppose the associated p.o. set \( \left( {\mathcal{I}, \subset }\right) \) satisfies the a.c.c.; that is, each strictly ascending chain of ideals of \( R \) , \( {\mathfrak{a}}_{1} \subsetneqq {\mathfrak{a}}_{2} \subsetneqq \ldots \), terminates after finitely many steps. Then by abuse of language, we say that \( R \) satisfies the a.c.c. Similarly, \( R \) satisfies the d.c.c. if \( \left( {\mathcal{I}, \subset }\right) \) does. We now turn our attention to proving that the a.c.c. holds for polynomial rings over a field. We begin by giving an equivalent formulation of the a.c.c. on \( R \) (Lemma 3.3). Definition 3.2. A basis (or base) for an ideal \( \mathfrak{a} \) in \( R \) is any collection \( \left\{ {a}_{\gamma }\right\} \) of elements \( {a}_{\gamma } \in \mathfrak{a} \) ( \( \gamma \) in some indexing set \( \Gamma \) ) such that \[ \mathfrak{a} = \left\{ {{r}_{{\gamma }_{1}}{a}_{{\gamma }_{1}} + \ldots + {r}_{{\gamma }_{k}}{a}_{{\gamma }_{k}} \mid {r}_{{\gamma }_{i}} \in R\text{ and }{\gamma }_{i} \in \Gamma }\right\} . \] We write \( \mathfrak{a} = \left( \left\{ {a}_{\gamma }\right\} \right) \), or \( \mathfrak{a} = \left( {{a}_{1},{a}_{2},\ldots }\right) \) if \( \Gamma \) is countable, and \( \mathfrak{a} = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) if \( \Gamma \) is finite. If we can write \( \mathfrak{a} = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \), we say \( \mathfrak{a} \) has a finite basis. Lemma 3.3. \( R \) satisfies the a.c.c. iff every ideal of \( R \) has a finite basis. Proof. \( \Rightarrow \) : Suppose some ideal \( \mathfrak{a} \) did not have a finite basis. Then one could find a sequence of elements \( {a}_{1},{a}_{2},\ldots \left( {{a}_{k} \in \mathfrak{a}}\right) \) such that \[ \left( {a}_{1}\right) \subsetneqq \left( {{a}_{1},{a}_{2}}\right) \subsetneqq \ldots , \] and \( R \) would not satisfy the a.c.c. 
\( \Leftarrow \) : Suppose \( R \) did not satisfy the a.c.c.; let \( {\mathfrak{a}}_{1} \subsetneqq {\mathfrak{a}}_{2} \subsetneqq \ldots \) be an infinite strict sequence. Then \( \mathfrak{a} = \mathop{\bigcup }\limits_{j}{\mathfrak{a}}_{j} \) is an ideal. The ideal \( \mathfrak{a} \) cannot have a finite basis \( {a}_{1},\ldots ,{a}_{n} \), since surely \( {a}_{1} \in {\mathfrak{a}}_{{j}_{1}} \) for some \( {j}_{1},{a}_{2} \in {\mathfrak{a}}_{{j}_{2}} \) for some \( {j}_{2} \), and so on. This would mean \( \mathop{\bigcup }\limits_{{k = 1}}^{n}{\mathfrak{a}}_{{j}_{k}} = \mathfrak{a} \), so the ideals \( {\mathfrak{a}}_{j} \) could strictly increase at most up to \( {\mathfrak{a}}_{{j}_{n}} \), contradicting the strictness of the chain. This explains the commonly-used alternate Definition 3.4. A ring satisfying the a.c.c. is said to satisfy the finite basis condition; such a ring is further called Noetherian. (This term is named after the German mathematician Emmy Noether (1882-1935), the daughter of Max Noether (1844-1921). M. Noether was the "father of algebraic geometry." E. Noether was a central figure in the development of modern ideal theory.) If \( R \) is any ring, then \( R\left\lbrack X\right\rbrack \) as usual denotes the ring of all polynomials in \( X \) with coefficients in \( R \) . Our main result of this section is Theorem 3.5 (Hilbert basis theorem). If \( R \) is Noetherian, so is \( R\left\lbrack X\right\rbrack \) . Before proving it, let us note Corollary 3.6. If \( k \) is a field, then \( k\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) is Noetherian. Proof. Certainly \( k \) satisfies the a.c.c. since it has only two ideals. 
Then by repeated application of Theorem 3.5, \( k\left\lbrack {X}_{1}\right\rbrack, k\left\lbrack {X}_{1}\right\rbrack \left\lbrack {X}_{2}\right\rbrack = k\left\lbrack {{X}_{1},{X}_{2}}\right\rbrack ,\ldots \) , \( k\left\lbrack {{X}_{1},\ldots ,{X}_{n - 1}}\right\rbrack \left\lbrack {X}_{n}\right\rbrack = k\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) must all be Noetherian. Remark 3.7. In the next section we apply the Hilbert basis theorem to get at once decomposition into irreducibles in \( \mathcal{I} \), and unique decomposition in \( \mathcal{J} \) and in \( \mathcal{V} \) . Remark 3.8. The Basis Theorem does not have a dual-that is, no polynomial ring \( R\left\lbrack {{X}_{1},\ldots ,{X}_{n}}\right\rbrack \) where \( n \geq 1 \) ever satisfies the d.c.c.; one strictly descending sequence is always \[ \left( {X}_{1}\right) \supsetneqq \left( {{X}_{1}{}^{2}}\right) \supsetneqq \left( {{X}_{1}{}^{3}}\right) \supsetneqq \ldots \] ## Note on the Hilbert basis theorem The basis theorem lies at the very foundations of algebraic geometry; it shows there are "fundamental building blocks," in the sense that each variety is uniquely the finite union of irreducible varieties (Theorem 4.4). This is very much akin to the fundamental theorem of arithmetic, which lies at the foundations of number theory; it says that every integer is a product of primes (the "building blocks"), and that this representation is unique (up to order and units.) The essential idea of the basis theorem, though couched in older language, led at once to a solution of one of the outstanding unsolved problems of mathematics in the period 1868-1888, known as "Gordan's problem" (in honor of Paul Gordan). Gordan's computational abilities were recognized as a youth, and he became the world's leading expert in unbelievably extended algorithms in a field of mathematics called invariant theory. 
In 1868 he found a long, computational proof of the basis theorem for two variables which showed, in essence, how to construct a specific base for a given ideal. Proving the generalization to \( n \) variables defied the attempts of some of the world’s most distinguished mathematicians. All their attempts were along the same basic path that Gordan followed and, one by one, they became trapped in a dense jungle of complicated algebraic computations. Now it was Hilbert's belief that the trick in doing mathematics is to start at the right end, and there can hardly be a more beautiful example of this than Hilbert's own solution to Gordan's problem. He looked at it as an existence problem rather than as a construction problem (wherein a basis is actually produced). In a short notice submitted in 1888 in the Nachrichten he showed in the \( n \) -variable case the existence of a finite basis for any ideal. Many in the mathematical community reacted by doubting that this was even mathematics; the philosophy of their day was that if you want to prove that something exists, you must explicitly find it. Thus Gordan saw the proof as akin to those of theologians for the existence of God, and his comment has become forever famous: "Das ist nicht Mathematik. Das ist Theologie." However, later Hilbert was able to build upon his existence proof, and he actually found a general constructive proof. This served as a monumental vindication of Hilbert's outlook and began a revolution in mathematical thinking. Even Gordan had to admit that theology had its merits. Hilbert's philosophy, so simple, yet so important, may perhaps be looked at this way: If we see a fly in an airtight room and then it hides from us, we still know there is a fly in the room even though we cannot specify its coordinates. 
Acceptance of this broader viewpoint has made possible some of the most elegant and important contributions to mathematics, and mathematicians of today would find themselves hopelessly straitjacketed by a reversion to the attitude that you must find it to show it exists. (For an absorbing account of Hilbert's life and times, see [Reid].) The following proof is essentially Hilbert's-his language was a bit different, and he took \( R \) to be the integers, but the basic ideas are all the same. Proof of the Basis Theorem. We show that if \( R \) satisfies the finite basis condition, then so does \( R\left\lbrack X\right\rbrack \) . First, if \( {r}_{0}{X}^{n} + \ldots + {r}_{n}\left( {{r}_{0} \neq 0}\right) \) is any nonzero polynomial of \( R\left\lbrack X\right\rbrack \), we call \( {r}_{0} \) the leading coefficient of the polynomial. Now let \( \mathfrak{A} \) be any ideal of \( R\left\lbrack X\right\rbrack \) . Then \( \mathfrak{A} \) induces an ideal \( \mathfrak{a} \) in \( R \) , as well as smaller ideals \( {\mathfrak{a}}_{k} \) in \( R \), as follows: Let \( \mathfrak{a} \) consist of 0 together with all leading coefficients of all polynomials in \( \mathfrak{A} \) . (We show that this is an ideal in a moment.) Since \( R \) is Noetherian, for some \( N,\mathfrak{a} = \left( {{a}_{1},\ldots ,{a}_{N}}\right) \), where \( {a}_{i} \in R \) . Let \( {p}_{i}\left( X\right) \in \mathfrak{A} \) have \( {a}_{i} \) as leading coefficient and let \( {m}^{ * } = \max \left( {\deg {p}_{1},\ldots ,\deg {p}_{N}}\right) \) . Then for each \( k < {m}^{ * } \), let \( {\mathfrak{a}}_{k} \) consist of 0 together with all leading coefficients of all polynomials in \( \mathfrak{A} \) whose degree is equal to or less than \( k \) . We now show \( \mathfrak{a} \) is an ideal. (The proof for \( {\mathfrak{a}}_{k} \) is similar.) 
First, \( \mathfrak{a} \) is closed under subtraction, for \( a, b \in \mathfrak{a} \) implies that there are polynomials \( p\left( X\right) = \) \( a{X}^{m} + \mathop{\sum }\limits_{{i = 1}}^{m}{c}_{i}{X}^{m - i} \) and \( q\left( X\right) = b{X}^{n} + \mathop{\sum }\limits_{{i = 1}}^{n}{d}_{i}{X}^{n - i} \) in \( \mathfrak{A} \) . Then \( m \geq n \) implies that \( p\left( X\right) - \left( {{X}^{m - n}q\left( X\right) }\right) \in \mathfrak{A} \) ; if \( a = b \), then \( a - b = 0 \in \mathfrak{a} \), and if \( a \neq b \), then \( a - b \in \mathfrak{a} \) since \( a - b \) is then the leading coefficient of \( p\left( X\right) - \left( {{X}^{m - n}q\left( X\right) }\right) . \) Second, \( \mathfrak{a} \) has the absorption property, for if \( r \in R \), then \( r \neq 0 \) implies that the leading coefficient of \( {rp}\left( X\right) \) is \( {ra} \in \mathfrak{a} \), and \( r = 0 \) implies
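The finite basis condition of Lemma 3.3 can be watched stabilizing an ascending chain in the simplest Noetherian ring \( \mathbb{Z} \), where every ideal \( \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) is principal, generated by \( \gcd \left( {{a}_{1},\ldots ,{a}_{n}}\right) \). A minimal sketch (Python; the example numbers and helper name are ours):

```python
from math import gcd
from functools import reduce

def ideal_generator(gens):
    """In Z every ideal (a_1, ..., a_n) is principal: it equals (gcd(a_1, ..., a_n))."""
    return reduce(gcd, gens, 0)

# The ascending chain (a_1) <= (a_1, a_2) <= ... from the proof of Lemma 3.3.
# In Z the generator can only shrink (divide properly) finitely often,
# so the chain must stabilize.
elements = [60, 210, 84, 18, 35]
chain = [ideal_generator(elements[:k]) for k in range(1, len(elements) + 1)]
assert chain == [60, 30, 6, 6, 1]  # (60) < (30) < (6) = (6) < (1): strict growth stops
```

The initializer 0 in `reduce` is harmless since \( \gcd \left( {0, a}\right) = a \), matching the convention that the empty set generates the zero ideal.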
1167_(GTM73)Algebra
Definition 1.3
Definition 1.3. A nonzero element a in a ring \( \mathrm{R} \) is said to be a left [resp. right] zero divisor if there exists a nonzero \( \mathrm{b}\varepsilon \mathrm{R} \) such that \( \mathrm{{ab}} = 0 \) [resp. \( \mathrm{{ba}} = 0 \) ]. \( A \) zero divisor is an element of \( \mathrm{R} \) which is both a left and a right zero divisor. It is easy to verify that a ring \( R \) has no zero divisors if and only if the right and left cancellation laws hold in \( R \) ; that is, for all \( a, b, c \in R \) with \( a \neq 0 \) , \[ {ab} = {ac}\text{ or }{ba} = {ca}\; \Rightarrow \;b = c. \] Definition 1.4. An element \( \mathrm{a} \) in a ring \( \mathrm{R} \) with identity is said to be left [resp. right] invertible if there exists \( \mathrm{c}\varepsilon \mathrm{R} \) [resp. \( \mathrm{b}\varepsilon \mathrm{R} \) ] such that \( \mathrm{{ca}} = {1}_{\mathrm{R}} \) [resp. \( \mathrm{{ab}} = {1}_{\mathrm{R}} \) ]. The element \( \mathrm{c} \) [resp. \( \mathrm{b} \) ] is called a left [resp. right] inverse of \( \mathrm{a} \) . An element \( \mathrm{a}\varepsilon \mathrm{R} \) that is both left and right invertible is said to be invertible or to be a unit. REMARKS. (i) The left and right inverses of a unit \( a \) in a ring \( R \) with identity necessarily coincide (since \( {ab} = {1}_{R} = {ca} \) implies \( b = {1}_{R}b = \left( {ca}\right) b = c\left( {ab}\right) = c{1}_{R} = c \) ). (ii) The set of units in a ring \( R \) with identity forms a group under multiplication. Definition 1.5. A commutative ring \( \mathrm{R} \) with identity \( {1}_{\mathrm{R}} \neq 0 \) and no zero divisors is called an integral domain. A ring \( \mathrm{D} \) with identity \( {1}_{\mathrm{D}} \neq 0 \) in which every nonzero element is a unit is called a division ring. \( A \) field is a commutative division ring. REMARKS. (i) Every integral domain and every division ring has at least two elements (namely 0 and \( {1}_{R} \) ). 
(ii) A ring \( R \) with identity is a division ring if and only if the nonzero elements of \( R \) form a group under multiplication (see Remark (ii) after Definition 1.4). (iii) Every field \( F \) is an integral domain since \( {ab} = 0 \) and \( a \neq 0 \) imply that \( b = {1}_{F}b = \left( {{a}^{-1}a}\right) b = {a}^{-1}\left( {ab}\right) = {a}^{-1}0 = 0 \) . EXAMPLES. The ring \( \mathbf{Z} \) of integers is an integral domain. The set \( E \) of even integers is a commutative ring without identity. Each of \( \mathbf{Q} \) (rationals), \( \mathbf{R} \) (real numbers), and \( \mathbf{C} \) (complex numbers) is a field under the usual operations of addition and multiplication. The \( n \times n \) matrices over \( \mathbf{Q} \) (or \( \mathbf{R} \) or \( \mathbf{C} \) ) form a noncommutative ring with identity. The units in this ring are precisely the nonsingular matrices. EXAMPLE. For each positive integer \( n \) the set \( {Z}_{n} \) of integers modulo \( n \) is a ring. See the example after Theorem I.1.5 for details. If \( n \) is not prime, say \( n = {kr} \) with \( k > 1, r > 1 \), then \( \bar{k} \neq \bar{0},\bar{r} \neq \bar{0} \) and \( \bar{k}\bar{r} = \overline{kr} = \bar{n} = \bar{0} \) in \( {Z}_{n} \), whence \( \bar{k} \) and \( \bar{r} \) are zero divisors. If \( p \) is prime, then \( {Z}_{p} \) is a field by Exercise I.1.7. EXAMPLE. Let \( A \) be an abelian group and let End \( A \) be the set of endomorphisms \( f : A \rightarrow A \) . Define addition in End \( A \) by \( \left( {f + g}\right) \left( a\right) = f\left( a\right) + g\left( a\right) \) . Verify that \( f + g \in \operatorname{End}A \) . Since \( A \) is abelian, this makes End \( A \) an abelian group. Let multiplication in End \( A \) be given by composition of functions. Then End \( A \) is a (possibly noncommutative) ring with identity \( {1}_{A} : A \rightarrow A \) . EXAMPLE. Let \( G \) be a (multiplicative) group and \( R \) a ring. 
Let \( R\left( G\right) \) be the additive abelian group \( \mathop{\sum }\limits_{{g \in G}}R \) (one copy of \( R \) for each \( g \in G \) ). It will be convenient to adopt a new notation for the elements of \( R\left( G\right) \) . An element \( x = {\left\{ {r}_{g}\right\} }_{g \in G} \) of \( R\left( G\right) \) has only finitely many nonzero coordinates, say \( {r}_{{g}_{1}},\ldots ,{r}_{{g}_{n}}\left( {{g}_{i} \in G}\right) \) . Denote \( x \) by the formal sum \( {r}_{{g}_{1}}{g}_{1} + {r}_{{g}_{2}}{g}_{2} + \cdots + {r}_{{g}_{n}}{g}_{n} \) or \( \mathop{\sum }\limits_{{i = 1}}^{n}{r}_{{g}_{i}}{g}_{i} \) . We also allow the possibility that some of the \( {r}_{{g}_{i}} \) are zero or that some \( {g}_{i} \) are repeated, so that an element of \( R\left( G\right) \) may be written in formally different ways (for example, \( {r}_{1}{g}_{1} + 0{g}_{2} = {r}_{1}{g}_{1} \) or \( \left. {{r}_{1}{g}_{1} + {s}_{1}{g}_{1} = \left( {{r}_{1} + {s}_{1}}\right) {g}_{1}}\right) \) . In this notation, addition in the group \( R\left( G\right) \) is given by: \[ \mathop{\sum }\limits_{{i = 1}}^{n}{r}_{{g}_{i}}{g}_{i} + \mathop{\sum }\limits_{{i = 1}}^{n}{s}_{{g}_{i}}{g}_{i} = \mathop{\sum }\limits_{{i = 1}}^{n}\left( {{r}_{{g}_{i}} + {s}_{{g}_{i}}}\right) {g}_{i} \] (by inserting zero coefficients if necessary we can always assume that two formal sums involve exactly the same indices \( {g}_{1},\ldots ,{g}_{n} \) ). Define multiplication in \( R\left( G\right) \) by \[ \left( {\mathop{\sum }\limits_{{i = 1}}^{n}{r}_{i}{g}_{i}}\right) \left( {\mathop{\sum }\limits_{{j = 1}}^{m}{s}_{j}{h}_{j}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{m}\left( {{r}_{i}{s}_{j}}\right) \left( {{g}_{i}{h}_{j}}\right) \] this makes sense since there is a product defined in both \( R\left( {{r}_{i}{s}_{j}}\right) \) and \( G\left( {{g}_{i}{h}_{j}}\right) \) and thus the expression on the right is a formal sum as desired. 
With these operations \( R\left( G\right) \) is a ring, called the group ring of \( G \) over \( R.R\left( G\right) \) is commutative if and only if both \( R \) and \( G \) are commutative. If \( R \) has an identity \( {1}_{R} \), and \( e \) is the identity element of \( G \) , then \( {1}_{R}e \) is the identity element of \( R\left( G\right) \) . EXAMPLE. Let \( \mathbf{R} \) be the field of real numbers and \( S \) the set of symbols \( 1, i, j, k \) . Let \( K \) be the additive abelian group \( \mathbf{R}\bigoplus \mathbf{R}\bigoplus \mathbf{R}\bigoplus \mathbf{R} \) and write the elements of \( K \) as formal sums \( \left( {{a}_{0},{a}_{1},{a}_{2},{a}_{3}}\right) = {a}_{0}1 + {a}_{1}i + {a}_{2}j + {a}_{3}k \) . Then \( {a}_{0}1 + {a}_{1}i + {a}_{2}j + {a}_{3}k = \) \( {b}_{0}1 + {b}_{1}i + {b}_{2}j + {b}_{3}k \) if and only if \( {a}_{i} = {b}_{i} \) for every \( i \) . We adopt the conventions that \( {a}_{0}{1\varepsilon K} \) is identified with \( {a}_{0}\varepsilon \mathbf{R} \) and that terms with zero coefficients may be omitted (for example, \( 4 + {2j} = 4 \cdot 1 + {0i} + {2j} + {0k} \) and \( i = 0 + {1i} + {0j} + {0k} \) ). Then addition in \( K \) is given by \[ \left( {{a}_{0} + {a}_{1}i + {a}_{2}j + {a}_{3}k}\right) + \left( {{b}_{0} + {b}_{1}i + {b}_{2}j + {b}_{3}k}\right) \] \[ = \left( {{a}_{0} + {b}_{0}}\right) + \left( {{a}_{1} + {b}_{1}}\right) i + \left( {{a}_{2} + {b}_{2}}\right) j + \left( {{a}_{3} + {b}_{3}}\right) k. \] Define multiplication in \( K \) by \[ \left( {{a}_{0} + {a}_{1}i + {a}_{2}j + {a}_{3}k}\right) \left( {{b}_{0} + {b}_{1}i + {b}_{2}j + {b}_{3}k}\right) \] \[ = \left( {{a}_{0}{b}_{0} - {a}_{1}{b}_{1} - {a}_{2}{b}_{2} - {a}_{3}{b}_{3}}\right) + \left( {{a}_{0}{b}_{1} + {a}_{1}{b}_{0} + {a}_{2}{b}_{3} - {a}_{3}{b}_{2}}\right) i \] \[ + \left( {{a}_{0}{b}_{2} + {a}_{2}{b}_{0} + {a}_{3}{b}_{1} - {a}_{1}{b}_{3}}\right) j + \left( {{a}_{0}{b}_{3} + {a}_{3}{b}_{0} + {a}_{1}{b}_{2} - {a}_{2}{b}_{1}}\right) k. 
\] This product formula is obtained by multiplying the formal sums term by term subject to the following relations: (i) associativity; (ii) \( {ri} = {ir};{rj} = {jr},{rk} = {kr} \) (for all \( r \in \mathbf{R}) \) ; (iii) \( {i}^{2} = {j}^{2} = {k}^{2} = {ijk} = - 1;{ij} = - {ji} = k;{jk} = - {kj} = i;{ki} = - {ik} = j \) . Under this product \( K \) is a noncommutative division ring in which the multiplicative inverse of \( {a}_{0} + {a}_{1}i + {a}_{2}j + {a}_{3}k \) is \( \left( {{a}_{0}/d}\right) - \left( {{a}_{1}/d}\right) i - \left( {{a}_{2}/d}\right) j - \left( {{a}_{3}/d}\right) k \), where \( d = {a}_{0}{}^{2} + {a}_{1}{}^{2} + {a}_{2}{}^{2} + {a}_{3}{}^{2} \) . \( K \) is called the division ring of real quaternions. The quaternions may also be interpreted as a certain subring of the ring of all \( 2 \times 2 \) matrices over the field \( \mathbf{C} \) of complex numbers (Exercise 8). Definition 1.1 shows that under multiplication the elements of a ring \( R \) form a semigroup (a monoid if \( R \) has an identity). Consequently Definition I.1.8 is applicable and exponentiation is defined in \( R \) . We have for each \( a \in R \) and \( {n\varepsilon }{\mathbf{N}}^{ * } \) , \( {a}^{n} = a\cdots a \) ( \( n \) factors) and \( {a}^{0} = {1}_{R} \) if \( R \) has an identity. By Theorem I.1.9 \[ {a}^{m}{a}^{n} = {a}^{m + n}\text{ and }{\left( {a}^{m}\right) }^{n} = {a}^{mn} \] Subtraction in a ring \( R \) is defined in the usual way: \( a - b = a + \left( {-b}\right) \) . Clearly \( a\left( {b - c}\right) = {ab} - {ac} \) and \( \left( {a - b}\right) c = {ac} - {bc} \) for all \( a, b, c \in R \) . The next theorem is frequently useful in computations. Recall that if \( k \) and \( n \) are integers with \( 0 \leq k \leq n \), then the binomial coefficient \( \left( \begin{array}{l} n \\ k \end{array}\right) \) is the number \( n!/\left( {n - k}\right) !k! \), where \( 0! = 1 \) and \( n!
= n\left( {n - 1}\right) \left( {n - 2}\right) \cdots 2 \cdot 1 \) for \( n \geq 1 \) . \( \left( \begin{array}{l} n \\ k \end{array}\right) \) is actually an integer (Exercise 10). Theorem 1.6. (Binomial Theorem).
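The quaternion product formula above translates directly into code. A sketch (Python; the function names are ours) that multiplies coefficient 4-tuples by the displayed formula and checks the relations \( {i}^{2} = {j}^{2} = {k}^{2} = {ijk} = -1 \), \( {ij} = -{ji} = k \), and the stated inverse with \( d = {a}_{0}{}^{2} + {a}_{1}{}^{2} + {a}_{2}{}^{2} + {a}_{3}{}^{2} \):

```python
from fractions import Fraction

def qmul(p, q):
    """Product of quaternions p = a0 + a1*i + a2*j + a3*k and q = b0 + b1*i + b2*j + b3*k,
    represented as coefficient 4-tuples, via the product formula in the text."""
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 + a2*b0 + a3*b1 - a1*b3,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def qinv(p):
    """(a0/d) - (a1/d)i - (a2/d)j - (a3/d)k with d = a0^2 + a1^2 + a2^2 + a3^2."""
    a0, a1, a2, a3 = p
    d = a0*a0 + a1*a1 + a2*a2 + a3*a3
    return (Fraction(a0, d), Fraction(-a1, d), Fraction(-a2, d), Fraction(-a3, d))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k, ji = -k
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)              # ijk = -1
p = (1, 2, 3, 4)
assert qmul(p, qinv(p)) == (1, 0, 0, 0)                  # every nonzero element is a unit
```

Exact rational arithmetic via `Fraction` keeps the inverse check an equality rather than a floating-point approximation.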
1068_(GTM227)Combinatorial Commutative Algebra
Definition 4.2
Definition 4.2 Suppose \( X \) is a labeled cell complex, by which we mean that its \( r \) vertices have labels that are vectors \( {\mathbf{a}}_{1},\ldots ,{\mathbf{a}}_{r} \) in \( {\mathbb{N}}^{n} \) . The label on an arbitrary face \( F \) of \( X \) is the exponent \( {\mathbf{a}}_{F} \) on the least common multiple \( \operatorname{lcm}\left( {{\mathbf{x}}^{{\mathbf{a}}_{i}} \mid i \in F}\right) \) of the monomial labels \( {\mathbf{x}}^{{\mathbf{a}}_{i}} \) on vertices in \( F \) . The point of labeling a cell complex \( X \) is to get enough data to construct a monomial matrix for a complex of \( {\mathbb{N}}^{n} \) -graded free modules over the polynomial ring \( S = \mathbb{k}\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . Definition 4.3 Let \( X \) be a labeled cell complex. The cellular monomial matrix supported on \( X \) uses the reduced chain complex of \( X \) for scalar entries, with \( \varnothing \) in homological degree 0 . Row and column labels are those on the corresponding faces of \( X \) . The cellular free complex \( {\mathcal{F}}_{X} \) supported on \( X \) is the complex of \( {\mathbb{N}}^{n} \) -graded free \( S \) -modules (with basis) represented by the cellular monomial matrix supported on \( X \) . The free complex \( {\mathcal{F}}_{X} \) is a cellular resolution if it is acyclic (homology only in degree 0). By convention, the label on the empty face \( \varnothing \in X \) is \( \mathbf{0} \in {\mathbb{N}}^{n} \), which is the exponent on \( 1 \in S \), the least common multiple of no monomials. It is also possible to write down the differential \( \partial \) of \( {\mathcal{F}}_{X} \) without using monomial matrices, where it can be written as \[ {\mathcal{F}}_{X} = {\bigoplus }_{F \in X}S\left( {-{\mathbf{a}}_{F}}\right) ,\;\partial \left( F\right) \; = \mathop{\sum }\limits_{{\text{facets }G\text{ of }F}}\operatorname{sign}\left( {G, F}\right) {\mathbf{x}}^{{\mathbf{a}}_{F} - {\mathbf{a}}_{G}}G. 
\] The symbols \( F \) and \( G \) here are thought of both as faces of \( X \) and as basis vectors in degrees \( {\mathbf{a}}_{F} \) and \( {\mathbf{a}}_{G} \) . The sign for \( \left( {G, F}\right) \) equals \( \pm 1 \) and is part of the data in the boundary map of the chain complex of \( X \) . Example 4.4 The following labeled hexagon appears as a face of the three-dimensional polytope at the beginning of this chapter: ![9d852306-8a03-41f2-b2e7-a141e7b451e2_74_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_74_0.jpg) Given the orientations that we have chosen for the faces of \( X \), the cellular free complex \( {\mathcal{F}}_{X} \) supported by this labeled hexagon is written as follows: ![9d852306-8a03-41f2-b2e7-a141e7b451e2_74_1.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_74_1.jpg) ![9d852306-8a03-41f2-b2e7-a141e7b451e2_74_2.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_74_2.jpg) This is the representation of the resolution in terms of cellular monomial matrices. The arrows drawn in and on the hexagon denote the orientations of its faces, which determine the values of \( \operatorname{sign}\left( {G, F}\right) \) . For example, in the non-monomial matrix way of writing cellular free complexes, \( \partial \) of the hexagon's 2-cell is the signed sum of its six edge generators, with monomial coefficients \( {b}^{2},{bc},{c}^{2},{ac},{a}^{2} \), and \( {ab} \) (the edge basis vectors appear pictorially in the original display). Given two vectors \( \mathbf{a},\mathbf{b} \in {\mathbb{N}}^{n} \), we write \( \mathbf{a} \preccurlyeq \mathbf{b} \) and say that \( \mathbf{a} \) precedes \( \mathbf{b} \) , if \( \mathbf{b} - \mathbf{a} \in {\mathbb{N}}^{n} \) . A subset \( Q \subseteq {\mathbb{N}}^{n} \) is an order ideal if \( \mathbf{a} \in Q \) whenever \( \mathbf{b} \in Q \) and \( \mathbf{a} \preccurlyeq \mathbf{b} \) . Loosely, \( Q \) is "closed under going down" in the partial order on \( {\mathbb{N}}^{n} \) . 
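The labeling and order-ideal conventions above are directly computable. A small sketch (Python; the labeled triangle and all names are ours, for illustration) forming \( {\mathbf{a}}_{F} \) as a coordinatewise maximum of vertex exponent vectors and extracting the subcomplex of faces whose labels precede a fixed \( \mathbf{b} \):

```python
def face_label(face, labels):
    """a_F: exponent on lcm(x^{a_i} | i in F), i.e. the coordinatewise max of the
    vertex labels; the empty face gets label 0, the exponent on 1."""
    n = len(next(iter(labels.values())))
    if not face:
        return (0,) * n
    return tuple(max(labels[v][c] for v in face) for c in range(n))

def subcomplex_leq(X, labels, b):
    """X_{<= b}: the faces of X whose label is coordinatewise at most b."""
    return [F for F in X if all(a <= bc for a, bc in zip(face_label(F, labels), b))]

# A triangle whose vertices carry the monomial labels a^2, b^2, c^2 (vectors in N^3):
labels = {1: (2, 0, 0), 2: (0, 2, 0), 3: (0, 0, 2)}
X = [frozenset(s) for s in
     [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]]
assert face_label(frozenset({1, 2}), labels) == (2, 2, 0)   # lcm(a^2, b^2) = a^2 b^2
trimmed = subcomplex_leq(X, labels, (2, 2, 0))
assert frozenset({3}) not in trimmed and len(trimmed) == 4  # faces touching vertex 3 drop out
```

Since \( \{ \mathbf{a} \preccurlyeq \mathbf{b}\} \) is an order ideal, `trimmed` is automatically closed under taking faces.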
For an order ideal \( Q \), define the labeled subcomplex \[ {X}_{Q} = \left\{ {F \in X \mid {\mathbf{a}}_{F} \in Q}\right\} \] of a labeled cell complex \( X \) . For each \( \mathbf{b} \in {\mathbb{N}}^{n} \) there are two important such subcomplexes. By \( {X}_{ \preccurlyeq \mathbf{b}} \) we mean the subcomplex of \( X \) consisting of all faces with labels coordinatewise at most \( \mathbf{b} \) . Similarly, denote by \( {X}_{ \prec \mathbf{b}} \) the subcomplex of \( X \) consisting of all faces with labels \( \prec \mathbf{b} \), where \( {\mathbf{b}}^{\prime } \prec \mathbf{b} \) if \( {\mathbf{b}}^{\prime } \preccurlyeq \mathbf{b} \) and \( {\mathbf{b}}^{\prime } \neq \mathbf{b} \) . A fundamental property of cellular free complexes is that their acyclicity can be determined using merely the geometry of polyhedral cell complexes. Let us call a cell complex acyclic if it is either empty or has zero reduced homology. In the empty case, its only homology lies in homological degree -1 . The property of being acyclic depends on the underlying field \( \mathbb{k} \) , as we shall see in Section 4.3.5. Proposition 4.5 The cellular free complex \( {\mathcal{F}}_{X} \) supported on \( X \) is a cellular resolution if and only if \( {X}_{ \preccurlyeq \mathbf{b}} \) is acyclic over \( \mathbb{k} \) for all \( \mathbf{b} \in {\mathbb{N}}^{n} \) . When \( {\mathcal{F}}_{X} \) is acyclic, it is a free resolution of \( S/I \), where \( I = \left\langle {{\mathbf{x}}^{{\mathbf{a}}_{v}} \mid v \in X\text{ is a vertex}}\right\rangle \) is generated by the monomial labels on vertices. Proof. The free modules contributing to the part of \( {\mathcal{F}}_{X} \) in degree \( \mathbf{b} \in {\mathbb{N}}^{n} \) are precisely those generated in degrees \( \preccurlyeq \mathbf{b} \) . 
This proves the criterion for acyclicity, noting that if this degree \( \mathbf{b} \) complex is acyclic, then its homology contributes to the homology of \( {\mathcal{F}}_{X} \) in homological degree 0 . If \( {\mathcal{F}}_{X} \) is acyclic, then it resolves \( S/I \) because the image of its last map equals \( I \subseteq S \) . Example 4.6 Let \( I \) be the ideal whose generating exponents are the vertex labels on the right-hand cell complex in Fig. 4.1. The label '215' in the diagrams is short for \( \left( {2,1,5}\right) \) . The labeled complex \( X \) on the left supports a cellular minimal free resolution of \( S/\left( {I + \left\langle {{x}^{5},{y}^{6},{z}^{6}}\right\rangle }\right) \), so Proposition 4.5 implies that the subcomplex \( {\mathcal{F}}_{{X}_{ \preccurlyeq {455}}} \) resolves \( S/I \) . ![9d852306-8a03-41f2-b2e7-a141e7b451e2_76_0.jpg](images/9d852306-8a03-41f2-b2e7-a141e7b451e2_76_0.jpg) Figure 4.1: The cell complexes from Example 4.6 ## 4.2 Betti numbers and \( K \) -polynomials Given a monomial ideal \( I \) with a cellular resolution \( {\mathcal{F}}_{X} \), we next see how the Betti numbers and the \( K \) -polynomial of the monomial ideal \( I \) can be computed from the labeled cell complex \( X \) . The key is that \( X \) satisfies the acyclicity criterion of Proposition 4.5. In the forthcoming statement and its proof, we use freely the fact that \( {\beta }_{i,\mathbf{b}}\left( I\right) = {\beta }_{i + 1,\mathbf{b}}\left( {S/I}\right) \) . As in Chapter 1 for the simplicial case, if \( X \) is a polyhedral cell complex and \( \mathbb{k} \) is a field then \( {\widetilde{H}}_{ \bullet }\left( {X;\mathbb{k}}\right) \) denotes the homology of the reduced chain complex \( {\widetilde{\mathcal{C}}}_{ \bullet }\left( {X;\mathbb{k}}\right) \) . 
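The reduced-homology computations behind this acyclicity criterion can be made concrete. A self-contained sketch over \( \mathbb{k} = \mathbb{Q} \) (Python; simplicial rather than polyhedral for brevity, and all names are ours) computing \( \dim {\widetilde{H}}_{i} \) from ranks of the boundary matrices of the reduced chain complex, with the empty face in degree \( -1 \):

```python
from fractions import Fraction
from itertools import combinations

def rank(M):
    """Rank of a matrix (list of rows) over Q by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def reduced_betti(facets):
    """dim H~_i over Q of the simplicial complex generated by the given facets."""
    faces = {frozenset()}
    for f in facets:
        for k in range(len(f) + 1):
            faces.update(frozenset(s) for s in combinations(sorted(f), k))
    by_dim = {}
    for F in faces:
        by_dim.setdefault(len(F) - 1, []).append(tuple(sorted(F)))
    ranks, dims = {}, sorted(by_dim)
    for d in dims:
        if d - 1 not in by_dim:
            ranks[d] = 0       # no boundary map out of the bottom degree
            continue
        rows, cols = by_dim[d - 1], by_dim[d]
        M = [[0] * len(cols) for _ in rows]
        for jcol, F in enumerate(cols):
            for pos, v in enumerate(F):
                G = tuple(x for x in F if x != v)
                M[rows.index(G)][jcol] = (-1) ** pos   # signed face of F
        ranks[d] = rank(M)
    # dim H~_d = (# d-faces) - rank(boundary_d) - rank(boundary_{d+1})
    return {d: len(by_dim[d]) - ranks.get(d, 0) - ranks.get(d + 1, 0) for d in dims}

# Hollow triangle: one 1-cycle, so not acyclic.
betti = reduced_betti([(1, 2), (1, 3), (2, 3)])
assert betti[-1] == 0 and betti[0] == 0 and betti[1] == 1
# Filled triangle: zero reduced homology everywhere, i.e. acyclic.
assert all(b == 0 for b in reduced_betti([(1, 2, 3)]).values())
```

Over other fields the ranks, and hence acyclicity, can differ; that dependence on \( \mathbb{k} \) is exactly the phenomenon the text defers to Section 4.3.5.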
Theorem 4.7 If \( {\mathcal{F}}_{X} \) is a cellular resolution of the monomial quotient \( S/I \) , then the Betti numbers of \( I \) can be calculated for \( i \geq 1 \) as \[ {\beta }_{i,\mathbf{b}}\left( I\right) = {\dim }_{\mathbb{k}}{\widetilde{H}}_{i - 1}\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) . \] Proof. When \( {\mathbf{x}}^{\mathbf{b}} \) does not lie in \( I \), the complex \( {X}_{ \prec \mathbf{b}} \) consists at most of the empty face \( \varnothing \in X \), which has no homology in homological degrees \( \geq 0 \) . This is good, because \( {\beta }_{i,\mathbf{b}}\left( I\right) \) is zero unless \( {\mathbf{x}}^{\mathbf{b}} \in I \), as \( {K}^{\mathbf{b}}\left( I\right) \) is void if \( {\mathbf{x}}^{\mathbf{b}} \notin I \) . Now assume \( {\mathbf{x}}^{\mathbf{b}} \in I \), and calculate Betti numbers as in Lemma 1.32 by tensoring \( {\mathcal{F}}_{X} \) with \( \mathbb{k} \) . The resulting complex in degree \( \mathbf{b} \) is the complex of vector spaces over \( \mathbb{k} \) obtained by taking the quotient of the reduced chain complex \( {\widetilde{\mathcal{C}}}_{ \bullet }\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \) modulo its subcomplex \( {\widetilde{\mathcal{C}}}_{ \bullet }\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) \) . In other words, the desired Betti number \( {\beta }_{i,\mathbf{b}}\left( I\right) \) is the dimension over \( \mathbb{k} \) of the \( {i}^{\text{th }} \) homology of the rightmost complex in the following exact sequence of complexes: \[ 0 \rightarrow \widetilde{\mathcal{C}} \cdot \left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) \rightarrow \widetilde{\mathcal{C}} \cdot \left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \rightarrow \widetilde{\mathcal{C}} \cdot \left( \mathbf{b}\right) \rightarrow 0. 
\] The long exact sequence for homology reads \[ \cdots \rightarrow {\widetilde{H}}_{i}\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \rightarrow {\widetilde{H}}_{i}\left( {{\widetilde{\mathcal{C}}}_{ \bullet }\left( \mathbf{b}\right) }\right) \rightarrow {\widetilde{H}}_{i - 1}\left( {{X}_{ \prec \mathbf{b}};\mathbb{k}}\right) \rightarrow {\widetilde{H}}_{i - 1}\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right) \rightarrow \cdots \] Our assumption \( {\mathbf{x}}^{\mathbf{b}} \in I \) implies by Proposition 4.5 that \( {X}_{ \preccurlyeq \mathbf{b}} \) has no reduced homology: \( {\widetilde{H}}_{j}\left( {{X}_{ \preccurlyeq \mathbf{b}};\mathbb{k}}\right
1094_(GTM250)Modern Fourier Analysis
Definition 2.3.5
Definition 2.3.5. Let \( 0 < p, q < \infty \) . A sequence of complex numbers \( r = {\left\{ {r}_{Q}\right\} }_{Q \in \mathcal{D}} \) is called an \( \infty \) -atom for \( {\dot{f}}_{p}^{\alpha, q} \) if there exists a dyadic cube \( {Q}_{0} \) such that (a) \( {r}_{Q} = 0 \) if \( Q \nsubseteq {Q}_{0} \) ; (b) \( {\begin{Vmatrix}{g}^{\alpha, q}\left( r\right) \end{Vmatrix}}_{{L}^{\infty }} \leq {\left| {Q}_{0}\right| }^{-\frac{1}{p}} \) , where, recalling from (2.3.2), \[ {g}^{\alpha, q}\left( {\left\{ {r}_{Q}\right\} }_{Q}\right) = {\left( \mathop{\sum }\limits_{{Q \in \mathcal{D}}}{\left( {\left| Q\right| }^{-\frac{\alpha }{n} - \frac{1}{2}}\left| {r}_{Q}\right| {\chi }_{Q}\right) }^{q}\right) }^{\frac{1}{q}}. \] We observe that every \( \infty \) -atom \( r = \left\{ {r}_{Q}\right\} \) for \( {\dot{f}}_{p}^{\alpha, q} \) satisfies \( \parallel r{\parallel }_{{\dot{f}}_{p}^{\alpha, q}} \leq 1 \) . Indeed, \[ \parallel r{\parallel }_{{\dot{f}}_{p}^{\alpha, q}}^{p} = {\int }_{{Q}_{0}}{\left| {g}^{\alpha, q}\left( r\right) \right| }^{p}{dx} \leq {\left| {Q}_{0}\right| }^{-1}\left| {Q}_{0}\right| = 1. \] The following theorem concerns the atomic decomposition of the spaces \( {\dot{f}}_{p}^{\alpha, q} \) . Theorem 2.3.6. Let \( \alpha \in \mathbf{R},0 < p, q < \infty \), and \( s = {\left\{ {s}_{Q}\right\} }_{Q \in \mathcal{D}} \) be in \( {\dot{f}}_{p}^{\alpha, q} \) . 
Then there exist \( {C}_{n, p, q} > 0 \), a sequence of scalars \( {\lambda }_{j} \), and a sequence of \( \infty \) -atoms \( {r}_{j} = {\left\{ {r}_{j, Q}\right\} }_{Q \in \mathcal{D}} \) for \( {\dot{f}}_{p}^{\alpha, q} \) such that for each \( Q \in \mathcal{D} \) the series \( \mathop{\sum }\limits_{{j = 1}}^{\infty }{\lambda }_{j}{r}_{j, Q} \) is absolutely convergent and equal to \( {s}_{Q} \), i.e., \[ s = {\left\{ {s}_{Q}\right\} }_{Q \in \mathcal{D}} = \mathop{\sum }\limits_{{j = 1}}^{\infty }{\lambda }_{j}{\left\{ {r}_{j, Q}\right\} }_{Q \in \mathcal{D}} = \mathop{\sum }\limits_{{j = 1}}^{\infty }{\lambda }_{j}{r}_{j} \] and such that \[ {\left( \mathop{\sum }\limits_{{j = 1}}^{\infty }{\left| {\lambda }_{j}\right| }^{p}\right) }^{\frac{1}{p}} \leq {C}_{n, p, q}\parallel s{\parallel }_{{\dot{f}}_{p}^{\alpha, q}} \] (2.3.15) Proof. We fix \( \alpha, p, q \), and a sequence \( s = {\left\{ {s}_{Q}\right\} }_{Q \in \mathcal{D}} \) in \( {\dot{f}}_{p}^{\alpha, q} \) . For a dyadic cube \( R \) in \( \mathcal{D} \) we define the function \[ {g}_{R}^{\alpha, q}\left( s\right) \left( x\right) = {\left( \mathop{\sum }\limits_{\substack{{Q \in \mathcal{D}} \\ {R \subseteq Q} }}{\left( {\left| Q\right| }^{-\frac{\alpha }{n} - \frac{1}{2}}\left| {s}_{Q}\right| {\chi }_{Q}\left( x\right) \right) }^{q}\right) }^{\frac{1}{q}} \] and we observe that this function is constant on \( R \) . 
We also note that for dyadic cubes \( {R}_{1} \) and \( {R}_{2} \) with \( {R}_{1} \subseteq {R}_{2} \) we have \[ {g}_{{R}_{2}}^{\alpha, q}\left( s\right) \leq {g}_{{R}_{1}}^{\alpha, q}\left( s\right) \] Finally, we observe that \[ \mathop{\lim }\limits_{\substack{{\ell \left( R\right) \rightarrow \infty } \\ {x \in R} }}{g}_{R}^{\alpha, q}\left( s\right) \left( x\right) = 0 \] and \[ \mathop{\lim }\limits_{\substack{{\ell \left( R\right) \rightarrow 0} \\ {x \in R} }}{g}_{R}^{\alpha, q}\left( s\right) \left( x\right) = {g}^{\alpha, q}\left( s\right) \left( x\right) \] where \( {g}^{\alpha, q}\left( s\right) \) is the function defined in (2.3.2). For \( k \in \mathbf{Z} \) we set \[ {\mathcal{A}}_{k} = \left\{ {R \in \mathcal{D} : {g}_{R}^{\alpha, q}\left( s\right) \left( x\right) > {2}^{k}\;\text{ for all }x \in R}\right\} . \] We note that \( {\mathcal{A}}_{k + 1} \subseteq {\mathcal{A}}_{k} \) for all \( k \) in \( \mathbf{Z} \) and that \[ \left\{ {x \in {\mathbf{R}}^{n} : {g}^{\alpha, q}\left( s\right) \left( x\right) > {2}^{k}}\right\} = \mathop{\bigcup }\limits_{{R \in {\mathcal{A}}_{k}}}R. \] (2.3.16) Moreover, we have for all \( k \in \mathbf{Z} \) , \[ {\left( \mathop{\sum }\limits_{{Q \in \mathcal{D} \smallsetminus {\mathcal{A}}_{k}}}{\left( {\left| Q\right| }^{-\frac{\alpha }{n} - \frac{1}{2}}\left| {s}_{Q}\right| {\chi }_{Q}\left( x\right) \right) }^{q}\right) }^{\frac{1}{q}} \leq {2}^{k} \] (2.3.17) for all \( x \in {\mathbf{R}}^{n} \) . To prove (2.3.17) we assume that \( {g}^{\alpha, q}\left( s\right) \left( x\right) > {2}^{k} \) ; otherwise, the conclusion is trivial. Then there exists a maximal dyadic cube \( {R}_{\max } \) in \( {\mathcal{A}}_{k} \) such that \( x \in {R}_{\max } \) . 
Letting \( {R}_{0} \) be the unique dyadic cube that contains \( {R}_{\max } \) and has twice its side length, we have that the left-hand side of (2.3.17) is equal to \( {g}_{{R}_{0}}^{\alpha, q}\left( s\right) \left( x\right) \), which is at most \( {2}^{k} \), since \( {R}_{0} \) does not belong to \( {\mathcal{A}}_{k} \) . Since \( {g}^{\alpha, q}\left( s\right) \in {L}^{p}\left( {\mathbf{R}}^{n}\right) \), by our assumption, and \( {g}^{\alpha, q}\left( s\right) \left( x\right) > {2}^{k} \) for all \( x \in Q \) if \( Q \in {\mathcal{A}}_{k} \), the cubes in \( {\mathcal{A}}_{k} \) must have size bounded above by some constant. We set \[ {\mathcal{B}}_{k} = \left\{ {J \in \mathcal{D} : \;J\text{ is a maximal dyadic cube in }{\mathcal{A}}_{k} \smallsetminus {\mathcal{A}}_{k + 1}}\right\} . \] For \( J \) in \( {\mathcal{B}}_{k} \) we define a sequence \( t\left( {k, J}\right) = {\left\{ t{\left( k, J\right) }_{Q}\right\} }_{Q \in \mathcal{D}} \) by setting \[ t{\left( k, J\right) }_{Q} = \left\{ \begin{array}{ll} {s}_{Q} & \text{ if }Q \subseteq J\text{ and }Q \in {\mathcal{A}}_{k} \smallsetminus {\mathcal{A}}_{k + 1}, \\ 0 & \text{ otherwise. } \end{array}\right. \] Notice that \[ \text{ if }\;Q \notin \mathop{\bigcup }\limits_{{k \in \mathbf{Z}}}{\mathcal{A}}_{k},\;\text{ then }\;{s}_{Q} = 0. \] Moreover, the identity \[ s = \mathop{\sum }\limits_{{k \in \mathbf{Z}}}\mathop{\sum }\limits_{{J \in {\mathcal{B}}_{k}}}t\left( {k, J}\right) \] (2.3.18) is valid and it is worth noticing that for each \( Q \in \mathcal{D} \), there is at most one \( k \in \mathbf{Z} \) and at most one \( J \in {\mathcal{B}}_{k} \) such that \( t{\left( k, J\right) }_{Q} \) is nonzero, i.e., the sum in (2.3.18) evaluated at \( Q \) has at most one nonzero term. 
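The stopping-time structure behind (2.3.18) can be run concretely. The following sketch (a finite model with assumed parameters, \( n = 1 \), not from the text) computes, for each dyadic interval \( Q \) with \( {s}_{Q} \neq 0 \), the unique level \( k \) with \( Q \in {\mathcal{A}}_{k} \smallsetminus {\mathcal{A}}_{k + 1} \) and the maximal cube \( J \in {\mathcal{B}}_{k} \) containing it, so that \( Q \) contributes to exactly one piece \( t\left( {k, J}\right) \).

```python
import math
import random

# Finite model (assumed parameters): dyadic intervals of generations 0..N on
# [0, 1), a random positive coefficient sequence s, dimension n = 1.
alpha, q, N = 0.3, 2.0, 5
rng = random.Random(0)
cubes = [(j, m) for j in range(N + 1) for m in range(2 ** j)]
s = {Q: rng.uniform(0.1, 2.0) for Q in cubes}

def ancestors(R):                 # dyadic cubes Q with R ⊆ Q, largest first
    j, m = R
    return [(i, m >> (j - i)) for i in range(j + 1)]

def gval(R):                      # value of g_R^{alpha,q}(s) on R; |Q| = 2^-j
    return sum((2.0 ** (Q[0] * (alpha + 0.5)) * s[Q]) ** q
               for Q in ancestors(R)) ** (1.0 / q)

def in_Ak(R, k):                  # membership in the set A_k of the text
    return gval(R) > 2.0 ** k

def level(Q):                     # the unique k with Q in A_k \ A_{k+1}
    k = math.floor(math.log2(gval(Q)))
    while in_Ak(Q, k + 1):
        k += 1
    while not in_Ak(Q, k):
        k -= 1
    return k

def stopping_cube(Q):
    """(k, J): J is the maximal cube of A_k \\ A_{k+1} containing Q."""
    k = level(Q)
    for P in ancestors(Q):        # gval only grows down the chain, so the
        if level(P) == k:         # first ancestor at level k is the largest
            return k, P

for Q in cubes:
    k, J = stopping_cube(Q)       # Q contributes to t(k, J) and to no other
    assert in_Ak(Q, k) and not in_Ak(Q, k + 1)
    assert in_Ak(J, k) and not in_Ak(J, k + 1)
    jJ, mJ = J
    if jJ > 0:                    # J is maximal: its parent has another level
        assert level((jJ - 1, mJ >> 1)) != k
```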
For all \( x \in {\mathbf{R}}^{n} \) we have \[ {g}^{\alpha, q}\left( {t\left( {k, J}\right) }\right) \left( x\right) = {\left( \mathop{\sum }\limits_{\substack{{Q \subseteq J} \\ {Q \in {\mathcal{A}}_{k} \smallsetminus {\mathcal{A}}_{k + 1}} }}{\left( {\left| Q\right| }^{-\frac{\alpha }{n} - \frac{1}{2}}\left| {s}_{Q}\right| {\chi }_{Q}\left( x\right) \right) }^{q}\right) }^{\frac{1}{q}} \] \[ \leq {\left( \mathop{\sum }\limits_{\substack{{Q \subseteq J} \\ {Q \in \mathcal{D} \smallsetminus {\mathcal{A}}_{k + 1}} }}{\left( {\left| Q\right| }^{-\frac{\alpha }{n} - \frac{1}{2}}\left| {s}_{Q}\right| {\chi }_{Q}\left( x\right) \right) }^{q}\right) }^{\frac{1}{q}} \] (2.3.19) \[ \leq {2}^{k + 1} \] where we used (2.3.17) (with \( k + 1 \) in place of \( k \) ) in the last estimate. We define atoms \( r\left( {k, J}\right) = {\left\{ r{\left( k, J\right) }_{Q}\right\} }_{Q \in \mathcal{D}} \) by setting \[ r{\left( k, J\right) }_{Q} = {2}^{-k - 1}{\left| J\right| }^{-\frac{1}{p}}t{\left( k, J\right) }_{Q}, \] (2.3.20) and we also define scalars \[ {\lambda }_{k, J} = {2}^{k + 1}{\left| J\right| }^{\frac{1}{p}}. \] To see that each \( r\left( {k, J}\right) \) is an \( \infty \) -atom for \( {\dot{f}}_{p}^{\alpha, q} \), we observe that \( r{\left( k, J\right) }_{Q} = 0 \) if \( Q \nsubseteq J \) and that \[ {g}^{\alpha, q}\left( {r\left( {k, J}\right) }\right) \left( x\right) \leq {\left| J\right| }^{-\frac{1}{p}},\;\text{ for all }x \in {\mathbf{R}}^{n}, \] in view of (2.3.19) and (2.3.20). Also using (2.3.18) and (2.3.20), we obtain that \[ s = \mathop{\sum }\limits_{{k \in \mathbf{Z}}}\mathop{\sum }\limits_{{J \in {\mathcal{B}}_{k}}}{\lambda }_{k, J}r\left( {k, J}\right) \] (2.3.21) which says that \( s \) can be written as a countably infinite sum of atoms. 
We now reindex the countable set \( \mathcal{U} = \left\{ {\left( {k, J}\right) : k \in \mathbf{Z}, J \in {\mathcal{B}}_{k}}\right\} \) by \( {\mathbf{Z}}^{ + } \) and write \[ s = \mathop{\sum }\limits_{{j = 1}}^{\infty }{\lambda }_{j}{r}_{j} \] (2.3.22) where \( \left\{ {{\lambda }_{1},{\lambda }_{2},\ldots }\right\} = \left\{ {{\lambda }_{k, J} : \left( {k, J}\right) \in \mathcal{U}}\right\} \) and \( \left\{ {{r}_{1},{r}_{2},\ldots }\right\} = \{ r\left( {k, J}\right) : \left( {k, J}\right) \in \mathcal{U}\} \) . As observed above, the sum in (2.3.21) has the property that for each \( Q \in \mathcal{D} \), there is at most one \( k \in \mathbf{Z} \) and at most one \( J \in {\mathcal{B}}_{k} \) such that \( {\lambda }_{k, J}r{\left( k, J\right) }_{Q} = t{\left( k, J\right) }_{Q} \) is nonzero. Thus for each \( Q \in \mathcal{D} \), at most one term in the sum \( \mathop{\sum }\limits_{{j = 1}}^{\infty }{\lambda }_{j}{r}_{j, Q} \) is nonzero; in particular, this series is absolutely convergent. Finally, we estimate the sum of the \( p \) th power of the coefficients \( {\lambda }_{k, J} \) . We have \[ \mathop{\sum }\limits_{{j = 1}}^{\infty }{\left| {\lambda }_{j}\right| }^{p} = \mathop{\sum }\limits_{{k \in \mathbf{Z}}}\mathop{\sum }\limits_{{J \in {\mathcal{B}}_{k}}}{\lambda }_{k, J}^{p} \] \[ = \mathop{\sum }\limits_{{k \in \mathbf{Z}}}{2}^{\left( {k + 1}\right) p}\mathop{\sum }\limits_{{J \in {\mathcal{B}}_{k}}}\left| J\right| \] \[ \leq {2}^{p}\mathop{\sum }\limits_{{k \in \mathbf{Z}}}{2}^{kp}\left| {\mathop{\bigcup }\limits_{{Q \in {\mathcal{A}}_{k}}}Q}\right| \] \[ = {2}^{p}\mathop{\sum }\limits_{{k \in \mathbf{Z}}}{2}^{k\left( {p - 1}\right) }{2}^{k}\left| \left\{ {x \in {\mathbf{R}}^{n} : {g}^{\alpha, q}\left( s\right) \left( x\right) > {2}^{k}}\right\} \right| \] \[ \leq {2}^{p}\mathop{\sum }\limits_{{k \in \mathbf{Z}}}{\int }_{{2}^{k}}^{{2}^{k + 1}}{2}^{k\left( {p - 1}\right) }\left| \left\{ {x \in {\mathbf{R}}^{n} : {g}^{\alpha, q}\left( s\right) \left( x\right) > \lambda /2}\right\} \right| {d\lambda } \] \[ \leq {2}^{p}\max \left( {1,{2}^{1 - p}}\right) {\int }_{0}^{\infty }{\lambda }^{p - 1}\left| \left\{ {x \in {\mathbf{R}}^{n} : {g}^{\alpha, q}\left( s\right) \left( x\right) > \lambda /2}\right\} \right| {d\lambda } \] \[ = \frac{{2}^{2p}\max \left( {1,{2}^{1 - p}}\right) }{p}{\begin{Vmatrix}{g}^{\alpha, q}\left( s\right) \end{Vmatrix}}_{{L}^{p}}^{p} = \frac{{2}^{2p}\max \left( {1,{2}^{1 - p}}\right) }{p}\parallel s{\parallel }_{{\dot{f}}_{p}^{\alpha, q}}^{p}, \] since for \( \lambda \in \left\lbrack {{2}^{k},{2}^{k + 1}}\right\rbrack \) we have \( \lambda /2 \leq {2}^{k} \) and \( {2}^{k\left( {p - 1}\right) } \leq \max \left( {1,{2}^{1 - p}}\right) {\lambda }^{p - 1} \) . Taking \( p \) th roots yields (2.3.15), and the proof is complete. 
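The coefficient estimate above compares the dyadic sum \( \mathop{\sum }\nolimits_{k}{2}^{kp}\left| \left\{ {g}^{\alpha, q}\left( s\right) > {2}^{k}\right\} \right| \) with \( \parallel {g}^{\alpha, q}\left( s\right) {\parallel }_{{L}^{p}}^{p} \). A numerical sanity check of that comparison for a step function, with the bound \( \max \left( {1,{2}^{1 - p}}\right) {2}^{p}/p \) coming from comparing \( {2}^{k\left( {p - 1}\right) } \) with \( {\lambda }^{p - 1} \) on \( \left\lbrack {{2}^{k},{2}^{k + 1}}\right\rbrack \) (the step function and the range of \( k \) are assumptions of this sketch):

```python
# Check:  sum_k 2^{kp} |{g > 2^k}|  <=  max(1, 2^{1-p}) * 2^p / p * ||g||_p^p
# for a simple nonnegative step function g on [0, 1).
def check(p):
    pieces = [(4.0, 0.125), (1.5, 0.25), (0.3, 0.5)]    # (value, length), arbitrary
    def measure_above(lam):                 # |{x : g(x) > lam}|
        return sum(w for v, w in pieces if v > lam)
    norm_p = sum(v ** p * w for v, w in pieces)         # ||g||_p^p
    lhs = sum(2.0 ** (k * p) * measure_above(2.0 ** k) for k in range(-80, 80))
    rhs = max(1.0, 2.0 ** (1 - p)) * 2.0 ** p / p * norm_p
    return lhs, rhs

for p in (0.5, 1.0, 2.0):
    lhs, rhs = check(p)
    assert lhs <= rhs
```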
Definition 3.3.14. Let \( K \) be a number field. An ideal \( I \) of \( {\mathbb{Z}}_{K} \) is a sub- \( {\mathbb{Z}}_{K} \) -module of \( {\mathbb{Z}}_{K} \) ; in other words, it is an additive subgroup of \( {\mathbb{Z}}_{K} \) such that \( {\alpha x} \in I \) for all \( \alpha \in {\mathbb{Z}}_{K} \) and \( x \in I \) . By extension, a fractional ideal is a nonzero \( {\mathbb{Z}}_{K} \) -module of the form \( I/d \), where \( I \) is an ideal and \( d \in {K}^{ * } \) . A nonzero ideal will be called an integral ideal. If \( I \) and \( J \) are two ideals we can naturally define their sum (as a sum of \( {\mathbb{Z}}_{K} \) -modules), but also their product: if \( I \) and \( J \) are ideals then \( {IJ} \) is the smallest ideal containing \( {xy} \) for all \( x \in I \) and \( y \in J \), in other words, the set of finite \( \mathbb{Z} \) -linear combinations \( \sum {x}_{i}{y}_{i} \) with \( {x}_{i} \in I \) and \( {y}_{i} \in J \) . Proposition 3.3.15. Let \( K \) be a number field such that \( \left\lbrack {K : \mathbb{Q}}\right\rbrack = n \) . (1) Any fractional ideal is a free \( \mathbb{Z} \) -module of rank \( n \), and an integral ideal has finite index in \( {\mathbb{Z}}_{K} \) . (2) The set of fractional ideals forms an abelian group under ideal multiplication. Definition 3.3.16. A prime ideal \( \mathfrak{p} \) of \( {\mathbb{Z}}_{K} \) is an integral ideal different from \( {\mathbb{Z}}_{K} \) such that \( {\mathbb{Z}}_{K}/\mathfrak{p} \) is an integral domain. Since \( {\mathbb{Z}}_{K}/\mathfrak{p} \) is finite when \( \mathfrak{p} \neq 0 \), and since every finite integral domain is a field, it follows that any nonzero prime ideal \( \mathfrak{p} \) is a maximal ideal. In other words, \( \mathfrak{p} \subset I \subset {\mathbb{Z}}_{K} \) for an ideal \( I \) implies that \( I = \mathfrak{p} \) or \( I = {\mathbb{Z}}_{K} \) . 
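Ideal sums and products can be computed concretely. The following sketch (our construction; the pair representation and the lattice-basis routine are assumptions, not from the text) works in \( {\mathbb{Z}}_{K} = \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \) for \( K = \mathbb{Q}\left( \sqrt{-5}\right) \) and checks the classical fact that the nonprincipal ideal \( \left( {2,1 + \sqrt{-5}}\right) \) squares to the principal ideal \( \left( 2\right) \).

```python
from math import gcd

# An element a + b*sqrt(-5) is stored as (a, b); an ideal given by generators
# is the Z-lattice spanned by g and sqrt(-5)*g for each generator g (enough,
# because Z_K = Z + Z*sqrt(-5) as a Z-module).
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)    # (a+b√-5)(c+d√-5)

def ext_gcd(a, b):
    """(g, u, v) with u*a + v*b = g = gcd(a, b), g >= 0."""
    old_r, r, old_u, u, old_v, v = a, b, 1, 0, 0, 1
    while r:
        qt = old_r // r
        old_r, r = r, old_r - qt * r
        old_u, u = u, old_u - qt * u
        old_v, v = v, old_v - qt * v
    if old_r < 0:
        old_r, old_u, old_v = -old_r, -old_u, -old_v
    return old_r, old_u, old_v

def hnf(vectors):
    """Canonical basis [(d, 0), (x, g)] of the lattice spanned by vectors."""
    x, g = 0, 0
    for a, b in vectors:              # v2 = (x, g), g = gcd of second coords
        g2, u, v = ext_gcd(g, b)
        x, g = u * x + v * a, g2
    d = 0
    for a, b in vectors:              # clear second coords, gcd the first coords
        if g:
            a -= (b // g) * x
        d = gcd(d, a)
    if d:
        x %= d
    return [(d, 0), (x, g)]

def ideal(gens):
    return hnf([v for w in gens for v in (w, mul((0, 1), w))])

def ideal_mul(I, J):
    return ideal([mul(x, y) for x in I for y in J])

# p = (2, 1 + sqrt(-5)) is nonprincipal, yet p^2 = (2) is principal.
p = [(2, 0), (1, 1)]
assert ideal(p) == [(2, 0), (1, 1)]          # index 2 in Z_K
assert ideal_mul(p, p) == ideal([(2, 0)])    # p^2 = 2 Z_K, index 4
```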
Since the zero ideal is always excluded, when we talk of a prime ideal \( \mathfrak{p} \) in the context of number fields, we always implicitly assume that \( \mathfrak{p} \) is nonzero, in other words, that \( \mathfrak{p} \) is a maximal ideal. The first important theorem concerning ideals in number fields is the existence and uniqueness of prime ideal factorization. Theorem 3.3.17. Let \( I \) be a fractional ideal of \( K \) . There exists a factorization \[ I = \mathop{\prod }\limits_{{i = 1}}^{g}{\mathfrak{p}}_{i}^{{v}_{i}} \] where the \( {\mathfrak{p}}_{i} \) are distinct prime ideals and \( {v}_{i} \in \mathbb{Z} \smallsetminus \{ 0\} \), and this factorization is unique up to permutation of the factors. This theorem is one of the most important consequences of the fact that \( {\mathbb{Z}}_{K} \) is a Dedekind domain. In Kummer's study of Fermat's last theorem, as in many other Diophantine equations, one side of the equation can be factored algebraically, and the other side is a perfect power. If we are in \( \mathbb{Z} \), or more generally in a PID, we can conclude by unique factorization that the algebraic factors are themselves perfect powers, at least up to units (more on units below). Unfortunately, most number rings are not PIDs. However, they are always Dedekind domains, and as such by the above theorem they have unique factorization into prime ideals. Thus each of the algebraic factors is a perfect power of an ideal. Thus assume that we know that an ideal \( \mathfrak{a} \) is such that \( {\mathfrak{a}}^{n} = \gamma {\mathbb{Z}}_{K} \) for some element \( \gamma \) . This is where the second basic theorem on ideals and units of algebraic number theory comes into play. Theorem 3.3.18 (Finiteness of the class group). Define two fractional ideals \( \mathfrak{a} \) and \( \mathfrak{b} \) to be equivalent if there exists \( \alpha \in {K}^{ * } \) such that \( \mathfrak{b} = \alpha \mathfrak{a} \) . 
This equivalence relation is compatible with the multiplicative group structure of ideals, and the quotient group is a finite abelian group. The group of ideal classes of \( K \) is denoted by \( {Cl}\left( K\right) \), and the class number, in other words \( \left| {{Cl}\left( K\right) }\right| \), is denoted by \( h\left( K\right) \) . Standard group theory implies that for any ideal \( \mathfrak{a} \) the ideal \( {\mathfrak{a}}^{h\left( K\right) } \) has the form \( \beta {\mathbb{Z}}_{K} \) for some element \( \beta \) of \( {\mathbb{Z}}_{K} \) . Thus if we know that \( {\mathfrak{a}}^{n} = \gamma {\mathbb{Z}}_{K} \), then if \( n \) and \( h\left( K\right) \) are coprime, the extended Euclidean algorithm implies that \( \mathfrak{a} \) itself has the form \( \mathfrak{a} = \alpha {\mathbb{Z}}_{K} \) for some \( \alpha \in {\mathbb{Z}}_{K} \) . It follows that \( {\alpha }^{n}{\mathbb{Z}}_{K} = \gamma {\mathbb{Z}}_{K} \) . Thus, even though we are not working in a PID, the conclusion is very similar: the principal ideal generated by \( \gamma \) is indeed equal to the \( n \) th power of a principal ideal. This can be refined further. The above equality can be written \( \gamma = \varepsilon {\alpha }^{n} \) , where \( \varepsilon \) is a unit of \( {\mathbb{Z}}_{K} \), in other words an element of \( {\mathbb{Z}}_{K} \) such that \( {\varepsilon }^{-1} \in {\mathbb{Z}}_{K} \) . The group of units of \( K \) will be denoted by \( U\left( K\right) \) . We now need the third basic theorem on ideals and units. Theorem 3.3.19 (Unit group structure). There exist units \( {\varepsilon }_{0},{\varepsilon }_{1},\ldots ,{\varepsilon }_{r} \) having the following properties: (1) The group of roots of unity in \( K \) is the finite cyclic group (of order \( w\left( K\right) \) , say) generated by \( {\varepsilon }_{0} \) . 
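The Bézout step above can be checked on a toy model that uses only the group structure: the class of \( \mathfrak{a} \) lives in a finite abelian group of order \( h \), modeled here as \( \mathbb{Z}/h\mathbb{Z} \) written additively (the particular values of \( h \) and \( n \) below are assumptions of this sketch).

```python
from math import gcd

# If n*c = 0 in Z/hZ (i.e. a^n is principal) and gcd(n, h) = 1, then Bezout
# coefficients u*n + v*h = 1 give c = u*(n*c) + v*(h*c) = 0: a is principal.
def bezout(a, b):
    """(u, v) with u*a + v*b = gcd(a, b)."""
    if b == 0:
        return (1, 0)
    u, v = bezout(b, a % b)
    return (v, u - (a // b) * v)

h, n = 12, 7                       # toy class number and exponent, gcd = 1
u, v = bezout(n, h)
assert u * n + v * h == gcd(n, h) == 1
for c in range(h):                 # every class killed by n is already trivial
    if (n * c) % h == 0:
        assert (u * (n * c) + v * (h * c)) % h == c % h == 0
```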
(2) Any unit \( \varepsilon \) of \( K \) can be written in a unique way as \[ \varepsilon = \mathop{\prod }\limits_{{0 \leq i \leq r}}{\varepsilon }_{i}^{{n}_{i}} \] with \( {n}_{i} \in \mathbb{Z} \) for \( 1 \leq i \leq r \) and \( 0 \leq {n}_{0} < w\left( K\right) \) . The rank of the unit group is thus equal to \( r \), and we have \( r = {r}_{1} + {r}_{2} - 1 \) , where \( \left( {{r}_{1},{r}_{2}}\right) \) is the signature of \( K \) . Such a family \( \left( {{\varepsilon }_{1},\ldots ,{\varepsilon }_{r}}\right) \) (with \( {\varepsilon }_{0} \) not included) is called a basis of fundamental units of \( K \) . A last easy but important remark concerning roots of unity: if \( K \) is not a totally complex number field, in other words if \( {r}_{1} > 0 \), then \( w\left( K\right) = 2 \) , so that the only roots of unity are \( \pm 1 \), since all the embeddings of all other roots of unity in \( \mathbb{C} \) are nonreal. ## 3.3.5 Decomposition of Primes and Ramification Definition and Proposition 3.3.20. Let \( L/K \) be an extension of number fields, let \( \mathfrak{p} \) be a prime ideal of \( K \), and let \[ \mathfrak{p}{\mathbb{Z}}_{L} = \mathop{\prod }\limits_{{i = 1}}^{g}{\mathfrak{P}}_{i}^{{e}_{i}} \] be its prime ideal decomposition in \( {\mathbb{Z}}_{L} \), where \( {e}_{i} \geq 1 \) and the prime ideals \( {\mathfrak{P}}_{i} \) are above \( \mathfrak{p} \) . (1) The exponent \( {e}_{i} \) is denoted by \( e\left( {{\mathfrak{P}}_{i}/\mathfrak{p}}\right) \) and is called the ramification index of \( {\mathfrak{P}}_{i} \) . (2) The degree of the finite field extension \( \left\lbrack {{\mathbb{Z}}_{L}/{\mathfrak{P}}_{i} : {\mathbb{Z}}_{K}/\mathfrak{p}}\right\rbrack \) is called the residual degree, and denoted by \( f\left( {{\mathfrak{P}}_{i}/\mathfrak{p}}\right) \) . If \( \mathfrak{p} = p\mathbb{Z} \), we call it simply the degree of \( {\mathfrak{P}}_{i} \) . 
(3) We have the equality \( \left\lbrack {L : K}\right\rbrack = \mathop{\sum }\limits_{{1 \leq i \leq g}}e\left( {{\mathfrak{P}}_{i}/\mathfrak{p}}\right) f\left( {{\mathfrak{P}}_{i}/\mathfrak{p}}\right) \) . (4) We say that \( {\mathfrak{P}}_{i} \) is ramified if \( e\left( {{\mathfrak{P}}_{i}/\mathfrak{p}}\right) \geq 2 \), and we say that \( \mathfrak{p} \) itself is ramified if there exists a ramified \( {\mathfrak{P}}_{i} \) above \( \mathfrak{p} \) . An easy but fundamental result concerning ramification indices and residual degrees is their transitivity. Proposition 3.3.21. Let \( M/L \) and \( L/K \) be extensions of number fields, \( \mathfrak{p} \) an ideal of \( K,{\mathfrak{P}}_{L} \) an ideal of \( L \) above \( \mathfrak{p} \), and \( {\mathfrak{P}}_{M} \) an ideal of \( M \) above \( {\mathfrak{P}}_{L} \) . We have the transitivity relations: \[ e\left( {{\mathfrak{P}}_{M}/\mathfrak{p}}\right) = e\left( {{\mathfrak{P}}_{M}/{\mathfrak{P}}_{L}}\right) e\left( {{\mathfrak{P}}_{L}/\mathfrak{p}}\right) \;\text{ and }\;f\left( {{\mathfrak{P}}_{M}/\mathfrak{p}}\right) = f\left( {{\mathfrak{P}}_{M}/{\mathfrak{P}}_{L}}\right) f\left( {{\mathfrak{P}}_{L}/\mathfrak{p}}\right) . \] Proof. Left to the reader (Exercise 18). A simple case in which it is easy to obtain explicitly the prime ideal decomposition is the following, which we state only in the absolute case, although it is immediate to generalize to relative extensions; see [Coh1], Proposition 2.3.9. Proposition 3.3.22. Let \( K = \mathbb{Q}\left( \theta \right) \) be a number field, where \( \theta \) is an algebraic integer, and denote by \( T\left( X\right) \) its (monic) minimal polynomial. Let \( f \) be the index of \( \theta \), i.e., \( f = \left\lbrack {{\mathbb{Z}}_{K} : \mathbb{Z}\left\lbrack \theta \right\rbrack }\right\rbrack \) . Then for any prime \( p \) not dividing \( f \) we can obtain the prime decomposition of \( p{\mathbb{Z}}_{K} \) as follows. 
Let \[ T\left( X\right) \equiv \mathop{\prod }\limits_{{i = 1}}^{g}\overline{{T}_{i}}{\left( X\right) }^{{e}_{i}}\left( {\;\operatorname{mod}\;p}\right) \] be the decomposition of \( T \) into monic irreducible factors in \( {\mathbb{F}}_{p}\left\lbrack X\right\rbrack \) . Then \[ p{\mathbb{Z}}_{K} = \mathop{\prod }\limits_{{i = 1}}^{g}{\mathfrak{p}}_{i}^{{e}_{i}} \] where \[ {\mathfrak{p}}_{i} = \left( {p,{T}_{i}\left( \theta \right) }\right) = p{\mathbb{Z}}_{K} + {T}_{i}\left( \theta \right) {\mathbb{Z}}_{K}, \] with \( {T}_{i} \in \mathbb{Z}\left\lbrack X\right\rbrack \) any monic lift of \( \overline{{T}_{i}} \) . Moreover, \( f\left( {{\mathfrak{p}}_{i}/p}\right) = \deg \left( {T}_{i}\right) \) . 
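For a quadratic field this recipe is easy to run by hand. A minimal sketch for \( K = \mathbb{Q}\left( i\right) \), \( \theta = i \), \( T\left( X\right) = {X}^{2} + 1 \) (here the index \( f \) is 1, so the proposition applies to every prime \( p \)); since \( T \) is quadratic, its factorization mod \( p \) is read off from its roots in \( {\mathbb{F}}_{p} \):

```python
# Factorization type of X^2 + 1 mod p, hence of p Z_K for K = Q(i).
def factor_type(p):
    roots = [a for a in range(p) if (a * a + 1) % p == 0]
    if len(roots) == 2:
        return "split"      # T = (X - a)(X - b) mod p: two degree-1 primes
    if not roots:
        return "inert"      # T irreducible mod p: p Z_K is prime of degree 2
    return "ramified"       # double root (p = 2): p Z_K = p^2

assert factor_type(5) == "split"      # 5 = (2 + i)(2 - i)
assert factor_type(3) == "inert"
assert factor_type(2) == "ramified"   # 2 = -i (1 + i)^2
# More generally, an odd prime splits in Z[i] iff p ≡ 1 (mod 4).
assert all((factor_type(p) == "split") == (p % 4 == 1)
           for p in (3, 5, 7, 11, 13, 17, 19, 23, 29))
```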
Definition 4.2.13. Let \( X \) be a CW-complex. The cellular chain complex of \( X \) is defined by \[ {C}_{n}^{\text{cell }}\left( X\right) = {H}_{n}\left( {{X}^{n},{X}^{n - 1}}\right) \] with \( {\partial }_{n} : {C}_{n}^{\text{cell }}\left( X\right) \rightarrow {C}_{n - 1}^{\text{cell }}\left( X\right) \) the composition \[ {H}_{n}\left( {{X}^{n},{X}^{n - 1}}\right) \overset{\partial }{ \rightarrow }{H}_{n - 1}\left( {X}^{n - 1}\right) \rightarrow {H}_{n - 1}\left( {{X}^{n - 1},{X}^{n - 2}}\right) . \] \( \diamond \) Lemma 4.2.14. \( {C}_{ * }^{\text{cell }}\left( X\right) \) is a chain complex. Proof. We need only check that \( {\partial }_{n - 1}{\partial }_{n} = 0 \) . But this is the composition \[ {H}_{n}\left( {{X}^{n},{X}^{n - 1}}\right) \overset{\partial }{ \rightarrow }{H}_{n - 1}\left( {X}^{n - 1}\right) \rightarrow {H}_{n - 1}\left( {{X}^{n - 1},{X}^{n - 2}}\right) \] \[ \overset{\partial }{ \rightarrow }{H}_{n - 2}\left( {X}^{n - 2}\right) \rightarrow {H}_{n - 2}\left( {{X}^{n - 2},{X}^{n - 3}}\right) \] and the middle two maps are two successive maps in the exact homology sequence of the pair \( \left( {{X}^{n - 1},{X}^{n - 2}}\right) \), so their composition is the zero map. Definition 4.2.15. Let \( X \) be a CW-complex. Then the cellular homology of \( X \) is the homology of the cellular chain complex \( {C}_{ * }^{\text{cell }}\left( X\right) \) as defined in Definition A.2.2. \( \diamond \) Lemma 4.2.16. The group \( {C}_{n}^{\text{cell }}\left( X\right) \) is the free abelian group on the n-cells of \( X \) . 
If \( {\alpha }_{\lambda }^{n} \) is the generator corresponding to the n-cell \( {D}_{\lambda }^{n},\lambda \in {\Lambda }_{n} \), then \( \partial \left( {\alpha }_{\lambda }^{n}\right) \) is given as follows: \[ {H}_{n}\left( {{D}_{\lambda }^{n},{S}_{\lambda }^{n - 1}}\right) \overset{\partial }{ \rightarrow }{H}_{n - 1}\left( {S}_{\lambda }^{n - 1}\right) \overset{{f}_{\lambda } \mid {S}_{\lambda }^{n - 1}}{ \rightarrow }{H}_{n - 1}\left( {X}^{n - 1}\right) \rightarrow {H}_{n - 1}\left( {{X}^{n - 1},{X}^{n - 2}}\right) , \] \[ {\alpha }_{\lambda }^{n} \mapsto \partial \left( {\alpha }_{\lambda }^{n}\right) . \] Proof. This follows directly from Lemma 4.2.11 and its proof. We let \( {Z}_{n}^{\text{cell }}\left( X\right) \) denote \( \operatorname{Ker}\left( {\partial }_{n}\right) : {C}_{n}^{\text{cell }}\left( X\right) \rightarrow {C}_{n - 1}^{\text{cell }}\left( X\right) \) and \( {B}_{n}^{\text{cell }}\left( X\right) \) denote \( \operatorname{Im}\left( {\partial }_{n + 1}\right) : {C}_{n + 1}^{\text{cell }}\left( X\right) \rightarrow {C}_{n}^{\text{cell }}\left( X\right) \), so that \[ {H}_{n}^{\text{cell }}\left( X\right) = {Z}_{n}^{\text{cell }}\left( X\right) /{B}_{n}^{\text{cell }}\left( X\right) . \] Theorem 4.2.17. Let \( X \) be a CW-complex. Suppose that \( X \) is finite dimensional or that \( {H}_{ * } \) is compactly supported. Then the cellular homology of \( X \) is isomorphic to the ordinary homology of \( X \) . Proof. We begin with the following purely algebraic observation. Suppose we have three abelian groups and maps as shown: \[ {H}^{1}\overset{k}{ \leftarrow }{H}^{2}\overset{j}{ \rightarrow }{H}^{3} \] ## where 1. \( k \) is a surjection. 2. \( j \) is an injection; set \( {Z}^{3} = \operatorname{Im}\left( j\right) \) . 3. There is a subgroup \( {B}^{3} \subseteq {Z}^{3} \) with \( {j}^{-1}\left( {B}^{3}\right) = \operatorname{Ker}\left( k\right) \) . 
Then \( k \circ {j}^{-1} : {Z}^{3}/{B}^{3} \rightarrow {H}^{1} \) is an isomorphism with inverse \( j \circ {k}^{-1} \) . To see this, note that \( j : {H}^{2} \rightarrow {Z}^{3} \) is an isomorphism, so \( {j}^{-1} : {Z}^{3} \rightarrow {H}^{2} \) is well-defined, and then we have isomorphisms \[ {Z}^{3}/{B}^{3}\overset{{j}^{-1}}{ \rightarrow }{H}^{2}/{j}^{-1}\left( {B}^{3}\right) = {H}^{2}/\operatorname{Ker}\left( k\right) \cong {H}^{1}. \] Note also that \( j \circ {k}^{-1} \) is well-defined, as \( j\left( {\operatorname{Ker}\left( k\right) }\right) = {B}^{3} \) . We apply this here to construct isomorphisms, for each \( n \) , \[ {\Theta }_{n} : {H}_{n}^{\text{cell }}\left( X\right) \rightarrow {H}_{n}\left( X\right) \] Consider \[ {H}_{n}\left( X\right) \overset{{k}_{n}}{ \leftarrow }{H}_{n}\left( {X}^{n}\right) \overset{{j}_{n}}{ \rightarrow }{H}_{n}\left( {{X}^{n},{X}^{n - 1}}\right) \] with the maps induced by inclusion. We must verify the three conditions above. We have already shown (1), that \( {k}_{n} \) is a surjection, in Corollary 4.2.12. Also, (2) is immediate from \( 0 = {H}_{n}\left( {X}^{n - 1}\right) \rightarrow {H}_{n}\left( {X}^{n}\right) \rightarrow {H}_{n}\left( {{X}^{n},{X}^{n - 1}}\right) \) . Let us identify \( {Z}^{3} = \operatorname{Im}\left( {j}_{n}\right) \) . We have ![21ef530b-1e09-406a-b041-cf4539af5c14_53_0.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_53_0.jpg) Then \( {j}_{n - 1} \) is an injection, so \[ \operatorname{Im}\left( {j}_{n}\right) = \operatorname{Ker}\left( \partial \right) = \operatorname{Ker}\left( {{j}_{n - 1} \circ \partial }\right) = \operatorname{Ker}\left( {\partial }_{n}\right) = {Z}_{n}^{\text{cell }}\left( X\right) . \] As for (3), we have ![21ef530b-1e09-406a-b041-cf4539af5c14_53_1.jpg](images/21ef530b-1e09-406a-b041-cf4539af5c14_53_1.jpg) Note that \( {H}_{n}\left( {X}^{n + 1}\right) \rightarrow {H}_{n}\left( X\right) \) is an isomorphism. 
Also, \( {B}_{n}^{\text{cell }}\left( X\right) = \operatorname{Im}\left( {\partial }_{n + 1}\right) = \) \( \operatorname{Im}\left( {{j}_{n} \circ \partial }\right) \subseteq \operatorname{Im}\left( {j}_{n}\right) \) . Then \[ {j}_{n}^{-1}\left( {{B}_{n}^{\text{cell }}\left( X\right) }\right) = {j}_{n}^{-1}\left( {\operatorname{Im}\left( {\partial }_{n + 1}\right) }\right) = {j}_{n}^{-1}\left( {\operatorname{Im}\left( {{j}_{n}\partial }\right) }\right) \] \[ = \operatorname{Im}\left( \partial \right) = \operatorname{Ker}\left( {\widetilde{k}}_{n}\right) = \operatorname{Ker}\left( {k}_{n}\right) . \] Thus if \( {\Theta }_{n} = {k}_{n} \circ {j}_{n}^{-1} \), we have an isomorphism \[ {\Theta }_{n} : {Z}_{n}^{\text{cell }}\left( X\right) /{B}_{n}^{\text{cell }}\left( X\right) = {H}_{n}^{\text{cell }}\left( X\right) \overset{ \cong }{ \rightarrow }{H}_{n}\left( X\right) . \] Remark 4.2.18. Theorem 4.2.17 shows that the point of cellular homology is not that it is different from ordinary homology. Rather, for CW-complexes (where it is defined) it is the same. The point of cellular homology is that it is a better way of looking at homology. It is better for two reasons. The first reason is psychological. It makes clear how the homology of a CW-complex comes from its cells. The second reason is mathematical. If \( X \) is a finite complex, the cellular chain complex of \( X \) is finitely generated. This inherent finiteness not only makes cellular homology easier to work with, it allows us to effectively, and indeed easily, compute an important and very classical invariant of topological spaces as well. Recall that any finitely generated abelian group \( A \) is isomorphic to \( F \oplus T \), where \( F \) is a free abelian group of well-defined rank \( r \) (i.e., \( F \) is isomorphic to \( {\mathbb{Z}}^{r} \) ) and \( T \) is a torsion group. In this case we define the rank of \( A \) to be \( r \) . Definition 4.2.19. 
Let \( X \) be a space with \( {H}_{i}\left( X\right) \) finitely generated for each \( i \), and nonzero for only finitely many values of \( i \) . Then the Euler characteristic \( \chi \left( X\right) \) is \[ \chi \left( X\right) = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\operatorname{rank}{H}_{i}\left( X\right) \] \( \diamond \) Theorem 4.2.20. Let \( X \) be a finite CW-complex. Then \[ \chi \left( X\right) = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i} \cdot \text{ number of }i\text{-cells of }X. \] Proof. Let \( X \) have \( {d}_{i} \) \( i \) -cells and suppose \( {d}_{i} = 0 \) for \( i > n \) . We have the cellular chain complex of \( X \) \[ 0 \rightarrow {C}_{n}^{\mathrm{{cell}}}\left( X\right) \rightarrow {C}_{n - 1}^{\mathrm{{cell}}}\left( X\right) \rightarrow \cdots \rightarrow {C}_{1}^{\mathrm{{cell}}}\left( X\right) \rightarrow {C}_{0}^{\mathrm{{cell}}}\left( X\right) \rightarrow 0 \] with \( {C}_{i}^{\text{cell }}\left( X\right) \) free abelian of rank \( {d}_{i} \) . But then it is a purely algebraic result that \[ \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}{d}_{i} = \mathop{\sum }\limits_{i}{\left( -1\right) }^{i}\operatorname{rank}{H}_{i}^{\text{cell }}\left( X\right) \] and by Theorem A.2.13 this is equal to \( \chi \left( X\right) \) . Remark 4.2.21. Note in particular that \( \chi \left( X\right) \) is independent of the CW-structure on \( X \) . For example, let \( X = {S}^{n} \) . Then \( \chi \left( X\right) = 2 \) for \( n \) even and 0 for \( n \) odd. We have seen in Example 4.2.9 three different CW-structures on \( X \) . In the first, \( X \) has a single 0-cell and a single \( n \) -cell. In the second, \( X \) has two \( i \) -cells for each \( i \) between 0 and \( n \) . In the third, \( X \) has a single 0-cell, a single \( \left( {n - 1}\right) \) -cell, and two \( n \) -cells. But counting cells in any of these CW-structures gives \( \chi \left( X\right) = 2 \) for \( n \) even and 0 for \( n \) odd. 
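The purely algebraic identity in the proof can be verified mechanically. Below is a sketch (the CW-structures are standard examples, assumed here, not taken from the text) for two surfaces with one 0-cell, two 1-cells \( a, b \), and one 2-cell: the torus (2-cell glued along \( {aba}^{-1}{b}^{-1} \), boundary maps zero) and the Klein bottle (glued along \( {aba}^{-1}b \), so \( {\partial }_{2} = \left( {0,2}\right) \)); ranks are computed over \( \mathbb{Q} \), which ignores torsion, consistent with Definition 4.2.19.

```python
import numpy as np

def homology_ranks(d, boundaries):
    """Ranks of H_i^cell; boundaries[i] is the matrix of partial_{i+1}."""
    r = [0] + [int(np.linalg.matrix_rank(B)) for B in boundaries] + [0]
    # rank H_i = dim ker(partial_i) - rank(partial_{i+1})
    #          = d_i - rank(partial_i) - rank(partial_{i+1})
    return [d[i] - r[i] - r[i + 1] for i in range(len(d))]

d = [1, 2, 1]                                    # d_i = number of i-cells
torus = [np.zeros((1, 2)), np.zeros((2, 1))]     # partial_1, partial_2
klein = [np.zeros((1, 2)), np.array([[0.0], [2.0]])]

assert homology_ranks(d, torus) == [1, 2, 1]     # H_*(T^2) = Z, Z^2, Z
assert homology_ranks(d, klein) == [1, 1, 0]     # the torsion Z/2 has rank 0
for B in (torus, klein):                         # the identity of the proof
    h = homology_ranks(d, B)
    assert sum((-1) ** i * di for i, di in enumerate(d)) \
        == sum((-1) ** i * hi for i, hi in enumerate(h))
```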
\( \;\diamond \) Remark 4.2.22. Let \( X \) be the surface of a convex polyhedron in \( {\mathbb{R}}^{3} \) . It is a famous theorem of Euler that, if \( V, E \), and \( F \) denote the number of vertices, edges, and faces of \( X \), then \[ V - E + F = 2\text{.} \] But \( X \) is topologically \( {S}^{2} \) and regarding \( X \) as the surface of a polyhedron gives a CW-structure on \( X \) with \( V \) 0-cells, \( E \) 1-cells, and \( F \) 2-cells, so this equation is a special case of Theorem 4.2.20. For example, we may compute \( V - E + F \) for each of the five Platonic solids. 
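The cell counts of the five Platonic solids are classical; a quick check of Euler's formula on all of them:

```python
# (vertices, edges, faces) for the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in platonic.items():
    assert V - E + F == 2, name    # Euler's formula, chi(S^2) = 2
```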
Definition 8.10. A set of twin roots \( \Psi \subseteq \Phi \) is said to be convex if it has the form \( \Psi = \Psi \left( \mathcal{M}\right) \) for some pair \( \mathcal{M} \) with \( {\mathcal{M}}_{ + } \) and \( {\mathcal{M}}_{ - } \) both nonempty. Remarks 8.11. (a) Recall that a twin root \( \alpha = \left( {{\alpha }_{ + },{\alpha }_{ - }}\right) \) is completely determined by its first component \( {\alpha }_{ + } \), which can be an arbitrary root of \( {\sum }_{ + } \) . We can therefore identify \( \Phi \) with the set of roots of \( {\sum }_{ + } \) . Moreover, the condition \( \alpha \supseteq \mathcal{M} \) in (8.1) is equivalent to the two conditions \[ {\alpha }_{ + } \supseteq {\mathcal{M}}_{ + }\text{ and } - {\alpha }_{ + } \supseteq {\operatorname{op}}_{\sum }\left( {\mathcal{M}}_{ - }\right) \] So a set of twin roots is convex if and only if its set of first components is convex in the sense of Definition 7.18. Note, however, how much more natural the notion of "convexity" is from the twin point of view. Moreover, the analogy with the spherical case points the way toward results that would be more cumbersome to state if we worked only with \( {\sum }_{ + } \) . (b) A convex set of twin roots is always finite in view of an observation made after Definition 7.18. The following lemma is proved in exactly the same way as Lemma 7.14, with the aid of Proposition 5.193. It refers to the concept of "convex pair" that was introduced in Definition 5.158. Lemma 8.12. There is an order-reversing \( 1 - 1 \) correspondence between convex pairs \( \mathcal{M} \) and convex subsets of \( \Phi \) . It is given by \( \mathcal{M} \mapsto \Psi \left( \mathcal{M}\right) \), and its inverse is given by \( \Psi \mapsto \mathop{\bigcap }\limits_{{\alpha \in \Psi }}\alpha \) . Example 8.13. 
Given two chambers \( C, D \in {\sum }_{ + } \), set \[ \Phi \left( {C, D}\right) \mathrel{\text{:=}} \left\{ {\alpha \in \Phi \mid C \in {\alpha }_{ + }, D \notin {\alpha }_{ + }}\right\} . \] Then \( \Phi \left( {C, D}\right) = \Psi \left( \mathcal{M}\right) \), where \( {\mathcal{M}}_{ + } = \{ C\} \) and \( {\mathcal{M}}_{ - } = \left\{ {D}^{\prime }\right\} \), with \( {D}^{\prime } \mathrel{\text{:=}} \) \( {\operatorname{op}}_{\sum }D \) . Hence \( \Phi \left( {C, D}\right) \) is a convex set of roots. Its set of first components is precisely what was called \( \Phi \left( {C, D}\right) \) in the setting of ordinary Coxeter complexes (following Definition 7.18). Thus it contains precisely \( d = d\left( {C, D}\right) \) roots, one for each wall of \( {\sum }_{ + } \) that separates \( C \) from \( D \), and we can enumerate these roots as in Example 7.15 by choosing a minimal gallery \( C = {C}_{0},\ldots ,{C}_{d} = D \) from \( C \) to \( D \) . Observe next that the notion of admissible ordering in Definition 7.16 applies verbatim to the present setting. Using Lemma 5.191(1), one can now imitate the proof of Lemma 7.17 to obtain the following: Lemma 8.14. Every convex set of twin roots admits an admissible ordering. We are now ready to return to pre-root groups. ## 8.2.2 Fixers Assume that \( \mathcal{C} = \left( {{\mathcal{C}}_{ + },{\mathcal{C}}_{ - }}\right) \) is a twin building and that \( G \) is a subgroup of \( {\operatorname{Aut}}_{0}\mathcal{C} \) that contains a system \( {\left( {X}_{\alpha }\right) }_{\alpha \in \Phi } \) of pre-root groups. Here \( \Phi \) is the set of twin roots of a fundamental twin apartment \( \sum = \left( {{\sum }_{ + },{\sum }_{ - }}\right) \) . For any subset \( \Psi \subseteq \Phi \) we set \[ {X}_{\Psi } \mathrel{\text{:=}} \left\langle {{X}_{\alpha } \mid \alpha \in \Psi }\right\rangle . 
\] We also set \[ T \mathrel{\text{:=}} {\operatorname{Fix}}_{G}\left( \sum \right) \] We can now record the analogue of Proposition 7.20, whose proof goes through with minor modifications: Proposition 8.15. Let \( \Psi \) be a convex set of twin roots in the fundamental twin apartment \( \sum \), let \( {\alpha }_{1},\ldots ,{\alpha }_{m} \) be an admissible ordering of \( \Psi \), and set \( {X}_{i} \mathrel{\text{:=}} {X}_{{\alpha }_{i}} \) for \( i = 1,\ldots, m \) . (1) \( {X}_{\Psi }T \) is a subgroup of \( G \), and \[ {X}_{\Psi }T = {X}_{1}\cdots {X}_{m}T. \] (2) If \( {x}_{1}\cdots {x}_{m}t = {x}_{1}^{\prime }\cdots {x}_{m}^{\prime }{t}^{\prime } \) with \( {x}_{i},{x}_{i}^{\prime } \in {X}_{i} \) for \( i = 1,\ldots, m \) and \( t,{t}^{\prime } \in T \) , then there are elements \( {t}_{1},\ldots ,{t}_{m} \in T \) such that \[ {x}_{1}^{\prime } = {x}_{1}{t}_{1} \] \[ {x}_{2}^{\prime } = {t}_{1}^{-1}{x}_{2}{t}_{2} \] \[ \vdots \] \[ {x}_{m}^{\prime } = {t}_{m - 1}^{-1}{x}_{m}{t}_{m} \] \[ {t}^{\prime } = {t}_{m}^{-1}t \] (3) If \( \Psi = \Psi \left( \mathcal{M}\right) \) for some pair \( \mathcal{M} = \left( {{\mathcal{M}}_{ + },{\mathcal{M}}_{ - }}\right) \) with \( {\mathcal{M}}_{ \pm } \neq \varnothing \), then the pointwise fixer of \( \mathcal{M} \) is given by \[ {\operatorname{Fix}}_{G}\left( \mathcal{M}\right) = {X}_{\Psi }T \] ## 8.3 Root Groups and Moufang Twin Buildings Throughout this section, \( \mathcal{C} = \left( {{\mathcal{C}}_{ + },{\mathcal{C}}_{ - }}\right) \) denotes a thick twin building of type \( \left( {W, S}\right) \) . We continue to record the (still mostly routine) generalizations of the concepts and results of Chapter 7. ## 8.3.1 Definitions and Simple Consequences Definition 8.16. 
For any twin root \( \alpha \) of \( \mathcal{C} \), the root group \( {U}_{\alpha } \) is defined to be the set of automorphisms \( g \) of \( \mathcal{C} \) such that (a) \( g \) fixes \( \alpha \) pointwise and (b) \( g \) fixes \( \mathcal{P} \) pointwise for every interior panel \( \mathcal{P} \) of \( \alpha \) . Note that \( {U}_{\alpha } \leq {\operatorname{Aut}}_{0}\mathcal{C} \) and that as in Definition 7.24,(a) is redundant if the rank is at least 2. Note also that the panel \( \mathcal{P} \) might be in either \( {\mathcal{C}}_{ + } \) or \( {\mathcal{C}}_{ - } \) . Lemma 8.17. (1) For any twin root \( \alpha \) and any \( g \in {\operatorname{Aut}}_{0}\mathcal{C} \) , \[ g{U}_{\alpha }{g}^{-1} = {U}_{g\alpha }. \] (2) Let \( \alpha \) be a twin root and let \( \mathcal{P} \) be a boundary panel of \( \alpha \) . Then the root group \( {U}_{\alpha } \) acts on the sets \( \mathcal{A}\left( \alpha \right) \) and \( \mathcal{C}\left( {\mathcal{P},\alpha }\right) \), and these two actions are equivalent. (3) If the Coxeter diagram of \( \left( {W, S}\right) \) has no isolated nodes, then the actions in (2) are free. Proof. This is similar to the proof of Lemma 7.25. For (2) one uses Lemma 5.198 instead of Lemma 4.118, and for (3) one needs to recall that the rigidity theorem is valid for twin buildings (see Remark 5.208). Definition 8.18. We say that \( \mathcal{C} \) is Moufang, or is a Moufang twin building, if the actions in Lemma 8.17(2) are transitive for every twin root \( \alpha \) of \( \mathcal{C} \) . If, in addition, these actions are simply transitive, then we say that \( \mathcal{C} \) is strictly Moufang, or is a strictly Moufang twin building. Note that a Moufang twin building whose Coxeter diagram has no isolated nodes is strictly Moufang. Proposition 8.19. If \( \mathcal{C} \) is Moufang, then it is pre-Moufang. 
More precisely, if we choose a twin apartment \( \Sigma \) and let \( \Phi \) be its set of twin roots, then \( {\left( {U}_{\alpha }\right) }_{\alpha \in \Phi } \) is a system of pre-root groups. Hence \( G \mathrel{\text{:=}} \left\langle {{U}_{\alpha } \mid \alpha \in \Phi }\right\rangle \) acts strongly transitively on \( \mathcal{C} \), and \( \mathcal{C} \cong \mathcal{C}\left( {G,{B}_{ + },{B}_{ - }}\right) \), where \( {B}_{ \pm } \) are the stabilizers in \( G \) of any pair of opposite chambers. Remarks 8.20. (a) As in Remarks 7.29(a) and (b), a twin building is Moufang if \( {U}_{\alpha } \) is transitive on \( \mathcal{A}\left( \alpha \right) \) for every \( \alpha \in \Phi \), where \( \Phi \) is the set of twin roots in a fundamental twin apartment. Moreover, \( G \) is then generated by all the root groups \( {U}_{\alpha } \) with \( \alpha \) a twin root of \( \mathcal{C} \) . (b) Even more than in the case of Moufang spherical buildings, one needs to be careful in reading the literature. In particular, the original definition of "Moufang twin building" given by Tits [261, p. 261] is based on asymmetrically defined root groups, and we do not know whether these are always the same as our root groups as defined above. But our (symmetric) definition seems to be the "right" one, since, as we will show, it leads to the expected equivalence between Moufang twin buildings and RGD systems; see Example 8.47(a) and Theorem 8.81. Moreover, Ronan and Tits used the symmetric definition of root groups in their paper [205] on twin trees. We suspect, incidentally, that the two definitions do not agree for twin trees, but we do not have a counterexample. (c) As a byproduct of our work in Section 8.4, we will see that the symmetric and asymmetric versions of root groups agree in the 2-spherical case (see Remark 8.26). We turn next to links (or, rather, residues, since we are now using the W-metric approach).
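For orientation, it may help to see the conjugation formula of Lemma 8.17(1), \( g{U}_{\alpha }{g}^{-1} = {U}_{g\alpha } \), in the prototypical (spherical) Moufang building attached to \( \mathrm{SL}_2 \), where the root groups of an apartment are the strict upper and lower unitriangular subgroups. The following sketch is our own illustration (all function names are ours, not from the text); it checks by direct matrix computation that the standard Weyl representative \( w \), which sends \( \alpha \) to \( -\alpha \), conjugates \( e_{\alpha}(t) \) to \( e_{-\alpha}(-t) \).

```python
# Our own illustrative sketch: root groups of SL_2 as elementary unipotent
# subgroups, checking the conjugation formula g U_alpha g^{-1} = U_{g alpha}
# in the special case g = w (Weyl representative), where w alpha = -alpha.

def matmul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def e_pos(t):   # element of X_alpha: upper unitriangular
    return [[1, t], [0, 1]]

def e_neg(t):   # element of X_{-alpha}: lower unitriangular
    return [[1, 0], [t, 1]]

w = [[0, 1], [-1, 0]]      # Weyl representative in SL_2
w_inv = [[0, -1], [1, 0]]  # its inverse

# w e_alpha(t) w^{-1} = e_{-alpha}(-t) for a range of parameters t:
for t in range(-5, 6):
    conj = matmul(matmul(w, e_pos(t)), w_inv)
    assert conj == e_neg(-t), (t, conj)
```

This is only the rank-1 spherical shadow of the twin-building setup, but it is the computation underlying all concrete instances of Lemma 8.17(1).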
We will not need a systematic study of residues in Moufang twin buildings, so we confine ourselves to recording one result that will be needed later. Its proof is similar to that of Proposition 7.32(1). Proposition 8.21. If \( \mathcal{C} \) is a Moufang twin building, then every spherical residue of \( \mathcal{C} \) is a Moufang (spherical) building. Finally, we remark that the results of Section 7.3.3 on subbuildings extend to Moufang twin buildings with no difficulty. In particular, we have the following analogue of Proposition 7.37: Proposition 8.22. Let \( \mathcal{C} \) be a Moufang twin building whose Coxeter diagram has no isolated nodes. If \( {\mathcal{C}}^{\prime } \) is
Definition 7.4. Let \( \mathrm{S} \) be a nonempty set of automorphisms of a field \( \mathrm{F} \) . \( \mathrm{S} \) is linearly independent provided that for any \( {\mathrm{a}}_{1},\ldots ,{\mathrm{a}}_{\mathrm{n}}\varepsilon \mathrm{F} \) and \( {\sigma }_{1},\ldots ,{\sigma }_{\mathrm{n}}\varepsilon \mathrm{S}\left( {\mathrm{n} \geq 1}\right) \) : \[ {\mathrm{a}}_{1}{\sigma }_{1}\left( \mathrm{u}\right) + \cdots + {\mathrm{a}}_{\mathrm{n}}{\sigma }_{\mathrm{n}}\left( \mathrm{u}\right) = 0\text{for all}\mathrm{u}\varepsilon \mathrm{F} \Rightarrow {\mathrm{a}}_{\mathrm{i}} = 0\text{for every}\mathrm{i}\text{.} \] Lemma 7.5. If \( \mathrm{S} \) is a set of distinct automorphisms of a field \( \mathrm{F} \), then \( \mathrm{S} \) is linearly independent. PROOF. If \( S \) is not linearly independent then there exist nonzero \( {a}_{i}{\varepsilon F} \) and distinct \( {\sigma }_{i}{\varepsilon S} \) such that \[ {a}_{1}{\sigma }_{1}\left( u\right) + {a}_{2}{\sigma }_{2}\left( u\right) + \cdots + {a}_{n}{\sigma }_{n}\left( u\right) = 0\text{ for all }{u\varepsilon F}. \] (1) Among all such "dependence relations" choose one with \( n \) minimal; clearly \( n > 1 \) . Since \( {\sigma }_{1} \) and \( {\sigma }_{2} \) are distinct, there exists \( v \in F \) with \( {\sigma }_{1}\left( v\right) \neq {\sigma }_{2}\left( v\right) \) . Applying (1) to the element \( {uv} \) (for any \( {u\varepsilon F} \) ) yields: \[ {a}_{1}{\sigma }_{1}\left( u\right) {\sigma }_{1}\left( v\right) + {a}_{2}{\sigma }_{2}\left( u\right) {\sigma }_{2}\left( v\right) + \cdots + {a}_{n}{\sigma }_{n}\left( u\right) {\sigma }_{n}\left( v\right) = 0; \] (2) and multiplying (1) by \( {\sigma }_{1}\left( v\right) \) gives: \[ {a}_{1}{\sigma }_{1}\left( u\right) {\sigma }_{1}\left( v\right) + {a}_{2}{\sigma }_{2}\left( u\right) {\sigma }_{1}\left( v\right) + \cdots + {a}_{n}{\sigma }_{n}\left( u\right) {\sigma }_{1}\left( v\right) = 0. 
\] (3) The difference of (2) and (3) is a relation: \[ {a}_{2}\left\lbrack {{\sigma }_{2}\left( v\right) - {\sigma }_{1}\left( v\right) }\right\rbrack {\sigma }_{2}\left( u\right) + {a}_{3}\left\lbrack {{\sigma }_{3}\left( v\right) - {\sigma }_{1}\left( v\right) }\right\rbrack {\sigma }_{3}\left( u\right) + \cdots + {a}_{n}\left\lbrack {{\sigma }_{n}\left( v\right) - {\sigma }_{1}\left( v\right) }\right\rbrack {\sigma }_{n}\left( u\right) = 0 \] for all \( u \in F \) . Since \( {a}_{2} \neq 0 \) and \( {\sigma }_{2}\left( v\right) \neq {\sigma }_{1}\left( v\right) \), not all the coefficients are zero, and this contradicts the minimality of \( n \) . An extension field \( F \) of a field \( K \) is said to be cyclic [resp. abelian] if \( F \) is algebraic and Galois over \( K \) and \( {\operatorname{Aut}}_{K}F \) is a cyclic [resp. abelian] group. If in this situation \( {\operatorname{Aut}}_{K}F \) is a finite cyclic group of order \( n \), then \( F \) is said to be a cyclic extension of degree \( \mathbf{n} \) (and \( \left\lbrack {F : K}\right\rbrack = n \) by the Fundamental Theorem 2.5). For example, Theorem 5.10 states that every finite dimensional extension of a finite field is a cyclic extension. The next theorem is the crucial link between cyclic extensions and the norm and trace. Theorem 7.6. Let \( \mathrm{F} \) be a cyclic extension field of \( \mathrm{K} \) of degree \( \mathrm{n},\sigma \) a generator of \( {\operatorname{Aut}}_{\mathrm{K}}\mathrm{F} \) and \( \mathrm{u}\varepsilon \mathrm{F} \) . Then (i) \( {\mathrm{T}}_{\mathrm{K}}{}^{\mathrm{F}}\left( \mathrm{u}\right) = 0 \) if and only if \( \mathrm{u} = \mathrm{v} - \sigma \left( \mathrm{v}\right) \) for some \( \mathrm{v} \in \mathrm{F} \) ; (ii) (Hilbert’s Theorem 90) \( {\mathrm{N}}_{\mathrm{K}}{}^{\mathrm{F}}\left( \mathrm{u}\right) = {1}_{\mathrm{K}} \) if and only if \( \mathrm{u} = \mathrm{v}\sigma {\left( \mathrm{v}\right) }^{-1} \) for some nonzero \( \mathrm{v}\varepsilon \mathrm{F} \) .
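Before the proof sketch, a concrete instance may help. In the smallest nontrivial cyclic extension \( F = \mathbb{Q}(i) \) over \( K = \mathbb{Q} \) (degree \( n = 2 \)), the generator \( \sigma \) is complex conjugation, so \( T(u) = u + \sigma(u) \) and \( N(u) = u\,\sigma(u) \). The sketch below is our own illustration (the class and method names are ours); using exact rational arithmetic it checks both parts of the theorem on sample elements: \( u = (3+4i)/5 \) has norm 1 and equals \( v\,\sigma(v)^{-1} \) for \( v = 2+i \), while \( u = 4i \) has trace 0 and equals \( v - \sigma(v) \) for \( v = 2i \).

```python
from fractions import Fraction

# Our own illustration of Theorem 7.6 in F = Q(i), K = Q, sigma = conjugation.

class Qi:
    """Element a + b*i of Q(i) with exact rational coordinates."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __mul__(self, o):
        return Qi(self.a * o.a - self.b * o.b, self.a * o.b + self.b * o.a)
    def __sub__(self, o):
        return Qi(self.a - o.a, self.b - o.b)
    def __eq__(self, o):
        return self.a == o.a and self.b == o.b
    def sigma(self):            # the generator of Aut_Q Q(i)
        return Qi(self.a, -self.b)
    def norm(self):             # N(u) = u * sigma(u) = a^2 + b^2, lands in Q
        return self.a ** 2 + self.b ** 2
    def inv(self):
        n = self.norm()
        return Qi(self.a / n, -self.b / n)

# (ii) Hilbert 90: u = (3+4i)/5 has N(u) = (9+16)/25 = 1,
#      and indeed u = v * sigma(v)^{-1} for v = 2 + i.
u = Qi(Fraction(3, 5), Fraction(4, 5))
assert u.norm() == 1
v = Qi(2, 1)
assert u == v * v.sigma().inv()

# (i) Additive analogue: u = 4i has T(u) = u + sigma(u) = 0,
#     and u = v - sigma(v) for v = 2i.
u2 = Qi(0, 4)
assert (u2.a + u2.sigma().a, u2.b + u2.sigma().b) == (0, 0)  # T(u2) = 0
v2 = Qi(0, 2)
assert u2 == v2 - v2.sigma()
```

Norm-1 elements of \( \mathbb{Q}(i) \) are exactly the rational points of the unit circle, so (ii) here recovers the classical rational parametrization of \( x^2 + y^2 = 1 \).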
SKETCH OF PROOF. For convenience write \( \sigma \left( x\right) = {\sigma x} \) . Since \( \sigma \) generates \( {\operatorname{Aut}}_{K}F \), it has order \( n \) and \( \sigma ,{\sigma }^{2},{\sigma }^{3},\ldots ,{\sigma }^{n - 1},{\sigma }^{n} = {1}_{F} = {\sigma }^{0} \) are \( n \) distinct automorphisms of \( F \) . By Theorem 7.2, \( T\left( u\right) = u + {\sigma u} + {\sigma }^{2}u + \cdots + {\sigma }^{n - 1}u \) and \( N\left( u\right) = \) \( u\left( {\sigma u}\right) \left( {{\sigma }^{2}u}\right) \cdots \left( {{\sigma }^{n - 1}u}\right) \) . (i) If \( u = v - {\sigma v} \), then use the definition and the facts that \[ T\left( {v - {\sigma v}}\right) = T\left( v\right) - T\left( {\sigma v}\right) \text{ and }{\sigma }^{n}\left( v\right) = v \] to show that \( T\left( u\right) = 0 \) . Conversely suppose \( T\left( u\right) = 0 \) . Choose \( w \in F \) such that \( T\left( w\right) = {1}_{K} \) as follows. By Lemma 7.5 (since \( {1}_{K} \neq 0 \) ) there exists \( z \in F \) such that \[ 0 \neq {1}_{F}z + {\sigma z} + {\sigma }^{2}z + \cdots + {\sigma }^{n - 1}z = T\left( z\right) . \] Since \( T\left( z\right) {\varepsilon K} \) by the remarks after Theorem 7.2, we have \( \sigma \left\lbrack {T{\left( z\right) }^{-1}z}\right\rbrack = T{\left( z\right) }^{-1}\sigma \left( z\right) \) . Consequently, if \( w = T{\left( z\right) }^{-1}z \), then \[ T\left( w\right) = T{\left( z\right) }^{-1}z + T{\left( z\right) }^{-1}{\sigma z} + \cdots + T{\left( z\right) }^{-1}{\sigma }^{n - 1}z \] \[ = T{\left( z\right) }^{-1}T\left( z\right) = {1}_{K}. \] Now let \[ v = {uw} + \left( {u + {\sigma u}}\right) \left( {\sigma w}\right) + \left( {u + {\sigma u} + {\sigma }^{2}u}\right) \left( {{\sigma }^{2}w}\right) \] \[ + \left( {u + {\sigma u} + {\sigma }^{2}u + {\sigma }^{3}u}\right) \left( {{\sigma }^{3}w}\right) + \cdots + \left( {u + {\sigma u} + \cdots + {\sigma }^{n - 2}u}\right) \left( {{\sigma }^{n - 2}w}\right) . 
\] Use the fact that \( \sigma \) is an automorphism and that \[ 0 = T\left( u\right) = u + {\sigma u} + {\sigma }^{2}u + \cdots + {\sigma }^{n - 1}u, \] which implies that \( u = - \left( {{\sigma u} + {\sigma }^{2}u + \cdots + {\sigma }^{n - 1}u}\right) \), to show that \[ v - {\sigma v} = {uw} + u\left( {\sigma w}\right) + u\left( {{\sigma }^{2}w}\right) + u\left( {{\sigma }^{3}w}\right) + \cdots + u\left( {{\sigma }^{n - 2}w}\right) \] \[ + u\left( {{\sigma }^{n - 1}w}\right) = {uT}\left( w\right) = u{1}_{K} = u. \] (ii) If \( u = {v\sigma }{\left( v\right) }^{-1} \), then since \( \sigma \) is an automorphism of order \( n,{\sigma }^{n}\left( {v}^{-1}\right) = {v}^{-1} \) , \( \sigma \left( {v}^{-1}\right) = \sigma {\left( v\right) }^{-1} \) and for each \( 1 \leq i \leq n - 1,{\sigma }^{i}\left( {{v\sigma }{\left( v\right) }^{-1}}\right) = {\sigma }^{i}\left( v\right) {\sigma }^{i + 1}{\left( v\right) }^{-1} \) . Hence: \[ N\left( u\right) = \left( {{v\sigma }{\left( v\right) }^{-1}}\right) \left( {{\sigma v}{\sigma }^{2}{\left( v\right) }^{-1}}\right) \left( {{\sigma }^{2}v{\sigma }^{3}{\left( v\right) }^{-1}}\right) \cdots \left( {{\sigma }^{n - 1}v{\sigma }^{n}{\left( v\right) }^{-1}}\right) = {1}_{K}. \] Conversely suppose \( N\left( u\right) = {1}_{K} \), which implies \( u \neq 0 \) . By Lemma 7.5 there exists \( {y\varepsilon F} \) such that the element \( v \) given by \[ v = {uy} + \left( {u\sigma u}\right) {\sigma y} + \left( {{u\sigma u}{\sigma }^{2}u}\right) {\sigma }^{2}y + \cdots + \left( {{u\sigma u}\cdots {\sigma }^{n - 2}u}\right) {\sigma }^{n - 2}y \] \[ + \left( {{u\sigma u}\cdots {\sigma }^{n - 1}u}\right) {\sigma }^{n - 1}y \] is nonzero. 
Since the last summand of \( v \) is \( N\left( u\right) {\sigma }^{n - 1}y = {1}_{K}{\sigma }^{n - 1}y = {\sigma }^{n - 1}y \), it is easy to verify that \( {u}^{-1}v = {\sigma v} \), whence \( u = v\sigma {\left( v\right) }^{-1} \) (note that \( \sigma \left( v\right) \neq 0 \) since \( v \neq 0 \) and \( \sigma \) is injective). We now have at hand all the necessary equipment for an analysis of cyclic extensions. We begin by reducing the problem to simpler form. Proposition 7.7. Let \( \mathrm{F} \) be a cyclic extension field of \( \mathrm{K} \) of degree \( \mathrm{n} \) and suppose \( \mathrm{n} = {\mathrm{{mp}}}^{\mathrm{t}} \) where \( 0 \neq \mathrm{p} = \) char \( \mathrm{K} \) and \( \left( {\mathrm{m},\mathrm{p}}\right) = 1 \) . Then there is a chain of intermediate fields \( \mathrm{F} \supset {\mathrm{E}}_{0} \supset {\mathrm{E}}_{1} \supset \cdots \supset {\mathrm{E}}_{\mathrm{t} - 1} \supset {\mathrm{E}}_{\mathrm{t}} = \mathrm{K} \) such that \( \mathrm{F} \) is a cyclic extension of \( {\mathrm{E}}_{0} \) of degree \( \mathrm{m} \) and for each \( 0 \leq \mathrm{i} \leq \mathrm{t},{\mathrm{E}}_{\mathrm{i} - 1} \) is a cyclic extension of \( {\mathrm{E}}_{\mathrm{i}} \) of degree \( \mathbf{p} \) . SKETCH OF PROOF. By hypothesis \( F \) is Galois over \( K \) and \( {\operatorname{Aut}}_{K}F \) is cyclic (abelian) so that every subgroup is normal. Recall that every subgroup and quotient group of a cyclic group is cyclic (Theorem I.3.5). Consequently, the Fundamental Theorem 2.5(ii) implies that for any intermediate field \( E, F \) is cyclic over \( E \) and \( E \) is cyclic over \( K \) . It follows that for any pair \( L, M \) of intermediate fields with \( L \subset M \) , \( M \) is a cyclic extension of \( L \) ; in particular, \( M \) is algebraic Galois over \( L \) . 
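The numerology behind Proposition 7.7 is just the subgroup chain of the cyclic group of order \( n = mp^t \): one subgroup of order \( m \), then a tower of index-\( p \) steps. The short sketch below is our own illustration (the function name is ours); it extracts \( m \) and \( t \) from \( n \) and \( p \) and returns the tower of relative degrees \( [F : E_0], [E_0 : E_1], \ldots, [E_{t-1} : E_t] \).

```python
# Our own illustration: the degree tower of Proposition 7.7 for n = m * p^t
# with (m, p) = 1, read off from the cyclic group Z/n.

def degree_tower(n, p):
    """Return [m, p, p, ..., p] (t copies of p), where n = m * p^t."""
    m, t = n, 0
    while m % p == 0:
        m //= p
        t += 1
    # F over E_0 has degree m; each step E_{i-1} over E_i has degree p.
    return [m] + [p] * t

# Example: n = 12, char K = p = 2, so m = 3, t = 2:
# F ⊃ E_0 ⊃ E_1 ⊃ E_2 = K with degrees 3, 2, 2.
assert degree_tower(12, 2) == [3, 2, 2]
assert degree_tower(45, 3) == [5, 3, 3]
```

The point of the reduction is that the degree-\( m \) part is prime to the characteristic (so Kummer-type methods apply there), while each remaining step has degree exactly \( p = \operatorname{char} K \) (handled by Artin–Schreier theory).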
Let \( H \) be the unique (cyclic) subgroup of order \( m \) of \( {\operatorname{Aut}}_{K}F \) (Exercise I.3.6) and let \( {E}_{0} \) be its fixed field (so that \( H = {H}^{\prime \prime } = {E}_{0}{}^{\prime } = {\operatorname{Aut}}_{{E}_{0}}F \) ). Then \( F \) is cyclic over \( {E}_{0} \) of degree \( m \) and \( {E}_{0} \) is cyclic over \( K \) of degree \( {p}^{t} \) . Since \( {\operatorname{Aut}}_{K}{E}_{0} \) is cyclic of order \( {p}^{t} \) it has a chain of subgroups \[ 1 = {G}_{0} < {G}_{1} < {G}_{2} < \cdots < {G}_{t - 1} < {G}_{t} = {\operato